Conversation
Walkthrough
Adds CI support to build and push production Docker images via a new job.
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In @.github/workflows/ci.yml:
- Around line 1055-1059: The CI step named "Build server and admin assets"
currently invokes nx using "npx nx run ghost:build:tsc", which is inconsistent
with the rest of the workflow and can pick a different nx version; update that
command to use the workspace runner (replace "npx nx run ghost:build:tsc" with
the equivalent "yarn nx run ghost:build:tsc") so it matches other jobs that use
yarn nx and ensures the workspace nx is used.
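For reference, the adjusted step might look like the sketch below. Only the step name and the `yarn nx run ghost:build:tsc` command come from the comment above; the surrounding keys are assumptions.

```yaml
# Sketch only: step name and command from the comment above, other keys assumed.
- name: Build server and admin assets
  run: yarn nx run ghost:build:tsc
```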
In `@Dockerfile.production`:
- Around line 27-29: The Dockerfile uses "COPY . ." followed by "RUN rm -rf
core/built/admin" so admin files remain in the earlier layer; fix by changing
the build to avoid copying admin into the core image layer—either create an
intermediate stage (e.g., "context-strip" or "builder") that does "COPY . ."
then removes "core/built/admin" and then COPY --from=that-stage into the "core"
stage, or replace the broad "COPY . ." with explicit COPYs that exclude
"core/built/admin"; update references to the "core" target and the later "full"
stage COPY of "core/built/admin" accordingly so the admin assets are only added
where intended.
- Line 8: The Dockerfile.production currently sets a default ARG NODE_VERSION
which duplicates the value in CI; update the CI build step
(job_docker_build_production in .github/workflows/ci.yml) to explicitly pass the
Node version to Docker (use build-args: NODE_VERSION: <value>) and remove or
leave a non-committal default in Dockerfile.production so the image always uses
the version provided by CI; reference ARG NODE_VERSION in Dockerfile.production
and the job_docker_build_production build step to keep them consistent and avoid
drift.
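A rough sketch of the layering fix described in the first Dockerfile.production comment above. Only the `core`/`full` targets and the `core/built/admin` path come from the review; the base image tag, the `WORKDIR`, and the throwaway stage name are assumptions.

```dockerfile
ARG NODE_VERSION
# Throwaway stage: copy the context and delete admin here, so the removed files
# never land in any layer of the "core" image.
FROM node:${NODE_VERSION}-bookworm-slim AS context-strip
WORKDIR /home/ghost
COPY . .
RUN rm -rf core/built/admin

FROM node:${NODE_VERSION}-bookworm-slim AS core
WORKDIR /home/ghost
# Only the stripped tree reaches this stage's layers.
COPY --from=context-strip /home/ghost .

FROM core AS full
# Admin assets are added only where they are actually wanted.
COPY core/built/admin core/built/admin
```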
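For the `ARG NODE_VERSION` comment, the CI side could pass the version explicitly. This is a hedged sketch: how the workflow actually sources the Node version (an env var here) and the action version pin are assumptions; `build-args` itself is the standard `docker/build-push-action` input.

```yaml
# Illustrative step shape; build-args/NODE_VERSION is the only point being made.
- name: Build & push ghost-core
  uses: docker/build-push-action@v5
  with:
    file: Dockerfile.production
    target: core
    build-args: |
      NODE_VERSION=${{ env.NODE_VERSION }}
```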
🧹 Nitpick comments (5)
ghost/core/package.json (1)
24-24: Glob `ghost-*.tgz` is fragile if stale artifacts exist. If a previous `pack:standalone` run failed mid-way (e.g., locally), leftover `ghost-*.tgz` files or a stale `package/` directory could cause unexpected behavior. Consider adding a cleanup prefix:

♻️ Suggested hardening

```diff
- "pack:standalone": "npm pack && tar -xzf ghost-*.tgz && cp ../../yarn.lock package/ && rm ghost-*.tgz",
+ "pack:standalone": "rm -rf package ghost-*.tgz && npm pack && tar -xzf ghost-*.tgz && cp ../../yarn.lock package/ && rm ghost-*.tgz",
```

This is a no-op in CI (clean workspace) but guards against local reuse issues.
Dockerfile.production (2)
15-23: Add `--no-install-recommends` to reduce image size. As flagged by static analysis (Trivy DS-0029), the `apt-get install` is missing `--no-install-recommends`. This pulls in unnecessary recommended packages, inflating the image — particularly for `build-essential` and `python3`, which are only needed temporarily for the sqlite3 native build.

♻️ Proposed fix

```diff
 RUN apt-get update && \
     apt-get install -y \
+        --no-install-recommends \
         build-essential \
         libjemalloc2 \
         python3 && \
```
31-42: RUN block looks solid overall; minor cleanup suggestions. The yarn install + sqlite3 native build + build-tool purge pattern is well structured. Two small nits:

- Line 35: `apt autoremove` → consider `apt-get autoremove` for consistency with `apt-get purge` on line 34 (the `apt` CLI prints a warning about its unstable CLI in scripts, though it's cosmetic in Docker).
- After purging build tools, consider adding `rm -rf /var/cache/apt/archives/*.deb` to ensure no stale `.deb` files remain in the layer.

Neither is blocking.
.github/workflows/ci.yml (2)
1031-1034: Consider gating this job on relevant file changes. Unlike other jobs (e.g., `job_acceptance-tests`, `job_unit-tests`), this job has no `if` condition and will run on every single PR and push — even documentation-only changes. This adds CI time and GHCR storage for every commit.

If "on every commit" is intentional (to always have a fresh image), this is fine. Otherwise, consider adding a condition like:

```yaml
if: needs.job_setup.outputs.changed_core == 'true' || needs.job_setup.outputs.changed_admin == 'true'
```
1088-1108: Consider adding OCI labels for consistency with the development image. The existing `job_docker_build` (lines 984-988) defines OCI labels (`org.opencontainers.image.title`, `description`, `vendor`, `maintainer`). The production image metadata steps don't include any labels, which means the images will lack descriptive metadata in the registry.

Not blocking, but worth adding for parity.
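A minimal sketch of what a labels block could look like, assuming `docker/metadata-action` is used for the production metadata steps; the step name, id, and label values below are illustrative rather than copied from `job_docker_build`.

```yaml
- name: Docker metadata (ghost-core)
  id: meta_core
  uses: docker/metadata-action@v5
  with:
    images: ghcr.io/${{ github.repository_owner }}/ghost-core
    # Placeholder values; mirror whatever job_docker_build sets for parity.
    labels: |
      org.opencontainers.image.title=Ghost Core
      org.opencontainers.image.description=Ghost server and production dependencies, without admin
      org.opencontainers.image.vendor=Ghost Foundation
```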
Codecov Report
✅ All modified and coverable lines are covered by tests.

Additional details and impacted files:

```
@@           Coverage Diff            @@
##             main   #26360    +/-   ##
========================================
  Coverage   72.81%   72.82%           
========================================
  Files        1561     1561           
  Lines      120704   120778      +74  
  Branches    14539    14557      +18  
========================================
+ Hits        87895    87956      +61  
- Misses      31797    31809      +12  
- Partials     1012     1013       +1  
```

Flags with carried forward coverage won't be shown. ☔ View full report in Codecov by Sentry.
Force-pushed from 7bad77d to 785aaa1.
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In @.github/workflows/ci.yml:
- Around line 1064-1073: This step calculates is-fork-pr and should-push but
doesn't persist built images for fork PRs; add a conditional artifact upload
step (using actions/upload-artifact) after the image load/build steps that runs
when is-fork-pr is true (or should-push is false) to save the image tar(s) the
same way job_docker_build does so fork PR images are preserved; use the outputs
from the step id "strategy" (is-fork-pr / should-push) to gate the upload and
match the artifact naming pattern used in job_docker_build.
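A minimal sketch of the gated upload, assuming the build steps already export image tars to a known path; the artifact name, tar path, and the exact spelling of the `strategy` step's outputs are assumptions meant to mirror `job_docker_build`.

```yaml
- name: Upload production images for fork PRs
  if: steps.strategy.outputs.should-push == 'false'
  uses: actions/upload-artifact@v4
  with:
    name: production-docker-images
    path: /tmp/ghost-production-*.tar
    retention-days: 1
```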
🧹 Nitpick comments (3)
Dockerfile.production (1)
47-48: Consider the implications of `VOLUME` declarations for downstream consumers. Declaring `VOLUME /home/ghost/content` and `VOLUME /home/ghost/log` means Docker will create anonymous volumes even when users don't explicitly mount them, which can lead to orphaned volumes over time. This is a common pattern for Ghost but worth documenting for self-hosters who may not expect it.

.github/workflows/ci.yml (2)
1031-1034: Job runs unconditionally on every PR — consider gating on relevant path changes. Unlike most other jobs in this workflow (which use `if: needs.job_setup.outputs.changed_core == 'true'` or similar), this job has no `if` condition. It will run on every PR, including docs-only or unrelated changes, adding ~5-10 minutes of CI time per run.

If the intent is truly "build on every commit," that's fine — but you could at least gate it on `changed_any_code` to skip markdown-only PRs.

♻️ Suggested condition

```diff
 job_docker_build_production:
   name: Build & Push Production Docker Images
   needs: [job_setup]
+  if: needs.job_setup.outputs.changed_any_code == 'true'
   runs-on: ubuntu-latest
```
1116-1128: PR builds don't read from PR-specific Docker layer cache. The `cache-from` only references `cache-main`, but `cache-to` writes PR-specific caches (e.g., `cache-pr-123`). This means subsequent pushes to the same PR won't benefit from the PR's own cache — only from main's cache. The existing `job_docker_build` (lines 1002-1004) reads from both main and PR caches.

♻️ Suggested fix for both core and full build steps

```diff
  cache-from: type=registry,ref=ghcr.io/${{ github.repository_owner }}/ghost-core:cache-main
+ ${{ github.event_name == 'pull_request' && format('type=registry,ref=ghcr.io/{0}/ghost-core:cache-pr-{1}', github.repository_owner, github.event.pull_request.number) || '' }}
```

Apply the same pattern to the `full` image build step (line 1141).
Force-pushed from 785aaa1 to 3b73763.
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@Dockerfile.production`:
- Around line 15-42: The Dockerfile installs build-essential and python3 in one
RUN and purges them in a later RUN, which leaves their files in earlier image
layers; consolidate the package install, yarn install (the RUN that executes
"yarn install --ignore-scripts --production --prefer-offline" and "(cd
node_modules/sqlite3 && npm run install)"), and the apt-get purge into a single
RUN so build deps are removed in the same layer they were added; ensure
libjemalloc2 remains installed (do not purge it) because it's required at
runtime (LD_PRELOAD), and keep the filesystem operations (mkdir, cp -R content,
chown commands) in that same consolidated RUN so temporary build artifacts are
cleaned before the layer is committed.
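A sketch of the consolidated layer, built from the commands quoted in the comment above; the cleanup flags and the abbreviated `mkdir`/`chown` tail are assumptions, not the Dockerfile's exact contents.

```dockerfile
# Build deps are installed, used, and purged within one layer; libjemalloc2 stays
# because it is needed at runtime via LD_PRELOAD.
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
        build-essential \
        libjemalloc2 \
        python3 && \
    yarn install --ignore-scripts --production --prefer-offline && \
    (cd node_modules/sqlite3 && npm run install) && \
    apt-get purge -y build-essential python3 && \
    apt-get autoremove -y && \
    rm -rf /var/lib/apt/lists/* /var/cache/apt/archives/*.deb && \
    mkdir -p content && \
    chown -R ghost:ghost /home/ghost
```

Because Docker commits a layer per instruction, purging packages in a later RUN only hides the files from the final filesystem, not from the earlier layer, which is why the purge has to live in the same RUN as the install.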
🧹 Nitpick comments (2)
.github/workflows/ci.yml (2)
1031-1034: Job runs on every commit regardless of changed files. Unlike other jobs (e.g., `job_unit-tests`, `job_acceptance-tests`) that gate on `changed_core`, `changed_admin`, etc., this job has no `if` condition. It will run even for documentation-only or Tinybird-only changes, consuming CI minutes to build Docker images unnecessarily.

Consider adding a filter, e.g.:

```yaml
if: needs.job_setup.outputs.changed_core == 'true' || needs.job_setup.outputs.changed_admin == 'true'
```
1118-1144: PR builds don't restore their own layer cache. Both `cache-from` entries (Lines 1129, 1143) only pull from `cache-main`. Unlike `job_docker_build` (Lines 1002-1004), there's no fallback to the PR-specific cache. On active PRs with multiple pushes, each build starts from the main-branch cache rather than reusing its own prior layers.

♻️ Suggested fix (core example, same pattern for full)

```diff
- cache-from: type=registry,ref=ghcr.io/${{ steps.strategy.outputs.owner }}/ghost-core:cache-main
+ cache-from: |
+   type=registry,ref=ghcr.io/${{ steps.strategy.outputs.owner }}/ghost-core:cache-main
+   ${{ github.event_name == 'pull_request' && format('type=registry,ref=ghcr.io/{0}/ghost-core:cache-pr-{1}', steps.strategy.outputs.owner, github.event.pull_request.number) || '' }}
```
Force-pushed from 3b73763 to 30a6951.
ref https://linear.app/ghost/issue/BER-3292/add-production-docker-image-build-to-ghost-ci

Ghost CI now builds two production Docker images on every commit:
- ghcr.io/tryghost/ghost-core (server + production deps, no admin)
- ghcr.io/tryghost/ghost (core + built admin assets)

Uses npm pack to create a standalone distribution via monobundle.js, which bundles private workspace packages as local tarballs. The Dockerfile mirrors Ghost-Moya's proven production setup (bookworm-slim, jemalloc, ghost user uid 1000, sqlite3 native build).

Nothing consumes these images yet — this is purely additive. Ghost-Moya continues building from source until we're ready to switch.
Force-pushed from 30a6951 to 6d712ed.
closes https://linear.app/ghost/issue/BER-3292/add-production-docker-image-build-to-ghost-ci
Problem
Ghost CI builds a dev image for E2E tests, but the production image lives in Ghost-Moya. Moya clones Ghost, rebuilds from source every commit (~12 min), and ships an artifact that was never tested by CI. We need Ghost to own its own production image so what we test is what we ship.
Proposal
Add a CI job that builds and pushes two production Docker images to GHCR on every commit:
- `ghcr.io/tryghost/ghost-core` — server + production deps, no admin (base for Ghost-Pro)
- `ghcr.io/tryghost/ghost` — core + built admin (self-hosters)

The Dockerfile mirrors Moya's proven production setup (bookworm-slim, jemalloc, sqlite3, ghost user). A `pack:standalone` script in ghost/core wraps `npm pack` to produce a standalone distribution that works outside the monorepo.

Nothing consumes these images yet. This is laying the foundation.
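Roughly how the standalone distribution gets produced locally, as a sketch: the invocation below simply runs the `pack:standalone` script quoted earlier in this thread, from ghost/core (which the script's relative yarn.lock path implies).

```bash
cd ghost/core
yarn pack:standalone
# package/ now holds the packed ghost-core plus the root yarn.lock,
# ready to be copied into Dockerfile.production's build context
ls package/
```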
Not Doing
- (`@faker-js/faker` in production deps, etc.)

Trade-offs
- `COPY` into Docker rather than using a multi-stage Dockerfile. This reuses CI's dependency cache (fast) but means the Dockerfile isn't self-contained. Conventional wisdom says multi-stage, but Ghost's monorepo + monobundle + Nx cache make that impractical today.
- `node_modules`. Not great, but not our problem to solve here.

Rollback
Easy. Delete the CI job and Dockerfile. Nothing depends on these images yet. No data, no migrations, no customer impact.
Blast Radius
None. Purely additive. Existing CI jobs, Moya, and production are completely untouched. The images are pushed to GHCR but nothing reads from them until we explicitly wire up Moya in a follow-up.