Build Backstage in Jenkins (Multi-Stage Docker) and Push to Harbor (Short SHA Tags)

This post shows how to build a Backstage app image (a single container image that runs the Backstage backend and serves the built frontend), tag it with the short Git commit SHA (7 characters, e.g. 3c71ccc), and push it to Harbor using a system robot account.

Target image format:

  • harbor.maksonlee.com/backstage/homelab-backstage:<short-sha>

What you’ll build

  • Jenkins pipeline triggered by GitHub push
  • Docker BuildKit enabled
  • Multi-stage Docker build (build happens inside Docker)
  • Tag images with short SHA (git rev-parse --short=7 HEAD)
  • Push to Harbor using a Harbor system robot (robot$jenkins) stored as Jenkins credentials
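
The short-SHA tagging the pipeline uses can be tried locally. A minimal sketch, assuming you run it inside the repository clone (the registry, project, and image names are this guide's example values):

```shell
# Compute the 7-character short SHA of HEAD (git may print more characters
# if 7 would be ambiguous). Falls back to "dev" when run outside a Git repo.
TAG="$(git rev-parse --short=7 HEAD 2>/dev/null || echo dev)"
IMAGE="harbor.maksonlee.com/backstage/homelab-backstage:${TAG}"
echo "$IMAGE"
```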

Prerequisites

Jenkins agent requirements

Your Jenkins agent labeled ssh-agent-with-docker must have:

  • Docker installed and usable by the Jenkins user
  • Network access to harbor.maksonlee.com
  • If Harbor uses a private CA certificate, Docker must trust it (see Troubleshooting)

Backstage repository

Your Backstage repo root should contain:

  • .yarn/, .yarnrc.yml, yarn.lock
  • packages/ (and optionally plugins/)
  • runtime configs such as app-config.yaml and app-config.production.yaml
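
A quick preflight sketch, assuming you run it from the repo root (the file list mirrors the bullets above):

```shell
# Warn about anything the multi-stage build expects but cannot find.
for p in .yarn .yarnrc.yml yarn.lock packages; do
  [ -e "$p" ] || echo "missing: $p"
done
```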

  1. Add the multi-stage Dockerfile

Create Dockerfile.multi in the repo root.

This Dockerfile builds your Backstage app inside Docker:

  • installs dependencies with Yarn
  • compiles TypeScript
  • builds the backend bundle
  • produces a runtime image with production dependencies only

Dockerfile.multi

# Stage 1 - Create yarn install skeleton layer
FROM node:22-bookworm-slim AS packages

WORKDIR /app
COPY backstage.json package.json yarn.lock ./
COPY .yarn ./.yarn
COPY .yarnrc.yml ./

COPY packages packages

# Comment this out if you don't have any internal plugins
COPY plugins plugins

RUN find packages -mindepth 2 -maxdepth 2 \! -name "package.json" -exec rm -rf {} \+

# Stage 2 - Install dependencies and build packages
FROM node:22-bookworm-slim AS build

# Set Python interpreter for `node-gyp` to use
ENV PYTHON=/usr/bin/python3

# Install isolate-vm dependencies, these are needed by the @backstage/plugin-scaffolder-backend.
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
    --mount=type=cache,target=/var/lib/apt,sharing=locked \
    apt-get update && \
    apt-get install -y --no-install-recommends python3 g++ build-essential && \
    rm -rf /var/lib/apt/lists/*

# Install sqlite3 dependencies. You can skip this if you don't use sqlite3 in the image,
# in which case you should also move better-sqlite3 to "devDependencies" in package.json.
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
    --mount=type=cache,target=/var/lib/apt,sharing=locked \
    apt-get update && \
    apt-get install -y --no-install-recommends libsqlite3-dev && \
    rm -rf /var/lib/apt/lists/*

USER node
WORKDIR /app

COPY --from=packages --chown=node:node /app .

RUN --mount=type=cache,target=/home/node/.cache/yarn,sharing=locked,uid=1000,gid=1000 \
    yarn install --immutable

COPY --chown=node:node . .

RUN yarn tsc
RUN yarn --cwd packages/backend build

RUN mkdir packages/backend/dist/skeleton packages/backend/dist/bundle \
    && tar xzf packages/backend/dist/skeleton.tar.gz -C packages/backend/dist/skeleton \
    && tar xzf packages/backend/dist/bundle.tar.gz -C packages/backend/dist/bundle

# Stage 3 - Build the actual backend image and install production dependencies
FROM node:22-bookworm-slim

# Set Python interpreter for `node-gyp` to use
ENV PYTHON=/usr/bin/python3

# Install isolate-vm dependencies, these are needed by the @backstage/plugin-scaffolder-backend.
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
    --mount=type=cache,target=/var/lib/apt,sharing=locked \
    apt-get update && \
    apt-get install -y --no-install-recommends python3 g++ build-essential && \
    rm -rf /var/lib/apt/lists/*

# Install sqlite3 dependencies. You can skip this if you don't use sqlite3 in the image,
# in which case you should also move better-sqlite3 to "devDependencies" in package.json.
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
    --mount=type=cache,target=/var/lib/apt,sharing=locked \
    apt-get update && \
    apt-get install -y --no-install-recommends libsqlite3-dev && \
    rm -rf /var/lib/apt/lists/*

# From here on we use the least-privileged `node` user to run the backend.
USER node

# This should create the app dir as `node`.
# If it is instead created as `root` then the `tar` command below will
# fail: `can't create directory 'packages/': Permission denied`.
# If this occurs, then ensure BuildKit is enabled (`DOCKER_BUILDKIT=1`)
# so the app dir is correctly created as `node`.
WORKDIR /app

# Copy the install dependencies from the build stage and context
COPY --from=build --chown=node:node /app/.yarn ./.yarn
COPY --from=build --chown=node:node /app/.yarnrc.yml  ./
COPY --from=build --chown=node:node /app/backstage.json ./
COPY --from=build --chown=node:node /app/yarn.lock /app/package.json /app/packages/backend/dist/skeleton/ ./

# Note: The skeleton bundle only includes package.json files -- if your app has
# plugins that define a `bin` export, the bin files need to be copied as well to
# be linked in node_modules/.bin during yarn install.

RUN --mount=type=cache,target=/home/node/.cache/yarn,sharing=locked,uid=1000,gid=1000 \
    yarn workspaces focus --all --production && rm -rf "$(yarn cache clean)"

# Copy the built packages from the build stage
COPY --from=build --chown=node:node /app/packages/backend/dist/bundle/ ./

# Copy any other files that we need at runtime
COPY --chown=node:node app-config*.yaml ./

# This will include the examples, if you don't need these simply remove this line
COPY --chown=node:node examples ./examples

# This switches many Node.js dependencies to production mode.
ENV NODE_ENV=production

# This disables the Node snapshot feature so the Scaffolder works on Node 20+
ENV NODE_OPTIONS="--no-node-snapshot"

CMD ["node", "packages/backend", "--config", "app-config.yaml", "--config", "app-config.production.yaml"]

  2. Add a Dockerfile-specific ignore file

If your existing .dockerignore is optimized for building on the host (for example, it ignores packages/*/src or plugins/ entirely), it will break the multi-stage build.

Create Dockerfile.multi.dockerignore in repo root:

dist-types
node_modules
packages/*/dist
packages/*/node_modules
plugins/*/dist
plugins/*/node_modules
*.local.yaml

This keeps the build context clean while still including all the source the multi-stage build needs. Note that Dockerfile-specific ignore files such as Dockerfile.multi.dockerignore are a BuildKit feature, which is one more reason the pipeline sets DOCKER_BUILDKIT=1.


  3. Create a Harbor system robot account for Jenkins

In Harbor UI (system admin):

  • Administration → Robot Accounts → New Robot Account
  • Choose System robot account
  • Name it: jenkins
    Harbor login user becomes: robot$jenkins
  • System permissions: none
  • Project permissions: for the project you will push to (example: backstage), grant:
    • Repository → Pull
    • Repository → Push
  • Create the account and copy the secret/token (shown once)
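
One gotcha when testing the robot account by hand: the user name contains a literal $, so an unquoted robot$jenkins loses the $jenkins part to shell expansion. A small sketch (the docker login line is shown as a comment; run it on a Docker-capable host, and ROBOT_SECRET stands in for the secret you copied):

```shell
# Single-quote the user name so the shell does not expand `$jenkins`.
HARBOR_USER='robot$jenkins'
echo "$HARBOR_USER"

# Manual login check (substitute your robot secret for ROBOT_SECRET):
#   echo "$ROBOT_SECRET" | docker login harbor.maksonlee.com -u "$HARBOR_USER" --password-stdin
```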

  4. Add the robot credentials to Jenkins

Jenkins → Manage Jenkins → Credentials → System → Global credentials → Add Credentials:

  • Kind: Username with password
  • Username: robot$jenkins
  • Password: <robot secret/token>
  • ID: harbor-system-robot
  • Description: Harbor system robot for pushing Backstage images

Click Create.


  5. Jenkinsfile: build and push with short SHA tags

Use the following pipeline definition:

pipeline {
    agent { label 'ssh-agent-with-docker' }

    triggers {
        githubPush()
    }

    environment {
        DOCKER_BUILDKIT = '1'
        HARBOR_REGISTRY = 'harbor.maksonlee.com'
        HARBOR_PROJECT = 'backstage'
        IMAGE_NAME = 'homelab-backstage'
    }

    stages {
        stage('Fetch code') {
            steps {
                deleteDir()
                git credentialsId: 'vault-github-ssh',
                        url: 'git@github.com:maksonlee/homelab-backstage.git',
                        branch: 'main'
            }
        }

        stage('Build') {
            steps {
                sh '''#!/usr/bin/env bash
                    set -euo pipefail
                    TAG="$(git rev-parse --short=7 HEAD)"
                    IMAGE_SHA="${HARBOR_REGISTRY}/${HARBOR_PROJECT}/${IMAGE_NAME}:${TAG}"
                    IMAGE_TEST="${HARBOR_REGISTRY}/${HARBOR_PROJECT}/${IMAGE_NAME}:test"

                    docker build -f Dockerfile.multi -t "$IMAGE_SHA" -t "$IMAGE_TEST" .
                '''
            }
        }

        stage('Push to Harbor') {
            steps {
                withCredentials([usernamePassword(
                        credentialsId: 'harbor-system-robot',
                        usernameVariable: 'HARBOR_USER',
                        passwordVariable: 'HARBOR_PASS'
                )]) {
                    sh '''#!/usr/bin/env bash
                        set -euo pipefail
                        TAG="$(git rev-parse --short=7 HEAD)"
                        IMAGE_SHA="${HARBOR_REGISTRY}/${HARBOR_PROJECT}/${IMAGE_NAME}:${TAG}"
                        IMAGE_TEST="${HARBOR_REGISTRY}/${HARBOR_PROJECT}/${IMAGE_NAME}:test"

                        echo "$HARBOR_PASS" | docker login "$HARBOR_REGISTRY" -u "$HARBOR_USER" --password-stdin
                        docker push "$IMAGE_SHA"
                        docker push "$IMAGE_TEST"
                        docker logout "$HARBOR_REGISTRY"
                    '''
                }
            }
        }
    }

    post {
        always {
            cleanWs(deleteDirs: true, notFailBuild: true)
        }
    }
}
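
After a green run you can confirm the tags landed in Harbor. A sketch using Harbor's v2.0 REST API (the curl line is left as a comment so nothing runs blindly; the <secret> placeholder is your robot secret, and jq is assumed for pretty-printing):

```shell
# Artifact listing endpoint for this guide's project/repository.
API="https://harbor.maksonlee.com/api/v2.0/projects/backstage/repositories/homelab-backstage/artifacts"
echo "$API"

# List pushed tags, authenticating with the robot account:
#   curl -s -u 'robot$jenkins:<secret>' "$API" | jq -r '.[].tags[].name'
```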
