
Conversation

@NUZAT-TABASSUM commented Oct 29, 2025

This pull request adds a GitHub Actions workflow that automatically builds and publishes Tobira’s Docker images.

The workflow does the following:

  • Builds Tobira’s Docker image for both linux/amd64 and linux/arm64

  • Uses Node and Rust versions from util/scripts/build-container-image.sh

  • Builds and pushes images to GitHub Container Registry (GHCR) under ghcr.io/elan-ev/tobira

  • Automatically generates appropriate tags based on the Git event:

    • For pull requests, it only builds the image (no pushing).
    • For pushes to main, it builds and pushes development tags:
      edge → represents the latest code from main
      sha-<commit> → a unique tag for each commit,
    • For version tags (e.g., v3.9.0), it creates release images with multiple tags: v3.9.0, v3.9, v3, and latest
  • Each architecture (amd64 and arm64) is built separately and then combined into a multi-architecture image, so users can simply pull ghcr.io/elan-ev/tobira:<tag> and Docker automatically selects the correct architecture.
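For illustration, the tag selection described above can be sketched as a small shell function. This is a simplified stand-in for what docker/metadata-action computes declaratively; the function name and structure are hypothetical:

```shell
#!/bin/sh
# tags_for_ref: hypothetical helper mirroring the tagging rules above.
# Maps a Git ref to the image tags the workflow would publish.
tags_for_ref() {
  ref="$1"
  case "$ref" in
    refs/tags/v*)
      v="${ref#refs/tags/v}"               # e.g. 3.9.0
      echo "v$v v${v%.*} v${v%%.*} latest" # v3.9.0 v3.9 v3 latest
      ;;
    refs/heads/main)
      echo "edge"                          # latest code from main
      ;;
    *)
      echo ""                              # PRs: build only, no tags pushed
      ;;
  esac
}

tags_for_ref refs/tags/v3.9.0   # → v3.9.0 v3.9 v3 latest
tags_for_ref refs/heads/main    # → edge
```

Again, the workflow itself derives these tags via docker/metadata-action; this sketch only makes the mapping explicit.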

@KatrinIhler
Member

This is related to opencast/opencast-docker#243 which is supposed to be a first step for opencast/opencast-docker#240.

@NUZAT-TABASSUM In your description, could you elaborate a bit more on what your workflow does?

@NUZAT-TABASSUM
Author

> This is related to opencast/opencast-docker#243 which is supposed to be a first step for opencast/opencast-docker#240.
>
> @NUZAT-TABASSUM In your description, could you elaborate a bit more on what your workflow does?

I’ve updated the PR description and added more details about what the workflow does.

@LukasKalbertodt (Member) left a comment

Hi! Sorry for the delay in responding to your PR.

I don't know a ton about Docker, certainly not about best practices. This is my first review of it, mostly just questions to you, not necessarily change requests. It would be nice if you could explain the reasoning behind your changes so I can understand them.

  • Generally the script seems to be quite involved, covering many different cases. I would have expected something simpler first. Maybe all of this makes sense, it's just my intuition to start with something with fewer bells and whistles.
  • Did you test all of this on your own fork? I.e. also test the upload part?
  • Can you quickly explain why you decided for ghcr.io instead of say, quay.io, which is used in build-container-image.sh?

Also summoning @mtneug who knows a lot more about these things. I know you don't have a lot of time right now, but maybe you want to have a look here anyway.

Comment on lines +86 to +87
- name: Set up QEMU
  uses: docker/setup-qemu-action@v3
@LukasKalbertodt (Member) commented Nov 24, 2025
What is this used for?

Contributor

This creates an environment for cross-platform builds.

Member

So it is required for the ARM build? Rust doesn't need QEMU to cross-compile to ARM. Not sure what Docker (or buildx or whatever) requires.

Author

Yes, Rust cross-compilation doesn't need QEMU, but the Docker build still needs it to run package installs and scripts during the arm64 stages.
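As a minimal sketch, the relevant steps look roughly like this. QEMU registers binfmt handlers so that `RUN` instructions in arm64 stages can execute on the amd64 runner; the step names and the `platforms` value are illustrative:

```yaml
# Illustrative snippet: QEMU is only needed so that arm64 stages of the
# Dockerfile (e.g. apt/apk package installs) can run on the amd64 runner.
- name: Set up QEMU
  uses: docker/setup-qemu-action@v3
  with:
    platforms: arm64

# Buildx provides the BuildKit builder used for the multi-platform build.
- name: Set up Buildx
  uses: docker/setup-buildx-action@v3
```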

Comment on lines +54 to +56
- name: arm64
  platform: linux/arm64
  rust_target: aarch64-unknown-linux-musl
Member

Why build for ARM? 🤔 Tobira never released for ARM, was never tested for ARM and is very unlikely to be used on an ARM machine.

Contributor

Why not? I build Opencast container images for ARM. Tobira is probably even easier to build for this platform.

Member

(a) Because it wastes compute and storage resources if no one uses it. And (b) we would have to at least clearly document somewhere that this is experimental/not officially supported. People should know that Tobira devs never compile nor test it. So I'd rather leave it out until someone asks for it. Or am I missing something?

Contributor

Users on macOS are on ARM. While you can run other architectures, Docker (and probably any other container engine) defaults to downloading ARM images there. I would always build them, if only for local testing.

Comment on lines 3 to 8
on:
  pull_request:
    branches: [ main ]
  push:
    branches: [ main ]
    tags: [ 'v*' ]
Member

I would run this workflow a lot less often, specifically: just for tags (i.e. releases). Is there any particular reason why you also build for PRs and build+upload for main pushes?

Contributor

It's debatable whether you want this, but the Opencast container images have {version}-dev images built from the r/{version}.x release branches (and a dev image built from develop) to ease testing of upcoming versions. There are no images resulting from Opencast PRs.

Comment on lines +100 to +101
- name: Build & push single-arch image
  uses: docker/build-push-action@v6
Member

Is using this action better than using build-container-image.sh? Would be nice to not duplicate the logic I guess? But maybe there are good reasons for not reusing the script?

Author

Yes, I can do that too.

Comment on lines 109 to 110
cache-from: type=gha
cache-to: type=gha,mode=max
Member

What exactly is cached here? Is it actually worth it using a cache? In particular if we switch to only building releases.

Contributor

These are BuildKit caches. This could speed up builds if Docker can verify that a step in the Dockerfile does not need to be performed again. I don't think this will improve much if only releases are built. Also, I'm a bit afraid that some layers with external changes, e.g. OS package installs, are cached indefinitely.

Author

Those lines just turn on Docker BuildKit caching. It saves already-built Docker layers in GitHub Actions so the next build can reuse them and run faster.
This is mainly useful if we build often (PRs/main). If we switch to release-only builds, the cache probably won’t help much. I can remove it then.
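For reference, a hedged sketch of what those two lines configure. The `scope` key is optional and shown here only as one possible way to keep per-architecture caches from evicting each other; it is not part of the original lines:

```yaml
# GitHub Actions (gha) BuildKit cache backend: stores and restores image
# layers between workflow runs. mode=max also caches intermediate stages,
# not only the layers of the final image.
cache-from: type=gha,scope=${{ matrix.name }}
cache-to: type=gha,mode=max,scope=${{ matrix.name }}
```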

Comment on lines 122 to 139
manifest:
  name: Publish multi-arch tags
  needs: [meta, build]
  if: github.event_name != 'pull_request'
  runs-on: ubuntu-latest
  steps:
    - name: Set up Buildx
      uses: docker/setup-buildx-action@v3

    - name: Generate and publish multi-arch Docker manifest
      run: |
        for TAG in ${{ needs.meta.outputs.tags }}; do
          echo "→ ${TAG}"
          docker buildx imagetools create \
            --tag "${TAG}" \
            "${TAG}-amd64" \
            "${TAG}-arm64"
        done
Member

I'm not quite sure what all of this does

Contributor

This is creating a multi-platform image, e.g. pulling ghcr.io/elan-ev/tobira:latest will automatically pull the arm64 or amd64 image based on the host (if not set otherwise during the pull). I never used the imagetools subcommands. That control is interesting, especially since I had some trouble with ghcr.io in the past when deleting some tags of multi-platform images.

@NUZAT-TABASSUM are you sure this works? Are the ${TAG}-amd64 and ${TAG}-arm64 variants created as tags?

Author

Yes, it's working.

images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
tags: |
  type=sha,format=short,prefix=sha-
  type=raw,value=edge,enable=${{ github.ref == format('refs/heads/{0}', github.event.repository.default_branch) }}
Contributor

Use dev instead of edge?

Author

Sure

with:
  images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
  tags: |
    type=sha,format=short,prefix=sha-
Contributor

Do we want that? Git hashes as container tags?

Author

I added the sha tag mainly for traceability: it makes it easy to say "use the image from this exact commit" when debugging or testing. If you don't want that, I can disable it.

Comment on lines 37 to 39
type=semver,pattern={{version}},enable=${{ startsWith(github.ref, 'refs/tags/') }}
type=semver,pattern={{major}}.{{minor}},enable=${{ startsWith(github.ref, 'refs/tags/') }}
type=semver,pattern={{major}},enable=${{ startsWith(github.ref, 'refs/tags/') }}
Contributor

Is this working? Tobira tags don't follow semver.

rust_target: aarch64-unknown-linux-musl

steps:
- uses: actions/checkout@v4
Contributor

There is now v6. Please check all versions again for current releases.

Comment on lines 61 to 67
- name: Extract Node and Rust versions
  id: tools
  run: |
    nv=$(grep -E '^NODE_VERSION=' util/scripts/build-container-image.sh | head -n1 | cut -d= -f2)
    rv=$(grep -E '^RUST_VERSION=' util/scripts/build-container-image.sh | head -n1 | cut -d= -f2)
    echo "node=${nv}" >> "$GITHUB_OUTPUT"
    echo "rust=${rv}" >> "$GITHUB_OUTPUT"
Contributor

Is this the canonical place to look for the current versions? Maybe add a new file that contains this info for all scripts in the repository? Not sure what is best for Rust projects. The Opencast container build repo simply has VERSION_OPENCAST, VERSION_FFMPEG etc.

Member

Good point. Yes, I already noticed that this stuff is kind of out of date. There are util/scripts/check-system.sh and build-container-image.sh. There are probably better ways to do it, honestly. Cargo.toml has a field to specify the minimum Rust version required, for example. Will look into it.

Author

Right now it reads the Node/Rust versions from build-container-image.sh just because they already live there, but I agree that's not a great approach. Could I create a versions.env file that holds both the Node and Rust versions in one shared place for all scripts? Cargo.toml only covers Rust, not Node.
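A minimal sketch of that idea. The file name versions.env and the version numbers below are placeholders, not the real ones:

```shell
#!/bin/sh
# Hypothetical versions.env holding the tool versions in one shared place.
cat > versions.env <<'EOF'
NODE_VERSION=20
RUST_VERSION=1.80
EOF

# Any script (or a workflow step) can then just source it:
. ./versions.env
echo "node=${NODE_VERSION} rust=${RUST_VERSION}"
```

The "Extract Node and Rust versions" step could then source this file instead of grepping the build script.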

Comment on lines 112 to 113
${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ steps.vars.outputs.version }}-${{ matrix.name }}
${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:sha-${{ steps.vars.outputs.short_sha }}-${{ matrix.name }}
Contributor

Why set other tags than selected with docker/metadata-action?

Author

I use docker/metadata-action for the final tags people will pull (latest, edge, release tags, etc.).
The extra tags I set manually (…-amd64 / …-arm64) are just temporary helper tags so the manifest step can combine both architectures into one multi-arch tag.
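To make that concrete, here is a dry-run sketch of the manifest step. The tag values are made up, and the echo only prints the command the real step would run instead of executing it:

```shell
#!/bin/sh
# Dry-run: for each final tag, print the imagetools command that would
# merge the per-arch helper tags into one multi-arch tag.
TAGS="ghcr.io/elan-ev/tobira:edge ghcr.io/elan-ev/tobira:sha-abc1234"
for TAG in $TAGS; do
  echo "docker buildx imagetools create --tag ${TAG} ${TAG}-amd64 ${TAG}-arm64"
done
```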

@NUZAT-TABASSUM
Author

> Hi! Sorry for the delay responding to your PR.
>
> I don't know a ton about Docker, certainly not about best practices. This is my first review of it, which is mostly just questions to you, not necessarily change requests. It would be nice if you could explain the reasoning behind your changes for me to understand.
>
> * Generally the script seems to be quite involved, covering many different cases. I would have expected something simpler first. Maybe all of this makes sense, it's just my intuition to start with something with fewer bells and whistles.
>
> * Did you test all of this on your own fork? I.e. also test the upload part?
>
> * Can you quickly explain why you decided for ghcr.io instead of say, quay.io, which is used in `build-container-image.sh`?
>
> Also summoning @mtneug who knows a lot more about these things. I know you don't have a lot of time right now, but maybe you want to have a look here anyway.

Hi Lukas, thanks a lot for the careful review.
I tried to cover the common full workflow in one go (build, tag, push, and multi-arch). If you'd prefer, I can start simpler (e.g. build/push only on release tags first).
Yes, I tested this on my fork with a push to main; it pushed the images and created the multi-arch tags.
I used ghcr.io instead of quay.io because it was suggested by the Tobira group.

@NUZAT-TABASSUM
Author

Hi! I've updated the workflow based on the feedback. Could you please take another look and re-review when you have time? Thanks!
