Building with Dagger
I originally wanted to demonstrate the combination of Gitlab, Podman and Dagger in a single blog post, but it appears I made that decision without knowing the hell of Gitlab and the self-signed mess of SSL certificates.
So let us break this apart again and do another small series: the interesting parts about Dagger and Podman upfront, and a follow-up about how to integrate rootless(?) Podman into Gitlab, both as runner and as service inside of Podman, and really see if this is better than the Docker-in-Docker hot garbage.
Intrigued? Please follow me to the next chapter.
Short disclaimer: This article revolves around the pipeline tool Dagger and not the dependency injector by Google that shares the same name.
What is Dagger?
Building software is usually a burdensome task and oftentimes involves lengthy and - I am quite certain you have seen it for yourself - messy pipeline scripts in strange, exotic configuration languages.
These scripts quickly earn a do-not-mess-with-them reputation, and you have to rely on the single responsible person knowledgeable in the arcane - sounds familiar?
Dagger approaches this quite differently and provides a programmable CICD engine with bindings for actual programming languages, with the aim of creating pipelines-as-code. These pipelines can then be used inside of CICD systems and also directly on developer machines on a daily basis.
On a high level, the architecture of Dagger looks like this:

1. The main engine must be running and handles all the coordination.
2. There are multiple options to run the actual engine - like clouds or containerized on your local machine.
3. Everything is handled via Dagger's GraphQL API.
4. Various SDKs for popular languages allow integration and interaction.
5. The Daggerverse contains user-contributed functions and modules for common tasks like handling secrets or specific deployments to service vendors.
6. CI can also directly interact with the API.
7. And the same is also true for the commandline.
How to build?
One of the advantages of Dagger is that it builds inside of a container, which makes builds reproducible across different machines - provided you avoid the latest tag for required dependencies. Additionally, build layers are automatically cached and re-used where possible, so the overhead this unfortunately creates is only noticeable when the container itself actually changes.
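To make the point about latest tags concrete, here is a tiny hypothetical variation of the From() call used in the example below, pinning the Go toolchain to a fixed tag instead:

// Hypothetical variation: pin the build image to a fixed tag (or even a digest)
// instead of latest, so a later re-run still resolves to the same toolchain.
golang := client.
    Container().
    From("docker.io/library/golang:1.21")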
In order to get started we just have to define the build environment and start with a build container which can look like this in a minimal version:
package main

import (
    "context"
    "fmt"
    "os"
    "path"

    "dagger.io/dagger"
)

func main() {
    ctx := context.Background()

    client, err := dagger.Connect(ctx, dagger.WithLogOutput(os.Stdout)) // (1)
    if nil != err {
        panic(err)
    }
    defer client.Close()

    build(ctx, client)
}

func build(ctx context.Context, client *dagger.Client) error {
    fmt.Println("Building with Dagger")

    src := client.Host().Directory(".", dagger.HostDirectoryOpts{ // (2)
        Exclude: []string{"ci/"},
    })

    const buildPath = "build/"

    golang := client. // (3)
        Pipeline("Build application").
        Container().
        From("golang:latest").
        WithDirectory("/src", src).
        WithWorkdir("/src").
        WithExec([]string{"go", "build", "-o", path.Join(buildPath, "todo-showcase")})

    output := golang.Directory(buildPath)

    _, err := output.Export(ctx, buildPath) // (4)
    if nil != err {
        return err
    }

    return nil
}
1. Connecting the client automatically checks if the Dagger engine is running and starts it otherwise.
2. Our source directory includes everything besides the stuff underneath ci.
3. The next few lines are just Containerfile commands called from Golang.
4. And finally we run our pipeline.
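The automatic layer cache can also be complemented with explicit cache volumes via the SDK's CacheVolume and WithMountedCache calls. This is not part of the minimal example above, but a small sketch of how the Go module cache could be persisted between runs (reusing src and buildPath from the build function) might look like this:

// Sketch: mount a named cache volume for the Go module cache, so consecutive
// runs of the pipeline do not have to download all dependencies again.
goModCache := client.CacheVolume("go-mod-cache")

golang := client.
    Pipeline("Build application").
    Container().
    From("golang:latest").
    WithDirectory("/src", src).
    WithWorkdir("/src").
    WithMountedCache("/go/pkg/mod", goModCache).
    WithExec([]string{"go", "build", "-o", path.Join(buildPath, "todo-showcase")})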
How to publish?
Publishing images works quite similarly to the build step. This time Dagger requires us to provide a container as a blueprint for the actual output artifact, and once we pass it in, Dagger happily assembles everything together.
Containers can also be supplied directly as a file, so we use that approach in the next code example:
func WithCustomRegistryAuth(client *dagger.Client, registryUrl string) dagger.WithContainerFunc { // (5)
    return func(container *dagger.Container) *dagger.Container {
        token, exists := os.LookupEnv("REGISTRY_TOKEN")

        if exists {
            return container.WithRegistryAuth(registryUrl,
                os.Getenv("REGISTRY_USER"),
                client.SetSecret("REGISTRY_TOKEN", token))
        }

        return container
    }
}

func publish(ctx context.Context, client *dagger.Client) {
    fmt.Println("Publishing with Dagger")

    const registryUrl = "docker.io/unexist"
    const tag = "0.1"

    _, err := client.
        Pipeline("Publish to registry").
        Host().
        Directory(".").
        DockerBuild(dagger.DirectoryDockerBuildOpts{ // (1)
            Dockerfile: "./ci/Containerfile.dagger",
            BuildArgs: []dagger.BuildArg{ // (2)
                {Name: "RUN_IMAGE", Value: "docker.io/alpine:latest"},
                {Name: "BINARY_NAME", Value: "todo-showcase"},
            },
        }).
        With(WithCustomRegistryAuth(client, registryUrl)). // (3)
        Publish(ctx, fmt.Sprintf("%s/showcase-dagger-golang:%s", registryUrl, tag)) // (4)
    if nil != err {
        panic(err)
    }
}
1. Another way to use Containerfiles is by directly loading them from the filesystem.
2. Parametrization can still be done, e.g. via build arguments which are baked into the container.
3. Since Dagger itself runs inside of a container, it requires our Dockerhub credentials (also see (5)).
4. When everything is in place the show can finally start!
5. The odd numbering is no accident - this is just a contrived example with command chaining to demonstrate the possibilities of clean pipelines at the end.
The Containerfile used here can be found at: https://github.com/unexist/showcase-dagger-golang/blob/master/todo-service-gin/ci/Containerfile.dagger
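The With() chaining shown in (3) and (5) is not limited to registry authentication. As a purely hypothetical example in the same style as WithCustomRegistryAuth, another small helper could attach OCI labels to the image before it is published:

// Hypothetical helper: returns a dagger.WithContainerFunc that adds OCI labels,
// so it can simply be chained into the pipeline via With() before Publish().
func WithCommonLabels(version string) dagger.WithContainerFunc {
    return func(container *dagger.Container) *dagger.Container {
        return container.
            WithLabel("org.opencontainers.image.source",
                "https://github.com/unexist/showcase-dagger-golang").
            WithLabel("org.opencontainers.image.version", version)
    }
}

Chained like this, each cross-cutting concern stays in its own small function and the pipeline itself remains readable.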
Everything together
After all those lines of code, here is the full (although partially cached) output of a build - which looks even better with colors in a shell:
$ REGISTRY_USER=unexist REGISTRY_TOKEN=xxx make dagger-publish-docker
█ [1.35s] connect
┣ [0.10s] starting engine
┣ [0.09s] starting session
┃ OK!
█ [20.06s] go run ci/main.go
┃ Building with Dagger
┃ Publishing with Dagger
┣─╮
│ ▽ host.directory .
│ █ [0.02s] upload . from meanas (client id: uhk8ah6k6spg7775kp825tjlk) (exclude: ci/)
│ ┣ [0.00s] transferring .:
│ █ [0.00s] blob://sha256:d9173afb7ebb842a73a3514e38cbfb0680524b1e5333ab04179b9197824c92a1
│ ┣─╮ blob://sha256:d9173afb7ebb842a73a3514e38cbfb0680524b1e5333ab04179b9197824c92a1
│ ┻ │
┣─╮ │
│ ▼ │ Build application
│ ┣─┼─╮
│ │ │ ▽ from docker.io/golang:latest
│ │ │ █ [1.15s] resolve image config for docker.io/library/golang:latest
┣─┼─┼─┼─╮
│ │ │ │ ▼ Build application
│ │ │ █ │ [0.01s] pull docker.io/library/golang:latest
│ │ │ ┣ │ [0.01s] resolve docker.io/library/golang:latest@sha256:d5302d40dc5fbbf38ec472d1848a9d2391a13f93293a6a5b0b87c99dc0eaa6ae
│ │ │ ┣─┼─╮ pull docker.io/library/golang:latest
│ ┻ │ ┻ │ │
│ ╰──▶█ │ CACHED copy / /src
│ │ ┻
│ █ CACHED exec go build -o build/todo-service.bin
│ ╭─────┫ exec go build -o build/todo-service.bin
│ │ ┻
┣─┼─╮
│ │ ▼ Build application
│ │ █ [0.16s] export directory /src/build to host build/
│ ╰▶█ CACHED copy /src/build /
│ ┻
┣─╮
│ ▽ host.directory build
│ █ [0.00s] upload build from meanas (client id: uhk8ah6k6spg7775kp825tjlk)
│ ┣ [0.00s] transferring build:
│ █ [0.00s] blob://sha256:d8f7d9beecbd43c9016754eea21a5ce80dc7d3fa180f0ea7efc124f0573fb996
│ ┣─╮ blob://sha256:d8f7d9beecbd43c9016754eea21a5ce80dc7d3fa180f0ea7efc124f0573fb996
│ ┻ │
┣─╮ │
│ ▼ │ Publish to Gitlab
│ ┣─┼─╮
│ │ │ ▽ from docker.io/alpine:latest
│ │ │ █ [0.64s] resolve image config for docker.io/library/alpine:latest
│ │ │ █ [0.01s] pull docker.io/library/alpine:latest
│ │ │ ┣ [0.01s] resolve docker.io/library/alpine:latest@sha256:c5b1261d6d3e43071626931fc004f70149baeba2c8ec672bd4f27761f8e1ad6b
│ │ │ ┣─╮ pull docker.io/library/alpine:latest
│ ┻ │ ┻ │
┣─╮ │ │
│ ▼ │ │ Publish to Gitlab
│ █◀╯ │ CACHED copy / /build
│ │ ┻
│ █ CACHED exec mkdir -p /app
│ █ CACHED exec cp /build/todo-service.bin /app
┻ ┻
• Engine: 18a7ea691821 (version v0.10.2)
⧗ 21.42s ✔ 42 ∅ 10
Once done, the final container can be found on the registry of your choice - like Dockerhub: https://hub.docker.com/repository/docker/unexist/showcase-dagger-golang/general
Or it can easily be verified with the help of dive - maybe by another pipeline:
$ dive docker.io/unexist/showcase-dagger-golang:0.1 --ci
Using default CI config
Image Source: docker://docker.io/unexist/showcase-dagger-golang:0.1
Fetching image... (this can take a while for large images)
Handler not available locally. Trying to pull 'docker.io/unexist/showcase-dagger-golang:0.1'...
Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
Trying to pull docker.io/unexist/showcase-dagger-golang:0.1...
Getting image source signatures
Copying blob ff1da1984623 done
Copying blob 4abcf2066143 done
Copying blob 8392176c7d6a done
Copying blob 8a9c5edd599d done
Copying config e201989f55 done
Writing manifest to image destination
Storing signatures
e201989f555d02d5d8b7ae5f374f2daef5b2918979aa811b487154b407c820d0
Analyzing image...
efficiency: 100.0000 %
wastedBytes: 0 bytes (0 B)
userWastedPercent: 0.0000 %
Inefficient Files:
Count Wasted Space File Path
None
Results:
PASS: highestUserWastedPercent
SKIP: highestWastedBytes: rule disabled
PASS: lowestEfficiency
Result:PASS [Total:3] [Passed:2] [Failed:0] [Warn:0] [Skipped:1]
Conclusion
Dagger offers a different way to create pipelines for the supported languages, and although we have seen a working example, the question still remains: who is this for, and is a migration worth the pain?
Using a full-fledged programming language surely doesn't get rid of the complexity of building software, but it moves the whole process from a kind of niche existence to a first-class citizen and lets more people learn, play and adapt it to their needs.
This directly distributes the arcane knowledge within the team and maybe reduces the bottlenecks when builds suddenly break.
So ultimately Dagger is for any person and/or team interested in improving productivity and reproducibility, and who does not shy away from new technology.
As always, all examples can be found here: