k8s: Deploying CD on Kubernetes
Categories: programming
The next major step in my Home Lab platform is to move from Jenkins to a Kubernetes-native CD tool. Jenkins was great in the 2000s for building Java applications, and it definitely continues to be relevant in many contexts, but the integration with Kubernetes is a bit lackluster. Honestly, after nearly two decades of building tooling, the sparse documentation feels old school at this point. I am hoping to find something more performant.
Competitors
Two main solutions I know of in the CD space:
- ArgoCD is the one I am most familiar with. I did some research in the last year, but the community felt fractured. I was able to get Argo Workflows up and running, which at the time claimed to be replacing ArgoCD, yet it seems most of the community stuck with ArgoCD directly.
- Tekton seems to offer many of the same things and appears more stable in terms of roadmap and direction.
From the outside both are fairly similar. They use Kubernetes Custom Resource Definitions to describe the current state of the system, the intended state, and progress towards reconciling the two. From the marketing material they seem to differ in that ArgoCD focuses on GitOps while Tekton focuses on flexibility. It feels like an opinionated versus toolbox view, which puts Jenkins closer to Tekton after its shift beyond just Java.
Goals
Generally my pipelines follow a similar setup:
- Checkout the code from SCM
- Component Verification
  - Run unit tests
  - Run internal integration tests
- Deploy and Verify
  - Build production artifacts
  - Deploy to an early integration environment
  - Verify systemic behavior scoped to the product and everything downstream of it
  - Deploy to production-like environments, usually one per life cycle
    - Ideally at each stage we verify the entire system works as expected
  - Deploy to production
This is the ideal. I will happily cut corners depending on the maturity of each component. For instance, I will generally spike a new system and then commit the sin of not deleting the spike before writing tests. Sure, sometimes it bites me, but often these prototypes are just meant to prove out an idea.
Currently, my deployed software inventory contains:
- Go binaries, usually with a `FROM scratch` setup (sketched after this list)
- NodeJS. Generally a NextJS setup but not always.
- Ruby
- Static HTML. Ideally this is served up via an S3-like setup but not always.
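As an aside, the `FROM scratch` pattern for those Go binaries looks roughly like the following multi-stage Dockerfile; the builder image, paths, and package name here are placeholders rather than my exact setup:

# Build stage: compile a static binary so it can run without a libc
FROM golang:1.18 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /service ./cmd/service

# Final stage: nothing but the binary
FROM scratch
COPY --from=build /service /service
ENTRYPOINT ["/service"]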
Starting with Tekton
Having less history with Tekton, I figured it has the greatest potential for both risk and reward. Skipping past the videos, they have a Getting Started Guide using minikube. Deployment appears to be via a `kubectl apply` of a manifest hosted by Google. Seems pretty straightforward: it registers many of the CRDs and admission controllers, with most of it living within the `tekton-pipelines` namespace. Giving `kubectl apply --filename https://storage.googleapis.com/tekton-releases/pipeline/latest/release.yaml` a whirl installs correctly, with two warnings about `autoscaling/v2beta1` and `policy/v1beta1` being deprecated. The pods appear to deploy correctly to both `arm64` and `amd64` nodes, which is exciting.
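To sanity check the rollout across architectures I used plain kubectl along these lines (nothing Tekton specific):

# List the Tekton pods along with the node each one landed on
kubectl get pods --namespace tekton-pipelines -o wide
# Confirm the architecture reported by each node
kubectl get nodes -o custom-columns=NAME:.metadata.name,ARCH:.status.nodeInfo.architecture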
Their example tasks are created in your `default` namespace. Using `kubectl logs --selector=tekton.dev/taskRun=hello-task-run` works as advertised; no magic here. Totally dig that! The pod is easy to find in the same namespace as the completed task. I wonder how often the tasks are cleaned up? Future me problem though.
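For context, the getting started example is roughly the following shape (reproduced from memory, so treat it as a sketch rather than the exact manifest):

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: hello
spec:
  steps:
    - name: echo
      image: alpine
      script: |
        echo "Hello World"
---
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: hello-task-run
spec:
  taskRef:
    name: hello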
Pipelines are pretty straightforward, although requiring their CLI just to verify the logs were generated is a bit of a bummer. After `brew install tektoncd-cli` you need to symlink the binary into your path as `kubectl-tkn` in order to make it accessible as a kubectl plugin. Running `tkn` exposes a bunch of commands, including several cluster-level things, so security is something I will need to keep an eye on in the future.
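For the record, the CLI setup was along these lines; the symlink target depends on where Homebrew puts binaries on your machine:

brew install tektoncd-cli
# expose tkn as a kubectl plugin so `kubectl tkn ...` resolves
ln -s "$(command -v tkn)" /usr/local/bin/kubectl-tkn
tkn version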
Seeing Tekton in Action
Unless something has a user interface beyond the CLI I feel like it is just abstract putty; a strange holdover from building theoretical things I could not share with others because there was no user interface. Tekton Dashboard is their visualization layer, and the installation method worked well.
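Installation is another kubectl apply of a hosted manifest. From memory it was something along these lines; the service name and port come from the default deployment, so double check them against the release you install:

kubectl apply --filename https://storage.googleapis.com/tekton-releases/dashboard/latest/release.yaml
# Then port-forward to poke at it locally
kubectl port-forward --namespace tekton-pipelines service/tekton-dashboard 9097:9097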
Security is entirely open by default. You have to tweak the deployment manifests or revoke permissions after the fact in order to lock it down. Until I have a better understanding of the environment I chose to uninstall the Dashboard entirely.
Delivering Value using Tekton
The goal is to build a Golang application which runs in a trusted environment. This will require a pipeline composed of tasks and steps.
From their conceptual overview page:
- A pipeline executes a series of tasks with dependency management.
- A task consists of a series of steps executed in order.
- A step is a single operation, such as compilation or unit testing.
`TaskRun` and `PipelineRun` objects track progress of the execution through their respective entities.
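Since these are all just custom resources, the normal kubectl plumbing works for poking at them, with tkn layering a friendlier view on top:

kubectl get taskruns,pipelineruns
tkn taskrun list
tkn pipelinerun list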
Dependencies
Their How To Guides are super sparse. However, there are a bunch of awesome prebuilt components at Tekton Hub which can be explored for examples. For this to work I will need several existing components.
- git-clone for grabbing the source code.
kubectl apply -f https://raw.githubusercontent.com/tektoncd/catalog/main/task/git-clone/0.6/git-clone.yaml
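Once applied, it is worth double checking the task's parameters and workspaces before wiring it into a pipeline:

kubectl get task git-clone
tkn task describe git-clone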
Next up is the actual build task. Currently, I have an image which configures Go to properly pull from an internal repository.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: golang-build
  labels:
    app.kubernetes.io/version: "0.1"
  annotations:
    tekton.dev/pipelines.minVersion: "0.12.1"
    tekton.dev/categories: Build Tools
    tekton.dev/tags: build-tool
    tekton.dev/displayName: "golang build"
    tekton.dev/platforms: "linux/amd64,linux/arm64"
spec:
  description: >-
    Tests then builds a Golang service
  params:
    - name: package
      description: base package to build in
    - name: docker-image
      description: Docker image URL in repository:tag format
    - name: TARGETOS
      description: target operating system. generally linux
    - name: TARGETARCH
      description: target architecture. generally amd64 or arm64
  workspaces:
    - name: source
  steps:
    - name: build
      image: docker.workshop.meschbach.org/mee/platform/golang-builder:1.18.3
      workingDir: $(workspaces.source.path)
      script: |
        go test ./pkg/...
        CGO_ENABLED=0 GOOS=$(params.TARGETOS) GOARCH=$(params.TARGETARCH) go build -ldflags='-w -s -extldflags "-static"' -o service $(params.package)
    - name: package
      image: gcr.io/kaniko-project/executor:v1.8.1
      workingDir: $(workspaces.source.path)
      args:
        - "--dockerfile=$(params.package)/Dockerfile"
        - "--context=$(workspaces.source.path)"
        - "--destination=$(params.docker-image)"
Pipeline definition
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: go-trusted-service
spec:
  description: |
    Clones, tests, and builds Golang artifacts
  params:
    - name: repo-url
      type: string
      description: The git repo URL to clone from.
    - name: docker-image
      type: string
      description: Docker image to store the product as
  workspaces:
    - name: scm
      description: |
        Workspace cloned from the source code management system
    - name: git-credentials
      description: My SSH credentials
  tasks:
    - name: fetch-source
      taskRef:
        name: git-clone
      workspaces:
        - name: output
          workspace: scm
        - name: ssh-directory
          workspace: git-credentials
      params:
        - name: url
          value: $(params.repo-url)
    - name: build
      runAfter: ["fetch-source"]
      taskRef:
        name: golang-build
      workspaces:
        - name: source
          workspace: scm
      params:
        - name: TARGETOS
          value: linux
        - name: TARGETARCH
          value: amd64
        - name: package
          value: "./cmd/service"
        - name: docker-image
          value: "$(params.docker-image)"
Pipeline Run
The following references the git-credentials workspace, which is backed by a secret. Since my home lab uses SSH I need to create those credentials via something like: `kubectl create secret generic git-meschbach-com --from-file=config=./config --from-file=id_rsa=./go-get.key --from-file=known_hosts=./known_hosts`.
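The git-clone task treats that secret as the contents of a .ssh directory, so the config file is just a normal SSH client config. Mine looks roughly like the following; the host and user are illustrative, and the key path matches the id_rsa entry packed into the secret:

Host git.meschbach.com
  User git
  IdentityFile ~/.ssh/id_rsa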
With all that we place a bow on it by running the pipeline:
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: go-trusted-service-test
spec:
  pipelineRef:
    name: go-trusted-service
  workspaces:
    - name: scm
      volumeClaimTemplate:
        spec:
          storageClassName: "synology-nfs-auto"
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 1Gi
    - name: git-credentials
      secret:
        secretName: git-meschbach-com
  params:
    - name: repo-url
      value: git@git.meschbach.com:mee/chorinator.git
    - name: docker-image
      value: "docker.workshop.meschbach.org/mee/chorinator:experiment"
I used the following references while building this:
- Clone a git repository with Tekton
- golang build
- Workspaces are effectively volumes reused across steps: Overview
Taking it from Single architecture to multi-architecture
Overall Tekton is fairly nice; setup and deployment were fairly easy. I did run into a hairy portion with building for multiple architectures. The following has to be added to your `PipelineRun` object in order to enforce a specific architecture:
podTemplate:
  nodeSelector:
    kubernetes.io/arch: "arm64"
Unfortunately there is no way to add this to a task or pipeline, so there is effectively no way to say “do this, but for all architectures”. Thinking this through, their model is probably a bit better than the shoe-horned mechanism I am attempting to reproduce:
- Perform unit tests
- Produce statically compiled binaries for each architecture in a well known location
- Use a trigger or event
- Triggers would then kick off Docker builds for each architecture
- Bring together each Docker image with `image-manifest` or something (a rough sketch follows below)
These are problems I will leave for another day though.
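For future reference, that stitching step could look roughly like the following with docker manifest; the per-architecture tags are placeholders, and manifest-tool or crane would do the same job:

# Combine per-architecture images into a single multi-arch tag
docker manifest create docker.workshop.meschbach.org/mee/chorinator:experiment \
  docker.workshop.meschbach.org/mee/chorinator:experiment-amd64 \
  docker.workshop.meschbach.org/mee/chorinator:experiment-arm64
docker manifest push docker.workshop.meschbach.org/mee/chorinator:experiment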