f3s: Kubernetes with FreeBSD - Part 9: GitOps with ArgoCD

Published at 2026-04-02T00:00:00+03:00

This is the 9th post in the f3s series about my self-hosting home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution I use on FreeBSD-based physical machines.

2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage

2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation

2025-02-01 f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts

2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs

2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network

2025-07-14 f3s: Kubernetes with FreeBSD - Part 6: Storage

2025-10-02 f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments

2025-12-07 f3s: Kubernetes with FreeBSD - Part 8: Observability

2025-12-14 f3s: Kubernetes with FreeBSD - Part 8b: Distributed Tracing with Tempo

2026-04-02 f3s: Kubernetes with FreeBSD - Part 9: GitOps with ArgoCD (You are currently reading this)

f3s logo

ArgoCD Application Resource Tree


Introduction

In previous posts, I deployed applications to the k3s cluster using Helm charts and Justfiles--running `just install` or `just upgrade` to push changes to the cluster. That worked, but it had some drawbacks: no record of what was deployed when, nothing to catch configuration drift, and every change meant running commands by hand from my workstation.

So I migrated everything to GitOps with ArgoCD. Now the Git repo is the single source of truth, and ArgoCD keeps the cluster in sync automatically.

GitOps in a Nutshell

Describe your entire desired state in Git, and let an agent in the cluster pull that state and reconcile it continuously. Every change goes through a commit, so you get version history, collaboration, and rollback for free.

For Kubernetes specifically: the desired state is the YAML manifests and Helm charts in the repo, and the reconciling agent is a controller running inside the cluster--ArgoCD in my case.

ArgoCD

ArgoCD is a GitOps CD tool for Kubernetes. It runs as a controller in the cluster, constantly comparing what's running against what's in Git.

ArgoCD Documentation

The features I care about most for f3s: automated sync with self-heal (manual tweaks get reverted), sync waves for ordering dependent resources, multi-source Applications for combining upstream charts with local manifests, and the web UI for an at-a-glance view of every app.

Why Bother for a Home Lab?

Honestly, the biggest reason is disaster recovery. If the cluster dies, I can: reinstall k3s, bootstrap ArgoCD from the config repo, apply the Application manifests, and watch everything reconcile back into place.

That's it. No "let me check my shell history to remember how I set this up."

It's also a great way to learn. Setting up GitOps for real--even on a small cluster--teaches you things you won't pick up from tutorials alone. Debugging sync issues, figuring out sync waves, dealing with secrets management--all stuff that's directly applicable at work too.

Beyond that: push to Git, things deploy. No SSH'ing to a workstation to run Helm commands. And if I manually tweak something while debugging and forget about it, ArgoCD reverts it back to the desired state. That's happened more than once.

Deploying ArgoCD

ArgoCD manages everything else via GitOps, but ArgoCD itself needs a bootstrap. Chicken-and-egg problem.

The installation lives in the config repo:

codeberg.org/snonux/conf/f3s/argocd

I deployed it using Helm via a Justfile:
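The Justfile boils down to a `helm upgrade --install` against the upstream chart. A minimal sketch--the chart repo and chart name are the real upstream ones, the namespace and target names are my assumptions:

```just
# Bootstrap ArgoCD via Helm (the one thing not managed by ArgoCD itself).
namespace := "cicd"

install:
    helm repo add argo https://argoproj.github.io/argo-helm
    helm upgrade --install argocd argo/argo-cd \
        --namespace {{namespace}} --create-namespace \
        --values values.yaml
```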

Some highlights from `values.yaml`:

Persistent storage for the repo-server so cloned Git repos survive pod restarts:
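Roughly like this, following the upstream argo-cd chart's `repoServer` keys (claim name and mount details are assumptions):

```yaml
repoServer:
  volumes:
    - name: repo-cache
      persistentVolumeClaim:
        claimName: argocd-repo-cache   # claim name assumed
  volumeMounts:
    - name: repo-cache
      mountPath: /tmp                  # repo-server clones under /tmp
```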

Server runs in insecure mode since TLS is terminated by the OpenBSD edge relays (same pattern as all other f3s services):
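In the argo-cd chart this is a single parameter:

```yaml
configs:
  params:
    server.insecure: true   # plain HTTP; TLS terminates at the OpenBSD relays
```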

Dex (SSO) and notifications are disabled--overkill for a single-user home lab:
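Both are plain toggles in the chart values:

```yaml
dex:
  enabled: false           # no SSO needed for a single user
notifications:
  enabled: false
```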

The admin password is auto-generated on first install and stored in `argocd-initial-admin-secret`. It's preserved across Helm upgrades, so no manual secret creation needed:
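Reading it back is the standard one-liner from the ArgoCD docs:

```shell
# Decode the auto-generated admin password from the bootstrap secret.
kubectl -n cicd get secret argocd-initial-admin-secret \
    -o jsonpath='{.data.password}' | base64 -d
```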

Accessing ArgoCD

After deployment, ArgoCD runs in the `cicd` namespace.

ArgoCD login page

The ingress exposes both a WAN and LAN endpoint:
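I don't have the exact manifest here, but conceptually it's a standard two-host Ingress (both hostnames are assumptions):

```yaml
# Sketch: one Ingress, two hostnames.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd
  namespace: cicd
spec:
  rules:
    - host: argocd.example.org   # WAN, reached via the OpenBSD edge relays
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: argocd-server
                port:
                  number: 80
    - host: argocd.lan           # LAN endpoint, no relay hop
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: argocd-server
                port:
                  number: 80
```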

In-Cluster Git Server

I didn't want ArgoCD pulling from Codeberg over the internet every time it checks for changes. If Codeberg is down (or my internet is), the cluster can't reconcile. So I set up a Git server inside the cluster itself.

codeberg.org/snonux/conf/f3s/git-server (at 190473b)

The git-server runs as a single pod in the `cicd` namespace with two containers sharing a PVC: one serving repos over HTTP for ArgoCD to clone, and one accepting pushes over SSH.

ArgoCD uses the HTTP backend to clone repos. Most Application manifests point at:
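Something along these lines, following the usual in-cluster service DNS scheme (the service name and repo path are assumptions):

```
http://git-server.cicd.svc.cluster.local/conf.git
```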

For pushing, I use SSH via a NodePort (30022). The git user is locked down to git-shell--no actual shell access. SSH keys are managed through a Kubernetes Secret.

There's a chicken-and-egg situation here. The git-server's own ArgoCD Application manifest points at Codeberg (not at itself), since ArgoCD needs to bootstrap the git-server before it can use it:
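A sketch of that one bootstrap Application--the only one that must NOT point at the in-cluster git-server. The repoURL, path, and namespaces are assumptions based on the repo layout mentioned in this post:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: git-server
  namespace: cicd
spec:
  project: default
  source:
    repoURL: https://codeberg.org/snonux/conf.git  # external bootstrap source
    targetRevision: HEAD
    path: f3s/git-server
  destination:
    server: https://kubernetes.default.svc
    namespace: cicd
  syncPolicy:
    automated:
      selfHeal: true
      prune: true
```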

Once the pod is up, all other apps use the in-cluster URL. The dependency chain is: Codeberg -> git-server -> everything else.

The repo storage lives on NFS. Initial setup was just cloning the Codeberg repo as a bare repo into the NFS volume, then pointing my laptop's git remote at the NodePort:
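Pointing a clone at the NodePort looks something like this. The hostname and repo path are assumptions, and the sketch uses a throwaway repo so it runs anywhere; in practice you'd run the `remote` command in the existing conf clone:

```shell
# Demo in a throwaway repo; node1.lan and the repo path are assumptions.
tmp=$(mktemp -d)
git -C "$tmp" init --quiet
git -C "$tmp" remote add origin ssh://git@node1.lan:30022/repos/conf.git
git -C "$tmp" remote get-url origin
# -> ssh://git@node1.lan:30022/repos/conf.git
```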

ArgoCD detects the change within a few minutes and syncs. No internet required. The whole thing is intentionally minimal--no database, no accounts, no webhooks. Just git over SSH for writes and HTTP for reads.

Repository Organization

I reorganized the config repo for GitOps. Application manifests are grouped by namespace:
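The layout, roughly--directory names beyond the ones mentioned in this post are assumptions:

```
f3s/
├── argocd-apps/          # Application manifests, grouped by namespace
│   ├── cicd/             # argocd, git-server, argo-rollouts, ...
│   ├── default/          # miniflux, wallabag, radicale, ...
│   └── monitoring/       # prometheus, loki, tempo, alloy, ...
├── miniflux/             # per-app Helm charts (unchanged)
├── prometheus/
│   └── manifests/        # raw manifests with sync wave annotations
└── ...
```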

The per-app directories (miniflux, prometheus, etc.) stayed the same--ArgoCD just points at the existing Helm charts. The main addition is the `argocd-apps/` tree and `manifests/` subdirectories for complex apps.

Migrating an App: Miniflux as Example

I migrated all apps one at a time. Same procedure for each--here's miniflux as an example.

Before ArgoCD, the Justfile looked like this:
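A sketch of the pattern--target names and flags are assumptions, but it was a thin wrapper around the Helm lifecycle:

```just
# Pre-ArgoCD: imperative Helm lifecycle, run by hand from the workstation.
namespace := "default"

install:
    helm install miniflux . --namespace {{namespace}}

upgrade:
    helm upgrade miniflux . --namespace {{namespace}}

uninstall:
    helm uninstall miniflux --namespace {{namespace}}
```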

Workflow: edit chart, run `just upgrade`, hope you didn't forget anything.

I created an Application manifest--this tells ArgoCD where the Helm chart lives and how to sync it:
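A sketch of it--the in-cluster repoURL, chart path, and namespaces are assumptions, but the shape is the standard ArgoCD Application resource:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: miniflux
  namespace: cicd          # Application objects live where ArgoCD runs
spec:
  project: default
  source:
    repoURL: http://git-server.cicd.svc.cluster.local/conf.git
    targetRevision: HEAD
    path: f3s/miniflux     # the existing, unchanged Helm chart
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:
      selfHeal: true       # revert manual tweaks
      prune: true          # delete resources removed from Git
```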

Then applied it:
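A single apply registers the app with ArgoCD (the file path is an assumption matching the layout above):

```shell
kubectl apply -f argocd-apps/default/miniflux.yaml
```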

About 10 minutes, zero downtime. ArgoCD saw that the running resources already matched the Helm chart in Git and just adopted them.

After that, the Justfile is just utility commands--no more install/upgrade/uninstall:
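Something like this, using the `argocd` CLI (target names are assumptions; login setup omitted):

```just
# Post-ArgoCD: no lifecycle targets, just conveniences.
sync:
    argocd app sync miniflux

diff:
    argocd app diff miniflux

status:
    argocd app get miniflux
```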

New workflow: edit chart, commit, push. ArgoCD picks it up within a few minutes. Run `just sync` if you're impatient.

Migration Order

I started with the simplest services (miniflux, wallabag, radicale, etc.)--apps with straightforward Helm charts and no complex dependencies. This let me validate the pattern before touching anything critical.

After that: infrastructure apps (registry, cert-manager, pkgrepo, traefik-config), then the monitoring stack (tempo, loki, alloy, and finally prometheus--the most complex one), and last the CI/CD tools (git-server, argo-rollouts).

Complex Migration: Prometheus Multi-Source

Prometheus was the tricky one--it combines an upstream Helm chart with a bunch of custom manifests (recording rules, dashboards, persistent volumes, a post-sync hook to restart Grafana).

ArgoCD's multi-source feature made this manageable:
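A sketch of the multi-source Application. The `sources:` list and the `ref:`/`$values` mechanism are real ArgoCD features; the specific chart, version, and paths are assumptions:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: prometheus
  namespace: cicd
spec:
  project: default
  sources:
    # 1. Upstream chart (assuming kube-prometheus-stack here)
    - repoURL: https://prometheus-community.github.io/helm-charts
      chart: kube-prometheus-stack
      targetRevision: "65.*"
      helm:
        valueFiles:
          - $values/f3s/prometheus/values.yaml
    # 2. Values file from the config repo, referenced as $values above
    - repoURL: http://git-server.cicd.svc.cluster.local/conf.git
      targetRevision: HEAD
      ref: values
    # 3. Custom manifests: recording rules, dashboards, PVs, hooks
    - repoURL: http://git-server.cicd.svc.cluster.local/conf.git
      targetRevision: HEAD
      path: f3s/prometheus/manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: monitoring
```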

The `prometheus/manifests/` directory has 13 files. Each one carries a sync wave annotation that controls when it gets deployed.

Sync Waves

By default, ArgoCD deploys everything at once in no particular order. Fine for simple apps, but Prometheus breaks--a PVC can't bind if the PV doesn't exist yet, and a PrometheusRule can't be created if the CRD hasn't been registered.

Sync waves fix this. You slap an annotation on each resource:
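The annotation itself (this is ArgoCD's real annotation key):

```yaml
metadata:
  annotations:
    argocd.argoproj.io/sync-wave: "0"   # lower waves deploy first
```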

ArgoCD deploys all wave 0 resources first, waits until they're healthy, then moves to wave 1, waits again, and so on. Resources without the annotation default to wave 0.

For the Prometheus stack, the waves look like this:
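I won't reproduce all 13 files, but based on the ordering constraints described here, the breakdown is roughly (exact wave assignments are assumptions, apart from the Grafana hook at wave 10):

```
wave 0:  PersistentVolumes, Secrets           # must exist before anything binds
wave 1:  Prometheus operator (from the chart) # registers the CRDs
wave 2:  PVCs, PrometheusRules, dashboards    # need PVs and CRDs in place
wave 10: Grafana restart Job (PostSync hook)  # picks up new datasources
```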

ArgoCD also supports lifecycle hooks (`PreSync`, `Sync`, `PostSync`) that run Jobs at specific points. The Grafana restart hook runs after every sync so Grafana picks up updated datasources and dashboards:
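A sketch of that hook Job--the hook annotations are ArgoCD's real ones, while the image, ServiceAccount, and deployment names are assumptions:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: grafana-restart
  annotations:
    argocd.argoproj.io/hook: PostSync               # run after each sync
    argocd.argoproj.io/hook-delete-policy: HookSucceeded
    argocd.argoproj.io/sync-wave: "10"
spec:
  template:
    spec:
      serviceAccountName: grafana-restarter         # needs RBAC for restarts
      restartPolicy: Never
      containers:
        - name: restart
          image: bitnami/kubectl:latest
          command: ["kubectl", "rollout", "restart",
                    "deployment/grafana", "-n", "monitoring"]
```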

The Result

All 30 applications across 5 namespaces, synced and healthy:

ArgoCD managing all 30 applications in the f3s cluster

What Changed Day-to-Day

The practical difference is pretty big: deploys are now edit, commit, push instead of SSH'ing somewhere to run Helm; manual tweaks get reverted automatically instead of lingering as drift; and the web UI shows the health and sync state of every app at a glance.

Challenges Along the Way

Helm Release Adoption

When ArgoCD starts managing resources that were originally deployed by Helm, any mismatch between the rendered chart and what's running shows up as a diff--and with self-heal enabled, ArgoCD starts rewriting live resources. Fix: make sure the Application manifest renders the chart with exactly the values the Helm release was installed with. ArgoCD then sees no diff and adopts the resources in place.

PersistentVolumes

PVs are cluster-scoped, and many of my Helm charts created them with `kubectl apply` outside of Helm. For simple apps I moved PV definitions into the Helm chart templates. For complex apps like Prometheus, I used the multi-source pattern with PVs in a separate `manifests/` directory at sync wave 0.

Secrets

Secrets shouldn't live in Git as plaintext. For now, I create them manually with `kubectl create secret` and reference them from Helm charts. ArgoCD doesn't manage the secrets themselves. Works, but isn't fully declarative--External Secrets Operator is on the list.

Grafana Not Reloading

After updating datasource ConfigMaps, Grafana wouldn't notice until the pod was restarted. The PostSync hook (the Grafana restart Job in sync wave 10) handles this automatically now.

Prometheus Multi-Source Ordering

Without sync waves, Prometheus resources deployed in random order and things broke. PVs before PVCs, secrets before the operator, recording rules after the CRDs. Adding sync wave annotations to everything in `prometheus/manifests/` fixed it.

Wrapping Up

The migration took a couple of days, doing one or two apps at a time. The result: 30 applications across 5 namespaces, all managed declaratively through Git. Push a change, it deploys. Break something, `git revert`. Cluster dies, rebuild from the repo.

All the config lives here:

codeberg.org/snonux/conf/f3s

ArgoCD Application manifests organized by namespace:

codeberg.org/snonux/conf/f3s/argocd-apps

I can't imagine going back to running Helm commands manually.

Other *BSD-related posts:

2026-04-02 f3s: Kubernetes with FreeBSD - Part 9: GitOps with ArgoCD (You are currently reading this)

2025-12-14 f3s: Kubernetes with FreeBSD - Part 8b: Distributed Tracing with Tempo

2025-12-07 f3s: Kubernetes with FreeBSD - Part 8: Observability

2025-10-02 f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments

2025-07-14 f3s: Kubernetes with FreeBSD - Part 6: Storage

2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network

2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs

2025-02-01 f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts

2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation

2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage

2024-04-01 KISS high-availability with OpenBSD

2024-01-13 One reason why I love OpenBSD

2022-10-30 Installing DTail on OpenBSD

2022-07-30 Let's Encrypt with OpenBSD and Rex

2016-04-09 Jails and ZFS with Puppet on FreeBSD

E-Mail your comments to `paul@nospam.buetow.org` :-)


Proxied content from gemini://foo.zone/gemfeed/2026-04-02-f3s-kubernetes-with-freebsd-part-9.gmi (external content)
