My first k8s build log - Flux repo layout

Naming and organizing things is hard
kubernetes
talos
Published

January 26, 2026

Introduction

Continuing my rebooted cluster setup series, let’s talk about how I structured my repository to manage assets in flux. This is not going to be a flux tutorial (I don’t feel qualified for that), but I will try to explain the design decisions I made and link to relevant docs.

My repo structure is heavily influenced by Joryirving’s home-ops, which in turn is heavily based on onedr0p’s cluster-template. If you just want to get started with something, I’d almost certainly recommend looking at those rather than copying what I did.

Goals

The layout of my repository shapes a lot of how I’ll be able to manage my cluster going forward. I’ve gone through a few refactors already, and I hope this is the final (or close to final) incarnation of the general structure.

First, I have multiple clusters to manage: a dev cluster and a prod cluster. The idea is for them to be configured very similarly, but with dev as a place for me to test out new services, or just updates/patches to existing ones. Having already hit some breaking changes just while testing things out, I’m relatively sure I’m willing to accept the added administrative burden of patching and updating two clusters in exchange for the extra stability of my prod cluster. After all, one of the reasons I wanted to move from a single-node docker environment to kubernetes was reliability.

With this in mind, I want it to be easy to keep the resources in my clusters in sync, and I want to minimize the amount of code I’m maintaining two copies of. However, I do need to be able to introduce changes to one cluster but not the other. One related requirement, which I don’t see as much in other repositories, is being able to modify the version of any service I’m running independently across clusters. If there are to be discrepancies between clusters, I want them to be as easy to reason about and reconcile as possible. Easy, right? Let’s see what I did.

Basic layout tour

To give the lay of the land before I start talking about components, here’s a simplified tree view of the kubernetes part of my homelab repo:

.
├── app-flux-kustomizations
│   └── [dev/prod]
│       ├── apps
│       │   ├── [app 1].yaml
│       │   ├── ...
│       └── platform
│           ├── [app 1].yaml
│           ├── cluster-configmap.yaml
│           ├── ...
├── apps
│   ├── [app 1]
│   │   ├── base
│   │   │   ├── [manifest 1].yaml
│   │   │   ├── ...
│   │   │   └── kustomization.yaml
│   │   └── [dev/prod]
│   │       ├── [patches or cluster specific manifests]
│   │       └── kustomization.yaml
├── clusters
│   └── [dev/prod]
│       ├── apps.yaml
│       └── platform.yaml
├── components
│   ├── [some shared component]
│   │   ├── kustomization.yaml
│   │   └── [manifest 1].yaml

Clusters folder

As part of my bootstrap script I set up flux with a git repository resource and a flux instance (along with a secret for GitHub access), which tells it to apply and synchronize any resources found under the corresponding cluster folder in the repo. These resources actually live under the apps folder, so once things are bootstrapped flux manages itself in addition to the other apps in the cluster. Trippy, eh?
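
As a rough sketch of what that bootstrap resource looks like (this assumes the flux-operator’s FluxInstance API, and the repo URL, path, and secret name here are placeholders rather than my real values):

# Illustrative FluxInstance pointing flux at one cluster’s folder.
apiVersion: fluxcd.controlplane.io/v1
kind: FluxInstance
metadata:
  name: flux
  namespace: flux-system
spec:
  distribution:
    version: "2.x"
    registry: ghcr.io/fluxcd
  sync:
    kind: GitRepository
    url: https://github.com/example/homelab.git  # placeholder repo
    ref: refs/heads/main
    path: kubernetes/clusters/dev
    pullSecret: github-auth  # the GitHub access secret from bootstrap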

From there I have two root flux kustomizations: one for core platform components, and one for everything else. The line between platform and apps is a bit blurry. Really, what I want in platform is everything I’d need to have running in order to recover backups onto the cluster, without any of the services that would be trying to write to those places running at the same time. In the event of a cluster rebuild I can mark the apps.yaml resource as suspended, spin up the cluster, perform my recovery activities, and then proceed. At least I think so; I haven’t tested this yet, and it also doesn’t cover database recoveries, which I expect will be different. Anyway, that’s why there are just a couple of resources here. I also apply the top-level post build variable substitution here to minimize how much I have to modify between clusters in downstream stages. The kustomization in apps.yaml is set to depend on platform.yaml, so none of the apps under it will get synchronized until the platform stuff is ready.

Both apps.yaml and platform.yaml point to the respective cluster folder under app-flux-kustomizations.
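
A sketch of what one of these root kustomizations might look like; the paths, names, and variables here are illustrative:

# Illustrative clusters/dev/apps.yaml.
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: apps
  namespace: flux-system
spec:
  interval: 10m
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system  # the source created at bootstrap (name illustrative)
  path: ./kubernetes/app-flux-kustomizations/dev/apps
  dependsOn:
    - name: platform  # hold apps back until platform is ready
  postBuild:
    substitute:
      CLUSTER: dev  # hypothetical top-level substitution

Suspending it for a rebuild is then just flux suspend kustomization apps, or flipping spec.suspend to true in git.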

app-flux-kustomizations folder

This is the next level of abstraction. Most of the files in this folder are also flux kustomizations, each pointing to an app in the apps directory. There’s also a cluster-configmap.yaml file, which contains common cluster-specific substitutions that all apps are likely to need: things like what their subdomain or cluster-specific prefixes should be.
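
Something like this, with made-up values:

# Illustrative cluster-configmap.yaml.
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-config
  namespace: flux-system
data:
  CLUSTER_NAME: dev
  CLUSTER_DOMAIN: dev.example.com  # hypothetical subdomain for services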

There are two subdirectories here, corresponding to the platform and apps split defined above. Again, that distinction is less about function and more about the point in the cluster’s lifecycle at which I want things to come online. For example, prometheus is conceptually more a platform component than an app: it’s gathering metrics and other observability data, not performing a function end users care about. However, it needs its own persistent storage, and I might want to recover that in the event of a cluster failover, so from that perspective it’s more like an app.

Besides the stuff in the configmap, I try to do as little per-environment modification as I can at this layer; most of the cluster-specific modifications I keep in the overlays in the apps folder. These extra layers of kustomizations exist mainly to make variable substitution and dependency management easier.
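
Putting those pieces together, a per-app flux kustomization at this layer might look something like this (all names and paths are placeholders):

# Illustrative app-flux-kustomizations/dev/apps/[app 1].yaml.
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: app-1
  namespace: flux-system
spec:
  interval: 10m
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system
  path: ./kubernetes/apps/app-1/dev  # the app’s overlay, described next
  dependsOn:
    - name: some-other-app  # hypothetical ordering dependency
  postBuild:
    substituteFrom:
      - kind: ConfigMap
        name: cluster-config  # the shared cluster configmap above
    substitute:
      APP: app-1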

apps folder

This is where the majority of the code actually lives. This section uses regular kustomize overlays, taking the common manifests and other resources from the base subdirectory of a given app and applying patches and cluster-specific manifests from the [dev/prod] folder.
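
A sketch of what an overlay’s kustomization.yaml might contain, with hypothetical file names:

# Illustrative apps/app-1/dev/kustomization.yaml.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../base
  - ocirepository.yaml  # cluster-pinned chart version, shown below
patches:
  - path: helmrelease-patch.yaml  # hypothetical dev-only tweaks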

For example, as mentioned, I want to be able to patch apps independently between clusters. If an app is installed from a helm chart, I put an OCIRepository resource pinned to a specific version in the overlay folder for each cluster. I can then bump the version specified in each manifest independently to roll out changes to my dev cluster before letting them into prod.
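
The pinned resource is then something like this, with a made-up chart location:

# Illustrative apps/app-1/dev/ocirepository.yaml.
apiVersion: source.toolkit.fluxcd.io/v1
kind: OCIRepository
metadata:
  name: app-1
spec:
  interval: 1h
  url: oci://ghcr.io/example/charts/app-1  # placeholder chart URL
  ref:
    tag: 1.2.3  # bumped independently in each cluster’s overlay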

For apps where I’m explicitly defining images, I actually set the versions in the flux kustomization from the previous section instead, because its images patching is much cleaner.
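
That just means setting the images field on the flux kustomization from the previous layer, along these lines (the image name and tag are placeholders):

# Excerpt (illustrative): flux kustomizations support the kustomize
# images transformer directly via spec.images.
spec:
  images:
    - name: ghcr.io/example/app-1  # placeholder image
      newTag: 2.0.1  # bumped independently per cluster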

I’ve got renovate configured to include the parent folder in the names of the branches and PRs it creates, so having all my versions in files directly under a [dev/prod] folder ensures separate PRs for each cluster and makes it easy to tell which cluster an upgrade applies to.

components

This folder contains kustomize components for manifests I’d like to apply, with minor modifications, in multiple apps. The easiest example is databases. Lots of apps need a postgres backend, and the manifests to spin one up (using cnpg in my case) look largely the same for each of them. Rather than copy-pasting a lot of boilerplate and then having to update it across multiple apps whenever anything changes, I make a component with placeholders for the things I’ll want to vary, and then set flux post build variable substitutions for the parts that change, like the app name or the size of the storage.
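
As a sketch, a postgres component might look like this; the cnpg manifest is heavily trimmed and the variable names are placeholders of my own:

# Illustrative components/postgres/kustomization.yaml.
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component
resources:
  - cluster.yaml

# Illustrative components/postgres/cluster.yaml (heavily trimmed).
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: ${APP}-postgres  # filled in by flux post build substitution
spec:
  instances: 2
  storage:
    size: ${DB_SIZE:=5Gi}  # flux substitution supports bash-style defaults

An app then opts in by listing the component under components: in its kustomization.yaml and setting the variables in its flux kustomization.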

Conclusion

I’m still early in actually running this cluster, so we’ll see if this approach sticks. It’s very similar to the repositories I listed in the intro, with the exception of the folder hierarchy being app/environment instead of environment/app. The latter approach would make navigating the code easier and remove the need for the intermediate app-flux-kustomizations folder I have, but at the cost of making it harder to manage per-cluster versioning and app promotion. There’s probably a way I could have made it work, but between how renovate wants to do PR titles and how flux variable substitution doesn’t propagate between flux kustomizations, I found this approach easier. I guess we’ll see if that choice comes back to haunt me.