Bridging the Gap: GitOps for Network Engineers - Part 1


Intro
Over the past 6–9 months, my career and perspective on technology have shifted dramatically. I’ve found myself drifting away from a traditional networking mindset and increasingly seeing everything through the lens of applications, and treating it accordingly. To meet the demands of my current role, a colleague introduced me to the concept of GitOps and suggested we integrate it into our network automation workflows. At the time, I had no idea what GitOps even was. But a few months later, I’m wondering why I didn’t adopt this approach much earlier in my career. Within that short span, I built a complete platform capable of hosting all of our network automation tools—NetBox, Nautobot, custom Python scripts, databases, monitoring stacks, and even Clabernetes (containerlab) for running virtual topologies. All self-contained, all deployed declaratively, and all benefiting from the GitOps principles I’ll be breaking down throughout this article.
So… what is GitOps?
At its core, GitOps is a way of managing infrastructure and applications using Git as the single source of truth. Think of it like this: instead of logging into systems and manually making changes (we’ve all been there), you define your desired state in code (YAML, JSON, whatever floats your repo) and store it in Git. From there, automation tools take over, constantly reconciling what's deployed with what lives in the repo. If something drifts or breaks, the system can alert you, fix it, or at least give you a clean way to roll back.
In traditional terms, it’s like having a version-controlled config file for every part of your infrastructure, and having robots to deploy it all for you, exactly how you wrote it.
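To make that concrete, here’s a minimal sketch of the kind of declarative manifest that would live in the repo. The app, namespace, and image names are placeholders, not something from a real deployment:

# Desired state lives in Git; the GitOps controller makes the cluster match it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: config-renderer        # hypothetical network-automation tool
  namespace: automation
spec:
  replicas: 2                  # bump this in Git, merge, and the "robots" roll it out
  selector:
    matchLabels:
      app: config-renderer
  template:
    metadata:
      labels:
        app: config-renderer
    spec:
      containers:
        - name: config-renderer
          image: ghcr.io/example/config-renderer:1.4.2   # placeholder image tag

Changing replicas from 2 to 3 is just another commit and pull request; nobody SSHes into anything.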
Why Should Network Engineers/Orgs Care?
Historically, network automation has been about scripts, Python, maybe some Ansible sprinkled on top. But the problem with that approach is scale, visibility, and consistency. You might have 10 engineers all running different scripts in slightly different ways. Who knows what changed and when?
GitOps brings the same rigor DevOps teams apply to applications into the world of network automation. Imagine managing Nautobot or NetBox deployments through Git. Want to roll out a plugin, change a config, or update a container? You create a pull request, get it reviewed, and once it’s merged, it’s live in production (via ArgoCD, Flux, or whatever your GitOps controller is).
Even beyond the apps themselves, this mindset works for deploying the tools that generate your configs, run validations, or even trigger device changes. You're turning networking workflows into a pipeline. And once that happens, you get auditability, consistency, and less of that "it works on my machine" nonsense.
This is Part 1 of a series aimed at helping network engineers get hands-on with GitOps and understand the core components involved in building a modern automation platform. In this first part, we’ll focus on the foundational concepts of GitOps, the tools that power it, and walk through installing ArgoCD as the GitOps engine for our platform. Even if you're not deploying anything just yet, the goal here is to bridge the knowledge gap, so network engineers can better understand the deployment process and begin delivering their own code and tools in a structured, scalable way. At the very least, this knowledge helps you communicate more effectively with DevOps and Platform Engineering teams, making it easier to explain what you need when it comes to production-ready deployments.
In Part 2, we’ll pick up by deploying core infrastructure components—like MetalLB, Traefik, persistent storage, and secrets management—using the GitOps workflow established here.
For those interested in exploring the configurations and examples discussed in this article, all the code and resources are available in my GitHub repository: kubernetes-gitops-playground.
This repository serves as a comprehensive reference for setting up a GitOps-driven Kubernetes environment. It includes structured directories for applications like Nautobot, configurations for ArgoCD, and various Kubernetes add-ons. The repository is designed to be a practical guide for network engineers aiming to implement GitOps methodologies in their infrastructure.
Feel free to explore the repository to gain insights into the practical implementation of the concepts discussed here.
The GitOps Ecosystem: A Network Automation Perspective
Here’s a high-level breakdown of the components I use to power my GitOps-driven automation platform. This list reflects a practical, production-minded approach to deploying and managing applications, especially for network engineers looking to build, scale, or just better understand modern automation workflows.
Each component below plays a specific role in the platform, helping ensure security, flexibility, repeatability, and operational clarity.
Kubernetes Cluster (Obviously)
The foundation of everything. Kubernetes orchestrates and runs your containerized applications, managing scaling, availability, and resource utilization.
Git Provider (GitHub)
The single source of truth. All manifests, Helm values, and Kustomize overlays live here. Every change is tracked, reviewed, and version-controlled.
ArgoCD
This is the GitOps engine of the platform. It continuously syncs application state from Git repositories into the cluster, ensuring what’s deployed always matches what’s defined in code.
Cluster Load Balancing (MetalLB)
MetalLB enables load-balanced services in bare-metal or home lab environments by assigning external IPs to services that require them.
Traefik (IngressRoute)
Traefik is a powerful and flexible ingress controller that routes external traffic into your Kubernetes cluster using custom IngressRoute CRDs. It gives you fine-grained control over how services are exposed, supports TLS, and integrates smoothly with GitOps workflows.
Note: You can use NodePorts if you’re not ready for an ingress controller and want a simpler setup, but that approach isn’t ideal for production use and lacks the flexibility and security that Traefik provides.
Persistent Storage (Rook + Ceph)
Apps like network automation platforms often require persistent volumes. Rook with Ceph provides resilient, scalable storage within the cluster, critical for stateful services.
Secrets Vault (e.g., HashiCorp Vault)
A secure place to store sensitive information like API tokens, database credentials, and TLS certificates, outside the cluster and outside of Git.
Secrets Operator (e.g., External Secrets)
This bridges the gap between Vault and Kubernetes. It watches your external secret store and injects the data into Kubernetes Secrets based on declarative manifests.
Kubernetes Secrets
The native format for storing and referencing secrets inside Kubernetes workloads. These are the final form of secrets that your apps consume at runtime.
Helm & Custom Values
Helm acts as the package manager for Kubernetes, simplifying the deployment of complex, production-ready applications through reusable charts. By supplying custom values, you can easily override default configurations, tuning things like ports, storage, resource limits, and app-specific settings to fit your environment without modifying the underlying chart.
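As a quick illustration, a custom values file only needs to contain the settings you want to override. The keys below are generic examples; the actual keys depend on the chart you’re deploying:

# my-values.yaml – only the overrides, never a full copy of the chart defaults
service:
  type: ClusterIP        # example: don't expose the app directly
persistence:
  enabled: true
  size: 10Gi             # example: request a larger volume
resources:
  limits:
    memory: 1Gi          # example: cap memory usage

You would then reference it at install time with something like helm install my-app <repo>/<chart> -f my-values.yaml, or point ArgoCD at the file so the override itself lives in Git.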
Kustomize
Kustomize lets you customize Kubernetes manifests without copying or editing the original files. It uses overlays to manage environment-specific changes, like different configs for dev, test, or prod. This helps keep your Git repo organized and clean.
You can also use Kustomize alongside Helm by referencing rendered Helm charts as a base, then layering custom configs on top, giving you the best of both tools.
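For example, a production overlay can be nothing more than a small kustomization file that reuses a shared base and patches only what differs. The directory and app names here are illustrative:

# overlays/prod/kustomization.yaml – reuse the base, patch only what differs in prod
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base                 # shared manifests live in base/
patches:
  - path: replica-count.yaml   # e.g., prod runs more replicas than the lab
    target:
      kind: Deployment
      name: nautobot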
Requirements & Housekeeping
Before we dive into the individual components of the platform, there are a few things that need to be in place:
Kubernetes Cluster: I won’t be covering how to stand up a Kubernetes cluster in this post. If you need help with that, check out this earlier article I wrote that walks through the setup. This also isn’t a Kubernetes 101 guide; you’ll need a solid understanding of how Kubernetes works, especially common resource types like Deployments, Services, Secrets, ConfigMaps, and PersistentVolumeClaims. kubectl and helm should also be installed and usable against the cluster.
Git & GitHub (or another Git provider): This isn’t a Git 101 tutorial. You’ll need some working knowledge of Git and GitHub, and you should already have an account set up. If you’re using another provider (like GitLab or Bitbucket), that’ll work too.
Persistent Storage: While persistent storage is part of the overall stack, this post won’t go deep into the setup. I’ll touch on what’s needed to support the apps, but I’m saving the storage deep dive for a separate article.
Linux & Bash: You should be comfortable using Linux and working in a bash shell. There will be commands, file edits, and troubleshooting that assume you’re not new to the terminal.
IDE (like VSCode): You’ll need a code editor to work with YAML, Helm values, and general GitOps structure. VSCode is a solid choice, it has excellent Git integration and Kubernetes plugins that can speed up your workflow.
My Setup
My cluster is a three-node Rocky Linux 9 setup, the same one used in my other blog posts. Most other major distributions should work much the same, but if you’re following along closely, Rocky and Red Hat are the better OS options.
If you’re good on those fronts, let’s keep going.
ArgoCD: Your GitOps Automation Engine
Now that your Kubernetes cluster is built and your GitHub account is ready, it's time to dive into the heart of GitOps: ArgoCD.
What is ArgoCD?
ArgoCD (short for Argo Continuous Delivery) is a GitOps controller for Kubernetes. It continuously monitors Git repositories and ensures the live state of your cluster matches the declared state in Git. If something drifts, like someone manually edits a resource, ArgoCD can detect that and reconcile it back to the desired state stored in Git. It’s declarative, automated, and very production-friendly.
In simple terms: Git is the source of truth, and ArgoCD makes sure your cluster does what Git says.
Where ArgoCD Fits in the GitOps Model
GitOps workflows revolve around a few key principles:
Version control as truth: All manifests live in Git.
Pull-based automation: Kubernetes doesn’t wait for you to push changes, it pulls from Git.
Observability and rollback: You can track exactly what changed, when, and by whom. Rolling back is as easy as reverting a commit (see the quick example below).
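For instance, undoing a bad change is just another commit that the controller picks up; the commit hash below is a placeholder:

git revert <bad-commit-sha>   # creates a new commit that undoes the change
git push origin main          # the GitOps controller sees the new commit and syncs the cluster back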
ArgoCD is the engine that powers this model. It watches your repo, compares it to what’s actually running in your cluster, and syncs everything up, either automatically or on demand. It also gives you a nice web UI, CLI, and API for managing applications and monitoring sync status.
On a personal note—I freaking love ArgoCD! When I was first dipping my toes into GitOps and only had a surface-level understanding of Kubernetes, ArgoCD was an absolute game changer. Being able to visually see every single Kubernetes object that makes up an app, and how they relate to each other, leveled up my Kubernetes knowledge fast. The fact that you can pause, sync, delete, or rebuild individual resources with basically the flip of a switch? Insanely useful. And not having to constantly hammer out kubectl commands just to check logs or dig into the YAML? Crazy time saver! Seriously, it’s one of the most valuable tools in this whole setup, and in tech today.
Installing ArgoCD on Rocky Linux 9
Let’s walk step-by-step through a basic installation of ArgoCD and its CLI. These steps assume you already have:
kubectl configured and pointing to your Kubernetes cluster
helm installed (needed for app creation later)
Root or sudo access on your Rocky Linux 9 system
Step 1: Install ArgoCD into the Cluster
We'll install ArgoCD in its own namespace using the official manifests:
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
This will install all the ArgoCD components: API server, controller, repo server, and UI server.
To confirm the installation was successful, run the below command.
kubectl get pods -n argocd
You should see all the argocd pods in a ‘Running’ state after 30 seconds or so -
NAME                                               READY   STATUS    RESTARTS   AGE
argocd-application-controller-0                    1/1     Running   0          1m
argocd-applicationset-controller-dc47f7989-77ztg   1/1     Running   0          1m
argocd-dex-server-bc9bc7d65-68rxn                  1/1     Running   0          1m
argocd-notifications-controller-5698dbd744-7vmzc   1/1     Running   0          1m
argocd-redis-656948fbd6-zfgjd                      1/1     Running   0          1m
argocd-repo-server-74c4cb6cc5-pnxfv                1/1     Running   0          1m
argocd-server-856f78f5df-cxh9h                     1/1     Running   0          1m
Step 2: Expose the ArgoCD UI
By default, ArgoCD’s API server is only accessible inside the cluster. For testing or lab use, you can expose it using a NodePort or via your ingress controller (like Traefik):
Option A: NodePort (quick and dirty)
kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "NodePort"}}'
Find the port:
kubectl get svc argocd-server -n argocd
NodePorts are usually assigned within the 30000–32767 range. Look for the PORT(S) column in the output; something like 8080:32678/TCP means ArgoCD is accessible on port 32678 of any node in the cluster.
Then access the UI at: http://<node-ip>:<nodeport>
Option B: IngressRoute (we’ll add this later once Traefik is installed)
If you're planning to use Traefik as your ingress controller, you'll eventually want to expose ArgoCD using an IngressRoute. This is the more GitOps-friendly approach because your ingress config, just like everything else, can live in Git and be managed declaratively.
That said, you probably don’t have an ingress controller installed yet, so this option won’t work just yet. No problem, start with the NodePort method for now, and once Traefik is in place, switching over to an IngressRoute is quick and clean. It fits perfectly into the GitOps model and keeps your exposure configs version-controlled along with the rest of your stack.
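As a preview, here’s a rough sketch of the kind of IngressRoute we’ll eventually apply. The hostname, entrypoint name, and TLS handling are assumptions for a lab setup (and the CRD apiVersion varies with the Traefik version), so treat this as a shape to aim for rather than a drop-in config:

apiVersion: traefik.io/v1alpha1   # older Traefik releases use traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: argocd-server
  namespace: argocd
spec:
  entryPoints:
    - websecure                   # assumed HTTPS entrypoint name
  routes:
    - match: Host(`argocd.lab.example.com`)   # placeholder hostname
      kind: Rule
      services:
        - name: argocd-server
          port: 80
  tls: {}   # terminate TLS at Traefik; argocd-server is then typically run with --insecure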
Step 3: Get the Initial Admin Password
The default username is admin. To get the initial password:
kubectl get secret argocd-initial-admin-secret -n argocd \
-o jsonpath="{.data.password}" | base64 -d && echo
Log in via the web UI or CLI using this password.
Step 4: Install the ArgoCD CLI
Install the CLI to interact with ArgoCD from your terminal.
VERSION=$(curl -s https://api.github.com/repos/argoproj/argo-cd/releases/latest \
| grep tag_name | cut -d '"' -f 4)
curl -sSL -o argocd "https://github.com/argoproj/argo-cd/releases/download/${VERSION}/argocd-linux-amd64"
chmod +x argocd
sudo mv argocd /usr/local/bin/
Confirm it’s installed:
argocd version
Step 5: Log In Using the CLI
argocd login <ARGOCD-SERVER> --username admin --password <PASSWORD>
Use the hostname or IP that maps to your argocd-server service.
Alternative Installation Method (GitHub Actions Runner)
If you’ve followed along this far, you’re probably realizing we could automate a good chunk of this platform bootstrapping. And yes, we absolutely can.
I've created a GitHub Actions workflow that installs ArgoCD (and its CLI), exposes it, configures custom admin users, and even adds the Kubernetes cluster back into ArgoCD, all automatically. This method is particularly useful if you're managing multiple clusters or frequently rebuilding your platform. Feel free to use this for assistance in setting up your own runner. Here’s how it works.
Requirements
To use this workflow, you’ll need:
A self-hosted GitHub Actions runner that has access to your Kubernetes cluster
kubectl and python3.12 installed on the runner
A valid kubeconfig file for the cluster you're targeting
GitHub repository secrets and variables configured properly:
ARGOCD_ADMIN_USER, ARGOCD_ADMIN_PASSWORD – default admin login
ARGOCD_MY_ADMIN_USER, ARGOCD_MY_ADMIN_PASSWORD – a secondary, more permanent admin account
PAT_TOKEN – GitHub personal access token for storing encrypted secrets per environment
GitHub Actions environment variables like ARGOCD_PORT and ARGOCD_SERVER (the IP or DNS hostname of a Kubernetes control-plane node)
Supporting Workflows Worth Noting
If you're wondering how this all connects behind the scenes, the repo also includes a few helper workflows that make this setup much smoother.
Kubeconfig Setup & Storage: There's a workflow that helps you extract your kubeconfig file and securely store it in GitHub as a repository variable or secret. This is crucial for giving your self-hosted runner authenticated access to your cluster during automated jobs.
kubectl Installation & Verification: Another workflow ensures kubectl is installed and properly configured on your self-hosted runner. It also includes a quick test to confirm the runner can talk to the cluster, basically your first "sanity check" before deploying anything.
These smaller workflows aren’t flashy, but they’re essential in keeping everything reliable, reproducible, and GitOps-friendly.
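For a sense of how small that sanity check can be, here’s a hypothetical job step (the step name is made up, not lifted from the repo):

# Hypothetical step on the self-hosted runner – confirms kubectl can reach the cluster
- name: Cluster sanity check
  run: |
    kubectl config current-context
    kubectl get nodes -o wide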
Workflow Breakdown
Here's what the job actually does (a trimmed sketch of the workflow follows the list):
Checkout the Repo: Grabs your current Git repository so that scripts and manifests can be used during the workflow.
Set the Environment: Dynamically sets the target environment (e.g., lab or prod) based on your manual trigger input. This is used for cluster context switching and naming.
Configure kubectl: Updates the active Kubernetes context based on the selected environment so the workflow knows which cluster to operate on.
Install Dependencies: Sets up a Python virtual environment and installs pynacl, which is used later for encrypting the ArgoCD password.
Install ArgoCD: Creates the argocd namespace (if it doesn't exist) and applies the official ArgoCD manifests to install the full stack into your cluster.
Install the ArgoCD CLI: Downloads and installs the latest CLI version for use in later steps like login, user config, and cluster registration.
Wait for ArgoCD to Come Online: Uses kubectl wait to ensure the argocd-server deployment is available before proceeding.
Expose ArgoCD via NodePort: Temporarily exposes the ArgoCD UI using a NodePort service on the configured port. This makes it accessible during early setup (before Ingress is configured).
Extract the Initial Admin Password: Pulls the default ArgoCD admin password from the Kubernetes secret and stores it as a masked GitHub environment variable.
Encrypt and Store the Admin Password in GitHub Secrets: Uses GitHub's public key API and a Python script to encrypt the ArgoCD admin password and securely store it in the environment-specific GitHub Secrets.
Log into ArgoCD with Default Admin: Authenticates with ArgoCD using the default credentials and ensures the CLI is working.
Create a Custom Admin User: Edits the argocd-cm ConfigMap to define a new admin-level account.
Assign RBAC Permissions to the New User: Updates the argocd-rbac-cm ConfigMap to give your new user full admin access.
Set a Password for the New User: Uses the CLI to set the new admin user's password securely.
Verify the New Admin Login: Logs in with the new user credentials to confirm everything's configured properly.
Register the Cluster with ArgoCD: Ensures the current Kubernetes cluster is registered with ArgoCD, allowing future applications to target it via the ArgoCD UI or CLI.
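To give you a feel for the shape of it, here’s a heavily trimmed sketch of what such a workflow can look like. The step names, inputs, and exact commands are illustrative; the real workflow in the repo has more steps (user creation, password encryption, cluster registration) and proper error handling:

name: install-argocd
on:
  workflow_dispatch:
    inputs:
      environment:
        description: Target environment (e.g., lab or prod)
        required: true
jobs:
  install:
    runs-on: self-hosted
    environment: ${{ github.event.inputs.environment }}
    steps:
      - uses: actions/checkout@v4
      - name: Install ArgoCD
        run: |
          kubectl create namespace argocd --dry-run=client -o yaml | kubectl apply -f -
          kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
      - name: Wait for ArgoCD to come online
        run: kubectl wait deployment/argocd-server -n argocd --for=condition=Available --timeout=300s
      - name: Expose the UI via NodePort
        run: kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "NodePort"}}'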
Why This Rocks
Instead of manually copying YAML and running a dozen kubectl commands, this workflow automates the whole thing, and tracks it all in Git. It’s GitOps deploying GitOps, and yes, I’m into that level of inception.
You can trigger it manually for different environments (e.g., lab vs prod), and the entire setup becomes repeatable, shareable, and documented as code.
ArgoCD is now up and running (hopefully). You should be able to access the login page using the IP (or hostname) of one of your Kubernetes control-plane nodes plus the NodePort assigned earlier.
Go ahead and log in using the admin credentials, or the credentials you created via the Actions workflow. You should see an empty applications list.
Initial Setup
Configuring Your Cluster within ArgoCD
Before deploying anything, ArgoCD needs to know which Kubernetes cluster(s) it can target. If you installed ArgoCD into the same cluster you're working in, there's good news: ArgoCD automatically configures access to that cluster. It will show up as in-cluster and is ready to go out of the box.
But if you're managing a remote cluster, or skipped using the automated GitHub Actions workflow I showed earlier, you’ll need to manually register the cluster using the ArgoCD CLI. This is required because you cannot add a new cluster through the ArgoCD UI.
Step 1: Login to the ArgoCD CLI
Before you can register a cluster, you need to authenticate using the CLI:
argocd login <ARGOCD_SERVER>:<PORT> --username admin --password <PASSWORD> --insecure
Replace the values with your ArgoCD server address and credentials. The --insecure flag is common during lab/testing since you might not have valid TLS configured yet.
Step 2: Register the Cluster
Once logged in, you can add the Kubernetes cluster currently pointed to by kubectl:
argocd cluster add <kube-context-name>
You can find your context name with:
kubectl config current-context
This command sets up a service account and RBAC within the target cluster, and registers it inside ArgoCD. Once complete, the cluster will appear in the UI and can be used for application deployments.
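To verify the registration worked, you can list the clusters ArgoCD knows about from the CLI:

argocd cluster list

The newly added cluster (or in-cluster for a local install) should show up alongside its API server URL.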
Adding and Configuring a New Project (via GUI)
Projects in ArgoCD are used to organize applications, enforce boundaries, and apply access rules. They’re especially useful when you want to group related apps, like having one project for core platform components and another for automation tools.
Step-by-Step: Create a New Project in the UI
Login to the ArgoCD UI
Use the NodePort or Ingress you’ve set up earlier to access the web UI. Log in with your admin or custom user credentials.
Go to “Settings” → “Projects”
In the sidebar, click Settings, then select Projects. Click + NEW PROJECT to create a new one.
Name Your Project
Give your project a meaningful name, like platform-core or network-tools.
Define Destinations
These are the clusters and namespaces that apps in this project are allowed to deploy to. If you're using the default in-cluster setup, your server URL will be https://kubernetes.default.svc.
Server: https://kubernetes.default.svc
Namespace: e.g., default, argocd, or tools. For a basic setup, just set it to * (for all namespaces).
Configure Role-Based Access and Restrictions
When you're setting up a new project in ArgoCD, you'll see options to define what types of Kubernetes resources the project is allowed to manage. This is where you can lock things down pretty tightly, but for basic setup and initial testing, it’s easiest to just allow everything and refine later once things are working.
Here’s what that looks like:
Cluster Resource Allow List: Kind: *, Group: *
Cluster Resource Deny List: Leave this empty
Namespace Resource Allow List: Kind: *, Group: *
Namespace Resource Deny List: Leave this empty
Resource Monitoring: Move the slider to ‘Enabled’
Click “Create”
Your project is now set up and ready to have apps assigned to it.
From here, you’ll be able to define Git-based applications, point them at your manifests or Helm charts, and let ArgoCD handle the rest.
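If you’d rather keep even this in Git (which is ultimately where GitOps wants it), the same permissive project can be expressed as a declarative AppProject manifest. Here’s a sketch that mirrors the settings above; the project name and description are just examples:

apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: platform-core
  namespace: argocd
spec:
  description: Core platform components
  sourceRepos:
    - '*'                       # any Git repo; restrict this to your repo later
  destinations:
    - server: https://kubernetes.default.svc
      namespace: '*'            # all namespaces, matching the basic setup above
  clusterResourceWhitelist:
    - group: '*'
      kind: '*'
  namespaceResourceWhitelist:
    - group: '*'
      kind: '*'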
The first applications we’ll deploy with it are the core pieces of our GitOps infrastructure itself, tools like MetalLB for load balancing, Traefik for ingress, and persistent storage components. In other words, we’ll be using GitOps to finish building out the platform that enables GitOps. Poetic, right?
Before we end this Part 1, let's talk about how this all ties back to Git...
Understanding the Repo Structure (and Why Everything Belongs in Git)
One of the core principles of GitOps is keeping everything—infrastructure, applications, configurations, and deployment logic—in version control. The folder layout in my example repo is designed with that in mind. It reflects GitOps best practices: everything is declarative, versioned, and easy to manage or scale over time.
Having a clear and intentional structure not only makes your deployments cleaner, it also simplifies troubleshooting, auditing, onboarding new team members, and extending the platform as your needs grow.
Here’s a quick breakdown of the folders that matter most for this series:
apps/
This is where you’ll find custom Helm values files and Kustomize overlays for each application managed by ArgoCD. Each subdirectory corresponds to a specific app—like MetalLB, Traefik, or ArgoCD itself—and contains the configuration needed to tailor the deployment to your environment. This keeps your app logic cleanly separated and easy to maintain.
argocd-app-manifests/
Contains the ArgoCD Application and AppProject manifests. These define what ArgoCD deploys, where, and from which repo. Managing these separately from app-specific config keeps the logic declarative and helps you track application lifecycle separately from platform logic.
helm-charts/
This folder stores any custom or forked Helm charts that don’t live in an external Helm repo. It gives you a clean place to manage pinned chart versions or make local edits without cluttering the main app or manifest directories.
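To tie the folders together, here’s a minimal sketch of what an Application manifest living in argocd-app-manifests/ might look like, pointing at a per-app directory under apps/. The repo URL, paths, and names are placeholders rather than exact contents of my repo:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: traefik                  # example app
  namespace: argocd
spec:
  project: platform-core         # the project created earlier
  source:
    repoURL: https://github.com/<your-user>/kubernetes-gitops-playground.git
    targetRevision: main
    path: apps/traefik           # per-app values/overlays live under apps/
  destination:
    server: https://kubernetes.default.svc
    namespace: traefik
  syncPolicy:
    automated:
      prune: true
      selfHeal: true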
This layout isn’t just for organization, it’s what enables a GitOps workflow to scale. As your platform grows, this structure makes it easy to maintain a consistent, observable, and testable deployment pipeline across your infrastructure.
Summary & What’s Next
In Part 1, we laid the groundwork for a GitOps-driven automation platform. We covered the key components that make up the stack, walked through what GitOps actually is (without the fluff), and deployed ArgoCD, the engine that brings it all to life.
By now, you should have:
A working Kubernetes cluster
ArgoCD fully installed and accessible via NodePort or Ingress
Logged into the ArgoCD UI
Created your first ArgoCD project and verified it’s configured with the settings described earlier (associated cluster aka ‘Destination’, RBAC/Allowed Lists, enabled Resource Monitoring)
If you’ve made it this far, that’s a huge step forward, especially if you’re coming from a traditional networking background. You’ve already started to shift from manually pushing scripts to building a scalable, Git-driven platform.
But we’re just getting started.
In Part 2, we’ll begin deploying actual infrastructure apps using the GitOps workflow you’ve set up here. We’ll cover MetalLB (for load balancing), Traefik (for ingress), persistent storage with Rook/Ceph, and secrets management with External Secrets and HashiCorp Vault. These hands-on deployments depend on the foundation you just built, so make sure everything is in place before continuing.
Let’s keep building.