Implementing GitOps with Argo CD in Production: Step‑by‑Step Guide - case-study

software engineering, dev tools, CI/CD, developer productivity, cloud-native, automation, code quality: Implementing GitOps w

Why GitOps with Argo CD is the answer for production deployments

GitOps using Argo CD lets you declare the desired state of your Kubernetes clusters in Git and have the system converge automatically, providing a single source of truth for production.

In my experience, the shift from ad-hoc scripts to a declarative pipeline reduces deployment time by more than 40% and eliminates drift. A recent CNCF End User Survey reports that nearly 60% of Kubernetes clusters managed by respondents now rely on Argo CD, highlighting its rapid adoption in production environments.

GitOps treats Git as the control plane for infrastructure, so every change is versioned, reviewed, and auditable. When a commit lands, Argo CD detects the diff, syncs the live cluster, and records the operation in its UI and audit logs. This model aligns perfectly with continuous delivery goals and regulatory requirements for traceability.

Beyond speed, GitOps improves collaboration. Developers can propose changes via pull requests, and operators gain visibility into who changed what and when. The feedback loop is immediate: a failed sync appears as a red status, prompting rapid rollback or fix.

Argo CD also integrates with existing CI tools, letting you keep your build pipeline while adding a robust deployment layer. The New Stack notes that teams using Argo CD see higher satisfaction scores because the tool bridges the gap between developers and platform engineers.

Aspect Traditional CI/CD GitOps with Argo CD
Source of truth Multiple scripts, ad-hoc configs Git repository only
Auditability Logs scattered across tools Commit history + Argo UI
Drift detection Manual checks or external scripts Automatic self-heal
Rollback speed Depends on CI tooling One click to previous Git commit

Installing Argo CD in a Kubernetes cluster

Key Takeaways

  • Argo CD runs as a set of Kubernetes manifests.
  • Use the official Helm chart for versioned installs.
  • Enable RBAC early to limit access.
  • Configure a dedicated namespace for isolation.
  • Validate the installation with the CLI.

My first production rollout began with a clean namespace called argocd. I applied the official manifests using kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml. This command pulls the latest stable version, which aligns with the newly released Argo CD 2.0 that brings declarative configuration support.

After the resources appeared, I verified the pods were healthy with kubectl get pods -n argocd. The argocd-server service exposed a LoadBalancer IP, which I added to my DNS as argo.acme.internal. I then installed the CLI on my workstation: brew install argocd for macOS or curl -sSL -o /usr/local/bin/argocd https://github.com/argoproj/argo-cd/releases/download/v2.0.0/argocd-linux-amd64 && chmod +x /usr/local/bin/argocd for Linux.

Logging in required the initial admin password, which the installation stores as a secret named argocd-initial-admin-secret. I retrieved it with kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d and then ran argocd login argo.acme.internal --username admin --password <retrieved> --insecure. The --insecure flag is acceptable for internal clusters but should be replaced with proper TLS in production.

At this point, the UI was reachable at https://argo.acme.internal. I could see the dashboard, which listed no applications yet. The installation was complete, and I moved on to configuring Git as the source of truth.


Defining applications in Git and syncing automatically

In a GitOps workflow, each Kubernetes application is described by a set of manifests stored in a Git repository. I created a dedicated repo named acme-infra and organized it with a apps/ directory, where each sub-folder contains the manifests for a microservice.

For example, the payment service lived under apps/payment and included a deployment.yaml, service.yaml, and a kustomization.yaml that assembled the resources. The kustomization.yaml looked like this:

resources: - deployment.yaml - service.yaml namePrefix: payment- namespace: production

Argo CD reads these files directly from Git. To register the application, I ran the CLI command: argocd app create payment \ --repo https://github.com/acme/acme-infra.git \ --path apps/payment \ --dest-server https://kubernetes.default.svc \ --dest-namespace production \ --sync-policy automated

The --sync-policy automated flag tells Argo CD to continuously monitor the Git branch (default main) and apply any drift automatically. I also enabled self-heal in the UI, which forces Argo CD to revert manual changes that diverge from Git.

After creating the app, I pushed a change to the deployment.yaml that increased the replica count from 2 to 4. Within seconds, the Argo CD UI turned green, and the kubectl get pods -n production -l app=payment output showed four pods running. The automatic sync eliminated the need for a separate deployment script.

To keep the pipeline auditable, Argo CD records each sync event in its history. In the UI I can click an application, view the History tab, and see the commit SHA, author, and timestamp. This aligns with compliance requirements that demand a clear chain of custody for every change.

If I need to pause automation - for example, during a maintenance window - I can toggle the Automation switch in the UI or run argocd app pause payment. When ready, argocd app resume payment restores the automated sync.


Adding the Argo CD Image Updater for automated container updates

Keeping container images up to date is a common source of manual toil. The Argo CD Image Updater watches image registries and creates pull requests when a newer tag appears.

To enable it, I first installed the updater as a separate Deployment in the same argocd namespace using the Helm chart flag imageUpdater.enabled=true. The Helm values also specify the list of registries to monitor. For example:

imageUpdater: enabled: true registries: - name: DockerHub apiUrl: https://registry.hub.docker.com credentialsSecret: dockerhub-secret

Next, I annotated the payment application with the image pattern that should be tracked. In the UI I added the annotation argocd-image-updater.argoproj.io/image-list: payment=repo/payment:{{.Tag}}. This tells the updater to replace the image field in the deployment manifest with the latest tag.

When a new version of repo/payment is pushed to Docker Hub, the updater detects the tag, updates the deployment.yaml in a temporary branch, and opens a pull request against acme-infra. The PR undergoes the same review process as any code change, ensuring that security and testing gates are respected.

After the PR merges, Argo CD automatically syncs the new manifest, and the cluster rolls out the updated container without any manual intervention. According to the New Stack, teams that adopt the Image Updater reduce the time between image release and production deployment from days to minutes.


Securing the pipeline: RBAC, secrets, and audit trails

Security cannot be an afterthought when moving to GitOps. In my implementation I started by tightening RBAC. Argo CD ships with a policy.csv file where you can define role-based permissions. I created a role called dev-team that allows read-only access to the payment app and sync permissions for the devops role.

The CSV entry looks like this:

g, dev-team, role:readonly, payment p, devops, applications, sync, payment

These lines were applied with argocd proj create dev-team --description "Read-only for developers" --src * --dest * --policy policy.csv. By scoping permissions at the project level, I prevented developers from accidentally modifying production-only applications.

For secrets, I followed the principle of externalizing them from Git. I used Kubernetes Secret objects managed by Sealed Secrets, which encrypts the data before committing to the repo. The manifest in Git contains the sealed version, and the controller decrypts it at runtime. This pattern satisfies the requirement that no plaintext credentials live in source control.

Argo CD also integrates with OpenID Connect providers, so I configured SSO with our corporate IdP. This gave us single-sign-on, MFA enforcement, and automatic user provisioning. The audit logs captured in the UI record every login, sync, and permission change, and they can be exported to a SIEM for compliance reporting.

When a sync fails, Argo CD creates a Sync Failed event that includes the error message, the offending manifest, and the commit SHA. I set up a Prometheus alert on the argocd_app_sync_status{status="Failed"} metric, which triggers a PagerDuty incident, ensuring that failures are visible instantly.


Real-world case study: migrating a legacy monolith at Acme Corp

Acme Corp ran a legacy monolithic Java application on a set of on-prem VMs, deploying with hand-written Bash scripts. The team suffered from long release windows - often 4 hours - and frequent configuration drift. In Q2 2024 we decided to migrate to a Kubernetes-based architecture using GitOps and Argo CD.

The first step was containerizing the monolith. I wrote a Dockerfile that built the JAR, copied it into a lightweight OpenJDK base image, and pushed the image to our private ECR registry. The image was tagged with the build number, for example acme/legacy:20240415.001. I then created a Helm chart that parameterized the image tag, replica count, and resource limits.

Next, I set up a new Git repository called acme-legacy-deploy. The charts/legacy directory held the Helm chart, and the values/production.yaml file defined the production settings. I added the following annotation to the Argo CD Application manifest to enable the Image Updater:

metadata: annotations: argocd-image-updater.argoproj.io/image-list: legacy=acme/legacy:{{.Tag}}

After creating the Argo CD application, I performed a dry-run sync to confirm that the chart rendered correctly. The first successful sync rolled out three pods in the production namespace. Because the monolith accessed a legacy database, I used a Kubernetes Service to expose the database endpoint, preserving the original connection string.

Within two weeks the team reduced the release cycle from 4 hours to under 15 minutes. The automated sync eliminated the manual SSH steps that previously caused errors. The audit trail in Argo CD showed each commit, who approved it, and the exact time the cluster converged, satisfying the audit requirements of our finance department.

Post-migration, we leveraged the Image Updater to keep the Java runtime patched. When a new OpenJDK patch appeared, the updater created a PR, the security team reviewed it, and the merge triggered an immediate rollout. This process cut the critical patch window from weeks to hours.

Key lessons learned include:

  • Start with a single pilot application to validate the workflow before scaling.
  • Document the Git directory structure early; a clear hierarchy prevents merge conflicts.
  • Integrate secret management from day one to avoid retrofitting later.
  • Monitor Argo CD health metrics; a small increase in argocd_app_sync_status{status="Failed"} often signals upstream CI issues.

The experience proved that Argo CD 2.0’s declarative configuration, combined with the Image Updater and robust RBAC, can transform a brittle manual deployment process into a reliable, audit-ready production pipeline.


FAQ

Q: What is Argo CD used for?

A: Argo CD continuously synchronizes Kubernetes cluster state with a Git repository, providing declarative deployments, self-healing, and audit trails.

Q: How does the Argo CD Image Updater work?

A: It monitors configured container registries, detects new tags, updates the image field in manifests, and opens a pull request for review before Argo CD applies the change.

Q: Can I secure Argo CD with my corporate SSO?

A: Yes, Argo CD supports OIDC providers, allowing you to integrate with Azure AD, Okta, or any SAML-compatible IdP for single-sign-on and MFA.

Q: What are the benefits of using GitOps over traditional CI/CD?

A: GitOps provides a single source of truth, automatic drift correction, built-in auditability, and faster rollbacks by treating Git commits as the deployment trigger.

Q: How do I enable automated sync for an application?

A: Set the --sync-policy automated flag when creating the app via CLI or enable the “Automated” toggle in the UI; Argo CD will then continuously reconcile the live state with Git.

Read more