Argo CD 3.3 and the GitOps ‘self-managing’ trap: upgrading safely with server-side apply

GitOps is at its best when it’s boring: your desired state is in Git, the controller reconciles, and drift gets corrected. But GitOps has an awkward edge case that shows up in almost every mature platform: the controller managing itself.

Argo CD’s v3.3.0 release is a reminder that “self-managing” is less a pattern and more a tightrope. The release includes clear guidance: if an Argo CD Application is responsible for keeping Argo CD installed and upgraded, you likely need to enable ServerSideApply=true for that Application—or upgrades can fail. In some scenarios, you may also need to set ClientSideApplyMigration=false.

Why this matters: GitOps controllers sit in the blast radius of their own behavior

Controllers like Argo CD reconcile changes by applying manifests. Over time, Kubernetes has evolved its apply semantics, and Argo CD has needed to evolve with it. The trouble is that the mechanics used to apply and diff resources also affect the resources that run the controller itself: CRDs, Deployments, RBAC, and ConfigMaps that define how Argo CD operates.

When you upgrade, you’re effectively asking Argo CD to replace parts of its own machinery while it’s still running. That works—until it doesn’t.

Argo CD 3.3’s headline operational warning

The release notes make it explicit: before upgrading to v3.3.0, read the upgrade guide, and if you manage Argo CD with an Argo CD Application, ensure ServerSideApply=true.

Why SSA specifically? Server-side apply changes the “source of truth” for field ownership. Instead of the client constructing patches and hoping they converge cleanly, SSA lets the API server merge changes and track field managers. In modern clusters—especially with CRDs and multiple reconcilers—SSA tends to reduce conflict and drift.
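Field ownership is visible on any live resource via its managedFields. The excerpt below is an illustrative sketch (resource and manager names are placeholders, not actual Argo CD output) of what inspecting a Deployment with `kubectl get -o yaml --show-managed-fields` might reveal:

```yaml
# Hypothetical managedFields excerpt for a Deployment.
# Manager names and fields are illustrative placeholders.
metadata:
  managedFields:
    - manager: argocd-controller        # a server-side applier
      operation: Apply                  # SSA shows up as "Apply"
      apiVersion: apps/v1
      fieldsV1:
        f:spec:
          f:replicas: {}                # this field is owned by this manager
    - manager: kubectl-client-side-apply
      operation: Update                 # legacy client-side apply shows up as "Update"
      apiVersion: apps/v1
```

When two managers claim the same field, the API server can report conflicts on apply, which is exactly the kind of friction SSA's explicit ownership model is meant to surface and resolve.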

The ‘self-managing’ failure mode: apply migrations and managed fields

Many teams started GitOps years ago, when client-side apply and diffing behaviors were simpler. As Argo CD and Kubernetes evolved, those legacy assumptions can surface during upgrades:

  • Fields that used to be “owned” by kubectl now show up as owned by Argo CD (or vice versa).
  • CRDs and webhooks can be especially sensitive to ordering and apply strategy.
  • ManagedFields drift can cause Argo CD to repeatedly fight the API server (and itself).

The release notes mention a specific remediation for certain self-management setups (notably Kustomize-based): set ClientSideApplyMigration=false if you hit client-side apply migration sync errors.

Practical upgrade checklist for platform teams

1) Identify whether Argo CD manages itself

  • Do you have an Application that points at Argo CD install manifests or a Helm chart?
  • Is that Application in the same Argo CD instance you’re upgrading?
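A self-managing setup typically looks something like the sketch below (the repo URL, path, and names are placeholders; your layout will differ):

```yaml
# Hypothetical self-managing Application: an Argo CD Application that
# installs/upgrades Argo CD itself, in the same cluster it runs in.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: argocd                    # placeholder name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/platform-config  # placeholder
    path: infra/argocd            # placeholder: Kustomize or Helm manifests for Argo CD
    targetRevision: main
  destination:
    server: https://kubernetes.default.svc  # same cluster the instance runs in
    namespace: argocd
```

If an Application like this exists and points at the instance you are about to upgrade, you are in the self-managing case the release notes warn about.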

2) If yes, set safe sync options before you touch versions

  • ServerSideApply=true for the self-managing Application.
  • If you see client-side apply migration sync errors, set ClientSideApplyMigration=false, per the upgrade guide.
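Putting both options together, the self-managing Application's sync policy might look like this minimal sketch (source and destination elided; the commented-out option applies only if you hit the migration errors the release notes describe):

```yaml
# Sketch of the sync policy for the self-managing Application.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: argocd
  namespace: argocd
spec:
  # ...source/destination as in your existing Application...
  syncPolicy:
    syncOptions:
      - ServerSideApply=true            # per the v3.3 upgrade guidance
      # Uncomment only if you see client-side apply migration sync errors
      # (notably in Kustomize-based setups):
      # - ClientSideApplyMigration=false
```

Setting this before bumping the version means the next sync of Argo CD's own manifests already uses server-side apply.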

3) Treat controller upgrades like control-plane changes

Even though Argo CD is “just an app,” it is effectively part of your control plane. That means:

  • Staging the upgrade in a non-prod cluster isn’t optional.
  • Have a rollback plan (and a manual “break glass” path) for the self-managing app.
  • Snapshot current manifests and the live state so you can reason about diffs.

4) Re-evaluate the self-management pattern itself

Self-management is attractive because it keeps everything under GitOps. But consider splitting responsibilities:

  • Bootstrap layer installs/updates Argo CD (e.g., Helmfile, Terraform/OpenTofu, Cluster API templates, or a separate “seed” Argo CD).
  • GitOps layer manages everything else.

This reduces circular dependency and makes “controller upgrades” a controlled, deliberate action.

Where this connects to broader platform engineering trends

Platform engineering in 2026 is about making change safe, boring, and repeatable. That includes your delivery tooling. Argo CD’s emphasis on SSA reflects a broader consensus: rely more on API-server-native reconciliation semantics and less on client-side patching tricks that are hard to reason about under concurrency.
