Kubernetes: authenticating private registry mirrors with namespace-scoped Secrets (via CRI-O credential provider)

Private registry mirrors and pull-through caches are among the “grown-up” Kubernetes optimizations: once your clusters get busy enough, you start caring about egress bills, image pull latency, and having a local source of truth for what’s allowed to run. The problem is that mirrors live in an awkward place in the stack. Kubernetes understands imagePullSecrets for authenticating to a registry, but the actual mirror configuration typically sits below the Kubernetes API, inside container runtime configuration on each node.

That mismatch forces an ugly choice in multi-tenant clusters: either skip private mirrors (and pay the cost), or push mirror credentials down to nodes (and break isolation). A CNCF blog post from Sascha Grunert (Red Hat) outlines a third option: use CRI-O’s credential provider to authenticate to registry mirrors using standard Kubernetes Secrets, scoped to the namespace of the pulling workload.

This is a story about closing a security gap without inventing a new control plane. It uses mechanisms Kubernetes already has: service accounts, tokens, Secrets, RBAC, and kubelet credential provider plugins.

Why mirror auth is harder than source-registry auth

Most platform teams know the “happy path”:

  • Your image lives in registry.example.com/team-a/app:1.2.3.
  • You attach imagePullSecrets to the ServiceAccount or Pod.
  • The kubelet uses those credentials to pull.

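For reference, the happy path is a namespace-scoped Secret wired to a ServiceAccount. A minimal sketch follows; the names (registry-creds, team-a) and the registry hostname are hypothetical, and the base64 payload is elided:

```yaml
# Hypothetical Secret holding docker-config credentials for registry.example.com.
apiVersion: v1
kind: Secret
metadata:
  name: registry-creds
  namespace: team-a
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: <base64-encoded docker config JSON>
---
# Attaching the Secret to the ServiceAccount makes every pod using it pull with these creds.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
  namespace: team-a
imagePullSecrets:
  - name: registry-creds
```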
Mirrors complicate this because the kubelet doesn’t (and historically can’t) reason about runtime mirror configuration. Mirrors are commonly defined in files like /etc/containers/registries.conf, which belong to the node’s container stack. Kubernetes can’t naturally map “this image was pulled from a mirror” back to “use this namespace secret” unless something bridges the layers.
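To make that concrete, node-level mirror routing typically looks something like this in registries.conf syntax (hostnames are hypothetical):

```toml
# /etc/containers/registries.conf — owned by the node's container stack,
# invisible to the Kubernetes API.
[[registry]]
prefix = "registry.example.com"
location = "registry.example.com"

# Pulls for registry.example.com/* are attempted against this mirror first.
[[registry.mirror]]
location = "mirror.internal.example.com"
```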

In practice, that has led teams to configure mirror credentials globally on the node. In multi-tenant or regulated environments, that’s a security smell:

  • Tenant isolation breaks: node-level creds are, effectively, cluster-wide.
  • Least privilege becomes impossible: every namespace inherits access it shouldn’t have.
  • Operations become a bottleneck: every credential rotation turns into a node maintenance exercise.

The Kubernetes mechanism that makes this possible: kubelet credential provider plugins

Kubernetes has had a kubelet image credential provider plugin API since 1.20 (stable since 1.26). The model is simple: when the kubelet needs credentials for an image pull, it can invoke an external executable plugin. The plugin can return credentials dynamically, instead of relying on static node config.

The CNCF post highlights a newer capability that makes namespace-scoped mirror authentication feasible: in Kubernetes 1.33+, the KubeletServiceAccountTokenForCredentialProviders feature gate lets the kubelet pass a service account token to the credential provider. That token is the key: it contains enough identity to figure out which namespace initiated the pull, and it can be used to call the Kubernetes API to fetch Secrets (subject to RBAC).
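Wiring this up happens in the kubelet’s credential provider config. The sketch below is a plausible shape, not a verified production config: the CredentialProviderConfig kind and matchImages/defaultCacheDuration fields are the standard kubelet API, while the tokenAttributes block is the 1.33+ addition gated by KubeletServiceAccountTokenForCredentialProviders; the provider name, image pattern, and audience string are assumptions for illustration:

```yaml
# Passed to the kubelet via --image-credential-provider-config
# (binaries live under --image-credential-provider-bin-dir).
apiVersion: kubelet.config.k8s.io/v1
kind: CredentialProviderConfig
providers:
  - name: crio-credential-provider          # executable name in the bin dir
    apiVersion: credentialprovider.kubelet.k8s.io/v1
    matchImages:
      - "registry.example.com"              # hypothetical: which pulls invoke the plugin
    defaultCacheDuration: "1m"
    # Requires Kubernetes 1.33+ and the feature gate; field names per the
    # tokenAttributes API as documented upstream.
    tokenAttributes:
      serviceAccountTokenAudience: "crio-credential-provider"  # audience is a deployment choice
      requireServiceAccount: true
```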

How CRI-O’s credential provider “bridges the gap”

CRI-O’s credential provider (crio-credential-provider) is not trying to redesign image pulling. Instead, it slots into the existing flow:

  1. The kubelet invokes the credential provider for matching image patterns.
  2. The provider parses the token to determine the namespace context for the pull.
  3. It reads mirror configuration locally (from runtime config).
  4. It queries the Kubernetes API for registry Secrets in that namespace (for example, kubernetes.io/dockerconfigjson).
  5. It generates a short-lived auth file for CRI-O to consume during the pull.
  6. It returns “success” to the kubelet; CRI-O does the mirror pull using that auth material.
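The flow above can be sketched as a minimal plugin skeleton. This is an illustrative Python sketch, not the real crio-credential-provider (which this post doesn’t reproduce): the serviceAccountToken request field follows the credentialprovider.kubelet.k8s.io/v1 shape as I understand it, the token is decoded without verification purely to show where the namespace comes from, and the mirror-config/Secret/auth-file steps are stubbed as comments.

```python
import base64
import json


def namespace_from_token(jwt: str) -> str:
    """Extract the pulling namespace from a projected service account token.

    Kubernetes tokens carry a "kubernetes.io" claim containing the namespace.
    No signature verification here; a real provider would validate the token
    (e.g. via TokenReview) rather than trusting a decoded payload.
    """
    payload_b64 = jwt.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore base64url padding
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    return claims["kubernetes.io"]["namespace"]


def handle(request: dict) -> dict:
    """Steps 1-6 in miniature: CredentialProviderRequest in, response out."""
    ns = namespace_from_token(request["serviceAccountToken"])

    # A real provider would now:
    #   - read mirror config from the runtime (e.g. registries.conf),
    #   - fetch kubernetes.io/dockerconfigjson Secrets in namespace `ns`
    #     via the API server (subject to RBAC),
    #   - write a short-lived auth file for CRI-O to consume during the pull.

    return {
        "kind": "CredentialProviderResponse",
        "apiVersion": "credentialprovider.kubelet.k8s.io/v1",
        "cacheKeyType": "Image",
        "auth": {},  # empty on purpose: CRI-O reads the auth file, not this map
    }
```

In the real protocol the kubelet writes the request JSON to the plugin’s stdin and reads the response from stdout; the sketch keeps that as plain function calls for clarity.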

The clever part is that this keeps the “mirror knowledge” in the runtime (where it already is), while moving the credentials back into Kubernetes (where they belong). It also aligns operational boundaries: platform teams own the mirror plumbing; application teams can own their namespace-specific registry credentials.

Security and compliance implications (the part you should care about)

This design changes the blast radius of a registry credential in a way auditors tend to like:

  • Namespace scoping becomes real: mirror credentials can be stored per namespace, and only workloads with access to those Secrets can trigger pulls that use them.
  • Rotation gets easier: changing a Secret is a Kubernetes-native operation, not a node reconfiguration.
  • Multi-tenant platforms stop “credential smearing”: you don’t have to put every team’s registry credentials into one node-level config file.

There’s still a trade-off: the service account used by a workload now needs permission to read the Secrets that contain registry auth (or you need a narrowly scoped mechanism for the provider to read those Secrets). That is manageable, but it deserves careful RBAC design so you don’t accidentally grant broad Secret read access where you didn’t intend it.
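One narrowly scoped shape, assuming the workload’s own service account token is what the provider uses to read the Secret (all names hypothetical):

```yaml
# Grant "get" on one named Secret only — not list/watch, not all secrets.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: read-registry-secret
  namespace: team-a
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["registry-creds"]   # resourceNames narrows get, not list
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-registry-secret
  namespace: team-a
subjects:
  - kind: ServiceAccount
    name: default
    namespace: team-a
roleRef:
  kind: Role
  name: read-registry-secret
  apiGroup: rbac.authorization.k8s.io
```

Note that resourceNames restrictions apply to verbs like get but cannot constrain list, which is one reason to prefer get-by-name over list in this design.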

Operational considerations: performance, caching, and failure modes

Because the plugin can run for every pull, it has to be fast. The CNCF post notes optimizations like early exits, streaming parsing, and pre-allocation. But platform teams should still think about real-world behavior:

  • Caching: kubelet credential provider configs support cache durations and caching modes. Decide whether you want per-token or per-service-account caching and what that means for token churn.
  • Rolling out binaries: the plugin executable must exist on every node referenced by kubelet config. This is an “image pull path dependency” — treat it like a node component with proper rollout and rollback.
  • When the API is slow: if your control plane is under load, secret lookups could become a new tail latency contributor during pulls. That’s still better than global static creds, but it’s something to observe.

Where this fits in the bigger supply-chain picture

Registry mirrors and caches are increasingly being used as policy enforcement points: allow-listing, provenance tracking, signature verification, and forcing pulls through controlled infrastructure. The missing piece has often been authentication that respects Kubernetes tenancy.

If you’re building a multi-tenant internal developer platform, this credential-provider approach is a solid pattern: it keeps the “policy plane” in Kubernetes (RBAC + Secrets), the “data plane” in the runtime (mirror routing), and avoids building bespoke sidecars or admission controllers just to glue those together.

What to do next if you want this in your environment

  • Confirm you’re on Kubernetes 1.33+ and have the feature gate story straight for your distro.
  • Check CRI-O versions and the availability of the crio-credential-provider binary for your node OS/arch.
  • Design RBAC intentionally: which service accounts can list/get which registry Secrets?
  • Instrument image pull failures and latency before and after rollout.

Most importantly, treat this as a security architecture choice, not “just” an optimization. Mirrors are infrastructure — and infrastructure credentials deserve Kubernetes-native lifecycle management.

Sources