Confidential Computing Meets Sovereign Cloud: Why ‘Data in Use’ Is the New Boundary

Sovereign cloud discussions often get stuck at geography: where is the data stored, which jurisdiction applies, which operators have access. Canonical is making a sharper argument: data residency is not data sovereignty, because the most dangerous state of data is the one most designs ignore — plaintext data in use, sitting in memory while computation happens.

In a new Canonical post, the company frames confidential computing as the missing layer for sovereign cloud: hardware-backed trusted execution environments (TEEs) that encrypt memory in use and block host/hypervisor introspection, combined with attestation that allows workloads to prove the state of the environment before secrets are released.

This is more than a security feature. It’s a shift in how cloud systems establish trust.

Why “at rest” and “in transit” aren’t enough

Most cloud security architectures are built around two protections:

  • Data at rest: disk encryption, key management, storage access controls.
  • Data in transit: TLS, mutual TLS, private networking, service mesh.

Canonical’s point is that these controls leave a blind spot: the runtime. During computation, plaintext exists in RAM, registers, and (in many AI workflows) GPU memory. If you assume the host OS, hypervisor, or cloud operator can’t see it, you’re not enforcing a boundary — you’re relying on process and contract.

Confidential computing tries to make “data in use” defensible by moving isolation into hardware. A TEE creates a protected region where memory is encrypted and access is restricted such that even privileged layers can’t trivially inspect it.

From identity-based trust to state-based trust

Traditional cloud controls ask: who is requesting access? That’s identity: IAM policies, roles, audit logs, conditional access. Confidential computing introduces a different question: what is the state of the environment where data is being used?

Canonical describes a model where workloads can attest to the hardware class they run on, firmware versions, boot-chain integrity, whether debug features are active, and measurements of the code they load. Secrets are released only after that state is verified. This is the conceptual shift: you’re not trusting the operator; you’re trusting an attested environment.

In sovereign cloud terms, that can be powerful. Regulations often care about who can access data and under what conditions. Attestation provides a technical enforcement mechanism that is more durable than organizational separation alone.
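To make the mechanism concrete, here is a minimal sketch of state-based key release: a verifier compares attestation claims against a baseline and hands out the key only on a full match. Every field name, value, and the `release_key` function are illustrative assumptions, not a real TEE or attestation-service API, and signature verification of the evidence is assumed to have already happened.

```python
# Sketch of state-based secret release. A verifier checks attestation
# claims (already signature-verified upstream) against a baseline and
# releases the key only when the environment's state matches.
# All names and values here are hypothetical, not a vendor API.

BASELINE = {
    "hardware_classes": {"tee-gen2"},         # approved hardware classes
    "firmware_min": (1, 4),                   # minimum firmware version
    "debug_enabled": False,                   # debug features must be off
    "code_measurements": {"sha256:abc123"},   # allow-listed workload hashes
}

def release_key(evidence, key):
    """Return the key only if the attested state satisfies the baseline."""
    if evidence.get("hardware_class") not in BASELINE["hardware_classes"]:
        return None
    if tuple(evidence.get("firmware", (0, 0))) < BASELINE["firmware_min"]:
        return None
    # Default to "debug on" so missing evidence fails closed.
    if evidence.get("debug_enabled", True) != BASELINE["debug_enabled"]:
        return None
    if evidence.get("code_measurement") not in BASELINE["code_measurements"]:
        return None
    return key

good = {"hardware_class": "tee-gen2", "firmware": (1, 5),
        "debug_enabled": False, "code_measurement": "sha256:abc123"}
bad = dict(good, debug_enabled=True)

print(release_key(good, b"secret"))  # the key is released
print(release_key(bad, b"secret"))   # None: debug mode fails the policy
```

Note the fail-closed defaults: absent or malformed claims deny release, which is the property regulators usually care about.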

What this means for OpenInfra/OpenStack-style designs

OpenStack and the broader OpenInfra ecosystem are often deployed where sovereignty requirements are strongest: national clouds, regulated industries, private managed clouds, and edge environments. Confidential computing can be integrated into these designs as a tiered capability rather than an all-or-nothing rewrite.

Practically, think in layers:

  • Compute flavors / node pools: offer “confidential” instance types backed by hardware TEEs, separate from standard nodes.
  • Key release workflows: integrate attestation into secret delivery so that keys are only released when the guest proves it’s running on approved hardware/firmware.
  • Policy: treat attestation baselines as versioned policy artifacts, similar to how you treat CIS benchmarks or hardened images.
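The third layer above can be sketched as data: baselines as versioned artifacts, with more than one version accepted while a firmware rollout is in flight. The structure, version names, and `matching_baseline` helper are assumptions for illustration, not an OpenStack or Canonical interface.

```python
# Sketch: attestation baselines as versioned policy artifacts, so a
# firmware rollout can run baseline v7 alongside v6 during migration
# and retire v6 afterwards. All names and values are hypothetical.

BASELINES = {
    "v6": {"firmware": "1.4.2", "measurements": {"sha256:aaa"}},
    "v7": {"firmware": "1.5.0", "measurements": {"sha256:bbb"}},
}
ACTIVE = ["v7", "v6"]  # accept both during rotation; drop "v6" to retire it

def evidence_matches(evidence, baseline):
    return (evidence["firmware"] == baseline["firmware"]
            and evidence["measurement"] in baseline["measurements"])

def matching_baseline(evidence):
    """Return the first active baseline version the evidence satisfies."""
    for version in ACTIVE:
        if evidence_matches(evidence, BASELINES[version]):
            return version
    return None

print(matching_baseline({"firmware": "1.4.2", "measurement": "sha256:aaa"}))
print(matching_baseline({"firmware": "1.3.0", "measurement": "sha256:aaa"}))
```

Treating the baseline set like any other versioned config also gives you an audit trail: which trust policy was in force when a given key was released.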

This aligns with Canonical’s framing of “programmable sovereignty”: requirements evolve faster than infrastructure, so you need a way to rotate trust baselines without re-architecting everything.

Where confidential computing helps most (and where it doesn’t)

It’s important to be specific about the threat model. Confidential computing is best when your problem is privileged infrastructure visibility: you don’t want the host or hypervisor to inspect tenant memory, and you want to reduce trust in administrators and operators.

It does not automatically solve:

  • Bad application security (SQL injection, auth bugs, unsafe deserialization).
  • Data exfiltration by the workload itself (if the app is compromised inside the TEE, it can still leak data outward).
  • Observability and debugging tradeoffs (reduced introspection can make incident response harder without careful design).

So the right pattern is selective use: put the most sensitive processing inside TEEs, and design supporting controls around it.

Operational reality: attestation becomes a new control plane

If you adopt TEEs, you’re also adopting attestation policies, measurement updates, and hardware lifecycle management. That is a new operational plane with its own failure modes:

  • firmware updates that change measurements and break key release,
  • inconsistent baselines across regions,
  • debug modes accidentally enabled,
  • vendor-specific tooling differences.
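Two of those failure modes, inconsistent baselines across regions and accidentally enabled debug modes, are detectable with a routine drift check. The sketch below compares per-region attested state against a fleet reference; region names, fields, and values are invented for illustration.

```python
# Sketch: drift check over per-region attestation state, flagging
# regions that diverge from the fleet reference (stale firmware,
# debug left on, etc.). All names and values are hypothetical.

REFERENCE = {"firmware_measurement": "sha256:fw15", "debug_enabled": False}

REGIONS = {
    "eu-north": {"firmware_measurement": "sha256:fw15", "debug_enabled": False},
    "eu-south": {"firmware_measurement": "sha256:fw14", "debug_enabled": False},  # stale firmware
    "edge-lab": {"firmware_measurement": "sha256:fw15", "debug_enabled": True},   # debug left on
}

def drifted_regions(regions, reference):
    """Return {region: [mismatched keys]} for regions off the reference."""
    report = {}
    for name, state in regions.items():
        mismatched = [k for k, v in reference.items() if state.get(k) != v]
        if mismatched:
            report[name] = mismatched
    return report

print(drifted_regions(REGIONS, REFERENCE))
# flags eu-south (firmware) and edge-lab (debug mode)
```

Running a check like this before every key-release policy update turns "inconsistent baselines" from a latent outage into a visible diff.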

This is where ecosystem work matters. The Confidential Computing Consortium’s mission is to accelerate adoption through open collaboration — a sign that standard interfaces and shared practices are still evolving.

Bottom line

Canonical’s framing is persuasive: sovereignty is not just about where data sits, it’s about when and where plaintext exists. Confidential computing doesn’t remove the need for governance — but it can turn some governance requirements into enforceable, cryptographic conditions. For OpenInfra and regulated cloud builders, “data in use” is increasingly the boundary that differentiates a policy promise from a technical guarantee.
