SpinKube + Gateway API: A Practical Path to Routing WebAssembly Apps on Kubernetes

WebAssembly on Kubernetes keeps creeping from “interesting demo” into a real platform option — especially for teams that want fast cold starts, stronger sandboxing, and polyglot runtimes without shipping heavyweight container images. The CNCF’s SpinKube project is one of the most concrete implementations of that idea: it runs Spin applications on Kubernetes using a containerd shim, but without actually running containers for the app processes.

There’s still a practical question every platform team has to answer: how do you expose these workloads cleanly? The CNCF blog post “Exposing Spin apps on SpinKube with GatewayAPI” is a useful blueprint because it connects two trends that are happening independently: WASM-based “serverless on K8s” stacks and the Kubernetes Gateway API, which aims to replace ad-hoc Ingress annotations with structured, portable routing resources.

Why Gateway API is more than “Ingress v2”

Ingress made Kubernetes accessible, but it also normalized an unhealthy pattern: a single object with wildly different semantics depending on your controller, plus a long list of opaque annotations. Gateway API flips that model by separating responsibilities:

  • GatewayClass: a cluster-level description of a gateway implementation (owned by platform/infrastructure teams).
  • Gateway: the actual entry point (listeners, ports, TLS) that can be shared.
  • Routes: objects like HTTPRoute that application teams can own, attaching routing rules to the shared gateway.

This “role-oriented” split is a big deal in multi-tenant clusters. It lets a platform team define the safe perimeter while giving app teams control over only the route rules that apply to their services, without granting broad admin access or exposing them to controller-specific knobs.
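To make the split concrete, here is a minimal sketch of a platform-owned Gateway. The names (`shared-gateway`, the `platform-ingress` namespace) are hypothetical, and the `gatewayClassName` depends on which controller you install; the listener and `allowedRoutes` fields follow the Gateway API v1 schema.

```yaml
# Hypothetical platform-owned entry point. App teams never edit this
# object; they attach HTTPRoutes to it from their own namespaces.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: shared-gateway
  namespace: platform-ingress   # hypothetical platform-team namespace
spec:
  gatewayClassName: nginx       # depends on your controller installation
  listeners:
    - name: http
      protocol: HTTP
      port: 80
      allowedRoutes:
        namespaces:
          from: All             # allow HTTPRoutes from app namespaces to attach
```

The `allowedRoutes` stanza is where the delegation boundary lives: the platform team decides which namespaces may attach routes, and everything else stays out of app teams' hands.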

SpinKube’s model: Kubernetes primitives, but no container runtime for the app

SpinKube still uses familiar primitives — Deployments, Pods, and Services — but the runtime behavior changes. Instead of pulling a container image and starting a container process, SpinKube uses a containerd shim to launch a WebAssembly runtime for the Spin app. That means you keep Kubernetes scheduling, rollout, and service discovery, while getting a different packaging and execution model.

In practice, that creates two “platform surfaces” to manage:

  • Packaging/distribution: Spin apps can be published as OCI artifacts and pulled from registries, which aligns with existing supply chain patterns.
  • Traffic exposure: you still need stable ways to publish endpoints, apply auth, do canaries, and operate across environments.
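On the packaging side, a SpinApp custom resource is roughly what ties these surfaces together. The sketch below assumes the `core.spinkube.dev/v1alpha1` API group and the `containerd-shim-spin` executor name from the SpinKube docs; the app name and image reference are hypothetical.

```yaml
# Hypothetical SpinApp: the Spin operator turns this into a Deployment
# and Service, but the pods run via the Wasm shim, not a container runtime.
apiVersion: core.spinkube.dev/v1alpha1
kind: SpinApp
metadata:
  name: hello-spin
spec:
  image: ghcr.io/example/hello-spin:v1   # Spin app published as an OCI artifact
  replicas: 2
  executor: containerd-shim-spin         # containerd shim that runs the Wasm runtime
```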

A minimal working pattern: Gateway + HTTPRoute + path rewrites

The CNCF walkthrough demonstrates a pragmatic approach: install Gateway API CRDs, deploy a Gateway controller (they use NGINX Gateway Fabric as an implementation), then create a Gateway resource as the shared ingress point. Next, each Spin app gets its own HTTPRoute that attaches to that gateway. For teams without external DNS automation, the example uses path prefix matching plus URL rewrite filters to route traffic to multiple apps behind a single IP and listener.
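A sketch of that per-app route might look like the following. The route and backend names are hypothetical (the Spin operator creates a Service for each SpinApp, assumed here to be named `hello-spin`), and `parentRefs` must point at whatever shared Gateway your platform team runs; the match and filter fields follow the Gateway API v1 HTTPRoute schema.

```yaml
# Hypothetical app-team-owned route: match a path prefix on the shared
# gateway, strip it, and forward to the Spin app's Service.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: hello-spin-route
spec:
  parentRefs:
    - name: shared-gateway        # name/namespace of your platform's Gateway
      namespace: platform-ingress
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /hello
      filters:
        - type: URLRewrite
          urlRewrite:
            path:
              type: ReplacePrefixMatch
              replacePrefixMatch: /  # app sees "/" instead of "/hello"
      backendRefs:
        - name: hello-spin        # Service created by the Spin operator
          port: 80
```

Because the rewrite is a typed filter rather than an annotation, the same HTTPRoute should behave consistently across conformant Gateway controllers.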

This pattern matters because it’s composable. You can start with path-based routing on day one, and later evolve to host-based routing, split traffic for canaries, or add security policy layers — without rewriting your whole exposure story in controller-specific annotations.

What platform teams should pay attention to

SpinKube and Gateway API both shift responsibilities, and it’s worth calling those out explicitly:

  • Operational maturity: SpinKube is a CNCF sandbox project; treat it as “early but real.” Run it in a constrained environment first, and validate upgrade and rollback mechanics.
  • Gateway controller selection: Gateway API is a spec; behavior depends on the controller implementation. Validate feature parity for the traffic policies you need (timeouts, retries, auth integration, observability hooks).
  • Security posture: WASM sandboxing can be an advantage, but you still need to design around identity, egress, and secrets. The networking layer is where many mistakes become outages.
  • Developer experience: If the app team’s workflow is “build Spin app → push OCI artifact → apply SpinApp + HTTPRoute,” your platform should provide templates and guardrails so teams don’t hand-roll YAML.

Why this matters for the broader cloud native ecosystem

Cloud native is in an interesting phase: we’re not just standardizing “how to run containers,” we’re standardizing how to run workloads that might not be containers at all. WASM frameworks like Spin and orchestration layers like SpinKube show that Kubernetes can be a universal control plane if we keep improving the UX and portability of the surrounding APIs.

Gateway API plays a similar role for traffic: it’s an attempt to create a shared language for exposure, independent of specific ingress products. Put the two together and you get an approachable platform story: WASM workloads that feel Kubernetes-native, exposed through a structured, delegated networking API that reduces annotation chaos.
