Kubernetes’ original Ingress API gave the ecosystem a shared baseline, but it also baked in limitations: a narrow feature set, implementation-specific annotations, and an awkward fit for multi-tenant environments. The Gateway API is the community’s answer—an API that aims to be expressive, role-oriented, and portable across implementations.
In 2026, the question for platform teams is no longer “should we experiment with Gateway API?” but “which implementation should we standardize on, and how do we keep the choice reversible?” This post lays out a pragmatic evaluation framework for four common paths: Envoy Gateway, Istio, Cilium, and Kong.
Gateway API, in one paragraph
Gateway API introduces resources like GatewayClass, Gateway, and HTTPRoute to represent infrastructure ownership boundaries. In broad strokes:
- Platform teams own `GatewayClass` and shared `Gateway` objects.
- Application teams attach routes (like `HTTPRoute`) without needing cluster-wide privileges.
- Policy and attachment rules become explicit instead of hidden in annotations.
That role-oriented model matters when you’re trying to scale the number of teams using Kubernetes without scaling the number of “people who can break the whole cluster.”
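That ownership split can be made concrete in manifests. The sketch below is illustrative (names like `shared-gateway`, `infra`, and `checkout-svc` are assumptions, not prescribed values): a platform-owned Gateway in an infrastructure namespace, and an application-owned HTTPRoute attaching to it from a tenant namespace.

```yaml
# Platform team: owns the shared Gateway in a platform-controlled namespace.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: shared-gateway
  namespace: infra
spec:
  gatewayClassName: example-class   # bound to whichever implementation you chose
  listeners:
  - name: http
    protocol: HTTP
    port: 80
    allowedRoutes:
      namespaces:
        from: All                   # or Selector, to restrict who may attach
---
# Application team: attaches a route from its own namespace,
# no cluster-wide privileges required.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: checkout
  namespace: shop
spec:
  parentRefs:
  - name: shared-gateway
    namespace: infra
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /checkout
    backendRefs:
    - name: checkout-svc
      port: 8080
```

Note that the cross-namespace attachment only works because the Gateway's `allowedRoutes` permits it; that is the explicit attachment policy the bullet list above refers to.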
The four implementation archetypes
Although the ecosystem is broader than four options, most teams end up in one of these archetypes:
1) Envoy Gateway: the “canonical” Gateway API interpretation
Envoy Gateway’s pitch is straightforward: build a vendor-neutral, upstream-first gateway implementation centered on Envoy. For operators, the key advantage is clarity: fewer legacy knobs, fewer side goals, and a strong alignment with Gateway API concepts.
Where it shines:
- Teams that want a clean separation between app routing and infrastructure control.
- Clusters where you don’t want to adopt a full service mesh just to modernize ingress.
- Standardization efforts (platform teams supporting many clusters) that value portability.
Tradeoffs:
- You still need to solve east-west traffic separately if you later need a mesh.
- Advanced policy integrations can require additional controllers (authz, WAF, rate limiting) depending on your stack.
2) Istio: Gateway API plus the mesh toolbox
Istio brings a mature control plane, strong traffic policies, and a deep operational model. As a Gateway API implementation, it can be attractive because you get a cohesive story: ingress + east-west + policy.
Where it shines:
- Organizations that already run Istio and want to converge on Gateway API resources.
- Teams that need consistent mTLS, authz, and telemetry across north-south and east-west traffic.
Tradeoffs:
- Operational complexity: upgrades, control plane tuning, sidecar/resource overhead (even with sidecarless modes, there’s still operational surface area).
- Gateway API becomes one interface among several; legacy resources and mesh-specific policies still exist.
3) Cilium: eBPF-driven networking with Gateway API as an extension
Cilium’s strength is that it often becomes your cluster’s networking foundation (CNI, policies, observability). Adding Gateway API on top can feel like a natural extension: one vendor/project doing CNI + L7 ingress + policy.
Where it shines:
- Platform teams standardizing on Cilium for networking and network policy enforcement.
- Teams that want strong observability and policy in one cohesive stack.
Tradeoffs:
- The gateway feature set and maturity can vary by version; you need to validate against your edge cases.
- Coupling: if you later swap CNIs, you may also be swapping gateway behavior.
4) Kong: a product-shaped gateway with Kubernetes-native control
Kong remains popular where API gateway features (plugins, auth, rate limiting, developer portal integration) matter as much as Kubernetes primitives. Gateway API support can give you portability on paper, while you still benefit from Kong’s plugin ecosystem.
Where it shines:
- Organizations with strong API management requirements at the edge.
- Platform teams wanting a packaged, operationally supported gateway footprint.
Tradeoffs:
- Some advanced behaviors may still rely on Kong-specific CRDs or configuration patterns.
- Portability depends on how “purely” you stick to Gateway API resources.
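Whichever archetype you choose, the choice itself is expressed in a single field: the GatewayClass's `controllerName`. A minimal sketch (the controller names in the comments are the commonly documented ones for each project, but verify them against the versions you actually deploy):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: edge
spec:
  # Swapping implementations means changing this value; Gateways that
  # reference "edge" stay unchanged as long as they stick to portable fields.
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
  # Istio:  istio.io/gateway-controller
  # Cilium: io.cilium/gateway-controller
  # Kong:   konghq.com/kic-gateway-controller
```

This is the mechanical core of "keeping the choice reversible": the fewer implementation-specific CRDs you layer on top, the closer a migration gets to a one-field change.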
An evaluation rubric that won’t lie to you
Most gateway bake-offs fail because they compare feature checklists, not operational reality. Here’s a rubric that tends to hold up under real load:
- Upgrade experience: Can you upgrade without traffic drops? How do you roll back?
- Multi-tenancy: Can teams self-serve routes safely? Can you enforce attachment policies?
- Observability: Are metrics/logs/traces first-class, or do you need bolt-ons?
- Policy integration: Authn/authz, rate limits, WAF—native vs. add-on controllers.
- Failure modes: What happens when the controller is down? What happens during reconciliation delays?
- Reversibility: How many vendor-specific CRDs do you end up depending on?
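The multi-tenancy criterion in particular can be tested directly rather than taken from a datasheet: Gateway listeners declare which namespaces may attach routes. A hedged sketch, assuming tenant namespaces carry a hypothetical `tenancy: trusted` label and a platform-managed TLS secret named `wildcard-cert`:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: tenant-gateway
  namespace: infra
spec:
  gatewayClassName: edge
  listeners:
  - name: https
    protocol: HTTPS
    port: 443
    tls:
      certificateRefs:
      - name: wildcard-cert       # platform-managed certificate secret
    allowedRoutes:
      namespaces:
        from: Selector
        selector:
          matchLabels:
            tenancy: trusted      # only labeled namespaces can self-serve routes
```

A useful bake-off exercise: create a route from a non-labeled namespace and check how each implementation surfaces the rejection (route status conditions, events, logs), since that is what your app teams will debug against.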
A practical migration plan (Ingress → Gateway API)
- Start with one cluster and one traffic class (e.g., internal services) to validate behavior.
- Normalize on Gateway API resources for most routes; isolate implementation-specific configuration behind platform-owned gateways.
- Codify policies (TLS, hostname ownership, allowed backends) as part of your gateway provisioning pipeline.
- Document the “escape hatch” so app teams don’t invent their own annotations/CRDs.
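As a concrete instance of the first step, a typical Ingress rule translates fairly mechanically to an HTTPRoute. The names below (`reports-svc`, `internal-gateway`, the hostname) are illustrative; note how the implementation-specific rewrite annotation becomes an explicit, portable filter:

```yaml
# Before: a classic Ingress rule with an implementation-specific annotation.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: reports
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: reports.internal.example.com
    http:
      paths:
      - path: /reports
        pathType: Prefix
        backend:
          service:
            name: reports-svc
            port:
              number: 80
---
# After: the same intent as a portable HTTPRoute; the rewrite is an
# explicit filter instead of a hidden annotation.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: reports
spec:
  parentRefs:
  - name: internal-gateway
    namespace: infra
  hostnames:
  - reports.internal.example.com
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /reports
    filters:
    - type: URLRewrite
      urlRewrite:
        path:
          type: ReplacePrefixMatch
          replacePrefixMatch: /
    backendRefs:
    - name: reports-svc
      port: 80
```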
Gateway API’s promise is portability, but portability only holds if your platform team actively protects the abstraction. Treat your gateway stack like any other platform product: define supported patterns, publish examples, and keep the “just one more annotation” instinct under control.