Ten years after its first commit, Cilium returns to Amsterdam for CiliumCon 2026. The project has evolved from experimental container networking to the de facto CNI for cloud native infrastructure—and it’s now positioning itself as the networking data plane for AI workloads.
Cilium at 10: From Experiment to Infrastructure
When Cilium’s first commit landed in 2016, eBPF was a niche Linux kernel feature. Today, Cilium powers production clusters at scale for Roche, Etraveli Group, Ledger, and SUSE. The CNCF-graduated project handles networking, security, and observability through eBPF, bypassing iptables entirely.
Cilium v1.19 Highlights
Released ahead of CiliumCon, v1.19 brings:
- Flow Aggregation – an observability improvement contributed by Microsoft that reduces metric cardinality
- DNS Policy Wildcards – Expanded wildcard support for DNS-based network policies
- Multi-cluster at scale – Hundred-cluster support with reduced control plane overhead
- Hardware acceleration – DPU-based policy enforcement preview
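The DNS wildcard feature builds on Cilium’s existing FQDN-based policy API. The sketch below shows the general shape of such a policy: the `toFQDNs`/`matchPattern` fields are part of Cilium’s documented CiliumNetworkPolicy schema, while the pod labels and domain here are purely illustrative, and the exact pattern syntax v1.19 adds may differ.

```yaml
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: allow-wildcard-fqdn   # illustrative name
spec:
  endpointSelector:
    matchLabels:
      app: frontend           # illustrative workload label
  egress:
    # Permit DNS lookups via kube-dns so Cilium can observe resolved names
    - toEndpoints:
        - matchLabels:
            k8s:io.kubernetes.pod.namespace: kube-system
            k8s-app: kube-dns
      toPorts:
        - ports:
            - port: "53"
              protocol: ANY
          rules:
            dns:
              - matchPattern: "*"
    # Allow egress only to destinations whose resolved name matches the wildcard
    - toFQDNs:
        - matchPattern: "*.example.com"
```

Because enforcement keys off observed DNS responses, the policy must allow the DNS traffic itself (the first egress rule) before the wildcard rule can match anything.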
Tetragon: Runtime Security
Tetragon’s per-workload security policies are maturing. Sessions at CiliumCon cover:
- Scaling Tetragon policies across large clusters
- Hardware-accelerated security with DPUs
- Replacing legacy hardware load balancers with eBPF
- DNS security and forensics
Targeting the AI Workload
Cilium’s latest strategic focus: becoming the networking data plane for AI workloads. Distributed training drives RDMA and high-throughput networking requirements, where eBPF’s efficient in-kernel datapath, which sidesteps iptables and per-packet overhead, becomes critical.
Cisco’s keynote will detail why they’re betting big on Cilium—and how Tetragon redefines networking, security, and observability in multi-cloud environments.
What to Expect at CiliumCon
The March 23 event features five technical sessions from maintainers and end users, three lightning talks on new features, and community networking. Talks move beyond “how do we adopt Cilium?” to “how do we run it at scale?”