Crossplane 2.0 matters for AI infrastructure because it gives platform teams a declarative way to expose governed, reusable services to agents and developers through one control plane instead of a maze of tickets, scripts, and cloud consoles.
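A minimal sketch of the developer-facing half of that idea, using the official kubernetes Python client: instead of filing a ticket, a developer (or an agent) creates a namespaced composite resource against the cluster API. The `platform.example.org` group, the `Database` kind, and the field names are hypothetical; only the interaction model is the point.

```python
# Hypothetical sketch: requesting a governed database through a
# platform-defined Crossplane API instead of a ticket or console.
# The group/kind/fields are invented; real ones come from the XRDs
# the platform team publishes.
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

claim = {
    "apiVersion": "platform.example.org/v1alpha1",
    "kind": "Database",
    "metadata": {"name": "orders-db", "namespace": "team-a"},
    "spec": {
        "size": "small",        # the composition maps this to real instance types
        "region": "ap-south-1", # allowed regions are enforced platform-side
    },
}

# Crossplane 2.0 composite resources are namespaced, so ordinary
# Kubernetes RBAC scopes who may create one.
api.create_namespaced_custom_object(
    group="platform.example.org",
    version="v1alpha1",
    namespace="team-a",
    plural="databases",
    body=claim,
)
```

The governance lives behind the API: the composition decides what "small" means and which regions are legal, and the requester never touches cloud credentials.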
The KubeCon + CloudNativeCon India 2026 schedule is less interesting as an event announcement than as a demand signal. AI + ML, observability, operations, platform engineering, and security are showing up together because teams no longer get to treat them as separate tracks in production.
Kubernetes v1.35 continues a trend: clusters are increasingly asked to run mixed AI workloads (training, batch, and latency-sensitive inference) alongside traditional services. Here's what's new that matters most for platform teams: scheduling, resizing, and safer config workflows.
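On resizing specifically: in-place pod resize lets a pod's CPU and memory change without recreating it, with a per-resource `resizePolicy` declaring whether the container must restart. A hedged sketch with the kubernetes Python client; the names, image, and sizes are illustrative, and it assumes a cluster and client library recent enough to carry the `ContainerResizePolicy` model.

```python
# Sketch: a pod that opts in to in-place resizing. resizePolicy is
# declared per container, per resource; NotRequired means the kubelet
# may apply a new CPU allocation without restarting the container.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="inference-worker", namespace="ml"),
    spec=client.V1PodSpec(
        containers=[
            client.V1Container(
                name="server",
                image="example.org/inference:latest",  # placeholder image
                resources=client.V1ResourceRequirements(
                    requests={"cpu": "2", "memory": "4Gi"},
                    limits={"cpu": "2", "memory": "4Gi"},
                ),
                resize_policy=[
                    # CPU can change in place...
                    client.V1ContainerResizePolicy(
                        resource_name="cpu", restart_policy="NotRequired"
                    ),
                    # ...memory changes restart the container.
                    client.V1ContainerResizePolicy(
                        resource_name="memory", restart_policy="RestartContainer"
                    ),
                ],
            )
        ]
    ),
)
core.create_namespaced_pod(namespace="ml", body=pod)
```

The resize itself is applied later through the pod's resize subresource (for example via `kubectl patch --subresource resize`), which is what makes it practical to give an inference pod more CPU during a spike without dropping its loaded model.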
Two fast-moving projects shipped updates on Feb 20: LiteLLM (API gateway/router) and llama.cpp (local inference runtime). Together they sketch a practical production pattern: route, observe, and govern LLM calls like any other service.
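The pattern in miniature: llama.cpp's `llama-server` exposes an OpenAI-compatible endpoint, and LiteLLM fronts it with the same interface it uses for hosted providers, so routing, logging, and spend controls apply uniformly. A sketch under assumptions: `llama-server` is running locally on port 8080, and the model alias is arbitrary.

```python
# Sketch: routing a request to a local llama.cpp server through LiteLLM.
# llama-server speaks the OpenAI API, so we use LiteLLM's "openai/"
# provider prefix with a custom api_base. Assumes something like:
#   llama-server -m model.gguf --port 8080
import litellm

response = litellm.completion(
    model="openai/local-llama",           # alias; a single-model llama-server typically ignores it
    api_base="http://localhost:8080/v1",  # local llama.cpp endpoint
    api_key="sk-local",                   # placeholder; llama-server doesn't check one by default
    messages=[{"role": "user", "content": "Why do LLM calls need a gateway?"}],
)
print(response.choices[0].message.content)
```

Because both ends speak the same schema, swapping the local model for a hosted one is a router config change, not an application change.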
OpenInfra is increasingly framing OpenStack and adjacent projects as ‘sovereign infrastructure’ in the AI era. Stewardship—not ownership—may be the governance model that keeps these platforms relevant.
As LLMs turn into infrastructure, the gap between ‘I can run a model’ and ‘I can train one’ is becoming a product category. tiny corp’s training box pitch is a signal: developers want simpler, more open training stacks—even if the first versions are niche.