GitHub’s workflow_dispatch API can now return run metadata, eliminating the brittle polling and guesswork that dispatch automation has always required. Here’s why it matters for platform teams building ChatOps, self-service workflows, and internal developer portals.
CNCF’s ‘Agentics Day: MCP + Agents’ points to a new infrastructure layer: standardized model-to-tool connections under neutral governance. Here’s what platform teams should expect—and what to prototype now.
Helm v4.1.1 is a patch release, but it’s a good excuse to revisit how chart supply chains, plugin sprawl, and CI-driven upgrades actually break production. Here’s a pragmatic operator playbook.
KubeCon + CloudNativeCon Europe heads back to Amsterdam on March 23–26, 2026. Here’s a practical preview of the themes to track—platform engineering, security, observability, and AI—and how to get more value out of the week.
Kubernetes’ Node Ready condition is a blunt instrument. The new Node Readiness Controller adds declarative, taint-based readiness gates so nodes only enter the scheduling pool when platform-specific dependencies (CNI, storage, GPU drivers, local agents) are truly healthy.
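The mechanism underneath is ordinary Kubernetes taints, which teams already hand-roll today: the node carries a `NoSchedule` taint until a checker verifies the dependency and removes it. A sketch of that pattern using only core APIs (the taint key and images are illustrative, not standard names):

```yaml
# Node carries a readiness taint until the GPU driver check clears it.
apiVersion: v1
kind: Node
metadata:
  name: gpu-node-1
spec:
  taints:
    - key: example.com/gpu-driver-ready
      value: "false"
      effect: NoSchedule
---
# The checker itself must tolerate the taint so it can run on the node
# it is gating, verify the driver, and then remove the taint.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: gpu-readiness-checker
spec:
  selector:
    matchLabels: {app: gpu-readiness-checker}
  template:
    metadata:
      labels: {app: gpu-readiness-checker}
    spec:
      tolerations:
        - key: example.com/gpu-driver-ready
          operator: Exists
          effect: NoSchedule
      containers:
        - name: checker
          image: example.com/gpu-readiness-checker:latest
```

What the controller adds over this DIY version is the declarative part: you state the readiness conditions once instead of writing and operating the taint-removal logic per dependency.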
GitOps is great until you run a large Kubernetes fleet. Fastly describes the gaps they hit — orchestration, validation, blast-radius control — and how they layered a rollout system on top of Argo CD. Here’s what platform teams can steal.
ingress-nginx is heading into retirement in 2026. Here’s a practical, low-drama playbook to inventory your current usage, choose a target (Ingress controller vs Gateway API), and migrate with controlled risk.
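If Gateway API is your target, most plain host-and-path Ingress rules translate mechanically. A before/after sketch of a simple route (resource names and the shared `Gateway` are illustrative; nginx-specific annotations need case-by-case mapping to Gateway API filters):

```yaml
# Before: a typical ingress-nginx Ingress.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service: {name: web, port: {number: 80}}
---
# After: the Gateway API equivalent, attached to a platform-owned Gateway.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: web
spec:
  parentRefs:
    - name: shared-gateway      # provisioned by the platform team
  hostnames: ["app.example.com"]
  rules:
    - matches:
        - path: {type: PathPrefix, value: /}
      backendRefs:
        - name: web
          port: 80
```

The inventory step matters precisely because of what this sketch omits: every `nginx.ingress.kubernetes.io/*` annotation in your fleet is a translation decision, and some have no one-to-one Gateway API equivalent.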
A practical, ops-minded blueprint for running agentic workflows locally: LangGraph for durable state, MCP for standardized tool boundaries, and Ollama for local inference—plus the guardrails that keep it from becoming an unmaintainable demo.
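The architecture is easier to reason about as a loop over durable state with a typed tool boundary. A dependency-free sketch of that shape; this illustrates the pattern, not LangGraph's or MCP's actual APIs:

```python
import json
from dataclasses import dataclass, field
from typing import Callable

# A "tool" is a named function with JSON-serializable input and output --
# the boundary MCP formalizes with schemas and a transport.
Tool = Callable[[dict], dict]

@dataclass
class AgentState:
    """Durable state: everything needed to resume after a crash."""
    messages: list = field(default_factory=list)
    step: int = 0

def checkpoint(state: AgentState, path: str) -> None:
    # LangGraph persists state per step through checkpointers; a plain
    # file is enough for a local sketch.
    with open(path, "w") as f:
        json.dump({"messages": state.messages, "step": state.step}, f)

def run_agent(state: AgentState, plan: list, tools: dict[str, Tool],
              ckpt_path: str) -> AgentState:
    """Execute a plan of tool calls, checkpointing after every step.

    A real agent would ask a model which tool to call next; here the plan
    is fixed so the control flow stays visible. Resuming from a crash
    means reloading the state and skipping already-completed steps.
    """
    for name, args in plan[state.step:]:
        result = tools[name](args)            # typed tool boundary
        state.messages.append({"tool": name, "result": result})
        state.step += 1
        checkpoint(state, ckpt_path)          # durable after each step
    return state
```

The guardrails live at the two seams this makes explicit: the tool registry (what the agent is allowed to do) and the checkpoint (what you can audit and replay).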
Opus 4.6 is being positioned as stronger at coding and longer-running agentic tasks, with ‘agent teams’ entering preview. For platform leaders, the real story is operational: least privilege, audit trails, evals, and a clean boundary between proposing changes and executing them.
The ‘LLM inference server’ is quickly becoming a standard platform component. vLLM and Ollama represent two distinct operating models—GPU-first throughput engineering vs developer-friendly packaging. Here’s how to pick based on tenancy, observability, and cost, not hype.
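One practical consequence of both options: vLLM (via `vllm serve`) and Ollama each expose an OpenAI-compatible chat endpoint, so client code can stay identical while the platform swaps servers underneath. A minimal stdlib client sketching that; the base URLs and model names are illustrative defaults:

```python
import json
import urllib.request

def build_chat_payload(model: str, prompt: str) -> dict:
    """Build a single-turn request body for /v1/chat/completions."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def chat(base_url: str, model: str, prompt: str) -> str:
    """Send one chat completion to any OpenAI-compatible server.

    Only base_url and model differ between backends -- which is what
    makes the two servers swappable behind a platform abstraction.
    """
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(build_chat_payload(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# chat("http://localhost:8000", "meta-llama/Llama-3.1-8B-Instruct", "hi")  # vLLM default port
# chat("http://localhost:11434", "llama3.1", "hi")                         # Ollama default port
```

The tenancy and cost questions sit behind this identical surface: vLLM batches many concurrent requests on shared GPUs, while Ollama optimizes for one-box simplicity.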
Gateway API is the direction of travel, but teams still need an implementation that can survive production traffic. Envoy Gateway is quietly becoming that default. Here’s what’s maturing, what’s still sharp, and how to adopt it without breaking every app team.
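A low-blast-radius way to start is a platform-owned GatewayClass and one shared Gateway that app teams attach routes to. A minimal sketch (resource names and the `infra` namespace are illustrative; the controller name is Envoy Gateway's documented one):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: envoy
spec:
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: shared-gateway
  namespace: infra
spec:
  gatewayClassName: envoy
  listeners:
    - name: http
      protocol: HTTP
      port: 80
      allowedRoutes:
        namespaces:
          from: All   # app teams attach HTTPRoutes from their own namespaces
```

The `allowedRoutes` knob is the adoption lever: start narrow (a pilot namespace selector), then widen as app teams migrate, without anyone touching the Gateway itself.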
Dapr’s Conversation component abstracts LLM provider differences behind a runtime API, letting teams focus on prompts and tool calls while the sidecar handles retries, auth, and provider quirks. It’s an early blueprint for agentic, ops-friendly AI integration.
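Concretely, the provider choice lives in a Dapr Component manifest, so swapping providers is a YAML change rather than a code change. A sketch based on Dapr's alpha conversation API; the component type and metadata field names may differ across Dapr versions, so verify against your release:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: llm
spec:
  type: conversation.openai   # provider-specific component type (alpha API)
  version: v1
  metadata:
    - name: key               # API key pulled from a Kubernetes secret
      secretKeyRef:
        name: openai-secret
        key: api-key
    - name: model
      value: gpt-4o-mini      # illustrative model name
```

Application code then targets the component name (`llm`) through the sidecar, which is where the retries, auth, and provider quirks get absorbed.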
Argo CD 3.3.0 sharpens the line between old apply behaviors and server-side apply. If Argo CD manages itself, upgrades can fail unless you adopt the right sync options—making this a good time to audit GitOps bootstrapping patterns.
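For the self-managed case, the fix is declared on the Application that manages Argo CD itself. A minimal sketch (repo URL and paths are illustrative; `ServerSideApply=true` is the real sync option):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: argocd
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/gitops.git   # your bootstrap repo
    path: bootstrap/argocd
    targetRevision: main
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    syncOptions:
      # Server-side apply avoids the client-side last-applied-configuration
      # annotation, which is where self-managed upgrades tend to conflict.
      - ServerSideApply=true
```

Auditing bootstrapping means checking exactly this resource in every cluster: if the app-of-apps that owns Argo CD still relies on client-side apply defaults, the 3.3.0 upgrade is where that debt surfaces.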