OpenAI’s GPT‑5.4 rollout brings a new ‘Thinking’ experience inside ChatGPT and a higher-capability GPT‑5.4 Pro option aimed at demanding professional workflows. Here’s what’s actually new—computer use, longer context, tool search, and improved reliability—and how those changes play out for real users.
NVIDIA GTC 2026 (March 16–19, San Jose) is shaping up to be a full‑stack AI and accelerated computing week—from Jensen Huang’s keynote to hands‑on training, agentic AI sessions, and deep dives into inference, CUDA, and robotics. Here’s what to expect, who’s featured, and how to register.
Collector-contrib v0.146.0 adds context inference to the Filter Processor, letting teams write readable, intent-first OTTL conditions instead of juggling internal contexts. Here’s what changes, how evaluation works, and how to adopt it safely.
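As a rough sketch of the before/after (keys and exact syntax are illustrative, not copied from the release notes): today a filter processor condition lives under a context-specific list, while context inference lets the path itself carry the intent.

```yaml
# Before: the condition must be filed under the right internal context.
processors:
  filter/explicit:
    error_mode: ignore
    metrics:
      metric:
        - 'name == "http.server.duration" and resource.attributes["deployment.environment"] == "dev"'

# Sketch of the inferred style: fully qualified paths (metric.name,
# resource.attributes) let the processor infer the evaluation context.
  filter/inferred:
    error_mode: ignore
    metrics:
      - 'metric.name == "http.server.duration" and resource.attributes["deployment.environment"] == "dev"'
```

The win is readability and portability: a reviewer can tell what a condition touches without knowing which internal context list it sits in.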
GitHub now supports assigning Dependabot alerts to specific users (GA). That sounds small—but it’s the missing piece that lets teams operationalize dependency remediation the same way they do incidents: ownership, queues, automation, and reporting.
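Once alerts carry owners, reporting collapses into a group-by. A minimal sketch, assuming alert payloads shaped loosely like GitHub’s Dependabot alert objects, with a hypothetical `assignee` field standing in for whatever the GA assignment feature exposes in the API payload:

```python
from collections import defaultdict

# Sample alerts, loosely shaped like GitHub's Dependabot alert objects.
# The "assignee" field here is hypothetical, illustrating the new
# assignment data rather than the exact API schema.
alerts = [
    {"number": 1, "severity": "critical", "assignee": "alice"},
    {"number": 2, "severity": "high", "assignee": "bob"},
    {"number": 3, "severity": "critical", "assignee": "alice"},
    {"number": 4, "severity": "low", "assignee": None},
]

def triage_queues(alerts):
    """Group alerts into per-owner remediation queues, unassigned last."""
    queues = defaultdict(list)
    for alert in alerts:
        owner = alert["assignee"] or "unassigned"
        queues[owner].append(alert["number"])
    return dict(queues)

print(triage_queues(alerts))
# {'alice': [1, 3], 'bob': [2], 'unassigned': [4]}
```

That’s the incident-style workflow in miniature: the “unassigned” bucket is the queue your automation should be draining.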
Hugging Face is bringing the GGML / llama.cpp team in-house while keeping the project open and community-led. This isn’t just a hiring headline: it’s a bet that local inference will be competitive, and that packaging + model-to-runtime alignment will be the next battleground.
AWS demonstrates migrating an EC2-hosted app to ECS Express Mode using Kiro CLI plus AWS/ECS MCP servers. Beyond the tutorial, this is a blueprint for ‘operator copilots’ that can discover, plan, validate, and execute infrastructure changes with guardrails.
Ingress-NGINX’s March 2026 retirement is forcing real migrations. Here’s a field guide to the weird edge behaviors you must inventory before moving to Gateway API (or another controller) — and how to avoid silent traffic breaks.
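A typical inventory item is annotation-driven rewriting: ingress-nginx’s `rewrite-target` with a regex capture group has no direct Gateway API equivalent, but plain prefix rewrites map cleanly onto an `HTTPRoute` `URLRewrite` filter. A sketch of the target shape (gateway and service names are placeholders):

```yaml
# ingress-nginx today: prefix stripping via annotation
#   nginx.ingress.kubernetes.io/rewrite-target: /

# Gateway API equivalent: an explicit URLRewrite filter on the route
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: app
spec:
  parentRefs:
    - name: external-gateway   # placeholder Gateway
  hostnames: ["app.example.com"]
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /api
      filters:
        - type: URLRewrite
          urlRewrite:
            path:
              type: ReplacePrefixMatch
              replacePrefixMatch: /
      backendRefs:
        - name: api-svc        # placeholder Service
          port: 8080
```

Regex rewrites, auth snippets, and `configuration-snippet` blocks are the ones to flag early: those need per-controller extensions or an application-side fix, not a mechanical translation.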
GitHub is deprecating several Copilot models (including GPT-5.1) and changing required network routing for Copilot coding agent. If you run agents on self-hosted runners, your allowlists and model policies need attention now.
OpenStack’s 6‑month cadence hides a lot of operational reality: maintained vs unmaintained phases, SLURP upgrade paths, and when vendors actually ship. Here’s how to use the official releases site to plan upgrades for 2026.1 Gazpacho.
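The SLURP rule itself is mechanical: the YYYY.1 releases are SLURPs, SLURP deployments may hop directly to the next SLURP, and everyone else steps through every release. A minimal planning sketch of that rule (release list abbreviated; not an official tool):

```python
def is_slurp(release: str) -> bool:
    """SLURP releases are the first release of each year (YYYY.1)."""
    return release.endswith(".1")

def upgrade_path(current: str, target: str, releases: list[str]) -> list[str]:
    """Plan hops: SLURP-to-SLURP where allowed, else every release."""
    i, j = releases.index(current), releases.index(target)
    path = [current]
    for release in releases[i + 1 : j + 1]:
        # A non-SLURP release may be skipped only when standing on a
        # SLURP, and only if it isn't the target itself.
        if is_slurp(path[-1]) and not is_slurp(release) and release != target:
            continue
        path.append(release)
    return path

releases = ["2024.2", "2025.1", "2025.2", "2026.1"]
print(upgrade_path("2025.1", "2026.1", releases))   # ['2025.1', '2026.1']
print(upgrade_path("2024.2", "2026.1", releases))   # ['2024.2', '2025.1', '2026.1']
```

Note the asymmetry in the second path: a deployment sitting on a non-SLURP release must first climb to the next SLURP before it can start skipping.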
OpenClaw’s 2026.3.2 release leans into enterprise ops: broader SecretRef coverage, faster failure on unresolved refs, and a first-class PDF tool. Meanwhile llama.cpp continues its rapid perf work with new AArch64 SME compute paths.