Cloudflare Workers AI Now Runs Large Models: Kimi K2.5 Delivers 77% Cost Savings
Cloudflare enters the large model inference game with Kimi K2.5 on Workers AI, offering frontier-level reasoning at a fraction of proprietary model costs.
Kubernetes 1.34 brings Dynamic Resource Allocation to GA, enabling proper GPU sharing, topology-aware scheduling, and gang scheduling for AI/ML workloads.
The Kubernetes community announces a new working group focused on developing standards and best practices for AI Gateway infrastructure, including payload processing, egress gateways, and Gateway API extensions for machine learning workloads.
Ollama 0.18 brings official OpenClaw provider support, up to 2x faster Kimi-K2.5 performance, and the new Nemotron-3-Super model designed for high-performance agentic reasoning tasks.
vLLM 0.17 brings PyTorch 2.10, FlashAttention 4 support, and the new Nemotron 3 Super model, delivering next-generation attention performance for LLM inference.
vLLM 0.17.1 adds Nemotron 3 Super and, more importantly, patches several MoE and TRT-LLM edge cases. That is the real story: production LLM serving is still a game of backend-specific correctness, especially once MoE, FP8, and mixed execution paths enter the room.
Ollama’s 0.17.8 release candidate is not a flashy model-drop release. It is a runtime-hardening release: better GLM tool-call parsing, more graceful stream disconnect handling, MLX changes, ROCm 7.2 updates, and small fixes that make local inference feel more operational and less hobbyist.
Ollama 0.17.7 adds better handling for thinking levels (e.g., ‘medium’) and exposes more context-length metadata for compaction. It’s a small release that hints at a larger shift: local model runtimes are growing the same control surfaces as hosted LLM platforms.
CNCF argues the AI stack is converging on Kubernetes—data pipelines, training, inference, and long-running agents. Here’s what’s actually driving the migration, the hidden operational tax it removes, and the platform-level standards teams should lock in before the next wave hits.
Hugging Face is bringing the GGML / llama.cpp team in-house while keeping the project open and community-led. This isn’t just a hiring headline: it’s a bet that local inference will be competitive, and that packaging + model-to-runtime alignment will be the next battleground.
vLLM 0.16.0 lands with async scheduling and full pipeline parallelism support, plus speculative decoding improvements. Here’s how to think about throughput, tail latency, and operational rollout.
vLLM 0.16.0 ships with a large set of changes and a fast-moving contributor base. To adopt it safely, treat it like an API platform: validate OpenAI-compat endpoints, scheduling behavior, and observability before a fleet-wide cutover.
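A minimal pre-cutover smoke check of the kind described above, assuming a vLLM server exposing the OpenAI-compatible API at `localhost:8000` (the port and model name are placeholders); it builds a standard `/v1/chat/completions` request and checks the response for the OpenAI-shaped fields a client fleet relies on:

```python
import json
import urllib.request

def build_chat_request(model: str, prompt: str, max_tokens: int = 32) -> dict:
    """Build a minimal /v1/chat/completions payload for a compatibility check."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "stream": False,
    }

def response_looks_compatible(resp: dict) -> bool:
    """Check the response carries the OpenAI-shaped fields clients depend on."""
    choices = resp.get("choices", [])
    return (
        resp.get("object") == "chat.completion"
        and bool(choices)
        and "message" in choices[0]
        and "usage" in resp
    )

def smoke_test(base_url: str, model: str) -> bool:
    """POST one request to a running server and validate the response shape.

    Not invoked here; call it against a staging deployment, e.g.
    smoke_test("http://localhost:8000", "my-model").
    """
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(build_chat_request(model, "ping")).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as r:
        return response_looks_compatible(json.load(r))
```

Running a check like this against a canary before the fleet-wide cutover catches compat regressions in response shape cheaply; scheduling and observability changes still need their own validation.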
vLLM 0.16.0 lands with async scheduling and pipeline parallelism, a new WebSocket-based Realtime API, speculative decoding improvements, and major platform work—including an overhaul for XPU support. Here’s why those details matter to teams building reliable, cost-efficient inference stacks.
vLLM 0.16.0 isn’t a routine release. It signals a shift toward higher-throughput, more interactive open model serving—plus the operational primitives (sync, pause/resume) teams need for RLHF and agentic workloads.
vLLM’s 0.16.0 release lands major throughput improvements plus a WebSocket Realtime API for streaming audio interactions. It’s a useful snapshot of where the open inference stack is going: more parallelism, more modalities, and more production ergonomics.
The vLLM team details GB200 optimizations pushing DeepSeek-style MoE throughput. The bigger story: disaggregated serving and precision-aware kernels are becoming table stakes.