We’ve all seen “AI for ops” demos that look great until you ask the uncomfortable questions: how does it authenticate, how does it make changes safely, where are the guardrails, and how do you audit what it did? The Model Context Protocol (MCP) ecosystem is one of the more credible answers emerging in 2026 because it focuses on tools and capabilities, not just chat.
AWS’s recent Containers blog post is a concrete example: it walks through migrating a Node.js app running on EC2 into ECS Express Mode using Kiro CLI plus an AWS MCP Server and a specialized ECS MCP Server. On paper, it’s a tutorial. In practice, it’s a preview of what agent-assisted platform work will look like when it’s actually usable in real environments.
What is ECS Express Mode (and why it matters)
ECS Express Mode aims to remove much of the “write the entire universe in JSON” friction from shipping containers to ECS. The post describes Express Mode as simplifying workload definition and orchestrating supporting services such as load balancers and autoscaling. The division of labor is clear:
- Express Mode: simplified definition + service orchestration
- ECS: scheduling and container orchestration
- Fargate: serverless compute that removes node management
If you’ve ever tried to convince a traditional VM-heavy team to adopt containers, you know the real barrier is operational complexity. Express Mode is AWS acknowledging that “the DX of shipping to ECS” is part of the product.
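To make that friction concrete, here is a rough sketch of the contrast. Both object shapes below are illustrative only, not the actual ECS task definition or Express Mode schemas:

```typescript
// Illustrative only: hypothetical shapes, not the real ECS or
// Express Mode schemas.

// A classic ECS rollout forces you to spell out every supporting piece.
const classicDeployment = {
  taskDefinition: { family: "web", cpu: "256", memory: "512" },
  service: { desiredCount: 2, launchType: "FARGATE" },
  loadBalancer: { listeners: ["HTTP:80"], targetGroups: ["web-tg"] },
  autoScaling: { minCapacity: 2, maxCapacity: 10 },
  networking: { subnets: ["subnet-a", "subnet-b"], securityGroups: ["sg-web"] },
};

// The Express Mode promise: declare the workload; the supporting
// services are orchestrated for you.
const expressStyleSpec = {
  image: "my-app:latest",
  port: 3000,
};
```

The point isn't the exact fields; it's that the surface area you author shrinks to the workload itself.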
Where MCP fits: from runbooks to tool-using agents
The most interesting part is not the ECS feature. It’s the workflow pattern:
- The agent can discover current infrastructure (EC2 instance, ALB, IAM, data stores).
- It can propose a migration plan (containerization steps, service configuration, networking).
- It can execute via tools exposed through MCP servers, rather than free-form “do things.”
- It can validate with checks and rollbacks.
That’s what ops teams need: a structured interface to powerful actions, with permissions and auditability.
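The workflow above can be sketched as structured tool calls against a bounded registry. The tool names and shapes here are hypothetical, not the actual AWS MCP Servers' API; the point is that the agent can only act through a fixed surface:

```typescript
// A minimal sketch of the discover → plan → execute → validate loop.
type ToolCall = { tool: string; args: Record<string, unknown> };
type ToolResult = { ok: boolean; output: unknown };

// A toy tool registry standing in for an MCP server (all names hypothetical).
const tools: Record<string, (args: Record<string, unknown>) => ToolResult> = {
  discover_infrastructure: () =>
    ({ ok: true, output: { ec2: ["i-abc123"], alb: ["app-lb"] } }),
  propose_migration_plan: () =>
    ({ ok: true, output: { steps: ["containerize", "create-service", "cut-over"] } }),
  apply_plan: () => ({ ok: true, output: "applied" }),
  validate_deployment: () => ({ ok: true, output: "healthy" }),
};

function call({ tool, args }: ToolCall): ToolResult {
  const fn = tools[tool];
  // Bounded by construction: anything outside the registry is refused,
  // rather than falling back to free-form "do things."
  if (!fn) return { ok: false, output: `unknown tool: ${tool}` };
  return fn(args);
}

const discovered = call({ tool: "discover_infrastructure", args: {} });
const plan = call({ tool: "propose_migration_plan", args: { from: discovered.output } });
const result = call({ tool: "apply_plan", args: { plan: plan.output } });
```

Every step is a named, typed call, which is exactly what makes the permissions and audit story tractable.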
Kiro CLI: the UX layer that makes this approachable
The post frames Kiro CLI as an interactive chat mode where you can configure MCP servers and list available tools. If you squint, it’s basically a runbook console that speaks natural language, but ultimately calls explicit APIs through explicit tool definitions.
This matters because most “AI assistant” failures in infrastructure come from ambiguity. MCP-based tools reduce ambiguity: the assistant doesn’t “decide” what an ALB is; it calls a tool that knows.
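Concretely, an MCP tool advertises a name, a description, and a JSON Schema for its inputs, so the assistant fills in a constrained form rather than improvising. The tool below is hypothetical; the shape is modeled on MCP-style tool definitions:

```typescript
// Sketch of an MCP-style tool definition. The tool itself is made up;
// the "name / description / inputSchema" shape follows the MCP pattern.
const describeAlbTool = {
  name: "describe_load_balancer",
  description: "Return configuration for a specific Application Load Balancer.",
  inputSchema: {
    type: "object",
    properties: {
      arn: { type: "string", description: "ALB ARN" },
    },
    required: ["arn"],
  },
} as const;

// A deliberately simplified caller-side check: reject calls that omit
// required string fields. (A real host would validate the full schema.)
function validateCall(
  tool: typeof describeAlbTool,
  args: Record<string, unknown>,
): boolean {
  return tool.inputSchema.required.every((k) => typeof args[k] === "string");
}
```

The schema is the contract: malformed or ambiguous calls fail before they ever reach the cloud control plane.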
Practical takeaways for platform teams
If you run a platform organization, you can treat the workflow in this post as a design pattern and start applying it even if you aren’t all-in on AWS tooling:
- Expose safe tools: create a constrained set of actions (discover, diff, plan, apply, rollback) instead of giving an assistant raw shell access.
- Make plans reviewable: require an explicit plan artifact before execution.
- Integrate policy: least-privilege IAM and policy checks should be first-class, not afterthoughts.
- Log everything: treat the agent’s actions like change management—because that’s what they are.
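The last three bullets compose into one small pattern: execution requires a reviewed plan artifact, and every attempt (including rejected ones) lands in the log. A minimal sketch, with illustrative names:

```typescript
// Guardrail sketch: no plan, no approval, no execution — and
// everything, including refusals, is written to the audit log.
type Plan = { id: string; actions: string[]; approved: boolean };

const auditLog: string[] = [];

function apply(plan: Plan): string {
  auditLog.push(`apply requested: ${plan.id}`);
  if (!plan.approved) {
    auditLog.push(`apply rejected (unapproved): ${plan.id}`);
    throw new Error("plan must be reviewed and approved before execution");
  }
  for (const action of plan.actions) {
    auditLog.push(`executed: ${action}`);
  }
  return "done";
}
```

Treating the agent's actions as change management means the rejection path is logged just as carefully as the happy path.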
Where this goes next
In the near term, expect to see MCP servers proliferate for “ops surfaces”: CI/CD, cloud control planes, ticketing, incident tooling, Kubernetes controllers, and infrastructure-as-code workflows. The winning stacks will be the ones that combine:
- good tool boundaries (MCP servers)
- a usable interface (CLI + UI)
- boring but essential operational guardrails (auth, audit, approvals)
AWS’s example is an early, concrete signal that the industry is moving from “AI suggests commands” to “AI executes bounded tools.” That’s a far more realistic path to production.
