On April 8, 2026, Anthropic published a public beta of Claude Managed Agents. The product does something no model provider had done before at this scale: it handles the infrastructure that AI agents need to run, so the customer does not have to. Instead of renting compute from a cloud provider and building the plumbing yourself, you call a single API and get a fully managed environment. Anthropic provisions the container, manages credentials, logs everything, and recovers automatically if something fails mid-task. Pricing is $0.08 per session-hour of active runtime plus standard token costs. For an always-on agent running around the clock, that is roughly $58 per month in session fees before token costs. A dedicated VPS running the same workload costs less and runs any model. On Friday, April 10, Fastly fell 18 percent, Akamai fell 13 percent, Cloudflare fell 11 percent, and DigitalOcean fell 13.4 percent in intraday trading.

What Anthropic Built

Claude Managed Agents is a REST API that manages the agent execution loop rather than just the model call. The architecture separates three layers: the model layer handles reasoning; the execution layer runs inside an isolated container where Claude can execute commands, read and write files, browse the web, and call external services; the session layer is an append-only log that persists everything across failures. If a process crashes, execution resumes from the last recorded point. Anthropic says this separation reduced response latency by roughly 60 percent at median and over 90 percent at the 95th percentile compared to architectures where the model waits for container provisioning.

Credential handling addresses a problem most teams building agents have had to solve themselves. API keys and OAuth tokens for external services never reach the container where the agent’s code runs. They are stored in a vault and injected at the network layer. Built-in tools include Bash execution, file operations, web search, and web fetch. External tools connect via MCP. Research preview features, available by application, include multi-agent coordination, memory, and outcome evaluation. The API is available to all Anthropic API accounts by default.
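Injecting credentials at the network layer means the agent's outbound request carries no secret; a proxy outside the sandbox attaches it before forwarding. A sketch of that pattern in Python, where the vault contents, request shape, and header scheme are illustrative assumptions rather than Anthropic's actual implementation:

```python
# Hypothetical vault mapping target hosts to stored credentials.
VAULT = {"api.example.com": "example_token_123"}

def inject_credentials(request: dict) -> dict:
    """Attach the stored credential for the target host at the proxy.

    The returned request is what leaves the network; the agent-side
    request object is never mutated and never contains the secret.
    """
    token = VAULT.get(request["host"])
    headers = dict(request.get("headers", {}))  # copy, don't mutate
    if token:
        headers["Authorization"] = f"Bearer {token}"
    return {**request, "headers": headers}

# The agent composes a request with no credential in it:
agent_request = {"host": "api.example.com", "path": "/user", "headers": {}}
outbound = inject_credentials(agent_request)
```

Even if the agent's container is fully compromised, there is no token in it to exfiltrate, which is the property the vault-plus-injection design is buying.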

Why Infrastructure Stocks Fell

Cloudflare, Fastly, Akamai, and DigitalOcean had all spent the previous 12 months positioning their platforms as the infrastructure layer for AI agents. Cloudflare built Workers AI, an Agents SDK, Sandbox containers, and Durable Objects for agent state. Fastly’s CEO had cited agent management as a driver of platform usage. Akamai launched its Inference Cloud. DigitalOcean introduced GPU droplets and one-click AI agent deployments. Each company’s AI narrative rested on the assumption that agents are complex to deploy and need to run on infrastructure those companies operated.

Claude Managed Agents removes that complexity for Claude specifically. Sandboxed execution, credential management, checkpointing, and observability are now included in Anthropic’s API price. A team that would have spent weeks building agent infrastructure on Cloudflare Workers or DigitalOcean Droplets can instead call a single endpoint. 24/7 Wall St. described the result as Anthropic “turn[ing] AI itself into the deployment layer, compressing the seat-based SaaS revenue model by replacing work that once required multiple licensed tools.”

None of the affected companies changed their financial guidance after the announcement. Cloudflare maintained a 2026 growth projection of 28-29 percent, DigitalOcean 21 percent. The guidance held because the installed base of workloads does not migrate overnight, and because multi-model deployments, on-premises requirements, and data sovereignty concerns keep much of the AI infrastructure market outside the reach of a single-vendor managed service.

Cloudflare’s response was visible in its timing. During Agents Week, it announced Project Think (a five-tier execution ladder for agents), Workflows V2 (raising concurrent instances from 4,500 to 50,000), Moltworker (a proof-of-concept ephemeral OpenClaw runtime on Cloudflare’s sandbox infrastructure), and write operations for Agent Lee. Each announcement addressed a capability that Claude Managed Agents provides natively. Cloudflare’s argument was that its execution model is architecturally different: stateless, distributed, and not tied to a single model vendor.

What This Means for Hosting Providers and C-Level Teams

Claude Managed Agents makes a specific trade: operational simplicity in exchange for vendor concentration. A team using it gets production-grade sandboxing, checkpointing, and credential management without building any of it. The cost is that all agent executions run on Anthropic’s infrastructure, all data passes through Anthropic’s systems, and the runtime is Claude-only. Switching models, running on-premises, or bringing agent infrastructure in-house means rebuilding what Anthropic is providing.

For organizations in regulated industries such as legal or financial services, routing production workloads through a third-party cloud run by the model provider raises compliance questions the public beta does not yet address. Anthropic’s documentation does not discuss SOC 2 certification, HIPAA compliance, or data processing agreements for Managed Agents sessions. Those questions will determine how quickly enterprise adoption moves relative to the startup segment, which can move faster on trust.

For hosting providers, the picture is more specific. The market for running AI agents on VPS and managed hosting has been forming for roughly five months, driven largely by OpenClaw adoption. Bluehost, HostGator, and Hostinger are selling managed OpenClaw VPS; DigitalOcean and others fill the tier above that. Those products exist because persistent agents need a place to run continuously. Claude Managed Agents charges $0.08 per session-hour, which for an always-on agent running 24 hours a day comes to roughly $58 per month in session fees alone, before token costs. A dedicated VPS running OpenClaw is still cheaper for that use case, and it runs any model.
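The session-fee arithmetic is simple enough to check directly, using only the $0.08 rate from the announcement and a 30-day month:

```python
SESSION_RATE = 0.08          # dollars per session-hour, from the announcement
HOURS_PER_MONTH = 24 * 30    # always-on agent, 30-day month

monthly_session_fee = SESSION_RATE * HOURS_PER_MONTH
print(f"${monthly_session_fee:.2f}/month before token costs")
```

That works out to $57.60 per month in session fees alone, which is the "roughly $58" figure, and it scales linearly: an agent that runs two hours a day pays about $4.80 per month, which is where the task-based pricing advantage comes from.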

Where Managed Agents competes directly is task-based agents: workloads that run for minutes or hours on demand rather than continuously. That segment of the agent hosting market, which hosting providers had been counting on as a growth path, is now priced directly against Anthropic’s infrastructure. Providers building managed runtimes for persistent, model-agnostic workloads have a defensible position. Those that had not yet decided which type of agent workload to optimize for face a harder question.

The April 10 stock selloff reflects a market view that the competitive pressure is real. The guidance revisions that did not happen reflect a view that the shift will not be immediate. Both can be true at the same time.