Anthropic adds code execution to MCP for faster, cheaper AI agents
Anthropic has expanded the Model Context Protocol (MCP) with built-in code execution, allowing AI agents to run logic and tool operations as code instead of direct model calls. The change significantly reduces token usage, latency, and costs for multi-tool workflows and makes MCP-based systems easier to scale across large ecosystems.
Georg S. Kuklick • November 4, 2025
Anthropic’s Model Context Protocol, launched in late 2024, has become a widely adopted standard for connecting AI agents to external systems and tools. As developers began linking thousands of MCP servers, the cost of loading tool definitions and passing results through the model’s context window grew rapidly. Each tool call consumed additional tokens, inflating both response times and compute costs.
The company’s new approach addresses these limits by letting agents generate and execute code that interacts with MCP servers directly. Rather than loading every tool definition into the context, agents can now load only the functions required for a given task, run them in a sandboxed execution environment, and pass back concise outputs. Anthropic reported that this method can cut token usage from 150,000 to 2,000 tokens in a typical multi-tool chain, a 98.7 percent reduction.
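A minimal sketch of what such agent-generated code might look like, assuming the MCP client exposes each tool as a small wrapper module the agent can import. The module paths, function names, and parameters below (getDocument, updateRecord, the field names) are illustrative assumptions, not Anthropic's actual API:

```typescript
// Hypothetical agent-generated script: each import is a thin wrapper module
// that forwards the call to an MCP server, so only the tools this task needs
// are loaded instead of every tool definition in the ecosystem.
import { getDocument } from "./servers/google-drive/getDocument";
import { updateRecord } from "./servers/salesforce/updateRecord";

async function run(): Promise<string> {
  // The full document stays inside the sandbox; it never enters the model context.
  const doc = await getDocument({ documentId: "doc_123" });

  // Write it into the CRM directly from code rather than via a second model turn.
  await updateRecord({
    objectType: "SalesMeeting__c",
    recordId: "rec_456",
    data: { Notes__c: doc.content },
  });

  // Only a short confirmation string is returned to the model.
  return "Meeting notes copied from Drive into Salesforce.";
}

run().then(console.log);
```

Because the document content moves between the two servers entirely inside the execution environment, the model pays tokens only for the short return value rather than for the full intermediate payload.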
By treating MCP servers as code APIs, agents can now filter or transform large datasets before results reach the model, perform conditional logic, and manage loops or retries inside the execution environment. This improves efficiency for complex workflows such as CRM updates, document processing, or multi-system synchronization.
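The same idea applied to filtering and control flow, again as a hedged sketch: the wrapper names (listTransactions, sendAlert) and their parameters are assumptions for illustration only.

```typescript
// Hypothetical agent-generated code: fetch a large dataset, filter it in the
// sandbox, and retry a flaky downstream call, so the model only sees a summary.
import { listTransactions } from "./servers/billing/listTransactions";
import { sendAlert } from "./servers/slack/sendAlert";

async function run(): Promise<string> {
  // Potentially thousands of rows; none of them pass through the model context.
  const all = await listTransactions({ since: "2025-10-01" });
  const failed = all.filter((t) => t.status === "failed");

  // Conditional logic and retries happen in code, not across repeated model turns.
  if (failed.length > 0) {
    for (let attempt = 1; attempt <= 3; attempt++) {
      try {
        await sendAlert({ channel: "#billing", text: `${failed.length} failed transactions` });
        break;
      } catch (err) {
        if (attempt === 3) throw err;
      }
    }
  }

  // Only the aggregate result is returned to the model.
  return `Checked ${all.length} transactions; ${failed.length} failed.`;
}

run().then(console.log);
```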
Code execution also enhances privacy and state management. Intermediate data remains within the execution environment unless explicitly returned, and sensitive information such as emails or phone numbers can be tokenized before reaching the model. Agents can persist data and code modules as reusable “skills,” enabling faster task reuse and long-term state tracking across sessions.
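As one possible illustration of that tokenization step, the sketch below swaps contact details for opaque placeholders before any text is returned to the model; the getContact wrapper and its fields are hypothetical.

```typescript
// Hypothetical PII tokenization inside the execution environment: the mapping
// from placeholders to real values stays in the sandbox and never reaches the model.
import { getContact } from "./servers/crm/getContact";

const vault = new Map<string, string>(); // placeholder -> real value

function tokenize(value: string, kind: string): string {
  const placeholder = `[${kind}_${vault.size + 1}]`;
  vault.set(placeholder, value);
  return placeholder;
}

async function run(): Promise<string> {
  const contact = await getContact({ id: "42" });
  // The model sees "[EMAIL_1]" and "[PHONE_2]", never the raw values.
  return `Contact: ${contact.name}, ${tokenize(contact.email, "EMAIL")}, ${tokenize(contact.phone, "PHONE")}`;
}

run().then(console.log);
```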
Anthropic cautions that the approach requires secure sandboxing and resource monitoring, since executing model-generated code introduces new infrastructure demands. Still, for developers building large-scale or enterprise-grade agents, code execution with MCP represents a major step toward more efficient, cost-controlled, and composable AI systems.