Anthropic expands Claude Sonnet 4 to 1M-token context
Anthropic has increased the context window for Claude Sonnet 4 to 1 million tokens, allowing the model to process entire codebases or dozens of research papers in a single prompt. The upgrade is available in public beta through Anthropic’s API and Amazon Bedrock, with Google Cloud Vertex AI integration planned. The change enables more coherent large-scale reasoning and workflow automation for developers and enterprises.

Anthropic’s latest update multiplies Claude Sonnet 4’s maximum context length by five, moving from 200,000 to 1 million tokens. This capacity allows users to input over 75,000 lines of code or a full collection of related research documents without splitting them into smaller parts. The company says the expansion reduces fragmentation and maintains more consistent reasoning across extended tasks.
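The "75,000 lines of code" figure can be sanity-checked with a back-of-envelope estimate. A minimal sketch, assuming the common ~4-characters-per-token heuristic (not an official tokenizer; real counts vary by language and content):

```python
# Rough estimator: does a set of documents fit in a 1,000,000-token window?
# CHARS_PER_TOKEN = 4 is a heuristic assumption, not Anthropic's tokenizer.

CONTEXT_LIMIT = 1_000_000
CHARS_PER_TOKEN = 4  # heuristic assumption

def estimate_tokens(text: str) -> int:
    """Crude token estimate from character count."""
    return len(text) // CHARS_PER_TOKEN

def fits_in_context(documents: list[str], limit: int = CONTEXT_LIMIT) -> bool:
    """True if the combined estimated token count stays under the limit."""
    return sum(estimate_tokens(d) for d in documents) <= limit

# ~75,000 lines of code at ~50 characters per line:
line = "x" * 50 + "\n"
codebase = [line] * 75_000
print(fits_in_context(codebase))  # True: roughly 0.9M estimated tokens
```

At ~50 characters per line, 75,000 lines come to roughly 900,000 estimated tokens, which is consistent with the company's claim that such a codebase fits in a single prompt.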
The 1 million token context is currently available in public beta on Anthropic’s API at Tier 4 or via custom rate limits, as well as on Amazon Bedrock. Integration with Google Cloud Vertex AI is scheduled for release in the coming weeks. Anthropic has adjusted its pricing for prompts exceeding 200,000 tokens, while offering prompt caching and batch processing to manage cost and latency.
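For developers on the API, opting into the beta amounts to sending an extra header with a Messages request. A minimal sketch using the official Python SDK's request shape; the model id and beta header value reflect Anthropic's documentation at the time of writing and should be checked against current docs before use:

```python
# Sketch: build the kwargs for a Messages API call that opts into the
# 1M-token context beta. Building the dict separately keeps it inspectable
# without sending a request (which requires an API key).

def build_long_context_request(prompt: str) -> dict:
    return {
        "model": "claude-sonnet-4-20250514",  # model id per docs at time of writing
        "max_tokens": 4096,
        "messages": [{"role": "user", "content": prompt}],
        # Beta flag per docs at time of writing; verify before relying on it:
        "extra_headers": {"anthropic-beta": "context-1m-2025-08-07"},
    }

# With the official SDK, one would then call:
#   from anthropic import Anthropic
#   client = Anthropic()
#   response = client.messages.create(**build_long_context_request(big_prompt))
```

Prompt caching and batch processing, which Anthropic recommends for managing cost at this scale, are configured through the same Messages API surface.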
Early adopters include AI-first development platforms such as Bolt.new and iGent AI. Both report using the expanded context to execute full engineering workflows, including multi-day coding sessions and project-wide refactoring, without intermediate handoffs. For research teams, the new limit enables full-document ingestion and multi-source synthesis in one pass.
The move follows a broader industry trend of expanding model context to support agent-based systems and complex autonomous workflows. By reducing the need for manual context management, Anthropic aims to make Claude more effective for enterprise-scale deployment.