Anthropic has boosted the context window of its Claude Sonnet 4 model to one million tokens, a fivefold increase from the previous 200,000 limit. This upgrade enables the AI to process codebases exceeding 75,000 lines or analyse numerous research documents in a single request. The expanded context window facilitates more comprehensive data-intensive applications, including large-scale code analysis, document synthesis, and the development of context-aware agents capable of managing intricate workflows.
The enhanced capacity allows Claude to understand project architecture, identify cross-file dependencies, and suggest system-wide improvements. It can also analyse relationships across extensive document sets, like legal contracts and technical specifications, while maintaining complete context. This upgrade is now in public beta via the Anthropic API and Amazon Bedrock, with Google's Vertex AI support arriving soon.
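For developers trying the beta, a long-context request looks much like a standard Messages API call with an opt-in beta header. The sketch below only assembles the request parameters rather than sending them; the model id and the beta header value are assumptions based on the launch announcement, so check the current API documentation before relying on them.

```python
# Sketch: assembling a >200K-token code-analysis request for the 1M-token beta.
# Assumptions (verify against Anthropic's API docs): the model id
# "claude-sonnet-4-20250514" and the beta flag "context-1m-2025-08-07".

def build_long_context_request(codebase_files: dict[str, str]) -> dict:
    """Build Messages API parameters that pack a whole codebase into one prompt."""
    # Concatenate every file into a single user message so the model can see
    # cross-file dependencies in one request.
    corpus = "\n\n".join(
        f"### {path}\n{source}" for path, source in sorted(codebase_files.items())
    )
    return {
        "model": "claude-sonnet-4-20250514",  # assumed model id
        "max_tokens": 4096,
        # Assumed beta header enabling the 1M-token context window:
        "extra_headers": {"anthropic-beta": "context-1m-2025-08-07"},
        "messages": [
            {
                "role": "user",
                "content": f"Review this codebase for cross-file issues:\n\n{corpus}",
            }
        ],
    }

# With the official Python SDK, sending the request would look roughly like:
#   import anthropic
#   client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
#   response = client.messages.create(**build_long_context_request(files))
params = build_long_context_request({"app.py": "print('hello')"})
```

The same parameter shape applies on Amazon Bedrock via its own SDK, though the model identifier and header mechanics differ per platform.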
While the increased computational demands mean higher per-token pricing for prompts exceeding 200,000 tokens, prompt caching and batch processing can help mitigate costs. This enhancement positions Claude Sonnet 4 as a strong contender for developers tackling complex projects requiring extensive contextual understanding.
Related Articles
- Anthropic's Risky Customer Reliance
- OpenAI Debuts GPT-5 Model
- Claude 4.1 tops benchmarks
- Anthropic Advances Against GPT-5