DeepSeek releases V3.1 with 685B parameters and 128k context window
DeepSeek has launched its latest open-source AI model, DeepSeek-V3.1-Base, which comes with 685 billion parameters and a 128,000-token context length. The model posts benchmark results close to leading proprietary systems and is freely available for download, marking a significant move in the open-source AI landscape.
DeepSeek has unveiled DeepSeek-V3.1-Base, its newest large language model built with approximately 685 billion parameters. The release adds a 128,000-token context window and multi-format tensor support, including BF16, F8_E4M3, and F32. The model is distributed on Hugging Face in safetensors format, though no inference provider has yet integrated it.
Early benchmark data positions DeepSeek-V3.1 near the performance of leading proprietary models. The system scored 71.6 percent on the Aider coding benchmark, slightly higher than Anthropic’s Claude Opus 4. DeepSeek emphasized that the model achieves these results at lower projected costs compared with closed-source alternatives.
The release continues DeepSeek's strategy of open-sourcing frontier models. By making such a large-scale system publicly available, the company positions itself as a challenger to US-based firms that tightly control access to high-end AI systems. Developers and enterprises can download the model weights directly, enabling on-premise experimentation and deployment.
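For readers who want to pull the weights themselves, the following is a minimal sketch using the huggingface_hub library. The repository id and file patterns are assumptions based on the naming described above rather than confirmed values; check the actual model card before running, and note that the full checkpoint spans hundreds of gigabytes.

```python
# Minimal sketch: download the DeepSeek-V3.1-Base checkpoint for local, on-premise use.
# The repo_id below is an assumption based on DeepSeek's usual Hugging Face naming;
# verify it against the model card. Ensure sufficient disk space before starting.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="deepseek-ai/DeepSeek-V3.1-Base",           # assumed repository id
    allow_patterns=["*.safetensors", "*.json", "*.py"],  # weights plus config/tokenizer/modeling files
)
print(f"Checkpoint downloaded to: {local_dir}")
```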
The model is expected to appeal to researchers, startups, and companies seeking to train or fine-tune systems without vendor lock-in. Its high parameter count and large context window could benefit tasks that require reasoning over extended documents, large coding projects, and multi-turn conversations. Analysts note that the model's accessibility and projected cost advantages may drive adoption among organizations that have so far avoided closed-source alternatives.