Mistral debuts 'Magistral', a reasoning-first model for open source and enterprise
Two new LLMs — open and proprietary — aim to tackle legal, financial, and regulated tasks with traceable logic and multilingual transparency. It’s a calculated move — not to outscale competitors, but to out-think them.
Mistral has launched Magistral, its first reasoning-centric model series, in both open-source and enterprise forms. Magistral Small (24B parameters) is fully open and scored 70.7% on AIME 2024 (83.3% with majority voting), while Magistral Medium — a commercial variant — scored 73.6% (90% with majority voting), putting it within reach of frontier GPT, Claude, and Gemini models. Designed to excel at step-by-step logic, both models support chain-of-thought reasoning, multilingual clarity, and transparent reasoning traces — a deliberate nod to use cases in law, finance, and healthcare.
Mistral claims its enterprise "Think mode" and rapid "Flash Answers" deliver up to 10× faster token throughput than rival models. The open model extends Mistral's track record of releasing production-grade LLMs to the public, while the enterprise tier positions Magistral Medium as a drop-in reasoning engine for domains that demand verifiability.
Pure Neo Signal:
If you like what we do, please share it on your social media and feel free to buy us a coffee.