Hugging Face
Hugging Face Releases Full SmolLM3 Training Stack and Checkpoints
Hugging Face has open-sourced the full training and evaluation pipeline for SmolLM3, its latest small language model. The release includes more than 100 intermediate checkpoints, and the model supports dual-mode reasoning, multilingual tasks, and long-context inference. The release marks a major step toward transparency and reproducibility in compact AI models.
Hugging Face Releases SmolLM3, a 3B Multilingual LLM With Built-In Reasoning
The Smol Models team at Hugging Face has launched SmolLM3, a compact open-weight model that blends efficiency, multilingual reach, and long-context reasoning. Trained on 11 trillion tokens, the 3-billion-parameter LLM matches or exceeds the performance of larger models and introduces native support for six languages and structured thinking.
July 8, 2025