Hugging Face Releases SmolLM3, a 3B Multilingual LLM With Built-In Reasoning


The Smol Models team at Hugging Face has launched SmolLM3, a compact open-weight model that blends efficiency, multilingual reach, and long-context reasoning. Trained on 11 trillion tokens, the 3-billion-parameter LLM matches or exceeds the performance of larger models and introduces native support for six languages and structured reasoning.

July 8, 2025
Georg S. Kuklick

SmolLM3 is positioned as a small yet capable foundation model. At just 3 billion parameters, it outperforms Meta's Llama-3.2-3B and Alibaba's Qwen2.5-3B, while holding its own against 4B models across a wide range of benchmarks. The model ships with built-in reasoning control via two special modes: /think for chain-of-thought-style reasoning, and /no_think for faster, direct outputs. This dual-mode setup gives developers explicit control over response style, something usually achieved only through prompt-engineering tricks.
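A minimal sketch of how the two modes could be toggled, assuming (per the release notes) that the /think and /no_think flags are passed through the system prompt; the model ID and the downstream `apply_chat_template` usage are assumptions to verify against the model card:

```python
# Sketch: selecting SmolLM3's reasoning mode via the system prompt.
# Assumption: the /think and /no_think flags go in the system message,
# as described in the SmolLM3 release.

def build_messages(user_prompt: str, think: bool = True) -> list[dict]:
    """Build a chat message list with the reasoning-mode flag prepended."""
    flag = "/think" if think else "/no_think"
    return [
        {"role": "system", "content": flag},
        {"role": "user", "content": user_prompt},
    ]

# Assumed usage with transformers (model ID taken from the release):
# from transformers import AutoTokenizer
# tok = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM3-3B")
# prompt = tok.apply_chat_template(
#     build_messages("Summarize this article.", think=False),
#     tokenize=False, add_generation_prompt=True)
```

The same message list works for either mode, so an application can expose "fast" versus "thorough" answers as a single boolean switch.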

The model supports six languages—English, French, Spanish, German, Italian, and Portuguese—and was trained with Hugging Face's own open-source training stack. SmolLM3 was pretrained on a filtered 11-trillion-token dataset and supports context windows up to 64k tokens, with extension to 128k possible via YaRN.
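The 64k-to-128k extension can be sketched as a RoPE-scaling configuration. The key names below follow the transformers `rope_scaling` convention, and the specific values are assumptions derived from the announced context lengths, not settings confirmed by the release:

```python
# Hypothetical YaRN configuration for extending SmolLM3's context
# from the native 64k tokens to 128k. Values are derived from the
# announced context lengths, not taken from an official config.
base_context = 65_536      # 64k tokens, the pretrained window
target_context = 131_072   # 128k tokens, the YaRN-extended window

rope_scaling = {
    "rope_type": "yarn",
    "factor": target_context / base_context,  # 2.0x extension
    "original_max_position_embeddings": base_context,
}

# Assumed usage:
# model = AutoModelForCausalLM.from_pretrained(
#     "HuggingFaceTB/SmolLM3-3B", rope_scaling=rope_scaling)
```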

For researchers and developers, Hugging Face has published the entire training recipe and engineering blueprint. This transparency reinforces the lab's push for open, reproducible AI. The model is optimized for deployment in constrained environments, offering a viable alternative to larger closed models for multilingual reasoning tasks. For businesses operating at the edge or scaling down costs, SmolLM3 could be a practical backbone for chatbots, assistants, or knowledge tools in global markets.
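Why a 3B model fits constrained hardware can be made concrete with back-of-envelope arithmetic on the weights alone. This is a rough sketch that ignores KV cache, activations, and runtime overhead, not a measured footprint:

```python
# Rough estimate of weight memory for a model of a given size.
# Ignores KV cache, activations, and runtime overhead.

def approx_weight_gib(params_billions: float, bits_per_param: int) -> float:
    """Approximate memory needed to hold the weights, in GiB."""
    bytes_total = params_billions * 1e9 * bits_per_param / 8
    return bytes_total / (1024 ** 3)

# 3B parameters at fp16 (~5.6 GiB) fit on a consumer GPU;
# 4-bit quantization (~1.4 GiB) reaches laptop- and edge-class devices.
print(approx_weight_gib(3.0, 16))  # fp16
print(approx_weight_gib(3.0, 4))   # 4-bit quantized
```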
