Mistral Small 3 is a 24B-parameter language model optimized for low-latency performance across common AI tasks. Released under the Apache 2.0 license, it features both pre-trained and instruction-tuned versions designed for efficient local deployment. The model achieves 81% accuracy on the MMLU benchmark and performs competitively with larger models like Llama 3.3 70B and Qwen 32B, while operating at three times the speed on equivalent hardware. [Read the blog post about the model here.](https://mistral.ai/news/mistral-small-3/)
| Signal | Strength | Weight | Impact |
|---|---|---|---|
| Capabilities | 50 | 30% | +15.0 |
| Context Window | 72 | 15% | +10.7 |
| Output Capacity | 70 | 15% | +10.5 |
| Recency | 57 | 15% | +8.6 |
| Pricing | 0 | 25% | +0.0 |
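The Impact column appears to be each signal's strength multiplied by its weight (e.g. 50 × 30% = 15.0). A minimal sketch of that weighted-sum computation, assuming Impact = Strength × Weight (the page does not state the formula; note the listed Context Window impact of 10.7 differs slightly from the computed 72 × 0.15 = 10.8, likely due to rounding or a live-updated strength value):

```python
# Sketch of the weighted-signal score from the table above.
# Strengths and weights are copied from the table; the assumption
# (not stated on the page) is that Impact = Strength * Weight.
signals = {
    "Capabilities":    (50, 0.30),
    "Context Window":  (72, 0.15),
    "Output Capacity": (70, 0.15),
    "Recency":         (57, 0.15),
    "Pricing":         (0,  0.25),
}

# Per-signal impact and the combined weighted score.
impacts = {name: strength * weight for name, (strength, weight) in signals.items()}
overall = sum(impacts.values())
```

Under this reading, each signal contributes proportionally to its weight, so a zero Pricing strength removes a full quarter of the achievable score.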
Community and practitioner feedback adds real-world signal on top of benchmarks and pricing.
Share your experience with Mistral Small 3 and help the community make better decisions.
Cost estimator: an estimated saving of $41.38/month versus the category average.