Mistral Small 3, by Mistral AI
Mistral Small 3 is a 24B-parameter language model optimized for low-latency performance across common AI tasks. Released under the Apache 2.0 license, it features both pre-trained and instruction-tuned versions designed for efficient local deployment. The model achieves 81% accuracy on the MMLU benchmark and performs competitively with larger models like Llama 3.3 70B and Qwen 32B, while operating at three times the speed on equivalent hardware. [Read the blog post about the model here.](https://mistral.ai/news/mistral-small-3/)
| Signal | Strength | Weight | Impact |
|---|---|---|---|
| Capabilities | 43 | 25% | +10.7 |
| Context Window | 72 | 15% | +10.8 |
| Recency | 61 | 15% | +9.1 |
| Output Capacity | 70 | 10% | +7.0 |
| Versatility | 33 | 10% | +3.3 |
| Pricing Tier | 0 | 25% | +0.0 |
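The Impact column is simply Strength × Weight, with the weights summing to 100%. A minimal Python sketch of that weighted-sum scoring, with the strengths and weights copied from the table above (the table truncates each product to one decimal place, so individual products may differ from it in the last digit):

```python
# Scoring scheme implied by the table: each signal contributes
# strength * weight to the overall score. Strengths are on a
# 0-100 scale; the weights sum to 1.0.
signals = {
    "Capabilities":    (43, 0.25),
    "Context Window":  (72, 0.15),
    "Recency":         (61, 0.15),
    "Output Capacity": (70, 0.10),
    "Versatility":     (33, 0.10),
    "Pricing Tier":    (0,  0.25),
}

# Weighted sum across all signals gives the overall score.
total = sum(strength * weight for strength, weight in signals.values())
print(f"Total score: {total:.1f}")  # → Total score: 41.0
```

Because the Pricing Tier strength is 0, its 25% weight contributes nothing, which caps how high the overall score can go for this model.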
Cost Estimator: an estimated saving of $40.20/month vs the category average.