The complete Mistral AI model lineup: 27 models spanning the Mistral Large flagship, the Mixtral mixture-of-experts series, the Codestral and Devstral coding models, and the lightweight Ministral variants. Mistral is Europe's leading AI lab, known for open weights and efficient architectures. Compare scores, API pricing, context windows, and capabilities. Updated hourly from live data.
27 models from Mistral AI, sorted by output price (lowest first)
| Model | Input ($/1M tokens) | Output ($/1M tokens) |
|---|---|---|
| Mistral Small 3.1 24B (free) | Free | Free |
| Mistral Nemo | $0.020 | $0.040 |
| Mistral Small 3 | $0.050 | $0.080 |
| Ministral 3 3B 2512 | $0.100 | $0.100 |
| Pixtral 12B | $0.100 | $0.100 |
| Mistral Small 3.1 24B | $0.030 | $0.110 |
| Ministral 3 8B 2512 | $0.150 | $0.150 |
| Mistral 7B Instruct v0.1 | $0.110 | $0.190 |
| Ministral 3 14B 2512 | $0.200 | $0.200 |
| Mistral Small 3.2 24B | $0.075 | $0.200 |
| Devstral Small 1.1 | $0.100 | $0.300 |
| Mistral Small Creative | $0.100 | $0.300 |
| Voxtral Small 24B 2507 | $0.100 | $0.300 |
| Mixtral 8x7B Instruct | $0.540 | $0.540 |
| Mistral Small 4 | $0.150 | $0.600 |
| Saba | $0.200 | $0.600 |
| Codestral 2508 | $0.300 | $0.900 |
| Mistral Large 3 2512 | $0.500 | $1.50 |
| Mistral Medium 3.1 | $0.400 | $2.00 |
| Devstral 2 2512 | $0.400 | $2.00 |
| Mistral Medium 3 | $0.400 | $2.00 |
| Devstral Medium | $0.400 | $2.00 |
| Pixtral Large 2411 | $2.00 | $6.00 |
| Mistral Large | $2.00 | $6.00 |
| Mistral Large 2407 | $2.00 | $6.00 |
| Mistral Large 2411 | $2.00 | $6.00 |
| Mixtral 8x22B Instruct | $2.00 | $6.00 |
Mistral AI is Europe's leading AI company, headquartered in Paris. Founded by former DeepMind and Meta researchers, Mistral has rapidly become a major force in the AI landscape by releasing high-quality models with open weights. Their commitment to open-source and European AI sovereignty has made them a preferred choice for organizations seeking alternatives to US-based AI providers.
Mistral pioneered the mainstream adoption of Mixture-of-Experts (MoE) architecture with Mixtral. MoE models use multiple specialized sub-networks ("experts") and a routing mechanism that activates only a subset for each token, delivering the quality of a much larger model at a fraction of the compute cost. This makes Mixtral models exceptionally efficient for both inference and fine-tuning.
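The routing idea described above can be sketched in a few lines. This is a minimal, illustrative single-token MoE forward pass (not Mixtral's actual implementation): a router scores every expert, only the top-k experts run, and their outputs are combined with renormalized gate weights. All dimensions and the use of plain linear experts are simplifying assumptions.

```python
import numpy as np

def moe_layer(x, experts, gate_w, top_k=2):
    """Toy Mixture-of-Experts forward pass for one token.

    x        : (d,) input vector for a single token
    experts  : list of (d, d) weight matrices, one per expert
    gate_w   : (d, n_experts) router weights
    top_k    : number of experts activated per token
    """
    # Router produces one logit per expert; softmax over all experts.
    logits = x @ gate_w
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()

    # Keep only the top-k experts and renormalize their gate weights.
    top = np.argsort(probs)[-top_k:]
    weights = probs[top] / probs[top].sum()

    # Only the selected experts execute -- the source of the compute savings.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

rng = np.random.default_rng(0)
d, n_experts = 16, 8   # Mixtral 8x7B routes each token to 2 of 8 experts
x = rng.standard_normal(d)
experts = [rng.standard_normal((d, d)) for _ in range(n_experts)]
gate_w = rng.standard_normal((d, n_experts))
y = moe_layer(x, experts, gate_w, top_k=2)
print(y.shape)  # (16,)
```

With 8 experts and top_k=2, only a quarter of the expert parameters are touched per token, which is why an MoE model can match a much larger dense model's quality at a fraction of the inference cost.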
Mistral releases many of their models with open weights under permissive licenses, enabling self-hosting, fine-tuning, and community experimentation. Models like Mixtral 8x22B and Mistral 7B have become staples of the open-source AI ecosystem. Codestral and Devstral extend this philosophy to code-specialized models, providing strong coding assistants that can run on your own infrastructure.
Mistral models are available both through Mistral's own API (La Plateforme) and major cloud providers including AWS Bedrock and Azure. For self-hosting, open-weight models can be deployed via vLLM, TGI, or Ollama. The prices shown here reflect current per-token API rates. Self-hosting eliminates per-token costs but requires GPU infrastructure.
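To make the per-token rates concrete, here is a small cost helper. The formula (tokens divided by one million, times the listed rate) is standard per-million-token billing; the example prices are taken from the table above, and the token counts are hypothetical.

```python
def request_cost_usd(input_tokens: int, output_tokens: int,
                     in_price: float, out_price: float) -> float:
    """Cost of one API call given per-million-token prices in USD."""
    return input_tokens / 1e6 * in_price + output_tokens / 1e6 * out_price

# Example: Mistral Medium 3.1 at the table rates ($0.40 in, $2.00 out),
# for a hypothetical 20k-token prompt producing a 1.5k-token reply.
cost = request_cost_usd(20_000, 1_500, 0.40, 2.00)
print(f"${cost:.4f}")  # $0.0110
```

Running the same workload against the table makes the spread obvious: at Mistral Nemo rates ($0.020 in, $0.040 out) the identical request costs well under a tenth of a cent.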
Explore Mistral comparisons, rankings, and pricing across the full model landscape.
Mistral offers a range from the efficient Mistral Small to the powerful Mistral Large. The lineup includes Mixtral (mixture-of-experts architecture), Codestral (coding-specialized), and Pixtral (multimodal).
Mistral offers both open-weight and proprietary models. Mistral 7B and Mixtral 8x7B are released with open weights under the Apache 2.0 license, while Mistral Large and Codestral are available through the commercial API.
Mistral Large competes with GPT-4o on many benchmarks at lower prices. Mixtral offers strong performance for open-source deployments. Check our leaderboard for the latest benchmark comparisons.