MiMo-V2-Flash is an open-source foundation language model developed by Xiaomi. It is a Mixture-of-Experts model with 309B total parameters and 15B active parameters, built on a hybrid attention architecture. MiMo-V2-Flash supports a hybrid-thinking toggle and a 256K context window, and excels at reasoning, coding, and agent scenarios. On SWE-bench Verified and SWE-bench Multilingual, MiMo-V2-Flash ranks as the #1 open-source model globally, delivering performance comparable to Claude Sonnet 4.5 while costing only about 3.5% as much. Users can control the reasoning behaviour with the `reasoning` config's `enabled` boolean. [Learn more in our docs](https://openrouter.ai/docs/use-cases/reasoning-tokens#enable-reasoning-with-default-config).
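As a sketch, a chat-completions payload with the reasoning toggle enabled might look like the following. The model slug `xiaomi/mimo-v2-flash` is an assumption here; check the model page for the exact ID, and see the linked docs for the full `reasoning` config options.

```python
import json

# Hypothetical OpenRouter chat-completions payload. The model slug
# "xiaomi/mimo-v2-flash" is assumed -- verify it on the model page.
payload = {
    "model": "xiaomi/mimo-v2-flash",
    "messages": [
        {"role": "user", "content": "Explain Mixture-of-Experts routing briefly."}
    ],
    # Hybrid-thinking toggle: set "enabled" to False to skip reasoning tokens.
    "reasoning": {"enabled": True},
}

# Serialized body you would POST to the chat completions endpoint
# (https://openrouter.ai/api/v1/chat/completions) with an
# "Authorization: Bearer <OPENROUTER_API_KEY>" header.
body = json.dumps(payload)
print(body)
```

Setting `"enabled": False` instead turns the thinking phase off for latency-sensitive calls, while the rest of the request stays unchanged.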
| Signal | Strength | Weight | Impact |
|---|---|---|---|
| Capabilities | 67 | 30% | +20.0 |
| Recency | 100 | 15% | +15.0 |
| Context Window | 86 | 15% | +12.9 |
| Output Capacity | 80 | 15% | +12.0 |
| Pricing | 0 | 25% | +0.1 |