by MiniMax
MiniMax-01 combines MiniMax-Text-01 for text generation and MiniMax-VL-01 for image understanding. It has 456 billion total parameters, with 45.9 billion parameters activated per inference, and can handle a context of up to 4 million tokens. The text model adopts a hybrid architecture that combines Lightning Attention, Softmax Attention, and Mixture-of-Experts (MoE). The image model adopts the "ViT-MLP-LLM" framework and is trained on top of the text model. To read more about the release, see: https://www.minimaxi.com/en/news/minimax-01-series-2
| Signal | Strength | Weight | Impact |
|---|---|---|---|
| Context Window | 95 | 15% | +14.3 |
| Output Capacity | 100 | 10% | +10.0 |
| Recency | 58 | 15% | +8.7 |
| Capabilities | 29 | 25% | +7.1 |
| Versatility | 50 | 10% | +5.0 |
| Pricing Tier | 1 | 25% | +0.3 |
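The Impact column appears to be Strength × Weight (with minor rounding in the displayed values). A minimal sketch of that weighted scoring, assuming exactly that formula; the signal names and numbers are taken from the table above:

```python
# Hypothetical reconstruction of the signal-scoring table above.
# Assumption: impact = strength * weight; displayed impacts are rounded.
signals = {
    "Context Window":  (95, 0.15),
    "Output Capacity": (100, 0.10),
    "Recency":         (58, 0.15),
    "Capabilities":    (29, 0.25),
    "Versatility":     (50, 0.10),
    "Pricing Tier":    (1, 0.25),
}

# Per-signal impact and the overall weighted score.
impacts = {name: strength * weight for name, (strength, weight) in signals.items()}
total_score = sum(impacts.values())
```

For example, Context Window contributes 95 × 0.15 = 14.25, which the table shows rounded as +14.3.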
Cost Estimator: you save $36.09/month vs. the category average.