by Alibaba
The Qwen3.5 122B-A10B native vision-language model is built on a hybrid architecture that combines a linear attention mechanism with a sparse mixture-of-experts (MoE) design, improving inference efficiency. In overall performance it is second only to Qwen3.5-397B-A17B; its text capabilities significantly outperform Qwen3-235B-2507, and its visual capabilities surpass Qwen3-VL-235B.
| Signal | Strength | Weight | Impact |
|---|---|---|---|
| Capabilities | 71 | 25% | +17.9 |
| Recency | 100 | 15% | +15.0 |
| Context Window | 86 | 15% | +12.9 |
| Output Capacity | 80 | 10% | +8.0 |
| Versatility | 67 | 10% | +6.7 |
| Pricing Tier | 2 | 25% | +0.6 |
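The impact column appears to be each signal's strength multiplied by its weight (a plain weighted sum; this is an inference from the numbers, not a documented formula, and a couple of listed values differ slightly, suggesting rounding or additional adjustments). A minimal sketch of that scoring under this assumption:

```python
# Hypothetical reconstruction of the listing's scoring model:
# impact = strength * weight, summed into an overall score.
# Signal names and values are taken from the table above.
signals = {
    "Capabilities":    (71, 0.25),
    "Recency":         (100, 0.15),
    "Context Window":  (86, 0.15),
    "Output Capacity": (80, 0.10),
    "Versatility":     (67, 0.10),
    "Pricing Tier":    (2, 0.25),
}

def impact(strength: float, weight: float) -> float:
    """Contribution of one signal to the overall score."""
    return round(strength * weight, 1)

total = sum(impact(s, w) for s, w in signals.values())
for name, (s, w) in signals.items():
    print(f"{name}: +{impact(s, w)}")
print(f"Overall score: {round(total, 1)}")
```

Under this formula, Recency (100 × 15% = +15.0) and Context Window (86 × 15% = +12.9) reproduce the table exactly, while Capabilities and Pricing Tier come out slightly lower than listed, so the site likely rounds strengths or applies a small correction.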
Cost estimator: you save $31.49/month vs. the category average.