Qwen3-235B-A22B, by Alibaba's Qwen team, is a 235B-parameter mixture-of-experts (MoE) model that activates 22B parameters per forward pass. It supports seamless switching between a "thinking" mode for complex reasoning, math, and code tasks, and a "non-thinking" mode for efficient general conversation. The model demonstrates strong reasoning, multilingual support (100+ languages and dialects), advanced instruction following, and agent tool-calling capabilities. It natively handles a 32K-token context window and extends to 131K tokens using YaRN-based scaling.
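As a rough illustration of how these two features are typically exposed, here is a minimal Python sketch using Hugging Face Transformers. It assumes the checkpoint id `Qwen/Qwen3-235B-A22B`, the `enable_thinking` chat-template flag, and a YaRN `rope_scaling` config, as recent Qwen releases document; none of this is confirmed by the listing above.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed checkpoint id; not stated in the listing above.
model_id = "Qwen/Qwen3-235B-A22B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
messages = [{"role": "user", "content": "Factor x^2 - 5x + 6."}]

# Toggle between "thinking" and "non-thinking" modes at template time
# (flag assumed to be supported by the model's chat template).
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True,  # False for fast conversational replies
)

# Extend the native 32K context toward ~131K tokens via YaRN scaling:
# factor = 131072 / 32768 = 4. The kwarg is assumed to override the
# model config's rope_scaling entry, as Transformers allows.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    rope_scaling={
        "rope_type": "yarn",
        "factor": 4.0,
        "original_max_position_embeddings": 32768,
    },
)
```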
| Signal | Strength | Weight | Impact |
|---|---|---|---|
| Capabilities | 57 | 25% | +14.3 |
| Context Window | 81 | 15% | +12.2 |
| Recency | 77 | 15% | +11.5 |
| Output Capacity | 65 | 10% | +6.5 |
| Versatility | 33 | 10% | +3.3 |
| Pricing Tier | 2 | 25% | +0.5 |
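The Impact column appears to be Strength × Weight, with the rows summing to a composite score. A minimal Python sketch, with the values copied from the table and the formula inferred from the rounding (so an assumption, not a documented method):

```python
# (strength, weight) per signal, taken from the table above.
signals = {
    "Capabilities":    (57, 0.25),
    "Context Window":  (81, 0.15),
    "Recency":         (77, 0.15),
    "Output Capacity": (65, 0.10),
    "Versatility":     (33, 0.10),
    "Pricing Tier":    (2,  0.25),
}

# Per-signal impact = strength * weight; printed to two decimals,
# while the table rounds to one (e.g. 14.25 -> +14.3).
for name, (strength, weight) in signals.items():
    print(f"{name:16s} impact = {strength * weight:.2f}")

# Composite score: sum of impacts, about 48.3 after rounding.
composite = sum(s * w for s, w in signals.values())
print(f"Composite: {composite:.2f}")
```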
Cost estimate: you save an estimated $32.14/month versus the category average.