by Alibaba
Qwen3-235B-A22B-Thinking-2507 is a high-performance, open-weight Mixture-of-Experts (MoE) language model optimized for complex reasoning tasks. It activates 22B of its 235B parameters per forward pass and natively supports a context window of up to 262,144 tokens. This "thinking-only" variant is tuned for structured logical reasoning, mathematics, science, and long-form generation, with strong benchmark results on AIME, SuperGPQA, LiveCodeBench, and MMLU-Redux. It always operates in reasoning mode — completions contain the model's chain of thought closed by a `</think>` tag — and is designed for long outputs (up to 81,920 tokens) in challenging domains. The model is instruction-tuned and excels at step-by-step reasoning, tool use, agentic workflows, and multilingual tasks. This release is the most capable open-source variant in the Qwen3-235B series, surpassing many closed models in structured reasoning use cases.
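Because the variant always emits its reasoning before the answer, downstream code typically needs to separate the two. A minimal sketch, assuming (as is common for Qwen3 thinking-mode templates) that the opening `<think>` tag is supplied by the chat template, so the raw completion is the reasoning text, a single `</think>`, then the final answer — `split_reasoning` is a hypothetical helper name:

```python
def split_reasoning(output: str) -> tuple[str, str]:
    """Split a thinking-mode completion into (reasoning, answer).

    Assumption: the chat template opens the reasoning block, so the raw
    completion contains chain-of-thought text, one `</think>` tag, and
    then the user-facing answer.
    """
    marker = "</think>"
    if marker in output:
        reasoning, answer = output.split(marker, 1)
        return reasoning.strip(), answer.strip()
    # No closing tag (e.g. output truncated mid-thought):
    # treat the whole completion as the answer.
    return "", output.strip()
```

In practice you would show users only the second element and log or discard the first.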
| Signal | Strength | Weight | Impact |
|---|---|---|---|
| Capabilities | 71 | 25% | +17.9 |
| Recency | 93 | 15% | +13.9 |
| Context Window | 81 | 15% | +12.2 |
| Pricing Tier | 30 | 25% | +7.5 |
| Versatility | 33 | 10% | +3.3 |
| Output Capacity | 20 | 10% | +2.0 |
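The scorecard above reads as a weighted sum: each impact tracks strength × weight (the listed impacts differ from the raw products by small rounding, so exact figures should not be assumed). A minimal sketch of that arithmetic, using the table's values:

```python
# Signal -> (strength, weight), transcribed from the scorecard above.
signals = {
    "Capabilities":    (71, 0.25),
    "Recency":         (93, 0.15),
    "Context Window":  (81, 0.15),
    "Pricing Tier":    (30, 0.25),
    "Versatility":     (33, 0.10),
    "Output Capacity": (20, 0.10),
}

# Impact of each signal is its strength scaled by its weight.
impacts = {name: s * w for name, (s, w) in signals.items()}

# Overall score is the sum of the weighted impacts.
total = sum(impacts.values())
```

Note the weights sum to 100%, so `total` is a weighted average of the strengths (roughly 56.7 here).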
Pricing: Free