Daily rank changes, score trends, and performance data for all coding AI models. Scores update hourly from live model data.
| # | Model | Provider | Score | 24h | 7d | State | 14d Trend |
|---|---|---|---|---|---|---|---|
| 1 | GPT-5.4 Pro | OpenAI | 94.0 | +2 | 0 | stable | |
| 2 | GPT-5.4 | OpenAI | 94.0 | -1 | +4 | stable | |
| 3 | GPT-5.4 Mini | OpenAI | 93.3 | +1 | +301 | preliminary | |
| 4 | GPT-5.2 Pro | OpenAI | 92.7 | +1 | -1 | stable | |
| 5 | GPT-5.2 | OpenAI | 92.7 | -3 | +4 | stable | |
| 6 | Claude Opus 4.6 | Anthropic | 92.1 | +1 | -1 | stable | |
| 7 | GPT-5 Pro | OpenAI | 91.9 | +3 | 0 | stable | |
| 8 | o3 Deep Research | OpenAI | 91.5 | -2 | -4 | stable | |
| 9 | Claude Opus 4.5 | Anthropic | 90.4 | 0 | -1 | stable | |
| 10 | Gemini 3 Pro Preview | Google | 90.3 | +6 | +1 | stable | |
| 11 | GPT-5 | OpenAI | 90.2 | +2 | +1 | stable | |
| 12 | Gemini 3 Flash Preview | Google | 89.4 | -4 | -2 | stable | |
| 13 | Claude Sonnet 4.6 | Anthropic | 89.2 | -2 | +2 | stable | |
| 14 | Claude Sonnet 4.5 | Anthropic | 89.0 | -2 | +5 | stable | |
| 15 | o3 Pro | OpenAI | 87.7 | 0 | +10 | fragile | |
| 16 | Grok 4.1 Fast | xAI | 86.9 | -2 | -2 | stable | |
| 17 | Grok 4 | xAI | 85.8 | 0 | +20 | fragile | |
| 18 | Grok 4.20 Beta | xAI | 85.7 | +2 | +12 | fragile | |
| 19 | o3 | OpenAI | 85.7 | +3 | -6 | fragile | |
| 20 | Gemini 3.1 Pro Preview | Google | 85.5 | +28 | -3 | stable | |
| 21 | GPT-5.1 | OpenAI | 85.2 | +3 | +29 | fragile | |
| 22 | MiMo-V2-Omni | Xiaomi | 85.0 | +3 | +282 | preliminary | |
| 23 | MiMo-V2-Pro | Xiaomi | 85.0 | +15 | +281 | preliminary | |
| 24 | GPT-5.4 Nano | OpenAI | 85.0 | -5 | +280 | preliminary | |
| 25 | Seed-2.0-Lite | ByteDance | 85.0 | +3 | +6 | fragile | |
| 26 | GPT-5.3 Chat | OpenAI | 85.0 | +7 | -5 | stable | |
| 27 | Seed-2.0-Mini | ByteDance | 85.0 | +3 | +1 | stable | |
| 28 | Gemini 3.1 Pro Preview Custom Tools | Google | 85.0 | +1 | -5 | stable | |
| 29 | GPT-5.3-Codex | OpenAI | 85.0 | +26 | +9 | fragile | |
| 30 | Qwen3.5 Plus 2026-02-15 | Alibaba | 85.0 | -4 | +9 | fragile | |
| 31 | Kimi K2.5 | Moonshot AI | 85.0 | -8 | +10 | fragile | |
| 32 | GPT-5.2-Codex | OpenAI | 85.0 | +5 | +2 | stable | |
| 33 | Seed 1.6 Flash | ByteDance | 85.0 | +3 | +21 | fragile | |
| 34 | Seed 1.6 | ByteDance | 85.0 | +9 | +15 | fragile | |
| 35 | GPT-5.1-Codex-Max | OpenAI | 85.0 | -17 | -13 | fragile | |
| 36 | GPT-5.1 Chat | OpenAI | 85.0 | +18 | +10 | fragile | |
| 37 | GPT-5.1-Codex | OpenAI | 85.0 | -5 | +11 | fragile | |
| 38 | GPT-5.1-Codex-Mini | OpenAI | 85.0 | -4 | -18 | fragile | |
| 39 | Sonar Pro Search | Perplexity | 85.0 | -8 | -4 | stable | |
| 40 | Qwen3 VL 8B Thinking | Alibaba | 85.0 | +19 | -8 | fragile | |
| 41 | o4 Mini Deep Research | OpenAI | 85.0 | -1 | -5 | stable | |
| 42 | Qwen3 VL 30B A3B Thinking | Alibaba | 85.0 | +2 | +11 | fragile | |
| 43 | GPT-5 Codex | OpenAI | 85.0 | -8 | -27 | fragile | |
| 44 | o4 Mini High | OpenAI | 85.0 | -5 | -15 | fragile | |
| 45 | Grok Code Fast 1 | xAI | 84.8 | -24 | +10 | fragile | |
| 46 | Gemini 2.5 Pro | Google | 84.8 | -5 | -20 | fragile | |
| 47 | Gemini 2.5 Pro Preview 06-05 | Google | 84.3 | +5 | +15 | fragile | |
| 48 | Nemotron 3 Super (free) | NVIDIA | 84.1 | +2 | -1 | stable | |
| 49 | Gemini 2.5 Flash Lite Preview 09-2025 | Google | 83.7 | 0 | +11 | fragile | |
| 50 | o4 Mini | OpenAI | 83.7 | -23 | -23 | fragile | |
| 51 | MiniMax M2.5 (free) | MiniMax | 83.4 | 0 | +6 | fragile | |
| 52 | Grok 4 Fast | xAI | 83.3 | +5 | -19 | fragile | |
| 53 | MiniMax M2.7 | MiniMax | 83.0 | +11 | +251 | preliminary | |
| 54 | Claude Haiku 4.5 | Anthropic | 83.0 | -7 | -3 | stable | |
| 55 | GPT-5.2 Chat | OpenAI | 82.9 | +7 | +17 | fragile | |
| 56 | Qwen Plus 0728 (thinking) | Alibaba | 82.8 | -3 | -14 | fragile | |
| 57 | Gemini 2.5 Pro Preview 05-06 | Google | 82.7 | +10 | +14 | fragile | |
| 58 | MiMo-V2-Flash | Xiaomi | 82.6 | +14 | -15 | fragile | |
| 59 | Trinity Mini | arcee-ai | 82.4 | -13 | -15 | fragile | |
| 60 | Nemotron Nano 12B 2 VL (free) | NVIDIA | 82.3 | -18 | -15 | fragile | |
| 61 | Grok 4.20 Multi-Agent Beta | xAI | 82.2 | -3 | +17 | fragile | |
| 62 | Tongyi DeepResearch 30B A3B | Alibaba | 82.1 | +1 | +1 | stable | |
| 63 | Claude Opus 4.1 | Anthropic | 82.0 | -3 | -11 | fragile | |
| 64 | Gemini 3.1 Flash Lite Preview | Google | 81.9 | +9 | +11 | fragile | |
| 65 | Qwen3.5 397B A17B | Alibaba | 81.8 | -20 | -7 | fragile | |
| 66 | Qwen3 Max Thinking | Alibaba | 81.8 | +13 | -1 | stable | |
| 67 | Claude Opus 4 | Anthropic | 81.7 | +3 | -11 | fragile | |
| 68 | gpt-oss-safeguard-20b | OpenAI | 81.6 | +3 | +2 | stable | |
| 69 | Gemini 2.5 Flash Lite | Google | 81.4 | +12 | -5 | stable | |
| 70 | Mercury 2 | Inception | 81.3 | -4 | +9 | fragile | |
| 71 | Qwen3 VL 32B Instruct | Alibaba | 80.9 | +3 | -12 | fragile | |
| 72 | Qwen3 VL 8B Instruct | Alibaba | 80.9 | +4 | +17 | fragile | |
| 73 | Qwen3 VL 30B A3B Instruct | Alibaba | 80.9 | -8 | +1 | stable | |
| 74 | Qwen3 30B A3B Thinking 2507 | Alibaba | 80.9 | -18 | +9 | fragile | |
| 75 | GPT-4.1 Nano | OpenAI | 80.7 | -7 | -2 | stable | |
| 76 | Gemini 2.5 Flash | Google | 80.1 | +4 | -15 | fragile | |
| 77 | Claude Sonnet 4 | Anthropic | 79.9 | -16 | -11 | fragile | |
| 78 | Qwen3.5-122B-A10B | Alibaba | 79.7 | +9 | +6 | fragile | |
| 79 | Mistral Small 4 | Mistral AI | 79.4 | -4 | +225 | preliminary | |
| 80 | Qwen3.5-Flash | Alibaba | 79.4 | +8 | -11 | fragile | |
| 81 | Qwen3.5-9B | Alibaba | 79.3 | +8 | +14 | fragile | |
| 82 | GPT-5 Mini | OpenAI | 79.2 | -13 | 0 | stable | |
| 83 | Qwen3.5-27B | Alibaba | 79.1 | +8 | -16 | fragile | |
| 84 | Qwen3 Coder Plus | Alibaba | 78.6 | +15 | -7 | fragile | |
| 85 | Qwen3.5-35B-A3B | Alibaba | 78.3 | +11 | +18 | fragile | |
| 86 | Step 3.5 Flash (free) | StepFun | 78.2 | -9 | -10 | fragile | |
| 87 | Qwen3 Coder Flash | Alibaba | 78.2 | +13 | +3 | stable | |
| 88 | Nova Premier 1.0 | Amazon | 77.8 | +2 | -8 | fragile | |
| 89 | R1 0528 | DeepSeek | 77.7 | +3 | -4 | stable | |
| 90 | KAT-Coder-Pro V1 | Kuaishou | 77.4 | -4 | +16 | fragile | |
| 91 | Qwen3 VL 235B A22B Thinking | Alibaba | 77.4 | +11 | -10 | fragile | |
| 92 | GPT-4.1 | OpenAI | 77.4 | -14 | +4 | stable | |
| 93 | GPT-4.1 Mini | OpenAI | 77.4 | +11 | +15 | fragile | |
| 94 | DeepSeek V3.2 Exp | DeepSeek | 77.2 | -11 | -7 | fragile | |
| 95 | DeepSeek V3.2 Speciale | DeepSeek | 77.1 | +20 | -9 | fragile | |
| 96 | Claude 3.7 Sonnet | Anthropic | 77.1 | +11 | -5 | stable | |
| 97 | Qwen Plus 0728 | Alibaba | 77.0 | -13 | +14 | fragile | |
| 98 | Qwen3 Coder Next | Alibaba | 76.7 | -16 | -10 | fragile | |
| 99 | Llama 4 Maverick | Meta | 76.7 | +6 | +1 | stable | |
| 100 | o1-pro | OpenAI | 76.5 | -6 | +16 | fragile | |
| 101 | Composer 2 | Cursor | 76.4 | +15 | -4 | stable | |
| 102 | Composer 2 Fast | Cursor | 76.4 | -5 | +3 | stable | |
| 103 | Grok 3 Mini | xAI | 76.2 | -18 | -5 | stable | |
| 104 | MiniMax M2.5 | MiniMax | 76.0 | +5 | +14 | fragile | |
| 105 | Qwen3 Max | Alibaba | 76.0 | -7 | -12 | fragile | |
| 106 | Gemini 2.0 Flash Lite | Google | 75.7 | +20 | +3 | stable | |
| 107 | o1 | OpenAI | 75.7 | -12 | -15 | fragile | |
| 108 | GPT-5 Nano | OpenAI | 75.6 | +4 | +17 | fragile | |
| 109 | Qwen3 30B A3B Instruct 2507 | Alibaba | 75.2 | +15 | +6 | fragile | |
| 110 | ERNIE 4.5 VL 28B A3B | Baidu | 75.0 | +25 | +14 | fragile | |
| 111 | GPT-5 Chat | OpenAI | 75.0 | -3 | +11 | fragile | |
| 112 | Gemini 2.0 Flash | Google | 75.0 | -19 | 0 | stable | |
| 113 | DeepSeek V3.2 | DeepSeek | 74.1 | +15 | -14 | fragile | |
| 114 | Qwen3 VL 235B A22B Instruct | Alibaba | 74.0 | -4 | -7 | fragile | |
| 115 | DeepSeek V3.1 | DeepSeek | 73.8 | +16 | -13 | fragile | |
| 116 | gpt-oss-120b (free) | OpenAI | 73.8 | +26 | -12 | fragile | |
| 117 | gpt-oss-20b (free) | OpenAI | 73.8 | -16 | +21 | fragile | |
| 118 | DeepSeek V3.1 Terminus | DeepSeek | 73.7 | +4 | +10 | fragile | |
| 119 | Grok 3 | xAI | 73.7 | -16 | +14 | fragile | |
| 120 | Nemotron 3 Super | NVIDIA | 73.5 | +25 | +21 | fragile | |
| 121 | Nemotron 3 Nano 30B A3B | NVIDIA | 73.5 | +28 | +5 | stable | |
| 122 | Ministral 3 14B 2512 | Mistral AI | 73.5 | +12 | -21 | fragile | |
| 123 | Ministral 3 8B 2512 | Mistral AI | 73.5 | -3 | -4 | stable | |
| 124 | Mistral Large 3 2512 | Mistral AI | 73.5 | -10 | -7 | fragile | |
| 125 | o3 Mini | OpenAI | 73.4 | -7 | +11 | fragile | |
| 126 | Step 3.5 Flash | StepFun | 73.2 | -15 | +4 | stable | |
| 127 | DeepSeek V3 0324 | DeepSeek | 73.2 | -21 | -7 | fragile | |
| 128 | MiniMax M2.1 | MiniMax | 73.1 | +11 | +20 | fragile | |
| 129 | GPT-4o-mini Search Preview | OpenAI | 72.9 | +21 | -8 | fragile | |
| 130 | LongCat Flash Chat | Meituan | 72.8 | -7 | +16 | fragile | |
| 131 | Nova 2 Lite | Amazon | 72.7 | +1 | -21 | fragile | |
| 132 | MiniMax M2 | MiniMax | 72.7 | +6 | +19 | fragile | |
| 133 | Qwen3 Next 80B A3B Thinking | Alibaba | 72.7 | -6 | -20 | fragile | |
| 134 | Trinity Large Preview (free) | arcee-ai | 72.6 | +13 | +21 | fragile | |
| 135 | Ministral 3 3B 2512 | Mistral AI | 72.6 | -22 | +21 | fragile | |
| 136 | Trinity Mini (free) | arcee-ai | 72.6 | -15 | +8 | fragile | |
| 137 | Kimi K2 Thinking | Moonshot AI | 72.6 | -8 | -2 | stable | |
| 138 | Nemotron Nano 12B 2 VL | NVIDIA | 72.6 | -21 | -15 | fragile | |
| 139 | Solar Pro 3 | Upstage | 72.5 | +14 | -2 | stable | |
| 140 | Qwen3 Coder 30B A3B Instruct | Alibaba | 72.3 | -21 | 0 | stable | |
| 141 | Hunyuan A13B Instruct | Tencent | 72.3 | -16 | -27 | fragile | |
| 142 | GPT-4o Audio | OpenAI | 72.1 | -9 | -13 | fragile | |
| 143 | Llama 4 Scout | Meta | 72.0 | -13 | -12 | fragile | |
| 144 | Nemotron Nano 9B V2 (free) | NVIDIA | 71.6 | +14 | +1 | stable | |
| 145 | Nemotron Nano 9B V2 | NVIDIA | 71.6 | +14 | -18 | fragile | |
| 146 | Qwen3 30B A3B | Alibaba | 71.4 | +6 | -4 | stable | |
| 147 | Qwen3 14B | Alibaba | 71.4 | +8 | 0 | stable | |
| 148 | Qwen3 32B | Alibaba | 71.4 | -2 | -16 | fragile | |
| 149 | Qwen3 235B A22B | Alibaba | 71.3 | -8 | 0 | stable | |
| 150 | Jamba Large 1.7 | AI21 Labs | 71.2 | -14 | -7 | fragile | |
| 151 | Mistral Medium 3.1 | Mistral AI | 70.3 | 0 | +13 | fragile | |
| 152 | Qwen3 Next 80B A3B Instruct | Alibaba | 70.1 | -15 | +7 | fragile | |
| 153 | ERNIE 4.5 21B A3B Thinking | Baidu | 70.0 | +1 | -19 | fragile | |
| 154 | Qwen3 235B A22B Instruct 2507 | Alibaba | 70.0 | -11 | 0 | stable | |
| 155 | Claude 3.7 Sonnet (thinking) | Anthropic | 69.8 | -15 | +8 | fragile | |
| 156 | DeepSeek V3 | DeepSeek | 69.7 | +6 | -17 | fragile | |
| 157 | ERNIE 4.5 VL 424B A47B | Baidu | 69.5 | +13 | +5 | stable | |
| 158 | Qwen3 235B A22B Thinking 2507 | Alibaba | 69.3 | -10 | +3 | stable | |
| 159 | Aion-2.0 | aion-labs | 69.2 | -15 | -9 | fragile | |
| 160 | Qwen3 Coder 480B A35B (free) | Alibaba | 69.0 | -4 | +9 | fragile | |
| 161 | Llama 3.3 Nemotron Super 49B V1.5 | NVIDIA | 68.6 | +10 | -8 | fragile | |
| 162 | gpt-oss-20b | OpenAI | 68.5 | +12 | -2 | stable | |
| 163 | GPT Audio | OpenAI | 68.4 | +1 | -6 | fragile | |
| 164 | GPT Audio Mini | OpenAI | 68.4 | +8 | -12 | fragile | |
| 165 | MiniMax M1 | MiniMax | 68.4 | +2 | +11 | fragile | |
| 166 | R1 | DeepSeek | 68.3 | +9 | +2 | stable | |
| 167 | Qwen VL Max | Alibaba | 68.1 | -10 | -1 | stable | |
| 168 | Nemotron 3 Nano 30B A3B (free) | NVIDIA | 67.7 | -3 | +16 | fragile | |
| 169 | Devstral 2 2512 | Mistral AI | 67.7 | +12 | +5 | stable | |
| 170 | gpt-oss-120b | OpenAI | 67.7 | +12 | -12 | fragile | |
| 171 | Mistral Small 3.2 24B | Mistral AI | 67.3 | -11 | +10 | fragile | |
| 172 | Qwen3 Next 80B A3B Instruct (free) | Alibaba | 67.0 | -9 | +5 | stable | |
| 173 | Mercury Coder | Inception | 67.0 | -12 | -2 | stable | |
| 174 | Cogito v2.1 671B | deepcogito | 66.7 | +15 | +6 | fragile | |
| 175 | Olmo 3 32B Think | Allen AI | 66.3 | -6 | -8 | fragile | |
| 176 | Mistral Small 3.1 24B | Mistral AI | 66.2 | +1 | -1 | stable | |
| 177 | Grok 3 Mini Beta | xAI | 66.1 | +2 | +9 | fragile | |
| 178 | Claude 3.5 Sonnet | Anthropic | 65.8 | -12 | -8 | fragile | |
| 179 | Kimi K2 0905 | Moonshot AI | 65.7 | -11 | +3 | stable | |
| 180 | Llama 3.3 70B Instruct | Meta | 65.7 | 0 | -15 | fragile | |
| 181 | o3 Mini High | OpenAI | 65.4 | +6 | +21 | fragile | |
| 182 | ERNIE 4.5 21B A3B | Baidu | 65.2 | +4 | +15 | fragile | |
| 183 | Qwen3 8B | Alibaba | 65.1 | -7 | -4 | stable | |
| 184 | Mistral Medium 3 | Mistral AI | 65.0 | -1 | -6 | fragile | |
| 185 | Qwen-Plus | Alibaba | 65.0 | +9 | +3 | stable | |
| 186 | Olmo 3.1 32B Instruct | Allen AI | 64.9 | -1 | -13 | fragile | |
| 187 | Olmo 3.1 32B Think | Allen AI | 64.8 | +14 | +2 | stable | |
| 188 | Rnj 1 Instruct | essentialai | 64.8 | +14 | -5 | stable | |
| 189 | Codestral 2508 | Mistral AI | 64.8 | -16 | -17 | fragile | |
| 190 | Palmyra X5 | Writer | 64.7 | +6 | 0 | stable | |
| 191 | GPT-4o-mini | OpenAI | 64.6 | -7 | +10 | fragile | |
| 192 | GPT-4o | OpenAI | 64.4 | -14 | +6 | fragile | |
| 193 | Qwen3 Coder 480B A35B | Alibaba | 64.3 | -3 | -2 | stable | |
| 194 | GPT-4o Search Preview | OpenAI | 63.6 | +11 | 0 | stable | |
| 195 | Gemma 3 27B | Google | 63.6 | -7 | +11 | fragile | |
| 196 | Grok 3 Beta | xAI | 63.5 | +16 | -4 | stable | |
| 197 | ERNIE 4.5 300B A47B | Baidu | 63.4 | +13 | +3 | stable | |
| 198 | Mercury | Inception | 63.4 | +10 | +5 | stable | |
| 199 | GPT-4o (2024-11-20) | OpenAI | 63.3 | -4 | +8 | fragile | |
| 200 | Sonar Pro | Perplexity | 63.1 | -8 | +8 | fragile | |
| 201 | Qwen3 4B (free) | Alibaba | 63.0 | +8 | -16 | fragile | |
| 202 | Gemma 3 27B (free) | Google | 62.8 | -4 | -6 | fragile | |
| 203 | UI-TARS 7B | ByteDance | 62.7 | +3 | -4 | stable | |
| 204 | Kimi K2 0711 | Moonshot AI | 62.7 | -1 | -17 | fragile | |
| 205 | Devstral Medium | Mistral AI | 62.6 | -12 | +4 | stable | |
| 206 | Devstral Small 1.1 | Mistral AI | 62.6 | +7 | -11 | fragile | |
| 207 | Claude 3.5 Haiku | Anthropic | 62.5 | -10 | -3 | stable | |
| 208 | Spotlight | arcee-ai | 62.3 | -8 | -15 | fragile | |
| 209 | Virtuoso Large | arcee-ai | 62.2 | +5 | +3 | stable | |
| 210 | Mistral Small 3.1 24B (free) | Mistral AI | 62.2 | +11 | +11 | fragile | |
| 211 | MiniMax-01 | MiniMax | 62.0 | -20 | -6 | fragile | |
| 212 | Sonar Reasoning Pro | Perplexity | 61.6 | +4 | +8 | fragile | |
| 213 | Gemma 3 4B (free) | Google | 61.0 | -6 | +1 | stable | |
| 214 | R1 Distill Llama 70B | DeepSeek | 61.0 | +10 | +8 | fragile | |
| 215 | Qwen VL Plus | Alibaba | 60.9 | -16 | +8 | fragile | |
| 216 | Qwen-Turbo | Alibaba | 60.7 | -5 | -3 | stable | |
| 217 | GPT-4 Turbo | OpenAI | 60.5 | +8 | -2 | stable | |
| 218 | Qwen2.5 VL 72B Instruct | Alibaba | 60.3 | +9 | +10 | fragile | |
| 219 | R1 Distill Qwen 32B | DeepSeek | 60.2 | -2 | +12 | fragile | |
| 220 | Command A | Cohere | 60.0 | -16 | -2 | stable | |
| 221 | Llama 3.1 70B Instruct | Meta | 59.9 | -3 | +3 | stable | |
| 222 | Gemma 2 27B | Google | 59.7 | +10 | -6 | fragile | |
| 223 | Phi 4 | Microsoft | 59.6 | +3 | +6 | fragile | |
| 224 | Mistral Small 3 | Mistral AI | 59.5 | -4 | +6 | fragile | |
| 225 | MiniMax M2-her | MiniMax | 59.4 | +10 | -8 | fragile | |
| 226 | LFM2.5-1.2B-Thinking (free) | Liquid AI | 59.0 | -4 | -7 | fragile | |
| 227 | Mistral Small Creative | Mistral AI | 59.0 | +7 | -17 | fragile | |
| 228 | Llama Guard 4 12B | Meta | 59.0 | -13 | -17 | fragile | |
| 229 | Qwen-Max | Alibaba | 58.8 | +10 | -2 | stable | |
| 230 | Gemma 3n 2B (free) | Google | 58.2 | -2 | +4 | stable | |
| 231 | Nova Lite 1.0 | Amazon | 58.2 | -12 | +1 | stable | |
| 232 | Nova Pro 1.0 | Amazon | 58.2 | +4 | +4 | stable | |
| 233 | Llama 3.1 Nemotron Ultra 253B v1 | NVIDIA | 57.6 | -10 | -7 | fragile | |
| 234 | Qwen2.5 VL 32B Instruct | Alibaba | 56.7 | -4 | +15 | fragile | |
| 235 | Aion-1.0 | aion-labs | 56.6 | -6 | +15 | fragile | |
| 236 | Aion-1.0-Mini | aion-labs | 56.6 | -3 | +6 | fragile | |
| 237 | Gemma 3 4B | Google | 56.2 | -6 | -12 | fragile | |
| 238 | Gemma 3 12B | Google | 56.2 | +8 | -5 | stable | |
| 239 | Sonar Deep Research | Perplexity | 55.7 | +1 | +4 | stable | |
| 240 | Pixtral Large 2411 | Mistral AI | 55.7 | +5 | -5 | stable | |
| 241 | Maestro Reasoning | arcee-ai | 55.6 | +8 | +4 | stable | |
| 242 | GPT-4o (2024-08-06) | OpenAI | 55.6 | -5 | +12 | fragile | |
| 243 | Gemma 3n 4B (free) | Google | 55.5 | +5 | +1 | stable | |
| 244 | Gemma 3 12B (free) | Google | 55.2 | -6 | -4 | stable | |
| 245 | Granite 4.0 Micro | IBM | 55.1 | +2 | +2 | stable | |
| 246 | Llama 3.2 11B Vision Instruct | Meta | 54.4 | -3 | -5 | stable | |
| 247 | GPT-4o (extended) | OpenAI | 54.3 | -5 | -10 | fragile | |
| 248 | Mistral Large | Mistral AI | 54.1 | +4 | -9 | fragile | |
| 249 | Sonar | Perplexity | 53.7 | +8 | +7 | fragile | |
| 250 | GPT-4o-mini (2024-07-18) | OpenAI | 53.7 | -9 | +1 | stable | |
| 251 | LFM2-24B-A2B | Liquid AI | 53.2 | -7 | -5 | stable | |
| 252 | LFM2.5-1.2B-Instruct (free) | Liquid AI | 53.2 | +2 | 0 | stable | |
| 253 | LFM2-8B-A1B | Liquid AI | 53.2 | +7 | +9 | fragile | |
| 254 | LFM2-2.6B | Liquid AI | 53.2 | +2 | -16 | fragile | |
| 255 | Llama 3.1 Nemotron 70B Instruct | NVIDIA | 53.2 | -5 | +5 | stable | |
| 256 | Mistral Large 2407 | Mistral AI | 53.0 | -3 | -8 | fragile | |
| 257 | Saba | Mistral AI | 52.9 | -6 | -4 | stable | |
| 258 | GPT-4o (2024-05-13) | OpenAI | 52.7 | -3 | -1 | stable | |
| 259 | Qwen2.5 72B Instruct | Alibaba | 52.4 | 0 | -1 | stable | |
| 260 | Nova Micro 1.0 | Amazon | 51.2 | +5 | -5 | stable | |
| 261 | Mistral Nemo | Mistral AI | 50.7 | -3 | 0 | stable | |
| 262 | Mistral Large 2411 | Mistral AI | 49.9 | 0 | -3 | stable | |
| 263 | SWE-1.5 | Windsurf | 49.2 | -2 | +5 | stable | |
| 264 | Command R (08-2024) | Cohere | 47.8 | 0 | 0 | stable | |
| 265 | Command R+ (08-2024) | Cohere | 47.8 | -1 | -1 | stable | |
| 266 | Llemma 7b | eleutherai | 47.5 | 0 | -1 | stable | |
| 267 | QwQ 32B | Alibaba | 47.0 | +2 | +2 | stable | |
| 268 | Gemma 3n 4B | Google | 46.3 | +2 | -2 | stable | |
| 269 | Coder Large | arcee-ai | 45.5 | -2 | -2 | stable | |
| 270 | Command R7B (12-2024) | Cohere | 44.7 | +1 | +3 | stable | |
| 271 | Olmo 2 32B Instruct | Allen AI | 44.5 | +3 | +5 | stable | |
| 272 | Llama 3.3 70B Instruct (free) | Meta | 44.1 | -4 | -2 | stable | |
| 273 | Claude 3 Haiku | Anthropic | 43.0 | +5 | +9 | fragile | |
| 274 | Qwen2.5 Coder 7B Instruct | Alibaba | 42.9 | +2 | +4 | stable | |
| 275 | Llama Guard 3 8B | Meta | 42.9 | +2 | +6 | fragile | |
| 276 | Qwen2.5 7B Instruct | Alibaba | 42.8 | -4 | -5 | stable | |
| 277 | GPT-4 Turbo Preview | OpenAI | 42.7 | -4 | -5 | stable | |
| 278 | GPT-4 Turbo (older v1106) | OpenAI | 42.7 | -3 | +6 | fragile | |
| 279 | Qwen2.5 Coder 32B Instruct | Alibaba | 42.4 | +1 | -4 | stable | |
| 280 | Llama 3.1 8B Instruct | Meta | 42.4 | +1 | -6 | fragile | |
| 281 | Mixtral 8x7B Instruct | Mistral AI | 42.4 | +3 | +2 | stable | |
| 282 | Llama 3 70B Instruct | Meta | 40.5 | 0 | -5 | stable | |
| 283 | GPT-3.5 Turbo 16k | OpenAI | 39.9 | +6 | -4 | stable | |
| 284 | GPT-3.5 Turbo | OpenAI | 39.9 | -5 | -4 | stable | |
| 285 | GPT-4 (older v0314) | OpenAI | 39.0 | +8 | 0 | stable | |
| 286 | GPT-4 | OpenAI | 39.0 | +1 | +6 | fragile | |
| 287 | autofixer-01 | Vercel | 38.8 | -1 | +2 | stable | |
| 288 | Llama 3.1 405B (base) | Meta | 38.7 | +2 | -2 | stable | |
| 289 | Pixtral 12B | Mistral AI | 38.3 | -4 | -2 | stable | |
| 290 | GPT-3.5 Turbo (older v0613) | OpenAI | 38.0 | -7 | +4 | stable | |
| 291 | Qwen2.5-VL 7B Instruct | Alibaba | 37.6 | -3 | -3 | stable | |
| 292 | Mixtral 8x22B Instruct | Mistral AI | 37.1 | -1 | +3 | stable | |
| 293 | Inflection 3 Pi | Inflection | 36.8 | +2 | -3 | stable | |
| 294 | Inflection 3 Productivity | Inflection | 36.8 | 0 | -3 | stable | |
| 295 | Llama 3.2 3B Instruct | Meta | 35.9 | +2 | -2 | stable | |
| 296 | Llama 3.2 3B Instruct (free) | Meta | 35.2 | -4 | +1 | stable | |
| 297 | Mellum | JetBrains | 32.6 | +2 | -1 | stable | |
| 298 | WizardLM-2 8x22B | Microsoft | 32.2 | -2 | +2 | stable | |
| 299 | GPT-3.5 Turbo Instruct | OpenAI | 32.2 | -1 | 0 | stable | |
| 300 | Llama 3.2 1B Instruct | Meta | 31.9 | +1 | -2 | stable |
Models are ranked using a composite score from 0 to 100 that combines capabilities (25%), pricing tier (25%), context window size (15%), recency (15%), output capacity (10%), and versatility (10%). Scores update hourly from live API data across 290+ coding models.
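The weighted combination described above can be sketched as follows. This is an illustrative reconstruction from the stated weights only; the component names and example values are assumptions, not the site's actual implementation.

```python
# Weights as stated in the methodology: capabilities 25%, pricing 25%,
# context window 15%, recency 15%, output capacity 10%, versatility 10%.
WEIGHTS = {
    "capabilities": 0.25,
    "pricing_tier": 0.25,
    "context_window": 0.15,
    "recency": 0.15,
    "output_capacity": 0.10,
    "versatility": 0.10,
}

def composite_score(components: dict[str, float]) -> float:
    """Combine per-component scores (each normalized to 0-100) into one 0-100 score."""
    return round(sum(WEIGHTS[k] * components[k] for k in WEIGHTS), 1)

# Hypothetical model: strong capabilities, mid-tier pricing.
example = {
    "capabilities": 95.0,
    "pricing_tier": 70.0,
    "context_window": 80.0,
    "recency": 90.0,
    "output_capacity": 85.0,
    "versatility": 75.0,
}
```

Because the weights sum to 1.0, the composite stays on the same 0-100 scale as its inputs.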
The 24h column shows how many positions a model moved up or down in the last 24 hours, while the 7d column shows the change over the past week. A value with a + indicates a rank improvement, a negative value indicates a drop, and 0 means the rank was unchanged.
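The sign convention is worth spelling out: moving *up* the table means the rank number got *smaller*, so the displayed delta is the previous rank minus the current rank. A minimal sketch (the function name is illustrative):

```python
def rank_delta(previous_rank: int, current_rank: int) -> str:
    """Format a rank change: positive means the model climbed the table."""
    delta = previous_rank - current_rank
    return f"+{delta}" if delta > 0 else str(delta)

# A model that was #9 yesterday and is #7 today climbed two places ("+2");
# one that slipped from #7 to #9 shows "-2"; no movement shows "0".
```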
The state classification reflects a model's ranking consistency. "Stable" means the model has maintained its position reliably. "Held" indicates it is holding steady but with some variance. "Fragile" means the model's rank is fluctuating and may shift significantly. "Preliminary" is assigned to newly tracked models without enough history.
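One plausible way to derive these labels from a model's recent rank history is to look at the spread of its daily ranks. The thresholds and the 14-day history requirement below are assumptions for illustration; the site does not publish its exact criteria.

```python
def classify_state(rank_history: list[int], min_history: int = 14) -> str:
    """Assign a stability label from a model's recent daily ranks.

    Thresholds are illustrative, not the leaderboard's actual rules.
    """
    if len(rank_history) < min_history:
        return "preliminary"      # newly tracked, not enough data
    spread = max(rank_history) - min(rank_history)
    if spread <= 3:
        return "stable"           # position held reliably
    if spread <= 10:
        return "held"             # steady, but with some variance
    return "fragile"              # rank may shift significantly
```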