AI models ranked by coding ability across benchmarks, real-world usage, and developer sentiment.
| Rank | Model | Provider | Score |
|---|---|---|---|
| #1 | Claude 4.5 Sonnet | Anthropic | 93.7 |
| #2 | o1 | OpenAI | 91.9 |
| #3 | Gemini 2.5 Pro | Google | 91.1 |
| #4 | GPT-4o | OpenAI | 91.1 |
| #5 | DeepSeek V3 | DeepSeek | 87.5 |
| #6 | GPT-4 Turbo | OpenAI | 84.9 |
| #7 | Codex | OpenAI | 82.7 |
| #8 | Llama 3.1 405B | Meta | 80.9 |
| #9 | Grok 2 | xAI | 78.3 |
| #10 | Qwen 2.5 72B | Alibaba | 75.6 |
| #11 | Mistral Large 2 | Mistral AI | 72.4 |
| #12 | Gemini 2.0 Flash | Google | 69.1 |
| #13 | Codestral | Mistral AI | 65.8 |
| #14 | GitHub Copilot | GitHub | 62.5 |
| #15 | Claude 3.5 Haiku | Anthropic | 58.6 |
| #16 | o1-mini | OpenAI | 48.4 |
| #17 | GPT-4o mini | OpenAI | 36.5 |