Olympiad-level mathematical problem solving from the real 2024 AIME competition. 30 problems testing advanced algebra, geometry, combinatorics, and number theory.
Why it matters: AIME tests mathematical reasoning at competition level. Reasoning models reach 70–90% accuracy while standard models typically score below 30%, making this one of the clearest differentiators of mathematical ability.
Top Model: Gemini 2.5 Pro (92%)
Average Score: 77.1% (across 7 models)
Models Tested: 7
Metric: accuracy (range 0%–100%)
Human Baseline: —
All models with a reported AIME 2024 score, ranked from highest to lowest accuracy.
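As a minimal sketch of how the ranking is derived (the model names and per-problem results below are placeholders, not our evaluation data), each model's accuracy is the fraction of the 30 problems it answers correctly, and models are sorted by that value in descending order:

```python
# Minimal sketch: score each model on AIME 2024 and rank by accuracy.
# Model names and per-problem correctness are illustrative placeholders.

NUM_PROBLEMS = 30  # AIME 2024 = 15 problems each from AIME I and AIME II

# Hypothetical grading output: True where the model's final answer matched
# the official answer key, one flag per problem.
results = {
    "model-a": [True] * 27 + [False] * 3,
    "model-b": [True] * 21 + [False] * 9,
    "model-c": [True] * 8 + [False] * 22,
}

def accuracy(correct_flags: list[bool]) -> float:
    """Fraction of the 30 problems answered correctly, in [0, 1]."""
    return sum(correct_flags) / NUM_PROBLEMS

# Sort models by accuracy, highest first, and print the leaderboard.
leaderboard = sorted(results.items(), key=lambda kv: accuracy(kv[1]), reverse=True)
for rank, (model, flags) in enumerate(leaderboard, start=1):
    print(f"{rank}. {model}: {accuracy(flags):.1%}")
```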
AIME 2024 is a benchmark built from the 30 problems of the 2024 American Invitational Mathematics Examination (AIME I and II), a competition for high-school students who qualify through the AMC exams. Because every model is scored on the same fixed problem set, it provides directly comparable scores across models, helping developers gauge competition-level mathematical reasoning.
Gemini 2.5 Pro currently holds the top score on the AIME 2024 benchmark. See our full rankings table above for the complete leaderboard with 7 models.
We update benchmark data from multiple sources, including the Hugging Face Open LLM Leaderboard and LMArena. Scores are refreshed regularly as new evaluations are published and new models are released.
No single benchmark should decide the choice on its own. While AIME 2024 is an important indicator, real-world performance depends on many factors, including pricing, latency, context window, and the requirements of the specific task. We recommend using our composite score, which weighs multiple benchmarks alongside these practical factors (see the sketch below).
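As a rough illustration only (the factor names and weights below are hypothetical assumptions, not our actual formula), a composite score can be sketched as a weighted average of normalized factors:

```python
# Hypothetical composite score: a weighted average of normalized factors.
# The factors and weights are illustrative assumptions, not the formula
# behind our published rankings.

WEIGHTS = {
    "aime_2024": 0.4,         # competition-math accuracy, already in [0, 1]
    "other_benchmarks": 0.3,  # aggregate of other benchmark scores, in [0, 1]
    "price": 0.2,             # normalized so that cheaper is closer to 1.0
    "latency": 0.1,           # normalized so that faster is closer to 1.0
}

def composite_score(factors: dict[str, float]) -> float:
    """Weighted average of factor scores, each normalized to [0, 1]."""
    return sum(WEIGHTS[name] * factors[name] for name in WEIGHTS)

# Example: a strong math model that is middling on price and latency.
example = {"aime_2024": 0.92, "other_benchmarks": 0.85, "price": 0.6, "latency": 0.7}
print(f"composite: {composite_score(example):.3f}")  # -> composite: 0.813
```

The weights here sum to 1.0 so the composite stays on the same 0–1 scale as the individual factors; any real weighting would also need to normalize raw price and latency numbers before combining them.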