Tests broad knowledge across 57 academic subjects (STEM, humanities, social sciences) with roughly 16,000 multiple-choice questions. The most widely cited LLM benchmark.
Why it matters: Shows how well a model has absorbed factual knowledge during training. Top models now score above 90%, so the benchmark is saturating and is less useful for differentiating frontier models.
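For reference, here is a minimal sketch of pulling the MMLU test set from the Hugging Face Hub and inspecting one item. It assumes the community-hosted cais/mmlu dataset and its question/choices/answer fields, so verify against the current Hub version before relying on it.

```python
# pip install datasets
from datasets import load_dataset

# Load the MMLU test split (all 57 subjects combined; most of the
# benchmark's roughly 16k questions live in this split).
mmlu = load_dataset("cais/mmlu", "all", split="test")

item = mmlu[0]
print(item["subject"])   # e.g. "abstract_algebra"
print(item["question"])  # question stem
print(item["choices"])   # list of four answer options
print(item["answer"])    # index (0-3) of the correct option
```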
Top Model: o1 (91.8%)
Average Score: 86.0% (across 24 models)
Models Tested: 24 (metric: accuracy)
Human Baseline: 89.8% (score range: 0%–100%)
All models with a reported MMLU score, ranked by accuracy (highest first).
MMLU (Massive Multitask Language Understanding) is a standardized evaluation: every model answers the same set of multiple-choice questions, so accuracy scores are directly comparable across models, helping developers choose the right model for their needs.
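As a rough illustration of how an MMLU question is presented and scored, here is a hedged sketch; the exact prompt template varies across leaderboards and papers, so treat the formatting below as one common convention rather than the canonical one.

```python
LETTERS = "ABCD"

def format_prompt(question: str, choices: list[str]) -> str:
    """Render one MMLU item in the conventional lettered multiple-choice format."""
    lines = [question]
    lines += [f"{LETTERS[i]}. {choice}" for i, choice in enumerate(choices)]
    lines.append("Answer:")
    return "\n".join(lines)

def accuracy(predicted_letters: list[str], gold_indices: list[int]) -> float:
    """MMLU's metric: the fraction of questions answered with the correct letter."""
    hits = sum(p.strip().upper() == LETTERS[g]
               for p, g in zip(predicted_letters, gold_indices))
    return hits / len(gold_indices)

# Example: two questions, one answered correctly -> 50% accuracy.
print(accuracy(["B", "D"], [1, 2]))  # 0.5
```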
o1 currently holds the top score on the MMLU benchmark. See our full rankings table above for the complete leaderboard with 24 models.
We update benchmark data from multiple sources, including the HuggingFace Open LLM Leaderboard and LMArena. Scores are refreshed regularly as new evaluations are published and new models are released.
No. While MMLU is an important indicator, real-world performance depends on many factors, including pricing, latency, context window, and specific task requirements. We recommend using our composite score, which weights multiple benchmarks and practical factors.
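To make "composite score" concrete, here is a purely illustrative sketch. The factor names and weights below are invented for the example and are not the actual formula used on this site.

```python
# Hypothetical weights for illustration only; not the site's real formula.
WEIGHTS = {"mmlu": 0.4, "price": 0.3, "latency": 0.3}

def composite_score(scores: dict[str, float]) -> float:
    """Weighted average of per-factor scores, each normalized to a 0-100 scale."""
    return sum(WEIGHTS[factor] * scores[factor] for factor in WEIGHTS)

# A model with a top MMLU score but mediocre price/latency lands mid-pack.
print(composite_score({"mmlu": 91.8, "price": 60.0, "latency": 55.0}))  # 71.22
```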