164 Python function-generation problems in which a model must write a correct implementation from a docstring, verified against unit tests. Introduced by OpenAI in 2021, it is the original code-generation benchmark for language models.
Why it matters: HumanEval is the most widely recognized coding benchmark, though it is becoming saturated, with top models scoring above 90%, and there is evidence of training-data contamination in some models.
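To make the task format concrete, here is an illustrative problem written in the HumanEval style (the prompt, completion, and tests below are hypothetical examples, not drawn from the benchmark itself): the model sees only the signature and docstring, generates the body, and the result is executed against unit tests.

```python
# Prompt shown to the model: signature and docstring only, no body.
def running_max(numbers: list[float]) -> list[float]:
    """Return a list whose element i is the maximum of numbers[:i + 1].

    >>> running_max([1, 3, 2, 5, 4])
    [1, 3, 3, 5, 5]
    """
    # --- a model-generated completion would start here ---
    result: list[float] = []
    current = float("-inf")
    for x in numbers:
        current = max(current, x)
        result.append(current)
    return result

# Unit tests used to score the completion: it counts as solved only if every assert holds.
assert running_max([1, 3, 2, 5, 4]) == [1, 3, 3, 5, 5]
assert running_max([]) == []
```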
Top Model: Claude Opus 4.5 (95.2%)
Average Score: 87.6% (across 22 models)
Models Tested: 22 (metric: pass@1; see the estimator sketch below)
Human Baseline: not reported (range: 0%–100%)
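pass@1 is the fraction of problems solved by a model's first sampled completion. Reported scores are typically estimated by drawing n completions per problem and applying the unbiased pass@k estimator introduced in the original HumanEval paper; a minimal sketch (the function name here is ours):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased estimator of pass@k: the probability that at least one of k
    completions, drawn from n samples of which c are correct, passes the tests.

    Equivalent to 1 - C(n - c, k) / C(n, k).
    """
    if n - c < k:
        return 1.0  # fewer failing samples than draws, so a correct one is guaranteed
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 10 completions sampled for a problem, 7 of them pass the unit tests.
print(pass_at_k(n=10, c=7, k=1))   # 0.7 (equals c / n when k = 1)
print(pass_at_k(n=10, c=7, k=2))   # ~0.933
```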
All models with a reported HumanEval score, ranked by highest pass@1.
HumanEval is a standardized benchmark that measures whether a model can turn a Python function signature and docstring into a working implementation, scored as pass@1 against each problem's unit tests. Because every model is evaluated on the same 164 problems, the resulting scores are directly comparable and help developers choose the right model for coding tasks.
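Concretely, scoring works by executing each generated completion against the problem's tests. Below is a minimal sketch of that check, with names of our own choosing; production harnesses run candidates in an isolated sandbox with timeouts rather than calling exec() directly:

```python
def passes_tests(prompt: str, completion: str, test_code: str) -> bool:
    """Return True if the model's completion passes the problem's unit tests.

    WARNING: exec() of untrusted model output is shown only for illustration;
    a real harness isolates execution (subprocess/sandbox) and enforces timeouts.
    """
    namespace: dict = {}
    try:
        exec(prompt + completion, namespace)  # define the candidate function
        exec(test_code, namespace)            # run the problem's assertions
        return True
    except Exception:
        return False
```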
Claude Opus 4.5 currently holds the top score on the HumanEval benchmark. See our full rankings table above for the complete leaderboard with 22 models.
We update benchmark data from multiple sources including HuggingFace Open LLM Leaderboard and LMArena. Scores are refreshed regularly as new evaluations are published and new models are released.
No. While HumanEval is an important indicator, real-world performance depends on many factors, including pricing, latency, context window, and the requirements of the specific task. We recommend using our composite score, which weights multiple benchmarks alongside these practical factors.
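As a generic illustration of how such a composite can be formed (the weights and factor names below are purely illustrative, not our actual methodology):

```python
# Illustrative only: a generic weighted aggregation over normalized factors.
WEIGHTS = {"humaneval": 0.4, "latency": 0.3, "price": 0.3}  # hypothetical weights

def composite_score(normalized: dict[str, float]) -> float:
    """Combine per-factor scores (each normalized to 0..1, higher is better)
    into a single weighted score."""
    return sum(WEIGHTS[name] * normalized[name] for name in WEIGHTS)

# Example: strong benchmark score, middling latency and price.
print(composite_score({"humaneval": 0.95, "latency": 0.6, "price": 0.5}))  # 0.71
```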