PhD-level science reasoning across biology, chemistry, and physics. Questions are designed to be 'Google-proof': even domain experts with web access struggle to answer them.
Why it matters: One of the best discriminators between models. Scores range widely (40-85%), making it highly informative for comparing reasoning ability.
Top Model: Claude Opus 4.5 (86.2%)
Average Score: 61.0% (across 20 models)
Models Tested: 20 (metric: accuracy)
Human Baseline: 65% (score range: 0%–100%)
All models with a reported GPQA Diamond score, ranked by accuracy from highest to lowest.
GPQA Diamond is the hardest 198-question subset of GPQA (Graduate-Level Google-Proof Q&A), a multiple-choice benchmark written and validated by PhD-level domain experts in biology, chemistry, and physics. Because every model answers the same questions and is scored with the same accuracy metric, results are directly comparable across models, helping developers choose the right model for their needs.
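To make the scoring concrete, here is a minimal sketch of how accuracy is typically computed for a multiple-choice benchmark like GPQA Diamond. The data format and the model_answer helper are illustrative assumptions, not the pipeline behind the numbers on this page:

```python
# Minimal sketch of multiple-choice accuracy scoring.
# The question format and model_answer() callable are assumptions for
# illustration, not the actual evaluation harness used for these scores.

def accuracy(questions, model_answer):
    """Return the fraction of questions where the model picks the correct option."""
    correct = 0
    for q in questions:
        # q["choices"] is the list of answer options, q["answer"] is the correct index
        predicted = model_answer(q["question"], q["choices"])
        if predicted == q["answer"]:
            correct += 1
    return correct / len(questions)

# Tiny example with a dummy "model" that always picks option 0:
sample = [
    {"question": "Which particle mediates the electromagnetic force?",
     "choices": ["photon", "gluon", "W boson", "graviton"], "answer": 0},
    {"question": "Which base pairs with adenine in DNA?",
     "choices": ["guanine", "thymine", "cytosine", "uracil"], "answer": 1},
]
print(accuracy(sample, lambda question, choices: 0))  # 0.5
```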
Claude Opus 4.5 currently holds the top score on the GPQA Diamond benchmark. See our full rankings table above for the complete leaderboard with 20 models.
We update benchmark data from multiple sources, including the HuggingFace Open LLM Leaderboard and LMArena. Scores are refreshed regularly as new evaluations are published and new models are released.
No. While GPQA Diamond is an important indicator, real-world performance depends on many factors, including pricing, latency, context window, and the specific task. We recommend using our composite score, which weights multiple benchmarks along with practical factors.
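As an illustration of what such a composite can look like, a common approach is a weighted average over normalized scores. The benchmark names and weights below are made-up placeholders, not the actual formula behind our composite score:

```python
# Hypothetical composite score: a weighted average of normalized metrics in [0, 1].
# The metric names and weights are illustrative placeholders only.

WEIGHTS = {
    "gpqa_diamond": 0.4,   # reasoning
    "coding_bench": 0.3,   # coding ability (placeholder name)
    "price_score": 0.2,    # cheaper is better, pre-normalized to 0-1
    "latency_score": 0.1,  # faster is better, pre-normalized to 0-1
}

def composite(scores):
    """Weighted average of available metrics; missing metrics are skipped
    and the remaining weights are renormalized."""
    total_weight = sum(w for k, w in WEIGHTS.items() if k in scores)
    if total_weight == 0:
        raise ValueError("no overlapping metrics")
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS if k in scores) / total_weight

# Example: latency is unknown, so its weight is dropped and the rest renormalized.
print(composite({"gpqa_diamond": 0.862, "coding_bench": 0.75, "price_score": 0.6}))
```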