Grade-school-level science questions that require reasoning beyond simple retrieval. The 'Challenge' set contains only questions that both a retrieval-based baseline and a word co-occurrence baseline answered incorrectly.
Why it matters: Tests commonsense scientific reasoning. Largely saturated for frontier models but still useful for comparing mid-tier and open-source models.
Top Model: Llama 3.1 405B (96.9%)
Average Score: 95.2% across 6 models
Models Tested: 6
Metric: accuracy (range: 0%–100%)
Human Baseline: not reported
All models with a reported ARC-Challenge score, ranked from highest to lowest accuracy.
ARC-Challenge is the harder split of the AI2 Reasoning Challenge (ARC), a multiple-choice science benchmark released by the Allen Institute for AI in 2018. The Challenge set contains 2,590 grade-school science questions that simple baselines answered incorrectly, and scores are reported as plain accuracy, so they are directly comparable across models and help developers choose the right model for their needs.
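For concreteness, this is the shape of a single ARC-Challenge record as distributed on HuggingFace. The field names match the allenai/ai2_arc dataset; the id and question text below are illustrative, not quoted from the dataset.

```python
# Shape of one ARC-Challenge record (field names match the HuggingFace
# allenai/ai2_arc dataset; the id and question text are illustrative).
example = {
    "id": "Mercury_SC_401234",  # hypothetical item id
    "question": "Which process moves water from the ground into the air?",
    "choices": {
        "text": ["evaporation", "condensation", "erosion", "deposition"],
        "label": ["A", "B", "C", "D"],
    },
    "answerKey": "A",  # a model scores 1 on this item if it picks "A"
}
```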
Llama 3.1 405B currently holds the top score on the ARC-Challenge benchmark. See our full rankings table above for the complete leaderboard with 6 models.
We update benchmark data from multiple sources including HuggingFace Open LLM Leaderboard and LMArena. Scores are refreshed regularly as new evaluations are published and new models are released.
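As a rough illustration of where these accuracy figures come from, the sketch below scores a model on the ARC-Challenge test split using the HuggingFace datasets library. The predict() function here is a placeholder baseline, not a real model; published leaderboard scores come from evaluation harnesses with standardized (often few-shot) prompting, not from this loop.

```python
# Minimal sketch of computing an ARC-Challenge accuracy score.
# Assumes the `datasets` library is installed (pip install datasets).
from datasets import load_dataset

arc = load_dataset("allenai/ai2_arc", "ARC-Challenge", split="test")

def predict(question: str, labels: list[str], texts: list[str]) -> str:
    # Placeholder: always answer the first option. Swap in a real model
    # call here to score an actual system.
    return labels[0]

correct = 0
for row in arc:
    choice = predict(row["question"], row["choices"]["label"], row["choices"]["text"])
    if choice == row["answerKey"]:
        correct += 1

# Accuracy is the metric shown in the table: fraction correct, 0%-100%.
print(f"ARC-Challenge accuracy: {correct / len(arc):.1%}")
```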
No. While ARC-Challenge is an important indicator, real-world performance depends on many factors, including pricing, latency, context window, and specific task requirements. We recommend using our composite score, which weighs multiple benchmarks and practical factors.
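A minimal sketch of what such a composite score can look like, assuming entirely hypothetical benchmark weights (the actual weighting used on this page is not described here):

```python
# Composite score sketch with assumed, hypothetical weights -- this is
# not the site's published weighting.
WEIGHTS = {"arc_challenge": 0.2, "mmlu": 0.4, "gsm8k": 0.4}

def composite_score(scores: dict[str, float]) -> float:
    # Weighted average over whichever benchmarks the model has reported,
    # renormalizing so missing benchmarks do not drag the score down.
    present = {name: w for name, w in WEIGHTS.items() if name in scores}
    total_weight = sum(present.values())
    return sum(scores[name] * w for name, w in present.items()) / total_weight

# Illustrative inputs only (the ARC figure matches the table above).
print(composite_score({"arc_challenge": 96.9, "mmlu": 88.6}))
```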