Evaluates AI coding assistants on real-world multi-file editing tasks across diverse codebases. Tests the ability to understand project context and make coordinated changes.
Why it matters: The first benchmark designed specifically for agentic coding assistants that edit multiple files. More realistic than single-function benchmarks like HumanEval.
Top Model: Composer 2 (61.3%)
Average Score: 61.3% (across 1 model)
Models Tested: 1 (metric: pass rate)
Human Baseline: — (scores range from 0% to 100%)
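As a rough illustration of the pass-rate metric used here, the sketch below shows how a score such as 61.3% can be derived from per-task pass/fail results. The task structure and field names are assumptions for illustration, not CursorBench's actual harness or data format.

```python
# Minimal sketch of a pass-rate calculation, assuming each benchmark task
# yields a boolean "passed" result. The TaskResult record is hypothetical;
# CursorBench's real evaluation format may differ.
from dataclasses import dataclass

@dataclass
class TaskResult:
    task_id: str
    passed: bool  # did the model's multi-file edit satisfy the task's checks?

def pass_rate(results: list[TaskResult]) -> float:
    """Fraction of tasks passed, reported as a percentage."""
    if not results:
        return 0.0
    return 100.0 * sum(r.passed for r in results) / len(results)

# Example: 613 of 1000 tasks passed -> 61.3%
results = [TaskResult(f"task-{i}", i < 613) for i in range(1000)]
print(f"{pass_rate(results):.1f}%")
```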
All models with a reported CursorBench score, ranked by highest pass rate.
CursorBench is a standardized evaluation that measures how well AI models handle multi-file editing tasks in real codebases. It provides comparable scores across different models, helping developers choose the right model for their needs.
Composer 2 currently holds the top score on the CursorBench benchmark. See our full rankings table above for the complete leaderboard, which currently lists 1 model.
We update benchmark data from multiple sources, including the HuggingFace Open LLM Leaderboard and LMArena. Scores are refreshed regularly as new evaluations are published and new models are released.
CursorBench alone should not decide your model choice. While it is an important indicator, real-world performance depends on many factors, including pricing, latency, context window, and specific task requirements. We recommend using our composite score, which weighs multiple benchmarks and practical factors.
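To make the idea of a composite score concrete, here is a minimal sketch that blends a benchmark score with practical factors via a weighted average. The factor names, normalization, and weights are hypothetical assumptions for illustration, not the actual formula used for our composite score.

```python
# Minimal sketch of a weighted composite score, assuming each factor has
# already been normalized to a 0-100 scale. Factors and weights below are
# hypothetical and do not reflect the site's real composite formula.
def composite_score(factors: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of normalized factor scores (each 0-100)."""
    total_weight = sum(weights.values())
    return sum(factors[name] * w for name, w in weights.items()) / total_weight

model = {
    "cursorbench": 61.3,    # pass rate on CursorBench
    "price": 70.0,          # cheaper -> higher normalized score
    "latency": 80.0,        # faster -> higher normalized score
    "context_window": 90.0, # larger -> higher normalized score
}
weights = {"cursorbench": 0.5, "price": 0.2, "latency": 0.2, "context_window": 0.1}
print(f"{composite_score(model, weights):.1f}")
```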