MT-Bench measures multi-turn conversation quality across 8 categories (writing, roleplay, extraction, reasoning, math, coding, STEM, humanities) using 80 expert-crafted two-turn questions. Responses are scored by GPT-4 acting as judge.
Why it matters: Tests real conversational ability across turns, not just single-shot performance. Important for chat applications.
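To illustrate how the judged average works, here is a minimal sketch. The per-question records below are hypothetical stand-ins for real judge output, not actual MT-Bench data:

```python
from statistics import mean

# Hypothetical results: (category, turn-1 score, turn-2 score), where each
# score is the 1-10 rating the GPT-4 judge gave that turn's answer.
results = [
    ("writing", 9, 8),
    ("coding", 7, 6),
    ("math", 5, 4),
    # ...one entry per question, 80 in total
]

# The reported MT-Bench score is the mean judge rating over every turn.
overall = mean(score for _, t1, t2 in results for score in (t1, t2))

# Per-category averages reveal where a model is strong or weak.
by_category: dict[str, list[int]] = {}
for category, t1, t2 in results:
    by_category.setdefault(category, []).extend([t1, t2])

print(f"overall: {overall:.2f}")
for category, scores in by_category.items():
    print(f"{category}: {mean(scores):.2f}")
```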
Summary stats (metric: average judge score; range: 1/10–10/10):
Top Model: —
Average Score: —
Models Tested: —
Human Baseline: —
All models with a reported MT-Bench score, ranked from highest to lowest average score.
MT-Bench is a standardized evaluation that measures how well AI models sustain multi-turn conversations. Because every model answers the same 80 questions and is scored by the same judge, scores are directly comparable across models, helping developers choose the right model for their needs.
See the full rankings table above for the current top scorer and the complete MT-Bench leaderboard.
We update benchmark data from multiple sources, including the HuggingFace Open LLM Leaderboard and LMArena. Scores are refreshed regularly as new evaluations are published and new models are released.
A high MT-Bench score alone doesn't tell you which model to choose. While MT-Bench is an important indicator, real-world performance depends on many factors, including pricing, latency, context window, and specific task requirements. We recommend using our composite score, which weighs multiple benchmarks and practical factors.
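For context, here is a minimal sketch of what such a composite might look like. The weights and normalization reference points are assumptions chosen for illustration, not our published methodology:

```python
# Purely illustrative composite: the weights and normalization reference
# points below are assumptions, not this site's published formula.
def composite_score(mt_bench: float, price_per_mtok: float,
                    latency_ms: float, context_window: int) -> float:
    quality = mt_bench / 10.0                      # MT-Bench uses a 1-10 scale
    cost = max(0.0, 1.0 - price_per_mtok / 30.0)   # cheaper is better
    speed = max(0.0, 1.0 - latency_ms / 2000.0)    # lower latency is better
    context = min(context_window / 200_000, 1.0)   # longer context is better
    return 0.5 * quality + 0.2 * cost + 0.2 * speed + 0.1 * context

# Example: a strong, mid-priced model with moderate latency.
print(round(composite_score(8.6, 10.0, 400, 128_000), 3))
```

The design point is simply that quality, cost, speed, and context length are normalized to a common 0-1 range before being blended, so no single factor dominates by virtue of its units.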