The top AI models for every use case, ranked by our composite scoring system. Covering 318+ models across 35+ providers. Data refreshed hourly from live benchmarks, pricing, and capabilities.
GPT-5.4 Pro by OpenAI — 1.1M context, $180.00/1M output
Models with reasoning capabilities, function calling, and top benchmark scores for code generation.
High-quality language models with streaming support, large context windows, and strong generation quality.
Models with dedicated reasoning capabilities for complex problem-solving and logical tasks.
The cheapest models that still deliver strong quality. Maximum performance per dollar.
Top-performing open-weight models you can self-host, fine-tune, and deploy without vendor lock-in.
Dedicated image generation models for creating visuals, art, and design assets from text prompts.
The highest-scoring models across all categories, ranked by composite score.
Our composite scoring system evaluates every model across six weighted dimensions: capabilities (25%), pricing tier (25%), context window (15%), recency (15%), output capacity (10%), and versatility (10%). Scores range from 0 to 100.
Capabilities include vision input, function calling, streaming, JSON mode, chain-of-thought reasoning, web search, and image output. Models that support more capabilities score higher, but pricing carries equal weight (25%) and context window a substantial share (15%), ensuring cost-effective models with large contexts are also surfaced.
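The weighting above can be sketched as a simple weighted sum. This is an illustrative reconstruction, not the site's actual code: the per-dimension scores passed in are hypothetical, and only the weights come from the description above.

```python
# Weights mirror the breakdown stated above: capabilities 25%,
# pricing 25%, context 15%, recency 15%, output capacity 10%,
# versatility 10%. Everything else here is illustrative.
WEIGHTS = {
    "capabilities": 0.25,
    "pricing": 0.25,
    "context": 0.15,
    "recency": 0.15,
    "output_capacity": 0.10,
    "versatility": 0.10,
}

def composite_score(dimension_scores: dict[str, float]) -> float:
    """Combine 0-100 per-dimension scores into a 0-100 composite."""
    assert set(dimension_scores) == set(WEIGHTS)
    return sum(WEIGHTS[d] * dimension_scores[d] for d in WEIGHTS)

# Hypothetical model: strong capabilities and context, mid-tier pricing.
example = {
    "capabilities": 90,
    "pricing": 60,
    "context": 95,
    "recency": 80,
    "output_capacity": 70,
    "versatility": 85,
}
print(composite_score(example))  # weighted sum of the six dimensions, ~79.25
```

Because each dimension is scored 0–100 and the weights sum to 1, the composite also lands on a 0–100 scale, matching the stated score range.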
Data is sourced from OpenRouter's live API, covering 318+ models from 35+ providers. Scores refresh hourly so rankings always reflect the latest model releases and pricing changes.
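A refresh pass over that feed amounts to normalizing each model entry into comparable fields. The sketch below uses a made-up sample payload rather than a live request; the field names (`id`, `context_length`, `pricing.prompt`/`pricing.completion` as per-token USD strings) follow OpenRouter's documented `GET /api/v1/models` schema at the time of writing, but treat the exact shape as an assumption to verify against the live API.

```python
# Made-up sample of one entry from OpenRouter's /api/v1/models response.
# Pricing values are USD per token, encoded as strings.
sample_response = {
    "data": [
        {
            "id": "openai/gpt-4o",
            "name": "OpenAI: GPT-4o",
            "context_length": 128000,
            "pricing": {"prompt": "0.0000025", "completion": "0.00001"},
        }
    ]
}

def normalize(entry: dict) -> dict:
    """Convert per-token USD string prices into $/1M tokens."""
    return {
        "id": entry["id"],
        "context": entry["context_length"],
        "input_per_m": float(entry["pricing"]["prompt"]) * 1_000_000,
        "output_per_m": float(entry["pricing"]["completion"]) * 1_000_000,
    }

models = [normalize(e) for e in sample_response["data"]]
print(models[0])  # e.g. output_per_m of 10.0 means $10 per 1M output tokens
```

Running this normalization on every entry in the feed yields the per-million pricing and context figures quoted throughout the rankings.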
Based on our composite scoring system that evaluates capabilities, pricing, context window, recency, output capacity, and versatility across 318+ models, GPT-5.4 Pro by OpenAI currently holds the #1 position with a score of 97. Rankings are updated hourly as new data comes in.
GPT-5.4 Pro by OpenAI leads our coding rankings with a score of 97. It excels at code generation thanks to its reasoning capabilities and 1.1M context window. Other strong coding models include GPT-5.4 and Claude Sonnet 4.6.
Mistral Nemo by Mistral AI offers excellent value at just $0.040 per million output tokens while maintaining a quality score of 54. For truly free options, several models from providers like Google and Meta are available at zero cost.
Open source models have closed the gap significantly. Gemini 3.1 Pro Preview Custom Tools scores 89, competitive with many proprietary options. Models from DeepSeek, Meta (Llama), and Alibaba (Qwen) now rival GPT-4o and Claude on many benchmarks. The main advantages of open source are self-hosting flexibility, fine-tuning, no vendor lock-in, and often lower API costs. Proprietary models like GPT-4o and Claude still lead on some enterprise features and ecosystem integrations.
Dive deeper into rankings, compare models head-to-head, or filter by price, category, and capabilities.