Every ranking on AI Models Map is computed algorithmically from real data pulled from the OpenRouter API. Data is refreshed hourly. There is no manual intervention, no expert panel, and no editorial curation: rankings are determined by the algorithm alone.
A fully automated pipeline with no human decision points.
A cron job pulls the latest model data from the OpenRouter API every hour, capturing pricing, context sizes, capabilities, and availability.
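The hourly fetch step can be sketched roughly as below. The endpoint path reflects OpenRouter's public models API, but the exact response fields and the `extract_fields` helper are illustrative assumptions, not the site's actual code:

```python
import json
import urllib.request

# Public OpenRouter models endpoint (returns {"data": [...]}).
OPENROUTER_MODELS_URL = "https://openrouter.ai/api/v1/models"

def fetch_models(url: str = OPENROUTER_MODELS_URL) -> list[dict]:
    """Pull the latest model list; a cron job would call this hourly."""
    with urllib.request.urlopen(url, timeout=30) as resp:
        return json.load(resp)["data"]

def extract_fields(model: dict) -> dict:
    """Keep only the fields the scorer needs (field names are assumptions)."""
    return {
        "id": model["id"],
        "prompt_price": float(model["pricing"]["prompt"]),
        "completion_price": float(model["pricing"]["completion"]),
        "context_length": model.get("context_length", 0),
    }

if __name__ == "__main__":
    snapshot = [extract_fields(m) for m in fetch_models()]
    print(f"fetched {len(snapshot)} models")
```

In a real pipeline the fetch would also capture capability flags and availability, as described above.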
Each model is scored across six dimensions using a deterministic algorithm: the same input data always produces the same scores, with no randomness and no manual overrides.
Computed scores are written to a local JSON cache that the web application reads at runtime. Rankings update automatically with each refresh cycle.
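The cache step amounts to a write-then-swap of a JSON file. A minimal sketch, where the cache path and record shape are hypothetical:

```python
import json
from pathlib import Path

# Hypothetical cache location read by the web app at runtime.
CACHE_PATH = Path("data/model-scores.json")

def write_scores(scores: list[dict], path: Path = CACHE_PATH) -> None:
    """Persist the latest scores; write to a temp file, then swap it in
    so readers never observe a partially written cache."""
    path.parent.mkdir(parents=True, exist_ok=True)
    tmp = path.with_suffix(".tmp")
    tmp.write_text(json.dumps(scores, indent=2))
    tmp.replace(path)

def read_scores(path: Path = CACHE_PATH) -> list[dict]:
    """What the web application does on each request or build."""
    return json.loads(path.read_text())
```

Re-running the refresh simply overwrites the file, which is why rankings update automatically with each cycle.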
Each model receives a composite score from 0 to 100, calculated as a weighted sum of six dimensions. The weights are fixed and published here.
What the model can do: coding, reasoning, instruction following, and multi-domain knowledge. Derived from benchmark data and capability flags reported by the provider.
Cost-effectiveness based on input and output token pricing. Lower cost per quality point scores higher, making it easy to find the best value for your budget.
Maximum context length the model supports. Larger context windows enable processing longer documents, codebases, and conversation histories.
How recently the model was released or last updated. Newer models receive a freshness boost that decays over time, reflecting the fast pace of AI development.
Maximum number of output tokens the model can generate in a single response. Higher output capacity matters for long-form generation tasks like code and articles.
Range of modalities and tasks the model supports, such as text, code, function calling, and image understanding. More versatile models score higher.
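Putting the six dimensions together, the composite is a fixed weighted sum over 0-100 dimension scores. The sketch below uses placeholder weights and an assumed exponential half-life for the freshness decay; the actual published weights and decay curve may differ:

```python
# Placeholder weights -- illustrative only, not the published values.
WEIGHTS = {
    "capability": 0.30,
    "value": 0.20,
    "context": 0.15,
    "freshness": 0.10,
    "output": 0.10,
    "versatility": 0.15,
}

def freshness_score(days_since_release: float, half_life_days: float = 180.0) -> float:
    """Freshness boost that decays with model age (half-life is an assumption)."""
    return 100.0 * 0.5 ** (days_since_release / half_life_days)

def composite_score(dims: dict[str, float]) -> float:
    """Weighted sum of six 0-100 dimension scores; deterministic for a given input."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # weights must sum to 1
    return round(sum(WEIGHTS[k] * dims[k] for k in WEIGHTS), 1)
```

Because the weights sum to 1, the composite stays on the same 0 to 100 scale as the individual dimensions.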