Ranked directory of 155 open source AI models with open weights you can self-host, fine-tune, and deploy without vendor lock-in. Scores are computed from capabilities, pricing, context window, recency, and output capacity. Updated hourly.
Open Source Models: 155
Average Score: 47
Free to Use: 25
All 155 open source models ranked by composite score. Click any model for detailed benchmarks and analysis.
Which companies and organizations contribute the most open source AI models.
Alibaba: 38 open source models
Mistral AI: 16 open source models
Meta: 15 open source models
NVIDIA: 11 open source models
DeepSeek: 11 open source models
Allen AI: 7 open source models
MiniMax: 6 open source models
Open source AI models give you full control over your AI stack. Here is why teams choose open weights over proprietary APIs.
Self-host models on your own infrastructure. Your data never leaves your network, meeting strict compliance and privacy requirements.
Open weights mean you can deploy anywhere: on-premises, any cloud provider, or at the edge. Switch infrastructure without changing your model.
Fine-tune open source models on your proprietary data. Adapt architecture, optimize inference, and build domain-specific solutions.
Benefit from global research communities. Open models receive rapid improvements, safety patches, and community-built tooling.
The top open source AI models in 2026 include Meta's Llama 4 family, DeepSeek R1 and V3, Mistral's Large and Medium models, Alibaba's Qwen 2.5 series, and Google's Gemma. These models offer competitive performance with full access to model weights, enabling self-hosting and fine-tuning. Rankings are updated hourly based on benchmarks, capabilities, and community adoption.
Open source AI models are free to download and self-host; you only pay for your own compute infrastructure (GPU servers). Many providers also offer free API access to popular open source models through services like OpenRouter, Together AI, and Groq. Self-hosting eliminates per-token API costs entirely, which can save significant money at scale.
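To make the "save significant money at scale" claim concrete, here is a back-of-envelope break-even sketch. All numbers (API price, GPU hourly rate, server throughput) are hypothetical assumptions for illustration, not quotes from any provider; plug in your own figures.

```python
# Break-even sketch: hosted per-token API vs. self-hosted GPU server.
# Every constant below is an ASSUMED, illustrative value.
API_PRICE_PER_M_TOKENS = 1.00   # assumed blended $/1M tokens via a hosted API
GPU_SERVER_PER_HOUR = 2.00      # assumed $/hour for a self-hosted GPU instance
SERVER_TOKENS_PER_SEC = 2_000   # assumed aggregate throughput of that server


def api_cost(tokens: int) -> float:
    """Cost of generating `tokens` through a metered per-token API."""
    return tokens / 1_000_000 * API_PRICE_PER_M_TOKENS


def self_host_cost(tokens: int) -> float:
    """Cost of generating `tokens` on a rented GPU server at full utilization."""
    hours = tokens / SERVER_TOKENS_PER_SEC / 3600
    return hours * GPU_SERVER_PER_HOUR


monthly_tokens = 5_000_000_000  # example workload: 5B tokens/month
print(f"API:       ${api_cost(monthly_tokens):,.0f}/month")
print(f"Self-host: ${self_host_cost(monthly_tokens):,.0f}/month")
```

The crossover depends almost entirely on utilization: a self-hosted server only beats metered pricing if you keep it busy, which is why the sketch assumes full utilization.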
Yes, the performance gap has narrowed significantly. Models like DeepSeek R1, Llama 4 Maverick, and Qwen 2.5 now match or exceed GPT-4 on many benchmarks including coding, math, and reasoning tasks. While frontier proprietary models still lead on the most complex tasks, open source alternatives are viable for the vast majority of production use cases.
The most popular tools for running open source models locally are Ollama (easiest setup, one-line install), llama.cpp (optimized C++ inference for consumer hardware), and vLLM (high-throughput production serving). For consumer GPUs, quantized versions (GGUF format) let you run 7B-70B parameter models on hardware with 8-48GB VRAM. Cloud GPU providers like RunPod and Lambda also offer on-demand hosting.
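As a minimal sketch of what self-hosted inference looks like in practice, the snippet below queries a locally running Ollama server through its default REST endpoint (`http://localhost:11434/api/generate`) using only the Python standard library. The model name `llama3` is an example; substitute any model you have pulled.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint


def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a non-streaming generate request for a local Ollama server."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    return urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )


def generate(model: str, prompt: str) -> str:
    """Send the prompt and return the model's response text."""
    with urllib.request.urlopen(build_request(model, prompt)) as resp:
        return json.loads(resp.read())["response"]


# Usage (requires a running Ollama server and a pulled model, e.g. `ollama pull llama3`):
#   print(generate("llama3", "Explain quantization in one sentence."))
```

Because the request never leaves localhost, this is the same pattern that satisfies the data-privacy and compliance requirements described above.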
Open weight models (like Llama and Gemma) release the trained model weights so you can run and fine-tune them, but may not release the full training code, datasets, or training infrastructure. Truly open source models release everything (weights, training code, data pipelines, and evaluation suites) under permissive licenses. Most models marketed as "open source" are technically open weight, which still provides the key benefits of self-hosting and customization.
Dive deeper into AI model rankings, pricing breakdowns, and head-to-head comparisons across all providers.