Complete pricing breakdown for all 11 DeepSeek API models. Compare input and output costs per million tokens for DeepSeek R1, V3, and Chat models. Includes a cost calculator and side-by-side comparison with OpenAI and Anthropic.
DeepSeek is a Chinese AI research lab that has gained significant attention for developing high-performance open-source language models. Founded in 2023 and headquartered in Hangzhou, China, DeepSeek focuses on building AI systems that push the boundaries of reasoning, coding, and general intelligence -- while keeping costs dramatically lower than Western competitors.
Their flagship DeepSeek R1 reasoning model made waves by matching OpenAI o1-level performance at a fraction of the price. The DeepSeek V3 model delivers GPT-4o-class capabilities for general tasks, coding, and multilingual understanding. All DeepSeek models are released with open weights, meaning developers can self-host them or access them through the official API with pay-per-token pricing.
| Model | Input $/1M | Output $/1M |
|---|---|---|
| R1 Distill Qwen 32B | $0.290 | $0.290 |
| DeepSeek V3.2 | $0.260 | $0.380 |
| DeepSeek V3.2 Exp | $0.270 | $0.410 |
| DeepSeek V3.1 | $0.150 | $0.750 |
| DeepSeek V3 0324 | $0.200 | $0.770 |
| DeepSeek V3.1 Terminus | $0.210 | $0.790 |
| R1 Distill Llama 70B | $0.700 | $0.800 |
| DeepSeek V3 | $0.320 | $0.890 |
| DeepSeek V3.2 Speciale | $0.400 | $1.20 |
| R1 0528 | $0.450 | $2.15 |
| R1 | $0.700 | $2.50 |
See how DeepSeek API pricing stacks up against OpenAI (GPT) and Anthropic (Claude) models. DeepSeek is known for offering comparable performance at significantly lower prices. All prices in USD per million tokens.
**DeepSeek**

| Model | In | Out |
|---|---|---|
| R1 Distill Qwen 32B | $0.290 | $0.290 |
| DeepSeek V3.2 | $0.260 | $0.380 |
| DeepSeek V3.2 Exp | $0.270 | $0.410 |
| DeepSeek V3.1 | $0.150 | $0.750 |
| DeepSeek V3 0324 | $0.200 | $0.770 |
| DeepSeek V3.1 Terminus | $0.210 | $0.790 |
| R1 Distill Llama 70B | $0.700 | $0.800 |
| DeepSeek V3 | $0.320 | $0.890 |
**OpenAI**

| Model | In | Out |
|---|---|---|
| gpt-oss-120b (free) | Free | Free |
| gpt-oss-20b (free) | Free | Free |
| Sora | Free | Free |
| gpt-oss-20b | $0.030 | $0.140 |
| gpt-oss-120b | $0.039 | $0.190 |
| gpt-oss-safeguard-20b | $0.075 | $0.300 |
| GPT-5 Nano | $0.050 | $0.400 |
| GPT-4.1 Nano | $0.100 | $0.400 |
**Anthropic**

| Model | In | Out |
|---|---|---|
| Claude 3 Haiku | $0.250 | $1.25 |
| Claude 3.5 Haiku | $0.800 | $4.00 |
| Claude Haiku 4.5 | $1.00 | $5.00 |
| Claude Sonnet 4.6 | $3.00 | $15.00 |
| Claude Sonnet 4.5 | $3.00 | $15.00 |
| Claude Sonnet 4 | $3.00 | $15.00 |
| Claude 3.7 Sonnet | $3.00 | $15.00 |
| Claude 3.7 Sonnet (thinking) | $3.00 | $15.00 |
Estimate daily and monthly costs for common usage patterns from the representative per-million-token prices below. Estimates assume an average of ~1,000 input tokens and ~500 output tokens per request.
| Model | $/1M In | $/1M Out |
|---|---|---|
| R1 Distill Qwen 32B | $0.290 | $0.290 |
| DeepSeek V3.2 Exp | $0.270 | $0.410 |
| DeepSeek V3.1 Terminus | $0.210 | $0.790 |
| DeepSeek V3.2 Speciale | $0.400 | $1.20 |
| R1 | $0.700 | $2.50 |
Note: Actual costs vary with prompt length, response length, and batch processing. DeepSeek offers some of the most competitive pricing in the industry, with additional discounts for cached input tokens. Try the interactive calculator for custom estimates.
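The estimate above is easy to reproduce yourself. A minimal sketch in Python, using the prices from the table and the ~1,000-in/~500-out per-request assumption (the 30-day month is a simplification):

```python
# Prices in $ per 1M tokens (input, output), copied from the table above.
PRICES = {
    "R1 Distill Qwen 32B": (0.290, 0.290),
    "DeepSeek V3.2 Exp": (0.270, 0.410),
    "DeepSeek V3.1 Terminus": (0.210, 0.790),
    "DeepSeek V3.2 Speciale": (0.400, 1.20),
    "R1": (0.700, 2.50),
}

def monthly_cost(model: str, requests_per_day: int,
                 in_tokens: int = 1_000, out_tokens: int = 500) -> float:
    """Estimated 30-day cost in USD for a steady request volume."""
    price_in, price_out = PRICES[model]
    per_request = (in_tokens * price_in + out_tokens * price_out) / 1_000_000
    return per_request * requests_per_day * 30
```

For example, 1,000 requests per day against R1 works out to roughly $58.50 per 30 days, versus about $14.25 for DeepSeek V3.2 Exp at the same volume.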
DeepSeek charges per token, not per request. A token is roughly 3/4 of a word. The sentence "Hello, how are you?" is about 6 tokens. Prices are quoted per million tokens. Input tokens (your prompts) are cheaper than output tokens (the model's response) because output generation requires more computation.
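To make the per-token arithmetic concrete, here is a small sketch using DeepSeek V3's rates from the table above; the token counts are illustrative:

```python
# Per-request cost for DeepSeek V3 ($0.320/1M input, $0.890/1M output).
def request_cost(in_tokens: int, out_tokens: int,
                 price_in: float = 0.320, price_out: float = 0.890) -> float:
    """Cost in USD; prices are quoted per million tokens."""
    return (in_tokens * price_in + out_tokens * price_out) / 1_000_000

# A 400-token prompt with a 300-token reply costs a fraction of a cent:
cost = request_cost(400, 300)  # ~= $0.000395
```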
DeepSeek R1 is a reasoning model that uses chain-of-thought to solve complex problems, competing with OpenAI o1 at a fraction of the cost. DeepSeek V3 is the general-purpose model optimized for speed and broad capabilities including coding, translation, and analysis. R1 costs more due to extended reasoning but delivers superior accuracy on hard tasks.
All DeepSeek models are released with open weights under permissive licenses. This means you can self-host models on your own infrastructure, eliminating per-token costs entirely for high-volume workloads. The API provides a convenient managed option with pay-as-you-go pricing for teams that prefer not to manage infrastructure.
DeepSeek is already one of the cheapest API providers, but you can save further by using cached input tokens for repeated prompts, setting appropriate max_tokens limits, and choosing the right model for each task. Use V3 for general tasks and only upgrade to R1 when complex reasoning is needed.
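That routing advice can be expressed as a tiny dispatch helper. The sketch below assumes the API's `deepseek-chat` (V3) and `deepseek-reasoner` (R1) model identifiers; the task categories are illustrative, not an official taxonomy:

```python
# Default to the cheaper general-purpose model; escalate to the reasoning
# model only for tasks that genuinely need multi-step reasoning.
# Task names here are illustrative assumptions, not an official list.
REASONING_TASKS = {"math_proof", "competition_coding", "multi_step_planning"}

def pick_model(task_type: str) -> str:
    """Return the API model identifier for a given task category."""
    return "deepseek-reasoner" if task_type in REASONING_TASKS else "deepseek-chat"

pick_model("translation")  # "deepseek-chat"
pick_model("math_proof")   # "deepseek-reasoner"
```

Combining this with a sensible `max_tokens` limit on each request keeps both output spend and latency predictable.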
- Compare with GPT-4o, o3, and all OpenAI model costs.
- Compare with Claude Opus 4, Sonnet 4, and all Anthropic models.
- Compare with Gemini 2.5 Pro, Flash, and all Google models.
- Find the most affordable models across all providers.
DeepSeek R1 pricing is $0.700/1M input tokens and $2.50/1M output tokens. DeepSeek R1 is a reasoning model that competes with OpenAI o1, offering chain-of-thought reasoning at a fraction of the cost. The distilled variants are cheaper still: R1 Distill Qwen 32B costs $0.290/1M for both input and output.
DeepSeek currently offers no free models via its API, but its pricing is extremely competitive -- often significantly cheaper than OpenAI and Anthropic equivalents. As a Chinese AI lab focused on open-source models, DeepSeek keeps costs low while delivering state-of-the-art performance.
DeepSeek's average output price is $0.994/1M tokens across 11 paid models. OpenAI offers 62 models with varying price points. DeepSeek models are typically much more affordable than OpenAI equivalents -- DeepSeek R1 delivers reasoning capabilities comparable to o1 at a fraction of the cost, and DeepSeek V3 competes with GPT-4o at significantly lower prices.
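The $0.994 figure is simply the mean of the eleven output prices in the pricing table, which is easy to verify:

```python
# Output prices ($ per 1M tokens) for the 11 paid models in the table above.
output_prices = [0.290, 0.380, 0.410, 0.750, 0.770, 0.790,
                 0.800, 0.890, 1.20, 2.15, 2.50]
avg = sum(output_prices) / len(output_prices)
round(avg, 3)  # 0.994
```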
DeepSeek V3.2 is priced at $0.260/1M input and $0.380/1M output tokens. It is DeepSeek's flagship general-purpose model with a 164K context window, offering strong performance across coding, reasoning, and multilingual tasks at highly competitive rates. The earlier DeepSeek V3 is listed at $0.320/1M input and $0.890/1M output.
DeepSeek charges per token for API usage. Tokens are the basic units of text -- roughly 3/4 of a word. Pricing is split into input tokens (your prompts and context) and output tokens (the model's response). Output tokens are typically more expensive because they require more computation. Prices are quoted per million tokens. DeepSeek also offers discounted rates for cached input tokens.
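Splitting input cost between cached and uncached tokens is straightforward arithmetic; in this sketch the cache-hit price is a parameter rather than a published rate:

```python
def input_cost(tokens: int, cache_hit_ratio: float,
               price_miss: float, price_hit: float) -> float:
    """Input cost in USD. Prices are $ per 1M tokens; price_hit is the
    discounted cached-input rate (a parameter here, not an official figure)."""
    hit_tokens = tokens * cache_hit_ratio
    miss_tokens = tokens - hit_tokens
    return (hit_tokens * price_hit + miss_tokens * price_miss) / 1_000_000
```

For example, with a 50% cache-hit rate, 1M input tokens at a $0.320 base rate and a hypothetical $0.032 cached rate would cost $0.176 rather than $0.320.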