Compare context window sizes across AI models. Larger context windows allow processing more text, code, or documents in a single request. Scale is logarithmic.
A context window is the maximum number of tokens (roughly 4 characters each) an AI model can process in a single request. It includes both your input prompt and the model's response. Larger context windows allow you to send more text, code, or documents at once, which is important for tasks like analyzing long documents or maintaining extended conversations.
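The roughly-4-characters-per-token rule of thumb can be turned into a quick feasibility check. This is a minimal sketch: `estimate_tokens` and `fits_in_context` are illustrative helper names, the 4.0 ratio is the heuristic from above (real tokenizers vary by model and language), and the reserved response budget is an assumed placeholder.

```python
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Rough token estimate using the ~4-characters-per-token heuristic.

    Real tokenizers differ by model and language; treat this as an
    order-of-magnitude estimate, not an exact count.
    """
    return max(1, round(len(text) / chars_per_token))


def fits_in_context(prompt: str, context_window: int,
                    reserved_for_response: int = 1024) -> bool:
    """Check whether a prompt likely fits in the window.

    Input and output share the same context window, so room must be
    reserved for the model's response as well as the prompt.
    """
    return estimate_tokens(prompt) + reserved_for_response <= context_window


# ~12,000 characters ≈ 3,000 tokens; fits in an 8,192-token window
# even after reserving 1,024 tokens for the response.
print(fits_in_context("hello " * 2000, 8192))  # → True
```

Because the heuristic can be off by a large margin for code or non-English text, a real application would count tokens with the model's own tokenizer before sending the request.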
Context window size determines how much information you can include in a single request. For coding tasks, larger windows let you include more source files. For document analysis, larger windows handle longer texts without splitting. For conversations, larger windows maintain more history. However, larger contexts usually cost more per request and can increase latency.
The chart uses a logarithmic scale because context windows range from 8K to over 1M tokens. On a linear scale, small models would be invisible next to the largest ones. The log scale compresses this range so you can visually compare models across all sizes, while the labeled token counts remain exact.
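The compression effect is easy to quantify. In this sketch the 8K and 1M endpoints come from the range stated above; the intermediate window sizes are illustrative values, not a claim about any specific model lineup.

```python
import math

# Illustrative context window sizes spanning the stated 8K-to-1M+ range.
windows = {"8K": 8_192, "32K": 32_768, "128K": 131_072, "1M": 1_048_576}

linear_span = max(windows.values())
for name, tokens in windows.items():
    # Linear axis: position proportional to the raw token count.
    # Log axis: position proportional to log10(tokens), so each 10x
    # increase in window size moves the same distance along the axis.
    linear_frac = tokens / linear_span
    log_pos = math.log10(tokens)
    print(f"{name:>4}: linear {linear_frac:7.2%} of axis, log10 position {log_pos:.2f}")
```

On the linear axis an 8K window occupies under 1% of the span up to 1M and would be an invisible sliver, whereas on the log axis the positions run from about 3.9 to 6.0, keeping every model visibly separated.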