gpt-oss-120b is an open-weight, 117B-parameter Mixture-of-Experts (MoE) language model from OpenAI designed for high-reasoning, agentic, and general-purpose production use cases. It activates 5.1B parameters per forward pass and is optimized to run on a single H100 GPU with native MXFP4 quantization. The model supports configurable reasoning depth, full chain-of-thought access, and native tool use, including function calling, browsing, and structured output generation.
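Since the model exposes configurable reasoning depth through an OpenAI-compatible chat interface, a minimal sketch of a call might look like the following. The `base_url`, the model id, and the `"Reasoning: high"` system-prompt convention are assumptions here; check your provider's documentation for the exact values.

```python
# Minimal sketch: querying gpt-oss-120b through an OpenAI-compatible endpoint.
# The base_url, model id, and reasoning-depth convention below are assumptions,
# not guaranteed specifics of any particular provider.
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",  # hypothetical provider endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="openai/gpt-oss-120b",  # assumed model id on the provider
    messages=[
        # One common convention for setting reasoning depth on gpt-oss models
        # is a directive in the system prompt (assumption; verify with docs).
        {"role": "system", "content": "Reasoning: high"},
        {"role": "user", "content": "Summarize the trade-offs of MoE models."},
    ],
)
print(response.choices[0].message.content)
```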
| Signal | Strength | Weight | Impact |
|---|---|---|---|
| Capabilities | 50 | 30% | +15.0 |
| Recency | 91 | 15% | +13.7 |
| Output Capacity | 85 | 15% | +12.8 |
| Context Window | 81 | 15% | +12.2 |
| Pricing | 30 | 25% | +7.5 |
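The Impact column appears to be the product of Strength and Weight, rounded to one decimal; every row in the table is consistent with that rule. A minimal sketch reproducing the arithmetic, where the half-up rounding and the composite sum are inferred from the values shown rather than documented:

```python
# Sketch of the apparent scoring rule: Impact = Strength * Weight,
# rounded half-up to one decimal (inferred from the table, not documented).
from decimal import Decimal, ROUND_HALF_UP

signals = {
    "Capabilities":    (Decimal("50"), Decimal("0.30")),
    "Recency":         (Decimal("91"), Decimal("0.15")),
    "Output Capacity": (Decimal("85"), Decimal("0.15")),
    "Context Window":  (Decimal("81"), Decimal("0.15")),
    "Pricing":         (Decimal("30"), Decimal("0.25")),
}

total = Decimal("0")
for name, (strength, weight) in signals.items():
    impact = (strength * weight).quantize(Decimal("0.1"), rounding=ROUND_HALF_UP)
    total += strength * weight
    print(f"{name}: +{impact}")

# Summing the unrounded products gives 61.05, i.e. +61.1 after rounding;
# treating that sum as a composite score is an assumption.
print(f"Composite: +{total.quantize(Decimal('0.1'), rounding=ROUND_HALF_UP)}")
```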