Llama 4 Scout 17B Instruct (16E) is a mixture-of-experts (MoE) language model developed by Meta, activating 17 billion parameters per token out of a total of 109B. It accepts native multimodal input (text and image) and produces text and code output in 12 supported languages. Designed for assistant-style interaction and visual reasoning, Scout uses 16 experts in its MoE layers (the "16E" in its name) and supports a context length of up to 10 million tokens, with a pretraining corpus of roughly 40 trillion tokens. Built for efficient local or commercial deployment, Llama 4 Scout uses early fusion to integrate modalities and is instruction-tuned for multilingual chat, captioning, and image-understanding tasks. It is released under the Llama 4 Community License, has a knowledge cutoff of August 2024, and was publicly launched on April 5, 2025.
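For a sense of how the instruct checkpoint is typically called, here is a minimal text-only chat sketch using the Hugging Face transformers pipeline. It assumes the model is published under an ID like `meta-llama/Llama-4-Scout-17B-16E-Instruct` and is exposed through the standard text-generation pipeline; hardware must hold all 109B parameters even though only 17B are active per token.

```python
# Minimal sketch: chat-style generation via the transformers pipeline.
# The repository ID below is an assumption about the published checkpoint
# name; adjust it if the actual ID differs.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-4-Scout-17B-16E-Instruct",  # assumed repo ID
    device_map="auto",        # spread the full 109B-parameter weights across available GPUs
    torch_dtype="bfloat16",
)

messages = [
    {"role": "user", "content": "Summarize what a mixture-of-experts model is in two sentences."}
]
result = generator(messages, max_new_tokens=128)
print(result[0]["generated_text"])
```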
| Signal | Strength | Weight | Impact |
|---|---|---|---|
| Capabilities | 67 | 30% | +20.0 |
| Context Window | 88 | 15% | +13.1 |
| Output Capacity | 70 | 15% | +10.5 |
| Recency | 69 | 15% | +10.3 |
| Pricing | 0 | 25% | +0.1 |
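The per-signal impacts in the table appear to be each signal's strength (on a 0-100 scale) multiplied by its weight. A minimal sketch of that weighted-score calculation follows; the formula is an inference from the table, not a documented scoring method.

```python
# Sketch of the weighted-signal scoring implied by the table above.
# impact = strength * weight is an assumption; strengths and weights
# are copied from the table.
signals = {
    "Capabilities":    {"strength": 67, "weight": 0.30},
    "Context Window":  {"strength": 88, "weight": 0.15},
    "Output Capacity": {"strength": 70, "weight": 0.15},
    "Recency":         {"strength": 69, "weight": 0.15},
    "Pricing":         {"strength": 0,  "weight": 0.25},
}

total = 0.0
for name, s in signals.items():
    impact = s["strength"] * s["weight"]  # e.g. 67 * 0.30 = 20.1
    total += impact
    print(f"{name:16s} impact ~ {impact:+.1f}")

print(f"Overall score ~ {total:.1f}")
```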
Community and practitioner feedback adds real-world signal on top of benchmarks and pricing.
Cost Estimator
Estimated savings: $40.51/month versus the category average.
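The savings figure comes from the page's estimator; the arithmetic behind this kind of estimate is just token volume times per-million-token price. A small sketch follows, with all prices and usage volumes being hypothetical placeholders rather than the estimator's actual inputs.

```python
# Sketch of a monthly cost estimate from per-million-token pricing.
# Every number below is a hypothetical placeholder, not the page's data.
def monthly_cost(input_tokens: float, output_tokens: float,
                 price_in_per_m: float, price_out_per_m: float) -> float:
    """Dollars per month, given token volumes and per-million-token prices."""
    return (input_tokens / 1e6) * price_in_per_m + (output_tokens / 1e6) * price_out_per_m

# Hypothetical workload: 50M input tokens and 10M output tokens per month.
scout = monthly_cost(50e6, 10e6, price_in_per_m=0.20, price_out_per_m=0.60)
category_avg = monthly_cost(50e6, 10e6, price_in_per_m=0.80, price_out_per_m=2.40)
print(f"Estimated savings: ${category_avg - scout:.2f}/month")
```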