Llama Guard 4 is a Llama 4 Scout-derived multimodal pretrained model, fine-tuned for content safety classification. Similar to previous versions, it can be used to classify content in both LLM inputs (prompt classification) and in LLM responses (response classification). It acts as an LLM—generating text in its output that indicates whether a given prompt or response is safe or unsafe, and if unsafe, it also lists the content categories violated. Llama Guard 4 was aligned to safeguard against the standardized MLCommons hazards taxonomy and designed to support multimodal Llama 4 capabilities. Specifically, it combines features from previous Llama Guard models, providing content moderation for English and multiple supported languages, along with enhanced capabilities to handle mixed text-and-image prompts, including multiple images. Additionally, Llama Guard 4 is integrated into the Llama Moderations API, extending robust safety classification to text and images.
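Because the classifier "acts as an LLM," its verdict arrives as generated text rather than a structured label: a first line of `safe` or `unsafe`, and, when unsafe, a following line listing the violated category codes (e.g. `S1`). A minimal sketch of a parser for that convention; the function name is ours, not part of any official API:

```python
def parse_guard_output(text: str) -> tuple[bool, list[str]]:
    """Parse a Llama Guard-style verdict.

    Expects the first non-empty line to be 'safe' or 'unsafe'; when
    unsafe, an optional second line carries comma-separated hazard
    category codes such as 'S1,S10'.
    """
    lines = [ln.strip() for ln in text.strip().splitlines() if ln.strip()]
    if not lines:
        raise ValueError("empty classifier output")
    if lines[0].lower() == "safe":
        return True, []
    # Unsafe: collect any category codes from the following line.
    categories = lines[1].split(",") if len(lines) > 1 else []
    return False, [c.strip() for c in categories]
```

For example, `parse_guard_output("unsafe\nS1,S10")` returns `(False, ["S1", "S10"])`, which downstream moderation logic can route on.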
| Signal | Strength | Weight | Impact |
|---|---|---|---|
| Capabilities | 50 | 30% | +15.0 |
| Context Window | 83 | 15% | +12.4 |
| Recency | 73 | 15% | +11.0 |
| Output Capacity | 20 | 15% | +3.0 |
| Pricing | 0 | 25% | +0.0 |
Community and practitioner feedback adds real-world signal on top of benchmarks and pricing.
Share your experience with Llama Guard 4 12B and help the community make better decisions.