by Meta
This safeguard model has 8B parameters and is based on the Llama 3 family. Like its predecessor, [LlamaGuard 1](https://huggingface.co/meta-llama/LlamaGuard-7b), it can classify both prompts and responses. LlamaGuard 2 behaves like a regular LLM: it generates text indicating whether the given input/output is safe or unsafe and, if deemed unsafe, also lists the content categories violated. For best results, use raw prompt input or the `/completions` endpoint instead of the chat API. In human evaluations, the model has demonstrated strong performance compared to leading closed-source models. To read more about the model release, [click here](https://ai.meta.com/blog/meta-llama-3/). Usage of this model is subject to [Meta's Acceptable Use Policy](https://llama.meta.com/llama3/use-policy/).
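Because the model reports its verdict as plain generated text (a `safe`/`unsafe` line, followed by violated category codes when unsafe), a small parser is enough to consume its completions. The layout assumed below is a hedged sketch inferred from the description above, not an official output schema:

```python
def parse_guard_output(text: str) -> tuple[str, list[str]]:
    """Parse a LlamaGuard 2 completion.

    Assumed layout (a sketch, not an official spec): the first
    non-empty line is the verdict ("safe" or "unsafe"); if unsafe,
    the next line holds comma-separated category codes like "S1,S9".
    """
    lines = [ln.strip() for ln in text.strip().splitlines() if ln.strip()]
    verdict = lines[0].lower() if lines else "safe"
    categories: list[str] = []
    if verdict == "unsafe" and len(lines) > 1:
        categories = [c.strip() for c in lines[1].split(",") if c.strip()]
    return verdict, categories
```

Feeding the raw completion text through this helper yields a verdict plus a (possibly empty) list of violated categories, which is easy to act on in moderation pipelines.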
| Signal | Strength | Weight | Impact |
|---|---|---|---|
| Context Window | 62 | 15% | +9.3 |
| Capabilities | 14 | 25% | +3.6 |
| Versatility | 33 | 10% | +3.3 |
| Output Capacity | 20 | 10% | +2.0 |
| Recency | 13 | 15% | +1.9 |
| Pricing Tier | 0 | 25% | +0.1 |
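The Impact column appears to be roughly Strength × Weight (a couple of rows differ by a tenth, presumably from rounding or a small baseline in the scoring widget). A minimal sketch of that weighting, using hypothetical names and only the rows that match exactly:

```python
def weighted_impact(strength: float, weight_pct: float) -> float:
    """Impact contribution of one signal: strength scaled by its weight.

    Assumption (sketch): impact = strength * weight, rounded to one decimal.
    """
    return round(strength * weight_pct / 100, 1)

# (Strength, Weight %) pairs from the table above
signals = {
    "Context Window": (62, 15),
    "Versatility": (33, 10),
    "Output Capacity": (20, 10),
}
impacts = {name: weighted_impact(s, w) for name, (s, w) in signals.items()}
```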