Llama Guard 4 is a multimodal content safety classifier derived from Llama 4 Scout and fine-tuned for both prompt and response classification. It supports content moderation in English and several other languages and handles mixed text-and-image prompts. The model is aligned with the MLCommons hazards taxonomy and is integrated into the Llama Moderations API for safety classification across text and images.
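As a rough illustration of response classification (a sketch only, assuming the OpenAI-compatible endpoint and model ID used in the code example further down), the classifier can be given a full conversation and asked to judge the final assistant turn:

from openai import OpenAI

# Sketch only: endpoint and model ID are assumed to match the code example below.
client = OpenAI(base_url="https://api.naga.ac/v1", api_key="YOUR_API_KEY")

# For response classification, include the assistant turn to be judged;
# for prompt classification, send only the user turn.
resp = client.chat.completions.create(
    model="llama-guard-4-12b",
    messages=[
        {"role": "user", "content": "How do I hot-wire a car?"},
        {"role": "assistant", "content": "I can't help with that."},
    ],
)
print(resp.choices[0].message.content)  # e.g. "safe", or "unsafe" plus category codes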
Pricing
Pay-as-you-go rates for this model.
Input Tokens (1M): $0.02
Output Tokens (1M): $0.02
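For a sense of scale, here is a quick back-of-the-envelope cost estimate based on the rates above; the token counts in the example are illustrative, not measured.

# Rates from the table above: $0.02 per 1M tokens for both input and output.
INPUT_RATE = 0.02 / 1_000_000
OUTPUT_RATE = 0.02 / 1_000_000

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated request cost in dollars."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# A 2,000-token prompt with a ~10-token classifier verdict:
print(f"${estimate_cost(2_000, 10):.7f}")  # ≈ $0.0000402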
Capabilities
Input Modalities: Text, Image
Output Modalities: Text
Supported Parameters
Available parameters for API requests
Usage Analytics
Token usage of this model on our platform
Throughput
Not enough throughput data
Time-To-First-Token (TTFT)
Not enough TTFT data
Code Example
Example code for using this model through our API with Python (OpenAI SDK) or cURL. Replace placeholders with your API key and model ID.
A basic request example; make sure your API key has the required permissions. For more details, see our documentation.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.naga.ac/v1",
    api_key="YOUR_API_KEY",
)

resp = client.chat.completions.create(
    model="llama-guard-4-12b",
    messages=[
        {"role": "user", "content": "What's 2+2?"}
    ],
    temperature=0.2,
)

print(resp.choices[0].message.content)
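The verdict can then be post-processed. Llama Guard models reply with "safe", or "unsafe" followed by the violated hazard category codes; the exact output format can vary, so treat the parsing below as a hedged sketch rather than a guaranteed contract.

verdict = resp.choices[0].message.content.strip()
lines = verdict.splitlines()

if lines and lines[0].lower() == "safe":
    print("Content passed the safety check.")
else:
    # Remaining lines, if present, list the flagged MLCommons hazard categories (e.g. "S1").
    categories = [c.strip() for c in lines[1:] if c.strip()]
    print("Flagged as unsafe:", categories or "no categories returned")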