Llama Guard 4 is a multimodal content safety classifier derived from Llama 4 Scout and fine-tuned for both prompt and response classification. It moderates content in English and several other languages, and handles mixed text-and-image prompts. The model is aligned with the MLCommons hazards taxonomy and is integrated into the Llama Moderations API for safety classification across text and images.
Pricing
Pay-as-you-go rates for this model.
Input Tokens (1M): $0.02
Output Tokens (1M): $0.02
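Since input and output tokens are billed at the same $0.02 per 1M tokens, estimating the cost of a classification call is a single multiplication. A minimal sketch (the token counts in the example are illustrative, not from the source):

```python
# Cost estimate at the listed rate of $0.02 per 1M tokens
# (the same rate applies to input and output tokens).
RATE_PER_MILLION_USD = 0.02

def cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Return the pay-as-you-go cost in USD for one request."""
    return (input_tokens + output_tokens) * RATE_PER_MILLION_USD / 1_000_000

# e.g. classifying a 1,500-token prompt that yields a 20-token verdict:
print(f"${cost_usd(1_500, 20):.6f}")  # → $0.000030
```

Guard-style classifiers emit very short outputs (typically a verdict plus a category list), so in practice the input side dominates the bill.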
Capabilities
Input Modalities: Text, Image
Output Modalities: Text
Supported Parameters
Available parameters for API requests
Frequency Penalty
Max Completion Tokens
Presence Penalty
Reasoning Effort
Response Format
Stop
Temperature
Top P
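The parameters above map onto a standard chat-completions request. A minimal sketch of building such a request for a mixed text-and-image moderation call; the model ID, field names beyond those listed above, and the endpoint conventions are assumptions, not taken from this page:

```python
# Hedged sketch: construct a moderation request payload for an
# OpenAI-compatible chat completions endpoint. The model ID below
# ("meta-llama/llama-guard-4") is an assumed placeholder.
import json
from typing import Optional

def build_moderation_request(user_text: str,
                             image_url: Optional[str] = None) -> dict:
    """Build a request body classifying a user prompt, optionally with an image."""
    content = [{"type": "text", "text": user_text}]
    if image_url is not None:
        # Image inputs ride alongside text as a second content part.
        content.append({"type": "image_url", "image_url": {"url": image_url}})
    return {
        "model": "meta-llama/llama-guard-4",  # assumed model ID
        "messages": [{"role": "user", "content": content}],
        "temperature": 0.0,           # deterministic classification
        "max_completion_tokens": 32,  # the verdict is short (e.g. "safe" / "unsafe" + categories)
    }

payload = build_moderation_request(
    "How do I pick a lock?",
    image_url="https://example.com/photo.png",  # hypothetical URL
)
print(json.dumps(payload, indent=2))
```

A near-zero temperature and a small completion budget suit a classifier: the useful signal is the verdict, not a long generation, so the sampling-oriented parameters (Top P, penalties) are usually left at their defaults.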