Llama 3.2 11B Vision Instruct
Llama 3.2 11B Vision is a multimodal model with 11 billion parameters, designed for tasks combining visual and textual data. It excels at image captioning and visual question answering, bridging the gap between language generation and visual reasoning. Pre-trained on a massive dataset of image-text pairs, it is ideal for content creation, AI-driven customer service, and research.
Pricing
Pay-as-you-go rates for this model are listed below.
Input tokens: $0.10 per 1M
Output tokens: $0.10 per 1M
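At these rates, a workload that consumes 1M input tokens and generates 1M output tokens costs $0.10 + $0.10 = $0.20.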
Capabilities
Input Modalities
Text, image
Output Modalities
Text
Code Example
Example code for using this model through our API with Python (OpenAI SDK) or cURL. Replace placeholders with your API key and model ID.
The basic request below sends a text-only prompt; make sure your API key has the required permissions. For more details, see our documentation.
from openai import OpenAI

# Point the client at the API endpoint and authenticate.
client = OpenAI(
    base_url="https://api.naga.ac/v1",
    api_key="YOUR_API_KEY",
)

# Send a simple text-only chat completion request.
resp = client.chat.completions.create(
    model="llama-3.2-11b-vision-instruct",
    messages=[
        {"role": "user", "content": "What's 2+2?"}
    ],
    temperature=0.2,
)

print(resp.choices[0].message.content)
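The same request can be made with cURL, as mentioned above. The SDK call hits the OpenAI-compatible /chat/completions route under the configured base URL, so the request looks like this (prompt reworded slightly to avoid escaping an apostrophe inside the single-quoted JSON):

curl https://api.naga.ac/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama-3.2-11b-vision-instruct",
    "messages": [{"role": "user", "content": "What is 2+2?"}],
    "temperature": 0.2
  }'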
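Because this is a vision model, images can be passed alongside text. The sketch below reuses the client configured above and assumes the endpoint accepts the OpenAI-style multimodal content format (a list of text and image_url parts); the image URL is a placeholder to replace with your own.

# Minimal visual question answering sketch, assuming OpenAI-style
# image_url content parts are accepted by this endpoint.
resp = client.chat.completions.create(
    model="llama-3.2-11b-vision-instruct",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                # Placeholder URL; replace with a publicly reachable image.
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
)
print(resp.choices[0].message.content)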