Llama 3.3 70B Instruct

llama-3.3-70b-instruct
by Meta Llama | Created May 25, 2025
Chat Completions

The Meta Llama 3.3 multilingual large language model (LLM) is a pretrained and instruction-tuned generative model with 70B parameters. Optimized for multilingual dialogue, it outperforms many open-source and closed chat models on industry benchmarks. Supported languages include English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai.

Pricing

Pay-as-you-go rates for this model. More details are available in the pricing documentation.

Input Tokens (1M): $0.29
Output Tokens (1M): $0.39
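
As a rough illustration of how these per-million-token rates translate into a per-request cost, the sketch below multiplies prompt and completion token counts (as reported in a response's `usage` field) by the prices above. The constants are copied from this page and may change; the helper function is illustrative, not part of the API.

from openai import OpenAI  # only needed if you pull token counts from a real response

# Rates listed above, in USD per 1M tokens (assumed current as of this page).
INPUT_PRICE_PER_M = 0.29   # input (prompt) tokens
OUTPUT_PRICE_PER_M = 0.39  # output (completion) tokens

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Return the pay-as-you-go cost in USD for a single request."""
    return (prompt_tokens * INPUT_PRICE_PER_M
            + completion_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Example: a 1,200-token prompt that produces an 800-token answer.
print(f"${estimate_cost(1_200, 800):.6f}")  # ~$0.000660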

Capabilities

Input Modalities: Text
Output Modalities: Text

Supported Parameters

Available parameters for API requests (a usage sketch follows the list).

Frequency Penalty
Logprobs
Max Completion Tokens
Parallel Tool Calls
Presence Penalty
Response Format
Stop
Temperature
Tool Choice
Tools
Top P
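
The sketch below shows how several of these parameters combine in a single request. Parameter names follow the OpenAI Python SDK; the values are illustrative examples, not recommended settings.

from openai import OpenAI

client = OpenAI(base_url="https://api.naga.ac/v1", api_key="YOUR_API_KEY")

# Illustrative request exercising several of the parameters listed above.
resp = client.chat.completions.create(
    model="llama-3.3-70b-instruct",
    messages=[{"role": "user", "content": "List three uses of a paperclip."}],
    temperature=0.7,            # Temperature: sampling randomness
    top_p=0.9,                  # Top P: nucleus sampling cutoff
    max_completion_tokens=200,  # Max Completion Tokens: hard cap on output length
    frequency_penalty=0.2,      # Frequency Penalty: discourage repeated tokens
    presence_penalty=0.1,       # Presence Penalty: encourage new topics
    stop=["\n\n"],              # Stop: cut generation at a blank line
    logprobs=True,              # Logprobs: return token log-probabilities
)
print(resp.choices[0].message.content)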

Usage Analytics

Live dashboard panels (not reproduced here): token usage across the last 30 active days, uptime/reliability over the last 7 days, throughput, and time-to-first-token (TTFT).

Code Example

Example code for using this model through our API with the Python OpenAI SDK (the endpoint is OpenAI-compatible, so the same request can be made with cURL). Replace the placeholders with your API key and the model ID.

Basic request example; make sure your API key has the required permissions. For more details, see our documentation.

from openai import OpenAI

# Point the OpenAI SDK at the OpenAI-compatible endpoint.
client = OpenAI(
    base_url="https://api.naga.ac/v1",
    api_key="YOUR_API_KEY",  # replace with your API key
)

# Minimal chat completion request.
resp = client.chat.completions.create(
    model="llama-3.3-70b-instruct",
    messages=[
        {"role": "user", "content": "What's 2+2?"}
    ],
    temperature=0.2,  # low temperature for a focused, near-deterministic answer
)
print(resp.choices[0].message.content)
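
Since Tools, Tool Choice, and Parallel Tool Calls appear in the supported parameters above, the following sketch shows what a function-calling request could look like with the same client. The `get_weather` tool is a made-up example for illustration; actual tool-call behaviour depends on the deployment.

from openai import OpenAI

client = OpenAI(base_url="https://api.naga.ac/v1", api_key="YOUR_API_KEY")

# Hypothetical tool definition; `get_weather` is an invented example.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="llama-3.3-70b-instruct",
    messages=[{"role": "user", "content": "What's the weather in Lisbon?"}],
    tools=tools,
    tool_choice="auto",          # Tool Choice: let the model decide whether to call a tool
    parallel_tool_calls=False,   # Parallel Tool Calls: at most one call per turn
)

# If the model decided to call the tool, inspect the proposed arguments.
for call in resp.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)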