Llama 3.1 8B Instruct
Model ID: llama-3.1-8b-instruct
Endpoint: Chat Completions
Meta’s Llama 3.1 8B instruct-tuned model, designed for fast and efficient dialogue. It performs strongly in human evaluations and is ideal for applications requiring a balance of speed and quality.
Pricing
Pay-as-you-go rates for this model.
Input Tokens: $0.05 per 1M
Output Tokens: $0.05 per 1M
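Because both rates are flat per-token prices, the cost of a request can be estimated directly from the token counts reported in the response. The following is a minimal sketch, assuming the listed rates apply linearly with no minimum charge and that the usage fields follow the OpenAI SDK naming (prompt_tokens, completion_tokens):

INPUT_RATE_PER_M = 0.05   # USD per 1M input tokens (from the pricing above)
OUTPUT_RATE_PER_M = 0.05  # USD per 1M output tokens (from the pricing above)

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    # Estimated USD cost of a single request under the assumed linear pricing.
    return (prompt_tokens / 1_000_000) * INPUT_RATE_PER_M + \
           (completion_tokens / 1_000_000) * OUTPUT_RATE_PER_M

# Example: 1,200 prompt tokens + 300 completion tokens
# comes to (1200 + 300) / 1e6 * 0.05 = $0.000075.
print(f"${estimate_cost(1200, 300):.6f}")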
Capabilities
Input Modalities: Text
Output Modalities: Text
Supported Parameters
Parameters accepted in API requests for this model; the sketch after this list shows several of them in use.
Frequency Penalty
Logit Bias
Logprobs
Max Completion Tokens
Parallel Tool Calls
Presence Penalty
Response Format
Stop
Temperature
Tool Choice
Tools
Top P
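The following is a hedged sketch of how several of the listed parameters map onto an OpenAI-compatible chat.completions.create call. Parameter names follow the OpenAI Python SDK; whether a given combination is honored depends on the model and API backend.

from openai import OpenAI

client = OpenAI(
    base_url="https://api.naga.ac/v1",
    api_key="YOUR_API_KEY",
)

# Exercise several of the supported sampling and output controls in one call.
resp = client.chat.completions.create(
    model="llama-3.1-8b-instruct",
    messages=[
        {"role": "system", "content": "You are a concise assistant. Reply in JSON."},
        {"role": "user", "content": "Name three primary colors."},
    ],
    temperature=0.7,            # Temperature
    top_p=0.9,                  # Top P
    frequency_penalty=0.2,      # Frequency Penalty
    presence_penalty=0.0,       # Presence Penalty
    max_completion_tokens=128,  # Max Completion Tokens
    stop=["\n\n"],              # Stop
    response_format={"type": "json_object"},  # Response Format
)
print(resp.choices[0].message.content)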
Usage Analytics: token usage across the last 30 active days
Uptime: reliability over the last 7 days
Throughput
Time-To-First-Token (TTFT)
Code Example
Example code for using this model through our API with Python (OpenAI SDK) or cURL. Replace the API key placeholder with your own key.
Basic request example. Ensure your API key has the required permissions. For more details, see our documentation.
from openai import OpenAI

# Point the OpenAI SDK at the API endpoint.
client = OpenAI(
    base_url="https://api.naga.ac/v1",
    api_key="YOUR_API_KEY",
)

# Send a basic chat completion request to the model.
resp = client.chat.completions.create(
    model="llama-3.1-8b-instruct",
    messages=[
        {"role": "user", "content": "What's 2+2?"}
    ],
    temperature=0.2,
)

print(resp.choices[0].message.content)
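Since the parameter list above includes Tools, Tool Choice, and Parallel Tool Calls, a tool-calling request can be issued through the same endpoint. The sketch below is illustrative only: the get_weather function and its schema are hypothetical, and the request follows the standard OpenAI-compatible tools format.

from openai import OpenAI

client = OpenAI(base_url="https://api.naga.ac/v1", api_key="YOUR_API_KEY")

# Hypothetical tool definition; the function name and schema are illustrative.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

resp = client.chat.completions.create(
    model="llama-3.1-8b-instruct",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,                # Tools
    tool_choice="auto",         # Tool Choice
    parallel_tool_calls=False,  # Parallel Tool Calls
)

# If the model decided to call the tool, the call appears on message.tool_calls.
message = resp.choices[0].message
if message.tool_calls:
    for call in message.tool_calls:
        print(call.function.name, call.function.arguments)
else:
    print(message.content)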