OpenAI’s 21B-parameter open-weight Mixture-of-Experts (MoE) model, released under the Apache 2.0 license. It activates 3.6B parameters per forward pass and is optimized for low-latency inference and deployment on consumer or single-GPU hardware. Trained on OpenAI’s Harmony response format, it supports configurable reasoning levels, fine-tuning, and agentic capabilities such as function calling and structured outputs.
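The reasoning level can be selected per request. A minimal sketch, assuming an OpenAI-compatible endpoint that honors the Harmony convention of setting the level in the system message (the API key is a placeholder):

from openai import OpenAI

client = OpenAI(base_url="https://api.naga.ac/v1", api_key="YOUR_API_KEY")

# Harmony convention: the reasoning level (low / medium / high) goes in the
# system message; higher levels trade latency for more thorough reasoning.
resp = client.chat.completions.create(
    model="gpt-oss-20b",
    messages=[
        {"role": "system", "content": "Reasoning: high"},
        {"role": "user", "content": "Why is the sum of two odd numbers even?"},
    ],
)
print(resp.choices[0].message.content)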
Pricing
Pay-as-you-go rates for this model.
Input Tokens (1M): $0.02
Output Tokens (1M): $0.10
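At these rates, per-request cost is simple arithmetic: token count times the per-million rate. A quick illustration in Python (the token counts are made up):

# Rates from the table above, converted to dollars per token.
INPUT_RATE = 0.02 / 1_000_000
OUTPUT_RATE = 0.10 / 1_000_000

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated dollar cost of a single request."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# e.g. 2,000 input tokens and 500 output tokens:
print(f"${request_cost(2_000, 500):.6f}")  # -> $0.000090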
Capabilities
Input Modalities: Text
Output Modalities: Text
Code Example
Example code for using this model through our API with Python (OpenAI SDK) or cURL. Replace the placeholders with your API key and the model ID.
A basic request example follows; make sure your API key has the required permissions. For more details, see our documentation.
from openai import OpenAI

# Point the OpenAI SDK at the API endpoint.
client = OpenAI(
    base_url="https://api.naga.ac/v1",
    api_key="YOUR_API_KEY",  # replace with your API key
)

# Basic chat completion request.
resp = client.chat.completions.create(
    model="gpt-oss-20b",
    messages=[
        {"role": "user", "content": "What's 2+2?"}
    ],
    temperature=0.2,  # low temperature for near-deterministic answers
)

print(resp.choices[0].message.content)
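The same request works with cURL by POSTing the JSON body to https://api.naga.ac/v1/chat/completions with an "Authorization: Bearer YOUR_API_KEY" header.

Since the model supports function calling, tool use goes through the standard tools parameter of the Chat Completions API. A sketch under that assumption; the get_weather tool is hypothetical:

# Reuses the client defined above.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool, for illustration only
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

resp = client.chat.completions.create(
    model="gpt-oss-20b",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)

# If the model chose to call the tool, its arguments arrive as a JSON string.
call = resp.choices[0].message.tool_calls[0]
print(call.function.name, call.function.arguments)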