GPT OSS 20B

gpt-oss-20b
by OpenAI | Created Aug 5, 2025
Chat Completions

OpenAI’s 21B-parameter open-weight Mixture-of-Experts (MoE) model, released under the Apache 2.0 license. It uses 3.6B active parameters per forward pass and is optimized for low-latency inference and deployment on consumer or single-GPU hardware. Trained with OpenAI’s Harmony response format, it supports configurable reasoning levels, fine-tuning, and agentic capabilities such as function calling and structured outputs.
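As a rough illustration of the configurable reasoning level, the sketch below sends a Chat Completions request with the reasoning_effort parameter. It assumes an OpenAI-compatible endpoint serving gpt-oss-20b; the base_url and API key are placeholders, and parameter names follow the OpenAI Python SDK.

```python
# Minimal sketch: Chat Completions call with a configurable reasoning level.
# Assumes an OpenAI-compatible endpoint serving gpt-oss-20b; base_url is a placeholder.
from openai import OpenAI

client = OpenAI(base_url="https://example-provider/v1", api_key="YOUR_API_KEY")

response = client.chat.completions.create(
    model="gpt-oss-20b",
    reasoning_effort="low",  # one of "low", "medium", "high"
    messages=[
        {"role": "user", "content": "Summarize the Apache 2.0 license in one sentence."},
    ],
)

print(response.choices[0].message.content)
```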

Pricing

Pay-as-you-go rates for this model; a worked cost example follows the rates below.

Input Tokens (1M): $0.02

Output Tokens (1M): $0.10
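For a concrete sense of these rates, a request that consumes 120,000 input tokens and produces 8,000 output tokens would cost about 0.12 × $0.02 + 0.008 × $0.10 ≈ $0.0032. The helper below is a hypothetical sketch of that arithmetic, using the per-1M-token rates listed above.

```python
# Hypothetical helper: estimate pay-as-you-go cost from token counts,
# using the per-1M-token rates listed above.
INPUT_RATE_PER_1M = 0.02   # USD per 1M input tokens
OUTPUT_RATE_PER_1M = 0.10  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in USD for one request."""
    return (input_tokens / 1_000_000) * INPUT_RATE_PER_1M + \
           (output_tokens / 1_000_000) * OUTPUT_RATE_PER_1M

# Example: 120k input tokens and 8k output tokens ≈ $0.0032
print(f"${estimate_cost(120_000, 8_000):.4f}")
```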

Capabilities

Input Modalities

Text
File

Output Modalities

Text

Supported Parameters

Available parameters for API requests; a combined example is sketched after the list.

Frequency Penalty
Logit Bias
Logprobs
Max Completion Tokens
Parallel Tool Calls
Presence Penalty
Reasoning Effort
Response Format
Stop
Temperature
Tool Choice
Tools
Top P
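The sketch below shows how several of these parameters can be combined in a single Chat Completions request. It again assumes an OpenAI-compatible endpoint serving gpt-oss-20b; the get_weather tool, sampling values, and endpoint details are illustrative assumptions, not part of this listing.

```python
# Illustrative sketch combining several supported parameters in one request.
# Assumes an OpenAI-compatible endpoint; the get_weather tool is hypothetical.
from openai import OpenAI

client = OpenAI(base_url="https://example-provider/v1", api_key="YOUR_API_KEY")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool for illustration
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-oss-20b",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    temperature=0.7,
    top_p=0.9,
    max_completion_tokens=512,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    tools=tools,
    tool_choice="auto",
    parallel_tool_calls=True,
)

choice = response.choices[0]
if choice.message.tool_calls:
    # The model chose to call the (hypothetical) tool; print its arguments.
    print(choice.message.tool_calls[0].function.arguments)
else:
    print(choice.message.content)
```

Whether a given provider accepts every parameter in this list (for example, logit_bias or logprobs) can vary, so treat the combination above as a sketch rather than a guaranteed request shape.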