GPT OSS 20B

gpt-oss-20b
by openai | Created Aug 5, 2025

OpenAI’s 21B-parameter open-weight Mixture-of-Experts (MoE) model, released under the Apache 2.0 license. It activates 3.6B parameters per forward pass and is optimized for low-latency inference and deployment on consumer or single-GPU hardware. Trained on OpenAI’s Harmony response format, it supports configurable reasoning levels, fine-tuning, and agentic capabilities such as function calling and structured outputs.
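Because the model is trained on the Harmony response format, prompts are expected to be rendered with its role/message tokens, and the reasoning level is set via the system message. The sketch below composes such a prompt by hand; the special-token strings and the `Reasoning:` line follow the published Harmony format, but treat the exact rendering as an assumption and prefer the official `openai-harmony` renderer in practice.

```python
# Minimal sketch of composing a Harmony-style prompt with a reasoning level.
# Token strings (<|start|>, <|message|>, <|end|>) are taken from the Harmony
# format docs; this hand-rolled renderer is illustrative, not the official one.

def render_harmony(messages, reasoning="medium"):
    """Render (role, content) pairs into a Harmony-style prompt string."""
    system = (
        "<|start|>system<|message|>"
        "You are a helpful assistant.\n"
        f"Reasoning: {reasoning}"
        "<|end|>"
    )
    parts = [system]
    for role, content in messages:
        parts.append(f"<|start|>{role}<|message|>{content}<|end|>")
    # Leave the assistant turn open so the model generates the completion.
    parts.append("<|start|>assistant")
    return "".join(parts)

prompt = render_harmony(
    [("user", "Explain MoE routing in one sentence.")],
    reasoning="high",
)
```

In practice the same effect is usually achieved by passing plain chat messages to an OpenAI-compatible endpoint and letting the serving stack render Harmony for you.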

Pricing

Pay-as-you-go rates for this model.

Input Tokens (1M): $0.02

Output Tokens (1M): $0.10
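The listed rates translate directly into a per-request cost estimate. A minimal sketch, using the input and output prices above (the token counts are illustrative):

```python
# Estimate request cost from the listed pay-as-you-go rates.
INPUT_PER_M = 0.02   # USD per 1M input tokens
OUTPUT_PER_M = 0.10  # USD per 1M output tokens

def cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in USD for one request."""
    return (input_tokens / 1_000_000) * INPUT_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_PER_M

# Example: 1M input tokens and 0.5M output tokens cost about $0.07.
example_cost = cost_usd(1_000_000, 500_000)
```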

Capabilities

Input Modalities: Text, File

Output Modalities: Text

Usage Analytics

Charts (not reproduced here): token usage across the last 30 active days, throughput, and time-to-first-token (TTFT).