An open-weight, 117B-parameter Mixture-of-Experts (MoE) language model from OpenAI, designed for high-reasoning, agentic, and general-purpose production use cases. It activates 5.1B parameters per forward pass and, with native MXFP4 quantization, fits on a single H100 GPU. The model supports configurable reasoning depth, full chain-of-thought access, and native tool use, including function calling, browsing, and structured output generation.
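As a rough sketch of how the configurable reasoning depth might be invoked through an OpenAI-compatible Chat Completions endpoint (the base URL, API key, and the model ID gpt-oss-120b below are placeholder assumptions, not values taken from this page):

```python
from openai import OpenAI

# Placeholder endpoint, key, and model ID; substitute your provider's values.
client = OpenAI(base_url="https://example.com/v1", api_key="YOUR_API_KEY")

response = client.chat.completions.create(
    model="gpt-oss-120b",     # assumed model ID, not stated on this page
    reasoning_effort="high",  # configurable reasoning depth: "low" | "medium" | "high"
    messages=[{"role": "user", "content": "Outline a 3-step debugging strategy."}],
)
print(response.choices[0].message.content)
```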
Pricing
Pay-as-you-go rates for this model.
Input Tokens: $0.07 per 1M
Output Tokens: $0.30 per 1M
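For a concrete sense of the pay-as-you-go math, a minimal sketch using the rates above (the token counts are made-up illustrative figures):

```python
# Cost estimate at $0.07 per 1M input tokens and $0.30 per 1M output tokens.
input_tokens = 2_000_000   # hypothetical workload
output_tokens = 500_000    # hypothetical workload

cost = (input_tokens / 1_000_000) * 0.07 + (output_tokens / 1_000_000) * 0.30
print(f"Estimated cost: ${cost:.2f}")  # Estimated cost: $0.29
```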
Capabilities
Input Modalities: Text, File
Output Modalities: Text
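Since file input is listed as a modality, the sketch below shows one way a file might be attached, assuming the endpoint accepts OpenAI-style "file" content parts in Chat Completions; the endpoint, model ID, and filename are placeholders:

```python
import base64
from openai import OpenAI

client = OpenAI(base_url="https://example.com/v1", api_key="YOUR_API_KEY")  # placeholders

# Read a local PDF and inline it as a base64 data URL.
with open("report.pdf", "rb") as f:
    pdf_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="gpt-oss-120b",  # assumed model ID
    messages=[{
        "role": "user",
        "content": [
            {"type": "file",
             "file": {"filename": "report.pdf",
                      "file_data": f"data:application/pdf;base64,{pdf_b64}"}},
            {"type": "text", "text": "Summarize the key findings."},
        ],
    }],
)
print(response.choices[0].message.content)
```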
Supported Parameters
Available parameters for API requests; an example request using several of them follows the list.
Frequency Penalty
Logit Bias
Logprobs
Max Completion Tokens
Parallel Tool Calls
Presence Penalty
Reasoning Effort
Response Format
Stop
Temperature
Tool Choice
Tools
Top P
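A sketch of a single request exercising several of these parameters, again assuming an OpenAI-compatible Chat Completions endpoint; the model ID, tool definition, and endpoint details are illustrative assumptions, not part of this page:

```python
from openai import OpenAI

client = OpenAI(base_url="https://example.com/v1", api_key="YOUR_API_KEY")  # placeholders

# A single tool definition so Tools / Tool Choice / Parallel Tool Calls have
# something to bind to; get_weather is a hypothetical function name.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-oss-120b",         # assumed model ID
    messages=[{"role": "user", "content": "What's the weather in Oslo?"}],
    temperature=0.7,              # Temperature
    top_p=0.9,                    # Top P
    max_completion_tokens=512,    # Max Completion Tokens
    frequency_penalty=0.1,        # Frequency Penalty
    presence_penalty=0.1,         # Presence Penalty
    stop=["\n\n"],                # Stop
    tools=tools,                  # Tools
    tool_choice="auto",           # Tool Choice
    parallel_tool_calls=True,     # Parallel Tool Calls
    reasoning_effort="medium",    # Reasoning Effort
)
print(response.choices[0].message)
```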