GPT OSS 120B

gpt-oss-120b
by openai | Created Aug 5, 2025

An open-weight, 117B-parameter Mixture-of-Experts (MoE) language model from OpenAI, designed for high-reasoning, agentic, and general-purpose production use cases. It activates 5.1B parameters per token and is optimized to run on a single H100 GPU with native MXFP4 quantization. The model supports configurable reasoning depth, full chain-of-thought access, and native tool use, including function calling, browsing, and structured output generation.
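As a minimal sketch of the tool-use and reasoning-depth features described above, the snippet below calls the model through an OpenAI-compatible chat completions endpoint. The `base_url`, the `"Reasoning: high"` system-prompt convention for setting reasoning depth, and the `get_weather` tool are all illustrative assumptions, not confirmed details of any particular deployment.

```python
# Sketch: querying gpt-oss-120b via an OpenAI-compatible endpoint
# (e.g., a local vLLM or similar server). base_url, the reasoning-depth
# convention, and the get_weather tool are assumptions for illustration.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool for illustration
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-oss-120b",
    messages=[
        # Assumed convention for configurable reasoning depth.
        {"role": "system", "content": "Reasoning: high"},
        {"role": "user", "content": "What's the weather in Berlin?"},
    ],
    tools=tools,
)

message = response.choices[0].message
if message.tool_calls:
    # The model chose to call the tool; arguments arrive as a JSON string.
    call = message.tool_calls[0]
    print(call.function.name, call.function.arguments)
else:
    print(message.content)
```

If the model decides the tool is relevant, the response carries a structured `tool_calls` entry rather than plain text; the caller executes the function and returns the result in a follow-up `tool` message to continue the exchange.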

Usage Analytics

[Chart: token usage across the last 30 active days]