Qwen3 235B A22B Thinking 2507

qwen3-235b-a22b-thinking-2507
by Qwen | Created Jul 27, 2025

Qwen3-235B-A22B-Thinking-2507 is a high-performance, open-weight Mixture-of-Experts (MoE) language model optimized for complex reasoning tasks. It activates 22B of its 235B total parameters per forward pass and natively supports a context window of up to 262,144 tokens. This "thinking-only" variant strengthens structured logical reasoning, mathematics, science, and long-form generation, and is instruction-tuned for step-by-step reasoning, tool use, agentic workflows, and multilingual tasks.
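
The model is typically served behind an OpenAI-compatible chat completions API. The sketch below shows a minimal request using the official openai Python client; the base URL and API-key environment variable are placeholders for whichever provider hosts the model, not values taken from this page.

```python
# Minimal sketch: call the model through an OpenAI-compatible endpoint.
# The base_url and QWEN_API_KEY environment variable are assumed placeholders;
# substitute your provider's actual values.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example-provider.com/v1",  # assumed provider endpoint
    api_key=os.environ["QWEN_API_KEY"],
)

response = client.chat.completions.create(
    model="qwen3-235b-a22b-thinking-2507",
    messages=[
        {"role": "user", "content": "Prove that the sum of two even integers is even."}
    ],
    max_tokens=1024,
)

# Thinking-style models emit reasoning before the final answer; whether it appears
# inline or in a separate field depends on the provider.
print(response.choices[0].message.content)
```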

Pricing

Pay-as-you-go rates for this model.

Input tokens: $0.07 per 1M tokens

Output tokens: $0.30 per 1M tokens
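
As a worked example of how these rates translate into a per-request cost, the snippet below prices a hypothetical request; the token counts are illustrative assumptions, not usage figures from this page.

```python
# Worked example: per-request cost at the listed pay-as-you-go rates.
INPUT_RATE_PER_TOKEN = 0.07 / 1_000_000   # $0.07 per 1M input tokens
OUTPUT_RATE_PER_TOKEN = 0.30 / 1_000_000  # $0.30 per 1M output tokens

input_tokens = 10_000   # illustrative: a long prompt plus retrieved context
output_tokens = 2_000   # illustrative: a detailed reasoning trace and answer

cost = input_tokens * INPUT_RATE_PER_TOKEN + output_tokens * OUTPUT_RATE_PER_TOKEN
print(f"${cost:.4f}")   # $0.0013
```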

Capabilities

Input modalities: Text

Output modalities: Text

Usage Analytics

[Charts: token usage across the last 30 active days; throughput; time-to-first-token (TTFT)]