Grok 4 Fast Reasoning

grok-4-fast-reasoning
by xAI | Created Sep 20, 2025
Chat Completions

State-of-the-art reasoning model optimized for cost-efficient, high-quality chain-of-thought. Trained end-to-end with tool use and agentic search, it matches top-tier performance on benchmarks such as AIME, HMMT, and GPQA while using roughly 40% fewer tokens than Grok 4. Features a 2M-token context window and native web/X browsing. Ideal for agentic workflows, research, code, logic, and complex multi-step tasks. Reasoning is up to 98% cheaper than with previous models.

Pricing

Pay-as-you-go rates for this model. See the pricing documentation for full details.

Input Tokens (per 1M)

$0.10

Cached Input Tokens (per 1M)

$0.02

Output Tokens (per 1M)

$0.25
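To see how these rates combine, a minimal cost-estimation sketch (the `estimate_cost` helper is illustrative, not part of the API):

```python
# Estimate the cost of one request at this model's pay-as-you-go rates.
# Rates are USD per 1M tokens, taken from the table above.
RATE_INPUT = 0.10
RATE_CACHED_INPUT = 0.02
RATE_OUTPUT = 0.25

def estimate_cost(input_tokens, output_tokens, cached_input_tokens=0):
    """Return the estimated USD cost for one request (illustrative helper)."""
    return (
        input_tokens * RATE_INPUT
        + cached_input_tokens * RATE_CACHED_INPUT
        + output_tokens * RATE_OUTPUT
    ) / 1_000_000

# Example: 100k input tokens (half of them cached) and 20k output tokens.
cost = estimate_cost(input_tokens=50_000, output_tokens=20_000,
                     cached_input_tokens=50_000)
print(f"${cost:.4f}")  # → $0.0110
```

Cached input is billed at one fifth of the regular input rate, so prompt caching dominates the savings for long, repeated prompts.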

Capabilities

Input Modalities

Text
Image

Output Modalities

Text

Supported Parameters

Available parameters for API requests

Max Completion Tokens
Response Format
Temperature
Tool Choice
Tools
Top P
Web Search Options
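The parameters above map onto a Chat Completions request body. A hedged sketch follows — the names use standard OpenAI Chat Completions conventions, the `get_weather` tool is hypothetical, and the exact shape of `web_search_options` is an assumption to verify against the provider docs:

```python
# Sketch of a Chat Completions request body exercising the supported parameters.
request_body = {
    "model": "grok-4-fast-reasoning",
    "messages": [{"role": "user", "content": "Summarize today's AI news."}],
    "max_completion_tokens": 1024,        # Max Completion Tokens
    "response_format": {"type": "text"},  # Response Format
    "temperature": 0.7,                   # Temperature
    "top_p": 0.95,                        # Top P
    "tools": [{                           # Tools (function calling)
        "type": "function",
        "function": {
            "name": "get_weather",        # hypothetical example tool
            "description": "Get current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
    "tool_choice": "auto",                # Tool Choice
    "web_search_options": {},             # Web Search Options (shape assumed)
}
print(sorted(request_body))
```

Pass these as keyword arguments to `client.chat.completions.create(...)` when using the OpenAI SDK, or as the JSON body of a raw HTTP request.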

Usage Analytics

Token usage across the last 30 active days

Uptime

Reliability over the last 7 days

Throughput

Time-To-First-Token (TTFT)

Code Example

Example code for using this model through our API with Python (OpenAI SDK) or cURL. Replace placeholders with your API key and model ID.

Basic request example. Ensure your API key has the required permissions. For more details, see our documentation.

from openai import OpenAI

# Point the OpenAI SDK at the API; replace YOUR_API_KEY with your key.
client = OpenAI(
    base_url="https://api.naga.ac/v1",
    api_key="YOUR_API_KEY",
)

resp = client.chat.completions.create(
    model="grok-4-fast-reasoning",
    messages=[
        {"role": "user", "content": "What's 2+2?"}
    ],
    temperature=0.2,
)
print(resp.choices[0].message.content)
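The equivalent request with cURL, assuming the same base URL and endpoint path (replace YOUR_API_KEY with your key):

```shell
# Build the JSON request body.
PAYLOAD='{
  "model": "grok-4-fast-reasoning",
  "messages": [{"role": "user", "content": "What is 2+2?"}],
  "temperature": 0.2
}'

# Send the request; with a valid key this prints the JSON response.
curl -sS https://api.naga.ac/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d "$PAYLOAD" || echo "request failed (check key/network)"
```

The response JSON mirrors the OpenAI Chat Completions format, with the model's answer under `choices[0].message.content`.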