MiniMax M2
MiniMax-M2 is a compact, efficient language model with 10B active parameters (230B total), optimized for coding and agentic workflows. It delivers near-frontier reasoning and tool use at low latency and deployment cost. The model excels at code generation, multi-file editing, compile-run-fix cycles, and automated test repair, with strong results on SWE-Bench and Terminal-Bench. It also performs well on agentic benchmarks such as BrowseComp and GAIA, handling long-term planning, retrieval, and error recovery. Its small activation footprint enables fast inference and high concurrency, making it well suited to developer tools, agents, and applications that demand cost-effective, responsive reasoning.
Code Example
The following example shows how to call this model through our API using Python (OpenAI SDK); a cURL equivalent appears after it. Replace the placeholders with your own API key and model ID.
This is a basic request example; make sure your API key has the required permissions. For more details, see our documentation.
from openai import OpenAI

# Point the client at the API endpoint and authenticate with your key.
client = OpenAI(
    base_url="https://api.naga.ac/v1",
    api_key="YOUR_API_KEY",
)

# Send a simple chat completion request to MiniMax-M2.
resp = client.chat.completions.create(
    model="minimax-m2",
    messages=[
        {"role": "user", "content": "What's 2+2?"}
    ],
    temperature=0.2,
)

print(resp.choices[0].message.content)
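For shell-based usage, the same request can be sent with cURL. This is a minimal sketch assuming the API exposes the standard OpenAI-compatible /chat/completions route under the base URL shown above; adjust the path if your deployment differs.

# Hypothetical cURL equivalent of the Python request above,
# assuming an OpenAI-compatible /chat/completions endpoint.
curl https://api.naga.ac/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "minimax-m2",
    "messages": [
      {"role": "user", "content": "What is 2+2?"}
    ],
    "temperature": 0.2
  }'

The assistant's reply is returned in the JSON response under choices[0].message.content, matching the field accessed in the Python example.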