Magistral Small 2509 builds on Mistral Small 3.2 (2506) and adds reasoning capabilities via SFT on Magistral Medium traces followed by RL on top, yielding a small, efficient reasoning model with 24B parameters.
```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.naga.ac/v1",
    api_key="YOUR_API_KEY",
)

# Ask a simple question; a low temperature keeps the answer focused.
resp = client.chat.completions.create(
    model="magistral-small-2509",
    messages=[
        {"role": "user", "content": "What's 2+2?"}
    ],
    temperature=0.2,
)

print(resp.choices[0].message.content)
```
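Because reasoning models can spend a while generating before the final answer arrives, streaming the response is often more pleasant in interactive use. A minimal sketch below, assuming the endpoint supports the standard OpenAI-compatible `stream=True` option (not confirmed by this page); the prompt is illustrative.

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.naga.ac/v1",
    api_key="YOUR_API_KEY",
)

# Request a streamed response and print tokens as they arrive.
stream = client.chat.completions.create(
    model="magistral-small-2509",
    messages=[
        {"role": "user", "content": "Explain why the sky is blue in two sentences."}
    ],
    temperature=0.2,
    stream=True,  # assumes the provider supports streaming on this model
)

for chunk in stream:
    # Some chunks may carry no content (e.g. role or finish markers); skip those.
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()
```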