Qwen-Max is a large-scale Mixture-of-Experts (MoE) model from Qwen, built on Qwen2.5, and delivers the strongest overall performance in the Qwen lineup, particularly on complex multi-step tasks. It was pretrained on over 20 trillion tokens and post-trained with curated Supervised Fine-Tuning (SFT) and Reinforcement Learning from Human Feedback (RLHF), making it well suited to applications that demand high accuracy and recall. The exact parameter count is undisclosed.
Code Example
Example code for calling this model through our API with Python (OpenAI SDK) or cURL. Replace the placeholders with your API key and the model ID.
A basic request looks like this. Make sure your API key has the required permissions; see our documentation for more details.
from openai import OpenAI

# Point the OpenAI client at the API endpoint.
client = OpenAI(
    base_url="https://api.naga.ac/v1",
    api_key="YOUR_API_KEY",
)

# Send a simple chat completion request to qwen-max.
resp = client.chat.completions.create(
    model="qwen-max",
    messages=[
        {"role": "user", "content": "What's 2+2?"}
    ],
    temperature=0.2,
)

print(resp.choices[0].message.content)
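The same request can also be issued with cURL. This is a minimal sketch that assumes the API exposes the standard OpenAI-compatible /chat/completions path used by the Python SDK above; replace YOUR_API_KEY as before.

# Equivalent request via cURL (assumes the OpenAI-compatible /chat/completions route).
curl https://api.naga.ac/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "qwen-max",
    "messages": [
      {"role": "user", "content": "What is 2+2?"}
    ],
    "temperature": 0.2
  }'

The JSON response follows the OpenAI chat completions schema, so the answer is found under choices[0].message.content, just as in the Python example.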