DeepSeek-V3.2 is a large language model optimized for high computational efficiency and strong tool-use reasoning. It features DeepSeek Sparse Attention (DSA), a mechanism that lowers training and inference costs while maintaining quality in long-context tasks. A scalable reinforcement learning post-training framework further enhances reasoning, achieving performance comparable to GPT-5 and earning top results on the 2025 IMO and IOI. V3.2 also leverages large-scale agentic task synthesis to improve reasoning in practical tool-use scenarios, boosting its generalization and compliance in interactive environments.
Pricing
Pay-as-you-go rates for this model:

Input Tokens (per 1M): $0.14
Cached Input Tokens (per 1M): $0.01
Output Tokens (per 1M): $0.21
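The rates above can be combined into a simple per-request cost estimate. The sketch below uses the listed prices; the token counts in the example are illustrative, not from the source.

```python
# Per-1M-token rates taken from the pricing table above.
RATES_PER_MILLION = {
    "input": 0.14,         # $ per 1M fresh input tokens
    "cached_input": 0.01,  # $ per 1M cached input tokens
    "output": 0.21,        # $ per 1M output tokens
}

def estimate_cost(input_tokens: int, cached_input_tokens: int, output_tokens: int) -> float:
    """Return the estimated dollar cost of one request."""
    return (
        input_tokens * RATES_PER_MILLION["input"]
        + cached_input_tokens * RATES_PER_MILLION["cached_input"]
        + output_tokens * RATES_PER_MILLION["output"]
    ) / 1_000_000

# Example: 100k fresh input tokens, 400k cache hits, 20k output tokens.
print(f"${estimate_cost(100_000, 400_000, 20_000):.4f}")  # → $0.0222
```

Note how heavily cached input is discounted: the 400k cached tokens above cost less than a third of the 100k fresh ones.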
Capabilities
Input Modalities: Text
Output Modalities: Text
Supported Parameters
Available parameters for API requests
Frequency Penalty
Max Completion Tokens
Presence Penalty
Reasoning Effort
Response Format
Stop
Temperature
Tool Choice
Tools
Top P
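The parameters above map onto the fields of an OpenAI-compatible chat-completion request. The sketch below builds such a request body; the model identifier "deepseek-v3.2" and the example tool definition are illustrative assumptions, not taken from the source.

```python
import json

# One field per supported parameter from the list above.
payload = {
    "model": "deepseek-v3.2",  # illustrative model id
    "messages": [
        {"role": "user", "content": "Summarize sparse attention in one sentence."}
    ],
    "frequency_penalty": 0.0,             # Frequency Penalty
    "max_completion_tokens": 256,         # Max Completion Tokens
    "presence_penalty": 0.0,              # Presence Penalty
    "reasoning_effort": "medium",         # Reasoning Effort
    "response_format": {"type": "text"},  # Response Format
    "stop": ["\n\n"],                     # Stop
    "temperature": 0.7,                   # Temperature
    "tool_choice": "auto",                # Tool Choice
    "tools": [                            # Tools (hypothetical function)
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Look up current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
    "top_p": 0.95,                        # Top P
}

print(json.dumps(payload, indent=2))
```

With a tool defined and `tool_choice` set to `auto`, the model decides per turn whether to answer directly or emit a tool call, which is the interactive tool-use setting the description above highlights.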