Mistral Small 3 is a 24B-parameter language model optimized for low-latency performance across common AI tasks. Released under the Apache 2.0 license, it is available in both pre-trained and instruction-tuned versions for efficient local deployment. The model achieves 81% accuracy on the MMLU benchmark and performs competitively with larger models while running at three times their speed on the same hardware.
Pricing
Pay-as-you-go rates for this model.
Input tokens: $0.05 per 1M
Output tokens: $0.15 per 1M
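As a quick illustration of how these rates translate into per-request cost, here is a minimal sketch; the token counts in the example are illustrative assumptions, not figures from this page.

```python
# Cost estimate at the listed pay-as-you-go rates:
# $0.05 per 1M input tokens, $0.15 per 1M output tokens.
INPUT_PRICE_PER_M = 0.05
OUTPUT_PRICE_PER_M = 0.15

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of a single request at the rates above."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Hypothetical request: a 2,000-token prompt with a 500-token completion.
print(f"${estimate_cost(2_000, 500):.6f}")  # ≈ $0.000175
```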
Capabilities
Input modalities: Text
Output modalities: Text
Supported Parameters
Available parameters for API requests; an example request using several of them follows the list below.
Frequency Penalty
Max Completion Tokens
Parallel Tool Calls
Prediction
Presence Penalty
Response Format
Stop
Temperature
Tool Choice
Tools
Top P
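To show how these parameters map onto a request, here is a minimal sketch assuming an OpenAI-compatible chat completions endpoint; the base URL, model identifier, and API-key environment variable are placeholders, not values taken from this page.

```python
# Minimal sketch of a chat completions request using several of the supported
# parameters listed above. Assumes an OpenAI-compatible endpoint; the base URL,
# model name, and credential variable below are illustrative placeholders.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://example.com/v1",       # placeholder endpoint
    api_key=os.environ["EXAMPLE_API_KEY"],   # placeholder credential
)

response = client.chat.completions.create(
    model="mistral-small-3",                 # placeholder model identifier
    messages=[
        {"role": "user", "content": "Summarize the Apache 2.0 license in one sentence."}
    ],
    temperature=0.7,                 # Temperature
    top_p=0.95,                      # Top P
    max_completion_tokens=256,       # Max Completion Tokens
    frequency_penalty=0.0,           # Frequency Penalty
    presence_penalty=0.0,            # Presence Penalty
    stop=["\n\n"],                   # Stop
    response_format={"type": "text"},  # Response Format
)
print(response.choices[0].message.content)
```

In the same style, Tools, Tool Choice, Parallel Tool Calls, and Prediction would be passed as the `tools`, `tool_choice`, `parallel_tool_calls`, and `prediction` fields of the request when a use case calls for them.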