Mistral Small 3 is a 24B-parameter language model optimized for low-latency performance across common AI tasks. Released under the Apache 2.0 license, it ships in both pre-trained and instruction-tuned versions for efficient local deployment. The model achieves 81% accuracy on the MMLU benchmark and performs competitively with larger models while running roughly three times faster on the same hardware.
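As a rough illustration of local deployment, the sketch below loads the instruction-tuned checkpoint with Hugging Face Transformers. The repository name, dtype, and generation settings are assumptions and should be checked against the official model card; quantization or a dedicated inference server may be preferable on smaller GPUs.

```python
# Minimal local-inference sketch for the instruction-tuned model.
# The model ID below is an assumption; verify the exact repo name on Hugging Face.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-Small-24B-Instruct-2501"  # assumed repository name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # ~48 GB of weights in bf16; quantize for smaller GPUs
    device_map="auto",           # spread layers across available devices
)

messages = [
    {"role": "user", "content": "Summarize the Apache 2.0 license in one sentence."}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```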