Mistral Small 2501

mistral-small-2501
by mistralai | Created Jun 9, 2025

Mistral Small 3 is a 24B-parameter language model optimized for low-latency performance across common AI tasks. Released under the Apache 2.0 license, it is available in both pre-trained and instruction-tuned versions for efficient local deployment. The model achieves 81% accuracy on the MMLU benchmark and performs competitively with larger models while running at roughly three times their speed on equivalent hardware.
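As a rough sketch, the instruction-tuned variant can be queried through any OpenAI-compatible chat completions endpoint. The base URL and environment variables below are assumptions (the listing does not specify an endpoint); the model id follows the listing above (mistral-small-2501).

```python
import os
from openai import OpenAI  # pip install openai

# Hypothetical OpenAI-compatible endpoint; swap in the provider's actual base URL.
client = OpenAI(
    base_url=os.environ.get("PROVIDER_BASE_URL", "https://api.example.com/v1"),
    api_key=os.environ["PROVIDER_API_KEY"],
)

# Single-turn chat completion against the listed model id.
resp = client.chat.completions.create(
    model="mistral-small-2501",
    messages=[{"role": "user", "content": "Summarize the Apache 2.0 license in one sentence."}],
    max_tokens=128,
)
print(resp.choices[0].message.content)
```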

Pricing

Pay-as-you-go rates for this model.

Input Tokens (1M): $0.05
Output Tokens (1M): $0.15
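For a back-of-the-envelope estimate, the per-1M-token rates above translate directly into a per-request cost. A minimal sketch; the token counts in the example are illustrative only.

```python
# Cost estimate from the listed pay-as-you-go rates (USD per 1M tokens).
INPUT_RATE = 0.05   # $ per 1M input tokens
OUTPUT_RATE = 0.15  # $ per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in USD for a single request."""
    return input_tokens / 1_000_000 * INPUT_RATE + output_tokens / 1_000_000 * OUTPUT_RATE

# Example: a 2,000-token prompt with an 800-token completion
# costs roughly $0.0001 + $0.00012 = $0.00022.
print(f"${request_cost(2_000, 800):.5f}")
```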

Capabilities

Input Modalities: Text
Output Modalities: Text

Usage Analytics

Charts (not reproduced here): token usage across the last 30 active days, throughput, and time-to-first-token (TTFT).
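Time-to-first-token can also be measured client-side by streaming a request and timing the gap until the first content chunk arrives. A minimal sketch, reusing the hypothetical client and endpoint assumed in the earlier example.

```python
import time

# Client-side TTFT measurement via a streaming request.
# Assumes the hypothetical `client` from the earlier sketch is in scope.
start = time.perf_counter()
first_token_at = None

stream = client.chat.completions.create(
    model="mistral-small-2501",
    messages=[{"role": "user", "content": "Say hello."}],
    stream=True,
)
for chunk in stream:
    # Some chunks carry no content delta (e.g. role-only or final chunks); skip them.
    if chunk.choices and chunk.choices[0].delta.content:
        first_token_at = time.perf_counter()
        break

if first_token_at is not None:
    print(f"TTFT: {first_token_at - start:.3f}s")
```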