MistralAI

[Chart: token usage over time]

Browse models from MistralAI

13 models

Codestral 2501

985K Tokens

Codestral is Mistral’s cutting-edge language model for coding, specializing in low-latency, high-frequency tasks such as fill-in-the-middle (FIM), code correction, and test generation. It is optimized for developer productivity and supports a wide range of programming languages and code-related tasks.

by MistralAI
$0.15/1M input tokens · $0.45/1M output tokens
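
As a sketch of what a fill-in-the-middle call might look like, assuming the mistralai Python SDK (v1) and its fim.complete endpoint; the model identifier and code snippets are illustrative:

    import os
    from mistralai import Mistral

    # Assumes MISTRAL_API_KEY is set in the environment.
    client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

    # FIM: the model generates the code between the prefix (prompt)
    # and the suffix.
    response = client.fim.complete(
        model="codestral-2501",  # illustrative model identifier
        prompt="def fibonacci(n: int) -> int:\n    ",
        suffix="\n    return fibonacci(n - 1) + fibonacci(n - 2)",
    )

    print(response.choices[0].message.content)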

Mistral Small 2501

46K Tokens

Mistral Small 3 is a 24B-parameter language model optimized for low-latency performance across common AI tasks. Released under the Apache 2.0 license, it features both pre-trained and instruction-tuned versions for efficient local deployment. The model achieves 81% accuracy on the MMLU benchmark and performs competitively with larger models, while operating at three times the speed on equivalent hardware.

by MistralAI
$0.05/1M input tokens · $0.15/1M output tokens
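
As a worked example of how the per-million-token rates translate into cost: a request to Mistral Small 2501 that consumes 100K input tokens and produces 20K output tokens would cost roughly 0.1 × $0.05 + 0.02 × $0.15 = $0.005 + $0.003 = $0.008.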

Magistral Medium 2506

440K Tokens

Magistral is Mistral's first reasoning model, designed for general-purpose use cases that require extended thought processing and high accuracy. It excels in multi-step challenges such as legal research, financial forecasting, software development, and creative storytelling, where transparency and precision are critical.

by MistralAI
$1.00/1M input tokens · $2.50/1M output tokens

Mistral Moderation 2411

Mistral Moderation 2411 is a content moderation model from Mistral, offering high-accuracy text moderation across nine safety categories and multiple languages. It is designed for robust, real-time moderation in diverse environments.

by MistralAI
$0.05/1M tokens
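
A sketch of a moderation call, assuming the mistralai Python SDK's classifiers.moderate endpoint; the model identifier, input text, and response field names are assumptions based on the v1 SDK:

    import os
    from mistralai import Mistral

    client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

    # Score a batch of raw texts against the model's safety categories.
    response = client.classifiers.moderate(
        model="mistral-moderation-2411",  # illustrative model identifier
        inputs=["Some user-generated text to screen."],
    )

    # One result per input; each maps category names to flags and scores
    # (field names per the v1 SDK's classification response).
    result = response.results[0]
    print(result.categories)
    print(result.category_scores)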

Mistral Medium 2505

15K Tokens

Mistral Medium 3 is a high-performance, enterprise-grade language model that balances state-of-the-art reasoning and multimodal capabilities with significantly reduced operational cost. It excels in coding, STEM reasoning, and enterprise adaptation, and is optimized for scalable deployments across professional and industrial use cases, including hybrid and on-prem environments.

by MistralAI
$0.20/1M input tokens · $1.00/1M output tokens

Ministral 8B 2410

6K Tokens

Ministral 8B is an 8B-parameter model with an interleaved sliding-window attention pattern for faster, more memory-efficient inference. Optimized for edge use cases, it supports up to 128k context length and delivers strong performance on knowledge and reasoning tasks. It outperforms other models in the sub-10B category, making it well suited to low-latency, privacy-focused applications.

by MistralAI
$0.05/1M input tokens · $0.05/1M output tokens

Mistral Small 2506

27K Tokens

Mistral-Small-3.2-24B-Instruct-2506 is an updated 24B parameter model from Mistral, optimized for instruction following, repetition reduction, and improved function calling. It supports both image and text inputs, delivers strong performance across coding, STEM, and vision benchmarks, and is designed for efficient, structured output generation.

by MistralAI
$0.05/1M input tokens · $0.15/1M output tokens
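
A sketch of the function-calling flow mentioned above, assuming the mistralai Python SDK's chat.complete with a JSON-schema tools parameter; the tool definition is hypothetical:

    import os
    from mistralai import Mistral

    client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

    # Hypothetical tool, declared in the JSON-schema format the chat API expects.
    tools = [{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Look up the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }]

    response = client.chat.complete(
        model="mistral-small-2506",  # illustrative model identifier
        messages=[{"role": "user", "content": "What's the weather in Paris?"}],
        tools=tools,
        tool_choice="auto",
    )

    # If the model decided to call the tool, the call (name plus JSON
    # arguments) arrives on the message instead of plain text content.
    message = response.choices[0].message
    if message.tool_calls:
        call = message.tool_calls[0]
        print(call.function.name, call.function.arguments)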

Magistral Small 2509

22K Tokens

Magistral Small builds on Mistral Small 3.2 (2506) with added reasoning capabilities, derived via supervised fine-tuning (SFT) on traces from Magistral Medium followed by reinforcement learning (RL). The result is a small, efficient reasoning model with 24B parameters.

by MistralAI
$0.25/1M input tokens · $0.75/1M output tokens

Mistral Saba 2502

154K Tokens

Mistral Saba is a 24B-parameter language model developed specifically for the Middle East and South Asia. It delivers accurate, contextually relevant responses in multiple languages of Indian origin, including Tamil and Malayalam, as well as Arabic. The model is trained on curated regional datasets and is optimized for multilingual and region-specific applications.

by MistralAI
$0.10/1M input tokens · $0.30/1M output tokens

Open Mistral Nemo 2407

823K Tokens

Mistral NeMo is a 12B-parameter model built in collaboration with NVIDIA, offering a context window of up to 128k tokens. Released under the Apache 2.0 license, it delivers strong reasoning, world knowledge, and coding accuracy for its size class, with multilingual strength across languages including French, German, Spanish, Italian, Portuguese, Chinese, Japanese, Korean, Arabic, and Hindi.

by MistralAI
$0.02/1M input tokens · $0.04/1M output tokens

Pixtral Large 2411

17K Tokens

Pixtral Large is a 124B parameter, open-weight, multimodal model built on top of Mistral Large 2. It is capable of understanding documents, charts, and natural images, and is available under both research and commercial licenses. The model is designed for advanced document and image analysis tasks.

by MistralAI
$1.00/1M input tokens · $3.00/1M output tokens
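
A sketch of a multimodal request, assuming the chat API accepts mixed text and image_url content parts; the model identifier and URL are placeholders:

    import os
    from mistralai import Mistral

    client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

    # A user message that mixes a text part with an image part.
    response = client.chat.complete(
        model="pixtral-large-2411",  # illustrative model identifier
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "Summarize the chart in this image."},
                {"type": "image_url", "image_url": "https://example.com/chart.png"},
            ],
        }],
    )

    print(response.choices[0].message.content)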

Mistral Small 2503

27K Tokens

Mistral-Small-3.1-24B-Instruct-2503 is a 24B parameter model from Mistral that builds on Mistral Small 3, adding vision understanding and an expanded context window of up to 128k tokens. It supports both image and text inputs and delivers strong performance across text and multimodal benchmarks while remaining efficient to deploy.

by MistralAI
$0.05/1M input tokens · $0.15/1M output tokens

Mistral Large 2411

45K Tokens

Mistral Large 2 2411 is an updated release of Mistral Large 2, featuring notable improvements in long-context understanding, improved system prompt adherence, and more accurate function calling. It is designed for advanced enterprise and research applications requiring high reliability and performance.

by MistralAI
$1.00/1M input tokens · $3.00/1M output tokens
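
A sketch of a plain chat call with a system prompt, assuming the mistralai Python SDK's chat.complete; the model identifier and prompts are illustrative:

    import os
    from mistralai import Mistral

    client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

    response = client.chat.complete(
        model="mistral-large-2411",  # illustrative model identifier
        messages=[
            {"role": "system", "content": "You are a precise research assistant."},
            {"role": "user", "content": "Summarize the main trade-offs of RAG vs. fine-tuning."},
        ],
    )

    print(response.choices[0].message.content)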