Text-Embedding-3-Small is OpenAI’s efficient, compact embedding model, designed to convert text into numerical representations for semantic tasks such as search, clustering, and recommendations. It offers improved performance and cost-effectiveness compared to previous models, with low latency and storage requirements.
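Embeddings like these are typically compared with cosine similarity for search and clustering; a minimal sketch in Python (the vectors below are made-up toy values, not real model output, and real text-embedding-3-small vectors have 1536 dimensions):

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 4-dimensional "embeddings"
query = [0.1, 0.3, 0.5, 0.1]
doc_a = [0.1, 0.29, 0.52, 0.09]   # semantically close to the query
doc_b = [0.9, 0.05, 0.01, 0.04]   # unrelated

assert cosine_similarity(query, doc_a) > cosine_similarity(query, doc_b)
```

Ranking documents by this score against a query embedding is the basic building block of semantic search.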
The continually updated version of GPT-4o used by ChatGPT, always pointing to the current ChatGPT model. It incorporates additional RLHF and may differ from the API version of GPT-4o. Intended for research and evaluation; not recommended for production, as it may be redirected to another model or removed in the future.
A compact variant of GPT-5, designed for efficient handling of lighter-weight reasoning and conversational tasks. GPT-5 Mini retains the instruction-following and safety features of its larger counterpart, but with reduced latency and cost. It is the direct successor to OpenAI’s o4-mini model, making it ideal for scalable, cost-sensitive deployments.
An open-weight, 117B-parameter Mixture-of-Experts (MoE) language model from OpenAI, designed for high-reasoning, agentic, and general-purpose production use cases. Activates 5.1B parameters per forward pass and is optimized for single H100 GPU deployment with native MXFP4 quantization. Supports configurable reasoning depth, full chain-of-thought access, and native tool use, including function calling, browsing, and structured output generation.
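The Mixture-of-Experts design can be illustrated with a toy top-k router: a gate scores every expert for each token and only the highest-scoring experts run, which is why only about 5.1B of the 117B parameters are active per forward pass. This is an illustrative sketch, not the model's actual routing code:

```python
def top_k_route(gate_scores, k=2):
    # Pick the k highest-scoring experts for one token and
    # renormalize their scores into mixing weights.
    ranked = sorted(range(len(gate_scores)), key=lambda i: gate_scores[i], reverse=True)
    chosen = ranked[:k]
    total = sum(gate_scores[i] for i in chosen)
    return {i: gate_scores[i] / total for i in chosen}

# One token's gate scores over 4 toy experts; only 2 run.
weights = top_k_route([0.1, 0.6, 0.2, 0.1], k=2)
assert set(weights) == {1, 2}                    # experts 1 and 2 selected
assert abs(sum(weights.values()) - 1.0) < 1e-9   # weights form a convex mix
```

Because unselected experts contribute nothing to the forward pass, compute and memory bandwidth scale with the active parameters rather than the total parameter count.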
GPT-5.2 is the newest frontier-level model in the GPT-5 line, providing enhanced agentic abilities and better long-context performance than GPT-5.1. It employs adaptive reasoning to dynamically distribute computational resources, enabling quick responses to simple requests and deeper analysis for complex challenges. Designed for wide-ranging tasks, GPT-5.2 offers steady improvements in mathematics, programming, science, and tool usage, delivering more coherent long-form responses and increased reliability when using tools.
A mid-sized GPT-4.1 model delivering performance competitive with GPT-4o at substantially lower latency and cost. Retains a 1 million token context window and demonstrates strong coding ability and vision understanding, making it suitable for interactive applications with tight performance constraints.
GPT-4o (“o” for “omni”) is OpenAI’s latest multimodal model, supporting both text and image inputs with text outputs. Delivers improved performance in non-English languages and visual understanding, while being faster and more cost-effective than previous models.
Whisper large-v3-turbo is a fine-tuned version of a pruned Whisper large-v3: the same architecture, except that the number of decoding layers has been reduced from 32 to 4. As a result, the model is significantly faster, at the expense of minor quality degradation.
OpenAI’s most advanced small model, GPT-4o mini, supports both text and image inputs with text outputs. It is highly cost-effective, achieving SOTA intelligence and outperforming larger models on key benchmarks, making it ideal for scalable, interactive applications.
GPT-5.1 is the newest top-tier model in the GPT-5 series, featuring enhanced general reasoning, better instruction following, and a more natural conversational tone compared to GPT-5. With adaptive reasoning, it dynamically adjusts its computational effort—responding swiftly to simple queries and diving deeper into complex tasks. Explanations are now clearer and use less jargon, making challenging topics easier to grasp. Designed for a wide range of tasks, GPT-5.1 consistently improves performance in math, coding, and structured analysis, offering more cohesive long-form responses and more reliable tool usage. Its conversation style is warmer and more intuitive, yet still precise. GPT-5.1 stands as the main, fully capable successor to GPT-5.
Whisper Large v3 is OpenAI’s state-of-the-art model for automatic speech recognition (ASR) and speech translation. Trained on over 5 million hours of labeled data, it demonstrates strong generalization across datasets and domains, excelling in zero-shot transcription and translation tasks.
GPT-5.2 Chat (also known as Instant) is the fast and lightweight version of the 5.2 family, built for low-latency chatting while maintaining strong general intelligence. It leverages adaptive reasoning to focus more “thinking” on challenging queries, boosting accuracy in math, coding, and multi-step tasks without sacrificing speed in everyday conversations. The model is naturally warmer and more conversational, with improved instruction following and more stable short-form reasoning. GPT-5.2 Chat is ideal for high-throughput, interactive scenarios where quick response and consistency are more important than in-depth analysis.
GPT-5.1-Codex-Mini is a more compact and faster variant of GPT-5.1-Codex.
A compact reasoning model in OpenAI’s o-series, optimized for fast, cost-efficient performance while retaining strong multimodal and agentic capabilities. Supports tool use and demonstrates competitive reasoning and coding performance across benchmarks, outperforming its predecessor o3-mini and approaching o3 in some domains. Well-suited for high-throughput scenarios where latency or cost is critical.
The November 2024 release of GPT-4o, featuring enhanced creative writing, more natural and engaging responses, and improved file handling. Maintains the intelligence of GPT-4 Turbo while being twice as fast and 50% more cost-effective, with better support for non-English languages and visual tasks.
GPT-5.1 Chat (also known as Instant) is the fast, lightweight member of the 5.1 family, optimized for low-latency chat while retaining strong general intelligence. It uses adaptive reasoning to selectively “think” on harder queries, improving accuracy on math, coding, and multi-step tasks without slowing down typical conversations. The model is warmer and more conversational by default, with better instruction following and more stable short-form reasoning. GPT-5.1 Chat is designed for high-throughput, interactive workloads where responsiveness and consistency matter more than deep deliberation.
The smallest and fastest member of the GPT-5 family, optimized for developer tools, rapid user interactions, and ultra-low latency environments. While it offers limited reasoning depth compared to larger models, GPT-5-Nano preserves essential instruction-following and safety mechanisms. It is the successor to GPT-4.1-nano and is best suited for real-time, cost-sensitive, or embedded applications.
DALL-E 3 is OpenAI’s third-generation text-to-image model, offering enhanced detail, accuracy, and the ability to understand complex prompts. It excels at generating realistic and creative images, handling intricate details like text and human anatomy, and supports various aspect ratios for flexible output.
GPT-4.1, a flagship model for advanced instruction following, software engineering, and long-context reasoning. Supports a 1 million token context window and is tuned for precise code diffs, agent reliability, and high recall in large document contexts.
GPT-Image-1.5 is OpenAI's flagship image generation and editing model, designed for precise, natural, and fast creation. It follows user instructions down to fine details, preserving critical elements such as lighting, composition, and facial likeness across edits and generations. It handles a wide range of editing tasks, including addition, removal, stylization, combination, and advanced text rendering, producing images that closely match user intent. With generation up to 4x faster than previous versions, it supports quick iteration whether you need a simple fix or a complete visual transformation. Improved integration and lower API costs make GPT-Image-1.5 well suited to marketing, product visualization, e-commerce, and creative tools, while its dedicated editor and presets provide an accessible space for both practical and expressive image work.
GPT-5.1-Codex-Max is OpenAI’s newest agentic coding model, created for extended, high-context software development tasks. Built on an enhanced 5.1 reasoning stack, it’s been trained with agentic workflows covering software engineering, mathematics, and research. GPT-5.1-Codex-Max offers faster performance, better reasoning abilities, and increased token efficiency throughout the development process.
GPT-5 Chat is tailored for advanced, natural, and context-aware conversations in enterprise environments. It leverages the latest advancements in OpenAI’s conversational AI, supporting multimodal and dynamic dialogue with enhanced context retention and user intent understanding.
The August 2024 version of GPT-4o, offering improved structured output capabilities, including support for JSON schema in responses. Maintains high intelligence and efficiency, with enhanced non-English and visual performance.
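JSON-schema support means responses can be constrained to a declared shape and parsed without ad-hoc cleanup. A sketch of how such a response might be consumed; the `response_format` payload follows OpenAI's structured-outputs convention, and the sample reply string is made up:

```python
import json

# Request fragment declaring the expected shape (structured-outputs style).
response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "city_info",
        "schema": {
            "type": "object",
            "properties": {
                "city": {"type": "string"},
                "population": {"type": "integer"},
            },
            "required": ["city", "population"],
        },
    },
}

# A hypothetical model reply conforming to the schema above.
raw_reply = '{"city": "Paris", "population": 2102650}'
data = json.loads(raw_reply)
assert data["city"] == "Paris" and isinstance(data["population"], int)
```

Because the schema is enforced on the model side, downstream code can rely on required fields being present and correctly typed.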
A flagship large language model from OpenAI, optimized for advanced instruction following, real-world software engineering, and long-context reasoning. Supports a 1 million token context window and outperforms GPT-4o and GPT-4.5 in coding (54.6% SWE-bench Verified), instruction compliance (87.4% IFEval), and multimodal understanding. Tuned for precise code diffs, agent reliability, and high recall in large document contexts, making it ideal for agents, IDE tooling, and enterprise knowledge retrieval.
A text-to-speech model built on GPT-4o mini, a fast and powerful language model. Use it to convert text into natural-sounding spoken audio.
OpenAI’s most advanced large language model, engineered for high-stakes applications requiring step-by-step reasoning, precise instruction following, and robust code generation. GPT-5 introduces major improvements in factual accuracy, user intent understanding, and hallucination reduction. It supports advanced prompt routing, user-specified intent (such as "think hard about this"), and is optimized for complex workflows in coding, writing, and health-related domains.
A speech-to-text model using GPT-4o for transcribing audio. It offers improved word error rate, better language recognition, and higher accuracy compared to the original Whisper models. Use it for more precise transcripts.
GPT-5.1-Codex is a specialized version of GPT-5.1 optimized for software engineering and coding workflows. It's designed for both interactive development sessions and long, independent execution of complex engineering tasks. The model supports building projects from scratch, feature development, debugging, large-scale refactoring, and code review. Compared to GPT-5.1, Codex is more steerable, closely follows developer instructions, and produces cleaner, higher-quality code. Codex integrates into developer environments like the CLI, IDE extensions, GitHub, and cloud tasks. It adapts its reasoning dynamically—providing quick answers for small tasks and sustaining long, multi-hour runs for large projects. The model is trained for structured code reviews, identifying critical flaws by reasoning over dependencies and validating behavior against tests. It also supports multimodal inputs like images or screenshots for UI development and integrates tools for search, dependency installation, and environment setup. Codex is specifically intended for agentic coding applications.
GPT-5.2 Pro is OpenAI's most advanced model, featuring significant upgrades in agentic coding and long-context capabilities compared to GPT-5 Pro. It is specifically optimized for handling complex tasks that demand step-by-step reasoning, precise instruction following, and accuracy in critical scenarios. The model supports advanced test-time routing and sophisticated prompt understanding, including user cues like "think hard about this." Key improvements include reduced hallucination and sycophancy, along with stronger performance in coding, writing, and health-related tasks.
Text-Embedding-3-Large is OpenAI’s most capable embedding model, supporting both English and non-English text tasks. It produces high-dimensional embeddings (up to 3072 dimensions) for advanced semantic similarity, search, and clustering, and allows flexible trade-offs between performance and resource usage.
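The performance/resource trade-off works by truncating an embedding to fewer leading dimensions and renormalizing it to unit length, so similarity comparisons remain meaningful at the smaller size. A minimal sketch (the vector below is a toy stand-in, not real 3072-dimensional model output):

```python
import math

def shorten_embedding(vec, dims):
    # Keep the first `dims` components, then rescale to unit length
    # so cosine/dot-product comparisons remain meaningful.
    truncated = vec[:dims]
    norm = math.sqrt(sum(x * x for x in truncated))
    return [x / norm for x in truncated]

full = [0.5, 0.5, 0.5, 0.5, 0.0, 0.0]   # toy stand-in for a full-size vector
short = shorten_embedding(full, 4)
assert len(short) == 4
assert abs(sum(x * x for x in short) - 1.0) < 1e-9   # unit length after rescaling
```

Shorter vectors cut storage and index memory roughly in proportion to the dimension count, at some cost in retrieval quality.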
The April 2024 release of GPT-4 Turbo, supporting vision, JSON mode, and function calling. Trained on data up to December 2023 and optimized for advanced multimodal tasks.
OpenAI’s new state-of-the-art image generation model. This is a natively multimodal language model that accepts both text and image inputs and produces image outputs. It powers image generation in ChatGPT, offering exceptional prompt adherence, a high level of detail, and quality.
Omni-Moderation is OpenAI’s newest multimodal content moderation model, available through the Moderation API. It is designed to identify potentially harmful content in both text and images, offering improved accuracy and granular control, especially in non-English languages.
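A moderation result pairs per-category boolean flags with scores, and applications typically act when any category is flagged or a score crosses their own threshold. A sketch over a made-up response payload; the field names follow the Moderation API's categories/category_scores convention:

```python
def is_flagged(result, threshold=0.5):
    # Flag if the API marked any category, or any score exceeds our threshold.
    if any(result["categories"].values()):
        return True
    return any(score > threshold for score in result["category_scores"].values())

# Hypothetical sample result for one input.
sample = {
    "categories": {"harassment": False, "violence": False},
    "category_scores": {"harassment": 0.02, "violence": 0.71},
}
assert is_flagged(sample)   # violence score exceeds the 0.5 threshold
```

Tuning the threshold per category lets an application trade false positives against missed content for its own policy.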
OpenAI’s 21B-parameter open-weight Mixture-of-Experts (MoE) model, released under the Apache 2.0 license. Features 3.6B active parameters per forward pass, optimized for low-latency inference and deployability on consumer or single-GPU hardware. Trained in OpenAI’s Harmony response format, it supports reasoning level configuration, fine-tuning, and agentic capabilities such as function calling and structured outputs.
GPT-5-Codex is a specialized version of GPT-5 tailored for software engineering and coding tasks. It is suitable for both interactive development sessions and the independent execution of complex engineering projects. The model is capable of building projects from scratch, developing new features, debugging, performing large-scale refactoring, and conducting code reviews. Compared to the standard GPT-5, Codex offers greater steerability, follows developer instructions more closely, and delivers cleaner, higher-quality code.
OpenAI's o1 model family is designed to spend more time thinking before responding. Trained with large-scale reinforcement learning for chain-of-thought reasoning, it is optimized for math, science, programming, and other STEM tasks, consistently achieving PhD-level accuracy on industry benchmarks.
The latest GPT-4 Turbo model with vision capabilities, supporting JSON mode and function calling. Trained on data up to December 2023, it is optimized for high-throughput, multimodal applications.
Preview release of GPT-4, featuring improved instruction following, JSON mode, reproducible outputs, and parallel function calling. Trained on data up to December 2023. Heavily rate-limited while in preview.
A fine-tuned version of o4-mini, specifically optimized for use in Codex CLI. Recommended for code-related tasks, with improved performance in code generation and completion.
A well-rounded, powerful model from OpenAI, setting new standards in math, science, coding, and visual reasoning. Excels at technical writing and instruction-following, and is designed for multi-step problem solving across text, code, and images. BYOK is required for access.
Specialized GPT-4o variant trained for web search understanding and execution within chat completions, enabling advanced search query comprehension.
The fastest and most cost-effective model in the GPT-4.1 series, designed for tasks demanding low latency such as classification and autocompletion. Maintains a 1 million token context window and delivers exceptional performance at a small size, outperforming even some larger models on key benchmarks.
A cost-efficient language model from OpenAI, optimized for STEM reasoning tasks, especially in science, mathematics, and coding. Supports the `reasoning_effort` parameter for adjustable thinking time and features significant improvements over its predecessor, with better performance on complex questions and lower latency and cost.
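The `reasoning_effort` parameter is set per request; a minimal chat-completions-style payload sketch (the model identifier and message here are placeholder assumptions, not values from this catalog):

```python
# `reasoning_effort` trades thinking time for latency and cost;
# the commonly documented values are "low", "medium", and "high".
request_body = {
    "model": "o3-mini",  # placeholder model id
    "reasoning_effort": "high",
    "messages": [
        {"role": "user", "content": "Prove that the sum of two even numbers is even."},
    ],
}
assert request_body["reasoning_effort"] in ("low", "medium", "high")
```

Lower effort settings answer faster and cheaper; higher settings let the model deliberate longer on hard STEM problems.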
Text-Embedding-Ada-002 is a widely used text embedding model from OpenAI, converting text into semantic vectors for tasks like search, clustering, recommendations, and classification. It is known for strong performance and efficiency, making it a standard choice for embedding applications.