kimi-k2-0905

by moonshotai

Kimi K2 0905 is the September update of Kimi K2 0711, a Mixture-of-Experts (MoE) language model from Moonshot AI with 1 trillion total parameters and 32 billion activated per forward pass. The context window has been expanded to 256k tokens. This release brings improved agentic coding accuracy and generalization across scaffolds, as well as more aesthetic and functional frontend code for web, 3D, and similar tasks. Kimi K2 remains optimized for advanced tool use, reasoning, and code synthesis, excelling in benchmarks such as LiveCodeBench, SWE-bench, ZebraLogic, GPQA, Tau2, and AceBench. It is trained with a novel stack built around the MuonClip optimizer for stable large-scale MoE training.
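
Since this page lists API pricing and rate limits, a typical way to use the model is through an OpenAI-compatible client. The sketch below assumes such an endpoint; the base URL and the NAGA_API_KEY environment variable are illustrative placeholders, not confirmed values.

```python
# Minimal sketch: calling kimi-k2-0905 through an OpenAI-compatible
# chat-completions client. The base_url and NAGA_API_KEY names are
# placeholders (assumptions); substitute the values from your account.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.naga.ac/v1",  # assumed OpenAI-compatible endpoint
    api_key=os.environ["NAGA_API_KEY"],
)

response = client.chat.completions.create(
    model="kimi-k2-0905",
    messages=[
        {"role": "system", "content": "You are a helpful coding assistant."},
        {"role": "user", "content": "Write a Python function that reverses a linked list."},
    ],
)

print(response.choices[0].message.content)
```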

Pricing

Pay-as-you-go rates for this model. More details can be found here.

Input Tokens (1M): $0.07
Output Tokens (1M): $1.24
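
To see how the two rates combine, the sketch below estimates the cost of a single request from its token counts; the token counts in the example are made-up values.

```python
# Estimate per-request cost from the listed per-million-token rates.
INPUT_RATE_PER_M = 0.07   # USD per 1M input tokens
OUTPUT_RATE_PER_M = 1.24  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the pay-as-you-go cost in USD for one request."""
    return (input_tokens / 1_000_000) * INPUT_RATE_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_RATE_PER_M

# Example: a 12,000-token prompt with a 1,500-token completion (made-up values)
print(f"${estimate_cost(12_000, 1_500):.6f}")  # prints $0.002700
```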

Capabilities

Input Modalities: Text
Output Modalities: Text

Rate Limits

Requests per minute (RPM) and per day (RPD) by tier. More about tiers here.

Tier      RPM    RPD
Free
Tier 1    10
Tier 2    15
Tier 3    25
Tier 4    50
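
To stay under a tier's RPM ceiling, a client can throttle itself. The sketch below assumes a limit of 10 requests per minute (the Tier 1 figure in the table above) and calls a hypothetical send_request function; it is an illustration, not an official client.

```python
# Client-side throttle: block until another request can be sent without
# exceeding RPM_LIMIT requests in any rolling 60-second window.
# RPM_LIMIT = 10 assumes the Tier 1 figure from the table above.
import time
from collections import deque

RPM_LIMIT = 10
_sent_times: deque = deque()

def wait_for_slot() -> None:
    """Sleep as needed so the next request stays within the RPM limit."""
    now = time.monotonic()
    # Discard timestamps that have left the 60-second window.
    while _sent_times and now - _sent_times[0] >= 60:
        _sent_times.popleft()
    if len(_sent_times) >= RPM_LIMIT:
        # Wait until the oldest request in the window expires.
        time.sleep(60 - (now - _sent_times[0]))
        _sent_times.popleft()
    _sent_times.append(time.monotonic())

# Usage (send_request is a hypothetical placeholder for the actual API call):
# for prompt in prompts:
#     wait_for_slot()
#     send_request(prompt)
```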

Usage Analytics

Token usage across the last 30 active days
