kimi-k2-0905
by moonshotai

Kimi K2 0905 is the September update of Kimi K2 0711, a Mixture-of-Experts (MoE) language model from Moonshot AI with 1 trillion total parameters and 32 billion activated per forward pass. The context window has been expanded to 256k tokens. This release brings improved agentic coding accuracy and generalization across scaffolds, as well as more aesthetic and functional frontend code for web, 3D, and similar tasks. Kimi K2 remains optimized for advanced tool use, reasoning, and code synthesis, excelling on benchmarks such as LiveCodeBench, SWE-bench, ZebraLogic, GPQA, Tau2, and AceBench. Its training uses a novel stack built around the MuonClip optimizer for stable large-scale MoE training.
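For reference, a minimal sketch of calling the model through an OpenAI-compatible chat completions client. The base URL, API key environment variable, and exact model identifier are assumptions for illustration; check your provider's documentation for the real values.

```python
# Minimal sketch: calling kimi-k2-0905 via an OpenAI-compatible endpoint.
# The base_url, API key variable, and model id are assumptions; confirm the
# exact values with your provider before use.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example.com/v1",  # hypothetical endpoint
    api_key=os.environ["API_KEY"],          # hypothetical env var name
)

response = client.chat.completions.create(
    model="kimi-k2-0905",  # model id as listed on this page
    messages=[
        {"role": "system", "content": "You are a helpful coding assistant."},
        {"role": "user", "content": "Write a function that reverses a linked list."},
    ],
    max_tokens=512,
)

print(response.choices[0].message.content)
```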
Pricing
Pay-as-you-go rates for this model. More details can be found here.
Token type | Price (per 1M tokens) |
---|---|
Input | $0.07 |
Output | $1.24 |
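As a worked illustration of these rates, a short sketch that estimates the cost of a single request from its token counts. The prices are hard-coded from the table above, and the helper name is hypothetical.

```python
# Hypothetical helper: estimate request cost from the pay-as-you-go rates above.
INPUT_PRICE_PER_M = 0.07   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 1.24  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one request."""
    return (
        input_tokens / 1_000_000 * INPUT_PRICE_PER_M
        + output_tokens / 1_000_000 * OUTPUT_PRICE_PER_M
    )

# Example: a 200k-token prompt (near the 256k context limit) with a 2k-token reply.
print(f"${estimate_cost(200_000, 2_000):.4f}")  # ≈ $0.0165
```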
Capabilities
Input Modalities
Output Modalities
Rate Limits
Requests per minute (RPM) and requests per day (RPD) by tier. More about tiers can be found here. A client-side throttle sketch follows the table below.
Tier | RPM | RPD |
---|---|---|
Free | — | — |
Tier 1 | 10 | — |
Tier 2 | 15 | — |
Tier 3 | 25 | — |
Tier 4 | 50 | — |
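To stay within the RPM ceiling for your tier, a simple client-side throttle along these lines can help. The tier-to-limit mapping mirrors the table above; the class and variable names are assumptions, not part of any SDK.

```python
import time

# Hypothetical client-side throttle mirroring the RPM limits in the table above.
RPM_BY_TIER = {"tier1": 10, "tier2": 15, "tier3": 25, "tier4": 50}

class RpmThrottle:
    """Spaces out requests so no more than the tier's RPM are issued per minute."""

    def __init__(self, tier: str):
        self.min_interval = 60.0 / RPM_BY_TIER[tier]  # seconds between requests
        self.last_request = 0.0

    def wait(self) -> None:
        """Sleep just long enough to respect the per-minute limit, then record the send time."""
        elapsed = time.monotonic() - self.last_request
        if elapsed < self.min_interval:
            time.sleep(self.min_interval - elapsed)
        self.last_request = time.monotonic()

# Example: Tier 1 allows 10 requests per minute, i.e. one every 6 seconds.
throttle = RpmThrottle("tier1")
for prompt in ["first prompt", "second prompt"]:
    throttle.wait()
    # ... issue the API call here ...
```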