Try Kimi K2.5 in the Workbench
Run this model interactively, tune parameters, and compare outputs.
moonshotai/Kimi-K2.5 is an open-source, native multimodal agentic model built through continual pretraining on approximately 15 trillion mixed visual and text tokens atop Kimi-K2-Base. It seamlessly integrates vision and language understanding with advanced agentic capabilities, and supports both instant and thinking modes as well as conversational and agentic paradigms.
Key features include:

- **Agent Swarm** — decomposes complex tasks into parallel sub-tasks executed by up to 100 dynamically instantiated sub-agents with up to 1,500 tool calls.
- **Coding with Vision** — generates code from visual specifications such as UI designs and video workflows.
- **Native Multimodality** — pre-trained on vision–language tokens for visual knowledge, cross-modal reasoning, and agentic tool use grounded in visual inputs.
| Metric | Value |
|---|---|
| Parameter Count | 1 trillion (32 billion activated) |
| Mixture of Experts | Yes |
| Active Parameter Count | 32 billion |
| Context Length | 256,000 tokens |
| Vision Encoder | MoonViT (400M params) |
| Multilingual | Yes |
| Quantized* | Yes |
| Precision* | INT4 |
Example request
- Minimal
- Basic parameters
- All parameters
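A minimal request can be sketched as follows. This is a sketch assuming an OpenAI-compatible chat-completions endpoint; the base URL, API-key environment variable, and `temperature` choice are placeholders, not documented values.

```python
import json
import os
import urllib.request

# Hypothetical base URL -- substitute your provider's actual endpoint.
BASE_URL = "https://api.example.com/v1"


def build_request(prompt: str) -> dict:
    """Build a minimal chat-completions payload for Kimi-K2.5."""
    return {
        "model": "moonshotai/Kimi-K2.5",
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.6,  # placeholder value, not a documented default
    }


def send(payload: dict) -> dict:
    """POST the payload to the (assumed) chat-completions route."""
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {os.environ.get('API_KEY', '')}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


payload = build_request("Summarize this UI design in one sentence.")
print(json.dumps(payload, indent=2))
```

The "Basic parameters" and "All parameters" variants extend this payload with additional fields; the full set of accepted parameters comes from the model's request schema (see below).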
Fetch model details
The `models` endpoint returns the full model object, including its `json_request_schema`.
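Once fetched, the model object can be inspected to discover which request parameters the model accepts. The response shape below is illustrative — only the `json_request_schema` field name comes from the text above; the surrounding fields and schema contents are assumptions:

```python
import json

# Illustrative model object -- a real response will contain more fields.
model_response = json.loads("""
{
  "id": "moonshotai/Kimi-K2.5",
  "context_length": 256000,
  "json_request_schema": {
    "type": "object",
    "properties": {
      "messages": {"type": "array"},
      "temperature": {"type": "number"}
    },
    "required": ["messages"]
  }
}
""")

schema = model_response["json_request_schema"]
# List the request parameters the schema declares, and which are required.
params = sorted(schema["properties"])
required = schema["required"]
print(params)    # ['messages', 'temperature']
print(required)  # ['messages']
```

Validating outgoing payloads against this schema before sending is a simple way to catch unsupported parameters client-side.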