Moonshot AI Releases Open-Source Multimodal Model Kimi K2.5, Targeting Advanced Coding and Agent Workflows

China-based Moonshot AI, backed by Alibaba and HongShan, has launched Kimi K2.5, an open-source foundation model designed for native multimodal reasoning across text, images, and video. The company reported that the model was trained on 15 trillion mixed visual and text tokens, enabling strong performance in coding and multi-agent orchestration tasks. 

Internal benchmarks showed Kimi K2.5 matching or surpassing leading proprietary systems, including Gemini, GPT, and Claude, across coding and video-reasoning evaluations. Alongside the release, Moonshot introduced Kimi Code, an open-source developer tool positioned against offerings from Anthropic. Founded by former Google and Meta researcher Yang Zhilin, Moonshot has raised multiple billion-dollar rounds and is reportedly pursuing further funding, according to Bloomberg.

James Dargan

James Dargan is a writer and researcher at The AI Insider. He focuses on the AI startup ecosystem and writes about the space in a tone accessible to the average reader.
