China-based Moonshot AI, backed by Alibaba and HongShan, has launched Kimi K2.5, an open-source foundation model designed for native multimodal reasoning across text, images, and video. The company reported that the model was trained on 15 trillion mixed visual and text tokens, enabling strong performance in coding and multi-agent orchestration tasks.
Internal benchmarks showed Kimi K2.5 matching or surpassing leading proprietary systems, including Gemini, GPT, and Claude, on coding and video-reasoning evaluations. Alongside the release, Moonshot introduced Kimi Code, an open-source developer tool positioned against offerings from Anthropic. Founded by former Google and Meta researcher Yang Zhilin, Moonshot has raised multiple billion-dollar funding rounds and is reportedly pursuing additional capital, according to Bloomberg.




