Ant Group Unveils China’s First Multimodal AI Assistant with Code-Driven Outputs

Insider Brief

  • Ant Group introduced LingGuang, a multimodal AI assistant that generates answers as 3D models, animations, real-time visual analysis and code-built flash apps in under 30 seconds, processing language, images, audio, data and application code through a unified modular framework.
  • The system’s three core features include Fast Research for dynamic 3D and illustrated explanations, Flash App for instant no-code mini-apps, and AGI Camera for real-time scene understanding and on-the-fly image or video generation.
  • Ant Group positions LingGuang as China’s first broadly accessible AI tool enabling non-technical users to create functional applications and interact with AI across multiple formats in a single workflow.

Ant Group launched LingGuang, a multimodal AI assistant designed to move beyond text-based chat by delivering answers as 3D models, animations, real-time visual analysis and code-built custom flash apps in under 30 seconds. Built to interpret and produce language, images, audio, data and application code, LingGuang processes user queries through a modular framework that assembles responses across multiple formats at once, according to the company.

The system centers on three features. “Fast Research” pulls information from a topic and renders it as dynamic visual content, including 3D digital models of landmarks or historical subjects and generative illustrations that break down complex ideas such as quantum entanglement or economic principles, according to the company. Users can also navigate interactive maps to plan routes or explore local services.

“Flash App” uses LingGuang’s native coding engine to build functional mini-applications directly inside the conversation. With simple prompts, users can generate custom tools for fitness tracking, budgeting, trip planning, meal suggestions or shopping assistance, often in under 30 seconds. Ant Group positions this as the first broadly accessible AI in China that allows non-technical users to produce and run personalized applications instantly.

“AGI Camera” extends LingGuang’s multimodal abilities to real-world imagery, interpreting photos and video in real time and providing contextual understanding of scenes, the company said. The system can detect objects, assess conditions, and execute on-the-fly editing or generation of new images or clips, blending analysis and creative output within a single workflow.

“At Ant Group, we believe Artificial General Intelligence should be a public good — something that benefits everyone, not just experts,” He Zhengyu, Chief Technology Officer of Ant Group, noted in the announcement. “LingGuang is bringing every user their own personal AI developer: someone who can code, create visuals, build apps, and turn complex ideas into simple solutions—right in your pocket.”

Image credit: Ant Group
