WaveSpeedAI, based in Singapore, announced the launch of its next-generation infrastructure platform for high-speed generative AI inference across image, video, and audio.
Founded by Zeyi Cheng and David Li, WaveSpeedAI delivers inference up to six times faster at roughly one-third the compute cost of traditional solutions, powered by a proprietary dynamic compute-scheduling framework.
CEO Zeyi Cheng said the platform removes key barriers for developers and creators by enabling real-time multimodal generation with seamless ComfyUI integrations. CTO David Li emphasized the system’s scalability across NVIDIA GPU platforms, including the A100, H100, and B200.
With support for state-of-the-art models such as the Flux series and Wan 2.1, WaveSpeedAI is positioning itself as the default infrastructure provider for AI-native applications worldwide.
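To give a sense of what calling a hosted inference platform of this kind looks like in practice, here is a minimal Python sketch of submitting a text-to-image request over HTTPS. The endpoint path, parameter names, model identifier, and response layout are illustrative assumptions, not WaveSpeedAI’s documented API; consult the official documentation for the actual interface.

```python
# Hypothetical sketch: submitting an image-generation request to a hosted
# inference API. Endpoint, parameters, and response shape are assumptions,
# not WaveSpeedAI's documented interface.
import os
import requests

API_KEY = os.environ["WAVESPEED_API_KEY"]   # assumed bearer-token auth
BASE_URL = "https://api.example.com/v1"     # placeholder base URL

def generate_image(prompt: str, model: str = "flux-dev") -> bytes:
    """Request a single image from a text prompt and return the raw bytes."""
    resp = requests.post(
        f"{BASE_URL}/images/generations",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"model": model, "prompt": prompt, "size": "1024x1024"},
        timeout=120,
    )
    resp.raise_for_status()
    image_url = resp.json()["data"][0]["url"]  # assumed response layout
    return requests.get(image_url, timeout=120).content

if __name__ == "__main__":
    png = generate_image("a lighthouse at dawn, photorealistic")
    with open("output.png", "wb") as f:
        f.write(png)
```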