Multiverse Computing Advances Compressed AI Models with Quantum-Inspired Technology

Spanish AI startup Multiverse Computing is expanding access to compressed large language models designed to reduce deployment costs while maintaining near-frontier performance. The company has released an updated version of its HyperNova 60B model, built using its CompactifAI compression technology inspired by quantum computing principles. The model, now available on Hugging Face, is approximately half the size of its source model, OpenAI’s gpt-oss-120b, with lower memory usage and latency. The latest iteration, HyperNova 60B 2602, adds improved support for tool calling and agentic coding tasks.

Multiverse reports enterprise adoption among clients including Iberdrola, Bosch, and the Bank of Canada. The company confirmed it is in active discussions regarding a potential new funding round following its $215 million Series B, which included participation from Spain’s SETT. Multiverse continues to position itself as a provider of sovereign AI solutions across Europe and North America.

James Dargan

James Dargan is a writer and researcher at The AI Insider. He focuses on the AI startup ecosystem and writes about the space in a style accessible to general readers.
