Spanish AI startup Multiverse Computing is expanding access to compressed large language models designed to cut deployment costs while maintaining near-frontier performance. The company has released HyperNova 60B 2602, an updated version of its HyperNova 60B model built with its quantum-inspired CompactifAI compression technology. Now available on Hugging Face, the model is roughly half the size of its source model, OpenAI’s gpt-oss-120b, with lower memory usage and latency, and the latest iteration adds improved support for tool calling and agentic coding tasks.
Multiverse reports enterprise adoption among clients including Iberdrola, Bosch, and the Bank of Canada. The company confirmed it is in active discussions regarding a potential new funding round following its $215 million Series B, which included participation from Spain’s SETT. Multiverse continues to position itself as a provider of sovereign AI solutions across Europe and North America.