Lightning AI and Voltage Park Complete Merger to Create the First Cloud Built for AI


Insider Brief

  • Lightning AI has completed a merger with Voltage Park, combining AI software and large-scale GPU infrastructure into a single AI-native cloud for training, deploying, and running AI models.
  • The merged platform integrates end-to-end AI tooling with access to more than 35,000 owned and operated H100, B200, and GB300 GPUs, aiming to reduce the cost and operational complexity associated with fragmented AI development stacks.
  • Existing customers will see expanded capabilities without changes to contracts or deployments, and the platform will continue to interoperate with traditional clouds such as Amazon Web Services as well as with neocloud providers.

PRESS RELEASE — Lightning AI, a cloud platform where developers and companies build and run AI applications, today announced the completion of a merger with Voltage Park, a large-scale GPU infrastructure provider. The two companies, operating under the Lightning AI name, bring together AI software and on-demand GPU compute in a single AI cloud designed for training, deploying, and running AI models and applications.

Lightning is used by over 400,000 individual developers, startups, and large enterprises to build AI models and applications without stitching together single-purpose tools or dealing with dozens of GPU vendors. Teams use Lightning to access GPUs, train models, deploy them into production, and run AI applications and large-scale inference all in one place.

Traditional clouds like AWS were designed for building and hosting CPU-based software such as websites and services. Gen AI requires a different set of tools, built specifically for GPU-based workloads such as large-scale inference, multi-node training, and large-scale data preparation. This gap has led to dozens of single-purpose tools, each designed to support only one part of the AI lifecycle, such as training or inference. Building production AI across so many fragmented tools creates significant operational and procurement overhead for enterprises.

“Imagine instead of using an iPhone, having to carry a separate calculator, flashlight, radio, and more – that’s where AI tooling is today,” said William Falcon, founder and CEO of Lightning AI.

The merger brings AI software and GPU infrastructure together into a single platform. Lightning users get virtually unlimited GPU burst capacity across a fleet of 35,000+ owned and operated H100, B200, and GB300 GPUs. Voltage Park customers get optional, built-in AI software, including large-scale inference, model serving, team management, and observability, without needing to use or pay for single-purpose tools. “Customers spend hundreds of millions on inference platforms that they now get bundled for free on Lightning AI,” said Saurabh Giri, former CPTO of Voltage Park and now CPTO of Lightning AI.

Before the merger, teams had to choose between traditional clouds like AWS, with clunky AI software and expensive GPUs; single-purpose tools such as inference platforms; or neoclouds offering cheap GPUs with only basic Kubernetes software. As a result, AI teams paid for and worked across multiple tools, adding unnecessary cost and complexity. With Lightning, teams now get purpose-built AI software with enterprise-grade reliability at neocloud GPU prices. They can train models, run inference, and ship AI applications all in one place, so they can focus on shipping, not GPU shopping.

“Our vision has been to build the cloud for the Gen AI age,” said Falcon. “When I was pretraining world models in 2019 at Facebook’s AI Lab during my PhD, the amount of tooling required, limited access to high-performance infrastructure, and lack of collaboration slowed our research down. From day one, I set out to build a next-generation cloud accessible to everyone, from undergrads to Fortune 100 companies. It’s taken us six years to get here, and this merger is the next big step to making that vision real.”

“The next phase of AI will be won by teams that control the entire stack,” said Timo Mertens, CTO of Cantina Labs. “Model performance, cost efficiency, and iteration speed increasingly depend on tight integration between the platform, deep optimization expertise, and owned compute. What’s compelling about this combination is that it brings those layers together into a single, cohesive system—where software and infrastructure are designed in lockstep. That kind of vertical integration isn’t optional anymore; it’s the path forward for building and operating frontier AI at scale.”

“Frontier labs and the ecosystem more broadly have been waiting for what Lightning AI and Voltage Park are building jointly as one company,” said Misha Laskin, CEO of Reflection AI. “Performant verticalized infrastructure is an important unlock for frontier research and engineering at speed.”

“This puts us in a category that didn’t previously exist,” said Giri. “Most neoclouds sell raw GPU capacity without a deep software stack. Most AI platforms depend on third-party clouds underneath. We’re software-first and infrastructure-native, and designed end-to-end for AI workloads. Our customers will benefit from a unified experience that rivals the hyperscalers in capability while offering better value and operational simplicity for AI workloads.”

“This merger reflects a broader shift in the industry,” said Ozan Kaya, former CEO of Voltage Park, now President of Lightning AI. “The next generation of cloud platforms won’t be built by stitching together single-purpose tools. They’ll be AI-native from day one. Just as AWS was foundational for the internet era, we are building from the ground up for the AI era.”

For existing customers, the merger brings expanded capabilities with no disruption. There are no changes to contracts or deployments. Supporting multiple clouds remains a core part of Lightning’s platform, and customers can continue to use Lightning alongside AWS and other cloud providers. When needed, they can also burst workloads into Lightning’s own GPU infrastructure for additional capacity. Lightning will continue to grow its GPU marketplace through deeper partnerships with major cloud providers and neoclouds.

Matt Swayne

With a background in journalism and communications spanning several decades, Matt Swayne has worked as a science communicator for an R1 university for more than 12 years, specializing in translating high tech and deep tech for general audiences. He has served as a writer, editor, and analyst at The Space Impulse since its inception. In addition to his work as a science communicator, Matt develops and teaches courses to improve the media and communications skills of scientists.
