PowerLattice Raises $25M to Reduce AI Compute Power Needs

Insider Brief

  • PowerLattice emerged from stealth with a $25 million Series A led by Playground Global and Celesta Capital, unveiling a power-delivery chiplet that brings power directly into the processor package and cuts compute power needs by more than 50%.
  • The company’s chiplet integrates into existing SoC designs, shortens the power path and reduces energy loss, effectively doubling compute performance while improving reliability for next-generation AI accelerators and GPUs exceeding 2 kW.
  • Founded by industry veterans from Qualcomm, NUVIA and Intel, PowerLattice has raised $31 million to date and is preparing engineering samples for 1 kW-plus processors as data centers confront escalating power constraints.

PRESS RELEASE — PowerLattice, the company reimagining power delivery for next-generation AI accelerators, today announced its emergence from stealth with $25 million in Series A funding jointly led by Playground Global and Celesta Capital. The company’s breakthrough power delivery chiplet tightly couples power and compute, reducing total compute power needs by more than 50%, effectively doubling performance. PowerLattice has raised $31 million in funding to date.

“Power is the defining challenge for AI’s future,” said Peng Zou, Co-Founder, CEO and President of PowerLattice. “Data centers are already starting to hit a power wall and the problem is only going to get worse if we don’t rethink how chips are powered. By bringing power directly into the processor package, we’re delivering the performance and efficiency AI needs to keep scaling beyond today’s limits.”

“AI is not constrained by capital, it’s constrained by power,” said Pat Gelsinger, General Partner, Playground Global. “PowerLattice represents a dramatic breakthrough in the efficiency and scale of power delivery. This is the kind of generational leap Playground backs: technology that doesn’t just advance chips, but reshapes the entire trajectory of computing.”

“PowerLattice is delivering a truly scalable solution to attack the cost-performance, reliability and cooling bottlenecks throttling AI data centers,” said Dr. Steve Fu, Partner, Celesta Capital. “I know exactly how tough this problem is, having previously led power device and system incubation at global semiconductor leaders and watching two decades of attempts fall short of the real potential. It is why we zeroed in on this opportunity in our thesis at Celesta – and why we believe PowerLattice’s solution is the unlock the industry has been waiting for.”

Reimagining Power for AI

AI accelerators and GPUs are pushing past 2 kW per chip, straining data centers that already consume as much energy as mid-size cities. Conventional power delivery forces very high electrical current to travel long, resistive paths before reaching the processor, wasting energy and limiting performance. Without a new approach, data center energy use could triple by 2028, consuming up to 12% of the U.S. power supply and creating a barrier to AI's continued scaling.

PowerLattice is breaking through this power wall by delivering power much closer to where compute happens. The company has developed the industry’s first power delivery chiplet, bringing power directly into the processor package. Combining proprietary miniaturized on-die magnetic inductors, advanced voltage control circuit innovations, a vertical design and a programmable software layer, PowerLattice’s chiplet tightly couples power and compute, delivering power precisely where and when it’s needed.

Impact and Readiness

PowerLattice’s chiplet integrates easily into existing system-on-a-chip (SoC) product designs, shrinking the overall processor footprint and dramatically shortening the power path. As a result, PowerLattice:

  • Unlocks chip performance: PowerLattice lifts the power ceiling, reduces power-related throttling and increases compute utilization, effectively doubling performance and enabling significantly more AI computation per rack.
  • Cuts the AI power bill: By tightly coupling power and compute, PowerLattice dramatically reduces energy loss, lowering compute power needs by more than 50%.
  • Delivers AI-grade reliability: As AI clusters scale to hundreds of thousands of GPUs and accelerators, PowerLattice delivers power with the consistency, precision and stability needed to ensure optimal performance and system longevity.

With silicon already in hand and engineering samples in progress for 1 kW+ GPUs, CPUs and accelerators, PowerLattice is delivering the performance, efficiency, and reliability that next-generation AI and data center infrastructure demands.

A Founding Team with Decades of Expertise

PowerLattice was founded by Peng Zou, Gang Ren, and Sujith Dermal, who together bring decades of engineering leadership in integrated magnetics, analog IC, power management and system design, with experience at Qualcomm, NUVIA, Intel, and a portfolio of issued and pending patents. Joining the board are Pat Gelsinger, General Partner at Playground Global, and Dr. Steve Fu, Partner at Celesta Capital, underscoring the strategic importance of PowerLattice’s technology to leaders across the semiconductor ecosystem.

Image credit: PowerLattice

Greg Bock

Greg Bock is an award-winning investigative journalist with more than 25 years of experience in print, digital, and broadcast news. His reporting has spanned crime, politics, business and technology, earning multiple Keystone Awards and Pennsylvania Association of Broadcasters honors. Through the Associated Press and Nexstar Media Group, his coverage has reached audiences across the United States.
