MIT Develops Ultrafast, Energy-Efficient Chip to Boost AI Performance


Insider Brief

  • MIT researchers have developed a photonic chip that uses light to perform all operations of a deep neural network, enabling ultrafast and energy-efficient AI computations.
  • The chip integrates nonlinear optical and electronic components, allowing real-time training of neural networks without external processors.
  • Fabricated using commercial foundry techniques, the chip achieves nanosecond-scale computations with over 92% accuracy, paving the way for scalable applications in telecommunications, navigation, and scientific research.

MIT researchers have unveiled a photonic chip that uses light to perform all the key operations of deep neural networks, promising faster and more energy-efficient AI computations.

As AI models grow more complex, traditional electronic processors struggle to keep up with their energy demands and computational requirements. The new photonic processor, described in Nature Photonics and reported in MIT News, addresses these challenges by processing data optically rather than electrically. The result: computations completed in under half a nanosecond, with accuracy comparable to traditional hardware.

“This is especially useful for systems where you are doing in-domain processing of optical signals, like navigation or telecommunications, but also in systems that you want to learn in real time,” lead author Saumil Bandyopadhyay, a visiting scientist at MIT’s Research Laboratory of Electronics, told MIT News.

The Power of Nonlinear Optics

Deep neural networks rely on both linear operations, such as matrix multiplication, and nonlinear processes, which allow them to identify intricate patterns. Earlier optical chips could handle linear computations but relied on external electronics for nonlinear tasks, reducing their speed and efficiency.
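To make the distinction concrete, here is what this pairing looks like in a conventional digital network, sketched in Python with NumPy (a toy illustration for reference, not the optical hardware):

```python
import numpy as np

# One layer of a deep network: a linear step (matrix multiply) followed
# by a nonlinear activation. Without the nonlinearity, stacked layers
# would collapse into a single matrix multiply, so the network could
# only ever learn linear mappings.
def dense_layer(x, W, b):
    z = W @ x + b              # linear operation: matrix-vector multiply
    return np.maximum(z, 0.0)  # nonlinear operation: ReLU activation

rng = np.random.default_rng(0)
x = rng.normal(size=4)                       # toy input vector
W, b = rng.normal(size=(3, 4)), np.zeros(3)  # toy layer parameters
print(dense_layer(x, W, b))
```

It is this second, nonlinear step that earlier optical chips had to hand off to external electronics.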

MIT’s team solved this issue with nonlinear optical function units (NOFUs), which combine optics and electronics on the same chip. These units convert a small fraction of the light into electrical signals to perform the nonlinear calculations, avoiding the cost of routing data to an external processor. Because the majority of the computation stays within the optical domain, the design delivers ultra-fast operations without significant energy overhead.
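In rough terms, a NOFU behaves like the toy model below. This is a conceptual sketch only: the tap fraction and the transfer function are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Conceptual model of a nonlinear optical function unit (NOFU): tap off
# a small fraction of the light, detect it electronically, and let that
# electrical signal modulate the light remaining on the optical path.
# The result is a nonlinear input-output response while most of the
# signal never leaves the optical domain.
def nofu(optical_in, tap_fraction=0.1):
    tapped = tap_fraction * optical_in            # small tap to a photodetector
    electrical = np.abs(tapped) ** 2              # photodetection senses intensity
    remaining = (1.0 - tap_fraction) * optical_in # most light stays optical
    return remaining * np.exp(-electrical)        # placeholder modulator response

amplitudes = np.linspace(0.0, 3.0, 5)
print(nofu(amplitudes))  # output is a nonlinear function of input amplitude
```

The key design point survives the simplification: only the small tapped fraction ever becomes electrical, which is why the nonlinearity adds so little latency and energy.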

Training AI in Real Time

The chip’s ability to train neural networks in real time, known as in situ training, sets it apart from traditional hardware. This feature makes it particularly useful for scenarios requiring on-the-spot learning, such as telecommunications or navigation.
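One way to picture in situ training is the loop below, in which the forward pass runs on the physical chip and the weight updates land directly in the on-chip settings. This is a hypothetical sketch, not the paper’s procedure: chip_forward stands in for the optical forward pass, and the finite-difference gradient estimate is just one common strategy for training with hardware in the loop.

```python
import numpy as np

# Hypothetical sketch of in situ training: the device computes the
# forward pass, and parameters are updated in place on the device, so
# no external processor sits in the training loop.
rng = np.random.default_rng(1)
weights = rng.normal(size=(2, 4))   # stand-in for tunable on-chip settings

def chip_forward(x, w):             # placeholder for the optical forward pass
    return np.tanh(w @ x)

def loss(y, target):
    return float(np.sum((y - target) ** 2))

x, target = rng.normal(size=4), np.array([0.5, -0.5])
eps, lr = 1e-4, 0.1
for step in range(100):
    base = loss(chip_forward(x, weights), target)
    grad = np.zeros_like(weights)
    for idx in np.ndindex(*weights.shape):  # perturb one weight at a time
        w_plus = weights.copy()
        w_plus[idx] += eps
        grad[idx] = (loss(chip_forward(x, w_plus), target) - base) / eps
    weights -= lr * grad                    # update applied on the device
print(loss(chip_forward(x, weights), target))  # loss after training
```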

“This work demonstrates that computing—at its essence, the mapping of inputs to outputs—can be compiled onto new architectures of linear and nonlinear physics that enable a fundamentally different scaling law of computation versus effort needed,” said Dirk Englund, senior author and MIT professor, as reported in MIT News.

In tests, the chip achieved over 96% accuracy during training and more than 92% accuracy in inference tasks, matching the performance of advanced digital systems.

A Path to Real-World Integration

Fabricated using commercial foundry techniques, the chip is designed for scalability, leveraging the same processes used to produce CMOS computer chips. This approach could enable its integration into consumer and industrial devices, including cameras, lidar systems, and telecommunications networks.

“There are a lot of cases where how well the model performs isn’t the only thing that matters, but also how fast you can get an answer. Now that we have an end-to-end system that can run a neural network in optics, at a nanosecond time scale, we can start thinking at a higher level about applications and algorithms,” Bandyopadhyay told MIT News. That nanosecond-scale speed positions the chip for high-demand applications like scientific research and high-speed data transmission.

What’s Next?

Looking ahead, the researchers aim to enhance the chip’s capabilities, scaling it for more complex tasks and exploring algorithms that exploit the advantages of optical computing. Integrating the chip with existing electronics will also be a focus of future work.

The research was supported by the U.S. National Science Foundation, the U.S. Air Force Office of Scientific Research, and NTT Research.

For those interested in a deeper dive, the full study is available in Nature Photonics.
