Optical computing for deep learning with a programmable nanophotonic processor

Researchers at MIT and elsewhere have developed a new approach to deep-learning computation that uses light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep-learning computations.

Soljačić says that many researchers over the years have made claims about optics-based computers, but that “people dramatically over-promised, and it backfired.” While many proposed uses of such photonic computers turned out not to be practical, a light-based neural-network system developed by this team “may be applicable for deep-learning for some applications,” he says.

Traditional computer architectures are not very efficient when it comes to the kinds of calculations needed for certain important neural-network tasks. Such tasks typically involve repeated multiplications of matrices, which can be very computationally intensive in conventional CPU or GPU chips.
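To make the cost concrete, here is a minimal sketch (the layer sizes are illustrative, not from the paper) of why a neural network stresses conventional chips: a single fully connected layer is one matrix-vector product, and every output element requires one multiply-accumulate per input element.

```python
import numpy as np

# A fully connected layer is a matrix-vector product: each of the
# 256 outputs needs 784 multiply-accumulate operations, so even one
# small layer costs hundreds of thousands of multiplications per input.
rng = np.random.default_rng(0)
W = rng.standard_normal((256, 784))  # weight matrix (outputs x inputs)
x = rng.standard_normal(784)         # input activation vector

y = W @ x  # the core operation the photonic chip would perform optically

macs = W.shape[0] * W.shape[1]       # multiply-accumulates for this layer
print(macs)  # 200704
```

Stacking many such layers, and evaluating them for millions of inputs, is what makes matrix multiplication the dominant cost in deep-learning inference.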

After years of research, the MIT team has come up with a way of performing these operations optically instead. “This chip, once you tune it, can carry out matrix multiplication with, in principle, zero energy, almost instantly,” Soljačić says. “We’ve demonstrated the crucial building blocks but not yet the full system.”
By way of analogy, Soljačić points out that even an ordinary eyeglass lens carries out a complex calculation (the so-called Fourier transform) on the light waves that pass through it. The way light beams carry out computations in the new photonic chips is far more general but has a similar underlying principle. The new approach uses multiple light beams directed in such a way that their waves interact with each other, producing interference patterns that convey the result of the intended operation. The resulting device is something the researchers call a programmable nanophotonic processor.
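The basic element behind this interference-based computation is the Mach-Zehnder interferometer described in the paper. As a hedged sketch (the idealized 50:50 beamsplitter model below is a textbook simplification, not the authors' device model), two beamsplitters with programmable phase shifters act as a tunable 2×2 unitary matrix on the complex amplitudes of two light beams:

```python
import numpy as np

# Sketch: a Mach-Zehnder interferometer (two ideal 50:50 beamsplitters
# with programmable phase shifters) transforms the complex amplitudes of
# two input beams by a 2x2 unitary matrix -- interference does the math.

BS = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)  # ideal 50:50 beamsplitter

def phase(theta):
    # phase shift applied to one arm of the interferometer
    return np.diag([np.exp(1j * theta), 1.0])

def mzi(theta, phi):
    # splitter, internal phase, splitter, input phase
    return BS @ phase(theta) @ BS @ phase(phi)

T = mzi(0.7, 1.3)
# Any setting of (theta, phi) yields a unitary: optical power is conserved.
print(np.allclose(T @ T.conj().T, np.eye(2)))  # True
```

Tuning the two phases steers light between the outputs, and cascading many such interferometers lets the chip realize larger matrix operations.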


This futuristic drawing shows programmable nanophotonic processors integrated on a printed circuit board and carrying out deep learning computing. Image: RedCube Inc., and courtesy of the researchers

Nature Photonics – Deep learning with coherent nanophotonic circuits


Artificial neural networks are computational network models inspired by signal processing in the brain. These models have dramatically improved performance for many machine-learning tasks, including speech and image recognition. However, today’s computing hardware is inefficient at implementing neural networks, in large part because much of it was designed for von Neumann computing schemes. Significant effort has been made towards developing electronic architectures tuned to implement artificial neural networks that exhibit improved computational speed and accuracy. Here, we propose a new architecture for a fully optical neural network that, in principle, could offer an enhancement in computational speed and power efficiency over state-of-the-art electronics for conventional inference tasks. We experimentally demonstrate the essential part of the concept using a programmable nanophotonic processor featuring a cascaded array of 56 programmable Mach–Zehnder interferometers in a silicon photonic integrated circuit and show its utility for vowel recognition.
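The reason meshes of interferometers suffice for a general neural-network layer is the singular value decomposition: any weight matrix factors into two unitary matrices and a diagonal one. In the paper's architecture the unitaries map onto MZI meshes and the diagonal onto per-channel attenuation or gain; the sketch below only checks the mathematical identity, with illustrative dimensions:

```python
import numpy as np

# Any weight matrix M factors as M = U @ diag(s) @ Vh (the SVD).
# U and Vh are unitary -- implementable by interferometer meshes --
# and diag(s) is per-channel scaling, so an optical circuit composed
# of these pieces can realize an arbitrary linear layer.
rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))   # an arbitrary 4x4 weight matrix

U, s, Vh = np.linalg.svd(M)
M_optical = U @ np.diag(s) @ Vh   # what the cascaded circuit computes

print(np.allclose(M, M_optical))  # True
```

The 56-interferometer processor demonstrated in the paper programs exactly these unitary stages, which is what makes it reconfigurable for different tasks such as the vowel-recognition demonstration.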
