New Photonic Chip Will Push the Limits of Computational Energy Efficiency Ten Million Times Beyond Conventional Chips

A new photonic chip could run optical neural networks 10 million times more efficiently than conventional chips.

The classical physical limit for computing energy is the Landauer limit, which sets a lower bound on the heat dissipated per irreversible bit-erasure operation. Performance below this thermodynamic (Landauer) limit for digital irreversible computation is theoretically possible in the new device, and the proposed accelerator can implement both fully connected and convolutional networks.
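For scale, the Landauer bound at room temperature works out to a few zeptojoules per erased bit. A back-of-the-envelope evaluation using standard physical constants (these numbers are context, not figures from the paper):

```latex
E_{\min} = k_B T \ln 2
         \approx \left(1.38 \times 10^{-23}\,\tfrac{\mathrm{J}}{\mathrm{K}}\right) \times 300\,\mathrm{K} \times 0.693
         \approx 2.9 \times 10^{-21}\,\mathrm{J} \;\;(\approx 0.003\,\mathrm{aJ})
```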

Previous photonic chips had bulky optical components that limited their use to relatively small neural networks. MIT researchers have developed a new photonic accelerator that uses more compact optical components and optical signal-processing techniques to drastically reduce both power consumption and chip area. That allows the chip to scale to neural networks several orders of magnitude larger than its counterparts.

Simulated training of neural networks on the MNIST image-classification dataset suggests the accelerator could theoretically run neural networks at more than 10 million times below the energy-consumption limit of traditional electrical accelerators, and about 1,000 times below the limit of existing photonic accelerators. The researchers are now working on a prototype chip to confirm the results experimentally.

The researchers’ chip relies on a more compact, energy-efficient “optoelectronic” scheme that encodes data with optical signals but uses “balanced homodyne detection” for matrix multiplication. That technique produces a measurable electrical signal after calculating the product of the amplitudes (wave heights) of two optical signals.
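A minimal numerical sketch of that product-forming step, assuming an ideal 50:50 beam splitter and noiseless detectors (the function name and setup are illustrative, not the authors' code):

```python
import numpy as np

def balanced_homodyne(signal_amp, weight_amp):
    """Difference of the two photocurrents behind a 50:50 beam splitter.

    The two output ports see intensities |s + w|^2 / 2 and |s - w|^2 / 2;
    subtracting them leaves 2 * Re(s * conj(w)), i.e. a signal proportional
    to the product of the two pulse amplitudes.
    """
    port_plus = np.abs(signal_amp + weight_amp) ** 2 / 2
    port_minus = np.abs(signal_amp - weight_amp) ** 2 / 2
    return port_plus - port_minus

s, w = 0.8, -0.3                      # two real-valued pulse amplitudes
print(balanced_homodyne(s, w))        # -0.48, i.e. 2 * s * w
```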

Pulses of light encoded with information about the input and output neurons for each neural network layer — which are needed to train the network — flow through a single channel. Separate pulses encoded with information of entire rows of weights in the matrix multiplication table flow through separate channels. Optical signals carrying the neuron and weight data fan out to grid of homodyne photodetectors. The photodetectors use the amplitude of the signals to compute an output value for each neuron. Each detector feeds an electrical output signal for each neuron into a modulator, which converts the signal back into a light pulse.
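Putting the pieces together, here is a toy simulation of that dataflow (my own sketch under the assumptions above, not the paper's model): each detector in the grid interferes one row of weight pulses with the shared stream of input pulses, and its accumulated signal ends up proportional to one entry of the matrix-vector product.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4)          # input-neuron amplitudes, one pulse per time step
W = rng.normal(size=(3, 4))     # weight amplitudes, one row per output neuron

def layer_via_homodyne_grid(x, W):
    """Integrate each detector's homodyne signal over the input pulse train."""
    outputs = np.zeros(W.shape[0])
    for j, xj in enumerate(x):                 # input pulses arrive sequentially
        for i in range(W.shape[0]):            # fan-out to every detector in the grid
            plus = abs(xj + W[i, j]) ** 2 / 2
            minus = abs(xj - W[i, j]) ** 2 / 2
            outputs[i] += plus - minus         # adds 2 * W[i, j] * x[j]
    return outputs / 2                         # remove the factor of 2

print(np.allclose(layer_via_homodyne_grid(x, W), W @ x))   # True
```

In the hardware, the accumulation would happen as charge integrating on each detector; the Python loop only mimics that integration in software.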

The design requires only one channel per input and output neuron, and only as many homodyne photodetectors as there are neurons, not weights. Because there are always far fewer neurons than weights, this saves significant space, so the chip is able to scale to neural networks with more than a million neurons per layer.
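To make the neuron-versus-weight scaling concrete, a rough count under the article's description (the exact layer size is illustrative):

```python
# One fully connected layer with N input and N output neurons.
N = 1_000_000                  # "more than a million neurons per layer"
weights = N * N                # one weight per input/output pair
channels = 2 * N               # one optical channel per input and output neuron
detectors = N                  # one homodyne photodetector per output neuron, not per weight
print(f"{weights:.0e} weights vs {channels:.0e} channels and {detectors:.0e} detectors")
```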

The energy efficiency of AI accelerators is measured by how many joules it takes to perform a single operation of multiplying two numbers, such as during matrix multiplication. Traditional accelerators are measured in picojoules (one-trillionth of a joule). Photonic accelerators are measured in attojoules, one-millionth of a picojoule, making them a million times more efficient.
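The unit arithmetic behind that comparison, spelled out (standard SI prefixes; the per-operation scales are the article's rough characterizations, not measurements):

```python
picojoule = 1e-12    # per-multiply energy scale of traditional electronic accelerators
attojoule = 1e-18    # per-multiply energy scale targeted by photonic accelerators
print(picojoule / attojoule)   # ~1e6 -> a million-fold efficiency gap
```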

Physical Review X – Large-Scale Optical Neural Networks Based on Photoelectric Multiplication

ABSTRACT
Recent success in deep neural networks has generated strong interest in hardware accelerators to improve speed and energy consumption. This paper presents a new type of photonic accelerator based on coherent detection that is scalable to large (N over 1 million) networks and can be operated at high (gigahertz) speeds and very low (subattojoule) energies per multiply and accumulate (MAC), using the massive spatial multiplexing enabled by standard free-space optical components. In contrast to previous approaches, both weights and inputs are optically encoded so that the network can be reprogrammed and trained on the fly. Simulations of the network using models for digit and image classification reveal a “standard quantum limit” for optical neural networks, set by photodetector shot noise. Performance below the thermodynamic (Landauer) limit for digital irreversible computation is theoretically possible in this device. The proposed accelerator can implement both fully connected and convolutional networks. They also present a scheme for backpropagation and training that can be performed in the same hardware. This architecture will enable a new class of ultralow-energy processors for deep learning.

7 thoughts on “New Photonic Chip Will Push the Limits of Computational Energy Efficiency Ten Million Times Beyond Conventional Chips”

  1. It can’t. And state of the art computer hardware is more powerful…we are just inept programmers. Self-learning is demonstrating that very effectively…though that is still being limited by our programming.

    Our neurons fire at a maximum of 400 Hz and can’t keep that up or they will burn out. Also only a small fraction can do that at the same time or your brain will run out of energy and you will die. Though it would be pointless anyway as that would be a grand mal seizure…which is not the most productive thing in the world. A transistor can operate at maybe a dozen GHz. Also neurons are locked into a configuration with very limited flexibility, and each area is dedicated to do only one task. A computer chip can run any program and switch between them hundreds of times a second. And the quantity of data it can process is vastly larger than a brain can process. And it does not fail to remember. We can hold maybe 8 or 10 things in mind at the same time…computers can hold obscene amounts in RAM…and accurately.

    Human brains can do one thing very impressively. We are masters of conceited self-delusion.

    That doesn’t mean I do not think humans are of profound value…just that…that value is not in the neuron.

  2. Tron . . . where the speed of light might not be fast enough!
    — Early 1980’s movie poster line.

  3. The brain is made of (wet) nanomachines.

    It can perform computations, information storage and retrieval with great energy efficiency (within the body’s energy budget), albeit not as quickly as a silicon computer and with overall less reliability (most of us haven’t perfect memory).

    But it’s good enough for our survival.

    I imagine we could also get really good storage and performance per watt from organic DNA origami nanocomputers, but not as fast nor error free either.

  4. now if they could figure out how to turn a logic gate on and off at the speed of light, they might have something there…

  5. The trick is to figure out how the brain, using less than 100 watts, can outperform supercomputers. 100 watts isn’t even sufficient to run the cooling fans on a supercomputer.
