Deep learning chips could outperform graphics processors by up to 150 times on some tasks, and new neuromorphic chips will learn from mistakes

2014 will see commercial neural network deep learning chips and commercial neuromorphic chips. Deep learning chips could outperform graphics processors by up to 150 times on some tasks, and new neuromorphic chips can tolerate faults, adapt, and learn from mistakes.

Purdue University’s deep learning co-processor is designed above all else to run multilayered neural networks and to put them to work on streaming imagery. In tests, the prototype has proven about 15 times as efficient as a graphics processor at the same task, and project leader Eugenio Culurciello believes that improvements to the system could make it 10 times more efficient still, which is where the 150-fold figure comes from.
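
To make the workload concrete, here is a minimal NumPy sketch of the kind of computation such a co-processor accelerates: a small multilayered convolutional network run over a stream of frames. The layer shapes and random filters are illustrative assumptions, not details of the Purdue design; a trained network would load real weights.

```python
import numpy as np

def conv2d(image, kernels):
    """Naive 'valid' convolution producing one feature map per kernel."""
    kh, kw = kernels.shape[1:]
    h, w = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((len(kernels), h, w))
    for k, kern in enumerate(kernels):
        for i in range(h):
            for j in range(w):
                out[k, i, j] = np.sum(image[i:i + kh, j:j + kw] * kern)
    return np.maximum(out, 0.0)  # ReLU non-linearity between layers

rng = np.random.default_rng(0)
layer1 = rng.standard_normal((4, 5, 5)) * 0.1  # illustrative random filters
layer2 = rng.standard_normal((8, 3, 3)) * 0.1

def classify(frame):
    maps = conv2d(frame, layer1)
    maps = conv2d(maps.sum(axis=0), layer2)  # simplified single-channel stacking
    return maps.reshape(len(maps), -1).mean(axis=1)  # crude per-class scores

# Streaming loop: the same network applied frame after frame.
for _ in range(3):
    frame = rng.random((32, 32))        # stand-in for a camera frame
    print(classify(frame).argmax())     # index of the strongest response
```

On a general-purpose processor those inner multiply-accumulate loops dominate the cost; hardware wired for exactly this pattern is what buys the efficiency gain.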

The prototype is much less powerful than systems like Google’s cat detector, but it shows how new forms of hardware could make it possible to use the power of deep learning more widely. “There’s a need for this,” says Culurciello. “You probably have a collection of several thousand images that you never look at again, and we don’t have a good technology to analyze all this content.”

Devices such as Google Glass could also benefit from the ability to understand the abundant pictures and videos they are capturing, he says. A person’s images and videos might be searchable using text: “red car” or “sunny day with Mom,” for example. Likewise, novel apps could be developed that take action when they recognize particular people, objects, or scenes.
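
As a sketch of what text-searchable imagery could look like at the application layer, the snippet below indexes per-image tags and matches queries against them. The tag sets are hard-coded stand-ins for the output of a real recognition model, and the matching rule (every query word must appear as a tag) is an assumption for illustration.

```python
# Hypothetical tag index: in practice a recognizer would emit these per image.
library = {
    "img_001.jpg": {"red", "car", "street"},
    "img_002.jpg": {"sunny", "day", "mom", "park"},
    "img_003.jpg": {"car", "blue", "garage"},
}

def search(query):
    """Return images whose tags contain every word of the query."""
    words = set(query.lower().split())
    return [name for name, tags in library.items() if words <= tags]

print(search("red car"))    # ['img_001.jpg']
print(search("sunny day"))  # ['img_002.jpg']
```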

HRL Laboratories has a more extreme solution: designing chips with silicon neurons and synapses that mimic those of real brains.
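
A common building block for such silicon neurons is the leaky integrate-and-fire model, which the NumPy sketch below simulates. The time constant, threshold, and input current are illustrative assumptions, not HRL’s actual circuit parameters.

```python
import numpy as np

def simulate_lif(current, dt=1e-3, tau=0.02, v_rest=0.0, v_thresh=1.0):
    """Leaky integrate-and-fire neuron: the membrane potential leaks toward
    rest, integrates input current, and emits a spike at threshold."""
    v, spike_times = v_rest, []
    for t, i_in in enumerate(current):
        v += dt / tau * (v_rest - v) + dt * i_in  # leak plus integration
        if v >= v_thresh:
            spike_times.append(t * dt)            # record the spike
            v = v_rest                            # reset after firing
    return spike_times

# 100 ms of constant input current (arbitrary units) produces regular spikes.
print(simulate_lif(np.full(100, 60.0)))
```

The appeal for hardware is that this behavior can map onto a handful of analog components, such as a capacitor for the membrane and a comparator for the threshold, rather than billions of precisely clocked transistors.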

The Purdue group’s solution doesn’t represent such a fundamental rethinking of how computer chips operate. That may limit how efficiently its designs can run deep learning neural networks, but it also makes them easier to get into real-world use. Culurciello has already started a company, called TeraDeep, to commercialize his designs.

IBM and Qualcomm, as well as a Stanford research team, have already designed neuromorphic processors, and Qualcomm has said that it is coming out with a commercial version in 2014, which is expected to be used largely for further development. Moreover, many universities are now focused on this new style of computing. This fall the National Science Foundation financed the Center for Brains, Minds and Machines, a new research center based at the Massachusetts Institute of Technology, with Harvard and Cornell.

One great advantage of the new approach is its ability to tolerate glitches. Traditional computers are precise, but they cannot work around the failure of even a single transistor. With the biologically inspired designs, the algorithms are ever-changing, allowing the system to continuously adapt and work around failures to complete tasks.
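
A toy illustration of that kind of adaptation, under the assumption that learning simply keeps running after a fault: in the NumPy sketch below, one hidden unit of a small network “dies” partway through online gradient training, the error jumps, and the remaining units are retrained to take over its share of the work. Nothing here reflects a specific neuromorphic chip; it only shows why an ever-adapting algorithm can route around a dead component where fixed logic cannot.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((200, 4))
y = X.sum(axis=1, keepdims=True)        # simple target: the sum of the inputs

W1 = rng.standard_normal((4, 8)) * 0.5  # input -> hidden weights
W2 = rng.standard_normal((8, 1)) * 0.5  # hidden -> output weights
dead = None                             # index of the "failed" unit, if any

for step in range(2001):
    if step == 1000:
        dead = 0                        # transistor-style fault: unit 0 dies
    h = np.maximum(X @ W1, 0.0)         # ReLU hidden layer
    if dead is not None:
        h[:, dead] = 0.0                # the failed unit outputs nothing
    err = h @ W2 - y
    if step % 500 == 0:
        print(f"step {step}: mse {np.mean(err ** 2):.4f}")
    # Plain gradient-descent updates; because learning never stops, the
    # surviving units absorb the dead unit's share of the computation.
    grad_h = (err @ W2.T) * (h > 0)
    W2 -= 0.1 * h.T @ err / len(X)
    W1 -= 0.1 * X.T @ grad_h / len(X)
```

Running it shows the mean squared error falling, spiking when the unit dies at step 1000, then falling again as the network adapts around the failure.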
