What are the Limits of Deep Learning? Going Beyond Deep Learning

Glowing stickers can confuse deep learning systems. Deep learning pioneer Geoffrey Hinton believes that simple adversarial attacks like these expose real flaws in deep learning.
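One of the simplest adversarial attacks of this kind is the fast gradient sign method: nudge every input feature slightly in the direction that increases the model's loss. The sketch below is purely illustrative; the function names and toy numbers are not from any real attack library, and a real attack would compute the gradient from an actual model.

```python
# Illustrative sketch of an FGSM-style (gradient-sign) adversarial
# perturbation. Names and values are hypothetical; a real attack would
# obtain `grad` by backpropagating the model's loss to the input.

def fgsm_perturb(x, grad, eps):
    """Shift each input feature by eps in the direction (sign of the
    gradient) that increases the model's loss."""
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

# A perturbation this small is often invisible to humans, yet it can
# flip a classifier's prediction.
adversarial = fgsm_perturb([1.0, 2.0, 3.0], [0.5, -0.3, 0.0], eps=0.1)
```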

Deep Learning flaws
* The systems need 10,000+ examples to learn a concept like "cow"; humans need only a handful of examples.
* Deep learning systems cannot explain how they arrived at an answer.
* Deep learning lacks common sense. This makes the systems fragile, and when they make errors, the errors can be very large.

There is a growing feeling in the field that deep learning’s shortcomings require some fundamentally new ideas.

PNAS – What are the limits of deep learning?

One solution is simply to expand the scope of the training data. In an article published in May 2018, Botvinick’s DeepMind group studied what happens when a network is trained on more than one task. They found that as long as the network has enough “recurrent” connections running backward from later layers to earlier ones—a feature that allows the network to remember what it’s doing from one instant to the next—it will automatically draw on the lessons it learned from earlier tasks to learn new ones faster. This is at least an embryonic form of human-style “meta-learning,” or learning to learn, which is a big part of our ability to master things quickly.
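The role of those recurrent connections can be sketched with a toy recurrent cell: a hidden state persists from one step to the next, so earlier inputs shape the response to later ones. This is a minimal illustration of recurrence itself, not DeepMind's architecture; the weights here are fixed, whereas a real network would learn them.

```python
import math

# Toy sketch of why recurrence matters: the hidden state h persists
# across inputs, so context from earlier steps shapes later outputs.
# w_in and w_rec are fixed for illustration; a real network learns them.

def recurrent_step(x, h, w_in=0.5, w_rec=0.9):
    return math.tanh(w_in * x + w_rec * h)

def run_sequence(xs):
    h = 0.0
    history = []
    for x in xs:
        h = recurrent_step(x, h)
        history.append(h)
    return history

# The same input (1.0) produces different outputs at different points
# in the sequence, because the hidden state carries context forward.
outs = run_sequence([1.0, 1.0, 1.0])
```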

A more radical possibility is to give up trying to tackle the problem at hand by training just one big network and instead have multiple networks work in tandem. In June 2018, the DeepMind team published an example they call the Generative Query Network architecture, which harnesses two different networks to learn its way around complex virtual environments with no human input. One, dubbed the representation network, essentially uses standard image-recognition learning to identify what’s visible to the AI at any given instant. The generation network, meanwhile, learns to take the first network’s output and produce a kind of 3D model of the entire environment—in effect, making predictions about the objects and features the AI doesn’t see. For example, if a table only has three legs visible, the model will include a fourth leg with the same size, shape, and color.
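The two-network division of labor can be caricatured as follows. Everything in this sketch is a drastically simplified stand-in: GQN's actual representation and generation networks are learned neural models operating on images, not the hand-written dictionary logic below, and the "four legs" prior is hard-coded here purely to mirror the table example.

```python
# Drastically simplified caricature of the GQN division of labor.
# represent() stands in for the representation network: it encodes each
# observed view and aggregates them into one scene representation.
# generate() stands in for the generation network: from that scene
# representation it predicts properties the AI never directly saw.

def represent(observations):
    # each observation is a dict of features visible from one viewpoint
    scene = {}
    for obs in observations:
        for key, value in obs.items():
            scene[key] = max(scene.get(key, 0), value)
    return scene

def generate(scene, legs_per_table=4):
    # fill in what was never observed, using a prior about tables:
    # if only three legs were ever visible, still predict four
    predicted = dict(scene)
    if "table_legs_visible" in scene:
        predicted["table_legs_total"] = max(
            scene["table_legs_visible"], legs_per_table)
    return predicted

views = [{"table_legs_visible": 2}, {"table_legs_visible": 3}]
model = generate(represent(views))
```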

An even more radical approach is to stop asking the networks to learn everything from scratch for every problem.

A potentially powerful new approach is known as the graph network. These are deep-learning systems that have an innate bias toward representing things as objects and relations.
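The core operation behind that objects-and-relations bias is message passing: nodes stand for objects, edges for relations, and each node updates its features from its neighbors'. The sketch below is a toy with scalar features and fixed averaging; real graph networks (e.g. as described in the DeepMind graph-networks work) use learned message and update functions.

```python
# Toy sketch of one message-passing step in a graph network.
# Nodes represent objects and edges represent relations; each node
# averages the messages arriving along its incoming edges, then blends
# them with its own feature. Real graph networks learn these functions.

def message_passing_step(node_feats, edges):
    incoming = {i: [] for i in range(len(node_feats))}
    for src, dst in edges:
        incoming[dst].append(node_feats[src])  # message along a relation
    updated = []
    for i, feat in enumerate(node_feats):
        msgs = incoming[i]
        if msgs:
            updated.append((feat + sum(msgs) / len(msgs)) / 2)
        else:
            updated.append(feat)  # isolated nodes keep their feature
    return updated

# Node 1 moves toward the average of its neighbors (nodes 0 and 2).
feats = message_passing_step([0.0, 6.0, 8.0], [(0, 1), (2, 1)])
```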

