The Future of AI Research is Never Ending Storage and Cheap, Small Processors

Dan Belov from DeepMind gave an overview of AI research. He argues that for AI we should abandon our complex, reliable processors in favor of smaller, less reliable, but much cheaper ones.

Machine learning is about creating new knowledge by using present data to solve a large diversity of novel problems.

Recipes are created for training these programs.

Supervised deep learning (DL) infers knowledge from observations.
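As a minimal illustration of inferring knowledge from observations, the sketch below fits a line to labeled data points with plain gradient descent. The data and hyperparameters are illustrative, not from the talk:

```python
# Minimal supervised learning: infer a rule (here, a line y = w*x + b)
# from labeled observations. Plain-Python gradient descent on squared error.
data = [(0, 1.0), (1, 3.0), (2, 5.0), (3, 7.0)]  # generated by y = 2x + 1

w, b = 0.0, 0.0
for _ in range(2000):
    # Gradients of mean squared error with respect to w and b.
    gw = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    gb = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= 0.05 * gw
    b -= 0.05 * gb

print(round(w, 2), round(b, 2))  # → 2.0 1.0
```

The learned parameters recover the rule that generated the observations, which is the whole game in the supervised setting.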

The Iron Law of Deep Learning: More is More.

This is because more diverse data, bigger networks, and more compute all combine to give better results.

On average deep learning networks are tripling in size every year.
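To put that trend in perspective, here is a back-of-the-envelope compounding calculation. The starting size and horizon are illustrative assumptions, not figures from the talk:

```python
# Compound growth of network size under the "tripling every year" trend.
def projected_size(initial_params: float, years: int, factor: float = 3.0) -> float:
    """Project parameter count after `years` of compounding by `factor`."""
    return initial_params * factor ** years

# A 100-million-parameter network tripling yearly for 5 years: 100e6 * 3**5
print(projected_size(100e6, 5))  # → 24300000000.0, i.e. ~24 billion parameters
```

Five years of tripling is a 243x increase, which is why compute and storage budgets dominate the discussion.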

We have scaled up reinforcement learning in robotics.

Another rule is to never throw any data away, no matter how bad the data is.

Belov proposes never-ending storage for AI, because we never want to throw anything away.

We keep all failed experiments, random policies, and inferences.

Start the repository with the best data possible.

Humans annotate random attempts to indicate where the rewards are.

Record everything a robot does; all of this data is stored and used for future learning iterations. Tesla has recorded about 4 billion Autopilot miles, while Waymo (spun out of Google) is at about 20 million miles.
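The "never throw anything away" idea can be sketched as an append-only experience log that retains failures alongside successes. The class and field names below are illustrative, not DeepMind's actual infrastructure:

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class ExperienceStore:
    """Append-only log: bad data and failed runs are kept, never deleted."""
    records: list = field(default_factory=list)

    def append(self, observation: Any, action: Any, reward: float, ok: bool) -> None:
        # Everything is recorded, including failures (ok=False).
        self.records.append({"obs": observation, "act": action,
                             "reward": reward, "ok": ok})

    def failures(self) -> list:
        """Failure cases are retained so models can learn what going wrong looks like."""
        return [r for r in self.records if not r["ok"]]

store = ExperienceStore()
store.append("gripper open", "close gripper", 1.0, ok=True)
store.append("gripper open", "random jitter", 0.0, ok=False)  # bad data is kept too
print(len(store.records), len(store.failures()))  # → 2 1
```

Keeping the failures queryable is what makes the next point possible: failure examples become training signal rather than garbage.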

Systems need to be trained on clean examples, but they also need bad data to observe failure.

Understanding failure is critical to learn good behavior.

Deep learning systems are eventually able to outperform humans.

The search spaces for chess and Go were too large for brute-force search.
DeepMind used a value network to reduce the depth of the search tree. The value network estimated the likelihood of winning from a given position, learned from previous games.
DeepMind used policy networks to indicate which moves are most promising. This reduced the breadth of the search tree.
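The two-network idea can be sketched as a depth- and breadth-limited game-tree search: the policy prunes which moves are considered (breadth), and the value estimate replaces searching to the end of the game (depth). Everything below is an illustrative stand-in, not AlphaGo's actual search: the "networks" are simple heuristics and the game is a toy race to 10:

```python
# Toy negamax search guided by stand-in "policy" and "value" functions.
# In AlphaGo these are deep networks; here they are heuristics so the sketch
# stays runnable. Game: players alternately add 1-3; whoever reaches 10 wins.
TARGET = 10

def legal_moves(state):
    return [m for m in (1, 2, 3) if state + m <= TARGET]

def value(state):
    """Stand-in value network: estimated chance that the player to move wins
    from this position. Cutting search off with this estimate reduces the
    *depth* of the search tree."""
    return 1.0 if (TARGET - state) % 4 != 0 else 0.0

def policy(state, moves, breadth=2):
    """Stand-in policy network: rank moves by how bad the resulting position
    looks for the opponent and keep only the top `breadth`, reducing the
    *breadth* of the search tree."""
    return sorted(moves, key=lambda m: value(state + m))[:breadth]

def search(state, depth):
    """Negamax value for the player to move, with policy pruning and a
    value cutoff at depth 0."""
    if state == TARGET:
        return 0.0              # the previous player just won
    if depth == 0:
        return value(state)     # value estimate instead of deeper search
    return max(1.0 - search(state + m, depth - 1)
               for m in policy(state, legal_moves(state)))

print(search(0, depth=4))  # → 1.0 (the first player can force a win)
```

The design point is that neither cut is free: the value estimate can be wrong, and the policy can prune the best move, so both networks must be good for the truncated search to match exhaustive search.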

Here is a video of an older 2018 talk by Belov.

SOURCE: Hot Chips, Sander Olson, DeepMind's Belov, Primeur Magazine
Written By Brian Wang,

2 thoughts on “The Future of AI Research is Never Ending Storage and Cheap, Small Processors”

  1. Hmm… AGI improvement rate/ early success criteria: hardware vs software, precious/complex hardware vs cheap/simple/small hardware, error-checking software vs optimization software — we are in such early stages with such a diversity of approaches. Another article indicated a goal as: first artificial scientist with own molecular biology lab, designing own experiments, and reporting in straight-forward human language. The approaches -> an end result. Exciting.

  2. Unreliable components may be an advantage, supplying an analog to random sexual preferences occasionally being useful, thus forwarding fitness evolution. Turns out to be the key.
