Robots That Learn by Doing – Imagining Pictures and Goals #emdigitaltech

Sergey Levine is an Assistant Professor at UC Berkeley.
Robots That Learn by Doing

Sergey is talking about reinforcement learning.

Robots can keep learning from their environments with reinforcement learning.

We need algorithms for scalable off-policy reinforcement learning.
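Off-policy here means a robot can improve its target policy using experience collected by a different behavior policy, such as exploratory actions or old logged data. As a minimal illustrative sketch (not Levine's actual algorithm), tabular Q-learning is off-policy because its update bootstraps from the greedy max over next actions regardless of which action the exploring robot actually takes. The toy chain world below is an assumption for illustration:

```python
import random

def q_learning_chain(n_states=5, episodes=500, alpha=0.5, gamma=0.9,
                     epsilon=0.3, seed=0):
    # Toy chain MDP: states 0..n_states-1, actions 0 (left) / 1 (right),
    # reward 1.0 for reaching the rightmost state, else 0.
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # Behavior policy: epsilon-greedy, i.e. exploratory.
            if rng.random() < epsilon:
                a = rng.choice([0, 1])
            else:
                a = 0 if Q[s][0] >= Q[s][1] else 1
            s_next = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
            r = 1.0 if s_next == n_states - 1 else 0.0
            # Off-policy target: max over next actions (greedy), independent
            # of what the behavior policy will actually do next.
            Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
            s = s_next
    return Q

Q = q_learning_chain()
# After training, the greedy policy points right in every non-terminal state.
```

Because the update target ignores the exploratory behavior, the same mechanism lets a robot learn from any previously collected experience, which is what makes off-policy methods attractive at scale.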

Sergey showed examples of a robot teaching itself to walk forward in about one hour. Robots were also able to teach themselves to grasp. He is now trying to mimic how children learn to play with blocks.

Reinforcement learning needs goals so that it can target its learning. He is using reinforcement learning with imagined goals: the robot creates target pictures of what it wants to build with blocks, then tries to make the world (the blocks) match the imagined pictures.
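This resembles goal-conditioned RL with imagined goals from Levine's lab, where a learned generative model "imagines" goal images and the reward measures how closely the current observation matches the imagined goal. A minimal sketch under that assumption; the encoder and goal sampler below are hypothetical stand-ins for the learned models:

```python
import math
import random

def encode(observation):
    # Hypothetical encoder: in imagined-goal methods this is a learned model
    # (e.g., a VAE) mapping raw images to latent vectors; here it is identity.
    return list(observation)

def sample_imagined_goal(rng, dim=2):
    # Hypothetical goal sampler: real systems sample from the latent prior of
    # the generative model, producing an "imagined picture" of the blocks.
    return [rng.uniform(-1.0, 1.0) for _ in range(dim)]

def imagined_goal_reward(observation, imagined_goal):
    # Reward is the negative latent-space distance to the imagined goal, so
    # the robot is rewarded for making the world match the picture it imagined.
    return -math.dist(encode(observation), encode(imagined_goal))

rng = random.Random(0)
goal = sample_imagined_goal(rng)
# The reward is maximal (zero) exactly when the observation matches the goal.
```

The key design point is that no human specifies the goal: the robot proposes its own goals and grades itself, which is what lets it practice autonomously.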

Sergey said we need to deploy robots into the real world at medium to large scale in order to gather enough experience for the robots to learn quickly.

Nextbigfuture notes that there are already large-scale deployments of robots for home vacuuming, self-driving cars, warehouses, and other applications.

He gave a prior talk on these topics several months ago, which is available online.

Sergey Levine is an Assistant Professor in the Department of Electrical Engineering and Computer Sciences at UC Berkeley.

SOURCE- Live reporting from EmTech Digital 2019

Written By Brian Wang. Nextbigfuture.com