Update on Google DeepMind Artificial Intelligence

Demis Hassabis gave a talk about his Google DeepMind project. In January 2014, DeepMind was acquired by Google for a reported £400 million (approximately $625 million), and Hassabis is now Vice President of Engineering at Google, leading its general AI projects.

AI researcher Ben Goertzel gave his review of the talk:

It’s a well-delivered, clear and concise talk, but so far as I can tell there’s nothing big and new there. Demis describes Deep Mind’s well-known work on reinforcement learning and video games, and then mentions their (already published) work on Neural Turing Machines… Nothing significant seems to be mentioned beyond what has already been published and publicized previously…

Demis, Shane Legg and many other Deep Mind researchers are known to me to be brilliant people with a true passion for AGI. What they’re doing is fantastic! However, currently none of their results look anywhere close to human-level AGI; and the design details that they’ve disclosed don’t come anywhere near to being a comprehensive plan for building an AGI…

Of course, 100 smart guys working together toward pure and applied AGI, with savvy leadership and Google’s resources at their disposal, is nothing to be sneered at….

For now, there are multiple different approaches to AGI, with various theoretical justifications and limited-scope practical achievements associated with them; and researchers place their confidence in one approach or another based on intuition as much as evidence, since the hard evidence is incomplete and fragmentary.

Demis Hassabis leads what is now called Google DeepMind. It is still headquartered in London and still has “solve intelligence” as its mission statement. The group was roughly 75 people strong when it joined Google, and Hassabis has said he aims to hire around 50 more. Around 75 percent of the group works on fundamental research; the rest form an “applied research team” that looks for opportunities to apply DeepMind’s techniques to existing Google products.

Over the next four years, DeepMind’s technology could be used to refine YouTube’s recommendations or improve the company’s mobile voice search.

They dream of creating “AI scientists” that could do things like generate and test new hypotheses about disease in the lab. When prodded, Hassabis also says that DeepMind’s software could be useful in robotics, an area in which Google has recently invested heavily.

DeepMind has combined deep learning with a technique called reinforcement learning, which is inspired by the work of animal psychologists such as B.F. Skinner. This led to software that learns by taking actions and receiving feedback on their effects, as humans or animals often do.
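To make the idea concrete, here is a minimal sketch of reinforcement learning in its classic tabular Q-learning form, applied to a made-up “corridor” task. Everything in it (the toy environment, the parameter values) is illustrative only; it is not DeepMind’s implementation, whose key step was replacing the value table below with a deep neural network that reads raw screen pixels.

```python
# Minimal sketch of reinforcement learning (tabular Q-learning) on a toy
# "corridor" task. Illustrative only, not DeepMind's code.
import random

n_states, n_actions = 5, 2                          # 5 positions; actions: 0 = left, 1 = right
Q = [[0.0] * n_actions for _ in range(n_states)]    # action-value estimates
alpha, gamma, epsilon = 0.1, 0.9, 0.1               # learning rate, discount, exploration rate

def step(state, action):
    """Toy environment: move left or right; reward 1.0 for reaching the far end."""
    next_state = min(state + 1, n_states - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if next_state == n_states - 1 else 0.0
    done = next_state == n_states - 1
    return next_state, reward, done

for episode in range(500):
    state, done = 0, False
    for _ in range(100):                            # cap episode length
        if done:
            break
        # Epsilon-greedy: usually take the best-looking action, sometimes explore.
        if random.random() < epsilon:
            action = random.randrange(n_actions)
        else:
            best = max(Q[state])
            action = random.choice([a for a in range(n_actions) if Q[state][a] == best])
        next_state, reward, done = step(state, action)
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        target = reward if done else reward + gamma * max(Q[next_state])
        Q[state][action] += alpha * (target - Q[state][action])
        state = next_state

print(Q)   # after training, "right" (action 1) has the higher value in every state
```

The agent is never told the rules of the corridor; it discovers that moving right pays off purely from the feedback it receives, which is the same trial-and-error principle behind the Atari results described below.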

In 2013, DeepMind researchers showed off software that had learned to play three classic Atari games – Pong, Breakout and Enduro – better than an expert human. The software wasn’t programmed with any information on how to play; it was equipped only with access to the controls and the display, knowledge of the score, and an instinct to make that score as high as possible. The program became an expert gamer through trial and error.

No one had ever demonstrated software that could learn to master such a complex task from scratch.
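The setup can be pictured as a simple interaction loop: screen pixels and score changes go in, joystick actions come out. In the sketch below, the DummyAtari class and the random action choice are placeholders standing in for the real emulator interface and the trained network; they are not DeepMind’s code.

```python
# Sketch of the trial-and-error setup described above: the agent sees only the
# screen pixels, the available controls, and the score, and must maximize the score.
# "DummyAtari" and the random policy are placeholders, not DeepMind's emulator or network.
import random

class DummyAtari:
    """Stand-in for an Atari emulator: exposes actions, screen frames, and score changes."""
    def __init__(self, n_actions=4):
        self.n_actions = n_actions
    def reset(self):
        return [[0] * 84 for _ in range(84)]                       # blank 84x84 "screen"
    def step(self, action):
        screen = [[random.randint(0, 255) for _ in range(84)] for _ in range(84)]
        reward = random.choice([0, 0, 0, 1])                       # change in game score
        done = random.random() < 0.01                              # game over
        return screen, reward, done

env = DummyAtari()
screen, total_score = env.reset(), 0
for t in range(1000):
    # A trained agent would feed the screen into its network and pick the
    # highest-valued action; here a random policy stands in for that choice.
    action = random.randrange(env.n_actions)
    screen, reward, done = env.step(action)
    total_score += reward              # the score change is the only learning signal
    if done:
        screen = env.reset()
print("total score:", total_score)
```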

Arxiv – Playing Atari with Deep Reinforcement Learning