AlphaGo creator targets vastly better smartphones within 5 years, plus improved robotics and healthcare

The Verge interviewed Demis Hassabis, who led the creation of Google DeepMind's AlphaGo program, which has won a best-of-five series of games against Lee Sedol, the world's top Go player of the past decade.

Q. Is there potential for another AI-vs-game showdown in the future?

I think for perfect information games, Go is the pinnacle. Certainly there are still other top Go players to play. There are other games — no-limit poker is very difficult, multiplayer has its challenges because it’s an imperfect information game. And then there are obviously all sorts of video games that humans play way better than computers, like StarCraft is another big game in Korea as well. Strategy games require a high level of strategic capability in an imperfect information world — “partially observed,” it’s called. The thing about Go is obviously you can see everything on the board, so that makes it slightly easier for computers.

Q. How did your prior career in video games lead to this?

DeepMind was always my ultimate goal. I’d been planning it for more than 20 years, in a way. If you view all the things I’ve done through a prism of eventually starting an AI effort, then it kind of makes sense what I chose to do. If you’re familiar with my stuff at Bullfrog and so on, you’ll know that AI was a core part of everything I wrote and was involved with, and obviously Peter Molyneux’s games are all AI games as well. Working on Theme Park when I was 16 or 17 years old was quite a seminal moment for me in terms of realizing how powerful AI could be if we really tried to extend it. We sold millions of copies, and so many people enjoyed playing that game, and it was because of the AI that adapted to the way you played. We took that forward and I tried to extend that for the rest of my games career, and then I switched out of that back to academia and neuroscience because I felt around the mid-2000s that we’d gone as far as we could trying to sneak in AI research through the back door while you’re actually supposed to be making a game. And that’s hard to do, because publishers just want the game, right?

Q. The main future uses of AI that you’ve brought up this week have been healthcare, smartphone assistants, and robotics. Let’s unpack some of those. To bring up healthcare, IBM with Watson has done some things with cancer diagnosis for example — what can DeepMind bring to the table?

Well, it’s early days in that. We announced a partnership with the NHS a couple of weeks ago but that was really just to start building a platform that machine learning can be used in. I think Watson’s very different than what we do, from what I understand of it — it’s more like an expert system, so it’s a very different style of AI. I think the sort of things you’ll see this kind of AI do is medical diagnosis of images and then maybe longitudinal tracking of vital signs or quantified self over time, and helping people have healthier lifestyles. I think that’ll be quite suitable for reinforcement learning.

Q. With the NHS partnership, you’ve announced an app which doesn’t seem to use much in the way of AI or machine learning. What’s the thought behind that? Why is the NHS using this rather than software from anybody else?

Well, NHS software as I understand it is pretty terrible, so I think the first step is trying to bring that into the 21st century. They’re not mobile, they’re not all the things we take for granted as consumers today. And it’s very frustrating, I think, for doctors and clinicians and nurses and it slows them down. So I think the first stage is to help them with more useful tools, like visualizations and basic stats. We thought we’ll just build that, we’ll see where we are, and then more sophisticated machine learning techniques could then come into play.

Q. AlphaGo got off the ground by being taught a lot of game patterns — how is that applicable to smartphones where the input is so much more varied?

Yeah, so there’s tons of data on that, you could learn from that. Actually, the AlphaGo algorithm, this is something we’re going to try in the next few months — we think we could get rid of the supervised learning starting point and just do it completely from self-play, literally starting from nothing. It’d take longer, because the trial and error when you’re playing randomly would take longer to train, maybe a few months. But we think it’s possible to ground it all the way to pure learning.
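Hassabis's remark about dropping the supervised-learning starting point describes pure self-play reinforcement learning: the program begins by playing randomly against itself and improves using only the outcomes of those games. The sketch below is a toy illustration of that idea on tic-tac-toe with a simple tabular value function; it is not AlphaGo's actual algorithm (which combines deep neural networks with Monte Carlo tree search), and every name and parameter in it is our own, chosen purely for illustration.

```python
# Toy sketch of learning a game purely from self-play, with no human game
# records as a starting point. Tabular value learning on tic-tac-toe; an
# illustration of the idea, not AlphaGo's method.
import random
from collections import defaultdict

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
         (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

values = defaultdict(float)  # learned value of each position, from X's point of view
EPSILON = 0.1                # exploration rate: how often a random move is tried
ALPHA = 0.2                  # learning rate for the value update

def choose_move(board, player):
    moves = [i for i, s in enumerate(board) if s == " "]
    if random.random() < EPSILON:
        return random.choice(moves)          # explore with a random move
    def score(m):
        nxt = board[:m] + player + board[m + 1:]
        v = values[nxt]
        return v if player == "X" else -v    # O wants to minimize X's value
    return max(moves, key=score)             # otherwise play the best-valued move

def self_play_game():
    board, player, history = " " * 9, "X", []
    while True:
        move = choose_move(board, player)
        board = board[:move] + player + board[move + 1:]
        history.append(board)
        w = winner(board)
        if w or " " not in board:
            # Final outcome from X's point of view: +1 win, -1 loss, 0 draw.
            reward = {"X": 1.0, "O": -1.0, None: 0.0}[w]
            # Push that outcome back onto every position visited in the game.
            for state in history:
                values[state] += ALPHA * (reward - values[state])
            return
        player = "O" if player == "X" else "X"

for _ in range(20000):   # trial and error entirely against itself
    self_play_game()
```

Scaling this idea to Go is of course the hard part: the table of positions is replaced by a deep network and the exploration by tree search, which is why Hassabis estimates months of training rather than minutes.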

Q. Could you give a timeframe for when some of these things might start making a noticeable difference to the phones that people use?

I think in the next two to three years you’ll start seeing it. I mean, it’ll be quite subtle to begin with, certain aspects will just work better. Maybe looking four to five, five-plus years away you’ll start seeing a big step change in capabilities.

Q. What are the most immediate use cases for learning robots that you can see?

We haven’t thought much about that, actually. Obviously the self-driving cars are kind of robots but they’re mostly narrow AI currently, although they use aspects of learning AI for the computer vision — Tesla uses pretty much standard off-the-shelf computer vision technology which is based on deep learning. I’m sure Japan’s thinking a lot about things like elderly care bots or household cleaning bots, which I think would be extremely useful for society, especially in demographics with an aging population, which I think is quite a pressing problem.

Q. So what are your far-off expectations for how humans, robots, and AIs will interact in the future? Obviously people’s heads go to pretty wild sci-fi places.

I don’t think much about robotics myself personally. What I’m really excited to use this kind of AI for is science, and advancing that faster. I’d like to see AI-assisted science where you have effectively AI research assistants that do a lot of the drudgery work and surface interesting articles, find structure in vast amounts of data, and then surface that to the human experts and scientists who can make quicker breakthroughs. I was giving a talk at CERN a few months ago; obviously they create more data than pretty much anyone on the planet, and for all we know there could be new particles sitting on their massive hard drives somewhere and no-one’s got around to analyzing that because there’s just so much data. So I think it’d be cool if one day an AI was involved in finding a new particle.

SOURCE – The Verge