AI beats European Go champion, but the exciting aspect is that it trained itself to get better and the same approach can learn many other games

The first classic game mastered by a computer was noughts and crosses (also known as tic-tac-toe), in 1952, as a PhD candidate’s project. Checkers fell in 1994. Chess was tackled by Deep Blue in 1997. The success isn’t limited to board games, either: IBM’s Watson won first place on Jeopardy! in 2011, and in 2014 DeepMind’s own algorithms learned to play dozens of Atari games from nothing but the raw pixel inputs. But one game has thwarted AI research thus far: the ancient game of Go. Invented in China over 2,500 years ago, Go is played by more than 40 million people worldwide.

Highlights

  • DeepMind’s artificial intelligence beat the human European Go champion
  • Go is far more complex than chess
  • The AI was not specifically tailored for the one task of playing Go
  • The AI first trained on a database of expert human moves and then played against itself to learn how to get better
  • This approach is far more general-purpose than prior game-playing AI

Details

In a breakthrough for artificial intelligence, a computing system developed by researchers at Google’s DeepMind in Great Britain has beaten a top human player at the game of Go, the ancient contest of strategy and intuition that has bedeviled AI experts for decades.

Machines have topped the best humans at most games held up as measures of human intellect, including chess, Scrabble, Othello, even Jeopardy!. But with Go—a 2,500-year-old game that’s exponentially more complex than chess—human grandmasters have maintained an edge over even the most agile computing systems. Earlier this month, top AI experts outside of Google questioned whether a breakthrough could occur anytime soon, and as recently as last year, many believed another decade would pass before a machine could beat the top humans.
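
To put “exponentially more complex” in rough numbers: a common back-of-the-envelope estimate gives chess about 35 legal moves per position over an 80-move game, versus roughly 250 moves per position over a 150-move game for Go. Those branching-factor figures are standard estimates from the game-AI literature, not numbers from the article, but they show how far apart the two search spaces sit:

```python
# Back-of-the-envelope comparison of game-tree sizes.
# The branching factors (~35 for chess, ~250 for Go) and game lengths
# (~80 and ~150 plies) are commonly cited estimates, not figures
# from the article itself.
import math

def log10_tree_size(branching_factor: float, plies: int) -> float:
    """log10 of branching_factor ** plies, a rough count of possible game lines."""
    return plies * math.log10(branching_factor)

print(f"Chess: ~10^{log10_tree_size(35, 80):.0f} possible games")    # ~10^124
print(f"Go:    ~10^{log10_tree_size(250, 150):.0f} possible games")  # ~10^360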

But Google has done just that. “It happened faster than I thought,” says Rémi Coulom, the French researcher behind what was previously the world’s top artificially intelligent Go player.

Deep learning is conquering problem after problem in artificial intelligence

The DeepMind system, dubbed AlphaGo, matched its artificial wits against Fan Hui, Europe’s reigning Go champion, and the AI system went undefeated in five games witnessed by an editor from the journal Nature and an arbiter representing the British Go Federation.

Using a vast collection of Go moves from expert players—about 30 million moves in total—DeepMind researchers trained their system to play Go on its own. But this was merely a first step. In theory, such training only produces a system as good as the best humans. To beat the best, the researchers then matched their system against itself. This allowed them to generate a new collection of moves they could then use to train a new AI player that could top a grandmaster.
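
To make the self-play stage concrete, here is a minimal, runnable toy version, scaled all the way down to noughts and crosses (the game that opens this article). AlphaGo’s real pipeline trains deep neural networks, first on the 30 million expert moves and then on self-play games; in this sketch a simple lookup table stands in for the network, the supervised stage is omitted, and self-play alone drives the improvement:

```python
# Toy self-play learner for noughts and crosses. A lookup table of move
# values stands in for AlphaGo's deep neural network; the supervised
# pretraining stage on expert games is omitted here.
import random
from collections import defaultdict

# The eight winning lines on a 3x3 board, indexed 0-8.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

# policy[state][move] -> learned value of playing `move` in `state`.
policy = defaultdict(lambda: defaultdict(float))

def choose_move(board, explore=0.1):
    """Pick the best-valued legal move, with occasional random exploration."""
    moves = [i for i, cell in enumerate(board) if cell == "."]
    if random.random() < explore:
        return random.choice(moves)
    state = "".join(board)
    return max(moves, key=lambda m: policy[state][m])

def self_play_game():
    """The policy plays one game against itself; record each side's moves."""
    board, history, player = ["."] * 9, {"X": [], "O": []}, "X"
    while winner(board) is None and "." in board:
        move = choose_move(board)
        history[player].append(("".join(board), move))
        board[move] = player
        player = "O" if player == "X" else "X"
    return history, winner(board)

# Self-play loop: reinforce the winner's moves and penalize the loser's,
# so each generation of the table plays slightly better than the last.
for _ in range(50_000):
    history, win = self_play_game()
    for player, moves in history.items():
        reward = 0.0 if win is None else (1.0 if player == win else -1.0)
        for state, move in moves:
            policy[state][move] += reward

print("Learned opening move:", choose_move(["."] * 9, explore=0.0))
```

Replace the lookup table with a deep neural network and the random exploration with a search over candidate moves, and this loop becomes a very rough skeleton of the scheme DeepMind describes.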

“The most significant aspect of all this…is that AlphaGo isn’t just an expert system, built with handcrafted rules,” says Demis Hassabis, who oversees DeepMind. “Instead, it uses general machine-learning techniques to figure out for itself how to win at Go.”

AlphaGo was not preprogrammed to play Go: rather, it learned using a general-purpose algorithm that allowed it to interpret the game’s patterns, in a similar way to how a DeepMind program learned to play 49 different arcade games.
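
One way to see why the approach transfers between games: a convolutional network just reads a grid of numbers, whether the grid is a 19×19 Go board or a stack of Atari screen pixels. The sketch below, written in PyTorch with illustrative layer sizes (not AlphaGo’s published architecture), builds policy networks for both kinds of input from the same few lines of code:

```python
# Sketch: one network-building function, two very different games.
# Layer sizes are illustrative stand-ins, not AlphaGo's architecture.
import torch
import torch.nn as nn

def make_policy_net(in_channels: int, grid_size: int, n_actions: int):
    return nn.Sequential(
        nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.Conv2d(32, 32, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.Flatten(),
        nn.Linear(32 * grid_size * grid_size, n_actions),
        nn.Softmax(dim=-1),  # probability distribution over moves
    )

# Go: two input planes (black/white stones), one action per board point.
go_net = make_policy_net(in_channels=2, grid_size=19, n_actions=19 * 19)
# Atari: four stacked greyscale frames, 18 joystick actions.
atari_net = make_policy_net(in_channels=4, grid_size=84, n_actions=18)

board = torch.zeros(1, 2, 19, 19)  # an empty Go board
print(go_net(board).shape)         # torch.Size([1, 361])
```

Only the input shape and the number of output actions change; the learning machinery is the same, which is the sense in which the algorithm is general-purpose.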

Nature – Google AI algorithm masters ancient game of Go

Hassabis says DeepMind’s system works pretty well on a single computer equipped with a decent number of GPU chips, but for the match against Fan Hui, the researchers used a larger network of computers that spanned about 170 GPU cards and 1,200 standard processors, or CPUs. This larger computer network both trained the system and played the actual game, drawing on the results of the training.
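
As a loose illustration of why extra CPUs help at play time (this is not DeepMind’s actual distributed search), candidate moves can be handed to independent worker processes, each scoring its move with fast simulated games, and the results merged to pick the best. The evaluate_move function here is a hypothetical placeholder for real rollouts:

```python
# Sketch: farming move evaluation out to parallel CPU workers.
# evaluate_move is a placeholder -- a real system would run game
# rollouts or a tree search here, not coin flips.
import random
from multiprocessing import Pool

def evaluate_move(move: int):
    """Score one candidate move with many fast simulated games."""
    rollouts = 10_000
    wins = sum(random.random() < 0.5 for _ in range(rollouts))  # stand-in
    return move, wins / rollouts

if __name__ == "__main__":
    candidate_moves = list(range(19 * 19))   # every point on a 19x19 board
    with Pool(processes=8) as pool:          # one worker per CPU core
        results = pool.map(evaluate_move, candidate_moves)
    best_move, score = max(results, key=lambda r: r[1])
    print(f"Best move: {best_move} (estimated win rate {score:.3f})")
```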

When AlphaGo plays the world champion in South Korea, Hassabis’s team will use the same setup, though they’re constantly working to improve it.

SOURCES- Nature, Wired, Google