
Google AI Beats Top Human At The Game Of Go

Researchers have taught a computer how to play Go extremely well, a technique that could help them teach computers to master other complex tasks in the future.


Google researchers have achieved yet another milestone in artificial intelligence development: They've taught a computer to play a 2,500-year-old Chinese board game really, really well.

Engineers at Google subsidiary DeepMind pitted their Go-playing AI, AlphaGo, against reigning European Go champion Fan Hui in a five-game match last October. The computer won all five games.

Complex games like Go are a good way to test how well an AI can learn and perform against a human brain. IBM has been a big proponent of this strategy, first with Deep Blue's chess matches against Garry Kasparov in 1996 and 1997, then with Watson's "Jeopardy!" win in 2011.

But the game of Go poses a far more daunting challenge. In chess, roughly 10^120 different games can be played; on a standard 19-by-19 Go board, the number of possible games is estimated at around 10^761.
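To get a feel for why those numbers rule out brute force, here is a rough back-of-envelope sketch. The branching factors and game lengths are commonly cited approximations, not figures from this article, and because the sketch estimates typical games rather than every legal one, its exponents come out lower than the totals above:

```python
# Back-of-envelope game-tree estimate: the number of possible games grows
# roughly as (average legal moves per turn) ** (typical game length).
# The branching factors and lengths below are common approximations.

def game_tree_size(branching_factor, game_length):
    return branching_factor ** game_length

chess = game_tree_size(35, 80)   # ~35 legal moves, ~80 plies per game
go = game_tree_size(250, 150)    # ~250 legal moves, ~150 moves per game

print(f"chess: ~10^{len(str(chess)) - 1}")  # ~10^123
print(f"go:    ~10^{len(str(go)) - 1}")     # ~10^359
```

Even this conservative estimate shows Go's search space dwarfing chess's by hundreds of orders of magnitude.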

The staggering number of combinations makes it infeasible to simply crunch the numbers and pick the best possible move. Instead, AlphaGo relies on two neural networks: one that suggests promising moves and another that evaluates how well those moves are likely to do later in the game.
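As a rough illustration of that division of labor, here is a minimal sketch. The stub networks, toy board, and helper names are assumptions made for this example, not DeepMind's code, and the real AlphaGo also wraps both networks in a Monte Carlo tree search that this sketch omits:

```python
import random

def legal_moves(board):
    """Toy stand-in: legal moves are the empty points on the board."""
    return [i for i, point in enumerate(board) if point == "."]

def play(board, move):
    """Toy stand-in: place a stone on the chosen point."""
    return board[:move] + "X" + board[move + 1:]

def policy_network(board):
    """Stand-in 'policy' net: score how promising each legal move looks.
    Random here; AlphaGo's was a deep network trained on expert play."""
    return {move: random.random() for move in legal_moves(board)}

def value_network(board):
    """Stand-in 'value' net: estimate the chance of winning from a position.
    Random here; AlphaGo's was trained on huge numbers of positions."""
    return random.random()

def choose_move(board, top_k=3):
    # 1. The policy network narrows the search to a few promising moves.
    scores = policy_network(board)
    candidates = sorted(scores, key=scores.get, reverse=True)[:top_k]
    # 2. The value network judges where each candidate leads; keep the best.
    return max(candidates, key=lambda m: value_network(play(board, m)))

board = "." * 9  # a flattened toy 3x3 board
print("chosen move:", choose_move(board))
```

The key design idea is that neither network has to be perfect: the policy network only has to prune the search down to a manageable handful of moves, and the value network only has to rank those few outcomes.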

AlphaGo learned how to pick and evaluate moves by playing millions and millions of Go games — first against itself, then against rival computers, and finally against a human champion.
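For a feel of what "learning by playing itself" means in miniature, here is a deliberately tiny sketch, assuming a made-up three-move game: the program plays both sides with its current preferences and nudges up the weight of whichever move wins. AlphaGo's reinforcement learning is far more sophisticated, but the feedback loop has the same basic shape:

```python
import random

# Toy self-play loop. The three-move game, the policy table, and the
# update rule are all invented for this sketch, not AlphaGo's method.

MOVES = [0, 1, 2]

def pick(policy):
    """Sample a move in proportion to its learned weight."""
    return random.choices(MOVES, weights=[policy[m] for m in MOVES])[0]

def self_play_round(policy):
    """Both 'players' share one policy; in this toy game the higher move wins."""
    a, b = pick(policy), pick(policy)
    return None if a == b else max(a, b)  # ties teach nothing

def train(rounds=5000):
    policy = {m: 1.0 for m in MOVES}      # start with no preference
    for _ in range(rounds):
        winner = self_play_round(policy)
        if winner is not None:
            policy[winner] += 0.01        # reinforce moves that win
    return policy

print(train())  # the weight on move 2 ends up dominating
```

Run long enough, the preferences drift toward the strongest move without any human examples, which is the essence of learning through self-play.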

AlphaGo's learning techniques could have broader applications in other fields, from scientific research to predicting human behavior, but DeepMind is focusing on Go for now. The AI will face Lee Sedol, one of the world's top Go players, in Seoul, South Korea, this March.

This video includes an image from Chad Miller / CC BY SA 2.0.