Google AI Beats Top Human At The Game Of Go
Researchers have taught a computer how to play Go extremely well — which could help them teach computers to do other things in the future.

By Matt Picht | January 27, 2016
Google researchers have achieved yet another milestone in artificial intelligence development: They've taught a computer to play a 2,500-year-old Chinese board game really, really well.
Engineers at Google subsidiary DeepMind pitted their Go-playing AI, AlphaGo, against reigning European Go champion Fan Hui in a five-game series last October. The computer won all five games.
Complex games like Go are a good way to test how well an AI can learn and perform against a human mind. IBM has been a big proponent of this strategy, first with Deep Blue's chess matches against Garry Kasparov starting in 1996, then with Watson's "Jeopardy!" victory in 2011.