Why is the AlphaGo victory significant for AI?


Ten years ago, most computer scientists (myself included) believed that Go would be beyond the reach of algorithms for a very long time. Matching the reasoning abilities of Go masters seemed so difficult that maybe, just maybe, we would develop artificial general intelligence first, then apply it to Go.

It turned out to be a massive failure of imagination. In 2006, Rémi Coulom showed that Monte Carlo Tree Search (a randomized technique) could tame the combinatorial explosion caused by the breadth and depth of Go's game tree (Efficient Selectivity and Backup Operators in Monte-Carlo Tree Search).
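To make the idea concrete, here is a minimal sketch of the core MCTS loop (selection via UCB1, expansion, random rollout, backpropagation) on a toy game. The game (one-pile Nim: remove 1 or 2 stones, whoever takes the last stone wins), the node structure, and the exploration constant are all illustrative choices, not anything from Coulom's paper or AlphaGo:

```python
import math
import random

# Toy game: one pile of stones; players alternately remove 1 or 2,
# and the player who takes the last stone wins.

def legal_moves(stones):
    return [m for m in (1, 2) if m <= stones]

class Node:
    def __init__(self, stones, parent=None, move=None):
        self.stones = stones          # stones remaining at this state
        self.parent = parent
        self.move = move              # the move that led to this state
        self.children = []
        self.untried = legal_moves(stones)
        self.wins = 0.0               # wins from the view of the player who just moved
        self.visits = 0

    def ucb1_child(self, c=1.4):
        # Balance exploitation (win rate) against exploration (rarely tried moves).
        return max(self.children,
                   key=lambda ch: ch.wins / ch.visits
                   + c * math.sqrt(math.log(self.visits) / ch.visits))

def mcts(stones, iterations=5000):
    root = Node(stones)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend via UCB1 while the node is fully expanded.
        while not node.untried and node.children:
            node = node.ucb1_child()
        # 2. Expansion: try one untried move, creating a new child.
        if node.untried:
            m = node.untried.pop(random.randrange(len(node.untried)))
            child = Node(node.stones - m, parent=node, move=m)
            node.children.append(child)
            node = child
        # 3. Simulation: play random moves to the end of the game.
        s, plies = node.stones, 0
        while s > 0:
            s -= random.choice(legal_moves(s))
            plies += 1
        # The opponent moves first in the rollout, so the player who
        # moved into `node` wins exactly when the rollout length is even.
        r = 1.0 if plies % 2 == 0 else 0.0
        # 4. Backpropagation: flip the perspective at every level.
        while node is not None:
            node.visits += 1
            node.wins += r
            r = 1.0 - r
            node = node.parent
    # Recommend the most-visited move from the root.
    return max(root.children, key=lambda ch: ch.visits).move
```

On this game the losing positions are the multiples of 3, and with a few thousand iterations the search reliably recovers the optimal move (e.g. take 1 from a pile of 4) without ever enumerating the full tree, which is the whole point of the technique.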

Then in 2015, DeepMind showed how to dramatically boost the performance of Monte Carlo search with Deep Learning, using neural nets to evaluate Go positions and generate plausible moves (Mastering the game of Go with deep neural networks and tree search).

That massive failure of the imagination is what I consider the most important takeaway of the AlphaGo victory. It shows how wrong well-intentioned and honest experts can be. When they now claim that some Artificial Intelligence (AI) systems are decades away, or even one hundred Nobel Prizes away, one has to wonder whether one or two breakthroughs and a lot of clever engineering might not do the trick. We just don’t know. Some people worry about the impact that intelligent computers will have on jobs and society (who will own the robots?). Even if it does not seem like it will happen tomorrow, we should not dismiss those concerns out of hand.

Now, AlphaGo is not some proto-AI. It is strictly a Go playing machine. It does use some general techniques such as Reinforcement Learning or Deep Convolutional Networks as building blocks, but the overall system is exquisitely handcrafted by its designers to play Go. So how generalizable is it to other use cases?

Technically, the most surprising insight may be that randomized algorithms can effectively tackle huge, highly structured combinatorial problems. There is no shortage of such problems in computational biology, data mining, planning, and of course Machine Learning itself. I expect a renewed interest in randomized algorithms for Reinforcement Learning and Deep Learning.

This post originally appeared on Quora.

Image credit: Chris Goldberg at Flickr.
