AlphaGo is Here. What’s Next?

One of the most dramatic events in 2016 was the triumph of Google DeepMind’s AlphaGo AI program against Lee Sedol of South Korea, one of the world’s top Go players.

This came as a shock to many. Chess fell to AI many years ago, but Go was thought to be safe for a while longer, and AlphaGo’s success set off a flurry of questions. Is AI much further along than we think? Are robots with human-level intelligence just around the corner?

Experts have lined up on both sides of these questions and there’s no shortage of perspectives. I wanted to share two that particularly resonated with me.

In an Edge interview on big data and AI (which is a great read in its entirety, btw), Gary Marcus of NYU highlights a key requirement of systems like Google DeepMind’s Atari AI and AlphaGo AI.

You’d think if it’s so great let’s take that same technique and put it in robots, so we’ll have robots vacuum our homes and take care of our kids. The reality is that in the [Google DeepMind] Atari game system, first of all, data is very cheap. You can play the game over and over again. If you’re not sticking quarters in a slot, you can do it infinitely. You can get gigabytes of data very quickly, with no real cost.

If you’re talking about having a robot in your home (I’m still dreaming of Rosie the robot that’s going to take care of my domestic situation), you can’t afford for it to make mistakes. The DeepMind system is very much about trial and error on an enormous scale. If you have a robot at home, you can’t have it run into your furniture too many times. You don’t want it to put your cat in the dishwasher even once. You can’t get the same scale of data.

This is certainly true in my experience. Without lots and lots of data to learn from, the fancy machine learning/deep learning stuff doesn’t work as well. (This is not to say that data is everything; many math/CS tricks contributed to the breakthroughs, but lots of data is a must-have.)

So is that it? In situations where we can’t have “trial and error on an enormous scale”, are we basically stuck?

Perhaps not. Machine learning researcher Paul Mineiro acknowledges this …

In the real world we have sample complexity constraints: you have to perform actual actions to get actual rewards.

… and suggests a way around it.

However, in the same way that cars and planes are faster than people because they have unfair energetic advantages (we are 100W machines; airplanes are much higher), I think “superhuman AI”, should it come about, will be because of sample complexity advantages, i.e., a distributed collection of robots that can perform more actions and experience more rewards (and remember and share all of them with each other).

AIs remembering and sharing with each other. That’s a cool idea.

Perhaps we can’t reduce the total amount of trial and error necessary for AIs to learn, but maybe we can “spread the data-collection pain” across thousands of AIs, learn from the pooled data, push the learning back out to all the AIs, and run this loop continuously. If my robot bumps into the furniture, maybe yours won’t have to.
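To make that loop concrete, here’s a minimal Python sketch. Everything in it (the SharedBuffer, the toy reward, the fleet size) is a hypothetical stand-in rather than any real system’s API; the point is just the shape of the loop: many agents collect experience into one pool, a central learner distills it, and the result is pushed back to every agent.

```python
import random
from collections import deque

# A hypothetical sketch of the shared-learning loop: many agents collect
# experience, one learner pools it, and the result is pushed back to all.

class SharedBuffer:
    """Experience pooled from every agent in the fleet."""
    def __init__(self, capacity=100_000):
        self.data = deque(maxlen=capacity)

    def add(self, experience):
        self.data.append(experience)

    def sample(self, n):
        return random.sample(list(self.data), min(n, len(self.data)))

class Agent:
    """One robot: acts in its own environment and reports what it saw."""
    def __init__(self, agent_id):
        self.agent_id = agent_id
        self.policy = {}  # shared knowledge pushed down from the learner

    def act_and_report(self, buffer):
        # Stand-in for a real state/action/reward interaction.
        state = random.randint(0, 9)
        action = self.policy.get(state, random.randint(0, 1))
        reward = 1.0 if action == state % 2 else -1.0  # toy "bumped into furniture" signal
        buffer.add((state, action, reward))

def learn(buffer, batch_size):
    """Central learner: distill a policy from the pooled experience."""
    policy = {}
    for state, action, reward in buffer.sample(batch_size):
        if reward > 0:  # keep actions that worked for *any* agent
            policy[state] = action
    return policy

# The continuous loop: collect, pool, learn, push back out.
buffer = SharedBuffer()
fleet = [Agent(i) for i in range(1000)]
for _ in range(10):
    for agent in fleet:
        agent.act_and_report(buffer)
    shared_policy = learn(buffer, batch_size=2048)
    for agent in fleet:
        agent.policy = shared_policy  # if my robot hit the furniture, yours learns too
```

Each agent here only ever experiences ten interactions of its own, far too few to learn from alone, but the learner sees ten thousand. That pooling is the whole trick.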

Come to think of it, this “remembering and sharing with each other” is one of the arguments that has been put forth for how Homo sapiens progressed from their humble beginnings to the point where they can build things like AlphaGo.
