A Massive Google Network Learns To Identify — Cats

AUDIE CORNISH, HOST:

Researchers at Google's secretive X Labs say they've built a network of 16,000 computer processors as an experiment designed to emulate the powers of the human brain. Turned loose for three days on 10 million YouTube clips, this brain did what any of our brains would do: It learned how to recognize a cat.

This adds to a growing body of research on artificial intelligence. The company is set to present the research at the International Conference on Machine Learning in Edinburgh, Scotland. That's where Google research partner and Stanford professor Andrew Ng is today. Hello, professor.

ANDREW NG: Hi, Audie.

CORNISH: So start by describing the experiment. What exactly did you ask this computer brain to do, and how did it perform?

NG: So what we did, as you said, was we turned the 16,000 computer cores loose on YouTube videos and had the network watch video for a while. After that, we probed it to see what it had learned, and the remarkable thing about this was that no one had told it in advance to learn any one particular thing. It was free to watch videos and discover whatever common objects or common themes appear in YouTube videos. So what appears commonly in YouTube videos? Well, we thought cats, and we looked around to see if our simulated artificial neural network actually had a simulated neuron that was detecting cats. And lo and behold, we actually found such a neuron in our artificial neural network.
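
To ground what Ng describes, here is a minimal sketch of that kind of unsupervised feature learning: a tiny autoencoder-style network "watches" unlabeled image frames and adjusts its simulated neurons so they reconstruct the frames well, with no label ever mentioning cats. The sizes, data, and training loop are illustrative assumptions, not the actual Google system, which ran a far larger model across 16,000 cores.

```python
# Illustrative sketch only: a tiny one-hidden-layer autoencoder that learns
# whatever features reconstruct unlabeled frames well. No labels say "cat";
# all dimensions and data here are toy-scale assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_features, lr = 64, 16, 0.1                  # toy dimensions, not Google's

W_enc = rng.normal(0.0, 0.1, (n_features, n_pixels))    # pixels -> simulated neurons
W_dec = rng.normal(0.0, 0.1, (n_pixels, n_features))    # simulated neurons -> pixels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Stand-in for unlabeled video frames (random vectors here, real images in practice).
frames = rng.random((5_000, n_pixels))

for x in frames:
    h = sigmoid(W_enc @ x)                    # how strongly each simulated neuron responds
    x_hat = W_dec @ h                         # attempt to reconstruct the frame
    err = x_hat - x                           # reconstruction error is the only training signal
    grad_h = (W_dec.T @ err) * h * (1.0 - h)  # backprop through decoder and sigmoid
    W_dec -= lr * np.outer(err, h)            # gradient step on squared reconstruction error
    W_enc -= lr * np.outer(grad_h, x)
```

The point of the sketch is only that the hidden units end up tuned to whatever patterns recur in the data, which is what makes the probing step described next meaningful.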

CORNISH: How did you know that it had identified a cat in particular?

NG: The way we did this was we found a - we took a bunch of images - some of cats, some not of cats - and we showed these pictures to our, you know, little simulator brain, our little artificial neural network, and we probed around to see if any of the little simulated neurons would respond strongly only to a picture of a cat but not to pictures of other things. And maybe a little bit to our surprise, we actually found one such neuron, and that's when we thought that, gee, this neuron consistently responds to pictures of cats even though no one had told the algorithm in advance to be learning to look for cats.
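
The probing Ng describes can be sketched in a few lines: feed a labeled set of cat and non-cat images through the trained network, then look for any single hidden unit whose average activation is much higher on the cat pictures than on everything else. The function and data below are hypothetical stand-ins, not the study's actual analysis code.

```python
# Illustrative reconstruction of the probing step, not the study's analysis code.
import numpy as np

def find_cat_neuron(activations, is_cat):
    """activations: (n_images, n_neurons) hidden responses for a labeled test set.
    Returns the neuron whose mean response best separates cat images from the rest."""
    acts = np.asarray(activations, dtype=float)
    cat = np.asarray(is_cat, dtype=bool)
    mean_cat = acts[cat].mean(axis=0)         # average response to cat pictures
    mean_other = acts[~cat].mean(axis=0)      # average response to everything else
    selectivity = mean_cat - mean_other       # large value: fires for cats, quiet otherwise
    best = int(np.argmax(selectivity))
    return best, float(selectivity[best])

# Toy usage: 200 images, 16 neurons, with neuron 5 made artificially cat-selective.
rng = np.random.default_rng(1)
labels = rng.random(200) < 0.5
acts = rng.random((200, 16))
acts[labels, 5] += 0.8                        # simulate a neuron that responds to cats
print(find_cat_neuron(acts, labels))          # -> (5, roughly 0.8)
```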

CORNISH: When I hear a number like 16,000 processors involved just to recognize one animal, you know, it's a little confusing. I mean, I would think just one computer could do that. What's surprising about what your simulation was able to do?

NG: In the field of artificial neural networks, what we're seeing consistently is that the bigger you can run these models, the better they perform. If you train one of these algorithms on one computer, you know, it will do pretty well. If you train them on 10, it will do even better. If you train on 100, even better. And we found that when we trained it on 16,000 CPU cores, 16,000 computer processors, that was the best model we were able to train.

Now, why do we need so many processors? The point wasn't to find a cat. The point was to have a piece of software, maybe a little simulated baby brain, if you will, wake up, not knowing anything, watch YouTube video for several days and figure out what it learns. And I'm sure it's learned tons of other things other than, you know, cats. And it's just that cats were one thing we happened to look for and found.
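
One common way to put many processors to work, in the spirit of Ng's answer, is data parallelism: each worker computes a gradient on its own shard of the data, and the results are averaged into a single update, so more cores mean more data (or a bigger model) per unit of training time. The sketch below is a simplified synchronous version on a toy linear-regression problem; the actual Google setup was considerably more elaborate.

```python
# Simplified, synchronous data-parallel training sketch (toy linear regression);
# each "worker" handles one shard of data, mimicking many cores sharing one model.
import numpy as np

def worker_gradient(w, x_shard, y_shard):
    """Gradient of mean squared error on this worker's shard of the data."""
    residual = x_shard @ w - y_shard
    return x_shard.T @ residual / len(y_shard)

rng = np.random.default_rng(2)
X = rng.random((8_000, 32))
true_w = rng.normal(size=32)
y = X @ true_w                                # synthetic targets for the toy problem

w = np.zeros(32)
n_workers, lr = 16, 0.1                       # stand-in for many cores; real systems use far more
shards = np.array_split(np.arange(len(y)), n_workers)
for step in range(1_000):
    grads = [worker_gradient(w, X[idx], y[idx]) for idx in shards]  # parallel in practice
    w -= lr * np.mean(grads, axis=0)          # average the workers' gradients, take one step

print(float(np.mean((X @ w - y) ** 2)))       # loss far below its starting value
```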

CORNISH: In the end, what is the goal of these kinds of projects? Is this about developing artificial intelligence, or is this about helping us develop technologies that can help human brains?

NG: Machine learning and artificial intelligence is a pervasive technology today, and most of us use it dozens of times a day without knowing it. Artificial intelligence technology is responsible for giving us high-quality Web search engines, practical speech recognition, machine translation, even self-driving cars. And I think that by improving the technology, I hope that we'll be able to improve the performance of all of these sorts of important applications as well.

CORNISH: Professor Andrew Ng, associate professor of computer science at Stanford University, thank you for talking with us.

NG: OK. Thank you very much, Audie.

(SOUNDBITE OF MUSIC)

MELISSA BLOCK, HOST:

You're listening to ALL THINGS CONSIDERED from NPR News. Transcript provided by NPR, Copyright NPR.