The neural network taught itself to recognize cats, which is no frivolous exercise. This week the researchers will present the results of their work at a conference in Edinburgh, Scotland.
The Google research team, led by the Stanford University computer scientist Andrew Y. Ng and the Google fellow Jeff Dean, used an array of 16,000 processors to create a neural network with more than one billion connections. They then fed it random image thumbnails, one extracted from each of 10 million YouTube videos.
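The article does not spell out the learning algorithm, but the core idea of learning features from unlabeled images can be sketched with a tiny autoencoder. Everything below is a toy illustration under stated assumptions: the network sizes, learning rate, and random "thumbnails" are made up, and this is not Google's actual system, which had over a billion connections running across 16,000 processors.

```python
import numpy as np

# Illustrative sketch only: a tiny single-layer autoencoder trained to
# reconstruct random "thumbnails". All sizes and hyperparameters are
# toy assumptions, not values from the Google experiment.

rng = np.random.default_rng(0)

n_pixels = 64     # a toy "thumbnail" flattened to 64 values (assumption)
n_features = 16   # number of learned feature detectors (assumption)
lr = 0.1          # learning rate (assumption)

# Unlabeled data stands in for the 10 million YouTube thumbnails.
X = rng.random((500, n_pixels))

W = rng.normal(0, 0.1, (n_pixels, n_features))  # encoder weights
V = rng.normal(0, 0.1, (n_features, n_pixels))  # decoder weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(50):
    H = sigmoid(X @ W)   # encode: hidden feature activations
    X_hat = H @ V        # decode: reconstruct the input from features
    err = X_hat - X      # reconstruction error
    # Batch gradient descent on mean squared reconstruction error
    V -= lr * H.T @ err / len(X)
    dH = err @ V.T * H * (1 - H)
    W -= lr * X.T @ dH / len(X)

loss = float(np.mean((sigmoid(X @ W) @ V - X) ** 2))
print(f"final reconstruction MSE: {loss:.4f}")
```

The point of the sketch is that no labels are involved anywhere: the network is trained only to reproduce its input, and useful feature detectors (in Google's case, a "cat" detector) emerge as a side effect of that objective at vastly larger scale.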
“It is worth noting that our network is still tiny compared to the human visual cortex, which is a million times larger in terms of the number of neurons and synapses,” the researchers wrote.
Despite being dwarfed by the immense scale of biological brains, the Google research provides new evidence that existing machine learning algorithms improve greatly as the machines are given access to large pools of data.
One estimate puts Google's total computational capacity at about 40 petaflops at the beginning of 2012.
In January 2012, Google's total number of servers was estimated at around 1,800,000, covering all eight of its self-built data centers then in operation worldwide. Other respected industry watchers put the figure at 900,000 servers.
Google itself said in 2009 that its system was designed to scale to between 1 and 10 million servers. If it has roughly 2 million now, there is room for about five-fold growth, which at constant per-server performance would mean up to ~200 petaflops.
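The back-of-envelope scaling above can be written out explicitly. The 40-petaflop baseline, the ~2 million servers, and the 10-million-server design ceiling are the estimates quoted in this article; everything else is simple arithmetic.

```python
# Back-of-envelope arithmetic from the estimates quoted above.
current_petaflops = 40        # estimated capacity at the start of 2012
current_servers = 2_000_000   # ~2 million servers (rounded estimate)
design_ceiling = 10_000_000   # Google's stated 2009 design target

growth_factor = design_ceiling / current_servers
projected_petaflops = current_petaflops * growth_factor

print(growth_factor)          # 5.0 (five-fold headroom)
print(projected_petaflops)    # 200.0, matching the ~200 petaflop figure

# Reaching 1 exaflops (1,000 petaflops) at the same per-server
# performance would need a further 5x beyond even that ceiling:
exaflop_shortfall = 1000 / projected_petaflops
print(exaflop_shortfall)      # 5.0
```

That remaining 5x gap is why merely adding servers is unlikely to get Google to an exaflop, and why the architectural changes discussed next would be needed.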
To reach 1 exaflops, Google might need to evolve its architecture, perhaps by adopting GPUs or processors with hundreds of cores. I have no idea, but I would guess someone inside Google is already thinking about it.