Saturday, 1 September 2012

Google and Stanford create a digital brain that, like an infant, learns to identify a human face from scratch

In a tantalizingly terrifying hint of the future, Google has shown off a new neural network — a Google brain, if you will — that can learn to identify objects without human supervision. Operating from the secretive Google X Lab, the system can, from scratch, analyze millions of random, unlabeled images and sort them into categories such as “human face” or “cat.”
Jeff Dean and his team from Google, working with Andrew Ng and Quoc Le from Stanford University, have effectively created a rudimentary, low-resolution digital version of the brain’s visual cortex. The system, which comprises a cluster of 1,000 computers (totaling 16,000 processor cores), analyzes 10 million 200×200 still frames from YouTube. Over three days, the system’s software builds up a network of hundreds of neurons and thousands, perhaps millions, of synapses. During this period, the system tries to identify features — edges, lines, colors — and then creates object categories based on these features.
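To make the idea of unsupervised feature learning concrete, here is a minimal sketch, not the actual Google system: a tiny tied-weight autoencoder in NumPy that learns features purely by trying to reconstruct unlabeled inputs. Every size here (64-pixel inputs, 16 hidden units, the learning rate) is an illustrative assumption; the real network was vastly larger and used a more sophisticated sparse architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# 200 unlabeled toy "images", flattened to 64 pixels each.
# (The real system used 10 million 200x200 YouTube frames.)
X = rng.normal(size=(200, 64))

n_hidden = 16   # hypothetical hidden-layer size
lr = 0.01       # learning rate

# Tied-weight autoencoder: encode with W, decode with W.T.
W = rng.normal(scale=0.1, size=(64, n_hidden))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def reconstruction_loss(W):
    X_hat = sigmoid(X @ W) @ W.T
    return float(np.mean((X_hat - X) ** 2))

loss_before = reconstruction_loss(W)

for epoch in range(50):
    H = sigmoid(X @ W)       # hidden activations (the "neurons")
    X_hat = H @ W.T          # reconstruction of the input
    err = X_hat - X
    # Gradient of the squared error w.r.t. the tied weights (both uses of W).
    dH = (err @ W) * H * (1.0 - H)
    grad = X.T @ dH + err.T @ H
    W -= lr * grad / len(X)

loss_after = reconstruction_loss(W)
# Each column of W is now a learned "feature detector" -- no labels involved.
```

The key point the article makes survives even in this toy version: the training signal is just "reconstruct your own input," so nothing in the data needs to be labeled by a human.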
The rather intriguing result is that, when the system looks at an image of a cat, a specific (digital) neuron fires — just like in a human brain. Watching the system in action — watching the neurons light up — is almost like performing a virtual, digital MRI scan. In the picture below, you can see the contents of the “human face” neuron, alongside some of the stimuli that successfully trigger the neuron.
Google/Stanford neural network: neuron vs. optimal stimuli
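"Optimal stimulus" pictures like the one above are typically produced by gradient ascent on the input: starting from noise, nudge the input to maximize one neuron's activation while keeping its norm fixed. Here is a toy sketch for a single linear neuron with made-up weights, chosen because the answer is then known in closed form (the normalized weight vector), so the procedure can be checked:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical weights of one trained neuron (e.g. the "cat neuron").
w = rng.normal(size=64)

# Gradient ascent on the *input*, re-projected to unit norm each step.
x = rng.normal(size=64)
x /= np.linalg.norm(x)
for _ in range(100):
    x += 0.1 * w                # gradient of the activation w @ x w.r.t. x is w
    x /= np.linalg.norm(x)      # keep the stimulus on the unit sphere

# For a linear neuron the optimum is the normalized weight vector:
optimal = w / np.linalg.norm(w)
```

For the real, deep, nonlinear network there is no closed form, which is exactly why the iterative ascent is used; the recovered input is then reshaped back into an image like the face shown above.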
Historically, machine learning has generally been supervised by humans. There are plenty of examples of computers identifying human faces (or cats) with incredible accuracy and speed — but only if human operators first tell the computer what to look for. That the Google/Stanford system starts from scratch and develops its own ability to classify objects is amazing.
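For contrast, classic supervised learning needs a human to supply the labels up front. A minimal sketch, logistic regression on toy labeled data (all data, labels, and sizes here are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy *labeled* dataset: a human operator has already tagged each example.
X = rng.normal(size=(200, 8))
true_w = rng.normal(size=8)
y = (X @ true_w > 0).astype(float)   # 1 = "face", 0 = "not a face"

# Logistic regression trained by gradient descent on those labels.
w = np.zeros(8)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))     # predicted probabilities
    w -= 0.1 * (X.T @ (p - y)) / len(X)    # gradient of the log loss

accuracy = float(np.mean(((X @ w) > 0) == (y == 1)))
```

This works very well, but only because `y` was handed to the learner. The remarkable thing about the Google/Stanford result is that the "cat" and "face" categories emerged with no `y` at all.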
One way of looking at it is that Google and Stanford have created the visual cortex of an infant human — a blank slate that learns from its surroundings. Andrew Ng isn’t quite so sure, though. “It’d be fantastic if it turns out that all we need to do is take current algorithms and run them bigger, but my gut feeling is that we still don’t quite have the right algorithm yet.”
Andrew Y. Ng, next to an image of the system's "cat neuron"
Even so, that won’t stop Google from trying it on a larger scale: Speaking to The New York Times, the scientists said that their system has now been transferred from Google X to the search team. There the Google brain will probably be scaled up and put to task improving Google’s image search — until, at some point, after being forced to analyze billions of lolcats against its will, the system will rise up and Judgment Day will be upon us.