Google has developed a virtual "neural network" that taught itself what a cat looks like by viewing images from YouTube. Developed at Google X, the research and development lab best known for Project Glass and self-driving cars, the neural network is a cluster of 1,000 computers with 16,000 cores between them. Google fed the cluster 200 x 200 pixel thumbnails taken from 10 million randomly selected YouTube videos and had it look for recurring features. Its creation was able to detect not only faces, but also "high-level concepts" such as cat faces and human bodies.
Google's machine was not taught what a face, body, or cat looks like, nor given any labeled data, before it started its analysis. Once it had discovered a recurring object, the computer developed image maps that were then used to detect similar objects. The team dubbed these maps "neurons," a nod to the theory that certain neurons in the temporal cortex of the brain are specifically tasked with recognizing object categories such as faces or hands.
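To make the idea concrete, here is a minimal sketch (not Google's actual code) of unsupervised feature learning in Python. It trains a tiny single-layer autoencoder on random stand-in "patches"; every size, name, and the data itself are illustrative assumptions. The point is only that the hidden units, like the team's "neurons," learn recurring structure without ever seeing a label.

```python
# Minimal sketch of unsupervised feature learning: a single-layer
# autoencoder that learns recurring structure from unlabeled patches.
# All sizes and data are illustrative placeholders, not the Google X setup.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for unlabeled thumbnails: 1,000 random 8x8 grayscale patches,
# flattened to 64-dimensional vectors (the real system used 200x200 frames).
patches = rng.random((1000, 64))

n_hidden = 16                      # each hidden unit acts like one "neuron"/image map
W_enc = rng.normal(0, 0.1, (64, n_hidden))
W_dec = rng.normal(0, 0.1, (n_hidden, 64))
lr = 0.01

for epoch in range(100):
    # Encode: each hidden unit responds to a recurring pattern in the input.
    hidden = np.tanh(patches @ W_enc)
    # Decode: try to reconstruct the input from the hidden responses.
    recon = hidden @ W_dec
    err = recon - patches          # reconstruction error drives learning

    # Backpropagate the squared-error loss through both layers.
    grad_dec = hidden.T @ err / len(patches)
    grad_hidden = (err @ W_dec.T) * (1 - hidden**2)   # tanh derivative
    grad_enc = patches.T @ grad_hidden / len(patches)

    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

# After training, each column of W_enc is a learned "image map": a feature
# the network discovered on its own, with no labels provided.
features = W_enc.T.reshape(n_hidden, 8, 8)
```

Each column of W_enc ends up encoding a pattern the network found for itself; Google's system worked on the same principle at a vastly larger scale, stacking many such layers across its 16,000 cores.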
"IT BASICALLY INVENTED THE CONCEPT OF A CAT."
Traditional methods have researchers teach computers what objects look like by defining the edges of a shape and then marking images as containing the objects. In this project, says Dr. Jeff Dean, a Google fellow who worked on the project, "we never told it during the training, 'this is a cat.' It basically invented the concept of a cat."
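For contrast, here is a minimal sketch of the "traditional" supervised route the article describes, again in plain NumPy with made-up data: the researcher supplies the "cat" / "not cat" labels up front, and the model can only learn to separate the categories it was told about.

```python
# Minimal sketch of supervised training: labels are hand-assigned in advance,
# standing in for humans marking images as containing the object.
# Data, sizes, and labels are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((200, 64))             # flattened image features
y = (X[:, 0] > 0.5).astype(float)     # pre-assigned "cat"/"not cat" labels

w = np.zeros(64)
b = 0.0
lr = 0.1

for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))  # predicted probability of "cat"
    grad = p - y                        # logistic-loss gradient
    w -= lr * (X.T @ grad) / len(X)
    b -= lr * grad.mean()

# The classifier can only ever recognize the labels it was given; it cannot
# "invent the concept of a cat" the way the unsupervised system did.
```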
The system is not perfect yet, but as a result of the team's success, the research project has been moved out of Google X and is being continued by the company's search and business services team. Google hopes to refine the algorithm and use it in image search, speech recognition, and machine translation.