
Google Simulates Brain Networks to Recognize Speech and Images

09 October 2012 | Tech News


This summer Google set a new landmark in the field of artificial intelligence with software that learned how to recognize cats, people, and other things simply by watching YouTube videos (see “Self-Taught Software”). That technology, modeled on how brain cells operate, is now being put to work making Google’s products smarter, with speech recognition being the first service to benefit, Technology Review reports.

Google’s learning software is based on simulating groups of connected brain cells that communicate and influence one another. When such a neural network, as it is called, is exposed to data, the relationships between individual neurons change, and the network develops the ability to respond in characteristic ways to incoming data of a particular kind; the network is then said to have learned something. Neural networks have been used for decades in machine-learning applications such as chess-playing software and face detection, but Google’s engineers have found ways to put far more computing power behind the approach than was previously possible.

October 5, 2012, Kurzweil AI
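
To make the idea of "relationships between neurons changing" concrete, here is a minimal sketch, not Google's actual system: a single artificial neuron whose connection weights are nudged each time it is exposed to a labeled example, until it reacts reliably to one kind of input. The toy task, learning rate, and all names are illustrative assumptions; only NumPy is required.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 2-pixel "images"; label 1 when the first pixel is brighter.
inputs = rng.uniform(0.0, 1.0, size=(200, 2))
labels = (inputs[:, 0] > inputs[:, 1]).astype(float)

# One artificial neuron: a weight per input plus a bias, passed through
# a sigmoid so its output lies between 0 and 1.
weights = rng.normal(0.0, 0.1, size=2)
bias = 0.0

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

learning_rate = 0.5
for epoch in range(100):
    for x, y in zip(inputs, labels):
        prediction = sigmoid(weights @ x + bias)
        error = prediction - y
        # Exposure to each example adjusts the connection strengths;
        # this is the "relationships between neurons change" step.
        weights -= learning_rate * error * x
        bias -= learning_rate * error

# After training, the neuron responds strongly to inputs of the learned kind.
test = np.array([[0.9, 0.1], [0.1, 0.9]])
print(sigmoid(test @ weights + bias))  # roughly [1.0, 0.0]
```

The systems described in the article work on the same principle but with many layers of such neurons and vastly more computing power behind the weight updates.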
