How computers got shockingly good at recognizing images

Tuesday, December 18, 2018, 2:00 PM, from Ars Technica
(credit: Aurich / Getty)
Right now, I can open up Google Photos, type 'beach,' and see my photos from various beaches I've visited over the last decade. I never went through my photos and labeled them; instead, Google identifies beaches based on the contents of the photos themselves. This seemingly mundane feature is based on a technology called deep convolutional neural networks, which allows software to understand images in a sophisticated way that wasn't possible with prior techniques.
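To make the idea concrete, here is a minimal sketch of the kind of deep convolutional network used for image classification, written in Python with PyTorch. It is not Google's actual model; the layer sizes, input resolution, and number of classes are illustrative assumptions.

```python
# Illustrative sketch of a small convolutional image classifier (PyTorch).
# Real systems stack many more layers and train on millions of labeled images.
import torch
import torch.nn as nn

class TinyConvNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Convolutional layers learn local visual features (edges, textures, shapes)
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # RGB input -> 16 feature maps
            nn.ReLU(),
            nn.MaxPool2d(2),                             # downsample by 2x
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # A fully connected layer maps the pooled features to class scores
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)  # assumes 64x64 inputs

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        x = torch.flatten(x, 1)
        return self.classifier(x)

# Score a batch of images; the index of the largest output is the predicted label
model = TinyConvNet(num_classes=10)
images = torch.randn(4, 3, 64, 64)   # four fake 64x64 RGB images
predictions = model(images).argmax(dim=1)
print(predictions)
```

Given labeled examples (say, photos tagged "beach" or "not beach"), the network's weights are adjusted during training so that its output scores match the labels, which is how a service can later recognize untagged photos.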
In recent years, researchers have found that the accuracy of the software gets better and better as they build deeper networks and amass larger data sets to train them. That has created an almost insatiable appetite for computing power, boosting the fortunes of GPU makers like Nvidia and AMD. Google developed its own custom neural networking chip several years ago, and other companies have scrambled to follow Google's lead.
Over at Tesla, for instance, the company has put deep learning expert Andrej Karpathy in charge of its Autopilot project. The carmaker is now developing a custom chip to accelerate neural network operations for future versions of Autopilot. Or, take Apple: the A11 and A12 chips at the heart of recent iPhones include a 'neural engine' to accelerate neural network operations and allow better image- and voice-recognition applications.
https://arstechnica.com/?p=1354145