
Opinion: Artificial Intelligence Hits the Barrier of Meaning

Tuesday November 6, 2018, 07:14 PM, from Slashdot
Machine learning algorithms don't yet understand things the way humans do -- with sometimes disastrous consequences. Melanie Mitchell, a professor of computer science at Portland State University, writes:

As someone who has worked in A.I. for decades, I've witnessed the failure of similar predictions of imminent human-level A.I., and I'm certain these latest forecasts will fall short as well. The challenge of creating humanlike intelligence in machines remains greatly underestimated. Today's A.I. systems sorely lack the essence of human intelligence: understanding the situations we experience, being able to grasp their meaning. The mathematician and philosopher Gian-Carlo Rota famously asked, 'I wonder whether or when A.I. will ever crash the barrier of meaning.' To me, this is still the most important question.

The lack of humanlike understanding in machines is underscored by recent cracks that have appeared in the foundations of modern A.I. While today's programs are much more impressive than the systems we had 20 or 30 years ago, a series of research studies have shown that deep-learning systems can be unreliable in decidedly unhumanlike ways. I'll give a few examples.

'The bareheaded man needed a hat' is transcribed by my phone's speech-recognition program as 'The bear headed man needed a hat.' Google Translate renders 'I put the pig in the pen' into French as 'Je mets le cochon dans le stylo' (mistranslating 'pen' in the sense of a writing instrument). Programs that 'read' documents and answer questions about them can easily be fooled into giving wrong answers when short, irrelevant snippets of text are appended to the document.

Similarly, programs that recognize faces and objects, lauded as a major triumph of deep learning, can fail dramatically when their input is modified even in modest ways by certain types of lighting, image filtering and other alterations that do not affect humans' recognition abilities in the slightest. One recent study showed that adding small amounts of 'noise' to a face image can seriously harm the performance of state-of-the-art face-recognition programs. Another study, humorously called 'The Elephant in the Room,' showed that inserting a small image of an out-of-place object, such as an elephant, in the corner of a living-room image strangely caused deep-learning vision programs to suddenly misclassify other objects in the image.
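
The question-answering failure Mitchell describes is the kind demonstrated by Jia and Liang's 2017 study "Adversarial Examples for Evaluating Reading Comprehension Systems," in which a single irrelevant sentence appended to a passage flips an extractive QA model's answer. Below is a minimal sketch, assuming the Hugging Face transformers library; the model checkpoint and the distractor sentence are illustrative, and whether a given model actually flips depends on both.

    from transformers import pipeline

    # Extractive QA model trained on SQuAD; the checkpoint is illustrative.
    qa = pipeline("question-answering",
                  model="distilbert-base-cased-distilled-squad")

    context = "The Eiffel Tower was completed in 1889 and stands in Paris."
    # Irrelevant to the question, but superficially similar in form.
    distractor = " The Tokyo Tower was completed in 1958."
    question = "When was the Eiffel Tower completed?"

    print(qa(question=question, context=context)["answer"])
    print(qa(question=question, context=context + distractor)["answer"])
    # With an adversarially chosen distractor, the second answer can flip
    # to 1958 even though the added sentence is about a different building.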
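
The 'noise' that harms face- and object-recognition systems is typically an adversarial perturbation: a tiny pixel-level change computed from the model's own gradients. A minimal sketch of the best-known technique, the fast gradient sign method (Goodfellow et al., 2015), follows, assuming PyTorch and torchvision; the architecture, image path, and epsilon are illustrative (epsilon here is expressed in the normalized input space).

    import torch
    import torch.nn.functional as F
    import torchvision.models as models
    from PIL import Image

    # Pretrained ImageNet classifier; the architecture choice is illustrative.
    weights = models.ResNet18_Weights.DEFAULT
    model = models.resnet18(weights=weights).eval()
    preprocess = weights.transforms()  # resize, crop, to-tensor, normalize

    def fgsm_predictions(image_path, epsilon=0.02):
        # Predict on the clean image, then on the same image nudged one
        # gradient-sign step in the direction that increases the loss.
        x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
        x.requires_grad_(True)
        logits = model(x)
        pred = logits.argmax(dim=1)
        loss = F.cross_entropy(logits, pred)  # loss of the model's own guess
        loss.backward()
        x_adv = (x + epsilon * x.grad.sign()).detach()
        return pred.item(), model(x_adv).argmax(dim=1).item()

    # clean_label, perturbed_label = fgsm_predictions("living_room.jpg")
    # The two labels often differ even though the two images look
    # identical to a human observer.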

Read more of this story at Slashdot.
rss.slashdot.org/~r/Slashdot/slashdot/~3/NrQEFUCMbEM/opinion-artificial-intelligence-hits-the-barrie...