
Why Apple’s AI-driven reality distortion matters

Tuesday, January 7, 2025, 05:35 PM, from ComputerWorld
Apple has been forced to admit what every company involved in artificial intelligence (AI) should also be forced to state — AI makes mistakes, just like people do. 

On the surface, it’s not a terribly big deal:

Apple’s AI badly mangled a handful of news headlines.

The BBC complained about the mangling.

Because it was a story about Apple, everyone discussed it.

Apple was eventually forced to answer the criticisms and come up with a plan of action to make things better in the future. 

What that plan means is that the company will update Apple Intelligence “in the coming weeks” to clarify, in some way, when a notification has been summarized by AI.

The idea behind this is that people reading those headlines will know there could be a machine-generated error (as opposed to a human one) in the news they are perusing. The implication is, of course, that you should question everything you read to protect yourself against machine-generated errors and human mistakes alike.

Question everything: human or AI

The humans who generate news are up in arms, of course. They see the complaint as a cause célèbre from which to take a stand against their own eventual replacement by machines. The UK National Union of Journalists, Reporters Without Borders, and the head of Meta’s Oversight Board (if that board still exists by the end of the week) have all pointed to these erroneous headlines to suggest Apple’s AI isn’t yet up to the task. (Though even Apple’s critics point out that part of the problem is that, even under human control, public trust in news has already sunk to record lows.)

Those critics also argue that telling users a news headline has been summarized by AI doesn’t go far enough, because it means readers must verify what they read. “It just transfers the responsibility to users, who — in an already confusing information landscape — will be expected to check if information is true or not,” Vincent Berthier, head of RSF’s technology and journalism desk, told the BBC.

But is that really such a bad thing? Shouldn’t readers of human-generated news reports already be checking what they read?

The French philosopher Michel Foucault, a touchstone of media literacy theory, would argue that every reader of any news brand should run what they read through an effective framework of critical media analysis. He would urge readers to “criticize the workings of institutions that appear to be both neutral and independent.”

That includes Apple, of course, as well as the BBC — or even me.

Why this and not that?

The idea, and it really isn’t a complicated one, is that you should rarely take what you read at face value, no matter who wrote it, human or machine.

What is written is one thing; why it is written is another. In this case, why has the BBC focused particularly on Apple’s error, rather than exploring the other errors that come with AI?

To some extent the story misses the biggest point: if AI isn’t yet ready to handle a task as relatively trivial as automatically summarizing news headlines, that bodes ill for all the other things we’re told AI should be used for. By extension, it means every AI system, from autonomous vehicles to public transit management to machine-supported health services, can make mistakes.

Knowing that machines make errors might help people better prepare to handle those errors as they occur. As AI becomes more widely deployed, it becomes all the more important to plan for what to do when things go wrong.

The biggest takeaway from the relatively trivial Apple News headline story is that things will go wrong. So what are we going to do when that happens, particularly when the errors are more serious than a botched headline?

Why mistakes happen

One more difference between human and machine is that it is not always possible to identify where AI errors originate. Human error, after all, can in most cases be discussed and its causes understood.

In contrast, machine-driven errors emerge from whatever algorithms drive the AI, from relationships and decision-making processes that may not be transparent at all: the so-called “black box” problem that machine intelligence practitioners have worried about for decades. At times, the logic behind those errors isn’t visible, which means mistakes can easily recur.

It is not just Apple Intelligence that “hallucinates,” either. Every such system hallucinates, and it is vital to recognize this before too much discretionary power is handed to these machines. It would also be useful to see major news corporations take a deeper look at the extent to which AI reflects the prejudices of those who own it, rather than trivializing that important question by reducing it to a discussion of a single brand.

There is a danger, after all, that AI in news becomes a living example of centralized media ownership on steroids, holding up a mirror to the world that reflects an ever-narrowing outlook.

We need tough scrutiny for AI

Given that AI is expected to have a profound impact on culture and society, it seems important to give its implementation serious scrutiny. At the very least, Apple’s proposed solution (making sure people can easily tell when AI has been used to summarize a news headline) seems a relevant first step toward putting such scrutiny in place.

We should demand the same transparency wherever AI is applied: in health insurance payment denials, for example. That’s as true for Apple (which is itself planning to extend Apple News into new markets) as it is for anyone else in the business of using AI to get things done.

At the end of the day, the story is not the headline. The story is why the headline was put there in the first place. At Apple. And at the BBC.

You can follow me on social media! Join me on BlueSky, LinkedIn, Mastodon, and MeWe.
https://www.computerworld.com/article/3632904/why-apples-ai-driven-reality-distortion-matters.html
