When AI fails, who is to blame?
Friday, May 30, 2025, 01:00 PM, from ComputerWorld
To state the obvious: Our species has fully entered the Age of AI. And AI is here to stay.
The fact that AI chatbots appear to speak human language has become a major source of confusion. Companies are making and selling AI friends, lovers, pets, and therapists. Some AI researchers falsely claim their AI and robots can "feel" and "think." Even Apple falsely says it's building a lamp that can feel emotion.

Another source of confusion is whether AI is to blame when it fails, hallucinates, or outputs errors that affect people in the real world. Just look at some of the headlines: "Who's to Blame When AI Makes a Medical Error?" "Human vs. AI: Who is responsible for AI mistakes?" "In a World of AI Agents, Who's Accountable for Mistakes?"

Look, I'll give you the punchline in advance: The user is responsible. AI is a tool like any other. If a truck driver falls asleep at the wheel, it's not the truck's fault. If a surgeon leaves a sponge inside a patient, it's not the sponge's fault. If a prospective college student gets a horrible score on the SAT, it's not the fault of their No. 2 pencil.

It's easy for me to claim that users are to blame for AI errors. But let's dig into the question more deeply.

Writers caught with their prose down

Lena McDonald, a fantasy romance author, got caught using AI to copy another writer's style. Her latest novel, Darkhollow Academy: Year 2, released in March, contained the following riveting line in Chapter 3: "I've rewritten the passage to align more with J. Bree's style, which features more tension, gritty undertones, and raw emotional subtext beneath the supernatural elements." This was clearly copied and pasted from an AI chatbot, along with words she was passing off as her own.

This news is sad and funny, but not unique. In 2025 alone, at least two other romance authors, K.C. Crowne and Rania Faris, were caught with similar AI-generated text left in their self-published novels, suggesting a wider trend.

It happens in journalism, too. On May 18, the Chicago Sun-Times and The Philadelphia Inquirer published a "Summer Reading List for 2025" in their Sunday print supplements, featuring 15 books supposedly written by well-known authors. Unfortunately, most of the books don't exist. Tidewater Dreams by Isabel Allende, Nightshade Market by Min Jin Lee, and The Last Algorithm by Andy Weir are fake books attributed to real authors. The fake books were dreamed up by AI, which the writer Marco Buscaglia admitted to using. (The article itself was not produced by the newspapers that printed it. The story originated with King Features Syndicate, a division of Hearst, which created and distributed the supplement to multiple newspapers nationwide.)

Whose fault was this? Well, it was clearly the writer's fault. A writer's job always involves editing. A writer needs to, at minimum, read their own words and consider cuts, expansions, rewording, and other changes. In all these cases, the authors failed to act as professional writers. They didn't even read their own books or the books they recommended.

Fact-checkers exist at some publications and not at others. Either way, it's up to writers to have good reason to assert facts or use quotes. Writers are also editors and fact-checkers. It's just part of the job.

I use these real-life examples because they demonstrate clearly that the writer, the AI user, is definitely to blame when errors occur with AI chatbots. The user chooses the tool, does the prompt engineering, sees the output, and either catches and corrects errors or does not.

OK, but what about bigger errors?
Air Canada's chatbot last year told a customer about a bereavement refund policy that didn't exist. When the customer took the airline to a small-claims tribunal, Air Canada argued the chatbot was a "separate legal entity." The tribunal didn't buy it and ruled against the airline.

Google's AI Overviews became a punchline after telling users to put glue on pizza and eat small rocks.

Apple's AI-powered notification summaries created fake headlines, including a false report that Israeli Prime Minister Benjamin Netanyahu had been arrested.

Canadian lawyer Chong Ke cited two court cases provided by ChatGPT in a custody dispute. The AI completely fabricated both cases, and Ke was ordered to pay the opposing counsel's research costs.

Last year, various reports exposed major flaws in AI-powered medical transcription tools, especially those based on OpenAI's Whisper model. Researchers found that Whisper frequently "transcribes" content that was never said. A study presented at the Association for Computing Machinery's FAccT conference found that about 1% of Whisper's transcriptions contained fabricated content, and that nearly 38% of those errors could potentially cause harm in a medical setting.

Every single one of these errors and problems falls squarely on the users of AI, and any attempt to blame the AI tools in use is just confusion about what AI is.

The big picture

What all my examples above have in common is that users let AI do the user's job unsupervised.

The opposite end of the spectrum from turning your job over to unsupervised AI is not using AI at all. In fact, many companies and organizations explicitly ban the use of AI chatbots and other AI tools. This is often a mistake, too.

Acclimating ourselves to the Age of AI means finding a middle ground where we use AI tools to improve how we do our jobs. Most of us should use AI. But we should learn to use it well and check every single thing it does, based on the knowledge that any use of AI is 100% the user's responsibility.

I expect the irresponsible use of AI will continue to cause errors, problems, and even catastrophes. But don't blame the software. In the immortal words of the fictional HAL 9000 AI supercomputer from 2001: A Space Odyssey: "It can only be attributable to human error."
https://www.computerworld.com/article/3998202/when-ai-fails-who-is-to-blame.html