OpenAI Says Models Programmed To Make Stuff Up Instead of Admitting Ignorance

Wednesday September 17, 2025, 07:28 PM, from Slashdot
AI models often produce false outputs, or 'hallucinations.' Now OpenAI has admitted they may result from fundamental mistakes it makes when training its models. The Register: The admission came in a paper [PDF] published in early September, titled 'Why Language Models Hallucinate,' and penned by three OpenAI researchers and Santosh Vempala, a distinguished professor of computer science at Georgia Institute of Technology. It concludes that 'the majority of mainstream evaluations reward hallucinatory behavior.'

The fundamental problem is that AI models are trained in a way that rewards guessing rather than admitting uncertainty. A guess might produce a superficially plausible answer, whereas telling users the AI can't find one is less satisfying. As a test case, the team asked an OpenAI bot for the birthday of one of the paper's authors, OpenAI research scientist Adam Tauman Kalai. It produced three incorrect answers, because training had taught the model to return something rather than admit ignorance. 'Over thousands of test questions, the guessing model ends up looking better on scoreboards than a careful model that admits uncertainty,' OpenAI admitted in a blog post accompanying the release.
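To see why a plain-accuracy scoreboard favors guessing, here is a minimal sketch. The question count, probabilities, and scoring rule are illustrative assumptions, not OpenAI's actual evaluation setup: an answer scores 1 if correct and 0 otherwise, so abstaining never earns points while a blind guess sometimes does.

import random

# Hypothetical illustration: under binary accuracy scoring, "I don't know"
# always scores 0, so a model that guesses when unsure edges out one that
# abstains, even if most of its guesses are wrong.
random.seed(0)
N_QUESTIONS = 10_000
P_KNOWN = 0.7        # assumed fraction of questions the model actually knows
P_LUCKY_GUESS = 0.1  # assumed chance a blind guess happens to be right

def accuracy(abstain_when_unsure: bool) -> float:
    correct = 0
    for _ in range(N_QUESTIONS):
        if random.random() < P_KNOWN:
            correct += 1  # known answer: scored as correct
        elif not abstain_when_unsure:
            # guessing anyway: occasionally lucky, otherwise wrong but costless
            correct += random.random() < P_LUCKY_GUESS
        # abstaining scores 0 on this question
    return correct / N_QUESTIONS

print("guessing model:", accuracy(abstain_when_unsure=False))  # ~0.73
print("careful model :", accuracy(abstain_when_unsure=True))   # ~0.70

Under these assumed numbers the guessing model looks about three points better on the leaderboard, which is the incentive the paper argues mainstream evaluations create.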

Read more of this story at Slashdot.
https://slashdot.org/story/25/09/17/1724241/openai-says-models-programmed-to-make-stuff-up-instead-o...
