OpenAI Says Models Programmed To Make Stuff Up Instead of Admitting Ignorance
Wednesday, September 17, 2025, 07:28 PM, from Slashdot
The fundamental problem is that AI models are trained in a way that rewards guesswork rather than correct answers. Guessing might produce a superficially suitable answer, while telling users your AI can't find an answer is less satisfying. As a test case, the team tried to get an OpenAI bot to report the birthday of one of the paper's authors, OpenAI research scientist Adam Tauman Kalai. It produced three incorrect results because training taught the engine to return an answer rather than admit ignorance. 'Over thousands of test questions, the guessing model ends up looking better on scoreboards than a careful model that admits uncertainty,' OpenAI admitted in a blog post accompanying the release. Read more of this story at Slashdot.
https://slashdot.org/story/25/09/17/1724241/openai-says-models-programmed-to-make-stuff-up-instead-o...
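To see why a binary-graded scoreboard favors guessing, here is a minimal simulation. This is an illustration under assumed numbers (the `P_KNOW` and `P_LUCKY_GUESS` values are made up for the sketch), not OpenAI's actual evaluation: a "guesser" that always answers beats a "careful" model that abstains when unsure, because an abstention earns zero credit, exactly like a wrong answer, while an uninformed guess is occasionally right.

```python
import random

random.seed(0)

N = 10_000            # number of test questions
P_KNOW = 0.6          # fraction of questions either model actually knows (assumed)
P_LUCKY_GUESS = 0.1   # chance a blind guess happens to be right (assumed)

guesser_score = 0     # always answers; guesses when it doesn't know
careful_score = 0     # says "I don't know" when unsure, which scores 0

for _ in range(N):
    knows = random.random() < P_KNOW
    if knows:
        guesser_score += 1
        careful_score += 1
    else:
        # The guesser gambles; a lucky guess earns full credit.
        if random.random() < P_LUCKY_GUESS:
            guesser_score += 1
        # The careful model abstains: 0 points, same as a wrong answer.

print(f"guesser accuracy: {guesser_score / N:.3f}")  # about 0.64
print(f"careful accuracy: {careful_score / N:.3f}")  # about 0.60
```

With these assumed rates the guesser's expected score is P_KNOW + (1 - P_KNOW) * P_LUCKY_GUESS = 0.64 versus the careful model's 0.60, so the guesser looks better on the leaderboard even though every point of its advantage comes from fabricated answers.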