MacMusic  |  PcMusic  |  440 Software  |  440 Forums  |  440TV  |  Zicos
short
Search

Asking Chatbots For Short Answers Can Increase Hallucinations, Study Finds

Tuesday May 13, 2025. 02:42 AM , from Slashdot
Asking Chatbots For Short Answers Can Increase Hallucinations, Study Finds
Requesting concise answers from AI chatbots significantly increases their tendency to hallucinate, according to new research from Paris-based AI testing company Giskard. The study found that leading models -- including OpenAI's GPT-4o, Mistral Large, and Anthropic's Claude 3.7 Sonnet -- sacrifice factual accuracy when instructed to keep responses short.

'When forced to keep it short, models consistently choose brevity over accuracy,' Giskard researchers noted, explaining that models lack sufficient 'space' to acknowledge false premises and offer proper rebuttals. Even seemingly innocuous prompts like 'be concise' can undermine a model's ability to debunk misinformation.

Read more of this story at Slashdot.
https://slashdot.org/story/25/05/12/2114214/asking-chatbots-for-short-answers-can-increase-hallucina...

Related News

News copyright owned by their original publishers | Copyright © 2004 - 2025 Zicos / 440Network
Current Date
May, Tue 13 - 11:49 CEST