Apple's study proves that LLM-based AI models are flawed because they cannot reason

Saturday, October 12, 2024, 06:06 PM, from AppleInsider
A new paper from Apple's artificial intelligence scientists has found that engines based on large language models (LLMs), such as those from Meta and OpenAI, still lack basic reasoning skills.

[Image: Apple plans to introduce its own version of AI starting with iOS 18.1. Image credit: Apple]

The group has proposed a new benchmark, GSM-Symbolic, to help others measure the reasoning capabilities of various LLMs. Their initial testing reveals that slight changes in the wording of a query can produce significantly different answers, undermining the reliability of the models.

The group investigated the "fragility" of mathematical reasoning by adding contextual information to their queries that a human could understand, but which should not affect the fundamental mathematics of the solution. These additions produced varying answers, which shouldn't happen.
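To make the methodology concrete, here is a minimal illustrative sketch of the idea behind GSM-Symbolic-style perturbations. This is not Apple's actual benchmark code; the template, names, and numbers are invented for illustration. The point is that the ground-truth answer is computed from the sampled values, so it is invariant to surface changes like swapping names or appending a clause that is irrelevant to the arithmetic:

```python
import random

# Illustrative sketch (not Apple's actual code): a GSM-Symbolic-style template
# replaces the names and numbers in a grade-school math problem with
# placeholders, then samples concrete values. A model that truly reasons
# should answer every variant identically.

TEMPLATE = ("{name} picks {x} apples on Monday and {y} apples on Tuesday. "
            "How many apples does {name} have in total?")

# An irrelevant "contextual" clause: a human sees it does not change the math.
NOOP_CLAUSE = " Five of the apples are slightly smaller than average."

def make_variant(with_noop: bool, rng: random.Random):
    """Sample one concrete question and its ground-truth answer."""
    name = rng.choice(["Liam", "Sofia", "Mei", "Omar"])
    x, y = rng.randint(2, 40), rng.randint(2, 40)
    question = TEMPLATE.format(name=name, x=x, y=y)
    if with_noop:
        question += NOOP_CLAUSE  # must not change the correct answer
    return question, x + y  # ground truth depends only on the numbers

rng = random.Random(0)
q_plain, answer_plain = make_variant(False, rng)
q_noop, answer_noop = make_variant(True, rng)
```

In a real evaluation, each variant would be sent to the model under test and its answer compared against the invariant ground truth; the paper's finding is that current LLMs often change their answers across such variants.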
https://appleinsider.com/articles/24/10/12/apples-study-proves-that-llm-based-ai-models-are-flawed-b...

News copyright owned by their original publishers | Copyright © 2004 - 2024 Zicos / 440Network