Apple Researchers Challenge AI Reasoning Claims With Controlled Puzzle Tests
Monday, June 9, 2025, 04:00 PM, from Slashdot
The study, which examined models using Tower of Hanoi, checker jumping, river crossing, and blocks world puzzles rather than standard mathematical benchmarks, found three distinct performance regimes that contradict conventional assumptions about AI reasoning progress. At low complexity levels, standard language models surprisingly outperformed their reasoning-enhanced counterparts while using fewer computational resources. At medium complexity, reasoning models demonstrated advantages, but both model types experienced complete accuracy collapse at high complexity levels.

Most striking was the counterintuitive finding that reasoning models actually reduced their computational effort as problems became more difficult, despite operating well below their token generation limits. Even when researchers provided explicit solution algorithms, requiring only step-by-step execution rather than creative problem-solving, the models' performance failed to improve significantly.

The researchers noted fundamental inconsistencies in how models applied learned strategies across different problem scales, with some models successfully handling 100-move sequences in one puzzle type while failing after just five moves in simpler scenarios.

Read more of this story at Slashdot.
https://apple.slashdot.org/story/25/06/09/1151210/apple-researchers-challenge-ai-reasoning-claims-wi...
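To make the "explicit solution algorithm" point concrete: Tower of Hanoi has a well-known recursive solution that produces every move mechanically, so executing it requires no creative problem-solving at all. Below is a minimal sketch of that classic algorithm; it is illustrative only, not the researchers' actual prompt or evaluation harness, and the peg names are arbitrary.

```python
def hanoi(n, src="A", aux="B", dst="C"):
    """Return the full solution for n disks as a list of (disk, from, to) moves."""
    if n == 0:
        return []
    # Move the n-1 smaller disks aside, move the largest disk, then restack.
    return (hanoi(n - 1, src, dst, aux)
            + [(n, src, dst)]
            + hanoi(n - 1, aux, src, dst))

moves = hanoi(3)
print(len(moves))  # 2**3 - 1 = 7
```

Note that the move count is 2^n - 1, doubling with each added disk, which is one way puzzle "complexity" can be dialed up smoothly in such controlled tests.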