AI Models Still Struggle To Debug Software, Microsoft Study Shows

Friday, April 11, 2025, 07:20 AM, from Slashdot
Some of the best AI models today still struggle to resolve software bugs that wouldn't trip up experienced devs. TechCrunch: A new study from Microsoft Research, Microsoft's R&D division, reveals that models, including Anthropic's Claude 3.7 Sonnet and OpenAI's o3-mini, fail to debug many issues in a software development benchmark called SWE-bench Lite. The results are a sobering reminder that, despite bold pronouncements from companies like OpenAI, AI is still no match for human experts in domains such as coding.

The study's co-authors tested nine different models as the backbone for a 'single prompt-based agent' that had access to a number of debugging tools, including a Python debugger. They tasked this agent with solving a curated set of 300 software debugging tasks from SWE-bench Lite.
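To make the setup concrete, here is a minimal sketch of what a single prompt-based debugging agent loop can look like: show the model the issue and the latest test failure, ask for a patch, apply it, and re-run the tests. This is an illustration only; the function names (query_model, run_tests, debug_one_task) and the patch-application flow are assumptions for the example, not the harness used in the Microsoft Research study.

import subprocess

def run_tests(repo_dir: str) -> str:
    # Run the project's test suite and capture the failure output.
    result = subprocess.run(
        ["python", "-m", "pytest", "-x"],
        cwd=repo_dir, capture_output=True, text=True,
    )
    return result.stdout + result.stderr

def query_model(prompt: str) -> str:
    # Hypothetical call to an LLM backbone (Claude 3.7 Sonnet, o1, o3-mini, ...);
    # plug in whatever model API you actually use.
    raise NotImplementedError("model call goes here")

def debug_one_task(repo_dir: str, issue_text: str, max_turns: int = 10) -> bool:
    # Single prompt-based agent loop: the task counts as resolved
    # only if the tests pass within the turn budget.
    for _ in range(max_turns):
        failure_log = run_tests(repo_dir)
        if "failed" not in failure_log.lower():
            return True
        prompt = (
            f"Issue description:\n{issue_text}\n\n"
            f"Test output:\n{failure_log}\n\n"
            "Propose a unified diff that fixes the bug."
        )
        patch = query_model(prompt)
        subprocess.run(["git", "apply", "-"], cwd=repo_dir,
                       input=patch, text=True)
    return False

In the study's setting the agent additionally had access to interactive debugging tools such as a Python debugger, which this sketch omits for brevity.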

According to the co-authors, even when equipped with stronger and more recent models, their agent rarely completed more than half of the debugging tasks successfully. Claude 3.7 Sonnet had the highest average success rate (48.4%), followed by OpenAI's o1 (30.2%), and o3-mini (22.1%).

Read more of this story at Slashdot.
https://developers.slashdot.org/story/25/04/11/0519242/ai-models-still-struggle-to-debug-software-mi...
