
OpenAI SWE-Lancer Research: “Frontier Models are Still Unable to Solve the Majority of Tasks”

Wednesday February 19, 2025. 06:30 PM , from eWeek
Large language models (LLMs) are better at fixing bugs than at understanding the root problems that cause them, according to an OpenAI study released on Feb. 18, titled “SWE-Lancer: Can Frontier LLMs Earn $1 Million from Real-World Freelance Software Engineering?”

Putting LLMs to work on bug fixes and other software engineering jobs

To evaluate how well LLMs handle real-world software engineering tasks, OpenAI developed a benchmark called SWE-Lancer, built from jobs posted on Upwork, a popular gig work platform.

SWE-Lancer assessed how much money three LLMs — Anthropic’s Claude 3.5 Sonnet, OpenAI’s GPT-4o, and OpenAI’s o1 — could have earned by completing software engineering jobs originally offered on Upwork. However, OpenAI researchers found that “…frontier models are still unable to solve the majority of tasks.”

Testing AI-generated solutions in real-world conditions

Unlike traditional AI assessments that compare models to human cognitive abilities, SWE-Lancer focused on measuring economic impact by simulating real-world gig work.

To build the benchmark, OpenAI researchers curated 764 tasks from Upwork and converted them into a structured dataset using Docker containers. The tasks varied in complexity, ranging from $50 bug fixes to $32,000 feature implementations. Because the LLMs couldn’t directly access the original Upwork posts, the researchers built prompts from them, along with a sample of each project’s code base.
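Since each task carries a real dollar payout, the benchmark’s headline score reduces to summing payouts over the tasks a model resolves. A minimal sketch of that scoring idea, with made-up task values (this is illustrative, not OpenAI’s actual grading code):

```python
# Illustrative sketch of SWE-Lancer-style scoring: each task has a payout,
# and a model's "earnings" are the total payout of the tasks it resolved.
# The tasks and pass/fail outcomes below are invented for the example.
tasks = [
    {"payout": 50,    "passed": True},   # e.g. a small bug fix
    {"payout": 1000,  "passed": False},  # a mid-size feature
    {"payout": 32000, "passed": False},  # a large feature implementation
]

earned = sum(t["payout"] for t in tasks if t["passed"])
resolved_rate = sum(t["passed"] for t in tasks) / len(tasks)

print(earned)         # dollars "earned" by the model on this toy set
print(resolved_rate)  # fraction of tasks resolved
```

Weighting by payout means a model that only solves cheap bug fixes scores far lower than one that lands expensive feature work, which is what makes the metric a proxy for economic impact.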

The models’ responses were evaluated using Playwright, an open-source browser automation and testing library, to verify end-to-end whether each proposed solution actually worked.
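The kind of check such an end-to-end test runs can be sketched as follows. For the sketch to stay self-contained, the “page content” is a static string; in a real harness it would come from a Playwright-driven browser (e.g. `page.content()` in `playwright.sync_api`), and the function name and sample HTML here are hypothetical:

```python
# Hypothetical sketch of an end-to-end grading check: after applying a
# model's patch, load the app and verify the behavior the task demanded
# actually appears. Here the rendered page is simulated with a string.
def check_fix(page_html: str, expected_text: str) -> bool:
    """Pass the task only if the patched page shows the required element."""
    return expected_text in page_html

# Simulated page content after a (hypothetical) successful bug fix.
patched_page = "<html><body><button id='save'>Save draft</button></body></html>"
print(check_fix(patched_page, "Save draft"))   # fix present -> passes
print(check_fix(patched_page, "Delete draft")) # missing behavior -> fails
```

Grading against rendered behavior rather than code diffs is what lets the benchmark accept any working solution, not just one matching a reference patch.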

Even the strongest performer, Claude 3.5 Sonnet, resolved only 26.2% of tasks correctly, according to the experienced human software engineers overseeing the experiment. While LLMs can analyze data rapidly, they often fail to identify the underlying cause of an issue, leading to incorrect solutions.

Claude 3.5 Sonnet performed best in OpenAI’s SWE-Lancer research. Image: OpenAI

‘Real-world freelance work … remains challenging for AI’

“Results indicate that the real-world freelance work in our benchmark remains challenging for frontier language models,” wrote OpenAI researchers Samuel Miserendino, Michele Wang, Tejal Patwardhan, and Johannes Heidecke. 

Previous research on applying generative AI to software engineering examined specific tasks in self-contained environments, the researchers wrote.

“In the real world, however, software engineers operate across the full technology stack and must reason about complex inter-codebase interactions and tradeoffs,” they said. 

The dataset for SWE-Lancer is available on GitHub. It provides researchers with insights into AI’s evolving role in software development.
https://www.eweek.com/news/openai-swelancer-llm-benchmarks/
