
Large language models hallucinating non-existent developer packages could fuel supply chain attacks

Monday, September 30, 2024, 11:28 PM, from InfoWorld
Large language models (LLMs) have a serious “package hallucination” problem that could lead to a wave of maliciously coded packages entering the software supply chain, researchers have discovered in one of the largest and most in-depth studies yet to investigate the problem.

It’s so bad, in fact, that across 30 different tests, the researchers found that 440,445 (19.7%) of the 2.23 million code samples they generated experimentally contained references to hallucinated packages. The samples were produced in two of the most popular programming languages, Python and JavaScript, using 16 different LLMs for Python and 14 for JavaScript.

The multi-university study, first published in June but recently updated, also generated “a staggering 205,474 unique examples of hallucinated package names, further underscoring the severity and pervasiveness of this threat.”

The problem has its roots in the popularity of the numerous Python and JavaScript libraries and packages that developers use to quickly assemble programs from many smaller parts.

Many popular repositories already have a problem with malicious code. The researchers note that one 2023 study discovered 245,000 malicious packages in open-source repositories alone.

Hallucination nightmare

Unfortunately, the current study suggests that the arrival of AI is only going to make things worse through LLM “package hallucination.”

LLMs are already notorious for hallucinations, making up nonsense answers to queries. The same thing happens in coding: a developer gives the LLM a coding prompt and occasionally receives a nonsense answer in return.

In the case of package hallucination, the LLM goes a stage further and recommends or generates code that references a package that doesn’t exist in any software repository.

Normally, this would simply cause any code referring to the package to fail. However, a second possibility is the “package confusion” attack, in which attackers bring hallucinated packages into existence by registering the invented names and seeding them with malware.

The next stage would be to trick developers into downloading them so that they are eventually included in larger, legitimate programs. Attackers could even make the code legitimate to start with, to build trust before unleashing a payload later on.

“Unsuspecting users, who trust the LLM output, may not scrutinize the validity of these hallucinated packages in the generated code and could inadvertently include these malicious packages in their codebase,” say the researchers.

“This resulting insecure open-source code also has the potential of being included in the dependency chain of other packages and code, leading to a cascading effect where vulnerabilities are propagated across numerous codebases.”
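
That kind of scrutiny doesn’t have to be elaborate. As a rough illustration only (a minimal sketch that isn’t drawn from the study, with placeholder package names), a short script can ask the public PyPI JSON API whether each dependency an LLM suggests actually exists before anything gets installed:

import urllib.error
import urllib.request

def exists_on_pypi(name: str) -> bool:
    """Return True if PyPI knows about `name`, False on a 404."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise  # a network or server error, not a verdict on the package

if __name__ == "__main__":
    # "some-hallucinated-package-name" is a deliberately fake example.
    for pkg in ["requests", "some-hallucinated-package-name"]:
        verdict = "found on PyPI" if exists_on_pypi(pkg) else "NOT on PyPI - do not install blindly"
        print(f"{pkg}: {verdict}")

The obvious catch, as the researchers’ later discussion makes clear, is that a lookup like this only flags names nobody has registered yet; once an attacker uploads a malicious package under the hallucinated name, it passes the check.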

Open sesame

While none of the LLMs tested were immune to the problem, some were noticeably worse than others.

“GPT-series models were found four times less likely to generate hallucinated packages compared to open-source models, with a 5.2% hallucination rate compared to 21.7%,” the study noted.

Python code was also less susceptible to the phenomenon than JavaScript, the study found.  

Package confusion attacks on repositories have been around for years, usually involving typosquatting (exploiting name similarity) or brandjacking. On top of that are more conventional attacks where criminals upload malicious packages to repositories, or simply corrupt legitimate packages.
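
The name-similarity half of that problem can at least be screened for. As a hedged sketch (the list of popular packages below is a tiny illustrative sample, not a real dataset), Python’s standard difflib module can flag candidate names that look suspiciously close to well-known ones:

import difflib

# Tiny illustrative sample; a real check would use a much larger dataset of popular packages.
POPULAR = ["requests", "numpy", "pandas", "django", "flask", "urllib3"]

def possible_typosquat(candidate: str, cutoff: float = 0.85) -> list[str]:
    """Return popular names the candidate closely resembles, excluding exact matches."""
    if candidate in POPULAR:
        return []
    return difflib.get_close_matches(candidate, POPULAR, n=3, cutoff=cutoff)

if __name__ == "__main__":
    for name in ["reqeusts", "numpyy", "totally-novel-package"]:
        hits = possible_typosquat(name)
        if hits:
            print(f"{name}: looks suspiciously like {hits}")
        else:
            print(f"{name}: no close match among the popular packages checked")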

Hallucination could supercharge this, the researchers argue. Earlier in 2024, researcher Bar Lanyado of Lasso Security sent a small shudder through parts of the developer world when he discovered that several large companies, including ecommerce giant Alibaba, were using or recommending a Python software package called “huggingface-cli”.

The package was completely hallucinated. When he uploaded an empty package with the same name to a repository to test its popularity, it was downloaded more than 30,000 times in a three-month period.

In other words, large numbers of developers were downloading an imaginary software package because an LLM had at some point hallucinated its existence to solve a specific programming task.

So far, no live package confusion attacks have been detected, but the fact that the possibility is now widely known suggests that a real-world incident is only a matter of time.

Is there a solution?

The authors discuss a variety of mitigations for the hallucination problem. One solution they believe would not work is cross-referencing generated package names against some kind of master list; that might detect bogus packages, but it wouldn’t stop them from becoming active threats in the same way that other software threats operate.

A better solution, they say, would be to address the underlying issue of why LLMs generate hallucinations in the first place. This might involve better prompt engineering and the use of retrieval-augmented generation (RAG) to generate narrower responses from specialized data sets.
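
The prompt-engineering part of that needs no special tooling. One hedged sketch of the idea, assuming a project maintains its own list of vetted dependencies (the allow-list and prompt wording below are illustrative assumptions, not taken from the study), is simply to constrain the model up front:

# Illustrative sketch: restrict the model to dependencies the project has already vetted,
# rather than letting it pick package names freely.
ALLOWED_PACKAGES = ["requests", "pydantic", "click"]  # hypothetical vetted list

def build_constrained_prompt(task: str) -> str:
    """Wrap a coding task with an instruction limiting third-party imports."""
    allowed = ", ".join(ALLOWED_PACKAGES)
    return (
        f"{task}\n\n"
        "Only use the Python standard library or these approved packages: "
        f"{allowed}. If the task cannot be done with them, say so rather than "
        "suggesting any other package."
    )

if __name__ == "__main__":
    print(build_constrained_prompt("Write a function that fetches a URL and returns the JSON body."))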

In addition, the LLMs themselves could be fine-tuned to improve output on tasks more likely to generate hallucinations. For that sort of difficult improvement, the world will need the LLM developers themselves to act.

But nobody should hold their breath.

“We have disclosed our research to model providers including OpenAI, Meta, DeepSeek, and Mistral AI. As of this writing we have received no response or feedback,” the authors noted in a recent update to the study.