AI Hallucinations Lead To a New Cyber Threat: Slopsquatting

Tuesday April 22, 2025, 03:40 AM, from Slashdot
Researchers have uncovered a new supply chain attack called Slopsquatting, in which threat actors exploit hallucinated, non-existent package names generated by AI coding tools like GPT-4 and CodeLlama. These believable yet fake packages, representing almost 20% of the samples tested, can be registered by attackers to distribute malicious code. CSO Online reports: Slopsquatting, as researchers are calling it, is a term first coined by Seth Larson, a security developer-in-residence at the Python Software Foundation (PSF), for its resemblance to the typosquatting technique. Instead of relying on a user's mistake, as in typosquats, threat actors rely on an AI model's mistake. A significant share of the packages recommended in test samples, 19.7% (205,000 packages), were found to be fakes. Open-source models -- like DeepSeek and WizardCoder -- hallucinated more frequently, at 21.7% on average, compared to commercial ones (5.2%) like GPT-4. Researchers found CodeLlama (hallucinating in over a third of outputs) to be the worst offender, and GPT-4 Turbo (just 3.59% hallucinations) to be the best performer.
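
Because the attack hinges on a suggested name not yet existing on a registry, one practical mitigation is to check AI-recommended dependencies against the registry before installing them. The sketch below is a minimal illustration rather than anything from the study: it queries PyPI's public JSON API (https://pypi.org/pypi/<name>/json, which returns 404 for unregistered names), and the package names in the usage example are made up.

    # Minimal sketch: verify that an AI-suggested package name actually
    # exists on PyPI before installing it. Purely illustrative; a real
    # defense would also weigh package age, download counts, and
    # maintainer reputation.
    import urllib.request
    import urllib.error

    def package_exists_on_pypi(name: str) -> bool:
        url = f"https://pypi.org/pypi/{name}/json"
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.status == 200
        except urllib.error.HTTPError as err:
            if err.code == 404:
                return False  # unregistered name: a slopsquatting candidate
            raise  # other HTTP errors tell us nothing either way

    # Screen a list of AI-recommended dependencies before running pip.
    for pkg in ["requests", "some-hallucinated-pkg-name"]:
        print(pkg, "->", "exists" if package_exists_on_pypi(pkg) else "NOT on PyPI")

Note that existence alone is not proof of safety: an attacker may already have registered the hallucinated name, which is the attack itself, so new or rarely downloaded packages deserve extra scrutiny.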

These package hallucinations are particularly dangerous because they were found to be persistent, repetitive, and believable. When researchers reran 500 prompts that had previously produced hallucinated packages, 43% of the hallucinations reappeared in every one of 10 successive re-runs, and 58% appeared in more than one run. The study concluded that this persistence indicates 'that the majority of hallucinations are not just random noise, but repeatable artifacts of how the models respond to certain prompts.' This increases their value to attackers, it added. Additionally, these hallucinated package names were observed to be 'semantically convincing.' Thirty-eight percent of them had moderate string similarity to real packages, suggesting a similar naming structure. 'Only 13% of hallucinations were simple off-by-one typos,' Socket added. The research can be found in a paper on arXiv.org (PDF).
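
To make the 'moderate string similarity' finding concrete, the toy sketch below scores made-up hallucinated names against a few real PyPI package names. The study's exact similarity metric is not given in this excerpt; Python's standard-library difflib ratio is used here purely as a stand-in, and all names below are illustrative.

    # Illustrative only: score how plausible a hallucinated package name
    # looks by comparing it to known real names. difflib's ratio is a
    # normalized matching-subsequence score, not necessarily the metric
    # the researchers used.
    from difflib import SequenceMatcher

    REAL_PACKAGES = ["requests", "beautifulsoup4", "numpy", "flask"]

    def closest_real_match(candidate: str) -> tuple[str, float]:
        """Return the real package name most similar to the candidate."""
        scored = [(real, SequenceMatcher(None, candidate, real).ratio())
                  for real in REAL_PACKAGES]
        return max(scored, key=lambda pair: pair[1])

    # A name like "requestes" scores high against "requests": it looks
    # plausible, which is exactly what makes these hallucinations useful
    # to an attacker who registers them.
    for fake in ["requestes", "beautifulsup", "numppy"]:
        match, score = closest_real_match(fake)
        print(f"{fake!r} ~ {match!r} (similarity {score:.2f})")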

Read more of this story at Slashdot.
https://it.slashdot.org/story/25/04/22/0118200/ai-hallucinations-lead-to-a-new-cyber-threat-slopsqua...
