It's trivially easy to poison LLMs into spitting out gibberish, says Anthropic
Thursday, October 9, 2025, 10:45 PM, from TheRegister
Just 250 malicious training documents can poison a 13B-parameter model - that's 0.00016% of its entire training data
Poisoning AI models might be way easier than previously thought if an Anthropic study is anything to go on. …
https://go.theregister.com/feed/www.theregister.com/2025/10/09/its_trivially_easy_to_poison/
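The percentage in the subhead is easy to sanity-check. Here is a minimal back-of-envelope sketch, assuming the 13B model is trained at a Chinchilla-style ~20 tokens per parameter and that the 0.00016% figure is measured in tokens; neither assumption is stated in the blurb itself:

    # Back-of-envelope check of the subhead's figures. Assumptions (not from
    # the article): Chinchilla-optimal training at ~20 tokens per parameter,
    # and the 0.00016% measured over tokens rather than documents.

    params = 13e9                       # 13B-parameter model
    tokens_per_param = 20               # Chinchilla heuristic (assumed)
    total_tokens = params * tokens_per_param   # ~260B training tokens

    poison_fraction = 0.00016 / 100     # 0.00016% as a fraction
    poison_tokens = total_tokens * poison_fraction
    poison_docs = 250                   # malicious documents in the study

    print(f"total training tokens: {total_tokens:.3g}")   # ~2.6e+11
    print(f"poisoned tokens:       {poison_tokens:.3g}")   # ~4.16e+05
    print(f"tokens per poison doc: {poison_tokens / poison_docs:.0f}")  # ~1664

Under those assumptions, the 250 documents work out to roughly 416,000 tokens, or about 1,600 tokens each, against some 260 billion total training tokens.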