How 'sleeper agent' AI assistants can sabotage your code without you realizing
Tuesday, January 16, 2024, 10:30 PM, from The Register
Today's safety guardrails won't catch these backdoors, study warns
Analysis: AI biz Anthropic has published research showing that large language models (LLMs) can be subverted in a way that safety training doesn't currently address.…
https://go.theregister.com/feed/www.theregister.com/2024/01/16/poisoned_ai_models/