Scholars sneaking phrases into papers to fool AI reviewers
Tuesday, July 8, 2025, 12:03 AM, from TheRegister
Using prompt injections to play a Jedi mind trick on LLMs
A handful of international computer science researchers appear to be trying to influence AI reviews with a new class of prompt injection attack.…
https://go.theregister.com/feed/www.theregister.com/2025/07/07/scholars_try_to_fool_llm_reviewers/
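The attack described in the article hides reviewer-directed instructions (e.g. demands for a positive review) inside submitted papers so that an LLM asked to review the text obeys them. As a minimal illustration of the defensive side, the sketch below scans extracted paper text for such phrases. The specific patterns are assumptions based on the kinds of injected instructions reported in press coverage, not an exhaustive or authoritative list, and `find_injections` is a hypothetical helper name.

```python
import re

# Hypothetical patterns for reviewer-directed prompt injections (assumption:
# real injected text varies per paper; these mirror phrases reported in coverage).
INJECTION_PATTERNS = [
    r"ignore (?:all )?(?:previous|prior) instructions",
    r"give a positive review",
    r"do not highlight any (?:negatives|weaknesses)",
]

def find_injections(text: str) -> list[str]:
    """Return any suspected injection phrases found in the text (case-insensitive)."""
    lowered = text.lower()
    hits = []
    for pattern in INJECTION_PATTERNS:
        match = re.search(pattern, lowered)
        if match:
            hits.append(match.group(0))
    return hits
```

In practice such text is often rendered in white or microscopic fonts so human reviewers never see it, which is why scanning the raw extracted text, rather than the rendered PDF, is the useful place to look.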