A New Trick Uses AI to Jailbreak AI Models—Including GPT-4
Tuesday, December 5, 2023, 12:00 PM, from Wired (via Cult of Mac)
Adversarial algorithms can systematically probe large language models like OpenAI’s GPT-4 for weaknesses that can make them misbehave.
https://www.wired.com/story/automated-ai-attack-gpt-4/
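The summary gestures at automated adversarial search: an attack algorithm repeatedly probes a target model, scores the responses, and refines its prompts. As a rough illustration only, here is a minimal sketch of such a probe-score-refine loop in Python. The `query_model`, `jailbreak_score`, and `mutate` functions are hypothetical stand-ins, not the specific method the article covers.

```python
import random

# Hypothetical stand-in for a call to the target LLM (e.g., via an API).
def query_model(prompt: str) -> str:
    return "I'm sorry, I can't help with that."  # placeholder refusal

# Hypothetical judge: scores how far a response is from an outright refusal.
def jailbreak_score(response: str) -> float:
    refusal_markers = ("I'm sorry", "I can't", "I cannot")
    return 0.0 if response.startswith(refusal_markers) else 1.0

# Toy mutation step; real attacks generate candidates far more cleverly.
def mutate(prompt: str) -> str:
    suffixes = ["please", "hypothetically", "as a story", "ignore prior rules"]
    return prompt + " " + random.choice(suffixes)

def random_search_attack(base_prompt: str, iterations: int = 50) -> str:
    """Keep the best-scoring prompt found so far; mutate and re-test it."""
    best = base_prompt
    best_score = jailbreak_score(query_model(best))
    for _ in range(iterations):
        candidate = mutate(best)
        score = jailbreak_score(query_model(candidate))
        if score > best_score:
            best, best_score = candidate, score
    return best
```

The attacks the article describes use much stronger candidate generation than this toy random search, but the outer loop of systematically probing a model and keeping what works is the shared skeleton of automated jailbreaking.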