Anthropic Makes 'Jailbreak' Advance To Stop AI Models Producing Harmful Results
Monday February 3, 2025, 07:10 PM, from Slashdot
The development by Anthropic, which is in talks to raise $2 billion at a $60 billion valuation, comes amid growing industry concern over 'jailbreaking' -- attempts to manipulate AI models into generating illegal or dangerous information, such as instructions for building chemical weapons. Other companies are also racing to deploy safeguards against the practice, moves that could help them avoid regulatory scrutiny while persuading businesses to adopt AI models safely. Microsoft introduced 'prompt shields' last March, and Meta released a prompt guard model in July last year; researchers swiftly found ways to bypass it, though those flaws have since been fixed. Read more of this story at Slashdot.
https://slashdot.org/story/25/02/03/1810255/anthropic-makes-jailbreak-advance-to-stop-ai-models-prod...