Navigation
Search
|
DeepSeek’s Safety Guardrails Failed Every Test Researchers Threw at Its AI Chatbot
Friday January 31, 2025. 07:30 PM , from Wired: Cult of Mac
Security researchers tested 50 well-known jailbreaks against DeepSeek’s popular new AI chatbot. It didn’t stop a single one.
https://www.wired.com/story/deepseeks-ai-jailbreak-prompt-injection-attacks/
Related News |
46 sources
Current Date
Feb, Sat 1 - 04:04 CET
|