This Prompt Can Make an AI Chatbot Identify and Extract Personal Details From Your Chats
Thursday, October 17, 2024, 12:30 PM, from Wired
Security researchers created an algorithm that turns a malicious prompt into a set of hidden instructions that could send a user's personal information to an attacker.
https://www.wired.com/story/ai-imprompter-malware-llm/