
The AI-powered cyberattack era is here

Monday September 1, 2025, 01:00 PM, from ComputerWorld
Prognosticators have been prognosticating for 20 years about a future in which hackers use AI to breach networks, steal data, and socially engineer credulous employees. And like so many AI-related futurisms in the age of LLM-based generative AI, this prediction is coming true.

Anthropic reported last week that a hacker used its technology for an AI-fueled crime spree involving large-scale ransomware attacks. The attacker used the Claude chatbot for recon, code generation, credential theft, infiltration, and ransom notes against 17 organizations, including healthcare providers, government agencies, religious charities, and a defense contractor.

The AI even helpfully proposed ransom amounts, ranging from $75,000 to $500,000 in Bitcoin. This marks the first known case where AI choreographed an entire extortion scheme, automating nearly every step.

AI is not only guiding and helping with cyberattacks, but even writing the code. Anthropic and the security firm ESET found that criminals are using generative AI to build and update actual ransomware code itself.

Anthropic identified a UK threat actor, GTG-5004, who developed, sold, and maintained AI-enhanced ransomware kits. Lacking technical skill with encryption or anti-analysis tools, they relied on Anthropic’s Claude chatbot for coding and software packaging. Ransomware services ranged from $400 to $1,200 for different bundles, allowing low-skilled crooks to unleash advanced malware.

These programs actually morph to dodge antivirus scans and slip past new security rules before defenders react.

ESET studied a proof-of-concept called PromptLock, which uses a locally run open-weight OpenAI model to generate and execute malicious scripts on the fly, adapting its behavior to decide which files to target or encrypt.

Researchers hack the chatbots

GenAI chatbots are designed to prevent misuse, but hackers are incentivized to “jailbreak” the tools and bypass their guardrails and alignment mechanisms for malicious purposes. Palo Alto Networks researchers showed how to do it: Just write poorly.

Researchers Tony Li and Hongliang Liu recently published information about how large language models — including Google’s Gemma, Meta’s Llama, and Alibaba’s Qwen — could be tricked by poorly punctuated, run-on sentences.

Sentences with bad grammar and without concluding punctuation can slow down chatbot safety “alignment” mechanisms, allowing harmful prompts to slip through. This method could elicit instructions for committing crimes, gathering private information, creating malware, or committing fraud.

Researchers at Trail of Bits, led by Kikimora Morozova and Suha Sabi Hussain, found another way around chatbot guardrails. By hiding malicious prompts in large, high-resolution photos and letting the AI’s downscaling algorithms reveal those messages, they could make production systems like Google’s Gemini and Vertex AI echo instructions an end user never saw — or intended. Call it a “multimodal prompt injection” attack.

The researchers found these attacks worked across systems, infecting everything from desktop tools to cloud APIs.

This method can trick both chatbots and users. By sending a photo with hidden prompts to large numbers of users, an attacker could weaponize their AI assistants at scale.
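The core trick is aliasing: a downscaler keeps only a fraction of the source pixels, so a payload written into exactly those pixels is invisible in the full-resolution image yet legible in the scaled-down copy the model actually sees. The sketch below illustrates the principle with naive stride sampling; the real attack targets bicubic and bilinear filters in production pipelines, and the helper names here are hypothetical.

```python
import numpy as np

def hide_message(cover: np.ndarray, message: np.ndarray, factor: int) -> np.ndarray:
    """Embed a small 'message' image into a large cover image so that it
    only becomes visible after stride-based downscaling."""
    out = cover.copy()
    # Overwrite exactly the pixels a stride-N downscaler will sample.
    out[::factor, ::factor] = message
    return out

def naive_downscale(img: np.ndarray, factor: int) -> np.ndarray:
    """A stand-in for a resampling step that keeps one pixel per N x N block."""
    return img[::factor, ::factor]

# A 512x512 cover of light-gray noise; at full resolution the altered
# pixels (1 in 64) are easy to overlook.
rng = np.random.default_rng(0)
cover = rng.integers(180, 220, size=(512, 512), dtype=np.uint8)

# The "message": a tiny 64x64 black-and-white pattern standing in for
# rendered prompt text.
message = np.zeros((64, 64), dtype=np.uint8)
message[16:48, 16:48] = 255

stego = hide_message(cover, message, factor=8)
recovered = naive_downscale(stego, factor=8)
assert np.array_equal(recovered, message)  # intact after downscaling
```

In a real pipeline the payload must survive interpolation rather than simple sampling, which is why the published attack carefully tunes pixel values around the specific resampling kernel a target system uses.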

Scammers let AI do the talking

More than six years ago, the cybersecurity world was alarmed when criminals used deepfake audio to impersonate a CEO and steal money. The scammers used voice-cloning technology to imitate the chief executive of the German parent company of a UK-based energy firm. Posing as the German CEO, they called the real UK CEO and urgently requested a transfer of $243,000 to a Hungarian supplier.

In the deepfake era, the crime was unprecedented and exotic. In the genAI era, it’s a banality.

You need just a three-second recording of a person talking, according to McAfee experts. With that snippet, scammers can create convincing fake messages or phone calls, and when the voice sounds like a trusted person, people are ready to hand over cash or secrets. In 2024, the company's global study found that one in four people had experienced an AI voice scam or knew someone who had.

Thanks to genAI, the technology is so good it can fool even a parent. A California man last year got a call from someone using a cloned copy of his son's voice. The fake son claimed he'd been in an accident, had been taken into police custody, and needed bail money fast. After more calls and mounting pressure, the dad withdrew thousands of dollars and sent the money to the scammers.

In early 2025, crooks cloned Italian Defense Minister Guido Crosetto's voice to target big businesses: fashion legend Giorgio Armani, Prada's Patrizio Bertelli, and former Inter Milan boss Massimo Moratti received calls from "Crosetto" about kidnapped journalists in peril. The calls convinced Moratti to transfer nearly one million euros to a Hong Kong account (later traced and frozen in the Netherlands).

The new risk from AI browsers

One challenge in the field of AI-enabled attacks — which is to say, attacks that didn’t exist or weren’t possible before genAI — is how quickly everything changes. Take AI browsers, for example. This new category of web browser includes Perplexity Comet, Dia (by The Browser Company), Fellou, Opera Neon, Sigma AI Browser, Arc Max, Microsoft Edge Copilot, Brave Leo, Wave Browser Pro, SigmaOS, Opera Aria, Genspark AI Browser, Poly, Quetta Browser, Browserbase, Phew AI Tab, and the upcoming OpenAI browser.

The most agentic is Perplexity's Comet browser. It clicks links, navigates web pages, fills out forms, manages emails and calendars, books travel, and makes purchases. It analyzes browsing history, automates multistep workflows, interacts with logged-in services, compares products across websites, unsubscribes from emails, and extracts and synthesizes information from multiple sources. It also manages tabs by opening and closing them, executes complex research tasks autonomously, and provides conversational assistance with contextual awareness across all browsing activities.

Security researchers at Guardio Labs demonstrated how simple it has become for criminals to trick AI browsers into committing crimes. When the researchers instructed Comet to buy an Apple Watch, the AI obediently visited a fake Walmart website they had created in 10 seconds using basic web tools. The browser ignored obvious signs of fraud and automatically filled in saved credit card details and shipping information to complete the purchase. In testing, Comet sometimes refused the transaction or asked for human approval, but in other cases it handed sensitive payment data directly to the scammers.

Brave and Guardio Labs discovered that criminals can manipulate Comet by hiding commands in fake CAPTCHA tests. The attack convinced the AI to click invisible buttons and bypass security checks without the user knowing.

(Note that Vivaldi CEO Jon von Tetzchner last week published a definitive statement announcing that the company will not integrate AI features into its browser because “we will not turn the joy of exploring into inactive spectatorship.”)

Fight fire with fire

The truth is that most attacks are still the old-fashioned kind, performed without help from AI, and most still involve human error. So all the standard guidelines and best practices apply: update software regularly, require multifactor authentication for all logins, and train employees to recognize phishing emails and malicious links. Outside experts should run penetration tests twice a year, and regular offline backups can save thousands after an AI-based ransomware attack.

But in the new AI cyberattack era, AI-based cybersecurity tools have become a requirement. They can scan millions of network events every second and flag problems before anything bad happens.
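The commercial tools are proprietary ML pipelines, but the underlying idea of learning a baseline and flagging deviations from it can be sketched in a few lines. The toy z-score detector below runs over per-minute event counts; the function name, threshold, and sample data are illustrative assumptions, not any vendor's API.

```python
from statistics import mean, stdev

def flag_anomalies(event_counts: list[int], threshold: float = 3.0) -> list[int]:
    """Return indices of intervals whose event count deviates from the
    baseline by more than `threshold` standard deviations (a z-score test).
    Real AI security tools use far richer models, but the goal is the same:
    surface the handful of intervals worth a human analyst's attention."""
    mu = mean(event_counts)
    sigma = stdev(event_counts)
    if sigma == 0:
        return []  # perfectly flat traffic: nothing stands out
    return [i for i, count in enumerate(event_counts)
            if abs(count - mu) / sigma > threshold]

# Per-minute login attempts: a steady baseline with one burst that could
# indicate credential stuffing.
counts = [52, 48, 50, 49, 51, 47, 50, 900, 53, 49, 50, 48]
print(flag_anomalies(counts))  # prints [7], the minute with the burst
```

The real engineering challenge is scale and false positives: at millions of events per second, even a tiny false-alarm rate buries analysts, which is why production systems layer learned models on top of simple statistical tests like this one.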

The unfortunate truth is that the AI-powered cyberattack era has just begun.
https://www.computerworld.com/article/4048415/the-ai-powered-cyberattack-era-is-here.html


News copyright owned by their original publishers | Copyright © 2004 - 2025 Zicos / 440Network