
Over 40% of AI-Related Data Breaches Tied to Cross-Border AI Use by 2027

Monday, February 17, 2025, 04:00 PM, from eWeek
Cross-border data transfers powered by artificial intelligence systems are becoming a critical security risk, a Gartner report has found. By 2027, over 40% of AI-related data breaches will originate from the improper use of generative AI across borders, highlighting the urgent need for stricter oversight.

The ‘Predicts 2025: Privacy in the Age of AI and the Dawn of Quantum’ report, published on Feb. 7, 2025, highlights the challenges businesses are likely to face over the next decade in adopting AI while ensuring privacy. With varying regulatory standards and inconsistent enforcement across jurisdictions, organisations are struggling to control access to sensitive data.

“Unintended cross-border data transfers often occur due to insufficient oversight, particularly when GenAI is integrated in existing products without clear descriptions or announcement,” said Joerg Fritsch, VP analyst at Gartner, in a press release.

“Organisations are noticing changes in the content produced by employees using GenAI tools. While these tools can be used for approved business applications, they pose security risks if sensitive prompts are sent to AI tools and APIs hosted in unknown locations.”
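A common first-line control for the risk Fritsch describes is to redact obviously sensitive tokens before a prompt ever leaves the organisation. The sketch below is a minimal, hypothetical example of that idea; the pattern names and regular expressions are illustrative, not a production data-loss-prevention rule set.

```python
import re

# Illustrative patterns only -- a real deployment would tune these
# to its own sensitive-data categories and formats.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact(prompt: str) -> str:
    """Replace each match of a sensitive pattern with a labelled placeholder."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt

if __name__ == "__main__":
    print(redact("Contact jane@example.com, card 4111 1111 1111 1111"))
```

A gateway like this can sit in front of any external GenAI API, so that what reaches an endpoint in an unknown jurisdiction is already stripped of the most obviously sensitive material.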

Fragmented regulations and geopolitical tensions worsen risks

Two primary factors are amplifying these security risks: fragmented regulations and geopolitical tensions.

Without consistent global best practices for AI governance, enterprises must develop costly, region-specific compliance strategies, which not only hinder scalability but also leave gaps in data security.

At the same time, geopolitical conflicts are increasing demand for unauthorised access to confidential data. As major GenAI models are controlled by a few global tech giants, organisations may be unknowingly sending data across borders for processing, creating opportunities for interception.
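One hedge against unknowingly sending data abroad is an explicit allowlist of vetted inference endpoints, enforced at the point where prompts leave the network. The sketch below assumes hypothetical hostnames; a real deployment would derive the list from its own data-residency vetting of each provider.

```python
from urllib.parse import urlparse

# Assumed policy: only endpoints the organisation has vetted as processing
# data in approved jurisdictions. These hostnames are purely illustrative.
APPROVED_HOSTS = {
    "eu.api.example-genai.com",   # hypothetical EU-hosted inference endpoint
    "internal-llm.corp.example",  # hypothetical on-premises model
}

def is_transfer_allowed(endpoint_url: str) -> bool:
    """Return True only if the endpoint's host is on the vetted allowlist."""
    host = urlparse(endpoint_url).hostname
    return host in APPROVED_HOSTS
```

An unvetted or malformed URL simply fails the check, which defaults the system to denying the transfer rather than permitting it.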

Gartner predicts that by 2028, at least one jurisdiction will ban a major GenAI platform due to concerns over data confidentiality or content compromise.

Quantum encryption: a new challenge for AI security

Beyond AI-driven data risks, a third factor — quantum encryption — is set to redefine security challenges.

Numerous entities are developing post-quantum cryptography (PQC) algorithms, the Gartner analysts note, with the U.S. National Institute of Standards and Technology finalising its first set of PQC encryption standards in 2024. While these methods resist attacks from quantum computers that would break conventional cryptography, adopting them also comes at a steep cost.

“Organizations will need to undergo extensive efforts to inventory all the technologies that rely upon conventional cryptography methods as a means of protection, and systematically replace those algorithms with ones that are deemed quantum-safe,” the Gartner analysts wrote.
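The inventory step the analysts describe can start with something as crude as scanning a codebase for names of conventional public-key algorithms. The sketch below is a naive text search, not a real cryptographic bill of materials; the identifier list is illustrative only.

```python
import re
from pathlib import Path

# Names of conventional public-key algorithms that PQC standards
# are meant to replace. Illustrative, not exhaustive.
CONVENTIONAL = re.compile(r"\b(RSA|ECDSA|ECDH|DSA|DH)\b", re.IGNORECASE)

def inventory(root: str) -> dict[str, list[str]]:
    """Map each file under `root` to the conventional algorithms it names."""
    findings: dict[str, list[str]] = {}
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file; skip rather than abort the scan
        hits = sorted({m.upper() for m in CONVENTIONAL.findall(text)})
        if hits:
            findings[str(path)] = hits
    return findings
```

A real migration effort would go further, covering binaries, certificates, TLS configurations, and hardware, but even a rough map like this shows where quantum-safe replacements will be needed first.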

For many AI-driven businesses, switching to PQC may be prohibitively expensive, leaving them vulnerable to breaches. Analysts warn that by 2034, most, if not all, conventional encryption methods will be obsolete, making AI security even more precarious.

The future: risk mitigation or data deletion?

Gartner also predicts that by 2029, organisations will delete the majority of their personal data rather than risk exposure, as many may be unable to afford quantum-safe security solutions.

With AI adoption growing and quantum threats on the horizon, businesses must act now to implement stronger governance and encryption strategies — or risk being left behind.
https://www.eweek.com/news/improper-cross-border-ai-use-gartner/

