|
Navigation
Search
|
Keep AI browsers out of your enterprise, warns Gartner
Monday December 8, 2025. 07:00 PM , from ComputerWorld
AI browsers including Perplexity Comet and OpenAI’s ChatGPT Atlas present security risks that cannot be adequately mitigated, and enterprises should prevent employees using them, according to Gartner.
“Gartner strongly recommends that organizations block all AI browsers for the foreseeable future because of the cybersecurity risks,” analysts Dennis Xu, Evgeny Mirolyubov, and John Watts wrote in a research note last week. They made their recommendation based on risks they had already identified, “and other potential risks that are yet to be discovered, given this is a very nascent technology.” The warning is timely, as AI browsers are already gaining a foothold in the enterprise: 27.7% of organizations already have at least one user with Atlas installed, with some enterprises seeing up to 10% of employees actively using the browser, cybersecurity firm Cyberhaven said in October. It found adoption rates highest in the technology industry (67%), pharmaceuticals (50%), and finance (40%), all sectors with heightened security requirements. ChatGPT Atlas, launched on October 21, saw 62 times more corporate downloads than Perplexity Comet, which was released July 9, according to Cyberhaven. The launch of Atlas also sparked renewed interest in AI browsers overall, with Comet downloads surging sixfold during the same week. But concerns were raised immediately after the launch of ChatGPT Atlas about the threat posed by AI browsers, with analysts pointing to prompt injection vulnerabilities and data security concerns. Sensitive data at risk The reason AI browsers are of concern is that when they send active web content, browsing history, and open tab contents to the cloud for analysis, enterprises lose control of their data. Perplexity’s documentation, for example, warns that “Comet may process some local data using Perplexity’s servers to fulfill your queries. This means Comet reads context on the requested page (such as text and email) in order to accomplish the task requested.” Mirolyubov, senior director analyst at Gartner, said, “The real issue is that the loss of sensitive data to AI services can be irreversible and untraceable. Organizations may never recover lost data.” It’s not just where the browsers send your data for processing that is a concern; it’s what they do as a result: “Erroneous agentic transactions raise accountability concerns in case of expensive errors,” he said. Traditional controls inadequate AI browsers can autonomously navigate websites, fill out forms, and complete transactions while authenticated to web resources. As he and his colleagues wrote in their report, this makes the AI browsers susceptible to new cybersecurity risks, “such as indirect prompt-injection-induced rogue agent actions, inaccurate reasoning-driven erroneous agent actions, and further loss and abuse of credentials if the AI browser is deceived into autonomously navigating to a phishing website.” “Traditional controls are inadequate for the new risks introduced by AI browsers, and solutions are only beginning to emerge,” Mirolyubov said. “A major gap exists in inspecting multi-modal communications with browsers, including voice commands to AI browsers.” Prompt injection remains a particular concern, OpenAI CISO Dane Stuckey acknowledged in a post to X, formerly Twitter, the day after ChatGPT Atlas’s launch: “Prompt injection remains a frontier, unsolved security problem, and our adversaries will spend significant time and resources to find ways to make ChatGPT agents fall for these attacks.” Discovered vulnerabilities highlight immaturity Beyond theoretical risks, concrete security flaws have emerged in both major AI browsers. Days after ChatGPT Atlas launched, researchers discovered it stores OAuth tokens unencrypted with overly permissive file settings on macOS, potentially allowing unauthorized access to user accounts. The vulnerability was documented by security research group Teamwin on October 27. OpenAI had not released a patch as of October 31, when Gartner completed its research. Separately, cybersecurity firm LayerX Security reported in August the discovery of a vulnerability in Comet called “CometJacking” that could potentially exfiltrate user data to attacker-controlled servers. OpenAI and Perplexity did not immediately respond to requests for comment. Years, not months, to mature The discovered vulnerabilities highlight broader concerns about the maturity of AI browser technology. “Security and privacy must become core design principles rather than afterthoughts,” Mirolyubov said. AI browser vendors must incorporate enterprise-grade cybersecurity controls from the outset and provide greater transparency regarding data flows and agentic decisions, he said. Emerging AI usage control solutions will likely take “a matter of years rather than months” to mature, he said. “Eliminating all risks is unlikely — erroneous actions by AI agents will remain a concern. Organizations with low risk tolerance may need to block AI browsers for the longer term.” Organizations with higher risk tolerance that want to experiment should limit pilots to small groups tackling low-risk use cases that are easy to verify and roll back, the Gartner report said. Users must “always closely monitor how the AI browser autonomously navigates when interacting with web resources.” For now, Gartner said, organizations should block AI browser installations using existing network and endpoint security controls and review their AI policies to ensure that broad use of AI browsers is prohibited. “Today, most cybersecurity teams choose to block AI browsers, delaying adoption until risks are better understood and controls are more mature,” Mirolyubov said.
https://www.computerworld.com/article/4102569/keep-ai-browsers-out-of-your-enterprise-warns-gartner....
Related News |
25 sources
Current Date
Dec, Mon 8 - 21:17 CET
|







