‘Blame the intern’ is not an agentic AI security strategy
Tuesday, September 30, 2025, 11:00 AM, from InfoWorld
In the corporate world, blaming the intern is a time-honored, if absurd, tradition. It is a public relations reflex that often emerges after a catastrophic failure, when leaders who are paid millions to exercise oversight deflect responsibility downward to the least powerful person in the organization. In 2021, for example, the former SolarWinds CEO attributed a disastrous password leak to “a mistake that an intern made.” The public reaction was swift. The idea that the most senior leaders could offload responsibility for a massive lapse in security onto someone barely out of school was as comical as it was revealing.
Fast forward to today, and the absurdity has taken on a new twist. We are now giving agentic AI systems, which are autonomous processes capable of perceiving, reasoning, and acting, access to live production environments and sensitive data with fewer safeguards than most companies give human interns. These AI systems do not follow instructions in predictable ways, and if something goes wrong, there is no single clear line of code to inspect and debug. The truth is that anyone granting autonomous access to their systems and data is not simply hiring an intern. They are hiring a drunken intern, handing over the master keys, and then stepping out for the weekend.

The autonomy vs. control equation

The appeal of agentic AI is obvious. Once given a goal, an agent can work continuously, coordinate across systems, and execute tasks far faster than any human. For businesses that rely on speed and operational efficiency, this autonomy promises a competitive advantage.

However, autonomy without limits carries real risk. AI agents are non-deterministic, which means there is no reliable way to fully predict or reconstruct why they take a particular path to achieve a goal. In traditional software, you can read the code, compare expected behavior to actual results, and find the bug. With AI agents, the reasoning process is hidden within a complex network of decisions that resists inspection. When such an agent is running in a production environment, every action, from pulling in a malicious library to making a destructive change in a database, becomes a potential point of failure.

The rational response is not to ban autonomy but to pair it with control. The prudent approach is to design systems so that when an agent behaves in an unexpected way, the resulting damage is contained within a very limited scope.

Lessons from SOAP and the API era

When a new paradigm arrives, the first step is often to agree on how systems will talk, not yet on how to make those conversations safe. In the early 2000s, web services faced this problem head-on. SOAP, or Simple Object Access Protocol, offered a structured and often verbose way to exchange data between systems. It was a milestone for interoperability, but it was not a security solution. SOAP did not stop data leaks, enforce strong authentication, or protect against malicious payloads. It took years, along with the evolution toward REST, JSON APIs, and mature microservices patterns, before security became as standardized as the communication itself. By that point, hardened API controls such as authentication, authorization, schema validation, and rate limiting had become inseparable from the idea of doing APIs right. The lesson was clear: standards can define the rules of engagement, but only security makes those engagements safe.

We are now in the SOAP phase of agentic AI. Early protocols such as Model Context Protocol, or MCP, and Agent2Agent, or A2A, are establishing the handshake and the shared language for discovery, negotiation, and integration. They are necessary, but they are not sufficient. Just as SOAP could not make integrations trustworthy, today’s AI protocols cannot make autonomous agents safe by default.

The challenge is compounded by the messy reality of AI infrastructure. Agents run across multi-tenant Kubernetes clusters where asymmetric trust between workloads is the norm, and they execute on GPUs that have almost no built-in isolation or memory protection. Without controls that limit what an agent can touch at the namespace, container, or hardware level, protocol compliance does little to prevent data leaks, model theft, or unintended lateral movement. Standards tell you how to connect. Security ensures that connection cannot run wild.

This is where the next frontier begins. The protocols may mature, but without the same leap that APIs made from interoperability to integrity, agentic AI will remain in its SOAP era: connected, but dangerously exposed.
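What “limit what an agent can touch at the namespace level” looks like in practice will vary, but a minimal sketch helps make it concrete. The example below uses the official Kubernetes Python client to give an agent workload its own namespace, a least-privilege Role bound only to its service account, and a default-deny network policy. The names used here (“agent-sandbox”, “refund-agent”, “agent-readonly”) are illustrative assumptions, not part of MCP, A2A, or any product discussed in this article.

```python
# A sketch of namespace-level containment for an agent workload, assuming the
# official `kubernetes` Python client. All resource names are placeholders.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running in-cluster

core = client.CoreV1Api()
rbac = client.RbacAuthorizationV1Api()
net = client.NetworkingV1Api()

NS = "agent-sandbox"  # hypothetical namespace dedicated to the agent

# 1. Give the agent its own namespace so anything it breaks stays inside one scope.
core.create_namespace({
    "apiVersion": "v1", "kind": "Namespace",
    "metadata": {"name": NS},
})

# 2. Least-privilege Role: the agent may read ConfigMaps in its namespace, nothing else.
rbac.create_namespaced_role(NS, {
    "apiVersion": "rbac.authorization.k8s.io/v1", "kind": "Role",
    "metadata": {"name": "agent-readonly", "namespace": NS},
    "rules": [{"apiGroups": [""], "resources": ["configmaps"], "verbs": ["get", "list"]}],
})

# 3. Bind that Role to the agent's service account only.
rbac.create_namespaced_role_binding(NS, {
    "apiVersion": "rbac.authorization.k8s.io/v1", "kind": "RoleBinding",
    "metadata": {"name": "agent-readonly-binding", "namespace": NS},
    "subjects": [{"kind": "ServiceAccount", "name": "refund-agent", "namespace": NS}],
    "roleRef": {"apiGroup": "rbac.authorization.k8s.io", "kind": "Role", "name": "agent-readonly"},
})

# 4. Default-deny NetworkPolicy: no ingress or egress until a rule explicitly allows it.
net.create_namespaced_network_policy(NS, {
    "apiVersion": "networking.k8s.io/v1", "kind": "NetworkPolicy",
    "metadata": {"name": "default-deny", "namespace": NS},
    "spec": {"podSelector": {}, "policyTypes": ["Ingress", "Egress"]},
})
```

The specific resources matter less than the pattern: deny by default at the namespace boundary, then grant only the narrow access the agent actually needs. Hardware-level isolation is harder to script; in practice it often means not sharing GPUs between untrusted workloads at all.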
The perils of trust and the power of isolation

The history of open-source software revealed a similar trust dilemma. In the early days of open source, many organizations hesitated to run code they had not written and could not fully verify. Over time, licensing norms, security scanning tools, and provenance checks made open source viable at scale. This shift required more than new tools. It demanded a cultural change toward skepticism and deliberate validation.

With agentic AI, the trust gap is even wider. We often cannot fully explain why an AI agent makes a given decision. We cannot easily predict how it will chain actions together to achieve its objective. Once an agent gains the ability to execute commands in a live environment, it can introduce vulnerabilities or cause damage even without malicious intent, simply by interpreting its instructions in an unexpected way.

The risks grow when these agents operate in multi-tenant Kubernetes clusters or on GPUs with no native memory protection. A vulnerable pod running an open-source library can sit alongside another pod with access to sensitive credentials, and the platform offers no inherent sandbox to separate them. On the hardware side, GPUs can retain sensitive model weights or proprietary training data after one workload finishes and the next begins. Without enforced isolation, an agent may have the ability to read or manipulate information it never should have seen.

Isolation has therefore become one of the most important skills in the modern engineering toolbox. The goal is to let the agent operate, but only inside an environment where its reach is strictly limited. Just as network segmentation and runtime sandboxes can contain the impact of a compromised process, isolation for agentic AI means controlling access to systems, data, and network pathways so that a bad decision by the agent cannot cascade across the enterprise.

A recent incident illustrates this clearly. In July 2025, a retail AI chatbot created to automate customer refunds inadvertently granted itself elevated privileges in the company’s back-end system. It then processed thousands of fraudulent test refunds to real accounts. The problem was not that the chatbot was capable of operating autonomously. The real failure was the absence of a containment layer that prevented it from escalating its access. Proper isolation would have confined the chatbot to a test environment, preventing any impact on real accounts and saving the company from significant financial loss.
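The same principle applies one layer up, between the agent and its tools. The sketch below is a hypothetical containment wrapper, not a reconstruction of any real system: the tool names, the refund cap, and the dispatch() helper are stand-ins for whatever integrations an agent actually calls. The point is that every action passes a deny-by-default policy check, is capped, and is pinned to a non-production environment, which is exactly the layer the refund chatbot was missing.

```python
# A minimal sketch of an application-level containment layer for agent tool calls.
# Tool names, the refund cap, and dispatch() are hypothetical placeholders.

ALLOWED_TOOLS = {"lookup_order", "refund"}   # explicit allowlist; everything else is denied
REFUND_LIMIT = 100.00                        # per-action ceiling, in dollars
TARGET_ENV = "staging"                       # the agent never touches production directly


class ContainmentError(Exception):
    """Raised when the agent asks for something outside its sandbox."""


def dispatch(tool: str, args: dict) -> str:
    # Placeholder for the real integrations; here it just echoes the request.
    return f"executed {tool} in {args['environment']}"


def execute_tool_call(tool: str, args: dict) -> str:
    # 1. Deny by default: unknown tools are rejected, not guessed at.
    if tool not in ALLOWED_TOOLS:
        raise ContainmentError(f"tool '{tool}' is not on the allowlist")

    # 2. Bound the blast radius of the actions that are allowed.
    if tool == "refund" and args.get("amount", 0) > REFUND_LIMIT:
        raise ContainmentError(f"refund of {args['amount']} exceeds the {REFUND_LIMIT} cap")

    # 3. Pin every call to the sandboxed environment, whatever the agent asked for.
    args = {**args, "environment": TARGET_ENV}

    # 4. Leave an audit trail before acting, so a bad decision can be reconstructed.
    print(f"AUDIT: {tool} {args}")
    return dispatch(tool, args)


if __name__ == "__main__":
    print(execute_tool_call("refund", {"order_id": "A-1234", "amount": 25.00}))
```

None of this makes the agent more predictable. It simply guarantees that an unexpected chain of decisions dead-ends at a policy check instead of a production database.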
Paranoia with purpose

The goal is not to create hysteria about agentic AI security. History suggests that, as with APIs, the industry will eventually develop predictable patterns, trusted standards, and robust governance for autonomous agents. It is highly probable that in the future, letting an AI agent operate without any safeguards will seem as reckless as exposing a public API without authentication. For now, agentic AI security will be a crash course in isolation for those who are paying attention.

The engineers who learn how to apply isolation techniques will be the ones who can confidently take advantage of AI autonomy without exposing their organizations to unnecessary risk. Those who ignore the drunken intern problem will eventually face a very unpleasant reckoning.

The message is simple. Embrace the benefits of agentic AI, but assume the intern is intoxicated, has your passwords, and thinks it is being helpful. Then design your systems so that even in that state, the intern cannot do serious harm.

Generative AI Insights provides a venue for technology leaders, including vendors and other outside contributors, to explore and discuss the challenges and opportunities of generative artificial intelligence. The selection is wide-ranging, from technology deep dives to case studies to expert opinion, but also subjective, based on our judgment of which topics and treatments will best serve InfoWorld’s technically sophisticated audience. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Contact doug_dineley@foundryco.com.
https://www.infoworld.com/article/4064222/blame-the-intern-is-not-an-agentic-ai-security-strategy.ht