
LLM deployment flaws that catch IT by surprise

Wednesday, May 1, 2024, 12:00 PM, from ComputerWorld
For all the promise of LLMs (large language models) to handle a seemingly infinite range of enterprise tasks, IT executives are discovering that these models can be extremely delicate, abandoning guardrails and other limitations at the slightest provocation. 

For example, if an end user innocuously — or an attacker maliciously — inputs too much data into an LLM query window, no error message is returned and the system doesn’t appear to crash. But the LLM will often instantly override its programming and disable all guardrails. 

“The friction is that I can’t add a bazillion lines of code. One of the biggest threats around [LLMs] is an efficient jailbreak of overflow,” said Dane Sherrets, a senior solutions architect at HackerOne. “Give it so much information and it will overflow. It will forget its system prompts, its training, its fine-tuning.” (AI research startup Anthropic, which makes the Claude family of LLMs, has published a detailed look at this security hole.) 
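A first line of defense is simply to refuse oversized input before it ever reaches the model. The Python sketch below illustrates the idea; the token limits are invented, call_model is a placeholder for whatever client a deployment actually uses, and the characters-per-token ratio is only a rough heuristic rather than a real tokenizer.

MAX_CONTEXT_TOKENS = 8_000      # assumed context window for the model
SYSTEM_PROMPT_TOKENS = 1_500    # reserved for the system prompt and guardrails
RESPONSE_TOKENS = 1_000         # reserved for the model's reply

def call_model(prompt: str) -> str:
    # Placeholder for whatever LLM client the deployment actually uses.
    return "model response"

def estimate_tokens(text: str) -> int:
    # Crude 4-characters-per-token heuristic; a real deployment would use
    # the model's own tokenizer.
    return len(text) // 4

def guarded_query(user_input: str) -> str:
    budget = MAX_CONTEXT_TOKENS - SYSTEM_PROMPT_TOKENS - RESPONSE_TOKENS
    if estimate_tokens(user_input) > budget:
        # Refuse rather than silently truncating, so the system prompt and
        # guardrail instructions are never pushed out of the context window.
        raise ValueError("input exceeds the context budget; query refused")
    return call_model(user_input)

print(guarded_query("What is the status of the Q3 close?"))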

Consider the case of a publicly held company that has to severely restrict access to not-yet-reported financials. Or a military contractor that needs to limit access to weapons blueprints to those with a specific clearance level. If an LLM becomes overloaded and ignores those restrictions, the consequences will be severe.

And that’s just one of the ways that LLM guardrails can fail. These systems are generally cloud-based, controlled by the vendor that owns the license to those particular LLM algorithms. A few enterprises (weapons manufacturers working for the government, for example) take the LLM code and run it solely on-premises in an air-gapped environment, but they are the rare exceptions.

IT leaders deploying LLMs have uncovered other subtle but serious flaws that put their systems and data at risk and/or fail to deliver useful results. Here are five major LLM issues to be aware of — and avoid — before it’s too late.

LLMs that see too much

One massive flaw in today’s LLM systems — which Microsoft acknowledged on March 6 when it introduced a new SharePoint feature for use with its Copilot LLM — is the ability to access a wide range of SharePoint files that are not intended to be shared. 

With Copilot, “when you enable access for a user, it replicates the access that they have. It can then access anything that they have access to, whether they know it or not,” said Nick Mullen, the IT governance manager for a Fortune 500 insurance company.

“The SharePoint repository runs in the background, but it also has access to anything that is public in your entire ecosystem. A lot of these sites are public by default,” said Mullen, who also runs his own security company called Sanguine Security.

Available in public preview, the new feature is called Restricted SharePoint Search. Microsoft says the feature “allows you to restrict both organization-wide search and Copilot experiences to a curated set of SharePoint sites of your choice.”

The current default option is for public access. According to Microsoft’s support documentation, “Before the organization uses Restricted SharePoint Search, Alex [a hypothetical user] can see not only his own personal contents, like his OneDrive files, chats, emails, contents that he owns or visited, but also content from some sites that haven’t undergone access permission review or Access Control Lists (ACL) hygiene, and doesn’t have data governance applied.” Because Alex has access to sensitive information (even if he’s not aware of it), so does Copilot.

The same problem applies to any corporate data storage environment. IT must thoroughly audit users’ data access privileges and lock down sensitive data before allowing them to run queries with an LLM.
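One way to honor that advice is to filter documents against a user’s existing entitlements before any text reaches the model’s prompt. Below is a minimal Python sketch; the document names, group names, and ACL model are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Document:
    name: str
    text: str
    allowed_groups: set[str] = field(default_factory=set)

def retrieve_for_user(user_groups: set[str], corpus: list[Document]) -> list[Document]:
    # Return only documents the user is already entitled to see, so nothing
    # else can ever be injected into an LLM prompt on their behalf.
    return [d for d in corpus if d.allowed_groups & user_groups]

corpus = [
    Document("q3-draft-financials.xlsx", "...", {"finance-restricted"}),
    Document("benefits-faq.docx", "...", {"all-employees"}),
]

# A user outside finance never has the draft financials placed in a prompt.
visible = retrieve_for_user({"all-employees"}, corpus)
print([d.name for d in visible])   # ['benefits-faq.docx']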

LLMs with the keys to the kingdom

Part of the problem with LLMs today is that they are often unintentionally given broad or even unlimited access to all enterprise systems. Far worse, Mullen said, is that most of the current enterprise defensive systems will not detect and therefore not block the LLM, even if it goes rogue. 

This means that enterprises have “the most powerful and intuitive search engine that can search across everything,” he said. “Historically, that type of internal scanning would fire off an alert. But LLMs are different. This is an entirely new threat vector that is extremely difficult to detect. EDR [endpoint detection and response] is not going to pick it up because it’s behaving as expected. Right now, there is not a good way to secure that. Depending on who is compromised, an attacker could gain access to a treasure trove.”

Added Mullen: “LLMs are very temperamental, and people are getting a little bit ahead of themselves. The technology is so new that a lot of the risks are still unknown. It’s a scenario where it’s not going to be known until you see it. It’s the law of unintended consequences. [IT is] turning [LLMs] on and giving them access to an insane amount of resources, which should give every organization pause.”
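Until better detection exists, one partial mitigation some teams rely on is simply recording every LLM-mediated lookup so that security staff at least have a trail to review. A minimal Python sketch, with invented field names and example values:

import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("llm-audit")

def log_llm_access(user: str, query: str, sources: list[str]) -> None:
    # One structured record per LLM-mediated lookup, so security teams have
    # something to review even though EDR tooling sees nothing unusual.
    audit_log.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "query": query,
        "sources_touched": sources,
    }))

log_llm_access("alex@example.com", "summarize Q3 pipeline",
               ["sharepoint://sales/forecast.xlsx"])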

Artur Kiulian, the founder of PolyAgent, a nonprofit research lab focused on AI issues, sees many enterprises embracing LLMs too quickly, before the proper controls can be put into place.

“Most enterprises that are implementing LLMs are at the stage of experimentation,” Kiulian said. “Most companies use the guardrails of prompt engineering. It’s not enough. You need permission-based controls. Most enterprises are simply not there yet.”

HackerOne’s Sherrets agreed about how risky LLMs are today: “It can interact with other applications. It’s terrifying because you are giving black box control over doing things in your internal infrastructure. What utilities is the LLM touching?”

David Guarrera, a principal with EY Americas Technology Consulting who leads generative AI initiatives, is also concerned about the risks posed by early enterprise LLM deployments. “There are a lot of new emerging attacks where you can trick the LLMs into getting around the guardrails. Random strings that make the LLM go crazy. Organizations need to be aware of these risks,” Guarrera said.

He advises enterprises to create isolated independent protections for sensitive systems, such as payroll or supply chain. IT needs “permissions that are handled outside of the LLM’s [access]. We need to think deeply how we engineer access to these systems. You have to do it at the data layer, something that is invisible to the LLM. You also need to engineer a robust authentication layer,” he said.
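A rough illustration of that advice: the model may request an action, but an authorization check that lives entirely outside the LLM decides whether it runs. The tool names and permission table in this Python sketch are hypothetical.

PERMISSIONS = {
    "payroll.read": {"hr-admins"},
    "supplychain.update": {"ops-leads"},
}

def execute_tool(tool: str, user_groups: set[str]) -> str:
    # The LLM may *request* a tool call; this layer, not the model, decides
    # whether it actually runs.
    allowed = PERMISSIONS.get(tool, set())
    if not allowed & user_groups:
        return f"denied: {tool}"
    return f"executed: {tool}"

# Even if a jailbroken model asks for payroll data, this check is not
# something a clever prompt can talk its way around.
print(execute_tool("payroll.read", {"ops-leads"}))   # denied: payroll.read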

LLMs with a civil service mentality

Another concern is trying to program LLMs to manage need-to-know rules, the idea that the system will restrict some data, sharing it only with people with certain roles in the company or who work in specific departments.

This runs into what some describe as the civil service mentality problem: someone is trained on the rules and might even memorize them, but is never taught why the rules were created in the first place. Without that background, they can’t make an informed decision about when an exception is warranted, and they therefore tend to interpret the rules strictly and literally.

The same is true of LLMs: they enforce the letter of a rule without understanding its purpose. But much sensitive enterprise data is not nearly that binary.

Take the earlier example of the finances of a publicly held company. It is true that data about unannounced finances for this quarter has to be restricted to a handful of authorized people. But has the LLM been programmed to know that the data becomes instantly world-readable as soon as it is announced and filed with the SEC? And that only the reported data is now public, while unreported data is still proprietary?

A related issue: Let’s say that it is crunch time for the finances to be prepared for filing, and the CFO asks for — and is granted — permission for an additional 30 people from different company business units to temporarily help with the filings. Does someone think to reprogram the LLM to grant temporary data access to those 30 people? And does someone remember to go back and remove their access once they return to their regular roles?
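One way to avoid relying on memory is to make temporary grants expire on their own. The short Python sketch below shows the idea; the user names and the 14-day window are made up for illustration.

from datetime import datetime, timedelta, timezone

grants: dict[str, datetime] = {}    # user -> expiry timestamp

def grant_temporary_access(user: str, days: int) -> None:
    # Every grant carries an expiry, so revocation does not depend on
    # someone remembering to clean up after the filing is done.
    grants[user] = datetime.now(timezone.utc) + timedelta(days=days)

def has_access(user: str) -> bool:
    expiry = grants.get(user)
    return expiry is not None and datetime.now(timezone.utc) < expiry

grant_temporary_access("temp-analyst-01", days=14)
print(has_access("temp-analyst-01"))   # True only until the window closes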

Unrecognized glitches

Another LLM concern is more practical. Veteran IT managers have many years of experience working with all manner of software. Their experience teaches them how systems look when they crash, such as slowing down, halting, generating error messages, and throwing out screens of garbage characters. But when an LLM glitches — its version of crashing — it doesn’t act that way.

“When traditional software is broken, it’s obvious: screens don’t load, error messages are everywhere. When [LLM] software is broken, it’s much more opaque: you don’t get glaring errors, you just get a model with bad predictions,” said Kevin Walsh, head of artificial intelligence at HubSpot. “It may take weeks or months of having the LLM out in the real world before hearing from users that it’s not solving the problem it is supposed to.”

That could be significant: if IT doesn’t quickly recognize that there is a problem, its attempts to fix and contain the system will be delayed, possibly too late to stop the damage.

Because LLMs fail differently and in far more hidden ways than traditional software, IT needs to set up far more tracking, testing, and monitoring. It might be a routine assignment for someone to test the LLM each morning.
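That routine test could be as simple as a small “golden set” of prompts with known-good answers, run on a schedule and alerting when the pass rate drops. A minimal Python sketch, with hypothetical prompts, answers, and a placeholder for the real LLM call:

GOLDEN_SET = [
    ("What is our PTO carryover limit?", "40 hours"),
    ("Which form starts a vendor security review?", "VSR-1"),
]

def call_model(prompt: str) -> str:
    # Placeholder for the real LLM client.
    return "..."

def daily_check(threshold: float = 0.9) -> None:
    # Count how many golden answers appear in the model's responses and
    # raise a flag when the pass rate falls below the threshold.
    passed = sum(expected in call_model(q) for q, expected in GOLDEN_SET)
    rate = passed / len(GOLDEN_SET)
    if rate < threshold:
        # In practice this would page on-call or open a ticket.
        print(f"ALERT: LLM regression, pass rate {rate:.0%}")

daily_check()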

Unrealistic expectations

Allie Mellen, principal analyst for SecOps and AI security tools at Forrester, says there is an inaccurate perception of LLMs, often because LLMs do such a persuasive job of impersonating human thought.

“We have this flawed perception of generative AI because it appears more human. It can’t have original thoughts. It just anticipates the next word. The expectation that it can write code is way overblown,” she said.

LLMs need to be handled very carefully, she added. “There are many ways around the guardrails. An individual might come up with a slightly different prompt” to get around programmed restrictions, she said.

IT “must focus on what can realistically be implemented in realistic use cases,” Mellen said. “Don’t treat it as though LLMs are hammers and all of your problems are nails. The [LLM] capabilities are being oversold by most of the business world — investors and executives.”
https://www.computerworld.com/article/2095216/llm-deployment-flaws-that-catch-it-by-surprise.html