MacMusic  |  PcMusic  |  440 Software  |  440 Forums  |  440TV  |  Zicos
are
Search

Renegade business units trying out genAI will destroy the enterprise before they help

Monday July 15, 2024. 12:00 PM , from ComputerWorld
One of the more tired cliches in IT circles refers to “Cowboy IT” or “Wild West IT,” but it’s the most appropriate way to describe enterprise generative AI (genAI) efforts these days. As much as IT is struggling to keep on top of internal genAI efforts, the biggest danger today involves various business units globally creating or purchasing their very own experimental AI efforts.

We’ve talked extensively about Shadow AI (employees/contractors purchasing AI tools outside of proper channels) and Sneaky AI (longtime vendors silently adding AI features into systems without telling anyone). But Cowboy AI is perhaps the worst of the bunch because no one can get intro trouble. Most boards and CEOs are openly encouraging all business units to experiment with genAI and see what enterprise advantages they can unearth.

The nightmare is that almost none of those line of business (LOB) teams understand how much they are putting the enterprise at risk. Uncontrolled and unmanaged, genAI apps are absolutely dangerous.

Longtime Gartner analyst Avivah Litan (whose official title these days is Distinguished VP Analyst) wrote on LinkedIn recently about the cybersecurity dangers from these kinds of genAI efforts. Although her points were intended for security talent, the problems she describes are absolutely a bigger problem for IT.

“Enterprise AI is under the radar of most Security Operations, where staff don’t have the tools required to protect use of AI,” she wrote. “Traditional Appsec tools are inadequate when it comes to vulnerability scans for AI entities. Importantly, Security staff are often not involved in enterprise AI development and have little contact with data scientists and AI engineers. Meanwhile, attackers are busy uploading malicious models into Hugging Face, creating a new attack vector that most enterprises don’t bother to look at. 

“Noma Security reported they just detected a model a customer had downloaded that mimicked a well-known open-source LLM model. The attacker added a few lines of code that caused a forward function. Still, the model worked perfectly well, so the data scientists didn’t suspect anything. But every input to the model and every output from the model were also sent to the attacker, who was able to extract it all. Noma also discovered thousands of infected data science notebooks. They recently found a keylogging dependency that logged all activities on their customer’s Jupyter notebooks. The keylogger sent the captured activity to an unknown location, evading Security which didn’t have the Jupyter notebooks in its sights.”

IT leaders: How many of the phrases above sound a little too familiar? 

Your team “often not involved in enterprise AI development and have little contact with data scientists and AI engineers?” Bad guys “creating a new attack vector that most enterprises don’t bother to look at?” Or maybe “the model worked perfectly well so the data scientists didn’t suspect anything. But every input to the model and every output from the model were also sent to the attacker, who was able to extract it all” or a manipulated external app which your IT team “didn’t have in its sights?”

Some enterprises have debated creating a new AI executive, but that’s unlikely to help. It will more than likely be an executive with lots of responsibilities, far too little budget and no actual authority to get any business unit to comply with the AI chief’s edicts. It’s sort of like many CISOs today, a toothless manager but with even more headaches. 

The better answer is to use the best power in the world to force LOB executives to take AI efforts seriously: make it an HR-approved criteria for their annual bonus. Put massive financial penalties on any problems that result from AI efforts their unit undertakes. (Paycheck hits get their attention because it is literally money out of their pockets.) Then add a caveat: If IT approves the effort in writing, then you are fully blameless for anything bad that later happens.

Magically, getting IT signoff becomes important to those LOB leaders. Then and only then, the CIO will have the clout to protect the company from errant AI.

Another possible outcome of this carrot-stick approach is that business execs will still want to maintain control and will instead hire AI experts for their units directly. That works, too. 

The cost of trying out many of these genAI efforts — especially for a relatively short time — is often negligible. That can be bad because it makes it easy for LOB workers to underestimate the risks to the business that they are accepting. 

The potential of genAI is unlimited and exciting, but if strict rules aren’t put in place right away, it could well destroy a business before it has a chance to help. 

Yippee-ki-yay, CIO.
https://www.computerworld.com/article/2514529/renegade-business-units-trying-out-genai-will-destroy-...

Related News

News copyright owned by their original publishers | Copyright © 2004 - 2024 Zicos / 440Network
Current Date
Nov, Thu 21 - 17:24 CET