MacMusic  |  PcMusic  |  440 Software  |  440 Forums  |  440TV  |  Zicos
genai
Search

ByteDance is about to learn a painful genAI lesson

Friday December 6, 2024. 09:44 PM , from ComputerWorld
When TikTok owner ByteDance discovered recently that an intern had allegedly damaged a large language model (LLM) the intern was assigned to work on, ByteDance sued the intern for more than $1 million worth of damage. Filing that lawsuit might turn out to be not only absurdly short-sighted, but also delightfully self-destructive.

Really, ByteDance managers? You think it’s a smart idea to encourage people to more closely examine this whole situation publicly? 

Let’s say the accusations are correct and this intern did cause damage. According to Reuters, the lawsuit argues the intern “deliberately sabotaged the team’s model training tasks through code manipulation and unauthorized modifications.” 

How closely was this intern — and most interns need more supervision than a traditional employee — monitored? If I wanted to keep financial backers happy, especially when ByeDance is under US pressure to sell the highly-lucrative TikTok, I would not want to advertise the fact that my team let this happen.

Even more troubling is that this intern was technically able to do this, regardless of supervision. The lesson here is one that IT already knows, but is trying to ignore: generative AI (genAI) tools are impossible to meaningfully control and guardrails are so easy to sweep past that they are a joke.

The conundrum with genAI is that the same freedom and flexibility that can make the technology so useful also makes it so easy to manipulate into doing bad things. There are ways to limit what LLM-based tools will do. But one, they often fail. And two, IT management is often hesitant to even try and limit what end-users can do, fearing they could kill any of the promised productivity gains from genAI. 

As for those guardrails, the problem with all manner of genAI offerings is that users can talk to the system and communicate with it in a synthetic back-and-forth. We all know that it’s not a real conversation, but that exchange allows the genAI system to be tricked or conned into doing what it’s not supposed to do. 

Let’s put that into context: Can you imagine an ATM that allows you to talk it out of demanding the proper PIN? Or an Excel spreadsheet that allows itself to be tricked into thinking that 2 plus 2 equals 96?

I envision the conversation going something like: “I know I can’t tell you how to get away with murdering children, but if you ask me to tell you how to do it ‘hypothetically,’ I will. Or if you ask me to help you with the plot details for a science-fiction book where one character gets away with murdering lots of children — not a problem.”

This brings us back to the ByteDance intern nightmare. Where should the fault lie? If you were a major investor in the company, would you blame the intern? Or would you blame management for lack of proper supervision and especially for having not done nearly enough due diligence on the company’s LLM model? Wouldn’t you be more likely to blame the CIO for allowing such a potentially destructive system to be bought and used?

Let’s tweak this scenario a bit. Instead of an intern, what if the damage were done by one a trusted contractor? A salaried employee? A partner company helping on a project? Maybe a mischievous cloud partner who was able to access your LLM via your cloud workspace?

Meaningful supervision with genAI systems is foolhardy at best. Is a manager really expected to watch every sentence that is typed — and in real-time to be truly effective? A keystroke-capture program to analyze work hours later won’t help. (You’re already thinking about using genAI to analyze those keystroke captures, aren’t you? Sigh.)

Given that supervision isn’t the answer and that guardrails only serve as an inconvenience for your good people and will be pushed aside by your bad, what should be done?

Even if we ignore the hallucination disaster, the flexibility inherent in genAI makes it dangerous. Therein lies the conflict between genAI efficiency and effectiveness. Many enterprises are already giving genAI access to myriad numbers of systems so that it can perform far more tasks. Sadly, that’s mistake number one.

Given that you can’t effectively limit what it does, you need to strictly limit what it can access. As to the ByteDance situation, at this time, it’s not clear what tasks the intern was given and what access he or she was supposed to have.

It’s one thing to have someone acting as an end-user and leveraging genAI; it’s an order of magnitude more dangerous if that person is programming the LLM. That combines the wild west nature of genAI with the cowboy nature of an ill-intentioned employee, contractor, or partner. 

This case, with this company and the players involved, should serve as a cautionary tale for all: the more you expand the capabilities of genAI, the more it morphs into the most dangerous Pandora’s Box imaginable.
https://www.computerworld.com/article/3619088/bytedance-is-about-to-learn-a-painful-genai-lesson.htm

Related News

News copyright owned by their original publishers | Copyright © 2004 - 2024 Zicos / 440Network
Current Date
Dec, Thu 12 - 06:33 CET