Are we thinking too small about generative AI?

Monday, July 15, 2024, 10:30 AM, from InfoWorld
The current state of generative AI feels like one of those tourist t-shirts: “I spent a trillion dollars on AI infrastructure and all I got was a poorly written school essay.” Obscene amounts of money are being dumped into data centers and associated infrastructure (e.g., Nvidia chips to fuel compute), with relatively little money coming back in terms of applications that businesses or consumers pay to use. In part, this is simply a matter of where we are in the AI hype cycle: lots of tire kicking leading to a trough of disillusionment.

But it may also be that we’re thinking way too small about AI and its uses, or that AI isn’t suited to solve the complicated problems that would warrant its costs.

Not making it up in volume

That’s the thinking of Jim Covello, Goldman Sachs’ head of global equity research. In an interview, he suggests that to justify the “substantial cost to develop and run AI technology,” applications “must solve extremely complex and important problems for enterprises to earn an appropriate return on investment.” Thus far, they haven’t. Yes, a quick scroll of Twitter shows cool demo-ware—a video or a jaunty little tune created by some large language model (LLM) or other. Neat. Unfortunately, these aren’t typical enterprise use cases, and no amount of AI-generated stock photography is going to change that.

Perhaps the most promising area for AI to date has been software development, where it seems to be having a sustained impact. Even here, though, only a subset of experienced developers are seeing significant productivity gains, and the impact is nowhere near covering the $1 trillion in AI investments that Goldman Sachs expects during the next few years. As Covello continues, “Replacing low-wage jobs [like creating content marketing assets] with tremendously costly technology is basically the polar opposite of the prior technology transitions” we’ve seen over the past few decades, including the advent of the Internet.

We’re far too cavalier, he notes, in assuming that AI infrastructure costs will fall far enough, fast enough, to make it a worthwhile replacement for many tasks today (assuming it’s capable of doing so, which is by no means guaranteed). Speaking of the dropping cost of servers that helped spark the dot-com boom, Covello points out, “People point to the enormous cost decline in servers within a few years of their inception in the late 1990s, but the number of $64,000 Sun Microsystems servers required to power the internet technology transition in the late 1990s pales in comparison to the number of expensive chips required to power the AI transition today.” Nor does that factor in the associated energy and other costs that combine to make AI particularly pricey.

All of this leads Covello to conclude, “Eighteen months after the introduction of generative AI to the world, not one truly transformative—let alone cost-effective—application has been found.” A damning indictment. MIT professor Daron Acemoglu argues that this will persist for the foreseeable future, because just 23% of the tasks that AI can reasonably replicate will be cost-effective to automate over the next decade.

Are they right? Definitely maybe.

The AI glass half full

There is, of course, no shortage of AI enthusiasts who will argue that costs will drop precipitously and that AI will be able to do far more than the pessimists imagine. Among them is Goldman Sachs Senior Global Economist Joseph Briggs, who believes genAI’s “large potential to drive automation, cost savings, and efficiency gains should eventually lead to significant uplifts of productivity.”

Another supporter is Lori Beer, global CIO at JPMorgan Chase, who commands a $17 billion IT budget and has gone all in on AI. For Beer, a big reason many companies struggle to get value from AI (generative or otherwise) is that they haven’t made the requisite investments in data: “You can’t really start talking about AI if you’re not in the cloud, if you’re not modernizing your data, if you’re not doing all the foundational stuff.” AI, in other words, isn’t something that magically happens, whatever the Twitterati’s prompt engineering discussions may have led you to believe. Before genAI, machine learning, and the rest pay dividends, you first need to do the unsexy work of data preparation.

We’re still in the earliest innings of AI. It’s still way too hard, as Google engineer Jaana Dogan has indicated: “Each company needs to spend an enormous amount of time to figure out the basics” of AI. We’re seeing enterprises begin to use retrieval-augmented generation (RAG) in earnest and dabble in agentic systems. But it remains a murderous march, with too-uncertain outcomes and lots of people issues getting in the way. (TL;DR: People don’t like to be treated like machines, and they like to know how the machines work before trusting them.)
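To make the RAG pattern mentioned above concrete, here is a minimal, self-contained sketch: retrieve the document most relevant to a question, then prepend it to the prompt sent to a model. The corpus, the word-overlap similarity, and the function names are all illustrative assumptions; a production system would use learned embeddings, a vector database, and an actual LLM call in place of the final print.

```python
from collections import Counter
import math

# Toy corpus standing in for an enterprise document store (hypothetical data).
DOCS = [
    "Expense reports must be filed within 30 days of travel.",
    "The VPN requires multi-factor authentication for remote access.",
    "Quarterly revenue is reviewed by the finance committee.",
]

def vectorize(text):
    # Bag-of-words term counts; real systems use learned embeddings instead.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def retrieve(query, docs, k=1):
    # Rank documents by similarity to the query and keep the top k.
    qv = vectorize(query)
    ranked = sorted(docs, key=lambda d: cosine(qv, vectorize(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, docs):
    # Augment the question with retrieved context before calling a model.
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using only the context."

print(build_prompt("When are expense reports due?", DOCS))
```

The point of the pattern is that the model answers from your data rather than its training set, which is exactly the kind of “foundational stuff” (clean, retrievable data) that has to exist before the LLM adds value.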

If you work for an enterprise and are waiting for someone else to make up your mind for you on AI, don’t. This is the time to experiment and see if you can find cost-effective ways to put AI to work for you. Maybe you’ll fail, but in that failure, you’ll learn far more than by reading a series of Goldman Sachs interviews (insightful though they are) or watching Twitter demos.
https://www.infoworld.com/article/2517120/are-we-thinking-too-small-about-genai.html

