Google’s latest genAI shift is a reminder to IT leaders — never trust vendor policy

Monday February 10, 2025, 08:18 PM, from ComputerWorld
Every enterprise CIO knows they cannot — and should not — ever trust a vendor’s policy position. Whether that’s because a vendor might not strictly adhere to its policies or because it can change them at any time without notice, it doesn’t matter.

Google’s move last week to back away from assurances that it would not help make weapons or engage in surveillance was utterly unsurprising. Companies are motivated by revenue, profits, and market share, and if corporate leaders can improve any of those financial metrics by helping to make weapons of mass destruction — or helping a government poison its people — that’s what can happen.

But enterprise CIOs are the customers — customers with big budgets that give them major clout. If companies want your dollars, they must agree to whatever you have in your RFP and your contract.

Why would these massive vendors agree? Because they fear that one of their competitors will do so if they don’t. That could cost them market share and revenue. 

Suddenly, you have their C-suite’s rapt attention.

As for Google in this case, what was the original language the company felt it needed to walk away from? Last year’s statement gave a list of “AI applications we will not pursue.”

This is part of that list:

“Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.”

“Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.”

“Technologies that gather or use information for surveillance violating internationally accepted norms.”

“Technologies whose purpose contravenes widely accepted principles of international law and human rights.”

Then, in an eerily predictive point, it added: “As our experience in this space deepens, this list may evolve.” 

It did evolve. It got a lot shorter.

If a lot of money can be made doing those things, Google now says, in effect, “Human suffering and death and maiming can be trumped by higher profits and market share. Ethics, morality and humanity don’t keep the lights on, buddy!”

You’ll also notice that the company has bagged its “Don’t be evil” tagline; it apparently ditched the motto 10 years ago. Maybe it could be updated now to something like this: “Google. Where we never let avoiding evil stand in the way of making a profit.”

I was recently discussing this issue with two executives at Phoenix Technologies, a Swiss cloud provider. They argued that enterprise CIOs shouldn’t rely on vendor promises about large language models (LLMs), including how they’re built, trained, and used.

“If you are reliant on the model makers and their terms and conditions state that they can service anybody, you have to be willing to deal with the fallout,” said Peter DeMeo, the Phoenix group chief product officer. “You really can’t trust the model makers,” especially when they need revenue from government contracts.

His colleague, Phoenix group CTO Nunez Mencias, applauded Google for removing the restriction, given that it was unlikely it could ever be relied on. The model makers “can always change their policies, their rules.”

But there’s a big difference between being unable to rely on a vendor’s self-stated rules and being powerless to discourage AI use in areas your company might not be comfortable with.

Just remember: Entities out there doing things you don’t like are always going to be able to get generative AI (genAI) services and tools from somebody. You think large terrorist cells can’t use their money to pay somebody to craft LLMs for them? 

Even the most powerful enterprises can’t stop it from happening. But that may not be the point. Walmart, ExxonMobil, Amazon, Chase, Hilton, Pfizer, Toyota, and the rest of those heavy-hitters merely want to pick and choose where their money is spent.

Big enterprises can’t stop AI from being used to do things they don’t like, but they can make sure none of it is being funded with their money. 

If they add a clause to every RFP saying they will only work with model-makers that agree not to do X, Y, or Z, that will get a lot of attention. The contract would have to be realistic, though. It might say, for instance, “If the model-maker later chooses to accept payments for the above-described prohibited acts, it must reimburse all of the dollars we have already paid and must also give us 18 months’ notice so that we can replace the vendor with a company that will respect the terms of our contracts.”

From the perspective of Google, along with Microsoft, OpenAI, IBM, AWS and others, the idea is to take enterprise dollars on top of government contracts. If they came to believe that’s an either/or scenario, they might quickly reconsider.

Given that Google has decided that revenue is more important than morality, the answer is not to appeal to its morality. If money is all it cares about, speak that language.

Fortunately for enterprises, there are plenty of large companies willing to handle your genAI needs. Perhaps now is the time to use your buying power to influence who else they work with and limit what they do.
https://www.computerworld.com/article/3821126/googles-latest-genai-shift-is-a-reminder-to-it-leaders...
