
Using AI to automatically cancel customers? Not a smart move

Thursday, December 18, 2025, 08:00 AM, from ComputerWorld
Enterprise IT execs know well the dangers of relying too heavily on third parties, the need to keep a human in the loop of any automated decision system, and the difficulty of deciding how much, or how little, to tell customers when policy violations require an account shutdown. But a saga that played out Tuesday between Anthropic and the CEO of a Swiss cybersecurity company brings all of it into a new and disturbing context.

The tale began on Monday, when Tom Hoffman, CEO of the Swiss cybersecurity company Wicked Design, received an alert from Anthropic’s system that the company’s entire account had been cancelled after an automated review supposedly found unspecified policy violations.

Hoffman knew the best way to get the problem addressed was through social media, so he wrote about it on LinkedIn. He also alerted some people who worked at Anthropic, including the head of product legal, who responded, saying he’d flagged the issue internally for review. “We are working on the automated ban stuff,” said the Anthropic attorney. (Hoffman shared multiple Anthropic screen captures with Computerworld.)

Within a day, the account was restored — sort of. The new alert told him: “Earlier this week, your account was disabled by an automated system for being in violation of our Terms of Service or Acceptable Use Policy. Upon further investigation, we believe this was an error and your account has been reinstated. We apologize for the inconvenience and for your patience.” 

Hoffman briefly celebrated, logged in and found that most of the account — 80% of company projects and data, he said — was missing.

When he asked Anthropic’s automated system how to restore those files, the system replied, “I understand how frustrating this must be. When a user is removed and later re-added to an organization, previous projects and their associated chats are not restored — even if you use the same email address. Unfortunately, there’s no way to restore or transfer these previous projects back to your account once you’ve been re-added.”

But the message also said the files could be restored if the company paid. Apparently, that “there’s no way to restore” reference doesn’t apply if money changes hands. “Reactivating your subscription will restore access to all your previous projects,” Anthropic said. 

(After Hoffman paid the money, the files were restored, he said.)

As amusing — and simultaneously terrifying — as that back-and-forth is, it highlights key issues for enterprise IT.

AI third-party dependency

This is nothing new, though most fears about AI vendor dependency involve outages and cyberattacks. The idea that an AI vendor’s automated system could cancel an account with no details, no warning, and no easy way to make your case to a human is where the really annoying part kicks in.

In an interview, Hoffman said that despite everything that happened, he plans on sticking with Anthropic. Why? 

“Their service is quite good and where else am I going to go?” Hoffman said, adding that he has no reason to believe Microsoft, AWS, Google, OpenAI or Perplexity (or even Oracle or IBM) would be any better at avoiding this kind of cutoff. (Computerworld reached out to Microsoft, Google, AWS, Anthropic, OpenAI and Perplexity for comment. None offered any comment.)

Hoffman noted another concern. Given that these firms typically share no details about the rationale for a cancellation, that opacity could provide cover if a government pressured them to punish a company using their service. Such retribution is far easier if an AI vendor already has a history of not saying why someone has been cut off.

“I definitely don’t trust them anymore,” he said. “You want to build your business around AI integration. And it’s a crown jewel that someone else can easily switch off? What is your contingency plan? Maybe the next time [the account is cut off], I am not so lucky.”

What to tell customers who are being cut off

This issue requires a balancing act, which is where many vendors — and autonomous agents and bots — struggle. 

The argument for not giving detailed information to customers is rooted in cybersecurity and anti-fraud protocols. On the chance that the disconnected customer actually is a fraudster, cyberthief, or state actor, telling them specific details about how their bad behavior was detected could be a mistake. It might allow them to refine their tactics, steal a new identity, and try again, perhaps more successfully.

The argument for telling customers as much as possible is grounded in fairness, giving them a meaningful chance to address the accusations and mount a defense. Paying customers certainly deserve that.

The answer, I would argue, is squarely in the middle, and it’s not that difficult. Tell the customer enough that they can address the issue, but not so much that a thief could find it useful. Tell them what you think they did, not necessarily how you discovered it.

Criminal lawyers do this routinely. When a suspect wants to ask prosecutors for an immunity deal in exchange for revealing information, they have to do the same dance: reveal enough that prosecutors can evaluate the offer and make a decision, but not so much that they have no reason to make a deal.

Using automated decision software

This is an easy one, in theory. But when vendors face serious cost-cutting pressures, it can get a lot harder. Having a human in the loop is critical. 

But that alone is not enough. If software recommends that 11,000 customers be disconnected, and one person has 30 minutes to make the decisions, that works out to roughly a sixth of a second per account. That is not a serious attempt at human management of the process.

The software should detect issues, but humans must actively review them. Yes, there are edge cases, such as when lives are in danger, where an account needs to be suspended immediately and investigated afterward. But those cases are rare. Most of the time, giving a customer advance warning and letting them respond before a decision is made is preferable.
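
To make that division of labor concrete, here is a minimal sketch in Python of how an enforcement pipeline could separate detection from decision. The names here (PolicyFlag, route_flag, imminent_harm) are hypothetical illustrations, not any vendor’s actual system:

    from dataclasses import dataclass
    from enum import Enum, auto

    class Disposition(Enum):
        NOTIFY_AND_AWAIT_RESPONSE = auto()  # default: warn the customer first
        SUSPEND_PENDING_REVIEW = auto()     # rare: act first, investigate after

    @dataclass
    class PolicyFlag:
        # One automated finding; all field names are illustrative.
        account_id: str
        suspected_violation: str  # what the customer is believed to have done
        imminent_harm: bool       # e.g., lives in danger; deliberately rare

    def route_flag(flag: PolicyFlag) -> Disposition:
        # Software detects; a human decides. Only imminent-harm flags skip
        # the advance-warning path, and even those trigger a follow-up human
        # investigation rather than a silent, permanent ban.
        if flag.imminent_harm:
            return Disposition.SUSPEND_PENDING_REVIEW
        return Disposition.NOTIFY_AND_AWAIT_RESPONSE

Under this split, the automated system never issues a final ban on its own; every non-emergency flag becomes a notification to the customer and a queue item for a human reviewer with the authority to close the case either way.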

Having a person discuss the case

This is another area where money comes into play. If a vendor wants to maintain a secure environment for everyone, it needs to be staffed to discuss customer cutoff decisions. And those people need to be sufficiently senior to overrule the software and immediately reinstate customers.

And there should be significant compensation for customers incorrectly accused of wrongdoing. That serves two purposes. One, it will make the customer less angry. Two, companies have little incentive to improve these automated systems if there is no financial pain when the systems make bad recommendations.

To be clear, if a vendor is paying out too much money to customers who’ve been falsely accused, maybe the software needs to be changed. With enough financial pain, that might actually happen.

Sanchit Vir Gogia, the chief analyst at Greyhound Research, said this cutoff situation is going to become more common and enterprises need to devise ways to deal with it.

“Silent shutdowns of paying enterprise accounts by AI and cloud providers are not rare enforcement anomalies,” Gogia said. “They are an emergent control risk created by automation, contractual discretion, and platform scale. [Roughly] 47% of global CIOs admit they have no defined response plan if a core cloud or AI provider suspends their account without explanation. This is not a security gap. It is a governance gap. 

“Enforcement systems now combine billing, fraud, policy, compliance, and reputational risk signals into a single automated decision path. When that path triggers, suspension is immediate and broad. Explanation is optional. Human appeal is uncertain. Continuity becomes conditional. Enterprises that now run identity, data, analytics, and production workloads inside these platforms are absorbing existential operational risk without visibility, proportionality, or procedural protection.”

The best way to mitigate this risk is contractual, Gogia said. “If a vendor can shut you down without telling you why, continuity is conditional. Enterprises must treat provider-initiated shutdown risk as a first-class governance issue. Procurement must demand bounded disclosure, defined …

