You’d better be ready to back up your AI chatbot’s promises

Opinion: I keep hearing about businesses that want to fire their call center employees and front-line staffers as fast as possible and replace them with AI. They’re upfront about it.

Meta CEO Mark Zuckerberg recently said the company behind Facebook was laying off employees “so we can invest in these long-term, ambitious visions around AI.” That may be a really dumb move. Just ask Air Canada.

Air Canada recently found out the hard way that when your AI chatbot makes a promise to a customer, the company has to make good on it. Whoops!

In Air Canada’s case, a virtual assistant told Jake Moffatt he could get a bereavement discount on his already purchased Vancouver to Toronto flight because of his grandmother’s death. The total cost of the trip without the discount: CA$1,630.36. Cost with the discount: CA$760. The difference, not quite a grand, may be petty cash to an international airline, but it’s real money to ordinary people.

The virtual assistant told him that if he purchased a normal-price ticket, he would have up to 90 days to claim back a bereavement discount. A real live Air Canada rep confirmed he could get the bereavement discount.

When Moffatt later submitted his refund claim with the necessary documentation, Air Canada refused to pay out. That did not work out well for the company.

Moffatt took the business to small claims court, claiming Air Canada was negligent and had misrepresented its policy. Air Canada replied, in effect, that the chatbot “is a separate legal entity that is responsible for its own actions.”

I don’t think so!

The court agreed: “This is a remarkable submission. While a chatbot has an interactive component, it is still just a part of Air Canada’s website. It should be obvious to Air Canada that it is responsible for all the information on its website. It makes no difference whether the information comes from a static page or a chatbot.”

The money quote for any business to heed as it presses ahead with its AI plans: “I find Air Canada did not take reasonable care to ensure its chatbot was accurate.”

This is one case, and the damages were minute: Air Canada was ordered to pay Moffatt the refund he was owed. Yet businesses need to know that they are as responsible for their AI chatbots’ accuracy as they are for their flesh-and-blood employees. It’s that simple.

And, guess what? AI large language models (LLMs) often aren’t right. They’re not even close. According to a study by the non-profits AI Forensics and AlgorithmWatch, a third of Microsoft Copilot’s answers contained factual errors. That’s a lot of potential lawsuits!

As Avivah Litan, a Gartner distinguished vice president analyst focused on AI, put it, companies that let AI chatbots serve as their front line of customer service “will end up spending more on legal fees and fines than they earn from productivity gains.”

Attorney Steven A. Schwartz knows all about that. He relied on ChatGPT to find prior cases to support his client’s case. And ChatGPT found prior cases, sure enough. There was only one little problem: six of the cases it cited didn’t exist. US District Judge P. Kevin Castel was not amused. The judge fined Schwartz $5,000, but it could have been much worse. Anyone making a similar mistake in the future is unlikely to face such leniency.

Accuracy isn’t the only problem. Prejudices baked into your LLMs can also bite you. iTutorGroup can tell you all about that. The company paid $365,000 to settle a lawsuit brought by the US Equal Employment Opportunity Commission (EEOC) after its AI-powered recruiting software automatically rejected female applicants aged 55 and older and male applicants aged 60 and older.

To date, the biggest mistake caused by relying on AI was American residential real estate company Zillow’s home-pricing blunder.

In November 2021, Zillow wound down its Zillow Offers program, which used an AI model to advise the company on cash offers for homes that would then be renovated and flipped. With a median pricing error rate of 1.9 percent, and error rates as high as 6.9 percent on some homes, the company lost serious money. How much? Try a $304 million inventory write-down in a single quarter. Oh, and Zillow laid off 25 percent of its workforce.
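
To see how single-digit error rates turn into nine-figure losses at that scale, here’s a back-of-the-envelope sketch in Python. The purchase volume and average price below are illustrative assumptions, not Zillow’s actual figures; only the two error rates come from the reporting above.

# Back-of-the-envelope: how a "small" pricing error compounds at scale.
# The volume and price inputs are illustrative assumptions, not Zillow's
# actual figures; the two error rates are the ones cited above.
homes_purchased = 9_000    # assumed quarterly purchase volume (hypothetical)
avg_home_price = 500_000   # assumed average purchase price in USD (hypothetical)
median_error = 0.019       # 1.9 percent median pricing error (from the article)
worst_error = 0.069        # 6.9 percent error on some homes (from the article)

# If the model systematically overpays by the error rate, each flip loses
# roughly price * error before renovation costs and fees are even counted.
loss_at_median = homes_purchased * avg_home_price * median_error
loss_at_worst = homes_purchased * avg_home_price * worst_error

print(f"Loss at the median error rate: ${loss_at_median:,.0f}")  # ~$85,500,000
print(f"Loss at the worst error rate:  ${loss_at_worst:,.0f}")   # ~$310,500,000
# Under these assumed inputs, the worst-case figure lands in the same
# ballpark as Zillow's $304 million quarterly write-down.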

I’m not a Luddite, but the simple truth is AI is not yet trustworthy enough for business. It’s a useful tool, but it’s no replacement for workers, whether they’re professionals or help desk staffers. In a few years, it will be a different story. Today, you’re just asking for trouble if you rely on AI to improve your bottom line.  ®
