Keeping your promises
In 2022, a grieving passenger went to Air Canada’s website and asked its AI-powered chatbot about the airline’s bereavement policy. The chatbot said yes: reduced fares are available if you’re traveling after the death of a loved one, and you have 90 days after taking the flight to file a claim. The problem: That’s not Air Canada’s policy. The airline specifically requires passengers to apply for and receive the discount ahead of time, not after the flight.
Now, a Canadian court says that Air Canada has to honor the promises made by its AI chatbot, even though they were incorrect and inconsistent with the airline’s policies.
“While a chatbot has an interactive component, it is still just a part of Air Canada’s website,” the judge in the case wrote. “It should be obvious to Air Canada that it is responsible for all the information on its website. It makes no difference whether the information comes from a static page or a chatbot.”
It’s a big ruling that could set a new precedent, at least in Canada, that AI companies, or their clients, are legally liable for the accuracy of their chatbots’ claims. And that’s no simple thing to fix: Generative AI models are notorious for hallucinating, or making stuff up. If using AI becomes a major liability, it could drastically change how AI companies act, train their models, and lawyer up.
And it would immediately make AI a tough product to sell.