Liability for AI Errors: Who Actually Bears the Risks?

Artificial intelligence (AI) today does not merely analyse data — it makes decisions that directly affect key business processes, from recruitment to lending and marketing. AI errors may result in discrimination, financial losses, infringements of consumer rights, and reputational risks. The question “Who is liable for these errors?” has become a key issue for companies.

Who is liable if AI makes a mistake?

As a general rule, liability rests with the company providing the end service, rather than with the model developer or the owner of the server infrastructure. If a business has integrated a neural network, for example, into its pricing process or customer chatbot, it is that business which is accountable to the user for the correctness of the outcome.

Developers are liable for defects in the code and for breaches of system quality requirements. If the error was caused by a technical failure, claims may be brought against the developer.

Users are liable for improper use of AI, disregard of its limitations, and lack of oversight. For example, where an employee used a chatbot to analyse personal data in breach of internal rules and the data were leaked, liability fell on the employee rather than on the developer.

Legislation and liability

Legislation has not yet laid down AI-specific liability rules, but new instruments, such as the EU AI Act, are shaping the roles and obligations of companies that deploy high-risk AI systems:

  • to use the system strictly in accordance with the provider’s instructions;
  • to appoint competent human oversight;
  • to ensure the quality and relevance of input data;
  • to monitor the system’s operation and suspend it wherever significant risks arise;
  • to notify the provider and regulators immediately of failures;
  • to retain detailed records of the system’s operation for at least six months.

A breach of these requirements is not only treated as non-compliance, but may also become direct evidence of fault: in the event of an incident, this materially increases the liability of the company that deployed the AI without proper oversight.
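For teams translating the monitoring, suspension, and record-keeping duties above into practice, the sketch below shows one possible shape they might take in code. It is a minimal illustration under our own assumptions, not a compliance recipe: the class, the file-based log store, and the retention constant are illustrative choices, and the AI Act prescribes outcomes rather than implementations.

  import json
  import time
  import uuid
  from pathlib import Path

  # Illustrative sketch only: the file layout and field names are not
  # prescribed by the EU AI Act; they show one way to keep reviewable
  # records of an AI system's operation and to pause it when risks arise.

  LOG_DIR = Path("ai_audit_logs")   # assumed storage location
  RETENTION_DAYS = 183              # at least six months

  class AuditedAISystem:
      def __init__(self, model_fn):
          self.model_fn = model_fn  # underlying model call (assumed interface)
          self.suspended = False
          LOG_DIR.mkdir(exist_ok=True)

      def run(self, user_input: str) -> str:
          if self.suspended:
              raise RuntimeError("System suspended pending risk review")
          output = self.model_fn(user_input)
          self._log(user_input, output)
          return output

      def suspend(self, reason: str) -> None:
          # Called by the designated human overseer when a significant risk is identified.
          self.suspended = True
          self._log("SYSTEM_SUSPENDED", reason)

      def _log(self, prompt: str, response: str) -> None:
          record = {
              "id": str(uuid.uuid4()),
              "timestamp": time.time(),
              "prompt": prompt,
              "response": response,
          }
          (LOG_DIR / f"{record['id']}.json").write_text(
              json.dumps(record, ensure_ascii=False))

  def purge_expired_logs() -> None:
      # Delete records only once the retention window (six months or more) has passed.
      cutoff = time.time() - RETENTION_DAYS * 24 * 3600
      for path in LOG_DIR.glob("*.json"):
          if json.loads(path.read_text())["timestamp"] < cutoff:
              path.unlink()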

Case law

  1. Moffatt v. Air Canada. A customer used the airline’s chatbot to ask about a bereavement fare. The chatbot stated that the reduced fare could be claimed retroactively after the trip; the customer purchased tickets, completed the journey, and submitted an application. The application was rejected, and when he brought the matter before the tribunal, Air Canada attempted to shift the blame onto the “chatbot” as if it were a separate entity. The tribunal confirmed that the chatbot forms part of the company’s website and that the company itself is responsible for all information provided there. Air Canada was ordered to compensate the customer for the fare difference.

  2. Walters v. OpenAI. A radio host brought a defamation claim against OpenAI after ChatGPT fabricated allegations that he had embezzled funds from the Second Amendment Foundation. The court noted that ChatGPT warns users that its output may be inaccurate and should not be treated as wholly reliable, and the claimant failed to prove that OpenAI had acted negligently or with malice. The claim was dismissed.

This shows that warnings, quality control, and clear user information help protect developers in disputes over inaccurate output. Companies that publish AI-generated responses, however, remain exposed to risk if they do not exercise sufficient oversight over how those responses are used.

What should be done

  • Develop an internal policy on the use of AI. Define permitted use cases and prohibit autonomous decision-making without human oversight.
  • Appoint persons responsible for AI governance and for quality control of system performance.
  • Conduct regular testing for accuracy and absence of bias.
  • Ensure transparency. Label AI-generated content and inform users of its origin.
  • Allocate responsibility contractually in agreements with suppliers and integrators.
  • Review customer-facing chatbots and ensure they are audited regularly. Put filters in place to escalate a dialogue to an employee where a request is complex or critical, and verify the correctness of responses (a minimal sketch of such a filter follows this list).
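One way such an escalation filter might look is sketched below. The trigger keywords, confidence threshold, and function names are illustrative assumptions; a production chatbot would tune them to its own products, languages, and risk profile.

  import re

  # Minimal sketch of an escalation filter for a customer-facing chatbot.
  # The keyword list and threshold below are illustrative assumptions.

  ESCALATION_PATTERNS = [
      r"refund", r"compensation", r"complaint", r"legal", r"lawyer",
      r"discriminat", r"cancel my (account|contract)",
  ]

  LOW_CONFIDENCE_THRESHOLD = 0.6  # below this, the bot should not answer alone

  def should_escalate(message: str, model_confidence: float) -> bool:
      # Route complex or critical requests to a human employee.
      if model_confidence < LOW_CONFIDENCE_THRESHOLD:
          return True
      return any(re.search(p, message, re.IGNORECASE) for p in ESCALATION_PATTERNS)

  def handle_message(message: str, bot_reply: str, model_confidence: float) -> str:
      if should_escalate(message, model_confidence):
          # Hand the dialogue to a human agent and keep the exchange for review.
          return "I'm passing your request to a colleague who can confirm the details."
      return bot_reply

  if __name__ == "__main__":
      print(handle_message("Can I get a refund on a bereavement fare?",
                           "Yes, you can apply after your trip.", 0.9))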

Accordingly, liability for AI errors always rests with the people and organisations that create, deploy, and use it. To reduce risks, businesses need to build an AI governance system, from policies and processes to contractual arrangements.

AI does not relieve anyone of liability; on the contrary, it requires clear oversight, transparency, and a properly structured risk management system.


Protect your business against AI risks together with REVERA

AI errors are inevitable, but the risks for a company can be minimised by establishing transparent processes, quality control, and allocation of responsibility. REVERA’s experts can help to:

  • develop an internal policy on the use of AI;
  • allocate responsibility in agreements with suppliers and integrators;
  • ensure compliance with new legislative requirements, including the EU AI Act.

Do not leave risk to chance — contact REVERA and protect your business against the legal and reputational threats associated with the use of AI.

Authors: Daria Gordey, Artem Handriko.
