Artificial Intelligence – who is accountable for getting it wrong?

Artificial intelligence (AI) is rapidly becoming part of our daily and working lives, from customer service chatbots and legal research tools to medical diagnostics and financial decision-making systems. While its potential is immense, AI is not infallible. It can make mistakes, and sometimes those mistakes can have serious consequences. This raises an important question: who is responsible when AI gets it wrong?

Why AI gets things wrong

AI systems are trained on data. If that data is incomplete, inaccurate, or biased, the results can be flawed. Even with good data, AI can misinterpret complex situations or fail to pick up on nuances a human would recognise. Sometimes the error is obvious, but at other times it may be subtle and go unnoticed until it causes harm.

In high-stakes sectors such as healthcare, finance or law, a single incorrect AI-generated output could lead to a misdiagnosis, a wrongful loan rejection, or flawed legal advice.

Accountability – who bears the risk?

The law around AI accountability is still evolving in the UK and internationally. At present, responsibility often depends on the context:

  • The organisation deploying the AI may be liable if it fails to ensure the technology is fit for purpose, thoroughly tested, and appropriately monitored.
  • The developer or supplier might bear some responsibility if the error stems from a defect in the system itself.
  • The human operator still has a role in checking outputs and exercising judgment, particularly in regulated industries where professional standards apply.

What is clear is that relying solely on AI without human oversight is risky. In many cases, liability will ultimately rest with the party making or acting on the decision, even if it was based on an AI recommendation.

Risks of over-reliance on AI

The convenience of AI can tempt people into trusting its outputs unquestioningly. This creates several risks:

  • Loss of critical thinking: professionals may stop questioning the results and fail to spot errors.
  • Bias amplification: if the training data contains bias, AI can perpetuate or even worsen it.
  • Lack of transparency: some AI models are “black boxes”, meaning it is difficult to explain how a conclusion was reached.
  • Data protection issues: AI may process personal data in ways that raise compliance concerns under UK GDPR.

The safest approach is to treat AI as a powerful tool, but one that must be used with care.

Best practices for responsible AI use

If your business or profession uses AI, it is worth taking steps to manage the risks:

  • Always validate significant outputs with human review before acting on them.
  • Keep clear records of how decisions are made, including the role AI played.
  • Ensure training and awareness so staff understand the limits of the technology.
  • Work with suppliers who can explain their systems and provide transparency on data sources and testing.
  • Have a plan for rectifying errors quickly if they occur.

Looking ahead

AI is only going to become more sophisticated and more deeply embedded in the way we work and live. With that comes the need for clear rules on accountability, robust oversight, and a continued emphasis on human judgment. Trust in AI will grow only if users and the public are confident that when it goes wrong, there is both a safety net and a clear route to putting things right.

To speak with us about any aspect of Commercial & Corporate law, please call 01483 887766, email info@hartbrown.co.uk or start a live chat today.

*This is not legal advice; it is intended to provide information of general interest about current legal issues.


Nigel Maud

Partner, Commercial & Corporate, COLP

Nigel read Psychology and Politics in South Africa. He went on to qualify as a solicitor in 1995 and initially practised as a prosecutor before moving into private practice, where he specialised in commercial work. He then moved into the business recovery and restructuring department at PricewaterhouseCoopers, further broadening his understanding of the problems and challenges a business faces.

Relocating to England in 1999, Nigel joined Hart Brown in 2002 and became a partner in 2004.

Nigel has often received praise from his clients; these are just a few of the comments:

"Very efficient, cost effective service."

"This marks the end of a very long (15 years) and successful relationship with Hart Brown on the liquidation of the company. We thank the partners and staff at Hart Brown for all the advice and wise counsel they have given us over the years."

"You have an excellent team of people who make sure they understand the needs of the client."