Who's Responsible When AI Does Wrong?
Responsibility is typically easy to assess: the person who takes the action or makes the decision is responsible for the outcome. No matter what influences your actions, whether drugs, unresolved trauma, hope, or delirium, the consequences are assigned to the person, not to the influences. We might say the consequences are greater or lesser depending on the person and their capabilities (for instance, if they're underage), but we don't say that the influence is fully responsible.
So it should be straightforward to say that whoever uses AI to do something is at fault if the AI guides them wrong, not the AI itself. They took the input (the AI) and decided what to do with it.
When I was learning about conflict resolution as part of my postgraduate work, I was introduced to the maxim that no one person is to blame in a conflict; rather, multiple people and circumstances typically contribute to it. With this in mind, suppose someone makes a poor decision based on AI, or publishes something AI wrote that contains inaccuracies. We could say that the AI contributed to the mistake, but the person is still responsible.
In school, we are taught to think critically, and that requires reading a variety of sources that both challenge us and challenge each other. We must then examine all of the information we have gathered and determine who is right and who is wrong, and we must form our own thoughts and conclusions that may rest somewhere between our various sources.
Part of the allure of AI, and the lie of it, is that it's so easy and seems so smart and well-read. We think that generative AI has all the sources, has done this careful thinking, and has reached the best and most correct conclusion. But in truth, not only are its inputs limited (if astoundingly large), but the way it produces the text you see is based on inference and guesswork.
Using AI also feels like going to a committee, and the modern business (and government) world loves a committee. If we decide as a group, then no single individual will be blamed if the decision is wrong. The worst-case scenario is that the group is dissolved and a new group is formed that, supposedly, can make better decisions.
But we lie to ourselves, consciously or subconsciously, when we buy into distributed responsibility and believe it means no one is responsible. What's happening with the committee is that no single individual can be blamed, and there may be no consequences for any one person, but that doesn't mean a person isn't responsible. They may just be using the committee to dodge their responsibility.
The one who used AI, and/or accepted its output, is responsible if the output is wrong.
Does this mean that AI companies have no responsibility? No, I believe that they do. But in the same way that Facebook and Instagram will likely face no significant consequences for the negative impacts they've had on society, particularly on teenagers, I doubt that AI companies will face significant consequences for the false or inaccurate outputs that AI generates.
Remember, responsibility doesn't mean consequences. But it may mean culpability. It would be nice if we could offload our personal responsibility for AI failures and lay it at the feet of AI companies, but we can't legitimately do that. The tools those companies have built contributed to the mistake or failure, but responsibility rests with the person who used the tool.
Those of us using AI must treat it as critically as we would any other source. Use it to speed up your work, by all means, but don't take its output as gospel. Compare it with other sources. Think through what it produces and how confident you are in its accuracy and quality.
Because if you decide to put something out written by AI, it's your reputation on the line, not OpenAI's or Anthropic's. And even more important than reputation, in my opinion, it's your integrity that's at risk. When you use AI carelessly and something goes wrong, you're not just damaging how others perceive you; you're potentially compromising your own moral consistency, especially when you know better but choose convenience over conscience.
This post was inspired by the course Nobody Cares About Ethics, a 15-day free email course by Felicity Wild that challenges you to think more deeply about how and why you use artificial intelligence tools.