Do we need to be able to show our work?
In the second of her ethics emails, Felicity Wild poses the question, "How comfortable are you using AI-created work when you can't explain how it was made?" Because, as she points out, "when you use AI to help with work, you're often using systems so complex that even their developers can't fully explain how they work."
Ethically, I have no problem using systems or working in areas where I can't explain how the end result was derived, because I have done so consistently throughout my life, long before AI entered the picture.
I don't fully understand the chemical reactions between ingredients, but I trust the cookbook that tells me I need baking powder or yeast to make bread rise.
Last year, my doctor recommended that I begin taking three different vitamins. I didn't ask questions and started taking them daily.
Just as journalists rely on polling data and statistical analyses conducted by specialists using methodologies they can't replicate, I rely on many experts who know more than I do, from my car mechanic to the scientists and engineers who build GPS satellites.
Behind each of these recommendations and systems are experts I have chosen to trust.
For me, it's not an ethical challenge that makes me uncomfortable using AI-created work when I can't explain how it was made. It's the fact that AI so often hallucinates and gets things wrong.
And the more I work with AI, the less reliable I find it to be.
That doesn't mean I'm using it less. In fact, I'm probably using it more. But where I once hoped to outsource big chunks of my work to it, I'm finding that the most critical work must still be owned by me. AI can help bring some depth and breadth that I didn't have before, but even then, I have to fact-check it.
As a result, I'm using AI to write less. I'm using it more to ask myself questions and challenge myself to think more deeply, particularly from different perspectives. And I find it helpful for writing really good research questions that I can pose to Perplexity.
Are you familiar with the quandary in Dungeons & Dragons of obtaining a "wish" spell or a genie that grants wishes? Invariably, it's a trap: the dungeon master (DM) will find some way to grant the wish, but with a twist that makes it less desirable (or outright terrible).
AI is good at wording a wish carefully enough to sidestep the trap, and it's similarly good at writing 20 research questions that I can have Perplexity deep-research. However, I still need to review that research and pose follow-up questions to ensure it's accurate, because sometimes it isn't. The result is far faster and deeper than anything I'd get by going to the library and reading the books and journals myself, but it can't be entirely trusted.
All that to say, I'm not comfortable using AI when I can't explain how it reaches its conclusions, but that's not because of my ethics or any philosophical concern about black-box systems. My discomfort is entirely practical: I can't verify whether AI's reasoning process led it to the right answer or to a plausible-sounding hallucination. Even when what AI spits out matches my lived experience or understanding, I find myself hesitant these days. Is it writing that because it knows I'll like it and agree with it, or because it's true?
Don't trust, verify.
This post was inspired by the course Nobody Cares About Ethics, a 15-day free email course by Felicity Wild that challenges you to think more deeply about how and why you use artificial intelligence tools.