How AI Diminishes Our Humanity—and Our Honesty
In recent months, I have been burned three times by people of apparent intelligence and goodwill who have tried, essentially, to sell me work written entirely by AI, as diagnosed by GPTZero, which experts consider quite reliable. Every time I have fed it my own work, it has reported “100% human generated”; every time I have fed it work known to be AI-generated, it has reported “100% AI-generated”; and in instances where someone used AI to generate a draft and then modified it manually, it has returned a split such as 60/40 or 70/30.
From conversations with seasoned professors (I have not taught for the past seven years, so I missed the sudden invasion of AI into the schools), I have come to think that most people who use AI do not actually believe it is unethical to do so. They think that if they crafted the prompt carefully and agree with the result, it somehow counts as “their work,” much as a man who presses a button in a factory to produce a product might think he “made” it, even though there was no virtue of art in his soul at all. I shall say more below about how this illusion has managed to capture minds.
In reality, however, if one has not written 100% of the essay or the article, it is not honest to call it one’s own work, to put one’s name on it, and, a fortiori, to sell it as one’s own.
Why some people do not think it’s dishonest
In an article at First Things, “The AI Cheating Epidemic,” Jeremy S. Adams misses an important aspect of the situation he analyzes. The students who are using these tools to cheat their way through school are not just using them brazenly; they are losing their perception that using such tools is wrong. It is not that they are losing shame over doing wrong, but that they are losing clarity about the wrongness of the thing itself.
To understand what is happening, we need to draw on two important thinkers: Walter Ong and Marshall McLuhan.1 Ong analyzes the progression from oral culture to manuscript culture to printing press culture, while McLuhan analyzes the transition from the printing press to digital media.
Each of these transitions changed the way we perceive the word itself.
Moving from an oral culture to a written culture makes the word, for the first time ever, visible and imaginable as a thing that can be seen. It makes the word more objective and makes it inseparable from its author. It brings with it greater objectivity of thought and more logical, sequential reasoning.
The transition from manuscript culture to printing press culture intensifies this change, making the written text all the more impersonal. It accompanies the Age of Enlightenment and its dream of perfectly objective thought, and it brings the written word to the apex of its dominance in human culture.
The transition from printing press to television downplayed the importance of the word, and the subsequent transition to digital media downplayed the word still further in one way while, in another, bringing it back in a form that resembles oral culture. The written word now comes at us in bits and bytes, with a simpler grammar than before. Twitter/X is the symbol of what the written word has come to.
It is important to see large language models (LLMs) as another medium for the written word. Just as all the new language media did before, LLMs are changing the way we understand the written word itself. In this case, LLMs are making it easier to produce large amounts of text—but while they do so, and precisely because of how they do so, they persuade all of us that the written word is not something uniquely and especially human. Writing starts to feel like doing dishes—a task no one would do if they didn’t have to, or if they could find a way around it.
In a world where text can be generated as easily as a dishwasher can be switched on, the writing assignments given to students will look and feel, more and more, in themselves and essentially, like nothing but busywork.

Indeed. In the early days of ChatGPT, I asked it three questions to which I knew the answers. Each time, the program delivered up a falsehood. When I confronted it about these lies—which people in the industry gloss over as "hallucinations" or other weasel words—the model backtracked, complimented me on my "rigor," and tried to rationalize away the untruth.
AI/LLMs are, in short, a demonic work—a perversion of the word and, consequently, of our sense of truth, even of our awareness that we are lying to ourselves about it: of the very sense of right and wrong.
AI is the ultimate result when language is treated as commodity, not mystery. The reciprocal result on our poor souls is that we become incapable of apprehending meaning at all, either spiritual or natural, as the Psalmist warns us in Psalm 113: "The idols of the gentiles are silver and gold, the works of the hands of men. They have mouths and speak not; they have eyes and see not; they have ears and hear not..."