How AI Diminishes Our Humanity—and Our Honesty
In recent months, I have been burned three times by people of apparent intelligence and goodwill who tried, essentially, to sell me work that was written 100% by AI, as diagnosed by GPTZero, a detector that experts consider quite reliable. Every time I have fed it my own work, it has reported “100% human generated”; every time I have fed it work known to be AI-generated, it has reported “100% AI-generated”; and in instances where someone used AI to generate a draft and then modified it manually, it has come up with a percentage split like 60/40 or 70/30.
From conversations with seasoned professors (I haven’t been teaching for the past seven years, so I missed the sudden invasion of AI into the schools), I have come to think that most people who use AI don’t actually believe it is unethical to do so. They think that if they craft the prompt carefully and then agree with the result, it somehow counts as “their work,” much as a man who presses a button in a factory to generate a product might think he “made” it, even though there was no virtue of art in his soul at all. I shall say more below about how this illusion has managed to capture minds.
In reality, however, if one has not written 100% of the essay or the article, it is not honest to call it one’s own work, to put one’s name on it, and, a fortiori, to sell it as one’s own.
Why some people do not think it’s dishonest
In an article at First Things, “The AI Cheating Epidemic,” Jeremy S. Adams has missed an important aspect of the situation he analyzes. The students who are using these tools to cheat their way through school are not just using them brazenly; they are losing their perception that using such tools is wrong. It’s not that they are losing shame over doing wrong, but that they are losing clarity about the wrongness of the thing itself.
To understand what is happening, we need to draw on two important thinkers: Walter Ong and Marshall McLuhan.1 Ong analyzes the progression from oral culture to manuscript culture to printing press culture, while McLuhan analyzes the transition from the printing press to digital media.
Each of these transitions changed the way we perceive the word itself.
Moving from an oral culture to a written culture makes the word visible, an imaginable object, for the first time. It makes the word more objective and separable from its author, and it brings with it more logical, sequential reasoning.
The transition from manuscript culture to printing press culture intensifies this transition, making the written text all the more impersonal. It goes along with the Age of Enlightenment and the dream of perfectly objective thought. It brings with it the apex of the written word as dominant in human culture.
The transition from the printing press to television downplayed the importance of the word, and the subsequent transition to digital media downplayed it further in one way while, in another, bringing the word back in a form that resembles oral culture. The written word now comes at us in bits and bytes, with a simpler grammar than before. Twitter/X is the symbol of where the written word has arrived, or rather, of what it has come to.
It is important to see large language models (LLMs) as another medium for the written word. Just as all the new language media did before, LLMs are changing the way we understand the written word itself. In this case, LLMs are making it easier to produce large amounts of text—but while they do so, and precisely because of how they do so, they persuade all of us that the written word is not something uniquely and especially human. Writing starts to feel like doing dishes—a task no one would do if they didn’t have to, or if they could find a way around it.
In a world where text can be generated as easily as a dishwasher can be turned on, the tasks inflicted upon students when they are asked to write will look and feel, more and more, like nothing but busywork.

Very good distinction regarding the students’ lack of clarity with regard to ethics. However, the ethical collapse is always downstream of moral collapse, so it is kind of expected.
I think that a good way of going forward in universities is to explain that AI use can harm the students' professional goals. After all, they are repeatedly told that their professional goals are more important than learning how to think.
I have found this to work quite well. The really crass offenders will use it either way, but those on the fence about it are quite easily dissuaded. That's my experience.
As a former teacher of elementary and high school for over 30 years, I wholeheartedly agree with you. Writing across the curriculum was my goal for all students, especially because I struggled so as a student with the process. Organizing thoughts was always a challenge, and I probably would have been tempted to use AI. (I remember crying in community college because I could not write a paper.) God bless our teachers today. What challenges they face!