Mailbag of Reactions For and Against AI
The Difficult Art of Making Distinctions
As one might have expected, my post “How AI Diminishes Our Humanity—and Our Honesty” (and the “Additional Thoughts on AI”) generated a lot of reactions. Several readers sent me very perceptive comments and very good questions (soon it will be possible to leave such comments and questions on our community platform—we’ll let you know when it’s ready). I’d like to share the best of these today, with my responses.
One reader reacted thus:
As a one-man band trying to do the tasks an entire team would normally handle, I do sometimes, inevitably and reluctantly, use AI to help with certain tasks. For instance, something like Perplexity can be a huge help with legal advice/drafting when you can't afford a lawyer. I even had a lawyer confirm and recommend that approach; he said tools like that will get you a long way without having to pay a fortune for an attorney. Of course, once you get into serious matters or finalizing official legal documents, you'll definitely want real legal counsel/review, but for general advice and drafts, he was fine with using those tools. I also might use something like Perplexity to help me find relevant information, always checking its sources, or to bounce ideas off of it, since I don't have any teammates to do that with.
As I'm sure you already know, AI has been in use for quite a long time already. Spell-check and grammar-check in Microsoft Word were an early form of AI. Programs like Adobe Photoshop/Premiere Pro/After Effects for photo and video, or DAWs like Pro Tools, Logic, Cubase, etc. for audio, have used forms of AI in many of their tools since the late '90s and early 2000s. It's mostly in the last four years, since the dawn of ChatGPT, that AI has really exploded and become so pervasive.
But there are a couple of things I wanted to point out about your article too. In it you advocate for not using any kind of AI at all... but in the same article you also mention that you use GPTZero, which is itself an AI tool! It might be good to update that article to clarify either that you no longer use GPTZero or that you find there are some valid exceptions to the rule. Otherwise, it reads as a bit puzzling and ironic.
I tried using GPTZero myself and found it no more reliable than any LLM. Actually, much less so. I gave it two of my recent articles (which I had spent a long time working on), and it flagged both as fairly certainly AI-written or a mix. "Hmmm, that's interesting," I thought. "I wonder why." When I looked at its reasons, I was even more puzzled. When I write articles, I intentionally write in a more professional/academic tone. That's normal. Priests often do this with their sermons, and professors do it with their lectures and speeches. But apparently GPTZero thinks that such a tone "looks like AI." Granted, I am not a professional writer and no doubt my writing can be improved, but the reasons it gave for thinking the sentences were AI seemed outright ridiculous.
GPTZero said of various sentences I had written: “The sentence uses indirect speech and paraphrasing, creating an impersonal tone…. The sentence uses a precise and technical word-choice, which prioritizes clarity and sophistication but affects the natural flow of the sentence…. The sentence uses a formal and polished structure with a focus on clarity and orderliness, but the repetitive use of technical terms may make it sound robotic…. The sentence uses a formal and somewhat stilted phrase and a complex phrase structure contributing to an overly formal tone.” All this is absurd. It implies that a normal human writer can’t write anything sophisticated, complex, formal, orderly, or technical!
So GPTZero seems much less reliable than what I typically get from queries with Perplexity. A lot of the time, the garbage responses from these LLMs come either from garbage prompts (garbage in, garbage out) or from garbage sources. Or they "hallucinate." But at least Perplexity (and some others) provide source citations, which I always check for accuracy.
In short: I do personally try to avoid many forms of AI, but since I'm trying to do everything myself in my business, I find it can also be helpful at times. And again, various forms of AI have been built into programs for decades.
I grant that something like AI has been around for a while, and I think what will be necessary is to distinguish different kinds of activities—gruntwork vs. creative or academic work. Certainly, the fact that so many people are trying to make a go of running a business all by themselves in such a complicated modern world seems to necessitate an army of “robots” to help us.
My view is that, ideally, we shouldn’t all be trying to do so much that we have to create an army of robotic assistants to support our livelihood. It is more natural, more human, and more divine to band together in villages where we can share responsibilities and gifts. (I say “more divine” because this is the life that the Son of God designed and chose for Himself from all eternity.)
But I know this "hamlet with a Latin Mass atop the hill" is a pipe dream for most of us at this time. It seems that the further we move away from normal, local occupations, the more dependent we make ourselves on various unnatural arrangements. I'm not sure how long this can really last.
There is a further point. As I argued in my essay, AI in its advanced stage is making even human tasks seem like gruntwork, so we now have people advocating for AI-written novels, music, and what have you, to “relieve” people of the labor. It seems to me that somehow there needs to be a bright red line. I don’t know where to draw it, but I know it must be drawn. For example, we should never put AI-generated art on our walls and never listen to AI-generated music.
Another reader chimed in:
I enjoyed your recent article about AI. I noticed you mentioned in your roundup that "legal documents" weren't really creative works, with kind apologies to lawyers. I have some thoughts on that. I've been an attorney for nearly twenty years and file lots of legal documents. In my opinion, there are two main types of such documents:
1) Boilerplate. Motions to continue, etc. These are not creative documents.
2) Substantive filings. These are often intensely difficult and extremely creative. They involve constant reaching into statute and precedent, and often very nuanced argument. For example: whether a particular claim of injury should fall under the hearsay exception for statements for medical diagnosis or treatment in a very unique set of circumstances. This requires a lot of parsing of the evidentiary rules and nuanced reading of case law. […]
Please note that reading Monday’s article in full, or listening to its voiceover, requires a paid membership at Pelican+. Visit this amazing platform — now hosting dozens of your favorite traditional Catholic writers and podcasters — and find out what you might be missing!