11 Comments
Pete Prochilo:

Indeed. In the early days of ChatGPT, I asked it three questions I knew the answer to. Each time, the program delivered up a falsehood. When asked about these lies—which people in the industry gloss over as "hallucinations" or some other weasel words—the model backtracked, complimented me on my "rigor," and tried to rationalize away the untruth.

AI/LLMs are, in short, a demonic work—a perversion of the word, and consequently, our sense of truth, and even our sense that we are, as you say, lying to ourselves about it. The very sense of right and wrong.

Peter Kwasniewski:

This has been exactly my experience.

And why do people naively think "it will get better over time," since it's drawing its massive databases from the gigantic mess of truth and error (and obscenity) produced by millions of people online?

Sirhc Aeyrud:

Such a timely article. Periodically, management goes on a tangent pushing AI use by saying "just try it." I kindly remind them, especially lately, that AI has been behaving like that person we all know who doesn't know the answers but cobbles together a bunch of random knowledge.

Last week, a manager who can't ever seem to stop boasting about himself unsurprisingly thought it okay to mention that he had used AI to write our reviews.

We are living in a world where humility is gone. Without humility we lack the drive to do what is morally right, devolving further into self-interest instead of charity toward our neighbor.

It takes humility to seek God’s graces…

Angela Cuba:

AI is the ultimate result when language is commodity, not mystery. The reciprocal result on our poor souls is that we become incapable of apprehending meaning at all, either spiritual or natural, as the Psalmist warns us in Psalm 113:12-14: "The idols of the gentiles are silver and gold, the works of the hands of men. They have mouths and speak not: they have eyes and see not; they have ears and hear not..."

Aaron Pattee:

Very good distinction regarding students' lack of charity with regard to ethics. However, ethical collapse is always downstream of moral collapse, so it is to be expected.

I think a good way forward in universities is to explain that AI use can harm students' professional goals. After all, they are repeatedly told that their professional goals are more important than learning how to think.

I have found this to work quite well. The really crass offenders will use it either way, but those on the fence about it are quite easily dissuaded. That's my experience.

Lucy Fahrbach:

As a former elementary and high school teacher of over 30 years, I wholeheartedly agree with you. Writing across the curriculum was my goal for all students, especially because I struggled so with the process as a student myself. Organizing my thoughts was always a challenge, and I probably would have been tempted to use AI. (I remember crying in community college because I could not write a paper.) God bless our teachers today. What challenges they face!

Susan Sherwin:

I saw a video of a man using AI that reminded me of asking questions of a Ouija board, which, exorcists say, opens the door to the demonic.

A.T. Shackelford:

Really looking forward to CR Wiley's new book on this.

Peter Kwasniewski:

What's the book?

A.T. Shackelford:

https://open.substack.com/pub/crwiley/p/ai-and-the-meaning-of-life?r=1qi332&utm_medium=ios

I don’t believe he’s released the title yet, but he said on a recent episode of the Theology Pugcast that he had finished the manuscript.

Seems like it’s been his main focus for the last couple of years.

Comment deleted (Jan 8)
Peter Kwasniewski:

There are many checkers and their reliability seems to vary tremendously.

GPTZero gets high reviews. At least when I've used it, it has been remarkably accurate at identifying things written entirely by a human and things generated entirely by AI. It has never gotten that wrong.