In recent months, I have been burned three times by people of apparent intelligence and goodwill who have tried, essentially, to sell me work that was written 100% by AI, as diagnosed by GPTZero, which experts consider quite reliable.
Indeed. In the early days of ChatGPT, I asked it three questions I knew the answer to. Each time, the program delivered up falsehood. When asked about these lies—which people in the industry gloss over as "hallucinations" or some other weasel words—the model backtracked, complimented me for my "rigor," and tried to rationalize away the untruth.
AI/LLMs are, in short, a demonic work—a perversion of the word, and consequently, our sense of truth, and even our sense that we are, as you say, lying to ourselves about it. The very sense of right and wrong.
And why do people naively think it "will get better over time," when it draws its massive training data from the gigantic mess of truth and error (and obscenity) produced by millions of people online?
AI is the ultimate result when language is commodity, not mystery. The reciprocal result on our poor souls is that we become incapable of apprehending meaning at all, either spiritual or natural, as the Psalmist warns us in 113: "12 The idols of the gentiles are silver and gold, the works of the hands of men. 13 They have mouths and speak not: they have eyes and see not; 14 They have ears and hear not..."
A very good distinction regarding students' lack of charity with regard to ethics. However, ethical collapse is always downstream of moral collapse, so it is in a way to be expected.
I think that a good way forward in universities is to explain that AI use can harm students' professional goals. After all, they are repeatedly told that their professional goals are more important than learning how to think.
I have found this to work quite well. The really crass offenders will use it either way, but those on the fence about it are quite easily dissuaded. That's my experience.
As a former teacher of elementary and high school students for over 30 years, I wholeheartedly agree with you. Writing across the curriculum was my goal for all students, especially because I struggled so much with the process as a student myself. Organizing my thoughts was always a challenge, and I probably would have been tempted to use AI. (I remember crying in community college because I could not write a paper.) God bless our teachers today. What challenges they face!
There are many checkers and their reliability seems to vary tremendously.
GPTZero gets high reviews. At least when I've used it, it has been remarkably accurate at distinguishing text written entirely by a human from text generated entirely by AI. In my experience it has never gotten that wrong.
This has been exactly my experience.
I agree, but I would be cautious of the AI checkers.
I saw a video of a man using AI that reminded me of asking questions of a Ouija board, which, exorcists say, opens a door to the demonic.
Really looking forward to CR Wiley’s new book on this
What's the book?
https://open.substack.com/pub/crwiley/p/ai-and-the-meaning-of-life?r=1qi332&utm_medium=ios
I don’t believe he’s released the title yet, but he said on a recent episode of the Theology Pugcast that he had finished the manuscript.
Seems like it’s been his main focus for the last couple of years.