I avoid AI generally. I regularly find that Word's suggestions or corrections are wrong, in that they fail to express what I want to write clearly and accurately. As a retired engineer who previously wrote specifications and patent applications, defended against claims of patent contravention, and prosecuted the case that the client's ‘proven’ design we were contracted to build was doomed to fail, I had to be exact and nuanced - bullet-proof. So I like to write everything myself.
BUT,
in the last year, I have been using AI extensively to write software. To do this I need to write a specification that is intensely boring - currently 7k+ words, often repetitive, and littered with pedantic definitions of obscure parameters - something AI could never specify or write itself. In the process I've learnt that AI is a good example of an idiot savant: absolutely brilliant and lightning-fast at writing software, yet painfully, regularly idiotic, in that while it fixes complex bugs in seconds, it is just as likely to reintroduce the same bug later. It clearly has absolutely no idea of the big picture. In short, I love it and I hate it - it is indispensable, because I could not do what I want without it. I can now do in months what I have been trying to do for decades.
Yes, I've been told that it's a huge game-changer for programming.
How do I listen to or read this on Pelican? I am a paid subscriber and am having trouble accessing articles 🤔
If you are logged in to Pelican+, the link will take you straight to the article:
https://app.pelicanplus.com/tabs/home/web-embeds/59234
And if you're not logged in, well, that's the first step!
Recently, I went for an appointment with a financial advisor. She suggested I use ChatGPT to help me in my job search. I said I don't like AI; she said to just use it. So I have been using it sparingly...
I was going to respond to your piece, Peter, but I genuinely feared getting cancelled by a hysterical mob. There is a very large portion of our audiences who are very quick to label as "demonic" anything they don't like or understand. Using very careful language and making accurate distinctions becomes even more important in such a charged (by which I mean paranoid) atmosphere. We need to help people think clearly and calmly, without polemics, and to acquire some evidentiary standards, not affirm their fears.
Where I became seriously concerned was when you said it was grave matter that must be "taken to confession" - an apparent attempt to bind consciences on an extremely complex subject that even Rome has only just begun to consider. We are laymen, and have no authority to make such a call. Given that you have just said, "It seems to me that somehow there needs to be a bright red line. I don’t know where to draw it, but I know it must be drawn" - and said something similar in your follow-up - the claim that AI use is grave matter that must be taken to confession seems insupportable, even by your own statements. I think a correction wouldn't be a terrible idea.
Would you agree that if a student hands in a paper that was written by AI, he or she may have committed a grave sin?
Or if an academic publishes an AI-produced paper in a peer-reviewed journal?
Those are the situations of which I was primarily speaking.
I think it would be very helpful to make those distinctions clear. If a student gives you fraudulent work, he should be told to do it again. The same goes for academic papers. But an absolute statement that using AI is sinful goes way past anything we are able to assert in conscience. And labelling it sinful is treading on thin ice, particularly in the current atmosphere of online paranoia and the total absence of training in reasoned examination of claims, in evidentiary standards, and in what actually constitutes plagiarism. These are all difficult and vexed questions, and it's not helpful to approach them with a blunt instrument. You yourself just said that you have no idea where the line ought to be drawn; a bit early to get out the label-maker, then, I think.