Hold on, let me have ChatGPT rephrase that for you.
I’m not exactly sure of the source, but there was a statement suggesting that language models offer three kinds of responses: ones that are too general to be of any value, those that essentially mimic existing content in a slightly altered form, and assertions that are completely incorrect yet presented with unwavering certainty. I might be paraphrasing inaccurately, but that was the essence.
Are all responses so non-committal? “I’m not exactly sure”, “I might be paraphrasing inaccurately”.
I hope this sort of phrasing doesn’t make its way into common usage by students and early-career people. Learning to run away from liability at every opportunity is not going to help them.
This is just ChatGPT rephrasing the comment above me. Don’t worry, though: when ChatGPT is wrong, it sounds quite confident and even cites sources that don’t exist but look quite convincing!