I just saw a post in my news feed reader where the author generated a list of (theoretically real-world) examples with ChatGPT.
I immediately unsubscribed.
My husband says I’m being unfair. But I see it as a measure of quality: if you use ChatGPT to generate examples, that indicates to me that you don’t care about being factual. To trust any generated list, I would need an assurance it was fact-checked, which might be slightly faster than just doing the research in the first place but still requires time. If you can’t be bothered to do research, you’re not meeting my standards for evidence. I’d have thought nothing of it if he just hadn’t listed examples, but as soon as I saw the list was generated, I lost my trust in the entire article, and my interest in reading the newsletter. (It was also a feed I had followed relatively recently, so I was still in the evaluation stage.)
There are only so many articles I can read a day, only so many feeds I can follow; I can’t waste my time on low-quality or untrustworthy material. Just as I am practicing quitting books earlier, I am going to be more selective about the feeds I spend time and attention on by removing feeds from my reader faster.
Using AI to generate content will be a flag for me.
Using it to generate information that should be factual is an immediate no-go without a commitment to fact-checking.* I appreciated Wired’s recent enumeration of their standards for use of generative AI, and am OK with their stated approach of experimenting with research followed by fact-checking at the original source.
(Using AI tools to generate writing signals that the author is not invested in their work; if they can’t be bothered to write it, I don’t know why I should bother to read it.)
In an age where the ruling minority is suppressing truth and gaslighting people with lies, when making the truth hard to ascertain is a regular tool of fascists, I care a lot about high standards of truthfulness in the info I consume. Others may trust AI to provide accurate information (despite the many examples where it fails), but I refuse to give in to convenience: trusting that I can make decisions from factual information is worth the time it takes to research. I wonder how much of the appeal of generative AI is founded in a lack of research skills.
* See also: “On Generative AI, phantom citations, and social calluses” by Dave Karpf