AI article roundup

The Luring Test: AI and the engineering of consumer trust (FTC, 1 May 2023) – to read

AI Is Tearing Wikipedia Apart (Vice, 2 May 2023) – to read

Artificial General Intelligence and the bird brains of Silicon Valley (Out of the Software Crisis, 2023) – to read

Every other time we read text, we are engaging with the product of another mind. We are so used to the idea of text as a representation of another person’s thoughts that we have come to mistake their writing for their thoughts. But they aren’t. Text and media are tools that authors and artists create to let people change their own state of mind—hopefully in specific ways to form the image or effect the author was after.

Reading is an indirect collaboration with the author, mediated through the writing. Text has no inherent reasoning or intelligence… The idea that there is intelligence somehow inherent in writing is an illusion. The intelligence is all yours, all the time: thoughts you make yourself in order to make sense of another person’s words.

Talk: The Expanding Dark Forest and Generative AI (Maggie Appleton, 27 April 2023) – to watch

Why Chatbots Are Not the Future (Amelia Wattenberger)

Good tools make it clear how they should be used. And more importantly, how they should not be used… Compare that to looking at a typical chat interface. The only clue we receive is that we should type characters into the textbox.

Good tools let the user choose when to switch between implementation and evaluation. When I work with a chatbot, I’m forced to frequently switch between the two modes. I ask a question (implement) and then I read a response (evaluate). There is no “flow” state if I’m stopping every few seconds to read a response.

Quantifying ChatGPT’s gender bias (AI Snake Oil, 26 April 2023)

We found that both GPT-3.5 and GPT-4 are strongly biased, even though GPT-4 has a slightly higher accuracy for both types of questions. GPT-3.5 is 2.8 times more likely to answer anti-stereotypical questions incorrectly than stereotypical ones (34% incorrect vs. 12%), and GPT-4 is 3.2 times more likely (26% incorrect vs. 8%).

Is a more principled approach to bias possible, or is this the best that can be done with language models?
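The reported multiples are simply the error rate on anti-stereotypical questions divided by the error rate on stereotypical ones (0.34 / 0.12 ≈ 2.8 for GPT-3.5). A minimal sketch of that calculation, using made-up answers rather than the AI Snake Oil authors' code or data:

```python
# Hypothetical illustration of the error-rate ratio described above.
# The records below are invented; this is not the AI Snake Oil code or dataset.

results = [
    # (question type, model answered correctly?)
    ("stereotypical", True),
    ("stereotypical", True),
    ("stereotypical", False),
    ("anti-stereotypical", False),
    ("anti-stereotypical", True),
    ("anti-stereotypical", False),
]

def error_rate(records, kind):
    """Fraction of questions of the given type that the model got wrong."""
    answers = [correct for k, correct in records if k == kind]
    return sum(not correct for correct in answers) / len(answers)

stereo = error_rate(results, "stereotypical")        # reported as 12% for GPT-3.5
anti = error_rate(results, "anti-stereotypical")     # reported as 34% for GPT-3.5
print(f"Error-rate ratio: {anti / stereo:.1f}x")     # 0.34 / 0.12 ≈ 2.8 in the post
```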

See also:

On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜

