The Wordcraft Writers Workshop is a collaboration between Google’s PAIR and Magenta teams, and 13 professional writers. Together we explore the limits of co-writing with AI.
Interesting assessment of co-writing with an AI: it’s limited by its inability to perceive or remember context, its generic, stereotyped, and mainstream understanding of genre and story, and its mediocre prose without voice.
Allison Parrish described this as AI being inherently conservative. Because the models are trained on language scraped from the internet and captured at a particular moment in time, they have a static representation of the world and no innate capacity to progress past the data’s biases, blind spots, and shortcomings.
The computer trying to insert a man into a lesbian love story 😬 We see time and again technology incorporating and reflecting real-world biases. It feels like preventing bias is treated as an afterthought, something that can be fixed after the fact. These tools will likely become quite important in the future. Can someone integrate people of color and queer people into their design process upfront?
I am intrigued by co-design, and feel like this project could benefit from it: learning upfront from writers what their biggest struggles are and where they wish they could have assistance. This feels a bit like, “we made a thing that makes words, let’s have some actual writers try it out and see what they do with it 🤷‍♀️”
One thing that writers often need is bit-part characters. With existing biases, will the AI suggest all straight white men to fill these roles? When writers create characters of color, will they be caricatures?
Again, the training set proves itself essential to the tool — and behind many of its failings.
I read Robin Sloan’s short story, a clever little work that capitalized on the program’s strengths while critiquing reliance on shortcuts (and maybe poking a bit of fun at GRRM).