
I don’t want this to be the future

Bookmarked HUMAN_FALLBACK | Laura Preston (n+1)

I was one of about sixty operators. Most of us were poets and writers with MFAs, but there were also PhDs in performance studies and comparative literature, as well as a number of opera singers, another demographic evidently well suited for chatbot impersonation—or, I suppose, for impersonating a chatbot that’s impersonating a person.

Let alone the present.

Each day when we reported for work one of them would hail us with a camp counselor’s greeting. “Top of the morning, my lovely Brendas!” they would say. Below their message, a garden of reaction emojis would bloom.

I am tired of the exploitation and undervaluation of emotional labor.

In the same way that algorithms tell us what they think we want, and do so with such tenacity that the imagined wants become actual, these buildings seemed intent on shaping a tenant’s aspirations. They seemed to tell the tenant they should not care about regional particularities or the idea of a neighborhood. The tenant should not even desire a home in the traditional sense, with hand-me-down furniture, hand-built improvements, and layers of multigenerational memory. This tenant was a renter for life, whose workplace was their primary address, and who would nevertheless be unable to afford property for as long as they lived.

See also: Neutralizing reality to sell

Brenda, they claimed, said the same thing to everyone, which meant that she was incapable of bias. And yet she was awfully good at repelling certain people: people without smartphones or reliable internet, people unaccustomed to texting, people who couldn’t read or write in English, and people who needed to figure out if they could access a property before showing up for a tour. Brenda deflected them all with polite violence. She was not a concierge but a bouncer, one made all the more sinister for her congeniality and sparkle.


See also: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic (TIME)

But the working conditions of data labelers reveal a darker part of that picture: that for all its glamor, AI often relies on hidden human labor in the Global South that can often be damaging and exploitative.

The work’s traumatic nature eventually led Sama to cancel all its work for OpenAI in February 2022, eight months earlier than planned.

An OpenAI spokesperson said in a statement that the company did not issue any productivity targets, and that Sama was responsible for managing the payment and mental health provisions for employees.

🙄 Of course they’re not responsible for the work they hired out.

Conditions for vendors are so much worse than for employees, so of course that’s the direction companies want to move: cheaper labor they aren’t liable for. Ethics has no part in corporatism.

“They’re impressive, but ChatGPT and other generative models are not magic – they rely on massive supply chains of human labor and scraped data, much of which is unattributed and used without consent,” Andrew Strait, an AI ethicist, recently wrote on Twitter. “These are serious, foundational problems that I do not see OpenAI addressing.”

