Categories
Mental Health, Society, Technology

I don’t want this to be the future

Bookmarked HUMAN_FALLBACK | Laura Preston (n+1)

I WAS ONE OF ABOUT SIXTY operators. Most of us were poets and writers with MFAs, but there were also PhDs in performance studies and comparative literature, as well as a number of opera singers, another demographic evidently well suited for chatbot impersonation—or, I suppose, for impersonating a chatbot that’s impersonating a person.

Let alone the present.

Each day when we reported for work one of them would hail us with a camp counselor’s greeting. “Top of the morning, my lovely Brendas!” they would say. Below their message, a garden of reaction emojis would bloom.

I am tired of the exploitation and undervaluation of emotional labor.

In the same way that algorithms tell us what they think we want, and do so with such tenacity that the imagined wants become actual, these buildings seemed intent on shaping a tenant’s aspirations. They seemed to tell the tenant they should not care about regional particularities or the idea of a neighborhood. The tenant should not even desire a home in the traditional sense, with hand-me-down furniture, hand-built improvements, and layers of multigenerational memory. This tenant was a renter for life, whose workplace was their primary address, and who would nevertheless be unable to afford property for as long as they lived.

See also: Neutralizing reality to sell

Brenda, they claimed, said the same thing to everyone, which meant that she was incapable of bias. And yet she was awfully good at repelling certain people: people without smartphones or reliable internet, people unaccustomed to texting, people who couldn’t read or write in English, and people who needed to figure out if they could access a property before showing up for a tour. Brenda deflected them all with polite violence. She was not a concierge but a bouncer, one made all the more sinister for her congeniality and sparkle.


See also:

OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic (TIME)

But the working conditions of data labelers reveal a darker part of that picture: that for all its glamor, AI often relies on hidden human labor in the Global South that can often be damaging and exploitative.

The work’s traumatic nature eventually led Sama to cancel all its work for OpenAI in February 2022, eight months earlier than planned.

An OpenAI spokesperson said in a statement that the company did not issue any productivity targets, and that Sama was responsible for managing the payment and mental health provisions for employees.

🙄 Of course they’re not responsible for the work they hired out.

Conditions for vendors are so much worse than for employees, so of course that’s the direction companies want to move: cheaper labor they aren’t liable for. Ethics has no part in corporatism.

“They’re impressive, but ChatGPT and other generative models are not magic – they rely on massive supply chains of human labor and scraped data, much of which is unattributed and used without consent,” Andrew Strait, an AI ethicist, recently wrote on Twitter. “These are serious, foundational problems that I do not see OpenAI addressing.”

Categories
Culture, Technology, The Internet

Personality shaped by the algorithm

Emphasis mine.

The blandness of TikTok’s biggest stars by Rebecca Jennings (Vox)

[P]op culture is being increasingly determined by algorithms… [W]hat we’re seeing is the lowest common denominator of what human beings want to look at, appealing to our most base impulses and exploiting existing biases toward thinness, whiteness, and wealth.

TikTok fame celebrates a different kind of mediocrity, though, the kind where “relatability” means adhering to the internet’s fluctuating beauty standards and approachable upper-middle-classness and never saying anything that might indicate a personality.

+

What Works by Tara McMullin

Creators are basing their livelihoods on the performance of an identity through the expression of their knowledge, experiences, or talents.

As our actions are influenced by what Richard Seymour dubs the twittering machine, our identities are revealed to us by the algorithm. Not only does the machine tell us who we are and who we will become, it turns around and sells us the symbols of the identity. My identity is commodified in an instant. Who I Am and What I Do On the Internet can feel like an act of self-expression, but they are more likely artifacts of conformity.

Categories
Technology, Writing

Bias is baked into the current state of AI fiction writing

Bookmarked Wordcraft Writers Workshop (g.co)

The Wordcraft Writers Workshop is a collaboration between Google’s PAIR and Magenta teams, and 13 professional writers. Together we explore the limits of co-writing with AI.

Interesting assessment of co-writing with an AI — it’s limited by its inability to perceive and remember context, by a generic, stereotyped, mainstream understanding of genre and story, and by mediocre prose without voice.

Allison Parrish described this as AI being inherently conservative. Because the training data is captured at a particular moment in time, and trained on language scraped from the internet, these models have a static representation of the world and no innate capacity to progress past the data’s biases, blind spots, and shortcomings.

The computer trying to insert a man into a lesbian love story 😬 We see time and again technology incorporating and reflecting real-world biases. It feels like preventing bias is treated as an afterthought, something that can be fixed after the fact. These tools will likely become quite important in the future. Can someone integrate people of color and queer people into their design process upfront?

I am intrigued by co-design, and feel like this project could benefit from it: learning upfront from writers what their biggest struggles are and where they wish they could have assistance. This feels a bit like, “we made a thing that makes words, let’s have some actual writers try it out and see what they do with it 🤷‍♀️”

One thing that writers often need is bit-part characters. With existing biases, will the AI suggest all straight white men to fill these roles? When it creates characters of color, will they be caricatures?

Again, the training set proves itself essential to the tool — and behind many of its failings.

I read Robin Sloan’s short story, which was a clever little work that capitalized on the program’s strengths while critiquing reliance on shortcuts (and maybe poking a bit of fun at GRRM).

Categories
Health, Society

Why others get upset when you mask

Bookmarked Why Do They *Think* That? by JTO, Ph.D. (essaysyoudidntwanttoread.home.blog)

I’ll just give you a non-comprehensive run-down of various biases (which are basically rules of cognition that become errors when they’re incorrectly applied) and heuristics (which are basically thinking shortcuts or strategies that can lead to thinking errors), focusing on those that can cause people to be more alarmed by risk reduction than by the risk posed by actual threats.

Why people don’t seem to care about the health risks

  • People don’t like to think about death or disability
  • Death and disability are abstract without personal experience
  • Selection and survivorship biases when they only see healthy people out and about
  • People estimate their own risk based on personal experiences
  • “base-rate fallacy: people are much more swayed by single dramatic events than by large numbers or probability statistics”
  • Optimism Bias = expect they’ll have a good outcome
  • Perceived invulnerability = don’t think bad stuff will happen to them
  • Diffusion of Responsibility –> they can’t directly see or be held responsible for the consequences of their actions (e.g. passing along sickness so people you don’t know die)
  • Just World Thinking = “people get what they deserve” because otherwise they would have to admit the world is unfair and random, and can attribute their success to their own choices by blaming what others have done differently than them (e.g. get vaxxed)
  • “Fundamental Attribution Error, which leads us to focus on personal vs. situational causes for other people’s behavior and outcomes – though not for our own”

Why do people seem to care so much that YOU care about Covid health risks?

  • Cognitive Dissonance
  • Confirmation Bias
  • Psychological Reactance –> people get mad when they think their freedoms are under attack or they’ll lose control –> trying to reassert control
  • “people personalize the actions of others, inferring that those people mean to have a negative effect on them – for example, thinking that masked people are deliberately trying to make them irate or imply they’re stupid” = hostile attribution bias
  • group norms, conformity, and group consensus
  • groupthink happens when going along with your group trumps making an informed decision –> group polarization = group beliefs gradually become more radical

“People wish to be seen (by themselves and others) as reasonable. Because of this, when folks try to decide on a ‘rational’ response to an environmental threat, they often look at the array of available risk mitigation options and try to pick a percentage of these that is neither an ‘under-response’ or an ‘over-response.’” “Unfortunately, that’s not the way risk actually works; a threat is what it is, and it isn’t going to negotiate with you regarding how much you have to do or what is a ‘fair’ amount of effort.”


Categories
Learning, Resources and Reference

Find good longreads and unearth old articles worth reading

Bookmarked Read Something Great (Read Something Great)

Timeless articles from the belly of the internet.
Manually curated. Served 5 at a time.

Like this idea. Somewhat wary about the curation, though: who’s choosing the articles (one person? an editorial team?), do they have selection criteria that consider bias in publishing and draw from reputable sources, what is their background (a political bias or subject focus, and if so, do those complement my views and interests?), and where are they sourcing these articles? No transparency on the website.

Categories
Resources and Reference

Diagram organizing cognitive biases

Bookmarked Cognitive Biases (busterbenson.com)

A cheat sheet to help you remember 200+ biases via 3 conundrums.

Categories
Art and Design

Questions for inclusive design

Bookmarked Another Lens (airbnb.design)

Together with News Deeply, our design research team put together a set of guiding principles and exercises. These help designers address skewed perspectives in order to create thoughtful, inclusive work.

Our tool, Another Lens, poses a set of questions to help you balance your bias, consider the opposite, and embrace a growth mindset.