…what we are witnessing is the wealthiest companies in history (Microsoft, Apple, Google, Meta, Amazon …) unilaterally seizing the sum total of human knowledge that exists in digital, scrapable form and walling it off inside proprietary products, many of which will take direct aim at the humans whose lifetimes of labor trained the machines, without their permission or consent.
“This is effectively the greatest art heist in history.” — open letter co-authored by Molly Crabapple
“This whole ‘this is how humans learn so whats the difference’ thing while stealing so much data to make billions for a few dudes is so insidious.” — Timnit Gebru @timnitGebru@dair-community.social
See also: Link pairing: AI trained on stolen art
It’s also why their hallucinations about all the wonderful things that AI will do for humanity are so important. Because those lofty claims disguise this mass theft as a gift – at the same time as they help rationalize AI’s undeniable perils.
See also: On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜
On the “hallucination” that AI will solve climate change better than humans can:
According to this logic, the failure to “solve” big problems like climate change is due to a deficit of smarts. Never mind that smart people, heavy with PhDs and Nobel prizes, have been telling our governments for decades what needs to happen to get out of this mess: slash our emissions, leave carbon in the ground, and tackle the overconsumption of the rich and the underconsumption of the poor, because no energy source is free of ecological costs.
We know what we need to do about climate change — we lack the will to do it.
It’s not an information deficit problem. We don’t need more ideas, we need to implement the things we know will work. Corporations just don’t like that answer; Don’t Look Up was painfully on the nose. But pretending that we’ll magic our way out of more emissions with technology that hasn’t been invented yet is an excuse not to change now (*whisper* plus AI needs a lot of resources too).
Generative AI is currently in what we might think of as its faux-socialism stage… Once the field is clear, introduce the targeted ads, the constant surveillance, the police and military contracts, the black-box data sales and the escalating subscription fees.
Funny how “disruption” is often code for “provide the same service below market value, using anticompetitive business practices with the goal of creating a monopoly” 🤔 Netflix and the streaming industry are pulling the same stunt.
A world without crappy jobs means that rent has to be free, and healthcare has to be free, and every person has to have inalienable economic rights. And then suddenly we aren’t talking about AI at all – we’re talking about socialism.
Because we do not live in the Star Trek-inspired rational, humanist world that Altman seems to be hallucinating. We live under capitalism, and under that system, the effect of flooding the market with technologies that can plausibly perform the economic tasks of countless working people is not that those people are suddenly free to become philosophers and artists.
See also: Who does AI work for?
The dream of AI is the dream of free labor
UBI is a society-level failsafe for its people
(I realized recently that I don’t talk about Universal Basic Income (UBI) enough — I mentioned it to my mom the other day and she’d never heard of it 🥺 So, if you haven’t encountered the idea of UBI before, I encourage you to read a bit about it!
It is American society’s choice to allow children to go hungry to punish their parents, and to drive children into labor as soon as possible, but we could change our minds. Personally, I’m less worried about freeloaders than the kids who currently don’t have food and people who’ve lost a place to live because we make it so hard to qualify for assistance and offer all-or-nothing help that keeps people in poverty.)
A world of deepfakes, mimicry loops and worsening inequality is not an inevitability. It’s a set of policy choices.