Categories: Future Building, Society, Technology

Article trio: AI is a big risk to our society… and humanity

An Engine of Precaritization by Mandy Brown

AI as we know it today is designed to shift risks from systems to individuals, from the collective to the isolated.

+

The Great Replacement (Not That One, the Real One) by Cat Valente

There is a version of the world taking shape in a few very strange and rarified minds, minds so coddled by wealth and an almost Galtian removal from the travails of the masses that their existence borders on fairy magic … and that vision involves precisely none of us. […] It is a world in which we are simply not needed; content is created by AI, animated and voiced by AI, promoted and distributed by algorithms, consumed by automated subscriptions and mandatory pay-to-play purchases, pinned and pushed to the top of feeds, shunted into media ecosystems where a computer-generated, computer-voiced, computer-written Ellen exclaims with delight over an animated child-script programmed to perfectly perform a piano sonata, tracked and fed back into the algorithm in an infinite loop, bugs patched and code updated by AI, and, very possibly, actual organic human creations shoved to the bottom of the digital heap as inefficient, sloppy, and insufficiently vertically integrated.

I find it pretty interesting that when most other advancements in automation have arrived, the sales pitch has usually involved describing ways in which it will improve the lives of everyday people as a kind of sugary treat to drown out the taste of a dystopian future. […] But with ChatGPT, literally the first thing I heard about it was a Reddit donkey-chorus of HA HA WHITE COLLARS ARE ALL REPLACED GET FUCKED. […] Which tells me, however fun a toy people are finding it to be, or however much no one likes writing their own cover letters or school essays, ChatGPT isn’t being sold to us directly at all, but to our potential employers in lieu of us.

+

How to navigate the AI apocalypse as a sane person by Eric Hoel

Merely training on autocomplete has led to beta-AGIs that can outperform humans across a huge host of tasks, at least in terms of speed and breadth. This means the space of general intelligences is likely really large, since it was so easy to access. Basically some of the first things we tried led to it. That’s bad! That’s really bad!

This indicates that there may be creatable minds located far away from all the little eight billion points stacked on top of each other, things much more intelligent than us and impossible to predict.

And what is more dangerous? The atom bomb, or a single entity significantly more intelligent than any human?

By Tracy Durnell

Writer and designer in the Seattle area. Freelance sustainability consultant. Reach me at tracy.durnell@gmail.com. She/her.
