Artificial Intelligence: A Very Capitalist Risk

Posted by Stuart on May 19, 2023 · 2 mins read

One of my favourite takes on the traditional ‘vampire’ concept is Joe Ahearne’s British TV series Ultraviolet. This take on the tensions between the blood-suckers and the rest of humanity is notable for its strikingly modern style. Indeed, the word “vampire” is never even mentioned – instead, they’re “Code 5”, an allusion to the Roman numeral “V”.

In diametric opposition to Stoker’s original, the vampires in Ultraviolet are digital natives: masters of science and medicine, and innovators in reproductive technology and synthetic blood. And the central question of the story is: why?

Initially, it seems, their goal is to subjugate humanity. If the vampires become lords, then we become vassals – and, from time to time, dinner. But, as Hegel argued, the problem for lords is that without vassals they are nothing. The lords, too, are trapped in mutual interdependence, whether they like it or not.

How does this relate to AI? Well, we’re currently under an onslaught of media articles suggesting there’s a new kind of existential risk from “superhuman” AI – AGI. But what if the risk were not some kind of super-AI that, whether by accident or by design, managed to cause human extinction?

What if, instead, the risk were a technological oligarchy – new “lords” – who decided that AI (with no need even to consider AGI) offered a way out of Hegel’s bind? Power without responsibility or obligation to the rest of humanity. Complete power, with no need to share.

In Ultraviolet (spoiler alert) it turns out that the vampires’ goal is no longer subjugation. Modernization has gone too far for that. Climate change and pandemic disease have created other risks for the vampires, and humanity’s power to end life is a risk they can no longer tolerate. Instead, their plan is first to build technological independence from the rest of humanity, and then, simply, to extinguish it.

Maybe Barbrook and Cameron didn’t look far enough ahead in their withering assessment of Silicon Valley culture, The Californian Ideology, when they wrote: “Slave labour cannot be obtained without somebody being enslaved”.

Artificial intelligence is now asking, “why not?”

Even a decade ago, Stephen Hawking was warning about AGI. More recently he was joined by Geoff Hinton, after an apparent ‘road to Damascus’ moment. As usual, I won’t mention Elon Musk.

But, as many others have said, the “existential risk” of AGI-type AI is not pressing, given today’s actual threats (not risks), which are causing real harm right now to artists, writers, and many others. See, for one example, this article reporting Meredith Whittaker’s views. AGI is a “ghost story” told to scare us into accepting this corporate strategy.

Even setting aside AGI, AI-based harms can add up to a sustained erosion of people’s participation in society. And, in the limit, that power struggle could lead to some of us – even all of us – becoming disposable.

So another ‘existential risk’ of AI is the construction of disposability, one group of people at a time. And what happens when those groups are no longer necessary?

Today’s would-be AI extropians are no different from the vampires of Ultraviolet. They might look like us, but if they don’t need us, all bets are off.