There’s a lot of talk about artificial intelligence and existential risk.
Ultimately, the problem with AGI’s version of existential risk is that it is completely incalculable. And because it falls outside all the norms of risk calculus (after all, we have no past cases to guide us), there are no bounds on what people say.
Put simply, is the probability of catastrophe (as some have put it) p = 1.0, or is it p = 0.0?
How anybody can claim with a straight face that the probability of an event that has never happened, and where there is no theory of how it can happen, is p = 1.0, I do not know. And yet they do. 😐
I strongly predict that superintelligence (built on anything remotely resembling current paradigms) kills us (on the first flailing attempt and then we don't get a second). I am far less sure about whether or not scaling up GPT-like things will get us to superintelligence.
— Eliezer Yudkowsky (@ESYudkowsky) May 22, 2023
OK, sure, randomly choose p = 0.03 instead, why not?
AGI risk is not like "a million people will be killed by a pandemic within a year with 80% probability", but more like "all people may be killed within 100 years with 3% probability". Which do you think is worse?
— Joscha Bach (@Plinz) October 14, 2022
This is not a risk calculus – it’s an instrumental calculus: people choose the numbers that add up to the point they want to make. It’s all an illusion.
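The arbitrariness is easy to see with a back-of-the-envelope expected-value comparison. A minimal sketch, using the tweet’s own hypothetical figures plus an assumed round world population of 8 billion (all numbers here are illustrative, not estimates):

```python
# Back-of-the-envelope expected-value comparison.
# All figures are illustrative assumptions taken from the tweets above.

WORLD_POPULATION = 8_000_000_000  # assumed round figure

def expected_deaths(deaths_if_it_happens: float, probability: float) -> float:
    """Expected deaths = magnitude of the event x probability it happens."""
    return deaths_if_it_happens * probability

# "a million people will be killed by a pandemic within a year with 80% probability"
pandemic = expected_deaths(1_000_000, 0.80)           # 800,000

# "all people may be killed within 100 years with 3% probability"
extinction = expected_deaths(WORLD_POPULATION, 0.03)  # 240,000,000

print(f"pandemic:   {pandemic:,.0f} expected deaths")
print(f"extinction: {extinction:,.0f} expected deaths")

# Shrink the unverifiable p and the ranking flips entirely:
flipped = expected_deaths(WORLD_POPULATION, 0.00005)  # 400,000 < 800,000
print(f"with p = 0.00005: {flipped:,.0f} expected deaths")
```

The whole conclusion is hostage to the choice of p, and nothing constrains that choice: with p = 0.03 the extinction scenario dominates by two orders of magnitude; nudge p down and the pandemic dominates instead.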
If one ‘expert’ is saying “the sun will rise tomorrow” and the other is saying “the sun will not rise tomorrow” – you know at least one of them is wrong.
What is going on here? Several things:
A demystification of science – science has been, until recently, traditional. That is starting to change, and the boundaries between ‘experts’ (and their knowledge) and everyone else have started to weaken. All the processes of modernization are now transforming science itself – artificial intelligence’s “Big Science” is part of that. Peer review is now an irrelevant afterthought – the primary debates now take place in public spaces, like Congress, CNN, and Twitter. Many – most – of the participants are not ‘experts’. That is inevitable in a modernized science, but we mustn’t treat the likes of Altman or Yudkowsky as if they are experts. They aren’t.
In fact, much of what they are talking about is gaps in knowledge, non-knowledge. Nobody knows about AGI – and that’s the point. Non-knowledge rules the discussion.
Growth of a “risk business” of artificial intelligence – there is money to be made from risk, especially for the likes of PwC, OpenAI, a plethora of tech companies, Geoff Hinton – and, not least, the media. The New York Times does very well out of AI existential risk articles, thank you very much. When someone tells you that artificial intelligence is scary, follow the money: where do they expect it to flow? People will pay for security – who is aiming to provide it?
As Ulrich Beck said in World at Risk, “Risk means the anticipation of catastrophe” (original emphasis). This is important. Earlier, in Risk Society, he likens risks to a kind of “political explosive” – what was unpolitical becomes political. Risks – when they come to be socially significant – enable forced changes to power and authority. This is why Sam Altman went to Congress and was so open about existential risk. This is an opening shot at rewriting (or “modernizing”) democracy – it is no coincidence that a matter of days later he offered funding to do exactly that. Fascinatingly, that proposal even suggests embedding ChatGPT in discussions about its own regulation!
Tragically, risk is not spread equally in society – like pollution, it flows downhill. Risks accumulate for those least able to buy their way out of them.
To be frank, the shape of a modernized ‘democracy’ – or its replacement – built by tech oligarchs terrifies me much more than AGI. That’s my fear, my risk.