One of the more intriguing aspects of the current discussions about the existential risks of artificial intelligence has been watching an availability cascade happen in real time.
An availability cascade (Kuran & Sunstein, 1999) is a special kind of “going viral”, where what spreads is public perception of an immediate and serious risk. It’s a social feedback loop in which individuals, the media, and interest groups of various kinds together create a dynamic that massively amplifies awareness of a risk. The difference is that what goes viral is availability in the sense of the availability heuristic – a cognitive bias that makes whatever is at the front of our minds seem more important than everything else.
In effect: intense, frequent public discussion of an issue – any issue – creates an imperative that (to use David Allen Green’s phrase) “something must be done”, and drives a demand for policy action as “an assertion of political virility”.
That this drives the ‘existential risk’ narratives inside artificial intelligence shouldn’t be a total surprise – availability cascades have been part of the literature on the psychology of risk for a few decades now, discussed in detail by Kuran and Sunstein (who first described them) and in Kahneman’s (2011) Thinking, Fast and Slow.
The “existential risk of AI” meets almost all of Kuran & Sunstein’s “aggravating factors” for an availability cascade: an uncontrollable new technology, heavy media coverage, human-generated irreversible impact on future generations, and poorly understood mechanisms. This is a perfectly crafted setup for an availability cascade.
In fact, there are two dimensions to an availability cascade: an informational dimension and a reputational dimension. The informational dimension is relatively straightforward: people believe there’s an existential risk because other people believe there’s an existential risk. It’s a simple matter of viral spread, with the usual mechanisms of media and social media dissemination driving the amplification.
The reputational dimension is more interesting. As Kuran & Sunstein argue, some agents are availability entrepreneurs: activists who intentionally trigger availability cascades for their own benefit, which may be social or reputational as well as financial. Geoffrey Hinton and Sam Altman definitely appear to qualify, but there are many others too. Not all agents are individuals; there may be companies (e.g., OpenAI, PwC) and NGOs (e.g., the Center for AI Safety and the Future of Life Institute) as well as politicians (e.g., Ursula von der Leyen, Narendra Modi) and – especially – normally trustworthy media organizations like The New York Times, Time, and the BBC. All stand to gain by mutually reinforcing the risk and using each others’ reputations to bolster their own impact. And it works, not least because negative events get much more attention than positive ones (Kahneman, 2011).
In effect, agents participate for many different reasons, and these reasons are not mutually exclusive.
In addition, those who don’t participate receive negative feedback rather than positive feedback. In this case, they include people who question the existential risk of AI, or who raise concerns about other, long-standing risks. These agents are socially penalized. The process is clear in the excessive criticism of those who do not ‘buy into’ the existential risk frame – often because they see other, more immediate risks as more worthy of attention and regulation.
You can see these positive and negative reputational dynamics in, for example, the following tweets.
We’ve released a statement on the risk of extinction from AI.— Center for AI Safety (@ai_risks) May 30, 2023
- Three Turing Award winners
- Authors of the standard textbooks on AI/DL/RL
- CEOs and Execs from OpenAI, Microsoft, Google, Google DeepMind, Anthropic
- Many more https://t.co/mkJWhCRVwB
Why can't both be existential risks? Furthermore, how are you so confident that AI is not an existential risk when Hinton and Bengio disagree with you?— Peter Berggren (@berggrenpeterm) June 10, 2023
This is the basic social process behind an availability cascade. But why are cascades a problem? As Kuran & Sunstein argue, while they may sometimes be effective at raising awareness, availability cascades can be highly detrimental to good regulation.
“Availability cascades constitute a major, perhaps the leading, source of the risk-related scares that have cramped federal regulatory policy at both the legislative and executive levels, with high costs in terms of lives lost, lowered quality of life, and dollars wasted. Especially when they run their course quickly, cascades force governments to adopt expensive measures without careful consideration of the facts” (Kuran & Sunstein, 1999, p. 746)
For these reasons, an availability cascade is not a good basis for regulation, but the harms cascades drive can be mitigated to some extent. Kuran & Sunstein recommend involving more risk experts as a buffer around policy, so it doesn’t become too reactive. Slovic et al. (1982) recommend informing the public better, using an understanding of the psychology of risk to manage biases. Kahneman (2011) suggests both are probably right – but all three are clear and consistent that an uncontrolled availability cascade is a problem, and one that jeopardises effective policy decisions.
Putting my cards on the table: I believe the “existential risk of AI” narrative is a beautiful example of an availability cascade, with many availability entrepreneurs actively promoting it for their own ends. And, unfortunately, I believe it has already been successful enough to do significant damage – not only through pointless and expensive regulation that won’t deal with the real risks, but also through the reputational harms meted out, and which continue to be meted out, to those who haven’t endorsed the existential risk perspective.
Kahneman, D. (2011). Thinking, Fast and Slow. Macmillan.
Kasperson, R. E., Renn, O., Slovic, P., Brown, H. S., Emel, J., Goble, R., Kasperson, J. X., & Ratick, S. (1988). The Social Amplification of Risk: A Conceptual Framework. Risk Analysis, 8(2), 177–187.
Kuran, T., & Sunstein, C. R. (1999). Availability Cascades and Risk Regulation. Stanford Law Review, 51(4), 683–768.
Slovic, P., Fischhoff, B., & Lichtenstein, S. (1982). Facts versus fears: Understanding perceived risk. In Kahneman, D., Slovic, P., & Tversky, A. (Eds.), Judgment Under Uncertainty: Heuristics and Biases. Cambridge University Press.
Image: copyright © 2023 Stuart Watt. All rights reserved.