Silicon Valley and Political Rationalism

Posted by Stuart on June 27, 2023 · 5 mins read

Classical and Silicon Valley rationalism

Rationalism today, at least in Silicon Valley, is not your grandmother’s rationalism. It is not classical philosophical rationalism, based on the ideas of Descartes, Spinoza, and Leibniz. It is not about seeking eternal truths which could be derived through reason, mathematics, and logic alone, and prizing reasoning above other, inadequate methods of knowing, like sensory experience and religion.

Today’s Rationalism, especially in the various overlapping venture funding, entrepreneurship, and artificial intelligence communities, is very different. It is the polar opposite of Enlightenment-era classical rationalism. While there are various takes on it, many trace its immediate origins to Eliezer Yudkowsky’s texts and blogs, such as those on the LessWrong forum that he founded. And this brand of Rationalism, in one form or another, is highly influential: the names linked to it include Paul Graham, Peter Thiel, Marc Andreessen, and Elon Musk, among many others.

Silicon Valley Rationalism is part of Torres & Gebru’s “TESCREAL bundle”, and as such shares ground with other, more esoteric ideologies like extropianism and cosmism, as well as with more mundane (but still problematic) ones like longtermism, a kind of souped-up utilitarian ethics. What all of these have in common, however, and a point I’ll come back to, is that they are visions of an ‘ideal’ future, not of today.

What’s wrong with being rational?

Yudkowsky himself is a polarizing figure: a self-taught former artificial intelligence developer turned writer, now writing and tweeting primarily about the artificial intelligence ‘singularity’ and its risks. Yudkowsky describes rationality as follows:

“Where there are systematic errors human brains tend to make — like an insensitivity to scope — rationality is about fixing those mistakes, or finding work-arounds” (Yudkowsky, 2006, “The Martial Art of Rationality”)

In some ways, this is intuitive and appealing. After all, as we’ve seen — especially from the work of Tversky and Kahneman — systematic biases like loss aversion can be counterproductive. Wouldn’t it be great if we could turn the whole of humanity into a 19th-century Homo economicus? Why not simply wipe away all our inconvenient prejudices and biases, so we can make the ‘right’ decision each and every time? Surely that would make the world a better place? That, at least, is Yudkowsky’s argument. In fact, he would go further, and say it’s necessary if we are to survive the future technological singularity — which he regards as a certainty.

There are several things wrong with that. First, it’s probably impossible for any individual person to do. There are a lot of biases, and such is the nature of expert knowledge that, even with awareness of the shortcomings of expertise, these biases persist. Experts might learn to handle their cognitive biases better, at least some of the time, and so be better equipped when it comes to decision-making, but this never works perfectly.

Second, there’s already a bias towards believing that we are ‘rational’ in Yudkowsky’s sense: it’s called naive realism, or the bias blind spot. As Lee Ross in particular has analyzed in depth, people are naturally inclined to think that they see the world more objectively than others, and that others who don’t share their views are the biased ones — in fact, that they must be either ignorant or irrational to think the way that they do. In other words, the whole of Yudkowsky’s emphasis on ‘rationality’ could be a giant hoax played on us by our own biases: one huge bias that we are the objective ones. We might not be objective at all; we might just believe that we are. I am sure that Yudkowsky believes he is being rational. I am equally sure that many biases still pervade his thinking, even though he will surely argue that I am the irrational one. This is not a solid foundation on which to build supposedly fair decision-making.

Finally, there’s no guarantee that the world would be a better place even if we could train ourselves into some impartial, objective rationality. Would decisions about other people be better if we cauterized our empathy? Our care for others? There are people who generally don’t act on the basis of empathy: we tend to call them sociopaths. Would a sociopathic world make for better decisions? I don’t think so.

It’s also clear that contemporary Yudkowsky-style Rationalism, what we might call Silicon Valley Rationalism, is nothing whatsoever like classical Cartesian rationalism. It’s not about logic, or reason, except to the extent that it compensates for our human lapses in logic and reason. There is no eternal truth to seek — all that matters is evolutionary survival against the risks of our own creations. Frankly, it’s a bit of a mess, if you think about it as a philosophical position.

Rationalism as a political project

Of course, Silicon Valley Rationalism isn’t a philosophical position at all — it’s a political one. Its goal is to shape policy: for people, for corporations, and for governments. That’s intrinsic to Rationalism as a vision of the future.

Michael Oakeshott’s essay Rationalism in Politics, first published in 1947 in The Cambridge Journal, is almost a perfect description of Yudkowsky and his fellow Rationalists. It’s also an outstandingly clear assessment of the problems of that ideology, particularly as a driver of policy. For make no mistake, Yudkowsky et al.’s Rationalism is a political agenda, not an analysis of logic in decision-making.

Oakeshott’s essay is worth quoting in depth.

“The general character and disposition of the Rationalist are, I think, not difficult to identify. At bottom he stands (he always stands) for independence of mind on all occasions, for thought free from obligation to any authority save the authority of ‘reason’. His circumstances in the modern world have made him contentious: he is the enemy of authority, of prejudice, of the merely traditional, customary or habitual. His mental attitude is at once sceptical and optimistic: sceptical, because there is no opinion, no habit, no belief, nothing so firmly rooted or so widely held that he hesitates to question it and to judge it by what he calls his ‘reason’; optimistic, because the Rationalist never doubts the power of his ‘reason’ (when properly applied) to determine the worth of a thing.” (Oakeshott, p1-2).

“Consequently, much of his political activity consists in bringing the social, political, legal and institutional inheritance of his society before the tribunal of his intellect; and the rest is rational administration, ‘reason’ exercising an uncontrolled jurisdiction over the circumstances of the case. … This assimilation of politics to engineering is, indeed, what may be called the myth of rationalist politics.” (Ibid, p3)

“For the Rationalist, all that matters is that he has at last separated the ore of the ideal from the dross of the habit of behaviour; and, for us, the deplorable consequences of his success.” (Ibid, p35-36)

As political rationalism, Yudkowsky’s Rationalist project makes more sense, even if it is far less appealing. Personally, I’d call it repugnant. Truth is irrelevant — instead, we have engineering applied to politics. A vision of a society run by the geeks, because they – and only they – are the ones who have the sophisticated reasoning abilities to ‘do the right thing’. The tragedy of naive realism is that their belief in their superior assessments is simply false. The engineering that their project demands is social and political engineering (and, therefore, a revolution), and it is just as fervently wrong in the biases it naively exacerbates as the system it is trying to replace.

Yudkowsky’s Utopia is just as unattainable — and just as hellish — as Thomas More’s.


Image: copyright © 2023 Stuart Watt. All rights reserved.