Fed up with giant AI models? "Slow AI" is a better way

Posted by Stuart on May 08, 2021 · 3 mins read

We often think of AI as a fast, disruptive technology – one that is reshaping the world we live in by changing our economic standards and values. For example, if AI can do part of a job faster and more accurately than people can, that opportunity for automation can transform salaries and careers, and determine the success or failure of entire companies.

Over the past week, I’ve been reconsidering that, thanks to an inspired thought from Mireille Hildebrandt, who mentioned the idea of “Slow AI”.

The idea of AI as part of the “slow movement” has an intuitive appeal. There’s even a small but distinctive field of “slow technology” – could we establish a way of doing AI that reflects it?

Slow technology is the opposite of solution-oriented. It is about the experience, not the goal. It is envelopment, not development – very much the kind of understanding and integration that Madeleine Clare Elish suggests. Slow technology is not disruptive; instead, it’s about designing to improve, and to deliberate over, our experiences.

As a psychologist, I’ve been thinking of slow AI through another lens, too. Kahneman, in “Thinking, Fast and Slow”, describes people as running two systems: System 1 and System 2. System 1 is the instinctive, reflexive side of our thinking, while System 2 is the reflective, deliberative side.

The problem is that, faced with events in the real world, System 1 is faster and often wins out. For example, when we are trying to decide whether something is true, System 1 often leaps to the conclusion that it is, and only when (and if) System 2 gets involved may we reconsider and come to a more accurate assessment. Apply this to recommender systems, for example, or to AI in recruitment – how much of the damage done by AI comes from System 1 leading System 2?

If – and this seems likely to me – this is the underlying cause of the “cognitive miser” aspect of automation bias, then we need to start designing AI to strengthen System 2, to encourage reflection. Slow AI may be the only way to overcome automation bias, as well as the myriad other consequences of poor AI systems.

Maybe this shouldn’t be a surprise to us. At Turalt, we’ve been aiming to design AI that encourages reflection, and maybe even slows down some tasks (like email) to improve that deliberative, individual aspect of social interaction.
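To make that concrete, here’s a minimal sketch of the pattern – not our actual implementation, and every name in it (`Draft`, `reflective_send`, the pause length, the prompt wording) is hypothetical – showing how a deliberate pause and a reflection prompt might sit between “send” and delivery:

```python
import time
from dataclasses import dataclass


@dataclass
class Draft:
    recipient: str
    body: str


def reflective_send(draft: Draft, deliver, pause_seconds: int = 120) -> bool:
    """Hold an outgoing message for a deliberate pause, then ask the
    sender to re-read it before it actually goes out."""
    print(f"Holding message to {draft.recipient} for {pause_seconds} seconds...")
    time.sleep(pause_seconds)  # the deliberate slowness: a window for System 2

    # The reflection prompt: re-engage the sender deliberately before sending.
    answer = input("Re-read your message. Does it say what you mean, the way "
                   "you'd want it received? Send now? [y/N] ")
    if answer.strip().lower() == "y":
        deliver(draft)  # whatever function actually transmits the message
        return True
    print("Message returned to drafts.")
    return False
```

The mechanism itself is trivial; the point is the design stance. The pause and the prompt deliberately make room for System 2 to catch up before System 1’s first draft goes out the door.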

But bringing in the idea of a slow AI gives us opportunities to make our work better. For example, there is a playfulness to slow technology, an artistic aspect that encourages us to take time to reflect. We need to start building AI applications that focus less on the goal, and more on our experience of living in the world.

And I don’t think this applies solely to our users, either. All too often we are driven to build models too quickly. As developers, we are just as prone to letting System 1 make our decisions for us – throwing compute at problems rather than thinking them through, or using a bigger dataset in the hope that it’ll somehow yield better accuracy, when, seen through the lens of Slow AI, accuracy is not the problem.

And finally, what about ethics? Slow technology aligns well with an ethics of care, although Gallagher’s account of “slow ethics” notes the risk of superficiality and defends the space for reflection in slow movement thinking. The slow movement is grounded in experiences, in relationships, and in attentiveness to the consequences of our work. With a slow AI, ethics is not inherent or automatic, but by strengthening our use of reflection and of System 2 (which is more involved in ethical judgement than System 1), it’s at least a step forward from current approaches to AI, which run at a pace almost designed to inhibit it.

So what do you think? Should we think more about enveloping AI, rather than deploying it, to use Elish’s phrasing? I’m certainly looking at ways to bring that playfulness, that craft, and that understanding of experience further forward in my own work.

Some articles on slow technology: