In the classic paperclip maximiser thought experiment, an AI is tasked with making paperclips. Innocent enough. However, the AI is also not told when to stop. Driven by the singular goal of making as many paperclips as possible, the AI eventually destroys not only our planet, and ourselves, but the entire universe, as it appropriates all possible intergalactic resources – and all technological advances – to make ever more paperclips.
The paperclip maximiser thought experiment is a warning about trying to manage, program or control complex systems. Even simple goals can have extraordinary effects (recall Langton’s Ant) – oftentimes disastrous. (The story of King Midas, who wanted everything he touched to turn to gold, is not altogether unrelated when it comes to getting the message across: maximising for a singular goal (however tempting or attractive it may seem in the present moment), actioned by a powerful entity – be it an AI, a genie or a bureaucracy – does not tend to end well for anyone.)
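For readers who don’t recall it, Langton’s Ant makes the point almost embarrassingly well: an ant on a grid obeys just two rules – turn right on a white cell, turn left on a black cell, flipping the cell’s colour and stepping forward each time – yet its behaviour is chaotic for roughly ten thousand steps before it spontaneously builds an endless repeating “highway” that nobody programmed in. A minimal sketch (the function name and step counts here are illustrative, not from any particular source):

```python
def langtons_ant(steps):
    """Run Langton's Ant for a number of steps; return the set of black cells."""
    black = set()        # cells currently black; every other cell is white
    x, y = 0, 0          # ant's position
    dx, dy = 0, 1        # facing "up"
    for _ in range(steps):
        if (x, y) in black:
            dx, dy = -dy, dx     # black cell: turn left...
            black.remove((x, y)) # ...and flip it back to white
        else:
            dx, dy = dy, -dx     # white cell: turn right...
            black.add((x, y))    # ...and flip it to black
        x, y = x + dx, y + dy    # step forward
    return black

# Two rules in, emergent structure out: run it long enough and the
# chaotic blotch gives way to the famous diagonal "highway".
print(len(langtons_ant(11000)), "black cells")
```

The unsettling part, and the reason it belongs next to the paperclip maximiser, is that nothing in those two rules hints at the highway. The system’s long-run behaviour is simply not legible from its goal specification.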
This year, the year 2020, we would have done well to recall the paperclip maximiser parable and the lessons it holds for us about thinking through the nth-degree systemic effects of playing around with complexity. Instead of a paperclip maximiser problem, we have created a paperclip minimiser problem. By focusing on reducing one singular negative metric (in the case of 2020, that would be cases of a certain virus) without considering the full consequences of the collateral damage to the wider system, much like with the paperclip maximiser (or, indeed, the Old Lady Who Swallowed a Fly) we end up creating more harm than good in the world at large.
Of course, our current paperclip minimiser experiment has now taken on a life of its own: once a bureaucracy has been “programmed” to solve for a particular singular goal and it gets going, stopping the momentum of the machine becomes nigh impossible. How much will we inadvertently destroy by refusing to see the bigger picture? That is yet to be seen.