As futurists, our role is to help others understand the world as it is; the likely futures ahead if we do nothing; the trade-offs and choices we face; and the complicated potential and probable unintended consequences and collateral damage of our present and future decisions, and the incentives that shape them.
Or, to simplify, we are in the business of improving empirical and instrumental rationality.
In order to do our job well, we need to be honest – honest both about current reality and about our own agendas. And here, I fear, we have a problem. I’ve become increasingly disturbed by badly framed and bad-faith framings of present problems and future scenarios.
For example, I’ve seen futurists frame [“profit” or “principles”] and [“growth” or “sustainability”] (as if good things cannot go together, even though socio-economic evidence suggests otherwise!) and [“chaos” or “control”] (as though attempts at control are not at least as likely to increase chaos as to decrease it!) as binary choices in scenario matrices. These are clearly false dichotomies – much like the illogical set-menu party-pack politics that plagues our political arenas. Framing like this is political framing: it forces people to take sides on issues where there is, in reality, plenty of room for compromise and collaboration.
Why pick just one?
Why can’t we have both?
Why do I have to take a side of fries if I order a burger?
Why may I not add a side of fries to my burger?
Why can’t I mix and match?
A futurist should encourage mixing and matching and push back against false binaries rather than propagate them.
(For a more micro example that illustrates this point further: I recently had an argument with a good friend about his framing of the “sides” in the vaccine debate. He framed the sides as [“those who believe everyone who can, should get a vaccine”] and [“those who believe in their right to say no to a vaccine”]. This is a false frame, because it is entirely logically possible for someone to believe both of those statements! The first refers to a desired end; the second to the limits on acceptable means of getting to that end, as illustrated by the (somewhat flippant, yes, but nonetheless hopefully self-explanatory) chart below:
Individuals in the top right corner would have been forced to change their position on at least one axis in order to fit in with the binary framing around “sides”. This is a tragedy, since that group is best positioned to negotiate the common ground so badly needed these days in these sorts of fraught debates! The same principle, of course, applies to all sorts of bad choice frames that confuse ends and means and force people to make choices (and enemies) where no such binary choice is necessary.)
Another so-called futures initiative that annoyed me greatly is a “choose your own adventure” app-based game (designed by a team of environmental scientists and science fiction creatives), targeted at children and intended to raise awareness of climate change. In order to win the game and “get to the end of the century”, the player has to make a series of policy choices, each of which increases or decreases the odds of humanity surviving to the year 2100. So far so good. However, in order to actually get to the end of the century, the user is forced to accept, among other unproven ideas, a global universal basic income and a militarised one-world government. Fail to agree to these highly centralised solutions and everyone dies.
This is a highly politicised, highly biased view to present to children as the only way to save them and their loved ones from (apparently) certain death.
As futurists, if we are to be credible, selling our preferred means (or even, indeed, our preferred ends) as the “only possible solution” to a wicked problem is highly suspect practice. We are expert generalists, not general experts. We do not know everything about everything. We are supposed to know how to ask questions – not to provide the answers.
When it comes to making choices, both framing and incentives are critical.
Beware those who would place choices in false frames with manufactured trade-offs designed to trick, scare or bully you into agreeing with them. Also beware those who wield incentives as a weapon.
On that second point, regarding incentives: an incentive is essentially a change to the choice on offer, a change in the value trades and trade-offs designed to change the outcome of that choice. Again, our role is to interrogate those incentives and their potential intended and unintended consequences, not to hide or shrink from the undesirable side effects of our pet policy ideas.
We need to be bigger than that. We need to be bigger than our own biases – especially when evaluating choices that will affect others. (This is something I strive to get better at every day. Please call me out when I violate these principles. I’d appreciate that.)