Economists give their predictions to a digit after the decimal point to show that they have a sense of humour. Anonymous
Legends of prediction are common throughout the whole Household of Man. Gods speak, spirits speak, computers speak. Oracular ambiguity or statistical probability provides loopholes, and discrepancies are expunged by Faith. Ursula K. Le Guin
There are two classes of forecasters: those who don’t know and those who don’t know they don’t know. J. K. Galbraith
A few years back I did a piece called Really terrible predictions, which listed some of the most infamously bad predictions of the last century or so. Gems included IBM chairman Thomas Watson’s 1943 prediction that there would be a world market for five computers and Yale economics professor Irving Fisher’s claim that stocks had reached a permanently high plateau. He came up with this on October 16, 1929!
The post came back to me this summer while reading Dan Gardner’s Future Babble, a book which examines why experts are so bad at predicting the future. Of course the fact that experts sometimes get it spectacularly wrong doesn’t prove anything. As Ben Goldacre likes to say, the plural of anecdote is not data. You need to conduct a rigorous study in which hundreds of these experts – academics, intelligence analysts, economists, political scientists and even journalists – make predictions. Then you have to see how they have done.

Gardner was able to find exactly this – the research of a professor of psychology at the University of Pennsylvania, Philip Tetlock. Beginning in the 1980s and over a period of twenty years, Tetlock examined 27,451 forecasts by 284 experts about inflation, elections, wars and the like. It was an exhaustive study if ever there was one, and the results did not speak highly of the experts’ abilities. Indeed, they did little better than those proverbial dart-throwing chimps. That’s right, no better than random chance.
We need to dig more deeply into the results. The experts who were worse than average actually did worse than if they had been tossing a coin. Now that is quite an achievement! And even the ones who did better were not much better than random chance. Another paradoxical conclusion is that there was an inverse correlation between confidence and accuracy: the greater the expert’s confidence, the less accurate the predictions were. According to Gardner, what made the difference here was the style of thinking.
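To make “better or worse than a coin toss” concrete: a standard way to score probabilistic forecasts, closely related to the scoring Tetlock used, is the Brier score, the mean squared gap between the probabilities a forecaster assigns and what actually happens. The sketch below uses invented numbers purely for illustration:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between predicted probabilities and outcomes (1 or 0).
    0.0 is perfect; a forecaster who always says 50-50 scores exactly 0.25."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical data: probabilities assigned to four events, and whether each occurred.
outcomes = [1, 0, 1, 0]
expert = brier_score([0.9, 0.8, 0.7, 0.1], outcomes)   # 0.1875
chance = brier_score([0.5, 0.5, 0.5, 0.5], outcomes)   # 0.25
```

On this toy data the expert beats the coin-flipper (0.1875 vs 0.25); a confident expert who assigned 0.9 to events that mostly failed to happen would score worse than 0.25, which is what “worse than tossing a coin” means in practice.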
When examining the experts’ records it is helpful to use the philosopher Isaiah Berlin’s distinction, borrowed from the Greek poet Archilochus: “The fox knows many things, but the hedgehog knows one big thing.”
Tetlock analysed the difference in prediction styles in a 2005 book:
Low scorers look like hedgehogs: thinkers who “know one big thing,” aggressively extend the explanatory reach of that one big thing into new domains, display bristly impatience with those who “do not get it,” and express considerable confidence that they are already pretty proficient forecasters, at least in the long term. High scorers look like foxes: thinkers who know many small things (tricks of their trade), are sceptical of grand schemes, see explanation and prediction not as deductive exercises but rather as exercises in flexible “ad hocery” that require stitching together diverse sources of information, and are rather diffident about their own forecasting prowess.
The experts were asked many kinds of questions, not all within their area of expertise. The foxes did better when asked about their field. This is exactly what you would expect. But the bizarre thing was that the hedgehogs actually did worse when making predictions in their specialised area.
The worrying aspect about this is that everyone loves a hedgehog. They are the ones who get invited as pundits on to television shows, write in newspapers and appear on the bestseller lists. We don’t want nuance. And Tetlock found that the more famous the experts, the less accurate they were.
Psychologists were probably not surprised by Tetlock’s results. What we are dealing with here, then, is not merely the inherent complexity of predicting the future. We have our flawed human cognitive abilities to take into account. Gardner looks at the cognitive biases that can have a negative effect on our ability to predict the future. Here are some of the biases he mentions:
Optimism bias is the tendency to believe that we are better than we really are – we are all above average in intelligence, looks, etc. Getting married? Other people will end up in the divorce courts. Starting a new business? Most fail, but mine will be different. This may seem like delusional thinking, but the evolutionary advantage is that it encourages people to take action and makes them better able to deal with setbacks. To paraphrase Jack Nicholson, we can’t handle the truth.
Another danger is confirmation bias. Once we form a belief we tend to seek out and accept information that supports it and not bother to look for information that does not. And even if we are actually presented with information that doesn’t fit, we will be hypercritical, looking for any excuse to dismiss it as worthless.
Status quo bias is the tendency to see tomorrow as being like today. We lack the imagination to see beyond today’s trends. Of course, this doesn’t mean we expect nothing to change. But most attempts at prediction seem to begin with current trends, which are then projected into the future. Current trends do often continue, but the further we look into the future, the more likely it is these trends will be reversed.
Negativity bias is a predilection for doom and gloom. We are drawn to bad news or images, and we are more likely to remember them than positive information.
What is so interesting about these types of biases is that none of us are immune to them. If someone had told me just thirty months ago that Spain would go on to win the European football championship and the World Cup in the space of two years, I would have thought that they should be locked up. I assumed that my past experience was a reliable guide. This is where experts can be especially dangerous. I have no problem admitting to my flawed thinking, but the experts, with their experience, intelligence and expertise, can actually be more prone to these psychological foibles.
At the end of the book Gardner looks at the work of controversial political scientist Bruce Bueno de Mesquita, who specialises in international relations and foreign policy.
Bueno de Mesquita doesn’t really care about the local culture, history, economy, or any of the other considerations that more traditional political scientists analyse. For him the key is self-interest and he uses game theory to make models of the future. He claims a number of impressive hits. His model predicted:
- Brezhnev being succeeded by the dark horse Andropov, whom nobody at the time even considered a possibility.
- China’s crackdown on dissidents, four months before Tiananmen Square.
- The second Intifada and the end of the Middle East peace process, two years before they happened.
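Bueno de Mesquita’s actual models are far more elaborate, but the game-theoretic idea – that the outcome falls out of each actor’s self-interest rather than culture or history – can be illustrated with a toy example. The sketch below finds the pure-strategy Nash equilibria of a small two-player game; the “concede”/“hold firm” framing and all the payoff numbers are invented for illustration:

```python
from itertools import product

# Invented payoffs for two actors who each choose: 0 = concede, 1 = hold firm.
# p1[i][j] is player 1's payoff when player 1 plays i and player 2 plays j.
p1 = [[3, 0],
      [5, 1]]
p2 = [[3, 5],
      [0, 1]]

def pure_nash(p1, p2):
    """Return the (row, col) action pairs where neither player can gain
    by changing their own action while the other's stays fixed."""
    rows, cols = len(p1), len(p1[0])
    equilibria = []
    for i, j in product(range(rows), range(cols)):
        row_best = all(p1[i][j] >= p1[k][j] for k in range(rows))
        col_best = all(p2[i][j] >= p2[i][k] for k in range(cols))
        if row_best and col_best:
            equilibria.append((i, j))
    return equilibria

equilibria = pure_nash(p1, p2)  # with these payoffs: both actors hold firm
```

With these payoffs the only equilibrium is (hold firm, hold firm), even though both actors would be better off conceding together – the kind of self-interest-driven prediction (e.g. a peace process collapsing) that such models generate.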
Bueno de Mesquita claims a hit rate of 90%. It sounds very impressive, but it also makes me a bit suspicious. We need to know the difficulty of these predictions. What did he get wrong? I am especially sceptical about black swans, those rare and unpredictable events that can have catastrophic results. We seem to be incapable of predicting these.
I’m certainly not suggesting we leave the field to astrologers and clairvoyants. But we have to recognise the difficulty of the enterprise and be aware that uncertainty is always going to be there. We sometimes just have to admit that we don’t know.