Guest post by Olaf Rieger
It’s December. And that means the media are once again full of lists and predictions for the coming year. Readers love it. But many of those predictions don’t come true at all. Fortunately, hardly anyone checks afterward whether a prediction came true or not. And as a trend predictor or content creator, you can take advantage of that.
A few years ago, on a random Friday, I was in the car listening to BNR. El Clásico, the match between Barcelona and Real Madrid, was being played that weekend. A football pundit, I don’t remember who, gave his verdict: Real would win. It was a sound analysis; I believed what he said.
Three days later. Monday. Barcelona had won. The same pundit was back on BNR. He explained why he had been wrong, and why his error was perfectly logical. I believed him again.
Strange: I believed a predictor who drew the wrong conclusion, and then I believed him all over again. Pretty irrational behavior on my part, I thought.
That incident, among others, was reason enough to delve into forecasting. I was helped by statistician Nate Silver’s book The Signal and the Noise: The Art and Science of Prediction. His conclusion: predictions usually go wrong, but nobody seems to care.
Predictive content
Rewind to the end of 2014. For a financial blog, I predicted what house prices, mortgage rates, and savings rates would do in 2015. I substantiated my view by combining statements from top economists with statistical data.
My prediction was read 35,500 times and shared extensively on social media. The prediction I made a year later, for 2016, was read 39,100 times. Numbers I was quite happy with.
At the end of that year, I took stock. Did my predictions come true? I was right about mortgage rates and house prices. I was a little off on savings rates, but unexpected global developments could explain that.
And so I patted myself on the back: “Well done, Olaf.” I proudly published a retrospective article, expecting readers to be curious whether the predictions had come true. But no: it stalled at a paltry 1,200 readers.
Why predictions work
Why do we find predictions interesting? One reason is that people don’t like uncertainty; behavioral scientists call this risk aversion. Daniel Kahneman writes extensively about our avoidance of uncertainty in his book Thinking, Fast and Slow.
Uncertainty avoidance is deeply rooted in our DNA. As hunter-gatherers, we mainly wanted to survive, and taking risks lowered the odds: venture too far and you might run into a lion, a bear, or a rival tribe.
These primal instincts are still at work today. Predictions reduce our sense of uncertainty. We think we know better where we stand, and we like that; it makes us feel safe. Those predictions were once about better shelter or good hunting grounds. Now they are about the economy, tech innovations, and pandemics. And, of course, about whether or not it will rain next week.
A prediction makes us think we know better where we stand. We like that.
Are our predictions any good?
Philip Tetlock is a professor of psychology at the University of California, Berkeley. In 2005 he published his book Expert Political Judgment. His conclusion: experts in politics are strongly convinced of their abilities, yet poor at making predictions.
These ‘experts’ regularly made statements such as: “This will absolutely never happen” or “This will 100% definitely happen.” Yet even on those extreme statements, the experts were completely wrong in 20% of cases. And we are really talking about professors and specialists with seasoned experience in their field.
More concrete works better.
Tetlock also found that forecasters who were invited more often onto TV shows to give their opinion were more likely to be wrong. Why were they invited so often? Precisely because they made more extreme statements.
A story full of nuance is boring. Bold predictions with extreme numbers are catchy. They make engaging content that people like to share and pass along. This is also why gossip and conspiracy theories spread so quickly: they are catchy and concrete. More concrete works better; I wrote an earlier article about exactly that.
Self-driving car
In 2015 there was a lot of speculation about self-driving cars. Major players such as Google, BMW, and Ford said that by 2020 cars would certainly be driving without a driver. That, too, sounded catchy and concrete: a car without a driver. But now, five years later, I still have to hit the brakes myself.
Economic growth or not?
The Federal Reserve Bank of Philadelphia examines every year whether forecasters in the field of economics are actually right. Here, too, we are talking about seasoned economic experts. The bank’s conclusion: one in three predictions is incorrect.
The Dutch CPB (the Netherlands Bureau for Economic Policy Analysis) recently published its forecast for our economy in 2021. The CPB estimates are one of the pillars on which the government bases policy and legislative proposals. Yet in perhaps 33% of cases, such a forecast is wrong, and given the corona uncertainty, that percentage could be even higher. Quite a risk to base government policy on.
Why do predictions go wrong?
Where do things go wrong? The CPB, economists, counterterrorism experts, and the RIVM (the Dutch public health institute) all work with averages. According to Nate Silver, statisticians even have a joke about it: “A statistician once drowned crossing a river that was, on average, three feet deep.”
Extremes get lost in averages. This phenomenon is called “out of sample”: the event falls outside the data on which the prediction was based. And such out-of-sample cases are very common: the fall of the Berlin Wall, the 9/11 attacks on the WTC, the banking crisis, the corona crisis. None of them were predicted. All of them were out-of-sample events.
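To make the three-foot-river joke concrete, here is a minimal sketch in Python; the depth figures are invented purely for illustration:

```python
# A toy illustration of the "three-foot river" joke: the average
# depth sounds safe, but it hides the one spot deep enough to drown in.
depths_ft = [1, 1, 1, 1, 11]  # hypothetical depth measurements along the river

average = sum(depths_ft) / len(depths_ft)
deepest = max(depths_ft)

print(f"average depth: {average:.1f} ft")  # average depth: 3.0 ft
print(f"deepest point: {deepest} ft")      # deepest point: 11 ft
```

The average says the river is wadeable; the extreme is what drowns you. A forecast built only on the average is blind to exactly the cases that matter most.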
Be aware of the unknown.
“Nobody saw it coming,” said S&P, the credit rating agency that assigns countries credit ratings such as AAA, A+, or B-, about the banking and credit crisis. Apparently we humans find that an acceptable excuse. Even S&P got away with it, even though estimating risk is at the heart of their job.
So there will always be things we cannot predict in advance. As a forecaster, the only thing you can do is take the unexpected into account. No matter how difficult that is.
What can you do with this for your content?
Can you do something with this knowledge yourself? Yes! Suppose you have a strong idea within your field about tech, marketing, SEO, or content. You think you know which way things are heading. You would like to share your opinion. What should you take into account?
- The basic advice is simple: just make your prediction. We like to read predictions, and that will not change in the future.
- Substantiate it. When you predict, list your arguments clearly. Few people will check your prediction afterward, but it is especially useful for yourself to see where you were and were not right.
- Make your prediction at the end of the year. Predictions I posted during the year were consistently read less than those published at year’s end.
So go ahead and predict. And if you doubt whether your predictions will come true, know that the reader doesn’t care. But also know this: if you turn out to be 100% right afterward, don’t expect a pat on the back. Be happy, treat yourself to a good cappuccino, and hope that maybe in 35 years someone will look back and discover that you were right after all…