
Seeing the future

Is forecasting folly? Lucrezia Reichlin weighs up the arguments


We crave certainty. We want to feel secure now and we want to feel that we have a pretty good idea about our likely future. We look at forecasts and we want them to be right. We want to trust them. And when forecasts – of tomorrow’s weather, next year’s inflation rate, the outcome of an election – turn out to be wrong, the public feels let down. Forecasters are ridiculed. I think this is unfair – certainly when looking at the economy. People may want something certain, but to be realistic, they’re not going to get it. It’s impossible.

So does that mean that we should give up? If we can’t have confidence in a prediction, then does that mean it’s useless? No.
There are two types of forecast: the purely statistical and the structural.

Statistical forecasts come from models which use data without imposing assumptions about the way that the economy works – about causal relationships. A statistical model works simply by identifying patterns in the historical data. Structural models, by contrast, impose assumptions – for example, that individuals are rational. Structural models are often easier to interpret because they predict the behaviour of economic agents – consumers and firms – under different scenarios. Unfortunately, while structural models can 'make sense' of the data, they are not very good at predicting what will actually happen in the future.
No models – statistical or structural – do well when forecasting over anything but a relatively short horizon, but within the horizon that is predictable – less than one year – statistical models generally outperform structural ones.
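To make the distinction concrete, here is a minimal sketch of what “purely statistical” means in practice: an autoregressive model that simply extrapolates the pattern in past data, with no view on how the economy works. The inflation figures are made up for illustration.

```python
# A purely statistical forecast: fit y_t = a + b*y_{t-1} to historical data
# and iterate it forward. No economic assumptions, just pattern-matching.
import numpy as np

inflation = np.array([2.1, 2.3, 2.0, 1.8, 2.2, 2.5, 2.4, 2.6])  # hypothetical quarterly %

# Ordinary least squares of each observation on its own lag.
b, a = np.polyfit(inflation[:-1], inflation[1:], 1)  # slope, intercept

# Iterate the fitted relationship forward four quarters.
last, forecast = inflation[-1], []
for _ in range(4):
    last = a + b * last
    forecast.append(round(last, 2))

print(forecast)  # the model has no idea *why* inflation moves
```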

When policy institutions and investment banks produce forecasts, they use models of the economy of both types. It is rare, however, that a structural model is used for forecasting in its pure form: judgement is used as well in order to put together a consistent story about the economy on the basis of (judgemental) assumptions about external variables.
For example, we may not be very good at forecasting the exchange rate, but we can make an assumption about what it will be; then we can make projections for other variables such as inflation, interest rates and so on.
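As a sketch of that kind of conditional projection – projecting one variable under an assumed, judgemental path for another – consider the following; the pass-through coefficient and all the numbers are hypothetical:

```python
# Conditional projection: assume a path for the exchange rate, then project
# inflation from an estimated historical relationship. Numbers are made up.
baseline_inflation = 2.0           # current inflation, % (hypothetical)
pass_through = 0.1                 # assumed: 1% depreciation adds 0.1pp to inflation

assumed_depreciation = [3.0, 1.0]  # judgemental assumption for years 1 and 2, %
for year, fx in enumerate(assumed_depreciation, start=1):
    projected = baseline_inflation + pass_through * fx
    print(f"Year {year}: assuming {fx:.0f}% depreciation, "
          f"projected inflation is {projected:.1f}%")
```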

The uncertainties


Already, there are two uncertainties: first about what the external variable – in this case the exchange rate – is going to be; and second about the relationship between different variables. We know that these relationships are very unstable. Models are very rough approximations.

“From a forecast, you can put together a story”

When a policy institution is putting together a forecast, it will never be derived from just one model: it will come from various different sources. Then everything can be cross-checked for consistency. Statistical models are used to predict the very short term – what is called nowcasting – while structural models are used to construct scenarios further ahead, say one or two years. Crucially, from a forecast, you can put together a story. I see a forecast as a story-telling device.
This is an important point: constructing a forecast makes you cross-check your story for coherence. For example, if you say something about the exchange rate, is it consistent with what you are saying about inflation? What are you saying about unemployment and about GDP? We know because of the historical relationship between these variables that there must be a coherence. Constructing a forecast allows you to bring these things together.

Of course, the uncertainties are huge. And if, at some point in the medium-term future, you look back at your forecasts, you are quite likely to find that they were incredibly inaccurate.

Enforcing discipline


But that doesn’t mean that it was wrong to even try to make the forecasts in the first place. The process is useful because it enforces the discipline of having to be internally coherent. Also, it allows an institution such as a central bank to ask “what if…?” questions. It provides a framework to test what the different possible outcomes would be of a range of policy moves. The forecast may not correctly predict the exact outcome: again, there are likely to be too many external variables over which the bank has no control and which will affect the result. But at least the central bank can make comparisons between available policy choices: “What if we did this as opposed to doing that?” Or “What if GDP were to fall by 2 percent? What would happen to banks’ balance sheets?”
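A toy version of such a “what if?” exercise might look like this; the bank’s balance sheet and the loss sensitivity are assumptions made purely for illustration:

```python
# Scenario analysis: if GDP fell by 2 percent, what would happen to a
# (hypothetical) bank's balance sheet? The sensitivity below is assumed.
gdp_shock = -2.0                 # the scenario: GDP falls 2%
loss_sensitivity = 0.5           # assumed: loan losses rise 0.5pp per 1% GDP fall
loan_book, capital = 100.0, 8.0  # made-up bank: loans and capital, in billions

extra_losses = loan_book * loss_sensitivity * abs(gdp_shock) / 100
print(f"Extra loan losses: {extra_losses:.1f}bn; "
      f"capital falls from {capital:.1f}bn to {capital - extra_losses:.1f}bn")
```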

None of this is to deny the central point that forecasts – even for the medium term, let alone the long term – are likely to turn out to be inaccurate. But one key element of the whole process – and perhaps one which the public doesn’t appreciate – is that forecasts will themselves acknowledge their own uncertainty. With a forecasting model, you can make a probabilistic statement; for example, “the inflation rate is likely to be X in a year’s time, and the likelihood of its being within a certain margin either side of that figure is, say, 50 percent. In two years’ time, the inflation rate is likely to be Y and the likelihood of its being within the same margin is, say, 25 percent.” The further ahead the forecast, the greater the uncertainty.
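The same statement can be turned round: hold the probability fixed and let the margin widen with the horizon. A minimal sketch, assuming forecast errors are normally distributed and accumulate over time (all the numbers are invented):

```python
# Point forecasts with intervals that widen as the horizon lengthens.
import math

point_forecast = {1: 2.0, 2: 2.2}  # inflation %, by years ahead (hypothetical)
sigma = 0.8                        # assumed one-year-ahead error, percentage points

for h, x in point_forecast.items():
    # If errors accumulate, uncertainty grows roughly with sqrt(horizon);
    # 0.674 is the normal quantile giving a central 50% interval.
    margin = 0.674 * sigma * math.sqrt(h)
    print(f"{h} year(s) ahead: {x:.1f}%, 50% chance of "
          f"{x - margin:.1f}% to {x + margin:.1f}%")
```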

Acknowledge uncertainty


It is crucial to acknowledge uncertainty, to try to quantify it and to make it part of the forecast itself. It is possibly this element of forecasting that the public doesn’t understand – which helps explain why forecasters get such a bad press. People aren’t educated in this kind of probabilistic way of looking at the world.
As forecasters try to look further and further into the future, their predictions become ever more uncertain. For example, an investment bank may predict that China will become the world’s largest economy in such-and-such a year. Well, maybe, maybe not. This is the sort of forecast that makes a stab in the dark look like laser surgery.

Forecasts that try to predict with confidence how things will look even in the medium term – say two years ahead – are likely to be wrong. Or rather, the margin of error is so large as to make the forecast almost meaningless. But that doesn’t stop their being useful as story-telling devices – enforcing internal consistency and providing useful answers to those “what if…?” questions.
Is there any type of forecasting that produces sound numbers where the errors are likely to be small? Yes. That’s “nowcasting” – using all the available data to produce an estimate of what has happened in the very recent past, the position now and the very short-term future.

“Economic data are all published after the event”

Nowcasting takes every piece of data – from what is happening in the markets at the moment to the results of business confidence surveys, for example – as it comes along. All that new data is then digested, and an algorithm produces an update of the forecasts every 15 minutes.
Is anything new? If so, the algorithm reruns its projection and produces a new set of forecasts for all the variables. The logic of the nowcasting model is that it extracts all the data that is relevant and throws away the junk. It’s done totally mechanically.

If there is a piece of data that the algorithm finds “surprising” – in other words, something that is different from what it had expected – then its projections will be shifted. By continually updating, it increases its precision.
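In spirit, the update works like a simple filtering step: shift the estimate towards the news in proportion to how surprising it is. A stripped-down scalar sketch – real nowcasting models do this across many series at once, and every number here is invented:

```python
# Toy "surprise" update: each new reading moves the nowcast by a gain
# times the surprise, and the uncertainty shrinks with every update.
nowcast = 1.5                   # current GDP-growth estimate, % (hypothetical)
variance = 0.5 ** 2             # uncertainty around that estimate
noise = 0.4 ** 2                # assumed noisiness of incoming data

for observed in [1.8, 1.7, 1.6]:        # made-up survey readings arriving over time
    surprise = observed - nowcast       # how different is the news from expectations?
    gain = variance / (variance + noise)
    nowcast += gain * surprise          # shift the projection towards the news...
    variance *= (1 - gain)              # ...and tighten the uncertainty
    print(f"data {observed:.1f} -> nowcast {nowcast:.2f} "
          f"(+/- {variance ** 0.5:.2f})")
```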

So what use is nowcasting? It allows you to use “soft” data such as survey results to fill the gap before official statistics are available. There are vast amounts of data around that can be exploited. The numbers the nowcasting black box produces can then be handed over to the story-telling forecasters to do whatever they want with them.
Those forecasters will at least have a reasonably good idea of the current economic variables from which to work. Of course, when the hard data – on GDP, for example – become available, the soft information like surveys becomes redundant. But that will typically be some time after the end of the quarter to which the data refer.

This has been an important area of my research. My colleague Professor Domenico Giannone (now at the New York Fed) and I designed the first nowcasting model used by a central bank – for the Federal Reserve in the early 2000s. Nowcasting models have since become widely used in central banks and have given rise to a field of academic research. A company I set up with my colleagues – Now-Casting Economics Ltd – operates these models on an automated platform and distributes the output data to market participants who pay a subscription fee.

But returning to the central question: is the public’s scepticism about forecasters and forecasting justified?


There have been times when criticism has been entirely fair – when things fail massively. A decade ago, the failure to see the risks that emerged in the great financial crisis represented a failure of the whole profession. There was a big piece missing from the story-telling device.

No one could have predicted the recession. But forecasters hadn’t spotted the weaknesses of the financial system – the over-leverage, the risks and so on. That is something we can criticise.

“Probabilities lack a simple mass appeal”

Nevertheless, on the whole, I think that forecasting is useful and I would defend it. Forecasts are going to be inaccurate. That’s in their nature. People seek a level of certainty about the future that is simply impossible to achieve: we can talk about no more than probabilities, and probabilities lack a simple mass appeal. That’s why forecasters are never going to be heroes.

