
Practical Time Series Forecasting – Bounding Uncertainty

"A good forecaster is not smarter than everyone else; he merely has his ignorance better organized."
Anonymous

Predicting the future is an exercise in probability rather than certainty. As we have mentioned several times over the course of these articles, your forecast model will be wrong.

It is just a matter of how useful it might be.

A time series model will forecast a path through the forecast horizon, a "point forecast." But this path is just one of many the series could plausibly follow, given your estimated model.

Providing a sense of the uncertainty surrounding your forecast is an essential part of your job as a forecaster.

Forecast intervals

The standard approach is to report a "forecast interval" around the point forecast.

Typically, this is cast in terms of a 95% prediction interval. That is, 95 times out of 100, the actual value will fall within the specified range. (Note that a "confidence" interval bounds an estimated model parameter, while a "forecast" interval bounds a future observation; the latter is wider because it also reflects the variability of the error term.)

[Figure: forecast interval around the point forecast]

Sources of forecast uncertainty

There are at least two sources of forecast uncertainty over the forecast horizon.

The first results from our ignorance of what the model’s error will be in the forecast horizon. So, we must rely on how well the model did in the recalibration sample (estimation + holdout) as an estimate.

The second source of uncertainty results from the model’s coefficients (or parameters) being estimates of their true values. As estimates, they have their own “confidence” interval.
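To make the first source concrete, here is a minimal sketch in Python of using an estimation/holdout split to estimate the error you can expect in the horizon. The synthetic series, the ARIMA specification, and the 12-period holdout are all illustrative assumptions, not recommendations:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(0.5, 1.0, 120))     # synthetic monthly series

# Estimation + holdout split: fit on all but the last 12 observations.
y_est, y_hold = y[:-12], y[-12:]
res = ARIMA(y_est, order=(1, 1, 1)).fit()

# Compare holdout actuals with the model's forecasts for that period;
# the RMSE is an estimate of the error to expect in the horizon.
fc = res.forecast(steps=12)
rmse = np.sqrt(np.mean((y_hold - fc) ** 2))
print(f"holdout RMSE: {rmse:.3f}")
```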

As a result, the forecast interval can be quite large (as shown above). And, due to error compounding over time, the forecast interval widens the further into the forecast horizon you go.

In our example above, during the first month of the forecast horizon, the forecast interval is plus or minus 0.63% of the forecasted value. By month 6, this spread widens to plus or minus 2.95%.
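If you fit your model with a package such as Python's statsmodels, the forecast intervals come with the forecast object. A minimal sketch, again on an illustrative synthetic series, printing the interval half-width as a percentage of the point forecast so you can watch it widen:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(0.5, 1.0, 120))     # synthetic monthly series

res = ARIMA(y, order=(1, 1, 1)).fit()
fc = res.get_forecast(steps=6)
mean = fc.predicted_mean
ci = fc.conf_int(alpha=0.05)                 # shape (6, 2): lower, upper

# The interval half-width, as a share of the point forecast, grows
# with the horizon as the errors compound.
for h in range(6):
    half_width_pct = 100 * (ci[h, 1] - mean[h]) / mean[h]
    print(f"month {h + 1}: forecast {mean[h]:.2f}, +/- {half_width_pct:.2f}%")
```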

Even accounting for forecast error and parameter uncertainty, these forecast intervals may still be too narrow: they assume the model itself is correct, so model misspecification and structural change over the horizon are not reflected in them.

What about meta forecasts?

In an earlier article we discussed combining forecasts into a meta forecast. The challenge for a meta prediction interval is that the prediction intervals of the constituent forecasts cannot simply be combined, since the constituent models' errors are typically correlated.

One approach is to simply show the extreme upper and lower forecast paths along with the meta forecast path, which will lie somewhere between the two extremes.

Then caution the consumer of your forecast that this envelope only gives a sense of the possible range, and that it is likely still too narrow (since the upper and lower forecasts each carry their own prediction intervals).
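A minimal sketch of that envelope idea, using made-up constituent forecast paths: take the period-by-period extremes of the point-forecast paths and show them around a simple-average meta forecast:

```python
import numpy as np

# Three illustrative constituent point-forecast paths (not real model output).
paths = np.array([
    [105.0, 107.0, 109.0, 111.0],   # model A
    [103.0, 106.0, 110.0, 114.0],   # model B
    [104.0, 108.0, 108.5, 112.0],   # model C
])

lower = paths.min(axis=0)           # most pessimistic path, period by period
upper = paths.max(axis=0)           # most optimistic path
meta = paths.mean(axis=0)           # meta forecast lies between the extremes

for t, (lo, m, hi) in enumerate(zip(lower, meta, upper), start=1):
    print(f"period {t}: {lo:.1f} <= {m:.1f} <= {hi:.1f}")
```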

Probability-based assessment of forecast uncertainty

Another approach is to couch your forecast uncertainty in terms of a probability.

For example, based on your SALES forecast, what are the chances of hitting a certain level of sales by a certain date? If you are forecasting procurement needs for a warehouse, what is the chance of running out of inventory by a certain date? If you are a macroeconomist forecasting GDP, what are the chances of the economy falling into a recession by a certain date?

Suppose you are tasked with forecasting daily SALES over the next year.

Management has targeted a certain level of SALES and wants to know when that target will be hit. You can use the forecast uncertainty produced by your model to generate the following chart:

[Figure: forecast risk curve]

The vertical axis is the chance of hitting the SALES target by a certain date (in this case, days into the next year). So, 160 days into the year, there is a 10% chance of hitting the sales target.

By day 192, a month later, the chance has grown to 30%. And by day 218, there is a 50/50 chance the sales target will be reached.
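One way to build such a risk curve is by simulation: generate many paths consistent with your model, and for each date count the fraction of paths that have already reached the target. A minimal sketch, using a random walk with drift as a stand-in for your fitted model (the drift, volatility, and target numbers are made up, so the percentages will not reproduce the chart above):

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, horizon, target, start = 10_000, 365, 1_500.0, 1_000.0

# Simulate daily SALES paths as a random walk with drift -- a stand-in
# for whatever paths your fitted model actually implies.
shocks = rng.normal(loc=2.0, scale=25.0, size=(n_paths, horizon))
paths = start + np.cumsum(shocks, axis=1)

# For each day, the fraction of paths that have already reached the
# target estimates the chance of hitting it by that day.
hit_by_day = np.maximum.accumulate(paths, axis=1) >= target
risk_curve = hit_by_day.mean(axis=0)

for day in (60, 120, 180, 240, 300, 365):
    print(f"by day {day}: {100 * risk_curve[day - 1]:.0f}% chance of hitting target")
```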

Stating these chances in terms of odds may be an easier way to present this:

By day 160, the odds against hitting the target would be 9 to 1. By day 192, they shorten to a little over 2 to 1. And by day 218, it is 1 to 1…a flip of the coin.
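The conversion is just arithmetic: the odds against an event with probability p are (1 − p) to p. A quick check of the figures above:

```python
# Odds against an event with probability p are (1 - p) to p.
def odds_against(p: float) -> float:
    return (1 - p) / p

for p in (0.10, 0.30, 0.50):
    print(f"{p:.0%} chance -> {odds_against(p):.2f} to 1 against")
```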

Bottom line

Uncertainty is a fact of life and your forecasts will be “wrong.”

But quantifying how wrong they can be will go a long way towards making them “useful.”


Part 1 – Practical Time Series Forecasting – Introduction

Part 2 – Practical Time Series Forecasting – Some Basics

Part 3 – Practical Time Series Forecasting – Potentially Useful Models

Part 4 – Practical Time Series Forecasting – Data Science Taxonomy

Part 5 – Practical Time Series Forecasting – Know When to Hold ’em

Part 6 – Practical Time Series Forecasting – What Makes a Model Useful?

Part 7 – Practical Time Series Forecasting – To Difference or Not to Difference

Part 8 – Practical Time Series Forecasting – Know When to Roll ’em

Part 9 – Practical Time Series Forecasting – Meta Models