
Practical Time Series Forecasting – Know When to Hold ‘em

The only relevant test of the validity of a hypothesis is comparison of prediction with experience.
Milton Friedman, economist

Holdout samples are a mainstay of predictive analytics.

Set aside a portion of your data (say, 30%). Build your candidate models. Then “internally validate” your models using the holdout sample.

More sophisticated methods like cross-validation use multiple holdout samples. But the idea is to see how well your models predict using data the model has not “seen” before. Then go back and fine-tune to improve the models’ predictive accuracy.
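As a rough illustration of the idea, here is a minimal sketch using synthetic data and scikit-learn’s train_test_split (the 30% split simply echoes the figure above). Note that for time series, as discussed below, the split must respect time order rather than being drawn at random.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression

# Synthetic example data: 1,000 observations, 3 predictors, 1 target.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(size=1000)

# Set aside 30% as the holdout sample; the model never "sees" these rows.
X_build, X_hold, y_build, y_hold = train_test_split(X, y, test_size=0.30, random_state=42)

# Fit on the build sample, then check predictive accuracy on the holdout.
model = LinearRegression().fit(X_build, y_build)
print("Holdout R^2:", model.score(X_hold, y_hold))
```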

Time series holdout samples

The truest test of your models is when they are applied to “new” data. Data from a fresh marketing campaign, a new set of customers, a more recent time period (“external validation”).

But you may not have access to such data when building your models. You certainly will not have access to future data.

So, a holdout sample needs to be crafted from the historical data at your disposal.

When building predictive models for, say, a marketing campaign or for loan risk scoring, there is usually a large amount of data to work with. So, holding out a sample for testing still leaves lots of data for model building.

However, the situation can be much different when working with time series data.

Depending on the frequency of the series, the number of data points available to work with can be limited. 50 years of annual data is just 50 data points. 5 years of monthly data is just 60 data points.

Obviously, the greater the frequency of the data, the greater the number of data points available to work with…5 years of daily data is 1,825 data points. But these time series sample sizes usually pale in comparison with the large customer datasets used to fuel marketing campaigns, which can run into the hundreds of thousands.

So, does this mean that holdout samples shouldn’t be used to test time series forecasting models?

Absolutely not!

You still need a way to whittle down your candidate models. You just need to be careful in how you select and use your holdout sample.

Holdout sample length

How much data should you set aside for a holdout sample? The rule of thumb we go by is to choose a holdout sample that is at least as long as (a) your forecast horizon or (b) the length of time your business needs to make a change.

Suppose you need a 12-month forecast to support a business plan. And you wish to forecast monthly sales for the 12 months starting November 1, 2017.

Then your holdout sample should cover at least the 12 months from November 2016 through October 2017. And your estimation sample should be all months prior to November 2016.
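In code, carving out such a holdout is just a date-based split. Below is a minimal sketch assuming a hypothetical monthly sales series stored in a pandas Series with a date index (the dates match the example above).

```python
import numpy as np
import pandas as pd

# Hypothetical monthly sales series running through October 2017.
idx = pd.date_range("2010-01-01", "2017-10-01", freq="MS")
sales = pd.Series(np.random.default_rng(1).normal(100, 10, len(idx)), index=idx)

# Estimation sample: all months prior to November 2016.
estimation = sales[:"2016-10"]

# Holdout sample: the 12 months from November 2016 through October 2017.
holdout = sales["2016-11":"2017-10"]
```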

Using a holdout sample for time series forecasting

Remember, the time series methods we are addressing are best used for short-run forecasting. Most business forecasting needs are for short-run forecasts. The next few months or few years. Not the next 5 to 10 years.

Alternatively, suppose your business only needs 8 months to make a change (maybe it is getting more salespeople on line). Then your holdout sample should be at least 8 months.

Holdout sample performance

Once you estimate a model, you apply it to the holdout sample to see how well it predicts. There are several measures you can use to gauge how well your model performs. We focus on measures of accuracy and bias.

To measure forecast accuracy:

If the business cost of a forecast error is high, then the Mean Square Error (MSE) or Root Mean Square Error (RMSE) will magnify it, since the forecast errors are squared. MSE is the average of (predicted – actual)².

If the business cost of a forecast error is more moderate, then the Mean Absolute Percent Error (MAPE) can be used. MAPE is simply the average of the absolute value of [(predicted – actual)/actual]. However, take care if zero values are possible in the actuals, as MAPE would then be undefined.
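A minimal sketch of these accuracy measures, coded directly from the definitions above (the function names are our own), might look like this:

```python
import numpy as np

def mse(actual, predicted):
    """Mean Square Error: the average of (predicted - actual) squared."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return np.mean((predicted - actual) ** 2)

def rmse(actual, predicted):
    """Root Mean Square Error: square root of MSE, in the units of the data."""
    return np.sqrt(mse(actual, predicted))

def mape(actual, predicted):
    """Mean Absolute Percent Error, in percent. Undefined if any actual is 0."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    if np.any(actual == 0):
        raise ValueError("MAPE is undefined when an actual value is 0.")
    return np.mean(np.abs((predicted - actual) / actual)) * 100
```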


To measure forecast bias:

The Mean Percent Error (MPE) will indicate whether there is a systematic bias to the forecast. If positive, the model is over-predicting; if negative, it is under-predicting. And the further from 0, the greater the bias. MPE is the average of [(predicted – actual)/actual].

An alternative measure of systematic error is the “bias proportion” of Theil’s inequality coefficient. It measures the extent to which the average values of the forecasted and actual series deviate from each other; the larger the value, the greater the systematic bias.
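Again as a sketch, the bias measures can be coded from the definitions above. For Theil’s statistic we use the common “bias proportion” formulation: the squared difference of the forecast and actual means, divided by the mean squared forecast error.

```python
import numpy as np

def mpe(actual, predicted):
    """Mean Percent Error, in percent: positive = over-predicting, negative = under-predicting."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return np.mean((predicted - actual) / actual) * 100

def theil_bias_proportion(actual, predicted):
    """Bias proportion of Theil's inequality coefficient: squared difference of the
    forecast and actual means, divided by the mean squared forecast error."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    mse = np.mean((predicted - actual) ** 2)
    return (predicted.mean() - actual.mean()) ** 2 / mse
```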

In general, in the holdout sample, a good performing model will exhibit low overall error (high accuracy) and low systematic bias.

The chart below shows an example of such a model using a 5-month holdout sample. On average, the model’s error is between 0.28% and 1.85% while exhibiting a very small positive bias of 0.10%.

Example of holdout sample performance

Note that there is no absolute criterion for what constitutes a “low” value of an error measure such as MSE.

Measures of forecast error are to be judged relative to the context of the forecast you are making. In some cases, your models may be averaging errors in the 30% range; in others, in the single digits.

Length of estimation sample

A related issue is how much data do you use for model estimation?

Often, there is no choice. After setting aside a holdout sample, there may be just a bare minimum amount of data left for modeling (i.e., you need more data points than model parameters to be estimated).

In general, the fewer the number of model parameters and the less “noisy” the data (i.e. less random), the fewer the number of data points needed. Typically, though, we look for at least 40 data points.

If you have a high frequency time series (monthly, daily, hourly) you may have room to consider whether the choice of the estimation sample length can affect model performance.

One can argue that the modeling sample should reflect the characteristics of the forecast horizon. That is, the next year, say, is more likely to be like the past several years than like 20 years ago. So, limit the estimation sample to more recent years.

Consider the time series shown below. Clearly the time path of this series has not been consistent. Rather than estimating a model using the entire historical sample, maybe limit it to the more recent period.

Low variation time series

The trade-off is that there is less experiential history upon which to base a model. Maybe the dynamics associated with that turning point in early 2000 and subsequent recovery could prove to be fertile ground for training your model.

But this is a testable proposition!

Because you have already set aside a holdout sample, you can test whether a model estimated on the full (non-holdout) sample performs better in the holdout sample than one based on a more recent sample.
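Here is a sketch of such a test, assuming synthetic data and a simple Holt exponential smoothing model from statsmodels (any candidate model would do): fit one model on the full pre-holdout history and one on a recent window, then compare their holdout MAPEs.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Hypothetical monthly series with a level shift partway through its history.
idx = pd.date_range("2000-01-01", periods=216, freq="MS")   # 18 years of monthly data
rng = np.random.default_rng(2)
level = np.where(np.arange(216) < 96, 80.0, 120.0)           # shift after year 8
y = pd.Series(level + rng.normal(0, 5, 216), index=idx)

holdout = y[-12:]                  # last 12 months reserved as the holdout sample
candidates = {
    "full history":   y[:-12],     # everything before the holdout
    "recent 5 years": y[-72:-12],  # only the most recent 60 pre-holdout months
}

for label, estimation in candidates.items():
    forecast = ExponentialSmoothing(estimation, trend="add").fit().forecast(12)
    mape = np.mean(np.abs((np.asarray(forecast) - holdout.values) / holdout.values)) * 100
    print(f"{label}: holdout MAPE = {mape:.1f}%")
```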

Data frequency compression

Another use for a holdout sample is to test for whether changes to the frequency of the time series will improve predictive accuracy.

The frequency of the time series could be reduced to help match a desired forecast horizon. For example, suppose management wants a 3-year forecast. And you are working with monthly SALES. Yes, you could produce a 36 period (month) forecast. But that might be pushing the limits of your methodology, especially if there is not a strong trend.

Alternatively, by converting to a quarterly series, you would lessen the variability in your data and forecast only 12 periods. This might yield a more accurate forecast.

But again, this is testable using a holdout sample!
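Here is a sketch of that test, again with synthetic monthly data and a simple trend model from statsmodels: forecast 36 months from the monthly series, forecast 12 quarters from the compressed quarterly series, and compare both against the quarterly holdout.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Hypothetical monthly SALES series: 10 years of trend plus noise.
idx = pd.date_range("2008-01-01", periods=120, freq="MS")
rng = np.random.default_rng(3)
sales = pd.Series(500 + 2.0 * np.arange(120) + rng.normal(0, 40, 120), index=idx)

# Hold out the final 3 years, matching the desired 3-year forecast horizon.
est_m, hold_m = sales[:-36], sales[-36:]

# Option 1: forecast 36 periods from the monthly series.
fc_m = ExponentialSmoothing(est_m, trend="add").fit().forecast(36)
fc_m = pd.Series(np.asarray(fc_m), index=hold_m.index)   # ensure a dated index

# Option 2: compress to quarterly totals and forecast only 12 periods.
quarterly = sales.resample("QS").sum()
est_q, hold_q = quarterly[:-12], quarterly[-12:]
fc_q = ExponentialSmoothing(est_q, trend="add").fit().forecast(12)

# Compare on a common footing: aggregate the monthly forecast to quarters.
fc_m_q = fc_m.resample("QS").sum()
mape = lambda a, f: np.mean(np.abs((np.asarray(f) - np.asarray(a)) / np.asarray(a))) * 100
print(f"Monthly model,   quarterly holdout MAPE: {mape(hold_q, fc_m_q):.1f}%")
print(f"Quarterly model, quarterly holdout MAPE: {mape(hold_q, fc_q):.1f}%")
```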

Bottom line

Holdout samples are a critical component of a time series forecasting methodology.

In a later article we will address using multiple holdout samples…to help guard against basing a model on a single, unrepresentative holdout sample (i.e. we found a great model just because we got lucky!).


Part 1 – Practical Time Series Forecasting – Introduction

Part 2 – Practical Time Series Forecasting – Some Basics

Part 3 – Practical Time Series Forecasting – Potentially Useful Models

Part 4 – Practical Time Series Forecasting – Data Science Taxonomy