<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	
	xmlns:georss="http://www.georss.org/georss"
	xmlns:geo="http://www.w3.org/2003/01/geo/wgs84_pos#"
	>

<channel>
	<title>forecast error Archives - KDD Analytics</title>
	<atom:link href="https://www.kddanalytics.com/tag/forecast-error/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.kddanalytics.com/tag/forecast-error/</link>
	<description>Data to Decisions</description>
	<lastBuildDate>Sat, 24 Mar 2018 02:48:29 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.8.3</generator>

<image>
	<url>https://i0.wp.com/www.kddanalytics.com/wp-content/uploads/2016/08/cropped-imageedit_1_7939659602.png?fit=32%2C32&#038;ssl=1</url>
	<title>forecast error Archives - KDD Analytics</title>
	<link>https://www.kddanalytics.com/tag/forecast-error/</link>
	<width>32</width>
	<height>32</height>
</image> 
<site xmlns="com-wordpress:feed-additions:1">114932494</site>	<item>
		<title>Practical Time Series Forecasting – Meta Models</title>
		<link>https://www.kddanalytics.com/practical-time-series-forecasting-meta-models/</link>
		
		<dc:creator><![CDATA[KDD]]></dc:creator>
		<pubDate>Mon, 05 Feb 2018 01:47:38 +0000</pubDate>
				<category><![CDATA[Data Analytics Methods]]></category>
		<category><![CDATA[Econometrics]]></category>
		<category><![CDATA[Forecasting]]></category>
		<category><![CDATA[Time Series]]></category>
		<category><![CDATA[forecast error]]></category>
		<category><![CDATA[MAPE]]></category>
		<category><![CDATA[meta forecast]]></category>
		<category><![CDATA[MPE]]></category>
		<category><![CDATA[regression]]></category>
		<category><![CDATA[weighting]]></category>
		<guid isPermaLink="false">http://www.kddanalytics.com/?p=1331</guid>

					<description><![CDATA[<p>“There are two kinds of forecasters: those who don’t know, and those who don’t know they don’t know.” ― John Kenneth Galbraith After an extensive model building and vetting process, along the lines we previously discussed here and here, the practical forecaster may still be left with several strong performing models. These models perform similarly&#8230;</p>
<p>The post <a href="https://www.kddanalytics.com/practical-time-series-forecasting-meta-models/">Practical Time Series Forecasting – Meta Models</a> appeared first on <a href="https://www.kddanalytics.com">KDD Analytics</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>“<em>There are two kinds of forecasters: those who don’t know, and those who don’t know they don’t know.</em>”<br />
― <a href="https://en.wikipedia.org/wiki/John_Kenneth_Galbraith" target="_blank" rel="noopener"><strong>John Kenneth Galbraith</strong></a></p>
<p>After an extensive model building and vetting process, along the lines we previously discussed <strong><a href="https://www.kddanalytics.com/practical-time-series-forecasting-holdout-sample/" target="_blank" rel="noopener">here</a></strong> and <a href="https://www.kddanalytics.com/practical-time-series-forecasting-rolling-holdout-sample-analysis/" target="_blank" rel="noopener"><strong>here</strong></a>, the practical forecaster may still be left with several strong performing models.</p>
<p>These models perform similarly in the holdout sample tests. They retain their statistical properties when recalibrated on the full historical sample. But they <strong>yield different forecast paths over the forecast horizon</strong>.</p>
<p>Any one of the models could be easily defended. But the <strong>fact that the models yield different forecasts should make the forecaster pause</strong>.</p>
<h3>An example</h3>
<p>Below is an example of 3 short-run monthly forecasts:</p>
<p><img data-recalc-dims="1" fetchpriority="high" decoding="async" class="size-full wp-image-1334 aligncenter" src="https://i0.wp.com/www.kddanalytics.com/wp-content/uploads/2017/12/Example-of-Different-FC.png?resize=603%2C371&#038;ssl=1" alt="Examples of competiting forecasts" width="603" height="371" srcset="https://i0.wp.com/www.kddanalytics.com/wp-content/uploads/2017/12/Example-of-Different-FC.png?w=603&amp;ssl=1 603w, https://i0.wp.com/www.kddanalytics.com/wp-content/uploads/2017/12/Example-of-Different-FC.png?resize=300%2C185&amp;ssl=1 300w" sizes="(max-width: 603px) 100vw, 603px" /></p>
<p>The 3 models perform similarly in the holdout sample. One of the models is a least squares model. The other 2 are ARIMA models.</p>
<p>One model produces a <strong>steeply declining forecast</strong>. Another a <strong>slightly declining forecast</strong>. The third model produces an <strong>increasing forecast</strong>.</p>
<p>What should the forecaster do?</p>
<h3>How can this happen?</h3>
<p>Models are just that – models. They are abstractions from reality. And <strong>no single model will “fit” the holdout sample perfectly</strong>.</p>
<p>Two <strong>models</strong>, especially <strong>of different types</strong> (e.g. least squares vs. ARIMA), could have very <strong>similar holdout sample performance but differ</strong> dramatically <strong>in their forecast</strong> over the forecast horizon.</p>
<p>The holdout sample <strong>MAPE</strong> (<a href="https://www.kddanalytics.com/practical-time-series-forecasting-holdout-sample/" target="_blank" rel="noopener"><strong>mean absolute percentage error</strong></a>) could be very similar for these models. But the <strong>MAPE is an average error across the holdout sample</strong>. And the models could have arrived at their MAPEs by <strong>focusing on different aspects of the time series in the holdout sample.</strong></p>
<p>Projecting these differences into the forecast horizon can result in very different forecasts.</p>
<h3>Solutions</h3>
<p>When there is no clear “champion” model, one <strong>solution is to combine the forecasts into one</strong>. We call this a “<strong><a href="https://en.wikipedia.org/wiki/Metamodeling">meta</a></strong>” forecast.</p>
<p>There are several ways this can be accomplished.</p>
<h4>Checkpoint</h4>
<p><strong>But first</strong>, <strong>check</strong> to make sure the <strong>models</strong> to be combined are <strong>not “nested.”</strong> That is, <strong>one model is not a subset of another</strong>. If the models are nested, there is usually no advantage to combining their forecasts into a meta forecast.</p>
<p>In fact, a <strong>meta forecast will more likely be superior the greater the differences between the constituent models</strong>.</p>
<p>A meta forecast based on a least squares model and an ARIMA model will likely yield a smaller forecast error than that associated with either of the two models. However, if the two models were both least squares models, the superiority of a meta forecast might be questionable (<a href="https://www.amazon.com/Forecasting-Business-Economics-Econometrics-Mathematical/dp/0122951816"><strong>Granger, 1989</strong></a>).</p>
<h4>Solution 1</h4>
<p>The simplest approach to a meta forecast is to <strong>average the forecasts</strong> of the individual models.</p>
<p>This essentially assumes that <strong>each model’s forecast is equally important in the meta forecast </strong>(i.e. receives equal weighting). This is a quick and uncomplicated way to generate a meta forecast.</p>
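<p>As an illustration, this equal-weighting approach can be sketched in a few lines of Python (the forecast paths below are hypothetical):</p>

```python
def average_forecast(forecasts):
    """Equal-weight meta forecast: the simple average of the
    individual models' forecasts at each point in the horizon."""
    n_models = len(forecasts)
    horizon = len(forecasts[0])
    return [sum(f[t] for f in forecasts) / n_models for t in range(horizon)]

# Three hypothetical 6-month forecast paths, one per candidate model
meta = average_forecast([
    [100.0, 98.0, 96.0, 94.0, 92.0, 90.0],      # steeply declining
    [100.0, 99.0, 99.0, 98.0, 98.0, 97.0],      # slightly declining
    [100.0, 101.0, 102.0, 103.0, 104.0, 105.0]  # increasing
])
```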
<h4>Solution 2</h4>
<p>Another approach <strong>makes use</strong> of each model’s <strong>holdout sample performance measures of forecast accuracy and bias</strong>. A weighting for each model&#8217;s forecast can be calculated using each model’s <strong>MAPE</strong> and <strong>MPE</strong> (<a href="https://www.kddanalytics.com/practical-time-series-forecasting-holdout-sample/" target="_blank" rel="noopener"><strong>mean percentage error</strong></a>) relative to that of all the models combined.</p>
<p>The meta forecast would then be a <strong>weighted average</strong> of the individual model forecasts. Models with <strong>lower MAPE and MPE</strong> would receive <strong>higher weights and contribute more</strong> to the meta forecast.</p>
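<p>One concrete weighting scheme (an assumption on our part; the post does not pin down a single formula) is to weight each model in inverse proportion to its holdout error, so a lower MAPE means a higher weight:</p>

```python
def inverse_error_weights(errors):
    """Normalized inverse-error weights: a model with half the holdout
    error (e.g. MAPE) of another receives twice the weight."""
    inverse = [1.0 / e for e in errors]
    total = sum(inverse)
    return [i / total for i in inverse]

def weighted_forecast(forecasts, weights):
    """Weighted average of the individual model forecasts."""
    horizon = len(forecasts[0])
    return [sum(w * f[t] for w, f in zip(weights, forecasts))
            for t in range(horizon)]

# Hypothetical holdout MAPEs (%) for three candidate models
weights = inverse_error_weights([2.0, 4.0, 4.0])  # -> [0.5, 0.25, 0.25]
```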
<h4>Solution 3</h4>
<p>A third approach is to use <strong>regression</strong> to estimate the weights.</p>
<p>Using the holdout sample (or, if it is too small, the full sample), <strong>regress the actual value on the forecasted value from each model</strong>. The goal is to find a regression with <strong>no constant and all regression coefficients positive and statistically significant</strong>.</p>
<p>The regression <strong>coefficients should then sum very close to one</strong>. These <strong>coefficients then become the weights</strong> by which forecasts are combined into a meta forecast (see <a href="https://www.amazon.com/Business-Forecasting-ForecastX-Holton-Wilson/dp/0073373648/ref=sr_1_2?s=books&amp;ie=UTF8&amp;qid=1512008807&amp;sr=1-2&amp;keywords=wilson+keating+forecasting"><strong>Wilson and Keating</strong></a>).</p>
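<p>A minimal sketch of the regression approach, with hypothetical numbers; NumPy's least-squares solver stands in for a no-constant regression, and in practice you would also check the coefficients' signs and significance:</p>

```python
import numpy as np

def regression_weights(actuals, forecasts):
    """Regress holdout actuals on the models' forecasts with no
    constant term: the columns of `forecasts` are the regressors,
    and the fitted coefficients become the combination weights."""
    X = np.asarray(forecasts, dtype=float)  # shape (n_obs, n_models)
    y = np.asarray(actuals, dtype=float)    # shape (n_obs,)
    coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coefs  # should be positive and sum close to one

# Hypothetical holdout: actuals lie halfway between two models' forecasts
actuals = [101.0, 103.0, 105.0, 107.0]
forecasts = [[98.0, 104.0], [100.0, 106.0], [102.0, 108.0], [104.0, 110.0]]
weights = regression_weights(actuals, forecasts)  # close to [0.5, 0.5]
```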
<h3>Back to our example</h3>
<p>The forecaster could go with candidate 3 since it &#8220;splits the difference.&#8221; However, the forecaster is still left with the task of defending why the other two equally plausible models were not chosen.</p>
<p>Alternatively, a meta forecast can be used. As an example, we created a <strong>simple average forecast</strong> across the 3 candidate models. As discussed above, this <strong>assumes an equal weighting across the 3 short-run forecasts</strong>. A more sophisticated approach would have been to estimate the weights using a regression approach.</p>
<p><img data-recalc-dims="1" decoding="async" class="size-full wp-image-1335 aligncenter" src="https://i0.wp.com/www.kddanalytics.com/wp-content/uploads/2017/12/Example-of-a-meta-forecast.png?resize=605%2C371&#038;ssl=1" alt="Example of a meta forecast" width="605" height="371" srcset="https://i0.wp.com/www.kddanalytics.com/wp-content/uploads/2017/12/Example-of-a-meta-forecast.png?w=605&amp;ssl=1 605w, https://i0.wp.com/www.kddanalytics.com/wp-content/uploads/2017/12/Example-of-a-meta-forecast.png?resize=300%2C184&amp;ssl=1 300w" sizes="(max-width: 605px) 100vw, 605px" /></p>
<p>Not surprisingly, the meta forecast is quite similar to the essentially flat forecast of candidate 3 (which lies almost halfway between candidate 1’s and candidate 2’s forecasts). <strong>But not all cases will be like this</strong>.</p>
<p>If a regression approach to estimating the weights were used, the meta forecast could be quite different from that of candidate 3.</p>
<p>Yes, the meta forecast will lie between the two forecast extremes. But the <strong>assumed or estimated weights will dictate where the meta forecast will lie</strong>.</p>
<h3>Bottom line</h3>
<p>Combining forecasts from equally strong models is intuitively appealing since <strong>each model has its strengths and weaknesses</strong>.</p>
<p><strong> Combining</strong> models’ forecasts in a <strong>complementary fashion</strong> should lead to <strong>more robust and accurate short-run forecasts</strong>.</p>
<a class="dpsp-click-to-tweet dpsp-style-1" href="https://twitter.com/intent/tweet?text=Combine+forecasts+into+a+meta+forecast+for+a+more+accurate+forecast&url=https%3A%2F%2Fwww.kddanalytics.com%2Fpractical-time-series-forecasting-meta-models%2F"><div class="dpsp-click-to-tweet-content">Combine forecasts into a meta forecast for a more accurate forecast</div><div class="dpsp-click-to-tweet-footer"><span class="dpsp-click-to-tweet-cta"><span>Click to Tweet</span><i class="dpsp-network-btn dpsp-twitter"><span class="dpsp-network-icon"></span></i></span></div></a>
<p><a href="https://www.kddanalytics.com/practical-time-series-forecasting-introduction/" target="_blank" rel="noopener"><strong>Part 1 &#8211; Practical Time Series Forecasting &#8211; Introduction</strong></a></p>
<p><a href="https://www.kddanalytics.com/practical-time-series-forecasting-basics/" target="_blank" rel="noopener"><strong>Part 2 &#8211; Practical Time Series Forecasting &#8211; Some Basics</strong></a></p>
<p><a href="https://www.kddanalytics.com/practical-time-series-forecasting-useful-models/" target="_blank" rel="noopener"><strong>Part 3 &#8211; Practical Time Series Forecasting &#8211; Potentially Useful Models</strong></a></p>
<p><a href="https://www.kddanalytics.com/practical-time-series-forecasting-data-science-taxonomy/" target="_blank" rel="noopener"><strong>Part 4 &#8211; Practical Time Series Forecasting &#8211; Data Science Taxonomy</strong></a></p>
<p><a href="https://www.kddanalytics.com/practical-time-series-forecasting-holdout-sample/" target="_blank" rel="noopener"><strong>Part 5 &#8211; Practical Time Series Forecasting &#8211; Know When to Hold &#8217;em</strong></a></p>
<p><a href="https://www.kddanalytics.com/practical-time-series-forecasting-what-makes-a-useful-model/" target="_blank" rel="noopener"><strong>Part 6 &#8211; Practical Time Series Forecasting &#8211; What Makes a Model Useful?</strong></a></p>
<p><a href="https://www.kddanalytics.com/practical-time-series-forecasting-deterministic-stochastic-trend/" target="_blank" rel="noopener"><strong>Part 7 &#8211; Practical Time Series Forecasting &#8211; To Difference or Not to Difference</strong></a></p>
<p><a href="https://www.kddanalytics.com/practical-times-series-forecasting-rolling-holdout-sample/" target="_blank" rel="noopener"><strong>Part 8 &#8211; Practical Time Series Forecasting &#8211; Know When to Roll &#8217;em</strong></a></p>
<p>&nbsp;</p>
<p>The post <a href="https://www.kddanalytics.com/practical-time-series-forecasting-meta-models/">Practical Time Series Forecasting – Meta Models</a> appeared first on <a href="https://www.kddanalytics.com">KDD Analytics</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">1331</post-id>	</item>
		<item>
		<title>Practical Time Series Forecasting – Know When to Roll ‘em</title>
		<link>https://www.kddanalytics.com/practical-times-series-forecasting-rolling-holdout-sample/</link>
		
		<dc:creator><![CDATA[KDD]]></dc:creator>
		<pubDate>Mon, 29 Jan 2018 01:33:32 +0000</pubDate>
				<category><![CDATA[Data Analytics Methods]]></category>
		<category><![CDATA[Econometrics]]></category>
		<category><![CDATA[Forecasting]]></category>
		<category><![CDATA[Time Series]]></category>
		<category><![CDATA[forecast error]]></category>
		<category><![CDATA[holdout sample]]></category>
		<category><![CDATA[rolling analysis]]></category>
		<category><![CDATA[times series]]></category>
		<guid isPermaLink="false">http://www.kddanalytics.com/?p=1322</guid>

					<description><![CDATA[<p>“Prediction is very difficult, especially if it&#8217;s about the future.” ― Niels Bohr, physicist Holdout samples are a key component to estimating a “useful” forecasting model. Set aside data at least equal in length to your forecast horizon (“holdout sample”). Build your models on the remaining data (“modeling sample”). And compare the candidate models’ forecast&#8230;</p>
<p>The post <a href="https://www.kddanalytics.com/practical-times-series-forecasting-rolling-holdout-sample/">Practical Time Series Forecasting – Know When to Roll ‘em</a> appeared first on <a href="https://www.kddanalytics.com">KDD Analytics</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p><strong>“</strong><em>Prediction is very difficult, especially if it&#8217;s about the future.</em><strong>”<br />
― <a href="https://en.wikipedia.org/wiki/Niels_Bohr" target="_blank" rel="noopener">Niels Bohr</a></strong>, physicist</p>
<p><a href="https://www.kddanalytics.com/practical-time-series-forecasting-holdout-sample/" target="_blank" rel="noopener"><strong>Holdout samples</strong></a> are a key component to estimating a “useful” forecasting model. <strong>Set aside data at least equal in length to your forecast horizon</strong> (“holdout sample”). Build your models on the remaining data (“modeling sample”). And <strong>compare the candidate models’ forecast performance over the holdout sample.</strong></p>
<p>At a minimum, a single holdout sample should be used.</p>
<p>But to get a <strong>better sense of a model’s future performance, consider using multiple holdout samples</strong>.</p>
<p>This <strong>guards against</strong> basing your model on a <strong>holdout sample</strong> that is <strong>unrepresentative</strong> of the overall characteristics of the time series.</p>
<p>One way to achieve this is to use<strong> “rolling” holdout samples</strong>.</p>
<h3>Rolling analysis</h3>
<p>A <a href="https://link.springer.com/chapter/10.1007%2F978-0-387-32348-0_9" target="_blank" rel="noopener"><strong>rolling analysis</strong></a> of a time series is generally used to test a model’s stability. That is, <strong>are a model’s parameters stable across time</strong> or do they change, especially in a systematic way?</p>
<p>This is important for a forecasting model. We <strong>don’t want</strong> a forecasting model whose <strong>parameters</strong> are <strong>changing during the forecast horizon in an unexpected (i.e. unmodeled) manner.</strong></p>
<p>Suppose our forecast horizon is 6 months.</p>
<p><strong> Under a single holdout sample</strong>, we would <strong>set aside the last 6 months of data as the holdout sample</strong>. Then using the remaining data as the modeling sample, estimate models, forecast over the single holdout sample and compare the models’ performance.</p>
<p>This will help narrow down the pool of candidate models.</p>
<h4>Rolling holdout samples</h4>
<p>But under a rolling holdout approach, also called &#8220;<a href="http://otexts.org/fpp2/accuracy.html" target="_blank" rel="noopener"><strong>time series cross-validation</strong></a>,&#8221;  <strong>we would set aside a longer sample of data</strong>, say, the last 12 months. Then:</p>
<p><strong>Step 1:</strong>  Estimate a model and forecast over the <strong>first</strong> 6 months of this 12-month period (&#8220;roll 1&#8221;);</p>
<p><strong>Step 2:</strong>  Add one month to the tail end of the estimation sample, recalibrate the model, and forecast over the subsequent 6 months (“roll 2”);</p>
<p><strong>Step 3:</strong>  Add another month to the estimation sample, recalibrate, and forecast over the subsequent 6 months (“roll 3”);</p>
<p><strong>Step 4:</strong>  Repeat until there are no more 6-month periods (&#8220;rolls&#8221;) remaining in the 12-month period.</p>
<p>So, <strong>in this example</strong>, we would have <strong>recalibrated our model 7 times</strong> (each with a modeling sample that is one additional month longer than the previous). And we would have <strong>made 7 forecasts over the rolling holdout periods</strong>.</p>
<p>The <strong>last &#8220;roll</strong>,&#8221; it turns out, <strong>is the same 6-month period</strong> we would have used <strong>under a single 6-month holdout sample case</strong>. So, we generate the stats for a standard single holdout sample during the course of this rolling holdout approach.</p>
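<p>The rolling procedure can be sketched as follows (the naive "repeat the last value" model in the usage example is purely illustrative):</p>

```python
def rolling_holdout(series, horizon, holdout_len, fit, forecast):
    """Rolling holdout ('time series cross-validation'): for each
    forecast origin in the holdout window, refit on the data up to
    that origin and forecast the next `horizon` periods."""
    n = len(series)
    rolls = []
    for origin in range(n - holdout_len, n - horizon + 1):
        model = fit(series[:origin])       # recalibrate on longer sample
        preds = forecast(model, horizon)   # forecast this roll
        rolls.append((series[origin:origin + horizon], preds))
    return rolls

# 24 months of hypothetical history, 6-month horizon, 12-month holdout:
history = [float(i) for i in range(24)]
rolls = rolling_holdout(history, horizon=6, holdout_len=12,
                        fit=lambda train: train[-1],     # naive model
                        forecast=lambda m, h: [m] * h)
# yields 7 rolls, i.e. 7 recalibrations and 7 holdout forecasts
```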
<p>If we are examining multiple candidate models, this process can generate a lot of data. Below is an example of the rolling forecasts for one model.</p>
<p><img data-recalc-dims="1" decoding="async" class="size-full wp-image-1325 aligncenter" src="https://i0.wp.com/www.kddanalytics.com/wp-content/uploads/2017/12/Rolling-Holdout-Samples.png?resize=561%2C547&#038;ssl=1" alt="Rolling Holdout Samples" width="561" height="547" srcset="https://i0.wp.com/www.kddanalytics.com/wp-content/uploads/2017/12/Rolling-Holdout-Samples.png?w=561&amp;ssl=1 561w, https://i0.wp.com/www.kddanalytics.com/wp-content/uploads/2017/12/Rolling-Holdout-Samples.png?resize=300%2C293&amp;ssl=1 300w" sizes="(max-width: 561px) 100vw, 561px" /></p>
<h3>Summary roll statistics</h3>
<p>We could generate a similar chart for every model we are testing. But it is <strong>easier to work with measures of forecast accuracy and bias</strong>, such as <a href="https://www.kddanalytics.com/practical-time-series-forecasting-holdout-sample/" target="_blank" rel="noopener"><strong>MAPE</strong></a> and <a href="https://www.kddanalytics.com/practical-time-series-forecasting-holdout-sample/" target="_blank" rel="noopener"><strong>MPE</strong></a>.</p>
<p>For each roll forecast, we can calculate the MAPE and MPE and observe how they change across the rolling forecasts.</p>
<p>Are the MAPE and MPE constant? Fluctuate with no apparent trend? Or exhibit some systematic trend?</p>
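<p>Both summary statistics can be computed per roll with a few lines; the MPE sign convention below, (actual - forecast)/actual, is an assumption, as definitions vary:</p>

```python
def mape(actuals, preds):
    """Mean absolute percentage error (%): measures forecast accuracy."""
    return 100.0 * sum(abs((a - p) / a)
                       for a, p in zip(actuals, preds)) / len(actuals)

def mpe(actuals, preds):
    """Mean percentage error (%): measures forecast bias; over- and
    under-forecasts cancel, so a value near zero means little bias."""
    return 100.0 * sum((a - p) / a
                       for a, p in zip(actuals, preds)) / len(actuals)
```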
<p>Doing this for every candidate model we are testing generates charts like this which can quickly show any areas of concern:</p>
<p><img data-recalc-dims="1" loading="lazy" decoding="async" class="size-full wp-image-1326 aligncenter" src="https://i0.wp.com/www.kddanalytics.com/wp-content/uploads/2017/12/Rolling-Holdout-Samples-MAPE.png?resize=604%2C370&#038;ssl=1" alt="" width="604" height="370" srcset="https://i0.wp.com/www.kddanalytics.com/wp-content/uploads/2017/12/Rolling-Holdout-Samples-MAPE.png?w=604&amp;ssl=1 604w, https://i0.wp.com/www.kddanalytics.com/wp-content/uploads/2017/12/Rolling-Holdout-Samples-MAPE.png?resize=300%2C184&amp;ssl=1 300w" sizes="auto, (max-width: 604px) 100vw, 604px" /></p>
<p>In this example, candidate models 18 and 15 may be worth further inspection since their MAPEs are much higher than the rest in a recent roll period (roll 6).</p>
<h3>What else makes a model useful?</h3>
<p>So, with respect to the <strong>guidelines</strong> for whittling down a pool of candidate models we listed in an <strong><a href="https://www.kddanalytics.com/practical-time-series-forecasting-what-makes-a-useful-model/" target="_blank" rel="noopener">earlier article</a></strong>, we can add the following from a rolling holdout analysis:</p>
<p><strong>Stability</strong> – The model’s parameters should retain their statistical significance and not vary too much across the rolling periods; and the model&#8217;s residuals should remain &#8220;<strong>white noise</strong>&#8221; across the rolls;</p>
<p><strong>Consistency of Performance</strong> – The model’s forecast accuracy and bias should not exhibit any strong trends, especially trends in the “wrong” direction (i.e. getting progressively worse) as the more recent time period is approached;</p>
<p><strong>Strong Rolling Holdout Sample Performance</strong> – The model’s forecast accuracy and bias, <strong>averaged across all the rolls</strong>, should be high and low respectively. That is, the <strong>average MAPE should be low</strong> and the <strong>average MPE close to zero</strong>.</p>
<h3>Benefits of Rolling</h3>
<p>The primary benefit of a rolling analysis is that we get to see <strong>how a model performs</strong> forecast-wise <strong>over multiple time spans</strong> equal in length to our forecast horizon, <strong>instead of relying on performance in just one holdout sample</strong>.</p>
<p>A rolling analysis also <strong>addresses the issue of a short holdout sample</strong> (e.g. short forecast horizon) <strong>possibly not being representative of the general character of the time series</strong>.</p>
<p>In addition, a rolling analysis can be used as a check for the “best” model chosen using a single holdout sample. That is, would you pick the same model using the rolling holdout approach? If not, why?</p>
<p>In sum, <strong>a model that is persistently better at holdout sample forecasting over a longer time frame is likely to be more robust.</strong></p>
<p>So, let ‘em roll!</p>
<p><a href="https://www.kddanalytics.com/practical-time-series-forecasting-introduction/" target="_blank" rel="noopener"><strong>Part 1 &#8211; Practical Time Series Forecasting &#8211; Introduction</strong></a></p>
<p><a href="https://www.kddanalytics.com/practical-time-series-forecasting-basics/" target="_blank" rel="noopener"><strong>Part 2 &#8211; Practical Time Series Forecasting &#8211; Some Basics</strong></a></p>
<p><a href="https://www.kddanalytics.com/practical-time-series-forecasting-useful-models/" target="_blank" rel="noopener"><strong>Part 3 &#8211; Practical Time Series Forecasting &#8211; Potentially Useful Models</strong></a></p>
<p><a href="https://www.kddanalytics.com/practical-time-series-forecasting-data-science-taxonomy/" target="_blank" rel="noopener"><strong>Part 4 &#8211; Practical Time Series Forecasting &#8211; Data Science Taxonomy</strong></a></p>
<p><a href="https://www.kddanalytics.com/practical-time-series-forecasting-holdout-sample/" target="_blank" rel="noopener"><strong>Part 5 &#8211; Practical Time Series Forecasting &#8211; Know When to Hold &#8217;em</strong></a></p>
<p><a href="https://www.kddanalytics.com/practical-time-series-forecasting-what-makes-a-useful-model/" target="_blank" rel="noopener"><strong>Part 6 &#8211; Practical Time Series Forecasting &#8211; What Makes a Model Useful?</strong></a></p>
<p><a href="https://www.kddanalytics.com/practical-time-series-forecasting-deterministic-stochastic-trend/" target="_blank" rel="noopener"><strong>Part 7 &#8211; Practical Time Series Forecasting &#8211; To Difference or Not to Difference</strong></a></p>
<p>&nbsp;</p>
<p>The post <a href="https://www.kddanalytics.com/practical-times-series-forecasting-rolling-holdout-sample/">Practical Time Series Forecasting – Know When to Roll ‘em</a> appeared first on <a href="https://www.kddanalytics.com">KDD Analytics</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">1322</post-id>	</item>
		<item>
		<title>Practical Time Series Forecasting – To Difference or Not to Difference</title>
		<link>https://www.kddanalytics.com/practical-time-series-forecasting-deterministic-stochastic-trend-2/</link>
		
		<dc:creator><![CDATA[KDD]]></dc:creator>
		<pubDate>Mon, 22 Jan 2018 01:22:47 +0000</pubDate>
				<category><![CDATA[Data Analytics Methods]]></category>
		<category><![CDATA[Econometrics]]></category>
		<category><![CDATA[Forecasting]]></category>
		<category><![CDATA[Time Series]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[deterministic]]></category>
		<category><![CDATA[forecast error]]></category>
		<category><![CDATA[forecasting]]></category>
		<category><![CDATA[stochastic]]></category>
		<category><![CDATA[time series]]></category>
		<category><![CDATA[trend]]></category>
		<guid isPermaLink="false">http://www.kddanalytics.com/?p=1348</guid>

					<description><![CDATA[<p>“It is sometimes very difficult to decide whether trend is best modeled as deterministic or stochastic, and the decision is an important part of the science – and art – of building forecasting models.” ― Diebold,  Elements of Forecasting, 1998 A time series can have a very strong trend. Visually, we often can see it. Gross&#8230;</p>
<p>The post <a href="https://www.kddanalytics.com/practical-time-series-forecasting-deterministic-stochastic-trend-2/">Practical Time Series Forecasting – To Difference or Not to Difference</a> appeared first on <a href="https://www.kddanalytics.com">KDD Analytics</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>“<em>It is sometimes very difficult to decide whether trend is best modeled as deterministic or stochastic, and the decision is an important part of the science – and art – of building forecasting models</em>.”<br />
― <strong>Diebold,  Elements of Forecasting, 1998</strong></p>
<p><strong>A time series can have a very strong trend.</strong></p>
<p>Visually, we can often see it: gross domestic product (GDP) per person increasing year after year.</p>
<p>When a “<strong>shock</strong>” occurs to the process generating GDP, due to a recession for example, GDP gets <strong>knocked off its long-run growth path</strong>.</p>
<p>But can we expect GDP to bounce back and return to its <strong>original</strong> long-run growth path? Or will it start growing again but along a <strong>different</strong> path?</p>
<p>If the former, then the trend in GDP is said to be “<strong>deterministic</strong>.” And adding TIME to a time series forecasting model is one way to capture this trend.</p>
<p>On the other hand, if GDP starts a new trend after a recession, its trend is said to be “<strong>stochastic</strong>,” driven by random shocks. The standard approach to time series forecast modeling in this case is to “<strong>difference</strong>” the data before modeling.</p>
<p>The challenge as a forecaster is that it is <strong>not always easy to tell if the trend in a time series is deterministic or stochastic</strong>.</p>
<p>And <strong>your answer</strong> and the subsequent modeling choice <strong>will have important implications for the resulting forecast</strong>.</p>
<h3>Deterministic vs. stochastic trends</h3>
<p>Consider the time series shown below.</p>
<p>Suppose you were <strong>tasked with generating a 2-year forecast</strong> starting December 2003 (at the end of the shown time series history).</p>
<p><strong>Is there a deterministic trend in this series</strong>? That is, do you suspect that the series will bounce back to the trend exhibited before January 2001?</p>
<p><strong>Or</strong> has there been a fundamental change to the process generating this series and a new trend will start (i.e. the <strong>trend is stochastic</strong>)?</p>
<p><img data-recalc-dims="1" loading="lazy" decoding="async" class="size-full wp-image-1304 aligncenter" src="https://i0.wp.com/www.kddanalytics.com/wp-content/uploads/2017/12/Deterministic-or-stochastic-trend..png?resize=604%2C371&#038;ssl=1" alt="Deterministic vs stochastic trend" width="604" height="371" srcset="https://i0.wp.com/www.kddanalytics.com/wp-content/uploads/2017/12/Deterministic-or-stochastic-trend..png?w=604&amp;ssl=1 604w, https://i0.wp.com/www.kddanalytics.com/wp-content/uploads/2017/12/Deterministic-or-stochastic-trend..png?resize=300%2C184&amp;ssl=1 300w" sizes="auto, (max-width: 604px) 100vw, 604px" /></p>
<h4>Deterministic trend</h4>
<p>If you opt for a deterministic trend, then your <strong>forecasting model will be in “levels.”</strong> If we are talking about SALES, then it is the value of SALES at any given point in time. So, when we have a deterministic trend, we can model SALES as:</p>
<p style="text-align: center;">SALES<sub>t</sub> = b<sub>0</sub> + b<sub>1</sub>*TIME + ε<sub>t</sub></p>
<p><strong>Of course, we could also</strong> account for <strong>seasonality</strong> by adding seasonal dummy variables, as well as any <strong>hidden dynamics</strong> (cycles) by modeling the error term ε<sub>t</sub> as an ARMA process. But the key characteristic is the inclusion of a TIME variable (May 1993 = 1, June 1993 = 2, etc.) and possibly TIME<sup>2</sup> and/or TIME<sup>3</sup> depending on the series.</p>
<p><em><span style="color: #60786b;">An ARMA process models SALES as being based on past SALES as well as on unobservable shocks. Such models can include two types of components: An autoregressive (AR) component captures the effect of past SALES on current SALES while a moving average (MA) component captures random shocks to the SALES series. </span> </em></p>
<h4>Stochastic trend</h4>
<p>If you opt for a stochastic trend, then the <strong>standard methodology</strong> is to <strong>difference</strong> your data (to remove the trend) and model the differences. This is known as ARIMA modeling. An ARIMA process is like an ARMA process except that the dynamics of the differenced series are modeled (see <a href="http://people.duke.edu/~rnau/411arim.htm"><strong>here</strong></a>).</p>
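<p>The simplest stochastic-trend model, a random walk with drift, shows the mechanics of differencing and then "integrating" the forecast back to levels (a full ARIMA would also fit AR/MA terms to the differenced series; the data are hypothetical):</p>

```python
def difference(series):
    """First differences y_t - y_{t-1}: removes a stochastic trend."""
    return [curr - prev for prev, curr in zip(series, series[1:])]

def drift_forecast(series, horizon):
    """Model the differenced series by its mean (the 'drift'), then
    cumulate the forecasted differences back to levels."""
    diffs = difference(series)
    drift = sum(diffs) / len(diffs)
    last = series[-1]
    return [last + drift * h for h in range(1, horizon + 1)]

drift_forecast([10.0, 11.0, 12.0, 13.0], 3)  # -> [14.0, 15.0, 16.0]
```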
<h4>Forecast differences</h4>
<p>The forecast implications of this choice are shown in the following chart. We estimated a deterministic and a stochastic model and generated a forecast from each starting in December 2003. Specifically,</p>
<p style="text-align: center;"><strong>Deterministic Trend Model:</strong>  Y<sub>t</sub> = b<sub>0</sub> + b<sub>1</sub>*TIME + b<sub>2</sub>*AR(1) + b<sub>3</sub>*AR(2) + b<sub>4</sub>*MA(3) + ε<sub>t</sub></p>
<p style="text-align: center;"><strong>Stochastic Trend Model: </strong> Y<sub>t</sub> &#8211; Y<sub>t-1</sub> = b<sub>0</sub> + b<sub>1</sub>*AR(1) + b<sub>2</sub>*AR(3) + ε<sub>t</sub></p>
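<p>One mechanical detail worth noting: the stochastic model forecasts the differences Y<sub>t</sub> &#8211; Y<sub>t-1</sub>, so those forecasts must be cumulated from the last observed level before they can be plotted in levels next to the deterministic forecast. A sketch with hypothetical numbers:</p>

```python
def undifference(last_level, diff_forecasts):
    """Cumulate forecasts of Y_t - Y_{t-1} from the last observed level
    to recover forecasts of Y itself."""
    levels = []
    current = last_level
    for d in diff_forecasts:
        current += d
        levels.append(current)
    return levels

# Hypothetical: the last observed level is 500 and the differenced model
# forecasts a +2 change in each of the next three periods.
print(undifference(500, [2, 2, 2]))  # [502, 504, 506]
```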
<p>The forecast based on a <strong>deterministic model</strong> is shown by the <strong>orange line</strong> while the one based on the <strong>stochastic model</strong> is shown by the <strong>gray line</strong>. Also shown is what actually happened to the time series.</p>
<p><img data-recalc-dims="1" loading="lazy" decoding="async" class="size-full wp-image-1305 aligncenter" src="https://i0.wp.com/www.kddanalytics.com/wp-content/uploads/2017/12/Deterministic-vs-stochastic-forecast.png?resize=604%2C371&#038;ssl=1" alt="Deterministic vs stochastic forecast" width="604" height="371" srcset="https://i0.wp.com/www.kddanalytics.com/wp-content/uploads/2017/12/Deterministic-vs-stochastic-forecast.png?w=604&amp;ssl=1 604w, https://i0.wp.com/www.kddanalytics.com/wp-content/uploads/2017/12/Deterministic-vs-stochastic-forecast.png?resize=300%2C184&amp;ssl=1 300w" sizes="auto, (max-width: 604px) 100vw, 604px" /></p>
<p>Hindsight is 20/20. In this case, the <strong>stochastic model would have been the better choice</strong>.</p>
<p>It does <strong>appear that some fundamental change occurred in the time series generation process</strong>. That is, the time series did not revert to its pre-2001 historical trend (at least during the forecast horizon).</p>
<p>The stochastic model yields a better forecast error (<a href="https://www.kddanalytics.com/practical-time-series-forecasting-holdout-sample/"><strong>MAPE</strong></a> = 2.0%) than the deterministic model (<a href="https://www.kddanalytics.com/practical-time-series-forecasting-holdout-sample/"><strong>MAPE</strong></a> = 5.6%) over the forecast horizon.</p>
<p>But at the time we had to make the forecast, all we had available were data through December 2003.</p>
<p><strong>So, how do we pick between a deterministic and a stochastic forecasting model?</strong></p>
<h3>Holdout sample</h3>
<p>From a practical perspective, unless we have very strong evidence of a stochastic process, the best course of action is to <strong>use a holdout sample.</strong></p>
<p>Yes, there are techniques for testing whether a time series is “<a href="https://www.otexts.org/fpp/8/1"><strong>stationary</strong></a>” (i.e. has no trend) when visually it is not obvious.</p>
<p>But pragmatically, we are concerned about short-run forecast accuracy. And <strong>one way to compare competing models is by their performance in a holdout sample.</strong></p>
<p>As we discussed in an <a href="https://www.kddanalytics.com/practical-time-series-forecasting-holdout-sample/"><strong>earlier article</strong></a>, <strong>hold out a period of time at least equal to your forecast horizon</strong> from the data used to estimate a model. In this case, at least 2 years; here, January 2001 – December 2003.</p>
<p>Then build your models on data prior to January 2001 and <strong>compare the models’ forecast performance over the holdout sample</strong>.</p>
<p>In this case, such a holdout sample does not include any data from the strong trend period (pre-May 2001). So, likely a stochastic model would have performed better in the holdout sample as well.</p>
<p><strong>But suppose we do this and have two (or more) models that perform equally well in the holdout sample?</strong></p>
<p>We’ll cover this possibility in a subsequent article.</p>
<p><a href="https://www.kddanalytics.com/practical-time-series-forecasting-introduction/" target="_blank" rel="noopener"><strong>Part 1 &#8211; Practical Time Series Forecasting &#8211; Introduction</strong></a></p>
<p><a href="https://www.kddanalytics.com/practical-time-series-forecasting-basics/" target="_blank" rel="noopener"><strong>Part 2 &#8211; Practical Time Series Forecasting &#8211; Some Basics</strong></a></p>
<p><a href="https://www.kddanalytics.com/practical-time-series-forecasting-useful-models/" target="_blank" rel="noopener"><strong>Part 3 &#8211; Practical Time Series Forecasting &#8211; Potentially Useful Models</strong></a></p>
<p><a href="https://www.kddanalytics.com/practical-time-series-forecasting-data-science-taxonomy/" target="_blank" rel="noopener"><strong>Part 4 &#8211; Practical Time Series Forecasting &#8211; Data Science Taxonomy</strong></a></p>
<p><a href="https://www.kddanalytics.com/practical-time-series-forecasting-holdout-sample/" target="_blank" rel="noopener"><strong>Part 5 &#8211; Practical Time Series Forecasting &#8211; Know When to Hold &#8217;em</strong></a></p>
<p><a href="https://www.kddanalytics.com/practical-time-series-forecasting-what-makes-a-useful-model/" target="_blank" rel="noopener"><strong>Part 6 &#8211; Practical Time Series Forecasting &#8211; What Makes a Model Useful?</strong></a></p>
<p>The post <a href="https://www.kddanalytics.com/practical-time-series-forecasting-deterministic-stochastic-trend-2/">Practical Time Series Forecasting – To Difference or Not to Difference</a> appeared first on <a href="https://www.kddanalytics.com">KDD Analytics</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">1348</post-id>	</item>
		<item>
		<title>Practical Time Series Forecasting – Know When to Hold ‘em</title>
		<link>https://www.kddanalytics.com/practical-time-series-forecasting-holdout-sample/</link>
		
		<dc:creator><![CDATA[KDD]]></dc:creator>
		<pubDate>Mon, 08 Jan 2018 01:37:33 +0000</pubDate>
				<category><![CDATA[Data Analytics Methods]]></category>
		<category><![CDATA[Econometrics]]></category>
		<category><![CDATA[Forecasting]]></category>
		<category><![CDATA[Time Series]]></category>
		<category><![CDATA[forecast bias]]></category>
		<category><![CDATA[forecast error]]></category>
		<category><![CDATA[forecasting]]></category>
		<category><![CDATA[holdout sample]]></category>
		<category><![CDATA[methodology]]></category>
		<guid isPermaLink="false">http://www.kddanalytics.com/?p=1263</guid>

					<description><![CDATA[<p>“The only relevant test of the validity of a hypothesis is comparison of prediction with experience.” ― Milton Friedman, economist Holdout samples are a mainstay of predictive analytics. Set aside a portion of your data (say, 30%). Build your candidate models. Then “internally validate” your models using the holdout sample. More sophisticated methods like cross&#8230;</p>
<p>The post <a href="https://www.kddanalytics.com/practical-time-series-forecasting-holdout-sample/">Practical Time Series Forecasting – Know When to Hold ‘em</a> appeared first on <a href="https://www.kddanalytics.com">KDD Analytics</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>“<em>The only relevant test of the validity of a hypothesis is comparison of prediction with experience.</em>”<br />
― <strong>Milton Friedman, economist</strong></p>
<p><strong>Holdout samples</strong> are a mainstay of predictive analytics.</p>
<p>Set aside a portion of your data (say, 30%). Build your <a href="https://www.kddanalytics.com/practical-time-series-forecasting-useful-models/" target="_blank" rel="noopener"><strong>candidate models</strong></a>. Then “<strong>internally validate</strong>” your models using the holdout sample.</p>
<p>More sophisticated methods like <a href="https://en.wikipedia.org/wiki/Cross-validation_(statistics)"><strong>cross validation</strong></a> use multiple holdout samples. But the idea is to <strong>see how well your models predict using data the model has not “seen” before</strong>. Then go back and fine tune to improve the models&#8217; predictive accuracy.</p>
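<p>As a minimal sketch of the basic idea (the 30% figure is the illustrative one from above, and the data are hypothetical; note that for time series the holdout must be the most recent stretch, never a random sample):</p>

```python
def holdout_split(series, holdout_frac=0.30):
    """Chronological split: the most recent holdout_frac of the data is set
    aside for validation; the rest is used for model building.  For time
    series the split must respect time order -- never sample at random."""
    cut = int(round(len(series) * (1 - holdout_frac)))
    return series[:cut], series[cut:]

data = list(range(10))        # ten observations, oldest to newest
train, hold = holdout_split(data)
print(len(train), len(hold))  # 7 3
```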
<h3>Time series holdout samples</h3>
<p>The <strong>truest test of your models</strong> is when they are applied to “new” data. Data from a fresh marketing campaign, a new set of customers, a more recent time period (“<strong>external validation</strong>”).</p>
<p>But you may not have access to such data when building your models. You certainly will not have access to future data.</p>
<p>So, a <strong>holdout sample needs to be crafted from the historical data at your disposal</strong>.</p>
<p>When building predictive models for, say, a marketing campaign or for loan risk scoring, there is usually a large amount of data to work with. So, holding out a sample for testing still leaves lots of data for model building.</p>
<p>However, the situation can be much different when working with time series data.</p>
<p>Depending on the frequency of the series, the <strong>number of data points available to work with can be limited</strong>. 50 years of annual data is just 50 data points. 5 years of monthly data is just 60 data points.</p>
<p>Obviously, the greater the frequency of the data, the greater the number of data points available to work with: 5 years of daily data is 1,825 data points. But these time series sample sizes usually pale against the large customer data sets used to fuel marketing campaigns, which can run into the hundreds of thousands.</p>
<p>So, does this mean that holdout samples shouldn’t be used to test time series forecasting models?</p>
<p><strong>Absolutely not!</strong></p>
<p>You still <strong>need a way to</strong> <strong>whittle down your candidate models</strong>. You just need to be careful in how you select and use your holdout sample.</p>
<h3>Holdout sample length</h3>
<p>How much data should you set aside for a holdout sample? The <strong>rule of thumb</strong> we go by is to choose a holdout sample length that is <strong>at least</strong> (a) <strong>equal to the length of your forecast horizon</strong> or (b) <strong>equal to the length of time needed for your business to make a change</strong>.</p>
<p>Suppose you need a 12-month forecast to support a business plan. And you wish to forecast monthly sales for the 12 months starting November 1, 2017.</p>
<p>Then, your holdout sample should be at least the 12 months pertaining to November 2016 through October 2017. And your estimation sample should be all months prior to November 2016.</p>
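<p>A sketch of that split in plain Python (the dates follow the example above; the values are hypothetical):</p>

```python
from datetime import date

def split_by_date(observations, holdout_start, holdout_end):
    """Split (date, value) pairs into an estimation sample (everything
    before holdout_start) and a holdout sample (holdout_start..holdout_end)."""
    estimation = [(d, v) for d, v in observations if d < holdout_start]
    holdout = [(d, v) for d, v in observations
               if holdout_start <= d <= holdout_end]
    return estimation, holdout

# Hypothetical monthly observations, January 2015 through October 2017.
obs = []
for yy in (2015, 2016, 2017):
    for mm in range(1, 13):
        obs.append((date(yy, mm, 1), 100.0))
obs = obs[:34]  # drop Nov/Dec 2017 so the series ends in October 2017

est, hold = split_by_date(obs, date(2016, 11, 1), date(2017, 10, 1))
print(len(est), len(hold))  # 22 months for estimation, 12 for the holdout
```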
<p><img data-recalc-dims="1" loading="lazy" decoding="async" class="size-full wp-image-1267 aligncenter" src="https://i0.wp.com/www.kddanalytics.com/wp-content/uploads/2017/12/Example-of-Holdout-Sample-1.png?resize=618%2C385&#038;ssl=1" alt="Using a holdout sample for time series forecasting" width="618" height="385" srcset="https://i0.wp.com/www.kddanalytics.com/wp-content/uploads/2017/12/Example-of-Holdout-Sample-1.png?w=618&amp;ssl=1 618w, https://i0.wp.com/www.kddanalytics.com/wp-content/uploads/2017/12/Example-of-Holdout-Sample-1.png?resize=300%2C187&amp;ssl=1 300w" sizes="auto, (max-width: 618px) 100vw, 618px" /></p>
<p>Remember, the <strong>time series methods we are addressing are best used for short-run forecasting</strong>. Most business forecasting needs are for short-run forecasts. The next few months or few years. Not the next 5 to 10 years.</p>
<p>Alternatively, suppose your business only needs 8 months to make a change (maybe it is getting more salespeople on line). Then your holdout sample should be at least 8 months.</p>
<h3>Holdout sample performance</h3>
<p>Once you estimate a model, you apply it to the holdout sample to see how well it predicts. There are several <strong>measures</strong> you can use to gauge <strong>how well your model performs</strong>. We focus on measures of <strong>accuracy</strong> and <strong>bias</strong>.</p>
<h4>To measure forecast accuracy:</h4>
<p><strong>If the business cost of a forecast error is high</strong>, use the <a href="https://en.wikipedia.org/wiki/Mean_squared_error"><strong>Mean Square Error</strong></a> (MSE) or <a href="https://en.wikipedia.org/wiki/Root-mean-square_deviation"><strong>Root Mean Square Error</strong></a> (RMSE), since squaring the errors penalizes large misses most heavily. MSE is the average of (predicted – actual)<sup>2</sup>.</p>
<p><strong>If the business cost of a forecast error is average</strong>, then the <a href="https://en.wikipedia.org/wiki/Mean_absolute_percentage_error"><strong>Mean Absolute Percent Error</strong></a> (MAPE) can be used. MAPE is simply the average of the absolute value of [(predicted – actual)/actual]. However, care should be taken if “0” values are possible as MAPE would be undefined.</p>
<p>See <a href="http://otexts.org/fpp2/accuracy.html" target="_blank" rel="noopener"><strong>here</strong></a> for a discussion of forecast accuracy measures.</p>
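<p>Both accuracy measures are simple to compute. A sketch with hypothetical predicted and actual values:</p>

```python
from math import sqrt

def mse(pred, actual):
    """Mean Square Error: the average of (predicted - actual)^2."""
    return sum((p - a) ** 2 for p, a in zip(pred, actual)) / len(actual)

def rmse(pred, actual):
    """Root Mean Square Error: MSE put back on the scale of the data."""
    return sqrt(mse(pred, actual))

def mape(pred, actual):
    """Mean Absolute Percent Error, in percent.  Undefined when any
    actual value is 0 (the caveat noted above)."""
    return 100 * sum(abs((p - a) / a) for p, a in zip(pred, actual)) / len(actual)

pred = [102.0, 98.0, 105.0]
actual = [100.0, 100.0, 100.0]
print(mse(pred, actual))   # (4 + 4 + 25) / 3 = 11.0
print(mape(pred, actual))  # (2% + 2% + 5%) / 3 = 3.0
```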
<h4>To measure forecast bias:</h4>
<p>The <a href="https://en.wikipedia.org/wiki/Mean_percentage_error"><strong>Mean Percent Error</strong></a> (MPE) will indicate if there is a <strong>systematic bias to the forecast</strong>. If positive, then the model is over predicting; if negative it is underpredicting. And the further from 0, the greater the bias. MPE is the average of [(predicted – actual)/actual].</p>
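<p>MPE is the same calculation as MAPE but without the absolute value, so offsetting errors cancel. A sketch:</p>

```python
def mpe(pred, actual):
    """Mean Percent Error, in percent.  Positive values indicate systematic
    over-prediction, negative values under-prediction; offsetting errors
    cancel, which is exactly what makes MPE a bias measure."""
    return 100 * sum((p - a) / a for p, a in zip(pred, actual)) / len(actual)

print(mpe([102.0, 98.0], [100.0, 100.0]))   # errors offset: 0.0
print(mpe([103.0, 101.0], [100.0, 100.0]))  # persistent over-prediction: 2.0
```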
<p>An alternative measure is <strong>Theil’s measure of systematic error</strong>, the “bias-proportion” of Theil’s <a href="http://www.eviews.com/help/helpintro.html#page/content%2FForecast-Forecast_Basics.html%23" target="_blank" rel="noopener"><strong>inequality coefficient</strong></a>. This measures the extent to which the average values of the forecasted and actual series deviate from each other; the larger the value, the greater the systematic bias.</p>
<p><strong>In general, in the holdout sample, a good performing model will exhibit low overall error (high accuracy) and low systematic bias</strong>.</p>
<p>The chart below shows an example of such a model using a 5-month holdout sample. On average, the model’s error is between 0.28% and 1.85% while exhibiting a very small positive bias of 0.10%.</p>
<p><img data-recalc-dims="1" loading="lazy" decoding="async" class="size-full wp-image-1268 aligncenter" src="https://i0.wp.com/www.kddanalytics.com/wp-content/uploads/2017/12/Example-of-Holdout-Sample-2.png?resize=618%2C385&#038;ssl=1" alt="Example of holdout sample performance" width="618" height="385" srcset="https://i0.wp.com/www.kddanalytics.com/wp-content/uploads/2017/12/Example-of-Holdout-Sample-2.png?w=618&amp;ssl=1 618w, https://i0.wp.com/www.kddanalytics.com/wp-content/uploads/2017/12/Example-of-Holdout-Sample-2.png?resize=300%2C187&amp;ssl=1 300w" sizes="auto, (max-width: 618px) 100vw, 618px" /></p>
<p>Note that <strong>there is no absolute criterion for what constitutes a “low” error</strong> on measures such as MSE.</p>
<p><strong>Measures of forecast error</strong> are to be <strong>judged relative to the context of the forecast</strong> you are making. In some cases, your models may be averaging an error in the 30%’s; in others it could be in the single digits.</p>
<h3>Length of estimation sample</h3>
<p>A related issue is <strong>how much data do you use for model estimation</strong>?</p>
<p><strong>Often, there is no choice</strong>. After setting aside a holdout sample, there may be just a bare minimum amount of data left for modeling (you need more data points than model parameters to be estimated).</p>
<p>In general, the <strong>fewer</strong> the <strong>number of model parameters</strong> and the <strong>less &#8220;noisy&#8221;</strong> the data (i.e. less random), the <strong>fewer the number of data points <a href="http://otexts.org/fpp2/short-ts.html" target="_blank" rel="noopener">needed</a></strong>. Typically, though, <strong>we look for at least 40 data points.</strong></p>
<p>If you have a <strong>high frequency time series</strong> (monthly, daily, hourly) you may have room to consider whether the <strong>choice of the estimation sample length can affect model performance</strong>.</p>
<p><strong>One can argue that the modeling sample should be reflective of the characteristics of the forecast horizon</strong>. That is, the next year, say, is more likely to be like the past several years, not like 20 years ago. So, <strong>limit the estimating sample to more recent years</strong>.</p>
<p>Consider the time series shown below. Clearly the time path of this series has not been consistent. Rather than estimating a model using the entire historical sample, maybe limit it to the more recent period.</p>
<p><img data-recalc-dims="1" loading="lazy" decoding="async" class="size-full wp-image-1206 aligncenter" src="https://i0.wp.com/www.kddanalytics.com/wp-content/uploads/2017/12/Low-variation-time-series.png?resize=615%2C386&#038;ssl=1" alt="Low variation time series" width="615" height="386" srcset="https://i0.wp.com/www.kddanalytics.com/wp-content/uploads/2017/12/Low-variation-time-series.png?w=615&amp;ssl=1 615w, https://i0.wp.com/www.kddanalytics.com/wp-content/uploads/2017/12/Low-variation-time-series.png?resize=300%2C188&amp;ssl=1 300w" sizes="auto, (max-width: 615px) 100vw, 615px" /></p>
<p>The <strong>trade-off</strong> is that there is <strong>less experiential history upon which to base a model</strong>. Maybe the dynamics associated with that turning point in early 2000 and subsequent recovery could prove to be fertile ground for training your model.</p>
<p><strong>But this is a testable proposition!</strong></p>
<p>Because you have already set aside a holdout sample, <strong>you can test whether a model estimated on the full (non-holdout) sample performs better in the holdout sample than one based on a more recent sample.</strong></p>
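<p>Here is a toy illustration of that test. The data are hypothetical (a series whose trend flattens partway through its history), and a simple linear-trend forecaster stands in for whatever candidate model you are actually using:</p>

```python
def trend_forecast(train, horizon):
    """Forecast by extrapolating a simple OLS linear trend fit on train.
    (A stand-in for whatever candidate model is being evaluated.)"""
    n = len(train)
    t_mean = (n + 1) / 2
    y_mean = sum(train) / n
    b1 = (sum((i + 1 - t_mean) * (y - y_mean) for i, y in enumerate(train))
          / sum((i + 1 - t_mean) ** 2 for i in range(n)))
    b0 = y_mean - b1 * t_mean
    return [b0 + b1 * (n + h) for h in range(1, horizon + 1)]

def holdout_mape(train, holdout):
    """MAPE (in percent) of the trend forecast over the holdout sample."""
    pred = trend_forecast(train, len(holdout))
    return 100 * sum(abs((p - a) / a) for p, a in zip(pred, holdout)) / len(holdout)

# Hypothetical series whose trend flattens halfway through its history.
history = [10 + 2 * t for t in range(20)] + [48 + 0.5 * t for t in range(20)]
train, hold = history[:-6], history[-6:]

full_mape = holdout_mape(train, hold)          # estimated on the full sample
recent_mape = holdout_mape(train[-14:], hold)  # estimated on recent data only
# Here the recent window tracks the flatter current trend far better.
```

<p>If the two MAPEs were close, the longer estimation sample would be preferred for its extra history.</p>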
<h3>Data frequency compression</h3>
<p>Another use for a holdout sample is to test for whether changes to the frequency of the time series will improve predictive accuracy.</p>
<p><strong>The frequency of the time series could be reduced to help match a desired forecast horizon</strong>. For example, suppose management wants a 3-year forecast and you are working with monthly SALES. Yes, you could produce a 36-period (month) forecast. But that might be pushing the limits of your methodology, especially if there is not a strong trend.</p>
<p>Alternatively, by converting to a quarterly series, you would lessen the variability in your data and forecast only 12 periods. <strong>This might yield a more accurate forecast</strong>.</p>
<p><strong>But again, this is testable using a holdout sample!</strong></p>
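<p>The monthly-to-quarterly compression itself is a simple aggregation. A sketch with toy numbers:</p>

```python
def to_quarterly(monthly, agg=sum):
    """Compress a monthly series (assumed to start at the beginning of a
    quarter) into a quarterly one by aggregating each block of 3 months;
    any trailing partial quarter is dropped."""
    full = len(monthly) - len(monthly) % 3
    return [agg(monthly[i:i + 3]) for i in range(0, full, 3)]

monthly_sales = [10, 12, 11, 14, 13, 15, 16, 18, 17]
print(to_quarterly(monthly_sales))  # [33, 42, 51]
```

<p>Use a sum for a flow such as SALES; pass a mean as <code>agg</code> for a stock such as inventory.</p>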
<h3>Bottom line</h3>
<p><strong>Holdout samples are a critical component</strong> of a time series forecasting methodology.</p>
<p>In a later article we will address using <strong>multiple</strong> holdout samples…to help guard against basing a model on a single, unrepresentative holdout sample (i.e. we found a great model just because we got lucky!).</p>
<p><a href="https://www.kddanalytics.com/practical-time-series-forecasting-introduction/" target="_blank" rel="noopener"><strong>Part 1 &#8211; Practical Time Series Forecasting &#8211; Introduction</strong></a></p>
<p><a href="https://www.kddanalytics.com/practical-time-series-forecasting-basics/" target="_blank" rel="noopener"><strong>Part 2 &#8211; Practical Time Series Forecasting &#8211; Some Basics</strong></a></p>
<p><a href="https://www.kddanalytics.com/practical-time-series-forecasting-useful-models/" target="_blank" rel="noopener"><strong>Part 3 &#8211; Practical Time Series Forecasting &#8211; Potentially Useful Models</strong></a></p>
<p><a href="https://www.kddanalytics.com/practical-time-series-forecasting-data-science-taxonomy/" target="_blank" rel="noopener"><strong>Part 4 &#8211; Practical Time Series Forecasting &#8211; Data Science Taxonomy</strong></a></p>
<p>The post <a href="https://www.kddanalytics.com/practical-time-series-forecasting-holdout-sample/">Practical Time Series Forecasting – Know When to Hold ‘em</a> appeared first on <a href="https://www.kddanalytics.com">KDD Analytics</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">1263</post-id>	</item>
		<item>
		<title>Practical Time Series Forecasting – To Difference or Not to Difference</title>
		<link>https://www.kddanalytics.com/practical-time-series-forecasting-deterministic-stochastic-trend/</link>
		
		<dc:creator><![CDATA[KDD]]></dc:creator>
		<pubDate>Sun, 22 Jan 2017 10:14:00 +0000</pubDate>
				<category><![CDATA[Data Analytics Methods]]></category>
		<category><![CDATA[Econometrics]]></category>
		<category><![CDATA[Forecasting]]></category>
		<category><![CDATA[Time Series]]></category>
		<category><![CDATA[ARIMA]]></category>
		<category><![CDATA[deterministic]]></category>
		<category><![CDATA[differencing]]></category>
		<category><![CDATA[forecast error]]></category>
		<category><![CDATA[stochastic]]></category>
		<category><![CDATA[time series]]></category>
		<category><![CDATA[trend]]></category>
		<guid isPermaLink="false">http://www.kddanalytics.com/?p=1301</guid>

					<description><![CDATA[<p>“It is sometimes very difficult to decide whether trend is best modeled as deterministic or stochastic, and the decision is an important part of the science – and art – of building forecasting models.” ― Diebold,  Elements of Forecasting, 1998 &#160; A times series can have a very strong trend. Visually, we often can see it.&#8230;</p>
<p>The post <a href="https://www.kddanalytics.com/practical-time-series-forecasting-deterministic-stochastic-trend/">Practical Time Series Forecasting – To Difference or Not to Difference</a> appeared first on <a href="https://www.kddanalytics.com">KDD Analytics</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>“<em>It is sometimes very difficult to decide whether trend is best modeled as deterministic or stochastic, and the decision is an important part of the science – and art – of building forecasting models</em>.”<br />
― <strong>Diebold,  Elements of Forecasting, 1998</strong></p>
<p><strong>A time series can have a very strong trend.</strong></p>
<p>Visually, we often can see it. Gross domestic product (GDP) per person increasing year after year.</p>
<p>When a “<strong>shock</strong>” occurs to the process generating GDP, due to a recession for example, GDP gets <strong>knocked off its long-run growth path</strong>.</p>
<p>But can we expect GDP to bounce back and return to its <strong>original</strong> long-run growth path? Or will it start growing again but along a <strong>different</strong> path?</p>
<p>If the former, then the trend in GDP is said to be “<strong>deterministic</strong>.” And adding TIME to a time series forecasting model is one way to capture this trend.</p>
<p>On the other hand, if GDP starts a new trend after a recession, its trend is said to be “<strong>stochastic</strong>,” driven by random shocks. The standard approach to time series forecast modeling in this case is to “<strong>difference</strong>” the data before modeling.</p>
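<p>A toy simulation (not based on any series in this article) makes the distinction visible: under a deterministic trend, shocks are transient and the series stays near its fixed line; under a stochastic trend, every shock permanently shifts the path:</p>

```python
import random

def simulate(n=120, seed=42):
    """Toy illustration: a deterministic-trend series (fixed line plus
    transient noise) vs. a stochastic-trend series (random walk with
    drift, where every shock permanently shifts the path)."""
    rng = random.Random(seed)
    shocks = [rng.gauss(0, 1) for _ in range(n)]
    deterministic = [10 + 0.5 * t + shocks[t] for t in range(n)]
    stochastic, level = [], 10.0
    for t in range(n):
        level += 0.5 + shocks[t]  # drift plus a permanent shock
        stochastic.append(level)
    return deterministic, stochastic

det, sto = simulate()
# det never strays far from the line 10 + 0.5*t; sto can wander
# arbitrarily far from any fixed line and never "reverts" to one.
```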
<p>The challenge as a forecaster is that it is <strong>not always easy to tell if the trend in a time series is deterministic or stochastic</strong>.</p>
<p>And <strong>your answer</strong> and the subsequent modeling choice <strong>will have important implications for the resulting forecast</strong>.</p>
<h3>Deterministic vs. stochastic trends</h3>
<p>Consider the time series shown below.</p>
<p>Suppose you were <strong>tasked with generating a 2-year forecast</strong> starting December 2003 (at the end of the shown time series history).</p>
<p><strong>Is there a deterministic trend in this series</strong>? That is, do you suspect that the series will bounce back to the trend exhibited before January 2001?</p>
<p><strong>Or</strong> has there been a fundamental change to the process generating this series and a new trend will start (i.e. the <strong>trend is stochastic</strong>)?</p>
<p><img data-recalc-dims="1" loading="lazy" decoding="async" class="size-full wp-image-1304 aligncenter" src="https://i0.wp.com/www.kddanalytics.com/wp-content/uploads/2017/12/Deterministic-or-stochastic-trend..png?resize=604%2C371&#038;ssl=1" alt="Deterministic vs stochastic trend" width="604" height="371" srcset="https://i0.wp.com/www.kddanalytics.com/wp-content/uploads/2017/12/Deterministic-or-stochastic-trend..png?w=604&amp;ssl=1 604w, https://i0.wp.com/www.kddanalytics.com/wp-content/uploads/2017/12/Deterministic-or-stochastic-trend..png?resize=300%2C184&amp;ssl=1 300w" sizes="auto, (max-width: 604px) 100vw, 604px" /></p>
<h4>Deterministic trend</h4>
<p>If you opt for a deterministic trend, then your <strong>forecasting model will be in “levels.”</strong> If we are talking about SALES, then it is the value of SALES at any given point in time. So, when we have a deterministic trend, we can model SALES as:</p>
<p style="text-align: center;">SALES<sub>t</sub> = b<sub>0</sub> + b<sub>1</sub>*TIME + u<sub>t</sub></p>
<p><strong>Of course, we could</strong> <strong>also</strong> account for <strong>seasonality</strong> by adding seasonal dummy variables as well as any <strong>hidden dynamics</strong> (cycles) by modeling the error term u<sub>t</sub> as an ARMA process. But the key characteristic is the inclusion of a TIME variable (May 1993 = 1, June 1993 =2, etc.) and possibly TIME<sup>2</sup> and/or TIME<sup>3</sup> depending on the series.</p>
<p><span style="color: #60786b;"><em>An ARMA process models SALES as being based on past SALES as well as on unobservable shocks. Such models can include two types of components: An autoregressive (AR) component captures the effect of past SALES on current SALES while a moving average (MA) component captures random shocks to the SALES series.  </em></span></p>
<h4>Stochastic trend</h4>
<p>If you opt for a stochastic trend, then the <strong>standard methodology</strong> is to <strong>difference</strong> your data (to remove the trend) and model the differences. This is known as ARIMA modeling. An ARIMA process is like an ARMA process except that the dynamics of the differenced series are modeled (see <strong><a href="http://people.duke.edu/~rnau/411arim.htm" target="_blank" rel="noopener">here</a></strong>).</p>
<h4>Forecast differences</h4>
<p>The forecast implications of this choice are shown in the following chart. We estimated a deterministic and a stochastic model and generated a forecast from each starting in December 2003. Specifically,</p>
<p style="text-align: center;"><strong>Deterministic Trend Model:</strong>  Y<sub>t</sub> = b<sub>0</sub> + b<sub>1</sub>*TIME + b<sub>2</sub>*AR(1) + b<sub>3</sub>*AR(2) + b<sub>4</sub>*MA(3) + u<sub>t</sub></p>
<p style="text-align: center;"><strong>Stochastic Trend Model: </strong> Y<sub>t</sub> &#8211; Y<sub>t-1</sub> = b<sub>0</sub> + b<sub>1</sub>*AR(1) + b<sub>2</sub>*AR(3) + u<sub>t</sub></p>
<p>The forecast based on a <strong>deterministic model</strong> is shown by the <strong>orange line</strong> while the one based on the <strong>stochastic model</strong> is shown by the <strong>gray line</strong>. Also shown is what actually happened to the time series.</p>
<p><img data-recalc-dims="1" loading="lazy" decoding="async" class="size-full wp-image-1305 aligncenter" src="https://i0.wp.com/www.kddanalytics.com/wp-content/uploads/2017/12/Deterministic-vs-stochastic-forecast.png?resize=604%2C371&#038;ssl=1" alt="Deterministic vs stochastic forecast" width="604" height="371" srcset="https://i0.wp.com/www.kddanalytics.com/wp-content/uploads/2017/12/Deterministic-vs-stochastic-forecast.png?w=604&amp;ssl=1 604w, https://i0.wp.com/www.kddanalytics.com/wp-content/uploads/2017/12/Deterministic-vs-stochastic-forecast.png?resize=300%2C184&amp;ssl=1 300w" sizes="auto, (max-width: 604px) 100vw, 604px" /></p>
<p>Hindsight is 20/20. In this case, the <strong>stochastic model would have been the better choice</strong>.</p>
<p>It does <strong>appear that some fundamental change occurred in the time series generation process</strong>. That is, the time series did not revert to its pre-2001 historical trend (at least during the forecast horizon).</p>
<p>The stochastic model yields a better forecast error (<a href="https://www.kddanalytics.com/practical-time-series-forecasting-holdout-sample/" target="_blank" rel="noopener"><strong>MAPE</strong></a> = 2.0%) than the deterministic model (<a href="https://www.kddanalytics.com/practical-time-series-forecasting-holdout-sample/" target="_blank" rel="noopener"><strong>MAPE</strong></a> = 5.6%) over the forecast horizon.</p>
<p>But at the time we had to make the forecast, all we had available were data through December 2003.</p>
<p><strong>So, how do we pick between a deterministic and a stochastic forecasting model?</strong></p>
<h3>Holdout sample</h3>
<p>From a practical perspective, unless we have very strong evidence of a stochastic process, the best course of action is to <strong>use a holdout sample.</strong></p>
<p>Yes, there are techniques for testing whether a time series is “<a href="https://www.otexts.org/fpp/8/1" target="_blank" rel="noopener"><strong>stationary</strong></a>” (i.e. has no trend) when visually it is not obvious.</p>
<p>But pragmatically, we are concerned about short-run forecast accuracy. And <strong>one way to compare competing models is by their performance in a holdout sample.</strong></p>
<p>As we discussed in an <strong><a href="https://www.kddanalytics.com/practical-time-series-forecasting-holdout-sample/" target="_blank" rel="noopener">earlier article</a></strong>, <strong>hold out a period of time at least equal to your forecast horizon</strong> from the data used to estimate a model. In this case, at least 2 years; here, January 2001 – December 2003.</p>
<p>Then build your models on data prior to January 2001 and <strong>compare the models’ forecast performance over the holdout sample</strong>.</p>
<p>In this case, such a holdout sample does not include any data from the strong trend period (pre-May 2001). So, likely a stochastic model would have performed better in the holdout sample as well.</p>
<p><strong>But suppose we do this and have two (or more) models that perform equally well in the holdout sample?</strong></p>
<p>We’ll cover this possibility in a subsequent article.</p>
<p><a href="https://www.kddanalytics.com/practical-time-series-forecasting-introduction/" target="_blank" rel="noopener"><strong>Part 1 &#8211; Practical Time Series Forecasting &#8211; Introduction</strong></a></p>
<p><a href="https://www.kddanalytics.com/practical-time-series-forecasting-some-basics/" target="_blank" rel="noopener"><strong>Part 2 &#8211; Practical Time Series Forecasting &#8211; Some Basics</strong></a></p>
<p><a href="https://www.kddanalytics.com/practical-time-series-forecasting-potentially-useful-models/" target="_blank" rel="noopener"><strong>Part 3 &#8211; Practical Time Series Forecasting &#8211; Potentially Useful Models</strong></a></p>
<p><a href="https://www.kddanalytics.com/practical-time-series-forecasting-data-science-taxonomy/" target="_blank" rel="noopener"><strong>Part 4 &#8211; Practical Time Series Forecasting &#8211; Data Science Taxonomy</strong></a></p>
<p><a href="https://www.kddanalytics.com/practical-time-series-forecasting-holdout-sample/" target="_blank" rel="noopener"><strong>Part 5 &#8211; Practical Time Series Forecasting &#8211; Know When to Hold &#8217;em</strong></a></p>
<p><a href="https://www.kddanalytics.com/practical-time-series-forecasting-what-makes-a-model-useful/" target="_blank" rel="noopener"><strong>Part 6 &#8211; Practical Time Series Forecasting &#8211; What Makes a Model Useful?</strong></a></p>
<p>The post <a href="https://www.kddanalytics.com/practical-time-series-forecasting-deterministic-stochastic-trend/">Practical Time Series Forecasting – To Difference or Not to Difference</a> appeared first on <a href="https://www.kddanalytics.com">KDD Analytics</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">1301</post-id>	</item>
	</channel>
</rss>
