Tracking Error for Significance

Have you ever heard or made statements like these:

  • “Our forecast error is down for the third month in a row, showing that our new stat models are working.” 
  • “I want to recognize Susan for having the lowest forecast error last month. Congratulations, Susan!”
  • “Forecast error went up for two months in a row; we need to retune the stat models.”

If so, you may want to rethink your credentials as a demand planner.

Demand planners specialize in using statistics to generate forecasts.  But we often overlook applying statistics to distinguish between common variation and assignable causes in the very metrics we use to measure accuracy.  This is an area where Demand Planners can learn something from Lean Six Sigma practitioners and start using Process Behavior Charts to identify when a change in forecast accuracy is significant.

Process Behavior Charts

Process Behavior Charts (or control charts) were first developed by Walter Shewhart to monitor and reduce variation in manufacturing processes.  Upper and Lower Control Limits are calculated from the variability in historical data for the metric being monitored.  If results stay between these limits, the process is “in control”, meaning it is subject only to innate common variation.  Any point falling outside the control limits indicates a high likelihood of some unusual variation that should be investigated.

Thirty years ago, I taught shop floor operators how to use control charts to monitor product variables such as weight and dimensions.  By sampling several times an hour and graphing the results on an Xbar-R chart, they could see when the process needed to be adjusted and when it should be left alone.

Before we started using the control charts, operators were trying to adjust the process to keep the weight at target.  Whenever a sample deviated from the target, they adjusted the process to move it back.  By not recognizing the inherent variability in the process, they were over-controlling it. Rather than helping keep the process stable, their frequent adjustments were actually introducing more variability.     

Frequent adjustments were actually introducing more variability

I have seen Demand Planners make similar mistakes by overreacting to forecast errors:

  • One company updated stat models whenever there were three consecutive months of oversells or undersells.
  • Another always recognized the forecaster that was “closest to the pin” every month.
  • Another always reviewed the top twenty forecast errors from the prior month.

But none of these signals are necessarily statistically significant.  Often they are simply the result of common variation inherent in demand from one month to the next.

Tracking Forecast Accuracy

An XmR control chart is a great tool for determining when forecast error has changed in a statistically significant way (see Figure 1).  Any value outside the control limits indicates that investigation is warranted to determine why the process is behaving unpredictably.  Perhaps the sales history needs to be adjusted for an unusual event that impacted demand.  Perhaps the statistical models need to be re-fit to the new history.

Figure 1 – XmR Chart Example
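
To make the arithmetic concrete, here is a minimal sketch in Python of the XmR calculation, assuming a short list of made-up weekly WMAPE values.  The limits use 2.66, the standard scaling constant for an individuals chart (3 divided by the d2 bias-correction factor for moving ranges of two).

    # Hypothetical weekly WMAPE values for one item.
    errors = [0.22, 0.18, 0.25, 0.21, 0.19, 0.27, 0.20, 0.23, 0.24, 0.41]

    # Center line: the mean of the individual values.
    mean_x = sum(errors) / len(errors)

    # Moving ranges: absolute differences between consecutive values.
    moving_ranges = [abs(b - a) for a, b in zip(errors, errors[1:])]
    mean_mr = sum(moving_ranges) / len(moving_ranges)

    # Natural process limits for the X chart.
    upper_limit = mean_x + 2.66 * mean_mr
    lower_limit = mean_x - 2.66 * mean_mr

    # Rule one: any individual value outside the limits is a signal.
    signals = [week for week, x in enumerate(errors)
               if x > upper_limit or x < lower_limit]
    print(f"Limits: [{lower_limit:.3f}, {upper_limit:.3f}]; signals at weeks {signals}")

With these made-up numbers the last value (0.41) falls above the upper limit, so week 9 would be flagged for investigation.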

I use XmR charts to track Weighted Mean Absolute Percent Error (WMAPE).  That way seasonality and trend are less likely to distort the tracking.  However, for an item with no seasonality and no trend, it might make sense to chart the forecast error quantity directly.
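
Definitions of WMAPE vary; the sketch below uses a common form that weights each absolute error by actual demand, which reduces to the sum of absolute errors divided by the sum of actuals.  The numbers are hypothetical.

    # Aligned actuals and forecasts for one item over six periods.
    actuals   = [120, 95, 143, 110, 88, 160]
    forecasts = [110, 100, 150, 100, 95, 140]

    # Volume-weighted MAPE: total absolute error over total actual demand.
    wmape = sum(abs(a - f) for a, f in zip(actuals, forecasts)) / sum(actuals)
    print(f"WMAPE = {wmape:.1%}")   # -> WMAPE = 8.2%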

“But wait,” you ask, “won’t I have to wait a long time before a control chart indicates something changed?  If the forecast error has gotten significantly worse, we do not want to wait too long before correcting the process.”

That’s not necessarily true.  If you are tracking forecast error at a weekly level, the control chart will detect a large shift in forecast accuracy within 10 weeks at least 39 times out of 40.[i]  That is less than three months, so it is faster than waiting for three consecutive months of oversells or undersells.
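
You can check that kind of claim yourself with a quick simulation.  The sketch below is my own illustration, not the cited article’s method: it assumes a “large shift” means the mean of the weekly error jumps by three standard deviations, with limits set from 20 in-control baseline weeks.

    import random

    def detected_within(weeks=10, shift=3.0, baseline=20):
        # Build limits from in-control history (mean 0, standard deviation 1).
        history = [random.gauss(0, 1) for _ in range(baseline)]
        mean_x = sum(history) / baseline
        moving_ranges = [abs(b - a) for a, b in zip(history, history[1:])]
        mean_mr = sum(moving_ranges) / len(moving_ranges)
        upper = mean_x + 2.66 * mean_mr
        lower = mean_x - 2.66 * mean_mr
        # Did any of the next `weeks` shifted values fall outside the limits?
        return any(not (lower < random.gauss(shift, 1) < upper)
                   for _ in range(weeks))

    trials = 10_000
    hits = sum(detected_within() for _ in range(trials))
    print(f"Detected within 10 weeks in {hits / trials:.1%} of runs")

Under these assumptions the detection rate typically comes out well above 39 times out of 40.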

To catch smaller shifts in forecast accuracy, you might consider adding extra detection rules, such as watching for eight consecutive values all on the same side of the central line (Western Electric rule #4).  Adding this rule increases the control chart’s sensitivity to smaller, sustained shifts.
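
The runs test is easy to automate.  Here is a minimal sketch; the convention that a point exactly on the center line resets the run is one of several in use.

    def runs_signal(values, center, run_length=8):
        # Flag each index that completes a run of `run_length` consecutive
        # points on the same side of the center line.
        signals, run, side = [], 0, 0
        for i, x in enumerate(values):
            s = 1 if x > center else (-1 if x < center else 0)
            run = run + 1 if (s != 0 and s == side) else (1 if s != 0 else 0)
            side = s
            if run >= run_length:
                signals.append(i)
        return signals

    # Hypothetical weekly errors drifting above a center line of 0.24:
    weekly = [0.25, 0.26, 0.25, 0.27, 0.26, 0.28, 0.25, 0.26, 0.27]
    print(runs_signal(weekly, center=0.24))   # -> [7, 8]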

Conclusion

Just as we Demand Planners apply statistics to generate forecasts, we should also apply statistics to determine when our results represent a significant deviation from historical patterns.  Start using XmR control charts to track MAPE and bias, and you’ll be saying things like this:

  • “Forecast error may be up for the past two months, but weekly variability is within historical range so there is no need to review the stat models.”

Aside from forecast accuracy, there are quite a few other metrics used in supply planning and demand planning that would benefit from using a control chart to identify when an investigation is warranted:

  • Schedule attainment
  • Schedule fulfillment
  • Order fill rate
  • On-time-delivery
  • Or pretty much any metric you might use

[i] “When Should We Use Extra Detection Rules?”, Quality Digest, https://www.qualitydigest.com/inside/statistics-column/when-should-we-use-extra-detection-rules-100917.html
