Statistical Forecast Errors

Posted by Suresh Sellaiah on 14-Sep-2017 08:07:28

Forecast Accuracy measures how well the forecast matches the actual sales and is usually expressed in percentage terms as:

Forecast Accuracy = 1 – Forecast Error

Forecast Error measures the deviation between the forecast and the actual demand/sales.

As Evan Esar’s saying goes "An economist is an expert who will know tomorrow why the things he predicted yesterday didn't happen today".

So, to be able to react quickly, it is very important to understand why this deviation has occurred. As there are various Error Measures available in software tools such as SAP APO or SAP IBP, it is vital to understand which error measure is applicable under what circumstances.

Forecast Error measures are widely used in the following two scenarios:

  1. To evaluate the statistical forecast model fit by allowing several forecast models to compete against each other. This estimation is based on the ex-post forecast produced by forecast models against the actual sales
  2. To evaluate the forecast performance by measuring the error between the forecasted sales and the actual sales, which most planners do in real time to report in their dashboards

Forecast Error measures can be classified into two groups:

  1. Percentage errors (or relative errors) – These are scale-independent (assuming the scale is based on quantity), expressing the size of the error as a percentage, which makes it easy to compare forecast errors between different data sets/series. Examples: Mean Percentage Error (MPE), Mean Absolute Percentage Error (MAPE), Weighted Mean Absolute Percentage Error (WMAPE) and Mean Absolute Scaled Error (MASE).
  2. Scale-dependent errors (or absolute errors) – The size of the error is measured in units, which makes it difficult to compare forecast performance between different data sets/series. Examples: Mean Absolute Deviation (MAD), Mean Squared Error (MSE), Root Mean Square Error (RMSE), Error Total (ET) or Total Absolute Error (TAE).

In this blog, I will focus on scenario 1, along with some tips and tricks.

If you are looking to improve statistical forecast accuracy, start with the well-known approach of attacking the top 20 contributing (£s) SKUs and identifying whether there is any scope for improvement. If not much can be done, move on to the next 20 SKUs.

Understanding the Misunderstanding

Often, planners can't understand why tools sometimes produce strange statistical forecast results! This is primarily down to inefficient use of the tool, or at times a lack of understanding of how the tool works in the first place.

For instance, in SAP APO, the background jobs may be set up to use inappropriate models, or, as I have seen at some clients, the job runs the automatic model selection procedure every month. As most of these jobs were set up during the implementation project, many planners fail to review these assignments periodically.

The automatic model selection procedure should only be used to identify the pattern of the data and to segregate SKUs based on these patterns. The results produced depend on the error measure chosen (as highlighted in Figure 1) during the automatic model selection procedure (or Constant Model with Alpha Adaptation in SAP APO). An incorrect error measure gives unexpected results (e.g. even though the data exhibits seasonality, automatic model selection may choose a constant model).


Figure 1: Error Measure for Automatic Model Selection 2

Hence a sensible approach is to segment the data by variability and volume, and then use a forecasting approach with the appropriate error measure (see the sketch below). Some prefer custom enhancements that measure errors such as symmetric MAPE (sMAPE), weighted MAPE (wMAPE) or Mean Absolute Scaled Error (MASE), or even their own error measure, for all assortments.
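To make this concrete, below is a minimal Python sketch of such a variability/volume segmentation. The cut-off values and labels are illustrative assumptions of mine, not SAP functionality; in practice they are tuned per business and assortment.

```python
import statistics

def classify_sku(sales, volume_cutoff=500, cv_cutoff=0.5):
    """Segment a SKU by total volume and demand variability (CV).

    The cut-offs and labels here are illustrative assumptions only.
    """
    mean = statistics.mean(sales)
    cv = statistics.stdev(sales) / mean if mean else float("inf")
    volume = "high" if sum(sales) >= volume_cutoff else "low"
    variability = "stable" if cv <= cv_cutoff else "volatile"
    return f"{volume}-volume / {variability}"

print(classify_sku([120, 110, 130, 125, 115]))  # high-volume / stable
print(classify_sku([5, 0, 40, 0, 3]))           # low-volume / volatile
```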

Now let me walk you through these error measures and explain where to use them and where not to.

Forecast Errors Defined

Throughout, $A_t$ denotes the actual sales and $F_t$ the forecast in period $t$, over $n$ periods.

| Error Measure | Formula | Remarks |
| --- | --- | --- |
| Mean Percentage Error (MPE) | $\mathrm{MPE} = \frac{100}{n}\sum_{t=1}^{n}\frac{A_t - F_t}{A_t}$ | If Actuals = 0 but the forecast is non-zero, SAP APO/SPP sets Actuals = 1 to avoid a divide-by-zero error, while SAP IBP caps the error at 100% for that period. |
| Mean Absolute Percentage Error (MAPE) | $\mathrm{MAPE} = \frac{100}{n}\sum_{t=1}^{n}\frac{\lvert A_t - F_t\rvert}{A_t}$ | If Actuals = 0 but the forecast is non-zero, SAP APO/SPP sets Actuals = 1 to avoid a divide-by-zero error, while SAP IBP caps the error at 100% for that period. |
| Mean Absolute Deviation (MAD) or Mean Absolute Error (MAE) | $\mathrm{MAE} = \frac{1}{n}\sum_{t=1}^{n}\lvert A_t - F_t\rvert$ | This is the formula used in SAP IBP. |
| Smoothed Mean Absolute Deviation (SMAD) | $\mathrm{MAD}_t = \delta\,\lvert A_t - F_t\rvert + (1-\delta)\,\mathrm{MAD}_{t-1}$ | This is the formula used as MAD in SAP APO and SPP. $\delta$ is a smoothing factor specified in the forecast profile (otherwise default = 0.3). |
| Mean Squared Error (MSE) | $\mathrm{MSE} = \frac{1}{n}\sum_{t=1}^{n}(A_t - F_t)^2$ | Timing alignment or mismatch can lead to a very high number. |
| Root Mean Squared Error (RMSE) | $\mathrm{RMSE} = \sqrt{\mathrm{MSE}}$ | Reduced magnitude of MSE. |
| Error Total (ET) | $\mathrm{ET} = \sum_{t=1}^{n}(A_t - F_t)$ | Is it really used? |
| Total Absolute Error (TAE) | $\mathrm{TAE} = \sum_{t=1}^{n}\lvert A_t - F_t\rvert$ | Not available for statistical forecast model fit. |
| Weighted MAPE (wMAPE) | $\mathrm{wMAPE} = 100\,\frac{\sum_{t=1}^{n}\lvert A_t - F_t\rvert}{\sum_{t=1}^{n}A_t}$ | Volume-weighted MAPE; available in SAP IBP, but not in SAP APO or SPP. |
| Mean Absolute Scaled Error (MASE) | $\mathrm{MASE} = \dfrac{\frac{1}{n}\sum_{t=1}^{n}\lvert A_t - F_t\rvert}{\frac{1}{n-m}\sum_{t=m+1}^{n}\lvert A_t - A_{t-m}\rvert}$ | $m$ denotes the number of periods in a season; $m = 1$ for non-seasonal data. Available in SAP IBP, but not in SAP APO or SPP. |

Table 1: List of various Forecast Error Measures
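To make the definitions in Table 1 concrete, here is a minimal Python sketch of these measures. This is my own plain implementation for illustration, not SAP's internal code; the $A_t - F_t$ sign convention follows the table.

```python
import math

def mpe(actuals, forecasts):
    """Mean Percentage Error: positive and negative errors can net off."""
    return 100 / len(actuals) * sum((a - f) / a for a, f in zip(actuals, forecasts))

def mape(actuals, forecasts):
    """Mean Absolute Percentage Error: undefined when any actual is 0."""
    return 100 / len(actuals) * sum(abs(a - f) / a for a, f in zip(actuals, forecasts))

def mae(actuals, forecasts):
    """Mean Absolute Error / MAD as a simple average (the SAP IBP flavour)."""
    return sum(abs(a - f) for a, f in zip(actuals, forecasts)) / len(actuals)

def smad(actuals, forecasts, delta=0.3):
    """Smoothed MAD (the SAP APO/SPP flavour); initial MAD of 0 is a simplification."""
    mad = 0.0
    for a, f in zip(actuals, forecasts):
        mad = delta * abs(a - f) + (1 - delta) * mad
    return mad

def mse(actuals, forecasts):
    """Mean Squared Error: squaring makes large deviations dominate."""
    return sum((a - f) ** 2 for a, f in zip(actuals, forecasts)) / len(actuals)

def rmse(actuals, forecasts):
    """Root Mean Squared Error: MSE brought back to the scale of the data."""
    return math.sqrt(mse(actuals, forecasts))

def et(actuals, forecasts):
    """Error Total: a signed sum, so errors may net off to zero."""
    return sum(a - f for a, f in zip(actuals, forecasts))

def tae(actuals, forecasts):
    """Total Absolute Error: like ET, but without the netting-off."""
    return sum(abs(a - f) for a, f in zip(actuals, forecasts))

def wmape(actuals, forecasts):
    """Volume-weighted MAPE: a single division overall, so zero actuals in individual periods are harmless."""
    return 100 * tae(actuals, forecasts) / sum(actuals)

def mase(actuals, forecasts, m=1):
    """Mean Absolute Scaled Error; m = periods per season (1 = non-seasonal)."""
    n = len(actuals)
    naive_mae = sum(abs(actuals[t] - actuals[t - m]) for t in range(m, n)) / (n - m)
    return mae(actuals, forecasts) / naive_mae
```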

Forecast Errors Explained

Mean Percentage Error (MPE): Because positive errors are netted off against negative errors, the planner can be left unable to pin down the issue. This symptom is commonly observed when a timing or periodicity mismatch exists between the forecast and the sales, e.g. sales predicted (disaggregated) in weeks are inaccurate compared with the sales predicted in months.
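A two-period toy example, using the mpe and mape functions from the sketch above, shows how the netting hides the error:

```python
actuals   = [100, 100]
forecasts = [ 50, 150]  # one under-forecast, one over-forecast

print(mpe(actuals, forecasts))   #  0.0 -> the errors net off; looks perfect
print(mape(actuals, forecasts))  # 50.0 -> the absolute errors expose the problem
```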

Mean Absolute Percentage Error (MAPE): is not advisable for:
- Periods where a forecast exists but nothing was sold, which drives MAPE to infinity
- Periods where the forecast was huge but very little was sold, which inflates MAPE enormously
- Situations where the direction of bias matters: it penalizes over-forecasting (negative errors in the formula above) more heavily, since over-forecast errors are unbounded while under-forecast errors are capped at 100%
- Intermittent data or high seasonal fluctuations

In SAP APO and SPP, we therefore see errors of more than 100% (because Actuals = 0 is replaced by 1 to avoid an infinite value), which doesn't make much sense.
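To see this concretely, here is the MAPE behaviour on a made-up intermittent series, using the mape function from the sketch above together with the SAP APO/SPP-style substitution of Actuals = 1:

```python
actuals   = [0, 2, 0, 3]  # intermittent demand
forecasts = [2, 2, 2, 2]

# Plain MAPE divides by zero in the zero-demand periods, so emulate the
# SAP APO/SPP workaround of substituting Actuals = 1:
patched = [a if a else 1 for a in actuals]
print(mape(patched, forecasts))  # ~58.3 -> dominated by the zero-demand periods
```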

Weighted Mean Absolute Percentage Error (WMAPE): This is also called the MAD/Mean ratio. It overcomes the infinite-error issue of MAPE, but the other disadvantages noted for MAPE are still not fully addressed by wMAPE.

Mean Absolute Scaled Error (MASE): The numerator is the Mean Absolute Error (MAE) of the forecast, and the denominator is the MAE of the one-step naïve method, which uses the actual sales from the previous period (or, for seasonal data, from the same period one season earlier) as the forecast. It is well suited to both intermittent and regular demand series. MASE never yields infinite or undefined values. It penalises positive and negative forecast errors equally, and penalises errors on large and small forecasts equally (which MAPE does not).
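On the same made-up intermittent series as above, the mase function from the earlier sketch stays finite and interpretable (values below 1 mean the forecast beats the naïve method):

```python
actuals   = [0, 2, 0, 3]
forecasts = [2, 2, 2, 2]

print(mase(actuals, forecasts, m=1))  # ~0.54 -> finite, and better than naïve
```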

Mean Absolute Deviation (MAD) or Mean Absolute Error (MAE): Because it is simple to compute and to compare within a single data series, it is also widely used, especially for forecast model fit selection. MAD should be used carefully when dealing with the sales data of high-volume or premium products, because ONE threshold or ONE size for reporting exceptions (or alerts) will not fit all SKUs.

Mean Squared Error (MSE), Root Mean Square Error (RMSE), Total Absolute Error (TAE): These measures show their importance in inventory terms. When MSE is expressed in the $ value of inventory, or applied per segment of high-volume products, it is much more efficient for analysing and reacting quickly.

Example: Let’s take two products. Product A has a forecast of 100 and actual sales of 50. Product B has a forecast of 1,000 and actual sales of 500. In both cases the accuracy is 50% if we talk in percentage errors. But the MSE for product A is 2,500, whereas for product B it is 250,000. So the planner would bring product B under control first.
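The same comparison in code (one period per product, so the squared error is that product's MSE; the percentage error is computed against the forecast, matching the 50% in the example):

```python
for name, forecast, actual in [("A", 100, 50), ("B", 1000, 500)]:
    pct_error = abs(actual - forecast) / forecast  # 50% for both products
    sq_error  = (actual - forecast) ** 2           # 2,500 vs 250,000
    print(name, f"{pct_error:.0%}", sq_error)
```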

Error Sum or Total Error (TE) or Error Total (ET): Honestly, I haven’t seen any customer using this error measure. It does give the error in units along with a positive or negative sign, which indicates the direction of the forecast bias (over-forecasting or under-forecasting). But it is still a risky choice, because positive and negative errors may net off to 0.

Conclusion

“Falling down is how we grow. Staying down is how we die.”
– Brian Vaszily

As seen, there are various methods available to measure forecast error. Planners must review the forecasting model assignments periodically, so that a better baseline forecast can be generated, which can later be complemented by other inputs from sales, marketing or logistics. However, to review these assignments quickly, the planner needs additional information such as the Coefficient of Variability (CV), Average Demand Interval (ADI), Demand Switching Frequency, Forecast Timing Accuracy, etc.
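As an illustration of two of these metrics, here is a minimal sketch computing CV and ADI for a demand series. The ADI definition used here (total periods divided by the number of periods with non-zero demand) is the common textbook one; the thresholds for classifying a series vary by business.

```python
import statistics

def demand_stats(sales):
    """Coefficient of Variability (CV) and Average Demand Interval (ADI)."""
    mean = statistics.mean(sales)
    cv = statistics.stdev(sales) / mean if mean else float("inf")
    nonzero = sum(1 for s in sales if s)
    adi = len(sales) / nonzero if nonzero else float("inf")
    return cv, adi

cv, adi = demand_stats([0, 12, 0, 0, 9, 0, 15, 0])
print(f"CV = {cv:.2f}, ADI = {adi:.2f}")  # high CV and high ADI -> intermittent/lumpy demand
```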

SAP APO and SAP SPP provide information about the model selected, the errors evaluated, etc. in the job spool, yet I have seen many planners who do not know how to view these results. SAP APO also lacks the capability to store error measures or model assignments over time, as these have no unique GUID assignment. Olivehorse is helping clients (using SAP APO) with custom enhancements that overcome these limitations and improve the statistical forecasting approach. Call us to find out more!

Are you using the tool in the right way? Let us know in the comments area!

Suresh Sellaiah

Senior SCM Consultant, Olivehorse Consulting

 

Read more on: SAP APO, Statistical Forecasting, Demand Planning, Forecast Error