Session 7: Evaluating Forecasts
Demand Forecasting and Planning in Crisis, 30-31 July, Shanghai
Joseph Ogrodowczyk, Ph.D.

Evaluating forecasts: Session agenda
• Background
• Measures of accuracy
• Cost of forecast error
• Activity: Produce forecast error calculations for the forecasts made on Day 1

Evaluating forecasts: Background
• How do we measure the accuracy of our forecasts?
  • How do we know which forecasts were good and which need improvement?
• Error can be calculated across products within a given time period, or across time periods for a given product
  • The following examples are for one product over multiple time periods
• Two topics of forecast evaluation:
  1. How accurate was the forecast?
  2. What was the cost of being wrong?

Evaluating forecasts: Background
• Definitions for evaluation:
  • Forecast period: The time increment in which the forecast is produced (month, week, quarter)
  • Forecast bucket: The time increment being forecasted (period, month, quarter)
  • Forecast horizon: The span covering all forecast buckets being forecasted (12 months, 8 quarters)
  • Forecast lag: The time between when the forecast is produced and the bucket being forecasted
  • Forecast snapshot: The specific combination of period, horizon, bucket, and lag associated with a forecast
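To make these definitions concrete, here is a minimal sketch of how a forecast snapshot could be represented in code. The class name, fields, and the monthly-lag convention are illustrative assumptions, not part of the workshop material.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class ForecastSnapshot:
    """Illustrative sketch; names and fields are assumptions."""
    produced_in: date   # forecast period: when the forecast was produced
    bucket: date        # forecast bucket: the time increment being forecasted
    horizon: int        # forecast horizon: total number of buckets forecasted
    quantity: float     # forecasted demand for this bucket

    @property
    def lag(self) -> int:
        """Forecast lag: whole months between production and the forecasted bucket."""
        return (self.bucket.year - self.produced_in.year) * 12 \
            + (self.bucket.month - self.produced_in.month)

# A January 2008 bucket forecasted in December 2007 has a lag of one month.
snap = ForecastSnapshot(date(2007, 12, 1), date(2008, 1, 1), horizon=12, quantity=88.9)
print(snap.lag)  # 1
```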

Evaluating forecasts: Background
• Sources of error
  • Data: Missing or omitted data, mislabeled data
  • Assumptions: Seasonality is not constant, trend changes are unanticipated, experts have insufficient information
  • Model: Wrong choice of model type (judgment, statistical); correct model type but misspecified model (missing variables or too many variables); outliers not accounted for
• Measures of accuracy
  • Point error
  • Average error
  • Trend of error

Evaluating forecasts: Measures of accuracy
• Point error
  • Error: The difference between the forecasted quantity and the actual demand quantity
  • Squared error: The square of the error
  • Percent error: The error relative to the actual demand quantity
    • A denominator of actuals answers the question: How well did we predict actual demand?
    • A denominator of forecast answers the question: How much were we wrong relative to what we said we would do?
  • Absolute error: The absolute value of the error
  • Absolute percent error: The absolute value of the error relative to the actual demand quantity
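As a minimal sketch, the point error measures above can be computed as follows; the function name is an illustrative assumption, and the example values come from the workshop data used later in this session.

```python
def point_errors(forecast: float, actual: float) -> dict:
    """Point error measures for a single forecast bucket (illustrative sketch)."""
    error = forecast - actual
    return {
        "error": error,
        "squared_error": error ** 2,
        "percent_error": error / actual,   # denominator of actuals
        "absolute_error": abs(error),
        "ape": abs(error) / actual,        # absolute percent error
    }

# Example: forecast 88.9 units, actual demand 88.2 units.
print(point_errors(88.9, 88.2)["ape"])  # about 0.0079, i.e. roughly 0.79%
```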

Evaluating forecasts: Measures of accuracy
• Point error
  • Data from Session 4, naïve one-step model
  • One product over multiple time periods

Evaluating forecasts: Measures of accuracy
• Average error
  • Mean square error (MSE): Average of the squared errors
  • Root mean square error (RMSE): Square root of the MSE
  • Mean percent error (MPE): Average of the percent errors
  • Mean absolute error (MAE): Average of the absolute errors
  • Mean absolute percent error (MAPE): Average of the absolute percent errors
  • Weighted mean absolute percent error (WMAPE): Weighted average of the absolute percent errors
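A minimal sketch of these average error measures over a series of forecast buckets; the function name is an assumption, not workshop code. Note how WMAPE reduces to the sum of absolute errors divided by the sum of actuals.

```python
from math import sqrt

def average_errors(forecasts: list[float], actuals: list[float]) -> dict:
    """Average error measures over multiple buckets (illustrative sketch)."""
    errors = [f - a for f, a in zip(forecasts, actuals)]
    n = len(errors)
    mse = sum(e ** 2 for e in errors) / n
    return {
        "MSE": mse,
        "RMSE": sqrt(mse),
        "MPE": sum(e / a for e, a in zip(errors, actuals)) / n,
        "MAE": sum(abs(e) for e in errors) / n,
        "MAPE": sum(abs(e) / a for e, a in zip(errors, actuals)) / n,
        # Each APE weighted by its actual quantity; algebraically this is
        # sum(|F_t - A_t|) / sum(A_t).
        "WMAPE": sum(abs(e) for e in errors) / sum(actuals),
    }
```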

Evaluating forecasts: Measures of accuracy
• Average error
  • One product over multiple time periods

Evaluating forecasts: Measures of accuracy
• Average error: Weighted mean absolute percent error (WMAPE)
  • Introduced as a method for overcoming inconsistencies in the MAPE
    • All time periods, regardless of the quantity of sales, have equal ability to affect the MAPE
    • A 12% APE for a period in which 10 units were sold has no more importance than a 12% APE for a period in which 100K units were sold
  • Weight each APE calculation by the respective quantity:

    WMAPE = Σ(APE_t × A_t) / Σ A_t = Σ|F_t − A_t| / Σ A_t

Evaluating forecasts: Measures of accuracy
• Average error: Weighted mean absolute percent error (WMAPE)
  • In Session 4, we used a naïve one-step model and forecasted January 2008 using December 2007 data. The forecast was 88.9 units and actual demand was 88.2 units
  • Absolute percent error (APE) = |F − A| / A = |88.9 − 88.2| / 88.2 ≈ 0.79%
  • Multiplying 0.79% by 88.2 (the actual demand) gives 0.7, the weighted error value for the January forecast
  • WMAPE = Σ(APE_t × A_t) / Σ A_t
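The worked example can be checked in code; a quick sketch using only the values given on the slide:

```python
forecast, actual = 88.9, 88.2   # naïve one-step forecast and actual, January 2008

ape = abs(forecast - actual) / actual
print(f"APE = {ape:.2%}")                      # APE = 0.79%
print(f"weighted value = {ape * actual:.2f}")  # 0.70, which is just |F - A|

# Over a single period, WMAPE equals the APE; over many periods,
# high-volume periods carry proportionally more weight.
```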

Evaluating forecasts: Measures of accuracy
• Average error: Weighted mean absolute percent error (WMAPE)

Evaluating forecasts: Measures of accuracy
• Trend of error
  • Point error calculations and average error calculations are static
    • They are calculated for a set time interval
  • Additional information can be obtained by tracking these calculations over time
    • How does the error change over time? This is also called the forecast bias
    • Statistical analysis can be performed on the trending data: mean, standard deviation, coefficient of variation

Evaluating forecasts: Measures of accuracy
• Trend of error
  • Two suggested methods (a sketch of the first follows below):
    • Track a statistic through time (e.g., a 3-month MAPE)
    • Compare time intervals (e.g., Q1 against Q2)
  • The example is the 2008 naïve one-step forecast
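A minimal sketch of the first method, tracking a 3-month MAPE through time; the function name and window convention are assumptions.

```python
def rolling_mape(forecasts: list[float], actuals: list[float], window: int = 3) -> list[float]:
    """MAPE over a rolling window, to reveal how error trends over time (sketch)."""
    apes = [abs(f - a) / a for f, a in zip(forecasts, actuals)]
    return [
        sum(apes[i - window + 1 : i + 1]) / window
        for i in range(window - 1, len(apes))
    ]

# A persistent drift in the rolling MAPE, or a nonzero mean of the signed
# errors, is a sign of forecast bias.
```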

Evaluating forecasts: Cost of forecast error
• Accuracy measures do not capture the costs associated with forecast error
• Two methods for incorporating costs:
  • Calculate costs based on the percent error, differentiating between over- and under-forecasting
  • Calculate costs based on a loss function dependent on safety stock levels, lost sales, and service levels

Evaluating forecasts: Cost of forecast error
• Incorporating costs: Error differentiation
  • Costs are calculated according to the mathematical sign of the percent error (+ or −)
  • Costs of under-forecasting can be reflected in loss of sales, loss of related goods, increased production costs, increased shipment costs, etc.
    • Shipment and production costs are associated with producing and expediting additional units to meet demand
  • Costs of over-forecasting can be reflected in excess inventory, increased obsolescence, increased fire-sale items, etc.
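A minimal sketch of the sign-differentiated approach described above; the per-unit cost figures are placeholder assumptions and would be company specific.

```python
def error_cost(forecast: float, actual: float,
               under_cost: float = 5.0,  # assumed per-unit cost of under-forecasting
               over_cost: float = 2.0    # assumed per-unit cost of over-forecasting
               ) -> float:
    """Cost of one period's error, differentiated by sign (illustrative sketch)."""
    error = forecast - actual
    if error < 0:   # under-forecast: lost sales, expedited production and shipment
        return -error * under_cost
    return error * over_cost  # over-forecast: excess inventory, obsolescence, fire sales
```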

Evaluating forecasts: Cost of forecast error
• Incorporating costs: Loss function
  • A cost of forecast error (CFE) metric can be used to quantify the loss associated with both under- and over-forecasting
  • The loss function is based on the mean absolute error (MAE)
  • The first part of the CFE calculates the unit requirements necessary to maintain a specified service level
  • This is balanced against the volume of lost sales and the associated cost of stock-outs
  • Plotting the cost of error against different service levels shows which service level corresponds to the lowest cost of forecast error
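The sketch below conveys the idea of plotting cost against service level; it is not Catt's (2007) exact formulation, and the cost parameters, the normal-error assumption, and the sigma ≈ 1.25 × MAE approximation are all assumptions of this sketch.

```python
from statistics import NormalDist

def cfe(service_level: float, mae: float,
        holding_cost: float = 2.0,    # assumed per-unit holding cost
        stockout_cost: float = 10.0   # assumed per-unit cost of a lost sale
        ) -> float:
    """Rough cost-of-forecast-error sketch: safety-stock cost at a service
    level balanced against expected stock-out cost (assumptions throughout)."""
    sigma = 1.25 * mae                           # normal errors: sigma ~ 1.25 * MAE
    z = NormalDist().inv_cdf(service_level)      # service-level quantile
    safety_stock = z * sigma                     # units held to meet the service level
    expected_lost = (1 - service_level) * sigma  # crude proxy for lost-sales volume
    return holding_cost * safety_stock + stockout_cost * expected_lost

# Evaluating cfe() across service levels (say 0.80 to 0.99) and plotting the
# results indicates the service level with the lowest cost of forecast error.
```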

Evaluating forecasts: Cost of forecast error
• Final notes
  • Cost of error helps to guide the forecast improvement process
    • These costs can be company specific and can be explored through understanding the implications of shortages and surpluses of products
  • The specific mathematical calculations are beyond the scope of this workshop
  • Applying costs to forecast errors will always require assumptions within the models
    • Recommend explicitly writing assumptions
    • Changing assumptions will lead to changes in the costs of the errors and can produce a range of estimated costs

Evaluating forecasts: References
• Jain, Chaman L. and Jack Malehorn. 2005. Practical Guide to Business Forecasting (2nd ed.). Flushing, New York: Graceway Publishing Inc.
• Catt, Peter Maurice. 2007. Assessing the cost of forecast error: A practical example. Foresight, Summer: 5-10.