Forecast Error



    Blogs | The Business Forecasting Deal


    Friday, July 30, 2010

    The Argument for Max(Forecast,Actual)

    There is a long-running debate among forecasting professionals on whether to use Forecast or Actual in the denominator of your percentage error calculations. The Winter 2009 issue of Foresight had an article by Kesten Green and Len Tashman, reporting on a survey (of the International Institute of Forecasters discussion list and Foresight subscribers) asking:

    What should the denominator be when calculating percentage error?

    This non-scientific survey's responses were 56% for using Actual, 15% for using Forecast, and 29% for something other, such as the average of Forecast and Actual, or an average of the Actuals in the series, or the absolute average of the period-over-period differences in the data (yielding a Mean Absolute Scaled Error, or MASE). One lone respondent favored using the Maximum(Forecast, Actuals).
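
    To make the survey's options concrete, here is a minimal Python sketch (my own illustration, not from the article; the numbers and the helper name are made up) of the denominator choices it describes:

        def abs_pct_error(forecast, actual, denominator):
            """Absolute error as a percentage of the chosen denominator."""
            return abs(forecast - actual) / denominator * 100

        F, A = 120.0, 100.0

        print(abs_pct_error(F, A, A))            # Actual in denominator: 20.0%
        print(abs_pct_error(F, A, F))            # Forecast in denominator: ~16.7%
        print(abs_pct_error(F, A, (F + A) / 2))  # average of F and A: ~18.2%
        print(abs_pct_error(F, A, max(F, A)))    # Max(F, A): ~16.7%

        # MASE-style scaling instead divides by the mean absolute
        # period-over-period difference of the actuals (a scaled error,
        # not a percentage).
        history = [100.0, 95.0, 110.0, 105.0]
        naive_mae = sum(abs(b - a) for a, b in zip(history, history[1:])) / (len(history) - 1)
        print(abs(F - A) / naive_mae)            # scaled error: 2.4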

    Fast forward to the new Summer 2010 issue of Foresight (p. 46):

    Letter to the Editor

    I have just read with interest Foresight's article "Percentage Error: What Denominator?" (Winter 2009, p. 36). I thought I'd send you a note regarding the response you received to that survey from one person who preferred to use in the denominator the larger value of forecast and actuals.

    I also have a preference for this metric in my environment, even though I realize it may not be academically correct. We have managed to gain an understanding at senior executive level that forecast accuracy improvement will drive significant competitive advantage.

    I have found over many years in different companies that there is a very different executive reaction to a reported result of 60% forecast accuracy vs. a 40% forecast error, even though they are equivalent! Reporting the perceived high error has a tendency to generate knee-jerk reactions and drive the creation of unrealistic goals. Reporting the equivalent accuracy metric tends to cause executives to ask the question "What can we do to improve this?" I know that this is not logical, but it is something I have observed time and again, and so I now always recommend reporting forecast accuracy to a wider audience.

    But if you are going to use forecast accuracy as a metric then, if you have specified the denominator to be either actuals or forecast, you will always have some errors that are greater than 100%. When converting these large errors to accuracy (accuracy being 1 - error), you end up with a negative accuracy result; this is the type of result that always seems to cause misunderstanding with management teams. A forecast accuracy result of minus 156% just does not seem to be intuitively understandable.

    When you use the maximum of forecast or actuals as the denominator, the forecast accuracy metric is constrained between 0 and 100%, making it conceptually easier for a wider audience, including the executive team, to understand.

    If the purpose of the metric is to identify areas of opportunity and drive improvement actions, using the larger value as the denominator and reporting accuracy as opposed to error enables the proper diagnostic activities to take place and reduces disruption caused by misinterpretation of the correct error metric.


    To summarize, I use the larger-value methodology for ease of communication to key personnel who are not familiar with the intricacies of the forecasting process and measurement.

    David Hawitt
    SIOP Development Manager for a global technology company
    [email protected]

    I have long favored "Forecast Accuracy" as a metric for management reporting, defining it as:

    FA = { 1 - [ Σ |F - A| / Σ Max(F, A) ] } x 100

    where the summation is over n observations of forecasts and actuals. FA is defined to be 100% when both forecast and actual are zero. Here is a sample of the calculation over 6 weeks for two products, X and Y:

    [The table of weekly forecasts, actuals, and resulting FA for products X and Y is not reproduced in this transcript.]
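
    Since that table does not survive here, below is a minimal Python sketch of the calculation as I read the formula, with numerator and denominator each summed over the observations; the six weekly values are made-up placeholders, not the article's data:

        def forecast_accuracy(forecasts, actuals):
            """FA = {1 - [Sum|F - A| / Sum Max(F, A)]} x 100 over n observations."""
            total_error = sum(abs(f - a) for f, a in zip(forecasts, actuals))
            total_max = sum(max(f, a) for f, a in zip(forecasts, actuals))
            if total_max == 0:  # every forecast and actual is zero
                return 100.0
            return (1 - total_error / total_max) * 100

        # Made-up 6-week series (placeholders, not the article's X and Y):
        weekly_forecasts = [100, 110, 90, 105, 95, 100]
        weekly_actuals = [95, 120, 70, 105, 0, 110]
        print(round(forecast_accuracy(weekly_forecasts, weekly_actuals), 1))  # 77.4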

    As all forecasting performance metrics do, calculating Forecast Accuracy using Max(F,A) in the denominator has its flaws -- and it certainly has an army of detractors. Yet the detractors miss the point that David so nicely makes. We recognize that Max(F,A) is not "academically correct." It lacks properties that would make it useful in other calculations. There is virtually nothing a self-respecting mathematician would find of value in it, except that it forces the Forecast Accuracy metric to always be scaled between 0 and 100%, thereby making it an excellent choice for reporting performance to management! If nothing else, it helps you avoid wasting time explaining the weird and non-intuitive values you can get with the usual calculation of performance metrics.
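
    As a quick made-up illustration of those weird and non-intuitive values (my numbers, not the post's): with Actual in the denominator, a bad miss yields a negative accuracy, while Max(F,A) keeps the result between 0 and 100%:

        # Hypothetical, badly overforecast item.
        F, A = 250.0, 70.0

        error_vs_actual = abs(F - A) / A * 100       # about 257% error
        print(100 - error_vs_actual)                 # about -157% "accuracy"

        error_vs_max = abs(F - A) / max(F, A) * 100  # 72% error
        print(100 - error_vs_max)                    # 28% accuracy

        # Because |F - A| <= Max(F, A) whenever F and A are nonnegative,
        # the Max-denominator accuracy can never fall below 0 or above 100.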

    Posted by Mike Gilliland at 07:00 | Comments (0) | Trackbacks (0)
