Forecasting support systems: What we know, what we need to know


Editorial

In calling for papers dedicated to Forecasting Support Systems (FSSs), we set a wide agenda. The reason for this was that, despite the increasing importance of such systems for effective practice (see for example Moon, Mentzer, & Smith, 2003; Sanders & Manrodt, 2003), there has been relatively little research in the area. A recent definition of an FSS is:

A set of (typically computer-based) procedures that facilitate interactive forecasting of key variables in a given organizational context. An FSS enables users to combine relevant information, analytical models, and judgments, as well as visualizations, to produce (or create) forecasts and monitor their accuracy (Ord & Fildes, 2013).

Here, forecasting is viewed as a semi-structured decision problem (Keen & Scott Morton, 1978), where forecasts are developed based on the interaction between the forecaster and the information system. While parts of the forecasting problem can be programmed, such as the detection of regular patterns in data by statistical methods, human judgment is required to handle the unprogrammable aspects. In this way, FSSs are akin to decision support systems (DSSs); indeed, they may be embedded within DSSs.

By removing the burden of dealing with the programmable aspects of forecasting, well-designed FSSs should free users to deal with the remaining aspects. They can also support this by supplying relevant information in an amenable form, facilitating the analysis and modelling of data, providing advice, warnings and feedback, and allowing users to assess the possible consequences of different forecasting strategies.
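To make this division of labour concrete, here is a minimal sketch of such a system in Python, assuming a pandas series of demand history. The class, its exponential-smoothing baseline, and the data are our own illustrative inventions, not a description of any particular FSS:

```python
# Minimal FSS sketch: a programmable baseline, a judgmental override,
# and accuracy monitoring. All names and data are illustrative.
import pandas as pd


class SimpleFSS:
    """Toy forecasting support system: statistical baseline plus an
    optional judgmental adjustment, with accuracy feedback."""

    def __init__(self, history: pd.Series):
        self.history = history
        self.log = []  # records baseline, final forecast and errors per period

    def baseline(self, alpha: float = 0.3) -> float:
        # Programmable part: simple exponential smoothing of the history.
        level = self.history.iloc[0]
        for y in self.history.iloc[1:]:
            level = alpha * y + (1 - alpha) * level
        return level

    def final_forecast(self, adjustment: float = 0.0) -> float:
        # Unprogrammable part: the user's judgmental adjustment,
        # e.g. for a promotion the statistical model cannot see.
        base = self.baseline()
        self.log.append({"baseline": base, "final": base + adjustment})
        return base + adjustment

    def record_actual(self, actual: float) -> None:
        # Feedback: monitor whether the adjustment helped or hurt.
        entry = self.log[-1]
        entry["baseline_error"] = abs(actual - entry["baseline"])
        entry["final_error"] = abs(actual - entry["final"])


fss = SimpleFSS(pd.Series([100, 104, 98, 107, 110]))
print(fss.final_forecast(adjustment=15))  # user expects a promotional uplift
fss.record_actual(124)
```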

While the hardware required for such interactive forecasting support was becoming widely available by 1980, the importance of the user was neglected in both research and practice. Most of the early research in forecasting adopted one particular paradigm (i.e., the statistical), and ignored the unprogrammable aspects that are pivotal to the definition of an FSS. While the topic of combining judgment with quantitative methods had generated a considerable amount of research (see Clemen, 1989, for an early review), little of this had examined how this was to be done in an organisational setting. The earliest proposals (identified using Google Scholar) were within the domain of weather forecasting (Racer & Gaffney, 1984), although their article's reference list suggests earlier conference presentations.¹ This early article emphasises the interdependence of the model, the information system and the meteorologist, and goes on to suggest the development of an expert system with artificial intelligence to support these interactions. Wright and Ayton (1986) were the first to discuss forecasting support, but only in passing, while Bunn and Wright (1991), and later Goodwin and Wright (1993), were early contributors, focusing on the interactions between judgment and statistical models. In short, while the topic was important even in the 1980s, it was neglected by researchers. The new century has seen some limited increases in research interest (a search of the Web of Science database, using the search terms "forecast support" and "forecasting support", yielded 15 papers published between 1993 and 2002, and 38 papers between 2003 and 2012). At the same time, software giants such as SAP, SAS and JD Edwards have increased their market presence in the delivery of organisational forecasting solutions.

The gap between the increasing organisational importance of forecasting support systems and the focus of most academic research stimulated the editors to develop this special section with a research agenda that highlighted:

• The design, adoption and use of FSSs
• The role of judgment in the forecasting process
• Improving the effectiveness of FSSs

We will now discuss these topics and where the research contributions lie.

The design, adoption and use of FSSs

Perhaps the first question that arises relates to the nature of the systems in use in various organisations. Küsters, McCullough, and Bell (2006) provided a history of the development of forecasting software, while Sanders and Manrodt (2003) provided a picture of the software packages in use by companies a decade ago and their limitations. Excel was ubiquitous! These two articles began to develop a view of the development and use of FSSs, while Fildes, Goodwin, and Lawrence (2006) examined the features of such software in some detail, both actual and ideal. However, their analysis was hindered by the lack of evidence as to what forecasters who were working with FSSs actually required in use. A case study approach by Goodwin, Lee, Fildes, Nikolopoulos, and Lawrence (2007) demonstrated just how complex the use of such systems can be, and the role they play in the organisational process of forecast production. A laboratory study by Goodwin, Fildes, Lawrence, and Nikolopoulos (2007) showed that different strategies were adopted by different user clusters. It also demonstrated the importance of having a good selection routine for the baseline statistical forecast.

¹ Early research may well not have used the term, but an extended search of the decision support literature did not produce any earlier references.

How do these ideas carry over into organisational practice? Asimakopoulos, Dix, and Fildes (2011) provided a task analysis of the way in which forecasting was actually carried out in a supply chain context. The statistical analysis of the time series, the focus of much forecasting research, proved less important than other organisational tasks, and this idea is taken further in this special section (Asimakopoulos & Dix, 2013), where the different views of users and software developers are contrasted, leading to recommendations for improving the adoption and effective use of such systems. However, we still know nothing about other users, such as marketing analysts. Further studies are needed to provide an understanding of the way in which organisations design and select their FSSs (the design features that are valued), and also of the ways in which the systems are modified in use in order to contribute to the organisational processes of producing forecasts.

The role of judgment in the forecasting process

The task analysis just described highlighted the importance of judgment in producing the final forecast demanded by users. In many contexts, there are well-established benefits to combining statistical methods and judgment. Statistical forecasting methods can detect systematic patterns rapidly in large sets of data and can filter out noise, but they can be unreliable when data are scarce or have little relevance for future events. On the other hand, judgmental forecasters can anticipate the effects of special events, but they are subject to a range of cognitive and motivational biases (Goodwin, Önkal, & Lawrence, 2011). For example, they may have a tendency to perceive systematic patterns in random observations (O'Connor, Remus, & Griggs, 1993), or they may distort forecasts to please senior managers (Galbraith & Merrill, 1996).

Judgment is inevitably used in developing a forecasting procedure, meaning that, in practice, any analysis of forecasting must start with the judgmentally based choice of which approach to adopt. Where both a judgmentally based forecast and a model-based forecast are available, one way to exploit the complementary benefits of the two methods is to combine their forecasts mechanically. For example, a simple average of independent statistical and judgmental forecasts may be more accurate than either of the individual forecasts (Clemen, 1989). A mechanical combination of independent forecasts has a number of advantages over a judgmentally based combination. For example, judgmental forecasters will not have an opportunity to anchor on the statistical forecasts, and if the errors associated with the two types of forecasts are negatively correlated, then each method will tend to negate the other's errors. However, mechanical combination may be infeasible in many situations. Decision makers may feel a loss of control over the final forecasts, and their sense of ownership of these forecasts may be lost. They may also be less trustful of statistical forecasts than of those based on judgment (Önkal, Goodwin, Thomson, Gönül, & Pollock, 2009). As a result, the combination forecasts may be ignored.
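As a toy illustration of this point, the following sketch shows an equal-weight mechanical combination outperforming both of its inputs. The numbers are invented, and were chosen so that the two sets of errors are negatively correlated:

```python
import numpy as np

# Invented data for illustration only.
actuals     = np.array([120.0,  95.0, 130.0, 110.0])
statistical = np.array([112.0, 101.0, 118.0, 115.0])  # model-based forecasts
judgmental  = np.array([130.0,  90.0, 138.0, 104.0])  # expert forecasts

combined = (statistical + judgmental) / 2  # equal-weight mechanical combination

def mae(forecast):
    return np.mean(np.abs(actuals - forecast))

# The individual MAEs are 7.75 and 7.25; the combination's is 1.0, because
# the two methods' errors were constructed to cancel each other out.
print(mae(statistical), mae(judgmental), mae(combined))
```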

In these circumstances, a process of voluntary integration is likely to be more acceptable. In practice, this usually involves forecasters making judgmental adjustments to statistical forecasts when they see fit. Ostensibly, this will be because they are aware of factors that will influence the variable to be forecast and that have not been accounted for in the statistical forecast. A survey by Fildes and Goodwin (2007) showed this to be the most common type of forecasting.

In fact, research in supply-chain forecasting has shown that the frequency with which managers make adjustments tends to be excessive (Fildes, Goodwin, Lawrence, & Nikolopoulos, 2009; Franses & Legerstee, 2009). For example, at one large company in Fildes et al.'s study, around 90% of the statistical forecasts provided were changed, often without apparent reason. The research found that most of these adjustments were small, and, although they therefore had a limited potential to damage the accuracy, they wasted valuable management time. Larger adjustments tended to be made when they were needed, but were generally overoptimistic, particularly for promotional events which were perceived as high impact (Trapero, Pedregal, Fildes, & Kourentzes, 2013). Also, forecasters seemed to treat negative information (leading to a downward adjustment of the system forecast) differently to positive information (upgrading the system forecast), to such an extent that positive adjustments often diminished the forecasting accuracy. However, a recent analysis (Davydenko & Fildes, in press) suggests that this conclusion may be a misconstruction of the evidence, arising from the error measures used in comparing accuracy levels. Analysing a different data set, Franses and Legerstee (2013) draw conflicting conclusions, and, which is perhaps more important, argue that, for their pharmaceutical company, the experts generally failed to add value to the baseline forecasts. However, when the forecasts were particularly inaccurate, the experts did improve on the statistical model.
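To see how the choice of error measure can drive such conclusions, the sketch below compares a pooled MAPE with a geometric mean of per-series MAE ratios on invented data. It is a hedged illustration in the spirit of Davydenko and Fildes' argument, not a reproduction of their analysis:

```python
import numpy as np

# Two invented series. Percentage errors over-weight the low-volume series,
# so the two accuracy measures reach opposite verdicts on the adjustments.
series = {
    "low_volume":  dict(actual=np.array([10.0, 12.0]),
                        baseline=np.array([12.0, 10.0]),
                        adjusted=np.array([15.0, 7.0])),
    "high_volume": dict(actual=np.array([1000.0, 1100.0]),
                        baseline=np.array([1150.0, 950.0]),
                        adjusted=np.array([1020.0, 1080.0])),
}

def pooled_mape(key):
    errs = [np.abs((s["actual"] - s[key]) / s["actual"]) for s in series.values()]
    return np.mean(np.concatenate(errs))

# Geometric mean of per-series MAE ratios (adjusted / baseline); values
# below 1 mean the adjustments improved accuracy on average.
ratios = [np.mean(np.abs(s["actual"] - s["adjusted"])) /
          np.mean(np.abs(s["actual"] - s["baseline"])) for s in series.values()]
avg_rel_mae = np.exp(np.mean(np.log(ratios)))

print(pooled_mape("baseline"), pooled_mape("adjusted"))  # MAPE: adjustments look harmful
print(avg_rel_mae)                                       # ~0.58: adjustments look helpful
```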

The statistical model is often weakest for promotional events, and consequently this is where we would expect the experts to add the greatest value. A detailed modelling of data from a manufacturer of cleaning products which are promoted found that, despite the importance of promotional information in determining adjustments, the adjusted final forecasts were worse than the baseline statistical forecasts. However, they did add value in some situations: a model that included promotional information could be improved by being combined with the expert information (Trapero et al., 2013).
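One simple way in which such a combination might be implemented is to estimate weights on the model and expert forecasts over a hold-out sample. The least-squares scheme and the data below are our own illustrative assumptions, not the method used by Trapero et al.:

```python
import numpy as np

# Illustrative only: regress actuals on the two forecasts to obtain
# combination weights (invented hold-out data).
actuals = np.array([150.0, 90.0, 200.0, 120.0, 180.0])
model   = np.array([140.0, 95.0, 170.0, 118.0, 160.0])  # promotion-aware model
expert  = np.array([165.0, 85.0, 210.0, 110.0, 195.0])  # expert-adjusted forecasts

X = np.column_stack([model, expert])
w, *_ = np.linalg.lstsq(X, actuals, rcond=None)  # unconstrained LS weights

combined = X @ w
print(w, np.mean(np.abs(actuals - combined)))
```

In practice the weights would be re-estimated as new observations arrive, and might be constrained to sum to one; the point here is only that the expert information enters through an estimated weight rather than by overwriting the model.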

These studies lead to the conclusion that different organisations and their forecasting processes have different biases, which in turn lead to varying forecasting performances when the statistical baseline forecasts are compared with various enhanced alternatives. On balance, the expert forecasters have potentially valuable information that is not captured by simple models (not even those which are enhanced by explanatory variables), but the forecasters and their FSSs do not integrate the different sources of information. Despite the sudden flurry of research in recent years, more questions have been raised than answered, leaving moot the critical question of when judgment adds the most value. This leads us to the next question: how can FSSs be improved?

Improving the effectiveness of FSSs

FSSs can potentially include a comprehensive range of features, but these are rarely encountered in practice, as the various surveys demonstrate. Indeed, the focus of forecasting software development has been on enhancing the facilities for data management and statistical forecasting, rather than on the provision of improved support for management judgment. Powerful statistical forecasting algorithms which are capable of handling large data sets are now a feature of most state-of-the-art commercial software. However, these programs usually just provide an opportunity for judgmental overrides, at best. Support for decisions on when to intervene, and how large such interventions should be, is almost non-existent.
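To indicate what even minimal intervention support might look like, the sketch below flags proposed adjustments that are small relative to the noise in past forecast errors. The rule, the threshold, and the data are purely illustrative assumptions, not features of any existing package:

```python
import numpy as np

# Illustrative guardrail (not from the editorial): warn when a proposed
# judgmental adjustment lies within the noise band of past forecast errors,
# since the research suggests small adjustments rarely repay the time spent.
def intervention_warning(adjustment: float,
                         past_errors: np.ndarray,
                         k: float = 1.0) -> str:
    noise = np.std(past_errors)  # typical size of baseline forecast errors
    if abs(adjustment) < k * noise:
        return (f"Adjustment of {adjustment:+.1f} is within the noise band "
                f"(+/-{k * noise:.1f}); consider keeping the statistical forecast.")
    return "Adjustment is large relative to past errors; document the reason."

past_errors = np.array([-8.0, 5.0, -3.0, 9.0, -6.0])
print(intervention_warning(4.0, past_errors))
```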

In the absence of such guidance, judgment is often exercised through the back door. Users override the choice of statistical method recommended by the software, or change its parameters or the length of the time series to which it is fitted, until they obtain a statistical forecast that is close to their judgment (Goodwin, Lee et al., 2007). The result is that the demarcation between the structured and unstructured parts of the process becomes blurred. The judgmental forecaster wastes time on the part of the task which they are least able to perform well, and has greater difficulty in isolating and focusing on the part where their judgment is likely to be beneficial.

One of the reasons for the absence of judgmental support in modern software facilities may be the way in which software is sold. A message that highlights the power and accuracy of the statistical methods embodied in the software, while also advertising the opportunity for unrestricted judgmental intervention, may be attractive to potential purchasers. Judgmental support facilities may also be unfamiliar to potential users, and they may be perceived as devices that will reduce the ease of use of the product and impede the ability to exercise one's judgment fully (Venkatesh & Davis, 2000).

However, it is also the case that research into the design and effectiveness of these facilities, and the ways in which people might use them, is relatively sparse (Fildes et al., 2006), possibly too sparse to encourage commercial software companies to invest in their development. In fact, this was one of the reasons for this special section on FSSs being launched. Most of the research on judgmental forecasting has investigated the biases associated with judgment, as opposed to methods for mitigating these biases (Lawrence, Goodwin, O'Connor, & Önkal, 2006). However, we now have some idea of the effectiveness...