Forecasting support systems: What we know, what we need to know


International Journal of Forecasting 29 (2013) 290–294

    Editorial

In calling for papers dedicated to Forecasting Support Systems (FSSs), we set a wide agenda. The reason for this was that, despite the increasing importance of such systems for effective practice (see, for example, Moon, Mentzer, & Smith, 2003; Sanders & Manrodt, 2003), there has been relatively little research in the area. A recent definition of an FSS is:

A set of (typically computer-based) procedures that facilitate interactive forecasting of key variables in a given organizational context. An FSS enables users to combine relevant information, analytical models, and judgments, as well as visualizations, to produce (or create) forecasts and monitor their accuracy (Ord & Fildes, 2013).

Here, forecasting is viewed as a semi-structured decision problem (Keen & Scott Morton, 1978), where forecasts are developed based on the interaction between the forecaster and the information system. While parts of the forecasting problem can be programmed, such as the detection of regular patterns in data by statistical methods, human judgment is required to handle the unprogrammable aspects. In this way, FSSs are akin to decision support systems (DSSs); indeed, they may be embedded within DSSs.

By removing the burden of dealing with the programmable aspects of forecasting, well-designed FSSs should free users to deal with the remaining aspects. They can also support this by supplying relevant information in an amenable form, facilitating the analysis and modelling of data, providing advice, warnings and feedback, and allowing users to assess the possible consequences of different forecasting strategies.
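As an illustration of this division of labour, here is a minimal sketch in Python. It is a hypothetical design, not any system described in this section: the programmable part produces a baseline statistical forecast, the forecaster contributes a judgmental adjustment, and the system reports accuracy feedback afterwards. All function names and numbers are invented for illustration.

```python
def baseline_forecast(history):
    """Programmable part: a naive statistical baseline (here, the mean
    of the last three observations stands in for a proper method)."""
    recent = history[-3:]
    return sum(recent) / len(recent)

def final_forecast(history, judgmental_adjustment=0.0):
    """Unprogrammable part: the forecaster adds a judgmental adjustment,
    e.g. for a promotion that the statistical model cannot see."""
    return baseline_forecast(history) + judgmental_adjustment

def accuracy_feedback(forecast, actual):
    """Monitoring: report the error so the user can learn from it."""
    return actual - forecast

history = [100, 104, 102, 106, 104]                    # hypothetical demand series
f = final_forecast(history, judgmental_adjustment=10)  # user anticipates a promotion
print(f)                                               # 114.0
print(accuracy_feedback(f, actual=111))                # -3.0
```

The point of the sketch is only the division of responsibilities: the system computes and monitors, while the adjustment step remains under the user's control.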

While the hardware required for such interactive forecasting support was becoming widely available by 1980, the importance of the user was neglected in both research and practice. Most of the early research in forecasting adopted one particular paradigm (i.e., the statistical), and ignored the unprogrammable aspects that are pivotal to the definition of an FSS. While the topic of combining judgment with quantitative methods had generated a considerable amount of research (see Clemen, 1989, for an early review), little of this had examined how this was to be done in an organisational setting. The earliest proposals (identified using Google Scholar) were within the domain of weather forecasting (Racer & Gaffney, 1984),

although their article's reference list suggests earlier conference presentations.1 This early article emphasises the interdependence of the model, the information system and the meteorologist, and goes on to suggest the development of an expert system with artificial intelligence to support these interactions. Wright and Ayton (1986) were the first to discuss forecasting support, but only in passing, while Bunn and Wright (1991), and later Goodwin and Wright (1993), were early contributors, focusing on the interactions between judgment and statistical models. In short, while the topic was important even in the 1980s, it was neglected by researchers. The new century has seen some limited increases in research interest (a search of the Web of Science® database, using the search terms "forecast support" and "forecasting support", yielded 15 papers published between 1993 and 2002 and 38 papers between 2003 and 2012). At the same time, software giants such as SAP, SAS and JD Edwards have increased their market presence in the delivery of organisational forecasting solutions.

The gap between the increasing organisational importance of forecasting support systems and the focus of most academic research stimulated the editors to develop this special section with a research agenda that highlighted:


- The design, adoption and use of FSSs
- The role of judgment in the forecasting process
- Improving the effectiveness of FSSs.

We will now discuss these topics and where the research contributions lie.

    The design, adoption and use of FSSs

Perhaps the first question that arises relates to the nature of the systems in use in various organisations. Küsters, McCullough, and Bell (2006) provided a history of the development of forecasting software, while Sanders and Manrodt (2003) provided a picture of the software packages in use by companies a decade ago and their limitations. Excel was ubiquitous! These two articles began

1 Early research may well not have used the term, but an extended search of the decision support literature did not produce any earlier references.

© 2013 International Institute of Forecasters. Published by Elsevier B.V. All rights reserved.

http://dx.doi.org/10.1016/j.ijforecast.2013.01.001

to develop a view of the development and use of FSSs, while Fildes, Goodwin, and Lawrence (2006) examined the features of such software in some detail, both actual and ideal. However, their analysis was hindered by the lack of evidence as to what forecasters who were working with FSSs actually required in use. A case study approach by Goodwin, Lee, Fildes, Nikolopoulos, and Lawrence (2007) demonstrated just how complex the use of such systems can be and the role they play in the organisational process of forecast production. A laboratory study by Goodwin, Fildes, Lawrence, and Nikolopoulos (2007) showed that different strategies were adopted by different user clusters. It also demonstrated the importance of having a good selection routine for the baseline statistical forecast.

How do these ideas carry over into organisational practice? Asimakopoulos, Dix, and Fildes (2011) provided a task analysis of the way in which forecasting was actually carried out in a supply chain context. The statistical analysis of the time series, the focus of much forecasting research, proved less important than other organisational tasks, and this idea is taken further in this special section (Asimakopoulos & Dix, 2013), where the different views of users and software developers are contrasted, leading to recommendations for improving the adoption and effective use of such systems. However, we still know nothing about other users, such as marketing analysts. Further studies are needed to provide an understanding of the way in which organisations design and select their FSSs (the design features that are valued), and also of the ways in which the systems are modified in use in order to contribute to the organisational processes of producing forecasts.

    The role of judgment in the forecasting process

The task analysis just described highlighted the importance of judgment in producing the final forecast demanded by users. In many contexts, there are well-established benefits of combining statistical methods and judgment. Statistical forecasting methods can detect systematic patterns rapidly in large sets of data and can filter out noise, but they can be unreliable when data are scarce or have little relevance for future events. On the other hand, judgmental forecasters can anticipate the effects of special events, but they are subject to a range of cognitive and motivational biases (Goodwin, Önkal, & Lawrence, 2011). For example, they may have a tendency to perceive systematic patterns in random observations (O'Connor, Remus, & Griggs, 1993), or they may distort forecasts to please senior managers (Galbraith & Merrill, 1996).

Judgment is inevitably used in developing a forecasting procedure, meaning that, in practice, any analysis of forecasting must start with the judgmentally based choice of which approach to adopt. Where both a judgmentally based forecast and a model-based forecast are available, one way to exploit the complementary benefits of the two methods is to combine their forecasts mechanically. For example, a simple average of independent statistical and judgmental forecasts may be more accurate than the individual forecasts themselves (Clemen, 1989). A mechanical combination of independent forecasts has a number of advantages over a judgmentally based combination. For example, judgmental forecasters will not have an opportunity to anchor on the statistical forecasts, and if the errors associated with the two types of forecasts are negatively correlated, then each method will tend to negate the other's errors. However, mechanical combination may be infeasible in many situations. Decision makers may feel a loss of control over the final forecasts, and their sense of ownership of these forecasts may be lost. They may also be less trustful of statistical forecasts than of those based on judgment (Önkal, Goodwin, Thomson, Gönül, & Pollock, 2009). As a result, the combination forecasts may be ignored.
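The arithmetic of a simple-average combination can be sketched as follows. The series are invented for illustration, with the statistical and judgmental errors chosen to be negatively correlated, so that the combination largely cancels them, as the argument above suggests.

```python
# Hypothetical series: actual outcomes plus two independent forecasts
# whose errors tend to have opposite signs (negatively correlated).
actuals     = [100, 120, 110, 130]
statistical = [ 96, 124, 105, 136]   # errors: -4, +4, -5, +6
judgmental  = [105, 115, 116, 125]   # errors: +5, -5, +6, -5

# Mechanical combination: a simple average of the two forecasts.
combined = [(s + j) / 2 for s, j in zip(statistical, judgmental)]

def mae(forecasts, outcomes):
    """Mean absolute error of a forecast series."""
    return sum(abs(f - a) for f, a in zip(forecasts, outcomes)) / len(outcomes)

print(mae(statistical, actuals))  # 4.75
print(mae(judgmental, actuals))   # 5.25
print(mae(combined, actuals))     # 0.5
```

With errors this strongly opposed, the combined forecast is far more accurate than either input; in practice the gain depends on how correlated the two error series actually are.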

In these circumstances, a process of voluntary integration is likely to be more acceptable. In practice, this usually involves forecasters making judgmental adjustments to statistical forecasts when they see fit. Ostensibly, this will be because they are aware of factors that will influence the variable to be forecast and that have not been accounted for in the statistical forecast. A survey by Fildes and Goodwin (2007) showed this to be the most common type of forecasting. In fact, research in supply-chain forecasting has shown that the frequency with which managers make adjustments tends to be excessive (Fildes, Goodwin, Lawrence, & Nikolopoulos, 2009; Franses & Legerstee, 2009). For example, at one large company in Fildes et al.'s study, around 90% of the statistical forecasts provided were changed, often without apparent reason. The re