
International Journal of Forecasting 29 (2013) 290–294


Editorial

Forecasting support systems: What we know, what we need to know


In calling for papers dedicated to Forecasting Support Systems (FSSs), we set a wide agenda. The reason for this was that, despite the increasing importance of such systems for effective practice (see for example Moon, Mentzer, & Smith, 2003; Sanders & Manrodt, 2003), there has been relatively little research in the area. A recent definition of an FSS is:

A set of (typically computer-based) procedures that facilitate interactive forecasting of key variables in a given organizational context. An FSS enables users to combine relevant information, analytical models, and judgments, as well as visualizations, to produce (or create) forecasts and monitor their accuracy (Ord & Fildes, 2013).

Here, forecasting is viewed as a semi-structured decision problem (Keen & Scott Morton, 1978), where forecasts are developed based on the interaction between the forecaster and the information system. While parts of the forecasting problem can be programmed, such as the detection of regular patterns in data by statistical methods, human judgment is required to handle the unprogrammable aspects. In this way, FSSs are akin to decision support systems (DSS); indeed, they may be embedded within DSSs.

By removing the burden of dealing with the programmable aspects of forecasting, well-designed FSSs should free users to deal with the remaining aspects. They can also support this by supplying relevant information in an amenable form, facilitating the analysis and modelling of data, providing advice, warnings and feedback, and allowing users to assess the possible consequences of different forecasting strategies.

While the hardware required for such interactive forecasting support was becoming widely available by 1980, the importance of the user was neglected in both research and practice. Most of the early research in forecasting adopted one particular paradigm (i.e., the statistical), and ignored the unprogrammable aspects that are pivotal to the definition of an FSS. While the topic of combining judgment with quantitative methods had generated a considerable amount of research (see Clemen, 1989, for an early review), little of this had examined how this was to be done in an organisational setting. The earliest proposals (identified using Google Scholar) were within the domain of weather forecasting (Racer & Gaffney, 1984), although their article's reference list suggests earlier conference presentations.1 This early article emphasises the interdependence of the model, the information system and the meteorologist, and goes on to suggest the development of an expert system with artificial intelligence to support these interactions. Wright and Ayton (1986) were the first to discuss forecasting support, but only in passing, while Bunn and Wright (1991), and later Goodwin and Wright (1993), were early contributors, focusing on the interactions between judgment and statistical models. In short, while the topic was important even in the 1980s, it was neglected by researchers.

The new century has seen some limited increases in research interest (a search of the Web of Science® database, using the search terms "forecast support" and "forecasting support", yielded 15 papers published between 1993 and 2002 and 38 papers between 2003 and 2012). At the same time, software giants such as SAP, SAS and JD Edwards have increased their market presence in the delivery of organisational forecasting solutions.

1 Early research may well not have used the term, but an extended search of the decision support literature did not produce any earlier references.

The gap between the increasing organisational importance of forecasting support systems and the focus of most academic research stimulated the editors to develop this special section with a research agenda that highlighted:

• The design, adoption and use of FSSs
• The role of judgment in the forecasting process
• Improving the effectiveness of FSSs.

We will now discuss these topics and where the research contributions lie.

The design, adoption and use of FSSs

Perhaps the first question that arises relates to the nature of the systems in use in various organisations. Kusters, McCullough, and Bell (2006) provided a history of the development of forecasting software, while Sanders and Manrodt (2003) provided a picture of the software packages in use by companies a decade ago and their limitations. Excel was ubiquitous! These two articles began to develop a view of the development and use of FSSs, while Fildes, Goodwin, and Lawrence (2006) examined the features of such software in some detail, both actual and ideal. However, their analysis was hindered by the lack of evidence as to what forecasters who were working with FSSs actually required in use. A case study approach by Goodwin, Lee, Fildes, Nikolopoulos, and Lawrence (2007) demonstrated just how complex the use of such systems can be and the role they play in the organisational process of forecast production. A laboratory study by Goodwin, Fildes, Lawrence, and Nikolopoulos (2007) showed that different strategies were adopted by different user clusters. It also demonstrated the importance of having a good selection routine for the baseline statistical forecast.

How do these ideas carry over into organisational practice? Asimakopoulos, Dix, and Fildes (2011) provided a task analysis of the way in which forecasting was actually carried out in a supply chain context. The statistical analysis of the time series, the focus of much forecasting research, proved less important than other organisational tasks, and this idea is taken further in this special section (Asimakopoulos & Dix, 2013), where the different views of users and software developers are contrasted, leading to recommendations for improving the adoption and effective use of such systems. However, we still know nothing about other users, such as marketing analysts. Further studies are needed to provide an understanding of the way in which organisations design and select their FSSs (the design features that are valued), and also of the ways in which the systems are modified in use in order to contribute to the organisational processes of producing forecasts.

The role of judgment in the forecasting process

The task analysis just described highlighted the importance of judgment in producing the final forecast demanded by users. In many contexts, there are well established benefits of combining statistical methods and judgment. Statistical forecasting methods can detect systematic patterns rapidly in large sets of data and can filter out noise, but they can be unreliable when data are scarce or have little relevance for future events. On the other hand, judgmental forecasters can anticipate the effects of special events, but they are subject to a range of cognitive and motivational biases (Goodwin, Onkal, & Lawrence, 2011). For example, they may have a tendency to perceive systematic patterns in random observations (O'Connor, Remus, & Griggs, 1993), or they may distort forecasts to please senior managers (Galbraith & Merrill, 1996).

Judgment is inevitably used in developing a forecasting procedure, meaning that, in practice, any analysis of forecasting must start with the judgmentally based choice of which approach to adopt. Where both a judgmentally based forecast and a model-based forecast are available, one way to exploit the complementary benefits of the two methods is to combine their forecasts mechanically. For example, a simple average of independent statistical and judgmental forecasts may be more accurate than the individual forecasts (Clemen, 1989). A mechanical combination of independent forecasts has a number of advantages over a judgmentally based combination. For example, judgmental forecasters will not have an opportunity to anchor on the statistical forecasts, and if the errors associated with the two types of forecasts are negatively correlated, then each method will tend to negate the other's errors. However, mechanical combination may be infeasible in many situations. Decision makers may feel a loss of control over the final forecasts, and their sense of ownership of these forecasts may be lost. They may also be less trustful of statistical forecasts than those based on judgment (Onkal, Goodwin, Thomson, Gonul, & Pollock, 2009). As a result, the combination forecasts may be ignored.
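The mechanics of that simple average are worth a concrete illustration. The following Python fragment is a minimal sketch using invented numbers, not data from any study cited here: it averages two independent forecasts of the same series and compares mean absolute errors. When the two error streams are roughly independent, the average typically beats both components.

```python
import numpy as np

# Minimal sketch (synthetic data): a simple 50/50 average of independent
# statistical and judgmental forecasts, in the spirit of Clemen (1989).
rng = np.random.default_rng(42)

actuals = 100 + rng.normal(0, 5, size=200)
# Each forecast equals the truth plus its own (roughly independent) error.
statistical = actuals + rng.normal(0, 8, size=200)
judgmental = actuals + rng.normal(0, 8, size=200)
combined = 0.5 * statistical + 0.5 * judgmental

def mae(forecast, actual):
    """Mean absolute error."""
    return np.mean(np.abs(forecast - actual))

print(f"MAE statistical: {mae(statistical, actuals):.2f}")
print(f"MAE judgmental:  {mae(judgmental, actuals):.2f}")
print(f"MAE combined:    {mae(combined, actuals):.2f}")  # typically the lowest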

In circumstances like these, where mechanical combination is resisted, a process of voluntary integration is likely to be more acceptable. In practice, this usually involves forecasters making judgmental adjustments to statistical forecasts when they see fit. Ostensibly, this will be because they are aware of factors that will influence the variable to be forecast and have not been accounted for in the statistical forecast. A survey by Fildes and Goodwin (2007) showed this to be the most common type of forecasting. In fact, research in supply-chain forecasting has shown that the frequency with which managers make adjustments tends to be excessive (Fildes, Goodwin, Lawrence, & Nikolopoulos, 2009; Franses & Legerstee, 2009). For example, at one large company in Fildes et al.'s study, around 90% of the statistical forecasts provided were changed, often without apparent reason. The research found that most of these adjustments were small, and, although they therefore had a limited potential to damage the accuracy, they wasted valuable management time. Larger adjustments tended to be made when they were needed, but were generally overoptimistic, particularly for promotional events which were perceived as high impact (Trapero, Pedregal, Fildes, & Kourentzes, 2013). Also, forecasters seemed to treat negative information (leading to a downward adjustment of the system forecast) differently to positive information (upgrading the system forecast), to such an extent that positive adjustments often diminished the forecasting accuracy. However, a recent analysis (Davydenko & Fildes, in press) suggests that this conclusion may be a misconstruction of the evidence arising from the error measures used in comparing accuracy levels. Analysing a different data set, Franses and Legerstee (2013) draw conflicting conclusions and, perhaps more importantly, argue that, for their pharmaceutical company, the experts generally failed to add value to the baseline forecasts. However, when the forecasts were particularly inaccurate, the experts did improve on the statistical model.
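The measurement point can be made concrete with a small sketch. The fragment below, again with invented numbers, contrasts MAPE with a relative-MAE measure in the spirit of the ones Davydenko and Fildes discuss; the exact formulation here is a simplification for illustration, not their published definition. The same set of adjustments can look harmful on one measure and beneficial on the other.

```python
import numpy as np

# Illustrative sketch (invented numbers): why the error measure matters when
# judging adjustments. MAPE weights percentage errors, which penalises errors
# on low-volume items heavily; a relative-MAE measure compares the adjusted
# forecast against the statistical baseline series by series.
actuals  = np.array([[10.0, 12.0, 9.0], [1000.0, 1100.0, 950.0]])   # two SKUs
baseline = np.array([[12.0, 14.0, 11.0], [1050.0, 1150.0, 1000.0]])
adjusted = np.array([[13.0, 15.0, 12.0], [1010.0, 1105.0, 960.0]])

def mape(f, a):
    return np.mean(np.abs((a - f) / a)) * 100

def avg_rel_mae(f, baseline_f, a):
    # Geometric mean across series of MAE(adjusted) / MAE(baseline);
    # values below 1 mean the adjustments helped on average.
    ratios = [np.mean(np.abs(a[i] - f[i])) / np.mean(np.abs(a[i] - baseline_f[i]))
              for i in range(len(a))]
    return np.exp(np.mean(np.log(ratios)))

print(f"MAPE baseline: {mape(baseline, actuals):.1f}%, "
      f"adjusted: {mape(adjusted, actuals):.1f}%")   # MAPE says adjustments hurt
print(f"AvgRelMAE: {avg_rel_mae(adjusted, baseline, actuals):.2f}")  # < 1: helped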

The statistical model is often weakest for promotional events, and consequently this is where we would expect the experts to add the greatest value. A detailed modelling study of data from a manufacturer of promoted cleaning products found that, despite the importance of promotional information in determining adjustments, the adjusted final forecasts were worse than the baseline statistical forecasts. However, the experts did add value in some situations: a model that included promotional information could be improved by being combined with the expert information (Trapero et al., 2013).
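As a schematic illustration of this combination idea, and emphatically not Trapero et al.'s actual specification, the sketch below fits a baseline regression with a promotion indicator to synthetic demand data, then estimates a linear combination weight between the model forecast and an overoptimistic expert forecast.

```python
import numpy as np

# Schematic sketch (synthetic data): a baseline model with a promotion
# indicator, linearly combined with an expert forecast. Illustrates the
# general approach only, not the specification in Trapero et al. (2013).
rng = np.random.default_rng(0)
n = 300
promo = rng.integers(0, 2, size=n)                  # 1 when a promotion runs
demand = 50 + 30 * promo + rng.normal(0, 5, size=n)

# Baseline with promotional information: least-squares fit on the indicator.
X = np.column_stack([np.ones(n), promo])
beta, *_ = np.linalg.lstsq(X, demand, rcond=None)
model_fc = X @ beta

# Expert forecast: captures the promotion effect but is overoptimistic.
expert_fc = 50 + 45 * promo + rng.normal(0, 5, size=n)

# Weight minimising squared error of w*model + (1-w)*expert. In practice this
# would be estimated on a holdout, with the forecasts made out of sample.
diffs = model_fc - expert_fc
w = np.sum((demand - expert_fc) * diffs) / np.sum(diffs ** 2)
combined_fc = w * model_fc + (1 - w) * expert_fc

for name, fc in [("model", model_fc), ("expert", expert_fc), ("combined", combined_fc)]:
    print(f"{name:8s} MAE: {np.mean(np.abs(demand - fc)):.2f}")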

These studies lead to the conclusion that different organisations and their forecasting processes have different biases, which in turn lead to varying forecasting performances when the statistical baseline forecasts are compared with various enhanced alternatives. On balance, the expert forecasters have potentially valuable information that is not captured by simple models (not even those which are enhanced by explanatory variables), but the forecasters and their FSSs do not integrate the different sources of information. Despite the sudden flurry of research in recent years, more questions have been raised than answered, leaving moot the critical question of when judgment adds the most value. This leads us to the next question: how can FSSs be improved?

Improving the effectiveness of FSSs

FSSs can potentially include a comprehensive range of features, but these are rarely encountered in practice, as the various surveys demonstrate. Indeed, the focus of forecasting software development has been on enhancing the facilities for data management and statistical forecasting, rather than on the provision of improved support for management judgment. Powerful statistical forecasting algorithms which are capable of handling large data sets are now a feature of most state-of-the-art commercial software. However, these programs usually just provide an opportunity for judgmental overrides, at best. Support for decisions on when to intervene and how large such interventions should be is almost non-existent.
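To make concrete what such intervention support might look like, here is a purely hypothetical sketch: the system compares a proposed adjustment with the spread of the statistical method's recent errors and warns when the adjustment is too small to plausibly matter. The rule, the function name and the 0.5-sigma threshold are all invented for illustration; no commercial package is being described.

```python
import numpy as np

# Hypothetical sketch of intervention guidance an FSS could offer: warn when a
# proposed judgmental adjustment is small relative to the noise in recent
# forecast errors (small adjustments mostly waste review time; cf. Fildes
# et al., 2009). The rule and threshold are invented for illustration.
def review_adjustment(proposed_adjustment: float,
                      recent_errors: np.ndarray,
                      threshold_sigmas: float = 0.5) -> str:
    noise = np.std(recent_errors)
    if abs(proposed_adjustment) < threshold_sigmas * noise:
        return (f"Adjustment ({proposed_adjustment:+.1f}) is below "
                f"{threshold_sigmas} sigma of recent errors ({noise:.1f}); "
                "it is unlikely to improve accuracy. Document a reason or skip.")
    return "Adjustment is large enough to matter; please record its rationale."

errors = np.array([-12.0, 8.0, -5.0, 15.0, -9.0, 11.0])  # recent residuals
print(review_adjustment(3.0, errors))    # flagged as probably not worthwhile
print(review_adjustment(25.0, errors))   # passes the size check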

In the absence of such guidance, judgment is often exercised 'through the back door'. Users override the choice of statistical method recommended by the software, or change its parameters or the length of the time series to which it is fitted, until they obtain a 'statistical' forecast that is close to their judgment (Goodwin, Lee et al., 2007). The result is that the demarcation between the structured and unstructured parts of the process becomes blurred. The judgmental forecaster wastes time on the part of the task which they are least able to perform well, and has greater difficulty in isolating and focusing on the part where their judgment is likely to be beneficial.

One of the reasons for the absence of judgmental support in modern software facilities may be the way in which software is sold. A message that highlights the power and accuracy of the statistical methods embodied in the software, while also advertising the opportunity for unrestricted judgmental intervention, may be attractive to potential purchasers. Judgmental support facilities may also be unfamiliar to potential users, and they may be perceived as devices that will reduce the ease of use of the product and impede the ability to exercise one's judgment fully (Venkatesh & Davis, 2000).

However, it is also the case that research into the design and effectiveness of these facilities, and the ways in which people might use them, is relatively sparse (Fildes et al., 2006): possibly too sparse to encourage commercial software companies to invest in their development. In fact, this was one of the reasons for launching this special section on FSSs. Most of the research on judgmental forecasting has investigated the biases associated with judgment, as opposed to methods for mitigating these biases (Lawrence, Goodwin, O'Connor, & Onkal, 2006). However, we now have some idea of the effectiveness of different forms of feedback (Remus, O'Connor, & Griggs, 1996); ways of encouraging people to trust advice, such as the statistical forecast from an FSS (Yaniv & Kleinberger, 2000; Yaniv & Milyavsky, 2007; Goodwin, Gönül, & Önkal, 2013); whether requiring documentation of the reasons for adjustments improves accuracy (Goodwin, 2000); whether imposing restrictions on the size of adjustments is effective (Goodwin, Fildes, Lawrence, & Stephens, 2011); and ways in which people can be helped to use analogous situations from the past in their forecasts (Lee, Goodwin, Fildes, Nikolopoulos, & Lawrence, 2007). At the same time, much more work remains to be done. While there are many paths with the potential to improve effectiveness, few of them have been researched. Meanwhile, while the researchers delay, the software designers deliver their preferred solutions!

The papers in the special section and the research gaps remaining

The first two papers describe how forecasting support systems can be applied in two very different contexts. Song, Gao and Lin (2013) demonstrate how an FSS was used to improve the accuracy of forecasts of tourist arrivals in Hong Kong. The web-based system allows the forecasts from an advanced econometric model to be judgmentally adjusted by a panel of experts, when appropriate. Savio and Nikolopoulos (2013) show how an FSS, based on analogies, can be used to inform governments' policy implementation strategies. Specifically, the FSS allowed experts to make forecasts in relation to a vehicle scrappage scheme. In this case, the system did not include any statistical forecasts; its entire focus was on supporting expert judgment.

Under the heading of design, adoption and use, Asimakopoulos and Dix (2013) use the technologies-in-practice model to identify the critical factors that are associated with FSS adoption and effective use. Based on interviews with software designers and users, together with organizational documents and direct observations of the forecasting process, they identify a number of different perceptions of the roles that FSSs play in product forecasting, and find that, in practice, their primary use can range from a mere data storage tool to a device which enables the adjustment and communication of forecasts. We should also note that the qualitative methodology based on detailed interview data adopted by the authors is unusual for forecasting publications. There remain many areas of forecasting in which FSSs have potential value, and their design raises novel questions as to the roles of such systems in an organisational setting, not least being the way in which users achieve organisational value from them. In addition, it would be well worth conducting more research into how and why organizations choose to purchase particular FSSs. For example, there is some evidence that forecast accuracy is often not a prime consideration when making such decisions (McCullough, 2000).

The next paper discusses other ways in which judgmental inputs to FSSs can be improved. Thomson, Pollock, Gönül, and Önkal (2013) found that drawing users' attention to the direction and strength of trends improved the forecasts of currency exchange rates. However, the consistency of the forecasts was still found to be poor. The area of user intervention deserves much more research effort than it has received to date, both in behavioural models (in contrast to optimal models), and in the development of new approaches to improving the design of systems to better support intervention. For example, many of the design features of FSSs that have been suggested as worth testing are still to be evaluated. Similarly, we need to know more about the way in which information should be presented to the forecaster (or the group responsible for the forecast). We would expect many of the judgmental biases associated with information presentation that have been observed in other contexts to carry over to the forecasting task. In addition, the research of Fildes et al. (2009) and Franses and Legerstee (2013) has produced apparently conflicting conclusions which need to be reconciled.

The final paper considers the issue of trust in the guidance provided by an FSS. Goodwin et al. (2013) focus on interval forecasts and identify the factors that determine people's levels of trust in the statistical forecasts generated by the system. They then assess the consequences for the performances of the forecasts when the statistical forecasts are not fully trusted. This is part of the more general question as to when information is trusted (i.e., appropriately weighted) when incorporated into the forecast.

Taken together, these papers (and the gaps we have identified) suggest that the key challenges facing the designers of forecasting software relate to behavioural issues rather than the technical aspects of statistical forecasting.2 In order to design and implement an effective FSS within an organisation, a whole range of behavioural hurdles has to be surmounted, including individual cognitive biases, motivational biases, organisational cultural norms and politics. We hope that this collection of papers will encourage and stimulate further research in this important area, so that future designers will be better equipped to meet these challenges.

2 We should recognize, however, that some FSSs include very limited or even inadequate statistical modelling facilities.

Robert Fildes
Paul Goodwin

Papers in the Special Section

Asimakopoulos, S., & Dix, A. (2013). Forecasting support systems technologies-in-practice: a model of adoption and use for product forecasting. International Journal of Forecasting, 29(2), 322–336.

Goodwin, P., Gönül, S., & Önkal, D. (2013). Antecedents and effects of trust in forecasting advice. International Journal of Forecasting, 29(2), 354–366.

Savio, N. D., & Nikolopoulos, K. (2013). A strategic forecasting framework for governmental decision-making and planning. International Journal of Forecasting, 29(2), 311–321.

Song, H., Gao, B. Z., & Lin, V. S. (2013). Combining statistical and judgmental forecasts via a web-based tourism demand forecasting system. International Journal of Forecasting, 29(2), 295–310.


Thomson, M. E., Pollock, A. C., Gönül, M. S., & Önkal, D. (2013). Effects of trend strength and direction on performance and consistency in judgmental exchange rate forecasting. International Journal of Forecasting, 29(2), 337–353.

References

Asimakopoulos, S., Dix, A., & Fildes, R. (2011). Using hierarchical task decomposition as a grammar to map actions in context: application to forecasting systems in supply chain planning. International Journal of Human-Computer Studies, 69, 234–250.

Bunn, D., & Wright, G. (1991). Interaction of judgmental and statistical forecasting: issues and analysis. Management Science, 37, 501–518.

Clemen, R. T. (1989). Combining forecasts: a review and annotated bibliography. International Journal of Forecasting, 5, 559–583.

Davydenko, A., & Fildes, R. (in press). Measuring forecasting accuracy: the case of judgmental adjustments to SKU-level demand forecasts. International Journal of Forecasting.

Fildes, R., & Goodwin, P. (2007). Against your better judgment? How organizations can improve their use of management judgment in forecasting. Interfaces, 37, 570–576.

Fildes, R., Goodwin, P., & Lawrence, M. (2006). The design features of forecasting support systems and their effectiveness. Decision Support Systems, 42, 351–361.

Fildes, R., Goodwin, P., Lawrence, M., & Nikolopoulos, K. (2009). Effective forecasting and judgmental adjustments: an empirical evaluation and strategies for improvement in supply-chain planning. International Journal of Forecasting, 25, 3–23.

Franses, P. H., & Legerstee, R. (2009). Properties of expert adjustments on model-based SKU-level forecasts. International Journal of Forecasting, 25, 35–47.

Franses, P. H., & Legerstee, R. (2013). Do statistical forecasting models for SKU-level data benefit from including past expert knowledge? International Journal of Forecasting, 29, 80–87.

Galbraith, C. S., & Merrill, G. B. (1996). The politics of forecasting: managing the truth. California Management Review, 38, 29–43.

Goodwin, P. (2000). Improving the voluntary integration of statistical forecasts and judgment. International Journal of Forecasting, 16, 85–99.

Goodwin, P., Fildes, R., Lawrence, M., & Nikolopoulos, K. (2007). The process of using a forecasting support system. International Journal of Forecasting, 23, 391–404.

Goodwin, P., Fildes, R., Lawrence, M., & Stephens, G. (2011). Restrictiveness and guidance in support systems. Omega: International Journal of Management Science, 39, 242–253.

Goodwin, P., Lee, W.-Y., Fildes, R., Nikolopoulos, K., & Lawrence, M. (2007). Understanding the use of forecasting systems: an interpretive study in a supply-chain company. University of Bath, School of Management, Working Paper 2007.14.

Goodwin, P., Onkal, D., & Lawrence, M. (2011). Improving the role of judgment in economic forecasting. In M. P. Clements, & D. F. Hendry (Eds.), The Oxford handbook of economic forecasting (pp. 163–189). Oxford: Oxford University Press.

Goodwin, P., & Wright, G. (1993). Improving judgmental time series forecasting: a review of the guidance provided by research. International Journal of Forecasting, 9, 147–161.

Keen, P. G. W., & Scott Morton, M. S. (1978). Decision support systems: an organizational perspective. Reading, MA: Addison-Wesley.

Kusters, U., McCullough, B. D., & Bell, M. (2006). Forecasting software: past, present and future. International Journal of Forecasting, 22, 599–615.

Lawrence, M., Goodwin, P., O'Connor, M., & Onkal, D. (2006). Judgmental forecasting: a review of progress over the last 25 years. International Journal of Forecasting, 22, 493–518.

Lee, W. Y., Goodwin, P., Fildes, R., Nikolopoulos, K., & Lawrence, M. (2007). Providing support for the use of analogies in demand forecasting tasks. International Journal of Forecasting, 23, 377–390.

McCullough, B. D. (2000). Is it safe to assume that software is accurate? International Journal of Forecasting, 16, 349–357.

Moon, M. A., Mentzer, J. T., & Smith, C. D. (2003). Conducting a sales forecasting audit. International Journal of Forecasting, 19, 5–25.

O'Connor, M., Remus, W., & Griggs, K. (1993). Judgemental forecasting in times of change. International Journal of Forecasting, 9, 163–172.

Onkal, D., Goodwin, P., Thomson, M., Gonul, S., & Pollock, A. (2009). The relative influence of advice from human experts and statistical methods on forecast adjustments. Journal of Behavioral Decision Making, 22, 390–409.


Ord, K., & Fildes, R. (2013). Principles of business forecasting. Mason, OH & Andover, UK: South-Western Cengage Learning.

Racer, R., & Gaffney, J. E. (1984). Interpretive processing/expert systems: an initiative in weather data analysis and forecasting. National Weather Digest, 9, 31–45.

Remus, W., O'Connor, M., & Griggs, K. (1996). Does feedback improve the accuracy of recurrent judgemental forecasts? Organizational Behavior and Human Decision Processes, 66, 22–30.

Sanders, N. R., & Manrodt, K. B. (2003). Forecasting software in practice: use, satisfaction, and performance. Interfaces, 33, 90–93.

Trapero, J. R., Pedregal, D. J., Fildes, R., & Kourentzes, N. (2013). Analysis of judgmental adjustments in the presence of promotions. International Journal of Forecasting, 29, 234–243.

Venkatesh, V., & Davis, F. D. (2000). A theoretical extension of the Technology Acceptance Model: four longitudinal field studies. Management Science, 46, 186–204.

Wright, G., & Ayton, P. (1986). The psychology of forecasting. Futures, 18, 420–439.

Yaniv, I., & Kleinberger, E. (2000). Advice taking in decision making: egocentric discounting and reputation formation. Organizational Behavior and Human Decision Processes, 83, 260–281.

Yaniv, I., & Milyavsky, M. (2007). Using advice from multiple sources to revise and improve judgments. Organizational Behavior and Human Decision Processes, 103, 104–120.

Robert Fildes
Department of Management Science, Lancaster University Management School, LA1 4YX, United Kingdom
E-mail address: [email protected].

Paul Goodwin∗
School of Management, University of Bath, Bath, BA2 7AY, United Kingdom
E-mail address: [email protected].

∗ Corresponding editor.