


International Journal of Forecasting

Forecasting—Yesterday, Today and Tomorrow

1. Introduction

First of all, I want to thank all of the people who honored me by both organizing the conference at which these papers were presented and taking on the task of editing this special issue. My heart-felt thanks go to Prakash Loungani of the IMF and my colleagues Fred Joutz and Tara Sinclair.

I have chosen the title for this paper because, over the years, there have been various recurring themes in my research interests. Some of these issues have been resolved, but others require further attention. These themes include (1) an understanding of the forecasting process, (2) the procedures for evaluating forecasts, (3) the sources of forecast errors, (4) data problems and forecast revisions, (5) our failure to predict cyclical peaks, and (6) our obligation as forecasters to provide some stylized facts to macromodel builders. A paper that I presented in 2005 at the Leipzig Conference organized and hosted by Ullrich Heilemann focused on these same issues (Stekler, 2007). A number of the issues that were raised in that paper have now been resolved. I want to mention those, as well as focus on what I think must still be done.

2. Understanding the forecasting process

In a series of papers, Lahiri and Sheng (2008, 2010a,b) explained the forecasting process in a Bayesian decision-making framework. Forecasters start with priors, then modify their forecasts after a number of months as they obtain more information. Lahiri and Sheng showed that forecasts which are made earlier than the middle of the prior year are generally not modified much because little new information that would cause them to modify their predictions becomes available.1 Moreover, they also explained why disagreement exists and why the forecasts eventually converge. However, what we do not know is whether or not some forecasters are perennial optimists (or pessimists).

1 Isiklar and Lahiri (2007) obtained similar results.
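The updating mechanism described above can be illustrated with a minimal conjugate normal-normal sketch. This is not Lahiri and Sheng's actual model; the prior and signal variances are invented. The point is why forecasts made at long leads barely move: the incoming signal is too noisy to shift the prior.

```python
# A hedged sketch of Bayesian forecast updating: a normal prior over
# next year's growth rate is combined with a noisy signal. At long
# leads the signal variance is large, so the posterior barely moves.
# All numbers are hypothetical.

def update_forecast(prior_mean, prior_var, signal, signal_var):
    """One conjugate normal-normal update of a growth forecast."""
    weight = prior_var / (prior_var + signal_var)  # trust placed in the signal
    post_mean = prior_mean + weight * (signal - prior_mean)
    post_var = prior_var * signal_var / (prior_var + signal_var)
    return post_mean, post_var

# Long-horizon signal: very noisy, so the forecast hardly changes.
m_long, v_long = update_forecast(prior_mean=2.5, prior_var=1.0,
                                 signal=4.0, signal_var=9.0)
# Short-horizon signal: precise, so the forecast converges toward it.
m_short, v_short = update_forecast(prior_mean=2.5, prior_var=1.0,
                                   signal=4.0, signal_var=0.25)
```

With the noisy long-lead signal the forecast moves only from 2.5 to 2.65, while the precise short-lead signal pulls it to 3.7, which is the convergence pattern the text describes.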

3. Procedures for evaluating forecasts

3.1. Systematic bias

One of the standard forecast evaluation procedures is to test for bias and/or efficiency. Robert Fildes and I (Fildes & Stekler, 2002) noted that there were inconsistencies in some of the results obtained from these evaluations. There were systematic errors in the qualitative results, but the Holden and Peel (1990) and Mincer and Zarnowitz (1969) tests failed to reject the null of unbiasedness. The qualitative results showed that growth was underestimated when it was rising and overestimated during declines. Since systematic errors are inconsistent with unbiased forecasts, the problem lay in the way in which the statistical methodology that tested the quantitative data for rationality was applied.

This issue has now been addressed by Sinclair, Joutz, and Stekler (2010). Since the systematic errors were correlated with the state of the economy, inserting a categorical dummy (to account for the state of the economy) into the Mincer–Zarnowitz (MZ) equation resulted in the null of rationality being rejected more frequently.2 However, an unanswered question is whether a categorical dummy is the optimal variable to insert into the MZ equation to account for this bias.
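The augmented regression just described can be sketched on synthetic data. The data-generating process below is invented and is not the Sinclair, Joutz, and Stekler specification; it only shows the mechanics: fit actual = a + b*forecast + c*recession, where under the null of unbiasedness a = 0, b = 1, and the dummy coefficient c = 0, so a state-dependent bias surfaces in c.

```python
import numpy as np

# Hedged sketch of a Mincer-Zarnowitz regression augmented with a
# state-of-the-economy dummy. Data are synthetic: the simulated
# forecaster under-predicts growth in expansions and over-predicts
# it in recessions, the pattern described in the text.

rng = np.random.default_rng(0)
n = 400
recession = (rng.random(n) < 0.25).astype(float)
actual = 3.0 - 4.0 * recession + rng.normal(0.0, 1.0, n)
# State-dependent bias built in: forecast = actual - 1 + 2*recession.
forecast = actual - 1.0 + 2.0 * recession + rng.normal(0.0, 0.1, n)

# Augmented MZ regression: actual = a + b*forecast + c*recession.
X = np.column_stack([np.ones(n), forecast, recession])
a, b, c = np.linalg.lstsq(X, actual, rcond=None)[0]
```

Here b stays near 1 while c comes out strongly negative, so the dummy, rather than the slope, is what reveals the state-dependent bias.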

3.2. Multivariate evaluations

Most forecast evaluations have been done on a univariate basis, analyzing one variable at a time. Thus, the growth and inflation forecasts are evaluated separately. However, I have contended that forecast evaluations should be conducted using a multivariate framework. GDP and inflation forecasts are issued together, so it makes no sense to analyze them separately and conclude that one is good while the other is off the mark. The forecasts were presumably determined simultaneously in a consistent model, and policy decisions are made by considering the future movements of both variables.

2 The errors made in one direction offset those made in the other direction, thus cancelling each other out and yielding no bias.

http://dx.doi.org/10.1016/j.ijforecast.2014.03.003
0169-2070/© 2014 International Institute of Forecasters. Published by Elsevier B.V. All rights reserved.

There has been considerable progress in this area. Sinclair, Stekler, and Kitzinger (2010) analyzed multivariate directional forecasts, while Sinclair, Gamber, Stekler, and Reid (2012) examined the economic costs of making incorrect growth and inflation forecasts using both variables in a decision rule. Finally, Sinclair, Stekler, and Carnow (2012) compared a vector of growth, inflation, and unemployment forecasts with a vector of the actual changes in these variables, and tested whether the two vectors were significantly different. Tests for bias and efficiency have been developed, but these have always required assumptions about the nature of the loss functions.3 I wonder whether a different approach that does not require these assumptions would be possible.
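One standard way to test a vector of forecast errors jointly, in the spirit of the vector comparison just described though not necessarily the exact statistic those papers use, is a Hotelling T-squared test on the stacked errors. The sketch below uses synthetic growth, inflation, and unemployment errors with modest individual biases that a joint test can detect.

```python
import numpy as np

# Hedged sketch of a joint (multivariate) bias test: stack the three
# forecast-error series into vectors and test whether the mean error
# vector differs from zero, instead of testing each series alone.
# Data are synthetic; the statistic is the standard Hotelling T^2 form.

rng = np.random.default_rng(1)
n, p = 120, 3
# Synthetic error vectors (growth, inflation, unemployment).
errors = rng.multivariate_normal([0.5, -0.4, 0.5], np.eye(p), size=n)

mean_err = errors.mean(axis=0)
cov = np.cov(errors, rowvar=False)
t2 = n * mean_err @ np.linalg.solve(cov, mean_err)  # Hotelling T^2
# Under the null, (n-p)/(p(n-1)) * T^2 is F(p, n-p); the 5% critical
# value for F(3, 117) is roughly 2.68.
f_stat = (n - p) / (p * (n - 1)) * t2
```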

4. Explanations for forecast errors

Most evaluations stop with characterizations of the errors, and fail to determine why they occurred; there have been only a few studies that have yielded some explanations for these errors or biases. For example, Heilemann (2002) examined the process by which the RWI econometric model generated the 1997 forecasts for the German economy. He demonstrated how the assumptions that were made affected the estimates of each of the major components of GDP.

4.1. Biases and forecast smoothing

Coibion and Gorodnichenko (2012) provided a possible explanation for biases and deviations from rationality. They suggested that biases result from forecast rigidity, meaning that individuals do not revise their forecasts quickly enough.4 They found some evidence for this phenomenon in the revisions of the consensus forecasts obtained from the SPF surveys. Similarly, Loungani, Stekler, and Tamirisa (2013) found further evidence in the Consensus forecasts of both advanced and developing countries.
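The Coibion–Gorodnichenko test regresses the consensus forecast error on the consensus forecast revision; a positive slope b indicates rigidity, and under sticky information the implied fraction of unabsorbed information is b/(1+b). The simulation below is a hedged sketch with an invented data-generating process, and the target is made fully observable so that the relation holds exactly.

```python
import numpy as np

# Sketch of the information-rigidity regression: simulate a "sticky"
# consensus that only absorbs half of each period's news, then recover
# the degree of rigidity from the error-on-revision slope.

rng = np.random.default_rng(2)
T = 300
lam = 0.5  # true fraction of information NOT absorbed each period
actual = 2.0 + rng.normal(0.0, 1.0, T)

# Sticky consensus: weighted average of the full-information forecast
# (here the actual itself, to keep the relation exact) and last
# period's consensus.
consensus = np.empty(T)
consensus[0] = 2.0
for t in range(1, T):
    consensus[t] = (1 - lam) * actual[t] + lam * consensus[t - 1]

error = actual[1:] - consensus[1:]           # forecast error
revision = consensus[1:] - consensus[:-1]    # forecast revision
X = np.column_stack([np.ones(T - 1), revision])
b = np.linalg.lstsq(X, error, rcond=None)[0][1]  # slope = lam/(1-lam)
implied_rigidity = b / (1 + b)               # recovers lam
```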

On the other hand, some preliminary research examining a single organization's forecasts found biases, but found that they were not attributable to forecast smoothing. In fact, in the majority of cases, this organization's revisions were not smoothed but, on average, changed the forecasts in the wrong direction.

4.2. Data problems

Clearly, if the real-time GDP or inflation data do not present an accurate picture of the state of the economy, such discrepancies would contribute to the forecast errors.

3 See Komunjer and Owyang (2012).

4 Based on the work of Lahiri and Sheng mentioned above, forecasters would only revise their estimates when new information became available as the forecasting lead diminished. Thus, the forecast rigidity explanation might only be valid for very short-horizon predictions.

The Bureau of Economic Analysis5 has noted that the mean absolute difference between the earliest estimates of the growth rate in a particular quarter and the historical data was about 1%. The problem is that many discrepancies occur during recessions (Sinclair & Stekler, 2013). Moreover, Finzen and Stekler (1999) and Stekler and Talwar (2013) both show that data problems might have contributed to forecasters' failures to predict recessions in advance.
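The revision measure behind that figure is simply the mean absolute difference between the real-time and historical vintages; a minimal sketch with invented numbers:

```python
import numpy as np

# Hypothetical quarterly growth rates: the real-time ("advance")
# estimates versus the later revised historical values.
advance    = np.array([2.1, 3.0, -0.5, 1.8, 2.6])
historical = np.array([2.9, 2.2,  0.8, 1.5, 3.1])

# Mean absolute revision, the statistic the BEA figure refers to.
mad = np.abs(historical - advance).mean()
```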

4.3. Failures to predict recessions

There are other factors beyond data problems that contribute to our failures to predict recessions in advance or even to identify them when they occur. Given that the forecasting process can be explained in a Bayesian framework, there is the possibility that recessions would not be predicted if forecasters had low priors that such an event would occur (Stekler, 1972). Alternatively, but within the same framework, the same result might be observed if individuals have asymmetric costs between (1) predicting a turn that does not occur and (2) not predicting a turn that does happen (Schnader & Stekler, 1997).

5. What should be done?

I have suggestions both for discovering the sources of the forecast errors and for improving our ability to predict recessions. Post mortems dealing with the sources of forecast errors have been scarce. We therefore need some case studies showing how particular individuals or organizations went about making forecasts. These studies would enable us to determine why the forecasters misinterpreted the data or did not see what was happening.6

Personally, I do not have any difficulty predicting recessions. I have forecast every one (and a few more) since 1957. Even though there have recently been a number of studies that have focused on nowcasting or monitoring the conditions that can lead up to a recession, I would like to suggest that an old method should be explored again. The rate-of-change methods, such as first differences and diffusion indexes, did predict every recession, but also called some that did not occur. If modern time series methods were used, could the number of false turns that were predicted be reduced?
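A minimal sketch of the diffusion-index rule referred to above (all series, thresholds, and run lengths are invented for illustration): the index is the share of indicators rising each period, and a downturn is flagged when the index stays below 50% for several consecutive periods. The filtering rule at the end is exactly where modern time-series methods might cut the false-alarm rate.

```python
import numpy as np

def diffusion_index(indicators):
    """indicators: 2-D array (time x series) of levels.
    Returns the percentage of series rising each period."""
    rising = np.diff(indicators, axis=0) > 0
    return 100.0 * rising.mean(axis=1)

def downturn_signal(index, threshold=50.0, run=3):
    """True where the index has been below threshold for `run` periods."""
    below = index < threshold
    return np.array([t >= run - 1 and below[t - run + 1:t + 1].all()
                     for t in range(len(below))])

# Hypothetical monthly levels of four indicators.
data = np.array([
    [100.0, 50.0, 75.0, 20.0],
    [101.0, 51.0, 76.0, 21.0],   # broad expansion
    [102.0, 50.0, 75.0, 22.0],   # mixed signals
    [101.0, 49.0, 74.0, 21.0],   # broad decline begins
    [100.0, 48.0, 73.0, 20.0],
    [ 99.0, 47.0, 72.0, 19.0],
])

idx = diffusion_index(data)      # share of indicators rising
sig = downturn_signal(idx)       # flagged only after a sustained run
```

The requirement of a sustained run below the threshold is the crude noise filter; replacing it with a formal time-series rule is the open question posed in the text.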

6. Stylized facts obtained from evaluations

Since forecasters are usually well-informed experts, understanding the process they follow when making forecasts and the characteristics of those predictions may provide insights for the construction of macroeconomic models. These models contain assumptions about the way in which the agents in those models form their expectations. These assumptions should therefore be consistent with the stylized facts that we have obtained from the forecast evaluations.

5 http://www.bea.gov/newsrealeses/national/gdp/2013/gdp-4qadv.htm.

6 A study of qualitative forecasts undertaken in 1929–30 provided some insights into the thinking of individuals. It showed that individuals were utilizing analogies that were not relevant to the situation in their analyses (see Goldfarb, Stekler, & David, 2005).

The evidence indicates that most forecasters use a Bayesian process when forming their views about events that might occur, say, two or more years in the future. The stylized facts would include the following:

(1) Forecasters have a prior probability that an event will occur.

(2) They do not revise their beliefs for a substantial period of time.

(3) They generally fail to predict recessions (major accelerations of inflation) in advance, and sometimes do not even recognize them as they occur.

(4) The forecasts generally contain systematic biases; i.e., positive (negative) growth rates are underestimated (overestimated), and conversely with inflation.

In conclusion, we forecasters can make a real contribution to the profession by making macroeconomic theorists aware of these stylized facts in order to ensure that their assumptions are in accord with them.

References

Coibion, O., & Gorodnichenko, Y. (2012). What can survey forecasts tell us about information rigidities? Journal of Political Economy, 120, 116–159.

Fildes, R., & Stekler, H. O. (2002). The state of macroeconomic forecasting. Journal of Macroeconomics, 24, 435–468.

Finzen, D., & Stekler, H. O. (1999). Why did forecasters fail to predict the 1990 recession? International Journal of Forecasting, 15, 309–323.

Goldfarb, R. S., Stekler, H. O., & David, J. (2005). Methodological issues in forecasting: insights from the egregious business forecast errors of late 1930. Journal of Economic Methodology, 12, 517–542.

Heilemann, U. (2002). Increasing the transparency of macroeconometric forecasts: a report from the trenches. International Journal of Forecasting, 18, 85–105.

Holden, K., & Peel, D. A. (1990). On testing for unbiasedness and efficiency of economic forecasts. Manchester School, 58, 120–127.

Isiklar, G., & Lahiri, K. (2007). How far ahead can we forecast? Evidence from cross-country surveys. International Journal of Forecasting, 23, 167–187.

Komunjer, I., & Owyang, M. T. (2012). Multivariate forecast evaluation and rationality testing. Review of Economics and Statistics, 94, 1066–1080.

Lahiri, K., & Sheng, X. (2008). Evolution of forecast disagreement in a Bayesian learning model. Journal of Econometrics, 144, 325–340.

Lahiri, K., & Sheng, X. (2010a). Measuring forecast uncertainty by disagreement: the missing link. Journal of Applied Econometrics, 25, 514–538.

Lahiri, K., & Sheng, X. (2010b). Learning and heterogeneity in GDP and inflation forecasts. International Journal of Forecasting, 26, 265–292.

Loungani, P., Stekler, H., & Tamirisa, N. (2013). Information rigidities in growth forecasts: some cross-country evidence. International Journal of Forecasting, 29(4), 605–621.

Mincer, J., & Zarnowitz, V. (1969). The evaluation of economic forecasts. In J. Mincer (Ed.), Economic forecasts and expectations: analysis of forecasting behavior and performance (pp. 14–20). NBER.

Schnader, M. H., & Stekler, H. O. (1997). Sources of turning point forecast errors. Applied Economics Letters, 5, 519–521.

Sinclair, T. M., Gamber, E. N., Stekler, H. O., & Reid, E. (2012). Jointly evaluating the Federal Reserve's forecast of GDP growth and inflation. International Journal of Forecasting, 28, 309–314.

Sinclair, T. M., Joutz, F., & Stekler, H. O. (2010). Can the Fed predict the state of the economy? Economics Letters, 108(1), 28–32.

Sinclair, T. M., & Stekler, H. O. (2013). Examining the quality of early GDP component estimates. International Journal of Forecasting, 29(4), 736–750.

Sinclair, T. M., Stekler, H. O., & Carnow, W. (2012). A new approach for evaluating economic forecasts. Economics Bulletin, 32(3), 2332–2342.

Sinclair, T. M., Stekler, H. O., & Kitzinger, L. (2010). Directional forecasts of GDP and inflation: a joint evaluation with an application to Federal Reserve predictions. Applied Economics, 42(18), 2289–2297.

Stekler, H. O. (1972). An analysis of turning point forecast errors. American Economic Review, 62, 724–729.

Stekler, H. O. (2007). The future of macroeconomic forecasting: understanding the forecasting process. International Journal of Forecasting, 23, 237–248.

Stekler, H. O., & Talwar, R. J. (2013). Forecasting the downturn of the great recession. Business Economics, 48, 113–120.

Herman O. Stekler∗
372 Monroe Hall, 2115 G St., NW, Washington, DC 20052, USA
E-mail address: [email protected]

∗ Tel.: +1 202 994 7581.