Time Series Models. By A. C. Harvey

Time Series Models. By A. C. Harvey. Review by: E. Renshaw. Biometrics, Vol. 38, No. 4 (Dec., 1982), p. 1124. Published by: International Biometric Society. Stable URL: http://www.jstor.org/stable/2529903






leading to simple estimates. In clinical trials there is an inescapable dilemma, for Bayesian and non-Bayesian alike, that the welfare of the present patients has to be balanced against benefits to future sufferers. Kadane and Sedransk present a discussion which clarifies the issue; but as it involves several people's utilities, it should really be considered as part of multi-person decision theory, or, in other words, the (so-called) theory of games. Not surprisingly, Box contributes a short paper on robustness. But a tour of the book (and elsewhere) makes it clear that Bayesian procedures, like non-Bayesian, differ much in robustness. When we are concerned with estimating a proportion, or the mean of a normal distribution, common procedures are not only robust but also virtually the same for Bayesians and non-Bayesians. When we come to hypothesis testing, we have quite a different kettle of fish. To take a simple hypothetical example, which is mirrored in many practical applications: we want to know whether smoking increases the risk of diabetes. We estimate the proportion P1 of diabetics in a sample of nonsmokers, and P2 in a sample of smokers. The conventional procedure is simplicity itself; essentially all we need do is to compare the difference between the observed proportions with its standard error. The result in such cases is often so obvious that detailed numerical calculation is hardly needed. In Bayesian terms the correct procedure was suggested by Jeffreys; one assigns prior distributions to the null and alternative hypotheses, and then uses Bayes's theorem to find the posterior probability that the null hypothesis is true. But to find priors which are proper, natural, robust in their effect and convincing to all concerned is a desirable but very elusive goal. The most promising attempts are those of Good and his co-workers. But their solutions are much more complicated than the non-Bayesian (deviation/SE) criterion. New and intriguing tests of hypotheses are suggested by Zellner and Siow, and by Bernardo. But as Zellner and Siow's priors seem improper, and Bernardo's depend on the sampling procedure, neither seems entirely convincing.
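For the smoking example, the conventional criterion the reviewer alludes to, the difference between the two observed proportions compared with its standard error, can be written out explicitly; the notation below is a reconstruction under the usual large-sample approximation, not a formula quoted in the review.

\[
z \;=\; \frac{\hat{P}_2 - \hat{P}_1}{\sqrt{\dfrac{\hat{P}_1(1-\hat{P}_1)}{n_1} + \dfrac{\hat{P}_2(1-\hat{P}_2)}{n_2}}}
\]

Here n_1 and n_2 are the sizes of the nonsmoker and smoker samples, and a value of |z| well above about 2 is conventionally read as evidence against the null hypothesis of equal risk.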

One topic is noticeably missing from the discussions, namely a definition of the word 'Bayesian', if there is one. It is clear from the text that Bayesians, self-styled, are heterogeneous: there are Savage Bayesians, Good Bayesians, Logical Bayesians, and Others, and many people would use Bayes's theorem in limited circumstances, but declare themselves non-Bayesians. Should not the word 'Bayesian' be abandoned, as a noninformative prior, and replaced by more precise descriptions such as 'subjectivist' or 'Doogian'?

Possibly the most perceptive remark in the book is that of Tom Leonard on p. 543, '...statistical practice is a subjective process which is highly dependent on the expertise, honesty, and experience of the statistician...'.

Two minor points: the book has no index, and its binding seems less than totally secure. But you would be well advised to read it before it falls to pieces.

CEDRIC A. B. SMITH
Galton Laboratory, University College, London, England.


HARVEY, A. C. Time Series Models. John Wiley, New York, 1981, 229 pp. $39.95.

Whilst the usual biometric approach to time series analysis centres around the interpretation of series data and the development of appropriate models framed within a biological context, in econometrics underlying mechanisms are usually either unknown or too complicated to be useful, and so emphasis is placed on the development of regression-type models and their use as predictors. The author regards this work as a companion to his earlier text on The Econometric Analysis of Time Series, and the contents are very definitely centred around the second of these two approaches through the study of autoregressive-moving average (ARMA) models. Even so, the clarity of writing combined with an interesting presentation ensures that this text will still appeal to biometricians, especially as many of the references relate to very recent papers (over one half date from 1976 or later).

After a thoughtfully prepared introduction which summarizes the main points of the book, there is a neat presentation of both time and frequency domain properties of stationary processes together with an unusually short exposition of spectral estimation. An excellent account is given of the Kalman filter. This useful technique allows for the inclusion of time-varying parameters which are assumed to be governed by a well-defined process, and its use is lamentably absent from the biological literature. The estimation of ARMA (p, q) models for fixed p and q proceeds in the usual manner, together with an examination of the consequences of (p, q)-misspecification and their application as predictors. The text concludes with a selection of topics from time series regression.
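The local level (random walk plus noise) model is the simplest setting in which the Kalman filter tracks a time-varying parameter, and it illustrates the recursion the reviewer has in mind. The sketch below is a minimal Python/NumPy illustration; the function name, the noise variances and the simulated series are invented for the example and are not drawn from Harvey's text.

```python
import numpy as np

def kalman_filter_local_level(y, sigma_eps2, sigma_eta2, a0=0.0, p0=1e6):
    """Kalman filter for the local level model
        y_t  = mu_t + eps_t,        eps_t ~ N(0, sigma_eps2)
        mu_t = mu_{t-1} + eta_t,    eta_t ~ N(0, sigma_eta2)
    Returns filtered estimates of mu_t and their variances."""
    n = len(y)
    a = np.empty(n)          # filtered state means
    p = np.empty(n)          # filtered state variances
    a_pred, p_pred = a0, p0  # vague prior on the initial level
    for t in range(n):
        v = y[t] - a_pred          # one-step prediction error
        f = p_pred + sigma_eps2    # prediction-error variance
        k = p_pred / f             # Kalman gain
        a[t] = a_pred + k * v      # measurement update
        p[t] = (1.0 - k) * p_pred
        a_pred = a[t]              # time update: random-walk evolution
        p_pred = p[t] + sigma_eta2
    return a, p

# Example: recover a slowly drifting level from noisy observations.
rng = np.random.default_rng(0)
true_level = np.cumsum(rng.normal(scale=0.1, size=200))
y = true_level + rng.normal(scale=1.0, size=200)
level, var = kalman_filter_local_level(y, sigma_eps2=1.0, sigma_eta2=0.01)
```

In practice the two variances would themselves be estimated, for instance by maximising the prediction-error decomposition of the likelihood that the filter delivers as a by-product.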

My main regret is the severe lack of worked examples. Not only does this make theoretical material harder to place in context, but it also prevents valuable insight being gained into problems of interpretation. A lesser criticism concerns a tendency for several sections to finish abruptly with referenced comment just when further elaboration would be of considerable interest. Nevertheless the author has provided an eminently readable update of the current state of the art, and as a general text his book is to be recommended.

E. RENSHAW
Department of Statistics, University of Edinburgh, Edinburgh, Scotland.

ARTHANARI, T. S. and DODGE, Y. Mathematical Programming in Statistics. John Wiley, New York, 1981, 413 pp. £18.35.

The authors state that their intention is 'to bring together most of the available results on applications of mathematical programming in statistics'. Such a wide-ranging brief is bound to lead to a somewhat uneasy assortment of topics, the sort of thing one would expect from an information retrieval system fed with the keywords 'Mathematical Programming' and 'Statistics'.

