This article was downloaded by: [University of Tennessee At Martin]
On: 04 October 2014, At: 00:08
Publisher: Routledge
Informa Ltd Registered in England and Wales Registered Number: 1072954 Registered office: Mortimer House, 37-41 Mortimer Street, London W1T 3JH, UK

Australian Academic & Research Libraries
Publication details, including instructions for authors and subscription information: http://www.tandfonline.com/loi/uarl20

The Use of Bibliometric Indicators to Measure the Research Productivity of Australian Academics
Pam Royle & Ray Over, La Trobe University
Published online: 28 Oct 2013.

To cite this article: Pam Royle & Ray Over (1994) The Use of Bibliometric Indicators to Measure the Research Productivity of Australian Academics, Australian Academic & Research Libraries, 25:2, 77-88, DOI: 10.1080/00048623.1994.10754876

To link to this article: http://dx.doi.org/10.1080/00048623.1994.10754876

Taylor & Francis makes every effort to ensure the accuracy of all the information (the "Content") contained in the publications on our platform. However, Taylor & Francis, our agents, and our licensors make no representations or warranties whatsoever as to the accuracy, completeness, or suitability for any purpose of the Content. Any opinions and views expressed in this publication are the opinions and views of the authors, and are not the views of or endorsed by Taylor & Francis. The accuracy of the Content should not be relied upon and should be independently verified with primary sources of information. Taylor and Francis shall not be liable for any losses, actions, claims, proceedings, demands, costs, expenses, damages, and other liabilities whatsoever or howsoever caused arising directly or indirectly in connection with, in relation to or arising out of the use of the Content.

This article may be used for research, teaching, and private study purposes. Any substantial or systematic reproduction, redistribution, reselling, loan, sub-licensing, systematic supply, or distribution in any form to anyone is expressly forbidden. Terms & Conditions of access and use can be found at http://www.tandfonline.com/page/terms-and-conditions



The Use of Bibliometric Indicators to Measure the Research Productivity of Australian Academics

PAM ROYLE and RAY OVER
La Trobe University

ABSTRACT The appropriateness of using the database used by the Institute for Scientific Information (ISI) in compiling the Science Citation Index (SCI) and the Social Sciences Citation Index (SSCI) to measure the research productivity of Australian academics is examined. It is shown from analysis of publications listed in the research reports of three Australian universities that only 27% of journal articles authored by academics in social science disciplines are captured by the ISI database, in contrast to 74% of journal articles generated by academics in science disciplines. Using a performance indicator based solely on ISI source indexes will thus provide a distorted view of the research output of Australian academics, particularly in the social sciences. The alternatives of relying on entries in discipline-specific source indices or permitting universities or individuals to decide what constitutes a publication, and hence is to be included in the count, raise additional concerns. The major problem in using frequency of publication as a measure of research productivity is to arrive at a valid definition of what constitutes a publication.

The analyses reported in this article were supported by funding to Pam Royle through the Borchardt Library, La Trobe University. Correspondence relating to the article should be directed to either Pam Royle, Borchardt Library, La Trobe University, Bundoora, Australia 3083, or Ray Over, Department of Psychology, La Trobe University, Bundoora, Australia 3083.

Introduction Australian universities are under increasing pressure to demonstrate efficiency and effectiveness in the manner in which they use government funds in meeting defined objectives. Although the processes Australian universities must follow in demonstrating quality in contexts such as teaching and research have not yet been fully developed, there can be little doubt from documents such as Achieving Quality (published in 1992 by the Higher Education Council of the National Board of Employment, Education and Training) that Australian universities will increasingly be required to be accountable in all aspects of functioning. Thus 'universities should be able to demonstrate how they know that activities conducted in the name of the university, or using university resources and/or affiliations, are activities of quality'1. In requiring the higher education system to justify its practices in cost-benefit terms, the Australian government is adopting an approach that is now well established in Europe and North America2,3. Quality assurance as conceptualised by the Higher Education Council involves evaluation of achievement through the use of quantitative performance indicators.

Bibliographic methods are now widely used to obtain quantitative measures of research output and impact within different disciplines4-7. Several studies have examined the publication and citation records of Australian academics8-11. It is often claimed that bibliometric assessment, in contrast to the subjectivity associated with peer review or evaluation, provides objective specification of research performance. The sources most frequently used for this purpose are the Science Citation Index (SCI) and the Social Sciences Citation Index (SSCI), published by the Institute for Scientific Information (ISI) in Philadelphia, USA. The ISI database is multidisciplinary in terms of the journals it covers. The input comprises not only all articles published in each of many source journals but all cited references in each such article. Such a database is used to produce annually the Science Citation Index and the Social Sciences Citation Index. The SCI and the SSCI each consists of a Source Index containing bibliographic descriptions of all articles published in the relevant source journals and books, and a Citation Index listing all bibliographic references from articles in the Source Index. Measures of research output or productivity are based on a count of entries in the Source Index, while research impact is estimated from listings in the Citation Index.

This article discusses the appropriateness of measuring the research productivity of Australian academics by counting the number of publications as listed in SCI and SSCI. The analyses we report indicate that the ISI databases capture only part of the research output of Australian academics, and in particular, that publication counts based on ISI source journals distort the research output of Australian academics to a much greater extent in the case of the social sciences than the natural sciences. The question of whether bibliographic resources other than the ISI database yield quantitative measures of research performance suitable for quality assurance purposes is also addressed.

Specifying Research Productivity In identifying performance indicators, the search has been for objective, quantitative measures that specify level of research achievement in a defined domain. A suitable measure should ideally permit comparisons between individuals, between educational units (departments or universities), and between nations. Frequency of publication offers an obvious means of specifying research productivity. Since performance can be specified in terms of number of publications per unit time, the measure is quantitative and seemingly objective.

Rate of publication is a primary correlate of career advancement within the Australian university system, and on this basis has validity as a performance indicator. All Australian universities have formal procedures governing promotion of academic staff. Although neither the criteria nor the weighting given to different criteria are uniform across the system as a whole, all Australian universities specify scholarship as indexed by research and publication as a primary requirement for promotion12. A survey conducted at the University of Queensland by Moses13 indicated that most academics perceived promotion as governed much more by research performance than by achievement in teaching, administration, or service. One means of establishing the role of publication in career advancement is to compare the research records of academics who achieved promotion over a particular period with the records of academics who were not promoted. In using this methodology, Over14 found that the performance indicators most directly associated with advancement from lecturer to senior lecturer or from senior lecturer to reader/associate professor were rate of publication in refereed journals, the frequency with which a person's publications are cited in the scholarly literature, research grants applied for and obtained, and the number of PhD students under the person's supervision.

Several reports commissioned by the Australian government have identified frequency of publication as a primary (but not exclusive) indicator of the research productivity of academics. However, the need to do more than compare individuals or universities in terms of number of publications has been noted. Bourke15, for example, emphasised the importance of considering the quality of output, and not simply quantity. The Research Group commissioned to undertake a trial evaluation study of performance indicators for the Australian university system recommended that instead of an overall measure of publication rate, separate counts be provided within three categories of publication: books and monographs, refereed journal articles, and published conference papers16. The question of assigning credit in cases of co-authorship has been raised, as has the advisability of compiling three-year running averages rather than relying on counts for a single year. Although these are important issues, the more basic considerations are deciding what constitutes a publication and what types of publications should be included in frequency counts. The commentary that follows focuses on these issues.

The NBEET Survey Applicants for and recipients of Australian Research Council grants were surveyed in 1991-92 in an attempt to establish measures which validly index the research productivity of academics in Australian universities17.

Respondents were asked to identify the extent to which 20 indicators conventionally used to specify research performance constituted suitable measures within their own discipline. Some indicators (such as patents, inventions and royalty income) were viewed as valid measures only within specific disciplines. Indicators where there was substantial consensus included being the editor of a journal (endorsed by 99% of respondents) and receiving a competitive research grant (98% endorsement). There was agreement across all disciplines (100% endorsement) that publication in a scholarly context is a valid index of research performance, despite some variation in terms of the extent to which several forms of publication (such as being the author of a book, editing a book, writing a chapter in a book, publishing articles in refereed journals, and publishing in conference proceedings) were identified as important. Although in some disciplines books were perceived as being of higher status than journal articles, the most generally acknowledged measure of research productivity was publication in refereed journals.

Each participant in the NBEET survey was asked to list in rank order up to ten 'important journals in your principal field of research'. In contrast to consensus across disciplines that publication is the best indicator of research productivity, there was limited agreement among respondents within any discipline as to which journals are highly 'important'. As might be expected, prestigious international journals such as Nature and Science were identified as important by 40% or more of respondents within some science disciplines. However, there also was a high level of endorsement of Australian journals such as Alcheringa, the Australian Journal of Botany, the Australian Journal of Plant Physiology, the Australian Journal of Ecology, the Australian Journal of Chemistry, the Australian Journal of Earth Sciences, the Australian Journal of Linguistics, the Australian Journal of Philosophy, the Australian Journal of Physics, the Australian Journal of Statistics, Australian Geographical Studies, the Australian Journal of Marine and Freshwater Research, the Australian Journal of Geodesy, Photogrammetry and Surveying, and the Australian Surveyor. Each of these journals was rated by 40% of respondents within the discipline as among the ten most important journals within their principal field of research.

The profile of important journals established for Australian academics seems puzzling, since 'to the extent that Australian researchers have an international orientation, one would hypothesize reasonably close agreement between their preferred journals and those with a high citation impact in the Science Citation Index (SCI)'18. It was instead the case that many journals with high citation impact (journals publishing articles that are most frequently cited in the database used by ISI in compiling SCI) were not identified as important by Australian academics, while the journals rated as important often were cited infrequently within the SCI database. The results from the survey indicated that 'Australian journals with comparatively low citation impact are rated highly'19. Interestingly, biologists and chemists included 'articles weighted by journal citation impact' amongst their four preferred indicators, yet rated as important several journals with low citation impact20.

In further analysis of results from the NBEET survey we used the ISI database to identify characteristics of the 168 journals that were rated as important by Australian academics. Information on these journals, such as impact factor (how often the articles a journal published were cited within two years of publication), was taken from Journal Citation Reports (JCR) Source Data Listings produced by ISI for 1990. Although all but 22 (13%) of the 168 journals rated as important in the NBEET survey were source journals in the ISI database, the journals rated as important by Australian academics were not consistently those that publish the most frequently cited articles. Only seven of the 168 important journals ranked among the 100 journals with highest impact, as established from JCR Journal Ratings for 1990, and only 41 ranked among the 500 top journals. The 168 important journals as identified in the NBEET survey included 14 published in Australia, but only nine of these 14 journals (64%) were ISI source journals. In terms of impact factor, the highest ranked Australian journals were the Australian Journal of Plant Physiology (which ranked 619), the Australian Journal of Earth Sciences (rank 1 369), and the Australian Journal of Ecology (rank 1 672).
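The impact factor mentioned above reduces to simple arithmetic. The sketch below illustrates the calculation; the journal figures used are invented for illustration and are not taken from the JCR listings:

```python
def impact_factor(citations, citable_items):
    """JCR-style impact factor: citations received in a given year to a
    journal's articles from the previous two years, divided by the number
    of citable items the journal published in those two years."""
    return citations / citable_items

# Hypothetical journal: 150 citations in 1990 to its 1988-89 articles,
# of which there were 100, give an impact factor of 1.5.
print(impact_factor(150, 100))
```

A journal with few but highly cited articles can thus outrank one with many rarely cited articles, which is why high-impact ISI journals need not coincide with the journals Australian academics publish in most often.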

Both SCI and SSCI are designed to provide comprehensive coverage of what are considered to be the world's most important journals21. The decision as to whether a serial will be included as a source journal for SCI or SSCI is based on the frequency with which articles published in the journal are cited, journal standards (such as whether the journal employs peer review in processing manuscripts, the reputation of members of the journal's editorial board, punctual publication, and the standing of the publisher or the sponsoring society), and evaluation of journals by ISI's editorial advisory boards. Although geographic representation is a factor, Garfield noted that 'unless a journal of interest to only a small region of the world is exceptional in some way, we are less likely to cover it'22. Even though Garfield claimed that 'ISI could conceivably limit itself to the top 500 journals and still provide comprehensive coverage of the most important publications', the ISI data bank is extensive; for example, in 1991 SCI covered 3 213 source journals (which published 434 183 articles in total) and SSCI covered 1 440 journals fully and 1 541 selectively (62 285 articles in total).


As noted above, many journals in the ISI database that publish highly cited articles were not ranked by Australian academics participating in the NBEET survey as among the ten most important journals in their principal field of research. The journals classified as important instead included many serials (and particularly serials published in Australia) that are not among the source journals within the ISI database. The implication is that the research productivity of Australian academics cannot be assessed simply by counting how often individuals publish per unit time in ISI source journals. Respondents consider there are important journals in their own discipline which are not represented in the SCI or the SSCI database. If there is limited consensus among academics within a discipline as to what are important journals, if high impact serials among ISI source journals are not always regarded as important, and if some journals not covered by ISI are rated as important, what publications can validly be counted in order to measure the research productivity of Australian academics?

A useful starting point in addressing the above question is to examine where Australian academics publish. Bourke23 estimated that in the Institute of Advanced Studies at the Australian National University as much as 90% of publication by academics in science disciplines, but as little as 25% of publications in the humanities and social sciences, is in SCI source journals. The analysis we now report addresses this issue of 'capture' of publications by Australian academics for a larger sample, and one that is more representative of the Australian university system as a whole.

Publication by Australian Academics The research reports issued annually by each Australian university list publications within the past year by each member of academic staff. Although there are differences between universities in the types of material they classify as a publication, it is a relatively simple matter to edit research reports to standardise mode of publication between universities. The present analysis, which relies on entries in research reports for La Trobe University, Monash University, and the University of Melbourne in 1990 and 1991, covers articles published in journals or serials. Hence the research reports were edited to omit entries for publications such as authored books, edited books, book chapters, abstracts, conference proceedings, pamphlets, and recordings. Following Bourke24, academics in the sciences and the social sciences are contrasted. The objective in the comparisons that follow is to determine the relative rates at which articles published by Australian academics in science and social sciences appear in ISI source journals.

The research reports of La Trobe University, Monash University, and the University of Melbourne listed 1 901 articles published in social sciences journals in 1991 or 1992; these articles had appeared in 921 different serials or journals. Table 1 identifies the major publication outlets (journals in which 12 or more articles had been published). Each article in the dataset was classified in terms of: (a) whether it had appeared in a journal published within or outside Australia, and (b) whether the journal in which it had been published was an ISI source journal. As can be seen from Table 2, the majority (52.4%) of articles produced in 1991 and 1992 by academics in social science disciplines appeared in journals published within Australia. Further, only one-quarter (26.7%) of the total output by these academics


Table 1. Social science journals and science journals in which the academics at La Trobe University, Monash University, and the University of Melbourne published most frequently in 1990-1991.

Social Science journals                                    No.
Law Institute Journal a                                     35
Legal Service Bulletin a                                    30
Australian Business Law Review a                            30
Company and Securities Law Journal a                        29
Australian Economic Review                                  27
Arena a                                                     20
*Economic Record a                                          20
Melbourne Report a                                          18
Children Australia a                                        15
Monash University Law Review a                              15
*Medical Journal of Australia a                             15
Melbourne University Law Review a                           14
Asia Pacific Human Resource Management a                    13
*Australian New Zealand Journal of Criminology a            13
Asian Studies Review a                                      12
Thesis Eleven a                                             12
Australian Society a                                        12
Quadrant a                                                  12
Australian Journal of Labour Law a                          12
Australian Floor News a                                     12

Science journals                                           No.
*Medical Journal of Australia a                            139
*Australian Journal of Chemistry a                          88
Australian New Zealand Journal of Surgery a                 76
Australian Family Physician a                               76
*Clinical & Experimental Pharmacology & Physiology a        70
*Reproduction Fertility & Development a                     68
Transplantation Proceedings                                 65
*Lancet                                                     49
*Proceedings of the National Academy of Sciences USA        45
*Australian New Zealand Journal of Medicine a               44
*American Journal of Physiology                             41
*Endocrinology                                              40
*Journal of Biological Chemistry                            39
Australian and New Zealand Journal of Obstetrics &
  Gynaecology a                                              ?
*Journal of Pediatrics & Child Health a                      ?
*Molecular & Biochemical Parasitology                       34
*Journal of Physiology                                      34

* indicates that the journal is an ISI source journal; a indicates that the journal is published in Australia; ? marks counts that are illegible in the original.

Table 2. Numbers of journal articles by academics in social science and science disciplines that had appeared in Australian journals and in serials covered as source journals by ISI.

                                               Social Science      Science
                                                 disciplines     disciplines
Number of journals                                    921            1707
Number of articles                                   1901            6304
Articles in Australian journals                  996 (52.4%)    1342 (21.2%)
Articles in non-Australian journals              905 (47.6%)    4962 (78.8%)
Articles in ISI source journals                  509 (26.7%)    4636 (73.5%)
Articles in ISI source journals
  published in Australia                         124 (6.5%)      718 (11.4%)
Articles in journals not covered by ISI         1392 (73.2%)    1668 (26.5%)
Articles in Australian journals
  not covered by ISI                             872 (45.9%)     624 (9.9%)
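The percentages in Table 2 are plain proportions of the raw article counts. A minimal sketch, using counts copied from the table, shows how the capture rates are derived:

```python
# Raw article counts from Table 2 (La Trobe, Monash, and Melbourne
# research reports), keyed by disciplinary group.
counts = {
    "social science": {"articles": 1901, "australian": 996, "isi": 509},
    "science": {"articles": 6304, "australian": 1342, "isi": 4636},
}

def share(part, whole):
    """Percentage rounded to one decimal place, as printed in the table."""
    return round(100 * part / whole, 1)

for group, c in counts.items():
    # ISI capture rate and Australian-journal share for each group.
    print(group,
          share(c["isi"], c["articles"]),
          share(c["australian"], c["articles"]))
```

The two ISI capture rates (roughly 27% versus 73.5%) are the figures the article's argument turns on: an ISI-only publication count misses most social-science output while capturing most science output.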


had appeared in ISI source journals (and hence was indexed in SSCI or SCI). The source journals for SSCI in 1991 included 21 serials published within Australia (these constituted only 1.4% of all SSCI source journals). Hence it is not surprising that only 6.5% of all articles in the sample had been published in Australian journals covered by ISI.

The research reports of La Trobe University, Monash University, and the University of Melbourne in 1991 and 1992 listed 6 304 articles in science disciplines (including medicine, but excluding engineering) published in a total of 1 707 journals. Table 1 identifies the major publication outlets (journals in which 34 or more articles had been published). As for the social sciences, each article in the data set was classified in terms of: (a) whether it had appeared in a journal published within or outside Australia, and (b) whether the journal in which it had been published was an ISI source journal. As can be seen from Table 2, only a minority (21.2%) of all articles in science disciplines had appeared in journals published within Australia. Further, almost three-quarters (73.5%) were published in ISI source journals (and hence were indexed in SCI or SSCI). The source journals for SCI in 1991 included 35 serials published within Australia (these constituted only 1.1% of all SCI source journals). Nevertheless it was the case that 718 of the 1 342 articles in science disciplines that had been published in Australian journals (11.4% of all science articles) were covered by ISI.

There is a further comparison of interest between publication in social science and science disciplines. The 88 social science journals in which academics in the sample had most often published (those journals with four or more publications) included 64 journals (73%) produced in Australia. However, of these 88 journals only 24 (27%) were ISI sources, and only ten (11%) were ISI sources published in Australia. In contrast, the 104 science journals in which academics in the sample had most often published (those journals with 12 or more publications) included 23 (22%) produced in Australia. Further, 90 of the 104 journals (87%) were ISI sources, although only 14 (13%) were ISI sources produced in Australia.

Measuring Frequency of Publication The data reported above point to problems in using frequency of publication as a measure of the research productivity of Australian academics. While most (73.5%) of the publications of academics in science disciplines are in journals covered by ISI, the ISI source journals capture only 26.7% of articles published by academics in social science disciplines. Unless it can be argued that only publications in journals covered by ISI should be included in frequency counts, identifying the rate of publication by reliance solely on the ISI data bank will produce a distorted representation of the research output of Australian academics. For example, academics in the social sciences would be seen as less productive than their colleagues in science disciplines. Further, frequency counts within each disciplinary group will depend upon where persons had published. Since relatively few journals produced in Australia (particularly in the social sciences) serve as ISI sources, academics who publish primarily in Australian journals would be particularly disadvantaged. The supposition in relying on the ISI database is that serials not covered as source journals by ISI are inferior in scholarly standard and contribution to those that are covered. However, it is probably the case that many journals published in Australia deal with Australian-specific content (eg Australian flora or fauna, the Australian natural environment, the Australian economic system, the Australian legal system). As an example, a recent review of educational research in Australia identified 40 journals published in Australia25. All these journals have a primary focus on education in an Australian context, but only two of the 40 journals are ISI source journals.

It could be argued that, since three-quarters of all articles generated by Australian academics in science disciplines appear in ISI source journals, the ISI database can be used in identifying the research output of Australian scientists. However, to justify such a claim it would be necessary to demonstrate that all ISI source journals are of higher standard in terms of scientific contribution than journals not covered by ISI. As noted above, many respondents in the recent NBEET survey ranked highly a number of journals (particularly some published in Australia) which are not covered by ISI. Since limiting the classification of a publication to material appearing in ISI source journals clearly would provide a distorted representation of the research productivity of Australian academics, classification of what counts as a publication might perhaps instead be based on discipline-specific resources, such as Chemical Abstracts, Index of Current Journals in Education, Index Medicus, and Psychological Abstracts. Discipline-specific indexes provide much wider coverage of a discipline and cognate fields than the ISI database. However, the coverage is less selective, since comprehensiveness is achieved at the expense of quality control. A further disadvantage in defining publications exclusively on the basis of entries in a discipline-specific index is that material authored by an academic but appearing outside the subject coverage of the database (as may be the case in interdisciplinary research or where a person moves outside a disciplinary boundary) is disregarded.

An alternative to using entries in either general or discipline-specific indexes for classificatory purposes is to leave the issue of what constitutes a publication to universities. In preparing an annual research report, universities at present employ inclusion and exclusion criteria. However, there may be limited consistency in definition between universities. The report of the trial evaluation study funded by the Department of Employment, Education and Training26 pointed to the need to ensure 'at least minimum standards of originality and intellectual rigour', but did not provide guidelines that would ensure comprehensive coverage, quality control, and standardisation in application. The sole requirement in terms of journals was that articles be refereed. Although peer review has an important role in quality control, the issue surely is the stringency with which refereeing of submissions is undertaken. Counts based simply on frequency of publication in a refereed journal are readily open to manipulation, since pay-to-publish journals and what might be called 'vanity-press' journals can easily implement a token system of refereeing.

Implications for Assessing Scholarly Impact

The above analyses have implications not only for measurement of research productivity, but also for assessing the scholarly impact of Australian academics. A potential index of the scholarly impact of an individual is the frequency with which the person's publications are cited by other researchers. Whereas the ISI source indexes can be used to identify how often an individual has published as an author in ISI source journals, the ISI citation indexes reveal how often material published by a person over their career


is referenced in the bibliographies of articles published in ISI source journals. Since prior publication is clearly a prerequisite for citation, it is to be expected that persons with many publications will generally be cited more often than persons with a low rate of publication. However, there is substantial variability in citation frequency among individuals with similar rates of publication.

Although citation counts should not be used uncritically as a performance indicator27, citation rates correlate with other measures of scholarly impact and reputation, such as peer ratings and academy membership28 29. For example, the 300 authors who were most frequently cited in the literature covered by Science Citation Index between 1961 and 1975 included 42 Nobel laureates, 110 members of the National Academy of Sciences, and 55 Fellows of the Royal Society of London30. Similarly, the 100 authors with the highest counts in Social Sciences Citation Index from 1969 to 1977 included 33 members of the National Academy of Sciences and 49 members of the American Academy of Arts and Sciences31. Further, the 134 psychologists from American, British, and Canadian universities who were cited 100 or more times in SSCI in 1975 included 33 of the 49 psychologists who were then members of the National Academy of Sciences32.

The statistics reported above for publication by Australian academics point to problems in assessing scholarly reputation simply by counting entries in the citation indexes compiled by ISI. The number of citations a person gains indicates how often that person's publications were referenced in articles appearing in ISI source journals. Although some references in the ISI citation database are to articles published in other than ISI source journals, these are the minority. Several factors make it unlikely that there will be a high rate of bibliographic reference in ISI source journals to articles published in Australian journals not covered as sources by ISI. The content of these journals is likely to be regional in nature, the journals generally have limited circulation, they are not well represented in overseas libraries, and coverage by abstracting and indexing services typically is sparse. Such factors need to be kept in mind when considering, for example, the proportion of citations within the ISI database accounted for by Australian academics. The majority of articles by Australian academics in science disciplines appear in journals published outside Australia, whereas the reverse applies in the case of academics in social science disciplines. Publications by Australian academics in science disciplines are thus more likely to be cited in ISI source journals than are publications by Australian academics in social science disciplines. It might be for this reason, and not because one discipline is more advanced than the other, that Australian academics account for a higher proportion of world-wide citations in science than in social science disciplines.
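The visibility argument in the paragraph above reduces to simple arithmetic, which the following sketch makes explicit. It is a deliberately crude model and an assumption for illustration only: it treats citations to articles outside ISI source journals as entirely uncounted (in reality they are merely rare), the article counts and per-article citation rate are invented, and only the three-quarters versus one-quarter split echoes the publication statistics reported earlier in the article.

```python
def visible_citations(n_articles, share_in_isi, citations_per_article):
    # Simplifying assumption: only articles appearing in ISI source
    # journals accumulate citations that the ISI indexes can count.
    return n_articles * share_in_isi * citations_per_article

# Two disciplines with identical output and identical per-article impact,
# differing only in where their articles are published (invented figures).
science = visible_citations(100, 0.75, 4)
social_science = visible_citations(100, 0.25, 4)
print(science, social_science)
```

Even with equal underlying impact, the science discipline shows three times the countable citations of the social science discipline, illustrating why aggregate ISI citation shares can diverge between fields without either field being "more advanced".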

A further problem in basing citation counts solely on ISI citation indexes is that these indexes do not include bibliographic references in journals outside the ISI source database. For example, references to an author in an Australian journal which is not an ISI source do not contribute to the person's citation count. In contrast to the many discipline-specific source indexes now available, there are no discipline-specific citation indexes that identify frequency of citation across a comprehensive range of journals in a discipline. However, if such indexes were available, it would be necessary to ask whether citations should be counted without


reference to quality (eg where the publication under consideration was published).

Conclusions

A major problem in using frequency of publication as a measure of research productivity is arriving at a valid definition of what constitutes a publication. The analyses we have reported indicate that whereas almost three-quarters of articles published by Australian academics in science disciplines appear in ISI source journals, only one-quarter of articles in social science disciplines do. Using a performance indicator based solely on ISI source indexes would provide a distorted view of publication by Australian academics, particularly in the social sciences. The alternatives of relying on entries in discipline-specific source indexes, or of permitting universities or individuals to decide what constitutes a publication and hence what is included in the count, raise additional problems.

The issue of what constitutes a publication is particularly controversial when bodies with regulatory power advocate allocation of research funds within the Australian university system on the basis of performance indicators derived from the ISI database. For example, it has been argued that 'the strong performance by chemistry-related disciplines relative to other disciplines, as measured by impact of scientific publications in the international arena, should be recognized by research bodies when providing research resources'33, while 'of all the fields, only the Biological Sciences has an aggregate citation impact less than the world average, yet in Australia the Biological Sciences receive by far the largest share of ARC support'34.

Our objective has not been to dispute the proposition that research, scholarship, and publication are central to the academic role. The aim instead has been to point to difficulties in obtaining a valid quantitative measure of research productivity. We believe that reaching conclusions about the relative research productivity of individuals or universities, or seeking to compare disciplines or nations, by relying solely on the database used to compile the ISI source indexes is not a worthwhile exercise. At the same time, there can be value in using quantitative measures derived from different data sources as part of a convergent approach to the study of research productivity. Instead of simply relying on publication tallies, differences between publication records established by several means become a basis for dialogue between stakeholders. For example, an academic listing many publications, few of which are in ISI source journals, could be asked to account for such a profile, as could an academic with no or few publications, no matter how publication is defined. In these terms accountability would not entail mechanical tallying and aggregating of measures to grade or rate a person's output in any definitive sense; quantitative performance indicators might instead provide a basis for establishing directions of inquiry and response. The central component in assessing whether an academic is performing adequately in nominated roles would continue to be informed peer judgment, with quantitative performance indicators at best offering a context within which dialogue can be structured.
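The "convergent" use of indicators proposed above can be sketched in a few lines. This is an interpretation of the article's suggestion, not its method: the names, tallies, and the discrepancy threshold are all invented. The design choice to note is that a large spread between tallies from different sources triggers a question for the academic to answer, rather than being averaged away into a single score.

```python
# Hypothetical publication tallies for the same person from three sources.
counts = {
    "Smith": {"university report": 24, "ISI source index": 3, "discipline index": 18},
    "Jones": {"university report": 12, "ISI source index": 10, "discipline index": 11},
}

def flag_for_dialogue(tallies, ratio=3.0):
    """Flag a profile whose tallies diverge widely across data sources."""
    hi = max(tallies.values())
    lo = max(min(tallies.values()), 1)  # guard against division by zero
    # A large spread invites explanation and peer judgment, not a grade.
    return hi / lo >= ratio

for name, tallies in counts.items():
    if flag_for_dialogue(tallies):
        print(f"{name}: publication profiles diverge; seek explanation")
```

With these invented figures, Smith's profile (many publications, few in ISI source journals) is flagged for discussion while Jones's consistent profile is not; the indicator structures the dialogue without pretending to rate either person definitively.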

Lindsay35 has been critical of recent moves by government to strengthen performance evaluation within the Australian university system through extensive use


of quantitative performance indicators. He claimed that 'current approaches to performance and quality tend to oversimplify higher education's role and the notions of outcomes, overemphasise measurement at the expense of judgment, and make insufficient allowance for diverse and conflicting stakeholder judgments ... undue prominence [is given] to measurable but limited aspects of higher education at the expense of more significant but intangible aspects .... The available indicators are simply not of sufficient substance in relation to higher education's goals, processes, and outcomes to provide a significant improvement in our capacity to measure performance or quality.' The analyses we have reported are consistent with Lindsay's claims. Identifying research productivity solely on the basis of frequency of publication can provide a distorted view of research activity, no matter what definition of publication is employed. At the same time there may well be merit in using quantitative measures such as frequency of publication as one of several inputs in assessment of performance through peer review processes. In doing so, however, publication rate should be thought of as a complementary measure rather than as the primary, central, or validating indicator of performance.

Notes

1 Higher Education Council Higher Education: Achieving Quality AGPS 1992 p44.
2 A Adams J Krislov 'Evaluating the Quality of American Universities' Research in Higher Education vol 8 1978 pp97-109.
3 K Hufner E Rau 'Measuring Performance in Higher Education: Problems and Perspectives' Higher Education in Europe vol 12 1987 pp5-13.
4 M Carpenter F Gibb M Harris J Irvine B Martin F Narin 'Bibliometric Profiles for British Academic Institutions: An Experiment to Develop Research Output Indicators' Scientometrics vol 14 1988 pp213-33.
5 G Johnes 'Research Performance Indicators in the University Sector' Higher Education Quarterly vol 42 1988 pp54-71.
6 H Moed The Use of Bibliometric Indicators for the Assessment of Research Performance in the Natural and Life Sciences Leiden DSWO Press 1989.
7 M Cave S Hanney M Kogan The Use of Performance Indicators in Higher Education: A Critical Analysis of Developing Practice 2nd ed Jessica Kingsley Publishers London 1991.
8 R Over D Moore 'Citation Statistics for Psychologists in Australian Universities: 1975-1977' Australian Psychologist vol 14 1979 pp319-27.
9 Department of Industry, Technology and Commerce Measures of Science and Innovation: Australian Science and Technology Indicators Report AGPS Canberra 1987.
10 P Bourke 'A Bibliometric Profile of Research in the Institute of Advanced Studies ANU 1976-1988' in Department of Employment, Education and Training Performance Indicators in Higher Education AGPS Canberra 1991 vol 2 pp86-117.
11 P Bourke 'Research: The Achievers' Campus Review May 20-26 1993 p7.
12 N Allen 'Aspects of Promotion Procedures in Australian Universities' Higher Education vol 17 1988 pp267-80.
13 I Moses 'Promotion of Academic Staff: Reward and Incentive' Higher Education vol 15 1986 pp135-49.
14 R Over 'Correlates of Career Advancement in Australian Universities' Higher Education vol 26 1993 pp313-29.
15 P Bourke Quality Measures in Universities Commonwealth Tertiary Education Commission Canberra 1987.
16 Department of Employment, Education and Training Performance Indicators in Higher Education AGPS Canberra 1991 p xxii.
17 National Board of Employment, Education and Training Research Performance Indicators Survey AGPS Canberra 1993.
18 ibid p x.
19 ibid.
20 ibid p xi.
21 E Garfield 'How ISI Selects Journals for Coverage: Quantitative and Qualitative Considerations' Current Contents no 22 (May 28 1990) pp5-13.
22 ibid p10.
23 P Bourke 'A Bibliometric Profile of Research in the Institute of Advanced Studies ANU 1976-1988'.
24 ibid.
25 Australian Research Council Educational Research in Australia AGPS Canberra 1992.
26 Department of Employment, Education and Training Performance Indicators in Higher Education p102.
27 E Garfield 'Evaluating Research: Do Bibliometric Indicators Provide the Best Measures?' Current Contents no 14 (April 3 1989) pp3-9.
28 E Garfield A Williams-Dorof 'Citation Data: Their Use as Quantitative Indicators for Science and Technology Evaluation and Policy Making' Science and Public Policy vol 19 1992 pp321-7.
29 E Garfield A Williams-Dorof 'Of Nobel Class: A Citation Perspective on High Impact Research Authors' Theoretical Medicine vol 13 1992 pp117-135.
30 E Garfield 'The 300 Most-cited Authors 1961-1976 Including Co-authors Part 2 The Relationship Between Citedness Awards and Academy Memberships' Current Contents no 35 (28 August 1978) pp5-30.
31 E Garfield 'The 100 Most-cited SSCI Authors 2. A Catalog of Their Awards and Academy Memberships' Current Contents no 45 (6 November 1978) pp5-13.
32 R Over 'Affiliations of Psychologists Elected to the National Academy of Sciences' American Psychologist vol 36 1981 pp744-52.
33 National Board of Employment, Education and Training Chemistry: A Vision for Australia AGPS Canberra 1993 p45.
34 ibid p63.
35 A Lindsay 'Performance and Quality in Higher Education' The Australian Universities' Review vol 36 1993 pp32-5.
