The role of editorial material in bibliometric research performance assessments
Thed van Leeuwen • Rodrigo Costas •
Clara Calero-Medina • Martijn Visser
Received: 28 June 2012 / Published online: 13 December 2012
© Akadémiai Kiadó, Budapest, Hungary 2012
Abstract In this study, the possibilities to extend the basis for research performance
exercises with editorial material are explored. While this document type has traditionally
not been considered an important type of scientific communication in research
performance assessment procedures, researchers perceive editorial materials as relevant
document types and important sources for the dissemination of scientific knowledge.
In a number of these cases, some of the mentioned editorial materials are actually
'highly cited'. This led to a thorough scrutiny of editorials or editorial material over
the period 1992–2001, for all citation indexes of Thomson Scientific. The relevance of
editorial materials is thoroughly analyzed through three quantitative bibliometric
characteristics of scientific publications, namely page length, number of references, and
number of received citations.
Keywords Document types · Research assessment · Editorial material · Bibliometric methodology
Introduction
In the bibliometric field, the various types of scientific communication as processed for the
bibliographic databases that form the basis for bibliometric analyses are scrutinized on a
regular basis. While some of these analyses focused on the relevance of the various doc-
ument types for assessment studies (e.g., Sigogneau 2000; Lewison 2009; van Leeuwen
et al. 2007; Zuccala and van Leeuwen 2011), or the composition of journals in terms of the
underlying document types in relation to the validity of journals impact factors (e.g., van
Leeuwen et al. 1999; Frandsen 2008; Campanario et al. 2011), others focused on the validity
of the classification of documents in a typology within these bibliographic databases (van
Leeuwen et al. 2007; Harzing 2010, 2013). For many years, the publication types that
played a role in quantitative studies for research assessment procedures conducted by the
T. van Leeuwen · R. Costas · C. Calero-Medina · M. Visser
Center for Science and Technology Studies (CWTS), Leiden University, Leiden, The Netherlands
e-mail: [email protected]
Scientometrics (2013) 95:817–828
DOI 10.1007/s11192-012-0904-5
Center for Science and Technology Studies (CWTS) are ‘articles’, ‘letters’, ‘reviews’ and
‘notes’. This latter type played a role until 1996, the year in which ISI decided to change the
classification of notes. This choice to use only articles, letters and reviews explicitly
excluded communication types like ‘meeting abstracts’ and ‘editorial materials’ as a base
for bibliometric analysis. Editorials are a particular type of publication, as they can have
various functions within a journal. The formal definition of editorial material in the
Web of Science is: 'An article that gives the opinions of a person, group, or organization.
Includes editorials, interviews, commentary, discussions between individuals, post-paper
discussions, round table symposia, and clinical conferences' (see http://www.images.
webofknowledge.com/WOKRS51B6/help/WOS/hs_document_type.html for explanations
of the various document types in the Web of Science). From 1996 onwards, the document type
'Discussion' was merged into the document type editorial material. It is not clear exactly
how this type is composed; with reference to an algorithm identifying editorial materials
(Rousseau 2009), to the composition observed by going through a number of journals,
and to the examples shown below, we notice that a variety of headings within
journals provide editorial materials, such as the discussion section, viewpoints, etc. As the
above definition by Thomson Reuters of the Web of Science classification of documents
indicates, editorials sometimes have the character of an introduction by the editor or
editorial board to the specific issue at hand, or they can have a somewhat different form, such as
an essay or reflection on a specific topic covered in the issue (for example, in the case of a
special issue dealing with only one particular topic). An interesting analysis of editorials in
highly cited medical journals was conducted by Rousseau (2009), as a follow-up to a
study by Garfield (1987). The former paper analyzes editorial materials from the
perspective of high 'citedness', while the latter analyzes the role of editorials
in five major general medicine journals, on an item-by-item basis.
The main reasons to exclude document types such as editorial material or meeting
abstracts are threefold. A first reason is related to the nature of these two types. Meeting
abstracts are actually only abstracts, having little or no scientific impact, while editorial
material is a quite heterogeneous category: often the introduction of the editorial board to
the current issue of the journal, and therefore without real scientific content, but sometimes
a publication written by researchers at the invitation of the editorial board of a journal. A second
reason, particularly relevant for editorial material, is its rather low frequency within a journal's
annual volumes and issues, which makes it difficult, particularly in light of the
heterogeneity of the type, to calculate valid bibliometric statistics. A final objection to the
inclusion of these types relates to the question of whether they are refereed or not; we suspect
that these types are in general not refereed (and thereby lack a certain level of quality control,
which leads to a different status compared to articles, reviews and letters).
Within the procedures developed at CWTS at Leiden University for research
assessment studies, a 'verification' round is included in which scientists are asked to
verify and check the publications collected for them (van Leeuwen 2007). In recent
years, some CWTS research assessment studies were confronted with many remarks by
the researchers on the exclusion of editorial material as a relevant type of scientific
communication. Their main argument was that some editorial material is written at the
invitation of the editorial board to an individual scientist or a group, thus
having the character of an editorial review and, as a consequence, substantial scientific
content. For this reason, there is some urgency to analyze the problem in more detail, in order
to determine whether it is possible to discriminate among editorial materials in a
systematic manner, which would identify those with substantial scientific
content to be included in these research assessment studies.
Research background
As stated above, CWTS applies a web-based verification step in the data collection procedure at the
researcher level in bottom-up analyses (van Leeuwen 2007). In this process, a growing
number of complaints have been detected from researchers who do not agree with the decision
to exclude all editorials or editorial material. Some of these researchers have been
asked to provide 'proof' supporting their claims. Thus, some electronic copies of the
publications in question have been received, which made clear that the situation is somewhat
more complex than initially expected. Below, a number of examples of publications which
are classified in the Web of Science (WoS) as editorial material are presented and discussed
regarding their bibliographic and bibliometric characteristics.
The first example (Fig. 1) below is a publication in the Proceedings of the National
Academy of Sciences of the USA, five pages long, containing 36 references, and cited
already 172 times (as of September 2011). The authors claimed this publication was
erroneously labeled as editorial, and they claimed this should be treated as a ‘normal
article’. The second example (Fig. 2) below is a publication in Trends in Neuroscience,
also five pages long, containing 68 references, and already cited 344 times (again measured
in September 2011). The authors of this publication claimed this to be an invited review, so
they wanted this to be treated as a review.
These two examples suggest that excluding all editorial materials as a relevant type of
scientific communication in bibliometric assessment studies could be too 'rigid'. On the
other hand, these examples were brought to our attention simply as a result of the procedures
we use for verification purposes, and as such this is not a very structured approach
(presumably, we only receive complaints when the impact is high). Therefore, it is necessary to
study whether the current standard bibliometric methodologies should be adjusted in order
to respond in a more systematic and consistent manner to this problem of editorial
materials. This paper presents an exploratory approach to test these issues, developed
further in the following sections.

Fig. 1 Editorial material in PNAS
Data and methodology
The data used in this study are retrieved from the in-house bibliometric data-system of
CWTS. This system is based upon the internet version of the citation indexes of Thomson
Reuters, the WoS, which includes the Science Citation Index Expanded (SCI), the Social
Sciences Citation Index (SSCI), and the Arts & Humanities Citation Index (A&HCI). From
this data-system, we extracted all papers classified in the citation indexes as ‘editorial’ or
‘editorial material’. To develop objective quantitative criteria to determine whether or not a
certain difference among editorial materials exist, we decided to analyze three different
aspects of editorial materials, namely the page length, the number of citations given, and
the number of citations received.
The page length can be determined from the basic bibliographic information available
in the database, and the number of given citations was determined by counting the entries
in the reference lists of all editorial materials. The final aspect, the number
of received citations, is extracted from the system as well, by matching all source papers
with the citing volume of source papers. The combinations of these characteristics have
been analyzed.
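A minimal sketch of how these three variables could be derived from a bibliographic record (the field names and example values are illustrative assumptions, not the actual CWTS data system):

```python
# Hypothetical record fields; the real CWTS system stores WoS data differently.

def page_length(begin_page: int, end_page: int) -> int:
    """Page length from the basic bibliographic page information."""
    return end_page - begin_page + 1

def given_citations(reference_list: list) -> int:
    """Citations given: the size of the item's reference list."""
    return len(reference_list)

def received_citations(item_id: str, citing_references: list) -> int:
    """Citations received: matches of this item among the references
    of the citing volume of source papers."""
    return sum(1 for ref in citing_references if ref == item_id)

# A hypothetical five-page editorial with three references, cited twice:
print(page_length(5, 9))                          # 5
print(given_citations(["r1", "r2", "r3"]))        # 3
print(received_citations("e1", ["e1", "x", "e1"]))  # 2
```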
Fig. 2 Editorial material in Trends in Neuroscience
Results

First, the development of document types in the WoS over the past 30 years is
briefly discussed. For reasons of clarity, only the ten most frequently appearing document types
are shown. Please note that we present the document types with their original Web of Science
coding, which might give some unexpected results (with 'Y POETRY' being the original coding
for the document type poetry). Figure 3 presents the numbers on a regular scale, while Fig. 4
presents the numbers on a log scale, as that allows clearer insight into the actual
developments, given the dominant role of normal articles in Fig. 3. Figure 4 in particular shows
that reviews also display an important increase in numbers over time, but also that document
types we do not consider for our bibliometric studies (meeting abstracts, editorial materials,
and book reviews) represent a substantial number of documents in the WoS. Letters display a
relatively stable pattern of occurrence in the WoS between 1981 and 2010.
To create more insight into the characteristics of editorial materials as a publication type,
we have labeled each editorial material according to specific bibliographic (page
length, presence of references) or bibliometric characteristics (whether the editorial material is
cited). For this we have created classes in order to characterize the set of editorial
materials. These classes have a preliminary character, as we did not know upfront what we
would encounter within the set of editorial materials in the WoS. These classes, shown in
Table 1, are defined as follows:
Obviously, the page length Class O is empty (this could relate to editorial materials
without any page indication; however, the data show that this can only be a very small
portion, if it occurs at all). For the two other variables, Class O is filled. In
total, over the period 1992–2001, we selected 378,936 editorial materials from the database.
The reason for selecting this set in the middle of the period 1981–2011 is the ability to
measure citation impact adequately also for the editorials published at the end of the period
1992–2001, in the years following that period. For all these editorial materials we determined
the absolute values for the three variables, which were then labeled with the
classes above. The scores per class were expressed as relative shares in comparing the
variables. This resulted in three tables and figures, showing the relation between the three
variables (Tables 2, 3, 4; Figs. 5, 6, 7).

Fig. 3 Numbers of document types in the Web of Science, 1981–2010 (10 largest document types)
In Table 2, the shares of the numbers of editorial materials in the classes of number of
references are compared to the page-length classes. We clearly observe that the class with
the shortest editorial materials (Class I, 82 % of all editorial materials) contains many
editorials with at most three references (in total 61 % of all editorial
Fig. 4 Numbers of document types in the Web of Science, 1981–2010 (10 largest document types, displayed on a log scale)
Table 1 Overview of the composition of classes used

            P_Class       R_Class               C_Class
            Page length   Number of references  Number of citations
Class O     –             0                     0
Class I     1–3           1–3                   1–3
Class II    4–10          4–10                  4–10
Class III   >10           >10                   >10
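The class scheme of Table 1 can be sketched as a single thresholding function applied to each of the three variables (the thresholds are those of Table 1; the function names are ours):

```python
def to_class(value: int) -> str:
    """Map a count (pages, references, or citations) to a Table 1 class."""
    if value == 0:
        return "O"
    if value <= 3:
        return "I"
    if value <= 10:
        return "II"
    return "III"

def label(pages: int, refs: int, cites: int) -> tuple:
    """(P_Class, R_Class, C_Class) for one editorial material."""
    return to_class(pages), to_class(refs), to_class(cites)

# The PNAS example above (5 pages, 36 references, 172 citations):
print(label(5, 36, 172))  # ('II', 'III', 'III')
```

Note that, as observed above, page-length Class O stays empty in practice, since virtually every editorial material has at least one page.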
Table 2 Comparing page-length class with reference class, as share of the total set of editorial materials

P_Class      R_Class
             O (55 %)   I (12 %)   II (14 %)   III (19 %)
I (82 %)     50.5       10.8       10.9        10.2
II (15 %)    4.2        1.5        2.5         7.0
III (2 %)    0.4        0.2        0.3         1.4
materials). In total, 17 % of all editorial materials are longer than 4 pages, while 25 % of
these carry up to 4 references.
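Tables 2, 3 and 4 can each be produced by one cross-tabulation: count the (row class, column class) pairs and express every cell as a share of the whole set. A sketch with toy data (the real set contains 378,936 editorial materials):

```python
from collections import Counter

def crosstab_shares(pairs):
    """pairs: iterable of (row_class, col_class) labels.
    Returns {(row, col): share of the total set, in %}."""
    counts = Counter(pairs)
    total = sum(counts.values())
    return {cell: 100.0 * n / total for cell, n in counts.items()}

# Four toy editorial materials labeled (P_Class, R_Class):
shares = crosstab_shares([("I", "O"), ("I", "O"), ("I", "I"), ("II", "III")])
print(shares[("I", "O")])  # 50.0
```

Summing a row of such a table gives the marginal share of that class (e.g., the 82 % for page-length Class I in Table 2).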
In Fig. 5 the information from Table 2 is shown graphically; this further underlines that
over 80 % of Reference Class II is in Page-length Class I, while over 50 % of
Reference Class III is in Page-length Class I.
Table 3 presents the combination of page-length classes with classes of number of
citations. Of all editorial materials with a page length of 1–3 pages, 62 % is never cited,
while in total 76 % is cited at most 3 times. Among the editorial materials with a page length
of 4–10 pages, in total nearly 13 % is cited at most 3 times.
In Fig. 6, the information from Table 3 is presented graphically. We observe a
diminishing share for Page-length Class I as the citation class increases, although in Number
of Citations Classes II and III, the share of editorial materials in Page-length Class I is still
Table 4 Comparing reference class with citation class, as share of the total set of editorial materials

R_Class      C_Class
             O (73 %)   I (18 %)   II (6 %)   III (3 %)
O (55 %)     47.7       6.2        0.9        0.3
I (12 %)     9.7        2.2        0.5        0.2
II (14 %)    8.1        3.7        1.4        0.5
III (19 %)   7.2        5.6        3.3        2.4
Fig. 5 Overview of 'page length' and 'number of references' for editorial materials, 1992–2001
Table 3 Comparing page-length class with citation class, as share of the total set of editorial materials

P_Class      C_Class
             O (73 %)   I (18 %)   II (6 %)   III (3 %)
I (82 %)     62.3       13.7       4.4        2.1
II (15 %)    9.0        3.6        1.5        1.1
III (2 %)    1.4        0.6        0.2        0.2
high (over 70 and 60 %, respectively). Please note that the total share of editorial materials
within the Number of Citations Classes II and III is still only 9 %!
In Table 4, the Number of References Class is compared to the Number of Citations
Class. In the two smallest classes (Classes O and I) on both dimensions, we find in total
nearly 66 % of all editorial materials (that is, editorial materials that contain at most 3
references and are cited at most 3 times). When we look at the directly opposite corner of
the table, Classes II and III on both dimensions, we find nearly 8 % of all editorial
materials in this part of the table.
In Fig. 7, the information of Table 4 is presented graphically. By putting the Class of
Number of References on a 100 % scale for every Class of Number of Citations, we get
confirmation that Classes II and III on both dimensions probably contain many
editorial materials that are genuine scientific communications.
These three figures (Figs. 5, 6, 7), but especially Fig. 7, indicate that there is a clear relation
between the number of references and the number of citations (something that has also been
observed in other cases; see, for example, Costas et al. 2012). Therefore, we isolated these
Fig. 7 Overview of the 'number of references' and 'number of citations' for editorial materials, 1992–2001
Fig. 6 Overview of the 'page length' and 'number of citations' for editorial materials, 1992–2001
specific cases by focusing further on the editorial materials in Reference Classes II and III
and Citation Classes II and III. This resulted in a subset of 28,004 editorial materials
(8 % of the total set) on which to conduct further analyses.
In order to establish criteria to determine the 'added value' of these editorial
materials for evaluation purposes, we have to be able to compare their impact with that of other
types of scientific publications. While perhaps strongly resembling reviews in nature,
the relatively small annual numbers of reviews per journal, as well as the very specific
citation characteristics of reviews, suggest that the best comparison for the selected set of
editorial materials is probably with normal research articles.
For each journal in which one of the selected editorial materials appeared, we selected
the relevant article citation characteristics, taking into consideration the year of publica-
tion. This provided us with combinations of journal-year-document type. For each of the
journals, we were then able to calculate mean impact scores for normal articles for con-
secutive years, which allowed us to compare these mean scores with mean scores for the
relevant editorial materials.
In order to compare articles and editorial materials, we use the
variable DIFF: the difference between the mean citation impact score of articles
(MCS-art) and the mean impact score of editorial materials (MCS-edi), or in a
formula:

DIFF = MCS-art − MCS-edi,

where MCS stands for the mean citation score per paper, excluding self-citations. A DIFF
value of 0 indicates that editorial materials and regular articles have the same level of
citations in the same journal-year combination. Positive scores indicate a higher impact of
regular articles over editorial materials, while negative scores indicate a higher impact
of editorial materials in the same journal-year combination.
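The DIFF computation for one journal-year combination can be sketched as follows (the citation counts are hypothetical, and self-citations are assumed to be excluded from the input already):

```python
def mcs(citation_counts):
    """Mean citation score per paper for one journal-year set."""
    return sum(citation_counts) / len(citation_counts)

def diff(article_citations, editorial_citations):
    """DIFF = MCS-art - MCS-edi for one journal-year combination.
    Positive: articles outperform editorial materials; negative: the reverse."""
    return mcs(article_citations) - mcs(editorial_citations)

# Hypothetical journal-year: three articles and two editorial materials
print(diff([10, 6, 8], [4, 8]))  # 2.0
```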
This variable is calculated for each combination of journal and year. Overall, the distribution
of DIFF-scores can be qualified as highly concentrated, as can be seen in Fig. 8.
Fig. 8 Journal-year combinations covering DIFF values ranging between 0 and 10/−10
Figure 8 shows the concentration of journal-year combinations with DIFF
values ranging between 0 and 10/−10. This means that in 80 % of all journal-year
combinations in which we compared the mean impact per paper between articles and
editorial materials, the difference in mean impact scores lies between 0 and 10 (in case
of higher mean impact scores of articles) or between 0 and −10 (in case of higher mean impact
scores of editorial materials). In 50 % of all cases, the DIFF value lies between about −3.5
and 3.5. The figure also shows that the larger (or smaller) the DIFF value becomes, the
smaller the number of cases involved.
Conclusions
In this study, we have explored the possibilities to extend the basis for research
performance studies with editorial materials. For this purpose, and as a first exploration,
three bibliographic features have been studied in order to characterize editorial materials.
The page length, the number of references and the numbers of received citations of
editorial materials can play a role in determining whether or not editorial material can be
considered valuable extensions in research performance assessments. Based on these
three characteristics, we find that a set of editorial materials can qualify as such a
valuable extension.
Based on these findings, we came to two main conclusions that support the acceptance of
some editorial materials as a potentially valid and consistent basis for research assessment
procedures, while an important additional consideration on the inclusion of editorial
materials in our research assessment procedures remains.
In the first place, as in our analyses we compare like with like, we would ideally
calculate average impact scores for editorial materials per journal-year combination. As the
number of editorial materials is rather small, a valid statistical basis for calculating average
scores is lacking, especially as the number of possibly 'relevant' editorial materials is
even much smaller. However, by treating them as (and thus comparing them with) normal articles,
this issue is solved. The analyses showed that, citation-wise, the differences between articles
and editorial materials are relatively small.
In the second place, recent changes in the CWTS methodology exclude the publications
from the most recent year from analyses (Waltman et al. 2011), as their impact scores might be too
unstable to determine reliable impact scores per paper, compared to averages in that
year. Given this change, editorial materials have somewhat more time to gain momentum
in impact development, as the number of received citations is only established after a number
of years. However, an important aspect of citation analysis is the occurrence of publications
with zero citations: uncited publications appear in any set of publications.
With respect to editorial materials, this means that editorial materials without any
citations should also be included in the selection of candidate editorial materials that are
included as standard in our performance assessment studies. This leads to the conclusion that
received impact cannot be the only discriminative bibliographic/bibliometric element directing
the decision on which candidate editorial materials to include in our studies.
Finally, the question remains whether this type of publication is actually refereed.
While articles, letters, and reviews are refereed material, which is an implicit
criterion for inclusion in research performance assessments, we do not know whether
editorial materials are refereed. This makes the inclusion of editorial materials
somewhat more complicated, but it should not block this development.
It is important to realize that research assessments are often considered very
threatening by research communities. While self-evaluation reporting and peer review are
often considered more appropriate, as they are closer to the community and probably
somewhat easier to handle by the research community under scrutiny, bibliometric analyses
have a more at-a-distance, more objective, but perhaps also more
mechanistic character. Furthermore, these bibliometric studies have a certain level of
complexity, which requires considerable effort, from both the bibliometric experts
conducting these exercises and the research community under study, to make the
procedures fully transparent and understood. In this process, the selection of publications
for inclusion should be dealt with properly, taking into account the fact that
researchers want as much as possible included in such analyses, as an adequate
representation of their scientific work, and in particular those editorials that are actually
invited reviews by journal editors. Therefore, bibliometric experts cannot hold on to overly
rigid approaches that automatically exclude this type of publication.
This study has clearly shown that some publications classified as editorial
material could indeed play a role as publications subject to citation analysis. Therefore, further
research should focus on investigating whether other bibliographic/bibliometric characteristics
(e.g., number of authors, number of addresses, types and number of words in titles
and abstracts, the presence of keywords or acknowledgments, etc.) can play an important
role in the re-classification and inclusion of editorial materials for research assessment
purposes. In addition, more advanced statistical approaches (e.g., logistic
regression) should in the future help to identify editorial materials as 'citable
items' (regardless of their level of citedness), and thus support the inclusion of relevant
editorial materials in regular assessment procedures.
Acknowledgments The authors want to thank their colleague Gunnar Sivertsen (NIFU, Norway) for his valuable comments and discussion during the progress of our research.
References
Campanario, J. M., Carretero, J., Marangon, V., Molina, A., & Ros, G. (2011). Effect on the journal impact factor of the number and document type of citing records: a wide-scale study. Scientometrics, 87, 75–84.
Costas, R., van Leeuwen, T. N., & Bordons, M. (2012). Referencing patterns of individual researchers: do top scientists rely on more extensive information sources? Journal of the American Society for Information Science and Technology, (in press).
Frandsen, T. F. (2008). On the ratio of citable versus non-citable items in economics journals. Scientometrics, 74, 439–451.
Garfield, E. (1987). Why are the impacts of the leading medical journals so similar and yet so different? Item-by-item audits reveal a diversity of editorial material. Current Contents, 2, 7–13.
Harzing, A. W. (2010). Working with ISI data: Beware of categorisation problems. http://www.harzing.com/ISI_categories.htm. Accessed 13 Feb 2012.
Harzing, A. W. (2013). Document categories in the ISI Web of Knowledge: Misunderstanding the social sciences? Scientometrics. doi:10.1007/s11192-012-0738-1.
Lewison, G. (2009). The percentage of reviews in research output: a simple measure of research esteem. Research Evaluation, 18, 25–37.
Rousseau, R. (2009). The most influential editorials. In: Celebrating scholarly communication studies. A festschrift for Olle Persson at his 60th birthday (pp. 47–53).
Sigogneau, A. (2000). An analysis of document types published in journals related to physics: Proceeding papers recorded in the Science Citation Index database. Scientometrics, 47, 589–604.
Van Leeuwen, T. N. (2007). Modelling of bibliometric approaches and importance of output verification in research performance assessment. Research Evaluation, 16, 93–105.
Van Leeuwen, T. N., Moed, H. F., & Reedijk, J. (1999). Critical comments on Institute for Scientific Information impact factors: a sample of inorganic molecular chemistry journals. Journal of Information Science, 25(6), 489–498.
Van Leeuwen, T. N., van der Wurff, L. J., & de Craen, A. J. M. (2007). Classification of 'research letters' in general medical journals and its consequences in bibliometric research evaluation processes. Research Evaluation, 16(1), 59–63.
Waltman, L., van Eck, N. J., van Leeuwen, T. N., Visser, M. S., & van Raan, A. F. J. (2011). Towards a new crown indicator: some theoretical considerations. Journal of Informetrics, 5(1), 37–47.
Zuccala, A., & van Leeuwen, T. N. (2011). Book reviews in humanities research evaluations. Journal of the American Society for Information Science and Technology, 62, 1979–1991.