Australian Research Council ERA 2015 Evaluation Handbook Page 1 of 186


ERA 2015 Evaluation Handbook

ISBN: 978-0-9924254-6-3

Commonwealth of Australia 2015

This publication is available for your use under a Creative Commons Attribution 3.0 Australia (CC BY) licence, with the exception of the Commonwealth Coat of Arms, the Australian Research Council (ARC) logo, the Excellence in Research for Australia (ERA) logo, images, signatures and where otherwise stated.

Use of ARC material under a Creative Commons Attribution 3.0 Australia licence requires you to attribute the work. Attribution must not be done in any way that suggests that the ARC endorses you or your use of the work. The ARC prefers the following attribution: Australian Research Council material used as supplied.

Provided you have not modified or transformed ARC material in any way the following attribution is preferred: Source: The Australian Research Council, Excellence in Research for Australia.

If you have modified, transformed, or derived new material from the ARC in any way, the ARC prefers the following attribution: Based on Australian Research Council, Excellence in Research for Australia data.

Requests and enquiries regarding this licence should be addressed to ARC Legal Services on +61 2 6287 6600.

Front Cover Image Credits:

Blue green wave stream texture background iStockphoto.com / rionm

Way energy iStockphoto.com / alengo

Water splash iStockphoto.com / kirstypargeter

Green leaves iStockphoto.com / Zaharov

Table of Contents

Addendum
1 How to use this Publication
2 Background
  2.1 Objectives of ERA
  2.2 Definition of Research
  2.3 FoR codes
    2.3.1 Two-digit FoR code
    2.3.2 Four-digit FoR code
    2.3.3 Six-digit FoR code
    2.3.4 Implications of the FoR code hierarchy
  2.4 Unit of Evaluation (UoE)
    2.4.1 Low Volume Threshold
    2.4.2 Low volume and national benchmarks
  2.5 Interdisciplinary and multidisciplinary research
    2.5.1 Institutional coding
  2.6 Reference Period
  2.7 ERA Submission Journal List
  2.8 ERA Indicator Development
  2.9 Development of arrangements for ERA 2015
3 ERA Roles and Responsibilities
  3.1 Expert Review
  3.2 Peer Review
  3.3 Responsibilities of the Research Evaluation Committee (REC)
  3.4 Responsibilities of a REC member
  3.5 Responsibilities of a REC Chair
  3.6 Responsibilities of a Peer Reviewer
  3.7 Review of ERA processes and feedback
  3.8 ERA Scrutiny Committee
  3.9 Confidentiality
  3.10 Conflict of interest (COI)
  3.11 Research Integrity and Research Misconduct
  3.12 Other sensitivities
    3.12.1 Commercially Sensitive research outputs
    3.12.2 Culturally Sensitive research outputs
    3.12.3 Australian Government Security Classified research outputs
  3.13 Assignment outside area of expertise
  3.14 Copyright
4 The ERA Evaluation Process
  4.1 ERA phases
    4.1.1 Submission
    4.1.2 Assignment
    4.1.3 Evaluation and moderation
    4.1.4 Reporting
5 The ERA Indicators: Background
  5.1 Introduction to the ERA Indicator Suite
  5.2 The ERA Indicator Principles
  5.3 ERA Rating Scale
    5.3.1 Notes on the Rating Scale
  5.4 A Dashboard of Indicators
  5.5 Drilldowns
  5.6 Explanatory Statements
  5.7 Volume and Activity vs. Quality
  5.8 Assignment of FoRs to Research Outputs
  5.9 FTE and Headcount
  5.10 Research Income and Research Commercialisation Income
  5.11 Applied Measures (excluding Research Commercialisation Income)
  5.12 Esteem Measures
  5.13 SEER warnings
6 The ERA Indicators: Detail
  6.1 Indicator contextual information
    6.1.1 Interdisciplinary profile
    6.1.2 Intradisciplinary profile
  6.2 UoE Indicator Summary
  6.3 Volume and Activity
    6.3.1 Research Outputs
    6.3.2 FTE Profile by Academic Level
    6.3.3 Research Output by Year
  6.4 Publishing profile
  6.5 Citation Analysis
    6.5.1 Relative Citation Impact (RCI) Profile
    6.5.2 Centile Analysis Profile
    6.5.3 Distribution of papers by RCI Classes
  6.6 Peer Review
  6.7 Research Income
  6.8 Applied Measures
    6.8.1 Research Commercialisation Income
    6.8.2 Patents
    6.8.3 Registered Designs
    6.8.4 Plant Breeders Rights
    6.8.5 NHMRC Endorsed Guidelines
  6.9 Esteem Measures
7 Glossary
8 Abbreviations
9 Discipline Clusters
Appendix 1: Research Output Drilldowns
Appendix 2: Peer Review Drilldowns and Peer Reviewer template
Appendix 3: HERDC Category 1 Research Income Drilldown
Appendix 4: Applied Measure Drilldowns
Appendix 5: Citation Benchmark Methodology
Appendix 6: ERA 2015 Discipline Matrix by Cluster
Appendix 7: Fields of Research code summary
Appendix 8: Aboriginal and Torres Strait Islander Studies
Appendix 9: Eligible ERA Institutions

Addendum

A number of units were deemed to be unassessable by the RECs during the evaluation meeting. These are identified in the National Report as 'n/r' (not rated due to coding issues).

How to use this Publication

The Excellence in Research for Australia (ERA) 2015 Evaluation Handbook has been written for Research Evaluation Committee (REC) members to assist in their evaluation of the quality of research undertaken in eligible higher education institutions (institutions).

The Handbook discusses the ERA approach, outlines the evaluation process and the principles of the ERA indicator suite and provides detailed information about each of the indicators. The Handbook is organised into five sections:

Background

This section discusses the underlying ERA framework including: the ERA objectives, definition of research, the Fields of Research codes (FoR) and the Unit of Evaluation (UoE). It also summarises changes to the ERA approach for 2015.

ERA Roles and Responsibilities

This section discusses ERA Expert Review and ERA Peer Review. It also outlines the roles and responsibilities of the REC, as well as detailing how issues such as conflict of interest (COI), copyright and confidentiality are addressed in ERA.

The ERA Evaluation Process

This section outlines the four stages of the ERA process: submission, assignment, evaluation and reporting.

The ERA Indicators: Background

This section introduces the ERA Indicator Suite. It includes the ERA Indicator Principles and the dashboard approach to evaluation. It also details how FoR codes are apportioned to submission data in ERA.

The ERA Indicators: Detail

This section provides an in-depth description of each of the ERA indicators, including the graphical presentation of data, Australian and world benchmarks (where applicable) and a description of the role of various indicators in the evaluation process.

This Handbook should be read in conjunction with the policy documents in the list below.

Policy documents

ERA 2015 Submission Guidelines: provides guidance to institutions about ERA 2015 submission rules and components.

ERA 2015 Discipline Matrix: provides indicator applicability for disciplines. It is available in Appendix 6 of this Handbook or in Excel format from the ARC website.

ERA 2015 Peer Reviewer Handbook: outlines the evaluation process for ERA Peer Reviewers and provides information about the conduct of peer review.

Technical documents

ERA-SEER 2015 Technical Specifications: provides technical instruction for institutions preparing and submitting ERA 2015 submissions.

ERA-SEER 2015 Technology Pack: comprises technical documentation, Code Tables and XML schema related to the ERA 2015 submission process.

Further information about ERA is available on the ARC website. The ERA Team can be contacted by email at [email protected] or phone 02 6287 6755.

Background

Objectives of ERA

The objectives of ERA are to:

1. establish an evaluation framework that gives government, industry, business and the wider community assurance of the excellence of research conducted in Australia's higher education institutions

2. provide a national stocktake of discipline-level areas of research strength and areas where there is opportunity for development in Australian higher education institutions

3. identify excellence across the full spectrum of research performance

4. identify emerging research areas and opportunities for further development

5. allow for comparisons of research in Australia, nationally and internationally, for all discipline areas.

Definition of Research

For the purposes of ERA, research is defined as the creation of new knowledge and/or the use of existing knowledge in a new and creative way so as to generate new concepts, methodologies, inventions and understandings. This could include synthesis and analysis of previous research to the extent that it is new and creative.

Institutions must ensure that all research outputs submitted to ERA meet this definition of research. Outputs that do not meet this definition may be excluded from submissions during the ERA submission process or, where they are not excluded from submissions, their inclusion may adversely affect the quality rating assigned during the evaluation process.

FoR codes

A Unit of Evaluation (UoE) in ERA is the discipline within an institution. For the purposes of ERA, disciplines are defined as four-digit and two-digit FoRs as identified in the Australian and New Zealand Standard Research Classification (ANZSRC). The ANZSRC provides 22 two-digit FoR codes, 157 four-digit FoR codes, and an extensive range of six-digit codes. The ANZSRC was released in 2008 by the Australian Bureau of Statistics (ABS) and Statistics New Zealand. It provides important information about each four-digit and two-digit FoR. The ANZSRC is available in full from the ABS website.

Two-digit FoR code

This is the highest level of the ANZSRC hierarchy. A two-digit FoR code relates to a broad discipline field, such as 02 Physical Sciences. A two-digit FoR code consists of a collection of related four-digit FoR codes, such as 0201 Astronomical and Space Sciences, 0203 Classical Physics, and all other four-digit FoRs within the 02 Physical Sciences code.

Four-digit FoR code

This is the second level of the ANZSRC hierarchy. A four-digit FoR code is a specific discipline field of a two-digit FoR code, for example, 0201 Astronomical and Space Sciences. A four-digit FoR code consists of a collection of related six-digit FoR codes. Institutions submit data for ERA at the four-digit FoR level.

Six-digit FoR code

This is the lowest level of the hierarchy of ANZSRC codes. A six-digit FoR code is a further breakdown of a four-digit FoR code, for example, 020101 Astrobiology is within 0201 Astronomical and Space Sciences. Six-digit FoR data is not collected in ERA and evaluation is not conducted at this level.
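The three levels of the hierarchy are linked by simple string prefixes: a four-digit code begins with its two-digit parent, and a six-digit code begins with its four-digit parent. This relationship can be sketched in Python (an illustrative sketch only; the function name is an assumption, not part of any ARC system):

```python
def parent_codes(six_digit):
    """Return the (two-digit, four-digit) ANZSRC ancestors of a six-digit FoR code."""
    return six_digit[:2], six_digit[:4]

# 020101 Astrobiology sits under 0201 Astronomical and Space Sciences,
# which in turn sits under 02 Physical Sciences.
print(parent_codes("020101"))  # ('02', '0201')
```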

Implications of the FoR code hierarchy

ERA has been designed to provide flexibility for, and recognition of, discipline-specific research behaviours at both the four-digit and two-digit FoR code levels.

Although six-digit FoR codes are not assessed in ERA, it is important that REC members are aware of the diversity of six-digit FoR codes beneath the four-digit FoR codes. For many disciplines, the six-digit FoR codes represent a wide and diverse range of sub-disciplines which may have quite different publishing practices. For this reason, the profile for a particular four-digit FoR code at one institution may look very different from another institution's because of differences in focus at the six-digit level. For example, FoR 0502 Environmental Science and Management includes 12 diverse six-digit fields.

This means that the 0502 UoE at an institution with a focus on 050209 Natural Resource Management may have very different publishing behaviours and research outlets to another 0502 UoE at an institution which focuses primarily on 050201 Aboriginal and Torres Strait Islander Environmental Knowledge.

Similarly, REC members must be cognisant of the six-digit codes which sit beneath the 99 (other) codes. In many cases, important sub-disciplines with significant research activity may be represented in the 99 (other) codes. For example, FoR 1699 (Other Studies In Human Society) includes six separate six-digit fields, such as Gender Specific Studies and Studies of Asian Society.

For some broad discipline areas, related disciplines are located in different parts of the ANZSRC. For example, some areas of Materials Science can be found in 02 Physical Sciences, 03 Chemical Sciences, 09 Engineering Sciences, and 10 Technology. REC members should ensure they are aware of the boundaries for their allocated FoR codes, and the interaction of the related FoR codes. Please refer to Appendix 6: Discipline Matrix ERA 2015 for further information.

Unit of Evaluation (UoE)

ERA evaluation occurs at both the four-digit and two-digit FoR code levels. A UoE for ERA is the research discipline, as defined by the ANZSRC four-digit and two-digit FoR codes, for an eligible institution (Appendix 9). UoEs do not correspond to named disciplines, departments or research groups within an institution.

Data for ERA is submitted by institutions at the four-digit FoR code level, and is aggregated to create four-digit and two-digit UoEs. Research Evaluation Committees (RECs) are formed around broad discipline groupings for the purpose of administering the ERA evaluations. RECs will evaluate both four-digit and two-digit UoEs.

The four-digit FoR codes generally align with their two-digit code within the same REC, with the exception of the four-digit FoR codes beneath 10 Technology, which are split across three RECs. The construction of the two-digit 10 Technology UoEs for evaluation will include all the four-digit codes beneath (i.e. 1001–1099). The evaluation of the two-digit 10 Technology UoEs will occur by a cross-REC evaluation.

Low Volume Threshold

Four-digit and two-digit UoEs will only be assessed where there is a meaningful level of data to be evaluated. An institution is only evaluated in ERA in a four-digit or two-digit discipline if the number of research outputs reaches a low volume threshold.

For disciplines where citation analysis is used, no evaluation will be conducted for the FoR at a given institution if the number of indexed journal articles over the six-year reference period is fewer than 50 in any four-digit or two-digit FoR.

For disciplines where peer review is used, no evaluation will be conducted for the FoR at a given institution where, over the six-year reference period, there are fewer than the equivalent of 50 submitted research outputs. Books are given an effective weighting of 5:1 compared with other research outputs for the purposes of determining the low volume threshold in these disciplines; for other purposes in ERA they are counted as a single research output.
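The threshold arithmetic described above can be illustrated with a short sketch (a hypothetical illustration, not ARC software; the function names and the list-of-strings representation of outputs are assumptions):

```python
# Low volume threshold rules as described in the Handbook.
BOOK_WEIGHT = 5          # books count 5:1 in peer review disciplines
LOW_VOLUME_THRESHOLD = 50

def meets_threshold_citation(indexed_journal_articles):
    """Citation-analysis disciplines: at least 50 indexed journal articles
    over the six-year reference period."""
    return indexed_journal_articles >= LOW_VOLUME_THRESHOLD

def meets_threshold_peer_review(outputs):
    """Peer-review disciplines: at least the equivalent of 50 outputs,
    with each book weighted 5:1 against other output types."""
    weighted = sum(BOOK_WEIGHT if o == "book" else 1 for o in outputs)
    return weighted >= LOW_VOLUME_THRESHOLD

# Example: 8 books and 12 journal articles -> 8*5 + 12 = 52 weighted outputs,
# which clears the threshold even though only 20 outputs were submitted.
print(meets_threshold_peer_review(["book"] * 8 + ["article"] * 12))  # True
```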

For some FoRs at some institutions, there may be insufficient research volume to undertake a valid analysis at the four-digit level, but sufficient research volume at the two-digit level. In these instances, evaluation will take place at the two-digit FoR code level only.

The two-digit profiles include all data from the four-digit FoR codes beneath, regardless of whether they reached the low volume threshold at the four-digit FoR code level. The two-digit FoRs, therefore, form unique UoEs and may present the RECs with a quite different profile from the constituent four-digit FoRs. For example, a two-digit UoE may contain a mix of material which has been evaluated at the four-digit level and material which has not.

In instances where an institution does not meet the low volume threshold in the FoR, the UoE will be publicly reported as not assessed. This means that data submitted on research outputs, research income, applied measures and esteem measures for the relevant two-digit or four-digit FoR for that institution will be collected but not evaluated under ERA. However, the data submitted will still contribute to the construction of the ERA benchmarks, and all ERA data will be aggregated for national-level reporting irrespective of whether any FoRs within a specific institution meet the low volume threshold.

Low volume and national benchmarks

For the purposes of generating FoR-specific national benchmarks (referred to as Australian Higher Education Provider (HEP) benchmarks in ERA), the ARC will aggregate outputs within each of the two- and four-digit FoR code levels nationally. HEP benchmarks are used to profile an institution's performance against other Australian HEPs. Therefore, these benchmarks will include information submitted to a particular FoR from all institutions, including data from the not assessed UoEs.

Interdisciplinary and multidisciplinary research

As ERA is a discipline-based research evaluation exercise, interdisciplinary and multidisciplinary research is disaggregated based on its discipline components. However, RECs will have access to information which shows the nature and extent of inter/multidisciplinary research for each UoE. Each research output can be assigned to a maximum of three four-digit FoRs. For each UoE, RECs are able to view a Discipline Profile showing the extent to which the research outputs of a UoE have also been assigned to other four-digit FoRs. This will provide additional information for the purposes of assigning UoEs to REC members, and is also contextual/discipline information for REC members to consider when undertaking their evaluation.

Where multi/interdisciplinary work is being considered, REC members may be assigned between RECs as required to bring appropriate expertise to bear on the evaluation. At the final REC evaluation meeting, all RECs will meet concurrently which also allows for cross-REC expertise to contribute to finalising evaluations.

Institutional coding

Institutions may add institutional reporting codes that link components of their submission to particular institutional units such as research centres or departments. Following completion of the ERA evaluation, institutions will then be able to use these codes to compile information about, for example, an institutional unit in climate change research that had its research outputs submitted for evaluation under a variety of Fields of Research (e.g. environmental science and management, atmospheric sciences, law, soil sciences and demography).

Institutional coding is for institutional use and not for the purposes of ERA evaluation.

Reference Period

The collection of data for ERA 2015 is based on several reference periods as detailed in Table 1 below.

Table 1: ERA 2015 reference periods

Data Type           Reference Period                       Years
Research Outputs    1 January 2008 to 31 December 2013     6
Research Income     1 January 2011 to 31 December 2013     3
Applied Measures    1 January 2011 to 31 December 2013     3
Esteem Measures     1 January 2011 to 31 December 2013     3

Data regarding eligible researchers is not collected for a reference period but is based on a single staff census date of 31 March 2014.

ERA Submission Journal List

The ERA 2015 Submission Journal List includes 24 028 scholarly journals. An article must be published in a journal included in the list in order to be submitted as a journal article in ERA.

The ERA 2015 Submission Journal List includes journals that meet the following criteria:

were active during the ERA 2015 reference period for research outputs (1 January 2008 to 31 December 2013)

are scholarly

have peer or editorial review policies acceptable to the discipline

have an ISSN.

Each journal on the list is assigned up to three FoR codes. The FoRs assigned to a journal are not listed in any order of relevance or importance.

A journal may be assigned either two-digit FoRs or four-digit FoRs. Where the subject matter of a journal is sufficiently broad to cover more than three four-digit FoR codes, the journal has been assigned one or more two-digit FoR codes; where the subject matter is sufficiently broad to cover more than three two-digit FoR codes, the journal has been assigned as Multidisciplinary (MD).
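The assignment rule described above can be sketched as a decision procedure (illustrative only; the actual journal list was compiled editorially, and the function name and input representation are assumptions):

```python
def assign_journal_fors(relevant_four_digit):
    """Sketch of the ERA journal list coding rule: up to three four-digit FoRs,
    else the covering two-digit FoRs, else Multidisciplinary (MD)."""
    if len(relevant_four_digit) <= 3:
        return sorted(relevant_four_digit)
    # Too broad for three four-digit codes: fall back to two-digit parents.
    two_digit = {code[:2] for code in relevant_four_digit}
    if len(two_digit) <= 3:
        return sorted(two_digit)
    # Broader than three two-digit codes: Multidisciplinary.
    return "MD"

# A journal spanning 0801, 0803, 0805 and 0806 exceeds three four-digit codes
# but falls within the single two-digit code 08, so it carries 08.
print(assign_journal_fors({"0801", "0803", "0805", "0806"}))  # ['08']
```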

ERA Indicator Development

During 2008, the ARC convened an Indicator Development Group (IDG), comprising experts in research metrics and statistics, to consider, test and recommend appropriate discipline-specific indicators, including measures of quality, applied research and research activity. To test the appropriateness of the proposed indicators for each discipline, the ARC held discipline cluster workshops with discipline experts. The ARC has also further consulted with the sector regarding the refinement of the indicators following the ERA 2010 and ERA 2012 evaluations. The indicator development process has been informed by analytical testing to verify the validity of the indicators. Where an indicator has not been clearly demonstrated to be a valid and robust measure of research quality for a discipline, it has not been included in ERA. The ERA Indicator Principles are included in Section 5: The ERA Indicators: Background, and the details of each indicator are discussed in Section 6: The ERA Indicators: Detail, of this Handbook.

Development of arrangements for ERA 2015

The ARC has consulted broadly in the development of arrangements for ERA 2015. In addition to the feedback on ERA 2012 processes provided by the sector and REC members, the ARC issued a sector-wide consultation paper on a range of issues, as well as draft ERA 2015 documentation. The ERA 2015 Submission Guidelines and ERA 2015 Discipline Matrix are informed by the outcomes of these consultations.

For a list of submission-related changes for ERA 2015, see page 7 of the ERA 2015 Submission Guidelines. The changes relevant to the evaluation process include the addition of a new category of non-traditional research outputs for all disciplines: Research Report for an External Body, which consists of four subcategories of reports. The ARC has also provided further clarification of the requirements for the selection of research outputs for peer review within a UoE.

ERA Roles and Responsibilities

Expert Review

Expert review informed by discipline-specific indicators is central to ERA evaluations. ERA evaluations are conducted by the members who comprise the RECs. Each four-digit UoE will be assigned to three REC members. The same REC members will automatically be assigned to the two-digit UoEs based on the four-digit assignments. In cases where only the two-digit UoE is evaluated, typically due to the low volume threshold, at least three REC members will be assigned.

Evaluations are informed by the range of indicators identified in the ERA 2015 Discipline Matrix at Appendix 6, with particular focus placed on those that relate most closely to the quality of research outputs, such as citation metrics and peer review.

Peer Review

REC members have access to a pool of peer reviewers who have been recruited for ERA 2015. Peer reviewers are assigned by the principal reviewer, a nominated REC member, for each UoE in which peer review is used as an indicator, as identified in the ERA 2015 Discipline Matrix. In each case, the principal reviewer is expected to assign at least two peer reviewers to each UoE.

External peer reviewers report on, but do not rate, the sample of peer review outputs which they have reviewed. Their report informs the evaluations by the REC members. Peer reviewers do not have access to any of the ERA indicators or data presented to REC members, only the sample of outputs nominated by each institution for peer review.

Responsibilities of the Research Evaluation Committee (REC)

The responsibilities of a REC are to:

assign an agreed rating for all UoEs under each four-digit and two-digit FoR code where there is sufficient volume for an evaluation

work with the other RECs to ensure consistent application across the exercise of the overall quality standards and common assessment procedures

provide feedback and advice as requested by the ARC on any aspects of the assessment process

report the results to the ARC.

Responsibilities of a REC member

The responsibilities of a REC member are to:

participate fully in the evaluation process within their REC

abide by confidentiality and Conflict of Interest (COI) requirements as detailed in Section 3.10

maintain confidentiality of both the deliberations and decisions of the REC

identify all instances where they may have a COI or other sensitivity and raise these with the ARC prior to the conflict occurring

ensure they adequately prepare for meetings to avoid unnecessary additional administrative costs and inconvenience to other committee members

be diligent in completing tasks allocated to them by the REC Chair

assign external peer reviewers where required

evaluate assigned material and allocate preliminary ratings to each UoE

contribute fully, constructively and dispassionately to all REC processes and, within the capacity of their expertise, take ownership of the collective decisions of the REC

exercise due skill and care in the performance of their responsibilities.

Responsibilities of a REC Chair

The responsibilities of a REC Chair are to:

ensure that the REC operates within the policies, guidelines and procedures established by the ARC

abide by confidentiality and COI requirements

ensure that confidentiality is maintained for the deliberations and decisions of the REC

identify instances where there may be COI or other sensitivity and raise these with the ARC prior to conflict occurring

contribute fully, constructively and dispassionately to all REC processes and take ownership of the collective decisions of the REC

assign material to REC members for evaluation

evaluate their own assigned material and give preliminary ratings

ensure that evaluations are completed within agreed timeframes

chair the REC meeting to review preliminary ratings, and guide the REC to provide final ratings for quality separately for each UoE

ensure that REC members have an opportunity to contribute fully to the process and REC activities

ensure that REC decisions are documented

report on the results to the ARC

participate in a review at the conclusion of the REC meeting and report to the ARC on the evaluation processes undertaken by the REC.

In the event that a REC Chair is unable to perform some or all of these responsibilities, the ARC will appoint an Acting Chair from within the REC and will determine which of the Chair's responsibilities the Acting Chair assumes. This will most commonly occur where, for example, the Chair has identified a COI and the ARC appoints an Acting Chair for the purposes of assigning material for evaluation.

Responsibilities of a Peer Reviewer

The responsibilities of a peer reviewer are to:

evaluate assigned material and provide a report using the peer review template

be diligent in completing tasks allocated to them

exercise due skill and care in the performance of their responsibilities

identify instances where they may have a COI or other sensitivities, raise these with the ARC prior to conflict occurring and comply with the directions of the ARC relating to the management of COI

abide by confidentiality requirements.

Review of ERA processes and feedback

Throughout their engagement for the purposes of ERA, REC members are invited and encouraged to comment on and provide feedback about all ERA processes. One of the outcomes of the evaluation meeting is that RECs will make recommendations for consideration by the ARC about future improvements for ERA processes. The ARC will also convene a meeting of REC Chairs at the conclusion of the evaluation phase for a range of purposes, including an overarching review of evaluation processes.

ERA Scrutiny Committee

The ARC will appoint a Scrutiny Committee for ERA 2015 to:

scrutinise the processes followed by the RECs in assessing the home UoE of each REC member. A REC member's home UoE is the UoE associated with their institution and their primary four-digit FoR of expertise

scrutinise the outcome for each home UoE with the benefit of relevant benchmark information from the ERA 2015 evaluations

provide a report to the ARC Chief Executive Officer (CEO) advising of any issues in relation to the evaluation outcomes.

Confidentiality

REC members and peer reviewers are required to sign a confidentiality agreement with the ARC prior to their participation in ERA. The agreement covers all aspects of their work with ERA, and the agreement survives the conclusion of their engagement for the purposes of ERA.

REC members and peer reviewers may not contact researchers and/or institutions under any circumstances in relation to material that has been submitted for evaluation in ERA, or seek additional information from any sources. REC members and peer reviewers must not reveal details about any evaluation, deliberations or conclusions, at any time.

Conflict of interest (COI)

A COI is any situation where a REC member or peer reviewer has an interest which conflicts, might conflict, or may be perceived to conflict with the interests of the implementation of ERA. Examples of COI include:

being employed by, or holding an adjunct or honorary appointment at, the institution that has made the submission which is being assigned

having a close personal relationship with someone whose work is significantly incorporated in the UoE task being assigned for evaluation. This could include a partner, spouse, family member or close friend. Enmity is also included in this category

being a close collaborator with someone whose work is significantly incorporated in the UoE task that is being assigned for evaluation. For example, where a REC member is a close collaborator with authors for 10% or more of the total outputs of a UoE, that would constitute a potential COI

other conflicts that a REC member will need to raise and have clarified, including financial interests (for example holding a company directorship, stock ownership or options, patents, royalties, consultancy or grant) which could lead to financial gain to a REC member in circumstances where they have access to information or are able to influence decision-making.

While most COIs will be determined before the assignment of evaluation tasks occurs, REC members and peer reviewers may encounter material with which they have a potential COI during evaluation and are required to declare any potential or actual COI as soon as practicable after it has been identified. In such circumstances, the ARC will address each instance on a case by case basis, usually by reassigning the material to another reviewer.

A REC member or a peer reviewer will never be involved in considerations about UoEs in any discipline from their own institution, or any institution with which they have a declared COI.

Research Integrity and Research Misconduct

As specified within the ARC Research Integrity and Research Misconduct Policy, anyone engaged on ARC business, such as ARC College of Experts members, Research Evaluation Committee members, Selection Advisory Committee members, external assessors and contractors, is required to report alleged breaches of research integrity or research misconduct issues identified in relation to ARC-funded business to the ARC Research Integrity Officer.

The policy and contact details for the Research Integrity Officer are available on the ARC website.

Should you identify an alleged breach of research integrity or a research misconduct issue as part of your evaluation please notify the ARC Research Integrity Officer. A Notification Form for an Allegation of Research Integrity Breach or Misconduct (Attachment A of the policy) can be used to report the allegation.

The Research Integrity Officer will refer the allegation to the relevant institution for investigation in accordance with the requirements of the Australian Code for the Responsible Conduct of Research. Sufficient information should be provided to enable the institution to progress an investigation into the allegation (if required).

Other sensitivities

To be eligible for ERA, all research outputs must either be published or made publicly available in the ERA reference period. However, if any research material causes offence or raises serious sensitivities for a REC member or peer reviewer, they are asked to raise their concern with the ARC as soon as practicable. In this case the UoE would normally be reassigned.

Commercially Sensitive research outputs

A research output that includes commercially sensitive information may be included as part of a submission provided the necessary permissions have been obtained. This will be flagged to RECs and peer reviewers.

Culturally Sensitive research outputs

A research output that is culturally sensitive may be included as part of a submission provided that the ARC is appropriately advised of the sensitivities. This will be flagged to RECs and peer reviewers.

Australian Government Security Classified research outputs

A research output that includes information classified in line with the Australian Government Protective Security Manual as either In-Confidence or greater, or Restricted or greater, cannot be included in a submission (this also includes outputs subsequently classified as Sensitive, For Official Use Only, or greater under the Australian Government Security Classification System).

Assignment outside area of expertise

One of the REC members will be assigned as the principal reviewer for a UoE. The principal reviewer will take the lead role in the discussion of that UoE at the REC meeting.

There will also be a number of cross-REC assignments where REC Chairs will be able to draw on expertise from members outside their own REC. On such occasions, REC members may be asked to evaluate UoEs that do not appear to correspond directly with their area of expertise. REC members' scholarly judgement and views are extremely valuable in the evaluation and moderation of these UoEs.

Copyright

ERA REC members and peer reviewers have access to, and use of, relevant research outputs to conduct ERA peer review. Acting under section 183(1) of the Copyright Act 1968 (Cth), the Commonwealth of Australia, as represented by the ARC, has authorised each ERA REC member and peer reviewer to do acts comprised in the copyright of relevant material for the purposes of ERA. As a result, authorised REC members and peer reviewers may make all uses of relevant material that are necessary or convenient to enable their participation in ERA. The authorisation is strictly limited to their participation in ERA and will not extend to uses for any purpose unrelated to participation in ERA.

Access to research outputs is provided strictly for the purposes of conducting evaluation for ERA. REC members and peer reviewers are not permitted to reproduce or distribute the outputs for any purpose other than participation in ERA. To ensure appropriate protection of copyright material in ERA submissions, REC members and peer reviewers must at all times comply with the authorisation.

The ERA Evaluation Process

ERA phases

ERA 2015 consists of a number of phases, including Submission, Assignment, Evaluation and Reporting. Each of these phases is composed of a number of stages or activities. Table 2 below outlines the ERA 2015 phases and evaluation schedule.

Table 2: ERA phases and evaluation schedule

Phase: Submission
Activity: Submission of data by eligible institutions to the ARC

Phase: Assignment
Activity: REC Chairs assign UoEs to REC members; REC members (principal reviewers) assign UoEs to peer reviewers

Phase: Evaluation
Stage 1 (9 June to 27 July 2015): Preliminary individual evaluation of UoEs by REC members at the four-digit level, including peer review (where peer review is an identified indicator) of research outputs; evaluation of all assigned UoEs by peer reviewers
Stage 2A (29 July to 8 September 2015): REC members' moderation of four-digit evaluations and preliminary independent evaluation of UoEs at the two-digit level
Stage 2B (10 September to 29 September 2015): REC members' moderation of two-digit evaluations
Stage 2C (1 October to 8 October 2015): REC members' review of moderated four-digit and two-digit evaluations in preparation for the Stage 3 meeting
Stage 3 (12 October to 16 October 2015): Meeting of all RECs to finalise recommended evaluation outcomes

Phase: Reporting
Activity: State of Australian University Research 2015-2016: Volume 1 ERA National Report published

The various stages of the ERA 2015 Evaluation process are outlined in Figure 1 below.

Figure 1: ERA stages and activity

Submission

Institutions will be given access to the ERA IT system, the System to Evaluate the Excellence of Research (SEER), to upload their ERA data. The data will be verified and validated to ensure that they meet the ERA requirements (see the ERA 2015 Submission Guidelines). The submitted data are used to construct UoEs for each four-digit and two-digit FoR code, each of which includes all relevant indicators for evaluation as well as the relevant national and international benchmarks.

Assignment

At the conclusion of the submission phase, UoEs will be assigned to REC members by the REC Chair for evaluation, except in particular instances of identified COI, in which case an Acting Chair will be appointed by the ARC for the purposes of assignment.

Each four-digit UoE will be assigned to three REC members. REC members will automatically be assigned to two-digit UoEs based on their four-digit assignments. There will also be a number of cross-REC assignments where REC Chairs will be able to draw on expertise from members outside their own REC.

Each UoE will have a REC member appointed as principal reviewer who will take a lead role in discussion of the UoE at the Stage 3 Evaluation Meeting. Where peer review is identified as an indicator, external peer reviewers will be assigned for the purposes of constructing the peer review indicator.

REC Chairs and REC members should take account of identified COIs and workload when assigning UoEs for review or evaluation.

The appointed principal reviewer for a UoE will assign peer reviewers for that UoE. Assignment is based on peer reviewer expertise at the two- and four-digit FoR code level. Principal reviewers may also need to consider the expertise of peer reviewers at the six-digit FoR code level to ensure that evaluation is carried out by those with the appropriate expertise. This may be particularly relevant for Indigenous research. The ANZSRC provides alternative groupings to aid the understanding of research from different cultural perspectives which are unique to Australia and New Zealand. Appendix 8 provides a list of six-digit codes relating to Aboriginal and Torres Strait Islander Studies with the related four-digit codes and discipline grouping.

Evaluation and moderation

Evaluation in ERA is conducted primarily online: REC members access the relevant data, indicators and peer review outputs (where peer review is an indicator) for each assigned UoE. REC members review the range of relevant indicators to reach a preliminary view (a rating with reference to the ERA rating scale and supporting text) about each UoE, and record that view in SEER prior to the REC meeting. Peer review is similarly conducted through SEER, and peer reviewers have access through SEER to nominated outputs for peer review.

In the first instance, preliminary evaluations at the four-digit and two-digit levels are conducted independently by REC members. Evaluation is split across several stages as illustrated in Figure 1. In Stage 1, REC members undertake their initial evaluations of four-digit UoEs, independently from each other. In Stage 2A, REC members have access to the evaluations of other REC members co-assigned to the same UoEs, to allow them an opportunity to reflect on their preliminary evaluations and to provide opportunity for moderation between REC members' preliminary ratings.

Moderation is an integral process in ERA: it ensures that each evaluation is conducted as an exchange of views between experts in a discipline and their colleagues in other disciplines. This process promotes the standard application of the ERA methodology across disciplines. In Stages 1 to 2C, moderation is conducted independently in SEER, with individual REC members considering their own evaluations in light of the posted ratings and comments of other reviewers of the same UoE. It does not involve direct communication with other reviewers.

At the conclusion of the online evaluation stages the RECs will convene to consider all of the preliminary evaluations and agree to final evaluation outcomes for each UoE. The final ratings are the decision of the entire REC and every UoE will be discussed by the REC as a committee, except where REC members are excluded due to an identified COI.

The ratings agreed by the RECs are final. The RECs will deliver their agreed final ratings to the ARC.

Reporting

The ERA National Report will be produced by the ARC. The National Report will present a comprehensive assessment by discipline of the quality of research activity conducted in Australia's higher education institutions. This report will provide information on the discipline-specific research activity of each eligible Australian higher education institution and the contribution of each discipline to the national landscape. In addition, the ARC will provide a range of information to individual institutions following the completion of the ERA 2015 evaluations, to assist further with their understanding of the ERA results.

The ERA Indicators: Background

Introduction to the ERA Indicator Suite

ERA is based on the principle of expert review informed by indicators. Quantitative and qualitative indicators present significant amounts of data in a readily accessible format. Many of the tabular indicator presentations are complemented by graphical presentations, which display the same data in a different format.

The indicator profiles in ERA serve several functions:

to summarise data within a UoE

to provide a mechanism for REC members to review subsets of data through drilldown menus

to understand how a UoE performs relative to other Australian institutions

to understand how a UoE performs relative to the world.

The ERA indicator suite has been developed to align with the research behaviours of each discipline. For this reason, there are differences in the selection of indicators. For example, some disciplines use citation analysis, while others use peer review of research outputs; peer review and citation analysis are not used in combination.

Figure 2 shows ERA 2015 indicators at a glance. Detailed information on which FoR codes use which indicators is available in the ERA 2015 Discipline Matrix (see Appendix 6).


Figure 2: ERA Indicators at a glance


The ERA Indicator Principles

The eight ERA indicator principles listed below have guided the development of the indicator suite. In addition, and at all times throughout the ERA development process, the ARC has been cognisant of the burden of data collection placed on submitting institutions. Each of the ERA indicators is designed with regard to the following criteria:

Quantitative: objective measures that follow a defined methodology and will reliably produce the same result, regardless of when and by whom the methodology is applied.

Internationally recognised: while not all indicators will allow for direct international comparability, the indicators must be internationally recognised measures of research quality. Indicators must be sensitive to a range of research types, including research relevant to different audiences (e.g. practitioner-focused, internationally relevant, nationally- and regionally-focused research). ERA will include research published in non-English language publications.

Comparable to indicators used for other disciplines: while ERA evaluation processes will not make direct comparisons across disciplines, indicators must be capable of identifying comparable levels of research quality across disciplines.

Able to be used to identify excellence: indicators must be capable of assessing the quality of research, and where necessary, focused to identify excellence.

Research relevant: indicators must be relevant to the research component of any discipline.

Repeatable and verifiable: indicators must be repeatable and based on transparent and publicly available methodologies. This should allow institutions to reproduce the methodology in-house. All data submitted to ERA must be auditable and reconcilable.

Time-bound: indicators must be specific to a particular period of time as defined by the reference period. Research activity outside of the reference period will not be assessed under ERA other than to the extent it results in the triggering of an indicator during the reference period.

Behavioural impact: indicators should drive responses in a desirable direction and not result in perverse unintended consequences. They should also limit the scope for special interest groups or individuals to manipulate the system to their advantage.

ERA Rating Scale

ERA utilises a five-point rating scale. The rating scale is broadly consistent with the approach taken in research evaluation processes in other countries to allow for international comparison.

Table 3: ERA Rating Scale and Descriptor

Rating 5: The Unit of Evaluation profile is characterised by evidence of outstanding performance well above world standard presented by the suite of indicators used for evaluation.

Rating 4: The Unit of Evaluation profile is characterised by evidence of performance above world standard presented by the suite of indicators used for evaluation.

Rating 3: The Unit of Evaluation profile is characterised by evidence of average performance at world standard presented by the suite of indicators used for evaluation.

Rating 2: The Unit of Evaluation profile is characterised by evidence of performance below world standard presented by the suite of indicators used for evaluation.

Rating 1: The Unit of Evaluation profile is characterised by evidence of performance well below world standard presented by the suite of indicators used for evaluation.

Rating NA: Not assessed due to low volume. The number of research outputs does not meet the volume threshold standard for evaluation in ERA.

Notes on the Rating Scale

World Standard refers to a quality standard. It does not refer to the nature or geographical scope of particular subjects, to the locus of research, or to its place of dissemination.

Each point within the rating scale represents a quality band. For example, one UoE might be rated highly within the 4 band and another rated lower within the same band, but the rating for both will be a 4. REC members may only give whole ratings (not 4.2, 4.5 etc.).

The banding of quality ratings assists REC members in determining a final rating. If, for example, a UoE has a preliminary rating at the top margin of the 4 band based on the assessment of the quality of the research outputs, other indicators (e.g. income or esteem measures) may be sufficient to raise the rating into the 5 band. The lack of such indicators will not, however, be used to lower a rating.

The ERA evaluation measures research quality, not scale or productivity. Volume information is presented to the RECs for the purposes of providing context to the research.

The methodology and rating scale allow for UoEs with different volumes of output to achieve the same rating. So, for example, a UoE with a small number of outputs can achieve a rating of 5 where the UoE meets the standard for that rating point, similar to a UoE with a large number of outputs.

Each UoE is assessed against the absolute standards of the rating scale, not against other UoEs. One of the key objectives of ERA is to identify excellence across the full spectrum of research performance.

REC members exercise their knowledge, judgment and expertise to reach a single rating for each UoE. In reaching a rating, REC members take account of all of the supporting evidence which is submitted for the UoE. REC members do not make comment about the contributions of individual researchers.

The rating for each UoE reflects the REC members expert and informed view of the characteristics of the UoE as a whole. In all cases the quality judgments relate to all of the evidence, including the entire indicator suite, and the ERA Rating Scale. In order to achieve a rating at a particular point on the scale, the majority of the output from the UoE will normally be expected to meet the standard for that rating point. Experience has demonstrated that there is normally a variety of quality within a UoE.

A Dashboard of Indicators

ERA is a holistic evaluation of research quality. The ERA indicator suite for each FoR is presented to REC members as a dashboard of indicators. The dashboard presents a range of information to the REC member, and the full range of indicators presented on the dashboard is relevant to the evaluation.

Drilldowns

REC members are able to view the underlying data behind each indicator. Through the ERA evaluation interface (SEER), REC members are able to drill down into the underlying data of an indicator at various points. Drilldown menus are generally not available where information would allow the viewer to identify or track individual researchers.

Drilldowns enable REC members to view the unit record data which comprise the indicator. Some of the information displayed relates to other indicators, allowing REC members to enrich their view of the UoE. For example, the journal field of the publishing profile will show both a list of all journal articles published in that journal and citation counts for these articles. This may reveal additional information, such as a trend of low citation performance explained by a particular sub-discipline focus. In this manner, REC members can begin to build a richer picture of the UoE they are looking at and conduct evaluations informed by summary metrics, the underlying information, contextual information and their expert knowledge of the discipline.

Explanatory Statements

Explanatory Statements are an integral part of the ERA evaluations and are viewed by REC members alongside the indicators. Institutions have an opportunity to provide an Explanatory Statement for each two-digit FoR code. Explanatory Statements inform REC members of the context in which a two- or four-digit UoE is presented, and may guide REC members' attention to a particular aspect of the submission or a particular focus of the research, such as an emerging discipline. Where the indicator profile looks unusual, the Explanatory Statement may assist REC members to understand apparent anomalies.

Volume and Activity vs. Quality

A Volume and Activity indicator is provided to give contextual information about the UoE, such as focusing REC members' attention on the main output type for a UoE.

There are no assumptions in ERA about the relationship between quality and quantity. ERA assesses research quality, and recognises that a small UoE can be rated at the same level as a large UoE.

Assignment of FoRs to Research Outputs

The ERA methodology has been designed to allow submitting institutions flexibility to assign research outputs to the most appropriate FoR code.

Institutions may assign research outputs to up to three four-digit FoR codes relevant to the output. With the exception of journal articles and conference papers, there is no restriction on the FoR codes institutions can assign to research outputs.

Institutions may assign research outputs published in journals to any of the FoR codes listed for that journal in the ERA 2015 Submission Journal List. There is no requirement for institutions to assign a journal article to all of the listed codes for each journal, only the relevant codes.

In the case of articles published in journals with two-digit codes in the ERA 2015 Submission Journal List, institutions may assign to the article any four-digit codes associated with the two-digit codes identified for that journal in the list.

In the case of articles in journals marked as multidisciplinary (MD) in the ERA 2015 Submission Journal List, the institution may select any relevant four-digit FoRs to assign to the article.

In addition, the reassignment exception rule allows a journal article which has significant content (66% or greater) that could best be described by a particular FoR code, to be assigned by the institution to that FoR code, even if the ERA 2015 Submission Journal List does not assign that code to the journal in which the article was published.

Where a research output is assigned more than one FoR code, submitting institutions are required to apportion the item across the FoR codes to account for the whole output. Each of the FoR codes assigned must account for at least 20% of the output (and in the case of the reassignment exception apportionment to a FoR not shown on the journal list must account for at least 66%). The total of percentages apportioned to each research output must equal 100%.
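As an illustration, the apportionment rules above (at most three four-digit codes, a 20% minimum per code, a 66% minimum under the reassignment exception, and a 100% total) can be expressed as a simple validation check. This is a hypothetical sketch only, not part of any ERA system; the function name, structure and example FoR codes are assumptions.

```python
def validate_apportionment(apportionments, reassigned_for=None):
    """Check a research output's FoR apportionment against the rules
    described above: at most three four-digit codes, each accounting
    for at least 20% of the output, totalling 100%. A code assigned
    under the reassignment exception must account for at least 66%.
    (Illustrative only; names and structure are assumed.)"""
    if len(apportionments) > 3:
        return False, "at most three four-digit FoR codes may be assigned"
    for code, pct in apportionments.items():
        if pct < 20:
            return False, "FoR %s is below the 20%% minimum" % code
        if code == reassigned_for and pct < 66:
            return False, "reassigned FoR %s must be at least 66%%" % code
    if abs(sum(apportionments.values()) - 100) > 1e-9:
        return False, "apportionments must total 100%"
    return True, "ok"

# A journal article split 70/30, with 0901 assigned under the
# reassignment exception (hypothetical codes and percentages):
ok, reason = validate_apportionment({"0901": 70, "0909": 30},
                                    reassigned_for="0901")
```

A split of 70/30 passes because the reassigned code exceeds 66% and both codes meet the 20% minimum; a 90/10 split would fail on the second code's minimum.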

FTE and Headcount

Institutions submit the fractional FTE (full-time equivalent) value of each eligible researcher as well as the FoR codes relevant to the researcher. Up to three four-digit FoR codes, totalling 100%, can be assigned to each eligible researcher. Not all researchers require an FTE value to be eligible, for example affiliates and Emeritus Professors. The identity of individual researchers is protected in the staffing profile.

Research Income and Research Commercialisation Income

Institutions are required to assign the relevant FoR codes to Research Income and Research Commercialisation Income. There is no restriction on the number of FoR codes that can be assigned to Research Income and Research Commercialisation Income. The total of percentages apportioned to Research Income and Research Commercialisation Income must equal 100% (the minimum apportionment to any FoR code is 0.01%).

Applied Measures (excluding Research Commercialisation Income)

Each Applied Measure, with the exception of Research Commercialisation Income, may be assigned to up to three relevant FoR codes. Where more than one FoR code is assigned to an Applied Measure, submitting institutions are required to apportion the FoR codes so that the total of the percentages apportioned equals 100% (the minimum apportionment to any FoR code is 20%).

Esteem Measures

Each Esteem Measure may be assigned to up to three relevant FoR codes. Where more than one FoR code is attributed to an Esteem Measure, submitting institutions are required to apportion the Esteem Measure across the FoR codes totalling 100% (the minimum apportionment to any FoR code is 20%). The identity of individual researchers is protected in the esteem profile.

SEER warnings

The SEER system has been developed to ensure that all indicators presented are valid. REC members can be confident that the indicators presented are accurate, and have passed validation and all automated and human checks during the submission phase.

The indicator profiles in Section 6, The ERA Indicators: Detail, provide information on the warnings relevant to each indicator. SEER will assign warnings based on a set of pre-defined rules. Warnings do not disqualify data, but flag outliers and issues that should be borne in mind while interpreting the data. Warnings may guide REC members to focus on particular indicators, or may highlight data outliers which may affect the indicator profile.

When warnings are presented for a UoE, REC members should take additional care to ensure that they are aware of all aspects of the UoE, including outliers in the underlying data (details of which can be viewed in the drilldowns).

The ERA Indicators: Detail

This section of the Handbook describes in detail the indicators that will be shown during evaluation.

Indicator contextual information will be shown on the first SEER screen for each UoE.

The remainder of this section provides information for each indicator used in the ERA evaluation process including:

The indicator: description and purpose

FoR code specific issues: information about the applicability of the indicator to particular discipline groupings or UoEs

Indicator tables and interpretation: how the indicator is shown in SEER

Benchmarks and comparators: description of any relevant benchmarks or the comparative information provided for the indicator

Relationship with other indicators: including whether the indicator should be considered in conjunction with other indicators

Relevant warnings: any warnings that will show in SEER regarding the integrity of the data, and an explanation of what each warning means

Drilldowns: drilldowns allow REC members to click on an aggregated indicator profile and view the details of individual items.


Indicator contextual information

Each UoE will be prefaced by contextual data to assist REC members to form an overall picture of the size of the UoE, the predominant output types and the extent of any interdisciplinary research.

The UoE Profile presents an Explanatory Statement submitted by the institution (see Section 5.6) together with the interdisciplinary profile.

Interdisciplinary profile

For four-digit UoEs RECs are provided with an interdisciplinary profile of the UoE. ERA allows up to three four-digit FoR codes to be apportioned to each research output and so REC members will have information about key areas of cross-over between the UoE being assessed and other FoRs from the same institution. The interdisciplinary profile should be viewed alongside information provided in the corresponding Explanatory Statement, which may, for example, highlight that the UoE is an integral part of broader research activity with significant volumes of research in other FoR codes.

Table 4 shows the interdisciplinary profile for a UoE. The first FoR code shown is the FoR of the UoE being assessed (in this case, 0901). The profile then displays a list of other FoR codes apportioned to outputs from this UoE.

Only FoR codes that account for at least 20% of the outputs are shown in the interdisciplinary profile. Twenty per cent of outputs shared across two or more FoR codes represents a significant interdisciplinary profile. This means, however, that some interdisciplinary research which represents less than 20% within the UoE will not be shown in the profile.

Table 4: Interdisciplinary profile table

FoR    Name                    Apportioned count    Whole count    %
0901   Aerospace Engineering   83.5                 169            49%
0909   Geomatic Engineering    39.4                 90             23%

Figure 3: Interdisciplinary profile bar graph

Table 4 is a profile for a UoE in 0901 (Aerospace Engineering) where 169 whole research outputs have been submitted. The proportion of these whole outputs that are apportioned to 0901 is equal to 83.5 outputs, or 49%. A proportion equal to 39.4 outputs (23% of the 169 research outputs) were also assigned to 0909 Geomatic Engineering. No other FoRs are shown in the interdisciplinary profile for this UoE because no other FoRs account for more than 20% of apportioned outputs submitted to the UoE. This is presented graphically in Figure 3.
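The percentages in Table 4 can be reproduced from the underlying apportioned counts. The sketch below is illustrative only: the helper function is an assumption (not the SEER implementation), and the third FoR code (0913) is hypothetical, included to show how a code falling below the 20% display threshold is omitted from the profile.

```python
def interdisciplinary_profile(apportioned_counts, whole_output_total,
                              threshold_pct=20):
    """Return the FoR codes whose apportioned share of the UoE's whole
    outputs meets the display threshold, with rounded percentages.
    (Illustrative sketch; not the actual SEER implementation.)"""
    profile = {}
    for code, count in apportioned_counts.items():
        pct = round(100 * count / whole_output_total)
        if pct >= threshold_pct:
            profile[code] = pct
    return profile

# Data from Table 4 (169 whole outputs); 0913 is a hypothetical extra
# code whose 9% share falls below the 20% threshold, so it is dropped.
counts = {"0901": 83.5, "0909": 39.4, "0913": 15.0}
profile = interdisciplinary_profile(counts, 169)
```

Under these assumptions the profile retains 0901 at 49% and 0909 at 23%, matching Table 4.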

Intradisciplinary profile

For two-digit UoEs, RECs are provided with an intradisciplinary profile that indicates which of the constituent four-digit FoR codes are prominent in the two-digit UoE. This may indicate a particular sub-discipline focus that needs to be accounted for in evaluation.

The intradisciplinary profile also indicates which of the constituent four-digit FoR codes will be evaluated as separate UoEs (i.e. which four-digit FoRs for this institution met the relevant low volume threshold). The blue bars represent four-digit FoR codes that will be evaluated as individual UoEs because the low volume threshold was met. The red bars show four-digit FoR codes where the low volume threshold was not met.

Figure 4: Two-digit Intradisciplinary profile

Figure 4 indicates which of the constituent four-digit FoR codes are prominent in the two-digit UoE. It shows the percentage contributed by each four-digit FoR code to the total apportioned research outputs for the two-digit UoE. It also demonstrates how different the four-digit and two-digit profiles are: that is, the two-digit UoE is not just an average of the four-digit performance, but includes a range of outputs that may not have been evaluated at the four-digit level (as in the example above).

UoE Indicator Summary

The first item shown in the view indicators section for each UoE is the Indicator Summary (Table 5). This summary outlines at a glance the research output volume information (for relevant output types), staffing profile, outputs nominated for peer review (where applicable), total research income for each category and applied and esteem information. It is intended as a quick reference for the information that is contained in the indicators, not as a measure of quality.

Table 5: UoE Indicator Summary

Volume and Activity
  Books: 10.0
  Book Chapters: 51.7
  Journal Articles: 64.4
  Conference Publications: 2.2
  Original Creative Works: 0.0
  Live Performance: 0.0
  Recorded/Rendered Works: 0.0
  Curated Works: 0.0
  Portfolios: 0.0
  Total headcount: 24.9
  Total FTE: 20.1

Peer Review
  Items Flagged: 39

Research Income
  Category 1: $473,764
  Category 2: $11,971
  Category 3: $14,909
  Category 4: $0

Applied
  Patents: 0
  Commercialisation Income: $0
  Registered Designs: 2.3
  Plant Breeders Rights: 0
  NHMRC Guidelines: 0

Esteem
  Work of Reference: 0
  Learned Academy: 1.0
  Cat 1 Fellowships: 2.4
  Statutory Committee: 0
  Australia Council: 0.1

An orange-coloured box on this screen indicates that there is a warning associated with the profile (in this case for Category 1 income in the Research Income profile). The warning will be detailed in the specific indicator profile.

Volume and Activity

REC members have access to a range of Volume and Activity measures which provide an indication of the level of activity for each UoE. The Volume and Activity indicator is not a proxy for research quality. The quality of a small UoE will be evaluated according to the same criteria as a large UoE. Three Volume and Activity profiles are shown: Research Outputs, FTE Profile by Academic Level, and Research Output by Year.

The Volume and Activity indicator provides contextual information regarding the UoE. For example, it will show REC members the relative proportions of different types of research outputs within the UoE, which may contribute to an understanding of the type of research being performed: a UoE comprising mostly creative works will have different expected patterns of behaviour from one comprising mostly journal articles, and the focus of the evaluation will be informed by this.

Research Outputs

This indicator provides an overview of the types and volume of research outputs, including the contribution of the institution to the total output of Australian research within the FoR.

FoR code specific issues

Traditional research output types apply to all disciplines. Non-traditional research outputs (NTROs) only apply to some disciplines. The exception is the NTRO category Research Reports for an External Body, which applies to all disciplines.

Depending on the applicability of indicators, each UoE will have either a Traditional Output Table, as shown in Table 6 or a Traditional and Non-Traditional Output Table as shown in Table 7.

A pie chart showing the distribution of research output types (Figure 5) will be available for all UoEs.

Please refer to the ERA 2015 Discipline Matrix at Appendix 6 for detailed information regarding the applicability of indicators.

Indicator tables and interpretation

This indicator is presented both in tabular and graphical formats, as shown in Table 6, Figure 5 and Table 7. The indicator shows:

the apportioned number of outputs by type for the UoE

the percentage of outputs by type for the UoE

the percentage of the UoE's contribution to the Australian HEP apportioned total for the FoR code

for UoEs subject to Peer Review, the outputs nominated for Peer Review are shown in the Peer Review (whole count) column, as shown in Table 7.

Information regarding eligible output types is provided in Section 5.4.2 of the ERA 2015 Submission Guidelines.

Table 6: Traditional output table

| Output Type                          | No. of outputs | % of outputs | % of contribution to Australian HEP FoR total |
|--------------------------------------|----------------|--------------|-----------------------------------------------|
| Books                                | 0.0            | 0%           | 0%                                            |
| Book Chapters                        | 3.4            | 2%           | 23%                                           |
| Journal Articles                     | 159.7          | 97%          | 21%                                           |
| Conference Publications              | 2.0            | 1%           | 80%                                           |
| Research Reports for External Bodies | 0.0            | 0%           | 0%                                            |
| Total                                | 165.1          | 100%         | 21%                                           |

Note: percentages can total more than 100% because of the rounding of fractions.

Figure 5: Distribution of research output pie chart

Figure 5 displays the research output proportions from Table 6 graphically. Journal Articles make up the bulk of research outputs contributing to the UoE, with the other traditional output types comprising 3% of outputs submitted to the UoE. In this case, where citation analysis is an identified indicator for the FoR, the citation data will be an influential evaluation indicator because it covers the significant majority of the outputs.

Table 7: Traditional and non-traditional output table

| Output Type                                                   | No. of outputs | % of outputs | % of contribution to Australian HEP FoR total | Peer Review (whole count) |
|---------------------------------------------------------------|----------------|--------------|-----------------------------------------------|---------------------------|
| Books                                                         | 2.0            | 1%           | 2%                                            | 1                         |
| Book Chapters                                                 | 5.2            | 2%           | 1%                                            | 2                         |
| Journal Articles                                              | 7.0            | 3%           | 1%                                            | 5                         |
| Conference Publications                                       | 12.7           | 4%           | 3%                                            | 0                         |
| Original Creative Works                                       | 93.5           | 30%          | 8%                                            | 45                        |
| Live Performance of Creative Works                            | 126.0          | 40%          | 17%                                           | 22                        |
| Recorded/Rendered Creative Works                              | 7.0            | 2%           | 2%                                            | 6                         |
| Curated or Produced Substantial Public Exhibitions and Events | 59.8           | 19%          | 54%                                           | 14                        |
| Research Reports for an External Body                         | 0.0            | 0%           | 0%                                            | 0                         |
| Portfolios of Non Traditional Research Outputs                | 0.0            | 0%           | 0%                                            | 0                         |
| Total                                                         | 313.2          | 100%         | 7%                                            | 95                        |

Note: percentages can total more than 100% because of the rounding of fractions.

Table 7 shows a total of 313.2 apportioned outputs submitted for this UoE. Live Performance of Creative Works makes up the highest proportion, at 40% of the total apportioned outputs for this UoE, followed by Original Creative Works at 30%. A total of 95 whole outputs have been nominated for Peer Review. REC members can drill down into the Volume and Activity profile to examine the bibliographic detail of the outputs.

Benchmarks and Comparators

Benchmarks are not applied to this indicator, however, the percentage of total contribution to Australian HEP total for the FoR code is shown.

The percentage contribution to the Australian total is shown as a guide to the role of a particular UoE in shaping the Australian benchmarks for other indicators. An institution that contributes a high proportion of outputs to the FoR will necessarily exert a strong influence on the Australian benchmarks.

Relationship with other indicators

Nil

Relevant warnings

Nil

Drilldowns

Example drilldowns for this indicator are available at Appendix 1: Research Output Drilldowns.

FTE Profile by Academic Level

Staffing data provides contextual information to REC members regarding the academic profile of each UoE. Institutions are required to report the academic classification level of each eligible researcher, as used in the Higher Education Staff Data Collection (HESDC). An 'Other' category is provided to allow the inclusion of eligible researchers who cannot be assigned to one of the Level A to E classifications. An example would be an administrative (rather than teaching or research) staff member who has produced an eligible research output. Further information regarding eligible researcher criteria is provided in Section 5.3.1 of the ERA 2015 Submission Guidelines. Institutions also assign relevant four-digit FoR codes to eligible researchers. A researcher may be assigned up to three four-digit FoR codes.

Both headcount and FTE are shown in this indicator. Headcount is shown alongside FTE because non-salaried staff (e.g. Emeritus and Adjunct staff) may contribute to the UoE but have an FTE of 0. Therefore, viewing both headcount and FTE provides REC members with a more complete picture of eligible researchers.

As with volume data, staffing data is provided as contextual information and cannot be used to draw conclusions about the quality of the research outputs within a UoE. A quick reference guide to researcher eligibility is presented in Table 8.

Table 8: Quick reference guide to researcher eligibility

| Is a Member of Staff at the census date (31 March 2014) | Nature of appointment with institution | Nature of function | Minimum number of research outputs | Must have a research output with a demonstrable publication association | Submit Member of Staff researcher data | Research outputs to submit |
|---------------------------------------------------------|----------------------------------------|--------------------|------------------------------------|-------------------------------------------------------------------------|----------------------------------------|----------------------------|
| No                                                      |                                        |                    |                                    |                                                                         | No                                     | Nil                        |
| Yes                                                     | FTE-based                              | RO or T&R          | 0                                  | No                                                                      | Yes                                    | All                        |

RO or T&R 8.00.

FoR code specific issues

Please refer to the ERA 2015 Discipline Matrix in Appendix 6 for information regarding the applicability of indicators.

Indicator tables and interpretation

The indicator shows:

number of UoE papers by RCI Classes (against world benchmark)

proportion of UoE papers by RCI Classes (against world benchmark)

Australian HEP average proportion of papers by RCI Classes (against world benchmark)

percentage of UoE contribution to Australian HEP FoR total by RCI Classes

number of UoE papers in Classes 0 and I (against world benchmark)

number of UoE papers in Classes IV, V and VI (against world benchmark)

ratio of number of UoE papers in Classes IV, V and VI against number of UoE papers in Classes 0 and I.

Citation analysis benchmarks are calculated for each year of the reference period. For each year of the reference period, for each FoR code eligible for citation analysis, a world and Australian institution benchmark will be derived. Papers published in a specific year will be assessed against the discipline-specific benchmark for that year.

This takes into account the differences in the time a paper has had to attract citations. For example, papers published in 2008 will typically have more citations than papers published in 2013, because the 2008 papers have had over six years to attract citations, whereas the 2013 papers have had fewer than two.

Additionally, some institutions will have a concentration of outputs in the more recent years of the reference period, others in earlier years, while some will have equal distribution across each year of the reference period. For this reason, ERA uses year-specific citation benchmarks and not benchmarks based on averages across the entire period. This ensures that any heterogeneity in publication patterns across the reference period is taken into account.
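The year-specific benchmarking described above can be sketched as follows. This is an illustrative sketch only, not the ARC's implementation: the class boundaries follow the RCI ranges in Table 18, while the benchmark figures are invented for the example.

```python
# Illustrative sketch: assigning papers to RCI Classes using
# year-specific world benchmarks. Class boundaries follow Table 18;
# the benchmark values below are invented for illustration.

def rci_class(citations, world_benchmark):
    """Return the RCI Class label for a paper, given the world
    benchmark (mean citations per paper) for its FoR and year."""
    rci = citations / world_benchmark
    if rci == 0:
        return "0"
    for label, upper in [("I", 0.80), ("II", 1.20), ("III", 2.00),
                         ("IV", 4.00), ("V", 8.00)]:
        if rci < upper:
            return label
    return "VI"

# Hypothetical world benchmarks by publication year:
benchmarks = {2008: 12.0, 2013: 2.0}

# Two papers with the same citation count fall into different classes,
# because the older publication year has a higher benchmark.
print(rci_class(6, benchmarks[2008]))  # 'I': below the world average
print(rci_class(6, benchmarks[2013]))  # 'IV': well above the world average
```

The same apportioned paper can thus only be compared against papers that have had a similar time to accumulate citations, which is the point of year-specific benchmarks.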

Table 18: Uni X, FoR Y: number of papers across RCI Classes (assessed against the world benchmark)

| Class  | RCI Range | No. of papers (apportioned) | % of papers | Aust. HEP FoR average | % contribution to Aust. HEP FoR total |
|--------|-----------|-----------------------------|-------------|-----------------------|---------------------------------------|
| 0      | 0         | 2.3                         | 3%          | 14%                   | 1%                                    |
| I      | 0.01–0.79 | 9.5                         | 14%         | 32%                   | 3%                                    |
| II     | 0.80–1.19 | 8.0                         | 12%         | 11%                   | 7%                                    |
| III    | 1.20–1.99 | 23.9                        | 35%         | 22%                   | 10%                                   |
| IV     | 2.00–3.99 | 21.2                        | 31%         | 15%                   | 13%                                   |
| V      | 4.00–7.99 | 3.7                         | 5%          | 5%                    | 7%                                    |
| VI     | ≥8.00     | 0.0                         | 0%          | 2%                    | 0%                                    |
| Total* |           | 57.8                        | 100%        | 100%                  |                                       |

* This total includes journal articles for which an RCI can be calculated, not the total number of indexed journal articles.

Table 18 shows that 3% of papers (2.3 apportioned) are uncited, and 17% (Classes 0 and I combined) are below the world average. Most papers for this UoE are above the world average (71%), and 36% are cited at 2.00 or more times the world average.

The HEP average allows REC members to compare the UoE against the performance of Australia as a whole. This is shown graphically in Figure 7.

The UoE's total contribution to the Australian HEP FoR total for each of the RCI Classes is also available in Table 18.

Figure 7: UoE RCI Class distribution against FoR average

REC members are also shown the proportion of high and low RCI Classes in Table 19. The Low Classes constitute Class 0 and Class I, while the High Classes constitute Class IV, Class V and Class VI. The UoE, as shown in Table 19, has a high to low ratio of 2.11, which confirms the evidence in Table 18 that the UoE has a relatively high number of highly cited papers.

Table 19: UoE papers by low and high RCI Classes

| Low RCI Class count (Classes 0–I) | High RCI Class count (Classes IV–VI) | Ratio of high to low |
|-----------------------------------|--------------------------------------|----------------------|
| 11.8                              | 24.9                                 | 2.11                 |
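The high-to-low ratio above follows directly from the apportioned class counts in Table 18. A minimal sketch of the arithmetic, using those example figures:

```python
# Illustrative sketch: the high-to-low ratio reported in Table 19,
# computed from the apportioned paper counts in Table 18.

low = 2.3 + 9.5          # Classes 0 and I
high = 21.2 + 3.7 + 0.0  # Classes IV, V and VI

ratio = high / low
print(round(ratio, 2))  # 2.11
```

A ratio above 1 indicates that the UoE has more papers in the high RCI Classes than in the low ones.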

Figure 8: UoE papers by RCI Class distribution (blue = low class, darker greens = high class)

Figure 8, as well as showing graphically the breakdown by RCI Classes of the UoE compared against the Australian HEP FoR average, also shows the proportion of high to low RCI Classes through colour coding. Classes 0 and I (the low RCI Classes) are shown in shades of blue; Classes IV, V and VI are shown in shades of dark green; and Classes II and III are shown in pale green.

Benchmarks and Comparators

Two benchmarks are used in this profile:

1. World benchmark: calculated using the Scopus ERA 2015 world dataset for each FoR code and for each year of the reference period

2. Australian benchmark: calculated using the institutionally submitted ERA 2015 data for each FoR code and for each year of the reference period.

Relationship with other indicators

This indicator should be interpreted in conjunction with the RCI Profile indicator.

Relevant warnings

In the drilldowns, any article with an RCI of greater than or equal to 8.0 will be highlighted.

Drilldowns

Example drilldowns for this indicator are available at Appendix 1: Research Output Drilldowns.

Peer Review

For FoR codes that use peer review as an indicator, institutions are required to nominate 30% of the outputs in the FoR code for peer review. Institutions select the 30% sample of research outputs to make available for peer review; the 30% is calculated on apportioned counts of research outputs. For example, a UoE containing 100 apportioned outputs must identify 30 whole outputs for peer review. Institutions have been asked to provide a sample which is representative of both the range of output types and the range of eligible researchers for the FoR within the institution.
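The sample-size arithmetic can be sketched as below. Note the rounding rule is an assumption for illustration: the handbook does not specify how fractional apportioned totals are rounded, so rounding up to whole outputs is a guess; the ERA 2015 Submission Guidelines govern the actual rule.

```python
import math

# Illustrative sketch, not ARC code: the 30% peer review sample is
# calculated on apportioned counts, but whole outputs are nominated.
# Rounding up (math.ceil) is an assumption made for this example.

def peer_review_sample_size(apportioned_total, fraction=0.30):
    """Whole outputs to nominate, given a UoE's apportioned total."""
    return math.ceil(apportioned_total * fraction)

print(peer_review_sample_size(100))    # 30 (the handbook's example)
print(peer_review_sample_size(165.1))  # 50, assuming round-up
```

For the example in the text, 30% of 100 apportioned outputs gives 30 whole outputs to nominate.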

Peer review occurs at the four-digit and two-digit level, for each assessable UoE which meets the low-volume threshold. REC members and selected peer reviewers review outputs nominated for peer review to inform the rating for a UoE. There is no separate rating for individual outputs or for the nominated peer review sample.

Each assessable UoE is assigned to multiple REC members and ERA peer reviewers. Peer reviewers in ERA will be assigned in all cases at the four-digit level, and in some cases also at the two-digit level (for example, where substantial new outputs are included that were not evaluated at the four-digit level). In all cases ERA peer reviewers will be assigned to multiple UoEs. This ensures that they are able to include a degree of comparison in their evaluations; it does not mean that they are required to rank outputs or UoEs against each other. Reviewing across multiple UoEs will also assist peer reviewers to develop a deeper familiarity with the ERA peer review criteria.

The research output types available for peer review include the standard range of academic outputs: books, book chapters, journal articles and conference publications. In addition, ERA includes a range of Non-Traditional Research Output (NTRO) types for some disciplines. This category takes account of research in the creative arts, which ranges from the experimental, involving the production of creative works, through to the analytical, involving the study of particular subjects.

The NTRO types include:

Original creative works

Live performance of creative works

Recorded / rendered creative works

Curated or produced substantial public exhibitions and events

Research reports for an external body.

These outputs may be submitted as individual items. Alternatively, where individual works are derived from the same underlying research endeavour but do not in themselves constitute research, they may be submitted as a Portfolio, which in ERA constitutes a single Non-Traditional Research Output.

Up to three FoR codes can be assigned to research outputs. Research outputs can be nominated for peer review in one or all of those codes. An output will only be available for peer review in a specific FoR if the submitting institution has nominated it for peer review in that FoR. It is not automatically available for peer review in all assigned FoRs. For example, a book is coded by the institution to FoR 2103, 1904 and 1901. If the book is nominated for peer review in 2103, but not in 1904 and 1901, it will only be available for peer review in 2103.

For NTROs which are nominated for ERA peer review, and for each portfolio, a research statement identifying the research component of the outputs must be provided as part of an institution's submission. The research statement must be no more than 2000 characters (around 250 words) and address the following categories:

1. Research Background

Field

Context

Research Question

2. Research Contribution

Innovation

New Knowledge

3. Research Significance

Evidence of Excellence.

REC members and ERA peer reviewers will evaluate NTROs selected for ERA peer review in the context of the research component as identified in the research statement.

FoR code specific issues

Please refer to the ERA 2015 Discipline Matrix at Appendix 6 for information regarding the applicability of indicators.

Indicator tables and interpretation

REC members have access to a summary table (Table 20) which lists the number of outputs by output type available for peer review.

Table 20: Available outputs for Peer Review

| Output Type                                                   | No. of outputs | % of outputs | % of contribution to Aust. HEP FoR total | Peer Review (whole count) |
|---------------------------------------------------------------|----------------|--------------|------------------------------------------|---------------------------|
| Books                                                         | 4.1            | 1%           | 1%                                       | 1                         |
| Book Chapters                                                 | 14.7           | 2%           | 1%                                       | 4                         |
| Journal Articles                                              | 39.3           | 6%           | 2%                                       | 12                        |
| Conference Publications                                       | 39.8           | 7%           | 4%                                       | 12                        |
| Original Creative Works                                       | 246.9          | 41%          | 5%                                       | 74                        |
| Live Performance of Creative Works                            | 182.0          | 30%          | 23%                                      | 55                        |
| Recorded/Rendered Creative Works                              | 36.0           | 6%           | 5%                                       | 11                        |
| Curated or Produced Substantial Public Exhibitions and Events | 12.0           | 2%           | 2%                                       | 4                         |
| Portfolios of Non Traditional Research Outputs                | 31.0           | 5%           | 6%                                       | 9                         |
| Research Reports for an External Body                         | 0.0            | 0%           | 0%                                       | 0                         |
| Total                                                         | 605.8          | 100%         | 5%                                       | 182                       |

Note: percentages can total more than 100% because of the rounding of fractions.

Table 20 shows that the largest output type for this UoE is Original Creative Works (41%) and this is reflected in the proportions of output types which have been nominated for peer review.

The Peer Review tab in SEER lists the outputs which have been nominated for peer review and provides a link (through SEER) to the individual outputs for REC members and peer reviewers.

REC members will also have access to the peer reviewers' reports. These reports provide important advice to the RECs from a specialist perspective, which can then be incorporated into each REC member's evaluation alongside the other indicators on the dashboard. Peer reviewers base their responses only on the pool of outputs available for peer review; they do not have access to any of the other indicators or data provided on the dashboard. REC members are not required to complete a Peer Review Report.

The Peer Review Report form (reproduced at Appendix 2) asks each peer reviewer to nominate their expertise for each assigned UoE in terms of the discipline (i.e. four-digit code) on a scale of one (low expertise) to five (high expertise).

Reviewer expertise in Area

Low Expertise  1 |_|  2 |_|  3 |_|  4 |_|  5 |_|  High Expertise

The assumption is that a peer reviewer who rates expertise at 5 is well-qualified in the discipline to comment on the assigned work. This information will assist REC members to incorporate peer reviewer reports into the overall evaluation of the UoE under consideration.

Peer reviewers must indicate each output they read by marking that output as Read in SEER. This is a helpful guide for peer reviewers as they work through their allocation of items for review. REC members will use this information both to determine which outputs have contributed to a Peer Review Report and to direct any additional reading during subsequent stages of evaluation, ensuring that a broad range of the outputs submitted for peer review has been read.

Types of outputs reviewed

| Articles | Books | Book Ch | NTRO | Conf Pub | Total number of outputs reviewed |
|----------|-------|---------|------|----------|----------------------------------|
| #        | #     | #       | #    | #        | #                                |

[auto-populated from items marked as Read in SEER]

The types of outputs read for each UoE will be auto-populated and show in the individual Peer Review Reports.

The Peer Review Report includes a section for peer reviewers to describe their sampling strategy.

Sampling Strategy [Please make a statement about the sampling strategy you employed to select outputs for peer review. This may include reference to disciplinary expertise, types of outputs (books, journal articles, etc.), prior familiarity with the work, etc.]

It is useful for REC members to have an indication of the range of output types that have been read, the extent to which the peer reviewer was familiar with these outputs prior to the evaluation, and the extent to which these outputs were within the peer reviewer's disciplinary expertise. This information will assist REC members in applying the information in the report to their task of evaluating the UoE as a whole.

The Peer Review Report consists of a textual response on the quality of the sample of outputs reviewed, against the broad criteria of approach and contribution.

Approach is described as the approach taken in the group of outputs reviewed, potentially including reference to the methodologies, appropriateness of outlets/venues and discipline-specific publishing practices. Contribution is described as the contribution of the group of outputs reviewed to the field and/or practice.

The Peer Review Report form has a separate section for each criterion, with a limit of 10 000 characters per criterion.

The task of peer review in ERA is to judge the quality of research in the outputs assessed, using the criteria of approach and contribution. Peer reviewers and REC members are also asked to report how the quality of work is distributed within a UoE. The scale from Tier 1 (lowest quality) to Tier 4 (highest quality) is intended as a banding rather than a series of fixed points: each tier allows for a range of performance. The expectation is that the written analysis in the Peer Review Report will align with and reflect the proportions recorded across the quality distribution scale.

Quality Distribution: percentage (which will sum to 100%) of research outputs read which you judge to be:

| Tier 1 (Lowest Quality) | Tier 2 | Tier 3 | Tier 4 (Highest Quality) |
|-------------------------|--------|--------|--------------------------|
| ##%                     | ##%    | ##%    | ##%                      |

The quality distribution in this scale should align with the textual responses given for the criteria of Approach and Contribution.
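Converting a reviewer's per-tier judgements into this percentage distribution is simple arithmetic. A minimal sketch, with invented tier counts:

```python
# Illustrative sketch: converting a reviewer's tier judgements into the
# percentage quality distribution (summing to 100%) recorded in the
# Peer Review Report. The tier counts below are invented.

tier_counts = {"Tier 1": 2, "Tier 2": 5, "Tier 3": 8, "Tier 4": 5}
total = sum(tier_counts.values())

# With these counts the rounded percentages happen to sum to exactly
# 100; in general, rounding may require a small correction step.
distribution = {tier: round(100 * n / total)
                for tier, n in tier_counts.items()}
print(distribution)  # {'Tier 1': 10, 'Tier 2': 25, 'Tier 3': 40, 'Tier 4': 25}
```

The resulting percentages are what the reviewer enters against the four tiers of the scale.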

In each moderation stage, REC members will have access to peer review reports and the UoE reports of co-assigned REC members. REC members will also have access to a Moderation Report, which summarises the quality distributions.

Figure 9 is an example of the report screen provided to REC members during the moderation stages. It provides links to the individual reports of co-assigned REC members and peer reviewers. The 'Show graph' button, shown in Figure 9, becomes available at Evaluation Stage 2C. It plots each reviewer's quality judgements of the outputs they reviewed, from lowest to highest quality, for each UoE, together with a benchmarking line for the Australian HEP average for that FoR. In the example, five reviewers have submitted reports, and their judgements indicate that the outputs for the UoE are of high to very high quality. Differences in judgement, for example between Reviewer 4 and Reviewer 5, will be explained in the text of their reports.

Figure 9: Moderation Report

Benchmarks and comparators

In Stage 2C the quality distribution for each four-digit UoE, shown at the bottom of Figure 9, will include the average quality distribution across all reviewers for all assessed UoEs in the FoR (the Australian HEP average), giving REC members a sense of how the quality distribution for the UoE aligns with the FoR average.

Relationship with other indicators

Nil

Drilldowns

Example drilldowns for peer review are available at Appendix 2: Peer Review Drilldowns and Peer Reviewer template.

The drilldown provides access to outputs nominated for peer review and the associated Research Statement for non-traditional research outputs.

Relevant warnings

Nil

Research Income

The research income indicator profiles research income as defined by Higher Education Research Data Collection (HERDC) specifications.

For the purposes of ERA, the following categories of income are profiled:

1. Category 1: Australian competitive grants

2. Category 2: Other public sector research income

3. Category 3: Total Industry and other research income

3 (i): Australian

3 (ii): International A (competitive, peer reviewed)

3 (iii): International B (other international income)

4. Category 4: Cooperative Research Centre (CRC) research income.

Institutions are required to submit information on all research income falling within eligible income category types. In order for research income to be submitted, it must:

be an eligible research income category type

meet the research income reference period requirements (1 January 2011 to 31 December 2013).

Research Commercialisation Income is separate from the above-mentioned Research Income types and is addressed in Appl