
Page 1: Rubrics for DMPs

Using community-generated rubrics to evaluate data management plans. Progress to date…

Page 2: Rubrics for DMPs

Data Management Plans as a Research Tool (DART)

Amanda Whitmire | Stanford University Libraries
Jake Carlson | University of Michigan Library
Patricia M. Hswe | Pennsylvania State University Libraries
Susan Wells Parham | Georgia Institute of Technology Library
Brian Westra | University of Oregon Libraries

This project was made possible in part by the Institute of Museum and Library Services grant number LG-07-13-0328.


@DMPResearch


Page 3: Rubrics for DMPs


DART Premise

[Diagram: a DMP reflects the researcher's research data management needs, practices, capabilities, and knowledge.]

Page 4: Rubrics for DMPs


Solution: an analytic rubric

Rows are performance criteria; columns are performance levels:

Performance Criteria | Winning | Okay | No
Thing 1              |         |      |
Thing 2              |         |      |
Thing 3              |         |      |


Page 5: Rubrics for DMPs


Example rubric entries (performance levels: Complete / detailed; Addressed issue, but incomplete; Did not address issue):

General assessment criteria:

Criterion: Describes what types of data will be captured, created or collected
• Complete / detailed: Clearly defines data type(s), e.g. text, spreadsheets, images, 3D models, software, audio files, video files, reports, surveys, patient records, samples, final or intermediate numerical results from theoretical calculations, etc. Also defines data as observational, experimental, simulation, model output or assimilation.
• Addressed issue, but incomplete: Some details about data types are included, but the DMP is missing details or wouldn't be well understood by someone outside of the project.
• Did not address issue: No details included; fails to adequately describe data types.
• Directorates: All

Directorate- or division-specific assessment criteria:

Criterion: Describes how data will be collected, captured, or created (whether new observations, results from models, reuse of other data, etc.)
• Complete / detailed: Clearly defines how data will be captured or created, including methods, instruments, software, or infrastructure where relevant.
• Addressed issue, but incomplete: Missing some details regarding how some of the data will be produced; makes assumptions about reviewer knowledge of methods or practices.
• Did not address issue: Does not clearly address how data will be captured or created.
• Directorates: GEO AGS, GEO EAR SGP, MPS AST

Criterion: Identifies how much data (volume) will be produced
• Complete / detailed: Amount of expected data (MB, GB, TB, etc.) is clearly specified.
• Addressed issue, but incomplete: Amount of expected data is vaguely specified.
• Did not address issue: Amount of expected data is NOT specified.
• Directorates: GEO EAR SGP, GEO AGS

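As an aside, the tabular structure above maps naturally onto a simple data structure. The sketch below is purely illustrative and not part of the DART or UK rubrics deliverables; the class, field names, and numeric scoring are all assumptions. It encodes the data-volume criterion from the table and turns a reviewer's chosen performance level into a score:

```python
from dataclasses import dataclass
from typing import Dict, Tuple

# Performance levels, ordered best to worst, as on the slide above.
LEVELS = (
    "Complete / detailed",
    "Addressed issue, but incomplete",
    "Did not address issue",
)

@dataclass
class Criterion:
    prompt: str                    # what the DMP should address
    descriptors: Dict[str, str]    # performance level -> descriptor text
    directorates: Tuple[str, ...]  # funder units the criterion applies to

# The data-volume criterion from the table above.
volume = Criterion(
    prompt="Identifies how much data (volume) will be produced",
    descriptors={
        "Complete / detailed":
            "Amount of expected data (MB, GB, TB, etc.) is clearly specified.",
        "Addressed issue, but incomplete":
            "Amount of expected data is vaguely specified.",
        "Did not address issue":
            "Amount of expected data is NOT specified.",
    },
    directorates=("GEO EAR SGP", "GEO AGS"),
)

def level_score(level: str) -> int:
    """Map a reviewer's chosen performance level to a score (best = highest)."""
    return len(LEVELS) - 1 - LEVELS.index(level)

assert level_score("Complete / detailed") == 2
assert level_score("Did not address issue") == 0
```

Encoding rubrics this way would make it straightforward to tally scores across many DMPs, the kind of secondary analysis touched on later in the deck.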

Page 6: Rubrics for DMPs

Our rubrics project timeline:

February 2016: Workshop presentation at International Digital Curation Conference 2016 (Amanda Whitmire)

March 2016: Initial call for interest on [email protected]

April 2016: Breakout session at Research Data Management Forum 15

June 2016: Allocation of participants to working groups and distribution of links and documents

September 2016: Target date for completion of first drafts

November 2016: Target date for second drafts and start of consultation with funders

January 2017: Get rubrics hosted on the Research Data Network for community access

Page 7: Rubrics for DMPs

Outcomes from initial meeting:

• Recruitment of 34 participants:
  • 24 universities
  • 2 research institutes
  • 1 data centre
  • 1 funder
• List of funders for which data management plan rubrics should be developed:
  • all Research Councils UK funders
  • Cancer Research UK
  • Wellcome Trust
  • European Commission's Horizon 2020 programme
  • National Institute for Health Research
• A general agreement on how to organise the project: a distributed collaborative effort via Google Docs and Sheets, including a template
• A list of potential 'things to consider'
• An offer from Jisc to host the rubrics on the Research Data Network

Page 8: Rubrics for DMPs

Our process:

1. Identify the documents which will inform each rubric.
2. Develop a list of performance criteria for each data management plan rubric.
3. Develop descriptions of what constitutes each level of performance for each performance criterion.
4. Gather feedback on descriptions from the working group and other participants.
5. Incorporate feedback into the descriptions.
6. Send completed draft rubrics to the relevant funders for feedback and discussion.
7. Incorporate funder feedback as appropriate.
8. Make the rubrics available to the community via the Research Data Network.
9. Update the rubrics when input documentation or funder guidance changes (ideal), or when changes occur and someone has time (pragmatic)!

Page 9: Rubrics for DMPs

Where we are now:

We set out to create rubrics for 11 funders… we currently have:

6 completed rubrics (BBSRC, EPSRC, NERC, Wellcome, CRUK (basic research), ESRC)

2 draft rubrics which are almost complete (MRC, CRUK (population research))

2 outline drafts which need descriptions (AHRC, STFC)

2 rubrics which have not yet been worked on (H2020, NIHR)

We intend to invite funders to review the rubrics and provide feedback… so far:

4 rubrics have been sent to reviewers for feedback (BBSRC, EPSRC, NERC, Wellcome)

3 rubrics are almost ready to be sent out (ESRC, MRC, CRUK (both versions))

We intend to make rubrics available as a community resource on the Research Data Network. An early draft of the BBSRC rubric is currently available. Other rubrics will become available once we have funder feedback.

Page 10: Rubrics for DMPs

Next steps:

Continue to work on the rubrics which are not yet finished…

Get all of the rubrics out to funders for discussion and feedback…

Use the rubrics as a tool to illustrate to funders that their expectations don’t match the text space they provide (BBSRC, we’re looking at your Joint Electronic Submission form here!)

Make the rubrics available as a community resource

Page 11: Rubrics for DMPs

The future…

Can we use the rubrics for more than just in-house evaluation of DMPs?

A training resource for researchers / reviewers of DMPs? Other uses?

Do we want to use the rubrics to gather data on how data management is done / communicated in our institutions?

This was a secondary output from the DART project, and the data the team obtained were used to inform their subsequent data management training processes and to identify good practice in certain sectors.
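To make that concrete, here is a minimal, hypothetical sketch of the kind of aggregation a machine-readable rubric would enable, assuming each review is stored as a (criterion, chosen level) pair; the criterion names and data are invented:

```python
from collections import Counter

# Hypothetical scored rubrics: one (criterion, chosen level) pair per DMP review.
reviews = [
    ("data types", "Complete / detailed"),
    ("data types", "Addressed issue, but incomplete"),
    ("data volume", "Did not address issue"),
    ("data volume", "Did not address issue"),
]

# Count how often each criterion went unaddressed across the reviewed DMPs;
# frequent gaps suggest topics for local RDM training to target.
gaps = Counter(crit for crit, level in reviews if level == "Did not address issue")
print(gaps.most_common())  # [('data volume', 2)]
```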

How will we keep the rubrics from becoming obsolete?

Do we want a project blog to publicise / document the project?

Page 12: Rubrics for DMPs

Thanks to:

The DART project team:
Amanda Whitmire
Jake Carlson
Patricia M. Hswe
Susan Wells Parham
Brian Westra

DART project slides were made available under a CC-BY licence.

The UK DMP Rubrics team:

Lee Bartlett, Libby Bishop, Fay Campbell, Gareth Cole, Grant Denkinson, Mary Donaldson, Chris Emmerson, Jamie Enoch, Jenny Evans, Federica Fina, Stephen Grace, Laurence Horton, Danielle Hoyle, Sarah Jones, Gareth Knight, Frances Madden, Michelle Mayer, Christina McMahon, Kerry Miller, Ben Mollitt, Niall O’Loughlin, Georgina Parsons, Stephen Pearson, Wayne Peters, Rachel Proudfoot, Hardy Schwamm, Cheryl Smythe, John Southall, Isobel Stark, Paul Stokes, Marta Teperek, Mandy Thomas, Laurian Williamson, and James Wilson.