Carpenter Talk at Elsevier Booth During ALA Annual About NISO Alternative Assessment Project
DESCRIPTION
Todd Carpenter's talk at the Elsevier booth during the ALA Annual Conference in Las Vegas about the NISO Alternative Assessment Project and the results of the first phase of the project.

TRANSCRIPT
Comparing digital apples to digital apples: An update on NISO's Alternative Assessment Initiative
Todd Carpenter, Executive Director, NISO
ALA Annual, June 28, 2014
About
• Non-profit industry trade association accredited by ANSI
• Mission of developing and maintaining technical standards related to information, documentation, discovery, and distribution of published materials and media
• Volunteer-driven organization: 400+ contributors spread out across the world
• Responsible (directly and indirectly) for standards like ISSN, DOI, Dublin Core metadata, DAISY digital talking books, OpenURL, MARC records, and ISBN
What are the infrastructure elements of alternative assessments?
Basic Definitions (so we are all talking about the same thing)
Element Identification
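The identification question is concrete in practice: the same article circulates as a doi.org URL, a dx.doi.org URL, a doi: string, or a bare DOI, and mentions can only be counted together once those surface forms are normalized to one key. A minimal sketch of that normalization step; the regex and cleanup rules are illustrative assumptions, not anything recommended in the talk:

```python
# Minimal sketch: normalize the many surface forms under which one article
# circulates (doi: prefix, doi.org / dx.doi.org URLs, bare DOI) to a single
# canonical DOI, so mentions can be grouped before counting. The pattern and
# cleanup rules here are assumptions for illustration only.
import re

# Common DOI shape: "10.", a 4-9 digit registrant code, "/", a suffix.
DOI_RE = re.compile(r"(10\.\d{4,9}/\S+)", re.IGNORECASE)

def canonical_doi(raw):
    """Extract a DOI from a URL, doi: string, or bare DOI; lower-case it."""
    m = DOI_RE.search(raw)
    if not m:
        return None
    return m.group(1).rstrip(".,;").lower()  # strip trailing punctuation

mentions = [
    "doi:10.1371/journal.pone.0029797",
    "https://doi.org/10.1371/journal.pone.0029797",
    "http://dx.doi.org/10.1371/JOURNAL.PONE.0029797.",
]
# All three forms collapse to the same key: one article, one count bucket.
assert len({canonical_doi(m) for m in mentions}) == 1
```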
At what granularity?
How long do we measure?
Consistency across providers
Source: Scott Chamberlain, "Consuming Article-Level Metrics: Observations and Lessons from Comparing Aggregator Provider Data," Information Standards Quarterly, Summer 2013, Vol. 25, Issue 2.
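Chamberlain's point is easy to see in practice: ask two aggregators for the same DOI and the counts rarely line up. A minimal sketch of such a comparison; it assumes Altmetric's public REST endpoint (real, but rate-limited, and the field name is taken on trust from its documentation), while the second provider at api.otherprovider.example and its twitter_count field are hypothetical stand-ins for any other aggregator:

```python
# Minimal sketch: compare Twitter-related counts for one DOI across two
# altmetrics providers. The second endpoint is hypothetical; field names
# and semantics differ across real providers, which is exactly the problem.
import json
import urllib.request

DOI = "10.1371/journal.pone.0029797"  # example article

def fetch_json(url):
    """Fetch a URL and parse the JSON body; return None on any failure."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return json.load(resp)
    except Exception:
        return None

# Provider A: Altmetric's public API (assumed field name).
a = fetch_json(f"https://api.altmetric.com/v1/doi/{DOI}")
tweets_a = a.get("cited_by_tweeters_count") if a else None

# Provider B: hypothetical aggregator with its own (invented) field name.
b = fetch_json(f"https://api.otherprovider.example/metrics?doi={DOI}")
tweets_b = b.get("twitter_count") if b else None

print(f"{DOI}: provider A reports {tweets_a}, provider B reports {tweets_b}")
# Counts rarely match: providers differ on what they count (tweets vs.
# unique tweeters), when collection started, and how they deduplicate.
```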
TRUST =
What is NISO working toward?
Steering Committee
• Euan Adie, Altmetric
• Amy Brand, Harvard University
• Mike Buschman, Plum Analytics
• Todd Carpenter, NISO
• Martin Fenner, Public Library of Science (PLoS) (Chair)
• Michael Habib, Reed Elsevier
• Gregg Gordon, Social Science Research Network (SSRN)
• William Gunn, Mendeley
• Nettie Lagace, NISO
• Jamie Liu, American Chemical Society (ACS)
• Heather Piwowar, ImpactStory
• John Sack, HighWire Press
• Peter Shepherd, Project COUNTER
• Christine Stohn, Ex Libris
• Greg Tananbaum, SPARC (Scholarly Publishing & Academic Resources Coalition)
Alternative Assessment Initiative
Phase 1 Meetings
• October 9, 2013 - San Francisco, CA
• December 11, 2013 - Washington, DC
• January 23-24, 2014 - Philadelphia, PA
• Round of 1-on-1 interviews - March/April 2014
• Phase 1 report published June 9, 2014
Meetings' General Format
• Co-located with another industry meeting
• Morning: lightning talks, post-it brainstorming
• Afternoon: discussion groups
  – X
  – Y
  – Z
  – Report back/react
• Live streamed (video recordings are available)
Meeting Lightning Talks
• Expectations of researchers
• Exploring disciplinary differences in the use of social media in scholarly communication
• Altmetrics as part of the services of a large university library system
• Deriving altmetrics from annotation activity
• Altmetrics for institutional repositories: Are the metadata ready?
• Snowball Metrics: Global standards for institutional benchmarking
• International Standard Name Identifier
• Altmetric.com, Plum Analytics, Mendeley reader survey
• Twitter inconsistency
“Lightning" by snowpeak is licensed under CC BY 2.0
SF Meeting Discussions
• Business & use cases
  – Publishers want to serve authors, make money
  – People don't value a standard, they value something that helps them
  – … Couldn't identify a logical standard need that actors in the space would value, and best practices are of interest
• Quality & data science
  – Themes: context, validation, provenance, quality, description & metadata
  – We'll never get to the point where assessment can be done without a human in the loop, but discovery and recommendation can
• Definitions
  – Define "ALM" and "altmetrics"
  – Map the landscape
DC Meeting Discussions
• Business and use cases
• Discovery
  – Metrics only get generated if material is discovered
• Qualitative vs. quantitative
• Identifying stakeholders and their values
  – Stakeholders in outcomes / stakeholders in the process of creating metrics
  – Shared values, but tensions
  – Branding
• Definitions/defining impact
  – Metrics and analyses
  – What led to the success of a citation?
  – How to be certain we are measuring the right things
• Future proofing
  – What won't change
  – Impact: hard to establish across disciplines
Philly Meeting Discussions
• Definitions
  – Define the life cycle of scholarly output and associated metrics
  – Qualitative versus quantitative aspects: what is possible to define here
  – Consider other aspects of these data collections
• Standards
  – Develop definitions (what is a download? what is a view?)
  – Differentiate between scholarly impact and popular/social use
  – Define sources/characteristics for metrics (social, commercial, scholarly)
• Data integrity
  – Counter biases/gaming
  – Association with credible entities, e.g., an ORCID iD vs. a Gmail account (see the checksum sketch after this list)
  – Reproducibility is key
  – Everyone needs to be at the table to establish overall credibility
• Use cases (3X)
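On the ORCID point above: one cheap integrity check is the ISO 7064 MOD 11-2 check digit that terminates every ORCID iD. A passing checksum proves nothing about the person behind the iD, but a failing one rejects malformed or casually fabricated identifiers before any credibility judgment. A minimal sketch of the standard algorithm:

```python
# Minimal sketch: verify the ISO 7064 MOD 11-2 check digit that ends every
# ORCID iD (the published ORCID checksum algorithm).

def orcid_checksum_ok(orcid):
    """Return True if the ORCID iD's final character is a valid check digit."""
    digits = orcid.replace("-", "")
    if len(digits) != 16 or not digits[:15].isdigit():
        return False
    total = 0
    for ch in digits[:15]:              # fold the first 15 digits
        total = (total + int(ch)) * 2
    result = (12 - total % 11) % 11     # MOD 11-2 check value
    check = "X" if result == 10 else str(result)
    return digits[-1].upper() == check

# ORCID's documented example iD passes; a one-digit corruption fails.
assert orcid_checksum_ok("0000-0002-1825-0097")
assert not orcid_checksum_ok("0000-0002-1825-0096")
```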
30 One-on-One Interviews
Potential Work Themes (each highlighted in turn on successive slides)
• Definitions
• Application to types of research outputs
• Discovery implications
• Research evaluation
• Data quality and gaming
• Grouping, aggregating, and granularity
• Context
• Adoption & promotion
Alternative Assessment Initiative
Phase 2
• Presentations of Phase 1 report (June 2014)
• Prioritization effort (June - August 2014)
• Project approval (September 2014)
• Working group formation (October 2014)
• Consensus development (November 2014 - December 2015)
• Trial use period (December 2015 - March 2016)
• Publication of final recommendations (June 2016)
We all want our own
For more information:
www.niso.org/topics/tl/altmetrics_initiative/
Questions?
Todd Carpenter
Executive Director
[email protected]
National Information Standards Organization (NISO)
3600 Clipper Mill Road, Suite 302
Baltimore, MD 21211 USA
+1 (301) 654-2512
www.niso.org