High throughput mining of the scholarly literature: journals and theses


High throughput mining of the scholarly literature: a new research tool

Peter Murray-Rust, University of Cambridge, UK

Dept of Chemistry and The ContentMine

Tech-LING, Braga, PT, 2016-10-14. contentmine.org is supported by a grant to PMR as a Shuttleworth Fellow.

The scholarly literature now produces 10,000 articles per day and it is essential to use machines to understand, filter and analyse this stream. The full-text of these articles is much more valuable than the abstract, and in addition many have supplemental files such as tables, images, computer code. Machines can filter this and extract information on a huge and useful scale. Europe wishes to see this developed as a strategic area, but there is much resistance from “rights-owners”.

The information in articles is in semi-structured form - a narrative with embedded data, even for some “data files”. There is a huge amount of factual information in this material and many disciplines have journals whose primary role is the reporting of facts - experimental protocols, formal observations (increasingly through instruments or computation), and analysis of results using domain-specific and general protocols. ContentMine, funded by the Shuttleworth Foundation, has the vision of making these facts semantic and opening them to the whole world.

The two main activities of document analysis are Information Retrieval (IR) and Information Extraction (IE). IR, filtering and classification, can be tackled by machine learning (ML) or human-generated heuristics. ML is widely used; the drawbacks are the need for an annotated corpus (boring, expensive in time, and difficult to update) and the suspicion of “black-box” methods. Heuristics have the advantage that their methodology is usually self-evident and can be crowd-sourced; however, they are often more limited in which fields are tractable. IE is often domain-specific (e.g. chemistry, phylogenetics) but there are general outputs which cover many disciplines. The most tractable and common are typed numeric quantities in running text: “Thermal expansion and land glacier melting contribute 0.15–0.23 meters to sea level rise by 2050, and 0.30 to 0.48 meters by 2100.” This is factual information (it may or may not be “true”). Natural Language Processing (NLP) can extract the numeric quantities into processable form. The terms (entities) “Thermal expansion” and “land glacier melting” are likely to form a de facto vocabulary. IE can also extract facts from tables, lists, and diagrams (graphs, plots, etc.). This is at an early stage, but with probably 10-100 million numeric diagrams published per year the amount of data is potentially huge.

The major problems in exploiting this are sociopolitical. The major “closed” journals are concerned that this will lead to “stealing” content and have therefore made it very difficult, technically and legally, to mine scholarly journals. The UK government passed an exception to copyright in 2014 which allows mining for non-commercial research, and ContentMine.org has been tooling up to support this.
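As an illustration of how tractable such typed numeric quantities are, here is a minimal Python sketch (not ContentMine's actual extractor; the pattern and variable names are invented for this example) that pulls value ranges and units out of the sea-level sentence quoted above:

    import re

    # Hypothetical, minimal pattern for "<low>-<high> <unit>" ranges in running text.
    QUANTITY = re.compile(
        r"(?P<low>\d+(?:\.\d+)?)\s*(?:–|-|to)\s*(?P<high>\d+(?:\.\d+)?)\s+(?P<unit>meters?|m|mm|km)",
        re.IGNORECASE,
    )

    sentence = ("Thermal expansion and land glacier melting contribute 0.15–0.23 meters "
                "to sea level rise by 2050, and 0.30 to 0.48 meters by 2100.")

    for m in QUANTITY.finditer(sentence):
        print(float(m.group("low")), float(m.group("high")), m.group("unit"))
    # -> 0.15 0.23 meters
    #    0.3 0.48 meters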

PM-R and colleagues have legal access to a very wide range of scholarly publications and are interested in exploring mutually beneficial research activities.

‘High throughput mining of the scholarly literature: a new research tool’ by Peter Murray-Rust, ContentMine.org and University of Cambridge

Overview

• The huge scale of scholarly publication
• Automation of downloading, normalization
• How can machines “understand” this?
• Word-based (lexical) approaches, NLP
• Domain-specific (chemistry, evolution)
• Wikidata (controlled scientific language)
• Repositories and theses
• Politics

The Right to Read is the Right to Mine* *Peter Murray-Rust, 2011

http://contentmine.org

(2x digital music industry!)

Output of scholarly publishing

[2] https://en.wikipedia.org/wiki/Mont_Blanc#/media/File:Mont_Blanc_depuis_Valmorel.jpg

586,364 Crossref DOIs/month (2015-07) [1]; ~8,000 papers/day
2.5-3 million (papers + supplemental data)/year, each 3 mm thick; 4,500 m high per year [2]
* Most is not publicly readable
[1] http://www.crossref.org/01company/crossref_indicators.html

What is “Content”?

http://www.plosone.org/article/fetchObject.action?uri=info:doi/10.1371/journal.pone.0111303&representation=PDF CC-BY

SECTIONS

MAPS

TABLES

CHEMISTRY

TEXT

MATH

contentmine.org tackles these

Mining in action

A recipe!

https://upload.wikimedia.org/wikipedia/commons/0/0b/Wikibooks_hamburger_recipe.png

http://chemicaltagger.ch.cam.ac.uk/


Typical chemical synthesis

Natural Language Processing

Part of speech tagging (Wordnet, Brown Corpus, etc.)

Parsing chemical sentences

Automatic semantic markup of chemistry

Could be used for analytical, crystallization, etc.
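For readers unfamiliar with the part-of-speech tagging step mentioned above, a minimal sketch using NLTK (an assumed, illustrative choice; ChemicalTagger at chemicaltagger.ch.cam.ac.uk is a separate, chemistry-specific tool):

    import nltk

    # One-time model downloads for the tokenizer and tagger.
    nltk.download("punkt", quiet=True)
    nltk.download("averaged_perceptron_tagger", quiet=True)

    sentence = "The mixture was stirred at room temperature for 2 hours and then filtered."
    tokens = nltk.word_tokenize(sentence)
    print(nltk.pos_tag(tokens))
    # e.g. [('The', 'DT'), ('mixture', 'NN'), ('was', 'VBD'), ('stirred', 'VBN'), ...]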


XML rendered with CSS

Chemical Tagger Rendering of PALEOTIME

http://www.clim-past.net/2/205/2006/cp-2-205-2006.html

Tools and resources

HAL repository FR

Retrieval/Extraction Technologies

• Bag of Words (https://en.wikipedia.org/wiki/Bag-of-words_model; see the sketch after this list)
• Term-Frequency Inverse-Document-Frequency (TF-IDF) (https://en.wikipedia.org/wiki/Tf%E2%80%93idf)
• Regular Expressions
• Templates (Information Extraction)
• Natural Language Processing (NLP)
• Image processing and mining
• Lookup (Wikidata, Bioscience databases)
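A minimal sketch of the first two techniques (bag of words and TF-IDF), using scikit-learn purely as an illustrative, assumed dependency (it is not part of the ContentMine toolchain described in these slides):

    # Minimal bag-of-words / TF-IDF sketch with scikit-learn; documents are invented examples.
    from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

    docs = [
        "Ebola haemorrhagic fever outbreak in Guinea and Sierra Leone",
        "Contact tracing and safe burial practices in Liberia",
        "Bats as a reservoir for Ebola virus",
    ]

    bow = CountVectorizer().fit_transform(docs)      # raw term counts per document
    tfidf = TfidfVectorizer().fit_transform(docs)    # counts reweighted by term rarity

    print(bow.shape, tfidf.shape)                    # (3, n_terms) sparse matrices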

Bag of Words

Theses from HAL repository

Regular Expressions (easier than crosswords or Sudoku)

Text to match → regex (see the Python sketch below)

Ebola → Ebola
Mali (not Malicious) → Mali\W (end of word)
Bat or bat → [Bb]at (alternatives)
bat or bats → bats? (optional letter)
Bat or Bats or bat or bats → [Bb]ats?
Sudden onset → [Ss]udden\s+onset (space/s)
Panthera leo or Gorilla gorilla → [A-Z][a-z]+\s+[a-z]+ (ranges of letters)
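These patterns can be checked directly with Python's re module; a minimal sketch with invented example sentences:

    import re

    examples = [
        (r"Mali\W", "An outbreak in Mali, not a malicious rumour."),    # word end via non-word char
        (r"[Bb]ats?", "Bats and one bat were sampled."),                # alternatives + optional letter
        (r"[Ss]udden\s+onset", "patients with sudden  onset of fever"), # one or more spaces
        (r"[A-Z][a-z]+\s+[a-z]+", "Panthera leo and Gorilla gorilla"),  # binomial species names
    ]

    for pattern, text in examples:
        print(pattern, "->", re.findall(pattern, text))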

Ebola regex

<compoundRegex title="ebola">
  <regex weight="1.0" fields="ebola" case="">(Ebola)</regex>
  <regex weight="1.0" fields="marburg">(Marburg)</regex>
  <regex weight="1.0" fields="hemorrhagic_fever">([Hh]a?emorrhagic\s+fever)</regex>
  <regex weight="0.8" fields="sudden_onset">([Ss]udden\s+onset)</regex>
  <regex weight="0.6" fields="vomiting_diarrhoea">([Vv]omiting\s+diarrho?ea)</regex>
  <regex weight="0.5" fields="guinea">(Guinea)</regex>
  <regex weight="0.5" fields="sierra_leone">(Sierra\s+Leone)</regex>
  <regex weight="0.5" fields="liberia">(Liberia)</regex>
  <regex weight="0.5" fields="mali">(Mali)\W</regex>
  <regex weight="0.6" fields="contact_tracing">([Cc]ontact\s+tracing)</regex>
  <regex weight="0.5" fields="bat">\W([Bb]ats?\W)</regex>
  <regex weight="0.5" fields="bushmeat">([Bb]ushmeat)</regex>
  <regex weight="0.5" fields="drc">(Democratic Republic\s*(\s*of)?(\s*the)?\s*Congo)|(DRC)</regex>
  <regex weight="0.6" fields="safe_burial">([Ss]afe\s+burial\s+practices?)</regex>
  <regex weight="1.0" fields="etu">([Ee]bola\s+treatment\s+units?)|(ETU)</regex>
</compoundRegex>
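A sketch of how a compoundRegex file like the one above could be loaded and applied; the element and attribute names follow the snippet on this slide, but the code is an illustrative assumption, not ami's actual implementation:

    # Sketch: load a compoundRegex XML (as shown above) and scan text with it.
    import re
    import xml.etree.ElementTree as ET

    def load_compound_regex(path):
        root = ET.parse(path).getroot()
        return [(r.get("fields"), float(r.get("weight", "1")), re.compile(r.text))
                for r in root.findall("regex")]

    def score(text, patterns):
        hits = {}
        for field, weight, pattern in patterns:
            n = len(pattern.findall(text))
            if n:
                hits[field] = hits.get(field, 0.0) + n * weight
        return hits

    # patterns = load_compound_regex("ebola.xml")          # hypothetical file name
    # print(score(open("paper.txt").read(), patterns))     # hypothetical input paper

The weight attribute presumably lets stronger indicators contribute more to a document's score than common geographic terms.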


15 mins to create, 15 mins to install and test. Or run online at CottageLabs.

Europe PubMed Central

Dictionaries!

Dengue Mosquito

[Diagram: MINING with sections and dictionaries. Article sections (abstract, methods, references, captioned figures, Fig. 1, HTML tables) are matched against dictionaries (Dict A, Dict B), with hits on image captions and table captions]

[W3C Annotation / https://hypothes.is/ ]

How does Rat find knowledge?

Disease Dictionary (ICD-10)

<dictionary title="disease">
  <entry term="1p36 deletion syndrome"/>
  <entry term="1q21.1 deletion syndrome"/>
  <entry term="1q21.1 duplication syndrome"/>
  <entry term="3-methylglutaconic aciduria"/>
  <entry term="3mc syndrome"/>
  <entry term="corpus luteum cyst"/>
  <entry term="cortical blindness"/>

SELECT DISTINCT ?thingLabel WHERE {
  ?thing wdt:P494 ?wd .
  ?thing wdt:P279 wd:Q12136 .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en" }
}

wdt:P494 = ICD-10 (P494) identifier
wd:Q12136 = disease (Q12136), an abnormal condition that affects the body of an organism
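The query above can be run programmatically against the public Wikidata SPARQL endpoint; a minimal sketch (the endpoint URL and JSON handling are standard Wikidata Query Service usage, not ContentMine code):

    # Run the disease query above against the public Wikidata SPARQL endpoint.
    import requests

    QUERY = """
    SELECT DISTINCT ?thingLabel WHERE {
      ?thing wdt:P494 ?wd .            # has an ICD-10 identifier
      ?thing wdt:P279 wd:Q12136 .      # subclass of disease
      SERVICE wikibase:label { bd:serviceParam wikibase:language "en" }
    }
    """

    resp = requests.get(
        "https://query.wikidata.org/sparql",
        params={"query": QUERY, "format": "json"},
        headers={"User-Agent": "contentmine-dictionary-sketch"},
    )
    labels = [b["thingLabel"]["value"] for b in resp.json()["results"]["bindings"]]
    print(len(labels), labels[:5])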

Wikidata ontology for disease

Example statistics dictionary

<dictionary title="statistics2">
  <entry term="ANCOVA" name="ANCOVA"/>
  <entry term="ANOVA" name="ANOVA"/>
  <entry term="CFA" name="CFA"/>
  <entry term="EFA" name="EFA"/>
  <entry term="Likert" name="Likert"/>
  <entry term="Mann-Whitney" name="Mann-Whitney"/>
  <entry term="MANOVA" name="MANOVA"/>
  <entry term="McNemar" name="McNemar"/>
  <entry term="PCA" name="PCA"/>
  <entry term="Pearson" name="Pearson"/>
  <entry term="Spearman" name="Spearman"/>
  <entry term="t-test" name="t-test"/>
  <entry term="Wilcoxon" name="Wilcoxon"/>
</dictionary>

“Mann-Whitney” links to the Wikipedia entry and the Wikidata entry (Q1424533)
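A minimal sketch of using such a dictionary for lookup; the <dictionary>/<entry> layout is taken from the snippet above, while the whole-word matching here is a simplification of what ami does:

    # Sketch: load a <dictionary> of <entry term="..."/> elements and find term hits in text.
    import re
    import xml.etree.ElementTree as ET

    def load_dictionary(path):
        root = ET.parse(path).getroot()
        return [e.get("term") for e in root.findall("entry")]

    def find_terms(text, terms):
        return [t for t in terms
                if re.search(r"\b" + re.escape(t) + r"\b", text)]

    text = "Group differences were assessed with ANOVA and a Mann-Whitney U test."
    # terms = load_dictionary("statistics2.xml")           # hypothetical file name
    terms = ["ANCOVA", "ANOVA", "Mann-Whitney", "t-test", "Wilcoxon"]
    print(find_terms(text, terms))                         # -> ['ANOVA', 'Mann-Whitney']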

[Diagram: ContentMine workflow overview. Crawl/Query URLs/DOIs → catalogue → getpapers / quickscrape → normalise → ScholarlyHTML files → mine Facts (latest: 2015-09-08)]

ContentMine workflow

[Diagram: detailed ContentMine workflow. A Daily Crawl and ToC services query EuPMC, arXiv, CORE, HAL and university repositories for URLs/DOIs; the catalogue feeds getpapers / quickscrape / crawl, which fetch PDF, HTML, DOC, ePUB, TeX, XML, PNG, EPS, CSV and XLS; norma (Normalizer, Structurer, SemanticTagger) produces Text, Data and Figures; ami does search and Lookup (CONTENT MINING) with community plugins (Chem, Phylo, Trials, Crystal, Plants); results go to Visualization and Analysis]

[Diagram: Publisher Sites (PLoS ONE, BMC, PeerJ… Nature, IEEE, Elsevier…) are handled by scrapers, queries and taggers, yielding sectioned articles (abstract, methods, references, captioned figures, HTML tables): 100,000 pages/day of Semantic ScholarlyHTML (W3C community group)]

[Diagram: a Daily Crawl (2,000-5,000 articles) of PLoS ONE, BMC, closed and hybrid journals feeds the CATalog; Crawl … Scrape … Normalize … Mine yields enhanced annotated articles and Facts (latest: 2015-09-08) as Linked Open Data / Semantic Scientific Objects]

Amanuens.is demo

These slides represent a snapshot of an interactive demo…

Subject: Flavour

What plants produce Carvone?

https://en.wikipedia.org/wiki/Carvone

WIKIDATA

Carvone in Wikidata; also a SPARQL endpoint

Search for carvone

Mining for phytochemicals*

• getpapers -q carvone -o carvone -x -k 100
  Search “carvone”, output to carvone/, fmt XML, limit 100 hits

• cmine carvone
  Normalize papers; search locally for species, sequences, diseases, drugs.
  Results in dataTables.html and results/…/results.xml (includes W3C annotation)

• python cmhypy.py carvone/ -u petermr <key>
  Send IUCN redlist plant annotations -> hypothes.is

*(chemicals in plants)

Annotation (entity in context)

[Diagram: anatomy of an annotation: prefix, surface, label, location, suffix]
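The prefix/surface/suffix anatomy above corresponds to the W3C Web Annotation model's TextQuoteSelector (the selector type Hypothes.is also consumes); a minimal sketch of one such annotation as JSON, with invented field values:

    # Sketch: one entity-in-context annotation expressed with a W3C-style TextQuoteSelector.
    import json

    annotation = {
        "@context": "http://www.w3.org/ns/anno.jsonld",
        "type": "Annotation",
        "body": {"type": "TextualBody", "value": "phytochemical: carvone"},  # the label
        "target": {
            "source": "https://doi.org/10.xxxx/example",   # hypothetical article location
            "selector": {
                "type": "TextQuoteSelector",
                "prefix": "essential oil rich in ",        # text before the match
                "exact": "carvone",                        # the surface form
                "suffix": " was obtained from",            # text after the match
            },
        },
    }

    print(json.dumps(annotation, indent=2))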

[Diagram: articles (remote and local papers) versus facets: gene, disease (ICD-10), drug, phytochemicals, species, genus, commonest words]


Amanuens.is Hypothes.is link

Hypothes.is markup of article

Annotation with Hypothes.is

Original publication “on publisher’s site”; annotation “on Hypothes.is site”

Systematic Reviews

Can we:
• eliminate true negatives automatically?
• extract data from formulaic language?
• mine diagrams?
• annotate existing sources?
• forward-reference clinical trials?

Polly has 20 seconds to read this paper…

…and 10,000 more

ContentMine software can do this in a few minutes

Polly: “there were 10,000 abstracts and due to time pressures, we split this between 6 researchers. It took about 2-3 days of work (working only on this) to get through ~1,600 papers each. So, at a minimum this equates to 12 days of full-time work (and would normally be done over several weeks under normal time pressures).”

400,000 Clinical Trials in 10 government registries

Mapping trials => papers

http://www.trialsjournal.com/content/16/1/80

2009 => 2015. What’s happened in the last 6 years?

Search the whole scientific literature for “2009-0100068-41”

Mining diagrams

“Root”

OCR (Tesseract)

Norma (image analysis)

(((((Pyramidobacter_piscolens:195,Jonquetella_anthropi:135):86,Synergistes_jonesii:301):131,Thermotoga_maritime:357):12,(Mycobacterium_tuberculosis:223,Bifidobacterium_longum:333):158):10,((Optiutus_terrae:441,(((Borrelia_burgdorferi:…202):91):22):32,(Proprinogenum_modestus:124,Fusobacterium_nucleatum:167):217):11):9);

Semantic re-usable/computable output (ca 4 secs/image)
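Because the output is a standard Newick string, it is immediately computable; a minimal sketch using Biopython (an assumed dependency, not named in the slides) on a shortened fragment of the tree above:

    # Sketch: read a Newick tree like the one extracted above and list its taxa.
    from io import StringIO
    from Bio import Phylo

    newick = "((Pyramidobacter_piscolens:195,Jonquetella_anthropi:135):86,Synergistes_jonesii:301);"
    tree = Phylo.read(StringIO(newick), "newick")
    print([leaf.name for leaf in tree.get_terminals()])
    # -> ['Pyramidobacter_piscolens', 'Jonquetella_anthropi', 'Synergistes_jonesii']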

Supertree created from 4300 papers

Politics

http://www.lisboncouncil.net/publication/publication/134-text-and-data-mining-for-research-and-innovation-.html

Asian and U.S. scholars continue to show a huge interest in text and data mining as measured by academic research on the topic. And Europe’s position is falling relative to the rest of the world.

Legal clarity also matters. Some countries apply the “fair-use” doctrine, which allows “exceptions” to existing copyright law, including for text and data mining. Israel, the Republic of Korea, Singapore, Taiwan and the U.S. are in this group. Others have created a new copyright “exception” for text and data mining – Japan, for instance, which adopted a blanket text-and-data-mining exception in 2009, and more recently the United Kingdom, where text and data mining was declared fully legal for non-commercial research purposes in 2014. Some researchers worry that the UK exception does not go far enough; others report that British researchers are now at an advantage over their continental counterparts.

the Middle East is now the world’s fourth largest region for research on text and data mining, led by Iran and Turkey.

Pirate Party, MEP

@Senficon (Julia Reda): Text & Data mining in times of #copyright maximalism:

"Elsevier stopped me doing my research" http://onsnetwork.org/chartgerink/2015/11/16/elsevier-stopped-me-doing-my-research/ …

#opencon #TDM

Elsevier stopped me doing my research
Chris Hartgerink

I am a statistician interested in detecting potentially problematic research such as data fabrication, which results in unreliable findings and can harm policy-making, confound funding decisions, and hampers research progress.

To this end, I am content mining results reported in the psychology literature. Content mining the literature is a valuable avenue of investigating research questions with innovative methods. For example, our research group has written an automated program to mine research papers for errors in the reported results and found that 1/8 papers (of 30,000) contains at least one result that could directly influence the substantive conclusion [1].

In new research, I am trying to extract test results, figures, tables, and other information reported in papers throughout the majority of the psychology literature. As such, I need the research papers published in psychology that I can mine for these data. To this end, I started ‘bulk’ downloading research papers from, for instance, Sciencedirect. I was doing this for scholarly purposes and took into account potential server load by limiting the amount of papers I downloaded per minute to 9. I had no intention to redistribute the downloaded materials, had legal access to them because my university pays a subscription, and I only wanted to extract facts from these papers.

Full disclosure, I downloaded approximately 30GB of data from Sciencedirect in approximately 10 days. This boils down to a server load of 0.0021GB/min, 0.125GB/h, 3GB/day.

Approximately two weeks after I started downloading psychology research papers, Elsevier notified my university that this was a violation of the access contract, that this could be considered stealing of content, and that they wanted it to stop. My librarian explicitly instructed me to stop downloading (which I did immediately), otherwise Elsevier would cut all access to Sciencedirect for my university.

I am now not able to mine a substantial part of the literature, and because of this Elsevier is directly hampering me in my research.

[1] Nuijten, M. B., Hartgerink, C. H. J., van Assen, M. A. L. M., Epskamp, S., & Wicherts, J. M. (2015). The prevalence of statistical reporting errors in psychology (1985–2013). Behavior Research Methods, 1–22. doi: 10.3758/s13428-015-0664-2

Chris Hartgerink’s blog post

WILEY … “new security feature… to prevent systematic download of content”

“[limit of] 100 papers per day”

“essential security feature … to protect both parties (sic)”

CAPTCHA: user has to type words

http://onsnetwork.org/chartgerink/2016/02/23/wiley-also-stopped-my-doing-my-research/

Wiley also stopped me (Chris Hartgerink) doing my research

In November, I wrote about how Elsevier wanted me to stop downloading scientific articles for my research. Today, Wiley also ordered me to stop downloading.

As a quick recapitulation: I am a statistician doing research into detecting potentially problematic research such as data fabrication and estimating how often it occurs. For this, I need to download many scientific articles, because my research applies content mining methods that extract facts from them (e.g., test statistics). These facts serve as my data to answer my research questions. If I cannot download these research articles, I cannot collect the data I need to do my research.

I was downloading psychology research articles from the Wiley library, with a maximum of 5 per minute. I did this using the tool quickscrape, developed by the ContentMine organization. With this, I have downloaded approximately 18,680 research articles from the Wiley library, which I was downloading solely for research purposes.

Wiley noticed my downloading and notified my university library that they detected a compromised proxy, which they had immediately restricted. They called it “illegally downloading copyrighted content licensed by your institution”. However, at no point was there any investigation into whether my user credentials were actually compromised (they were not). Whether I had legitimate reasons to download these articles was never discussed. The original email from Wiley is available here.

As a result of Wiley denying me to download these research articles, I cannot collect data from another one of the big publishers, alongside Elsevier. Wiley is more strict than Elsevier by immediately condemning the downloading as illegal, whereas Elsevier offers an (inadequate) API with additional terms of use (while legitimate access has already been obtained). I am really confused about what the publisher’s stance on content mining is, because Sage and Springer seemingly allow it; I have downloaded 150,210 research articles from Springer and 12,971 from Sage and they never complained about it.

Julia Reda, Pirate MEP, running ContentMine software to liberate science 2016-04-16

The Right to Read is the Right to Mine* *Peter Murray-Rust, 2011

http://contentmine.org
