Get Another Label? Improving Data Quality and Data Mining Using Multiple, Noisy Labelers
Joint work with Foster Provost & Panos Ipeirotis
New York University Stern School
Victor Sheng
Assistant Professor, University of Central Arkansas
Outsourcing KDD Preprocessing
Traditionally, data mining teams invest in data formulation, information extraction, cleaning, etc.
– “the best you can expect are noisy labels” – Raghu Ramakrishnan, from his SIGKDD Innovation Award Lecture (2008)
Now, outsource preprocessing tasks, such as labeling, feature extraction, etc.
– using Mechanical Turk, Rent-a-Coder, etc.
– quality may be lower than expert labeling
– but low costs can allow massive scale
Mechanical Turk Example
ESP Game (by Luis von Ahn)
Other “Free” Labeling Schemes
Open Mind initiative (www.openmind.org)
Other GWAP games:
– Tag a Tune
– Verbosity (tag words)
– Matchin (image ranking)
Web 2.0 systems?
– Can/should tagging be directed?
Noisy Labels Can Be Problematic
Many tasks rely on high-quality labels:
– learning predictive models
– searching for relevant information
– duplicate detection (i.e., entity resolution)
– image recognition/labeling
– song categorization (e.g., Pandora)
– sentiment analysis
Noisy labels can degrade task performance
Noisy Labels Impact Model Quality
[Chart: classification accuracy (40–100%) vs. number of examples (1–300) on the Mushroom dataset, one curve per labeling quality P = 0.5, 0.6, 0.8, 1.0]
Labeling quality increases → classification quality increases
Here, labels are values for the target variable
Summary of Results
Repeated labeling can improve data quality and model quality (but not always)
Repeated labeling can be preferable to single labeling even when labels aren’t particularly cheap
When labels are relatively cheap, repeated labeling can do much better
Round-robin repeated labeling does well
Selective repeated labeling performs better
Setting for This Talk (I)
Some process provides data points to be labeled with some fixed probability distribution
– data points <X, Y>, or “examples”
– Y is a multi-label set, e.g., {+,-,+}
– each example has a correct label
A set L of labelers, L1, L2, … (potentially unbounded)
– Li has “quality” pi, the probability of labeling any given example correctly
– same quality for all labelers, i.e., pi = pj
– some strategies acquire k labels for each example
[Table: examples as rows, with feature columns A1, A2, …, An (the X portion) and a label column Y holding values such as +]
Setting for This Talk (II)
Total acquisition cost includes CU and CL
– CU: the cost of acquiring the unlabeled “feature portion” of an example
– CL: the cost of acquiring one label for an example
– same CU and CL for all examples
– ρ = CU/CL gives the cost ratio (ignoring CU corresponds to ρ = 0)
A process (majority voting) generates an integrated label from a set of labels, e.g., {+,-,+} → +
We care about:
1) the quality of the integrated labels
2) the quality of predictive models induced from the data + integrated labels, e.g., accuracy on test data
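As a minimal sketch of the integration step, assuming binary labels and simple majority voting (the function name and the tie-breaking rule are illustrative, not from the talk):

```python
from collections import Counter

def integrate_labels(labels):
    """Integrate a multiset of noisy binary labels ('+'/'-') by majority vote.

    Ties are broken in favor of '+' purely for determinism; the talk does
    not specify a tie-breaking rule.
    """
    counts = Counter(labels)
    return '+' if counts['+'] >= counts['-'] else '-'

# Example from the slide: {+, -, +} integrates to +
print(integrate_labels(['+', '-', '+']))  # -> +
```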
Repeated Labeling Improves Label Quality
[Chart: integrated quality (0.2–1.0) vs. number of labelers (1–13), one curve per individual labeler quality P = 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]
Ask multiple labelers, keep majority label as “true” label
Quality is probability of being correct
P is the probability of an individual labeler being correct
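These curves can be reproduced analytically: with m independent labelers of quality p (m odd, so ties cannot occur), the integrated quality is the binomial probability that a majority is correct. A sketch:

```python
from math import comb

def integrated_quality(p, m):
    """Probability that a majority vote of m labelers (m odd), each correct
    independently with probability p, yields the correct label."""
    assert m % 2 == 1, "use an odd number of labelers to avoid ties"
    return sum(comb(m, j) * p**j * (1 - p)**(m - j)
               for j in range(m // 2 + 1, m + 1))

# Quality rises with more labelers when p > 0.5, stays flat at p = 0.5
for p in (0.5, 0.6, 0.8, 1.0):
    print(p, [round(integrated_quality(p, m), 3) for m in (1, 3, 5, 11)])
```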
Tradeoffs for Modeling
Get more labels → improve label quality → improve classification
Get more examples → improve classification
[Chart: classification accuracy (40–100%) vs. number of examples (1–300) on the Mushroom dataset, one curve per labeling quality P = 0.5, 0.6, 0.8, 1.0]
Basic Labeling Strategies
Single Labeling
– Get as many examples as possible, one label each
Round-robin Repeated Labeling (give the next label to the example with the fewest labels so far)
– Fixed Round Robin (FRR): keep labeling the same set of examples, in some order
– Generalized Round Robin (GRR): also get new examples
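A sketch of the two round-robin variants with a simulated noisy labeler (the function names and the labeler interface are illustrative, not from the talk):

```python
import random

def generalized_round_robin(unlabeled_pool, k, get_label):
    """GRR sketch: keep drawing new examples from the pool, acquiring
    exactly k labels for each; get_label(x) simulates one noisy labeler."""
    return {x: [get_label(x) for _ in range(k)] for x in unlabeled_pool}

def fixed_round_robin(examples, budget, get_label):
    """FRR sketch: cycle over the same fixed set of examples, always
    giving the next label to an example with the fewest labels so far."""
    labels = {x: [] for x in examples}
    for i in range(budget):
        x = examples[i % len(examples)]
        labels[x].append(get_label(x))
    return labels

# Simulated labeler with quality p = 0.6 for a true '+' label
labeler = lambda x: '+' if random.random() < 0.6 else '-'
out = fixed_round_robin(['a', 'b', 'c'], budget=9, get_label=labeler)
print({x: len(v) for x, v in out.items()})  # each example gets 3 labels
```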
Fixed Round Robin vs. Single Labeling
FRR(100 examples)
SL
With high noise, repeated labeling better than single labeling
[Chart: accuracy (50–100%) vs. number of labels (100–5100) on mushroom, p = 0.6, comparing SL and FRR]
p = 0.6 labeling quality, #examples = 100
Tradeoffs for Modeling
Get more labels → improve label quality → improve classification
Get more examples → improve classification
[Chart: classification accuracy (40–100%) vs. number of examples (1–300) on the Mushroom dataset, one curve per labeling quality P = 0.5, 0.6, 0.8, 1.0]
Fixed Round Robin vs. Single Labeling
FRR (50 examples)
SL
With low noise and few examples, more (single labeled) examples better
[Chart: accuracy (90–100%) vs. number of labels (50–5050) on mushroom, p = 0.8, comparing SL and FRR]
p = 0.8 labeling quality, #examples = 50
Tradeoffs for Modeling
Get more labels → improve label quality → improve classification
Get more examples → improve classification
[Chart: classification accuracy (40–100%) vs. number of examples on the Mushroom dataset, one curve per labeling quality P = 0.5, 0.6, 0.8, 1.0]
Gen. Round Robin vs. Single Labeling
ρ = CU/CL = 0 (i.e., CU = 0), k = 10
[Chart: accuracy (40–100%) vs. data acquisition cost (0–20000) on mushroom, p = 0.6, ρ = 0, k = 10, comparing SL and GRR]
ρ: cost ratio, k: #labels per example
Use up all examples
Repeated labeling is better than single labeling when labels are not particularly cheap
Gen. Round Robin vs. Single Labeling
ρ=CU/CL=3, k=5
[Chart: accuracy (40–100%) vs. data acquisition cost (0–16000) on mushroom, p = 0.6, ρ = 3, k = 5, comparing SL and GRR]
ρ: cost ratio, k: #labels per example
Repeated labeling is better than single labeling when labels are relatively cheap
Gen. Round Robin vs. Single Labeling
[Chart: accuracy (40–100%) vs. data acquisition cost (0–44000) on mushroom, p = 0.6, ρ = 10, k = 12, comparing SL and GRR]
ρ = CU/CL = 10, k = 12 (ρ: cost ratio, k: #labels per example)
The cheaper labels are relative to unlabeled data, the better repeated labeling does compared to single labeling
Selective Repeated-Labeling
We have seen:
– With noisy labels, getting multiple labels is better than single labeling
– Considering costly preprocessing, the benefit is magnified
Can we do better than the basic strategies?
Key observation: we have additional information to guide the selection of data for repeated labeling
– the current multiset of labels
Example: {+,-,+,+,-,+} vs. {+,+,+,+}
Natural Candidate: Entropy
Entropy is a natural measure of label uncertainty:
E({+,+,+,+,+,+}) = 0
E({+,-,+,-,+,-}) = 1
Strategy: Get more labels for examples with high-entropy label multisets
E(S) = −(|S₊|/|S|) log₂(|S₊|/|S|) − (|S₋|/|S|) log₂(|S₋|/|S|)
where S₊ = the positive labels in S, S₋ = the negative labels in S
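The entropy measure can be sketched directly from the formula above (the function name is illustrative):

```python
from math import log2

def label_entropy(pos, neg):
    """Entropy E(S) of a label multiset with `pos` positive and `neg`
    negative labels; 0 when the multiset is pure."""
    n = pos + neg
    e = 0.0
    for c in (pos, neg):
        if c:  # 0*log(0) is taken as 0
            e -= (c / n) * log2(c / n)
    return e

print(label_entropy(6, 0))  # 0.0  ({+,+,+,+,+,+})
print(label_entropy(3, 3))  # 1.0  ({+,-,+,-,+,-})
# Scale invariance: (3+, 2-) scores the same as (600+, 400-)
print(label_entropy(3, 2), label_entropy(600, 400))
```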
What Not to Do: Use Entropy
[Chart: labeling quality (0.6–1.0) vs. number of labels (0–4000) on mushroom, p = 0.6, comparing GRR and ENTROPY]
Entropy-based selection improves label quality at first, but hurts it in the long run
Why Not Entropy?
In noisy scenarios:
– Label sets that truly reflect the truth have high entropy, even with many labels
– Some sets accidentally get low entropy and never get more labels
Entropy is scale invariant:
– (3+, 2-) has the same entropy as (600+, 400-)
Fundamental problem: entropy measures mixture, not uncertainty
Estimating Label Uncertainty (LU)
Uncertainty: the estimated probability that the integrated label is wrong
Model the underlying label probability as a Beta distribution, based on the observed +’s and −’s
Label uncertainty SLU = tail probability of the Beta distribution
[Figure: Beta probability density function on [0, 1]; SLU is the tail area on the far side of 0.5]
Label Uncertainty
p = 0.7, 5 labels
(3+, 2-): Entropy ≈ 0.97, SLU = 0.34
Label Uncertainty
p = 0.7, 10 labels
(7+, 3-): Entropy ≈ 0.88, SLU = 0.11
Label Uncertainty
p = 0.7, 20 labels
(14+, 6-): Entropy ≈ 0.88, SLU = 0.04
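The SLU values on these slides can be reproduced with the tail probability of a Beta(pos + 1, neg + 1) distribution below 0.5; that parametrization matches the numbers shown but is an assumption here. For integer shape parameters the Beta CDF reduces to a binomial sum, so only the standard library is needed:

```python
from math import comb

def beta_cdf(x, a, b):
    """CDF of Beta(a, b) at x for integer a, b >= 1, using the identity
    I_x(a, b) = sum_{j=a}^{a+b-1} C(a+b-1, j) x^j (1-x)^(a+b-1-j)."""
    n = a + b - 1
    return sum(comb(n, j) * x**j * (1 - x)**(n - j) for j in range(a, n + 1))

def slu(pos, neg):
    """Label uncertainty: Beta(pos+1, neg+1) tail mass on the losing side
    of 0.5; counts are swapped so the majority is always positive."""
    if pos < neg:
        pos, neg = neg, pos
    return beta_cdf(0.5, pos + 1, neg + 1)

print(round(slu(3, 2), 2))   # 0.34, as on the (3+, 2-) slide
print(round(slu(7, 3), 2))   # 0.11
print(round(slu(14, 6), 2))  # 0.04
```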
Label Uncertainty vs. Round Robin
[Chart: labeling quality (0.6–1.0) vs. number of labels (0–4000) on mushroom, p = 0.6, comparing GRR and LU]
Similar results across a dozen data sets
Gen. Round Robin vs. Single Labeling
[Chart: accuracy (40–100%) vs. data acquisition cost (0–44000) on mushroom, p = 0.6, ρ = 10, k = 12, comparing SL and GRR]
ρ = CU/CL = 10, k = 12 (ρ: cost ratio, k: #labels per example)
Repeated labeling is better than single labeling here
Label Uncertainty vs. Round Robin
[Chart: labeling quality (0.6–1.0) vs. number of labels (0–4000) on mushroom, p = 0.6, comparing GRR and LU]
Similar results across a dozen data sets
So far, LU is the best strategy
Another Strategy: Model Uncertainty (MU)
Learning a model of the data provides an alternative source of information about label certainty
Model uncertainty: get more labels for instances that cause model uncertainty
Intuition:
– for data quality: low-certainty “regions” may be due to incorrect labeling of the corresponding instances
– for modeling: why improve training-data quality where the model is already certain?
[Diagram: a “self-healing process” — models trained on the examples flag low-certainty regions of +/− examples, which are then selected for additional labels]
Yet Another Strategy: Label & Model Uncertainty (LMU)
Label and model uncertainty (LMU): avoid examples where either strategy is certain
S_LMU = √(S_LU × S_MU)
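A sketch of the combination, assuming S_MU is derived from the model’s predicted class probability (that particular scoring form, and the function names, are illustrative, not from the talk):

```python
from math import sqrt

def model_uncertainty(p_plus):
    """S_MU sketch for a model predicting P(+|x) = p_plus: maximal (0.5)
    at p_plus = 0.5, zero when the model is fully certain."""
    return 0.5 - abs(0.5 - p_plus)

def lmu(s_lu, s_mu):
    """S_LMU: geometric mean of label and model uncertainty, so an example
    scores high only when neither component is near zero."""
    return sqrt(s_lu * s_mu)

print(lmu(0.34, model_uncertainty(0.55)))  # both uncertain -> high score
print(lmu(0.34, model_uncertainty(0.99)))  # model certain -> score near 0
```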
Comparison
[Chart: labeling quality (0.65–0.85) vs. number of labels (0–4000) on waveform, p = 0.6, comparing GRR, MU, LU, and LMU (Label & Model Uncertainty)]
Model Uncertainty alone also improves quality
Comparison: Model Quality (I)
Across 12 domains, LMU is always better than GRR. LMU is statistically significantly better than LU and MU.
[Chart: accuracy (70–100%) vs. number of labels (0–4000) on sick, p = 0.6, comparing GRR, MU, LU, and LMU]
Comparison: Model Quality (II)
[Chart: accuracy (65–100%) vs. number of labels (0–4000) on mushroom, p = 0.6, comparing SL, GRR, MU, LU, and LMU]
Across 12 domains, LMU is always better than GRR. LMU is statistically significantly better than LU and MU.
Summary of Results
Micro-task outsourcing (e.g., MTurk, ESP game) has changed the landscape for data formulation
Repeated labeling can improve data quality and model quality (but not always)
Repeated labeling can be preferable to single labeling even when labels aren’t particularly cheap
When labels are relatively cheap, repeated labeling can do much better
Round-robin repeated labeling does well
Selective repeated labeling performs better
Thanks!
Q & A?