Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers

Foster Provost, New York University
Joint work with Panos Ipeirotis, Victor Sheng, and Jing Wang


TRANSCRIPT

Page 1: Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers

Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers

Foster Provost, New York University

Joint work with Panos Ipeirotis, Victor Sheng, and Jing Wang

Page 2: Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers
Page 3: Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers


Page 4: Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers


Page 5: Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers

Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers

Foster Provost, New York University

Joint work with Panos Ipeirotis, Victor Sheng, and Jing Wang

Page 6: Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers

Outsourcing machine learning preprocessing

Traditionally, modeling teams have invested substantial internal resources in data formulation, information extraction, cleaning, and other preprocessing:
– Raghu Ramakrishnan, in his SIGKDD Innovation Award Lecture (2008): “the best you can expect are noisy labels”

Now we can outsource preprocessing tasks such as labeling, feature extraction, and verifying information extraction:
– using Mechanical Turk, Rent-a-Coder, etc.
– quality may be lower than expert labeling (much?)
– but low costs can allow massive scale

The ideas may also apply to focusing user-generated tagging, crowdsourcing, etc.

Page 7: Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers

Example: Build a web-page classifier for inappropriate content

Need a large number of hand-labeled web pages. Get people to look at pages and classify them. For example, for adult content:
G (general), PG (parental guidance), R (restricted), X (porn)

Cost/speed statistics:
– Undergrad intern: 200 web pages/hr, cost: $15/hr
– MTurk: 2500 web pages/hr, cost: $12/hr

Page 8: Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers


Page 9: Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers

Noisy labels can be problematic

Many tasks rely on high-quality labels for objects:
– webpage classification for safe advertising
– learning predictive models
– searching for relevant information
– finding duplicate database records
– image recognition/labeling
– song categorization
– sentiment analysis

Noisy labels can lead to degraded task performance

Page 10: Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers

Quality and Classification Performance

[Figure: classification accuracy (40–100%) vs. number of training examples (1–300) on the Mushroom data set, one learning curve per labeling quality P = 0.5, 0.6, 0.8, 1.0.]

As labeling quality increases, classification quality increases.

Here, labels are values for the target variable.

Page 11: Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers

Summary of results

– Repeated labeling can improve data quality and model quality (but not always)
– When labels are noisy, repeated labeling can be preferable to single labeling
– When labels are relatively cheap, repeated labeling can do much better
– Round-robin repeated labeling does well
– Selective repeated labeling improves significantly

Page 12: Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers


I won’t talk about …

Page 13: Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers

Related topic

Estimating (and using) the labeler quality
– for multi-labeled data: Dawid & Skene 1979; Raykar et al. JMLR 2010; Donmez et al. KDD-09
– for single-labeled data with variable-noise labelers: Donmez & Carbonell 2008; Dekel & Shamir 2009a,b
– to eliminate/down-weight poor labelers: Dekel & Shamir; Donmez et al.; Raykar et al. (implicitly)
– and to correct labeler biases: Ipeirotis et al. HCOMP-10
– example-conditional labeler performance: Yan et al. 2010a,b

Using the learned model to find bad labelers/labels: Brodley & Friedl 1999; Dekel & Shamir; us (I'll discuss)

Page 14: Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers

Setting for this talk (I)

– An unknown process provides data points to be labeled, randomly from some fixed probability distribution
  – data points, or “examples”, comprise a vector of features or descriptive attributes
  – we sometimes consider a fixed subset S of examples
  – labels are binary
– A set L of labelers, L1, L2, …, (potentially unbounded for this talk)
  – each Li has “quality” pi, the probability that Li will label any given example correctly
  – pi = pj for most of this talk (sometimes called q)
  – some subset of L will label each example
  – some strategies will acquire k labels for each example
– Total acquisition cost includes the cost CU of acquiring the unlabeled “feature portion” and the cost CL of acquiring the “label” of an example
  – for most of the talk I'll ignore CU
  – ρ = CU/CL gives the cost ratio

Page 15: Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers

Setting for this talk (II)

We select a fixed process for producing an integrated label from a set of labels (e.g., majority voting).

We care about:
1) the quality of the labeling, i.e., the expectation that an integrated label will be correct
2) the generalization performance of predictive models induced from the data + integrated labels, e.g., measured as generalization performance on hold-out data (accuracy, AUC, etc.)

Page 16: Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers

Majority Voting and Label Quality

Ask multiple labelers, keep the majority label as the “true” label. Quality is the probability of being correct.

[Figure: integrated quality (0.2–1.0) vs. number of labelers (1–13), one curve per individual labeler quality P = 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0.]

P is the probability of an individual labeler being correct.
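To make the curves concrete: a minimal sketch (mine, not the deck's) that computes the integrated quality of majority voting over an odd number k of independent labelers, each correct with probability p:

```python
from math import comb

def majority_vote_quality(p: float, k: int) -> float:
    """Probability that the majority label over k independent labelers
    (each correct with probability p) is correct."""
    assert k % 2 == 1, "use an odd number of labelers to avoid ties"
    # The majority is correct when more than half the labelers are correct.
    return sum(comb(k, i) * p**i * (1 - p)**(k - i)
               for i in range(k // 2 + 1, k + 1))

# For p = 0.7, quality rises quickly with the number of labelers:
for k in (1, 3, 5, 11):
    print(k, round(majority_vote_quality(0.7, k), 3))
# 1 0.7 | 3 0.784 | 5 0.837 | 11 0.922
```

For P < 0.5 the same formula shows quality degrading as labelers are added, matching the P = 0.4 curve in the figure above.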

Page 17: Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers

Tradeoffs for Modeling

– Get more examples → improve classification
– Get more labels → improve label quality → improve classification

[Figure: accuracy vs. number of examples (Mushroom) for P = 0.5, 0.6, 0.8, 1.0, as above.]

Page 18: Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers

Basic Labeling Strategies

– Single Labeling
  – Get as many data points as possible
  – One label each
– Round-robin Repeated Labeling
  – Fixed Round Robin (FRR): keep labeling the same set of points in some order
  – Generalized Round Robin (GRR): repeatedly label data points, giving the next label to the one with the fewest labels so far (see the sketch below)
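A minimal sketch (mine, not the deck's) of the GRR acquisition loop, using a heap to always give the next label to the example with the fewest labels; get_label stands in for whatever labeler pool is available:

```python
import heapq
import random

def generalized_round_robin(examples, budget, get_label):
    """Acquire `budget` labels, always giving the next label to the
    example with the fewest labels so far (GRR)."""
    labels = {i: [] for i in range(len(examples))}
    heap = [(0, i) for i in range(len(examples))]  # (label count, index)
    heapq.heapify(heap)
    for _ in range(budget):
        count, i = heapq.heappop(heap)
        labels[i].append(get_label(examples[i]))
        heapq.heappush(heap, (count + 1, i))
    return labels

# Toy demo with a simulated labeler of quality p = 0.6 that sees the
# true label and flips it 40% of the time:
true_labels = [1, 0, 1, 1]
noisy = lambda y: y if random.random() < 0.6 else 1 - y
print(generalized_round_robin(true_labels, budget=12, get_label=noisy))
```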

Page 19: Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers

Fixed Round Robin vs. Single Labeling

p = 0.6 labeling quality, #examples = 100

[Figure: accuracy vs. cost for FRR (100 examples) and SL (single labeling).]

With high noise or many examples, repeated labeling is better than single labeling.

Page 20: Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers

Tradeoffs for Modeling

– Get more labels → improve label quality → improve classification
– Get more examples → improve classification

[Figure: accuracy vs. number of examples (Mushroom) for P = 0.5, 0.6, 0.8, 1.0, as above.]

Page 21: Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers

Fixed Round Robin vs. Single Labeling

p = 0.8 labeling quality, #examples = 50

[Figure: accuracy vs. cost for FRR (50 examples) and single labeling.]

With low noise and few examples, more (single-labeled) examples is better.

Page 22: Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers

Tradeoffs for Modeling

– Get more labels → improve label quality → improve classification
– Get more examples → improve classification

[Figure: accuracy vs. number of examples (Mushroom) for P = 0.5, 0.6, 0.8, 1.0, as above.]


Page 24: Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers

Gen. Round Robin vs. Single Labeling

Settings: CU = 0 (i.e., ρ = 0), k = 10 (ρ: cost ratio; k: #labels per example)

[Figure: labeling quality (40–100%) vs. data acquisition cost (mushroom, p = 0.6, CU = 0, k = 10) for SL and GRR; the SL curve stops once it uses up all the examples.]

Repeated labeling is better than single labeling for this setting.

Page 25: Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers

Gen. Round Robin vs. Single Labeling

Settings: ρ = CU/CL = 3, k = 5 (ρ: cost ratio; k: #labels per example)

[Figure: labeling quality vs. data acquisition cost (mushroom, p = 0.6, ρ = 3, k = 5) for SL and GRR.]

Repeated labeling is better than single labeling for this setting.

Page 26: Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers

Gen. Round Robin vs. Single Labeling

Settings: ρ = CU/CL = 10, k = 12 (ρ: cost ratio; k: #labels per example)

[Figure: labeling quality vs. data acquisition cost (mushroom, p = 0.6, ρ = 10, k = 12) for SL and GRR.]

Repeated labeling is better than single labeling for this setting.

Page 27: Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers

Selective Repeated-Labeling

We have seen so far:
– With enough examples and noisy labels, getting multiple labels is better than single labeling
– When we consider costly preprocessing, the benefit is magnified

Can we do better than the basic strategies?

Key observation: we have additional information to guide the selection of data for repeated labeling: the current multiset of labels.

Example: {+,-,+,-,-,+} vs. {+,+,+,+,+,+}

Page 28: Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers

Natural Candidate: Entropy

Entropy is a natural measure of label uncertainty:

E(S) = -(|S+|/|S|) log2(|S+|/|S|) - (|S-|/|S|) log2(|S-|/|S|), where S+ is the multiset of positive labels and S- the multiset of negative labels in S

E({+,+,+,+,+,+}) = 0
E({+,-,+,-,-,+}) = 1

Strategy: get more labels for high-entropy label multisets (see the sketch below).
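A quick sketch of that entropy score (mine, not the deck's); it also demonstrates the scale-invariance problem raised two slides below:

```python
from math import log2

def label_entropy(pos: int, neg: int) -> float:
    """Entropy of a label multiset with `pos` positive and `neg`
    negative labels (0 * log2(0) taken as 0)."""
    n = pos + neg
    return -sum((c / n) * log2(c / n) for c in (pos, neg) if c)

print(label_entropy(6, 0))      # 0.0    {+,+,+,+,+,+}
print(label_entropy(3, 3))      # 1.0    {+,-,+,-,-,+}
print(label_entropy(3, 2))      # ~0.971
print(label_entropy(600, 400))  # ~0.971, same as (3+, 2-): scale invariant
```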

Page 29: Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers

What Not to Do: Use Entropy

[Figure: labeling quality (0.6–1.0) vs. number of labels (mushroom, p = 0.6) for GRR and ENTROPY.]

Entropy-based selection improves at first, but hurts in the long run.

Page 30: Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers

Why not Entropy

In the presence of noise, entropy will be high even with many labels.

Entropy is scale invariant: (3+, 2-) has the same entropy as (600+, 400-).

Page 31: Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers

Estimating Label Uncertainty (LU)

Observe the +'s and -'s and compute Pr{+|obs} and Pr{-|obs}. Label uncertainty S_LU = the tail of the beta distribution.

[Figure: beta probability density function over [0.0, 1.0], with the tail to one side of 0.5 shaded as S_LU.]
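A small sketch of this score, assuming (consistently with the CDFb values on the next three slides) a Beta(pos+1, neg+1) posterior and taking the smaller tail around 0.5; scipy supplies the beta CDF:

```python
from scipy.stats import beta

def label_uncertainty(pos: int, neg: int) -> float:
    """S_LU: the tail mass of the Beta(pos+1, neg+1) posterior on the
    minority side of 0.5; small when one class is clearly more likely."""
    tail = beta.cdf(0.5, pos + 1, neg + 1)
    return min(tail, 1.0 - tail)

# Matches the CDFb values on the following slides:
print(round(label_uncertainty(3, 2), 2))   # 0.34  (5 labels)
print(round(label_uncertainty(7, 3), 2))   # 0.11  (10 labels)
print(round(label_uncertainty(14, 6), 2))  # 0.04  (20 labels)
```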

Page 32: Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers

Label Uncertainty

p = 0.7, 5 labels (3+, 2-): entropy ≈ 0.97, CDFb = 0.34

Page 33: Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers

Label Uncertainty

p = 0.7, 10 labels (7+, 3-): entropy ≈ 0.88, CDFb = 0.11

Page 34: Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers

Label Uncertainty

p = 0.7, 20 labels (14+, 6-): entropy ≈ 0.88, CDFb = 0.04

Page 35: Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers

Label Uncertainty vs. Round Robin

[Figure: labeling quality (0.6–1.0) vs. number of labels (mushroom, p = 0.6) for GRR and LU.]

Similar results across a dozen data sets.

Page 36: Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers

Gen. Round Robin vs. Single Labeling

Settings: ρ = CU/CL = 10, k = 12 (ρ: cost ratio; k: #labels per example)

[Figure: labeling quality vs. data acquisition cost (mushroom, p = 0.6, ρ = 10, k = 12) for SL and GRR.]

Repeated labeling is better than single labeling here.

Page 37: Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers

Label Uncertainty vs. Round Robin

[Figure: labeling quality (0.6–1.0) vs. number of labels (mushroom, p = 0.6) for GRR and LU.]

Similar results across a dozen data sets.

Page 38: Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers

More sophisticated label uncertainty?


Page 39: Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers

Estimating Label Uncertainty (LU)

Observe the +'s and -'s and compute Pr{+|obs} and Pr{-|obs}. Label uncertainty S_LU = the tail of the beta distribution.

[Figure: beta probability density function over [0.0, 1.0], with the tail to one side of 0.5 shaded as S_LU.]

Page 40: Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers

More sophisticated label uncertainty? (using estimated instance-specific label quality)

Page 41: Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers

More sophisticated LU improves labeling quality under class imbalance and fixes some pesky LU learning-curve glitches.

Both techniques perform essentially optimally with balanced classes.

Page 42: Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers

Another strategy: Model Uncertainty (MU)

Learning models of the data provides an alternative source of information about label certainty (a random forest for the results to come).

Model uncertainty: get more labels for instances that cause model uncertainty.

Intuition?
– for modeling: why improve training-data quality where the model is already certain?
– for data quality: low-certainty “regions” may be due to incorrect labeling of the corresponding instances

[Diagram: a “self-healing process”: models are induced from the +/- examples, and the models' low-certainty regions point back to possibly mislabeled examples.]
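One plausible reading of the MU score, as a sketch (the deck does not spell out the formula; the random forest choice is from the slide above): train a model on the current integrated labels and score each instance by how close its predicted class probability is to 0.5:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def model_uncertainty(X_train, y_train, X_pool):
    """S_MU sketch: how uncertain a model trained on the current
    integrated labels is about each pool instance; maximal (0.5)
    when the predicted probability of the positive class is 0.5."""
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)
    p_pos = model.predict_proba(X_pool)[:, 1]
    return 0.5 - np.abs(p_pos - 0.5)  # scores in [0, 0.5]

# Instances with the highest scores become candidates for more labels.
```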

Page 43: Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers

Yet another strategy: Label & Model Uncertainty (LMU)

Label and model uncertainty (LMU): avoid examples where either strategy is certain

S_LMU = sqrt(S_LU × S_MU)
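Combining the two scores from the sketches above, reading the combination rule as a geometric mean:

```python
from math import sqrt

def lmu_score(s_lu: float, s_mu: float) -> float:
    """S_LMU: near zero whenever either the label multiset or the
    model is confident, so such examples are skipped for relabeling."""
    return sqrt(s_lu * s_mu)
```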

Page 44: Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers

Quality

[Figure: labeling quality (0.6–1.0) vs. number of labels (waveform, p = 0.6) for UNF (uniform, round robin), MU, LU, and LMU (label + model uncertainty). Model uncertainty alone also improves quality.]

Page 45: Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers

Comparison: Model Quality (I)

Label & model uncertainty

[Figure: accuracy (70–100%) vs. number of labels (sick, p = 0.6) for GRR, MU, LU, and LMU.]

Across 12 domains, LMU is always better than GRR. LMU is statistically significantly better than LU and MU.

Page 46: Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers

Comparison: Model Quality (II)

[Figure: accuracy (65–100%) vs. number of labels (mushroom, p = 0.6) for GRR, MU, LU, LMU, and SL.]

Across 12 domains, LMU is always better than GRR. LMU is statistically significantly better than LU and MU.

Page 47: Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers

Why does Model Uncertainty (MU) work?

MU score distributions for correctly labeled (blue) and incorrectly labeled (purple) cases.

[Diagram: the +/- example space with the model's uncertain region highlighted.]

Page 48: Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers

Why does Model Uncertainty (MU) work?

[Diagram: models are induced from the +/- examples; the self-healing process, as before.]

Self-healing MU vs. “active learning” MU

Page 49: Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers

Adult content classification


Page 50: Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers

Summary of results

– Micro-task outsourcing (e.g., MTurk, Rent-a-Coder, the ESP game) changes the landscape for data formulation
– Repeated labeling improves data quality and model quality (but not always)
– With noisy labels, repeated labeling can be preferable to single labeling even when labels aren't particularly cheap
– When labels are relatively cheap, repeated labeling can do much better
– Round-robin repeated labeling works well
– Selective repeated labeling improves substantially
– Best performance comes from combining model-based and label-set-based indications of uncertainty

Page 51: Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers

Opens up many new directions…

– Strategies using “learning-curve gradient”
– Estimating & correcting the quality of each labeler (cf. the related-work list earlier)
– Example-conditional labeling difficulty
– Increased compensation vs. labeler quality
– Multiple “real” labels
– Truly “soft” labels
– Selective repeated tagging

Page 52: Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers

Example: Build a web-page classifier for inappropriate content

Need a large number of hand-labeled web pages. Get people to look at pages and classify them as:
G (general), PG (parental guidance), R (restricted), X (porn)

Cost/speed statistics:
– Undergrad intern: 200 websites/hr, cost: $15/hr
– MTurk: 2500 websites/hr, cost: $12/hr

Page 53: Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers

Bad news: Spammers!

Worker ATAMRO447HWJQ labeled X (porn) sites as G (general audience).

Page 54: Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers

Solution: Repeated Labeling

Probability of correctness increases with the number of workers and with the quality of the workers:
– 1 worker: 70% correct
– 11 workers: 93% correct

Page 55: Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers

But Majority Voting can be Expensive

11-vote statistics:
– MTurk: 227 websites/hr, cost: $12/hr
– Undergrad: 200 websites/hr, cost: $15/hr

Single-vote statistics:
– MTurk: 2500 websites/hr, cost: $12/hr
– Undergrad: 200 websites/hr, cost: $15/hr

Page 56: Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers

Spammer among 9 workers

Our “friend” ATAMRO447HWJQ mainly marked sites as G. Obviously a spammer…

We can compute error rates for each worker. Error rates for ATAMRO447HWJQ:
– P[X → X] = 9.847%    P[X → G] = 90.153%
– P[G → X] = 0.053%    P[G → G] = 99.947%
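A sketch (mine; the deck omits the computation) of estimating these error rates by comparing each worker's answers against the integrated “correct” labels, with spamminess scored as the average error rate across true classes:

```python
from collections import Counter, defaultdict

def worker_error_rates(answers, truth):
    """answers: iterable of (worker, item, given_label);
    truth: item -> integrated label.
    Returns worker -> {(true_label, given_label): P[true -> given]}."""
    counts = defaultdict(Counter)
    for worker, item, given in answers:
        counts[worker][(truth[item], given)] += 1
    rates = {}
    for worker, c in counts.items():
        totals = Counter()
        for (true, _), n in c.items():
            totals[true] += n
        rates[worker] = {(t, g): n / totals[t] for (t, g), n in c.items()}
    return rates

def avg_error_rate(worker_rates):
    """Mean over true classes of P[true -> anything else]; roughly 45%
    for ATAMRO447HWJQ above, close to the 50% of random answers."""
    classes = {t for (t, _) in worker_rates}
    return sum(1.0 - worker_rates.get((t, t), 0.0) for t in classes) / len(classes)
```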

Page 57: Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers

Rejecting spammers and Benefits

Random answers would give an error rate of 50%. Average error rate for ATAMRO447HWJQ: 45.2%
– P[X → X] = 9.847%    P[X → G] = 90.153%
– P[G → X] = 0.053%    P[G → G] = 99.947%

Action: REJECT and BLOCK

Results:
– Over time you block all spammers
– Spammers learn to avoid your HITs
– You can decrease redundancy, as the quality of the workers is higher

Page 58: Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers

After rejecting spammers, quality goes up

– With spam, 1 worker: 70% correct
– With spam, 11 workers: 93% correct
– Without spam, 1 worker: 80% correct
– Without spam, 5 workers: 94% correct

Page 59: Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers

Correcting biases

Sometimes workers are careful but biased.

Worker ATLJIK76YH1TF classifies G → P and P → R. Average error rate for ATLJIK76YH1TF: 45.0%. Is ATLJIK76YH1TF a spammer?

Error rates for worker ATLJIK76YH1TF:
– P[G → G] = 20.0%   P[G → P] = 80.0%   P[G → R] = 0.0%    P[G → X] = 0.0%
– P[P → G] = 0.0%    P[P → P] = 0.0%    P[P → R] = 100.0%  P[P → X] = 0.0%
– P[R → G] = 0.0%    P[R → P] = 0.0%    P[R → R] = 100.0%  P[R → X] = 0.0%
– P[X → G] = 0.0%    P[X → P] = 0.0%    P[X → R] = 0.0%    P[X → X] = 100.0%

Page 60: Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers

Correcting biases

For ATLJIK76YH1TF, we simply need to compute the “non-recoverable” error rate (technical details omitted).

Non-recoverable error rate for ATLJIK76YH1TF: 9%

The “condition number” of the error-rate matrix [how easy it is to invert the matrix] is a good indicator of spamminess.

(Error rates for worker ATLJIK76YH1TF as on the previous slide.)
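A sketch of the condition-number check with numpy, using ATLJIK76YH1TF's error-rate matrix. Because the P and R rows are identical, the matrix is singular, which is exactly the non-recoverable part of this worker's confusion:

```python
import numpy as np

# Rows: true class (G, P, R, X); columns: label the worker gives.
error_rates = np.array([
    [0.20, 0.80, 0.00, 0.00],  # G mostly reported as P (bias)
    [0.00, 0.00, 1.00, 0.00],  # P always reported as R
    [0.00, 0.00, 1.00, 0.00],  # R always reported as R
    [0.00, 0.00, 0.00, 1.00],  # X always reported as X
])

# An easily invertible matrix means the bias can be undone by
# post-processing; identical P and R rows make this one singular,
# so P and R can never be disentangled from this worker's answers.
print(np.linalg.cond(error_rates))  # effectively infinite
```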

Page 61: Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers

A different sort of Label Uncertainty:

If we were to know the labeler quality and the class prior, we could estimate label uncertainty directly from the numbers of positive and negative labels, p and n.
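A sketch of that direct computation under the symmetric-noise setting used throughout the talk, with known labeler quality q and class prior (my own formulation of the posterior):

```python
def prob_positive(pos: int, neg: int, q: float, prior: float) -> float:
    """Pr{true label = + | pos positive and neg negative labels},
    assuming each label is independently correct with probability q
    and the prior probability of + is `prior`."""
    like_pos = prior * q**pos * (1 - q)**neg        # if + is the truth
    like_neg = (1 - prior) * (1 - q)**pos * q**neg  # if - is the truth
    return like_pos / (like_pos + like_neg)

p_plus = prob_positive(3, 2, q=0.7, prior=0.5)
print(p_plus)                   # ~0.7
print(min(p_plus, 1 - p_plus))  # label uncertainty: ~0.3
```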

Page 62: Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers

But we estimated the distribution over q above…

[Figure: beta probability density function over [0.0, 1.0].]

Page 63: Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers

“New” label uncertainty (NLU) I


Page 64: Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers

“New” label uncertainty (NLU) II


Page 65: Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers

More sophisticated LU improves labeling quality under class imbalance and fixes some pesky LU learning-curve glitches.

Page 66: Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers

… but doesn’t systematically help learning, even on the same data


Page 67: Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers

What if different labelers have different qualities?

(Sometimes) the quality of multiple noisy labelers is better than the quality of the best labeler in the set.

[Figure: here, 3 labelers with qualities p-d, p, p+d.]

Page 68: Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers

Estimating Labeler Quality

(Dawid & Skene 1979): “multiple diagnoses”

– Initially assume equal qualities
– Estimate “true” labels for the examples
– Estimate the qualities of the labelers given the “true” labels
– Repeat until convergence (see the sketch below)
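A compact sketch of that iteration for the binary, symmetric-quality case (a simplification of Dawid & Skene's full confusion-matrix EM):

```python
from collections import Counter

def dawid_skene_binary(labels, n_iter=20):
    """labels: item -> list of (worker, 0/1 vote).
    Returns (item -> Pr{true label = 1}, worker -> estimated quality)."""
    workers = {w for votes in labels.values() for w, _ in votes}
    quality = {w: 0.7 for w in workers}  # start from equal qualities
    posterior = {}
    for _ in range(n_iter):
        # Estimate "true" labels given the current qualities.
        for item, votes in labels.items():
            like1 = like0 = 1.0
            for w, v in votes:
                q = quality[w]
                like1 *= q if v == 1 else 1 - q
                like0 *= q if v == 0 else 1 - q
            posterior[item] = like1 / (like1 + like0)
        # Estimate qualities given the soft "true" labels.
        agree, total = Counter(), Counter()
        for item, votes in labels.items():
            p1 = posterior[item]
            for w, v in votes:
                agree[w] += p1 if v == 1 else 1 - p1
                total[w] += 1
        quality = {w: agree[w] / total[w] for w in workers}
    return posterior, quality
```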


Page 69: Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers

Soft Labeling vs. Majority Voting

MV: majority voting; ME: soft labeling
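A sketch of how the soft labels can feed a learner that accepts instance weights, reading ME as “multiplied examples”: each example is expanded into one copy per class, weighted by the posterior implied by its label multiset (prob_positive is the helper from the earlier sketch):

```python
def multiplied_examples(X, label_multisets, q=0.6, prior=0.5):
    """Expand each example into up to two weighted copies, one per
    class, weighted by the posterior implied by its label multiset."""
    rows, ys, weights = [], [], []
    for x, (pos, neg) in zip(X, label_multisets):
        p1 = prob_positive(pos, neg, q, prior)
        for y, w in ((1, p1), (0, 1.0 - p1)):
            if w > 0:
                rows.append(x)
                ys.append(y)
                weights.append(w)
    return rows, ys, weights

# Many learners take these directly, e.g. with scikit-learn:
# model.fit(rows, ys, sample_weight=weights)
```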

[Figure: accuracy (55–70%) vs. number of examples (bmg, p = 0.6) for MV and ME.]

Page 70: Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers

Adult content classification


Page 71: Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers

Thanks!

Q & A?