

Readability Metric for Colored Text
Jordan Kodner, Ayaka Nonaka, Joshua Wilson
Advisors: David Brainard & Lyle Ungar

METHOD
• Data collection via Amazon Mechanical Turk: 5500+ participants, over 300,000 pairwise comparisons of images
• Data analysis: production of a machine learning model that accurately quantifies colored text readability

Senior Project Poster Day 2013 – Department of Computer and Information Science – University of Pennsylvania

How readable is this? 42

GOAL
To develop a model that, given a piece of text in some color on some background color, assigns a readability score.

MOTIVATION

Currently, there exists no good measure of web text readability. Our algorithm, based on real human data, supports the following applications:

• design rating
• design improvement
• design automation

Which is more readable?

Below is our current best model for computing readability on a 100 point scale.

Try it out at www.upenncolortheory.com

In all Mechanical Turk experiments, we included three black-and-white images. Two were used to normalize the readability scores. The third moved freely; variability in its position was used to estimate measurement error.
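The anchoring step described above can be sketched as a linear rescaling. The poster does not give the actual normalization, so the following is a hypothetical reconstruction: each participant's raw scores are mapped so that the two fixed black-and-white anchor images land on common reference values.

```python
def normalize(raw, anchor_low, anchor_high, lo=0.0, hi=100.0):
    """Linearly rescale a raw score so anchor_low maps to lo and
    anchor_high maps to hi. Hypothetical reconstruction of the
    poster's anchoring step, not the authors' actual code."""
    if anchor_high == anchor_low:
        raise ValueError("anchors must differ")
    return lo + (raw - anchor_low) * (hi - lo) / (anchor_high - anchor_low)

# A raw score halfway between the two anchors maps to the middle
# of the 100-point scale.
print(normalize(5.0, 0.0, 10.0))  # → 50.0
```

Because every participant sees the same two anchors, this puts scores from different sessions on a comparable scale before model fitting.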

The graph to the left shows the difference between the expected values from our measurements and our model's predictions on the training data.

The graph to the left shows the difference between the expected values from our measurements and our model's predictions on the test data.

EXAMPLES

MODEL

VALIDITY

Text: #3A70A7
Background: #FFFFFF
Score: 82/100
"Readable"

Text: #EDAEB5
Background: #FFBC58
Score: 7/100
"Not Readable"

[Figures: color swatches for the two examples; graphs titled Measurement Error, Training Error from Weka, and Test Error from Weka]

score = 1.0e-1 • |∆R| + 5.7e-1 • |∆G| + 8.6e-2 • |∆B| − 1.1e-2 • ∆B − 1.1e-3 • ∆G² − 1.5e-4 • ∆B² − 3.6