
Page 1: A Statistical Perspective on Data Mining

A Statistical Perspective on Data Mining

John Maindonald (Centre for Mathematics and Its Applications, Australian National University)

July 2, 2009

Page 2: A Statistical Perspective on Data Mining

Content of the Talk

- Data Mining (DM); Machine Learning (ML) – Statistical Learning as an over-riding theme
- From regression to Data Mining
- The tradition and practice of Data Mining
- Validation/accuracy assessment
- The hazards of size: observations? variables?
- Insight from plots
- Comments on automation
- Summary

Page 3: A Statistical Perspective on Data Mining

Computing Technology is Transforming Science

- Huge data sets [1], new types of data
  - Web pages, medical images, expression arrays, genomic data, NMR spectroscopy, sky maps, ...
- Sheer size brings challenges
  - Data management; some new analysis challenges
  - Many observations, or many variables?
- New algorithms; "algorithmic" models
  - Trees, random forests, Support Vector Machines, ...
- Automation, especially for data collection
  - There are severe limits to automated analysis
- Synergy: computing power with new theory

Data Mining & Machine Learning fit in this mix. Both use Statistical Learning approaches.

[1] cf. the hype re Big Data in Weiss and Indurkhya (1997)

Page 4: A Statistical Perspective on Data Mining

DM & ML Jargon – Mining, Learning and Training

Mining is used in two senses:

- Mining for data
- Mining to extract meaning, in a scientific/statistical sense

Pre-analysis data extraction and processing may rely heavily on computer technology. Issues of data-collection design also require attention. In mining to extract meaning, statistical considerations are pervasive.

Learning & Training

- The (computing) machine learns from data.
- Use training data to train the machine or software.

Page 5: A Statistical Perspective on Data Mining

Example 1: Record Times for Track and Road Races

[Figure: log-log plot of record Time against Distance for road and track races, from 100 m (9.6 sec) to 290 km (24 h), with a fitted straight line; a lower panel plots residuals (log_e scale) against Distance.]

Is the line the whole story?

The ratio of largest to smallest time is ~3000.

A difference from the line of >15%, for the longest race, is not visually obvious!

The Plot of Residuals
Here, differences from the line have been scaled up by a factor of ~20.

Page 6: A Statistical Perspective on Data Mining

World record times – Learning from the data

Fit a smooth to the residuals. ('Let the computer rip'!) The lower panel shows residuals from the smooth.

[Figure: upper panel shows residuals from the linear fit, resid(wr.lm), against Distance (log scale), with the fitted smooth; lower panel shows residuals from the smooth, again against Distance.]

Is the curve 'real'?

The algorithm says "Yes". Questions remain:

- 'Real', in what sense?
- Will the pattern be the same in 2030?
- Is it consistent across geographical regions?
- Does it partly reflect greater attention paid to some distances?
- So why/when the smooth, rather than the line?

Martians may have a somewhat different curve!
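A minimal sketch of this kind of fit in R follows. It assumes a data frame records with columns Distance and Time; the column names, and the choice of lowess as the smoother, are illustrative rather than necessarily what was used for the slides.

    ## Sketch only: 'records', 'Distance' and 'Time' are illustrative names
    wr.lm <- lm(log10(Time) ~ log10(Distance), data = records)

    ## Residuals from the straight-line fit, plotted against log(Distance)
    logDist <- log10(records$Distance)
    res <- resid(wr.lm)
    plot(logDist, res, xlab = "log10(Distance)", ylab = "Resid (log scale)")

    ## 'Let the computer rip': fit a smooth to the residuals
    sm <- lowess(logDist, res)
    lines(sm)

    ## Residuals from the smooth (the lower panel of the slide)
    res2 <- res - approx(sm$x, sm$y, xout = logDist)$y
    plot(logDist, res2, xlab = "log10(Distance)", ylab = "Resid (2)")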

Page 7: A Statistical Perspective on Data Mining

Example 2: the Forensic Glass Dataset (1987 data)

Find a rule to predict the type of any new piece of glass:

[Figure: 2-D visual summary of the classification (Axis 2 against Axis 1), with the six glass types labelled WinF, WinNF, Veh, Con, Tabl and Head.]

The graph is a visual summary of a classification:
Window float (70), Window non-float (76), Vehicle window (17), Containers (13), Tableware (9), Headlamps (29).

Variables are %'s of Na, Mg, ..., plus refractive index. (214 rows × 10 columns.)

NB: These data are well past their “use by” date!
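For orientation, here is a minimal discriminant-analysis sketch for these data in R. It uses the fgl copy of the forensic glass data in the MASS package; the analysis behind the slide's plot may differ in detail.

    ## Sketch: linear discriminant analysis of the forensic glass data
    library(MASS)        # supplies fgl: 214 rows, RI + eight element %'s + type
    glass.lda <- lda(type ~ ., data = fgl)
    scores <- predict(glass.lda)$x            # discriminant scores
    plot(scores[, 1], scores[, 2], col = unclass(fgl$type),
         xlab = "Axis 1", ylab = "Axis 2")
    legend("topleft", legend = levels(fgl$type),
           col = seq_along(levels(fgl$type)), pch = 1)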

Page 8: A Statistical Perspective on Data Mining

Statistical Learning – Commentary

1. Continuous Outcome (eg, Times vs Distances)

- Allow the data to largely choose the form of the response.
- Often, use the methodology to refine a theoretical model (large datasets more readily highlight discrepancies).

2. Categorical Outcome (eg, forensic glass data)

- Theory is rarely much help in choosing the form of model.
- Theoretical accuracy measures are usually unavailable.

Both Types of Outcome

- Variable selection and/or model tuning wreaks havoc with most theoretically based accuracy estimates. Hence the use of empirical approaches.

Page 9: A Statistical Perspective on Data Mining

Selection – An Aid to Seeing What is not There!

[Figure: plot of Discriminant Axis 2 against Discriminant Axis 1, showing three apparently well-separated groups (Gp 1, Gp 2, Gp 3).]

Method
Random data: 32 points, split randomly into 3 groups.
(1) Select the 15 'best' features, from 500.
(2) Show a 2-D view of the classification that uses the 15 'best' features.

NB: This demonstrates the usefulness of simulation.
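A minimal simulation along these lines is sketched below. It is a sketch only: the slide does not state the selection criterion, so a one-way F-statistic screen is assumed here.

    ## Sketch: purely random data, yet the selected features 'separate' the groups
    library(MASS)
    set.seed(17)
    n <- 32; p <- 500
    X <- matrix(rnorm(n * p), nrow = n)              # noise only
    gp <- factor(sample(rep(1:3, length.out = n)))   # random group labels

    ## Step 1: pick the 15 features that best 'separate' the groups
    Fstat <- apply(X, 2, function(x) summary(aov(x ~ gp))[[1]][1, "F value"])
    best15 <- order(Fstat, decreasing = TRUE)[1:15]

    ## Step 2: a 2-D discriminant view based on those 15 features
    sel.lda <- lda(X[, best15], grouping = gp)
    scores <- predict(sel.lda, newdata = X[, best15])$x
    plot(scores, col = unclass(gp),
         xlab = "Discriminant Axis 1", ylab = "Discriminant Axis 2")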

Page 10: A Statistical Perspective on Data Mining

Selection Without Spurious Pattern – Train/Test

Training/Test

- Split data into training and test sets.
- Train (NB: steps 1 & 2); use the test data for assessment. (A code sketch follows below.)

Cross-validation
Simple version: Train on subset 1, test on subset 2.
Then: Train on subset 2, test on subset 1.

More generally, data are split into k parts (eg, k = 10). Use each part in turn for testing, with the other data used to train.
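A minimal sketch of the simple training/test split in R, using the fgl data purely for illustration (the k-fold version is sketched on the next slide):

    ## Sketch: a single random training/test split, illustrated with fgl (MASS)
    library(MASS)
    set.seed(29)
    inTrain <- sample(nrow(fgl), size = round(0.7 * nrow(fgl)))
    train <- fgl[inTrain, ]
    test  <- fgl[-inTrain, ]
    ## NB: any selection or tuning (steps 1 & 2) must also use only the training data
    fit  <- lda(type ~ ., data = train)
    pred <- predict(fit, newdata = test)$class
    mean(pred == test$type)        # accuracy assessed on the test set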

Warning – What the books rarely acknowledge

- All methods give accuracies based on sampling from the source population.
- For observational data, the target usually differs from the source.

Page 11: A Statistical Perspective on Data Mining

Cross-Validation

Steps

- Split data into k parts (below, k = 4).
- At the ith repeat or fold (i = 1, ..., k) use: the ith part for testing, the other k-1 parts for training.
- Combine the performance estimates from the k folds.

[Diagram: four folds; at fold i, the ith block of observations (of sizes n1, n2, n3, n4) is used for testing and the remaining three blocks for training.]
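A minimal hand-rolled sketch of k-fold cross-validation in R, again using lda and the fgl data purely for illustration (packages provide ready-made versions):

    ## Sketch: k-fold cross-validation, k = 10
    library(MASS)
    set.seed(41)
    k <- 10
    fold <- sample(rep(1:k, length.out = nrow(fgl)))   # assign each row to a fold
    acc <- numeric(k)
    for (i in 1:k) {
      test  <- fgl[fold == i, ]
      train <- fgl[fold != i, ]
      fit   <- lda(type ~ ., data = train)
      acc[i] <- mean(predict(fit, newdata = test)$class == test$type)
    }
    mean(acc)      # cross-validated accuracy estimate (folds are near-equal in size)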

Page 12: A Statistical Perspective on Data Mining

Selection Without Spurious Pattern – the Bootstrap

Bootstrap Sampling

Here are three bootstrap samples from the numbers 1 ... 10:

    1  3  6  6  6  6  6  8  9 10   (five 6's; omits 2, 4, 5, 7)
    2  2  3  4  6  8  8  9 10 10   (two 2's, two 8's, two 10's; omits 1, 5, 7)
    1  1  1  1  3  3  4  6  7  8   (four 1's, two 3's; omits 2, 5, 9, 10)

Bootstrap Sampling – Putting it to Use

- Take repeated (with replacement) random samples of the observations, of the same size as the initial sample.
- Repeat the analysis on each new sample (NB: in the selection example above, repeat both step 1 (selection) and step 2 (analysis)).
- Variability between samples indicates statistical variability.
- Combine the separate analysis results into an overall result.
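As a minimal sketch in R: here the "whole analysis" repeated on each bootstrap sample is just the straight-line fit to the illustrative records data frame assumed earlier; any selection or tuning step would also have to be repeated inside the loop.

    ## Sketch: bootstrap resampling of rows, repeating the whole analysis each time
    set.seed(53)
    B <- 200                               # number of bootstrap samples
    n <- nrow(records)                     # 'records' as assumed in the earlier sketch
    slope <- numeric(B)
    for (b in 1:B) {
      idx <- sample(n, replace = TRUE)     # rows sampled with replacement
      bfit <- lm(log10(Time) ~ log10(Distance), data = records[idx, ])
      slope[b] <- coef(bfit)[2]            # keep the statistic of interest
    }
    sd(slope)                              # bootstrap standard error of the slope
    quantile(slope, c(0.025, 0.975))       # a simple percentile interval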

Page 13: A Statistical Perspective on Data Mining

JM’s Preferred Methods for Classification Data

Linear Discriminant Analysis

- It is simple, in the style of classical regression.
- It leads naturally to a 2-D plot.
- The plot may suggest trying methods that build in weaker assumptions.

Random forests

- It is clear why it works and does not overfit.
- No other method consistently outperforms it.
- It is simple, and highly automatic to use.

For illustration, data with 3 outcome categories will be used.

Page 14: A Statistical Perspective on Data Mining

Example (3) – Clinical Classification of Diabetes

[Figure: 2nd discriminant axis plotted against 1st discriminant axis, with points marked by class (overt, chemical, normal).]

Predict class (overt, chemical, or normal) using relative weight, fasting plasma glucose, glucose area, insulin area, & SSPG (steady state plasma glucose).

Linear Discriminant Analysis
CV acc = 89%
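A minimal sketch of such an analysis in R. The data frame and column names below (diabetes, with relwt, fpg, glucArea, insulin, sspg and a class factor group) are illustrative stand-ins for the clinical data used in the talk, and the leave-one-out option of lda is just one way of obtaining a cross-validated accuracy.

    ## Sketch only: 'diabetes' and its column names are illustrative
    library(MASS)
    dia.lda <- lda(group ~ relwt + fpg + glucArea + insulin + sspg,
                   data = diabetes)
    scores <- predict(dia.lda)$x
    plot(scores[, 1], scores[, 2], col = unclass(diabetes$group),
         xlab = "1st discriminant axis", ylab = "2nd discriminant axis")

    ## Leave-one-out cross-validated accuracy
    dia.cv <- lda(group ~ relwt + fpg + glucArea + insulin + sspg,
                  data = diabetes, CV = TRUE)
    mean(dia.cv$class == diabetes$group)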

Page 15: A Statistical Perspective on Data Mining

Tree-based Classification

[Figure: classification tree. The first split is on glucArea >= 420.5, the second on fpg >= 117; the leaves are overt, chemical and normal.]

Tree-based Classification
CV acc = 97.2%
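A minimal sketch of fitting such a tree with the rpart package, using the same illustrative data frame and column names as in the previous sketch:

    ## Sketch: a classification tree for the (illustrative) diabetes data
    library(rpart)
    dia.rp <- rpart(group ~ relwt + fpg + glucArea + insulin + sspg,
                    data = diabetes, method = "class")
    plot(dia.rp); text(dia.rp)   # draw the tree, with split labels such as glucArea>=420.5
    printcp(dia.rp)              # cross-validated error at each candidate tree size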

Page 16: A Statistical Perspective on Data Mining

Random forests – A Forest of Trees!

Each tree is grown from a different random with-replacement ('bootstrap') sample of the data, and from a random sample of the variables.

Each tree has one vote; the majority wins

[Figure: twelve example trees from the forest. Each splits on variables such as fpg, glucArea, relwt, Insulin and SSPG, at slightly different cut-points (e.g. fpg >= 118.5, glucArea >= 420.5), with leaves overt, chemical and normal.]
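A minimal randomForest sketch, with the same illustrative names as before. Each tree is grown from a bootstrap sample, so the 'out-of-bag' observations give an honest accuracy estimate without a separate test set; proximity = TRUE stores the information used for the plot on the next slide.

    ## Sketch: a random forest for the (illustrative) diabetes data
    library(randomForest)
    set.seed(61)
    dia.rf <- randomForest(group ~ relwt + fpg + glucArea + insulin + sspg,
                           data = diabetes, ntree = 500, proximity = TRUE)
    print(dia.rf)       # shows the out-of-bag (OOB) error estimate
    dia.rf$confusion    # OOB confusion matrix, by class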

Page 17: A Statistical Perspective on Data Mining

Plot derived from random forest analysis

[Figure: 2nd ordinate axis plotted against 1st ordinate axis; the points for overt, chemical and normal form three tight clusters.]

This plot tries hard to reflect probabilities of group membership assigned by the analysis.

It does not result from a 'scaling' of the feature space.
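One route to a plot of this general kind (a sketch; the slide does not say exactly how its plot was produced) is a classical scaling of 1 - proximity from the forest fitted above:

    ## Sketch: ordination based on random-forest proximities, not on the feature space
    library(randomForest)
    MDSplot(dia.rf, fac = diabetes$group, k = 2)   # wraps cmdscale(1 - proximity)

    ## Much the same thing, done by hand:
    ord <- cmdscale(1 - dia.rf$proximity, k = 2)
    plot(ord, col = unclass(diabetes$group),
         xlab = "1st ordinate axis", ylab = "2nd ordinate axis")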

Page 18: A Statistical Perspective on Data Mining

When are crude data mining approaches enough?

Often, rough and ready approaches, ignoring distributional and other niceties, work well!

In other cases, rough and ready approaches can be highly misleading!

Some Key Issues

- Watch for source/target (eg, 2008/2009) differences.
- Allow for effects of model and variable selection.
- Do not try to interpret individual model coefficients, unless you know how to avoid the traps.

Skill is needed to differentiate between problems where relatively cavalier approaches may be acceptable, and problems that demand greater care.

Page 19: A Statistical Perspective on Data Mining

Different Relationships Between Source & Target

    Source versus target            Are data available from target?
    1: Identical (or nearly so)     Yes; data from source suffice
    2: Source & Target differ       Yes
    3: Source & Target differ       No; but change can be modeled
                                    (cf: multi-level models; time series)
    4: Source & Target differ       No; must make an informed guess

Where source & target differ, it may be possible to get insight from matched historical source/target data.

There are major issues here, which might occupy several further lectures!

Page 20: A Statistical Perspective on Data Mining

The Hazards of Size: Observations? Variables?

Many Observations

- Additional structure often comes with increased size – data may be less homogeneous, span larger regions of time or space, be from more countries.
- Or there may be extensive information about not much!
  - e.g., temperatures, at second intervals, for a day.
  - SEs from modeling that ignores this structure may be misleadingly small.
- In large homogeneous datasets, spurious effects are a risk.
  - Small SEs increase the risk of detecting spurious effects that arise, e.g., from sampling bias (likely in observational data) and/or from measurement error.

Many variables (features)

Huge numbers of variables spread information thinly!

This is a challenge to the analyst.

Page 21: A Statistical Perspective on Data Mining

The Hazards of High Dimensional Data

- Select, or summarize?
  - For selection, beware of selection effects – with enough lottery tickets, some kind of prize is pretty much inevitable.
  - A pattern based on the 'best' 15 features, out of 500, may well be meaningless!
  - Summary measures may involve selection.
- Beware of over-fitting
  - Over-fitting reduces real accuracy.
  - Preferably, use an algorithm that does not overfit with respect to the source population.
  - Unless optimized with respect to the target, some over-fitting may be inevitable!
  - Any algorithm can be misused to overfit! (even those that do not overfit!)

Page 22: A Statistical Perspective on Data Mining

Why plot the data?

- Which are the difficult points?
- Some points may be mislabeled (faulty medical diagnosis?)
- Improvement of classification accuracy is a useful goal only if misclassified points are in principle classifiable.

What if points are not well represented in 2-D or 3-D?

One alternative is to identify points that are outliers on a posterior odds (of group membership) scale.
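One simple way to do this (a sketch; the talk does not spell out the calculation) is to compare, for each point, the posterior probability of its labelled class with that of the most likely alternative, using the posteriors from a fit such as the illustrative dia.lda above:

    ## Sketch: flag points whose labelled class has low posterior odds
    post <- predict(dia.lda)$posterior        # posterior probabilities, one column per class
    n <- nrow(post)
    idx <- cbind(seq_len(n), as.integer(diabetes$group))
    own <- post[idx]                          # posterior for each point's labelled class
    other <- post
    other[idx] <- NA
    best.other <- apply(other, 1, max, na.rm = TRUE)
    log.odds <- log(own / best.other)         # log posterior odds for the labelled class
    which(log.odds < log(1/9))                # e.g. flag points with odds worse than 1:9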

Page 23: A Statistical Perspective on Data Mining

New Technology has Many Attractions

Are you by now convinced that

- Data analysis is dapper
- Data mining is delightful
- Analytics is amazing
- Regression is rewarding
- Trees are tremendous
- Forests are fascinating
- Statistics is stupendous?

Some or all of these?

Even with the best modern software, however, it is hard work to do data analysis well.

Page 24: A Statistical Perspective on Data Mining

The Science of Data Mining

- Getting the science right is more important than finding the true and only best algorithm! (There is none!)
- Get to grips with the statistical issues
  - Know the target (1987 glass is not 2008 glass)
  - Understand the traps (too many to talk about here [2])
  - Grasp the basic ideas of time series, multi-level models, and comparisons based on profiles over time.
- Use the technology critically and with care!
- Do it, where possible, with graphs.

[2] Maindonald, J.H. (2006) notes a number of common traps, with extensive references. Berk (2008) has excellent advice.

Page 25: A Statistical Perspective on Data Mining

References

Berk, R. 2008. Statistical Learning from a Regression Perspective.
[Berk's insightful commentary injects needed reality checks into the discussion of data mining and statistical learning.]

Maindonald, J.H. 2006. Data Mining Methodological Weaknesses and Suggested Fixes. Proceedings of the Australasian Data Mining Conference (AusDM06). [3]

Maindonald, J. H. and Braun, W. J. 2007. Data Analysis and Graphics Using R – An Example-Based Approach. 2nd edn, Cambridge University Press. [4]
[Statistics, with hints of data mining!]

Rosenbaum, P. R. 2002. Observational Studies. 2nd edn, Springer.
[Observational data from an experimental perspective.]

Wood, S. N. 2006. Generalized Additive Models. An Introduction with R. Chapman & Hall/CRC.
[This has an elegant treatment of linear models and generalized linear models, as a lead-in to methods for fitting curves and surfaces.]

[3] http://www.maths.anu.edu.au/~johnm/dm/ausdm06/ausdm06-jm.pdf and http://www.maths.anu.edu.au/~johnm/dm/ausdm06/ohp-ausdm06.pdf
[4] http://www.maths.anu.edu.au/~johnm/r-book.html

Page 26: A Statistical Perspective on Data Mining

Web Sites

http://www.sigkdd.org/

[Association for Computing Machinery Special Interest Groupon Knowledge Discovery and Data Mining.]

http://www.amstat.org/profession/index.cfm?fuseaction=dataminingfaq

[Comments on many aspects of data mining.]

http://www.cs.ucr.edu/~eamonn/TSDMA/

[UCR Time Series Data Mining Archive]

http://kdd.ics.uci.edu/ [UCI KDD Archive]

http://en.wikipedia.org/wiki/Data_mining

[This (Dec 12 2008) has useful links. Lacking in sharp critical commentary. It emphasizes commercial data mining tools.]

The R package mlbench has "a collection of artificial and real-world machine learning benchmark problems, including, e.g., several data sets from the UCI repository."