Functional Specialization


Page 1

Functional Specialization

Page 2

Functional specialization: shape perception

LO

Malach et al.

Page 3

Functional properties of LO1 & LO2

[Figure: fMRI responses (percent modulation, roughly -0.6 to 0.8) in V1, V2, V3, V3A/B, hV4, VO1, V5, V7, LO1, and LO2 for three contrasts: motion vs. static, kinetic boundaries vs. transparent motion, and objects & faces vs. scrambled.]

Page 4

Functional specialization: motion perception

MT+ (V5)

Page 5

Subdividing MT+ (retinotopy)

small RF (MT)

large RF (MST)

Page 6

small RF (MT)

large RF (MST)

Subdividing MT+ (ipsilateral stimulation)

Page 7

Subdividing MT+

Huk, Dougherty, & Heeger, J Neurosci, 22:7195-7205, 2002

Page 8

Human MT and MST

[Figure: locations of V1, V2, V3, V3A, MT, and MST.]

Page 9

Functional specialization

Match each cortical area to its corresponding function:

V1, V2, V3, V3A, V3B, V4, V5, V7, V9, IPS1, IPS2, etc.

Motion, stereo, color, texture, segmentation/grouping, recognition, attention, working memory, mental imagery, decision-making, sensorimotor integration, etc.

Page 10

Selectivity

Page 11

Primary visual cortex

Columnar architecture

[Figure: columnar architecture of V1; scale on the order of 1 cm across the cortical surface, 1.5-2 mm cortical thickness. Hubel & Wiesel; Horton & Hedley-Whyte, Philos Trans R Soc Lond B (1984).]

Page 12

fMRI spatial and temporal resolution

Time series of fMRI images

Typical pixel size: 3 mm x 3 mm x 3 mm

Typical frame time: 2 sec

Page 13

Conventional spatial resolution

Voxel size: 3 x 3 x 3 mm3
FOV: 192 x 192 x 72 mm3
24 slices in 1.5 sec

[Figure: stimulus (annulus vs. center & periphery; 1.5°, 2.5°) and the corresponding response maps: response to annulus, response to center, response to periphery.]

Page 14

High spatial resolution fMRI (3D zoomed EPI with OVS)

Zoomed field of view

Voxel size: 0.7 x 0.7 x 0.7 mm3

FOV: 157 x 39 x 21 mm3

30 slices in 3 sec
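A quick sanity check on these acquisition parameters (this slide and the previous one) is to divide the field of view by the voxel size to get the matrix size and voxel count per volume. The sketch below does that arithmetic; the numbers are copied from the two slides and the code is purely illustrative.

```python
# Sketch: matrix size and voxel count implied by FOV and voxel size
# (parameters copied from the two slides; the arithmetic is illustrative).

def matrix_size(fov_mm, voxel_mm):
    """Number of voxels along each axis = FOV / voxel size."""
    return tuple(round(f / v) for f, v in zip(fov_mm, voxel_mm))

for label, fov, vox, volume_time in [
    ("conventional", (192, 192, 72), (3.0, 3.0, 3.0), 1.5),
    ("zoomed 3D EPI", (157, 39, 21), (0.7, 0.7, 0.7), 3.0),
]:
    nx, ny, nz = matrix_size(fov, vox)
    n_vox = nx * ny * nz
    # nz matches the stated slice counts (24 and 30 slices).
    print(f"{label}: {nx} x {ny} x {nz} matrix, "
          f"{n_vox:,} voxels every {volume_time} s")
```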

Page 15

Reliability

[Figure: high-resolution response maps for the annulus and for the center & periphery stimuli (1.5°, 2.5°).]

Page 16

Neurovascular coupling limits spatial specificity

[Figure: cortical layers I-VI and the vascular architecture; Reina de la Torre et al., Anatomical Record (1998).]

3.5 mm full width at half maximum (Engel et al., 1997)

Spatial specificity (spread and mislocalization) depends on vein size.
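If the point spread is approximated as a Gaussian, purely as a modeling assumption, the quoted 3.5 mm full width at half maximum corresponds to a standard deviation of FWHM / (2·sqrt(2·ln 2)) ≈ 1.5 mm. The short sketch below does the conversion.

```python
import math

# Sketch: convert a Gaussian FWHM to a standard deviation.
# Assumes (for illustration only) that the BOLD point-spread function is
# Gaussian; the 3.5 mm FWHM value is taken from the slide (Engel et al., 1997).

def fwhm_to_sigma(fwhm):
    """FWHM = 2 * sqrt(2 * ln(2)) * sigma for a Gaussian."""
    return fwhm / (2.0 * math.sqrt(2.0 * math.log(2.0)))

fwhm_mm = 3.5
sigma = fwhm_to_sigma(fwhm_mm)
print(f"FWHM {fwhm_mm} mm -> sigma {sigma:.2f} mm")
# A 3 mm voxel therefore spans roughly two sigmas of this spread.
```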

Page 17

Sluggishness of hemodynamics limits temporal specificity

[Figure: fMRI response (% change in image intensity, roughly -0.2 to 0.5) over 0-16 s following a brief pulse of neural activity.]
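To first approximation, the measured signal is the neural time course convolved with a hemodynamic impulse response that lasts on the order of 10-15 s, which is why a brief pulse of activity produces the slow response shown above. The sketch below illustrates this with a gamma-variate impulse response; the functional form and its parameters are assumptions for illustration, not the ones behind the slide's data.

```python
import numpy as np

# Sketch: hemodynamic blurring as convolution with an assumed gamma-variate
# impulse response (shape parameters here are illustrative, not fit to data).

dt = 0.1                           # time step (s)
t = np.arange(0, 20, dt)           # time axis for the impulse response

# Gamma-variate HRF: t^(a-1) * exp(-t/b), peaking at (a-1)*b seconds.
a, b = 6.0, 1.0                    # assumed shape/scale -> peak near 5 s
hrf = (t ** (a - 1)) * np.exp(-t / b)
hrf /= hrf.sum()                   # normalize to unit area

# Brief (1 s) pulse of neural activity starting at t = 2 s.
neural = np.zeros_like(t)
neural[(t >= 2) & (t < 3)] = 1.0

bold = np.convolve(neural, hrf)[: len(t)]   # predicted fMRI response

peak_time = t[np.argmax(bold)]
print(f"neural pulse: 2-3 s; predicted BOLD peak near {peak_time:.1f} s")
```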

Page 18

Sluggishness of hemodynamics limits temporal specificity

[Same figure as Page 17.]

Page 19

Columns with fMRI: proof of principle

Cheng, Waggoner, & Tanaka, Neuron (2001)

Voxel size: 0.5 x 0.5 x 3 mm3

FOV: 240 x 240 x 9 mm3

3 slices in 9.6 sec

Same session

Different sessions

Page 20

Adaptation

Page 21

Orientation-selective adaptation protocol

[Figure: (A) Trial types - parallel, orthogonal, and blank trials, defined by the adapter orientation relative to the probe orientation (blank trials: blank screen). (B) Trial structure - adapter 4 s, probe 1 s, ISI 1 s, response 1.2 s; subjects report "how many X's?" in an attended foveal RSVP task (<1 deg) while the adapting stimulus is unattended and perifoveal (1.5-5 deg). (C) Targets detected vs. targets presented for subjects S1, S2, and S3.]

Page 22

Orientation-selectivity in human V1

[Figure: fMRI response (% modulation) vs. time (s) over a trial (adapter then probe) for parallel and orthogonal trials; the difference between the orthogonal-trial and parallel-trial probe responses is labeled the adaptation magnitude.]

Larsson, Landy, & Heeger, J Neurophysiol (2006)

Page 23

Orientation-selective adaptation

[Figure: adaptation index, computed from fMRI response amplitudes on parallel vs. orthogonal trials, plotted by visual area.]

Larsson, Landy, & Heeger, J Neurophysiol (2006)
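The slides do not spell out how the adaptation index is computed, so the sketch below uses one common convention as an explicit assumption: the probe response on parallel trials compared with the probe response on orthogonal trials, normalized by the orthogonal response. The actual definition in Larsson, Landy & Heeger (2006) may differ.

```python
import numpy as np

# Sketch of an orientation-selective adaptation index.
# ASSUMPTION: index = (orthogonal - parallel) / orthogonal, computed from
# probe-evoked response amplitudes; this is a common convention, not
# necessarily the exact definition used in Larsson, Landy & Heeger (2006).

def adaptation_index(resp_parallel, resp_orthogonal):
    """Larger values = stronger orientation-selective adaptation."""
    return (resp_orthogonal - resp_parallel) / resp_orthogonal

# Hypothetical probe response amplitudes (% signal change) per visual area.
areas = ["V1", "V2", "V3", "hV4"]
parallel = np.array([0.30, 0.28, 0.27, 0.25])     # probe matches adapter
orthogonal = np.array([0.45, 0.43, 0.40, 0.38])   # probe orthogonal to adapter

for area, idx in zip(areas, adaptation_index(parallel, orthogonal)):
    print(f"{area}: adaptation index = {idx:.2f}")
```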

Page 24

Adaptation index

[Figure: adaptation index (0-0.8) by visual area: V1, V2, V3, hV4, VO1, LO1, V3A/B.]

• Adaptation indices constant across visual areas
• No significant differences between V1 and extrastriate visual areas
• Adaptation in V1 can account for adaptation in extrastriate visual areas

Larsson, Landy, & Heeger, J Neurophysiol (2006)

Page 25

Direction-selective adaptation

[Figure: direction-selective adaptation index (0-0.5) by visual area: MT+, V1, V2, V3, V4v, V3A.]

Huk, Ress, & Heeger (2001) Neuron, 32:161-

Page 26

Pattern motion

[Figure: two drifting gratings superimposed (grating + grating = plaid) yield pattern motion.]

Page 27

Component vs. pattern motion selectivity

[Figure: a component-motion cell gives a strong response when a grating component moves up-right; a pattern-motion cell gives a strong response when the pattern (plaid) moves up-right.]
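A pattern-motion cell effectively solves the intersection-of-constraints (IOC) problem: each component grating constrains only the velocity component along its own normal, and the plaid's pattern velocity is the one velocity consistent with both constraints. The sketch below solves that 2 x 2 system; the example gratings are illustrative, not the exact stimuli used in these experiments.

```python
import numpy as np

# Sketch: intersection-of-constraints (IOC) computation of plaid pattern motion.
# Each grating constrains only the velocity component along its normal:
#     v . n_i = s_i   (n_i = unit normal of grating i, s_i = its drift speed).
# Two non-parallel gratings give a 2x2 linear system with a unique solution.

def pattern_velocity(normal_dirs_deg, speeds):
    """normal_dirs_deg: drift directions (deg) of the two components' normals;
    speeds: their drift speeds. Returns (pattern direction in deg, pattern speed)."""
    thetas = np.radians(normal_dirs_deg)
    N = np.column_stack([np.cos(thetas), np.sin(thetas)])  # rows = unit normals
    v = np.linalg.solve(N, np.asarray(speeds, float))      # solve N v = s
    return np.degrees(np.arctan2(v[1], v[0])), np.hypot(*v)

# Illustrative example: two orthogonal gratings drifting at 1 deg/s each,
# one rightward (0 deg) and one upward (90 deg).
direction, speed = pattern_velocity([0.0, 90.0], [1.0, 1.0])
print(f"pattern direction = {direction:.1f} deg, speed = {speed:.2f} deg/s")
# -> 45.0 deg at sqrt(2) ~ 1.41 deg/s: the diagonal motion a pattern cell signals.
```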

Page 28

Pattern motion adaptation protocol

[Figure: alternating blocks of adapted-direction plaids and mixed-direction plaids, each composed of component gratings.]

Huk & Heeger, Nature Neurosci, 5:72-75, 2002

Page 29

Huk & Heeger, Nature Neurosci, 5:72-75, 2002

Pattern motion adaptation


…tiples of 90° from trial to trial to minimize adaptation. The response modulations in MT+ were not significantly different from zero (A.C.H., p = 0.35; D.J.H., p = 0.74, two-tailed t-test), demonstrating that the two blocks of plaids elicited similar response levels when the effects of both component- and pattern-motion adaptation were absent. This result provides further evidence that adaptation, due to the repetition of pattern direction, was the key factor in the original experiment.

DISCUSSION

Our findings demonstrate that human MT+ contains a population of pattern-motion cells and that the activity of those neurons is linked to the perception of coherent pattern motion. The pattern-motion responsivity of human MT+ adds to the case for a homology to macaque MT, which includes a relatively large proportion of pattern-motion cells1. We also observed lesser degrees of pattern-motion adaptation in V2, V3, V3A and V4v. Macaque V3 is known to have a minority of pattern-motion cells13, but there are no published investigations of pattern-motion cells in macaque V2, V3A or V4. Although our data demonstrate pattern-motion responses in each of these visual areas, we cannot determine if pattern motion is computed separately in each visual area or if the responses in V2-V4 are affected by the adaptation that is taking place in MT+. We emphasize that fMRI adaptation studies14-17 can reveal the selectivities of subpopulations of neurons in the human brain, even when those neurons are intermingled at a spatial scale that is finer than the spatial sampling resolution (voxel size) of the fMRI measurements.

METHODS

We collected fMRI data in 3 subjects, males, 25-39 years old, all with normal or corrected-to-normal vision. Experiments were undertaken with the written consent of each subject, and in compliance with the safety guidelines for MR research. Each subject participated in several scanning sessions: one to obtain a high-resolution anatomical volume, one to identify MT+, one to identify the retinotopically organized cortical visual areas, 2-3 to measure motion adaptation, one to measure baseline responses and 1-3 to perform control measurements. In each subject, we collected 8-20 repeats of the pattern-motion adaptation experiment and 8-16 repeats of the various control experiments.

Stimulus and protocol. Stimuli were presented on a flat-panel display (NEC, multisynch LCD 2000, Itasca, Illinois) placed within a Faraday box with a conducting glass front, positioned near the subjects' feet. Subjects lay on their backs in the MR scanner and viewed the display through binoculars.

Subjects viewed a pair of circular patches 12° in diameter centered 7.5° to the left and right of a central fixation point. Patches were filled with a plaid stimulus comprised of two superimposed sinusoidal gratings. Individual component gratings had 20% contrast, and spatial and temporal frequencies were selected to yield a variety of pattern directions when superimposed in various combinations (Fig. 1).

Each scan consisted of 6 (32-s) cycles; each cycle consisted of alternating adapted-direction and mixed-direction blocks. Adapted-direction blocks consisted of 8 consecutive trials in which the plaid stimulus always appeared to move in the same direction (horizontally, at 12.9 or 1.9°/s; Fig. 1a); mixed-direction blocks consisted of 8 trials in which the direction of the plaids varied from trial to trial (possible plaid directions, computed from the intersection-of-constraints of the component gratings, were ±31°, ±123° from horizontal at 5.3°/s and 6.3°/s; Fig. 1b). The component gratings with orientations of ±72° had spatial frequencies of 0.5 cycles/degree and temporal frequencies of 2 cycles/second (Fig. 1a, components above first plaid); the component gratings with orientations ±45° had spatial frequencies of 0.5 cycles/degree and temporal frequencies of 0.67 cycles/second (Fig. 1a, components above second plaid). In the component-motion experiment, perceptual transparency was achieved by scaling one component's spatial frequency up to 1 cycle/degree and the other down to 0.125 cycle/degree, producing a 3-octave separation. (Temporal frequencies were also scaled accordingly to leave component velocities unchanged.)

To control attention, subjects performed a speed discrimination judgment on each stimulus presentation16. Each 2-s trial consisted of 1300 ms of plaid motion followed by a 700-ms luminance-matched blank period during which subjects pressed a button to indicate which plaid (left or right of fixation) moved faster. The speed differences were determined by an adaptive staircase procedure, adjusting the speeds from trial to trial so that subjects would be approximately 80% correct.
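The staircase rule itself is not specified here, so the sketch below uses a 3-down/1-up rule (which converges near 79% correct) on a simulated observer, purely to illustrate how the speed increment can be driven toward a fixed performance level.

```python
import numpy as np

# Sketch: a 3-down / 1-up adaptive staircase for a speed-increment threshold.
# ASSUMPTION: the actual rule and step sizes used by Huk & Heeger (2002) are
# not given here; 3-down/1-up is chosen because it converges near 79% correct.

rng = np.random.default_rng(0)

def p_correct(increment, threshold=0.10, slope=8.0):
    """Simulated observer: Weibull-like psychometric function (hypothetical)."""
    return 1.0 - 0.5 * np.exp(-(increment / threshold) ** slope)

increment = 0.30          # starting speed increment (proportion of base speed)
step = 1.2                # multiplicative step size
correct_run = 0

track = []
for trial in range(200):
    correct = rng.random() < p_correct(increment)
    if correct:
        correct_run += 1
        if correct_run == 3:          # 3 correct in a row -> make it harder
            increment /= step
            correct_run = 0
    else:                             # any error -> make it easier
        increment *= step
        correct_run = 0
    track.append(increment)

print(f"staircase settles near an increment of ~{np.median(track[-50:]):.2f}")
```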

Across different blocks and experiments, we chose to equate percent-correct performance, instead of the exact stimulus speed (although speeds did remain within a few percent), because in previous work, we and others have noted large attentional effects on MT+ responses, but no effects of slight differences in speed18-20. Although the speed discrimination thresholds were larger for non-coherent (transparent gratings) than for coherent plaids (but not for mixed- versus adapted-direction blocks), the differences were not very large (percent speed-increment thresholds were ~15% versus ~10% for non-coherent versus coherent, respectively). These small speed differences might affect the responses of some individual neurons (although speed tuning curves of all direction-selective cells are rather broad), but these speed differences would not be expected to evoke measurable changes in the pooled activity (as measured with fMRI) of large populations of neurons.

In the adaptation experiments, equal numbers of scans were collected with the plaids moving in opposite directions (for example, inward…

Fig. 2. Pattern-motion adaptation in human visual cortex. (a) Average time series in MT+ (fMRI response, percent BOLD signal, over the 32-s cycle of adapted-direction and mixed-direction blocks). Pattern-motion adaptation produced strong modulations in MT+ activity. Transparent component motion evoked much less adaptation. Each trace represents the average MT+ response, averaged across subjects and scanning sessions. (b) Adaptation index (adaptation response/baseline response) across all visual areas (V1, V2, V3, V4v, V3A, MT+). Pattern-motion adaptation was largest in MT+, but also evident in other extrastriate visual areas. Adaptation was weak and roughly equal across visual areas in the transparent component-motion experiment. Height of bars, geometric mean across subjects (arithmetic mean yielded similar results). Error bars, bootstrap estimates of the 68% confidence intervals.
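The error bars above are bootstrap estimates of 68% confidence intervals. As a generic illustration, not the authors' code, one can resample subjects with replacement and recompute the geometric mean many times:

```python
import numpy as np

# Sketch: bootstrap 68% confidence interval for a geometric mean across subjects.
# The adaptation-index values below are made up for illustration.

rng = np.random.default_rng(1)
indices = np.array([0.12, 0.18, 0.09])        # hypothetical per-subject indices

def geometric_mean(x):
    return np.exp(np.mean(np.log(x)))

boot = np.array([
    geometric_mean(rng.choice(indices, size=indices.size, replace=True))
    for _ in range(10_000)
])

lo, hi = np.percentile(boot, [16, 84])        # central 68% of the bootstrap
print(f"geometric mean = {geometric_mean(indices):.3f}, "
      f"68% CI = [{lo:.3f}, {hi:.3f}]")
```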


[Figure: direction-selective adaptation index by visual area for coherent (pattern) motion vs. transparent (component) motion.]

Page 30

Coherent vs. transparent percepts

[Figure: two superimposed gratings can yield either a strong pattern-motion percept (coherent plaid) or a weak pattern-motion percept (transparency), depending on the components.]

Huk & Heeger, Nature Neurosci, 5:72-75, 2002

Page 31

Adaptation vs. motion direction

[Figure: fMRI response (%) vs. time (sec) for test directions differing from the adaptor by 30°, 60°, 90°, 120°, 150°, or 180° (|Test - Adaptor|), and with no adaptor.]

Lee, Lee, & Lee, SFN, 2005

Page 32

Response gain reduction

[Figure: neural direction-tuning curves (response vs. motion direction, -180° to 180°) before and after adaptation, and the corresponding fMRI response vs. test direction, under a pure reduction of response gain.]

Lee, Lee, & Lee, SFN, 2005

Page 33

Bandwidth reduction

[Figure: neural tuning curves before and after adaptation, and the corresponding fMRI response vs. test direction, under a reduction of response gain and tuning bandwidth.]

Lee, Lee, & Lee, SFN, 2005

Page 34

Attractive shift

[Figure: neural tuning curves before and after adaptation, and the corresponding fMRI response vs. test direction, under changes in response gain, bandwidth, and direction tuning producing an attractive shift toward the adapted direction.]

Lee, Lee, & Lee, SFN, 2005

Page 35

Repulsive shift

[Figure: neural tuning curves before and after adaptation, and the corresponding fMRI response vs. test direction, under changes in response gain, bandwidth, and direction tuning producing a repulsive shift away from the adapted direction.]

Lee, Lee, & Lee, SFN, 2005
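All of the scenarios on the last few slides share the same logic: adaptation changes the tuning curves of direction-selective neurons, and the fMRI prediction is the summed population response to each test direction. The sketch below simulates that logic with von Mises tuning curves; the tuning shape and every parameter value are assumptions chosen only to illustrate the qualitative effects.

```python
import numpy as np

# Sketch: how different adaptation effects on direction-tuned neurons change the
# summed (fMRI-like) response to each test direction. The von Mises tuning shape
# and all parameter values are assumptions chosen for illustration.

def tuning(test_dirs, preferred, gain=1.0, kappa=2.0, shift=0.0):
    """Von Mises direction tuning: response of each neuron to each test direction."""
    delta = np.radians(test_dirs[:, None] - (preferred[None, :] + shift))
    return gain * np.exp(kappa * (np.cos(delta) - 1.0))

preferred = np.arange(-180, 180, 10.0)      # population of preferred directions
tests = np.arange(-180, 181, 15.0)          # test (probe) directions
adapted = 0.0                                # adapted direction (deg)

# Adaptation strength falls off with distance from the adapted direction.
prox = np.exp(2.0 * (np.cos(np.radians(preferred - adapted)) - 1.0))

scenarios = {
    "baseline":            tuning(tests, preferred),
    "gain reduction":      tuning(tests, preferred, gain=1.0 - 0.5 * prox),
    "bandwidth reduction": tuning(tests, preferred, kappa=2.0 + 2.0 * prox),
    "attractive shift":    tuning(tests, preferred,
                                  shift=-15.0 * np.sign(preferred - adapted) * prox),
    "repulsive shift":     tuning(tests, preferred,
                                  shift=15.0 * np.sign(preferred - adapted) * prox),
}

for name, resp in scenarios.items():
    population = resp.sum(axis=1)            # summed response ~ fMRI signal
    print(f"{name:20s} response at adapted direction = "
          f"{population[tests == 0][0]:.1f}")
```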

Page 36

Estimated tuning bandwidth

[Figure: estimated direction-tuning bandwidth (deg) by visual area - V1, V2, V3, MT, MST - with values 35.0°, 52.1°, 57.4°, 63.7°, and 117.0° (listed in that order). Annotations: response gain reduction; response gain reduction and attractive shift.]

Lee, Lee, & Lee, SFN, 2005

Page 37

Classification

Page 38

Kamitani & Tong, Nat Neurosci (2005)

Classifying stimulus orientation with conventional resolution fMRI

Page 39

Classifying orientation

[Figure: gratings with clockwise and counter-clockwise tilt (stimulus), and classifier weight maps for clockwise, counter-clockwise, and both. Justin Gardner.]
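The idea behind this kind of decoding is that each voxel carries a small, reliable orientation bias, and a linear classifier pools those weak biases across many voxels. The sketch below trains a simple ridge-regularized linear readout on simulated voxel patterns; it is a toy stand-in for the classifiers actually used in the cited studies.

```python
import numpy as np

# Sketch: linear decoding of stimulus orientation (CW vs. CCW tilt) from voxel
# patterns. The data are simulated: each voxel gets a small random orientation
# bias plus noise. This is a toy stand-in, not the classifier from the papers.

rng = np.random.default_rng(0)
n_voxels, n_trials = 200, 80

bias = 0.2 * rng.standard_normal(n_voxels)          # per-voxel orientation preference
labels = rng.choice([-1.0, 1.0], size=n_trials)     # -1 = CCW, +1 = CW
patterns = labels[:, None] * bias[None, :] + rng.standard_normal((n_trials, n_voxels))

train, test = slice(0, 60), slice(60, 80)            # simple hold-out split

# Ridge-regularized least-squares linear readout.
lam = 10.0
X, y = patterns[train], labels[train]
w = np.linalg.solve(X.T @ X + lam * np.eye(n_voxels), X.T @ y)

pred = np.sign(patterns[test] @ w)
accuracy = np.mean(pred == labels[test])
print(f"decoding accuracy on held-out trials: {accuracy:.0%}")
# The weight vector w plays the role of the "classifier weights" map on the slide.
```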

Page 40

Haynes & Rees, Curr Biol (2005)

Classifying percepts during binocular rivalry

[Figure: voxel-by-time activity matrix and the decoded percept over time.]

Page 41

Haxby et al., Science (2001)

Object-category classification

[Figure: correlations between response patterns across object categories.]
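Haxby et al. (2001) classified by pattern correlation: a test pattern is assigned to the category whose pattern, measured in independent data, it correlates with most strongly. A minimal version of that comparison, on made-up patterns:

```python
import numpy as np

# Sketch: correlation-based category classification in the spirit of
# Haxby et al. (2001). Patterns are simulated; this is not the original analysis.

rng = np.random.default_rng(2)
categories = ["faces", "houses", "chairs", "shoes"]
n_voxels = 300

# True category patterns plus independent noise in each half of the data.
true = {c: rng.standard_normal(n_voxels) for c in categories}
half1 = {c: true[c] + 0.8 * rng.standard_normal(n_voxels) for c in categories}
half2 = {c: true[c] + 0.8 * rng.standard_normal(n_voxels) for c in categories}

def classify(test_pattern, reference_patterns):
    """Assign the category whose reference pattern correlates best with the test."""
    corrs = {c: np.corrcoef(test_pattern, ref)[0, 1]
             for c, ref in reference_patterns.items()}
    return max(corrs, key=corrs.get)

correct = sum(classify(half1[c], half2) == c for c in categories)
print(f"{correct}/{len(categories)} categories classified correctly")
```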

Page 42

How the human brain interacts with the world in real life

Simple sensory stimuli:

The full complexity of real life:

Page 43

Do we share the same conscious experience from the same sensory stimulus?

Hasson et al., Science (2004)

Page 44

“Mind reading” competition at Human Brain Mapping 2006 (http://www.ebc.pitt.edu/competition.html)

fMRI responses to video from 3 segments of the Home Improvement TV series, rated for a variety of features (e.g., faces, emotions).

Use the data from segments 1 and 2 to train a classifier, then generate predicted behavioral ratings for segment 3.

Brain activity interpretation competition
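The competition task is a regression problem: learn a mapping from voxel time courses to continuous feature ratings on segments 1 and 2, then apply it to segment 3. One plausible baseline, sketched below on simulated data, is ridge regression; this is an assumed approach, not the method used by any particular entry.

```python
import numpy as np

# Sketch: ridge regression from fMRI time courses to a continuous feature rating
# (e.g., "faces present"), trained on two movie segments and applied to a third.
# All data here are simulated; this is only a plausible baseline, not any
# competition entry's actual method.

rng = np.random.default_rng(3)
n_time, n_voxels = 300, 500                     # time points per segment, voxels
true_w = 0.1 * rng.standard_normal(n_voxels)    # hypothetical voxel weights

def make_segment():
    X = rng.standard_normal((n_time, n_voxels))          # voxel time courses
    y = X @ true_w + 0.5 * rng.standard_normal(n_time)   # rating time course
    return X, y

(X1, y1), (X2, y2), (X3, y3) = make_segment(), make_segment(), make_segment()

# Train on segments 1 and 2.
X_train = np.vstack([X1, X2])
y_train = np.concatenate([y1, y2])
lam = 100.0
w = np.linalg.solve(X_train.T @ X_train + lam * np.eye(n_voxels),
                    X_train.T @ y_train)

# Predict segment 3 and score with the correlation between predicted and true ratings.
pred = X3 @ w
r = np.corrcoef(pred, y3)[0, 1]
print(f"predicted vs. actual rating correlation on segment 3: r = {r:.2f}")
```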

Page 45

Classifying spatial patterns of brain activity with machine learning methods: Application to lie detection

C. Davatzikos, K. Ruparel, Y. Fan, D.G. Shen, M. Acharyya, J.W. Loughead, R.C. Gur, and D.D. Langleben

Departments of Radiology and Psychiatry and the Treatment Research Center, University of Pennsylvania, Philadelphia, PA, USA


Patterns of brain activity during deception have recently been characterized with fMRI on the multi-subject average group level. The clinical value of fMRI in lie detection will be determined by the ability to detect deception in individual subjects, rather than group averages. High-dimensional non-linear pattern classification methods applied to functional magnetic resonance (fMRI) images were used to discriminate between the spatial patterns of brain activity associated with lie and truth. In 22 participants performing a forced-choice deception task, 99% of the true and false responses were discriminated correctly. Predictive accuracy, assessed by cross-validation in participants not included in training, was 88%. The results demonstrate the potential of non-linear machine learning techniques in lie detection and other possible clinical applications of fMRI in individual subjects, and indicate that accurate clinical tests could be based on measurements of brain function with fMRI.

© 2005 Elsevier Inc. All rights reserved.

Introduction

A large body of functional neuroimaging literature has elucidated relationships between structure and function, as well as functional activity patterns during a variety of functional activation paradigms. Statistical parametric mapping (SPM) (Friston et al., 1995) has played a fundamental role in these studies, by departing from the conventional biased ROI- and hypothesis-based methods of data analysis and enabling unbiased voxel-by-voxel examination of all brain regions. While a great deal of knowledge has been gained during the past decade regarding brain regions that are activated during various tasks using voxel-based SPM analysis, the quantitative characterization of entire spatio-temporal patterns of brain activity, as opposed to voxel-by-voxel examination, has received much less attention, especially as a means for deducing "the state of the mind" from functional imaging data. The important distinction between a voxel-based analysis and the analysis of a spatio-temporal pattern is the same as the distinction between (mass) uni-variate and multi-variate analysis (Davatzikos, 2004). Specifically, a pattern of brain activity is not only a collection of active voxels, but carries with it correlations among different voxels. Notable efforts towards functional activity pattern analysis have been made (Strother et al., 1995; McIntosh et al., 1996), some of which attempt to use these methods to classify complex activation patterns using machine learning methods (Cox and Savoy, 2003; LaConte et al., 2005).

In this paper, we present an approach to the problem of identifying patterns of functional activity, by using a high-dimensional non-linear pattern classification method. We apply this approach to one of the long-standing challenges in applied psychophysiology, namely lie detection. Deception is a socially and legally important behavior. The limitations of the specificity of the currently available physiological methods of lie detection prompted the exploration of alternative methods based on the correlates of the central nervous system activity, such as EEG and fMRI (Rosenfeld, 2001; Spence et al., 2001; Langleben et al., 2002). Using SPM-based analyses of multi-subject average group data, several recent fMRI studies demonstrated differences in brain activation between truthful and non-truthful responses in various experimental paradigms (Langleben et al., 2002; Langleben et al., in press; Ganis et al., 2003; Kozel et al., 2004a,b; Lee et al., 2002). In order to translate these data into a clinically relevant application, discrimination between lie and truth has to be achieved at the level of single participants and single trials (Kozel et al., 2004b), not just via group analysis. The potential of the SPM-based approach to achieve this goal is limited due to the between-subject variability of regional brain activity. In the current work, we have overcome this limitation using a multi-variate non-linear high-dimensionality pattern classification technique (Lao et al., 2004) applied to spatial patterns of brain activation recorded via fMRI. Using data acquired with a previously reported formal deception paradigm (Langleben et al., in press), we have tested the hypothesis that truthful and non-…

NeuroImage 28 (2005) 663–668, doi:10.1016/j.neuroimage.2005.08.009

• Claims 88% accuracy

• Technology underlying a commercial venture (http://www.noliemri.com/)

Neuroscience-based lie detector
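The 88% accuracy claimed above is a cross-validated figure: the classifier is evaluated on participants who were held out of training, one at a time. The sketch below shows that leave-one-subject-out loop on simulated data, with a simple nearest-centroid classifier standing in for the paper's high-dimensional non-linear method.

```python
import numpy as np

# Sketch: leave-one-subject-out cross-validation for lie/truth classification.
# Data are simulated, and a nearest-centroid classifier stands in for the
# high-dimensional non-linear method used in the paper.

rng = np.random.default_rng(4)
n_subjects, trials_per_cond, n_features = 22, 12, 100
effect = 0.4 * rng.standard_normal(n_features)        # shared lie-vs-truth pattern

def subject_data():
    """Return (features, labels) for one subject; label 1 = lie, 0 = truth."""
    labels = np.repeat([1, 0], trials_per_cond)
    X = rng.standard_normal((labels.size, n_features)) + labels[:, None] * effect
    return X, labels

subjects = [subject_data() for _ in range(n_subjects)]

accuracies = []
for s in range(n_subjects):                            # hold out one subject at a time
    train = [subjects[i] for i in range(n_subjects) if i != s]
    X_tr = np.vstack([X for X, _ in train])
    y_tr = np.concatenate([y for _, y in train])
    lie_centroid = X_tr[y_tr == 1].mean(axis=0)
    truth_centroid = X_tr[y_tr == 0].mean(axis=0)

    X_te, y_te = subjects[s]
    d_lie = np.linalg.norm(X_te - lie_centroid, axis=1)
    d_truth = np.linalg.norm(X_te - truth_centroid, axis=1)
    pred = (d_lie < d_truth).astype(int)
    accuracies.append(np.mean(pred == y_te))

print(f"leave-one-subject-out accuracy: {np.mean(accuracies):.0%}")
```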