Lecture 17: Fourier Comparison


Page 1: Lecture17 Fourier Comparison

1

Last Time
• Bayesian
• Nonparametric

This Week
• Fourier Analysis
• Comparison of methods

Page 2: Lecture17 Fourier Comparison

2

Fourier Analysis
• Fourier transform: converts information in the time domain to the frequency domain
  – Used to change a raw time course to a power spectrum
  – Hypothesis: any repetitive/blocked task should have power at the task frequency
• Subset of the general linear model
  – Same as if sine and cosine were used as regressors

[Figure: time course of a 12 s on, 12 s off block design and its power spectrum; x-axis Frequency (Hz), y-axis Power]
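The block-design hypothesis can be sketched numerically: a simulated 12 s on / 12 s off boxcar (all parameter values here are illustrative, not the lecture's data) produces a power-spectrum peak at the task frequency of 1/24 Hz.

```python
import numpy as np

# Sketch (illustrative values): power spectrum of a simulated 12 s on / 12 s off
# block design sampled at TR = 2 s, with added Gaussian noise.
TR = 2.0                      # repetition time in seconds
n = 120                       # number of scans (240 s -> 10 task cycles)
t = np.arange(n) * TR
rng = np.random.default_rng(0)

# Boxcar task regressor: 12 s on, 12 s off -> 24 s period
task = (t % 24 < 12).astype(float)
signal = task + rng.normal(scale=0.5, size=n)

# One-sided power spectrum via the FFT (demean first to drop the DC term)
freqs = np.fft.rfftfreq(n, d=TR)                     # frequencies in Hz
power = np.abs(np.fft.rfft(signal - signal.mean()))**2

peak_freq = freqs[np.argmax(power)]
print(peak_freq)              # close to the task frequency 1/24 s ~ 0.042 Hz
```

With 10 full cycles in the run, the task frequency falls exactly on an FFT bin, so the peak is sharp even at this noise level.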

Page 3: Lecture17 Fourier Comparison

3

[Figures: example fMRI time courses of percent signal change (-3% to +3%) over 122 images for Left-Right and Right-Left conditions; their power spectra (spectral power vs. frequency, 0.005-0.297 Hz); and spectral power at 0.058 Hz plotted against phase angle (0-330 degrees)]

Kiviniemi V, et al. (2000), MRM 44:373-378

Page 4: Lecture17 Fourier Comparison

4

Question
• How does one optimize an fMRI analysis approach?
• How does one test an fMRI analysis approach?

Aims
• Introduce neuroimaging pipeline meta-models;
• Explain why these are important for preprocessing;
• Describe several possible frameworks for optimizing pipeline meta-models, and their preprocessing steps;
• Use one framework to explore preprocessing and data analysis optimization in several block-design fMRI studies;
• Rank the importance of a subset of fMRI preprocessing steps;
• Conclude that pipeline steps should not be optimized in isolation.

Page 5: Lecture17 Fourier Comparison

5

Data From A Typical fMRI Study

[Flow diagram: Reconstructed fMRI Data → B0 Correction → Slice Timing Adjustment → Motion Correction → Non-Linear Warping → Spatial & Temporal Filtering (some preprocessing steps) → Statistical Analysis Engine (fed by the Experimental Design Matrix) → Statistical Maps → Table of Local Max → Rendering of Results on Anatomy → Display; the fMRI study's meta-data spans the pipeline from data modeling through display]

Why Optimize Pipeline Meta-models?
• Published results demonstrate that new insights into human brain function may be obscured by poor and/or limited choices in the data-processing pipeline!
• We don't understand the relative importance of meta-model choices.
• "Neuroscientifically plausible" results are used to justify the meta-model choices made.
• Systematic bias towards prevailing neuroscientific expectations, and against new discoveries.
• Meta-model literature descriptions are incomplete.

Page 6: Lecture17 Fourier Comparison

6

Optimization Framework 1
Simulations: Receiver Operating Characteristic (ROC) Curves

P_A = P(True positive) = P(Truly active voxel is classified as active) = Sensitivity
P_I = P(False positive) = P(Inactive voxel is classified as active) = False alarm rate

Skudlarski P, Neuroimage, 9(3):311-329, 1999.
Della-Maggiore V, Neuroimage, 17:19-28, 2002.
Lukic AS, IEEE Symp. Biomedical Imaging, 2004.
Beckmann CF, Smith SM. IEEE Trans. Med. Img., 23:137-152, 2004.
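A minimal sketch of how such a simulation ROC is built: draw test statistics for truly active and inactive voxels, sweep a threshold, and trace sensitivity (P_A) against false-positive rate (P_I). The distributions and effect size below are illustrative assumptions, not values from the cited studies.

```python
import numpy as np

# Simulated voxel statistics: "inactive" voxels follow the null, "active"
# voxels are shifted by an (illustrative) effect size of 2 standard deviations.
rng = np.random.default_rng(1)
inactive = rng.normal(0.0, 1.0, size=5000)   # null t-like statistics
active = rng.normal(2.0, 1.0, size=500)

# Sweep a threshold: P_I = false-positive rate, P_A = sensitivity
thresholds = np.linspace(-4, 7, 200)
p_i = np.array([(inactive > th).mean() for th in thresholds])
p_a = np.array([(active > th).mean() for th in thresholds])

# Area under the ROC curve via the rank (Mann-Whitney) formulation:
# the probability that a random active voxel outscores a random inactive one.
auc = (active[:, None] > inactive[None, :]).mean()
print(round(auc, 3))
```

Pipelines can then be compared by their full (P_I, P_A) curves or summarized by the area under the curve.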

Page 7: Lecture17 Fourier Comparison

7

Skudlarski P, Neuroimage, 9(3):311-329, 1999.

Page 8: Lecture17 Fourier Comparison

8

Simulated Sources

Signal Types, s(v)        | Temporal Pattern
Task-related              | Periodic, slowly varying
Transiently task-related  | Periodic, transient
Function-related          | Random fluctuations
Motion-related            | Very slowly varying with occasional large transients (discontinuities)

Evaluation of Estimated Sources

• Define sources
• Generate sources
• For all:
  • Add noise
  • Smooth
  • Reduce (PCA, cluster, etc.)
  • Unmix (Infomax, fastICA, JADE, etc.)
  • Evaluate (KL)
• min(KL) is winner

Criterion: Kullback-Leibler (KL) divergence between the "true" source distributions p_s and the estimated source distributions p_u:

D(p_s \| p_u) = \int p_s(\xi) \ln\!\left(\frac{p_s(\xi)}{p_u(\xi)}\right) d\xi,

where p_s(\xi) = \prod_{i=1}^{N} p_{s_i}(\xi_i) and p_u(\xi) = \prod_{i=1}^{N} p_{u_i}(\xi_i).
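A minimal sketch of the KL evaluation step, assuming simple histogram estimates of the source densities; the distributions below are illustrative, not the paper's simulated sources. The candidate with the smaller divergence from the true distribution "wins".

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Discrete D(p || q) = sum_i p_i * ln(p_i / q_i), clipped for empty bins."""
    p = np.asarray(p, float); q = np.asarray(q, float)
    p = p / p.sum(); q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

rng = np.random.default_rng(2)
bins = np.linspace(-4, 4, 41)
p_true, _ = np.histogram(rng.normal(0.0, 1.0, 20000), bins=bins)

# Two candidate "unmixing" estimates: one close to the truth, one biased
p_good, _ = np.histogram(rng.normal(0.0, 1.1, 20000), bins=bins)
p_bad, _ = np.histogram(rng.normal(1.5, 1.0, 20000), bins=bins)

# The better estimate has the smaller divergence from the true distribution
print(kl_divergence(p_true, p_good) < kl_divergence(p_true, p_bad))  # True
```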

Page 9: Lecture17 Fourier Comparison

9

Results

In general, the infomax algorithm performed slightly better than the fastICA algorithm (with the performance difference increasing at higher noise levels), and clustering performed better than PCA at lower noise levels. Different types of smoothing did not change the results much, so only the results for no smoothing are shown. PCA had the least variability in performance, while clustering exhibited some. The best combination at higher SNR was infomax with clustering as the data-reduction approach; however, as the SNR decreased, PCA began to outperform clustering.

“Hybrid” fMRI Experiment

Page 10: Lecture17 Fourier Comparison

10

Results

The results from the fMRI experiment are presented for CNR=0.41 and CNR=2.07. In general, the infomax approach outperformed the fastICA approach, and PCA outperformed clustering. Results from two smoothing kernels are presented, but did not have a significant effect on the outcome. The best overall combination for this case appears to be infomax and PCA, as in the low SNR case for the simulation results. Note that the variability of the results is also quite low when PCA is combined with infomax.

Optimization Framework 2
• Minimize p-values or maximize SPM values, e.g.,
  • Hopfinger JB, Buchel C, Holmes AP, Friston KJ. A study of analysis parameters that influence the sensitivity of event-related fMRI analyses. Neuroimage, 11:326-333, 2000.
  • Tanabe J, Miller D, Tregellas J, Freedman R, Meyer FG. Comparison of detrending methods for optimal fMRI preprocessing. Neuroimage, 15:902-907, 2002.
• Does not necessarily imply a stronger likelihood of getting the same result in another replication of the same experiment!

Page 11: Lecture17 Fourier Comparison

11

Hopfinger JB, Buchel C, Holmes AP, Friston KJ. A study of analysis parameters that influence the sensitivity of event-related fMRI analyses. Neuroimage, 11:326-333, 2000.

Page 12: Lecture17 Fourier Comparison

12

Optimization Framework 3
Replication as f(Processing Pipeline)

• Quantifying replication/reproducibility because:
  • replication is a fundamental criterion for a result to be considered scientific;
  • smaller p-values do not necessarily imply a stronger likelihood of the same result in another replication of the same experiment;
  • for "good scientific practice" it is necessary, but not sufficient, to build a measure of replication into the experimental design and data analysis;
  • results are data-driven and avoid simulations.

• Tegeler C, Strother SC, Anderson JR, Kim S-G. Reproducibility of BOLD-based functional MRI obtained at 4T. Hum Brain Mapp, 7:267-283, 1999.
• Experimental Design, Data Acquisition & Data Analysis:
  • 4 Tesla, single-shot EPI, TR = 5 s, TE = 30 ms, voxel = 3.75 x 3.75 x 5 mm^3;
  • left-handed finger opposition task with alternating control and task epochs;
  • 6 subjects performed 3 runs/session with 4 control & 3 task epochs/run;
  • Preprocessing: 0-3rd order detrending, 3x3x3 block spatial smoothing, correction for respiration and cardiac artifacts in some subjects;
  • Data analysis: t-test & Fisher linear discriminant.

Page 13: Lecture17 Fourier Comparison

13

Simple Motor-Task Replication at 4.0T

[Figure: replication maps from the t-test and the Fisher linear discriminant; L/R orientation marked]

Regional Replication: Activation Size

Page 14: Lecture17 Fourier Comparison

14

Regional Replication: Cluster Location

Utility of Physiological Corrections

Page 15: Lecture17 Fourier Comparison

15

Optimization Framework 3a
• Data-Driven, Empirical ROCs:
  • Genovese CR, Noll DC, Eddy WF. Estimating test-retest reliability in functional MR imaging. I. Statistical methodology. Magnetic Resonance in Medicine, 38:497-507, 1997.
  • Maitra R, Roys SR, Gullapalli RP. Test-retest reliability estimation of functional MRI data. Magnetic Resonance in Medicine, 48:62-70, 2002.
  • Liou M, Su H-R, Lee J-D, Cheng PE, Huang C-C, Tsai C-H. Bridging Functional MR Images and Scientific Inference: Reproducibility Maps. J. Cog. Neuroscience, 15:935-945, 2003 (WE 312, OHBM'04).

Mixture model for the number of replications R_V (out of M_V) in which voxel V is declared active, with active-voxel fraction \lambda:

\binom{M_V}{R_V}\left[\lambda\, P_A^{R_V} (1-P_A)^{M_V - R_V} + (1-\lambda)\, P_I^{R_V} (1-P_I)^{M_V - R_V}\right]
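A small sketch of the mixture-of-binomials reproducibility model: across M replications a voxel is "detected" R_V times, with active voxels (fraction lambda) detecting at rate P_A and inactive ones at rate P_I. All parameter values below are illustrative assumptions.

```python
import numpy as np
from math import comb

def replication_pmf(m, lam, p_a, p_i):
    """P(R_V = r) for r = 0..m under the two-component binomial mixture."""
    r = np.arange(m + 1)
    c = np.array([comb(m, k) for k in r])
    return c * (lam * p_a**r * (1 - p_a)**(m - r)
                + (1 - lam) * p_i**r * (1 - p_i)**(m - r))

# Illustrative settings: 8 replications, 10% active voxels,
# 90% detection rate when active, 5% false-positive rate when inactive
pmf = replication_pmf(m=8, lam=0.1, p_a=0.9, p_i=0.05)

print(np.isclose(pmf.sum(), 1.0))   # True: a proper distribution over R_V
print(pmf.argmax())                  # mass piles up at low counts (inactive)
```

The fitted mixture separates voxels that replicate often (active component) from those that replicate rarely (inactive component), giving a voxel-wise reproducibility map.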

Page 16: Lecture17 Fourier Comparison

16

Optimization Framework 3b
• Data-Driven, Empirical ROCs:
  • Nandy RR, Cordes D. Novel ROC-Type Method for Testing the Efficiency of Multivariate Statistical Methods in fMRI. Magnetic Resonance in Medicine, 49:1152-1162, 2003 (WE 259, OHBM'04).
• P(Y) = P(voxel identified as active)
• P(Y|F) = P(inactive voxel identified as active)
• P(Y) vs. P(Y|F) is a lower bound for the true ROC
• Using a standard experimental run and a resting-state run (for false positives), the true ROC may be approximated from P(Y) vs. P(Y|F).
• Requires replication of control noise structure for accurate false-positive assessment.

Optimization Framework 3c
• Reproducibility PCAs:
  • Strother SC, et al., Hum Brain Mapp, 5:312-316, 1997.
  • Tegeler C, et al., Hum Brain Mapp, 7:267-283, 1999.

Page 17: Lecture17 Fourier Comparison

17

Is Replication a Sufficient Metric?
• A silly data analysis approach produces the same output pattern regardless of the input data!
• Results are perfectly replicable;
  • no variance;
  • completely useless because they are severely biased!
• Must consider such bias-variance tradeoffs when measuring pipeline performance.

Optimization Framework 4
Prediction/Crossvalidation Resampling
• Stone M. Cross-validatory choice and assessment of statistical predictions. J. R. Stat. Soc. B, 36:111-147, 1974.
• Hastie T, Tibshirani R, Friedman J. The Elements of Statistical Learning. Springer-Verlag, New York, 2001.

Page 18: Lecture17 Fourier Comparison

18

Prediction/Crossvalidation reSampling
• International Neuroimaging Consortium:
  • Lautrup B, et al. Proc. Workshop on Supercomputing in Brain Research: From Tomography to Neural Networks (Eds. Hermann HJ, Wolf DE, Poeppel E). World Scientific, Jülich, Germany, 1995.
  • Morch N, et al. In: Duncan J, Gindi G, eds. Lecture Notes in Computer Science 1230: Information Processing in Medical Imaging. Springer-Verlag, 259-270, 1997.
  • Hansen LK, et al., Neuroimage, 9:534-544, 1999.
  • Kustra R, Strother SC. IEEE Trans Med Img, 20:376-387, 2001.
• Other Groups:
  • Clark CM, et al. J Cerebral Blood Flow Metabol, 11:A96-102, 1991.
  • McKeown MJ, et al. Neuroimage, 11:24-35, 2000.
  • Ngan S-C, et al. Neuroimage, 11:797-804, 2000.
  • Haxby JV, et al. Science, 293:2425-2430, 2001.
  • Cox D, Savoy RL. Neuroimage, 19(2 Pt 1):261-270, 2003.

Optimization Framework 5: NPAIRS
Nonparametric
Prediction
Activation
Influence
Reproducibility
reSampling

• Combines prediction & reproducibility with split-half resampling.

Strother SC, et al., Neuroimage, 15:747-771, 2002; Kjems U, et al., Neuroimage, 15:772-786, 2002.

Page 19: Lecture17 Fourier Comparison

19

The NPAIRS Framework
• NPAIRS uses "split-half" resampling to provide:
  • PCA-based reproducibility measures of:
    – uncorrelated signal and noise SPMs;
    – reproducible SPMs (rSPM) on a Z-score scale;
    – multivariate dimensionality.
  • Combined prediction and reproducibility metrics for:
    – data-driven ROC-like curves;
    – optimizing bias-variance tradeoffs of pipeline interactions.
  • Other measures:
    – empirical random-effects correction;
    – measures of individual observation influence.

NPAIRS Metrics in Functional Neuroimaging Studies
PET
• Strother SC, et al., Hum Brain Mapp, 5:312-316, 1997.
• Frutiger S, et al., Neuroimage, 12:515-527, 2000.
• Muley SA, et al., Neuroimage, 13:185-195, 2001.
• Shaw ME, et al., Neuroimage, 15:661-674, 2002.
• Strother SC, et al., Neuroimage, 15:747-771, 2002.
• Kjems U, et al., Neuroimage, 15:772-786, 2002.
fMRI
• Tegeler C, et al., Hum Brain Mapp, 7:267-283, 1999.
• Shaw ME, et al., Neuroimage, 19:988-1001, 2003.
• LaConte S, et al., Neuroimage, 18:10-23, 2003.
• Strother SC, et al., Neuroimage (in press).

Page 20: Lecture17 Fourier Comparison

20

NPAIRS: Split-half reSampling for Activation-Pattern Reproducibility Metrics

For a pair of split-half SPMs with correlation r, a 45-degree rotation diagonalizes their correlation matrix into uncorrelated signal (variance 1+r) and noise (variance 1-r) components:

\frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}
\begin{pmatrix} 1 & r \\ r & 1 \end{pmatrix}
\frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}
= \begin{pmatrix} 1+r & 0 \\ 0 & 1-r \end{pmatrix}
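The split-half rotation can be illustrated on simulated data: two independent-half SPMs share a common signal, and the sum/difference projections separate a signal axis (variance 1+r) from a noise axis (variance 1-r). The signal layout, noise levels, and the name `rspm_z` below are invented for this sketch.

```python
import numpy as np

rng = np.random.default_rng(3)
n_vox = 20000
truth = np.zeros(n_vox)
truth[:1000] = 3.0                       # a block of "active" voxels

# Two split-half SPMs: common signal plus independent noise, standardized
spm1 = truth + rng.normal(size=n_vox)
spm2 = truth + rng.normal(size=n_vox)
spm1 = (spm1 - spm1.mean()) / spm1.std()
spm2 = (spm2 - spm2.mean()) / spm2.std()

r = np.corrcoef(spm1, spm2)[0, 1]        # reproducibility of the pair

# Rotate by 45 degrees: signal and noise projections
signal = (spm1 + spm2) / np.sqrt(2)      # variance = 1 + r
noise = (spm1 - spm2) / np.sqrt(2)       # variance = 1 - r
rspm_z = signal / noise.std()            # reproducible SPM on a Z-score scale

print(round(signal.var(), 2), round(1 + r, 2))   # these two agree
```

Scaling the signal image by the noise standard deviation is what puts the reproducible SPM on a Z-score scale.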

Optimization Frameworks
1. Simulation & ROC curves
Data-Driven:
2. Minimize p-values;
3. Replication/Reproducibility
  • Empirical ROCs;
  • Reproducibility PCAs.
4. Prediction Error/Accuracy
5. NPAIRS: Prediction + Reproducibility

Page 21: Lecture17 Fourier Comparison

21

A Multivariate Model for NPAIRS

• PCA of data matrix:

  \mathrm{svd}(X_{t \times v}) = E_t\, S\, U_v

• Canonical Variates Analysis (CVA):

  \mathrm{svd}\!\left[(G^T G)^{-1/2}\, G^T E_t\, (E_t^T E_t)^{-1/2}\right] \propto W^{-1} B

• Design matrix (G) "brain states" = discriminant classes.
  – prediction metric = posterior probability of class membership.
  – maximizes a multivariate signal-to-noise ratio: (between-class, B)/(pooled within-class, W) covariance.
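A minimal sketch of the PCA step in this notation: SVD of a (time x voxel) data matrix X into temporal components E, singular values S, and spatial eigenimages U, followed by reduction to a few components before a classifier such as CVA. The matrix here is random and the shapes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
t, v = 40, 500
X = rng.normal(size=(t, v))
X -= X.mean(axis=0)                       # center each voxel's time course

# X = E @ diag(s) @ Ut, matching svd(X_{t x v}) = E_t S U_v above
E, s, Ut = np.linalg.svd(X, full_matrices=False)

# Reduce to the first k principal components before the classifier
k = 10
scores = E[:, :k] * s[:k]                 # low-dimensional scan representation

print(np.allclose(X, E @ np.diag(s) @ Ut))   # True: exact reconstruction
```

The number of components retained (k) is itself a pipeline choice; in the slides that follow it is the model-complexity knob in the bias-variance tradeoff.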

Optimization of fMRI: Static Force fMRI
• Sixteen subjects with 2 runs/subject
• Acquisition:
  • Whole-brain, interleaved 1.5T BOLD-EPI;
  • 30 slices = 1 whole-brain scan;
  • 1 oblique slice = 3.44 x 3.44 x 5 mm^3;
  • TR/TE = 4000 ms/70 ms
• Experimental Design:
• Analyzed with NPAIRS and PCA/CVA:
  • Dropped initial non-equilibrium and state-transition scans;
  • 2-class single-subject;
  • 11-class 16-subject, group analysis;

Page 22: Lecture17 Fourier Comparison

22

Preprocessing for Static Force
• All runs/subject(s) passed initial quality control:
  • movement (AIR 3) < 1 voxel;
  • no artifacts in functional or structural scans;
  • no obvious outliers in PCA of centered data matrix.
• Alignment (AIR 5):
  • Within-Subject: across runs to 1st retained scan of run one;
  • Between-Subject: 1st (affine), 3rd, 5th and 7th order polynomials;
  • Tri-linear and sinc (AIR 5) interpolation.
• Temporal Detrending using GLM Cosine Basis (SPM):
  • None,
  • 0.5, (0.5, 1.0), (0.5-1.5), (0.5-2.0), (0.5-2.5), (0.5-3.0) cosines/run.
    – (0.5-1.5) includes three GLM columns with 0.5, 1.0 and 1.5 cosines/run
• Spatial Smoothing with 2D Gaussian:
  • None;
  • FWHM = 1, 1.5, 2, 3, 4, 6, 8 pixels (pixel = 3.44 mm)
    – FWHM = 1.5 pixels = 5.2 mm; FWHM = 6 pixels = 21 mm.
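The cosine-basis detrending option can be sketched as follows: regress out a constant plus slow cosine drift terms (0.5, 1.0, 1.5, ... cycles per run) and keep the residuals. The frame count, drift shape, and the helper name `cosine_detrend` are illustrative; this is the idea behind SPM's cosine drift basis, not its exact implementation.

```python
import numpy as np

def cosine_detrend(y, n_cosines):
    """Remove a constant plus n_cosines half-period cosine drift terms.

    Basis column k is cos(pi * k * t / T): k half-cycles over the run,
    i.e. 0.5, 1.0, 1.5, ... cosines/run as listed in the slide.
    """
    n = len(y)
    t = np.arange(n)
    X = np.column_stack([np.cos(np.pi * k * (t + 0.5) / n)
                         for k in range(n_cosines + 1)])   # k = 0 is the mean
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

rng = np.random.default_rng(4)
n = 120
t = np.arange(n)
drift = 0.01 * t + np.cos(np.pi * t / n)       # slow scanner drift (illustrative)
y = drift + rng.normal(scale=0.1, size=n)

clean = cosine_detrend(y, n_cosines=3)         # remove up to 1.5 cosines/run
print(clean.std() < y.std())                   # True: drift variance removed
```

Choosing the cutoff (how many cosines/run) is exactly the pipeline parameter being optimized here: too few leaves reproducible drift, too many eats into the task frequency.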

Static Force: Single-Subject PCA

Page 23: Lecture17 Fourier Comparison

23

GLM Design Matrix
Y[Subject(time) x Voxels] = G[Subject(time) x Effects] x B[Effects x Voxels] + error

ROC-Like: Prediction vs. Reproducibility

• A Bias-Variance Tradeoff. As model complexity increases (i.e., #PCs 10 → 100), prediction of the design matrix's class labels improves and reproducibility (i.e., activation SNR) decreases.
• Optimizing Performance. Like an ROC plot, there is a single point, (1, 1), on this prediction vs. reproducibility plot with the best performance; at this location the model has perfectly predicted the design matrix while extracting an infinite SNR.

LaConte S, et al. Evaluating preprocessing choices in single-subject BOLD-fMRI studies using data-driven performance metrics. Neuroimage, 18:10-23, 2003.

Page 24: Lecture17 Fourier Comparison

24

Static Force, 16-Subject Group Analysis

Static Force, 16-Subject Group Analysis: Reproducibility & Dimensionality

Page 25: Lecture17 Fourier Comparison

25

Eigenimage 1: 50, 100 & 200 Split-Half PCs

Eigenimage 2: 50, 100 & 200 Split-Half PCs

Page 26: Lecture17 Fourier Comparison

26

Differences in Scanner Smoothness

Courtesy Lee Friedman, UNM & Functional BIRN

Subject-Specific Pipeline Optimization

Shaw ME, et. al., Neuroimage 19:988-1001, 2003

Page 27: Lecture17 Fourier Comparison

27

Subject-Specific Pipeline Optimization

Shaw ME, et. al., Neuroimage 19:988-1001, 2003

Summary of Preprocessing Effects

•• There are multiple dataThere are multiple data--driven ways to measure preprocessing effects and driven ways to measure preprocessing effects and optimize pipeline optimize pipeline metamodelsmetamodels that are being actively studied;that are being actively studied;

•• NPAIRS with prediction and reproducibility metrics found:NPAIRS with prediction and reproducibility metrics found:•• Small amounts of smoothing are the most important optimization pSmall amounts of smoothing are the most important optimization parameter;arameter;

•• Temporal Temporal detrendingdetrending was essential to remove lowwas essential to remove low--frequency, reproducing time frequency, reproducing time trends; trends;

•• Higher order polynomial warps compared to affine alignment had oHigher order polynomial warps compared to affine alignment had only a minor nly a minor impact on the performance metrics;impact on the performance metrics;

•• Both prediction and reproducibility metrics required to optimizBoth prediction and reproducibility metrics required to optimize the pipeline, and e the pipeline, and give different results;give different results;

•• SubjectSubject--specific pipeline optimization may improve group analysis resultspecific pipeline optimization may improve group analysis results.s.

•• The parameter settings of components in the pipeline interact soThe parameter settings of components in the pipeline interact so that the that the current practice of reporting the optimization of components tescurrent practice of reporting the optimization of components tested in ted in relative isolation is unlikely to lead to fullyrelative isolation is unlikely to lead to fully--optimized processing pipelines.optimized processing pipelines.

Page 28: Lecture17 Fourier Comparison

28

Next Week
• Multivariate GLM