Statistics – Understanding your findings (Chris Rorden)

1. Modeling data:
   • Signal, error, and covariates
   • Statistical contrasts
2. Thresholding results:
   • Statistical power and statistical errors
   • The multiple comparison problem
   • Familywise error and Bonferroni thresholding
   • Permutation thresholding
   • False Discovery Rate thresholding
3. Implications: null results are uninterpretable

The fMRI signal
• Last lecture: we predicted that areas involved with a task will become brighter (after a delay).
• Consider a 12-sec on, 12-sec rest task. Our expected (model) signal should look like this:

Calculating statistics
• Strong predictor: the model predicts virtually all the variability in the observed data.
• Mediocre predictor: weaker correlation between model and observed data.

Calculating statistics
• Does our model reliably explain the observed data?
  a. Top: good predictor (strong signal, little noise)
  b. Middle: mediocre predictor (strong signal, lots of noise)
  c. Bottom: mediocre predictor (little signal, little noise)
• Statistical probability is based on the ratio of effect amplitude to error (signal/noise).

General Linear Model
• The observed data are composed of a signal that is predicted by our model plus unexplained noise (Boynton et al., 1996):

  Measured Data = Design Model × Amplitude (solve for) + Noise
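The GLM on this slide can be sketched as ordinary least squares: solve for the amplitude (beta) that scales the design model to best fit the measured data. A minimal sketch assuming numpy; the boxcar regressor, amplitudes, and noise level are illustrative, not values from the lecture:

```python
import numpy as np

rng = np.random.default_rng(0)
n_vols = 48                                 # volumes (rows of the design)
boxcar = np.tile([1] * 6 + [0] * 6, 4)      # on/off task regressor

# Design matrix: one task regressor plus a constant (baseline) column.
X = np.column_stack([boxcar, np.ones(n_vols)])

# Simulated voxel time course: amplitude 2 on the task, baseline 5, plus noise.
y = 2.0 * boxcar + 5.0 + rng.normal(0, 0.5, n_vols)

# Solve for the amplitudes (betas) that best explain the measured data.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
residual = y - X @ beta                     # unexplained noise
```

The fitted betas should recover the simulated task amplitude and baseline, and the residual is the noise term of the GLM equation.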

What is your design model?
• The model is the predicted effect.
• Consider a block-design experiment with three conditions, each lasting 11.2 sec:
  1. Press left index finger when you see ←
  2. Press right index finger when you see →
  3. Do nothing when you see ↑

FSL/SPM display of model
• Analysis programs display the model as a grid:
  – Each column is a regressor (e.g. left/right arrows).
  – Each row is a volume of data (for within-subject fMRI, rows = time).
  – Brightness of a cell is the model's predicted intensity.

Statistical Contrasts
• fMRI inference is based on contrasts. Consider a study with left arrow and right arrow as regressors:
  1. [1 0] identifies activation correlated with left arrows: we would expect visual and motor effects.
  2. [1 –1] identifies regions that show more response to left arrows than to right arrows. Visual effects should be similar, so this should selectively identify contralateral motor cortex.
• Choice of contrast is crucial to inference.
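A contrast is simply a weight vector applied to the fitted amplitudes. A hypothetical sketch (numpy assumed, simulated left/right block regressors): a "visual" voxel responds to both arrows, a "motor" voxel only to left arrows, and the two contrasts distinguish them:

```python
import numpy as np

rng = np.random.default_rng(0)
left = np.tile([1] * 6 + [0] * 18, 4)               # left-arrow blocks
right = np.tile([0] * 12 + [1] * 6 + [0] * 6, 4)    # right-arrow blocks
X = np.column_stack([left, right, np.ones(len(left))])

# Voxel responding to both arrows (visual) vs. only left arrows (motor):
visual = 1.0 * left + 1.0 * right + rng.normal(0, 0.2, len(left))
motor = 1.0 * left + 0.0 * right + rng.normal(0, 0.2, len(left))

def fit(y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

b_visual, b_motor = fit(visual), fit(motor)
c_main = np.array([1, 0, 0])    # activation correlated with left arrows
c_diff = np.array([1, -1, 0])   # left minus right: the lateralized response

left_effect_visual = c_main @ b_visual   # large: visual area sees left arrows
diff_visual = c_diff @ b_visual          # near zero: responds to both equally
diff_motor = c_diff @ b_motor            # large: selective (motor) response
```

As on the slide, [1 0] picks up the visual voxel, while [1 –1] is near zero there and isolates the lateralized motor voxel.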

Statistical Contrasts
• A t-test is one-tailed; an F-test is two-tailed:
  – t-test: [1 –1] is mutually exclusive of [–1 1]: left>right vs. right>left.
  – F-test: [1 –1] = [–1 1]: any difference between left and right.
• Choice of test is crucial to inference.

How many regressors?
• We collected data during a block design where the participant completed 3 tasks:
  – Left hand movement
  – Right hand movement
  – Rest
• We are only interested in the brain areas involved with left hand movement. Should we include the uninteresting right hand movement as a regressor in our statistical model?
  – I.e., is a [1] analysis the same as a [1 0] analysis?
  – Is a [1 0] analysis identical, better, worse, or simply different from a [1] analysis?

Meaningful regressors decrease noise
• Meaningful regressors can explain some of the variability in the data.
• Adding a meaningful regressor can therefore reduce the unexplained noise in our contrast.

Single factor…
• Consider a test of how well height predicts weight:

  t = Explained Variance / Unexplained Variance

• Small t-score: height only weakly predicts weight.
• High t-score: height strongly predicts weight.
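The height/weight t-score can be sketched directly: fit the regression, then divide the slope (explained effect) by its standard error (driven by unexplained variance). A minimal sketch with simulated data (numpy assumed; the slope, noise level, and sample size are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
height = rng.normal(170, 10, n)
weight = 0.5 * height + rng.normal(0, 5, n)   # true slope 0.5, noisy

# Fit weight = b0*height + b1; t = slope / standard error of the slope.
X = np.column_stack([height, np.ones(n)])
beta, *_ = np.linalg.lstsq(X, weight, rcond=None)
resid = weight - X @ beta
sigma2 = resid @ resid / (n - 2)                      # unexplained variance
se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[0, 0])   # error of the slope
t = beta[0] / se
```

With this much signal relative to noise, height strongly predicts weight and the t-score is large; shrinking the slope or inflating the noise would shrink t.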

Adding a second factor…
• How does an additional factor influence our test? E.g., we can add waist diameter as a regressor.
• Does this regressor influence the t-test of how well height predicts weight? Consider the ratio of explained to unexplained variance:
  – Increased t: waist explains a portion of weight not predicted by height (error shrinks).
  – Decreased t: waist explains a portion of weight that is also predicted by height (signal shrinks).
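The beneficial case above can be sketched numerically: when waist is independent of height but explains part of weight, adding it reduces the error term and the t-score for height rises. A sketch with simulated data (numpy assumed; all coefficients and noise levels are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
height = rng.normal(170, 10, n)
waist = rng.normal(90, 10, n)    # independent of height in this simulation
weight = 0.5 * height + 0.3 * waist + rng.normal(0, 3, n)

def t_for_height(X):
    """t-score for the height regressor (first column of X)."""
    beta, *_ = np.linalg.lstsq(X, weight, rcond=None)
    resid = weight - X @ beta
    sigma2 = resid @ resid / (n - X.shape[1])
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[0, 0])
    return beta[0] / se

t_alone = t_for_height(np.column_stack([height, np.ones(n)]))
t_with_waist = t_for_height(np.column_stack([height, waist, np.ones(n)]))
```

Because waist soaks up variance that height could not explain, `t_with_waist` exceeds `t_alone`. If waist were instead highly correlated with height, it would steal signal and t would drop.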

Regressors and statistics
• Our analysis identifies three classes of variability:
  1. Signal: predicted effect of interest
  2. Noise (aka error): unexplained variance
  3. Covariates: predicted effects that are not relevant (e.g. regressors given a contrast weight of zero)
• Statistical significance is the ratio:

  t = Signal / Noise

• Covariates will:
  – Improve sensitivity if they reduce error (explain otherwise unexplained variance).
  – Reduce sensitivity if they reduce signal (explain variance that is also predicted by our effect of interest).

Correlated regressors decrease signal
• If a regressor is strongly correlated with our effect, it can reduce the residual signal:
  – Part of our signal is excluded, because the regressor explains that variability.
  – Example: button-press responses highly correlated with the visual stimuli.

Summary
• Regressors should be orthogonal:
  – Each regressor describes independent variance.
  – Variance should not be explained by more than one regressor.
• E.g., we will see that including temporal derivatives as regressors tends to help event-related designs (temporal processing lecture).

Inferring effect size from probability maps
• The claim "the parietal lobe is more active than the frontal lobe" may be wrong: statistical significance is based on amplitude relative to error.
• Say instead: "the trend is for parietal activity to be more reliable than frontal activity."
• If you want to infer effect size, examine percent signal change, not statistical significance.

Statistical thresholding
• E.g., erythropoietin (EPO) doping in athletes:
  – In endurance athletes, EPO improves performance by ~10%; races are often won by less than 1%.
  – Without testing, athletes are forced to dope to be competitive.
  – Dangers: carcinogenic and can cause heart attacks.
• Therefore: measure hematocrit level to identify drug users.
• Problem: hematocrit levels vary even in people who are not doping:
  – If we set the threshold too high, we will fail to detect dopers (high rate of misses).
  – If we set the threshold too low, we will accuse innocent people (high rate of false alarms).

Alpha level
• Statistics allow us to estimate our confidence.
• α is our statistical threshold: it measures our chance of a Type I error.
• An alpha level of 5% means only a 1/20 chance of a false alarm (we will only accept p < 0.05).
• An alpha level of 1% means only a 1/100 chance of a false alarm (p < 0.01).
• Therefore, a 1% alpha is more conservative than a 5% alpha.

Errors
• With noisy data, we will make mistakes. Statistics allow us to:
  – Estimate our confidence.
  – Bias the type of mistake we make (e.g. we can decide whether we will tend to make false alarms or misses).
• We can be liberal: avoiding misses.
• We can be conservative: avoiding false alarms.
• We want liberal tests for airport weapons detection (X-ray screening often leads to innocent baggage being opened).
• Our society wants conservative tests for criminal conviction: avoid sending innocent people to jail.

Statistical Power
• Power is our probability of making a Hit: it reflects our ability to detect real effects.
• To make new discoveries, we need to optimize power.
• There are several ways to increase power…

  Decision \ Reality   Ho true                      Ho false
  Reject Ho            Type I error (false alarm)   Hit
  Accept Ho            Correct rejection            Type II error (miss)

Increasing power…
1. Adjust the statistical threshold (e.g. p < 0.05 instead of p < 0.01)
   – However, we increase the chance of a Type I error!
2. Increase the signal-to-noise ratio
   – Increase signal: block design, higher-field magnet, higher dose.
   – Decrease noise: meaningful regressors, temporal and spatial processing (future lectures).
3. Increase the number of observations
   – Scan individuals for longer; scan more individuals.
   – Disadvantage: time and money.

Multiple Comparisons
• Assume a 1% alpha for drug testing:
  – An innocent athlete has only a 1% chance of being accused.
  – Problem: there are 10,500 athletes in the Olympics. If all are innocent and α = 1%, we will wrongly accuse 105 athletes (0.01 × 10,500)!
• This is the 'multiple comparison problem'. Gray matter volume is ~900 cc (900,000 mm³):
  – A typical fMRI voxel is 3×3×3 mm (27 mm³).
  – Therefore, we will conduct >30,000 tests.
  – With a 5% alpha, we will make >1,500 false alarms!

Multiple Comparison Problem
• If we conduct 20 tests with α = 5%, we will on average make one false alarm (20 × 0.05).
• If we make twenty comparisons, we may be making 0, 1, 2, or in rare cases even more errors.
• The chance we will make at least one error is given by the formula 1 − (1 − α)^C: if we make twenty comparisons at p < 0.05, we have a 1 − (0.95)^20 = 64% chance of reporting at least one erroneous finding.
• This is our familywise error (FWE) rate.
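The slide's formula is easy to verify directly:

```python
# Familywise error: chance of at least one false alarm across
# C independent comparisons, each run at threshold alpha.
alpha, comparisons = 0.05, 20
fwe = 1 - (1 - alpha) ** comparisons
print(f"{fwe:.2f}")   # 0.64, i.e. ~64% chance of at least one error
```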

Bonferroni Correction: controls FWE
• To keep the familywise error rate at α across C tests, test each comparison at α/C.
• For example: if we conduct 10 tests and want a 5% chance of any errors, we adjust our threshold to p < 0.005 (0.05/10).
• Problem: very conservative = very little chance of detecting real effects = low power.
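Plugging the slide's example into the familywise-error formula confirms that the Bonferroni-adjusted threshold keeps the overall error rate just under 5%:

```python
# Bonferroni: divide the desired familywise alpha by the number of tests.
alpha_family, n_tests = 0.05, 10
alpha_per_test = alpha_family / n_tests          # 0.005
fwe = 1 - (1 - alpha_per_test) ** n_tests        # ~0.049, below 0.05
print(alpha_per_test, round(fwe, 3))
```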

Random Field Theory
• We spatially smooth our data: peaks due to noise should be attenuated by their neighbors.
• RFT uses resolution elements (resels) instead of voxels:
  – If we smooth our data with an 8 mm FWHM kernel, the resel size is 8 mm.
• SPM uses RFT for FWE correction: it only requires the statistical map, its smoothness, and a cluster-size threshold (Worsley et al., HBM 4:58-73, 1995).
  – Euler characteristic: unsmoothed noise has high peaks but few clusters; smoothed data has lower peaks but shows clustering.
• RFT has many unchecked assumptions (Nichols) and works best for heavily smoothed data (≥3× voxel size).
(Figure: noise smoothed at 5, 10, and 15 mm; image from Nichols.)

Permutation Thresholding
• Prediction: the labels 'Group 1' and 'Group 2' mean something.
• Null hypothesis (Ho): the labels are meaningless.
• If Ho is true, we should get similar t-scores if we randomly scramble the group order.

Permutation Thresholding
• Step 1: compute 1000 random permutations of the data, recording the maximum t-score observed for each permutation:
  – Observed data: max T = 4.1
  – Permutation 1: max T = 3.2
  – Permutation 2: max T = 2.9
  – Permutation 3: max T = 3.3
  – Permutation 4: max T = 2.8
  – Permutation 5: max T = 3.5
  – …
  – Permutation 1000: max T = 3.1
• Step 2: rank-order the 1000 maximum t-scores; the 50th most significant max T is the 5% threshold (here T = 3.9).
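The two steps above can be sketched with a max-T permutation test. This is a simplified sketch on simulated data (numpy assumed; group sizes, voxel count, and effect size are illustrative): compute a two-sample t per voxel, scramble the group labels 1000 times, record each permutation's maximum |t|, and take the 95th percentile as the 5% FWE threshold:

```python
import numpy as np

rng = np.random.default_rng(1)
n_per_group, n_voxels, n_perm = 12, 50, 1000

# Simulated data: Group 1 has a real effect in the first 5 voxels.
g1 = rng.normal(0, 1, (n_per_group, n_voxels))
g1[:, :5] += 3.0
g2 = rng.normal(0, 1, (n_per_group, n_voxels))
data = np.vstack([g1, g2])
labels = np.array([1] * n_per_group + [0] * n_per_group)

def max_t(data, labels):
    """Maximum absolute two-sample t-score across all voxels."""
    a, b = data[labels == 1], data[labels == 0]
    se = np.sqrt(a.var(axis=0, ddof=1) / len(a) + b.var(axis=0, ddof=1) / len(b))
    return np.max(np.abs((a.mean(axis=0) - b.mean(axis=0)) / se))

observed = max_t(data, labels)
null_max = np.array([max_t(data, rng.permutation(labels)) for _ in range(n_perm)])
threshold = np.percentile(null_max, 95)   # 5% FWE-corrected threshold
```

Because the labels carry real information here, the observed max T clears the permutation threshold; under a true null it would do so only ~5% of the time.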

Permutation Thresholding
• Offers the same protection against false alarms as Bonferroni, but is typically much more powerful.
• Implementations include SnPM, FSL's randomise, and my own NPM.
• Disadvantage: computing 1000 permutations takes ~1000× longer than a typical analysis!
• Simulation data from Nichols et al.: permutation is always optimal; Bonferroni is typically conservative; Random Fields is only accurate with high DF and heavily smoothed data.

False Discovery Rate
• Traditional statistics attempt to control the false alarm rate. 'False Discovery Rate' instead controls the ratio of false alarms to hits. It often provides much more power than Bonferroni correction.
• Consider the distribution of hematocrit for our population of 10,000 Olympic athletes:
  – No dopers: z-scores follow a standard distribution around zero.
  – Some dopers: the dopers form a distribution with z-scores > 0.
  – 5% Bonferroni: only a 5% chance that any innocent athlete will be accused.
  – 5% FDR: only 5% of expelled athletes are innocent.
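A common way to control FDR is the Benjamini-Hochberg step-up procedure (the slide does not name a specific algorithm, so this is one standard choice): sort the p-values and find the largest p(k) that is at or below q·k/m. A minimal sketch with made-up p-values:

```python
import numpy as np

def fdr_threshold(p_values, q=0.05):
    """Benjamini-Hochberg: largest p(k) satisfying p(k) <= q*k/m."""
    p = np.sort(np.asarray(p_values))
    m = len(p)
    below = p <= q * np.arange(1, m + 1) / m
    return p[below][-1] if below.any() else None

# Hypothetical p-values from six tests; anything at or below the
# returned threshold is declared a discovery.
thresh = fdr_threshold([0.001, 0.008, 0.039, 0.041, 0.27, 0.6])
```

Here the threshold lands at 0.008, so two tests survive; a Bonferroni threshold of 0.05/6 ≈ 0.0083 happens to be similar in this tiny example, but FDR pulls ahead when many tests carry real signal.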

Controlling for multiple comparisons
• Bonferroni correction
  – Very conservative: we will often fail to find real results.
• RFT correction
  – Typically less conservative than Bonferroni.
  – Requires large DF and broad smoothing.
• Permutation thresholding
  – Offers the same inference as Bonferroni correction.
  – Typically much less conservative than Bonferroni.
  – Computationally very slow.
• FDR correction (though see www.pubmed.com/18547821)
  – At an FDR of 0.05, about 5% of 'activated' voxels will be false alarms.
  – If signal is only a tiny proportion of the data, FDR will be similar to Bonferroni.

Alternatives to voxelwise analysis
• Conventional fMRI statistics compute one statistical comparison per voxel:
  – Advantage: can discover effects anywhere in the brain.
  – Disadvantage: low statistical power due to multiple comparisons.
• Small Volume Comparison (SVC): only test a small proportion of voxels (still adjusted with RFT).
• Region of Interest (ROI): pool data across an anatomical region for a single statistical test.
• Example: how many comparisons on this slice?
  – Voxelwise: 1600
  – SVC: 57
  – ROI: 1

ROI analysis
• In voxelwise analysis, we conduct an independent test for every voxel:
  – Each voxel is noisy.
  – Huge number of tests, so a severe penalty for multiple comparisons.
• Alternative: pool data from a region of interest (e.g. M1: movement; S1: sensation):
  – Averaging across a meaningful region should reduce noise.
  – One test per region, so the FWE adjustment is less severe.
• The region must be selected independently of the statistical contrast:
  – Anatomically predefined, or
  – Defined from a previous localizer session, or
  – Selected based on a combination of the conditions you will contrast.

Inference from fMRI statistics
• fMRI studies have very low power:
  – Correction for multiple comparisons
  – Poor signal-to-noise
  – Variability in functional anatomy between people
• Null results are impossible to interpret (it is hard to say an area is not involved with a task).