This course covers the design, acquisition, and analysis of Functional Magnetic Resonance Imaging (fMRI) data. A book related to the class can be found here: https://leanpub.com/principlesoffmri


From the course by Johns Hopkins University

Principles of fMRI 1



From the lesson

Week 3

This week we will discuss the General Linear Model (GLM).

- Martin Lindquist, PhD, MSc, Professor, Biostatistics, Bloomberg School of Public Health | Johns Hopkins University
- Tor Wager, PhD, Department of Psychology and Neuroscience, The Institute of Cognitive Science | University of Colorado at Boulder

In this module, we're going to go into more detail on building GLM models. This is design specification. First, let's review some key concepts from before. We talked about the structural model for the GLM, Y = X times beta plus error, where the betas are the model parameters that need to be estimated, and X is the design matrix that we are going to specify in advance. That's what we're building today.
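The estimation step that recovers the betas from Y = X beta + error can be sketched with ordinary least squares. Everything here is a toy illustration: the task regressor, the noise level, and the "true" beta values are all made up, not from the course.

```python
import numpy as np

# Minimal sketch of the GLM structural model Y = X*beta + error for one
# voxel. The task regressor, noise level, and "true" betas are hypothetical.
rng = np.random.default_rng(0)
n = 100                                          # number of time points
task = (np.arange(n) % 20 < 10).astype(float)    # toy on/off task regressor
X = np.column_stack([np.ones(n), task])          # design matrix: intercept + task

true_beta = np.array([10.0, 2.0])                # assumed "true" parameters
Y = X @ true_beta + rng.normal(scale=0.5, size=n)

# Ordinary least squares: beta_hat = (X'X)^{-1} X'Y
beta_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)
print(beta_hat)                                  # close to [10.0, 2.0]
```

With low noise, the estimated betas land near the values used to generate the data; in a real analysis only `beta_hat` is observable.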

We talked about an overview of the GLM analysis process, which is a two-level hierarchical model involving design specification (or model building), estimation, contrast specification, and group analysis. Then we're ready for anatomical localization and inference.

This is the first-level GLM for a single voxel and a single subject, with a basic design matrix. What we care about here is the activation parameter estimate, or the beta, for the task regressor, in a very simple design. If we go from two conditions of interest to more than two conditions, we can specify any number of different event types or block types. Here we've got four, A, B, C, and D. We can convolve each of them with an assumed basis function and end up with a design matrix.
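The step from onsets to design matrix can be sketched as follows. The gamma-shaped HRF, the TR, the scan count, and the onset times are all hypothetical illustrations, not values from the course.

```python
import numpy as np

# Sketch: build a design matrix for four event types (A-D) by convolving
# each indicator ("stick") function with an assumed HRF. The gamma-shaped
# HRF, TR, scan count, and onsets are all hypothetical.
TR = 1.0
n_scans = 120
t = np.arange(0, 30, TR)
hrf = (t ** 5) * np.exp(-t)          # simple gamma-shaped HRF (illustrative)
hrf /= hrf.sum()                     # normalize to unit area

onsets = {"A": [5, 45, 85], "B": [15, 55, 95],
          "C": [25, 65, 105], "D": [35, 75, 110]}

columns = [np.ones(n_scans)]         # intercept column
for cond in "ABCD":
    sticks = np.zeros(n_scans)
    sticks[onsets[cond]] = 1.0                          # indicator function
    columns.append(np.convolve(sticks, hrf)[:n_scans])  # predicted BOLD response
X = np.column_stack(columns)         # intercept + one regressor per event type
```

Each regressor is zero until its first onset and then rises and falls with the assumed HRF shape; real packages (SPM, FSL, nilearn) do this same convolution with more refined HRF models.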

So let's look now at model building, and we'll look specifically at multiple predictors and at contrasts. Let's go back to our famous versus non-famous face example. It's a block design. What we care about is the difference between famous and non-famous faces. This is a contrast across those two conditions. With a block design, one can use a single regressor that captures that difference and just build it into the model. That's what we saw previously.

What happens if we have an event-related design? Then we have to model each event type separately in the GLM. Now we end up with a design matrix that has one regressor for famous and one regressor for non-famous faces. In this case, we can flexibly test multiple contrasts on this design matrix. So we can assess the difference between famous and non-famous faces, we can test each one separately, or we can assess their average. These comparisons are specified by different linear contrasts across those parameter estimates, or betas.

So what is a contrast? It's a flexible and powerful tool for testing a hypothesis in the GLM framework. We'll focus now specifically on the t-contrast, which is a linear combination of GLM parameters that gives us a single planned comparison. I can do a t-test on that and make a statistical inference on whether that contrast value is different from zero. It's specified by a vector of weights, which we'll call c, so that c transposed times beta hat, where beta hat means the activation parameter estimates, gives me a scalar value. This value is signed, so it can take negative or positive values.

So let's apply that to our famous and non-famous face example. I've got two parameter estimates that I'm interested in: beta one for famous and beta two for non-famous. I can specify a difference contrast, which is 0 for the intercept, 1 for the famous, and -1 for the non-famous faces. That gives us the famous minus non-famous difference. A second contrast specifies the sum, or average, across the two face types, which essentially gives me faces versus rest. That's 0 for the intercept, 1 for famous, and 1 for non-famous faces. And we can test a single event type, so 0, 1, 0 tests only the famous faces, or beta one, against the implicit baseline. We can then ask: is there a significant positive or negative response to famous faces?
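These face contrasts can be written out directly as c-transpose times beta-hat. The beta values below are invented just to show the arithmetic.

```python
import numpy as np

# The three t-contrasts from the famous/non-famous example, applied to
# hypothetical parameter estimates [intercept, famous, non-famous].
beta_hat = np.array([100.0, 3.0, 1.0])

c_diff = np.array([0, 1, -1])    # famous minus non-famous
c_sum  = np.array([0, 1,  1])    # faces (sum of both types) vs. rest
c_fam  = np.array([0, 1,  0])    # famous alone vs. implicit baseline

print(c_diff @ beta_hat)         # 2.0
print(c_sum @ beta_hat)          # 4.0
print(c_fam @ beta_hat)          # 3.0
```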

So let's generalize this now to the case where you have multiple predictors. This will be a useful example that we'll take forward with us in future lectures as well. Here I've got a design with four conditions, and let's just say this is a memory experiment. So I've got four word types: A, B, C, and D. And they're grouped into two factors. Factor 1 we'll call modality, or visual versus auditory presentation, so there are two levels of that factor. Factor 2 is high versus low imageability. It turns out that words that are imageable are easy to remember. So there are two levels of imageability in our example.

This is an example of a factorial repeated-measures ANOVA design, to go back to the earlier lecture. And that's because there are four repeated measures: we have each of the four trial types sampled within person, with multiple instances per person. In this case, we don't have any between-subject predictors yet, no individual differences, so I've just got a straight-up factorial repeated-measures ANOVA design. Very typical for fMRI.

So let's look at model building and contrasts with multiple predictors. We'll specify our indicator functions for the four different types of onsets, convolve them with the basis function, the assumed HRF, and then we get the design matrix. This is exactly the case that we saw previously. In general, if you're modeling any kind of factorial design in fMRI, you can simply create one regressor, or one event type, per cell.

Now let's use this to look at contrasts. These are my four columns in my design matrix, and now I'm going to apply contrast weights across those four columns. I can apply the contrast 1, 1, -1, -1, which means I'm taking a linear combination that equals the parameter estimate for A plus B minus C minus D. You can see this graphically below. This is a main effect of Factor 1, or visual versus auditory presentation.
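As a quick arithmetic check, here is that main-effect contrast applied to made-up cell betas:

```python
import numpy as np

# Main effect of Factor 1 via the contrast [1, 1, -1, -1]; the cell
# parameter estimates for A-D are hypothetical.
beta_hat = np.array([3.0, 2.0, 1.0, 1.0])    # A, B, C, D
c = np.array([1, 1, -1, -1])
value = c @ beta_hat                          # (A + B) - (C + D)
print(value)                                  # 3.0
```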

Let's look at some rules for t-contrasts now, and this can help us elaborate our understanding of contrasts. First of all, C can be a matrix, so it doesn't have to be one contrast vector; it can be several. If C is arranged in columns, so that each column is a contrast vector, those columns are applied independently. They don't affect one another, so each is really a separate test of a separate effect on the data.

So let's look at this contrast matrix. It's got three columns, and it corresponds to the main effects and interaction, or the standard ANOVA contrasts. Let's look at those three columns a little bit more carefully. The first column is 0.5, 0.5, -0.5, -0.5, so it reflects the main effect of Factor 1: I've got positive weights on A and B, negative weights on C and D. The second column reflects the main effect of Factor 2: now I've got positive weights on A and C, negative weights on B and D. And finally, the third column reflects the interaction, which is what I get when I multiply the contrast weights of those two columns together, element by element, to create the third column. This column essentially captures the crossover interaction, so I've got positive weights on A and D, negative weights on B and C. It's testing whether the effect of Factor 1 depends on the level of Factor 2, or vice versa.

I'm not limited to ANOVA contrasts. I can specify planned tests that make sense based on whatever hypotheses I might have. So in this case, the contrast 1, -1, 0, 0 is testing a simple effect, or the difference between A and B. This might be of interest: here it's testing the high versus low imageability effect for visual items only, which is a very sensible thing to test, depending on my psychological questions. The contrast 2, -1, -1, 0 tests something else: the magnitude of twice A versus B and C together. This may or may not make sense depending on my design, but it is a valid contrast, and in some cases it might be useful.

Another rule is about scaling. The scaling of the contrast weights affects the magnitude of the contrast values, but not the inferences I make, so it doesn't affect the t values or the p values. I can use contrast weights of [1 -1] or [0.5 -0.5] and get the exact same statistical result. So let's look at this case, where I've got the contrast 2, -1, -1, 0; this is twice A versus B plus C. If I rescale the contrast weights to be 1, -0.5, -0.5, 0, then the contrast value estimates A versus the mean of B and C. If these were four different sports teams and I was testing memory effects for football players, hockey players, baseball players, and basketball players, you can see why you might want to test football players versus the average of hockey and baseball players, for example. So depending on what my question is, this can be quite useful.
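The scaling rule can be checked numerically. The simulated design, betas, and noise below are hypothetical, and the t statistic uses the usual OLS formula t = c'beta_hat / sqrt(sigma^2 * c'(X'X)^{-1} c).

```python
import numpy as np

# Demonstration that rescaling contrast weights changes the contrast value
# but not the t statistic. Design, betas, and noise are all simulated.
rng = np.random.default_rng(1)
n, p = 80, 4
X = rng.normal(size=(n, p))                    # toy regressors for A-D
Y = X @ np.array([2.0, 1.0, 1.0, 0.5]) + rng.normal(size=n)

beta_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)
resid = Y - X @ beta_hat
sigma2 = resid @ resid / (n - p)               # residual variance estimate
XtX_inv = np.linalg.inv(X.T @ X)

def t_stat(c):
    return (c @ beta_hat) / np.sqrt(sigma2 * (c @ XtX_inv @ c))

c1 = np.array([2.0, -1.0, -1.0, 0.0])
c2 = c1 / 2.0                                  # rescaled: [1, -0.5, -0.5, 0]
# c1 @ beta_hat is twice c2 @ beta_hat, but t_stat(c1) == t_stat(c2)
```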

And here's one tip as we move forward: contrast weights must be the same for all participants, to keep all the participants' estimates on the same scale. One way you can get into trouble is if you have missing sessions or runs; if you use contrast weights of 1's and -1's across the runs, they may not be on the same scale. We'll hear more about that in the second course.

Another rule for t-contrasts is that the contrast weights typically sum to zero. This makes it so that the expected value of the contrast under the null hypothesis is zero, and that permits us to do a t-test where zero is the null hypothesis value. So it's very natural. Let's consider a contrast c across four conditions. Here's a valid contrast: 2, -1, -1, 0. The contrast weights sum to 0. This contrast, by comparison, is not valid: 2, -1, 0, 0. This tests 2*A - B. But even if the beta values are random, I'm going to get some non-zero value for the contrast estimate, and that means I'm not sure what the null hypothesis value should be for a t-test.

There is an exception: I can test the average of one or more conditions against the implicit baseline. So if I test the contrast 1, 0, 0, 0, then that contrast value is testing the significance of the beta value for condition A only; essentially, whether the response to A is different from zero. The contrast 1, 1, 0, 0 tests the sum, or average, of the betas for A and B. In our example, this would be all visually presented events, for example.

One final note before we move forward. We looked at model building for multiple predictors, just like this, and let's very quickly remind ourselves that there are a number of assumptions we have to make. To build this model, I have to assume that the neural activity function is correct (little sticks or blocks), that the HRF is correct, and that we have a linear time-invariant system. These three assumptions together allow me to construct the design matrix. All of these assumptions are wrong to some degree. "All models are wrong, but some are useful," as the statistician George Box once said. We'll look at how to relax some of these assumptions in certain ways in later lectures. That's the end of this module. Thanks.
