Another thing we can do, and it's one of my favorite ways to check for problems, is to look at histograms of the whole-brain data values for each participant. This can reveal lots of problems. So what should we see if things look good? This is a histogram for two participants, and each histogram reflects all the values across the entire brain. On average, these contrast values should usually be centered on zero; it's unclear whether the entire brain is ever really activated or deactivated by a task. So the histogram should be centered on zero. The scale on the x-axis here is arbitrary; those are the actual values in the contrast images, and they depend on a number of factors. But if we've treated all the participants the same, the scale should be the same, and that's really important. So the histograms should look roughly similar across participants, and there should be a lack of dramatic skewness. We might get some positive or negative skew if there's a lot of real activation across many brain areas, but in general, across the whole image, we shouldn't see any dramatic skew. So now let's look at all 30 of our subjects here, and as we can see, many of them look a little funny. This is fairly typical. These are imperfect images, and a lot of the time, when we see shifts away from zero, we see them on a whole-image basis. So I wouldn't trust activation or deactivation for a single subject very much at all. And we do see some skew here. What I've done here is flag our problematic subjects from before, and it looks like subjects 6 and 16 have some of the most problematic histograms, especially subject 16. Something very funny seems to be happening there: a lot of the brain is deactivated very strongly relative to other participants. This is probably strong enough that it's not within the physiological range of what you'd see in people, so it's probably an artifact.
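To make this histogram check concrete, here's a minimal sketch in Python using simulated data. The subject count, voxel count, and the injected whole-image shift for subject 16 are all hypothetical stand-ins; in practice the values would come from each subject's contrast image.

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(0)

# Simulated whole-brain contrast values for 30 subjects (hypothetical data;
# in a real analysis these would be loaded from each subject's contrast image).
n_subjects, n_voxels = 30, 5000
data = rng.normal(0.0, 1.0, size=(n_subjects, n_voxels))
data[15] -= 3.0  # subject 16 (index 15): whole-image shift toward deactivation

# Per-subject summaries of the whole-brain histogram:
# the center should sit near zero, and skewness should be modest.
means = data.mean(axis=1)
skews = skew(data, axis=1)

# Flag subjects whose whole-image mean deviates strongly from the group.
z = (means - means.mean()) / means.std()
flagged = (np.where(np.abs(z) > 3)[0] + 1).tolist()  # 1-based subject numbers
print(flagged)  # the shifted subject stands out
```

The same idea extends to plotting the full histograms side by side; the summary statistics just give a quick automated flag.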
Now we can examine the predictors themselves, or the regressors. In the top panel here, you see a plot of the raw reappraisal success scores, our predictor, against subject number. In the bottom panel, we see a plot of the leverage of each of those observations. Leverage is the weight that an observation has on the regression slopes due to the design itself, so extreme values far out on the x-axis have high leverage. This interacts with extreme values in the observed brain data: if that's also high, there's a multiplicative increase in influence over the regression line. So values that have high leverage and unusual brain data are potentially very problematic for regression. We can identify leverage by looking at the hat matrix, which is the matrix you would multiply by the data to produce the fitted responses; the diagonals of that matrix are the leverages I described before. What we see here is that the subjects with the most extreme values on reappraisal success are subjects 16 and 18, and, indeed, both of those subjects have very high leverage values. Subject 16 is interesting because it has extreme values in the brain data across the brain, very unusual and probably artifactual, and it's got high leverage. So this is a potentially very problematic case. So now we'll run the regression. We'll run a simple second-level regression, where we have the two predictors you see down there at the bottom. One is a predictor for reappraisal success, and the other is the intercept, which is virtually always in a model. These comprise the design matrix. The outcome is the reappraise versus look-negative contrast value at each voxel in the brain. In the second-level model, what we're going to do is mean-center the continuous covariates, like success. And what that allows us to do is interpret the regression maps both for the success regressor and for the intercept.
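Here's a small sketch of how leverage can be computed from the hat matrix, using made-up success scores with one extreme value standing in for subject 16:

```python
import numpy as np

n = 30

# Hypothetical reappraisal success scores; subject 16 (index 15) is extreme.
success = np.linspace(-2.0, 2.0, n)
success[15] = 4.0

# Design matrix: intercept plus mean-centered success score.
X = np.column_stack([np.ones(n), success - success.mean()])

# Hat matrix H = X (X'X)^{-1} X' maps the observed data to the fitted
# values; its diagonal entries are the leverages.
H = X @ np.linalg.inv(X.T @ X) @ X.T
leverage = np.diag(H)

print(np.argmax(leverage) + 1)  # prints 16: the extreme subject has top leverage
```

A useful sanity check is that the leverages always sum to the number of columns in the design matrix (here 2), so a single very large leverage necessarily pulls weight away from everyone else.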
And when we mean-center success, the intercept is interpretable: it's interpreted as the average contrast value at the average level of success across the group. So that's a really sensible way of looking at group activation in the contrast. This is especially useful when I have a brain area that's both activated in a contrast and whose degree of activation is correlated with a predictor I'm assessing. If I just assess the intercept by itself, then all the individual differences are going to be noise. But if I assess the intercept, the average contrast value, while controlling for the covariate, reappraisal success, then the statistic values can get stronger, because I'm explaining a known source of variance. So here, success is mean-zero, and the intercept estimates the average activation. So now let's look at the regression results. What we see in the top map is the reappraisal success predictor: there are positive correlations with success across a number of brain areas there, in the prefrontal cortex and others. In the bottom map, what we're seeing is the average reappraise versus look-negative contrast value, controlling for success. We see large increases in the medial prefrontal cortex, lateral prefrontal cortex, and other areas, as expected, and I'll show you that soon. So now let's examine these regression results more carefully, and we'll look for negative controls. These are findings that shouldn't appear in the brain if the test is really valid. It can be indicative of trouble if we find spurious findings that we know are signs of artifacts, or whole-brain shifts that are systematic or too consistent across individuals. If that's the case, then localized results might not be interpretable. So a really simple strategy we'll use here is to look for values outside of the brain, or in the ventricular spaces, at a liberal threshold, say 0.05 uncorrected.
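A quick numerical sketch of why mean-centering makes the intercept interpretable. The effect sizes and noise level here are invented for illustration; the point is that with a centered covariate, the fitted intercept equals the sample mean of the outcome.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 30

# Hypothetical per-subject data at one voxel: a true group-average
# activation of 0.5 plus a reappraisal-success effect of 0.3.
success = rng.normal(0.0, 1.0, size=n)
y = 0.5 + 0.3 * success + rng.normal(0.0, 0.1, size=n)

# Mean-center the covariate; the intercept then estimates the average
# contrast value at the average level of success.
X = np.column_stack([np.ones(n), success - success.mean()])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# With a centered covariate, the intercept equals the sample mean of y.
print(np.isclose(beta[0], y.mean()))  # True
```

And because the success covariate soaks up a known source of between-subject variance, the residual error around that intercept is smaller than it would be in an intercept-only model, which is why the intercept's statistics can get stronger.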
So here's the map at 0.05 uncorrected, everything colored there. And what do we see? Well, we don't see any egregious incursions into white matter or CSF. There are some things sort of about the border that might be reasonable. We can also average the values within all the gray-matter voxels, all the white-matter voxels, and all the CSF voxels in our standard brain. If we see systematic deviations from zero, that might be a sign of whole-brain activation or deactivation where we really don't expect to see anything, and that can be a sign of problems as well. And here we don't really see any strong systematic deviations from zero, so we'll proceed ahead. Next, we'll look at positive controls. Positive controls are conditions in which we should see a finding if the test is valid. So this can help us test whether our findings are plausible and whether we've done all the steps right leading up to this point, which are many. One way we can evaluate this is to ask whether our results for this contrast match the prior literature. There have been a number of studies of reappraisal, reappraising versus looking at negative images. And we can go to Neurosynth.org, here on this website, and pull up an automated meta-analysis of 161 studies of emotion regulation. And you see those here. So now we can download that map and compare it to our brain results. Here's our result on the top, and on the bottom are the meta-analysis results from Neurosynth.org. They don't separate activations and deactivations, and we actually expect the amygdala to be deactivated, so we'll ignore that for now. But we should see increases in activity in the frontal cortex and in pretty much all the other regions you see there. So what does this comparison tell us? Well, we don't expect these maps to look identical, after all, but it shows us that activation in many brain areas in our study does match fairly well what's expected based on prior work.
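The tissue-averaging check might look something like this sketch, with randomly assigned tissue labels standing in for a real standard-space gray/white/CSF segmentation:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical flattened group map with tissue labels (in practice the
# labels would come from a gray/white/CSF segmentation of the template).
n_voxels = 3000
tissue = rng.choice(np.array(["gray", "white", "csf"]), size=n_voxels)
stat_map = rng.normal(0.0, 1.0, size=n_voxels)
stat_map[tissue == "gray"] += 0.4  # simulated real gray-matter signal

def tissue_means(values, labels):
    """Average the map values within each tissue compartment."""
    return {t: float(values[labels == t].mean()) for t in ("gray", "white", "csf")}

means = tissue_means(stat_map, tissue)
# White-matter and CSF averages near zero argue against a whole-brain artifact.
print(means)
```

If the white-matter or CSF averages sat far from zero, that would be the negative-control failure described above: signal where we don't expect any.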
And this includes increases in the dorsal anterior cingulate, the ventromedial PFC, the ventrolateral prefrontal cortex, the SMA and pre-SMA, and other regions as well. So, finally, we'll examine our regression result for resistance to outlier influences. I made a big deal before about subject 16 being a particular problem: it was an outlier, and there are also some high-leverage values in the set of observations in the predictor space. So we'll do two things. We'll refine the analysis by removing subject 16, to compare, and we're also going to rank our success scores. So now, instead of a Pearson r correlation, we'll be doing a Spearman's rho against the brain values. Here is before: the reappraisal success predictor and the average contrast activity. And here is after making our adjustment. So what do we see? Well, the average response, the contrast activation, is similar, but there's less activity predicting success. So we might be more suspicious that some of that activity is spurious, or at least related to the way the data were scaled and the inclusion of this one particular, very influential person. Some of the key areas hold up, though, like activity in the subgenual anterior cingulate and nucleus accumbens, in the cingulate isthmus, in the superior cerebellum, and in the prefrontal cortex. So we might conclude that we have more confidence in reporting those regions, because they hold up in both analyses. So now let's go back to our checklist and wrap up. We talked about looking at the brain data, including issues of orientation and coverage, and alignment and scaling of the image values across subjects. We talked about looking for outliers, in terms of which subjects may be outliers, and in terms of other extreme values in the brain that might require some fixes or signal artifacts. We looked at the predictors themselves, and particularly the leverage of those predictors. And we looked at the regression results.
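Here's a toy illustration of both robustness checks, ranking (Spearman) and case deletion, using a small constructed dataset in which one influential subject, playing the role of subject 16, creates essentially all of the Pearson correlation:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

# Toy data: nine subjects with exactly zero predictor-brain correlation,
# plus one influential subject (last entry, analogous to subject 16) who
# is extreme on both the success score and the brain value.
success = np.array([-1, 0, 1, -1, 0, 1, -1, 0, 1, 4], dtype=float)
brain = np.array([-1, -1, -1, 0, 0, 0, 1, 1, 1, 5], dtype=float)

r_all = pearsonr(success, brain)[0]             # Pearson, all subjects
rho_all = spearmanr(success, brain)[0]          # rank-based, outlier-resistant
r_drop = pearsonr(success[:-1], brain[:-1])[0]  # drop the influential subject

print(round(r_all, 2), round(rho_all, 2), round(r_drop, 2))  # 0.75 0.29 0.0
```

The Pearson correlation looks substantial, the rank-based Spearman correlation shrinks it dramatically, and deleting the one influential case removes it entirely. When a brain result survives both checks, as some of the regions above did, we can report it with more confidence.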
And we've established some negative control conditions and examined them. We've established at least one positive control condition and verified that there's activation there. And we've looked at resistance to outliers, based on our examination of the assumptions and the data distribution. So that wraps up practical analysis. Thanks for listening.