The purpose of this course has been to teach you the basics of research so you can be a more critical consumer of the research in positive psychology, and think about how to use and apply it to your work and your life. The videos for this week will bring together what we've learned to reach these goals. They will be broken up into three parts. The first video, this video, will provide an overview of different types of research design. In the second video, we will talk about the key pieces of a research study and how to use the abstract to identify key ideas without reading the full study. And then in the third and final video, we'll review the key questions to ask yourself when you're reviewing a media article that discusses a research study. There are three types of research designs: experimental design, quasi-experimental or correlational design, and non-experimental design. First, experimental design. The primary question in an experimental design is: did the treatment or intervention have an effect on the outcome? Or, put another way, does the treatment or intervention work as intended? So, for example, let's say you implemented a gratitude intervention where you asked youth to think of people they appreciate but have not expressed their gratitude to. You have them write a letter of gratitude, and perhaps you even have them share the letter with the people they wanted to thank. Your goal is to see if this gratitude intervention has an impact on the well-being of those who participated. For this to be an experimental design, a number of conditions need to be met. First, there has to be a control group. All experimental studies are performed by comparing two groups: a treatment or intervention group that is exposed to some kind of intervention, and a control group that does not receive the intervention. The control group exists because it allows for a comparison to be made.
It gives the people doing the research an insight into how the treatment group would have theoretically behaved if they had never received the intervention at all. Second, participants have to be randomly assigned either to participate in the intervention or to the control group that does not receive the intervention. The fact that they are randomly assigned means everything else can be considered equal, and you can actually say that the intervention caused the outcomes. It is important that there's no crossover between the groups, though, so you can say that the treatment or intervention really did cause the outcome and that the control group didn't somehow end up receiving the same intervention. The key strength of experiments is that they have strong internal validity, which you'll recall means that we know no other factors could have explained the results. We know this because experimental designs randomly assign participants to receive the intervention, which means that everything else about the groups is assumed to be the same. So if you're reading a research article and you see that it's an experiment with random assignment, this means you can demonstrate causality: that the intervention, whether it's a gratitude letter, a deliberate practice task, or whatever positive psychology intervention you might be interested in, actually led to the outcomes that were observed. This is why experimental design is considered the gold standard of research. However, there are some important weaknesses worth noting about experiments. First, they may have limited external validity, or generalizability, because in order to do random assignment you may have to limit the sample of who you invite to participate. Practitioners also have important concerns with experiments. First, many worry that they are ethically questionable.
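To make the random assignment idea concrete, here is a minimal Python sketch of splitting participants into a treatment group and a control group. The participant IDs and function name are hypothetical illustrations, not part of the course materials.

```python
import random

def randomly_assign(participants, seed=42):
    """Randomly split participants into treatment and control groups.

    Because assignment is random, any pre-existing differences between
    people are spread evenly across the two groups (in expectation),
    which is what lets an experiment attribute outcome differences to
    the intervention itself.
    """
    rng = random.Random(seed)       # seeded only so the split is reproducible
    shuffled = participants[:]      # copy so the original list is untouched
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]  # (treatment, control)

# Hypothetical example: 10 youth identified only by ID numbers.
treatment, control = randomly_assign(list(range(1, 11)))
```

Note that every participant lands in exactly one group, which reflects the "no crossover" condition described above.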
So let's say, for example, that you've identified an intervention like the gratitude letter or a goal-setting activity that has been shown to have positive results for some population. Is it ethical to only give some people this intervention? Now, this is obviously an even more extreme concern in fields like healthcare, where you might be depriving someone of a drug that could save their life. But positive psychology interventions have been shown to have a positive impact on people's lives, so you do have to ask yourself how ethical it is to randomly select who gets to participate. Additionally, it can be logistically challenging to randomize subjects; it requires work on the front end to set it up appropriately. Now, there are some solutions to address these shortcomings. First, researchers could include only groups of people for which the treatment or intervention's effectiveness is uncertain, so that withholding it is not legally or ethically questionable. This is probably the most obvious solution, but it is often not possible given the circumstances. Another option is to employ what's called a waitlist for those currently denied treatment, so that you can give it to them later. So, for example, let's say you are piloting a happiness intervention where youth are going to learn about strategies for improving their well-being. You could pilot the intervention with a random subset of students in the fall semester, and then the waitlist could receive the intervention in the spring semester. That way you are ultimately providing everyone with access, eventually, but still have a controlled experiment in the first semester. Or, finally, you could randomly assign whole institutions, for example schools, to the treatment or intervention group rather than individuals within a school. But this obviously requires having a much larger sample. When it's not possible to use an experimental design, a quasi-experimental or correlational research design is the next best thing.
A quasi-experiment attempts to replicate the conditions of a true experiment to the greatest extent possible. Researchers would employ a quasi-experimental or correlational design in two circumstances. First, if naturally existing groups are being studied. If you want to make comparisons between age groups or genders, for example, you wouldn't randomly assign, because that wouldn't make sense or even be possible: you can't assign people to a gender or an age group. Instead, you would use a quasi-experimental or correlational design. The second reason you would use a quasi-experimental or correlational design is that random assignment is not possible given external factors like the ones we discussed above. Maybe it's too logistically challenging, or perhaps there are concerns about ethics. Quasi-experiments or correlational studies are logistically easier to implement because they analyze pre-existing groups. Though they don't control for all factors, they can increase internal validity by controlling for some variables. What this means is that they simply measure the other things that might also explain results, and then try to account for them. In some cases, quasi-experiments or correlational designs may demonstrate greater external validity, or generalizability, than true experiments because they can pull from a broader population; they aren't as constrained in how they construct the sample. So let's talk about an example of a quasi-experiment or correlational design. Let's say you implemented the same happiness intervention discussed above, but instead of randomly assigning youth to participate, you just assigned half of the classes in the school to participate in the intervention and the other half of the classes to be in the control group. This would be easier to implement, but you'd have to ask what other variables could have contributed to the results of the study.
So perhaps certain classes were already happier to begin with, based on how students had been assigned to their classes, and it was actually something about the existing class composition, not the happiness intervention, that explained any differences in outcomes. To try to get at these shortcomings, you could measure both groups of students at the beginning to see if there were any other differences, and then try to account for them. But the reality is that there could always be some variable you didn't think to consider or measure that could have explained the results. The two studies in the True Grit article are correlational, or quasi-experimental, because we're looking at the relationship between variables without doing any sort of random assignment. And the final type of design is non-experimental design. Non-experimental studies give researchers more insight into how a treatment or intervention works, with researchers typically going into more detail and providing richer descriptions of program practices and procedures. Non-experimental research studies are more likely to include qualitative data that are more subjective in nature, as well as descriptive quantitative data about programs, procedures, or interventions. Non-experimental studies can provide a more in-depth understanding of particular trends in the data. They answer the questions of why and how, which are incredibly important at the front end of the research process to inform the design, and also if you've demonstrated an intervention works and you want to better understand the mechanisms behind that intervention. Large-scale descriptive analyses, say of national survey data, are also a form of non-experimental design. A strength of this type of analysis is that you can have stronger external validity, because you aren't concerned with how you assign people in the sampling process, so you can have a much larger and more generalizable sample.
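The baseline check described above can be sketched in a few lines of Python. The scores, the 1-10 scale, and the function name are all hypothetical illustrations of the idea, not data from the course.

```python
from statistics import mean

def baseline_difference(group_a_scores, group_b_scores):
    """Compare mean baseline well-being scores of two pre-existing groups.

    In a quasi-experiment the groups are not randomly assigned, so a
    large gap at baseline is a warning that pre-existing differences,
    not the intervention, may explain any later outcome gap.
    """
    return mean(group_a_scores) - mean(group_b_scores)

# Hypothetical baseline happiness scores (1-10 scale) for two sets of classes.
intervention_classes = [6, 7, 5, 8, 6]
control_classes = [6, 5, 7, 6, 6]
gap = baseline_difference(intervention_classes, control_classes)
```

A small gap is reassuring but never conclusive; as the lecture notes, there could always be an unmeasured variable behind the results.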
However, because no variables are being manipulated, you are very limited in your ability to demonstrate any sort of causality. You're merely describing results and trying to better understand trends when you're using a non-experimental design. Even though experimental designs are considered the gold standard, the best practice is to try to triangulate data, which means using multiple methods to collect data. Convergent validity is when multiple data sources point to the same outcome, which can help to increase both your internal and your external validity. So, in sum, the design of a positive psychology research study will fall into one of three categories, and you can tell a lot about the strengths and weaknesses of the research design just based on what type of category it falls into. First, you have experimental design, which answers questions about what impact the intervention has on outcomes. These designs randomly assign some individuals to the treatment group and others to the control group. Next, you have quasi-experimental designs, which explore how the intervention influenced outcomes, or explore the relationship between variables, but fall short of causality because there is no random assignment. And finally, you have non-experimental design, which answers questions about how the intervention was implemented or what explains the results. Here, you aren't manipulating any variables but merely describing trends and answering questions of how and why. So whenever I'm reviewing a research study, after I identify the hypothesis, I determine what type of design it is and make assessments about the validity of its conclusions. Next, we'll build on this and talk more about how to dissect a research article to make sense of its findings.