I'm frequently asked what we should do in the face of anti-science rhetoric. In fact, the job facing us as science communicators can sometimes feel a little hopeless in the face of such rapid misinformation and disinformation. In this module, we're going to return to the field of cognitive psychology and talk about the various cognitive phenomena that can influence how people perceive, respond to, and interact with science. First, in this lesson, we'll explore epistemological beliefs about science. Then we'll learn about cognitive biases and a type of biased reasoning called motivated reasoning. We'll also talk about how disinformation is intentionally designed to trigger motivated reasoning. Finally, in the last module, this course will explore one strategy for managing cognitive phenomena like bias and motivated reasoning: a technique typically associated with the therapist's office called motivational interviewing.

With that, let's start with epistemological beliefs about science. Epistemology is a philosophical term referring to the study of knowledge. Epistemological beliefs about science are the beliefs we hold about the nature of science knowledge and where that knowledge comes from. They're a really important part of science literacy. When we consider science literacy, there are three main components: content knowledge, science process skills like inquiry, and reflective and epistemic processes. Depending on your background, you may have heard of nature of science principles instead of epistemological beliefs about science. Nature of science principles are the defining characteristics that make science different from other domains of inquiry. I like to think of epistemological beliefs about science and nature of science understanding as two sides of the same coin. What you believe influences what you know, because beliefs shape learning, and what you know in turn influences what you believe. Both come together to influence what you do in a science activity. This matters when considering science outreach as well as general science literacy: someone's beliefs influence how well they learn science, how they engage with science practices like inquiry, and how they make scientifically minded decisions.

There are four aspects that make up epistemological beliefs about science: certainty, structure, justification, and finally source. We'll start with certainty. Certainty refers to how confident we are in science knowledge. It's easy to sit in a biology classroom (or any science classroom), look at a giant textbook, and think that science is a collection of concrete facts, that science only exists to generate facts, and that science education is the process of learning those facts. I always use the example of Molecular Biology of the Cell, because that textbook literally weighs as much as the average human newborn. If people think science is all about learning facts, what's the point of visiting a science museum or funding more science research? Outreach to the public is a great way to present science as a process rather than as a collection of facts. Science knowledge is inherently uncertain. By that I mean it is subject to revision in light of new evidence. One of the most famous illustrations of this was when the geocentric model of the solar system was replaced with the heliocentric model. On the top part of this picture, we see the geocentric model. Geo refers to the Earth.
Geocentric means that Earth is at the center and the sun and other planets revolve around the Earth. Now, if you were to spend the afternoon outside looking up at the sky, you would see the sun move through the sky over the course of the day. If you go outside at night, you can pick out other planets in the sky, and they also appear to move. Without any information other than your own two eyes, it does appear that there are heavenly bodies moving around the Earth. This also creates something called firsthand bias: we're more likely to believe something that we experienced firsthand, even if it's not true. Firsthand bias can also lead us to overstate rare events. If something happens to you, even if it's statistically rare, it appears to be a much bigger concern. Now, when telescopes were invented, astronomers were able to generate new evidence about the solar system. Data was collected that supported the heliocentric model, which is shown at the bottom of this figure. Rather than the planets and the sun revolving around the Earth, the Earth and the other planets revolve around the sun. The Catholic Church took particular exception to this, and one of the scientists who put forth these ideas, Galileo Galilei, was accused of heresy and died while under house arrest.

Practicing scientists, and, perhaps somewhat ironically, many people I know who are religious, are comfortable with uncertainty. However, when the general public sees mights and coulds and maybes, a class of terms conveying uncertainty that linguists call hedges, attached to science knowledge, it looks like scientists don't know what they're talking about. Word choice is really important to consider when doing outreach. Hedges serve an important and very specific role for scientists, who know that knowledge is subject to change in light of new evidence, but to the general public, who have a different perspective, they come off as demonstrating that scientists don't know what they're talking about. Conversely, other words like theory or error mean something entirely different to a lay audience than to a scientific audience. Going back to certainty, science knowledge is always subject to change in light of new evidence. That is the power of the scientific enterprise, and this is why scientists use hedges. It's an acknowledgment that what we accept as fact today may change tomorrow in light of new evidence. Some science educators strongly suggest using hedges in the classroom and avoiding concrete terms like prove, right, and wrong to emphasize this point. I've put a reference at the end of this course if you're interested in learning more.

When we look at some public health issues, beliefs about certainty become very problematic. For example, let's consider when the article linking the MMR vaccine to the development of autism spectrum disorders was retracted. New evidence came to light that the original data was fabricated, so the article was retracted and removed from the scientific record. However, if someone believes science knowledge to be certain, how do they make sense of a retracted article? Another interesting example that followed in the wake of the COVID-19 pandemic was the debate around mask wearing. Originally, the public was discouraged from wearing masks, at least in the United States. Then evidence came to light that although cloth masks aren't as effective as disposable surgical masks, they are effective in minimizing the transmission of COVID-19.
The public was encouraged, and in some cases mandated, to wear face coverings when outside the home. However, this caused outrage, because without understanding that science knowledge changes over time, it looked more like political hemming and hawing than an accurate reflection of the tentative, uncertain nature of scientific knowledge.

Now that we've discussed certainty, let's move on to structure. Structure of knowledge is sometimes called simplicity of knowledge. It refers to the degree to which we perceive knowledge as interrelated with other knowledge that we have, and how we connect various pieces of knowledge together. How we make these connections between different bits of knowledge is also influenced by another cognitive phenomenon called motivated reasoning, which we'll explore later in this module. For example, rising sea levels is one piece of information that on its own is fairly straightforward. However, it becomes more complicated when we realize that there are other pieces of information that are important for understanding rising sea levels: rising global temperatures, drivers of climate change, all of these things are interrelated. Someone who believes the structure of science information to be straightforward is also going to be more likely to consider only a single viewpoint, usually their own. They're only going to accumulate knowledge specifically around that viewpoint. That's where motivated reasoning and firsthand bias can both come into play.

Let's take a look at another example. Let's say that you have a friend, and your friend was killed when they wrecked their car and it went underwater. They drowned because they couldn't get out of their car in time, because they had their seat belt on. You develop a knowledge structure that looks something like this. Let's say we have our water here, and we've got a car. I'm not an artist here, but you get it. Here's our car, underwater: the submerged car and the drowning. Here's our little guy in the car, and he's wearing a seat belt. Then we draw the conclusion, again, here's my little person wearing their seat belt, that seat belts are a liability, so I'm not going to wear them. We have this simple, straightforward, linear progression of ideas here. But it's a lot more complicated than that. For example, in the United States, 38,000 people were killed in car accidents in 2019. Of those, 19,000 were not wearing seat belts. Of those 38,000 people, 300 were killed in submersion deaths, and most of those submersion deaths occurred in the state of Florida. If you're not familiar with US geography, Florida is a peninsula, with water on three sides, and includes the famous Overseas Highway, which is a long series of bridges over the ocean. There are many more opportunities to drown in your car in Florida. And even with that, it's still not that common. Going back to our drawing, we're going to add to this knowledge structure: this is rare, it doesn't happen very often, and not wearing a seat belt is much more likely to get you killed. Seat belts save lives, and it's actually more dangerous if I don't wear one, because if there's a car accident, I'm much more likely to die without a seat belt than with one.
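To make these proportions concrete, here's a minimal Python sketch that does the back-of-the-envelope arithmetic with the 2019 figures quoted above. It's illustrative only, not a formal risk analysis.

```python
# Back-of-the-envelope comparison using the 2019 figures quoted in this lesson.
total_deaths = 38_000       # U.S. car-accident deaths in 2019
unbelted_deaths = 19_000    # of those, victims not wearing a seat belt
submersion_deaths = 300     # of those, deaths in submerged vehicles

# What fraction of all car-accident deaths does each scenario represent?
print(f"Unbelted:   {unbelted_deaths / total_deaths:.1%}")    # 50.0%
print(f"Submersion: {submersion_deaths / total_deaths:.1%}")  # 0.8%

# How many times more common is dying unbelted than dying by submersion?
print(f"Ratio: {unbelted_deaths / submersion_deaths:.0f}x")   # ~63x
```

Run the numbers and the unbelted outcome dominates the submersion scenario by well over an order of magnitude, which is exactly the correction we just added to the knowledge structure.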
It's easy to swap this example out for any hot-button issue you want, whether it's seat belts, vaccines, or genetically modified organisms, and it becomes a matter of balancing: are they safe enough? Do the benefits outweigh the costs? This is also an example of how motivated reasoning and bias can play out. We see firsthand bias again, because if this is something that happened to someone you know, you're more likely to give it more weight because of the personal experience. And because of motivated reasoning, it's easier for you to find information that supports your viewpoint, whether that's I'm not going to wear seat belts or I am willing to wear seat belts. That's how all of this plays out with the structure of science knowledge when we're trying to make decisions.

Now that we've talked about the structure of science knowledge, let's move into justification. Justification of knowledge refers to which aspects of science knowledge we accept, and how the evidence used to accept or reject them is generated through scientific inquiry. Science inquiry in the real world isn't like the canned, recipe-like formula that so many of us were exposed to at school. Due to various resource constraints in the classroom, when we think of science, we think of that recipe-like, very straightforward scientific method. In actuality, a standardized scientific method doesn't exist in practice, and so much focus on it can give people the wrong idea about science and how we justify science information. Authentic science inquiry is far more interesting. There are the roadblocks, the dead ends, the twists, the turns, the random observations that lead in a new direction. Authentic science inquiry can also look very different depending on the discipline and the specific research question. Like we discussed earlier in this course, the methods should always match the question at hand. There's also an important role for creativity, and for cultural and religious beliefs, which influence the process of science as well.

Outreach activities are a great opportunity for presenting the messiness of science and the creative process of science. They're also a great way to raise awareness of how, for example, culture has led to inequitable practices in science. I always share Rosalind Franklin's story as an example of how anti-woman culture in the academy in the 1950s resulted in her contribution to the discovery of the structure of DNA, one of the greatest discoveries in the history of molecular biology, being largely understated. I also use it as an opportunity for students to brainstorm ways to address ongoing inequities in STEM fields.

To justify knowledge, scientists also have to interact with colleagues beyond their immediate groups. Peer review, for example, shows how scientists work collaboratively, rather than alone in their labs, to generate and communicate science findings. Peer review is used to independently assess whether the claims scientists make are well supported by the evidence they have generated and presented. When someone has a novel or interesting finding, they write up what they did, what they found, and why it matters as a scientific manuscript. These manuscripts are then submitted to a scientific journal for peer review. The higher quality the journal, the more extensive and rigorous the peer review process, which again is super important for being able to justify the claims made.
Usually the editor will make an initial review of the manuscript to see if it's of high enough quality to include in the journal and if it fits the journal's scope; if so, the manuscript is sent out for two to three peer reviews. These reviews are done by other experts, scientists in the field, who independently examine the evidence and the claims made to see if the claims are well supported. Peer review can be single-blind, meaning the authors' identities are revealed but not the reviewers', or double-blind, meaning neither the authors nor the reviewers know who the others are. This is done to help mitigate bias, since the authors and reviewers may know one another; especially in small fields, this is often the case. A reviewer will check to make sure that the proper methods were used, that the evidence generated is of good quality, and that it justifies the claims made by the authors. They'll also look for any evidence of academic misconduct. If the initial reviewers disagree on the quality of the paper, the editor may bring in more reviewers to independently assess the work. The journal editor then examines all the reviews, writes what's called a meta-review, and decides whether or not the article is acceptable for publication, and therefore inclusion in the scientific record. Typically there's at least one round of review and then re-review before an article is considered suitable for publication, because after each of these reviews, the authors go through another revision. Some journals are now starting to publish the names of the reviewers and the initial reviews alongside the published manuscripts for an additional layer of transparency. You'll also find conflict of interest statements as well as funding information included with good manuscripts. These are also good markers of the quality of the information the article contains.

Examining a work's history is important for ascertaining whether the data presented in it can be used to justify a claim. Even if you are outside a scientific field, knowing to look for the following markers of quality can help you decide if the information is good or not. First, always look at the authors' credentials and affiliations. Are they out of a reputable institution? Look for the funding information. Was it funded by a reputable funder? For example, federal-level grants often involve a detailed peer review process of their own prior to funding. Next, look for information on the monitoring editor, who made the final call on whether or not to publish the paper. What are their credentials and experience? Look for the conflict of interest statements. Is there anything that should concern you? Look and see if you can find information on the people who reviewed the paper. Can you read the reviews? Are the reviewers appropriately credentialed? How many times was the paper revised and re-reviewed before it was accepted?

How science knowledge is justified using evidence and data in a peer review process also ties into our last epistemological belief, the source of science information. The source of science information refers to beliefs about where knowledge comes from. Let's look at this in more detail. Where does science knowledge come from? Well, we have scientists, or some other authority figure we accept as having advanced science knowledge. I'll draw a scientist here and give her a lab coat and maybe a flask she can hold, so she looks very much like a scientist. Then there are the scientific journals.
This is where we would find our peer-reviewed literature. Let's say we have Science, Nature, and Cell; those tend to be the biggest molecular biology journals. We also have the media, so maybe we have a newspaper here like the New York Times. Maybe we'll draw an old-school television here with rabbit ears, and you're watching CNN or Fox News. Maybe you're just on your computer or on your phone; let's draw a little phone here and say that you're on a blog written by your neighbor. Or maybe you're just listening to your friend, who doesn't necessarily have any science credentials; we'll give our friend a little hat here. Who do we believe? Do we believe our friend, the scientist, the journals, the media, the blogs we read on the Internet?

This is where source becomes important, and it's also a very interesting place where bias can feed in again, including what's called in-group and out-group bias. If you identify with scientists, for example, scientists are more believable. If you think scientists are evil and out to make money, it's a lot easier to believe your friend or that random blog you found on the Internet, because you identify them as being part of your group. This also applies to political affiliation, which is why some people will only watch CNN and not listen to Fox News, or the reverse. It can also involve skin color, religious identity, gender identity, and cultural identity: we're more likely to believe information that comes from a source we can relate to, and things that come from people we think we can relate to.

Source relates to firsthand bias as well. If we have directly experienced something, or someone close to us has, we're more likely to give it additional weight. The problem with firsthand bias is that it can be difficult to acknowledge and catch our own biases, and it can be hard to make sense of the information by itself. Be aware the next time you see a horror story on social media: you don't know the context or the prevalence of that observation. Let's say you find 10 people and the same horrible thing happened to all of them. Is that 10 people out of millions? Out of thousands? The chances of it happening to you may still be less than the chance of being struck and killed by lightning. Correlation isn't causation, either. My favorite example is the correlation between getting up early and having a heart attack. Who gets up early in the morning? Older adults. And age is a major risk factor for having a heart attack. Another way of thinking about this is saying that people carrying umbrellas in the morning will cause it to rain in the afternoon. The two short sketches at the end of this lesson put rough numbers on both of these points.

In this lecture, we've discussed epistemological beliefs about science and given some examples of how they relate to our own lives. We talked about the certainty of science knowledge, the structures it forms in our minds, how science knowledge is justified with inquiry, evidence, and a peer review process, and finally, the source of science information. In our next video, we'll learn about another cognitive phenomenon: bias.
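As promised, here are those two sketches. First, the base-rate point. This is a minimal Python sketch where the population figure is a hypothetical placeholder, because the horror stories never tell you the denominator; it's chosen only to show how small a rate ten cases can represent.

```python
# Ten alarming anecdotes from social media, drawn from an unknown population.
incidents = 10
population = 2_000_000  # hypothetical audience size; the stories omit this number

rate = incidents / population
print(f"Rate: {rate:.4%}")                        # 0.0005%
print(f"About 1 in {population // incidents:,}")  # About 1 in 200,000
```

Without the denominator, ten stories can feel like an epidemic; with it, they may describe something vanishingly rare.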
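Second, the correlation-versus-causation point. Here's a minimal simulation with made-up probabilities in which age drives both early rising and heart attacks, while the two never affect each other directly.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Age is the confounder: it drives both variables, which never interact directly.
age = rng.uniform(20, 80, n)

# Older people are more likely to rise early (made-up probabilities).
early_riser = rng.random(n) < 0.2 + 0.6 * (age - 20) / 60

# Older people are more likely to have a heart attack (also made up).
# Note that early_riser appears nowhere in this line.
heart_attack = rng.random(n) < 0.01 + 0.2 * (age - 20) / 60

# The two variables still correlate, purely through their shared cause.
r = np.corrcoef(early_riser, heart_attack)[0, 1]
print(f"Correlation: {r:.3f}")  # small but reliably positive (about 0.06 here)
```

The code never lets early rising influence heart attacks; the correlation appears purely through the shared cause, exactly the umbrellas-in-the-morning, rain-in-the-afternoon pattern.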