In this module, we're shifting gears from talking about straight biology content to talking about how cognitive phenomena influence how we engage with biology content, particularly as part of our daily lives. This is a very important skill when trying to think about science literacy. It is necessary for each of us to examine evidence and make decisions regarding many scientific issues: climate change, genetic technology, genetically modified organisms, conservation issues, just to name a few. The COVID-19 pandemic brought this to light in a very dramatic way. Not only were people forced to engage with scientific evidence to make decisions, but many were also exposed, for the first time, to how science really works in the real world. In this module, we'll be talking about science and decision-making. First, we'll talk about epistemological beliefs about science before moving on to motivated reasoning. Then we will end this module with applications of this material to help you hone or develop a skill set for evaluating scientific information in your daily life. To start, epistemological beliefs about science. Epistemology is a philosophical term referring to the study of knowledge. Epistemological beliefs about science are beliefs that we have about the nature of science knowledge and how that knowledge is generated through science inquiry. It's a really important part of science literacy. When we consider science literacy and science education in general, there are three main aims of science education. Content knowledge: what do we need to know about how science works in the real world? Process skills: how does one make a scientific argument? How does one conduct science inquiry? Finally, the epistemic and reflective processes. Depending on your background, you may have heard of nature of science principles instead of epistemological beliefs about science. 
Nature of science principles are the defining characteristics that make science different from other domains of inquiry. We talked briefly about the nature of science in the first course of this specialization. I like to think about epistemological beliefs about science and nature of science understanding as two sides of the same coin. What I argue is that what you believe is going to influence what you know, and what you know is going to influence what you believe, and that both what you know and what you believe are going to influence what you do in a science class and how you will do in that science class, as well as how you engage with science practices such as argumentation or inquiry. The interrelationship between these two is really important when we consider science education as well as general science literacy. Someone's beliefs are going to influence how well they learn in a science classroom and how they engage with science practices like inquiry. Beliefs also influence how we make decisions in our daily lives. The rest of this lecture is going to focus on epistemological beliefs about science. You should be able to pick out obvious parallels between the nature of science principles presented in the first course and the beliefs about the nature of science knowledge that I'm going to present here. There are four aspects that make up epistemological beliefs about science: certainty, structure, justification, and source. We'll start with certainty. Certainty refers to how confident we are in science knowledge. For example, it's easy to sit in any science or biology classroom, look at these gigantic textbooks (this one weighs about as much as a human baby does at birth), and say, "Well, science is just a collection of concrete facts," or, "Science only exists so we can generate facts to put in giant textbooks like this." Science knowledge is actually inherently uncertain. 
What I mean by that is that science knowledge is subject to revision in light of new evidence. One of the most famous illustrations of this was when the geocentric model of the solar system was replaced with the heliocentric model. If we look at this picture, in the top part we see the geocentric model. Geo refers to the Earth. Geocentric means that the Earth is in the center of the universe and that the sun and other planets revolve around the Earth. Now, think about this for a second. If you were to go outside for an afternoon and look up at the sky, you'd see the sun move through the sky during the day. If you go out at nighttime, you can see the planets moving through the sky as well. Without any other information, it really does look like the heavenly bodies are moving around the Earth and not the sun. This also illustrates what's called firsthand bias: we're more likely to believe something, or think it's a bigger deal, if we've experienced it firsthand. This changed when telescopes were invented, because telescopes generated a different line of evidence demonstrating that the heliocentric model is correct. That's shown in the bottom part of this figure. Rather than the planets and the sun revolving around the Earth, the Earth and the other planets are actually revolving around the sun. Helio refers to the sun as the center. The Catholic Church took particular exception to this evidence, and the scientist who put forth these ideas, Galileo Galilei, was accused of heresy and died while under house arrest. Practicing scientists, and, perhaps somewhat ironically, most of the religious people I know, are comfortable with uncertainty. 
However, when the general public hears uncertainty, when scientists use the mights, the coulds, the maybes (a class of terms linguists call hedges, used to convey uncertainty) in relation to science knowledge, it can sometimes send the message that scientists don't know what they're talking about, but hedging is actually the mark of a good scientist. Science knowledge is always subject to change in light of new evidence. It's part of the power of the scientific enterprise, something we emphasized in the first course. This is why scientists use hedges: it's an acknowledgment that what we accept as fact today may change in light of new evidence tomorrow. Some science educators strongly advocate for using hedges in science classrooms rather than concrete terms like "prove," "right," or "wrong." I've put a reference at the end of this lecture if you're interested in learning more. When we look at some public health issues, the issue of hedges becomes a little problematic. For example, the article linking the MMR vaccine to the development of autism spectrum disorders was retracted because new evidence came to light that the original data was fabricated. The article was retracted, meaning it was formally removed from the scientific record. However, if someone believes that science knowledge is certain, a set of facts that don't change, how do they make sense of a retracted article? Another interesting example followed in the wake of the COVID-19 pandemic: the debate surrounding mask wearing. Originally the public was discouraged from wearing masks. But then new evidence came to light suggesting that wearing masks is actually a very important way of preventing the spread of the COVID-19 virus. The public was then encouraged to always wear a mask when going outside the home. 
However, this ended up causing outrage, because without the understanding that science knowledge changes over time in light of new evidence, it ended up looking quite a bit more like political flip-flopping, as if scientists didn't know what they were talking about. In truth, it was an accurate reflection of the tentative nature of science knowledge. Now that we've discussed certainty, let's move on to structure. The structure of knowledge is also sometimes called the simplicity of knowledge. Structure refers to the degree to which we perceive knowledge as being interrelated, and how we connect various pieces of knowledge together. How we build these knowledge webs, these knowledge structures within our brains, is also influenced by another cognitive phenomenon known as motivated reasoning, which we'll explore more later in this module. Let's take an example of the structure of science knowledge: rising sea levels. On its own, it's one piece of information that's fairly straightforward. Sea levels are rising. However, it becomes more complicated when we realize that there are other pieces of information that are important for understanding why sea levels are rising. This includes integrating information about global temperature changes, other drivers of climate change, and how all of these things are interconnected with one another. Someone who believes the structure of science information to be straightforward is usually more likely to consider just one viewpoint, their own, and accumulate knowledge specifically around that viewpoint. That's where motivated reasoning comes into play, as well as firsthand bias. Let's take another example. Let's say you have a friend. Your friend was killed when they wrecked their car and it went underwater. They drowned because they couldn't get out of the car in time; they had their seat belt on. So you develop a knowledge structure that looks something like this. 
Let's say we have our water here, and we've drawn a car. I'm not an artist, but you get it. Here's our car, it's underwater: the submerged car, and drowning. Here's our little guy in the car, and he's wearing a seat belt. Seat belts are a liability. Then we make the conclusion that, again, here's another person wearing their seat belt, and we're going to say, well, seat belts are a liability, so I'm not going to wear them. We have this simple, straightforward, linear progression of ideas here, when reality is a lot more complicated than this. For example, in the United States, 38,000 people were killed in car accidents in 2019. Of those, 19,000 people were not wearing seat belts, and of those 38,000 people, 300 were killed in submersion deaths, most of which occurred in the state of Florida. If you're not familiar with US geography, Florida is a peninsula with water on three sides, and it includes the famous Overseas Highway, a long series of bridges that go over the ocean. So there are many more opportunities to drown in your car in Florida. Even so, it's still not that common. Going back to our drawing, we're going to add to this knowledge structure: this is rare, it doesn't happen very often, and not wearing a seat belt is much more likely to get you killed. So seat belts save lives, and it's actually more dangerous if I don't wear a seat belt, because if there's a car accident, I'm much more likely to die without a seat belt than with one. It's easy to swap this out with any hot-button issue you want. It becomes a matter of balancing risks: whether it's seat belts or vaccines or genetically modified organisms, are they safe enough? Do the benefits outweigh the costs? This is also an example of how motivated reasoning and bias can play out. 
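The arithmetic behind those figures can be sketched in a few lines of Python. This is just a back-of-the-envelope check using the approximate numbers quoted above, not an official risk analysis:

```python
# Back-of-the-envelope check of the US 2019 figures quoted in the lecture.
# Treat all numbers as approximate, as cited.
total_deaths = 38_000        # people killed in car accidents, US, 2019
unbelted_deaths = 19_000     # of those, not wearing seat belts
submersion_deaths = 300      # of those, killed in submerged vehicles

unbelted_share = unbelted_deaths / total_deaths      # half of deaths were unbelted
submersion_share = submersion_deaths / total_deaths  # well under 1% involved submersion

print(f"Unbelted share of crash deaths:   {unbelted_share:.1%}")
print(f"Submersion share of crash deaths: {submersion_share:.2%}")
```

The point of the comparison: not wearing a seat belt is associated with roughly half of all crash deaths, while the submersion scenario from the anecdote accounts for less than one percent of them.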
We see firsthand bias again, because if this is something that happened to someone you know, you're more likely to give it more weight because of the personal experience, and then, because of motivated reasoning, it's easier for you to find information that supports your viewpoint, whether that's "I'm not going to wear seat belts" or "I am going to wear seat belts." That's how all of this plays out with the structure of science knowledge when we're trying to make decisions. Now that we've talked about the structure of science knowledge, let's move on to justification. Justification of knowledge refers to which aspects of science knowledge we accept, how evidence is generated by science inquiry, and how that evidence is used to justify what we know. As we talked about in course 1, science inquiry isn't that cookbook-like formula that so many of us were exposed to in school. Due to various resource constraints, when we think about science, we usually think about that canned scientific method. In reality, a simple, standardized scientific method doesn't exist in practice, and over-emphasizing it gives people the wrong idea about science and how we use science to justify what we know. Authentic science inquiry, as we explored in the first course, is far more interesting. There are roadblocks and dead ends, twists and turns, random observations that lead to fruitful new directions. There's an important role for creativity; the role of other scientists in the process, of culture, of religious beliefs; these all influence the process of science. How do other scientists influence it? This is where peer review is used to independently assess the claims that scientists make and really determine whether the evidence they're using is actually good and whether it supports their claims. 
When someone has a novel or interesting finding, they write up what they did, what they found, and why it was important in a manuscript that they then submit to a journal for peer review. The higher-quality the journal, the more extensive and rigorous the peer review process. Usually the editor will make an initial review of the manuscript to see if it fits that journal, and if it's high enough quality, or at least appears to be, the editor sends it out to two to three peer reviewers. Peer reviewers are other expert scientists in the field who independently examine the evidence and claims made by the scientist. Peer review can be single-blind, meaning the author's identity is revealed but the reviewer's identity is not, or double-blind, meaning that neither the author nor the reviewer knows who the other is. This is done to mitigate bias, since authors and reviewers may know one another, and that may influence their assessment. When someone reviews a paper, they check to make sure that the proper methods were used, that the evidence generated is of good quality, and that it justifies the claims made by the authors. They also check to see if there's any evidence of scientific misconduct. If the initial reviewers disagree on the quality of the paper, more may be brought on to independently review the work. The journal editor examines all of the reviews, writes a meta-review, and decides whether or not the article is acceptable for publication and therefore inclusion in the scientific record. Typically, there's at least one round, if not more, of review, edits, and re-review before an article is suitable for publication. As we'll talk about later in this module, you can find this information on journal articles; it's included in the published record. 
Now some journals are starting to publish the names of reviewers, and the reviews that they wrote, alongside the published manuscripts as an additional layer of transparency and vetting of the quality of the paper. You'll also find conflict of interest statements as well as funding information on manuscripts. These will also tell you whether the information contained in the article is of good quality and whether there's possible bias to watch out for. Now, let's move on to the source of science information. The source of science information refers to beliefs about where knowledge comes from. Let's look at this in more detail. Where does science knowledge come from? Well, we have scientists, or some other authority figure that we accept as having some kind of advanced science knowledge. I'll draw a scientist here and give her a lab coat and maybe a flask she can hold, so she looks very much like a scientist. There are scientific journals, which is where we would find our peer-reviewed literature; let's label them Science, Nature, and Cell, which tend to be the biggest molecular biology journals. We also have the media. We'll draw a newspaper here, like the New York Times, or maybe an old-school television with rabbit ears, and you're watching CNN or Fox News. Maybe you're just on your computer or on your phone; let's draw a little phone here and say that you're on a blog written by your neighbor. Or maybe you're just listening to your friend who doesn't necessarily have any science credentials; we'll draw your friend, a little head, here. Who do we believe? Do we believe a friend, the scientists, the journals, the media, blogs that we read on the Internet? This is where it becomes important. It's also a very interesting place where bias can feed in again, including what's called in-group and out-group bias. If you identify with scientists, for example, scientists are more believable to you. 
If you think scientists are evil and out to make money, it's a lot easier to believe your friend or some random blog you found on the Internet, because you identify them as being part of your group. This also applies to political affiliation, which is why some people will only watch CNN and not listen to Fox News, or the reverse. It can also involve skin color, religious identities, gender identities, cultural identities. We're more likely to believe information that comes from a source we can relate to, and more likely to believe things that come from people we think we can relate to as well. Source also relates to firsthand bias. If we directly experience something, or someone close to us does, we're more likely to give that observation additional weight, even if it happens to be a fluke. The problem with firsthand bias is that it can be difficult to acknowledge and catch our own biases, and it can be hard to make sense of information by itself. Beware the next time you see a horror story on social media: you don't know the context or the prevalence of that observation. Even then, let's say you find 10 people and some horrible thing happened to all 10 of them. Is it 10 people out of millions? Out of thousands? Out of hundreds? You still may have a better chance of being struck by lightning than of having that horrible thing happen to you. Correlation isn't causation, either. My favorite example showing the difference between correlation and causation is the correlation between getting up early and having a heart attack. Who tends to get up early in the morning? Well, adults do, not teens. Who's also more likely to have a heart attack? Adults, rather than teens. So while the two are correlated with one another, one doesn't cause the other. Another way of thinking about this is saying that people carrying umbrellas in the morning cause rainstorms in the afternoon. The two are correlated, but one is not causing the other. 
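The early-rising and heart-attack example can be made concrete with a tiny, entirely made-up dataset. In this sketch, age is the hidden variable driving both behaviors, so the two appear correlated without one causing the other:

```python
# Toy illustration of the lecture's confounder example: age drives both
# early rising and heart-attack risk, so the two correlate without one
# causing the other. All numbers are invented purely for illustration.
people = [
    # (is_adult, gets_up_early, had_heart_attack)
    (True,  True,  True),
    (True,  True,  False),
    (True,  True,  True),
    (True,  False, False),
    (False, False, False),
    (False, False, False),
    (False, True,  False),
    (False, False, False),
]

def rate(rows, flag_index):
    """Fraction of rows where the given flag is True."""
    return sum(r[flag_index] for r in rows) / len(rows)

early = [p for p in people if p[1]]
late = [p for p in people if not p[1]]

# Early risers have more heart attacks in this toy data...
print("Heart attacks among early risers:", rate(early, 2))  # 0.5
print("Heart attacks among late risers: ", rate(late, 2))   # 0.0
# ...but only because early risers are mostly adults (the confounder):
print("Adults among early risers:", rate(early, 0))         # 0.75
```

Splitting the data by the confounder (adult versus teen) rather than by the correlated behavior is exactly the kind of second look that separates correlation from causation.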
We'll come back around to source, and how to evaluate sources, later in this module. In this lecture, we've discussed epistemological beliefs about science and given some examples of how these relate to our own lives. We talked about the certainty of science knowledge, how science knowledge forms structures in our brains, how we justify science knowledge through the process of inquiry to generate good evidence, and finally, the source of science information. We'll now turn to an examination of motivated reasoning before getting a chance to apply these principles to making sense of the science information that we run into in our daily lives.