1 Introduction

There has been increased attention paid to science denial in both educational and social contexts (Hansson 2017b; Liu 2012; Rosenau 2012). Science denial is defined as “the systematic rejection of empirical evidence to avoid [personally and subjectively] undesirable facts or conclusions” (Liu 2012, p. 129). Typical examples include the denial of climate change, the theory of relativity, evolution, scientific accounts of the origin of life, the causes of AIDS, the safety of vaccination, and the link between tobacco and disease. Science denial is a social phenomenon, and it is one form of pseudo-science (Bardon 2020). Another form is called pseudo-theory promotion. While science denial is coloured by a growing antipathy towards particular scientific theories and the rejection of some parts of science (e.g., denial of climate change, evolution, continental drift, the origin of life, or relativity theory), pseudo-theory promotion rests on attempts to construct and promote theories or claims of its own (e.g., transcendental meditation, astrology, herbal medicine, or iridology) (Hansson 2017b). Hansson (2017b, pp. 43–44) outlined ten sociological characteristics shared by science denialists and pseudo-theory promoters, as listed in Table 1.

Table 1. Ten sociological characteristics of science denialists and pseudo-theory promoters (adapted from Hansson 2017b)

Science denial differs slightly from pseudo-theory promotion (Hansson 2017b). The most important difference is that while the fabrication of false controversies is standard practice in science denial, most cases of pseudo-theory promotion do not engage in producing fake controversies (Hansson 2017a). Instead, pseudo-theory promotion tends to avoid controversies with science and describes its claims as compatible with and conforming to science (Hansson 2017a, b). Distinguishing and comparing science denial and pseudo-theory promotion is key to this paper for two main reasons. First, this paper focuses only on science denial, because of the ongoing discussions around bringing science denial into classrooms (e.g., Boyle 2017) and the massive spread and acceptance of conspiracy theories about scientific phenomena (e.g., climate change, the origin of life, COVID-19) both in the public and in schools. Second, the discussion in this paper takes the characteristics of science denial into account to identify areas for both educators and researchers to focus on in responding to science denial in educational settings.

The purpose of this paper is to bring the backfire effect, within the context of science denial, to the attention of science education researchers and practitioners, and to discuss the potential role(s) of the epistemic understanding of knowledge production in science in dealing with the rejection of scientific evidence and claims in science classrooms. Rather than providing a road map or a list of tips and strategies to combat science denial and the backfire effect, I wish to take the reader beyond what I present and discuss here and to identify areas open for further exploration.

2 Correcting Misbeliefs?

Many people resist evaluating and accepting reliable scientific evidence. One reason for denying scientific evidence is that scientific ideas may threaten people’s beliefs, ideologies, and background assumptions, which are often wrong and misleading. For instance, “what predicts the denial of human-made climate change is not scientific illiteracy but political ideology” (Pinker 2018, p. 357). Adherence to personal beliefs and background assumptions, what Sandoval (2005) called personal epistemology, interferes with the acceptance of scientific facts and conclusions (Sinatra et al. 2014). One may ask whether we can change or correct people’s false beliefs. In principle, people should adjust their assumptions when they evaluate scientific evidence that challenges their beliefs. Is this always the case? The answer is no. In their review of the literature on correcting misinformation, Lewandowsky et al. (2012) showed that corrections rarely eliminate adherence to false beliefs and assumptions. They also argued that corrections remain ineffective even when people understand the retraction (Lewandowsky et al. 2012).

One reason why people fail to revise personal beliefs and assumptions is the backfire effect (Ecker et al. 2017; Swire et al. 2017). The backfire effect is a cognitive bias that causes people’s background assumptions to grow stronger when they encounter contradictory evidence (Nyhan and Reifler 2010, 2015). In other words, showing people scientific claims and evidence demonstrating that they are wrong is often ineffective because it leads them to support their original assumptions more strongly than they previously did (Nyhan and Reifler 2010; Trevors et al. 2016). The backfire effect is an important phenomenon because it derails the critical evaluation of evidence, and it lies at the very heart of how people negotiate between scientific ideas and their background assumptions (Sinatra et al. 2014).

In 2010, Nyhan and Reifler designed a study to test the backfire effect. The researchers created an article that included a common misconception about a political issue. Participants were first asked to read this fake article and then a second article that corrected it. Participants holding a particular ideological belief strongly disagreed with the corrective article and articulated even stronger belief in the claims of the fake article. In that study, corrections failed to reduce misconceptions among the targeted ideological group. The same researchers ran the same experiment with other controversial topics, such as tax cuts and stem cell research, and concluded that corrections contradicting participants’ beliefs caused those background assumptions to grow stronger (Nyhan and Reifler 2010).

The same researchers also conducted a study examining people’s beliefs about vaccination against the flu. They showed that when people who believe the vaccine is unsafe were provided with correct information challenging their beliefs, misconceptions about vaccination within that group increased (Nyhan and Reifler 2015). Another study examined parents’ intent to vaccinate their children (Nyhan et al. 2014). The researchers found that corrective information (pro-vaccination messages) decreased intent to vaccinate among parents who held the most negative attitudes towards vaccines. Nyhan et al. (2014) concluded that “respondents brought to mind other concerns about vaccines to defend their anti-vaccine attitudes, a response that is broadly consistent with the literature on motivated reasoning about politics and vaccines” (p. 840).

Supporting the findings of Nyhan and Reifler (2010, 2015) and Nyhan et al. (2014), other researchers have concluded that even when people understand the rationale for a retraction, corrections remain ineffective (Lewandowsky et al. 2012). Correcting widespread misinformation has little effect on the ways people act and think (Sides and Citrin 2007), and arguments that reinforce people’s background beliefs are favoured while those that contradict their views are disparaged (Taber and Lodge 2006). Additionally, Tippett’s (2010) review of research on refutation texts in science education showed that reading a text that explicitly challenges and refutes students’ naïve conceptions can improve their conceptual understanding, but that a refutation text alone is not enough to change students’ misconceptions.

On the other hand, some researchers (e.g., Crozier and Strange 2019; Haglin 2017; Wood and Porter 2017) have argued that the backfire effect is not as strong as had been claimed in the literature (e.g., Lewandowsky et al. 2012; Nyhan and Reifler 2015). Crozier and Strange (2019) found no evidence for a backfire effect in a study evaluating the effects of corrections on reliance on misinformation; rather, they found that corrections can decrease individuals’ reliance on misinformation. They also argued that the format of corrections (the frequency of exposure to the corrections, the simultaneous activation of the misinformation and its correction, etc.) plays a key role in their effectiveness (Crozier and Strange 2019). Replicating the Nyhan and Reifler (2015) corrective information experiment with a different population, Haglin (2017) likewise found no support for a backfire effect from corrections of misinformation and highlighted the importance of investigating the specific conditions and individuals involved when a suspected backfire effect occurs. Taken together, this literature suggests that more evidence is needed to determine whether corrections are a successful strategy for combatting misinformation and misbeliefs. To be clear, whether the backfire effect exists is not the focus of this paper. With the actual purpose of this piece in mind, I now turn to different forms of the backfire effect.

3 The Backfire Effect and Reasoning

Two forms of the backfire effect contribute to the denial of scientific knowledge: the familiarity backfire effect (Swire et al. 2017) and the overkill backfire effect (Ecker et al. 2019). The familiarity backfire effect occurs when frequent exposure to misinformation leads people to remember the misinformation itself rather than the fact that it is inaccurate (Swire et al. 2017). This effect can influence the way people respond to pseudo-scientific arguments (Hansson 2017b). The overkill backfire effect occurs when people reject multiple complex scientific explanations for phenomena that are difficult to understand and process (Ecker et al. 2019); people tend to prefer simple, easy explanations. When people are presented with a complicated scientific explanation, the overkill backfire effect may cause them to reject that explanation and stick to their simple misconceptions (Chater 1999; Lombrozo 2007).

The backfire effect explains why people confirm their own biases even after hearing scientific facts and observing scientific phenomena, and why they reject scientific information and create counterarguments against empirical evidence. It can also help us understand why the way science is traditionally taught is not successful at eliminating science denial. In a traditional classroom setting, students who deny scientific facts and conclusions are usually provided with complex explanations intended to convince them and correct their false beliefs and assumptions. Science instruction should instead encourage students, citizens of the future, to differentiate the selective use of evidence, what Hansson (2017b) called “cherry-picking” and what Sinatra et al. (2014) called “motivated reasoning”, from accuracy-oriented scientific reasoning. This does not mean that there is no motivated reasoning in science; Mizrahi (2015), for instance, discussed examples of confirmation bias from the history of science. Rather, it means that science instruction should emphasize the differences between deliberate thoughts and intuitive thoughts as students learn about methods of reasoning (Short et al. 2019).

The understanding of scientific reasoning is one of the three dimensions of scientific literacy (Fasce and Picó 2019). It means a public understanding of the way(s) scientific knowledge is developed in terms of the sociological, philosophical, and historical aspects of science (Fasce and Picó 2019). Students should understand scientific reasoning and be able to separate it from motivated reasoning. Scientific reasoning is logical in nature and rests on a few basic principles. There are three main ways of deciding how much confidence we should place in scientific explanations: deduction, induction, and abduction (inference to the best explanation) (Okasha 2002). These three forms of logical inference are important for understanding how we, human beings, think and how we make meaning out of the world around us. While reasoning, we look at premises and draw conclusions from them through deduction, induction, and abduction.

The first form of logical inference is deductive reasoning. With deduction, our conclusions must be true as long as the premises are true (Okasha 2002). Deductive inferences move from the general to the specific (Jaipal 2009). An example of deductive reasoning, or inference, in Okasha (2002, p. 18) is the following:

All Frenchmen like red wine.

Pierre is a Frenchman.

Therefore, Pierre likes red wine.

If the two premises are true, then the conclusion must be true as well. A notable feature of deductive inferences of this kind is that their premises are general while their conclusions are more specific.
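The validity of this syllogism depends only on its form. Rendered schematically (a supplementary notation offered here for illustration, not Okasha’s own), the argument is:

$$\forall x\,\bigl(F(x) \rightarrow W(x)\bigr), \qquad F(p) \;\;\therefore\;\; W(p)$$

where $F(x)$ stands for “$x$ is a Frenchman”, $W(x)$ for “$x$ likes red wine”, and $p$ for Pierre. Any argument of this form is valid regardless of what $F$, $W$, and $p$ stand for, which is why deductive validity is a matter of logical form rather than subject matter.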

The second form of inference is inductive reasoning. In induction, the premises do not entail the conclusion (Okasha 2002). Here is an example of inductive reasoning from Okasha (2002, p. 19):

The first five eggs in the box were rotten.

All the eggs have the same best-before date stamped on them.

Therefore, the sixth egg will be rotten too.

Even if the premises of this inference are true, the conclusion can be false. The reason is that we move from specific observations about objects or events we have examined (i.e., the first five eggs) to generalizations about objects or events that we have not examined (i.e., the rest of the eggs in the box).

With deduction, we can be certain that if we begin with true premises, we will arrive at a true conclusion. With induction, we cannot be so confident, because inductive inferences can take us from true premises to a false conclusion (Okasha 2002). Even though inductive reasoning is logically weaker than deductive reasoning, much scientific research, and much reasoning in everyday life, is carried out inductively. Consider the following examples from Okasha (2002). An example of inductive reasoning in everyday life is as follows.

… when you turn on your computer in the morning, you are confident it will not explode in your face. Why? Because you turn on your computer every morning, and it has never exploded in your face up to now. The premises of this inference do not entail the conclusion. (Okasha 2002, p. 20)

So how do scientists use inductive reasoning? Consider this example.

… geneticists tell us that Down’s syndrome (DS) sufferers have an additional chromosome. How do they know this? The answer, of course, is that they examined a large number of DS sufferers and found that each had an additional chromosome. They then reasoned inductively to the conclusion that all DS sufferers, including ones they had not examined, have an additional chromosome. (Okasha 2002, pp. 20–22)

Some philosophers have challenged the place of inductive reasoning in science: David Hume argued that inductive inferences cannot be rationally justified, because we cannot be sure that phenomena we have not experienced will resemble those we have experienced in the past, and Karl Popper denied that science relies on induction at all (Okasha 2002). Nevertheless, inductive reasoning remains a perfectly sensible way of forming beliefs about the world around us: although its conclusions are not guaranteed, they can be made quite probable.
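How probable? One classical way of putting a number on it (a supplementary illustration, not part of Okasha’s discussion) is Laplace’s rule of succession: if all $n$ cases observed so far have had a property, then, under a uniform prior, the probability that the next case has it too is

$$P(\text{next case has the property} \mid n \text{ cases observed with it}) = \frac{n+1}{n+2}.$$

For the egg example, observing five rotten eggs gives $6/7 \approx 0.86$ for the sixth egg: high, but short of the certainty that a valid deduction would deliver.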

The third form of logical inference is called abduction (inference to the best explanation). Like induction, abductive inference makes a jump beyond what the premises entail, and it is likewise fallible. Consider the following example that Okasha (2002, p. 29) offers:

The cheese in the larder has disappeared, apart from a few crumbs.

Scratching noises were heard coming from the larder last night.

Therefore, the cheese was eaten by a mouse.

In this case, the premises do not entail the conclusion, yet, given the available data, the inference is reasonable, and if we obtain more data, we can make the reasoning stronger. Scientists (as well as doctors and detectives) use abduction: they draw the conclusion that best explains a state of events from a set of possible scenarios, rather than one based solely on the evidence provided in the premises. Within this context, the explanatory power of scientists’ theories provides strong support for their claims. In addition to these forms of inference, many scientific laws and theories are expressed in terms of probability (probabilistic reasoning); Mendelian genetics, for example, holds that there is a 50% chance that any particular gene copy in your mother (or father) will be passed on to you. “Probability provides a continuous scale from poor theories with low probability to good theories with high probability” (Lakatos 1998, p. 22). The importance of probabilistic reasoning in understanding and accepting polarizing scientific ideas (e.g., evolution) is also highlighted in the literature (e.g., Fiedler et al. 2019; Lenormand et al. 2009).
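Abduction and probabilistic reasoning can be given a shared formal gloss (a schematic sketch offered here for illustration, not Okasha’s or Lakatos’s own formulation): among candidate hypotheses $H_1, \dots, H_k$ for a body of evidence $E$, inference to the best explanation can be read as preferring the hypothesis with the highest posterior probability,

$$H^{*} \;=\; \arg\max_{i}\; P(H_i \mid E) \;=\; \arg\max_{i}\; \frac{P(E \mid H_i)\,P(H_i)}{P(E)}.$$

In the larder example, “a mouse ate the cheese” wins because it makes the crumbs and the scratching noises far more probable than rival hypotheses (a cheese-loving burglar, say) while itself being reasonably probable. The Mendelian case sits at the simple end of this probability scale: each parent carries two copies of a gene and transmits one at random, so the chance that a given copy is inherited is $1/2$.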

Learning about the three forms of logical inference discussed above is important for distinguishing between motivated reasoning and scientific reasoning and for addressing science denial. As Hand et al. (1999) suggested, logical reasoning matters because “science distinguishes itself from other ways of knowing and from other bodies of knowledge through the use of empirical standards, logical arguments, and scepticism to generate the best temporal explanations possible about the natural world” (p. 1023). The way we make inferences through deduction, induction, and abduction shows that even though scientific knowledge is tentative and uncertain, it is highly probable, and it is subject to change as we collect more evidence (Hand et al. 1999; Okasha 2002). In contrast, motivated reasoning relies on selectively interpreting evidence and leads to preferred inferences.

Making logical inferences while evaluating claims and evidence is one of the core critical thinking abilities (Paul 1995). As one might infer from the nature of science literature, students have limited ability to evaluate scientific claims and evidence. One reason is that K-12 science instruction rarely engages students in the aspects of scientific inquiry and practice that involve evaluating the strengths and limitations of evidence and developing scientific arguments (Banilower 2019). Banilower (2019) reports an illustrative finding:

Fewer than a quarter of secondary science classes have students, at least once a week, pose questions about scientific arguments, evaluate the credibility of scientific information, identify strengths and limitations of a scientific model, evaluate the strengths and weaknesses of competing scientific explanations, determine what details about an investigation might persuade a targeted audience about a scientific claim, or construct a persuasive case. (Banilower 2019, p. 204)

An inability to make logical inferences may add strength to the backfire effect by encouraging the retrieval of thoughts that support one’s background beliefs and assumptions. In other words, “when we think we are reasoning, we may instead be rationalizing” (Mooney 2011, para. 11). Rationalization involves deciding what evidence to accept based on the preferred conclusion, that is, motivated reasoning (Bardon 2020). Scientific reasoning, in contrast, requires using critical thinking skills to determine which explanation(s) best answer our question on the basis of evidence (Lawson 1999).

As discussed earlier, when we encourage students to evaluate evidence that has the potential to threaten their background assumptions and beliefs, science denial might become more entrenched. One reason is that people tend to look for evidence that confirms their beliefs and background assumptions (Druckman and McGrath 2019). One may therefore ask whether, when teaching controversial topics in science, we should avoid discussing scientific evidence that may conflict with students’ worldviews in order not to enable science denial. How can science educators address science denial in the classroom? How can they make scientific claims and evidence “sticky”, so that students remember what they read or observe and try to evaluate their background assumptions? The answers to these questions are complicated. The following paragraphs discuss the intersections between the ways science should be taught and the suggestions for addressing science denial and the backfire effect.

4 Science Denial, the Backfire Effect and Science Teaching

Pedagogical suggestions for avoiding the backfire effect and dealing with science denial appear inconclusive and contradictory. Given the strong relationship between background assumptions and science denial or acceptance (Mazur 2004), Nyhan and Reifler (2010) and Cook and Lewandowsky (2011) suggested that when educators present counter-evidence, they should acknowledge students’ background assumptions (e.g., political ideologies, religious beliefs). On the other hand, some suggest discussing controversial issues while deliberately setting students’ background assumptions aside. Consider the following excerpt on the care needed when teaching about climate change:

… in a polarized political landscape, talking about politicians and the decisions they make is counterproductive. Students may put their guard up, thinking that I’m partisan, and tune me out when I’m lecturing about other things, such as climate modeling. So, I made a conscious decision to change my approach to teaching the subject. As part of my modified strategy, I joined a local bipartisan group that aims to bring people together by emphasizing the potential consequences, rather than causes, of climate change. (Kannan 2019, p. 1042)

This example suggests that leaving politics out of the classroom while discussing polarizing issues in science is considered an important way to prevent science denial and to avoid threatening students’ worldviews. So, should we acknowledge students’ background assumptions or not? It is not clear how educators should go about reconciling this conflicting advice in their classrooms.

Another example of contradictory advice to educators can be seen in Cook and Lewandowsky (2011). The authors suggested that if teachers aim to debunk misbeliefs about scientific phenomena, they should begin by emphasizing the scientific facts, not the misbeliefs; the goal should be to increase students’ familiarity with scientific facts (Cook and Lewandowsky 2011). Even though this advice may specifically combat the familiarity backfire effect discussed earlier, it still invites the more general backfire effect described by Nyhan and Reifler (2010, 2015) and Nyhan et al. (2014).

Moreover, when we compare what the literature says about how to teach science and what to teach about science with the suggested ways of avoiding the backfire effect and science denial, we see conflicting ideas. Duschl and Osborne (2002), for instance, argued that science instruction should focus on “how we know what we know and why we believe the beliefs of science to be superior or more fruitful than competing viewpoints” (Duschl and Osborne 2002, p. 43). Even though this statement points to the importance of the epistemic aspect of understanding scientific practices, it seems to neglect what might happen when students are told that the scientific way of knowing is superior to other ways of knowing, a message that may itself trigger a backfire effect.

Emphasizing the role(s) of an epistemic understanding of knowledge production in science might be a fruitful way to avoid the backfire effect when learning and teaching polarizing scientific issues. Using Duschl’s (2008) framing of the epistemic and conceptual aspects of science learning, I define the epistemic understanding of knowledge production in science as the consideration of multiple perspectives and contexts (social, cultural, historical, linguistic, etc.) while evaluating or challenging evidence and claims. The integration of the epistemic understanding of how to develop and evaluate scientific knowledge into scientific practices is one of the more important goals for science learning defined by Duschl (2008). This goal can be accomplished by facilitating a dialogical discourse through which learners have a chance to evaluate claims and evidence to make inferences about the natural world (Duschl 2020). Even though the literature on the importance of epistemic understanding in science classrooms is well established, its potential role in preventing or fostering science denial and the backfire effect has not been adequately discussed in the field of science education. Several areas deserve attention and investigation for their potential to combat science denial and the backfire effect while foregrounding the role(s) of the epistemic understanding of knowledge production in science instruction: expanding ways of knowing while marking the boundary between the scientific way of knowing and other ways of knowing, comparing claims and arguments that derive from different frameworks, teaching about the power and limitations of science, and bringing the different and similar ways science is done to students’ attention.

First, educators can encourage expanding ways of knowing while, at the same time, marking the boundary between the scientific way of knowing and other ways of knowing. Expanding ways of knowing involves acknowledging knowledge that is value-based and cultural, not only empirical. The scientific way of knowing produces knowledge (I will call this type of knowledge scientific knowledge) through specific practices (observation, experimentation, logical inference, etc.). Scientific knowledge tries to explain the natural world by focusing on individual parts. On the other hand, traditional knowledge, indigenous knowledge, or local knowledge (I use these terms interchangeably here) refers to other ways of knowing embedded in the cultural traditions, beliefs, and attitudes of specific communities. The production of this type of knowledge also includes observations, predictions, and problem-solving (Snively and Corsiglia 2001). However, the way traditional knowledge is produced is not always systematic, and traditional ways of knowing try to understand the natural world more holistically, by observing the interactions between all of the parts of a phenomenon. Consider an example. Cobern and Loving (2001) shared the following conversation between a researcher working at a scientific station on a South Pacific island and an indigenous islander:

The islander commented that Westerners only think they know why the ocean rises and falls on a regular basis. They think it has to do with the moon. They are wrong. The ocean rises and falls as the great sea turtles leave and return to their homes in the sand. The ocean falls as the water rushes into the empty nest. The ocean rises as the water is forced out by the returning turtles. (Cobern and Loving 2001, p. 51)

As another example of other ways of knowing, Foucault (1970) mentioned a Chinese encyclopaedia in which animals are divided into groups: “(a) belonging to the Emperor, (b) embalmed, (c) tame, (d) sucking pigs, (e) sirens, (f) fabulous, (g) stray dogs, (h) included in the present classification, (i) frenzied, (j) innumerable, (k) drawn with a very fine camelhair brush, (l) etcetera, (m) having just broken the water pitcher, and (n) that from a long way off look like flies” (p. 16). For yet another example, the Tao (or Yami) people, an indigenous group living on Orchid Island (Lanyu) near south-east Taiwan, have a different taxonomy in which fish are grouped into two main classes: edible and inedible fish (Wang 2012). The inedible fish include fish without scales, such as eels. The edible fish are further divided into old people fish (only to be consumed by elders), men fish (prohibited to women), and women fish (for all to consume). This kind of classification is based on the different purposes fish serve in the community. The indigenous classification method is motivated by the protection of natural diversity and ecosystems, while scientific classification aims to inform the user as to what the relatives of the taxon are hypothesized to be (M.-Y. Lin, personal communication, September 14, 2020). For instance, the reason the Tao people do not eat eels (and classify them as inedible) is that eels dredge the headwaters of the taro fields and hunt pests (Wang 2012). These three examples of other ways of knowing show that knowledge is produced within specific contexts, with specific purposes, and with specific methods.

The literature in the sciences and in science education has emphasized and valued expanding ways of knowing and marking the boundary between the scientific way of knowing and other ways of knowing, but without focusing on science denial and the backfire effect. As an example of acknowledging other ways of knowing, Behrens (1989) examined the correspondence between the soil categories of the Shipibo, an indigenous group in the Peruvian Amazon, and Western pedology (a branch of soil science) to understand soil-plant associations and agricultural productivity. There are also many studies on how educators can acknowledge different ways of knowing in their science teaching practices (see Barba 1995; Loving 1991; Ogawa 1995). Ogawa (1995), for instance, argued that bringing a multiscience perspective into science classrooms helps students hold more than one view simultaneously and discuss how and why some natural phenomena can be interpreted similarly or differently in different contexts. For another example, Loving (1991) proposed a model called the Scientific Theory Profile to help science teachers develop an understanding of the nature of science and evaluate scientific explanations and theories within cultural contexts. Even though these studies provide insights into what expanding ways of knowing might look like in practice and how it might facilitate the epistemic understanding of knowledge production in science, they do not discuss the risk that such approaches might foster, rather than prevent, science denial and the backfire effect.

Proponents of diverse perspectives in explaining natural phenomena argue that the scientific way of knowing and other ways of knowing should be viewed as co-existing or parallel rather than competing viewpoints (e.g., Cobern and Loving 2001; Snively and Corsiglia 2001). There is merit in this view: different ways of knowing might be useful in different social or cultural contexts and lead to different consequences and decision-making processes (Feinstein and Waddington 2020). It is also important to note that these different ways of knowing are not equivalent; knowledge-building traditions differ in their origins, practices, logical conclusions, rationales, and methods. The intent of this paper is not to settle whether other ways of knowing should be classified as scientific knowledge or science; the answers to this question in the science education literature are not in agreement with one another (for detailed discussions see Cobern and Loving 2001; Snively and Corsiglia 2001; Southerland 2000; Stanley and Brickhouse 1994).

5 Potential Impact on Students’ Learning

What we educators can do by expanding ways of knowing is to embrace epistemological pluralism while cultivating the ability to wisely differentiate scientific knowledge from other ways of knowing in light of logical inference, use of evidence, systematic observation, and so on (Cobern and Loving 2001). By doing so, educators provide a way of distinguishing reliable knowledge claims from unreliable ones (Laudan 1996). Different ways of knowing can contribute to our explanations of the world (Snively and Corsiglia 2001) and work in concert, because different ways of knowing may be important in different situations. Expanding ways of knowing gives students a chance to see how the practice of science may utilize the insights of another domain of knowledge (Cobern and Loving 2001). Science instruction should “value knowledge in its many forms and from its many sources” (Cobern and Loving 2001, p. 63) so that students feel free to bring different perspectives and ways of knowing into their classroom and discuss them.

Second, students should be able to compare claims and arguments that derive from different frameworks or domains of knowledge. To do so, it is important to know how to engage in scientific practices such as making inferences, generating and evaluating explanations, and making observations. Teaching students about “methods for posing questions about science, scientific models for serious thinking about science, understandings about aspects of scientific inquiry, and a sceptical orientation regarding ways that science is characterized in curriculum materials and instruction” might be a good way to guide them to develop and evaluate arguments and counterarguments (Kelly 2014, p. 1368).

Constructing a counterargument that successfully weakens the force of others’ arguments is a challenging task for students (Kuhn 2010). In her study, Kuhn highlighted two important implications for learning and teaching about scientific argumentation: (a) students should be encouraged to develop alternative arguments based on evidence rather than merely critiquing opponents’ arguments and threatening their beliefs and assumptions; and (b) there are two main ways of using evidence in argumentation: the support strategy, using evidence to support one’s own claim, and the challenge strategy, using evidence to challenge the other’s claim. Educators tend to avoid using the term argument in the classroom for fear that it carries negative connotations in students’ minds. However, developing arguments and counterarguments is a key component of critical thinking, and it creates an opportunity for students to exercise their skills of analysis, synthesis, and evaluation (Osborne and Patterson 2011). An example that fits this argument is the curriculum introduced in Finland in 2016, which requires students to think critically and to interpret and evaluate all the information they encounter across all subjects. Henley (2020) reports on how the national curriculum aims to accomplish this goal in Finland as follows:

In maths lessons, … pupils learn how easy it is to lie with statistics. In art, they see how an image’s meaning can be manipulated. In history, they analyse notable propaganda campaigns, while Finnish language teachers work with them on the many ways in which words can be used to confuse, mislead, and deceive. (Henley 2020, para. 4)

This is one way of providing students with the skills and methods they need to evaluate claims and evidence without provoking conflict or threat. As reported by Henley from his personal communication with Mikko Salo, a member of the European Union’s independent high-level expert group on fake news, “It’s about trying to vaccinate against problems, rather than telling people what’s right and wrong. That can easily lead to polarisation” (Henley 2020, para. 23).

Third, students should learn about both the power and the limitations of science in order to engage with the epistemic aspect of knowledge production in science. Even though the programme of study for 14–16-year-old students in England acknowledges that students should be taught about the “power and limitations” of science (Department of Education 2014, p. 5), it is argued in the literature that school science does not explicitly and efficiently teach that argumentation is associated with uncertainty, that is, with being unsure and lacking knowledge or evidence (Chen et al. 2019). Researchers have shown that an individual’s political attitudes, beliefs, and worldviews are strongly related to their level of tolerance of uncertainty (Jost et al. 2003; Pennycook et al. 2012). For instance, conservatives are less likely to tolerate uncertainty (Deppe et al. 2015). (A caveat should be noted: denial is not a problem only for conservatives. Kahan et al. (2011) found that liberals are less likely to accept a hypothetical expert consensus on nuclear waste disposal and handgun regulations.) Uncertainty is one of the factors that trigger the science denial educators encounter while teaching and learning about hot-button issues. Chen et al. (2019) proposed a way of productively managing uncertainty in the classroom: raising uncertainty (expressing confusion and seeking other ideas to problematize a phenomenon), maintaining uncertainty (facilitating a discussion through which students can deepen their scientific reasoning with evidence), and reducing uncertainty (synthesizing alternative ideas, looking for inconsistencies among them, and connecting them to each other). This approach helps teachers facilitate students’ epistemic understanding of knowledge production in order to manage uncertainty, and it discourages students from engaging in motivated reasoning.

Lastly, science educators can bring the different and similar ways science is done to their students’ attention to emphasize epistemic understanding. For instance, the historical sciences (e.g., palaeontology, historical geology, archaeology) and the experimental sciences (e.g., physics, chemistry, astronomy) use distinct ways of producing scientific knowledge and reasoning. Historical sciences focus on explaining observable phenomena in terms of unobservable causes by using retrodiction, abduction, reasoning from analogy, and multiple working hypotheses (Gray 2014). In contrast, experimental sciences make predictions and test them in controlled laboratory settings by focusing on hypotheses, experiments, controls, and variables. It is also important to highlight that even though historical hypotheses and methods are usually associated with fields such as palaeontology and archaeology, they also appear in geology, planetary science, astronomy, and astrophysics, for example in the continental drift, meteorite-impact dinosaur extinction, and big bang hypotheses (Cleland 2001). The epistemological and methodological differences and similarities between historical and experimental sciences matter because background assumptions and beliefs about historical science claims can have important consequences (e.g., creationist critiques of evolution) (Gray 2014). The fact that historical sciences cannot replicate unobservable causes in laboratory settings does not mean that the way historical scientists do science is inferior to the way experimental sciences produce knowledge and make inferences (Cleland 2001), nor that historical sciences are more subject to denial.

For another example of different ways of doing science, scientists working on the same problem and with the same data can arrive at different conclusions. In a recent study (Silberzahn et al. 2018), 29 research teams (61 researchers in total) from 13 countries, with backgrounds including psychology, statistics, research methods, economics, sociology, linguistics, and management, were given the same data set and asked to answer the same question: whether soccer referees are more likely to give red cards to dark-skin-toned players than to light-skin-toned players. Twenty of the teams found a statistically significant relationship between a player’s skin colour and the likelihood of receiving a red card; nine teams found no significant relationship at all. The researchers came to different conclusions because they used different statistical models and took different variables from the data set into account; their analyses involved somewhat subjective decisions about the best statistical model to use and about which variables to include. Silberzahn et al. (2018) concluded that “many subjective decisions are part of the research process and can affect the outcomes” (p. 354). As an important consequence, this variability in analytic approaches and conclusions is likely to affect decision-making processes. With this illustrative example in mind, it is important for teachers to consider the different analytical tools and methodologies used in science, and how these differences lead to diverse viewpoints, while they engage students in using and interpreting scientific evidence and making inferences in classrooms.
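Silberzahn et al.’s point is easy to demonstrate in miniature. The sketch below, in Python with entirely hypothetical numbers (not the study’s data), shows how two defensible analyses of the same counts can disagree: pooling across a confounding variable (here an invented “league” grouping) suggests a large disparity, while analysing within each league suggests none.

```python
# Toy illustration of analytic flexibility (hypothetical numbers, not the
# Silberzahn et al. 2018 data): whether an analyst adjusts for a confounder
# can flip the conclusion drawn from identical data.

# counts[league][skin_tone] = (red_cards, player_games)
counts = {
    "league_A": {"dark": (10, 1000), "light": (40, 4000)},
    "league_B": {"dark": (90, 3000), "light": (30, 1000)},
}

def rate(cards, games):
    """Red cards per player-game."""
    return cards / games

# Analyst 1 pools the two leagues and compares raw rates.
dark_cards = sum(counts[lg]["dark"][0] for lg in counts)
dark_games = sum(counts[lg]["dark"][1] for lg in counts)
light_cards = sum(counts[lg]["light"][0] for lg in counts)
light_games = sum(counts[lg]["light"][1] for lg in counts)
pooled = rate(dark_cards, dark_games) / rate(light_cards, light_games)
print(f"Analyst 1, pooled rate ratio (dark/light): {pooled:.2f}")   # ~1.79

# Analyst 2 compares rates within each league before concluding anything.
for lg, groups in counts.items():
    ratio = rate(*groups["dark"]) / rate(*groups["light"])
    print(f"Analyst 2, {lg} rate ratio (dark/light): {ratio:.2f}")  # 1.00
```

Neither analyst has made an arithmetic error; they have made different, defensible modelling choices, which is precisely why such choices need to be made transparent when students interpret evidence.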

The four areas discussed above are promising and open to further investigation to evaluate their potential to combat science denial and the backfire effect while facilitating the epistemic understanding of how we know and what we know about the natural world around us. These areas are important to focus on because they can address the sociological characteristics of science denial(ists), such as considering scientific theories as threats, finding scientific ideas difficult to understand, and disseminating false beliefs, assumptions, and ideologies in public (see Table 1), and because they provide some insights into how to deal with science denial and the backfire effect. For instance, expanding ways of knowing can take the familiarity backfire effect into account while providing students with diverse perspectives on the same phenomenon: encountering different ways of knowing gives students a chance to access and discuss a vast array of ideas instead of being exposed to the same (mis)beliefs repeatedly. Moreover, if students would like to challenge certain ideas, they need to learn how to develop counterarguments based on evidence rather than targeting ideas solely because they contradict their background assumptions. Additionally, teaching students how knowledge is produced (different ways of logical reasoning, different methodologies, etc.) before teaching them the scientific ideas themselves may prevent the overkill backfire effect; to do so, educators can explain why there are multiple explanations of the same phenomenon and why the ways science is done can be complicated processes that may lead to uncertainty or inconclusive evidence. Most importantly, zooming in on these four areas can provide learners, as scientifically literate citizens, with opportunities to reflect on their background assumptions, beliefs, ideologies, and cultural resources while negotiating and distinguishing between different ways of knowing and evaluating the credibility of claims and evidence.

6 Conclusions and Discussion

With a focus on science denial, this paper has brought the backfire effect to the attention of science educators and science education researchers and discussed the potential role(s) of the epistemic understanding of knowledge production in science in dealing with the rejection of scientific evidence and claims in science classrooms. To investigate those potential role(s) in confronting the denial of scientific ideas and mitigating the influence of the backfire effect, the paper suggests taking a close look at expanding ways of knowing while marking the boundary between the scientific way of knowing and other ways of knowing, comparing claims and arguments that derive from different domains of knowledge, recognizing the power and limitations of science, and learning about the different ways science is done.

Given these four areas for seeking effective ways of dealing with science denial in science classrooms, it may seem that the suggested areas for further exploration are grounded in the nature of science rather than in specific ways of combating the backfire effect. There are two main reasons for that. First, the literature on debunking misinformation and avoiding the backfire effect has offered contradictory advice (e.g., emphasizing scientific facts rather than (mis)beliefs vs. acknowledging students’ beliefs). This literature also falls short of providing educators with practical ways of implementing these strategies. For example, how can educators acknowledge students’ beliefs and values while presenting a counterargument or a scientific fact? How can educators balance a discussion of different ways of knowing without opening the door to science denial? What forms of knowing or knowledge production should be admitted to science classrooms? Should educators care about the correctness of different ways of knowing at all, or should they focus on how different ways of knowing are useful in different contexts?

Second, even though cognition-oriented research in science education (e.g., conceptual change pedagogies such as cognitive conflict pedagogies) has provided insights into how students reconstruct their knowledge and understanding (Chinn and Malhotra 2002; diSessa 1993; Vosniadou 2002), we still do not know what steps students follow to achieve a meaningful conflict as they reconstruct their prior knowledge, beliefs, and values (Limón 2001). For example, despite the fact that cognitive conflict, confronting learners with contradictory information, has a long history as a suggested strategy for supporting learning and teaching in science education, it has had less success in classroom implementation than expected and has produced conflicting results (e.g., Limón and Carretero 1997). One reason is that many educators do not know how to facilitate a meaningful cognitive conflict in classrooms (Limón 2001). Several models and theories of conceptual change focus only on the cognitive processes of individuals and underestimate the importance of epistemological beliefs, values, attitudes, and reasoning strategies (Limón 2001). Moreover, these models and theories seem to neglect the consequences of inducing conflict by providing anomalous and contradictory information, exactly the situations that can ignite the backfire effect. The perspectives from these two bodies of work, the literature on debunking misinformation and the literature on how students reconstruct their knowledge through meaningful conflict, may be complementary, but neither alone is sufficient to provide fruitful strategies for avoiding the backfire effect and science denial and for promoting meaningful conflict while learning and teaching about controversial issues in science.

With regard to the potentially fruitful areas discussed earlier, the epistemic understanding of knowledge production in science is not a panacea or a one-size-fits-all solution. It does, however, seem well suited to leading students to consider different perspectives and sources of knowledge and knowing on polarizing scientific issues rather than dismissing ideas that contradict their knowledge, beliefs, and values. Limitations exist in terms of the role of researchers and educators in addressing science denial and the backfire effect while facilitating the epistemic understanding of knowledge production, and there are important questions we need to ask and seek answers to. Do educators consider the importance of presenting relevant information to explain scientific phenomena in classrooms? Teachers who depend heavily on textbooks to teach science, for instance, might encounter issues related to the epistemic aspect of knowledge production in science: as Kuhn (1970) pointed out, textbooks are “persuasive” (p. 1), and what is described as science in textbooks does not fit the way science is actually done. One may also ask whether we teach students both scientific knowledge and the way that knowledge is produced. Teaching scientific knowledge before explaining how it is produced is a cart-before-the-horse approach. There is a need, then, for educators and researchers to be conscious of the backfire effect and of the nature of scientific knowledge, and to formulate a comprehensive approach to science denial. Moreover, educators and researchers should attend to students’ background assumptions in their specific contexts: the strategies for dealing with students’ assumptions and beliefs about electrons should differ from those for dealing with their beliefs about hot-button issues such as vaccination and global warming (Hodgin and Kahne 2018). It is important to consider different pedagogical approaches depending on whether students’ misbeliefs are caused by an absence of knowledge, by pseudo-theory promotion, or by antipathy towards scientific facts. Given the challenges of post-truth and science denial, it would be wise to develop well-focused and empirically grounded strategies to combat different types of unwarranted beliefs in order to produce satisfactory instructional outcomes (Fasce and Picó 2019).

Only a handful of studies, mostly in political science, have analysed the effects of attempts to correct misbeliefs and background assumptions, and they have produced contradictory findings. These studies also lack evidence on effective strategies for pedagogical implementation. Little is known about how science educators and researchers approach the backfire effect around polarising issues and science denial within the field of science education. Using the epistemic understanding of knowledge production in science, with a focus on avoiding the backfire effect, may increase the potential for science education research to produce fruitful strategies and democratic environments that promote divergent perspectives and deepen students’ understanding of how science works. There is a need for science education research to consider the consequences of the backfire effect and to develop a programme of research or a supplemental curriculum that helps students use critical and reflective thinking skills within a multidisciplinary context (e.g., natural sciences, political sciences, media and communication studies).