Introduction

There is great variability in the way people reason. One common approach to understanding this variability is dual-process models, wherein reasoning is considered to be the product of two types of processes (Evans, 1989; Kahneman, 2011; Sloman, 1996; Stanovich & West, 2000). Type 1 processes are fast, intuitive, and effortless, and are often associated with spontaneous, heuristic responses. Type 2 processes are slower, require cognitive effort, and are usually associated with revising or inhibiting intuitive responses. However, while these models have been used to understand the origins of bad reasoning (i.e., reasoning that does not correspond to logical norms), they provide no clear way of understanding differences in processing information where conclusions cannot clearly be described as normative or not.

More recently, another model of reasoning, the dual-strategy model, has proposed the existence of two types of reasoners, providing a process-based account that is directly applicable to this problem. Based on an original idea by Verschueren et al. (2005), this model suggests that people generally use one of two reasoning strategies to process information: a statistical or a counterexample strategy (Markovits et al., 2013). When presented with information about a problem, a statistical reasoner will activate a broad network of knowledge and beliefs in order to generate an estimate of the likelihood of a conclusion that reflects the relative weight of available evidence. A counterexample reasoner focuses more on key aspects of a given problem in order to generate a representation of these, which, importantly, also places strong weight on potential counterexamples to any putative conclusion.

Studies have shown that reasoning strategy is a strong individual difference that not only impacts how people reason with typical deductive problems (Brisson & Markovits, 2020; Markovits, Brisson, et al., 2018), but can also predict individual differences on biases such as the tendency to use belief to judge the logical validity of a conclusion (Markovits, Brisson, De Chantal, & Thompson, 2017; De Chantal et al., 2019). In fact, a recent study has shown that strategy use is a strong predictor of susceptibility to a variety of reasoning biases, even when more traditional measures of cognitive capacity are used (Thompson & Markovits, 2021). Although these studies generally show that counterexample reasoners are more “logical” than statistical reasoners, this is not always the case (Markovits et al., 2016). Thus, the key difference underlying the two reasoning strategies is the way that information is processed.

This broader definition suggests that strategy use would predict differences in a much wider array of judgments. This is in fact the case. Strategy use has been shown to influence gender differences in both emotion processing (Markovits, Trémolière, & Blanchette, 2018) and mental rotation (Markovits, 2019), and has been shown to predict individual differences in a wide array of social biases (Gagnon-St-Pierre et al., 2020). These results suggest that the distinction between statistical and counterexample reasoners captures a basic difference in the way that people process a broad array of information. Although there are other components to this difference, the key aspect that is specifically examined here is best captured by the results of a recent study that looked at how people make inferences when given explicit statistical information (Markovits et al., 2018b). For example, an inference of the kind “If P then Q; P is true. Is Q true or not?”, which is a Modus ponens inference, was accompanied by statistical information stating that out of 1,000 observations, 990 times P and Q were both true, while 10 times P was true but Q was false. In this case, statistical reasoners used the preponderance of evidence to conclude that “Q is true” at a very high rate, while counterexample reasoners rejected the same conclusion at a very high rate, indicating that the latter clearly gave more weight to cases that were inconsistent with the conclusion. Similar but less extreme reactions were observed when the proportion of cases consistent with the conclusion decreased. In other words, when both are present, statistical reasoners are prone to weigh the preponderance of evidence more than potential counterexamples, while counterexample reasoners give relatively more weight to the latter.
This example is particularly interesting, since the “logical” response to the above-mentioned inference is to accept the conclusion that “Q is true,” which illustrates the idea that the distinction between strategies is not a matter of one being more “logical” than the other, but the way that each processes information (see also Markovits et al., 2016).

In the following study, we examine a specific hypothesis about the interaction between reasoning strategy and people’s susceptibility to belief in claims that are factually unknown, but which gain credibility by simple repetition, and for which a single disconfirming instance is presented. This is interesting for several reasons. First, there is no clear normative answer to the question of the believability of the claim. Both an increase in belief with repetition and a decrease with a disconfirming instance are reasonable under many sorts of approaches. Second, this question is timely: the dynamics underlying the effects of repetition on belief have become increasingly relevant, as people process a great deal of information through simple exposure via the internet. With parties now able to easily plant false information in the media landscape (Allcott & Gentzkow, 2017) and algorithmic technology allowing for targeted news (Schou & Farkas, 2016), it is important to better understand how people process such information. In the following, we focus on the effects of gist repetition on people’s belief in an unfamiliar (but empirically false) claim.

Previous research has mostly examined word-for-word repetition of a false claim and has shown that the believability of information increases with the number of times this information is repeated (Begg et al., 1992; Dechêne et al., 2010; Wang et al., 2016). Ecker, Lewandowsky, Swire, and Chang (2011) did find an effect of repetition when the repeated information was presented in varied contexts. Researchers have tried to prevent this effect by using warning tags (Pennycook et al., 2020) or retractions (Ecker et al., 2011a, b), with relatively little success (Ecker, Lewandowsky, & Apai, 2011; Johnson & Seifert, 1994). Stronger retractions do lead to greater reductions in belief, though even the strongest retraction does not completely undo the effects of repetition (Ecker, Lewandowsky, Swire, & Chang, 2011). More repetition of a false claim increases its believability and makes subsequent correction harder (Schul & Mazursky, 1990). Theoretical accounts of the effect of repetition on belief (the illusory truth effect) are of two types. The most common is the idea that repetition reinforces low-level memory traces, which become more easily accessible and lead to an increased perception of fluency (e.g., Alter & Oppenheimer, 2009; Begg et al., 1992). Another approach suggests that people might use repeated information to construct mental models (Johnson-Laird, 1983; Johnson & Seifert, 1994, 1999), which, when reinforced, become more resistant to change by discordant information. Both models predict that repetition should lead to increased belief, although it could be argued that the fluency explanation is more suitable for the case of verbatim repetition.

This study has two specific aims. First, we attempted to replicate the interactions between repetition and corrective information in a context that is representative of at least a significant subset of the ways that messages are transmitted. When social media are used to transmit messages, information is sometimes transmitted in identical form, which corresponds to the verbatim paradigm underlying most of the studies examining these effects. However, situations in which misinformation is communicated in different ways, maintaining the gist but not the surface form, are also frequent. On Twitter, for instance, it is against the platform’s policy to copy-paste an already existing tweet and post it again (although this does not apply to retweets). However, it is possible to use a different wording or emphasize a different portion of the initial tweet in order to repeat it. In fact, an analysis of one million tweets made by 1,500 accounts over a 3-month period found that 55% of the Twitter accounts repeated the gist of their tweets, and that on average, the second tweet (i.e., the repetition) received 86% as much visibility as the first one (Rey, 2014). These repetitions are usually embedded in an information flow, and are not necessarily specifically attended to. We therefore devised a novel method that consists of presenting a series of short paragraphs with variable foci. Participants were told that they would be asked a simple question about each paragraph immediately after reading it, as an attention check, and that at the end, they would be asked some (indeterminate) questions about paragraph contents, with no specification of which contents would be examined.
We hypothesized that, consistent with previous results involving verbatim repetition, belief in an unfamiliar (but empirically false) claim would increase with gist repetition, and that a single disconfirming piece of evidence would reduce belief, but that its effect would lessen with increased repetition of the false claim.

The second and most important aim was to examine a prediction derived from the dual-strategy model of reasoning. The paradigm that we have developed allows distinguishing between two effects: (1) the effect of gist repetition on belief and (2) the effect of a single disconfirming instance on belief. Previous studies have shown that both statistical and counterexample reasoners are equally capable of processing basic information related to a given problem, with both responding similarly to statistical information that translates into probabilities that a given conclusion is indeed true (Brisson & Markovits, 2020). This suggests that both statistical and counterexample reasoners should show increased belief with repeated presentation of the same information. However, our earlier analysis of the difference in reasoning strategies clearly suggests that counterexample reasoners give relatively more weight to potential exceptions than do statistical reasoners. This would predict that counterexample reasoners will be more strongly affected by a single disconfirming piece of evidence than statistical reasoners.

Method

Participants

A total of 2,122 participants (1,058 females, 1,064 males, mean age = 33 years, 8 months) were recruited on the Prolific Academic platform to complete this online study. There were approximately 150 participants for each of the seven conditions for each of the two contents. Previous studies have shown that approximately 100 of these will be classified as either counterexample or statistical reasoners, which gives about 200 participants per condition when the two contents are combined. G*Power 3 indicates that this number provides a greater than 90% chance of detecting a small to moderate interaction. All participants were paid £3 for their participation.

Materials

False-claim procedure

The basic design for the false-claim study was the following. Before starting the study, participants were told that if they did not perform above chance on the attention check questions asked after each paragraph, they would be eliminated from the study (in fact, no participants were eliminated; the mean rate of success was very high, at 92.6%, and fewer than ten participants had a success rate lower than 50%). Participants then received a series of 25 short paragraphs (between 59 and 69 words each). Immediately following each paragraph, they had to answer a yes-no question about some surface aspect of the paragraph (as an attention check). For example, the following (abbreviated) item was one of the filler items:

“About 90% of the world’s vaping and e-cigarette devices are designed and manufactured in 200 factories in Seoul, South Korea. […] The industry is in damage control mode after an outbreak of vaping-related respiratory ailments led to at least 1,000 deaths across the USA.”

The attention check question was: “Are vaping and e-cigarettes produced in Singapore?”

The sequence of paragraphs differed in the following ways. In the One iteration (Baseline) condition, all of the 25 paragraphs had different content, with the fourth of these being the unfamiliar False Claim (which was thus presented one time). In the Repetition conditions, some of the original paragraphs were replaced by repetitions of the gist of the False Claim (although never in the same words, see below), with a total of three (which were placed in the fourth, ninth, and 12th positions), four (in the fourth, ninth, 12th, and 15th positions), or five iterations of the False Claim (in the fourth, ninth, 12th, 15th, and 18th positions). In the Disconfirming conditions, participants received the same set of paragraphs as in the Repetition conditions, i.e., with three, four, or five iterations of the False Claim, but following the repeated False Claims, they also received a single paragraph that presented clear evidence disconfirming this (which was placed in the 15th, 18th, or 21st position, for the three-, four-, and five-iteration conditions, respectively). It should be noted that the design was such that each further iteration of the False Claim was closer to the end of the sequence, although having the strategy diagnostic task (which is used to categorize reasoning strategy) between this procedure and the final questions was designed to reduce any recency effect on the belief level in the False Claim.
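The paragraph placements described above can be made concrete with a short sketch. This is an illustrative reconstruction only; the function name and the "filler"/"claim"/"disconfirm" labels are ours, not part of the original materials:

```python
def paragraph_slots(n_iterations: int, disconfirming: bool) -> list:
    """Label each of the 25 paragraph positions (1-indexed) as a filler,
    a gist repetition of the False Claim, or the disconfirming paragraph."""
    claim_positions = {1: [4],
                       3: [4, 9, 12],
                       4: [4, 9, 12, 15],
                       5: [4, 9, 12, 15, 18]}[n_iterations]
    # The single disconfirming paragraph follows the last claim iteration
    # (15th, 18th, or 21st position for three, four, or five iterations).
    disconfirm_position = ({3: 15, 4: 18, 5: 21}[n_iterations]
                           if disconfirming else None)
    slots = []
    for position in range(1, 26):
        if position in claim_positions:
            slots.append("claim")
        elif position == disconfirm_position:
            slots.append("disconfirm")
        else:
            slots.append("filler")
    return slots
```

For example, `paragraph_slots(5, True)` yields a 25-slot sequence with claim repetitions at positions 4, 9, 12, 15, and 18 and the disconfirming paragraph at position 21.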

Two different versions of a false claim were used in this study. The first presented an unfamiliar (false) claim about the increased prevalence of a (real) condition called Hylophobia, while the second presented a (false) causal claim that ozone alleviates a (real) condition called Phantosmia. Half of the participants received one of the conditions involving Hylophobia, while the other half received one of the conditions involving Phantosmia.

The following is an abbreviated example of the false claim that ozone alleviates symptoms of Phantosmia:

“Phantosmia is a condition consisting of hallucinating smells. […] Scientists have speculated that this might be due to ozone, which is a component of air pollution.”

The attention check question was: “Does phantosmia involve olfactory hallucinations?”

The following is an abbreviated example of the Disconfirming claim that ozone does not alleviate symptoms of Phantosmia:

“Phantosmia is a condition that produces hallucinations of smells. […] Recently, a team of neuropsychologists […] showed that the observed effects of ozone were entirely due to other causes and that ozone levels had no effect on the symptoms of phantosmia.”

The following is an abbreviated example of the False claim that the prevalence of Hylophobia is increasing over time:

“Hylophobia, which is an irrational fear of trees, is not listed in the World Health Organization’s categorization of mental disorders. Child psychologists have argued […] that the phobia should be included in the diagnosis manuals.”

The attention check question was: “Does the term Hylophobia mean fear of water?”

The following is an abbreviated version of the Disconfirming claim that the prevalence of Hylophobia is not increasing over time:

“The European Psychological Association is arguing against the listing of "hylophobia" […] new data shows that contrary to previous results, the prevalence of this condition is decreasing. […]”

Belief evaluation questions

Following the strategy diagnostic, participants were given a set of seven statements and were asked to indicate whether they believed each statement to be true or false, and to indicate their level of belief on a scale ranging from 1 to 7. Of these, six statements were based on items seen in the 25 paragraphs, while one was a factually false item. The specific False claim that was examined was presented as the fifth item in this sequence.
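A minimal sketch of how these two responses can be collapsed into the single signed belief score analyzed in the Results (ranging from -7 to +7); the function and parameter names are illustrative:

```python
def belief_score(judged_true: bool, rating: int) -> int:
    """Collapse a true/false judgment and a 1-7 belief rating into one
    signed score: positive if judged true, negative if judged false."""
    if not 1 <= rating <= 7:
        raise ValueError("rating must be between 1 and 7")
    return rating if judged_true else -rating
```

A statement judged false with maximal confidence thus scores -7, and one judged true with maximal confidence scores +7.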

Strategy diagnostic

To assess individual differences in strategy use, participants were given the 13-item strategy diagnostic used in previous studies (e.g., Markovits et al., 2013). In this, participants were given conditional statements concerning an imaginary planet, and provided with explicit information about the relative co-occurrence of the antecedent and consequent, for example:

“Of the 1,000 last times that they have observed Trolytes, the geologists made the following observations:

910 times Philoben gas has been given off, and the Trolyte was heated.

90 times Philoben gas has been given off, and the Trolyte was not heated.”

They are then asked to evaluate the logical validity of a putative conclusion, such as

“Fact: A Trolyte has given off Philoben gas.

Conclusion: The Trolyte was heated.”

Of these inferences, five contained conclusions that were presented as highly probable (with a probability around 90%, although with potential counterexamples) and five presented conclusions that were not very probable (around 50%). Reasoners who reject all ten of these inferences, even when they are highly probable, as above, are classified as Counterexample reasoners. That is, they reject inferences when there is a potential counterexample available, even when the occurrence of the counterexample is improbable. Reasoners who reject the low probability items at least twice as often as the high probability items are classed as Statistical reasoners (this translates into a relative difference in acceptance rates of about 40%, which is the difference in the probability of the high vs. low items). Statistical reasoners are more likely to accept inferences that are highly probable than those that are less so, presumably because they base their evaluation of a conclusion on an assessment of the likelihood that the conclusion is true.
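The classification rules above can be summarized as a simple decision procedure. This is a simplified sketch in which counts of rejected high- and low-probability inferences (out of five each) stand in for the full diagnostic scoring; the function and category names are of our own choosing:

```python
def classify_strategy(high_rejected: int, low_rejected: int) -> str:
    """Classify a reasoner from rejections of the 5 high-probability
    and 5 low-probability inferences of the strategy diagnostic.

    - Rejecting all 10 inferences -> Counterexample reasoner.
    - Rejecting low-probability items at least twice as often as
      high-probability ones -> Statistical reasoner.
    - Anything else -> Other (excluded from the strategy analyses).
    """
    if high_rejected == 5 and low_rejected == 5:
        return "Counterexample"
    if low_rejected > 0 and low_rejected >= 2 * high_rejected:
        return "Statistical"
    return "Other"
```

For instance, a participant who rejected four of the low-probability but only one of the high-probability inferences would be classed as a Statistical reasoner, while one who rejected three of each would fall into the Other category.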

Procedure

After indicating their consent to participate in the study, participants were first given the False Claim procedure. Participants were given one of the two sets of paragraphs defined by Content (Hylophobia or Phantosmia) and one of the following Conditions (Baseline [one iteration], Repetition [three, four, or five iterations] or Disconfirming [after three, four, or five iterations]), see Table 1 for a summary of these conditions. It should be noted that the Baseline condition was not included in the preregistration. Following this, participants received the Strategy Diagnostic (which normally takes between 10 and 15 min), and then the Belief Evaluation questions. All participants were then asked to answer basic demographic questions pertaining to their age and gender. They were then presented with a debrief stating that a lot of the “news” presented in the study was in fact entirely invented and written by the authors.

Table 1 Experimental design: Each participant is given one of the following conditions with one of the two contents

Design

This study used two between-subjects variables. Participants were divided into one of two strategy use categories, Counterexample or Statistical. Each participant received one of 14 total conditions involving combinations of two possible contents (Hylophobia or Phantosmia) and one of seven possible conditions (Baseline [one iteration], Repetition [three, four, or five iterations] or Disconfirming [after three, four, or five iterations]).

Results

We first transformed the belief ratings into a single score, ranging from -7 (totally unbelievable) to +7 (totally believable). Before examining the way that strategy interacts with both repetition and disconfirmation, we examined belief ratings as a function of these factors in all participants. We first examined the effect of Repetition on level of belief in the false claim. We performed an ANOVA with mean Belief score as the dependent variable and Content (Hylophobia, Phantosmia) and Number of iterations of the False claim (one, three, four, five) as independent variables. This showed only a significant main effect of Number, F(3, 1219) = 30.75, p < .0001, ηp2 = .070. All post hoc analyses were performed with Tukey’s test with p = .05 (these analyses were not included in the preregistration). Mean level of Belief in the False claim was significantly lower with only one iteration (M = 1.30, SD = 5.12) than with three (M = 3.69, SD = 4.27), four (M = 4.30, SD = 3.72), or five iterations (M = 3.90, SD = 3.98). There was no significant difference among the means for three, four, or five iterations.

We then examined the effect of a single Disconfirmation on levels of belief as a function of Repetition of the False claim. We performed an ANOVA with mean Belief score as dependent variable and Content (Hylophobia, Phantosmia), Number of iterations of the False claim (three, four, five) and Condition (Repetition, Disconfirming) as independent variables. This showed significant main effects of Content, F(1, 1775) = 28.74, p < .0001, ηp2 = .016, Number, F(2, 1775) = 8.88, p < .0001, ηp2 = .010, and Condition, F(1, 1775) = 187.33, p < .0001, ηp2 = .095. There were significant interactions involving Number × Condition, F(2, 1775) = 6.22, p = .002, ηp2 = .007, and Content × Condition, F(1, 1775) = 43.75, p < .0001, ηp2 = .024.

Mean level of belief was higher with the Hylophobia content (M = 3.18, SD = 4.51) than with the Phantosmia content (M = 1.97, SD = 4.99). Mean levels of belief were lower with three iterations (M = 1.89, SD = 5.07) than with either four iterations (M = 2.67, SD = 4.70) or five iterations (M = 2.99, SD = 4.51), with no significant difference between four and five iterations. Mean levels of belief were higher in the Repetition condition (M = 3.97, SD = 4.00) than in the Disconfirming condition (M = 1.09, SD = 5.07).

Analysis of the Content × Condition interaction indicated that mean level of belief was lower with the Phantosmia content with Disconfirmation (M = -0.18, SD = 5.12) than with the Hylophobia content with Disconfirmation (M = 2.34, SD = 4.70), which was in turn lower than both the Hylophobia (M = 3.84, SD = 4.18) and Phantosmia (M = 4.09, SD = 3.82) contents with Repetition only, with no difference between the latter two.

Finally, analysis of the Number × Condition interaction showed that mean levels of belief were similar in the Repetition condition with three (M = 3.69, SD = 4.27), four (M = 4.30, SD = 3.72), or five (M = 3.90, SD = 3.98) iterations, which were higher than belief levels in the Disconfirmation condition. Among the latter, belief levels were lower with three iterations (M = 0.16, SD = 5.18) than with five iterations (M = 2.07, SD = 4.84), while belief levels with four iterations were intermediate (M = 1.04, SD = 5.02) and did not significantly differ from the other two.

Overall, these results show a clear pattern (see Fig. 1). They indicate that gist repetition does produce a large increase in levels of belief in the False claim when compared to the baseline (one iteration) condition. However, there is no significant difference between three, four, or five iterations. Examination of the effect of a single Disconfirmation shows a more nuanced pattern. As expected, disconfirmation does produce a clear decrease in belief levels. However, this effect is modulated by the number of iterations. Decrease in belief depends on this factor, so that disconfirmation has a larger relative effect with three iterations than with five iterations (with the effect on four iterations at an intermediate level).

Fig. 1
figure 1

Mean levels of belief in the false claim as a function of number of total presentations (1, 3, 4, 5) and condition (repetition, disconfirming). Error bars are confidence intervals

We then examined the way that Repetition and Disconfirmation interact with strategy use. In order to do this, we categorized performance on the strategy diagnostic in the following way (which is identical to the method used in previous studies, e.g., Markovits et al., 2013). We categorized participants who rejected all ten of the high and low probability inferences in the strategy diagnostic as Counterexample reasoners, giving a total of 768 participants (36.2% of the total sample). Participants whose rejection rate for the low probability items was at least twice that for the high probability items were categorized as Statistical reasoners, giving a total of 794 participants (37.4% of the total sample). All others were placed in an Other category (560 participants, 26.4% of the sample) and were not considered in this analysis. This gave the following numbers of participants with either Counterexample or Statistical strategies in the experimental conditions: Baseline (Counterexample = 113, Statistical = 135); Repetition condition: three iterations (Counterexample = 113, Statistical = 100), four iterations (Counterexample = 115, Statistical = 105), five iterations (Counterexample = 102, Statistical = 124); Disconfirmation condition: three iterations (Counterexample = 105, Statistical = 112), four iterations (Counterexample = 104, Statistical = 117), five iterations (Counterexample = 116, Statistical = 101). It should be noted that this method of categorizing participants has been used in many previous studies. A more recent extensive analysis (Thompson & Markovits, 2021) suggests that the Other category is composed of participants with changing strategy use and of a subset of participants who are unable to adequately understand the nature of the task.

We first examined the extent to which initial levels of belief found in the neutral condition varied by content or strategy use. We performed an ANOVA with mean Belief score as dependent variable and Content (Hylophobia, Phantosmia) and Strategy (Counterexample, Statistical) as independent variables for the False Claims in the Baseline (one iteration) condition. This gave no significant effects of Content, F < 1, Strategy, F = 1.2, or Content × Strategy, F < 1. Initial belief levels were similar for the Hylophobia claim (M = 0.93, SD = 5.19) and for the Phantosmia claim (M = 1.31, SD = 5.08). Counterexample reasoners showed similar levels of overall belief (M = 1.57, SD = 4.83) to Statistical reasoners (M = 0.71, SD = 5.35), although the former was somewhat higher than the latter.

We then examined the effects of Repetition and Disconfirmation as a function of strategy use and content (see Fig. 2). We performed an ANOVA with mean Belief score as the dependent variable and Content (Hylophobia, Phantosmia), Strategy (Counterexample, Statistical), Number of iterations of the False claim (three, four, five), and Condition (Repetition, Disconfirming) as independent variables. This showed significant main effects of Content, F(1, 1290) = 18.17, p < .0001, ηp2 = .014, Strategy, F(1, 1290) = 17.69, p < .0001, ηp2 = .013, Number, F(2, 1290) = 11.26, p < .0001, ηp2 = .017, and Condition, F(1, 1290) = 162.49, p < .0001, ηp2 = .112. There were significant interactions involving Number × Condition, F(2, 1290) = 4.82, p = .008, ηp2 = .007, Condition × Strategy, F(1, 1290) = 5.57, p = .018, ηp2 = .004, and Content × Condition, F(1, 1290) = 39.21, p < .0001, ηp2 = .030. There were no significant interactions involving Number × Strategy, F(2, 1290) = 1.76, p = .17, Number × Content, F(2, 1290) < 1, Strategy × Content, F(1, 1290) < 1, Number × Condition × Strategy, F(2, 1290) = 2.60, p = .074, Number × Condition × Content, F(2, 1290) < 1, Number × Strategy × Content, F(2, 1290) < 1, Condition × Strategy × Content, F(1, 1290) < 1, or Number × Strategy × Condition × Content, F(2, 1290) < 1.

Fig. 2
figure 2

Mean levels of belief in the false claim as a function of number of total presentations (1, 3, 4, 5), condition (repetition, disconfirming), and strategy (counterexample, statistical). Error bars are confidence intervals

Post hoc analyses were performed using the Tukey test with p = .05. Overall levels of Belief were higher with the Hylophobia content (M = 2.98, SD = 4.47) than with the Phantosmia content (M = 2.02, SD = 4.98). Overall levels of Belief were higher among Statistical reasoners (M = 3.00, SD = 4.74) than among Counterexample reasoners (M = 1.99, SD = 4.73). Analysis of the effect of Number of repetitions indicated that combined levels of Belief were greater with four (M = 2.72, SD = 4.66) or five presentations of the False claim (M = 3.05, SD = 4.41) than with three repetitions (M = 1.70, SD = 5.10). Overall levels of Belief were higher in the Repetition condition (M = 4.04, SD = 3.90) than in the Disconfirming condition (M = 0.95, SD = 5.05).

Analysis of the Content × Condition interaction showed that while there was no significant difference in overall levels of Belief in the Repeated condition (Hylophobia = 3.81, SD = 4.17, Phantosmia = 4.24, SD = 3.63), the level of Belief in the Disconfirming condition was greater for Hylophobia (M = 2.21, SD = 4.62) than for Phantosmia (M = -0.35, SD = 5.15).

Analysis of the Strategy × Condition interaction (see Fig. 2) showed that although Counterexample reasoners had lower levels of Belief in the Repetition condition than Statistical reasoners, this difference was not significant (Statistical: M = 4.25, SD = 3.99; Counterexample: M = 3.81, SD = 3.80). In the Disconfirming condition, Counterexample reasoners showed significantly lower levels of Belief (M = 0.14, SD = 4.87) than Statistical reasoners (M = 1.75, SD = 5.10). Finally, analysis of the Number × Condition interaction did not show any clear patterns and was not analyzed further.

Discussion

The results of this study show some clear patterns. First, analysis of all participants shows that gist repetitions do indeed increase the level of Belief compared to the baseline (one iteration) level, although there was no significant difference between three, four, or five repetitions. A single Disconfirming case following these repetitions does significantly reduce levels of Belief, consistent with previous results found by Ecker, Lewandowsky, Swire, and Chang (2011). Interestingly, the absolute effect of the Disconfirming case diminishes as the number of repetitions increases. More specifically, while three to five repetitions produce roughly equivalent levels of belief in the False claim, a single disconfirmation produces significantly lower levels of belief after three repetitions than after five repetitions (see Fig. 1). These results thus show that, similarly to what has been found with verbatim repetition, gist repetition also creates an increase in the level of belief in a False claim. However, the fact that belief levels after a single disconfirmation clearly vary as a function of the number of gist repetitions suggests that, at least in the present context, the strength of people’s beliefs appears to be more strongly related to how resistant these beliefs are to disconfirmation than to the absolute level of belief after simple repetition.

While this experiment was not designed to distinguish possible explanations as to why gist repetition might increase belief, this pattern of results does allow some conclusions about potential explanations of the effect of gist repetition that rely on alternative mechanisms. One such explanation could be that when an item is repeated, participants might recognize this and elevate their level of belief simply as a recognition that this item was particularly important. This kind of explanation is consistent with the observation that there is little effect of number of repetitions on belief levels. However, the fact that the impact of disconfirmation does reflect number of repetitions makes any such explanation unlikely.

The most important conclusion of this study concerns the effect of strategy differences as determined by the dual-strategy model. This model distinguishes between Counterexample and Statistical strategies. Statistical reasoners respond to the statistical preponderance of evidence, while Counterexample reasoners use the same information but are more attuned to potential counterexamples to a given conclusion. The results show that, as predicted, Counterexample and Statistical reasoners increase their levels of belief as a function of the number of gist repetitions in similar ways. However, what most clearly distinguishes them is their reaction to the Disconfirming statement. As predicted, Counterexample reasoners revised their levels of Belief significantly more in response to a Disconfirming statement than did Statistical reasoners, and this remained true at all levels of repetition.

This result is consistent with previous findings showing that Counterexample reasoners give more weight to potential exceptions than do Statistical reasoners (Markovits et al., 2018a, 2018b). It is also consistent with other results indicating that the two types of reasoner process information in different ways (Markovits et al., 2016). Statistical reasoners process information in a more associative way that leads to evaluations of conclusion likelihood that consider all sources of information equally. By contrast, Counterexample reasoners focus on key aspects of information, which they use to construct models that are particularly sensitive to exceptions. This leads the two types of reasoner to process similar information in different ways. For example, a recent study has shown that when given a text that supports the existence of gender differences, Statistical reasoners show higher degrees of sexist attitudes, while Counterexample reasoners do not (Gagnon-St-Pierre et al., 2020). Thus, one explanation of the present results would be that Statistical reasoners simply treat the single disconfirmation as another source of information, weighted equally to the repeated claims, while Counterexample reasoners construct a more focused model of this information in which the disconfirmation has a much stronger weight. Note that Statistical and Counterexample reasoners react quite similarly to simple repetition, so the differential weighting of information is concentrated in the disconfirmation.

It should be noted that factors other than strategy use have also been found to affect the extent to which people are susceptible to post-correction reliance on misinformation. More specifically, measures of cognitive capacity such as short-term memory (Brydges et al., 2018) and verbal intelligence (De Keersmaecker & Roets, 2017) have been shown to be related to the way that corrective evidence interacts with previously encountered false claims. Certainly, the relative impact of cognitive capacity and strategy use on the way that disconfirmations are processed is an open question, one that should be addressed in future studies. However, strategy use has been found to predict reasoning about belief-biased problems even when cognitive capacity is accounted for (De Chantal et al., 2019). A more recent study has shown that strategy use is a better overall predictor of “logical” reasoning than cognitive capacity (Thompson & Markovits, 2021). In fact, Markovits, de Chantal, Brisson, Dubé, Thompson, and Newman (2021) found that when people are given limited time to make inferences, strategy use remains a strong predictor of performance, while cognitive capacity does not. Thus, there is clear evidence from other forms of judgment that strategy use is an important individual difference over and above differences in cognitive capacity.

These results add to the growing body of evidence that individual differences in strategy use are an important determinant of information processing in a variety of contexts (e.g., Gagnon-St-Pierre et al., 2020; Markovits et al., 2018a, 2018b). They also add some weight to existing analyses of the effects of simple repetition on people’s beliefs in claims for which they have no real prior knowledge. In this study, we specifically examined the effects of repetitions of the gist of a False claim and of a single Disconfirmation. Overall, the results clearly show that gist repetition increases levels of belief, a useful generalization of results showing that repetition of an exact claim increases belief. This is particularly interesting since, while repeated presentation of a single headline or news item often occurs, many other sources of gist repetition, from rewording of a single story to hearsay transmission, are also frequent. These results also suggest that resistance to contradictory information may be a better indicator of the effects of repetition on levels of belief than absolute levels of belief after repetition. Finally, while generalizing to other contexts is difficult, these results indicate that the way people process both repeated misinformation and disconfirmation cannot be understood in a unitary way, but must be nuanced by the processing style of each individual. More specifically, they suggest that a single targeted disconfirmation should be more effective with Counterexample reasoners, while an approach involving repeated disconfirmations might be more effective with Statistical reasoners.