Open Access (CC BY 4.0 license). Published online by De Gruyter Mouton, January 16, 2024.

“That’s just, like, your opinion” – European citizens’ ability to distinguish factual information from opinion

  • Andreas C. Goldberg and Franziska Marquart
From the journal Communications

Abstract

In the current media landscape, it is becoming increasingly difficult for citizens to rely on trustworthy information, not least because reliable facts are mixed with dubious claims, unsubstantiated opinions, or outright lies. The ability to distinguish factual from other types of mediated information is becoming increasingly crucial, but we know little about how well-equipped citizens are to make these distinctions. In an original survey study conducted in ten European countries, we asked respondents whether they considered six different statements relating to the European Union to be factual or opinion statements. Our results show that citizens have considerable difficulties in correctly identifying both factual information and opinions. Next to pre-existing judgements, we identify media-related, political, and sociodemographic factors that influence categorisation accuracy. We discuss our findings in relation to citizens’ perceptions of journalistic credibility and their information literacy as well as ongoing debates about the effectiveness of fact-checkers on social media.

Every man has a right to his opinion, but no man has a right to be wrong in his facts.[1]

– Bernard Baruch

1 Introduction

In a 2020 article published by the Poynter Institute, Eliana Miller asserts that “(r)eaders often can’t tell the difference” between opinion, news, and editorials (Miller, 2020). Beyond struggling to identify such genre differences in professional journalistic products, people often have difficulty differentiating between opinionated and fact-based claims (e. g., Graham and Yair, 2023; Merpert et al., 2018; Mitchell et al., 2018; Walter and Salovich, 2021). This is a particular challenge on social media platforms, where information is shared by both experts and laypersons. Notably, substantial parts of the population rely on social media to inform themselves about current affairs (Newman et al., 2021), and the line between factual information and opinions is increasingly difficult to draw. This may have severe consequences for citizens’ critical thinking and their behavioural responses to falsely categorised information, and it is also relevant when assessing the corrective impact of fact-checking messages: If confronted with opinionated statements (online), citizens may be more likely to take them at face value if they believe they present a fact, and even corrective attempts in the form of fact-checkers may fail to mitigate that initial effect (Walter and Salovich, 2021). We therefore need to understand what makes citizens more (or less) able to differentiate between factual information and opinions.

In this study, we rely on original survey data from ten European countries, collected in the wake of the 2019 European Parliament (EP) elections. We asked respondents to classify six political statements relating to the European Union (EU) as either a factual statement or an opinion, and to subsequently indicate whether they judged them as accurate (if categorised as a factual statement), or whether they agreed with them (if categorised as an opinion). We assess respondents’ ability to correctly differentiate between factual and opinionated statements and investigate which role sociodemographic indicators and relevant attitudes play for correct classification. Furthermore, we aim to understand whether biases in citizens’ categorisation affect the strength of their beliefs in the accuracy of and/or their personal agreement with the statements. Our findings contribute to an understudied area of public opinion research on the underlying causes for political misperceptions and “fake news” susceptibility.

2 The distinction between factual information and opinions

Most attempts to distinguish between factual information and opinions highlight that the former can be “proved or disproved based on objective evidence” (Mitchell et al., 2018, p. 6), while “an opinion can be neither right nor wrong” (Cohen et al., 1989, p. 13). Unlike factual information, opinionated statements – particularly in the context of news media and reporting – are often considered to be “based on the values and beliefs of the journalist or the source making the statement” (Mitchell et al., 2018, p. 6); they “incorporate varying degrees of speculation, confidence, and judgment” (Schell, 1967, p. 5) or “depend on internal, subjective experience and preferences, as in the case of aesthetic judgments and personal taste” (Banerjee et al., 2007, p. 1084). The respective definitions by the Collins dictionary emphasise similar differences, with factual being defined as “concerned with facts or contains facts, rather than giving theories or personal interpretations” (Collins, n.d.-a), and opinions being “a belief not based on absolute certainty or positive knowledge but on what seems true, valid, or probable to one’s own mind” (Collins, n.d.-b). The reference to personal interpretations and to one’s own mind is especially crucial for opinions, in contrast to more objective factual information. Similarly, Zaller (1991, p. 1215) points out that “every opinion is a marriage of information and values – information to generate a mental picture of what is at stake and values to make a judgment about it.” What these distinctions have in common is an emphasis on evaluative judgments and thus subjective individual assessments in connection with opinions. In general, extant conceptual distinctions do not assume that factual statements cannot be contested; indeed, as discussions across the globe during the Covid-19 pandemic revealed, a statement such as “vaccines significantly reduce the risk of a virus infection” is far from uncontroversial in public opinion. However, for categorisation as factual, what matters is whether a statement can be proven right or wrong, whereas we usually assume that this is not the case for opinions (see also Edelsztein and Vázquez, 2021).

The differentiation between factual information and opinions is also central to journalistic norms and at the core of professional journalistic values: Survey data from the Netherlands, for example, show that audience members find it very important that news media “clearly separate facts and opinions” (van der Wurff and Schönbach, 2014; see also Karlsson and Clerwall, 2019). Usually, traditional media distinguish both by denoting opinions on, for example, designated pages (in newspapers) or announcing commentary in a separate section of an evening news show. However, these clear-cut lines become increasingly blurry in the online world, especially when the source of a story is difficult to identify (e. g., Morrow et al., 2022; Pennycook et al., 2020). Whether or not, for example, a Facebook post on vaccine efforts at the early stages of the Covid-19 pandemic came from a credible source and presented “checkable” factual information was not always easy to determine.

Along similar lines, Walter and Salovich (2021) argue that for political fact-checkers to work as intended (i. e., in correcting misinformation), media users need to be able to distinguish opinions from (verifiable) factual information. Since “[one] of the distinct aspects of fact-checkers is their exclusive focus on checkable political statements or claims that can be factually verified” (Walter and Salovich, 2021, p. 4; original emphasis), political opinions may bypass fact-checking, with potential detrimental consequences for public opinion formation. In their study, US citizens’ ability to differentiate between factual and opinionated statements moderated the effectiveness of a fact-checker: Only those individuals who could correctly identify a large share of checkable statements reported a decrease in the perceived accuracy of false information (Walter and Salovich, 2021). In contrast, lower levels of correct “checkability” classification led to a backfire effect, that is, these citizens perceived a higher accuracy of false information, particularly those who exhibited a political bias towards the initial message. However, Merpert et al. (2018) showed that a simple 15-minute online training exercise including examples and detailed explanations significantly increased participants’ ability to detect checkable facts (see also Edelsztein and Vázquez, 2021; Tully et al., 2020). Yet it is important to acknowledge that the distinction between matters of fact and matters of opinion is not necessarily clear-cut from the start (e. g., Banerjee et al., 2007; Schell, 1967; Walter and Salovich, 2021), and that it becomes particularly challenging to distinguish between both in a social media setting (Morrow et al., 2022).

Previous evidence indicates that media users, both online and offline, struggle to determine whether they are exposed to factual information or opinions: Distinguishing between both can be a major challenge, even for fairly well-educated adults (e. g., Graham and Yair, 2023; Kuhn, 2010; Merpert et al., 2018; Walter and Salovich, 2021). Data from a Pew Research study (Mitchell et al., 2018) reveal that US American adults are better able to identify opinionated (e. g., “Immigrants who are in the U.S. illegally are a very big problem for the country today”) than factual statements (e. g., “President Barack Obama was born in the United States”) relating to politics. While on average each factual and opinion statement was correctly classified by 70 % of respondents, the authors found large differences with regard to respondents’ ability to distinguish between both. Evidence from Argentina supports these results by showing that respondents correctly classified 69 % of statements to contain checkable facts (vs. non-checkable statements), with significant differences in this ability according to respondents’ demographic characteristics (Merpert et al., 2018).

To add to this scarce body of evidence, we first wish to establish to what extent such differentiating skills are prevalent in Europe as well, and ask:

RQ1: To what extent are people able to correctly distinguish between factual and opinion statements?

Research has shown that citizens’ political knowledge and interest as well as their gender, age, education, and media literacy affect their ability to distinguish the two types of statements. Citizens with a higher level of political knowledge and interest are better able to identify either; similarly, digital savviness (i. e., familiarity with digital media and technology) and higher media trust both increase the chances that citizens correctly classify factual information and opinions (Mitchell et al., 2018; see also Seo et al., 2020). In Argentina, men were more successful in identifying statements than women, and younger participants fared significantly better than their older counterparts, as did those with a comparatively higher formal education (Merpert et al., 2018). To extend this line of enquiry and gain a better understanding of the conditions that enable European citizens to distinguish between the two concepts, we pose three research questions. Even though we discuss why certain variables may influence people’s ability to distinguish between factual and opinion statements, we do not have strong theoretical claims for all of them. Further, because our research aims to explain people’s distinction skills as comprehensively as possible – that is, to identify all or at least many of the relevant explanatory factors – we focus on broader research questions instead of formulating hypotheses for some variables of interest but not for others.

First, we assess citizens’ trust in the news media and the extent to which they perceive the media to be agents of mis- and/or disinformation (Hameleers et al., 2022). Misinformation perceptions capture opinions about levels of untruthfulness in the media that may result from honest mistakes, whereas disinformation perceptions capture notions of deliberate manipulation by journalists and the media. Either could be a relevant predictor of citizens’ reliance on the news media to provide perceived factual knowledge. Along similar lines, Mitchell and colleagues (2018) found that US citizens with higher levels of media trust were more likely to correctly identify factual and opinionated statements.

We further ask whether the degree to which citizens rely on informal sources on social media may relate to their ability to differentiate factual information from opinions. The social media environment makes it increasingly difficult to identify false claims and fake news (Morrow et al., 2022; for a conceptual discussion of the “fake news” term see, e. g., Egelhofer and Lecheler, 2019). Findings by Luo et al. (2022) show that Facebook news headlines with more “likes” are perceived as more credible, and that study participants fare less well in detecting a fake news headline if it has a lot of likes. But it is particularly vulnerable populations (i. e., elderly and lower-educated citizens) that rely on informal contacts and judge them to be reliable sources of information on social media platforms (Seo et al., 2020), which shows that social endorsements (which of my friends like/share news online?) can play an important role (see also Turcotte et al., 2015). To take this information environment into account, we assess the relevance of informal social media sources (i. e., not professional news media) for respondents’ exposure to political news such as, for example, family and friends, celebrities, or YouTubers. Additionally, we expect levels of media literacy to determine citizens’ ability to distinguish factual information from opinion (e. g., Merpert et al., 2018; Seo et al., 2020; Tully et al., 2020). Summing up these potentially influential factors, we ask:

RQ2a: Which factors related to citizens’ use and perceptions of news media are associated with their ability to correctly distinguish factual from opinion statements?

Second, we wish to determine whether relevant political background variables might play a role and assess the influence of political knowledge, interest, efficacy, and respondents’ political orientation. Findings reported by Mitchell et al. (2018) indicate that higher levels of both political knowledge and political interest increase the likelihood that citizens in the US correctly classify either type of statements. The same positive relationship might be found for higher levels of internal political efficacy. Finally, we assess the role of respondents’ self-placement on the left-right political scale to account for potential biases as a result of political orientation, for instance to classify opinions that appeal to one’s political point of view as factual (see e. g., Mitchell et al., 2018; Taber and Lodge, 2006; Walter and Salovich, 2021). In line with this, Graham and Yair (2023) find evidence that partisanship affects citizens’ ability to distinguish factual and opinion statements. The authors determine that partisan differences in this area align with study respondents’ wish to express a response that is in line with their political orientation in the form of expressive responding. However, Graham and Yair’s studies are situated in a US and Israeli context, and it is unclear whether ideological leaning plays a similarly important role in the European context. Hence, we ask:

RQ2b: Which factors related to citizens’ political background are associated with their ability to correctly distinguish factual from opinion statements?

Lastly, we assess the influence of individuals’ sociodemographic backgrounds: Going beyond the common consideration as mere controls, various extant studies show the relevance of sociodemographic factors in the context of correctly identifying types of information. For instance, evidence by Merpert et al. (2018) highlights the importance of gender, age, and formal education, with men, younger individuals, and those with higher levels of education scoring higher on the identification of checkable statements in Argentina (see Seo et al., 2020, for findings from the US). Whether these factors are similarly influential in the European context is again an empirical question, and we thus ask:

RQ2c: Which factors related to citizens’ sociodemographic background are associated with their ability to correctly distinguish factual from opinion statements?

3 Accuracy beliefs and personal agreement

In a next step, we aim to understand whether respondents’ classifications of factual information and opinions also affect the strength of their beliefs towards the accuracy of and/or their personal agreement with the respective statements – or, to be more precise, whether this process works the other way around. We assume that European citizens who categorise a statement as factual also believe it to be accurate or true, whereas they are more likely to dismiss presumed opinionated statements because they do not share the expressed viewpoint (see also Graham and Yair, 2023). In other words, people’s bias in favour of or against a view expressed in the statement under question will influence their perception of whether it constitutes factual information or an opinion. These assumptions align with the findings by Mitchell and colleagues (2018, p. 28): “Overall, Americans overwhelmingly tie the idea of news statements being factual with them also being accurate,” while “(…) seeing factual statements as opinions largely coincides with disagreeing with them.”

Take the aforementioned example of the sentence “vaccines significantly reduce the risk of a virus infection”: As public debates have shown, people may differ in evaluating the truthfulness of the statement and may not believe in its accuracy. However, independent of whether the statement is true or not, it can be checked and verified and would thus, following the definitions outlined above, constitute a factual statement. We posit that citizens who agree with the claim made in the sentence (i. e., find it to be accurate) will be more likely to categorise it as factual, whereas people who disagree with it will judge it to be an opinion. Following Taber and colleagues (2009) and Walter and Salovich (2021), we base these assumptions on the theory of motivated reasoning, which assumes that citizens’ pre-existing opinions will guide their processing and understanding of new incoming information if they follow partisan (as opposed to accuracy) goals in their political judgements (Taber and Lodge, 2006). Prior beliefs are then defended to the extent that new information will be dismissed (avoided, derogated, rejected; see Walter and Salovich, 2021) if it does not align with these beliefs (see also Gleadon, 1988). Furthermore, attitude-inconsistent new information is held to higher standards than congenial content, insofar as individuals will actively engage in more cognitive effort to disconfirm it (Taber et al., 2009). Along these lines, factual information about some political topics is more relevant for specific parts of the electorate than others, and political ideology can play a powerful role for motivated processing and reasoning of (political) arguments (Graham and Yair, 2023; Kuklinski et al., 1998; Taber and Lodge 2006).

We therefore assume that citizens will reject factual statements as “mere” opinion if they do not agree with them because it allows them to brush aside their importance and confirm their own personal existing beliefs (“That’s just, like, your opinion!”). In contrast, accepting an opinion statement as factual lends added weight to its importance and can reinforce the certainty in one’s own belief. As outlined above, these processes play a particularly crucial role when individuals are provided with information by the news media (Pennycook et al., 2020), which is why lower levels of media trust and heightened perceptions of media mis- and disinformation may be influential in this context: Individual biases in conjunction with scepticism towards journalistic professionals may lead citizens to brush aside information that does not align with their views. Against this background of politically motivated reasoning, we formulate the following hypotheses:

H1a: People who wrongly classify opinion statements as factual information are more likely to judge them as accurate compared to respondents who correctly classify factual statements.

H1b: People who wrongly classify factual statements as opinions are more likely to disagree with them compared to respondents who correctly classify opinion statements.

4 Data and methods

Dataset

The data for this study were collected as part of a larger multi-country survey project in the context of the 2019 European Parliament elections (EUROPINIONS; Goldberg et al., 2021). The project focussed on ten EU member states – Czechia (CZ), Germany (DE), Denmark (DK), Spain (ES), France (FR), Greece (GR), Hungary (HU), the Netherlands (NL), Poland (PL), and Sweden (SE) – which represent a variety of smaller and larger EU member states and are geographically spread across Europe. All country surveys were conducted by the company Kantar using Computer Assisted Web Interviewing (CAWI). Sampling quotas were enforced to ensure representative samples according to age, gender, region, and education (checked against information from national statistics bureaus or governmental sources).[2] We are not interested in specifying country differences regarding our research questions and hypotheses, but benefit from the comparative dataset, which provides larger and (more) representative samples than common experimental approaches focussing on single countries.

The data collection followed a panel logic, with at least one panel wave collected before and two waves after the 2019 EP elections in each country (see Goldberg et al., 2021 for more details). We embedded the key variables for our study, the distinction between factual and opinion statements,[3] in the final wave running in all countries between July 1–12, 2019. In addition to these survey items designed specifically for this study, we draw on earlier waves for various other variables necessary to examine RQ2a–c, but do not otherwise make use of the panel design. The numbers of respondents in the final wave (total N = 6643) per country are: NCZ = 733, NDE = 518, NDK = 563, NES = 557, NFR = 776, NGR = 494, NHU = 588, NNL = 1067, NPL = 857, NSE = 497.[4]

Factual vs. opinion statements

For the purpose of this study, we designed six statements for which the respondents were asked: “You will now see a series of statements that have been taken from news stories. Regardless of how knowledgeable you are about the topic, would you consider this statement to be a factual statement (whether you think it is accurate or not) or an opinion statement (whether you agree with it or not)?” (see also Mitchell et al., 2018; for an alternative approach see Graham and Yair, 2023). By adding the explanations in parentheses, we provided a minimal description as to how one may think about the difference between factual and opinion statements without providing a full-fledged definition or even concrete examples of the distinction. By not explicitly providing help, we aimed to uncover people’s real and unbiased ability to distinguish the two types of statements. Three of the statements were factual and three were opinion statements (presented in random order):

The opinion statements were:

  1. “Some member states contribute too little to the EU budget;”

  2. “Climate change is one of the biggest challenges for the EU today;”

  3. “A minimum wage of 10.50 Euro/hour across the EU is essential for the economy.”

The factual statements were:

  1. “The amount of refugees coming to the EU has decreased significantly since 2016;”

  2. “Turnout at the 2019 EP elections was the highest in 20 years;”

  3. “Spending on agricultural subsidies makes up the largest portion of the EU’s budget.”

We chose the statements to cover common topics and information that appeared in the context of the 2019 European parliamentary elections and related political debates. Following established definitions (e. g., Cohen et al., 1989), the factual statements include information that one can check and provide concrete evidence for. All three factual statements represent true facts, that is, we simplified respondents’ task somewhat by not presenting both true and untrue factual statements, which could have resulted in confusion among respondents.[5] The opinion statements represent viewpoints that cannot be proven by hard evidence, but one can certainly agree with the formulated opinions (see also Merpert et al., 2018). In line with Walter and Salovich’s (2021) recommendation, the six statements cover a variety of topics to arrive at more generalisable results not related to respondents’ knowledge of one specific topic only. However, all statements related to knowledge about and attitudes towards the European Union, which allowed for comparability between the ten countries. Furthermore, we did not provide any information about the source of the statements in order to exclude potential effects due to source credibility. The differentiation between the two types of statements is not equally clear-cut for all six statements, which adds to the ecological validity of our research, considering the complexity of news and information in the regular media environment. Our design hence presents a trade-off in external and internal validity, and we get back to this point in the discussion. For our main regression analyses, we calculate the sum of correctly classified statements (0–6 scale).
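To make the scoring step concrete, the following is a minimal sketch in Python of how the 0–6 correctness scale (and the two 0–3 sub-scales used later) could be computed. All item and column names are hypothetical placeholders, and respondents’ classification answers are assumed to be coded 1 = “factual statement” and 0 = “opinion statement”.

```python
# Minimal sketch (hypothetical item names): scoring the correctness scales.
import pandas as pd

ANSWER_KEY = {
    # true type of each statement: 1 = factual, 0 = opinion
    "stmt_budget": 0, "stmt_climate": 0, "stmt_minwage": 0,        # opinion items
    "stmt_refugees": 1, "stmt_turnout": 1, "stmt_agriculture": 1,  # factual items
}
OPINION_ITEMS = ["stmt_budget", "stmt_climate", "stmt_minwage"]
FACTUAL_ITEMS = ["stmt_refugees", "stmt_turnout", "stmt_agriculture"]

def correctness_scores(df: pd.DataFrame) -> pd.DataFrame:
    """Per-respondent counts of correctly classified statements."""
    correct = pd.DataFrame(
        {item: (df[item] == truth).astype(int) for item, truth in ANSWER_KEY.items()}
    )
    return pd.DataFrame({
        "correct_total": correct.sum(axis=1),                    # combined 0-6 scale
        "correct_opinion": correct[OPINION_ITEMS].sum(axis=1),   # 0-3 opinion sub-scale
        "correct_factual": correct[FACTUAL_ITEMS].sum(axis=1),   # 0-3 factual sub-scale
    }, index=df.index)
```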

Depending on whether respondents classified a statement as an opinion or as factual, they were further asked: “To the best of your knowledge, do you think the factual statement is accurate or inaccurate?” (for statements identified as factual), or “Do you tend to agree or disagree with the opinion statement?” (for statements identified as opinion). The respective answers offered a binary choice between inaccurate (0) and accurate (1) and between disagree (0) and agree (1), respectively. This setup asking for either accuracy or agreement based on identification by respondents and not on the real classification of a statement follows the design of the existing Pew Research study (Mitchell et al., 2018; see also Graham and Yair, 2023). We decided to not also ask for the “correct” follow-up questions – accuracy for actual factual statements and agreement for actual opinions – in order to not confuse respondents during the survey (i. e., to not indicate that they may have been wrong in their classification), which may have influenced their answering behaviour in the remaining classification tasks. It would further have been counterintuitive for respondents to be asked to rate agreement for a self-declared factual statement (and accuracy for self-declared opinions), and the resulting perceived logical flaws in the survey may ultimately have resulted in survey break-off. As a result, and following extant research, we consistently asked only one follow-up question after respondents’ assessment of each statement. Figure 1 presents an overview of the survey process in the form of a flow chart, visualising how respondents were guided through the questionnaire based on their responses.

Figure 1: Overview of classification tasks in the survey.
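As a minimal sketch of the routing logic visualised in Figure 1 (using the same hypothetical coding as above): the follow-up question depends solely on the respondent’s own classification of a statement, not on its true type.

```python
# Minimal sketch of the survey routing shown in Figure 1.
def follow_up_question(classified_as: str) -> str:
    """Return the follow-up item shown after a classification response."""
    if classified_as == "factual":
        # answered on a binary scale: 0 = inaccurate, 1 = accurate
        return ("To the best of your knowledge, do you think the factual statement "
                "is accurate or inaccurate?")
    if classified_as == "opinion":
        # answered on a binary scale: 0 = disagree, 1 = agree
        return "Do you tend to agree or disagree with the opinion statement?"
    raise ValueError("classification must be 'factual' or 'opinion'")
```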

Survey variables

We include five media-related variables in our analysis. The first one, media trust, is measured as a mean scale of eight agreement items. The exact wordings of all variables can be found in Table A1 in the Appendix. Self-perceived media literacy is measured by a single item representing agreement with the statement “I am able to distinguish between accurate and false information.” For respondents’ perceptions of misinformation and disinformation in the media, we rely on the two mean scale measures developed by Hameleers et al. (2022). The informal social media sources variable measures people’s exposure to political news via different potential sources such as family and friends, online connections, celebrities, Facebook groups or forums, and is the sum of the number of informal social media sources mentioned by the respondent (0–11).
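The following sketch illustrates how these media-related measures could be constructed; item names, and the number of items in the mis-/disinformation scales, are hypothetical placeholders rather than the exact survey variables.

```python
# Minimal sketch (hypothetical column names) of the media-related measures.
import pandas as pd

def build_media_variables(df: pd.DataFrame) -> pd.DataFrame:
    out = pd.DataFrame(index=df.index)
    out["media_trust"] = df[[f"trust_{i}" for i in range(1, 9)]].mean(axis=1)     # mean of eight agreement items
    out["media_literacy"] = df["literacy_self"]                                   # single self-report item
    out["misinfo_perc"] = df[[f"misinfo_{i}" for i in range(1, 4)]].mean(axis=1)  # mean scale (item count assumed)
    out["disinfo_perc"] = df[[f"disinfo_{i}" for i in range(1, 4)]].mean(axis=1)  # mean scale (item count assumed)
    out["informal_sources"] = df[[f"informal_source_{i}" for i in range(1, 12)]].sum(axis=1)  # 0-11 count
    return out
```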

For our political variables, we consider four different measures. First, we include internal political efficacy as a mean scale of agreement to four commonly used statements. Political interest (EU) was measured on a 7-point scale by asking “How interested would you say you are in EU politics?”. To measure political knowledge, we count the number of correct answers to seven factual knowledge questions about both national and EU politics (0–7). The self-placement of respondents on the common left-right scale is measured from 0 (left) to 11 (right). To capture both ideology and extremity, we include the left-right scale in its linear and squared form.
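A corresponding sketch for the political variables, again with hypothetical column names; the main point is that the left-right placement enters both linearly and squared to capture extremity.

```python
# Minimal sketch (hypothetical column names) of the political variables.
import pandas as pd

def build_political_variables(df: pd.DataFrame) -> pd.DataFrame:
    out = pd.DataFrame(index=df.index)
    out["pol_efficacy"] = df[[f"efficacy_{i}" for i in range(1, 5)]].mean(axis=1)  # four agreement items
    out["pol_interest_eu"] = df["interest_eu"]                                     # 7-point scale
    out["pol_knowledge"] = df[[f"know_{i}" for i in range(1, 8)]].sum(axis=1)      # know_i assumed 1 = correct answer
    out["left_right"] = df["left_right"]                                           # 0 (left) to 11 (right)
    out["left_right_sq"] = out["left_right"] ** 2                                  # squared term for extremity
    return out
```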

Finally, we include three sociodemographic variables. Education is measured via national education degrees and recoded into seven ordinal levels according to the common ES-ISCED coding scheme. Gender is measured via a female dummy, and age enters the models linearly. We further include country fixed effects to control for potential country-level differences.

Method

For our first research question and hypotheses we present descriptive statistics and related t-tests where possible. For the analysis of associations between media and political variables with the number of correctly classified statements in our second research question, we run OLS regression models. As one can debate whether the 0–6 scale of correctly classified statements qualifies as a linear measure, we repeat our analysis using ordinal probit regression models. To detect potential differences in the associations between the two types of statements, we run three models each, one with the combined 0–6 scale and two separate models for only the factual and opinion statements, respectively (0–3 scales). All these models include the previously mentioned country fixed effects.
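A minimal sketch of this modelling step using statsmodels, assuming a dataframe df that contains the constructed predictors, the correctness scores, and a country identifier (all variable names are hypothetical).

```python
# Minimal sketch of the modelling step; df is an assumed analysis dataframe.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.miscmodels.ordinal_model import OrderedModel

PREDICTORS = ("media_trust + media_literacy + misinfo_perc + disinfo_perc"
              " + informal_sources + pol_efficacy + pol_interest_eu + pol_knowledge"
              " + left_right + left_right_sq + education + female + age + C(country)")

# OLS models: combined 0-6 scale plus the two separate 0-3 sub-scales
ols_results = {
    dv: smf.ols(f"{dv} ~ {PREDICTORS}", data=df).fit()
    for dv in ("correct_total", "correct_opinion", "correct_factual")
}
print(ols_results["correct_total"].summary())

# Robustness check: ordinal probit on the combined scale (country dummies built by
# hand because OrderedModel estimates its own cut points instead of an intercept)
num_cols = ["media_trust", "media_literacy", "misinfo_perc", "disinfo_perc",
            "informal_sources", "pol_efficacy", "pol_interest_eu", "pol_knowledge",
            "left_right", "left_right_sq", "education", "female", "age"]
exog = pd.concat([df[num_cols],
                  pd.get_dummies(df["country"], prefix="cntry", drop_first=True)],
                 axis=1).astype(float)
endog = df["correct_total"].astype(pd.CategoricalDtype(ordered=True))
ordinal_fit = OrderedModel(endog, exog, distr="probit").fit(method="bfgs", disp=False)
```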

5 Results

Table 1 presents the proportion of respondents who correctly classified each statement. As the numbers illustrate, respondents struggled to correctly distinguish factual information from opinions, with several statements being correctly identified by only half of the respondents or even less. Only the statements about minimum wage (opinion) and turnout (factual) were correctly identified by more than two thirds of respondents. In comparison to the Pew research setup in the US (Mitchell et al., 2018), the number of correctly classified statements is (much) lower in our sample.

Table 1: Percentage of correctly classified statements per statement.

Opinion statements (correctly classified)
Climate change is one of the biggest challenges for the EU today. – 42.0 %
Some member states contribute too little to the EU budget. – 48.4 %
A minimum wage of 10.50 Euro/hour across the EU is essential for the economy. – 72.6 %

Factual statements (correctly classified)
The amount of refugees coming to the EU has decreased significantly since 2016. – 51.0 %
Spending on agricultural subsidies makes up the largest portion of the EU’s budget. – 56.8 %
Turnout at the 2019 EP elections was the highest in 20 years. – 68.2 %

Turning to the sum scores of correctly identified statements in Figure 2, we can see that the largest proportions of respondents correctly identified three or four statements (M = 3.39, SD = 1.41).[6] Around 7.5 % were able to correctly identify all six statements, while only 1.5 % did not correctly classify a single statement. Comparing the number of correct classifications of the three opinion statements (M = 1.63, SD = .90) with the correct classification of the three factual statements (M = 1.76, SD = 1.02) shows a significant difference, with respondents being better able to correctly classify the factual statements (t(6642) = 8.05, p < .001); yet the overall ability is still low.

Figure 2: Number of correctly classified statements.
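The comparison of the two sub-scales reported above corresponds to a within-respondent (paired) test; a minimal sketch, assuming the scores dataframe from the scoring sketch above.

```python
# Paired t-test of the factual (0-3) vs. opinion (0-3) sub-scores within respondents;
# `scores` is assumed to be the output of correctness_scores() from the earlier sketch.
from scipy import stats

t_stat, p_value = stats.ttest_rel(scores["correct_factual"], scores["correct_opinion"])
print(f"t({len(scores) - 1}) = {t_stat:.2f}, p = {p_value:.3g}")
```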

Next, we turn to the regression model results, displayed in Table 2, to analyse associations of media-related, political, and sociodemographic variables with the ability to correctly distinguish opinions from factual information. The first model focusses on the combined 0–6 scale of correctly classified statements as dependent variable. For the media variables (RQ2a), we see that higher perceived disinformation in the media and the use of a larger number of informal social media sources are negatively related to the ability to correctly distinguish factual information from opinions. We also observe two significant relationships for the political variables (RQ2b), with higher political efficacy and greater political knowledge positively linked to the ability to correctly identify factual and opinion statements. Looking at the sociodemographics (RQ2c), we find a positive association for higher educated respondents, and negative coefficients for female and older respondents, all in line with Merpert et al.’s findings (2018) in the Argentinian context.

Table 2: OLS regression models.

                                   (1) Combined        (2) Opinion scale    (3) Factual scale
Media variables
Media trust                        –.034 (.021)        –.067*** (.014)      .033* (.015)
Media literacy                     .016 (.014)         .002 (.010)          .014 (.010)
Misinformation perceptions         .021 (.021)         .001 (.014)          .020 (.015)
Disinformation perceptions         –.132*** (.020)     –.035** (.013)       –.097*** (.014)
Informal social media sources      –.017** (.006)      –.011** (.004)       –.006 (.004)
Political variables
Political efficacy                 .079*** (.016)      .044*** (.011)       .035** (.012)
Political interest (EU)            .007 (.012)         –.034*** (.008)      .040*** (.009)
Political knowledge                .113*** (.011)      .032*** (.008)       .081*** (.008)
Left-right                         –.023 (.022)        .004 (.015)          –.027 (.016)
Left-right squared                 –.001 (.002)        –.000 (.001)         –.000 (.002)
Sociodemographics
Education                          .119*** (.010)      .065*** (.007)       .054*** (.007)
Female                             –.151*** (.033)     –.067** (.022)       –.085*** (.024)
Age                                –.006*** (.001)     –.004*** (.001)      –.002* (.001)
Country fixed effects (ref. Netherlands)
Czechia                            –.173** (.064)      .100* (.044)         –.273*** (.047)
Denmark                            .128 (.069)         .012 (.047)          .116* (.050)
France                             .032 (.064)         .111* (.044)         –.080 (.047)
Germany                            .100 (.071)         .039 (.049)          .062 (.052)
Greece                             –1.021*** (.078)    –.281*** (.053)      –.740*** (.057)
Hungary                            –.294*** (.072)     –.183*** (.049)      –.112* (.052)
Spain                              –.792*** (.071)     –.223*** (.048)      –.570*** (.051)
Sweden                             .548*** (.072)      .282*** (.049)       .266*** (.052)
Poland                             –.190** (.065)      –.027 (.044)         –.163*** (.047)
Constant                           3.236*** (.154)     1.744*** (.105)      1.492*** (.111)
N                                  6643                6643                 6643
R2                                 .164                .060                 .163

Standard errors in parentheses; *p < .05, **p < .01, ***p < .001.

Some further interesting results appear in models 2 and 3 when separating the dependent variable into identifying exclusively opinion or factual statements. First, we largely observe the same significant associations in these two sub-models compared to the combined model. Second, we see a couple of additional significant coefficients that run in opposite directions between the two sub-models, which were hidden in the combined model. From the media variables, we now see opposing relationships for media trust, with higher levels of trust being associated with fewer correctly classified opinion statements, but more correctly classified factual statements. In other words, people with higher media trust tend to classify both types of statements more often as factual than people with lower media trust.

We similarly see opposing coefficients for political interest, with more interested respondents correctly identifying fewer of the opinion statements, but more of the factual statements. Again, this means that respondents with a higher level of political interest tend to classify our statements as factual more often. In contrast to these relationships, we do not observe any significant coefficients for self-perceived media literacy, misinformation perceptions, or left-right ideology (and extremity).

Our robustness checks confirm these patterns. First, Table A2 in the Appendix displays the same regression outcomes when using the alternative ordinal probit regression setup. Second, in the separate probit regression models (one for each statement) displayed in Table A3, we see the described patterns in several models, indicating that the relationships do not depend on the content of a single statement but are more universal. Still, some of the relationships are stronger (or weaker) for some statements than for others. The important aspect, though, is that the coefficients of the combined model(s) are not driven by only one of our six statements.
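A minimal sketch of these per-statement robustness models, assuming per-statement 0/1 correctness indicators and reusing the hypothetical PREDICTORS string from the modelling sketch above.

```python
# One binary probit per statement as a robustness check; the correct_<item> columns
# and the PREDICTORS string are hypothetical placeholders carried over from above.
import statsmodels.formula.api as smf

per_statement_fits = {
    item: smf.probit(f"correct_{item} ~ {PREDICTORS}", data=df).fit(disp=False)
    for item in ("budget", "climate", "minwage", "refugees", "turnout", "agriculture")
}
for item, fit in per_statement_fits.items():
    print(item, fit.params.filter(like="disinfo"))  # e.g., compare one coefficient across statements
```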

Finally, turning to the perceived accuracy of (self-identified) factual statements and agreement with (self-identified) opinion statements, Table 3 displays the respective values across the six statements (the values represent the proportion of respondents who perceived the statements as accurate or agreed with them). We expected that respondents would be more likely to believe in the accuracy of wrongly classified factual statements – that is, the three actual opinion statements (H1a) – and more likely to disagree with the wrongly classified opinion statements – that is, the three actual factual statements (H1b). Taking the average of accuracy and agreement across the three statements each (the mean rows in Table 3) supports our expectations. First, respondents who wrongly classified opinion statements as factual are more likely to believe them (M = .70) than respondents believe the correctly classified factual statements (M = .66). The difference is much larger for the agreement measures: Respondents who wrongly classified factual statements as opinions tend to agree less with the statements (M = .25) than respondents agree with the correctly classified opinions (M = .54). Adding to this, the separate agreement with any of the (wrongly classified) factual statements is much lower (.19–.32) than with any of the correctly classified opinion statements (.48–.58). To be clear, due to the design of the classification tasks and the resulting different sample compositions for the follow-up questions (as either agreement or accuracy was asked depending on the self-classification of the statements by respondents; see Figure 1), we cannot run common statistical tests such as t-tests which would provide clear-cut answers for our hypotheses. Still, keeping this limitation in mind, in particular the sizeable differences in agreement support our expectations, especially H1b.

Table 3: Perceived accuracy of and agreement with the statements in % (incl. N).

Opinion statements
Some member states contribute too little to the EU budget. – Accuracy: .75 (N = 3427); Agreement: .58 (N = 3216)
Climate change is one of the biggest challenges for the EU today. – Accuracy: .77 (N = 3854); Agreement: .55 (N = 2789)
A minimum wage of 10.50 Euro/hour across the EU is essential for the economy. – Accuracy: .57 (N = 1819); Agreement: .48 (N = 4824)
Mean across opinion statements – Accuracy: M = .70; Agreement: M = .54

Factual statements
The amount of refugees coming to the EU has decreased significantly since 2016. – Accuracy: .61 (N = 3388); Agreement: .19 (N = 3255)
Turnout at the 2019 EP elections was the highest in 20 years. – Accuracy: .73 (N = 4529); Agreement: .32 (N = 2114)
Spending on agricultural subsidies makes up the largest portion of the EU’s budget. – Accuracy: .64 (N = 3773); Agreement: .26 (N = 2870)
Mean across factual statements – Accuracy: M = .66; Agreement: M = .25

Note: Accuracy or agreement were not asked about for each statement across all respondents, but for only one of the two depending on whether the statement was classified as an opinion or factual statement by the respondent. The number of respondents in parentheses shows per statement/row how the total of 6643 respondents distributed themselves into having answered the accuracy or agreement questions (depending on their previous classification of the statement).
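As a minimal sketch (hypothetical column names), the conditional proportions in Table 3 could be computed as follows, reflecting that accuracy is only observed for respondents who classified a statement as factual, and agreement only for those who classified it as an opinion.

```python
# Minimal sketch (hypothetical column names) of the conditional proportions in Table 3.
import pandas as pd

def conditional_means(df: pd.DataFrame, item: str) -> pd.Series:
    classified_factual = df[df[f"class_{item}"] == 1]   # respondents who called the statement factual
    classified_opinion = df[df[f"class_{item}"] == 0]   # respondents who called it an opinion
    return pd.Series({
        "accuracy_mean": classified_factual[f"accurate_{item}"].mean(),   # 0 = inaccurate, 1 = accurate
        "accuracy_n": len(classified_factual),
        "agreement_mean": classified_opinion[f"agree_{item}"].mean(),     # 0 = disagree, 1 = agree
        "agreement_n": len(classified_opinion),
    })

table3 = pd.DataFrame({item: conditional_means(df, item)
                       for item in ("budget", "climate", "minwage",
                                    "refugees", "turnout", "agriculture")}).T
```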

6 Discussion

In this study we examined European citizens’ ability to distinguish between factual and opinionated statements – as observable in news stories – and asked which factors make individuals more (or less) susceptible to attempts to present opinionated content as “factual news.” Departing from the argument that it is becoming increasingly difficult to tell the difference between either form of information, and pointing to the blurry boundaries between factual information and opinions in the online and social media environment (e. g., Morrow et al., 2022), we discussed the risks of citizens falling victim to “fake news” content, particularly if they struggle to recognise opinionated statements in the first place. Indeed, our findings show that such a differentiation presents a challenge to European citizens, especially for lower-educated, female, and older individuals, which aligns with prior research from other countries (Graham and Yair, 2023; Merpert et al., 2018; Mitchell et al., 2018; Walter and Salovich, 2021; Seo et al., 2020). In contrast, political efficacy and knowledge increase the likelihood that citizens correctly classify statements, and these effects hold beyond general education levels. In addition, we find that perceptions of disinformation (Hameleers et al., 2022) and a stronger reliance on informal social media sources (see also Luo et al., 2022) make the differentiation more difficult as well.

One may ask whether these results point to a larger pattern in the form of a general problem, where citizens are increasingly skeptical of professional journalistic content on social media. It is worth investigating whether previously learned media literacy skills are difficult to apply in the online environment, also because the latter does not follow the same rules and norms that apply for offline outlets. However, it is also important to point out that in our study a higher level of self-reported media literacy does not relate to the number of correctly classified statements. On the one hand, this seems surprising, given the effectiveness of even short training sessions to raise media literacy skills (Merpert et al., 2018; but see Tully et al., 2020). On the other hand, we assessed media literacy based on only one item asking about respondents’ self-reported ability to distinguish between accurate and false information, instead of more objective measures of media literacy. The self-reported measurement might be less suited to capture actual skills or may suffer from respondents’ over- or misestimation of their own literacy level, an assumption that is further supported by the comparatively low correlation (Pearson’s r = .110) between respondents’ media literacy and general education levels (see Table A4 in the Appendix). Clearly, more research is needed to assess which skills are decisive in this context, and how they can best be addressed in literacy training.

Neither perceptions of misinformation nor respondents’ position on the left-right ideological scale affect their ability to differentiate between factual information and opinions. The latter finding does not align with research on motivated reasoning and cognitive biases (see also Graham and Yair, 2023), yet caution is warranted when interpreting our statements as decidedly in favour of or opposing the EU. Similarly, research has shown that Eurosceptic attitudes can be located on both the left and right sides of the political spectrum (van Elsas and van der Brug, 2015), which makes it difficult to connect a clear ideological leaning to presumably politically motivated biases in our data. Finally, a large variety of political systems, including different competition structures between parties, were included in our pooled country sample; a simple distinction between left- and right-leaning citizens might be insufficient to capture this diversity and might be complemented by more detailed measures of party identification and related strength in future research.

At the same time, several significant relationships between our predictors and the identification of the two statement types only became visible when separating the correct cataloguing of factual information from opinions: Higher levels of trust in the news media, for example, make it more likely that European citizens correctly identify factual statements, and less likely that they recognise opinionated sentences (see Figure A2 in the Appendix). The results for political interest point in the same direction, indicating that respondents who trust the news media more and consider themselves more politically interested categorise more statements as factual in general. These findings correspond with Pennycook and colleagues’ (2020) “implied truth effect,” which shows that in the absence of contrary cues, people tend to take information provided by the media as true. Along these lines, we argue that opinions need to be labelled as such; otherwise, at least some citizens tend to automatically take them to be factual. These results indicate that a more skeptical news consumer may be better equipped to question the verifiability of information in the media, but also show that politically interested citizens might have a disadvantage when it comes to the same task.

We also tested whether citizens’ bias in favour of or against the view expressed in the statement in question influences their perception of whether or not it constitutes factual information or an opinion (Walter and Salovich, 2021). Our results show that respondents are slightly more likely to believe statements when they wrongly classified them as factual, in contrast to believing the correctly classified factual statements. This bias is even larger for agreement with opinions, that is, respondents agree much less with (factual) statements they wrongly classified as opinions compared to agreeing with correctly classified opinions. These findings are in line with the theory of motivated reasoning, in the sense that pre-existing opinions and judgements about certain topics influence the categorisation of information into factual information or opinions, especially the dismissal of factual information as “mere” opinion (Gleadon, 1988; Graham and Yair, 2023; Taber and Lodge, 2006; Taber et al., 2009).

Our study does not come without limitations. The selection of statements is linked to the EU and the 2019 EP elections, which increased comparability across countries. However, low levels of specific EU knowledge (e. g., Hobolt, 2007) may have made it more difficult to classify statements as factual or opinions. Thus, future studies should extend the investigation to a larger and more varied set of statements, ideally covering other topics as well. This may also enable a more detailed within-category analysis, given that correct classification rates in our study varied by 17 and 30 percentage points within the factual and opinion statements, respectively. One may also test statements embedded in whole news stories, instead of as isolated sentences. Such future endeavours may increase both the reliability of our results and the external validity of a related study’s design.

Second, although our study goes beyond extant single-country studies and investigates citizens’ ability to distinguish factual information from opinions across a larger set of countries, we did not explore country-specific differences in more detail. Yet, mean differences between countries in Figure A1 and the significant country fixed effects in our regression model (Table 2) highlight that it may be worthwhile to investigate such differences further. For instance, future research may explore whether country-specific settings related to the media system, the level of social media use, or the political context, e. g., regarding polarisation, may be influential. Ideally, such research would also cover a larger and more heterogeneous set of countries to represent the different parts and regional differences across the world. Similarly, our investigation into citizens’ ability to distinguish factual from opinion statements was explorative in nature and aimed at establishing the magnitude of the problem first, alongside examining some explanatory factors. We encourage future work to also consider outcomes of these processes, for example by assessing whether miscategorisations of opinions as factual information (and vice versa) affect political attitudes, voting behaviour, or citizens’ perceptions of their political institutions’ legitimacy. Finally, our explanations for the correct identification of statements as factual or opinions worked better for the factual statements (see lower R2 for the separate opinion model in Table A2), although we observed no discernible differences in terms of the significance of the included predictors. There may be other relevant factors that are crucial to explain the ability to correctly classify statements which we missed in this study.

Notwithstanding these limitations and recommendations for future research, our study provides an important addition to the extant literature by highlighting the limited ability of citizens to distinguish factual information from opinions across various countries and relying on larger survey samples (representative according to key sociodemographic variables) instead of commonly smaller experimental samples and/or more biased convenience samples. Awareness of these challenges among citizens may be especially relevant for efforts to combat misinformation using fact-checkers, as at least some of the related problems arise in the early stages of exposure to information. Our findings show that citizens may classify popular opinions as factual without them ever being checked for accuracy, while opinions are by definition impossible or at least hard to correct and thus escape the radar of fact-checkers.

Funding: This research was funded by the European Research Council H2020, Grant Number 643316.

References

Banerjee, R., Yuill, N., Larson, C., Easton, K., Robinson, E., & Rowley, M. (2007). Children’s differentiation between beliefs about matters of fact and matters of opinion. Developmental Psychology, 43(5), 1084–1096. https://doi.org/10.1037/0012-1649.43.5.1084

Cohen, J., Mutz, D., Nass, C., & Mason, L. (1989). Experimental test of some notions of the fact/opinion distinction in libel. Journalism and Mass Communication Quarterly, 66(1), 11–17. https://doi.org/10.1177/107769908906600102

Collins (n.d.-a). Factual. In Collins Dictionary. Retrieved September 4, 2023 from https://www.collinsdictionary.com/dictionary/english/factual

Collins (n.d.-b). Opinion. In Collins Dictionary. Retrieved September 4, 2023 from https://www.collinsdictionary.com/dictionary/english/opinion

Edelsztein, V., & Vázquez, C. (2021). Checkable nutrition: A scientific literacy experience for students. International Journal of Science Education, 43(5), 777–792. https://doi.org/10.1080/09500693.2021.1884315

Egelhofer, J. L., & Lecheler, S. (2019). Fake news as a two-dimensional phenomenon: A framework and research agenda. Annals of the International Communication Association, 43(2), 97–116. https://doi.org/10.1080/23808985.2019.1602782

Gleadon, T. W. (1987). The fact/opinion distinction in libel. Hastings Communications and Entertainment Law Journal, 10(3), 763–793.

Goldberg, A. C., van Elsas, E. J., Marquart, F., Brosius, A., de Boer, D. C., & de Vreese, C. H. (2021). Europinions: Public Opinion Survey. GESIS Data Archive, Cologne. ZA5553 Data file Version 1.0.0. https://doi.org/10.4232/1.13795

Graham, M. H., & Yair, O. (2023, March 29). Less partisan but not more competent: Expressive responding and fact-opinion discernment [Working paper]. https://m-graham.com/papers/GrahamYair_FactOpinion.pdf

Hameleers, M., Brosius, A., Marquart, F., Goldberg, A. C., van Elsas, E., & de Vreese, C. H. (2022). Mistake or manipulation? Conceptualizing perceived mis- and disinformation among news consumers in 10 European countries. Communication Research, 49(7), 919–941. https://doi.org/10.1177/0093650221997719

Hobolt, S. B. (2007). Taking cues on Europe? Voter competence and party endorsements in referendums on European integration. European Journal of Political Research, 46(2), 151–182. https://doi.org/10.1111/j.1475-6765.2006.00688.x

Karlsson, M., & Clerwall, C. (2019). Cornerstones in journalism. According to citizens. Journalism Studies, 20(8), 1184–1199. https://doi.org/10.1080/1461670X.2018.1499436

Kuhn, D. (2010). What is scientific thinking and how does it develop? In U. Goswami (Ed.), Blackwell handbook of childhood cognitive development (pp. 497–523). John Wiley & Sons. https://doi.org/10.1002/9781444325485.ch19

Kuklinski, J. H., Quirk, P. J., Schwieder, D. W., & Rich, R. F. (1998). “Just the facts, ma’am”: Political facts and public opinion. The Annals of the American Academy of Political and Social Science, 560(1), 143–154. https://doi.org/10.1177/0002716298560001011

Luo, M., Hancock, J. T., & Markowitz, D. M. (2022). Credibility perceptions and detection accuracy of fake news headlines on social media: Effects of truth-bias and endorsement cues. Communication Research, 49(2), 171–195. https://doi.org/10.1177/0093650220921321

Merpert, A., Furman, M., Anauati, M. V., Zommer, L., & Taylor, I. (2018). Is that even checkable? An experimental study in identifying checkable statements in political discourse. Communication Research Reports, 35(1), 48–57. https://doi.org/10.1080/08824096.2017.1366303

Miller, E. (2020, July 15). Opinion, news or editorial? Readers often can’t tell the difference. Poynter. https://www.poynter.org/reporting-editing/2020/opinion-news-or-editorial-readers-often-cant-tell-the-difference/

Mitchell, A., Gottfried, J., Barthel, M., & Sumida, N. (2018, June 18). Distinguishing between factual and opinion statements in the news. Pew Research Center. https://www.pewresearch.org/journalism/2018/06/18/distinguishing-between-factual-and-opinion-statements-in-the-news/

Morrow, G., Swire‐Thompson, B., Polny, J. M., Kopec, M., & Wihbey, J. P. (2022). The emerging science of content labeling: Contextualizing social media content moderation. Journal of the Association for Information Science and Technology, 73(10), 1365–1386. https://doi.org/10.1002/asi.24637

Newman, N., Fletcher, R., Schulz, A., Andı, A., Robertson, C. T., & Nielsen, R. K. (2021). Reuters Institute Digital News Report 2021 (10th ed.). Reuters Institute. https://reutersinstitute.politics.ox.ac.uk/sites/default/files/2021-06/Digital_News_Report_2021_FINAL.pdf

Pennycook, G., Bear, A., Collins, E. T., & Rand, G. (2020). The implied truth effect: Attaching warnings to a subset of fake news headlines increases perceived accuracy of headlines without warnings. Management Science, 66(11), 4944–4957. https://doi.org/10.1287/mnsc.2019.3478

Schell, L. M. (1967). Distinguishing fact from opinion. Journal of Reading, 11(1), 5–9.

Seo, H., Blomberg, M., Altschwager, D., & Vu, H. T. (2020). Vulnerable populations and misinformation: A mixed-methods approach to underserved older adults’ online information assessment. New Media & Society, 23(7), 2012–2033. https://doi.org/10.1177/1461444820925041

Taber, C. S., Cann, D., & Kucsova, S. (2009). The motivated processing of political arguments. Political Behavior, 31(2), 137–155. https://doi.org/10.1007/s11109-008-9075-8

Taber, C. S., & Lodge, M. (2006). Motivated skepticism in the evaluation of political beliefs. American Journal of Political Science, 50(3), 755–769. https://doi.org/10.1111/j.1540-5907.2006.00214.x

Tully, M., Vraga, E. K., & Bode, L. (2020). Designing and testing news literacy messages for social media. Mass Communication and Society, 23(1), 22–46. https://doi.org/10.1080/15205436.2019.1604970

Turcotte, J., York, C., Irving, J., Scholl, R. M., & Pingree, R. J. (2015). News recommendations from social media opinion leaders: Effects on media trust and information seeking. Journal of Computer-Mediated Communication, 20(5), 520–535. https://doi.org/10.1111/jcc4.12127

Van der Wurff, R., & Schönbach, K. (2014). Civic and citizen demands of news media and journalists: What does the audience expect from good journalism? Journalism & Mass Communication Quarterly, 91(3), 433–451. https://doi.org/10.1177/1077699014538974

Van Elsas, E., & van der Brug, W. (2015). The changing relationship between left–right ideology and euroscepticism, 1973–2010. European Union Politics, 16(2), 194–215. https://doi.org/10.1177/1465116514562918

Walter, N., & Salovich, N. A. (2021). Unchecked vs. uncheckable: How opinion-based claims can impede corrections of misinformation. Mass Communication and Society, 24(4), 500–526. https://doi.org/10.1080/15205436.2020.1864406

Zaller, J. (1991). Information, values, and opinion. The American Political Science Review, 85(4), 1215–1237. https://doi.org/10.2307/1963943

Published Online: 2024-01-16

© 2023 by the authors, published by De Gruyter.

This work is licensed under a Creative Commons Attribution 4.0 International License.
