Open Access (CC BY 4.0 license). Published by De Gruyter Mouton, July 11, 2023.

Combatting disinformation with crisis communication: An analysis of Meta’s newsroom stories

  • Michaël Opgenhaffen


From the journal Communications

Abstract

This study examines how Meta, the company behind several social media platforms, communicates about the disinformation crisis. Social media platforms are seen as a breeding ground for disinformation, and companies like Meta risk not only suffering reputational damage but also being further regulated by national and international legislation. In this paper we treat the news stories that Meta posted on the topic of disinformation on its own website between 2016 and 2022 as crisis communication, and build on insights from this domain that discuss some key response strategies. We conclude that Meta’s communication can indeed be seen as crisis communication, and that it uses strategies such as addressing different stakeholders, sticking to key messages when discussing the interventions, and holding itself responsible for finding a solution rather than for the problem of disinformation itself. These insights contribute to understanding how Meta seeks to validate its legitimacy during this ongoing crisis, and how it engages in self-regulation.

1 Introduction

Meta, the company that runs the platforms Facebook, Instagram, and WhatsApp, has regularly been accused of being a perfect breeding ground for intentional (disinformation) or unintentional (misinformation) fake news posts reaching the platforms’ users. Since about 2016, the companies behind these and other platforms have come under fire for allegedly doing too little to stop the spread of disinformation on their platforms. Both national and international commissions and legislators are considering imposing obligations on social media platforms to combat disinformation, or have in the meantime already implemented such obligations to some extent (see, e. g., Helberger, 2020; Saurwein and Spencer-Smith, 2020 for an overview). This type of regulation could stipulate, for example, that the platforms are obliged to remove disinformation within a few hours on pain of a fine, or that they have to offer transparency regarding the number of fake news messages checked and removed. Within the context of, for example, the European Digital Services Act (DSA) and Digital Markets Act (DMA) regulations (e. g., Eifert et al., 2021; European Commission, 2023), the platforms will have to focus even more on fairness, accuracy, empowerment, and transparency. In short, the days when social media companies like Meta could do anything without being held accountable or without having to make interventions on their platforms seem to be quietly coming to an end.

However, we still do not know enough about how Meta communicates about the issue of disinformation and its responsibility. To the best of our knowledge, only Iosifidis and Nicoli’s (2020) study examined Meta’s communications on this topic, covering the 2016–2019 period. We build on this study by also including more recent years, and by linking Meta’s communication to theoretical insights around crisis communication and impression management. We argue in this paper that the problem of disinformation faced by Facebook and other social media platforms can be seen as a crisis that has a potential negative impact on the company’s reputation among its various stakeholders, and that communication from the company about this problem can therefore be seen as crisis communication. We discuss several crisis-response strategies and use these as a theoretical framework to study Meta’s communication via its own website. We focus on the stakeholders it addresses, on the key messages it sends out, on the extent to which it takes responsibility, and on techniques that can be seen as impression management strategies to reaffirm or restore the legitimacy of the company and its various platforms among stakeholders. One could even see the way Meta communicates about the crisis as a form of self-regulation by which it wants to demonstrate to legislators that it is in control of the problem (Iosifidis and Nicoli, 2020).

We begin the literature review by discussing why disinformation is so prevalent on social media. We then argue why disinformation in the context of social media companies like Meta can be seen as a crisis with multiple stakeholders. We end the literature review with a discussion of crisis communication as a form of impression management to cope with this crisis. Following this, we formulate the research question: In what way does Meta use forms of crisis-response strategies and impression management when communicating to its various stakeholders about the crisis of online disinformation?

We are convinced that this study provides the necessary insight into how Meta sells itself to the different stakeholders regarding the fight against disinformation, and how it anticipates future disinformation issues and possible further regulation.

2 Literature

Social media and disinformation

Based on recent events in which fake news has received a lot of media attention, such as the Covid-19 pandemic and the accompanying vaccination campaigns (e. g., Rocha et al., 2021), and Russia’s war in Ukraine (e. g., Kreft et al., 2023), one might get the impression that fake news is a recent phenomenon, but this is not the case. It has been around for much longer and has likely existed for as long as mankind (see, e. g., Cortada and Aspray, 2019 for an overview). But it does not seem wrong to say that, since 2016, the concept of fake news has been undergoing a kind of revival, and has attracted a great deal of interest from academia, politics, and the news media. One could argue that an article published by the online-only news medium Buzzfeed on 20 October 2016 was the start of the renewed attention to the concept of fake news. That day, the news site published an article about its own research showing that, in the context of the 2016 US presidential campaign, hyperpartisan Facebook pages were posting false and misleading information on a large scale (Silverman et al., 2016). Certainly when Trump was elected president a few weeks later, the debate about the role that Facebook – and by extension social media – play in the spread and success of fake news gained traction more or less all over the world. For example, 2016 and 2017 saw a spike in mainstream media coverage on fake news, and the volume of articles was incomparably higher than in the 35 years before (Al-Rawi, 2019). Google has also recorded an immense increase in online posts containing the term “fake news” since November 2016 (Kapantai et al., 2021). Since then, there has been a great deal of attention from academia to what has been labeled with the umbrella term “fake news,” both for deliberately false information created with the aim of misleading people, better known as disinformation, and for false information shared with little or no intention to mislead, better known as misinformation (see, e. g., Kapantai et al., 2021; Tandoc, Lim, and Ling, 2018 for a classification of the multitude of concepts). Moreover, the focus was no longer exclusively on political-ideological misreporting and began to shift to other topics such as health (e. g., Grimes, 2021) and climate change (e. g., Lutzke et al., 2019), but almost always in the context of social media as a breeding ground for this type of misinformation.

To explain the popularity and rapid spread of disinformation on social media platforms, the literature often refers to the algorithms that give certain posts extra visibility by injecting them into users’ timelines as, for example, a trending topic. In doing so, the algorithms prefer topics that show a sudden, steep spike in volume over topics that may have been discussed on social media for some time but without a spike in volume (Poell and van Dijck, 2014). This preference plays into the hands of spectacular, breaking, and even toxic news (e. g., Massanari, 2017) that is suddenly fiercely discussed within social media, and therefore elicits a rapid emergence and wide reach of bizarre messages containing disinformation. In addition, Stöcker (2019) refers to the monetization goals of social media platforms to capitalize on those types of metrics that favor watch time and engagement over relevance, and therefore provide a perfect ecosystem for targeted disinformation.

The above shows the role which the users of social media play, since they are primarily responsible for the engagement with certain topics. That it is not merely the algorithms but rather the users themselves whose behavior is reinforced by the algorithms is evident from a great deal of research on disinformation and the role that echo chambers play in this regard (see, e. g., Zimmer et al., 2019). There is a significant number of users who forward dubious news to others within their networks, even when they know the information is incorrect (Chadwick et al., 2018; Metzger et al., 2021). In short, social media provide a good platform for the rapid rise and spread of disinformation, and this has led to studies that zoomed in on disinformation on specific platforms such as Facebook (e. g., Buchanan and Benson, 2019), Twitter (Keller et al., 2020), Instagram (Mena et al., 2020), and TikTok (Basch et al., 2021).

Disinformation as a crisis with multiple stakeholders

Based on the above, we can state that social media platforms have been in a disinformation crisis since 2016. Coombs (2007, pp. 2–3) defines a crisis as “the perception of an unpredictable event that threatens important expectancies of stakeholders and can seriously impact an organization’s performance and generate negative outcomes.” And these characteristics of a crisis also apply to social media platforms in the context of disinformation. After all, we know that public perception is not good, as evidenced by the many negative media reports about social media as the place where disinformation is created and shared, as well as the fact that the CEOs of Facebook and Twitter had to defend themselves before the US Congress in 2021 (Fung, 2021).

Just as most crises have an impact on a variety of stakeholders (Heath and Coombs, 2006), the crisis surrounding disinformation on social media is also one that affects various stakeholders of social media platforms. If we look at Meta’s internal stakeholders, we see mainly its own employees and shareholders. Employees could become less proud of the company and possibly want to change jobs due to the disinformation crisis and criticism of Meta’s handling of the problem. Shareholders in turn could lose faith in the company’s value. In terms of external stakeholders, users are the most important. Several studies show that social media users are concerned about disinformation they encounter on those platforms. For example, in the context of disinformation on Covid-19, users of Facebook state that they encounter more disinformation on the platform than on news sites or through search engines (Newman et al., 2021), and the same seems true for users of Instagram (Jurkowitz and Mitchell, 2020). Then there are the advertisers who use the platforms to spread their messages, sell products through sponsored posts and ads, and are the main source of revenue for platforms like Facebook. The presence of disinformation on the platforms and the negative discourse around it could be a reason for companies to advertise less or not at all on these platforms. For example, in June 2020, the chief marketing officer of Levi Strauss & Co. stated via a blog post that she was not satisfied with the way Facebook and Instagram addressed misinformation and announced that she would pause ad spending on both platforms for a while (Sey, 2020). Governments and politicians are also important external stakeholders. On the one hand, they use the platforms to spread their messages (Peeters et al., 2022), but, on the other hand, they are the driving force behind initiatives to regulate the platforms through national and supra-national regulations and laws. Examples of this type of law include the 2018 Code of Practice on Disinformation and its 2022 enhancement (European Commission, 2022) and the DSA, approved by the European Parliament and the EU member states in 2022, which aims to better protect the users of digital platforms by holding the platforms accountable for illegal and harmful content (European Commission, 2023). This can lead to a dual stance where political actors need the platforms to spread their ideology and get advice from Facebook employees on how best to do so (Kreiss and McGregor, 2018), but at the same time are the ones to criticize and regulate the platforms through guidelines and laws. Next, the news media are important stakeholders since negative coverage on disinformation on social media platforms often reaches the public through those news media, like the Buzzfeed article mentioned in the introduction, and the numerous news articles that followed (see, e. g., Al-Rawi, 2019). Finally, even competitors are relevant stakeholders (Freeman, 2015), in the sense that other social media platforms such as Twitter and TikTok are also facing a disinformation crisis, so that the communication Meta sends out to the world can turn out positively or negatively for them as well.

Crisis communication as impression management

It may be clear from the above that social media platforms are engaged in a crisis of disinformation, and that this is of concern to a variety of stakeholders dealing with those platforms. Crises can be seen as violations of stakeholder expectations, “meaning an organization does something stakeholders do not think it should have done” (Coombs, 2018, p. 52). To respond to this crisis, it makes sense for Meta to communicate about it, which we therefore logically consider in this paper as crisis communication. Crisis communication can be defined as the practice of minimizing the negative effects of a crisis, such as damage to the company’s reputation or loss of revenue (Williams and Treadaway, 1992), and as the strategic use of words and actions to manage information and meaning during a crisis process (Coombs, 2010). It is about managing information (collecting and disseminating crisis information) and managing meaning (influencing how the stakeholders perceive the crisis and/or the organization in crisis) (Coombs, 2018). Crisis communication can even be used to turn a crisis to the company’s advantage and enhance its reputation, as the company can appear decisive to its various stakeholders in its response to the crisis (Sellnow et al., 2022). We therefore follow the premise that research on crises and associated crisis communication should focus on the different stakeholders, since, according to stakeholder theory (Freeman et al., 2010), one must strike a good balance between all these stakeholders with their sometimes conflicting interests.

A great deal of research has been done on crisis communication, from which a number of important theories and insights emerge, even though we know that practitioners do not always translate them perfectly into practice (Claeys and Opgenhaffen, 2016). While it is true that every crisis is unique and requires its own crisis communication (Coombs, 2015), we can draw insights from the numerous studies and the existing models and theories such as Image Repair Theory (IRT; Benoit, 2005) and Situational Crisis Communication Theory (SCCT; Coombs, 2007), which can be seen as the two most influential theories in the field of crisis communication (Claeys and Opgenhaffen, 2016). Both theories propose several crisis-response strategies that can be used by crisis communicators to best restore the reputation of the company in crisis. IRT argues that the central goal of crisis communication should be a positive corporate reputation. The theory suggests a number of concrete pieces of advice to achieve this, such as strategies to deny responsibility (such as “shift the blame,” i. e., put the blame on something outside the organization), to avoid responsibility (by emphasizing good intentions, for example), to reduce vulnerability (by using the strategy of “bolstering,” i. e., focusing on the company’s positive attributes or actions), or simply to admit guilt and express regret (see, e. g., Benoit, 2014 for an overview). The SCCT also recommends different crisis-response strategies, strongly taking into account the nature of the crisis and the perception of stakeholders, focusing on the acceptance of responsibility, and offering apologies (see, e. g., Coombs, 2007), even if this is not always easy for the legal departments of companies (Claeys and Opgenhaffen, 2021). In this context, Hall’s (2020) study around the 2018 apology campaign launched by Facebook in the wake of the Cambridge Analytica scandal found that the company used the strategy of apology to create a divided perception of the company that strengthened the brand identity. In addition, there are numerous other studies that provide useful insights about the benefit of self-disclosing (elements of) a crisis, better known as stealing thunder, in order to maintain control over the communication (Claeys et al., 2016), or about the importance of consistency in the message in order to come across as credible (Coombs, 2020; Seeger, 2006), which ties in with the strategic advice to focus on a few key messages in crisis communication (De Waele et al., 2020). These insights align nicely with what is known as impression management strategies (Leary and Kowalski, 1990), in which the communication efforts of an organization are aimed at restoring its legitimacy and thus the way different stakeholders view the company. Impression management attempts to respond to pressure from the different stakeholders through communication by emphasizing good news and minimizing bad news, known as “thematic manipulation” (Brennan and Merkl-Davies, 2013; Hellmann et al., 2020).

Because this study examines Meta’s crisis communication, it aligns with the study by Iosifidis and Nicoli (2020), who scrutinized Facebook’s communication about disinformation over the period 2016–2019 to identify the company’s self-regulation. To our knowledge, this is the only study conducted on this topic. The authors argued that Facebook did not see itself as an arbiter of the truth and put forward AI and machine learning as tools to combat various forms of fake news. Our study tries to complement that previous study by also analyzing the period between 2019 and 2022, since a lot happened in that period regarding disinformation, such as the Covid-19 pandemic and the war in Ukraine. Moreover, we try to make the link with the insights from crisis communication and impression management by mapping the variety of stakeholders and studying the different strategies, such as the use of key messages and taking responsibility. We formulate the following research question:

RQ:

In what way does Meta use forms of crisis-response strategies and impression management when communicating to its various stakeholders about the crisis of online disinformation?

3 Method

To gain insights into how Meta engages in crisis communication, we conducted a content analysis of the stories that Meta, as the company owning Facebook, Instagram, and WhatsApp, sends out into the world. Content analysis is a proven method to gain insights into how organizations engage in crisis communication, and, according to Coombs (2010), falls within the so-called transition crisis communication research that can study, for example, media reports, messages from the organization, and messages from social media. So, we focus this research on the organization’s own messages, and more specifically on the news stories that Meta has posted on its own website in the newsroom (https://about.fb.com/news/). The newsroom includes thousands of posts, but we collected those that deal with disinformation. We did this by clicking on the topic “combatting misinformation” in the menu, and collected all the stories in this category. We would like to note that the choice of the word misinformation does not mean that the posts are only about misinformation without the intent to mislead, as many posts also tackled the concept of disinformation. The choice of the label misinformation rather than disinformation could possibly even be seen as a strategic choice by Meta to downplay the severity of the crisis from the very beginning. This resulted in a corpus of 73 stories published between August 2016 and July 2022. After checking, we had to remove five articles from the corpus because we felt the link to disinformation was too limited. For example, there were some posts that were about dealing with clickbait articles and spammers, without a clear link to disinformation. We started working with a corpus of 68 articles, with the addition that eight articles together received 79 updates that were appended by Meta to the first story on that topic, for example, in the case of the topic Covid-19. We have included each of these updates in our corpus, so the total number of stories is 147.
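
To make the composition of the corpus concrete, the minimal sketch below (in Python) reproduces the unit count described above; the Story class, its field names, and the way the 79 updates are distributed over eight stories are our own illustration, not part of the study’s actual tooling.

```python
from dataclasses import dataclass, field

@dataclass
class Story:
    """One newsroom post, possibly with later updates appended by Meta."""
    story_id: int
    title: str
    updates: list[str] = field(default_factory=list)  # texts of appended updates

def corpus_size(stories: list[Story]) -> int:
    """Count analysis units: each base story plus each appended update."""
    return sum(1 + len(story.updates) for story in stories)

# Illustrative figures mirroring the corpus described above:
# 68 base stories, eight of which together carry 79 appended updates.
corpus = [Story(i, f"story {i}") for i in range(1, 69)]
for story, n_updates in zip(corpus[:8], [10] * 7 + [9]):  # hypothetical distribution
    story.updates = [f"update {j}" for j in range(1, n_updates + 1)]

assert corpus_size(corpus) == 147  # 68 stories + 79 updates
```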

The articles covered five major topics: (1) Meta’s general approach to the fight against disinformation on its platforms; (2) the fight against disinformation in the context of elections, which concerned not only US elections but also elections in Australia, Indonesia, Georgia, Thailand, Myanmar, etc.; (3) the fight against disinformation around Covid-19 and the accompanying vaccination campaigns; (4) the fight against disinformation in the context of the war in Ukraine in 2022; and (5) the fight against disinformation in the context of climate change and global warming. Many different authors were linked to these messages throughout the six years. This should not be surprising since the posts dealt with different sub-aspects of the fight against disinformation, with some articles having the Vice-President of Integrity or CEO Mark Zuckerberg himself as the author, and other articles having an engineer, the head of News Integrity Partnerships, the public policy director, or someone from the legal department as one of the authors.

To analyze these messages, we took a manual and inductive approach. We pasted all the messages into a document, and this resulted in a file of more than 336 pages. We then started the analysis in which we relied on the proven approach of thematic analysis (Braun and Clarke, 2006). In brief, this involves highlighting relevant paragraphs, sentences, and individual words, taking into account the insights from the literature, and paying extra attention to the so-called sensitizing concepts (Blumer, 1954) we encountered throughout the news stories. Here we focused on wording that told something about the stakeholders being addressed, consistency in terms of key messages that the company presented as solutions, the extent to which it took responsibility, and strategies for using the crisis to highlight the company’s positive values. We then gave each passage a raw code, after which we began to compare the codes with each other, and we began to refine or merge codes. We repeated this process several times to arrive at a code tree that allowed us not only to identify some of the key dimensions of Meta’s crisis communication but also to use relevant passages or phrases as illustrations. In the results section, we focused mainly on authoritative practices, that is, elements that we encountered on a regular basis, thus painting a generic picture of Meta’s crisis communication.
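
As a purely illustrative sketch of the coding procedure just described, the snippet below shows how raw codes attached to passages could be merged under broader themes in a simple code tree; the passages, code labels, and theme names are hypothetical stand-ins and not the study’s actual codebook.

```python
from collections import defaultdict

# First coding pass: passages paired with raw codes (all labels hypothetical).
coded_passages = [
    ("we partner with independent fact-checkers", "external fact-checkers"),
    ("our teams work around the clock", "own employees"),
    ("a mix of automated and human review", "AI plus human review"),
    ("bad actors keep changing their tactics", "blaming bad actors"),
]

# After comparing, refining, and merging raw codes, they are grouped
# under broader themes, forming a simple two-level code tree.
code_tree = {
    "stakeholders addressed": {"own employees", "external fact-checkers"},
    "key messages about interventions": {"AI plus human review"},
    "taking responsibility": {"blaming bad actors"},
}

# Collect illustrative passages per theme, as used in the results section.
examples = defaultdict(list)
for passage, code in coded_passages:
    for theme, codes in code_tree.items():
        if code in codes:
            examples[theme].append(passage)

for theme, passages in examples.items():
    print(f"{theme}: {passages}")
```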

4 Results

Variety of stakeholders

The analysis of the newsroom messages shows that Meta targeted a wide variety of stakeholders. This targeting rarely happened explicitly, for example by addressing a specific group of stakeholders in the introduction, but rather implicitly, by pointing out throughout the text what certain groups stand to gain from the interventions. We distinguished three main groups of stakeholders. The first group is its own employees. Throughout the stories there are many references to the good work of employees. There are explicit references to concrete job profiles within the company, such as the developers, the moderators, and people from the legal and operations departments, but just as regularly, in more general terms, there are references to “our teams” working in-house to fight disinformation. A second group are the external stakeholders: the individual users of the platform, described in the stories as users or members, but also, for example, as admins or owners of groups or pages. Other “positive” external stakeholders such as NGOs and health organizations (especially in the context of Covid-19), national and local governments (especially in messages about elections), academic partners, news organizations, and external fact-checkers are often referred to. Meta often uses the term “community” in its communications to describe this heterogeneous group of stakeholders while distinguishing it from its own employees, for example: “We’ll continue to share changes we make to keep both our community and the people who review content on our platforms safe” (story 41 update 41, April 21, 2020, “Update on content review work”), or “We’re doing everything we can to keep our global teams and the community that uses our apps safe” (story 41 update 19, March 19, 2020, “Keeping our platform safe with remote and reduced content review”). Stakeholders that are – perhaps surprisingly – less often mentioned are the commercial companies that, as advertisers, still play an important role in the revenue model of the platforms. The few times they are referenced, they are clearly linked to the benefits these businesses can get from the platform, as in this reference in a story defending Facebook’s revenue model: “Small businesses (…) get access to tools that help them thrive. There are more than 90 million small businesses on Facebook, and they make up a large part of our business. Most couldn’t afford to buy TV ads or billboards but now have access to tools that only big companies could use before” (story 12, January 24, 2019, “Understanding Facebook’s business model”). We can label the third group as external, hostile stakeholders who may pose a threat. These include the creators and disseminators of harmful content. It is interesting to note that these are more or less the same stakeholders as in the second category, but they are described in a negative way or addressed critically throughout the communication. They include (foreign) governments and political advertisers with bad intentions, individual spreaders of fake news, bad administrators of pages and groups, or news organizations that have written too negatively (and wrongly, according to Meta) about the company’s fight against disinformation. In other words, different stakeholders are sometimes considered good partners and members of the community, while at other times they are simply considered threats to the community. It is also striking that there is hardly any explicit reference to the authorities that are trying to regulate Meta and other social media players.
For example, there is no mention of the Code of Practice on Disinformation, nor of the DSA. Only one reference to governmental regulation is made in a 2019 news story by Mark Zuckerberg himself, in which he refers to the European General Data Protection Regulation (GDPR) as a good thing for everyone.

Key messages about interventions

An important element of crisis communication is the message through which the organization communicates to stakeholders how it addresses the problem. The analysis of the stories allowed us to identify some important key messages that dealt with the strategies Meta deployed during the period of the study. For the first years (2016–2020), the key messages can be summarized as removing, reducing, and informing, which are labels that Meta itself uses throughout its communication.

Meta declares its primary commitment to detecting and removing content that violates its policy. It stresses that for this it not only relies on manual efforts of its own moderators, but has also partnered with independent fact-checkers with whom it has worked since late 2016, and that this partnership has expanded in the years since. Meta very frequently uses the phrase “(outside) experts” (e. g., story 21, April 10, 2019, “Remove, reduce, inform: New steps to manage problematic content”) to refer to the fact-checkers, seemingly wanting to highlight the legitimacy and independence of that group. Specifically in the context of disinformation about Covid-19 and climate change, the company refers to scientists and academics it works with to uncover erroneous reports. It also refers several times to the fact that users can report malicious posts themselves and do not have to wait until the platform itself has spotted these. But it also sees clear added value in technology, such as artificial intelligence tools to automate the process of detection and removal. Meta often emphasizes that the fight against disinformation involves a combination of human and technical efforts. Formulations like “a mix of automated and human review” (e. g., story 15, March 4, 2019, “Working to safeguard elections in Indonesia”) or “a combination of artificial intelligence, human review, and user reports” (e. g., story 68, July 20, 2022, “How Meta is preparing for Kenya’s 2022 general elections”) are used dozens of times throughout the communication. This is probably done to reassure critics of excessive automation and to indicate the “best of both worlds,” namely the speed and efficiency of AI tools and the accuracy and interpretation of humans. As a basis for determining whether posts – once designated as misleading or erroneous – should be removed, its own internal policy or community standards are used. Meta refers several times in its communication to adjustments to these standards in order to better respond to current needs, with phrases such as “we made several updates to our Community Standards” (e. g., story 58, December 2, 2021, “An independent assessment of Meta’s human rights impact in the Philippines”), “strengthening our policy toward misleading manipulated videos” (e. g., story 22, January 6, 2020, “Enforcing against manipulated media”), or the fact that it announced it posted a new section on the website containing the Facebook Community Standards “where people can track the updates we make each month” (story 21, April 10, 2019, “Remove, reduce, inform: New steps to manage problematic content”). In other words, the company communicates that it keeps its finger on the pulse all the time and tries to be transparent about it.

Posts that are misleading but not in violation of community standards are not deleted, but their distribution is restricted. Users should therefore see these messages (and the pages and groups in which these types of messages are posted) less frequently in their timelines. This should minimize the reach and impact of those types of messages and could thwart the revenue model of those spreading them. According to Meta, this is happening not only on Facebook but also on Instagram, and, in a later phase, on WhatsApp. Informing includes the company’s efforts to provide the public with additional context for posts that are potentially misleading or erroneous but still allowed to remain, such as a label or link to a fact-check just below the post or a warning when forwarding a particular post.

In addition to these strategies, which are repeated in more or less the same form throughout Meta’s six years of crisis communication in the newsroom and can thus be considered consistently formulated key messages, several other interventions are regularly mentioned as solutions in the fight against disinformation. Whereas the previous strategies start from the negative – the presence of erroneous reporting that is removed or limited – other strategies can be seen as interventions that convey a positive message. For example, Meta communicates about interventions aimed at strengthening the community, making it more self-reliant, and empowering it through a variety of information and tools to fight disinformation itself. It is striking, for example, that during the last few years of Meta’s communication it has been emphasized very regularly that its goal is not only to protect users from bad information by removing or restricting it but also to connect them with “trusted,” “credible,” and “authentic” information. This also came up very occasionally during the early years when referring to the intervention of adding a link to a fact-check in case of a questionable post, but this strategy was emphasized more often over the years. We saw this especially in the context of Covid-19 and stories about disinformation on climate change where, for example, stickers were launched to give extra attention to reliable posts. Separate information centers were started on the platforms, where users could always find verified news posts and statistics about those hot topics, and with which Meta could indicate that its goal was “to more clearly connect people with credible and accurate information about Covid-19” (Story 27 update 2, December 15, 2020, “An update on our work to keep people informed and limit misinformation about COVID-19”). By communicating about this kind of intervention, the story is no longer only about the presence of bad information on the platforms but also about the good information that can be found there, which in terms of crisis communication and impression management seems a defensible strategy. The same can be said about the training in key digital skills that Meta provided to (local) governments, NGOs, and newsrooms, which zooms in on strengthening knowledge and improving skills rather than stressing the negative.

Taking responsibility

Throughout the coding and analysis of the newsroom messages, we also focused on whether and how Meta takes responsibility for the disinformation crisis on its platforms. Meta often refers in its communications to the fact that disinformation must be fought on its platforms, so it is not denying the crisis but does not seem to want to take explicit responsibility for it in its communications. According to Meta, those responsible for the crisis are the actors with bad intentions, those who inject and spread disinformation on the platforms and do so for political or economic reasons: “State actors and agitators around the world” (story 7, June 14, 2018, “How is Facebook’s fact-checking program working?”) or “state-controlled media outlets” (e. g., story 59 update 4, March 1, 2022, “Meta’s ongoing efforts regarding Russia’s invasion of Ukraine”), for example, and very often it simply refers to “bad actors” (e. g., story 28, November 7, 2019, “How Facebook has prepared for the 2019 UK general election”) or “fake accounts which are responsible for misinformation” (story 7, June 14, 2018, “How is Facebook’s fact-checking program working?”) and thus not Meta itself. It also frequently refers to the fact that those devious actors with bad intentions continually adjust their strategies to try to circumvent Meta’s detection strategy, requiring the company to make continuous efforts to combat them. As it writes in October 2018 (story 9, “Removing additional inauthentic activity from Facebook”): “As we get better at uncovering this kind of abuse, the people behind it – whether economically or politically motivated – will change their tactics to evade detection. It’s why we continue to invest heavily, including in better technology, to prevent this kind of misuse.” With this, Meta seems to be employing the “shift the blame” strategy (i. e., put the blame on something outside the organization, see Benoit, 2005). Meta sells itself as a victim but at the same time stresses the fact that the company is defending itself at all costs. And it is doing that for all of us, it indicates. While, as mentioned, it does not feel responsible for the disinformation as such, it does communicate very regularly that it feels responsible for fighting the problem. In September 2021 (story 53, “What the Wall Street Journal got wrong”), for example, it writes: “These are serious and complex issues, and it is absolutely legitimate for us to be held to account for how we deal with them.” In other words, it does not seem to bear responsibility for the problem, but it does bear responsibility for the solution.

The interesting thing is that in stating this responsibility, it very often takes the opportunity to proclaim a positive message about Meta and its various platforms, thus seeming to use the strategy of bolstering as impression management. It does this in two ways. First, by highlighting some positive values and goals. For example, stories often begin with a general statement by which the company highlights the importance of free speech and accurate information. This is the case in this intro to a January 2019 post (story 14, “Working to safeguard elections in Thailand”), “Protecting the integrity of elections while making sure people can have a voice in the political process is a top priority for Facebook,” and in this one from March 2021 (story 44, “Mark Zuckerberg announces Facebook’s plans to help get people vaccinated against COVID-19”), “Building on our goal to promote authoritative information about Covid-19 vaccines, we have implemented (…).” A second way to give a positive message is to refer to the size of the company, to the many billions of users who use its platforms every day to connect with others or look for reliable information and for whom Meta therefore feels responsible. With this, Meta further emphasizes that its popular platforms are of interest to numerous stakeholders. In September 2020 (story 36, “Stepping up the fight against climate change”), for example, it sounds like this: “As a global company that connects more than 3 billion people across our apps every month, we understand the responsibility Facebook has and we want to make a real difference;” or in March 2020 (story 41 update 8, March 16, 2020, “Working with industry partners”): “We’re helping millions of people stay connected while also jointly combating fraud and misinformation about the virus.”

Finally, Meta also employs a strategy of extending responsibility to all of society, emphasizing that disinformation is not a problem of social media platforms alone but of everyone. “It’s not a new phenomenon” (story 2, April 6, 2017, “Working to stop misinformation and false news”), it emphasizes in its communication, and one that cannot be solved “with a single solution” (e. g., story 54, September 21, 2021, “Our progress addressing challenges and innovating”); “fighting misinformation is an ever-evolving problem and we can’t do it alone” (story 23, June 12, 2019, “Automation plus expert journalism: How Full Fact is fighting misinformation”), adding that it is a shared responsibility of not only tech companies but also news and media organizations and teachers.

5 Conclusion/discussion

Social media are in a disinformation crisis. Meta recognizes this, as evidenced by the communications about the fight against disinformation that the company publishes on its own newsroom page. This study sought to gain insight into whether Meta is using these communications as crisis communication and to elaborate on some strategies.

We can conclude that Meta effectively uses these stories as a form of crisis communication. Indeed, it recognizes the problem and uses different strategies in its communication that experts describe as useful crisis-response strategies. For example, it focuses – albeit mainly implicitly – on different stakeholders, not only addressing its own employees and good partners but also making it clear that there is a group of stakeholders with bad intentions. Throughout its communication, the company clearly focuses on a set of key messages. By repeating that its focus is on removing, reducing, and informing, it maintains a consistent story, especially in the first years (2016–2020). Later the company started communicating more and more about offering good information as a strategy to empower the public, which is a more positive message than just talking about the fight against harmful information. With this, Meta seems to emphasize its policy actions to indicate it has everything under control. This could possibly be a way for the company to address the regulatory authorities, something Meta has rarely, if ever, done explicitly throughout its crisis communications. By emphasizing that it has a consistent policy with some key interventions to address the problem of disinformation, Meta seems to want to say that self-regulation should suffice. Moreover, Meta does not portray itself in the stories as being largely responsible for the problem, but rather as a victim of outside actors with bad intentions. The company does indicate that it feels responsible for fighting the battle and finding a solution to this problem, and that it does so especially for those billions of users who use the platforms to connect with each other and find reliable information. In this way, the company can simultaneously proclaim positive attributes and values, highlighting its legitimacy, which is a form of impression management. These are important insights for understanding how Meta presents itself in the context of disinformation and how it uses strategies to guard against this crisis.

This study adds to the scarce research that exists on how social media companies communicate about the crisis of disinformation. It contributes to the field of crisis communication and can be considered unique because it deals with a crisis that has been dragging on for several years and has not yet been resolved. The study can be seen as a during-crisis study, in which differences between the early years and the later years can also be identified. The disadvantage of studying crisis communication during a crisis, in turn, is that it is not straightforward to measure the effects of these efforts, for example, on the attitudes of the different stakeholders towards the platforms in general and the communication about the crisis more specifically. Indeed, new developments in the crisis itself can disrupt a clear relationship between crisis communication as independent variable and perception as dependent variable. It would be very interesting to study the possible influence of Meta’s crisis communication on the perceptions of the regulatory authorities and how they are guided by this crisis communication in their plans to regulate the platforms. This should also include Meta’s other forms of communication, such as the Facebook posts of CEO Mark Zuckerberg and other managers, what they say in public debates, the white papers they publish, and their communications during the various hearings. Since the crisis around disinformation does not seem to be over yet, the research on crisis communication about disinformation cannot be considered completed either.

About the author

Michaël Opgenhaffen

Michaël Opgenhaffen is associate professor at the Institute of Media Studies at the KU Leuven, Belgium. He studies digital news and journalism, with a focus on social media news and online disinformation.

References

Al-Rawi, A. (2019). Gatekeeping fake news discourses on mainstream media versus social media. Social Science Computer Review, 37(6), 687–704. https://doi.org/10.1177/0894439318795849

Basch, C. H., Meleo-Erwin, Z., Fera, J., Jaime, C., & Basch, C. E. (2021). A global pandemic in the time of viral memes: COVID-19 vaccine misinformation and disinformation on TikTok. Human Vaccines & Immunotherapeutics, 17(8), 2373–2377. https://doi.org/10.1080/21645515.2021.1894896

Benoit, W. L. (2005). Image restoration theory. In R. L. Heath (Ed.), Encyclopedia of public relations: Volume I (pp. 407–410). Thousand Oaks, CA: Sage.

Benoit, W. L. (2014). Image repair theory in the context of strategic communication. In The Routledge handbook of strategic communication (pp. 327–335). Routledge. https://doi.org/10.4324/9780203094440-28

Blumer, H. (1954). What is wrong with social theory? American Sociological Review, 19, 3–10. https://doi.org/10.2307/2088165

Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3, 77–101. https://doi.org/10.1191/1478088706qp063oa

Brennan, N. M., & Merkl-Davies, D. M. (2013). Accounting narratives and impression management. In L. Jack, J. Davison, & R. Craig (Eds.), The Routledge companion to communication in accounting (pp. 109–132). London: Routledge.

Buchanan, T., & Benson, V. (2019). Spreading disinformation on Facebook: Do trust in message source, risk propensity, or personality affect the organic reach of “fake news”? Social Media + Society, 5(4). https://doi.org/10.1177/2056305119888654

Chadwick, A., Vaccari, C., & O’Loughlin, B. (2018). Do tabloids poison the well of social media? Explaining democratically dysfunctional news sharing. New Media & Society, 20(11), 4255–4274. https://doi.org/10.1177/1461444818769689

Claeys, A. S., Cauberghe, V., & Pandelaere, M. (2016). Is old news no news? The impact of self-disclosure by organizations in crisis. Journal of Business Research, 69(10), 3963–3970. https://doi.org/10.1016/j.jbusres.2016.06.012

Claeys, A. S., & Opgenhaffen, M. (2016). Why practitioners do (not) apply crisis communication theory in practice. Journal of Public Relations Research, 28(5–6), 232–247. https://doi.org/10.1080/1062726X.2016.1261703

Claeys, A. S., & Opgenhaffen, M. (2021). Changing perspectives: Managerial and legal considerations regarding crisis communication. Public Relations Review, 47(4), 102080. https://doi.org/10.1016/j.pubrev.2021.102080

Coombs, W. T. (2007). Protecting organization reputations during a crisis: The development and application of situational crisis communication theory. Corporate Reputation Review, 10, 163–176. https://doi.org/10.1057/palgrave.crr.1550049

Coombs, W. T. (2010). Parameters for crisis communication. In W. T. Coombs & S. J. Holladay (Eds.), The handbook of crisis communication (pp. 65–90). Malden, MA: Blackwell. https://doi.org/10.1002/9781444314885

Coombs, W. T. (2015). The value of communication during a crisis: Insights from strategic communication research. Business Horizons, 58, 141–148. https://doi.org/10.1016/j.bushor.2014.10.003

Coombs, W. T. (2018). Crisis communication: The best evidence from research. In The Routledge companion to risk, crisis and emergency management (pp. 51–66). New York: Routledge. https://doi.org/10.4324/9781315458175-6

Coombs, W. T. (2020). Conceptualizing crisis communication. In Handbook of risk and crisis communication (pp. 99–118). New York: Routledge. https://doi.org/10.4324/9781003070726-6

Cortada, J. W., & Aspray, W. (2019). Fake news nation: The long history of lies and misinterpretations in America. Rowman & Littlefield.

De Waele, A., Claeys, A. S., & Opgenhaffen, M. (2020). Preparing to face the media in times of crisis: Training spokespersons’ verbal and nonverbal cues. Public Relations Review, 46(2), 101871. https://doi.org/10.1016/j.pubrev.2019.101871

Eifert, M., Metzger, A., Schweitzer, H., & Wagner, G. (2021). Taming the giants: The DMA/DSA package. Common Market Law Review, 58(4), 987–1028. https://doi.org/10.54648/COLA2021065

European Commission (2022). The 2022 Code of Practice on Disinformation. https://digital-strategy.ec.europa.eu/en/policies/code-practice-disinformation

European Commission (2023). DSA: Making the online world safer. https://digital-strategy.ec.europa.eu/en/policies/safer-online

Freeman, R. E. (2015). Stakeholder theory. Wiley encyclopedia of management, 1–6. https://doi.org/10.1002/9781118785317.weom020179

Freeman, R. E., Harrison, J. S., Wicks, A., Parmar, B., & De Colle, S. (2010). Stakeholder theory: The state of the art. Cambridge University Press. https://doi.org/10.1017/CBO9780511815768

Fung, B. (2021). Facebook, Twitter and Google CEOs grilled by Congress on misinformation. CNN. https://edition.cnn.com/2021/03/25/tech/tech-ceos-hearing/index.html

Grimes, D. R. (2021). Medical disinformation and the unviable nature of COVID-19 conspiracy theories. PLoS One, 16(3), e0245900. https://doi.org/10.1371/journal.pone.0245900

Hall, K. (2020). Public penitence: Facebook and the performance of apology. Social Media + Society, 6(2). https://doi.org/10.1177/2056305120907945

Heath, R. L., & Coombs, W. T. (2006). Today’s public relations: An introduction. Sage. https://doi.org/10.4135/9781452233055

Helberger, N. (2020). The political power of platforms: How current attempts to regulate misinformation amplify opinion power. Digital Journalism, 8(6), 842–854. https://doi.org/10.1080/21670811.2020.1773888

Hellmann, A., Ang, L., & Sood, S. (2020). Towards a conceptual framework for analysing impression management during face-to-face communication. Journal of Behavioral and Experimental Finance, 25, 100265. https://doi.org/10.1016/j.jbef.2020.100265

Iosifidis, P., & Nicoli, N. (2020). The battle to end fake news: A qualitative content analysis of Facebook announcements on how it combats disinformation. International Communication Gazette, 82(1), 60–81. https://doi.org/10.1177/1748048519880729

Jurkowitz, M., & Mitchell, A. (2020). An oasis of bipartisanship: Republicans and Democrats distrust social media sites for political and election news. Pew Research Center. https://www.pewresearch.org/journalism/2020/01/29/an-oasis-of-bipartisanship-republicans-and-democrats-distrust-social-media-sites-for-political-and-election-news/

Kapantai, E., Christopoulou, A., Berberidis, C., & Peristeras, V. (2021). A systematic literature review on disinformation: Toward a unified taxonomical framework. New Media & Society, 23(5), 1301–1326. https://doi.org/10.1177/1461444820959296

Keller, F. B., Schoch, D., Stier, S., & Yang, J. (2020). Political astroturfing on Twitter: How to coordinate a disinformation campaign. Political Communication, 37(2), 256–280. https://doi.org/10.1080/10584609.2019.1661888

Kreft, J., Boguszewicz-Kreft, M., & Hliebova, D. (2023). Under the fire of disinformation. Attitudes towards fake news in the Ukrainian frozen war. Journalism Practice, 1–21. https://doi.org/10.1080/17512786.2023.2168209

Kreiss, D., & McGregor, S. C. (2018). Technology firms shape political communication: The work of Microsoft, Facebook, Twitter, and Google with campaigns during the 2016 US presidential cycle. Political Communication, 35(2), 155–177. https://doi.org/10.1080/10584609.2017.1364814

Leary, M. R., & Kowalski, R. M. (1990). Impression management: A literature review and two-component model. Psychological Bulletin, 107(1), 34–47. https://doi.org/10.1037/0033-2909.107.1.34

Lutzke, L., Drummond, C., Slovic, P., & Árvai, J. (2019). Priming critical thinking: Simple interventions limit the influence of fake news about climate change on Facebook. Global Environmental Change, 58, 101964. https://doi.org/10.1016/j.gloenvcha.2019.101964

Massanari, A. (2017). #Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329–346. https://doi.org/10.1177/1461444815608807

Mena, P., Barbe, D., & Chan-Olmsted, S. (2020). Misinformation on Instagram: The impact of trusted endorsements on message credibility. Social Media + Society, 6(2). https://doi.org/10.1177/2056305120935102

Metzger, M. J., Flanagin, A. J., Mena, P., Jiang, S., & Wilson, C. (2021). From dark to light: The many shades of sharing misinformation online. Media and Communication, 9(1), 134–143. https://doi.org/10.17645/mac.v9i1.3409

Newman, N., Fletcher, R., Schulz, A., Andi, S., Robertson, C. T., & Nielsen, R. K. (2021). Reuters Institute digital news report 2021. Reuters Institute for the Study of Journalism.

Peeters, J., Opgenhaffen, M., Kreutz, T., & Van Aelst, P. (2022). Understanding the online relationship between politicians and citizens. A study on the user engagement of politicians’ Facebook posts in election and routine periods. Journal of Information Technology & Politics, 1–16. https://doi.org/10.1080/19331681.2022.2029791

Poell, T., & Van Dijck, J. (2014). Social media and journalistic independence. In Media independence: Working with freedom or working for free (pp. 181–201).

Rocha, Y. M., de Moura, G. A., Desidério, G. A., de Oliveira, C. H., Lourenço, F. D., & de Figueiredo Nicolete, L. D. (2021). The impact of fake news on social media and its influence on health during the COVID-19 pandemic: A systematic review. Journal of Public Health, 1–10. https://doi.org/10.1007/s10389-021-01658-z

Saurwein, F., & Spencer-Smith, C. (2020). Combating disinformation on social media: Multilevel governance and distributed accountability in Europe. Digital Journalism, 8(6), 820–841. https://doi.org/10.1080/21670811.2020.1765401

Seeger, M. W. (2006). Best practices in crisis communication: An expert panel process. Journal of Applied Communication Research, 34(3), 232–244. https://doi.org/10.1080/00909880600769944

Sellnow, T. L., Seeger, M. W., & Sheppard, R. (2022). Revisiting the discourse of renewal theory: Clarifications, extensions, interdisciplinary. In The handbook of crisis communication (p. 127). https://doi.org/10.1002/9781119678953.ch9

Sey, J. (2020). From our CMO: It’s time to stop hate for profit. Levi Strauss & Co. https://www.levistrauss.com/2020/06/26/from-our-cmo-its-time-to-stop-hate-for-profit/

Silverman, C., Strapagiel, L., Shaban, H., Hall, E., & Singer-Vine, J. (2016). Hyperpartisan Facebook pages are publishing false and misleading information at an alarming rate. Buzzfeed News. https://www.buzzfeednews.com/article/craigsilverman/partisan-fb-pages-analysis

Stöcker, C. (2019, February). How Facebook and Google accidentally created a perfect ecosystem for targeted disinformation. In Multidisciplinary International Symposium on Disinformation in Open Online Media (pp. 129–149). Springer, Cham. https://doi.org/10.1007/978-3-030-39627-5_11

Tandoc, E. C., Jr., Lim, Z. W., & Ling, R. (2018). Defining “fake news”: A typology of scholarly definitions. Digital Journalism, 6(2), 137–153. https://doi.org/10.1080/21670811.2017.1360143

Williams, D. E., & Treadaway, G. (1992). Exxon and the Valdez accident: A failure in crisis communication. Communication Studies, 43(1), 56–64. https://doi.org/10.1080/10510979209368359

Zimmer, F., Scheibe, K., Stock, M., & Stock, W. G. (2019). Fake news in social media: Bad algorithms or biased users? Journal of Information Science Theory and Practice, 7(2), 40–53.

Published Online: 2023-07-11
Published in Print: 2023-08-24

© 2023 the author(s), published by De Gruyter.

This work is licensed under the Creative Commons Attribution 4.0 International License.
