1 Introduction

Trolling is one of the most problematic and most commonplace forms of antisocial online behavior. Because trolling is not always outright hostile, it is difficult for others to moderate or exclude from conversations. Trolls hide their true intentions (Donath 1999) by posting naïve, arrogant, dangerous, or incoherent comments (Hardaker 2013). If people engage by responding, the conversation may well devolve into a proliferation of non-constructive comments – especially if the troll employs the right strategies to prolong the exchange.

Particularly alongside personal insults, such prolonged digression renders trolling detrimental to online conversation, not least because contra-normative behavior increases discontentment and disillusionment among contributors and other readers. While the literature has examined definitions of trolling (Coles and West, 2016; Hardaker 2010; Herring et al., 2002), the motivations or goals behind it (Fichman and Sanfilippo 2016, pp. 23–47; Sanfilippo et al., 2017b), the distinct styles and general trolling strategies manifested (Hardaker 2013; Herring et al., 2002; Lumsden and Morgan, 2017; on ideological trolling in particular, see Zelenkauskaite and Niezkoda, 2017), and opportunities for preventing such behavior (Golf-Papez and Veer, 2017), researchers have not studied the conversational or interaction strategies that prolong non-constructive conversations specifically in terms of conversation structure.

To address this gap, we studied trolling-like observable behaviors to discover precisely how troll-like users succeed in drawing others into taking part in futile discussions. We concentrated on carefully analyzing their conversation strategies at the level of turns in interaction. Our findings, from data gathered from several online forums dealing with various discussion topics, revealed what appears to be a common denominator among successful disruption strategies: they make use of unconventional, asymmetric responses in conversational interaction. We present three prominent response strategies that troll-like users employ in overriding the social norms of conversation. Our findings have several important implications. The paper concludes by considering these, with special regard to whether anything can be done to prevent or detect possible trolling.

2 Theoretical background

The following subsections review prior research into the research phenomenon (trolling), the theoretical background we applied (common grounding, from psycholinguistics), and our methodological approach (conversation analysis).

2.1 Trolling

Identifiability levels in text-based online communication range from complete anonymity and pseudonymity to communication wherein one’s true identity is known. Aspects of anonymity increase the potential for degenerative conversation involving harassment, hate speech, repetitive posts or spam, intensified polarization, and unresolvable disagreements (e.g., Coleman 2014). In addition, a lack of clues to authors’ identity and, hence, their motivations and the intended tone of their posts complicates the interpretation of intent and meanings (Donath 1999; Sudweeks and Rafaeli, 1996). Trolls take advantage of these ambiguities to lure others into prolonged and fruitless discussion (Herring et al., 2002).

Belying its recent associations with politically motivated disinformation campaigns, trolling was originally mostly entertainment for many of those performing it (see Shachaf and Hara, 2010). Overlapping somewhat with this use, in venues such as some USENET groups it also served as a means of boundary regulation between in-group and out-group members, or group ‘veterans’ and novices (Bishop 2014; Graham 2019). When employed for other purposes, it still was aimed at causing harm to smaller groups of people rather than entire societies (Graham 2019; Phillips 2015; Tepper 1997). In contexts of more recent online discourse, in contrast, political influence seems to have emerged as a new key motivation for trolling (e.g., Badawy et al., 2018). This type of trolling is tuned to amplifying polarization, propagating political agitation, limiting freedom of speech, and spreading fear and disinformation (e.g., Akhtar and Morrison, 2019; Bennett and Livingston, 2018).

While trolling, especially in some of its older forms, might not always be harmful or malicious (Cruz et al., 2018; Kirman et al., 2012; Sanfilippo et al., 2017a), our interest stemmed from its detrimental effects. We found a good starting point in Hardaker’s research (Hardaker, 2010) into the effects of trolling on online communities and into common trolling styles (referred to as strategies by Hardaker 2013). Hardaker, who broke new ground for conversational research into trolling, defines trolls as discussion participants who feign sincerity while their real intention is to disrupt conversation or create conflict for the sake of amusement (2010, p. 237). She cites the following comment as an example: ‘Uh.....not to be nitpicky,,,,,but...the past tense of drag is dragged, not drug. Otherwise it was an interesting post’ (2013, p. 72). The troll manages to deceive others into believing that the post is sincerely written (on ‘pseudo-intentions,’ see Hardaker 2010, pp. 233–236). This post unnecessarily attacks another user’s ‘face’ in the discussion (see Brown and Levinson, 1987), hypocritically correcting the poster’s grammar although it contains errors of its own; thereby, it successfully provokes other users into retaliating and digressing from discussion of the original topic. While our attention is not confined to amusement-oriented trolling, Hardaker’s approach and classification of six perceived trolling styles form the foundations for our work.

She developed her classification by analyzing two large corpora of USENET discussions, from groups devoted to horse breeding and to football. Filtering her search down to cases mentioning the word ‘troll’ or variants thereof, she derived the classification qualitatively. The trolling styles she identified range from overt, easily observable ones to covert (i.e., hidden or non-obvious) styles.

At the most overt end of her continuum is the ‘aggress’ category, which involves deliberately and openly provoking others into retaliating via aggression (e.g., name-calling or foul language), and shock trolling, which is characterized by posting about taboo or sensitive subjects such as religion, death, or touchy human-rights issues (cf. Phillips 2011; on ‘RIP trolling,’ see Fichman and Sanfilippo, 2016, p. 12). These trolling styles are very similar to ‘flaming’ (cf. ‘flame bait’ per Herring et al., 2002; Danet 2013; Herring 1994). Proceeding toward the more covert strategies, Hardaker defines ‘endanger’-class trolling as pseudo-naïve trolling (Donath 1999) wherein bad advice is given under a cloak of innocence (cf. the ‘Outward Manifestations of Sincerity’ strategy described by Herring et al., 2002). More experienced users feel compelled to respond so as to protect the forum’s novices. In antipathy trolling, the troll creates a sensitive or antagonistic context to exploit by being purposefully provocative, covertly manipulating others with the aim of eliciting emotional responses. A (hypo)critical troll usually criticizes others excessively for peripheral matters of form such as punctuation, quite possibly while producing spelling mistakes and the like in the critical posts. Finally, the ‘digress’ class of trolling is the most covert style on the continuum. It involves luring others into straying from the topic at hand by spamming, posting (off-topic) material in the conversation thread, or introducing tangential topics (e.g., Herring et al., 2002; Hopkinson 2013; Turner et al., 2005). Although Hardaker gave names to all these trolling styles and presented dialogue excerpts to exemplify the various categories, her work focuses on general content-related (and some linguistic) features of trolling, not on closely analyzing trolling as an interactive process. We go further, with analysis that homes in on the details of how effective trolling unfolds at the level of turns in interaction.

2.2 Common grounding theory

While the studies cited above produced theory on trolls’ intentions and analyzed what they do, the researchers did not consider trolling in light of theories of ordinary social interaction. Our key premise is that ordinary well-intended communication manifests common grounding. This concept, developed in the psycholinguistics field, refers to a ‘collective process by which the participants try to reach a mutual belief that they have understood what each other meant’ (Clark and Brennan, 1991, p. 223). Accordingly, an unintelligible action, such as an unexpected turn in a conversation, calls for an explanation from its originator. This corresponds to the Gricean cooperative principle, under which all the parties conversing are expected to engage in rectifying breakdowns when suspecting misunderstanding or confusion (Grice 1975). The cooperative principle is expressed via four maxims that articulate preconditions for effective communication: the maxim of quality (what is said should be true), the maxim of quantity (what is said should be as informative as required), the maxim of relevance (what is said should be relevant), and the maxim of manner (what is said should be clear, unambiguous, brief, and orderly).

The central premise of common grounding theory is that everything that is said in a conversation needs to be grounded – i.e., turned into mutual knowledge. To reach a shared belief that the content has been understood, interlocutors must provide evidence of understanding, thereby maintaining a perception of common ground in the interaction (Clark and Brennan, 1991; Clark and Schaefer, 1989; Krauss and Fussell, 1991). The effort expended on sustaining common grounding can be understood as a cost of communication. Building upon the Gricean cooperative principle, particularly its constituent maxims of quantity and manner, Clark and Wilkes-Gibbs (1986) have suggested that partners in communication seek to minimize the costs by applying a principle of least collaborative effort: in conversation, the participants try to minimize their collaborative effort – the work that both do, from the initiation of each contribution to its mutual acceptance.

We argue that trolling is in many ways antithetical to common grounding. Communication with a troll-like counterpart incurs a higher cost than ordinary communication, because trolling-like behavior is designed to prolong regressive interaction and to cause communication breakdowns that others will attempt to mend. For example, troll-like users may express opinions without explaining them, or even without any apparent connection to the topic at hand. Efforts at trolling act contrary to the realization of common grounding by provoking others into co-creating patterns of regressive circular conversation that waste time and effort.

2.3 Conversation analysis

Many norms of conventional, non-technology-mediated conversation apply also to online conversations (e.g., Baron 2000; Herring 2010; Hård af Segerstad 2002). Hence, such phenomena as conversational repair, requests for explanation, and actions through which common grounding is achieved can likewise be found in online interpersonal interaction. So, naturally, trolls are happy to exploit them. For instance, arrogant responses such as unnecessarily addressing an earlier speaker as if talking to a child (Wagner et al., 2005) break the norms of common grounding.

Our study applied conversation analysis (CA) to examine interaction strategies that provoke others. As a research tradition that can inform careful analysis of interaction, CA builds on the analysis of social action in ethnomethodology (EM). Growing out of the EM tradition, which delves into micro-level social interactions, CA focuses particularly on analyses of turn-by-turn interactions between people. Both EM and CA study the methods by which humans coordinate their actions in social situations in a competent manner, how breakdowns in communication develop, and how they are resolved. Particularly relevant with regard to our interest in understanding trolling is the idea that social action depends on a moral obligation to behave in an understandable manner. People who deviate from this norm are seen as breaking a fundamental rule. Typically, others challenge such deviators with demands that they explain themselves (Heritage 1984).

CA therefore shares the view of psycholinguistic research on common grounding that discourse is jointly created in a conversation by its participants (Clark and Brennan, 1991; Schegloff et al., 1977). Both approaches apply micro-analytic attention to naturally occurring language in interaction. Our interest in CA is linked to its potential for analyzing specific actions that systematically break common grounding and prevent mutual understanding. The fact that we have not been able to identify prior studies of online trolling that have employed this method makes CA all the more interesting as a tool for analyzing online conversations.

Some characteristics specific to trolling must be taken into account in any CA-based analysis. It is an Internet-born phenomenon (cf. Hardaker 2017) that occurs mainly in asynchronous online forums, in synchronous chat, and on other conversational social-media platforms. In online spaces, participants may often enter and exit a conversation without others’ awareness. Also, text-based interaction is typically turn-based, with no temporally overlapping utterances. Although most conversation analysis studies do not deal with online interaction, some lines of analysis that focus on natively digital conversation are starting to emerge (Giles et al., 2015). These complement the extensive body of research into computer-mediated communication, or CMC (e.g., Dolata and Schwabe, 2019; Garcia and Jacobs, 1998; Herring 1994; Moore and Churchill, 2011; Moore et al., 2007; Ruhleder and Jordan, 2001), and EM-informed work in CMC (e.g., cf. Herring 2010; Rintel and Pittam, 1997; Woodruff and Aoki, 2004). CMC and EM-oriented studies have examined what makes online social interaction distinct – for instance, by pinpointing certain features that separate text-only online conversation from spoken interaction: turn-taking (e.g., Cech and Condon, 2004; Garcia and Jacobs, 1999), sequence organization, repair (Markman 2010), the norms related to responding (Skovholt and Svennevig, 2013), CMC’s conversational maxims (Crystal 2001; Lindholm 2013), openings and lack of embodied conduct (e.g., Meredith 2019), etc. Scholars doing such work appreciate the fact that software-related factors may shape these and other aspects of the interaction, including the problems faced (e.g., misinterpretation of silences; see Garcia and Jacobs, 1999) – problems that may be exploited in trolling. Researchers have also pointed out similarities: e.g., participants in both spoken interaction and online interaction orient themselves to building sequentially organized courses of action and maintaining intersubjectivity (Meredith and Stokoe, 2014) – i.e., building and maintaining common ground.

3 Data and methods

Our approach to analyzing trolling-reminiscent conversation strategies was based on conversation analysis that focused on ‘successful’ trolling – cases in which a troll-like post gains several responses. Unsuccessful trolling-like behavior, such as toxic remarks that other parties ignore and do not react to, was beyond our study’s scope; such behavior is not so detrimental to the discussion and to participants’ attempts to maintain common grounding. In addition, only cases that feature responses are amenable to conversation analysis. That is, our focus on successful trolling-like behavior was in line with our focus on the impacts on a conversation and with methodological considerations. Thus, we were able to see how trolling-like behavior disrupted efforts to reach common grounding: while others in the conversation sought to maintain it, the trolling-like actions complicated this attempt.

The subsections that follow describe the creation of our corpus of trolling-like behavior and our methods for analyzing specific interactions between troll-like users and non-trolls. In Subsection 3.1, we present how we distinguished between trolling-like and non-trolling-like behavior, via criteria derived from earlier research (Hardaker 2013; cf. Appendix 1), and then we discuss the interrater agreement we reached with our operationalization. Subsection 3.2 describes the interpretive conversation-analysis method that we applied to study trolling-like interaction.

3.1 The data

To have a valid basis for gathering the data, we adopted Hardaker’s typology of six trolling strategies, as outlined above. Appendix 1 presents how we operationalized these strategies as criteria for identifying trolling-like behaviors in online discussions. Using Hardaker’s descriptions of linguistic and content-related characteristics of trolling, we were able to identify observable indications of trolling-like behavior within a framework grounded in previous research.

We determined at the outset that we wanted to analyze material from both political/societal forums and hobby- or casual-interest-based forums, drawn from a wide range of online sources, to increase the variety of trolling styles in our data. The online conversation spaces that we investigated were discussion areas on Reddit and comment sections of English-language online newspapers: The Telegraph, The Guardian, and The Washington Post. These are influential media platforms with a large readership, which renders them likely targets for trolls. Our choice of platforms was designed for heterogeneity of conversation styles in the corpus, in the awareness that political ideology, context, (target) audience, popularity, accessibility, moderation practices, and language all exert an effect on the type of discussions readers are likely to engage in on a given site. Also, we focused our data collection on discussion sites that were not moderation-heavy, since service providers with strict moderation practices would very likely have deleted most posts that resemble trolling. Appendix 2 characterizes our data sources.

We acquired data from discussions that took place mainly in late 2018 to 2019. The political/societal topics we chose to look at were climate change and Brexit, and the interest-based topics were cats and fitness. Discussion of important political topics is likely to attract trolls attempting to disrupt the dialogue or incite polarized exchanges (e.g., Badawy et al., 2018; Phillips 2015). Conversations around Brexit and climate change have been especially prone to attract this type of attention (e.g., Antonio and Brulle, 2011; Humprecht 2018; Williams et al., 2015). However, apolitical and more prosaic topics too may become targets of trolling if the topic is dear to the discussion community (e.g., horses; see Hardaker 2010).

For each forum, we proceeded to read the comment sections on each subtopic, with all the individual comment threads (cf. Smith et al., 2000) (in all, approx. 500 threads), until we had identified at least two conversation trees on each subtopic wherein a participant displayed trolling-like behavior. Here, ‘conversation tree’ refers to a branch within a thread (a thread being the full branching structure initiated by a root-level Reddit post, or the whole comment section after a news article), where the branch consists of the trolling-style post and all resulting branches of follow-up posts. Not all samples of trolling-like behavior were confined strictly to one tree, though: some users did not use the forum’s response and threading functions as intended, and there were some cases of the troll-like user continuing or reinitiating the discussion in another tree. Such instances were counted only once, lest such a user’s behavior get over-represented in the data. We looked for activity-rich discussion trees, to uncover trolling-like conversation strategies that proved clearly able to elicit multiple responses (directly or indirectly). For the online newspapers’ comment sections, this entailed approximately 8–15 response posts in the discussion tree triggered by the troll-like user; for Reddit, the figure was approximately 15–20.
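To make the unit of analysis concrete, the following is a minimal sketch of how the extraction of conversation trees from a thread could be represented, assuming a simple nested-post structure; the names (Post, conversation_trees, tree_size) and the troll_like flag are our illustrative choices, not the tooling actually used in data collection.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Post:
    """One forum comment and its direct replies (a node in a thread)."""
    post_id: str
    author: str
    text: str
    troll_like: bool = False              # flagged per the criteria in Appendix 1
    replies: List["Post"] = field(default_factory=list)

def conversation_trees(thread_root: Post) -> List[Post]:
    """Collect every subtree rooted at a trolling-like post.

    A 'conversation tree' is the trolling-style post plus all branches of
    follow-up posts beneath it; nested trolling-like posts inside an already
    collected tree are not counted again, mirroring the once-per-instance rule.
    """
    trees: List[Post] = []
    stack = [thread_root]
    while stack:
        node = stack.pop()
        if node.troll_like:
            trees.append(node)            # keep the whole subtree
        else:
            stack.extend(node.replies)    # keep searching deeper
    return trees

def tree_size(tree: Post) -> int:
    """Number of posts in a tree; used to spot activity-rich trees."""
    return 1 + sum(tree_size(reply) for reply in tree.replies)
```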

Although our study of trolling-like behaviors was interpretive in nature, we found it important for the corpus to be rigorously gathered, so that it could later be extended in size via application of the same data-collection criteria. Since only the first author of this paper carried out the collection described above, we conducted interrater agreement analysis to evaluate the possibility of biased data collection. The dataset for interrater evaluation consisted of all the collected trees that contain trolling-like interaction, apart from 21 examples already seen by the other two authors, and an approximately equivalent set of non-trolling-resembling conversation trees retrieved from the same sources and topic spaces. Because the first author prepared both sets, their exact sizes remained unknown to the other two authors. The other two authors first classified 22 randomly selected conversation trees, using an early version of the coding system. We then compared the classifications across all three authors, discussed and resolved all conflicts, and refined the coding protocol as necessary. Finally, we classified the remaining conversation trees (54 in total) in randomized order. There was 86.4% overall agreement, and the free-marginal Fleiss kappa value was 0.73, which is near the upper bound of the ‘fair to good’ agreement band (0.40–0.75; Fleiss 1981).
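As a sketch of how the agreement figures above fit together, the free-marginal kappa can be computed directly from the observed overall agreement. The calculation below assumes two rating categories (trolling-like vs. non-trolling-like), which is our reading of the task rather than a detail stated in the protocol; with that assumption it reproduces the reported value of roughly 0.73.

```python
def free_marginal_kappa(observed_agreement: float, n_categories: int) -> float:
    """Free-marginal multirater kappa: (P_o - 1/k) / (1 - 1/k).

    Chance agreement is taken to be 1/k because raters are not constrained
    to reproduce fixed category proportions.
    """
    chance = 1.0 / n_categories
    return (observed_agreement - chance) / (1.0 - chance)

# With the figures reported above: 86.4% overall agreement, two categories.
print(round(free_marginal_kappa(0.864, 2), 2))  # -> 0.73
```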

Tables 1 and 2 provide a more detailed description of our corpus. Table 1 presents a breakdown of the threads collected, by source and discussion topic. The counts refer to the number of (news-article-specific) comment sections where an instance of trolling-like behavior was found.

Table 1 The number of threads collected that include trolling-like behavior, by discussion topic and source (all manifested successful elicitation of reactions).
Table 2 The number of trolling-like strategies identified for the various discussion topics in trolling-like conversation trees within the threads collected.

The sections from which we collected comments had approximately 1,113 comments on average, with the Brexit- and climate-change-related sections having a considerably higher average comment count (approx. 1,712) than the sections on cats or fitness did (approx. 402).

The distribution of trolling strategies, per Hardaker’s classification, manifested for each discussion topic can be seen in Table 2. The totals in Table 2 differ from those reported in Table 1 because a conversation sometimes exhibited multiple styles of trolling in parallel. Nearly all styles in Hardaker’s typology (2013) were displayed in discussion of each topic selected.

Hardaker’s categorization concentrates on linguistic and content-related features of trolling styles, without attempts to analyze in more detail how the various strategies are executed at the level of interaction or which actions in conversation render them effective. Therefore, once we had a corpus indexed for the trolling-like behaviors as defined by Hardaker, we proceeded to analyze the conversational interactions between troll-like users and non-troll-like users with the aid of CA.

3.2 The analysis method

We carefully combed through the final collection of 68 threads to analyze the conversation trees that manifested trolling-like behavior. This involved 1,263 posts in total: 828 from non-troll-like users and 435 from troll-like users. In applying CA concepts to our data, we were aware that CA is an interpretive research approach that requires expert human judgment. Therefore, we employed precisely defined analytical concepts and constructs in our analysis, to provide concrete raw evidence for every conclusion stated.

Examining turns in interaction is fundamental to CA. For this, we utilized adjacency pairs. To avoid technical jargon and for ease of explanation, we will refer to adjacency pairs as action pairs or paired actions. Whatever term is used, the notion refers to a structural unit of conversation that in our case consists of a pair of posts to an online forum: a first pair part (FPP) and a second pair part (SPP). The first element in the pair, the FPP, projects and simultaneously creates expectations of a specific type of SPP. For example, a greeting-like post anticipates another greeting, a question requests an answer, and an invitation awaits acceptance or rejection (Schegloff 1968, 2007; Schegloff and Sacks, 1973). Through paired actions, participants behave in a coherent and explainable, or accountable, manner. Behaviors that deviate from the normative patterns recognized for action pairs typically prompt others to demand that the deviator explain the aberrant behavior – to provide an account – and clear up the confusion (e.g., Garfinkel 1967; Heritage 1984).

We chose action pairs as our main object of interest for two reasons: (1) there already is a vast body of CA research on them (e.g., Schegloff 2007), which has demonstrated their fruitfulness in analyzing how people manage actions and activities jointly, and (2) conversation analytic studies of online environments, especially studies that look at maintaining coherence, have shown that people generally are strongly oriented toward paired actions in interaction (e.g., Meredith 2019; Skovholt and Svennevig 2013). Although text-only online communication has been described as only loosely coherent when compared to face-to-face conversation (Herring 1999), and the adjacency of a pair’s two member actions is disrupted on some platforms (Giles et al., 2015; Gruber 1998; Herring 1999; Markman 2005), people still typically consider a post to have a main action that should be addressed in an expected manner (Stommel and Koole, 2010). Furthermore, prior research attests to the utility of studying conversational actions specifically in online communication (Condon and Cech 2001; Gruber 1998) – for instance, to aid in developing automated classification systems (Twitchell and Nunamaker, 2004) that identify speech acts (Austin 1962).

To annotate interactions with codes referring to specific conversational actions, we made particular use of literature on how action pairs unfold in typical conversations. Appendix 3 presents the full list of codes we used. We carefully analyzed the instances wherein social rules were not honored and discussion was disrupted. Social norms of conversation suggest that an FPP should be followed by an appropriate SPP (Heritage 1984, pp. 245–253; Levinson 1983, pp. 332–336; Schegloff 2007, pp. 13–27); that is, the FPP and SPP should be congruent with a typical action-pair type, such as question–answer (Stivers and Rossano, 2010; Stivers et al., 2010). Again, deviation could impel others to demand that the deviant poster either produce the expected response or offer an explanation (Heritage 1984; Pomerantz 1984). In the absence of an explanation, violation of the norm might also attract negative attention and follow-up questions, coupled with worries about ulterior motives (Greatbatch 1986; Romaniuk 2013). The costs of resolving norm-violation situations might be high, especially since attempts to hold the perpetrator accountable disrupt the flow of the discussion (Clark and Wilkes-Gibbs, 1986; Heritage 1984).

The notion of a proper SPP is closely connected with the CA concept of preference organization (Pomerantz 1984; Schegloff 2007, pp. 58–81), which refers to a set of social norms articulating the preferred (and dispreferred) ways of interacting (e.g., there is a norm that accepting an invitation is preferred over rejecting it, and responding to a question is preferred over not doing so). Several preference principles may be relevant simultaneously, so participants judge which principles hold in the situation in light of prior actions. In this article, we not only analyze dispreferred SPPs but also pay attention to an even more specific phenomenon: trolling-like responses are systemically deviant – i.e., dispreferred – yet are produced as if they were unproblematic; there is no justification or hesitation. Thereby, trolling-like response patterns constitute a unique form of non-normative responding.

To analyze paired actions in interactions with troll-like users, we developed an analytical frame by synthesizing widely acknowledged conversation-analytic typologies of actions (Clark and Schaefer, 1989; Stivers 2013, p. 192; Vatanen 2014). This process, informed also by our close reading of the data, yielded 13 action-pair types, which we used in preliminary analysis to annotate interactions by action-pair category. Appendix 3 presents all the action pairs found in the conversation trees we analyzed that appeared to display trolling, covering paired actions found in any posts, by both troll-like and non-troll-like users, and both symmetric and asymmetric pairs. We counted the pairs found in the annotated data with reference to the pair-initiating part (the FPP). Four categories proved markedly more frequent than others (a schematic sketch of this annotation follows the list below):

  • Question – answer, where the FPP (a question) creates an expectation of an answer (Stivers and Robinson 2006). Other appropriate SPPs are reports of not having access to information relevant for answering and criticisms of the question’s implied premises (Thompson et al. 2015).

  • Assertion – agreement or disagreement, where the FPP makes a claim about a general state of affairs, often assuming an evaluative, personal stance to it (Vatanen 2014). Although assertions do not show strong expectations for a certain response, they often anticipate another assertion (confirming or disconfirming), consistent with the topic (Heritage and Raymond 2005; Pomerantz 1984; Vatanen 2014).

  • Accusation – admission or denial, where the FPP points to an admission or denial as the expected SPP (Dersley and Wootton 2000, p. 387; Drew 1978). The expected response in this case is a confirmation of the action mentioned in the FPP, possibly also explicitly supplying an account for the action (Dersley and Wootton 2000).

  • Request for action – acceptance or rejection, where the FPP normally limits the SPP to either an acceptance or a rejection (Thompson et al. 2015).
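As a purely illustrative sketch of the annotation just described – the enumeration values, field names, and helper function are our own shorthand, not the coding instrument reproduced in Appendix 3 – an annotated action pair might be represented as follows:

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class PairType(Enum):
    """The four most frequent FPP types in our data (Appendix 3 lists all 13)."""
    QUESTION = auto()             # expects an answer
    ASSERTION = auto()            # expects agreement or disagreement
    ACCUSATION = auto()           # expects admission or denial
    REQUEST_FOR_ACTION = auto()   # expects acceptance or rejection

@dataclass
class ActionPair:
    fpp_post_id: str              # the post that initiates the pair
    pair_type: PairType
    spp_post_id: Optional[str]    # the responding post, if any
    symmetric: bool               # True if the SPP is an expected completion of the FPP

def is_asymmetric(pair: ActionPair) -> bool:
    """An FPP left without a response, or met with an unexpected SPP, counts as asymmetric."""
    return pair.spp_post_id is None or not pair.symmetric
```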

We expected to see possible trolls using the normative constraints of action pairs differently from what happens in ordinary communication aimed at common grounding. In particular, we supposed that they would reply in a condescending manner or might respond to only those questions that furthered their ends. In the discussion of our results, we demonstrate that trolling-like responses indeed often differ from typical responses, and we consider these findings in terms of the principles of common grounding, which we found to explain the phenomenon and its success.

4 Results

Hardaker’s typology of styles of trolling exhibited in online conversations not only was vital to our criteria for corpus collection but also served as a basis for the analysis itself, aimed at improving our understanding of how the use of conversational turns may differ between troll-like users and other, non-troll-like users. Our main output was identification of three ways in which troll-like participants’ use of conversational actions differed from that of non-troll-like users. All three are related to what we identified as symmetry violations in the action pairs. While non-troll-like users predominantly produced symmetrical turns – typically, an FPP was followed by an expected SPP – troll-like users often produced contra-normative turns: unexpected (i.e., asymmetric) SPPs. Below, we will refer to the expected, ‘proper’ SPPs as symmetric and the unexpected SPPs as asymmetric.

The three asymmetric actions identified are ignoring, mismatching, and challenging. In ignoring, the troll-like participant does not produce a response at all – for example, leaving a question (the FPP half) unanswered. In mismatching, the troll-like user fashions an SPP that violates the Gricean maxim of relevance, either misinterpreting the FPP that preceded it or producing a post that lacks relevance to the main content of the first action. Finally, in challenging, the troll-like user questions the justifications for the FPP or challenges its author.

Table 3 shows the frequencies of asymmetric actions produced by non-troll-like and troll-like users in response to FPPs expecting an answer. The symmetric entries in this table feature responses (SPPs) to FPP actions that represent completion of a recognized action pair, in some cases following requests for further information before the responder answers. The overall values suggest that trolling-like communication contains more asymmetries than non-trolling-like communication, along with fewer symmetric closures for paired actions. We did not attempt to verify this finding statistically, however, since the observations are not independent of each other: often, a conversation incorporating trolling-like behavior displayed several symmetry violations produced by the same user. We leave such comparison for later analysis and note only that troll-like users’ posts appear to manifest an unexpectedly large percentage of conversational violations that disrupt grounding. In the following subsections, we will describe the three asymmetric strategies in more detail.

Table 3 Counts and percentages of asymmetric actions in our data.
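As an illustration of how the percentages reported in Table 3 can be derived from such annotations, the sketch below tallies asymmetric responses per user group; the data layout and function are illustrative assumptions rather than our actual analysis scripts, and the sample values are toy numbers, not our data.

```python
from collections import Counter
from typing import Iterable, Tuple, Dict

# One entry per annotated response: (user group, response code), where the code is
# "symmetric" or one of the three asymmetric strategies described in this section.
Annotation = Tuple[str, str]

ASYMMETRIC_CODES = {"ignoring", "mismatching", "challenging"}

def asymmetry_share(annotations: Iterable[Annotation]) -> Dict[str, float]:
    """Percentage of asymmetric responses for each user group."""
    totals: Counter = Counter()
    asymmetric: Counter = Counter()
    for group, code in annotations:
        totals[group] += 1
        if code in ASYMMETRIC_CODES:
            asymmetric[group] += 1
    return {group: 100.0 * asymmetric[group] / totals[group] for group in totals}

# Toy example: one asymmetric response out of two for troll-like users.
sample = [("troll_like", "ignoring"), ("troll_like", "symmetric"),
          ("non_troll_like", "symmetric"), ("non_troll_like", "symmetric")]
print(asymmetry_share(sample))  # {'troll_like': 50.0, 'non_troll_like': 0.0}
```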

4.1 Ignoring

Ignoring refers to deliberately refraining from responding to others’ posts or significant portions thereof. This type of behavior, also reported upon as a trolling strategy in Herring et al.’s research (2002), can disrupt conversation and provoke others because it may imply cold-shouldering or create a suspicion that the interlocutor has missed or misunderstood the first action (Schegloff 2007). In our data, troll-like users baited others into participating in off-topic discussion by exploiting their desire for continuity. Ignoring others’ posts or responding in only a selective manner left gaps in the conversation, which others sought to fill by highlighting the absence of replies. Troll-like users then prolonged the discussion by ignoring others’ repair attempts while remaining active in the conversation in other respects. Ignoring could be seen as violating the cooperative principle, especially with regard to the Gricean maxim of quantity, in that a non-response is significantly less than what is required for informational conversation.

Our first example is an excerpt from a larger Brexit-related discussion. Here, counter-normative posting triggers angry reactions and demands for explanations from others. This diverts large amounts of space and time from the on-topic conversation. The conversation digresses into reciprocal flaming and repetitive posting in reaction to B’s trolling-like behavior. In example 1, the discussion is initially disrupted by B’s unexpected and provocative flaming (post 2). In post 3, the troll-like user B is asked to justify the unexpected aggression and to explain the appropriation of another forum member’s nickname (the member referred to as E in the example; see posts 8 and 10). However, B ignores these pleas while remaining active in the conversation in other ways (post 6).

Example 1:

Use of the ignoring strategy in a conversation involving aggressive trolling-like behavior, in discussion of The Telegraph’s article ‘Theresa May told “you are the problem” by backbenchers furious over Brexit paralysis as they urge her to go for good of the party,’ from 9 April 2019.

Post  User  Post content
1     A     The nation state is past its sell by date.
2     B     @A You’re really exposing yourself now Moshe.
3     C     @B @A Why call him that?
            [1 side-comment omitted]
4     D     @C Because C, this B is pretending to be another B [forum member E]. The latter has a brain. The former is an anti-semite.
            [1 side-comment omitted]
5     F     @D @C E is a white-hater. B is not.
6     B     @D @C Antisemite? How did you arrive at that?
            [2 posts omitted: one restating that B is not E, the second being a side-comment]
7     B     @D @F How can a person be “anti-group of languages?” what a bizarre accusation.
8     D     @B Your comments display that and even C has noticed. Why did you copy E’s name? It doesn’t confer her intelligence to you.
            [1 side-comment omitted: debate about what constitutes anti-Semitism]
9     B     @D @B Display what? Back up your accusation.
            [2 posts omitted: debate about what constitutes anti-Semitism]
10    D     @B I don’t need to as you have already done so in your comment history. Now tell me, why did you copy E’s name? You’re still thick as s’hit.
11    C     @B @C @F @D Calling him “Moshe” as a generic Jewish name is appalling. Please don’t go in that direction.

Viewing the flame (post 2) and the alleged nickname theft as things they needed to resolve so that the original conversation could resume, other participants continue speculating on the motives behind B’s actions and insist on holding B accountable for the behavior displayed. They attempt this by addressing B directly, referring to B’s selection of words, and describing past actions. Two distinct kinds of normative expectations are relevant here: one should (1) provide a response to an FPP directed at oneself and (2) give an account when identified as accountable. Contrary to the norm, B does not engage in resolving the breakdown; instead, B works against its repair by evading accountability, directing attention elsewhere (post 6), and later producing further provocation (e.g., post 7). The provocation shows that B has not left the discussion and thus should be engaged and ready to account for the actions (e.g., see Antaki et al., 2005). This, in turn, suggests that B is ignoring some posts deliberately.

Although B systematically acts in an incoherent manner and the others’ need for explanations is apparent, other users also actively contribute to eroding the quality of the interaction (e.g., with posts 4 and 8). Their requests or demands for explanation are neither very constructive nor polite (see post 10). Thus they ‘take the bait’ and co-create trolling with B. They could simply have pointed out that the comment was inappropriate, then left B alone after observing B’s lack of willingness to cooperate; instead, their aggressive retaliation breeds further disruption. This too undercuts common grounding. Also, B’s choice of nickname acts as a trigger for interaction: whether deliberate or not, it is another possible catalyst for collaborative creation of trolling if others interpret it as deliberate provocation and take the bait (on nicknames, see Lindholm 2013).

In example 2, a different user B (not the same poster as B in example 1) successfully aggravates participants in a mainly peaceful pet-related discussion. The common juxtaposition of cat vs. dog is already inscribed in the title of the original article, and, by posting a message comparing the two, B creates an antagonistic context in which owning a cat would imply a man’s homosexuality.

Example 2:

Ignoring accusations of improper behavior in a conversation that includes aggressive/antipathetic trolling-like behavior, in discussion of The Washington Post’s ‘Dog owners are much happier than cat owners, survey finds,’ from 5 April 2019.

Post  User    Post content
1     A       I have two cats who I love. My girlfriend has a dog who I love. So I guess it’s ok to love them both. What’s especially nice is that they all seem to love me right back.
2     B → A   Ugh. A man with cats is the worst! Just come out of the closet already.
3     A → B   I thought your earlier post was kind of obnoxious. Now I know exactly why I thought that. Thanks for clearing things up.
4     B       LOL! Cat people are so easily offended....
5     A → B   Ever ask yourself why you go out of your way to offend people? Well, wait – maybe it’s not out of your way at all.
6     B → A   It’s Friday. Work is slow. They made the mistake of giving me internet access....
7     C → B   They made the mistake of giving you a job.
8     B       LOL! So many offended cat people. I LOVE THIS!!!!!!
9     D       Desperate for attentuon and affirmation. Get yourself a drooling dog.
              [2 posts omitted: comments similar to post 9, criticizing A’s behavior]
10    E       Spot-on. He’s a miserable, immature attention-seeker who needs to insult innocent people and animals to feel validated.
              [2 posts omitted: comments similar to post 10]
11    E → B   Dog person here: you’re obnoxious and need to examine why you delight in insulting people who seem perfectly nice. And stop insulting gays, too.
12    F → B   (Edited) Why are you being so ignorant? I have not read one decent comment from you. By the way, I am a dog person.
13    G → B   But, when he comes out, would he be a lesbian?
14    B       Hopefully, for his girlfriend’s sake.

B systematically ignores accusations of contra-normative behavior: posts 3, 11, and 12 are directed at B and call for a response. However, these posts are either completely ignored (11 and 12) or countered by implying that their authors are overly sensitive. The audience takes B’s bait and collaborates in authoring offensive and contentious posts that further contribute to a negative atmosphere. Consequently, the discussion digresses into flaming and acts of retribution, while B keeps hampering any possible conclusion of the conflict, by feeding the argument with additional asymmetric posts and provocation (e.g., post 14). Thus, poster B also displays active selectivity in choosing which posts to respond to. Choosing to ignore some posts, together with B’s manner of responding in, for example, post 14, provides evidence of systematic violation of coherence and accountability norms: instead of responding to questions such as that in post 12, B opts to respond to post 13, which continues to ridicule A and supports the trolling. In essence, B manages to prolong the conflict by working against common grounding. Rather than try to resolve problems by producing an expected reply (acknowledging the actions’ impropriety and/or giving an account), B evades criticism for the antisocial behavior. Hence, the conflict does not get resolved.

In the above examples, we can see that the ignoring strategy is highly effective in protracting fruitless exchanges by luring others into requesting explanations and proper answers that could make sense of the troll-like user’s ambiguous behavior. While others dutifully respond to the troll-like user’s posts, the disruptive user replies only to selected messages. Leaving posts unanswered creates information gaps that disrupt common grounding. Also, the reciprocity of a discussion is compromised when troll-like users refuse to provide explanations for their offensive actions.

4.2 Mismatching

In the second strategy that we identified – mismatching – the troll-like user either misinterprets/misrepresents the FPP or responds in a manner that is irrelevant with regard to the central point made in the FPP half. Asymmetric responses of this nature are contra-normative (Heritage 1984; Levinson 1983; Schegloff 2007), and they lead to breakdowns of conversation by disrupting the coherence of meaning-making. In particular, they violate the Gricean maxim of relevance: the first action anticipates or makes relevant a restricted set of actions, to which the irrelevant and often confusing response is not a fitting or understandable match.

For example, the excerpt below comes from a conversation revolving around cats and their contributions to overcoming social anxiety. In post 3, A displays trolling-like behavior by misinterpreting post 2.

Example 3:

Use of the strategy of mismatching in a conversation involving aggressive trolling-like behavior in discussion of The Guardian’s ‘Experience: I’m a full-time cat sitter,’ from 24 May 2019.

Post  User    Post content
1     A       There are two kinds of people in this world. People who live with cats and people whose houses don’t reek of cat piss.
2     B → A   Such a stunning insight. Did you come up with that all by yourself?
3     A → B   The first part of the aphorism is quite common. The second part is an observation that a lot of people whose houses don’t reek of cat piss tend to experience. So the answer to your question is yes and no.
4     C → B   “Such a stunning insight. Did you come up with that all by yourself?” No, of course not. it’s an old internet trope.
5     D → A   Lots of cats will go outside to do their business. They are actually very good at sticking to the same area outside i.e. they don’t like to randomly poop all over the place. Some people will put a litter tray outside. But I agree, if you have an indoor cat it’s likely there will be at least some cat pee smell as the ammonia is so strong.
6     E → A   But my cats prefer to shit and piss in your garden so I don’t know how this can be true.
7     D → D   * an indoor cat that uses an indoor litter tray
8     F → A   I have a cat. My house doesn’t reek of cat piss. Perhaps it’s the company you keep.

The expression in post 2 ‘did you come up with that all by yourself?’ is commonly used to point out an improper or foolish statement. Here, a symmetrical SPP might, for example, have provided an account of A’s reasoning process; however, A displays misinterpretation of post 2 as an information-seeking question. Thus, A resists the demand to answer the accusation and deflects any criticism of the problematic behavior. This has an impact on other participants’ responses: though some post sincere replies to A in order to work toward common grounding (e.g., post 5), most are aggravated into accusing A of disagreeable behavior (post 2) or posting insults (posts 6 and 8). If the audience had responded largely neutrally, in the manner of post 5, or had casually ignored A’s provocative comment, the posters might have succeeded in alleviating the strain. In contrast, content such as posts 2 and 8 only exacerbates the trolling or continues such behavior instead of mending the fractured common ground. In this way, by baiting others to reciprocate, A renders the conversation a regressive series of finger-pointing and name-calling actions. Later in the discussion, A’s further provocation and lack of accountability protract the others’ futile attempts at grounding all the more, so closure cannot be achieved.

In our data, participants exhibiting trolling-like behavior often utilize misinterpretation to dodge a key observation made in an earlier post. A case in point is example 4, below, wherein B acts in a trolling-like manner by derailing a discussion of the effects of Brexit on the British economy. Its transformation into an aggressive argument over racism begins with this comment by B: ‘It’s almost like letting a bunch of bitter racist white losers vote for shit ruins your country.’

Example 4:

The mismatching strategy used in a conversation containing digressive trolling-like behavior, under ‘UK economy shrinks by four times as much as predicted as Brexit paralysis takes hold,’ in Reddit’s r/worldnews, in June 2019.

Post  User  Post content
1     A     You: nope I’m not someone how calls everyone racist. Also you: the only reason the two biggest political movements in recent times happened is because RACISTS!!!!
2     B     You don’t have good analytical ability apparently.
3     A     Did you or did you not just boil the 2016 election and the brexit vote down to racism? Am I missing something? Because that’s exactly what you did
4     B     Yea, it’s white nationalist bullshit

The second post by B (post 4) demonstrates mismatching by misinterpretation: while post 3 is essentially an accusatory-question FPP, post 4 is an asymmetric response, better matching an information-seeking question. This intimates that B either does not understand post 3 or is willfully ignoring the point made in it. In either case, post 4 is not an expected response to post 3. Post 4’s inadequacy as a response is evident from A’s later actions (data not shown): re-articulating the accusation, to demand an admission or denial. This is done with greater aggression, manifesting frustration with post 4. Still, B resists the demands for accountability, refusing to cooperate and pursuing another agenda. At the same time, the contributions by A are quite face-threatening, calling out B’s behavior and intentions in very direct and offensive terms. Therefore, A’s reactions do not seem to offer a constructive footing for exchanging ideas and achieving common grounding. Arguably, A therefore contributes to the conversation’s breakdown. Again, sometimes a poster may not have intended to troll but trolling behavior gets triggered by the situation, thanks to others’ overly eager or aggressive reactions.

Mismatching was manifested in another way as well: ridiculing the first action by producing responses utterly unrelated to the tone of the first post or its main point. In example 5, participants in a discussion in Reddit’s r/worldnews had been debating the economic effects of climate change. Behaving in a trolling-like manner in post 1, A leads the conversation into disarray by sniping at other users and the topic, then refusing to behave civilly and acknowledge others’ points properly.

Example 5:

A mismatching strategy used in antipathetic trolling-like interaction, in discussion under ‘Proposal to spend 25% of European Union budget on climate change,’ in Reddit’s r/worldnews, in May 2019.

Post  User  Post content
1     A     ahhhhhhh!!!!! 40% of the global population will be displaced, there will be massive storms, polar bears will be extinct, and the Amazon Rainforest will look like the Sahara Deset. Save us Prophets of Climatology, take our tithes to appease the climate gods!!!!
2     B     I never said that 40% of the global population will be displaced, 40% of the global population lives near the coast.
3     A     Bro, I AM SO SCARED!!! %@$! The climate is about to enter a tail spin which will decimate humanity, and all the benenvolent prophets of Climatolofy needs is billions of dollars,(for now) to save the planet? COUNT ME IN! Where do I send my tithes too?
4     B     I have never fucking said that, if you are going to reply then at least reply to my comment instead of being a goof

The provocation in post 1 results in a repair attempt executed via an assertion (post 2), to which several symmetric responses would have been possible: an account, self-repair, denial of the information, acceptance of new information (e.g., a change-of-state token, per Heritage 1984), or (implied) acknowledgment of A having misinterpreted matters. Instead, in post 3, A further provokes the other Redditor with an asymmetric response – a sarcastic and pejorative assertion showing a very weak connection to post 2. Even at the level of the whole conversation tree, post 3 seems aimed only at ridiculing the other person and the discussion in general. The strategy provokes the other user to reiterate the position taken earlier but in a more aggressive tone, in post 4. This post returns A’s insults and explicitly states that A’s action is counter-normative. After post 4, A continues to insult B, further ignoring the expressed need for accountability. A’s contributions do not contain clearly defined argumentation, and they are coherent with neither the earlier discussion nor B’s comments; yet by choosing to address them, B acknowledges them and allows room for trolling to take place.

The dataset features many cases of employing mismatching to direct others’ attention to off-topic issues. The effectiveness of misinterpretive trolling-like behavior lies in its obfuscation of the meaning and relevance of the response, in violation of the Gricean maxim of relevance. In ordinary non-trolling communication, signs of possible misinterpretation are dealt with quite promptly if the parties are genuinely striving for common grounding (Clark and Schaefer, 1989). For grounding to succeed, a repair is needed before discussion proceeds to the next relevant contribution. When not accepting the SPP as a proper response, the author of the FPP will normally initiate a repair (Schegloff et al., 1977; for parallels in common-grounding theory, see Clark and Brennan, 1991). In contrast, troll-like users work against successful repair and against conversational closure.

4.3 Challenging

The third asymmetric response strategy we identified involves challenging the justification behind the initial actions. Challenges can be found in (normal) spoken conversation too (e.g., Heritage 2012; Thompson et al. 2015), so it was unsurprising that our dataset features a few challenges in non-trolling-style turns. Every challenge carries possibly face-threatening implications (cf. Brown and Levinson 1987; Goffman 1967, p. 37); however, troll-like participants used this tool in a more systematic and provocative manner. Their challenges often breached the Gricean maxim of manner. We identified two types of challenging response: attacking the grounds for an earlier post and attacking another user’s authority. We discuss both below.

4.3.1 Challenging the grounds for a post

Challenging the grounds for an action often involved countering one question with another, thus casting the first post as unnecessary or invalid while placing the challenger in an authoritative position (see also the example of question-trolling described by Zvereva 2020, pp. 114–116). This was done also by means of such assessments or assertions as ‘What a dumb post’ or by redirecting the FPP back at the original poster (e.g., A: ‘You lack understanding of what you’re talking about.’ – B: ‘No u’). In example 6, B challenges A’s post by meeting a question with another question.

Example 6:

Challenging by questioning the grounds for an earlier message. An excerpt from a conversation containing digressive trolling-like behavior in discussion of The Washington Post’s ‘Adrift in the Arctic,’ from June 2019.

Post  User  Post content
1     A     B: Are you a scientist? Thanks.
            [3 posts omitted: responses to other posts, unrelated to post 1]
2     B     (Edited) Dear A, Does it matter. If so how so? What I know about science is it is extremely equal to everyone of us. It doesn’t matter who you are, it doesn’t matter where you from, it doesn’t matter what kind of diploma you have. The only thing that matters in science is scientific evidence.

Earlier in this conversation, other users provided evidence and counter-arguments to refute B’s provocative assertions regarding climate change. Several times, B appealed to the superiority of scientific evidence and argumentation for meaning-making within the debate, mostly to back B’s own claims, which included information countering what was commonly presented and accepted in the thread. In post 1, poster A asks whether B has sufficient educational background to refute the evidence provided. With post 2, B does not produce a symmetrical SPP (i.e., one with any information on B’s scientific background), instead challenging the question’s grounds with a counter-question to render it irrelevant. The questioner is in a position to place constraints on what the next person should do (Sacks 1992, p. 54; Stivers and Hayashi, 2010); by directing the question back to the first person, B takes control of the interaction here. After B’s challenge, a futile battle for authority erupts between B and other users (not shown in the example), who engage in a tangential debate over B’s educational background. The reactions of the audience are noteworthy in other respects too. For instance, A’s post 1 is somewhat confrontational. The anticipatory ‘Thanks’ after the question may frame it more as a rhetorical question or challenge than a sincere question, or the poster’s thanks may be taken as sarcastic. Although many participants in this conversation (beyond example 6) attempt to engage fairly neutrally in conversation with B, many posters, A among them, can be seen as posting provocative responses, possibly rendering the discussion even more negative in tone.

4.3.2 Challenging someone’s authority

The second way of issuing a challenge is to call another user’s authority into question, either by questioning that user’s epistemic authority on a topic or by questioning the poster’s legitimacy as a valid or sincere participant in the discussion. In example 7, a discussion originally revolving around cats, B challenges another user via accusations of trolling – i.e., questioning that user’s sincerity within the discussion.

Example 7:

Challenging by questioning another user’s authority, as displayed in a discussion tree involving aggressive trolling-like behavior, found under ‘Love our baby girl more than anything. I’ve never been able to understand why they’re considered bad luck,’ in Reddit’s r/cats, from April 2019.

Post 1 (User A): cough Still waiting on them sources...

Post 2 (User B): The troll is interfering.

Post 3 (User A): Agreed! So stop it already and cite your sources?

Post 4 (User B): I already stated it troll.

Here, A keeps insisting that B’s assertions should be proven with proper references, using an indirect request-for-action FPP in post 1. In response, B resists this request by accusing A of being a troll and, rather than providing the requested information, altogether denying A’s authority to ask for references. The situation escalates into reciprocal accusations of trolling, by means of which B is able to divert attention from unwanted questions. Such retaliatory accusations of trolling are quite common in online disputes (e.g., Knustad 2020), and they can be seen as another mechanism of co-creating trolling. In some cases, such as this one, the accusations themselves are a form of trolling: the user acting in a trolling-like way levels them against a sincere poster.

In our data, questioning someone’s legitimacy as a valid participant in the discussion proved to be a highly provocative means of challenging that person, especially in cases in the ‘aggress’ category (see Appendix 1). Overall, the challenges created an atmosphere of uncertainty in the discussion space. By exploiting challenges to someone’s authority, troll-like users incited quarrels over authority and prolonged regressive exchanges. Besides violating the Gricean maxim of manner, this fundamentally undermined adherence to the maxim of quality in the discussion space, since it cast doubt over participants’ sincerity and their (epistemic) authority on a given topic. This muddling of the apparent motivations and identities of several users frequently left bystanders unable to distinguish sincere contributors from deceptive troublemakers. Moreover, the time and effort required to restore common ground after these breakdowns were often considerable.

5 Discussion

The core contribution of the study is our conversation-analytic treatment of the turn-taking strategies used by troll-like participants in online discussions. We identified patterns used to degrade online conversation and found that troll-like users succeed in disrupting conversations by hindering common grounding – the joint search for conversational closure and the maintenance of mutual understanding.

When closely examining the action pairs in the corpus of online conversations capturing observable characteristics of trolling-like behavior per Hardaker’s typology, we found that, across all political/societal and leisure-related discussions, troll-like users disrupt common grounding by deviating from expected conversational norms. Instead of continuing conversations in a way that would afford symmetry between an action pair’s first part (the FPP) and its second part (the SPP), they create asymmetries. Through our conversation-analysis approach, we uncovered several conversation strategies that can highly effectively derail conversations, frustrating common grounding or a satisfying closure. Of these techniques (characterized above as ignoring other users’ posts that anticipate a response, posting various mismatching responses, and challenging other users’ legitimacy or the grounds for their comments instead of addressing their posts’ content), only the first – ignoring – has previously been identified by scholars, in a brief observation by Herring et al. (2002), and none has been analyzed in detail.

The interaction patterns we pinpointed also, importantly, illustrate the collaborative nature of trolling. By not following the rules of conversational symmetry, troll-like users bait others into responding. While one user might perform counter-normatively in a relatively systematic manner, thereby disrupting the coherence of the conversation, trolling is a joint creation (as noted also by Cook et al., 2019), and anyone may end up contributing to it (cf. Cheng et al. 2017). Thus, trolling as a phenomenon emerges from community experience and culture: collaborative effort is necessary both for perceiving and identifying trolling behavior in the given context (cf. Sanfilippo et al. 2018) and for contributing to the trolling event or prolonging it. As a good troll knows, ‘it takes two to tango.’

The maxims for what is expected in conversation frame our results well: we found that they often do not hold in conversations in which troll-like participants take part. By anchoring our findings in the Gricean cooperative principle, we were able to conclude that trolling-like behavior breaches the maxims of quantity (in ignoring), relevance (in mismatching), and manner (in challenging). Finally, when the trolling-like behavior is seen as an attempt to deceive other users, all of these strategies also violate the maxim of quality, the notion that what is said should be true. Overall, the troll-like participants’ actions in the discussions display a strategically uncooperative orientation.

5.1 Limitations of the work

Our analysis focused on only a subset of trolling-like phenomena. We relied on Hardaker’s definitions in conceptualizing six distinct trolling strategies to guide our collection of data. Hardaker’s framework is not comprehensive, so identifying trolling by means of it may have blinded us to other forms of trolling. In addition, although we sought heterogeneity by analyzing both political and leisure-interest-based discussions and by gathering material from four distinct online forums, collecting data from only these sources may have exposed us to a limited portion of the full set of trolling-like strategies possible. Also, while Hardaker’s categorization is useful for studying differences in trolling-like behaviors, operationalizing it requires interpreting the categories in new contexts, quite different from those she examined herself. Further work could, accordingly, lead to the expansion of some categories or changes in the interpretation of their precise meanings. However, in light of our substantial interrater agreement, we concluded that we succeeded well in operationalizing the characteristics of trolling-like behaviors with the coding system in Appendix 1.

We examined the data only through the lens of conversation analysis, with action pairs and their symmetry violations as our main concepts. While this focus addressed the research gap, it restricted the possibilities for noticing any trolling mechanisms that cannot be identified via action pairs. Analysis of other action types or of other specific CA phenomena might yield further findings. For instance, studies could concentrate on ‘third positions,’ turns that respond to the SPP.

Another possible factor is that preference principles in online conversations may differ somewhat depending on context, just as they diverge in several ways from those governing face-to-face interaction, since the media offer very different affordances. For example, no one is held accountable for not responding to a post directed at a forum group as a whole. Furthermore, forum participants may vary greatly in sociocultural background, and research attests that the weight accorded to specific preference principles depends partly on cultural group (e.g., Goodwin and Goodwin, 1987).

Finally, while many of these limitations imply that other trolling-like conversation strategies have yet to be identified, the ones we found already provide ample evidence of effective disruption. Such strategies’ effectiveness can be explained in theoretical terms by drawing from psycholinguistic research on common grounding and on ethnomethodological and conversation analytic understanding of normative human behavior.

5.2 Conversation analytic approaches to trolling

One considerable benefit of our theoretical and methodological approach is its agnosticism regarding troll-like users’ true identity and intentions. Thus far, research into trolling has been preoccupied with studying it as intentional behavior (see Hardaker 2010), so the problem of creating a corpus that represents true trolling has been inescapable. After all, the identity of a suspected troll is seldom known; neither can the intentions behind even an avowed troll’s behavior be ascertained with any certainty. For scholars analyzing online material, it has proven highly challenging to judge whether a given user was trying to troll others or not. Since trolling is a deception game (Donath 1999) wherein the troll’s success relies largely on an ability to feign sincerity, extensive datasets wherein the trolls are conclusively identified are extremely difficult to generate.

By putting relatively little stress on the intentions behind turn-by-turn interaction (Hopper 2005), conversation analysis is not hampered by such problems. One can pinpoint trolling-like behaviors in data by analyzing the effects of conversational turns on subsequent turns, especially with respect to repairs and attempts to maintain common grounding. For example, the effects of misinterpreting a post are visible as the discussion unfolds, and the misinterpretation can be viewed as deliberate if the suspected troll does not self-correct or react in another normatively expected manner once other users have pointed out the improper behavior. In this light, behavior patterns can be identified and analyzed independently of whether the user intended to troll others. This creates opportunities for considering trolling-like behavior through its observable behavioral characteristics instead of its mind-internal, intention-bound nature. Trolling-like behavior thereby becomes much more amenable to analysis, especially since research traditions in conversation analysis offer a strong methodological and theoretical starting point for such work. Particularly important is the EM principle that one is morally obliged to be accountable and understandable to others in one’s interactions (Garfinkel 1967; Heritage 1984). Trolling-like behaviors can be seen as violations of this fundamental norm, which helps explain why they are so harmful to online conversation.

We see potential for research to develop a new operational definition for trolling, or trolling-like behavior, one that does not lead to problems similar to those accompanying the prevailing approach – especially the requirement for theorizing about trolls’ identity and intentions, neither of which a researcher can reasonably ascertain. This work should be informed by studies with larger and more comprehensive corpora, to test whether our findings withstand statistical analysis.

5.3 Implications of the findings – positive and negative potential

Online social media are a catalyst for social unrest, expressions of misogyny, and other negative behavior. With regard to solving these problems, our findings might have negative implications, but they might also point to ways forward. We have shown that, at least to some extent, trolling is an activity that can be traced, recognized, and labeled. This deeper understanding of the specific techniques that lure others into meaningless fights may be put to harmful purposes, but it can also assist in detecting and preventing disruptive behaviors.

There are two possible negative outcomes of work such as ours. Firstly, for trolls, our research might point to more effective ways to disrupt online conversations. A more far-reaching possible outcome is that research of this sort may identify means by which automated trolling mechanisms could be created and honed. If trolling-like behaviors can be codified as repeatable conversation patterns, along the lines of those developed for ordinary chatbot design (see Moore and Arar, 2019), chatbots could emerge that take part in conversations as seemingly competent partners. Their trolling may be sophisticated enough to pass as genuine, sincere behavior, and such trolling attempts may therefore become impossible to simply ignore.

We find this scenario plausible, since trolling does not necessarily require the levels of sophistication or interpersonal sensitivity that mediated social interaction generally does. In essence, the effective trolling strategies that we have examined are a set of antisocial patterns violating the Gricean cooperative principle and are similar to the ‘breaching experiments’ discussed by EM scholars (Garfinkel 1967). Such patterns could lend themselves well to automation. So far, it has been fairly easy for people to recognize when they are interacting with a chatbot; likewise, chatbots that troll – trollbots – have seemed easy to ignore. However, were it possible to develop more capable trollbots that pass well enough as human, we could imagine a dystopian future wherein low-cost trollbots are planted in online forums to systematically disrupt and manipulate civil discussion.

On the positive side, awareness of trolling mechanisms can feed into work on reducing the ripple effects of trolling. Arguably, trolls themselves and other malicious actors are already aware of the types of conversation strategies that derail discussion. As a countermeasure, it may be possible to educate users of some discussion forums about conversation strategies that often disrupt successful exchange of ideas. With greater user awareness of such strategies, possible trolling attempts might be more readily detected and defused. However, efforts at countering trolling-like and other disruptive behaviors in this way have not proven sufficient in the nearly 20 years since Herring et al. put the idea forward (2002, p. 381). It would seem, then, that automated moderation will eclipse such efforts in attempts to keep online conversation civil.

Automated moderation to address trolling-like behaviors seems feasible. If, as we speculate above, trolling can be automated, it should be possible to recognize it automatically too. We argue that identifying measurable systematic patterns in interaction and conversational coherence – such as the asymmetric responses described in this paper – should offer a means of developing automatic identification and mediation of trolling and other disruptive behavior. Various moderation approaches are possible. For example, posts that manifest characteristics of known trolling strategies, such as asymmetric responses, could be flagged as possible attempts at disrupting the conversation or trolling. This flag, such as a warning symbol, could be supplemented with an explanation of the grounds for suspecting that the post was written by a troll and of what harm it could produce in the subsequent discussion. Others taking part in the conversation may thus be primed to look out for such attempts. An automatic moderation system could also provide users with warnings or suggestions as they write posts, in case they start composing a message in an offensive tone or without responding to another user’s question or comment. Naturally, one can imagine far more direct methods of moderation, including simply blocking individual posts or outright banning a user whose posts recurrently display characteristics of trolling.
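To make the flagging idea concrete, the Python sketch below illustrates how a moderation pipeline might mark replies whose form is asymmetric relative to a question-type FPP. This is a minimal illustration under strong simplifying assumptions, not a description of any existing system: the Post structure, the keyword heuristics, and the flag texts are all hypothetical placeholders for classifiers of first and second pair-parts that would, in practice, be trained on annotated conversational data.

# Illustrative sketch only: flags replies that respond to a question-type FPP
# with a counter-question or an accusation of trolling, two of the asymmetric
# patterns discussed above. All names and heuristics here are hypothetical.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Post:
    post_id: int
    author: str
    text: str
    reply_to: Optional[int] = None  # id of the post this one responds to

def is_question(text: str) -> bool:
    # Crude stand-in for an FPP classifier: treats '?' as marking a question.
    return "?" in text

def looks_like_counter_question(text: str) -> bool:
    # Heuristic: a short reply that answers a question only with another question.
    stripped = text.strip()
    return stripped.endswith("?") and len(stripped.split()) < 25

def looks_like_troll_accusation(text: str) -> bool:
    # Heuristic: a reply that challenges the asker's legitimacy instead of answering.
    return "troll" in text.lower()

def flag_asymmetric_replies(thread: list[Post]) -> list[tuple[Post, str]]:
    # Return (post, reason) pairs for replies that look asymmetric to a question FPP.
    by_id = {p.post_id: p for p in thread}
    flags = []
    for post in thread:
        parent = by_id.get(post.reply_to) if post.reply_to else None
        if parent is None or not is_question(parent.text):
            continue  # only examine replies to question-type first pair-parts
        if looks_like_troll_accusation(post.text):
            flags.append((post, "challenges the asker instead of answering"))
        elif looks_like_counter_question(post.text):
            flags.append((post, "meets a question with a counter-question"))
    return flags

if __name__ == "__main__":
    thread = [
        Post(1, "A", "Are you a scientist? Thanks."),
        Post(2, "B", "Does it matter? If so, how so?", reply_to=1),
    ]
    for post, reason in flag_asymmetric_replies(thread):
        print(f"Post {post.post_id} by {post.author} flagged: {reason}")

In a realistic setting, heuristics of this kind would serve only to surface candidate posts, together with an explanation of the grounds for suspicion, to human moderators or to readers of the thread, in line with the flagging approach outlined above, rather than triggering automatic sanctions.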

So far, research has not developed effective methods of preventing online trolling. At present, both academic and practical efforts seem to be lagging far behind the methods and tactics of harmful actors in online discussion. It is our sincerest hope that the findings presented in this paper lead to more effective means of detecting and preventing trolling attempts in everyday online interaction, so as to eliminate their vast potential for harm.