Introduction

Technological development has led to the growth of online spaces that encourage virtual social connections across vast geographic distances at the click of a button (Haz et al. 2019). More than just another communication tool, digital platforms have become the primary venue for social, and often sexual, interactions for many in contemporary society. Some platforms, such as Omegle or Chatroulette, are designed to operate not just as text-based services but as visual ones as well. This visual dimension to online social connection presents risks for young people, who may be exposed to sexual conversation or content when communicating on these live chat platforms (Napier et al. 2021). The heightened risk of sexual contact for minors on live chat sites demands further exploration, as there are potentially significant ramifications for the field of child sexual abuse (both contact and non-contact) in the modern, internet age (Martellozzo and Bradbury 2021).

The objective of this study was to critically identify and examine the anonymously submitted, self-reported reasons that adults provided for having sexual contact with minors on live chat apps. This approach is relatively novel, as most of the existing research on sex offenders’ motivations derives from data collected during qualitative interviews, a context wherein offenders may feel pressured to provide a fictitious account of an event in order to minimise the impact of their behaviours. Furthermore, this study was intended not just to add to the literature on offender motivation but also to identify structural issues in the live chat model that contribute to, or otherwise exacerbate the risks of, online harms to minors.

The results reflect a largely unregulated environment in which minors are routinely exposed to adult users who frequent live chat sites for the overt purpose of sexual encounters. In these non-proximal, digitally mediated spaces, online disinhibition and normative deregulation are prevalent, and the adults included in this research demonstrated heightened neutralisation of inappropriate (or illegal) sexual contact (Suler 2004). Such disinhibition and neutralisation can result in the online sexual abuse of young people, which research suggests occurs in myriad ways, from unwanted sexual exposure to direct grooming and even the remote streaming of extreme child sexual abuse via live chat. By drawing on the personal experiences of live chat users to better understand the motivations and justifications for sexual contact on these platforms, the goal is to contribute to a greater understanding of how adults engage with children sexually online and to emphasise the need for more robust child protection responses to these harmful online behaviours, including through education and policy measures.

Background and Context

Despite first being proposed more than six decades ago, Sykes and Matza’s (1957) neutralisation theory remains highly relevant to the examination of online deviant sexual behaviours. Sykes and Matza did not view offending behaviours from a subcultural perspective, wherein social memberships contribute to the rational determination of whether to commit a criminal offence. Rather, they argued that criminality, and the decision to commit a criminal offence, is based on an individual’s level of control which, when impaired, permits the criminal behaviour. Neutralisation occurs when individuals become aware that their behaviour is wrong and seek to justify (or neutralise) the resulting feelings of guilt and shame in order to maintain a reputable self-identity.

There are five ‘types’ of neutralisation: denial of responsibility (e.g. shifting blame for a person’s actions to a different source), denial of injury (e.g. asserting that a person’s actions did little or no actual harm), denial of victims (e.g. asserting that a victim ‘had it coming’ or otherwise deserved harm), condemnation of the condemners (e.g. in which a person turns the attention on their accusers, often claiming they are being unfairly attacked or victimised themselves), and an appeal to higher loyalties (e.g. when a person justifies their actions as being consistent with their ideological beliefs, or ‘for the greater good’). The process of neutralisation can be understood as a means of resolving cognitive dissonance. As outlined by Festinger (1957), cognitive dissonance occurs when individuals experience negative affect from holding two inconsistent beliefs and, to resolve this, attempt to change one of those beliefs to make the two more compatible. In adopting new attitudes and beliefs, a person effectively ‘works backwards’ to justify their decisions, behaviours, and responses—the same process Sykes and Matza explored via neutralisation. The five general forms of neutralisation Sykes and Matza proposed serve as the central theoretical paradigm for interpreting the data featured in this article; however, adaptations have been made to strengthen the analytical framework, including the incorporation of concepts like Cohen and Felson’s routine activities theory (1979).

The online environment means that there are now continuous opportunities for offenders to target children—access to minors for sexual purposes has not only become easier, but the process has also become faster. In their initial outline of routine activities theory, Cohen and Felson (1979) observed that changes to modern society were providing motivated offenders with increased opportunities to commit crimes. Building on this, Cohen-Almagor (2013) discusses the risks to children on the internet, stating that ‘predatory crimes need targets with guardians absent and, thus, we must ascertain children enjoy enough protection [online]’ (p. 197). He goes on to assert that online child protection requires ‘comprehensive and integrative [approaches] … based on multiple guardians around the child, reducing the opportunities for predators and convincing them that the anticipated benefits from the offence are not worth the risks of arrest and confinement’ (2013, p. 196). Any weaknesses in this network of guardianship, including in the internet platforms themselves, place children at heightened risk of exposure to harm—including the various types of sexual contact of concern to the current research.

The internet allows motivated offenders to adopt a ‘scatter gun’ approach in identifying targets, increasing the likelihood of gaining the trust of, and access to, one child through the targeting of many. Through live streaming chat websites, motivated offenders have multiple opportunities not just to coerce and obtain nude or semi-nude images of children but also to expose themselves and/or engage in virtual sex acts, such as exposing others (often non-consensually) to masturbation in fast, ‘hit-and-run’ style interactions. There is relatively scant recent literature on public sexual exposure (or exhibitionism) and, specifically, the practice of ‘flashing’. Balon (2016) suggests that this deficit ‘is perhaps reflecting a long-standing perception that exhibitionistic behaviours are merely a nuisance compared to other sexual offences’ (p. 77). Miller (2021) affirms this hypothesis, noting that the more lenient legal position on ‘cyber-flashing’ as opposed to traditional, in-person indecent exposure (to which it is analogous) emanates from the general perception that non-proximal offending is less likely to constitute a ‘true threat’, raising the question of ‘where does cyber-flashing cross the line from being merely an annoyance or upsetting to being an actual or perceived threat?’ (p. 443).

McGlynn and Johnson (2021) cite a range of overlapping motivations for cyberflashing (specifically sending unsolicited penis images) such as ‘sexual gratification, a “laugh”, status building or homosocial bonding, boredom, reduced inhibitions, as an exercise of male power and entitlement, and to harass, intimidate, control and distress’ (p. 176). Regardless of the particular motivation, they note that ‘research has indicated that men are aware that receiving unsolicited penis images can be a threatening, harassing and distressing experience for women’ (McGlynn and Johnson 2021, p. 176). McGlynn and Johnson were referring to the impact on women generally; however, the distress caused by cyberflashing (and offenders’ foreknowledge of that distress) must weigh even more heavily when dealing with offences against minors, as this research details.

Control is another important determining factor for criminality, and one which can be muted or even distorted by online disinhibition. Suler’s (2004) theory of the online disinhibition effect postulated that cyberspace provides an environment where people do or say things that they would not normally in the ‘offline world’, empowered by a sense of anonymity and depersonalisation that enables people to feel free to act with impunity on the internet. Suler (2004) notes that this provides an opportunity for people to partake either in acts of kindness that would not be displayed offline, known as benign disinhibition, or in great acts of cruelty, referred to as toxic disinhibition. Online disinhibition provides an opportunity to explore one’s own identity in ways which may be curtailed by the expectations or social rules tied to a person’s community, religion, or ethnicity in a process of self-actualisation (Casale et al. 2015). This process can also be intrinsic to determining and expressing one’s own sexual identity, offering the freedom to act in a sexual way, without fear of retribution, often empowered by the veil of anonymity that the internet provides. Whilst this can be (and often is) conducted in a prosocial manner, a significant number of people nevertheless use this opportunity to sexually offend against children. Some are first-time offenders, reassured by the perceived ‘safety net’ that acts taking place online, as contactless offences, appear to provide (Taylor 2017). Offenders often maintain the illusion that their online behaviours are a ‘fantasy role play’ specific only to that time and space, a cognitive process linked to Suler’s (2004) online disinhibition.

Suler (2004) describes this cognitive separation of self in cyberspace as a variation of the dissociative imagination: he believed that the combination of (a) being able to be someone different from one’s offline self and (b) the ability to escape or dissociate from one’s online behaviours is extremely powerful and magnifies the disinhibition experienced by the individual. This disinhibition can occur to such an extent that the online persona a person creates for themselves is so far removed from reality that it is regarded as fantasy—part of a game in which the normal rules and regulations that exist ‘offline’ no longer apply. By turning off their smartphone or laptop and ‘leaving’ the internet, users experiencing such dissociation can return to normalcy and compartmentalise their online identity, creating a perceived separation between fantasy and reality (Suler 2002). In performing this ritualised separation, the user can effectively relinquish all responsibility for what has occurred in that discrete time period or virtual space. These concepts are strongly applicable to online sexual offending and, more pertinently, the neutralisation and rationalisation processes that can occur post-offence. The process of online disinhibition allows offenders to distance themselves from their actions and from those that they target (Whittle et al. 2013). This is especially true when said target is an adolescent who is already engaging in risky behaviour, such as engaging with strangers at random via platforms like Omegle. The inherent anonymity of live chat sites can dampen emotional reactions and the sense of accountability, mitigate fears of reprisal (such as reactions of dismay or disgust) and, at the same time, increase cognitive distortions and the likelihood of engagement with young people (Whittle et al. 2013).

Methods

This study critically examines requests for legal advice relating to sexual behaviours involving minors that took place online via live chat streaming platforms, such as Omegle or Chatroulette. Only legal advice requests made by self-identified ‘adults’ were considered. It is important to note that the requests for legal advice were made anonymously; therefore, the gender of the adults was impossible to determine. These adult participants will be referred to as ‘posters’. To locate the data available, preliminary research was conducted to identify the most common online webpages being used to ask for legal help for behaviours that are sexual, involve children, and take place on online live chat websites or apps. Two sites were identified: Avvo and Reddit. Both are open-access websites available to all members of the online community and allow users to see posts made by people located worldwide. Avvo is an online platform offering free marketing for legal services, as well as legal information, referrals, and forums that allow for consultation between lawyers and posters. Reddit is a US-based aggregation website that allows members to participate in a wide array of discussions which, unlike on Avvo, are not all related to legal advice. However, Reddit does host specific threads dedicated to providing legal advice. No restrictions were made regarding the location of the posters, nor were there any identifying details of the posters available at any stage of the process, including gender, age, and geographical location.

Data were obtained by entering the search terms ‘chat app’, ‘online chat’, and ‘chat room’ into each website’s own search function. The first 100 posts (N = 100) that related to sexual encounters between an adult and a minor were extracted. While more posts than this were available for analysis across Reddit and Avvo, saturation was reached once coding of the first set of 100 was completed (Ando et al. 2014). As such, it was determined that no further data collection was necessary. Across the 100 (N = 100) requests for legal advice considered in this research, 649 individual codes were applied.

The analysis of anonymous online posts is a qualitative method that has only recently emerged, and the ethical position on its use in research remains unsettled (Thompson et al. 2021). This work constitutes a non-participant digital ethnography involving the observation of posts made voluntarily on a public forum. No posts were collected from private accounts or online platforms requiring membership—all can be accessed at any time, by any internet user. When contributing to these forums, users acknowledge the terms and conditions of the website, which include that their posts are publicly accessible. Even though all posters used anonymised usernames to post, further steps were taken to ensure the anonymity of those included in this sample. Where quotes are used, they were parsed and paraphrased as much as possible to limit searchability while maintaining the meaning and impact of what was written. All posts were sanitised to remove identifiable data such as poster handles, contact details, email addresses, and locations. The ages and gender of the posters were not redacted, as they provide important context that is of concern to this research. Posts were selected on no criteria other than the order in which they were initially presented in the search. The responses from the legal professionals were not a research focus and, therefore, were not included in the overall analysis.

The sanitised data were uploaded onto Dedoose, an online platform for thematic coding. Whilst codes were accumulated organically, several themes were pre-determined in order to apply a more clearly defined theoretical framework to the data analysis. These themes were adapted from both Sykes and Matza’s work on neutralisation (e.g. denial of responsibility) and Cohen and Felson’s routine activities theory (e.g. the motivated offender and the absence/presence of a guardian). These three theory-derived thematic groups were supplemented by two additional parent categories: awareness of behaviour and child sexual abuse (CSA). The codes that constitute these thematic clusters are explored in greater detail in the results section below.

Results

Denial and Minimisation of Responsibility

Codes in the thematic cluster denial and minimisation of responsibility were applied in instances where an adult poster acknowledged their sexual behaviours were wrong, but insisted that they only occurred because they had ‘no choice’; claimed they were manipulated, forced, or coerced into inappropriate behaviour/s; blamed the child; and/or viewed themselves as a victim (Muller et al. 1994).

Within this theme, the most commonly applied code was blaming the child (N = 89). In this category, the most discussed theme was that the child lied about age (N = 24), applied wherever a child claimed to be older, looked older, or had a profile that incorrectly labelled their age as being 18 or older.

One poster described:

‘I talked to a girl on Omegle and she said she was of 18 years. We sexted and exchanged images and videos. When I saw her videos and body, I thought she was a young looking adult cause her body was fully mature. But then she told me about school, so I asked her to tell honestly and she told me she was 12’.

A second poster requested advice in regard to a comparable scenario, asking:

‘If you were on an anonymous chat site and you were talking to someone who claimed to be 18 or older and invited you to “sext” with her on a picture messenger and exchanged some pictures and she revealed herself to be 12 could you get in trouble for that?’

The second most applied code in this cluster was child-initiated (N = 17). This was applied when a poster specifically stated that a child had initiated sexual interaction (e.g. requesting nudes, exposing themselves, or offering to sell or share nudes).

Instances of this included:

‘SHE asked to ME to masturbate on camera for her’.

And:

‘It was revealed that they were 15, I felt very uncomfortable when they started steering the conversation sexually. I stated that they were too young to be on this website, while they were begging to reveal myself (I stated that it was extremely illegal to do that) and I then got instantly banned from Omegle as “banned for possible bad behaviour”’.

The Motivated Poster

An adaptation of the traditional motivated offender (Cohen and Felson 1979), codes in the motivated poster theme (N = 147) were applied wherever an adult poster: engaged in a sexual behaviour without first seeking the approval of the other person; stated in their post that they were on the live chat because they felt sexually aroused; used some form of encryption device to hide their identity (either when visiting the live chat or when posting to request advice); was on the site to engage in a specific sexual fantasy, fetish, or kink, or was seeking roleplay engagement; demonstrated awareness of legislative loopholes prior to engaging in the sexual behaviour; said they were on the site purposely to purchase, sell, or trade nude images; or showed knowledge and understanding that they were using specific platforms because of their lack of moderation.

Under this theme, the most commonly applied code was sexual conversations (N = 25), applied whenever a poster stated that they were on the live chat looking to flirt, sext, or ‘dirty talk’ with strangers—even if there was not necessarily explicit intent to have these conversations with children. Examples of sexual conversations include:

‘I ended up talking through text (no video) with a girl who was 17 and I told her I was masturbating and asked if she could help me. She said she would so I asked her to start masturbating and she said she would and then I realized what I was doing wasn’t right’.

The second most commonly applied code in this cluster was sexual acts (N = 19), which was applied when a poster deliberately engaged in a sexual act without ascertaining the age of, or seeking approval from, the other person, or engaged in a sexual act in full knowledge that the other person was a child. Examples of this included:

‘So yesterday I was very horny and I did something very stupid, pulling my dick out on Omegle. Now what I fear that some of the users I interacted with might be minors. I did not show my face or give personal information, just in case even cleared the cache of the browser’.

And:

‘I was really horny and decided to explore Omegle. I’d never used it before but I heard it kept you anonymous so I thought I’d masturbate on video chat just cycling through random people until I found a girl my age. (I’m 24) I’m worried because I know a couple kids saw my junk’.

In a clear demonstration of behavioural awareness, there was at least one example where a poster displayed knowledge of—and seemingly participated in—a child being actively abused on a live chat platform. In their post, the individual in question reported that the following conversation took place on a live chat platform (the poster self-identifying as ‘Me’):

‘Guy: She likes to watch.

Guy: She’s still young.

Me: How young?

Guy: I can’t say, OK?

Me: OK’.

Awareness of Behaviour

Awareness of behaviour (N = 158) codes were applied where the poster self-reported any of the following: deliberately not querying or verifying a child’s age prior to engaging with them sexually; seeking legal advice to determine the extent of their accountability for what had happened; hiding their identity to remain anonymous whilst engaging with a child sexually; or acting post-contact to block, delete, or hide evidence.

The most commonly applied code in this cluster was poster did not seek age verification (N = 35). Adult posters did not always ask the age of the person they were communicating with sexually; however, they regularly characterised them as ‘girls’. In some instances, posters did acknowledge that not asking for the child’s age was a mistake, such as:

‘It was a stupid mistake that could have been avoided had I simply asked for their age’.

Not all posters in this sample demonstrated a disregard for age after it was revealed that they were engaged in a sexual conversation with a child. In several cases, posters reported immediately ceasing contact when made aware that they were talking to someone under the age of 18. In one such instance, the poster reported:

‘Met someone on Reddit last night looking to sext. We moved to Kik where I was asked for a nude so I sent one. They sent back several non-nude photos. We talked about kinks for a few minutes and I asked for nudes. They then said “I’m actually not 18 I’m 15 is that a problem?” I told them it was and immediately blocked them. Is this something I need to be worried about?’

The second most applied code under awareness of behaviour was hidden identity (N = 19). In these instances, posters stated that their faces were never visible during the sexual acts committed on live chat, whether to excuse their actions or to mitigate their accountability in a legal or moral sense. Discussion around the face not being visible ranged from a purposeful liability-limiting decision to simply a standard part of the sexual interaction itself. A sample reflecting this diversity includes:

‘I sat right there without showing my face, saying hello to girls and asking them if they wanted to watch me play with myself’.

And:

‘The site said I was soliciting a minor when clearly I wasn’t as my face wasn’t visible’.

And also:

‘They first asked for a picture of my face, but I explained that I do not show my face for strangers online’.

Child Sexual Abuse (CSA)

For the CSA theme (N = 47), all codes applied related to posters who self-reported being aware of the child’s age at the time of the sexual encounter, and/or had concerns about their actions being tantamount to creating and distributing child sexual abuse material (CSAM), and/or inciting a child to produce CSAM. The most commonly applied code in this section was awareness of child’s age (N = 34). This was applied to all posts in which the adult was aware that the person they were speaking to was a child. Examples of this included:

‘I asked girl about her age on Omegle, she answered me that she’s 14 and suddenly she asked me that if I want nudes from her, I said yeah’.

And:

‘Was drunk on Omegle and was chatting with some people and I get connected with a girl.. she says she has no audio and asks me to call her, I’m staying at a hotel and call her from that phone—she says she’s 14 or 15 and asks if that bothers me.. I said no’.

And also:

‘I was on Omegle unmoderated. I am 20. Came across a girls cam. She said she was 14 and asked if I wanted to see her breasts. I was clothed and didn’t show my face. I did not actively seek this out. I mistakenly, like an idiot with no brain, said sure’.

Absence/Presence of a Guardian

The theme absence/presence of a guardian (N = 98) included codes applied when a poster discussed elements like the live chat platform’s unverified age requirement; experiences of being moderated or reported to external law enforcement agencies; loopholes around international jurisdiction boundaries; being tracked or traced; or the significance of a time lapse between the act and any action taken by the site. The most applied code in this thematic cluster was 18+ adult-specific site (N = 21). Examples included:

‘I was assuming he is 18+ since he pressed OK to be here’.

Some posters actively apportioned blame to the minor they interacted with for breaking the site’s rules, such as:

‘It was a minor who visits a video sex chat website, violates the TOS agreement by being underage, and chats with adults in a sexually explicit manner’.

Whilst it is true that sites like Omegle do state that they are ‘recommended for 18+’, they have no age restrictions or additional means of verification; Omegle, for example, states that children can use the site with the permission of a parent.

The next most common codes were site moderation (N = 15) and international jurisdiction (N = 15). Site moderation included comments referring to instances in which moderators either intervened when the poster was engaging with a minor sexually, or otherwise put mechanisms in place to prevent inappropriate (or illegal) sexual contact, such as:

‘I was on a Chatroulette like website, naked, it sent me a msg that my ipadress was logged and I was going to be arrested, could this be true?’

In codes related to international jurisdiction, posters queried whether their geographical location, and/or that of the child they were in contact with, constituted a loophole that could prevent them from getting into legal trouble. Comments included:

‘I don’t live anywhere in the United States. I want to know how likely it is that the FBI will arrest foreigners living outside the United States for simple sexting?’

And:

‘I get into trouble for having showing my penis to minors? I’m from Spain but we talked in English so I think they were from the US’.

Discussion

These findings reveal considerable behavioural diversity and psychological complexity among adults who have sexual contact with children online. Whereas limited previous research has explored factors that drive online sexual offending from the offender’s perspective (Powell et al. 2019; Davidson et al. 2022), the present article adds to the literature considerably by drawing on the personal narratives of people who self-reported taking part in such behaviours and examining their first-hand accounts of events before, during, and after the encounter. These rich qualitative data provide a perspective that is otherwise unrepresented in the literature, especially as it pertains to the environment of a live chat website like Omegle. Beyond the specific context of live chat platforms, the findings of this study add to the research base on cyberdeviance perpetration more broadly, in particular sexual cyberoffending. The study provides insight into the rationalisation processes of those who expose themselves online to children (and others) and contributes to important theoretical debates in the literature around pathways into deviance in the online space (see Goldsmith and Brewer 2015; Bleakley and McCarthy 2023).

Thematic analysis of the posts in this sample reveals a range of techniques used to rationalise and justify sexual contact with minors on websites like Omegle, including structural factors like the absence of clear warnings on these chat sites about the legal risks of sexual contact with young people (Salter and Sokolov 2023). This research reinforces the notion that sexual offences cannot take place without opportunities to act, and that the environment in which offences take place plays a significant role, both in the level of risk children experience and in which specific types of behaviour occur. When individuals are motivated to engage in sexual contact, they are more likely to create opportunities to offend in locations offering easy accessibility and lacking rules, warning signs, and guardianship (Clarke and Felson 1993). Where the safeguards in place on specific platforms like Omegle are weaker, the potential for adults to effectively justify sexual contact with minors, whether intentional or inadvertent, is far greater. This promotes a culture of blame-shifting, evidenced in the patterns of neutralisation observed in this sample.

Age verification processes typically require users to actively affirm, in a single click, that they are over 18 before gaining access to the live chat service. There is no mechanism in place to confirm this, meaning that, in practice, users of any age can (and do) access the site. No posters mentioned the term criminal offence, and only nine (N = 9) posters specifically asked whether they had committed a crime. The majority thus seemingly ignored the fact that online sexual communication with a minor is an offence and saw the platforms as spaces offering security and anonymity, where everything is allowed, including online exposure to a child and sexual solicitation of a child. Rather than asking whether they had committed a crime, they sought advice to establish whether they could be held responsible for their actions (N = 37) while denying responsibility for what took place.

Sykes and Matza (1957) describe denial of responsibility as a state in which a person acknowledges that their actions were wrong, but claims they had ‘no control’ or ‘no choice’ over the events that took place. This form of denial is key to the blame-shifting observed here, where many posters asserted that they genuinely believed they were speaking with an adult—either because they made assumptions based on physical appearance (e.g. ‘her body was fully mature’) or because the child they were speaking to actively hid the truth (e.g. ‘she said that she lied and that she was 15’). This level of plausible deniability is only possible because platforms enable it through their deficient protective protocols. By operating an ‘honesty system’ of age verification, adult users are effectively able to claim (legitimately or not) a genuine belief that the person they were speaking to was over 18, because that person had agreed they were over 18 in order to access the site in the first place. Objectively, this may appear a weak justification, and it could be argued that adult users should be aware of these weak security processes and, as a result, cognisant of the risk that they may be communicating with children. However, several posters requesting help described scenarios wherein sexual conversation with a child appeared to be incidental, or unintentional. In these instances, posters admitted to participating in sexual conversation or requesting nudes but ceasing communication immediately when the person they were speaking to admitted to being under 18.

While still problematic, these examples raise questions as to where the onus of responsibility falls—on the adult poster who (sexual habits aside) was communicating in good faith on a platform explicitly intended for people who are also over 18, or on the platform itself, for creating the conditions in which inadvertent sexual contact with children can so easily take place? Though some posters did terminate their conversations after being made aware they were talking to a child, many more did not cease communication immediately, for various reasons. For example, several posters claimed a belief that the communication was fantasy, and that they thought they were talking to an adult who was posing as a young person, until the real age of the person was revealed either through texts or photographic evidence. In these cases, posters often shifted responsibility for events onto the child themselves, a blame-shifting exercise common to sexual offending both online and offline. Importantly, regardless of the stated ‘reason’ (or, perhaps, ‘excuse’), the posts analysed reflected a general lack of interest in the impact that the posters’ actions may have had on the child. Of those who knew that they were communicating with a child, only two mentioned the word ‘regret’.

Often, the literature on online abuse focuses on the grooming process, wherein an adult actively pursues a child via digital platforms with the explicit motivation to engage in sexual offences, whether contact or non-contact. Traditionally, grooming requires a period of time for relationships of trust to be built before they are later exploited, although online this process is often faster (Martellozzo 2013). The pace at which sexual contact takes place is another aspect where behaviours on live chat differ: rather than a social process unfolding over time, sexual contact on live chat was more likely to consist of fast-paced, ‘hit-and-run’ style behaviours (e.g. indiscriminately masturbating on camera while being connected at random with other users). There is a strong element of online disinhibition (Suler 2004) in these behaviours: posters routinely mentioned not showing their faces while participating in sexual conduct on live chat and treating that ‘anonymity’ as a mitigating factor in their actions.

Disinhibition is central to understanding the cyberflashing that is prevalent on live chat sites, but so too is the fast-paced nature of the medium. These sites operate by allowing users to connect with others at random, simply clicking a button to ‘move on’ to the next user. This naturally creates a fast-paced, transitory online environment where contact is fleeting and can be ended very swiftly by either party to the conversation. For motivated posters actively using the platform for sexual gratification such as cyberflashing, this means decisions on whether to expose themselves to the person on the other end of the webcam must be made quickly. One poster related a common way of using the site, saying ‘so I thought I’d masturbate on video chat just cycling through random people until I found a girl my age’. Several posters claimed to have begun masturbating before connecting with others, demonstrating a wilful disregard for the risk that they might expose themselves to children, though no specific intent to do so. There is a robust research base that highlights the challenges to rational, prosocial decision-making under stressful or otherwise fast-paced conditions (e.g. Partnoy 2012). The combination of pre-existing motivation, lack of adequate safeguards and protections, online disinhibition, and the fast-paced nature of the platform itself could be seen as a perfect storm of conditions favourable to child sexual offending on live chat sites.

Whilst this study provides a unique insight into the concerns being raised by adults who engage with others sexually on online chat sites, there are some limitations to the findings. For example, we do not know the extent to which the posters were being truthful. Whilst the study has high ecological validity, as there was no influence or manipulation from a researcher, it is still likely that posters did not explain what occurred on the live chat site entirely honestly, in order to minimise their own role and the potential harm caused. We also do not know whether the legal sites where these posts for help were made moderate the content and/or remove posts that break any posting rules relating to subject matter. The graphic nature of some of the posts suggests that they do not; however, this cannot be known with certainty. Going forward, it would be useful to develop this research further by exploring the requests for legal advice from children who engage with strangers sexually online, who also post requests for help on forums like Reddit and Avvo. It would also be important to examine why these platforms are being used to engage with strangers sexually, and how they are being used to facilitate these sexual interactions.

Limitations

The current research is not without limitation. Using open-source data posted to Reddit and Avvo presents potential issues around self-selection. Those who posted on these forums seeking legal advice cannot be assumed to represent all people who have sexual contact with children on live chat platforms, and thus their experiences cannot be generalised to a more extensive population of users. Generalisability is also impacted by the sample size, which includes just 100 posts. While this was deemed sufficient for the purposes of this research and was enough for saturation to be achieved, it nevertheless represents only the experiences of a small proportion of live chat patrons, and an even smaller proportion of internet users in general terms. This research, like most qualitative work, does not aim for generalisability; rather, it presents exploratory findings that provide crucial insight into the personal motivations of a select group of individuals, which could provide a foundation for future work in this area—including work that does include generalisability as a core objective. The other major limitation of this research is common to all self-report studies: it is impossible to verify with certainty whether the narratives presented by posters in the sample are accurate. Indeed, due to the nature of the subject matter, it is very likely that these narratives are not entirely accurate accounts of events. For the purposes of this study, this is not a fatal flaw. Our interest was less in establishing an accurate account of events than in how the posters explained events from their own perspective and neutralised their actions. While knowing the truth of what happened in the interactions described would add another dimension to this analysis, it is not necessary to have this information in order to examine these posts as neutralisation narratives in their own right.

Conclusion

This research showcases the various ways in which adults who have had sexual contact with children on live chat platforms justify and rationalise these behaviours after an incident, in the context of crowd-sourcing legal advice from other users on forums like Reddit and Avvo. It reveals several major emergent trends in this population: some are linked to conventional neutralisation practices like denying personal responsibility and shifting blame to online platforms, while others demonstrate more conscious approaches to online sexual offending, such as intentionally hiding identities or searching for ‘loopholes’ to minimise legal culpability. In the process of justifying behaviour on platforms like Omegle, the research reveals a preoccupation among posters with structural protections, both in identifying the absence of such protections and in highlighting weaknesses in those that were in place. Furthermore, posters demonstrated a general lack of concern with the potential implications of sexual exposure online.

For many posters, sexual contact with a minor on live chat was not described as a primary objective. Rather, their stated intent was to have sexual contact with any stranger the site connected them with, with elements like age (or any other demographic factor) constructed as secondary, or incidental. This speaks to another phenomenon related to cyberdeviance, wherein behaviours like cyberflashing become normalised in certain online spaces, to the point that posters often justified their sexual contact with children by suggesting that such conduct was normal on sites like Omegle, and that the children they had sexual contact with either knew this or, if they did not, should have. Again, this is reflective of the perception that online spaces are inherently distinct from the ‘real world’ and are characterised by different normative standards of behaviour, with the onus on all platform users to be aware of the particular norms of any online space they enter.

Implications

The first-hand accounts of posters who claimed to have had some form of sexual contact with children on live chat revealed not only several areas of concern but also opportunities to improve online safeguarding for children. Above all, the main issue that emerged was that live chat sites offered motivated adults a space to participate in sexual conversation with others in a loosely regulated environment, which put young people under the age of 18 at heightened risk of sexual contact (Salter and Sokolov 2023). The general lack of regulation on these platforms created risks in several different ways. For motivated offenders, these platforms unequivocally provide a space to actively seek out children to abuse with low risk of being caught. For users who are not specifically interested in children, the fast-paced and anonymous conditions of the sites allow for disinhibition that, in turn, often results in a wilful disregard for the consequences of their online sexual behaviours, which also places children at risk. Though some posters reported having been ‘banned’ from using sites for inappropriate conduct, beyond this, the consequences for sexual contact with children on live chat were minimal. If this status quo continues, the result will likely be greater disinhibition borne from user perceptions that their actions on live chat sites carry no actual repercussions. Enhancing repercussions for offenders may be partly accomplished by strengthening legal provisions to allow law enforcement to investigate (and prosecute) those found to be pursuing sexual contact with children via live chat platforms. Structural measures like the European Union’s Digital Services Act (DSA), adopted in 2022, may go some way toward enhancing protections in this area, although the practical efficacy and impact of transnational laws like the DSA remain to be seen (Pirkova 2021). On a domestic level, specific reference to cyberflashing has been included in the United Kingdom’s Online Safety Act which, along with potential jail time for individual offenders, gives state regulators the power to fine or block access to sites that fail to take steps to prevent this kind of activity (Wakefield and Gerken 2022).

Apart from these external actions, there are also internal measures that platforms could adopt with immediate effect to improve their safeguarding capability. Functional age verification remains a crucial issue: relying on users to honestly affirm that they are over 18 does little to prevent access by children, absent additional safeguards designed to keep children off these ‘adults-only’ sites. While effective age verification presents practical challenges, reforms to online platforms like OnlyFans show that it is not impossible to achieve (albeit not entirely fool-proof) if the motivation to implement enhanced safety exists (Titheradge and Croxford 2021). Beyond age verification, risk could be mitigated to some extent by increasing warnings and reminders of the legal risks of engaging in sexual behaviours, including exposure, with strangers online. Many of the posters discussed in this study claimed to have been carried away in the thrill of sexual conversation, causing them to momentarily ‘forget’ the risks of their behaviours. If more consistent reminders of these risks were incorporated into the platform (e.g. automatic prompts and targeted warnings), this may prompt adult users who are not specifically seeking out children for sexual contact to reconsider their actions and take additional steps to prevent such contact from happening. These reforms should also extend to simplifying the process for reporting inappropriate contact, providing all users (not just children) with greater agency and ability to notify the platform when unwanted, inappropriate, or illegal communication occurs. These steps may not entirely resolve the problem of child sexual abuse on live chat sites, but they are nevertheless necessary to minimise and mitigate risks to children and to enhance the overall safety of these online spaces.