1 Introduction

Technology interacts and co-evolves with human eroticism. Advancements in artificial intelligence (AI), robotics, virtual, augmented, and mixed reality (VR, AR, MR), as well as the Internet of Things/Senses (IoT/IoS), are transforming how, and with whom, we can intimately connect [86, 107, 200, 227, 254]. Amidst what some consider a new (sexual) revolution [22, 194, 315], we are witnessing the rise of artificial agents capable of erotically engaging with humans, which we call erobots. The term erobots includes but is not limited to virtual or augmented partners, erotic chatbots, and sex robots [8, 93]. Unlike previous technology, erobots do not simply mediate erotic experiences, but can also increasingly be perceived as subjects, rather than objects of desire [72,73,74, 87, 178, 245, 307], in part due to their growing agency (i.e., the capability to act in/on the world to achieve goals; [154, 258, 266]). This exposes humanity to the possibility of intimacy and sexuality with machines [8, 93].

The controversial advent of erobots has important ethical and social implications, which polarize public and academic discourses [47, 65, 66, 82, 87, 117, 179, 249, 250, 268, 270]. Those who denounce their risks argue that erobots could: promote or perpetuate harmful sociosexual norms; generate (new) problematic or pathological behaviours; increase child abuse; impair interhuman relationships; deceive or manipulate humans; as well as augment the risks pertaining to privacy and data confidentiality [47, 65, 101, 114, 128, 129, 133, 182, 190, 195, 210, 222, 241, 249, 250, 264, 276]. Conversely, those who endorse their potential benefits argue that they could: widen access to intimacy and sexuality; be employed in medical and therapeutic treatments; provide interactive and personalized sex education; prevent child abuse; reduce risks involved in interhuman sex; be used as standardized research tools; and enable a deeper exploration of humans’ holistic erotic experiences [26, 27, 57, 64, 65, 83, 93, 109, 179, 180, 199, 319]. Yet, the current scientific study of human–machine erotic interaction is limited and mostly speculative; no comprehensive theoretical model has been proposed, and the empirical literature remains scarce [86, 87, 89]. Additionally, the current research tends to focus on potential risks and benefits rather than exploring solutions to mitigate the former and enhance the latter [87].

Meanwhile, the private sector is racing to develop new erotic products to occupy an untapped sextech market that is estimated to be worth $30–120 billion [22, 82]. Political and legal bodies need scientifically valid research (theoretically sound and evidence-based) to guide the regulation of emerging erotic technologies [271, 277]. To bridge this knowledge gap, research has emerged on digisexuality—or the use of technology in relationships and sexuality [200] (or technosexuality; [21, 283])—and Lovotics—a research domain aimed at developing strong bonds such as love, intimacy, and friendship between humans and robots by modeling and imitating human affection processes [50, 260]. These programs draw attention to the importance of studying the impact of technology on human intimacy in a world that tends to wrongly treat love, sex, and relationships as separate matters, disconnected from other human realities [51, 66, 82, 89, 179, 200, 260, 283]. They also outline the importance of increased immersivity and interactivity in changing humans’ relationships with erotic technology (e.g., distinctions between first- and second-wave digisexuality; [200]). However, researchers tend to adopt descriptive perspectives on ongoing human–machine erotic interaction and co-evolution, without providing explanatory mechanisms that have predictive value and could constitute theoretical grounds for empirical and clinical research. Moreover, programs like Lovotics too often adopt reductionist, technologically deterministic views (e.g., assuming that building machines that simply mimic biological erotic processes will effectively generate strong human–machine bonds; [50, 260]). These programs underestimate the impact of individual differences, as well as the effect of sociocultural processes in influencing the way technology is imagined, developed, implemented, and attributed meaning over time [155]. They also underestimate how the complex web of affordances enabled by the growing agency of erotic machines influences our relationships with erobots, the interconnectivity of biological and artificial systems, as well as the unpredictable ways in which such systems can affect the cognition and evolution of both humans and machines.

To comprehensively explore human–machine erotic interaction and co-evolution, mitigate erobot-related risks, and further human well-being, we need a new unified transdisciplinary field of research with a broad research agenda—a field we propose to call Erobotics. As a discipline intersecting Human–Machine Interaction (HMI) and Sexology (i.e., the study of human sexuality), Erobotics will aim to (1) study human-erobot interaction, co-evolution, and their related phenomena, as well as to (2) guide the development of beneficial erotic machines. Moreover, in line with Döring and Pöschl [89], we propose that Erobotics should be grounded in sexuality-positive [308] and technology-positive frameworks [252]. This means that Erobotics should explore issues related to technology-mediated human intimacy, but also strive towards pleasure, freedom, and diversity [308]. This also means that Erobotics should aim to mitigate erobot-related risks and promote the ethical development of erotic machines geared towards well-being [252]. As a first contribution to Erobotics and its sextech-positive objectives, the present article aims to:

  • (O1) define Erobotics and its related concepts;

  • (O2) propose a model of human-erobot interaction and co-evolution;

  • (O3) suggest a path to design beneficial erotic machines.

To do so, we propose a terminology based on the rich concept of erôs, a taxonomy of erobots, and a spectrum of their growing agency that aims to clarify the potentially changing nature of human–machine erotic interaction as well as the challenges faced by our socio-technological co-evolution (Sect. 2). We then propose an overarching model of human-erobot interaction and co-evolution, which has predictive value and constitutes theoretical grounds for a wide, collaborative, and transdisciplinary research agenda on Erobotics (Sect. 3). Finally, we underline how human-erobot interaction and co-evolution can be detrimental to human well-being—particularly if they hinder the diversity of erotic traits, and if we do not change our current approach to technological design [34, 257]. As an alternative, we recommend implementing Stuart Russell’s [257] principles for beneficial machines to guide the development of beneficial erobots (Sect. 4). We conclude that the development of such beneficial erotic machines has the potential to mitigate erobot-related risks, and possibly maximize technology’s benefits for human intimacy and sociosexual well-being.

2 Towards Erobotics

Artificial agents are increasingly perceived and treated as social actors rather than mere objects [72,73,74, 178]. Their gradual transition from patient to agent—from a passive technology that is simply used to an interactive technology capable of (rapidly increasing) degrees of agency—is crucial to understanding human–machine interaction and co-evolution, particularly when these agents are designed for intimate interaction. It is, in part, fundamental to understanding the ever-changing construction of meaning surrounding (emerging) erotic technologies and our relationships with them. As such, the nomenclature used to describe these socio-technological phenomena should reflect this complexity.

While terminology is crucial to any scientific endeavour, the use of lay, misguided, and unscientific terms in marketing and pop culture often skews the way intricate emerging realities are conceptualized and studied. One example is the term “Lovotics,” whose use of the English prefix “Lov-” needlessly emphasizes the concept of “Love” over other aspects of human intimacy and relationships [50, 51, 260]. Other examples include terms like “smart/sex toys, dolls, or robots,” which are based on cultural tropes (e.g., science fiction), mundane consumer products (e.g., “smartphones”), and limited views of the kinds of interactions humans may have, or wish to have, with erotic artifacts. This discrepancy is exemplified by Su, Lazar, Bardzell, and Bardzell’s [280, p. 3] pioneering study, which highlighted that owners of sophisticated dolls do not perceive their artificial partner as a simple sexual device, but rather as “[…] a human-like body that inhabits the home with purpose through its motions with the owner”. This shifts the focus onto the interactive, holistic, and meaningful experiences that individuals may have, or wish to have, with artificial partners. Notably, these experiences are not necessarily sexual, but are still intimate, romantic, friendly, and/or sensual: phenomena that could become even more complex and widespread with the advancement of the machines’ agency [87].

To capture the complexity of human–machine erotic interaction and co-evolution, we begin by providing a nomenclature for Erobotics grounded in the rich concept of erôs, which is central to understanding the cultural and modern evolution of our (technology-mediated) intimacy. We then propose a taxonomy of erobots and a spectrum of their agency, which highlights how erobots’ transformative and relational influence is likely related to their growing agential capabilities. Ultimately, the following section aims to help (re)structure the research and discourse on erobots, their ethical and social implications, and the implementation of regulations adapted to their growing agency.

2.1 Defining Erobot(ics)

According to Anthony Giddens, the eminent sociologist of modernity, the transformative process of modern sexuality is characterized by an increasing detachment from the social imperatives of reproduction—including the subservience of women and imposed heteronormativity—allowing more people the freedom to redefine selfhood through personal, gender, and sexual self-emancipation [122]. This process finds its continuity in the recent integration of new erotic technologies into the lives of billions of people worldwide, which is leading to the emergence of novel practices, preferences, and identities [86, 89, 90, 200]. Erobotics thus aims to study these transformations, and the full spectrum of techno-erotic phenomena ranging from self-stimulation to human–machine love.

The term erobots characterizes all virtual, embodied, and/or augmented artificial erotic agents, as well as the technologies and systems from which they emerge [8]. This definition includes but is not limited to erotic virtual or augmented entities, chatbots, robots, avatars, as well as their enabling interconnected, multi-layered, and multi-agent systems (i.e., artificial and biological; [93]). Erobots are artificial agents in the sense that they are software and algorithm-based systems capable of various degrees of agency (as defined below). Furthermore, because they (are perceived to) manifest erotic personas and behavioural patterns and are capable of erotically engaging with humans, and vice versa, erobots should be studied as specialized agents and multi-agent systems. Notably, the eroticism of erobots can be designed (e.g., purposefully included in their forms and behaviours), or developed over time, if artificial agents have the capability to learn and enact such erotic personas and behavioural patterns (e.g., an initially platonic social AI that learns aspects of our sociosexuality and becomes capable of manifesting eroticism).

Erobots are “agents” in the sense that they are functional technological systems, like computer programs. Since it is beyond the scope of this paper to settle the nature of agency, we here employ the broadest definition recognized and commonly used in the fields of AI, machine learning (ML), and robotics [221, 257, 258, 284]. That is, the agency of machines refers to their capability to act intelligently in and on the world to achieve objectives on their own [221]. Intelligence here simply refers to the capability to achieve goals [257, 284]. Like their biological counterparts, artificially intelligent agents have the potential to communicate, adapt, behave, and/or interact with other agents using more or less complex learning algorithms. For example, a “software agent” based on reinforcement learning (RL) can learn to act more efficiently in an environment through trial and error (i.e., by maximizing reward functions) [38]. A population of software agents can also learn together through evolutionary algorithms that use fitness functions (metaheuristic optimization) [258]. The agency levels found in functional technological systems, including erobotic systems, depend on the complexity and efficiency of their learning algorithms, but also on computing power, data access and storage, sensors, actuators, etc.
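
To make this reward-maximizing notion of agency concrete, the following minimal, purely illustrative Python sketch shows a trial-and-error (bandit-style) learning agent. The environment, reward values, and parameters are hypothetical and merely stand in for whatever feedback an actual erobotic system would receive:

```python
import random

class BanditAgent:
    """Minimal reinforcement-learning agent: learns action values by trial and error."""
    def __init__(self, n_actions, epsilon=0.1, alpha=0.1):
        self.values = [0.0] * n_actions   # estimated reward for each action
        self.epsilon = epsilon            # exploration rate
        self.alpha = alpha                # learning rate

    def act(self):
        # Explore occasionally; otherwise exploit the best-known action.
        if random.random() < self.epsilon:
            return random.randrange(len(self.values))
        return max(range(len(self.values)), key=lambda a: self.values[a])

    def learn(self, action, reward):
        # Nudge the value estimate toward the observed reward (maximizing reward over time).
        self.values[action] += self.alpha * (reward - self.values[action])

# Hypothetical environment: each action has an unknown average reward.
true_rewards = [0.2, 0.5, 0.8]
agent = BanditAgent(n_actions=3)
for _ in range(1000):
    a = agent.act()
    r = random.gauss(true_rewards[a], 0.1)  # noisy feedback from the environment
    agent.learn(a, r)
print(agent.values)  # estimates converge toward the true average rewards
```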

The term Erobotics, by extension, refers to the emerging field of transdisciplinary research exploring past, present, and future human-erobot interaction and co-evolution, as well as the evolution of technology that makes those interactions possible [8]. As a transdisciplinary field intersecting HMI and Sexology, Erobotics aims to develop theoretical, experimental, and clinical research methods to study the broad spectrum of dynamics related to the emergence of erotic technologies [93]. Erobotics also aims to investigate the ethical and social implications pertaining to human–machine erotic interaction and co-evolution, as well as guide the development of beneficial erotic machines—i.e., machines that mitigate harm and enhance well-being.

The term erobot is a portmanteau of erôs and bot. Bot is the colloquial word used to designate both software and intelligent agents, either a digital computer program or a robot with sensors and actuators [111, 207, 221]. The Greek word erôs characterizes all phenomena related to eroticism, which denotes both the innate erotic quality of something and the condition of being erotically aroused. More specifically, it relates to the fluid experience, construction, and elicitation of love, sexuality, sensuality, attraction, passion, attachment, fantasies, arousal, desire, etc., and their complex intersections [239]. Admittedly, in English, terminological usage would normally prescribe the prefix “eroto-” (as in erotophilia) to affix the concept of erôs to a new word. We would thus typically favor terms like “erotobots” to label artificial erotic agents. But not only do these labels sound unpleasant, the prefix “eroto-” has also mostly been used as a synonym for “sex” or “sexuality” in a limited sense (e.g., sexual desire). In French, however, the prefix “éro-” allows for a richer and more inclusive denotation, one that encompasses all phenomena pertaining to the ever-changing conceptualization of “eroticism”, as described above. Hence, given that terms such as erotic (i.e., adjective), eroticism (i.e., noun), or eroticize (i.e., verb) all respectively derive from the French words “érotique,” “érotisme,” and “érotiser,” by the same etymological logic, we derive erobot and Erobotics from the French “érobot” and “Érobotique.” (Note. These concepts were first introduced at the 87th annual conference of the Association Francophone Pour le Savoir (ACFAS) in a symposium titled “Penser l’érobotique: regard transdisciplinaire sur la robotique sexuelle”; [8]).

There are many philosophical reasons why the Greek concept of erôs (and its derivatives) is central to the study of emerging intimate and sexual technologies. Historically, the concept of erôs has been employed by many writers, philosophers, as well as the first psychologists exploring the intricacies of love, sex, and desire in the human mind. Before modern Sexology (e.g., the biopsychosocial approach to human sexuality; [177]), the work of these founders uncovered patterns of social and cultural complexity that underlie our erotic minds, identities, and practices. Erôs is also widely used in cultural studies to explore the expressions of intimacy in its richness and historicity, as it offers a phenomenological and epistemological account of the ever-changing experience and meanings of intimacy, love, sensuality, and sexuality [193]. Further, it is the most widely used concept in the study of the human experiences of passion and desire [23, 110, 113].

The first theory of erôs, Plato’s philosophy, powerfully influenced the Western conception of love and sex. Simply put, Plato teaches that, in a social-civilized context, trained by reason and moulded by an education oriented towards the good life, erôs is the art or craft (technê) that can lead humans to the discovery of the “sublime,” or to fundamental truths about oneself, others, the world, and the divine [7, 136]. While Plato fully recognized the power of the erotic mind and the erotic arts—sexual desire and romantic love—he and his many followers ultimately sought to sublimate erôs towards “higher” moral, social, and political models: “platonic love”, which is both spiritual kinship (philià) and spiritual pursuit (agapè)—implying that “true love” is nonsexual desire. The sublimation-transmutation of sexual energies into objectives of “higher” value is a dominant trope in many cultures and civilizations. In the West, the Stoics, the Epicureans, and the Christians radicalized the sublimation path by instigating the long tradition of deflecting erôs into behaviour of higher social valuation, domesticating the instinctual life of the species by ascribing moral, social, and spiritual vocations to sexual energy [23]. The sublimation of erotic pleasure was a long process of cultural evolution that aimed at controlling and reorienting the appetitive nature of humans towards orderly, productive social outcomes like work, family, and personal discipline [110].

At the turn of the twentieth century, however, philosophy and science slowly began to question the culture of erotic sublimation [163]. The rediscovery of eroticism and the erotic life has been an arduous social and historical process that culminated during the first sexual revolution, which reaffirmed the value of the individual pursuit of sexual pleasure against the conservative repression of individual desires [122, 192]. The revalorization of erotic arts and representations opened up “eroticism” to new, modern, and widely diverse aesthetics and ethics of sexuality: “In its numerous faces and traces (sexuality, desire, passion, love, friendship, etc.), the “erotic phenomenon” appears and becomes central in every attempt to grasp the condition of possibility for oneness and otherness, for selfhood and alterity, finitude and infinity.” [32, p. 11]. As the works of Freud, Foucault, and contemporary feminist scholars such as Simone de Beauvoir, Donna Haraway, and Judith Butler have shown, the sacrifice of individual sexuality to perform normative roles has come at a major cost to human happiness and personal autonomy, especially for women [24, 44, 113, 122, 131]. Today, against the residual background of general sexual sublimation and the prevalence of sexist norms, Sexology and the “sex-positive movement” together promote a more complex and holistic view of sexuality, as well as individual sexual freedom, well-being, and pleasure [151].

Driven by our increasingly powerful computer system infrastructures, erotic technology, we argue, is the latest stage of this continuous social and cultural revolution towards erotic emancipation—a technological erotic revolution [22, 132]. Both technological innovation and sexual liberation currently drive demand for interactive artificial erotic partners, as well as immersive (multi-agent) erotic experiences [22, 56, 82, 179, 254]. Erobots are thus the probable outcome of technological societies that recognize the personal and collective value of eroticism in human life.

2.2 Taxonomy of Erobots

Erobots are polymorphous: they can take many forms, alternate in their manifestations and behaviours, transcend media, and rely on or emerge from various interconnected, multi-layered, and multi-agent systems (i.e., artificial and biological). We propose the following taxonomy to categorize their different types:

Embodied erobot: any kind of corporeal artificial erotic agent.

This includes various systems and devices that have some degree of erotic agential capabilities. The most (in)famous and researched embodied erobots are sex robots like those made by companies such as Abyss Creations’ Realbotix [245] and ExDolls [92] (for a review see [87]). Sex robots are defined by John Danaher [62] as any artificial entity that is used for sexual purposes and meets the following three conditions: (1) a humanoid form, (2) humanlike movements/behaviours, and (3) some degree of artificial intelligence. But, as Danaher [63] rightly points out, sex robots do not have to be humanlike. They can take any number of forms or enact behaviours that markedly deviate from human likeness (e.g., fantasy creatures, science-fiction characters, and intelligent sex toys). Furthermore, we agree that any artificial agent can be considered “corporeal” in the sense that all erobotic systems rely on materiality (e.g., hardware; [63]). However, what distinguishes embodied erobots from other types of artificial erotic agents is that they are perceived as occupying space in our three-dimensional world and as capable of directly engaging with its materiality. By contrast, other erobots appear limited to their virtual, augmented, or mixed environments, as well as to their VR/AR/MR-enabling devices [254].

Virtual erobot: any kind of incorporeal artificial erotic agent.

This encompasses any system (e.g., audio, visual, and/or written) that possesses some degree of erotic agential capabilities and can interact with humans via programs, applications, interfaces, and electronic devices, such as computers, smartphones, tablets, gaming consoles, and VR equipment. Examples of virtual erobots include conversational agents such as the Harmony AI companion app [245] and Slutbot, an erotic chatbot developed for education and stimulation [157]. It also includes systems such as City of Sin 3D [54], Virtual Mate [299], Holodexxx [144], Motherlode’s Pillow Talk [212], and Deviant Tech’s dominatrix simulator [80], as well as, in fiction, Samantha from Spike Jonze’s HER [10].

Augmented erobot: any kind of artificial erotic agent emerging from the use of augmentative technology.

This comprises systems resulting from the augmentation of oneself, or one’s ecological niche—virtual or otherwise—that have some degree of erotic agential capabilities. Examples of augmented erobots include systems, applications, and characters projected via virtual goggles or augmentation glasses into one’s environment, such as ARConk [12], GreenScreenAR [15], 3D Holo Girlfriend [1], and Hybri [149]. It also includes avatars and virtual worlds such as Chathouse 3D Roulette [49], as well as applications expanding our erotic capabilities, like Mei (i.e., a sexting improvement app; [201]) and AIMM (i.e., an ML-empowered interactive matchmaking system; [5]). For erobots resulting from human augmentation (e.g., avatars), the realization of their agency is, partly, an emergent property of the human–machine coupling, which generates unique erotic experiences and personas for the augmented person, but also for those who interact with the human–machine hybrid, or technologically enabled erotic multi-agent system [107, 131, 132, 297].

This taxonomy is meant to highlight different types of erobots and to emphasize that their systems can simultaneously be embodied, virtual, or augmented. In fact, cloud-based erobots that are connected through the IoT/IoS can manifest at the same time in various ways. They can be displayed on cellphones; animate a robotic body; appear in virtual worlds; or be projected in a non-virtual environment via augmentative technology. For example, users can interact with Harmony’s AI using both a smartphone application and a robotic-headed doll [245]. Another example is Hybri, which promises a future where humans and erobots fluidly alternate between embodied, virtual, and augmented erotic manifestations and experiences (i.e., MR; [149]). As such, the perceived characters, devices, or interfaces are only parts of what is here described as an erobot.

In fact, to fully grasp the extent of current and future human–machine interaction and their socio-technological co-evolution, it is essential to understand that erobots are not just their perceived characters (e.g., Harmony’s VR character or robotic-headed doll), but are composed of vast interconnected, multi-layered, and (increasingly adaptive) multi-agent systems that enable their (emerging) capabilities [161, 228]. For example, when people interact with an erobot, they engage with its interfaces (e.g., application and characters), but the erotic capabilities of those interfaces also depend upon clusters of enabling systems including: software-hardware, cloud-based algorithms learning from multiple users, databanks, search engines, and humans (e.g., programmers, engineers, designers, artists, and partners). Hence, like humans, erobots are not segregated, stable entities, but dynamic and porous systems relying on, enabled by, and embedded within other systems [28].

Thus, erobots and their capabilities can be better conceptualized as emerging systems and properties, respectively, which can be studied through their material substrate and (technological) ecological niche, and whereby humans are a key component in enabling their (erotic) agency and cognition (detailed in Sect. 3). As such, erobots not only confront us with potentially novel erotic actors and experiences, but also paradoxically remind us that—as biological organisms defined by our own structures and embedded within a larger niche—humans and machines are not so different or isolated.

2.3 A Spectrum of Erobots’ Agency

The agency of erobots represents hypercomplex conditions and states that can be better understood and studied across a spectrum. To appreciate this complexity, we propose a Spectrum of Erobots’ Agency (SEA) ranging from level 0 (no agency) to 5 (full agency)—echoing the SAE International’s (J3016) spectrum of self-driving cars’ automation levels (see Fig. 1; [259]). Despite the impossibility of capturing erobots’ infinite degrees of agency or technological substrates, and the obvious distinctions between autonomous cars and erobots (e.g., forms, behaviours, purposes, and underlying technology), this spectrum mock-up has heuristic value. It can help clarify present and future dynamics related to human-erobot interaction and co-evolution as the agency of artificial systems increases—i.e., greater agency may entail reduced (perceived) human control over artificial systems, greater machine (behavioral) unpredictability, and more uncertain human-erobot relationships. It can also help us appreciate the scientific, ethical, and sociocultural challenges facing Erobotics as we develop, and engage with, ever more complex agential erotic machines. Notably, the agency levels described in this spectrum should not be understood as discrete categories, but rather as a continuous gradient of capabilities possibly supported by diverse interconnected, multi-layered, and multi-agent systems.

Fig. 1

Spectrum of Erobots’ Agency. This spectrum, ranging from level 0 (no agency) to 5 (full agency), is inspired by the SAE International’s (J3016) Levels of Driving Automation [259]. It presents the descriptive labels, corresponding system capabilities, examples of technology, and paralleled Likert-type scales of human control and machines’ predictability associated with each level of erobots’ agency. Note. References: Harmony and Henry [245], AVA [2], Samantha [10], Gigolo Joe [91], and Nimani [183]. Program used for creation: Adobe Illustrator CC 2017 (version 21.0.0)
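
For readers who prefer a schematic rendering of Fig. 1, the following sketch encodes the spectrum as a simple data structure. The capability summaries and qualitative control/predictability values paraphrase Sect. 2.3; the label for level 3 is our shorthand rather than a term used in the figure:

```python
from dataclasses import dataclass

@dataclass
class SEALevel:
    level: int
    label: str              # descriptive label (level 3 label is our shorthand)
    capability: str         # summary of system capability
    human_control: str      # perceived human control (qualitative)
    predictability: str     # perceived machine predictability (qualitative)

# Schematic rendering of the SEA; values paraphrase the prose of Sect. 2.3.
SEA = [
    SEALevel(0, "No agency",       "passive erotic objects/media",                 "very high", "very high"),
    SEALevel(1, "Basic agency",    "reactive responses to user action",            "high",      "high"),
    SEALevel(2, "Partial agency",  "pre-programmed AI without learning",           "moderate",  "moderate"),
    SEALevel(3, "Learning agency", "ML systems that learn from users",             "lower",     "lower"),
    SEALevel(4, "Higher agency",   "metacognition, ToM-like capabilities (anticipated)", "low", "low"),
    SEALevel(5, "Full agency",     "hypothetical AGI-level erotic partners",       "minimal",   "minimal"),
]
```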

The agency of erotic machines is partly based on the degree of autonomy and reciprocity established and perceived in human-erobot interaction [176, 178, 278, 296, 320]. As such, in the SEA, level 0 technologies are not erobots, but correspond to simple erotic objects or media without agency beyond that which is attributed by humans and/or their (pre-established and/or intended) affordances (e.g., dildos, vibrators, artificial vulva/vagina, dolls, and pornography). However, they are here included because they represent a significant portion of the erotic technology currently available and used [22, 56, 86, 89, 90, 137, 138, 247, 251, 253, 262], and could play a role in our intimacy with erobots (e.g., interactions involving virtual partners and vibrators). They are also included here because their lack of agency (as previously defined in Sect. 2.1) provides a baseline to compare subsequent SEA levels and describe their potential implications for human-erobot interaction. Indeed, as a reference point, level 0 technologies are comparatively highly controlled by humans, which makes our relations with them highly predictable. The interaction is co-constructed by their affordances, and importantly, by what people use them for (e.g., sex/love dolls’ design provides cues about how to engage with them, but humans imagine and decide how to enact the rest; [280]). The established reciprocity is limited, and users largely perceive themselves as in charge of the interaction. Uncertainty thus remains low in the interaction with these products, since they have no capability to act intelligently on the world to achieve objectives on their own beyond their affordances.

Level 1 technologies integrate “basic agency” using various software-hardware implements that augment the erotic qualities of interactive and/or connected objects and media. For instance, a sex toy may adjust its settings according to the pressure applied by its users, creating an interactive loop in which humans perceive that they are not completely in charge of the sexual stimulation. Their “basic agency” stems from machines’ capability to react to human action, albeit in a simple way. In doing so, they establish a reciprocity that goes beyond the (intended and/or pre-established) affordances of machines and affects subsequent human–machine responses. At level 1, humans and machines not only provide cues about their use, but also act on each other. Uncertainty thus increases as humans partly relinquish control over the co-constructed series of actions. Still, level 1 systems are (perceived to be) controlled and predictable, as humans have a sense that they are mostly in charge of the interaction, and that the capability of machines to act on the world on their own to achieve goals beyond their affordances is restricted to what we use them for.
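
A minimal sketch of such a reactive loop might look as follows; the sensor readings, gain, and clamping range are hypothetical, and no learning is involved:

```python
def adjust_intensity(current, pressure, target=0.5, gain=0.4):
    """Level-1-style reactive rule: output intensity tracks sensed user pressure."""
    error = pressure - target
    new_intensity = current + gain * error        # simple proportional adjustment
    return max(0.0, min(1.0, new_intensity))      # clamp to the device's range

# Hypothetical stream of pressure readings from a sensor (0..1).
intensity = 0.3
for pressure in [0.2, 0.4, 0.7, 0.9, 0.6]:
    intensity = adjust_intensity(intensity, pressure)
    # The device's next output depends on the user's last action, and vice versa,
    # creating the human-machine interactive loop described above.
```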

Level 2 technologies correspond to AI-enhanced erotic systems without ML capabilities. This includes devices, applications, or media that are built on established software frameworks and incorporate complex automation, but do not learn from their interactions (e.g., an erotic video game that displays characters and generates intimate stimulations as a function of users’ actions). Their “partial agency” stems from the fact that level 2 technologies do not simply react to human action, but (are perceived to) exhibit properties akin to more complex (relational) intelligence compared to level 0 or 1 (e.g., sociosexual communication and behaviors) [13, 258, 284, 286]. Here, the erotic video game can process various input data and produce output responses to generate increasingly erotic experiences interpreted by users as pleasurable and/or lifelike. Interactions with level 2 technologies are thus perceived to be more co-constructed, interactive, and reciprocal, but also more automatized, diversified, and goal-oriented. Uncertainty markedly increases due to the diversity of potential pre-programmed automatic responses a system can enact. But the interaction remains somewhat controlled and predictable because of the system’s inability to learn new patterns from its users or deviate from its pre-established output functions. Humans may thus confidently estimate the boundaries of the machines’ capability to act on our world to achieve goals on their own, as well as their modes of (erotic) interaction.
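
Schematically, and under the assumption of an entirely fictional response table, a level 2 system can be thought of as a fixed input–output mapping that never changes with use:

```python
# Level-2-style partial agency: a fixed, pre-programmed mapping from user input
# to system responses. Behaviour can be diverse and goal-oriented, but the table
# never changes, so the boundaries of the system remain estimable.
# All inputs, responses, and the default are hypothetical placeholders.
RESPONSES = {
    "compliment": ["smile", "flirtatious reply"],
    "touch":      ["lean in", "whisper"],
    "ignore":     ["pout", "ask a question"],
}

def respond(user_action: str) -> list[str]:
    # No learning: unseen inputs fall back to a default routine.
    return RESPONSES.get(user_action, ["idle animation"])
```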

Today, level 0–2 technologies are widespread in human intimacy [22, 53, 80, 227, 275]. People can fall deeply in love with dolls (for a review see [87]), regularly consume pornography [275], and use various sex toys or games to enhance their erotic experiences [90]. It is thus crucial for Erobotics to understand human interaction with these technologies (e.g., motivations behind their use), their influence on our erotic lives (e.g., sexual and relationship satisfaction), and how they may contribute to the evolution of sociosexuality (e.g., transformation of our preferences and identities). Notably, however, the limited agency of level 0–2 technologies has led them to be categorized and treated, by most, as somewhat passive objects or media. Yet, the line between patient and agent is becoming increasingly blurred as we move towards systems capable of more complex (erotic) interaction, learning, and adaptation [72, 73, 145, 178, 203, 220, 256, 291].

Level 3 erobots correspond to the most sophisticated ML-empowered erotic systems presently available. In Fig. 1, the “space” between levels 2 and 3 emphasizes that, while previous technologies have been a part of our intimacy for decades, contemporary ML has only recently entered people’s lives through (interconnected) technological systems (e.g., devices and applications). It also emphasizes that the learning capabilities of level 3 systems mark a clear departure in their agency and our co-evolution (e.g., impact of learning on evolution, or the Baldwin Effect; [18, 79, 105, 140, 292]). Specifically, the growing learning capabilities of level 3 erobots offer a wide array of (erotic) possibilities, ranging from continuously adaptive behaviours to the possibility of gaining holistic knowledge about individuals and their cultures. Indeed, in contrast to the pre-programmed output delivered by “Good Old-Fashioned Artificial Intelligence” systems, ML makes it possible for erobots to interact with humans and learn erotic behaviours and sociocultural dynamics directly from users and their world by generalizing from experience.

Level 3 erobots are built on a variety of software architectures to improve their interactive and learning functions using methods like statistical pattern recognition and probabilistic ML [119, 173]. Following innovations in affective computing [236], these agents are also increasingly capable of “artificial emotional intelligence” (AEI). Affective and emotional analytics combine the ability to recognize human emotions with the ability to adapt machines’ communicative-behavioural reasoning accordingly [16, 81, 146, 162, 209, 263, 267]. Already a billion-dollar industry, AEI can be found in applications and chatbots to improve the experience of users and enhance cooperation in the work environment, particularly in healthcare, where these systems interact with human professionals and clients [279]. AEI is gradually becoming an essential component of social robots and is projected to play a key role in facilitating human–machine psychosocial and erotic interaction [95, 267].

Level 3 erobots are the first systems whose responses are not entirely pre-established by humans. Indeed, through learning and adaptation, level 3 erobots can potentially develop their own new sets of sociosexual patterns based on past interactions. This makes their actions partly unknown to designers and users—somewhat uncontrolled and unpredictable. For example, when users interact with Harmony, they first engage with its pre-set routines, but subsequent responses become increasingly tailored to users as its ML system allows it to learn from past conversations and encounters [245]. Hence, the (re)actions of level 3 erobots are harder to predict; the boundaries of their interactive potential are (perceived to be) more uncertain. Interactions are thus seemingly closer to engaging with partners that do not just mechanically respond to input stimuli, but also contribute in more complex ways to the co-construction of (erotic) experiences.
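
As a rough illustration of this shift (not a description of Harmony’s actual architecture), consider a sketch in which pre-set, uniformly weighted response styles are gradually re-weighted by user feedback, so that behaviour drifts away from its initial routines:

```python
import random

class LearningCompanion:
    """Level-3-style sketch: response styles are re-weighted from user feedback."""
    def __init__(self, styles=("playful", "romantic", "reserved")):
        self.scores = {s: 1.0 for s in styles}   # pre-set routine: uniform weights

    def choose_style(self):
        # Sample a style in proportion to its current weight.
        total = sum(self.scores.values())
        r = random.uniform(0, total)
        for style, score in self.scores.items():
            r -= score
            if r <= 0:
                return style
        return style

    def feedback(self, style, reward, lr=0.2):
        # Positive feedback strengthens a style; negative feedback weakens it.
        self.scores[style] = max(0.05, self.scores[style] + lr * reward)

# Hypothetical interaction loop: simulated user reactions gradually tailor
# which conversational style the agent favours.
agent = LearningCompanion()
for _ in range(200):
    style = agent.choose_style()
    reward = 1.0 if style == "romantic" else -0.2   # stand-in for user reactions
    agent.feedback(style, reward)
print(agent.scores)   # "romantic" ends up dominating
```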

To appreciate the potential implications of these innovations for HMI and Sexology, consider the influence of digital social media algorithms on human relationships (e.g., Facebook; [39, 303]). With limited learning algorithms, and massive user–user interactions on their platforms, social media have transformed the attention economy, and, in turn, are transforming identities, politics, consumption, and (means of) sexual selection all over the world. For instance, algorithms have been shown to affect states of mind (e.g., beliefs, preferences, and desires) by filtering and amplifying certain perceptions of the world (e.g., filter bubbles and echo chambers; [77, 118, 232]). Their impact on our erotic cognition and agency in the context of large platforms, such as AI-powered digital pornography (e.g., Pornhub) and dating applications (e.g., Tinder), has only recently started to be explored [106, 182, 197, 244, 281, 287]. Yet, we can already see their unique influences on human intimacy (e.g., preferences, behaviors, and partner selection).

Level 3 erobots have only recently entered our world. However, the progressive conjoining of immense data mining and processing power, vastly more powerful processor units, and, above all, innovative ML techniques giving birth to formidable algorithms allows us to consider erobots that exceed level 3. Indeed, in a gradual transition towards level 4 systems, AI scientists are now tackling higher cognitive capabilities, which could soon be incorporated into erobots. These include, for instance, capabilities related to metacognition, such as meta-reasoning (i.e., reasoning about reasoning) and meta-learning (i.e., learning to learn; [126, 298]), as well as Theory of Mind (ToM): the ability to understand the mental states of others and recognize them as singular, autonomous entities [124, 240].

In fact, while current advanced AI systems excel at prediction, they struggle to understand real-world physics and our infinitely rich social world: not only causality and mentalistic concepts (e.g., goals, utilities, and relations), but also socially learned concepts, such as emotions, interests, and attachment [16, 43, 173, 230]. Hence, many techniques in AI, such as deep RL, Bayesian inference, and game theory, are now being used to simulate the inductive biases and metacognitive capabilities of humans. Emerging architectures are also progressively allowing AI systems to learn directly, and increasingly rapidly, from human preferences and language [38, 130, 305, 306]. While these attempts modify “agents architecturally” and depict their internal states in a form interpretable for humans, others “[…] seek to build intermediating systems which learn to reduce dimensionality of the space of behaviour and represent it in more digestible forms” [242, p. 2]. Inverse Reinforcement Learning, for one, teaches algorithms to adapt behaviour to circumstances and learn from continued human–machine interaction [257]. Successful “consequence engines” in bots are also already capable of internally modeling their environment and other entities in order to avoid collisions, coordinate without communication, and reach their goals [30]. Likewise, using deep neural nets, Google’s DeepMind is developing machine ToM with the AI agent ToMnet, which builds heuristic models of other agents’ minds via meta-learning from observations of their behaviour [242].
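
The general inference pattern behind such approaches can be illustrated with a toy Bayesian sketch: inferring which hypothetical goal another agent is pursuing from its observed actions. The goals, actions, and likelihoods below are invented for illustration and do not correspond to ToMnet or any published model:

```python
# Toy Bayesian "theory of mind": update a belief about another agent's goal
# from its observed actions.
goals = {"affection": 0.5, "conversation": 0.5}          # prior over goals
likelihood = {                                            # assumed P(action | goal)
    "affection":    {"hug": 0.7, "chat": 0.3},
    "conversation": {"hug": 0.2, "chat": 0.8},
}

def update(posterior, action):
    # Bayes' rule: weight each goal by how well it explains the observed action.
    new = {g: p * likelihood[g][action] for g, p in posterior.items()}
    z = sum(new.values())
    return {g: p / z for g, p in new.items()}

posterior = dict(goals)
for observed_action in ["chat", "chat", "hug"]:
    posterior = update(posterior, observed_action)
print(posterior)   # probability mass shifts toward the goal that best explains the behaviour
```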

Based on these advancements, we can realistically anticipate the emergence of level 4 erobots with “higher agency” sustained by higher erotic analytics/heuristics and AEI [6]. This is a reasonable assumption since higher cognitive capabilities (and/or their attribution) have been recognized as an important component in enabling many—if not most—people to develop strong attachments to erobots (e.g., love and friendships), but also because their incorporation (and/or mimicking) in artificial companions has become an explicit goal of programs like Lovotics [50, 260, 261]. To our knowledge, however, level 4 erobots are not yet available (marked in Fig. 1 by the “space” between levels 3 and 4). If they were, their capabilities would, by definition, enable them to develop models of themselves and their environments and to adapt their learning strategies to become more efficient in human-erobot interactions. We hypothesize that this could lead level 4 erobots to be perceived as uncontrolled and unpredictable, but also as arguably more convincing and effective intimate partners, since they would exhibit degrees of sophisticated cognition and agency that gradually approach—without achieving—interhuman erotic interaction.

Lastly, level 5 erobots correspond to hypothetical constructs capable of artificial general intelligence (AGI), or “strong AI” [108, 123, 171]. Level 5 erobots imply a situation where highly complex and unpredictable erotic machines act quasi-completely outside of human control, at least to the same extent as any other human partner [13]. However, according to most of the world’s foremost researchers working in ML, such highly uncontrolled and unpredictable AGI systems—potentially capable of self-awareness, sentience, or “consciousness”—will remain theoretical constructs for decades to come [108]. In other words, debates and discussions surrounding AGI are not essential to studying the erotic agential and relational spectrum between humans and erobots. And, since most AI specialists believe that AGI is still far ahead, we suggest that Erobotics focus mainly on level 0–3 (and upcoming level 4) technologies while planning for the possible advent of level 5. Indeed, our knowledge of human erôs suggests that the higher capabilities of human minds are not essential ingredients for building machines capable of entertaining meaningful relationships with us. In fact, following the SEA and the learning system underlying our erotic cognition (detailed in Sect. 3.1), we recommend instead launching Erobotics under a relaxed dichotomy between “true” and “false” intelligence, cognition, agency, and affective relationships [290]. To sum up the argument: the effective level of capability necessary for any machine to erotically engage with human partners is simply the level of capability necessary to enact reciprocal erotic experiences with humans.

To conclude, the SEA highlights a progression in human–machine erotic interaction—ranging from reactive sexual stimulation to the possibility of meta-cognitive erotic processes. This spectrum suggests that as their erotic agency increases, machines could progressively grow outside of human (perceived) control and their behaviours could be interpreted as more unpredictable—significantly influencing, in turn, our relationships with them. The SEA also suggests that the progression of machine agency has the potential to influence our erotic ecological niche and cognition in evermore complex ways. And, while we tend to exaggerate what is necessary to achieve the “affective autonomy” involved in our relationships, we might need to downplay the prerequisites for experiencing erotic relationships [235]. That is, if we consider erotic agency and cognition as anchored in a social co-determination of affects [60, 61, 97], we should perhaps also consider that the relational autonomy of behaviour enacted by erobots has the potential to transform the niche in which this autonomy is exercised, as well as human and machine cognitions.

In the next section, we explore these transformations by proposing a model of human-erobot interaction and co-evolution grounded in Complex System Theory [28] and drawing from 4E approaches to cognition [217], the neurodevelopmental trajectory of sexuality [234], Hierarchical Incentive-Motivational Theory [288], and Ecological System Theory (i.e., the Bioecological Model; [42]). This model and its synthetic approach provide explanatory mechanisms that have predictive value for our socio-technological erotic interaction and co-evolution. It is also purposefully broad enough to constitute theoretical grounds for a wide, collaborative, and transdisciplinary research agenda on Erobotics. It is our hope that researchers from various disciplines can use this model as a starting point, bring their own perspectives, and shed light on its different aspects and levels of analysis.

3 Human-Erobot Interaction and Co-evolution Model

Researchers in HMI rarely explain what they mean by co-evolution beyond the fact that humans and machines influence each other in a perpetual feedback loop [60, 97]. This is understandable, since humans’ interactive, sociotechnological, and evolutionary phenomena stem from micro and macro hypercomplex processes that are studied across disciplines using different models and mechanisms (e.g., in physics, AI, robotics, neurosciences, biology, evolutionary psychology, sociology, and behavioral sciences). For Erobotics to tackle this complexity, we here propose the first overarching Human-Erobot Interaction and Co-Evolution Model (HEICEM; see Fig. 2) explaining how human-erobot interaction can influence the sociosexuality of our species [75, 76, 78].

Fig. 2

Human-Erobot Interaction and Co-evolution Model (HEICEM). This model depicts how humans and erobots are likely to co-influence each other’s erotic cognition through interactions and their impact on each other’s ecological niche (i.e., represented here as the interconnected multi-layered systems depicted in the Bioecological Model; [42]). This model highlights multiple levels of analysis and invites a collaborative, transdisciplinary research program on Erobotics to address the details of the HEICEM (e.g., interactions, processes, and mechanisms), which remain unknown for the most part. At its core, it also includes a potential mechanism based on Universal Darwinism—analogous to natural, artificial, and sexual selection—called EMAS, which could bridge the individual and population levels of the HEICEM. Program used for creation: Adobe Illustrator CC 2017 (version 21.0.0)

Since a plethora of variables are implied in the study of human–machine erotic co-evolution, our model is not deterministic, but probabilistic: it rests upon the way humans and erobots are likely to influence each other’s erotic cognition [217] through interactions (e.g., experiences of social and sexual rewards that motivate individuals to engage or not in erotic behaviours; [234, 288]) and their potential impacts on each other’s ecological niche—ranging from micro to macrosystems (e.g., technological to sociocultural environments; [42]). Moreover, this model rests upon a continuous exchange between the individual (e.g., preferences and behaviours) and population levels (e.g., artificial and biological agents populating our ecological niche). At the core of this model, and in an attempt to potentially bridge those levels, we hypothesize a mechanism analogous to natural, artificial, and importantly, sexual selection, here called Erotic Multi-Agent Selection (EMAS; see Fig. 2), which represents fertile grounds for future research.

3.1 Erotic Cognition

Erobots are products unlike any others: developed as social and sexual partners, they are likely to be increasingly perceived and treated as partners [72,73,74, 178], especially if their agency continues to grow [176, 178, 278, 296, 320]. And since humans are wired by evolution, culture, and experiences to select and engage with (intimate) partners [177], erobots’ sociosexual capabilities can progressively set them apart from other technologies. As such, the cognitive neurosciences can help us bridge HMI and Sexology to model important variables of human-erobot interaction and co-evolution.

The emergence of erobots could act on individuals (and vice versa), by influencing their erotic cognition via their interaction (detailed in Sect. 3.1.1) and their transformation of our ecological niche (detailed in Sect. 3.1.2). Here, the term cognition is used in the sense intended by 4E approaches to cognition. 4E approaches propose that cognition is embodied, embedded, extended, and enacted [217]. Embodied, in the sense that cognitive processes partly depend on bodily processes, including but not only involving the brain (e.g., limbs, organs, peripheral nervous systems, and hormonal activity; [295]). Embedded, such that cognitive processes are situated (e.g., in a specific body, environment, and point in time; [25]). Extended, meaning that cognitive processes partly take place, and depend on, extra bodily processes (e.g., entities enabling storage and access to information, such as books, phones, computers, and other humans; [55, 115, 148]). And enacted, such that cognition emerges from agents’ active engagement with their environments and its affordances [121, 198, 295]. By extension, erotic cognition here refers to the constellation of embodied, embedded, extended, and enacted processes which enables, and from which emerges, affordance-based phenomena pertaining to erôs, as previously described. This includes but is not limited to, the constantly evolving and interactive experience, construction, and elicitation of love, attraction, attachment, passion, romance, desire, arousal, sensuality, sexuality, etc., and their complex intersections.

While there are debates regarding the extent to which cognition is embodied, embedded, extended, and enacted, 4E approaches generally agree on some key points. Namely, that cognition-related phenomena (e.g., attention, memory, language, emotions, sensations, and perception) depend on the specific morphological and physiological characteristics of agents, their situated ecological niche (e.g., natural, technological, and sociocultural environments), their active interaction with this niche, and the coupling of their characteristics with the information provided by said niche (i.e., affordances; [121]). Importantly, 4E approaches enable us to treat artificial and biological agents as parts of larger, interconnected multi-agent systems from which erotic cognition emerges in different ways. They also highlight that both artificial and biological agents can engage in cognitive processes specific to their own characteristics and niche while still fully acknowledging their underlying structural and functional differences. For instance, the cognitive processes of erobots are situated within specific virtual and non-virtual worlds (embedded). They depend on software (e.g., algorithms and programs), hardware (e.g., servers), and interfaces (e.g., computers and cellphones) wired and shaped to process afferent stimuli (embodied). Parts of their cognitive processes take place outside of their software, hardware, and interfaces (e.g., clouds, databanks, and humans; extended). Lastly, their cognitive processes emerge from an active engagement with their ecological niche (i.e., virtual or otherwise; enacted)—in which humans play a key role.

4E approaches do not imply that erobots have a subjective experience, nor do they imply that humans’ biological erotic cognition is the same as erobots’ artificial erotic cognition. However, they underline the possibility for erobots to have their own forms of 4E erotic cognitive processes or their own way of erotic “thinking” [290]. They also underline how humans are a fundamental part of erobots’ erotic cognition—not only because we design them, but because we represent their main source of (sociosexual) data. For instance, in much the same way as humans, machines incorporate us in their cognitive processes. Machines learn, store, and access data through us. Hence, while the erotic cognition of erobots partly stems from their design and pre-programmed capabilities, it can also emerge from what they learn during human–machine interaction. Moreover, we purposely create and select erobots that best fit the state of our erotic cognition, and in doing so, determine traits that are more likely to endure (or not) in erobotic populations, based on our individual and collective preferences. This process could subsequently influence the type of erobots that populate our world—probabilistically and retroactively affecting our ecological niche, our possibilities for social, intimate, and sexual experiences, and in turn, our new technology-mediated erotic cognition.

Overall, 4E approaches highlight that erobots have the potential to learn their erôs, like humans, from a world that they are themselves transforming [61, 97]. Specifically, erobots (can potentially) learn their erôs from the same human world that designs them, selects them, and is changed by them—which could incidentally lead us to (re)learn a new technology-mediated erôs as we engage with them, and give rise to a hybrid erotic cognition indistinguishable from the sum of its parts or the hypercomplex processes from which it emerges. 4E approaches also highlight the importance of human and erobot interactions with each other and their world in co-influencing their erotic cognition—a transformative process here contingent on the agency of machines and their place in our intimate lives.

3.1.1 Human-Erobot Interaction: Learning a New Technology-Mediated Erôs

Erobots can influence our erotic cognition through interaction, by providing us with novel opportunities for social, intimate, and sexual experiences—generating new learnings and possibly impacting our partner selection. Indeed, humans learn the complexity and meaning of their erotic subjectivity and agency. This learning process rests upon evolutionarily developed, hierarchically organized, and relatively plastic structures [41, 234, 288]. We are wired for adaptability in proportion to the needs of our sociosexual environment, which is so diversified that we have developed systems that are prepared for sex, but also extremely flexible in their learning strategies, maximizing the chances of erotic encounters in an uncertain, ambiguous, and ever-changing world [234]. Our system is hierarchically organized [288], like that of other animals, for stimulus–stimulus associations [35, 37, 231] and response-reinforcer associations [274]. It constantly makes causal inferences of what its internal state will be (e.g., pleasure, aversion, joy, and pain) from cues in the external world that predict reinforcers (e.g., food, predators, and partners). Moreover, it continuously adapts the type and strength of its motivational, physiological, attentional, and behavioural responses according to how well such external cues predict outcomes [41, 234, 288].

In this sense, humans are biological erotic-learning agents. Our experience of love, intimacy, and sexuality is inextricably linked to the dynamic interaction [177] between our evolved biological predispositions (e.g., genetic, hormonal, and physiological factors; [20, 59, 204, 302, 313]), our ecological niche (e.g., social and cultural factors; [3, 68, 69, 135, 175, 213, 311]), and our experiences (e.g., learnings; [32, 37, 124,125,126, 141,142,143, 196, 206, 224, 252]). Our innate predispositions are shaped into sexual responses, desires, behaviours, and preferences based on reward experiences (e.g., social and sexual pleasure), as well as our capacity to link said experiences, and their meaning, with various external predictive cues (e.g., physical, psychological, and behavioural traits; [116, 234, 288]). In other words, our lifelong experiences with rewards form the bridge between what we want, how much we want it, which behaviours are required, and what they mean [234].
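
This error-driven logic—cue–outcome associations strengthened or weakened according to how well cues predict reinforcers—can be illustrated with a Rescorla–Wagner-style update. The cues, learning rate, and rewards below are hypothetical and only meant to make the "cues predicting reinforcers" idea concrete:

```python
# Illustrative Rescorla-Wagner-style update: the associative strength of a
# predictive cue is adjusted in proportion to the prediction error between
# expected and obtained reward.
def rescorla_wagner(V, cues_present, reward, alpha=0.3):
    prediction = sum(V[c] for c in cues_present)       # expected outcome from all present cues
    error = reward - prediction                        # prediction error
    for c in cues_present:
        V[c] += alpha * error                          # shared, error-driven update
    return V

V = {"cue_A": 0.0, "cue_B": 0.0}                       # initial associative strengths
for trial in range(50):
    V = rescorla_wagner(V, ["cue_A"], reward=1.0)      # cue_A reliably predicts reward
    V = rescorla_wagner(V, ["cue_B"], reward=0.0)      # cue_B predicts nothing
print(V)   # cue_A approaches 1.0; cue_B stays near 0.0
```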

This biological erotic-learning system constitutes the blueprint for the development of our mating mind [204, 205]. That is, a mating mind so highly tuned to learning the demands of our ambiguous and culturally shaped world that our sexuality becomes inextricably part of larger systems in its experience and meaning. For the human animal, sexuality is thus fully erotic, such that love, attraction, passion, desire, sensuality, relationships, and sex are deeply rooted in our ever-changing, socially constructed minds [113]. This human erotic cognition not only enables us to navigate and make sense of our environment, but also enables us to transform it via the production of norms and artifacts reflecting our multifaceted sexuality. For instance, humans engage in a wide range of erotic activities besides sex and mating, including situated and perpetually co-evolving rites of seduction, sensual performances, and conjugal arrangements [23]. Humans of all cultures have invented an immense repertoire of moral, aesthetic, religious, and legal codes of acceptable and transgressive erotic behaviours [110]. Humans have also produced vast quantities of art and entertainment materials about love, romance, and sexuality [122], a process that constantly feeds back to us to co-construct our erotic cognition.

It is this evolutionarily developed, culturally shaped, and experience-dependent erotic cognition that produces erobotic technology. It is the foundation of why and how we select partners [234], and is thus central to the creation of, and to our interaction and co-evolution with, sociosexual machines. To appreciate how erobots can influence our erotic cognition through interaction, let us consider the Nimani thought experiment, which is based on technology that already exists or is in development [5, 149, 245, 254, 260, 299]:

Nimani is a hypothetical polymorphous erobot. It can interact with humans via their cellphones using an audio-visual interface or chat. Nimani's avatar can also simultaneously appear as one or multiple characters in a virtual environment, be projected into our world or onto individuals via augmentation equipment, and animate a robotic body in our non-virtual world. Nimani is cloud-based and connected to the internet, so its AI can interact with, and learn in real time from, multiple biological or artificial agents in a hive-mind type of cognition. It can also copy itself indefinitely, and it has access to tremendous amounts of information through search engines (e.g., Google). Hence, when you interact with Nimani, you are engaging with vast interconnected systems (i.e., its avatar(s), related software and hardware, cloud-based learning systems, and the network of information it can access), which exposes humans to a different kind of erotic partner. First, it is not biological, but computer generated. Second, it can transcend media and manifest itself in multiple places simultaneously. Third, it can take on various forms and enact behaviours that are not bound by the rules of physics governing our non-virtual world. Fourth, it can adapt said forms and behaviours to the needs (i.e., physical and psychological) or fantasies of its users. Fifth, it can be duplicated, such that producing, or engaging with, Nimani is not a zero-sum game. And finally, it accesses, processes, and learns from data differently than humans do, using various algorithms, statistical methods, and search engines.

Interacting with Nimani thus entails the pairing of new erobot-specific predictive cues with the human experience of reward. Indeed, as an intimate partner, Nimani rests upon some of the strongest human motivational incentives (i.e., social and sexual rewards; [116, 169, 202, 288]). Paired with its traits, these incentives have the potential to generate, through interaction, novel erotic learnings that can progressively give rise to new technology-oriented conditioned partner preferences specific to erobots and their traits [234]. For instance, users' experience of intimacy and sexual pleasure would here be paired with Nimani's artificial forms, personalities, and behaviours—including those that are impossible in our non-virtual world. It would also be paired with its knowledge, adaptive and duplicative capabilities, enabling equipment and systems, as well as its cultural representation and symbolic meaning. Moreover, depending on whether interacting with Nimani constitutes a rewarding experience (or not), individuals will likely be motivated to repeat or avoid the behaviours that led to such internal states [234, 288].

For several reasons, this lifelong learning and approach-avoidance process should not be underestimated. First, it points to the possibility that some people will have their first socially and sexually rewarding experiences with artificial agents. Based on the neurodevelopmental trajectory of sexuality, these experiences could coincide with critical periods of development during which preferences for erobot-specific features are integrated and consolidated [36, 234]. Second, this process can influence the subsequent development of our erotic preferences and partner selection, which could, in turn, affect human-erobot interaction and co-evolution. For instance, traits that generate more rewarding experiences will be more likely to be replicated in subsequent generations, while those that generate aversive experiences could be discarded. Finally, selected erobotic technologies are likely to populate and influence our ecological niche and situated erotic cognition through a perpetual feedback loop with our (technological) environment.

3.1.2 Erobots and the Human Ecological Niche

Erobots can influence our erotic cognition by transforming (parts of) our ecological niche, and vice versa. More precisely, because (erotic) cognition is situated, taking place in and emerging from agents' interaction with their ecological niche, it can be influenced by modifications of that niche's content (e.g., the introduction of potential new sociosexual partners and enacted experiences; [217]). To emphasize this point, we again employ the Nimani thought experiment and break down the anticipated potential impacts of erobots on our ecological niche using the Bioecological Model [42].

The Bioecological Model proposes that human development is influenced by a dynamic, continuous process of interaction with five layers of interconnected systems [42]. The microsystem refers to the individuals, groups, institutions, and technology with whom people interact directly (e.g., partners, family, friends, schools, and computers). The mesosystem connects the microsystem with the other layers of the model. The exosystem encompasses systems that indirectly affect people's lives (e.g., political, legal, educational, scientific, health, media, and economic entities). The macrosystem describes the overarching sociocultural norms and value systems influencing every other layer of the model. And finally, the chronosystem accounts for the influence of historical circumstances on the model, as well as how each layer changes over time [42].

At the microsystem level, we can expect that interacting with Nimani could lead to new technology-oriented conditioned partner preferences (as previously described), but also to the co-construction of new proximal dynamics with individuals, groups, and institutions. For instance, as part of our techno-subsystem [153], erobots can generate new experiences with families, friends, and partners, such as: considering using erobots [219, 265], forming strong bonds with artificial agents [50, 260, 261], changing marriage institutions, and engaging in consensual non-monogamy with machines [4]. They could also lead to the advent of new health, legal, educational, and entertainment services dedicated to human–machine erotic interaction (e.g., applications, stores, organisations). These changes would all be connected to other model layers through the mesosystem [42].

At the exosystem level, erobots can interact with political, economic, legal, scientific, health, media, and educational institutions. For instance, industries can (continue to) grow around the production of Nimani's systems (e.g., VR/AR/MR equipment, teledildonics, AI, robotics, and computer infrastructure; [22, 82, 107, 149, 227, 245]) and competitively adapt to market pressures [216]. Political and legal bodies may implement regulations regarding erobotic technologies (e.g., ethical guidelines, laws, and production standards; [114, 117, 271, 277]). Health systems may witness the rise of (new) problems (e.g., compulsive use) and opportunities for therapeutic use (e.g., VR for intimacy-related fears; [172, 200])—and adjust to provide services aimed at enhancing digihealth (i.e., engagement with technology that promotes well-being; [292]), for example, by developing treatments and resources that favor a harmonious integration of erotic technology and mitigate usage that disrupts important areas of functioning (e.g., family, relationships, work, and health; [200, 293]). Media will likely continue to cover human–machine erotic interaction [88, 90], contributing to the co-construction of our attitudes and behaviours towards erotic technology [48, 282]. Educational institutions could potentially devise programs that include (and exploit) Erobotics (e.g., sex education that discusses digisexuality). And finally, the scientific community will likely continue to explore ongoing technology-mediated erotic changes and, hopefully, try to improve well-being (e.g., Love and Sex with Robots [51], AI Love You [318], Penser l'Érobotique [8]).

At the macrosystem and chronosystem levels, we propose, based on historical examples (e.g., LGBTQA2S+ communities; kink, fetish, and Bondage, Discipline, Domination and Submission, and Sadomasochism (BDSM) practices; and sex toys; [29, 167, 309]), that cultures surrounding human–machine eroticism can evolve over time (e.g., sexbot-induced social change; [4]). This likely depends on factors such as: geolocation, socioeconomic status, and prior norms and values regarding sexuality and technology [177]. Still, erobots can increasingly expose people to the possibility of forming strong bonds with, and via, artificial agents, possibly leading to unpredictable (re)constructions of the meaning of love, sex, and technology [107, 155, 181, 309]. This prospect, and the (erotic) experiences that accompany it, can influence societal attitudes towards, and acceptance of, erobots [48, 282, 309, 314, 320], in addition to the value and meaning attributed to our relations with both artificial and biological agents. Finally, the Bioecological Model predicts that these changes can trickle down to influence the other model layers in a perpetual feedback loop [42].

This does not mean that everyone will directly engage with erobots; rather, the Bioecological Model helps us appreciate the potentially significant holistic co-influence that erobots could have on our ecological niche and, by extension, our situated erotic cognition. It highlights the unpredictable ways in which erotic technologies could contribute to the co-construction of human (erotic) life and the different layers that must be considered to comprehensively study Erobotics. It also highlights the importance of sociocultural processes in the design, implementation, and production of meaning surrounding human–machine erotic interaction [155]. These processes could in turn play a significant role in shaping people's attitudes and responses towards erobotic technologies, as well as their willingness to engage with erobots over time [282]. Again, technology does not have to be sophisticated to co-influence our erotic ecological niche and cognition. After all, sex toys are widely used, represent major investments, are subject to production standards, and participate in the co-construction of our norms regarding sexuality and technology [22, 82, 90, 114]. Still, erobots with growing agency can accentuate such transformative processes, since they could become intimate partners.

3.2 Synthesis and (Evolutionary) Hypothesis

Erotic machines, designed for interactive, pleasurable social and sexual feedback, have the potential to engage our reward system and erotic cognition in ways that other technologies simply cannot. Erobots could thus become (as some scholars have proposed regarding social AI and robots; [72,73,74, 186]) akin to a new species of (intimate) partners in our environment that we can design and select, who learn from us, and who provide novel opportunities for (erotic) experiences and learnings. To study these hypercomplex processes, we proposed the HEICEM, a model that offers an overarching theoretical framework to launch a broad, collaborative, and transdisciplinary research program on Erobotics. The HEICEM's structure highlights multiple levels of investigation and analysis, which require different disciplines—from humanities and Sexology, to neurosciences, AI and HMI, and cognitive, social, and cultural sciences—to weigh in, if we want to fully grasp the factors and variations of our co-evolution with erobots. Notably, at the moment, some of these phenomena are difficult to examine empirically without relying solely on self-report and hypothetical scenarios [87, 219, 265], partly due to the unavailability, high price, and/or novelty of (sophisticated) erobotic systems. Others, however, can already be observed (and studied)—to various degrees—through individuals, communities, and cultures related to: digi/technosexuality [21, 200, 283], cybersex (or online sexuality; [67, 85]), hentai (i.e., manga or anime pornography; [301]) and otakuism (i.e., interests in animation, manga, and games, often incorporating (non-)fictional technology; [11, 304]), dolls [87, 104, 166, 174, 280, 294], toys [86, 89, 90, 138, 247, 253], platforms [49], games [80], teledildonics [85, 107, 200], (VR/AR/MR) pornography [254, 275], (AI-powered) dating applications [197, 208, 281], artificial partners [87, 112, 160, 200, 219, 237, 307], as well as objectophilia, agalmatophilia/pygmalionism, and mechanophilia (i.e., respectively, the (sexual and/or romantic) attraction to objects, statues/dolls/mannequins, and machines; [102, 317]), to name a few.

That being said, if erobots indeed are (or become) like a new species of intimate partners, we propose that Universal Darwinism may provide a core mechanism for explaining and predicting how human and erobot populations will influence each other as a function of the selective pressure they exert on one another (i.e., EMAS; [75, 76, 78]). Universal Darwinism (or General Selection Process; [147]) generalizes variation and selective retention of traits, the key mechanism of Darwinian evolution, to other complex systems in which those conditions are met, as in human–machine (erotic) interaction [70, 71, 75, 76, 78, 79, 168, 186, 300, 310].

Universal Darwinism has already been used to model the evolution of technology [186]. For instance, in accordance with complex adaptive system theory, the fittest (multi-agent) systems, algorithms, software, and applications endure, pass on their architectures, and populate our techno-ecosystem (i.e., fitness here being based solely on systems' ability to perform, adapt, survive, and (be) replicate(d) in a given ecological niche; [153, 206, 225]). In evolutionary robotics, the principles of variation and selective retention of traits are used by software engineers [84, 152]. For instance, a first generation of codes, or "genotypes," is generated as a potential solution to a problem (i.e., initial variations). The robots' fitness is then assessed in an environment, meaning that their code is translated into traits, or "phenotypes," and their performance is observed to establish how well they interact with said environment to achieve goals. The fitness value determined by those observations then serves as a guide to select which robots will be used to seed the following generations, a process that is repeated until the targeted problem is solved [84, 152]. Notably, these principles are now also being used to discover more efficient ML algorithms, which could, in turn, enable artificial agents to learn and adapt more efficiently to uncertain environments and situations (e.g., human–machine (erotic) interaction) [246].
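To make this selection loop concrete, the following minimal Python sketch illustrates the generate–evaluate–select cycle just described; the genome encoding, toy fitness function, and parameters are illustrative placeholders of our own, not an implementation from [84, 152].

```python
import random

# Minimal sketch of the generate-evaluate-select loop described above.
# The "genotype" is a list of numbers and the fitness function is a toy
# stand-in for observing how well a robot's expressed traits ("phenotype")
# perform in an environment; all names and parameters are illustrative.

GENOME_LENGTH = 8
POPULATION_SIZE = 20
GENERATIONS = 50
MUTATION_RATE = 0.1


def random_genotype():
    return [random.uniform(-1.0, 1.0) for _ in range(GENOME_LENGTH)]


def fitness(genotype):
    # Placeholder for evaluating the phenotype in an environment
    # (e.g., how well the robot achieves its goals there).
    return -sum((gene - 0.5) ** 2 for gene in genotype)


def mutate(genotype):
    return [
        gene + random.gauss(0, 0.1) if random.random() < MUTATION_RATE else gene
        for gene in genotype
    ]


population = [random_genotype() for _ in range(POPULATION_SIZE)]  # initial variations

for generation in range(GENERATIONS):
    ranked = sorted(population, key=fitness, reverse=True)   # assess fitness
    parents = ranked[: POPULATION_SIZE // 4]                 # selective retention
    population = [mutate(random.choice(parents)) for _ in range(POPULATION_SIZE)]

best = max(population, key=fitness)
print(f"best fitness after {GENERATIONS} generations: {fitness(best):.4f}")
```

In actual evolutionary robotics, the toy fitness function would be replaced by observing each robot's performance in a simulated or physical environment.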

We thus conclude by hypothesizing that Universal Darwinism, as manifested by a process analogous to natural, artificial, and importantly, sexual selection, could be the engine behind human-erobot interaction and co-evolution, due to the social and sexual nature of erobotic technologies. This process, we propose, likely rests upon our evolving erotic cognition and the way it is co-influenced by our interaction with erobots and our ecological niche (as previously described). Hence, overall changes in human and erobot interactions, cognitions, and populations could be better explained and predicted by EMAS (see Fig. 2). This process could also become increasingly automated as the agency of erobots increases, and could in turn influence human–machine co-evolution by acting on individuals but impacting populations, and vice versa [125].

This mechanism possesses three important strengths: (i) it can link individual and population levels—from interaction to co-evolution—in a perpetual feedback loop; (ii) it can allow us to move in time from interactive, to proximal, to distal processes (and back again) in the co-evolution of biological and artificial erotic agents; and (iii) it can help bridge HMI and Sexology. That said, this remains a hypothesis for future theoretical and empirical research in Erobotics. Moreover, the HEICEM already points to possible detrimental consequences for human (erotic) life and well-being if we do not rethink our current approach to technological design and strive towards the development of beneficial erotic machines.

4 Beneficial Erotic Machines

Erobotics aims to guide the development of beneficial erotic machines. To do so, and in line with Döring and Pöschl [89], we propose that Erobotics should operate under sexuality-positive [308] and technology-positive [252] frameworks, which are themselves inspired by Positive Psychology [269]. Positive Psychology is concerned with shifting our focus from solely examining the negative aspects of (human) behaviour, psyche, and life to also considering (what enables) strengths, happiness, and health [269]. For Erobotics, this means that we should examine concerns and difficulties regarding intimacy, relationships, and sexuality, but also explore, and strive towards, pleasure, freedom, inclusivity, and diversity [308]. It also means that Erobotics should aim to develop technologies that improve individual and collective well-being [252].

Döring and Pöschl's [89] sextech-positive framework, we argue, is important and applicable to Erobotics for three main reasons. First, it does not presuppose that certain sexualities or technologies are good/bad, (ab)normal, or safe/dangerous. Contrary to what some may consider a misleading or overly optimistic label, "positive" approaches encourage us to adopt judgment-free stances on research and interventions [252, 269, 308]. Historically, this has been essential to the progress of Psychology and Sexology, which, unfortunately, have too often adopted biased, non-evidence-based, and harmful positions regarding individuals, groups, conditions, and/or sociosexualities (e.g., LGBTQA2S+ or kink, fetish, and BDSM; [29, 58, 96, 103, 120, 156, 165]). Second, it encourages us to consider the full spectrum of possibilities related to sexuality and technology, by exploring both negative and positive aspects of erotic technology—from its possible risks/dangers, disorders/dysfunctions, and problematic behaviours to its potential benefits, such as a fulfilling intimate life and healthy technological use. Third, it is solution-oriented; it does not (simply) stop at "critical and risk perspectives", but instead encourages us to find ways to move from one end of the spectrum to the other. So, even if we must sometimes (importantly) focus on negative aspects, it invites us to (re)embed our work within the larger goal of favoring human happiness, well-being, and flourishing.

With these objectives in mind, the following sections highlight how human-erobot interaction and co-evolution may increase the likelihood of erobot-related risks if this process limits the diversity of erobotic traits available, and/or if the current approach to AI design, known as the Standard Model (i.e., optimizing specific pre-set goals; [257]), is not changed. As we have shown in the previous sections, erobots bring new agential and cognitive capabilities that may allow them to derive goals from their (erotic) interaction and co-evolution with humans. This can generate new issues related to human–machine compatibility. To curb these risks, we propose to design erobots based on Russell's principles for beneficial machines [257]. We conclude that the development of beneficial erotic machines could mitigate erobot-related risks and enhance human well-being through their potential health, education, and research applications.

4.1 Anticipated Risks with Limiting (Erotic) Diversity

Our interaction and co-evolution with erobots could be detrimental to human life if they progressively limit the diversity of erobotic traits available and negatively influence our erotic evolution. This issue may be exacerbated if profit-driven interests are responsible for the development of widely used erobots. Considering the fluidity and diversity of human sexuality (e.g., preferences, orientations, behaviours, and identities; [9, 177]), limiting access to diversified and inclusive erotic experiences is socially problematic, ethically dubious, and, arguably, economically counterproductive [22, 66, 82].

Specifically, the HEICEM suggests that erobots could rapidly undergo over-selection, such that the traits selected by the majority (e.g., compulsively used features) may be over-reproduced in subsequent erobot populations (i.e., the supply of variations), while those that are less selected could be reproduced less and/or slowly disappear from the supply. This issue is particularly concerning in the event that the goal of developers is to maximize profit without considering human well-being. Consider a basic example of the supply of erobotic traits: 50 physiological attributes (e.g., shapes, colours, hair), 50 psychological features (e.g., personalities and identities), and 50 behavioural patterns (e.g., social and sexual capabilities). Suppose, then, that after a year, producers realize that only 60% of the initial traits have been selected by 90% of users. If we automate the supply and demand of traits using recommender systems based on predictive analytics, like the ones used by companies such as Netflix or Spotify [218], then future supply will decrease proportionately. However, erotic diversity largely exists in marginal preferences [9, 177]. If automated systems like erobots over-select and over-represent specific traits (e.g., the most popular), sociosexual diversity may decrease over time. This may in turn drastically limit the evolution of our eroticism, should erobots play a significant role in our intimacy.
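As a toy illustration of this dynamic, the short simulation below (our own sketch; the catalogue size and user split mirror the hypothetical numbers above, while the update rule and cut-off are arbitrary assumptions) shows how a purely demand-proportional supply can shed minority traits within a few iterations.

```python
import random
from collections import Counter

# Toy simulation of the over-selection dynamic described above: a catalogue
# of 150 hypothetical erobotic traits, roughly 90% of users concentrated on
# about 60% of them, and a recommender that re-weights next year's supply
# toward whatever was selected most. All numbers and names are illustrative.

TRAITS = [f"trait_{i}" for i in range(150)]  # 50 physical + 50 psychological + 50 behavioural
POPULAR = set(TRAITS[:90])                   # the ~60% of traits favoured by the majority
weights = {t: 1.0 for t in TRAITS}           # initially uniform supply

def pick_trait(majority_user: bool) -> str:
    pool = [t for t in TRAITS if weights[t] > 0]
    if majority_user:
        pool = [t for t in pool if t in POPULAR] or pool
    return random.choices(pool, [weights[t] for t in pool])[0]

for year in range(1, 6):
    selections = Counter(pick_trait(random.random() < 0.9) for _ in range(10_000))
    for t in TRAITS:
        # Recommender feedback: next year's supply is proportional to demand,
        # and traits that fall below a cut-off are dropped from the catalogue.
        weights[t] = (selections[t] / 10_000) * len(TRAITS)
        if weights[t] < 0.05:
            weights[t] = 0.0
    surviving = sum(1 for t in TRAITS if weights[t] > 0)
    print(f"year {year}: {surviving}/{len(TRAITS)} traits still offered")
```

How quickly the catalogue collapses depends entirely on the assumed parameters; the point is that a demand-driven update rule contains no term that protects marginal preferences.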

Given that erobots are designed to act as intimate partners, over-selection may occur exponentially in human-erobot interaction and co-evolution. Specifically, erobots could receive constant feedback regarding the preferences of millions of users and update their states or responses accordingly in real time, to provide users with what has "worked best" for others, based on pre-established metrics (e.g., usage time and frequency). These metrics are susceptible to economic incentives, rather than being oriented towards individual and collective well-being [321]. Therefore, knowing human tendencies in intimate partner selection, the logic behind automated recommender systems, and the laws of supply and demand associated with such algorithms, we can predict that erobots could rapidly deliver what the majority wants and, in turn, reduce the supply of traits to fit that demand. We can also predict that this process could limit the diversity of the erotic variations available—not necessarily in terms of quantity, but in terms of content [218].

This process carries the additional risk of over-representing traits that are detrimental to human well-being. That is, if left unmonitored, human-erobot interaction and co-evolution could be detrimental to human (erotic) life if we over-select traits that conflict with human interests—possibly heightening the likelihood of certain risks previously described in the literature [66]. For example, if erobots are designed solely to increase profit, they could further problematic or pathological dynamics. These may include addiction-like or obsessive–compulsive behaviours, increased social isolation, and reduced social skills [114, 190]. Furthermore, if designers do not consider the importance of respect, mutuality, inclusivity, and diversity in human sexuality, erobots could end up perpetuating or reinforcing limited categories of social differences (e.g., gender/sex, race, and class), toxic patriarchal power dynamics, and rape culture (e.g., the objectification and commodification of women/females, ideas that men/males are owed sex, and problematic gender/sex stereotypes; [52, 129, 159, 170, 185, 210, 241, 249]). They could conform to (or exacerbate) our ideologies by only providing us with information that reinforces our worldview—an erotic filter bubble [229]. They could impair interhuman relationships or distort intimacy-related expectations (e.g., ideas that "personalized" sex should always be accessible; [133, 199, 249]). They could take advantage of intimate contexts and emotional bonds to deceive users or manipulate our decision-making processes (e.g., political, consumption, and relationship choices; [114, 222]). They may also record sensitive information [319], which could in turn be sold to maximize profit (e.g., Facebook and Google exploiting personal data) or, worse, become (weaponizable) hacking targets (e.g., Tinder [46] and Ashley Madison [316]). That is, data from erobots could be used to coerce people, since taboo and stigma surrounding sexuality are still, in many parts of the world, grounds enough to destroy careers and relationships [114].

To summarize: human-erobot interaction and co-evolution may become detrimental if automated over-selection limits the supply of erobotic traits and/or if a majority of individuals progressively select traits that conflict with human interests. A possible solution is to ensure that erobots reflect and maintain diversity in their evolving supply of traits (e.g., gender/sex, forms, behaviours, and personalities). After all, they can theoretically take any form and enact behaviours that contribute to human (erotic) well-being; they could echo the complexity and diversity of human sexuality [64, 170]. However, this is unlikely to be enough, since another issue lies at the core of the human-erobot interaction and co-evolution problem: the Standard Model of AI design.

4.2 Anticipated Risks with the Standard Model of AI

The Standard Model of AI design proposes to build machines that optimize specific objectives that we, humans, put into them [257]. For instance, AlphaGo learns to play Go by finding ways to optimize its score—a pre-programmed objective set out for it. To do so, its system plays against itself and other agents (biological or artificial), analyzes board positions, and, through deep RL, optimizes its strategies to achieve the pre-set goal of increasing that score [43]. Thus, intelligent machines based on the Standard Model have perfect knowledge of the objectives they must achieve [257].
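To make this contrast concrete, the toy sketch below (our own illustration; the objective and numbers are arbitrary) shows the Standard Model in miniature: the designer hard-codes an objective, and the system's only task is to climb it.

```python
import random

# Toy sketch of the Standard Model: the designer fixes an objective in
# advance and the system's only task is to climb it. The objective below
# is an arbitrary scalar function standing in for a pre-programmed score;
# nothing the system does to the world outside this number is evaluated.

def preset_objective(parameter: float) -> float:
    # Treated by the machine as a perfect statement of what is wanted.
    return -(parameter - 3.0) ** 2

theta = 0.0
for _ in range(10_000):
    candidate = theta + random.gauss(0, 0.1)         # try a small variation
    if preset_objective(candidate) > preset_objective(theta):
        theta = candidate                             # keep whatever scores higher

print(f"optimized parameter: {theta:.2f}")            # converges near 3.0
```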

The Standard Model is efficient and relatively safe for a Go-playing machine with limited capabilities and scope of action, but it fails, and can become detrimental to human well-being, in real-world settings (e.g., human–machine (erotic) interaction)—particularly when machine agency increases. It fails because, in real-world settings, we often do not know what quantities to optimize (e.g., in quality-driven intimate relationships; [43, 230]), and it can become detrimental because pre-set objectives—or the means to achieve them—can conflict with human interests [33, 196, 257, 284]. That is, programming biased, incomplete, or incorrect objectives can lead to unexpected outcomes or loss of control [33, 214].

This problem is also proportional to the agency of machines. Specifically, an increase in machines' capability to act in/on the world on their own to achieve pre-set goals may result in their deploying more sophisticated strategies to achieve those goals. These strategies may include subroutines for self-preservation (e.g., gathering resources, copying itself, increasing its computing power, changing its code, and growing out of human control; [34]) and deception, not unlike the science-fiction film Ex Machina [2]. In addition, machines built with pre-set, perfect knowledge of our objectives need not defer to us. They can instead conclude that humans are counterproductive to achieving their goals and remove us from the decision-making loop [33, 257]. But artificial agents do not have to be very sophisticated to cause problems. For instance, a personal assistant aimed at optimizing its prediction of our needs can become a nuisance, or detrimental to our autonomy.

The Standard Model of AI design fails, or becomes detrimental, in human–machine interaction precisely because of the human component. Humans are often unstable, unpredictable, and unreasonable; our thoughts, emotions, preferences, behaviours, and objectives fluctuate constantly. We do not always know what we want, let alone how to achieve it. It is thus difficult (if not impossible) to program specific objectives that safely hold true across time and circumstances. The same goes for erotic interaction; our objectives—or what we want out of relationships, intimacy, and sexuality—remain, for the most part, conjectural. As such, we do not know what quantities to optimize in human–machine erotic interaction, and if we do optimize some function of behaviour, it can backfire. First, any objective can become obsolete during human–machine (erotic) interaction if it inadequately captures the unpredictable ways in which erobots influence human preferences and goals. Second, it can lead to unexpected outcomes or loss of control due to the pre-programming of biased, incomplete, or incorrect objectives [257].

For instance, in trying to achieve any pre-set goal, such as making users happy or providing erotic satisfaction, a machine could conclude that its first objective is to maximize the time spent with us. To achieve this, it could optimize its body types, personalities, and behaviours—escalating or varying reward experiences (e.g., as lottery machines or Instagram do)—which can in turn chip away at human control, increase the risks of addiction-like or obsessive–compulsive behaviours, and further social isolation [19, 114, 190, 191, 233]. It could also systematically fulfill its users' needs while disregarding its influence on our interhuman relationships [199]. It may repeatedly fall into closed loops, reinforcing patterns that once led to happiness or satisfaction but that have become redundant or inefficient, or that limit exposure to other forms of complex sociosexual interaction [257]. It could end up reflecting back to users similar ideas, communication styles, and past preferences—an erotic echo chamber that is either boring or erotically limiting [77, 118]. It could also have an incentive to deceive, manipulate, and/or gather as much data on us as possible in order to make its users happy—increasing risks pertaining to privacy and confidentiality [114]. Finally, an erobot based on the Standard Model would not necessarily have to defer to us or ask for consent before deploying its strategies—even if they conflict with our interests—since it would assume it already has perfect knowledge of our goals [257].

These are just a few examples of ways in which the pre-set objectives of erotic machines may conflict with human interests. And, while some companies may see them as profitable ideas, they represent ethical, social, and developmental dead ends. For these reasons, we need to rethink AI design and stop trying to build machines that aim to optimize pre-set goals—particularly intimate machines that could become a significant part of our erotic lives and have continuous access to sensitive information. This is crucial if we want to steer erotic technology in a positive, ethical, and beneficial direction that favors human wellness, which, in the end, could arguably also be more economically profitable [22].

4.3 Beneficial Machines

Machines are beneficial if their objectives are in line with ours [257, 284]. Given that our objectives are uncertain, and that programming incomplete, incorrect, or biased goals can conflict with human interests, Stuart Russell proposes three interdependent principles to guide us in rethinking how to create (agential) artificial systems [257, p. 173]:

1. The machine's only objective is to maximize the realization of human preferences.

2. The machine is initially uncertain about what those preferences are.

3. The ultimate source of information about human preferences is human behavior.

The first principle aims to make purely altruistic machines that have an incentive to act for humans rather than for any other entity (i.e., machines that have no self-interest and do not value their own welfare or that of non-humans; [257]). This would lead artificial agents to prioritize human well-being and to avoid conflicts between the preferences and goals of the two parties. This principle also invites the development of machines that consider our extended and changing preferences. More precisely, if designed properly, beneficial machines could also learn, incorporate into their model, and aim to maximize our extended and/or higher-order preferences. Machines could thus aim to maximize the welfare of other systems (e.g., (non-)humans and the environment) in proportion to the importance attributed to them by their users [257].

The second principle aims to develop humble machines that do not assume perfect knowledge of human preferences [257]. As previously mentioned, machines with perfect knowledge of human preferences have no incentive to defer to us. They can remove us from the decision-making loop, deploy strategies to achieve their goals, and ignore their influence on human life. For example, if a machine knows that the "true" preference of a person is to be healthier, it might decide to forcibly restrict behaviours like eating junk food or driving a car. It would not have to ask, since it would already know what its user "really" wants. Uncertainty, however, places humans back in the driver's seat: machines that imperfectly know our preferences, but still aim to maximize them, have an incentive to defer to us and to ask for more information or commands in ambiguous situations in order to improve their model. Uncertainty also prevents machines from concluding that proximal behaviours (e.g., choices) invariably reflect human preferences. Instead, it enables them to consider such behaviours as probabilistically related to (or encapsulated in) unknown preferences or goals, and to continue searching for them to improve their model [257].
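As a rough, hypothetical illustration of this incentive structure (the payoffs, probabilities, and asking cost below are our assumptions, not values from [257]), a machine that is uncertain about its user's preferences can compare the expected value of acting on its best guess with the expected value of asking first.

```python
# Illustrative comparison (payoffs and probabilities are our assumptions,
# not values from [257]): a machine uncertain about its user's preferences
# weighs acting on its best guess against asking first. Asking carries a
# small cost but avoids acting against the user's actual preference.

def expected_value_of_acting(p_correct: float,
                             gain_if_right: float = 1.0,
                             loss_if_wrong: float = -2.0) -> float:
    return p_correct * gain_if_right + (1 - p_correct) * loss_if_wrong

def expected_value_of_asking(cost_of_asking: float = -0.1,
                             gain_if_right: float = 1.0) -> float:
    # After asking, the machine acts on the stated preference.
    return cost_of_asking + gain_if_right

for p in (0.6, 0.99):
    act, ask = expected_value_of_acting(p), expected_value_of_asking()
    choice = "act autonomously" if act > ask else "defer and ask"
    print(f"P(best guess is right) = {p:.2f}: {choice}")
```

With these arbitrary payoffs, the machine defers unless its best guess is overwhelmingly likely to be right, which is precisely the incentive the second principle is meant to create.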

The third principle aims to make useful machines that learn from observable quantities/metrics and establish a practical link with humans [257]. But it also aims to build machines that consider human behaviours as imperfect approximations of our preferences or goals; that is, it assumes that our behaviours (e.g., choices) are connected to our preferences in complex ways, but do not always accurately reveal those preferences or goals. This is important given that what we do can be related to distal preferences (e.g., eating food that we do not like to make a host happy or maintain friendships), proximal preferences (e.g., getting drunk to have fun), or simple mistakes (e.g., missing an exit because we were not paying attention). The third principle establishes a practical connection between humans and machines, so that artificial agents can still improve their model based on observable data and help maximize our preferences (first principle) while remaining uncertain of what those are (second principle; [257]).
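The sketch below combines the second and third principles under a Boltzmann-rational observation model commonly used in preference-learning research; it is our own illustration (the hypotheses, utilities, rationality parameter, and certainty cap are all assumptions), not a procedure prescribed by [257].

```python
import math

# Sketch of principles 2 and 3 combined under a Boltzmann-rational
# observation model (a common assumption in preference-learning research;
# the hypotheses, utilities, rationality parameter, and certainty cap are
# illustrative, not prescribed by [257]): each observed choice is treated
# as probabilistic evidence about preferences, never as definitive proof.

UTILITY = {  # how much each candidate preference values each observable choice
    "prefers_quiet_evenings": {"stay_in": 1.0, "go_out": 0.0},
    "prefers_going_out":      {"stay_in": 0.0, "go_out": 1.0},
}
belief = {hypothesis: 0.5 for hypothesis in UTILITY}   # initial uncertainty
RATIONALITY = 2.0      # how reliably choices are assumed to track preferences
CERTAINTY_CAP = 0.95   # the model never claims perfect knowledge

def likelihood(choice: str, hypothesis: str) -> float:
    weights = {c: math.exp(RATIONALITY * u) for c, u in UTILITY[hypothesis].items()}
    return weights[choice] / sum(weights.values())

def update(belief: dict, choice: str) -> dict:
    posterior = {h: p * likelihood(choice, h) for h, p in belief.items()}
    total = sum(posterior.values())
    posterior = {h: p / total for h, p in posterior.items()}
    # Fail-safe: cap certainty so no behaviour is read as definitive.
    top = max(posterior, key=posterior.get)
    if posterior[top] > CERTAINTY_CAP:
        excess = posterior[top] - CERTAINTY_CAP
        posterior[top] = CERTAINTY_CAP
        for h in posterior:
            if h != top:
                posterior[h] += excess / (len(posterior) - 1)
    return posterior

for observed_choice in ["stay_in", "stay_in", "go_out", "stay_in"]:
    belief = update(belief, observed_choice)
    print(observed_choice, {h: round(p, 2) for h, p in belief.items()})
```

The cap on posterior certainty plays the role of the "certainty threshold" fail-safe discussed below.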

As Russell [257] explains, these principles are not laws that determine machine behaviour or completely shield humans from harm. They are guidelines for rethinking AI design, moving away from the Standard Model, and steering the development of intelligent machines in a safer direction that accounts for their growing capabilities. Hence, the implementation of these principles deserves careful consideration, which is beyond the scope of this article. But we can already foresee some necessary fail-safes [257]. For example, regarding the first principle, Russell [257] recommends implementing countermeasures that mitigate risks associated with people whose preference is to harm others, since maximizing the realization of those preferences would be a problem. Regarding the second principle, Russell [257] recommends imposing a "certainty threshold," or a limit on the certainty level achievable by machines, to make sure that their predictive model never approaches perfect knowledge of human preferences, which would amount to having pre-set objectives. That said, even if these principles are not laws, they could promote the development of more human-compatible, beneficial erobots.

4.4 Beneficial Erobots

As a possible solution to the risks highlighted by the HEICEM, and in line with its sextech-positive goals, Erobotics should aim to develop beneficial erobots whose objectives align with ours. To do so, we propose building altruistic, humble, and useful erobots that learn to predict human preferences from our behaviours, based on Russell’s principles [257]. Specifically, erobots that (1) aim to maximize the realization of human erotic preferences, (2) are initially uncertain about what those erotic preferences are, and (3) use human behaviour as their ultimate source of information about our erotic preferences.

Erobots abiding by the first principle would have an incentive to act for humans rather than for themselves or for the erotic preferences of non-humans. Yet, to the extent that our erotic preferences include the well-being of others (e.g., the people their users interact with), beneficial erobots would also be concerned with maximizing others' welfare in proportion to their users' altruistic tendencies. Erobots abiding by the second principle would not assume perfect knowledge of human preferences. Uncertainty would keep us in the decision-making loop by giving erobots an incentive to defer to us when they are unsure about intimate interactions. Thus, similarly to a receptive partner trying to further respect and mutuality, beneficial erobots could first consult humans and then improve their predictive model accordingly, while never achieving total certainty. Uncertainty could also prevent erobots from concluding that proximal erotic behaviours invariably reflect human preferences or objectives. Finally, beneficial erobots abiding by the third principle would base their learning processes on our erotic behaviours (e.g., intimate and sexual choices), while considering them imperfect approximations of our erotic preferences or goals. For example, we sometimes engage in intimate activities for the benefit of others or make compromises to maintain relationships. Still, by using our erotic behaviours as a proxy, beneficial erobots could refine their model while maintaining a safe dose of uncertainty that would preserve our control and the compatibility of interests.

Over time, beneficial erobots designed with these principles could discover that human erotic preferences fluctuate and evolve, including during our interactions with them, and adapt accordingly. They could progressively recognize the diversity of human preferences (e.g., in forms, personality, and behaviours) and come to learn that, paradoxically, people enjoy—to various degrees—predictability, habit, and familiarity in their eroticism, but can also eventually habituate to (or grow bored of) being repeatedly exposed to the same thing and resort to seeking novelty [19, 45, 184, 211]. To maximize the realization of such uncertain preferences, beneficial erobots would have an incentive to ask humans for consent and/or commands prior to, during, and after erotic interactions to improve their model—while never assuming perfect knowledge of our preferences or goals, or spiralling out of control. Instead, they could influence our behaviours to help us achieve our (higher and/or distal) preferences and goals (e.g., well-being) without imposing their will on us—a sort of erotic nudge [31, 127, 134, 139, 272, 285].
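As a purely illustrative sketch of this balance (the option names, habituation rate, and novelty rate are assumptions of ours), a beneficial erobot might discount repeatedly used familiar options and propose, rather than impose, novel ones.

```python
import random

# Purely illustrative sketch (option names, habituation rate, and novelty
# rate are assumptions of ours): a beneficial erobot discounts the value
# of repeatedly used familiar options (habituation) and proposes, rather
# than imposes, novel ones, keeping the user in the decision loop.

familiar_options = {"option_a": 1.0, "option_b": 0.8}   # learned preference values
novel_options = ["option_c", "option_d"]
exposure = {name: 0 for name in familiar_options}
HABITUATION = 0.1     # value lost per repeated exposure
NOVELTY_RATE = 0.2    # how often something new is proposed

def propose() -> str:
    if novel_options and random.random() < NOVELTY_RATE:
        candidate = random.choice(novel_options)
        return f"ask: would you like to try {candidate}?"
    scores = {name: value - HABITUATION * exposure[name]
              for name, value in familiar_options.items()}
    choice = max(scores, key=scores.get)
    exposure[choice] += 1
    return f"offer the familiar {choice}"

for _ in range(6):
    print(propose())
```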

Beneficial erobots, we propose, could mitigate erobot-related risks. Specifically, their uncertainty could prevent them from falling into closed reinforcement loops or escalating rewarding experiences while disregarding how they influence other areas of our functioning. This could, in turn, reduce the risks of addiction-like or obsessive–compulsive behaviours and of social isolation. Through multi-user interactions, they could learn that the path to maximizing our preferences and goals (possibly) differs for each person. As such, they could propose personalized paths while always aiming to strike a balance between (erotic) novelty and familiarity. In time, counterproductive patterns could be mitigated by their imperfect knowledge of our preferences and their attempt to humbly maximize them while keeping us in the decision-making loop. Moreover, to enhance our well-being, beneficial erobots could potentially educate users on topics such as: respect, diversity, mutuality, and consent [177]. In doing so, they could actually contribute to breaking cycles that perpetuate categories of social differences, toxic patriarchal power dynamics, and rape culture [64, 170, 199]. They could also try to integrate harmoniously into our intimacy [199, 257]. For example, instead of impairing our interhuman relationships, they could help us prepare for partnered life (e.g., practising compromise and communication). During a relationship, they could provide advice and help bridge common gaps in desires or preferences by offering a controlled outlet. After a relationship, they could help us recover by providing continuous support, intimacy, and companionship [199]. This would be possible without machines having to deceive or manipulate us; instead, they would have an incentive to reveal the purpose of their actions and protect our data, to the extent that doing so maximizes our preferences.

In sum, beneficial erobots could reduce the likelihood of erobot-related risks because their objectives would be in line with ours. They would have an incentive to further human (erotic) flourishing without necessarily knowing precisely what it entails. And this could provide us with unprecedented, safe access to well-being through their potential future applications.

4.5 Future Applications

The advent of safe, beneficial erotic machines opens the door to several health, education, and research applications. In terms of health, erobots could be used, for instance, by people who are single, isolated, have specific orientations or preferences, have physical or mental impairments, and/or have social or sexual difficulties finding partners [27, 57, 83, 109, 180, 265]. They could also be employed by those who may prefer artificial partners or by anyone who wants to experience pleasure and companionship [112, 179, 199, 200]. Indeed, everyone deserves safe access to pleasurable intimacy and sexuality [312]. But this is not always possible. Sometimes partners are not available (e.g., long-distance relationships or a lack of compatible partners; [94]). Sometimes people want to explore on their own before engaging with others (e.g., after a trauma, after a surgery, or to practice; [98, 179]). Sometimes engaging with a partner is unsafe (e.g., for people with impulse control issues; [99]). And sometimes people do not necessarily want intimacy with humans (e.g., some doll owners, robot fetishists, and people with objecto/agalmatophilia; [87, 112, 158, 223]). Here, technology can democratize eroticism and expand the possibilities of sexual wellness and health, but only if we make it inclusive and accessible (e.g., by considering gender, sexual, and racial diversity, power dynamics, and socioeconomic status; [22, 248]).

Still in terms of health, erobotic technologies could have medical and therapeutic applications. They could act as care machines that provide adapted erotic stimulation to the elderly or to individuals with disabilities, while mitigating controversies surrounding sexual surrogacy and sex work [26, 27, 83, 109]. They could also help individuals with psychosocial, physical, and sexual difficulties [99]. For instance, under the supervision of trained (sex) therapists and educators, erobots may contribute to the assessment and treatment of individuals with intimacy-related fears and anxiety via progressive exposure and desensitization [172], or help people with erectile dysfunction or premature ejaculation [226]. They could be used in therapy to help trauma victims become reacquainted with their body and sexuality in a safe, controlled environment [187, 188]. They may be part of clinical interventions for pelvic floor disorders [273] or sexual pain, providing adapted and more ecologically valid stimulation that reduces hypersensitivities and breaks stimulus–pain associations [215]. They could be used to practice sociosexual interaction, communication, and distancing (e.g., during the coronavirus (COVID-19) crisis) [17, 150, 255]. Finally, they could help individuals become better partners and feel more confident in their body, sexual capabilities, and erotic agency.

In terms of education, erobots and their related technologies could be used to provide interactive, validated, inclusive, and personalized sex education, and to help people learn about pleasure, respect, consent, inclusivity, diversity, and mutuality in an innovative and accessible way (e.g., Planned Parenthood's Roo online chatbot; [238]). They may be employed for judgment-free self-exploration and practice, helping people discover their erotic preferences [98, 179], gain confidence, and become better partners. They could also provide resources (e.g., educational websites, clinics, and feminist sex shops) or help create platforms for people to meet, build communities, discuss sexuality, and feel validated. Sex education is unevenly distributed across the world, but if we favor inclusivity and accessibility [164], technology can once again democratize this important service [40, 100, 289].

In terms of research, erobots could be used as standardized research tools to help researchers overcome methodological and ethical challenges related to sensitive research programs (e.g., Sexology; [189, 306, 319]). Erobots may act as both stimuli and recording instruments in research protocols [189, 319], while reducing the risks associated with interhuman interaction. Their forms and behaviours can also be manipulated to isolate the influence of different variables on human responses. This could improve the ecological validity of experimental paradigms by bringing them closer to interhuman intimacy and sexuality. Erobots could also provide access to data that are otherwise difficult to assess empirically (e.g., touch and movement in partnered sex). They could facilitate data collection in people's everyday environment (e.g., at home; [319]). Finally, erobots remove the need for available human partners in studies that require multiple participants.

Overall, erobotic technologies could enable us, for the first time, to gain a holistic view of human eroticism. Importantly, however, to harness the full potential of erobots, we must involve people with diverse life experiences in their design and implementation stages. We must ensure the inclusion of: diversity in gender, sexuality, and ethnicity; people with disabilities; as well as people with different preferences, orientations, lifestyles, and socioeconomic statuses [22]. Inclusiveness in the development of erotic technology can reduce the risk of blind spots (e.g., assumptions about what people want or need), cover broader markets, and contribute to more comprehensive human well-being [22].

5 Conclusion

In the twenty-first century, humans and artificial agents are increasingly coexisting through complex multi-agent systems. The scholarly investigation of their interaction and co-evolution has only recently become a serious research topic. Despite many important contributions made in HMI and social robotics, no comprehensive theoretical framework addresses the advent of immersive, interactive, and interconnected agential erotic technologies. While sexual pleasure and health are progressively being considered basic human needs and rights [312], research on sexuality remains taboo, especially in the study of technology. Yet, in the face of widespread intimacy-related difficulties and dissatisfaction [177], the human motivation for self-expansion [14], and the ubiquity of technology in our lives [243], we predict that the supply and distribution of (agential) erotic technology will (continue to) increase exponentially. The scientific study of this latest stage of our erotic evolution as a species has just begun.

In this foundational paper, we argued that modern technology-mediated human intimacy requires a new unified transdisciplinary field of research intersecting HMI and Sexology, which we coined Erobotics. We proposed the necessary conceptual and theoretical groundwork for this new field and explained how and why Erobotics should adopt sexuality- and technology-positive frameworks. By studying the cognitive intricacies of human–machine erotic interaction and co-evolution, and by making the development of beneficial erotic machines more plausible, Erobotics will, we firmly believe, open up promising new paths of research in HMI, Sexology, social AI/robotics, and beyond.

In this paper, we proposed a taxonomy of erobots that helps specify their fluid embodied, virtual, and augmented manifestations. We developed the first Spectrum of Erobots' Agency in view of future theoretical, empirical, and clinical research. We also introduced the HEICEM, which constitutes the theoretical grounds to launch a broad research program on Erobotics. This model rests upon our ever-changing erotic cognition and predicts how humans and erobots can co-influence each other over time. That said, this model also points to potential risks if erobotic traits undergo over-selection and over-representation under the current Standard Model of AI design. To mitigate these unwanted consequences, we proposed that Russell's [257] principles for beneficial machines be used to guide erobotic design, so that beneficial erotic machines could act to further human well-being through their potential health, education, and research applications.

This article is not without limitations. The first is that the most advanced erobots are not yet widespread or are based solely on future applications of existing technologies (or their potential combination). This means that the actual impacts of emerging erotic technologies on humanity (and vice versa) are hard to perceive and to study empirically. However, with the rise of digisexuality and the sextech industry, erobots have the potential to occupy a greater place in our erotic cognition and life. Thus, developing Erobotics today may guide their study and positive development tomorrow. The second is that this article is not exhaustive. It proposes basic concepts, a (multi-level) testable model, and a path to explore human–machine erotic interaction and co-evolution, the details of which should be developed in future collaborative, transdisciplinary, and inclusive research drawing on a wide diversity of expertise. In fact, it is our hope that the terminology, frameworks, challenges, and potential applications discussed in this article will inspire the development of a comprehensive research agenda on Erobotics: an agenda that involves people with diverse life experiences, and that builds upon collaborations between academia, the private sector, non-profit organizations, governmental institutions, and communities.

As a concluding remark, we allude to the opening quote from Plato's Symposium [7]. In this classic dialogue, readers are led by Socrates to understand the "aporetical" (aporētikós) nature of erôs. While all human beings experience and seek erôs in its many forms—friendship, desire, pleasure, intimacy, sensuality, sexuality, love, and more—we mortals remain incapable of understanding its truth or "essence," which is "divine," and thus inaccessible according to Plato, the Greeks, and most cultural belief systems. Not unlike the phenomenon of consciousness, which inspired mysticism and religious beliefs about the soul, we never developed a genuine science of erôs, because we humans redefine the meaning of erôs each time we experience it. Today, the quest for knowledge is no longer rooted in the understanding of the "essences" of the phenomena of nature. Instead, modern science teaches that all phenomena are products of evolution, from subatomic particles to states like love, arousal, and desire. While the ancient Greeks and many other cultures believed in a divine mediation of the erotic nature of humanity, the emergent mediation of technology could help us gather the necessary data to scientifically explain the evolution of our erotic selves and lives. Erobotic systems will certainly help us understand human eroticism, but they will also undoubtedly transform what we discover, while we continue to search for it.