Full length article
Use of offensive language in human-artificial intelligence chatbot interaction: The effects of ethical ideology, social competence, and perceived humanlikeness

https://doi.org/10.1016/j.chb.2021.106795

Highlights

  • This study examined factors impacting AI chatbot users' language use.

  • Users' ethical idealism was related to the use of profanity and offensive words.

  • Users' perceived humanlikeness of the chatbot increased their use of offensive words.

Abstract

This study examined the factors that affect artificial intelligence (AI) chatbot users' use of profanity and offensive words, employing the concepts of ethical ideology, social competence, and perceived humanlikeness of the chatbot. The study also looked into users' liking of chatbots' responses to their utterance of profanity and offensive words. Using a national survey (N = 645), the study found that users' idealism orientation was a significant factor in explaining the use of such offensive language. In addition, users with high idealism liked chatbots' active intervention, whereas those with high relativism liked chatbots' reactive responses. Moreover, users' perceived humanlikeness of the chatbot increased their likelihood of using offensive words targeting dislikable acquaintances, racial/ethnic groups, and political parties. These findings are expected to fill the gap between the current use of AI chatbots and the lack of empirical studies examining language use.

Introduction

Unlike the past, when machines simply accomplished what an operator required of them, today's artificial intelligence (AI) technology not only presents a broad range of humanlike features but also has functions for interacting with human beings. Artificial intelligence, computer systems with capabilities normally thought to resemble human intelligence (Kok, Boers, Kosters, & Putten, 2013; Poole & Mackworth, 2017),¹ is now widely applied in domains such as autonomous cars, social media, games, and the military, for the purpose of assisting with, or even replacing, some tasks done by human beings. In particular, interest in AI chatbots has been heightened by the recent diffusion of applications such as Apple's Siri and Samsung's Bixby, which are embedded in smartphones. A chatbot² is an interactive, virtual agent that engages in verbal interactions with human beings (Adamopoulou & Moussiades, 2020; Kahn & Das, 2018), in many cases powered by AI, which enables the chatbot to learn from conversations with human beings and how to respond to them.

In 2016, Microsoft launched its chatbots on Skype (Følstad & Brandtzæg, 2017). Facebook also launched chatbots for its messaging application, Messenger. Google Home and Amazon's Alexa are other examples of AI chatbots that converse with human users on a wide range of topics. Because these AI chatbots can learn and develop from their interactions with human users, and can make some decisions, though limited ones, issues related to language use may arise. For instance, despite the increasing use of chatbots, little is known about the use of profanity or offensive words in human-AI chatbot interaction. Hill, Randolph, and Farreras (2015) compared the content and quality of conversations in human-human interaction and human-chatbot interaction. The results showed that people exhibited greater profanity in interaction with chatbots than with another human user. However, the factors that affect the use of profanity or offensive words in human-AI chatbot interaction remain largely undocumented. To fill this gap, the present study focuses on three theoretical concepts (ethical ideology, social competence, and perceived humanlikeness of the chatbot) for a better understanding of such offensive language use. The study then seeks to identify the factors that differentiate language use between human-human and human-chatbot interaction, as well as why people use offensive language when they interact with chatbots.

Section snippets

Predictors of use of offensive language during AI chatbot use

The predictors that affect chatbot users' use of profanity and offensive words, and the responses they anticipate as appropriate from an AI chatbot, can be studied from three different points of view: the ethical perspective, the communication perspective, and the perspective of users' perception of the chatbot. First, ethical issues arise in human behavior when a behavior could have a substantial impact on others, and when the behavior can be judged by what is right and wrong (Johannesen, Valde, & Whedbee, 2008). …

Data

An online survey was conducted using a nationally representative sample in South Korea. Sample recruitment was handled by a research company with approximately 300,000 online panel members. A quota sampling method based on age and sex was used, following the most recent census data. To recruit the sample, 14,282 panel members of the research company were randomly selected and sent email invitations. Those who visited the present study's website were asked to report their …
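The article does not include its sampling procedure as code; purely as an illustration of how an age-by-sex quota sample might be drawn from such a panel, the sketch below uses Python with pandas. The quota shares, panel data frame, and column names (sex, age_group) are hypothetical, not the study's.

```python
# Hypothetical sketch of quota sampling by age and sex with pandas.
# The quota shares, panel data, and column names are invented for this
# illustration; they are not taken from the study.
import pandas as pd

# Hypothetical target shares for each sex-by-age cell (sum to 1.0).
census_quotas = {
    ("male", "20s"): 0.09, ("female", "20s"): 0.09,
    ("male", "30s"): 0.10, ("female", "30s"): 0.10,
    ("male", "40s"): 0.11, ("female", "40s"): 0.11,
    ("male", "50s"): 0.10, ("female", "50s"): 0.10,
    ("male", "60s"): 0.10, ("female", "60s"): 0.10,
}

def quota_sample(panel: pd.DataFrame, quotas: dict, n: int) -> pd.DataFrame:
    """Draw roughly n respondents whose sex/age mix matches the quotas."""
    parts = []
    for (sex, age), share in quotas.items():
        cell = panel[(panel["sex"] == sex) & (panel["age_group"] == age)]
        k = min(round(share * n), len(cell))  # cap at available panelists
        parts.append(cell.sample(n=k, random_state=42))
    return pd.concat(parts, ignore_index=True)

# Usage (panel_df would hold the research company's panel records):
# sample = quota_sample(panel_df, census_quotas, n=645)
```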

Predictors of utterance of offensive language during AI chatbot use

With respect to the effects of ethical ideology on the use of profanity and offensive words, Table 4 shows that respondents' ethical ideology of idealism decreased their likelihood of using profanity (OR = .82, p = .03) and offensive words targeting specific racial/ethnic groups (OR = .69, p = .03). However, respondents' idealism orientation was not significantly associated with using offensive words targeting dislikable acquaintances, political parties, or gender groups. Thus, H1a was partially supported.
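Odds ratios (ORs) such as these are typically obtained from logistic regression models fitted to binary outcomes (e.g., whether a respondent reported using profanity); the article does not publish its analysis code, so the sketch below is only a minimal illustration using Python with statsmodels, with simulated data and hypothetical variable names. An OR below 1, such as the reported OR = .82 for idealism, corresponds to a negative coefficient, i.e., a lower likelihood of the outcome.

```python
# Minimal, hypothetical sketch of how odds ratios like those in Table 4
# are produced; the data and variable names are simulated, not the study's.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 645  # matches the study's sample size
df = pd.DataFrame({
    "idealism": rng.normal(0, 1, n),       # standardized scale scores
    "relativism": rng.normal(0, 1, n),
    "humanlikeness": rng.normal(0, 1, n),
})
# Simulate a binary outcome in which idealism lowers the odds of profanity.
logit_p = -0.5 - 0.2 * df["idealism"] + 0.1 * df["humanlikeness"]
df["used_profanity"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# Fit a logistic regression and exponentiate coefficients to get ORs.
model = smf.logit(
    "used_profanity ~ idealism + relativism + humanlikeness", data=df
).fit(disp=False)
print(np.exp(model.params))  # odds ratios; values below 1 reduce likelihood
```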

Summary and implications of the findings

First, the findings indicate that users' ethical orientation of idealism was a significant factor in explaining the use of profanity and of offensive words targeting specific racial/ethnic groups. Moreover, users with high idealism were more likely to favor chatbots' active intervention, such as suggesting gentle language use and offering warning messages, as well as indirect intervention such as topic shifts. On the other hand, those with high relativism were more likely to favor chatbots' reactive responses.

Conclusion

In conclusion, the present study explored the effects of ethical ideology, social competence, and perceived humanlikeness of the chatbot on the use of profanity and offensive words in the context of human-chatbot interaction. By specifying profanity and four types of offensive words and identifying the factors behind such offensive language use, the study fills the gap between the current use of AI chatbots in our society and the lack of empirical studies that examine language use in human-chatbot interaction.

Notes

  • 1.

    Definitions of artificial intelligence vary across disciplines, study areas, and approaches. However, most definitions focus on the following four categories: 1) systems that think like humans, 2) systems that act like humans, 3) systems that think rationally, and 4) systems that act rationally (Kok et al., 2013).

  • 2.

    Although chatbots have become popular only recently, their origin goes back to Alan Turing's Turing Test in 1950; the first chatbot, ELIZA, was introduced in 1966.

Credit author statement

All authors of this manuscript have received appropriate credit for their work.

Acknowledgement

This work was supported by the Ministry of Education of the Republic of Korea and the National Research Foundation of the Republic of Korea (NRF-2017S1A5A8022666).

References (42)

  • R.Y. Chan et al.

    Does ethical ideology affect software piracy attitude and behaviour?: An empirical investigation of computer users in China

    European Journal of Information Systems

    (2011)
  • J. Cohen et al.

    Applied multiple regression/correlation analysis for the behavioral sciences

    (2002)
  • Croes, E. A. J., & Antheunis, M. L. (in press). Can we be friends with Mitsuku? A longitudinal study on the process of...
  • V. Demeure et al.

How is believability of a virtual agent related to warmth, competence, personification, and embodiment?

    Presence: Teleoperators and Virtual Environments

    (2011)
  • S.W. Duck

    Socially competent communication and relationship development

  • A. Følstad et al.

    Chatbots and the new world of HCI

    Interactions

    (2017)
  • D.R. Forsyth

    A taxonomy of ethical ideologies

    Journal of Personality and Social Psychology

    (1980)
  • D. Griol et al.

    An automatic dialog simulation technique to develop and evaluate interactive conversational agents

    Applied Artificial Intelligence

    (2013)
  • S.E. Hastings et al.

    The role of ethical ideology in reactions to injustice

    Journal of Business Ethics

    (2011)
  • C.A. Henle et al.

    The role of ethical ideology in workplace deviance

    Journal of Business Ethics

    (2005)
  • A. Ho et al.

    Psychological, relational, and emotional effects of self-disclosure after conversations with a chatbot

    Journal of Communication

    (2018)