Use of offensive language in human-artificial intelligence chatbot interaction: The effects of ethical ideology, social competence, and perceived humanlikeness
Introduction
Unlike in the past, when machines simply carried out what an operator required, today's artificial intelligence (AI) technology not only presents a broad range of humanlike features but can also interact with human beings. Artificial intelligence, that is, computer systems with capabilities normally thought to resemble human intelligence (Kok, Boers, Kosters, & Putten, 2013; Poole & Mackworth, 2017),1 is now widely applied in domains such as autonomous cars, social media, games, and the military, for the purpose of assisting with, or even replacing, tasks done by human beings. In particular, interest in AI chatbots has been heightened by the recent diffusion of applications such as Apple's Siri and Samsung's Bixby, which are embedded in smartphones. A chatbot2 is an interactive virtual agent that engages in verbal interactions with human beings (Adamopoulou & Moussiades, 2020; Kahn & Das, 2018), in many cases powered by AI, which enables it to learn from conversations with human users and how to respond to them.
In 2016, Microsoft launched chatbots on Skype (Følstad & Brandtzæg, 2017), and Facebook launched chatbots for its messaging application, Messenger. Amazon's Alexa (on the Echo) and Google Assistant are other examples of AI chatbots that converse with human users on a wide range of topics. Because these AI chatbots learn and develop from their interactions with human users, and can make some decisions, albeit limited ones, issues related to language use may arise. For instance, despite the increasing use of chatbots, little is known about the use of profanity or offensive words in human-AI chatbot interaction. Hill, Randolph, and Farreras (2015) compared the content and quality of conversations in human-human interaction and human-chatbot interaction; the results showed that people used more profanity with chatbots than with another human user. However, the factors that affect the use of profanity or offensive words in human-AI chatbot interaction remain largely undocumented. To fill this gap, the present study focuses on three theoretical concepts, namely ethical ideology, social competence, and perceived humanlikeness of the chatbot, to better understand such offensive language use. The study then seeks to identify the factors that differentiate language use between human-human interaction and human-chatbot interaction, as well as why people use offensive language when they interact with chatbots.
Predictors of use of offensive language during AI chatbot use
The predictors that affect chatbot users' use of profanity and offensive words, and the responses they consider appropriate from an AI chatbot, can be studied from three points of view: an ethical perspective, a communication perspective, and the perspective of users' perceptions of the chatbot. First, ethical issues arise in human behavior when a behavior could have a substantial impact on others, and when the behavior can be judged as right or wrong (Johannesen, Valde, & Whedbee, 2008). Use
Data
An online survey was conducted using a nationally representative sample in South Korea. Sample recruitment was handled by a research company with approximately 300,000 online panel members. A quota sampling method was used, based on age and sex from the most recent census data. To recruit the sample, 14,282 of the company's panel members were randomly selected and received email invitations. Those who visited the present study's website were asked to report their
Predictors of utterance of offensive language during AI chatbot use
With respect to the effects of ethical ideology on the use of profanity and offensive words, Table 4 shows that respondents' ethical ideology of idealism decreased their likelihood of using profanity (OR = .82, p = .03) and offensive words targeting specific racial/ethnic groups (OR = .69, p = .03). However, respondents' idealism orientation was not significantly associated with using offensive words targeting dislikable acquaintances, political parties, or gender groups. Thus, H1a was partially supported.
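Odds ratios like those reported above are the exponentiated coefficients of a logistic regression, and they multiply the odds of the outcome per one-unit increase in the predictor. A minimal arithmetic sketch of this interpretation (the function and values are illustrative, not the study's data or code):

```python
import math

# A logistic-regression coefficient is a change in log-odds; its
# exponential is the odds ratio. Here we work backward from the
# reported OR of .82 for idealism (illustrative only).
beta_idealism = math.log(0.82)
odds_ratio = math.exp(beta_idealism)  # recovers 0.82

def odds_after(odds_before, units, or_per_unit=odds_ratio):
    """Odds of the outcome after a `units` increase in the predictor:
    each one-unit increase multiplies the odds by the odds ratio."""
    return odds_before * or_per_unit ** units

# An OR below 1 means higher idealism lowers the odds of using
# profanity: a one-unit increase leaves 82% of the original odds,
# i.e. an 18% reduction; a two-unit increase leaves 0.82^2 of them.
print(odds_after(1.0, 2))
```

This is why an OR of .69 (racial/ethnic slurs) indicates a stronger protective effect of idealism than an OR of .82 (profanity): it shrinks the odds further per unit of the predictor.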
Summary and implications of the findings
First, the findings indicate that users' ethical orientation of idealism was a significant factor in explaining use of profanity and offensive words targeting specific racial/ethnic groups. Moreover, users with high idealism were more likely to favor chatbots' active intervention such as suggesting gentle language use and offering warning messages as well as indirect intervention such as topic shift. On the other hand, those with high relativism were more likely to display liking of chatbots'
Conclusion
In conclusion, the present study explored the effects of ethical ideology, social competence, and perceived humanlikeness of the chatbot on the use of profanity and offensive words in the context of human-chatbot interaction. By specifying profanity and four types of offensive words and identifying the factors behind such offensive language use, the study fills the gap between the current use of AI chatbots in our society and the scarcity of empirical studies examining language use in human-chatbot interaction.
Notes
- 1. The definition of artificial intelligence varies across disciplines, study areas, and approaches. However, most definitions fall into one of the following four categories: 1) systems that think like humans, 2) systems that act like humans, 3) systems that think rationally, and 4) systems that act rationally (Kok et al., 2013).
- 2. Although chatbots have become popular only recently, their origins go back to Alan Turing's Turing Test in 1950, and the first chatbot, ELIZA, was introduced in 1966.
Credit author statement
All authors of this manuscript have received appropriate credit for their work.
Acknowledgement
This work was supported by the Ministry of Education of the Republic of Korea and the National Research Foundation of the Republic of Korea (NRF-2017S1A5A8022666).
References (42)
- et al. Flaming in electronic communication. Decision Support Systems (2004)
- Living up to the chatbot hype: The influence of anthropomorphic design cues and communicative agency framing on conversational agent and company perceptions. Computers in Human Behavior (2018)
- et al. Differences in perceptions of communication quality between a Twitterbot and human agent for information seeking and learning. Computers in Human Behavior (2016)
- et al. Humanizing chatbots: The effects of visual, identity and conversational cues on humanness perceptions. Computers in Human Behavior (2019)
- et al. Real conversations with artificial intelligence: A comparison between human–human online conversations and human–chatbot conversations. Computers in Human Behavior (2015)
- et al. The media inequality: Comparing the initial human-human and human-AI social interactions. Computers in Human Behavior (2017)
- et al. Social compensation or rich-get-richer? The role of social competence in college students' use of the Internet to find a partner. Computers in Human Behavior (2012)
- et al. Factors involved in associations between Facebook and college adjustment: Social competence, perceived usefulness, and use patterns. Computers in Human Behavior (2015)
- et al. An overview of chatbot technology
- et al. Social interactions across media: Interpersonal communication on the Internet, telephone and face-to-face. New Media & Society (2004)
- Does ethical ideology affect software piracy attitude and behaviour? An empirical investigation of computer users in China. European Journal of Information Systems
- Applied multiple regression/correlation analysis for the behavioral sciences
- How is believability of a virtual agent related to warmth, competence, personification, and embodiment? Presence: Teleoperators & Virtual Environments
- Socially competent communication and relationship development
- Chatbots and the new world of HCI. Interactions
- A taxonomy of ethical ideologies. Journal of Personality and Social Psychology
- An automatic dialog simulation technique to develop and evaluate interactive conversational agents. Applied Artificial Intelligence
- The role of ethical ideology in reactions to injustice. Journal of Business Ethics
- The role of ethical ideology in workplace deviance. Journal of Business Ethics
- Psychological, relational, and emotional effects of self-disclosure after conversations with a chatbot. Journal of Communication