An experimental study of public trust in AI chatbots in the public sector
Introduction
In light of the smartification of public services through data science technologies such as AI, this study investigates public trust in AI machines in the delivery of public services. Inspired by the literature on trust in automation (Coeckelbergh, 2012; Lee & See, 2004; Madhavan & Wiegmann, 2007), the study defines public trust as the public's confidence in a machine, based on the perceived probability that it will perform the work expected of it and display favorable behavior. Highlighted here is the case of Japan, where a limited number of local governments have started piloting the use of what they label “AI” chatbots to respond to citizen enquiries. The location and timing of this research are thus suitable for investigating what largely constitutes the public's initial trust in machines, formed “prior to interacting with a system” (Hoff & Bashir, 2015, p. 420) or “after a brief introduction to the system,” before any actual interaction with the machines takes place (Merritt & Ilgen, 2008, p. 195). Trust at this stage differs from dynamic learned trust, formed “during an interaction” (Hoff & Bashir, 2015, p. 420), and from post-task trust, formed “after completion of a task in which the person and machine work jointly” (Merritt & Ilgen, 2008, p. 196).
A chatbot is a computer program that interacts with users through natural language processing technology (Shawar & Atwell, 2007) – a form of narrow AI that extracts meaningful information from free-text user input and helps to “find the intent of the question asked by a user and send an appropriate reply” (Goyal, Pandey, & Jain, 2018, p. 19). “Narrow” AI is programmed to perform a specific task; it differs from “artificial general intelligence,” whose breadth of capabilities is at least comparable to that of humans (Hassabis, Kumaran, Summerfield, & Botvinick, 2017; Lake, Ullman, Tenenbaum, & Gershman, 2017). While some writers question whether the technology underlying most chatbots in general use today truly qualifies as AI (see, for example, Naumov, 2018), chatbot vendors and local governments have been attaching the AI label to their chatbots, and this study concerns public attitudes towards chatbots labeled and presented in this way. Inspired by theories of human trust in machines, the study hypothesized that initial public trust in chatbot responses would depend on the area of enquiry and on the purposes communicated to the public for introducing chatbot technology. The study used an experimental online survey to test these hypotheses.
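The intent-matching step described above – mapping a free-text question to an intent and returning a canned reply – can be sketched in a few lines. The intents, keywords, and replies below are hypothetical illustrations for a local-government enquiry service, not the systems actually deployed by Japanese local governments:

```python
# Minimal sketch of keyword-overlap intent matching; real chatbots use
# statistical NLP models, but the pipeline shape is the same:
# user text -> intent -> reply.

def classify_intent(user_text, intents):
    """Return the intent whose keyword set best overlaps the user's words."""
    words = set(user_text.lower().split())
    best_intent, best_score = None, 0
    for intent, keywords in intents.items():
        score = len(words & keywords)
        if score > best_score:
            best_intent, best_score = intent, score
    return best_intent  # None if nothing matched

# Hypothetical intents and replies (illustrative content only).
INTENTS = {
    "waste_collection": {"garbage", "trash", "collection", "recycling"},
    "childcare": {"daycare", "nursery", "childcare", "enrollment"},
}

REPLIES = {
    "waste_collection": "Burnable waste is collected on Tuesdays and Fridays.",
    "childcare": "Daycare applications open in October at the ward office.",
    None: "Sorry, I could not understand your question.",
}

def reply(user_text):
    """Answer a citizen enquiry, falling back to an apology if no intent matches."""
    return REPLIES[classify_intent(user_text, INTENTS)]
```

A question such as "when is garbage collection" would resolve to the `waste_collection` intent; input matching no keywords falls through to the apology reply, which is the failure mode citizens encounter when a chatbot cannot parse their enquiry.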
Investigating initial public trust in chatbots in the public sector is worthwhile for several reasons. Practically speaking, the public do not use machines they do not initially trust, as numerous studies on human-machine relationships suggest (de Vries, Midden, & Bouwhuis, 2003; Gao & Waechter, 2017; Lewandowsky, Mundy, & Tan, 2000; Moray, Inagaki, & Itoh, 2000; Muir & Moray, 1989). Normatively speaking, public institutions risk undermining their democratic legitimacy if the public does not trust the services they intend to provide with new technology. As for research, AI has been studied chiefly in computer science, while research in the social sciences in general, and in the public sector context especially, remains rather limited (de Sousa, de Melo, Bermejo, Farias, & Gomes, 2019). As a result, the societal impacts of AI have been subject to wide speculation; while currently available opinion surveys offer some empirical insights (see, for example, Accenture, 2020), theory-guided hypothesis testing is rare. These research gaps need to be addressed to help inform policy making by governments, which may become the chief users of data science technologies (Engin & Treleaven, 2019), and to help realize a “Good AI Society” (Floridi et al., 2018).
The following section provides an overview of recent developments in Japanese local governments regarding the use of chatbots. The third section examines sources of public trust in public sector chatbots, which are the basis for the hypotheses presented in the fourth section. The fifth section explains the empirical strategies used in the study, the sixth highlights key results, and the seventh discusses policy implications, followed by a conclusion.
Section snippets
Chatbots: developments in Japanese local governments
AI is not new. It traces its origins back to neuroscience in the 1940s (Hassabis et al., 2017), and the term was coined in the 1950s (Copeland, 2015). Nevertheless, it has been the center of attention in recent years owing to its remarkable progress. The future prospects of AI have provoked both concern (Agarwal, 2018; Floridi et al., 2018; Wirtz, Weyerer, & Geyer, 2018) and excitement among members of society – the latter serving as a possible reason why numerous commercial products are sold in the
Proposed sources of trust in chatbots in the public sector
To date, there is no theory on public trust in chatbots per se. However, scholars in psychology and ergonomics have made significant contributions to theorizing and understanding trust in both human-human and human-machine relations. This section draws on their valuable work, as well as on some studies in the fields of political science and public administration, to propose a general theory of trust in chatbots in the public sector, before delving into the specific hypotheses for this study in
Hypotheses for this study
The empirical testing for this study took place in the context of Japan and concerned the degree of trust the public places in AI chatbots when their local governments announce that a chatbot will answer citizen enquiries in lieu of administrators. Building on two of the three sources of public trust in chatbots discussed above, two sets of hypotheses were proposed.
The first set of hypotheses relates to expected performance, which is likely to vary across areas of enquiry. In regard to the
Method
This study was conducted as a part of a research project on AI in the public sector and involved an experimental survey, using an online panel of 2.2 million subscribers (as of April 2018) administered by the firm Rakuten Insight, Inc. The survey was made accessible to the panel from January 30 to February 6, 2019, until 8000 responses had been collected from individuals aged 18–79 who were living in Japan. The respondents were recruited to arrive at gender, age, and regional distributions
Results
The results show that public trust in chatbots depends on the area of enquiry, a finding that supports H1-a and H1-b. The ANOVA comparing the four areas of enquiry with the full sample shows that at least one pair of areas differs significantly at the 0.05 level or better for both dependent variables: Q1 [Welch's F(3, 4440.88) = 114.70, p < .0001], and Q2 [F(3, 7996) = 37.37, p < .0001]. Fig. 2 shows the results from the post-hoc tests for Q1 (H1-a) and Q2 (H1-b): except
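For readers unfamiliar with Welch's F, which adjusts the one-way ANOVA for unequal group variances (appropriate here because the four areas of enquiry need not share a common variance), the computation can be sketched as follows. The groups below are synthetic illustrations, not the study's survey data:

```python
from statistics import mean, variance

def welch_anova(groups):
    """Welch's one-way ANOVA for k groups with unequal variances.

    groups: list of sequences of observations.
    Returns (F, df1, df2).
    """
    k = len(groups)
    ns = [len(g) for g in groups]
    means = [mean(g) for g in groups]
    variances = [variance(g) for g in groups]  # sample variances

    # Weight each group by n / s^2 and form the weighted grand mean.
    w = [n / v for n, v in zip(ns, variances)]
    sw = sum(w)
    grand = sum(wi * m for wi, m in zip(w, means)) / sw

    # Between-groups mean square and the Welch correction term.
    between = sum(wi * (m - grand) ** 2 for wi, m in zip(w, means)) / (k - 1)
    lam = sum((1 - wi / sw) ** 2 / (n - 1) for wi, n in zip(w, ns))

    F = between / (1 + 2 * (k - 2) * lam / (k ** 2 - 1))
    df1 = k - 1
    df2 = (k ** 2 - 1) / (3 * lam)
    return F, df1, df2
```

With real data, the resulting F would be referred to the F distribution with (df1, df2) degrees of freedom; the fractional second degree of freedom, as in the Q1 result F(3, 4440.88), is characteristic of the Welch adjustment.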
Discussion
Clearly, the results call for policy makers to attend to the fact that public trust in chatbot responses depends on the area of enquiry. This study, inspired by a theoretical framework for understanding human trust in machines, proposes why this is the case: considering that performance is an important basis of trust, the public's confidence in the ability of chatbots to perform competently is lower for some areas of enquiry than for others. Throughout, this study argues that parental support
Conclusion
To conclude, the contributions of this study are worth highlighting. In light of the smartification of public services using technologies such as AI, it can be argued that investigating public trust in AI machines is important because the public tend not to use a machine unless they have initial trust in it. It is also important for the normative view that democratic governments should earn public support for the decision to use a chatbot, and yet public trust in public services delivered by AI
Acknowledgement
The data collection for this study was financed by the Staff Research Support Scheme of the Lee Kuan Yew School of Public Policy in the National University of Singapore. The author is currently affiliated with the University of Tokyo.
Naomi Aoki is an associate professor at the Graduate School of Public Policy, the University of Tokyo. Prior to joining the School, she served as an assistant professor in the Lee Kuan Yew School of Public Policy at the National University of Singapore. She researches topics related to public administration and public management, from both interdisciplinary and international perspectives.
References (60)
- Analysing the critical factors influencing trust in e-government adoption from citizens' perspective: A systematic review and a conceptual framework. International Business Review (2017).
- The effects of errors on system trust, self-confidence, and the allocation of control in route planning. International Journal of Human Computer Studies (2003).
- The role of trust in automation reliance. International Journal of Human-Computer Studies (2003).
- Neuroscience-inspired artificial intelligence. Neuron (2017).
- Applied artificial intelligence and trust—The case of autonomous vehicles and medical assistance devices. Technological Forecasting and Social Change (2016).
- The challenges and limits of big data algorithms in technocratic governance. Government Information Quarterly (2016).
- Trust between humans and machines, and the design of decision aids. International Journal of Man-Machine Studies (1987).
- Citizens. Know them to serve them (2020).
- Public administration challenges in the world of AI and Bots. Public Administration Review (2018).
- Conceptual models and the Cuban missile crisis. The American Political Science Review (1969).
- The logic and limits of trust.
- Artificial intelligence and administrative discretion: Implications for public administration. The American Review of Public Administration.
- From street-level to system-level bureaucracies: How information and communication technology is transforming administrative discretion and constitutional control. Public Administration Review.
- Chatbots: Changing user needs and motivations. Interactions.
- Digital discretion: A systematic literature review of ICT and street-level discretion. Information Polity.
- Can we trust robots? Ethics and Information Technology.
- Artificial intelligence: A philosophical introduction.
- Industry watch: The return of the chatbots. Natural Language Engineering.
- How and where is artificial intelligence in the public sector going? A literature review and research agenda. Government Information Quarterly.
- User agreement with incorrect expert system advice. Behaviour and Information Technology.
- Ready to serve the public? The role of empathy in public service education programs. Journal of Public Affairs Education.
- Algorithmic government: Automating public services and supporting civil servants in using data science technologies. The Computer Journal.
- Performance still matters: Explaining trust in government in the Dominican Republic. Comparative Political Studies.
- AI4People—An ethical framework for a Good AI Society: Opportunities, risks, principles, and recommendations. Minds & Machines.
- Examining the role of initial trust in user adoption of mobile payment services: An empirical investigation. Information Systems Frontiers.
- Deep learning for natural language processing: Creating neural networks with Python.
- Participation in a demonstration experiment of enquiry handling by the Tokyo Metropolitan Bureau of Taxation's chatbot: Implementation of a “concierge for the Bureau of Taxation homepage” [in Japanese].
- Trust in automation: Integrating empirical evidence on factors that influence trust. Human Factors.
- Assessing the relation between satisfaction with public service delivery and trust in government: The impact of the predisposition of citizens toward government on evaluations of its performance. Public Performance & Management Review.
- In search of prudence: The hidden problem of managerial reform. Public Administration Review.
Cited by (114)
- Investigating factors of students' behavioral intentions to adopt chatbot technologies in higher education: Perspective from expanded diffusion theory of innovation. Computers in Human Behavior Reports (2024).
- AI on the street: Context-dependent responses to artificial intelligence. International Journal of Research in Marketing (2024).
- Trends and challenges of e-government chatbots: Advances in exploring open government data and citizen participation content. Government Information Quarterly (2023).
- Citizens' acceptance of artificial intelligence in public services: Evidence from a conjoint experiment about processing permit applications. Government Information Quarterly (2023).
- The dynamics of AI capability and its influence on public value creation of AI within public administration. Government Information Quarterly (2023).