Robot apology as a post-accident trust-recovery control strategy in industrial human-robot interaction
Introduction
Since industrial robots are often large, heavy and dangerous, industrial Human-Robot Interaction (HRI) has long been constrained by safety requirements (Haddadin et al., 2008). However, as new and improved safety systems for industrial robots are developed, the likelihood and severity of collisions with these dangerous machines are decreasing (Robla-Gómez et al., 2017). One recent development is mechanically safe and compliant collaborative robots (cobots), which allow for (accidental) physical contact with humans (Stilli et al., 2017; Zhou et al., 2019). With that in mind, the barriers separating humans and industrial robots are slowly disappearing, and it can be expected that working in close proximity to an industrial robot will become increasingly common. Although physical safety may be guaranteed, this does not mean people will always feel comfortable working with robots and trust them completely (Lasota et al., 2014).
Trust and mutual understanding are important factors in HRI (Hancock et al., 2011; Groom and Nass, 2007). Trust can influence the way people use a system (Parasuraman and Riley, 1997; Lee and See, 2004) as well as their physiological stress while using it (Morris et al., 2017). One of the most widely used and generally accepted definitions of trust was proposed by Mayer et al. (1995), who define interpersonal trust as “the willingness of a party to be vulnerable to the actions of another party based on the expectation that the other will perform a particular action important to the trustor, irrespective of the ability to monitor or control that other party”. This definition suggests that to trust someone means to know that there is something to be lost, yet still be willing to take the risk. It can be expected that without trust, people will try to minimise the risks (avoid being vulnerable) and choose not to rely on the other party. Although this definition is aimed at human-human relationships, it has been used in the context of automation (Lee and See, 2004), automated driving (Noah and Walker, 2017) and HRI (Martelaro et al., 2016).
Muir (1987) stated that a human's trust in a machine can be established in a similar way to a human's trust in another human. This suggests not only that the previously quoted definition of trust should be valid in a human-robot relationship, but also that the factors that make humans build, lose and regain trust in other humans should have the same effect on a human's trust in a robot. Although people's self-reported trust often depends more on their own actions and abilities than on the robot's performance (Lee and Moray, 1994; Desai et al., 2012), a robot's mistakes and faulty behaviours can still significantly influence the assessment of its reliability and trustworthiness (Salem et al., 2015). In unstructured situations, unforeseen incidents may occur, especially if the robot's paths are not pre-programmed and other unpredictable factors, such as the presence of humans, introduce uncertainty. While increasing a robot's reliability by minimising the chance of failures is an important research objective, strategies to minimise the negative impact after failures occur are rarely studied.
Lewicki and Bunker (Lewicki et al., 1996) proposed a four-step sequence for regaining trust: the violator needs to acknowledge that a violation has occurred, admit to causing it, admit that it was “destructive” and, finally, accept responsibility for it. Previous research showed that it is possible to repair trust after a robot's mistakes through communication and apology (Lee et al., 2010; Robinette et al., 2015). However, HRI trust-repair studies mostly focus on social robots, where the robot's errors rarely affect humans' safety. Industrial HRI environments and scenarios, in contrast, are often fast-paced, performance-driven and cognitively demanding. It is therefore imperative to study the social problems of HRI, such as trust repair, in industrial settings, where such problems might influence the performance, well-being or safety of humans.
In previous work, Fratczak et al. (2019) showed that unexpected sudden movements made by a virtual industrial robot working in close proximity to a human can elicit strong responses, which imply a lack of trust. The present paper extends that work by implementing post-accident, trust-repairing robot control strategies. The objective is to study the feasibility and effectiveness of such control strategies by comparing participants' proximity to the robot, change in mistake ratio and response time between conditions in which the robot does or does not communicate an apology to the human. The hypothesis is that, just as in social HRI, a simple apology from an industrial robot will speed up participants’ post-accident recovery.
Section snippets
Related work
As the interactions shift from “using a robot as a tool” towards “working alongside/with a robot”, the importance of trust between the human and the machine becomes crucial. The way people see robots and the way they feel about them could shape the nature of the interactions. For some workers, just the idea of sharing a physical workspace with an autonomous robot is enough to make them worried, as lack of experience and poor understanding of the robot's capabilities might lead to the fear of
Method
The experiment was conducted in two versions - one without robot trust-recovery control strategies (NCS) and one with robot trust-recovery control strategies (CS) (see Section 3.2.3) - alongside a control experiment. Every participant took part in only one version of the experiment, i.e. a between-subjects design was used.
Results
In the experiment, human motion signals and subjective self-reports were recorded and analysed. This section presents and compares the results acquired for the two versions of the experiment - one with robot trust-recovery strategies (20 participants) and one without (32 participants). As explained in Section 3.2.3, one of the robot control strategies was to slow down if the human responded negatively after an apology. None of the participants responded negatively, which means, other than the
Discussion
The initial study presented by Fratczak et al. (2019) showed that erratic actions of a virtual industrial robot strongly influenced around 50% of participants, making them lean away from the robot. This study extends that work by implementing robot control strategies designed to speed up the participants’ recovery after a trust-violating event. The functionality of the designed robot control strategies was analysed by comparing the behaviours of two groups of participants. The first group (NCS
Conclusion
This paper has analysed the potential of using robot apology as a method to help people recover after a robot's trust-violating mistake. Such a method was previously investigated in social HRI situations, but not in industrial HRI. To overcome the safety concerns associated with industrial HRI, this paper has used VR to study the human responses to incidents and evaluate robot apology as a means of trust recovery. Although the results are limited by VR and it may be possible that human
CRediT authorship contribution statement
Piotr Fratczak: Conceptualization, Methodology, Software, Formal analysis, Investigation, Writing - original draft, Writing - review & editing, Visualization. Yee Mey Goh: Conceptualization, Resources, Writing - review & editing, Visualization, Supervision. Peter Kinnell: Conceptualization, Writing - review & editing, Supervision. Laura Justham: Supervision. Andrea Soltoggio: Supervision.
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Acknowledgements
The authors are grateful to the Engineering and Physical Science Research Council (EPSRC) for the financial support extended to this research through the Doctoral Training Program (DTP) (EP/N509516/1, Project Reference number: 2229424).
References (38)
- et al., A dynamic model of interaction between reliance on automation and cooperation in multi-operator multi-automation situations, Int. J. Ind. Ergon. (2006)
- et al., Human well-being and system performance in the transition to industry 4.0, Int. J. Ind. Ergon. (2020)
- Lee and Moray, Trust, self-confidence, and operators' adaptation to automation, Int. J. Hum. Comput. Stud. (1994)
- Muir, Trust between humans and machines, and the design of decision aids, Int. J. Man Mach. Stud. (1987)
- et al., Methodology for study of human-robot social interaction in dangerous situations
- et al., Interpersonal distance in immersive virtual environments, Pers. Soc. Psychol. Bull. (2003)
- et al., Immersive virtual environment technology as a methodological tool for social psychology, Psychol. Inq. (2002)
- et al., The development of a scale to evaluate trust in industrial human-robot collaboration, Int. J. Social Robot. (2016)
- et al., A conceptual and empirical examination of justifications for dichotomization, Psychol. Methods (2009)
- et al., Effects of changing reliability on trust of robot systems