Robot apology as a post-accident trust-recovery control strategy in industrial human-robot interaction

https://doi.org/10.1016/j.ergon.2020.103078

Highlights

  • A Virtual Reality study of industrial HRI is presented.

  • An effective robot trust-recovery strategy is presented.

  • People interacting with apologetic robots recover their posture faster after an accident.

  • Robot apologies can be used to recover human trust in industrial HRI.

Abstract

To enable safe Human-Robot Interaction (HRI), industrial robots have to meet stringent safety standards (ISO 10218). However, even if robots are incapable of causing serious physical harm, they may still influence people's mental and emotional wellbeing, as well as their trust, behaviour and performance in close collaboration. This work uses an HTC Vive Virtual Reality headset to study the potential of using robot control strategies to positively influence human post-accident behaviour. In the designed scenario, a virtual industrial robot first makes sudden unexpected movements, after which it either does or does not attempt to apologise for them. The results show that after the robot tries to communicate with the participants, it is reported to be less scary, more predictable and easier to work with. Furthermore, postural analysis shows that the participants who were most affected by the robot's sudden movement recover 74% of their postural displacement within 60 s after the event if the robot apologised, and only 34% if it did not. It is concluded that apologies, which are commonly used as a trust-recovery strategy in social robotics, can positively influence people engaged with industrial robotics as well.
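To relate the reported 74% and 34% figures to the underlying measurements, the sketch below shows one plausible way such a postural-displacement recovery fraction could be computed from tracked position data within a 60 s post-event window. It is an illustrative assumption only: the function name, data layout and baseline definition are hypothetical and do not come from the authors' analysis pipeline.

    # Hypothetical illustration (not the authors' code): fraction of the peak
    # postural displacement recovered within a fixed window after an event.
    import numpy as np

    def postural_recovery_fraction(pos, t, t_event, window=60.0):
        """pos: (N, 3) tracked head positions [m]; t: (N,) timestamps [s].
        Returns 1.0 for full recovery to the pre-event baseline, 0.0 for none."""
        pos, t = np.asarray(pos, float), np.asarray(t, float)
        baseline = pos[t < t_event].mean(axis=0)        # average pre-event posture
        disp = np.linalg.norm(pos - baseline, axis=1)   # distance from baseline
        after = (t >= t_event) & (t <= t_event + window)
        peak = disp[after].max()                        # largest lean-away
        final = disp[after][-1]                         # displacement at window end
        return 1.0 if peak == 0 else (peak - final) / peak

Under these assumptions, a participant who leaned 0.20 m away from the robot and settled 0.05 m from their original posture 60 s later would score (0.20 - 0.05) / 0.20 = 0.75, i.e. roughly the 74% group-level recovery reported for the apologising-robot condition.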

Relevance to industry

The findings can be used as guidelines for designing robot behaviour and trust-recovery control strategies intended to speed up human recovery after a trust-violating event in industrial Human-Robot Interaction.

Introduction

Since industrial robots are often large, heavy and dangerous, industrial Human-Robot Interaction (HRI) has long been constrained by safety requirements (Haddadin et al., 2008). However, as new and improved safety systems for industrial robots are developed, the likelihood and severity of collisions with these dangerous machines are decreasing (Robla-Gómez et al., 2017). One recent development is mechanically safe and compliant collaborative robots (cobots), which allow for (accidental) physical contact with humans (Stilli et al., 2017; Zhou et al., 2019). With that in mind, the barriers separating humans and industrial robots are slowly disappearing, and it can be expected that working in close proximity to an industrial robot will become increasingly common. Although physical safety may be guaranteed, this does not mean people will always feel comfortable working with robots and trust them completely (Lasota et al., 2014).

Trust and mutual understanding are important factors in HRI (Hancock et al., 2011; Groom and Nass, 2007). Trust can influence the way people use a system (Parasuraman and Riley, 1997; Lee and See, 2004) as well as their physiological stress while using it (Morris et al., 2017). One of the most widely used and generally accepted definitions of trust was proposed by Mayer et al. (1995), who define interpersonal trust as: "the willingness of a party to be vulnerable to the actions of another party based on the expectation that the other will perform a particular action important to the trustor, irrespective of the ability to monitor or control that other party". This definition suggests that to trust someone means to know that there is something to be lost, yet still be willing to take the risk. It can be expected that without trust, people will try to minimise the risks (avoid being vulnerable) and choose not to rely on the other party. Although this definition is aimed at human-human relationships, it has been used in the context of automation (Lee and See, 2004), automated driving (Noah and Walker, 2017) and HRI (Martelaro et al., 2016).

Muir (1987) stated that a human's trust in a machine can develop in a similar way to a human's trust in another human. This suggests not only that the previously quoted definition of trust should be valid for a human-robot relationship, but also that the things that make humans build, lose and regain trust in other humans should have the same effect on a human's trust in a robot. Although people's self-reported trust is often more dependent on personal actions and abilities than on the robot's performance (Lee and Moray, 1994; Desai et al., 2012), a robot's mistakes and faulty behaviours can still significantly influence the assessment of its reliability and trustworthiness (Salem et al., 2015). In unstructured situations, unforeseen incidents may occur, especially if the robot's paths are not pre-programmed and other unpredictable factors, such as the presence of humans, introduce uncertainty. While increasing a robot's reliability by minimising the chance of failures is an important research objective, strategies to minimise the negative impact after failures have occurred are rarely studied.

Lewicki and Bunker (Lewicki et al., 1996) proposed a four-step sequence for regaining trust: the violator needs to acknowledge that a violation has occurred, admit to causing it, admit that it was "destructive" and, finally, accept responsibility for it. Previous research showed that it is possible to repair trust after a robot's mistakes through communication and apology (Lee et al., 2010; Robinette et al., 2015). However, HRI trust-repair studies have mostly focused on social robots, whose errors rarely affect humans' safety. Industrial HRI environments and scenarios differ markedly from those of typical social HRI: they are often fast-paced, performance-driven and impose a high cognitive load. It is therefore imperative to study the social problems of HRI, such as trust repair, in industrial settings, where such problems might influence the performance, well-being or safety of humans.

In previous work, Fratczak et al. (2019) showed that unexpected sudden movements made by a virtual industrial robot working in close proximity to a human can elicit strong responses that imply a lack of trust. The present study extends that work by implementing post-accident, trust-repairing robot control strategies. The objective of this paper is to study the feasibility and effectiveness of such control strategies by comparing participants' proximity to the robot, change in mistake ratio and response time in the cases where the robot either does or does not try to communicate an apology to the human. The hypothesis is that, just as in social HRI, a simple apology from an industrial robot will speed up participants' post-accident recovery.

Section snippets

Related work

As interactions shift from "using a robot as a tool" towards "working alongside/with a robot", trust between the human and the machine becomes crucial. The way people see robots and the way they feel about them can shape the nature of the interactions. For some workers, just the idea of sharing a physical workspace with an autonomous robot is enough to make them worried, as lack of experience and poor understanding of the robot's capabilities might lead to the fear of

Method

Two versions of the experiment were conducted - one with no robot trust-recovery control strategies (NCS) and one with robot trust-recovery control strategies (CS) (see Section 3.2.3) - in addition to a control experiment. Every participant was allowed to take part in only one version of the experiment, i.e. a between-subjects design.

Results

In the experiment, human motion signals and subjective self-reports were recorded and analysed. This section presents and compares the results acquired for the two versions of the experiment - one with robot trust-recovery strategies (20 participants) and one without (32 participants). As explained in Section 3.2.3, one of the robot control strategies was to slow down if the human responded negatively after an apology. None of the participants responded negatively, which means, other than the

Discussions

The initial study presented by Fratczak et al. (2019) showed that erratic actions of a virtual industrial robot strongly influence around 50% of participants, making them lean away from the robot. This study extends that work by implementing robot control strategies designed to speed up the participants' recovery after a trust-violating event. The functionality of the designed robot control strategies was analysed by comparing the behaviours of two groups of participants. The first group (NCS

Conclusion

This paper has analysed the potential of using a robot apology as a method to help people recover after a robot's trust-violating mistake. Such a method was previously investigated in social HRI, but not in industrial HRI. To overcome the safety concerns associated with industrial HRI, this paper used VR to study human responses to incidents and to evaluate robot apology as a means of trust recovery. Although the results are limited by VR and it may be possible that human

CRediT authorship contribution statement

Piotr Fratczak: Conceptualization, Methodology, Software, Formal analysis, Investigation, Writing - original draft, Writing - review & editing, Visualization. Yee Mey Goh: Conceptualization, Resources, Writing - review & editing, Visualization, Supervision. Peter Kinnell: Conceptualization, Writing - review & editing, Supervision. Laura Justham: Supervision. Andrea Soltoggio: Supervision.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgements

The authors are grateful to the Engineering and Physical Science Research Council (EPSRC) for the financial support extended to this research through the Doctoral Training Program (DTP) (EP/N509516/1, Project Reference number: 2229424).

References (38)

  • S.P. Drummond et al., The neural basis of the psychomotor vigilance task, Sleep (2005)

  • P. Fratczak et al., Understanding human behaviour in industrial human-robot interaction by means of virtual reality

  • V. Groom et al., Can robots be teammates?: benchmarks in human–robot teams, Interact. Stud. (2007)

  • S. Haddadin et al., The role of the robot mass and velocity in physical human-robot interaction - Part I: non-constrained blunt impacts

  • K. Hald et al., Proposing human-robot trust assessment through tracking physical apprehension signals in close-proximity human-robot collaboration

  • A. Hamacher et al., Believing in BERT: using expressive communication to enhance trust and counteract operational error in physical human-robot interaction

  • P.A. Hancock et al., A meta-analysis of factors affecting trust in human-robot interaction, Hum. Factors (2011)

  • R.J. Kosinski (2008)

  • P.A. Lasota et al., Toward safe close-proximity human-robot interaction with standard industrial robots