Background

Rescue robotics is a relatively young discipline within field robotics. Its goal is to provide rescuers in operation areas with the ability to sense and act at a distance from the site of disasters (Murphy, 2014), i.e. “phenomena caused by environmental or man-made events that lead to fatalities, injuries, stress, physical damage and economic breakdown of great significance.” (Cuny, 1992).

Rescue robots can give operators access to areas that would be inaccessible, too dangerous, or too slow for humans to enter. They can also serve as remote sensing platforms that make it possible for humans to interact with the destroyed environment (Adams et al., 2014; Kochersberger et al., 2014; Stefanov & Evans, 2014). A rescue robot can, for example, help visually examine and map the interior of a collapsed building, inspect damage (Devault, 2000; Ellenberg et al., 2015; Lattanzi & Miller, 2017; Recchiuto & Sgorbissa, 2017; Torok et al., 2014), place acoustic, thermal, or seismic sensors to monitor the situation, or quickly remove heavy rubble to facilitate extricating victims (Murphy & Stover, 2007; Murphy et al., 2009; Steimle et al., 2009). Providing this kind of rapid access and intervention should translate into fewer lives lost, less severe injuries and, overall, faster recovery from the disaster itself (Murphy, 2014).

The first reported use of rescue robots at a disaster site dates to 2001, when the Center for Robot-Assisted Search and Rescue used robots from the DARPA Tactical Mobile Robots program at the World Trade Center disaster in New York City (Murphy, 2014). Since then, rescue robots have been deployed across the world in mine accidents, earthquakes, mudslides, nuclear disasters, hurricanes, oil spills and building collapses, thereby gaining widespread public prominence. In the coming years, owing to the growing impact of natural and man-made disasters, the need for such robots is expected to increase across all phases of the disaster life-cycle (Murphy, 2014). In light of this broader view of their role, “rescue robots” are often termed “disaster robots”; the two terms will thus be used interchangeably throughout this paper, as will the terms “rescuer”, “responder” and “operator”.

The types of robots that are employed in disasters include Unmanned Ground Vehicles (UGV), which carry a range of sensors and are typically equipped with tracks to traverse unstructured terrains, Unmanned Aerial Vehicles (UAV), which can provide aerial support for disaster response operations, and Unmanned Marine Vehicles (UMV), which can, for instance, carry out underwater inspections and insert mitigation devices. Although most of these robots are controlled by humans, semi-autonomous systems that reduce the need for low-level control by operators are becoming more frequent (Birk & Carpin, 2006; Delmerico et al., 2019; Zuzanek et al., 2014).

Operations taking place in disaster settings are fraught with ethical challenges. Many of those challenges are associated with the hazardous, chaotic, and pressure-filled conditions under which responders must operate and the lack of time, materials, and capacity that characterizes their work. Choices regarding, for instance, where to concentrate rescue efforts, what kind of risks should be taken, whom to search for first, who should be given priority treatment, who must be left to wait, and how to make optimal use of the limited resources available, are morally burdensome (Gustavsson et al., 2020), and the consequences of those choices can weigh on victims and responders, but also on other stakeholders.

Policies and guidelines exist to support responders in their work (World Medical Association, 2015; International Council of Nurses, 2012; Green et al., 2003). Limited guidance is available, however, for those who do not have medical roles and for ethically informed, practical decision-making in specific disaster settings (Gustavsson et al., 2020). This situation is exacerbated by the generalized lack of specific training programs to develop the knowledge and skills required for such decision-making (Gustavsson et al., 2020). Rescue robots’ increasing presence in operation areas is likely to add a further layer of ethical complexity, which will depend in part on what types of robots are used and the specific contexts in which they are deployed.

In some domains, e.g. industry, the military and education, ethical concerns regarding the application of robots have received much attention (Lichocki et al., 2011). A great deal of reflection has also been devoted to the use of robots in healthcare, looking at their impact on the privacy (Sharkey & Sharkey, 2012), human rights (Sharkey & Sharkey, 2011), and autonomy of patients (Sparrow, 2016). However, comparatively little effort seems to have been dedicated to exposing and elucidating the ethical issues that may emerge when robots are used in disaster settings (Harbers et al., 2017). Therefore, to help focus timely ethical reflection on rescue robotics before the use of rescue robots becomes commonplace, we have conducted a scoping review of the relevant literature.

Methods

We followed the well-known scoping review framework by Arksey and O’Malley (Arksey & O’Malley, 2005) and subsequent recommendations (Colquhoun et al., 2014; Levac et al., 2010) for conducting and reporting scoping reviews.

Although there is no clear consensus on their definition or purpose, scoping reviews are commonly described as tools to map or synthesize a range of evidence in order to convey the size and scope of a research field (Levac et al., 2010).

According to Arksey and O’Malley, by conducting a scoping study researchers can survey the extent, range, and nature of research activity in a given field, establish whether a full systematic review is warranted, summarize and disseminate research evidence, or identify gaps in the existing literature (Arksey & O’Malley, 2005). Unlike systematic reviews, scoping studies do not typically provide an assessment of the quality of the studies covered (Grant & Booth, 2009; Rumrill et al., 2010). Unlike narrative or literature reviews, they require analytical reinterpretation of the literature. Scoping reviews are particularly relevant when the literature on a topic is complex or heterogeneous, can cover findings from a range of different study designs and methods, and may be especially useful when evidence on a topic is emerging (Levac et al., 2010). Thus, they are well-suited to synthesizing the literature on a topic that has yet to be comprehensively mapped, such as the one we have endeavored to review here.

This scoping review surveyed published literature and ethics approval was not required.

We identified papers discussing ethical issues associated with the use of rescue robots using a two-tiered search strategy: (1) searching five databases (Google Scholar, IEEE Xplore, Science Direct, Scopus, and the Web of Science), and (2) searching the references in the documents that were selected for the final qualitative synthesis. We searched title, abstract and keywords for the terms: ethics AND (“rescue robot” OR “disaster robot”). Query logic was modified to adapt to the language used by each engine or database. Only the first 250 hits retrieved in Google Scholar, ordered by relevance, were considered, in accordance with the methods used in numerous similar reviews. The search initially yielded 429 entries. Following the recommendations by Pham and colleagues (Pham et al., 2014), the subsequent study selection process was conducted using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA; http://prisma-statement.org) as a guide (see Fig. 1).
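As a purely hypothetical illustration of this adaptation (these strings are ours and are not quoted from the review protocol), the core query could be rendered with the title/abstract/keyword field codes of two of the databases searched:

    Scopus:          TITLE-ABS-KEY(ethics AND ("rescue robot" OR "disaster robot"))
    Web of Science:  TS=(ethics AND ("rescue robot" OR "disaster robot"))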

Fig. 1 PRISMA screening process for identified papers

At the first screening stage, one researcher reviewed the titles and abstracts of the papers. Only papers written in English were included. Duplicates, papers not related to the topic, as well as theses, articles from the popular press, reports, non-reviewed books and book chapters, presentations and opinion pieces were excluded. This left 42 papers, which underwent further screening. The second eligibility screening was conducted independently by two members of the research team. Each researcher evaluated the papers against the inclusion and exclusion criteria, and the independent results were then compared. A third researcher was involved to resolve any disagreements about paper eligibility.

Documents were included if they fulfilled all the following criteria:

  • Papers published in a peer-reviewed outlet indexed in Google Scholar, IEEE Xplore, Science Direct, Scopus, or the Web of Science;

  • Papers in English;

  • Papers that included the relevant search terms as previously defined;

  • Papers in which ethical issues associated with rescue robots were the main focus or were at least addressed in a dedicated part or section;

  • Papers that were published in 2001 or later.

In addition, the following items comprised our exclusion criteria:

  • Papers that focused on ethical issues in robotics but only casually mentioned rescue robots;

  • Papers that focused on rescue robotics but only casually mentioned any associated ethical issues.

Once the papers that fulfilled all of the criteria were identified, each of these papers’ reference list was screened for additional relevant documents.

Based on the recommendations by Levac and colleagues (Levac et al., 2010), we performed a descriptive quantitative synthesis and a thematic analysis. Following Arksey and O’Malley (Arksey & O’Malley, 2005), our descriptive quantitative summary includes the details of the articles identified: year of publication, discipline/field of inquiry, type of research, type of robot investigated, etc. For the thematic analysis, the papers were coded following a multi-step process comprising open coding, axial coding and selective coding, using the Dedoose web-based application (www.dedoose.com). In the first phase, units of meaning were identified and labeled to allow categories to emerge from the data (open coding). Open codes were then categorized, with similar codes grouped, refined and combined into larger themes (axial coding). The conceptually stable thematic patterns that emerged were then organized and grouped into higher-order conceptual themes; selective coding involved the integration and refinement of these concepts (Corbin & Strauss, 2008). Finally, findings were integrated and validated through discussion among all members of the research team.

Results

Quantitative data synthesis

Six papers fulfilled the selection criteria of our literature review. Most were published in scientific journals (4/6), each in a different outlet (see Table 1). No relevant publications were identified in any of the papers’ reference lists.

Table 1 Papers that fulfilled selection criteria

None of the papers identified were published before 2008, and 5/6 were published between 2014 and 2020 (see Table 1).

The studies are mostly situated in robotics (n = 3), robot ethics (n = 3) and machine ethics (n = 3), but also in technology assessment (n = 2) and information systems (n = 1). Most of the papers include elements from different disciplines (see Table 2).

Table 2 Overview of the papers identified

Most studies are conceptual and/or technological but 2/6 feature a mixed approach including an experimental exploration of the ethical issues at hand. UGVs are discussed in most of the publications (4/6), with the remaining focusing on UAVs or both UGVs and UAVs (see Table 2).

Five of the six papers refer to a method of practicing ethics in Research and Innovation, and four use simulations or scenarios to anticipate the ethical implications and other consequences of using robots in search and rescue missions; one instead considers two case studies of robot system deployments in search and rescue settings to test socio-ethical approaches to the development of robots (see Table 2).

Qualitative text analysis

Qualitative text analysis highlighted seven core ethical concerns: fairness and discrimination; false or excessive expectations; labor replacement; privacy; responsibility; safety; trust. Reported in Table 2 are the occurrences of these themes across the six publications.

Two of the papers included discussion of ethical concerns that emerged from an inductive process involving qualitative research with stakeholders (Carlsen et al., 2015; Harbers et al., 2017). Three papers, instead, described using simulations to better understand the impact of ethical concerns identified elsewhere (Brandão et al., 2020; Stormont, 2008; Tanzi et al., 2015). Only one paper looked at cases of actual robot deployment—although in one of the two cases the setting was highly controlled—to test the introduction of an ethical framework (Amigoni & Schiaffonati, 2018).

Fairness and discrimination

Concerns associated with fairness and discrimination were the most frequently discussed ethical considerations in our review (4/6 papers); three of these papers examined discrimination in relation to disaster victims, while one examined it in relation to rescue operators. As Amigoni and Schiaffonati point out:

Hazards and benefits should be fairly distributed (…) to avoid the possibility of some subjects incurring only costs while other subjects enjoy only benefits. This condition is particularly critical for search and rescue robot systems, e.g., when a robot makes decisions about prioritizing the order in which the detected victims are reported to the human rescuers or about which detected victim it should try to transport first (Amigoni & Schiaffonati, 2018).

In their paper, Brandão and colleagues provide an illuminating practical illustration of this concern (Brandão et al., 2020). The authors describe the hypothetical case of a UAV deployed after a disaster to search for victims and deliver medications. After each mission, the UAV needs to recharge its batteries and reload supplies at the base, in the center of the city. Because the distribution of the city’s population, as is often the case, is not uniform in terms of density, age, ethnicity and gender, the authors continue, the UAV’s planned paths will have a skewed distribution of these characteristics. So, for instance, if the city in question has a high-density concentration of university students in its center, and the UAV begins its exploration missions from the area surrounding the base station, it will mostly find young people, who are usually more likely to survive than older people living elsewhere. The UAV will therefore be successful in terms of finding as many people as possible, but at the same time it will not respect the notion of distributive fairness, according to which, in this context, priority should be given to those who are most at risk (and need to be found first). Such a robot would confirm or even reinforce common critical views about disaster response missions, according to which policies for selecting disaster response locations are often unfair (O’Mathuna et al., 2013).
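To make the tension concrete, the following minimal Python sketch (our illustration, not Brandão and colleagues’ planner; the zones, coordinates and risk scores are invented) contrasts a nearest-first exploration order, which maximizes early detections, with a risk-weighted ordering in the prioritarian spirit described above:

    # Illustrative sketch: two ways a UAV might order its search zones.
    # All values are hypothetical; real planners reason over maps,
    # battery budgets and detection models.
    from math import dist

    BASE = (0.0, 0.0)  # recharge/reload station in the city center

    # zone -> ((x, y) position, estimated at-risk score of its population)
    zones = {
        "campus":   ((1.0, 0.5), 0.2),  # dense, young population near base
        "suburb_a": ((6.0, 2.0), 0.7),
        "suburb_b": ((8.0, 5.0), 0.9),  # highest estimated risk, farthest away
    }

    def nearest_first(zones):
        """Find as many people as early as possible: closest zones first."""
        return sorted(zones, key=lambda z: dist(BASE, zones[z][0]))

    def risk_weighted(zones, alpha=0.8):
        """Prioritarian ordering: trade estimated risk against travel cost."""
        def score(z):
            pos, risk = zones[z]
            return alpha * risk - (1 - alpha) * dist(BASE, pos) / 10.0
        return sorted(zones, key=score, reverse=True)

    print(nearest_first(zones))  # ['campus', 'suburb_a', 'suburb_b']
    print(risk_weighted(zones))  # ['suburb_b', 'suburb_a', 'campus']

Under the nearest-first policy the high-risk suburb is visited last; weighting risk more heavily reverses the order at the cost of fewer early detections, which is precisely the trade-off the authors analyze.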

Looking at the impact of information systems used during crisis management and disaster relief, Tanzi and colleagues also emphasize the risk of issues of social justice, pointing out that inclusive design is often lacking in emergency systems and that this may contribute to or exacerbate the marginalization of certain social groups and communities (Tanzi et al., 2015).

Carlsen et al. predict that male rescuers, being the ones traditionally involved in the riskiest and most physically demanding rescue operations, may be discriminated against, as they will be the most likely to be replaced by rescue robots (Carlsen et al., 2015).

False or excessive expectations

This theme was only discussed by Harbers and colleagues, who point out that stakeholders are generally unable to make sound assessments about the capabilities and limitations of rescue robots. In the authors’ view, this inability can lead stakeholders to overestimate or underestimate the capabilities of rescue robots. In the first case, this may translate into unjustified reliance on their performance, and thus, for example, into false hopes that the robots may save certain victims, or into their deployment for tasks for which they are not suitable or under inappropriate conditions. In the second case, when robots’ capabilities are underestimated, they may be underutilized, leading to a waste of precious resources (Harbers et al., 2017).

Labor replacement

Both Carlsen et al. and Harbers and colleagues report that stakeholders predict that rescue robots will likely replace human operators in the most physically challenging or high-risk rescue missions. While Carlsen et al. then focus on the likelihood of ensuing discrimination towards male responders, as mentioned above (Carlsen et al., 2015), Harbers and colleagues express a concern that replacing humans with robots may degrade performance with respect to victim contact, situation awareness, manipulation capabilities and so on, pointing out that robot-mediated contact with victims may interfere with medical personnel’s ability to perform triage or provide medical advice or support (Harbers et al., 2017).

Privacy

Questions related to privacy are extensively examined in three of the papers we identified. According to Harbers and colleagues, the use of robots generally leads to an increase in information gathering, which can jeopardize the privacy of personal information. This may be personal information about rescue workers, such as images or data about their physical and mental stress levels, but also about victims or people living or working in the disaster area. Harbers et al. add that the loss of privacy potentially associated with the deployment of robots in disaster scenarios does not necessarily result in an ethical dilemma: indeed, given the critical nature of search and rescue operations, the benefits of collecting information in such settings largely outweigh any harms it may cause. This will require, however, that the information gathered by the robots is not shared with anyone outside professional rescue organizations and is exclusively used for rescue purposes. Given the time-critical, data-rich, high-stakes and often quite chaotic conditions that characterize rescue operations, ensuring that information is handled in this way, the authors conclude, will require particular care (Harbers et al., 2017).

Reporting previous work (Buescher et al., 2013), Tanzi and colleagues emphasize the need for regulation when using information technology in crisis or emergency situations, in order to clarify misunderstandings about situations or cases (i.e. whether it is possible to collect, process and share data with other stakeholders) and to foster good practices. As an illustration, the authors quote Buescher et al., who explain how, during the 2005 London terrorist attacks, failures to share data, concerns over legitimacy, and silo-thinking led to inefficiencies and mistakes on the part of the emergency agencies, owing to misinterpretations of the requirements of the UK Data Protection Act 1998 (Buescher et al., 2013; Tanzi et al., 2015).

In the context of navigation planning, Brandão and colleagues explain that ensuring fairness in rescue robot navigation requires collecting data on the distribution of certain features of the population affected by a disaster. This may realistically raise privacy issues with the data collection itself, or with its analysis. Leaks may also arise not only from data breaches but also from correlations with observed robot behavior, as the paths taken by a robot could reveal information about the personal characteristics of the people in the city or other location where the robot is deployed (Brandão et al., 2020).

Responsibility

In the paper by Tanzi et al., issues of responsibility are viewed as associated with liability in the event of technical failures or accidents and injuries to victims (Tanzi et al., 2015).

Harbers and colleagues instead focus on responsibility assignment problems, which, they say, can apply to both moral and legal responsibility, where moral responsibility concerns blame while legal responsibility concerns accountability. Such problems, according to the authors, can arise when robots act with no human supervision. If a robot malfunctions, behaves incorrectly, makes a mistake or causes harm, it may be unclear who is responsible for the damage caused: the operator, the software developer, the manufacturer or the robot itself. Responsibility assignment problems, they continue, become particularly complicated when the robot has some degree of autonomy, self-learning capabilities or the capacity to make choices that were not explicitly programmed (Harbers et al., 2017).

Carlsen and colleagues note that employing robots that are capable of learning introduces a “man in the middle” regarding responsibilities, explaining that, arguably, previous owners, as well as the designers, producers and users of such robots, could be held responsible for any problems they cause (Carlsen et al., 2015; Johansson, 2010). They also point out that first responders and other operators might be concerned about robots collecting visual data during rescue operations, as this may involve other people being able to watch them closely during missions. This would be a drastic change compared to the current situation, opening up the possibility of rescue operations being evaluated in unprecedented ways (Carlsen et al., 2015).

Safety

Harbers and colleagues acknowledge that although attention to safety is clearly one of the key priorities that need to be taken into account when deploying rescue robots, this priority will often have to be balanced against other values, as rescue missions necessarily involve safety risks. Some of these risks can be mitigated by replacing operators with robots, but robots themselves, in turn, may introduce other safety risks, mainly because they can malfunction. Even when they perform correctly, robots can still be harmful: they may, for instance, fail to identify a human being and collide with them. In addition, robots can hinder the well-being of victims in subtler ways. For example, the authors argue, being trapped under a collapsed building, wounded and lost, and suddenly being confronted with a robot, especially if there are no humans around, can in itself be a shocking experience (Harbers et al., 2017).

Focusing specifically on the use of UAVs, Tanzi et al. also emphasize the risks associated with collisions and accidents, pointing out that even high-end military drones like the Predator crash with some frequency, although injuries are rare, and that in urban environments, small UAVs can still cause injury or property damage (Tanzi et al., 2015).

Amigoni and Schiaffonati acknowledge that, while risks should be contained as much as possible, it is also evident that a completely risk-free situation is not possible for robot systems operating in search and rescue missions (Amigoni & Schiaffonati, 2018).

Trust

The question of trust in autonomous systems is the focus of one of the papers identified by our review. In his paper, Stormont highlights how trust by one agent in another requires two beliefs: that the trusted agent, which can perform a task to help the trusting agent achieve a goal, has (a) the ability to perform the task and (b) the desire to perform it (Stormont, 2008). He then points out that two main components of trust have been identified in the literature: confidence and reputation. Stormont claims that autonomous systems and robots in general tend not to have a good reputation. While Stormont is unable to provide a comprehensive explanation for robots’ reputational problem, he suggests that confidence, the other component of trust, must be involved. In the author’s view, humans lack confidence in autonomous robots because they are unpredictable. Humans working together are generally able to anticipate each other’s actions in a wide range of circumstances—especially if they have trained together, as is the case in rescue crews. Autonomous systems, instead, often surprise even those who designed them, and such unpredictability can be both concerning and unwelcome in dangerous situations like those that are typical of disaster scenarios.
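A minimal sketch of this two-belief account (our illustration only; Stormont gives no formal model, and the combination rule and threshold below are invented) might look as follows:

    # Illustrative sketch of the trust account described above; the
    # averaging rule and the 0.7 threshold are hypothetical.

    def believes_ability(confidence: float, reputation: float,
                         threshold: float = 0.7) -> bool:
        """Combines the two components of trust named in the paper:
        confidence (from observed predictability) and reputation."""
        return (confidence + reputation) / 2 >= threshold

    def trusts(confidence: float, reputation: float,
               believes_desire: bool) -> bool:
        """Trust requires believing the agent both can and wants to help."""
        return believes_ability(confidence, reputation) and believes_desire

    # An unpredictable robot (low confidence) is not trusted even with a
    # decent reputation and an evident "desire" to help:
    print(trusts(confidence=0.3, reputation=0.6, believes_desire=True))  # False
    print(trusts(confidence=0.9, reputation=0.6, believes_desire=True))  # True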

Discussion

Over the past several years the capabilities of rescue robots have vastly improved. As reported by Delmerico and colleagues in their review of the current state and future outlook of rescue robotics, developments in UAVs have led to new applications for aerial platforms, progress in control and actuation now makes it possible for legged robots to negotiate tough terrains, and novel human–robot interfaces are improving the ways in which operators can interact with robots (Delmerico et al., 2019). Some of these important advances are already used in field-ready commercial products, making widespread adoption of rescue robots increasingly likely. Yet, the ethical concerns associated with the use of such robots remain largely overlooked in the literature, as confirmed by the very limited corpus of research identified by our scoping review. With no relevant publications appearing before 2008 but 5/6 concentrated between 2014 and 2020, there appears to be a modest increase in interest; the annual publication rate, however, remains exceedingly low. Given the abundance of research and guidelines concerned with the ethics of robots, autonomous systems and artificial intelligence that has been produced in recent years (Winfield, 2019; Winfield & Jirotka, 2018), this finding is somewhat surprising.

Quantitative analysis: anticipating the ethical impact of rescue robots

Most of the papers we identified investigate the ethical concerns associated with rescue robots by envisioning potential scenarios or developing simulations of what could take place, responding to uncertainty with an anticipatory approach. Anticipating the impact of novel technologies is notoriously difficult, as illustrated by the control dilemma formulated by David Collingridge (Collingridge, 1980), which remains central in discussions among scholars of technology assessment. The dilemma states:

attempting to control a technology is difficult…because during its early stages, when it can be controlled, not enough can be known about its harmful social consequences to warrant controlling its development; but by the time these consequences are apparent, control has become costly and slow.

Since its formulation, approaches aimed at engaging with the control dilemma have mostly focused on reducing the uncertainty inherent in the early stages of technological development, proposing that technologies should be designed proactively in ways that prevent negative consequences and risks while at the same time striving to achieve positive impacts (van de Poel, 2015). This is the approach taken by a range of methods that have been developed to structure the process of practicing ethics in Research and Innovation, such as Constructive Technology Assessment (Rip et al., 1995), Value Sensitive Design (Friedman et al., 2006) and Responsible Innovation (Owen et al., 2013). Most of the papers identified by our review, indeed, refer to one or another of these methods, displaying an awareness that anticipation and proactive approaches, if not always enough to solve the control dilemma, are arguably a helpful component of the solution. Amigoni and Schiaffonati also discuss the value of coupling anticipation with explorative experiments. Recognizing that technological innovation can defy foresight by behaving in unexpected ways, this approach calls for the gradual introduction of novel technologies into society, so that their effects can be monitored and their design iteratively changed. To answer ethical questions concerning the adoption of rescue robots, Amigoni and Schiaffonati propose, anticipatory methods should be paired with explorative experiments aimed at gaining knowledge of the behavior of such robots in real-world deployments (Amigoni & Schiaffonati, 2018).

Another finding that emerges in the studies we identified is the incorporation of empirical research, namely explorations of stakeholder views and values, in the ethical assessment of novel technologies. Such empirical explorations, often in the form of qualitative studies, are actually a key element of some of the approaches mentioned earlier. Responsible Innovation, for instance, advocates the involvement of relevant stakeholders throughout the life cycle of research and innovation (Von Schomberg, 2013), highlighting the importance of awareness and sensitivity to social and cultural contexts. Likewise, Value Sensitive Design includes empirical investigations using tools from social sciences research to explore the human context in which novel artifacts will function (Friedman et al., 2006).

Qualitative analysis: the core ethical themes

The core ethical themes that emerged from the qualitative text analysis of the publications in our review reflect several of the issues that are debated in the wider literature both on ethics in robotics and in disaster ethics. This may explain why, unlike what might be expected, we found limited interplay between these themes and the contexts, robot types and methods used in the studies (see Table 2).

Fairness and discrimination

Concerns associated with fairness and discrimination are discussed in most of the papers identified, and notably by Brandão and colleagues. Many scholars in disaster ethics share the view that individuals most at risk should be given priority (Merin et al., 2010; O’Mathuna et al., 2013). This view is grounded in the normative position, typical of prioritarian ethical approaches to distributive justice—as Brandão and colleagues note—that those who have a greater need have a stronger moral claim to resources. Thus, in disaster settings, older victims and children should be given priority over victims who are less at risk. Brandão and colleagues’ paper shows that robot navigation planning has implications in terms of distributive justice and indirect discrimination. Indeed, how navigation is planned necessarily modifies the likelihood that certain people will be able to access the benefits associated with the presence or actions of the rescue robot itself. This can lead to structural injustices having to do, for instance, with income- or age-related segregation in given urban areas (Brandão et al., 2020). Existing work conducted from a Responsible Innovation perspective, Brandão et al. remind us, has highlighted that fairness considerations are crucial to stakeholders when considering the application of autonomous systems (Webb et al., 2018). Similarly, questions of fairness, and controversies over lack of fairness (Angwin et al., 2016; Chouldechova, 2017), have occupied much of the public discourse on AI ethics, and as highlighted by Tanzi and colleagues, the lack of inclusive design can generate issues of social justice (Tanzi et al., 2015).
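One standard way to formalize the prioritarian position invoked here (our addition for clarity; none of the reviewed papers states it formally) is to rank outcomes by a social welfare function of the form

    W = \sum_i g(w_i), \qquad g' > 0, \quad g'' < 0,

where w_i denotes the well-being of individual i and g is strictly increasing and strictly concave, so that a benefit of fixed size raises W more when it accrues to someone who is worse off. In the UAV scenario above, this is what grounds routing the robot toward the highest-risk zones even at some cost in total detections.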

False or excessive expectations and trust

The question of false or excessive expectations is discussed by Harbers and colleagues (Harbers et al., 2017), and is closely linked to issues of trust (Stormont, 2008).

Popular accounts of the failures and successes of robots, e.g. in news media, in science fiction literature and in movies, often distort public expectations of what robots are and what they can do. In the case of UAVs, for example, before they were widely commercialized, media focus on specific aspects of drone usage generated false impressions and ideas, e.g. that most UAVs were armed Predator-type drones, owned and operated by just a few countries for military purposes (Franke, 2013).

Misconceptions about robots may also derive from the fact that robots built to interact with humans often give the impression that they are more intelligent than they really are (Kwon et al., 2016). To describe the phenomenon that takes place when humans develop incorrect or unrealistic expectations about the capabilities of complex engineered systems, Kwon and colleagues introduced the term “expectations gap”. They pointed out that robots are built to have specific skills, while humans usually have a wide range of capabilities. Because humans have a tendency to anthropomorphize human-like objects, including robots (Lemaignan et al., 2014), they also tend to generalize human mental models to those robots (Dautenhahn, 2002) and may overestimate the robot’s actual range of capabilities, at least initially. Human tendencies to misattribute positive human characteristics to robots may result in false expectations and lead to misplaced trust, which can then quickly turn into disappointment and eventually mistrust. If they interfere with teamwork efficiency, false expectations and misplaced trust can have dangerous consequences, particularly when robots support safety-critical tasks, as in the case of search and rescue missions (Groom & Nass, 2007). These factors provide a possible explanation for the reputational issues and lack of human confidence that characterize autonomous robots (Stormont, 2008).

Labor replacement

Carlsen et al. and Harbers and colleagues suggest that search and rescue jobs could eventually be taken over by rescue robots (Carlsen et al., 2015; Harbers et al., 2017). Both papers report that this concern was raised during workshops with stakeholders, specifically referring to fire-fighters. One current view is that occupations vulnerable to robotization are those that are intensive in routine or predictable tasks, and that fire-fighters, who generally represent a substantial proportion of search and rescue operators, score low on such a vulnerability scale (Owen, 2020). Nonetheless, ensuring that rescue robotics is both innovative and beneficial will require a clear understanding of its societal impact, with a goal, among others, of prioritizing innovations that complement rather than replace rescue workers.

Privacy

The goals and capabilities of rescue robots entail increased information gathering, which can create risks of malicious data exploitation or simply lead to privacy loss for rescue operators, victims or other stakeholders at disaster sites, whose information is purposefully or incidentally collected. As discussed by Harbers and colleagues, this may concern personal information regarding physical and mental stress levels, images of victims’ bodies, or photos of people’s devastated homes (Harbers et al., 2017). In addition, as Brandão and colleagues note, data leaks can be generated by security breaches but also through correlations with observed robot behavior (Brandão et al., 2020). Therefore, although increased information flow is widely accepted as appropriate for emergencies and disasters (Sanfilippo et al., 2019), information flows across robots introduce new complexity and provide more opportunities for privacy infringements. Privacy thus emerges as a key human rights concern in relation to the deployment of rescue robots, requiring careful regulation and good practices.

Responsibility

Coming to the question of responsibility, two interesting concerns emerge from our review: first, responsibility assignment and, second, rescue operators’ potential worry that the presence of robots during rescue missions may increase the transparency of operations to their detriment. Regarding responsibility assignment problems, as Harbers and colleagues point out, the issue may become particularly complicated when the rescue robot has some degree of autonomy, self-learning capabilities or is capable of making choices that were not explicitly programmed (Harbers et al., 2017). According to many scholars, humans should always be responsible for what a robot does (Coeckelbergh, 2020). Indeed, most global initiatives focusing on ethics in robotics and AI state that autonomous systems should be auditable, to ensure that designers, manufacturers, owners, and operators are held accountable for the technology or system’s actions and are considered responsible for any harm it might cause (Bird et al., 2020). In a different view, the possibility of extending legal personality to robots has been proposed as a mechanism that could be used to apply directly to robots obligations that currently apply only to individuals and legal persons such as companies (Fosch-Villaronga, 2019). A European Parliament resolution of 2017 thus suggested that the status of “electronic persons” might be conferred on the most sophisticated autonomous robots. The proposal, however, was opposed by scholars who argued that conferring rights and personhood on robots would be “morally unnecessary and legally troublesome”, as difficulties in holding such electronic persons accountable would outweigh any moral interests that such a legal fiction might protect (Bryson et al., 2017). Overall, while there is wide agreement that accountability, liability, and the rule of law must be upheld in the face of new technologies (European Group on Ethics in Science and New Technologies, 2018), how this should be done and how responsibility should be allocated when it comes to robots with increasing autonomy remains to be defined (Muller, 2020). One way forward, it has been suggested, might be to gather more qualitative and quantitative data in order to better understand how likely any harms connected to robot deployments actually are, and whether such harms would justify the implementation of specific measures (Fosch-Villaronga, 2019).

Turning to the question of what data gathered by robots might reveal about rescue operations and the associated concerns rescuers may have: in many legal systems, both professional and volunteer rescuers are shielded from exposure to liability during operations provided they discharge their duties reasonably and in good faith. Against this background, it would be interesting to explore the views of responders who could be involved in missions alongside rescue robots, to better understand any concerns they might have about the information gathered by robots during missions, and about how its sharing might affect them, e.g. in terms of the perception and self-perception of their professionalism and role within society.

Safety

The account of safety given by the authors of the papers in our review mainly centers on the risk that technical failures or malfunctions will lead to collisions between rescue robots deployed in operational areas and persons on the ground, both responders and victims, and to the resulting injuries. Harbers and colleagues, however, also point out that robots can interfere with victims’ well-being in other ways, noting that the experience of being, for instance, trapped under a collapsed building, wounded and afraid, and suddenly confronted by a robot may be terrifying (Harbers et al., 2017). This suggestion resonates with the concerns set out by van Wynsberghe and Comes in their recent paper on ethical considerations about humanitarian drones (van Wynsberghe & Comes, 2020), which emphasizes safety concerns having to do with the behavioral, psychological and physiological wellbeing of people experiencing robots.

Conclusions

Taken together, the quantitative and qualitative findings of our scoping review show that, while the ethical concerns in rescue robotics are underexamined in the literature, the papers we identified uniformly endorse a proactive approach to handling such ethical concerns and display an awareness that ethical considerations need to be taken into account before rescue robots become ubiquitous in disaster settings.

As well as providing more in-depth analysis of the issues raised by the publications included here, future research should consider other ethical considerations that might be influential. Findings from van Wynsberghe and Comes's recent study on ethical concerns associated with the use of humanitarian drones, for instance, suggest that dignity, deskilling and informational transparency deserve attention (van Wynsberghe & Comes, 2020). In addition, more qualitative work is needed to explore the views of experts and professionals in the search and rescue robotics domain in order to move from hypothetical scenarios and simulations to understandings of lived experience with different types of robots in different contexts. Combining the results of normative and empirical research in the ethics of rescue robotics will clarify the issues that rescuers face when deploying robots in disaster scenarios, while at the same time facilitating the development of practical decision-making tools and empirically-informed guidelines for such deployment (Ienca et al., 2018).

The results of our study should be considered in light of certain limitations. First, new records in the academic literature will have been published between the time we concluded our review and publication of this study. Second, as we only selected papers published in English, we may have missed data, analyses and reflections reported in other languages. Finally, we did not include the grey literature, so we cannot rule out that relevant websites, reports, theses, and other documents exist beyond the publications we identified. Despite these limitations, by offering a comprehensive view of the current literature on the ethical concerns associated with the use of robots in disasters, this scoping review provides a helpful starting point for further exploration, analysis and reflection.