…my concern here is with why we aspire to devise and construct machines that have a face, when the world is filled with faces—Niklas Toivakainen.

Deceitful deception of god—

What mortal man shall avoid it?

With nimbleness, deftness, and speed,

Whose leaping foot shall escape it?

Benign and coaxing at first

It leads us astray…

Aeschylus, The Persians

Introduction

The Onset of Humanlike AutomataFootnote 1

The rapidity with which aspiring manufacturers are developing humanlike robotics warrants timely ethics and policy discussions. Humanlike robots are generally geared toward performing social tasks (Ruijten et al. forthcoming). Such entities pose ethical and policy challenges which may differ according to kind (for example, see Miller 2015). I mention three types of social robots according to structure rather than designed role, which I discuss later. Two films illustrate two of these structural kinds. In Alien, the ship Nostromo’s science officer, Ash, whom the crew assumes to be human, turns out, upon his violent demise, to be a mechanical android. In Ridley Scott’s next movie, Blade Runner, the character Roy Batty and his friends are “replicants,” maximally humanlike beings designed by the tycoon Tyrell to live only about 4 years and used as slaves. While the film leaves their nature unclear, they appear to be fully biological. Between these two kinds lies the broad class of cyborgs—part-mechanical, part-biological.Footnote 2

This article focuses primarily on rights issues arising from the Ash variety of social robot—“mechanical” humanlike beings—and examines the policy challenges they engender. It gives the fully biological, such as Roy Batty, only brief consideration.Footnote 3 Current research focuses on the mechanical variety. The rights such beings may be due, as well as the human rights of the parties involved in social-automaton use, pose enough philosophical challenges for one article. These parties include the person for whom the social automaton is directly applied, this person’s family, the device’s owners and the institutions involved, the device’s developers, designers, and manufacturers, and other stakeholders. Indeed, authors have recently begun investigating the challenges to rights, even specifically human rights, of the parties involved in social-automata and related AI use. The authors include Miller (2015, 2017), Darling (2012), Sandewall (forthcoming), Risse (2019, on AI broadly), Coeckelbergh (2012, on deception), Gunkel (2012), and Niemelä et al. (forthcoming, which speaks to rights issues without specifically naming them as such). Such inquiries testify to how pertinent the ethical and human rights issues arising from this rapidly expanding technology are.Footnote 4 Within that set, this article focuses on the rights of those persons who are to receive the care of the social automaton.

Certainly, the R&D programs for social robotics are for the most part benignly motivated, aiming, for instance, at caring for the elderly (Sharkey 2014) or bedridden or at serving as companions for the lonely (Hauskeller 2014). Those benign intentions, though, attest to the subtlety of the automaton’s drawbacks, notably in terms of user rights, as I come to below. While focusing on nonbiological automata leaves a significant gap in the discussion, I hope at least to suggest a way to deal with the particular moral and policy concerns raised here.

The article’s strategy is first to place the ethical concern over humanlike automata use within the broad ethical context of whether we need, and whether we ought to build, such entities. I review the most prominent and pertinent literature on automata caregiving ethics, especially in view of the vivid R&D prospect of increasing elderly-caregiving automata’s humanlikeness. This prospect calls for introducing the potential maximally humanlike automaton (MHA) and considering how that prospect may motivate further ethical and, eventually, human-rights concerns. A pivotal bioethics question, given the shortage of human caregivers, is: Why are social services for elderly care insufficiently available? The answer points to social values and organization, which, it turns out, underlie the motivation for increasing caregiving automata’s humanlikeness. To embrace these values unquestioningly leads to a basic oversight, that arising from expediency, by which maximal humanlikeness may appear ethically permissible. The exacerbating factor, I contend, is that such maximal humanlikeness in a caregiving automaton inherently deceives the person who receives the MHA care. I suggest that somewhere along the scale of increasing humanlikeness, the deception intensifies, despite supposed Mori effects for some persons. Such deception is at the heart of the human-rights concerns about humanlike caregiving automata: specifically, fraud’s violation of human dignity and other violations of human rights to be discussed. Finally, given such potential violations, the call is for policy protecting persons thus potentially violated.

General Ethical Concern in Constructing Humanlike Automata

Automata approaching the human in resemblance and in behavior such as autonomy and emotion have been proposed for a wide variety of purposes. These include one area that has received marked philosophical discussion in the literature, namely military combat (Gertz 2014, Galliot 2016). Other areas include care of the bedridden or elderly (Sharkey and Sharkey 2010, Sorell and Draper 2014), children’s supervisors (Sharkey and Sharkey 2012), and teachers (Sharkey 2015). Areas that may at first seem relatively frivolous to some observers but are being seriously developed include companionship, especially romantic and sexual partners (Hauskeller 2014).

I focus primarily on humanlike automata for elderly care, seemingly the most benign of these applications, although this seeming benignity creates subtle problems. These problems extend to the depths of contemporary societies’ organization and values and raise the question of whether such automata may manifest human-rights abuse because their use reveals latent abusive values; that question constitutes the body of this article.

The most basic ethical issues here, then, are the interrelated questions of whether we humans need to construct social automata with ever-increasing humanlikeness and whether we should. Need and “ought” interrelate in that a significant enough need can make a difference in whether one positively ought to. By contrast, insignificant need may tilt the balance toward ought-not-to, given the significance of the moral problems caused. (Consider soft drinks, whose containers compose much of the large oceanic islands of plastic: the fact that we do not need to drink them can strengthen the argument that we ought not to produce them.) As will be seen, the human-rights concerns grow out of these general ethical issues of whether we need or should develop such automata, given the pervasive social values and organization. The remainder of the article after the next section aims to bridge these basic ethical issues to those of human rights and possible policy protections.

Literature on Automata Caregiving and How It Points to the Ethical Concerns

The bioethics and technology-ethics literature on automaton caregiving has largely focused on caregiving at contemporary levels of engineering—not necessarily humanlike levels—and on specific care tasks. At this juncture it is pertinent to review this literature to help pinpoint where this article’s concerns lie relative to the relevant ethics literature and what this article, by contrast, aims to achieve.

In 1989 robotics designer Joseph Engelberger projected extending robotics from the factory to the home (van Wynesberghe 2013). By the early twenty-first century, the prospect of deploying automata to care for elders motivated designers and philosophers. For most of human existence—until the twentieth century—such care had been a home duty (Ting and Woo 2009). Since then has appeared a torrent of articles and books on the ethics of automated medical and elder care. These range along the ethical spectrum from enthusiasm for the moral opportunities for care (Sorell and Draper 2014) to moral skepticism about such applications (Sparrow and Sparrow 2006, Toivakainen 2016), with plenty of commentaries in between, cautiously condoning certain limited, well-monitored applications (Sharkey and Sharkey 2012, van Wynesberghe 2013, Santoni de Sio and van Wynesberghe 2016).

Sharkey and Sharkey (2012) propose six major areas of ethical concern for elder-care automata: the cared-for’s reduced human contact; objectification and loss of control; loss of privacy; loss of liberty; the cared-for’s control over the automaton; and possible deception in the deployment of the technique. Building their ethical case upon a human-rights understanding of ethical care of the elderly, the authors find that in certain areas of application—assistance, monitoring, and companionship—some elder care may be done ethically via automata. Developing guidelines and legislated policies, consulting with the cared-for elders, and engineering automata with value-sensitive design may help increase these elders’ autonomy, dignity, and human social interaction. Van Wynesberghe (2013) suggests specific care values to be incorporated into the machine’s construction as a way of ensuring such techniques’ ethical deployment. Similarly to Sharkey and Sharkey, Santoni de Sio and van Wynesberghe (2016) consider specific applications of elder care that automata may do ethically, while humans more ethically handle other duties. This nature-of-activities approach can help single out “goal-oriented” maneuvers—those involving a quantifiable end to be reached—which automata may handle best. By contrast, activities whose value lies in the very doing of them—the practice itself, such as telling a story or playing a game—are ethically left to the human to do.

Childcare automata, such as nannies, teachers, and daycare workers, have also evoked a strong response from ethicists, little of it favorable. Sharkey and Sharkey (2012) argue that empirical studies of child development suggest that automaton nannies, if used steadily, may lead to pathologies in attachment and relationships, among other ethical concerns such as privacy infractions, deception, restraint, and accountability for negative outcomes. Sharkey (2015) and Toivakainen (2016) contend that automaton teachers, even if they effectively win some schoolchildren’s attention, call into question the underlying social and pedagogical conditions that need correcting; inserting ad hoc techniques never grasps the underlying problem.

Increasing Humanlikeness

These critical examinations of care automata for children and elders have generally been considered in light of current developmental levels of automata, which for the most part still readily look like machines. However, in some areas of caregiving, automata are being made increasingly humanlike in responsiveness, voice, sympathetic affect, and facial expression. It appears that human facial expressions evoke certain emotions (Bronson et al. 2013) via the fusiform face area (FFA) of the brain. While debate continues as to how universal emotional responses to facial expressions are (Ekman et al. 1987, Ekman 2006), it is quite plausible that automata with humanlike facial expressions may, for example, help diminish invalids’ loneliness. In light of facial expression as a form of social communication, a loved one’s caring response to stress may elicit in the patient the type of positive, stress-allaying responses that can improve the cared-for’s wellbeing (Kalat 2013). Observers anticipate that designers will continue striving to make certain kinds of automata—particularly caregivers—more and more humanlike: physically, mentally, emotionally, morally (Hanson 2011). If investigations into the mounting issues of caregiving automata at the present level of the techniques are on track, the more humanlike the automata, the more intense the ethical concerns may be (see Ferrari et al. 2016).

The Prospect of Maximally Humanlike Automata: a Tool for Viewing Ethical Concerns

This prospect of increasing humanlikeness is one motivation for this article and for why it turns to the concept of maximally humanlike automata (MHA). An MHA may best be considered as one that has passed a kind of modified Turing test (MTT) for humanlike automata: a panel of (non-technically expert) witnesses, perhaps as an audience before a stage, would not distinguish the automaton from a human after observation over some time, say, 2 h. Both appearance and behavior, including emotions, would be assessed. Furthermore, the automaton is not cyborg or biological but purely mechanical (pace readers who hold that there is no clear distinction between the “mechanical” and the biological).Footnote 5
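For readers who want an operational gloss on the MTT’s pass criterion, the following minimal sketch (in Python) is purely illustrative and is not drawn from any existing test protocol; the panel structure, the two-hour observation window expressed in minutes, and all function and variable names are my own assumptions.

```python
from dataclasses import dataclass

@dataclass
class WitnessJudgment:
    """One non-expert witness's verdict after observing the candidate."""
    judged_human: bool          # did the witness take the candidate to be human?
    observed_minutes: int       # how long the witness actually observed
    assessed_appearance: bool   # appearance was part of the assessment
    assessed_behavior: bool     # behavior, including emotion, was part of the assessment

def passes_mtt(judgments: list[WitnessJudgment], required_minutes: int = 120) -> bool:
    """Illustrative pass criterion for the modified Turing test (MTT):
    the automaton passes only if every witness, after a sufficiently long
    observation covering both appearance and behavior, fails to distinguish
    it from a human (i.e., judges it human)."""
    if not judgments:
        return False
    return all(
        j.judged_human
        and j.observed_minutes >= required_minutes
        and j.assessed_appearance
        and j.assessed_behavior
        for j in judgments
    )

# Example: a three-witness panel, each observing for two hours.
panel = [WitnessJudgment(True, 120, True, True) for _ in range(3)]
print(passes_mtt(panel))  # True under this illustrative criterion
```

On this reading, the MHA is simply the limiting case in which such a panel verdict would be unanimous for any reasonably composed panel.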

Such a concept of MHA serves as a limiting factor to test the technologies’ increasing ethical challenges, and their possible solutions, as these devices become more humanlike. This approach to evaluating moral and political ramifications of a set of technologies is similar to a thought experiment—but not purely so. Many thought experiments are entirely fantastic, such as Putnam’s Twin Earth scenario and its H2O-lookalike compound XYZ. In the case of MHAs, some technicians have expressed a strong desire to build such a device and plausibly maintain that such is a realistic prospect (Hanson 2011). Some doubters may object, but we can safely say that neither doubters nor visionaries yet have solid proof favoring an MHA reality or impossibility. Thus, this “thought experiment” could concern the actual granting of moral status to machines, as well as policy development. The MHA approach, then, is not a mere thought experiment: it also anticipates philosophical response and debate about a non-fanciful possibility.

Such anticipation, then, has a practical aspect, in that it can begin discussions well in advance, forming a second reason for using what is now a seemingly extreme case, that of the MHA. A third reason for the MHA approach is that, even if no true MHA appears in the next couple of hundred years, if ever, it can stimulate debate on both philosophical and practical issues pertaining to robotics, giving perspective on current automata developments as these approach proto-MHA levels of sophistication. Thus, if it is the case that an MHA would merit moral status commensurate with that of humans, then where along the spectrum of development between current automata and an MHA would a device not merit such status? The reverse approach would be awkward at best, if not unhelpful. If the current most humanlike automaton clearly does not merit human moral status, then when would it? Either one would have to pick arbitrary points along the R&D spectrum, such as “When the device has a humanlike sense of humor” or “When it develops its own ambitions for its life ahead,” to gauge progress in moral status; or one would revert again to MHAs and say, “When it is undetectable from a human.” That is, MHA status would be an underlying, tacit gauge for determining the next new, increasingly sophisticated device’s moral status. The MHA is not only a logical but a natural standard against which to ascertain an automaton’s moral status.

Whether MHAs would be as useful as specialized non-MHAs is moot. As the discussion has evidenced, some engineers are already pursuing MHAs, however much of a waste of funds such a device may be. Denying the possibility of a technology that many people are dedicated to realizing has poor support from the history of technology. For the purpose of this limiting factor, the MHA, this article stays open to the possibility that the closer that artifacts come to maximal humanlikeness, the more that moral concerns, such as those for human rights, arise.Footnote 6 Because the ethical concerns involve the wellbeing of many large groups of people, such as children and elders, possibly with infringements on their human rights, it is time to start investigating and proposing policies regarding care automata, as Sharkey and Sharkey (2010) suggest even for current techniques’ levels. If it is incumbent on ethicists, industrial and institutional developers, and policymakers to begin policy discussions now, one can only imagine how much more pressing international policy would be as the artifacts approach maximal humanlikeness.Footnote 7 Similarly, in cloning research, Dolly the sheep may be considered the development that spurred intensive ethical debate about cloning technique applications, with human adult-cell cloning (HAC) serving as a kind of limit. The policy result of these discussions was an international ban on human adult-cell cloning (United Nations 2005). In the meantime, other levels of cloning, such as therapeutic cloning research, have been legally permitted to various degrees in different countries, such as the United States (Patients First Act of 2017 (HR 2918, 115th Congress) 2017), while others, such as Canada (Baylis 2004), have been more stringent. Preliminary prospects for international regulation of MHA construction form this article’s ultimate object.

Niklas Toivakainen (2016) holds that we—those concerned in any way about technologies and society, especially designers—ethically should ask ourselves whether a new mechanical technique is always the best way to attempt to solve a human-caused problem. Does the technique avoid or merely defer the problem, possibly exacerbating it and creating new ones? My analysis of MHA, to follow, takes a similar inquiring approach, although not via a particular ethical outlook. For issues dealing with some of the gravest moral matters—how we treat our children and elders, who are among the largest groups of highly vulnerable persons—sensitivity to nuance is required, in light of human rights for groups who need special legal and rights protection that is not yet well spelled out (Sharkey and Sharkey 2010, 2012).

Ethical Prospects for MHA Elderly Caregiving

The prospect of maximally humanlike automata promises to fill vitally needed gaps in available social services in elderly care, as well as child and medical care. (See Sparrow and Sparrow 2006 for such needs in terms of current-level robotics.) Generally, and concerning elderly care specifically, there has been increasing attention to the sociopolitical and bioethical challenges posed by the rising percentage of world populations over 65 (see Tomszyk and Klimscuk 2015 for statistics). This literature extends from appeals to more personal, individually motivated action in handling one’s aging (Baars 2012) to more sociological or policymaking perspectives on how society and polity may respond (Tomszyk and Klimscuk 2015). What is apparent across this range is that demographic and gerontological facts are undergoing such rapid shifts and growth that it is especially hard for policy and for institutions, private and public, to reconcile all these changes adequately. Among these institutions are the facilities that provide direct elderly care.

I find the most pertinent empirical bioethics question here to be: Why are social services for elderly care insufficiently available? Two demographic issues point to empirical causes. One is that, certainly, with increasing health improvements over the past century and a half, such as clean water, sanitation, vaccination, and lowered infant mortality, more people are living longer. Many live past the point where they can function without steady attention from another person (Sparrow and Sparrow 2006, Sharkey and Sharkey 2012). Generally, these elderly persons are not as economically productive as they were before, and for most of them pensions and related funds are minimal. Compared with what the cared-for can afford, the projected cost of adequate care is too high. Few in the labor market are willing to take on work at rates the cared-for can pay. This reluctance understandably leads to a scarcity of careworkers.

A second demographic fact is that, due to high geographical mobility within the labor market, family members are commonly scattered across the world and cannot, with some exceptions, be in close enough proximity to the elderly relative to give proper care. Further circumstances include impatience and lack of motivation for caring for elderly relatives. The ethical concern underlying these demographic facts is that contemporary, particularly industrialized, societies have come to be structured economically, culturally, and socially so that a preponderance of elderly citizens cannot receive due care as their human moral status would ordain. (See Sharkey 2014 for extensive discussion of human dignity for the elderly in terms of automata care.)

Benignly, parties concerned with such moral issues propose filling in the care gap with automata. These, if properly programmed and trained, might provide better service than human workers. The next question of bioethical import is whether this route is indeed, ethically speaking, the best for the elderly and others under steady care. Commentators have even suggested building automata according to a design that makes the devices sensitive to the psychological and emotional needs that elders under care often have (van Wynesberghe 2013, 2016; Poulsen et al. 2018). While Sharkey (2014) and Sharkey and Sharkey (2012) notably cover this issue, I shall build upon their concerns via the MHA angle.

The elderly are in a situation that typifies the problem, as I aim to make evident. Indeed, as van Wynesberghe (2013, 2016) and Poulsen et al. (2018) bring to the fore, taking the elderly’s standpoint can help in product design. Further, it could help answer whether an MHA would be the optimal method of care for many kinds of needs. Consider an elderly resident of a care center being introduced to a new care provider, an MHA.Footnote 8 The resident sees a being who looks and acts just like a human. Given that new technologies are often in the news, the caregiving team’s decision partly concerns whether to tell the resident that the new caregiver is a mechanical construction. The central moral and human rights question in this article is: Is not telling deceptive to the point of moral concern?

Some commentators have taken into account the moral problem of possible deception in employing care automata in child and elderly care. Sharkey and Sharkey (2012) conclude that deception should not be a problem if the cared-for is provided sufficient information and perhaps consultation about preferences. However, for MHA caregivers, this approach to minimizing deception may not apply. The general industrial tendency toward MHA spurs the question of why designers should create care automata ever more like Homo sapiens, beyond the mere intellectual challenge.Footnote 9 For the cared-for’s sake, the most reasonable response would be that increased lifelikeness improves the cared-for’s favorable response (similarly with pet-companion automata; see Darling 2012). The assumption is that humans, perhaps inherently, respond by anthropomorphizing devices made to imitate animals and humans, so the more humanlike the automaton, the more intense the positive response. Yet this endeavor would mean ever more “fooling” certain brain responses, such as the FFA mentioned earlier. The concern is whether this ongoing engineering tendency, as it comes to play ever deeper into nonconscious brain activity, does not at the same time undermine its own aims by arousing the cared-for’s suspicion of being played upon.

In approaching this key question, we should ask: Why not tell? The device developers and deployers may be concerned that disclosing the truth could cause the person unnecessary unease. In such a circumstance in the indefinite future, this MHA would likely not be the first humanlike automaton to be deployed and hit the news; development of such an entity would take time. It would also likely be widely known that humanlike (perhaps not maximally) automata have been used for caregiving. Taking the chance of not telling could attenuate trust between cared-for and caregiver. In this context, not telling could be deception, a crime of omission. Telling, then, would be called for, and this fact brings up the next issue, but first I note a related tactic: prescreening.

Potential users could be prescreened for use of a caregiving robot: Have the persons had extensive social interaction with automata? Do they have a moral objection to such a use? Would they be willing to try it once? Do they fear such machines or find them potentially deceptive? Does such use make the person feel ignored by individuals or by society? Prescreening could be done before informed-consent procedures. The elderly persons under care who do not pass the proper screening measures would advisedly not move on to informed consent or receive an MHA caregiver.Footnote 10 Those who seem not to mind disclosure would be told the MHA’s identity upon the automaton’s introduction to the user.
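As a minimal illustration of how such prescreening might gate the informed-consent step, consider the sketch below. It is hypothetical: the question set mirrors the one just listed, but the field names, the pass/fail rule, and the function are my own assumptions, and any real screening instrument would need clinical and ethical validation.

```python
from dataclasses import dataclass

@dataclass
class PrescreeningAnswers:
    """Hypothetical answers gathered before any informed-consent procedure."""
    prior_social_interaction_with_automata: bool  # recorded, but not a gate in this sketch
    moral_objection_to_automaton_care: bool
    willing_to_try_once: bool
    fears_or_finds_potentially_deceptive: bool
    feels_ignored_by_people_or_society: bool

def eligible_for_informed_consent(a: PrescreeningAnswers) -> bool:
    """Return True only if no screening question flags a concern.
    Persons who object, decline, fear deception, or report feeling ignored
    would not move on to informed consent or receive an MHA caregiver;
    those who proceed would be told the MHA's identity at its introduction."""
    return (
        not a.moral_objection_to_automaton_care
        and a.willing_to_try_once
        and not a.fears_or_finds_potentially_deceptive
        and not a.feels_ignored_by_people_or_society
    )

# Example: a candidate with no objections or fears proceeds to informed consent.
candidate = PrescreeningAnswers(True, False, True, False, False)
print(eligible_for_informed_consent(candidate))  # True
```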

Now consider the prescreened user (of the latter group) upon such an introduction. The user observes a being that greets the person with a toothy smile and makes small talk. Picture the user’s further perspective of such a session (and possibly sessions to follow) and how user and device may see, feel, and evoke responses. Here the ontology of social automata creeps in: If the cared-for asks the caregiver to sit and have tea, the automaton must, to be truthful, either deny its capacity to drink tea or have built into it a way to have tea and cakes. In the former case, the cared-for may come to suspect that the caregiver is not human and may thereby suspect deception. In the latter case, the caregiver can sit and have tea. But deception persists because the device cannot really ingest tea, yet the taking of tea and cakes makes these appear to be ingested. (Or the designers may go to great lengths trying to make the caregiver really turn the food substances into energy, but this approach would still not be biological ingestion and would still signal deception.)Footnote 11

Similar complications would arise in other facets of the automaton’s design. To gather real memories of growing up, the device would, to be humanlike to the extent it does not deceive, also need to undergo growth. Such a design, to be accurately humanlike, would likely need actually to grow over some number of years, involving an economic factor at which many developers and sellers may balk. Other aspects of the device’s ontology facing such quandaries and introducing deceptions may include a need to sleep, shave, sneeze, pass water, get pregnant, and impregnate. In all or most of these cases, even if the cared-for passed the screening, express deception would be involved. Deception can be an ethical and, eventually, human-rights problem. First I answer some objections to the ethical concerns about deception in MHAs before beginning to develop the human-rights issues ramifying from this deception.

Five Objections to the Ethical Concern of Deceptiveness in MHA Construction and DeploymentFootnote 12

One: There Is No Deception Because the MHA Is Humanlike

The first objection asserts that we need not worry morally about deceiving customers or users—the cared-for—because by definition an MHA would be indistinguishable from a Homo sapiens. In response, I note this objection rests on implausible characterizations and expectations of practicality and probability. Assume for discussion that there is no prescreening of potential users as the previous section hypothesized. News of such devices would be in the air, spurring questions from the cared-for, such as “Is this one of those careworkers that look like a person but really are not?” Even more, if news of the devices is not in the air, we would have another moral worry: news suppression, sparking Mill’s (1952) concern about controlling the marketplace of ideas and the deterioration of criticism’s role in bettering society. Efforts to suppress news about the devices would signal an attempt to hide, hence deceive, if not defraud. Furthermore, accidents could occur (as happened with Ash in Alien), revealing the automaton’s nature. The MTT and the device itself may not be infallible.

Two: There Is No Deception if the Cared-for Cannot Detect It

A second objection is that if the cared-for do not detect the deception, then such deception is of negligible moral concern. I respond: Market deceptions of consumers, despite what commentators such as Friedman (1970) and Carr (2008) contend, may be innocuous in isolated cases, but in others may stretch to the point of fraud, even if not always so designated legally. One common outlook may be that deception in business transactions is less objectionable than interpersonal deception because the market is considered to be somewhat caveat emptor, albeit with government statutes protecting consumer rights. However, even within business transactions, deception may be considered a tiered transgression, from trivial levels (say, overly puffed bags of crisps with little inside) to extreme levels such as airplanes with dangerously faulty engines.

By contrast, care of the vulnerable—the very young and the elderly—has a moral dimension beyond a mere business transaction: Insofar as this moral dimension reflects our valuing of the human beings in our care, it reflects our very valuing of ourselves qua human beings (perhaps the basis of all value among humans; see Tronto 1993, Held 2006, Miller 2013). Being cared for, no mere business transaction, is a basic, inextricable characteristic of human life, especially among the youngest and oldest. Few would make it to adulthood without care from others. Certainly, caregiving is often monetized, as in health care or school-teaching, although without consistently reflecting the underlying value of human life. (Thus, child-care workers’ pay represents the discrepancy between their monetized value and the value of human life itself embodied in our children.) Potential deceptions in child or elderly care generated by MHA careworkers reach beyond fair business transactions: they concern the degree to which certain parties may allow deceptions of our most vulnerable members and in doing so deceive themselves. That is, if one perpetrates a deception, one is either lying or has deceived oneself into believing there is no deception. Thus, collectively deceiving a very large class (our most vulnerable) is not merely collective deception but, at the least, collective self-deception or, at worst, widespread lying.

Three: Any Possible Deceptions Would Be Insignificant

The third objection maintains that any deceptions that may eventuate in design or deployment are too insignificant to be of moral concern, even if deceptions are additive. Thus, an automaton’s design with attractive polymer teeth, if a “deception” at all, is no more morally significant than a three-foot-high robot with nothing like a human face but with two (mechanical-looking) legs and arms. Furthermore, adding a winning smile, a capacity to eat, nano-engine-driven “digestion,” and ersatz nano-engine metabolism would not render the automaton sufficiently more deceptive to be of moral concern.

Response to this third objection requires ontological considerations of the automaton’s design and issues of design motivation. In the history of technology, humans usually fashion a tool to facilitate an activity or to make possible an activity that would be impossible without the tool.Footnote 13 Automaton proposals so far have been for projected tasks, such as caring for elderly persons. Reasonably, then, it follows that a caretaker MHA is designed to facilitate caretaking. Perhaps one would even be designed for caretaking tasks that would be otherwise impossible. Designing the automaton to resemble humans in appearance and behavior would presumably make the task feasible or easier. The automaton would, then, be designed to appeal to certain responses in the cared-for’s brain, such as the FFA mentioned above, which responds specifically to particularities of human behavior and appearance. The issue reaches beyond deception into manipulation. But this manipulation (unlike that of drugs’ “deceiving” the immune system to effect a cure) involves the sense of being accepted, loved, and vitally a part of the social fabric. Assume that the cared-for can be socially rehabilitated via such MHAs, even when apprised of the automaton’s nature. Then there is indeed a conceivable, if highly complicated, way of bringing elderly cared-for persons back into society.

This conceivable route, though, is implausible in the current context of automaton elderly care, which works within the current institution of end-of-life care. The moral issue, then, reaches beyond deception and manipulation into the contingent social attitudes about cared-for persons of all kinds and about human life itself, as well as into the caregiving jobs that reflect this attitude (which I come to momentarily). The ontology of such an MHA, designed to evoke a human response, is traceable to current socially contingent attitudes toward, and valuations of, human life, particularly that of the vulnerable. This attitude does not necessarily reflect careful attention to the best interests of the vulnerable, rendering the attitude a moral concern. To answer the third objection, finally: Deception is only a signal of a deeper social and moral problem in MHA design and construction.

Four: the Mori Effect

There is the phenomenon, first promulgated by the Japanese researcher Masahiro Mori (1970), known as the “uncanny valley”: Briefly, by this phenomenon, people at large, when introduced to a device such as an automaton that resembles humans in aspect and behavior, become disgusted. In time, though, they acclimatize and habituate to the humanlikeness and better accept the facsimile. A similar effect would likely happen with the MHA. In time, people would lose any initial disgust and accept the device as acceptably human.

Two problems beset this objection. One is that the issue at hand is not disgust but deception. The MHA would be deceiving patients and users, no matter their level of disgust. Certainly, they may habituate and lose their disgust, but that fact does not mean they are no longer being deceived. People may habituate to all sorts of conditions that are not morally correct to impose upon a person. Second, at this juncture we cannot be confident that the Mori effect would not reverse at some point in the progression, whereby the closer the device comes to being an MHA, the more patients and users turn against the facsimile.

Five: We All Voluntarily Accept Deceptions

When we read novels or watch movies, we voluntarily let ourselves be deceived. We need not, then, be concerned about an elderly person’s being deceived by a caregiver that is an MHA.

Response: It is implausible that the so-called “suspension of disbelief” that readers and viewers allow when enjoying a work of fiction entails a deception comparable with that involved in an MHA’s deployment for duty to an elderly person. Few sane persons would confuse a character in a movie with a living human being, even if, in Aristotle’s terminology, they undergo catharsis in sympathizing with the character. Neither the film director nor the viewer credibly avows that the representation of Abraham Lincoln in a biopic is the flesh-and-blood, historical Lincoln.Footnote 14

A patient or cared-for person who volunteers to be cared for by an MHA, with full knowledge of the caregiver’s nature, may still be said to be deceived—if not minding being so: The MHA, as this article argues, is constructed in such a way as to trigger responses in human beings like those triggered when a person encounters a flesh-and-blood human being, while it is not of the same nature. In this case, with the deceived patient properly informed, the person is indeed being duped voluntarily. But the parallel with novel-reading is unfounded because, as just contended, novel-reading is generally not a matter of deceit.

The possibility remains, though, as described earlier, that a cared-for elder may end up not being properly informed of the caregiver’s nature, so there is no voluntariness in the deceit. The person is simply plopped into a deceitful situation. (Such a situation further calls for policies—work on which needs to be started in a timely manner—ensuring that institutions circumvent such problems.) Whether voluntarily or involuntarily deceived, the cared-for are deceived by a device that is produced because problematic social values have shunted off and sequestered elderly persons. These values, as the next section contends, are in themselves creating an ethical and hence human-rights problem in this growing neglectful attitude and action toward the elderly cared-for.

Finally, the analogy to voluntarily being deceived in the moviehouse also overlooks the seriousness of the care situation compared with that of the movie. Moviegoing is playtime, done in 2 h, perhaps followed by a nightmare if it is a horror movie. The rest home is not playtime but the real world, with real-world ramifications that confront patients day in and day out for years or decades.

The Attitudinal Moral Problem in Care MHA

Sharkey and Sharkey (2010, 2012) and Sharkey (2015) have discussed deception and how it is not always a moral problem, as in reading novels and epic poems, where we willingly let our experience be affected. It seems that all one need do is inform the cared-for about the exact nature of the caregiving automaton and thereby circumvent deception. These authors also bring up the ethical matter of consent: The cared-for must willingly and with full knowledge and cognizance allow caregiving by an automaton, just as a patient must give informed consent to participate in an experimental new medical procedure.Footnote 15 However, as I brought up above, the moral issue stretches beyond deception, manipulation, and informed consent into conglomerated individual attitudes that translate into morally questionable social practices. These conglomerated social attitudes and practices, I contend herein, signify a vast, unethical misplacement of effort and treatment and warrant consideration of professional, possibly international, policy on the design, construction, and deployment of MHA in care applications.

Values and Attitudes Leading to Questionable Social Practices

The manifestation of the (underlying) moral problem in caregiving MHA is the prospect of making such devices more and more humanlike to evoke a target response from the cared-for. However, some of the motivation behind continually refining a technique such as humanlike automata may be other than a salubrious goal such as more precise care: Engineering in general often involves pursuing such refinement not for an overtly salubrious goal but because the research involves so great an array of puzzles to solve that these form the salient challenge (Shaw 2001). In the case of care-automata research, though, such a motivation is tantamount to the attitude now often decried in bioethics (of dehumanizing patients), as reflected in the infamous “the kidney in Room 333.” The cared-for becomes merely a benchmark for the researchers’ problem-solving capacity. Interesting as this attitude may be, it is not my main point of ethical criticism here.

Instead, the moral trouble lies in the attitudes, social practices, and values involving the vulnerable and, by extension, human life itself. Preliminarily, I point out how this attitude—which I describe momentarily—is reflected in the value ascribed to jobs caring for the vulnerable. Pay and career prestige do not reflect the value of tasks crucial to (human) life in and of itself. This lack of correlation is seen in the situation of many care workers, especially in infant and child care, early-years teaching, and elderly care. The literature, both academic and popular, is extensive on the monetary undervaluation of care work and often reflects the social devaluation of caring itself, the resultant malaise of care workers, and the ways this social condition may affect the cared-for. (See England et al. 2002, Raavi and Staab 2010, Miletic 2014, and Zillman 2015 for a small sampling of the extensive literature.)

As rational-choice research in philosophy and the social sciences has revealed, monetary value may not always reflect how an agent or society values a good; more complex economic factors may affect monetary value (Kahneman and Tversky 1979). However, an inexorable monetary valuation, one that stretches across national borders and decades and is manifest in many wealthy industrialized societies, starts to coalesce into a persistent picture: Wealthy industrialized societies, taken together, place less importance on care at the early and late parts of life than on popular music, movies, sports, financial investment, transport machinery, telephony, and computing, as reflected in the remuneration of the most prominent providers of these goods and services. The living are, as is evident in the care of the very young and the elderly, in a significant way valued less than goods which, at some point since their primordial origins, were a means to aid and support life but are now ends or goals in themselves, often valued more highly than living itself. Certainly, the large group between both ends of life values medical care as a means to health for its own sake, and many persons at either end of the lifeline also receive a great percentage of medical care. The ethical issue is that these two sizable groups, along with the chronically incapacitated, are as a whole shunted into care-ghettos whose monetary value—as revealed in the human caregivers’ remuneration—reflects a more general negligence in valuing by the society as a whole.

A tired but empirically valid trope stresses how, for hundreds of thousands of years, humans lived in bands of about thirty to fifty persons.Footnote 16 Children were cared for by much of the entire band. With high moral value placed on autonomy and individuality, group members who in industrial societies have long been considered unusual or even marginal, such as gay individuals, had equal social position, and elderly members were respected and very present in the band’s daily life (Endicott 1999, Ingold 1999, Lee and Daly 1999, Flannery and Marcus 2012). However, one need not sentimentalize these hundreds of formative millennia of Homo sapiens social and biological evolution to recognize that such a sense for, and need to, provide care for the vulnerable is likely a potent part of general human moral and psychological makeup. It is not surprising that many people today find it an outrage that the very young, elderly, and chronically challenged are shunted not just behind the foul lines but into the cellars of the social stadium.

The basic ethical issue here, concerning the justification of manipulations that permit deceptiveness or fraud in deploying MHA, is the current value prioritization that devalues the vulnerable and shuts them away. Following is a list of some highly prioritized values in contemporary industrial societies, in no particular order (see Miller 2013 for discussion of how these values are evidenced): ease and convenience, wealth accumulation, prestige, power over others, specialization, competition and competitiveness, and desire extension or insatiability (ceaseless striving after new desires, as seen in hyper-consumerism). In a vivid example of such prioritization, Connor and Mazanov (2009) conducted a 13-year study of athletes and substance abuse. Given the challenge of whether they would choose a drug regimen that guaranteed athletic success but death in 5 years, half the athletes studied would choose the deadly regimen, strongly indicating a prioritizing of competitiveness, prestige, and wealth above life itself. The worldwide cancer epidemic, often attributed indirectly to the high levels of stress hormones (cortisol and other glucocorticoids) in industrialized societies (National Cancer Institute 2012),Footnote 17 exemplifies the valuing of (stress-causing) competitiveness and wealth pursuit over the value of life itself. Increasing obesity in industrialized societies (Seidell 2005, World Health Organization 2009), with its attendant health threats, as well as global warming caused partly by intensive automobile use (Calkins 2011; Miller 2012) and partly by factories worldwide producing enormous amounts of products, all indicate a valuing of ease and convenience over that of life itself.

Empirical evidence indicates that these values now prioritized over life itself were not always so prioritized over the millennia of human existence, if they were valued at all. For millennia, forager societies disvalued all of the listed values, which appear to have entered human social valuing only after agrarian economies and cultures took root. (See Boehme 1999, Lee and Daly 1999, Flannery and Marcus 2012 for details of forager-band values, underscoring the great degree to which industrialized societies uphold contrarily prioritized values, even upholding them over life in and of itself.)

How the Problematic Social Values May Translate into Human Rights Abuse

Given that life in and of itself is currently tacitly and widely devalued—despite the enormous funds spent annually on medical care for the sake of values other than life itselfFootnote 18—it is unsurprising that the very vulnerable, who cannot contribute much to these peripheral values, become mere basic, indeterminate “stuff,” devalued and shunted away. The ethical and, eventually, political issue is that the vulnerable are at the mercy of those who control but devalue them.

This unfortunate situation is a formula for human rights abuse, however undeliberate. The fact that a controlling, devaluing group (those parties involved in deploying the devices) is already developing and beginning to deploy automata to take care of the vulnerable at the least raises a flag of suspicion demanding careful assessment of motivations. I am not referring to mere guilt-by-association but to the fact that any fix suggested by those who control their charges is likely to reflect the tacit value prioritization. I have shown how potential MHA caregivers would involve a questionable degree of deception explicable only by an intention to manipulate, itself justified by current, rife value prioritizations. While I brought up MHA as an extreme case, such automata point to the possibility that available—or soon-to-be-available—care automata likewise represent questionable motivations and value prioritizations. Those affected—the receivers (users) of such care devices—in their vulnerability may thereby already be partly victim to actions against their dignity, autonomy, and respect, if not life itself. Such techniques’ deployment leaves this group susceptible to human rights abuse.Footnote 19

Human Rights as They Relate to MHA Elderly Care

Constructing MHA, whether mechanically or biologically based, has the potential to violate the rights of the human recipients of MHA care or of the entities themselves. This article has broadly alluded to possible rights violations, and this section will describe how rights may be violated by elderly-care MHA. The prominent human-rights documents up to today, including the U.S. Declaration of Independence (1776) and Bill of Rights (1791), the French Declaration of the Rights of Man (1789), and the U.N. Universal Declaration of Human Rights (UNDHR, 1948), generally (not always explicitly) build upon the foundational rights of life, liberty, equality, property, personal pursuits (such as happiness), and security, including resistance to oppression. It is worth looking further at some of these concepts, as they ramify into particular concerns for elderly care.

Many of these concepts are, tantalizingly for such practical matters, quite ambiguous. “Life” at the least means staying alive but entails much more, such as all the other foundational rights (although probably not the “high life” that only the wealthiest can experience). “Equality” is equally ambiguous. “Liberty” in the negative sense of protection from others’ aggressions is a little clearer. But then security and resistance to oppression are entailed by this negative liberty. These foundational concepts, then, so interrelated, seem to be grasping for something underlying them all. A possible explanation for what this grasping seeks is something like—another elusive concept—human dignity. This dignity and the foundational rights it seems to underlie are not, and cannot be, concerned merely with individual dignity but with that of all, as individual dignity would be ill-defined without those others’ presence. Insofar as rights—even to “life”—arise only in the context of others, there would be no rights for a solitary person in the universe. Rights, one may say, concern the good of the species as much as of the individual human being. It is this concern of dignity underlying human rights that makes the inherent deception in MHA elderly (and likely child) care a possible rights problem. The cared-for’s vulnerability leaves them especially exposed to the degradation of their dignity.

Shue (1996) speaks of a concept that resembles these foundational rights: “basic rights.” These rights, as he explains, comprise the negative one of security and the positive one of subsistence. The former, security, is protection from others’ incursions; the latter involves the supply of goods so one may be sustained. Both rights are basic because, without them, all other rights cannot be experienced.Footnote 20 Rights, then, Shue contends, borrowing from Nietzsche but without adopting Nietzsche’s moral perspective, are there to protect the vulnerable or weaker from the invulnerable or stronger. Rights are “to provide some minimal protection against utter helplessness of those too weak to protect themselves” (18). This fact then harks back to the proposal that the individual has rights only in the context of others: Likewise, the vulnerable have rights only against the stronger, and the stronger need the weak in order to be defined as stronger (if all were “stronger,” there would be no standard of comparison to make them “stronger”). It is in this manner that the stronger, insofar as they have a duty to themselves, have a duty to the weaker. As Shue eloquently phrases the connection between rights and dignity: “It is only because rights may lead to demands and not something weaker that having rights is tied as closely as it is to human dignity” (14).

The weaker in society prominently include all children and many elderly people. The stronger include the parties responsible for the elderly person’s care, as well as the designers, manufacturers, retailers, and deployers of care technologies. The elderly especially, though adults, face naturally severe restrictions on their rights. That is, because of the condition of many, and similarly to medical patients, they by fact of nature cannot enjoy all rights. These persons are limited in positive liberty (UNDHR Article 3); in the fuller life of the stronger, yet possibly in equality before the law (Article 6) because of contingencies; limited also in privacy, family, home, and correspondence (Article 12); in property (17) for similar contingencies; in expression (19), assembly (20–1), non-coerced association (20–2), participation in governing (21–1), access to public services (21–2), work (23–2), fair remuneration (23–3), adequate standard of living (25–1), cultural life (27–1), and fulfilling community duties (29–1), among others. Insofar as these limits are due to contingencies of natural deprivations, it may not be possible to reconcile the vulnerable to full enjoyment of many rights. However, insofar as only those stronger than the vulnerable can help compensate for the deprivation of rights, it is incumbent upon the former to make the attempt.

However, what is crucial here for the rights of the vulnerable is that these deprivations are understood to be naturally caused. Hence they are passive deprivations. There are, in contrast, non-naturally caused—that is, humanmade—positive, active deprivations of rights. These include murder, theft, and fraud, to which elderly persons are particularly vulnerable. These are all cases of the stronger taking advantage of the weaker. Recall Shue’s proposal that both the stronger and the weaker need each other for there to be human rights at all. All of the perpetrations listed, being active deprivations inflicted upon the weaker, are antagonists of dignity, both individual and species-wide: As antagonists of dignity, they negate the very conception of human rights.

In this light, consider MHA caregivers. The very design of an MHA is to realize an entity that appears and acts like something it is not. Deception, as seen in the very design of humanlikeness in MHA, is fraud; it thereby degrades human dignity and so undermines human rights. In such a case, it is incumbent upon the stronger named above (the designers, manufacturers, and deployers), for the sake of human dignity and rights, to refrain from such deception. Whereas compensation (by the strong) for the natural deprivation of rights is contingent upon practical limitations (can the vulnerable obtain fair work or access public services?), responsibility for active, humanmade deprivation of rights is not contingent but falls absolutely upon the perpetrating parties. However, as rights violations have repercussions beyond the perpetrators and victims, extending to the whole society and world, rights policy is called for.Footnote 21

Regulative Policy

To this end, it is worth considering prospective international policy for the design and manufacture of such machines as MHA. The complex techniques and sciences that inform the development of automata in general and, were techniques so directed, of MHA specifically are pursued among international communities of scientists and technicians. Any policy, sanction, or moratorium would need to be international to have any teeth. Furthermore, the more that nations cooperate to oversee such developments, the less likely there would be leaks—say, in uncooperative countries—that affect the worldwide community.

International policy for overseeing and policing certain controversial or menacing techniques has salient precedent. Indeed, bans on the traffic of fissile nuclear materials and treaties on nuclear arms development have been widely viewed as necessary. And possibly, at least up to the present, these have had some effect in containing rogue development and deployment of these techniques (Schlosser 2014, Durcalec 2018).Footnote 22 However, bans on other dangerous materials, as in the smallpox eradication campaigns, remain controversial (Preston 2002). Some decriers of eradication contend the virus may be useful for medical research. At the least, the global medical research community has commanded widespread vigilance for such materials; such vigilance need not undermine their international containment. Human adult-cell cloning (HAC), also mentioned earlier, has been subject to bans by many governments and international institutions, including the United Nations (United Nations 2005) and the European Union (by the Charter of Fundamental Rights of the European Union, Title I, Article 3.2.a; see European Union 2012), as well as many nations, including Australia, Canada, India, the UK, and the USA, and many US states. Nonetheless, such banning is not universally favored.

Furthermore, there is a significant difference between the type of technologies involved in fissile materials development and the type involved in MHA, in terms of how to contain them. For the most part, fissile materials are discrete—materials are either fissile or not, with only a small gray area between. The MHA technologies, as this article has brought out, involve a wide gray area between what is clearly not humanlike and what is maximally humanlike. This difference does not mean that there is no way to monitor and control developments, merely that starting the monitoring process with an international policy imposed by fiat upon the global research community would likely not be efficacious. Hence, the most efficacious route would be through bottom-up, grassroots discussion and debate among researchers, industry leaders, and ethicists. Thus, there must first be promulgation of the problems and concepts involved, as this article’s discussion has evinced, among these practitioners. The process would then operate, preliminarily, at the laboratory level, next at the institutional level (university, private research center, company), then at the associational level. At each of these levels, where the problem of the gray area would need direct attention, the parties involved could begin bringing in possible policies. With some such policies in place, an efficient mode of nascent control could be peer pressure: when colleagues begin venturing beyond determinable and menacing levels of research, violators could be shunned. Policy proposals would be presented to state and national legislative bodies, or to relevant executive departments such as the National Science Foundation, the National Institutes of Health, and the U.S. Food and Drug Administration. International bodies, such as the World Health Organization and UNICEF, could then be brought directly into the process. Among the issues to be resolved would be delineating where in the gray area a minimally arbitrary line for control could be drawn. Further matters would comprise the most effective means of approaching violations, the recourse for violations, and how to encourage peer pressure as perhaps the most collegially accommodating method.

While the capacity to police adherence to these policies is limited, the widespread adoption of such international rules sends a strong signal to investigators and institutions worldwide: Such policy adoption creates a worldwide moral atmosphere in which this activity is perceived by peoples as illegal, unethical, and a threat to human rights. Certainly, it does not absolutely halt all practitioners, especially those who disregard global morals and human rights. However, a policy’s being international could help staunch the flow of research and communication that any complex scientific and technical development of questionable techniques would require. Currently, groups such as the Future of Life Institute are pursuing international policies controlling the development of artificial intelligence beyond a level deemed irrevocably dangerous. (See the institute’s website, futureoflife.org.)

Any international policy regulating the research and development of MHA could have an effect comparable to that of international policies controlling the development and spread of controversial techniques and materials. Campaigning for such a policy could be incorporated into the agenda of an extant technology-policy organization (such as the Future of Life Institute) or taken up by a new organization, preferably composed in part of practitioners in the fields of robotics and artificial intelligence, along with industry leaders. The goal would be to appeal to national governments and to international institutions such as the UN and EU to enact a new policy regulating the development of MHA.

Many problems and potential objections to such a program could hamper its gestation and progress. An obvious hurdle is not merely defining an MHA (even the MTT cannot be definitiveFootnote 23) but also drawing an effective and reasonable line between an MHA and standard, currently available automata. Thus, a part of the spectrum dubbed “almost MHA” may be just as problematic as the fully MHA—besides potentially being a wide slice of that spectrum. Another major hurdle is that leaders of industries may hope to profit from MHA R&D in which they have already invested. NGOs and ideologues who seek to foster all proposed complex techniques so as to usher in a new era may also pose significant resistance to regulating the more hazardous technologies. Nonetheless, national and international bodies have provided an example in developing policies on comparably, if not much more, hazardous techniques despite forces opposed to controlling the technology.Footnote 24

A further practical problem—at least if this article’s argument about potential MHA rights abuse ever influences policy proposals—resides in the fact that part of the argument criticizes widely held social values as the roots of the moral problem at hand: Many policymakers and their constituents may uphold these value prioritizations. It may be hard for those unaccustomed to critically considering their own value priorities to lay these aside, even for the sake of protecting the rights of the most vulnerable members of society. Even once the need for protecting these vulnerable groups is acknowledged, it may still prove hard for policymakers to concur that the problem stems from value priorities they themselves hold and to send the message to their constituents that they, too, hold possibly harmful value priorities.

One may object to such policy: Why not simply change the delinquent values that leave vulnerable persons’ human rights susceptible to abuse, instead of regulating R&D? After all, much as “it’s not guns but people who kill,” it is not the MHA that would be violating vulnerable persons’ human rights but the society that has the problematic value priorities. In response to this objection: it would be hard to disentangle the machines from the social values that call for their construction and deployment. A society that had more morally defensible values concerning its vulnerable citizens would lack the motivation for constructing such mechanisms. One may then, conversely, object that such a policy should not remain on the books indefinitely: If social values change in the meantime, why not lift the regulation? This objection only reflects what was just stated: If the social value priorities were adjusted so that there were no call for such machines, there would indeed be no need for the machines’ existence or their regulation.

Morally based shifts in policy have often demanded shifts in value priorities—abolition and women’s suffrage, for example, challenged many widely held values. (In some historical interpretations, even Lincoln was caught in a conflict arising from his differing and inconsistent values; see Nagler 2009.) If policymakers were at least convinced of the need to protect the world’s vulnerable from unfortunate social value priorities and human-rights abuse, the policy could stand as a new standard and, ideally, influence a reexamination of widely held social value priorities.Footnote 25