Toward the human-centered approach: A revised model of individual acceptance of AI

https://doi.org/10.1016/j.hrmr.2021.100856

Highlights

  • In Industry 5.0, the "innovation ventura" emerges from the re-conceptualization of individual AI acceptance in the HRM context.

  • Organizations adopting a human-centered approach have a strong orientation toward innovation.

  • Employees' AI acceptance depends on individual attitudes toward technology.

  • Individual attitude is influenced by personal traits spanning the rational, emotional, and cognitive spheres.

Abstract

The aim of this study is to understand how humans' acceptance of Artificial Intelligence (AI) affects human resource management (HRM). To this end, we propose an original conceptual framework based on the idea of sustainable growth driven by the interplay between AI and HRM. The current academic debate is overly concerned with the impact that future AI will have on business and society. One of the central aspects of the conversation is whether or not AI will replace humans in value-added activities. The study argues that humanoids are an amplifier of human potential, in light of a human-centered approach. In this vein, the present work reconceptualizes the tenets of society 5.0 by considering the category of the "innovation ventura", the evolution of the innovative enterprise in the coming AI landscape.

Introduction

The current study aims to reconcile two apparently antithetical spheres, human resource management (HRM) and the increasing pervasiveness of Artificial Intelligence (AI) technologies in business and society, by proposing a human-centered conceptual model. According to Gestalt theory, perceptions are more or less biased depending on their continuity and unity with the context. In our model, humanoids are deemed an extension and an amplifier of human possibilities, abilities, and capabilities. This original conceptual model addresses the long-standing debate on the negative consequences for HRM of the adoption of AI (Cubric, 2020; Pan, Froese, Liu, Hu, & Ye, 2021).

As culture and technology evolve, so does the approach to labor. Nonetheless, the fear of control associated with new technologies has always cast a shadow over their benefits for humanity. Thus, a brief recollection of the philosophical and psychological antecedents of the fear of the unknown is required to properly frame the issue of human-humanoid interactions. In movies and other forms of art, tributes to this theme are countless. In Spielberg's famous 2001 film A.I., a humanoid is pictured as a sentient being: a modern, steel-and-chips representation of Pinocchio, able to experience emotions such as love, to form meaningful friendships, and to collect memories. However, this romanticized version of humanoids is largely outnumbered by dreadful and dystopian representations of the relationship between humans and humanoids. That approach closely recalls the oldest human fear: the fear of the unknown, which, from a Freudian perspective, is essentially the fear of death. Such a fear has been described as the ultimate human fear (Carleton, 2016), and as the "individual's dispositional incapacity to endure the aversive response triggered by the perceived absence of salient, key, or sufficient information, and sustained by the associated perception of uncertainty" (Carleton, 2016, p. 31). This kind of fear drives irrational anxiety. It descends directly from the fear of death and includes, among its variants, the "fear of being destroyed" (Hoelter, 1979). In the case of humanoids, à la Schumpeter (1942), humans fear being replaced by them: they fear the annihilation (Sombart, 1913) forced by the destruction of human productivity (Marx, 1885) caused by AI. In psychology and literature, thinking about creative destruction entered the modern zeitgeist with Nietzsche (1968), who argued that a creator always annihilates the antecedent realm with his or her own work. Hence, the roots of this fear are inherent to the human being. In a nutshell, in the gale of creative destruction induced by AI, robotics is feared as though it were going to entirely replace humans, thanks to its superior abilities in job performance. Distinguished scholars, such as Friedman (1960) and Ford (2015), argued that the adoption of new technologies may cause job loss. Similarly, the sociologist Durkheim (1964) suggested that adopting new technologies may have a negative effect on cohesion and harmony among employees, and Spencer (2018) claimed that the intensive use of such technologies impoverishes job quality and destroys the bargaining power of workers.

Apparently, then, AI causes a form of anxiety in human resources (Li & Huang, 2020).

Clearly, the way AI is perceived influences its adoption. The consideration of factors that obstruct, inhibit, or foster AI adoption should thus inform HRM – the core domain in terms of impact – in preparing a strategic plan for the transition and incentivizing the change, whose fear is often hidden. Consistently, in terms of adoption and technology acceptance, a well-known literature explains the inertial behaviors of individuals, groups, and organizations toward change. For instance, according to Polites and Karahanna (2012), resistance to adoption occurs as a distorted and diminished perception of the ease of use and usefulness of an artifact. Hence, individual inertia is the pièce de résistance of individuals: it amplifies the effect of subjective norms on behaviors and stems from the preference for the status quo caused by numerous biases (Samuelson & Zeckhauser, 1988; Kahneman, Knetsch, & Thaler, 1991). However, acceptance and adoption also have cultural roots. In this domain, the influence of societal norms on subjective norms is also dramatic, as proved by the adoption of ophthalmic AI devices in China and the need for conformity linked to Confucian culture and collectivism (Ye et al., 2019). Despite contingencies and normative pressures, there is strong evidence that AI adoption is negatively perceived by human resources.

In this vein, Braganza, Chen, Canhoto, and Sap (2020) found evidence that AI adoption generates a sort of alienation in workers' perception. Specifically, they found that AI adoption causes workers' psychological contract, engagement, and trust to fall significantly.

Despite the enormous and often unjustified or unmotivated resistance to the shift toward the "AI world", there are cogent pressures from governments and businesses for a massive diffusion of AI at all levels. Accordingly, HRM must be redesigned to comply with the new paradigm. In this fashion, HRM is slowly but steadily moving from the logic of society 4.0 to that of society 5.0, known as the human-centered society (Konno & Schillaci, 2021; Yabanci, 2020; Fukuyama, 2018). Although the conversation about this massive shift is relatively recent, with the discussion starting between 2001 (Gueutal & Stone, 2005; Ulrich, 2001) and, in Human Resource Management Review, 2006 (Stone, Stone-Romero, & Lukaszewski, 2006), it is only recently that, due to various contingencies, this new approach has gained dramatic relevance. For instance, AI may be extensively applied to achieve the 17 Sustainable Development Goals and to solve global challenges, such as those posed by the 2020–2021 global pandemic.

However, before such technologies can be applied to the widest fields of activity, people need to dwell with them and become familiar with them, overcoming the negative legacy tied to the concept of AI through a cultural change.

Previously, in the HRM research domain, Stone et al. (2006) discussed the effectiveness of electronic (e)HRM in attracting new talent. Strohmeier (2007) studied the involvement of new technologies within the HRM system to make it more efficient in measuring employees' performance and delivering benefits. Stone and Deadrick (2015) and Stone, Deadrick, and Lukaszewski (2015) explored the involvement of eHRM in employees' journey from the manufacturing sector to the knowledge economy. Recently, Connelly, Fieseler, Černe, Giessner, and Wong (2020) investigated the digitalization of HRM, in light of the shift from the knowledge economy to the gig economy. Other works on related topics appeared over time (Malik, Budhwar, Patel, & Srikanth, 2020; Malik, Budhwar, & Srikanth, 2020; Powell & Dent-Micallef, 1997; Haenlein & Kaplan, 2019; Tambe, Cappelli, & Yakubovich, 2019).

These studies, then, explored the matter only in a tangential and partial manner, without providing a useful understanding of the deep psychological and cultural roots of individual aversion to AI.

Other works focused on the engineering aspects of human-humanoid interaction as a new form of AI (Cho, Park, Park, Park, & Lee, 2017; Tarafdar, Beath, & Ross, 2019).

Apparently, all these studies failed to consider, as Wright (2020) noted, the relevance of the "human" dimension in strategic human capital, in terms of a "rediscovery of the human". Yet, Sanders and Wood (2019) argued that the adoption of AI is not a "plug and play" model but involves ethical aspects that preserve the human side of an organization. Batistič (2018) emphasized the relevance of implementing socialization tactics within a company to improve the human resources environment.

Thereby, in order to study the relationship between humans and humanoids, we use the lens of the technology acceptance model (TAM) (Davis, 1989) and its extensions (Venkatesh & Davis, 2000). As a matter of fact, perceived ease of use and perceived usefulness are core aspects of the human-humanoid relationship. Accordingly, social beliefs and the external environment impact the perception of such technologies (Cenfetelli & Schwarz, 2011; Karahanna, Agarwal, & Angst, 2006). At the level of organizational success, having a naïve attitude toward innovation (Kirton, 1977) guarantees a competitive advantage (Malik, 2019; Birdi, Leach, & Magadley, 2016; Jarrahi, 2018; Montani, Courcy, & Vandenberghe, 2017; Newman, Herman, Schwarz, & Nielsen, 2018). Based on the Service Robot Deployment (SRD) model, an optimum equilibrium in human-humanoid interactions occurs when cognitive/analytical and emotional tasks are neither too simple nor too complex (Paluch, Wirtz, & Kunz, 2020; Wirtz & Zeithaml, 2018). Building upon the above considerations, the state of the art of the literature, and the criticism of prior research, the current work aims to reconcile the HRM view of the human-humanoid relationship by understanding the drivers of, and obstacles to, AI acceptance and adoption at the individual level. Yet, this study also answers the call for the urgent need to trigger a change in HRM scholars' and practitioners' attitudes toward AI (Gill, 2018; Rynes, Giluk, & Brown, 2007). Finally, this conceptual research introduces the concept of the "innovation ventura" as the new innovation model of the AI society 5.0.
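For readers unfamiliar with TAM, the canonical relationships invoked above can be sketched in a simplified structural form. The following is only an illustrative summary of the standard Davis (1989) and Venkatesh and Davis (2000) specification, with generic coefficient symbols; it is not a formal statement of the model proposed in this study.

\begin{align}
% PEOU: perceived ease of use; PU: perceived usefulness; SN: subjective norm (added in TAM2);
% A: attitude toward use; BI: behavioral intention to use.
PU &= \beta_{1}\,PEOU + \beta_{2}\,SN + \varepsilon_{1}\\
A  &= \beta_{3}\,PU + \beta_{4}\,PEOU + \varepsilon_{2}\\
BI &= \beta_{5}\,A + \beta_{6}\,PU + \beta_{7}\,SN + \varepsilon_{3}\\
\text{Actual use} &= f(BI)
\end{align}

In this reading, the cultural and emotional factors discussed in this study operate on the perception constructs (PEOU, PU) and on the subjective norm, rather than on the technology itself.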

Hence, at a theoretical level, we provide an original, human-centered vision of AI, which shall inform future studies. At a practical level, the study allows us to understand how to promote the acceptance of AI by human resources, how to design the next AI technologies in a human-centered manner, and how to create value for society by deploying AI technologies.

Section snippets

Humans-humanoids interactions in the HRM context

Over time, the research domain of HRM has been marked by increasing attention to the human side of organizations (Yadav, Yadav, & Malik, 2019), in the so-called switch to humanism (Bruce & Nyland, 2011), as opposed to Taylorism (Taylor, 1911). Currently, the pervasiveness of information and communication technologies (ICT) in everyday life and business has imposed a further update of the tenets of humanism, in terms of "digital humanism" (Davis, 2016). In a nutshell, Taylorism emphasized the…

Technology acceptance and adoption at individual level: A glance at main existing models

The theme of technology acceptance has long intrigued a large and multidisciplinary part of the scientific community. This interest can be explained by the myriad implications of adoption, such as those at the institutional, organizational, market, business, and societal levels. Accordingly, a wealth of models, using a variety of perspectives, has been proposed over time. As far as HRM is concerned, the introduction of novel technologies over time has largely impacted practices and outcomes, along with…

“Innovation ventura” in society 5.0

In his work on "Digital Studies: Organology of Knowledge", the philosopher Stiegler (2008, 2011) pointed out that AI empowers human resources by augmenting their skills. According to his thought, AI augments the possibilities to create new knowledge. It can be said, then, that AI is a knowledge enabler. However, nowadays, some countries are moving fast toward society 5.0, whilst others lag far behind. Perhaps a cultural factor may explain such differences between countries. One…

Conclusions, real impact of the study and future research avenues

This original conceptual study tackles a resounding gap by focusing on the theme of employees' AI acceptance. Previous research mostly focused on the acceptance of digital technologies in HRM. Differently, we propose a holistic and up-to-date reconceptualization of employees' AI acceptance through a human-centered approach. At a practical level, the study can be used to understand which factors hinder and which foster AI acceptance, such as emotions, social pressures, and cognitions.

Acknowledgment

The article is based on the study funded by the Basic Research Program of the National Research University Higher School of Economics (HSE) and by the Russian Academic Excellence Project '5-100'.

Authorship statement

All persons who meet authorship criteria are listed as authors, and all authors certify that they have participated sufficiently in the work to take public responsibility for the content, including participation in the concept, theoretical model, writing, or revision of the manuscript. Furthermore, each author certifies that this material or similar material has not been and will not be submitted to or published in any other publication before its appearance in Human Resource Management Review.

Authorship contributions

The authors, Manlio Del Giudice, Veronica Scuotto, Beatrice Orlando, and Mario Mustilli, have all made a valid contribution throughout the whole paper. In particular, Veronica Scuotto initially conceptualized the theoretical model and drafted and revised the manuscript. Beatrice Orlando developed and revised the manuscript. Manlio Del Giudice and Mario Mustilli revised the manuscript critically for important intellectual content and offered new insights on the critical side of the…

References (134)

  • J.J. Lawler et al., Artificial intelligence in HRM: An experimental study of an expert system, Journal of Management (1996)
  • J. Li et al., Dimensions of artificial intelligence anxiety based on the integrated fear acquisition theory, Technology in Society (2020)
  • A. Malik et al., Value creation and capture through human resource management practices: Gazing through the business model lens, Organizational Dynamics (2018)
  • F. Montani et al., Innovating under stress: The role of commitment and leader-member exchange, Journal of Business Research (2017)
  • A. Newman et al., The effects of employees' creative self-efficacy on innovative behavior: The role of entrepreneurial leadership, Journal of Business Research (2018)
  • P.C. Nystrom et al., Organizational context, climate and innovativeness: Adoption of imaging technology, Journal of Engineering and Technology Management (2002)
  • S.A. Rahman et al., Technology acceptance among micro-entrepreneurs in marginalized social strata: The case of social innovation in Bangladesh, Technological Forecasting and Social Change (2017)
  • E. Abrahamson et al., Management fashion: Lifecycles, triggers, and collective learning processes, Administrative Science Quarterly (1999)
  • E. Abrahamson et al., Institutional and competitive bandwagons: Using mathematical modeling as a tool to explore innovation diffusion, Academy of Management Review (1993)
  • D. Acemoglu et al., Robots and jobs: Evidence from US labor markets (2017)
  • N. Agar, How to be human in the digital economy (2019)
  • G. Ahuja et al., Entrepreneurship in the large corporation: A longitudinal study of how established firms create breakthrough inventions, Strategic Management Journal (2001)
  • K.A.S. Al Mashrafi, Human resource management and the electronic human resource (E-HRM): A literature review, International Journal of Management and Human Science (IJMHS) (2020)
  • L. Alos-Simo et al., The dynamic process of ambidexterity in eco-innovation, Sustainability (2020)
  • N. Anderson et al., Innovation and creativity in organizations: A state-of-the-science review, prospective commentary, and guiding framework, Journal of Management (2014)
  • H. Avery, Private banking: Wealthtech 2.0 – When human meets robot (2019)
  • A.V. Banerjee, A simple model of herd behavior, The Quarterly Journal of Economics (1992)
  • M. Barrett et al., Reconfiguring boundary relations: Robotic innovations in pharmacy work, Organization Science (2012)
  • S. Barro et al., People and machines: Partners in innovation, MIT Sloan Management Review (2019)
  • M. Beane, Learning to work with intelligent machines, Harvard Business Review (2019)
  • A. Beaudry et al., Understanding user responses to information technology: A coping model of user adaptation, MIS Quarterly (2005)
  • M. Beck et al., The rise of AI makes emotional intelligence more important, Harvard Business Review (2017)
  • G. BenMark et al., Messaging apps are changing how companies talk with customers, Harvard Business Review (2016)
  • K. Birdi et al., The relationship of individual capabilities and environmental support with different facets of designers' innovative behavior, Journal of Product Innovation Management (2016)
  • L.F.D.C. Botega et al., An artificial intelligence approach to support knowledge management on the selection of creativity and innovation techniques, Journal of Knowledge Management (2020)
  • A. Braganza et al., Productive employment and decent work: The impact of AI adoption on psychological contracts, job engagement and employee trust, Journal of Business Research (2020)
  • K. Bruce et al., Elton Mayo and the deification of human relations, Organization Studies (2011)
  • M. Čaić et al., Value of social robots in services: Social cognition perspective, Journal of Services Marketing (2019)
  • R.T. Cenfetelli et al., Identifying and testing the inhibitors of technology usage intentions, Information Systems Research (2011)
  • C.M. Christensen et al., Why hard-nosed executives should care about management theory, Harvard Business Review (2003)
  • C.E. Connelly et al., Working in the digitized economy: HRM theory & practice, Human Resource Management Review (2020)
  • P.R. Daugherty et al., Human + machine: Reimagining work in the age of AI, Harvard Business Press, Journal of International Business Studies (2018)
  • T.H. Davenport, The AI advantage: How to put the artificial intelligence revolution to work (2018)
  • T.H. Davenport et al., Artificial intelligence for the real world, Harvard Business Review (2018)
  • F.D. Davis, Perceived usefulness, perceived ease of use, and user acceptance of information technology, MIS Quarterly (1989)
  • F.D. Davis et al., User acceptance of computer technology: A comparison of two theoretical models, Management Science (1989)
  • J. Davis, Program good ethics into artificial intelligence, Nature (2016)
  • de Carvalho Botega, L. F., & da Silva, J. C., An artificial intelligence approach to support knowledge management on the...
  • P.J. DiMaggio et al., The iron cage revisited: Institutional isomorphism and collective rationality in organizational fields, American Sociological Review (1983)
  • M. Dost et al., The impact of intellectual capital on innovation generation and adoption, Journal of Intellectual Capital (2016)