This preface introduces the special issue on The Dark Sides of AI, which offers six papers focusing on the challenges of AI technology. In the twenty-first century, artificial intelligence (AI) is an extremely disruptive innovation that has attracted considerable attention from practitioners and academics. AI provides extensive and unprecedented opportunities for fundamental change and upgrades across many industries. This disruptive technology makes incredible things possible, such as autonomous vehicles, facial recognition payment, and guidance robots.

More specifically, AI injects fresh vitality into digital business: it facilitates the development of smart services and promotes digital transformation. Currently, AI is considered one of the top five emerging technologies for enterprises attempting to implement a digital-first strategy, and it is predicted that 70% of organizations will develop AI architectures owing to the increasing maturity and availability of AI technology (Goasduff, 2021). The age of AI is undoubtedly coming. Recently, AI has attracted the attention of many academics in their respective fields. Scholars have investigated the application of AI in different contexts such as information systems (e.g., Gursoy et al., 2019), tourism and hospitality (e.g., Li et al., 2019), marketing (e.g., Syam & Sharma, 2018), and financial management (e.g., Culkin & Das, 2017). Research results indicate that AI has the potential to change the way organizations and customers interact and to bring higher business benefits (e.g., increased efficiency, increased effectiveness, and decreased cost).

Despite these great benefits for business, Tarafdar et al. (2013) have warned of the dark sides of information technology, and AI is no exception. It is acknowledged that AI has the potential to induce risks at the individual, organizational, and societal levels (Alt, 2018). However, extensive attention is paid to the positive aspects of AI, while its dark sides receive relatively little attention, especially from the academic community. Considering the importance and ubiquity of AI, the significant negative consequences it brings to individuals, organizations, and society deserve further exploration. Given the limited research on the dark sides of AI, we provide this special issue and encourage researchers to explore important issues in the AI realm, especially in electronic market contexts such as electronic commerce, social media, and emergent digital platforms.

AI-enabled applications

AI technology has been widely applied in various industries to promote business benefits, and its application differs from context to context. In e-commerce, online vendors adopt chatbots to provide consumers with 24/7 service (Luo et al., 2019). In the financial industry, facial recognition payment is becoming increasingly popular in online marketplaces (Liu et al., 2021). In brick-and-mortar retail, guidance robots are used to help consumers select products. In the medical health industry, AI-enabled diagnostic systems are utilized in medical imaging and histopathology, and AI shows great potential in medical diagnosis and treatment planning (Ploug & Holm, 2020). In the entertainment industry, the success of DeepMind's AlphaGo Zero has caused heated controversy (Chao et al., 2018). In the field of transportation, autonomous vehicles have attracted extensive attention from businesses (Chattopadhyay et al., 2020). In addition, other AI technologies such as speech recognition, language translation, image processing and deep learning are making various aspects of our lives more convenient, and we also find AI-driven smart home products such as XiaoIce, Xiaodu, etc.

Admittedly, AI does facilitate problem solving and encourages technological innovation in various contexts; as mentioned above, it contributes to high efficiency, high effectiveness and low cost. However, AI is causing increasing controversy in the practices where it is used. First, AI does not perform as intelligently as some people expect. For instance, Facebook's Project M reported that about 70% of interactions between humans and AI failed (Griffith & Simonite, 2018). Furthermore, about 80% of consumers are unwilling to communicate with chatbots on e-commerce websites since the chatbots cannot understand their needs (Forbes, 2019). Therefore, AI has also spurred dark-side effects, which we explore further in the next section.

The dark sides of AI

Despite the numerous potential benefits, there are undoubtedly many negative consequences of AI, and we cannot merely look at its promising potential (Floridi et al., 2018). Alt (2018) emphasizes that AI may lead to enormous risks at the individual, organizational, and societal levels; importantly, these three levels are considered the most important elements of digitalization (Alt, 2018). Considering the background of our special issue, we mainly discuss the dark sides of AI in electronic markets.

From the individual's perspective, the detrimental effects of AI in electronic markets are mainly reflected in privacy concerns, content recommendation, and product recommendation. First, AI can gain deep insights into consumers, which may exacerbate privacy concerns (Grewal et al., 2021). For instance, Dickson (2019) argues that voice assistants (e.g., Alexa) can predict when a consumer's current relationship will end by analyzing the consumer's voice with AI technology. In addition, facial recognition payments introduce privacy risks, since the human face carries a great deal of personal information, including appearance, age, and gender (Dibeklioğlu et al., 2015; Dantcheva & Brémond, 2016; Liu et al., 2021). A recent study finds that personalized recommendations lead to perceived privacy concerns and perceived information narrowing; as a result, people are reluctant to accept the related technologies (Li et al., 2021).

From an organizational perspective, the introduction of AI-enabled products is likely to influence a company's reputation and, ultimately, its profits. For instance, the performance of AI-enabled chatbots influences consumer satisfaction (Ashfaq et al., 2020; Eren, 2021). If chatbots do not perform as well as consumers expect, consumers will distrust them and, as a result, will no longer trust the sellers or companies that use them (Yen & Chiang, 2020). In addition, since AI is a relatively emergent technology, organizations face the great challenge of successfully implementing AI strategies and often fail to address how this affects the human workforce.

From a societal perspective, AI also yields dark effects that range from data security issues to workforce replacement and ethical problems (Boyd & Wilson, 2017; Wang & Siau, 2018). The use of AI has raised widespread concerns for the human workforce (Danaher, 2019); in fact, the belief that AI may render us unemployed exists not only within e-commerce but in all walks of life. Other problems include AI law and regulation, and AI ethics issues such as moral dilemmas, AI discrimination, and AI fairness (Wirtz et al., 2020). These potential problems pose challenges for AI governance at the societal level (Wirtz et al., 2020). In addition, AI plays an important role in work and life environments shaped by COVID-19, and the issues raised by its use also change risk perceptions and coping behavior.

Articles of the present issue

The present special issue on the dark sides of AI is the first in Electronic Markets on this topic and includes six research papers. These papers shift attention from the bright sides of AI to the challenges of AI adoption in various domains. This enriches our understanding of the application of AI technology from different perspectives and helps us avoid the cycle of unrealistic expectations followed by disappointment. The following paragraphs discuss the articles in this special issue.

The first paper, authored by Sun et al. (2021b), is entitled "The dark sides of AI personal assistants: effects of service failure on user continuance intention". In the last few years, smart devices have been expanding their capabilities extensively. They can help companies provide better service to customers, and they can help individuals automatically perform tasks such as checking the weather. In this paper, the authors study a smart voice device context, namely AI personal assistants (AIPAs). They argue that studies have scarcely uncovered the underlying mechanism through which the dark sides of AIPAs affect users' continuance intention. From the perspective of technostress, the study proposes a theoretical model of how consumers cope with service-failure stressors. Some interesting results emerge from this study, and new avenues for research are opened to explore how the service failures of AIPAs affect consumers' continuance intention through the lens of technostress.

The second paper, authored by Ma et al. (2021), is entitled "Understanding users' negative responses to recommendation algorithms in short-video platforms: a perspective based on the Stressor-Strain-Outcome (SSO) framework". Because of their ability to customize and enhance social media and e-commerce, recommendation systems have been widely employed; they can promote products and services that better fit users' needs. This paper focuses on the role of AI-based recommendation algorithms. Although these systems can benefit both users and platforms, not much is known about their dark side, especially users' negative responses. From the perspective of recommendation features and information characteristics, this study aims to uncover users' negative responses to AI-based recommendation algorithms in the algorithm-driven context of short-video platforms. Drawing on the stressor-strain-outcome (SSO) framework, it identifies information-related stressors and examines their influence on users' negative responses to a recommendation algorithm. The paper thus offers useful insights for studying AI-based recommendation algorithms and expands our knowledge of their dark side.

The third paper, authored by Kai and Zhang (2021), is entitled "Categorization and eccentricity of the AI risks: A comparative study of the global AI guidelines". This study develops a four-dimensional matrix based on the classic "probability-severity" framework from the field of risk management as a benchmark for analyzing the possible risks of AI. Using this framework, the authors conduct a comparative study of the extant guidelines by coding 123 guidelines comprising 1023 articles. The structure consists of four pairs of risks: specific-general, legal-ethical, individual-collective and generational-transgenerational. Results show that the extant guidelines are eccentric, while collective risk and generational risk are largely underestimated by stakeholders. Based on this analysis, gaps and conflicts are outlined for future research.

The fourth paper, authored by Mirbabaie et al. (2021), is entitled "The rise of artificial intelligence – understanding the AI identity threat at the workplace". This paper addresses the problem of AI resistance by examining the antecedents of AI identity threat. Applying a combination of quantitative and qualitative approaches, the study reveals three central predictors of AI identity threat in the workplace: changes to work, loss of status position, and AI identity. The paper contributes to the theme of the special issue by clarifying how professional identity and AI identity relate differently to the negative impacts of AI (i.e., identity threat caused by AI), and it provides useful guidance for practitioners who wish to introduce AI in the workplace.

The fifth paper, authored by Sun et al. (2021a), is entitled "Prick the filter bubble: A novel cross domain recommendation model with adaptive diversity regularization". Recommender systems harness the data our behaviour provides to focus sharply on what we want, discarding other alternatives (Deldjoo et al., 2020; Milano et al., 2020). Do we lose something in our personal and professional lives from these narrowly targeted results? This paper attempts to mitigate that loss by developing a model that balances recommendation accuracy with diversity. It is important for people to receive a breadth of information and stimulation, so that they retain their own ability to judge the results they receive and make the final decision.

The sixth paper, authored by Hornung and Smolnik (2021), is entitled "AI invading the workplace: negative emotions towards the organizational use of personal virtual assistants". We increasingly interact with AI through personal virtual assistants (PVAs), also referred to as chatbots. These can be convenient, effortless and effective, but there are also challenges (Cheng et al., 2021; Zarifis et al., 2021). This paper focuses on the negative emotions towards them. The qualitative methodology of this research enables it to identify the most influential negative emotions, which relate to loss and fear. These findings can help in designing and implementing PVAs that meet less resistance.