1 Introduction

Algorithmic advice refers to the automation of professional advice giving by expert systems that interact with consumers instead of highly trained specialists (Logg et al., 2019; Sampson, 2021). It is most widespread in financial services, where “robo-advisors” develop investment strategies without human intervention. However, other professional service industries are also automating advice. Services such as DoNotPay or Leaperr, for example, offer “AI-powered” legal and interior design advice without consumers having to consult a lawyer or interior designer.

Previous research on algorithmic advice has led to contradictory results. Several studies suggest that consumers oppose algorithmic advice, a phenomenon described as algorithm aversion. Dietvorst et al. (2015), for example, found that individuals were less likely to choose algorithmic advice over inferior human advice for predicting student performance after they had seen the algorithm err. In the medical domain, scholars have shown that patients do not trust algorithmic advice (Promberger & Baron, 2006), arguing that patients fear it neglects their uniqueness (Longoni et al., 2019). In a similar vein, Castelo et al. (2019) found that algorithm aversion was higher for intuitive, subjective tasks than for quantifiable, objective tasks. However, a study by Logg et al. (2019) has called algorithm aversion into question. Focusing on domains such as business forecasts or the prediction of romantic attraction, they found that people generally preferred advice from an algorithm to human advice. Hildebrand and Bergner (2021) showed that people appreciated algorithmic financial advice more strongly if it used a human-like, conversational style. In sum, these contradictory results point to the existence of additional factors that may affect adoption of algorithmic advice.

One such factor may be consumers’ lay beliefs about AI. Although lay beliefs about AI seem highly salient in the marketplace as people more frequently use AI-enabled services (Huang & Rust, 2018), research about such beliefs is scarce. That is, previous studies have provided participants with specific information about the quality of algorithmic advice, such as information about its mistakes (e.g., Dietvorst et al., 2015; Longoni et al., 2019). However, in real life, people usually do not receive this kind of information and lack the domain expertise to evaluate the accuracy of algorithmic advice. Hence, they may rely on more general cues such as their lay beliefs about AI when deciding whether or not to use algorithmic advice.

In our research, we address this gap and argue that consumers hold different beliefs about how intelligent AI is compared to human intelligence. Specifically, we propose that lay beliefs about AI affect adoption of algorithmic advice because they serve as cues to infer the accuracy of advice, especially when the perceived complexity of a task is high. In three studies, we provide converging evidence for this prediction. In doing so, we contribute to research on algorithmic advice and point to the importance of considering consumers’ lay beliefs about AI when automating advice services.

2 Conceptual development

Figure 1 shows our conceptual model. At the core of this model is the impact of lay beliefs about AI on adoption of algorithmic advice via expected accuracy of advice. We also propose a moderation effect of perceived complexity.

Fig. 1 Conceptual model

2.1 Lay beliefs about AI

According to the literature, individuals hold implicit theories about intelligence which differ from explicit theories of intelligence and affect their expectations of themselves and others (Furnham, 2001; Sternberg, 1985). Whereas explicit theories are tested in empirical studies and claim general validity, implicit theories are naïve constructions of lay people and can be viewed as templates of prototypical characteristics of an intelligent person (Sternberg, 1985). Specifically, research has found that people regard someone as intelligent if she or he possesses analytical abilities such as solving number problems, processing new information, and logical reasoning, whereas other types of skills (e.g., socio-emotional skills) are less strongly associated with an intelligent person (Furnham, 2001). In addition, extensive research has demonstrated that individuals hold stereotypic beliefs about the overall level of intelligence of different groups compared to other groups (Petrides et al., 2004; Truxillo et al., 2012).

One may assume that individuals hold similar beliefs about AI concerning different types of skills (e.g., analytical, socio-emotional skills) and the overall intelligence of AI. Arguably, such beliefs may depend on the task AI engages in and on the way AI is implemented (Castelo et al., 2019; Logg et al., 2019). In the following, we focus on lay beliefs about AI in the field of expert systems which advise consumers on making an optimal choice (e.g., recommending an investment portfolio or an optimal combination of products).

As a first step to examine lay beliefs about AI in this context, we conducted a pre-study and asked bank customers to discuss AI (see footnote 1). The main characteristics associated with AI were applying algorithmic rules, processing data, and making calculations (“Artificial intelligence is about algorithms which follow a certain strategy based on historic data. I don’t imagine an artificial human being”, customer, male, 35). However, participants held different views about the overall level of AI compared to human intelligence. That is, while some participants believed AI to be higher than human intelligence due to greater processing power and memory (“I always compare robo-advice with medical diagnoses. With AI you can make much better medical diagnoses. Do you know Watson? A machine such as Watson can store all human illnesses and can precisely analyze them for possible diagnoses”, customer, male, 33), others argued that AI could never be as high as human intelligence because all AI was man-made (“I believe that AI can only be as smart as the intelligence of the people who programmed it”, customer, female, 42). More illustrative quotes can be found in Web Appendix A.

Against this background, we propose that lay beliefs about the level of AI compared to human intelligence may influence adoption of algorithmic advice because they serve as cues for the accuracy of advice. Studies on advice suggest that maximizing decision accuracy is a main motive of decision makers (Schrah et al., 2006). That is, individuals may seek and use advice because they want to make decisions that are as accurate as possible. However, because it is usually difficult for individuals to evaluate accuracy in advance, they may rely on heuristic cues to infer accuracy (Bonaccio & Dalal, 2006). In the context of algorithmic advice, lay beliefs about AI may represent an important cue. Specifically, consumers who believe that AI is higher than human intelligence may expect to receive more accurate advice and may therefore expect to make better decisions when using algorithmic advice. Hence, they may be more motivated to use algorithmic advice. In contrast, consumers who believe that AI is inferior to human intelligence may not expect to receive more accurate advice. Consequently, they may be less likely to adopt algorithmic advice. Thus,

  • H1: Consumers believing that AI is higher than human intelligence will be more likely to adopt algorithmic advice than consumers who do not believe that AI is higher than human intelligence.

2.2 Perceived complexity

Scholars have emphasized that consumers tend to perceive professional services as complex (Mikolon et al., 2015). That is, consumers may regard services such as financial, architectural, or legal advice as complex because they deal with decision problems that involve multiple different, interdependent, and sometimes ambiguous aspects (Campbell, 1988; Dellaert et al., 2012). However, as complexity occurs on the task level rather than on the job level (Campbell, 1988), one may also argue that perceived complexity varies depending on the task. For example, consumers may regard developing an investment strategy as a more complex task than finding the right credit card because it requires considering a greater number of product alternatives. Importantly, research suggests that individuals respond more favorably to (human) advice when they perceive a high degree of complexity (Gino & Moore, 2007; Schrah et al., 2006). To explain this effect, it has been argued that individuals facing a complex task want to reduce their cognitive effort but still obtain a high level of decision accuracy (Schrah et al., 2006).

Based on these findings, one may assume that the effect of lay beliefs on adoption of algorithmic advice is moderated by perceived complexity. When consumers perceive a high level of task complexity, lay beliefs about AI may strongly affect adoption of algorithmic advice. That is, individuals may feel that a high level of intelligence is needed when addressing a decision problem which requires analyzing a multitude of different, interdependent aspects. Consequently, consumers who believe that AI is higher than human intelligence may feel that they receive more accurate advice. In contrast, when consumers perceive a low level of complexity, lay beliefs about AI may not exert a similar impact. In this case, consumers may assume that a task can be solved by using simple rules. Hence, they may not consider intelligence a crucial factor to increase accuracy of advice. That is, they may feel that advice is equally accurate regardless of whether AI is lower or higher than human intelligence. As a result, they may not be more motivated to seek algorithmic advice when they believe that AI is higher than human intelligence. Hence,

  • H2: Consumers believing that AI is higher than human intelligence will only be more likely to adopt algorithmic advice when perceived task complexity is high.

3 Study 1

3.1 Methodology

In study 1, we cooperated with a Swiss retail bank to examine the impact of lay beliefs about AI. The bank had introduced a new investment advice service (“robo-advisor”) and allowed us to include a short questionnaire in an e-mailing sent to 3656 customers; the mailing stated that the bank wanted to obtain its customers’ opinion of the bank and did not make any reference to AI or investment advice. Participants were directed to a website where they were exposed to our survey questions. Next, they were asked to watch a short video about the new service. After the video, participants could register for the new service. In total, 454 customers answered the survey questions and watched the video (72.7% male, average age: 53.1 years).

We measured lay beliefs about AI with two items (artificial intelligence is superior to human intelligence in many areas/artificial intelligence is better able to solve complex problems than human intelligence; r = 0.60). To control for extraneous influences, we included a three-item measure of satisfaction with the bank (α = 0.91) from Mende and Bolton (2011). Moreover, we included several items from previous research on technology adoption as control measures, such as need for interaction, convenience, and technical anxiety (e.g., Collier & Kimes, 2013; Meuter et al., 2005). We also asked individuals for their total assets (ordinal scale with five categories) and assessed whether they had a personal client advisor. Unless stated otherwise, all items in this study and in the other studies used 7-point scales labeled strongly disagree (1) and strongly agree (7). All items can be found in Web Appendix D.

As a measure of adoption of algorithmic advice, we assessed whether participants had registered for the investment advice service. Hence, a dichotomous choice measure served as dependent variable (0 = did not register, 1 = registered). In total, 74 customers registered for the service.

3.2 Results

To examine the impact of lay beliefs about AI, we conducted logistic regression analyses. Satisfaction, convenience, total assets, and personal client advisor emerged as significant predictors and were included in the analyses. To facilitate interpretation, we z-standardized all predictor variables. We estimated three different models that predicted adoption of algorithmic advice by lay beliefs only (base model), by the control variables only (controls model), and by all predictors conjointly (full model). Testing these models allowed us to analyze the robustness of the effect of lay beliefs about AI and to examine whether a model including lay beliefs predicted adoption of algorithmic advice better than a model without them. Table 1 provides an overview of the different models. As expected, lay beliefs had a significant positive impact in the base model (Wald χ2(1) = 11.30, p < 0.001) and in the full model (Wald χ2(1) = 7.12, p = 0.01). Moreover, the full model (Nagelkerke R2 = 0.16) predicted adoption of algorithmic advice better than the controls model (Nagelkerke R2 = 0.14). Following the literature (Osborne, 2015), we also calculated predicted probabilities of adopting algorithmic advice for participants believing in the superiority of AI (+ 1 SD, 17.9%) and participants not believing in the superiority of AI (− 1 SD, 9.1%) based on the regression equation of the full model. In sum, these findings support H1.

Table 1 Study 1: binary logistic regression model results
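For readers who wish to reproduce this type of analysis, the following sketch illustrates the logic of the full model and of the predicted-probability calculation at ± 1 SD of lay beliefs. It is not the authors’ code: the data file (study1.csv) and column names (lay_beliefs, satisfaction, convenience, assets, advisor, registered) are hypothetical, and it uses Python’s statsmodels rather than the software used in the original study (statsmodels reports McFadden’s rather than Nagelkerke’s pseudo R2).

```python
# Minimal sketch, assuming hypothetical data and column names (not the authors' code).
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("study1.csv")  # hypothetical data file

# z-standardize all predictors to ease interpretation, as in the paper
predictors = ["lay_beliefs", "satisfaction", "convenience", "assets", "advisor"]
X = df[predictors].apply(lambda col: (col - col.mean()) / col.std())
X = sm.add_constant(X)

# Full model: registration (0/1) regressed on lay beliefs plus controls
full_model = sm.Logit(df["registered"], X).fit()
print(full_model.summary())  # note: pseudo R2 shown here is McFadden's, not Nagelkerke's

# Predicted adoption probability at +/- 1 SD of lay beliefs,
# holding all other standardized predictors at their mean of 0
b = full_model.params
for sd, label in [(+1, "+1 SD"), (-1, "-1 SD")]:
    logit = b["const"] + b["lay_beliefs"] * sd
    print(label, "predicted probability:", 1 / (1 + np.exp(-logit)))
```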

Study 1 provides initial evidence that individuals who believe that AI is higher than human intelligence are more likely to adopt algorithmic advice in a real market setting. However, the study also has some limitations. First, as is typical of a field context, restrictions on the overall length of the survey allowed us to measure lay beliefs about AI with only two items. Second, our results are correlational in nature, as we did not manipulate lay beliefs about AI.

4 Study 2

4.1 Methodology

In study 2, we sought to replicate the findings of study 1. To increase the generalizability of our results, we focused on a different advice context, namely, interior design. Unlike investment advice, developing an interior design concept may be considered a more subjective task, requiring intuition and a sense of style in addition to analytical skills. A total of 73 students taking part in postgraduate classes at a Swiss university participated in the study (74.0% male, average age: 31.1 years). To manipulate lay beliefs about AI, we asked participants to read a short interview with a professor, purportedly taken from a newspaper, that was intended to trigger the belief that AI was either higher or lower than human intelligence (see Web Appendix for all stimulus materials). Next, participants were presented with a screenshot from an alleged “robo-interior design service” which offered advice on developing a personalized furnishing concept and selecting the right furniture. Finally, they completed the control and dependent measures.

Intentions to adopt algorithmic advice and expected accuracy of advice were measured with three-item scales adapted from Venkatesh et al. (2012; α = 0.92) and Wixom and Todd (2005; α = 0.71). We also assessed the effectiveness of the lay-belief manipulation (two-item scale from study 1, r = 0.63) and measured all control variables from study 1 except satisfaction with the bank and client advisor (note that we used household income as a proxy for wealth in this study). In addition, we measured inertia with a single item adapted from Meuter et al. (2005). Arguably, adoption of algorithmic advice would have required more effort in study 2, as it would have involved starting a relationship with a new service provider.

4.2 Results

Participants who read the interview stating that AI was superior to human intelligence believed more strongly that AI was higher than human intelligence (M = 5.21) than participants who read that AI was inferior to human intelligence (M = 3.59; F(1,71) = 35.07, p < 0.001). Convenience emerged as a significant covariate. None of the other variables emerged as significant covariates, and they were thus excluded from the analyses.

A one-way ANOVA revealed a significant effect for lay beliefs (F(1,70) = 4.26, p = 0.04). Participants who believed that AI was higher than human intelligence were more willing to adopt algorithmic advice (M = 5.30) than participants who did not have this belief (M = 4.80). These results provide further support for H1.
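As an illustration, the sketch below fits the corresponding linear model with the manipulated lay-belief condition and convenience as a covariate and reports the condition means. It is a hypothetical reconstruction, not the authors’ code; the data file and column names (condition, intention, convenience) are assumptions.

```python
# Minimal sketch, assuming hypothetical data and column names (not the authors' code).
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("study2.csv")  # hypothetical: condition (0 = AI inferior, 1 = AI superior), intention, convenience

# Adoption intentions regressed on the manipulated condition with convenience as covariate
model = smf.ols("intention ~ C(condition) + convenience", data=df).fit()
print(anova_lm(model, typ=3))                        # F-test for the lay-belief manipulation
print(df.groupby("condition")["intention"].mean())   # condition means
```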

To examine the underlying process, we conducted mediation analysis with bootstrapping (Hayes, 2018; model 4). Expected accuracy of advice mediated the effect of lay beliefs on intention to adopt algorithmic advice [0.15, 95% CI = (0.0165, 0.3285)]. There was no direct effect of lay beliefs [0.15, 95% CI = (− 0.1233, 0.4265)].
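The indirect effect reported above can be approximated with a percentile bootstrap of the product of the a and b paths, analogous to Hayes (2018, model 4). The sketch below is a hypothetical reconstruction in Python rather than the PROCESS macro; the data file and column names are assumptions.

```python
# Minimal sketch of a percentile-bootstrap mediation analysis (not the authors' PROCESS code).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("study2.csv")  # hypothetical: condition (0/1), accuracy, intention, convenience

def indirect_effect(d):
    # a path: condition -> expected accuracy; b path: accuracy -> intention (controlling for condition)
    a = smf.ols("accuracy ~ condition + convenience", data=d).fit().params["condition"]
    b = smf.ols("intention ~ accuracy + condition + convenience", data=d).fit().params["accuracy"]
    return a * b

rng = np.random.default_rng(1)
boot = np.array([indirect_effect(df.iloc[rng.integers(0, len(df), len(df))])
                 for _ in range(5000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {boot.mean():.2f}, 95% CI = [{lo:.4f}, {hi:.4f}]")
```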

5 Study 3

5.1 Methodology

The aim of study 3 was to test H2. In total, 227 Swiss consumers between 18 and 69 years of age were recruited by a direct marketing company and participated in the study (52.0% male, average age: 57.4 years). Participants were asked to indicate their lay beliefs about AI and were then exposed to a screenshot from an alleged new financial advice service called Robowealth. Perceived complexity was manipulated by varying the decision task. That is, participants were either exposed to a task of high complexity with many options (i.e., developing an optimal investment strategy) or to a task of lower complexity with fewer options (i.e., finding the right savings account). This setting allowed for a conservative test of our hypothesis, as providing advice on a savings account may still be considered a relatively complex task. Next, participants completed the control and dependent measures.

To assess lay beliefs about AI, we adapted an eight-item scale from Truxillo et al. (2012; α = 0.80) and asked participants to evaluate different abilities of AI in comparison to human intelligence (e.g., abstract reasoning, processing of information; see footnote 2). We used the same scales as in study 2 to measure intentions to adopt algorithmic advice (α = 0.89) and expected accuracy (α = 0.83). As a manipulation check, participants indicated on a seven-point scale whether the task on which Robowealth offered advice was simple (1) or complex (7). Finally, we included the same control variables as in study 2.

5.2 Results

Participants in the high complexity condition rated the task as more complex than participants in the low complexity condition (Mlow complexity = 4.06, Mhigh complexity = 4.58; F(1, 225) = 5.98, p = 0.02). Convenience and inertia emerged as significant covariates and were included in the analyses.

We mean-centered the lay beliefs about AI variable and conducted an OLS regression. This analysis revealed a significant main effect of lay beliefs about AI (b = 0.23, t = 2.36, p = 0.02), a non-significant effect of perceived complexity (b = 0.07, t = 0.79, p = 0.43), and significant effects of the control variables (convenience: b = 0.15, t = 2.67, p = 0.01; inertia: b =  − 0.30, t =  − 5.69, p < 0.001). Importantly, the interaction between lay beliefs about AI and perceived complexity was significant (b = 0.18, t = 1.93, p = 0.05). To follow up on this effect, we performed planned contrasts and compared participants believing that AI is higher than human intelligence (i.e., 1 SD above the mean) to participants not believing that AI is higher than human intelligence (i.e., 1 SD below the mean) within the two complexity conditions (see Fig. 2). When the task was perceived to be complex, participants had stronger intentions to adopt algorithmic advice when they believed that AI was higher than human intelligence (MAI intellectually superior = 3.69, MAI not intellectually superior = 2.84; b = 0.40, t = 3.50, p < 0.001). When participants perceived the task to be less complex, intentions to adopt algorithmic advice did not vary as a function of lay beliefs about AI (MAI intellectually superior = 3.17, MAI not intellectually superior = 3.07; b = 0.05, t = 0.34, p = 0.74). These findings support H2.

Fig. 2 Interaction effect between lay beliefs about AI and perceived complexity
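The following sketch illustrates the moderated regression and the follow-up predictions at ± 1 SD of lay beliefs within each complexity condition. It is not the authors’ code; the data file and column names (lay_beliefs, complexity, intention, convenience, inertia) are assumptions, and it computes predicted cell means rather than the exact planned contrasts reported above.

```python
# Minimal sketch of the moderated regression (not the authors' code).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("study3.csv")  # hypothetical: lay_beliefs, complexity (0/1), intention, convenience, inertia
df["beliefs_c"] = df["lay_beliefs"] - df["lay_beliefs"].mean()  # mean-center lay beliefs

# Main effects, interaction, and covariates
model = smf.ols("intention ~ beliefs_c * complexity + convenience + inertia", data=df).fit()
print(model.summary())

# Predicted adoption intentions at +/- 1 SD of lay beliefs within each complexity condition,
# holding the covariates at their means
sd = df["beliefs_c"].std()
grid = pd.DataFrame([(b, c) for c in (0, 1) for b in (-sd, sd)],
                    columns=["beliefs_c", "complexity"])
grid["convenience"] = df["convenience"].mean()
grid["inertia"] = df["inertia"].mean()
grid["predicted_intention"] = model.predict(grid)
print(grid)
```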

Next, we investigated the underlying process. Figure 1 proposes a case of moderated mediation, in which perceived complexity of the task moderates the effect of lay beliefs about AI on expected accuracy which, in turn, affects adoption of algorithmic advice. Using the bootstrapping method (Hayes, 2018; model 7), we found that the indirect effect of lay beliefs about AI on behavioral intentions via accuracy was significant when the task was of high perceived complexity [0.34, 95% CI = (0.1940, 0.5215)] but not significant when the task was of low perceived complexity [0.09, 95% CI = (− 0.0943, 0.2665)], indicating a mediation effect of expected accuracy. The confidence interval for the index of moderated mediation did not include zero [0.25, 95% CI = (0.0385, 0.5172)], indicating that the mediation is moderated.
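The conditional indirect effects and the index of moderated mediation can likewise be approximated with a percentile bootstrap, analogous to Hayes (2018, model 7). Again, this is a hypothetical sketch rather than the PROCESS macro; the data file and column names are assumptions.

```python
# Minimal sketch of a bootstrapped moderated mediation analysis (not the authors' PROCESS code).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("study3.csv")  # hypothetical: lay_beliefs, complexity (0/1), accuracy, intention, convenience, inertia
df["beliefs_c"] = df["lay_beliefs"] - df["lay_beliefs"].mean()

def conditional_effects(d):
    # Mediator model: complexity moderates the beliefs -> accuracy path (model 7 logic)
    m = smf.ols("accuracy ~ beliefs_c * complexity + convenience + inertia", data=d).fit()
    # Outcome model: accuracy -> intention, controlling for beliefs and covariates
    y = smf.ols("intention ~ accuracy + beliefs_c + convenience + inertia", data=d).fit()
    a_low = m.params["beliefs_c"]                       # beliefs -> accuracy, low complexity
    a_high = a_low + m.params["beliefs_c:complexity"]   # beliefs -> accuracy, high complexity
    b = y.params["accuracy"]                            # accuracy -> intention
    return a_low * b, a_high * b, m.params["beliefs_c:complexity"] * b  # last term: index of mod. mediation

rng = np.random.default_rng(1)
boot = np.array([conditional_effects(df.iloc[rng.integers(0, len(df), len(df))])
                 for _ in range(5000)])
labels = ["indirect effect, low complexity", "indirect effect, high complexity",
          "index of moderated mediation"]
for label, col in zip(labels, boot.T):
    lo, hi = np.percentile(col, [2.5, 97.5])
    print(f"{label}: {col.mean():.2f}, 95% CI = [{lo:.4f}, {hi:.4f}]")
```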

6 General discussion

The purpose of this research was to investigate how lay beliefs about AI affect adoption of expert systems offering algorithmic advice to consumers in professional service contexts. Across different operationalizations (i.e., measuring consumers’ lay beliefs, temporarily activating lay beliefs) and different advice settings (i.e., financial advice, interior design advice), we demonstrated a robust effect of lay beliefs on adoption of algorithmic advice. Moreover, we identified task complexity as an important boundary condition and showed that lay beliefs about AI only had a significant impact on adoption of algorithmic advice when the decision task was perceived to be complex.

6.1 Theoretical implications and future research

These findings contribute to research on algorithmic advice in professional services. In this field, several studies have found that people reject algorithmic advice (Schmitt, 2019), while others have found that people “readily rely on algorithmic advice” (Logg et al., 2019, p. 99). Our findings may help to partially reconcile these views. That is, we show that people who believe that AI is superior to human intelligence may rely more heavily on algorithmic advice, while people who believe that AI is inferior to human intelligence may tend to reject algorithmic advice. More specifically, our research shows that people use their lay beliefs about AI as a heuristic to infer accuracy of advice when they do not have sufficient information and/or domain expertise to evaluate the quality of algorithmic advice, a situation typical of many professional advice contexts.

We also contribute to the literature by showing that lay beliefs about AI only influence adoption of algorithmic advice when a task is perceived to be complex. This finding extends previous research (Castelo et al., 2019) which found that algorithm aversion was less pronounced when people perceived a task to be objective, that is, when they felt that it required quantitative analysis rather than intuition. Specifically, our findings suggest that consumers may be more willing to use algorithmic advice for an objective task (e.g., selecting a financial product) when they believe that AI is superior to human intelligence and when the task is perceived to be complex. Moreover, the results of study 2 tentatively suggest that consumers may also be willing to use algorithmic advice for more intuitive, complex tasks (e.g., finding an interior design which matches their style) when they believe in the superiority of AI. In sum, our research provides a more fine-grained analysis of adoption of algorithmic advice.

While contributing to existing research, our studies also have some limitations that call for future research. First, we investigated the impact of lay beliefs about AI in the context of financial advice and interior design. However, in other contexts such as medical advice, psychological distress may be higher, and factors such as human uniqueness may be of greater relevance. Hence, it may be interesting to investigate the impact of lay beliefs about AI in such contexts. Second, it would be worthwhile to examine why people develop different lay beliefs about AI in the first place. For instance, people who are used to different types of AI applications (e.g., robots, chatbots) may develop different lay beliefs about AI. Similarly, individual differences in speciesism (i.e., a fundamental bias toward the human species) may favor the development of different lay beliefs about AI (Schmitt, 2020).

6.2 Managerial implications

Our findings also have clear managerial implications. The most straightforward one is that professional service firms that want to offer algorithmic advice need to recognize the different beliefs their customers have about AI. For example, 46.9% of bank customers participating in study 1 tended to believe that AI is higher than human intelligence (i.e., they had a value above the midpoint of the scale), but 34.6% tended to oppose this idea (i.e., they had a value below the midpoint of the scale). Hence, managers may segment their customers according to different lay beliefs about AI and offer different segments different types of advice.

If service companies, however, want to increase general adoption of algorithmic advice, they may try to influence their customers’ lay beliefs about AI. To this end, they may, for example, inform customers about the data processing capabilities of AI or provide statistics about the predictive superiority of AI-enabled services.

Finally, on a more general level, our research may help managers decide which types of advice they should automate. Whereas in practice companies often seem to focus on automating advice related to simple tasks (e.g., advice on standardized products such as car insurance), our research indicates that automating advice related to complex subjects (e.g., advice on individualized health care plans) may also be a promising endeavor when a substantial part of the customer base believes that AI is higher than human intelligence. Assuming that more consumers will come to believe in the superiority of AI, professional service companies that consistently automate complex services may gain a competitive advantage.