Introduction

A central question in public policy analysis is how decision-makers learn in the policy process. In the literature, scholars have identified policy evaluations as a source of information that provides decision-makers with material to learn how to modify legislation (footnote 1) (Head, 2016; Pattyn et al., 2018; Stephenson et al., 2019). In this context, scholars often refer to evidence-based policy making, a term that describes the utilization of scientific evidence to create or change public policies. Implementing expertise in public policy comes with three main challenges. Firstly, decision-makers face the temptation to use information in a way that fits their political goals, even if doing so comes at the cost of taming or solving a policy problem. Secondly, learning from expert positions is difficult in the post-truth era, in which ignoring facts has become somewhat normalized (Perl et al., 2018). Thirdly, in a context of limited time and resources, decision-makers use heuristics to process information more effectively (Cairney, 2016; Kamkhaji & Radaelli, 2017; Perl et al., 2018).

This article contributes to analyzing the first of these challenges: the strategic political usage of expertise by decision-makers at the expense of policy efficacy. To this end, the paper examines how the salience and technical complexity of policy issues are linked to policy learning and political learning from evaluations. Specifically, the paper combines two strands of research:

Firstly, the article draws on research on policy evaluation to understand how scientific evidence influences policy making (Weiss, 1979; Weiss et al., 2005; Daviter, 2015; Newman et al., 2016; Schlaufer et al., 2018). Notably, researchers have focused on scientific evaluations of public policies, which members of parliament can demand from the administration (Jennings & Hall, 2012; Bundi, 2016). According to the literature, parliamentarians appreciate evaluations (Boyer & Langbein, 1991; Demaj & Summermatter, 2012; Hird, 2009) and use them in different ways (Boyer & Langbein, 1991; Tabuga, 2017; Whiteman, 1985). On the one hand, evaluations produce independent evidence about the implementation of a policy, which makes them an instrument of parliamentary oversight that decision-makers use to hold the administration accountable to elected officials (Bundi, 2018). On the other hand, decision-makers can use this evidence to learn whether a policy works as intended or whether it has different effects (Alkin & King, 2017; Eberli, 2019; Henry & Mark, 2003).

In this literature, scholars have also separated analytical from political uses of policy evaluations (Eberli, 2019; Frey, 2012; Shulock, 1998; Weiss, 1989). Researchers have pointed out that policy- and politically oriented uses of evaluation cannot be considered completely detached from each other, as they can occur simultaneously or sequentially (Amara et al., 2004; Frey, 2012; Whiteman, 1985). Therefore, we should expect that any use of policy evaluation has some political motives attached to it (Frey, 2012). Nevertheless, there is little evidence on the determinants of policy-oriented and political uses of evidence and learning at the individual level, or on how differences between policy issues affect different uses of policy information. What is more, only a few studies have examined different forms of evaluation use in parliament and linked them to learning in the policy process (Amara et al., 2004; Nutley et al., 2007, p. 67).

Secondly, this paper builds on the policy learning literature. Therein, scholars have repeatedly asserted that learning in public policy can follow political goals rather than focusing only on how to deal with a policy problem (Bennett & Howlett, 1992; Biesenbender & Tosun, 2014; Boswell, 2009; Cairney & Oliver, 2017; Dunlop et al., 2018; Greenhalgh & Russell, 2009; Pierson, 2000; Vagionaki & Trein, 2020; Zito & Schout, 2009). The complexity of social and political institutions makes learning about evidence a social experience (Ammons et al., 2008; Hall, 1993). Decision-makers mix political ideas and agendas with information from policy research and experiences (Gilardi, 2010; Gilardi et al., 2009; May, 1992). Cognitive limitations reinforce this mix because decision-makers use cognitive shortcuts to cope with bounded rationality and uncertainty (Braun & Gilardi, 2006; Simon, 1947). Against this background, scholars have theorized that learning can entail political negotiation and happen in the context of hierarchies (Dunlop & Radaelli, 2018). Others have emphasized that decision-makers use research above all if the results fit their dominant political narrative (Boswell, 2009; Henry & Mark, 2003). Empirical research has shown that decision-makers introduce legislation on issues that are salient, but only if scientific uncertainty about these issues is limited (Bromley‐Trujillo & Karch, 2019), and that decision-makers tend to maintain their policy positions in the face of scientific findings (Heikkila et al., 2020). Nevertheless, policy learning research often focuses on policy change (Heikkila & Gerlak, 2013) and requires more analysis of the micro-foundations of policy learning (Dunlop & Radaelli, 2017).

This article draws on research focusing on policy evaluation and policy learning to distinguish (a) policy learning from evaluations and (b) political learning from evaluations. The paper then proceeds to analyze how these two forms of learning from evaluations are linked to salience (Baumgartner & Jones, 2010; Jones & Baumgartner, 2005) and technical complexity of the policy issue (Eshbaugh‐Soha, 2006; Gormley, 1983; Trein & Maggetti, 2020). The empirical analysis in the paper uses a survey on evaluation use by members of parliament in Switzerland as a proxy to examine policy learning and political learning at the individual level of analysis.

Using multilevel regression models, the paper shows that issue salience alone increases neither policy learning nor political learning from evaluations. If policy issues are technically complex, decision-makers are more likely to use evaluations for both policy learning and political learning. Once issue salience (attention to the policy issue) increases, however, decision-makers dealing with a technically complex policy issue employ evaluations for policy learning, but not for political learning. This paper contributes to the literature by unpacking the microlevel dynamics of the learning process regarding the policy and political aspects of policy making (Dunlop & Radaelli, 2017, pp. 306–307). Notably, this research underlines that decision-makers use evaluations for policy learning rather than for political learning if they must deal with salient and technically complex issues, which emphasizes the problem-solving capacity of democratic governance.

Evaluation use as political and policy learning

In the literature, scholars have pointed out that decision-makers use evaluations in different ways. According to Alkin and King (2017, p. 573), the most common forms of evaluation use are instrumental, conceptual, and symbolic. Instrumental use is defined as the utilization of evaluations as a basis for decisions or actions. Conceptual use describes the use of evaluations to better understand an issue and to gain new perspectives or insights on the topic. Symbolic use refers to situations in which decision-makers utilize evaluations not primarily because of the evaluation findings, but because they expect a benefit from including them in their political activities (Knorr, 1977). For example, evaluations can be used to persuade others, justify a position, or legitimize an action. Symbolic use is sometimes also called persuasive, legitimizing, tactical, or strategic use of evaluations (Johnson, 1998). However, despite their conceptual clarity, the different forms of evaluation use are not always easy to distinguish and to observe empirically. Instrumental and conceptual uses differ from symbolic uses in that the former require openness to the assessment and its outcomes (Frey, 2012). This perspective entails that instrumental and conceptual uses of policy evaluation contribute to improving policy problem-solving, since they aim at ameliorating policy measures based on scientific evidence about policy design and policy outcomes. Nevertheless, this rational notion of evaluation use for problem-solving is also present, to some extent, in symbolic use, where actors mobilize the authority of knowledge for political reasons (Boswell, 2009).

Research on policy learning is conceptually similar to the policy evaluation literature. A common and parsimonious definition holds that learning is “the acquisition of new relevant information that permits the updating of beliefs [respectively ideas] about the effects of a new policy” (Braun & Gilardi, 2006, p. 306). In this context, information about the effects of a policy may come from policy evaluations. This definition is based on information processing in the Bayesian framework and is still used in policy process research today (e.g., Nowlin, 2021). Another definition of policy learning is more encompassing. According to Heikkila and Gerlak, policy learning is “(1) a collective process, which may include acquiring information through diverse actions (e.g. trial and error), assessing or translating information, and disseminating knowledge or opportunities across individuals in a collective, and (2) collective products that emerge from the process, such as new shared ideas, strategies, rules, or policies” (Heikkila & Gerlak, 2013, p. 486).
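To make the Bayesian notion of belief updating concrete, consider a stylized example with invented numbers: a decision-maker initially considers a policy equally likely to be effective or ineffective, believes a positive evaluation is far more likely if the policy is effective, and then reads a positive evaluation. Bayes' rule gives the updated belief:

$$P(\text{effective} \mid \text{positive}) = \frac{P(\text{positive} \mid \text{effective})\,P(\text{effective})}{P(\text{positive})} = \frac{0.8 \times 0.5}{0.8 \times 0.5 + 0.3 \times 0.5} \approx 0.73.$$

The evaluation shifts the belief from 0.5 to roughly 0.73. Whether decision-makers use such an updated belief to recalibrate policy instruments or to refine political strategy is precisely what separates the two forms of learning discussed below.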

Like evaluation scholars, who point to different forms of evaluation use, researchers have distinguished different forms of learning in political research (Vagionaki & Trein, 2020). For example, scholars have defined no learning (the absence of learning), blocked learning (lessons learned do not transfer to the organizational level), instrumental learning (learning about policy instruments), and political learning (learning about political strategies) as some of the main learning forms (Bennett & Howlett, 1992; May, 1992; Zito & Schout, 2009). More recently, Dunlop and Radaelli have developed four modes of policy learning (epistemic, bargaining, hierarchical, and reflexive) that assume that decision-makers are homo discentis, i.e., learning and studying individuals (Dunlop & Radaelli, 2018).

Trein and Vagionaki build on this literature and distinguish policy-oriented learning from political (power-oriented) learning (Trein, 2018; Trein & Vagionaki, 2021). Policy-oriented learning entails the updating of ideas to change policy instruments for the better (e.g., Bennett & Howlett, 1992; Jenkins-Smith et al., 2018; May, 1992; Zito & Schout, 2009). Contrariwise, political learning entails learning with an explicit political motivation (Lowi, 1972; Steinmo et al., 1992; Pierson, 1993). Political learning means that decision-makers update their beliefs to maximize their political returns, even if this comes at the cost of implementing ineffective policies (Hertel-Fernandez, 2019; Trein & Vagionaki, 2021).

This article focuses on the individual level of learning and assesses how elected officials learn from policy evaluations. This perspective pays attention to the micro-level of policy learning as conceptualized by Dunlop and Radaelli in their analysis of the learning process (Dunlop & Radaelli, 2017). The article does not, however, focus on how learning changes public policies. Specifically, the paper distinguishes two ways of learning from evaluations: firstly, policy learning from evaluations, in which office holders focus on the findings of the evaluation to improve policy effectiveness; secondly, political learning from evaluations, in which office holders use evaluations in a politically self-interested manner. The paper derives these two concepts from the literature on policy evaluation as well as from research on policy learning. The policy evaluation literature has distinguished analytical from political uses of evaluations (Alkin & King, 2017; Eberli, 2019; Frey, 2012). As Eberli (2019) has pointed out, the two-part distinction between analytical and political use refers to the fundamental difference between an analytical, improvement-oriented logic of use and a political, strategic one. However, the two forms of use manifest in different ways. Analytical use captures the types of instrumental and conceptual use discussed in the literature to date, while political use includes all types of symbolic use, such as persuasive, legitimizing, or tactical use (Alkin & Taut, 2003). We are aware that some scholars have pointed out that these two forms of evaluation use are, to a certain degree, conditional on one another, since political-symbolic use unfolds its effect by relying on the instrumental, problem-solving image of systematically generated knowledge and its use (Boswell, 2009, p. 249). Although it is possible that actors use evaluations analytically to obtain policy-relevant information, elected officials are likely always to retain an intention to (possibly) use evaluations politically (Frey, 2012), notably because successful problem-solving is a political end in itself.

By distinguishing policy learning from evaluations and political learning from evaluations, we assume that decision-makers learn from an evaluation in one way or the other. This assumption is plausible if we analyze decision-makers as homo discentis, i.e., learning and studying individuals (Dunlop & Radaelli, 2018, p. S53). The logical consequence of such a perspective is that evaluation use necessarily results in learning, understood as the updating of beliefs. Consequently, policy-oriented learning from evaluations means that office holders learn from the findings of the evaluation to improve policy effectiveness and thereby the public good. Political learning from evaluations implies that decision-makers learn from evaluations how to achieve their political self-interest. This conceptual separation treats learning as a process at the individual level without connecting it to policy results (Dunlop & Radaelli, 2018). By connecting analytical use of evaluations to policy learning and political use of evaluations to political learning, we widen the theoretical interpretation of evaluation use and improve the empirical grounding of the learning literature.

Policy issues and learning from evaluations

In addition to distinguishing policy learning from evaluations and political learning from evaluations, this article is interested in how differences between policy issues affect the two forms of learning. To answer this question, the paper analyzes two explanations and their interaction. Firstly, it assesses how policy salience affects learning from policy evaluations. Secondly, it examines how the technical complexity of policy issues is linked to learning from evaluations. Thirdly, it examines how these two variables interact.

We know from the literature that the salience of policy issues has an impact on the production of policy reforms (Lieberherr & Thomann, 2020). If an issue receives a high amount of attention from the media and decision-makers, a measurable change in policies is likely (Baumgartner & Jones, 2010; Jones & Baumgartner, 2005). When a policy issue is salient, it tends to be politicized in formal political arenas. For example, if a problem receives considerable attention, such as the remuneration of managers, it will be politicized publicly by political parties rather than in informal arenas behind closed doors (Culpepper, 2010). In such a context, political actors need to take care to stick to their strategic political positions and tend to follow the recommendations of independent experts only within the limits of their party's positions.

Against this backdrop, decision-makers select evidence that fits their preferred narrative (Eberli, 2019). According to Boswell, research does not only inform new policies aiming to solve pressing problems, but “… does indeed play an important political function …” that is different from an instrumental and problem-oriented use of knowledge. In this context, policy information serves to substantiate or legitimize pre-existing policy positions (Boswell, 2009, p. 7). From the perspective of the learning literature, such practices are cases of political learning: based on new knowledge, decision-makers learn (i.e., update their beliefs) to adjust and inform their political strategies, even if this practice entails choosing a policy option that is less effective (Bennett & Howlett, 1992; Boswell, 2009; Pierson, 2000; Zito & Schout, 2009), because they can afford to do so (Bandelow, 2008, p. 746; Deutsch, 1966, p. 111). When a policy issue is highly salient, decision-makers follow this logic and learn from evaluations in a way that reinforces their political positions. Therefore, we hypothesize:

Hypothesis 1a

The more salient a policy issue, the more likely decision-makers are to learn politically from evaluations.

Nevertheless, the discussion in the previous section has pointed out that actors also use evidence and scientific information in a policy-oriented way (Heclo, 1974; Budge & Laver, 1986; Strom, 1990; Evans, 2018). This form of evaluation use and learning captures the policy-seeking and puzzling intentions of decision-makers. In other words, it points to the aspiration of actors to solve public problems through policy (Ansell, 2011). This way of learning from evidence and evaluation resembles an analytical use of policy evaluations (Eberli, 2019; Frey, 2012; Shulock, 1998; Weiss, 1989). In the context of the policy learning literature, this form of evaluation use resembles instrumental learning, i.e., learning to improve public policies for problem solving (Vagionaki, 2020; Zito & Schout, 2009). This literature implies that the more public attention an issue receives, the more likely decision-makers are to learn from evaluations in a policy-oriented way, because they want to solve pressing problems on the foundation of expertise. Therefore, we hypothesize the following:

Hypothesis 1b

The more salient a policy issue, the more likely policy-oriented learning from evaluations becomes.

Policy issues vary in their complexity. According to Sager and Andereggen (2012), complex policies are characterized by the use of different policy instruments with multiple interactions, which makes it difficult to attribute policy effects. In the literature, scholars have distinguished between technically complex issues, such as energy policy and health policy, where decision-makers usually integrate a broad range of policy instruments, and less complex issues, such as labor market policy. In the case of technically complex issues, decision-makers are more likely to consult external experts to better understand how to calibrate different policy instruments. According to Gormley, “technical complexity is high when a policy problem requires the understanding of a specialist or expert, a professional appraisal more than a normative judgment” (Gormley, 1983, pp. 89–90). Admittedly, even technical decisions require normative criteria and judgments, but the distinction between technical solutions requiring expert knowledge and less technical ones is established in the public policy literature (Gormley, 1983, p. 90; Eshbaugh‐Soha, 2006; Trein & Maggetti, 2020). Contrariwise, other policy issues are not as technically complex. For example, unemployment policy requires much less technical knowledge and insight to design and decide on policy solutions than health and energy policy.

These insights imply that decision-makers facing technically complex policy issues could learn from evaluations either in a policy-oriented way or in a political way. On the one hand, policy evidence based on research helps decision-makers satisfy their problem-solving and policymaking intentions. Against the backdrop of a technically complex issue, decision-makers are therefore more likely to use evidence than for issues of limited technical complexity. On the other hand, decision-makers also have incentives to use scientific evidence to learn how to achieve their political goals. In the context of technically complex issues, using insights from research signals competence to voters, even if the actual learning from evaluations has political intentions and might be symbolic, as decision-makers will carefully ensure that the policy preferences they derive from evidence chime with their political goals. Against this background, we formulate the following two hypotheses:

Hypothesis 2a

Technically complex policy issues make political learning from evaluations more likely.

Hypothesis 2b

Technically complex policy issues make policy learning from evaluations more likely.

An important question that follows from this discussion is how issue salience and technical complexity interact in their impact on the different usages of evaluations. Do decision-makers learn in a policy-oriented or in a political way from evaluations if salience increases and they have to deal with a technically complex policy issue? This paper argues that issue salience, i.e., public attention to the policy problem, results in policy learning from evaluations if the policy issue is technically complex. If a policy problem receives a lot of attention from the media, decision-makers want to demonstrate that they seriously aim at problem solving and take expert recommendations regarding complex problems seriously. Contrariwise, it is unlikely that salience increases political learning from evaluations in a context of technical complexity. In this situation, decision-makers face the risk of “being caught” practicing symbolic use of evidence due to public attention on the policy process, especially if they do not take advice from experts seriously (Feindt et al., 2021; Trein, 2018). Against a background of high issue attention, using technically complex policy evidence in a purely political way might result in electoral punishment if voters discover political learning from evaluations that ignores potential recommendations for policy improvement. Thus, we hypothesize the following:

Hypothesis 3

Higher issue salience increases the positive effect of technical complexity on policy learning from evaluations.

Empirical analysis

Data and methods

This study is based on an online survey of cantonal and federal members of parliament (MPs) in Switzerland conducted between May and June 2014 (Eberli et al., 2014). MPs were asked about their general attitudes toward evaluation, their habits in demanding policy evaluations, the ways they use evaluations, and how often they employ evaluation reports. As MPs have a broad understanding of the term evaluation, we provided a definition of the concept in the introduction to the survey: “In this survey, evaluations are interpreted as studies, reports or other documents, which assess a state measure in a systematic and transparent way with respect to its effectiveness, efficiency, or fitness for purpose.” A total of 1570 MPs took part in the survey, which corresponds to a response rate of 55.3%. Compared to similar surveys among Swiss parliamentarians, this percentage is relatively high (footnote 2).

To measure the dependent variables, policy learning and political learning from evaluations, MPs were asked whether they had used evaluations in specific ways during the last four years. To operationalize different types of evaluation use, the paper employs four variables: MPs were asked whether they use evaluations to make a decision (instrumental use) or to learn about policies (conceptual use), and whether they use them to justify a decision (legitimizing use) or to convince others (persuasive use) (Alkin & King, 2017; Eberli, 2019). On this basis, the paper builds two indexes. Table 1 shows the different questions and how they correlate with each other:

Table 1 Empirical approach to learning from policy evaluations
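As a minimal sketch of this operationalization, the snippet below averages the two analytical items into a policy learning index and the two symbolic items into a political learning index. The file name, variable names, and coding are assumptions for illustration, not those of the actual survey instrument.

```python
import pandas as pd

# Hypothetical item names: how often an MP used evaluations in each way
# during the last four years (e.g., on a 1-4 frequency scale).
ITEMS = {
    "policy_learning": ["use_decide", "use_understand"],    # instrumental, conceptual
    "political_learning": ["use_justify", "use_convince"],  # legitimizing, persuasive
}

def build_indexes(df: pd.DataFrame) -> pd.DataFrame:
    """Average each pair of items into a learning index."""
    out = df.copy()
    for name, items in ITEMS.items():
        out[name] = out[items].mean(axis=1)
    return out

mps = build_indexes(pd.read_csv("mp_survey.csv"))  # assumed input file
print(mps[["policy_learning", "political_learning"]].corr())
```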

The independent variables were collected through an expert survey in 2015, as the hypotheses do not only focus on individual characteristics of MPs but also on differences across policy fields. Fischer and Sciarini (2016) show that such cross-sectional comparisons are important for the decision-making process. Hence, we collected a data set on the characteristics of policy fields through an expert survey of Swiss political scientists (Bundi, 2018) (footnote 3). Hooghe et al. (2010, p. 692) suggest that expert interviews are appropriate when reliable information is more likely to be found among experts than in other documentation sources. Since no data are available on the attributes of policy fields in Switzerland, experts were asked to assess them. The expert survey provided the same list of policy fields, with keywords, that was also included in the survey of Swiss MPs. The attributes of the policy fields were pre-defined, and experts were asked to rate each attribute on a scale from 0 to 10. To make the experts' ratings comparable across policy fields, the ratings were standardized to a mean of zero and a standard deviation of one and manually added to the first data set. To match MPs with the salience of a policy domain, we used their parliamentary committee affiliation. Specifically, we linked the expert-assessed salience and complexity of a policy domain to the committee membership of a specific MP. This strategy is appropriate because the Swiss parliament is considered a working parliament with a strong committee system, in which legislative projects are predominantly elaborated in the committees and evaluation use happens in committee meetings (Dann, 2003; Eberli, 2019) (footnote 4).
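The following sketch illustrates this standardization and matching step, again with assumed file and column names rather than the study's actual data structure.

```python
import pandas as pd

# Assumed input: one row per expert rating, with 0-10 scores per policy field.
experts = pd.read_csv("expert_survey.csv")

# Average the ratings per policy field, then z-standardize each attribute
# (mean zero, standard deviation one) so fields are comparable.
scores = experts.groupby("policy_field")[["salience", "complexity"]].mean()
scores = (scores - scores.mean()) / scores.std()

# Attach the standardized field scores to each MP via the policy field
# covered by his or her committee (column names are assumptions).
mps = pd.read_csv("mp_survey.csv")
mps = mps.merge(scores, left_on="committee_field", right_index=True, how="left")
```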

Next to the independent variables that operationalize the hypotheses discussed above, the analyses include several control variables: age, gender, education, urbanization, party membership (footnote 5), membership of the Executive Committee (footnote 6), legislative professionalization, and evaluation demand. Moreover, an additional dummy variable was created to establish whether an MP is a member of an oversight committee, as Bundi (2016) has shown that MPs in an oversight committee are more likely to ask for evaluations. On the structural level, the analysis includes whether the cantonal or federal constitution contains an evaluation clause (footnote 7) and controls for the size of the parliament and the number of parties. The operationalization is summarized in Table 7 in the appendix.

Empirically, the analysis relies on a multilevel logistic regression model, since the observations are nested in groups (parliaments) that have the potential to influence learning from evaluations. According to Steenbergen and Jones (2002, pp. 219–220), ignoring the clustered data structure could distort standard errors and overestimate the importance of the effects. Robust variance estimation relaxes the assumption that the error terms are identically distributed, while clustering further relaxes the assumption that the observations are completely independent. Hence, we use a random intercept model to test variables at the two levels (MPs and parliaments).
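As an illustration of this model structure, the sketch below fits a random-intercept logistic regression in Python with statsmodels. Note that statsmodels estimates mixed logit models through a Bayesian approximation, so this is a sketch of the nesting logic under assumed variable names (and a dichotomized learning outcome), not a replication of the paper's estimator.

```python
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

# Assumed analysis file: one row per MP, MPs nested in parliaments.
df = pd.read_csv("analysis_data.csv")

# Binary outcome (here: a dichotomized policy learning index) with MP-level
# fixed effects and a random intercept per parliament (the level-2 units).
model = BinomialBayesMixedGLM.from_formula(
    "policy_learning ~ salience + complexity + age + female + oversight",
    vc_formulas={"parliament": "0 + C(parliament)"},
    data=df,
)
result = model.fit_vb()  # variational Bayes approximation of the mixed logit
print(result.summary())
```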

Results

We first give a descriptive overview of policy-oriented and power-oriented learning from evaluations before presenting the estimates related to them. Figure 1 illustrates the distribution of learning from evaluations in the sample. It shows that MPs frequently use evaluations in parliament, with a small difference between policy learning from evaluations (mean 2.84) and political learning from evaluations (mean 2.68). Hence, MPs learn from evaluations somewhat more to better understand a public policy than to convince others of their opinion. Nevertheless, this slight difference might also be a result of social desirability, as policy-oriented use and learning usually enjoy higher acceptance than politically oriented use and learning (Bundi et al., 2018). The figure also shows that those who actually demanded evaluations seem more likely to use them for either policy learning or political learning. Nevertheless, these differences are not statistically significant. This finding indicates that even MPs who do not demand evaluations use them for both types of learning.

Fig. 1 Policy learning, political learning, and evaluation demand. Histogram (kernel density function) of policy and political learning in the sample, split between MPs who demanded an evaluation and MPs who did not

Yet how important are policy field characteristics in explaining policy learning and political learning from evaluations, according to the hypotheses discussed in the previous sections of the paper? Table 2 presents the estimates analyzing the link between issue salience and policy learning as well as political learning from evaluations.

Table 2 Issue salience and learning from evaluations

Overall, the different models suggest that issue salience augments neither policy learning nor political learning from evaluations, in particular once we control for parliament-specific variables. Even though the regression coefficient for issue salience is marginally significant in Models 1 and 3, MPs are only about 8% more likely to learn from evaluations in highly salient policy issues than in non-salient policy domains. Moreover, the significance disappears once structural variables measuring the presence of an evaluation clause, the size of parliament, and the number of parties are included (Models 2 and 4, Table 2). In contrast, age is positively related to policy learning from evaluations, while older MPs are less likely to use evaluations to learn politically. Hence, the results suggest rejecting both Hypotheses 1a and 1b, which predict that salience augments political and policy learning from evaluations. In addition, female MPs are 15% less likely than their male colleagues to use evaluations for political learning.

On the structural level, Models 1 and 3 confirm the positive relationship between evaluation demand and learning from policy evaluations (see Fig. 1). The more often MPs demand evaluations, the more often they use them. We also see that the institutionalization of evaluations matters (cf. also Jacob et al., 2015). In parliaments with an evaluation clause in the constitution, MPs are more likely to use evaluations (15.2% for policy-oriented use and 19.5% for power-oriented use). Furthermore, Model 2 suggests that MPs in smaller parliaments are less likely to use evaluations to understand public policies better, which presumably has to do with their limited resources (Bundi et al., 2017).

Next, Table 3 presents the results for the link between the technical complexity of policy issues, on the one hand, and policy learning and political learning from evaluations, on the other. In contrast to salience, Models 5 to 8 suggest that complexity is significantly associated with both forms of learning from evaluations. In the case of a complex policy issue, those MPs who demand evaluations are also more likely to learn from them: a 5.8% increase for policy learning and a 6.4% increase for political learning. This relationship remains significant even when structural variables regarding the parliament are included in the analysis. Thus, the empirical models provide evidence for Hypotheses 2a and 2b, which propose that against the background of a technically complex policy issue, decision-makers are more likely to learn from evaluations in a policy-oriented as well as in a political way than for policy issues of limited technical complexity. Furthermore, the results for the variables at the individual and structural levels are similar to those of the models including issue salience. Overall, then, the analysis reveals few differences between the two forms of learning from evaluations. Does it matter, then, to distinguish these two forms of learning in empirical analyses?

Table 3 Technical complexity and learning from evaluations

The regression models shown in Table 4 demonstrate that it is indeed important to distinguish policy learning and political learning from evaluations. While Model 9 shows a positive and significant interaction effect between issue salience and complexity for policy learning from evaluations, the analyses do not reveal any statistically significant relationship for the same interaction with respect to political learning from evaluations.

Table 4 Issue salience, technical complexity and learning from evaluations

This result implies that a high degree of policy salience has a positive effect on policy learning from evaluations for policy issues with high levels of technical complexity. The finding suggests that MPs learn from evaluations in a policy-oriented way more frequently if the policy domain is both technically complex and salient. Figure 2 illustrates this relationship graphically by demonstrating how an increase in complexity augments the impact of salience on policy learning from evaluations (footnote 8).

Fig. 2 Marginal effect of salience on policy learning by issue complexity. Moderating effect of complexity on the relationship between salience and policy learning from evaluations in parliament. The plot is based on Table 4, Model 9
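To show how such a moderation plot can be generated, the sketch below computes predicted probabilities from a logit specification across a grid of salience values at low and high complexity. The coefficient values are invented for illustration and do not reproduce the estimates of Model 9.

```python
import numpy as np
import matplotlib.pyplot as plt

# Invented logit coefficients: intercept, salience, complexity, interaction.
b0, b_sal, b_comp, b_int = -0.5, 0.05, 0.3, 0.25

salience = np.linspace(-2, 2, 100)  # standardized salience scores
for comp, label in [(-1.0, "low complexity"), (1.0, "high complexity")]:
    logit = b0 + b_sal * salience + b_comp * comp + b_int * salience * comp
    prob = 1.0 / (1.0 + np.exp(-logit))  # inverse logit
    plt.plot(salience, prob, label=label)

plt.xlabel("Issue salience (standardized)")
plt.ylabel("Predicted probability of policy learning")
plt.legend()
plt.show()
```

With a positive interaction coefficient, the slope of salience is visibly steeper for the high-complexity curve, which is the pattern the moderation argument predicts.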

Discussion

The results presented in this paper have theoretical implications for public policy analysis. The first implication is related to issue attention (salience). On the one hand, different strands of literature imply that decision-makers tend to learn politically if a policy issue is salient, because they have already defined their position and follow political goals (Boswell, 2009; Deutsch, 1966; Strøm, 1990). On the other hand, researchers have pointed out that decision-makers are policy seekers and want to improve policies (Zito & Schout, 2009). Consequently, they should learn from evaluations in a policy-oriented sense to demonstrate their ability and willingness to improve such policies, especially if an issue is salient (Ansell, 2011). The results of the analysis in this paper show, however, that issue salience alone does not explain higher levels of policy learning or political learning from evaluations amongst decision-makers. Therefore, scholars should reject the hypothesis that issue salience alone directly increases how much MPs learn from policy evaluations.

Nevertheless, the analysis also shows that issue salience is important for learning from policy evaluations in conjunction with the technical complexity of policy issues. The results show that decision-makers learn from evaluations above all in a policy-oriented way if they face highly salient and technically complex policy issues. This finding supports the argument that elected officials are above all policy seekers and suggests that the “misuse” of evaluations for politically oriented and power-seeking strategies is limited (but not absent), particularly against the background of technically complex and salient policy problems. For example, if issues such as environmental protection or public health become salient, members of parliament will use policy evaluations to improve policies rather than only to pursue their own interests.

This analysis underlines that the political orientations and ambitions of elected officials are not necessarily a challenge for the problem-solving effectiveness of democratic governance (Ansell, 2011; Scharpf, 2003). Although scholars have recently and correctly emphasized that the usage of evidence in public policy follows a political logic (Cairney, 2016; Cairney & Oliver, 2017), this does not mean that decision-makers largely ignore research results and clearly prioritize politics over problem-solving. Especially if a policy issue is pressing and complex, i.e., potential solutions require expert input, elected officials are more likely to learn from evaluations how to solve the policy problem rather than only how to serve their political goals.

However, we should be careful in interpreting our results. First, our research design does not allow causal inference. In this study, we presented factors that prior studies have shown to correlate with evaluation use (Johnson et al., 2009). Moreover, we are confident that learning from evaluations does not influence our main independent variables, since individual behavior can hardly affect how a policy field is perceived. Second, our findings provide only limited implications for policy learning and other forms of evidence use. Yet this might be an interesting avenue for future research.

Conclusion

This paper aims to explain how issue salience and technical complexity affect how elected officials learn from policy evaluations. Using a survey of 1570 members of national and subnational Swiss parliaments, the article demonstrates that decision-makers engage in policy learning rather than political learning from evaluations especially if the policy issue is salient and technically complex. The findings do not confirm the conventional assumption that issue salience increases political learning and decreases policy learning amongst MPs. The empirical material also shows that both political and policy learning from evaluations increase the more technically complex a policy issue is.

The findings of this research have theoretical implications for public policy analysis, as they underline the importance of policy learning from evaluations by elected officials. Members of parliament report learning from evaluations in a policy-oriented fashion, especially if the issues are technically complex and salient. Political learning from evaluations seems less important in this configuration, although it is not absent, since policy learning and political learning from evaluations are correlated with each other. This result implies that selective learning and the cherry-picking of evaluation findings according to predetermined policy positions are less likely, especially against the background of complex problems requiring urgent attention. This is good news for the capacity of democracies to solve, or at least tame, complex policy problems such as climate change, because elected officials overall prioritize “problems over politics” when it comes to the usage of policy evidence.

The results of this research need to be interpreted with the scope conditions of this case study in mind. Switzerland has a multiparty government that comprises all the important parties represented in parliament. Thus, decision-makers need to find policy positions that allow for consensus, especially on important problems. An important next step is therefore to better account for issue polarization and its link to policy learning from evaluations. It is possible that political learning from evaluations becomes more likely if the policy problem is not only salient but also highly polarized. Furthermore, the data are based on self-reported information use and learning by members of parliament, which might overestimate the importance of policy learning from evaluations. Future research needs to find ways to control for this potential bias.

Furthermore, readers should keep in mind that this article started from the assumption that evaluation use entails political or policy learning. This assumption is plausible and justifiable based on the micro-foundations of learning as a theory of the policy process, which conceptualizes the individual as a homo discentis (Dunlop & Radaelli, 2018). From this perspective, the paper contributes to a better understanding of the micro-foundations of the policy learning process (Dunlop & Radaelli, 2017). Nevertheless, this starting point implies that future research should pay more attention to understanding how individuals learn from policy evaluations in the policy process.