Introduction

Feedback from past policies has been highlighted as an important factor in explaining policy developments, often in relation to US policymaking, but also increasingly in studies on European policies (e.g., Bulmer, 2009; Skogstad, 2017). Policy feedback, where “a policy produces an effect that feeds back to policy and affects it in some way or another” (Daugbjerg & Kay, 2020: 255), has in recent years attracted renewed interest in academic debates, and important advancements have been made with regard to disaggregating the dependent variable in order to improve our understanding of the outcomes of policy feedback (e.g., Béland & Schlager, 2019; Daugbjerg & Kay, 2020; Moore & Jordan, 2020). In this regard, the existing literature rightfully highlights the importance of understanding how policymakers interpret and use feedback as a resource in the policy process (Moore & Jordan, 2020: 303; Dagan & Teles, 2015), while largely overlooking the role of the actors providing the feedback. This article instead focuses on the latter: we argue that, in order to understand the different dynamics driving policy feedback, we need to look at the agency of feedback providers.

In developing our understanding of the supply side of policy feedback, we will look at a specific type of actor providing policy feedback: implementing agencies. Before existing policies can affect future policy developments, they have to be put into practice (Moynihan & Soss, 2014). This implementation of policies entails a set of administrative practices performed by specific actors, often bureaucratic organizations, which we shall refer to as implementing agencies, whose experiences can provide policymakers with valuable input on implementation problems and how they can be solved (Haverland & Liefferink, 2012). In this respect, policy feedback from implementing agencies can be regarded as a type of knowledge to be used by policymakers.

Nevertheless, the question of implementation feedback, i.e., how the practical expertise of implementing agencies serves as input for subsequent policymaking, has not been a central concern in studies on policy feedback, let alone within European studies. This is unfortunate, as bureaucratic expertise is regarded as a valuable source for shaping government interventions and changes to policy instruments and settings (Workman, 2015: 152; Hall, 1993), especially in policy areas in which implementing agencies operate at a distance from those responsible for formulating and designing policies and instruments, as in the case of the European Union (EU). The Common Agricultural Policy (CAP) is a case in point where feedback based on practical policy experiences has consequences for how policy instruments are put into practice (e.g., with regard to criteria for receiving “greening” payments). Moreover, looking at implementation feedback allows us to study the agency of actors providing feedback. Therefore, this paper answers the following question: How can we explain what types of feedback are provided by (domestic) implementing agencies to (EU) policymakers?

Current insights on policy feedback show that both positive, or self-reinforcing, and negative, or self-undermining, feedback can occur simultaneously within the same policy program, in relation to different instruments, on different levels, and with the involvement of different sets of actors (Béland et al., 2019; Moore & Jordan, 2020). With respect to feedback from implementing agencies, we argue that the involvement of different sets of actors is not a prerequisite for different types of feedback to occur; one set of actors can provide different types of feedback. Policy programs are complex and often consist of multiple policy instruments and settings. On the one hand, it makes sense to assume that implementing agencies will prefer stability for some instruments because of the potential administrative and organizational costs involved in changing their practices (Béland, 2009; Moynihan & Soss, 2014: 321). On the other hand, there may be instruments to which they wish to see changes, depending on issues that arise during implementation (Sager et al., 2014; Polman, 2018). In order to understand the different feedback dynamics resulting from these preferences, it is not enough to simply look at the policy effects that shape actor preferences for policy change or continuity; we also need to consider whether feedback is provided in response to a specific policy agenda, and the extent to which actors expect their feedback to be used by policymakers. This requires an understanding of policy feedback that goes beyond the limited distinction between positive and negative feedback (cf. Daugbjerg & Kay, 2020).

Moreover, we currently do not know much about the conditions under which different types of feedback are provided. Looking specifically at implementation feedback as a form of expert knowledge, we expect implementing agencies to behave strategically and to focus predominantly on providing the types of feedback that are most likely to be used by policymakers. In this regard, we integrate existing insights on knowledge use to develop expectations about when different types of feedback are provided. The knowledge use literature is especially suited to this end, as it bridges knowledge creation and knowledge use (Weiss, 1979). In addition, the existing feedback literature has pointed out that different types of feedback have to compete with other, sometimes exogenous, political interests (Béland et al., 2019). Therefore, when theorizing about the conditions under which different types of feedback occur, we also discuss other factors that potentially affect feedback use by policymakers.

The contributions of this paper are conceptual, theoretical, and empirical. Conceptually, we build upon the policy feedback literature to present a novel typology of implementation feedback that accounts for the dynamic policy preferences of implementing agencies and the policymakers’ policy agenda. Theoretically, we introduce expectations about knowledge use to policy feedback studies to develop the argument that some types of policy feedback are more likely to be pursued by implementing agencies. Empirically, this study improves our knowledge about the role of implementation feedback in the EU policy process, which is also valuable from a practical perspective because taking implementation experiences into consideration is expected to have a direct effect on the practicability of policy instruments (Elmore, 1979: 601).

A study of the CAP will be used to illustrate our typology. The CAP is a European policy domain in which policy changes take place on a regular basis and in which processes of path dependency and policy feedback are frequently used to explain the outcome of policy reforms (e.g., Daugbjerg, 2003, 2009; Garzon, 2006; Kay, 2003; Kuhmonen, 2018). In the CAP, implementing agencies are also known to have direct access channels to the European Commission (Polman, 2018, 2020). This makes the CAP an appropriate case to illustrate different types of implementation feedback.

In the upcoming section, we elaborate on our approach to understanding feedback from implementing agencies, which results in a novel typology. Subsequently, we explain why the CAP provides a good case to exemplify implementation feedback. After the empirical illustration of the types of implementation feedback in practice, we reflect upon the theoretical implications of these findings in a discussion and conclude with a brief recap and an outlook for future research on implementation feedback and its practical relevance.

Understanding implementation feedback

Domestic implementing agencies are involved in the practical implementation (application and enforcement stages) of EU policies at the member state level, which leads to experiences and knowledge about the implementation efforts and the impact of the policy on the ground. When we talk about implementation feedback, we refer to the information shared with policymakers by implementing agencies on the basis of their practical experiences with the administration of EU policies, with the purpose of affecting new rounds of policymaking. As implementation feedback relies on bureaucratic expertise, it is expected to concern mainly non-controversial and technical policy issues related to policy instruments and their settings, similar to other forms of bureaucratic expertise (e.g., Hall, 1993; Peterson, 1995). The specific expertise of implementing agencies contains important knowledge that is not otherwise readily available to the Commission, which puts implementing agencies in a position of power (Weber, 1978: 225). This is especially the case in the EU, since the European Commission has to rely on external actors for information on the implementation of its policies (Metz, 2015).

Similar to existing policies’ effects on resource distribution and interpretative lenses (Pierson, 1993), this expert knowledge and the preferences of implementing agencies are the result of experiences with existing policies. These feed-forward effects become feedback when they re-enter the policymaking process in order to affect subsequent policy decisions (Daugbjerg & Kay, 2020: 257). Tying these two aspects of the feedback process together requires agency from the actors providing the feedback (Fig. 1).

Fig. 1 The policy feedback process and the role of agency

Implementing agencies are strategic actors with agency and policy preferences that they try to realize (e.g., Beyers et al., 2015; Sager et al., 2014). For implementation feedback, this means that we expect implementing agencies to focus predominantly on providing feedback that is most likely to be used by policymakers. As a first step, we propose a typology for capturing implementation feedback that overcomes some of the ambiguities that arise when applying the concepts of positive and negative feedback to the level of policy instruments and settings. Second, we theorize about the conditions under which we expect to see different types of feedback, based on assumptions about the likelihood that feedback will be used by policymakers.

Building an implementation feedback typology

Policy feedback scholars regularly make a distinction between positive and negative feedback (e.g., Moore & Jordan, 2020), or self-reinforcing and self-undermining feedback (e.g., Skogstad, 2017). The general conception is that positive feedback may lead to self-reinforcing dynamics and policy continuity or even expansion, while negative feedback is assumed to be self-undermining and to trigger disruptive policy changes. The use of the concepts of positive and negative feedback, and of self-reinforcing and self-undermining feedback, can be confusing, however. First, scholars of punctuated equilibrium theory use positive and negative feedback the other way around: negative feedback is related to policy stability, similar to the idea of a thermostat, while positive feedback may trigger so-called punctuations that disturb the equilibrium (e.g., True et al., 2007). In a related fashion, Daugbjerg and Kay (2020) rightfully point out that negative policy feedback triggering changes at the level of policy instruments can be necessary in order to create a self-reinforcing effect at the level of policy goals, while too much positive feedback at the instrument level may lead to drift at the level of policy goals. This terminology can therefore lead to confusion when applied specifically to feedback regarding policy instruments and settings, the level at which we expect the feedback of implementing agencies to take place.

Second, there are numerous actors and interests in play in the policy process, and policy feedback is only one of the factors affecting the policy agenda (e.g., Béland et al., 2019). Distinctions between positive/negative and self-reinforcing/self-undermining feedback alone do not capture how this feedback relates to the larger policy agenda, which addresses changes that are not directly related to the policy experiences and subsequent preferences of implementing agencies. When talking about implementation feedback, we need to account for how this feedback relates to the policy agenda. The notions of positive and negative feedback do not quite grasp these dynamics when the policy agenda contains major changes, while actors provide feedback that attempts to reduce the adverse effects of these changes without rejecting them altogether.

In order to deal with these issues, we propose to look at policy feedback along two dimensions: first, the policy preferences of the actors providing feedback, and second, the policy agenda in which policymakers balance between all relevant interests in the policy domain.

The first dimension captures the policy preferences of implementing agencies. There are two directions for actor preferences based on existing policies: stability and change. First, actors may prefer stability of policy instruments and settings. Following policy feedback literature rooted in historical institutionalism, actors involved in the implementation of policies are expected to prefer stability because of the potential administrative and organizational costs involved in changing their way of doing things (e.g., Béland, 2009; Moynihan & Soss, 2014: 321). This reluctance toward change is expected to serve as a motivation for implementing agencies to mobilize in favor of maintaining existing policy instruments and thus provide policy feedback accordingly (Pierson, 1993).

Second, and alternatively, actors may prefer to see policy adjustments. In practice, policy implementation is often a “trial-and-error” process (Lindblom, 1979), and the experiences from this process are expected to reveal policy problems that need to be addressed. Implementation experiences can point out inefficiency and ineffectiveness of existing policy instruments as well as high or unnecessary administrative and organizational costs. When implementation experiences highlight such concerns, implementing agencies are expected to use their experiences to provide feedback with the purpose of mitigating these issues (Baumgartner & Jones, 2005; True et al., 2007). Therefore, we assume that when the status quo contains issues that frustrate implementing agencies, such as ineffective and inefficient instruments, they will attempt to raise awareness of how to improve the practicability or efficiency of a policy by advocating adjustments to these instruments and settings.

Theoretically, implementing agencies may also prefer changes that go beyond the scope of policy instruments, such as changes to policy goals, or even paradigms. However, we have little or no reason to assume that the feedback based on the experiences of implementing agencies concerns these more radical changes. Like other forms of bureaucratic expertise, implementation feedback is expected to concern mainly non-controversial and technical policy issues related to policy instruments and their settings (Hall, 1993; Peterson, 1995).

The second dimension brings in the policy agenda, which consists of the priorities for which policymakers are taking, or planning to take, concrete steps toward governance interventions (Zahariadis, 2016: 5–6). In setting the policy agenda for a reform, policymakers have to account for the positions of a large number of actors, which may vary across contexts. In the EU, obvious actors whose positions affect the agenda are the Council of Ministers, the European Parliament, relevant target groups, and organized interests (e.g., Daugbjerg, 2009; Klüver, 2013; Tallberg, 2002: 34; Hartlapp et al., 2010). These are all actors with strong institutional or political resources. In relation to this policy agenda, there are two situations in which feedback can occur. First, the expertise-based feedback of implementing agencies can be a response to an existing policy agenda that prescribes changes to policy instruments and settings based on the positions of other (exogenous) actors. Second, feedback can be an attempt to put a new issue on the policy agenda for which policymakers have not yet been making plans.

In this regard, it is important to consider that, in contrast to other interest-seeking actors, the power of implementing agencies to affect the policy agenda lies mostly in their practical knowledge (cf. Dunlop, 2014). However, unlike most epistemic communities, which use knowledge as a critical resource for access and influence, implementing agencies focus less on introducing new ideas and more on the actual implementation of existing ideas (e.g., Polman, 2018). Nevertheless, to policymakers, implementation feedback is a form of expert knowledge that they can choose to use. Before elaborating on the various types of implementation feedback, we need to consider another important factor that we expect to play a role in determining what type of feedback implementing agencies will provide: the expected use of their feedback. We see implementing agencies as strategic actors that observe and interpret the context in which they operate and act accordingly, adjusting their strategy on the basis of their own expectations about when their feedback will be used (see Hay & Wincott, 1998). Subsequently, as resources are limited, implementing agencies are expected to concentrate their feedback efforts where their input is more likely to have an impact on policy outcomes (Panke, 2012). This estimation of success will be based on implementing agencies’ understanding of the policy process and their experiences with earlier attempts at providing feedback.

We assume that implementing agencies aspire to have their feedback directly applied in policymaking the way it is intended, in other words, to have it used instrumentally (e.g., Weiss, 1978: 31–32; Metz, 2015: 33). As a form of bureaucratic expertise, implementation feedback may indeed be suitable for instrumental use in non-controversial, technical, and incremental policy adjustments in which the settings of policy instruments are revised (e.g., Hall, 1993; Peterson, 1995). However, implementation feedback, like other types of knowledge, may also be used differently by policymakers, often conceptually or strategically. Conceptual use means that policymakers use knowledge to improve their understanding of issues (e.g., Hoppe, 2005). Although this type of use may not be the first preference, it can still be a useful effect of feedback when policymakers give more consideration to practicability in designing future policy instruments because of the raised awareness. Strategic use, finally, occurs when policymakers make use of feedback as a form of political ammunition, or even for self-interest or manipulation. This is not necessarily beneficial for the producers of knowledge, and we do not expect this type of use to be the goal of implementing agencies allocating resources to provide feedback to policymakers (Weiss, 1978).

Having established the two dimensions of preferences and the policy agenda, and the strategic considerations related to expected use, we can fill in a two-by-two matrix which results in four types of implementation feedback (Table 1): stabilizing, agenda removal, mitigation and uncertainty reduction, and problem-solving.

Table 1 Typology of variation in implementation feedback

                                  No policy change on the agenda   Policy change on the agenda
Preference for policy stability   Stabilizing                      Agenda removal
Preference for policy change      Problem-solving                  Mitigation and uncertainty reduction

Stabilizing

When there is no immediate change on the agenda and implementing agencies prefer to see stability, this is expected to result in stabilizing feedback, in which implementing agencies positively appraise an existing policy instrument or express satisfaction with the current policy direction. In this situation, there are no changes to policy instruments on the agenda, and implementing agencies are satisfied with the status quo. When there are no policy changes or new instruments to be discussed, this type of feedback offers little more than reassurance that the status quo is not that bad, and it demands little action from policymakers. As stabilizing feedback cannot lead to meaningful change, the potential for instrumental use is very limited. This type of feedback is more suitable for strategic use, as potential ammunition for the Commission to defend the status quo when other actors in the policy process are interested in change (cf. Daugbjerg, 2009). Therefore, we expect that implementing agencies will not put much effort into providing stabilizing feedback.

Agenda removal

In situations in which there are policy adjustments, or completely new policy instruments, on the policy agenda while implementing agencies prefer stability, the challenge for these agencies becomes removing issues from the policy agenda (or preventing them from landing there in the first place). Agenda removal will mostly take place in the early stages of a policy reform process, either when the policy agenda for a reform is determined or during the early stages of formulation, when changes may still be taken out of proposals. This type of feedback will likely follow a small-steps approach, in which arguments and studies based on previous experiences are presented in order to convince the Commission that changes are not desirable or may lead to unnecessary complexity and uncertainty (see Princen, 2011).

For this type of feedback, instrumental use would mean that changes to policy instruments or settings advised against by implementing agencies do not make it into a policy proposal, despite known earlier intentions of the Commission to propose these changes. However, successful agenda removal would almost require some form of conceptual use or radical learning, in which implementation feedback convinces policymakers to disregard earlier beliefs about which changes need to be made (cf. Dunlop, 2014). Moreover, the setting of the policy agenda is a highly political stage of the policy process, during which the Commission has to take into account the policy positions of other actors (e.g., Klüver, 2013; Hartlapp et al., 2010). When the context is more politicized and more actors are involved, instrumental use of technical expertise such as implementation feedback is expected to be less likely (e.g., Hall, 1993; Metz, 2015). As a consequence, implementing agencies are also not very likely to provide this type of implementation feedback.

Mitigation and uncertainty reduction

We expect to see a third type of feedback when implementing agencies prefer adjustments to the changes that are already on the policy agenda. This type of feedback occurs when implementing agencies either welcome or accept the inevitability of these changes. In these situations, we expect to see implementation feedback aimed at reducing uncertainty and preventing unnecessary increases in administrative costs, which will result in attempts to mitigate or control the impact of implementation requirements and settings for new policy instruments.

This type of feedback is provided ex ante, in the form of well-informed conjectures about potential benefits and pitfalls during the design of new or adjusted policy instruments, without rejecting the proposed policy change altogether. The formulation of new policy instruments always brings about some degree of uncertainty about how a policy will work in practice. Offering feedback in order to reduce this uncertainty is in the direct interest of implementing agencies, as it may help them to avoid unnecessary implementation costs. In other words, the expertise of implementing agencies is aimed at reducing the uncertainty surrounding new policy instruments and mitigating administrative burdens, in order to help policymakers design policies that are both efficient and effective (Elmore, 1979; Linder & Peters, 1989). In this sense, implementation feedback serves as a form of policy advice (e.g., Craft & Howlett, 2012).

On the one hand, when this type of feedback suggests adjustments that require the Commission to go completely back to the drawing board, use will be unlikely, considering that more impactful adjustments come with a significant risk of reopening political debate and requiring new consent from veto-players. On the other hand, when the feedback of implementing agencies emphasizes, for example, a lack of clarity in a proposal, it can be dealt with more easily, making use more likely. In the practice of the EU policy process, this type of feedback can be used either in new drafts of legislative proposals or in accompanying legislation, such as delegated and implementing acts, or in complementary guidance documents that deal with the interpretation of policy instruments (see Van Dam, 2017).

Problem-solving

The final type of implementation feedback occurs when there is no policy change on the agenda and implementing agencies have encountered issues with existing policy instruments and settings that need to be addressed. In these situations, feedback is aimed at problem-solving. In this type of feedback, implementing agencies campaign for changes that deal with issues related to the implementation of the existing policy instruments and settings. In this regard, they may target decreased administrative burdens, or more efficient instruments. Problem-solving feedback can be found throughout the implementation of policy instruments, as these issues arise without being related to an apparent policy agenda. In problem-solving situations, implementation feedback may be used for incremental changes to existing policy instruments. This fits the traditional conception of problem-solving, in which experiences are used instrumentally in depoliticized technical issues, with stable decision-making procedures and a co-operative understanding of problems that need to be addressed (e.g., Peterson, 1995; Scharpf, 1997; Weiss, 1979).

Although the problem-solving type of feedback can occur throughout the policy process, it is mostly associated with problems occurring in the implementation stage. For existing policy instruments, it is easier for the Commission to make changes to policy instruments and settings in implementing or delegated acts. In the procedure for these types of acts, the Commission is less scrutinized by the Council and Parliament and is regarded as having more autonomy (Christiansen & Dobbels, 2012). Therefore, the potential for instrumental use of problem-solving feedback is expected to be relatively strong and can lead to adjustments in the settings of instruments to accommodate the concerns of implementing agencies.

It is important to note that these types of feedback are non-exclusive and may change throughout the policy process. For example, agenda removal may turn into uncertainty reduction when—after initial resistance—implementing agencies have come to accept that change is inevitable. Likewise, uncertainty reduction may turn into problem-solving when problems with new policy instruments occur during implementation.

Research design

In our empirical analysis, we will apply the typology laid out above in a study of feedback from domestic implementing agencies in the policy process of the CAP between 2008 and 2016. In the upcoming sections, we elaborate on our research design by discussing case selection and data collection and analysis.

Case selection

In the CAP, implementing agencies provide feedback to the European Commission on a regular basis; therefore, this policy domain is very suitable for studying implementation feedback and its dynamics. The primary implementing agencies of the CAP within the member states are national (or regional) paying agencies (PAs). In this paper, we focus on implementation feedback from these PAs. PAs are tasked with controlling and paying eligible CAP beneficiaries, for which they have to implement systems for monitoring eligibility criteria.

The CAP is known to be a very comprehensive policy domain, consisting of a wide range of policy instruments that are often technically complex and whose implementation can be both difficult and costly (e.g., Polman, 2020). Moreover, these policy instruments are frequently subject to change, because of an institutional structure that requires periodic reviews with a clear policy agenda (e.g., Garzon, 2006). Thus, the CAP encompasses policy debate that allows us, theoretically, to study all types of implementation feedback dynamics.

In addition, the CAP, as a large and dynamic policy domain, offers plenty of variation on the factors that affect the expected use of implementation feedback, such as politicization and veto-player involvement. Reform of the most fundamental regulations of the CAP (the so-called basic acts) is often a matter of high politics, and all important veto-players will be involved, especially considering the distributive nature of the CAP (Matthews, 1996). This legislation is supplemented by a set of implementing and delegated acts, which are often formulated while the basic acts are being implemented, or at least after a decision on these basic acts has been made in the Council. This more technical legislation in particular offers the most likely cases for implementation feedback use. For the formulation of these acts, the Commission encounters less involvement of important veto-players, such as the EP and the Council. In addition, there are the highly technical guidance documents, in which the Commission explains in great detail how specific policy instruments should be implemented. In the period between 2008 and 2016, there were both an extensive reform of the basic acts (CAP post-2013) and a number of smaller technical adjustments made during implementation. In particular, the so-called greening instruments that were introduced in this reform not only required changes to the basic acts, but also implied impactful changes to the day-to-day operations of the PAs.

For our analysis, we use a diverse case selection technique to select representative subcases for each of the different types of feedback, ensuring variation across the types of implementation feedback (see Gerring, 2007). These subcases will be further introduced in the empirical section of this paper.

Data collection and analysis

The main data for our analysis consist of meeting minutes, policy issue papers, and policy proposals. Through content analysis of the feedback from PAs to the Commission between 2008 and 2015, we map the arguments used by these agencies to address policy problems and convey their preferences. PAs have multiple channels through which to provide the Commission with implementation feedback. Two of the most important, and well-documented, channels through which their feedback is shared with the Commission are the Conferences for Directors of Paying Agencies (CDPA) and the Learning Network (LN).

The primary data consist of CDPA meeting minutes taken between October 2008 and May 2015 (N = 14, 92 pages), documents from the LN in the form of policy issue papers and working plans (N = 19, 178 pages), and minutes from meetings with the Commission (or representatives of DG AGRI) (N = 23, 93 pages). A full list of the documents is provided in the “Appendix.” Implementation feedback has been distilled from these documents and structured on the basis of whether the feedback is aimed at stability or change, and whether it applies to existing legislation or to a policy agenda. In order to assess the policy agenda, we have looked at the extent to which the issues raised in the feedback were directly related to policy proposals or communications by the Commission, as well as to the previously mentioned documents. In this step, we also used secondary literature on the post-2013 reform. More specifically, we have identified feedback types by looking for statements explicitly making one of the following demands: (a) the Commission should not consider any changes to a specific element, without referring to existing plans for such change (stabilizing); (b) the Commission should reconsider intended changes to specific elements which are being discussed publicly (agenda removal); (c) the Commission should consider making adjustments to newly proposed legislation (mitigation and uncertainty reduction); and (d) the Commission should consider changes to existing legislation, without referring to existing plans for such change (problem-solving).
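
The coding rules (a)–(d) amount to a simple decision over the two dimensions of the typology: the preference expressed in a statement (stability or change) and whether the statement responds to an existing agenda for change. The sketch below is only an illustrative summary of that logic; the function name, its boolean inputs, and the labels are our own hypothetical shorthand, not software used in the analysis.

```python
# Illustrative sketch of the coding logic for rules (a)-(d); a hypothetical
# helper, not part of the tools used in the study.

def classify_feedback(prefers_change: bool, change_on_agenda: bool) -> str:
    """Map the two typology dimensions to one of the four feedback types."""
    if not prefers_change and not change_on_agenda:
        return "stabilizing"                           # rule (a)
    if not prefers_change and change_on_agenda:
        return "agenda removal"                        # rule (b)
    if prefers_change and change_on_agenda:
        return "mitigation and uncertainty reduction"  # rule (c)
    return "problem-solving"                           # rule (d)


# Example: a statement asking the Commission to adjust newly proposed legislation.
print(classify_feedback(prefers_change=True, change_on_agenda=True))
# -> mitigation and uncertainty reduction
```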

In addition, we draw upon eight semi-structured interviews, conducted with policy experts from both DG AGRI and the Dutch, Swedish, and British PAs, all of whom were active participants in the LN and CDPA. These interviews serve as background information in assessing both the motivations for providing feedback and the motivations for its use by the Commission. An overview of the interviews is given in “Appendix 1.”

Implementation feedback around the post-2013 CAP reform

In this analysis, we discuss the different types of implementation feedback and their use by the Commission in the period surrounding the post-2013 CAP reform. Each type of implementation feedback is discussed using subcases, in which we address the policy preferences of the PAs, why an issue was important for PAs to provide feedback on, and the extent to which these issues were acted upon by the Commission.

Stabilizing feedback

As expected, stabilizing feedback does not occur frequently, at least not in the minutes of meetings between PAs and Commission representatives, or in policy papers. We could identify only one clear example, found during the agenda-setting stage for the post-2013 reform. On this occasion, PAs explicitly expressed their satisfaction with the general management framework for the agricultural funds, which had been set up in the 2005 reform and differentiates between the European Agricultural Guarantee Fund (EAGF) for market measures and the European Agricultural Fund for Rural Development (EAFRD) for rural development programs:

“The general management framework of Council Regulation (EC) No 1290/2005 on the financing of the common agricultural policy (…) is assessed as positive for both the EAGF and the EAFRD” (CDPA 4).

Making changes to this general management framework was not on the agenda, and it can be assumed that the Commission had little reason to make further changes to this issue, as this was also the official position of the head of DG AGRI (Daugbjerg & Swinbank, 2011: 139). Moreover, the structure of the general management framework was not a point of discussion throughout the subsequent reform process, and PAs did not mention their positive assessment of the general framework again in later interactions. It was also not mentioned in any of the interviews as an important issue for PAs. No further evidence of stabilizing feedback was found throughout the post-2013 reform process. This is not surprising, as there was much to discuss about the new policy instruments that were on the Commission’s agenda to achieve the so-called greening of the CAP. This brings us to the next mode of feedback: agenda removal.

Agenda removal

Between the first communication of the Commission on the CAP post-2013 in 2010 and its first proposal in October 2011, the agenda setting for the upcoming reform was in full force. In this period, we expect to see feedback from PAs aimed at removing issues from the Commission’s agenda. In particular, since the policy agenda included the introduction of new greening instruments that required significant adjustments by PAs, this can be seen as a most likely case for agenda removal feedback. At the same time, however, PAs were very conscious of the difficulties of removing issues from the Commission’s agenda, something they “had not yet figured out” (interview 2).

The most controversial and potentially burdensome issue on the Commission’s policy agenda was the introduction of a set of new greening instruments, which would require new procedures and control systems for PAs. Because of these potentially impactful changes to the policy instruments and their costly administrative consequences, we would expect PAs to be very active in attempting to remove some of these instruments from the Commission’s change agenda.

Indeed, in 2010 and early 2011, concerns were raised about the practicability of these new instruments. However, this feedback was mainly aimed at keeping implementation concerns in mind when formulating policy changes, as PAs realized that the introduction of the new greening instruments was a clear political decision in which they had no role to play (interviews 2, 3, 6, 8). PAs explicitly decided not to intervene further in these political discussions, but to focus on the more technical discussions concerning the practicability of the instruments (LN A12). As a consequence, the feedback communicated in this stage of the CAP reform focused on raising awareness about the potential administrative costs of the changes that the Commission was considering and on insisting that the introduction of new instruments should not lead to an increase in the costs of control (e.g., CDPA 6).

We therefore did not observe explicit attempts by PAs to convince the Commission to remove any of the potential new instruments from the agenda altogether, either in meetings of the LN or in the CDPA. The feedback provided during the agenda-setting stage for the new greening instruments was limited to general statements opposing additional administrative burdens, without suggesting that the new greening instruments should not be introduced. When zooming in on the level of the technical settings of the new policy instruments, however, there were some attempts to remove or change specific elements of new instruments in the Commission’s proposals. Nevertheless, these were not attempts to remove an issue from the agenda, but attempts to mitigate the implementation burdens.

Instead of further interfering in the political discussions that were still taking place between the European Parliament, the Council, and the Commission, PAs opted for a strategy of addressing most of these issues during the formulation of implementing and delegated acts (LN A12). In these acts, the technicalities of implementation are at the forefront of the debate, and the politically sensitive changes to policy instruments have already been formalized and laid down in the basic acts agreed upon in the reform, which is subject to the ordinary legislative procedure (OLP). These findings indicate that the lack of agenda removal can be attributed to the political weight of the greening of the CAP on the Commission’s agenda, and to the unwillingness of PAs to interfere in this process. PAs embraced the inevitability of the Commission’s agenda and focused on putting the practicability of the new reform in the foreground.

Mitigation and uncertainty reduction

The most common mode of feedback during the formulation of, and decision making on, the basic acts of the CAP was that of reducing or mitigating uncertainty and potential costs. During the period surrounding the post-2013 reform, the lack of explicit agenda removal feedback was more than made up for by attempts to reduce the uncertainty surrounding these new instruments, or to reduce the implementation costs of upcoming changes. In this type of feedback, too, we clearly see that PAs abstain from involvement in political discussions. Even when the Commission explicitly asked for input on its proposal, PAs indicated on all controversial issues that they wanted to abstain from “political discussions” (LN A3; LN B11). According to a senior official of DG AGRI involved in receiving feedback from the LN, this was also perceived as part of the strength of the PAs: their input was really seen as technical expertise (interview 5).

The proposed introduction of new instruments requiring new control systems was predominantly perceived as a major threat by PAs (interview 6). The PAs’ general position was that the new regulations should be “unambiguous and contain clear-cut definitions which will reduce the need for interpretative notes issued by the Commission” (CDPA 6). In line with this position, the Commission was frequently asked either to provide clear definitions of elements of the newly introduced greening instruments or to deliver guidelines on how to implement these instruments. This often happened in combination with references to predicted problems with the practicability or efficiency of these instruments. For example, after the Commission presented the first proposal for the basic acts, PAs highlighted that it was unclear how the new greening instrument of Ecological Focus Areas (EFAs) should be implemented:

“It is not clear when an area is determined. For instance, the farmer has effectively created an area for EFA, but has not met all requirements regarding EFA. Is that area correct, incorrect or something in between correct and incorrect?” (LN A8: 6).

This is an example of an issue that has to be addressed neither during the formulation of the basic acts nor in the delegated and implementing acts. Instead, the Commission used this feedback to clarify matters in a so-called guidance document (European Commission, 2015; interview 6). This is a highly depoliticized policy document written up by the Commission with the goal of providing clarifications for practical implementation, formulated after the political decisions on the basic acts of the CAP. In fact, most of the implementation feedback related to lack of clarity required no actual changes to the basic acts, but concerned issues that could be addressed in other types of legislation. There were even some technical issues brought to the attention of the Commission that could be addressed through more detailed explanations in specific meetings (interview 8). In this sense, the feedback of PAs was aimed predominantly at reducing the level of uncertainty surrounding the implementation of the new greening instruments as decided upon during the OLP, albeit on a very technical level.

Similarly, during the formulation of the new greening instruments there were sporadic attempts to remove specific difficult-to-implement elements from the Commission’s policy proposal in order to mitigate the implementation burdens of new policy instruments. For example, winter soil cover, a landscape element that would be part of the new EFA instrument, was removed from the Commission’s reform proposals because PAs indicated that it would be too costly and difficult to control (interview 4). Unfortunately for the PAs, winter soil cover was put back in by the European Parliament during the OLP (LN A9). However, the Commission’s agenda for change was not immediately resolved once the basic acts were published. The newly appointed agricultural Commissioner Hogan made further simplification of the CAP his top priority, as the Commission was aware of difficulties with the implementation of greening, and immediately announced a review of the greening after one year of implementation (Hogan, 2014). As a result, PAs further attempted to mitigate the increased costs of control, for example by suggesting further changes to the EFA landscape elements. Adjustments to the EFA were indeed made on the basis of the feedback from PAs, who were not able to map all the necessary landscape elements; some elements were therefore removed, in line with the idea of simplifying the administration of the CAP (interview 3).

Another impactful item on the Commission’s agenda for the post-2013 reform was the inclusion of a definition of active farmers. This definition was introduced to guarantee that agricultural funds would actually end up with farmers, and not, for example, with telecom companies owning large tracts of land for radio masts and towers, or with golf and country clubs. Because this definition obviously had consequences for the distribution of the funds, it was a politically salient issue (Rutz et al., 2014). Despite the political nature of the issue, PAs provided the Commission with feedback on how this provision should be formulated, because from the outset it was clear to them that this new provision would be very hard and costly to implement (LN A4). PAs warned, for example, that “the end result of [the Active Farmer] application might not overcome their (sic.) administrative cost that they introduce. Thus, the abolishment or the fundamental repositioning is a need” (CDPA 8). However, this issue was not further addressed by PAs at this specific time.

Moreover, despite some understanding from the Commission’s side, providing unambiguous and easy-to-check boundary criteria was not politically feasible, as some member states and farmer interest groups were in favor of more loosely defined criteria (D’Andrea & Lironcurti, 2017). The final policy proposal for the post-2013 CAP reform tried to deal with this difficulty by giving member states more flexibility in adjusting the boundary criteria, leaving the PAs with a policy instrument that was very costly, or in some cases impossible, to implement (CDPA 11, interview 8).

The initial formulation of the active farmer provision (AFP) in the post-2013 reform illustrates that, during the policy formulation phase, implementation feedback is unable to compete with the agenda of the Commission and member state governments. This is especially the case for a politicized instrument such as the AFP, for which the distributional consequences are very tangible, as it has the purpose of limiting eligibility for CAP funds. However, once the reform process had ended, changes to the AFP became possible in a mode of problem-solving.

Problem-solving

Finally, there is the problem-solving mode of feedback. This is illustrated by two examples: revisions to the AFP and advancements in allowing the use of satellite data for administrative controls. After the storm of the post-2013 reform (Swinnen, 2015), PAs kept pushing for changes to the AFP because of the problems perceived during implementation. Once the political decision concerning this new instrument had been made, there was more room for a discussion of the actual implementation. In this case, PAs pointed to problems arising during the implementation of specific elements of the new AFP:

“There is no need in having a rule on active farmers regarding the negative list. The land will be leased to somebody else only resulting in administrative costs and burden for farmers an administration with no effect. Solution: The use of the negative list should become optional for member states” (LN A17).

The Commission, in which the new agricultural Commissioner, Phil Hogan, was looking for ideas to simplify the CAP, was very receptive to this type of input. A senior staff member of DG AGRI indicated on this topic that the Commission had to adapt to the realities of implementation (interview 3). Because changing the AFP required a change to one of the basic acts, agreed upon through the OLP, a specific window for change was necessary. This came in the form of the omnibus regulation, a large revision of the rules for the use of EU funds with the purpose of making them simpler (Council of the EU, 2017). As a result, the AFP was changed, explicitly on the basis of the experiences of PAs:

“Moreover, the experience of some member states is that the difficulties and the administrative costs of implementing the elements relating to the list of activities or businesses as provided for in Article 9(2) of Regulation (EU) No 1307/2013, has outweighed the benefit of excluding a very limited number of non-active beneficiaries from the direct support schemes. When a member state considers this to be the case, it should be able to discontinue the application of Article 9 thereof in relation to the list of activities or businesses” (Regulation (EU) 2017/2393, preamble, recital 30).

Another example of the problem-solving mode of feedback, for which input from PAs was used by the Commission, is the approval and development of the use of satellite imagery for administrative checks. In the initial stages of the implementation of the post-2013 reform, it was still technically impossible to control for all the requirements laid out in the new greening instruments. In order to overcome this problem, PAs informally requested the EU Copernicus satellite program to develop an advanced system for using satellite data for agricultural controls (Copernicus, 2017). To facilitate the implementation of the new greening instruments, PAs argued in favor of using satellite imagery as much as possible, in order to reduce the administrative burdens arising from all the additional checks (CDPA 9). The Commission was receptive to this idea, which was put on the agenda by the PAs, and agreed that it would lead to more cost-effectiveness (CDPA 10). In addition, PAs invited representatives of the Commission to the European Space Agency in Noordwijk during the Dutch presidency in 2016 with the purpose of informing them about the potential of using satellite data.

Commissioner Hogan welcomed the initiatives by the PAs and agreed to use input from national administrations to develop an operational model and launch a large-scale pilot (Evans, 2017). In 2018, this resulted in the Commission’s decision to officially allow PAs to make use of Copernicus satellite data as evidence in checking farmers’ fulfillment of greening requirements (Implementing Regulation (EU) 2018/746). These developments were highlighted by PA representatives as some of the most important achievements since the establishment of the LN (interviews 6, 7). In promoting and developing the use of satellite imagery for administrative controls, the PAs also paved the way for new policy instruments. This new technology, for example, allows PAs to monitor compliance with specific environmentally beneficial policy instruments, such as crop rotation, without exorbitant administrative costs.

However, problem-solving feedback does not always receive a warm welcome from the Commission. This was the case, for example, with regard to increasing the flexibility of the timing and decreasing the number of farm visits in so-called On The Spot Checks (OTSC). The Commission’s standpoint on these OTSCs was always rigid: “easing administrative controls is not possible” (LN B21). The failure of the PAs to reduce the number of controls may be related to the fact that the preferences of the PAs did not overlap with two important priorities of the Commission. In the first place, the Commission aims to assure maximum accountability, in which the idea of reducing the number of controls has no place (interview 2). Second, simplification for farmers is not the same as simplification for administrations. The Commission prefers simpler rules for farmers over simpler rules for administrations, and fewer OTSCs for PAs would likely imply more work for farmers to ensure accountability (Hogan, 2016; interview 2).

Discussion

The empirical examples presented above illustrate that the typology is a fitting tool for classifying policy feedback. We show that different issues fit different types of implementation feedback, largely in line with our expectations. The typology also allows the formulation of more detailed expectations about which types of feedback will be provided. In this discussion, we reflect on the theoretical and conceptual implications of our analysis.

First, not surprisingly, there appears to be little or no real effort by implementing agencies to inform policymakers that some of their instruments work just fine. In these situations, there is also no real potential for instrumental use, making stabilizing feedback an almost redundant category in practice.

Second, based on classic policy feedback theory rooted in historical institutionalism, we expected to see attempts at agenda removal feedback, especially when new instruments on the policy agenda were perceived as threats to existing institutionalized routines, as was the case with the introduction of new greening instruments in the CAP. In practice, however, there were no explicit attempts to remove new instruments from the agenda. In our case, this absence of agenda removal feedback can be attributed to implementing agencies making strategic decisions about what types of feedback they provide. Implementing agencies intentionally abstained from political discussions and focused on the technical policy dimensions instead, in which their feedback was more likely to be used. Moreover, they showed awareness of their role in the policy process and of the technical nature of their expertise. In this sense, they attempt to minimize the strategic or political use of their expertise by the Commission, while maximizing the potential for instrumental use, which is expected to result in more practicable policy instruments. This further emphasizes the importance of actor agency and strategic behavior in understanding feedback processes.

Third, as a consequence, we see that most instances of feedback were aimed either at problem-solving or at mitigation and uncertainty reduction, and related to issues for which the Commission could instrumentally use the input from implementing agencies to improve the practicability of newly formulated or existing policy instruments. However, in the high politics of the multi-veto-player formulation stage of the basic acts of the reformed CAP post-2013, there was little room for feedback from implementing agencies to affect policy formulation by the Commission. During this stage, implementing agencies mainly provided feedback on issues that required more detailed explanations or clarifications, did not challenge the Commission’s policy agenda, and did not require the Commission to go completely back to the drawing board. In the changes to the AFP, we could see that issues that remain unaddressed as mitigating feedback can persist and turn into problem-solving feedback once the high politics related to the agenda-setting, formulation, and decision-making stages have passed. Potentially, it can be a strong strategic move to intentionally postpone feedback until the implementation stage and present it as problem-solving, in order to enhance its potential use.

Fourth, we also find evidence that the Commission actually uses these types of feedback in the formulation of new legislation, mostly in the specific legislation dealing with technical issues, such as implementing and delegated acts, and guidance documents. This also means that implementation feedback is not very likely to affect policy outcomes beyond the settings of policy instruments, which is in line with expectations from seminal works on orders of change by Hall (1993) and decision making in the EU by Peterson (1995), who expect that bureaucratic expertise and feedback mainly affect these kinds of first-order changes.

Fifth, the focus on the agency of implementing agencies has led to interesting insights on the strategic behavior of these actors and on the role of actor agency in feedback processes in general. We show that feedback-providing actors make intentional strategic decisions about providing different types of feedback. In our empirical illustration, this is most explicitly the case when implementing agencies choose to focus on technical discussions and provide a type of mitigating feedback, rather than attempt to remove issues from the policy agenda. This strategic decision is based not only on the policy context, but also on the role perceptions of the feedback-providing actors. Implementing agencies clearly see themselves as technical experts with a specific role that is not political. However, other actors may perceive themselves differently and, as a consequence, make different strategic decisions. For example, environmental interest organizations may see themselves as defenders of public goods, supported by a larger public, which may lead to different trade-offs.

Finally, adding the dimension of expected knowledge use helps us to improve our understanding of the strategic decisions of actors providing policy feedback. In line with our expectations based on the knowledge use literature, we see that implementing agencies, as knowledge creators and feedback providers, focus on instances of feedback for which instrumental use is most likely (Weiss, 1978). However, this focus on instrumental use does not mean that feedback cannot be used by policymakers in other ways, either intentionally or as a byproduct of feedback intended for instrumental use. Moreover, continuously emphasizing concrete issues with the practicability of policy instruments may result in conceptual use of feedback by policymakers. This is the case when feedback over time results in a new way of approaching the design of policy instruments, in which implementation concerns are taken into account at an earlier stage. This means that policymakers change their understanding of what to account for in policy design. In this sense, conceptual use can be a byproduct of feedback intended for instrumental use. The possibility of this type of conceptual use may be an additional incentive for actors to provide feedback, and more research is required to understand the actual conditions for conceptual use of implementation feedback by policymakers.

Conclusion

The aim of this paper was to explain what types of feedback are provided by (domestic) implementing agencies to (EU) policymakers. Instead of focusing on the outcome of policy feedback, we have developed a novel typology that allows us to structure different types of feedback and the considerations that lead actors to engage in these different types. We then linked the different types of feedback to expectations about their use by the Commission and illustrated the occurrence of these types of implementation feedback provided by domestic implementing agencies in the period surrounding the CAP post-2013 reform.

Conceptually, this typology was developed to account for the interests and behavior of implementing agencies and to prevent the confusion that can occur when using terms such as positive and negative feedback, by also including whether feedback is a response to existing policies or to a policy agenda for change. The typology can be used to understand the dynamics between actors seeking influence and policymakers. It may also be applied to other EU policy domains in which implementing agencies are involved in providing feedback, such as EU water governance (e.g., Van Eerd et al., 2018), or to different types of actors, such as farmers’ organizations or environmental groups.

Theoretically, studying different policy domains or different actors will lead to different expectations about which types of feedback will occur. In other, especially less distributive policy domains, which are potentially less politicized, specific types of feedback, such as agenda removal, may be more likely to occur. Similarly, other types of actors may make different trade-offs concerning the expected use of their feedback and will have a different role perception. By adding the dimension of expected feedback use by policymakers, we can formulate more precise expectations about what types of feedback will occur and when. We thereby emphasize the role of actor agency in the feedback process: Actors are more likely to provide feedback when they believe it has a reasonable chance of being used, which is strengthened by the particular role perception of implementing agencies we found in the empirical analysis.

We also found indications that the impact of implementation feedback may lead to increasing returns and even stronger effects at a later time. The influence of PAs on introducing and developing the use of satellite data in the CAP may have paved the way for even more impactful future changes, such as the introduction of 100% monitoring in the latest reform proposals for the CAP post-2020. How implementation feedback can contribute to increasing returns is a direction for future research.

Moreover, insights from this study help to improve our understanding of the EU policy process, by showing under which conditions implementation feedback plays a role in the formulation of policies. This is especially relevant since adaptations to policy instruments and settings occur on a daily basis and have tangible consequences for target groups. Empirically, our observations add to the rich body of literature on the CAP by investigating the role of implementing agencies in European agricultural policymaking. Even in a highly contested policy domain like the CAP, we find that implementing agencies play a role in providing feedback which, when used by policymakers, can have significant distributional effects over time. In more technical policy domains, the role of these agencies may be more visible in policy outcomes. In addition, this study adds to the growing body of literature on the European administrative space, which aims to understand the role of regulatory bodies and implementing agencies in the EU policy process (see Mastenbroek & Martinsen, 2018).

In conclusion, we show that implementation feedback plays a role at the level at which most EU legislation is made: that of policy instruments and settings (Peterson, 1995: 75). In this regard, this research raises important points for those interested in the practicability of EU policies, such as policymakers, but also for other actors involved in policymaking. Especially for policymakers, it is relevant to be aware that feedback from implementing agencies may help to make European policies more practicable, reaping the fruits of policy feedback in order to increase the output legitimacy of future policies.