Long-term multi-criteria improvement planning

https://doi.org/10.1016/j.dss.2021.113606

Highlights

  • Multi-criteria benchmarking is used to generate strategic action plans.

  • A given improvement is inspired by a combination of different observed performances.

  • Resistance to change is included in strategic planning.

  • Software and open source code (adaptable for future research) are available.

Abstract

This paper proposes a novel framework to support the generation of strategies for multi-criteria long-term improvement. It can be applied to general preference models, but is illustrated in this article on a Multi-Attribute Value Theory model. The novel contributions to the literature are twofold. Firstly, the framework addresses the issue of resistance to change that may arise during the implementation of a strategy. It constrains each step of improvement to focus on a single criterion, and minimizes the intensity of operational changes. Secondly, it addresses the realism of the improvement scenarios by treating three types of structural dependencies differently: positive synergies, negative synergies and bottlenecks. The scenarios are generated by finding a set of efficient solutions to a shortest path problem in a graph whose edges represent possible steps of improvement. Each edge is characterized by an increase of rank or level and by two penalty functions relating to the difficulty of its execution, one representing a risk associated with bottleneck mechanisms and the other the operational change relative to the previous edge. A case-study using the Shanghai Academic Ranking of World Universities is presented in order to illustrate how this framework could be used to generate a sequence of strategic actions for the Université libre de Bruxelles.

Introduction

Multi-criteria improvement planning is a complex problem that involves a group of stakeholders who are concerned with the performance evolution of a given entity (such as a company, a country, an administration, etc.).

The stakeholders can be divided into three categories. Firstly, the decision maker(s), who plan and study how to modify the performances of the considered entity. Secondly, the worker(s), who are involved in the operational implementation of the planning. Thirdly, the receiver(s), on whom the decision has an influence and who possibly have, in turn, a role to play in the acceptance and implementation of the decision. Due to the resistance to change that can arise from the workers and the receivers [8,14], improvement planning should be studied with the aim of limiting systemic changes (short-term operational changes) for a given strategic change (long-term vision that acts as a constraint for the improvement) [29]. As a corollary, McNair & Leibfried [29] advocate that "systemic change is possible but it cannot be done in every part of the organization at once" and that "changes have to be focused on achieving a new vision of the organization that will replace existing practices and traditions". This observation constitutes one dimension of the complexity of improvement planning faced by the decision makers.

In addition to this first difficulty, the decision makers must be able to model the problem in an appropriate way. This involves the definition of indicators which will be used to evaluate the entity and its possible evolution, based on available data or on the judgment of experts. As explained by Roy [37], a crisp representation of reality by a limited number of indicators is often a source of errors. This adds another layer of complexity to improvement planning. Moreover, if the present and the observed are already hard to model, the future is even more so. Besides, most of the time, these indicators are in conflict, i.e. it is usually impossible for a given entity to perform well on all indicators simultaneously. As a consequence, the decision makers need a methodology to be able to compare different entities, each with its own advantages and drawbacks, in relation to the others. On top of that, they will have to decide on which indicators, and to what degree, the performance of the studied entity could realistically and desirably be improved.

Finally, the improvement can be planned once, typically for short-term improvement planning, or repeatedly for a long-term evolution. This further increases the complexity of the task.

The comparison of entities evaluated on different indicators is at the core of Multi-Criteria Decision Analysis (MCDA). In this context, tools have been developed to help reveal the preferences of the decision makers in order to properly model the problem. Different families of MCDA methods have been proposed in the literature and vary depending on the underlying assumptions related to the preference structure of the decision makers. These have been applied in different fields [4], including budgeting and resource allocation [26], geographical information systems [24], health care [18], etc.

Within operational research, another field called Data Envelopment Analysis (DEA) also focuses on the comparison of entities based on multidimensional evaluations [3,11]. In this context, indicators are considered to be either inputs (that the entity consumes) or outputs (that the entity produces). Here the focus is on assessing the efficiency of entities (called Decision Making Units) in relation to each other. Typically, the aim is to give a recommendation for an inefficient entity to become efficient (in one or several steps). DEA is strongly related to benchmarking, which is a systematic search for the best practices that will help the compared entities improve and reach the best performances [10]. If several consecutive performance modifications are suggested, the process is called stepwise benchmarking. This has been introduced in order to ease the process of improvement, or to make it more realistic, with regard to the similarity of each intermediate step (see Park [31]), to context-dependency (see Seiford & Zhu [41]), or to the risk of failure (see Ghahraman & Prior [19]), among others [1,16,25]. Let us note that all these contributions propose that each intermediate step match exactly an observed entity's performance. Alternatively, other works allow fictitious (not observed) steps [27,28].

Stepwise benchmarking has mainly been studied for problems that fit the DEA assumptions (i.e. the existence of indicators that can be explicitly viewed as inputs and outputs). Less attention has been paid to problems that do not fulfill these hypotheses and that typically belong to the MCDA field. A first family of contributions to generate and plan scenarios of improvement can be found in Petrović et al. [[32], [33], [34], [35]]. These approaches explicitly take into account the decision makers' preference structure. Petrović et al. [33] introduced stepwise benchmarking to the MCDA domain, using a modified version of the ELECTRE II method [38] developed in Bojković et al. [6]. They propose a discrete approach to model the performance evolution of each entity into a simple measurement system. An ordered partition of the entities is built, representing groups of entities from less preferred (which correspond to lower layers) to more preferred (which correspond to higher layers). They developed an algorithm to find the optimal path of improvement among these layers by minimizing the Euclidean distance between each pair of consecutive steps. In addition, they propose a sensitivity analysis of their model [34].

Another recent contribution involving multi-criteria improvement planning is Post Factum Analysis (PFA), proposed by Kadziński et al. [23] and later applied and developed by Ciomek et al. [12] as well as Dutta et al. [15]. It is strongly related to the robustness concern associated with a given recommendation. Among other results, the authors provide an interactive optimization method to evaluate the most realistic and attractive option (based on an individual's judgment represented by additive value functions) to improve the rank of a given entity by modifying a predefined subset of criteria. Their approach is very interesting in the context of a single-step improvement. However, in the more general frame of sequential improvements, it becomes harder to manage, as the realism and feasibility of each step should be estimated by the decision maker.

The different approaches presented above are tools for benchmarking several observed entities and generating scenarios of improvement in the formal representation, i.e. in terms of evaluated performances, not real-life actions. Regarding the practical interest in benchmarking, it has been used as a monitoring tool for cities, countries, companies, etc. Since the early 2000s, the European Union has been promoting benchmarking as a monitoring tool to help each country learn the best practices from others [2,7]. Stepwise benchmarking has been applied to banks [1], printer retailers [41] and European telecommunications [35], and will be used in the future as the European Commission and the European Securities and Markets Authority (ESMA) strive for the regulation and stability of financial benchmarks in the banking sector, in order to assess financial risks regarding derivatives, loans, etc. [22,42]. In addition, let us note that other benchmarks exist and could be used as tools for stepwise improvement: university rankings (the Academic Ranking of World Universities [5], the Times Higher Education ranking), country or city rankings (the UN's sustainable development goals, the Environmental Performance Index, the IMD Smart City Index), etc. In this context, the DEA assumptions (i.e. the explicit presence of inputs and outputs) are not naturally satisfied.

The tools available to plan a sequence of improvements are mostly focused on finding similar observed entities to learn from. However, as the generated scenarios constitute a basis for building real strategic options, more attention should be paid to easing the implementation of improvement planning with regard to the stakeholders (which involves focused and limited systemic changes), as well as to easing the investigation of how to match improvements in the formal representation to real-life actions. Moreover, for the sake of realism, the vast majority of the methods limit the possible future performances to those of other observed entities, which is not always necessary and which reduces the chance of innovation in the generated scenarios.

We propose a theoretical framework to generate long-term multi-criteria improvement scenarios (with a planned sequence of improvements), such that each step focuses on improving one criterion at a time. Our approach intervenes at the same level as the PFA proposed by Kadziński et al. [23] (it must be applied to a predefined preference model). However, our framework proposes guidelines to build stepwise benchmarking and provides assumptions to define a realistic evolution of performances which have not necessarily been observed (contrary to Petrović et al. [35]). In this respect, this framework is illustrated with an additive Multi-Attribute Value Theory (MAVT) model. It is important to note that the guidelines could be defined differently for other families of preference methods, such as ELECTRE [17], for example. This will, indeed, be the subject of future research, which goes beyond the scope of this article.

In addition to the guidelines, a graph is created (starting from the performances of the entity to improve), the edges of which are weighted with three characteristics that are important for describing a long-term multi-criteria improvement scenario. The first indicator is the effort of execution if a given step risks facing a bottleneck mechanism (defined in Section 3), and should be minimized. The second is the effort of execution if a given step represents an operational change in relation to the previous step, and should also be minimized. The third is the rank or level increase, and should be maximized. Finally, a representative set of efficient solutions is proposed and studied with the decision maker. Different analyses should be conducted together with the decision maker in order to single out the final convincing scenario(s): a scenario should be robust, realistic and desirable. We propose a very simple sensitivity analysis in the case-study presented in Section 6. However, a rigorous robustness analysis will be the subject of future research.
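To make the selection of efficient solutions concrete, the following minimal sketch filters a set of candidate paths, each summarized by the three characteristics above, down to its non-dominated subset. It is written in Python (the language of the accompanying MCDMBM.py), but every name below is an illustrative stand-in rather than one used in that file.

from typing import NamedTuple

class PathSummary(NamedTuple):
    """Aggregate description of one candidate path of improvement."""
    bottleneck_penalty: float   # effort linked to bottleneck risks (to minimize)
    change_penalty: float       # effort linked to operational changes (to minimize)
    level_increase: float       # rank or level gained (to maximize)

def dominates(p: PathSummary, q: PathSummary) -> bool:
    # p dominates q if it is at least as good on all three indicators
    # and differs on at least one (which is then a strict improvement).
    no_worse = (p.bottleneck_penalty <= q.bottleneck_penalty
                and p.change_penalty <= q.change_penalty
                and p.level_increase >= q.level_increase)
    return no_worse and p != q

def efficient_paths(candidates: list[PathSummary]) -> list[PathSummary]:
    """Keep only the efficient (Pareto non-dominated) summaries."""
    return [p for p in candidates
            if not any(dominates(q, p) for q in candidates)]

# Illustrative use: the first summary is dominated by the second and removed.
paths = [PathSummary(2.0, 1.0, 3.0), PathSummary(1.0, 1.0, 3.0), PathSummary(1.5, 0.5, 2.0)]
print(efficient_paths(paths))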

It is important to note that our work aims at generating and selecting several scenarios of improvement in the formal representation, with a view to creating real-life strategic action plans that would not necessarily have been conceived by a decision maker. When convincing action plans have been selected, a more precise analysis of the costs and workload required to implement the strategy can be carried out by building a strategy table, as proposed by Howard [21], for instance. Indeed, there are benchmarks which use non-measurable indicators (such as the surveys in the Times Higher Education ranking, assessing the quality of each university's teaching or research). It would be difficult for a university willing to improve in this ranking to put a cost on getting better at these surveys. However, a university could be inspired by those with better survey results in order to match real-life actions. Therefore, this framework should be used as a generator of scenarios of improvement in a formal representation of the problem, with observed entities as references for each step, in order to investigate the real-life actions of these entities, ultimately create real strategic options, and apply any existing valuation model from the vast literature on portfolio selection [40] or strategic decision analysis [21,30].

Finally, a software tool integrating this framework has been made publicly available as open source at http://dx.doi.org/10.17632/7cbsmm5sx9.1, in the form of a python file (MCDMBM.py) and of an executable file (MCDMBM.exe), to allow users to apply our framework without coding knowledge and to let others modify it if needed. A configuration file (Article_cgf.txt) containing the parameter values used in Section 6 is also provided, in order to easily reproduce the results presented in this article (and modify them).

This article is organized as follows: Section 2 provides the notation used to describe the framework. Section 3 introduces the different guidelines and assumptions that are the basis of this work. Section 4 proposes a methodology to build a graph and find efficient solutions. Section 5 applies our framework to an MAVT preference model. Section 6 illustrates how the framework and MAVT model can be applied to generate scenarios of improvement and strategic action plans for a university attempting to improve its rank in the Academic Ranking of World Universities (ARWU).

Section snippets

Notation

We will use the following notation:

  • A = {a1, ⋯, ai, ⋯, am} - a set of observed entities called alternatives (and not actions, in order to distinguish them from real-life actions);

  • aDM ∈ A, an alternative whose performances should be improved by a Decision Maker (DM). It is assumed that we only deal with one decision maker. Moreover, aDM should be considered throughout the article as having an evolving performance;

  • G = {g1, ⋯, gj, ⋯, gn} - a set of criteria (to be maximized without loss of generality);

  • Ej,

General framework

As stated in the introduction, the novelty of our proposal is to generate paths of improvement composed of steps involving a performance improvement on only one criterion at a time. For the sake of simplicity, we will call such steps single performance improvements in the rest of the article.

As already stressed, building a realistic path of improvement is a complex task. In this context, at least four main aspects must be considered:

  • 1.

    Degradation of performance: Should a degradation of performance appear in

Definition of a graph

Steps 1 to 4 enable the construction of a directed graph Γ = (N, Σ), where N ⊆ E is a set of vertices and Σ ⊆ N² is a set of edges. The starting vertex of Γ is g(aDM). The graph is built iteratively as follows:

  • 1.

    Initialize the set of vertices as follows: N = {g(aDM)}. Also initialize a set of new vertices: H = {g(aDM)};

  • 2.

    For each vertex ν ∈ H:

    • (a)

      For each criterion gk such that Λk(ν) ≠ ∅: if ρk(ν) ∉ N, add ρk(ν) to H and add ρk(ν) to N. Add the edge (ν, ρk(ν)) to Σ;

    • (b)

      Remove ν from H;

  • 3.

    Repeat 2 until H = ∅.
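To complement the pseudocode, a minimal Python sketch of the same loop is given below. The callables has_improvement(v, k) and rho(v, k) are hypothetical stand-ins for the test Λk(ν) ≠ ∅ and the target vertex ρk(ν); neither name is taken from MCDMBM.py, and vertices are assumed to be hashable performance vectors (e.g. tuples).

def build_graph(start, criteria, rho, has_improvement):
    """Iterative construction of the directed graph Γ = (N, Σ)
    rooted at g(aDM), following steps 1 to 3 above."""
    N = {start}        # step 1: the set of vertices starts at g(aDM)
    H = {start}        # step 1: the frontier of new vertices
    Sigma = set()      # the set of edges, initially empty
    while H:           # step 3: repeat step 2 until H is empty
        v = H.pop()    # step 2(b): ν is removed from H once expanded
        for k in criteria:
            if not has_improvement(v, k):   # Λk(ν) = ∅: no step on gk from ν
                continue
            w = rho(v, k)                   # the vertex ρk(ν) reached from ν
            if w not in N:                  # step 2(a): only expand unseen vertices
                N.add(w)
                H.add(w)
            Sigma.add((v, w))               # record the edge (ν, ρk(ν))
    return N, Sigma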

It is obvious

Illustration on an MAVT model

In order to illustrate the general framework with an additive Multi-Attribute Value Theory (MAVT) model, the following notation will be used:

  • vj : Ej → [0, 1], a non-decreasing and positive partial value function associated with criterion gj;

  • wj ∈ [0, 1], a positive scaling factor that defines acceptable trade-offs for the decision maker, such that ∑ⱼ₌₁ⁿ wj = 1;

  • V(x) = ∑ⱼ₌₁ⁿ wj · vj(xj) - the global value of x ∈ E.

It is worth noting that there is no restriction on the method used to obtain the scaling factors
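As an illustration of this aggregation, here is a minimal Python sketch; the two linear partial value functions in the example are purely hypothetical placeholders, since the framework places no restriction on how the vj and wj are obtained.

def global_value(x, partial_values, weights):
    """Additive MAVT aggregation: V(x) = sum over j of wj * vj(xj)."""
    assert abs(sum(weights) - 1.0) < 1e-9, "the scaling factors must sum to 1"
    return sum(w * v(xj) for w, v, xj in zip(weights, partial_values, x))

# Hypothetical example with two criteria rescaled linearly to [0, 1]:
v1 = lambda s: s / 100.0            # e.g. a score out of 100
v2 = lambda c: min(c, 50) / 50.0    # e.g. a count capped at 50
print(global_value((80, 25), [v1, v2], [0.6, 0.4]))   # 0.6*0.8 + 0.4*0.5 = 0.68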

Case study: University rankings

This section proposes an illustrative case-study on which we have worked, as academic members of the Université libre de Bruxelles (ULB), in order to generate strategic plans to improve the ULB according to the Shanghai Academic Ranking of World Universities 2020 (ARWU20). The idea was to generate scenarios of improvement in the formal representation and then investigate several steps of improvement in order to propose real-life actions that could match these scenarios. It is

Discussion and limitations

Our framework faces a few limitations. Firstly, many methodological choices must be made, and these are not unique. For instance, when adapting the framework to an MAVT model, Assumption 2 uses a very limited extension of the dominance relation (Eq. (11)) to determine the possible steps of focused improvement. The advantage of a set of performances x over other performances y must be larger on one criterion gk than the disadvantages on the rest of the criteria in order to state as realistic that an

Conclusion

Let us summarize the important ideas of this article. A new framework has been proposed, aiming at generating scenarios of improvement. It is based on novel assumptions with regard to the literature on stepwise benchmarking. It takes into account practical considerations, such as operational changes and the risk of bottlenecks, while generating the scenarios. Moreover, it acknowledges the fact that benchmarks are often based on abstract formal representations of real goals. This means that a stepwise

Acknowledgements

We would like to thank Karim Lidouh, head of the Statistics and Prospective Studies Office at ULB, for his valuable comments on our case-study, and the two anonymous referees for their insightful comments, which helped improve this article.

References (43)

  • F. Lootsma et al., Multi-criteria analysis and budget reallocation in long-term research planning, Eur. J. Oper. Res. (1990)

  • S. Lozano et al., Dominance network analysis of economic efficiency, Expert Syst. Appl. (2017)

  • M. Petrović et al., Benchmarking the digital divide using a multi-level outranking framework: Evidence from EBRD countries of operation, Gov. Inf. Q. (2012)

  • M. Petrović et al., An ELECTRE-based decision aid tool for stepwise benchmarking: an application over EU digital agenda targets, Decis. Support. Syst. (2014)

  • M. Petrović et al., Supporting performance appraisal in ELECTRE based stepwise benchmarking model, Omega (2018)

  • L.M. Seiford et al., Context-dependent data envelopment analysis—measuring attractiveness and progress, Omega (2003)

  • J. Arrowsmith et al., What can ‘benchmarking’ offer the open method of co-ordination?, J. Europ. Public Policy (2004)

  • R.D. Banker et al., Some models for estimating technical and scale inefficiencies in data envelopment analysis, Manag. Sci. (1984)

  • J.-C. Billaut et al., Should you believe in the Shanghai ranking?, Scientometrics (2010)

  • S. Borrás et al., The open method of co-ordination and new governance patterns in the EU, J. Europ. Public Policy (2004)

  • W.H. Bovey et al., Resistance to organisational change: the role of defence mechanisms, J. Manag. Psychol. (2001)