Introduction

A group of experts is generally considered much wiser than a single individual when seeking a reasonable solution to a complex decision-making problem [5, 40]. It is worth noting that the theory and methods of group decision-making (GDM) have attracted a great deal of attention [17, 26]. A GDM problem is usually simplified as a consensus process in which a group of experts reaches the optimal solution to the problem of choosing the best alternative from a finite set. Generally, there are three phases in the process of GDM [22, 24]:

  • the preference information phase;

  • the consensus phase;

  • the selection phase.

In the preference information phase, decision-makers (DMs) can express their opinions on the alternatives in various preference formats, such as pairwise comparison matrices (PCMs) [41, 51], additive reciprocal preference relations [11, 35, 44], linguistic preference relations [18, 54], interval-valued preference relations [42, 55] and others. Once the judgements have been provided by the DMs, the next important issue is how to aggregate the individual opinions and reach a consensus [17, 24, 26]. The consensus phase means that the maximum degree of agreement among the group of experts is obtained through a series of interactive discussions and mutual learning [24, 26]. In particular, it is worth noting that the particle swarm optimization (PSO) algorithm has been used to simulate the consensus process [6, 7, 31, 32, 38]. Each DM is equipped with a flexibility degree, and the initial positions of the DMs are then adjusted to reach a consensus. The selection phase consists of two steps: aggregation of the individual preference relations and exploitation of the collective one [24]. When the PSO algorithm is used to model the consensus process, the level of consensus is incorporated into the GDM model. That is, the opinion of every expert in the group is close to the collective judgement used to choose the best alternative.

On the other hand, the consistency of preference relations should be considered in the process of GDM. The consistency degree of decision information reflects the level of logic and rationality of the DMs. The study of the consistency of preference relations can be traced back to the 1980s. Saaty [41] gave the consistency definition of PCMs originating from the analytic hierarchy process (AHP). It is, however, difficult to provide a consistent PCM in a practical case. The consistency index was therefore defined to quantify the inconsistency degree of a PCM [41]. Moreover, since the consistency index depends on the dimensionality of a matrix, the consistency ratio was defined to eliminate the influence of the order of a PCM. When the consistency ratio is equal to or less than 0.1, the PCM is considered acceptable; when the consistency ratio is greater than 0.1, the PCM is unacceptable. An unacceptable PCM should be adjusted to one with acceptable consistency, and many methods have been proposed for this purpose [8, 21, 58]. In addition, other consistency indexes have been proposed to capture the inconsistency degree of PCMs [4]. One of the popular consistency indexes was proposed by Crawford and Williams [14], named the geometric consistency index (GCI), and thresholds of acceptable consistency have been provided for it [1]. The study of consistency definitions and consistency indexes has been further extended to additive reciprocal matrices [23, 35, 44, 52] and fuzzy-valued preference relations [30, 33, 48, 49, 55]. Furthermore, the group consistency level and the group consensus measure are two other important issues in GDM. The former focuses on the consistency degree of the collective preference relation [53]. The latter refers to the consensus degree between the individual preference relations and the collective one [19]. It is found that when the individual preference relations are of acceptable consistency, the collective one obtained by an aggregation operator is acceptably consistent [34, 53]. To improve the group consensus degree, a great number of models have been proposed within the framework of the AHP. For example, Dong et al. [19] defined the geometric cardinal consensus index and the geometric ordinal consensus index to measure the consensus degree between individual PCMs and the collective one. Then some algorithms were offered to improve the consensus degree and two consensus models were proposed. Wu and Xu [51] constructed a decision support model where the individual consistency and the group consensus were captured by defining two indexes through the Hadamard product of two PCMs. Xu et al. [57] proposed a distance-based consensus model to solve group decision problems with additive reciprocal matrices and PCMs. Dong and Saaty [16] proposed a consensus reaching model where a moderator was set up and the most discordant DM could update her/his judgements. A novel consensus reaching model was further proposed by Dong and Cooper [15], where an automatic feedback mechanism was offered in a dynamic environment.

In the above-mentioned consensus models, the number of DMs is always supposed to be small. With the development of societal and technological trends, large-scale GDM has attracted much attention [20, 29, 46, 50] and deserves further investigation. In addition, in the above typical consensus models, consensus measures are usually defined and algorithms are proposed to adjust their values until the consensus is reached. When GDM is regarded as a social behavior of people, the PSO algorithm can be used to model the consensus process [6, 7, 31, 32, 38]. Motivated by these new trends in the study of GDM, the objective of this paper is to propose a new consensus model such that large-scale GDM can be dealt with. The main novelty of the proposed model is the introduction of a new fitness function for performing the PSO algorithm. The group consensus is reached by minimizing the distance between the individual preference relations and the collective one, while the group consistency is ensured by minimizing the GCI of the collective PCM. A new algorithm is elaborated to solve the GDM problem. It is then applied to a practical large-scale GDM problem, and comparisons with existing methods show the advantages of the proposed model. This paper is structured as follows. Section 2 briefly introduces the concepts of PCMs and the consistency indexes. In Sect. 3, a new consensus model for GDM is constructed and the performance of the PSO algorithm is analyzed. As compared to the existing GDM models based on the PSO algorithm, the main differences are the search domain of the optimal solution formed by the proposed minimum–maximum method and the order of the optimization problem to be solved. In Sect. 4, a large-scale GDM problem is investigated and some comparisons with an existing method are offered. The obtained results reveal that the proposed model can be used to achieve intelligent and effective decision-making for addressing a large-scale emergency management problem. Some concluding remarks are presented in Sect. 5.

Preliminaries

To choose the best alternative from a finite set \(X=\{x_{1}, x_{2}, \ldots , x_{n}\},\) a natural way is to compare the alternatives in pairs [41]. A PCM \(A=(a_{ij})_{n\times n}\) is then constructed and can be used to derive the priority weights of the alternatives, from which the ranking of the alternatives is obtained and the best one can be chosen. It is considered that the \(1-9\) scale is sufficient to evaluate the relative importance of the alternative \(x_{i}\) over the alternative \(x_{j},\) which is expressed as the comparison ratio \(a_{ij}.\) When \(x_{i}\) is extremely more important than \(x_{j},\) the value of \(a_{ij}\) is given as 9 or 8. If \(x_{i}\) is very strongly more important than \(x_{j},\) the corresponding value of \(a_{ij}\) is given as 7 or 6. When the DM considers that \(x_{i}\) is essentially more important than \(x_{j},\) the value of \(a_{ij}\) is evaluated as 5 or 4. If the DM thinks that \(x_{i}\) is weakly more important than \(x_{j},\) the value of \(a_{ij}\) is expressed as 3 or 2. A unity value of the comparison ratio \(a_{ij}\) implies that \(x_{i}\) is equally important to \(x_{j}.\) Moreover, when the alternative \(x_{i}\) is less important than the alternative \(x_{j},\) the value of \(a_{ij}\) is computed using the following reciprocal property:

$$\begin{aligned} a_{ij}=\frac{1}{a_{ji}}. \end{aligned}$$

Hence, the definition of PCM is given as follows:

Definition 1

[41] The PCM \(A=(a_{ij})_{n\times n}\) is multiplicatively reciprocal if \(a_{ij}=1/a_{ji} (a_{ij} > 0)\) for \(\forall i, j=1,\ldots , n.\)

Furthermore, when evaluating the preference intensity of \(x_{i}\) over \(x_{j}\) \((i, j=1,2,\ldots ,n),\) DMs could experience some vagueness. The theory of fuzzy sets is an effective tool for quantifying this uncertainty [2]. For example, interval numbers have been used to capture the uncertainty experienced by DMs, leading to the following definition [42]:

Definition 2

[42] An interval multiplicative reciprocal matrix \({\tilde{A}}\) is represented as

$$\begin{aligned} {\tilde{A}}=({\tilde{a}}_{ij})_{n\times n}=\left( \begin{array}{cccc} \left[ {1,1}\right] &{}{\left[ {a^-_{12},a^+_{12}}\right] }&{}\cdots &{}{\left[ {a^-_{1n},a^+_{1n}}\right] }\\ \left[ {a^-_{21},a^+_{21}}\right] &{}{\left[ {1,1}\right] }&{}\cdots &{}\left[ {a^-_{2n},a^+_{2n}}\right] \\ \vdots &{}\vdots &{}\ddots &{}\vdots \\ \left[ {a^-_{n1}, a^+_{n1}}\right] &{}\left[ {a^-_{n2}, a^+_{n2}}\right] &{}\cdots &{}{\left[ {1,1}\right] } \end{array}\right) , \end{aligned}$$

where \({\tilde{a}}_{ij}=\left[ {a^-_{ij}, a^+_{ij}}\right] \) means that the alternative \(x_{i}\) is between \(a^-_{ij}\) and \( a^+_{ij}\) times as important as the alternative \(x_{j},\) satisfying \({a^-_{ij}\cdot a^+_{ji}}={a^+_{ij}\cdot a^-_{ji}}=1\), \(a^+_{ij}\ge a^-_{ij}\ge 0,\) for \( i, j=1,2,\ldots , n.\)

On the other hand, the consistency degree of a PCM reflects the rationality level of the judgements. Cardinal transitivity of the judgements corresponds to the perfect consistency of a PCM. The consistent PCM is defined as follows:

Definition 3

[41] A PCM in relative measurements \(A=(a_{ij})_{n\times n}\) is consistent if \(a_{ij}=a_{ik}a_{kj}\) for \(\forall i, j, k=1,\ldots , n.\)

Unfortunately, an inconsistent PCM is almost always given in a practical case [41]. The inconsistency degree of a PCM should then be quantified using a consistency index. The consistency index (CI) and the consistency ratio (CR) were defined by Saaty [41]. Here we recall the geometric consistency index proposed by Crawford and Williams [14]:

Definition 4

[14] Assume that \(A=(a_{ij})_{n\times n}\) is a PCM. The geometric consistency index (GCI) is defined as

$$\begin{aligned} GCI(A)=\frac{2}{(n-1)(n-2)}\sum _{i<j}\left[ \log (a_{ij})-\log (\omega _{i})+\log (\omega _{j})\right] ^2, \end{aligned}$$
(1)

where \(\omega _{i}=\left( \prod _{j=1}^{n}a_{ij}\right) ^{1/n}.\)

The thresholds \({\overline{GCI}}\) of the GCI for a PCM with acceptable consistency have also been proposed in [1]. That is, \({\overline{GCI}}=0.31\) for \(n=3,\) \({\overline{GCI}}=0.35\) for \(n=4,\) and \({\overline{GCI}}=0.37\) for \(n>4.\) When the GCI of a PCM is less than the corresponding \({\overline{GCI}},\) the matrix is considered to be of acceptable consistency.
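For readers who wish to reproduce the consistency check numerically, a minimal Python sketch of the GCI of Definition 4 and the thresholds of [1] is given below (this code and its function names are ours, not part of the original formulation):

```python
import numpy as np

def rgmm_weights(A):
    """Row geometric mean of a PCM, i.e. w_i = (prod_j a_ij)^(1/n)."""
    return np.prod(A, axis=1) ** (1.0 / A.shape[0])

def gci(A):
    """Geometric consistency index of Definition 4."""
    n = A.shape[0]
    w = rgmm_weights(A)
    e = np.log(A) - np.log(w)[:, None] + np.log(w)[None, :]
    iu = np.triu_indices(n, k=1)                 # pairs with i < j
    return 2.0 / ((n - 1) * (n - 2)) * np.sum(e[iu] ** 2)

def gci_threshold(n):
    """Thresholds of acceptable consistency reported in [1]."""
    return {3: 0.31, 4: 0.35}.get(n, 0.37)

# A 4x4 PCM is of acceptable consistency when its GCI is below 0.35.
A = np.array([[1, 4, 6, 7],
              [1/4, 1, 3, 4],
              [1/6, 1/3, 1, 2],
              [1/7, 1/4, 1/2, 1]])
print(gci(A), gci(A) < gci_threshold(A.shape[0]))
```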

Building consensus in GDM

In what follows, we consider the GDM problem with m experts \(E=\{e_{1}, e_{2}, \ldots ,e_{m}\}\) and n alternatives \(X=\{x_{1},x_{2},\ldots ,x_{n}\}.\) It is supposed that the initial positions of the m experts are expressed using the PCMs \(\{A_{1}, A_{2}, \ldots , A_{m}\}.\) There may be some contradictions among the opinions of the experts. The initial positions expressed as PCMs should therefore be allowed to be adjusted to some degree such that the consensus in GDM can be built. According to the known works [6, 7, 31, 32, 38], a flexibility degree is always offered to each DM. The comparison ratio of \(x_{i}\) over \(x_{j}\) can then vary in an interval, and an interval multiplicative reciprocal matrix is constructed as in Definition 2. In the present study, different from the methods in [6, 7, 31, 32, 38], an interval-valued comparison matrix is constructed using the minimum and maximum of the entries in \(\{A_{1}, A_{2}, \ldots , A_{m}\}.\)

Construction of an interval-valued comparison matrix

It is assumed that the m experts express their opinions as PCMs \(A_{k}=(a_{ij}^{(k)})_{n\times n}\) for \(k=1,2,\ldots ,m.\) Letting

$$\begin{aligned} {\bar{a}}_{ij}^{-}=\min \left\{ a_{ij}^{(1)},a_{ij}^{(2)},\ldots ,a_{ij}^{(m)}\right\} , \end{aligned}$$
(2)

and

$$\begin{aligned} {\bar{a}}_{ij}^{+}=\max \left\{ a_{ij}^{(1)},a_{ij}^{(2)},\ldots ,a_{ij}^{(m)}\right\} , \end{aligned}$$
(3)

the interval-valued comparison matrix \({\bar{A}}=\left( [{\bar{a}}_{ij}^{-}, {\bar{a}}_{ij}^{+}]\right) _{n\times n}\) is constructed. One can see that all the matrices \(A_{k}=(a_{ij}^{(k)})_{n\times n}\) belong to \({\bar{A}},\) meaning that the entries \(a_{ij}^{(k)}\) \((k=1,2,\ldots ,m)\) belong to the interval \([{\bar{a}}_{ij}^{-}, {\bar{a}}_{ij}^{+}].\) We have the following result:

Theorem 1

Suppose that \(A_{k}=(a_{ij}^{(k)})_{n\times n}\) with \(k=1,2,\ldots ,m\) are PCMs. The constructed interval-valued comparison matrix \({\bar{A}}=\left( [{\bar{a}}_{ij}^{-}, {\bar{a}}_{ij}^{+}]\right) _{n\times n}\) through (2) and (3) satisfies the reciprocal properties \({\bar{a}}_{ij}^{-}\cdot {\bar{a}}_{ji}^{+}=1\) and \({\bar{a}}_{ij}^{+}\cdot {\bar{a}}_{ji}^{-}=1.\)

Proof

Since \(A_{k}=(a_{ij}^{(k)})_{n\times n}\) is a PCM for any \(k=1,2,\ldots ,m,\) we have

$$\begin{aligned} a_{ij}^{(k)}\cdot a_{ji}^{(k)}=1. \end{aligned}$$
(4)

By virtue of (2), there is an index k such that \({\bar{a}}_{ij}^{-}=a_{ij}^{(k)}.\) The reciprocal relation (4) means that \({\bar{a}}_{ij}^{-}=1/a_{ji}^{(k)}.\) Moreover, since \(a_{ij}^{(k)}\) is the minimum of the set \(\left\{ a_{ij}^{(1)},a_{ij}^{(2)},\ldots ,a_{ij}^{(m)}\right\} ,\) relation (4) implies that \(a_{ji}^{(k)}\) is the maximum of the corresponding set of reciprocals, that is, with definition (3),

$$\begin{aligned} {\bar{a}}_{ji}^{+}=\max \left\{ a_{ji}^{(1)},a_{ji}^{(2)},\ldots ,a_{ji}^{(m)}\right\} =a_{ji}^{(k)}. \end{aligned}$$

The application of the above result leads to

$$\begin{aligned} {\bar{a}}_{ij}^{-}\cdot {\bar{a}}_{ji}^{+}=1. \end{aligned}$$

The equality \({\bar{a}}_{ij}^{+}\cdot {\bar{a}}_{ji}^{-}=1\) follows in the same way by exchanging the roles of i and j. This shows that the matrix \({\bar{A}}=\left( [{\bar{a}}_{ij}^{-}, {\bar{a}}_{ij}^{+}]\right) _{n\times n}\) is an interval multiplicative reciprocal preference relation. The proof is completed. \(\square \)

One can see from Theorem 1 that an interval multiplicative reciprocal preference relation is constructed using (2) and (3). For convenience, it is called the minimum–maximum method, since the minimum and maximum of the entries in \(\{A_{1}, A_{2}, \ldots , A_{m}\}\) are used. Moreover, the consensus process for GDM involves the dynamic iteration of the judgements of the DMs through discussions and mutual learning [37]. In the present model, it is considered that the constructed matrix \({\bar{A}}=\left( [{\bar{a}}_{ij}^{-}, {\bar{a}}_{ij}^{+}]\right) _{n\times n}\) includes all the possible changes of the positions of the experts. This idea is reasonable as compared to those in [6, 7, 31, 32, 38] for two main reasons. The first is that when various experts compare two alternatives such as \(x_{i}\) and \(x_{j},\) the different comparison ratios they provide reveal the possible importance degrees of \(x_{i}\) over \(x_{j}.\) This means that the alternatives \(x_{i}\) and \(x_{j}\) have been thoroughly investigated by all the experts in the group, and the interval numbers constructed using (2) and (3) quantify all the possible values of the comparison ratios. The second is that the flexibility degree offered to each expert in [6, 7, 31, 32, 38] is somewhat arbitrary; in fact, the exact values of the flexibility degrees of the experts are not known when applying those methods. The proposed method of constructing an interval number can overcome this arbitrariness to a certain extent.
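As an illustration (a minimal sketch of our own, not part of the original formulation), the minimum–maximum construction of (2) and (3) can be written in a few lines; the assertion checks the reciprocal property stated in Theorem 1:

```python
import numpy as np

def min_max_interval_matrix(pcms):
    """Entrywise minimum and maximum over a list of PCMs, Eqs. (2)-(3)."""
    stack = np.stack(pcms)            # shape (m, n, n)
    lower = stack.min(axis=0)         # \bar{a}^-_{ij}
    upper = stack.max(axis=0)         # \bar{a}^+_{ij}
    # Theorem 1: \bar{a}^-_{ij} * \bar{a}^+_{ji} = 1.
    assert np.allclose(lower * upper.T, 1.0)
    return lower, upper
```

Applied to the five PCMs of the numerical example given later, the returned bounds coincide with the interval matrix \({\bar{A}}\) shown there.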

Fitness function

In what follows, two objectives should be achieved. One is the acceptable consistency of the collective PCM. The other is to reach the consensus by considering the distance between individual PCMs and the collective one.

First, let us consider the acceptable consistency of the collective PCM. For convenience, it is supposed that the collective matrix is written as \(R=(r_{ij})_{n\times n}.\) To quantify the inconsistency degree of R,  any consistency index can be used theoretically. Here the GCI of R is considered and one has the following function:

$$\begin{aligned} Q_1(R)=GCI(R). \end{aligned}$$
(5)

The smaller the value of \(Q_{1},\) the more consistent the collective matrix R is. Second, to reach the consensus among all the DMs, the distance between individual PCMs and the collective one is considered. Hence, the following function is constructed:

$$\begin{aligned} Q_{2}(R)=\sum ^{m}_{k=1}\sum ^{n}_{i=1}\sum ^{n}_{j=1}\frac{|r_{ij}-a_{ij}^{(k)}|}{n^2-n}. \end{aligned}$$
(6)

Obviously, the smaller the value of \(Q_{2},\) the higher the group consensus level is. Ideally, the case with \(Q_{1}(R)=0,\) corresponding to a consistent collective PCM, together with the smallest value of \(Q_{2}(R)\) would be the optimal solution. This would mean that the group of experts reaches the highest consensus level while the final decision is perfectly consistent. In this sense, since the smallest values of \(Q_{1}(R)\) and \(Q_{2}(R)\) are both sought, the two objectives do not conflict conceptually. However, their smallest values may not be reached at the same time, which means that a multi-objective optimization problem with conflicting criteria should be solved. In fact, to keep as much of the decision information as possible, it is sufficient to ensure the condition \(Q_{1}<{\overline{GCI}},\) and a perfectly consistent matrix is not pursued. Hence, for the sake of simplicity, a linear combination of \(Q_{1}\) and \(Q_{2}\) is used to deal with the multi-objective optimization problem, written as follows [31, 32]:

$$\begin{aligned} Q=p Q_{1}+q Q_{2}, \end{aligned}$$
(7)

where the parameters satisfy \(p \ge 0\) and \(q \ge 0.\) When \(p=0,\) \(Q=qQ_{2},\) meaning that only the group consensus is considered. When \(q=0,\) \(Q=pQ_{1},\) implying that only the group consistency is studied. Generally, for \(p\ne 0\) and \(q\ne 0,\) the values of p and q influence the functions \(Q_{1}\) and \(Q_{2},\) respectively, which should be investigated through numerical computations.
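For concreteness, a possible implementation of the fitness function (7) is sketched below (our own code; it reuses the `gci` helper from the earlier sketch):

```python
import numpy as np

def fitness(R, pcms, p=1.0, q=1.0):
    """Q = p*Q1 + q*Q2 of Eq. (7): Q1 is the GCI of the collective matrix R,
    Q2 is the distance (6) between R and the individual PCMs."""
    n = R.shape[0]
    Q1 = gci(R)                                                  # Eq. (5)
    Q2 = sum(np.abs(R - A).sum() for A in pcms) / (n * n - n)    # Eq. (6)
    return p * Q1 + q * Q2
```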

Modeling the consensus process

In the fitness function Q, the unknown quantity is the collective matrix R. The consensus process in GDM requires that the opinions of the experts be close to the collective matrix R. The following optimization problem is then constructed:

$$\begin{aligned} \min \limits _{R\in {\bar{A}}} Q, \end{aligned}$$
(8)

with the constraint condition:

$$\begin{aligned} Q_{1}<{\overline{GCI}}(R), \end{aligned}$$
(9)

where \({\overline{GCI}}(R)\) stands for the threshold of acceptable consistency for a PCM with the same dimensionality as R. It is seen from the methods in [6, 7, 31, 32, 38] that the individual PCMs are changed during the iteration process. Here we directly consider the method of obtaining the collective matrix from the constructed interval-valued comparison matrix \({\bar{A}}.\) Moreover, it is worth noting that there are other methods of determining the collective matrix, such as aggregation operators [56, 60, 61] and mathematical programming models [48, 49]. Here the optimization problem (8) is constructed and solved to determine the collective matrix R.

One can see from the expressions of \(Q_{1}\) and \(Q_{2}\) together with (1) that the function Q is nonlinear and multi-variable. In other words, the solution of the optimization problem (8) is difficult to obtain through typical methods such as differentiation. Therefore, to solve the nonlinear optimization problem (8) with the constraint condition (9), the particle swarm optimization (PSO) algorithm is used to obtain a globally optimal solution [27]. It is worth noting that PSO was developed by Kennedy and Eberhart when studying the social behavior of bird flocking and fish schooling [27, 43]. The theory and applications of PSO have been studied widely, for example in the books [12, 28] and the review papers [3, 39]. Recently, large-scale optimization problems have attracted much attention and have been solved by developing the PSO algorithm [9, 10, 36]. For the constructed optimization problem (8), the decision variables are the entries of the collective matrix R. Even if the number of alternatives is 9, the order of (8) is only 36 by virtue of the reciprocal property of the PCM. Therefore, the PSO algorithm developed by Shi and Eberhart [43] is still feasible. As compared to the existing models in [6, 7, 31, 32, 38], the proposed method decreases the order of the constructed optimization problem and reduces the difficulty of obtaining the optimal solution.

For convenience, the formulae of the applied PSO algorithm are given as follows [43]:

$$\begin{aligned}&\mathbf {v}_{t+1}=\omega \mathbf {v}_{t}+c_{1}\mathbf {r}_{1}\cdot (\mathbf {p}_{t}-\mathbf {x}_{t})+c_{2}\mathbf {r}_{2} \cdot (\mathbf {p}_{g}-\mathbf {x}_{t}),\\&\mathbf {x}_{t+1}=\mathbf {x}_{t}+\mathbf {v}_{t+1}. \end{aligned}$$

Here \(\mathbf {x}_{t}\) is the current position of a particle; \(\mathbf {v}_{t}\) is the velocity of the particle; \(\mathbf {p}_{t}\) is its previous best position and \(\mathbf {p}_{g}\) is the global best position. The constants \(c_{1}\) and \(c_{2}\) are chosen as 2. \(\mathbf {r}_{1}\) and \(\mathbf {r}_{2}\) are random vectors uniformly distributed in \([0,1]^{n}.\) The weight \(\omega \) decreases from 0.9 to 0.4 as the generation number increases. It is seen that the decision variables of the optimization problem (8) are the entries of \(R=(r_{ij})_{n\times n}.\) Using the reciprocal property \(r_{ij}=1/r_{ji},\) the encoding strategy is to take the following vector as a particle:

$$\begin{aligned} \mathbf {r}=(r_{12},r_{13},\ldots ,r_{1n},r_{23},\ldots ,r_{(n-1)n}). \end{aligned}$$

The values of the entries in \(\mathbf {r}\) stand for the position of the particle \(\mathbf {r}.\) The initial positions and velocities of the swarm are randomly generated. The fitness function (7) is used to adjust the positions of the particles in the swarm until the optimal solution is determined. In what follows, a numerical example is carried out to verify the above algorithm and some comparisons are offered.
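Before turning to the example, a schematic sketch of the whole procedure is given below (our own code, not the authors' implementation; it assumes the `fitness` helper sketched earlier). Particles encode the upper-triangular entries of R, the search domain is the interval matrix \({\bar{A}}\) of the minimum–maximum method, and the inertia weight decreases from 0.9 to 0.4. The constraint (9) is not enforced explicitly here; in this sketch it is simply checked on the returned matrix.

```python
import numpy as np

def pso_collective_matrix(lower, upper, pcms, p=1.0, q=1.0,
                          swarm=100, generations=100, seed=None):
    """Schematic PSO for problem (8); particles live in the domain \bar{A}."""
    rng = np.random.default_rng(seed)
    n = lower.shape[0]
    iu = np.triu_indices(n, k=1)
    lo, hi = lower[iu], upper[iu]                   # bounds of the search domain

    def decode(vec):                                # rebuild R from a particle
        R = np.ones((n, n))
        R[iu] = vec
        R[iu[1], iu[0]] = 1.0 / vec                 # reciprocal lower triangle
        return R

    x = rng.uniform(lo, hi, size=(swarm, lo.size))  # initial positions
    v = np.zeros_like(x)                            # initial velocities
    f = np.array([fitness(decode(xi), pcms, p, q) for xi in x])
    pbest, pbest_f = x.copy(), f.copy()             # personal bests
    gbest = x[f.argmin()].copy()                    # global best
    for t in range(generations):
        w = 0.9 - 0.5 * t / max(generations - 1, 1) # inertia 0.9 -> 0.4
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + 2.0 * r1 * (pbest - x) + 2.0 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)                  # stay inside \bar{A}
        f = np.array([fitness(decode(xi), pcms, p, q) for xi in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return decode(gbest)
```

Clipping the positions to \([{\bar{a}}_{ij}^{-}, {\bar{a}}_{ij}^{+}]\) is one simple way of keeping the particles inside \({\bar{A}}\); other boundary-handling strategies are possible.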

Numerical results and comparisons

In the following, an example is offered to illustrate the proposed consensus model. Suppose that one should choose the best of the four alternatives \(X=\{ x_{1},x_{2},x_{3},x_{4}\}.\) A group of five experts provides the following PCMs [19, 51]:

$$\begin{aligned} {A}_{1}= & {} \left( \begin{array}{cccc} 1 &{} 4 &{} 6 &{} 7 \\ 1/4 &{} 1 &{} 3 &{} 4 \\ 1/6 &{} 1/3 &{} 1 &{} 2 \\ 1/7 &{} 1/4 &{} 1/2 &{} 1 \end{array} \right) , \quad {A}_{2}=\left( \begin{array}{cccc} 1 &{} 5 &{} 7 &{} 9 \\ 1/5 &{} 1 &{} 4 &{} 6 \\ 1/7 &{} 1/4 &{} 1 &{} 2 \\ 1/9 &{} 1/6 &{} 1/2 &{} 1 \end{array} \right) ,\\ {A}_{3}= & {} \left( \begin{array}{cccc} 1 &{} 3 &{} 5 &{} 8 \\ 1/3 &{} 1 &{} 4 &{} 5 \\ 1/5 &{} 1/4 &{} 1 &{} 2 \\ 1/8 &{} 1/5 &{} 1/2 &{} 1 \end{array} \right) , \quad {A}_{4}=\left( \begin{array}{cccc} 1 &{} 4 &{} 5 &{} 6 \\ 1/4 &{} 1 &{} 3 &{} 3 \\ 1/5 &{} 1/3 &{} 1 &{} 2 \\ 1/6 &{} 1/3 &{} 1/2 &{} 1 \end{array} \right) ,\\ {A}_{5}= & {} \left( \begin{array}{cccc} 1 &{} 1/2 &{} 1 &{} 2 \\ 2 &{} 1 &{} 2 &{} 3 \\ 1 &{} 1/2 &{} 1 &{} 4 \\ 1/2 &{} 1/3 &{} 1/4 &{} 1 \end{array} \right) . \end{aligned}$$

According to (2) and (3), an interval-valued comparison matrix is constructed as

$$\begin{aligned} {\bar{A}}=\left( \begin{array}{cccc} \left[ {1,1}\right] &{}{\left[ {1/2,5}\right] }&{}{\left[ {1,7}\right] }&{}{\left[ {2,9}\right] }\\ \left[ {1/5,2}\right] &{}{\left[ {1,1}\right] }&{}{\left[ {2,4}\right] }&{}{\left[ {3,6}\right] }\\ \left[ {1/7,1}\right] &{}{\left[ {1/4,1/2}\right] }&{}{\left[ {1,1}\right] }&{}{\left[ {2,4}\right] }\\ \left[ {1/9,1/2}\right] &{}{\left[ {1/6,1/3}\right] }&{}{\left[ {1/4,1/2}\right] }&{}{\left[ {1,1}\right] } \end{array}\right) . \end{aligned}$$

It is found that the matrix satisfies Theorem 1 and it is an interval multiplicative reciprocal preference relation (Definition 2).

Fig. 1 Variations of the fitness function versus the generation number for the selected values of p and q

Then the optimization problem (8) with the constraint (9) is solved and the collective matrix is determined. When running the PSO algorithm, the swarm size and the maximal number of generations are both chosen as 100. For some selected values of p and q, the variations of the fitness function versus the generation number are depicted in Fig. 1. One can see from Fig. 1 that as the generation number increases, the value of Q decreases and tends to a stable value. This phenomenon is in accordance with the findings in [6, 7, 31, 32, 38]. It is also found that for different values of the parameters p and q, the final stable values of Q are different: the larger the values of p or q, the larger the optimal value of Q. In particular, we are interested here in the collective matrices obtained under various parameters. To make the results more reliable, the PSO algorithm is run multiple times and a mean matrix \(M=(m_{ij})_{n\times n}\) is determined. That is, when the PSO algorithm is run N times to give \(R_{k}=(r_{ij}^{(k)})_{n\times n}\) \((k=1, 2, \ldots , N),\) the mean matrix is computed as

$$\begin{aligned} M=\frac{1}{N}\sum _{k=1}^{N}R_{k}. \end{aligned}$$
(10)

In addition, to show the dispersion degree of \(R_{k},\) the standard deviation is defined as follows:

$$\begin{aligned} d=\left( \sum _{k=1}^{N}\Vert R_{k}-M\Vert ^{2}\right) ^{1/2}, \end{aligned}$$
(11)

where \(\Vert \bullet \Vert \) stands for a matrix norm. Here the Frobenius norm is used for the numerical computation, which gives

$$\begin{aligned} d=\left[ \sum _{k=1}^{N}\sum _{i=1}^{n}\sum _{j=1}^{n}\left( r_{ij}^{(k)}-m_{ij}\right) ^{2}\right] ^{1/2}. \end{aligned}$$
(12)
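A minimal sketch of the averaging and dispersion computation of (10) and (12) (our own code, not part of the original presentation) is:

```python
import numpy as np

def mean_and_dispersion(runs):
    """Mean matrix (10) and Frobenius-norm dispersion (12) over N PSO runs."""
    R = np.stack(runs)                  # shape (N, n, n)
    M = R.mean(axis=0)                  # Eq. (10)
    d = np.sqrt(((R - M) ** 2).sum())   # Eq. (12)
    return M, d
```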

For convenience, it is supposed that the mean matrices \(M_{1},\) \(M_{2}\) and \(M_{3}\) correspond to \(p=q=1,\) \(p=3, q=1\) and \(p=1, q=1.5,\) respectively. We choose \(N=100\) and obtain the following results:

$$\begin{aligned} {M}_{1}= & {} \left( \begin{array}{cccc} 1.0000 &{} 2.8059 &{} 5.0000 &{} 7.2961 \\ 0.3564 &{} 1.0000 &{} 2.9742 &{} 3.9871 \\ 0.2000 &{} 0.3375 &{} 1.0000 &{} 2.0000 \\ 0.1371 &{} 0.2510 &{} 0.5000 &{} 1.0000 \end{array} \right) ,\\ {M}_{2}= & {} \left( \begin{array}{cccc} 1.0000 &{} 2.4400 &{} 5.0000 &{} 8.0100 \\ 0.4112 &{} 1.0000 &{} 2.2901 &{} 3.9309 \\ 0.2000 &{} 0.4386 &{} 1.0000 &{} 2.0000 \\ 0.1249 &{} 0.2547 &{} 0.5000 &{} 1.0000 \end{array} \right) , \end{aligned}$$

and

$$\begin{aligned} {M}_{3}=\left( \begin{array}{cccc} 1.0000 &{} 3.0000 &{} 5.0000 &{} 7.0000 \\ 0.3333 &{} 1.0000 &{} 3.0000 &{} 3.9800 \\ 0.2000 &{} 0.3350 &{} 1.0000 &{} 2.0000 \\ 0.1429 &{} 0.2517 &{} 0.5000 &{} 1.0000 \end{array} \right) . \end{aligned}$$

The weights of the alternatives are computed using the row geometric mean method [14] and shown in Table 1. It is seen from Table 1 that the values of \(Q_{1}\) are all less than the threshold 0.35 for a matrix of order 4, and the ranking of the alternatives is \(x_{1}\succ x_{2}\succ x_{3}\succ x_{4}.\) Moreover, the values of d are also given in Table 1. Since the value of d is always small, the matrices \(R_{k}\) obtained by performing the PSO algorithm are close to M.
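The priorities in Table 1 can be reproduced with the row geometric mean method; a short sketch (our code, reusing the `rgmm_weights` helper from the earlier sketch) is:

```python
import numpy as np

def rgmm_priorities(M):
    """Normalized row geometric mean weights and the induced ranking."""
    w = rgmm_weights(M)
    w = w / w.sum()                      # normalize the weights to sum to one
    ranking = np.argsort(-w) + 1         # alternative indices, best first
    return w, ranking
```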

Table 1 The values of \(Q_{1},\) \(Q_{2},\) \(d\) and the weights of the alternatives
Table 2 Wilcoxon’s rank-sum test for two independent samples
Table 3 Comparison with the existing methods in [19, 51]

On the other hand, the PSO algorithm is based on random optimization. The determined matrices \(R_{k}=(r_{ij}^{(k)})_{n\times n}\) \((k=1, 2, \ldots , N)\) therefore involve randomness, and a statistical test should be considered [25]. Here we apply the traditional Wilcoxon rank-sum test for two independent samples. It is assumed that two samples with \(n_{1}\) and \(n_{2}\) matrices, satisfying \(n_{1}\le n_{2},\) are created independently and written as follows:

$$\begin{aligned}&S_{1}=\{{\bar{R}}_{1},{\bar{R}}_{2},\ldots ,{\bar{R}}_{n_{1}}\},\\&S_{2}=\{{\tilde{R}}_{1},{\tilde{R}}_{2},\ldots ,{\tilde{R}}_{n_{2}}\}, \end{aligned}$$

with

$$\begin{aligned} {\bar{R}}_{k}=({\bar{r}}_{ij}^{(k)})_{n\times n},\quad {\tilde{R}}_{k}=({\tilde{r}}_{ij}^{(k)})_{n\times n}. \end{aligned}$$

The mean matrices obtained by \(S_{1}\) and \(S_{2}\) are expressed as \({\bar{M}}=({\bar{m}}_{ij})_{n\times n}\) and \({\tilde{M}}=({\tilde{m}}_{ij})_{n\times n},\) respectively. The observations

$$\begin{aligned} \left( {\bar{r}}_{ij}^{(1)},{\bar{r}}_{ij}^{(2)},\ldots ,{\bar{r}}_{ij}^{(n_{1})}\right) \end{aligned}$$

and

$$\begin{aligned} \left( {\tilde{r}}_{ij}^{(1)},{\tilde{r}}_{ij}^{(2)},\ldots ,{\tilde{r}}_{ij}^{(n_{2})}\right) \end{aligned}$$

are used to perform the Wilcoxon rank-sum test. Since the samples \(S_{1}\) and \(S_{2}\) come from the same sample space, the null hypothesis is written as

$$\begin{aligned} H_{0}: {\bar{m}}_{ij}={\tilde{m}}_{ij},\quad \forall i,j\in \{1,2,\ldots ,n\}. \end{aligned}$$

Here we choose \(n_{1}=3,\) \(n_{2}=4\) and the significance level \(\alpha =0.05.\) By running the PSO algorithm with \(p=q=1,\) the matrices in \(S_{1}\) and \(S_{2}\) are obtained and given in Appendix A. Without loss of generality, we only need to choose a pair of i and j for the test. For example, when \(i=2, j=3,\) the ranks of the random data and the rank sum are shown in Table 2. Since \(9>6,\) the null hypothesis \(H_{0}\) cannot be rejected according to the Wilcoxon table [25], meaning that the computed results in Table 1 are convincing.
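Instead of consulting the rank-sum table, an equivalent check can be carried out with SciPy's two-sided rank-sum statistic (the normal-approximation, p-value form of the test). The following sketch is our own code and compares the \((i,j)\) entries of two samples of collective matrices:

```python
from scipy.stats import ranksums

def entries_agree(S1, S2, i, j, alpha=0.05):
    """Wilcoxon rank-sum test on the (i, j) entries (0-based indices) of two
    independent samples of collective matrices; True means H0 is not rejected."""
    x = [R[i, j] for R in S1]
    y = [R[i, j] for R in S2]
    _, p_value = ranksums(x, y)
    return p_value > alpha
```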

Finally, some comparisons with the existing methods in [19, 51] are offered. As an example, we consider the case of \(p=q=1\) in the proposed method. The computed results are shown in Table 3, where RGMM and EM denote the row geometric mean method [14, 19] and the eigenvector method [41, 51] for deriving priorities, respectively. It is found from Table 3 that the rankings of the alternatives for all the methods are identical. The values of \(Q_{1}\) are also less than 0.35, meaning that the collective matrices are of acceptable consistency. The main difference lies in the values of \(Q_{2},\) which reflect the consensus degree within the group of experts. Since \(0.6288<0.6966<0.7396,\) the smallest value of \(Q_{2}\) is obtained for \(M_{1},\) implying that the consensus in GDM can be improved using the present method.

The application to large-scale group decision-making

In typical GDM, the invited experts always have professional knowledge about the considered problem, and rational judgements can be provided when the alternatives are compared within a sufficient time frame. However, the provided judgements of the DMs could exhibit a high degree of uncertainty and complexity in some practical situations. For example, as the number of DMs increases, the dispersion of their judgements could increase. When many agents in a social network provide their opinions on a decision-making problem, the interaction of these judgements could lead to an extreme decision. When emergency events happen, the lack of time renders the opinions of the DMs irrational to a certain degree. Hence, large-scale GDM is becoming an attractive research direction [20, 46, 50, 59]. One of the main challenges in large-scale GDM is to reach a good consensus among all DMs. Here we extend the proposed model to address the large-scale GDM problem. A concrete case is studied and some comparisons are offered.

In what follows, we investigate an emergency event in which a large-scale group of DMs is involved [59]. The background of the example is the earthquake that occurred in Ya'an City, Sichuan, China on April 20, 2013. The emergency response had to be adopted immediately in a complex environment with blocked traffic, interrupted communication, and insufficient rescue staff and medical facilities. To minimize the damage, the existing rescue team had to choose the best alternative from the following four plans:

  • \(x_{1}\):  Searching for the trapped persons and evacuating people from the disaster areas.

  • \(x_{2}\):  Making every effort to treat the injured persons without searching for the trapped persons.

  • \(x_{3}\):  Searching for the trapped persons and treating the seriously injured people in situ.

  • \(x_{4}\):  Making every effort to search for the trapped people without treating the injured persons.

In the practical situation with a high degree of uncertainty, complexity and urgency, the 20 DMs \(E=\{e_{1},e_{2},\ldots ,e_{20}\}\) from the rescue team were asked to give their opinions on the four alternatives according to their knowledge and experience. The 20 PCMs were given as follows:

$$\begin{aligned} {A}_{1}= & {} \left( \begin{array}{cccc} 1 &{} 9 &{} 9 &{} 4 \\ 1/9 &{} 1 &{} 7/3 &{} 4 \\ 1/9 &{} 3/7 &{} 1 &{} 2/3 \\ 1/4 &{} 1/4 &{} 3/2 &{} 1 \end{array} \right) , \,\,\,\, {A}_{2}=\left( \begin{array}{cccc} 1 &{} 3/7 &{} 7/3 &{} 4 \\ 7/3 &{} 1 &{} 3/7 &{} 3/2 \\ 3/7 &{} 7/3 &{} 1 &{} 3/7 \\ 1/4 &{} 2/3 &{} 7/3 &{} 1 \end{array} \right) ,\\ {A}_{3}= & {} \left( \begin{array}{cccc} 1 &{} 1/9 &{} 3/2 &{} 3/2 \\ 9 &{} 1 &{} 3/2 &{} 2/3 \\ 2/3 &{} 2/3 &{} 1 &{} 3/7 \\ 2/3 &{} 3/2 &{} 7/3 &{} 1 \end{array} \right) ,\,\,\,\, {A}_{4}=\left( \begin{array}{cccc} 1 &{} 1/9 &{} 4 &{} 2/3 \\ 9 &{} 1 &{} 3/2 &{} 3/2 \\ 1/4 &{} 2/3 &{} 1 &{} 4 \\ 3/2 &{} 2/3 &{} 1/4 &{} 1 \end{array} \right) ,\\ {A}_{5}= & {} \left( \begin{array}{cccc} 1 &{} 2/3 &{} 2/3 &{} 2/3 \\ 3/2 &{} 1 &{} 1/9 &{} 1 \\ 3/2 &{} 9 &{} 1 &{} 2/3 \\ 3/2 &{} 1 &{} 3/2 &{} 1 \end{array} \right) , \,\,\,\, {A}_{6}=\left( \begin{array}{cccc} 1 &{} 3/7 &{} 3/2 &{} 7/3 \\ 7/3 &{} 1 &{} 4 &{} 4 \\ 2/3 &{} 1/4 &{} 1 &{} 9 \\ 3/7 &{} 1/4 &{} 1/9 &{} 1 \end{array} \right) ,\\ {A}_{7}= & {} \left( \begin{array}{cccc} 1 &{} 1/4 &{} 3/2 &{} 3/2 \\ 4 &{} 1 &{} 4 &{} 4 \\ 2/3 &{} 1/4 &{} 1 &{} 3/2 \\ 2/3 &{} 1/4 &{} 2/3 &{} 1 \end{array} \right) ,\,\,\,\, {A}_{8}=\left( \begin{array}{cccc} 1 &{} 2/3 &{} 7/3 &{} 3/2 \\ 3/2 &{} 1 &{} 2/3 &{} 4 \\ 3/7 &{} 3/2 &{} 1 &{} 7/3 \\ 2/3 &{} 1/4 &{} 3/7 &{} 1 \end{array} \right) ,\\ {A}_{9}= & {} \left( \begin{array}{cccc} 1 &{} 3/2 &{} 3/2 &{} 7/3 \\ 2/3 &{} 1 &{} 9 &{} 9 \\ 2/3 &{} 1/9 &{} 1 &{} 9 \\ 3/7 &{} 1/9 &{} 1/9 &{} 1 \end{array} \right) , \,\,\,\, {A}_{10}{=}\left( \begin{array}{cccc} 1 &{} 7/3 &{} 3/2 &{} 4 \\ 3/7 &{} 1 &{} 3/2 &{} 7/3 \\ 2/3 &{} 2/3 &{} 1 &{} 9 \\ 1/4 &{} 3/7 &{} 1/9 &{} 1 \end{array} \right) ,\\ {A}_{11}= & {} \left( \begin{array}{cccc} 1 &{} 2/3 &{} 2/3 &{} 3/2 \\ 3/2 &{} 1 &{} 3/7 &{} 2/3 \\ 3/2 &{} 7/3 &{} 1 &{} 7/3 \\ 2/3 &{} 3/2 &{} 3/7 &{} 1 \end{array} \right) , \,\,\,\, {A}_{12}{=}\left( \begin{array}{cccc} 1 &{} 3/7 &{} 3/2 &{} 2/3 \\ 7/3 &{} 1 &{} 3/2 &{} 3/2 \\ 2/3 &{} 2/3 &{} 1 &{} 3/2 \\ 3/2 &{} 2/3 &{} 2/3 &{} 1 \end{array} \right) ,\\ {A}_{13}= & {} \left( \begin{array}{cccc} 1 &{} 3/2 &{} 1/4 &{} 3/7 \\ 2/3 &{} 1 &{} 2/3 &{} 3/7 \\ 4 &{} 3/2 &{} 1 &{} 2/3 \\ 7/3 &{} 7/3 &{} 3/2 &{} 1 \end{array} \right) ,\,\,\,\, {A}_{14}{=}\left( \begin{array}{cccc} 1 &{} 9 &{} 7/3 &{} 4 \\ 1/9 &{} 1 &{} 4 &{} 7/3 \\ 3/7 &{} 1/4 &{} 1 &{} 1/9 \\ 1/4 &{} 3/7 &{} 9 &{} 1 \end{array} \right) ,\\ {A}_{15}= & {} \left( \begin{array}{cccc} 1 &{} 7/3 &{} 2/3 &{} 1 \\ 3/7 &{} 1 &{} 1/9 &{} 1/4 \\ 3/2 &{} 9 &{} 1 &{} 2/3 \\ 1 &{} 4 &{} 3/2 &{} 1 \end{array} \right) , \,\,\,\, {A}_{16}{=}\left( \begin{array}{cccc} 1 &{} 2/3 &{} 2/3 &{} 1/4 \\ 3/2 &{} 1 &{} 1/9 &{} 1/4 \\ 3/2 &{} 9 &{} 1 &{} 2/3 \\ 4 &{} 4 &{} 3/2 &{} 1 \end{array} \right) ,\\ {A}_{17}= & {} \left( \begin{array}{cccc} 1 &{} 2/3 &{} 2/3 &{} 1/9 \\ 3/2 &{} 1 &{} 1 &{} 2/3 \\ 3/2 &{} 1 &{} 1 &{} 7/3 \\ 9 &{} 3/2 &{} 3/7 &{} 1 \end{array} \right) ,\,\,\,\, {A}_{18}{=}\left( \begin{array}{cccc} 1 &{} 3/2 &{} 2/3 &{} 1/4 \\ 2/3 &{} 1 &{} 3/7 &{} 7/3 \\ 3/2 &{} 7/3 &{} 1 &{} 3/2 \\ 4 &{} 3/7 &{} 2/3 &{} 1 \end{array} \right) ,\\ {A}_{19}= & {} \left( \begin{array}{cccc} 1 &{} 3/2 &{} 1/4 &{} 3/7 \\ 2/3 &{} 1 &{} 2/3 &{} 3/7 \\ 4 &{} 3/2 &{} 1 &{} 2/3 \\ 7/3 &{} 7/3 &{} 3/2 &{} 1 \end{array} \right) ,\,\,\,\, {A}_{20}{=}\left( \begin{array}{cccc} 1 &{} 3/2 &{} 2/3 &{} 1/9 \\ 2/3 &{} 1 &{} 3/7 &{} 2/3 \\ 3/2 &{} 7/3 &{} 1 &{} 7/3 \\ 9 &{} 3/2 &{} 3/7 &{} 1 \end{array} \right) . \end{aligned}$$

Here it should be pointed out that the matrices \(A_{k}\) (\(k=1,2,\ldots ,20\)) are obtained from the known additive reciprocal matrices in [59] using the following transformation formula [58]:

$$\begin{aligned} a_{ij}=\frac{b_{ij}}{1-b_{ij}}, \end{aligned}$$

where \(a_{ij}\) stands for an entry in a PCM \(A=(a_{ij})_{n\times n}\) and \(b_{ij}\) denotes an entry in an additive reciprocal matrix \(B=(b_{ij})_{n\times n}.\) Moreover, it is noted that the number of DMs in numerical examples simulating large-scale GDM is typically no more than 50, for instance, 50 experts in [50], 35 agents in [20], 30 experts in [29], 25 agents in [46] and 20 experts in [59]. Additionally, one can see that the scale (order) of system dynamics models for simulating a complex social and economic system is about 10–100 [47]. Hence, a GDM problem with twenty experts can be considered to be large scale. In what follows, we give the solution process of the large-scale GDM problem with 20 experts.
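A minimal sketch of this additive-to-multiplicative transformation (our own code; the entries \(b_{ij}\) are assumed to lie strictly between 0 and 1) is:

```python
import numpy as np

def additive_to_multiplicative(B):
    """Transform b_ij (with b_ij + b_ji = 1) into a_ij = b_ij / (1 - b_ij)."""
    return B / (1.0 - B)
```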

Step 1  Using the minimum–maximum method shown in (2) and (3), an interval-valued preference relation is constructed from the above twenty matrices as follows:

$$\begin{aligned} {\tilde{A}}=\left( \begin{array}{cccc} \left[ {1,1}\right] &{}{\left[ {1/9,9}\right] }&{}{\left[ {1/4,9}\right] }&{}{\left[ {1/9,4}\right] }\\ \left[ {1/9,9}\right] &{}{\left[ {1,1}\right] }&{}{\left[ {1/9,9}\right] }&{}{\left[ {3/7,9}\right] }\\ \left[ {1/9,4}\right] &{}{\left[ {1/9,9}\right] }&{}{\left[ {1,1}\right] }&{}{\left[ {1/9,9}\right] }\\ \left[ {1/4,9}\right] &{}{\left[ {1/9,7/3}\right] }&{}{\left[ {1/9,9}\right] }&{}{\left[ {1,1}\right] } \end{array}\right) . \end{aligned}$$

It is seen from the matrix \({\tilde{A}}\) that the entries behave peculiarly. That is, several entries are exactly [1/9, 9] and the bounds of the others are 1/9 or 9. This phenomenon reveals that the judgements of the DMs exhibit great divergence in the complex decision-making environment. Therefore, it is difficult to reach a consensus for the GDM problem.

Step 2  Because time is pressing, the DMs cannot discuss endlessly; a fast yet reasonable decision should be reached. Here the proposed model is used to maximize the consensus degree of the DMs. For example, we choose \(p=q=1\) in the fitness function (7) for the computation. Figure 2 shows the variations of the fitness function versus the generation number. As the generation number increases, the value of the fitness function decreases and tends to a stable value. This means that the optimal solution can be obtained after 100 generations.

Fig. 2 Variations of the fitness function versus the generation number for \(p=q=1\)

Step 3  By running the PSO algorithm 100 times, the mean matrix is given as follows:

$$\begin{aligned} {M}=\left( \begin{array}{cccc} 1.0000 &{} 0.6667 &{} 0.7037 &{} 1.0020 \\ 1.5000 &{} 1.0000 &{} 1.0000 &{} 1.5000 \\ 1.4210 &{} 1.0000 &{} 1.0000 &{} 1.5000 \\ 0.9980 &{} 0.6667 &{} 0.6667 &{} 1.0000 \end{array} \right) . \end{aligned}$$

Based on the RGMM method, the final priorities of the alternatives are obtained as (0.2031, 0.3004, 0.2964, 0.2002). The ranking of the alternatives is \(x_{2}\succ x_{3}\succ x_{1}\succ x_{4},\) so the injured persons can be treated in time. Moreover, one can see from [59] that an exit-delegation mechanism has been proposed for constructing a dynamic consensus model. However, in the practical case, time is most precious and a fast yet effective decision should be reached. The proposed model can provide an intelligent and fast decision procedure. In addition, it is found that the optimal solution is not given in [59]. For the sake of comparison, we complete the consensus process in [59]; it is shown in Appendix 2. The result obtained with the method of [59] is in agreement with the present observation. This means that the proposed model is effective in saving time and reaching a good consensus among the DMs.

Table 4 Priority vectors and ranking of alternatives for \(p=1\) and various values of q, respectively

Finally, it is of interest to analyze the sensitivity of the final solution of the GDM problem to the parameters p and q. For convenience, the parameters p and q are written as the two-tuple (p, q). Based on the RGMM method, the priority weights of the alternatives are determined in Tables 4 and 5 for selected values of (p, q). It is found from Tables 4 and 5 that the ranking of the alternatives does not change with the variations of (p, q). This implies that the final solution of the large-scale GDM problem is not sensitive to the parameters p and q.

Table 5 Priority vectors and ranking of alternatives for \(q=1\) and various values of p, respectively

Conclusions

It is an important task to reach the optimal solution of a group decision-making (GDM) problem with a high consensus level within a group of experts. In particular, for large-scale GDM with a high degree of uncertainty and urgency, a fast yet effective decision should be reached. In the present study, a novel GDM model has been proposed and applied to the large-scale case. A new optimization problem has been constructed by considering the acceptable consistency of the collective matrix and minimizing the distance between the individual pairwise comparison matrices (PCMs) and the collective one. The particle swarm optimization (PSO) algorithm has been applied to solve the constructed optimization problem. Numerical examples have been carried out and some comparisons have been offered. The obtained results show that the proposed model is effective in solving typical and large-scale GDM problems with PCMs. The main observations are summarized as follows:

  • The minimum–maximum method has been proposed to construct an interval multiplicative reciprocal preference relation, which serves as the constraint on the decision-makers' opinions.

  • As compared to the existing methods, the proposed model can decrease the scale (order) of the optimization problem.

  • A fast yet effective decision can be reached using the proposed consensus model for the large-scale GDM arising in emergency management.

In addition, it should be pointed out that the two-objective optimization problem has been simplified to a single-objective one in the present study. In fact, it is of much significance to obtain multiple optimal solutions by solving the multi-objective optimization problem [13, 45]. In future work, evolutionary computation methods such as the PSO algorithm could be used to solve the constructed multi-objective optimization problems such that various decision schemes can be offered.