1 Introduction

One of the most important features of practical problems is the dynamic nature of their circumstances, which may change the problem-solving process and even the answers to problems that have already been solved. For example, the study of the location of gas molecules led researchers to accept uncertainty as a scientific issue that is futile to try to eliminate [1, 2]. The COVID-19 epidemic has changed many aspects of human life over the past year and has revealed flaws in existing methods that were either hidden or considered insignificant. From the beginning of the epidemic, once the initial psychological and social turmoil had subsided, one of the necessary tasks was to interact with the new conditions in order to adapt to and overcome them. In other words, it was not possible to stay at home and shut down economic, social, scientific, and educational activities in the long run. Therefore, over time, the closed activities were re-opened in accordance with appropriate health protocols, and society returned to near-normal conditions.

Schools and universities are among the areas most affected by the epidemic, because preventive actions are difficult to implement there. For this reason, they closed very early in most countries and re-opened very late, and in many countries they are still closed. This closure did not mean the end of scientific activities such as conferences, education, and research, because cyberspace provided a suitable platform for continuing the work, albeit not with the previous quality. In the teaching process, providing content is not everything; the teacher must also ensure that learning actually takes place.

Homework and tests are suitable tools for getting students to study and for eliminating weaknesses and strengthening strengths on both sides of the educational activity, i.e., educator and learner. These tools have been severely weakened by the new conditions of society, because there are many reasons to doubt a high score obtained in a virtual test or in submitted homework. On the other hand, teachers and professors usually gain a relative knowledge of students' scientific abilities from their activities during several months of training, even in virtual form, and this knowledge can be cited in learning assessments. An important advantage of this approach is that it relies on the student's scientific activities during the course; however, it does not work for classes with many participants, or even for identifying a strong but shy student. The main question is how to increase the credibility and validity of e-learning results. This question has been asked before, even in normal social situations, and the declared scores have always been accompanied by degrees of uncertainty that were often overlooked. In fact, the new conditions increase this uncertainty and thus reduce the quality of education. Therefore, finding a way to model it properly can contribute greatly to the educational satisfaction of society.

Answering this question can also help solve problems in other areas of evaluation-based decision-making, such as medicine, exploration trips, cosmology, artificial intelligence, and machine learning, in which, for many reasons, the predetermined or measured information is unreliable and uncertain.

Shortly after its introduction, fuzzy sets (FSs) theory [3] became an important tool for modeling the uncertainty of real-world problems and spread rapidly [4,5,6]. As the scope of application of fuzzy sets increased, other generalizations such as type-2 fuzzy sets [7], intuitionistic fuzzy sets (IFSs) [8], Pythagorean fuzzy sets (PFSs) [9], rough sets and their merging with IFSs [10], hesitant fuzzy sets (HFSs) [11], and intuitionistic hesitant fuzzy sets (IHFSs) [12] were introduced. Of these, HFSs, due to their use of a finite set of membership degrees, are more capable of modeling the middle category of the triple classification of problems [1], i.e., organized complexity, which includes the bulk of real-world problems. Therefore, in a short period of time, many articles were published that, while explaining the necessary mathematical concepts, such as operation laws [11, 13,14,15], score and variance functions [16], distance and similarity measures [17,18,19], correlation coefficients [18,19,20], entropy measures [21], and aggregation operators [22,23,24,25,26,27,28], also dealt with practical applications. The wide range of applications of HFSs has led to many generalizations, each of which is subject to specific conditions. For example, if the decision maker expresses the degrees of doubt as intervals between 0 and 1, the interval-valued hesitant fuzzy sets (IVHFSs) are obtained [29]. Recently, other generalizations of HFSs, called hesitant fuzzy numbers (HFNs), have been introduced and utilized in solving decision-making problems. Deli [30] used a finite set of trapezoidal fuzzy numbers as the elements of HFEs, which are called generalized trapezoidal HFNs (GTHFNs). Ranjbar [31] assumed that the membership degrees of HFSs are defined not by crisp numbers but by fuzzy numbers in [0, 1]; this extension is also called HFNs.

Solving multi-attribute group decision-making (MAGDM) problems [32,33,34,35] is one of the most practical applications of HFSs. Therefore, several common methods for solving such problems based on evaluation values have been adapted to hesitant fuzzy conditions, e.g., the power average-based score function [36], the HF-TOPSIS method [37], the hesitant fuzzy COMET method [38], approaches to hesitant fuzzy MADM problems with incomplete weight information [39], the HF-VIKOR method [40], and hesitant fuzzy aggregation operators [22,23,24,25,26,27, 41]. Also, based on preference relations, hesitant fuzzy preference relations [16] and hesitant probabilistic multiplicative preference relations in group decision-making [42] have been used to study the uncertainty of preference modeling, the consistency of preference relations, and consensus among DMs.

As we know, the main advantage of hesitant fuzzy sets over other types of fuzzy sets is the use of a finite number of membership degrees instead of an infinite number of them, an advantage that has been virtually neutralized in the recent extensions mentioned above, i.e., the existing HFNs. On the other hand, in many cases it is necessary to use recorded/historical crisp values in the decision-making process, values that may have been obtained under certain conditions or artificially produced for specific purposes. Utilizing such values is usually accompanied by some degree of skepticism on the part of the decision makers when the initial conditions and assumptions have changed. In these cases, it does not make sense to use uncertainty models that omit part of the problem information, i.e., the preset values [29, 31, 38, 42], while others add to the ambiguity of the problem [30]. We therefore need another type of HFN that includes the decision makers' satisfaction degrees in addition to the available values [43, 44]. A HFN in this new structure, i.e., \(\tilde{a}_{\rm H}=\langle a;\{\gamma_1,\gamma_2,\ldots ,\gamma_n\}\rangle \), consists of two parts: the quantitative part and the membership part. The quantitative part is a crisp number that is usually predetermined in different ways (recorded values, historical values, self-assessment values, \(\ldots \)). The membership part includes the opinions of the decision makers, either on the quantitative part or on the whole issue, expressed as a finite set of values between 0 and 1. This model of HFNs is suitable for many fields of research, such as artificial intelligence (AI), machine learning (ML), and data analytics (DA) in general and fairness-aware machine learning (FAML) in particular [45], as well as economics, social sciences, etc. In the case of the COVID-19 epidemic, their scope of application has become much wider, with the emphasis on social distancing and minimizing face-to-face communication.
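To make the two-part structure concrete, the following minimal Python sketch (illustrative only; the class and field names are our own and not part of the cited works) stores the crisp quantitative part together with the finite set of membership degrees.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class HFN:
    """Hesitant fuzzy number <a; {gamma_1, ..., gamma_n}>."""
    a: float              # quantitative part: recorded/historical crisp value
    h: Tuple[float, ...]  # membership part: finite set of degrees in [0, 1]

    def __post_init__(self):
        # the membership part must consist of values in [0, 1]
        if not all(0.0 <= g <= 1.0 for g in self.h):
            raise ValueError("membership degrees must lie in [0, 1]")

# example: a recorded exam score of 17 with three satisfaction degrees of the decision maker
exam_score = HFN(a=17.0, h=(0.6, 0.7, 0.9))
```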

The question is: are the previously mentioned hesitant fuzzy models capable of modeling the situation just introduced? The answer is no, because none of them is suitable for the simultaneous use of a definite value and a finite set of associated membership degrees.

This shows that HFNs with the mathematical representation \(\tilde{a}_{\rm H}=\langle a;\{\gamma_1,\gamma_2,\ldots ,\gamma_n\}\rangle \) are, like any of their predecessors, tools for optimally modeling uncertainties in practical problems for which, for whatever reason, the previous models lack the necessary efficiency. Therefore, the development of a calculus for HFNs is essential for their practical applications [46].

With the emergence of FSs, the search for functions \(f:\,[0,1]\times [0,1]\rightarrow [0,1]\) with the boundary condition, commutativity, associativity, and monotonicity properties also began. These functions are called t-norms and t-conorms [47], and different types have been defined so far: Frank, Hamacher, Einstein, algebraic, etc. There are aggregation operators for HFSs using Frank, Hamacher, Einstein, Dombi, and algebraic t-norms and t-conorms [12, 29, 48,49,50,51,52]. It should be noted that, like any other new concept, HFNs have been introduced in response to the needs of real decision situations that existing methods are unable to solve. Therefore, the aforementioned functions in their existing form cannot be used with HFNs directly and must be updated.

In this article, new score and variance functions, new aggregation operators, and some Archimedean t-norm and t-conorm-based operators for HFNs will be proposed. Based on these, arithmetic operations of adjusted HFNs will be defined in their general form and in special forms, i.e., for the algebraic, Einstein, Hamacher, and Frank t-norms and t-conorms. Then, the A-HFNWA, A-HFNWG, A-HFNOWA, and A-HFNOWG operators will be defined for use with HFNs. As an application of the proposed methods, we apply them to the process of scientific evaluation of students, especially in the circumstances of the COVID-19 pandemic.

The topics in this article are organized as follows. The basic concepts required by the other sections are given in Sect. 2. Section 3 will introduce new concepts such as the score and variance functions of HFNs, arithmetic operations of HFNs, and some Archimedean t-norm and t-conorm-based aggregation operators in their general and specific forms, along with proofs of some important properties. The application of these new concepts to solving MAGDM problems, a numerical example with a numerical analysis, and the conclusion will be discussed in Sects. 4, 5, and 6, respectively.

2 Some Basic Concepts and Definitions

Some of the basic concepts needed in the other sections will be reviewed in this section.

A HFS with the mathematical representation \(E=\{<x,h(x)>|x\in X\}\), in which X is a fixed set, uses a finite set of values from [0, 1] as the membership degrees of \(x\in X\), i.e., \(h(x)=\{\gamma_1,\gamma_2,\ldots ,\gamma_n\}\), which is called an HFE [11, 22]. Mathematical analysis of HFSs has produced ways to compare them and to define arithmetic operation laws and aggregation operators [20, 22,23,24,25,26,27]. What enables us to do this is finding functions \(f:\,[0,1]\times [0,1]\rightarrow [0,1]\) with the boundary condition, commutativity, associativity, and monotonicity properties. In general, such functions are called triangular norms (t-norms) and triangular conorms (t-conorms) [47, 53]. Functions \(T:\,[0,1]\times [0,1]\rightarrow [0,1]\) and \(S:\,[0,1]\times [0,1]\rightarrow [0,1]\) are called a t-norm and a t-conorm, respectively, if \(\forall x,y,z\in [0,1]\):

$$ \begin{array}{*{20}l} {(1)\,T(1,x) = x,} & {(1^{\prime})\,S(0,x) = x,} \\ {(2)\,T(x,y) = T(y,x),} & {(2^{\prime})\,S(x,y) = S(y,x),} \\ {(3)\,T(x,T(y,z)) = T(T(x,y),z),} & {(3^{\prime})\,S(x,S(y,z)) = S(S(x,y),z),} \\ {(4)\,x \le x^{\prime},\ y \le y^{\prime} \Rightarrow T(x,y) \le T(x^{\prime},y^{\prime}),} & {(4^{\prime})\,x \le x^{\prime},\ y \le y^{\prime} \Rightarrow S(x,y) \le S(x^{\prime},y^{\prime}).} \\ \end{array} $$

A t-norm T and a t-conorm S are called Archimedean if, for all \(x\in (0,1)\), we have \(T(x,x)<x\) and \(S(x,x)>x\). If the Archimedean t-norm T and the Archimedean t-conorm S are also strictly increasing for each \(x,y\in (0,1)\), they are called strictly Archimedean t-norm and t-conorm, respectively. A strictly Archimedean t-norm T has been characterized by Klement and Mesiar [54] using an additive generator \(g:[0,1]\rightarrow [0,+\infty )\) as \(T(x,y)=g^{-1}(g(x)+g(y))\). Letting \(f(t)=g(1-t)\), the corresponding strictly Archimedean t-conorm is \(S(x,y)=f^{-1}(f(x)+f(y))\).

There are many types of T and S, according to the definition of the function g(t) [47]. For example, \(g(t)=-\log t\) gives the algebraic t-norm and t-conorm, \(g(t)=\log \frac{2-t}{t}\) yields the Einstein t-norm and t-conorm, and the Hamacher t-norm and t-conorm are obtained from \(g(t)=\log \frac{\nu +(1-\nu )t}{t},\,\nu >0\).
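As an illustration, the following hedged Python sketch (function names are ours) turns these additive generators into concrete t-norms and t-conorms via \(T(x,y)=g^{-1}(g(x)+g(y))\) and \(S(x,y)=f^{-1}(f(x)+f(y))\) with \(f(t)=g(1-t)\):

```python
import math

# additive generators g and their inverses (algebraic, Einstein, Hamacher with nu > 0)
def g_algebraic(t):            return -math.log(t)
def g_algebraic_inv(u):        return math.exp(-u)

def g_einstein(t):             return math.log((2 - t) / t)
def g_einstein_inv(u):         return 2 / (math.exp(u) + 1)

def g_hamacher(t, nu=2.0):     return math.log((nu + (1 - nu) * t) / t)
def g_hamacher_inv(u, nu=2.0): return nu / (math.exp(u) + nu - 1)

def t_norm(x, y, g, g_inv):
    """Strict Archimedean t-norm T(x, y) = g^{-1}(g(x) + g(y))."""
    return g_inv(g(x) + g(y))

def t_conorm(x, y, g, g_inv):
    """Dual t-conorm via f(t) = g(1 - t): S(x, y) = f^{-1}(f(x) + f(y))."""
    f = lambda t: g(1 - t)
    f_inv = lambda u: 1 - g_inv(u)
    return f_inv(f(x) + f(y))

# the algebraic generator recovers T(x, y) = x*y and S(x, y) = x + y - x*y
assert abs(t_norm(0.3, 0.5, g_algebraic, g_algebraic_inv) - 0.15) < 1e-9
assert abs(t_conorm(0.3, 0.5, g_algebraic, g_algebraic_inv) - 0.65) < 1e-9
```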

Definition 1

[50] Let \(h,\, h_1\), and \(h_2\) be HFEs, and \(\lambda \) be a positive real number. Then

$$\begin{aligned}&(1)\,h_1\oplus h_2=\bigcup \limits_{\gamma_1\in h_1,\gamma_2\in h_2}\big \{S(\gamma_1,\gamma_2)\big \}=\bigcup \limits_{\gamma_1\in h_1,\gamma_2\in h_2}\big \{f^{-1}(f(\gamma_1)+f(\gamma_2))\big \};\\&(2)\,h_1\otimes h_2=\bigcup \limits_{\gamma_1\in h_1,\gamma_2\in h_2}\big \{T(\gamma_1,\gamma_2)\big \}=\bigcup \limits_{\gamma_1\in h_1,\gamma_2\in h_2}\big \{g^{-1}(g(\gamma_1)+g(\gamma_2))\big \};\\&(3)\,\lambda h=\bigcup \limits_{\gamma \in h}\big \{f^{-1}(\lambda f(\gamma ))\big \},\quad\lambda>0;\\&(4)\,h^\lambda =\bigcup \limits_{\gamma \in h}\big \{g^{-1}(\lambda g(\gamma ))\big \},\quad\lambda >0; \end{aligned}$$
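A minimal sketch of Definition 1 (helper names are ours; any generator pair (g, g_inv) of the kind shown earlier can be plugged in) forms the union over all pairs of membership values:

```python
import math

def hfe_sum(h1, h2, g, g_inv):
    """h1 (+) h2: union of S(gamma1, gamma2) over all pairs (Definition 1, item 1)."""
    f = lambda t: g(1 - t)
    f_inv = lambda u: 1 - g_inv(u)
    return sorted({round(f_inv(f(x) + f(y)), 10) for x in h1 for y in h2})

def hfe_product(h1, h2, g, g_inv):
    """h1 (x) h2: union of T(gamma1, gamma2) over all pairs (Definition 1, item 2)."""
    return sorted({round(g_inv(g(x) + g(y)), 10) for x in h1 for y in h2})

def hfe_scalar(h, lam, g, g_inv):
    """lambda * h = { f^{-1}(lambda f(gamma)) } (Definition 1, item 3)."""
    f = lambda t: g(1 - t)
    f_inv = lambda u: 1 - g_inv(u)
    return sorted({round(f_inv(lam * f(x)), 10) for x in h})

# algebraic generator g(t) = -log t
g = lambda t: -math.log(t); g_inv = lambda u: math.exp(-u)
print(hfe_sum({0.2, 0.4}, {0.5}, g, g_inv))   # [0.6, 0.7]
```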

Definition 2

[50] Let \(h_j (j=1,2,\ldots ,n)\) be a collection of HFEs, and \(0\le w_i\le 1\) with \(\sum \limits_{i=1}^{n}w_i=1\) be the weight vector of given HFEs. Then

$$\begin{aligned} (1)\quad \text{A-HFWA}(h_1,h_2,\ldots ,h_n)=\oplus_{i=1}^{n}(w_ih_i)=\bigcup \limits_{\gamma_i\in h_i}\bigg \{f^{-1}\big (\sum \limits_{i=1}^{n}w_i f(\gamma_i)\big )\bigg \}, \end{aligned}$$

is Archimedean t-norm and t-conorm-based hesitant fuzzy weighted averaging (A-HFWA) operator,

$$\begin{aligned} (2)\quad \text{A-HFWG}(h_1,h_2,\ldots ,h_n)=\otimes_{i=1}^{n}(h_i^{w_i})=\bigcup \limits_{\gamma_i\in h_i}\bigg \{g^{-1}\big (\sum \limits_{i=1}^{n}w_i g(\gamma_i)\big )\bigg \}, \end{aligned}$$

is Archimedean t-norm and t-conorm-based hesitant fuzzy weighted geometric (A-HFWG) operator,

$$\begin{aligned} (3) \quad \text{A-HFOWA}(h_1,h_2,\ldots ,h_n)=\oplus_{i=1}^{n}(w_ih_{\sigma (i)})=\bigcup \limits_{\gamma_{\sigma (i)}\in h_{\sigma (i)}}\bigg \{f^{-1}\big (\sum \limits_{i=1}^{n}w_i f(\gamma_{\sigma (i)})\big )\bigg \}, \end{aligned}$$

is the Archimedean t-norm and t-conorm-based hesitant fuzzy ordered weighted averaging (A-HFOWA) operator, and

$$\begin{aligned} (4) \quad \text{A-HFOWG}(h_1,h_2,\ldots ,h_n)=\otimes_{i=1}^{n}(h_{\sigma (i)}^{w_i})=\bigcup \limits_{\gamma_{\sigma (i)}\in h_{\sigma (i)}}\bigg \{g^{-1}\big (\sum \limits_{i=1}^{n}w_i g(\gamma_{\sigma (i)})\big )\bigg \} \end{aligned}$$

is the Archimedean t-norm and t-conorm-based hesitant fuzzy ordered weighted geometric (A-HFOWG) operator, in which \(h_{\sigma (i)},\, i=1,2,\ldots ,n,\) is a permutation of \(h_i,\,i=1,2,\ldots ,n,\) such that \(h_{\sigma (1)}\le h_{\sigma (2)}\le \cdots \le h_{\sigma (n)}\).

Depending on the additive generator g, Zhang [50] also showed that the above AOs reduce to some other special AOs.

For an arbitrary HFE \(h(x)=\{\gamma_1,\gamma_2,\ldots ,\gamma_n\}\), \(S(h)=\frac{1}{n}\sum_{\gamma \in h}\gamma \) is its score function, and \(Var(h)=\frac{1}{n}\sqrt{\sum_{\gamma_i,\gamma_j}(\gamma_i-\gamma_j)^2} \) is the variance of the HFE h(x) [16, 22]. These values are utilized to compare HFEs [16]: two arbitrary HFEs \(h_1\) and \(h_2\) are called equivalent if \(S(h_1)=S(h_2)\) and \(Var(h_1)=Var(h_2)\); otherwise, the one with the larger score value is the larger HFE, and if the score values are equal, the one with the smaller variance is the larger HFE.

Two arbitrary HFEs \(h_1(x)\) and \(h_2(x)\) are called adjusted if their cardinalities are equal, i.e., \(|h_1(x)|=|h_2(x)|\). If not, we can adjust them by adding some values to the one with the smaller cardinality. This can be done using its minimum element (pessimistic mode), its maximum element (optimistic mode), or the arithmetic average of its elements (indifference mode).
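A small sketch of these notions (helper names are ours; the pairwise form of the variance and the three adjustment modes follow the description above) is:

```python
import math

def hfe_score(h):
    """S(h): arithmetic mean of the membership degrees."""
    return sum(h) / len(h)

def hfe_variance(h):
    """Var(h) = (1/n) * sqrt( sum over pairs (gamma_i - gamma_j)^2 ), pairs taken once."""
    vals = list(h)
    n = len(vals)
    return math.sqrt(sum((vals[i] - vals[j]) ** 2
                         for i in range(n) for j in range(i + 1, n))) / n

def adjust(h_short, target_len, mode="indifference"):
    """Pad the shorter HFE so both HFEs have the same cardinality."""
    filler = {"pessimistic": min(h_short),
              "optimistic":  max(h_short),
              "indifference": sum(h_short) / len(h_short)}[mode]
    return sorted(list(h_short) + [filler] * (target_len - len(h_short)))

print(hfe_score([0.2, 0.4, 0.6]))            # mean membership of the HFE
print(adjust([0.2, 0.6], 4, "optimistic"))   # pad to length 4 in optimistic mode
```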

Definition 3

[16] Let \(h,h_j\,(j=1,2,\ldots ,n)\) be a collection of adjusted HFEs, and \(w=(w_1,w_2,\ldots ,w_n)^T\) with \(w_i\in [0,1],\sum_{i=1}^n w_i=1\) be the weight vector of the given HFEs. Then,

$$\begin{aligned}&\,(1) h^\lambda =\bigg \{(h^{\sigma (t)})^\lambda |t=1,2,\ldots ,l\bigg \};\\&(2)\quad \lambda h=\bigg \{1-(1-h^{\sigma (t)})^\lambda |t=1,2,\ldots ,l\bigg \};\\&(3)\quad h_1\oplus h_2=\bigg \{h_1^{\sigma (t)}+h_2^{\sigma (t)}-h_1^{\sigma (t)}h_2^{\sigma (t)}|t=1,2,\ldots ,l\bigg \};\\&(4)\quad h_1\otimes h_2=\bigg \{h_1^{\sigma (t)}h_2^{\sigma (t)}|t=1,2,\ldots ,l\bigg \};\\&(5)\quad \displaystyle \oplus_{j=1}^nh_j =\bigg \{1-\Pi_{j=1}^n(1- h_j^{\sigma (t)}) |t=1,2,\ldots ,l\bigg \};\\&(6)\quad \displaystyle \otimes_{j=1}^nh_j =\bigg \{\Pi_{j=1}^nh_j^{\sigma (t)} |t=1,2,\ldots ,l\bigg \};\\&(7)\quad \text{an adjusted hesitant fuzzy weighted average (AHFWA) operator is}\\&{\rm AHFWA}(h_1,h_2,\ldots , h_n)=\oplus_{j=1}^nw_jh_j=\bigg \{1-\Pi_{j=1}^n(1-h_j^{\sigma (t)})^{w_j} |t=1,2,\ldots ,l\bigg \};\\&(8)\quad \text{an adjusted hesitant fuzzy weighted geometric ({ AHFWG}) operator is}\\&{\rm AHFWG}(h_1,h_2,\ldots ,h_n)=\otimes_{j=1}^n(h_j)^{w_j}=\bigg \{\Pi_{j=1}^n (h_j^{\sigma (t)})^{w_j} |t=1,2,\ldots ,l\bigg \}; \end{aligned}$$

where \(h_j^{\sigma (t)}\) is the tth smallest value in \(h_j\).
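As a quick illustration of items (7) and (8) of Definition 3 (names are ours; the HFEs are assumed already adjusted to a common length and are sorted so that the tth smallest values are combined position-wise):

```python
import math

def ahfwa(hfes, weights):
    """AHFWA(h_1,...,h_n) = { 1 - prod_j (1 - h_j^{sigma(t)})^{w_j} } (Definition 3, item 7)."""
    cols = zip(*[sorted(h) for h in hfes])          # tth smallest value of each adjusted HFE
    return [1 - math.prod((1 - g) ** w for g, w in zip(col, weights)) for col in cols]

def ahfwg(hfes, weights):
    """AHFWG(h_1,...,h_n) = { prod_j (h_j^{sigma(t)})^{w_j} } (Definition 3, item 8)."""
    cols = zip(*[sorted(h) for h in hfes])
    return [math.prod(g ** w for g, w in zip(col, weights)) for col in cols]

# two adjusted HFEs with weights 0.7 and 0.3
print(ahfwa([[0.2, 0.4], [0.5, 0.6]], [0.7, 0.3]))
```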

Due to the flexibility of HFEs in modeling experimental problems, researchers have defined new extensions of them [31]. For a fixed set X, \(\lambda_i\in [0,1], (i=1,2,\ldots ,n)\), and real numbers \(a\le b\le c\le d\), Deli [30] proposed generalized trapezoidal HFNs (GTHF-numbers) as \(\langle (a,b,c,d); \{\lambda_i:\lambda_i\in \lambda (x)\}\rangle \), where \(\lambda (x)\) is a set of some values in [0, 1]. Another type of HFNs has been defined as follows [43].

Definition 4

Let X be the reference set and \(a\in \mathbb{R}\). A HFN \(\tilde{a}_{\rm H}\) in the set of real numbers \(\mathbb{R}\) is defined as \(\langle a; h(a)\rangle \), where the HFE h(a) is a finite set of some values in [0, 1], considered as the membership degrees of \(a\in X\).

Arithmetic operations of HFNs have been defined as follows.

Definition 5

Let \(\tilde{a}_{\rm H}=\langle a; h(a)\rangle \) and \(\tilde{b}_{\rm H}=\langle b; h(b)\rangle \) be two HFNs and \(\lambda >0\). Then

\((1)\quad \tilde{a}_{\rm H}\oplus \tilde{b}_{\rm H}=\langle a+b; h(a)\cup h(b)\rangle \), where \(h(a)\cup h(b)=\bigcup \limits_{\gamma_1\in h(a),\gamma_2\in h(b)}\max \{\gamma_1,\gamma_2\}\),

\((2)\quad \lambda \tilde{a}_{\rm H}=\langle \lambda a; h(a)\rangle \),

\((3)\quad (\tilde{a}_{\rm H})^\lambda =\langle a^\lambda ; h(a)\rangle \),

\((4)\quad \tilde{a}_{\rm H}\otimes \tilde{b}_{\rm H}=\langle a.b; h(a)\cap h(b)\rangle \), where \(h(a)\cap h(b)=\bigcup \limits_{\gamma_1\in h(a),\gamma_2\in h(b)}\min \{\gamma_1,\gamma_2\}\).

Definition 6

Let \(\tilde{a}_{\rm H}^i=\langle a_i; h(a_i)\rangle \,(i=1,2,\ldots ,k)\) be a collection of HFNs, and \(w=(w_1,w_2,\ldots ,w_k)\) with \(0\le w_i\le 1\) and \(\sum \limits_{i=1}^{k}w_i=1\) be their weight vector. Then

$$\begin{aligned} {\rm HWAA}_w(\tilde{a}_{\rm H}^1,\tilde{a}_{\rm H}^2,\ldots ,\tilde{a}_{\rm H}^k)=\bigg \langle \sum \limits_{i=1}^{k}w_ia_i; \bigcup \limits_{i=1}^{k}h(a_i)\bigg \rangle , \end{aligned}$$

is called hesitant fuzzy weighted arithmetic average (HWAA) operator where

$$\begin{aligned} \bigcup \limits_{i=1}^{k}h(a_i)=\bigcup \limits_{\gamma_i\in h(a_i)}\max \{\gamma_1,\gamma_2,\ldots ,\gamma_k\}. \end{aligned}$$

The hesitant fuzzy weighted arithmetic average operator is called the hesitant fuzzy arithmetic average (HAA) operator if \(w=\bigg (\dfrac{1}{k},\dfrac{1}{k},\ldots ,\dfrac{1}{k}\bigg )\).

Definition 7

Let \(w=(w_1,w_2,\ldots ,w_k)\) with \(w_i\in [0,1] \,\text{and}\,\sum \limits_{i=1}^{k}w_i=1\) be the weight vector of the HFNs \(\tilde{a}_{\rm H}^i=\langle a_i, h(a_i)\rangle ,\,i=1,2,\ldots ,k\). Then

$$\begin{aligned} HWGA_w(\tilde{a}_{\rm H}^1,\tilde{a}_{\rm H}^2,\ldots ,\tilde{a}_{\rm H}^k)=\bigg \langle \prod \limits_{i=1}^{k}a_i^{w_i}; \bigcap \limits_{i=1}^{k}h(a_i)\bigg \rangle , \end{aligned}$$

is called hesitant fuzzy weighted geometric average (HWGA) operator where

$$\begin{aligned} \bigcap \limits_{i=1}^{k}h(a_i)=\bigcup \limits_{\gamma_i\in h(a_i)}\min \{\gamma_1,\gamma_2,\ldots ,\gamma_k\}. \end{aligned}$$

The hesitant fuzzy weighted geometric average operator is called the hesitant fuzzy geometric average (HGA) operator if \(w=\bigg (\dfrac{1}{k},\dfrac{1}{k},\ldots ,\dfrac{1}{k}\bigg )\).
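A brief sketch of Definitions 6 and 7 (function names are ours; an HFN is stored as a pair (a, h)) shows the max-union and min-intersection of the membership parts:

```python
from itertools import product

def hwaa(hfns, weights):
    """HWAA: weighted arithmetic mean of crisp parts, max-union of memberships (Definition 6)."""
    a = sum(w * x for w, (x, _) in zip(weights, hfns))
    h = sorted({max(combo) for combo in product(*[h for _, h in hfns])})
    return (a, h)

def hwga(hfns, weights):
    """HWGA: weighted geometric mean of crisp parts, min-intersection of memberships (Definition 7)."""
    a = 1.0
    for w, (x, _) in zip(weights, hfns):
        a *= x ** w
    h = sorted({min(combo) for combo in product(*[h for _, h in hfns])})
    return (a, h)

# crisp part 0.6*4 + 0.4*6 = 4.8; memberships {0.4, 0.5, 0.8}
print(hwaa([(4, [0.3, 0.5]), (6, [0.4, 0.8])], [0.6, 0.4]))
```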

Definition 8

Let \(\tilde{a}_{\rm H}^i=\langle a_i; h(a_i)\rangle ,\,i=1,2,\ldots , k\) be the given HFNs, let \(\tilde{a}_{\rm H}^{(r)}=\langle a_{(r)}, h(a_{(r)})\rangle ,\,r=1,2,\ldots , k\) be the rth smallest of them, i.e., \(\tilde{a}_{\rm H}^{(1)}\le \tilde{a}_{\rm H}^{(2)}\le \cdots \le \tilde{a}_{\rm H}^{(k)}\), and let \(w=(w_1,w_2,\ldots ,w_k)\) with \(w_r\in [0,1]\, \text{and}\,\sum \limits_{r=1}^{k}w_r=1\) be the weight vector. Then (i) the hesitant fuzzy ordered weighted averaging (HOWA) operator is defined as

$$\begin{aligned} HOWA_w(\tilde{a}_{\rm H}^1,\tilde{a}_{\rm H}^2,\ldots ,\tilde{a}_{\rm H}^k)=\bigg \langle \sum \limits_{r=1}^{k}w_r a_{(r)}; \bigcup \limits_{r=1}^{k}h(a_{(r)})\bigg \rangle , \end{aligned}$$

where \(\bigcup \limits_{r=1}^{k} h(a_{(r)})=\bigcup \limits_{\gamma_{i}\in h(a_{(i)})}\max \{\gamma_{1},\gamma_{2},\ldots ,\gamma_{k}\}.\) (ii) Hesitant fuzzy ordered weighted geometric (HOWG) operator is defined as

$$\begin{aligned} HOWG_w(\tilde{a}_{\rm H}^1,\tilde{a}_{\rm H}^2,\ldots ,\tilde{a}_{\rm H}^k)=\bigg \langle \prod \limits_{i=1}^{k}a_{(i)}^{w_i}; \bigcap \limits_{i=1}^{k}h(a_{(i)})\bigg \rangle , \end{aligned}$$

where \(\bigcap \limits_{i=1}^{k}h(a_{(i)})=\bigcup \limits_{\gamma_{i}\in h(a_{(i)})}\min \{\gamma_{1},\gamma_{2},\ldots ,\gamma_{k}\}.\)

Definition 9

Let \(\tilde{a}_{\rm H}=\langle a, h(a)\rangle \) with \(h(a)=\{\gamma_1,\gamma_2,\ldots ,\gamma_n\}\) be a HFN whose \(\gamma_i\in [0,1]\) are possible satisfaction degrees. Then

(1) The mean value (or the score function) of HFN \(\tilde{a}_{\rm H}\) is displayed as \(S(\tilde{a}_{\rm H})\) and defined as \(S(\tilde{a}_{\rm H})=\dfrac{a}{n}\sum \limits_{i=1}^{n}\gamma_i.\)

(2) The hesitant degree (or variance) of HFN \(\tilde{a}_{\rm H}\) is \(\Pi (\tilde{a}_{\rm H})=a\sqrt{\dfrac{1}{n}\sum \limits_{i=1}^{n}(\overline{\gamma }-\gamma_i)^2}\), where \(\overline{\gamma }=\dfrac{1}{n}\sum \limits_{i=1}^{n}\gamma_i\).

2.1 An Algorithm Comparing HFNs

Let \(\tilde{a}_{\rm H}^i=\langle a_i, h(a_i)\rangle ,\,i=1,2,\ldots ,k\) be arbitrary HFNs. To compare two or more HFNs, follow the steps below.

  • Step 1 For \(i=1,2,\ldots , k\), compute \(S(\tilde{a}_{\rm H}^i)\) and \(\Pi (\tilde{a}_{\rm H}^i)\).

  • Step 2 Get \(S(\tilde{a}_{\rm H}^i), i=1,2,\ldots , k\), and rank them in ascending order. The ranking order of the HFNs matches the ranking order of their mean values, unless some of the mean values are equal.

  • Step 3 For \(i\ne j\) with \(S(\tilde{a}_{\rm H}^i)=S(\tilde{a}_{\rm H}^j)\), get \(\Pi (\tilde{a}_{\rm H}^i)\) and \(\Pi (\tilde{a}_{\rm H}^j)\). If \(\Pi (\tilde{a}_{\rm H}^i)=\Pi (\tilde{a}_{\rm H}^j)\) then \(\tilde{a}_{\rm H}^i=\tilde{a}_{\rm H}^j\). Otherwise, the larger one has less hesitant degree, i.e., if \(\Pi (\tilde{a}_{\rm H}^i)>\Pi (\tilde{a}_{\rm H}^j)\) then \(\tilde{a}_{\rm H}^i\prec \tilde{a}_{\rm H}^j\).

Example 1

Let \(\tilde{a}_{\rm H}=\langle 2,\{0.1,0.2,0.6,0.7\}\rangle \) and \(\tilde{b}_{\rm H}=\langle 2, \{0.2,0.3,0.4,0.7\}\rangle \) be two HFNs. Then \(S(\tilde{a}_{\rm H})=0.8, \,S(\tilde{b}_{\rm H})=0.8, \,\Pi (\tilde{a}_{\rm H})=0.51\), and \(\Pi (\tilde{b}_{\rm H})=0.374\). Based on the proposed method, since \(S(\tilde{a}_{\rm H})=S(\tilde{b}_{\rm H})\) and \(\Pi (\tilde{a}_{\rm H})>\Pi (\tilde{b}_{\rm H})\), we have \(\tilde{a}_{\rm H}\prec \tilde{b}_{\rm H}\).
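The comparison in Example 1 can be checked with a few lines of Python (a sketch with our own helper names; the hesitancy uses the mean membership \(\overline{\gamma }\), which is the form the values above correspond to):

```python
import math

def score(a, h):
    """S(<a; h>) = (a / n) * sum(gamma_i)  (Definition 9)."""
    return a * sum(h) / len(h)

def hesitancy(a, h):
    """Pi(<a; h>) = a * sqrt( (1/n) * sum (mean(h) - gamma_i)^2 )."""
    m = sum(h) / len(h)
    return a * math.sqrt(sum((m - g) ** 2 for g in h) / len(h))

a_h = (2, [0.1, 0.2, 0.6, 0.7])
b_h = (2, [0.2, 0.3, 0.4, 0.7])
print(round(score(*a_h), 3), round(score(*b_h), 3))          # 0.8 0.8
print(round(hesitancy(*a_h), 3), round(hesitancy(*b_h), 3))  # 0.51 0.374
```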

3 Several Aggregation Operators of HFNs

In this section, a new way to calculate score and variance of HFNs, and some Archimedean t-norm and t-conorm-based aggregation operators of HFNs will be introduced.

Definition 10

Consider HFN \(\tilde{a}_{\rm H}=\langle a, \{\gamma_1,\gamma_2,\ldots ,\gamma_n\}\rangle \). Then, for \(0\le w\le 1\) its score function (\(S(\tilde{a}_{\rm H})\)) and variance (\(\Pi (\tilde{a}_{\rm H})\)) can be defined as follows:

(1) \(S(\tilde{a}_{\rm H})=(a)^w(\overline{\gamma })^{1-w}\), where \(\overline{\gamma }=\dfrac{\sum_{i=1}^{n}\gamma_i}{n}\),

(2) \(\Pi (\tilde{a}_{\rm H})=(a)^w\bigg (\sqrt{\dfrac{1}{n} \sum \limits_{i=1}^{n}(\overline{\gamma }-\gamma_i)^2}\bigg )^{1-w}\),

in which w is a parameter chosen by the decision maker: \(w=0\) indicates that the DM is willing to utilize only the membership part, and \(w=1\) indicates that the willingness is to apply only the real part of the HFN.
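A small sketch of Definition 10 (helper names are ours) makes the role of the parameter w explicit:

```python
import math

def score_w(a, h, w):
    """S(<a; h>) = a^w * (mean membership)^(1 - w), 0 <= w <= 1 (Definition 10, item 1)."""
    mean = sum(h) / len(h)
    return (a ** w) * (mean ** (1 - w))

def variance_w(a, h, w):
    """Pi(<a; h>) = a^w * (std of the membership degrees)^(1 - w) (Definition 10, item 2)."""
    mean = sum(h) / len(h)
    std = math.sqrt(sum((mean - g) ** 2 for g in h) / len(h))
    return (a ** w) * (std ** (1 - w))

# w = 0 uses only the membership part, w = 1 only the crisp part, 0 < w < 1 blends the two
a_h = (17.0, [0.6, 0.7, 0.9])
print(round(score_w(*a_h, w=0.0), 3), score_w(*a_h, w=1.0), round(score_w(*a_h, w=0.5), 3))
```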

Definition 11

Two HFNs \(\tilde{a}_{\rm H}=\langle a; h(a)\rangle \) and \(\tilde{b}_{\rm H}=\langle b; h(b)\rangle \) are called adjusted HFNs if the cardinalities of their hesitant/membership parts are equal, i.e., \(|h(a)|=|h(b)|\).

Suppose \(\tilde{a}_{\rm H}=\langle a; h(a)\rangle \) and \(\tilde{b}_{\rm H}=\langle b; h(b)\rangle \) are arbitrary HFNs and, without loss of generality, let \(|h(a)|>|h(b)|\). To adjust their hesitant parts, we have to add \(|h(a)|-|h(b)|\) values to h(b). This can be done in different ways, such as adding the minimum value of h(b) (pessimistic mode), the maximum value of h(b) (optimistic mode), the arithmetic mean of the elements of h(b) (indifference mode), or, as is done in this paper, the power average of the members of h(b) [36]; a small sketch of this adjustment is given below.
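The sketch below is hedged: it assumes the usual power average with the common support function Sup(x, y) = 1 - |x - y|, which is our own choice and may differ from the exact form used in [36].

```python
def power_average(values):
    """Power average with Sup(x, y) = 1 - |x - y| (assumed support function)."""
    t = [sum(1 - abs(x - y) for j, y in enumerate(values) if j != i)
         for i, x in enumerate(values)]
    weights = [1 + ti for ti in t]
    return sum(w * x for w, x in zip(weights, values)) / sum(weights)

def adjust_by_power_average(h_short, target_len):
    """Pad the shorter membership part with its power average until the lengths match."""
    pa = power_average(list(h_short))
    return sorted(list(h_short) + [pa] * (target_len - len(h_short)))

print(adjust_by_power_average([0.2, 0.6, 0.7], 5))
```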

Definition 12

Consider two adjusted HFNs as \(\tilde{a}_{\rm H}=\langle a; h(a)\rangle \), \(\tilde{b}_{\rm H}=\langle b; h(b)\rangle \), and let \(\lambda \) be a positive real number. Then

$$\begin{aligned}&(1)\quad \tilde{a}_{\rm H}\oplus \tilde{b}_{\rm H}=\bigg \langle a+b; \bigcup \limits_{\begin{array}{c} \gamma_{1(i)}\in h(a),\\ \gamma_{2(i)}\in h(b) \end{array}}\big \{S(\gamma_{1(i)},\gamma_{2(i)})\big \}\bigg \rangle =\bigg \langle a+b; \bigcup \limits_{\begin{array}{c} \gamma_{1(i)}\in h(a),\\ \gamma_{2(i)}\in h(b) \end{array}}\big \{f^{-1}(f(\gamma_{1(i)})+f(\gamma_{2(i)}))\big \}\bigg \rangle ,\\&(2)\quad \tilde{a}_{\rm H}\otimes \tilde{b}_{\rm H}=\bigg \langle a.b;\bigcup \limits_{\begin{array}{c} \gamma_{1(i)}\in h(a),\\ \gamma_{2(i)}\in h(b) \end{array}}\big \{T(\gamma_{1(i)},\gamma_{2(i)})\big \}\bigg \rangle =\bigg \langle a.b;\bigcup \limits_{\begin{array}{c} \gamma_{1(i)}\in h(a),\\ \gamma_{2(i)}\in h(b) \end{array}}\big \{g^{-1}(g(\gamma_{1(i)})+g(\gamma_{2(i)}))\big \}\bigg \rangle ,\\&(3)\quad \lambda \tilde{a}_{\rm H}=\bigg \langle \lambda a;\bigcup \limits_{\gamma \in h(a)}\big \{f^{-1}(\lambda f(\gamma ))\big \}\bigg \rangle ,\\&(4)\quad (\tilde{a}_{\rm H})^\lambda =\bigg \langle a^\lambda ;\bigcup \limits_{\gamma \in h(a)}\big \{g^{-1}(\lambda g(\gamma ))\big \}\bigg \rangle , \end{aligned}$$

where \(\{\gamma_{l(1)},\gamma_{l(2)},\cdots \}\) is a permutation of \(\{\gamma_{l1},\gamma_{l2},\cdots \}\) such that \(\gamma_{l(1)}\le \gamma_{l(2)}\le \cdots .\)

The relations \((1)-(4)\) reduce to the following special cases; a short computational sketch is given after this list.

  (i)

    Algebraic t-norm and t-conorm, if \(g(t)=-\log t\), i.e.,

    $$\begin{aligned}&(i1)\quad \tilde{a}_{\rm H}\oplus \tilde{b}_{\rm H}=\bigg \langle a+b; \bigcup \limits_{\begin{array}{c} \gamma_{1(j)}\in h(a),\\ \gamma_{2(j)}\in h(b) \end{array}}\big \{\gamma_{1(j)}+\gamma_{2(j)}-\gamma_{1(j)}.\gamma_{2(j)}\big \}\bigg \rangle ,\\&(i2)\quad \tilde{a}_{\rm H}\otimes \tilde{b}_{\rm H}=\bigg \langle a.b;\bigcup \limits_{\begin{array}{c} \gamma_{1(j)}\in h(a),\\ \gamma_{2(j)}\in h(b) \end{array}}\big \{\gamma_{1(j)}.\gamma_{2(j)}\big \}\bigg \rangle ,\\&(i3)\quad \lambda \tilde{a}_{\rm H}=\bigg \langle \lambda a;\bigcup \limits_{\gamma \in h(a)}\big \{1-(1-\gamma )^\lambda \big \}\bigg \rangle ,\\&(i4)\quad (\tilde{a}_{\rm H})^\lambda =\bigg \langle a^\lambda ;\bigcup \limits_{\gamma \in h(a)}\big \{\gamma ^\lambda \big \}\bigg \rangle ,\\ \end{aligned}$$
  (ii)

    Einstein t-norm and t-conorm, if \(g(t)=\log \frac{2-t}{t}\), i.e.,

    $$\begin{aligned}&(ii1)\quad \tilde{a}_{\rm H}\oplus \tilde{b}_{\rm H}=\bigg \langle a+b; \bigcup \limits_{\begin{array}{c} \gamma_{1(j)}\in h(a),\\ \gamma_{2(j)}\in h(b) \end{array}}\big \{\frac{\gamma_{1(j)}+\gamma_{2(j)}}{1+\gamma_{1(j)}.\gamma_{2(j)}}\big \}\bigg \rangle ,\\&(ii2)\quad \tilde{a}_{\rm H}\otimes \tilde{b}_{\rm H}=\bigg \langle a.b;\bigcup \limits_{\begin{array}{c} \gamma_{1(j)}\in h(a),\\ \gamma_{2(j)}\in h(b) \end{array}}\big \{\frac{\gamma_{1(j)}.\gamma_{2(j)}}{1+(1-\gamma_{1(j)})(1-\gamma_{2(j)})}\big \}\bigg \rangle ,\\&(ii3)\quad \lambda \tilde{a}_{\rm H}=\bigg \langle \lambda a;\bigcup \limits_{\gamma \in h(a)}\big \{\frac{(1+\gamma )^\lambda -(1-\gamma )^\lambda }{(1+\gamma )^\lambda +(1-\gamma )^\lambda } \big \}\bigg \rangle ,\\&(ii4)\quad (\tilde{a}_{\rm H})^\lambda =\bigg \langle a^\lambda ;\bigcup \limits_{\gamma \in h(a)}\big \{\frac{2\gamma ^\lambda }{(2-\gamma )^\lambda +\gamma ^\lambda } \big \}\bigg \rangle ,\\ \end{aligned}$$
  (iii)

    Hamacher t-norm and t-conorm, if \(g(t)=\left\{ \begin{array}{lr}\frac{1-t}{t}&{} \nu =0\\ \log \frac{\nu +(1-\nu )t}{t}&{} 0<\nu \le +\infty \end{array}\right. \), i.e.,

    $$\begin{aligned}&(iii1)\quad \tilde{a}_{\rm H}\oplus \tilde{b}_{\rm H}=\bigg \langle a+b; \bigcup \limits_{\begin{array}{c} \gamma_{1(j)}\in h(a), \\ \gamma_{2(j)}\in h(b) \end{array}}\big \{\frac{\gamma_{1(j)}+\gamma_{2(j)}-\gamma_{1(j)}.\gamma_{2(j)}-(1-\nu )\gamma_{1(j)}.\gamma_{2(j)}}{1-(1-\nu )\gamma_{1(j)}.\gamma_{2(j)}}\big \}\bigg \rangle ,\\&(iii2)\quad \tilde{a}_{\rm H}\otimes \tilde{b}_{\rm H}=\bigg \langle a.b;\bigcup \limits_{\begin{array}{c} \gamma_{1(j)}\in h(a),\\ \gamma_{2(j)}\in h(b) \end{array}}\Big \{\frac{\gamma_{1(j)}.\gamma_{2(j)}}{\nu +(1-\nu )(\gamma_{1(j)}+\gamma_{2(j)}-\gamma_{1(j)}.\gamma_{2(j)})}\Big \}\bigg \rangle ,\\&(iii3)\quad \lambda \tilde{a}_{\rm H}=\bigg \langle \lambda a;\bigcup \limits_{\gamma \in h(a)}\Big \{\frac{(1+(\nu -1)\gamma )^\lambda -(1-\gamma )^\lambda }{(1+(\nu -1)\gamma )^\lambda +(\nu -1)(1-\gamma )^\lambda }\Big \}\bigg \rangle ,\\&(iii4)\quad (\tilde{a}_{\rm H})^\lambda =\bigg \langle a^\lambda ;\bigcup \limits_{\gamma \in h(a)}\bigg \{\frac{\nu \gamma ^\lambda }{(1+(\nu -1)(1-\gamma ))^\lambda +(\nu -1)\gamma ^\lambda } \bigg \}\bigg \rangle ,\\ \end{aligned}$$
  (iv)

    Frank t-norm and t-conorm, if \(g(t)=\left\{ \begin{array}{lr}-\log t&{} \nu =1\\ 1-t &{}\nu =+\infty \\ \log \frac{\nu -1}{\nu ^t-1}&{} otherwise \end{array}\right. \), i.e.,

    $$\begin{aligned}&(iv1)\quad \tilde{a}_{\rm H}\oplus \tilde{b}_{\rm H}=\bigg \langle a+b; \bigcup \limits_{\begin{array}{c} \gamma_{1(j)}\in h(a),\\ \gamma_{2(j)}\in h(b) \end{array}}\bigg \{1-\log_\nu \Big (1+\frac{(\nu ^{1-\gamma_{1(j)}}-1)(\nu ^{1-\gamma_{2(j)}}-1)}{\nu -1}\Big )\bigg \}\bigg \rangle ,\\&(iv2)\quad \tilde{a}_{\rm H}\otimes \tilde{b}_{\rm H}=\bigg \langle a.b;\bigcup \limits_{\begin{array}{c} \gamma_{1(j)}\in h(a),\\ \gamma_{2(j)}\in h(b) \end{array}}\bigg \{\log_\nu \Big (1+\frac{(\nu ^{\gamma_{1(j)}}-1)(\nu ^{\gamma_{2(j)}}-1)}{\nu -1}\Big )\bigg \}\bigg \rangle ,\\&(iv3)\quad \lambda \tilde{a}_{\rm H}=\bigg \langle \lambda a;\bigcup \limits_{\gamma \in h(a)}\bigg \{1-\log_\nu \Big (1+\frac{(\nu ^{1-\gamma }-1)^\lambda }{(\nu -1)^{\lambda -1}}\Big )\bigg \}\bigg \rangle ,\\&(iv4)\quad (\tilde{a}_{\rm H})^\lambda =\bigg \langle a^\lambda ;\bigcup \limits_{\gamma \in h(a)}\bigg \{\log_\nu \Big (1+\frac{(\nu ^\gamma -1)^\lambda }{(\nu -1)^{\lambda -1}} \Big ) \bigg \}\bigg \rangle .\\ \end{aligned}$$
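To make Definition 12 and its special cases operational, the following sketch (function names are ours; HFNs are stored as (a, h) pairs with adjusted, ascending membership parts, and the degrees are assumed to lie in (0, 1)) implements the position-wise operations for the algebraic generator. The Einstein, Hamacher, or Frank generators listed in Sect. 2 can be substituted for g without changing the structure.

```python
import math

# algebraic generator g(t) = -log t and its dual f(t) = g(1 - t)
g      = lambda t: -math.log(t)
g_inv  = lambda u: math.exp(-u)
f      = lambda t: g(1 - t)
f_inv  = lambda u: 1 - g_inv(u)

def hfn_sum(x, y):
    """<a; h(a)> (+) <b; h(b)> for adjusted HFNs: crisp parts add, memberships combine via S."""
    (a, ha), (b, hb) = x, y
    h = [f_inv(f(p) + f(q)) for p, q in zip(sorted(ha), sorted(hb))]
    return (a + b, h)

def hfn_product(x, y):
    """<a; h(a)> (x) <b; h(b)>: crisp parts multiply, memberships combine via T."""
    (a, ha), (b, hb) = x, y
    h = [g_inv(g(p) + g(q)) for p, q in zip(sorted(ha), sorted(hb))]
    return (a * b, h)

def hfn_scalar(lam, x):
    """lambda <a; h(a)> = <lambda a; {f^{-1}(lambda f(gamma))}>."""
    a, ha = x
    return (lam * a, [f_inv(lam * f(p)) for p in sorted(ha)])

def hfn_power(x, lam):
    """<a; h(a)>^lambda = <a^lambda; {g^{-1}(lambda g(gamma))}>."""
    a, ha = x
    return (a ** lam, [g_inv(lam * g(p)) for p in sorted(ha)])
```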

Theorem 1

For any adjusted HFNs \(\tilde{a}_{\rm H}=\langle a; h(a)\rangle \) and \(\tilde{b}_{\rm H}=\langle b; h(b)\rangle \), and positive real numbers \(\lambda ,\lambda_1,\lambda_2\), we have

$$\begin{aligned}&(1)\quad \tilde{a}_{\rm H}\oplus \tilde{b}_{\rm H}=\tilde{b}_{\rm H}\oplus \tilde{a}_{\rm H},\quad (2)\quad \tilde{a}_{\rm H}\otimes \tilde{b}_{\rm H}=\tilde{b}_{\rm H}\otimes \tilde{a}_{\rm H},\\&(3)\quad \lambda (\tilde{a}_{\rm H}\oplus \tilde{b}_{\rm H})=\lambda \tilde{a}_{\rm H}\oplus \lambda \tilde{b}_{\rm H},\quad (4)\quad (\tilde{a}_{\rm H}\otimes \tilde{b}_{\rm H})^\lambda =(\tilde{a}_{\rm H})^\lambda \otimes (\tilde{b}_{\rm H})^\lambda ,\\&(5)\quad (\lambda_1+\lambda_2)\tilde{a}_{\rm H}=\lambda_1\tilde{a}_{\rm H}\oplus \lambda_2\tilde{a}_{\rm H},\quad (6)\quad (\tilde{a}_{\rm H})^{\lambda_1+\lambda_2}=(\tilde{a}_{\rm H})^{\lambda_1}\otimes (\tilde{a}_{\rm H})^{\lambda_2}. \end{aligned}$$

Proof

Properties (1) and (2) are obvious. We prove (3) to (6).

(3) By combining \(\tilde{a}_{\rm H}\oplus \tilde{b}_{\rm H}=\bigg \langle a+b; \bigcup \limits_{\begin{array}{c} \gamma_{(1)}\in h(a),\\ \gamma_{(2)}\in h(b) \end{array}}\big \{f^{-1}(f(\gamma_{(1)})+f(\gamma_{(2)}))\big \}\bigg \rangle \) and \(\lambda \tilde{a}_{\rm H}=\bigg \langle \lambda a;\bigcup \limits_{\gamma \in h(a)}\big \{f^{-1}(\lambda f(\gamma ))\big \}\bigg \rangle \), from Definition 12,

$$\begin{aligned} \lambda (\tilde{a}_{\rm H}\oplus \tilde{b}_{\rm H})&=\lambda \bigg \langle a+b; \bigcup \limits_{\begin{array}{c} \gamma_{1(i)}\in h(a),\\ \gamma_{2(j)}\in h(b) \end{array}}\big \{f^{-1}(f(\gamma_{1(j)})+f(\gamma_{2(j)}))\big \}\bigg \rangle \\&=\bigg \langle \lambda a+\lambda b; \lambda \bigg (\bigcup \limits_{\begin{array}{c} \gamma_{1(j)}\in h(a),\\ \gamma_{2(j)}\in h(b) \end{array}}\big \{f^{-1}(f(\gamma_{1(j)})+f(\gamma_{2(j)}))\big \}\bigg )\bigg \rangle \\&=\bigg \langle \lambda a+\lambda b; \bigcup \limits_{\begin{array}{c} \gamma_{1(j)}\in h(a),\\ \gamma_{2(j)}\in h(b) \end{array}}\bigg \{f^{-1}\bigg (\lambda f\Big (f^{-1}\big (f(\gamma_{1(j)})+f(\gamma_{2(j)})\big )\Big )\bigg )\bigg \}\bigg \rangle \\&=\bigg \langle \lambda a+\lambda b; \bigcup \limits_{\begin{array}{c} \gamma_{1(j)}\in h(a),\\ \gamma_{2(j)}\in h(b) \end{array}}\bigg \{f^{-1}\big (\lambda f(\gamma_{1(j)})+\lambda f(\gamma_{2(j)})\big )\bigg \}\bigg \rangle \\&=\bigg \langle \lambda a+\lambda b; \bigcup \limits_{\begin{array}{c} \gamma_{1(j)}\in h(a),\\ \gamma_{2(j)}\in h(b) \end{array}}\bigg \{f^{-1}\bigg (f\Big (f^{-1}\big (\lambda f(\gamma_{1(j)})\big )\Big )+f\Big (f^{-1}\big (\lambda f(\gamma_{2(j)})\big )\Big )\bigg )\bigg \}\bigg \rangle \\&=\bigg \langle \lambda a; \bigcup \limits_{\gamma_{1(j)}\in h(a)}\bigg \{f^{-1}\big (\lambda f(\gamma_{1(j)})\big )\bigg \}\bigg \rangle \oplus \bigg \langle \lambda b; \bigcup \limits_{\gamma_{2(j)}\in h(b)}\bigg \{f^{-1}\big (\lambda f(\gamma_{2(j)})\big )\bigg \}\bigg \rangle \\&=\lambda \tilde{a}_{\rm H}\oplus \lambda \tilde{b}_{\rm H}. \end{aligned}$$

(4) By combining \(\tilde{a}_{\rm H}\otimes \tilde{b}_{\rm H}=\bigg \langle a.b;\bigcup \limits_{\begin{array}{c} \gamma_{1(j)}\in h(a),\\ \gamma_{2(j)}\in h(b) \end{array}}\big \{g^{-1}(g(\gamma_{1(j)})+g(\gamma_{2(j)}))\big \}\bigg \rangle \) and \((\tilde{a}_{\rm H})^\lambda =\bigg \langle a^\lambda ;\bigcup \limits_{\gamma \in h(a)}\big \{g^{-1}(\lambda g(\gamma ))\big \}\bigg \rangle \) from Definition 12,

$$\begin{aligned} \Big (\tilde{a}_{\rm H}\otimes \tilde{b}_{\rm H}\Big )^\lambda&=\Bigg (\bigg \langle ab; \bigcup \limits_{\begin{array}{c} \gamma_{1(j)}\in h(a),\\ \gamma_{2(j)}\in h(b) \end{array}}\big \{g^{-1}(g(\gamma_{1(j)})+g(\gamma_{2(j)}))\big \}\bigg \rangle \Bigg )^\lambda \\&=\bigg \langle (ab)^\lambda ; \bigcup \limits_{\begin{array}{c} \gamma_{1(j)}\in h(a),\\ \gamma_{2(j)}\in h(b) \end{array}}\bigg \{g^{-1}\Big (\lambda g\Big ( g^{-1}(g(\gamma_{1(j)})+g(\gamma_{2(j)}))\Big )\Big )\bigg \}\bigg \rangle \\&=\bigg \langle a^\lambda b^\lambda ; \bigcup \limits_{\begin{array}{c} \gamma_{1(j)}\in h(a),\\ \gamma_{2(j)}\in h(b) \end{array}}\bigg \{g^{-1}\big (\lambda g(\gamma_{1(j)})+\lambda g(\gamma_{2(j)})\big )\bigg \}\bigg \rangle \\&=\bigg \langle a^\lambda b^\lambda ; \bigcup \limits_{\begin{array}{c} \gamma_{1(j)}\in h(a),\\ \gamma_{2(j)}\in h(b) \end{array}}\bigg \{g^{-1}\bigg (g\Big (g^{-1}\big (\lambda g(\gamma_{1(j)})\big )\Big )+g\Big (g^{-1}\big (\lambda g(\gamma_{2(j)})\big )\Big )\bigg )\bigg \}\bigg \rangle \\&=\bigg \langle a^\lambda ; \bigcup \limits_{\gamma_{1(j)}\in h(a)}\bigg \{g^{-1}\big (\lambda g(\gamma_{1(j)})\big )\bigg \}\bigg \rangle \otimes \bigg \langle b^\lambda ; \bigcup \limits_{\gamma_{2(j)}\in h(b)}\bigg \{g^{-1}\big (\lambda g(\gamma_{2(j)})\big )\bigg \}\bigg \rangle \\&=\Big (\tilde{a}_{\rm H}\Big )^\lambda \otimes \Big (\tilde{b}_{\rm H}\Big )^\lambda .\\ (5)\quad (\lambda_1+\lambda_2)\tilde{a}_{\rm H}&=\bigg \langle (\lambda_1+\lambda_2) a;\bigcup \limits_{\gamma \in h(a)}\bigg \{f^{-1}\big ((\lambda_1+\lambda_2) f(\gamma )\big )\bigg \}\bigg \rangle \\ {}&= \bigg \langle \lambda_1 a+\lambda_2 a;\bigcup \limits_{\gamma \in h(a)}\bigg \{f^{-1}\big (\lambda_1f(\gamma )+\lambda_2 f(\gamma )\big )\bigg \}\bigg \rangle \\ {}&= \bigg \langle \lambda_1 a+\lambda_2 a;\bigcup \limits_{\gamma \in h(a)}\bigg \{f^{-1}\Big (f\big (f^{-1}(\lambda_1f(\gamma ))\big )+f\big (f^{-1}(\lambda_2 f(\gamma ))\big )\Big )\bigg \}\bigg \rangle \\ {}&= \bigg \langle \lambda_1 a;\bigcup \limits_{\gamma \in h(a)}\bigg \{f^{-1}\big (\lambda_1f(\gamma )\big )\bigg \}\bigg \rangle \oplus \bigg \langle \lambda_2 a;\bigcup \limits_{\gamma \in h(a)}\bigg \{f^{-1}\big (\lambda_2f(\gamma )\big )\bigg \}\bigg \rangle \\ {}&=\lambda_1\tilde{a}_{\rm H}\oplus \lambda_2\tilde{a}_{\rm H}.\\ (6)\quad (\tilde{a}_{\rm H})^{\lambda_1+\lambda_2}&=\bigg \langle a^{\lambda_1+\lambda_2};\bigcup \limits_{\gamma \in h(a)}\big \{g^{-1}((\lambda_1+\lambda_2) g(\gamma ))\big \}\bigg \rangle \\ {}&= \bigg \langle a^{\lambda_1} a^{\lambda_2};\bigcup \limits_{\gamma \in h(a)}\bigg \{g^{-1}\big (\lambda_1g(\gamma )+\lambda_2 g(\gamma )\big )\bigg \}\bigg \rangle \\ {}&= \bigg \langle a^{\lambda_1} a^{\lambda_2};\bigcup \limits_{\gamma \in h(a)}\bigg \{g^{-1}\Big (g\big (g^{-1}(\lambda_1g(\gamma ))\big )+g\big (g^{-1}(\lambda_2 g(\gamma ))\big )\Big )\bigg \}\bigg \rangle \\ {}&= \bigg \langle a^{\lambda_1};\bigcup \limits_{\gamma \in h(a)}\bigg \{g^{-1}\big (\lambda_1g(\gamma )\big )\bigg \}\bigg \rangle \otimes \bigg \langle a^{\lambda_2};\bigcup \limits_{\gamma \in h(a)}\bigg \{g^{-1}\big (\lambda_2g(\gamma )\big )\bigg \}\bigg \rangle \\ {}&=(\tilde{a}_{\rm H})^{\lambda_1}\otimes (\tilde{a}_{\rm H})^{\lambda_2}. \end{aligned}$$

\(\square \)

Example 2

Let \(\tilde{a}_{\rm H}=\langle 3; \{0.3,0.4,0.6,0.7,0.8\}\rangle \) and \(\tilde{b}_{\rm H}=\langle 2; \{0.2,0.4,0.5,0.7,0.8\}\rangle \) be two adjusted HFNs. Then using algebraic t-norm and t-conorm, we have

$$\begin{aligned}&\tilde{a}_{\rm H}\oplus \tilde{b}_{\rm H}=\langle 5; \{0.44,0.64,0.8,0.91,0.96\}\rangle =\tilde{b}_{\rm H}\oplus \tilde{a}_{\rm H}, \\&\tilde{a}_{\rm H}\otimes \tilde{b}_{\rm H}=\langle 6; \{0.06,0.16,0.3,0.49,0.64\}\rangle =\tilde{b}_{\rm H}\otimes \tilde{a}_{\rm H},\\&2\tilde{a}_{\rm H}=\langle 6; \{0.51,0.64,0.84,0.91,0.96\}\rangle ,\\&\tilde{a}_{\rm H}^2=\langle 9; \{0.09,0.16,0.36,0.49,0.64\}\rangle . \end{aligned}$$
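The values of Example 2 can be reproduced directly with the algebraic special case (i1)-(i4); the following self-contained sketch pairs the (already adjusted and sorted) membership degrees position-wise:

```python
a, ha = 3, [0.3, 0.4, 0.6, 0.7, 0.8]
b, hb = 2, [0.2, 0.4, 0.5, 0.7, 0.8]

# (i1): crisp parts add, memberships combine by p + q - p*q
print(a + b, [round(p + q - p * q, 2) for p, q in zip(ha, hb)])
# (i2): crisp parts multiply, memberships combine by p*q
print(a * b, [round(p * q, 2) for p, q in zip(ha, hb)])
# (i3): 2 * a_H
print(2 * a, [round(1 - (1 - p) ** 2, 2) for p in ha])
# (i4): a_H ^ 2
print(a ** 2, [round(p ** 2, 2) for p in ha])
```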

As in Definition 2, we can extend the operators \(\text{A-HFWA}, \text{A-HFWG}, \text{A-HFOWA}\), and \(\text{A-HFOWG}\) to HFNs. We call them the A-HFNWA, A-HFNWG, A-HFNOWA, and A-HFNOWG operators, respectively.

Definition 13

Let \(\tilde{a}_{\rm H}^i=\langle a_i;h(a_i)\rangle \,(i=1,2,\ldots ,n)\) be a collection of HFNs, and \(w=(w_1,w_2,\ldots ,w_n)\) with \(0\le w_i\le 1\) and \(\sum \limits_{i=1}^{n}w_i=1\) be the weight vector of the given HFNs. Then

$$\begin{aligned} (1) \quad {\rm A-HFNWA}(\tilde{a}_{\rm H}^1,\tilde{a}_{\rm H}^2,\ldots ,\tilde{a}_{\rm H}^n)&=\oplus_{i=1}^{n}(w_i\tilde{a}_{\rm H}^i)\\&=\bigg \langle \sum \limits_{i=1}^{n}w_ia_i; \bigcup \limits_{\gamma_i\in h(a_i)}\bigg \{f^{-1}\big (\sum \limits_{i=1}^{n}w_i f(\gamma_i)\big )\bigg \}\bigg \rangle , \end{aligned}$$

is Archimedean t-norm and t-conorm-based HFN weighted averaging (A-HFNWA) operator,

$$\begin{aligned} (2) \quad \text{A-HFNWG}(\tilde{a}_{\rm H}^1,\tilde{a}_{\rm H}^2,\ldots ,\tilde{a}_{\rm H}^n)&= \otimes_{i=1}^{n}\big ((\tilde{a}_{\rm H}^i)^{w_i}\big )\\&=\bigg \langle \Pi_{i=1}^{n}a_i^{w_i};\bigcup \limits_{\gamma_i\in h(a_i)}\bigg \{g^{-1}\big (\sum \limits_{i=1}^{n}w_i g(\gamma_i)\big )\bigg \}\bigg \rangle , \end{aligned}$$

is Archimedean t-norm and t-conorm-based HFN weighted geometric (A-HFNWG) operator,

$$\begin{aligned} (3) \quad \text{A-HFNOWA}(\tilde{a}_{\rm H}^1,\tilde{a}_{\rm H}^2,\ldots ,\tilde{a}_{\rm H}^n)&= \oplus_{i=1}^{n}(w_i\tilde{a}_{\rm H}^{\sigma (i)})\\&=\bigg \langle \sum \limits_{i=1}^{n}{w_i}a_{\sigma (i)};\bigcup \limits_{\gamma_{\sigma (i)}\in h(a_{\sigma (i)})}\bigg \{f^{-1}\big (\sum \limits_{i=1}^{n}w_i f(\gamma_{\sigma (i)})\big )\bigg \}\bigg \rangle , \end{aligned}$$

is Archimedean t-norm and t-conorm-based HFN ordered weighted averaging (A-HFNOWA) operator, and

$$\begin{aligned} (4) \quad \text{A-HFNOWG}(\tilde{a}_{\rm H}^1,\tilde{a}_{\rm H}^2,\ldots ,\tilde{a}_{\rm H}^n)&=\otimes_{i=1}^{n}\big ((\tilde{a}_{\rm H}^{\sigma (i)})^{w_i}\big )\\&= \bigg \langle \Pi_{i=1}^{n}a_{\sigma (i)}^{w_i}; \bigcup \limits_{\gamma_{\sigma (i)}\in h(a_{\sigma (i)})}\bigg \{g^{-1}\big (\sum \limits_{i=1}^{n}w_i g(\gamma_{\sigma (i)})\big )\bigg \}\bigg \rangle , \end{aligned}$$

is Archimedean t-norm and t-conorm-based HFN ordered weighted geometric (A-HFNOWG) operator, where \(\tilde{a}_{\rm H}^{\sigma (i)}\,(i=1,2,\ldots ,n)\) is a permutation of \(\tilde{a}_{\rm H}^i\,(i=1,2,\ldots ,n)\), such that \(\tilde{a}_{\rm H}^{\sigma (1)}\le \tilde{a}_{\rm H}^{\sigma (2)}\le \cdots \le \tilde{a}_{\rm H}^{\sigma (n)}\).
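A minimal sketch of the A-HFNWA and A-HFNWG operators of Definition 13 follows (our own names; HFNs are (a, h) pairs, the membership parts are assumed adjusted to equal length and are combined position-wise as in Definition 12, and the algebraic generator is used; other generators can be substituted):

```python
import math

g     = lambda t: -math.log(t)     # algebraic additive generator
g_inv = lambda u: math.exp(-u)
f     = lambda t: g(1 - t)
f_inv = lambda u: 1 - g_inv(u)

def a_hfnwa(hfns, weights):
    """A-HFNWA: weighted sum of crisp parts; f^{-1}(sum_i w_i f(gamma_i)) per position."""
    a = sum(w * x for w, (x, _) in zip(weights, hfns))
    cols = zip(*[sorted(h) for _, h in hfns])
    memb = [f_inv(sum(w * f(gam) for w, gam in zip(weights, col))) for col in cols]
    return (a, memb)

def a_hfnwg(hfns, weights):
    """A-HFNWG: weighted geometric mean of crisp parts; g^{-1}(sum_i w_i g(gamma_i)) per position."""
    a = math.prod(x ** w for w, (x, _) in zip(weights, hfns))
    cols = zip(*[sorted(h) for _, h in hfns])
    memb = [g_inv(sum(w * g(gam) for w, gam in zip(weights, col))) for col in cols]
    return (a, memb)
```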

Example 3

Let \(w=(0.3,0.1,0.4,0.2)\) be weight vector of HFNs \(\tilde{a}_{\rm H}^1=\langle 4;\{0.3,0.4,0.5,0.7\}\rangle , \tilde{a}_{\rm H}^2=\langle 6;\{0.2,0.4,0.6,0.8\}\rangle , \tilde{a}_{\rm H}^3=\langle 8;\{0.4,0.7,0.8,0.9\}\rangle ,\) and \(\tilde{a}_{\rm H}^4=\langle 6;\{0.5,0.6,0.7,0.8\}\rangle \). Based on Hamacher aggregation operator [49] and Definition 13, we have

$$\begin{aligned} \text{HFNHWA}_\nu (\tilde{a}_{\rm H}^1,&\tilde{a}_{\rm H}^2,\ldots ,\tilde{a}_{\rm H}^n)=\oplus_{i=1}^{n}(w_i\tilde{a}_{\rm H}^i)\\&=\bigg \langle \sum \limits_{i=1}^{n}w_ia_i; \bigcup \limits_{\gamma_{(i)}\in h(a_i)}\bigg \{\frac{\Pi_{i=1}^{n}(1+(\nu -1)\gamma_{(i)})^{w_i}-\Pi_{i=1}^n(1-\gamma_{(i)})^{w_i}}{\Pi_{i=1}^n(1+(\nu -1)\gamma_{(i)})^{w_i}+(\nu -1)\Pi_{i=1}^n(1-\gamma_{(i)})^{w_i}}\bigg \} \bigg \rangle , \end{aligned}$$

where \(\gamma_{(i)}\) is the ith largest element of \(h(a_i)\). Then, for \(\nu =2\) we have

$$\begin{aligned} \text{HFNHWA}(\tilde{a}_{\rm H}^1,\tilde{a}_{\rm H}^2,&\tilde{a}_{\rm H}^3,\tilde{a}_{\rm H}^4)=\oplus_{i=1}^{4}(w_i\tilde{a}_{\rm H}^i)=(w_1\tilde{a}_{\rm H}^1)\oplus (w_2\tilde{a}_{\rm H}^2)\oplus (w_3\tilde{a}_{\rm H}^3)\oplus (w_4\tilde{a}_{\rm H}^4)\\&=\bigg \langle \sum \limits_{i=1}^{4}w_ia_i; \bigcup \limits_{\gamma_{(i)}\in h(a_i)}\bigg \{\frac{\Pi_{i=1}^{4}(1+(\nu -1)\gamma_{(i)})^{w_i}-\Pi_{i=1}^4(1-\gamma_{(i)})^{w_i}}{\Pi_{i=1}^4(1+(\nu -1)\gamma_{(i)})^{w_i}+(\nu -1)\Pi_{i=1}^4(1-\gamma_{(i)})^{w_i}}\bigg \}\bigg \rangle \\ {}&=\bigg \langle 6.2;\{0.343,0.781,0.956,0.996\} \bigg \rangle . \end{aligned}$$

4 Archimedean t-Norm and t-Conorm-Based HFN Operator and Solving MAGDM Problems

The proposed operators will be utilized to solve a special kind of decision-making problem, in which predetermined/documented values and the subjective judgments of decision makers have to be used simultaneously.

With the outbreak of the COVID-19 epidemic in early 2020, the delivery of many services shifted to virtual platforms. Education at all levels, and even scientific conferences, were no exception to this rule. Teachers and professors, in addition to teaching in cyberspace, also had to rank students and administer exams remotely, which is a new challenge in terms of credibility of and trust in the assessment results. In order to face this challenge properly, this article suggests using HFNs as the score of each course. The real part of each HFN is the score that the student obtains in the virtual exam. The membership part contains the teacher's degrees of satisfaction with the student. These degrees may be obtained by a direct assessment of the student, independent of the virtual exam, or may express the degree to which the exam score matches the teacher's perception of the student.

In a MAGDM problem, we have a finite set of alternatives \(S_1,S_2,\ldots ,S_m\) and a finite set of attributes/criteria \(c_1,c_2,\ldots ,c_n\) with weight vector \(W=(w_1,w_2,\ldots ,w_n)\); the alternatives should be ranked according to the opinions of a group of decision makers (DMs). Utilizing HFNs, the decision matrix is \(\tilde{A}=(\tilde{a}_{ij})_{m\times n}\), in which \(\tilde{a}_{ij}=\langle a_{ij};h(a_{ij})\rangle \) is a HFN, \(a_{ij}\) is the score of the ith student in the final exam of the jth course, and the finite set \(h(a_{ij})=\{\gamma |\,\gamma \in [0,1]\}\) contains the subjective assessments of the ith student by the teacher of the jth course. The following steps solve the MAGDM problem; a compact sketch of the procedure is given after the list.

  • Step 1 Normalize the HFN decision matrix \(\tilde{A}\) to \(\tilde{ND}=(\tilde{n}_{ij})_{m\times n}\), by

    $$\begin{aligned} \tilde{n}_{ij}=\langle s_{ij}; h(a_{ij})\rangle =\Big \langle \frac{a_{ij}}{\max \limits_{i}a_{ij}}; h(a_{ij})\Big \rangle ,\,i=1,2,\ldots ,m;\,\,j=1,2,\ldots ,n. \end{aligned}$$
  • Step 2 For \(i=1,2,\ldots ,m\), pick the ith row of \(\tilde{ND}\) and utilize one of the proposed operators to aggregate its elements into a single HFN.

  • Step 3 Compute the score functions and variance functions of the HFNs obtained in Step 2, and reorder them in ascending order.

  • Step 4 Rank the alternatives according to the ranking order of the HFNs in Step 3.
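The sketch below (our own helper names and an assumed algebraic A-HFNWA aggregator) strings Steps 1-4 together; each row of `matrix` holds one student's HFNs (exam score, teacher degrees) across the courses, and `weights` are the course weights.

```python
import math

def normalize(matrix):
    """Step 1: divide each crisp score by its column (course) maximum; keep the memberships."""
    col_max = [max(matrix[i][j][0] for i in range(len(matrix))) for j in range(len(matrix[0]))]
    return [[(a / col_max[j], h) for j, (a, h) in enumerate(row)] for row in matrix]

def aggregate_row(row, weights):
    """Step 2: algebraic A-HFNWA of one student's normalized HFNs (position-wise memberships)."""
    a = sum(w * x for w, (x, _) in zip(weights, row))
    cols = zip(*[sorted(h) for _, h in row])
    memb = [1 - math.prod((1 - gam) ** w for w, gam in zip(weights, col)) for col in cols]
    return (a, memb)

def score(a, h, w=0.5):
    """Step 3: parameter-dependent score of Definition 10."""
    return (a ** w) * ((sum(h) / len(h)) ** (1 - w))

def rank(matrix, weights, names, w=0.5):
    """Step 4: order the alternatives by their aggregated scores (ascending, weakest first)."""
    agg = [aggregate_row(row, weights) for row in normalize(matrix)]
    return sorted(names, key=lambda nm: score(*agg[names.index(nm)], w=w))
```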

5 Numerical Example

Example 4

Consider the problem of ranking students A, B, C, D, E, and F based on their scores in math (\(c_1\)), physics (\(c_2\)), chemistry (\(c_3\)), biology (\(c_4\)), literature (\(c_5\)), art (\(c_6\)), and computer (\(c_7\)). To this end, the students must participate in the tests designed for each course. Table 1 shows the score of each student in the final test of each course.

Table 1 The crisp scores of students

On the other hand, a test, no matter how high its standard, cannot be a good indicator for measuring the scientific ability of candidates, due to time constraints on its administration and content. In the context of the COVID-19 epidemic, where training and tests are often done virtually, finding a way to make learners work harder is a top priority. Therefore, the professors are also asked to express their evaluation of the students, based on observations over the length of the course, with numbers from [0, 1]. The teachers' evaluations of each student in each course have been modeled by HFEs, as in Table 2. Each element of an HFE can be interpreted as the teacher's monthly evaluation over a 4-month period.

Table 2 The given HFEs assessment values by teachers

Based on the method proposed in this paper, the information given in Tables 1 and 2 should be combined into HFNs. Table 3 contains the HFNs that result from the simultaneous use of the above two assessment methods.

Table 3 The hybrid method based on HFNs
Table 4 Ranking order of students

Then, as in the 3rd column of Table 4, we have \(C\prec D\prec A\prec E\prec F\prec B.\)

5.1 Numerical Analysis

Based on the scores given in Table 1 and using the TOPSIS technique, we have \(E\prec B\prec C\prec F\prec A\prec D.\) Utilizing the simple additive weighting (SAW) method and the Choquet integral method, we get the same ranking order.

The ranking of students based only on the professors' opinions requires the aggregation of the given HFEs in each row of Table 2. In this paper we do this with the Hamacher operator; sorting the resulting HFEs, we obtain \(C\prec D\prec A\prec F\prec E\prec B.\) It can be seen that this result is completely different from the ranking obtained from Table 1. Therefore, using either source alone will lead to unrealistic results, because learning is inherently an activity-oriented concept, and it is therefore necessary to consider individual efforts during the training. Relying on the results of final exams was questionable even in the normal circumstances in which exams were held in person, because tests are based on a limited number of questions run in a limited time, and this cannot be a reliable indicator for ranking.

In this example, three ranking orders have been given. The first is based only on the final exams. Although such assessments force students to work harder and learn in a short period of time, they are not sustainable, because they are usually carried out at the end of the semester, in a short period of time, and cover only part of the subject matter; therefore, they cannot be a good criterion for distinguishing students from one another. Alternatively, the professors' ongoing evaluations of students can be used. This method encourages students to be more scientifically active and to participate during the semester and even in the classroom. One of the biggest problems with this method is that it is time-consuming, especially when the number of students and the volume of lessons are large. In addition, with this method it is not easy to measure the total content covered up to the intended time and, as a result, to evaluate the higher levels of analysis and synthesis.

A hybrid of these two methods will help improve the quality of education by eliminating the shortcomings of each and creating synergy between their strengths. In this case, the data are HFNs, and the similarity of the ranking obtained by the hybrid method, taken as the reference point, to the two previous rankings can be determined by the WS similarity coefficient [55]. Based on it, the similarity coefficient between the hybrid ranking and the final-exam ranking is 0.35104, while it is 0.8958 between the hybrid ranking and the teachers' assessments. This shows that the hybrid method leans toward the professors' evaluation results. The decision maker can increase or decrease the impact of each part on the final score by selecting appropriate values of the parameter (Table 5). The rankings in Table 5 show that the evaluation results fluctuate in a range from results that rely entirely on continuous assessment (\(w=0\)) to results that rely entirely on the final exams (\(w=1\)). The first column of Table 4 is related to \(w=0\), and the 2nd to \(w=1\).
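For reference, the WS rank-similarity coefficient [55] can be computed as below (a sketch; the rank vectors for students A-F are inferred from the orderings reported above, with rank 1 for the best student, and the reference ranking is the first argument). It reproduces the 0.8958 value for the hybrid and teacher rankings.

```python
def ws_coefficient(rank_x, rank_y):
    """WS = 1 - sum_i 2^(-Rx_i) * |Rx_i - Ry_i| / max(|Rx_i - 1|, |Rx_i - N|)."""
    n = len(rank_x)
    return 1 - sum(2.0 ** (-rx) * abs(rx - ry) / max(abs(rx - 1), abs(rx - n))
                   for rx, ry in zip(rank_x, rank_y))

# ranks of students A..F (1 = best): hybrid ranking vs. teachers' ranking
hybrid   = [4, 1, 6, 5, 3, 2]
teachers = [4, 1, 6, 5, 2, 3]
print(round(ws_coefficient(hybrid, teachers), 4))   # 0.8958
```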

Table 5 Parameter-dependent Hybrid ranking order of students

6 Conclusion

In this article, we have sought a way to improve the quality of e-learning by increasing the validity of the results of virtual assessments, influenced by the conditions of the COVID-19 epidemic. Such a method is suitable not only for educational measurement, but also for all areas that need evaluation, including management, economics, medicine, astronomy, machine learning, and so on. HFNs, due to their two-part nature, are a reliable tool for achieving this goal: the real part contains the result of the final exam, which is taken on a certain day and at a certain time, while the membership part contains the teacher's subjective judgments of the efforts observed during the training course. It should be noted that the need for HFNs exists not only under corona conditions but also under normal conditions. Therefore, several t-norm and t-conorm-based aggregation operators of HFNs in both general and specific forms, i.e., for the algebraic, Einstein, Hamacher, and Frank t-norms and t-conorms, namely the A-HFNWA, A-HFNWG, A-HFNOWA, and A-HFNOWG operators, have been proposed in this paper. At present, HFNs are at the beginning of an important path, and much research remains to be done on them, in both calculus development and practical applications. For example, similarity measures, entropy measures, and distance measures of HFNs are important axes of future research. In the field of applications, many existing methods such as TOPSIS, VIKOR, ELECTRE, ORESTE, and aggregation operators [28, 37, 56] must be updated so that they can be applied to decision problems whose uncertainty has been modeled via HFNs.