




    In recent decades, there has been a global trend of placing increased importance on the teaching quality of higher education. Teaching satisfaction, as a crucial evaluation criterion, is widely recognized as essential information for enhancing teaching quality. By evaluating teachers' teaching satisfaction, they are encouraged to identify areas for improvement, reflect on their teaching practices, address weaknesses and, ultimately, enhance the quality of education [1]. This, in turn, promotes the development and advancement of colleges and universities. Currently, there are many studies on teaching satisfaction, including the evaluation index and its influence. Previous studies [2,3,4,5,6] have indicated that teaching satisfaction is affected by diverse factors, such as the expertise of teachers, teaching attitude, content of courses and teaching methods, among others. Additionally, many colleges and universities have also established evaluation index systems that integrate standardized criteria and talent training programs [7]. The process of comparing and selecting a desirable option from several schemes evaluated on various dimensions or attributes is often considered as a multi-attribute decision-making (MADM) process [8].

    In the specific decision-making situation of teaching satisfaction, two issues should be the focus [9]:

    (1) How does a decision-maker express decision-related information appropriately?

    (2) How can an ideal scheme be determined by using relevant decision-making methods?

    Regarding the first issue, during the evaluation process, the meaning of each attribute representing the target is determined. For instance, when evaluating whether a teacher is fully prepared, a decision-maker may simply provide a binary opinion of "yes" or "no". However, in practical decision expressions, there is fuzziness and a large amount of uncertainties, which often involve subjective terms such as "very poor", "good" or "excellent". These terms do not always correspond to precise data. To address this issue, Zadeh [10] initially created fuzzy sets (FSs) with the concept of a membership function in 1965. By utilizing a FS, evaluation values can extend beyond the binary scale of {0,1} to a more flexible range of [0,1]. The theory of FSs has been widely applied in engineering, business management, education and other domains.

With the development of theoretical research and practical demonstration, new problems have emerged. For instance, if 10 people decide "whether the teacher is patient with students", five evaluators may answer "yes/agreement", and four people may answer "no/disagreement". But one person may have some uncertainty in making a decision, and may even decline to answer. In this case, the hesitancy must be taken into consideration. Thus, Atanassov [11] proposed the intuitionistic FS (IFS). Unlike FSs, IFSs have a membership degree, a non-membership degree and a hesitancy degree (or intuitionistic index), which are consistent with humans' subjective habit of describing decisions with "negation", "affirmation" and "hesitation". In this sense, IFSs are more suitable for describing and collecting decision-making information. However, in IFSs, the values of the membership degree and the others need to be precise numbers, which may be difficult to obtain because of the complex factors of a realistic environment and the limitations of decision-makers' cognitive abilities [8]. For example, if 0.6 represents "good" and 0.75 is "very good", a decision-maker is inclined to decide with an interval of [0.6, 0.7] rather than an exact value of 0.7. Subsequently, Atanassov and Gargov [12] further came up with the interval-valued intuitionistic fuzzy set (IvIFS) in 1989. IvIFSs employ interval values rather than crisp values to express the degrees of membership, non-membership and uncertainty for each element. The theory of IvIFSs has received widespread attention in theoretical research and practical application [13,14,15,16]. It has been demonstrated that the IvIFS is a more powerful tool for dealing with uncertain and ambiguous information in actual environments than previous theories such as FSs and IFSs [17,18].

Regarding the second issue, various approaches are employed in academic research to determine the optimal scheme(s) within the framework of IvIFSs, such as the approach of determining index weights, the use of a distance measure, the use of the technique for order preference by similarity to ideal solution (TOPSIS), and so on.

    (1) The method of determining index weight. For the case in which the weight value is unknown or uncertain, the entropy weight method has been widely recognized as an efficient technique for calculating index weights. Zhang and Jiang [19] initially established the concept of entropy and then gave a couple of formulas to compute IvIFSs' entropy, which could be used in clustering analysis and various fields. Wu and Wan [20] further advanced the entropy method with IvIFSs in supplier selection problems and computed the corresponding index weights, which made the conclusions more reliable and objective. Considering the risk preferences of experts, Zhang et al. [21] constructed a novel score function (Pλ) and then introduced the concept of average entropy for IvIFSs. They incorporated experts' risk preferences into the weighting process, enhancing the reliability of the results. Additionally, Xian et al. [14] developed a new weight approach based on the entropy measure, considering the presence of both positive and indeterminate preferences for attributes. This approach assigns unique weights to each attribute, therefore providing a comprehensive evaluation.

    (2) Distance measure of IvIFSs. It is an effective tool for handling uncertain and vague information within the framework of FS theory [22,23,24,25]. IvIFSs' distance method is generalized on the basis of FSs' distance as developed in [23,24]. Xu [22,25] defined several distance measures, such as the (normalized) Hamming distance, (normalized) Euclidean distance and hybrid weighted distance measures. Park [26] redefined pairs of various distance measures including the (normalized) Hamming distance and (normalized) Euclidean distance, by taking the amplitude of the membership of the elements into consideration. Muharrem [27] put forward a novel distance measure, which could be utilized to compare counter intuitive examples for IvIFSs. Inspired by intervals, Liu and Jiang [17] established a new interval-valued intuitionistic distance for IvIFSs, and it preserves the entire interval information and effectively avoids information loss. Garg and Kumar [18] constructed a new exponential distance based on different connection numbers.

    (3) TOPSIS method. TOPSIS is a widely employed MADM model initially introduced by Hwang and Yoon [28]. The fundamental principle of TOPSIS is to identify the ideal scheme(s) with the shortest distance to the positive ideal solution (PIS) and the longest distance to the negative ideal solution (NIS) [29]. In recent years, researchers have extended and applied the TOPSIS method for suitability with IvIFSs in various fields [30,31,32], such as signal processing [33], supplier selection [31,34] and emergency rescue [35].

For instance, Qiao et al. [36] presented a TOPSIS method for IvIFSs which considers the preference information of schemes. Using the weighted TOPSIS method, Huang and Zhang [37] conducted a teaching effectiveness evaluation for higher education. Zhao [38] introduced an advanced TOPSIS method based on the conventional one to calculate the distances between schemes and the PIS/NIS; they were able to determine an ideal teaching quality. Al-Shamiri et al. [39] integrated TOPSIS and ELECTRE-I within the framework of cubic m-polar FSs to diagnose psychiatric disorders.

    In light of the literature analysis and discussion provided, the motivations of this research can be summarized as follows:

    (1) The evaluation process for teaching satisfaction needs to take multiple criteria into account from different perspectives, which often leads to internal ambiguity and inconsistency. Additionally, decision-makers may struggle to provide crisp values due to the inherent vagueness and uncertainty in cognition. Because the IvIFS can effectively handle fuzziness and uncertainty by considering both membership and non-membership degrees, it can reduce the vagueness in decision-making. Hence, we utilize IvIFSs to express decision-makers' evaluation opinions.

(2) Although research about distance measures has significantly advanced, as far as we know, triangular divergence has not been explored under the conditions of the interval-valued intuitionistic fuzzy (IvIF) environment. Besides, some existing measures cannot be adopted to distinguish subtle differences in data, while others involve complex and tedious calculation processes. Therefore, this research serves to introduce a novel IvIF triangular distance to enrich information measure theory and to yield a new TOPSIS method based on it.

    Based on these motivations, this research has several key contributions. First, the proposed IvIF triangular distance enhances the information measure theory by providing a new perspective. Second, an improved TOPSIS method is established by utilizing the novel triangular distance within the IvIF environment. This method enables a more accurate and reliable evaluation of teaching satisfaction. Lastly, a comprehensive framework for teaching satisfaction evaluation, as based on the novel distance measure and TOPSIS method for IvIFSs, is developed to provide decision support for education managers. This framework serves as a driving force for teachers to enhance teaching quality, and for students to improve learning efficiency.

Regarding the IvIFS environment, this study was designed to yield a new TOPSIS method through the introduction of a novel distance measure for evaluating the teaching satisfaction of a college mathematics course. The structure of this paper is organized as follows. Section 2 provides an overview of the elementary concepts related to IvIFSs, including interval-valued intuitionistic fuzzy numbers (IvIFNs) and their relationships, as well as the entropy weight method. Section 3 proposes a new distance measure, and it is proved that the new distance measure satisfies the requirements for the related axiomatic properties. Additionally, several examples are illustrated to examine the superiority of the proposed distance measure in Section 4. Building upon the new distance measure, Section 5 presents an improved TOPSIS method. In Section 6, a novel decision-making approach specifically tailored to teaching quality evaluation is established, and a numerical example investigating the teaching satisfaction of mathematics courses is presented to illustrate the practical application of the proposed methodology. Furthermore, comparisons and counterintuitive examples are discussed to prove the rationality and superiority of the proposed TOPSIS method in Section 7. Finally, Section 8 concludes the study by summarizing the main findings and contributions of this research.

    In this section, related basic concepts, including the IvIFS and its properties, as well as the distance measures, are recalled briefly.

Definition 2.1. [12] Suppose that \Gamma is a nonempty set; an IvIFS \widetilde{H} over \Gamma can be defined as below:

\widetilde{H} = \left\{ \left\langle \tau, \left[u_{\widetilde{H}}^{L}(\tau), u_{\widetilde{H}}^{R}(\tau)\right], \left[v_{\widetilde{H}}^{L}(\tau), v_{\widetilde{H}}^{R}(\tau)\right] \right\rangle \mid \tau \in \Gamma \right\}, (2.1)

where \left[u_{\widetilde{H}}^{L}(\tau), u_{\widetilde{H}}^{R}(\tau)\right] \subseteq [0, 1] and \left[v_{\widetilde{H}}^{L}(\tau), v_{\widetilde{H}}^{R}(\tau)\right] \subseteq [0, 1] are the interval membership degree and non-membership degree of the element \tau to \widetilde{H}, respectively. Besides, the formula satisfies the condition that u_{\widetilde{H}}^{R}(\tau) + v_{\widetilde{H}}^{R}(\tau) \le 1 for any \tau \in \Gamma. The interval intuitionistic index or hesitancy degree is \pi_{\widetilde{H}}(\tau) = \left[\pi_{\widetilde{H}}^{L}(\tau), \pi_{\widetilde{H}}^{R}(\tau)\right] = \left[1 - u_{\widetilde{H}}^{R}(\tau) - v_{\widetilde{H}}^{R}(\tau), 1 - u_{\widetilde{H}}^{L}(\tau) - v_{\widetilde{H}}^{L}(\tau)\right], and \pi_{\widetilde{H}}(\tau) \subseteq [0, 1].

For convenience, we denote the IvIFN as \left(\left[u^{L}, u^{R}\right], \left[v^{L}, v^{R}\right]\right) [13].

Especially, when u_{\widetilde{H}}^{L}(\tau) = u_{\widetilde{H}}^{R}(\tau) and v_{\widetilde{H}}^{L}(\tau) = v_{\widetilde{H}}^{R}(\tau), an IvIFS is reduced to an IFS and an IvIFN is reduced to an intuitionistic fuzzy number.
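To make Definition 2.1 concrete, here is a minimal Python sketch (our own illustration, not part of the original paper) of an IvIFN with its hesitancy interval; the sample values are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class IvIFN:
    uL: float  # lower bound of the interval membership degree
    uR: float  # upper bound of the interval membership degree
    vL: float  # lower bound of the interval non-membership degree
    vR: float  # upper bound of the interval non-membership degree

    def __post_init__(self):
        # Conditions of Definition 2.1: valid intervals and uR + vR <= 1.
        assert 0 <= self.uL <= self.uR <= 1 and 0 <= self.vL <= self.vR <= 1
        assert self.uR + self.vR <= 1

    def hesitancy(self):
        # pi = [1 - uR - vR, 1 - uL - vL]
        return (1 - self.uR - self.vR, 1 - self.uL - self.vL)

beta = IvIFN(0.35, 0.55, 0.25, 0.35)  # hypothetical IvIFN
print(beta.hesitancy())               # approximately (0.10, 0.40)
```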

Definition 2.2. [40] Suppose that \widetilde{H_k} = \left\langle \tau, \left[u_{\widetilde{H_k}}^{L}(\tau), u_{\widetilde{H_k}}^{R}(\tau)\right], \left[v_{\widetilde{H_k}}^{L}(\tau), v_{\widetilde{H_k}}^{R}(\tau)\right] \right\rangle (k = 1, 2) are any two IvIFSs in \Gamma; we have:

(1) \widetilde{H_1} = \widetilde{H_2} iff u_{\widetilde{H_1}}^{L}(\tau) = u_{\widetilde{H_2}}^{L}(\tau), u_{\widetilde{H_1}}^{R}(\tau) = u_{\widetilde{H_2}}^{R}(\tau), v_{\widetilde{H_1}}^{L}(\tau) = v_{\widetilde{H_2}}^{L}(\tau) and v_{\widetilde{H_1}}^{R}(\tau) = v_{\widetilde{H_2}}^{R}(\tau) for \tau \in \Gamma;

(2) \widetilde{H_1} \subseteq \widetilde{H_2} iff u_{\widetilde{H_1}}^{L}(\tau) \le u_{\widetilde{H_2}}^{L}(\tau), u_{\widetilde{H_1}}^{R}(\tau) \le u_{\widetilde{H_2}}^{R}(\tau), v_{\widetilde{H_1}}^{L}(\tau) \ge v_{\widetilde{H_2}}^{L}(\tau) and v_{\widetilde{H_1}}^{R}(\tau) \ge v_{\widetilde{H_2}}^{R}(\tau) for \tau \in \Gamma;

(3) \widetilde{H_1} \cup \widetilde{H_2} = \left\{ \left\langle \tau, \left[\max_k u_{\widetilde{H_k}}^{L}(\tau), \max_k u_{\widetilde{H_k}}^{R}(\tau)\right], \left[\min_k v_{\widetilde{H_k}}^{L}(\tau), \min_k v_{\widetilde{H_k}}^{R}(\tau)\right] \right\rangle \mid \tau \in \Gamma \right\};

(4) \widetilde{H_1} \cap \widetilde{H_2} = \left\{ \left\langle \tau, \left[\min_k u_{\widetilde{H_k}}^{L}(\tau), \min_k u_{\widetilde{H_k}}^{R}(\tau)\right], \left[\max_k v_{\widetilde{H_k}}^{L}(\tau), \max_k v_{\widetilde{H_k}}^{R}(\tau)\right] \right\rangle \mid \tau \in \Gamma \right\};

(5) \widetilde{H_1}^{C} = \left\{ \left\langle \tau, \left[v_{\widetilde{H_1}}^{L}(\tau), v_{\widetilde{H_1}}^{R}(\tau)\right], \left[u_{\widetilde{H_1}}^{L}(\tau), u_{\widetilde{H_1}}^{R}(\tau)\right] \right\rangle \mid \tau \in \Gamma \right\}.

Definition 2.3. [13] Assume that \widetilde{\beta_k} = \left(\left[u_{\widetilde{\beta_k}}^{L}, u_{\widetilde{\beta_k}}^{R}\right], \left[v_{\widetilde{\beta_k}}^{L}, v_{\widetilde{\beta_k}}^{R}\right]\right) (k = 1, 2) represents any two IvIFNs; then, one has the following operational laws.

(1) \widetilde{\beta_1} \oplus \widetilde{\beta_2} = \left(\left[u_{\widetilde{\beta_1}}^{L} + u_{\widetilde{\beta_2}}^{L} - u_{\widetilde{\beta_1}}^{L} u_{\widetilde{\beta_2}}^{L}, u_{\widetilde{\beta_1}}^{R} + u_{\widetilde{\beta_2}}^{R} - u_{\widetilde{\beta_1}}^{R} u_{\widetilde{\beta_2}}^{R}\right], \left[v_{\widetilde{\beta_1}}^{L} v_{\widetilde{\beta_2}}^{L}, v_{\widetilde{\beta_1}}^{R} v_{\widetilde{\beta_2}}^{R}\right]\right);

(2) \widetilde{\beta_1} \otimes \widetilde{\beta_2} = \left(\left[u_{\widetilde{\beta_1}}^{L} u_{\widetilde{\beta_2}}^{L}, u_{\widetilde{\beta_1}}^{R} u_{\widetilde{\beta_2}}^{R}\right], \left[v_{\widetilde{\beta_1}}^{L} + v_{\widetilde{\beta_2}}^{L} - v_{\widetilde{\beta_1}}^{L} v_{\widetilde{\beta_2}}^{L}, v_{\widetilde{\beta_1}}^{R} + v_{\widetilde{\beta_2}}^{R} - v_{\widetilde{\beta_1}}^{R} v_{\widetilde{\beta_2}}^{R}\right]\right);

(3) \gamma\widetilde{\beta} = \left(\left[1 - \left(1 - u_{\widetilde{\beta}}^{L}\right)^{\gamma}, 1 - \left(1 - u_{\widetilde{\beta}}^{R}\right)^{\gamma}\right], \left[\left(v_{\widetilde{\beta}}^{L}\right)^{\gamma}, \left(v_{\widetilde{\beta}}^{R}\right)^{\gamma}\right]\right), \gamma > 0;

(4) \widetilde{\beta}^{\gamma} = \left(\left[\left(u_{\widetilde{\beta}}^{L}\right)^{\gamma}, \left(u_{\widetilde{\beta}}^{R}\right)^{\gamma}\right], \left[1 - \left(1 - v_{\widetilde{\beta}}^{L}\right)^{\gamma}, 1 - \left(1 - v_{\widetilde{\beta}}^{R}\right)^{\gamma}\right]\right), \gamma > 0.
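The operational laws translate directly into code. The following sketch (our own, with IvIFNs written as plain (uL, uR, vL, vR) tuples) implements the four laws of Definition 2.3:

```python
def ivifn_add(b1, b2):
    # (1) beta1 (+) beta2: probabilistic sum on memberships, product on non-memberships.
    u1L, u1R, v1L, v1R = b1
    u2L, u2R, v2L, v2R = b2
    return (u1L + u2L - u1L * u2L, u1R + u2R - u1R * u2R, v1L * v2L, v1R * v2R)

def ivifn_mul(b1, b2):
    # (2) beta1 (x) beta2: product on memberships, probabilistic sum on non-memberships.
    u1L, u1R, v1L, v1R = b1
    u2L, u2R, v2L, v2R = b2
    return (u1L * u2L, u1R * u2R, v1L + v2L - v1L * v2L, v1R + v2R - v1R * v2R)

def ivifn_scale(gamma, b):
    # (3) gamma * beta, gamma > 0.
    uL, uR, vL, vR = b
    return (1 - (1 - uL) ** gamma, 1 - (1 - uR) ** gamma, vL ** gamma, vR ** gamma)

def ivifn_power(b, gamma):
    # (4) beta ** gamma, gamma > 0.
    uL, uR, vL, vR = b
    return (uL ** gamma, uR ** gamma, 1 - (1 - vL) ** gamma, 1 - (1 - vR) ** gamma)
```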

Definition 2.4. [13] Suppose that \widetilde{\beta} refers to an IvIFN; the score function and accuracy function of \widetilde{\beta} are given as below, respectively:

f_s(\widetilde{\beta}) = \frac{u_{\widetilde{\beta}}^{L} + u_{\widetilde{\beta}}^{R} - v_{\widetilde{\beta}}^{L} - v_{\widetilde{\beta}}^{R}}{2}, \quad -1 \le f_s(\widetilde{\beta}) \le 1. (2.2)
f_a(\widetilde{\beta}) = \frac{u_{\widetilde{\beta}}^{L} + u_{\widetilde{\beta}}^{R} + v_{\widetilde{\beta}}^{L} + v_{\widetilde{\beta}}^{R}}{2}, \quad 0 \le f_a(\widetilde{\beta}) \le 1. (2.3)

For any two IvIFNs \widetilde{\beta_1} and \widetilde{\beta_2}, the order relationship can be defined as follows.

(1) \widetilde{\beta_1} > \widetilde{\beta_2} when f_s(\widetilde{\beta_1}) > f_s(\widetilde{\beta_2});

(2) If f_s(\widetilde{\beta_1}) = f_s(\widetilde{\beta_2}), the following holds:

a. \widetilde{\beta_1} > \widetilde{\beta_2} when f_a(\widetilde{\beta_1}) > f_a(\widetilde{\beta_2});

b. \widetilde{\beta_1} = \widetilde{\beta_2} when f_a(\widetilde{\beta_1}) = f_a(\widetilde{\beta_2}).
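A short sketch (ours) of the score and accuracy functions of Eqs (2.2) and (2.3), together with the tie-breaking order above; the two IvIFNs compared are hypothetical:

```python
def score(b):
    # Eq (2.2): fs in [-1, 1].
    uL, uR, vL, vR = b
    return (uL + uR - vL - vR) / 2

def accuracy(b):
    # Eq (2.3): fa in [0, 1].
    uL, uR, vL, vR = b
    return (uL + uR + vL + vR) / 2

def rank_key(b):
    # Compare by score first and break ties with accuracy, as in rules (1)-(2).
    return (score(b), accuracy(b))

b1 = (0.4, 0.6, 0.1, 0.2)  # hypothetical IvIFNs
b2 = (0.3, 0.5, 0.1, 0.3)
print(rank_key(b1) > rank_key(b2))  # True, so b1 > b2
```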

Definition 2.5. [19] Set \Gamma as a nonempty set and let \widetilde{H_k} be a separate element of an IvIFS \widetilde{H} in \Gamma. Then, the entropy of \widetilde{H_k} can be given in the following form:

E(\widetilde{H_k}) = 1 - \frac{1}{2n}\sum_{k=1}^{n}\left(\left|u_{\widetilde{H_k}}^{L}(\tau) - v_{\widetilde{H_k}}^{L}(\tau)\right| + \left|u_{\widetilde{H_k}}^{R}(\tau) - v_{\widetilde{H_k}}^{R}(\tau)\right|\right); (2.4)

the weight of element \widetilde{H_k} is defined as in Eq (2.5):

w(\widetilde{H_k}) = \frac{1 - E(\widetilde{H_k})}{\sum_{k=1}^{n}\left(1 - E(\widetilde{H_k})\right)}. (2.5)
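Under one reading of Eqs (2.4) and (2.5), where the sum runs over the n evaluations attached to each criterion, the entropy weight method can be sketched as follows (our own illustration; the data are hypothetical):

```python
def entropy(ivifs):
    # Eq (2.4): ivifs is a list of (uL, uR, vL, vR) tuples.
    n = len(ivifs)
    s = sum(abs(uL - vL) + abs(uR - vR) for uL, uR, vL, vR in ivifs)
    return 1 - s / (2 * n)

def entropy_weights(criteria):
    # Eq (2.5): criteria with lower entropy (clearer information) get larger weights.
    es = [entropy(h) for h in criteria]
    total = sum(1 - e for e in es)
    return [(1 - e) / total for e in es]

c1 = [(0.6, 0.7, 0.1, 0.2), (0.5, 0.6, 0.2, 0.3)]    # hypothetical IvIF evaluations
c2 = [(0.4, 0.5, 0.3, 0.4), (0.3, 0.4, 0.3, 0.35)]
print(entropy_weights([c1, c2]))
```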

Definition 2.6. [13] For a group of IvIFNs \widetilde{\beta_k} (k = 1, 2, \ldots, n), the IvIF weighted arithmetic average (IvIFWAA) operator is given by Eq (2.6):

\mathrm{IvIFWAA}(\widetilde{\beta_1}, \widetilde{\beta_2}, \ldots, \widetilde{\beta_n}) = \sum_{k=1}^{n} w_k \widetilde{\beta_k} = \left(\left[1 - \prod_{k=1}^{n}\left(1 - u_{\widetilde{\beta_k}}^{L}\right)^{w_k}, 1 - \prod_{k=1}^{n}\left(1 - u_{\widetilde{\beta_k}}^{R}\right)^{w_k}\right], \left[\prod_{k=1}^{n}\left(v_{\widetilde{\beta_k}}^{L}\right)^{w_k}, \prod_{k=1}^{n}\left(v_{\widetilde{\beta_k}}^{R}\right)^{w_k}\right]\right). (2.6)

Here, w_k is the weight of \widetilde{\beta_k}, satisfying 0 \le w_k \le 1 and \sum_{k=1}^{n} w_k = 1.
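Equation (2.6) aggregates weighted IvIFNs into a single IvIFN. A compact sketch (ours, assuming tuple-encoded IvIFNs) is:

```python
import math

def ivifwaa(betas, weights):
    # Eq (2.6): IvIF weighted arithmetic average of (uL, uR, vL, vR) tuples.
    assert abs(sum(weights) - 1) < 1e-9  # weights must sum to 1
    uL = 1 - math.prod((1 - b[0]) ** w for b, w in zip(betas, weights))
    uR = 1 - math.prod((1 - b[1]) ** w for b, w in zip(betas, weights))
    vL = math.prod(b[2] ** w for b, w in zip(betas, weights))
    vR = math.prod(b[3] ** w for b, w in zip(betas, weights))
    return (uL, uR, vL, vR)

# Hypothetical aggregation of two expert opinions with weights 0.6 and 0.4:
print(ivifwaa([(0.5, 0.6, 0.2, 0.3), (0.7, 0.8, 0.1, 0.2)], [0.6, 0.4]))
```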

Definition 2.7. [19] A mapping d: \mathrm{IvIFS}(\Gamma) \times \mathrm{IvIFS}(\Gamma) \to [0, 1] denotes a distance measure between the IvIFSs \widetilde{H_i} and \widetilde{H_j} if the following conditions are satisfied:

(1) d(\widetilde{H_i}, \widetilde{H_j}) = 0 \Leftrightarrow \widetilde{H_i} = \widetilde{H_j};

(2) d(\widetilde{H_i}, \widetilde{H_j}) = d(\widetilde{H_j}, \widetilde{H_i});

(3) 0 \le d(\widetilde{H_i}, \widetilde{H_j}) \le 1;

(4) If \widetilde{H_i} \subseteq \widetilde{H_j} \subseteq \widetilde{H_k}, then d(\widetilde{H_i}, \widetilde{H_j}) \le d(\widetilde{H_i}, \widetilde{H_k}) and d(\widetilde{H_j}, \widetilde{H_k}) \le d(\widetilde{H_i}, \widetilde{H_k}).

Suppose that \widetilde{H_k} = \left\langle \tau, \left[u_{\widetilde{H_k}}^{L}(\tau), u_{\widetilde{H_k}}^{R}(\tau)\right], \left[v_{\widetilde{H_k}}^{L}(\tau), v_{\widetilde{H_k}}^{R}(\tau)\right] \right\rangle (k = 1, 2) presents any two IvIFSs in \Gamma = \{\tau_1, \tau_2, \ldots, \tau_m\}; some existing distance measures between \widetilde{H_1} and \widetilde{H_2} are stated below, and they will be used in the later discussion.

(1) Hamming distance [22]:

d_H(\widetilde{H_1}, \widetilde{H_2}) = \frac{1}{4m}\sum_{i=1}^{m}\left[\left|u_{\widetilde{H_1}}^{L}(\tau_i) - u_{\widetilde{H_2}}^{L}(\tau_i)\right| + \left|u_{\widetilde{H_1}}^{R}(\tau_i) - u_{\widetilde{H_2}}^{R}(\tau_i)\right| + \left|v_{\widetilde{H_1}}^{L}(\tau_i) - v_{\widetilde{H_2}}^{L}(\tau_i)\right| + \left|v_{\widetilde{H_1}}^{R}(\tau_i) - v_{\widetilde{H_2}}^{R}(\tau_i)\right|\right]. (2.7)

(2) Euclidean distance [22]:

d_E(\widetilde{H_1}, \widetilde{H_2}) = \sqrt{\frac{1}{4m}\sum_{i=1}^{m}\left[\left(u_{\widetilde{H_1}}^{L}(\tau_i) - u_{\widetilde{H_2}}^{L}(\tau_i)\right)^2 + \left(u_{\widetilde{H_1}}^{R}(\tau_i) - u_{\widetilde{H_2}}^{R}(\tau_i)\right)^2 + \left(v_{\widetilde{H_1}}^{L}(\tau_i) - v_{\widetilde{H_2}}^{L}(\tau_i)\right)^2 + \left(v_{\widetilde{H_1}}^{R}(\tau_i) - v_{\widetilde{H_2}}^{R}(\tau_i)\right)^2\right]}. (2.8)

(3) Hausdorff–Hamming distance [26]:

d_{HH}(\widetilde{H_1}, \widetilde{H_2}) = \frac{1}{2m}\sum_{i=1}^{m}\left[\left|u_{\widetilde{H_1}}^{L}(\tau_i) - u_{\widetilde{H_2}}^{L}(\tau_i)\right| \vee \left|u_{\widetilde{H_1}}^{R}(\tau_i) - u_{\widetilde{H_2}}^{R}(\tau_i)\right| + \left|v_{\widetilde{H_1}}^{L}(\tau_i) - v_{\widetilde{H_2}}^{L}(\tau_i)\right| \vee \left|v_{\widetilde{H_1}}^{R}(\tau_i) - v_{\widetilde{H_2}}^{R}(\tau_i)\right|\right], (2.9)

where \vee denotes the maximum.

(4) Hausdorff–Euclidean distance [26]:

d_{HE}(\widetilde{H_1}, \widetilde{H_2}) = \sqrt{\frac{1}{2m}\sum_{i=1}^{m}\left[\left(\left|u_{\widetilde{H_1}}^{L}(\tau_i) - u_{\widetilde{H_2}}^{L}(\tau_i)\right| \vee \left|u_{\widetilde{H_1}}^{R}(\tau_i) - u_{\widetilde{H_2}}^{R}(\tau_i)\right|\right)^2 + \left(\left|v_{\widetilde{H_1}}^{L}(\tau_i) - v_{\widetilde{H_2}}^{L}(\tau_i)\right| \vee \left|v_{\widetilde{H_1}}^{R}(\tau_i) - v_{\widetilde{H_2}}^{R}(\tau_i)\right|\right)^2\right]}. (2.10)

(5) Muharrem [27] created a novel distance measure for IvIFSs:

d_p^t(\widetilde{H_1}, \widetilde{H_2}) = \sqrt[p]{\frac{1}{4m(t+1)^p}\sum_{i=1}^{m}\left\{\left|t\left(u_{\widetilde{H_1}}^{L}(\tau_i) - u_{\widetilde{H_2}}^{L}(\tau_i)\right) - \left(v_{\widetilde{H_1}}^{L}(\tau_i) - v_{\widetilde{H_2}}^{L}(\tau_i)\right)\right|^p + \left|t\left(v_{\widetilde{H_1}}^{L}(\tau_i) - v_{\widetilde{H_2}}^{L}(\tau_i)\right) - \left(u_{\widetilde{H_1}}^{L}(\tau_i) - u_{\widetilde{H_2}}^{L}(\tau_i)\right)\right|^p + \left|t\left(u_{\widetilde{H_1}}^{R}(\tau_i) - u_{\widetilde{H_2}}^{R}(\tau_i)\right) - \left(v_{\widetilde{H_1}}^{R}(\tau_i) - v_{\widetilde{H_2}}^{R}(\tau_i)\right)\right|^p + \left|t\left(v_{\widetilde{H_1}}^{R}(\tau_i) - v_{\widetilde{H_2}}^{R}(\tau_i)\right) - \left(u_{\widetilde{H_1}}^{R}(\tau_i) - u_{\widetilde{H_2}}^{R}(\tau_i)\right)\right|^p\right\}}, (2.11)

where t = 2, 3, 4, \ldots; the parameter p represents the L_p norm and t is used to identify the uncertainty level. For the calculations in this study, t = 2 and p = 1 are used.

(6) Liu and Jiang [17] established a new distance measure for IvIFSs:

d_L(\widetilde{H_1}, \widetilde{H_2}) = \sqrt{\frac{1}{2}\left(D_u^2 + D_v^2 + D_\pi^2 + D_\pi D_u + D_\pi D_v\right)}, (2.12)

where an IvIFN \left\langle \left[u^{L}, u^{R}\right], \left[v^{L}, v^{R}\right] \right\rangle is converted into an interval vector \left(\left[u^{L}, u^{R}\right], \left[v^{L}, v^{R}\right], \left[\pi^{L}, \pi^{R}\right]\right)^{T}, and

D_u^2(\widetilde{H_1}, \widetilde{H_2}) = \left(\frac{u_{\widetilde{H_1}}^{L}(\tau_i) + u_{\widetilde{H_1}}^{R}(\tau_i)}{2} - \frac{u_{\widetilde{H_2}}^{L}(\tau_i) + u_{\widetilde{H_2}}^{R}(\tau_i)}{2}\right)^2 + \frac{1}{3}\left(\frac{u_{\widetilde{H_1}}^{R}(\tau_i) - u_{\widetilde{H_1}}^{L}(\tau_i)}{2} - \frac{u_{\widetilde{H_2}}^{R}(\tau_i) - u_{\widetilde{H_2}}^{L}(\tau_i)}{2}\right)^2,
D_v^2(\widetilde{H_1}, \widetilde{H_2}) = \left(\frac{v_{\widetilde{H_1}}^{L}(\tau_i) + v_{\widetilde{H_1}}^{R}(\tau_i)}{2} - \frac{v_{\widetilde{H_2}}^{L}(\tau_i) + v_{\widetilde{H_2}}^{R}(\tau_i)}{2}\right)^2 + \frac{1}{3}\left(\frac{v_{\widetilde{H_1}}^{R}(\tau_i) - v_{\widetilde{H_1}}^{L}(\tau_i)}{2} - \frac{v_{\widetilde{H_2}}^{R}(\tau_i) - v_{\widetilde{H_2}}^{L}(\tau_i)}{2}\right)^2,
D_\pi^2(\widetilde{H_1}, \widetilde{H_2}) = \left(\frac{\pi_{\widetilde{H_1}}^{L}(\tau_i) + \pi_{\widetilde{H_1}}^{R}(\tau_i)}{2} - \frac{\pi_{\widetilde{H_2}}^{L}(\tau_i) + \pi_{\widetilde{H_2}}^{R}(\tau_i)}{2}\right)^2 + \frac{1}{3}\left(\frac{\pi_{\widetilde{H_1}}^{R}(\tau_i) - \pi_{\widetilde{H_1}}^{L}(\tau_i)}{2} - \frac{\pi_{\widetilde{H_2}}^{R}(\tau_i) - \pi_{\widetilde{H_2}}^{L}(\tau_i)}{2}\right)^2,

and

D_u = \left\{\begin{array}{ll} \sqrt{D_u^2} & \text{when } u_{\widetilde{H_1}}^{L}(\tau_i) + u_{\widetilde{H_1}}^{R}(\tau_i) \ge u_{\widetilde{H_2}}^{L}(\tau_i) + u_{\widetilde{H_2}}^{R}(\tau_i); \\ -\sqrt{D_u^2} & \text{when } u_{\widetilde{H_1}}^{L}(\tau_i) + u_{\widetilde{H_1}}^{R}(\tau_i) < u_{\widetilde{H_2}}^{L}(\tau_i) + u_{\widetilde{H_2}}^{R}(\tau_i), \end{array}\right.
D_v = \left\{\begin{array}{ll} \sqrt{D_v^2} & \text{when } v_{\widetilde{H_1}}^{L}(\tau_i) + v_{\widetilde{H_1}}^{R}(\tau_i) \ge v_{\widetilde{H_2}}^{L}(\tau_i) + v_{\widetilde{H_2}}^{R}(\tau_i); \\ -\sqrt{D_v^2} & \text{when } v_{\widetilde{H_1}}^{L}(\tau_i) + v_{\widetilde{H_1}}^{R}(\tau_i) < v_{\widetilde{H_2}}^{L}(\tau_i) + v_{\widetilde{H_2}}^{R}(\tau_i), \end{array}\right.
D_\pi = \left\{\begin{array}{ll} \sqrt{D_\pi^2} & \text{when } \pi_{\widetilde{H_1}}^{L}(\tau_i) + \pi_{\widetilde{H_1}}^{R}(\tau_i) \ge \pi_{\widetilde{H_2}}^{L}(\tau_i) + \pi_{\widetilde{H_2}}^{R}(\tau_i); \\ -\sqrt{D_\pi^2} & \text{when } \pi_{\widetilde{H_1}}^{L}(\tau_i) + \pi_{\widetilde{H_1}}^{R}(\tau_i) < \pi_{\widetilde{H_2}}^{L}(\tau_i) + \pi_{\widetilde{H_2}}^{R}(\tau_i). \end{array}\right.

(7) Garg and Kumar [18] defined a new exponential distance through the use of a connection number set \widetilde{H} = \left\{\left(\tau_i, r_{\widetilde{H}}(\tau_i) + s_{\widetilde{H}}(\tau_i) i + t_{\widetilde{H}}(\tau_i) j\right)\right\}, as follows:

If f_s(\widetilde{H_1}) \ne f_s(\widetilde{H_2}) and f_a(\widetilde{H_1}) \ne f_a(\widetilde{H_2}), then

r_{\widetilde{H}}(\tau_i) = \frac{\left(u_{\widetilde{H}}^{L}(\tau_i) + u_{\widetilde{H}}^{R}(\tau_i)\right)\left(2 - v_{\widetilde{H}}^{L}(\tau_i) - v_{\widetilde{H}}^{R}(\tau_i)\right)}{4};

s_{\widetilde{H}}(\tau_i) = \frac{1 + \left(1 - u_{\widetilde{H}}^{L}(\tau_i) - u_{\widetilde{H}}^{R}(\tau_i)\right)\left(1 - v_{\widetilde{H}}^{L}(\tau_i) - v_{\widetilde{H}}^{R}(\tau_i)\right)}{2};

t_{\widetilde{H}}(\tau_i) = \frac{\left(v_{\widetilde{H}}^{L}(\tau_i) + v_{\widetilde{H}}^{R}(\tau_i)\right)\left(2 - u_{\widetilde{H}}^{L}(\tau_i) - u_{\widetilde{H}}^{R}(\tau_i)\right)}{4}.

If f_s(\widetilde{H_1}) = f_s(\widetilde{H_2}) (either f_a(\widetilde{H_1}) \ne f_a(\widetilde{H_2}) or f_a(\widetilde{H_1}) = f_a(\widetilde{H_2})), one has

r_{\widetilde{H}}(\tau_i) = \frac{\left(u_{\widetilde{H}}^{L}(\tau_i)\left(1 - u_{\widetilde{H}}^{R}(\tau_i) - v_{\widetilde{H}}^{R}(\tau_i)\right) + u_{\widetilde{H}}^{R}(\tau_i)\left(1 - u_{\widetilde{H}}^{L}(\tau_i) - v_{\widetilde{H}}^{L}(\tau_i)\right)\right)\left(2 - v_{\widetilde{H}}^{L}(\tau_i) - v_{\widetilde{H}}^{R}(\tau_i)\right)}{4};

s_{\widetilde{H}}(\tau_i) = 1 - r_{\widetilde{H}}(\tau_i) - t_{\widetilde{H}}(\tau_i);

t_{\widetilde{H}}(\tau_i) = \frac{\left(v_{\widetilde{H}}^{L}(\tau_i)\left(1 - u_{\widetilde{H}}^{R}(\tau_i) - v_{\widetilde{H}}^{R}(\tau_i)\right) + v_{\widetilde{H}}^{R}(\tau_i)\left(1 - u_{\widetilde{H}}^{L}(\tau_i) - v_{\widetilde{H}}^{L}(\tau_i)\right)\right)\left(2 - u_{\widetilde{H}}^{L}(\tau_i) - u_{\widetilde{H}}^{R}(\tau_i)\right)}{4}.

Thus, a new normalized exponential Hamming distance is given by Eq (2.13):

d_{\exp}^{H}(\widetilde{H_1}, \widetilde{H_2}) = \frac{1 - \exp\left[-\frac{1}{3}\sum_{i=1}^{n}\left(\left|r_{\widetilde{H_1}}(\tau_i) - r_{\widetilde{H_2}}(\tau_i)\right| + \left|s_{\widetilde{H_1}}(\tau_i) - s_{\widetilde{H_2}}(\tau_i)\right| + \left|t_{\widetilde{H_1}}(\tau_i) - t_{\widetilde{H_2}}(\tau_i)\right|\right)\right]}{1 - \exp(-n)}, (2.13)

and a new normalized exponential Euclidean distance is given by Eq (2.14):

d_{\exp}^{E}(\widetilde{H_1}, \widetilde{H_2}) = \frac{1 - \exp\left[-\left(\frac{1}{3}\sum_{i=1}^{n}\left(\left|r_{\widetilde{H_1}}(\tau_i) - r_{\widetilde{H_2}}(\tau_i)\right|^2 + \left|s_{\widetilde{H_1}}(\tau_i) - s_{\widetilde{H_2}}(\tau_i)\right|^2 + \left|t_{\widetilde{H_1}}(\tau_i) - t_{\widetilde{H_2}}(\tau_i)\right|^2\right)\right)^{1/2}\right]}{1 - \exp(-n)}. (2.14)
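For later comparison, the two classical measures of Eqs (2.7) and (2.8) are easy to implement. The following sketch (ours) reproduces the d_H and d_E values used later in Example 4.1; the IvIFSs are those hypothetical single-element sets:

```python
import math

def d_hamming(H1, H2):
    # Eq (2.7): H1, H2 are lists of (uL, uR, vL, vR) tuples over the same ground set.
    m = len(H1)
    return sum(sum(abs(a - b) for a, b in zip(x, y))
               for x, y in zip(H1, H2)) / (4 * m)

def d_euclidean(H1, H2):
    # Eq (2.8): same encoding, squared differences under a square root.
    m = len(H1)
    return math.sqrt(sum(sum((a - b) ** 2 for a, b in zip(x, y))
                         for x, y in zip(H1, H2)) / (4 * m))

H6 = [(0.2, 0.3, 0.3, 0.4)]
H7 = [(0.25, 0.35, 0.35, 0.45)]
print(d_hamming(H6, H7), d_euclidean(H6, H7))  # 0.05 and 0.05
```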

    The distance measure plays a crucial role in distinguishing the differences among alternatives, making it a critical component in decision-making processes such as the TOPSIS method [17]. The distance measure with a higher distinguishing ability will lead to a better decision-making method, thus providing decision-makers with a definite choice. However, some existing distance measures have been established without explicit physical meaning, while others involve complex calculations. In some cases, these measures fail to adequately distinguish decision-making information, and the calculated results may even contradict theoretical requirements or intuitive feelings.

Triangular divergence, a classical measure widely applied to probability distributions, has successfully handled counterintuitive problems better than other existing distance methods [41,42,43]. Therefore, based upon the concept of triangular divergence and the approaches described in [43], we have created a new distance measure in the IvIFS environment. By utilizing triangular divergence, the proposed distance measure aims to overcome the limitations of existing measures and provide a more effective tool for distinguishing differences between IvIFSs.

    In the following sections, we will discuss the concept of triangular divergence and its application in the new distance measure for IvIFSs.

Definition 3.1. [42] Set \Psi_n = \left\{P = (p_1, p_2, \ldots, p_n) \mid p_i > 0, \sum_{i=1}^{n} p_i = 1\right\}, n \ge 2, as the set of finite discrete probability distributions. For P, Q \in \Psi_n, the classical triangular divergence measure between P and Q is defined as

\Delta(P, Q) = \sum_{i=1}^{n}\frac{(p_i - q_i)^2}{p_i + q_i}. (3.1)

    The bigger the triangular divergence value, the greater the difference between the probability distributions P and Q.

With Eq (3.1), the square root of the triangular divergence can be described as follows:

d(P, Q) = \sqrt{\sum_{i=1}^{n}\frac{(p_i - q_i)^2}{p_i + q_i}},

    where, by convention, 0/0=0.
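As a quick illustration (ours, with hypothetical distributions), Eq (3.1) and its square root can be computed directly, with the 0/0 = 0 convention made explicit:

```python
import math

def triangular_divergence(P, Q):
    # Eq (3.1), with the convention 0/0 = 0.
    return sum(0.0 if p + q == 0 else (p - q) ** 2 / (p + q) for p, q in zip(P, Q))

def triangular_distance(P, Q):
    # Square root of the triangular divergence.
    return math.sqrt(triangular_divergence(P, Q))

P = (0.2, 0.3, 0.5)   # hypothetical probability distributions
Q = (0.25, 0.35, 0.4)
print(round(triangular_distance(P, Q), 4))  # 0.1432
```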

Definition 3.2. Suppose that \widetilde{H_k} = \left\langle \tau_j, \left[u_{\widetilde{H_k}}^{L}(\tau_j), u_{\widetilde{H_k}}^{R}(\tau_j)\right], \left[v_{\widetilde{H_k}}^{L}(\tau_j), v_{\widetilde{H_k}}^{R}(\tau_j)\right] \right\rangle (k = 1, 2) represents any two IvIFSs in \Gamma = \{\tau_1, \tau_2, \ldots, \tau_m\}; then, the distance between \widetilde{H_1} and \widetilde{H_2} can be determined by using the following formula:

d_{Iv}(\widetilde{H_1}, \widetilde{H_2}) = \sqrt{\frac{1}{4m}\sum_{j=1}^{m}\left[\frac{\left(u_{\widetilde{H_1}}^{L}(\tau_j) - u_{\widetilde{H_2}}^{L}(\tau_j)\right)^2}{u_{\widetilde{H_1}}^{L}(\tau_j) + u_{\widetilde{H_2}}^{L}(\tau_j)} + \frac{\left(u_{\widetilde{H_1}}^{R}(\tau_j) - u_{\widetilde{H_2}}^{R}(\tau_j)\right)^2}{u_{\widetilde{H_1}}^{R}(\tau_j) + u_{\widetilde{H_2}}^{R}(\tau_j)} + \frac{\left(v_{\widetilde{H_1}}^{L}(\tau_j) - v_{\widetilde{H_2}}^{L}(\tau_j)\right)^2}{v_{\widetilde{H_1}}^{L}(\tau_j) + v_{\widetilde{H_2}}^{L}(\tau_j)} + \frac{\left(v_{\widetilde{H_1}}^{R}(\tau_j) - v_{\widetilde{H_2}}^{R}(\tau_j)\right)^2}{v_{\widetilde{H_1}}^{R}(\tau_j) + v_{\widetilde{H_2}}^{R}(\tau_j)}\right]}. (3.2)

We denote d_{Iv} as an interval-valued intuitionistic fuzzy distance measure based on triangular divergence (IvIFTD). As stated previously, the bigger the value of d_{Iv}, the greater the difference between the IvIFSs.
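A direct sketch (ours) of Eq (3.2), which reproduces the values of Example 3.1 below:

```python
import math

def _term(a, b):
    # One summand of Eq (3.2), with the 0/0 = 0 convention.
    return 0.0 if a + b == 0 else (a - b) ** 2 / (a + b)

def d_iv(H1, H2):
    # IvIFTD distance: H1, H2 are lists of (uL, uR, vL, vR) tuples.
    m = len(H1)
    s = sum(sum(_term(a, b) for a, b in zip(x, y)) for x, y in zip(H1, H2))
    return math.sqrt(s / (4 * m))

H1 = [(0.0, 0.0, 1.0, 1.0)]
H2 = [(0.35, 0.55, 0.25, 0.35)]
H3 = [(1.0, 1.0, 0.0, 0.0)]
print(round(d_iv(H1, H2), 4))  # 0.6448
print(round(d_iv(H2, H3), 4))  # 0.5108
print(round(d_iv(H1, H3), 4))  # 1.0
```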

Theorem 3.1. Set d_{Iv}(\widetilde{H_1}, \widetilde{H_2}) in Eq (3.2) as the distance measure between two IvIFSs; then, the following properties are satisfied:

(1) d_{Iv}(\widetilde{H_1}, \widetilde{H_2}) = 0 \Leftrightarrow \widetilde{H_1} = \widetilde{H_2};

(2) d_{Iv}(\widetilde{H_1}, \widetilde{H_2}) = d_{Iv}(\widetilde{H_2}, \widetilde{H_1});

(3) 0 \le d_{Iv}(\widetilde{H_1}, \widetilde{H_2}) \le 1;

(4) If \widetilde{H_1} \subseteq \widetilde{H_2} \subseteq \widetilde{H_3}, then one has d_{Iv}(\widetilde{H_1}, \widetilde{H_2}) \le d_{Iv}(\widetilde{H_1}, \widetilde{H_3}) and d_{Iv}(\widetilde{H_2}, \widetilde{H_3}) \le d_{Iv}(\widetilde{H_1}, \widetilde{H_3}).

Proof. (1) d_{Iv}(\widetilde{H_1}, \widetilde{H_2}) = 0 \Leftrightarrow \widetilde{H_1} = \widetilde{H_2}.

Necessity:

For any \tau_j \in \Gamma, if d_{Iv}(\widetilde{H_1}, \widetilde{H_2}) = 0, one has

\begin{eqnarray*} d_{Iv}(\widetilde{H_1}, \widetilde{H_2}) = \sqrt{\frac{1}{4m}\sum_{j=1}^{m}\left[\frac{\left(u_{\widetilde{H_1}}^{L}(\tau_j) - u_{\widetilde{H_2}}^{L}(\tau_j)\right)^2}{u_{\widetilde{H_1}}^{L}(\tau_j) + u_{\widetilde{H_2}}^{L}(\tau_j)} + \frac{\left(u_{\widetilde{H_1}}^{R}(\tau_j) - u_{\widetilde{H_2}}^{R}(\tau_j)\right)^2}{u_{\widetilde{H_1}}^{R}(\tau_j) + u_{\widetilde{H_2}}^{R}(\tau_j)} + \frac{\left(v_{\widetilde{H_1}}^{L}(\tau_j) - v_{\widetilde{H_2}}^{L}(\tau_j)\right)^2}{v_{\widetilde{H_1}}^{L}(\tau_j) + v_{\widetilde{H_2}}^{L}(\tau_j)} + \frac{\left(v_{\widetilde{H_1}}^{R}(\tau_j) - v_{\widetilde{H_2}}^{R}(\tau_j)\right)^2}{v_{\widetilde{H_1}}^{R}(\tau_j) + v_{\widetilde{H_2}}^{R}(\tau_j)}\right]} = 0. \end{eqnarray*}

    Then we have

\begin{eqnarray*} \frac{\left(u_{\widetilde{H_1}}^{L}(\tau_j) - u_{\widetilde{H_2}}^{L}(\tau_j)\right)^2}{u_{\widetilde{H_1}}^{L}(\tau_j) + u_{\widetilde{H_2}}^{L}(\tau_j)} = \frac{\left(u_{\widetilde{H_1}}^{R}(\tau_j) - u_{\widetilde{H_2}}^{R}(\tau_j)\right)^2}{u_{\widetilde{H_1}}^{R}(\tau_j) + u_{\widetilde{H_2}}^{R}(\tau_j)} = \frac{\left(v_{\widetilde{H_1}}^{L}(\tau_j) - v_{\widetilde{H_2}}^{L}(\tau_j)\right)^2}{v_{\widetilde{H_1}}^{L}(\tau_j) + v_{\widetilde{H_2}}^{L}(\tau_j)} = \frac{\left(v_{\widetilde{H_1}}^{R}(\tau_j) - v_{\widetilde{H_2}}^{R}(\tau_j)\right)^2}{v_{\widetilde{H_1}}^{R}(\tau_j) + v_{\widetilde{H_2}}^{R}(\tau_j)} = 0, \end{eqnarray*}

    that is

\begin{eqnarray*} \left(u_{\widetilde{H_1}}^{L}(\tau_j) - u_{\widetilde{H_2}}^{L}(\tau_j)\right)^2 = \left(u_{\widetilde{H_1}}^{R}(\tau_j) - u_{\widetilde{H_2}}^{R}(\tau_j)\right)^2 = \left(v_{\widetilde{H_1}}^{L}(\tau_j) - v_{\widetilde{H_2}}^{L}(\tau_j)\right)^2 = \left(v_{\widetilde{H_1}}^{R}(\tau_j) - v_{\widetilde{H_2}}^{R}(\tau_j)\right)^2 = 0. \end{eqnarray*}

    According to Definition 2.1, one has

    \begin{eqnarray*} 0 \le u_{\widetilde {{H_1}}}^L, u_{\widetilde {{H_1}}}^R, v_{\widetilde {{H_1}}}^L, v_{\widetilde {{H_1}}}^R, u_{\widetilde {{H_2}}}^L, u_{\widetilde {{H_2}}}^R, v_{\widetilde {{H_2}}}^L, v_{\widetilde {{H_2}}}^R \le 1; \end{eqnarray*}

    hence, we have

\begin{eqnarray*} u_{\widetilde{H_1}}^{L}(\tau_j) = u_{\widetilde{H_2}}^{L}(\tau_j), u_{\widetilde{H_1}}^{R}(\tau_j) = u_{\widetilde{H_2}}^{R}(\tau_j), v_{\widetilde{H_1}}^{L}(\tau_j) = v_{\widetilde{H_2}}^{L}(\tau_j), v_{\widetilde{H_1}}^{R}(\tau_j) = v_{\widetilde{H_2}}^{R}(\tau_j). \end{eqnarray*}

    Therefore, \widetilde {{H_1}} = \widetilde {{H_2}} is deduced.

    Sufficiency:

    When \widetilde {{H_1}} = \widetilde {{H_2}} , one has

\begin{eqnarray*} u_{\widetilde{H_1}}^{L}(\tau_j) = u_{\widetilde{H_2}}^{L}(\tau_j), u_{\widetilde{H_1}}^{R}(\tau_j) = u_{\widetilde{H_2}}^{R}(\tau_j), v_{\widetilde{H_1}}^{L}(\tau_j) = v_{\widetilde{H_2}}^{L}(\tau_j), v_{\widetilde{H_1}}^{R}(\tau_j) = v_{\widetilde{H_2}}^{R}(\tau_j). \end{eqnarray*}

    Then, we can obtain

    \begin{eqnarray*} {d_{Iv}}\left( {\widetilde {{H_1}}, \widetilde {{H_2}}} \right) =\\ \sqrt {\frac{1}{{4m}}\sum\limits_{j = 1}^m {\left[ {\frac{{{{\left( {u_{\widetilde {{H_1}}}^L\left( {{\tau _j}} \right) - u_{\widetilde {{H_2}}}^L\left( {{\tau _j}} \right)} \right)}^2}}}{{u_{\widetilde {{H_1}}}^L\left( {{\tau _j}} \right) + u_{\widetilde {{H_2}}}^L\left( {{\tau _j}} \right)}} + \frac{{{{\left( {u_{\widetilde {{H_1}}}^R\left( {{\tau _j}} \right) - u_{\widetilde {{H_2}}}^R\left( {{\tau _j}} \right)} \right)}^2}}}{{u_{\widetilde {{H_1}}}^R\left( {{\tau _j}} \right) + u_{\widetilde {{H_2}}}^R\left( {{\tau _j}} \right)}} + \frac{{{{\left( {v_{\widetilde {{H_1}}}^L\left( {{\tau _j}} \right) - v_{\widetilde {{H_2}}}^L\left( {{\tau _j}} \right)} \right)}^2}}}{{v_{\widetilde {{H_1}}}^L\left( {{\tau _j}} \right) + v_{\widetilde {{H_2}}}^L\left( {{\tau _j}} \right)}} + \frac{{{{\left( {v_{\widetilde {{H_1}}}^R\left( {{\tau _j}} \right) - v_{\widetilde {{H_2}}}^R\left( {{\tau _j}} \right)} \right)}^2}}}{{v_{\widetilde {{H_1}}}^R\left( {{\tau _j}} \right) + v_{\widetilde {{H_2}}}^R\left( {{\tau _j}} \right)}}} \right]} } = 0. \end{eqnarray*}

    Proof. (2) {d_{Iv}}\left({\widetilde {{H_1}}, \widetilde {{H_2}}} \right) = {d_{Iv}}\left({\widetilde {{H_2}}, \widetilde {{H_1}}} \right) .

    \begin{eqnarray*} \begin{array}{l} {d_{Iv}}\left( {\widetilde {{H_1}}, \widetilde {{H_2}}} \right) = \sqrt {\frac{1}{{4m}}\sum\limits_{j = 1}^m {\left[ {\frac{{{{\left( {u_{\widetilde {{H_1}}}^L\left( {{\tau _j}} \right) - u_{\widetilde {{H_2}}}^L\left( {{\tau _j}} \right)} \right)}^2}}}{{u_{\widetilde {{H_1}}}^L\left( {{\tau _j}} \right) + u_{\widetilde {{H_2}}}^L\left( {{\tau _j}} \right)}} + \frac{{{{\left( {u_{\widetilde {{H_1}}}^R\left( {{\tau _j}} \right) - u_{\widetilde {{H_2}}}^R\left( {{\tau _j}} \right)} \right)}^2}}}{{u_{\widetilde {{H_1}}}^R\left( {{\tau _j}} \right) + u_{\widetilde {{H_2}}}^R\left( {{\tau _j}} \right)}} + \frac{{{{\left( {v_{\widetilde {{H_1}}}^L\left( {{\tau _j}} \right) - v_{\widetilde {{H_2}}}^L\left( {{\tau _j}} \right)} \right)}^2}}}{{v_{\widetilde {{H_1}}}^L\left( {{\tau _j}} \right) + v_{\widetilde {{H_2}}}^L\left( {{\tau _j}} \right)}} + \frac{{{{\left( {v_{\widetilde {{H_1}}}^R\left( {{\tau _j}} \right) - v_{\widetilde {{H_2}}}^R\left( {{\tau _j}} \right)} \right)}^2}}}{{v_{\widetilde {{H_1}}}^R\left( {{\tau _j}} \right) + v_{\widetilde {{H_2}}}^R\left( {{\tau _j}} \right)}}} \right]} } \\ \begin{array}{*{20}{c}} {}&{}&{}&{} \end{array} = \sqrt {\frac{1}{{4m}}\sum\limits_{j = 1}^m {\left[ {\frac{{{{\left( {u_{\widetilde {{H_2}}}^L\left( {{\tau _j}} \right) - u_{\widetilde {{H_1}}}^L\left( {{\tau _j}} \right)} \right)}^2}}}{{u_{\widetilde {{H_2}}}^L\left( {{\tau _j}} \right) + u_{\widetilde {{H_1}}}^L\left( {{\tau _j}} \right)}} + \frac{{{{\left( {u_{\widetilde {{H_2}}}^R\left( {{\tau _j}} \right) - u_{\widetilde {{H_1}}}^R\left( {{\tau _j}} \right)} \right)}^2}}}{{u_{\widetilde {{H_2}}}^R\left( {{\tau _j}} \right) + u_{\widetilde {{H_1}}}^R\left( {{\tau _j}} \right)}} + \frac{{{{\left( {v_{\widetilde {{H_2}}}^L\left( {{\tau _j}} \right) - v_{\widetilde {{H_1}}}^L\left( {{\tau _j}} \right)} \right)}^2}}}{{v_{\widetilde {{H_2}}}^L\left( {{\tau _j}} \right) + v_{\widetilde {{H_1}}}^L\left( {{\tau _j}} \right)}} + \frac{{{{\left( {v_{\widetilde {{H_2}}}^R\left( {{\tau _j}} \right) - v_{\widetilde {{H_1}}}^R\left( {{\tau _j}} \right)} \right)}^2}}}{{v_{\widetilde {{H_2}}}^R\left( {{\tau _j}} \right) + v_{\widetilde {{H_1}}}^R\left( {{\tau _j}} \right)}}} \right]} } \\ \begin{array}{*{20}{c}} {}&{}&{}&{} \end{array} = {d_{Iv}}\left( {\widetilde {{H_2}}, \widetilde {{H_1}}} \right). \end{array} \end{eqnarray*}

    Proof. (3) 0 \le {d_{Iv}}\left({\widetilde {{H_1}}, \widetilde {{H_2}}} \right) \le 1 .

Clearly, 0 \le {d_{Iv}}\left( {\widetilde {{H_1}}, \widetilde {{H_2}}} \right) holds.

    According to Definition 2.1, one has

    \begin{eqnarray} 0 \le u_{\widetilde {{H_1}}}^L + v_{\widetilde {{H_1}}}^L \le u_{\widetilde {{H_1}}}^R + v_{\widetilde {{H_1}}}^R \le 1, 0 \le u_{\widetilde {{H_2}}}^L + v_{\widetilde {{H_2}}}^L \le u_{\widetilde {{H_2}}}^R + v_{\widetilde {{H_2}}}^R \le 1. \end{eqnarray} (3.3)

    So the following inequalities hold:

{\left( {u_{\widetilde {{H_1}}}^L - u_{\widetilde {{H_2}}}^L} \right)^2} \le {\left( {u_{\widetilde {{H_1}}}^L + u_{\widetilde {{H_2}}}^L} \right)^2}, {\left( {u_{\widetilde {{H_1}}}^R - u_{\widetilde {{H_2}}}^R} \right)^2} \le {\left( {u_{\widetilde {{H_1}}}^R + u_{\widetilde {{H_2}}}^R} \right)^2}, {\left( {v_{\widetilde {{H_1}}}^L - v_{\widetilde {{H_2}}}^L} \right)^2} \le {\left( {v_{\widetilde {{H_1}}}^L + v_{\widetilde {{H_2}}}^L} \right)^2}, {\left( {v_{\widetilde {{H_1}}}^R - v_{\widetilde {{H_2}}}^R} \right)^2} \le {\left( {v_{\widetilde {{H_1}}}^R + v_{\widetilde {{H_2}}}^R} \right)^2};

    then, one has

    \begin{eqnarray*} \begin{array}{l} {d_{Iv}}\left( {\widetilde {{H_1}}, \widetilde {{H_2}}} \right) = \sqrt {\frac{1}{{4m}}\sum\limits_{j = 1}^m {\left[ {\frac{{{{\left( {u_{\widetilde {{H_1}}}^L\left( {{\tau _j}} \right) - u_{\widetilde {{H_2}}}^L\left( {{\tau _j}} \right)} \right)}^2}}}{{u_{\widetilde {{H_1}}}^L\left( {{\tau _j}} \right) + u_{\widetilde {{H_2}}}^L\left( {{\tau _j}} \right)}} + \frac{{{{\left( {u_{\widetilde {{H_1}}}^R\left( {{\tau _j}} \right) - u_{\widetilde {{H_2}}}^R\left( {{\tau _j}} \right)} \right)}^2}}}{{u_{\widetilde {{H_1}}}^R\left( {{\tau _j}} \right) + u_{\widetilde {{H_2}}}^R\left( {{\tau _j}} \right)}} + \frac{{{{\left( {v_{\widetilde {{H_1}}}^L\left( {{\tau _j}} \right) - v_{\widetilde {{H_2}}}^L\left( {{\tau _j}} \right)} \right)}^2}}}{{v_{\widetilde {{H_1}}}^L\left( {{\tau _j}} \right) + v_{\widetilde {{H_2}}}^L\left( {{\tau _j}} \right)}} + \frac{{{{\left( {v_{\widetilde {{H_1}}}^R\left( {{\tau _j}} \right) - v_{\widetilde {{H_2}}}^R\left( {{\tau _j}} \right)} \right)}^2}}}{{v_{\widetilde {{H_1}}}^R\left( {{\tau _j}} \right) + v_{\widetilde {{H_2}}}^R\left( {{\tau _j}} \right)}}} \right]} } \\ \begin{array}{*{20}{c}} {}&{}&{}&{} \end{array} \le \sqrt {\frac{1}{{4m}}\sum\limits_{j = 1}^m {\left[ {\frac{{{{\left( {u_{\widetilde {{H_1}}}^L\left( {{\tau _j}} \right) + u_{\widetilde {{H_2}}}^L\left( {{\tau _j}} \right)} \right)}^2}}}{{u_{\widetilde {{H_1}}}^L\left( {{\tau _j}} \right) + u_{\widetilde {{H_2}}}^L\left( {{\tau _j}} \right)}} + \frac{{{{\left( {u_{\widetilde {{H_1}}}^R\left( {{\tau _j}} \right) + u_{\widetilde {{H_2}}}^R\left( {{\tau _j}} \right)} \right)}^2}}}{{u_{\widetilde {{H_1}}}^R\left( {{\tau _j}} \right) + u_{\widetilde {{H_2}}}^R\left( {{\tau _j}} \right)}} + \frac{{{{\left( {v_{\widetilde {{H_1}}}^L\left( {{\tau _j}} \right) + v_{\widetilde {{H_2}}}^L\left( {{\tau _j}} \right)} \right)}^2}}}{{v_{\widetilde {{H_1}}}^L\left( {{\tau _j}} \right) + v_{\widetilde {{H_2}}}^L\left( {{\tau _j}} \right)}} + \frac{{{{\left( {v_{\widetilde {{H_1}}}^R\left( {{\tau _j}} \right) + v_{\widetilde {{H_2}}}^R\left( {{\tau _j}} \right)} \right)}^2}}}{{v_{\widetilde {{H_1}}}^R\left( {{\tau _j}} \right) + v_{\widetilde {{H_2}}}^R\left( {{\tau _j}} \right)}}} \right]} } \\ \begin{array}{*{20}{c}} {}&{}&{}&{} \end{array} = \sqrt {\frac{1}{{4m}}\sum\limits_{j = 1}^m {\left( {u_{\widetilde {{H_1}}}^L\left( {{\tau _j}} \right) + u_{\widetilde {{H_2}}}^L\left( {{\tau _j}} \right) + u_{\widetilde {{H_1}}}^R\left( {{\tau _j}} \right) + u_{\widetilde {{H_2}}}^R\left( {{\tau _j}} \right) + v_{\widetilde {{H_1}}}^L\left( {{\tau _j}} \right) + v_{\widetilde {{H_2}}}^L\left( {{\tau _j}} \right) + v_{\widetilde {{H_1}}}^R\left( {{\tau _j}} \right) + v_{\widetilde {{H_2}}}^R\left( {{\tau _j}} \right)} \right)} } \\ \begin{array}{*{20}{c}} {}&{}&{}&{} \end{array} \le \sqrt {\frac{1}{{4m}}\sum\limits_{j = 1}^m {\left( 4 \right)} } \\ \begin{array}{*{20}{c}} {}&{}&{}&{} \end{array} = 1. \end{array} \end{eqnarray*}

Consequently, the formula 0 \le {d_{Iv}}\left( {\widetilde {{H_1}}, \widetilde {{H_2}}} \right) \le 1 is proved.

Proof. (4) If \widetilde {{H_1}} \subseteq \widetilde {{H_2}} \subseteq \widetilde {{H_3}} , then {d_{Iv}}\left({\widetilde {{H_1}}, \widetilde {{H_2}}} \right) \le {d_{Iv}}\left({\widetilde {{H_1}}, \widetilde {{H_3}}} \right) and {d_{Iv}}\left({\widetilde {{H_2}}, \widetilde {{H_3}}} \right) \le {d_{Iv}}\left({\widetilde {{H_1}}, \widetilde {{H_3}}} \right) .

When \widetilde {{H_1}} \subseteq \widetilde {{H_2}} \subseteq \widetilde {{H_3}} , we have

\begin{eqnarray} u_{\widetilde{H_1}}^{L} \le u_{\widetilde{H_2}}^{L} \le u_{\widetilde{H_3}}^{L}, u_{\widetilde{H_1}}^{R} \le u_{\widetilde{H_2}}^{R} \le u_{\widetilde{H_3}}^{R}, v_{\widetilde{H_3}}^{L} \le v_{\widetilde{H_2}}^{L} \le v_{\widetilde{H_1}}^{L}, v_{\widetilde{H_3}}^{R} \le v_{\widetilde{H_2}}^{R} \le v_{\widetilde{H_1}}^{R}. \end{eqnarray} (3.4)

    For 0 \le {\eta _k} \le 1\left({k = 1, 2, 3, 4} \right) and 0 \le {\eta _1} + {\eta _3} \le 1 , 0 \le {\eta _2} + {\eta _4} \le 1 , a function g\left({{x_1}, {x_2}, {x_3}, {x_4}} \right) could be established as below:

    \begin{eqnarray} g\left( {{x_1}, {x_2}, {x_3}, {x_4}} \right) = \sum\limits_{k = 1}^4 {\frac{{{{\left( {{x_k} - {\eta _k}} \right)}^2}}}{{{x_k} + {\eta _k}}}} , {x_k} \in \left[ {0, 1} \right]; \end{eqnarray} (3.5)

then, the partial derivative of the function g\left( {{x_1}, {x_2}, {x_3}, {x_4}} \right) with respect to {x_k} is calculated as follows:

    \begin{eqnarray} \frac{{\partial g}}{{\partial {x_k}}} = \frac{{\left( {{x_k} - {\eta _k}} \right)\left( {{x_k} + 3{\eta _k}} \right)}}{{{{\left( {{x_k} + {\eta _k}} \right)}^2}}}; \end{eqnarray} (3.6)

from the partial derivative in Eq (3.6), one has

\begin{eqnarray} \left\{ \begin{array}{ll} \frac{\partial g}{\partial x_k} \ge 0, & 0 \le \eta_k \le x_k \le 1, \\ \frac{\partial g}{\partial x_k} < 0, & 0 \le x_k < \eta_k \le 1. \end{array} \right. \end{eqnarray} (3.7)

    Therefore, when {x_k} \ge {\eta _k} , g\left({{x_1}, {x_2}, {x_3}, {x_4}} \right) is a monotonically increasing function for {x_k} , and when {x_k} \le {\eta _k} , g\left({{x_1}, {x_2}, {x_3}, {x_4}} \right) is a monotonically decreasing function for {x_k} .

    Let {\eta _1} = u_{\widetilde {{H_1}}}^L , {\eta _2} = u_{\widetilde {{H_1}}}^R , {\eta _3} = v_{\widetilde {{H_1}}}^L and {\eta _4} = v_{\widetilde {{H_1}}}^R .

When \widetilde {{H_1}} \subseteq \widetilde {{H_2}} \subseteq \widetilde {{H_3}} , we have

    \begin{eqnarray*} {\eta _1} = u_{\widetilde {{H_1}}}^L \le u_{\widetilde {{H_2}}}^L \le u_{\widetilde {{H_3}}}^L, {\eta _2} = u_{\widetilde {{H_1}}}^R \le u_{\widetilde {{H_2}}}^R \le u_{\widetilde {{H_3}}}^R, v_{\widetilde {{H_3}}}^L \le v_{\widetilde {{H_2}}}^L \le v_{\widetilde {{H_1}}}^L = {\eta _3}, v_{\widetilde {{H_3}}}^R \le v_{\widetilde {{H_2}}}^R \le v_{\widetilde {{H_1}}}^R = {\eta _4}. \end{eqnarray*}

    Because g\left({{x_1}, {x_2}, {x_3}, {x_4}} \right) is monotonically increasing when {x_1} \ge {\eta _1} , if u_{\widetilde {{H_3}}}^L \ge u_{\widetilde {{H_2}}}^L , one has

    \begin{eqnarray} g\left( {u_{\widetilde {{H_3}}}^L, u_{\widetilde {{H_3}}}^R, v_{\widetilde {{H_3}}}^L, v_{\widetilde {{H_3}}}^R} \right) \ge g\left( {u_{\widetilde {{H_2}}}^L, u_{\widetilde {{H_3}}}^R, v_{\widetilde {{H_3}}}^L, v_{\widetilde {{H_3}}}^R} \right); \end{eqnarray} (3.8)

    similarly, because g\left({{x_1}, {x_2}, {x_3}, {x_4}} \right) is monotonically increasing when {x_2} \ge {\eta _2} , if u_{\widetilde {{H_3}}}^R \ge u_{\widetilde {{H_2}}}^R , one obtains

    \begin{eqnarray} g\left( {u_{\widetilde {{H_2}}}^L, u_{\widetilde {{H_3}}}^R, v_{\widetilde {{H_3}}}^L, v_{\widetilde {{H_3}}}^R} \right) \ge g\left( {u_{\widetilde {{H_2}}}^L, u_{\widetilde {{H_2}}}^R, v_{\widetilde {{H_3}}}^L, v_{\widetilde {{H_3}}}^R} \right); \end{eqnarray} (3.9)

    meanwhile, because g\left({{x_1}, {x_2}, {x_3}, {x_4}} \right) is monotonically decreasing when {x_3} \le {\eta _3} , if v_{\widetilde {{H_3}}}^L \le v_{\widetilde {{H_2}}}^L , one has

    \begin{eqnarray} g\left( {u_{\widetilde {{H_2}}}^L, u_{\widetilde {{H_2}}}^R, v_{\widetilde {{H_3}}}^L, v_{\widetilde {{H_3}}}^R} \right) \ge g\left( {u_{\widetilde {{H_2}}}^L, u_{\widetilde {{H_2}}}^R, v_{\widetilde {{H_2}}}^L, v_{\widetilde {{H_3}}}^R} \right); \end{eqnarray} (3.10)

    besides, because g\left({{x_1}, {x_2}, {x_3}, {x_4}} \right) is monotonically decreasing when {x_4} \le {\eta _4} , if v_{\widetilde {{H_3}}}^R \le v_{\widetilde {{H_2}}}^R , one has

    \begin{eqnarray} g\left( {u_{\widetilde {{H_2}}}^L, u_{\widetilde {{H_2}}}^R, v_{\widetilde {{H_2}}}^L, v_{\widetilde {{H_3}}}^R} \right) \ge g\left( {u_{\widetilde {{H_2}}}^L, u_{\widetilde {{H_2}}}^R, v_{\widetilde {{H_2}}}^L, v_{\widetilde {{H_2}}}^R} \right). \end{eqnarray} (3.11)

    Combining Eqs (3.8)–(3.11), one has

    \begin{eqnarray} g\left( {u_{\widetilde {{H_3}}}^L, u_{\widetilde {{H_3}}}^R, v_{\widetilde {{H_3}}}^L, v_{\widetilde {{H_3}}}^R} \right) \ge g\left( {u_{\widetilde {{H_2}}}^L, u_{\widetilde {{H_2}}}^R, v_{\widetilde {{H_2}}}^L, v_{\widetilde {{H_2}}}^R} \right), \end{eqnarray} (3.12)

    that is,

    \frac{{{{\left( {u_{\widetilde {{H_2}}}^L - u_{\widetilde {{H_1}}}^L} \right)}^2}}}{{u_{\widetilde {{H_2}}}^L + u_{\widetilde {{H_1}}}^L}} + \frac{{{{\left( {u_{\widetilde {{H_2}}}^R - u_{\widetilde {{H_1}}}^R} \right)}^2}}}{{u_{\widetilde {{H_2}}}^R + u_{\widetilde {{H_1}}}^R}} + \frac{{{{\left( {v_{\widetilde {{H_2}}}^L - v_{\widetilde {{H_1}}}^L} \right)}^2}}}{{v_{\widetilde {{H_2}}}^L + v_{\widetilde {{H_1}}}^L}} + \frac{{{{\left( {v_{\widetilde {{H_2}}}^R - v_{\widetilde {{H_1}}}^R} \right)}^2}}}{{v_{\widetilde {{H_2}}}^R + v_{\widetilde {{H_1}}}^R}} \\ \le \frac{{{{\left( {u_{\widetilde {{H_3}}}^L - u_{\widetilde {{H_1}}}^L} \right)}^2}}}{{u_{\widetilde {{H_3}}}^L + u_{\widetilde {{H_1}}}^L}} + \frac{{{{\left( {u_{\widetilde {{H_3}}}^R - u_{\widetilde {{H_1}}}^R} \right)}^2}}}{{u_{\widetilde {{H_3}}}^R + u_{\widetilde {{H_1}}}^R}} + \frac{{{{\left( {v_{\widetilde {{H_3}}}^L - v_{\widetilde {{H_1}}}^L} \right)}^2}}}{{v_{\widetilde {{H_3}}}^L + v_{\widetilde {{H_1}}}^L}} + \frac{{{{\left( {v_{\widetilde {{H_3}}}^R - v_{\widetilde {{H_1}}}^R} \right)}^2}}}{{v_{\widetilde {{H_3}}}^R + v_{\widetilde {{H_1}}}^R}}. (3.13)

    Consequently, we have

    \begin{eqnarray*} \begin{array}{l} {d_{Iv}}\left( {\widetilde {{H_1}}, \widetilde {{H_2}}} \right) = \sqrt {\frac{1}{{4m}}\sum\limits_{j = 1}^m {\left( {\frac{{{{\left( {u_{\widetilde {{H_2}}}^L\left( {{\tau _j}} \right) - u_{\widetilde {{H_1}}}^L\left( {{\tau _j}} \right)} \right)}^2}}}{{u_{\widetilde {{H_2}}}^L\left( {{\tau _j}} \right) + u_{\widetilde {{H_1}}}^L\left( {{\tau _j}} \right)}} + \frac{{{{\left( {u_{\widetilde {{H_2}}}^R\left( {{\tau _j}} \right) - u_{\widetilde {{H_1}}}^R\left( {{\tau _j}} \right)} \right)}^2}}}{{u_{\widetilde {{H_2}}}^R\left( {{\tau _j}} \right) + u_{\widetilde {{H_1}}}^R\left( {{\tau _j}} \right)}} + \frac{{{{\left( {v_{\widetilde {{H_2}}}^L\left( {{\tau _j}} \right) - v_{\widetilde {{H_1}}}^L\left( {{\tau _j}} \right)} \right)}^2}}}{{v_{\widetilde {{H_2}}}^L\left( {{\tau _j}} \right) + v_{\widetilde {{H_1}}}^L\left( {{\tau _j}} \right)}} + \frac{{{{\left( {v_{\widetilde {{H_2}}}^R\left( {{\tau _j}} \right) - v_{\widetilde {{H_1}}}^R\left( {{\tau _j}} \right)} \right)}^2}}}{{v_{\widetilde {{H_2}}}^R\left( {{\tau _j}} \right) + v_{\widetilde {{H_1}}}^R\left( {{\tau _j}} \right)}}} \right)} } \\ \begin{array}{*{20}{c}} {}&{}&{} \end{array}\begin{array}{*{20}{c}} {}&{} \end{array} \le \sqrt {\frac{1}{{4m}}\sum\limits_{j = 1}^m {\left( {\frac{{{{\left( {u_{\widetilde {{H_3}}}^L\left( {{\tau _j}} \right) - u_{\widetilde {{H_1}}}^L\left( {{\tau _j}} \right)} \right)}^2}}}{{u_{\widetilde {{H_3}}}^L\left( {{\tau _j}} \right) + u_{\widetilde {{H_1}}}^L\left( {{\tau _j}} \right)}} + \frac{{{{\left( {u_{\widetilde {{H_3}}}^R\left( {{\tau _j}} \right) - u_{\widetilde {{H_1}}}^R\left( {{\tau _j}} \right)} \right)}^2}}}{{u_{\widetilde {{H_3}}}^R\left( {{\tau _j}} \right) + u_{\widetilde {{H_1}}}^R\left( {{\tau _j}} \right)}} + \frac{{{{\left( {v_{\widetilde {{H_3}}}^L\left( {{\tau _j}} \right) - v_{\widetilde {{H_1}}}^L\left( {{\tau _j}} \right)} \right)}^2}}}{{v_{\widetilde {{H_3}}}^L\left( {{\tau _j}} \right) + v_{\widetilde {{H_1}}}^L\left( {{\tau _j}} \right)}} + \frac{{{{\left( {v_{\widetilde {{H_3}}}^R\left( {{\tau _j}} \right) - v_{\widetilde {{H_1}}}^R\left( {{\tau _j}} \right)} \right)}^2}}}{{v_{\widetilde {{H_3}}}^R\left( {{\tau _j}} \right) + v_{\widetilde {{H_1}}}^R\left( {{\tau _j}} \right)}}} \right)} } \\ \begin{array}{*{20}{c}} {}&{}&{} \end{array}\begin{array}{*{20}{c}} {}&{} \end{array} = {d_{Iv}}\left( {\widetilde {{H_1}}, \widetilde {{H_3}}} \right). \end{array} \end{eqnarray*}

    Hence, {d_{Iv}}\left({\widetilde {{H_1}}, \widetilde {{H_2}}} \right) \le {d_{Iv}}\left({\widetilde {{H_1}}, \widetilde {{H_3}}} \right) is proved.

    Similarly, {d_{Iv}}\left({\widetilde {{H_2}}, \widetilde {{H_3}}} \right) \le {d_{Iv}}\left({\widetilde {{H_1}}, \widetilde {{H_3}}} \right) could be proved, too.

    Definition 3.3. Specifically, for any two IvIFNs \widetilde {{\beta _i}} and \widetilde {{\beta _j}} , the distance measure between \widetilde {{\beta _i}} and \widetilde {{\beta _j}} could be given by Eq (3.14).

    \begin{eqnarray} {d_{Iv}}\left( {\widetilde {{\beta _i}}, \widetilde {{\beta _j}}} \right) = \sqrt {\frac{1}{4}\left[ {\frac{{{{\left( {u_{\widetilde {{\beta _i}}}^L - u_{\widetilde {{\beta _j}}}^L} \right)}^2}}}{{u_{\widetilde {{\beta _i}}}^L + u_{\widetilde {{\beta _j}}}^L}} + \frac{{{{\left( {u_{\widetilde {{\beta _i}}}^R - u_{\widetilde {{\beta _j}}}^R} \right)}^2}}}{{u_{\widetilde {{\beta _i}}}^R + u_{\widetilde {{\beta _j}}}^R}} + \frac{{{{\left( {v_{\widetilde {{\beta _i}}}^L - v_{\widetilde {{\beta _{j}}}}^L} \right)}^2}}}{{v_{\widetilde {{\beta _i}}}^L + v_{\widetilde {{\beta _j}}}^L}} + \frac{{{{\left( {v_{\widetilde {{\beta _i}}}^R - v_{\widetilde {{\beta _j}}}^R} \right)}^2}}}{{v_{\widetilde {{\beta _i}}}^R + v_{\widetilde {{\beta _j}}}^R}}} \right]}. \end{eqnarray} (3.14)

Example 3.1. There are three IvIFSs: \widetilde{H_1}=\{\langle[0,0],[1,1]\rangle\}, \widetilde{H_2}=\{\langle[0.35,0.55],[0.25,0.35]\rangle\} and \widetilde{H_3}=\{\langle[1,1],[0,0]\rangle\}. According to Definition 2.2, it holds that \widetilde {{H_1}} \subseteq \widetilde {{H_2}} \subseteq \widetilde {{H_3}} .

    With the proposed IvIFTD distance measure, one has the following:

    {d_{Iv}}\left({\widetilde {{H_1}}, \widetilde {{H_2}}} \right) = \sqrt {\frac{1}{4} \times \left({\frac{{{{0.35}^2}}}{{0.35}} + \frac{{{{0.55}^2}}}{{0.55}} + \frac{{{{\left({1 - 0.25} \right)}^2}}}{{1 + 0.25}} + \frac{{{{\left({1 - 0.35} \right)}^2}}}{{1 + 0.35}}} \right)} = 0.6448 ,

    {d_{Iv}}\left({\widetilde {{H_2}}, \widetilde {{H_3}}} \right) = \sqrt {\frac{1}{4} \times \left({\frac{{{{\left({0.35 - 1} \right)}^2}}}{{0.35 + 1}} + \frac{{{{\left({0.55 - 1} \right)}^2}}}{{0.55 + 1}} + \frac{{{{0.25}^2}}}{{0.25}} + \frac{{{{0.35}^2}}}{{0.35}}} \right)} = 0.5108 ,

    {d_{Iv}}\left({\widetilde {{H_1}}, \widetilde {{H_3}}} \right) = \sqrt {\frac{1}{4} \times \left({\frac{{{1^2}}}{1} + \frac{{{1^2}}}{1} + \frac{{{1^2}}}{1} + \frac{{{1^2}}}{1}} \right)} = 1 .

    Hence, we have that {d_{Iv}}\left({\widetilde {{H_1}}, \widetilde {{H_2}}} \right) \le {d_{Iv}}\left({\widetilde {{H_1}}, \widetilde {{H_3}}} \right) and {d_{Iv}}\left({\widetilde {{H_2}}, \widetilde {{H_3}}} \right) \le {d_{Iv}}\left({\widetilde {{H_1}}, \widetilde {{H_3}}} \right) .

Example 3.2. Suppose that there are two IvIFSs \widetilde {{H_4}} and \widetilde {{H_5}} , as follows:

\widetilde {{H_4}} = \left\{ {\langle {{x_1}, \left[ {0.55, 0.67} \right], \left[ {0.1, 0.28} \right]} \rangle, \langle {{x_2}, \left[ {0.26, 0.37} \right], \left[ {0.55, 0.63} \right]} \rangle } \right\}.
\widetilde {{H_5}} = \left\{ {\langle {{x_1}, \left[ {0.7, 0.8} \right], \left[ {0.15, 0.2} \right]} \rangle, \langle {{x_2}, \left[ {0.61, 0.71} \right], \left[ {0, 0.1} \right]} \rangle } \right\}.

The distance measure for \left( {\widetilde {{H_4}}, \widetilde {{H_5}}} \right) is calculated by using the proposed distance measure, as follows:

\begin{eqnarray*} {d_{Iv}}\left( {\widetilde {{H_4}}, \widetilde {{H_5}}} \right) = \sqrt {\frac{1}{{4 \times 2}}\left( {\frac{{{{\left( {0.55 - 0.7} \right)}^2}}}{{0.55 + 0.7}} + \frac{{{{\left( {0.67 - 0.8} \right)}^2}}}{{0.67 + 0.8}} + \frac{{{{\left( {0.1 - 0.15} \right)}^2}}}{{0.1 + 0.15}} + \frac{{{{\left( {0.28 - 0.2} \right)}^2}}}{{0.28 + 0.2}}} \right. + \left. {\frac{{{{\left( {0.26 - 0.61} \right)}^2}}}{{0.26 + 0.61}} + \frac{{{{\left( {0.37 - 0.71} \right)}^2}}}{{0.37 + 0.71}} + \frac{{{{\left( {0.55} \right)}^2}}}{{0.55}} + \frac{{{{\left( {0.63 - 0.1} \right)}^2}}}{{0.63 + 0.1}}} \right)} = 0.39. \end{eqnarray*}

    To demonstrate the superiority of the IvIFTD distance measure over some previous measures, some examples are presented below.

    Example 4.1. Assume that there are three IvIFSs, as below:

    \widetilde{H_6}=\{\langle[0.2,0.3],[0.3,0.4]\rangle\}, \widetilde{H_7}=\{\langle[0.25,0.35],[0.35,0.45]\rangle\}, \widetilde{H_8}=\{\langle[0.25,0.35],[0.25,0.35]\rangle\}

We note that \widetilde {{H_7}} \ne \widetilde {{H_8}} , so the distance measures for \left({\widetilde {{H_6}}, \widetilde {{H_7}}} \right) and \left({\widetilde {{H_6}}, \widetilde {{H_8}}} \right) should be different.

Table 1 lists the values obtained with the different distance methods. The results in bold show that the Hamming distance, Euclidean distance and some other existing distance methods yield the same result for \left({\widetilde {{H_6}}, \widetilde {{H_7}}} \right) and \left({\widetilde {{H_6}}, \widetilde {{H_8}}} \right) . However, the values calculated by using the exponential distance [18] and our proposed distance method are consistent with the intuitive experience and theoretical requirements, i.e., d_{_{\exp }}^H\left({\widetilde {{H_6}}, \widetilde {{H_7}}} \right) < d_{_{\exp }}^H\left({\widetilde {{H_6}}, \widetilde {{H_8}}} \right) , d_{_{\exp }}^E\left({\widetilde {{H_6}}, \widetilde {{H_7}}} \right) < d_{_{\exp }}^E\left({\widetilde {{H_6}}, \widetilde {{H_8}}} \right) and {d_{Iv}}\left({\widetilde {{H_6}}, \widetilde {{H_7}}} \right) < {d_{Iv}}\left({\widetilde {{H_6}}, \widetilde {{H_8}}} \right) .

    Table 1.  Comparison of distance measures for Example 4.1.
    Pair {d_H} {d_E} {d_{HH}} {d_{HE}} d_p^t {d_L} d_{_{\exp }}^H d_{_{\exp }}^E {d_{Iv}}
\left({\widetilde {{H_6}}, \widetilde {{H_7}}} \right) \textbf{0.05} \textbf{0.05} \textbf{0.05} \textbf{0.05} 0.0167 \textbf{0.05} 0.0387 0.0233 0.0636
    \left({\widetilde {{H_6}}, \widetilde {{H_8}}} \right) \textbf{0.05} \textbf{0.05} \textbf{0.05} \textbf{0.05} 0.05 \textbf{0.05} 0.0582 0.0402 0.0657


    Example 4.2. We discuss the distance measure for two IvIFNs \widetilde {{H_9}} = \left\{ \langle \left[{0, 0} \right], \left. \left[{0, 0} \right] \rangle \right. \right\} and \widetilde {{H_{10}}} = \left\{ \langle \left[{0.5, 0.5} \right], \left. \left[{0.5, 0.5} \right] \rangle \right. \right\} .

Obviously, \widetilde {{H_9}} \ne \widetilde {{H_{10}}} . Therefore, the distance between \widetilde {{H_9}} and \widetilde {{H_{10}}} should not be 0. However, using the exponential distance measure described in [18], we have that d_{_{\exp }}^H\left({\widetilde {{H_9}}, \widetilde {{H_{10}}}} \right) = d_{_{\exp }}^E\left({\widetilde {{H_9}}, \widetilde {{H_{10}}}} \right) = 0 , as shown in Table 2, which demonstrates that the exponential distance method is limited in this example. Alternatively, the result is 0.7071 with the proposed IvIFTD distance measure, which is in line with intuition.

    Table 2.  Distance measures for Example 4.2.
    Pair {d_H} {d_E} {d_{HH}} {d_{HE}} d_p^t {d_L} d_{_{\exp }}^H d_{_{\exp }}^E {d_{Iv}}
    \left({\widetilde {{H_9}}, \widetilde {{H_{10}}}} \right) 0.5 0.5 0.5 0.5 0.1667 0.5 \textbf{0} \textbf{0} 0.7071


    Example 4.3. In the case of three IvIFSs \widetilde {{H_{11}}}, \widetilde {{H_{12}}} and \widetilde {{H_{13}}} , we have

    \widetilde{H_{11}}=\{\langle[0.3,0.4],[0.2,0.3]\rangle\}, \widetilde{H_{12}}=\{\langle[0.5,0.55],[0.35,0.4]\rangle\}, \widetilde{H_{13}}=\{\langle[0.5,0.55],[0.2,0.3]\rangle\}

    In the case of the distance measures for the pair of IvIFSs \left({\widetilde {{H_{11}}}, \widetilde {{H_{12}}}} \right) and \left({\widetilde {{H_{11}}}, \widetilde {{H_{13}}}} \right) , the distance between the pair \left({\widetilde {{H_{11}}}, \widetilde {{H_{12}}}} \right) should be larger than the distance between \left({\widetilde {{H_{11}}}, \widetilde {{H_{13}}}} \right) from an intuitive perspective.

However, regarding the results for the different distance measures in Table 3, the distance value for d_p^t [27] is d_p^t\left({\widetilde {{H_{11}}}, \widetilde {{H_{12}}}} \right) = 0.05 < d_p^t\left({\widetilde {{H_{11}}}, \widetilde {{H_{13}}}} \right) = 0.0875 , and those for d_{_{\exp }}^H and d_{_{\exp }}^E [18] are, respectively, d_{_{\exp }}^H\left({\widetilde {{H_{11}}}, \widetilde {{H_{12}}}} \right) = 0.0696 < d_{_{\exp }}^H\left({\widetilde {{H_{11}}}, \widetilde {{H_{13}}}} \right) = 0.1186 and d_{_{\exp }}^E\left({\widetilde {{H_{11}}}, \widetilde {{H_{12}}}} \right) = 0.0438 < d_{_{\exp }}^E\left({\widetilde {{H_{11}}}, \widetilde {{H_{13}}}} \right) = 0.0734 . These results are counterintuitive and differ from those for the other distance measures, i.e., {d_H} , {d_E} [22], {d_{HH}} , {d_{HE}} [26] and {d_L} [17], as well as the new IvIFTD distance measure. Therefore, the existing distance methods d_p^t , d_{_{\exp }}^H and d_{_{\exp }}^E are invalid in this example. Alternatively, {d_H} , {d_E} , {d_{HH}} , {d_{HE}} , {d_L} and our proposed new IvIFTD can work well in this situation.

    Table 3.  Distance measures for Example 4.3.
    Pair {d_H} {d_E} {d_{HH}} {d_{HE}} d_p^t {d_L} d_{_{\exp }}^H d_{_{\exp }}^E {d_{Iv}}
    \left({{\widetilde {{H_{11}}}, \widetilde {{H_{12}}}} }\right) 0.15 0.1541 0.175 0.1768 \textbf{0.05} 0.1527 \textbf{0.0696} \textbf{0.0438} 0.1795
    \left({\widetilde {{H_{11}}}, \widetilde {{H_{13}}}} \right) 0.0875 0.125 0.1 0.1414 \textbf{0.0875} 0.1242 \textbf{0.1186} \textbf{0.0734} 0.1357


Example 4.4. Mr. X needs to choose a house from the alternative set \left\{ {{p_i}|i = 1, 2, \cdots, 6} \right\} , evaluated on five equally weighted attributes \left\{ {{a_1}, {a_2}, \cdots, {a_5}} \right\} . The relevant decision-making information in the IvIF setting is supplied in the following matrix, and it is assumed that the ideal alternative is {p_0} (Table 4). The best choice is determined by adopting the proposed IvIFTD distance measure:

    \begin{eqnarray*} {M_{6 \times 5}} \\ = \left[ {\begin{array}{*{20}{c}} {\left( {\left[ {0.7, 0.8} \right], \left[ {0.1, 0.2} \right]} \right)}&{\left( {\left[ {0.82, 0.84} \right], \left[ {0.05, 0.15} \right]} \right)}&{\left( {\left[ {0.52, 0.72} \right], \left[ {0.18, 0.25} \right]} \right)}&{\left( {\left[ {0.55, 0.6} \right], \left[ {0.3, 0.35} \right]} \right)}&{\left( {\left[ {0.7, 0.8} \right], \left[ {0.1, 0.2} \right]} \right)}\\ {\left( {\left[ {0.85, 0.9} \right], \left[ {0.05, 0.1} \right]} \right)}&{\left( {\left[ {0.7, 0.74} \right], \left[ {0.17, 0.25} \right]} \right)}&{\left( {\left[ {0.1, 0.23} \right], \left[ {0.6, 0.7} \right]} \right)}&{\left( {\left[ {0.15, 0.25} \right], \left[ {0.2, 0.3} \right]} \right)}&{\left( {\left[ {0.05, 0.1} \right], \left[ {0.65, 0.8} \right]} \right)}\\ {\left( {\left[ {0.5, 0.7} \right], \left[ {0.2, 0.3} \right]} \right)}&{\left( {\left[ {0.86, 0.9} \right], \left[ {0.04, 0.1} \right]} \right)}&{\left( {\left[ {0.6, 0.7} \right], \left[ {0.2, 0.28} \right]} \right)}&{\left( {\left[ {0.2, 0.3} \right], \left[ {0.5, 0.6} \right]} \right)}&{\left( {\left[ {0.65, 0.8} \right], \left[ {0.15, 0.2} \right]} \right)}\\ {\left( {\left[ {0.4, 0.6} \right], \left[ {0.3, 0.4} \right]} \right)}&{\left( {\left[ {0.52, 0.64} \right], \left[ {0.23, 0.35} \right]} \right)}&{\left( {\left[ {0.72, 0.78} \right], \left[ {0.11, 0.21} \right]} \right)}&{\left( {\left[ {0.3, 0.5} \right], \left[ {0.4, 0.5} \right]} \right)}&{\left( {\left[ {0.8, 0.9} \right], \left[ {0.05, 0.1} \right]} \right)}\\ {\left( {\left[ {0.6, 0.8} \right], \left[ {0.15, 0.2} \right]} \right)}&{\left( {\left[ {0.3, 0.35} \right], \left[ {0.5, 0.65} \right]} \right)}&{\left( {\left[ {0.58, 0.68} \right], \left[ {0.18, 0.3} \right]} \right)}&{\left( {\left[ {0.68, 0.77} \right], \left[ {0.1, 0.2} \right]} \right)}&{\left( {\left[ {0.72, 0.85} \right], \left[ {0.1, 0.15} \right]} \right)}\\ {\left( {\left[ {0.3, 0.5} \right], \left[ {0.3, 0.45} \right]} \right)}&{\left( {\left[ {0.5, 0.68} \right], \left[ {0.25, 0.3} \right]} \right)}&{\left( {\left[ {0.33, 0.43} \right], \left[ {0.5, 0.55} \right]} \right)}&{\left( {\left[ {0.62, 0.65} \right], \left[ {0.15, 0.35} \right]} \right)}&{\left( {\left[ {0.84, 0.93} \right], \left[ {0.04, 0.07} \right]} \right)} \end{array}} \right]. \end{eqnarray*}
    Table 4.  The ideal solution for Example 4.4.
    {a_1} {a_2} {a_3} {a_4} {a_5}
    {p_0} \left({\left[{1, 1} \right], \left[{0, 0} \right]} \right) \left({\left[{1, 1} \right], \left[{0, 0} \right]} \right) \left({\left[{1, 1} \right], \left[{0, 0} \right]} \right) \left({\left[{1, 1} \right], \left[{0, 0} \right]} \right) \left({\left[{1, 1} \right], \left[{0, 0} \right]} \right)


    The proposed IvIFTD distance measure yielded the distance values presented in Table 5. According to the results for the IvIFTD distance, the ranking order is {p_2} \succ {p_1} \succ {p_5} \succ {p_3} \succ {p_4} \succ {p_6} . Hence, {p_2} is assessed as the best choice. On the one hand, the ranking order is the same as that in [17], which supports the rationality of the proposed IvIFTD distance measure. On the other hand, it differs from the original ranking in [44]: {p_2} \succ {p_1} = {p_5} \succ {p_3} = {p_4} = {p_6} , which also demonstrates the superiority of the proposed IvIFTD distance measure, especially in terms of discriminating information with subtle differences, such as {p_1} and {p_5} , or {p_3} , {p_4} and {p_6} .

    Table 5.  The distance measure for Example 4.4.
    Pair \left({{p_1}, {p_0}} \right) \left({{p_2}, {p_0}} \right) \left({{p_3}, {p_0}} \right) \left({{p_4}, {p_0}} \right) \left({{p_5}, {p_0}} \right) \left({{p_6}, {p_0}} \right)
    {d_{Iv}} 0.3437 0.2908 0.4125 0.4178 0.4031 0.446


    From the above examples, unlike some existing distance measures that fail to work for IvIFSs, the proposed IvIFTD distance measure can effectively reflect the differences among IvIFSs. Therefore, the new IvIFTD distance measure is rational and superior to some existing distance methods.

    Based on the new IvIFTD distance measure for IvIFSs, an improved TOPSIS method is established correspondingly. The specific implementation process for TOPSIS is outlined as follows.

    Step 1. Set the biggest IvIFN \widetilde {{\beta ^ + }} as the PIS and the smallest \widetilde {{\beta ^ - }} as the NIS. Then, for any IvIFN \widetilde {{\beta _k}} , the distance between \widetilde {{\beta _k}} and \widetilde {{\beta ^ + }} ( \widetilde {{\beta ^ - }} ) will be {d_{Iv}}\left({\widetilde {{\beta _k}}, \widetilde {{\beta ^ + }}} \right) ( {d_{Iv}}\left({\widetilde {{\beta _k}}, \widetilde {{\beta ^ - }}} \right) ).

    Step 2. Calculate the relative closeness of the scheme \widetilde {{\beta _k}} with respect to \widetilde {{\beta ^ + }} , given by the following expression:

    \begin{eqnarray} {\rho _k} = \frac{{{d_{Iv}}\left( {\widetilde {{\beta _k}}, \widetilde {{\beta ^ - }}} \right)}}{{{d_{Iv}}\left( {\widetilde {{\beta _k}}, \widetilde {{\beta ^ + }}} \right) + {d_{Iv}}\left( {\widetilde {{\beta _k}}, \widetilde {{\beta ^ - }}} \right)}}. \end{eqnarray} (5.1)

    Step 3. Rank the schemes based on the values of {\rho _k} . The larger the value of {\rho _k} , the better the scheme performs. Therefore, the decision-maker can select the optimal scheme based on the ranking results.
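
    To make Steps 1–3 concrete, the following minimal Python sketch implements the ranking with the relative closeness of Eq (5.1). The IvIFTD distance of Eq (3.14) is passed in as a function argument, since only the closeness and ranking logic is fixed here; the flat tuple representation and all names are our own illustrative choices, not the paper's implementation.

```python
from typing import Callable, Sequence, Tuple

# An IvIFN ([uL, uR], [vL, vR]) stored as a flat tuple (uL, uR, vL, vR).
IvIFN = Tuple[float, float, float, float]

def topsis_rank(schemes: Sequence[IvIFN], pis: IvIFN, nis: IvIFN,
                dist: Callable[[IvIFN, IvIFN], float]) -> list:
    """Rank schemes by the relative closeness of Eq (5.1); larger is better."""
    rho = []
    for beta in schemes:
        d_plus = dist(beta, pis)    # distance to the positive ideal solution
        d_minus = dist(beta, nis)   # distance to the negative ideal solution
        rho.append(d_minus / (d_plus + d_minus))
    # Indices of schemes sorted from best to worst.
    return sorted(range(len(schemes)), key=lambda i: rho[i], reverse=True)
```

    For instance, with the distances later reported in Table 17 for {\textbf{S}_{1}} ( d^+ = 0.2676 , d^- = 0.8589 ), Eq (5.1) gives \rho = 0.8589/(0.2676 + 0.8589) \approx 0.7625 .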

    As mentioned in the introduction, teaching quality in higher education is a hot topic for educational administrators, teachers and students alike. To further examine teaching quality in higher education, a decision-making application was carried out by using the proposed TOPSIS method. This analysis provides valuable information for evaluating teaching quality in higher education.

    With the proposed TOPSIS method and related knowledge in Section 2, a new MADM method for teaching satisfaction evaluation was constructed as shown in Figure 1.

    Figure 1.  The decision-making flowchart for teaching satisfaction.

    Example 6.1. L University needs to determine the teaching satisfaction for four mathematics teaching courses. Assume that there are four experts with relevant knowledge and rich experience. Let S = \left\{ {{{\rm{S}}_{\rm{1}}}{\rm{, }}{{\rm{S}}_2}, {{\rm{S}}_3}, {S_4}} \right\} be the scheme set to be evaluated, let {C_j} ( j = 1, 2, \cdots, m ) be the first-level criteria for this teaching satisfaction evaluation system, and let each {C_j} consist of second-level attributes, i.e., {C_j} = \left\{ {{a_{j1}}, {a_{j2}}, \cdots, {a_{jl}}, \cdots, {a_{jn}}} \right\} . The characteristics of {a_{jl}} are expressed as IvIFNs, that is, {a_{jl}} = \left({\left[{u_{{a_{jl}}}^L, u_{{a_{jl}}}^R} \right], \left[{v_{{a_{jl}}}^L, v_{{a_{jl}}}^R} \right]} \right) . All expert weights are assumed to be equal, but the criteria weights ( {w_{{C_j}}} ) and attribute weights ( {w_{{a_{jl}}}} ) are unknown and need to be determined.

    Based on the existing theoretical research and practical evaluation environment, an index system was constructed as shown in Table 6.

    Table 6.  Teaching satisfaction evaluation system.
    First level: Criterion ({C_j}) Second level: Attribute ({a_{jl}})
    {C_1} : Teaching attitude {a_{11}} : Fully prepared,
    {a_{12}} : Rigorous attitude,
    {a_{13}} : Respectful/patient with students,
    {a_{14}} : Manage the classroom,
    {C_2} : Teaching content {a_{21}} : Correct concept/knowledge,
    {a_{22}} : Explain clearly,
    {a_{23}} : Highlight key and difficult points,
    {a_{24}} : Connection between mathematics and social life,
    {C_3} : Teaching method {a_{31}} : Various methods,
    {a_{32}} : Teaching material,
    {a_{33}} : Participation and interaction,
    {a_{34}} : Focus on inspiration,
    {C_4} : Teaching effect {a_{41}} : Understanding/mastering of knowledge,
    {a_{42}} : Achievement of teaching objective,
    {a_{43}} : Stress the cultivation of comprehensive quality.


    In this study, four experts were invited to evaluate the courses by using IvIFNs. To facilitate the decision-making process for experts, we established evaluation reference criteria by using linguistic variables, which are presented as shown in Table 7.

    Table 7.  Evaluation reference criteria.
    Linguistic variables IvIFNs
    Especially good (EG) \left({\left[{0.9, 0.95} \right], \left[{0.005, 0.01} \right]} \right)
    Very good (VG) \left({\left[{0.778, 0.9} \right], \left[{0.01, 0.05} \right]} \right)
    Good (G) \left({\left[{0.667, 0.778} \right], \left[{0.1, 0.172} \right]} \right)
    Relatively good (RG) \left({\left[{0.556, 0.6675} \right], \left[{0.2, 0.283} \right]} \right)
    Medium (M) \left({\left[{0.445, 0.556} \right], \left[{0.3, 0.394} \right]} \right)
    Relatively bad (RB) \left({\left[{0.334, 0.445} \right], \left[{0.4, 0.505} \right]} \right)
    Bad (B) \left({\left[{0.223, 0.334} \right], \left[{0.5, 0.616} \right]} \right)
    Very bad (VB) \left({\left[{0.1, 0.223} \right], \left[{0.6, 0.72} \right]} \right)
    Especially bad (EB) \left({\left[{0, 0.1} \right], \left[{0.72, 0.9} \right]} \right)

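
    In computations, the reference scale of Table 7 is conveniently kept as a lookup table. A small Python sketch follows; the flat (uL, uR, vL, vR) tuple layout and the constant's name are our own choices:

```python
# Linguistic variables of Table 7, each an IvIFN stored as (uL, uR, vL, vR).
LINGUISTIC_SCALE = {
    "EG": (0.9,   0.95,   0.005, 0.01),
    "VG": (0.778, 0.9,    0.01,  0.05),
    "G":  (0.667, 0.778,  0.1,   0.172),
    "RG": (0.556, 0.6675, 0.2,   0.283),
    "M":  (0.445, 0.556,  0.3,   0.394),
    "RB": (0.334, 0.445,  0.4,   0.505),
    "B":  (0.223, 0.334,  0.5,   0.616),
    "VB": (0.1,   0.223,  0.6,   0.72),
    "EB": (0.0,   0.1,    0.72,  0.9),
}
```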

    The decision-making data from the four experts are shown in Tables 8–11.

    Table 8.  The initial decision-making data by Expert _1 .
    Scheme {a_{11}} {a_{12}} {a_{13}} {a_{14}} {a_{21}} {a_{22}} {a_{23}} {a_{24}} {a_{31}} {a_{32}} {a_{33}} {a_{34}} {a_{41}} {a_{42}} {a_{43}}
    \texttt{S}_{1} G RG G VG EG G G VG G VG G VG G VG RG
    \texttt{S}_{2} RG VG VG RG G VG M G RG G VG M RG G VG
    \texttt{S}_{3} G VG RB M VG G VG G VG RG G G VG G G
    \texttt{S}_{4} VG G VG G RG MG G RG G G RG G MG RG G

    Table 9.  The initial decision-making data by Expert _2 .
    Scheme {a_{11}} {a_{12}} {a_{13}} {a_{14}} {a_{21}} {a_{22}} {a_{23}} {a_{24}} {a_{31}} {a_{32}} {a_{33}} {a_{34}} {a_{41}} {a_{42}} {a_{43}}
    \texttt{S}_{1} VG RG EG VG G G RG G G M G M G G M
    \texttt{S}_{2} G G VG RG G VG G VG G VG VG RG G VG RG
    \texttt{S}_{3} M RG G G VG M G G RG G RG G VG G G
    \texttt{S}_{4} G G VG VG RG G VG RG VG VG VG M VG G G

    Table 10.  The initial decision-making data by Expert _3 .
    Scheme {a_{11}} {a_{12}} {a_{13}} {a_{14}} {a_{21}} {a_{22}} {a_{23}} {a_{24}} {a_{31}} {a_{32}} {a_{33}} {a_{34}} {a_{41}} {a_{42}} {a_{43}}
    \texttt{S}_{1} RG G G RG VG M EG G MG G G G RG RG RG
    \texttt{S}_{2} G RG M G G RG G M G VG M VG EB G VG
    \texttt{S}_{3} G G VG VG G G G VG RG G G RG G G EG
    \texttt{S}_{4} VG VG G VG M VG VG RG G VG VG G VG VG G

    Table 11.  The initial decision-making data by Expert _4 .
    Scheme {a_{11}} {a_{12}} {a_{13}} {a_{14}} {a_{21}} {a_{22}} {a_{23}} {a_{24}} {a_{31}} {a_{32}} {a_{33}} {a_{34}} {a_{41}} {a_{42}} {a_{43}}
    \texttt{S}_{1} G G RG M VG G G G VG G VG VG G MG VG
    \texttt{S}_{2} RG RG G VG RG RG RG RG G RG RG G M RG G
    \texttt{S}_{3} VG G VG G G VG EG G EG VG G VG VG G RG
    \texttt{S}_{4} G G M G VG G M VG G M VG G RG G RB


    After obtaining the evaluation data, the decision-making procedure generally proceeds as follows.

    Step 1. Aggregating the experts' evaluation values for each attribute into one value by using Eq (2.6);

    Step 2. Computing the attribute weights by using the entropy method via Eqs (2.4) and (2.5);

    Step 3. Aggregating attributes' values into corresponding criteria by using Eq (2.6);

    Step 4. Calculating the criteria weights by using the entropy method via Eqs (2.4) and (2.5);

    Step 5. Obtaining a comprehensive evaluation value for ( {\rm{S}}_i ) by using Eq (2.6);

    Step 6. Computing the distance between the scheme and the PIS (NIS) by using Eq (3.14);

    Step 7. Computing the relative closeness value for scheme ( {\rm{S}}_i ) by using Eq (5.1);

    Step 8. Ranking schemes.

    Step 1. Taking attribute {a_{11}} as an example, the evaluation values from the four experts were aggregated into one value by using Eq (2.6). Here, the weights of the experts are equal, i.e., 0.25:

    \begin{eqnarray*} &&\left( {\left[ {1 - {{(1 - 0.667)}^{0.25}}{{(1 - 0.778)}^{0.25}}{{(1 - 0.556)}^{0.25}}{{(1 - 0.667)}^{0.25}}, \; 1 - {{(1 - 0.778)}^{0.25}}{{(1 - 0.9)}^{0.25}}{{(1 - 0.667)}^{0.25}}{{(1 - 0.778)}^{0.25}}} \right],} \right. \\ &&\;\left. {\left[ {{{0.1}^{0.25}} \cdot {{0.01}^{0.25}} \cdot {{0.2}^{0.25}} \cdot {{0.1}^{0.25}}, \; {{0.172}^{0.25}} \cdot {{0.05}^{0.25}} \cdot {{0.283}^{0.25}} \cdot {{0.172}^{0.25}}} \right]} \right) \\ && = \left( {\left[ {0.6767, 0.7987} \right], \left[ {0.0669, 0.1430} \right]} \right). \end{eqnarray*}
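
    This arithmetic can be checked mechanically. The sketch below assumes that Eq (2.6) has the standard IvIFWAA form used in the computation above, i.e., u = 1 - \prod\nolimits_k {{{(1 - {u_k})}^{{w_k}}}} and v = \prod\nolimits_k {v_k^{{w_k}}} componentwise; the function and variable names are illustrative.

```python
import math
from typing import Sequence, Tuple

IvIFN = Tuple[float, float, float, float]  # (uL, uR, vL, vR)

def ivifwaa(values: Sequence[IvIFN], weights: Sequence[float]) -> IvIFN:
    """Weighted averaging of IvIFNs, as in the worked computation above."""
    uL = 1 - math.prod((1 - x[0]) ** w for x, w in zip(values, weights))
    uR = 1 - math.prod((1 - x[1]) ** w for x, w in zip(values, weights))
    vL = math.prod(x[2] ** w for x, w in zip(values, weights))
    vR = math.prod(x[3] ** w for x, w in zip(values, weights))
    return (uL, uR, vL, vR)

# Attribute a_{11} of S_1: the four experts rate G, VG, RG, G (Tables 8-11).
ratings = [(0.667, 0.778,  0.1,  0.172),   # G
           (0.778, 0.9,    0.01, 0.05),    # VG
           (0.556, 0.6675, 0.2,  0.283),   # RG
           (0.667, 0.778,  0.1,  0.172)]   # G
print(ivifwaa(ratings, [0.25] * 4))
# -> approximately (0.6767, 0.7987, 0.0669, 0.1430), matching the text.
```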

    Similarly, the aggregation values for all attributes can be obtained as shown in Table 12.

    Table 12.  Aggregation values for attributes.
    {\textbf{S}_{1}} {\textbf{S}_{2}} {\textbf{S}_{3}} {\textbf{S}_{4}}
    {{a_{11}}} {{\left({\left[{0.677, 0.799} \right], \left[{0.067, 0.143} \right]} \right)}} {{\left({\left[{0.616, 0.728} \right], \left[{0.141, 0.221} \right]} \right)}} {{\left({\left[{0.658, 0.784} \right], \left[{0.074, 0.155} \right]} \right)}} {{\left({\left[{0.728, 0.851} \right], \left[{0.032, 0.093} \right]} \right)}}
    {a_{12}} {{\left({\left[{0.616, 0.728} \right], \left[{0.141, 0.221} \right]} \right)}} {{\left({\left[{0.653, 0.777} \right], \left[{0.08, 0.162} \right]} \right)}} {{\left({\left[{0.677, 0.799} \right], \left[{0.067, 0.143} \right]} \right)}} {{\left({\left[{0.699, 0.818} \right], \left[{0.056, 0.126} \right]} \right)}}
    {a_{13}} {{\left({\left[{0.735, 0.831} \right], \left[{0.056, 0.096} \right]} \right)}} {{\left({\left[{0.691, 0.823} \right], \left[{0.042, 0.114} \right]} \right)}} {{\left({\left[{0.677, 0.813} \right], \left[{0.045, 0.122} \right]} \right)}} {{\left({\left[{0.691, 0.823} \right], \left[{0.042, 0.114} \right]} \right)}}
    {a_{14}} {{\left({\left[{0.668, 0.804} \right], \left[{0.05, 0.129} \right]} \right)}} {{\left({\left[{0.653, 0.777} \right], \left[{0.08, 0.162} \right]} \right)}} {{\left({\left[{0.658, 0.784} \right], \left[{0.074, 0.155} \right]} \right)}} {{\left({\left[{0.728, 0.851} \right], \left[{0.032, 0.093} \right]} \right)}}
    {a_{21}} {{\left({\left[{0.799, 0.897} \right], \left[{0.015, 0.046} \right]} \right)}} {{\left({\left[{0.642, 0.754} \right], \left[{0.119, 0.195} \right]} \right)}} {{\left({\left[{0.728, 0.851} \right], \left[{0.032, 0.093} \right]} \right)}} {{\left({\left[{0.605, 0.735} \right], \left[{0.106, 0.199} \right]} \right)}}
    {a_{22}} {{\left({\left[{0.622, 0.736} \right], \left[{0.132, 0.212} \right]} \right)}} {{\left({\left[{0.686, 0.818} \right], \left[{0.045, 0.119} \right]} \right)}} {{\left({\left[{0.658, 0.784} \right], \left[{0.074, 0.155} \right]} \right)}} {{\left({\left[{0.677, 0.799} \right], \left[{0.074, 0.143} \right]} \right)}}
    {a_{23}} {{\left({\left[{0.735, 0.831} \right], \left[{0.056, 0.088} \right]} \right)}} {{\left({\left[{0.593, 0.708} \right], \left[{0.157, 0.24} \right]} \right)}} {{\left({\left[{0.777, 0.875} \right], \left[{0.027, 0.062} \right]} \right)}} {{\left({\left[{0.691, 0.823} \right], \left[{0.042, 0.1} \right]} \right)}}
    {a_{24}} {{\left({\left[{0.699, 0.818} \right], \left[{0.056, 0.126} \right]} \right)}} {{\left({\left[{0.633, 0.761} \right], \left[{0.088, 0.164} \right]} \right)}} {{\left({\left[{0.699, 0.818} \right], \left[{0.056, 0.126} \right]} \right)}} {{\left({\left[{0.627, 0.754} \right], \left[{0.095, 0.184} \right]} \right)}}
    {a_{31}} {{\left({\left[{0.658, 0.784} \right], \left[{0.074, 0.145} \right]} \right)}} {{\left({\left[{0.642, 0.754} \right], \left[{0.119, 0.195} \right]} \right)}} {{\left({\left[{0.743, 0.847} \right], \left[{0.038, 0.08} \right]} \right)}} {{\left({\left[{0.699, 0.818} \right], \left[{0.056, 0.126} \right]} \right)}}
    {a_{32}} {{\left({\left[{0.658, 0.784} \right], \left[{0.074, 0.145} \right]} \right)}} {{\left({\left[{0.708, 0.835} \right], \left[{0.038, 0.105} \right]} \right)}} {{\left({\left[{0.677, 0.798} \right], \left[{0.067, 0.143} \right]} \right)}} {{\left({\left[{0.691, 0.823} \right], \left[{0.042, 0.114} \right]} \right)}}
    {a_{33}} {{\left({\left[{0.699, 0.818} \right], \left[{0.056, 0.126} \right]} \right)}} {{\left({\left[{0.668, 0.804} \right], \left[{0.05, 0.129} \right]} \right)}} {{\left({\left[{0.642, 0.754} \right], \left[{0.119, 0.195} \right]} \right)}} {{\left({\left[{0.708, 0.835} \right], \left[{0.038, 0.106} \right]} \right)}}
    {a_{34}} {{\left({\left[{0.691, 0.823} \right], \left[{0.042, 0.105} \right]} \right)}} {{\left({\left[{0.633, 0.761} \right], \left[{0.088, 0.176} \right]} \right)}} {{\left({\left[{0.677, 0.799} \right], \left[{0.067, 0.143} \right]} \right)}} {{\left({\left[{0.699, 0.818} \right], \left[{0.056, 0.126} \right]} \right)}}
    {a_{41}} {{\left({\left[{0.642, 0.754} \right], \left[{0.119, 0.195} \right]} \right)}} {{\left({\left[{0.517, 0.633} \right], \left[{0.221, 0.314} \right]} \right)}} {{\left({\left[{0.754, 0.878} \right], \left[{0.017, 0.068} \right]} \right)}} {{\left({\left[{0.583, 0.715} \right], \left[{0.116, 0.217} \right]} \right)}}
    {a_{42}} {{\left({\left[{0.633, 0.761} \right], \left[{0.088, 0.176} \right]} \right)}} {{\left({\left[{0.677, 0.799} \right], \left[{0.067, 0.144} \right]} \right)}} {{\left({\left[{0.667, 0.778} \right], \left[{0.1, 0.172} \right]} \right)}} {{\left({\left[{0.671, 0.835} \right], \left[{0.038, 0.105} \right]} \right)}}
    {a_{43}} {{\left({\left[{0.605, 0.735} \right], \left[{0.105, 0.199} \right]} \right)}} {{\left({\left[{0.708, 0.835} \right], \left[{0.038, 0.105} \right]} \right)}} {{\left({\left[{0.735, 0.831} \right], \left[{0.056, 0.096} \right]} \right)}} {{\left({\left[{0.604, 0.721} \right], \left[{0.141, 0.225} \right]} \right)}}


    Step 2. By using the entropy weight method as given by Eqs (2.4) and (2.5), the attribute weight ({w_{{a_{jl}}}}) at the second level could be obtained as shown in Table 13.

    Table 13.  Attribute weights.
    Criterion Attribute Entropy Weight ( {w_{{a_{jl}}}} )
    C_1 a_{11} 0.386 0.242
    a_{12} 0.404 0.235
    a_{13} 0.318 0.269
    a_{14} 0.365 0.254
    C_2 a_{21} 0.349 0.258
    a_{22} 0.397 0.239
    a_{23} 0.342 0.26
    a_{24} 0.386 0.243
    C_3 a_{31} 0.361 0.249
    a_{32} 0.344 0.255
    a_{33} 0.361 0.249
    a_{34} 0.363 0.248
    C_4 a_{41} 0.299 0.302
    a_{42} 0.178 0.354
    a_{43} 0.199 0.345

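
    Eqs (2.4) and (2.5) are given earlier in the paper and are not restated here; the entropy-to-weight conversion in Table 13 is consistent with the common normalization {w_j} = (1 - {E_j})/\sum\nolimits_k {(1 - {E_k})} , which the following sketch assumes:

```python
def entropy_to_weights(entropies):
    """Assumed standard entropy-weight normalization:
    w_j = (1 - E_j) / sum_k (1 - E_k)."""
    d = [1 - e for e in entropies]
    total = sum(d)
    return [x / total for x in d]

# Entropies of the four attributes under C_1 (Table 13).
print(entropy_to_weights([0.386, 0.404, 0.318, 0.365]))
# -> roughly [0.243, 0.236, 0.270, 0.251], close to the reported weights
#    (0.242, 0.235, 0.269, 0.254); small gaps stem from rounded entropies.
```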

    Step 3. Using the weights from Step 2, the attribute values are aggregated into the corresponding first-level criteria by repeating the aggregation method. The resulting values are listed in Table 14.

    Table 14.  Aggregation values for criteria.
    {\textbf{S}_{1}} {\textbf{S}_{2}} {\textbf{S}_{3}} {\textbf{S}_{4}}
    {{C_{1}}} {{\left({\left[{0.679, 0.795} \right], \left[{0.071, 0.139} \right]} \right)}} {{\left({\left[{0.655, 0.78} \right], \left[{0.077, 0.159} \right]} \right)}} {{\left({\left[{0.668, 0.795} \right], \left[{0.063, 0.143} \right]} \right)}} {{\left({\left[{0.712, 0.837} \right], \left[{0.039, 0.106} \right]} \right)}}
    {{C_{2}}} {{\left({\left[{0.723, 0.832} \right], \left[{0.049, 0.1} \right]} \right)}} {{\left({\left[{0.639, 0.762} \right], \left[{0.094, 0.175} \right]} \right)}} {{\left({\left[{0.721, 0.837} \right], \left[{0.043, 0.102} \right]} \right)}} {{\left({\left[{0.652, 0.781} \right], \left[{0.074, 0.151} \right]} \right)}}
    {{C_{3}}} {{\left({\left[{0.677, 0.803} \right], \left[{0.06, 0.129} \right]} \right)}} {{\left({\left[{0.664, 0.792} \right], \left[{0.066, 0.147} \right]} \right)}} {{\left({\left[{0.664, 0.792} \right], \left[{0.066, 0.147} \right]} \right)}} {{\left({\left[{0.699, 0.824} \right], \left[{0.047, 0.118} \right]} \right)}}
    {{C_{4}}} {{\left({\left[{0.626, 0.75} \right], \left[{0.102, 0.189} \right]} \right)}} {{\left({\left[{0.648, 0.775} \right], \left[{0.078, 0.163} \right]} \right)}} {{\left({\left[{0.719, 0.831} \right], \left[{0.049, 0.106} \right]} \right)}} {{\left({\left[{0.623, 0.767} \right], \left[{0.083, 0.167} \right]} \right)}}


    Step 4. Repeat the entropy weight method; the criteria weight ( {w_{{C_j}}} ) at the first level will be obtained as shown in Table 15.

    Table 15.  Criterion weight.
    Criterion Entropy Weight ( {w_{{C_j}}} )
    {{C_{1}}} 0.3594 0.2529
    {{C_{2}}} 0.3553 0.2546
    {C_{3}} 0.3524 0.2557
    {C_{4}} 0.4003 0.2368


    Step 5. With the criterion weight, we could get the integrated evaluation values shown in Table 16 by using the IvIFWAA operator.

    Table 16.  Integrated evaluation values.
    Scheme Integrated evaluation values ( {\widetilde {{\beta _i}}} )
    {\textbf{S}_{1}} \left({\left[{0.6789, 0.7977} \right], \left[{0.0673, 0.1349} \right]} \right)
    {\textbf{S}_{2}} \left({\left[{0.6517, 0.7774} \right], \left[{0.0783, 0.1606} \right]} \right)
    {\textbf{S}_{3}} \left({\left[{0.699, 0.8170} \right], \left[{0.0545, 0.1201} \right]} \right)
    {\textbf{S}_{4}} \left({\left[{0.6742, 0.8045} \right], \left[{0.0577, 0.133} \right]} \right)


    Steps 6–8. Suppose that \widetilde {{\beta ^ + }} = \left({\left[{1, 1} \right], \left[{0, 0} \right]} \right) and \widetilde {{\beta ^ - }} = \left({\left[{0, 0} \right], \left[{1, 1} \right]} \right) are, respectively, the PIS and NIS for the IvIFS in this example. The relative closeness degree for all schemes is then obtained by using our proposed TOPSIS method, and the teaching satisfaction of all courses is ranked in Table 17.

    Table 17.  The closeness of different schemes.
    Scheme {d_{Iv}}\left({\widetilde {{\beta _i}}, \widetilde {{\beta ^ + }}} \right)/{d_{Iv}}\left({\widetilde {{\beta _i}}, \widetilde {{\beta ^ - }}} \right) {\rho _{Iv}} Ranking
    {\textbf{S}_{1}} 0.2676/0.8589 0.7625 3
    {\textbf{S}_{2}} 0.2917/0.8402 0.7423 4
    {\textbf{S}_{3}} 0.2482/0.8739 0.7788 1
    {\textbf{S}_{4}} 0.2623/0.8634 0.7628 2


    In this evaluation, the teaching course {{\rm{S}}_3} had the highest satisfaction degree, whereas {{\rm{S}}_2} received the lowest. That is, {{\rm{S}}_3} \succ {{\rm{S}}_4} \succ {{\rm{S}}_1} \succ {{\rm{S}}_2} .

    To provide a more objective comparison of the proposed IvIF-TOPSIS method and existing methods, we adopt an example of teaching quality evaluation under the conditions of the IvIFS environment originally presented by Zhao [38]. This example will serve to illustrate the comparative process and showcase the effectiveness of the IvIF-TOPSIS method.

    Example 7.1. We evaluate five schools' teaching quality by using IvIFSs. Five alternatives {{\rm{A}}_i}\left({i = 1, 2, 3, 4, 5} \right) need to be evaluated based on four attributes {{\rm{G}}_j}\left({j = 1, 2, 3, 4} \right) . The weight vector for the four attributes is {w_j} = \left({0.15, 0.35, 0.395, 0.105} \right) , and the decision matrix is

    {M_{5 \times 4} = \left[ {\begin{array}{*{20}{c}} {\left( {\left[ {0.4, 0.5} \right], \left[ {0.3, 0.4} \right]} \right)}&{\left( {\left[ {0.4, 0.6} \right], \left[ {0.2, 0.4} \right]} \right)}&{\left( {\left[ {0.3, 0.4} \right], \left[ {0.4, 0.5} \right]} \right)}&{\left( {\left[ {0.5, 0.6} \right], \left[ {0.1, 0.3} \right]} \right)}\\ {\left( {\left[ {0.5, 0.6} \right], \left[ {0.2, 0.3} \right]} \right)}&{\left( {\left[{0.6, 0.7} \right], \left[ {0.2, 0.3} \right]} \right)}&{\left( {\left[ {0.5, 0.6} \right], \left[ {0.3, 0.4} \right]} \right)}&{\left( {\left[ {0.4, 0.7} \right], \left[ {0.1, 0.2} \right]} \right)}\\ {\left( {\left[ {0.3, 0.5} \right], \left[ {0.3, 0.4} \right]} \right)}&{\left( {\left[ {0.1, 0.3} \right], \left[ {0.5, 0.6} \right]} \right)}&{\left( {\left[ {0.2, 0.5} \right], \left[ {0.4, 0.5} \right]} \right)}&{\left( {\left[ {0.2, 0.3} \right], \left[ {0.4, 0.6} \right]} \right)}\\ {\left( {\left[ {0.2, 0.5} \right], \left[ {0.3, 0.4} \right]} \right)}&{\left( {\left[ {0.4, 0.7} \right], \left[ {0.1, 0.2} \right]} \right)}&{\left( {\left[ {0.4, 0.5} \right], \left[ {0.3, 0.5} \right]} \right)}&{\left( {\left[ {0.5, 0.8} \right], \left[ {0.1, 0.2} \right]} \right)}\\ {\left( {\left[ {0.3, 0.4} \right], \left[ {0.1, 0.3} \right]} \right)}&{\left( {\left[ {0.7, 0.8} \right], \left[ {0.1, 0.2} \right]} \right)}&{\left( {\left[ {0.5, 0.6} \right], \left[ {0.2, 0.4} \right]} \right)}&{\left( {\left[ {0.6, 0.7} \right], \left[ {0.1, 0.2} \right]} \right)} \end{array}} \right]} .

    In this evaluation, the PIS and NIS are, respectively,

    {\widetilde s^ + = \left[ {\left( {\left[ {0.5, 0.6} \right], \left[ {0.1, 0.3} \right]} \right), \left( {\left[ {0.7, 0.8} \right], \left[ {0.1, 0.2} \right]} \right), \left( {\left[ {0.5, 0.6} \right], \left[ {0.2, 0.4} \right]} \right), \left( {\left[ {0.6, 0.8} \right], \left[ {0.1, 0.2} \right]} \right)} \right]} ,
    {\widetilde s^ - } = \left[ {\left( {\left[ {0.2, 0.4} \right], \left[ {0.3, 0.4} \right]} \right), \left( {\left[ {0.1, 0.3} \right], \left[ {0.5, 0.6} \right]} \right), \left( {\left[ {0.2, 0.4} \right], \left[ {0.4, 0.5} \right]} \right), \left( {\left[ {0.2, 0.3} \right], \left[ {0.4, 0.6} \right]} \right)} \right] .
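
    The IvIFTD distance d_{Iv} is given by Eq (3.14) earlier in the paper, so we do not restate it. As a reference computation, the sketch below runs the same weighted TOPSIS pipeline with the classical normalized Hamming distance for IvIFNs, a stand-in rather than the proposed measure; the data layout and names are ours.

```python
# Decision matrix of Example 7.1, one row per alternative A_1..A_5,
# each entry an IvIFN stored as (uL, uR, vL, vR).
M = [
    [(0.4, 0.5, 0.3, 0.4), (0.4, 0.6, 0.2, 0.4), (0.3, 0.4, 0.4, 0.5), (0.5, 0.6, 0.1, 0.3)],
    [(0.5, 0.6, 0.2, 0.3), (0.6, 0.7, 0.2, 0.3), (0.5, 0.6, 0.3, 0.4), (0.4, 0.7, 0.1, 0.2)],
    [(0.3, 0.5, 0.3, 0.4), (0.1, 0.3, 0.5, 0.6), (0.2, 0.5, 0.4, 0.5), (0.2, 0.3, 0.4, 0.6)],
    [(0.2, 0.5, 0.3, 0.4), (0.4, 0.7, 0.1, 0.2), (0.4, 0.5, 0.3, 0.5), (0.5, 0.8, 0.1, 0.2)],
    [(0.3, 0.4, 0.1, 0.3), (0.7, 0.8, 0.1, 0.2), (0.5, 0.6, 0.2, 0.4), (0.6, 0.7, 0.1, 0.2)],
]
w = [0.15, 0.35, 0.395, 0.105]                          # attribute weights
s_plus = [(0.5, 0.6, 0.1, 0.3), (0.7, 0.8, 0.1, 0.2),
          (0.5, 0.6, 0.2, 0.4), (0.6, 0.8, 0.1, 0.2)]   # PIS
s_minus = [(0.2, 0.4, 0.3, 0.4), (0.1, 0.3, 0.5, 0.6),
           (0.2, 0.4, 0.4, 0.5), (0.2, 0.3, 0.4, 0.6)]  # NIS

def hamming(a, b):
    """Normalized Hamming distance between two IvIFNs."""
    return sum(abs(x - y) for x, y in zip(a, b)) / 4

for i, row in enumerate(M, start=1):
    d_p = sum(wj * hamming(a, p) for wj, a, p in zip(w, row, s_plus))
    d_n = sum(wj * hamming(a, n) for wj, a, n in zip(w, row, s_minus))
    print(f"A_{i}: rho = {d_n / (d_p + d_n):.4f}")
# Prints 0.4693, 0.8220, 0.0548, 0.6742, 0.9444 for A_1..A_5, which
# coincides with the column reported for [36] in Table 19; the ranking
# A_5 > A_2 > A_4 > A_1 > A_3 agrees with all distance-based methods there.
```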

    Then, we used our proposed IvIFTD distance measure to decide which alternative is better, as shown in Table 18.

    Table 18.  Conclusions for Example 7.1.
    Scheme {d_{Iv}}\left({{A_i}, {{\widetilde s}^ + }} \right)/{d_{Iv}}\left({{A_i}, {{\widetilde s}^ - }} \right) {\rho _{Iv}} Ranking
    {\textbf{A}_{1}} 0.2013/0.2035 0.5027 4
    {\textbf{A}_{2}} 0.0985/0.3128 0.7605 2
    {\textbf{A}_{3}} 0.3572/0.0341 0.0870 5
    {\textbf{A}_{4}} 0.1394/0.27 0.6595 3
    {\textbf{A}_{5}} 0.0268/0.3578 0.9304 1


    According to the results presented in Table 18, the ranking result is {{\rm{A}}_5} \succ {{\rm{A}}_2} \succ {{\rm{A}}_4} \succ {{\rm{A}}_1} \succ {{\rm{A}}_3} . This ranking order is aligned with the original order reported in [38].

    We also performed a comparison with other methods, including the score function described by Xu [13], the similarity function described by Wang [45], classical TOPSIS based on the Hamming distance as described by Hu and Xu [30], the Euclidean distance described by Qiao et al. [36], the M-TOPSIS method described by Aikhuele and Turan [34], the correlation coefficient method described by Jun [46] and a new TOPSIS based on exponential distance by using connections, as described by Garg and Kumar [18]. These different methods were applied to the given data; the corresponding results are presented in Table 19. It is worth noting that, except for some minor differences observed with the score function [13], all of the ranking results for the five alternatives remained the same as those for the improved IvIF-TOPSIS method.

    Table 19.  Ranking results for different methods for Example 7.1.
    Methods {V_{{{\rm{A}}_1}}} {V_{{{\rm{A}}_2}}} {V_{{{\rm{A}}_3}}} {V_{{{\rm{A}}_4}}} {V_{{{\rm{A}}_5}}} Ranking
    Score function [13] 0.0832 0.306 0.1808 0.2123 0.3838 {{\rm{A}}_5} \succ {{\rm{A}}_2} \succ {{\rm{A}}_4} \succ {{\rm{A}}_3} \succ {{\rm{A}}_1}
    Similarity function [45] 0.5377 0.6345 0.427 0.5843 0.668 {{\rm{A}}_5} \succ {{\rm{A}}_2} \succ {{\rm{A}}_4} \succ {{\rm{A}}_1} \succ {{\rm{A}}_3}
    Hu's TOPSIS [30] 0.4815 0.788 0.0888 0.6387 0.9215 {{\rm{A}}_5} \succ {{\rm{A}}_2} \succ {{\rm{A}}_4} \succ {{\rm{A}}_1} \succ {{\rm{A}}_3}
    Qiao's TOPSIS [36] 0.4693 0.822 0.0548 0.6742 0.9444 {{\rm{A}}_5} \succ {{\rm{A}}_2} \succ {{\rm{A}}_4} \succ {{\rm{A}}_1} \succ {{\rm{A}}_3}
    Aikhuele's M-TOPSIS [34] 0.13 0.0293 0.2295 0.0773 0 {{\rm{A}}_5} \succ {{\rm{A}}_2} \succ {{\rm{A}}_4} \succ {{\rm{A}}_1} \succ {{\rm{A}}_3}
    Correlation coefficient method [46] 0.7486 0.8884 0.5044 0.8178 0.9053 {{\rm{A}}_5} \succ {{\rm{A}}_2} \succ {{\rm{A}}_4} \succ {{\rm{A}}_1} \succ {{\rm{A}}_3}
    Garg and Kumar's TOPSIS [18] 0.4367 0.7762 0.067 0.6234 0.9259 {{\rm{A}}_5} \succ {{\rm{A}}_2} \succ {{\rm{A}}_4} \succ {{\rm{A}}_1} \succ {{\rm{A}}_3}
    Our Proposed TOPSIS 0.5027 0.7605 0.0870 0.6595 0.9304 {{\rm{A}}_5} \succ {{\rm{A}}_2} \succ {{\rm{A}}_4} \succ {{\rm{A}}_1} \succ {{\rm{A}}_3}


    Example 7.2. Suppose that there are four courses {{\rm B}_1} , {{\rm B}_2} , {{\rm B}_3} and {{\rm B}_4} that need to be evaluated; their comprehensive IvIFN values are \widetilde {{\gamma _{{{\rm B}_1}}}} = \left({\left[{0.15, 0.25} \right], \left[{0.25, 0.35} \right]} \right) , \widetilde {{\gamma _{{{\rm B}_2}}}} = \left({\left[{0.2, 0.3} \right], \left[{0.15, 0.25} \right]} \right) , \widetilde {{\gamma _{{{\rm B}_3}}}} = \left({\left[{0.25, 0.35} \right], \left[{0.2, 0.3} \right]} \right) and \widetilde {{\gamma _{{{\rm B}_4}}}} = \left({\left[{0.352, 0.43} \right], \left[{0.095, 0.123} \right]} \right) , respectively.

    According to Definition 2.4, the score function values are {f_s}\left({\widetilde {{\gamma _{{{\rm B}_1}}}}} \right) = - 0.1 , {f_s}\left({\widetilde {{\gamma _{{{\rm B}_2}}}}} \right) = 0.05 , {f_s}\left({\widetilde {{\gamma _{{{\rm B}_3}}}}} \right) = 0.05 and {f_s}\left({\widetilde {{\gamma _{{{\rm B}_4}}}}} \right) = 0.282 , and the accuracy function values are {f_a}\left({\widetilde {{\gamma _{{{\rm B}_2}}}}} \right) = 0.45 and {f_a}\left({\widetilde {{\gamma _{{{\rm B}_3}}}}} \right) = 0.55 . Thus, one has \widetilde {{\gamma _{{{\rm B}_1}}}} \prec \widetilde {{\gamma _{{{\rm B}_2}}}} \prec \widetilde {{\gamma _{{{\rm B}_3}}}} \prec \widetilde {{\gamma _{{{\rm B}_4}}}} .
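
    Definition 2.4 appears earlier in the paper; the values above are consistent with the standard score function {f_s} = ({u^L} + {u^R} - {v^L} - {v^R})/2 and accuracy function {f_a} = ({u^L} + {u^R} + {v^L} + {v^R})/2 , which the following sketch assumes:

```python
def score(n):
    """Assumed score function of Definition 2.4; n = (uL, uR, vL, vR)."""
    uL, uR, vL, vR = n
    return (uL + uR - vL - vR) / 2

def accuracy(n):
    """Assumed accuracy function of Definition 2.4."""
    uL, uR, vL, vR = n
    return (uL + uR + vL + vR) / 2

courses = {
    "B1": (0.15, 0.25, 0.25, 0.35),
    "B2": (0.2, 0.3, 0.15, 0.25),
    "B3": (0.25, 0.35, 0.2, 0.3),
    "B4": (0.352, 0.43, 0.095, 0.123),
}
for name, n in courses.items():
    print(name, score(n), accuracy(n))
# Scores: -0.1, 0.05, 0.05, 0.282; the B2/B3 score tie is broken by the
# accuracy values 0.45 < 0.55, giving B1 < B2 < B3 < B4 as in the text.
```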

    In addition, we set the PIS as \widetilde {{\gamma ^ + }} = \left({\left[{\max \left\{ {u_{{{\rm{B}}_i}}^L} \right\}, \max \left\{ {u_{{{\rm{B}}_i}}^R} \right\}} \right], \left[{\min \left\{ {v_{{{\rm{B}}_i}}^L} \right\}, \min \left\{ {v_{{{\rm{B}}_i}}^R} \right\}} \right]} \right) and the NIS as \widetilde {{\gamma ^ - }} = \left({\left[{\min \left\{ {u_{{{\rm{B}}_i}}^L} \right\}, \min \left\{ {u_{{{\rm{B}}_i}}^R} \right\}} \right], \left[{\max \left\{ {v_{{{\rm{B}}_i}}^L} \right\}, \max \left\{ {v_{{{\rm{B}}_i}}^R} \right\}} \right]} \right) . Hence, the PIS ( \widetilde {{\gamma ^ + }} ) and NIS ( \widetilde {{\gamma ^ - }} ) are \widetilde {{\gamma _{{{\rm B}_4}}}} and \widetilde {{\gamma _{{{\rm B}_1}}}} , respectively. The relative closeness for the four courses was then calculated by using Eq (5.1), and all schemes are ranked in Table 20.
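
    The tie reported for the Hamming-based methods in Table 20 is easy to reproduce; a brief sketch follows (again using the classical normalized Hamming distance as the comparison baseline, not the proposed d_{Iv} ):

```python
def hamming(a, b):
    """Normalized Hamming distance between IvIFNs (uL, uR, vL, vR)."""
    return sum(abs(x - y) for x, y in zip(a, b)) / 4

pis = (0.352, 0.43, 0.095, 0.123)   # gamma_{B_4}
nis = (0.15, 0.25, 0.25, 0.35)      # gamma_{B_1}
for name, g in [("B2", (0.2, 0.3, 0.15, 0.25)),
                ("B3", (0.25, 0.35, 0.2, 0.3))]:
    d_p, d_n = hamming(g, pis), hamming(g, nis)
    print(name, round(d_n / (d_p + d_n), 4))
# Both print 0.3927: B2 and B3 are indistinguishable under the Hamming
# distance, whereas the IvIFTD measure separates them (0.3995 vs 0.3792).
```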

    Table 20.  Comparison with different TOPSIS methods for Example 7.2.
    TOPSIS Methods The relative closeness degree Ranking
    Hu and Xu's [30], Qiao's [36] {\rho _H}\left({{{\rm B}_2}} \right) = {\rho _H}\left({{{\rm B}_3}} \right) = 0.3927, {\rho _H}\left({{{\rm B}_1}} \right) = 0, {\rho _H}\left({{{\rm B}_4}} \right) = 1 {{\rm B}_4} \succ {{\rm B}_2} = {{\rm B}_3} \succ {{\rm B}_1}
    Wang's [45], Zhou's [47] {\rho _E}\left({{{\rm B}_2}} \right) = {\rho _E}\left({{{\rm B}_3}} \right) = 0.3940, {\rho _E}\left({{{\rm B}_1}} \right) = 0, {\rho _E}\left({{{\rm B}_4}} \right) = 1 {{\rm B}_4} \succ {{\rm B}_2} = {{\rm B}_3} \succ {{\rm B}_1}
    This study {\rho _{Iv}}\left({{{\rm B}_1}} \right) = 0, {\rho _{Iv}}\left({{{\rm B}_2}} \right) = 0.3995, {\rho _{Iv}}\left({{{\rm B}_3}} \right) = 0.3792, {\rho _{Iv}}\left({{{\rm B}_4}} \right) = 1 {{\rm B}_4} \succ {{\rm B}_2} \succ {{\rm B}_3} \succ {{\rm B}_1}


    As observed, certain traditional TOPSIS methods are unable to distinguish the four courses because of identical relative closeness values; specifically, {\rho _H}\left({{{\rm B}_2}} \right) = {\rho _H}\left({{{\rm B}_3}} \right) = 0.3927 and {\rho _E}\left({{{\rm B}_2}} \right) = {\rho _E}\left({{{\rm B}_3}} \right) = 0.3940 . By contrast, the proposed IvIF-TOPSIS method yielded {\rho _{Iv}}\left({{{\rm B}_2}} \right) = 0.3995 and {\rho _{Iv}}\left({{{\rm B}_3}} \right) = 0.3792 , a noticeable distinction. Consequently, the IvIF-TOPSIS method allows courses {{\rm B}_2} and {{\rm B}_3} to be compared, with {{\rm B}_2} being superior to {{\rm B}_3} .

    Based on the aforementioned comparisons, it is evident that the proposed IvIF-TOPSIS method is not only applicable to decision-making problems, but it also demonstrates a superior ability to rank schemes with subtle differences. Therefore, the improved IvIF-TOPSIS method is proven to be advantageous for decision-making problems.

    Teaching satisfaction evaluation plays an essential role in enhancing teaching quality in higher education. However, due to human limitations in terms of knowledge, cognitive uncertainty and thinking habits, IvIFSs are often utilized to address MADM issues. In this domain, two vital aspects have arisen: how to objectively determine evaluation index weights and how to compare schemes with a decision-making method. To address these problems, we created a new distance measure based on the triangular divergence and demonstrated that the IvIFTD distance measure satisfies the properties required of a distance metric. Compared with some existing distance methods that lack an explicit physical meaning or involve complex calculations, the proposed IvIFTD distance measure is more in line with intuitive experience and theoretical requirements. Additionally, it proves superior at distinguishing subtle differences between IvIFSs.

    Based on the IvIFTD distance measure, an improved TOPSIS method has been proposed. This method was subsequently applied for the establishment of an MADM approach for teaching satisfaction evaluation. An example was conducted to illustrate the decision-making process, and it included problem construction, calculation of comprehensive evaluation values, ranking and a selection of schemes using the IvIFTD distance measure and TOPSIS method. Comparative analyses have been presented to validate the rationality and superiority of the proposed method. The outcomes demonstrate that the improved TOPSIS method, based on the new distance measure, effectively handles uncertainty and subtle differences in actual evaluation problems involving different IvIFSs or IvIFNs. This advantage allows for the utilization of diverse evaluation values, providing more comprehensive decision-making information for teaching satisfaction evaluation.

    However, our study also has limitations that need to be addressed in future work. First, the proposed method does not consider the subjective weight of the evaluation criteria, thus overlooking the subjective preferences of decision-makers in the criteria. Additionally, the teaching satisfaction evaluation index system can be further improved by incorporating other innovative criteria. Moreover, considering a group decision-making approach for teaching satisfaction evaluation may be a more viable method to achieve objective evaluations.

    In future studies, we will aim to construct a more comprehensive teaching satisfaction evaluation index system from multiple perspectives through expert investigation and consultation. We will also extend subjective weighting methods, such as the best-worst method, the full-consistency method and step-wise weight assessment ratio analysis, to the IvIFS environment to incorporate subjective criterion importance. Furthermore, the construction of group decision-making methods considering multi-granularity linguistic information, consensus processes and behavioral decision theory is crucial for MADM problems.

    The authors declare that they have not used Artificial Intelligence (AI) tools in the creation of this article.

    The research was funded by the Scientific Research Project of Neijiang Normal University (2022ZD10, 2021TD04) and Basic Research and Applied Basic Research Project of Neijiang City (2023018).

    The authors declare that they have no competing interests.


