Research article

Pythagorean hesitant fuzzy rough multi-attribute decision-making method with application to wearable health technology devices

  • Received: 11 July 2024 Revised: 04 September 2024 Accepted: 05 September 2024 Published: 20 September 2024
  • MSC : 03B52, 03E72

  • Identifying optimal wearable health technology devices for hospitals is a crucial step in emergency decision-making. The multi-attribute group decision-making method is a widely used and practical approach for selecting wearable health technology devices. However, because of the various factors that must be considered when selecting devices in emergencies, decision-makers often struggle to create a comprehensive assessment method. This study introduced a novel decision-making method that took into account various factors of decision-makers and has the potential to be applied in other areas of research. First, we introduced a list of aggregation operators based on Pythagorean hesitant fuzzy rough sets, and a detailed description of the desired characteristics of the operators under investigation was provided. The proposed operators were validated by a newly defined score and accuracy function. Second, this paper used the proposed approach to demonstrate the Pythagorean hesitant fuzzy rough technique for order of preference by similarity to ideal solution (TOPSIS) model for multiple-attribute decision-making and its stepwise algorithm. We developed a numerical example based on the suggested operators for the evaluation framework to tackle multiple-attribute decision-making problems while evaluating the performance of wearable health technology devices. Finally, a sensitivity analysis confirmed the performance and reliability of the proposed framework. The findings indicated that the models being examined demonstrated greater reliability and efficacy compared with existing methodologies.

    Citation: Attaullah, Sultan Alyobi, Mohammed Alharthi, Yasser Alrashedi. Pythagorean hesitant fuzzy rough multi-attribute decision-making method with application to wearable health technology devices[J]. AIMS Mathematics, 2024, 9(10): 27167-27204. doi: 10.3934/math.20241321




    There are numerous complementary fields, such as medicine, the social sciences, economics, and management, that necessitate substantial decision-making (DM): determining enterprise locations, evaluating scientific research, assessing employees, selecting initiatives and projects, evaluating investments, allocating resources, and wide-ranging classification of economic advantage. Prudent DM is the cornerstone of modern management and is essential for evaluating the overall effectiveness of management initiatives. Several multi-attribute DM (MADM) approaches have been implemented, and their applications have been discussed in various fields of DM. For example, Gou et al. [1] developed two types of linguistic preference orderings for representing experts' opinions in the evaluation of resource allocation for medical health in public health emergencies, and they developed a new ORESTE (Organisation, Rangement Et Synthèse De Données Relationnelles) method utilizing linguistic preference orderings to address MADM problems. The ORESTE method was proposed to address the actual MADM problem of allocating medical health resources during public health emergencies. They analyzed the results of the ORESTE method and compared them with other existing methods in a double-hierarchy linguistic environment. Zhang et al. [2] introduced a novel method for comparing double hierarchy hesitant fuzzy linguistic elements (DHHFLEs). This method represents an advancement over existing comparison methods for DHHFLEs. Furthermore, they addressed a gap in the current research by proposing a cosine similarity measure for DHHFLEs, which takes a geometric perspective rather than relying solely on algebraic approaches. The proposed method was applied to solve an MADM problem in the performance evaluation of financial logistics enterprises. Gou et al. 
[3] introduced a new and comprehensive concept called the probabilistic double hierarchy linguistic term set (PDHLTS). They suggested some additional practical operations and a method to measure the distance between PDHLTSs. They enhanced the traditional VIKOR (VlseKriterijumska Optimizacija I Kompromisno Resenje) method by developing an extended probabilistic double hierarchy linguistic VIKOR method. In addition, the proposed method showcases its benefits and practicality through its application to a real-world MADM problem in the field of smart healthcare. Decision-makers frequently employ MADM approaches, which encompass the application of various logical methods to evaluate and prioritize a set of alternatives according to a set of interconnected criteria. Several MADM approaches employ different logical frameworks (for detailed information, see [4,5,6,7,8]). Typically, decision-makers are individuals who possess expertise in the particular issue being evaluated. They commonly favor the use of fuzzy numbers when conducting evaluations, because with qualitative scoring, experts may introduce subjectivity and personal presumptions into their assessments. This implies an ambiguous and vague evaluation from the expert. Dealing with uncertainty and inaccurate information is a major difficulty: optimal performance of traditional mathematical models often requires precise data, which is not always achievable in practice. Rough set theory, developed by Pawlak [9] in the early 1980s, is a mathematical approach that addresses the challenges of dealing with vagueness and uncertainty. It has gained recognition as a widely used framework for processing uncertain knowledge. Rough set theory provides a strong framework for handling uncertainty and imprecision in information. 
Through its emphasis on the approximation of sets using indiscernibility relations, this approach provides a distinct method for analyzing and interpreting complicated information without requiring additional details. It is often compared to fuzzy set (FS) theory, introduced by Zadeh [10] in 1965, and is sometimes used in combination with it as the fuzzy-rough approach, as established by Dubois and Prade [11]. Rough set (RS) theory has garnered global attention from numerous professionals and researchers who have made significant contributions to its advancement and practical implementations. RS theory is characterized by its overlapping nature with various other theories. However, RS theory can be perceived as a distinct discipline in its own regard. RS theory has been widely utilized in diverse domains, particularly in research areas including artificial intelligence, cognitive sciences, machine learning, intelligent systems, inductive reasoning, pattern recognition, computational image processing, signal assessment, knowledge discovery, expert systems, data mining, feature selection and reduction, classification, clustering, decision support systems, and medical diagnosis, where it assists in analyzing medical data for diagnosis and treatment planning. Therefore, RS theory proves to be a valuable tool in the field of data analysis and decision support, especially in situations where data is incomplete or ambiguous. The framework enables the approximation of imprecise or incomplete information by utilizing the concepts of lower and upper approximations. Recently, there has been significant development in data analysis methods, particularly in the context of big data and the internet of things (IoT). The majority of references pertain to the stimulating works on hybrid models that are created by combining RS theory concepts within the suggested framework, which is considered a crucial element. 
RS theory is an emerging theory that addresses the handling of inadequate data (see [12,13,14,15] for detailed information). Over the past few decades, researchers have proposed numerous generalizations and extensions of the classical RS theory. The applications of RSs in a fuzzy environment have gained significant attention in recent years. The study of Pei [16] focused on the challenges of approximating fuzzy sets within fuzzy information systems, leading to the development of the theory of fuzzy RSs. Jiang et al. [17] introduced a new methodology for utilizing a fuzzy similarity-based RS algorithm in the context of feature weighting and reduction for a case-based reasoning system. The algorithm is utilized in the process of tool selection for die and mould machining. Ding et al. [18] conducted a study on three-way group decisions using evidential reasoning in incomplete hesitant fuzzy information systems for liver disease diagnosis. Zhang et al. [19] designed an incomplete three-way MADM using adjustable multi-granulation Pythagorean fuzzy probabilistic RSs. They discussed two case studies involving mine ventilator fault diagnosis and air quality evaluations. They also conducted various comparative experiments and sensitivity analyses to demonstrate the effectiveness of their theoretical methodology. Wang [20] provided an overview of the properties of fuzzy RSs based on triangular norms. These norms do not necessarily need to be continuous. The purpose of this description is to provide generalization results for fuzzy RS theory from a mathematical perspective. The study conducted by Qi et al. [21] focused on the Fermatean fuzzy covering-based RS and its applications in MADM. Their study showcased the decision-making processes involved in selecting electric vehicle charging stations in an Indian city using innovative methodologies. 
The DM results were compared to other conventional techniques employing the Spearman ranking correlation coefficient and Pearson correlation coefficient methods to verify the effectiveness of the novel approaches. The investigation carried out by Theerens et al. [22] was centered around specific examples within the broader category of fuzzy quantifier-based fuzzy RSs (FQFRS). The lower and upper approximations in these models are evaluated using binary and unary fuzzy quantifiers, respectively. In addition, they presented a counterexample to support the claim that other FQFRS models are not granularly representable. However, these approaches are still effective in addressing inconsistencies in real-life datasets. Zhou et al. [23] introduced the concept of single granulation hesitant Pythagorean fuzzy RSs for the first time. Then, considering the multi-granulation framework, two types of multi-granulation hesitant Pythagorean fuzzy RSs were introduced, known as the optimistic and pessimistic multi-granulation hesitant Pythagorean fuzzy RSs. The relationships between serial hesitant Pythagorean fuzzy relations and hesitant Pythagorean fuzzy approximation operators were established, and they provided a real-life instance to illustrate the applicability of the proposed method in MADM. Zhang et al. [24] presented an integrated DM framework that utilizes the hesitant fuzzy RS model across two universes and explores various properties of this model. They proposed the union, intersection, and composition of hesitant fuzzy approximation spaces, along with an investigation into their properties. In addition, they introduced a novel method for making decisions in uncertain environments by utilizing hesitant fuzzy rough sets across two different universes. Finally, two real-world MADM problems are provided to demonstrate the effectiveness of this approach. 
The development of an explainable prediction method that combines fuzzy RSs and the TOPSIS approach (developed by Hwang and Yoon [25]) was reported by Gaeta et al. [26]. The findings can aid analysts and decision-makers in acquiring more comprehension of the phenomenon known as information disorder. The results are derived from authentic data and exhibit encouraging possibilities for future research. Hesitancy represents an inherent aspect of the natural world. Determining the superior alternatives among those that share identical characteristics in routine practice presents a complex challenge. Expert professionals find it challenging to deal with DM because of the ambiguity and hesitation about the outcomes. In addressing the issue of hesitancy, Torra [27] introduced the notion of hesitant fuzzy sets (HFSs). Hesitant fuzzy sets effectively capture the uncertainty of intricate scenarios and the imprecision of perceptions among individuals. Since their establishment, HFSs have made notable progress in both practical and theoretical aspects. This framework can be applied to resolve a diverse array of DM challenges. A broad range of authors have merged the idea of HFSs with other frameworks of fuzzy sets and employed them to address real-world challenges through the application of Dombi aggregation operators (introduced by Dombi in 1982) and the TOPSIS methodology within the context of group DM. Dombi aggregation operators and the TOPSIS approach gained significant recognition as essential tools for addressing real-world DM complications. A number of researchers have employed the aforementioned concepts across various domains of DM challenges. Xia and Xu [28] explored various hesitant fuzzy aggregation operators (AOPs) and discussed their applications. Liu et al. [29] performed a study on the Dombi AOPs of the interval-valued HFS, with a specific focus on the Dombi t-norm and t-conorm. Akram et al. 
[30] introduced and explained several averaging and geometric AOPs that utilize the Dombi t-norm and t-conorm. They also discussed the practical applications of these techniques in DM environments. Tehreem et al. [31] developed Dombi AOPs in spherical cubic fuzzy information and discussed their applications in MADM. These approaches were presented to demonstrate their validity, practicability, and efficacy. Shi and Ye [32] broadened the idea of the Dombi operation to include neutrosophic cubic sets and effectively applied it to the DM problem. Lu and Ye [33] initially established the Dombi AOPs for linguistic cubic variables to assist in the DM problem. Umer et al. [34] enhanced the TOPSIS approach by integrating a distance technique that incorporates interval type-2 trapezoidal Pythagorean fuzzy numbers. They thoroughly examined the applications of this approach in addressing MADM problems, while diligently considering the viewpoints and insights of decision-makers throughout the evaluation process. A novel approach for determining the ranking order in group DM was proposed by Kacprzak et al. [35], which utilizes ordered fuzzy numbers and the TOPSIS strategy. Rezaei et al. [36] developed an approach to address the issue of sustainable global partner selection by utilizing DM experts who were previously unidentified. Jun et al. [37] developed the hybrid TOPSIS methodology based on interval Pythagorean FS (PyFS) and explored its application in MADM. Zhang and Dai [38] were the first to develop a TOPSIS approach combining decision-theoretic rough fuzzy sets. They demonstrated the effectiveness and reliability of the technique through detailed analysis and evaluation of parameters. In addition, they accomplished simulated experiments to further confirm the effectiveness of the proposed DM approach.

    In the aforementioned research initiatives, hybrid models incorporating a PyFS and an RS are rarely accomplished. In addition, the Pawlak RS has been effectively integrated with other uncertainty theories, such as soft set theory, which has enabled the development of several innovative RS models. In order to deal with unclear and hesitant information while making decisions for real-world problems, the concept of Pythagorean hesitant fuzzy rough sets (PyHFRSs) is also crucial. The aforesaid literature aroused our interest in putting forward this novel notion. The goal of our research is to integrate HFSs with Pythagorean fuzzy rough sets to explore potential applications in tackling MADM challenges in a hesitant fuzzy environment. The main contributions of the manuscript are as follows:

    (1) To assemble a list of AOPs based on the Dombi t-norm and t-conorm, namely, the PyHFR Dombi weighted averaging (PyHFRDWAA), PyHFR Dombi ordered weighted averaging (PyHFRDOWAA), and PyHFR Dombi hybrid weighted averaging (PyHFRDHWAA) operators, and investigate their key operational laws. Additionally, describe their related properties.

    (2) To introduce and implement the score and accuracy functions employing PyHFRSs.

    (3) To provide a DM technique for combining uncertain information employing prescribed aggregation operators.

    (4) The initiated operators are employed for a real-world DM challenge involving the selection of wearable health technology devices (WHTDs) for hospitals.

    (5) The effectiveness and rationality of the suggested framework were confirmed by sensitivity and comparative analysis with the PyHFR-TOPSIS approach. The obtained results are shown graphically.

    The remaining part of this research is outlined as follows: Section 2 summarizes the basic information required for the fundamental ideas. In Section 3, we explain the basic operations of the Dombi t-norm and t-conorm in detail based on PyHFRSs, which will be useful for the subsequent analysis. In Section 4, we introduce the MADM strategy under PyHFR information. Section 5 implements the real-life application of the suggested approach for the selection of WHTDs for hospitals. Sections 6 and 7 present the sensitivity and comparative analyses, respectively. Section 8 concludes the study and presents future recommendations.

    In this section, we briefly go through the crucial concepts that are essential to a thorough understanding. These ideas include the FS, HFS, intuitionistic FS (IFS), intuitionistic hesitant FS (IHFS), RS, intuitionistic fuzzy RS (IFRS), and PyHFRS, together with the operational laws that are inescapably interconnected. These key notions will assist and simplify the suggested framework.

    Definition 2.1. [10] Let $X = \{\tilde{\varrho}_1, \tilde{\varrho}_2, \tilde{\varrho}_3, ..., \tilde{\varrho}_n\}$ be a nonempty set. By a fuzzy set $F$, we mean a structure of the form

\begin{equation*} F = \{\langle \tilde{\varrho}_i, \mathcal{H}_F(\tilde{\varrho}_i)\rangle \mid \tilde{\varrho}_i \in X\}, \end{equation*}

    where $\mathcal{H}_F(\tilde{\varrho}_i) \in [0, 1]$ is known as the membership grade (MG).

    Definition 2.2. [39] Let $X = \{\tilde{\varrho}_1, \tilde{\varrho}_2, \tilde{\varrho}_3, ..., \tilde{\varrho}_n\}$ be a nonempty set. By an HFS $H$, we mean a structure

\begin{equation*} H = \{\langle \tilde{\varrho}_i, \mathcal{H}_{h_H}(\tilde{\varrho}_i)\rangle \mid \tilde{\varrho}_i \in X\}, \end{equation*}

    where $\mathcal{H}_{h_H}(\tilde{\varrho}_i) \subseteq [0, 1]$ is a set of some values representing the MG, and $\mathcal{H}_{h_H}(\tilde{\varrho}_i)$ is referred to as the hesitant fuzzy element.

    Definition 2.3. [40] Let $X = \{\tilde{\varrho}_1, \tilde{\varrho}_2, \tilde{\varrho}_3, ..., \tilde{\varrho}_n\}$ be a nonempty set. An IFS $\tilde{F}$ over $X$ is structurally defined as

\begin{equation*} \tilde{F} = \{\langle \tilde{\varrho}_i, \mathcal{H}_{\tilde{F}}(\tilde{\varrho}_i), \mathcal{U}_{\tilde{F}}(\tilde{\varrho}_i)\rangle \mid \tilde{\varrho}_i \in X\}, \end{equation*}

    where, for each $\tilde{\varrho}_i \in X$, the functions $\mathcal{H}_{\tilde{F}} : X \rightarrow [0, 1]$ and $\mathcal{U}_{\tilde{F}} : X \rightarrow [0, 1]$ denote the MG and nonmembership grade (NMG), respectively, which must satisfy the property $0 \leq \mathcal{H}_{\tilde{F}}(\tilde{\varrho}_i) + \mathcal{U}_{\tilde{F}}(\tilde{\varrho}_i) \leq 1$.

    Definition 2.4. [41] Let $X = \{\tilde{\varrho}_1, \tilde{\varrho}_2, \tilde{\varrho}_3, ..., \tilde{\varrho}_n\}$ be a nonempty universal set. By an intuitionistic hesitant FS (IHFS) $E$ over $X$, we mean a structure as follows:

\begin{equation*} E = \{\langle \tilde{\varrho}_i, \mathcal{H}_{h_E}(\tilde{\varrho}_i), \mathcal{U}_{h_E}(\tilde{\varrho}_i)\rangle \mid \tilde{\varrho}_i \in X\}, \end{equation*}

    where $\mathcal{H}_{h_E}(\tilde{\varrho}_i) \subseteq [0, 1]$ and $\mathcal{U}_{h_E}(\tilde{\varrho}_i) \subseteq [0, 1]$ are sets of some values denoting the MG and NMG, respectively, with the property that, for all $\eta(\tilde{\varrho}_i) \in \mathcal{H}_{h_E}(\tilde{\varrho}_i)$ and $\nu(\tilde{\varrho}_i) \in \mathcal{U}_{h_E}(\tilde{\varrho}_i)$, $\max(\eta(\tilde{\varrho}_i)) + \min(\nu(\tilde{\varrho}_i)) \leq 1$ and $\min(\eta(\tilde{\varrho}_i)) + \max(\nu(\tilde{\varrho}_i)) \leq 1$.

    Definition 2.5. [9] Let $X = \{\tilde{\varrho}_1, \tilde{\varrho}_2, \tilde{\varrho}_3, ..., \tilde{\varrho}_n\}$ be a nonempty universal set and $\mathcal{Z}$ be any relation on $X$. Define a set-valued mapping $\mathcal{Z}^{\ast} : X \rightarrow M(X)$ by $\mathcal{Z}^{\ast}(\tilde{\varrho}_i) = \{a \in X \mid (\tilde{\varrho}_i, a) \in \mathcal{Z}\}$ for $\tilde{\varrho}_i \in X$, where $\mathcal{Z}^{\ast}(\tilde{\varrho}_i)$ is called the successor neighborhood of the element $\tilde{\varrho}_i$ with respect to the relation $\mathcal{Z}$. The pair $(X, \mathcal{Z})$ is known as a crisp approximation space. Now, for any set $\tilde{I} \subseteq X$, the lower approximation (LA) and upper approximation (UA) of $\tilde{I}$ with respect to the space $(X, \mathcal{Z})$ are defined as:

\begin{equation*} \underline{\mathcal{Z}}(\tilde{I}) = \{\tilde{\varrho}_i \in X \mid \mathcal{Z}^{\ast}(\tilde{\varrho}_i) \subseteq \tilde{I}\}; \quad \overline{\mathcal{Z}}(\tilde{I}) = \{\tilde{\varrho}_i \in X \mid \mathcal{Z}^{\ast}(\tilde{\varrho}_i) \cap \tilde{I} \neq \phi \}, \end{equation*}

    where $\underline{\mathcal{Z}}, \overline{\mathcal{Z}} : M(X) \rightarrow M(X)$ are the LA and UA operators, and the pair $(\underline{\mathcal{Z}}(\tilde{I}), \overline{\mathcal{Z}}(\tilde{I}))$ is called an RS.
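    For a finite universe, the lower and upper approximations of Definition 2.5 can be computed directly from the successor neighborhoods. The following sketch illustrates this; the function names, relation, and target set are my own hypothetical choices, not from the paper:

```python
# Illustrative sketch: crisp lower/upper approximations of a target set
# via successor neighborhoods, as in Definition 2.5.

def successor(universe, relation):
    """Map each element to its successor neighborhood under a binary relation."""
    return {x: {b for (a, b) in relation if a == x} for x in universe}

def lower_upper(universe, relation, target):
    nbhd = successor(universe, relation)
    lower = {x for x in universe if nbhd[x] <= target}   # Z*(x) subset of I
    upper = {x for x in universe if nbhd[x] & target}    # Z*(x) meets I
    return lower, upper

# Hypothetical example: indiscernibility classes {1, 2} and {3, 4}.
U = {1, 2, 3, 4}
R = {(1, 1), (1, 2), (2, 1), (2, 2), (3, 3), (3, 4), (4, 3), (4, 4)}
lo, up = lower_upper(U, R, {1, 2, 3})
print(lo, up)  # lower {1, 2}; upper {1, 2, 3, 4}
```

    The target set {1, 2, 3} cuts across the class {3, 4}, so it is "rough": its lower approximation keeps only {1, 2}, while its upper approximation absorbs the whole class {3, 4}.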

    Definition 2.6. [42] Let $X = \{\tilde{\varrho}_1, \tilde{\varrho}_2, \tilde{\varrho}_3, ..., \tilde{\varrho}_n\}$ be the universal set and $\mathcal{Z} \in IFS(X \times X)$ be an IF relation. Then:

    (1) $\mathcal{Z}$ is reflexive if $\mathcal{H}_{\mathcal{Z}}(\tilde{\varrho}_i, \tilde{\varrho}_i) = 1$ and $\mathcal{U}_{\mathcal{Z}}(\tilde{\varrho}_i, \tilde{\varrho}_i) = 0$, $\forall\ \tilde{\varrho}_i \in X$;

    (2) $\mathcal{Z}$ is symmetric if $\forall\ (\tilde{\varrho}_i, \pounds) \in X \times X$, $\mathcal{H}_{\mathcal{Z}}(\tilde{\varrho}_i, \pounds) = \mathcal{H}_{\mathcal{Z}}(\pounds, \tilde{\varrho}_i)$ and $\mathcal{U}_{\mathcal{Z}}(\tilde{\varrho}_i, \pounds) = \mathcal{U}_{\mathcal{Z}}(\pounds, \tilde{\varrho}_i)$;

    (3) $\mathcal{Z}$ is transitive if $\forall\ (\tilde{\varrho}_i, \ell) \in X \times X$,

\begin{equation*} \mathcal{H}_{\mathcal{Z}}(\tilde{\varrho}_i, \ell) \geq \bigvee\limits_{c \in X}\left[ \mathcal{H}_{\mathcal{Z}}(\tilde{\varrho}_i, c) \wedge \mathcal{H}_{\mathcal{Z}}(c, \ell)\right] \end{equation*}

    and

\begin{equation*} \mathcal{U}_{\mathcal{Z}}(\tilde{\varrho}_i, \ell) \leq \bigwedge\limits_{c \in X}\left[ \mathcal{U}_{\mathcal{Z}}(\tilde{\varrho}_i, c) \vee \mathcal{U}_{\mathcal{Z}}(c, \ell)\right]. \end{equation*}
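    For a finite universe, the three properties of Definition 2.6 can be checked directly on the membership and non-membership matrices of the relation. The sketch below uses my own function names and a hypothetical 2x2 relation:

```python
# Illustrative sketch: checking the Definition 2.6 properties for an
# intuitionistic fuzzy relation given by a membership matrix H and a
# non-membership matrix U over a finite universe.

def is_reflexive(H, U):
    n = len(H)
    return all(H[i][i] == 1 and U[i][i] == 0 for i in range(n))

def is_symmetric(H, U):
    n = len(H)
    return all(H[i][j] == H[j][i] and U[i][j] == U[j][i]
               for i in range(n) for j in range(n))

def is_transitive(H, U):
    # H(i,j) >= max_c min(H(i,c), H(c,j)); U(i,j) <= min_c max(U(i,c), U(c,j))
    n = len(H)
    ok_H = all(H[i][j] >= max(min(H[i][c], H[c][j]) for c in range(n))
               for i in range(n) for j in range(n))
    ok_U = all(U[i][j] <= min(max(U[i][c], U[c][j]) for c in range(n))
               for i in range(n) for j in range(n))
    return ok_H and ok_U

# Hypothetical relation on a two-element universe.
H = [[1.0, 0.6], [0.6, 1.0]]
U = [[0.0, 0.3], [0.3, 0.0]]
print(is_reflexive(H, U), is_symmetric(H, U), is_transitive(H, U))
```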

    Definition 2.7. [43] Let $X = \{\tilde{\varrho}_1, \tilde{\varrho}_2, \tilde{\varrho}_3, ..., \tilde{\varrho}_n\}$ be the universal set. Then any relation $\mathcal{Z} \in IFS(X \times X)$ is called an IF relation, and the pair $(X, \mathcal{Z})$ is said to be an IF approximation space. Now, for any $\tilde{I} \in IFS(X)$, the LA and UA of $\tilde{I}$ with respect to the IF approximation space $(X, \mathcal{Z})$ are two IFSs, symbolized by $\underline{\mathcal{Z}}(\tilde{I})$ and $\overline{\mathcal{Z}}(\tilde{I})$ and highlighted below:

\begin{equation*} \overline{\mathcal{Z}}(\tilde{I}) = \{\langle \tilde{\varrho}_i, \mathcal{H}_{\overline{\mathcal{Z}}(\tilde{I})}(\tilde{\varrho}_i), \mathcal{U}_{\overline{\mathcal{Z}}(\tilde{I})}(\tilde{\varrho}_i)\rangle \mid \tilde{\varrho}_i \in X\} \end{equation*}

    and

\begin{equation*} \underline{\mathcal{Z}}(\tilde{I}) = \{\langle \tilde{\varrho}_i, \mathcal{H}_{\underline{\mathcal{Z}}(\tilde{I})}(\tilde{\varrho}_i), \mathcal{U}_{\underline{\mathcal{Z}}(\tilde{I})}(\tilde{\varrho}_i)\rangle \mid \tilde{\varrho}_i \in X\}, \end{equation*}

    where

\begin{equation*} \mathcal{H}_{\overline{\mathcal{Z}}(\tilde{I})}(\tilde{\varrho}_i) = \bigvee\limits_{B \in X}\left[ \mathcal{H}_{\mathcal{Z}}(\tilde{\varrho}_i, B) \wedge \mathcal{H}_{\tilde{I}}(B)\right]; \quad \mathcal{U}_{\overline{\mathcal{Z}}(\tilde{I})}(\tilde{\varrho}_i) = \bigwedge\limits_{B \in X}\left[ \mathcal{U}_{\mathcal{Z}}(\tilde{\varrho}_i, B) \vee \mathcal{U}_{\tilde{I}}(B)\right]; \end{equation*}
\begin{equation*} \mathcal{H}_{\underline{\mathcal{Z}}(\tilde{I})}(\tilde{\varrho}_i) = \bigwedge\limits_{B \in X}\left[ \mathcal{H}_{\mathcal{Z}}(\tilde{\varrho}_i, B) \vee \mathcal{H}_{\tilde{I}}(B)\right]; \quad \mathcal{U}_{\underline{\mathcal{Z}}(\tilde{I})}(\tilde{\varrho}_i) = \bigvee\limits_{B \in X}\left[ \mathcal{U}_{\mathcal{Z}}(\tilde{\varrho}_i, B) \wedge \mathcal{U}_{\tilde{I}}(B)\right]; \end{equation*}

    such that

\begin{equation*} 0 \leq \mathcal{H}_{\overline{\mathcal{Z}}(\tilde{I})}(\tilde{\varrho}_i) + \mathcal{U}_{\overline{\mathcal{Z}}(\tilde{I})}(\tilde{\varrho}_i) \leq 1, \quad \text{and} \quad 0 \leq \mathcal{H}_{\underline{\mathcal{Z}}(\tilde{I})}(\tilde{\varrho}_i) + \mathcal{U}_{\underline{\mathcal{Z}}(\tilde{I})}(\tilde{\varrho}_i) \leq 1. \end{equation*}

    As $(\underline{\mathcal{Z}}(\tilde{I}), \overline{\mathcal{Z}}(\tilde{I}))$ are IFSs, $\underline{\mathcal{Z}}, \overline{\mathcal{Z}} : IFS(X) \rightarrow IFS(X)$ are the LA and UA operators. The pair

\begin{equation*} \mathcal{Z}(\tilde{I}) = (\underline{\mathcal{Z}}(\tilde{I}), \overline{\mathcal{Z}}(\tilde{I})) = \{\langle \tilde{\varrho}_i, (\mathcal{H}_{\underline{\mathcal{Z}}(\tilde{I})}(\tilde{\varrho}_i), \mathcal{U}_{\underline{\mathcal{Z}}(\tilde{I})}(\tilde{\varrho}_i)), (\mathcal{H}_{\overline{\mathcal{Z}}(\tilde{I})}(\tilde{\varrho}_i), \mathcal{U}_{\overline{\mathcal{Z}}(\tilde{I})}(\tilde{\varrho}_i))\rangle \mid \tilde{\varrho}_i \in X\} \end{equation*}

    is known as an IFRS. For simplicity, it is represented as $\mathcal{Z}(\tilde{I}) = ((\underline{\mathcal{H}}, \underline{\mathcal{U}}), (\overline{\mathcal{H}}, \overline{\mathcal{U}}))$ and is known as an intuitionistic fuzzy rough (IFR) value.

    Definition 2.8. Let $X$ be the universal set. Any subset $\mathcal{Z} \in PyHFS(X \times X)$ is said to be a PyHF relation, and the pair $(X, \mathcal{Z})$ is said to be a PyHF approximation space. For any $\tilde{I} \in PyHFS(X)$, the LA and UA of $\tilde{I}$ with respect to the PyHF approximation space $(X, \mathcal{Z})$ are two PyHFSs, which are denoted by $\underline{\mathcal{Z}}(\tilde{I})$ and $\overline{\mathcal{Z}}(\tilde{I})$ and defined as:

\begin{equation*} \overline{\mathcal{Z}}(\tilde{I}) = \{\langle \tilde{\varrho}_i, \mathcal{H}_{h\overline{\mathcal{Z}}(\tilde{I})}(\tilde{\varrho}_i), \mathcal{U}_{h\overline{\mathcal{Z}}(\tilde{I})}(\tilde{\varrho}_i)\rangle \mid \tilde{\varrho}_i \in X\} \end{equation*}

    and

\begin{equation*} \underline{\mathcal{Z}}(\tilde{I}) = \{\langle \tilde{\varrho}_i, \mathcal{H}_{h\underline{\mathcal{Z}}(\tilde{I})}(\tilde{\varrho}_i), \mathcal{U}_{h\underline{\mathcal{Z}}(\tilde{I})}(\tilde{\varrho}_i)\rangle \mid \tilde{\varrho}_i \in X\}, \end{equation*}

    where

\begin{equation*} \mathcal{H}_{h\overline{\mathcal{Z}}(\tilde{I})}(\tilde{\varrho}_i) = \bigvee\limits_{\vartheta \in X}\left[ \mathcal{H}_{h\mathcal{Z}}(\tilde{\varrho}_i, \vartheta) \wedge \mathcal{H}_{h\tilde{I}}(\vartheta)\right]; \quad \mathcal{U}_{h\overline{\mathcal{Z}}(\tilde{I})}(\tilde{\varrho}_i) = \bigwedge\limits_{\vartheta \in X}\left[ \mathcal{U}_{h\mathcal{Z}}(\tilde{\varrho}_i, \vartheta) \vee \mathcal{U}_{h\tilde{I}}(\vartheta)\right]; \end{equation*}
\begin{equation*} \mathcal{H}_{h\underline{\mathcal{Z}}(\tilde{I})}(\tilde{\varrho}_i) = \bigwedge\limits_{\vartheta \in X}\left[ \mathcal{H}_{h\mathcal{Z}}(\tilde{\varrho}_i, \vartheta) \vee \mathcal{H}_{h\tilde{I}}(\vartheta)\right]; \quad \mathcal{U}_{h\underline{\mathcal{Z}}(\tilde{I})}(\tilde{\varrho}_i) = \bigvee\limits_{\vartheta \in X}\left[ \mathcal{U}_{h\mathcal{Z}}(\tilde{\varrho}_i, \vartheta) \wedge \mathcal{U}_{h\tilde{I}}(\vartheta)\right]; \end{equation*}

    such that

\begin{equation*} 0 \leq \left( \max \left( \mathcal{H}_{h\overline{\mathcal{Z}}(\tilde{I})}(\tilde{\varrho}_i)\right) \right)^{2} + \left( \min \left( \mathcal{U}_{h\overline{\mathcal{Z}}(\tilde{I})}(\tilde{\varrho}_i)\right) \right)^{2} \leq 1 \end{equation*}

    and

\begin{equation*} 0 \leq \left( \min \left( \mathcal{H}_{h\underline{\mathcal{Z}}(\tilde{I})}(\tilde{\varrho}_i)\right) \right)^{2} + \left( \max \left( \mathcal{U}_{h\underline{\mathcal{Z}}(\tilde{I})}(\tilde{\varrho}_i)\right) \right)^{2} \leq 1. \end{equation*}

    As $(\underline{\mathcal{Z}}(\tilde{I}), \overline{\mathcal{Z}}(\tilde{I}))$ are PyHFSs, $\underline{\mathcal{Z}}, \overline{\mathcal{Z}} : PyHFS(X) \rightarrow PyHFS(X)$ are the LA and UA operators. The pair

\begin{equation*} \mathcal{Z}(\tilde{I}) = (\underline{\mathcal{Z}}(\tilde{I}), \overline{\mathcal{Z}}(\tilde{I})) = \{\langle \tilde{\varrho}_i, (\mathcal{H}_{h\underline{\mathcal{Z}}(\tilde{I})}(\tilde{\varrho}_i), \mathcal{U}_{h\underline{\mathcal{Z}}(\tilde{I})}(\tilde{\varrho}_i)), (\mathcal{H}_{h\overline{\mathcal{Z}}(\tilde{I})}(\tilde{\varrho}_i), \mathcal{U}_{h\overline{\mathcal{Z}}(\tilde{I})}(\tilde{\varrho}_i))\rangle \mid \tilde{\varrho}_i \in X\} \end{equation*}

    is called a Pythagorean hesitant fuzzy RS (PyHFRS). For simplicity, it is represented as $\mathcal{Z}(\tilde{I}) = ((\underline{\mathcal{H}}, \underline{\mathcal{U}}), (\overline{\mathcal{H}}, \overline{\mathcal{U}}))$ and is known as a PyHFR value.

    Definition 2.9. The score function employed for the PyHFR value $\mathcal{Z}(\tilde{I}) = (\underline{\mathcal{Z}}(\tilde{I}), \overline{\mathcal{Z}}(\tilde{I})) = ((\underline{\mathcal{H}}, \underline{\mathcal{U}}), (\overline{\mathcal{H}}, \overline{\mathcal{U}}))$ is given as:

\begin{equation*} SR(\mathcal{Z}(\tilde{I})) = \frac{1}{4}\left( 2 + \frac{1}{S}\sum\limits_{\mu \in \underline{\mathcal{H}}}\mu + \frac{1}{S}\sum\limits_{\mu \in \overline{\mathcal{H}}}\mu - \frac{1}{N}\sum\limits_{\nu \in \underline{\mathcal{U}}}\nu - \frac{1}{N}\sum\limits_{\nu \in \overline{\mathcal{U}}}\nu \right). \end{equation*}

    The accuracy function for the PyHFR value $\mathcal{Z}(\tilde{I}) = ((\underline{\mathcal{H}}, \underline{\mathcal{U}}), (\overline{\mathcal{H}}, \overline{\mathcal{U}}))$ is given as:

\begin{equation*} AC(\mathcal{Z}(\tilde{I})) = \frac{1}{4}\left( \frac{1}{S}\sum\limits_{\mu \in \underline{\mathcal{H}}}\mu + \frac{1}{S}\sum\limits_{\mu \in \overline{\mathcal{H}}}\mu + \frac{1}{N}\sum\limits_{\nu \in \underline{\mathcal{U}}}\nu + \frac{1}{N}\sum\limits_{\nu \in \overline{\mathcal{U}}}\nu \right), \end{equation*}

    where $S$ and $N$ represent the numbers of elements in the hesitant MG sets $\mathcal{H}$ and NMG sets $\mathcal{U}$, respectively.

    Definition 2.10. Suppose $\mathcal{Z}(\tilde{I}_1) = (\underline{\mathcal{Z}}(\tilde{I}_1), \overline{\mathcal{Z}}(\tilde{I}_1))$ and $\mathcal{Z}(\tilde{I}_2) = (\underline{\mathcal{Z}}(\tilde{I}_2), \overline{\mathcal{Z}}(\tilde{I}_2))$ are two PyHFR values. Then:

    (1) if $SR(\mathcal{Z}(\tilde{I}_1)) > SR(\mathcal{Z}(\tilde{I}_2))$, then $\mathcal{Z}(\tilde{I}_1) > \mathcal{Z}(\tilde{I}_2)$;

    (2) if $SR(\mathcal{Z}(\tilde{I}_1)) < SR(\mathcal{Z}(\tilde{I}_2))$, then $\mathcal{Z}(\tilde{I}_1) < \mathcal{Z}(\tilde{I}_2)$;

    (3) if $SR(\mathcal{Z}(\tilde{I}_1)) = SR(\mathcal{Z}(\tilde{I}_2))$, then:

    (a) if $AC(\mathcal{Z}(\tilde{I}_1)) > AC(\mathcal{Z}(\tilde{I}_2))$, then $\mathcal{Z}(\tilde{I}_1) > \mathcal{Z}(\tilde{I}_2)$;

    (b) if $AC(\mathcal{Z}(\tilde{I}_1)) < AC(\mathcal{Z}(\tilde{I}_2))$, then $\mathcal{Z}(\tilde{I}_1) < \mathcal{Z}(\tilde{I}_2)$;

    (c) if $AC(\mathcal{Z}(\tilde{I}_1)) = AC(\mathcal{Z}(\tilde{I}_2))$, then $\mathcal{Z}(\tilde{I}_1) = \mathcal{Z}(\tilde{I}_2)$.
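    The score-then-accuracy ranking of Definitions 2.9 and 2.10 can be sketched in a few lines. The function names and sample values below are my own, and the sketch reads $S$ and $N$ as the cardinalities of the corresponding hesitant MG and NMG sets:

```python
# Sketch of the score and accuracy functions (Definition 2.9) and the
# comparison rule (Definition 2.10). A PyHFR value is represented as
# ((H_lo, U_lo), (H_up, U_up)), each component a list of grades in [0, 1].

def score(v):
    (h_lo, u_lo), (h_up, u_up) = v
    mg = sum(h_lo) / len(h_lo) + sum(h_up) / len(h_up)
    nmg = sum(u_lo) / len(u_lo) + sum(u_up) / len(u_up)
    return (2 + mg - nmg) / 4

def accuracy(v):
    (h_lo, u_lo), (h_up, u_up) = v
    return (sum(h_lo) / len(h_lo) + sum(h_up) / len(h_up)
            + sum(u_lo) / len(u_lo) + sum(u_up) / len(u_up)) / 4

def rank(v1, v2):
    """Compare by score; break ties by accuracy (Definition 2.10)."""
    s1, s2 = score(v1), score(v2)
    if s1 != s2:
        return 1 if s1 > s2 else -1
    a1, a2 = accuracy(v1), accuracy(v2)
    return (a1 > a2) - (a1 < a2)

# Hypothetical PyHFR values.
v1 = (([0.3, 0.4], [0.2]), ([0.5], [0.1, 0.2]))
v2 = (([0.2], [0.3]), ([0.4, 0.5], [0.2]))
print(score(v1), score(v2), rank(v1, v2))
```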

    This section describes in detail the Dombi product and Dombi sum (see [44] for information) and discusses some basic operations based on PyHFRSs, which will be useful for the subsequent analysis. The following are the specific types of triangular norms and co-norms:

    Definition 3.1. [44] Let $\sigma_1$ and $\sigma_2$ be any two real numbers within the interval [0, 1]. The Dombi t-norm can be expressed as

\begin{equation*} \sigma_1 \otimes \sigma_2 = \frac{1}{1+\left\{ \left( \frac{1-\sigma_1}{\sigma_1}\right)^{\zeta} + \left( \frac{1-\sigma_2}{\sigma_2}\right)^{\zeta}\right\}^{1/\zeta}}, \quad \zeta > 0. \end{equation*}

    The Dombi t-conorm is given by

\begin{equation*} \sigma_1 \oplus \sigma_2 = 1 - \frac{1}{1+\left\{ \left( \frac{\sigma_1}{1-\sigma_1}\right)^{\zeta} + \left( \frac{\sigma_2}{1-\sigma_2}\right)^{\zeta}\right\}^{1/\zeta}}, \quad \zeta > 0. \end{equation*}
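    A minimal sketch of these two operations, with my own function names; the inputs must lie strictly inside (0, 1) for the ratios to be defined:

```python
# Sketch of the Dombi t-norm and t-conorm of Definition 3.1.

def dombi_tnorm(a, b, zeta=1.0):
    # T(a, b) = 1 / (1 + (((1-a)/a)^zeta + ((1-b)/b)^zeta)^(1/zeta))
    s = ((1 - a) / a) ** zeta + ((1 - b) / b) ** zeta
    return 1 / (1 + s ** (1 / zeta))

def dombi_tconorm(a, b, zeta=1.0):
    # S(a, b) = 1 - 1 / (1 + ((a/(1-a))^zeta + (b/(1-b))^zeta)^(1/zeta))
    s = (a / (1 - a)) ** zeta + (b / (1 - b)) ** zeta
    return 1 - 1 / (1 + s ** (1 / zeta))

a, b = 0.6, 0.3
# For zeta = 1 the Dombi t-norm reduces to a*b/(a+b-a*b) (Hamacher product).
print(dombi_tnorm(a, b))     # approx. 0.25 = 0.6*0.3/(0.6+0.3-0.18)
print(dombi_tconorm(a, b))   # dual: equals 1 - dombi_tnorm(1-a, 1-b)
```

    The duality noted in the last comment is the standard t-norm/t-conorm relationship $S(a, b) = 1 - T(1-a, 1-b)$, which holds for every $\zeta > 0$.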

    Definition 3.2. Let $K_1 = \{(\mathcal{H}_{1s}, \mathcal{U}_{1s}) : 1 \leq s \leq \ell_{K_1}\}$ and $K_2 = \{(\mathcal{H}_{2k}, \mathcal{U}_{2k}) : 1 \leq k \leq \ell_{K_2}\}$ be two PyHFRSs and $\zeta > 0$. The Dombi operations for PyHFR numbers are outlined below; in each pair, the first component aggregates the MGs and the second the NMGs.

    (1)
\begin{equation*} K_1 \boxplus K_2 = \bigcup\limits_{\substack{\mathcal{H}_{1s}, \mathcal{U}_{1s} \in K_1 \\ \mathcal{H}_{2k}, \mathcal{U}_{2k} \in K_2}}\left\{ \sqrt{1-\frac{1}{1+\left\{ \left( \frac{\mathcal{H}_{1s}^{2}}{1-\mathcal{H}_{1s}^{2}}\right)^{\zeta} + \left( \frac{\mathcal{H}_{2k}^{2}}{1-\mathcal{H}_{2k}^{2}}\right)^{\zeta}\right\}^{1/\zeta}}}, \ \sqrt{\frac{1}{1+\left\{ \left( \frac{1-\mathcal{U}_{1s}^{2}}{\mathcal{U}_{1s}^{2}}\right)^{\zeta} + \left( \frac{1-\mathcal{U}_{2k}^{2}}{\mathcal{U}_{2k}^{2}}\right)^{\zeta}\right\}^{1/\zeta}}}\right\}; \end{equation*}

    (2)
\begin{equation*} K_1 \boxtimes K_2 = \bigcup\limits_{\substack{\mathcal{H}_{1s}, \mathcal{U}_{1s} \in K_1 \\ \mathcal{H}_{2k}, \mathcal{U}_{2k} \in K_2}}\left\{ \sqrt{\frac{1}{1+\left\{ \left( \frac{1-\mathcal{H}_{1s}^{2}}{\mathcal{H}_{1s}^{2}}\right)^{\zeta} + \left( \frac{1-\mathcal{H}_{2k}^{2}}{\mathcal{H}_{2k}^{2}}\right)^{\zeta}\right\}^{1/\zeta}}}, \ \sqrt{1-\frac{1}{1+\left\{ \left( \frac{\mathcal{U}_{1s}^{2}}{1-\mathcal{U}_{1s}^{2}}\right)^{\zeta} + \left( \frac{\mathcal{U}_{2k}^{2}}{1-\mathcal{U}_{2k}^{2}}\right)^{\zeta}\right\}^{1/\zeta}}}\right\}; \end{equation*}

    (3)
\begin{equation*} \lambda K_1 = \bigcup\limits_{\mathcal{H}_{1s}, \mathcal{U}_{1s} \in K_1}\left\{ \sqrt{1-\frac{1}{1+\left\{ \lambda \left( \frac{\mathcal{H}_{1s}^{2}}{1-\mathcal{H}_{1s}^{2}}\right)^{\zeta}\right\}^{1/\zeta}}}, \ \sqrt{\frac{1}{1+\left\{ \lambda \left( \frac{1-\mathcal{U}_{1s}^{2}}{\mathcal{U}_{1s}^{2}}\right)^{\zeta}\right\}^{1/\zeta}}}\right\}; \end{equation*}

    (4)
\begin{equation*} K_1^{\lambda} = \bigcup\limits_{\mathcal{H}_{1s}, \mathcal{U}_{1s} \in K_1}\left\{ \sqrt{\frac{1}{1+\left\{ \lambda \left( \frac{1-\mathcal{H}_{1s}^{2}}{\mathcal{H}_{1s}^{2}}\right)^{\zeta}\right\}^{1/\zeta}}}, \ \sqrt{1-\frac{1}{1+\left\{ \lambda \left( \frac{\mathcal{U}_{1s}^{2}}{1-\mathcal{U}_{1s}^{2}}\right)^{\zeta}\right\}^{1/\zeta}}}\right\}. \end{equation*}

    Let $\beth^{n} = \left\{ \wp_{\imath} = \langle (\underline{\mathcal{H}}_{\imath j}, \underline{\mathcal{U}}_{\imath j}), (\overline{\mathcal{H}}_{\imath j}, \overline{\mathcal{U}}_{\imath j})\rangle : 1 \leq \imath \leq \ell_{h_{i}}, \ j = 1, 2, 3, ..., n\right\}$ be an $n$-dimensional collection of PyHFRSs. A PyHFRDWAA operator is defined by the function $PyHFRDWAA : \beth^{n} \longmapsto \beth$ as follows:

\begin{equation*} PyHFRDWAA(\wp_{1}, \wp_{2}, \wp_{3}, ..., \wp_{n}) = \left( \boxplus_{\imath = 1}^{n}\left( w_{\imath}\underline{\wp}_{\imath}\right), \ \boxplus_{\imath = 1}^{n}\left( w_{\imath}\overline{\wp}_{\imath}\right) \right), \end{equation*}

    where $w_{\imath}$ is the weight of $\wp_{\imath}$ $(\imath = 1, 2, 3, ..., n)$, with $0 \leq w_{\imath} \leq 1$ and $\sum_{\imath = 1}^{n}w_{\imath} = 1$.

    Theorem 3.3. Let $\wp_{\imath} \in \beth^{n}$ $(\imath = 1, 2, 3, ..., n)$. Then $PyHFRDWAA(\wp_{1}, \wp_{2}, \wp_{3}, ..., \wp_{n}) = \left( \boxplus_{\imath = 1}^{n}\left( w_{\imath}\underline{\wp}_{\imath}\right), \boxplus_{\imath = 1}^{n}\left( w_{\imath}\overline{\wp}_{\imath}\right) \right)$, where

\begin{equation*} \boxplus_{\imath = 1}^{n}\left( w_{\imath}\underline{\wp}_{\imath}\right) = \bigcup\limits_{\underline{\mathcal{H}}_{\imath}, \underline{\mathcal{U}}_{\imath}}\left\{ \sqrt{1-\frac{1}{1+\left\{ \sum_{\imath = 1}^{n}w_{\imath}\left( \frac{\underline{\mathcal{H}}_{\imath}^{2}}{1-\underline{\mathcal{H}}_{\imath}^{2}}\right)^{\zeta}\right\}^{1/\zeta}}}, \ \sqrt{\frac{1}{1+\left\{ \sum_{\imath = 1}^{n}w_{\imath}\left( \frac{1-\underline{\mathcal{U}}_{\imath}^{2}}{\underline{\mathcal{U}}_{\imath}^{2}}\right)^{\zeta}\right\}^{1/\zeta}}}\right\} \end{equation*}

    and

\begin{equation*} \boxplus_{\imath = 1}^{n}\left( w_{\imath}\overline{\wp}_{\imath}\right) = \bigcup\limits_{\overline{\mathcal{H}}_{\imath}, \overline{\mathcal{U}}_{\imath}}\left\{ \sqrt{1-\frac{1}{1+\left\{ \sum_{\imath = 1}^{n}w_{\imath}\left( \frac{\overline{\mathcal{H}}_{\imath}^{2}}{1-\overline{\mathcal{H}}_{\imath}^{2}}\right)^{\zeta}\right\}^{1/\zeta}}}, \ \sqrt{\frac{1}{1+\left\{ \sum_{\imath = 1}^{n}w_{\imath}\left( \frac{1-\overline{\mathcal{U}}_{\imath}^{2}}{\overline{\mathcal{U}}_{\imath}^{2}}\right)^{\zeta}\right\}^{1/\zeta}}}\right\}, \end{equation*}

    where $w_{\imath}$ is the weight of $\wp_{\imath}$ $(\imath = 1, 2, 3, ..., n)$, $0 \leq w_{\imath} \leq 1$, and $\sum_{\imath = 1}^{n}w_{\imath} = 1$.

    Proof. The theorem can be proved using the approach of mathematical induction as follows:

    Step 1. When $n = 2$, the Dombi operations for PyHFRSs give

\begin{equation*} w_{1}\underline{\wp}_{1} = \bigcup\limits_{\underline{\mathcal{H}}_{1}, \underline{\mathcal{U}}_{1}}\left\{ \sqrt{1-\frac{1}{1+\left\{ w_{1}\left( \frac{\underline{\mathcal{H}}_{1}^{2}}{1-\underline{\mathcal{H}}_{1}^{2}}\right)^{\zeta}\right\}^{1/\zeta}}}, \ \sqrt{\frac{1}{1+\left\{ w_{1}\left( \frac{1-\underline{\mathcal{U}}_{1}^{2}}{\underline{\mathcal{U}}_{1}^{2}}\right)^{\zeta}\right\}^{1/\zeta}}}\right\}, \end{equation*}

    and similarly for $w_{2}\underline{\wp}_{2}$ and for the upper parts $w_{1}\overline{\wp}_{1}$ and $w_{2}\overline{\wp}_{2}$. Applying the Dombi addition, the bracketed exponent terms add, so

\begin{equation*} w_{1}\underline{\wp}_{1} \boxplus w_{2}\underline{\wp}_{2} = \bigcup\left\{ \sqrt{1-\frac{1}{1+\left\{ \sum_{\imath = 1}^{2}w_{\imath}\left( \frac{\underline{\mathcal{H}}_{\imath}^{2}}{1-\underline{\mathcal{H}}_{\imath}^{2}}\right)^{\zeta}\right\}^{1/\zeta}}}, \ \sqrt{\frac{1}{1+\left\{ \sum_{\imath = 1}^{2}w_{\imath}\left( \frac{1-\underline{\mathcal{U}}_{\imath}^{2}}{\underline{\mathcal{U}}_{\imath}^{2}}\right)^{\zeta}\right\}^{1/\zeta}}}\right\}, \end{equation*}

    with the analogous expression for the upper approximation part. The theorem thus holds for $n = 2$.

    Step 2. Suppose the result is true for $n = k$, that is,

\begin{equation*} \boxplus_{\imath = 1}^{k}\left( w_{\imath}\underline{\wp}_{\imath}\right) = \bigcup\left\{ \sqrt{1-\frac{1}{1+\left\{ \sum_{\imath = 1}^{k}w_{\imath}\left( \frac{\underline{\mathcal{H}}_{\imath}^{2}}{1-\underline{\mathcal{H}}_{\imath}^{2}}\right)^{\zeta}\right\}^{1/\zeta}}}, \ \sqrt{\frac{1}{1+\left\{ \sum_{\imath = 1}^{k}w_{\imath}\left( \frac{1-\underline{\mathcal{U}}_{\imath}^{2}}{\underline{\mathcal{U}}_{\imath}^{2}}\right)^{\zeta}\right\}^{1/\zeta}}}\right\}, \end{equation*}

    together with the analogous upper expression.

    Step 3. When $n = k+1$,

\begin{equation*} PyHFRDWAA(\wp_{1}, \wp_{2}, \wp_{3}, ..., \wp_{k+1}) = \left( \boxplus_{\imath = 1}^{k}\left( w_{\imath}\underline{\wp}_{\imath}\right) \boxplus \left( w_{k+1}\underline{\wp}_{k+1}\right), \ \boxplus_{\imath = 1}^{k}\left( w_{\imath}\overline{\wp}_{\imath}\right) \boxplus \left( w_{k+1}\overline{\wp}_{k+1}\right) \right). \end{equation*}

    Applying the Dombi addition to the induction hypothesis and the extra term $w_{k+1}\wp_{k+1}$ merges the exponent sums, giving

\begin{equation*} \boxplus_{\imath = 1}^{k+1}\left( w_{\imath}\underline{\wp}_{\imath}\right) = \bigcup\left\{ \sqrt{1-\frac{1}{1+\left\{ \sum_{\imath = 1}^{k+1}w_{\imath}\left( \frac{\underline{\mathcal{H}}_{\imath}^{2}}{1-\underline{\mathcal{H}}_{\imath}^{2}}\right)^{\zeta}\right\}^{1/\zeta}}}, \ \sqrt{\frac{1}{1+\left\{ \sum_{\imath = 1}^{k+1}w_{\imath}\left( \frac{1-\underline{\mathcal{U}}_{\imath}^{2}}{\underline{\mathcal{U}}_{\imath}^{2}}\right)^{\zeta}\right\}^{1/\zeta}}}\right\}, \end{equation*}

    and likewise for the upper part. So the result is true for $n = k+1$; hence, by induction, the theorem holds for all $n \in \mathbb{Z}^{+}$.

    Example 3.4. Suppose

\begin{eqnarray*} \wp_{1} & = & \langle (\{0.13, 0.17, 0.18\}, \{0.19, 0.25, 0.27\}), (\{0.21, 0.26, 0.29\}, \{0.18, 0.25, 0.26\})\rangle, \\ \wp_{2} & = & \langle (\{0.19, 0.22, 0.23\}, \{0.16, 0.19, 0.22\}), (\{0.22, 0.29, 0.31\}, \{0.17, 0.26, 0.34\})\rangle, \\ \wp_{3} & = & \langle (\{0.13, 0.14, 0.29\}, \{0.19, 0.20, 0.22\}), (\{0.15, 0.18, 0.19\}, \{0.13, 0.19, 0.20\})\rangle, \\ \wp_{4} & = & \langle (\{0.12, 0.16, 0.22\}, \{0.14, 0.15, 0.18\}), (\{0.18, 0.19, 0.26\}, \{0.19, 0.23, 0.26\})\rangle \end{eqnarray*}

    for $n = 4$ and $\zeta = 1$, with weight vector $w = (0.15, 0.22, 0.26, 0.37)$. Applying Theorem 3.3, we get

\begin{equation*} PyHFRDWAA(\wp_{1}, \wp_{2}, \wp_{3}, \wp_{4}) = \left( \boxplus_{\imath = 1}^{4}\left( w_{\imath}\underline{\wp}_{\imath}\right), \ \boxplus_{\imath = 1}^{4}\left( w_{\imath}\overline{\wp}_{\imath}\right) \right), \end{equation*}

    which yields

\begin{equation*} PyHFRDWAA(\wp_{1}, \wp_{2}, \wp_{3}, \wp_{4}) = \left\{ \begin{array}{c} \left[ (0.1404, 0.1705, 0.2362), (0.1619, 0.1810, 0.2086)\right], \\ \left[ (0.1863, 0.2227, 0.2599), (0.1646, 0.2261, 0.2534)\right] \end{array} \right\}. \end{equation*}
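    Under the assumption that the hesitant values are paired component-wise, the closed form of Theorem 3.3 can be sketched as follows. The function names are mine, and the sketch is checked against the idempotency property (Theorem 3.5) rather than against the tabulated output above:

```python
import math

# Sketch of the membership and non-membership components of PyHFRDWAA
# (Theorem 3.3), applied to one hesitant component at a time.

def dwaa_membership(mus, weights, zeta=1.0):
    # sqrt(1 - 1/(1 + (sum_i w_i (mu_i^2/(1-mu_i^2))^zeta)^(1/zeta)))
    s = sum(w * (m ** 2 / (1 - m ** 2)) ** zeta for m, w in zip(mus, weights))
    return math.sqrt(1 - 1 / (1 + s ** (1 / zeta)))

def dwaa_nonmembership(nus, weights, zeta=1.0):
    # sqrt(1/(1 + (sum_i w_i ((1-nu_i^2)/nu_i^2)^zeta)^(1/zeta)))
    s = sum(w * ((1 - n ** 2) / n ** 2) ** zeta for n, w in zip(nus, weights))
    return math.sqrt(1 / (1 + s ** (1 / zeta)))

# First hesitant components of the lower approximations in Example 3.4.
w = [0.15, 0.22, 0.26, 0.37]
mu = dwaa_membership([0.13, 0.19, 0.13, 0.12], w)
nu = dwaa_nonmembership([0.19, 0.16, 0.19, 0.14], w)
print(mu, nu)

# Idempotency (Theorem 3.5): aggregating identical grades returns the grade.
assert abs(dwaa_membership([0.3] * 4, w) - 0.3) < 1e-9
assert abs(dwaa_nonmembership([0.2] * 4, w) - 0.2) < 1e-9
```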

Theorem 3.5. (Idempotency): Let \wp _{\imath }\ (\imath = 1, 2, 3, ..., n) be a collection of PyHFRNs, where

    \begin{equation*} \wp _{\imath } = \left\{ \langle \left( \downarrow{\mathcal{H}}_{\imath j},\text{ }\downarrow{\mathcal{U}}_{\imath j}\right) ,\left( \uparrow{\mathcal{H}}_{\imath j},\text{ }\uparrow{\mathcal{U}}_{\imath j}\right) \rangle :\imath = 1,2,3,...,n\right\} . \end{equation*}

    If all these PyHFR elements are equal, i.e., \wp _{\imath } = \wp for all \imath , then PyHFRDWAA(\wp _{1}, \wp _{2}, \wp _{3}, ..., \wp _{n}) = \wp .

Let

    \begin{equation*} \beth ^{n} = \left\{ \wp _{\imath } = \langle \left( \downarrow{\mathcal{H}}_{\imath j},\text{ }\downarrow{\mathcal{U}}_{\imath j}\right) ,\left( \uparrow{\mathcal{H}}_{\imath j},\text{ }\uparrow{\mathcal{U}}_{\imath j}\right) \rangle :1\leq \imath \leq \ell _{h_{i}},\text{ }j = 1,2,3,...,n\right\} \end{equation*}

    be an n -dimensional collection of PyHFRSs. A PyHFRDOWAA operator is defined by the function PyHFRDOWAA: \beth ^{n} \; \longmapsto \beth as follows:

    \begin{equation*} PyHFRDOWAA(\wp _{1},\wp _{2},\wp _{3},...,\wp _{n}) = \left( \boxplus _{\imath = 1}^{n}\left( w_{\imath }\downarrow{\wp }_{\vartheta (\imath )}\right) ,\boxplus _{\imath = 1}^{n}\left( w_{\imath }\uparrow{\wp }_{\vartheta (\imath )}\right) \right) , \end{equation*}

    where \wp _{\vartheta (\imath)} is the \imath th largest of \wp _{\imath } and w_{\imath } is the weight vector of \wp _{\imath }(\imath = 1, 2, 3, ..., n ), such that 0\leq w_{\imath }\leq 1 and \boxplus _{\imath = 1}^{n}w_{\imath } = 1 .

Theorem 3.6. Let \wp _{\imath }\in \; \beth ^{n}\ (\imath = 1, 2, 3, ..., n) . Then

    \begin{eqnarray*} PyHFRDOWAA(\wp _{1},\wp _{2},\wp _{3},...,\wp _{n}) & = &\left( \boxplus _{\imath = 1}^{n}\left( w_{\imath }\downarrow{\wp }_{\vartheta (\imath )}\right) ,\boxplus _{\imath = 1}^{n}\left( w_{\imath }\uparrow{\wp }_{\vartheta (\imath )}\right) \right) \\ & = &\left[ \bigcup\limits_{\substack{ \downarrow{\mathcal{H}}_{\imath },\text{ }\downarrow{\mathcal{U}}_{\imath }\in \mathcal{K}_{\imath } \\ \imath = 1,2,...,n}}\left\{ \left( \sqrt{1-\frac{1}{1+\left( \boxplus _{\imath = 1}^{n}w_{\imath }\left( \frac{\downarrow{\mathcal{H}}_{\vartheta (\imath )}^{2}}{1-\downarrow{\mathcal{H}}_{\vartheta (\imath )}^{2}}\right) ^{\zeta }\right) ^{\frac{1}{\zeta }}}}\right) ,\left( \sqrt{\frac{1}{1+\left( \boxplus _{\imath = 1}^{n}w_{\imath }\left( \frac{1-\downarrow{\mathcal{U}}_{\vartheta (\imath )}^{2}}{\downarrow{\mathcal{U}}_{\vartheta (\imath )}^{2}}\right) ^{\zeta }\right) ^{\frac{1}{\zeta }}}}\right) \right\} ,\right. \\ & &\left. \bigcup\limits_{\substack{ \uparrow{\mathcal{H}}_{\imath },\text{ }\uparrow{\mathcal{U}}_{\imath }\in \mathcal{K}_{\imath } \\ \imath = 1,2,...,n}}\left\{ \left( \sqrt{1-\frac{1}{1+\left( \boxplus _{\imath = 1}^{n}w_{\imath }\left( \frac{\uparrow{\mathcal{H}}_{\vartheta (\imath )}^{2}}{1-\uparrow{\mathcal{H}}_{\vartheta (\imath )}^{2}}\right) ^{\zeta }\right) ^{\frac{1}{\zeta }}}}\right) ,\left( \sqrt{\frac{1}{1+\left( \boxplus _{\imath = 1}^{n}w_{\imath }\left( \frac{1-\uparrow{\mathcal{U}}_{\vartheta (\imath )}^{2}}{\uparrow{\mathcal{U}}_{\vartheta (\imath )}^{2}}\right) ^{\zeta }\right) ^{\frac{1}{\zeta }}}}\right) \right\} \right] , \end{eqnarray*}

where \wp _{\vartheta (\imath)} is the \imath th largest of \wp _{\imath } and w_{\imath } is the weight vector of \wp _{\imath }(\imath = 1, 2, 3, ..., n ), such that 0\leq w_{\imath }\leq 1 and \boxplus _{\imath = 1}^{n}w_{\imath } = 1.

    Proof. The proof is straightforward and similar to the proof of Theorem 1.
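The distinguishing step of the ordered operator is the permutation \vartheta applied before the positional weights. A minimal Python sketch, using the generic Pythagorean score h^{2}-u^{2} as a stand-in for the paper's own score function and a single (membership, non-membership) pair per argument:

```python
from math import sqrt

def pf_score(pair):
    # Generic Pythagorean score h^2 - u^2, used here only as a stand-in
    # for the paper's newly defined score function when ordering arguments.
    h, u = pair
    return h * h - u * u

def dombi_owa(pairs, weights, zeta=1.0):
    """Sketch of the ordered (OWA-style) Dombi aggregation: sort the
    (membership, non-membership) pairs descending by score, then apply
    the positional weights, as in the PyHFRDOWAA definition."""
    ordered = sorted(pairs, key=pf_score, reverse=True)  # the permutation
    s_m = sum(w * (h * h / (1 - h * h)) ** zeta for w, (h, u) in zip(weights, ordered))
    s_n = sum(w * ((1 - u * u) / (u * u)) ** zeta for w, (h, u) in zip(weights, ordered))
    return (sqrt(1 - 1 / (1 + s_m ** (1 / zeta))),
            sqrt(1 / (1 + s_n ** (1 / zeta))))
```

Because the arguments are re-ordered before the weights are attached, the result is invariant under any permutation of the input list, which is the defining property that separates the ordered operator from PyHFRDWAA.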

Let \beth ^{n} = \left\{ \wp _{\imath } = \langle \left(\downarrow{ \mathcal{H}}_{\imath j}, \text{ }\downarrow{\mathcal{U}}_{\imath j}\right), \left(\uparrow{\mathcal{H}}_{\imath j}, \text{ }\uparrow{\mathcal{U}} _{\imath j}\right) \rangle :1\leq \imath \leq \ell _{h_{i}}, \text{ } j = 1, 2, 3, ..., n\right\} be an n -dimensional collection of PyHFRSs and let w_{\imath } be the weight vector of \wp _{\imath }(\imath = 1, 2, 3, ..., n ), such that 0\leq w_{\imath }\leq 1 and \boxplus _{\imath = 1}^{n}w_{\imath } = 1 . A PyHFRDHWAA operator is defined by the function PyHFRDHWAA: \beth ^{n} \; \longmapsto \beth as follows:

    \begin{equation*} PyHFRDHWAA(\wp _{1},\wp _{2},\wp _{3},...,\wp _{n}) = \left( \boxplus _{\imath = 1}^{n}\left( \eta _{\imath }\downarrow{\widehat{\wp }}_{\delta (\imath )}\right) ,\boxplus _{\imath = 1}^{n}\left( \eta _{\imath }\uparrow{\widehat{ \wp }}_{\delta (\imath )}\right) \right). \end{equation*}

    Let \Omega = \left(\eta _{1}, \eta _{2}, ..., \eta _{n}\right) ^{T} be the associated weight vector such that \sum_{\imath = 1}^{n}\eta _{\imath } = 1 and 0\leq \; \eta _{\imath }\leq 1 .

    Theorem 3.7. Let \wp _{\imath }\in \; \beth ^{n}(\imath = 1, 2, 3, ..., n ). Then,

    \begin{eqnarray*} PyHFRDHWAA(\wp _{1},\wp _{2},\wp _{3},...,\wp _{n}) & = &\left( \boxplus _{\imath = 1}^{n}\left( \eta _{\imath }\downarrow{\widehat{\wp }}_{\delta (\imath )}\right) ,\boxplus _{\imath = 1}^{n}\left( \eta _{\imath }\uparrow{ \widehat{\wp }}_{\delta (\imath )}\right) \right) \\ & = &\left[ \begin{array}{c} \bigcup\limits_{ \begin{array}{c} \begin{array}{c} \downarrow{\mathcal{H}}_{1},\mathit{\text{}}\downarrow{\mathcal{U}}_{1}\in \mathcal{K }_{1} \\ \downarrow{\mathcal{H}}_{2},\mathit{\text{}}\downarrow{\mathcal{U}}_{2}\in \mathcal{K }_{2} \end{array} \\ ... \\ \downarrow{\mathcal{H}}_{n},\mathit{\text{}}\downarrow{\mathcal{U}}_{n}\in \mathcal{K }_{n} \end{array} }\left\{ \begin{array}{c} \left( \sqrt{1-\frac{1}{1+\left( \boxplus _{\imath = 1}^{n}\eta _{\imath }\left( \frac{\downarrow{\widehat{\mathcal{H}}}_{_{\delta (\imath )}}^{2}}{1- \downarrow{\widehat{\mathcal{H}}}_{_{\delta (\imath )}}^{2}}\right) ^{\zeta }\right) ^{\frac{1}{\zeta }}}}\right) , \\ \left( \sqrt{\frac{1}{1+\left( \boxplus _{\imath = 1}^{n}\eta _{\imath }\left( \frac{1-\downarrow{\widehat{\mathcal{U}}}_{_{\delta (\imath )}}^{2}}{ \downarrow{\widehat{\mathcal{U}}}_{_{\delta (\imath )}}^{2}}\right) ^{\zeta }\right) ^{\frac{1}{\zeta }}}}\right) \end{array} \right\} , \\ \bigcup\limits_{ \begin{array}{c} \begin{array}{c} \uparrow{\mathcal{H}}_{1},\mathit{\text{}}\uparrow{\mathcal{U}}_{1}\in \mathcal{K} _{1} \\ \uparrow{\mathcal{H}}_{2},\mathit{\text{}}\uparrow{\mathcal{U}}_{2}\in \mathcal{K} _{2} \end{array} \\ ... 
\\ \uparrow{\mathcal{H}}_{n},\mathit{\text{}}\uparrow{\mathcal{U}}_{n}\in \mathcal{K} _{n} \end{array} }\left\{ \begin{array}{c} \left( \sqrt{1-\frac{1}{1+\left( \boxplus _{\imath = 1}^{n}\eta _{\imath }\left( \frac{\uparrow{\widehat{\mathcal{H}}}_{_{\delta (\imath )}}^{2}}{1- \uparrow{\widehat{\mathcal{H}}}_{_{\delta (\imath )}}^{2}}\right) ^{\zeta }\right) ^{\frac{1}{\zeta }}}}\right) , \\ \left( \sqrt{\frac{1}{1+\left( \boxplus _{\imath = 1}^{n}\eta _{\imath }\left( \frac{1-\uparrow{\widehat{\mathcal{U}}}_{_{\delta (\imath )}}^{2}}{ \uparrow{\widehat{\mathcal{U}}}_{_{\delta (\imath )}}^{2}}\right) ^{\zeta }\right) ^{\frac{1}{\zeta }}}}\right) \end{array} \right\} \end{array} \right], \end{eqnarray*}

where \widehat{\wp }_{\delta (\imath)} = \left(\wp _{\delta (\imath)}\right) ^{n\varpi _{i}} = (\left(\downarrow{\wp} _{\imath }\right) ^{n\varpi _{i}}, \left(\uparrow{\wp} _{\imath }\right) ^{n\varpi _{i}}) denotes the \imath th largest weighted value of the permutation from the collection of PyHFRVs, and n represents the balancing coefficient.

    Proof. The proof is straightforward and similar to the proof of Theorem 1.

MADM is a crucial procedure for selecting an alternative from a set of desirable alternatives based on the opinions of experts regarding the attributes. A team of experts thoroughly evaluates each alternative against the various attributes and provides an assessment in the form of a PyHFRV for each alternative with respect to each attribute. The information collected from the decision-makers is aggregated using the weights given by the experts, and the aggregated data are then used to assess the overall value of each alternative, taking into account the weight assigned to each attribute. MADM is beneficial to a wide range of industries and is widely used in fields including business, economics, machine learning, intelligent systems, pattern recognition, image processing, signal assessment, data mining, knowledge discovery, feature selection and reduction, classification, clustering, the management of imprecise and inconsistent information in decision-making processes, the analysis of medical data to assist diagnosis and treatment planning, engineering, and more. Here, we present an MADM methodology that involves the following crucial steps:

Step 1. Let A = \{A_{1}, A_{2}, ..., A_{l}\} , \mathcal{C} = \{c_{1}, c_{2}, ..., c_{j}\} and \mathcal{D} = \{d_{1}, d_{2}, ..., d_{t}\} be the sets of alternatives, criteria, and decision-makers, respectively. Let the weight vector be \mathcal{W} = (w_{1}, w_{2}, w_{3}, ..., w_{j}) , where w_{s}\in (0, 1] and \boxplus _{s = 1}^{j}w_{s} = 1 . The MADM method comprises the following procedures:

Step 2. The assessment of the alternative A_{i} based on criterion c_{s} by decision-maker d_{r}\left(r = 1, 2, 3, ..., t\right) can be written as \widetilde{\psi} _{rs}\left(r = 1, 2, 3, ..., t;s = 1, 2, 3, ..., j\right) . Consequently, the PyHFR decision matrix D_{A_{i}} = [\widetilde{\psi} _{rs}]_{t\times j} may be established as follows:

    \begin{equation*} D_{A_{i}} = [\widetilde{\psi} _{rs}]_{t\times j} = \left[ \begin{array}{cccc} \widetilde{\psi} _{11} & \widetilde{\psi} _{12} & \ldots & \widetilde{\psi} _{1j} \\ \widetilde{\psi} _{21} & \widetilde{\psi} _{22} & \ldots & \widetilde{\psi} _{2j} \\ \vdots & \vdots & \ddots & \vdots \\ \widetilde{\psi} _{t1} & \widetilde{\psi} _{t2} & \ldots & \widetilde{\psi} _{tj} \end{array} \right]. \end{equation*}

Step 3. Compute the normalized expert matrices as

\begin{equation*} (N_{A_{i}}) = \left\{ \begin{array}{ll} \wp _{i} = \langle \left( \downarrow{\mathcal{H}}_{\imath j},\text{ } \downarrow{\mathcal{U}}_{\imath j}\right) ,\left( \uparrow{\mathcal{H}} _{\imath j},\text{ }\uparrow{\mathcal{U}}_{\imath j}\right) \rangle & \text{for benefit criteria}, \\ \wp _{i}^{c} = \langle \left( \downarrow{\mathcal{U}}_{\imath j}, \downarrow{\mathcal{H}}_{\imath j}\right) ,\left( \uparrow{\mathcal{U}} _{\imath j},\uparrow{\mathcal{H}}_{\imath j}\right) \rangle & \text{for cost criteria}. \end{array} \right. \end{equation*}
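This normalization rule can be illustrated with a small sketch (simplified to a single membership/non-membership pair per approximation; the function name `normalize` is illustrative):

```python
def normalize(pair, benefit=True):
    """Step 3 normalization rule (sketch): benefit criteria are kept
    as-is; cost criteria are replaced by their complement, which swaps
    the membership and non-membership grades."""
    h, u = pair
    return (h, u) if benefit else (u, h)
```

For example, a cost-type assessment (0.6, 0.3) normalizes to (0.3, 0.6), while a benefit-type assessment is left unchanged.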

Step 4. Aggregate the collected PyHFR information using the PyHFRDWAA operator for each A_{\imath } , denoted by \mathcal{L}_{\imath } , which is defined as follows:

\begin{equation*} \mathcal{L}_{\imath } = \left( \boxplus _{\imath = 1}^{n}\left( w_{\imath } \downarrow{\wp }_{\imath }\right) ,\boxplus _{\imath = 1}^{n}\left( w_{\imath } \uparrow{\wp }_{\imath }\right) \right). \end{equation*}

Step 5. Using the aforementioned aggregation operators, evaluate the aggregated PyHFR values for each alternative under consideration with respect to the given set of attributes/criteria.

    Step 6. Compute the score values for \mathcal{L}_{\imath }(\imath = 1, 2, 3, ..., l).

Step 7. Rank all \mathcal{L}_{\imath }(\imath = 1, 2, 3, ..., l) according to their score values.

Step 8. Select the alternative with the highest score value. Figure 1 illustrates the flowchart of the implied MADM technique.
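The eight steps above can be sketched end-to-end for the simplified case of one membership/non-membership pair per matrix cell. This is a minimal sketch under stated assumptions: `rank_alternatives` and the h^{2}-u^{2} score are illustrative stand-ins for the paper's operators and its newly defined score function.

```python
from math import sqrt

def dombi_wa(pairs, weights, zeta=1.0):
    # Step 4 (sketch): Dombi weighted averaging of the (membership,
    # non-membership) pairs across the criteria of one alternative.
    s_m = sum(w * (h * h / (1 - h * h)) ** zeta for w, (h, u) in zip(weights, pairs))
    s_n = sum(w * ((1 - u * u) / (u * u)) ** zeta for w, (h, u) in zip(weights, pairs))
    return (sqrt(1 - 1 / (1 + s_m ** (1 / zeta))),
            sqrt(1 / (1 + s_n ** (1 / zeta))))

def rank_alternatives(matrix, weights, zeta=1.0):
    """Steps 4-8 (sketch): aggregate each alternative's row, score the
    aggregate with the stand-in score h^2 - u^2, and sort descending."""
    scores = {}
    for name, row in matrix.items():
        h, u = dombi_wa(row, weights, zeta)
        scores[name] = h * h - u * u
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical two-alternative, three-criterion decision matrix:
matrix = {
    "A1": [(0.6, 0.3), (0.5, 0.4), (0.7, 0.2)],
    "A2": [(0.4, 0.5), (0.3, 0.6), (0.5, 0.4)],
}
ranking = rank_alternatives(matrix, [0.2, 0.3, 0.5])
```

Since the Dombi weighted average is monotone in the membership grades and antitone in the non-membership grades, an alternative that dominates another componentwise, as A1 dominates A2 here, is guaranteed to rank higher.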

Figure 1.  Flowchart of the suggested MADM algorithm.

In this section, we present an MADM problem related to wearable medical technology to demonstrate the implementation and adaptability of the aforementioned approach. The market for wearable medical and health-care products has widened significantly in recent years owing to global industrialization. The use of wearable technology has increased substantially, with a profound effect on the health-care industry. Wearable devices, equipped with transmitters and target receptors, are worn on the body or embedded in clothing to detect changes in the human body and offer rapid feedback. This technology assists physicians in understanding the changes occurring in a patient's body and delivering timely treatment. The goal of wearable technology in health care is to transform the industry by providing patients with comprehensive information from which practical insights can be drawn. Wearable technology is influencing health care by monitoring physiological signals such as electrocardiograms, heart rate, blood pressure, and oxygen levels, and by recording patient health data such as physical activity, nutrition, and sleep patterns. Individuals with chronic illnesses such as diabetes, cardiovascular disease, and neurological disorders require regular monitoring of their physiological functioning in order to receive appropriate medical treatment, and wearable technology has emerged as a crucial tool in the health-care industry for meeting this requirement. Wearable health devices enable patients to accumulate their own health data and transmit it in digital format, minimizing the need for face-to-face appointments. Wearable medical technology can also help hospitals save resources and time while remaining convenient for patients. Continuous monitoring of patients is required during rehabilitation, and the adoption of wearable health monitoring devices has increased the efficiency of this process.
Wearable fitness trackers and other health-tracking technologies are beneficial for those who wish to measure their physical activity, tracking not only steps or training sessions but also overall mobility and vital indicators. Individuals' psychological conditions are also assessed with wearable medical devices, whose sensors can monitor patients' mental health. Wearable health care further supports medical education by enabling individuals to effortlessly access critical information, essays, articles, case studies, and other study materials on a smartphone. Wearable medical devices also benefit electronic health record systems: combining patient-generated fitness or medical data with large volumes of information, such as current health data, biological data, and genetic data, in electronic medical records yields considerable power and resilience.

Incorporating wearable medical technology into health-care systems also poses challenges, including the financial strain on medical organizations, since not every organization can afford to adopt such devices. Patients may feel uncomfortable using wearable medical equipment owing to its weight; these technologies consist of a variety of components and are intended to be attached to the body. Because of data discrepancies, there is insufficient evidence to demonstrate the consistent accuracy of wearable devices. Individuals have data-privacy concerns regarding a lack of control over their personal information. Furthermore, during the installation of applications, users are required to grant access to personal data, exposing them to hazards such as forgery and data destruction. Because the integration of patient data via wearable technology is still in its early stages, the field faces hurdles in terms of system compatibility and interconnection. Wearable medical devices hold a large quantity of personal information concerning health conditions, which may make people anxious and feel too closely watched. Finally, the medical community struggles to adequately handle the enormous volume of information produced.

This section applies the developed model to a case study concerning the selection of wearable health technology devices. Suppose an employer has nominated four prospective companies worldwide with the intention of selecting a wearable health technology device. A panel of specialists from across the world has been invited to investigate and determine the most appropriate devices for the organization. Let \mathcal{D} = \{\mathcal{D}_{1}, \mathcal{D}_{2}, \mathcal{D}_{3}\} be the set of three decision-makers, and let A = \{A_{1}, A_{2}, A_{3}, A_{4}\} symbolize the set of four candidate devices. The four index dimensions are: accuracy (c_{1}) , security (c_{2}) , intelligence (c_{3}) , and limitations (c_{4}) . It should be noted that all of the attributes presented here are of the benefit type. The decision-makers' weight vector is \mathcal{W} = \left[ 0.25, 0.49, 0.56\right] ^{T} and the associated attribute weight vector is \mathcal{W} = \left[ 0.11, 0.25, 0.28, 0.36\right] ^{T} . To assess the MADM situation by employing the aforementioned scheme for evaluating alternatives, the following procedures are carried out:

Step 1. The information obtained from the three experts based on PyHFRVs is reported in Tables 1–3, respectively.

    Table 1.  Expert-1 information.
    (a)
    c_{1} c_{2}
    A_{1} \left[ \begin{array}{c} \left(\left(0.21, 0.31, 0.41\right), \left(0.22, 0.30, 0.31\right) \right), \\ \left(\left(0.41, 0.42, 0.43\right), \left(0.21, 0.41, 0.44\right) \right) \end{array} \right] \left[ \begin{array}{c} \left(\left(0.25, 0.27, 0.29\right), \left(0.27, 0.26, 0.31\right) \right), \\ \left(\left(0.22, 0.25, 0.26\right), \left(0.32, 0.34, 0.41\right) \right) \end{array} \right]
    A_{2} \left[ \begin{array}{c} \left(\left(0.22, 0.32, 0.42\right), \left(0.27, 0.29, 0.52\right) \right), \\ \left(\left(0.23, 0.25, 0.27\right), \left(0.26, 0.27, 0.33\right) \right) \end{array} \right] \left[ \begin{array}{c} \left(\left(0.33, 0.53, 0.62\right), \left(0.25, 0.37, 0.42\right) \right), \\ \left(\left(0.36, 0.37, 0.41\right), \left(0.23, 0.25, 0.29\right) \right) \end{array} \right]
    A_{3} \left[ \begin{array}{c} \left(\left(0.24, 0.35, 0.36\right), \left(0.36, 0.37, 0.38\right) \right), \\ \left(\left(0.27, 0.28, 0.29\right), \left(0.21, 0.24, 0.27\right) \right) \end{array} \right] \left[ \begin{array}{c} \left(\left(0.21, 0.22, 0.23\right), \left(0.25, 0.26, 0.28\right) \right), \\ \left(\left(0.24, 0.26, 0.27\right), \left(0.25, 0.37, 0.41\right) \right) \end{array} \right]
    A_{4} \left[ \begin{array}{c} \left(\left(0.26, 0.27, 0.29\right), \left(0.23, 0.24, 0.26\right) \right), \\ \left(\left(0.32, 0.37, 0.41\right), \left(0.27, 0.32, 0.43\right) \right) \end{array} \right] \left[ \begin{array}{c} \left(\left(0.33, 0.34, 0.35\right), \left(0.34, 0.37, 0.39\right) \right), \\ \left(\left(0.21, 0.42, 0.43\right), \left(0.32, 0.33, 0.41\right) \right) \end{array} \right]
    (b)
    c_{3} c_{4}
    A_{1} \left[ \begin{array}{c} \left(\left(0.22, 0.23, 0.24\right), \left(0.23, 0.24, 0.27\right) \right), \\ \left(\left(0.31, 0.35, 0.42\right), \left(0.23, 0.25, 0.27\right) \right) \end{array} \right] \left[ \begin{array}{c} \left(\left(0.25, 0.36, 0.37\right) \mathbf{, }\left(0.24, 0.25, 0.27\right) \right), \\ \left(\left(0.26, 0.28, 0.29\right), \left(0.26, 0.27, 0.29\right) \right) \end{array} \right]
    A_{2} \left[ \begin{array}{c} \left(\left(0.34, 0.35, 0.38\right), \left(0.34, 0.45, 0.47\right) \right), \\ \left(\left(0.22, 0.25, 0.33\right), \left(0.34, 0.35, 0.36\right) \right) \end{array} \right] \left[ \begin{array}{c} \left(\left(0.34, 0.36, 0.38\right), \left(0.33, 0.35, 0.42\right) \right), \\ \left(\left(0.27, 0.38, 0.39\right), \left(0.21, 0.23, 0.24\right) \right) \end{array} \right]
    A_{3} \left[ \begin{array}{c} \left(\left(0.23, 0.26, 0.37\right), \left(0.35, 0.37, 0.38\right) \right), \\ \left(\left(0.25, 0.39, 0.41\right), \left(0.25, 0.28, 0.38\right) \right) \end{array} \right] \left[ \begin{array}{c} \left(\left(0.23, 0.36, 0.42\right), \left(0.25, 0.26, 0.38\right) \right), \\ \left(\left(0.31, 0.33, 0.37\right), \left(0.33, 0.34, 0.35\right) \right) \end{array} \right]
    A_{4} \left[ \begin{array}{c} \left(\left(0.33, 0.34, 0.35\right), \left(0.36, 0.37, 0.38\right) \right), \\ \left(\left(0.23, 0.33, 0.43\right), \left(0.24, 0.27, 0.28\right) \right) \end{array} \right] \left[ \begin{array}{c} \left(\left(0.32, 0.43, 0.44\right), \left(0.45, 0.46, 0.47\right) \right), \\ \left(\left(0.23, 0.34, 0.35\right), \left(0.27, 0.28, 0.31\right) \right) \end{array} \right]

    Table 2.  Expert-2 information.
    (a)
    c_{1} c_{2}
    A_{1} \left[ \begin{array}{c} \left(\left(0.22, 0.23, 0.24\right), \left(0.32, 0.35, 0.38\right) \right), \\ \left(\left(0.24, 0.36, 0.37\right), \left(0.33, 0.45, 0.46\right) \right) \end{array} \right] \left[ \begin{array}{c} \left(\left(0.24, 0.25, 0.26\right), \left(0.23, 0.32, 0.35\right) \right), \\ \left(\left(0.32, 0.37, 0.41\right), \left(0.42, 0.43, 0.44\right) \right) \end{array} \right]
    A_{2} \left[ \begin{array}{c} \left(\left(0.22, 0.23, 0.24\right), \left(0.25, 0.28, 0.29\right) \right), \\ \left(\left(0.35, 0.36, 0.37\right), \left(0.28, 0.29, 0.30\right) \right) \end{array} \right] \left[ \begin{array}{c} \left(\left(0.23, 0.24, 0.26\right), \left(0.27, 0.28, 0.31\right) \right), \\ \left(\left(0.21, 0.61, 0.62\right), \left(0.23, 0.27, 0.32\right) \right) \end{array} \right]
    A_{3} \left[ \begin{array}{c} \left(\left(0.26, 0.32, 0.42\right), \left(0.42, 0.48, 0.53\right) \right), \\ \left(\left(0.23, 0.28, 0.30\right), \left(0.22, 0.42, 0.43\right) \right) \end{array} \right] \left[ \begin{array}{c} \left(\left(0.23, 0.24, 0.62\right), \left(0.41, 0.42, 0.43\right) \right), \\ \left(\left(0.33, 0.43, 0.44\right), \left(0.23, 0.34, 0.42\right) \right) \end{array} \right]
    A_{4} \left[ \begin{array}{c} \left(\left(0.21, 0.22, 0.41\right), \left(0.25, 0.26, 0.31\right) \right), \\ \left(\left(0.31, 0.32, 0.33\right), \left(0.42, 0.43, 0.44\right) \right) \end{array} \right] \left[ \begin{array}{c} \left(\left(0.25, 0.32, 0.33\right), \left(0.23, 0.32, 0.43\right) \right), \\ \left(\left(0.33, 0.34, 0.36\right), \left(0.44, 0.35, 0.37\right) \right) \end{array} \right]
    (b)
    c_{3} c_{4}
    A_{1} \left[ \begin{array}{c} \left(\left(0.22, 0.24, 0.52\right), \left(0.42, 0.43, 0.44\right) \right), \\ \left(\left(0.24, 0.27, 0.28\right), \left(0.22, 0.26, 0.27\right) \right) \end{array} \right] \left[ \begin{array}{c} \left(\left(0.21, 0.22, 0.32\right) \mathbf{, }\left(0.24, 0.26, 0.28\right) \right), \\ \left(\left(0.22, 0.25, 0.27\right), \left(0.28, 0.29, 0.33\right) \right) \end{array} \right]
    A_{2} \left[ \begin{array}{c} \left(\left(0.23, 0.35, 0.37\right), \left(0.32, 0.33, 0.36\right) \right), \\ \left(\left(0.26, 0.27, 0.28\right), \left(0.22, 0.28, 0.29\right) \right) \end{array} \right] \left[ \begin{array}{c} \left(\left(0.43, 0.44, 0.55\right), \left(0.24, 0.26, 0.27\right) \right), \\ \left(\left(0.21, 0.23, 0.25\right), \left(0.22, 0.23, 0.25\right) \right) \end{array} \right]
    A_{3} \left[ \begin{array}{c} \left(\left(0.35, 0.36, 0.37\right), \left(0.33, 0.35, 0.43\right) \right), \\ \left(\left(0.27, 0.28, 0.29\right), \left(0.32, 0.33, 0.35\right) \right) \end{array} \right] \left[ \begin{array}{c} \left(\left(0.22, 0.27, 0.28\right), \left(0.22, 0.27, 0.29\right) \right), \\ \left(\left(0.41, 0.42, 0.44\right), \left(0.25, 0.26, 0.27\right) \right) \end{array} \right]
    A_{4} \left[ \begin{array}{c} \left(\left(0.26, 0.27, 0.29\right), \left(0.32, 0.35, 0.43\right) \right), \\ \left(\left(0.32, 0.33, 0.44\right), \left(0.32, 0.35, 0.41\right) \right) \end{array} \right] \left[ \begin{array}{c} \left(\left(0.23, 0.25, 0.29\right), \left(0.24, 0.26, 0.27\right) \right), \\ \left(\left(0.42, 0.43, 0.46\right), \left(0.24, 0.25, 0.27\right) \right) \end{array} \right]

    Table 3.  Expert-3 information.
    (a)
    c_{1} c_{2}
    A_{1} \left[ \begin{array}{c} \left(\left(0.24, 0.27, 0.29\right), \left(0.33, 0.36, 0.38\right) \right), \\ \left(\left(0.32, 0.33, 0.38\right), \left(0.27, 0.28, 0.29\right) \right) \end{array} \right] \left[ \begin{array}{c} \left(\left(0.34, 0.37, 0.38\right), \left(0.27, 0.28, 0.29\right) \right), \\ \left(\left(0.33, 0.43, 0.44\right), \left(0.21, 0.32, 0.43\right) \right) \end{array} \right]
    A_{2} \left[ \begin{array}{c} \left(\left(0.31, 0.33, 0.34\right), \left(0.25, 0.26, 0.29\right) \right), \\ \left(\left(0.31, 0.42, 0.43\right), \left(0.42, 0.43, 0.44\right) \right) \end{array} \right] \left[ \begin{array}{c} \left(\left(0.22, 0.33, 0.37\right), \left(0.33, 0.42, 0.43\right) \right), \\ \left(\left(0.22, 0.25, 0.26\right), \left(0.22, 0.32, 0.33\right) \right) \end{array} \right]
    A_{3} \left[ \begin{array}{c} \left(\left(0.32, 0.33, 0.43\right), \left(0.24, 0.33, 0.43\right) \right), \\ \left(\left(0.21, 0.28, 0.29\right), \left(0.24, 0.34, 0.42\right) \right) \end{array} \right] \left[ \begin{array}{c} \left(\left(0.22, 0.23, 0.32\right), \left(0.33, 0.42, 0.43\right) \right), \\ \left(\left(0.25, 0.32, 0.33\right), \left(0.22, 0.33, 0.43\right) \right) \end{array} \right]
    A_{4} \left[ \begin{array}{c} \left(\left(0.21, 0.25, 0.27\right), \left(0.25, 0.26, 0.32\right) \right), \\ \left(\left(0.23, 0.25, 0.27\right), \left(0.24, 0.29, 0.33\right) \right) \end{array} \right] \left[ \begin{array}{c} \left(\left(0.22, 0.23, 0.33\right), \left(0.25, 0.26, 0.33\right) \right), \\ \left(\left(0.31, 0.33, 0.43\right), \left(0.32, 0.43, 0.44\right) \right) \end{array} \right]
    (b)
    c_{3} c_{4}
    A_{1} \left[ \begin{array}{c} \left(\left(0.22, 0.23, 0.28\right), \left(0.25, 0.26, 0.27\right) \right), \\ \left(\left(0.23, 0.25, 0.26\right), \left(0.22, 0.32, 0.33\right) \right) \end{array} \right] \left[ \begin{array}{c} \left(\left(0.21, 0.24, 0.29\right), \left(0.22, 0.23, 0.27\right) \right), \\ \left(\left(0.32, 0.33, 0.43\right), \left(0.32, 0.41, 0.42\right) \right) \end{array} \right]
    A_{2} \left[ \begin{array}{c} \left(\left(0.21, 0.23, 0.26\right), \left(0.24, 0.26, 0.28\right) \right), \\ \left(\left(0.26, 0.27, 0.29\right), \left(0.23, 0.28, 0.29\right) \right) \end{array} \right] \left[ \begin{array}{c} \left(\left(0.21, 0.42, 0.53\right), \left(0.32, 0.33, 0.52\right) \right), \\ \left(\left(0.23, 0.24, 0.26\right), \left(0.22, 0.31, 0.41\right) \right) \end{array} \right]
    A_{3} \left[ \begin{array}{c} \left(\left(0.31, 0.32, 0.33\right), \left(0.33, 0.35, 0.39\right) \right), \\ \left(\left(0.32, 0.33, 0.34\right), \left(0.22, 0.24, 0.26\right) \right) \end{array} \right] \left[ \begin{array}{c} \left(\left(0.31, 0.32, 0.34\right), \left(0.21, 0.31, 0.41\right) \right), \\ \left(\left(0.22, 0.24, 0.25\right), \left(0.32, 0.43, 0.54\right) \right) \end{array} \right]
    A_{4} \left[ \begin{array}{c} \left(\left(0.22, 0.23, 0.27\right), \left(0.33, 0.41, 0.42\right) \right), \\ \left(\left(0.42, 0.43, 0.44\right), \left(0.21, 0.22, 0.23\right) \right) \end{array} \right] \left[ \begin{array}{c} \left(\left(0.22, 0.33, 0.43\right), \left(0.32, 0.33, 0.41\right) \right), \\ \left(\left(0.23, 0.25, 0.28\right), \left(0.24, 0.35, 0.41\right) \right) \end{array} \right]


Step 2. Since all the information provided by the experts is of benefit type, no normalization is required.

Step 3. Table 4 presents the aggregated information of the three experts obtained with the PyHFRDWAA operator. Table 5 shows the score values of the collected expert information. Based on these score values, the ordered collected expert information is presented in Table 6.

    Table 4.  Collected experts information.
    (a)
    c_{1} c_{2}
    A_{1} \left[ \begin{array}{c} \left(\begin{array}{c} \left(0.2569, 0.2987, 0.3413\right), \\ \left(0.2600, 0.3043, 0.3232\right) \end{array} \right), \\ \left(\begin{array}{c} (0.3552, 0.4039, 0.4316), \\ \left(0.2387, 0.3050, 0.3168\right) \end{array} \right) \end{array} \right] \left[ \begin{array}{c} \left(\begin{array}{c} \left(0.3271, 0.3517, 0.3643\right), \\ \left(0.2232, 0.2554, 0.2778\right) \end{array} \right), \\ \left(\begin{array}{c} \left(0.3472, 0.4258, 0.4483\right), \\ \left(0.2382, 0.3165, 0.3850\right) \end{array} \right) \end{array} \right]
    A_{2} \left[ \begin{array}{c} \left(\begin{array}{c} \left(0.2976, 0.3325, 0.3681\right), \\ \left(0.2240, 0.2411, 0.2762\right) \end{array} \right), \\ \left(\begin{array}{c} \left(0.3525, 0.4162, 0.4281\right), \\ \left(0.2793, 0.2889, 0.3111\right) \end{array} \right) \end{array} \right] \left[ \begin{array}{c} \left(\begin{array}{c} \left(0.2820, 0.4049, 0.4680\right), \\ \left(0.2535, 0.3022, 0.3289\right) \end{array} \right), \\ \left(\begin{array}{c} \left(0.2849, 0.5183, 0.5331\right), \\ \left(0.1990, 0.2509, 0.2817\right) \end{array} \right) \end{array} \right]
    A_{3} \left[ \begin{array}{c} \left(\begin{array}{c} \left(0.2872, 0.3293, 0.4168\right), \\ \left(0.2968, 0.3776, 0.4511\right) \end{array} \right), \\ \left(\begin{array}{c} \left(0.2279, 0.2800, 0.2940\right), \\ \left(0.2268, 0.3372, 0.3842\right) \end{array} \right) \end{array} \right] \left[ \begin{array}{c} \left(\begin{array}{c} \left(0.2225, 0.2325, 0.4831\right), \\ \left(0.3344, 0.3770, 0.3919\right) \end{array} \right), \\ \left(\begin{array}{c} \left(0.2836, 0.3631, 0.3730\right), \\ \left(0.2278, 0.3392, 0.4229\right) \end{array} \right) \end{array} \right]
    A_{4} \left[ \begin{array}{c} \left(\begin{array}{c} \left(0.2184, 0.2422, 0.3377\right), \\ \left(0.2467, 0.2567, 0.3047\right) \end{array} \right), \\ \left(\begin{array}{c} \left(0.2788, 0.3001, 0.3201\right), \\ \left(0.2857, 0.3325, 0.3766\right) \end{array} \right) \end{array} \right] \left[ \begin{array}{c} \left(\begin{array}{c} \left(0.2519, 0.2870, 0.3331\right), \\ \left(0.2498, 0.2922, 0.3696\right) \end{array} \right), \\ \left(\begin{array}{c} \left(0.3062, 0.3497, 0.4052\right), \\ \left(0.3542, 0.3767, 0.4043\right) \end{array} \right) \end{array} \right]
    (b)
    c_{3} c_{4}
    A_{1} \left[ \begin{array}{c} \left(\begin{array}{c} \left(0.2490, 0.2644, 0.4433\right), \\ \left(0.2487, 0.2584, 0.2738\right) \end{array} \right), \\ \left(\begin{array}{c} \left(0.2841, 0.3158, 0.3458\right), \\ \left(0.1957, 0.2468, 0.2583\right) \end{array} \right) \end{array} \right] \left[ \begin{array}{c} \left(\begin{array}{c} \left(0.2472, 0.2965, 0.3580\right), \\ \left(0.2036, 0.2155, 0.2421\right) \end{array} \right), \\ \left(\begin{array}{c} \left(0.3108, 0.3302, 0.3983\right), \\ \left(0.2572, 0.2853, 0.3109\right) \end{array} \right) \end{array} \right]
    A_{2} \left[ \begin{array}{c} \left(\begin{array}{c} \left(0.2812, 0.3444, 0.3719\right), \\ \left(0.2472, 0.2700, 0.2917\right) \end{array} \right), \\ \left(\begin{array}{c} \left(0.2856, 0.3005, 0.3317\right), \\ \left(0.2105, 0.2571, 0.2662\right) \end{array} \right) \end{array} \right] \left[ \begin{array}{c} \left(\begin{array}{c} \left(0.3809, 0.4643, 0.5669\right), \\ \left(0.2502, 0.2660, 0.3177\right) \end{array} \right), \\ \left(\begin{array}{c} \left(0.2617, 0.3066, 0.3250\right), \\ \left(0.1922, 0.2264, 0.2564\right) \end{array} \right) \end{array} \right]
    A_{3} \left[ \begin{array}{c} \left(\begin{array}{c} \left(0.3170, 0.3291, 0.3525\right), \\ \left(0.3328, 0.3528, 0.4025\right) \end{array} \right), \\ \left(\begin{array}{c} \left(0.2919, 0.3228, 0.3348\right), \\ \left(0.2523, 0.2724, 0.3011\right) \end{array} \right) \end{array} \right] \left[ \begin{array}{c} \left(\begin{array}{c} \left(0.2676, 0.3089, 0.3338\right), \\ \left(0.2188, 0.2845, 0.3448\right) \end{array} \right), \\ \left(\begin{array}{c} \left(0.3251, 0.3392, 0.3591\right), \\ \left(0.2874, 0.3235, 0.3502\right) \end{array} \right) \end{array} \right]
    A_{4} \left[ \begin{array}{c} \left(\begin{array}{c} \left(0.2558, 0.2657, 0.2916\right), \\ \left(0.3626, 0.3815, 0.4385\right) \end{array} \right), \\ \left(\begin{array}{c} \left(0.3299, 0.3775, 0.4168\right), \\ \left(0.2437, 0.2604, 0.2781\right) \end{array} \right) \end{array} \right] \left[ \begin{array}{c} \left(\begin{array}{c} \left(0.2422, 0.3227, 0.3870\right), \\ \left(0.3244, 0.3491, 0.3761\right) \end{array} \right), \\ \left(\begin{array}{c} \left(0.2886, 0.3057, 0.3378\right), \\ \left(0.2439, 0.2898, 0.3220\right) \end{array} \right) \end{array} \right]

    Table 5.  Score value of collected experts information.
    c_{1} c_{2} c_{3} c_{4}
    A_{1} 0.5283 0.5473 0.5351 0.5355
    A_{2} 0.5479 0.5729 0.5311 0.5664
    A_{3} 0.4801 0.4887 0.5028 0.5104
    A_{4} 0.4912 0.4905 0.5074 0.5178

    Table 6.  Ordered collected experts information.
    (a)
    c_{1} c_{2}
    A_{1} \left[ \begin{array}{c} \left(\begin{array}{c} \left(0.3271, 0.3517, 0.3643\right), \\ \left(0.2232, 0.2554, 0.2778\right) \end{array} \right), \\ \left(\begin{array}{c} \left(0.3472, 0.4258, 0.4483\right), \\ \left(0.2382, 0.3165, 0.3850\right) \end{array} \right) \end{array} \right] \left[ \begin{array}{c} \left(\begin{array}{c} \left(0.2472, 0.2965, 0.3580\right), \\ \left(0.2036, 0.2155, 0.2421\right) \end{array} \right), \\ \left(\begin{array}{c} \left(0.3108, 0.3302, 0.3983\right), \\ \left(0.2572, 0.2853, 0.3109\right) \end{array} \right) \end{array} \right]
    A_{2} \left[ \begin{array}{c} \left(\begin{array}{c} \left(0.2820, 0.4049, 0.4680\right), \\ \left(0.2535, 0.3022, 0.3289\right) \end{array} \right), \\ \left(\begin{array}{c} \left(0.2849, 0.5183, 0.5331\right), \\ \left(0.1990, 0.2509, 0.2817\right) \end{array} \right) \end{array} \right] \left[ \begin{array}{c} \left(\begin{array}{c} \left(0.3809, 0.4643, 0.5669\right), \\ \left(0.2502, 0.2660, 0.3177\right) \end{array} \right), \\ \left(\begin{array}{c} \left(0.2617, 0.3066, 0.3250\right), \\ \left(0.1922, 0.2264, 0.2564\right) \end{array} \right) \end{array} \right]
    A_{3} \left[ \begin{array}{c} \left(\begin{array}{c} \left(0.2676, 0.3089, 0.3338\right), \\ \left(0.2188, 0.2845, 0.3448\right) \end{array} \right), \\ \left(\begin{array}{c} \left(0.3251, 0.3392, 0.3591\right), \\ \left(0.2874, 0.3235, 0.3502\right) \end{array} \right) \end{array} \right] \left[ \begin{array}{c} \left(\begin{array}{c} \left(0.3170, 0.3291, 0.3525\right), \\ \left(0.3328, 0.3528, 0.4025\right) \end{array} \right), \\ \left(\begin{array}{c} \left(0.2919, 0.3228, 0.3348\right), \\ \left(0.2523, 0.2724, 0.3011\right) \end{array} \right) \end{array} \right]
    A_{4} \left[ \begin{array}{c} \left(\begin{array}{c} \left(0.2422, 0.3227, 0.3870\right), \\ \left(0.2886, 0.3057, 0.3378\right) \end{array} \right), \\ \left(\begin{array}{c} \left(0.3244, 0.3491, 0.3761\right), \\ \left(0.2439, 0.2898, 0.3220\right) \end{array} \right) \end{array} \right] \left[ \begin{array}{c} \left(\begin{array}{c} \left(0.2558, 0.2657, 0.2916\right), \\ \left(0.3299, 0.3775, 0.4168\right) \end{array} \right), \\ \left(\begin{array}{c} \left(0.3626, 0.3815, 0.4385\right), \\ \left(0.2437, 0.2604, 0.2781\right) \end{array} \right) \end{array} \right]
    (b)
    c_{3} c_{4}
    A_{1} \left[ \begin{array}{c} \left(\begin{array}{c} \left(0.2490, 0.2644, 0.4433\right), \\ \left(0.2487, 0.2584, 0.2738\right) \end{array} \right), \\ \left(\begin{array}{c} \left(0.2841, 0.3158, 0.3458\right), \\ \left(0.1957, 0.2468, 0.2583\right) \end{array} \right) \end{array} \right] \left[ \begin{array}{c} \left(\begin{array}{c} \left(0.2569, 0.2987, 0.3413\right), \\ \left(0.2600, 0.3043, 0.3232\right) \end{array} \right), \\ \left(\begin{array}{c} (0.3552, 0.4039, 0.4316), \\ \left(0.2387, 0.3050, 0.3168\right) \end{array} \right) \end{array} \right]
    A_{2} \left[ \begin{array}{c} \left(\begin{array}{c} \left(0.2976, 0.3325, 0.3681\right), \\ \left(0.2240, 0.2411, 0.2762\right) \end{array} \right), \\ \left(\begin{array}{c} \left(0.3525, 0.4162, 0.4281\right), \\ \left(0.2793, 0.2889, 0.3111\right) \end{array} \right) \end{array} \right] \left[ \begin{array}{c} \left(\begin{array}{c} \left(0.2812, 0.3444, 0.3719\right), \\ \left(0.2472, 0.2700, 0.2917\right) \end{array} \right), \\ \left(\begin{array}{c} \left(0.2856, 0.3005, 0.3317\right), \\ \left(0.2105, 0.2571, 0.2662\right) \end{array} \right) \end{array} \right]
    A_{3} \left[ \begin{array}{c} \left(\begin{array}{c} \left(0.2225, 0.2325, 0.4831\right), \\ \left(0.3344, 0.3770, 0.3919\right) \end{array} \right), \\ \left(\begin{array}{c} \left(0.2836, 0.3631, 0.3730\right), \\ \left(0.2278, 0.3392, 0.4229\right) \end{array} \right) \end{array} \right] \left[ \begin{array}{c} \left(\begin{array}{c} \left(0.2872, 0.3293, 0.4168\right), \\ \left(0.2968, 0.3776, 0.4511\right) \end{array} \right), \\ \left(\begin{array}{c} \left(0.2279, 0.2800, 0.2940\right), \\ \left(0.2268, 0.3372, 0.3842\right) \end{array} \right) \end{array} \right]
    A_{4} \left[ \begin{array}{c} \left(\begin{array}{c} \left(0.2184, 0.2422, 0.3377\right), \\ \left(0.2467, 0.2567, 0.3047\right) \end{array} \right), \\ \left(\begin{array}{c} \left(0.2788, 0.3001, 0.3201\right), \\ \left(0.2857, 0.3325, 0.3766\right) \end{array} \right) \end{array} \right] \left[ \begin{array}{c} \left(\begin{array}{c} \left(0.2519, 0.2870, 0.3331\right), \\ \left(0.2498, 0.2922, 0.3696\right) \end{array} \right), \\ \left(\begin{array}{c} \left(0.3062, 0.3497, 0.4052\right), \\ \left(0.3542, 0.3767, 0.4043\right) \end{array} \right) \end{array} \right]


    Step 4. Compute the aggregated PyHFR value of each alternative over the specified set of criteria/attributes using the aggregation operators defined above. Tables 7 and 8 visualize the aggregated information obtained with the PyHFRDWA and PyHFRDOWA operators, respectively. The hybrid collected expert information is shown in Table 9, its score values are presented in Table 10, and the aggregated information obtained with the PyHFRDHWA operator is depicted in Table 11.
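The Dombi machinery behind these operators can be sketched for a single vector of membership grades. This is a simplified, illustrative sketch of the standard Dombi weighted average (the full PyHFRDWA operator aggregates hesitant membership and non-membership vectors of both the lower and upper approximations); the helper name and the sample grades are chosen for illustration only.

```python
def dombi_weighted_avg(grades, weights, zeta=1.0):
    """Standard Dombi weighted average of membership grades in (0, 1).

    A simplified sketch of the Dombi t-conorm form underlying
    PyHFRDWA-style operators, not the full rough hesitant definition.
    Grades must lie strictly between 0 and 1.
    """
    s = sum(w * (g / (1.0 - g)) ** zeta for g, w in zip(grades, weights))
    return 1.0 - 1.0 / (1.0 + s ** (1.0 / zeta))

# With zeta = 1 and equal weights the result lies between min and max.
mu = dombi_weighted_avg([0.3271, 0.3472], [0.5, 0.5], zeta=1.0)
```

For equal grades the operator is idempotent, e.g. two grades of 0.5 with equal weights aggregate back to 0.5.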

    Table 7.  PyHF rough aggregation information using PyHFRDWA operator.
    A_{1} \left[ \begin{array}{c} \left(\left(0.2716, 0.3040, 0.3851\right), \left(0.2244, 0.2428, 0.2657\right) \right), \\ \left(\left(0.3189, 0.3631, 0.4031\right), \left(0.2289, 0.2806, 0.3057\right) \right) \end{array} \right]
    A_{2} \left[ \begin{array}{c} \left(\left(0.3242, 0.4077, 0.4821\right), \left(0.2468, 0.2716, 0.3070\right) \right), \\ \left(\left(0.2859, 0.3906, 0.4092\right), \left(0.2050, 0.2458, 0.2700\right) \right) \end{array} \right]
    A_{3} \left[ \begin{array}{c} \left(\left(0.2752, 0.3010, 0.3943\right), \left(0.2718, 0.3292, 0.3800\right) \right), \\ \left(\left(0.2967, 0.3353, 0.3498\right), \left(0.2523, 0.3108, 0.3499\right) \right) \end{array} \right]
    A_{4} \left[ \begin{array}{c} \left(\left(0.2461, 0.2908, 0.3447\right), \left(0.2805, 0.3099, 0.3585\right) \right), \\ \left(\left(0.3272, 0.3541, 0.3975\right), \left(0.2662, 0.2992, 0.3262\right) \right) \end{array} \right]

    Table 8.  PyHF rough aggregation information using PyHFRDOWA operator.
    A_{1} \left[ \begin{array}{c} \left(\left(0.2615, 0.2959, 0.3807\right), \left(0.2351, 0.2569, 0.2787\right) \right), \\ \left(\left(0.3253, 0.3675, 0.4042\right), \left(0.2273, 0.2813, 0.3000\right) \right) \end{array} \right]
    A_{2} \left[ \begin{array}{c} \left(\left(0.3149, 0.3845, 0.4479\right), \left(0.2413, 0.2627, 0.2962\right) \right), \\ \left(\left(0.3012, 0.3721, 0.3911\right), \left(0.2169, 0.2546, 0.2756\right) \right) \end{array} \right]
    A_{3} \left[ \begin{array}{c} \left(\left(0.2774, 0.3039, 0.4168\right), \left(0.2995, 0.3566, 0.4062\right) \right), \\ \left(\left(0.2729, 0.3231, 0.3359\right), \left(0.2380, 0.3159, 0.3613\right) \right) \end{array} \right]
    A_{4} \left[ \begin{array}{c} \left(\left(0.2430, 0.2746, 0.3316\right), \left(0.2674, 0.2962, 0.3524\right) \right), \\ \left(\left(0.3169, 0.3457, 0.3909\right), \left(0.2845, 0.3152, 0.3436\right) \right) \end{array} \right]

    Table 9.  Hybrid collected experts information.
    (a)
    c_{1} c_{2}
    A_{1} \left[ \begin{array}{c} \left(\begin{array}{c} \left(0.1705, 0.1846, 0.1920\right), \\ \left(0.9935, 0.1309, 0.1431\right) \end{array} \right), \\ \left(\begin{array}{c} \left(0.1820, 0.2290, 0.2432\right), \\ \left(0.9926, 0.1646, 0.2042\right) \end{array} \right) \end{array} \right] \left[ \begin{array}{c} \left(\begin{array}{c} \left(0.1513, 0.1831, 0.2242\right), \\ \left(0.9923, 0.1313, 0.1481\right) \end{array} \right), \\ \left(\begin{array}{c} \left(0.1925, 0.2054, 0.2521\right), \\ \left(0.9875, 0.1758, 0.1926\right) \end{array} \right) \end{array} \right]
    A_{2} \left[ \begin{array}{c} \left(\begin{array}{c} \left(0.2400, 0.3000, 0.3816\right), \\ \left(0.9882, 0.1633, 0.1971\right) \end{array} \right), \\ \left(\begin{array}{c} \left(0.1606, 0.1898, 0.2019\right), \\ \left(0.9932, 0.1381, 0.1572\right) \end{array} \right) \end{array} \right] \left[ \begin{array}{c} \left(\begin{array}{c} \left(0.1454, 0.2162, 0.2560\right), \\ \left(0.9915, 0.1566, 0.1716\right) \end{array} \right), \\ \left(\begin{array}{c} \left(0.1470, 0.2900, 0.3005\right), \\ \left(0.9949, 0.1285, 0.1452\right) \end{array} \right) \end{array} \right]
    A_{3} \left[ \begin{array}{c} \left(\begin{array}{c} \left(0.1644, 0.1913, 0.2078\right), \\ \left(0.9911, 0.1753, 0.2152\right) \end{array} \right), \\ \left(\begin{array}{c} \left(0.2020, 0.2115, 0.2249\right), \\ \left(0.9842, 0.2009, 0.2189\right) \end{array} \right) \end{array} \right] \left[ \begin{array}{c} \left(\begin{array}{c} \left(0.1742, 0.1814, 0.1955\right), \\ \left(0.9830, 0.1957, 0.2266\right) \end{array} \right), \\ \left(\begin{array}{c} \left(0.1594, 0.1776, 0.1848\right), \\ \left(0.9906, 0.1482, 0.1648\right) \end{array} \right) \end{array} \right]
    A_{4} \left[ \begin{array}{c} \left(\begin{array}{c} \left(0.1291, 0.1481, 0.1739\right), \\ \left(0.9918, 0.1510, 0.1951\right) \end{array} \right), \\ \left(\begin{array}{c} \left(0.1588, 0.1835, 0.2164\right), \\ \left(0.9825, 0.1993, 0.2158\right) \end{array} \right) \end{array} \right] \left[ \begin{array}{c} \left(\begin{array}{c} \left(0.1481, 0.2004, 0.2442\right), \\ \left(0.9795, 0.2181, 0.2366\right) \end{array} \right), \\ \left(\begin{array}{c} \left(0.1780, 0.1892, 0.2105\right), \\ \left(0.9888, 0.1788, 0.1999\right) \end{array} \right) \end{array} \right]
    (b)
    c_{3} c_{4}
    A_{1} \left[ \begin{array}{c} \left(\begin{array}{c} \left(0.1348, 0.1436, 0.2532\right), \\ \left(0.9909, 0.1401, 0.1490\right) \end{array} \right), \\ \left(\begin{array}{c} \left(0.1549, 0.1734, 0.1914\right), \\ \left(0.9945, 0.1336, 0.1401\right) \end{array} \right) \end{array} \right] \left[ \begin{array}{c} \left(\begin{array}{c} \left(0.0878, 0.1033, 0.1196\right), \\ \left(0.9960, 0.1054, 0.1126\right) \end{array} \right), \\ \left(\begin{array}{c} \left(0.1250, 0.1449, 0.1567\right), \\ \left(0.9967, 0.1056, 0.1101\right) \end{array} \right) \end{array} \right]
    A_{2} \left[ \begin{array}{c} \left(\begin{array}{c} \left(0.1532, 0.1906, 0.2074\right), \\ \left(0.9910, 0.1468, 0.1593\right) \end{array} \right), \\ \left(\begin{array}{c} \left(0.1558, 0.1644, 0.1829\right), \\ \left(0.9936, 0.1394, 0.1446\right) \end{array} \right) \end{array} \right] \left[ \begin{array}{c} \left(\begin{array}{c} \left(0.1028, 0.1161, 0.1302\right), \\ \left(0.9971, 0.0821, 0.0949\right) \end{array} \right), \\ \left(\begin{array}{c} \left(0.1240, 0.1501, 0.1552\right), \\ \left(0.9954, 0.0996, 0.1079\right) \end{array} \right) \end{array} \right]
    A_{3} \left[ \begin{array}{c} \left(\begin{array}{c} \left(0.1134, 0.1187, 0.2659\right), \\ \left(0.9846, 0.1994, 0.2083\right) \end{array} \right), \\ \left(\begin{array}{c} \left(0.1463, 0.1913, 0.1971\right), \\ \left(0.9932, 0.1774, 0.2272\right) \end{array} \right) \end{array} \right] \left[ \begin{array}{c} \left(\begin{array}{c} \left(0.0990, 0.1149, 0.1503\right), \\ \left(0.9947, 0.1340, 0.1653\right) \end{array} \right), \\ \left(\begin{array}{c} \left(0.0774, 0.0963, 0.1015\right), \\ \left(0.9970, 0.1180, 0.1367\right) \end{array} \right) \end{array} \right]
    A_{4} \left[ \begin{array}{c} \left(\begin{array}{c} \left(0.0740, 0.0825, 0.1182\right), \\ \left(0.9965, 0.0877, 0.1055\right) \end{array} \right), \\ \left(\begin{array}{c} \left(0.0958, 0.1038, 0.1114\right), \\ \left(0.9951, 0.1161, 0.1336\right) \end{array} \right) \end{array} \right] \left[ \begin{array}{c} \left(\begin{array}{c} \left(0.1387, 0.1443, 0.1593\right), \\ \left(0.9795, 0.2134, 0.2500\right) \end{array} \right), \\ \left(\begin{array}{c} \left(0.1818, 0.2109, 0.2358\right), \\ \left(0.9913, 0.1413, 0.1514\right) \end{array} \right) \end{array} \right]

    Table 10.  Score value of hybrid collected experts information.
    c_{1} c_{2} c_{3} c_{4}
    A_{1} 0.3591 0.3889 0.3743 0.3888
    A_{2} 0.3692 0.3930 0.3717 0.3981
    A_{3} 0.3376 0.3540 0.3552 0.3699
    A_{4} 0.3505 0.3612 0.3488 0.3585

    Table 11.  PyHF rough aggregation information using PyHFRDHWA operator.
    A_{1} \left[ \begin{array}{c} \left(\left(0.1298, 0.1478, 0.2004\right), \left(0.9934, 0.1216, 0.1316\right) \right), \\ \left(\left(0.1591, 0.1800, 0.2044\right), \left(0.9933, 0.1290, 0.1369\right) \right) \end{array} \right]
    A_{2} \left[ \begin{array}{c} \left(\left(0.1490, 0.1928, 0.2285\right), \left(0.9930, 0.1113, 0.1267\right) \right), \\ \left(\left(0.1435, 0.2034, 0.2142\right), \left(0.9945, 0.1182, 0.1284\right) \right) \end{array} \right]
    A_{3} \left[ \begin{array}{c} \left(\left(0.1333, 0.1451, 0.2065\right), \left(0.9885, 0.1640, 0.1929\right) \right), \\ \left(\left(0.1383, 0.1630, 0.1700\right), \left(0.9929, 0.1431, 0.1662\right) \right) \end{array} \right]
    A_{4} \left[ \begin{array}{c} \left(\left(0.1258, 0.1483, 0.1777\right), \left(0.9855, 0.1356, 0.1616\right) \right), \\ \left(\left(0.1590, 0.1785, 0.2001\right), \left(0.9907, 0.1423, 0.1581\right) \right) \end{array} \right]


    Step 5. The score values of the information obtained through the PyHFRDWA, PyHFRDOWA, and PyHFRDHWA operators are presented in Table 12 and depicted in Figure 2. Ranking the alternatives by their score values, we observe that A_{2} is the best alternative.
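The ranking step reduces to sorting the score values in descending order; a minimal sketch using the PyHFRDWA row of Table 12:

```python
# Score values of the alternatives under the PyHFRDWA operator (Table 12).
scores = {"A1": 0.5415, "A2": 0.5628, "A3": 0.5049, "A4": 0.5100}

# Rank alternatives by descending score: larger score = better alternative.
ranking = sorted(scores, key=scores.get, reverse=True)
# ranking == ["A2", "A1", "A4", "A3"], so A2 is the best alternative.
```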

    Table 12.  Ranking order of alternatives.
    Proposed operators Score values of alternatives Ranking
    A_{1} A_{2} A_{3} A_{4}
    PyHFRDWA 0.5415 0.5628 0.5049 0.5100 A_{2} > A_{1} > A_{4} > A_{3}
    PyHFRDOWA 0.5380 0.5554 0.4960 0.5036 A_{2} > A_{1} > A_{4} > A_{3}
    PyHFRDHWA 0.3763 0.3883 0.3590 0.3680 A_{2} > A_{1} > A_{4} > A_{3}

    Figure 2.  Graphical representation of the ranking orders obtained through the PyHFRDWA, PyHFRDOWA, and PyHFRDHWA operators.

    It is evident that the PyHFRDWA operator includes a parameter \zeta , which significantly improves its adaptability and effectiveness in handling fuzzy information. In our example, we used the parameter value \zeta = 1. We now analyze how the outcomes obtained with the PyHFRDWA operator vary with \zeta . Table 13 and Figure 3 display the ranking orders determined by the score values obtained through the PyHFRDWA operator. Varying the parameter \zeta from 1 to 200 and ordering the resulting PyHFR scores, we find that the optimal alternative is always the same, and the ordering of alternatives ( A_{2} > \; A_{1} > \; A_{4} > \; A_{3} ) holds for all values of \zeta . Using the PyHFRDWA operator and the score function, A_{2} attains the highest score value and A_{3} the lowest for \zeta = 1, \; 2, \; 5, \; 10, \; 20, \; 50, \; 100, \; 200, as presented in Table 13. Therefore, the most appropriate alternative remains the same for different values of \zeta . Figure 3 depicts a graphical representation of Table 13.

    Table 13.  Ranking order for different values of \zeta in the PyHFRDWA operator.
    \zeta Score values of alternatives Ranking Best Alternative
    A_{1} A_{2} A_{3} A_{4}
    1 0.5415 0.5628 0.5049 0.5100 A_{2} > A_{1} > A_{4} > A_{3} A_{2}
    2 0.5454 0.5708 0.5098 0.5137 A_{2} > A_{1} > A_{4} > A_{3} A_{2}
    5 0.5559 0.5869 0.5210 0.5223 A_{2} > A_{1} > A_{4} > A_{3} A_{2}
    10 0.5661 0.5983 0.5302 0.5305 A_{2} > A_{1} > A_{4} > A_{3} A_{2}
    20 0.5737 0.6065 0.5370 0.5376 A_{2} > A_{1} > A_{4} > A_{3} A_{2}
    50 0.5788 0.6125 0.5418 0.5428 A_{2} > A_{1} > A_{4} > A_{3} A_{2}
    100 0.5806 0.6145 0.5435 0.5446 A_{2} > A_{1} > A_{4} > A_{3} A_{2}
    200 0.5815 0.6156 0.5443 0.5455 A_{2} > A_{1} > A_{4} > A_{3} A_{2}
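The stability of the ranking across \zeta can be checked numerically; a short sketch using the score values from selected rows of Table 13:

```python
# Score values from Table 13 for selected zeta values, ordered (A1, A2, A3, A4).
table13 = {
    1:   (0.5415, 0.5628, 0.5049, 0.5100),
    2:   (0.5454, 0.5708, 0.5098, 0.5137),
    10:  (0.5661, 0.5983, 0.5302, 0.5305),
    200: (0.5815, 0.6156, 0.5443, 0.5455),
}
names = ("A1", "A2", "A3", "A4")

# For each zeta, rank the alternatives by descending score.
rankings = {
    z: tuple(sorted(names, key=lambda a: row[names.index(a)], reverse=True))
    for z, row in table13.items()
}
# Every zeta yields the same ordering A2 > A1 > A4 > A3.
```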

    Figure 3.  The impact of \zeta on the ranking order under the PyHFRDWA operator.

    This section compares the outcomes of the improved TOPSIS scheme with those of the novel aggregation operators based on PyHFRSs. The fundamental concept of the TOPSIS approach is that the better an alternative, the farther it is from the negative ideal solution and the closer it is to the positive ideal solution. Researchers have consequently developed several techniques capable of classifying alternatives in real-world decision-making problems; these algorithms are devoted to ordering the feasible alternatives. We develop a novel TOPSIS methodology based on PyHFRSs that handles the identification and categorization of alternatives. Following these criteria, we explain in detail how the proposed TOPSIS technique works and assess its validity. The feasibility of our technique is then verified, and its effectiveness and reliability are shown via comparisons with alternative approaches and a variety of experiments with varying input parameters. In addition, we conduct a simulation experiment to compare our strategy with the traditional TOPSIS approach in terms of sorting attributes. TOPSIS was employed by Hwang and Yoon [25] for assessing the advantages and disadvantages of various policy approaches under ideal situations. It characterizes the best possible decision as the one that is closest to the positive ideal solution across all criteria while simultaneously being farthest from the negative ideal solution. The method consists mainly of the following steps:

    Step 1. Let \widetilde{\kappa } = \{\ell _{1}, \ell _{2}, \ell _{3}, ..., \ell _{m}\} and C = \{c_{1}, c_{2}, c_{3}, ..., c_{n}\} be the set of alternatives and criteria. The information obtained from the professionals is provided as follows:

    where

    \begin{equation*} \uparrow{\beth }(\widetilde{\kappa }_{ij}) = \left\{ \langle \ell , \mathcal{H}_{h_{\uparrow{\beth }(\widetilde{\kappa })}}(\ell ),\mathcal{U} _{h_{\uparrow{\beth }(\widetilde{\kappa })}}(\ell )\rangle |\ell \in £ \right\} \end{equation*}

    and

    \begin{equation*} \downarrow{\beth }(\widetilde{\kappa }_{ij}) = \left\{ \langle \ell ,\mathcal{H }_{h_{\downarrow{\beth }(\widetilde{\kappa })}}(\ell ),\mathcal{U}_{h_{ \downarrow{\beth }(\widetilde{\kappa })}}(\ell )\rangle |\ell \in £ \right\}, \end{equation*}

    such that

    \begin{equation*} 0\leq \left( \max (\mathcal{H}_{h_{\uparrow{\beth }(\widetilde{\kappa } )}}(\ell ))\right)^{2} +\left( \min (\mathcal{U}_{h_{\uparrow{\beth }( \widetilde{\kappa })}}(\ell ))\right)^{2} \leq 1\text{ and }0\leq \left( \min ( \mathcal{H}_{h_{\downarrow{\beth }(\widetilde{\kappa })}}(\ell ))\right)^{2} +\left( \max (\mathcal{U}_{h_{\downarrow{\beth }(\widetilde{\kappa })}}(\ell ))\right)^{2} \leq 1 \end{equation*}

    are the PyHFR rough values.

    Step 2. Assemble the information from the decision-makers using PyHFR numbers.

    Step 3. Normalize the decision-makers' information, since the decision matrix may include both benefit and cost criteria, as demonstrated below:

    where \phi denotes the number of experts.

    Step 4. Evaluate the normalized information obtained from the professionals, \left(N\right) ^{\phi }, as

    \begin{equation*} \left( N\right) ^{\phi } = \left\{ \begin{array}{ll} \beth (\widetilde{\kappa }_{ij}) = \left( \downarrow{\beth }\left( \widetilde{ \kappa }_{ij}\right) ,\uparrow{\beth }\left( \widetilde{\kappa } _{ij}\right) \right) & \text{for benefit criteria}, \\ \left( \beth (\widetilde{\kappa }_{ij})\right) ^{c} = \left( \left( \downarrow{ \beth }\left( \widetilde{\kappa }_{ij}\right) \right) ^{c},\left( \uparrow{ \beth }\left( \widetilde{\kappa }_{ij}\right) \right) ^{c}\right) & \text{for cost criteria}. \end{array} \right. \end{equation*}
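For a cost criterion, the PyHFR value is replaced by its complement. A minimal sketch, under the assumption of the usual Pythagorean-style complement that swaps the hesitant membership and non-membership vectors of each approximation (the data layout used here is illustrative):

```python
def complement(pyhfr):
    """Complement of a PyHFR value laid out as
    ((lower_H, lower_U), (upper_H, upper_U)).

    Assumes the standard complement that swaps membership and
    non-membership vectors in both approximations.
    """
    (lh, lu), (uh, uu) = pyhfr
    return ((lu, lh), (uu, uh))

value = (((0.2, 0.3), (0.1, 0.4)), ((0.3, 0.5), (0.2, 0.6)))
assert complement(complement(value)) == value  # complement is an involution
```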

    Step 5. The score values determine the positive and negative ideal solutions. \gimel ^{+} = \left(\gimel _{1}^{+}, \gimel _{2}^{+}, \gimel _{3}^{+}, ..., \gimel _{n}^{+}\right) and \gimel ^{-} = \left(\gimel _{1}^{-}, \gimel _{2}^{-}, \gimel _{3}^{-}, ..., \gimel _{n}^{-}\right) denote the positive ideal solution (PIS) and the negative ideal solution (NIS), respectively. The PIS \gimel ^{+} is computed as follows:

    \begin{eqnarray*} \gimel ^{+} & = &\left( \gimel _{1}^{+},\gimel _{2}^{+},\gimel _{3}^{+},...,\gimel _{n}^{+}\right) \\ & = &\left( \max\limits_{i}score(\gimel _{i1}),\max\limits_{i}score(\gimel _{i2}),\max\limits_{i}score(\gimel _{i3}),...,\max\limits_{i}score(\gimel _{in})\right). \end{eqnarray*}

    Similarly, the NIS \gimel ^{-} is determined as follows:

    \begin{eqnarray*} \gimel ^{-} & = &\left( \gimel _{1}^{-},\gimel _{2}^{-},\gimel _{3}^{-},...,\gimel _{n}^{-}\right) \\ & = &\left( \min\limits_{i}score(\gimel _{i1}),\min\limits_{i}score(\gimel _{i2}),\min\limits_{i}score(\gimel _{i3}),...,\min\limits_{i}score(\gimel _{in})\right). \end{eqnarray*}
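The argmax/argmin selection of the PIS and NIS can be sketched numerically. This illustration feeds the score values of Table 10 into the selection purely as sample input (the paper's Table 14 is built from its own score data, so the indices obtained here are illustrative only):

```python
# Score matrix: rows A1..A4, columns c1..c4 (values from Table 10,
# used purely as illustrative input for the argmax/argmin step).
scores = [
    [0.3591, 0.3889, 0.3743, 0.3888],  # A1
    [0.3692, 0.3930, 0.3717, 0.3981],  # A2
    [0.3376, 0.3540, 0.3552, 0.3699],  # A3
    [0.3505, 0.3612, 0.3488, 0.3585],  # A4
]
n_criteria = len(scores[0])

# For each criterion, the PyHFR value of the row attaining the maximum
# (minimum) score becomes the PIS (NIS) entry for that criterion.
pis_rows = [max(range(4), key=lambda i: scores[i][j]) for j in range(n_criteria)]
nis_rows = [min(range(4), key=lambda i: scores[i][j]) for j in range(n_criteria)]
```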

    Subsequently, calculate the geometric distance between each alternative and the PIS \gimel ^{+} using the formula enclosed:

    \begin{eqnarray*} d(\alpha _{ij},\gimel ^{+}) & = &\frac{1}{8}\left[ \frac{1}{\#h}\sum_{s = 1}^{\#h}\left( \left\vert \left( \downarrow{\mathcal{H}}_{ij(s)}\right) ^{2}-\left( \downarrow{\mathcal{H}}_{i(s)}^{+}\right) ^{2}\right\vert +\left\vert \left( \uparrow{\mathcal{H}}_{ij(s)}\right) ^{2}-\left( \uparrow{\mathcal{H}}_{i(s)}^{+}\right) ^{2}\right\vert \right) \right. \\ && \left. +\frac{1}{\#g}\sum_{s = 1}^{\#g}\left( \left\vert \left( \downarrow{\mathcal{U}}_{ij(s)}\right) ^{2}-\left( \downarrow{\mathcal{U}}_{i(s)}^{+}\right) ^{2}\right\vert +\left\vert \left( \uparrow{\mathcal{U}}_{ij(s)}\right) ^{2}-\left( \uparrow{\mathcal{U}}_{i(s)}^{+}\right) ^{2}\right\vert \right) \right], \end{eqnarray*}

    where i = 1, 2, 3, ..., n, and j = 1, 2, 3, ..., m.

    By routine calculation, the geometric distance between each alternative and the NIS \gimel ^{-} can be displayed as follows:

    \begin{eqnarray*} d(\alpha _{ij},\gimel ^{-}) & = &\frac{1}{8}\left[ \frac{1}{\#h}\sum_{s = 1}^{\#h}\left( \left\vert \left( \downarrow{\mathcal{H}}_{ij(s)}\right) ^{2}-\left( \downarrow{\mathcal{H}}_{i(s)}^{-}\right) ^{2}\right\vert +\left\vert \left( \uparrow{\mathcal{H}}_{ij(s)}\right) ^{2}-\left( \uparrow{\mathcal{H}}_{i(s)}^{-}\right) ^{2}\right\vert \right) \right. \\ && \left. +\frac{1}{\#g}\sum_{s = 1}^{\#g}\left( \left\vert \left( \downarrow{\mathcal{U}}_{ij(s)}\right) ^{2}-\left( \downarrow{\mathcal{U}}_{i(s)}^{-}\right) ^{2}\right\vert +\left\vert \left( \uparrow{\mathcal{U}}_{ij(s)}\right) ^{2}-\left( \uparrow{\mathcal{U}}_{i(s)}^{-}\right) ^{2}\right\vert \right) \right], \end{eqnarray*}

    where i = 1, 2, 3, ..., n and j = 1, 2, 3, ..., m.
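The distance measure above can be implemented directly. A minimal sketch, under the assumptions that \#h and \#g are the (equal) lengths of the paired hesitant membership and non-membership vectors and that the vectors are aligned element-wise; the data layout is illustrative:

```python
def pyhfr_distance(a, b):
    """Distance between two PyHFR values, each written as
    ((lower_H, lower_U), (upper_H, upper_U)) with aligned,
    equal-length hesitant vectors.

    Sketch of the squared-difference distance with the 1/8
    normalization from the formulas above.
    """
    (alh, alu), (auh, auu) = a
    (blh, blu), (buh, buu) = b
    # Membership part: averaged absolute differences of squared grades.
    h_part = sum(abs(x * x - y * y) + abs(u * u - v * v)
                 for x, y, u, v in zip(alh, blh, auh, buh)) / len(alh)
    # Non-membership part, analogous.
    u_part = sum(abs(x * x - y * y) + abs(u * u - v * v)
                 for x, y, u, v in zip(alu, blu, auu, buu)) / len(alu)
    return (h_part + u_part) / 8.0

value_a = (((0.2, 0.3), (0.1, 0.2)), ((0.3, 0.4), (0.2, 0.3)))
d_self = pyhfr_distance(value_a, value_a)  # identical values are at distance 0.0
```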

    Step 6. The relative closeness index of each alternative is determined as follows:

    \begin{equation*} RC(\alpha _{ij}) = \frac{d(\alpha _{ij},\gimel ^{+})}{d(\alpha _{ij},\gimel ^{-})+d(\alpha _{ij},\gimel ^{+})}. \end{equation*}

    Step 7. The ranking orders of alternatives may be established, and the most desired option with the shortest distance can be identified.

    In this section, we demonstrate an MADM problem to show the framework's applicability and versatility. A numerical illustration of selecting the most optimal wearable health technology device for a hospital is given. The key procedures are summarized as follows:

    Step 1. The decision-makers information is provided in Tables 1 and 3 in the form of PyHFR numbers.

    Step 2. The collected information compiled by decision-makers portrayed as PyHFR numbers is displayed in Table 4.

    Step 3. There is no need to normalize the information, since all criteria are of benefit type.

    Step 4. The PIS and NIS are determined based on the score value and are displayed in Table 14 as follows:

    Table 14.  Ideal solutions.
    \gimel ^{+} \gimel ^{-}
    c_{1} \left[ \begin{array}{c} \left(\begin{array}{c} \left(0.2976, 0.3325, 0.3681\right), \\ \left(0.2240, 0.2411, 0.2762\right) \end{array} \right), \\ \left(\begin{array}{c} \left(0.3525, 0.4162, 0.4281\right), \\ \left(0.2793, 0.2889, 0.3111\right) \end{array} \right) \end{array} \right] \left[ \begin{array}{c} \left(\begin{array}{c} \left(0.2872, 0.3293, 0.4168\right), \\ \left(0.2968, 0.3776, 0.4511\right) \end{array} \right), \\ \left(\begin{array}{c} \left(0.2279, 0.2800, 0.2940\right), \\ \left(0.2268, 0.3372, 0.3842\right) \end{array} \right) \end{array} \right]
    c_{2} \left[ \begin{array}{c} \left(\begin{array}{c} \left(0.2820, 0.4049, 0.4680\right), \\ \left(0.2535, 0.3022, 0.3289\right) \end{array} \right), \\ \left(\begin{array}{c} \left(0.2849, 0.5183, 0.5331\right), \\ \left(0.1990, 0.2509, 0.2817\right) \end{array} \right) \end{array} \right] \left[ \begin{array}{c} \left(\begin{array}{c} \left(0.2225, 0.2325, 0.4831\right), \\ \left(0.3344, 0.3770, 0.3919\right) \end{array} \right), \\ \left(\begin{array}{c} \left(0.2836, 0.3631, 0.3730\right), \\ \left(0.2278, 0.3392, 0.4229\right) \end{array} \right) \end{array} \right]
    c_{3} \left[ \begin{array}{c} \left(\begin{array}{c} \left(0.3170, 0.3291, 0.3525\right), \\ \left(0.3328, 0.3528, 0.4025\right) \end{array} \right), \\ \left(\begin{array}{c} \left(0.2919, 0.3228, 0.3348\right), \\ \left(0.2523, 0.2724, 0.3011\right) \end{array} \right) \end{array} \right] \left[ \begin{array}{c} \left(\begin{array}{c} \left(0.3170, 0.3291, 0.3525\right), \\ \left(0.3328, 0.3528, 0.4025\right) \end{array} \right), \\ \left(\begin{array}{c} \left(0.2919, 0.3228, 0.3348\right), \\ \left(0.2523, 0.2724, 0.3011\right) \end{array} \right) \end{array} \right]
    c_{4} \left[ \begin{array}{c} \left(\begin{array}{c} \left(0.3809, 0.4643, 0.5669\right), \\ \left(0.2502, 0.2660, 0.3177\right) \end{array} \right), \\ \left(\begin{array}{c} \left(0.2617, 0.3066, 0.3250\right), \\ \left(0.1922, 0.2264, 0.2564\right) \end{array} \right) \end{array} \right] \left[ \begin{array}{c} \left(\begin{array}{c} \left(0.2676, 0.3089, 0.3338\right), \\ \left(0.2188, 0.2845, 0.3448\right) \end{array} \right), \\ \left(\begin{array}{c} \left(0.3251, 0.3392, 0.3591\right), \\ \left(0.2874, 0.3235, 0.3502\right) \end{array} \right) \end{array} \right]


    Step 5. Compute the distance measures of the alternatives from the PIS and the NIS as follows:

    A_{1} A_{2} A_{3} A_{4}
    0.0614 0.0102 0.0790 0.0856


    and

    A_{1} A_{2} A_{3} A_{4}
    0.0619 0.0892 0.0723 0.0557


    Step 6. The relative closeness indices of the alternatives are calculated as follows:

    A_{1} A_{2} A_{3} A_{4}
    0.4980 0.1026 0.5221 0.6058
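These indices follow directly from the distances in Step 5; a short check:

```python
d_plus  = [0.0614, 0.0102, 0.0790, 0.0856]  # distances to the PIS (Step 5)
d_minus = [0.0619, 0.0892, 0.0723, 0.0557]  # distances to the NIS (Step 5)

# Relative closeness index RC = d+ / (d+ + d-), so smaller is better here.
rc = [dp / (dp + dm) for dp, dm in zip(d_plus, d_minus)]
# rc rounds to [0.4980, 0.1026, 0.5221, 0.6058], matching the values above.
```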


    Step 7. The alternatives are ranked in ascending order of the closeness index, as shown in Table 15 and Figure 4. The minimum value is associated with A_{2} . Therefore, option A_{2} is the best choice.

    Table 15.  Ranking order of alternatives.
    Closeness indices of alternatives Ranking Best alternative
    A_{1} A_{2} A_{3} A_{4}
    0.5403 0.4064 0.7182 0.4436 A_{2} < A_{4} < A_{1} < A_{3} A_{2}
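The ascending ordering in Table 15 can be reproduced with a short sort of its values:

```python
# Values from Table 15; smaller is better under this ranking.
values = {"A1": 0.5403, "A2": 0.4064, "A3": 0.7182, "A4": 0.4436}

ranking = sorted(values, key=values.get)  # ascending order
# ranking == ["A2", "A4", "A1", "A3"], so A2 is the best choice.
```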

    Figure 4.  The ranking order obtained through TOPSIS approach.

    Assessing the performance of wearable health technology devices has exceptional practical value for product optimization and technology enhancement. However, these solutions differ widely in their technical standards, functions, prices, and service objects, all within an uncertain objective environment, and decision-makers often encounter unreliable uncertainty when evaluating alternatives simultaneously. Identifying the most optimal wearable health technology devices is therefore a daunting task. To solve these problems, this paper introduced the concept of PyHFRSs, which are robust mathematical tools designed to effectively manage uncertain and imprecise information, with the aim of addressing the complexity and uncertainty associated with decision information and smart medical solutions. In addition, we introduced the novel PyHFRDWA, PyHFRDOWA, and PyHFRDHWA aggregation operators based on PyHFRSs, and provided a comprehensive description of their characteristics together with new score and accuracy functions for the proposed operators. Furthermore, the hybrid PyHFR-TOPSIS model for MADM and its stepwise algorithm were demonstrated using the proposed approach. The significance and validity of the evaluation framework were confirmed by numerical examples of the WHTDs evaluation system, and the sensitivity and comparative analyses show that the investigated models are more effective and useful than the existing approaches. The study enhances existing aggregation operators and the decision-making method based on them, thereby improving existing fuzzy MADM techniques, and it provides assessment and theoretical support for evaluating and selecting WHTDs in the medical system. This approach can serve as a valuable reference for designers of WHTDs in the medical system:
it helps to identify areas for product improvement, optimize performance, enhance sustainable development capabilities, expand market influence, and motivate long-term enterprise growth. The method can likewise serve hospitals and medical institutions seeking to identify the most effective solutions for their medical systems, with the potential to enhance the quality of medical services and improve the overall patient experience. In addition, this research presents valuable insights for the development and future direction of the national smart medical system, which will ultimately contribute to more effective healthcare for humanity.

    In upcoming research, we aim to extend this investigation to other fuzzy settings, and the model can be applied to additional specialized MADM challenges such as those in [45,46,47].

    Attaullah: Conceptualization, methodology, software, validation, formal analysis, writing-original draft preparation, writing-review and editing, supervision, project administration; Sultan Alyobi: Conceptualization, data curation, writing-original draft preparation, writing-review and editing, supervision, project administration, funding acquisition; Mohammed Alharthi: Conceptualization, software, validation, investigation, resources, visualization; Yasser Alrashedi: Conceptualization, methodology, validation, data curation, project administration. All authors have read and approved the final version of the manuscript for publication.

    The authors are thankful to the Deanship of Graduate Studies and Scientific Research at University of Bisha for supporting this work through the Fast-Track Research Support Program.

    The authors declare that they have no conflict of interest.



    [1] X. Gou, X. Xu, F. Deng, W. Zhou, E. H. Viedma, Medical health resources allocation evaluation in public health emergencies by an improved ORESTE method with linguistic preference orderings, Fuzzy Optim. Decis. Ma., 23 (2024), 1–27. http://dx.doi.org/10.1007/s10700-023-09409-3
    [2] R. Zhang, Z. Xu, X. Gou, ELECTRE II method based on the cosine similarity to evaluate the performance of financial logistics enterprises under double hierarchy hesitant fuzzy linguistic environment, Fuzzy Optim. Decis. Ma., 22 (2023), 23–49. http://dx.doi.org/10.1007/s10700-022-09382-3
    [3] X. Gou, Z. Xu, H. Liao, F. Herrera, Probabilistic double hierarchy linguistic term set and its use in designing an improved VIKOR method: The application in smart healthcare, J. Oper. Res. Soc., 72 (2021), 2611–2630. http://dx.doi.org/10.1080/01605682.2020.1806741
    [4] C. Jana, T. Senapati, M. Pal, R. R. Yager, Picture fuzzy Dombi aggregation operators: Application to MADM process, Appl. Soft Comput., 74 (2019), 99–109. http://dx.doi.org/10.1016/j.asoc.2018.10.021
    [5] B. Ning, G. Wei, R. Lin, Y. Guo, A novel MADM technique based on extended power generalized Maclaurin symmetric mean operators under probabilistic dual hesitant fuzzy setting and its application to sustainable suppliers selection, Expert Syst. Appl., 204 (2022), 117419. http://dx.doi.org/10.1016/j.eswa.2022.11741
    [6] A. R. Mishra, P. Rani, F. Cavallaro, I. M. Hezam, Intuitionistic fuzzy fairly operators and additive ratio assessment-based integrated model for selecting the optimal sustainable industrial building options, Sci. Rep., 13 (2023), 5055. http://dx.doi.org/10.1038/s41598-023-31843-x
    [7] M. Akram, A. Bashir, H. Garg, Decision-making model under complex picture fuzzy Hamacher aggregation operators, Comput. Appl. Math., 39 (2020), 1–38. http://dx.doi.org/10.1007/s40314-020-01251-2
    [8] M. Deveci, I. Gokasar, A. R. Mishra, P. Rani, Z. Ye, Evaluation of climate change-resilient transportation alternatives using fuzzy Hamacher aggregation operators based group decision-making model, Eng. Appl. Artif. Intel., 119 (2023), 105824. http://dx.doi.org/10.1016/j.engappai.2023.105824
    [9] Z. Pawlak, Rough sets, Int. J. Comput. Inform. Sci., 11 (1982), 341–356. http://dx.doi.org/10.1007/BF01001956
    [10] L. A. Zadeh, Fuzzy sets, Inf. Control, 8 (1965), 338–353. http://dx.doi.org/10.1016/S0019-9958(65)90241-X
    [11] D. Dubois, H. Prade, Rough fuzzy sets and fuzzy rough sets, Int. J. Gen. Syst., 17 (1990), 191–209. http://dx.doi.org/10.1080/03081079008935107
    [12] J. W. G. Busse, Rough sets, Adv. Imag. Elect. Phys., 94 (1995), 151–195. http://dx.doi.org/10.1016/S1076-5670(08)70145-9
    [13] W. Ziarko, Variable precision rough set model, J. Comput. Syst. Sci., 46 (1993), 39–59. http://dx.doi.org/10.1016/0022-0000(93)90048-2
    [14] R. Nowicki, R. Słowiński, J. Stefanowski, Evaluation of vibroacoustic diagnostic symptoms by means of the rough sets theory, Comput. Ind., 20 (1992), 141–152. http://dx.doi.org/10.1016/0166-3615(92)90048-R
    [15] R. Nowicki, R. Słowiński, J. Stefanowski, Rough sets analysis of diagnostic capacity of vibroacoustic symptoms, Comput. Math. Appl., 24 (1992), 109–123. http://dx.doi.org/10.1016/0898-1221(92)90159-F
    [16] D. Pei, A generalized model of fuzzy rough sets, Int. J. Gen. Syst., 34 (2005), 603–613. http://dx.doi.org/10.1080/03081070500096010
    [17] Y. J. Jiang, J. Chen, X. Y. Ruan, Fuzzy similarity-based rough set method for case-based reasoning and its application in tool selection, Int. J. Mach. Tool. Manu., 46 (2006), 107–113. http://dx.doi.org/10.1016/j.ijmachtools.2005.05.003
    [18] J. Ding, D. Li, C. Zhang, M. Lin, Three-way group decisions with evidential reasoning in incomplete hesitant fuzzy information systems for liver disease diagnosis, Appl. Intell., 53 (2023), 29693–29712. https://doi.org/10.1007/s10489-023-05116-z
    [19] C. Zhang, D. Li, J. Liang, B. Wang, MAGDM-oriented dual hesitant fuzzy multigranulation probabilistic models based on MULTIMOORA, Int. J. Mach. Learn. Cyb., 12 (2021), 1219–1241. http://dx.doi.org/10.1007/s13042-020-01230-3
    [20] Z. Wang, Fundamental properties of fuzzy rough sets based on triangular norms and fuzzy implications: The properties characterized by fuzzy neighborhood and fuzzy topology, Complex Intell. Syst., 10 (2024), 1103–1114. http://dx.doi.org/10.1007/s40747-023-01213-1
    [21] G. Qi, M. Atef, B. Yang, Fermatean fuzzy covering-based rough set and their applications in multi-attribute decision-making, Eng. Appl. Artif. Intel., 127 (2024), 107181. http://dx.doi.org/10.1016/j.engappai.2023.107181
    [22] A. Theerens, C. Cornelis, On the granular representation of fuzzy quantifier-based fuzzy rough sets, Inf. Sci., 665 (2024), 120385. https://dx.doi.org/10.48550/arXiv.2312.16704 doi: 10.48550/arXiv.2312.16704
    [23] J. J. Zhou, H. L. Yang, Multigranulation hesitant Pythagorean fuzzy rough sets and its application in multi-attribute decision making, J. Intell. Fuzzy Syst., 36 (2019), 5631–5644. http://dx.doi.org/10.3233/JIFS-181476 doi: 10.3233/JIFS-181476
    [24] H. Zhang, L. Shu, S. Liao, Hesitant fuzzy rough set over two universes and its application in decision making, Soft Comput., 21 (2017), 1803–1816. https://doi.org/10.1007/s00500-015-1882-3 doi: 10.1007/s00500-015-1882-3
    [25] C. L. Hwang, K. Yoon, Methods for multiple attribute decision making, In Multiple attribute decision making, Berlin: Springer, 1981, 58–191. http://dx.doi.org/10.1007/978-3-642-48318-9_3
    [26] A. Gaeta, V. Loia, F. Orciuoli, An explainable prediction method based on fuzzy rough sets, TOPSIS and hexagons of opposition: Applications to the analysis of information disorder, Inf. Sci., 659 (2024), 120050. http://dx.doi.org/10.1016/j.ins.2023.120050 doi: 10.1016/j.ins.2023.120050
    [27] V. Torra, Hesitant fuzzy sets, Int. J. Intell. Syst., 25 (2010), 529–539. http://dx.doi.org/10.1002/int.20418
    [28] M. Xia, Z. Xu, Hesitant fuzzy information aggregation in decision making, Int. J. Approx. Reason., 52 (2011), 395–407. http://dx.doi.org/10.1016/j.ijar.2010.09.002 doi: 10.1016/j.ijar.2010.09.002
    [29] H. B. Liu, Y. Liu, L. Xu, Dombi interval-valued hesitant fuzzy aggregation operators for information security risk assessment, Math. Probl. Eng., 2020. http://dx.doi.org/10.1155/2020/3198645
    [30] M. Akram, N. Yaqoob, G. Ali, W. Chammam, Extensions of Dombi aggregation operators for decision making under m-polar fuzzy information, J. Math., 2020 (2020), 1–20. http://dx.doi.org/10.1155/2020/4739567 doi: 10.1155/2020/4739567
    [31] A. Hussain, A. Alsanad, Novel Dombi aggregation operators in spherical cubic fuzzy information with applications in multiple attribute decision-making, Math. Probl. Eng., 2021 (2021), 1–25. http://dx.doi.org/10.1155/2021/9921553 doi: 10.1155/2021/9921553
    [32] L. Shi, J. Ye, Dombi aggregation operators of neutrosophic cubic sets for multiple attribute decision-making, Algorithms, 11 (2018), 29. http://dx.doi.org/10.3390/a11030029 doi: 10.3390/a11030029
    [33] X. Lu, J. Ye, Dombi aggregation operators of linguistic cubic variables for multiple attribute decision making, Information, 9 (2018), 188. http://dx.doi.org/10.3390/info9080188 doi: 10.3390/info9080188
    [34] R. Umer, M. Touqeer, A. H. Omar, A. Ahmadian, S. Salahshour, M. Ferrara, Selection of solar tracking system using extended TOPSIS technique with interval type-2 Pythagorean fuzzy numbers, Optim. Eng., 22 (2021), 2205–2231. http://dx.doi.org/10.1007/s11081-021-09623-1 doi: 10.1007/s11081-021-09623-1
    [35] D. Kacprzak, An extended TOPSIS method based on ordered fuzzy numbers for group decision making, Artif. Intell. Rev., 53 (2020), 2099–2129. http://dx.doi.org/10.1007/s10462-019-09728-1 doi: 10.1007/s10462-019-09728-1
    [36] P. Rani, A. R. Mishra, G. Rezaei, H. Liao, A. Mardani, Extended Pythagorean fuzzy TOPSIS method based on similarity measure for sustainable recycling partner selection, Int. J. Fuzzy Syst., 22 (2020), 735–747. http://dx.doi.org/10.1007/s40815-019-00689-9 doi: 10.1007/s40815-019-00689-9
    [37] J. Hu, J. Wu, J. Wu, TOPSIS hybrid multiattribute group decision-making based on interval Pythagorean fuzzy numbers, Math. Probl. Eng., 2021 (2021), 1–8. http://dx.doi.org/10.1155/2021/5735272 doi: 10.1155/2021/5735272
    [38] K. Zhang, J. Dai, A novel TOPSIS method with decision-theoretic rough fuzzy sets, Inf. Sci., 608 (2022), 1221–1244. http://dx.doi.org/10.1016/j.ins.2022.07.009 doi: 10.1016/j.ins.2022.07.009
    [39] V. Torra, Y. Narukawa, On hesitant fuzzy sets and decision, In 2009 IEEE International Conference on Fuzzy Systems, 2009, 1378–1382. http://dx.doi.org/10.1109/FUZZY.2009.5276884
    [40] K. T. Atanassov, Intuitionistic fuzzy sets, Physica-Verlag HD, 1999, 1–137. https://doi.org/10.1007/978-3-7908-1870-3_1
    [41] I. Beg, T. Rashid, Group decision making using intuitionistic hesitant fuzzy sets, Int. J. Fuzzy Log. Inte., 14 (2014), 181–187. http://dx.doi.org/10.5391/IJFIS.2014.14.3.181 doi: 10.5391/IJFIS.2014.14.3.181
    [42] X. Zhang, Z. Xu, Extension of TOPSIS to multiple criteria decision making with Pythagorean fuzzy sets, Int. J. Intell. Syst., 29 (2014), 1061–1078. http://dx.doi.org/10.1002/int.21676 doi: 10.1002/int.21676
    [43] R. Chinram, A. Hussain, T. Mahmood, M. I. Ali, EDAS method for multi-criteria group decision making based on intuitionistic fuzzy rough aggregation operators, IEEE Access, 9 (2021), 10199–10216. http://dx.doi.org/10.1109/ACCESS.2021.3049605 doi: 10.1109/ACCESS.2021.3049605
    [44] J. Dombi, A general class of fuzzy operators, the DeMorgan class of fuzzy operators and fuzziness measures induced by fuzzy operators, Fuzzy Set. Syst., 8 (1982), 149–163. http://dx.doi.org/10.1016/0165-0114(82)90005-7 doi: 10.1016/0165-0114(82)90005-7
    [45] X. Gou, Z. Xu, P. Ren, The properties of continuous Pythagorean fuzzy information, Int. J. Intell. Syst., 31 (2016), 401–424. http://dx.doi.org/10.1002/int.21788 doi: 10.1002/int.21788
    [46] H. Zhu, J. Zhao, 2DLIF-PROMETHEE based on the hybrid distance of 2-dimension linguistic intuitionistic fuzzy sets for multiple attribute decision making, Expert Syst. Appl., 202 (2022), 117219. http://dx.doi.org/10.1016/j.eswa.2022.117219 doi: 10.1016/j.eswa.2022.117219
    [47] P. Ren, Z. Liu, W. G. Zhang, X. Wu, Consistency and consensus driven for hesitant fuzzy linguistic decision making with pairwise comparisons, Expert Syst. Appl., 202 (2022), 117307. https://doi.org/10.48550/arXiv.2111.04092 doi: 10.48550/arXiv.2111.04092
  • This article has been cited by:

    1. Tao Li, Jiayi Sun, Liguo Fei, Application of Multiple-Criteria Decision-Making Technology in Emergency Decision-Making: Uncertainty, Heterogeneity, Dynamicity, and Interaction, 2025, 13, 2227-7390, 731, 10.3390/math13050731
  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)