Research article

Proximity algorithms for the L1L2/TVα image denoising model

  • Inspired by the ROF model and the L1/TV image denoising model, we propose a combined model to remove Gaussian noise and salt-and-pepper noise simultaneously. The model combines an L1 data-fidelity term, an L2 data-fidelity term and a fractional-order total variation regularization term, and is termed the L1L2/TVα model. We use a proximity algorithm to solve the proposed model, in which the non-differentiable term is handled via the fixed-point equations of the proximity operator. Numerical experiments show that the proposed model, implemented with the proximity algorithm, effectively removes Gaussian noise and salt-and-pepper noise. Varying the fractional order α from 0.8 to 1.9 in increments of 0.1, we observed that different images correspond to different optimal values of α.

    Citation: Donghong Zhao, Ruiying Huang, Li Feng. Proximity algorithms for the L1L2/TVα image denoising model[J]. AIMS Mathematics, 2024, 9(6): 16643-16665. doi: 10.3934/math.2024807




    Generally speaking, multi-criteria evaluation refers to evaluation conducted under multiple criteria that cannot substitute for one another. In the evaluation process, information is often missing because of the wide range of evaluation criteria: some information is available but very inaccurate, and some can only be given as a rough range based on empirical judgment. Therefore, when evaluating schemes, criteria that can be accurately quantified should be computed quantitatively, while criteria that are difficult or impossible to quantify require rough estimation, or qualitative analysis and graded semi-quantitative description by relevant experts. On account of the complexity and uncertainty of objective things, as well as the fuzziness introduced by human cognition and thinking, it is difficult for experts to give accurate, quantitative information in the evaluation process. Therefore, how to convert between qualitative and quantitative information, and how to reflect the soft reasoning ability of linguistic expression, have long been research hot-spots in uncertain system evaluation and decision-making.

    For example, the fuzzy set [1] shows the relationship between scheme and criterion in a quantitative way, which has been recognized by many scholars. Since then, quantitative decision tools, namely various extensions of fuzzy sets [2,3,4,5], have emerged to represent decision information. However, with the continual advancement and change of the decision-making environment, it is difficult for decision-makers to describe the decision information of a scheme under a certain criterion with a set of specific quantitative values. To remedy this deficiency, Zadeh proposed the linguistic variable (LV) [6], which for the first time displays decision-makers' information qualitatively, in a non-numerical way. Subsequently, building on the LV, decision tools such as uncertain LVs [7,8,9], hesitant fuzzy linguistic term sets (HFLTSs) [10,11] and terms with weakening modifiers [12] appeared to help decision-makers give qualitative decision information. In complex MADM problems, owing to the influence of complex information and the uncertainty of group cognition, people sometimes use a single linguistic term (LT) to describe attribute evaluation information, but sometimes need several LTs to express decision information at the same time [13,14]. For example, when students evaluate the quality of classroom teaching, they may use both "good" and "very good" simultaneously. Inspired by hesitant fuzzy sets [15] and the linguistic term set (LTS) [6], Rodriguez et al. [10] defined the HFLTS in 2012, which allows decision-makers to use several possible LTs to evaluate an attribute simultaneously.

    Although HFLTSs meet decision-makers' need to express information with multiple LTs, they assume that all possible LTs carry equal weight. Obviously, this assumption is too idealistic and inconsistent with reality: although decision-makers hesitate among several possible LTs, they may lean toward some of them under certain circumstances, so different LTs should carry different weights. Reflecting this reality, Pang et al. [16] proposed probabilistic linguistic term sets (PLTSs) in 2016, composed of possible LTs together with their associated probabilities. On the one hand, a PLTS contains several LTs to express the decision-maker's view, retaining the good properties of the HFLTS; on the other hand, it reflects the corresponding weights of those LTs. This way of displaying decision information, combining qualitative and quantitative elements, captures decision-makers' views without losing linguistic evaluation information, so evaluations accord better with reality. Even when a definite decision view is hard to give, a PLTS attaches weights to the corresponding views and provides relatively clear decision information to help experts solve decision problems. Briefly, the merit of PLTSs is that they express information more completely and accurately; hence PLTSs can be utilized to solve practical decision-making problems.

    The aggregation operator (AO) is an important tool for information fusion. Most AOs are built on special triangular norms: the Archimedean t-norm (ATN) and Archimedean t-conorm (ATC) families are composed of t-norms (TNs) and t-conorms (TCs), and from them some basic operations on fuzzy sets can be deduced. Linguistic scale functions can define different semantics for LTs in different linguistic environments. Meanwhile, the significant advantage of the Muirhead mean is that it can reflect the relationship among any number of parameters. Liu et al. [17] defined operations for PLTSs based on ATNs and linguistic scale functions, combined the Muirhead mean with PLTSs, and proposed the probabilistic linguistic Archimedean Muirhead mean, weighted Muirhead mean, dual Muirhead mean and weighted dual Muirhead mean operators. Since then, various aggregation operators have received growing attention [18,19,20]. In real MADM, it is rare for the evaluation attributes of the alternatives to be independent of each other. For example, there is a positive correlation between teaching quality and lesson design: the better the curriculum design, the higher the teaching quality. To capture such dependencies, Maclaurin [21] initially proposed the Maclaurin symmetric mean (MSM) operator, which can consider the relationship among multiple attribute values simultaneously and carries an adjustable parameter k. The MSM operator is flexible in modeling the relationship among several attribute values, so it is natural to develop MSM operators [22,23,24,25,26,27,28,29,30,31,32,33,34] for different aggregation environments. Besides, operational laws are essential in the aggregation process, and many of them can be generated from particular ATNs and ATCs.
Although MSM operators have attracted much attention since their appearance, they have a notable disadvantage: they focus only on the overall relationship and ignore heterogeneity among individuals. To address this handicap, Detemple and Robertson [35] proposed the generalized Maclaurin symmetric mean (GMSM), a new generalization of the MSM. The GMSM not only reflects the overall relationship but also considers the importance of individuals. Besides, compared with the MSM, the GMSM can avoid information loss, because the aggregation process adds equality constraints. Therefore, the GMSM is extensively employed in information fusion.

    Since PLTSs were introduced by integrating LTSs and hesitant fuzzy sets, they can successfully express random and fuzzy information. It is therefore worthwhile to develop a novel probabilistic linguistic information fusion tool (the PLGMSM operator) that not only combines the merits of PLTSs and the MSM, but also reflects the overall relationship while considering the importance of individuals. These considerations lead to the main objectives of this work:

    (1) To introduce new probabilistic linguistic GMSM operators and to investigate their properties as well as some special cases;

    (2) To construct an MADM algorithm based upon the proposed PLGMSM operators;

    (3) To present an example based on probabilistic linguistic information to demonstrate the validity of the proposed MADM approach;

    (4) To analyze the sensitivities of parameters in the proposed aggregation operators.

    To achieve these objectives, some probabilistic linguistic GMSM operators based on ATNs and ATCs are introduced for PLTSs in this work. The paper is organized as follows. Section 2 presents some related basic concepts, such as PLTSs, MSM operators, ATNs and ATCs. Section 3 introduces the PLGMSM operators based on ATNs and ATCs, together with their properties and special cases. Section 4 introduces the weighted PLGMSM (WPLGMSM) based on ATNs and ATCs, and likewise lists its properties and special cases. Section 5 constructs an MADM method for evaluating the quality of classroom teaching. Some comparisons are carried out in Section 6, and Section 7 concludes.

    Some basic concepts will be reviewed in this part, including linguistic term set (LTS), probabilistic linguistic term set (PLTS), ATN and ATC.

    Definition 2.1. [36] Let $S=\{s_v \mid v=-\tau,\ldots,-1,0,1,\ldots,\tau\}$ be a LTS, where $s_v$ expresses a possible value of a LV and $\tau$ is a positive integer. For any two LVs $s_\alpha, s_\beta \in S$, if $\alpha>\beta$, then $s_\alpha>s_\beta$.

    Definition 2.2. [16] Let $S=\{s_v \mid v=-\tau,\ldots,-1,0,1,\ldots,\tau\}$ be a LTS. A PLTS is defined as follows:

    $$L(p)=\left\{L^{(i)}\left(p^{(i)}\right)\,\middle|\,L^{(i)}\in S,\ p^{(i)}\ge 0,\ i=1,\ldots,\#L(p),\ \sum_{i=1}^{\#L(p)}p^{(i)}\le 1\right\}, \quad (2.1)$$

    where $L^{(i)}\left(p^{(i)}\right)$ denotes the $i$-th LV with probability $p^{(i)}$, and $\#L(p)$ represents the number of distinct elements in $L(p)$.

    In line with Definition 2.2, Eq (2.1) can be transformed into

    $$L(\tilde p)=\left\{L^{(i)}\left(\tilde p^{(i)}\right)\,\middle|\,L^{(i)}\in S,\ \tilde p^{(i)}\ge 0,\ i=1,\ldots,\#L(p),\ \sum_{i=1}^{\#L(p)}\tilde p^{(i)}=1\right\}, \quad (2.2)$$

    where $\tilde p^{(i)}=p^{(i)}\big/\sum_{i=1}^{\#L(p)}p^{(i)}$.
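The normalization of Eq (2.2) can be sketched in code. The representation below, a PLTS as a list of (subscript, probability) pairs, and the function name are illustrative choices, not from the paper:

```python
# Sketch of PLTS probability normalization per Eq (2.2).
# A PLTS is represented as a list of (subscript, probability) pairs.

def normalize_pltss(pltss):
    """Rescale the probabilities so that they sum to 1 (Eq 2.2)."""
    total = sum(p for _, p in pltss)
    return [(r, p / total) for r, p in pltss]

# Probabilities sum to 0.9 < 1, as Eq (2.1) allows:
L = [(0, 0.3), (1, 0.6)]
print(normalize_pltss(L))  # each probability divided by 0.9
```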

    The score of $L(p)$ can be calculated as

    $$E(L(p))=s_{\bar r}, \quad (2.3)$$

    where $\bar r=\sum_{i=1}^{\#L(p)}r^{(i)}p^{(i)}\big/\sum_{i=1}^{\#L(p)}p^{(i)}$ and $r^{(i)}$ is the subscript of $L^{(i)}$.

    The deviation degree of $L(p)$ is

    $$\sigma(L(p))=\left(\sum_{i=1}^{\#L(p)}\left(\left(r^{(i)}-\bar r\right)p^{(i)}\right)^{2}\right)^{1/2}\Big/\sum_{i=1}^{\#L(p)}p^{(i)}. \quad (2.4)$$

    For any two PLTSs $L_1(p)$ and $L_2(p)$:

    (1) if $E(L_1(p))>E(L_2(p))$, then $L_1(p)>L_2(p)$;

    (2) if $E(L_1(p))=E(L_2(p))$ and $\sigma(L_1(p))>\sigma(L_2(p))$, then $L_1(p)<L_2(p)$;

    (3) if $E(L_1(p))=E(L_2(p))$ and $\sigma(L_1(p))=\sigma(L_2(p))$, then $L_1(p)=L_2(p)$.
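The score (2.3), the deviation degree (2.4) and the comparison rules above admit a direct sketch. The pair-list representation and function names are illustrative assumptions, not the authors' code:

```python
import math

# Score (Eq 2.3) and deviation degree (Eq 2.4) of a PLTS, plus the
# comparison rules (1)-(3); a PLTS is a list of (subscript, prob) pairs.

def score(pltss):
    """Subscript r-bar of the score E(L(p)) = s_{r-bar} (Eq 2.3)."""
    tot = sum(p for _, p in pltss)
    return sum(r * p for r, p in pltss) / tot

def deviation(pltss):
    """Deviation degree sigma(L(p)) (Eq 2.4)."""
    tot = sum(p for _, p in pltss)
    rbar = score(pltss)
    return math.sqrt(sum(((r - rbar) * p) ** 2 for r, p in pltss)) / tot

def compare(l1, l2):
    """Return 1 if l1 > l2, -1 if l1 < l2, 0 if equal, per rules (1)-(3)."""
    e1, e2 = score(l1), score(l2)
    if e1 != e2:
        return 1 if e1 > e2 else -1
    s1, s2 = deviation(l1), deviation(l2)
    if s1 != s2:
        return -1 if s1 > s2 else 1  # larger deviation ranks lower
    return 0
```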

    To calculate with PLTSs more conveniently, the transformation function $g$ was introduced by Gou et al. [37]. Given a LTS $S$ and a PLTS $L(p)$, $g$ and $g^{-1}$ are defined as:

    $$g:[-\tau,\tau]\to[0,1],\qquad g(L(p))=\left\{\left(\frac{\nu+\tau}{2\tau}\right)\left(p^{(i)}\right)\right\}=\left\{\gamma\left(p^{(i)}\right)\right\},\ \gamma\in[0,1], \quad (2.5)$$

    and

    $$g^{-1}:[0,1]\to[-\tau,\tau],\qquad g^{-1}(g(L(p)))=\left\{s_{(2\gamma-1)\tau}\left(p^{(i)}\right)\,\middle|\,\gamma\in[0,1]\right\}=L(p). \quad (2.6)$$
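The transformation pair of Eqs (2.5) and (2.6) is a simple affine rescaling between subscripts in $[-\tau,\tau]$ and values in $[0,1]$. A minimal sketch, with an illustrative pair-list representation:

```python
# The transformation g of Eq (2.5) and its inverse of Eq (2.6).
# A PLTS is a list of (subscript, probability) pairs.

def g(pltss, tau):
    """Map each subscript v in [-tau, tau] to gamma = (v + tau) / (2 tau)."""
    return [((v + tau) / (2 * tau), p) for v, p in pltss]

def g_inv(hfe, tau):
    """Map each gamma in [0, 1] back to the subscript (2 gamma - 1) tau."""
    return [((2 * gamma - 1) * tau, p) for gamma, p in hfe]

L = [(-1, 0.3), (1, 0.7)]
print(g(L, tau=3))  # subscripts -1 and 1 become 1/3 and 2/3
```

Round-tripping through `g` and `g_inv` recovers the original subscripts up to floating-point error.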

    AOs are very important tools for information fusion in decision-making problems. However, most AOs are defined based on TNs and TCs, so it is essential to review TNs and TCs before the operations on PLTSs are given.

    Definition 2.3. [37] A function $\varsigma:[0,1]^2\to[0,1]$ is called a TN if it meets the following four requirements for all $\alpha,\beta,\delta\in[0,1]$:

    (1) $\varsigma(\alpha,\beta)=\varsigma(\beta,\alpha)$;

    (2) $\varsigma(\alpha,\varsigma(\beta,\delta))=\varsigma(\varsigma(\alpha,\beta),\delta)$;

    (3) $\varsigma(\alpha,\beta)\le\varsigma(\alpha,\delta)$ if $\beta\le\delta$;

    (4) $\varsigma(1,\alpha)=\alpha$.

    A TC $\varsigma^{*}$ [38] is a mapping from $[0,1]^2$ to $[0,1]$ meeting the following four requirements for all $\alpha,\beta,\delta\in[0,1]$:

    (1) $\varsigma^{*}(\alpha,\beta)=\varsigma^{*}(\beta,\alpha)$;

    (2) $\varsigma^{*}(\alpha,\varsigma^{*}(\beta,\delta))=\varsigma^{*}(\varsigma^{*}(\alpha,\beta),\delta)$;

    (3) $\varsigma^{*}(\alpha,\beta)\le\varsigma^{*}(\alpha,\delta)$ if $\beta\le\delta$;

    (4) $\varsigma^{*}(0,\alpha)=\alpha$.

    The TN $\varsigma$ and TC $\varsigma^{*}$ are dual, that is, $\varsigma^{*}(\alpha,\beta)=1-\varsigma(1-\alpha,1-\beta)$.

    A TN $\varsigma$ is an Archimedean t-norm (ATN) if for any $(a,b)\in(0,1)^2$ there exists an integer $n$ such that $\varsigma(\underbrace{a,\ldots,a}_{n\ \text{times}})<b$. A TC $\varsigma^{*}$ is an Archimedean t-conorm (ATC) if for any $(a,b)\in(0,1)^2$ there exists an integer $n$ such that $\varsigma^{*}(\underbrace{a,\ldots,a}_{n\ \text{times}})>b$. In particular, if $\varsigma$ and $\varsigma^{*}$ satisfy the following three requirements:

    (1) $\varsigma$ and $\varsigma^{*}$ are continuous;

    (2) $\varsigma$ and $\varsigma^{*}$ are strictly increasing;

    (3) $\varsigma(\alpha,\alpha)<\alpha$ and $\varsigma^{*}(\alpha,\alpha)>\alpha$ for all $\alpha\in(0,1)$,

    then $\varsigma$ and $\varsigma^{*}$ are a strict ATN and a strict ATC, respectively.

    Assume there is an additive generator $J:[0,1]\to[0,\infty)$. A strict ATN $\varsigma(\alpha,\beta)$ can be defined by

    $$\varsigma(\alpha,\beta)=J^{-1}(J(\alpha)+J(\beta)), \quad (2.7)$$

    where $J^{-1}$ is the inverse of $J$. Similarly, its dual ATC $\varsigma^{*}(\alpha,\beta)$ can be generated by the additive generator $J^{*}$:

    $$\varsigma^{*}(\alpha,\beta)=(J^{*})^{-1}(J^{*}(\alpha)+J^{*}(\beta)), \quad (2.8)$$

    where $J^{*}(\alpha)=J(1-\alpha)$, $(J^{*})^{-1}(\alpha)=1-J^{-1}(\alpha)$, and $(J^{*})^{-1}$ is the inverse of $J^{*}$.

    Moreover, we can also derive $\varsigma^{*}(\alpha,\beta)$ as

    $$\varsigma^{*}(\alpha,\beta)=1-J^{-1}(J(1-\alpha)+J(1-\beta)). \quad (2.9)$$
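Equations (2.7)-(2.9) can be checked concretely with the algebraic generator $J(x)=-\ln x$ (used again in Section 3), which generates the product TN and the probabilistic-sum TC. A minimal sketch:

```python
import math

# Additive-generator construction of Eqs (2.7)-(2.9), instantiated with
# the algebraic generator J(x) = -ln x, for which J^{-1}(x) = e^{-x}.

def J(x):
    return -math.log(x)

def J_inv(x):
    return math.exp(-x)

def tnorm(a, b):
    """Eq (2.7): the generated ATN; here it is the product a * b."""
    return J_inv(J(a) + J(b))

def tconorm(a, b):
    """Eq (2.9): the dual ATC; here it is a + b - a * b."""
    return 1 - J_inv(J(1 - a) + J(1 - b))

print(tnorm(0.4, 0.5), tconorm(0.4, 0.5))  # product and probabilistic sum
```

Boundary condition (4) of Definition 2.3 also holds: `tnorm(1.0, a)` returns `a` because $J(1)=0$.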

    The MSM operator [21] was originally proposed by Maclaurin and later generalized by Detemple and Robertson [35]; its merit is that it reflects the relationship among multiple input parameters. The MSM is defined as follows:

    Definition 2.4. [21] Let $\xi_1,\xi_2,\ldots,\xi_n$ be $n$ nonnegative real numbers and $m=1,\ldots,n$. The MSM operator is expressed as

    $$\mathrm{MSM}^{(m)}(\xi_1,\xi_2,\ldots,\xi_n)=\left(\frac{\sum_{1\le i_1<\cdots<i_m\le n}\prod_{j=1}^{m}\xi_{i_j}}{C_n^m}\right)^{\frac{1}{m}}, \quad (2.10)$$

    where $(i_1,i_2,\ldots,i_n)$ is a permutation of $(1,2,\ldots,n)$.

    Definition 2.5. [35] Let $\xi_1,\xi_2,\ldots,\xi_n$ be $n$ nonnegative real numbers and $u_j\ge 0$. The GMSM operator is expressed as

    $$\mathrm{GMSM}^{(m,u_1,\ldots,u_m)}(\xi_1,\xi_2,\ldots,\xi_n)=\left(\frac{\sum_{1\le i_1<\cdots<i_m\le n}\prod_{j=1}^{m}\xi_{i_j}^{u_j}}{C_n^m}\right)^{\frac{1}{u_1+u_2+\cdots+u_m}}, \quad (2.11)$$

    where $(i_1,i_2,\ldots,i_n)$ is a permutation of $(1,2,\ldots,n)$, $i=1,2,\ldots,n$ and $j=1,2,\ldots,m$.

    The properties of $\mathrm{GMSM}^{(m,u_1,\ldots,u_m)}$ are as follows:

    (a) Idempotency: $\mathrm{GMSM}^{(m,u_1,\ldots,u_m)}(\xi,\ldots,\xi)=\xi$ if $\xi_i=\xi$ for all $i$;

    (b) Monotonicity: $\mathrm{GMSM}^{(m,u_1,\ldots,u_m)}(\xi_1,\ldots,\xi_n)\le\mathrm{GMSM}^{(m,u_1,\ldots,u_m)}(\eta_1,\ldots,\eta_n)$ if $\xi_i\le\eta_i$ for all $i$;

    (c) Boundedness: $\min_i(\xi_i)\le\mathrm{GMSM}^{(m,u_1,\ldots,u_m)}(\xi_1,\xi_2,\ldots,\xi_n)\le\max_i(\xi_i)$.

    In some special situations, $\mathrm{GMSM}^{(m,u_1,\ldots,u_m)}$ reduces to concrete operators when $m$ takes particular values:

    (1) When $m=2$, it reduces to the BM operator:

    $$\mathrm{GMSM}^{(2,u_1,u_2)}(\xi_1,\xi_2,\ldots,\xi_n)=\left(\frac{\sum_{1\le i<j\le n}\xi_i^{u_1}\xi_j^{u_2}}{n(n-1)}\right)^{\frac{1}{u_1+u_2}}. \quad (2.12)$$

    (2) When $m=3$, it reduces to the generalized BM operator:

    $$\mathrm{GMSM}^{(3,u_1,u_2,u_3)}(\xi_1,\xi_2,\ldots,\xi_n)=\left(\frac{\sum_{1\le i,j,k\le n,\ i\ne j\ne k}\xi_i^{u_1}\xi_j^{u_2}\xi_k^{u_3}}{n(n-1)(n-2)}\right)^{\frac{1}{u_1+u_2+u_3}}. \quad (2.13)$$

    (3) When $u_1=u_2=\cdots=u_m=1$, it reduces to the MSM operator:

    $$\mathrm{GMSM}^{(m,1,\ldots,1)}(\xi_1,\xi_2,\ldots,\xi_n)=\left(\frac{\sum_{1\le i_1<\cdots<i_m\le n}\prod_{j=1}^{m}\xi_{i_j}}{C_n^m}\right)^{\frac{1}{m}}. \quad (2.14)$$
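The GMSM of Eq (2.11) and its special cases can be implemented directly for crisp nonnegative inputs. A sketch (function names are illustrative; `math.comb(n, m)` is $C_n^m$):

```python
import math
from itertools import combinations

# Direct implementation of the GMSM operator of Eq (2.11) for crisp
# nonnegative inputs (not yet the probabilistic linguistic version).

def gmsm(xs, us):
    """GMSM^{(m, u_1, ..., u_m)}(xs) with m = len(us)."""
    m, n = len(us), len(xs)
    total = sum(
        math.prod(xs[i] ** u for i, u in zip(idx, us))
        for idx in combinations(range(n), m)  # all 1 <= i_1 < ... < i_m <= n
    )
    return (total / math.comb(n, m)) ** (1 / sum(us))

# Idempotency (property a): equal inputs are returned unchanged.
print(gmsm([5.0, 5.0, 5.0], [1, 2]))
# With u = (1, ..., 1), gmsm coincides with the MSM of Eq (2.10).
print(gmsm([2.0, 3.0, 4.0], [1, 1]))
```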

    In terms of ATNs and ATCs, a series of operational laws for PLTSs can be defined as follows:

    Definition 3.1. Let $L_1(p), L_2(p)$ be two PLTSs. Then

    (1)

    $$L_1(p)\oplus L_2(p)=g^{-1}\left(\bigcup_{\eta_i^{(t)}\in g(L_i(p)),\,i=1,2}\left\{\varsigma^{*}\left(\eta_1^{(t)},\eta_2^{(t)}\right)\left(p_1^{(t)}p_2^{(t)}\right)\right\}\right)=g^{-1}\left(\bigcup_{\eta_i^{(t)}\in g(L_i(p)),\,i=1,2}\left\{\left(1-J^{-1}\left(J\left(1-\eta_1^{(t)}\right)+J\left(1-\eta_2^{(t)}\right)\right)\right)\left(p_1^{(t)}p_2^{(t)}\right)\right\}\right);$$

    (2)

    $$L_1(p)\otimes L_2(p)=g^{-1}\left(\bigcup_{\eta_i^{(t)}\in g(L_i(p)),\,i=1,2}\left\{\varsigma\left(\eta_1^{(t)},\eta_2^{(t)}\right)\left(p_1^{(t)}p_2^{(t)}\right)\right\}\right)=g^{-1}\left(\bigcup_{\eta_i^{(t)}\in g(L_i(p)),\,i=1,2}\left\{J^{-1}\left(J\left(\eta_1^{(t)}\right)+J\left(\eta_2^{(t)}\right)\right)\left(p_1^{(t)}p_2^{(t)}\right)\right\}\right);$$

    (3)

    $$\lambda L_1(p)=g^{-1}\left(\bigcup_{\eta_1^{(t)}\in g(L_1(p))}\left\{\left(1-J^{-1}\left(\lambda J\left(1-\eta_1^{(t)}\right)\right)\right)\left(p_1^{(t)}\right)\right\}\right),\quad \text{for all }\lambda\in\mathbb{R};$$

    (4)

    $$\left(L_1(p)\right)^{\lambda}=g^{-1}\left(\bigcup_{\eta_1^{(t)}\in g(L_1(p))}\left\{J^{-1}\left(\lambda J\left(\eta_1^{(t)}\right)\right)\left(p_1^{(t)}\right)\right\}\right),\quad \text{for all }\lambda\in\mathbb{R}.$$

    Remark 3.1. In Definition 3.1, $J$ is a generator of an ATN; when $J$ takes different functions satisfying the conditions of a generator, we obtain different operations on PLTSs. Therefore, Definition 3.1 can be regarded as a unified expression of some existing operations on PLTSs.

    In what follows, $L_i(p)=\{L_i^{(t)}(p_i^{(t)})\mid t=1,2,\ldots,\#L_i(p)\}$ if not specifically stated. Based upon the operational laws of PLTSs defined in Definition 3.1, the PLGMSM operator can be proposed as follows.

    Definition 3.2. Let $L_1(p),\ldots,L_n(p)$ be a group of PLTSs. The probabilistic linguistic generalized MSM (PLGMSM) operator based on ATNs and ATCs is a function $\mathrm{PLGMSM}:\Omega^n\to\Omega$ with

    $$\mathrm{PLGMSM}^{(m,u_1,\ldots,u_m)}\left(L_1(p),\ldots,L_n(p)\right)=\left(\frac{\bigoplus_{1\le i_1<\cdots<i_m\le n}\left(\bigotimes_{j=1}^{m}L(p)_{i_j}^{u_j}\right)}{C_n^m}\right)^{\frac{1}{u_1+u_2+\cdots+u_m}}, \quad (3.1)$$

    where $\Omega$ is the set of all PLTSs.

    According to Definition 3.1 and Definition 3.2, the following result can be derived.

    Theorem 3.1. Let $L_1(p),\ldots,L_n(p)$ be a group of PLTSs. Then

    $$\mathrm{PLGMSM}^{(m,u_1,\ldots,u_m)}\left(L_1(p),\ldots,L_n(p)\right)=g^{-1}\left(\bigcup_{\eta_i^{(t)}\in g(L_i(p))}\left\{J^{-1}\left(\frac{1}{\sum_{k=1}^{m}u_k}J\left(1-J^{-1}\left(\frac{1}{C_n^m}\sum_{1\le i_1<\cdots<i_m\le n}J\left(1-J^{-1}\left(\sum_{j=1}^{m}u_jJ\left(\eta_{i_j}^{(t)}\right)\right)\right)\right)\right)\right)\left(\prod_{1\le i_1<\cdots<i_m\le n}\prod_{j=1}^{m}p_{i_j}^{(t)}\right)\right\}\right). \quad (3.2)$$

    Proof. According to Definition 3.1, we have

    $$\left(L(p)_{i_j}\right)^{u_j}=g^{-1}\left(\bigcup_{\eta_i^{(t)}\in g(L_i(p))}\left\{J^{-1}\left(u_jJ\left(\eta_{i_j}^{(t)}\right)\right)\left(p_{i_j}^{(t)}\right)\right\}\right),$$

    and

    $$\bigotimes_{j=1}^{m}\left(L(p)_{i_j}\right)^{u_j}=g^{-1}\left(\bigcup_{\eta_i^{(t)}\in g(L_i(p))}\left\{J^{-1}\left(\sum_{j=1}^{m}J\left(J^{-1}\left(u_jJ\left(\eta_{i_j}^{(t)}\right)\right)\right)\right)\left(\prod_{j=1}^{m}p_{i_j}^{(t)}\right)\right\}\right)=g^{-1}\left(\bigcup_{\eta_i^{(t)}\in g(L_i(p))}\left\{J^{-1}\left(\sum_{j=1}^{m}u_jJ\left(\eta_{i_j}^{(t)}\right)\right)\left(\prod_{j=1}^{m}p_{i_j}^{(t)}\right)\right\}\right),$$

    therefore

    $$\bigoplus_{1\le i_1<\cdots<i_m\le n}\left(\bigotimes_{j=1}^{m}\left(L(p)_{i_j}\right)^{u_j}\right)=g^{-1}\left(\bigcup_{\eta_i^{(t)}\in g(L_i(p))}\left\{\left(1-J^{-1}\left(\sum_{1\le i_1<\cdots<i_m\le n}J\left(1-J^{-1}\left(\sum_{j=1}^{m}u_jJ\left(\eta_{i_j}^{(t)}\right)\right)\right)\right)\right)\left(\prod_{1\le i_1<\cdots<i_m\le n}\prod_{j=1}^{m}p_{i_j}^{(t)}\right)\right\}\right).$$

    Furthermore, we have

    $$\frac{\bigoplus_{1\le i_1<\cdots<i_m\le n}\left(\bigotimes_{j=1}^{m}L(p)_{i_j}^{u_j}\right)}{C_n^m}=g^{-1}\left(\bigcup_{\eta_i^{(t)}\in g(L_i(p))}\left\{\left(1-J^{-1}\left(\frac{1}{C_n^m}\sum_{1\le i_1<\cdots<i_m\le n}J\left(1-J^{-1}\left(\sum_{j=1}^{m}u_jJ\left(\eta_{i_j}^{(t)}\right)\right)\right)\right)\right)\left(\prod_{1\le i_1<\cdots<i_m\le n}\prod_{j=1}^{m}p_{i_j}^{(t)}\right)\right\}\right).$$

    So

    $$\left(\frac{\bigoplus_{1\le i_1<\cdots<i_m\le n}\left(\bigotimes_{j=1}^{m}L(p)_{i_j}^{u_j}\right)}{C_n^m}\right)^{\frac{1}{u_1+u_2+\cdots+u_m}}=g^{-1}\left(\bigcup_{\eta_i^{(t)}\in g(L_i(p))}\left\{J^{-1}\left(\frac{1}{\sum_{k=1}^{m}u_k}J\left(1-J^{-1}\left(\frac{1}{C_n^m}\sum_{1\le i_1<\cdots<i_m\le n}J\left(1-J^{-1}\left(\sum_{j=1}^{m}u_jJ\left(\eta_{i_j}^{(t)}\right)\right)\right)\right)\right)\right)\left(\prod_{1\le i_1<\cdots<i_m\le n}\prod_{j=1}^{m}p_{i_j}^{(t)}\right)\right\}\right).$$

    This completes the proof.

    Theorem 3.2. Let $L_1(p),\ldots,L_n(p)$ be a group of PLTSs. If $\left(L_1'(p),L_2'(p),\ldots,L_n'(p)\right)$ is a permutation of $\left(L_1(p),\ldots,L_n(p)\right)$, then

    $$\mathrm{PLGMSM}^{(m,u_1,\ldots,u_m)}\left(L_1'(p),\ldots,L_n'(p)\right)=\mathrm{PLGMSM}^{(m,u_1,\ldots,u_m)}\left(L_1(p),\ldots,L_n(p)\right). \quad (3.3)$$

    Proof. The proof is similar to Property 3 in [24], so the details are omitted.

    In this section, some special PLGMSM operators will be investigated when the parameters take different values and the generator takes different functions.

    (a) When $m=1$, the PLGMSM operator based on ATNs and ATCs reduces to

    $$\mathrm{PLMSM}\left(L_1(p),\ldots,L_n(p)\right)=g^{-1}\left(\bigcup_{\eta_i^{(t)}}\left\{J^{-1}\left(\frac{1}{u_1}J\left(1-J^{-1}\left(\frac{1}{n}\sum_{1\le i\le n}J\left(1-J^{-1}\left(u_1J\left(\eta_i^{(t)}\right)\right)\right)\right)\right)\right)\left(\prod_{1\le i\le n}p_i^{(t)}\right)\right\}\right). \quad (3.4)$$

    (b) When $m=2$, the PLGMSM operator based on ATNs and ATCs reduces to

    $$\mathrm{PLGMSM}^{(2,u_1,u_2)}\left(L_1(p),\ldots,L_n(p)\right)=g^{-1}\left(\bigcup_{\eta_i^{(t)}}\left\{J^{-1}\left(\frac{1}{u_1+u_2}J\left(1-J^{-1}\left(\frac{1}{n(n-1)}\sum_{1\le i_1<i_2\le n}J\left(1-J^{-1}\left(\sum_{j=1}^{2}u_jJ\left(\eta_{i_j}^{(t)}\right)\right)\right)\right)\right)\right)\left(\prod_{1\le i_1<i_2\le n}\prod_{j=1}^{2}p_{i_j}^{(t)}\right)\right\}\right). \quad (3.5)$$

    (c) When $u_1=u_2=\cdots=u_m=1$, the PLGMSM operator based on ATNs and ATCs reduces to

    $$\mathrm{PLGMSM}^{(m,1,\ldots,1)}\left(L_1(p),\ldots,L_n(p)\right)=g^{-1}\left(\bigcup_{\eta_i^{(t)}}\left\{J^{-1}\left(\frac{1}{m}J\left(1-J^{-1}\left(\frac{1}{C_n^m}\sum_{1\le i_1<\cdots<i_m\le n}J\left(1-J^{-1}\left(\sum_{j=1}^{m}J\left(\eta_{i_j}^{(t)}\right)\right)\right)\right)\right)\right)\left(\prod_{1\le i_1<\cdots<i_m\le n}\prod_{j=1}^{m}p_{i_j}^{(t)}\right)\right\}\right). \quad (3.6)$$

    (a) If $J(x)=-\ln x$, then $J^{-1}(x)=e^{-x}$, and we obtain the probabilistic linguistic Archimedean algebraic GMSM (PLAAGMSM) operator [16]:

    $$\mathrm{PLAAGMSM}^{(m,u_1,\ldots,u_m)}\left(L_1(p),\ldots,L_n(p)\right)=g^{-1}\left(\bigcup_{\eta_i^{(t)}}\left\{\left(1-\prod_{1\le i_1<\cdots<i_m\le n}\left(1-\prod_{j=1}^{m}\left(\eta_{i_j}^{(t)}\right)^{u_j}\right)^{\frac{1}{C_n^m}}\right)^{\frac{1}{\sum_{k=1}^{m}u_k}}\left(\prod_{1\le i_1<\cdots<i_m\le n}\prod_{j=1}^{m}p_{i_j}^{(t)}\right)\right\}\right). \quad (3.7)$$

    In this situation, the PLGMSM reduces to the PLAAGMSM.

    (b) If $J(x)=\ln\frac{2-x}{x}$, then $J^{-1}(x)=\frac{2}{e^{x}+1}$, and we obtain the probabilistic linguistic Archimedean Einstein GMSM (PLAEGMSM) operator:

    $$\mathrm{PLAEGMSM}^{(m,u_1,\ldots,u_m)}\left(L_1(p),\ldots,L_n(p)\right)=g^{-1}\left(\bigcup_{\eta_i^{(t)}}\left\{\frac{2(A-1)^{\frac{1}{\sum_{k=1}^{m}u_k}}}{(A+3)^{\frac{1}{\sum_{k=1}^{m}u_k}}+(A-1)^{\frac{1}{\sum_{k=1}^{m}u_k}}}\left(\prod_{1\le i_1<\cdots<i_m\le n}\prod_{j=1}^{m}p_{i_j}^{(t)}\right)\right\}\right), \quad (3.8)$$

    in which

    $$A=\prod_{1\le i_1<\cdots<i_m\le n}\left(\frac{\prod_{j=1}^{m}\left(2-\eta_{i_j}^{(t)}\right)^{u_j}+3\prod_{j=1}^{m}\left(\eta_{i_j}^{(t)}\right)^{u_j}}{\prod_{j=1}^{m}\left(2-\eta_{i_j}^{(t)}\right)^{u_j}-\prod_{j=1}^{m}\left(\eta_{i_j}^{(t)}\right)^{u_j}}\right)^{\frac{1}{C_n^m}}.$$

    (c) If $J(x)=\ln\frac{\varepsilon+(1-\varepsilon)x}{x}$ $(\varepsilon>0)$, then $J^{-1}(x)=\frac{\varepsilon}{e^{x}+\varepsilon-1}$, and we obtain the probabilistic linguistic Archimedean Hamacher GMSM (PLAHGMSM) operator:

    $$\mathrm{PLAHGMSM}^{(m,u_1,\ldots,u_m)}\left(L_1(p),\ldots,L_n(p)\right)=g^{-1}\left(\bigcup_{\eta_i^{(t)}}\left\{\frac{\varepsilon}{\left(\frac{\varepsilon^2}{b-1}+1\right)^{\frac{1}{\sum_{k=1}^{m}u_k}}+\varepsilon-1}\left(\prod_{1\le i_1<\cdots<i_m\le n}\prod_{j=1}^{m}p_{i_j}^{(t)}\right)\right\}\right), \quad (3.9)$$

    in which

    $$b=\prod_{1\le i_1<\cdots<i_m\le n}\left(\frac{\varepsilon^2}{a-1}+1\right)^{\frac{1}{C_n^m}},$$

    and where

    $$a=\prod_{j=1}^{m}\left(\frac{\varepsilon+(1-\varepsilon)\eta_{i_j}^{(t)}}{\eta_{i_j}^{(t)}}\right)^{u_j}.$$

    In this case, when $\varepsilon=1$, the PLAHGMSM reduces to the PLAAGMSM.

    Example 3.1. Let $L_1=\{s_{-1}(1)\}$, $L_2=\{s_{-2}(1)\}$ and $L_3=\{s_0(0.3),s_1(0.7)\}$ be three PLTSs with $\tau=2$. Applying the function $g$ converts $L_i$ $(i=1,2,3)$ into

    $$g(L_1)=\{0.25(1)\},\quad g(L_2)=\{0(1)\},\quad g(L_3)=\{0.5(0.3),0.75(0.7)\}.$$

    Besides, the previous operators can be used to aggregate $L_i$ $(i=1,2,3)$. Here, set $m=2$, $u_1=1$, $u_2=2$. In line with the above formula, we get

    $$\mathrm{PLAAGMSM}^{(2,1,2)}\left(L_1(p),L_2(p),L_3(p)\right)=g^{-1}\left(\bigcup_{\eta_i^{(t)}\in g(L_i(p))}\left\{\left(1-\prod_{1\le i_1<i_2\le 3}\left(1-\prod_{j=1}^{2}\left(\eta_{i_j}^{(t)}\right)^{u_j}\right)^{\frac{1}{C_3^2}}\right)^{\frac{1}{1+2}}\left(\prod_{1\le i_1<i_2\le 3}\prod_{j=1}^{2}p_{i_j}^{(t)}\right)\right\}\right)=\{s_{-1.34}(0.3),\ s_{-0.8}(0.7)\}.$$
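The algebraic case can also be evaluated numerically on the $g$-transformed values. The helper below is an illustrative reading of the closed form in Eq (3.7) (function names hypothetical, not the authors' code), and its numeric output may differ slightly from the hand-computed values above depending on how the formula's groupings are read:

```python
import math
from itertools import combinations, product

# Hedged numeric sketch of the algebraic-generator PLGMSM (Eq 3.7),
# operating on g-transformed pairs (gamma, probability) as in Example 3.1.

def plaagmsm(hfes, us, tau):
    """hfes: one list of (gamma, prob) pairs per PLTS.
    Returns a list of (subscript, prob) pairs via g^{-1} (Eq 2.6)."""
    n, m = len(hfes), len(us)
    c = math.comb(n, m)
    out = []
    # One aggregated element per choice of one (gamma, prob) from each PLTS.
    for chosen in product(*hfes):
        gammas = [gm for gm, _ in chosen]
        prob = math.prod(p for _, p in chosen)
        inner = math.prod(
            (1 - math.prod(gammas[i] ** u for i, u in zip(idx, us))) ** (1 / c)
            for idx in combinations(range(n), m)
        )
        gamma = (1 - inner) ** (1 / sum(us))
        out.append(((2 * gamma - 1) * tau, prob))  # map back via g^{-1}
    return out

res = plaagmsm([[(0.25, 1.0)], [(0.0, 1.0)], [(0.5, 0.3), (0.75, 0.7)]],
               us=[1, 2], tau=2)
print(res)  # two aggregated elements, probabilities 0.3 and 0.7
```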

    Due to individuals' different background knowledge and preferences, their importance should differ. Hence, it is essential to incorporate individual weight information so that the decision results are more reasonable and scientific. In this section, the weighted probabilistic linguistic generalized MSM operators based on ATNs and ATCs will be introduced.

    Definition 4.1. Let $L_1(p),\ldots,L_n(p)$ be $n$ PLTSs and $w_i$ be the weight of $L_i(p)$ with $w_i\in[0,1]$ and $\sum_{i=1}^{n}w_i=1$. The WPLGMSM is a function $\mathrm{WPLGMSM}^{(m,u_1,\ldots,u_m)}:\Omega^n\to\Omega$ with

    $$\mathrm{WPLGMSM}^{(m,u_1,\ldots,u_m)}\left(L_1(p),\ldots,L_n(p)\right)=\left(\frac{\bigoplus_{1\le i_1<\cdots<i_m\le n}\left(\bigotimes_{j=1}^{m}\left((nw_{i_j})L(p)_{i_j}\right)^{u_j}\right)}{C_n^m}\right)^{\frac{1}{u_1+u_2+\cdots+u_m}}, \quad (4.1)$$

    where $\Omega$ is the set of all PLTSs.

    In the light of Definition 4.1, the following result can be derived.

    Theorem 4.1. Let $L_1(p),\ldots,L_n(p)$ be $n$ PLTSs and $w_i$ be the weight of $L_i(p)$ with $w_i\in[0,1]$ and $\sum_{i=1}^{n}w_i=1$. Then

    $$\mathrm{WPLGMSM}^{(m,u_1,\ldots,u_m)}\left(L_1(p),\ldots,L_n(p)\right)=g^{-1}\left(\bigcup_{\eta_i^{(t)}\in g(L_i(p))}\left\{J^{-1}\left(\frac{1}{\sum_{k=1}^{m}u_k}J\left(1-J^{-1}\left(\frac{1}{C_n^m}\sum_{1\le i_1<\cdots<i_m\le n}J\left(1-J^{-1}\left(\sum_{j=1}^{m}u_jJ\left(1-J^{-1}\left(nw_{i_j}J\left(1-\eta_{i_j}^{(t)}\right)\right)\right)\right)\right)\right)\right)\right)\left(\prod_{1\le i_1<\cdots<i_m\le n}\prod_{j=1}^{m}p_{i_j}^{(t)}\right)\right\}\right).$$

    Proof. In line with the operational laws, we have

    $$nw_{i_j}L(p)_{i_j}=g^{-1}\left(\bigcup_{\eta_{i_j}^{(t)}\in g(L(p)_{i_j})}\left\{\left(1-J^{-1}\left(nw_{i_j}J\left(1-\eta_{i_j}^{(t)}\right)\right)\right)\left(p_{i_j}^{(t)}\right)\right\}\right),$$

    $$\left(nw_{i_j}L(p)_{i_j}\right)^{u_j}=g^{-1}\left(\bigcup_{\eta_{i_j}^{(t)}\in g(L(p)_{i_j})}\left\{J^{-1}\left(u_jJ\left(1-J^{-1}\left(nw_{i_j}J\left(1-\eta_{i_j}^{(t)}\right)\right)\right)\right)\left(p_{i_j}^{(t)}\right)\right\}\right).$$

    Therefore,

    $$\bigotimes_{j=1}^{m}\left(nw_{i_j}L(p)_{i_j}\right)^{u_j}=g^{-1}\left(\bigcup_{\eta_{i_j}^{(t)}\in g(L(p)_{i_j})}\left\{J^{-1}\left(\sum_{j=1}^{m}u_jJ\left(1-J^{-1}\left(nw_{i_j}J\left(1-\eta_{i_j}^{(t)}\right)\right)\right)\right)\left(\prod_{j=1}^{m}p_{i_j}^{(t)}\right)\right\}\right),$$

    and so

    $$\frac{\bigoplus_{1\le i_1<\cdots<i_m\le n}\left(\bigotimes_{j=1}^{m}\left(nw_{i_j}L(p)_{i_j}\right)^{u_j}\right)}{C_n^m}=g^{-1}\left(\bigcup_{\eta_{i_j}^{(t)}\in g(L(p)_{i_j})}\left\{\left(1-J^{-1}\left(\frac{1}{C_n^m}\sum_{1\le i_1<\cdots<i_m\le n}J\left(1-J^{-1}\left(\sum_{j=1}^{m}u_jJ\left(1-J^{-1}\left(nw_{i_j}J\left(1-\eta_{i_j}^{(t)}\right)\right)\right)\right)\right)\right)\right)\left(\prod_{1\le i_1<\cdots<i_m\le n}\prod_{j=1}^{m}p_{i_j}^{(t)}\right)\right\}\right).$$

    Hence,

    $$\left(\frac{\bigoplus_{1\le i_1<\cdots<i_m\le n}\left(\bigotimes_{j=1}^{m}\left(nw_{i_j}L(p)_{i_j}\right)^{u_j}\right)}{C_n^m}\right)^{\frac{1}{u_1+u_2+\cdots+u_m}}=g^{-1}\left(\bigcup_{\eta_{i_j}^{(t)}\in g(L(p)_{i_j})}\left\{J^{-1}\left(\frac{1}{\sum_{k=1}^{m}u_k}J\left(1-J^{-1}\left(\frac{1}{C_n^m}\sum_{1\le i_1<\cdots<i_m\le n}J\left(1-J^{-1}\left(\sum_{j=1}^{m}u_jJ\left(1-J^{-1}\left(nw_{i_j}J\left(1-\eta_{i_j}^{(t)}\right)\right)\right)\right)\right)\right)\right)\right)\left(\prod_{1\le i_1<\cdots<i_m\le n}\prod_{j=1}^{m}p_{i_j}^{(t)}\right)\right\}\right).$$

    This completes the proof.
    Proved.

    Theorem 4.2. Let $L_1(p),\ldots,L_n(p)$ be a group of PLTSs. If $\left(L_1'(p),\ldots,L_n'(p)\right)$ is a permutation of $\left(L_1(p),\ldots,L_n(p)\right)$, then

    $$\mathrm{WPLGMSM}^{(m,u_1,\ldots,u_m)}\left(L_1'(p),\ldots,L_n'(p)\right)=\mathrm{WPLGMSM}^{(m,u_1,\ldots,u_m)}\left(L_1(p),\ldots,L_n(p)\right). \quad (4.2)$$

    Proof. The proof of this theorem is similar to Property 3 in [24], so the details are omitted.

    In this section, some special WPLGMSM operators will be investigated when the parameters take different values and the generator takes different functions.

    (a) When $m=1$, the WPLGMSM operator based on ATNs and ATCs reduces to

    $$\mathrm{WPLMSM}\left(L_1(p),\ldots,L_n(p)\right)=g^{-1}\left(\bigcup_{\eta_i^{(t)}\in g(L_i(p))}\left\{J^{-1}\left(\frac{1}{u_1}J\left(1-J^{-1}\left(\frac{1}{n}\sum_{1\le i\le n}J\left(1-J^{-1}\left(u_1J\left(1-J^{-1}\left(nw_iJ\left(1-\eta_i^{(t)}\right)\right)\right)\right)\right)\right)\right)\right)\left(\prod_{1\le i\le n}p_i^{(t)}\right)\right\}\right).$$

    (b) When $m=2$, the WPLGMSM operator based on ATNs and ATCs reduces to

    $$\mathrm{WPLGMSM}^{(2,u_1,u_2)}\left(L_1(p),\ldots,L_n(p)\right)=g^{-1}\left(\bigcup_{\eta_i^{(t)}\in g(L_i(p))}\left\{J^{-1}\left(\frac{1}{u_1+u_2}J\left(1-J^{-1}\left(\frac{1}{n(n-1)}\sum_{1\le i_1<i_2\le n}J\left(1-J^{-1}\left(\sum_{j=1}^{2}u_jJ\left(1-J^{-1}\left(nw_{i_j}J\left(1-\eta_{i_j}^{(t)}\right)\right)\right)\right)\right)\right)\right)\right)\left(\prod_{1\le i_1<i_2\le n}\prod_{j=1}^{2}p_{i_j}^{(t)}\right)\right\}\right).$$

    (c) When $u_1=u_2=\cdots=u_m=1$, the WPLGMSM operator based on ATNs and ATCs reduces to

    $$\mathrm{WPLGMSM}^{(m,1,\ldots,1)}\left(L_1(p),\ldots,L_n(p)\right)=g^{-1}\left(\bigcup_{\eta_{i_j}^{(t)}\in g(L(p)_{i_j})}\left\{J^{-1}\left(\frac{1}{m}J\left(1-J^{-1}\left(\frac{1}{C_n^m}\sum_{1\le i_1<\cdots<i_m\le n}J\left(1-J^{-1}\left(\sum_{j=1}^{m}J\left(1-J^{-1}\left(nw_{i_j}J\left(1-\eta_{i_j}^{(t)}\right)\right)\right)\right)\right)\right)\right)\right)\left(\prod_{1\le i_1<\cdots<i_m\le n}\prod_{j=1}^{m}p_{i_j}^{(t)}\right)\right\}\right).$$

    In what follows, the special situations of the WPLGMSM based on ATNs and ATCs with particular generators will be discussed.

    (1) If $J(x)=-\ln x$, then $J^{-1}(x)=e^{-x}$, and the weighted probabilistic linguistic Archimedean algebraic GMSM (WPLAAGMSM) operator is obtained as follows:

    $$\mathrm{WPLAAGMSM}^{(m,u_1,\ldots,u_m)}\left(L_1(p),\ldots,L_n(p)\right)=g^{-1}\left(\bigcup_{\eta_{i_j}^{(t)}\in g(L(p)_{i_j})}\left\{\left(1-\prod_{1\le i_1<\cdots<i_m\le n}\left(1-\prod_{j=1}^{m}\left(1-\left(1-\eta_{i_j}^{(t)}\right)^{nw_{i_j}}\right)^{u_j}\right)^{\frac{1}{C_n^m}}\right)^{\frac{1}{u_1+u_2+\cdots+u_m}}\left(\prod_{1\le i_1<\cdots<i_m\le n}\prod_{j=1}^{m}p_{i_j}^{(t)}\right)\right\}\right). \quad (4.3)$$

    (2) If $J(x)=\ln\frac{2-x}{x}$, then $J^{-1}(x)=\frac{2}{e^{x}+1}$, and the weighted probabilistic linguistic Archimedean Einstein GMSM (WPLAEGMSM) operator is obtained as follows:

    $$\mathrm{WPLAEGMSM}^{(m,u_1,\ldots,u_m)}\left(L_1(p),\ldots,L_n(p)\right)=g^{-1}\left(\bigcup_{\eta_{i_j}^{(t)}\in g(L(p)_{i_j})}\left\{\frac{2\left(A^{\frac{1}{C_n^m}}-1\right)^{\frac{1}{u_1+u_2+\cdots+u_m}}}{\left(A^{\frac{1}{C_n^m}}-1\right)^{\frac{1}{u_1+u_2+\cdots+u_m}}+\left(A^{\frac{1}{C_n^m}}+1\right)^{\frac{1}{u_1+u_2+\cdots+u_m}}}\left(\prod_{1\le i_1<\cdots<i_m\le n}\prod_{j=1}^{m}p_{i_j}^{(t)}\right)\right\}\right), \quad (4.4)$$

    where

    $$A=\prod_{1\le i_1<\cdots<i_m\le n}\frac{\prod_{j=1}^{m}\left(\frac{\left(2-\eta_{i_j}^{(t)}\right)^{nw_{i_j}}+\left(\eta_{i_j}^{(t)}\right)^{nw_{i_j}}}{\left(2-\eta_{i_j}^{(t)}\right)^{nw_{i_j}}-\left(\eta_{i_j}^{(t)}\right)^{nw_{i_j}}}\right)^{u_j}+1}{\prod_{j=1}^{m}\left(\frac{\left(2-\eta_{i_j}^{(t)}\right)^{nw_{i_j}}+\left(\eta_{i_j}^{(t)}\right)^{nw_{i_j}}}{\left(2-\eta_{i_j}^{(t)}\right)^{nw_{i_j}}-\left(\eta_{i_j}^{(t)}\right)^{nw_{i_j}}}\right)^{u_j}-1}.$$

    (3) If $J(x)=\ln\frac{\varepsilon+(1-\varepsilon)x}{x}$ $(\varepsilon>0)$, then $J^{-1}(x)=\frac{\varepsilon}{e^{x}+\varepsilon-1}$, and the weighted probabilistic linguistic Archimedean Hamacher GMSM (WPLAHGMSM) operator is obtained as follows:

    $$\mathrm{WPLAHGMSM}^{(m,u_1,\ldots,u_m)}\left(L_1(p),\ldots,L_n(p)\right)=g^{-1}\left(\bigcup_{\eta_{i_j}^{(t)}\in g(L(p)_{i_j})}\left\{\frac{\varepsilon\left(C_{i_j}\right)^{\frac{1}{u_1+u_2+\cdots+u_m}}}{\left(\varepsilon+(1-\varepsilon)C_{i_j}\right)^{\frac{1}{u_1+u_2+\cdots+u_m}}-(1-\varepsilon)\left(C_{i_j}\right)^{\frac{1}{u_1+u_2+\cdots+u_m}}}\left(\prod_{1\le i_1<\cdots<i_m\le n}\prod_{j=1}^{m}p_{i_j}^{(t)}\right)\right\}\right), \quad (4.5)$$

    where

    $$C_{i_j}=\frac{\left(\prod_{1\le i_1<\cdots<i_m\le n}\frac{\varepsilon+(1-\varepsilon)B_{i_j}}{B_{i_j}}\right)^{\frac{1}{C_n^m}}-1}{\left(\prod_{1\le i_1<\cdots<i_m\le n}\frac{\varepsilon+(1-\varepsilon)B_{i_j}}{B_{i_j}}\right)^{\frac{1}{C_n^m}}-(1-\varepsilon)},\qquad B_{i_j}^{(t)}=\frac{\left(\varepsilon+(1-\varepsilon)A_{i_j}\right)^{u_j}-A_{i_j}^{u_j}}{\left(\varepsilon+(1-\varepsilon)A_{i_j}\right)^{u_j}-(1-\varepsilon)A_{i_j}^{u_j}},\qquad A_{i_j}^{(t)}=\frac{\left(\varepsilon+(1-\varepsilon)\left(1-\eta_{i_j}^{(t)}\right)\right)^{nw_{i_j}}-\left(\eta_{i_j}^{(t)}\right)^{nw_{i_j}}}{\left(\varepsilon+(1-\varepsilon)\left(1-\eta_{i_j}^{(t)}\right)\right)^{nw_{i_j}}-(1-\varepsilon)\left(\eta_{i_j}^{(t)}\right)^{nw_{i_j}}}.$$

    In the following, we give some examples to test the different aggregation operators and discuss some special situations for diverse parameters.

    Example 4.1. Let $L_1=\{s_{-1}(0.3),s_1(0.7)\}$, $L_2=\{s_1(1)\}$ and $L_3=\{s_0(0.25),s_2(0.75)\}$ be three PLTSs, and suppose $w=(0.35,0.25,0.4)$ is the weight vector of $L_1,L_2,L_3$. Set $\tau=3$; then, with the function $g$, $L_1,L_2,L_3$ are converted into $g(L_1(p))=\{0.33(0.3),0.67(0.7)\}$, $g(L_2(p))=\{0.67(1)\}$ and $g(L_3(p))=\{0.5(0.25),0.83(0.75)\}$, respectively.

    We then use the WPLGMSM operator based on ATNs and ATCs to fuse $L_1,L_2,L_3$. Set $m=2$, $u_1=1$, $u_2=2$. With different additive generators, the aggregated results are obtained as follows:

    (1) If $J(x)=-\ln x$:

    $$\mathrm{WPLGMSM}^{(2,1,2)}\left(L_1(p),L_2(p),L_3(p)\right)=\left(\frac{\bigoplus_{1\le i_1<i_2\le 3}\left(\bigotimes_{j=1}^{2}\left((3w_{i_j})L(p)_{i_j}\right)^{u_j}\right)}{C_3^2}\right)^{\frac{1}{1+2}}=\{s_{0.068}(0.006), s_{0.329}(0.013), s_{0.436}(0.017), s_{1.024}(0.039), s_{0.329}(0.013), s_{0.546}(0.031), s_{0.637}(0.039), s_{1.153}(0.092), s_{0.708}(0.017), s_{0.873}(0.039), s_{0.943}(0.051), s_{1.362}(0.118), s_{0.873}(0.039), s_{1.108}(0.092), s_{1.081}(0.118), s_{1.460}(0.276)\}$$

    $$=\{s_{0.068}(0.006), s_{0.329}(0.026), s_{0.436}(0.017), s_{1.024}(0.039), s_{0.546}(0.031), s_{0.637}(0.039), s_{1.153}(0.092), s_{0.708}(0.017), s_{0.873}(0.078), s_{0.943}(0.051), s_{1.362}(0.118), s_{1.108}(0.092), s_{1.081}(0.118), s_{1.460}(0.276)\}.$$

    (2) If $J(x)=\ln\frac{2-x}{x}$:

    $$\mathrm{WPLAEGMSM}^{(2,1,2)}\left(L_1(p),L_2(p),L_3(p)\right)=\{s_{0.226}(0.006), s_{0.469}(0.013), s_{0.638}(0.017), s_{1.136}(0.039), s_{1.008}(0.013), s_{1.136}(0.031), s_{1.233}(0.039), s_{1.558}(0.092), s_{0.437}(0.017), s_{0.641}(0.039), s_{0.787}(0.051), s_{1.235}(0.118), s_{1.118}(0.039), s_{1.235}(0.092), s_{1.325}(0.118), s_{1.628}(0.276)\}.$$

    (3) If $J(x)=\ln\frac{\varepsilon+(1-\varepsilon)x}{x}$ $(\varepsilon=2)$:

    $$\mathrm{WPLAHGMSM}^{(2,1,2)}\left(L_1(p),L_2(p),L_3(p)\right)=\{s_{0.857}(0.006), s_{0.782}(0.013), s_{0.737}(0.017), s_{0.568}(0.039), s_{0.567}(0.013), s_{0.529}(0.031), s_{0.505}(0.039), s_{0.406}(0.092), s_{0.721}(0.017), s_{0.666}(0.039), s_{0.632}(0.051), s_{0.497}(0.118), s_{0.496}(0.039), s_{0.465}(0.092), s_{0.445}(0.118), s_{0.361}(0.276)\}.$$

    Before giving the decision-making approach, a formal description of an MADM problem with probabilistic linguistic information is given. Let $A=\{A_1,\ldots,A_k\}$ be a set of alternatives, $CR=\{CR_1,CR_2,\ldots,CR_l\}$ the set of attributes, and $w=\{w_1,w_2,\ldots,w_l\}$ the weight vector of the attributes $CR_i$ with $w_i\in[0,1]$ and $\sum_{i=1}^{l}w_i=1$. A probabilistic linguistic decision matrix can be expressed as $M=\left(L(p)_{ij}\right)_{k\times l}$, where $L(p)_{ij}=\{L_{ij}^{(t)}(p_{ij}^{(t)})\mid t=1,2,\ldots,\#L(p)_{ij}\}$ is a PLTS expressing the evaluation value of alternative $A_j$ $(j=1,2,\ldots,k)$ under attribute $CR_i$ $(i=1,\ldots,l)$.

    In line with the above description of the MADM problem, the proposed aggregation operators will be adopted to address actual issues and find an ideal alternative. The main procedures are listed as follows:

    Step 1. Standardize the attribute values in the following way:

    If the attribute is a benefit type, then

    $$L(p)_{ij}=\left\{L_{ij}^{(t)}\left(p_{ij}^{(t)}\right)\,\middle|\,t=1,2,\ldots,\#L(p)_{ij}\right\}; \quad (5.1)$$

    if the attribute is a cost type, then

    $$L(p)_{ij}=g^{-1}\left(\bigcup_{\eta_{ij}^{(t)}\in g(L(p)_{ij})}\left\{\left(1-\eta_{ij}^{(t)}\right)\left(p_{ij}^{(t)}\right)\right\}\right). \quad (5.2)$$

    Step 2. Transform all attribute values $L(p)_{ij}$ of each alternative into probabilistic hesitant fuzzy elements $r(p)_{ij}$.

    Step 3. Aggregate all attribute values $r(p)_{ij}$ of each alternative into the comprehensive value $r(p)_{j}$.

    Step 4. Transform $r(p)_{j}$ into the PLTS $L(p)_{j}$.

    Step 5. Calculate the score function and the deviation degree of Aj(j=1,,k) by Eq (2.3) and Eq (2.4).

    Step 6. Rank all alternatives and then choose the desirable one.
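The six steps above can be sketched end-to-end. As a deliberate simplification, Step 3 below fuses per-attribute scores with a plain weighted average instead of the full WPLGMSM machinery, so the sketch only illustrates the control flow of the procedure; all names are hypothetical:

```python
# Simplified sketch of the MADM procedure (Steps 1-6). A PLTS is a list
# of (subscript, probability) pairs; matrix[alt][attr] holds one PLTS.

def score(pltss):
    """Score subscript of a PLTS (Eq 2.3)."""
    tot = sum(p for _, p in pltss)
    return sum(r * p for r, p in pltss) / tot

def rank_alternatives(matrix, weights, cost_attrs=()):
    """Rank alternatives by weighted average of attribute scores."""
    results = {}
    for alt, row in matrix.items():
        vals = []
        for k, pltss in enumerate(row):
            if k in cost_attrs:
                # Step 1 for cost attributes: negating the subscript is
                # equivalent to the 1 - eta reflection of Eq (5.2) under g.
                pltss = [(-r, p) for r, p in pltss]
            vals.append(score(pltss))          # Steps 2-5 collapsed to scores
        results[alt] = sum(w * v for w, v in zip(weights, vals))
    # Step 6: rank by comprehensive value, best first.
    return sorted(results, key=results.get, reverse=True)

matrix = {
    "A1": [[(0, 0.45), (1, 0.55)], [(2, 1.0)]],
    "A2": [[(0, 1.0)],             [(0, 0.25), (2, 0.75)]],
}
print(rank_alternatives(matrix, weights=[0.4, 0.6]))
```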

    This section discusses the decision-making procedure based upon the given WPLGMSM with an experimental case.

    Example 5.1. At present, continuous improvement of education quality occupies an important position in colleges and universities, and the evaluation of teaching quality is the baton for the healthy development of education, as well as an essential part of the education mechanism. Exploring the evaluation index system of teaching quality, building a scientific evaluation model, and forming a reasonable education evaluation system will help improve teaching quality and promote the high-quality development of education. In this study, four main factors are used as teaching quality evaluation indexes: $CR_1$: teaching content, $CR_2$: teaching method, $CR_3$: teaching effect and $CR_4$: teaching attitude. It is assumed that the weights of the four indexes are $w=(0.2,0.3,0.4,0.1)$, and that four teachers ($A_1$-$A_4$) teaching the course unit "Bayesian formula and its application" in Probability Theory and Mathematical Statistics are to be evaluated. Meanwhile, through a questionnaire survey of graduated students and teachers on their teaching experience, respondents were asked to evaluate with the following linguistic variables based on their own experience:

    {s−3 = extremely bad, s−2 = very bad, s−1 = bad, s0 = medium, s1 = good, s2 = very good, s3 = extremely good}.

    After collecting data, relevant decision-making information is obtained as in Table 1.

    Table 1.  Original decision making matrix.
    CR1 CR2 CR3 CR4
    A1 {s0(0.45),s1(0.55)} {s2(1)} {s0(0.3),s2(0.7)} {s0(1)}
    A2 {s0(1)} {s0(0.25),s2(0.75)} {s1(1)} {s0(1)}
    A3 {s0(0.5),s1(0.5)} {s1(1)} {s−1(0.3),s2(0.7)} {s−1(1)}
    A4 {s2(1)} {s0(0.35),s2(0.65)} {s1(1)} {s1(1)}


    The following task is solved by using the procedure in Section 5.1:

    Step 1. Standardize the attribute values $L_{ij}(p)$. Since all criteria are benefit-type, no standardization is necessary.

    Step 2. Transform all attribute values of each alternative into probabilistic hesitant fuzzy elements $r_{ij}(p)$. We set τ=3 and use the function g. The probabilistic linguistic information is transformed into probabilistic hesitant fuzzy elements $r_{ij}(p)$, listed in Table 2.

    Table 2.  Probabilistic hesitant fuzzy element decision information matrix.
    CR1 CR2 CR3 CR4
    g(A1(p)) {1/2(0.45), 2/3(0.55)} {5/6(1)} {1/2(0.3), 5/6(0.7)} {1/2(1)}
    g(A2(p)) {1/2(1)} {1/2(0.25), 5/6(0.75)} {2/3(1)} {1/2(1)}
    g(A3(p)) {1/2(0.5), 2/3(0.5)} {2/3(1)} {1/3(0.3), 5/6(0.7)} {1/3(1)}
    g(A4(p)) {5/6(1)} {1/2(0.35), 5/6(0.65)} {2/3(1)} {2/3(1)}

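    Step 2 can be checked numerically: assuming the linear scale function g(s_r) = (r + τ)/(2τ) with τ = 3, the entries of Table 1 map exactly onto the fractions of Table 2 (e.g., g(s0) = 1/2, g(s1) = 2/3, g(s2) = 5/6). A minimal sketch under that assumption:

```python
from fractions import Fraction

TAU = 3  # S = {s_{-3}, ..., s_3}

def to_phfe(plts, tau=TAU):
    """Map a PLTS [(r, p), ...] to a probabilistic hesitant fuzzy
    element via the assumed scale function g(s_r) = (r + tau)/(2*tau)."""
    return [(Fraction(r + tau, 2 * tau), p) for (r, p) in plts]

# A1 under CR1 in Table 1: {s0(0.45), s1(0.55)} -> {1/2(0.45), 2/3(0.55)}
print(to_phfe([(0, 0.45), (1, 0.55)]))
```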

    Steps 3 and 4. We choose the aggregation operator based on the algebraic generator to fuse the decision information. Owing to the large volume of numbers involved, the intermediate results are not listed here.

    Step 5. Calculate the expected values, which are listed as follows:

    E(A1)=1.1233,E(A2)=1.1145,E(A3)=0.6608,E(A4)=1.4618.

    Step 6. Determine the desirable alternative according to the expected values. From the results of Step 5, we have A4 ≻ A1 ≻ A2 ≻ A3. Therefore, A4 is the most desirable one.
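    For orientation, the sketch below implements the classical generalized Maclaurin symmetric mean on crisp arguments in its commonly used form, $\mathrm{GMSM}^{(n;u_1,\ldots,u_n)}(x_1,\ldots,x_m) = \big(\tfrac{1}{C_m^n}\sum_{1\le i_1<\cdots<i_n\le m}\prod_{j=1}^n x_{i_j}^{u_j}\big)^{1/(u_1+\cdots+u_n)}$. The paper's PLGMSM/WPLGMSM operators replace these products and sums with ATN/ATC-based operations on probabilistic linguistic values, so this is only a structural illustration, not the operator that produced the values above.

```python
from itertools import combinations
from math import comb

def _weighted_product(subset, us):
    """Prod_j x_{i_j}^{u_j} over one ordered n-subset."""
    p = 1.0
    for x, u in zip(subset, us):
        p *= x ** u
    return p

def gmsm(xs, us):
    """Classical generalized Maclaurin symmetric mean (crisp sketch).
    xs: arguments in [0, 1]; us = (u_1, ..., u_n) with n <= len(xs)."""
    m, n = len(xs), len(us)
    total = sum(_weighted_product(s, us) for s in combinations(xs, n))
    return (total / comb(m, n)) ** (1.0 / sum(us))

# with n = 1 and u = (1,), the GMSM reduces to the arithmetic mean
print(gmsm([0.2, 0.4, 0.6], [1]))
```

    Note the idempotency visible in the formula: aggregating m equal arguments returns that argument, one of the MSM-family properties the paper relies on.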

    Meanwhile, we use the other three proposed aggregation operators to fuse the above decision information; the results are listed in Table 3.

    Table 3.  The rankings based on three proposed aggregation operators when u1 = 1, u2 = 2.
    Aggregation operators Expected values Ranking
    WPLAAGMSM E(A1)=1.1233, E(A2)=1.1145, E(A3)=0.6608, E(A4)=1.4618 A4 ≻ A1 ≻ A2 ≻ A3
    WPLAEGMSM E(A1)=1.2210, E(A2)=0.8550, E(A3)=0.8217, E(A4)=1.2546 A4 ≻ A1 ≻ A2 ≻ A3
    WPLAHGMSM E(A1)=0.3405, E(A2)=0.3837, E(A3)=0.3964, E(A4)=0.2681 A4 ≻ A1 ≻ A2 ≻ A3


    Obviously, the ranking results are consistent across the different aggregation operators. Meanwhile, we fed the ranking results of this paper back to some evaluators. Most of them stated that the results are in line with their own selection order, which further demonstrates the rationality and effectiveness of the proposed method in Section 5.1.

    To further justify the validity and robustness of our proposed decision-making method, more comparisons will be carried out in this section.

    B. Fang et al. [40] proposed an improved possibility degree formula to assess education and teaching quality in military academies with a probabilistic linguistic MCDM method; the main concepts are reviewed as follows.

    Definition 6.1. [40] Let $S=\{s_{\upsilon} \mid \upsilon=-\tau,\ldots,-1,0,1,\ldots,\tau\}$ be an LTS. For any two PLTSs $L_1(p)$ and $L_2(p)$, the possibility degree of $L_1(p) \geq L_2(p)$ is defined as follows:

    $P(L_1 \geq L_2) = 0.5 + \sum_{t=1}^{\#L_1}\left(\frac{\tau + r_1^{(t)}}{2\tau} - 0.5\left(\frac{\tau + r_1^{(t)}}{2\tau}\right)^2\right)p_1^{(t)} - \sum_{t=1}^{\#L_2}\left(\frac{\tau + r_2^{(t)}}{2\tau} - 0.5\left(\frac{\tau + r_2^{(t)}}{2\tau}\right)^2\right)p_2^{(t)},$ (6.1)

    in which $r_1^{(t)}$ and $r_2^{(t)}$ are the subscripts of $L_1^{(t)}$ and $L_2^{(t)}$, and $p_1^{(t)}$ and $p_2^{(t)}$ are the corresponding probabilities, respectively.
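    Eq (6.1) can be implemented directly. As a check, the sketch below reproduces the (1,2) entry 0.5382 of the matrix P1 computed later from the Table 1 data (τ = 3, PLTSs encoded as lists of (subscript, probability) pairs; the function names are ours):

```python
TAU = 3  # for the 7-term LTS of Example 5.1

def _f(r, tau):
    """Inner term of Eq (6.1) for one linguistic subscript r."""
    x = (tau + r) / (2 * tau)
    return x - 0.5 * x ** 2

def possibility(l1, l2, tau=TAU):
    """Possibility degree P(L1 >= L2) of Eq (6.1); PLTSs as [(r, p), ...]."""
    s1 = sum(_f(r, tau) * p for r, p in l1)
    s2 = sum(_f(r, tau) * p for r, p in l2)
    return 0.5 + s1 - s2

# A1 vs A2 under CR1 in Table 1: {s0(0.45), s1(0.55)} vs {s0(1)}
p12 = possibility([(0, 0.45), (1, 0.55)], [(0, 1.0)])
print(round(p12, 4))  # 0.5382
```

    As expected from the formula, P(L1 ≥ L2) + P(L2 ≥ L1) = 1 and P(L ≥ L) = 0.5.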

    Step 1. Calculate the possibility degree matrix for each attribute. For each attribute CRj, $w_j$ is the weight of CRj and $\sum_{j=1}^{n} w_j = 1$. If $P_{ik}^{j} = P(L_{ij} \geq L_{kj})$, then the possibility degree matrix is defined as follows:

    $P^j = \begin{pmatrix} P_{11}^{j} & P_{12}^{j} & \cdots & P_{1n}^{j} \\ P_{21}^{j} & P_{22}^{j} & \cdots & P_{2n}^{j} \\ \vdots & \vdots & \ddots & \vdots \\ P_{n1}^{j} & P_{n2}^{j} & \cdots & P_{nn}^{j} \end{pmatrix}.$

    Step 2. Then, by the weighted arithmetic average, the comprehensive possibility degree matrix is calculated by

    $P = \sum_{j=1}^{n} w_j P^j.$

    Step 3. Calculate the ranking results of the alternatives. Let $\delta=(\delta_1,\delta_2,\ldots,\delta_m)^T$ be the ordering vector of the matrix $P$, with $0 \leq \delta_i \leq 1$ and $\sum_{i=1}^{m}\delta_i = 1$. The alternatives can then be ranked by the values of $\delta_i$; the higher the value of $\delta_i$, the better the alternative, where

    $\delta_i = \frac{1}{m}\left(\sum_{k=1}^{m} P_{ik} + 1\right) - 0.5.$

    With this method, the teaching quality in Example 5.1 can be ranked as follows.

    First, calculate the possibility degree matrices of Step 1:

    $P^1 = \begin{pmatrix} 0.5 & 0.5382 & 0.5035 & 0.4271 \\ 0.4618 & 0.5 & 0.4653 & 0.3889 \\ 0.4965 & 0.5347 & 0.5 & 0.4236 \\ 0.5729 & 0.6111 & 0.5764 & 0.5 \end{pmatrix}, \quad P^2 = \begin{pmatrix} 0.5 & 0.5278 & 0.5417 & 0.5389 \\ 0.4722 & 0.5 & 0.5139 & 0.5111 \\ 0.4583 & 0.4861 & 0.5 & 0.4972 \\ 0.4611 & 0.4889 & 0.5028 & 0.5 \end{pmatrix},$
    $P^3 = \begin{pmatrix} 0.5 & 0.5083 & 0.5292 & 0.5083 \\ 0.4917 & 0.5 & 0.5208 & 0.5 \\ 0.4708 & 0.4792 & 0.5 & 0.4792 \\ 0.4917 & 0.5 & 0.5208 & 0.5 \end{pmatrix}, \quad P^4 = \begin{pmatrix} 0.5 & 0.5 & 0.5972 & 0.4306 \\ 0.5 & 0.5 & 0.5972 & 0.4306 \\ 0.4028 & 0.4028 & 0.5 & 0.3333 \\ 0.5694 & 0.5694 & 0.6667 & 0.5 \end{pmatrix}.$

    Second, calculate the comprehensive possibility degree matrix with weights w=(0.2,0.3,0.4,0.1) as follows:

    $P = \sum_{i=1}^{4} w_i P^i = \begin{pmatrix} 0.5 & 0.5193 & 0.5346 & 0.4935 \\ 0.4807 & 0.5 & 0.5153 & 0.4742 \\ 0.4654 & 0.4847 & 0.5 & 0.4589 \\ 0.5065 & 0.5258 & 0.5411 & 0.5 \end{pmatrix}.$

    Finally, calculate the ranking values:

    δ1=0.2618,δ2=0.2425,δ3=0.2273,δ4=0.2684.

    According to the above results, the ranking order is

    A4 ≻ A1 ≻ A2 ≻ A3,

    which is the same as the result of our proposed method.
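    The Step 2 and Step 3 computations above can be verified directly: feeding the published comprehensive matrix P into δ_i = (1/m)(Σ_k P_ik + 1) − 0.5 recovers the ordering vector δ ≈ (0.2618, 0.2425, 0.2273, 0.2684), hence A4 ≻ A1 ≻ A2 ≻ A3. A minimal sketch:

```python
def ordering_vector(P):
    """Ordering vector of a possibility degree matrix:
    delta_i = (sum_k P_ik + 1) / m - 0.5."""
    m = len(P)
    return [(sum(row) + 1) / m - 0.5 for row in P]

# comprehensive possibility degree matrix P from Example 5.1
P = [
    [0.5,    0.5193, 0.5346, 0.4935],
    [0.4807, 0.5,    0.5153, 0.4742],
    [0.4654, 0.4847, 0.5,    0.4589],
    [0.5065, 0.5258, 0.5411, 0.5],
]
print(ordering_vector(P))  # approximately [0.2618, 0.2425, 0.2273, 0.2684]
```

    Since every pair satisfies P_ik + P_ki = 1, the components of δ sum to 1, as required by Step 3.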

    We next take the example of Zhao [39], which evaluates the intelligent transportation systems of four cities (Λ1: Nanchang, Λ2: Ganzhou, Λ3: Jiujiang, Λ4: Jingdezhen). Four factors are taken into consideration (CR1: Traffic data collection, CR2: Convenient transportation, CR3: Accident emergency handling capacity, CR4: Traffic signal equipment) to improve the rationality of the evaluation. We set w=(0.2,0.35,0.25,0.2)T as the weights of CRi (i=1,2,3,4), and the LTS is:

    S = {s−2 = bad, s−1 = slightly bad, s0 = medium, s1 = slightly good, s2 = good},

    The data of the decision-making matrix, without considering hesitance, are listed in the following Table 4.

    Table 4.  Decision matrix.
    CR1 CR2 CR3 CR4
    Λ1 {s−1(0.4),s0(0.6)} {s0(0.6)} {s−2(0.5),s−1(0.4)} {s−1(0.5)}
    Λ2 {s−1(0.5),s0(0.3)} {s−1(0.4)} {s−1(0.7)} {s−2(0.7)}
    Λ3 {s−2(0.4),s0(0.2)} {s−1(0.4)} {s−2(0.3),s−1(0.2)} {s0(0.6)}
    Λ4 {s−2(0.7)} {s−1(0.7)} {s−1(0.4)} {s−2(0.5),s−1(0.4)}


    After normalizing the probabilities and using the function g, the probabilistic hesitant fuzzy matrix is obtained and listed in Table 5.

    Table 5.  Normalized decision matrix.
    CR1 CR2 CR3 CR4
    g(Λ1) {1/4(0.4), 1/2(0.6)} {1/2(1)} {0(0.56), 1/4(0.44)} {1/4(1)}
    g(Λ2) {1/4(0.625), 1/2(0.375)} {1/4(1)} {1/4(1)} {0(1)}
    g(Λ3) {0(0.67), 1/2(0.33)} {1/4(1)} {0(0.6), 1/4(0.4)} {1/2(1)}
    g(Λ4) {0(1)} {1/4(1)} {1/4(1)} {0(0.56), 1/4(0.44)}


    Based on the generator J(x) = −ln x, the following expected values and ranking order are obtained:

    E(Λ1)=0.9548,E(Λ2)=1.0790,E(Λ3)=0.6757,E(Λ4)=1.9367.

    This gives Λ3 ≻ Λ1 ≻ Λ2 ≻ Λ4. Hence, the optimal intelligent transportation system is Λ3 (Jiujiang), which agrees with the answer in Zhao [39].

    Peide Liu et al. [17] proposed probabilistic linguistic Archimedean Muirhead mean operators to rank alternatives. For a profit-maximization problem, we compare our methods with the Archimedean Muirhead mean operator methods.

    For four potential projects Λi (i=1,2,3,4), directors need to choose the most desirable one through four attributes (CR1: Financial perspective, CR2: Customer satisfaction, CR3: Internal business process, CR4: Learning and growth) to improve the rationality of the evaluation. Suppose the attribute weights are w=(0.2,0.3,0.3,0.2), and the LTS is:

    S = {s−2 = low, s−1 = little low, s0 = medium, s1 = little high, s2 = high}.

    The original decision-making matrix with PLTSs is listed in Table 6.

    Table 6.  Original decision matrix.
    CR1 CR2 CR3 CR4
    Λ1 {s0(1)} {s−1(0.6)} {s1(0.4),s2(0.4)} {s1(0.8)}
    Λ2 {s0(0.8)} {s−1(0.8)} {s−2(0.6),s−1(0.2)} {s0(0.6)}
    Λ3 {s−1(0.4)} {s1(0.6)} {s0(0.8),s1(0.2)} {s−2(0.5)}
    Λ4 {s−1(0.8)} {s−1(0.6)} {s1(0.5),s2(0.5)} {s0(1)}


    To keep the calculated results more accurate, we do not normalize the probabilities. Using the function g, we obtain the probabilistic hesitant fuzzy matrix listed in Table 7.

    Table 7.  Probabilistic hesitant fuzzy decision matrix.
    CR1 CR2 CR3 CR4
    g(Λ1) {1/2(1)} {1/4(0.6)} {3/4(0.4), 1(0.4)} {3/4(0.8)}
    g(Λ2) {1/2(0.8)} {1/4(0.8)} {0(0.6), 1/4(0.2)} {1/2(0.6)}
    g(Λ3) {1/4(0.4)} {3/4(0.6)} {1/2(0.8), 3/4(0.2)} {0(0.5)}
    g(Λ4) {1/4(0.8)} {1/4(0.6)} {1(0.5), 3/4(0.5)} {1/2(1)}


    Different decision approaches in [16,17] and our proposed approaches are used to address this decision problem; the results are listed in Table 8.

    Table 8.  The results obtained by different decision approaches.
    Methods Aggregated results Ranking
    PLWA [16] E(Λ1)=1.48, E(Λ2)=0.83, E(Λ3)=0.95, E(Λ4)=1.27 Λ1 ≻ Λ4 ≻ Λ3 ≻ Λ2
    PLWG [16] E(Λ1)=1.59, E(Λ2)=0.61, E(Λ3)=0, E(Λ4)=1.38 Λ1 ≻ Λ4 ≻ Λ2 ≻ Λ3
    HPLAWMM [17] E(Λ1)=3.20, E(Λ2)=1.28, E(Λ3)=2.05, E(Λ4)=3.01 (suppose P=(1,0,0,0) and δ=1) Λ1 ≻ Λ4 ≻ Λ3 ≻ Λ2
    HPLADWMM [17] E(Λ1)=2.08, E(Λ2)=0.33, E(Λ3)=0, E(Λ4)=1.67 (suppose P=(1,0,0,0) and δ=1) Λ1 ≻ Λ4 ≻ Λ2 ≻ Λ3
    Proposed PLAAGMSM E(Λ1)=0.516, E(Λ2)=1.278, E(Λ3)=0.258, E(Λ4)=0.159 (suppose u1=1, u2=2, n=3) Λ1 ≻ Λ4 ≻ Λ3 ≻ Λ2
    Proposed PLAEGMSM E(Λ1)=0.5433, E(Λ2)=0.7559, E(Λ3)=0.3227, E(Λ4)=0.1177 (suppose u1=1, u2=2, n=3) Λ1 ≻ Λ3 ≻ Λ4 ≻ Λ2
    Proposed PLAHGMSM E(Λ1)=0.2653, E(Λ2)=0.3507, E(Λ3)=0.3076, E(Λ4)=0.3087 (suppose u1=1, u2=2, n=3) Λ1 ≻ Λ3 ≻ Λ4 ≻ Λ2


    According to the ranking order, the optimal project is Λ1, which is the same as that obtained by the other extant decision-making approaches. This further shows the reliability and effectiveness of the method.

    Although the proposed decision-making approach integrates the advantages of PLTSs and GMSM operators, it has a limitation: as the number of elements increases, the computational complexity increases correspondingly.

    Classical MSM operators have attracted great attention from many scholars and have been widely used in the field of information fusion, owing to their greatest merit: they can reflect the interrelationship among multiple input arguments. On this basis, Wang generalized the traditional MSM operators and introduced the generalized MSM (GMSM) operators. On the other hand, the PLTS, a new tool for describing uncertain decision information, can better reflect practical features of decision-making problems such as the hesitation of decision-makers and the relative importance of linguistic variables. Combining the merits of PLTSs and GMSM operators, the PLGMSM and WPLGMSM operators based on ATN and ATC are proposed and their properties are investigated. Meanwhile, some special cases are discussed in which the parameters take different values and the generators of the ATN take different functions. Besides, the proposed method is applied to teaching quality evaluation in universities, to select the ideal classroom teaching among four alternatives. Finally, several comparative analyses are carried out to confirm the validity of the decision-making results, which further verifies the reasonableness of our proposed decision-making approach.

    In future studies, we will continue the current work by expanding and applying the present operators in other contexts. Some novel MADM approaches will also be developed to address decision-making problems with probabilistic linguistic information, and the proposed MADM method could be applied to other complicated issues.

    The research was funded by the General Program of Natural Funding of Sichuan Province (No. 2021JY018) and the Scientific Research Project of Neijiang Normal University (2022ZD10, 18TD08, 2021TD04).

    The authors declare no conflict of interest.


