Research article

A strategy for hepatitis diagnosis by using spherical q-linear Diophantine fuzzy Dombi aggregation information and the VIKOR method

  • Received: 16 February 2023 Revised: 15 March 2023 Accepted: 23 March 2023 Published: 19 April 2023
  • MSC : 03E72, 47S40

  • Hepatitis is an infectious disease typified by inflammation in internal organ tissues, and it is caused by infection or inflammation of the liver. Hepatitis is often feared as a fatal illness, especially in developing countries, mostly due to contaminated water, poor sanitation, and risky blood transfusion practices. Although viruses are typically blamed, other potential causes of this kind of liver infection include autoimmune disorders, toxins, medicines, opioids, and alcohol. Viral hepatitis may be diagnosed using a variety of methods, including a physical exam, liver surgery (biopsy), imaging investigations like an ultrasound or CT scan, blood tests, a viral serology panel, a DNA test, and viral antibody testing. Our study proposes a new decision-support system for hepatitis diagnosis based on spherical q-linear Diophantine fuzzy sets (Sq-LDFS). Sq-LDFS form the generalized structure of all existing notions of fuzzy sets. Furthermore, a list of novel Einstein aggregation operators is developed under Sq-LDF information. Also, an improved VIKOR method is presented to address the uncertainty in analyzing the viral hepatitis categories demonstration. Interesting and useful properties of the proposed operators are given. The core of this research is the proposed algorithm based on the proposed Einstein aggregation operators and improved VIKOR approach to address uncertain information in decision support problems. Finally, a hepatitis diagnosis case study is examined to show how the suggested approach works in practice. Additionally, a comparison is provided to demonstrate the superiority and efficacy of the suggested decision technique.

    Citation: Huzaira Razzaque, Shahzaib Ashraf, Wajdi Kallel, Muhammad Naeem, Muhammad Sohail. A strategy for hepatitis diagnosis by using spherical q-linear Diophantine fuzzy Dombi aggregation information and the VIKOR method[J]. AIMS Mathematics, 2023, 8(6): 14362-14398. doi: 10.3934/math.2023735




    The problem of recovering low-rank and sparse matrices or tensors from small sets of linear measurements arises in many areas, such as foreground detection in video surveillance [1], hyperspectral compressive sensing [2] and image denoising [3]. Mathematically, the optimization model of matrix recovery is described as follows:

    $$\min_{A,E}\ \|A\|_*+\lambda\|E\|_1 \quad \text{s.t.}\quad D=P_Q(A+E), \tag{1.1}$$

    where $\|A\|_*=\sum_{k=1}^{r}\sigma_k(A)$, $\sigma_k(A)$ denotes the $k$th largest singular value of $A\in\mathbb{R}^{n_1\times n_2}$ of rank $r$, $\|E\|_1$ denotes the sum of the absolute values of the matrix entries, and $\lambda$ is a positive weighting parameter. $Q\subseteq\mathbb{R}^{n_1\times n_2}$ is a linear subspace, and $P_Q$ denotes the projection onto $Q$. Since the matrix recovery problem is closely connected to the robust principal component analysis (RPCA) problem, it can be formulated in the same way as RPCA. Many theoretical results and algorithmic methods for recovering a low-rank and sparse matrix or tensor have been obtained; see, for example, [1,2,4,5,6,7,8,9,10,11,12] and the references therein.
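    For concreteness, the quantities appearing in (1.1) are easy to compute numerically. The following sketch is illustrative only; it uses numpy, with a boolean mask `Omega` playing the role of the sampling subspace $Q$ (all variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 5))  # low-rank part
E = np.zeros((5, 5))
E[1, 2] = 4.0                                                  # sparse corruption

nuclear_norm = np.linalg.norm(A, 'nuc')   # ||A||_* = sum of singular values
l1_norm = np.abs(E).sum()                 # ||E||_1 = sum of absolute entries

Omega = rng.random((5, 5)) < 0.6          # random set of observed indices
P_Omega_D = np.where(Omega, A + E, 0.0)   # P_Omega keeps only observed entries
```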

    Wright et al. [6], Li [13], and Chen and Xu [14] studied the robust matrix completion problem, which we call matrix compressive recovery; namely, $P_Q=P_\Omega$, where $\Omega$ is a random subset of indices for the known entries. Then (1.1) is formulated as follows:

    $$\min_{A,E}\ \|A\|_*+\lambda\|E\|_1 \quad \text{s.t.}\quad D=P_\Omega(A+E). \tag{1.2}$$

    In [1], Candès et al. proved that both $A$ and $E$ can be recovered by solving (1.2) with high probability. Meanwhile, an algorithm and some applications in the area of video surveillance were discussed in detail. Li [13] also gave some new theorems and models for solving (1.2). Then Chen and Xu [14] treated (1.2) via the augmented Lagrange multiplier (ALM) method and studied its application in image restoration. Meng et al. proposed the following problem in [3]:

    $$\min_{A,E}\ \|A\|_*+\lambda\|P_\Omega(E)\|_1 \quad \text{s.t.}\quad A+E=D. \tag{1.3}$$

    They also used the ALM algorithm for solving (1.3), but they did not discuss its convergence. Li et al. [15] proved that the ALM algorithm is convergent for problem (1.3). Further, they showed that (1.3) is equivalent to (1.2). Chen et al. [16] provided a new unified performance guarantee under which an exact recovery can be obtained by minimizing the nuclear norm plus the $\ell_1$ norm.

    The robust formulation was further extended to deal with Gaussian noise [17], leading to the following convex optimization problem (convex robust matrix completion):

    $$\min_{A,E_1,E_2}\ \lambda\|A\|_*+\gamma\|E_1\|_1+\tfrac{1}{2}\|E_2\|_F^2 \quad \text{s.t.}\quad D=P_\Omega(A+E_1+E_2). \tag{1.4}$$

    He et al. [18,19] developed a robust version of the Grassmannian rank-one update subspace estimation (GROUSE) [20] algorithm, named the Grassmannian robust adaptive subspace tracking algorithm (GRASTA), which aims at solving the problem of robust subspace tracking. Their algorithm can be cast to solve problems formulated as

    $$\min\ \|P_\Omega(E)\|_1 \quad \text{s.t.}\quad P_\Omega(UV+E)=P_\Omega(M),\ U\in\mathrm{Gr}(m,r),\ V\in\mathbb{R}^{r\times n}, \tag{1.5}$$

    where $\mathrm{Gr}(m,r)$ is the Grassmann manifold. The advantage of their algorithm is that it is designed to tackle the problem of online subspace estimation from incomplete data; hence it can also be cast to solve online low-rank matrix completion, where one column of the matrix $M$ is observed at a time.

    In 2018, Chiang et al. [21] proposed a general model,

    $$\min_{A,E}\ \|A\|_*+\lambda\|E\|_1 \quad \text{s.t.}\quad x_i^TAy_j+E_{ij}=M_{ij},\ (i,j)\in\Omega, \tag{1.6}$$

    which exploits side information to better learn low-rank matrices from missing and corrupted observations, and they showed that the proposed model can be applied to several popular scenarios such as matrix completion and RPCA.

    On the other hand, for a given low rank $r$ and sparsity level $\|S\|_0$, some non-convex models and algorithms have been proposed.

    In 2014, a non-convex algorithm based on alternating projections, namely AltProj, was presented in [22]; it solves the following problem:

    $$\min_{L,S}\ \|D-L-S\|_F \quad \text{s.t.}\quad \mathrm{rank}(L)\le r,\ \|S\|_0\le|\Omega|, \tag{1.7}$$

    where $r$ is the rank of the underlying low-rank matrix, $\Omega$ denotes the support set of the underlying sparse matrix, and $|\Omega|$ is the cardinality of $\Omega$. AltProj iteratively updates $L$ by projecting $D-S$ onto the space of rank-$r$ matrices, which can be done via the singular value decomposition followed by truncating out small singular values, and then updates $S$ by projecting $D-L$ onto the space of sparse matrices, which can be done by the hard thresholding operator. In 2016, Gu et al. [23] factorized $L$ into the product of two matrices, that is, $L=UV$, and performed alternating minimization over both matrices. Yi et al. [24] applied a similar factorization and an alternating descent algorithm. Then a method based on manifold optimization, which theoretically reduces the dependence on the condition number of the underlying low-rank matrix, was proposed by Zhang and Yang [25]. In 2019, an accelerated AltProj was proposed by Cai et al. [26], and empirical performance evaluations showed the advantage of the accelerated AltProj over other state-of-the-art algorithms for RPCA.
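    The two projections AltProj alternates between can be sketched in numpy as follows. This is a simplified illustration of the idea only, not the full algorithm with its adaptive thresholds; all names are ours:

```python
import numpy as np

def project_rank_r(M, r):
    """Project M onto the set of rank-r matrices via a truncated SVD."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

def hard_threshold(M, k):
    """Keep the k largest-magnitude entries of M, zero out the rest."""
    S = np.zeros_like(M)
    if k > 0:
        idx = np.unravel_index(np.argsort(np.abs(M), axis=None)[-k:], M.shape)
        S[idx] = M[idx]
    return S

# a few AltProj-style sweeps for D = L + S
rng = np.random.default_rng(1)
L_true = rng.standard_normal((20, 4)) @ rng.standard_normal((4, 20))
S_true = hard_threshold(rng.standard_normal((20, 20)), 10)
D = L_true + S_true

S = np.zeros_like(D)
for _ in range(30):
    L = project_rank_r(D - S, 4)   # low-rank step: truncated SVD of D - S
    S = hard_threshold(D - L, 10)  # sparse step: keep the largest magnitudes
```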

    In 2015, Wang et al. [27] proposed a proximal alternating robust subspace minimization method for the practical matrix completion problem, under the prevalent case where the rank of $L$ and the cardinality of $\Omega$ are upper bounded by some known parameters $r$ and $k$, via the following non-convex, non-smooth optimization model:

    $$\min_{L,S}\ \tfrac{1}{2}\|P_\Omega(D-L-S)\|_F^2+\tfrac{\epsilon}{2}\|P_{\bar\Omega}(D)\|_1^2 \quad \text{s.t.}\quad \mathrm{rank}(L)\le r,\ L\in\mathbb{R}^{m\times n},\ \|S\|_0\le k,\ \|S\|_F\le K_S,\ S\in\mathbb{R}^{m\times n}_\Omega, \tag{1.8}$$

    where $\mathbb{R}^{m\times n}_\Omega$ denotes the set of $m\times n$ matrices whose supports are subsets of $\Omega$ and $K_S$ is a finite constant introduced to facilitate the convergence proof.

    In this paper, we develop an alternating directional method, and a variant equipped with a non-monotone search procedure, for solving low-rank and sparse structure matrix completion problems from incomplete data. By introducing a parameter $\alpha$, we consider the following problem:

    $$\min\ \|P_\Omega(D-L-S)\|_F^2 \quad \text{s.t.}\quad \mathrm{rank}(L)\le r,\ \|S\|_0\le\alpha|\Omega|, \tag{1.9}$$

    where $\alpha$ represents the sparsity ratio of the sparse part. Based on the factorization $L=UV$ with $U\in\mathbb{R}^{m\times r}$ and $V\in\mathbb{R}^{r\times n}$, (1.9) can be represented as

    $$\min\ f(U,V,S) \quad \text{s.t.}\quad \|S\|_0\le\alpha|\Omega|, \tag{1.10}$$

    where $f(U,V,S)=\tfrac{1}{2}\|P_\Omega(D-UV-S)\|_F^2$.

    The rest of this paper is organized as follows. In Section 2, we present the proposed algorithms in detail. Convergence analysis is given under mild conditions in Section 3. In Section 4, we compare Algorithm 1 with Algorithm 2 through numerical experiments to illustrate the efficiency of the non-monotone technique, and we also compare the proposed algorithms with a previous algorithm to show their effectiveness. Finally, we conclude the paper in Section 5.

    In this section, we develop two alternating directional methods for solving (1.10), where each of $U$, $V$ and $S$ is solved in the Gauss-Seidel manner while the other two variables are fixed, until convergence. In both algorithms, we apply a single step of the steepest gradient descent method with exact step size to the least squares subproblems with respect to the variable $U$ or $V$. We denote $f(U,V,S)$ by $f_{V,S}(U)$ when $V$ and $S$ are held constant, and by $f_{U,S}(V)$ when $U$ and $S$ are held constant. The gradients with respect to $U$ and $V$ are

    $$\nabla f_{V,S}(U)=-(P_\Omega(D)-P_\Omega(UV+S))V^T$$

    and

    $$\nabla f_{U,S}(V)=-U^T(P_\Omega(D)-P_\Omega(UV+S)),$$

    respectively. The associated exact step sizes, denoted by $t_U$ and $t_V$, can be computed explicitly as

    $$t_U=\frac{\|\nabla f_{V,S}(U)\|_F^2}{\|P_\Omega(\nabla f_{V,S}(U)V)\|_F^2} \quad \text{and} \quad t_V=\frac{\|\nabla f_{U,S}(V)\|_F^2}{\|P_\Omega(U\nabla f_{U,S}(V))\|_F^2}.$$
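    The gradient and exact step size above translate directly into code. The following numpy sketch (our own helper names, with `Omega` a boolean mask) also checks numerically that the exact step never increases the objective:

```python
import numpy as np

def grad_U(D, U, V, S, Omega):
    """Gradient of f(U,V,S) = 0.5*||P_Omega(D - UV - S)||_F^2 with respect to U."""
    R = np.where(Omega, D - U @ V - S, 0.0)   # residual on the observed entries
    return -R @ V.T

def step_U(G, V, Omega):
    """Exact minimizing step size t_U along -G for the quadratic objective."""
    return np.linalg.norm(G, 'fro') ** 2 / \
           np.linalg.norm(np.where(Omega, G @ V, 0.0), 'fro') ** 2

rng = np.random.default_rng(2)
m, n, r = 12, 10, 3
D = rng.standard_normal((m, n))
U, V = rng.standard_normal((m, r)), rng.standard_normal((r, n))
S = np.zeros((m, n))
Omega = rng.random((m, n)) < 0.7

G = grad_U(D, U, V, S, Omega)
t = step_U(G, V, Omega)
f0 = 0.5 * np.linalg.norm(np.where(Omega, D - U @ V - S, 0.0), 'fro') ** 2
f1 = 0.5 * np.linalg.norm(np.where(Omega, D - (U - t * G) @ V - S, 0.0), 'fro') ** 2
# the exact step never increases the objective, so f1 <= f0
```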

    Based on the above discussion, the concrete algorithms can be formally described as Algorithm 1 and Algorithm 2, where Algorithm 1 updates S by minimizing f(U,V,S) with the sparsity level constraint while Algorithm 2 updates it by a non-monotone search approach.

    Algorithm 1. Input: $P_\Omega(D)$, $U_0\in\mathbb{R}^{m\times r}$, $V_0\in\mathbb{R}^{r\times n}$, sparsity parameter $\alpha$, $S_0\in\mathbb{R}^{m\times n}$ with $\|S_0\|_0\le\alpha|\Omega|$;

    Repeat

    1) $\nabla f_{V_k,S_k}(U_k)=-(P_\Omega(D)-P_\Omega(U_kV_k+S_k))V_k^T$

    2) $t_{U_k}=\dfrac{\|\nabla f_{V_k,S_k}(U_k)\|_F^2}{\|P_\Omega(\nabla f_{V_k,S_k}(U_k)V_k)\|_F^2}$

    3) $U_{k+1}=U_k-t_{U_k}\nabla f_{V_k,S_k}(U_k)$

    4) $\nabla f_{U_{k+1},S_k}(V_k)=-U_{k+1}^T(P_\Omega(D)-P_\Omega(U_{k+1}V_k+S_k))$

    5) $t_{V_k}=\dfrac{\|\nabla f_{U_{k+1},S_k}(V_k)\|_F^2}{\|P_\Omega(U_{k+1}\nabla f_{U_{k+1},S_k}(V_k))\|_F^2}$

    6) $V_{k+1}=V_k-t_{V_k}\nabla f_{U_{k+1},S_k}(V_k)$

    7) $S_{k+1}=\arg\min_{\|S\|_0\le\alpha|\Omega|}\|P_\Omega(D-U_{k+1}V_{k+1}-S)\|_F^2$.

    Until the termination criterion is met.
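    For illustration, Algorithm 1 admits a direct, unoptimized transcription into Python/numpy along the following lines. This is a sketch under our own naming and initialization assumptions, not the authors' MATLAB implementation; the top-$k$ hard thresholding realizes step 7:

```python
import numpy as np

def hard_threshold_topk(M, k):
    """Keep the k largest-magnitude entries of M, zero out the rest (step 7)."""
    S = np.zeros_like(M)
    if k > 0:
        idx = np.unravel_index(np.argsort(np.abs(M), axis=None)[-k:], M.shape)
        S[idx] = M[idx]
    return S

def algorithm1(D, Omega, r, alpha, iters=100, tol=1e-4, seed=0):
    """Sketch of Algorithm 1: exact-step alternating gradient steps on U and V,
    followed by the sparse update of S."""
    m, n = D.shape
    rng = np.random.default_rng(seed)
    U = rng.standard_normal((m, r))
    V = rng.standard_normal((r, n))
    S = np.zeros_like(D)
    k = int(alpha * Omega.sum())
    PD2 = np.linalg.norm(np.where(Omega, D, 0.0), 'fro') ** 2
    tiny = np.finfo(float).tiny          # guards 0/0 at an exact solution
    for _ in range(iters):
        R = np.where(Omega, D - U @ V - S, 0.0)
        G_U = -R @ V.T                                                # step 1
        tU = np.linalg.norm(G_U, 'fro') ** 2 / (
            np.linalg.norm(np.where(Omega, G_U @ V, 0.0), 'fro') ** 2 + tiny)
        U = U - tU * G_U                                              # steps 2-3
        R = np.where(Omega, D - U @ V - S, 0.0)
        G_V = -U.T @ R                                                # step 4
        tV = np.linalg.norm(G_V, 'fro') ** 2 / (
            np.linalg.norm(np.where(Omega, U @ G_V, 0.0), 'fro') ** 2 + tiny)
        V = V - tV * G_V                                              # steps 5-6
        S = hard_threshold_topk(np.where(Omega, D - U @ V, 0.0), k)   # step 7
        res = np.linalg.norm(np.where(Omega, D - U @ V - S, 0.0), 'fro') ** 2
        if res / PD2 < tol:                   # relative-residual stopping rule
            break
    return U, V, S
```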

    Algorithm 2. Input: $P_\Omega(D)$, $U_0\in\mathbb{R}^{m\times r}$, $V_0\in\mathbb{R}^{r\times n}$, integer $l\ge1$, sparsity parameter $\alpha$, $S_0\in\mathbb{R}^{m\times n}$ with $\|S_0\|_0\le\alpha|\Omega|$;

    Repeat

    1) $\nabla f_{V_k,S_k}(U_k)=-(P_\Omega(D)-P_\Omega(U_kV_k+S_k))V_k^T$

    2) $t_{U_k}=\dfrac{\|\nabla f_{V_k,S_k}(U_k)\|_F^2}{\|P_\Omega(\nabla f_{V_k,S_k}(U_k)V_k)\|_F^2}$

    3) $U_{k+1}=U_k-t_{U_k}\nabla f_{V_k,S_k}(U_k)$

    4) $\nabla f_{U_{k+1},S_k}(V_k)=-U_{k+1}^T(P_\Omega(D)-P_\Omega(U_{k+1}V_k+S_k))$

    5) $t_{V_k}=\dfrac{\|\nabla f_{U_{k+1},S_k}(V_k)\|_F^2}{\|P_\Omega(U_{k+1}\nabla f_{U_{k+1},S_k}(V_k))\|_F^2}$

    6) $V_{k+1}=V_k-t_{V_k}\nabla f_{U_{k+1},S_k}(V_k)$

    7) Set $\bar S_k=\arg\min_{\|S\|_0\le\alpha|\Omega|}\|P_\Omega(D-U_{k+1}V_{k+1}-S)\|_F^2$ and update

    $$(S_{k+1})_{ij}=\begin{cases}(\bar S_k)_{ij}+\tau_k, & \text{if } (\bar S_k)_{ij}<0,\\ (\bar S_k)_{ij}-\tau_k, & \text{if } (\bar S_k)_{ij}>0,\\ 0, & \text{otherwise},\end{cases}$$

    where $\tau_k$ satisfies the following non-monotone condition:

    $$\|P_\Omega(D-U_{k+1}V_{k+1}-S_{k+1})\|_F^2\le\max\{\|R_{k+\frac12}\|_F^2,\ldots,\|R_{k-l+\frac12}\|_F^2\}.$$

    Until the termination criterion is met.

    Remark 1. In the above algorithms, $\|S\|_0$ is the number of non-zero entries of $S$, and the termination criterion is $\|P_\Omega(D-U_{k+1}V_{k+1}-S_{k+1})\|_F^2/\|P_\Omega(D)\|_F^2<\epsilon$.

    Remark 2. By preserving the entries of $P_\Omega(D-U_{k+1}V_{k+1})$ with the $\alpha|\Omega|$ largest magnitudes and setting the rest to zero, we obtain the sparse matrix $S_{k+1}$ in Algorithm 1 and $\bar S_k$ in Algorithm 2.

    Remark 3. Algorithm 2 differs from Algorithm 1 in that it employs a non-monotone search procedure for updating $S$. The non-monotone search technique relaxes the single-step descent into a multi-step descent, which greatly improves the computational efficiency; many works have verified numerically that non-monotone techniques outperform traditional monotone strategies, and Section 4 confirms this for the algorithms proposed here. Hence, we utilize the non-monotone search technique to improve the performance of the proposed algorithms. The next section gives the convergence analysis of both algorithms.
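    Read this way, the $S$-update of Algorithm 2 (step 7) can be sketched as follows: shrink the hard-thresholded matrix $\bar S_k$ towards zero by $\tau_k$ and accept the largest $\tau_k$ for which the non-monotone condition holds. The backtracking search for $\tau_k$ is our own assumption, since the paper does not specify how $\tau_k$ is chosen:

```python
import numpy as np

def shrink(S_bar, tau):
    """Move every entry of S_bar towards zero by tau (step 7 of Algorithm 2)."""
    return np.sign(S_bar) * np.maximum(np.abs(S_bar) - tau, 0.0)

def nonmonotone_S_update(D, Omega, UV, S_bar, half_residuals, l, tau0=1.0):
    """Backtrack tau until the non-monotone condition holds: the new residual
    must not exceed the largest of the last l half-step residuals R_{k+1/2}."""
    bound = max(half_residuals[-l:])
    tau = tau0
    while tau > 1e-12:
        S = shrink(S_bar, tau)
        res = np.linalg.norm(np.where(Omega, D - UV - S, 0.0), 'fro') ** 2
        if res <= bound:
            return S, tau
        tau *= 0.5
    return S_bar, 0.0   # tau -> 0 recovers the monotone update of Algorithm 1
```

    Since $\bar S_k$ already minimizes the residual over the sparsity constraint, the backtracking always terminates.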

    In this section, we discuss the global convergence of the monotone Algorithm 1 and the non-monotone Algorithm 2. Theoretically, this also verifies that the robust PCA model can reasonably express the mutual encouragement between the low-rank structure and the structured sparsity.

    Before that, we first list some notation and preliminaries for the coming main result. For ease of exposition, let $\{U_k,V_k,S_k\}$ be the sequence generated by Algorithm 1 or Algorithm 2, and set

    $$\begin{aligned} R_k&=P_\Omega(D-U_kV_k-S_k), & R_{k+\frac12}&=P_\Omega(D-U_{k+1}V_{k+1}-S_k),\\ R_{k+1}&=P_\Omega(D-U_{k+1}V_{k+1}-S_{k+1}), & \bar R_k&=P_\Omega(D-U_{k+1}V_k-S_k). \end{aligned} \tag{3.1}$$

    Two propositions are needed before analysing the convergence; they are the conditions of Lemma 3.6 in [26]:

    ● $|H|\le\frac{8}{3}\beta\mu_0\gamma(m+n)\log n$;

    ● the matrix $L$ is $\mu_0$-incoherent.

    Lemma 1. The sequence $\{U_k,V_k,S_k\}$ generated by Algorithm 2 satisfies

    $$f(U_{k+1},V_k,S_k)=f(U_k,V_k,S_k)-\frac12\frac{\|\nabla f_{V_k,S_k}(U_k)\|_F^4}{\|P_\Omega(\nabla f_{V_k,S_k}(U_k)V_k)\|_F^2}, \tag{3.2}$$

    $$f(U_{k+1},V_{k+1},S_k)=f(U_{k+1},V_k,S_k)-\frac12\frac{\|\nabla f_{U_{k+1},S_k}(V_k)\|_F^4}{\|P_\Omega(U_{k+1}\nabla f_{U_{k+1},S_k}(V_k))\|_F^2}. \tag{3.3}$$

    Proof. By direct computation,

    $$\begin{aligned} f(U_{k+1},V_k,S_k)&=\tfrac12\|P_\Omega(D-S_k-U_{k+1}V_k)\|_F^2\\ &=\tfrac12\|P_\Omega(D-S_k-(U_k-t_{U_k}\nabla f_{V_k,S_k}(U_k))V_k)\|_F^2\\ &=\tfrac12\|R_k+t_{U_k}P_\Omega(\nabla f_{V_k,S_k}(U_k)V_k)\|_F^2\\ &=\tfrac12\|R_k\|_F^2+t_{U_k}\langle P_\Omega(\nabla f_{V_k,S_k}(U_k)V_k),R_k\rangle+\tfrac{t_{U_k}^2}{2}\|P_\Omega(\nabla f_{V_k,S_k}(U_k)V_k)\|_F^2\\ &=f(U_k,V_k,S_k)-\frac12\frac{\|\nabla f_{V_k,S_k}(U_k)\|_F^4}{\|P_\Omega(\nabla f_{V_k,S_k}(U_k)V_k)\|_F^2}, \end{aligned}$$

    where the last equality uses $\langle P_\Omega(\nabla f_{V_k,S_k}(U_k)V_k),R_k\rangle=-\|\nabla f_{V_k,S_k}(U_k)\|_F^2$ and the definition of $t_{U_k}$. Similarly, we have

    $$f(U_{k+1},V_{k+1},S_k)=f(U_{k+1},V_k,S_k)-\frac12\frac{\|\nabla f_{U_{k+1},S_k}(V_k)\|_F^4}{\|P_\Omega(U_{k+1}\nabla f_{U_{k+1},S_k}(V_k))\|_F^2}.$$

    Theorem 1. Suppose that $(U_i^TU_i)^{-1}$ and $(V_i^TV_i)^{-1}$ exist and are bounded. Then the sequence $\{U_k,V_k,S_k\}$ generated by Algorithm 2 satisfies

    $$\lim_{k\to\infty}\nabla f_{V_k,S_k}(U_k)=0$$

    and

    $$\lim_{k\to\infty}\nabla f_{U_{k+1},S_k}(V_k)=0.$$

    Proof. According to $S_{k+1}$ given by Algorithm 2, we obtain

    $$f(U_{k+1},V_{k+1},S_{k+1})\le f(U_k,V_k,S_k)+|\Omega|\tau_k^2,$$

    which, together with Lemma 1, implies that

    $$f(U_{k+1},V_{k+1},S_{k+1})\le f(U_0,V_0,S_0)-\frac12\sum_{i=1}^{k}\frac{\|\nabla f_{V_i,S_i}(U_i)\|_F^4}{\|P_\Omega(\nabla f_{V_i,S_i}(U_i)V_i)\|_F^2}-\frac12\sum_{i=1}^{k}\frac{\|\nabla f_{U_{i+1},S_i}(V_i)\|_F^4}{\|P_\Omega(U_{i+1}\nabla f_{U_{i+1},S_i}(V_i))\|_F^2}+|\Omega|\sum_{i=1}^{k}\tau_i^2.$$

    Since

    $$\|P_\Omega(\nabla f_{V_k,S_k}(U_k)V_k)\|_F^2\le\|\nabla f_{V_k,S_k}(U_k)V_k\|_F^2=\|R_kV_k^TV_k\|_F^2=\mathrm{tr}(R_k^TR_k(V_k^TV_k)^2)$$

    and

    $$\|\nabla f_{V_k,S_k}(U_k)\|_F^2=\|R_kV_k^T\|_F^2=\mathrm{tr}(R_k^TR_k(V_k^TV_k)),$$

    we have

    $$\frac12\sum_{i=1}^{k}\frac{\|\nabla f_{V_i,S_i}(U_i)\|_F^4}{\|P_\Omega(\nabla f_{V_i,S_i}(U_i)V_i)\|_F^2}\ge\frac12\sum_{i=1}^{k}\frac{\mathrm{tr}^2(R_i^TR_i(V_i^TV_i))}{\mathrm{tr}(R_i^TR_i(V_i^TV_i)^2)}\ge\frac12\sum_{i=1}^{k}\sigma_{ir}^2\,\|\nabla f_{V_i,S_i}(U_i)\|_F^2,$$

    where $\sigma_{ir}$ is the $r$th largest singular value of the matrix $V_i$.

    Similarly, we have

    $$\frac12\sum_{i=1}^{k}\frac{\|\nabla f_{U_{i+1},S_i}(V_i)\|_F^4}{\|P_\Omega(U_{i+1}\nabla f_{U_{i+1},S_i}(V_i))\|_F^2}\ge\frac12\sum_{i=1}^{k}\tilde\sigma_{ir}^2\,\|\nabla f_{U_{i+1},S_i}(V_i)\|_F^2,$$

    where $\tilde\sigma_{ir}$ is the $r$th largest singular value of the matrix $U_{i+1}$.

    Consequently, the series $\sum_{i=1}^{\infty}\big(\sigma_{ir}^2\|\nabla f_{V_i,S_i}(U_i)\|_F^2+\tilde\sigma_{ir}^2\|\nabla f_{U_{i+1},S_i}(V_i)\|_F^2+|\Omega|\tau_i^2\big)$ is convergent. Thus,

    $$\lim_{i\to\infty}\sigma_{ir}^2\|\nabla f_{V_i,S_i}(U_i)\|_F=0$$

    and

    $$\lim_{i\to\infty}\tilde\sigma_{ir}^2\|\nabla f_{U_{i+1},S_i}(V_i)\|_F=0.$$

    The theorem then follows from the assumptions, since the boundedness of $(U_i^TU_i)^{-1}$ and $(V_i^TV_i)^{-1}$ keeps $\sigma_{ir}$ and $\tilde\sigma_{ir}$ bounded away from zero.

    In addition, by Lemma 3.6 of [28], there exists a scalar $c$ $(0<c<1)$ such that

    $$\|P_\Omega(\nabla f_{V_k,S_k}(U_k)V_k)\|_F\ge(1-c)\|\nabla f_{V_k,S_k}(U_k)V_k\|_F.$$

    Hence,

    $$t_{U_k}\le\frac{\sigma_{k1}^2}{1-c_1},$$

    where $0<c_1<1$ and $\sigma_{k1}$ is the largest singular value of $V_k$. Therefore,

    $$\lim_{k\to\infty}(U_{k+1}-U_k)=0.$$

    Similarly,

    $$\lim_{k\to\infty}(V_{k+1}-V_k)=0.$$

    The proof is completed.

    Theorem 2. Assume that the sequence $\{U_k,V_k\}$ is bounded and that $(U_k^TU_k)^{-1}$ and $(V_k^TV_k)^{-1}$ exist. Let $L_k=U_kV_k$. Then

    $$\lim_{k\to\infty}L_k=L^*$$

    and

    $$\lim_{k\to\infty}S_k=S^*.$$

    Proof. Since the sequence $\{U_k,V_k\}$ is bounded, $\{L_k\}$ is bounded. Then there exists a subsequence $\{L_{k_i}\}$ converging to some $L^*$. It is noted that

    $$\lim_{k\to\infty}(U_{k+1}-U_k)=0\quad\text{and}\quad\lim_{k\to\infty}(V_{k+1}-V_k)=0.$$

    Then

    $$\begin{aligned}\|L_{k+1}-L_k\|_F&=\|U_{k+1}V_{k+1}-U_kV_k\|_F\\&=\|(U_{k+1}-U_k)V_k+U_k(V_{k+1}-V_k)+(U_{k+1}-U_k)(V_{k+1}-V_k)\|_F\\&\le\|U_{k+1}-U_k\|_F\|V_k\|_F+\|U_k\|_F\|V_{k+1}-V_k\|_F+\|U_{k+1}-U_k\|_F\|V_{k+1}-V_k\|_F.\end{aligned}$$

    Thus,

    $$\lim_{k\to\infty}\|L_{k+1}-L_k\|_F=0.$$

    That is to say,

    $$\lim_{k\to\infty}L_k=L^*.$$

    Let $S^*=D-L^*$. Then, for $(i,j)\in\Omega$,

    $$|(S_k)_{ij}-(S^*)_{ij}|\le|(D-L_k)_{ij}-(D-L^*)_{ij}|+\tau_k\le\|L_k-L^*\|_F+\tau_k,$$

    which shows that

    $$\lim_{k\to\infty}S_k=S^*.$$

    The proof is completed.

    In this section, we apply the proposed algorithms to two problems: matrix recovery tasks with incomplete samples, and background modeling in video processing. We compare our algorithms with the AM algorithm in [23] in terms of the number of iteration steps (denoted by IT) and the total CPU time in seconds (denoted by CPU). Moreover, R.error and Error represent the relative deviations of the recovered matrices (or images) from the given matrices (or images), computed by the following formulas:

    $$\mathrm{R.error}=\frac{\|P_\Omega(X_k)-P_\Omega(D)\|_F}{\|P_\Omega(D)\|_F}$$

    and

    $$\mathrm{Error}=\frac{\|A+E-D\|_F}{\|D\|_F}.$$

    In our implementations, all codes were written in MATLAB R2019b and run on a PC with an Intel Xeon E5 2.5 GHz processor and 20 GB of memory.

    In the experiments, the dimension $n$ of a square matrix $X\in\mathbb{R}^{n\times n}$ is denoted by Size(X) in the tables. For each tested $X$, the sampling density Den(X) may be 60%, 70%, 75% or 80%; the rank of the low-rank component Rank(L) may be 50, 60, 90 or 100; and the sparsity ratio of the sparse part Spa(S) may be 5% or 10%. In total, we obtain several instances for each dimension, and in each instance the test matrix is randomly generated. The numerical results are provided in Tables 1–3. Here, the iteration is terminated once the current iterate obeys R.error $<10^{-4}$, or once the criterion is not satisfied after 10000 iteration steps. The symbol "-" indicates that the iteration fails.
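    The randomly generated test instances can be reproduced along the following lines (a plausible construction of our own; the paper does not spell out its generator):

```python
import numpy as np

def make_instance(n, rank, density, spa, seed=0):
    """Random test instance: low-rank L plus sparse S, observed on a random Omega."""
    rng = np.random.default_rng(seed)
    L = rng.standard_normal((n, rank)) @ rng.standard_normal((rank, n))
    S = np.zeros((n, n))
    nnz = int(spa * n * n)                              # sparsity ratio Spa(S)
    idx = rng.choice(n * n, size=nnz, replace=False)
    S.flat[idx] = rng.standard_normal(nnz)
    Omega = rng.random((n, n)) < density                # sampling density Den(X)
    D = np.where(Omega, L + S, 0.0)                     # only P_Omega(D) is kept
    return D, Omega, L, S

D, Omega, L, S = make_instance(200, 10, 0.7, 0.05)
```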

    Table 1.  Computational results for small-size problems.

    Size(X)  Rank(L)  Den(X)(%)  Spar(S)(%)  Algorithm  R.error   CPU(s)  IT
    1000  50  70  5    AM     9.23e-05  27.11   142
    1000  50  70  5    Alg 1  9.96e-05  104.79  308
    1000  50  70  5    Alg 2  8.74e-05  4.06    14
    1000  50  80  10   AM     9.90e-05  29.01   143
    1000  50  80  10   Alg 1  9.99e-05  119.50  319
    1000  50  80  10   Alg 2  9.74e-05  4.23    13
    1000  100  60  5   AM     8.03e-05  39.65   75
    1000  100  60  5   Alg 1  9.94e-05  23.01   68
    1000  100  60  5   Alg 2  9.98e-05  7.75    24
    1000  100  60  10  AM     8.61e-05  56.03   80
    1000  100  60  10  Alg 1  9.49e-05  22.21   67
    1000  100  60  10  Alg 2  9.09e-05  8.21    26
    1000  100  70  5   AM     9.66e-05  61.97   79
    1000  100  70  5   Alg 1  9.99e-05  27.19   76
    1000  100  70  5   Alg 2  9.92e-05  7.17    21
    1000  100  70  10  AM     8.89e-05  71.03   80
    1000  100  70  10  Alg 1  9.92e-05  39.27   105
    1000  100  70  10  Alg 2  9.72e-05  7.97    23
    1000  100  75  10  AM     7.81e-05  73.11   91
    1000  100  75  10  Alg 1  9.02e-05  44.25   111
    1000  100  75  10  Alg 2  9.42e-05  6.99    27
    2000  50  70  5    AM     9.21e-05  90.79   144
    2000  50  70  5    Alg 1  9.99e-05  226.39  154
    2000  50  70  5    Alg 2  8.70e-05  12.50   10
    2000  50  70  10   AM     7.51e-05  98.70   142
    2000  50  70  10   Alg 1  9.99e-05  514.65  339
    2000  50  70  10   Alg 2  6.64e-05  13.97   11
    2000  50  80  5    AM     9.88e-05  109.23  70
    2000  50  80  5    Alg 1  9.78e-05  164.68  106
    2000  50  80  5    Alg 2  9.91e-05  11.50   9
    2000  50  80  10   AM     6.72e-05  86.11   90
    2000  50  80  10   Alg 1  9.99e-05  323.54  205
    2000  50  80  10   Alg 2  7.85e-05  13.88   10
    2000  100  60  5   AM     9.05e-05  101.23  98
    2000  100  60  5   Alg 1  9.99e-05  278.48  190
    2000  100  60  5   Alg 2  8.90e-05  17.65   14
    2000  100  75  10  AM     7.03e-05  98.27   108
    2000  100  75  10  Alg 1  9.01e-05  511.54  329
    2000  100  75  10  Alg 2  8.21e-05  16.89   12
    Table 2.  Computational results for middle-size problems.

    Size(X)  Rank(L)  Den(X)(%)  Spar(S)(%)  Algorithm  R.error   CPU(s)  IT
    3000  50  70  5    AM     9.61e-05  98.81    83
    3000  50  70  5    Alg 1  9.94e-05  429.42   141
    3000  50  70  5    Alg 2  9.72e-05  22.58    9
    3000  50  70  10   AM     7.08e-05  122.00   98
    3000  50  70  10   Alg 1  9.97e-05  891.26   288
    3000  50  70  10   Alg 2  6.04e-05  25.56    10
    3000  60  75  5    AM     7.08e-05  101.03   111
    3000  60  75  5    Alg 1  9.33e-05  668.20   269
    3000  60  75  5    Alg 2  9.41e-05  24.44    8
    4000  50  70  10   AM     7.23e-05  100.31   40
    4000  50  70  10   Alg 1  9.97e-05  1158.50  235
    4000  50  70  10   Alg 2  5.88e-05  40.06    10
    4000  50  75  5    AM     6.28e-05  85.02    35
    4000  50  75  5    Alg 1  9.15e-05  356.20   70
    4000  50  75  5    Alg 2  9.22e-05  28.44    7
    4000  50  80  5    AM     8.28e-05  88.22    36
    4000  50  80  5    Alg 1  9.95e-05  374.40   73
    4000  50  80  5    Alg 2  9.39e-05  30.74    8
    4000  50  80  10   AM     7.23e-05  85.33    38
    4000  50  80  10   Alg 1  9.99e-05  841.22   161
    4000  50  80  10   Alg 2  6.54e-05  38.48    9
    4000  100  60  5   AM     9.23e-05  90.21    41
    4000  100  60  5   Alg 1  9.99e-05  874.89   183
    4000  100  60  5   Alg 2  8.48e-05  739.76   10
    5000  50  70  5    AM     6.44e-05  104.27   39
    5000  50  70  5    Alg 1  9.98e-05  735.44   98
    5000  50  70  5    Alg 2  5.60e-05  53.51    9
    5000  50  70  10   AM     7.05e-05  99.45    38
    5000  50  70  10   Alg 1  9.97e-05  1585.46  206
    5000  50  70  10   Alg 2  9.08e-05  56.45    9
    5000  50  80  5    AM     5.92e-05  196.04   47
    5000  50  80  5    Alg 1  9.98e-05  468.41   59
    5000  50  80  5    Alg 2  7.20e-05  48.16    8
    5000  100  75  10  AM     7.73e-05  395.47   69
    5000  100  75  10  Alg 1  9.07e-05  1011.22  124
    5000  100  75  10  Alg 2  5.88e-05  57.94    9
    5000  50  80  10   AM     6.72e-05  256.11   98
    5000  50  80  10   Alg 1  9.99e-05  322.04   201
    5000  50  80  10   Alg 2  8.05e-05  13.11    9
    Table 3.  Computational results for large-size problems.

    Size(X)  Rank(L)  Den(X)(%)  Spar(S)(%)  Algorithm  R.error   CPU(s)  IT
    10000  50  60  5    AM     7.02e-05  950.03   70
    10000  50  60  5    Alg 1  9.97e-05  2527.91  98
    10000  50  60  5    Alg 2  5.54e-05  191.48   9
    10000  50  60  10   AM     9.01e-05  1417.34  151
    10000  50  60  10   Alg 1  9.99e-05  5125.45  194
    10000  50  60  10   Alg 2  8.67e-05  191.29   9
    10000  50  70  5    AM     8.81e-05  1022.03  41
    10000  50  70  5    Alg 1  9.98e-05  1683.68  59
    10000  50  70  5    Alg 2  7.83e-05  173.10   8
    10000  50  75  10   AM     7.70e-05  445.11   50
    10000  50  75  10   Alg 1  9.95e-05  3863.30  135
    10000  50  75  10   Alg 2  6.29e-05  208.24   9
    10000  50  80  5    AM     6.27e-05  566.03   48
    10000  50  80  5    Alg 1  9.93e-05  1249.34  41
    10000  50  80  5    Alg 2  5.46e-05  189.02   8
    10000  50  80  10   AM     9.08e-05  575.14   72
    10000  50  80  10   Alg 1  9.96e-05  2528.33  80
    10000  50  80  10   Alg 2  8.50e-05  190.36   8
    10000  100  60  5   AM     -         -        -
    10000  100  60  5   Alg 1  9.99e-05  2907.35  109
    10000  100  60  5   Alg 2  4.93e-05  197.94   9
    10000  100  60  10  AM     -         -        -
    10000  100  60  10  Alg 1  9.99e-05  6455.45  235
    10000  100  60  10  Alg 2  8.36e-05  199.31   9
    10000  100  70  5   AM     6.72e-05  551.20   61
    10000  100  70  5   Alg 1  9.90e-05  1820.84  63
    10000  100  70  5   Alg 2  6.97e-05  178.44   8
    10000  100  70  10  AM     7.27e-05  748.86   65
    10000  100  70  10  Alg 1  9.99e-05  4139.06  140
    10000  100  70  10  Alg 2  5.42e-05  210.46   9
    10000  90  75  10   AM     8.21e-05  605.16   76
    10000  90  75  10   Alg 1  9.02e-05  4009.77  128
    10000  90  75  10   Alg 2  6.72e-05  200.01   8
    10000  100  80  5   AM     8.02e-05  761.11   62
    10000  100  80  5   Alg 1  9.88e-05  1461.60  45
    10000  100  80  5   Alg 2  4.56e-05  197.07   8
    10000  100  80  10  AM     -         -        -
    10000  100  80  10  Alg 1  9.93e-05  2892.42  90
    10000  100  80  10  Alg 2  7.55e-05  197.00   8

    For Algorithm 2, we set $l=2$ and take the initial thresholding value in each iteration to be the $(p+1)$-th largest of the absolute values of all elements of $S_k$, where $p$ is the required number of non-zero entries of $S$, i.e., $p=\alpha|\Omega|$. Alg 1 and Alg 2 are abbreviations of Algorithm 1 and Algorithm 2, respectively.

    The results recorded in Tables 1–3 show that the proposed algorithms are feasible and efficient for solving matrix recovery tasks with incomplete samples. It is worth mentioning that our proposed algorithms are more effective than the AM algorithm (see [23]) for large-size problems, since both of the proposed algorithms obtain a solution within a reasonable time for problems of size 10000×10000.

    Moreover, the results recorded in Figures 1 and 2 illustrate the benefit of the non-monotone decrease in the iteration procedure of Algorithm 2. We observe that Algorithm 2 outperforms Algorithm 1, since the CPU times of Algorithm 2 are much shorter than those of Algorithm 1 and the relative errors of Algorithm 2 are smaller than those of Algorithm 1. The acceleration ratios of Algorithm 2 with 60% samples are 13.3 (left of Figure 1) and 26.8 (right of Figure 1), and those with 80% samples are 8.0 (left of Figure 2) and 18.1 (right of Figure 2).

    Figure 1.  Comparison of CPU time under different parameters. Left: R = 50, Den(X) = 60, Spa = 0.05. Right: R = 50, Den(X) = 60, Spa = 0.1.
    Figure 2.  Comparison of CPU time under different parameters. Left: R = 100, Den(X) = 80, Spa = 0.05. Right: R = 100, Den(X) = 80, Spa = 0.1.

    To verify the performance of the new algorithms on practical problems, we compare Algorithm 2 and the AM algorithm on background modeling in video processing. Background modeling aims to separate the foreground and the background in a video. We use the AM algorithm and our Algorithm 2 to separate the foreground and the background for four real videos from [29]. All videos meet the requirement of the low-rank plus sparse structure, since the backgrounds of all frames are correlated and the moving objects are sparse and independent. In our tests, each data matrix consists of the first 200 frames of a video. For example, the first video consists of 200 frames with a resolution of 144×176, so, converting each frame into a vector, the data matrix has size 25344×200. Here, the iteration is terminated once the current iterate obeys Error $<10^{-7}$, or once the criterion is not satisfied after 3000 iteration steps.
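    Stacking the frames into the data matrix is a simple reshape; a minimal sketch, assuming `frames` holds 200 grayscale frames of size 144×176 (placeholder data here):

```python
import numpy as np

# placeholder data: 200 grayscale frames of resolution 144x176
frames = np.zeros((200, 144, 176))

# each frame becomes one column, giving the 25344 x 200 data matrix
D = frames.reshape(200, -1).T
```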

    The results recorded in Figures 3–6 show the separation of one frame of each video sequence under the model $D=A+E$, where $D$ is the original image, $A$ denotes its background (the low-rank part) and $E$ its foreground (the sparse part).

    Figure 3.  Separation results for the hall video: the upper row is the result of AM and the lower row is that of Alg 2.
    Figure 4.  Separation results for the shopping-mall video: the upper row is the result of AM and the lower row is that of Alg 2.
    Figure 5.  Separation results for the bootstrap video: the upper row is the result of AM and the lower row is that of Alg 2.
    Figure 6.  Separation results for the fountain video: the upper row is the result of AM and the lower row is that of Alg 2.

    Table 4 lists the Error and the running time CPU(s) of the experiments. From Table 4, we can see that the running time of AM is about 2.8–5.4 times that of Algorithm 2, and Figures 3–6 and Table 4 show that the accuracy of Algorithm 2 is also consistently higher than that of the AM algorithm; thus the advantage of Algorithm 2 in the recovery of high-dimensional arrays is obvious.

    Table 4.  Experiment results of background-foreground separation.

    Image  Resolution  Error (AM)  CPU(s) (AM)  Error (Alg 2)  CPU(s) (Alg 2)
    Fig 3  144×176  8.83e-08  334.35   6.42e-08  108.85
    Fig 4  256×320  8.52e-08  1038.13  8.07e-08  191.09
    Fig 5  120×160  9.17e-08  275.68   7.31e-08  98.57
    Fig 6  128×160  8.58e-08  304.14   6.45e-08  99.30

    In this paper, we focused on the problem of recovering a matrix that is the sum of a low-rank matrix and a sparse matrix from a subset of its entries. This model characterizes many problems arising in signal and image processing, statistical inference, and machine learning. We proposed an alternating directional method for solving the low-rank and sparse structure matrix completion model. The key idea of the method is that each block variable is solved in the Gauss-Seidel manner while the others are fixed, until convergence. We further developed a variant of our algorithm by introducing a non-monotone search technique to improve its performance. Both versions are theoretically proved to be globally convergent under mild requirements. Based on the computational records, we observe that both algorithms are computationally inexpensive and find satisfactory results, and that the non-monotone strategy performs much better than the monotone one on these instances.

    The authors are much indebted to the anonymous referees for their helpful comments and suggestions, which greatly improved the original manuscript of this paper. The authors gratefully acknowledge the support of the special fund for science and technology innovation teams of Shanxi Province (202204051002018) and the scientific research project for Returned Overseas Chinese in Shanxi Province, China (2022-169).

    The authors declare that they have no conflict of interest.



  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
