Research article

Medical diagnosis for the problem of Chikungunya disease using soft rough sets

  • One of the greatest difficulties that doctors face when diagnosing a disease is making an accurate decision to correctly determine the nature of the injury. This is attributable to the similarity of symptoms across different diseases. The current work is devoted to proposing new mathematical methodologies to help in precise decision-making in the medical diagnosis of the problem of Chikungunya virus disease through the use of soft rough sets. In fact, we introduce some improvements for soft rough sets (given by Feng et al.). We suggest a new approach to studying roughness through the use of soft sets to find approximations of any set, i.e., so-called "soft δ-rough sets". To illustrate this approach, we compare it with the previous studies and prove that it is more accurate than the previous works. The proposed approach is more accurate than the approach of Feng et al. and extends the scope of applications because the problem of the soft upper approximation is solved. The main characterizations of the presented technique are elucidated. Some important relations related to soft δ-rough approximations (such as soft δ-memberships, soft δ-equality and soft δ-inclusion) are provided and their properties are examined. In addition, an important medical application in the diagnosis of the problem of Chikungunya virus using soft δ-rough sets is provided with two algorithms. These algorithms were tested on fictitious data in order to compare them to existing methods; they are simple techniques to use in MATLAB. Additionally, we examine the benefits and weaknesses of the proposed approach and present a plan for some upcoming work.

    Citation: Mostafa K. El-Bably, Radwan Abu-Gdairi, Mostafa A. El-Gayar. Medical diagnosis for the problem of Chikungunya disease using soft rough sets[J]. AIMS Mathematics, 2023, 8(4): 9082-9105. doi: 10.3934/math.2023455




    Working with large amounts of data is one of the main challenges we face today. With the rise of social networks and rapid technological advances, we must develop tools that allow us to work with so much information. At this point, the use of tensor products comes into play, since it reduces the number of operations to be carried out and speeds them up. Proof of this is the recent article [1], where tensor products are used to speed up the calculation of matrix products. Other articles that exemplify the benefits of this operation include [2], where the solution of 2- and 3-dimensional optimal control problems with spectral fractional Laplacian-type operators is studied, and [3], where high-order problems are studied through the use of proper generalized decomposition methods.

    When we try to solve a linear system of the form Ax = b, in addition to the classical methods, there are methods based on tensors that can be more efficient [4], since the classical methods suffer from the curse of dimensionality, which makes them lose effectiveness as the size of the problem increases. The tensor methods look for the solution in separated form, that is, as the tensor combination

    x = \sum_{j=1}^{\infty} x_j^1 \otimes \cdots \otimes x_j^d,

    where x_j^i \in \mathbb{R}^{N_i}, d is the dimension of the problem, and \otimes is the Kronecker product, as reviewed in the next section. The main family of methods that solves this problem is the proper generalized decomposition family [5], and it is based on the greedy rank-one update (GROU) algorithm [6,7]. This algorithm calculates the solution of the linear system Ax = b in separated form; for this, in each iteration, it updates the approximation of the solution with the term resulting from minimizing the remaining residue. Furthermore, there are certain square matrices for which the GROU algorithm improves its convergence, i.e., matrices of the form

    A = \sum_{i=1}^{d} \mathrm{id}_{N_1} \otimes \cdots \otimes \mathrm{id}_{N_{i-1}} \otimes A_i \otimes \mathrm{id}_{N_{i+1}} \otimes \cdots \otimes \mathrm{id}_{N_d},

    where \mathrm{id}_{N_k} is the identity matrix of size N_k \times N_k, and A_k \in \mathbb{R}^{N_k \times N_k} for 1 \le k \le d. These matrices are called Laplacian-like matrices, due to their relationship with the Laplace operator written as

    \sum_{i=1}^{d} \frac{\partial^2}{\partial x_i^2} = \sum_{i=1}^{d} \partial_{x_1}^0 \otimes \cdots \otimes \partial_{x_{i-1}}^0 \otimes \left( \frac{\partial^2}{\partial x_i^2} \right) \otimes \partial_{x_{i+1}}^0 \otimes \cdots \otimes \partial_{x_d}^0,
    where \partial_{x_j}^0 denotes the identity operator in the variable x_j.

    It is not easy to decide when a given matrix A can be represented in that form. To do this, we can use some of the previous results obtained by the authors of [8]. In that paper, it is proved that the set of Laplacian-like matrices is a linear subspace of the space of square matrices with a particular decomposition of its dimension. Moreover, a greedy algorithm is provided that computes the best Laplacian approximation L_A of a given matrix A, as well as its residue, R_A = A - L_A. However, an iterative algorithm is not useful enough compared with a direct solution algorithm. The main goal of this paper is to provide a direct algorithm that allows one to construct the best Laplacian-like approximation by using only a particular block decomposition of the matrix A. It can be considered as a pre-processing procedure that allows one to represent a given matrix in its best Laplacian-like form, and if the residual is equal to zero, we definitively have its Laplacian-like representation. Hence, we can efficiently use the GROU algorithm to solve the high-dimensional linear system associated with the matrix A.

    We remark that, by using the decomposition A = L_A + R_A, we can rewrite the linear system as (L_A + R_A)x = b, and when the value of the remainder is small, we can approximate the solution x of the system by using the solution x_L of the Laplacian system. This fact is especially interesting in the case of the discretization of some partial differential equations. We also study the Laplacian decomposition of the matrix that comes from the discretization of a general second-order partial differential equation of the form

    \alpha \frac{\partial^2 \mathbf{u}}{\partial x^2} + \beta \frac{\partial^2 \mathbf{u}}{\partial y^2} + \gamma \frac{\partial \mathbf{u}}{\partial x} + \delta \frac{\partial \mathbf{u}}{\partial y} + \mu \mathbf{u} = \mathbf{f},

    with homogeneous boundary conditions. Besides, to compare different numerical methods for solving partial differential equations, we consider two particular cases: the Helmholtz equation, which corresponds to an eigenvalue problem for the Laplace operator, and, to illustrate that it is not necessary to be limited to the second order, the fourth-order Swift-Hohenberg equation

    \frac{\partial u}{\partial t} = \varepsilon u - \left( 1 + \frac{\partial^2}{\partial x^2} \right)^2 u.

    This equation is noted for its pattern-forming behavior, and it was derived from the equations for thermal convection [9].

    The paper is organized as follows. We begin by recalling some preliminary definitions and results used throughout the paper in Section 2. Section 3 is devoted to the statement and proof of the main result of this paper, which allows one to construct explicitly the best approximation of a given matrix in the linear space of Laplacian-like matrices. After that, in Section 4, we discuss how we applied this result to compute the best Laplacian approximation for the discretization of a second-order partial differential equation without mixed derivatives. Finally, some numerical examples are given in Section 5.

    First of all, we introduce some notation that we use throughout the paper. We denote by \mathbb{R}^{N \times M} the set of N \times M matrices and by A^T the transpose of a given matrix A. As usual, we use

    \langle x, y \rangle_2 = \langle x, y \rangle_{\mathbb{R}^N} = x^T y = y^T x

    to denote the Euclidean inner product in \mathbb{R}^N, and its corresponding 2-norm by \| x \|_2 = \| x \|_{\mathbb{R}^N} = \langle x, x \rangle_2^{1/2}.

    Given a sequence \{u_j\}_{j=0}^{\infty} \subset \mathbb{R}^N, we say that a vector u \in \mathbb{R}^N can be written as

    u = \sum_{j=0}^{\infty} u_j

    if and only if

    \lim_{n \to \infty} \sum_{j=0}^{n} u_j = u

    in the 2-topology.

    The Kronecker product of two matrices A \in \mathbb{R}^{N_1 \times M_1} and B \in \mathbb{R}^{N_2 \times M_2} is defined by

    A \otimes B = \begin{pmatrix} A_{1,1}B & A_{1,2}B & \cdots & A_{1,M_1}B \\ A_{2,1}B & A_{2,2}B & \cdots & A_{2,M_1}B \\ \vdots & \vdots & \ddots & \vdots \\ A_{N_1,1}B & A_{N_1,2}B & \cdots & A_{N_1,M_1}B \end{pmatrix} \in \mathbb{R}^{N_1 N_2 \times M_1 M_2}.

    We can see some of the well-known properties of the Kronecker product in [7].
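    For readers who wish to experiment, a minimal Python/NumPy check of one of these properties may be useful. The snippet below (our own illustration, not part of the paper; numpy.kron is the assumed library routine) verifies the mixed-product property (A \otimes B)(x \otimes y) = (Ax) \otimes (By):

```python
import numpy as np

rng = np.random.default_rng(1)
A, B = rng.standard_normal((3, 3)), rng.standard_normal((2, 2))
x, y = rng.standard_normal(3), rng.standard_normal(2)

# Mixed-product property: (A ⊗ B)(x ⊗ y) = (Ax) ⊗ (By)
lhs = np.kron(A, B) @ np.kron(x, y)
rhs = np.kron(A @ x, B @ y)
print(np.allclose(lhs, rhs))  # True
```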

    As we already said, we are interested in solving the high-dimensional linear system Ax = b obtained from a discretization of a partial differential equation. We are interested in solving it by using a tensor-based algorithm; so, we are going to look for an approximation of the solution in separated form. To see this, we assume that the coefficient matrix A is an (N_1 \cdots N_d) \times (N_1 \cdots N_d)-dimensional invertible matrix for some N_1, \dots, N_d \in \mathbb{N}. Next, we look for an approximation (of rank n) of A^{-1}b of the form

    A^{-1}b \approx \sum_{j=1}^{n} x_j^1 \otimes \cdots \otimes x_j^d. \quad (2.1)

    To do this, given x \in \mathbb{R}^{N_1 \cdots N_d}, we say that x \in \mathcal{R}_1 = \mathcal{R}_1(N_1, N_2, \dots, N_d) if x = x^1 \otimes x^2 \otimes \cdots \otimes x^d, where x^i \in \mathbb{R}^{N_i} for i = 1, \dots, d. For n \ge 2, we define, inductively, \mathcal{R}_n = \mathcal{R}_n(N_1, N_2, \dots, N_d) = \mathcal{R}_{n-1} + \mathcal{R}_1, that is,

    \mathcal{R}_n = \left\{ x : x = \sum_{i=1}^{k} x^{(i)}, \; x^{(i)} \in \mathcal{R}_1 \text{ for } 1 \le i \le k \le n \right\}.

    Note that \mathcal{R}_n \subset \mathcal{R}_{n+1} for all n \ge 1.

    To obtain the approximation in (2.1), what we will do is minimize the difference

    \left\| b - A \left( \sum_{j=1}^{n} x_j^1 \otimes \cdots \otimes x_j^d \right) \right\|_2,

    that is, solve the problem

    \mathop{\mathrm{arg\,min}}\limits_{u \in \mathcal{R}_n} \| b - A u \|_2. \quad (2.2)

    Here, \| \cdot \|_2 is the 2-norm, or the Frobenius norm, defined by

    \| A \|_2 = \sqrt{ \sum_{i=1}^{m} \sum_{j=1}^{n} |a_{i,j}|^2 } = \sqrt{ \mathrm{tr}(A^T A) }, \quad \text{for } A \in \mathbb{R}^{m \times n}.

    Unfortunately, from Proposition 4.1(a) of [10], we have that the set \mathcal{R}_n is not necessarily (or even usually) closed for each n \ge 2. In consequence, a best rank-n approximation need not exist; that is, (2.2) may have no solution. However, from Proposition 4.2 of [10] it follows that \mathcal{R}_1 is a closed set in any norm-topology. This fact allows us to introduce the following algorithm.

    The GROU algorithm is an iterative method to solve linear systems of the form Ax = b by using only rank-one updates. Thus, given A \in \mathrm{GL}(\mathbb{R}^{N \times N}) with N = N_1 \cdots N_d, and b \in \mathbb{R}^N, we can obtain an approximation of the form

    A^{-1}b \approx u_n = \sum_{j=1}^{n} x_j^1 \otimes \cdots \otimes x_j^d

    for some n \ge 1, and x_j^i \in \mathbb{R}^{N_i} for i = 1, 2, \dots, d and j = 1, 2, \dots, n [7]. We proceed with the following iterative procedure (see Algorithm 1 below): let u_0 = y_0 = 0, and, for each n \ge 1, take

    r_{n-1} = b - A u_{n-1}, \quad (2.3)
    u_n = u_{n-1} + y_n, \quad \text{where} \quad y_n \in \mathop{\mathrm{arg\,min}}\limits_{y \in \mathcal{R}_1} \| r_{n-1} - A y \|_2. \quad (2.4)

    Since u_n \to A^{-1}b, we can define the rank of A^{-1}b obtained by the GROU algorithm as

    \mathrm{rank}(A^{-1}b) = \begin{cases} \infty & \text{if } \{ j \ge 1 : y_j = 0 \} = \emptyset, \\ \min\{ j \ge 1 : y_j = 0 \} - 1 & \text{otherwise.} \end{cases}

    The next result, presented in [7], gives the convergence of the sequence \{u_n\}_{n \ge 0} to the solution A^{-1}b of the linear system.

    Theorem 2.1. Let b \in \mathbb{R}^{N_1 \cdots N_d} and let A \in \mathbb{R}^{N_1 \cdots N_d \times N_1 \cdots N_d} be an invertible matrix. Then, by using the iterative scheme described by (2.3) and (2.4), we obtain that the sequence \{ \| r_n \|_2 \}_{n=0}^{\mathrm{rank}(A^{-1}b)} is strictly decreasing and

    A^{-1}b = \lim_{n \to \infty} u_n = \sum_{j=0}^{\mathrm{rank}(A^{-1}b)} y_j. \quad (2.5)

    Note that the updates in the previous scheme work under the assumption that, in line 5 of Algorithm 1, we have a way to obtain

    y \in \mathop{\mathrm{arg\,min}}\limits_{x \in \mathcal{R}_1} \| r_i - A x \|_2^2. \quad (2.6)

    To compute y, we can use an alternating least squares (ALS) approach (see [7,11]).

    Algorithm 1 GROU algorithm
    1: procedure GROU(f, A, \varepsilon, \texttt{tol}, \texttt{rank\_max})
    2:   r_0 = f
    3:   u = 0
    4:   for i = 0, 1, 2, \dots, \texttt{rank\_max} do
    5:     y = \text{procedure}\left( \min_{x \in \mathcal{R}_1} \| r_i - A x \|_2^2 \right)
    6:     r_{i+1} = r_i - A y
    7:     u \leftarrow u + y
    8:     if \| r_{i+1} \|_2 < \varepsilon or \left| \| r_{i+1} \|_2 - \| r_i \|_2 \right| < \texttt{tol} then goto 13
    9:     end if
    10:   end for
    11:   return u and \| r_{\texttt{rank\_max}} \|_2.
    12:   break
    13:   return u and \| r_{i+1} \|_2
    14: end procedure
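    To make the scheme concrete, here is a minimal Python/NumPy sketch of the GROU iteration (2.3) and (2.4) for d = 2, with the rank-one subproblem in line 5 solved by a simple alternating least squares loop (the strategy detailed next). The function names als_rank_one and grou and all parameter defaults are ours, not the paper's:

```python
import numpy as np

def als_rank_one(A, r, N1, N2, iters=5, seed=0):
    """Sketch: approximately solve min_{x1,x2} ||r - A (x1 ⊗ x2)||_2
    for d = 2 by alternating least squares."""
    rng = np.random.default_rng(seed)
    x1, x2 = rng.standard_normal(N1), rng.standard_normal(N2)
    for _ in range(iters):
        # x1 ⊗ x2 = kron(I_{N1}, x2) @ x1 = kron(x1, I_{N2}) @ x2
        M1 = A @ np.kron(np.eye(N1), x2.reshape(-1, 1))   # (N1*N2) x N1
        x1 = np.linalg.lstsq(M1, r, rcond=None)[0]
        M2 = A @ np.kron(x1.reshape(-1, 1), np.eye(N2))   # (N1*N2) x N2
        x2 = np.linalg.lstsq(M2, r, rcond=None)[0]
    return x1, x2

def grou(b, A, N1, N2, eps=1e-12, tol=1e-12, rank_max=50):
    """Sketch of Algorithm 1 (GROU) for d = 2: greedy rank-one updates
    of the residual r_n = b - A u_n, cf. (2.3) and (2.4)."""
    r, u = b.copy(), np.zeros_like(b)
    for _ in range(rank_max):
        x1, x2 = als_rank_one(A, r, N1, N2)
        y = np.kron(x1, x2)                      # rank-one update y_n
        r_new = r - A @ y
        u = u + y
        if (np.linalg.norm(r_new) < eps
                or abs(np.linalg.norm(r_new) - np.linalg.norm(r)) < tol):
            return u, np.linalg.norm(r_new)
        r = r_new
    return u, np.linalg.norm(r)
```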

    The idea behind the ALS strategy to solve (2.6) is the following: for each 1 \le k \le d, assume that the values x_1, \dots, x_{k-1}, x_{k+1}, \dots, x_d are given. Then, we look for the unknown x_k satisfying

    x_k \in \mathop{\mathrm{arg\,min}}\limits_{z_k \in \mathbb{R}^{N_k}} \left\| b - A \left( x_1 \otimes \cdots \otimes x_{k-1} \otimes z_k \otimes x_{k+1} \otimes \cdots \otimes x_d \right) \right\|_2,

    where we can write

    A \left( x_1 \otimes \cdots \otimes x_{k-1} \otimes z_k \otimes x_{k+1} \otimes \cdots \otimes x_d \right) = A \left( x_1 \otimes \cdots \otimes x_{k-1} \otimes \mathrm{id}_{N_k} \otimes x_{k+1} \otimes \cdots \otimes x_d \right) z_k.

    In consequence, by using a least squares approach [11], we can obtain x_k by solving the following N_k \times N_k-dimensional linear system:

    Z_k z_k = b_k, \quad (2.7)

    where

    Z_k := \left( x_1^T \otimes \cdots \otimes x_{k-1}^T \otimes \mathrm{id}_{N_k} \otimes x_{k+1}^T \otimes \cdots \otimes x_d^T \right) A^T A \left( x_1 \otimes \cdots \otimes x_{k-1} \otimes \mathrm{id}_{N_k} \otimes x_{k+1} \otimes \cdots \otimes x_d \right)

    and

    b_k := \left( x_1^T \otimes \cdots \otimes x_{k-1}^T \otimes \mathrm{id}_{N_k} \otimes x_{k+1}^T \otimes \cdots \otimes x_d^T \right) A^T b.

    Clearly,

    \left\| b - A \left( x_1 \otimes \cdots \otimes x_{k-1} \otimes z_k \otimes x_{k+1} \otimes \cdots \otimes x_d \right) \right\|_2 \ge \left\| b - A \left( x_1 \otimes \cdots \otimes x_{k-1} \otimes x_k \otimes x_{k+1} \otimes \cdots \otimes x_d \right) \right\|_2

    holds for all z_k \in \mathbb{R}^{N_k}. However, it is well known (see Section 4 in [11]) that the performance of the ALS strategy can be improved (see Algorithm 2 below) when the matrix A^T A \in \mathbb{R}^{N \times N}, with N = N_1 \cdots N_d, can be written in the form

    A^T A = \sum_{i=1}^{r} \bigotimes_{j=1}^{d} A_j^{(i)}, \quad (2.8)

    where \bigotimes_{j=1}^{d} A_j^{(i)} = A_1^{(i)} \otimes \cdots \otimes A_d^{(i)}; here, A_j^{(i)} \in \mathbb{R}^{N_j \times N_j} for 1 \le j \le d and 1 \le i \le r. In particular, when the matrix A is given by

    A = \sum_{i=1}^{d} \mathrm{id}_{N_1} \otimes \cdots \otimes \mathrm{id}_{N_{i-1}} \otimes A_i \otimes \mathrm{id}_{N_{i+1}} \otimes \cdots \otimes \mathrm{id}_{N_d},

    then the matrix A^T A can be easily written in the form of (2.8). These matrices were introduced in [8] as Laplacian-like matrices, since they can be easily related to the classical Laplacian operator [2,12]. The next section will be devoted to the study of this class of matrices.
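    This structure is straightforward to generate in practice. As an illustration (our own helper, not taken from [8]), a few lines of Python/NumPy assemble a Laplacian-like matrix from its blocks A_1, \dots, A_d:

```python
import numpy as np

def laplacian_like(blocks):
    """Assemble sum_i id_{N_1} ⊗ ... ⊗ A_i ⊗ ... ⊗ id_{N_d}
    from a list of square blocks A_1, ..., A_d (a sketch)."""
    Ns = [B.shape[0] for B in blocks]
    N = int(np.prod(Ns))
    A = np.zeros((N, N))
    for i, B in enumerate(blocks):
        left = np.eye(int(np.prod(Ns[:i])))      # id_{N_1 ... N_{i-1}}
        right = np.eye(int(np.prod(Ns[i + 1:])))  # id_{N_{i+1} ... N_d}
        A += np.kron(np.kron(left, B), right)
    return A
```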

    As we said in the introduction, the proper generalized decomposition is a popular numerical strategy in engineering for solving high-dimensional problems. It is based on the GROU algorithm given by (2.3) and (2.4), and it can be considered as a tensor-based decomposition algorithm.

    There is a particular type of matrix for which these methods for solving high-dimensional linear systems work particularly well, i.e., those that satisfy the property (2.8). To this end, we introduce the following definition.

    Algorithm 2 An ALS algorithm for matrices in the form of (2.8) [11, Algorithm 2]
    1: Given A^T A = \sum_{i=1}^{r} \bigotimes_{j=1}^{d} A_j^{(i)} \in \mathbb{R}^{N \times N} and b \in \mathbb{R}^{N}.
    2: Initialize x_i^{(0)} \in \mathbb{R}^{N_i} for i = 1, 2, \dots, d.
    3: Introduce \varepsilon > 0 and \texttt{iter\_max}, iter = 1.
    4: while distance > \varepsilon and iter < \texttt{iter\_max} do
    5:   for k = 1, 2, \dots, d do
    6:     x_k^{(1)} = x_k^{(0)}
    7:     for i = 1, 2, \dots, r do
    8:       \alpha_k^{(i)} = \left( \prod_{j=1}^{k-1} (x_j^{(0)})^T A_j^{(i)} x_j^{(0)} \right) \left( \prod_{j=k+1}^{d} (x_j^{(1)})^T A_j^{(i)} x_j^{(1)} \right)
    9:     end for
    10:     x_k^{(0)} solves \left( \sum_{i=1}^{r} \alpha_k^{(i)} A_k^{(i)} \right) x_k = \left( x_1^{(0)} \otimes \cdots \otimes x_{k-1}^{(0)} \otimes \mathrm{id}_{N_k} \otimes x_{k+1}^{(0)} \otimes \cdots \otimes x_d^{(0)} \right)^T b
    11:   end for
    12:   iter = iter + 1.
    13:   distance = \max_{1 \le i \le d} \| x_i^{(0)} - x_i^{(1)} \|_2.
    14: end while
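    A rough Python rendering of this sweep, under our own conventions (ATA_terms is the list of Kronecker factors [A_1^{(i)}, \dots, A_d^{(i)}] for each i, ATb stands for A^T b, and a fixed iteration count replaces the distance test), could look as follows:

```python
import numpy as np

def als_structured(ATA_terms, ATb, x, iters=10):
    """ALS sweep for A^T A = sum_i A_1^(i) ⊗ ... ⊗ A_d^(i) (a sketch).
    ATA_terms: list over i of lists of the d factor matrices.
    x: list of current factor vectors x_1, ..., x_d (updated in place)."""
    d = len(x)
    for _ in range(iters):
        for k in range(d):
            # alpha_k^(i) = prod_{j != k} x_j^T A_j^(i) x_j (cf. line 8)
            lhs = sum(
                np.prod([x[j] @ terms[j] @ x[j] for j in range(d) if j != k])
                * terms[k]
                for terms in ATA_terms)
            # contract A^T b with every factor except the k-th (cf. line 10)
            T = ATb.reshape([v.size for v in x])
            for j in range(d - 1, -1, -1):  # last axis first keeps indices stable
                if j != k:
                    T = np.tensordot(T, x[j], axes=(j, 0))
            x[k] = np.linalg.solve(lhs, T)
    return x
```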

    Definition 3.1. Given a matrix A \in \mathbb{R}^{N \times N}, where N = N_1 \cdots N_d, we say that A is a Laplacian-like matrix if there exist matrices A_i \in \mathbb{R}^{N_i \times N_i} for 1 \le i \le d such that

    A = \sum_{i=1}^{d} A_i \otimes \mathrm{id}_{[N_i]} := \sum_{i=1}^{d} \mathrm{id}_{N_1} \otimes \cdots \otimes \mathrm{id}_{N_{i-1}} \otimes A_i \otimes \mathrm{id}_{N_{i+1}} \otimes \cdots \otimes \mathrm{id}_{N_d}, \quad (3.1)

    where \mathrm{id}_{N_j} is the identity matrix of size N_j \times N_j.

    It is not difficult to see that the set of Laplacian-like matrices is a linear subspace of \mathbb{R}^{N \times N} consisting of matrices satisfying the property (2.8). From now on, we will denote by \mathcal{L}(\mathbb{R}^{N \times N}) the subspace of Laplacian-like matrices in \mathbb{R}^{N \times N} for a fixed decomposition N = N_1 \cdots N_d.

    Now, given a matrix A \in \mathbb{R}^{N \times N}, our goal is to solve the following optimization problem:

    \min_{L \in \mathcal{L}(\mathbb{R}^{N \times N})} \| A - L \|_2. \quad (3.2)

    Clearly, if we denote by \Pi_{\mathcal{L}(\mathbb{R}^{N \times N})} the orthogonal projection onto the linear subspace \mathcal{L}(\mathbb{R}^{N \times N}), then L_A := \Pi_{\mathcal{L}(\mathbb{R}^{N \times N})}(A) is the solution of (3.2). Observe that \| A - L_A \|_2 = 0 if and only if A \in \mathcal{L}(\mathbb{R}^{N \times N}).

    We are interested in trying to achieve a structure similar to (3.1) to study the matrices of large-dimensional problems. We look for an algorithm that allows one to construct, for a given matrix A, its best Laplacian-like approximation L_A.

    To do this, we will use the following theorem, which describes a particular decomposition of the space of matrices \mathbb{R}^{N \times N}. Observe that the linear subspace \mathrm{span}\{\mathrm{id}_N\} in \mathbb{R}^{N \times N} has, as its orthogonal complement, the set of null-trace matrices:

    \mathrm{span}\{\mathrm{id}_n\}^{\bot} = \{ A \in \mathbb{R}^{n \times n} : \mathrm{tr}(A) = 0 \},

    with respect to the inner product \langle A, B \rangle_{\mathbb{R}^{N \times N}} = \mathrm{tr}(A^T B).

    Theorem 3.2. Consider (\mathbb{R}^{N \times N}, \| \cdot \|_2) as a Hilbert space, where N = N_1 \cdots N_d. Then, there exists a decomposition

    \mathbb{R}^{N \times N} = \mathrm{span}\{\mathrm{id}_N\} \oplus \mathfrak{h}_N = \mathcal{L}(\mathbb{R}^{N \times N}) \oplus \mathcal{L}(\mathbb{R}^{N \times N})^{\bot},

    where \mathfrak{h}_N = \mathrm{span}\{\mathrm{id}_N\}^{\bot} is the orthogonal complement of the linear subspace generated by the identity matrix. Moreover,

    \mathcal{L}(\mathbb{R}^{N \times N}) = \mathrm{span}\{\mathrm{id}_N\} \oplus \Delta, \quad (3.3)

    where \Delta = \mathfrak{h}_N \cap \mathcal{L}(\mathbb{R}^{N \times N}). Furthermore, \Delta is a subspace of \mathfrak{h}_N and

    \Delta = \bigoplus_{i=1}^{d} \mathrm{span}\{\mathrm{id}_{N_1}\} \otimes \cdots \otimes \mathrm{span}\{\mathrm{id}_{N_{i-1}}\} \otimes \mathrm{span}\{\mathrm{id}_{N_i}\}^{\bot} \otimes \mathrm{span}\{\mathrm{id}_{N_{i+1}}\} \otimes \cdots \otimes \mathrm{span}\{\mathrm{id}_{N_d}\}.

    Proof. It follows from Lemma 3.1, Theorem 3.1 and Theorem 3.2 in [8].

    The above theorem allows us to compute the projection of a matrix A onto \mathcal{L}(\mathbb{R}^{N \times N}) as follows. Denote by \Pi_i the orthogonal projection of \mathbb{R}^{N \times N} onto the linear subspace

    \mathrm{span}\{\mathrm{id}_{N_1}\} \otimes \cdots \otimes \mathrm{span}\{\mathrm{id}_{N_{i-1}}\} \otimes \mathrm{span}\{\mathrm{id}_{N_i}\}^{\bot} \otimes \mathrm{span}\{\mathrm{id}_{N_{i+1}}\} \otimes \cdots \otimes \mathrm{span}\{\mathrm{id}_{N_d}\}

    for 1 \le i \le d. Thus, \sum_{i=1}^{d} \Pi_i is the orthogonal projection of \mathbb{R}^{N \times N} onto the linear subspace \Delta. In consequence, by using (3.3), we have

    \frac{\mathrm{tr}(A)}{N} \mathrm{id}_N + \sum_{i=1}^{d} \Pi_i(A) = \mathop{\mathrm{arg\,min}}\limits_{L \in \mathcal{L}(\mathbb{R}^{N \times N})} \| A - L \|_2. \quad (3.4)

    If we further analyze (3.4), we observe that the second term on the left is of the form

    \sum_{i=1}^{d} \Pi_i(A) = \sum_{i=1}^{d} \mathrm{id}_{N_1} \otimes \cdots \otimes \mathrm{id}_{N_{i-1}} \otimes X_i \otimes \mathrm{id}_{N_{i+1}} \otimes \cdots \otimes \mathrm{id}_{N_d},

    and that it has only (N_1^2 + \cdots + N_d^2 - d) degrees of freedom (recall that \dim \mathrm{span}\{\mathrm{id}_{N_i}\}^{\bot} = N_i^2 - 1). In addition, due to the tensor structure of the products, the unknowns x_l of X_k are distributed in blocks, so we can identify which entries of the matrix A each one approximates. Therefore, to obtain the value of each x_l, we only need to compute the value that best approximates the entries (i,j) of the original matrix that lie in the same position as x_l.

    In our next result, we will see how to carry out this procedure. To do this, we make the following observation. Given a matrix A = (a_{i,j}) \in \mathbb{R}^{KL \times KL} for some integers K, L > 1, we can write A as a block matrix:

    A = \begin{pmatrix} A_{1,1}^{(K,L)} & A_{1,2}^{(K,L)} & \cdots & A_{1,L}^{(K,L)} \\ A_{2,1}^{(K,L)} & A_{2,2}^{(K,L)} & \cdots & A_{2,L}^{(K,L)} \\ \vdots & \vdots & \ddots & \vdots \\ A_{L,1}^{(K,L)} & A_{L,2}^{(K,L)} & \cdots & A_{L,L}^{(K,L)} \end{pmatrix}, \quad (3.5)

    where the block A_{i,j}^{(K,L)} \in \mathbb{R}^{K \times K} for 1 \le i, j \le L is given by

    A_{i,j}^{(K,L)} = \begin{pmatrix} a_{(i-1)K+1, (j-1)K+1} & \cdots & a_{(i-1)K+1, jK} \\ \vdots & \ddots & \vdots \\ a_{iK, (j-1)K+1} & \cdots & a_{iK, jK} \end{pmatrix}.

    Moreover,

    \| A \|_{\mathbb{R}^{KL \times KL}}^2 = \sum_{i=1}^{KL} \sum_{j=1}^{KL} a_{i,j}^2 = \sum_{r=1}^{L} \sum_{s=1}^{L} \| A_{r,s}^{(K,L)} \|_{\mathbb{R}^{K \times K}}^2.

    Observe that K and L can be easily interchanged. To simplify the notation, from now on, given N = N_1 N_2 \cdots N_d, we write N_{[k]} = N_1 \cdots N_{k-1} N_{k+1} \cdots N_d for each 1 \le k \le d.

    Theorem 3.3. Let A \in \mathbb{R}^{N \times N}, with N = N_1 \cdots N_d. For each fixed 1 \le k \le d, consider the linear function P_k : \mathbb{R}^{N_k \times N_k} \longrightarrow \mathbb{R}^{N \times N} given by

    P_k(X_k) := \mathrm{id}_{N_1} \otimes \cdots \otimes \mathrm{id}_{N_{k-1}} \otimes X_k \otimes \mathrm{id}_{N_{k+1}} \otimes \cdots \otimes \mathrm{id}_{N_d}.

    Then, the solution of the minimization problem

    \min_{X_k \in \mathbb{R}^{N_k \times N_k}} \| A - P_k(X_k) \|_2 \quad (3.6)

    is given by

    (X_k)_{i,j} = \begin{cases} \dfrac{1}{N_{[1]}} \sum\limits_{n=1}^{N_{[1]}} a_{(i-1)N_{[1]}+n, \, (j-1)N_{[1]}+n} & \text{if } k = 1, \\[2mm] \dfrac{1}{N_{[k]}} \sum\limits_{m=1}^{N_{k+1} \cdots N_d} \left( \sum\limits_{n=1}^{N_1 \cdots N_{k-1}} A_{n,n}^{(N_k \cdots N_d, \, N_1 \cdots N_{k-1})} \right)_{(i-1)N_{k+1} \cdots N_d + m, \, (j-1)N_{k+1} \cdots N_d + m} & \text{if } 1 < k < d, \\[2mm] \dfrac{1}{N_{[d]}} \left( \sum\limits_{n=1}^{N_{[d]}} A_{n,n}^{(N_d, N_{[d]})} \right)_{i,j} & \text{if } k = d. \end{cases}

    Proof. First, let us observe that \mathrm{id}_{N_1} \otimes \cdots \otimes \mathrm{id}_{N_k} = \mathrm{id}_{N_1 \cdots N_k}; so, we can find three different situations in the calculation of the projections:

    (1). P_1(X_1) = X_1 \otimes \mathrm{id}_{N_{[1]}}; in this case,

    P_1(X_1) = \begin{pmatrix} (X_1)_{1,1} \mathrm{id}_{N_{[1]}} & (X_1)_{1,2} \mathrm{id}_{N_{[1]}} & \cdots & (X_1)_{1,N_1} \mathrm{id}_{N_{[1]}} \\ (X_1)_{2,1} \mathrm{id}_{N_{[1]}} & (X_1)_{2,2} \mathrm{id}_{N_{[1]}} & \cdots & (X_1)_{2,N_1} \mathrm{id}_{N_{[1]}} \\ \vdots & \vdots & \ddots & \vdots \\ (X_1)_{N_1,1} \mathrm{id}_{N_{[1]}} & (X_1)_{N_1,2} \mathrm{id}_{N_{[1]}} & \cdots & (X_1)_{N_1,N_1} \mathrm{id}_{N_{[1]}} \end{pmatrix} \in \mathbb{R}^{N_{[1]}N_1 \times N_{[1]}N_1}.

    (2). P_d(X_d) = \mathrm{id}_{N_{[d]}} \otimes X_d; in this case,

    P_d(X_d) = \begin{pmatrix} X_d & O_d & \cdots & O_d \\ O_d & X_d & \cdots & O_d \\ \vdots & \vdots & \ddots & \vdots \\ O_d & O_d & \cdots & X_d \end{pmatrix} \in \mathbb{R}^{N_d N_{[d]} \times N_d N_{[d]}},

    where O_d denotes the zero matrix in \mathbb{R}^{N_d \times N_d}.

    (3). P_i(X_i) = \mathrm{id}_{N_1 \cdots N_{i-1}} \otimes X_i \otimes \mathrm{id}_{N_{i+1} \cdots N_d} for i = 2, \dots, d-1; in this case, for a fixed 2 \le i \le d-1, we write N_l = N_1 \cdots N_{i-1} and N_r = N_{i+1} \cdots N_d. Thus,

    P_i(X_i) = \mathrm{id}_{N_l} \otimes X_i \otimes \mathrm{id}_{N_r} = \begin{pmatrix} X_i \otimes \mathrm{id}_{N_r} & O_i \otimes \mathrm{id}_{N_r} & \cdots & O_i \otimes \mathrm{id}_{N_r} \\ O_i \otimes \mathrm{id}_{N_r} & X_i \otimes \mathrm{id}_{N_r} & \cdots & O_i \otimes \mathrm{id}_{N_r} \\ \vdots & \vdots & \ddots & \vdots \\ O_i \otimes \mathrm{id}_{N_r} & O_i \otimes \mathrm{id}_{N_r} & \cdots & X_i \otimes \mathrm{id}_{N_r} \end{pmatrix} \in \mathbb{R}^{(N_i N_r) N_l \times (N_i N_r) N_l}.

    In each case, a difference of the form

    \| A - P_k(X_k) \|_2

    must be minimized. To this end, we will consider in each case A as a block matrix A \in \mathbb{R}^{KL \times KL} in the form of (3.5).

    Case 1: For P_1(X_1), we take K = N_{[1]} and L = N_1; hence,

    A - P_1(X_1) = \begin{pmatrix} A_{1,1}^{(K,L)} - (X_1)_{1,1} \mathrm{id}_{N_{[1]}} & \cdots & A_{1,N_1}^{(K,L)} - (X_1)_{1,N_1} \mathrm{id}_{N_{[1]}} \\ \vdots & \ddots & \vdots \\ A_{N_1,1}^{(K,L)} - (X_1)_{N_1,1} \mathrm{id}_{N_{[1]}} & \cdots & A_{N_1,N_1}^{(K,L)} - (X_1)_{N_1,N_1} \mathrm{id}_{N_{[1]}} \end{pmatrix}.

    In this situation, we have

    \| A - P_1(X_1) \|_{\mathbb{R}^{N \times N}}^2 = \sum_{i=1}^{N_1} \sum_{j=1}^{N_1} \| A_{i,j}^{(K,L)} - (X_1)_{i,j} \mathrm{id}_{N_{[1]}} \|_{\mathbb{R}^{N_{[1]} \times N_{[1]}}}^2;

    hence, we wish, for each 1 \le i, j \le N_1, to obtain

    (X_1)_{i,j} = x^* \in \mathop{\mathrm{arg\,min}}\limits_{x \in \mathbb{R}} \| A_{i,j}^{(K,L)} - x \, \mathrm{id}_{N_{[1]}} \|_{\mathbb{R}^{N_{[1]} \times N_{[1]}}}^2 = \mathop{\mathrm{arg\,min}}\limits_{x \in \mathbb{R}} \sum_{n=1}^{N_{[1]}} \left( a_{(i-1)N_{[1]}+n, \, (j-1)N_{[1]}+n} - x \right)^2.

    Thus, it is not difficult to see that

    (X_1)_{i,j} = \frac{1}{N_{[1]}} \sum_{n=1}^{N_{[1]}} a_{(i-1)N_{[1]}+n, \, (j-1)N_{[1]}+n}

    for 1 \le i, j \le N_1.

    Case 2: For P_d(X_d), we take K = N_d and L = N_{[d]}; hence,

    A - P_d(X_d) = \begin{pmatrix} A_{1,1}^{(K,L)} - X_d & A_{1,2}^{(K,L)} & \cdots & A_{1,N_{[d]}}^{(K,L)} \\ A_{2,1}^{(K,L)} & A_{2,2}^{(K,L)} - X_d & \cdots & A_{2,N_{[d]}}^{(K,L)} \\ \vdots & \vdots & \ddots & \vdots \\ A_{N_{[d]},1}^{(K,L)} & A_{N_{[d]},2}^{(K,L)} & \cdots & A_{N_{[d]},N_{[d]}}^{(K,L)} - X_d \end{pmatrix}.

    Now, we have

    \| A - P_d(X_d) \|_{\mathbb{R}^{N \times N}}^2 = \sum_{i=1}^{N_{[d]}} \| A_{i,i}^{(K,L)} - X_d \|_{\mathbb{R}^{N_d \times N_d}}^2 + \sum_{\substack{i,j = 1 \\ i \ne j}}^{N_{[d]}} \| A_{i,j}^{(K,L)} \|_{\mathbb{R}^{N_d \times N_d}}^2.

    Thus, X_d \in \mathbb{R}^{N_d \times N_d} minimizes \| A - P_d(X_d) \|_{\mathbb{R}^{N \times N}}^2 if and only if

    X_d \in \mathop{\mathrm{arg\,min}}\limits_{X \in \mathbb{R}^{N_d \times N_d}} \sum_{i=1}^{N_{[d]}} \| A_{i,i}^{(K,L)} - X \|_{\mathbb{R}^{N_d \times N_d}}^2.

    In consequence,

    X_d = \frac{1}{N_{[d]}} \sum_{i=1}^{N_{[d]}} A_{i,i}^{(K,L)}.

    Case 3: For P_i(X_i), we take K = N_i N_r and L = N_l; hence,

    A - P_i(X_i) = \begin{pmatrix} A_{1,1}^{(K,L)} - X_i \otimes \mathrm{id}_{N_r} & A_{1,2}^{(K,L)} & \cdots & A_{1,N_l}^{(K,L)} \\ A_{2,1}^{(K,L)} & A_{2,2}^{(K,L)} - X_i \otimes \mathrm{id}_{N_r} & \cdots & A_{2,N_l}^{(K,L)} \\ \vdots & \vdots & \ddots & \vdots \\ A_{N_l,1}^{(K,L)} & A_{N_l,2}^{(K,L)} & \cdots & A_{N_l,N_l}^{(K,L)} - X_i \otimes \mathrm{id}_{N_r} \end{pmatrix}.

    In this case,

    \| A - P_i(X_i) \|_{\mathbb{R}^{N \times N}}^2 = \sum_{n=1}^{N_l} \| A_{n,n}^{(K,L)} - X_i \otimes \mathrm{id}_{N_r} \|_{\mathbb{R}^{N_i N_r \times N_i N_r}}^2 + \sum_{\substack{n,j = 1 \\ n \ne j}}^{N_l} \| A_{n,j}^{(K,L)} \|_{\mathbb{R}^{N_i N_r \times N_i N_r}}^2,

    so we need to solve the following problem:

    \min_{X \in \mathbb{R}^{N_i \times N_i}} \sum_{n=1}^{N_l} \| A_{n,n}^{(K,L)} - X \otimes \mathrm{id}_{N_r} \|_{\mathbb{R}^{N_i N_r \times N_i N_r}}^2. \quad (3.7)

    Since X \otimes \mathrm{id}_{N_r} \in \mathbb{R}^{N_i \times N_i} \otimes \mathrm{span}\{\mathrm{id}_{N_r}\}, we can write (3.7) as

    \min_{Z \in \mathbb{R}^{N_i \times N_i} \otimes \mathrm{span}\{\mathrm{id}_{N_r}\}} \sum_{n=1}^{N_l} \| A_{n,n}^{(K,L)} - Z \|_{\mathbb{R}^{N_i N_r \times N_i N_r}}^2. \quad (3.8)

    Observe that

    \bar{A} = (\bar{a}_{u,v}) = \frac{1}{N_l} \sum_{n=1}^{N_l} A_{n,n}^{(K,L)} = \mathop{\mathrm{arg\,min}}\limits_{U \in \mathbb{R}^{N_i N_r \times N_i N_r}} \sum_{n=1}^{N_l} \| A_{n,n}^{(K,L)} - U \|_{\mathbb{R}^{N_i N_r \times N_i N_r}}^2.

    To simplify the notation, we write \mathcal{U} := \mathbb{R}^{N_i \times N_i} \otimes \mathrm{span}\{\mathrm{id}_{N_r}\}. Then, we have the orthogonal decomposition \mathbb{R}^{N_i N_r \times N_i N_r} = \mathcal{U} \oplus \mathcal{U}^{\bot}. Denote by \Pi_{\mathcal{U}} the orthogonal projection onto the linear subspace \mathcal{U}. Then, for each Z \in \mathcal{U}, we have

    \| A_{n,n}^{(K,L)} - Z \|_2^2 = \| (\mathrm{id} - \Pi_{\mathcal{U}})(A_{n,n}^{(K,L)}) + \Pi_{\mathcal{U}}(A_{n,n}^{(K,L)}) - Z \|_2^2 = \| (\mathrm{id} - \Pi_{\mathcal{U}})(A_{n,n}^{(K,L)}) \|_2^2 + \| \Pi_{\mathcal{U}}(A_{n,n}^{(K,L)}) - Z \|_2^2,

    because (\mathrm{id} - \Pi_{\mathcal{U}})(A_{n,n}^{(K,L)}) \in \mathcal{U}^{\bot} and \Pi_{\mathcal{U}}(A_{n,n}^{(K,L)}) - Z \in \mathcal{U}. In consequence, solving (3.8) is equivalent to solving the following optimization problem:

    \min_{Z \in \mathcal{U}} \sum_{n=1}^{N_l} \| \Pi_{\mathcal{U}}(A_{n,n}^{(K,L)}) - Z \|_{\mathbb{R}^{N_i N_r \times N_i N_r}}^2. \quad (3.9)

    Thus,

    Z = \frac{1}{N_l} \sum_{n=1}^{N_l} \Pi_{\mathcal{U}}(A_{n,n}^{(K,L)}) = \mathop{\mathrm{arg\,min}}\limits_{Z \in \mathcal{U}} \sum_{n=1}^{N_l} \| \Pi_{\mathcal{U}}(A_{n,n}^{(K,L)}) - Z \|_{\mathbb{R}^{N_i N_r \times N_i N_r}}^2,

    that is, Z = \Pi_{\mathcal{U}}(\bar{A}); hence,

    Z = \mathop{\mathrm{arg\,min}}\limits_{Z \in \mathcal{U}} \| \bar{A} - Z \|_2 = X_i \otimes \mathrm{id}_{N_r}, \quad \text{with} \quad X_i = \mathop{\mathrm{arg\,min}}\limits_{X \in \mathbb{R}^{N_i \times N_i}} \| \bar{A} - X \otimes \mathrm{id}_{N_r} \|_{\mathbb{R}^{N_i N_r \times N_i N_r}}^2.

    Proceeding in a similar way as in Case 1, we obtain

    (X_i)_{u,v} = \frac{1}{N_r} \sum_{m=1}^{N_r} \bar{a}_{(u-1)N_r+m, \, (v-1)N_r+m} = \frac{1}{N_r} \frac{1}{N_l} \sum_{m=1}^{N_r} \left( \sum_{n=1}^{N_l} A_{n,n}^{(K,L)} \right)_{(u-1)N_r+m, \, (v-1)N_r+m}

    for 1 \le u, v \le N_i. This concludes the proof of the theorem.

    To conclude, we obtain the following useful corollary.

    Corollary 3.4. Let A \in \mathbb{R}^{N \times N}, with N = N_1 \cdots N_d. For each fixed 1 \le k \le d, consider the linear function P_k:\mathbb{R}^{N_k \times N_k} \longrightarrow \mathbb{R}^{N \times N} given by

    P_k(X_k): = \mathrm{id}_{N_1}\otimes \dots \otimes \mathrm{id}_{N_{k-1}} \otimes X_k \otimes \mathrm{id}_{N_{k+1}} \otimes \dots \otimes \mathrm{id}_{N_d}.

    For each 1 \le k \le d, let X_k \in \mathbb{R}^{N_k \times N_k} be the solution of the optimization problem (3.6). Then,

    \begin{align} L_A = \frac{ \mathrm{tr}(A)}{N} \mathrm{id}_N + \sum\limits_{k = 1}^dP_k\left( X_k - \frac{ \mathrm{tr}(X_k)}{N_k} \mathrm{id}_{N_k}\right) = \mathop {{\rm{arg}}\;{\rm{min}}}\limits_{L \in \mathcal{L}\left(\mathbb{R}^{N \times N}\right)} \|A-L\|_2. \end{align} (3.10)

    Proof. Observe that, for 1 \le k \le d, the matrix X_k satisfies

    P_k(X_k) = \mathop {{\rm{arg}}\;{\rm{min}}}\limits_{Z \in \mathfrak{h}^{(k)}}\|A-Z\|_2,

    where

    \mathfrak{h}^{(k)} : = \mathrm{span}\{ \mathrm{id}_{N_1}\} \otimes \cdots \otimes \mathrm{span}\{ \mathrm{id}_{N_{k-1}}\} \otimes \mathbb{R}^{N_k \times N_k} \otimes \mathrm{span}\{ \mathrm{id}_{N_{k+1}}\} \otimes \cdots \otimes \mathrm{span}\{ \mathrm{id}_{N_d}\}

    is a linear subspace of \mathbb{R}^{N \times N} that is linearly isomorphic to \mathbb{R}^{N_k \times N_k}. Since \mathbb{R}^{N_k \times N_k} = \mathrm{span}\{ \mathrm{id}_{N_k}\} \oplus \mathrm{span}\{ \mathrm{id}_{N_k}\}^{\bot}, we have

    X_k = \frac{ \mathrm{tr}(X_k)}{N_k} \mathrm{id}_{N_k} + \left( X_k - \frac{ \mathrm{tr}(X_k)}{N_k} \mathrm{id}_{N_k}\right);

    hence

    \begin{align*} P_k(X_k) & = P_k\left(\frac{ \mathrm{tr}(X_k)}{N_k} \mathrm{id}_{N_k} \right) + P_k\left( X_k - \frac{ \mathrm{tr}(X_k)}{N_k} \mathrm{id}_{N_k}\right) \\ & = \frac{ \mathrm{tr}(X_k)}{N_k} \mathrm{id}_{N} + P_k\left( X_k - \frac{ \mathrm{tr}(X_k)}{N_k} \mathrm{id}_{N_k}\right). \end{align*}

    We can conclude that \Pi_k(A) = P_k\left(X_k - \frac{ \mathrm{tr}(X_k)}{N_k} \mathrm{id}_{N_k}\right); recall that \Pi_k is the orthogonal projection of \mathbb{R}^{N \times N} onto the linear subspace

    \mathrm{span}\{ \mathrm{id}_{N_1}\} \otimes \cdots \otimes \mathrm{span}\{ \mathrm{id}_{N_{k-1}}\} \otimes \mathrm{span}\{ \mathrm{id}_{N_{k}}\}^{\bot} \otimes \mathrm{span}\{ \mathrm{id}_{N_{k+1}}\} \otimes \cdots \otimes \mathrm{span}\{ \mathrm{id}_{N_d}\}.

    From (3.4), the corollary is proved.
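    The construction in Theorem 3.3 and Corollary 3.4 condenses into a short computation. The following Python/NumPy sketch is our own rendering, not the paper's literal formulas: the reshape/einsum route is an equivalent way to take the block-diagonal averages of Theorem 3.3, and the recentring step implements (3.10):

```python
import numpy as np

def best_laplacian_approximation(A, Ns):
    """Project A onto the Laplacian-like subspace L(R^{N x N}) for the
    fixed decomposition N = N1*...*Nd (a sketch of Thm 3.3 / Cor 3.4)."""
    d = len(Ns)
    N = int(np.prod(Ns))
    L = (np.trace(A) / N) * np.eye(N)          # the tr(A)/N * id_N term
    for k in range(d):
        P = int(np.prod(Ns[:k]))               # N_1 ... N_{k-1}
        Q = int(np.prod(Ns[k + 1:]))           # N_{k+1} ... N_d
        # View A as a 6-way tensor and average over the identity
        # directions: this einsum realizes the diagonal-block averages
        # of Theorem 3.3 in one step, giving the optimal X_k.
        T = A.reshape(P, Ns[k], Q, P, Ns[k], Q)
        Xk = np.einsum('aibajb->ij', T) / (P * Q)
        # Corollary 3.4: recentre X_k to null trace before embedding.
        Xk -= (np.trace(Xk) / Ns[k]) * np.eye(Ns[k])
        L += np.kron(np.kron(np.eye(P), Xk), np.eye(Q))
    return L
```

    For a matrix assembled with the laplacian_like helper above, the residue \| A - L_A \| computed this way vanishes up to rounding, which is a convenient sanity check of the sketch.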

    In this section, we consider a generic second-order partial differential equation without mixed derivatives and with homogeneous boundary conditions. More precisely, let

    \begin{align} \alpha \mathbf{u}_{xx} + \beta \mathbf{u}_{yy} + \gamma \mathbf{u}_x + \delta \mathbf{u}_y +\mu \mathbf{u} & = \mathbf{f} \text{ for } (x,y) \in (0,1) \times (0,1), \end{align} (4.1)
    \begin{align} \mathbf{u}(x,0) = \mathbf{u}(x,1) = \mathbf{u}(0,y) = \mathbf{u}(1,y) & = 0 \text{ for all } 0 \le x \le 1 \text{ and } 0 \le y \le 1. \end{align} (4.2)

    We discretize (4.1) with the help of the following derivative approximations:

    \mathbf{u}_{x}(x,y)\approx \dfrac{ \mathbf{u}(x_{i+1},y_j)- \mathbf{u}(x_{i-1},y_j)}{2h}, \quad \mathbf{u}_{y}(x,y)\approx \dfrac{ \mathbf{u}(x_{i},y_{j+1})- \mathbf{u}(x_{i},y_{j-1})}{2k},

    and

    \begin{matrix} \begin{aligned} \mathbf{u}_{xx}(x,y)&\approx \dfrac{ \mathbf{u}(x_{i+1},y_j)-2 \mathbf{u}(x_i,y_j)+ \mathbf{u}(x_{i-1},y_j)}{h^2},\\ \mathbf{u}_{yy}(x,y)&\approx \dfrac{ \mathbf{u}(x_{i},y_{j+1})-2 \mathbf{u}(x_i,y_j)+ \mathbf{u}(x_{i},y_{j-1})}{k^2} \end{aligned} \end{matrix}

    for i = 1, \dots, N and j = 1, \dots, M. From (4.2), we have that \mathbf{u}(x, y_0) = \mathbf{u}(x, y_{M+1}) = \mathbf{u}(x_0, y) = \mathbf{u}(x_{N+1}, y) = 0 for all 0 \le x \le 1 and 0 \le y \le 1.

    Next, in order to obtain a linear system, we put \mathbf{u}_{\ell} : = \mathbf{u}(x_i, y_j) and \mathbf{f}_{\ell}: = \mathbf{f}(x_i, y_j), where \ell: = (i-1)M+j for 1 \le i \le N and 1 \le j \le M. In this way, the mesh is traversed as shown in Figure 1, and the elements U = (\mathbf{u}_{\ell})_{\ell = 1}^{MN} and F = (\mathbf{f}_{\ell})_{\ell = 1}^{MN} are column vectors. This allows us to represent (4.1) and (4.2) as the linear system AU = F , where A is the MN \times MN block matrix

    \begin{equation} A = \begin{pmatrix} T & D_1 & & &\\ D_2 & T & D_1 & &\\ & \ddots & \ddots & \ddots& \\ & & D_2 & T & D_1 \\ & & & D_2 & T \end{pmatrix} \end{equation} (4.3)
    Figure 1.  Proceeding from (1, 1) to (1, M) ; (2, 1), \dots, (2, M) ; and ending at (N, 1), \dots, (N, M).

    where T \in \mathbb{R}^{M \times M} is given by

    \begin{equation*} T = \begin{pmatrix} 2\mu h^2k^2-4\alpha k^2-4\beta h^2 & 2\beta h^2+\delta h^2k & 0 & \dots & 0 \\ 2\beta h^2-\delta h^2k & 2\mu h^2k^2-4\alpha k^2-4\beta h^2 & 2\beta h^2+\delta h^2k & \dots & 0 \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ 0 & \dots & 0 & 2\beta h^2-\delta h^2k & 2\mu h^2k^2-4\alpha k^2-4\beta h^2 \end{pmatrix}, \end{equation*}

    and D_1, D_2 \in \mathbb{R}^{M \times M} are the diagonal matrices:

    \begin{equation*} D_1 = (2\alpha k^2+\gamma hk^2) \mathrm{id}_M, \quad D_2 = (2\alpha k^2-\gamma hk^2) \mathrm{id}_M. \end{equation*}

    In this case, \mathrm{tr}(A) = NM(2\mu h^2k^2-4\alpha k^2-4\beta h^2); so, instead of looking for L_A, as in (3.10), we will look for L_{\hat{A}}, where

    \hat{A} = \left(A-\dfrac{ \mathrm{tr}(A)}{NM} \mathrm{id}_{NM}\right)

    has a null trace. Proceeding according to Theorem 3.3 for sizes N_1 = N and N_2 = M , we obtain the following decomposition:

    X_1 = \begin{pmatrix} 0 & 2\alpha k^2+\gamma hk^2 & 0 & \dots & 0 \\ 2\alpha k^2-\gamma hk^2 & 0 & 2\alpha k^2+\gamma hk^2 & \dots & 0 \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ 0 & \dots & 2\alpha k^2-\gamma hk^2 & 0 & 2\alpha k^2+\gamma hk^2 \\ 0 & \dots & 0 & 2\alpha k^2-\gamma hk^2 & 0 \end{pmatrix} \in \mathbb{R}^{N \times N}

    and

    X_2 = \begin{pmatrix} 0 & 2\beta h^2+\delta h^2k & 0 & \dots & 0 \\ 2\beta h^2-\delta h^2k & 0 & 2\beta h^2+\delta h^2k & \dots & 0 \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ 0 & \dots & 2\beta h^2-\delta h^2k & 0 & 2\beta h^2+\delta h^2k \\ 0 & \dots & 0 & 2\beta h^2-\delta h^2k & 0 \end{pmatrix} \in \mathbb{R}^{M \times M}.

    We remark that \mathrm{tr}(X_1) = \mathrm{tr}(X_2) = 0. Moreover, the residual of the approximation L_{\hat{A}} of \hat{A} is \| \hat{A} - L_{\hat{A}} \| = 0. In consequence, we can write the original matrix A as

    A = \dfrac{ \mathrm{tr}(A)}{NM} \mathrm{id}_{NM} + X_1 \otimes \mathrm{id}_{M} + \mathrm{id}_{N} \otimes X_2.

    Recall that the first term is

    \dfrac{ \mathrm{tr}(A)}{NM} \mathrm{id}_{NM} = \left(2\mu h^2k^2 -4\alpha k^2-4\beta h^2 \right) \cdot \mathrm{id}_{NM} = \left(2\mu h^2k^2 -4\alpha k^2-4\beta h^2 \right) \cdot \mathrm{id}_{N} \otimes \mathrm{id}_{M};

    hence, A can be written as

    \begin{equation*} \begin{matrix} \begin{aligned} A & = Z_1 \otimes \mathrm{id}_{M} + \mathrm{id}_{N} \otimes Z_2, \end{aligned} \end{matrix} \end{equation*}

    where

    Z_1 = \begin{pmatrix} \mu h^2k^2-2\alpha k^2-2\beta h^2 & 2\alpha k^2+\gamma hk^2 & 0 & \dots & 0 \\ 2\alpha k^2-\gamma hk^2 & \mu h^2k^2-2\alpha k^2-2\beta h^2 & 2\alpha k^2+\gamma hk^2 & \dots & 0 \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ 0 & \dots & 2\alpha k^2-\gamma hk^2 & \mu h^2k^2-2\alpha k^2-2\beta h^2 & 2\alpha k^2+\gamma hk^2 \\ 0 & \dots & 0 & 2\alpha k^2-\gamma hk^2 & \mu h^2k^2-2\alpha k^2-2\beta h^2 \end{pmatrix}

    is an N \times N -matrix and

    Z_2 = \begin{pmatrix} \mu h^2k^2-2\alpha k^2-2\beta h^2 & 2\beta h^2+\delta h^2k & 0 & \dots & 0 \\ 2\beta h^2-\delta h^2k & \mu h^2k^2-2\alpha k^2-2\beta h^2 & 2\beta h^2+\delta h^2k & \dots & 0 \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ 0 & \dots & 2\beta h^2-\delta h^2k & \mu h^2k^2-2\alpha k^2-2\beta h^2 & 2\beta h^2+\delta h^2k \\ 0 & \dots & 0 & 2\beta h^2-\delta h^2k & \mu h^2k^2-2\alpha k^2-2\beta h^2 \end{pmatrix}

    is an M \times M -matrix.

    Now, we can use this representation of A to implement the GROU Algorithm 1, together with the ALS strategy given by Algorithm 2, to solve the following linear system:

    AU = (Z_1 \otimes \mathrm{id}_{M} + \mathrm{id}_{N} \otimes Z_2)U = F.

    This study can be extended to high-dimensional equations, as occurs in [8] with the three-dimensional Poisson equation.
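    To make the construction concrete, the following sketch (our own code; tridiag and pde_matrix are hypothetical helper names, not from the paper) assembles A = Z_1 \otimes \mathrm{id}_M + \mathrm{id}_N \otimes Z_2 directly from the coefficients of (4.1), following the splitting derived above:

```python
import numpy as np

def tridiag(n, lo, di, up):
    """n x n tridiagonal matrix with constant sub-, main and super-diagonals."""
    return (np.diag(np.full(n - 1, lo), -1)
            + np.diag(np.full(n, di))
            + np.diag(np.full(n - 1, up), 1))

def pde_matrix(N, M, alpha, beta, gamma, delta, mu, h, k):
    """Assemble A = Z1 ⊗ id_M + id_N ⊗ Z2 for the discretization of (4.1),
    using the Laplacian-like splitting of this section (a sketch)."""
    d = mu * h**2 * k**2 - 2 * alpha * k**2 - 2 * beta * h**2
    Z1 = tridiag(N, 2*alpha*k**2 - gamma*h*k**2, d, 2*alpha*k**2 + gamma*h*k**2)
    Z2 = tridiag(M, 2*beta*h**2 - delta*h**2*k, d, 2*beta*h**2 + delta*h**2*k)
    return np.kron(Z1, np.eye(M)) + np.kron(np.eye(N), Z2)
```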

    Next, we are going to consider some particular equations to analyze their numerical behavior. In all cases, the characteristics of the computer used are as follows: 11th Gen Intel(R) Core(TM) i7-11370H @ 3.30 GHz, 16 GB RAM, 64-bit operating system; and Matlab version R2021b [13].

    Let us consider the particular case of the second-order partial differential equation (4.1) with \alpha = \beta = 1 , \gamma = \delta = 0 , \mu = c^2 and \mathbf{f} = 0 , that is,

    \mathbf{u}_{xx}+ \mathbf{u}_{yy}+c^2 \mathbf{u} = 0.

    This is the 2D-Helmholtz equation. To obtain the linear system associated with the discrete problem, we need some boundary conditions; for example,

    \left\{ \begin{matrix} \begin{aligned} \mathbf{u}(x,0)& = \sin(\omega x)+\cos(\omega x) \quad \text{for} \quad 0\leq x \leq L, \\ \mathbf{u}(0,y)& = \sin(\omega y)+\cos(\omega y) \quad \text{for} \quad 0 \leq y \leq T, \end{aligned} \end{matrix} \right.

    and

    \left\{ \begin{matrix} \begin{aligned} \mathbf{u}(x,T)& = \sin\left(\omega (x+T)\right)+\cos\left(\omega (x+T)\right) \quad \text{for} \quad 0\leq x \leq L, \\ \mathbf{u}(L,y)& = \sin \left(\omega (y+L)\right) +\cos\left(\omega (y+L)\right) \quad \text{for} \quad 0 \leq y \leq T. \end{aligned} \end{matrix} \right.

    This initial value problem has a closed-form solution for \omega = \dfrac{c}{\sqrt{2}}:

    \mathbf{u}(x,y) = \sin\left(\omega(x+y)\right)+\cos\left(\omega(x+y)\right).

    From the above operations, and by taking h = k for simplicity, we can write the matrix of the discrete linear system associated with the Helmholtz equation as

    \begin{equation*} \begin{matrix} \begin{aligned} A = \begin{pmatrix} 2c^2 h^4-8 h^2 & 2h^2 & 0 & \dots & 0 \\ 2h^2 & 2c^2 h^4- 8 h^2 & 2h^2 & \dots & 0 \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ 0 & \dots & 0 & 2h^2 & 2c^2h^4-8h^2 \end{pmatrix} \otimes \mathrm{id}_{M} + \mathrm{id}_{N} \otimes \begin{pmatrix} 0 & 2 h^2 & 0 & \dots & 0 \\ 2 h^2 & 0 & 2 h^2& \dots & 0 \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ 0 & \dots & 0 & 2h^2& 0 \end{pmatrix} \end{aligned} \end{matrix} \end{equation*}

    or, equivalently,

    \begin{equation*} \begin{matrix} \begin{aligned} A = & \begin{pmatrix} c^2 h^4-4 h^2 & 2h^2 & 0 & \dots & 0 \\ 2h^2 & c^2 h^4-4 h^2 & 2h^2 & \dots & 0 \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ 0 & \dots & 0 & 2h^2 & c^2 h^4-4 h^2 \end{pmatrix} \otimes \mathrm{id}_{M} \\ &+ \mathrm{id}_{N} \otimes \begin{pmatrix} c^2 h^4-4 h^2 & 2 h^2 & 0 & \dots & 0 \\ 2 h^2 & c^2 h^4-4 h^2 & 2 h^2& \dots & 0 \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ 0 & \dots & 0 & 2h^2& c^2 h^4-4 h^2 \end{pmatrix}. \end{aligned} \end{matrix} \end{equation*}

    If we solve this linear system A \mathbf{u}_l = \hat{\mathbf{f}}_{l} for the case c = \sqrt{2} , L = T = 1 , and with N = M , we obtain the timing results shown in Figure 2. To carry out this experiment, we used the following parameter values for the GROU Algorithm 1: \texttt{tol} = 2.2204e-16 ; \varepsilon = 2.2204e-16 ; \texttt{rank\_max} = 10 ( \texttt{iter-max} = 5 and \varepsilon = 2.2204e-16 were used to perform Algorithm 2); and, the number of nodes in (0, 1)^2 (that is, the number of rows or columns of the matrix A ) was increased from 10^2 to 200^2 .

    Figure 2.  CPU time, in seconds, employed to solve the discrete Helmholtz initial value problem by using the Matlab command A\backslash b , the GROU Algorithm 1 and the GROU Algorithm 1 with A written as L_A, as obtained from Corollary 3.4.

    To measure the goodness of the approximations obtained, we calculated the normalized errors, that is, the absolute value of the difference between the results obtained and the exact solution, divided by the length of the solution, i.e.,

    \varepsilon = \dfrac{|exact\,solution-approximate\,solution |}{N^2}

    for the different approximations obtained. The values of these errors were of the order of 10^{-4} and can be seen in Figure 3.

    Figure 3.  Normalized error between the solution of the discrete Helmholtz initial value problem and the solutions obtained by using the Matlab command A\backslash b , the GROU Algorithm 1 and the GROU Algorithm 1 with A written as L_A, as obtained from Corollary 3.4.
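    Assuming the pde_matrix sketch from the previous section, the Helmholtz system of this experiment could be set up as follows (parameter choices mirror the ones above; the helper names are ours):

```python
import numpy as np
# assumes tridiag and pde_matrix from the sketch in Section 4

c = np.sqrt(2.0)
N = M = 50                       # interior nodes per direction
h = 1.0 / (N + 1)                # L = T = 1, uniform step h = k
A = pde_matrix(N, M, 1.0, 1.0, 0.0, 0.0, c**2, h, h)  # alpha=beta=1, mu=c^2
b = np.zeros(N * M)              # in practice b collects f and the boundary data
u = np.linalg.solve(A, b)        # reference direct solve (Matlab's A\b analogue)
```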

    Now, let us consider the partial differential equation of order four:

    \begin{equation} \frac{\partial u}{\partial t} = \varepsilon u - \left( 1+\frac{\partial^2}{\partial x^2}\right)^2u, \end{equation} (5.1)

    with the boundary conditions

    \begin{equation} \left\{ \begin{matrix} \begin{aligned} u(x,0) & = \sin(kx), \\ u(x,T) & = \sin(kx) e^T \end{aligned} \end{matrix} \quad \text{for} \quad 0 \leq x \leq L, \right. \end{equation} (5.2)

    and

    \begin{equation} u(0,t) = u(L,t) = 0 \quad \text{for} \quad 0 \leq t \leq T. \end{equation} (5.3)

    For k = \sqrt{1+\sqrt{\varepsilon-1}} and L = 2\pi/k , the initial value problem (5.1)–(5.3) has the following as a solution:

    u(x,t) = \sin(kx) e^t.

    If we discretize the problem described by (5.1)–(5.3) as in the previous example with the same step size for both variables, h , we obtain a linear system of the form A \mathbf{u}_{l} = \hat{\mathbf{f}}_{l} , where A , in Laplacian-like form, is the matrix

    \begin{equation*} \begin{matrix} \begin{aligned} A = \begin{pmatrix} 12-8 h^2 +(2-2\varepsilon) h^4 & 4h^2-8 & 2 & 0 & \dots & 0 \\ 4h^2-8 & 12-8 h^2 +(2-2\varepsilon) h^4 & 4h^2-8 & 2 & \dots & 0 \\ \vdots & \ddots & \ddots & \ddots & \ddots & \vdots \\ 0 & \dots & 0 & 2 & 4h^2-8 & 12-8 h^2 +(2-2\varepsilon) h^4 \end{pmatrix} \otimes \mathrm{id}_{M} \\ + \mathrm{id}_{N} \otimes \begin{pmatrix} 0 & h^3 & 0 & \dots & 0 \\ -h^3 & 0 & h^3& \dots & 0 \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ 0 & \dots & 0 & -h^3& 0 \end{pmatrix}, \end{aligned} \end{matrix} \end{equation*}

    and l = (i-1)M+j is the order established for the indices, with 1 \leq i \leq N and 1 \leq j \leq M .

    To perform a numerical experiment, we set \varepsilon = 2 , L = T = 2\pi and the same number of points for the two variables. At this point, we can solve the linear system associated with the discrete Swift-Hohenberg problem by using our tools: the Matlab command A\backslash b ; the GROU Algorithm 1; and the GROU Algorithm 1 together with the ALS Algorithm 2, with A written in Laplacian-like form. In this case, we used the following parameter values in the algorithms: \texttt{tol} = 2.2204e-16 ; \varepsilon = 2.2204e-16 ; \texttt{rank\_max} = 10 for the GROU Algorithm 1, with \texttt{iter-max} = 5 for the ALS step; and, the number of nodes in (0, 2\pi)^2 was increased from 10^2 to 200^2 . Figure 4 shows the results obtained.

    Figure 4.  CPU time, in seconds, employed to solve the discrete Swift-Hohenberg initial value problem by using the Matlab command A\backslash b , the GROU Algorithm 1 and the GROU Algorithm 1 with A written in Laplacian form.

    Again, we calculated the normalized errors to estimate the goodness of the approximations, the results of which are shown in Figure 5.

    Figure 5.  Normalized error between the solution of the discrete Swift-Hohenberg initial value problem and the solutions obtained by using the Matlab command A\backslash b , the GROU Algorithm 1 and the GROU Algorithm 1 with A written in Laplacian form.

    In this work, we have studied the Laplacian decomposition algorithm, which, given any square matrix, calculates its best Laplacian-like approximation. Furthermore, in Theorem 3.3, we have shown how to implement it optimally.

    For us, the greatest interest of this algorithm lies in the computational improvement obtained by combining it with the GROU Algorithm 1 to solve linear systems arising from the discretization of a partial differential equation. This improvement can be seen in the different numerical examples shown, where we have compared this procedure with the standard resolution in Matlab by means of the instruction A\backslash b.

    This proposal is a new way of dealing with certain large-scale problems, where classical methods prove to be less efficient.

    The authors declare that they have not used artificial intelligence tools in the creation of this article.

    J. A. Conejero acknowledges funding from grant PID2021-124618NB-C21, funded by MCIN/AEI/ 10.13039/501100011033, and by "ERDF: A way of making Europe", by the "European Union"; M. Mora-Jiménez was supported by the Generalitat Valenciana and the European Social Fund under grant number ACIF/2020/269; A. Falcó was supported by the MICIN grant number RTI2018-093521-B-C32 and Universidad CEU Cardenal Herrera under grant number INDI22/15.

    The authors declare that they have no conflicts of interest.



    [1] Z. Pawlak, Rough sets, Int. J. Inform. Comput. Sci., 11 (1982), 341-356. https://doi.org/10.1007/BF01001956 doi: 10.1007/BF01001956
    [2] L. A. Zadeh, Fuzzy sets, Inform. Control, 8 (1965), 338-353. https://doi.org/10.1016/S0019-9958(65)90241-X doi: 10.1016/S0019-9958(65)90241-X
    [3] D. A. Molodtsov, Soft set theory-first results, Comput. Math. Appl., 37 (1999), 19-31. https://doi.org/10.1016/S0898-1221(99)00056-5 doi: 10.1016/S0898-1221(99)00056-5
    [4] F. Feng, X. Liu, V. Leoreanu-Fotea, Y. B. Jun, Soft sets and soft rough sets, Inform. Sci., 181 (2011), 1125-1137. https://doi.org/10.1016/j.ins.2010.11.004 doi: 10.1016/j.ins.2010.11.004
    [5] D. P. Acharjya, D. Arya, Multicriteria decision-making using interval valued neutrosophic soft set, Artificial Intelligence and Global Society, Chapman and Hall/CRC, New York, 2021,275. https://doi.org/10.1201/9781003006602
    [6] N. Kumari, D. P. Acharjya, Data classification using rough set and bioinspired computing in healthcare applications-an extensive review, Multimed. Tools Appl., 2022, 1-27. https://doi.org/10.1007/s11042-022-13776-1 doi: 10.1007/s11042-022-13776-1
    [7] N. Kumari, D. P. Acharjya, A decision support system for diagnosis of hepatitis disease using an integrated rough set and fish swarm algorithm, Concurr. Comp.-Pract. E., 34 (2022), e7107. https://doi.org/10.1002/cpe.7107 doi: 10.1002/cpe.7107
    [8] M. K. El-Bably, E. A. Abo-Tabl, A topological reduction for predicting of a lung cancer disease based on generalized rough sets, J. Intell. Fuzzy Syst., 41 (2021), 335-346. https://doi.org/10.1155/2021/2559495 doi: 10.1155/2021/2559495
    [9] M. K. El-Bably, A. A. El Atik, Soft \beta-rough sets and its application to determine COVID-19, Turk. J. Math., 45 (2021), 1133-1148. https://doi.org/10.3906/mat-2008-93 doi: 10.3906/mat-2008-93
    [10] E. A. Marei, Neighborhood system and decision making, Master's Thesis, Zagazig University, Zagazig, Egypt, 2007.
    [11] E. A. Marei, Generalized soft rough approach with a medical decision making problem, Eur. J. Sci. Res., 133 (2015), 49-65.
    [12] R. Abu-Gdairi, M. A. El-Gayar, T. M. Al-shami, A. S. Nawar, M. K. El-Bably, Some topological approaches for generalized rough sets and their decision-making applications, Symmetry, 14 (2022). https://doi.org/10.3390/sym14010095 doi: 10.3390/sym14010095
    [13] M. A. El-Gayar, A. A. El Atik, Topological models of rough sets and decision making of COVID-19, Complexity, 2022 (2022), 2989236. https://doi.org/10.1155/2021/2989236 doi: 10.1155/2021/2989236
    [14] H. Lu, A. M. Khalil, W. Alharbi, M. A. El-Gayar, A new type of generalized picture fuzzy soft set and its application in decision making, J. Intell. Fuzzy Syst., 40 (2021), 12459-12475. https://doi.org/10.3233/JIFS-201706 doi: 10.3233/JIFS-201706
    [15] E. A. Abo-Tabl, M. K. El-Bably, Rough topological structure based on reflexivity with some applications, AIMS Math., 7 (2022), 9911-9925. https://doi.org/10.3934/math.2022553 doi: 10.3934/math.2022553
    [16] M. K. El-Bably, M. I. Ali, E. A. Abo-Tabl, New topological approaches to generalized soft rough approximations with medical applications, J. Math., 2021 (2021), 2559495. https://doi.org/10.1155/2021/2559495 doi: 10.1155/2021/2559495
    [17] M. K. El-Bably, M. El-Sayed, Three methods to generalize Pawlak approximations via simply open concepts with economic applications, Soft Comput., 26 (2022), 4685-4700. https://doi.org/10.1007/s00500-022-06816-3 doi: 10.1007/s00500-022-06816-3
    [18] M. El Sayed, M. A. El Safty, M. K. El-Bably, Topological approach for decision-making of COVID-19 infection via a nano-topology model, AIMS Math., 6 (2021), 7872-7894. https://doi.org/10.3934/math.2021457 doi: 10.3934/math.2021457
    [19] M. M. El-Sharkasy, Topological model for recombination of DNA and RNA, Int. J. Biomath., 11 (2018). https://doi.org/10.1142/S1793524518500973 doi: 10.1142/S1793524518500973
    [20] A. S. Nawar, M. A. El-Gayar, M. K. El-Bably, R. A. Hosny, \theta \beta-ideal approximation spaces and their applications, AIMS Math., 7 (2022), 2479-2497. https://doi.org/10.3934/math.2022139 doi: 10.3934/math.2022139
    [21] R. Abu-Gdairi, M. A. El-Gayar, M. K. El-Bably, K. K. Fleifel, Two views for generalized rough sets with applications, Mathematics, 9 (2021), 2275. https://doi.org/10.3390/math9182275 doi: 10.3390/math9182275
    [22] Z. Li, T. Xie, Q. Li, Topological structure of generalized rough sets, Comput. Math. Appl., 63 (2021), 1066-1071. https://doi.org/10.1016/j.camwa.2011.12.011 doi: 10.1016/j.camwa.2011.12.011
    [23] M. E. Abd El-Monsef, M. A. EL-Gayar, R. M. Aqeel, On relationships between revised rough fuzzy approximation operators and fuzzy topological spaces, Int. J. Granul. Comput. Rough Set. Intel. Syst., 3 (2014), 257-271. https://doi.org/10.1504/IJGCRSIS.2014.068022 doi: 10.1504/IJGCRSIS.2014.068022
    [24] M. K. El-Bably, T. M. Al-shami, Different kinds of generalized rough sets based on neighborhoods with a medical application, Int. J. Biomath., 14 (2021), 2150086. https://doi.org/10.1142/S1793524521500868 doi: 10.1142/S1793524521500868
    [25] M. E. A. El-Monsef, M. A. El-Gayar, R. M. Aqeel, A comparison of three types of rough fuzzy sets based on two universal sets, Int. J. Mach. Learn. Cyb., 8 (2017), 343-353. https://doi.org/10.1007/s13042-015-0327-8 doi: 10.1007/s13042-015-0327-8
    [26] M. I. Ali, A note on soft sets, rough soft sets and fuzzy soft sets, Appl. Soft Comput., 11 (2011), 3329-3332. https://doi.org/10.1016/j.asoc.2011.01.003 doi: 10.1016/j.asoc.2011.01.003
    [27] E. A. Abo-Tabl, A comparison of two kinds of definitions of rough approximations based on a similarity relation, Inform. Sci., 181 (2011), 2587-2596. https://doi.org/10.1016/j.ins.2011.01.007 doi: 10.1016/j.ins.2011.01.007
    [28] A. A. Allam, M. Y. Bakeir, E. A. Abo-Tabl, New approach for basic rough set concepts, In: Rough Sets, Fuzzy Sets, Data Mining, and Granular Computing, Part of the Lecture Notes in Artificial Intelligence, Springer, Berlin, Heidelberg, 2005, 64-73.
    [29] Y. Yao, Generalized rough set models, In: Rough Sets in knowledge Discovery, Polkowski, Physica Verlag, Heidelberg 1998,286-318.
    [30] P. Zhang, T. Li, C. Luo, G. Wang, AMG-DTRS: Adaptive multi-granulation decision-theoretic rough sets, Int. J. Approx. Reason., 140 (2022), 7-30. https://doi.org/10.1016/j.ijar.2021.09.017 doi: 10.1016/j.ijar.2021.09.017
    [31] M. Suo, L. Tao, B. Zhu, X. Miao, Z. Liang, Y. Ding, et al., Single-parameter decision-theoretic rough set, Inform. Sci., 539 (2020), 49-80. https://doi.org/10.1016/j.ins.2020.05.124 doi: 10.1016/j.ins.2020.05.124
    [32] H. Dou, X. Yang, X. Song, H. Yu, W. Z. Wu, J. Yang, Decision-theoretic rough set: A multicost strategy, Knowl.-Based Syst., 91 (2016), 71-83. https://doi.org/10.1016/j.knosys.2015.09.011 doi: 10.1016/j.knosys.2015.09.011
    [33] Y. Yao, Three-way decisions with probabilistic rough sets, Inform. Sci., 180 (2010), 341-353. https://doi.org/10.1016/j.ins.2009.09.021 doi: 10.1016/j.ins.2009.09.021
    [34] Y. Yao, J. Yang, Granular rough sets and granular shadowed sets: Three-way approximations in Pawlak approximation spaces, Int. J. Approx. Reason., 142 (2022), 231-247. https://doi.org/10.1016/j.ijar.2021.11.012 doi: 10.1016/j.ijar.2021.11.012
    [35] Y. Yao, Three-way granular computing, rough sets, and formal concept analysis, Int. J. Approx. Reason., 116 (2022), 106-125. https://doi.org/10.3917/empa.125.0116 doi: 10.3917/empa.125.0116
    [36] J. Yang, Y. Yao, Semantics of soft sets and three-way decision with soft sets, Knowl.-Based Syst., 194 (2020), 105538. https://doi.org/10.1016/j.knosys.2020.105538 doi: 10.1016/j.knosys.2020.105538
    [37] Z. Liu, J. C. R. Alcantud, K. Qin, L. Xiong, The soft sets and fuzzy sets-based neural networks and application, IEEE Access, 8 (2020), 41615-41625. https://doi.org/10.1109/ACCESS.2020.2976731 doi: 10.1109/ACCESS.2020.2976731
    [38] M. El Sayed, A. A. Q. Al Qubati, M. K. El-Bably, Soft pre-rough sets and its applications in decision making, Math. Biosci. Eng., 17 (2021), 6045-6063. https://doi.org/10.3934/mbe.2020321 doi: 10.3934/mbe.2020321
    [39] A. R. Roy, P. K. Maji, Fuzzy soft set theoretic approach to decision making problems, J. Comput. Appl. Math., 203 (2007), 412-418. https://doi.org/10.1016/j.cam.2006.04.008 doi: 10.1016/j.cam.2006.04.008
    [40] P. K. Maji, A. R. Roy, R. Biswas, An application of soft sets in a decision making problem, Comput. Math. Appl., 44 (2002), 1077-1083. https://doi.org/10.1016/S0898-1221(02)00216-X doi: 10.1016/S0898-1221(02)00216-X
    [41] O. Dalkılıç, N. Demirtaş, Algorithms for Covid-19 outbreak using soft set theory: Estimation and application, Soft Comput., 2022. https://doi.org/10.1007/s00500-022-07519-5 doi: 10.1007/s00500-022-07519-5
    [42] O. Dalkılıç, N. Demirtaş, Decision analysis review on the concept of class for bipolar soft set theory, Comput. Appl. Math., 41 (2022), 205. https://doi.org/10.1007/s40314-022-01922-2 doi: 10.1007/s40314-022-01922-2
    [43] M. Shabir, M. I. Ali, T. Shaheen, Another approach to soft rough sets, Knowl.-Based Syst., 40 (2013), 72-80. https://doi.org/10.1016/j.knosys.2012.11.012 doi: 10.1016/j.knosys.2012.11.012
    [44] J. C. R. Alcantud, The semantics of N-soft sets, their applications, and a coda about three-way decision, Inform. Sci., 606 (2022), 837-852. https://doi.org/10.1016/j.ins.2022.05.084 doi: 10.1016/j.ins.2022.05.084
    [45] J. C. R. Alcantud, G. Santos-García, M. Akram, OWA aggregation operators and multi-agent decisions with N-soft sets, Expert Syst. Appl., 203 (2022), 117430. https://doi.org/10.1016/j.eswa.2022.117430 doi: 10.1016/j.eswa.2022.117430
    [46] M. Akram, U. Amjad, J. C. R. Alcantud, G. Santos-García, Complex fermatean fuzzy N-soft sets: A new hybrid model with applications, J. Amb. Intel. Hum. Comput., 2022. https://doi.org/10.1007/s12652-021-03629-4 doi: 10.1007/s12652-021-03629-4
    [47] J. C. R. Alcantud, J. Zhan, Convex rough sets on finite domains, Inform. Sci., 611 (2022), 81-94. https://doi.org/10.1016/j.ins.2022.08.013 doi: 10.1016/j.ins.2022.08.013
    [48] J. C. R. Alcantud, The relationship between fuzzy soft and soft topologies, Int. J. Fuzzy Syst., 24 (2022), 1653-1668. https://doi.org/10.1007/s40815-021-01225-4 doi: 10.1007/s40815-021-01225-4
    [49] J. C. R. Alcantud, J. Zhan, Multi-granular soft rough covering sets, Soft Comput., 24 (2020), 9391-9402. https://doi.org/10.1007/s00500-020-04987-5 doi: 10.1007/s00500-020-04987-5
    [50] J. C. R. Alcantud, F. Feng, R. R. Yager, An N-soft set approach to rough sets, IEEE T. Fuzzy Syst., 28 (2020), 2996-3007. https://doi.org/10.1109/TFUZZ.2019.2946526 doi: 10.1109/TFUZZ.2019.2946526
    [51] J. C. R. Alcantud, Some formal relationships among soft sets, fuzzy sets, and their extensions, Int. J. Approx. Reason., 68 (2016), 45-53. https://doi.org/10.1016/j.ijar.2015.10.004 doi: 10.1016/j.ijar.2015.10.004
    [52] T. K. Das, D. P. Acharjya, A decision making model using soft set and rough set on fuzzy approximation spaces, Int. J. Intell. Syst. Technol. Appl., 13 (2014), 170-186. https://doi.org/10.1504/IJISTA.2014.065172 doi: 10.1504/IJISTA.2014.065172
    [53] D. P. Acharjya, P. K. Ahmed, A hybridized rough set and bat-inspired algorithm for knowledge inferencing in the diagnosis of chronic liver disease, Multimed. Tools Appl., 81 (2022), 13489-13512. https://doi.org/10.1007/s11042-021-11495-7 doi: 10.1007/s11042-021-11495-7
    [54] D. P. Acharjya, A. Abraham, Rough computing–-A review of abstraction, hybridization and extent of applications, Eng. Appl. Artif. Intel., 96 (2020), 103924. https://doi.org/10.1016/j.engappai.2020.103924 doi: 10.1016/j.engappai.2020.103924
    [55] D. P. Acharjya, Knowledge inferencing using artificial bee colony and rough set for diagnosis of hepatitis disease, Int. J. Healthc. Inf. Sy., 16 (2021), 49-72. https://doi.org/10.1353/aph.2021.0086 doi: 10.1353/aph.2021.0086
    [56] D. P. Acharjya, A hybrid scheme for heart disease diagnosis using rough set and cuckoo search technique, J. Med. Syst., 44 (2020). https://doi.org/10.1007/s10916-019-1497-9 doi: 10.1007/s10916-019-1497-9
    [57] X. Yang, D. Yu, J. Yang, C. Wu, Generalization of soft set theory: From crisp to fuzzy case, Fuzzy Information and Engineering, Springer, Berlin, Heidelberg, 40 (2007), 345-354. https://doi.org/10.1007/BF03215615
    [58] J.-B. Liu, S. Ali, M. K. Mahmood, M. H. Mateen, On m-polar diophantine fuzzy N-soft set with applications, Comb. Chem. High T. Scr., 25 (2022), 536-546. https://doi.org/10.2174/1386207323666201230092354 doi: 10.2174/1386207323666201230092354
    [59] M. S. Hameed, S. Mukhtar, H. N. Khan, S. Ali, M. H. Mateen, M. Gulzar, Pythagorean fuzzy N-soft groups, Indones. J. Electr. Eng. Comput. Sci., 21 (2021), 1030-1038. http://dx.doi.org/10.11591/ijeecs.v21i2.pp1030-1038 doi: 10.11591/ijeecs.v21i2.pp1030-1038
    [60] M. Gulzar, M. H. Mateen, Y. M. Chu, D. Alghazzawi, G. Abbas, Generalized direct product of complex intuitionistic fuzzy subrings, Int. J. Comput. Intell. Syst., 14 (2021), 582-593. https://doi.org/10.2991/ijcis.d.210106.001 doi: 10.2991/ijcis.d.210106.001
    [61] M. Gulzar, M. H. Mateen, D. Alghazzawi, N. Kausar, A novel applications of complex intuitionistic fuzzy sets in group theory, IEEE Access, 8 (2021), 196075-196085. https://doi.org/10.1109/ACCESS.2020.3034626 doi: 10.1109/ACCESS.2020.3034626
    [62] F. Tchier, G. Ali, M. Gulzar, D. Pamučar, G. Ghorai, A new group decision-making technique under picture fuzzy soft expert information, Entropy, 23 (2021), 1176. https://doi.org/10.3390/e23091176 doi: 10.3390/e23091176
    [63] Chikungunya, World Health Organization, Geneva, Switzerland. Available from: https://www.who.int/news-room/fact-sheets/detail/chikungunya.
    [64] M. L. Thivagar, C. Richard, N. R. Paul, Mathematical innovations of a modern topology in medical events, Int. J. Inform. Sci., 2 (2012), 33-36. https://doi.org/10.25291/VR/36-VR-33 doi: 10.25291/VR/36-VR-33
    [65] M. L. Thivagar, C. Richard, On nano forms of weakly open sets, Int. J. Math. Stat. Invent., 1 (2013), 31–37.
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)