
Boosting task scheduling in IoT environments using an improved golden jackal optimization and artificial hummingbird algorithm

  • Applications for the internet of things (IoT) have grown significantly in popularity in recent years, and this has caused a huge increase in the use of cloud services (CSs). However, the reliance on cloud computing (CC) to process and store generated application data is evident in the lengthened response times of delay-sensitive applications; moreover, CC bandwidth limitations and power consumption are still unresolved issues. In order to relieve CC, fog computing (FC) has been developed. FC broadens the offering of CSs to end users and edge devices. Due to its low processing capability, FC handles only light tasks; jobs that require more time are done via CC. This study presents an alternative task scheduling method for IoT environments based on improving the performance of the golden jackal optimization (GJO) with the artificial hummingbird algorithm (AHA). To test the effectiveness of the developed task scheduling technique, named golden jackal artificial hummingbird (GJAH), we conducted a large number of experiments on two separate datasets with varying data sizes. The GJAH algorithm provides better performance than competing task scheduling methods. In particular, GJAH can schedule and carry out tasks more effectively than other algorithms to reduce the makespan time and energy consumption in a cloud-fog computing environment.

    Citation: Ibrahim Attiya, Mohammed A. A. Al-qaness, Mohamed Abd Elaziz, Ahmad O. Aseeri. Boosting task scheduling in IoT environments using an improved golden jackal optimization and artificial hummingbird algorithm[J]. AIMS Mathematics, 2024, 9(1): 847-867. doi: 10.3934/math.2024043




Working with large amounts of data is one of the main challenges we face today. With the rise of social networks and rapid technological advances, we must develop tools that allow us to work with so much information. At this point, the use of tensor products comes into play, since it reduces the number of operations to be carried out and speeds them up. Proof of this is the recent article [1], where tensor products are used to speed up the calculation of matrix products. Other articles that exemplify the usefulness of this operation include [2], where the solution of 2- and 3-dimensional optimal control problems with spectral fractional Laplacian-type operators is studied, and [3], where high-order problems are studied through the use of proper generalized decomposition methods.

When we try to solve a linear system of the form $Ax=b$, in addition to the classical methods, there are methods based on tensors that can be more efficient [4], since the classical methods suffer from the curse of dimensionality, which makes them lose effectiveness as the size of the problem increases. The tensor methods look for the solution in separated form, that is, as the tensor combination

$$x=\sum_{j=1}^{\infty}x_j^1\otimes\cdots\otimes x_j^d,$$

where $x_j^i\in\mathbb{R}^{N_i}$, $d$ is the dimension of the problem, and $\otimes$ is the Kronecker product, as reviewed in the next section. The main family of methods that solves this problem is the proper generalized decomposition family [5], which is based on the greedy rank-one update (GROU) algorithm [6,7]. This algorithm calculates the solution of the linear system $Ax=b$ in separated form; to do so, at each iteration it updates the approximation of the solution with the term that minimizes the remaining residual. Furthermore, there are certain square matrices for which the GROU algorithm has improved convergence, i.e., matrices of the form

$$A=\sum_{i=1}^{d}\mathrm{id}_{N_1}\otimes\cdots\otimes\mathrm{id}_{N_{i-1}}\otimes A_i\otimes\mathrm{id}_{N_{i+1}}\otimes\cdots\otimes\mathrm{id}_{N_d},$$

where $\mathrm{id}_{N_k}$ is the identity matrix of size $N_k\times N_k$ and $A_k\in\mathbb{R}^{N_k\times N_k}$ for $1\le k\le d$. These matrices are called Laplacian-like matrices, due to their relationship with the Laplace operator written as

$$\sum_{i=1}^{d}\frac{\partial^2}{\partial x_i^2}=\sum_{i=1}^{d}\frac{\partial^0}{\partial x_1^0}\otimes\cdots\otimes\frac{\partial^0}{\partial x_{i-1}^0}\otimes\left(\frac{\partial^2}{\partial x_i^2}\right)\otimes\frac{\partial^0}{\partial x_{i+1}^0}\otimes\cdots\otimes\frac{\partial^0}{\partial x_d^0}.$$

It is not easy to decide when a given matrix $A$ can be represented in that form. To do this, we can use some of the previous results obtained by the authors in [8]. In that paper, it was proved that the set of Laplacian-like matrices is a linear subspace of the space of square matrices with a particular decomposition of its dimension, and a greedy algorithm was provided that computes the best Laplacian approximation $L_A$ of a given matrix $A$, as well as its residual, $R_A=A-L_A$. However, an iterative algorithm is not useful enough compared with a direct solution algorithm. The main goal of this paper is to provide a direct algorithm that constructs the best Laplacian-like approximation by using only a particular block decomposition of the matrix $A$. It can be considered a pre-processing procedure that represents a given matrix in its best Laplacian-like form; if the residual is equal to zero, we definitively have its Laplacian-like representation. Hence, we can efficiently use the GROU algorithm to solve the high-dimensional linear system associated with the matrix $A$.

We remark that, by using the decomposition $A=L_A+R_A$, we can rewrite the linear system as $(L_A+R_A)x=b$, and when the norm of the remainder is small, we can approximate the solution $x$ of the system by the solution $x_L$ of the Laplacian system. This fact is especially interesting in the case of the discretization of some partial differential equations. We also study the Laplacian decomposition of the matrix that comes from the discretization of a general second order partial differential equation of the form

$$\alpha\frac{\partial^2 u}{\partial x^2}+\beta\frac{\partial^2 u}{\partial y^2}+\gamma\frac{\partial u}{\partial x}+\delta\frac{\partial u}{\partial y}+\mu u=f,$$

with homogeneous boundary conditions. Besides, to compare different numerical methods for solving partial differential equations, we consider two particular cases. The first is the Helmholtz equation, which corresponds to an eigenvalue problem for the Laplace operator. The second, to illustrate that it is not necessary to be limited to the second order, is the fourth-order Swift-Hohenberg equation

$$\frac{\partial u}{\partial t}=\varepsilon u-\left(1+\frac{\partial^2}{\partial x^2}\right)^2u.$$

    This equation is noted for its pattern-forming behavior, and it was derived from the equations for thermal convection [9].

The paper is organized as follows. We begin by recalling, in Section 2, some preliminary definitions and results used throughout the paper. Section 3 is devoted to the statement and proof of the main result of this paper, which allows one to construct explicitly the best approximation of a given matrix in the linear space of Laplacian-like matrices. After that, in Section 4, we discuss how to apply this result to compute the best Laplacian approximation for the discretization of a second order partial differential equation without mixed derivatives. Finally, some numerical examples are given in Section 5.

First of all, we introduce some notation that we use throughout the paper. We denote by $\mathbb{R}^{N\times M}$ the set of $N\times M$ matrices and by $A^T$ the transpose of a given matrix $A$. As usual, we use

$$\langle x,y\rangle_2=\langle x,y\rangle_{\mathbb{R}^N}=x^Ty=y^Tx$$

to denote the Euclidean inner product in $\mathbb{R}^N$, and its corresponding 2-norm by $\|x\|_2=\|x\|_{\mathbb{R}^N}=\langle x,x\rangle_2^{1/2}$.

Given a sequence $\{u_j\}_{j=0}^{\infty}\subset\mathbb{R}^N$, we say that a vector $u\in\mathbb{R}^N$ can be written as

$$u=\sum_{j=0}^{\infty}u_j$$

if and only if

$$\lim_{n\to\infty}\sum_{j=0}^{n}u_j=u$$

in the $\|\cdot\|_2$-topology.

The Kronecker product of two matrices $A\in\mathbb{R}^{N_1\times M_1}$ and $B\in\mathbb{R}^{N_2\times M_2}$ is defined by

$$A\otimes B=\begin{pmatrix}A_{1,1}B & A_{1,2}B & \cdots & A_{1,M_1}B\\ A_{2,1}B & A_{2,2}B & \cdots & A_{2,M_1}B\\ \vdots & \vdots & \ddots & \vdots\\ A_{N_1,1}B & A_{N_1,2}B & \cdots & A_{N_1,M_1}B\end{pmatrix}\in\mathbb{R}^{N_1N_2\times M_1M_2}.$$

    We can see some of the well-known properties of the Kronecker product in [7].
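For readers who wish to experiment, the following short Matlab snippet (ours, not part of the original exposition) verifies the well-known mixed-product property $(AB)\otimes(CD)=(A\otimes C)(B\otimes D)$ on small random matrices by using the built-in function kron:

% Check of the mixed-product property of the Kronecker product:
% (A*B) kron (C*D) equals (A kron C)*(B kron D), up to rounding errors.
A = randn(3); B = randn(3);
C = randn(4); D = randn(4);
M1 = kron(A*B, C*D);
M2 = kron(A, C) * kron(B, D);
fprintf('mixed-product property error: %.2e\n', norm(M1 - M2, 'fro'));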

As we already said, we are interested in solving the high-dimensional linear system $Ax=b$ obtained from a discretization of a partial differential equation. Since we wish to solve it by using a tensor-based algorithm, we are going to look for an approximation of the solution in separated form. To this end, we assume that the coefficient matrix $A$ is an $(N_1\cdots N_d)\times(N_1\cdots N_d)$-dimensional invertible matrix for some $N_1,\dots,N_d\in\mathbb{N}$. Next, we look for an approximation (of rank $n$) of $A^{-1}b$ of the form

$$A^{-1}b\approx\sum_{j=1}^{n}x_j^1\otimes\cdots\otimes x_j^d. \tag{2.1}$$

To do this, given $x\in\mathbb{R}^{N_1\cdots N_d}$, we say that $x\in\mathcal{R}_1=\mathcal{R}_1(N_1,N_2,\dots,N_d)$ if $x=x^1\otimes x^2\otimes\cdots\otimes x^d$, where $x^i\in\mathbb{R}^{N_i}$ for $i=1,\dots,d$. For $n\ge2$, we define, inductively, $\mathcal{R}_n=\mathcal{R}_n(N_1,N_2,\dots,N_d)=\mathcal{R}_{n-1}+\mathcal{R}_1$, that is,

$$\mathcal{R}_n=\left\{x:x=\sum_{i=1}^{k}x(i),\ x(i)\in\mathcal{R}_1\text{ for }1\le i\le k\le n\right\}.$$

Note that $\mathcal{R}_n\subset\mathcal{R}_{n+1}$ for all $n\ge1$.

To obtain (2.1), we minimize the difference

$$\left\|b-A\left(\sum_{j=1}^{n}x_j^1\otimes\cdots\otimes x_j^d\right)\right\|_2,$$

that is, we solve the problem

$$\operatorname*{argmin}_{u\in\mathcal{R}_n}\|b-Au\|_2. \tag{2.2}$$

Here, $\|\cdot\|_2$ is the 2-norm, or Frobenius norm, defined by

$$\|A\|_2=\left(\sum_{i=1}^{m}\sum_{j=1}^{n}|a_{i,j}|^2\right)^{1/2}=\sqrt{\mathrm{tr}(A^TA)},\quad\text{for }A\in\mathbb{R}^{m\times n}.$$

Unfortunately, from Proposition 4.1(a) of [10], we have that the set $\mathcal{R}_n$ is not necessarily (or even usually) closed for each $n\ge2$. In consequence, a best rank-$n$ approximation need not exist, that is, (2.2) may have no solution. However, from Proposition 4.2 of [10] it follows that $\mathcal{R}_1$ is a closed set in any norm-topology. This fact allows us to introduce the following algorithm.

The GROU algorithm is an iterative method for solving linear systems of the form $Ax=b$ by using only rank-one updates. Thus, given $A\in GL(\mathbb{R}^{N\times N})$ with $N=N_1\cdots N_d$, and $b\in\mathbb{R}^N$, we can obtain an approximation of the form

$$A^{-1}b\approx u_n=\sum_{j=1}^{n}x_j^1\otimes\cdots\otimes x_j^d$$

for some $n\ge1$ and $x_j^i\in\mathbb{R}^{N_i}$ for $i=1,2,\dots,d$ and $j=1,2,\dots,n$ [7]. We proceed with the following iterative procedure (see Algorithm 1 below): let $u_0=y_0=0$ and, for each $n\ge1$, take

$$r_{n-1}=b-Au_{n-1}, \tag{2.3}$$
$$u_n=u_{n-1}+y_n,\quad\text{where } y_n\in\operatorname*{argmin}_{u\in\mathcal{R}_1}\|r_{n-1}-Au\|_2. \tag{2.4}$$

Since $u_n\to A^{-1}b$, we can define the rank of $A^{-1}b$ obtained by the GROU algorithm as

$$\operatorname{rank}(A^{-1}b)=\begin{cases}\infty & \text{if }\{j\ge1:y_j=0\}=\emptyset,\\ \min\{j\ge1:y_j=0\}-1 & \text{otherwise.}\end{cases}$$

The next result, presented in [7], gives the convergence of the sequence $\{u_n\}_{n\ge0}$ to the solution $A^{-1}b$ of the linear system.

Theorem 2.1. Let $b\in\mathbb{R}^{N_1\cdots N_d}$ and let $A\in\mathbb{R}^{N_1\cdots N_d\times N_1\cdots N_d}$ be an invertible matrix. Then, by using the iterative scheme described by (2.3) and (2.4), we obtain that the sequence $\{\|r_n\|_2\}_{n=0}^{\operatorname{rank}(A^{-1}b)}$ is strictly decreasing and

$$A^{-1}b=\lim_{n\to\infty}u_n=\sum_{j=0}^{\operatorname{rank}(A^{-1}b)}y_j. \tag{2.5}$$

Note that the updates in the previous scheme work under the assumption that, in line 5 of Algorithm 1, we have a way to obtain

$$y\in\operatorname*{argmin}_{x\in\mathcal{R}_1}\|r_i-Ax\|_2^2. \tag{2.6}$$

    To compute y, we can use an alternating least squares (ALS) approach (see [7,11]).

Algorithm 1 GROU algorithm
1: procedure GROU($f,A,\varepsilon,$ tol, rank_max)
2:   $r_0=f$
3:   $u=0$
4:   for $i=0,1,2,\dots,$ rank_max do
5:     $y=\text{procedure}\left(\min_{x\in\mathcal{R}_1}\|r_i-Ax\|_2^2\right)$
6:     $r_{i+1}=r_i-Ay$
7:     $u\leftarrow u+y$
8:     if $\|r_{i+1}\|_2<\varepsilon$ or $\left|\,\|r_{i+1}\|_2-\|r_i\|_2\,\right|<$ tol then goto 13
9:     end if
10:   end for
11:   return $u$ and $\|r_{\text{rank\_max}}\|_2$.
12:   break
13:   return $u$ and $\|r_{i+1}\|_2$
14: end procedure
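For illustration, a minimal Matlab sketch of Algorithm 1 could read as follows. The rank-one minimization of line 5 is delegated to a routine als_rank_one, a hypothetical name standing for any implementation of the ALS strategy discussed next.

function [u, res] = grou(f, A, eps_stop, tol, rank_max)
% Sketch of the GROU algorithm (Algorithm 1): greedy rank-one updates.
% als_rank_one is a placeholder for a rank-one ALS solver (line 5).
r = f;                                  % r_0 = f
u = zeros(size(f));
for i = 0:rank_max
    y = als_rank_one(A, r);             % y ~ argmin_{x in R_1} ||r - A*x||^2
    r_new = r - A*y;                    % residual update (line 6)
    u = u + y;                          % rank-one update of the solution
    if norm(r_new) < eps_stop || abs(norm(r_new) - norm(r)) < tol
        res = norm(r_new);              % stopping test (line 8)
        return
    end
    r = r_new;
end
res = norm(r);                          % rank_max reached without convergence
end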

The idea behind the ALS strategy to solve (2.6) is the following: for each $1\le k\le d$, assume that the values $x_1,\dots,x_{k-1},x_{k+1},\dots,x_d$ are given, and look for the unknown $x_k$ satisfying

$$x_k\in\operatorname*{argmin}_{z_k\in\mathbb{R}^{N_k}}\left\|b-A\left(x_1\otimes\cdots\otimes x_{k-1}\otimes z_k\otimes x_{k+1}\otimes\cdots\otimes x_d\right)\right\|_2,$$

where we can write

$$A\left(x_1\otimes\cdots\otimes x_{k-1}\otimes z_k\otimes x_{k+1}\otimes\cdots\otimes x_d\right)=A\left(x_1\otimes\cdots\otimes x_{k-1}\otimes\mathrm{id}_{N_k}\otimes x_{k+1}\otimes\cdots\otimes x_d\right)z_k.$$

In consequence, by using a least squares approach [11], we can obtain $x_k$ by solving the following $N_k\times N_k$-dimensional linear system:

$$Z_kz_k=b_k, \tag{2.7}$$

where

$$Z_k:=\left(x_1^T\otimes\cdots\otimes x_{k-1}^T\otimes\mathrm{id}_{N_k}\otimes x_{k+1}^T\otimes\cdots\otimes x_d^T\right)A^TA\left(x_1\otimes\cdots\otimes x_{k-1}\otimes\mathrm{id}_{N_k}\otimes x_{k+1}\otimes\cdots\otimes x_d\right)$$

and

$$b_k:=\left(x_1^T\otimes\cdots\otimes x_{k-1}^T\otimes\mathrm{id}_{N_k}\otimes x_{k+1}^T\otimes\cdots\otimes x_d^T\right)A^Tb.$$

Clearly,

$$\left\|b-A\left(x_1\otimes\cdots\otimes x_{k-1}\otimes z_k\otimes x_{k+1}\otimes\cdots\otimes x_d\right)\right\|_2\ge\left\|b-A\left(x_1\otimes\cdots\otimes x_{k-1}\otimes x_k\otimes x_{k+1}\otimes\cdots\otimes x_d\right)\right\|_2$$

holds for all $z_k\in\mathbb{R}^{N_k}$. However, it is well known (see Section 4 in [11]) that the performance of the ALS strategy can be improved (see Algorithm 2 below) when the matrix $A^TA\in\mathbb{R}^{N\times N}$, with $N=N_1\cdots N_d$, can be written in the form

$$A^TA=\sum_{i=1}^{r}\bigotimes_{j=1}^{d}A_j^{(i)}, \tag{2.8}$$

where $\bigotimes_{j=1}^{d}A_j^{(i)}=A_1^{(i)}\otimes\cdots\otimes A_d^{(i)}$; here, $A_j^{(i)}\in\mathbb{R}^{N_j\times N_j}$ for $1\le j\le d$ and $1\le i\le r$. In particular, when the matrix $A$ is given by

$$A=\sum_{i=1}^{d}\mathrm{id}_{N_1}\otimes\cdots\otimes\mathrm{id}_{N_{i-1}}\otimes A_i\otimes\mathrm{id}_{N_{i+1}}\otimes\cdots\otimes\mathrm{id}_{N_d},$$

then the matrix $A^TA$ can easily be written in the form of (2.8). These matrices were introduced in [8] as Laplacian-like matrices, since they can be easily related to the classical Laplacian operator [2,12]. The next section is devoted to the study of this class of matrices.
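To make this structure concrete, the following Matlab sketch (with sizes chosen only for illustration) assembles a Laplacian-like matrix for $d=3$ from arbitrary blocks $A_i$:

% Assemble A = sum_i id_{N1} x ... x A_i x ... x id_{N3} for d = 3.
Ns   = [4 5 6];                          % illustrative sizes N1, N2, N3
Ablk = {randn(Ns(1)), randn(Ns(2)), randn(Ns(3))};
A = zeros(prod(Ns));
for i = 1:3
    T = 1;
    for j = 1:3
        if j == i
            T = kron(T, Ablk{j});        % the i-th factor carries A_i
        else
            T = kron(T, eye(Ns(j)));     % every other factor is an identity
        end
    end
    A = A + T;
end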

As we said in the introduction, the proper generalized decomposition is a popular numerical strategy in engineering for solving high-dimensional problems. It is based on the GROU iteration given by (2.3) and (2.4), and it can be considered a tensor-based decomposition algorithm.

There is a particular type of matrix for which these methods for solving high-dimensional linear systems work particularly well, namely, those satisfying property (2.8). To make this precise, we introduce the following definition.

Algorithm 2 An ALS algorithm for matrices in the form of (2.8) [11, Algorithm 2]
1: Given $A^TA=\sum_{i=1}^{r}\bigotimes_{j=1}^{d}A_j^{(i)}\in\mathbb{R}^{N\times N}$ and $b\in\mathbb{R}^N$.
2: Initialize $x_i^{(0)}\in\mathbb{R}^{N_i}$ for $i=1,2,\dots,d$.
3: Introduce $\varepsilon>0$, iter_max and iter = 1.
4: while distance $>\varepsilon$ and iter $<$ iter_max do
5:   for $k=1,2,\dots,d$ do
6:     $x_k^{(1)}=x_k^{(0)}$
7:     for $i=1,2,\dots,r$ do
8:       $\alpha_k^{(i)}=\left(\prod_{j=1}^{k-1}(x_j^{(0)})^TA_j^{(i)}x_j^{(0)}\right)\left(\prod_{j=k+1}^{d}(x_j^{(1)})^TA_j^{(i)}x_j^{(1)}\right)$
9:     end for
10:     $x_k^{(0)}$ solves $\left(\sum_{i=1}^{r}\alpha_k^{(i)}A_k^{(i)}\right)x_k=\left(x_1^{(0)}\otimes\cdots\otimes x_{k-1}^{(0)}\otimes\mathrm{id}_{N_k}\otimes x_{k+1}^{(0)}\otimes\cdots\otimes x_d^{(0)}\right)^Tb$
11:   end for
12:   iter = iter + 1.
13:   distance $=\max_{1\le i\le d}\|x_i^{(0)}-x_i^{(1)}\|_2$.
14: end while

Definition 3.1. Given a matrix $A\in\mathbb{R}^{N\times N}$, where $N=N_1\cdots N_d$, we say that $A$ is a Laplacian-like matrix if there exist matrices $A_i\in\mathbb{R}^{N_i\times N_i}$ for $1\le i\le d$ such that

$$A=\sum_{i=1}^{d}A_i\otimes\mathrm{id}_{[N_i]}:=\sum_{i=1}^{d}\mathrm{id}_{N_1}\otimes\cdots\otimes\mathrm{id}_{N_{i-1}}\otimes A_i\otimes\mathrm{id}_{N_{i+1}}\otimes\cdots\otimes\mathrm{id}_{N_d}, \tag{3.1}$$

where $\mathrm{id}_{N_j}$ is the identity matrix of size $N_j\times N_j$.

It is not difficult to see that the set of Laplacian-like matrices is a linear subspace of $\mathbb{R}^{N\times N}$ consisting of matrices satisfying property (2.8). From now on, we will denote by $\mathcal{L}(\mathbb{R}^{N\times N})$ the subspace of Laplacian-like matrices in $\mathbb{R}^{N\times N}$ for a fixed decomposition $N=N_1\cdots N_d$.

Now, given a matrix $A\in\mathbb{R}^{N\times N}$, our goal is to solve the following optimization problem:

$$\min_{L\in\mathcal{L}(\mathbb{R}^{N\times N})}\|A-L\|_2. \tag{3.2}$$

Clearly, if we denote by $\Pi_{\mathcal{L}(\mathbb{R}^{N\times N})}$ the orthogonal projection onto the linear subspace $\mathcal{L}(\mathbb{R}^{N\times N})$, then $L_A:=\Pi_{\mathcal{L}(\mathbb{R}^{N\times N})}(A)$ is the solution of (3.2). Observe that $\|A-L_A\|_2=0$ if and only if $A\in\mathcal{L}(\mathbb{R}^{N\times N})$.

We are interested in achieving a structure similar to (3.1) in order to study the matrices of large-dimensional problems. We seek an algorithm that constructs, for a given matrix $A$, its best Laplacian-like approximation $L_A$.

To do this, we will use the following theorem, which describes a particular decomposition of the space of matrices $\mathbb{R}^{N\times N}$. Observe that the linear subspace $\mathrm{span}\{\mathrm{id}_N\}$ in $\mathbb{R}^{N\times N}$ has, as its orthogonal complement, the null-trace matrices:

$$\mathrm{span}\{\mathrm{id}_n\}^{\perp}=\{A\in\mathbb{R}^{n\times n}:\mathrm{tr}(A)=0\},$$

with respect to the inner product $\langle A,B\rangle_{\mathbb{R}^{N\times N}}=\mathrm{tr}(A^TB)$.

Theorem 3.2. Consider $(\mathbb{R}^{N\times N},\|\cdot\|_2)$ as a Hilbert space, where $N=N_1\cdots N_d$. Then, there exists a decomposition

$$\mathbb{R}^{N\times N}=\mathrm{span}\{\mathrm{id}_N\}\oplus h_N=\mathcal{L}(\mathbb{R}^{N\times N})\oplus\mathcal{L}(\mathbb{R}^{N\times N})^{\perp},$$

where $h_N=\mathrm{span}\{\mathrm{id}_N\}^{\perp}$ is the orthogonal complement of the linear subspace generated by the identity matrix. Moreover,

$$\mathcal{L}(\mathbb{R}^{N\times N})=\mathrm{span}\{\mathrm{id}_N\}\oplus\Delta, \tag{3.3}$$

where $\Delta=h_N\cap\mathcal{L}(\mathbb{R}^{N\times N})$. Furthermore, $\mathcal{L}(\mathbb{R}^{N\times N})^{\perp}$ is a subspace of $h_N$ and

$$\Delta=\bigoplus_{i=1}^{d}\mathrm{span}\{\mathrm{id}_{N_1}\}\otimes\cdots\otimes\mathrm{span}\{\mathrm{id}_{N_{i-1}}\}\otimes\mathrm{span}\{\mathrm{id}_{N_i}\}^{\perp}\otimes\mathrm{span}\{\mathrm{id}_{N_{i+1}}\}\otimes\cdots\otimes\mathrm{span}\{\mathrm{id}_{N_d}\}.$$

    Proof. It follows from Lemma 3.1, Theorem 3.1 and Theorem 3.2 in [8].

The above theorem allows us to compute the projection of a matrix $A$ onto $\mathcal{L}(\mathbb{R}^{N\times N})$ as follows. Denote by $\Pi_i$ the orthogonal projection of $\mathbb{R}^{N\times N}$ onto the linear subspace

$$\mathrm{span}\{\mathrm{id}_{N_1}\}\otimes\cdots\otimes\mathrm{span}\{\mathrm{id}_{N_{i-1}}\}\otimes\mathrm{span}\{\mathrm{id}_{N_i}\}^{\perp}\otimes\mathrm{span}\{\mathrm{id}_{N_{i+1}}\}\otimes\cdots\otimes\mathrm{span}\{\mathrm{id}_{N_d}\}$$

for $1\le i\le d$. Thus, $\sum_{i=1}^{d}\Pi_i$ is the orthogonal projection of $\mathbb{R}^{N\times N}$ onto the linear subspace $\Delta$. In consequence, by using (3.3), we have

$$\frac{\mathrm{tr}(A)}{N}\mathrm{id}_N+\sum_{i=1}^{d}\Pi_i(A)=\operatorname*{argmin}_{L\in\mathcal{L}(\mathbb{R}^{N\times N})}\|A-L\|_2. \tag{3.4}$$

If we further analyze (3.4), we observe that the second term on the left is of the form

$$\sum_{i=1}^{d}\Pi_i(A)=\sum_{i=1}^{d}\mathrm{id}_{N_1}\otimes\cdots\otimes\mathrm{id}_{N_{i-1}}\otimes X_i\otimes\mathrm{id}_{N_{i+1}}\otimes\cdots\otimes\mathrm{id}_{N_d},$$

and that it has only $(N_1^2+\cdots+N_d^2-d)$ degrees of freedom (recall that $\dim\mathrm{span}\{\mathrm{id}_{N_i}\}^{\perp}=N_i^2-1$). In addition, due to the tensor structure of the products, the unknowns $x_l$ of $X_k$ are distributed in blocks, so we can determine which entries of the matrix $A$ each of them approximates. Therefore, to obtain the value of each $x_l$, we only need to compute the value that best approximates the entries $(i,j)$ of the original matrix located in the same positions as $x_l$.

In our next result, we will see how to carry out this procedure. To do this, we make the following observation. Given a matrix $A=(a_{i,j})\in\mathbb{R}^{KL\times KL}$ for some integers $K,L>1$, we can write $A$ in block form:

$$A=\begin{pmatrix}A_{1,1}^{(K,L)} & A_{1,2}^{(K,L)} & \cdots & A_{1,L}^{(K,L)}\\ A_{2,1}^{(K,L)} & A_{2,2}^{(K,L)} & \cdots & A_{2,L}^{(K,L)}\\ \vdots & \vdots & \ddots & \vdots\\ A_{L,1}^{(K,L)} & A_{L,2}^{(K,L)} & \cdots & A_{L,L}^{(K,L)}\end{pmatrix}, \tag{3.5}$$

where the block $A_{i,j}^{(K,L)}\in\mathbb{R}^{K\times K}$ for $1\le i,j\le L$ is given by

$$A_{i,j}^{(K,L)}=\begin{pmatrix}a_{(i-1)K+1,(j-1)K+1} & \cdots & a_{(i-1)K+1,jK}\\ \vdots & \ddots & \vdots\\ a_{iK,(j-1)K+1} & \cdots & a_{iK,jK}\end{pmatrix}.$$

Moreover,

$$\|A\|_{\mathbb{R}^{KL\times KL}}^2=\sum_{i=1}^{KL}\sum_{j=1}^{KL}a_{i,j}^2=\sum_{r=1}^{L}\sum_{s=1}^{L}\left\|A_{r,s}^{(K,L)}\right\|_{\mathbb{R}^{K\times K}}^2.$$

Observe that $K$ and $L$ can easily be interchanged. To simplify the notation, given $N=N_1N_2\cdots N_d$, from now on we write $N_{[k]}=N_1\cdots N_{k-1}N_{k+1}\cdots N_d$ for each $1\le k\le d$.
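In Matlab, a block $A_{i,j}^{(K,L)}$ of (3.5) can be extracted by simple index arithmetic; the following small helper (the name is ours) is reused in the sketches below.

function blk = block_KL(A, K, i, j)
% Extract the K x K block A^{(K,L)}_{i,j} of (3.5) from A in R^{KL x KL}.
blk = A((i-1)*K+1 : i*K, (j-1)*K+1 : j*K);
end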

Theorem 3.3. Let $A\in\mathbb{R}^{N\times N}$, with $N=N_1\cdots N_d$. For each fixed $1\le k\le d$, consider the linear function $P_k:\mathbb{R}^{N_k\times N_k}\to\mathbb{R}^{N\times N}$ given by

$$P_k(X_k):=\mathrm{id}_{N_1}\otimes\cdots\otimes\mathrm{id}_{N_{k-1}}\otimes X_k\otimes\mathrm{id}_{N_{k+1}}\otimes\cdots\otimes\mathrm{id}_{N_d}.$$

Then, the solution of the minimization problem

$$\min_{X_k\in\mathbb{R}^{N_k\times N_k}}\|A-P_k(X_k)\|_2 \tag{3.6}$$

is given by

$$(X_k)_{i,j}=\begin{cases}\dfrac{1}{N_{[1]}}\displaystyle\sum_{n=1}^{N_{[1]}}a_{(i-1)N_{[1]}+n,\,(j-1)N_{[1]}+n} & \text{if }k=1,\\[2ex] \dfrac{1}{N_{[k]}}\displaystyle\sum_{m=1}^{N_{k+1}\cdots N_d}\left(\sum_{n=1}^{N_1\cdots N_{k-1}}A_{n,n}^{(N_k\cdots N_d,\,N_1\cdots N_{k-1})}\right)_{(i-1)N_{k+1}\cdots N_d+m,\,(j-1)N_{k+1}\cdots N_d+m} & \text{if }1<k<d,\\[2ex] \dfrac{1}{N_{[d]}}\left(\displaystyle\sum_{n=1}^{N_{[d]}}A_{n,n}^{(N_d,\,N_{[d]})}\right)_{i,j} & \text{if }k=d.\end{cases}$$

Proof. First, let us observe that $\mathrm{id}_{N_1}\otimes\cdots\otimes\mathrm{id}_{N_k}=\mathrm{id}_{N_1\cdots N_k}$; so, we can find three different situations in the calculation of the projections:

(1). $P_1(X_1)=X_1\otimes\mathrm{id}_{N_{[1]}}$; in this case,

$$P_1(X_1)=\begin{pmatrix}(X_1)_{1,1}\mathrm{id}_{N_{[1]}} & (X_1)_{1,2}\mathrm{id}_{N_{[1]}} & \cdots & (X_1)_{1,N_1}\mathrm{id}_{N_{[1]}}\\ (X_1)_{2,1}\mathrm{id}_{N_{[1]}} & (X_1)_{2,2}\mathrm{id}_{N_{[1]}} & \cdots & (X_1)_{2,N_1}\mathrm{id}_{N_{[1]}}\\ \vdots & \vdots & \ddots & \vdots\\ (X_1)_{N_1,1}\mathrm{id}_{N_{[1]}} & (X_1)_{N_1,2}\mathrm{id}_{N_{[1]}} & \cdots & (X_1)_{N_1,N_1}\mathrm{id}_{N_{[1]}}\end{pmatrix}\in\mathbb{R}^{N_{[1]}N_1\times N_{[1]}N_1}.$$

(2). $P_d(X_d)=\mathrm{id}_{N_{[d]}}\otimes X_d$; in this case,

$$P_d(X_d)=\begin{pmatrix}X_d & O_d & \cdots & O_d\\ O_d & X_d & \cdots & O_d\\ \vdots & \vdots & \ddots & \vdots\\ O_d & O_d & \cdots & X_d\end{pmatrix}\in\mathbb{R}^{N_dN_{[d]}\times N_dN_{[d]}},$$

where $O_d$ denotes the zero matrix in $\mathbb{R}^{N_d\times N_d}$.

(3). $P_i(X_i)=\mathrm{id}_{N_1\cdots N_{i-1}}\otimes X_i\otimes\mathrm{id}_{N_{i+1}\cdots N_d}$ for $i=2,\dots,d-1$; in this case, for a fixed $2\le i\le d-1$, we write $N_l=N_1\cdots N_{i-1}$ and $N_r=N_{i+1}\cdots N_d$. Thus,

$$P_i(X_i)=\mathrm{id}_{N_l}\otimes X_i\otimes\mathrm{id}_{N_r}=\mathrm{id}_{N_l}\otimes\begin{pmatrix}(X_i)_{1,1}\mathrm{id}_{N_r} & \cdots & (X_i)_{1,N_i}\mathrm{id}_{N_r}\\ \vdots & \ddots & \vdots\\ (X_i)_{N_i,1}\mathrm{id}_{N_r} & \cdots & (X_i)_{N_i,N_i}\mathrm{id}_{N_r}\end{pmatrix}=\begin{pmatrix}X_i\otimes\mathrm{id}_{N_r} & O & \cdots & O\\ O & X_i\otimes\mathrm{id}_{N_r} & \cdots & O\\ \vdots & \vdots & \ddots & \vdots\\ O & O & \cdots & X_i\otimes\mathrm{id}_{N_r}\end{pmatrix}\in\mathbb{R}^{(N_iN_r)N_l\times(N_iN_r)N_l},$$

where $O$ denotes the zero matrix in $\mathbb{R}^{N_iN_r\times N_iN_r}$.

In each case, a minimization problem of the form

$$\min_{X_k\in\mathbb{R}^{N_k\times N_k}}\|A-P_k(X_k)\|_2$$

must be solved. To this end, we consider $A$ in each case as a block matrix $A\in\mathbb{R}^{KL\times KL}$ in the form of (3.5).

Case 1: For $P_1(X_1)$, we take $K=N_{[1]}$ and $L=N_1$; hence,

$$A-P_1(X_1)=\begin{pmatrix}A_{1,1}^{(K,L)}-(X_1)_{1,1}\mathrm{id}_{N_{[1]}} & \cdots & A_{1,N_1}^{(K,L)}-(X_1)_{1,N_1}\mathrm{id}_{N_{[1]}}\\ \vdots & \ddots & \vdots\\ A_{N_1,1}^{(K,L)}-(X_1)_{N_1,1}\mathrm{id}_{N_{[1]}} & \cdots & A_{N_1,N_1}^{(K,L)}-(X_1)_{N_1,N_1}\mathrm{id}_{N_{[1]}}\end{pmatrix}.$$

In this situation, we have

$$\|A-P_1(X_1)\|_{\mathbb{R}^{N\times N}}^2=\sum_{i=1}^{N_1}\sum_{j=1}^{N_1}\left\|A_{i,j}^{(K,L)}-(X_1)_{i,j}\mathrm{id}_{N_{[1]}}\right\|_{\mathbb{R}^{N_{[1]}\times N_{[1]}}}^2;$$

hence, for each $1\le i,j\le N_1$, we seek

$$(X_1)_{i,j}=x^*\in\operatorname*{argmin}_{x\in\mathbb{R}}\left\|A_{i,j}^{(K,L)}-x\,\mathrm{id}_{N_{[1]}}\right\|_{\mathbb{R}^{N_{[1]}\times N_{[1]}}}^2=\operatorname*{argmin}_{x\in\mathbb{R}}\sum_{n=1}^{N_{[1]}}\left(a_{(i-1)N_{[1]}+n,(j-1)N_{[1]}+n}-x\right)^2.$$

Thus, it is not difficult to see that

$$(X_1)_{i,j}=\frac{1}{N_{[1]}}\sum_{n=1}^{N_{[1]}}a_{(i-1)N_{[1]}+n,(j-1)N_{[1]}+n}$$

for $1\le i,j\le N_1$.

Case 2: For $P_d(X_d)$, we take $K=N_d$ and $L=N_{[d]}$; hence,

$$A-P_d(X_d)=\begin{pmatrix}A_{1,1}^{(K,L)}-X_d & A_{1,2}^{(K,L)} & \cdots & A_{1,N_{[d]}}^{(K,L)}\\ A_{2,1}^{(K,L)} & A_{2,2}^{(K,L)}-X_d & \cdots & A_{2,N_{[d]}}^{(K,L)}\\ \vdots & \vdots & \ddots & \vdots\\ A_{N_{[d]},1}^{(K,L)} & A_{N_{[d]},2}^{(K,L)} & \cdots & A_{N_{[d]},N_{[d]}}^{(K,L)}-X_d\end{pmatrix}.$$

Now, we have

$$\|A-P_d(X_d)\|_{\mathbb{R}^{N\times N}}^2=\sum_{i=1}^{N_{[d]}}\left\|A_{i,i}^{(K,L)}-X_d\right\|_{\mathbb{R}^{N_d\times N_d}}^2+\sum_{\substack{i,j=1\\ i\ne j}}^{N_{[d]}}\left\|A_{i,j}^{(K,L)}\right\|_{\mathbb{R}^{N_d\times N_d}}^2.$$

Thus, $X_d\in\mathbb{R}^{N_d\times N_d}$ minimizes $\|A-P_d(X_d)\|_{\mathbb{R}^{N\times N}}^2$ if and only if

$$X_d\in\operatorname*{argmin}_{X\in\mathbb{R}^{N_d\times N_d}}\sum_{i=1}^{N_{[d]}}\left\|A_{i,i}^{(K,L)}-X\right\|_{\mathbb{R}^{N_d\times N_d}}^2.$$

In consequence,

$$X_d=\frac{1}{N_{[d]}}\sum_{i=1}^{N_{[d]}}A_{i,i}^{(K,L)}.$$

Case 3: For $P_i(X_i)$, we take $K=N_iN_r$ and $L=N_l$; hence,

$$A-P_i(X_i)=\begin{pmatrix}A_{1,1}^{(K,L)}-X_i\otimes\mathrm{id}_{N_r} & A_{1,2}^{(K,L)} & \cdots & A_{1,N_l}^{(K,L)}\\ A_{2,1}^{(K,L)} & A_{2,2}^{(K,L)}-X_i\otimes\mathrm{id}_{N_r} & \cdots & A_{2,N_l}^{(K,L)}\\ \vdots & \vdots & \ddots & \vdots\\ A_{N_l,1}^{(K,L)} & A_{N_l,2}^{(K,L)} & \cdots & A_{N_l,N_l}^{(K,L)}-X_i\otimes\mathrm{id}_{N_r}\end{pmatrix}.$$

In this case,

$$\|A-P_i(X_i)\|_{\mathbb{R}^{N\times N}}^2=\sum_{n=1}^{N_l}\left\|A_{n,n}^{(K,L)}-X_i\otimes\mathrm{id}_{N_r}\right\|_{\mathbb{R}^{N_iN_r\times N_iN_r}}^2+\sum_{\substack{n,j=1\\ n\ne j}}^{N_l}\left\|A_{n,j}^{(K,L)}\right\|_{\mathbb{R}^{N_iN_r\times N_iN_r}}^2,$$

so we need to solve the following problem:

$$\min_{X\in\mathbb{R}^{N_i\times N_i}}\sum_{n=1}^{N_l}\left\|A_{n,n}^{(K,L)}-X\otimes\mathrm{id}_{N_r}\right\|_{\mathbb{R}^{N_iN_r\times N_iN_r}}^2. \tag{3.7}$$

Since $X\otimes\mathrm{id}_{N_r}\in\mathbb{R}^{N_i\times N_i}\otimes\mathrm{span}\{\mathrm{id}_{N_r}\}$, we can write (3.7) as

$$\min_{Z\in\mathbb{R}^{N_i\times N_i}\otimes\mathrm{span}\{\mathrm{id}_{N_r}\}}\sum_{n=1}^{N_l}\left\|A_{n,n}^{(K,L)}-Z\right\|_{\mathbb{R}^{N_iN_r\times N_iN_r}}^2. \tag{3.8}$$

Observe that

$$\bar{A}=(\bar{a}_{u,v})=\frac{1}{N_l}\sum_{n=1}^{N_l}A_{n,n}^{(K,L)}=\operatorname*{argmin}_{U\in\mathbb{R}^{N_iN_r\times N_iN_r}}\sum_{n=1}^{N_l}\left\|A_{n,n}^{(K,L)}-U\right\|_{\mathbb{R}^{N_iN_r\times N_iN_r}}^2.$$

To simplify the notation, we write $\mathcal{U}:=\mathbb{R}^{N_i\times N_i}\otimes\mathrm{span}\{\mathrm{id}_{N_r}\}$. Then, we have the orthogonal decomposition $\mathbb{R}^{N_iN_r\times N_iN_r}=\mathcal{U}\oplus\mathcal{U}^{\perp}$. Denote by $\Pi_{\mathcal{U}}$ the orthogonal projection onto the linear subspace $\mathcal{U}$. Then, for each $Z\in\mathcal{U}$, we have

$$\left\|A_{n,n}^{(K,L)}-Z\right\|^2=\left\|(\mathrm{id}-\Pi_{\mathcal{U}})(A_{n,n}^{(K,L)})+\Pi_{\mathcal{U}}(A_{n,n}^{(K,L)})-Z\right\|^2=\left\|(\mathrm{id}-\Pi_{\mathcal{U}})(A_{n,n}^{(K,L)})\right\|^2+\left\|\Pi_{\mathcal{U}}(A_{n,n}^{(K,L)})-Z\right\|^2,$$

because $(\mathrm{id}-\Pi_{\mathcal{U}})(A_{n,n}^{(K,L)})\in\mathcal{U}^{\perp}$ and $\Pi_{\mathcal{U}}(A_{n,n}^{(K,L)})-Z\in\mathcal{U}$. In consequence, solving (3.8) is equivalent to solving the following optimization problem:

$$\min_{Z\in\mathcal{U}}\sum_{n=1}^{N_l}\left\|\Pi_{\mathcal{U}}(A_{n,n}^{(K,L)})-Z\right\|_{\mathbb{R}^{N_iN_r\times N_iN_r}}^2. \tag{3.9}$$

Thus,

$$Z=\frac{1}{N_l}\sum_{n=1}^{N_l}\Pi_{\mathcal{U}}(A_{n,n}^{(K,L)})=\operatorname*{argmin}_{Z\in\mathcal{U}}\sum_{n=1}^{N_l}\left\|\Pi_{\mathcal{U}}(A_{n,n}^{(K,L)})-Z\right\|_{\mathbb{R}^{N_iN_r\times N_iN_r}}^2,$$

that is, $Z=\Pi_{\mathcal{U}}(\bar{A})$; hence,

$$Z=\operatorname*{argmin}_{Z\in\mathcal{U}}\|\bar{A}-Z\|_2=X_i\otimes\mathrm{id}_{N_r},\quad\text{where }X_i=\operatorname*{argmin}_{X\in\mathbb{R}^{N_i\times N_i}}\left\|\bar{A}-X\otimes\mathrm{id}_{N_r}\right\|_{\mathbb{R}^{N_iN_r\times N_iN_r}}^2.$$

Proceeding in a similar way as in Case 1, we obtain

$$(X_i)_{u,v}=\frac{1}{N_r}\sum_{m=1}^{N_r}\bar{a}_{(u-1)N_r+m,(v-1)N_r+m}=\frac{1}{N_r}\frac{1}{N_l}\sum_{m=1}^{N_r}\left(\sum_{n=1}^{N_l}A_{n,n}^{(K,L)}\right)_{(u-1)N_r+m,(v-1)N_r+m}$$

for $1\le u,v\le N_i$. This concludes the proof of the theorem.

    To conclude, we obtain the following useful corollary.

Corollary 3.4. Let $A\in\mathbb{R}^{N\times N}$, with $N=N_1\cdots N_d$. For each fixed $1\le k\le d$, consider the linear function $P_k:\mathbb{R}^{N_k\times N_k}\to\mathbb{R}^{N\times N}$ given by

$$P_k(X_k):=\mathrm{id}_{N_1}\otimes\cdots\otimes\mathrm{id}_{N_{k-1}}\otimes X_k\otimes\mathrm{id}_{N_{k+1}}\otimes\cdots\otimes\mathrm{id}_{N_d}.$$

For each $1\le k\le d$, let $X_k\in\mathbb{R}^{N_k\times N_k}$ be the solution of the optimization problem (3.6). Then,

$$L_A=\frac{\mathrm{tr}(A)}{N}\mathrm{id}_N+\sum_{k=1}^{d}P_k\!\left(X_k-\frac{\mathrm{tr}(X_k)}{N_k}\mathrm{id}_{N_k}\right)=\operatorname*{argmin}_{L\in\mathcal{L}(\mathbb{R}^{N\times N})}\|A-L\|_2. \tag{3.10}$$

Proof. Observe that, for $1\le k\le d$, the matrix $X_k$ satisfies

$$P_k(X_k)=\operatorname*{argmin}_{Z\in h(k)}\|A-Z\|_2,$$

where

$$h(k):=\mathrm{span}\{\mathrm{id}_{N_1}\}\otimes\cdots\otimes\mathrm{span}\{\mathrm{id}_{N_{k-1}}\}\otimes\mathbb{R}^{N_k\times N_k}\otimes\mathrm{span}\{\mathrm{id}_{N_{k+1}}\}\otimes\cdots\otimes\mathrm{span}\{\mathrm{id}_{N_d}\}$$

is a linear subspace of $\mathbb{R}^{N\times N}$ that is linearly isomorphic to $\mathbb{R}^{N_k\times N_k}$. Since $\mathbb{R}^{N_k\times N_k}=\mathrm{span}\{\mathrm{id}_{N_k}\}\oplus\mathrm{span}\{\mathrm{id}_{N_k}\}^{\perp}$, then

$$X_k=\frac{\mathrm{tr}(X_k)}{N_k}\mathrm{id}_{N_k}+\left(X_k-\frac{\mathrm{tr}(X_k)}{N_k}\mathrm{id}_{N_k}\right);$$

hence,

$$P_k(X_k)=P_k\!\left(\frac{\mathrm{tr}(X_k)}{N_k}\mathrm{id}_{N_k}\right)+P_k\!\left(X_k-\frac{\mathrm{tr}(X_k)}{N_k}\mathrm{id}_{N_k}\right)=\frac{\mathrm{tr}(X_k)}{N_k}\mathrm{id}_N+P_k\!\left(X_k-\frac{\mathrm{tr}(X_k)}{N_k}\mathrm{id}_{N_k}\right).$$

We can conclude that $\Pi_k(A)=P_k\!\left(X_k-\frac{\mathrm{tr}(X_k)}{N_k}\mathrm{id}_{N_k}\right)$; recall that $\Pi_k$ is the orthogonal projection of $\mathbb{R}^{N\times N}$ onto the linear subspace

$$\mathrm{span}\{\mathrm{id}_{N_1}\}\otimes\cdots\otimes\mathrm{span}\{\mathrm{id}_{N_{k-1}}\}\otimes\mathrm{span}\{\mathrm{id}_{N_k}\}^{\perp}\otimes\mathrm{span}\{\mathrm{id}_{N_{k+1}}\}\otimes\cdots\otimes\mathrm{span}\{\mathrm{id}_{N_d}\}.$$

From (3.4), the corollary is proved.
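Combining Theorem 3.3 with Corollary 3.4 in the two-dimensional case $N=N_1N_2$, a direct Matlab implementation of the best Laplacian-like approximation could read as follows (a sketch under our naming; it reuses the helper block_KL introduced above). When $A$ is already Laplacian-like, the returned residual vanishes up to rounding.

function [LA, RA] = laplacian_approx_d2(A, N1, N2)
% Best Laplacian-like approximation of A (Corollary 3.4 with d = 2):
% L_A = tr(A)/N id_N + P_1(X1 - tr(X1)/N1 id) + P_2(X2 - tr(X2)/N2 id).
N  = N1*N2;
X1 = zeros(N1);
X2 = zeros(N2);
for i = 1:N1
    for j = 1:N1
        % (X1)_{i,j}: average of the diagonal of the (i,j) block (case k = 1)
        X1(i,j) = trace(block_KL(A, N2, i, j)) / N2;
    end
    X2 = X2 + block_KL(A, N2, i, i) / N1;   % average of the diagonal blocks
end
X1 = X1 - trace(X1)/N1 * eye(N1);           % keep only the traceless parts
X2 = X2 - trace(X2)/N2 * eye(N2);
LA = trace(A)/N * eye(N) + kron(X1, eye(N2)) + kron(eye(N1), X2);
RA = A - LA;                 % residual; zero iff A is Laplacian-like
end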

In this section, we consider a generic second-order partial differential equation without mixed derivatives and with homogeneous boundary conditions. More precisely, let

$$\alpha u_{xx}+\beta u_{yy}+\gamma u_x+\delta u_y+\mu u=f\quad\text{for }(x,y)\in(0,1)\times(0,1), \tag{4.1}$$
$$u(x,0)=u(x,1)=u(0,y)=u(1,y)=0\quad\text{for all }0\le x\le1\text{ and }0\le y\le1. \tag{4.2}$$

We discretize (4.1) with the help of the following derivative approximations:

$$u_x(x,y)\approx\frac{u(x_{i+1},y_j)-u(x_{i-1},y_j)}{2h},\qquad u_y(x,y)\approx\frac{u(x_i,y_{j+1})-u(x_i,y_{j-1})}{2k},$$

and

$$u_{xx}(x,y)\approx\frac{u(x_{i+1},y_j)-2u(x_i,y_j)+u(x_{i-1},y_j)}{h^2},\qquad u_{yy}(x,y)\approx\frac{u(x_i,y_{j+1})-2u(x_i,y_j)+u(x_i,y_{j-1})}{k^2}$$

for $i=1,\dots,N$ and $j=1,\dots,M$. From (4.2), we have that $u(x,y_0)=u(x,y_{M+1})=u(x_0,y)=u(x_{N+1},y)=0$ for all $0\le x\le1$ and $0\le y\le1$.

Next, in order to obtain a linear system, we put $u_\ell:=u(x_i,y_j)$ and $f_\ell:=f(x_i,y_j)$, where $\ell:=(i-1)M+j$ for $1\le i\le N$ and $1\le j\le M$. In this way, the mesh is traversed as shown in Figure 1, and $U=(u_\ell)_{\ell=1}^{MN}$ and $F=(f_\ell)_{\ell=1}^{MN}$ are column vectors. This allows us to represent (4.1) and (4.2) as the linear system $AU=F$, where $A$ is the $MN\times MN$ block matrix

$$A=\begin{pmatrix}T & D_1 & & \\ D_2 & T & D_1 & \\ & \ddots & \ddots & \ddots\\ & & D_2 & T\end{pmatrix} \tag{4.3}$$

Figure 1. The mesh is traversed from $(1,1)$ to $(1,M)$; then $(2,1),\dots,(2,M)$; and so on, ending at $(N,1),\dots,(N,M)$.

for $T\in\mathbb{R}^{M\times M}$, given by

$$T=\begin{pmatrix}2\mu h^2k^2-4\alpha k^2-4\beta h^2 & 2\beta h^2+\delta h^2k & & \\ 2\beta h^2-\delta h^2k & 2\mu h^2k^2-4\alpha k^2-4\beta h^2 & 2\beta h^2+\delta h^2k & \\ & \ddots & \ddots & \ddots\\ & & 2\beta h^2-\delta h^2k & 2\mu h^2k^2-4\alpha k^2-4\beta h^2\end{pmatrix},$$

and $D_1,D_2\in\mathbb{R}^{M\times M}$ are the diagonal matrices

$$D_1=(2\alpha k^2+\gamma hk^2)\,\mathrm{id}_M,\qquad D_2=(2\alpha k^2-\gamma hk^2)\,\mathrm{id}_M.$$

In this case, $\mathrm{tr}(A)=NM(2\mu h^2k^2-4\alpha k^2-4\beta h^2)$; so, instead of looking for $L_A$, as in (3.10), we will look for $L_{\hat A}$, where

$$\hat{A}=A-\frac{\mathrm{tr}(A)}{NM}\mathrm{id}_{NM}$$

has a null trace. Proceeding according to Theorem 3.3 with sizes $N_1=N$ and $N_2=M$, we obtain the following decomposition:

$$X_1=\begin{pmatrix}0 & 2\alpha k^2+\gamma hk^2 & & \\ 2\alpha k^2-\gamma hk^2 & 0 & 2\alpha k^2+\gamma hk^2 & \\ & \ddots & \ddots & \ddots\\ & & 2\alpha k^2-\gamma hk^2 & 0\end{pmatrix}\in\mathbb{R}^{N\times N}$$

and

$$X_2=\begin{pmatrix}0 & 2\beta h^2+\delta h^2k & & \\ 2\beta h^2-\delta h^2k & 0 & 2\beta h^2+\delta h^2k & \\ & \ddots & \ddots & \ddots\\ & & 2\beta h^2-\delta h^2k & 0\end{pmatrix}\in\mathbb{R}^{M\times M}.$$

We remark that $\mathrm{tr}(X_1)=\mathrm{tr}(X_2)=0$. Moreover, the residual of the approximation $L_{\hat A}$ of $\hat A$ is $\hat A-L_{\hat A}=0$. In consequence, we can write the original matrix $A$ as

$$A=\frac{\mathrm{tr}(A)}{NM}\mathrm{id}_{NM}+X_1\otimes\mathrm{id}_M+\mathrm{id}_N\otimes X_2.$$

Recall that the first term is

$$\frac{\mathrm{tr}(A)}{NM}\mathrm{id}_{NM}=(2\mu h^2k^2-4\alpha k^2-4\beta h^2)\,\mathrm{id}_{NM}=(2\mu h^2k^2-4\alpha k^2-4\beta h^2)\,\mathrm{id}_N\otimes\mathrm{id}_M;$$

hence, $A$ can be written as

$$A=Z_1\otimes\mathrm{id}_M+\mathrm{id}_N\otimes Z_2,$$

where

$$Z_1=\begin{pmatrix}\mu h^2k^2-2\alpha k^2-2\beta h^2 & 2\alpha k^2+\gamma hk^2 & & \\ 2\alpha k^2-\gamma hk^2 & \mu h^2k^2-2\alpha k^2-2\beta h^2 & 2\alpha k^2+\gamma hk^2 & \\ & \ddots & \ddots & \ddots\\ & & 2\alpha k^2-\gamma hk^2 & \mu h^2k^2-2\alpha k^2-2\beta h^2\end{pmatrix}$$

is an $N\times N$ matrix and

$$Z_2=\begin{pmatrix}\mu h^2k^2-2\alpha k^2-2\beta h^2 & 2\beta h^2+\delta h^2k & & \\ 2\beta h^2-\delta h^2k & \mu h^2k^2-2\alpha k^2-2\beta h^2 & 2\beta h^2+\delta h^2k & \\ & \ddots & \ddots & \ddots\\ & & 2\beta h^2-\delta h^2k & \mu h^2k^2-2\alpha k^2-2\beta h^2\end{pmatrix}$$

is an $M\times M$ matrix.

Now, we can use this representation of $A$ to implement the GROU Algorithm 1, together with the ALS strategy given by Algorithm 2, to solve the following linear system:

$$AU=(Z_1\otimes\mathrm{id}_M+\mathrm{id}_N\otimes Z_2)U=F.$$
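As a sanity check, this representation can be assembled explicitly with kron and solved directly. The following sketch assumes that Z1 (of size N x N), Z2 (of size M x M) and the right-hand side F have already been built from the chosen discretization:

% Assemble the Laplacian-like matrix and solve AU = F by a direct method.
A = kron(Z1, speye(M)) + kron(speye(N), Z2);   % sparse Kronecker assembly
U = A \ F;                                     % reference direct solve
% Alternatively, the factors Z1 and Z2 can be passed to the GROU/ALS
% routines sketched above to obtain U in separated (low-rank) form.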

    This study can be extended to high-dimensional equations, as occurs in [8] with the three-dimensional Poisson equation.

Next, we consider some particular equations to analyze their numerical behavior. In all cases, the characteristics of the computer used are as follows: 11th Gen Intel(R) Core(TM) i7-11370H @ 3.30 GHz, 16 GB RAM, 64-bit operating system, and Matlab version R2021b [13].

Let us consider the particular case of the second-order partial differential equation with $\alpha=\beta=1$, $\mu=c^2$, $\gamma=\delta=0$ and $f=0$, that is,

$$u_{xx}+u_{yy}+c^2u=0.$$

This is the 2D Helmholtz equation. To obtain the linear system associated with the discrete problem, we need some boundary conditions; for example,

$$\begin{cases}u(x,0)=\sin(\omega x)+\cos(\omega x) & \text{for }0\le x\le L,\\ u(0,y)=\sin(\omega y)+\cos(\omega y) & \text{for }0\le y\le T,\end{cases}$$

and

$$\begin{cases}u(x,T)=\sin(\omega(x+T))+\cos(\omega(x+T)) & \text{for }0\le x\le L,\\ u(L,y)=\sin(\omega(y+L))+\cos(\omega(y+L)) & \text{for }0\le y\le T.\end{cases}$$

This initial value problem has a closed-form solution for $\omega=c/\sqrt{2}$,

$$u(x,y)=\sin(\omega(x+y))+\cos(\omega(x+y)).$$

From the above operations, and by taking $h=k$ for simplicity, we can write the matrix of the discrete linear system associated with the Helmholtz equation as

$$A=\begin{pmatrix}2c^2h^4-8h^2 & 2h^2 & & \\ 2h^2 & 2c^2h^4-8h^2 & 2h^2 & \\ & \ddots & \ddots & \ddots\\ & & 2h^2 & 2c^2h^4-8h^2\end{pmatrix}\otimes\mathrm{id}_M+\mathrm{id}_N\otimes\begin{pmatrix}0 & 2h^2 & & \\ 2h^2 & 0 & 2h^2 & \\ & \ddots & \ddots & \ddots\\ & & 2h^2 & 0\end{pmatrix}$$

or, equivalently,

$$A=\begin{pmatrix}c^2h^4-4h^2 & 2h^2 & & \\ 2h^2 & c^2h^4-4h^2 & 2h^2 & \\ & \ddots & \ddots & \ddots\\ & & 2h^2 & c^2h^4-4h^2\end{pmatrix}\otimes\mathrm{id}_M+\mathrm{id}_N\otimes\begin{pmatrix}c^2h^4-4h^2 & 2h^2 & & \\ 2h^2 & c^2h^4-4h^2 & 2h^2 & \\ & \ddots & \ddots & \ddots\\ & & 2h^2 & c^2h^4-4h^2\end{pmatrix}.$$

If we solve this linear system $Au=\hat f$ for the case $c=2$, $L=T=1$, and with $N=M$, we obtain the timing results shown in Figure 2. To carry out this experiment, we used the following parameter values for the GROU Algorithm 1: tol $=2.2204\times10^{-16}$, $\varepsilon=2.2204\times10^{-16}$ and rank_max $=10$ (iter_max $=5$ and $\varepsilon=2.2204\times10^{-16}$ were used for Algorithm 2); the number of nodes in $(0,1)^2$ (that is, the number of rows or columns of the matrix $A$) was increased from $10^2$ to $200^2$.
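For reference, the tridiagonal factor of the Helmholtz system can be generated as follows (a sketch with $c=2$ and $N=M$; taking h = 1/(N+1) assumes the unit square, and the construction of the right-hand side, which encodes the boundary data, is omitted):

% Tridiagonal factor Z of the Helmholtz system A = Z x id + id x Z,
% with diagonal c^2*h^4 - 4*h^2 and off-diagonal entries 2*h^2 (h = k).
c = 2; N = 100; h = 1/(N+1);
e = ones(N,1);
Z = spdiags([2*h^2*e, (c^2*h^4 - 4*h^2)*e, 2*h^2*e], -1:1, N, N);
A = kron(Z, speye(N)) + kron(speye(N), Z);     % N = M here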

Figure 2. CPU time, in seconds, employed to solve the discrete Helmholtz initial value problem by using the Matlab command A\b, the GROU Algorithm 1, and the GROU Algorithm 1 with $A$ written as $L_A$, as obtained from Corollary 3.4.

To measure the goodness of the approximations obtained, we calculated the normalized errors, that is, the norm of the difference between the computed result and the exact solution, divided by the length of the solution vector, i.e.,

$$\varepsilon=\frac{\|u_{\text{exact}}-u_{\text{approx}}\|}{N^2}$$

for the different approximations obtained. The values of these errors were of the order of $10^{-4}$ and can be seen in Figure 3.

Figure 3. Normalized error between the solution of the discrete Helmholtz initial value problem and the solutions obtained by using the Matlab command A\b, the GROU Algorithm 1, and the GROU Algorithm 1 with $A$ written as $L_A$, as obtained from Corollary 3.4.

Now, let us consider the fourth-order partial differential equation

$$\frac{\partial u}{\partial t}=\varepsilon u-\left(1+\frac{\partial^2}{\partial x^2}\right)^2u, \tag{5.1}$$

with the boundary conditions

$$u(x,0)=\sin(kx),\qquad u(x,T)=\sin(kx)e^{-T}\qquad\text{for }0\le x\le L, \tag{5.2}$$

and

$$u(0,t)=u(L,t)=0\qquad\text{for }0\le t\le T. \tag{5.3}$$

For $k=\sqrt{1+\sqrt{\varepsilon+1}}$ and $L=2\pi/k$, the initial value problem (5.1)-(5.3) has the following solution:

$$u(x,t)=\sin(kx)e^{-t}.$$

If we discretize the problem described by (5.1)-(5.3) as in the previous example, with the same step size $h$ for both variables, we obtain a linear system of the form $Au=\hat f$, where $A$, in Laplacian-like form, is the matrix

$$A=\begin{pmatrix}12-8h^2+(2-2\varepsilon)h^4 & 4h^2-8 & 2 & & \\ 4h^2-8 & 12-8h^2+(2-2\varepsilon)h^4 & 4h^2-8 & 2 & \\ 2 & 4h^2-8 & \ddots & \ddots & \ddots\\ & \ddots & \ddots & \ddots & 4h^2-8\\ & & 2 & 4h^2-8 & 12-8h^2+(2-2\varepsilon)h^4\end{pmatrix}\otimes\mathrm{id}_M+\mathrm{id}_N\otimes\begin{pmatrix}0 & h^3 & & \\ -h^3 & 0 & h^3 & \\ & \ddots & \ddots & \ddots\\ & & -h^3 & 0\end{pmatrix},$$

and $\ell=(i-1)M+j$ is the order established for the indices, with $1\le i\le N$ and $1\le j\le M$.

To perform a numerical experiment, we set $\varepsilon=2$, $L=T=2\pi$ and the same number of points for the two variables. At this point, we can solve the linear system associated with the discrete Swift-Hohenberg problem by using our tools: the Matlab command A\b, the GROU Algorithm 1, and the GROU Algorithm 1 together with the ALS Algorithm 2 with $A$ written in Laplacian-like form. In this case, we used the following parameter values in the algorithms: tol $=2.2204\times10^{-16}$, a stopping threshold of $2.2204\times10^{-16}$ and rank_max $=10$ for the GROU Algorithm 1, with iter_max $=5$ for the ALS step; the number of nodes in $(0,2\pi)^2$ was increased from $10^2$ to $200^2$. Figure 4 shows the results obtained.
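Analogously, the Laplacian-like factors of the Swift-Hohenberg system can be assembled as sparse pentadiagonal and tridiagonal matrices; the following sketch follows the matrix displayed above (the sign pattern of the antisymmetric time factor is our reading of the centered difference in $t$):

% Space factor (pentadiagonal) and time factor of the Swift-Hohenberg
% system, A = Zx x id_M + id_N x Zt, for eps = 2 and equal steps h.
epsi = 2; N = 100; M = N; h = 2*pi/(N+1);
e  = ones(N,1);
Zx = spdiags([2*e, (4*h^2-8)*e, (12-8*h^2+(2-2*epsi)*h^4)*e, ...
              (4*h^2-8)*e, 2*e], -2:2, N, N);
Zt = spdiags([-h^3*ones(M,1), zeros(M,1), h^3*ones(M,1)], -1:1, M, M);
A  = kron(Zx, speye(M)) + kron(speye(N), Zt);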

Figure 4. CPU time, in seconds, employed to solve the discrete Swift-Hohenberg initial value problem by using the Matlab command A\b, the GROU Algorithm 1, and the GROU Algorithm 1 with $A$ written in Laplacian-like form.

    Again, we calculated the normalized errors to estimate the goodness of the approximations, the results of which are shown in Figure 5.

Figure 5. Normalized error between the solution of the discrete Swift-Hohenberg initial value problem and the solutions obtained by using the Matlab command A\b, the GROU Algorithm 1, and the GROU Algorithm 1 with $A$ written in Laplacian-like form.

In this work, we have studied the Laplacian decomposition algorithm, which, given any square matrix, calculates its best Laplacian approximation. Furthermore, in Theorem 3.3, we have shown how it can be implemented optimally.

For us, the main interest of this algorithm lies in the computational improvement obtained by combining it with the GROU Algorithm 1 to solve linear systems arising from the discretization of partial differential equations. This improvement can be seen in the different numerical examples shown, where we have compared this procedure with the standard Matlab solver invoked by the instruction A\b.

This proposal is a new way of dealing with certain large-scale problems for which classical methods prove to be inefficient.

    The authors declare that they have not used artificial intelligence tools in the creation of this article.

    J. A. Conejero acknowledges funding from grant PID2021-124618NB-C21, funded by MCIN/AEI/ 10.13039/501100011033, and by "ERDF: A way of making Europe", by the "European Union"; M. Mora-Jiménez was supported by the Generalitat Valenciana and the European Social Fund under grant number ACIF/2020/269; A. Falcó was supported by the MICIN grant number RTI2018-093521-B-C32 and Universidad CEU Cardenal Herrera under grant number INDI22/15.

The authors declare that they have no conflicts of interest.



    [1] J. B. Hu, J. W. Huang, Z. Y. Li, J. X. Wang, T. He, A receiver-driven transport protocol with high link utilization using anti-ecn marking in data center networks, IEEE Trans. Netw. Serv. Manag., 20 (2022), 1898–1912. https://doi.org/10.1109/TNSM.2022.3218343 doi: 10.1109/TNSM.2022.3218343
    [2] J. Wang, Y. Liu, S. Y. Rao, X. Y. Zhou, J. B. Hu, A novel self-adaptive multi-strategy artificial bee colony algorithm for coverage optimization in wireless sensor networks, Ad Hoc Netw., 150 (2023), 103284. https://doi.org/10.1016/j.adhoc.2023.103284 doi: 10.1016/j.adhoc.2023.103284
    [3] H. Singh, S. Tyagi, P. Kumar, S. S. Gill, R. Buyya, Metaheuristics for scheduling of heterogeneous tasks in cloud computing environments: Analysis, performance evaluation, and future directions, Simul. Model. Pract. Theory, 111 (2021), 102353. https://doi.org/10.1016/j.simpat.2021.102353 doi: 10.1016/j.simpat.2021.102353
    [4] R. Buyya, C. S. Yeo, S. Venugopal, J. Broberg, I. Brandic, Cloud computing and emerging it platforms: Vision, hype, and reality for delivering computing as the 5th utility, Future Gener. Comp. Syst., 25 (2009), 599–616. https://doi.org/10.1016/j.future.2008.12.001 doi: 10.1016/j.future.2008.12.001
    [5] B. M. Nguyen, H. T. T. Binh, T. T. Anh, D. B. Son, Evolutionary algorithms to optimize task scheduling problem for the Iot based bag-of-tasks application in cloud-fog computing environment, Appl. Sci., 9 (2019), 1730. https://doi.org/10.3390/app9091730 doi: 10.3390/app9091730
    [6] M. A. Elaziz, I. Attiya, L. Abualigah, M. Iqbal, A. Ali, A. Al-Fuqaha, et al., Hybrid enhanced optimization-based intelligent task scheduling for sustainable edge computing, IEEE Trans. Consum. Electr., 2023, 1. https://doi.org/10.1109/TCE.2023.3321783 doi: 10.1109/TCE.2023.3321783
    [7] I. Attiya, M. A. Elaziz, L. Abualigah, T. N. Nguyen, A. A. A. El-Latif, An improved hybrid swarm intelligence for scheduling iot application tasks in the cloud, IEEE Trans. Ind. Inform., 18 (2022), 6264–6272. https://doi.org/10.1109/TII.2022.3148288 doi: 10.1109/TII.2022.3148288
    [8] M. R. Raju, S. K. Mothku, Delay and energy aware task scheduling mechanism for fog-enabled iot applications: A reinforcement learning approach, Comput. Netw., 224 (2023), 109603. https://doi.org/10.1016/j.comnet.2023.109603 doi: 10.1016/j.comnet.2023.109603
    [9] M. A. A. Al-qaness, A. A. Ewees, H. Fan, L. Abualigah, M. A. Elaziz, Boosted ANFIS model using augmented marine predator algorithm with mutation operators for wind power forecasting, Appl. Energy, 314 (2022), 118851. https://doi.org/10.1016/j.apenergy.2022.118851 doi: 10.1016/j.apenergy.2022.118851
    [10] T. Li, S. Fong, R. C. Millham, J. Fiaidhi, S. Mohammed, Fast incremental learning with swarm decision table and stochastic feature selection in an iot extreme automation environment, IT Prof., 21 (2019), 14–26. https://doi.org/10.1109/MITP.2019.2900016 doi: 10.1109/MITP.2019.2900016
    [11] M. A. A. Al-qaness, A. M. Helmi, A. Dahou, M. A. Elaziz, The applications of metaheuristics for human activity recognition and fall detection using wearable sensors: A comprehensive analysis, Biosensors, 12 (2022), 821. https://doi.org/10.3390/bios12100821 doi: 10.3390/bios12100821
    [12] M. A. Elaziz, M. A. A. Al-qaness, A. Dahou, R. A. Ibrahim, A. A. A. El-Latif, Intrusion detection approach for cloud and iot environments using deep learning and capuchin search algorithm, Adv. Eng. Softw., 176 (2023), 103402. https://doi.org/10.1016/j.advengsoft.2022.103402 doi: 10.1016/j.advengsoft.2022.103402
    [13] S. N. Ghorpade, M. Zennaro, B. S. Chaudhari, R. A. Saeed, H. Alhumyani, S. Abdel-Khalek, Enhanced differential crossover and quantum particle swarm optimization for iot applications, IEEE Access, 9 (2021), 93831–93846. https://doi.org/10.1109/ACCESS.2021.3093113 doi: 10.1109/ACCESS.2021.3093113
    [14] G. Agarwal, S. Gupta, R. Ahuja, A. K. Rai, Multiprocessor task scheduling using multi-objective hybrid genetic algorithm in fog-cloud computing, Knowl. Based Syst., 272 (2023), 110563. https://doi.org/10.1016/j.knosys.2023.110563 doi: 10.1016/j.knosys.2023.110563
    [15] W. B. Sun, J. Xie, X. Yang, L. Wang, W. X. Meng, Efficient computation offloading and resource allocation scheme for opportunistic access fog-cloud computing networks, IEEE Trans. Cogn. Commun. Netw., 9 (2023), 521–533. https://doi.org/10.1109/TCCN.2023.3234290 doi: 10.1109/TCCN.2023.3234290
[16] B. Jana, M. Chakraborty, T. Mandal, A task scheduling technique based on particle swarm optimization algorithm in cloud environment, In: Soft computing: Theories and applications, Singapore: Springer, 742 (2019), 525–536. https://doi.org/10.1007/978-981-13-0589-4_49
    [17] A. Pradhan, S. K. Bisoy, A. Das, A survey on PSO based meta-heuristic scheduling mechanism in cloud computing environment, J. King Saud Univ. Comput. Inform. Sci., 34 (2022), 4888–4901. https://doi.org/10.1016/j.jksuci.2021.01.003 doi: 10.1016/j.jksuci.2021.01.003
    [18] F. Al-Turjman, M. Z. Hasan, H. Al-Rizzo, Task scheduling in cloud-based survivability applications using swarm optimization in Iot, Trans. Emerg. Telecommun. Technol., 30 (2019), e3539. http://doi.org/10.1002/ett.3539 doi: 10.1002/ett.3539
    [19] A. M. S. Kumar, M. Venkatesan, Multi-objective task scheduling using hybrid genetic-ant colony optimization algorithm in cloud environment, Wireless Pers. Commun., 107 (2019), 1835–1848. http://doi.org/10.1007/s11277-019-06360-8 doi: 10.1007/s11277-019-06360-8
    [20] M. A. Elaziz, I. Attiya, An improved Henry gas solubility optimization algorithm for task scheduling in cloud computing, Artif. Intell. Rev., 54 (2021), 3599–3637. http://doi.org/10.1007/s10462-020-09933-3 doi: 10.1007/s10462-020-09933-3
    [21] A. Mohammadzadeh, M. Masdari, F. S. Gharehchopogh, Energy and cost-aware workflow scheduling in cloud computing data centers using a multi-objective optimization algorithm, J. Netw. Syst. Manag., 29 (2021), 31. http://doi.org/10.1007/s10922-021-09599-4 doi: 10.1007/s10922-021-09599-4
    [22] M. A. Elaziz, L. Abualigah, R. A. Ibrahim, I. Attiya, Iot workflow scheduling using intelligent arithmetic optimization algorithm in fog computing, Comput. Intel. Neurosc., 2021 (2021), 9114113. https://doi.org/10.1155/2021/9114113 doi: 10.1155/2021/9114113
    [23] N. Arivazhagan, K. Somasundaram, D. V. Babu, M. G. Nayagam, R. M. Bommi, G. B. Mohammad, et al., Cloud-internet of health things (IOHT) task scheduling using hybrid moth flame optimization with deep neural network algorithm for e healthcare systems, Sci. Program., 2022 (2022), 4100352. https://doi.org/10.1155/2022/4100352 doi: 10.1155/2022/4100352
    [24] B. B. Naik, D. Singh, A. B. Samaddar, Multi-objective virtual machine selection in cloud data centers using optimized scheduling, Wireless Pers. Commun., 116 (2021), 2501–2524. https://doi.org/10.1007/s11277-020-07807-z doi: 10.1007/s11277-020-07807-z
    [25] N. Arora, R. K. Banyal, Workflow scheduling using particle swarm optimization and gray wolf optimization algorithm in cloud computing, Concurr. Comput. Pract. Exper., 33 (2021), e6281. https://doi.org/10.1002/cpe.6281 doi: 10.1002/cpe.6281
    [26] S. Goyal, S. Bhushan, Y. Kumar, A. ul H. S. Rana, M. R. Bhutta, M. F. Ijaz, et al., An optimized framework for energy-resource allocation in a cloud environment based on the whale optimization algorithm, Sensors, 21 (2021), 1583. https://doi.org/10.3390/s21051583 doi: 10.3390/s21051583
    [27] D. Alsadie, TSMGWO: Optimizing task schedule using multi-objectives grey wolf optimizer for cloud data centers, IEEE Access, 9 (2021), 37707–37725. https://doi.org/10.1109/ACCESS.2021.3063723 doi: 10.1109/ACCESS.2021.3063723
    [28] W. Zhao, L. Wang, S. Mirjalili, Artificial hummingbird algorithm: A new bio-inspired optimizer with its engineering applications, Comput. Methods Appl. Mech. Eng., 388 (2022), 114194. https://doi.org/10.1016/j.cma.2021.114194 doi: 10.1016/j.cma.2021.114194
    [29] N. Chopra, M. M. Ansari, Golden jackal optimization: A novel nature-inspired optimizer for engineering applications, Expert Syst. Appl., 198 (2022), 116924. https://doi.org/10.1016/j.eswa.2022.116924 doi: 10.1016/j.eswa.2022.116924
    [30] I. Attiya, L. Abualigah, D. Elsadek, S. A. Chelloug, M. A. Elaziz, An intelligent chimp optimizer for scheduling of Iot application tasks in fog computing, Mathematics, 10 (2022), 1100. https://doi.org/10.3390/math10071100 doi: 10.3390/math10071100
    [31] I. Attiya, X. Zhang, X. Yang, TCSA: A dynamic job scheduling algorithm for computational grids, In: 2016 First IEEE international conference on computer communication and the internet (ICCCI), 2016,408–412. https://doi.org/10.1109/CCI.2016.7778954
[32] M. Azizi, Atomic orbital search: A novel metaheuristic algorithm, Appl. Math. Modell., 93 (2021), 657–683. https://doi.org/10.1016/j.apm.2020.12.021 doi: 10.1016/j.apm.2020.12.021
    [33] A. Faramarzi, M. Heidarinejad, S. Mirjalili, A. H. Gandomi, Marine predators algorithm: A nature-inspired metaheuristic, Expert Syst. Appl., 152 (2020), 113377. https://doi.org/10.1016/j.eswa.2020.113377 doi: 10.1016/j.eswa.2020.113377
    [34] L. Abualigah, A. Diabat, S. Mirjalili, M. A. Elaziz, A. H. Gandomi, The arithmetic optimization algorithm, Comput. Methods Appl. Mech. Engrg., 376 (2021), 113609. https://doi.org/10.1016/j.cma.2020.113609 doi: 10.1016/j.cma.2020.113609
    [35] I. Attiya, L. Abualigah, S. Alshathri, D. Elsadek, M. A. Elaziz, Dynamic jellyfish search algorithm based on simulated annealing and disruption operators for global optimization with applications to cloud task scheduling, Mathematics, 10 (2022), 1894. https://doi.org/10.3390/math10111894 doi: 10.3390/math10111894
    [36] M. S. Braik, Chameleon swarm algorithm: A bio-inspired optimizer for solving engineering design problems, Expert Syst. Appl., 174 (2021), 114685. https://doi.org/10.1016/j.eswa.2021.114685 doi: 10.1016/j.eswa.2021.114685
  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)