



    A matrix $A\in\mathbb{R}^{n\times n}$ is said to be stable if all its eigenvalues lie in the open left half-plane, i.e., all the eigenvalues of $A$ have negative real parts. It is well known that a matrix $A$ is stable if and only if there is a positive definite matrix $P$ such that $A^{T}P+PA$ is negative definite. This implies that $V(x)=x^{T}Px$ serves as a quadratic Lyapunov function for the asymptotically stable linear system

    $$\dot{x}=Ax.$$
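    This Lyapunov criterion is easy to illustrate numerically. The following is a pure-Python sketch of our own (the matrices are illustrative, not taken from the paper): for the stable matrix $A$ below, $P=I$ already certifies stability, since $A^{T}P+PA=A^{T}+A$ is negative definite.

```python
# Sketch: verify a quadratic Lyapunov certificate for a stable 2x2 matrix.
# Illustrative example, not from the paper.

def transpose(M):
    return [list(row) for row in zip(*M)]

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(len(N)))
             for j in range(len(N[0]))] for i in range(len(M))]

def add(M, N):
    return [[M[i][j] + N[i][j] for j in range(len(M[0]))] for i in range(len(M))]

def is_negative_definite_2x2(S):
    # A symmetric 2x2 matrix is negative definite iff S[0][0] < 0 and det(S) > 0.
    return S[0][0] < 0 and S[0][0] * S[1][1] - S[0][1] * S[1][0] > 0

A = [[-2.0, 1.0], [0.0, -3.0]]   # stable: eigenvalues -2 and -3
P = [[1.0, 0.0], [0.0, 1.0]]     # candidate Lyapunov solution
S = add(matmul(transpose(A), P), matmul(P, A))  # A^T P + P A
print(is_negative_definite_2x2(S))  # True: V(x) = x^T P x decreases along trajectories
```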

    In this paper, we consider only real square matrices. Let $A$ be a real $n\times n$ matrix. $A\succ 0$ ($A\succeq 0$, resp.) means $A$ is a symmetric positive definite (semidefinite, resp.) matrix.

    For the sake of convenience, we will adopt the concept of positive stability. A matrix $A\in\mathbb{R}^{n\times n}$ is defined as positive stable if all its eigenvalues possess positive real parts. Clearly, if $A$ is positive stable, then $-A$ is stable. Therefore, results on positive stability can be translated into terms of stability.

    It is well established that a matrix $A\in\mathbb{R}^{n\times n}$ is positive stable if and only if there exists $P\succ 0$ in $\mathbb{R}^{n\times n}$ such that

    $$A^{T}P+PA\succ 0. \qquad (1.1)$$

    In this case, $P$ is known as a Lyapunov solution for $A$, or to the Lyapunov inequality (1.1). Several numerical methods have been developed to address the problem of finding such matrices $P$ [1,2,3].

    A particular case emerges from (1.1) when a positive diagonal matrix D satisfies the Lyapunov inequality. If so, D is called a Lyapunov diagonal solution for A. Furthermore, we say that A is a Lyapunov diagonally stable matrix. The problem of Lyapunov diagonal stability is well investigated in the literature ([4,5,6,7,8,9] and the references therein). The importance of this problem is due to its applications in, most significantly, population dynamics [10], communication networks [11], and systems theory [12].

    Another case of (1.1), known as Lyapunov $\alpha$-scalar stability, appeared in [13]. For a partition $\alpha=\{\alpha_1,\ldots,\alpha_s\}$ of the set $\{1,\ldots,n\}$, the diagonal solution $D$ has an $\alpha$-scalar structure, i.e., $D[\alpha_i]=c_iI$, $c_i\in\mathbb{R}$, $i=1,\ldots,s$, where $D[\alpha_i]$ is the principal submatrix of $D$ on row and column indices $\alpha_i$. A set $\alpha=\{\alpha_1,\ldots,\alpha_s\}$, $1\le s\le n$, is said to be a partition of $\{1,\ldots,n\}$ if for all $i,j\in\{1,\ldots,s\}$ with $i\ne j$, $\alpha_i\ne\emptyset$, $\alpha_i\cap\alpha_j=\emptyset$, and $\alpha_1\cup\cdots\cup\alpha_s=\{1,\ldots,n\}$. We assume, without loss of generality, that these $\alpha_i$'s are taken to have contiguous indices, because our results are applicable under simultaneous row and column permutations.

    For brevity, if $A\in\mathbb{R}^{k\times k}$ is a Lyapunov diagonally stable matrix, we will write $A\in LDS_k$. Similarly, we write $A\in LDS_k^{\alpha}$ if $A$ is Lyapunov $\alpha$-scalar stable.

    A recent generalization of Lyapunov diagonal stability to a family of real matrices of the same size, $\mathcal{A}=\{A^{(i)}\}_{i=1}^{r}$, has drawn significant interest [14,15,16,17,18]. This extension studies the existence of a diagonal matrix $D\succ 0$ satisfying

    $$(A^{(i)})^{T}D+DA^{(i)}\succ 0,\quad i=1,\ldots,r. \qquad (1.2)$$

    If such a matrix $D$ exists, it is known as a common Lyapunov diagonal solution for $\mathcal{A}$, or to (1.2). Consequently, we say $\mathcal{A}$ has common Lyapunov diagonal stability. From this definition, it is clear that common Lyapunov diagonal stability can be interpreted as simultaneous Lyapunov diagonal stability for the matrices in $\mathcal{A}$. The existence of a common Lyapunov diagonal solution $D$ for $\mathcal{A}$ implies that $V(x)=x^{T}Dx$ acts as a common diagonal Lyapunov function for the collection of asymptotically stable linear systems

    $$\dot{x}=A^{(i)}x,\quad i=1,\ldots,r.$$

    An immediate observation here is that when $\mathcal{A}=\{A\}$, i.e., $\mathcal{A}$ is a singleton, $\mathcal{A}\in CLDS$ is equivalent to $A\in LDS$, and $\mathcal{A}\in CLDS^{\alpha}$ is equivalent to $A\in LDS^{\alpha}$. Additionally, it is worth mentioning that the cardinality of $\mathcal{A}$ is not relevant. For convenience, we shall fix it to be $r$ throughout the rest of this note.

    Applications of common Lyapunov diagonal stability have been found in the fields of large-scale dynamics [19,20,21,22], as well as in the study of interconnected time-varying and switched systems [18]. Beyond these practical applications, common Lyapunov diagonal stability is also a significant research topic in itself, as evidenced by works such as [14,16,17,23].

    Let $A\in\mathbb{R}^{n\times n}$ be a nonsingular matrix. In [24], Redheffer proved that $A\in LDS_n$ if and only if the $(n-1)\times(n-1)$ leading principal submatrices of $A$ and $A^{-1}$ have a common Lyapunov diagonal solution. This result has been restated in [16,23] using the notion of Schur complements. The new statement is free of the nonsingularity condition. Specifically, it was shown that $A\in LDS_n$ if and only if $a_{nn}>0$ and the $(n-1)\times(n-1)$ leading principal submatrix of $A$ and its Schur complement have a common Lyapunov diagonal solution.
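    The Schur complement appearing in this restated result can be computed directly. Below is a small pure-Python helper of our own (the matrix and function name are illustrative, not from the paper); it forms the Schur complement of the $(n,n)$ entry, $A_{11}-A_{12}\,a_{nn}^{-1}A_{21}$, where $A_{11}$ is the leading $(n-1)\times(n-1)$ block.

```python
# Sketch: Schur complement of the (n,n) entry of a square matrix,
# assuming a_nn != 0. Illustrative helper, not code from the paper.

def schur_complement_last(A):
    n = len(A)
    ann = A[n - 1][n - 1]
    # (A / a_nn)_{ij} = a_ij - a_in * a_nj / a_nn
    return [[A[i][j] - A[i][n - 1] * A[n - 1][j] / ann
             for j in range(n - 1)] for i in range(n - 1)]

A = [[4.0, 1.0, 2.0],
     [0.0, 3.0, 1.0],
     [1.0, 0.0, 2.0]]
print(schur_complement_last(A))  # [[3.0, 1.0], [-0.5, 3.0]]
```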

    For any vectors $u,v\in\mathbb{R}^n$, when we write $u\gg v$, it means $u_i>v_i$ for all $i\in\{1,\ldots,n\}$. For a matrix $A\in\mathbb{R}^{n\times n}$, the vector $u\in\mathbb{R}^n$ with $u_i=a_{ii}$, for $i=1,\ldots,n$, is denoted by $\mathrm{diag}(A)$. We denote the identity matrix in $\mathbb{R}^{k\times k}$ by $I_k$ and the matrix of all ones in $\mathbb{R}^{k\times k}$ by $J_k$.

    Let $A=[a_{ij}],B=[b_{ij}]\in\mathbb{R}^{n\times n}$ and $C\in\mathbb{R}^{m\times m}$. The Hadamard product of $A$ and $B$ is denoted by $A\circ B=[a_{ij}b_{ij}]\in\mathbb{R}^{n\times n}$. The Kronecker product of $A$ and $C$ is denoted by $A\otimes C=[a_{ij}C]\in\mathbb{R}^{nm\times nm}$.
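    The two products just defined can be sketched in a few lines of pure Python (list-of-lists matrices; this is our own illustration, assuming square inputs, not code from the paper):

```python
# Minimal sketches of the Hadamard and Kronecker products for square matrices.

def hadamard(A, B):
    # (A o B)_{ij} = a_ij * b_ij, for A, B of the same size
    return [[A[i][j] * B[i][j] for j in range(len(A[0]))] for i in range(len(A))]

def kronecker(A, C):
    # A (x) C is the block matrix [a_ij * C]; here A is n x n and C is m x m
    n, m = len(A), len(C)
    return [[A[i // m][j // m] * C[i % m][j % m]
             for j in range(n * m)] for i in range(n * m)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
print(hadamard(A, B))   # [[0, 2], [3, 0]]
C = [[1, 0], [0, 1]]
print(kronecker(A, C))  # the 4x4 block matrix [a_ij * I_2]
```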

    Let $n,m\in\mathbb{N}$ with $n\le m$, and for $i=1,\ldots,n$, let $m_i\in\mathbb{N}$ be such that $m=m_1+\cdots+m_n$. Then, a matrix $S\in\mathbb{R}^{m\times m}$ is an $n$ by $n$ block matrix if it is partitioned into blocks that conform with the $m_i$, $i=1,\ldots,n$. Moreover, we denote the $m_j\times m_k$ block of $S$ by $S_{jk}$. Similarly, a vector $u\in\mathbb{R}^m$ is called an $n$-block vector if it is partitioned into $n$ subvectors, i.e., $u^{T}=[u_{m_1}^{T}\,\cdots\,u_{m_n}^{T}]$, where $u_{m_i}\in\mathbb{R}^{m_i}$, $i=1,\ldots,n$. Throughout this note, it is assumed that $n$, $m$, and all $m_i$, $i=1,\ldots,n$, are natural numbers with $n\le m$ and $m=m_1+\cdots+m_n$.

    The Khatri-Rao product of a matrix $A=[a_{ij}]\in\mathbb{R}^{n\times n}$ and an $n$ by $n$ block matrix $B=[B_{ij}]\in\mathbb{R}^{m\times m}$ is defined as $A\ast B=[a_{ij}B_{ij}]\in\mathbb{R}^{m\times m}$. Similarly, let $v=[v_i]\in\mathbb{R}^n$ and let $u=[u_{m_i}]\in\mathbb{R}^m$ be an $n$-block vector; then $v\ast u=[v_iu_{m_i}]\in\mathbb{R}^m$.
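    A pure-Python sketch of the Khatri-Rao product may make the block scaling concrete (our own helper and naming, not code from the paper; the example reuses the shapes of Example 2.1 below, $n=2$, $m=3$, $m_1=2$, $m_2=1$, with illustrative entries):

```python
# Sketch: Khatri-Rao product A * B of an n x n matrix A with an n-by-n
# block matrix B whose block sizes are given in `sizes`.

def khatri_rao(A, B, sizes):
    # offsets of the block boundaries inside B
    offs = [0]
    for s in sizes:
        offs.append(offs[-1] + s)
    m = offs[-1]
    out = [[0.0] * m for _ in range(m)]
    n = len(A)
    for bi in range(n):
        for bj in range(n):
            for k in range(sizes[bi]):
                for l in range(sizes[bj]):
                    r, c = offs[bi] + k, offs[bj] + l
                    out[r][c] = A[bi][bj] * B[r][c]  # a_ij * (B_ij)_{kl}
    return out

A = [[2.0, 1.0], [0.0, 3.0]]
S = [[1.0, 0.0, 0.5],
     [0.0, 1.0, 0.2],
     [0.5, 0.2, 1.0]]
print(khatri_rao(A, S, [2, 1]))
# [[2.0, 0.0, 0.5], [0.0, 2.0, 0.2], [0.0, 0.0, 3.0]]
```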

    Suppose that $\alpha\subseteq\{1,\ldots,k\}$; $|\alpha|$ is the cardinality of $\alpha$ and $\alpha^{c}=\{1,\ldots,k\}\setminus\alpha$. We denote the principal submatrix of $A$ obtained by selecting rows and columns indexed by $\alpha$ as $A[\alpha]$. Similarly, for a vector $u\in\mathbb{R}^k$, $u[\alpha]$ represents the subvector of $u$ containing only the elements indexed by $\alpha$.

    Lemma 1.1. ([25, Corollary 4.2.13]) Let $A\in\mathbb{R}^{n\times n}$ and $B\in\mathbb{R}^{m\times m}$. If $A$ and $B$ are both positive semidefinite matrices, then $A\otimes B\in\mathbb{R}^{nm\times nm}$ is also a positive semidefinite matrix.

    Lemma 1.2. ([26, Theorem 3.1]) Let $A\in\mathbb{R}^{n\times n}$ be a positive definite matrix. If $B\in\mathbb{R}^{m\times m}$ is a positive semidefinite $n$ by $n$ block matrix with $B_{ii}\succ 0$, $i=1,\ldots,n$, then $A\ast B\succ 0$.

    For the remainder of this note, $Q=[q_{ij}]\in\mathbb{R}^{m\times m}$ denotes the nonzero $n$ by $n$ block matrix defined as

    $$(Q_{ij})_{kl}=\begin{cases}1 & \text{if } k=l,\\ 0 & \text{if } k\neq l.\end{cases} \qquad (1.3)$$

    Additionally, for an $n$ by $n$ block matrix $B\in\mathbb{R}^{m\times m}$, we define the matrix $T(B)\in\mathbb{R}^{n\times n}$ such that $(T(B))_{ij}=(B_{ij})_{11}$.

    Now, let us recall the definition of a P-matrix. A matrix whose principal minors are all positive is known as a P-matrix. A well-known characterization for P-matrices in the context of real matrices is given next.

    Lemma 1.3. ([27, Theorem 3.3]) A matrix $A\in\mathbb{R}^{n\times n}$ is a P-matrix if and only if, for every nonzero $u\in\mathbb{R}^n$, there is some $i\in\{1,\ldots,n\}$ such that $u_i(Au)_i>0$.
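    For small matrices, the defining property (all principal minors positive) can be checked by brute force. The sketch below is our own illustration (names and example matrices are not from the paper), with determinants computed by Laplace expansion:

```python
# Sketch: brute-force P-matrix test by enumerating all principal minors.
from itertools import combinations

def det(M):
    # Laplace expansion along the first row; fine for small matrices
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] *
               det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def is_P_matrix(A):
    n = len(A)
    return all(det([[A[i][j] for j in alpha] for i in alpha]) > 0
               for k in range(1, n + 1)
               for alpha in combinations(range(n), k))

A = [[2.0, 1.0], [0.0, 3.0]]   # triangular with positive diagonal: a P-matrix
B = [[0.0, 1.0], [-1.0, 2.0]]  # has a zero principal minor: not a P-matrix
print(is_P_matrix(A), is_P_matrix(B))  # True False
```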

    Motivated by Lemma 1.3, a generalization of the concept of P-matrices to Pα-matrices has been developed in [13].

    Definition 1.1. Let $A\in\mathbb{R}^{n\times n}$ and let $\alpha=\{\alpha_1,\ldots,\alpha_s\}$ be a partition of $\{1,\ldots,n\}$. Then, $A$ is a $P^{\alpha}$-matrix if, for every nonzero $u\in\mathbb{R}^n$, there is some $k\in\{1,\ldots,s\}$ such that $u[\alpha_k]^{T}(Au)[\alpha_k]>0$.

    For $A\in\mathbb{R}^{k\times k}$, $A\in P_k$ indicates that $A$ is a P-matrix, while $A\in P_k^{\alpha}$ means that $A$ is a $P^{\alpha}$-matrix.

    Using the characterization in Lemma 1.3 and Definition 1.1, the P-matrix and $P^{\alpha}$-matrix properties were extended in [17,28], respectively, to consider a family of real matrices.

    Definition 1.2. Let $\mathcal{A}=\{A^{(i)}\}_{i=1}^{r}$ be a family of real $n\times n$ matrices. Then, $\mathcal{A}$ is called a P-set, and we write $\mathcal{A}\in P_n$, if for any family of vectors $\{u^{(i)}\}_{i=1}^{r}$ in $\mathbb{R}^n$, not all being zero, there is some $k\in\{1,\ldots,n\}$ such that

    $$\sum_{i=1}^{r}u_k^{(i)}(A^{(i)}u^{(i)})_k>0.$$

    Definition 1.3. Let $\mathcal{A}=\{A^{(i)}\}_{i=1}^{r}$ be a family of real $n\times n$ matrices and $\alpha=\{\alpha_1,\ldots,\alpha_s\}$ be a partition of $\{1,\ldots,n\}$. Then, $\mathcal{A}$ is called a $P^{\alpha}$-set, and we write $\mathcal{A}\in P_n^{\alpha}$, if for any family of vectors $\{u^{(i)}\}_{i=1}^{r}$ in $\mathbb{R}^n$, not all being zero, there is some $k\in\{1,\ldots,s\}$ such that

    $$\sum_{i=1}^{r}u^{(i)}[\alpha_k]^{T}(A^{(i)}u^{(i)})[\alpha_k]>0.$$

    Theorem 1.4. ([14, Theorem 2]) Let $\mathcal{A}=\{A^{(i)}\}_{i=1}^{r}$ be a family of matrices such that $A^{(i)}\in\mathbb{R}^{n\times n}$, $i=1,\ldots,r$. Then, $\mathcal{A}\in CLDS_n$ if and only if the matrix

    $$\sum_{i=1}^{r}A^{(i)}H^{(i)}$$

    has a positive diagonal entry for any $H^{(i)}\succeq 0$ in $\mathbb{R}^{n\times n}$, $i=1,\ldots,r$, not all being zero.

    Theorem 1.5. ([17, Theorem 2.5]) Let $\mathcal{A}=\{A^{(i)}\}_{i=1}^{r}$ be a family of matrices such that $A^{(i)}\in\mathbb{R}^{n\times n}$, $i=1,\ldots,r$, and let $e$ denote the vector of all ones in $\mathbb{R}^n$. Then, the following are equivalent:

    (i) $\mathcal{A}\in CLDS_n$.

    (ii) $\{A^{(i)}\circ S^{(i)}\}_{i=1}^{r}\in CLDS_n$ for all $S^{(i)}\succeq 0$, with $\mathrm{diag}(S^{(i)})\gg 0$ for $i=1,\ldots,r$.

    (iii) $\{A^{(i)}\circ S^{(i)}\}_{i=1}^{r}\in CLDS_n$ for all $S^{(i)}\succeq 0$, with $\mathrm{diag}(S^{(i)})=e$ for $i=1,\ldots,r$.

    (iv) $\{A^{(i)}\circ S^{(i)}\}_{i=1}^{r}\in P_n$ for all $S^{(i)}\succeq 0$, with $\mathrm{diag}(S^{(i)})\gg 0$ for $i=1,\ldots,r$.

    (v) $\{A^{(i)}\circ S^{(i)}\}_{i=1}^{r}\in P_n$ for all $S^{(i)}\succeq 0$, with $\mathrm{diag}(S^{(i)})=e$ for $i=1,\ldots,r$.

    The above two theorems provide characterizations for common Lyapunov diagonal stability. Theorem 1.4 extends Theorem 1 from [4], while Theorem 1.5 is inspired by the work of Kraaijevanger [8]. The primary objective of our work is to offer additional characterizations that enhance and unify the existing results in the literature.

    We begin this section with a lemma that gives a necessary condition for the common Lyapunov diagonal stability.

    Lemma 2.1. ([17, Theorem 2.3]) Let $\mathcal{A}=\{A^{(i)}\}_{i=1}^{r}$ be a family of real $n\times n$ matrices. If $\mathcal{A}\in CLDS_n$, then $\mathcal{A}\in P_n$.

    Next, we demonstrate that if a family of matrices A of the same size forms a P-set, then any family of principal submatrices of A obtained by deleting the same rows and columns also forms a P-set.

    Lemma 2.2. Let $\mathcal{A}=\{A^{(i)}\}_{i=1}^{r}$ be a family of real $n\times n$ matrices and $\alpha\subseteq\{1,\ldots,n\}$. If $\mathcal{A}\in P_n$, then $\mathcal{B}=\{A^{(i)}[\alpha]\}_{i=1}^{r}\in P_{|\alpha|}$.

    Proof. For $i=1,\ldots,r$, let $v^{(i)}\in\mathbb{R}^{|\alpha|}$, not all being zero. Then, for each $i$, construct $u^{(i)}\in\mathbb{R}^n$ such that $u^{(i)}[\alpha]=v^{(i)}$ and $u^{(i)}[\alpha^{c}]=0$. Clearly, not all these $u^{(i)}$'s are zero vectors since not all $v^{(i)}$'s are zero. Hence, since $\mathcal{A}\in P_n$, there is some $k\in\{1,\ldots,n\}$ such that

    $$\sum_{i=1}^{r}u_k^{(i)}(A^{(i)}u^{(i)})_k>0.$$

    Observe that $k$ must belong to $\alpha$, say $k$ is the $l$-th element of $\alpha$, for otherwise the above summation would equal zero; then, for each $i$, $u_k^{(i)}=v_l^{(i)}$ and $(A^{(i)}u^{(i)})_k=(A^{(i)}[\alpha]v^{(i)})_l$. From this observation, we obtain that

    $$\sum_{i=1}^{r}v_l^{(i)}(A^{(i)}[\alpha]v^{(i)})_l>0.$$

    Therefore, by Definition 1.2, $\mathcal{B}\in P_{|\alpha|}$. $\square$

    We are now ready to present our main theorem.

    Theorem 2.3. Let $\mathcal{A}=\{A^{(i)}\}_{i=1}^{r}$ be a family of matrices such that $A^{(i)}\in\mathbb{R}^{n\times n}$ for $i=1,\ldots,r$. Then, the following are equivalent:

    (i) $\mathcal{A}\in CLDS_n$.

    (ii) $\{A^{(i)}\ast S^{(i)}\}_{i=1}^{r}\in CLDS_m$ for all $n$ by $n$ block matrices $S^{(i)}\succeq 0$ in $\mathbb{R}^{m\times m}$ with $S_{jj}^{(i)}\succ 0$ for all $i=1,\ldots,r$ and $j=1,\ldots,n$.

    (iii) $\{A^{(i)}\ast S^{(i)}\}_{i=1}^{r}\in CLDS_m$ for all $n$ by $n$ block matrices $S^{(i)}\succeq 0$ in $\mathbb{R}^{m\times m}$ with $S_{jj}^{(i)}=I_{m_j}$ for all $i=1,\ldots,r$ and $j=1,\ldots,n$.

    (iv) $\{A^{(i)}\ast S^{(i)}\}_{i=1}^{r}\in P_m$ for all $n$ by $n$ block matrices $S^{(i)}\succeq 0$ in $\mathbb{R}^{m\times m}$ with $S_{jj}^{(i)}\succ 0$ for all $i=1,\ldots,r$ and $j=1,\ldots,n$.

    (v) $\{A^{(i)}\ast S^{(i)}\}_{i=1}^{r}\in P_m$ for all $n$ by $n$ block matrices $S^{(i)}\succeq 0$ in $\mathbb{R}^{m\times m}$ with $S_{jj}^{(i)}=I_{m_j}$ for all $i=1,\ldots,r$ and $j=1,\ldots,n$.

    Proof. It is trivial to see that (ii) implies (iii) and (iv) implies (v). Moreover, from Lemma 2.1, it is clear that (ii) implies (iv) and (iii) implies (v). Hence, to finish the proof, we show that (i) implies (ii) and (v) implies (i).

    (i)$\Rightarrow$(ii): Suppose that $D\succ 0$ in $\mathbb{R}^{n\times n}$ is a common Lyapunov diagonal solution for $\mathcal{A}$. Then, for $i=1,\ldots,r$, we have $(A^{(i)})^{T}D+DA^{(i)}\succ 0$. Let $\{S^{(i)}\}_{i=1}^{r}$ be any family of positive semidefinite $n$ by $n$ block matrices in $\mathbb{R}^{m\times m}$ with $S_{jj}^{(i)}\succ 0$ for $j=1,\ldots,n$ and $i=1,\ldots,r$. Hence, according to Lemma 1.2, we have

    $$\left((A^{(i)})^{T}D+DA^{(i)}\right)\ast S^{(i)}\succ 0, \qquad (2.1)$$

    for $i=1,\ldots,r$. Since we have

    $$\left((A^{(i)})^{T}D+DA^{(i)}\right)\ast S^{(i)}=\left((A^{(i)})^{T}D\right)\ast S^{(i)}+\left(DA^{(i)}\right)\ast S^{(i)},$$

    it follows from (2.1) that

    $$\left((A^{(i)})^{T}D\right)\ast S^{(i)}+\left(DA^{(i)}\right)\ast S^{(i)}\succ 0. \qquad (2.2)$$

    Now, observe that

    $$\left(DA^{(i)}\right)\ast S^{(i)}=(D\ast I_m)(A^{(i)}\ast S^{(i)}),$$

    and

    $$\left((A^{(i)})^{T}D\right)\ast S^{(i)}=(A^{(i)}\ast S^{(i)})^{T}(D\ast I_m)$$

    for each $i$, where $I_m\in\mathbb{R}^{m\times m}$ is the identity matrix partitioned into $n$ by $n$ blocks. Using these observations, it follows from (2.2) that

    $$(A^{(i)}\ast S^{(i)})^{T}(D\ast I_m)+(D\ast I_m)(A^{(i)}\ast S^{(i)})\succ 0,$$

    for $i=1,\ldots,r$. Clearly, the diagonal matrix $D\ast I_m\in\mathbb{R}^{m\times m}$ is positive definite. Hence, (ii) follows.

    (v)$\Rightarrow$(i): For $i=1,\ldots,r$, let $X^{(i)}=[x_{kl}^{(i)}]\succeq 0$ in $\mathbb{R}^{n\times n}$, not all being zero. Now, set $D^{(i)}$ to be the diagonal matrix whose diagonal elements are $d_{kk}^{(i)}=\sqrt{x_{kk}^{(i)}}$, for all $i=1,\ldots,r$ and $k=1,\ldots,n$. Thus, for each $i$, we can write $X^{(i)}=D^{(i)}S^{(i)}D^{(i)}$ for some $S^{(i)}=[s_{kl}^{(i)}]\succeq 0$ in $\mathbb{R}^{n\times n}$ with $s_{kk}^{(i)}=1$, $k=1,\ldots,n$. Next, let us fix $p=\max\{m_1,\ldots,m_n\}$. Then, by Lemma 1.1, $S^{(i)}\otimes I_p\succeq 0$ in $\mathbb{R}^{np\times np}$, $i=1,\ldots,r$. Observe that for each $i$, $S^{(i)}\ast Q\in\mathbb{R}^{m\times m}$ is a principal submatrix of $S^{(i)}\otimes I_p$, where $Q$ is the matrix defined as in (1.3). Therefore, we conclude that $S^{(i)}\ast Q\succeq 0$, with $(S^{(i)}\ast Q)_{jj}=I_{m_j}$, $i=1,\ldots,r$, $j=1,\ldots,n$. By (v), $\{A^{(i)}\ast(S^{(i)}\ast Q)\}_{i=1}^{r}\in P_m$. So, we obtain from Lemma 2.2 that $\{T(A^{(i)}\ast(S^{(i)}\ast Q))\}_{i=1}^{r}\in P_n$. Now, let $u^{(i)}\in\mathbb{R}^{n}$, $i=1,\ldots,r$, be such that $u_k^{(i)}=d_{kk}^{(i)}$, $k=1,\ldots,n$. It is clear that not all $u^{(i)}$ are zero vectors. Thus, from the definition of P-sets, we must have

    $$\sum_{i=1}^{r}u_q^{(i)}\left[\left(T(A^{(i)}\ast(S^{(i)}\ast Q))\right)u^{(i)}\right]_q>0$$

    for some $q\in\{1,\ldots,n\}$. Hence, it follows that

    $$\begin{aligned}
    \sum_{i=1}^{r}u_q^{(i)}\left[\left(T(A^{(i)}\ast(S^{(i)}\ast Q))\right)u^{(i)}\right]_q
    &=\sum_{i=1}^{r}d_{qq}^{(i)}\sum_{k=1}^{n}\left(T(A^{(i)}\ast(S^{(i)}\ast Q))\right)_{qk}d_{kk}^{(i)}\\
    &=\sum_{i=1}^{r}d_{qq}^{(i)}\sum_{k=1}^{n}\left(T((A^{(i)}\circ S^{(i)})\ast Q)\right)_{qk}d_{kk}^{(i)}\\
    &=\sum_{i=1}^{r}d_{qq}^{(i)}\sum_{k=1}^{n}\left(a_{qk}^{(i)}s_{qk}^{(i)}Q_{qk}\right)_{11}d_{kk}^{(i)}\\
    &=\sum_{i=1}^{r}d_{qq}^{(i)}\sum_{k=1}^{n}a_{qk}^{(i)}s_{qk}^{(i)}d_{kk}^{(i)}
    =\sum_{i=1}^{r}\sum_{k=1}^{n}a_{qk}^{(i)}\,d_{qq}^{(i)}s_{qk}^{(i)}d_{kk}^{(i)}\\
    &=\sum_{i=1}^{r}\sum_{k=1}^{n}a_{qk}^{(i)}x_{qk}^{(i)}
    =\sum_{i=1}^{r}\sum_{k=1}^{n}a_{qk}^{(i)}x_{kq}^{(i)}
    =\left(\sum_{i=1}^{r}A^{(i)}X^{(i)}\right)_{qq}>0.
    \end{aligned}$$

    From this last inequality and by Theorem 1.4, (i) holds.

    The proof is now complete. $\square$

    To demonstrate the validity of Theorem 2.3, consider the following example.

    Example 2.1. Let $n=2$, $m=3$, $m_1=2$, and $m_2=1$. Then, consider the family $\mathcal{A}=\{A^{(i)}\}_{i=1}^{2}$, where

    $$A^{(1)}=\begin{bmatrix}2 & 1\\ 0 & 3\end{bmatrix}\quad\text{and}\quad A^{(2)}=\begin{bmatrix}1 & 1\\ 0 & 4\end{bmatrix}.$$

    According to Theorem 2.3, to show that $\mathcal{A}\in CLDS_2$ it suffices to show that $\{A^{(i)}\ast S^{(i)}\}_{i=1}^{2}\in P_3$ for any $2$ by $2$ block matrices $S^{(i)}\succeq 0$ in $\mathbb{R}^{3\times 3}$ with $S_{jj}^{(i)}=I_{m_j}$ for all $i=1,2$ and $j=1,2$. Now, consider the matrices

    $$S^{(1)}=\begin{bmatrix}1 & 0 & s_{13}^{(1)}\\ 0 & 1 & s_{23}^{(1)}\\ s_{13}^{(1)} & s_{23}^{(1)} & 1\end{bmatrix}\quad\text{and}\quad S^{(2)}=\begin{bmatrix}1 & 0 & s_{13}^{(2)}\\ 0 & 1 & s_{23}^{(2)}\\ s_{13}^{(2)} & s_{23}^{(2)} & 1\end{bmatrix}.$$

    Hence, we have

    $$A^{(1)}\ast S^{(1)}=\begin{bmatrix}2 & 0 & s_{13}^{(1)}\\ 0 & 2 & s_{23}^{(1)}\\ 0 & 0 & 3\end{bmatrix}\quad\text{and}\quad A^{(2)}\ast S^{(2)}=\begin{bmatrix}1 & 0 & s_{13}^{(2)}\\ 0 & 1 & s_{23}^{(2)}\\ 0 & 0 & 4\end{bmatrix}.$$

    Next, for $i=1,2$, let $u^{(i)}$ be any vectors in $\mathbb{R}^3$. Thus, a simple calculation shows that

    $$(A^{(1)}\ast S^{(1)})u^{(1)}=\begin{bmatrix}2u_1^{(1)}+s_{13}^{(1)}u_3^{(1)}\\ 2u_2^{(1)}+s_{23}^{(1)}u_3^{(1)}\\ 3u_3^{(1)}\end{bmatrix}\quad\text{and}\quad(A^{(2)}\ast S^{(2)})u^{(2)}=\begin{bmatrix}u_1^{(2)}+s_{13}^{(2)}u_3^{(2)}\\ u_2^{(2)}+s_{23}^{(2)}u_3^{(2)}\\ 4u_3^{(2)}\end{bmatrix}.$$

    If at least one of $u_3^{(1)}$ and $u_3^{(2)}$ is nonzero, then $\sum_{i=1}^{2}u_3^{(i)}((A^{(i)}\ast S^{(i)})u^{(i)})_3=3(u_3^{(1)})^2+4(u_3^{(2)})^2>0$. Otherwise, we must have

    $$(A^{(1)}\ast S^{(1)})u^{(1)}=\begin{bmatrix}2u_1^{(1)}\\ 2u_2^{(1)}\\ 0\end{bmatrix}\quad\text{and}\quad(A^{(2)}\ast S^{(2)})u^{(2)}=\begin{bmatrix}u_1^{(2)}\\ u_2^{(2)}\\ 0\end{bmatrix}.$$

    Since $u^{(1)}$ and $u^{(2)}$ are not both zero vectors, there must be $k\in\{1,2\}$ such that $\sum_{i=1}^{2}u_k^{(i)}((A^{(i)}\ast S^{(i)})u^{(i)})_k>0$. That means $\{A^{(i)}\ast S^{(i)}\}_{i=1}^{2}\in P_3$. Therefore, from Theorem 2.3, this implies that $\mathcal{A}\in CLDS_2$. In fact, we found that

    $$D=\begin{bmatrix}2 & 0\\ 0 & 1\end{bmatrix}$$

    is a common Lyapunov diagonal solution for $\mathcal{A}$.
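    The diagonal solution claimed in Example 2.1 can also be confirmed numerically. The following pure-Python check is our own sketch (not code from the paper); for a diagonal $D$ with entries $d$, $(A^{T}D+DA)_{ij}=d_ja_{ji}+d_ia_{ij}$, and a symmetric $2\times 2$ matrix is positive definite iff its leading principal minors are positive.

```python
# Sketch: verify that D = diag(2, 1) is a common Lyapunov diagonal
# solution for the family in Example 2.1.

def lyap_form(A, d):
    # (A^T D + D A)_{ij} = d_j * a_ji + d_i * a_ij, for diagonal D = diag(d)
    n = len(A)
    return [[d[j] * A[j][i] + d[i] * A[i][j] for j in range(n)] for i in range(n)]

def is_positive_definite_2x2(S):
    return S[0][0] > 0 and S[0][0] * S[1][1] - S[0][1] * S[1][0] > 0

A1 = [[2.0, 1.0], [0.0, 3.0]]
A2 = [[1.0, 1.0], [0.0, 4.0]]
d = [2.0, 1.0]
print(all(is_positive_definite_2x2(lyap_form(A, d)) for A in (A1, A2)))  # True
```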

    We emphasize here that Theorem 2.3 is equivalent to Theorem 1.5 when $m=n$. Before we proceed with the presentation of further results, we cite the following two lemmas from [28].

    Lemma 2.4. ([28, Lemma 3.2]) Let $\mathcal{A}=\{A^{(i)}\}_{i=1}^{r}$ be a family of real $n\times n$ matrices and $\alpha$ be any partition of $\{1,\ldots,n\}$. If $\mathcal{A}\in CLDS_n^{\alpha}$, then $\mathcal{A}\in P_n^{\alpha}$.

    Lemma 2.5. ([28, Proposition 4.1]) Let $\mathcal{A}=\{A^{(i)}\}_{i=1}^{r}$ be a family of real $n\times n$ matrices and $\alpha$ be any partition of $\{1,\ldots,n\}$. If $\mathcal{A}\in P_n^{\alpha}$, then $\mathcal{A}\in P_n$.

    For the remainder of this paper, let $\alpha=\{\alpha_1,\ldots,\alpha_n\}$ be the partition of $\{1,\ldots,m\}$ such that $\alpha_1=\{1,\ldots,m_1\}$, $\alpha_2=\{m_1+1,\ldots,m_1+m_2\},\ldots,\alpha_n=\{m-m_n+1,\ldots,m\}$. With this notation established, we now provide another characterization of common Lyapunov diagonal stability.

    Theorem 2.6. Let $\mathcal{A}=\{A^{(i)}\}_{i=1}^{r}$ be a family of matrices such that $A^{(i)}\in\mathbb{R}^{n\times n}$ for $i=1,\ldots,r$. Then, the following are equivalent:

    (i) $\mathcal{A}\in CLDS_n$.

    (ii) $\{A^{(i)}\ast S^{(i)}\}_{i=1}^{r}\in CLDS_m^{\alpha}$ for all $n$ by $n$ block matrices $S^{(i)}\succeq 0$ in $\mathbb{R}^{m\times m}$ with $S_{jj}^{(i)}\succ 0$ for all $i=1,\ldots,r$ and $j=1,\ldots,n$.

    (iii) $\{A^{(i)}\ast S^{(i)}\}_{i=1}^{r}\in CLDS_m^{\alpha}$ for all $n$ by $n$ block matrices $S^{(i)}\succeq 0$ in $\mathbb{R}^{m\times m}$ with $S_{jj}^{(i)}=I_{m_j}$ for all $i=1,\ldots,r$ and $j=1,\ldots,n$.

    (iv) $\{A^{(i)}\ast S^{(i)}\}_{i=1}^{r}\in P_m^{\alpha}$ for all $n$ by $n$ block matrices $S^{(i)}\succeq 0$ in $\mathbb{R}^{m\times m}$ with $S_{jj}^{(i)}\succ 0$ for all $i=1,\ldots,r$ and $j=1,\ldots,n$.

    (v) $\{A^{(i)}\ast S^{(i)}\}_{i=1}^{r}\in P_m^{\alpha}$ for all $n$ by $n$ block matrices $S^{(i)}\succeq 0$ in $\mathbb{R}^{m\times m}$ with $S_{jj}^{(i)}=I_{m_j}$ for all $i=1,\ldots,r$ and $j=1,\ldots,n$.

    Proof. It is clear that (ii)$\Rightarrow$(iii) and (iv)$\Rightarrow$(v). In addition, according to Lemma 2.4, (ii)$\Rightarrow$(iv) and (iii)$\Rightarrow$(v).

    (i)$\Rightarrow$(ii): Let $\{S^{(i)}\}_{i=1}^{r}$ be a family of positive semidefinite matrices given as in (ii) and $D$ be a common Lyapunov diagonal solution for $\mathcal{A}$. Then, $D\ast I_m$ is a positive $\alpha$-scalar matrix, where $I_m\in\mathbb{R}^{m\times m}$ is the identity matrix partitioned into $n$ by $n$ blocks. Thus, as we have seen in the proof of Theorem 2.3, we have

    $$(A^{(i)}\ast S^{(i)})^{T}(D\ast I_m)+(D\ast I_m)(A^{(i)}\ast S^{(i)})\succ 0,$$

    for $i=1,\ldots,r$.

    (v)$\Rightarrow$(i): Using Lemma 2.5, we can see that (v) here implies (v) in Theorem 2.3. Therefore, (i) holds. $\square$

    Now, we extend Theorem 2.6 by considering different partitions of $\{1,\ldots,m\}$. Before presenting our next result, let us set the stage first.

    Let us define a bijective function $\tau$ that maps each element $j\in\alpha_i$ to some element of $\beta_i$ for $i\in\{1,\ldots,n\}$. Hence, $\tau$ is a permutation of $\{1,\ldots,m\}$, and $\beta=\{\beta_1,\ldots,\beta_n\}$ is a partition of $\{1,\ldots,m\}$. Clearly, for every $i$, the cardinality of $\alpha_i$ is the same as the cardinality of $\beta_i$. For the remainder of this section, $\beta$ denotes such a partition. In addition, construct the permutation matrix $P$ such that $P_{j\tau(j)}=1$ for all $j=1,\ldots,m$, and zero everywhere else. For any permutation matrix $P$, we write $C_P=PCP^{T}$, where $C\in\mathbb{R}^{m\times m}$. Then, the following observation can be easily verified.

    Observation 2.1. Let $P$ be a permutation matrix associated with some partition $\beta$. Then, we have

    (1) $S\succ 0$ ($S\succeq 0$, resp.) if and only if $S_P\succ 0$ ($S_P\succeq 0$, resp.), $S\in\mathbb{R}^{m\times m}$.

    (2) $D$ is a $\beta$-scalar matrix if and only if $D_P$ is an $\alpha$-scalar matrix, $D\in\mathbb{R}^{m\times m}$.

    (3) $\{A^{(i)}\}_{i=1}^{r}\in P_m^{\beta}$ if and only if $\{A_P^{(i)}\}_{i=1}^{r}\in P_m^{\alpha}$, where $A^{(i)}\in\mathbb{R}^{m\times m}$, $i=1,\ldots,r$.

    Lemma 2.7. Let $\mathcal{A}=\{A^{(i)}\}_{i=1}^{r}$ be a family of matrices such that $A^{(i)}\in\mathbb{R}^{m\times m}$ for $i=1,\ldots,r$, and $P$ be a permutation matrix associated with some partition $\beta$. Then, $\{A^{(i)}\}_{i=1}^{r}\in CLDS_m^{\beta}$ if and only if $\{A_P^{(i)}\}_{i=1}^{r}\in CLDS_m^{\alpha}$.

    Proof. The conclusion follows directly from Observation 2.1 and noting that

    $$\left((A^{(i)})^{T}D+DA^{(i)}\right)_P=(A_P^{(i)})^{T}D_P+D_PA_P^{(i)},$$

    for all $i$. $\square$

    Theorem 2.8. Let $\mathcal{A}=\{A^{(i)}\}_{i=1}^{r}$ be a family of matrices such that $A^{(i)}\in\mathbb{R}^{n\times n}$ for $i=1,\ldots,r$, and $P$ be a permutation matrix associated with some partition $\beta$. Then, the following are equivalent:

    (i) $\mathcal{A}\in CLDS_n$.

    (ii) $\{(P^{T}(A^{(i)}\ast J_m)P)\circ S^{(i)}\}_{i=1}^{r}\in CLDS_m^{\beta}$ for all matrices $S^{(i)}\succeq 0$ in $\mathbb{R}^{m\times m}$ with $S^{(i)}[\beta_j]\succ 0$ for all $i=1,\ldots,r$ and $j=1,\ldots,n$.

    (iii) $\{(P^{T}(A^{(i)}\ast J_m)P)\circ S^{(i)}\}_{i=1}^{r}\in CLDS_m^{\beta}$ for all matrices $S^{(i)}\succeq 0$ in $\mathbb{R}^{m\times m}$ with $S^{(i)}[\beta_j]=I_{m_j}$ for all $i=1,\ldots,r$ and $j=1,\ldots,n$.

    (iv) $\{(P^{T}(A^{(i)}\ast J_m)P)\circ S^{(i)}\}_{i=1}^{r}\in P_m^{\beta}$ for all matrices $S^{(i)}\succeq 0$ in $\mathbb{R}^{m\times m}$ with $S^{(i)}[\beta_j]\succ 0$ for all $i=1,\ldots,r$ and $j=1,\ldots,n$.

    (v) $\{(P^{T}(A^{(i)}\ast J_m)P)\circ S^{(i)}\}_{i=1}^{r}\in P_m^{\beta}$ for all matrices $S^{(i)}\succeq 0$ in $\mathbb{R}^{m\times m}$ with $S^{(i)}[\beta_j]=I_{m_j}$ for all $i=1,\ldots,r$ and $j=1,\ldots,n$.

    Proof. Clearly, condition (ii) gives (iii), and (iv) gives (v). Moreover, (ii) leads to (iv) and (iii) to (v) by Lemma 2.4.

    (i)$\Rightarrow$(ii): For $i=1,\ldots,r$, let $S^{(i)}\succeq 0$ in $\mathbb{R}^{m\times m}$ be given as in (ii). Then, for each $i$, $S_P^{(i)}\succeq 0$ is an $n$ by $n$ block matrix with $(S_P^{(i)})_{jj}\succ 0$, $j=1,\ldots,n$. Hence, it follows from Theorem 2.6 that $\{A^{(i)}\ast S_P^{(i)}\}_{i=1}^{r}\in CLDS_m^{\alpha}$. Now, by observing that

    $$A^{(i)}\ast S_P^{(i)}=(A^{(i)}\ast J_m)\circ S_P^{(i)}=\left((P^{T}(A^{(i)}\ast J_m)P)\circ S^{(i)}\right)_P \qquad (2.3)$$

    for each $i$, we conclude that $\{((P^{T}(A^{(i)}\ast J_m)P)\circ S^{(i)})_P\}_{i=1}^{r}\in CLDS_m^{\alpha}$. Hence, by Lemma 2.7, (ii) follows.

    (v)$\Rightarrow$(i): From Observation 2.1, $\{((P^{T}(A^{(i)}\ast J_m)P)\circ S^{(i)})_P\}_{i=1}^{r}\in P_m^{\alpha}$. This, by (2.3), means $\{A^{(i)}\ast S_P^{(i)}\}_{i=1}^{r}\in P_m^{\alpha}$. Finally, using Theorem 2.6, we obtain (i). This completes the proof. $\square$

    A final remark before moving to the next section is that Theorems 2.3, 2.6, and 2.8 are equivalent to Theorems 4, 9, and 10 in [29], respectively, when $\mathcal{A}$ is a singleton.

    In this section, we generalize the main results of Section 2 to provide more characterizations of common Lyapunov $\alpha$-scalar stability. Throughout this section, let $\gamma=\{\gamma_1,\ldots,\gamma_s\}$ be any partition of $\{1,\ldots,n\}$. Then, $\delta=\{\delta_1,\ldots,\delta_s\}$, where

    $$\delta_1=\bigcup_{i=1}^{|\gamma_1|}\alpha_i,\quad \delta_2=\bigcup_{i=|\gamma_1|+1}^{|\gamma_1|+|\gamma_2|}\alpha_i,\quad \ldots,\quad \delta_s=\bigcup_{i=n-|\gamma_s|+1}^{n}\alpha_i,$$

    is a partition of $\{1,\ldots,m\}$, where $\alpha=\{\alpha_1,\ldots,\alpha_n\}$ is the partition of $\{1,\ldots,m\}$ defined in Section 2.

    Lemma 3.1. Let $\mathcal{A}=\{A^{(i)}\}_{i=1}^{r}$ be a family of matrices such that $A^{(i)}\in\mathbb{R}^{n\times n}$ for $i=1,\ldots,r$, and $\gamma=\{\gamma_1,\ldots,\gamma_s\}$ be any partition of $\{1,\ldots,n\}$. If $\{A^{(i)}\ast Q\}_{i=1}^{r}\in P_m^{\delta}$, then $\{T(A^{(i)}\ast Q)\}_{i=1}^{r}\in P_n^{\gamma}$.

    Proof. Let $u^{(i)}\in\mathbb{R}^n$, $i=1,\ldots,r$, not all being zero. Then, let $v=[v_{m_j}]\in\mathbb{R}^m$ be the nonzero $n$-block vector defined as follows:

    $$(v_{m_j})_k=\begin{cases}1 & \text{if } k=1,\\ 0 & \text{if } k\neq 1.\end{cases}$$

    Then, for each $i$, choose $z^{(i)}=u^{(i)}\ast v$. Clearly, $z^{(i)}\in\mathbb{R}^m$, $i=1,\ldots,r$, and not all are zero vectors since not all $u^{(i)}$ are zero. Furthermore, we have $T(z^{(i)})=u^{(i)}$ and $T(z^{(i)}[\delta_l])=u^{(i)}[\gamma_l]$ for $l\in\{1,\ldots,s\}$, where, for an $n$-block vector $z$, $T(z)\in\mathbb{R}^n$ denotes the vector with $(T(z))_j=(z_{m_j})_1$. Now, because $\{A^{(i)}\ast Q\}_{i=1}^{r}\in P_m^{\delta}$, there is some $l\in\{1,\ldots,s\}$ such that

    $$\sum_{i=1}^{r}z^{(i)}[\delta_l]^{T}\left((A^{(i)}\ast Q)z^{(i)}\right)[\delta_l]>0. \qquad (3.1)$$

    Now, observe that

    $$\sum_{i=1}^{r}z^{(i)}[\delta_l]^{T}\left((A^{(i)}\ast Q)z^{(i)}\right)[\delta_l]=\sum_{i=1}^{r}T\left(z^{(i)}[\delta_l]\right)^{T}T\left(\left((A^{(i)}\ast Q)z^{(i)}\right)[\delta_l]\right).$$

    Consequently, it follows from (3.1) that

    $$0<\sum_{i=1}^{r}z^{(i)}[\delta_l]^{T}\left((A^{(i)}\ast Q)z^{(i)}\right)[\delta_l]=\sum_{i=1}^{r}T\left(z^{(i)}[\delta_l]\right)^{T}T\left(\left((A^{(i)}\ast Q)z^{(i)}\right)[\delta_l]\right)=\sum_{i=1}^{r}u^{(i)}[\gamma_l]^{T}\left(T(A^{(i)}\ast Q)\,T(z^{(i)})\right)[\gamma_l]=\sum_{i=1}^{r}u^{(i)}[\gamma_l]^{T}\left(T(A^{(i)}\ast Q)\,u^{(i)}\right)[\gamma_l].$$

    Therefore, by Definition 1.3, the result follows. $\square$

    Lemma 3.2. ([28, Corollary 2.1]) Let $\mathcal{A}=\{A^{(i)}\}_{i=1}^{r}$ be a family of matrices such that $A^{(i)}\in\mathbb{R}^{n\times n}$ for $i=1,\ldots,r$, and $\gamma=\{\gamma_1,\ldots,\gamma_s\}$ be a partition of $\{1,\ldots,n\}$. Then, $\mathcal{A}\in CLDS_n^{\gamma}$ if and only if there is $l\in\{1,\ldots,s\}$ such that

    $$\mathrm{tr}\sum_{i=1}^{r}\left(A^{(i)}X^{(i)}\right)[\gamma_l]>0$$

    for any $X^{(i)}\succeq 0$, $X^{(i)}\in\mathbb{R}^{n\times n}$, $i=1,\ldots,r$, not all being zero matrices.

    Theorem 3.3. Let $\mathcal{A}=\{A^{(i)}\}_{i=1}^{r}$ be a family of matrices such that $A^{(i)}\in\mathbb{R}^{n\times n}$ for $i=1,\ldots,r$, and $\gamma=\{\gamma_1,\ldots,\gamma_s\}$ be any partition of $\{1,\ldots,n\}$. Then, the following are equivalent:

    (i) $\mathcal{A}\in CLDS_n^{\gamma}$.

    (ii) $\{A^{(i)}\ast S^{(i)}\}_{i=1}^{r}\in CLDS_m^{\delta}$ for all $n$ by $n$ block matrices $S^{(i)}\succeq 0$ in $\mathbb{R}^{m\times m}$ with $S_{jj}^{(i)}\succ 0$ for all $i=1,\ldots,r$ and $j=1,\ldots,n$.

    (iii) $\{A^{(i)}\ast S^{(i)}\}_{i=1}^{r}\in CLDS_m^{\delta}$ for all $n$ by $n$ block matrices $S^{(i)}\succeq 0$ in $\mathbb{R}^{m\times m}$ with $S_{jj}^{(i)}=I_{m_j}$ for all $i=1,\ldots,r$ and $j=1,\ldots,n$.

    (iv) $\{A^{(i)}\ast S^{(i)}\}_{i=1}^{r}\in P_m^{\delta}$ for all $n$ by $n$ block matrices $S^{(i)}\succeq 0$ in $\mathbb{R}^{m\times m}$ with $S_{jj}^{(i)}\succ 0$ for all $i=1,\ldots,r$ and $j=1,\ldots,n$.

    (v) $\{A^{(i)}\ast S^{(i)}\}_{i=1}^{r}\in P_m^{\delta}$ for all $n$ by $n$ block matrices $S^{(i)}\succeq 0$ in $\mathbb{R}^{m\times m}$ with $S_{jj}^{(i)}=I_{m_j}$ for all $i=1,\ldots,r$ and $j=1,\ldots,n$.

    Proof. It is clear that (ii) implies (iii) and (iv) implies (v). Besides, from Lemma 2.4, it is clear that (ii) implies (iv) and (iii) implies (v).

    (i)$\Rightarrow$(ii): Clearly, if $D$ is a positive $\gamma$-scalar matrix, then $D\ast I_m$ is a positive $\delta$-scalar matrix. Moreover, if $D$ is a common Lyapunov $\gamma$-scalar solution for $\mathcal{A}$, then, by Lemma 1.2, for any $S^{(i)}$'s given as in (ii), we have

    $$\left((A^{(i)})^{T}D+DA^{(i)}\right)\ast S^{(i)}=(A^{(i)}\ast S^{(i)})^{T}(D\ast I_m)+(D\ast I_m)(A^{(i)}\ast S^{(i)})\succ 0,$$

    for $i=1,\ldots,r$. This last inequality means that $\{A^{(i)}\ast S^{(i)}\}_{i=1}^{r}$ has $D\ast I_m$ as a common Lyapunov $\delta$-scalar solution.

    (v)$\Rightarrow$(i): Let $X^{(i)}=[x_{kl}^{(i)}]\succeq 0$ in $\mathbb{R}^{n\times n}$, $i=1,\ldots,r$, not all being zero. Next, let $D^{(i)}\in\mathbb{R}^{n\times n}$, $i=1,\ldots,r$, be the diagonal matrix with $d_{kk}^{(i)}=\sqrt{x_{kk}^{(i)}}$ for $k=1,\ldots,n$. Thus, for each $i$, there is $S^{(i)}=[s_{kl}^{(i)}]\succeq 0$ in $\mathbb{R}^{n\times n}$ with $s_{kk}^{(i)}=1$, $k=1,\ldots,n$, such that $X^{(i)}=D^{(i)}S^{(i)}D^{(i)}$. By setting $p=\max\{m_1,\ldots,m_n\}$, we conclude, owing to Lemma 1.1, that $S^{(i)}\otimes I_p\succeq 0$ in $\mathbb{R}^{np\times np}$, $i=1,\ldots,r$. Since, for each $i$, $S^{(i)}\ast Q$ is a principal submatrix of $S^{(i)}\otimes I_p$, we have $S^{(i)}\ast Q\succeq 0$, where $Q$ is the matrix in (1.3). Furthermore, each diagonal block $(S^{(i)}\ast Q)_{jj}=I_{m_j}$, $i=1,\ldots,r$. So, $\{A^{(i)}\ast(S^{(i)}\ast Q)\}_{i=1}^{r}=\{(A^{(i)}\circ S^{(i)})\ast Q\}_{i=1}^{r}\in P_m^{\delta}$, by (v). Consequently, according to Lemma 3.1, $\{T((A^{(i)}\circ S^{(i)})\ast Q)\}_{i=1}^{r}\in P_n^{\gamma}$. Set $u^{(i)}=D^{(i)}e$, $i=1,\ldots,r$, where $e$ is the vector of all ones in $\mathbb{R}^n$. By the construction of these $u^{(i)}$'s, it is easy to see that not all of them are zero vectors. Therefore, there is some index $q\in\{1,\ldots,s\}$ such that

    $$\begin{aligned}
    0&<\sum_{i=1}^{r}u^{(i)}[\gamma_q]^{T}\left(\left(T((A^{(i)}\circ S^{(i)})\ast Q)\right)u^{(i)}\right)[\gamma_q]\\
    &=\sum_{i=1}^{r}(D^{(i)}e)[\gamma_q]^{T}\left(\left(T((A^{(i)}\circ S^{(i)})\ast Q)\right)(D^{(i)}e)\right)[\gamma_q]\\
    &=\sum_{i=1}^{r}e[\gamma_q]^{T}D^{(i)}[\gamma_q]\left(\left(T((A^{(i)}\circ(S^{(i)}D^{(i)}))\ast Q)\right)e\right)[\gamma_q]\\
    &=\sum_{i=1}^{r}e[\gamma_q]^{T}\left(\left(T((A^{(i)}\circ(D^{(i)}S^{(i)}D^{(i)}))\ast Q)\right)e\right)[\gamma_q]\\
    &=\sum_{i=1}^{r}e[\gamma_q]^{T}\left(\left(T((A^{(i)}\circ X^{(i)})\ast Q)\right)e\right)[\gamma_q]\\
    &=\sum_{i=1}^{r}e[\gamma_q]^{T}\left((A^{(i)}\circ X^{(i)})e\right)[\gamma_q]
    =\mathrm{tr}\sum_{i=1}^{r}\left(A^{(i)}X^{(i)}\right)[\gamma_q].
    \end{aligned}$$

    Therefore, (i) follows by Lemma 3.2. This finishes the proof. $\square$

    This last theorem can be generalized to provide more characterizations of common Lyapunov $\alpha$-scalar stability. Recall that in Section 2, we defined $\beta$ to be a partition of $\{1,\ldots,m\}$ obtained from $\alpha$ through a permutation function $\tau$. Using this notation and the definition of $\delta$ above, for any partition $\gamma=\{\gamma_1,\ldots,\gamma_s\}$ of $\{1,\ldots,n\}$, we define another partition of $\{1,\ldots,m\}$, namely $\epsilon=\{\epsilon_1,\ldots,\epsilon_s\}$, where

    $$\epsilon_1=\bigcup_{i=1}^{|\gamma_1|}\beta_i,\quad \epsilon_2=\bigcup_{i=|\gamma_1|+1}^{|\gamma_1|+|\gamma_2|}\beta_i,\quad \ldots,\quad \epsilon_s=\bigcup_{i=n-|\gamma_s|+1}^{n}\beta_i.$$

    Clearly, if we replace $\alpha$ with $\delta$ and $\beta$ with $\epsilon$, Observation 2.1 will hold true for a permutation matrix $P$ associated with $\beta$. Now, we have the following theorem, whose proof follows the lines of the proof of Theorem 2.8 and is therefore omitted.

    Theorem 3.4. Let $\mathcal{A}=\{A^{(i)}\}_{i=1}^{r}$ be a family of matrices such that $A^{(i)}\in\mathbb{R}^{n\times n}$ for $i=1,\ldots,r$, and $\gamma=\{\gamma_1,\ldots,\gamma_s\}$ be any partition of $\{1,\ldots,n\}$. In addition, let $P$ be a permutation matrix associated with some partition $\beta$. Then, the following are equivalent:

    (i) $\mathcal{A}\in CLDS_n^{\gamma}$.

    (ii) $\{(P^{T}(A^{(i)}\ast J_m)P)\circ S^{(i)}\}_{i=1}^{r}\in CLDS_m^{\epsilon}$ for all matrices $S^{(i)}\succeq 0$ in $\mathbb{R}^{m\times m}$ with $S^{(i)}[\beta_j]\succ 0$ for all $i=1,\ldots,r$ and $j=1,\ldots,n$.

    (iii) $\{(P^{T}(A^{(i)}\ast J_m)P)\circ S^{(i)}\}_{i=1}^{r}\in CLDS_m^{\epsilon}$ for all matrices $S^{(i)}\succeq 0$ in $\mathbb{R}^{m\times m}$ with $S^{(i)}[\beta_j]=I_{m_j}$ for all $i=1,\ldots,r$ and $j=1,\ldots,n$.

    (iv) $\{(P^{T}(A^{(i)}\ast J_m)P)\circ S^{(i)}\}_{i=1}^{r}\in P_m^{\epsilon}$ for all matrices $S^{(i)}\succeq 0$ in $\mathbb{R}^{m\times m}$ with $S^{(i)}[\beta_j]\succ 0$ for all $i=1,\ldots,r$ and $j=1,\ldots,n$.

    (v) $\{(P^{T}(A^{(i)}\ast J_m)P)\circ S^{(i)}\}_{i=1}^{r}\in P_m^{\epsilon}$ for all matrices $S^{(i)}\succeq 0$ in $\mathbb{R}^{m\times m}$ with $S^{(i)}[\beta_j]=I_{m_j}$ for all $i=1,\ldots,r$ and $j=1,\ldots,n$.

    We remark here that Theorems 3.3 and 3.4 are the same as Theorems 2.6 and 2.8, respectively, when $\gamma=\{\{1\},\{2\},\ldots,\{n\}\}$. Additionally, when $r=1$, i.e., $\mathcal{A}$ is a singleton, these last two theorems reduce to the following corollaries, whose proofs shall be omitted.

    Corollary 3.5. Let $A\in\mathbb{R}^{n\times n}$ and let $\gamma=\{\gamma_1,\dots,\gamma_s\}$ be any partition of $\{1,\dots,n\}$. Then, the following are equivalent:

    (i) $A\in \mathrm{LDS}_n^{\gamma}$.

    (ii) $A\circ S\in \mathrm{LDS}_m^{\delta}$ for all $n\times n$ block matrices $S\succeq 0$ in $\mathbb{R}^{m\times m}$ with $S_{jj}\succ 0$, $j=1,\dots,n$.

    (iii) $A\circ S\in \mathrm{LDS}_m^{\delta}$ for all $n\times n$ block matrices $S\succeq 0$ in $\mathbb{R}^{m\times m}$ with $S_{jj}=I_{m_j}$, $j=1,\dots,n$.

    (iv) $A\circ S\in \mathcal{P}_m^{\delta}$ for all $n\times n$ block matrices $S\succeq 0$ in $\mathbb{R}^{m\times m}$ with $S_{jj}\succ 0$, $j=1,\dots,n$.

    (v) $A\circ S\in \mathcal{P}_m^{\delta}$ for all $n\times n$ block matrices $S\succeq 0$ in $\mathbb{R}^{m\times m}$ with $S_{jj}=I_{m_j}$, $j=1,\dots,n$.
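As a small numerical sanity check on the block-partitioned Khatri-Rao product underlying these statements, the sketch below (illustrative only; it assumes the convention that block $(i,j)$ of $A\circ S$ is $a_{ij}S_{ij}$, so that $A\circ S$ is a principal submatrix of the Kronecker product $A\otimes S$ and is therefore positive definite whenever $A$ and $S$ are) verifies positive definiteness on a random instance; $D=I$ then serves as a diagonal Lyapunov certificate for $A\circ S$:

```python
import numpy as np

def khatri_rao_block(A, S, sizes):
    """Block-partitioned Khatri-Rao product: block (i, j) of the result is
    A[i, j] * S_ij, where S is partitioned into square blocks of the given
    sizes. (Illustrative convention, not taken from the paper's Section 2.)"""
    n = A.shape[0]
    offs = np.concatenate(([0], np.cumsum(sizes)))  # block boundaries in S
    rows = []
    for i in range(n):
        row = [A[i, j] * S[offs[i]:offs[i + 1], offs[j]:offs[j + 1]]
               for j in range(n)]
        rows.append(np.hstack(row))
    return np.vstack(rows)

rng = np.random.default_rng(0)
A = np.array([[2.0, 1.0], [1.0, 2.0]])   # symmetric positive definite, n = 2
M = rng.standard_normal((4, 4))
S = M @ M.T + 4.0 * np.eye(4)            # symmetric positive definite, m = 4
C = khatri_rao_block(A, S, sizes=[2, 2])

# C is positive definite, so with D = I we get C^T D + D C = 2C > 0,
# i.e., the identity is a diagonal Lyapunov certificate for C.
print(np.linalg.eigvalsh(C).min() > 0)   # True
```

The positivity here does not depend on the random seed: any symmetric positive definite choices of $A$ and $S$ yield a positive definite $C$ by the principal-submatrix argument in the lead-in.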

    Corollary 3.6. Let $A\in\mathbb{R}^{n\times n}$ and let $\gamma=\{\gamma_1,\dots,\gamma_s\}$ be any partition of $\{1,\dots,n\}$. In addition, let $P$ be a permutation matrix associated with some partition $\beta$. Then, the following are equivalent:

    (i) $A\in \mathrm{LDS}_n^{\gamma}$.

    (ii) $(P^{T}(A\otimes J_m)P)\circ S\in \mathrm{LDS}_m^{\epsilon}$ for all matrices $S\succeq 0$ in $\mathbb{R}^{m\times m}$ with $S[\epsilon_j]\succ 0$, $j=1,\dots,n$.

    (iii) $(P^{T}(A\otimes J_m)P)\circ S\in \mathrm{LDS}_m^{\epsilon}$ for all matrices $S\succeq 0$ in $\mathbb{R}^{m\times m}$ with $S[\epsilon_j]=I_{m_j}$, $j=1,\dots,n$.

    (iv) $(P^{T}(A\otimes J_m)P)\circ S\in \mathcal{P}_m^{\epsilon}$ for all matrices $S\succeq 0$ in $\mathbb{R}^{m\times m}$ with $S[\epsilon_j]\succ 0$, $j=1,\dots,n$.

    (v) $(P^{T}(A\otimes J_m)P)\circ S\in \mathcal{P}_m^{\epsilon}$ for all matrices $S\succeq 0$ in $\mathbb{R}^{m\times m}$ with $S[\epsilon_j]=I_{m_j}$, $j=1,\dots,n$.

    Motivated by the work in [29], we have presented new characterizations of common Lyapunov diagonal stability using the Khatri-Rao product. The notions of $P$-sets and $\mathcal{P}^{\alpha}$-sets were used to formulate these results. Moreover, these characterizations have been extended to the notion of common Lyapunov $\alpha$-scalar stability. Our work extends and broadens the scope of the results in [17,28].

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    The researchers would like to thank the Deanship of Graduate Studies and Scientific Research at Qassim University for financial support (QU-APC-2024-9/1). We would also like to extend our sincere gratitude to the anonymous reviewers for their valuable comments and suggestions, which have significantly contributed to improving the quality of this paper.

    The authors declare no conflict of interest.



    [1] P. Branco, L. Torgo, R. P. Ribeiro, A survey of predictive modeling on imbalanced domains, ACM Comput. Surv. (CSUR), 49 (2016), 1–50. https://doi.org/10.1145/2907070 doi: 10.1145/2907070
    [2] K. Oksuz, B. C. Cam, S. Kalkan, E. Akbas, Imbalance problems in object detection: A review, IEEE T. Pattern Anal., 43 (2021), 3388–3415. https://doi.org/10.1109/TPAMI.2020.2981890 doi: 10.1109/TPAMI.2020.2981890
    [3] M. Ghorbani, A. Kazi, M. S. Baghshah, H. R. Rabiee, N. Navab, RA-GCN: Graph convolutional network for disease prediction problems with imbalanced data, Med. Image Anal., 75 (2022), 102272. https://doi.org/10.1016/j.media.2021.102272 doi: 10.1016/j.media.2021.102272
    [4] Y. C. Wang, C. H. Cheng, A multiple combined method for rebalancing medical data with class imbalances, Comput. Biol. Med., 134 (2021), 104527. https://doi.org/10.1016/j.compbiomed.2021.104527 doi: 10.1016/j.compbiomed.2021.104527
    [5] A. Abdelkhalek, M. Mashaly, Addressing the class imbalance problem in network intrusion detection systems using data resampling and deep learning, J. Supercomput., 79 (2023), 10611–10644. https://doi.org/10.1007/s11227-023-05073-x doi: 10.1007/s11227-023-05073-x
    [6] Z. Li, K. Kamnitsas, B. Glocker, Analyzing overfitting under class imbalance in neural networks for image segmentation, IEEE T. Med. Imaging, 40 (2021), 1065–1077. https://doi.org/10.1109/TMI.2020.3046692 doi: 10.1109/TMI.2020.3046692
    [7] V. Rupapara, F. Rustam, H. F. Shahzad, A. Mehmood, I. Ashraf, G. S. Choi, Impact of SMOTE on imbalanced text features for toxic comments classification using RVVC model, IEEE Access, 9 (2021), 78621–78634. https://doi.org/10.1109/ACCESS.2021.3083638 doi: 10.1109/ACCESS.2021.3083638
    [8] W. Zheng, Y. Xun, X. Wu, Z. Deng, X. Chen, Y. Sui, A comparative study of class rebalancing methods for security bug report classification, IEEE T. Reliab., 70 (2021), 1658–1670. https://doi.org/10.1109/TR.2021.3118026 doi: 10.1109/TR.2021.3118026
    [9] J. Kuang, G. Xu, T. Tao, Q. Wu, Class-imbalance adversarial transfer learning network for cross-domain fault diagnosis with imbalanced data, IEEE T. Instrum. Meas., 71 (2021), 1–11. https://doi.org/10.1109/TIM.2021.3136175 doi: 10.1109/TIM.2021.3136175
    [10] M. Qian, Y. F. Li, A weakly supervised learning-based oversampling framework for class-imbalanced fault diagnosis, IEEE T. Reliab., 71 (2022), 429–442. https://doi.org/10.1109/TR.2021.3138448 doi: 10.1109/TR.2021.3138448
    [11] Y. Aydın, Ü. Işıkdağ, G. Bekdaş, S. M. Nigdeli, Z. W. Geem, Use of machine learning techniques in soil classification, Sustainability, 15 (2023), 2374. https://doi.org/10.3390/su15032374 doi: 10.3390/su15032374
    [12] M. Asgari, W. Yang, M. Farnaghi, Spatiotemporal data partitioning for distributed random forest algorithm: Air quality prediction using imbalanced big spatiotemporal data on spark distributed framework, Environ. Technol. Inno., 27 (2022), 102776. https://doi.org/10.1016/j.eti.2022.102776 doi: 10.1016/j.eti.2022.102776
    [13] L. Dou, F. Yang, L. Xu, Q. Zou, A comprehensive review of the imbalance classification of protein post-translational modifications, Brief. Bioinform., 22 (2021), bbab089. https://doi.org/10.1093/bib/bbab089 doi: 10.1093/bib/bbab089
    [14] S. Y. Bae, J. Lee, J. Jeong, C. Lim, J. Choi, Effective data-balancing methods for class-imbalanced genotoxicity datasets using machine learning algorithms and molecular fingerprints, Comput. Toxicol., 20 (2021), 100178. https://doi.org/10.1016/j.comtox.2021.100178 doi: 10.1016/j.comtox.2021.100178
    [15] G. H. Fu, Y. J. Wu, M. J. Zong, J. Pan, Hellinger distance-based stable sparse feature selection for high-dimensional class-imbalanced data, BMC Bioinformatics, 21 (2020), 121. https://doi.org/10.1186/s12859-020-3411-3 doi: 10.1186/s12859-020-3411-3
    [16] N. V. Chawla, K. W. Bowyer, L. O. Hall, W. P. Kegelmeyer, SMOTE: Synthetic minority over-sampling technique, J. Artif. Intell. Res., 16 (2002), 321–357. https://doi.org/10.1613/jair.953 doi: 10.1613/jair.953
    [17] G. E. A. P. A. Batista, R. C. Prati, M. C. Monard, A study of the behavior of several methods for balancing machine learning training data, ACM SIGKDD Explor. Newslett., 6 (2004), 20–29. https://doi.org/10.1145/1007730.1007735 doi: 10.1145/1007730.1007735
    [18] H. He, Y. Bai, E. A. Garcia, S. Li, ADASYN: Adaptive synthetic sampling approach for imbalanced learning, In: 2008 IEEE international joint conference on neural networks (IEEE world congress on computational intelligence), IEEE Press, 2008. https://doi.org/10.1109/IJCNN.2008.4633969
    [19] M. Kubat, S. Matwin, Addressing the curse of imbalanced training sets: one-sided selection, In: International Conference of Machine Learning, Morgan Kaufmann, 1997.
    [20] M. A. Tahir, J. Kittler, F. Yan, Inverse random under sampling for class imbalance problem and its application to multi-label classification, Pattern Recogn., 45 (2012), 3738–3750. https://doi.org/10.1016/j.patcog.2012.03.014 doi: 10.1016/j.patcog.2012.03.014
    [21] A. Zhang, H. Yu, Z. Huan, X. Yang, S. Zheng, S. Gao, SMOTE-RkNN: A hybrid re-sampling method based on SMOTE and reverse k-nearest neighbors, Inform. Sci., 595 (2022), 70–88. https://doi.org/10.1016/j.ins.2022.02.038 doi: 10.1016/j.ins.2022.02.038
    [22] R. Batuwita, V. Palade, FSVM-CIL: Fuzzy support vector machines for class imbalance learning, IEEE T. Fuzzy Syst., 18 (2010), 558–571. https://doi.org/10.1109/TFUZZ.2010.2042721 doi: 10.1109/TFUZZ.2010.2042721
    [23] H. Yu, C. Sun, X. Yang, S. Zheng, H. Zou, Fuzzy support vector machine with relative density information for classifying imbalanced data, IEEE T. Fuzzy Syst., 27 (2019), 2353–2367. https://doi.org/10.1109/TFUZZ.2019.2898371 doi: 10.1109/TFUZZ.2019.2898371
    [24] H. Yu, C. Mu, C. Sun, W. Yang, X. Yang, X. Zuo, Support vector machine-based optimized decision threshold adjustment strategy for classifying imbalanced data, Knowl.-Based Syst., 76 (2015), 67–78. https://doi.org/10.1016/j.knosys.2014.12.007 doi: 10.1016/j.knosys.2014.12.007
    [25] H. Yu, C. Sun, X. Yang, W. Yang, J. Shen, Y. Qi, ODOC-ELM: Optimal decision outputs compensation-based extreme learning machine for classifying imbalanced data, Knowl.-Based Syst., 92 (2016), 55–70. https://doi.org/10.1016/j.knosys.2015.10.012 doi: 10.1016/j.knosys.2015.10.012
    [26] J. Laurikkala, Improving identification of difficult small classes by balancing class distribution, In: Artificial Intelligence in Medicine: 8th Conference on Artificial Intelligence in Medicine in Europe, AIME 2001 Cascais, Portugal, Springer Berlin Heidelberg, 2001. https://doi.org/10.1007/3-540-48229-6_9
    [27] F. S. Hanifah, H. Wijayanto, A. Kurnia, Smotebagging algorithm for imbalanced dataset in logistic regression analysis (case: Credit of bank X), Appl. Math. Sci., 9 (2015), 6857–6865. http://dx.doi.org/10.12988/ams.2015.58562
    [28] C. Seiffert, T. M. Khoshgoftaar, J. Van Hulse, A. Napolitano, RUSBoost: Improving classification performance when training data is skewed, In: 19th International Conference on Pattern Recognition, IEEE, 2008.
    [29] Y. Zhang, G. Liu, W. Luan, C. Yan, C. Jiang, An approach to class imbalance problem based on Stacking and inverse random under sampling methods, In: 2018 IEEE 15th International Conference on Networking, Sensing and Control (ICNSC), IEEE, 2018.
    [30] Y. Pristyanto, A. F. Nugraha, I. Pratama, A. Dahlan, L. A. Wirasakti, Dual approach to handling imbalanced class in datasets using oversampling and ensemble learning techniques, In: 2021 15th International Conference on Ubiquitous Information Management and Communication (IMCOM), IEEE, 2021.
    [31] Z. Seng, S. A. Kareem, K. D. Varathan, A neighborhood undersampling stacked ensemble (NUS-SE) in imbalanced classification, Exp. Syst. Appl., 168 (2021), 114246. https://doi.org/10.1016/j.eswa.2020.114246 doi: 10.1016/j.eswa.2020.114246
    [32] D. H. Wolpert, Stacked generalization, Neural Networks, 5 (1992), 241–259. https://doi.org/10.1016/S0893-6080(05)80023-1 doi: 10.1016/S0893-6080(05)80023-1
    [33] Y. Shi, R. Eberhart, A modified particle swarm optimizer, In: Proceedings of 1998 IEEE international conference on evolutionary computation proceedings. IEEE world congress on computational intelligence (Cat. No. 98TH8360), IEEE, 1998, 69–73. https://doi.org/10.1109/icec.1998.699146
    [34] K. V. Price, Differential evolution: A fast and simple numerical optimizer, In: Proceedings of North American Fuzzy Information Processing, IEEE, 1996, 524–527. https://doi.org/10.1109/nafips.1996.534790
    [35] E. Cuevas, M. Cienfuegos, D. Zaldívar, M. Pérez-Cisneros, A swarm optimization algorithm inspired in the behavior of the social-spider, Exp. Syst. Appl., 40 (2013), 6374–6384. https://doi.org/10.1016/j.eswa.2013.05.041 doi: 10.1016/j.eswa.2013.05.041
    [36] S. Mirjalili, A. Lewis, The whale optimization algorithm, Adv. Eng. Soft., 95 (2016), 51–67. https://doi.org/10.1016/j.advengsoft.2016.01.008 doi: 10.1016/j.advengsoft.2016.01.008
    [37] E. Cuevas, A. Rodríguez, M. Perez, J. Murillo-Olmos, B. Morales-Castañeda, A. Alejo-Reyes, et al., Optimal evaluation of re-opening policies for COVID-19 through the use of metaheuristic schemes, Appl. Math. Model., 121 (2023), 506–523. https://doi.org/10.1016/j.apm.2023.05.012 doi: 10.1016/j.apm.2023.05.012
    [38] M. H. Nadimi-Shahraki, S. Taghian, S. Mirjalili, L. Abualigah, M. Abd Elaziz, D. Oliva, EWOA-OPF: Effective whale optimization algorithm to solve optimal power flow problem, Electronics, 10 (2021), 2975.
    [39] R. Kundu, S. Chattopadhyay, E. Cuevas, R. Sarkar, AltWOA: Altruistic Whale Optimization Algorithm for feature selection on microarray datasets, Comput. Biol. Med., 144 (2022), 105349. https://doi.org/10.1016/j.compbiomed.2022.105349 doi: 10.1016/j.compbiomed.2022.105349
    [40] M. S. Santos, P. H. Abreu, N. Japkowicz, A. Fernández, C. Soares, S. Wilk, et al., On the joint-effect of class imbalance and overlap: a critical review, Artif. Intell. Rev., 55 (2022), 6207–6275. https://doi.org/10.1007/s10462-022-10150-3 doi: 10.1007/s10462-022-10150-3
    [41] S. K. Pandey, A. K. Tripathi, An empirical study toward dealing with noise and class imbalance issues in software defect prediction, Soft Comput., 25 (2021), 13465–13492. https://doi.org/10.1007/s00500-021-06096-3 doi: 10.1007/s00500-021-06096-3
    [42] L. Breiman, Bagging predictors, Mach. Learn., 24 (1996), 123–140. https://doi.org/10.1007/BF00058655 doi: 10.1007/BF00058655
    [43] R. E. Schapire, The strength of weak learnability, Mach. Learn., 5 (1990), 197–227. https://doi.org/10.1007/BF00116037 doi: 10.1007/BF00116037
    [44] A. Krogh, J. Vedelsby, Neural network ensembles, cross validation, and active learning, Adv. Neural Inform. Proces. Syst., 7 (1995), 231–238. Available from: http://papers.nips.cc/paper/1001-neural-network-ensembles-cross-validation-and-active-learning.
    [45] S. Zhang, X. Li, M. Zong, X. Zhu, R. Wang, Efficient kNN classification with different numbers of nearest neighbors, IEEE T. Neur. Net. Learn., 29 (2018), 1774–1785. https://doi.org/10.1109/TNNLS.2017.2673241 doi: 10.1109/TNNLS.2017.2673241
    [46] J. R. Quinlan, Induction of decision trees, Mach. Learn., 1 (1986), 81–106. https://doi.org/10.1023/A:1022643204877 doi: 10.1023/A:1022643204877
    [47] C. Cortes, V. Vapnik, Support-vector networks, Mach. Learn., 20 (1995), 273–297. https://doi.org/10.1007/BF00994018 doi: 10.1007/BF00994018
    [48] T. Bayes, An essay towards solving a problem in the doctrine of chances, MD Comput. Comput. Med. Pract., 8 (1991), 376–418.
    [49] A. Tharwat, T. Gaber, A. Ibrahim, A. E. Hassanien, Linear discriminant analysis: A detailed tutorial, AI Commun., 30 (2017), 169–190. https://doi.org/10.3233/AIC-170729 doi: 10.3233/AIC-170729
    [50] X. Su, X. Yan, C. L. Tsai, Linear regression, WIRES Comput. Stat., 4 (2012), 275–294. https://doi.org/10.1002/wics.1198 doi: 10.1002/wics.1198
    [51] C. Blake, E. Keogh, C. J. Merz, UCI repository of machine learning databases, Department of Information and Computer Science, University of California, Irvine, CA, USA, 1998. Available from: http://www.ics.uci.edu/mlearn/MLRepository.html.
    [52] I. Triguero, S. González, J. M. Moyano, S. García, J. Alcalá-Fdez, J. Luengo, et al., KEEL 3.0: An open source software for multi-stage analysis in data mining international, J. Comput. Intell. Syst., 10 (2017), 1238–1249. https://doi.org/10.2991/ijcis.10.1.82 doi: 10.2991/ijcis.10.1.82
    [53] J. Demsar, Statistical comparisons of classifiers over multiple data sets, J. Mach. Learn. Res., 7 (2006), 1–30. Available from: http://jmlr.org/papers/v7/demsar06a.html.
    [54] S. García, A. Fernández, J. Luengo, F. Herrera, Advanced nonparametric tests for multiple comparisons in the design of experiments in computational intelligence and data mining: Experimental analysis of power, Inform. Sci., 180 (2010), 2044–2064. https://doi.org/10.1016/j.ins.2009.12.010 doi: 10.1016/j.ins.2009.12.010
  • This article has been cited by:

    1. Ali Algefary, Diagonal solutions for a class of linear matrix inequality, AIMS Mathematics, 9 (2024), 26435. https://doi.org/10.3934/math.20241286
  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)