
Novel fixed-time synchronization results of fractional-order fuzzy cellular neural networks with delays and interactions

  • This research investigated the fixed-time (FXT) synchronization of fractional-order fuzzy cellular neural networks (FCNNs) with delays and interactions based on an enhanced FXT stability theorem. By conceiving proper Lyapunov functions and applying inequality techniques, several sufficient conditions were obtained to vouch for the fixed-time synchronization (FXTS) of the discussed systems through two categories of control schemes. Moreover, in terms of another FXT stability theorem, different upper-bounding estimating formulas for settling time (ST) were given, and the distinctions between them were pointed out. Two examples were delivered at length to demonstrate the conclusions.

    Citation: Jun Liu, Wenjing Deng, Shuqin Sun, Kaibo Shi. Novel fixed-time synchronization results of fractional-order fuzzy cellular neural networks with delays and interactions[J]. AIMS Mathematics, 2024, 9(5): 13245-13264. doi: 10.3934/math.2024646




    A problem that occurs frequently in a variety of mathematical contexts is to find the common invariant subspaces of a single matrix or of a set of matrices. In the case of a single endomorphism or matrix, it is relatively easy to find all the invariant subspaces by using the Jordan normal form. Some theoretical results are also available for the invariant subspaces of two matrices. However, when there are more than two matrices, the problem becomes much harder, and unexpected invariant subspaces may occur; no systematic method is known. In a recent article [1], we provided a new algorithm to determine the common invariant subspaces of a single matrix or of a set of matrices systematically.

    In the present article we consider a more general version of this problem, namely, we provide two algorithms for the simultaneous block triangularization and block diagonalization of sets of matrices. One of the main steps in the first two proposed algorithms consists of finding the common invariant subspaces of matrices using the new method proposed in the recent article [1]. It is worth mentioning that an efficient algorithm which explicitly computes a transfer matrix realizing the simultaneous block diagonalization of unitary matrices, whose decomposition into irreducible blocks (common invariant subspaces) is known from elsewhere, is given in [2]. An application of simultaneous block diagonalization of normal matrices in quantum theory is presented in [3].

    In this article we shall be concerned with finite dimensions only. Of course, the fact that a single complex matrix can always be put into triangular form follows readily from the Jordan normal form theorem [4]. For a set of matrices, Jacobson [5] introduced the notion of a composition series for a collection of matrices. The idea of a composition series for a group is quite familiar; the Jordan-Hölder theorem [4] states that any two composition series of the same group have the same length and the same composition factors (up to permutation). Jacobson [5] characterized the simultaneous block triangularization of a set of matrices by the existence of a chain $\{0\}=V_0\subset V_1\subset\dots\subset V_t=\mathbb{C}^n$ of invariant subspaces with dimensions $\dim(V_i/V_{i-1})=n_i$. Therefore, in the context of a collection of matrices $\Omega=\{A_i\}_{i=1}^N$, the idea is to locate a common invariant subspace $V$ of minimal dimension $d$. Assume $V$ is generated by the (linearly independent) set $B_1=\{u_1,u_2,\dots,u_d\}$, and let $B=\{u_1,u_2,\dots,u_d,u_{d+1},u_{d+2},\dots,u_n\}$ be a basis of $\mathbb{C}^n$ containing $B_1$. Upon setting $S=(u_1,u_2,\dots,u_d,u_{d+1},u_{d+2},\dots,u_n)$, the matrix $S^{-1}A_iS$ has the block triangular form

    $$S^{-1}A_iS=\begin{pmatrix}B^i_{1,1} & B^i_{1,2}\\ 0 & B^i_{2,2}\end{pmatrix},$$

    for $i=1,\dots,N$. Thereafter, one may define a quotient of the ambient vector space, and each of the matrices in the given collection passes to this quotient. As such, one defines

    $$T_i=B^i_{2,2}=\begin{pmatrix}0_{(n-d)\times d} & I_{n-d}\end{pmatrix}S^{-1}A_iS\begin{pmatrix}0_{d\times(n-d)}\\ I_{n-d}\end{pmatrix}.$$

    Then one may begin again the process of looking for a common invariant subspace of minimal dimension of the set of matrices $\{T_i\}_{i=1}^N$ and iterate the procedure. Since all spaces and matrices are of finite dimension, the procedure must terminate at some point. Again, any two such composition series will be isomorphic. When the various quotients and submatrices are lifted back to the original vector space, one obtains precisely the block-triangular form for the original set of matrices. It is important to find a composition series in this construction in order to make the set of matrices as "block-triangular as possible."
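    The compression step above can be sketched numerically. The following is a minimal illustration (the matrices $A_1$, $A_2$ below are hypothetical, not taken from the paper's examples): for a basis $S$ whose first $d$ columns span the common invariant subspace $V$, the quotient matrices $T_i$ are the bottom-right blocks of $S^{-1}A_iS$.

```python
import numpy as np

# Hypothetical 3x3 matrices whose first columns have support only in
# coordinate 1, so V = span{e1} is a common invariant subspace.
A1 = np.array([[1., 2., 3.],
               [0., 4., 5.],
               [0., 6., 7.]])
A2 = np.array([[2., 1., 1.],
               [0., 1., 2.],
               [0., 3., 1.]])

n, d = 3, 1
S = np.eye(n)               # the basis is already adapted to V here
S_inv = np.linalg.inv(S)

def compress(A):
    """Bottom-right (n-d)x(n-d) block of S^{-1} A S: the action induced
    on the quotient space C^n / V."""
    return (S_inv @ A @ S)[d:, d:]

T1, T2 = compress(A1), compress(A2)
print(T1)   # the 2x2 quotient matrix of A1
```

One would then search $\{T_1, T_2\}$ for a further common invariant subspace and iterate, exactly as described above.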

    Dubi [6] gave an algorithmic approach to simultaneous triangularization of a set of matrices based on the idea of Jacobson [5]. Simultaneous triangularization can be understood as the existence of a chain $\{0\}=V_0\subset V_1\subset\dots\subset V_t=\mathbb{C}^n$ of invariant subspaces with dimensions $\dim(V_i)=i$. We generalize his study to cover simultaneous block triangularization of a set of matrices. The generalized algorithm depends on the novel algorithm for constructing invariant subspaces of a set of matrices given in the recent article [1].

    Specht [7] (see also [8]) proved that if the associative algebra $\mathcal{L}$ generated by a set of matrices $\Omega$ over $\mathbb{C}$ satisfies $\mathcal{L}=\mathcal{L}^*$, then $\Omega$ admits simultaneous block triangularization if and only if it admits simultaneous block diagonalization, in both cases via a unitary matrix. Following this result of Specht, we prove that a set of matrices $\Omega$ admits simultaneous block diagonalization if and only if the set $\Gamma=\Omega\cup\Omega^*$ admits simultaneous block triangularization. Finally, an algorithmic approach to simultaneous block diagonalization of a set of matrices based on this fact is proposed.

    The latter part of this paper presents an alternate approach for the simultaneous block diagonalization of a set of $n\times n$ matrices $\{A_s\}_{s=1}^N$ by an invertible matrix that does not require finding the common invariant subspaces. Maehara et al. [9] introduced an algorithm for simultaneous block diagonalization of a set of matrices by a unitary matrix based on the existence of a Hermitian commuting matrix. Here, we extend their algorithm to simultaneous block diagonalization by an invertible matrix based on the existence of a commuting matrix which is not necessarily Hermitian. For example, consider the set of matrices $\Omega=\{A_i\}_{i=1}^2$ where

    $$A_1=\begin{pmatrix}1&0&0\\2&2&0\\1&1&1\end{pmatrix},\qquad A_2=\begin{pmatrix}0&0&0\\2&1&0\\0&1&0\end{pmatrix}. \tag{1.1}$$

    The only Hermitian matrices commuting with the set $\Omega$ are the scalar multiples of the identity matrix. Therefore, we cannot apply the algorithm proposed in [9]. However, one can verify that the following non-Hermitian matrix $C$ commutes with all the matrices $\{A_i\}_{i=1}^2$:

    $$C=\begin{pmatrix}0&0&0\\2&1&0\\0&1&0\end{pmatrix}. \tag{1.2}$$

    The matrix $C$ has the distinct eigenvalues $\lambda_1=0$ and $\lambda_2=1$ with algebraic multiplicities $n_1=2$ and $n_2=1$, respectively. Moreover, the matrix $C$ is not diagonalizable, so we cannot construct an eigenvalue decomposition of $C$. However, one can decompose the matrix $C$ by its generalized eigenvectors as follows:

    $$S^{-1}CS=\begin{pmatrix}0&1&0\\0&0&0\\0&0&1\end{pmatrix}=\begin{pmatrix}0&1\\0&0\end{pmatrix}\oplus\begin{pmatrix}1\end{pmatrix}, \tag{1.3}$$

    where

    $$S=\begin{pmatrix}0&-\tfrac12&0\\0&1&1\\1&0&1\end{pmatrix}. \tag{1.4}$$

    It is noted that the matrices $\{A_i\}_{i=1}^2$ can be decomposed into two diagonal blocks by the constructed invertible matrix $S$, where

    $$S^{-1}A_1S=\begin{pmatrix}1&\tfrac12\\0&1\end{pmatrix}\oplus\begin{pmatrix}2\end{pmatrix},\qquad S^{-1}A_2S=\begin{pmatrix}0&1\\0&0\end{pmatrix}\oplus\begin{pmatrix}1\end{pmatrix}. \tag{1.5}$$

    Then, a new algorithm is developed for simultaneous block diagonalization by an invertible matrix based on the generalized eigenvectors of a commuting matrix. Moreover, a new characterization is presented by proving that the existence of a commuting matrix that possesses at least two distinct eigenvalues is the necessary and sufficient condition to guarantee the simultaneous block diagonalization by an invertible matrix.
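    The introductory example (1.1)-(1.5) can be checked numerically. The sketch below uses the matrix entries as they are read from the text; it verifies that $C$ commutes with the set and that the basis $S$ of generalized eigenvectors block diagonalizes both matrices.

```python
import numpy as np

# The set (1.1) and the commuting matrix C of (1.2).
A1 = np.array([[1., 0., 0.],
               [2., 2., 0.],
               [1., 1., 1.]])
A2 = np.array([[0., 0., 0.],
               [2., 1., 0.],
               [0., 1., 0.]])
C = A2.copy()

# C commutes with every matrix in the set.
assert np.allclose(C @ A1, A1 @ C) and np.allclose(C @ A2, A2 @ C)

# Columns of S as in (1.4): a Jordan chain for the eigenvalue 0
# followed by an eigenvector for the eigenvalue 1.
S = np.array([[0., -0.5, 0.],
              [0.,  1.,  1.],
              [1.,  0.,  1.]])
S_inv = np.linalg.inv(S)

B1 = S_inv @ A1 @ S   # expected: [[1, 1/2], [0, 1]] ⊕ (2), as in (1.5)
B2 = S_inv @ A2 @ S   # expected: [[0, 1], [0, 0]] ⊕ (1), as in (1.3)
```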

    An outline of the paper is as follows. In Section 2 we review several definitions pertaining to block-triangular and block-diagonal matrices and state several elementary consequences that follow from them. In Section 3, following a result of Specht [7] (see also [8]), we provide conditions for putting a set of matrices into block-diagonal form simultaneously. Furthermore, we apply the theoretical results to provide two algorithms that enable a collection of matrices to be put into block-triangular form or block-diagonal form simultaneously by a unitary matrix based on the existence of invariant subspaces. In Section 4, a new characterization is presented by proving that the existence of a commuting matrix that possesses at least two distinct eigenvalues is the necessary and sufficient condition to guarantee the simultaneous block diagonalization by an invertible matrix. Furthermore, we apply the theoretical results to provide an algorithm that enables a collection of matrices to be put into block-diagonal form simultaneously by an invertible matrix based on the existence of a commuting matrix. Sections 3 and 4 also provide concrete examples using the symbolic manipulation system Maple.

    Let $\Omega$ be a set of $n\times n$ matrices over an algebraically closed field $\mathbb{F}$, and let $\mathcal{L}$ denote the algebra generated by $\Omega$ over $\mathbb{F}$. Similarly, let $\Omega^*$ be the set of conjugate transposes of the matrices in $\Omega$, and let $\mathcal{L}^*$ denote the algebra generated by $\Omega^*$ over $\mathbb{F}$.

    Definition 2.1. An $n\times n$ matrix $A$ is given the notation $BT(n_1,\dots,n_t)$ provided $A$ is block upper triangular with $t$ square blocks on the diagonal, of sizes $n_1,\dots,n_t$, where $t\ge 2$ and $n_1+\dots+n_t=n$. That is, a block upper triangular matrix $A$ has the form

    $$A=\begin{pmatrix}A_{1,1}&A_{1,2}&\cdots&A_{1,t}\\0&A_{2,2}&\cdots&A_{2,t}\\\vdots&\vdots&\ddots&\vdots\\0&0&\cdots&A_{t,t}\end{pmatrix} \tag{2.1}$$

    where $A_{i,j}$ is a square matrix for all $i=1,\dots,t$ and $j=i,\dots,t$.

    Definition 2.2. A set of $n\times n$ matrices $\Omega$ is $BT(n_1,\dots,n_t)$ if all of the matrices in $\Omega$ are $BT(n_1,\dots,n_t)$.

    Remark 2.3. A set of $n\times n$ matrices $\Omega$ admits a simultaneous triangularization if it is $BT(n_1,\dots,n_t)$ with $n_i=1$ for $i=1,\dots,t$.

    Remark 2.4. A set of $n\times n$ matrices $\Omega$ is $BT(n_1,\dots,n_t)$ if and only if the algebra $\mathcal{L}$ generated by $\Omega$ is $BT(n_1,\dots,n_t)$.

    Proposition 2.5. [7] (see also [8]) Let $\Omega$ be a nonempty set of complex $n\times n$ matrices. Then, there is a nonsingular matrix $S$ such that $S\Omega S^{-1}$ is $BT(n_1,\dots,n_t)$ if and only if there is a unitary matrix $U$ such that $U^*\Omega U$ is $BT(n_1,\dots,n_t)$.

    Theorem 2.6. [5, Chapter IV] Let $\Omega$ be a nonempty set of complex $n\times n$ matrices. Then, there is a unitary matrix $U$ such that $U^*\Omega U$ is $BT(n_1,\dots,n_t)$ if and only if the set $\Omega$ has a chain $\{0\}=V_0\subset V_1\subset\dots\subset V_t=\mathbb{C}^n$ of invariant subspaces with dimensions $\dim(V_i/V_{i-1})=n_i$.

    Definition 2.7. An $n\times n$ matrix $A$ is given the notation $BD(n_1,\dots,n_t)$ provided $A$ is block diagonal with $t$ square blocks on the diagonal, of sizes $n_1,\dots,n_t$, where $t\ge 2$, $n_1+\dots+n_t=n$, and the off-diagonal blocks are zero matrices. That is, a block diagonal matrix $A$ has the form

    $$A=\begin{pmatrix}A_1&0&\cdots&0\\0&A_2&\cdots&0\\\vdots&\vdots&\ddots&\vdots\\0&0&\cdots&A_t\end{pmatrix} \tag{2.2}$$

    where $A_k$ is a square matrix for all $k=1,\dots,t$. In other words, the matrix $A$ is the direct sum of $A_1,\dots,A_t$; it can also be written as $A_1\oplus A_2\oplus\dots\oplus A_t$.
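    In code, a direct sum is conveniently assembled with SciPy's `scipy.linalg.block_diag`; the helper `is_BD` below (an illustrative utility, not from the paper) checks the $BD(n_1,\dots,n_t)$ property by zeroing the diagonal blocks and testing what remains.

```python
import numpy as np
from scipy.linalg import block_diag

# A BD(2, 1) matrix built as the direct sum A = A1 ⊕ A2.
A1 = np.array([[1, 2],
               [3, 4]])
A2 = np.array([[5]])
A = block_diag(A1, A2)

def is_BD(M, sizes):
    """Check that M is block diagonal with the given diagonal block sizes."""
    off = M.astype(float).copy()
    i = 0
    for m in sizes:
        off[i:i+m, i:i+m] = 0   # erase the diagonal blocks
        i += m
    return np.allclose(off, 0)  # everything off the blocks must vanish

print(A)
# [[1 2 0]
#  [3 4 0]
#  [0 0 5]]
```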

    Definition 2.8. A set of $n\times n$ matrices $\Omega$ is $BD(n_1,\dots,n_t)$ if all of the matrices in $\Omega$ are $BD(n_1,\dots,n_t)$.

    Remark 2.9. A set of $n\times n$ matrices $\Omega$ admits a simultaneous diagonalization if it is $BD(n_1,\dots,n_t)$ with $n_i=1$ for $i=1,\dots,t$.

    Remark 2.10. A set of $n\times n$ matrices $\Omega$ is $BD(n_1,\dots,n_t)$ if and only if the algebra $\mathcal{L}$ generated by $\Omega$ is $BD(n_1,\dots,n_t)$.

    Proposition 2.11. [7] (see also [8]) Let $\Omega$ be a nonempty set of complex $n\times n$ matrices and let $\mathcal{L}$ be the algebra generated by $\Omega$ over $\mathbb{C}$. Suppose $\mathcal{L}=\mathcal{L}^*$. Then, there is a nonsingular matrix $S$ such that $S\mathcal{L}S^{-1}$ is $BT(n_1,\dots,n_t)$ if and only if there is a unitary matrix $U$ such that $U^*\mathcal{L}U$ is $BD(n_1,\dots,n_t)$.

    Dubi [6] gave an algorithmic approach to the simultaneous triangularization of a set of $n\times n$ matrices. In this section, we generalize his study to cover simultaneous block triangularization and simultaneous block diagonalization of a set of $n\times n$ matrices. The generalized algorithms depend on the novel algorithm for constructing invariant subspaces of a set of matrices given in the recent article [1] and on Theorem 3.3.

    Lemma 3.1. Let $\Omega$ be a nonempty set of complex $n\times n$ matrices, let $\Omega^*$ be the set of conjugate transposes of the matrices in $\Omega$, and let $\mathcal{L}$ be the algebra generated by $\Gamma=\Omega\cup\Omega^*$. Then, $\mathcal{L}=\mathcal{L}^*$.

    Proof. Let $A$ be a matrix in $\mathcal{L}$. Then, $A=P(B_1,\dots,B_m)$ for some multivariate noncommutative polynomial $P(x_1,\dots,x_m)$ and matrices $\{B_i\}_{i=1}^m\subseteq\Gamma$. Therefore, $A^*=P(B_1,\dots,B_m)^*=Q(B_1^*,\dots,B_m^*)$ for some multivariate noncommutative polynomial $Q(x_1,\dots,x_m)$, where the matrices $\{B_i^*\}_{i=1}^m\subseteq\Gamma^*=\Gamma$. Hence, the matrix $A^*\in\mathcal{L}$.

    Lemma 3.2. Let $\Omega$ be a nonempty set of complex $n\times n$ matrices, let $\Omega^*$ be the set of conjugate transposes of the matrices in $\Omega$, and let $\Gamma=\Omega\cup\Omega^*$. Then, there is a unitary matrix $U$ such that $U^*\Gamma U$ is $BD(n_1,\dots,n_t)$ if and only if there is a unitary matrix $U$ such that $U^*\Omega U$ is $BD(n_1,\dots,n_t)$.

    Proof. Assume that there exists a unitary matrix $U$ such that $U^*\Omega U$ is $BD(n_1,\dots,n_t)$. Then, $(U^*\Omega U)^*=U^*\Omega^* U$ is $BD(n_1,\dots,n_t)$, since the conjugate transpose of a block diagonal matrix is block diagonal with the same block sizes. Hence, $U^*\Gamma U$ is $BD(n_1,\dots,n_t)$. The converse is immediate because $\Omega\subseteq\Gamma$.

    Theorem 3.3. Let $\Omega$ be a nonempty set of complex $n\times n$ matrices, let $\Omega^*$ be the set of conjugate transposes of the matrices in $\Omega$, and let $\Gamma=\Omega\cup\Omega^*$. Then, there is a unitary matrix $U$ such that $U^*\Omega U$ is $BD(n_1,\dots,n_t)$ if and only if there is a unitary matrix $U$ such that $U^*\Gamma U$ is $BT(n_1,\dots,n_t)$.

    Proof. Let $\mathcal{L}$ be the algebra generated by $\Gamma$. Then, $\mathcal{L}=\mathcal{L}^*$ by Lemma 3.1. Now, by applying Proposition 2.11 and Lemma 3.2, the following statements are equivalent:

    (1) There is a unitary matrix $U$ such that $U^*\Gamma U$ is $BT(n_1,\dots,n_t)$.

    (2) There is a unitary matrix $U$ such that $U^*\mathcal{L}U$ is $BT(n_1,\dots,n_t)$.

    (3) There is a unitary matrix $U$ such that $U^*\mathcal{L}U$ is $BD(n_1,\dots,n_t)$.

    (4) There is a unitary matrix $U$ such that $U^*\Gamma U$ is $BD(n_1,\dots,n_t)$.

    (5) There is a unitary matrix $U$ such that $U^*\Omega U$ is $BD(n_1,\dots,n_t)$.

    Algorithm A:

    (1) Input: the set $\Omega=\{A_i\}_{i=1}^N$.

    (2) Set $k=0$, $B=\emptyset$, $s=n$, $T_i=A_i$, $S_2=I$.

    (3) Search for a $d$-dimensional invariant subspace $V=\langle v_1,v_2,\dots,v_d\rangle$ of the set of matrices $\{T_i\}_{i=1}^N$, starting from $d=1$ up to $d=s-1$. If one does not exist and $k=0$, abort and print "no simultaneous block triangularization". Else, if one does not exist and $k\neq 0$, go to step (8). Else, go to the next step.

    (4) Set $V_{k+1}=(S_2v_1\ S_2v_2\ \dots\ S_2v_d)$, $B=B\cup\{S_2v_1,S_2v_2,\dots,S_2v_d\}$, $S_1=(V_1\ V_2\ \dots\ V_{k+1})$.

    (5) Find a basis $\{u_1,u_2,\dots,u_l\}$ for the orthogonal complement of $B$.

    (6) Set $S_2=(u_1\ u_2\ \dots\ u_l)$, $S=(S_1\ S_2)$, and

    $$T_i=\begin{pmatrix}0_{(s-d)\times d} & I_{s-d}\end{pmatrix}S^{-1}A_iS\begin{pmatrix}0_{d\times(s-d)}\\ I_{s-d}\end{pmatrix}.$$

    (7) Set $k=k+1$, $s=s-d$, and return to step (3).

    (8) Compute the QR decomposition of the invertible matrix $S$, by means of the Gram-Schmidt process, to convert it to a unitary matrix $Q$.

    (9) Output: a unitary matrix $U$ as the conjugate transpose of the resulting matrix $Q$.

    Remark 3.4. If one uses any non-orthogonal complement in step (5) of Algorithm A, then the matrix $S$ is invertible and $S^{-1}\Omega S$ is $BT(n_1,\dots,n_t)$. However, in such a case, one cannot guarantee that $U^*\Omega U$ is $BT(n_1,\dots,n_t)$.
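    The iteration of Algorithm A can be sketched in code. The sketch below is a deliberate simplification: its invariant-subspace search in step (3) is restricted to coordinate subspaces $\langle e_j : j\in J\rangle$ (sufficient for the examples in this section, where the subspaces found are coordinate spans), whereas the full algorithm relies on the general construction of [1]. The input matrices at the end are hypothetical.

```python
import numpy as np
from itertools import combinations

def coord_invariant_subspace(mats, d):
    """Step (3), restricted to coordinate subspaces span{e_j : j in J}:
    such a span is invariant under A exactly when the columns indexed by J
    vanish outside the rows indexed by J. (The search in [1] also finds
    non-coordinate subspaces.)"""
    s = mats[0].shape[0]
    for J in combinations(range(s), d):
        rest = [k for k in range(s) if k not in J]
        if all(np.allclose(A[np.ix_(rest, list(J))], 0) for A in mats):
            return list(J)
    return None

def block_triangularize(mats):
    """Coordinate-subspace sketch of Algorithm A: returns a coordinate
    ordering and the block sizes of the resulting block triangular form."""
    order, sizes = [], []
    active = list(range(mats[0].shape[0]))   # original indices still in play
    T = [np.asarray(A, dtype=float) for A in mats]
    while len(active) > 1:
        found = None
        for d in range(1, len(active)):       # step (3): d = 1, ..., s-1
            found = coord_invariant_subspace(T, d)
            if found is not None:
                break
        if found is None:
            break                             # no further invariant subspace
        order += [active[j] for j in found]   # step (4): record basis vectors
        sizes.append(len(found))
        keep = [j for j in range(len(active)) if j not in found]
        active = [active[j] for j in keep]
        T = [A[np.ix_(keep, keep)] for A in T]  # step (6): quotient matrices
    order += active
    sizes.append(len(active))
    return order, sizes

# A hypothetical pair already sharing the invariant subspace span{e1}:
A1 = np.array([[1., 2., 3.], [0., 4., 5.], [0., 6., 7.]])
A2 = np.array([[2., 1., 1.], [0., 1., 2.], [0., 3., 1.]])
order, sizes = block_triangularize([A1, A2])
print(order, sizes)   # coordinate ordering and block sizes, here BT(1, 2)
```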

    Example 3.5. The set of matrices $\Omega=\{A_i\}_{i=1}^2$ admits simultaneous block triangularization where

    A1=(321011050000014012131113020025010006),A2=(441244840360001012320444168524404102880400040). (3.1)

    Applying Algorithm A to the set Ω can be summarized as follows:

    Input: Ω.

    Initiation step:

    We have $k=0$, $B=\emptyset$, $s=6$, $T_1=A_1$, $T_2=A_2$, $S_2=I$.

    In the first iteration:

    We found the two-dimensional invariant subspace $V=\langle e_1,e_4\rangle$ of the set of matrices $\{T_i\}_{i=1}^2$. Therefore, $B=\{e_1,e_4\}$, $S_1=(e_1\ e_4)$, $S_2=(e_2\ e_3\ e_5\ e_6)$,

    T1=(5000141220251006),T2=(360011232444128840040), (3.2)

    $k=1$, and $s=4$.

    In the second iteration: We found the two-dimensional invariant subspace $V=\langle e_2,e_3\rangle$ of the set of matrices $\{T_i\}_{i=1}^2$. Therefore, $B=\{e_1,e_4,e_3,e_5\}$, $S_1=(e_1\ e_4\ e_3\ e_5)$, $S_2=(e_2\ e_6)$,

    T1=(5016),T2=(361440), (3.3)

    $k=2$, and $s=2$.

    In the third iteration: There is no one-dimensional invariant subspace of the set of matrices $\{T_i\}_{i=1}^2$. Therefore, $S=(e_1\ e_4\ e_3\ e_5\ e_2\ e_6)$, and the corresponding unitary matrix is

    $$U=\begin{pmatrix}1&0&0&0&0&0\\0&0&0&1&0&0\\0&0&1&0&0&0\\0&0&0&0&1&0\\0&1&0&0&0&0\\0&0&0&0&0&1\end{pmatrix}$$

    such that the set $U\Omega U^*=\{UA_iU^*\}_{i=1}^2$ is $BT(2,2,2)$ where

    UA1U=(301121111133004112000225000050000016),UA2U=(444481244528416400324124001284800003610000440). (3.4)

    Algorithm B:

    (1) Input: the set $\Omega=\{A_i\}_{i=1}^N$.

    (2) Construct the set $\Gamma=\Omega\cup\Omega^*$.

    (3) Find a unitary matrix $U$ such that $U^*\Gamma U$ is $BT(n_1,\dots,n_t)$ using Algorithm A.

    (4) Output: a unitary matrix $U$.

    Remark 3.6. Algorithm B provides the finest block-diagonalization. Moreover, the number of blocks equals the number of invariant subspaces, and the size of each block is $n_i\times n_i$, where $n_i$ is the dimension of the corresponding invariant subspace.

    Example 3.7. The set of matrices $\Omega=\{A_i\}_{i=1}^2$ admits simultaneous block diagonalization where

    $$A_1=\begin{pmatrix}3&0&0&0&0&0&0\\0&2&0&0&0&0&0\\0&0&2&0&0&0&0\\0&0&0&1&0&0&0\\0&0&0&0&1&0&0\\0&0&0&0&0&1&0\\0&0&0&0&0&0&3\end{pmatrix},\qquad A_2=\begin{pmatrix}0&0&0&0&0&0&0\\0&0&0&0&0&0&0\\0&1&0&0&0&0&0\\0&0&0&0&0&0&0\\0&0&0&0&0&0&0\\0&0&0&1&0&0&0\\1&0&0&0&0&0&0\end{pmatrix}. \tag{3.5}$$

    Applying Algorithm B to the set Ω can be summarized as follows:

    Input: $\Gamma=\Omega\cup\Omega^*$.

    Initiation step:

    We have $k=0$, $B=\emptyset$, $s=7$, $T_1=A_1$, $T_2=A_2$, $T_3=A_2^T$, $S_2=I$.

    In the first iteration:

    We found the one-dimensional invariant subspace $V=\langle e_5\rangle$ of the set of matrices $\{T_i\}_{i=1}^3$. Therefore, $B=\{e_5\}$, $S_1=(e_5)$, $S_2=(e_1\ e_2\ e_3\ e_4\ e_6\ e_7)$,

    $$T_1=\begin{pmatrix}3&0&0&0&0&0\\0&2&0&0&0&0\\0&0&2&0&0&0\\0&0&0&1&0&0\\0&0&0&0&1&0\\0&0&0&0&0&3\end{pmatrix},\quad T_2=\begin{pmatrix}0&0&0&0&0&0\\0&0&0&0&0&0\\0&1&0&0&0&0\\0&0&0&0&0&0\\0&0&0&1&0&0\\1&0&0&0&0&0\end{pmatrix},\quad T_3=T_2^T, \tag{3.6}$$

    $k=1$, and $s=6$.

    In the second iteration: We found the two-dimensional invariant subspace $V=\langle e_4,e_5\rangle$ of the set of matrices $\{T_i\}_{i=1}^3$. Therefore, $B=\{e_5,e_4,e_6\}$, $S_1=(e_5\ e_4\ e_6)$, $S_2=(e_1\ e_2\ e_3\ e_7)$,

    $$T_1=\begin{pmatrix}3&0&0&0\\0&2&0&0\\0&0&2&0\\0&0&0&3\end{pmatrix},\quad T_2=\begin{pmatrix}0&0&0&0\\0&0&0&0\\0&1&0&0\\1&0&0&0\end{pmatrix},\quad T_3=T_2^T, \tag{3.7}$$

    $k=2$, and $s=4$.

    In the third iteration: We found the two-dimensional invariant subspace $V=\langle e_2,e_3\rangle$ of the set of matrices $\{T_i\}_{i=1}^3$. Therefore, $B=\{e_5,e_4,e_6,e_2,e_3\}$, $S_1=(e_5\ e_4\ e_6\ e_2\ e_3)$, $S_2=(e_1\ e_7)$,

    $$T_1=\begin{pmatrix}3&0\\0&3\end{pmatrix},\quad T_2=\begin{pmatrix}0&0\\1&0\end{pmatrix},\quad T_3=\begin{pmatrix}0&1\\0&0\end{pmatrix}, \tag{3.8}$$

    $k=3$, and $s=2$.

    In the fourth iteration: There is no one-dimensional invariant subspace of the set of matrices $\{T_i\}_{i=1}^3$. Therefore, $S=(e_5\ e_4\ e_6\ e_2\ e_3\ e_1\ e_7)$, and the corresponding unitary matrix is

    $$U=\begin{pmatrix}0&0&0&0&1&0&0\\0&0&0&1&0&0&0\\0&0&0&0&0&1&0\\0&1&0&0&0&0&0\\0&0&1&0&0&0&0\\1&0&0&0&0&0&0\\0&0&0&0&0&0&1\end{pmatrix}$$

    such that the set $U\Omega U^*=\{UA_iU^*\}_{i=1}^2$ is $BD(1,2,2,2)$ where

    $$UA_1U^*=(1)\oplus\begin{pmatrix}1&0\\0&1\end{pmatrix}\oplus\begin{pmatrix}2&0\\0&2\end{pmatrix}\oplus\begin{pmatrix}3&0\\0&3\end{pmatrix},\qquad UA_2U^*=(0)\oplus\begin{pmatrix}0&0\\1&0\end{pmatrix}\oplus\begin{pmatrix}0&0\\1&0\end{pmatrix}\oplus\begin{pmatrix}0&0\\1&0\end{pmatrix}. \tag{3.9}$$
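    The permutation produced in this example can be checked numerically. The entries of $A_1$, $A_2$ below are as read from the example (the diagonal matrix and a sparse 0/1 matrix), and the helper `is_BD` is an illustrative utility, not part of the paper's algorithms.

```python
import numpy as np

# The set (3.5): A1 diagonal; A2 with ones in positions (3,2), (6,4), (7,1)
# in 1-indexed notation, as read from the example.
A1 = np.diag([3., 2., 2., 1., 1., 1., 3.])
A2 = np.zeros((7, 7))
A2[2, 1] = A2[5, 3] = A2[6, 0] = 1.      # same entries, 0-indexed

# Permutation basis S = (e5 e4 e6 e2 e3 e1 e7) produced by Algorithm B.
perm = [4, 3, 5, 1, 2, 0, 6]             # 0-indexed column order
S = np.eye(7)[:, perm]
U = S.T                                   # S is orthogonal here, so U = S^{-1}

def is_BD(M, sizes):
    """Check that M is block diagonal with the given diagonal block sizes."""
    off = M.copy()
    i = 0
    for m in sizes:
        off[i:i+m, i:i+m] = 0
        i += m
    return np.allclose(off, 0)

B1 = U @ A1 @ U.T
B2 = U @ A2 @ U.T
assert is_BD(B1, [1, 2, 2, 2]) and is_BD(B2, [1, 2, 2, 2])   # BD(1,2,2,2)
```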

    Example 3.8. The set of matrices $\Omega=\{A_i\}_{i=1}^2$ admits simultaneous block diagonalization where

    $$A_1=\begin{pmatrix}3&0&0&0&0&0&0\\0&2&0&0&0&0&0\\0&0&2&0&0&0&0\\0&0&0&1&0&0&0\\0&0&0&0&1&0&0\\0&0&0&0&0&1&0\\0&0&0&0&0&0&3\end{pmatrix},\qquad A_2=\begin{pmatrix}0&0&0&0&0&0&0\\0&0&0&1&0&0&0\\0&1&0&0&0&0&0\\0&0&0&0&0&0&0\\0&0&0&0&1&0&0\\0&0&0&0&1&0&0\\1&0&0&0&0&0&0\end{pmatrix}. \tag{3.10}$$

    Similarly, applying Algorithm B to the set $\Omega$ provides the matrix $S=(e_6\ e_5\ e_7\ e_1\ e_3\ e_2\ e_4)$. Therefore, the corresponding unitary matrix is

    $$U=\begin{pmatrix}0&0&0&0&0&1&0\\0&0&0&0&1&0&0\\0&0&0&0&0&0&1\\1&0&0&0&0&0&0\\0&0&1&0&0&0&0\\0&1&0&0&0&0&0\\0&0&0&1&0&0&0\end{pmatrix}$$

    such that the set $U\Omega U^*=\{UA_iU^*\}_{i=1}^2$ is $BD(2,2,3)$ where

    $$UA_1U^*=\begin{pmatrix}1&0\\0&1\end{pmatrix}\oplus\begin{pmatrix}3&0\\0&3\end{pmatrix}\oplus\begin{pmatrix}2&0&0\\0&2&0\\0&0&1\end{pmatrix},\qquad UA_2U^*=\begin{pmatrix}0&1\\0&1\end{pmatrix}\oplus\begin{pmatrix}0&1\\0&0\end{pmatrix}\oplus\begin{pmatrix}0&1&0\\0&0&1\\0&0&0\end{pmatrix}. \tag{3.11}$$

    Example 3.9. The set of matrices $\Omega=\{A_i\}_{i=1}^3$ admits simultaneous block diagonalization where

    A1=(000000000020000000001000000000200000000000000000001000000000100000000010000000000),A2=(000100000100010000000001000000000000000100000000000000000000000000000100000000000),A3=(010000000000000000000000000100010000010000000001000000000000010000000000000000000). (3.12)

    Similarly, applying Algorithm B to the set Ω provides the matrix S=(e1+e5e9e3e6e8e7e1e5,e2e4). Therefore, the corresponding unitary matrix is

    U=(12200012200000000000010010000000000010000000000100000001001220001220000010000000000100000)

    such that the set $U\Omega U^*=\{UA_iU^*\}_{i=1}^3$ is $BD(1,1,2,2,3)$ where

    UA1U=(0)(0)(1001)(1001)(000020002),UA2U=(0)(0)(0100)(0100)(002200000),UA3U=(0)(0)(0010)(0010)(020000200). (3.13)

    This section focuses on an alternate approach for simultaneous block diagonalization of a set of $n\times n$ matrices $\{A_s\}_{s=1}^N$ by an invertible matrix that does not require finding the common invariant subspaces, unlike Algorithm B of the previous section. Maehara et al. [9] introduced an algorithm for simultaneous block diagonalization of a set of matrices by a unitary matrix based on the eigenvalue decomposition of a Hermitian commuting matrix. Here, we extend their algorithm to be applicable to a non-Hermitian commuting matrix by considering its generalized eigenvectors. Moreover, a new characterization is presented by proving that the existence of a commuting matrix that possesses at least two distinct eigenvalues is the necessary and sufficient condition guaranteeing simultaneous block diagonalization by an invertible matrix.

    Proposition 4.1. Let $V$ be a vector space, and let $T:V\to V$ be a linear operator. Let $\lambda_1,\dots,\lambda_k$ be the distinct eigenvalues of $T$. Then, each generalized eigenspace $G_{\lambda_i}(T)$ is $T$-invariant, and we have the direct sum decomposition

    $$V=G_{\lambda_1}(T)\oplus G_{\lambda_2}(T)\oplus\dots\oplus G_{\lambda_k}(T).$$

    Lemma 4.2. Let $V$ be a vector space, and let $T:V\to V$ and $L:V\to V$ be commuting linear operators. Let $\lambda_1,\dots,\lambda_k$ be the distinct eigenvalues of $T$. Then, each generalized eigenspace $G_{\lambda_i}(T)$ is $L$-invariant.

    Proof. Let $V$ be a vector space and let $\lambda_1,\dots,\lambda_k$ be the distinct eigenvalues of $T$, with minimal polynomial $\mu(x)=(x-\lambda_1)^{n_1}(x-\lambda_2)^{n_2}\dots(x-\lambda_k)^{n_k}$. Then, we have the direct sum decomposition $V=G_{\lambda_1}(T)\oplus G_{\lambda_2}(T)\oplus\dots\oplus G_{\lambda_k}(T)$.

    For each $i=1,\dots,k$, let $x\in G_{\lambda_i}(T)$, so that $(T-\lambda_iI)^{n_i}x=0$. Since $L$ commutes with $T$, we have $(T-\lambda_iI)^{n_i}Lx=L(T-\lambda_iI)^{n_i}x=0$. Hence, $Lx\in G_{\lambda_i}(T)$.

    Theorem 4.3. Let $\{A_s\}_{s=1}^N$ be a set of $n\times n$ matrices. Then, the set $\{A_s\}_{s=1}^N$ admits simultaneous block diagonalization by an invertible matrix $S$ if and only if the set $\{A_s\}_{s=1}^N$ commutes with a matrix $C$ that possesses at least two distinct eigenvalues.

    Proof. Assume that the set $\{A_s\}_{s=1}^N$ admits simultaneous block diagonalization by an invertible matrix $S$ such that

    $$S^{-1}A_sS=B_{s,1}\oplus B_{s,2}\oplus\dots\oplus B_{s,k},$$

    where the number of blocks is $k\ge 2$, and the matrices $B_{s,1},B_{s,2},\dots,B_{s,k}$ have sizes $n_1\times n_1, n_2\times n_2,\dots,n_k\times n_k$, respectively, for all $s=1,\dots,N$.

    Now, define the matrix $C$ as

    $$C=S\left(\lambda_1I_{n_1\times n_1}\oplus\lambda_2I_{n_2\times n_2}\oplus\dots\oplus\lambda_kI_{n_k\times n_k}\right)S^{-1},$$

    where $\lambda_1,\lambda_2,\dots,\lambda_k$ are any distinct numbers.

    Clearly, the matrix $C$ commutes with the set $\{A_s\}_{s=1}^N$. Moreover, it has the distinct eigenvalues $\lambda_1,\lambda_2,\dots,\lambda_k$.

    Conversely, assume that the set $\{A_s\}_{s=1}^N$ commutes with a matrix $C$ that possesses the distinct eigenvalues $\lambda_1,\lambda_2,\dots,\lambda_k$.

    Using Proposition 4.1, one can use the generalized eigenspaces $G_{\lambda_i}(C)$ of the matrix $C$ associated with these distinct eigenvalues to decompose $C$ as a direct sum of $k$ matrices. This can be achieved by restricting $C$ to the invariant subspaces $G_{\lambda_i}(C)$ as follows:

    $$S^{-1}CS=[C]_{G_{\lambda_1}(C)}\oplus[C]_{G_{\lambda_2}(C)}\oplus\dots\oplus[C]_{G_{\lambda_k}(C)},$$

    where

    $$S=\left(G_{\lambda_1}(C),G_{\lambda_2}(C),\dots,G_{\lambda_k}(C)\right).$$

    Using Lemma 4.2, one can restrict each matrix $A_s$ to the invariant subspaces $G_{\lambda_i}(C)$ to decompose $A_s$ as a direct sum of $k$ matrices as follows:

    $$S^{-1}A_sS=[A_s]_{G_{\lambda_1}(C)}\oplus[A_s]_{G_{\lambda_2}(C)}\oplus\dots\oplus[A_s]_{G_{\lambda_k}(C)}.$$

    Remark 4.4. For a given set of $n\times n$ matrices $\{A_s\}_{s=1}^N$, if every matrix commuting with the set has only one eigenvalue, then the set does not admit simultaneous block diagonalization by an invertible matrix.

    Algorithm C:

    (1) Input: the set $\Omega=\{A_s\}_{s=1}^N$.

    (2) Construct the following matrix:

    $$X=\begin{pmatrix}I\otimes A_1-A_1^T\otimes I\\ I\otimes A_2-A_2^T\otimes I\\ \vdots\\ I\otimes A_N-A_N^T\otimes I\end{pmatrix}.$$

    (3) Compute the null space of the matrix $X$ and reshape the obtained vectors as $n\times n$ matrices. These matrices commute with all the matrices $\{A_s\}_{s=1}^N$.

    (4) Choose from the obtained matrices a matrix $C$ that possesses at least two distinct eigenvalues.

    (5) Find the distinct eigenvalues $\lambda_1,\dots,\lambda_k$ of the matrix $C$ and the corresponding algebraic multiplicities $n_1,n_2,\dots,n_k$.

    (6) Find each generalized eigenspace $G_{\lambda_i}(C)$ of the matrix $C$ associated with the eigenvalue $\lambda_i$ by computing the null space of $(C-\lambda_iI)^{n_i}$.

    (7) Construct the invertible matrix $S$ as

    $$S=\left(G_{\lambda_1}(C),G_{\lambda_2}(C),\dots,G_{\lambda_k}(C)\right).$$

    (8) Verify that

    $$S^{-1}A_sS=B_{s,1}\oplus B_{s,2}\oplus\dots\oplus B_{s,k},$$

    where the matrices $B_{s,1},B_{s,2},\dots,B_{s,k}$ have sizes $n_1\times n_1,n_2\times n_2,\dots,n_k\times n_k$, respectively, for all $s=1,\dots,N$.

    (9) Output: an invertible matrix $S$.
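    Steps (2)-(3) can be sketched as follows. The equations $A_sC-CA_s=0$ are linear in $C$; with column-major vectorization they read $(I\otimes A_s-A_s^T\otimes I)\,\mathrm{vec}(C)=0$, so a basis of the commutant is a null-space basis of the stacked matrix $X$. The null space is computed here via SVD (a standard numerical choice; the paper's Maple implementation may differ), and the sketch is run on the set (1.1) from the introduction, with entries as read from the text.

```python
import numpy as np

def commuting_matrices(mats, tol=1e-9):
    """Steps (2)-(3) of Algorithm C: build X from Kronecker products and
    reshape a null-space basis of X into commuting matrices."""
    n = mats[0].shape[0]
    I = np.eye(n)
    X = np.vstack([np.kron(I, A) - np.kron(A.T, I) for A in mats])
    _, sv, Vh = np.linalg.svd(X)
    null_dim = int((sv < tol).sum())
    null = Vh[Vh.shape[0] - null_dim:]        # right singular vectors of sv ~ 0
    # column-major reshape so that each vector v = vec(C) gives back C
    return [v.reshape(n, n, order='F') for v in null]

# The set (1.1) from the introduction.
A1 = np.array([[1., 0., 0.], [2., 2., 0.], [1., 1., 1.]])
A2 = np.array([[0., 0., 0.], [2., 1., 0.], [0., 1., 0.]])

Cs = commuting_matrices([A1, A2])
assert all(np.allclose(A1 @ C, C @ A1) and np.allclose(A2 @ C, C @ A2)
           for C in Cs)
print(len(Cs))   # dimension of the commutant (it contains I, so at least 1)
```

In step (4) one would then scan the returned basis (and, if necessary, generic linear combinations of it) for a matrix with at least two distinct eigenvalues.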

    Remark 4.5. Algorithm C provides the finest block-diagonalization if one chooses a matrix $C$ with the maximum number of distinct eigenvalues. Moreover, the number of blocks equals the number of distinct eigenvalues, and the size of each block is $n_i\times n_i$, where $n_i$ is the algebraic multiplicity of the eigenvalue $\lambda_i$.

    Example 4.6. Consider the set of matrices $\Omega=\{A_i\}_{i=1}^6$ where

    A1=(000000000100000010010000001000000000),A2=(000100000000000001100000000000001000),A3=(000010000001000000000000100000010000),A4=(010000100000000000000000000001000010),A5=(001000000000100000000001000000000100),A6=(000000001000010000000010000100000000). (4.1)

    The set $\Omega$ admits simultaneous block diagonalization by an invertible matrix. Such a matrix can be obtained by applying Algorithm C to the set $\Omega$, as summarized below:

    A matrix $C$ that commutes with all the matrices $\{A_i\}_{i=1}^6$ can be obtained as

    C=(000001000010000100001000010000100000). (4.2)


    The distinct eigenvalues of the matrix $C$ are $\lambda_1=1$ and $\lambda_2=-1$, with algebraic multiplicities $n_1=3$ and $n_2=3$, respectively.

    The generalized eigenspaces of the matrix $C$ associated with the distinct eigenvalues are

    Gλ1(C)=N(Cλ1I)3=e6e1,e2+e5,e4e3,Gλ2(C)=N(Cλ2I)3=e1+e6,e5e2,e3+e4. (4.3)

    The invertible matrix $S=\left(G_{\lambda_1}(C),G_{\lambda_2}(C)\right)$ is

    S=(100100010010001001001001010010100100). (4.4)

    The set $S^{-1}\Omega S=\{S^{-1}A_iS\}_{i=1}^6$ consists of block diagonal matrices, where

    S1A1S=(000001010)(000001010),S1A2S=(001000100)(001000100),S1A3S=(010100000)(010100000),S1A4S=(010100000)(010100000),S1A5S=(001000100)(001000100),S1A6S=(000001010)(000001010). (4.5)

    It is well known that a set of non-defective matrices can be simultaneously diagonalized if and only if the matrices commute. In the case of non-commuting matrices, the best that can be achieved is simultaneous block diagonalization. Both Algorithm B and the algorithm of Maehara et al. [9] are applicable for simultaneous block diagonalization of a set of matrices by a unitary matrix. Algorithm C can be applied for block diagonalization by an invertible matrix when finding a unitary matrix is not possible. If block diagonalization of a set of matrices is not possible by a unitary or an invertible matrix, one may resort to block triangularization via Algorithm A. Algorithms A and B are based on the existence of invariant subspaces; Algorithm C, by contrast, is based on the existence of a commuting matrix which is not necessarily Hermitian, unlike the algorithm of Maehara et al.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    Ahmad Y. Al-Dweik and M. T. Mustafa would like to thank Qatar University for its support and excellent research facilities. R. Ghanam and G. Thompson are grateful to VCU Qatar and Qatar Foundation for their support.

    The authors declare that they have no conflicts of interest.

    Figure Listing 1.  Step 5 in Algorithm A.
    Figure Listing 2.  Step 6 in Algorithm A.
    Figure Listing 3.  Steps 8 & 9 in Algorithm A.
    Figure Listing 4.  Steps 2 & 3 in Algorithm C.
    Figure Listing 5.  Steps 6 & 7 in Algorithm C.


    [19] S. Yang, C. Hu, Y. Yu, H. Jiang, Exponential stability of fractional-order impulsive control systems with applications in synchronization, IEEE Trans. Cybernet., 50 (2020), 3157–3168. https://doi.org/10.1109/TCYB.2019.2906497 doi: 10.1109/TCYB.2019.2906497
    [20] W. Ma, C. Li, Y. Wu, Y. Wu, Synchronization of fractional fuzzy cellular neural networks with interactions, Chaos, 27 (2017), 103106. https://doi.org/10.1063/1.5006194 doi: 10.1063/1.5006194
    [21] T. Hu, X. Zhang, S. Zhong, Global asymptotic synchronization of nonidentical fractional-order neural networks, Neurocomputing, 313 (2018), 39–46. https://doi.org/10.1016/j.neucom.2018.05.098 doi: 10.1016/j.neucom.2018.05.098
    [22] P. Mani, R. Rajan, L. Shanmugam, Y. H. Joo, Adaptive control for fractional order induced chaotic fuzzy cellular neural networks and its application to image encryption, Inform. Sci., 491 (2019), 74–89. https://doi.org/10.1016/j.ins.2019.04.007 doi: 10.1016/j.ins.2019.04.007
    [23] J. Wang, X. Wang, X. Zhang, S. Zhu, Global h-synchronization for high-order delayed inertial neural networks via direct SORS strategy, IEEE Trans. Syst. Man Cybernet. Syst., 53 (2023), 6693–6704. https://doi.org/10.1109/TSMC.2023.3286095 doi: 10.1109/TSMC.2023.3286095
    [24] Z. Dong, X. Wang, X. Zhang, M. Hu, T. N. Dinh, Global exponential synchronization of discrete-time high-order switched neural networks and its application to multi-channel audio encryption, Nonlinear Anal. Hybrid Syst., 47 (2023), 101291. https://doi.org/10.1016/j.nahs.2022.101291 doi: 10.1016/j.nahs.2022.101291
    [25] Z. Yang, J. Zhang, J. Hu, J. Mei, New results on finite-time stability for fractional-order neural networks with proportional delay, Neurocomputing, 442 (2021), 327–336. https://doi.org/10.1016/j.neucom.2021.02.082 doi: 10.1016/j.neucom.2021.02.082
    [26] Y. W. Wang, Y. Zhang, X. K. Liu, X. Chen, Distributed predefined-time optimization and control for multi-bus DC microgrid, IEEE Trans. Power Syst., 2023, 1–11. https://doi.org/10.1109/TPWRS.2023.3349165 doi: 10.1109/TPWRS.2023.3349165
    [27] A. Polyakov, Nonlinear feedback design for fixed-time stabilization of linear control systems, IEEE Trans. Automat. Control, 57 (2012), 2106–2110. https://doi.org/10.1109/TAC.2011.2179869 doi: 10.1109/TAC.2011.2179869
    [28] C. Chen, L. Li, H. Peng, Y. Yang, L. Mi, H. Zhao, A new fixed-time stability theorem and its application to the fixed-time synchronization of neural networks, Neural Netw., 123 (2020), 412–419. https://doi.org/10.1016/j.neunet.2019.12.028 doi: 10.1016/j.neunet.2019.12.028
    [29] A. Abdurahman, H. Jiang, C. Hu, Improved fixed-time stability results and application to synchronization of discontinuous neural networks with state-dependent switching, Internat. J. Robust Nonlinear Control, 31 (2021), 5725–5744. https://doi.org/10.1002/rnc.5566 doi: 10.1002/rnc.5566
    [30] C. Hu, J. Yu, Z. Chen, H. Jiang, T. Huang, Fixed-time stability of dynamical systems and fixed-time synchronization of coupled discontinuous neural networks, Neural Netw., 89 (2017), 74–83. https://doi.org/10.1016/j.neunet.2017.02.001 doi: 10.1016/j.neunet.2017.02.001
    [31] T. Jia, X. Chen, L. He, F. Zhao, J. Qiu, Finite-time synchronization of uncertain fractional-order delayed memristive neural networks via adaptive sliding mode control and its application, Fractal Fract., 6 (2022), 502. https://doi.org/10.3390/fractalfract6090502 doi: 10.3390/fractalfract6090502
    [32] X. Chen, T. Jia, Z. Wang, X. Xie, J. Qiu, Practical fixed-time bipartite synchronization of uncertain coupled neural networks subject to deception attacks via dual-channel event-triggered control, IEEE Trans. Cybernet., 2023, 1–11. https://doi.org/10.1109/TCYB.2023.3338165 doi: 10.1109/TCYB.2023.3338165
    [33] C. Chen, L. Li, H. Peng, Y. Yang, L. Mi, L. Wang, A new fixed-time stability theorem and its application to the synchronization control of memristive neural networks, Neurocomputing, 349 (2019), 290–300. https://doi.org/10.1016/j.neucom.2019.03.040 doi: 10.1016/j.neucom.2019.03.040
    [34] Y. Lei, Y. Wang, I. Morărescu, R. Postoyan, Event-triggered fixed-time stabilization of two time scales linear systems, IEEE Trans. Automat. Control, 68 (2023), 1722–1729. https://doi.org/10.1109/TAC.2022.3151818 doi: 10.1109/TAC.2022.3151818
    [35] M. Zheng, L. Li, H. Peng, J. Xiao, Y. Yang, Y. Zhang, et al., Fixed-time synchronization of memristor-based fuzzy cellular neural network with time-varying delay, J. Franklin Inst., 355 (2018), 6780–6809. https://doi.org/10.1016/j.jfranklin.2018.06.041 doi: 10.1016/j.jfranklin.2018.06.041
    [36] F. Kong, Q. Zhu, R. Sakthivel, A. Mohammadzadeh, Fixed-time synchronization analysis for discontinuous fuzzy inertial neural networks with parameter uncertainties, Neurocomputing, 422 (2021), 295–313. https://doi.org/10.1016/j.neucom.2020.09.014 doi: 10.1016/j.neucom.2020.09.014
    [37] Y. Liu, G. Zhang, J. Hu, Fixed-time stabilization and synchronization for fuzzy inertial neural networks with bounded distributed delays and discontinuous activation functions, Neurocomputing, 495 (2022), 86–96. https://doi.org/10.1016/j.neucom.2022.04.101 doi: 10.1016/j.neucom.2022.04.101
    [38] W. Wang, X. Jia, Z. Wang, X. Luo, L. Li, J. Kurths, et al., Fixed-time synchronization of fractional order memristive MAM neural networks by sliding mode control, Neurocomputing, 401 (2020), 364–376. https://doi.org/10.1016/j.neucom.2020.03.043 doi: 10.1016/j.neucom.2020.03.043
    [39] E. Arslan, G. Narayanan, M. S. Ali, S. Arik, S. Saroha, Controller design for finite-time and fixed-time stabilization of fractional-order memristive complex-valued BAM neural networks with uncertain parameters and time-varying delays, Neural Netw., 130 (2020), 60–74. https://doi.org/10.1016/j.neunet.2020.06.021 doi: 10.1016/j.neunet.2020.06.021
    [40] Q. Gan, R. Xu, P. Yang, Synchronization of non-identical chaotic delayed fuzzy cellular neural networks based on sliding mode control, Commun. Nonlinear Sci. Numer. Simul., 17 (2012), 433–443. https://doi.org/10.1016/j.cnsns.2011.05.014 doi: 10.1016/j.cnsns.2011.05.014
    [41] M. Roohi, C. Zhang, Y. Chen, Adaptive model-free synchronization of different fractional-order neural networks with an application in cryptography, Nonlinear Dyn., 100 (2020), 3979–4001. https://doi.org/10.1007/s11071-020-05719-y doi: 10.1007/s11071-020-05719-y
    [42] M. Roohi, C. Zhang, M. Taheri, A. Basse-O'Connor, Synchronization of fractional-order delayed neural networks using dynamic-free adaptive sliding mode control, Fractal Fract., 7 (2023), 682. https://doi.org/10.3390/fractalfract7090682 doi: 10.3390/fractalfract7090682
    [43] K. Mathiyalagan, J. H. Park, R. Sakthivel, Synchronization for delayed memristive BAM neural networks using impulsive control with random nonlinearities, Appl. Math. Comput., 259 (2015), 967–979. https://doi.org/10.1016/j.amc.2015.03.022 doi: 10.1016/j.amc.2015.03.022
    [44] Y. Liu, M. Liu, X. Xu, Adaptive control design for fixed-time synchronization of fuzzy stochastic cellular neural networks with discrete and distributed delay, Iran. J. Fuzzy Syst., 18 (2021), 13–28. https://doi.org/10.22111/ijfs.2021.6330 doi: 10.22111/ijfs.2021.6330
    [45] H. Ren, Z. Peng, Y. Gu, Fixed-time synchronization of stochastic memristor-based neural networks with adaptive control, Neural Netw., 130 (2020), 165–175. https://doi.org/10.1016/j.neunet.2020.07.002 doi: 10.1016/j.neunet.2020.07.002
    [46] W. Sun, Y. Wu, J. Zhang, S. Qin, Inner and outer synchronization between two coupled networks with interactions, J. Franklin Inst., 352 (2014), 3166–3177. https://doi.org/10.1016/j.jfranklin.2014.08.004 doi: 10.1016/j.jfranklin.2014.08.004
    [47] A. A. Kilbas, H. M. Srivastava, J. J. Trujillo, Theory and applications of fractional differential equations, 204 (2006), 1–523.
    [48] B. Chen, J. Chen, Global asymptotical ω-periodicity of a fractional-order non-autonomous neural networks, Neural Netw., 68 (2015), 78–88. https://doi.org/10.1016/j.neunet.2015.04.006 doi: 10.1016/j.neunet.2015.04.006
    [49] G. H. Hardy, J. E. Littlewood, G. Pólya, Inequalities, 2 Eds., Cambridge: Cambridge University Press, 1952.
  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)