Research article

Zeroing neural networks for computing quaternion linear matrix equation with application to color restoration of images

  • The importance of quaternions in a variety of fields, such as physics, engineering and computer science, renders the effective solution of the time-varying quaternion matrix linear equation (TV-QLME) an equally important and interesting task. Zeroing neural networks (ZNN) have seen great success in solving TV problems in the real and complex domains, while quaternions and matrices of quaternions may be readily represented as either a complex or a real matrix, of magnified size. On that account, three new ZNN models are developed and the TV-QLME is solved directly in the quaternion domain as well as indirectly in the complex and real domains for matrices of arbitrary dimension. The models perform admirably in four simulation experiments and two practical applications concerning color restoration of images.

    Citation: Vladislav N. Kovalnogov, Ruslan V. Fedorov, Denis A. Demidov, Malyoshina A. Malyoshina, Theodore E. Simos, Vasilios N. Katsikis, Spyridon D. Mourtas, Romanos D. Sahas. Zeroing neural networks for computing quaternion linear matrix equation with application to color restoration of images[J]. AIMS Mathematics, 2023, 8(6): 14321-14339. doi: 10.3934/math.2023733




    Quaternions, introduced by Hamilton in 1843 [1,2], are important in many fields including but not limited to computer graphics [3,4,5,6], modeling human motions [7], kinematic modeling of manipulators [8,9], robotic controllers [10,11], spacecraft maneuvering [12,13,14] and physics and mathematics, namely quantum fields [15,16], quantum mechanics [17,18,19], electromagnetism [20,21], mathematical physics [22,23] and linear algebra [24,25]. The quaternions form a skew field, that is, a division algebra over the field of real numbers [26]. As such, the set of quaternions $\mathbb{H}$ is not commutative under multiplication and thus, in practical applications, complexity quickly becomes an issue [27]. On the other hand, a scalar quaternion may be readily represented by either a $2\times 2$ complex or a $4\times 4$ real matrix [26,28,29]. This property also extends to matrices of quaternions, with the dimensions of the representation matrices scaling accordingly. Therefore, in solving problems involving quaternions, it has become common practice to solve an equivalent problem in the real or complex domain and then convert the solution back to quaternion form. This technique, already powerful in a static setting, proves especially useful in problems of time-varying nature.
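As a concrete illustration of the two facts above (non-commutativity and the $2\times 2$ complex representation), the following minimal Python sketch multiplies scalar quaternions via the Hamilton product and builds one consistent complex representation; the helper names `qmul` and `c_rep`, and the particular sign convention, are our own assumptions, not the paper's implementation.

```python
import numpy as np

def qmul(p, q):
    # Hamilton product of scalar quaternions given as tuples (q1, q2, q3, q4)
    p1, p2, p3, p4 = p
    q1, q2, q3, q4 = q
    return (p1*q1 - p2*q2 - p3*q3 - p4*q4,
            p1*q2 + p2*q1 + p3*q4 - p4*q3,
            p1*q3 + p3*q1 + p4*q2 - p2*q4,
            p1*q4 + p4*q1 + p2*q3 - p3*q2)

def c_rep(q):
    # one possible 2x2 complex representation of q1 + q2*i + q3*j + q4*k;
    # it is linear and multiplicative: c_rep(p q) = c_rep(p) c_rep(q)
    q1, q2, q3, q4 = q
    return np.array([[q1 - q4*1j, -q3 - q2*1j],
                     [q3 - q2*1j,  q1 + q4*1j]])

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
```

For instance, `qmul(i, j)` returns `k` while `qmul(j, i)` returns `-k`, which is exactly the non-commutativity that forces the dedicated treatment discussed above.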

    Time-varying (TV) problems involving quaternions and matrices of quaternions have recently started to attract research attention. Inversion of TV matrices of quaternions has been studied in [27], whereas the dynamic Sylvester quaternion matrix equation has been solved in [30] and the TV inequality-constrained quaternion matrix least-squares problem has been dealt with in [31]. Furthermore, a practical application involving a problem in robotics has been studied in [32]. Last but not least, the TV quaternion valued linear matrix equation (TV-QLME) for square matrices has been solved in [33] through transformation of the quaternion valued equation to an equivalent real representation. All these articles share a common theme, in that an online solution is derived through use of zeroing neural networks (ZNNs).

    ZNNs, originated by Zhang et al. in a series of papers, comprise a class of recurrent neural networks with strong parallel processing properties that is dedicated to solving TV problems. Originally, ZNNs were designed to deal with the problem of matrix inversion [34]. Nowadays, their application has extended to matrix and/or tensor inversion [35], as well as generalized inversion [36], solving systems of linear equations and systems of matrix equations [37,38,39], solving linear and quadratic optimization problems [40,41,42] and approximating miscellaneous matrix functions. Robot control [43,44], financial portfolio optimization [45,46] and text classification [47] are other common practical applications of ZNNs.

    In this paper, drawing motivation from [33], we shall also study the TV-QLME in view of generalizing its solution to rectangular matrices and, more importantly, examining whether direct solution of the problem in the quaternion domain or indirect solution through representation in the complex domain is more effective than the already proposed solution in the real domain. To this end, we shall develop three ZNNs in total, one for each domain, which we shall thoroughly test on four simulation examples. Their effectiveness shall be further examined by application in two tasks of color restoration of contaminated images. This piece of research further contributes to the literature by conducting theoretical analysis as well as analyzing the computational complexity of all discussed models.

    The rest of the paper is organized as follows. Preliminaries, notation and the TV-QLME problem are presented in Section 2. The three ZNN models are developed in Section 3. Section 4 includes theoretical analysis whereas computational complexity is discussed in Section 5. Simulation examples and applications to color restoration of images are then presented in Section 6. Finally, concluding comments and remarks are given in Section 7.

    This section lays out certain preliminaries regarding matrices of quaternions, the TV-QLME problem, ZNNs and finally, the notation to be used throughout the rest of the paper as well as the main results to be discussed.

    Let $\mathbb{H}:=\{q=q_1+q_2\imath+q_3\jmath+q_4k\ :\ q_1,q_2,q_3,q_4\in\mathbb{R}\}$ denote the set of quaternions and let $\mathbb{H}^{m\times n}:=\{\tilde{Q}=Q_1+Q_2\imath+Q_3\jmath+Q_4k\ :\ Q_1,Q_2,Q_3,Q_4\in\mathbb{R}^{m\times n}\}$ denote the set of all $m\times n$ quaternion matrices. Note that $Q_1,Q_2,Q_3,Q_4$, also called the coefficient matrices of the quaternion matrix $\tilde{Q}$, are real matrices of the same dimension as $\tilde{Q}$. All things considered, let us now turn our attention to the following general form of a TV-QLME:

    $\tilde{A}(t)\tilde{X}(t)=\tilde{B}(t),$ (2.1)

    where $\tilde{A}(t)\in\mathbb{H}^{m\times n}$, $\tilde{X}(t)\in\mathbb{H}^{n\times r}$ and $\tilde{B}(t)\in\mathbb{H}^{m\times r}$, with $m$, $n$ and $r$ arbitrary. The matrix $\tilde{X}(t)$ is unknown, whereas $\tilde{A}(t)=A_1(t)+A_2(t)\imath+A_3(t)\jmath+A_4(t)k$ and $\tilde{B}(t)=B_1(t)+B_2(t)\imath+B_3(t)\jmath+B_4(t)k$ are smoothly TV matrices whose coefficient matrices $A_i(t)\in\mathbb{R}^{m\times n}$ and $B_i(t)\in\mathbb{R}^{m\times r}$, $i=1,2,3,4$, along with the coefficient matrices of their derivatives, are either given or can be accurately estimated.

    The product of the two TV quaternion matrices $\tilde{A}(t)$ and $\tilde{X}(t)$ is the following:

    $\tilde{A}(t)\tilde{X}(t)=C_1(t)+C_2(t)\imath+C_3(t)\jmath+C_4(t)k,$ (2.2)

    where

    $\begin{aligned} C_1(t)&=A_1(t)X_1(t)-A_2(t)X_2(t)-A_3(t)X_3(t)-A_4(t)X_4(t),\\ C_2(t)&=A_1(t)X_2(t)+A_2(t)X_1(t)+A_3(t)X_4(t)-A_4(t)X_3(t),\\ C_3(t)&=A_1(t)X_3(t)+A_3(t)X_1(t)+A_4(t)X_2(t)-A_2(t)X_4(t),\\ C_4(t)&=A_1(t)X_4(t)+A_4(t)X_1(t)+A_2(t)X_3(t)-A_3(t)X_2(t), \end{aligned}$ (2.3)

    with $C_i(t)\in\mathbb{R}^{m\times r}$ for $i=1,\dots,4$.
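The coefficient formulas of (2.3) translate directly into code. The following minimal Python sketch (the helper name `qmat_mul` is ours, not the paper's) forms the product of two quaternion matrices stored as 4-tuples of real coefficient matrices:

```python
import numpy as np

def qmat_mul(A, X):
    # product of quaternion matrices stored as 4-tuples (A1, A2, A3, A4) of real
    # coefficient matrices, following the coefficient formulas of (2.3)
    A1, A2, A3, A4 = A
    X1, X2, X3, X4 = X
    return (A1 @ X1 - A2 @ X2 - A3 @ X3 - A4 @ X4,
            A1 @ X2 + A2 @ X1 + A3 @ X4 - A4 @ X3,
            A1 @ X3 + A3 @ X1 + A4 @ X2 - A2 @ X4,
            A1 @ X4 + A4 @ X1 + A2 @ X3 - A3 @ X2)

# 1x1 quaternion "matrices" reproduce the Hamilton rules, e.g. i j = k and j i = -k:
one = lambda v: np.array([[float(v)]])
qi = (one(0), one(1), one(0), one(0))
qj = (one(0), one(0), one(1), one(0))
ij = qmat_mul(qi, qj)
ji = qmat_mul(qj, qi)
```

Keeping the left/right order of the factors inside each term is essential here, precisely because quaternion multiplication is not commutative.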

    One complex representation of the TV quaternion matrix $\tilde{A}(t)\in\mathbb{H}^{m\times n}$ is the following:

    $\hat{A}(t)=\begin{bmatrix} A_1(t)-A_4(t)\imath & -A_3(t)-A_2(t)\imath\\ A_3(t)-A_2(t)\imath & A_1(t)+A_4(t)\imath \end{bmatrix}\in\mathbb{C}^{2m\times 2n}.$ (2.4)

    Since multiplying two quaternion matrices and then taking the complex representation of the product yields the same result as multiplying the respective complex representations of the matrices in the first place [30, Theorem 1], solving (2.1) is equivalent to solving the complex matrix equation:

    $\hat{A}(t)\hat{X}(t)=\hat{B}(t),$ (2.5)

    where $\hat{X}(t)\in\mathbb{C}^{2n\times 2r}$ and $\hat{B}(t)\in\mathbb{C}^{2m\times 2r}$. Note that the construction of $\hat{X}(t)$ and $\hat{B}(t)$ follows the same pattern as that of $\hat{A}(t)$ in (2.4).
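The multiplicativity of the complex representation can be checked numerically. The Python sketch below assumes the product formulas of (2.3) and one consistent sign convention for (2.4) (our assumption, since sign conventions vary between sources):

```python
import numpy as np

def qmat_mul(A, X):
    # quaternion matrix product via the coefficient formulas of (2.3)
    A1, A2, A3, A4 = A
    X1, X2, X3, X4 = X
    return (A1 @ X1 - A2 @ X2 - A3 @ X3 - A4 @ X4,
            A1 @ X2 + A2 @ X1 + A3 @ X4 - A4 @ X3,
            A1 @ X3 + A3 @ X1 + A4 @ X2 - A2 @ X4,
            A1 @ X4 + A4 @ X1 + A2 @ X3 - A3 @ X2)

def c_rep(Q):
    # 2m x 2n complex representation of Q = Q1 + Q2*i + Q3*j + Q4*k, as in (2.4)
    Q1, Q2, Q3, Q4 = Q
    return np.block([[Q1 - Q4*1j, -Q3 - Q2*1j],
                     [Q3 - Q2*1j,  Q1 + Q4*1j]])

rng = np.random.default_rng(0)
A = tuple(rng.standard_normal((2, 3)) for _ in range(4))  # random 2x3 quaternion matrix
X = tuple(rng.standard_normal((3, 2)) for _ in range(4))  # random 3x2 quaternion matrix
```

Here `c_rep(qmat_mul(A, X))` coincides with `c_rep(A) @ c_rep(X)`, which is the property that justifies passing from (2.1) to (2.5).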

    One real representation of the TV quaternion matrix $\tilde{A}(t)\in\mathbb{H}^{m\times n}$ is the following:

    $A(t)=\begin{bmatrix} A_1(t) & A_4(t) & A_3(t) & A_2(t)\\ -A_4(t) & A_1(t) & A_2(t) & -A_3(t)\\ -A_3(t) & -A_2(t) & A_1(t) & A_4(t)\\ -A_2(t) & A_3(t) & -A_4(t) & A_1(t) \end{bmatrix}\in\mathbb{R}^{4m\times 4n}.$ (2.6)

    Since multiplying two quaternion matrices and then taking the real representation of the product yields the same result as multiplying the respective real representations of the matrices in the first place [33, Corollary 1], solving (2.1) is equivalent to solving the real matrix equation:

    $A(t)X(t)=B(t),$ (2.7)

    where $X(t)\in\mathbb{R}^{4n\times 4r}$ and $B(t)\in\mathbb{R}^{4m\times 4r}$. Note that the construction of $X(t)$ and $B(t)$ follows the same pattern as that of $A(t)$ in (2.6).
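A corresponding numerical check for the real representation; again a Python sketch under our own assumptions (the product formulas of (2.3) and one valid sign pattern for (2.6), since sign conventions vary between sources):

```python
import numpy as np

def qmat_mul(A, X):
    # quaternion matrix product via the coefficient formulas of (2.3)
    A1, A2, A3, A4 = A
    X1, X2, X3, X4 = X
    return (A1 @ X1 - A2 @ X2 - A3 @ X3 - A4 @ X4,
            A1 @ X2 + A2 @ X1 + A3 @ X4 - A4 @ X3,
            A1 @ X3 + A3 @ X1 + A4 @ X2 - A2 @ X4,
            A1 @ X4 + A4 @ X1 + A2 @ X3 - A3 @ X2)

def r_rep(Q):
    # 4m x 4n real representation of Q = Q1 + Q2*i + Q3*j + Q4*k, as in (2.6)
    Q1, Q2, Q3, Q4 = Q
    return np.block([[ Q1,  Q4,  Q3,  Q2],
                     [-Q4,  Q1,  Q2, -Q3],
                     [-Q3, -Q2,  Q1,  Q4],
                     [-Q2,  Q3, -Q4,  Q1]])

rng = np.random.default_rng(1)
A = tuple(rng.standard_normal((2, 3)) for _ in range(4))
X = tuple(rng.standard_normal((3, 2)) for _ in range(4))
```

As with the complex case, `r_rep(qmat_mul(A, X))` coincides with `r_rep(A) @ r_rep(X)`, at the price of matrices four times the size in each dimension.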

    For the rest of this paper, $I_r$ will refer to the $r\times r$ identity matrix, whereas $0_r$ and $0_{m\times n}$ will refer to the $r\times r$ and $m\times n$ zero matrices, respectively. Furthermore, $\mathrm{vec}(\cdot)$ will denote the vectorization process and $\otimes$ will denote the Kronecker product. The operator $(\cdot)^T$ will refer to the transpose, the overdot $\dot{(\cdot)}$ will denote the time derivative of an expression and $\|\cdot\|_F$ will denote the matrix Frobenius norm. Lastly, Table 1 lists all the abbreviations used in this paper along with their full names.

    Table 1.  Paper's abbreviations list.
    Abbreviation Full name
    TV time-varying
    TV-QLME TV quaternion valued linear matrix equation
    ZNN zeroing neural network
    EME error matrix equation
    ZNNQ ZNN quaternion
    ZNNQC ZNNQ by representation in the complex domain
    ZNNQR ZNNQ by representation in the real domain
    SE simulation examples


    The development of a ZNN consists of two universal steps. First, one defines an error function E(t), also called a Zhang function or error matrix equation (EME). Note that the expression for E(t) has to involve the unknown TV matrix X(t). It is also worth mentioning that each unique Zhang function defines a class of ZNNs. With E(t) determined, a ZNN must satisfy

    $\dot{E}(t)=-\lambda\mathcal{F}(E(t)),$ (2.8)

    with $\dot{E}(t)$ referring to the time derivative of $E(t)$, $\lambda>0$ being a positive real gain through which one may adjust the convergence rate of the model and $\mathcal{F}(\cdot)$ denoting an increasing, odd activation function that acts element-wise on $E(t)$. Substituting into (2.8) the expressions for $E(t)$ and $\dot{E}(t)$, one then derives the Zhang dynamics model: an explicit or implicit expression for $\dot{X}(t)$, the time derivative of the unknown TV matrix $X(t)$. The Zhang function $E(t)$, coupled with the choice of activation function $\mathcal{F}$ and the Zhang dynamics model, are the three defining components of a ZNN.

    Learning continuously from non-stationary data, while concurrently transferring and preserving previous knowledge, is known as continual learning. The ZNN design revolves around forcing every entry of the error function $E(t)$ to 0 as time evolves. This is achieved through the continuous-time learning rule (2.8), which results from the definition of the error function. As a result, the error function may be perceived as a means of monitoring the ZNN model's learning. In this paper we shall employ the linear ZNN dynamical system:

    $\dot{E}(t)=-\lambda E(t).$ (2.9)

    Note that a higher value for the gain parameter λ will result in the model converging even faster.
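To make the design recipe concrete, the following toy Python sketch applies the linear rule (2.9) to the scalar TV equation $a(t)x(t)=b(t)$; the particular choices of $a(t)$, $b(t)$, gain and forward-Euler integration are our own illustrative assumptions, not one of the paper's experiments:

```python
import math

lam = 10.0            # ZNN gain lambda
dt, T = 1e-3, 3.0     # forward-Euler step and horizon
a,  da = lambda t: 2.0 + math.sin(t), lambda t: math.cos(t)   # a(t) stays away from 0
b,  db = lambda t: math.cos(t),       lambda t: -math.sin(t)

x, t = 0.0, 0.0       # arbitrary initial value x(0) = 0
for _ in range(int(T / dt)):
    e = a(t) * x - b(t)                        # error function E(t) = a(t)x(t) - b(t)
    # linear rule (2.9): Edot = -lam*E  =>  a*xdot = -lam*e - adot*x + bdot
    xdot = (-lam * e - da(t) * x + db(t)) / a(t)
    x += dt * xdot
    t += dt

residual = abs(a(t) * x - b(t))                # essentially zero after the transient
```

Because the rule feeds the known derivatives $\dot{a}(t)$ and $\dot{b}(t)$ into the dynamics, the error decays like $e^{-\lambda t}$ rather than lagging behind the moving solution.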

    The following are the key results of the paper.

    (1) Three new ZNN models for solving TV-QLME problems are presented.

    (2) The proposed models are applicable to matrices of arbitrary dimension.

    (3) Supportive theoretical analysis is conducted.

    (4) Simulation experiments with illustrations as well as applications to color restoration of images are performed to support the theoretical research.

    In this section we shall develop three ZNN models, each one operating on a different domain. We assume that ˜A(t)Hm×n and ˜B(t)Hm×r are differentiable TV quaternion matrices, and ˜X(t)Hn×r is the unknown quaternion matrix to be found.

    According to (2.2), the following equation is satisfied in the case of the TV-QLME:

    $C_1(t)-B_1(t)+(C_2(t)-B_2(t))\imath+(C_3(t)-B_3(t))\jmath+(C_4(t)-B_4(t))k=0_{m\times r}.$ (3.1)

    Further, according to (2.3) and (3.1), the following are satisfied in the case of the TV-QLME:

    $\begin{cases} A_1(t)X_1(t)-A_2(t)X_2(t)-A_3(t)X_3(t)-A_4(t)X_4(t)=B_1(t),\\ A_2(t)X_1(t)+A_1(t)X_2(t)-A_4(t)X_3(t)+A_3(t)X_4(t)=B_2(t),\\ A_3(t)X_1(t)+A_4(t)X_2(t)+A_1(t)X_3(t)-A_2(t)X_4(t)=B_3(t),\\ A_4(t)X_1(t)-A_3(t)X_2(t)+A_2(t)X_3(t)+A_1(t)X_4(t)=B_4(t). \end{cases}$ (3.2)

    Then, setting

    $Z(t)=\begin{bmatrix} A_1(t) & -A_2(t) & -A_3(t) & -A_4(t)\\ A_2(t) & A_1(t) & -A_4(t) & A_3(t)\\ A_3(t) & A_4(t) & A_1(t) & -A_2(t)\\ A_4(t) & -A_3(t) & A_2(t) & A_1(t) \end{bmatrix}\in\mathbb{R}^{4m\times 4n},\quad Y(t)=[X_1^T(t),X_2^T(t),X_3^T(t),X_4^T(t)]^T\in\mathbb{R}^{4n\times r},\quad W(t)=[B_1^T(t),B_2^T(t),B_3^T(t),B_4^T(t)]^T\in\mathbb{R}^{4m\times r},$ (3.3)

    we have the following EME:

    $E_Q(t)=Z(t)Y(t)-W(t),$ (3.4)

    whose first derivative is:

    $\dot{E}_Q(t)=\dot{Z}(t)Y(t)+Z(t)\dot{Y}(t)-\dot{W}(t).$ (3.5)

    When $E(t)$ and $\dot{E}(t)$ in (2.9) are replaced with $E_Q(t)$ defined in (3.4) and $\dot{E}_Q(t)$ defined in (3.5), respectively, solving the equation in terms of $\dot{Y}(t)$ yields the following result:

    $Z(t)\dot{Y}(t)=-\lambda E_Q(t)-\dot{Z}(t)Y(t)+\dot{W}(t).$ (3.6)

    Then, with the aid of the Kronecker product and the vectorization process, the dynamic model of (3.6) may be simplified:

    $(I_r\otimes Z(t))\,\mathrm{vec}(\dot{Y}(t))=\mathrm{vec}(-\lambda E_Q(t)-\dot{Z}(t)Y(t)+\dot{W}(t)).$ (3.7)

    Furthermore, after setting:

    $D_1(t)=I_r\otimes Z(t)\in\mathbb{R}^{4mr\times 4nr},\quad D_2(t)=\mathrm{vec}(-\lambda E_Q(t)-\dot{Z}(t)Y(t)+\dot{W}(t))\in\mathbb{R}^{4mr},\quad y(t)=\mathrm{vec}(Y(t))\in\mathbb{R}^{4nr},\quad \dot{y}(t)=\mathrm{vec}(\dot{Y}(t))\in\mathbb{R}^{4nr},$ (3.8)

    we derive the following ZNN model:

    $D_1^T(t)D_1(t)\dot{y}(t)=D_1^T(t)D_2(t),$ (3.9)

    where $D_1^T(t)D_1(t)\in\mathbb{R}^{4nr\times 4nr}$ is a nonsingular mass matrix and $D_1^T(t)D_2(t)\in\mathbb{R}^{4nr}$. The dynamic model of (3.9), termed ZNNQ, is the proposed ZNN model for solving the TV-QLME of (2.1).
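A minimal Python sketch of the ZNNQ construction (3.3)-(3.9); the constant coefficient matrices (chosen so that $\dot{Z}(t)=\dot{W}(t)=0$) and the forward-Euler stepping in place of ode15s are our own simplifying assumptions:

```python
import numpy as np

def build_Z(A):
    # Z(t) of (3.3): acts on the stacked coefficients Y = [X1; X2; X3; X4]
    A1, A2, A3, A4 = A
    return np.block([[A1, -A2, -A3, -A4],
                     [A2,  A1, -A4,  A3],
                     [A3,  A4,  A1, -A2],
                     [A4, -A3,  A2,  A1]])

# hypothetical constant 2x2 coefficient matrices (hence Zdot = 0 and Wdot = 0)
A = (3.0 * np.eye(2),
     np.array([[1.0, 0.0], [0.0, 0.0]]),
     np.array([[0.0, 1.0], [0.0, 0.0]]),
     np.array([[0.0, 0.0], [1.0, 0.0]]))
Z = build_Z(A)                               # 8 x 8, diagonally dominant here
W = np.arange(16.0).reshape(8, 2)            # stacked coefficients of B, 8 x 2
r = W.shape[1]

lam, dt, T = 10.0, 1e-3, 2.0
D1 = np.kron(np.eye(r), Z)                   # (3.8): D1 = I_r (x) Z
Y = np.zeros((8, 2))                         # Y(0) = 0, i.e. y(0) = 0
for _ in range(int(T / dt)):
    EQ = Z @ Y - W                           # EME (3.4)
    D2 = (-lam * EQ).flatten(order="F")      # (3.8), with Zdot = Wdot = 0 here
    ydot = np.linalg.solve(D1.T @ D1, D1.T @ D2)    # ZNNQ model (3.9)
    Y += dt * ydot.reshape(Y.shape, order="F")

residual = np.linalg.norm(Z @ Y - W)         # Frobenius norm of the EME at t = T
```

Column-major (`order="F"`) flattening matches the vectorization convention under which $(I_r\otimes Z)\,\mathrm{vec}(Y)=\mathrm{vec}(ZY)$.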

    According to (2.5), the following equation is satisfied in the case of the TV-QLME through complex representation of the quaternion matrices:

    $\hat{A}(t)\hat{X}(t)-\hat{B}(t)=0_{2m\times 2r},$ (3.10)

    where $\hat{A}(t)\in\mathbb{C}^{2m\times 2n}$, $\hat{X}(t)\in\mathbb{C}^{2n\times 2r}$ and $\hat{B}(t)\in\mathbb{C}^{2m\times 2r}$. As a result, we can set the following EME:

    $E_C(t)=\hat{A}(t)\hat{X}(t)-\hat{B}(t),$ (3.11)

    whose first derivative is:

    $\dot{E}_C(t)=\dot{\hat{A}}(t)\hat{X}(t)+\hat{A}(t)\dot{\hat{X}}(t)-\dot{\hat{B}}(t).$ (3.12)

    When $E(t)$ and $\dot{E}(t)$ in (2.9) are replaced with $E_C(t)$ defined in (3.11) and $\dot{E}_C(t)$ defined in (3.12), respectively, solving the equation in terms of $\dot{\hat{X}}(t)$ yields the following result:

    $\hat{A}(t)\dot{\hat{X}}(t)=-\lambda E_C(t)-\dot{\hat{A}}(t)\hat{X}(t)+\dot{\hat{B}}(t).$ (3.13)

    Then, with the aid of the Kronecker product and the vectorization process, the dynamic model of (3.13) may be simplified:

    $(I_{2r}\otimes\hat{A}(t))\,\mathrm{vec}(\dot{\hat{X}}(t))=\mathrm{vec}(-\lambda E_C(t)-\dot{\hat{A}}(t)\hat{X}(t)+\dot{\hat{B}}(t)).$ (3.14)

    Furthermore, after setting:

    $G_1(t)=I_{2r}\otimes\hat{A}(t)\in\mathbb{C}^{4mr\times 4nr},\quad G_2(t)=\mathrm{vec}(-\lambda E_C(t)-\dot{\hat{A}}(t)\hat{X}(t)+\dot{\hat{B}}(t))\in\mathbb{C}^{4mr},\quad h(t)=\mathrm{vec}(\hat{X}(t))\in\mathbb{C}^{4nr},\quad \dot{h}(t)=\mathrm{vec}(\dot{\hat{X}}(t))\in\mathbb{C}^{4nr},$ (3.15)

    we derive the following ZNN model:

    $G_1^T(t)G_1(t)\dot{h}(t)=G_1^T(t)G_2(t),$ (3.16)

    where $G_1^T(t)G_2(t)\in\mathbb{C}^{4nr}$ and $G_1^T(t)G_1(t)\in\mathbb{C}^{4nr\times 4nr}$ is a nonsingular mass matrix. The dynamic model of (3.16), termed ZNNQC, is the proposed ZNN model for solving the TV-QLME of (2.1) through complex representation of the quaternion matrices.

    According to (2.7), the following equation is satisfied in the case of the TV-QLME through real representation of the quaternion matrices:

    $A(t)X(t)-B(t)=0_{4m\times 4r},$ (3.17)

    where $A(t)\in\mathbb{R}^{4m\times 4n}$, $X(t)\in\mathbb{R}^{4n\times 4r}$ and $B(t)\in\mathbb{R}^{4m\times 4r}$. As a result, we can set the following EME:

    $E_R(t)=A(t)X(t)-B(t),$ (3.18)

    whose first derivative is:

    $\dot{E}_R(t)=\dot{A}(t)X(t)+A(t)\dot{X}(t)-\dot{B}(t).$ (3.19)

    When $E(t)$ and $\dot{E}(t)$ in (2.9) are replaced with $E_R(t)$ defined in (3.18) and $\dot{E}_R(t)$ defined in (3.19), respectively, solving the equation in terms of $\dot{X}(t)$ yields the following result:

    $A(t)\dot{X}(t)=-\lambda E_R(t)-\dot{A}(t)X(t)+\dot{B}(t).$ (3.20)

    Then, with the aid of the Kronecker product and the vectorization process, the dynamic model of (3.20) may be simplified:

    $(I_{4r}\otimes A(t))\,\mathrm{vec}(\dot{X}(t))=\mathrm{vec}(-\lambda E_R(t)-\dot{A}(t)X(t)+\dot{B}(t)).$ (3.21)

    Furthermore, after setting:

    $R_1(t)=I_{4r}\otimes A(t)\in\mathbb{R}^{16mr\times 16nr},\quad R_2(t)=\mathrm{vec}(-\lambda E_R(t)-\dot{A}(t)X(t)+\dot{B}(t))\in\mathbb{R}^{16mr},\quad x(t)=\mathrm{vec}(X(t))\in\mathbb{R}^{16nr},\quad \dot{x}(t)=\mathrm{vec}(\dot{X}(t))\in\mathbb{R}^{16nr},$ (3.22)

    we derive the following ZNN model:

    $R_1^T(t)R_1(t)\dot{x}(t)=R_1^T(t)R_2(t),$ (3.23)

    where $R_1^T(t)R_2(t)\in\mathbb{R}^{16nr}$ and $R_1^T(t)R_1(t)\in\mathbb{R}^{16nr\times 16nr}$ is a nonsingular mass matrix. The dynamic model of (3.23), termed ZNNQR, is the proposed ZNN model for solving the TV-QLME of (2.1) through real representation of the quaternion matrices.

    The convergence and stability analysis of the ZNNQ (3.9), ZNNQC (3.16) and ZNNQR (3.23) models is presented in this section.

    Theorem 4.1. Assuming that $Z(t)\in\mathbb{R}^{4m\times 4n}$ and $W(t)\in\mathbb{R}^{4m\times r}$ are differentiable, the dynamical system (3.6) converges to the theoretical solution (TSOL) $Y^*(t)$ of the TV-QLME (2.1). The solution is stable, in the sense of Lyapunov.

    Proof. The substitution $\tilde{Y}(t):=Y^*(t)-Y(t)$ implies $Y(t)=Y^*(t)-\tilde{Y}(t)$, where $Y^*(t)$ is a TSOL. The time derivative of $Y(t)$ is $\dot{Y}(t)=\dot{Y}^*(t)-\dot{\tilde{Y}}(t)$. Notice that

    $Z(t)Y^*(t)-W(t)=0_{4m\times r},$ (4.1)

    and its first derivative is

    $\dot{Z}(t)Y^*(t)+Z(t)\dot{Y}^*(t)-\dot{W}(t)=0_{4m\times r}.$ (4.2)

    As a result, following the substitution of $Y(t)=Y^*(t)-\tilde{Y}(t)$ into (3.4), one can verify

    $\tilde{E}_Q(t)=E_Q(\tilde{Y}(t),t)=Z(t)Y^*(t)-Z(t)\tilde{Y}(t)-W(t).$ (4.3)

    Further, the implicit dynamics (2.9) imply

    $\dot{\tilde{E}}_Q(t)=\dot{E}_Q(\tilde{Y}(t),t)=Z(t)\dot{Y}^*(t)-Z(t)\dot{\tilde{Y}}(t)+\dot{Z}(t)Y^*(t)-\dot{Z}(t)\tilde{Y}(t)-\dot{W}(t)=-\lambda E_Q(\tilde{Y}(t),t).$ (4.4)

    We then determine the candidate Lyapunov function so as to confirm convergence:

    $L(t)=\frac{1}{2}\|\tilde{E}_Q(t)\|_F^2=\frac{1}{2}\mathrm{Tr}\big(\tilde{E}_Q(t)(\tilde{E}_Q(t))^T\big).$ (4.5)

    Then, the next identities can be verified:

    $\dot{L}(t)=\frac{2\,\mathrm{Tr}\big((\tilde{E}_Q(t))^T\dot{\tilde{E}}_Q(t)\big)}{2}=\mathrm{Tr}\big((\tilde{E}_Q(t))^T\dot{\tilde{E}}_Q(t)\big)=-\lambda\,\mathrm{Tr}\big((\tilde{E}_Q(t))^T\tilde{E}_Q(t)\big).$ (4.6)

    Consequently, it holds that

    $\frac{dL(\tilde{Y}(t),t)}{dt}\begin{cases}<0, & E_Q(\tilde{Y}(t),t)\neq 0\\ =0, & E_Q(\tilde{Y}(t),t)=0\end{cases}\;\Leftrightarrow\;\dot{L}(t)\begin{cases}<0, & Z(t)Y^*(t)-Z(t)\tilde{Y}(t)-W(t)\neq 0\\ =0, & Z(t)Y^*(t)-Z(t)\tilde{Y}(t)-W(t)=0\end{cases}\;\Leftrightarrow\;\dot{L}(t)\begin{cases}<0, & \tilde{Y}(t)\neq 0\\ =0, & \tilde{Y}(t)=0.\end{cases}$ (4.7)

    With $\tilde{Y}(t)=0$ being the equilibrium point of the system (4.4) and $E_Q(0,t)=0$, we have that:

    $\frac{dL(\tilde{Y}(t),t)}{dt}\le 0,\quad \forall\,\tilde{Y}(t)\neq 0.$ (4.8)

    By the Lyapunov stability theory, we infer that the equilibrium state $\tilde{Y}(t)=Y^*(t)-Y(t)=0$ is stable. Thus, $Y(t)\to Y^*(t)$ as $t\to\infty$.

    Theorem 4.2. Let $Z(t)\in\mathbb{R}^{4m\times 4n}$ and $W(t)\in\mathbb{R}^{4m\times r}$ be differentiable. For any initial value $y(0)$ that one may consider, the ZNNQ model (3.9) converges exponentially to the TSOL $y^*(t)$ at each time $t\in[0,t_f)\subseteq[0,+\infty)$.

    Proof. The EME (3.4) is declared so as to determine the TSOL of the TV-QLME. The model (3.6) is developed utilizing the linear ZNN design (2.9) for zeroing (3.4). According to Theorem 4.1, $Y(t)\to Y^*(t)$ as $t\to\infty$ for any choice of initial value. As a result, the ZNNQ model (3.9) also converges to the TSOL $y^*(t)$ for any choice of initial value $y(0)$ as $t\to\infty$, since it is simply an alternative formulation of (3.6). Therefore, the proof is finished.

    Theorem 4.3. Assuming that $\hat{A}(t)\in\mathbb{C}^{2m\times 2n}$ and $\hat{B}(t)\in\mathbb{C}^{2m\times 2r}$ are differentiable, the dynamical system (3.13) converges to the TSOL $\hat{X}^*(t)$ of the TV-QLME (2.1). The solution is stable, in the sense of Lyapunov.

    Proof. The proof is omitted, being that it resembles the proof of Theorem 4.1.

    Theorem 4.4. Let $\hat{A}(t)\in\mathbb{C}^{2m\times 2n}$ and $\hat{B}(t)\in\mathbb{C}^{2m\times 2r}$ be differentiable. For any initial value $h(0)$ that one may consider, the ZNNQC model (3.16) converges exponentially to the TSOL $h^*(t)$ at each time $t\in[0,t_f)\subseteq[0,+\infty)$.

    Proof. The proof is omitted, being that it is identical to the proof of Theorem 4.2 once we replace Theorem 4.1 with Theorem 4.3.

    Theorem 4.5. Assuming that $A(t)\in\mathbb{R}^{4m\times 4n}$ and $B(t)\in\mathbb{R}^{4m\times 4r}$ are differentiable, the dynamical system (3.20) converges to the TSOL $X^*(t)$ of the TV-QLME (2.1). The solution is stable, in the sense of Lyapunov.

    Proof. The proof is omitted, being that it resembles the proof of Theorem 4.1.

    Theorem 4.6. Let $A(t)\in\mathbb{R}^{4m\times 4n}$ and $B(t)\in\mathbb{R}^{4m\times 4r}$ be differentiable. For any initial value $x(0)$ that one may consider, the ZNNQR model (3.23) converges exponentially to the TSOL $x^*(t)$ at each time $t\in[0,t_f)\subseteq[0,+\infty)$.

    Proof. The proof is omitted, being that it is identical to the proof of Theorem 4.2 once we replace Theorem 4.1 with Theorem 4.5.

    The complexity of producing and solving (3.9), (3.16) and (3.23) contributes to the overall computational complexity of the ZNNQ, ZNNQC and ZNNQR models, respectively. In particular, the computational complexity of producing (3.9) is $O((4nr)^2)$ operations, as each iteration of the equation involves $(4nr)^2$ multiplications and $4nr$ additions/subtractions. For the same reasons, the computational complexity of producing (3.23) is $O((16nr)^2)$ operations. However, the ZNNQC model deals with complex numbers. It is worth pointing out that multiplying two complex numbers works out to $(a+b\imath)(c+d\imath)=ac-bd+(ad+bc)\imath$, which requires a total of four multiplications and two addition/subtraction operations. As a result, the computational complexity of producing (3.16) is $O((8nr)^2)$, as each iteration of the equation involves $4(4nr)^2$ multiplication and $2(4nr)$ addition/subtraction operations.

    Additionally, the linear system of equations is, at each step, solved through use of the implicit MATLAB solver ode15s. The complexity of solving (3.9) is $O((4nr)^3)$ as it involves a $(4nr)\times(4nr)$ matrix. In the same manner, the complexity of solving (3.16) is $O((8nr)^3)$ and the complexity of solving (3.23) is $O((16nr)^3)$. Therefore, the overall computational complexity of the ZNNQ model is $O((4nr)^3)$, while that of the ZNNQC model is $O((8nr)^3)$ and that of the ZNNQR model is $O((16nr)^3)$. The overall computational complexity of the ZNN models is also presented in Table 2.
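A quick back-of-envelope check of the ratios implied by these cubic per-step solve costs (the choice of $n$ and $r$ below is hypothetical; the ratios are independent of it):

```python
# dominant per-step linear-solve costs from the cubic estimates above
def cost_znnq(n, r):  return (4 * n * r) ** 3
def cost_znnqc(n, r): return (8 * n * r) ** 3
def cost_znnqr(n, r): return (16 * n * r) ** 3

n, r = 3, 2
ratio_c = cost_znnqc(n, r) / cost_znnq(n, r)   # ZNNQC costs 8x ZNNQ, for any n, r
ratio_r = cost_znnqr(n, r) / cost_znnq(n, r)   # ZNNQR costs 64x ZNNQ, for any n, r
```

Doubling the system dimension multiplies a cubic solve cost by $2^3=8$, which is why the real-domain route is 64 times more expensive per step than the direct quaternion route.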

    Table 2.  Computational complexity of the ZNN models.
    Model
    ZNNQ ZNNQC ZNNQR
    Comput. Complexity $O((4nr)^3)$ $O((8nr)^3)$ $O((16nr)^3)$


    In this section we shall present four simulation examples (SE) and two applications to color restoration of images. A few important clarifications follow. The ZNN design parameter $\lambda$ is set to 10, while the initial values of the ZNNQ, ZNNQC and ZNNQR models have been set to $y(0)=0_{4nr}$, $h(0)=0_{4nr}$ and $x(0)=0_{16nr}$, respectively. For convenience, applying to all SEs, we have set $\alpha(t)=\sin(t)$ and $\beta(t)=\cos(t)$. Last but not least, a MATLAB ode solver, namely ode15s, is used in the computations throughout all SEs and applications, with the time interval set to $[0,10]$.
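As a rough Python analogue of this MATLAB setup (scipy's stiff BDF method standing in for ode15s; the scalar-size problem, its coefficients and tolerances below are our own illustrative assumptions, not one of the paper's SEs), one may integrate the ZNNQ dynamics (3.6) as follows:

```python
import numpy as np
from scipy.integrate import solve_ivp

lam = 10.0
alpha, beta = np.sin, np.cos

def lmul(a1, a2, a3, a4):
    # left-multiplication matrix of a scalar quaternion, i.e. Z(t) of (3.3) with m = n = 1
    return np.array([[a1, -a2, -a3, -a4],
                     [a2,  a1, -a4,  a3],
                     [a3,  a4,  a1, -a2],
                     [a4, -a3,  a2,  a1]])

Z  = lambda t: lmul(5.0, alpha(t), beta(t), 1.0)   # |quaternion|^2 = 27, so Z(t) is nonsingular
Zd = lambda t: lmul(0.0, beta(t), -alpha(t), 0.0)  # its exact time derivative
W  = lambda t: np.array([beta(t), 1.0, 0.0, alpha(t)])
Wd = lambda t: np.array([-alpha(t), 0.0, 0.0, beta(t)])

def rhs(t, y):
    # (3.6): Z ydot = -lam*(Z y - W) - Zdot y + Wdot
    return np.linalg.solve(Z(t), -lam * (Z(t) @ y - W(t)) - Zd(t) @ y + Wd(t))

sol = solve_ivp(rhs, (0.0, 10.0), np.zeros(4), method="BDF", rtol=1e-8, atol=1e-10)
tf, yf = sol.t[-1], sol.y[:, -1]
residual = np.linalg.norm(Z(tf) @ yf - W(tf))      # EME norm at t = 10
```

After the initial transient, the EME norm stays near the solver tolerance, mirroring the error curves reported for the SEs below.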

    Example 6.1. The coefficients of the input matrix ˜A(t) have been set to

    A1(t)=[2α(t)+0.571β(t)+2],A2(t)=[22α(t)+12β(t)+2β(t)+6],A3(t)=[3α(t)+26β(t)+23],A4(t)=[32α(t)+13β(t)+25],

    and the coefficients of the input matrix ˜B(t) have been set to

    B1(t)=[1α(t)+0.5β(t)2], B2(t)=[22β(t)α(t)], B3(t)=[2α(t)+263α(t)], B4(t)=[2α(t)+4α(t)+0.584].

    Figures 1 and 2 depict certain aspects of the simulation experiments on the TV-QLME defined by these matrices.

    Figure 1.  EMEs and error of (2.1) in SEs 6.1–6.4.
    Figure 2.  Real and imaginary parts of the trajectories of ˜X(t) in SEs 6.1–6.4.

    Example 6.2. This example considers the matrices $A_1(t),A_2(t),A_3(t)$ and $A_4(t)$ of SE 6.1. In view of inverting $\tilde{A}(t)$, the coefficients of $\tilde{B}(t)$ have been set to $B_1(t)=I_2$ and $B_i(t)=0_2$, $i=2,3,4$. The generated results are presented in Figures 1b and 1f.

    Example 6.3. The coefficients of the input matrix ˜A(t) have been set to

    A1(t)=[2α(t)+173β(t)+271β(t)+22β(t)+47],A2(t)=[22α(t)+12α(t)372β(t)+4β(t)+672α(t)+1],A3(t)=[3α(t)+262α(t)+17β(t)+23β(t)+57],A4(t)=[32α(t)+1773β(t)+2572α(t)+1],

    and the coefficients of the input matrix ˜B(t) have been set to

    B1(t)=[1α(t)+17α(t)+1β(t)2α(t)+1,7],B2(t)=[β(t)+22β(t),7β(t)+7α(t)7β(t)],B3(t)=[2α(t)+26783α(t)76],B4(t)=[2α(t)+456α(t)+18432]

    Generated results are presented in Figures 1c and 1g.

    Example 6.4. The coefficients of the input matrix ˜A(t) have been set to

    A1(t)=[2α(t)+1773β(t)+2721β(t)+272β(t)+47α(t)+3], A2(t)=[22α(t)+152α(t)3372β(t)+4β(t)+6772α(t)+14],A3(t)=[3α(t)+2672α(t)+176β(t)+237β(t)+57β(t)],A4(t)=[32α(t)+1657β(t)3β(t)+25772α(t)+17],

    and the coefficients of the input matrix ˜B(t) have been set to

    B1(t)=[β(t)α(t)+16α(t)+132α(t)+17],B2(t)=[β(t)+22β(t)8α(t)+7α(t)7β(t)],B3(t)=[3α(t)+26783α(t)76],B4(t)=[2α(t)+35β(t)α(t)+1843β(t)].

    The results are presented in Figures 1d, 1h.

    The performance of the ZNNQ (3.9), ZNNQC (3.16) and ZNNQR (3.23) models for solving the TV-QLME (2.1) is investigated throughout SEs 6.1–6.4. To each SE corresponds a different TV-QLME problem, defined by an appropriate pair of matrices $\tilde{A}(t),\tilde{B}(t)$.

    For each such problem, Figures 1a–1d depict the corresponding error paths of the models; that is, the value of the Frobenius norm of their EMEs in the time interval between $t=0$ and $t=10$. These curves convey information about the convergence of each model. Notice that, with the value of the parameter $\lambda$ set to 10, the error values in all SEs experience a steep decline which, by the time-mark of $t\approx 1.5$, brings them to the range $[10^{-5},10^{-3}]$. Generally, a larger value of $\lambda$ will force the ZNN models to converge even faster. It bears mentioning that, throughout all SEs, the corresponding error curves of the ZNNQ model are positioned lower than those of the other two models. That is, the ZNNQ model displays better convergence. The successful convergence of all models is further stressed in Figure 2, where the theoretical trajectories of the real as well as the three imaginary parts of the quaternion TV matrix $\tilde{X}(t)$ are compared with the trajectories obtained by the three models. Namely, Figures 2a–2d correspond to SE 6.1, Figures 2e–2h to SE 6.2, Figures 2i–2l to SE 6.3 and, last but not least, Figures 2m–2p to SE 6.4. For all SEs, the generated trajectories of the three models match the theoretical trajectories.

    At this point, ZNNQ, ZNNQC and ZNNQR have generated, for each SE, the previously unknown TV matrices $\tilde{X}(t)$, $\hat{X}(t)$ and $X(t)$, respectively, for $t\in[0,10]$. In order to validate the models, we shall evaluate $\|\tilde{A}(t)\tilde{X}(t)-\tilde{B}(t)\|_F$; that is, the error on the given TV-QLME, for each model and SE. To accomplish this, $\hat{X}(t)$ and $X(t)$, the complex and real representations of the TV solution matrix, respectively, are converted back to quaternion form. Figures 1e–1h demonstrate the respective paths for SEs 6.1–6.4. It is interesting to note that, as far as Figures 1e and 1f are concerned, the corresponding curves of the ZNNQ and ZNNQR models are almost identical. Overall, the ZNNQ seems to have a slight competitive edge, as its respective $\|\tilde{A}(t)\tilde{X}(t)-\tilde{B}(t)\|_F$ curves are positioned, for the most part, lower than those of the other two models in the remaining two SEs.

    Last but not least, the results above can be put into better perspective once we take the complexity of each model into account. Namely, in line with the analysis of Section 5, the ZNNQR has by far the highest complexity, as the dimensions of the corresponding real valued matrices $A(t),X(t)$ and $B(t)$ are twice those of the complex valued matrices $\hat{A}(t),\hat{X}(t)$ and $\hat{B}(t)$ and four times those of the quaternion valued matrices $\tilde{A}(t),\tilde{X}(t)$ and $\tilde{B}(t)$. On that account, as the dimensions of the matrices $\tilde{A}(t)$ and $\tilde{B}(t)$ grow, opting to solve the TV-QLME problem in the real domain comes with a serious memory cost, with RAM quickly becoming a limiting factor. Furthermore, the choice of programming language (and, in some cases, linear algebra libraries) also starts to become important. All things considered, the ZNNQ seems to be the model with the highest potential.

    This section presents applications to color restoration of images. Using the ZNNQ, ZNNQC and ZNNQR models to restore the color of two contaminated images further stresses their applicability. The first image, shown in Figure 3a, is a thumbnail of Mona Lisa at 256×256 pixels, and the second image, shown in Figure 3d, is a thumbnail of Lena Soderberg at 256×256 pixels.

    Figure 3.  Original, contaminated and restored images.

    Following is a description of the image restoration task. Suppose that a pure quaternion matrix $\tilde{S}=\mathrm{Re}\,\imath+\mathrm{Gr}\,\jmath+\mathrm{Bl}\,k$ is used to represent a colored image. Then the three imaginary parts of the quaternion matrix $\tilde{S}\in\mathbb{H}^{256\times 256}$ are the pixel matrices of the Re (red), Gr (green) and Bl (blue) channels of the color image. Given the quaternion matrix $\tilde{A}=1.2I_{256}\imath+0.8I_{256}\jmath+0.2I_{256}k$, the quaternion matrix $\tilde{B}$ representing the contaminated image is produced by $\tilde{B}=\tilde{A}\tilde{S}$. To restore the image back to its original state, we should solve (2.1) to get $\tilde{X}=\tilde{S}$. One should keep in mind that the coefficient matrices of $\tilde{S},\tilde{A}$ and $\tilde{B}$ are sparse matrices.
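Since $\tilde{A}$ is a scalar quaternion multiple of the identity, the contamination and restoration steps act entrywise on the pixels. The toy Python sketch below (a 4×4 stand-in for the 256×256 images; names and the direct quaternion-inversion shortcut, rather than ZNN integration, are our own assumptions) contaminates and then exactly restores the channels:

```python
import numpy as np

def qmul_elem(p, q):
    # elementwise Hamilton product of quaternion arrays stored as coefficient 4-tuples
    p1, p2, p3, p4 = p
    q1, q2, q3, q4 = q
    return (p1*q1 - p2*q2 - p3*q3 - p4*q4,
            p1*q2 + p2*q1 + p3*q4 - p4*q3,
            p1*q3 + p3*q1 + p4*q2 - p2*q4,
            p1*q4 + p4*q1 + p2*q3 - p3*q2)

rng = np.random.default_rng(2)
Re, Gr, Bl = (rng.random((4, 4)) for _ in range(3))   # toy 4x4 channel matrices
S = (np.zeros((4, 4)), Re, Gr, Bl)                     # pure quaternion S = Re i + Gr j + Bl k

q = (0.0, 1.2, 0.8, 0.2)      # A = 1.2 I i + 0.8 I j + 0.2 I k acts as q on each entry
qa = tuple(np.full((4, 4), c) for c in q)
B = qmul_elem(qa, S)                                   # contaminated image, B = A S

n2 = 1.2**2 + 0.8**2 + 0.2**2                          # |q|^2 = 2.12, so q is invertible
qinv = tuple(np.full((4, 4), c) for c in (0.0, -1.2/n2, -0.8/n2, -0.2/n2))
X = qmul_elem(qinv, B)                                 # restored image, X = q^{-1} B = S
```

The ZNN models of Section 3 reach the same $\tilde{X}=\tilde{S}$ dynamically, by driving the EME of the corresponding linear matrix equation to zero.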

    On a similar note to the analysis of the SEs, Figures 4a and 4b depict the paths of the EMEs of each model, from which we can draw conclusions regarding the convergence of the models. With the parameter $\lambda$ set to 10, the values of the EMEs of all three ZNNs steadily decrease as $t$ increases from 0 to 10. For both images, the ending values of the EMEs of the models are in the neighborhood of $10^{-10}$. Thus, all three ZNNs have converged. This is further stressed by Figure 5, where the theoretical trajectories of the imaginary parts of $\tilde{X}(t)$ are compared to the trajectories obtained by the three models. Namely, Figures 5a–5c correspond to the imaginary parts of $\tilde{X}(t)$ for the first image, whereas Figures 5d–5f correspond to those for the second image. In both cases, the generated trajectories match the theoretical trajectories.

    Figure 4.  EMEs and error of (2.1) in images.
    Figure 5.  Imaginary parts trajectories of ˜X(t) in images.

    On the other hand, the performance of each model on the task of color restoration of images can be better evaluated by examining Figure 4c and 4d. Once again, we have transformed all involved matrices back to quaternion form and have plotted, for each model and image, the corresponding ||˜A(t)˜X(t)−˜B(t)||F curves of the models. For both images, the respective curves of the ZNNQ and ZNNQR models follow identical paths and are positioned lower than those of the ZNNQC.
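    For a quaternion matrix stored as four real coefficient arrays, the Frobenius norm used in these residual curves reduces to the root of the sum of squares of all four coefficient matrices. A minimal sketch (the helper name is ours):

```python
import numpy as np

def qfrobenius(Q):
    """Frobenius norm of a quaternion matrix stored as a 4-tuple
    (w, x, y, z) of real coefficient arrays."""
    return np.sqrt(sum(np.sum(c ** 2) for c in Q))

# e.g. the 2x2 quaternion identity matrix has norm sqrt(2)
Z = np.zeros((2, 2))
print(qfrobenius((np.eye(2), Z, Z, Z)))  # -> 1.4142135623730951
```

Applied to the difference of ˜A(t)˜X(t) and ˜B(t) in coefficient form, this yields the residual plotted in Figure 4c and 4d.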

    Observing the original images (Figure 3a and 3d), the contaminated images (Figure 3b and 3e) and lastly the restored images (Figure 3c and 3f), it is evident that the color restoration process has been successful. Thus, the developed models may be effectively used to restore contaminated images to their original colors. In line with the analysis in Section 6.2, opting to work with the ZNNQ seems to be a more efficient choice.

    In view of handling the TV-QLME problem for matrices of arbitrary dimension, three models, namely the ZNNQ, ZNNQC and ZNNQR, have been proposed. The development of these models has been supported by theoretical analysis and analysis of their computational complexity, along with simulation examples and practical applications to color restoration of images. The TV-QLME problem has been effectively solved both directly in the quaternion domain and indirectly, by representation in the complex and real domains with subsequent conversion of the solutions back to the quaternion domain; the direct method, implemented by the ZNNQ model, has emerged as the most efficient and effective of the three. All things considered, the established results open the way for more interesting research endeavors. A few considerations are the following:

    ● One may investigate application of nonlinear ZNNs to quaternion valued TV problems.

    ● One could also consider the task of pseudo-inversion of quaternion valued TV matrices.

    ● Applying the predefined-time ZNN architecture to quaternion valued TV problems is something that can be looked into.

    ● Utilizing carefully chosen design parameters specified in fuzzy environments to speed up the convergence of ZNN models is another area of investigation.

    This work was supported by a Mega Grant from the Government of the Russian Federation within the framework of federal project No. 075-15-2021-584.

    The authors declare no conflict of interest.



    © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)