Citation: Huamin Zhang, Hongcai Yin. Zeroing neural network model for solving a generalized linear time-varying matrix equation[J]. AIMS Mathematics, 2022, 7(2): 2266-2280. doi: 10.3934/math.2022129
Abstract
The time-varying solution of a class of generalized linear matrix equations with the transpose of the unknown matrix is discussed. The computation model is constructed and an asymptotic convergence proof is given using the zeroing neural network method. With a suitable activation function, the predefined-time convergence property and a noise-suppression strategy are discussed. Numerical examples are offered to illustrate the efficacy of the suggested zeroing neural network models.
1.
Introduction
Due to its well-established applications in various scientific and technical fields, fractional calculus has gained prominence during the last three decades. Many pioneers have shown that, compared with integer-order models, fractional-order models may represent complex phenomena more accurately [1,2]. The Caputo fractional derivative is nonlocal, in contrast to the integer-order derivative, which is local in nature [1]. In other words, the integer-order derivative can be used to analyze changes in the neighborhood of a point, whereas the Caputo fractional derivative can be used to analyze changes over a whole interval. Eminent mathematicians, including Riemann [4], Caputo [5], Podlubny [6], Ross [7], Liouville [8], Miller, and others, laid the fundamental foundation for fractional-order integrals and derivatives. The theory of fractional-order calculus has been connected to real-world applications, among them chaos theory [9], signal processing [10], electrodynamics [11], human diseases [12,13], and other areas [14,15,16].
Fractional differential equations are now widely known due to their numerous applications in engineering and science, such as electrodynamics [17], chaos theory [18], accounting [19], continuum and fluid mechanics [20], digital signal processing [21], and biological population models [22]. Resolving such problems requires efficient tools [23,24,25]. For this reason, we attempt in this article to apply an efficient analytical technique to solve nonlinear arbitrary-order differential equations. Many problems in related fields can be analyzed elegantly, and often more accurately, using fractional differential equations. Various techniques have been developed for this purpose, among them the fractional reduced differential transform technique [26], the Adomian decomposition technique [27], the fractional variational iteration technique [28], the Elzaki decomposition technique [29,30], the iterative transformation technique [31], the fractional natural decomposition method (FNDM) [32], and the fractional homotopy perturbation method [33].
The power series method is used to solve some classes of differential and integral equations of fractional or non-fractional order; it is based on assuming that the solution of the equation can be expanded as a power series. The residual power series (RPS) method is an easy and fast technique for determining the coefficients of the power series solution. The Jordanian mathematician Omar Abu Arqub created the residual power series method in 2013 as a technique for quickly calculating the coefficients of the power series solutions of first- and second-order fuzzy differential equations [34]. Without perturbation, linearization, or discretization, the residual power series method provides a powerful and straightforward power series solution for highly linear and nonlinear equations [35,36,37,38]. Over the past several years, the residual power series method has been used to solve a growing variety of nonlinear ordinary and partial differential equations of various sorts, orders, and classes. It has been used to obtain and predict solitary-pattern solutions of nonlinear fractional dispersive partial differential equations [39], to solve the highly nonlinear singular differential equation known as the generalized Lane-Emden equation [40], to solve higher-order ordinary differential equations numerically [41], and to approximately solve the fractional nonlinear KdV-Burgers equations [42]. The RPSM differs from several other analytical and numerical approaches in some crucial ways. First, the RPSM does not require a recursion relation or the comparison of the coefficients of corresponding terms. Second, by minimizing the associated residual error, the RPSM offers a straightforward way to guarantee the convergence of the series solution. Third, the RPSM does not suffer from computational rounding errors and does not consume much time or memory.
Fourth, the method can be applied directly to the given problem by selecting a suitable initial guess approximation, since the residual power series method requires no conversion when passing from low order to higher order or from simple linearity to complicated nonlinearity [43,44,45]. The process of solving linear differential equations using the LT method consists of three steps. The first step depends on transforming the original differential equation into a new space, called the Laplace space. In the second step, the new equation is solved algebraically in the Laplace space. In the last step, the solution from the second step is transformed back into the original space, resulting in the solution of the given problem.
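As a minimal sketch of these three steps (the concrete equation below, u'(t) + 2u(t) = 0 with u(0) = 1, is an assumed toy example, not one from the article), the procedure can be carried out symbolically:

```python
import sympy as sp

t = sp.symbols('t', positive=True)
s = sp.symbols('s', positive=True)
U = sp.symbols('U')  # stands for U(s), the Laplace transform of u(t)

# Step 1: transform u'(t) + 2u(t) = 0, u(0) = 1 into Laplace space:
#   L[u'] = s*U(s) - u(0), so the equation becomes s*U - 1 + 2*U = 0.
laplace_eq = sp.Eq(s*U - 1 + 2*U, 0)

# Step 2: solve the algebraic equation in Laplace space.
U_s = sp.solve(laplace_eq, U)[0]   # U(s) = 1/(s + 2)

# Step 3: transform back to the original space.
u_t = sp.inverse_laplace_transform(U_s, s, t)
print(u_t)  # exp(-2*t)
```

The same three-step skeleton underlies the Laplace residual power series method, with the algebraic step replaced by a recursive determination of series coefficients.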
In this article, we apply the Laplace residual power series method to obtain the solutions of fractional-order nonlinear partial differential equations. The Laplace transformation is efficiently combined with the residual power series method to form a recursive algorithmic technique. The proposed technique produces analytical results in the form of a convergent series. The Caputo fractional derivative operator describes the fractional formulation of the partial differential equations. The offered methodology is well demonstrated in modelling and numerical investigations. The exact-analytical findings are a valuable way to analyze the problematic dynamics of systems, notably for computational fractional partial differential equations.
2.
Preliminaries
Definition 2.1. The Caputo fractional derivative of a function $u(\zeta,t)$ of order $\alpha$ is given as [46]
$$ {}^{C}D_t^{\alpha}u(\zeta,t)=J_t^{m-\alpha}\,\frac{\partial^m u(\zeta,t)}{\partial t^m},\quad m-1<\alpha\le m,\ t>0, \tag{2.1} $$
where $m\in\mathbb{N}$ and $J_t^{\alpha}$, the Riemann-Liouville (RL) fractional integral of $u(\zeta,t)$ of order $\alpha$, is given as
$$ J_t^{\alpha}u(\zeta,t)=\frac{1}{\Gamma(\alpha)}\int_0^t (t-\tau)^{\alpha-1}u(\zeta,\tau)\,d\tau. \tag{2.2} $$
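As a hedged check of Definition 2.1 (a sketch with an assumed test function, $u(t)=t^2$ and $\alpha=1/2$, not taken from the article), the Caputo derivative computed from Eqs (2.1) and (2.2) can be compared with the known closed form $\Gamma(3)/\Gamma(3-\alpha)\,t^{2-\alpha}$:

```python
import sympy as sp

t, tau = sp.symbols('t tau', positive=True)
alpha = sp.Rational(1, 2)   # fractional order; m - 1 < alpha <= m with m = 1
m = 1

# u(t) = t^2; Definition 2.1 applies the RL integral J_t^(m-alpha) of Eq (2.2)
# to the m-th derivative of u.
du = sp.diff(tau**2, tau, m)                     # u'(tau) = 2*tau
caputo = sp.integrate((t - tau)**(m - alpha - 1) * du,
                      (tau, 0, t)) / sp.gamma(m - alpha)

# Known closed form for the Caputo derivative of t^2 of order alpha.
expected = sp.gamma(3) / sp.gamma(3 - alpha) * t**(2 - alpha)
print(sp.simplify(caputo - expected))  # 0
```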
Definition 2.2. The Laplace transformation (LT) of $u(\zeta,t)$ is given as [46]
$$ U(\zeta,s)=\mathcal{L}_t[u(\zeta,t)]=\int_0^{\infty}e^{-st}u(\zeta,t)\,dt,\quad s>0. $$
Theorem 2.1. Let $u(\zeta,t)$ be a piecewise continuous function on $I\times[0,\infty)$ with exponential order $\zeta$. Assume that the fractional expansion of the function $U(\zeta,s)=\mathcal{L}_t[u(\zeta,t)]$ is as follows:
Now we calculate $f_k(\zeta,s)$, $k=1,2,3,\dots$: substitute the $k$th truncated series of Eq (4.4) into the $k$th Laplace residual of Eq (4.6), multiply the resulting equation by $s^{k\alpha+1}$, and then recursively solve the condition $\lim_{s\to\infty}\big(s^{k\alpha+1}\mathcal{L}_t[\mathrm{Res}_{u,k}(\zeta,s)]\big)=0$, $k=1,2,3,\dots$. The first few terms are:
$$ f_1(\zeta,s)=24,\quad f_2(\zeta,s)=-384,\quad f_3(\zeta,s)=6144. \tag{4.7} $$
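The coefficient-extraction rule above can be sketched symbolically. The residual below is a hypothetical stand-in (the PDE of Example 4.1 is not reproduced here): its leading term carries the unknown coefficient at order $s^{-(k\alpha+1)}$, and multiplying by $s^{k\alpha+1}$ and letting $s\to\infty$ isolates it:

```python
import sympy as sp

s = sp.symbols('s', positive=True)
f_k = sp.symbols('f_k')
k, alpha = 2, sp.Rational(1, 2)   # illustrative index and fractional order

# Hypothetical k-th Laplace residual: the unknown coefficient enters at order
# s^(-(k*alpha+1)); the remaining terms decay faster as s -> oo.
res_k = (f_k - 24) / s**(k*alpha + 1) + 7 / s**((k + 1)*alpha + 1)

# Multiply by s^(k*alpha+1), let s -> oo, and set the limit to zero.
condition = sp.limit(s**(k*alpha + 1) * res_k, s, sp.oo)
f_k_value = sp.solve(sp.Eq(condition, 0), f_k)[0]
print(f_k_value)  # 24
```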
Putting the values of $f_k(\zeta,s)$, $k=1,2,3,\dots$, into Eq (4.4), we get
Figure 1 shows the exact and LRPSM solutions for $u(\zeta,t)$ of Example 4.1 at $\alpha=3$, versus $\zeta$ at $t=0.3$. Figure 2 shows the analytical solution for $u(\zeta,t)$ at $\alpha=2.8$ and $\alpha=2.6$, versus $\zeta$ at $t=0.3$. Figure 3 shows the analytical solution for $u(\zeta,t)$ at various values of $\alpha$ at $t=0.3$ for Example 4.1.
Figure 1.
Exact and LRPSM solutions for $u(\zeta,t)$ at $\alpha=3$, versus $\zeta$ at $t=0.3$.
Now we calculate $f_k(\zeta,s)$, $k=1,2,3,\dots$: substitute the $k$th truncated series of Eq (4.15) into the $k$th Laplace residual of Eq (4.16), multiply the resulting equation by $s^{k\alpha+1}$, and then recursively solve the condition $\lim_{s\to\infty}\big(s^{k\alpha+1}\mathcal{L}_t[\mathrm{Res}_{u,k}(\zeta,s)]\big)=0$, $k=1,2,3,\dots$. The first few terms are:
Figure 4 shows the exact and LRPSM solutions for $u(\zeta,t)$ of Example 4.2 at $\alpha=3$, versus $\zeta$ at $t=0.3$. Figure 5 shows the LRPSM solutions for $u(\zeta,t)$ at $\alpha=2.5$ and $\alpha=2.8$ at $t=0.3$ for Example 4.2.
Figure 4.
Exact and LRPSM solutions for $u(\zeta,t)$ at $\alpha=3$, versus $\zeta$ at $t=0.3$.
Now we calculate $f_k(\zeta,s)$, $k=1,2,3,\dots$: substitute the $k$th truncated series of Eq (4.25) into the $k$th Laplace residual of Eq (4.26), multiply the resulting equation by $s^{k\alpha+1}$, and then recursively solve the condition $\lim_{s\to\infty}\big(s^{k\alpha+1}\mathcal{L}_t[\mathrm{Res}_{u,k}(\zeta,s)]\big)=0$, $k=1,2,3,\dots$. The first few terms are:
$$ f_1(\zeta,s)=-\cos\zeta,\quad f_2(\zeta,s)=\cos\zeta,\quad f_3(\zeta,s)=-\cos\zeta. \tag{4.27} $$
Putting the values of $f_k(\zeta,s)$, $k=1,2,3,\dots$, into Eq (4.25), we get
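Transforming such a series back to the time domain relies on the building-block inversion $\mathcal{L}_t^{-1}\big[s^{-(k\alpha+1)}\big]=t^{k\alpha}/\Gamma(k\alpha+1)$; a quick sympy check for sample values ($k=3$ and $\alpha=1/2$ are illustrative choices, not from the article):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
k, alpha = 3, sp.Rational(1, 2)    # sample index and fractional order

# Each term f_k/s^(k*alpha+1) of the Laplace-space expansion inverts to
# f_k * t^(k*alpha)/Gamma(k*alpha+1), yielding the fractional power series in t.
term = sp.inverse_laplace_transform(1 / s**(k*alpha + 1), s, t)
expected = t**(k*alpha) / sp.gamma(k*alpha + 1)
print(sp.simplify(term - expected))  # 0
```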
5.
Conclusion
In this article, fractional partial differential equations have been solved analytically by employing the Laplace residual power series method in conjunction with the Caputo operator. To demonstrate the validity of the recommended method, we analyzed three distinct partial differential equation problems. The simulation results demonstrate that the outcomes of our method are in close agreement with the exact solution. The new method is straightforward, efficient, and suitable for obtaining numerical solutions to partial differential equations. The primary advantage of the proposed approach is the series-form solution, which converges rapidly to the exact solution. We can therefore conclude that the suggested approach is quite methodical and efficient for a more thorough investigation of fractional-order mathematical models.
Conflict of interest
The authors declare no conflicts of interest.
References
[1]
J. Q. Gong, J. Jin, A better robustness and fast convergence zeroing neural network for solving dynamic nonlinear equations, Neural Comput. Appl., 2021, 1–11. doi: 10.1007/s00521-020-05617-9.
[2]
L. Xie, J. Ding, F. Ding, Gradient based iterative solutions for general linear matrix equations, Comput. Math. Appl., 58 (2009), 1441–1448. doi: 10.1016/j.camwa.2009.06.047.
[3]
Z. N. Zhang, F. Ding, X. G. Liu, Hierarchical gradient based iterative parameter estimation algorithm for multivariable output error moving average systems, Comput. Math. Appl., 61 (2011), 672–682. doi: 10.1016/j.camwa.2010.12.014.
[4]
F. Ding, Combined state and least squares parameter estimation algorithms for dynamic systems, Appl. Math. Modell., 38 (2014), 403–412. doi: 10.1016/j.apm.2013.06.007.
[5]
H. M. Zhang, Quasi gradient-based inversion-free iterative algorithm for solving a class of the nonlinear matrix equations, Comput. Math. Appl., 77 (2019), 1233–1244. doi: 10.1016/j.camwa.2018.11.006.
[6]
H. M. Zhang, L. J. Wan, Zeroing neural network methods for solving the Yang-Baxter-like matrix equation, Neurocomputing, 383 (2020), 409–418. doi: 10.1016/j.neucom.2019.11.101.
[7]
V. N. Katsikis, S. D. Mourtas, P. S. Stanimirović, Y. N. Zhang, Solving complex-valued time-varying linear matrix equations via QR decomposition with applications to robotic motion tracking and on angle-of-arrival localization, IEEE T. Neur. Net. Lear., 2021, 1–10. doi: 10.1109/TNNLS.2021.3052896.
[8]
L. Xiao, Z. J. Zhang, S. Li, Solving time-varying system of nonlinear equations by finite-time recurrent neural networks with application to motion tracking of robot manipulators, IEEE T. Syst. Man Cy.-S., 49 (2019), 2210–2220. doi: 10.1109/TSMC.2018.2836968.
[9]
L. Xiao, S. Li, K. L. Li, L. Jin, B. L. Liao, Co-design of finite-time convergence and noise suppression: A unified neural model for time varying linear equations with robotic applications, IEEE T. Syst. Man Cy.-S., 50 (2020), 5233–5243. doi: 10.1109/TSMC.2018.2870489.
[10]
L. Jin, S. Li, B. L. Liao, Z. J. Zhang, Zeroing neural networks: A survey, Neurocomputing, 267 (2017), 597–604. doi: 10.1016/j.neucom.2017.06.030.
[11]
Z. B. Sun, T. Shi, L. Jin, B. C. Zhang, Z. X. Pang, J. Z. Yu, Discrete-time zeroing neural network of O(τ4) pattern for online time-varying nonlinear optimization problem: Application to manipulator motion generation, J. Frankl. I., 358 (2021), 7203–7220. doi: 10.1016/j.jfranklin.2021.07.006.
[12]
L. Jin, Y. N. Zhang, Discrete-time Zhang neural network for online time-varying nonlinear optimization with application to manipulator motion generation, IEEE T. Neur. Net. Lear., 26 (2015), 1525–1531. doi: 10.1109/TNNLS.2014.2342260.
[13]
L. Jin, S. Li, Distributed task allocation of multiple robots: A control perspective, IEEE T. Syst. Man Cy.-S., 48 (2018), 693–701. doi: 10.1109/TSMC.2016.2627579.
[14]
Y. M. Qi, L. Jin, Y. N. Wang, L. Xiao, J. L. Zhang, Complex-valued discrete-time neural dynamics for perturbed time-dependent complex quadratic programming with applications, IEEE T. Neur. Net. Lear., 31 (2020), 3555–3569. doi: 10.1109/TNNLS.2019.2944992.
[15]
L. Jin, Y. N. Zhang, Continuous and discrete Zhang dynamics for real-time varying nonlinear optimization, Numer. Algorithms, 73 (2016), 115–140. doi: 10.1007/s11075-015-0088-1.
[16]
L. Jin, Y. N. Zhang, S. Li, Y. Y. Zhang, Noise-tolerant ZNN models for solving time-varying zero-finding problems: A control-theoretic approach, IEEE T. Automat. Contr., 62 (2017), 992–997. doi: 10.1109/TAC.2016.2566880.
[17]
Z. B. Sun, T. Shi, L. Wei, Y. Y. Sun, K. P. Liu, L. Jin, Noise-suppressing zeroing neural network for online solving time-varying nonlinear optimization problem: A control-based approach, Neural Comput. Appl., 32 (2020), 11505–11520. doi: 10.1007/s00521-019-04639-2.
[18]
Z. B. Sun, F. Li, B. C. Zhang, Y. Y. Sun, L. Jin, Different modified zeroing neural dynamics with inherent tolerance to noises for time-varying reciprocal problems: A control-theoretic approach, Neurocomputing, 337 (2019), 165–179. doi: 10.1016/j.neucom.2019.01.064.
[19]
S. Z. Qiao, X. Z. Wang, Y. M. Wei, Two finite-time convergent Zhang neural network models for time-varying complex matrix Drazin inverse, Linear Algebra Appl., 542 (2018), 101–117. doi: 10.1016/j.laa.2017.03.014.
[20]
Z. Li, Y. N. Zhang, Improved Zhang neural network model and its solution of time-varying generalized linear matrix equations, Expert Syst. Appl., 37 (2010), 7213–7218. doi: 10.1016/j.eswa.2010.04.007.
[21]
S. Li, S. F. Chen, B. Liu, Accelerating a recurrent neural network to finite-time convergence for solving time-varying Sylvester equation by using a sign-bi-power activation function, Neural Process. Lett., 37 (2013), 189–205. doi: 10.1007/s11063-012-9241-1.
[22]
Y. J. Shen, P. Miao, Y. H. Huang, Y. Shen, Finite-time stability and its application for solving time-varying Sylvester equation by recurrent neural network, Neural Process. Lett., 42 (2015), 763–784. doi: 10.1007/s11063-014-9397-y.
[23]
L. Xiao, Accelerating a recurrent neural network to finite-time convergence using a new design formula and its application to time-varying matrix square root, J. Frankl. I., 354 (2017), 5667–5677. doi: 10.1016/j.jfranklin.2017.06.012.
[24]
L. Jia, L. Xiao, J. H. Dai, Z. H. Qi, Z. J. Zhang, Y. S. Zhang, Design and application of an adaptive fuzzy control strategy to zeroing neural network for solving time-variant QP problem, IEEE T. Fuzzy Syst., 29 (2021), 1544–1555. doi: 10.1109/TFUZZ.2020.2981001.
[25]
V. N. Katsikis, S. D. Mourtas, P. S. Stanimirović, Y. N. Zhang, Continuous-time varying complex QR decomposition via zeroing neural dynamics, Neural Process. Lett., 53 (2021), 3573–3590. doi: 10.1007/s11063-021-10566-y.
[26]
Z. J. Zhang, L. N. Zheng, J. Weng, Y. J. Mao, W. Lu, L. Xiao, A new varying-parameter recurrent neural-network for online solution of time-varying Sylvester equation, IEEE T. Cybernetics, 48 (2018), 3135–3148. doi: 10.1109/TCYB.2017.2760883.
[27]
Z. J. Zhang, L. N. Zheng, T. R. Qiu, F. Q. Deng, Varying-parameter convergent-differential neural solution to time-varying overdetermined system of linear equations, IEEE T. Automat. Contr., 65 (2020), 874–881. doi: 10.1109/TAC.2019.2921681.
[28]
Z. J. Zhang, L. N. Zheng, A complex varying-parameter convergent-differential neural-network for solving online time-varying complex Sylvester equation, IEEE T. Cybernetics, 49 (2019), 3627–3639. doi: 10.1109/TCYB.2018.2841970.
[29]
Z. J. Zhang, L. D. Kong, L. N. Zheng, P. C. Zhang, X. L. Qu, B. L. Liao, et al., Robustness analysis of a power-type varying-parameter recurrent neural network for solving time-varying QM and QP problems and applications, IEEE T. Syst. Man Cy.-S., 50 (2020), 5106–5118. doi: 10.1109/TSMC.2018.2866843.
[30]
H. M. Zhang, F. Ding, On the Kronecker products and their applications, J. Appl. Math., 2013 (2013), 1–8. doi: 10.1155/2013/296185.
[31]
Y. N. Zhang, D. C. Jiang, J. Wang, A recurrent neural network for solving Sylvester equation with time-varying coefficients, IEEE T. Neural Networ., 13 (2002), 1053–1063. doi: 10.1109/TNN.2002.1031938.
[32]
K. Chen, Recurrent implicit dynamics for online matrix inversion, Appl. Math. Comput., 219 (2013), 10218–10224. doi: 10.1016/j.amc.2013.03.117.
[33]
Y. N. Zhang, K. Chen, H. Z. Tan, Performance analysis of gradient neural network exploited for online time-varying matrix inversion, IEEE T. Automat. Contr., 54 (2009), 1940–1945. doi: 10.1109/TAC.2009.2023779.
[34]
F. Ding, G. J. Liu, X. P. Liu, Parameter estimation with scarce measurements, Automatica, 47 (2011), 1646–1655. doi: 10.1016/j.automatica.2011.05.007.
[35]
F. Ding, Y. J. Liu, B. Bao, Gradient based and least squares based iterative estimation algorithms for multi-input multi-output systems, P. I. Mech. Eng. I-J. Sys., 226 (2012), 43–55. doi: 10.1177/0959651811409491.
[36]
F. Ding, G. J. Liu, X. P. Liu, Partially coupled stochastic gradient identification methods for non-uniformly sampled systems, IEEE T. Automat. Contr., 55 (2010), 1976–1981. doi: 10.1109/TAC.2010.2050713.
[37]
L. Xiao, J. H. Dai, L. Jin, W. B. Li, S. Li, J. Hou, A noise-enduring and finite-time zeroing neural network for equality-constrained time-varying nonlinear optimization, IEEE T. Syst. Man Cy.-S., 51 (2021), 4729–4740. doi: 10.1109/TSMC.2019.2944152.
[38]
L. Xiao, K. L. Li, M. X. Duan, Computing time-varying quadratic optimization with finite-time convergence and noise tolerance: A unified framework for zeroing neural network, IEEE Trans. Neural Netw. Lear. Syst., 30 (2019), 3360–3369. doi: 10.1109/TNNLS.2019.2891252.
[39]
L. Xiao, Y. S. Zhang, J. H. Dai, K. Chen, S. Yang, W. B. Li, et al., A new noise-tolerant and predefined-time ZNN model for time-dependent matrix inversion, Neural Networks, 117 (2019), 124–134. doi: 10.1016/j.neunet.2019.05.005.
[40]
F. Yu, L. Liu, L. Xiao, K. L. Li, S. Cai, A robust and fixed-time zeroing neural dynamics for computing time-variant nonlinear equation using a novel nonlinear activation function, Neurocomputing, 350 (2019), 108–116. doi: 10.1016/j.neucom.2019.03.053.
[41]
L. Xiao, Y. K. Cao, J. H. Dai, L. Jia, H. Y. Tan, Finite-time and predefined-time convergence design for zeroing neural network: Theorem, method, and verification, IEEE T. Ind. Inform., 17 (2021), 4724–4732. doi: 10.1109/TII.2020.3021438.
[42]
L. Xiao, J. H. Dai, R. B. Lu, S. Li, J. C. Li, S. J. Wang, Design and comprehensive analysis of a noise-tolerant ZNN model with limited-time convergence for time-dependent nonlinear minimization, IEEE T. Neur. Net. Lear., 31 (2020), 5339–5348. doi: 10.1109/TNNLS.2020.2966294.
[43]
L. Xiao, Y. S. Zhang, Q. Y. Zuo, J. H. Dai, J. C. Li, W. S. Tang, A noise-tolerant zeroing neural network for time-dependent complex matrix inversion under various kinds of noises, IEEE T. Ind. Inform., 16 (2020), 3757–3766. doi: 10.1109/TII.2019.2936877.
[44]
M. Liu, L. M. Chen, X. H. Du, L. Jin, M. S. Shang, Activated gradients for deep neural networks, IEEE T. Neur. Net. Lear., 2021, 1–13. doi: 10.1109/TNNLS.2021.3106044.
This article has been cited by:
1. Xin Xia, Ying Zhang, Zhigang Huang, On Limiting Directions of Julia Sets of Entire Solutions of Complex Differential Equations, Wuhan University Journal of Natural Sciences, 29 (2024), 357. doi: 10.1051/wujns/2024294357.