Research article

Gradient-descent iterative algorithm for solving exact and weighted least-squares solutions of rectangular linear systems

  • Received: 27 December 2022 Revised: 22 February 2023 Accepted: 15 March 2023 Published: 17 March 2023
  • MSC : 65F10, 15A12, 15A60, 26B25

  • Consider a linear system $ Ax = b $ where the coefficient matrix $ A $ is rectangular and of full column rank. We propose an iterative algorithm for solving this linear system, based on a gradient-descent optimization technique, aiming to produce a sequence of well-approximated least-squares solutions. Here, we consider least-squares solutions in full generality; that is, we measure any related error through an arbitrary vector norm induced by a weighted positive definite matrix $ W $. It turns out that when the system has a unique solution, the proposed algorithm produces approximate solutions converging to that unique solution. When the system is inconsistent, the sequence of residual norms converges to the weighted least-squares error. Our work includes the usual least-squares solution as the special case $ W = I $. Numerical experiments are performed to validate the capability of the algorithm. Moreover, the algorithm outperforms recent gradient-based iterative algorithms in both iteration count and computational time.
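    The paper's full algorithm and step-size analysis are not reproduced in this excerpt. As a minimal illustration of the idea described in the abstract, the following sketch applies steepest descent with exact line search to the weighted least-squares objective $ f(x) = (Ax-b)^{\top} W (Ax-b) $; the function name, stopping rule, and step-size choice are illustrative assumptions, not the authors' exact scheme.

    ```python
    import numpy as np

    def weighted_lsq_gd(A, b, W, tol=1e-10, max_iter=10_000):
        """Steepest-descent sketch for min_x (Ax-b)^T W (Ax-b),
        assuming A has full column rank and W is positive definite."""
        x = np.zeros(A.shape[1])
        for _ in range(max_iter):
            r = A @ x - b                  # residual
            g = A.T @ (W @ r)              # gradient direction (up to a factor of 2)
            if np.linalg.norm(g) < tol:    # near-stationary: stop
                break
            Ag = A @ g
            t = (g @ g) / (Ag @ (W @ Ag))  # exact line search along -g
            x = x - t * g
        return x
    ```

    For a consistent or inconsistent system, the iterates approach the weighted least-squares solution $ x^{*} = (A^{\top} W A)^{-1} A^{\top} W b $, which exists and is unique under the stated full-rank and positive-definiteness assumptions; taking $ W = I $ recovers the ordinary least-squares solution.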

    Citation: Kanjanaporn Tansri, Pattrawut Chansangiam. Gradient-descent iterative algorithm for solving exact and weighted least-squares solutions of rectangular linear systems[J]. AIMS Mathematics, 2023, 8(5): 11781-11798. doi: 10.3934/math.2023596



  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
