Research article

A structure-preserving doubling algorithm for the square root of regular M-matrix

  • Received: 10 May 2024 Revised: 27 August 2024 Accepted: 06 September 2024 Published: 18 September 2024
  • The matrix square root is widely encountered in many fields of mathematics. In this paper, based on the properties of M-matrices and quadratic matrix equations, we study the square root of an M-matrix and prove that for a regular M-matrix there always exists a regular M-matrix as its square root. In addition, a structure-preserving doubling algorithm is proposed to compute the square root. Theoretical analysis and numerical experiments show that our method is feasible and effective under certain conditions.

    Citation: Zehua Wang, Jinrui Guan, Ahmed Zubair. A structure-preserving doubling algorithm for the square root of regular M-matrix[J]. Electronic Research Archive, 2024, 32(9): 5306-5320. doi: 10.3934/era.2024245




    Let $A \in \mathbb{C}^{n\times n}$. A matrix $X \in \mathbb{C}^{n\times n}$ is called a square root of $A$ if it satisfies the following matrix equation:

    $$X^2 = A. \tag{1.1}$$

    The matrix square root is widely encountered in many fields of mathematics, such as nonlinear matrix equations, computation of the matrix logarithm, boundary value problems, Markov chains, and so on (see [1,2,3,4]).

    In recent years, many scholars have conducted in-depth research on the theory and numerical algorithms of the matrix square root and obtained a large number of results. On the theoretical side, the existence and the number of square roots of general matrices have been discussed extensively (see [2,3]). In addition, the existence of square roots has been studied for matrices with special structure or special properties, such as the M-matrix (see [5]), the H-matrix with positive diagonal entries (see [6]), the centrosymmetric matrix (see [7,8]), the P-orthogonal matrix (see [9]), the Toeplitz matrix (see [10]), the circulant matrix (see [11]), the Boolean matrix (see [12]), and so on.

    In terms of numerical algorithms, the Schur method was first used to compute the square root of a general matrix (see [13,14]). In fact, the Matlab command sqrtm for computing the square root of a general matrix is based on the Schur method. However, as the size of the matrix increases, the cost of the Schur method grows sharply; moreover, for some matrices with special structure, iterative methods are more effective, so attention turned to iterative methods. Higham first proposed Newton's method for computing the square root of a general matrix in [15]. Although Newton's method has a fast convergence rate, the cost of each iteration is high. To overcome this defect, the simple Newton method was proposed; however, it has poor numerical stability (see [2]). Later, Newton's method was improved, and some fast and effective algorithms were proposed, such as the DB method, the CR method, and so on (see [16,17,18,19,20]). Recently, many effective algorithms for computing the matrix square root have also been proposed from different perspectives, such as the power method (see [21,22]), the gradient neural network method (see [23]), the Zolotarev iterative method (see [24]), the Chebyshev iterative method (see [25]), and so on (see [26,27,28]).

    Generally speaking, the theory of the matrix square root is rather involved, and the existence of a square root of a general matrix is not as evident as it may seem: a general matrix may have no square root, or it may have many square roots. A sufficient condition for a square root to exist is that $A$ has no nonpositive real eigenvalue; in this case, $A$ has a unique square root all of whose eigenvalues have positive real part, and this square root, denoted by $A^{1/2}$, is called the principal square root (see [1,2]). As for M-matrices, we have the following result.

    Lemma 1.1. [2] Let A be a nonsingular M-matrix. Then A has exactly one nonsingular M-matrix as its square root.

    But for a general M-matrix, the above conclusion is not necessarily valid. For example, consider

    $$A=\begin{pmatrix} 0 & -1 \\ 0 & 0 \end{pmatrix}, \qquad B=\begin{pmatrix} 1 & -1 \\ 0 & 0 \end{pmatrix}.$$

    It is easy to verify that $A$ and $B$ are M-matrices, but direct calculation shows that $A$ has no square root while $B$ has an M-matrix square root (indeed, any square root $X$ of $A$ would satisfy $X^4 = A^2 = 0$ and hence be nilpotent, so that $X^2 = 0 \neq A$ for a $2\times 2$ matrix; on the other hand, $B^2 = B$, so $B$ is its own square root). Therefore, it is necessary to extend the above theorem to more general subclasses of M-matrices. The regular M-matrix is an extension of the nonsingular M-matrix and the irreducible singular M-matrix; it has many beautiful properties similar to these two classes, and it also plays an important role in the theory of nonsymmetric algebraic Riccati equations. In this paper, we will prove that for a regular M-matrix, there always exists a regular M-matrix as its square root. In addition, a structure-preserving doubling algorithm is proposed to compute the square root. Theoretical analysis and numerical experiments are given to show that our method is feasible and effective under certain conditions.

    The rest of the paper is organized as follows: In Section 2, we give some preliminary results on M-matrices. In Section 3, based on the properties of M-matrices and quadratic matrix equations, we prove the existence of a square root of a regular M-matrix. In Section 4, we propose a structure-preserving doubling algorithm to compute the square root and then give a convergence analysis of it. In Section 5, we use some numerical examples to show the feasibility and effectiveness of our method. Conclusions are given in Section 6.

    In this section, we review the definitions and some properties of M-matrices.

    For any matrix $A \in \mathbb{R}^{m\times n}$, if the entries of $A$ satisfy $a_{ij} \ge 0$ for $1 \le i \le m$, $1 \le j \le n$, then $A$ is called a nonnegative matrix, denoted by $A \ge 0$. For any matrices $A, B \in \mathbb{R}^{m\times n}$, we write $A \ge B$ if $a_{ij} \ge b_{ij}$ holds for all $1 \le i \le m$, $1 \le j \le n$. Let $A=(a_{ij}) \in \mathbb{R}^{n\times n}$. Then $A$ is called a Z-matrix if $a_{ij} \le 0$ for all $i \ne j$. A Z-matrix $A$ is called an M-matrix if there exists a nonnegative matrix $B$ such that $A = sI - B$ and $s \ge \rho(B)$, where $\rho(B)$ is the spectral radius of $B$. In particular, $A$ is called a nonsingular M-matrix if $s > \rho(B)$ and a singular M-matrix if $s = \rho(B)$.

    The following properties of M-matrices are well known and can be found in [29].

    Lemma 2.1. Let $A \in \mathbb{R}^{n\times n}$ be a Z-matrix. Then the following statements are equivalent:

    (i) A is a nonsingular M-matrix;

    (ii) $A^{-1} \ge 0$;

    (iii) $Au > 0$ for some positive vector $u > 0$.

    Lemma 2.2. Let $A$ be a nonsingular M-matrix and $B$ be a Z-matrix. If $A \le B$, then $B$ is also a nonsingular M-matrix.

    Lemma 2.3. Let $A$ be an irreducible singular M-matrix and $B$ be a Z-matrix. If $A \le B$ and $A \ne B$, then $B$ is a nonsingular M-matrix.

    The regular M-matrix is an extension of the nonsingular M-matrix and the irreducible singular M-matrix. Its definition is as follows:

    Definition 2.1. [30] An M-matrix $A$ is said to be regular if $Au \ge 0$ for some $u > 0$.

    It is easy to verify that nonsingular M-matrices and irreducible singular M-matrices are always regular M-matrices, and that any Z-matrix $A$ such that $Au \ge 0$ for some $u > 0$ is a regular M-matrix.
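
    To make these definitions concrete, the following MATLAB fragment (an illustrative sketch of ours, not part of the algorithms in this paper) checks Lemma 2.1(ii) for a sample Z-matrix and tests the condition of Definition 2.1 with one particular vector $u$; note that failure of $Au \ge 0$ for a single chosen $u$ does not by itself disprove regularity.

        A = [4 -1; -2 3];                           % a Z-matrix: off-diagonal entries <= 0
        offdiag = A - diag(diag(A));
        isZ = all(offdiag(:) <= 0);                 % Z-matrix test
        isNonsingM = isZ && all(all(inv(A) >= 0));  % Lemma 2.1(ii): A^{-1} >= 0
        u = ones(2,1);                              % try the particular vector u = (1,1)^T
        isRegular = isZ && all(A*u >= 0);           % Definition 2.1 holds with this u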

    In this section, we consider the square root of a regular M-matrix, and prove that for a regular M-matrix there exists a regular M-matrix as its square root.

    Let $A=(a_{ij}) \in \mathbb{R}^{n\times n}$ be a regular M-matrix, and let

    $$X = \alpha I - Y, \tag{3.1}$$

    where $\alpha > 0$ is a parameter to be determined. Then Eq (1.1) can be rewritten as

    $$Y^2 - 2\alpha Y + \alpha^2 I - A = 0. \tag{3.2}$$

    Here we choose the parameter $\alpha$ such that $\alpha^2 I - A$ is a nonnegative matrix. Since $A$ is a regular M-matrix, we can easily verify from the definition that $a_{ii} \ge 0$ for $i=1,2,\ldots,n$, where the $a_{ii}$ are the diagonal entries of $A$. In addition, it is evident that $\max_{1\le i\le n} a_{ii} > 0$ unless $A = 0$. Thus, if we choose

    $$\alpha \ge \sqrt{\max_{1\le i\le n} a_{ii}},$$

    then $\alpha^2 I - A$ is a nonnegative matrix.

    In the following, we discuss the existence of a minimal nonnegative solution of Eq (3.2). To this end, we write Eq (3.2) in the form

    $$2\alpha Y = Y^2 + \alpha^2 I - A,$$

    and then consider the iteration

    $$Y_{k+1} = \frac{1}{2\alpha}\left(Y_k^2 + \alpha^2 I - A\right), \qquad Y_0 = 0. \tag{3.3}$$
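
    Iteration (3.3) is straightforward to realize in MATLAB; the following fragment is a minimal sketch of ours (the tolerance and iteration cap are arbitrary choices). It converges only linearly and serves to illustrate the constructive proof below; the doubling algorithm of Section 4 is the practical method.

        alpha = sqrt(max(diag(A)));   % parameter choice from above: alpha >= sqrt(max a_ii)
        n = size(A,1); I = eye(n);
        Y = zeros(n);                 % Y_0 = 0
        for k = 1:5000
            Ynew = (Y*Y + alpha^2*I - A) / (2*alpha);   % iteration (3.3)
            if norm(Ynew - Y, inf) <= 1e-14, Y = Ynew; break; end
            Y = Ynew;
        end
        X = alpha*I - Y;              % candidate square root, cf. (3.1)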

    We have the following result for iteration (3.3).

    Lemma 3.1. Let $A=(a_{ij}) \in \mathbb{R}^{n\times n}$ be a regular M-matrix, and let the parameter $\alpha$ satisfy $\alpha \ge \sqrt{\max_{1\le i\le n} a_{ii}}$. Then the sequence $\{Y_k\}$ generated by iteration (3.3) is well defined, monotonically increasing, and satisfies $Y_k u \le \alpha u$, where $u > 0$ is a positive vector such that $Au \ge 0$.

    Proof. It is evident that the sequence $\{Y_k\}$ is well defined, since only matrix additions and multiplications are used in iteration (3.3).

    We prove the conclusion by mathematical induction.

    (i) The sequence $\{Y_k\}$ is monotonically increasing, i.e.,

    $$0 \le Y_k \le Y_{k+1}, \qquad k \ge 0.$$

    When $k = 0$, it is evident that

    $$0 = Y_0 \le Y_1 = \frac{1}{2\alpha}\left(\alpha^2 I - A\right).$$

    Suppose the conclusion is true for $k-1$. From

    $$Y_{k+1} = \frac{1}{2\alpha}\left(Y_k^2 + \alpha^2 I - A\right), \qquad Y_k = \frac{1}{2\alpha}\left(Y_{k-1}^2 + \alpha^2 I - A\right),$$

    and

    $$Y_{k+1} - Y_k = \frac{1}{2\alpha}\left(Y_k^2 - Y_{k-1}^2\right) \ge 0,$$

    we know the conclusion is true for $k$. So the conclusion holds for all $k \ge 0$.

    (ii) Since $A$ is a regular M-matrix, there is a positive vector $u > 0$ such that $Au \ge 0$. In the following, we prove that $Y_k u \le \alpha u$ holds for all $k \ge 0$.

    When $k = 0$, the conclusion is obvious. Suppose that the conclusion is true for $k-1$. Then

    $$Y_k u = \frac{1}{2\alpha}\left(Y_{k-1}^2 + \alpha^2 I - A\right) u \le \frac{1}{2\alpha}\left(\alpha^2 u + \alpha^2 u\right) = \alpha u,$$

    where we used $Y_{k-1}^2 u \le \alpha Y_{k-1} u \le \alpha^2 u$ (since $Y_{k-1} \ge 0$) and $(\alpha^2 I - A) u \le \alpha^2 u$ (since $Au \ge 0$); thus, the same is true for $k$. So the conclusion holds for all $k \ge 0$.

    Theorem 3.1. Let $A=(a_{ij}) \in \mathbb{R}^{n\times n}$ be a regular M-matrix, and let the parameter $\alpha$ satisfy $\alpha \ge \sqrt{\max_{1\le i\le n} a_{ii}}$. Then Eq (3.2) has a minimal nonnegative solution $Y$, and $\alpha I - Y$ is a regular M-matrix.

    Proof. According to Lemma 3.1, the sequence $\{Y_k\}$ obtained from iteration (3.3) is monotonically increasing and bounded above, so the limit $\lim_{k\to\infty} Y_k = Y$ exists. Taking limits on both sides of (3.3), we see that $Y$ is a solution of Eq (3.2) with $Y \ge 0$.

    If $Z \ge 0$ is another nonnegative solution of Eq (3.2), we can easily verify, as in Lemma 3.1, that the sequence $\{Y_k\}$ obtained from iteration (3.3) satisfies $Y_k \le Z$. Taking limits yields $Y \le Z$, so $Y$ is the minimal nonnegative solution.

    In addition, we obtained $Y_k u \le \alpha u$ for $k \ge 0$ in the proof of Lemma 3.1. Taking limits, we have $Yu \le \alpha u$, so

    $$(\alpha I - Y) u = \alpha u - Yu \ge 0.$$

    Since $Y \ge 0$, the matrix $\alpha I - Y$ is a Z-matrix; thus, by the remark following Definition 2.1, $\alpha I - Y$ is a regular M-matrix.

    According to the above theorem, we obtain the following conclusion.

    Theorem 3.2. Let $A \in \mathbb{R}^{n\times n}$ be a regular M-matrix. Then there exists a square root of $A$, and this square root is a regular M-matrix.

    Proof. By Theorem 3.1, $\alpha I - Y$ is a regular M-matrix, and it is a square root of $A$ since, by (3.1) and (3.2), it satisfies Eq (1.1).

    Corollary 3.1. Let $A \in \mathbb{R}^{n\times n}$ be a regular M-matrix, and let

    $$\alpha_1 > \alpha_2 \ge \sqrt{\max_{1\le i\le n} a_{ii}}$$

    be two parameters. Then $Y_1 \ge Y_2$, where $Y_1$ and $Y_2$ are the minimal nonnegative solutions of Eq (3.2) associated with $\alpha_1$ and $\alpha_2$, respectively.

    Proof. Let $A^{1/2}$ be the square root of $A$ from Theorem 3.2. Since

    $$A^{1/2} = \alpha_1 I - Y_1 = \alpha_2 I - Y_2,$$

    we have $Y_1 - Y_2 = (\alpha_1 - \alpha_2) I \ge 0$, that is, $Y_1 \ge Y_2$.

    Corollary 3.2. Let $A \in \mathbb{R}^{n\times n}$ be a regular M-matrix and $Y$ be the minimal nonnegative solution of Eq (3.2). If $A$ is a nonsingular M-matrix, then $\rho(Y) < \alpha$; if $A$ is singular, then $\rho(Y) = \alpha$.

    Proof. Note that $\alpha I - Y$ is an M-matrix and satisfies Eq (1.1). If $A$ is nonsingular, so is $\alpha I - Y$, and by the definition of a nonsingular M-matrix we know $\rho(Y) < \alpha$. If $A$ is singular, so is $\alpha I - Y$, and thus $\rho(Y) = \alpha$.

    Remark 1. We have proved that, for a regular M-matrix $A$, there exists a square root of $A$ which is a regular M-matrix. However, $A$ may have more than one M-matrix as its square root. For example, consider

    $$A=\begin{pmatrix} 1 & -1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}.$$

    It is evident that $A$ is a regular M-matrix, and $A$ is a square root of itself since $A^2 = A$. In addition,

    $$B=\begin{pmatrix} 1 & -1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & -1 \\ 0 & 0 & 0 & 0 \end{pmatrix}$$

    is an M-matrix (but not a regular one), and $B$ is also a square root of $A$.

    Doubling algorithms are very efficient iterative methods for solving nonlinear matrix equations. Their idea can be traced back to the 1970s, and recently they have been successfully applied to various kinds of nonlinear matrix equations; we refer to [31] for a complete description of doubling algorithms. In this section, we propose a structure-preserving doubling algorithm to compute the square root of a regular M-matrix and then give a theoretical analysis of it.

    The analysis in the previous section shows that, in order to compute the square root of a regular M-matrix, it is only necessary to compute the minimal nonnegative solution of Eq (3.2). To solve (3.2) effectively, we divide both sides of Eq (3.2) by $\alpha^2$ and set $Z = Y/\alpha$ to obtain

    $$Z^2 - 2Z + \frac{1}{\alpha^2}\left(\alpha^2 I - A\right) = 0. \tag{4.1}$$

    By Theorem 3.1 and Corollary 3.2, Eq (4.1) has a minimal nonnegative solution $Z$ such that $\rho(Z) \le 1$. In particular, $\rho(Z) < 1$ when $A$ is a nonsingular M-matrix.

    It can be verified that Eq (4.1) is equivalent to

    $$\begin{pmatrix} 0 & I \\ -\frac{1}{\alpha^2}(\alpha^2 I - A) & 2I \end{pmatrix} \begin{pmatrix} I \\ Z \end{pmatrix} = \begin{pmatrix} I & 0 \\ 0 & I \end{pmatrix} \begin{pmatrix} I \\ Z \end{pmatrix} Z. \tag{4.2}$$

    Pre-multiplying Eq (4.2) by

    $$P=\begin{pmatrix} I & -\frac{1}{2} I \\ 0 & \frac{1}{2} I \end{pmatrix},$$

    we have

    $$M_0 \begin{pmatrix} I \\ Z \end{pmatrix} = N_0 \begin{pmatrix} I \\ Z \end{pmatrix} Z, \tag{4.3}$$

    where

    $$M_0=\begin{pmatrix} \frac{1}{2\alpha^2}(\alpha^2 I - A) & 0 \\ -\frac{1}{2\alpha^2}(\alpha^2 I - A) & I \end{pmatrix} =: \begin{pmatrix} E_0 & 0 \\ -H_0 & I \end{pmatrix}, \qquad N_0=\begin{pmatrix} I & -\frac{1}{2} I \\ 0 & \frac{1}{2} I \end{pmatrix} =: \begin{pmatrix} I & -G_0 \\ 0 & F_0 \end{pmatrix}.$$

    It is clear that the matrix pencil $M_0 - \lambda N_0$ is in the first standard form (SF1) as defined in [31], and it is natural to apply the doubling algorithm for SF1 to solve (4.3).

    The basic idea of the structure-preserving doubling algorithm for SF1 applied to (4.3) is to recursively construct a sequence of pencils $M_k - \lambda N_k$, $k \ge 1$, that satisfy

    $$M_k \begin{pmatrix} I \\ Z \end{pmatrix} = N_k \begin{pmatrix} I \\ Z \end{pmatrix} Z^{2^k}, \tag{4.4}$$

    and at the same time have the same form as $M_0 - \lambda N_0$:

    $$M_k=\begin{pmatrix} E_k & 0 \\ -H_k & I \end{pmatrix}, \qquad N_k=\begin{pmatrix} I & -G_k \\ 0 & F_k \end{pmatrix}. \tag{4.5}$$

    Moreover, the blocks in $M_k$ and $N_k$ can be produced by the following iteration of the doubling algorithm:

    $$E_{k+1} = E_k (I - G_k H_k)^{-1} E_k, \qquad F_{k+1} = F_k (I - H_k G_k)^{-1} F_k,$$
    $$G_{k+1} = G_k + E_k (I - G_k H_k)^{-1} G_k F_k, \qquad H_{k+1} = H_k + F_k (I - H_k G_k)^{-1} H_k E_k.$$

    For Eq (4.1), another matrix equation can be constructed as follows:

    $$\frac{1}{\alpha^2}\left(\alpha^2 I - A\right) W^2 - 2W + I = 0, \tag{4.6}$$

    which is called the dual equation of (4.1). It can be verified, in the same way as for (4.1), that Eq (4.6) has a minimal nonnegative solution $W$ such that $\rho(W) \le 1$. In addition, we have

    $$M_k \begin{pmatrix} W \\ I \end{pmatrix} W^{2^k} = N_k \begin{pmatrix} W \\ I \end{pmatrix}, \qquad k = 0, 1, 2, \ldots, \tag{4.7}$$

    for the matrix pencils $M_k - \lambda N_k$ in (4.5).

    Thus, as long as $E_k$, $F_k$, $G_k$, and $H_k$ are well defined, we will have

    $$M_k \begin{pmatrix} I \\ Z \end{pmatrix} = N_k \begin{pmatrix} I \\ Z \end{pmatrix} Z^{2^k}, \qquad M_k \begin{pmatrix} W \\ I \end{pmatrix} W^{2^k} = N_k \begin{pmatrix} W \\ I \end{pmatrix},$$

    or, equivalently,

    $$Z - H_k = F_k Z^{2^k+1}, \qquad E_k = (I - G_k Z)\, Z^{2^k}, \tag{4.8}$$
    $$W - G_k = E_k W^{2^k+1}, \qquad F_k = (I - H_k W)\, W^{2^k}. \tag{4.9}$$
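
    As a quick numerical sanity check of the invariants (4.8) and (4.9), one can run a few steps of the doubling iteration above and compare both sides. The fragment below is a sketch of ours for a nonsingular M-matrix $A$ in the workspace, using MATLAB's sqrtm to obtain the reference solution $Z = I - A^{1/2}/\alpha$ of (4.1).

        n = size(A,1); I = eye(n);
        alpha = sqrt(max(diag(A)));
        Z = I - sqrtm(A)/alpha;                    % reference minimal solution of (4.1)
        E = (alpha^2*I - A)/(2*alpha^2); H = E;    % E_0 = H_0
        F = I/2; G = I/2;                          % F_0 = G_0
        for k = 1:3                                % three doubling steps
            IGH = I - G*H; IHG = I - H*G;
            [E, F, G, H] = deal(E*(IGH\E), F*(IHG\F), ...
                G + E*(IGH\(G*F)), H + F*(IHG\(H*E)));
        end
        norm(Z - H - F*Z^(2^3+1), inf)             % near zero (up to rounding), cf. (4.8) with k = 3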

    According to the above derivation, the structure-preserving doubling algorithm for computing the square root of a regular M-matrix can be stated as follows:

    Algorithm 4.1 The structure-preserving doubling algorithm (SDA)
    Input: A regular M-matrix $A$;
    Output: A square root of $A$, which is a regular M-matrix.
    Step 1. Set $\alpha = \sqrt{\max_{1\le i\le n} a_{ii}}$;
    Step 2. Compute $E_0 = H_0 = \frac{1}{2\alpha^2}(\alpha^2 I - A)$, $F_0 = G_0 = \frac{1}{2} I$;
    Step 3. For $k = 0, 1, 2, \ldots$, until convergence, compute
        $E_{k+1} = E_k (I - G_k H_k)^{-1} E_k$,
        $F_{k+1} = F_k (I - H_k G_k)^{-1} F_k$,
        $G_{k+1} = G_k + E_k (I - G_k H_k)^{-1} G_k F_k$,
        $H_{k+1} = H_k + F_k (I - H_k G_k)^{-1} H_k E_k$;
    Step 4. Set $A^{1/2} = \alpha (I - Z)$, where $Z = \lim_{k\to\infty} H_k$.
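
    As an illustration, a minimal MATLAB sketch of Algorithm 4.1 follows; the function name sqrtm_sda, the residual-based stopping rule, and the iteration cap are our own choices rather than part of the algorithm as stated (in particular, the paper does not prescribe which norm to use in the residual).

        function X = sqrtm_sda(A, tol, maxit)
        % Sketch of Algorithm 4.1 (SDA) for a regular M-matrix A.
        n = size(A,1); I = eye(n);
        alpha = sqrt(max(diag(A)));                % Step 1
        E = (alpha^2*I - A) / (2*alpha^2);         % Step 2: E_0 = H_0
        H = E; F = I/2; G = I/2;                   %         F_0 = G_0 = I/2
        for k = 1:maxit                            % Step 3: doubling iteration
            IGH = I - G*H; IHG = I - H*G;
            Enew = E*(IGH\E);
            Fnew = F*(IHG\F);
            Gnew = G + E*(IGH\(G*F));
            Hnew = H + F*(IHG\(H*E));
            E = Enew; F = Fnew; G = Gnew; H = Hnew;
            X = alpha*(I - H);                     % Step 4: current approximation to A^{1/2}
            if norm(X*X - A, inf) <= tol*norm(A, inf), break; end
        end
        end

    For instance, X = sqrtm_sda(A, 1e-12, 50) uses the same residual tolerance as the experiments reported in Section 5.
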
    In the following, we give a convergence analysis of the structure-preserving doubling algorithm.

    Lemma 4.1. Let $A \in \mathbb{R}^{n\times n}$ be a regular M-matrix, and let $Z$ and $W$ be the minimal nonnegative solutions of Eqs (4.1) and (4.6), respectively. Then $I - ZW$ and $I - WZ$ are both regular M-matrices. In particular, $I - ZW$ and $I - WZ$ are nonsingular M-matrices if $A$ is nonsingular, and they are irreducible M-matrices if $A$ is irreducible.

    Proof. (i) Since $A$ is a regular M-matrix, there exists a positive vector $u > 0$ such that $Au \ge 0$. First, we can easily verify, as in Lemma 3.1, that the sequences $\{Z_k\}$ and $\{W_k\}$ generated by the iterations

    $$Z_{k+1} = \frac{1}{2}\left(Z_k^2 + \frac{1}{\alpha^2}(\alpha^2 I - A)\right), \qquad Z_0 = 0, \tag{4.10}$$
    $$W_{k+1} = \frac{1}{2}\left(I + \frac{1}{\alpha^2}(\alpha^2 I - A) W_k^2\right), \qquad W_0 = 0, \tag{4.11}$$

    are well defined, monotonically increasing, and satisfy

    $$Z_k u \le u, \qquad W_k u \le u,$$

    for all $k \ge 0$. Taking limits, we have $Zu \le u$ and $Wu \le u$.

    In addition, it is clear that $I - ZW$ and $I - WZ$ are Z-matrices, and noting that

    $$(I - ZW) u = u - ZWu \ge u - Zu \ge 0, \qquad (I - WZ) u = u - WZu \ge u - Wu \ge 0,$$

    we know that $I - ZW$ and $I - WZ$ are both regular M-matrices.

    (ii) When $A$ is a nonsingular M-matrix, there exist vectors $u > 0$ and $v > 0$ such that $Au = v$. We can verify further that the sequence $\{Z_k\}$ generated by (4.10) satisfies

    $$Z_k u \le u - \frac{v}{2\alpha^2}, \qquad k = 0, 1, 2, \ldots.$$

    Taking limits, we obtain

    $$Zu \le u - \frac{v}{2\alpha^2} < u.$$

    Thus

    $$(I - ZW) u = u - ZWu \ge u - Zu > 0, \qquad (I - WZ) u = u - WZu > u - Wu \ge 0,$$

    which means that $I - ZW$ and $I - WZ$ are nonsingular M-matrices.

    When $A$ is irreducible, from (4.10) and (4.11) we can observe that

    $$Z_1 = \frac{1}{2\alpha^2}(\alpha^2 I - A), \qquad W_2 = \frac{1}{2}\left(I + \frac{1}{4\alpha^2}(\alpha^2 I - A)\right)$$

    are irreducible. Since $Z \ge Z_1 \ge 0$ and $W \ge W_2 \ge 0$, we can conclude that $Z$ and $W$ are irreducible nonnegative matrices. Hence $I - ZW$ and $I - WZ$ are irreducible M-matrices.

    By the general theory of doubling algorithms, we can conclude the following convergence result when A is a nonsingular M-matrix or an irreducible singular M-matrix.

    Theorem 4.1. Let $A \in \mathbb{R}^{n\times n}$ be a nonsingular M-matrix or an irreducible singular M-matrix, and let $Z$ and $W$ be the minimal nonnegative solutions of Eqs (4.1) and (4.6), respectively. Then the sequences $\{E_k\}$, $\{F_k\}$, $\{G_k\}$, $\{H_k\}$ generated by Algorithm 4.1 are well defined, and for $k \ge 1$:

    (a) $E_k = (I - G_k Z)\, Z^{2^k} \ge 0$;

    (b) $F_k = (I - H_k W)\, W^{2^k} \ge 0$;

    (c) $I - G_k H_k$ and $I - H_k G_k$ are nonsingular M-matrices;

    (d) $0 \le H_k \le H_{k+1} \le Z$, $0 \le G_k \le G_{k+1} \le W$, and

    $$0 \le Z - H_k \le W^{2^k} Z Z^{2^k}, \qquad 0 \le W - G_k \le Z^{2^k} W W^{2^k}. \tag{4.12}$$


    In addition, we have

    $$\limsup_{k\to\infty} \|Z - H_k\|^{1/2^k} \le \rho(Z)\,\rho(W), \qquad \limsup_{k\to\infty} \|W - G_k\|^{1/2^k} \le \rho(Z)\,\rho(W).$$

    Proof. We prove the conclusions by mathematical induction.

    (i) When $k = 1$, we first observe that $E_0$, $F_0$, $G_0$, and $H_0$ are well defined and all nonnegative. In addition, since

    $$I - H_0 G_0 = I - G_0 H_0 = I - \frac{1}{4\alpha^2}\left(\alpha^2 I - A\right) = \frac{1}{4\alpha^2}\left(3\alpha^2 I + A\right),$$

    both $I - H_0 G_0$ and $I - G_0 H_0$ are nonsingular M-matrices by Lemmas 2.2 and 2.3. Thus $E_1$, $F_1$, $G_1$, and $H_1$ are well defined. Moreover, from the iteration of the doubling algorithm we have

    $$E_1 = E_0 (I - G_0 H_0)^{-1} E_0 \ge 0, \qquad F_1 = F_0 (I - H_0 G_0)^{-1} F_0 \ge 0,$$
    $$G_1 = G_0 + E_0 (I - G_0 H_0)^{-1} G_0 F_0 \ge G_0, \qquad H_1 = H_0 + F_0 (I - H_0 G_0)^{-1} H_0 E_0 \ge H_0.$$

    Setting $k = 1$ in (4.8) and (4.9) gives

    $$Z - H_1 = F_1 Z^3 \ge 0, \qquad E_1 = (I - G_1 Z)\, Z^2 \ge 0,$$
    $$W - G_1 = E_1 W^3 \ge 0, \qquad F_1 = (I - H_1 W)\, W^2 \ge 0,$$

    thus we have

    $$0 \le H_0 \le H_1 \le Z, \qquad 0 \le G_0 \le G_1 \le W.$$

    From Lemma 4.1, and noting that

    $$I - H_1 G_1 \ge I - ZW, \qquad I - G_1 H_1 \ge I - WZ,$$

    we know that $I - H_1 G_1$ and $I - G_1 H_1$ are nonsingular M-matrices. Furthermore, we have

    $$0 \le Z - H_1 = F_1 Z^3 = (I - H_1 W) W^2 Z^3 \le W^2 Z Z^2,$$
    $$0 \le W - G_1 = E_1 W^3 = (I - G_1 Z) Z^2 W^3 \le Z^2 W W^2.$$

    This completes the proof for k=1.

    (ii) Suppose now that the conclusions are true for $k = l$. From the iteration of the doubling algorithm, $E_{l+1}$, $F_{l+1}$, $G_{l+1}$, and $H_{l+1}$ are well defined and satisfy

    $$E_{l+1} = E_l (I - G_l H_l)^{-1} E_l \ge 0, \qquad F_{l+1} = F_l (I - H_l G_l)^{-1} F_l \ge 0,$$
    $$G_{l+1} = G_l + E_l (I - G_l H_l)^{-1} G_l F_l \ge G_l, \qquad H_{l+1} = H_l + F_l (I - H_l G_l)^{-1} H_l E_l \ge H_l.$$

    On the other hand, setting $k = l+1$ in (4.8) and (4.9) gives

    $$Z - H_{l+1} = F_{l+1} Z^{2^{l+1}+1} \ge 0, \qquad E_{l+1} = (I - G_{l+1} Z)\, Z^{2^{l+1}} \ge 0,$$
    $$W - G_{l+1} = E_{l+1} W^{2^{l+1}+1} \ge 0, \qquad F_{l+1} = (I - H_{l+1} W)\, W^{2^{l+1}} \ge 0,$$

    so that $0 \le H_l \le H_{l+1} \le Z$ and $0 \le G_l \le G_{l+1} \le W$.

    Thus, we have

    $$I - H_{l+1} G_{l+1} \ge I - ZW, \qquad I - G_{l+1} H_{l+1} \ge I - WZ,$$

    which means that $I - H_{l+1} G_{l+1}$ and $I - G_{l+1} H_{l+1}$ are nonsingular M-matrices from Lemma 4.1. Furthermore, we have

    $$0 \le Z - H_{l+1} = F_{l+1} Z^{2^{l+1}+1} = (I - H_{l+1} W) W^{2^{l+1}} Z^{2^{l+1}+1} \le W^{2^{l+1}} Z Z^{2^{l+1}},$$
    $$0 \le W - G_{l+1} = E_{l+1} W^{2^{l+1}+1} = (I - G_{l+1} Z) Z^{2^{l+1}} W^{2^{l+1}+1} \le Z^{2^{l+1}} W W^{2^{l+1}}.$$

    Thus, the conclusions are true for k=l+1.

    By induction, the conclusions are true for all $k \ge 1$.

    In addition, from (4.12) we can obtain

    $$\|Z - H_k\|^{1/2^k} \le \|W^{2^k}\|^{1/2^k} \, \|Z\|^{1/2^k} \, \|Z^{2^k}\|^{1/2^k},$$
    $$\|W - G_k\|^{1/2^k} \le \|Z^{2^k}\|^{1/2^k} \, \|W\|^{1/2^k} \, \|W^{2^k}\|^{1/2^k}.$$

    Letting $k \to \infty$, and noting that $\|Z\|^{1/2^k} \to 1$, $\|W\|^{1/2^k} \to 1$, while $\lim_{k\to\infty}\|Z^{2^k}\|^{1/2^k} = \rho(Z)$ and $\lim_{k\to\infty}\|W^{2^k}\|^{1/2^k} = \rho(W)$ by Gelfand's spectral radius formula, we have

    $$\limsup_{k\to\infty} \|Z - H_k\|^{1/2^k} \le \rho(Z)\,\rho(W), \qquad \limsup_{k\to\infty} \|W - G_k\|^{1/2^k} \le \rho(Z)\,\rho(W).$$

    Remark 2. We have proved the convergence of the structure-preserving doubling algorithm only for the nonsingular M-matrix and the irreducible singular M-matrix; when $A$ is a general regular M-matrix, the convergence analysis is much more complicated, and we refer to [31] for a complete convergence analysis of structure-preserving doubling algorithms. In addition, from Theorem 4.1 we can observe that when $A$ is a nonsingular M-matrix, the convergence rate is quadratic since $\rho(Z)\rho(W) < 1$, while when $A$ is an irreducible singular M-matrix, the convergence rate is linear.

    In this section, we use some numerical examples to show the feasibility and effectiveness of the structure-preserving doubling algorithm (Algorithm 4.1). We compare Algorithm 4.1 (SDA) with the basic Newton method (Newton) in [2]. The numerical results are presented in terms of the number of iterations (IT), CPU time in seconds (CPU), and residual (Res), where the residual is defined by

    $$\mathrm{Res} = \frac{\|X^2 - A\|}{\|A\|}.$$

    In our implementation, all iterations are performed in Matlab (R2018a) on a personal computer with a 2 GHz CPU and 8 GB of memory, and are terminated when the current iterate satisfies $\mathrm{Res} < 10^{-12}$.
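
    For reference, the fragment below sketches one widely used form of the Newton iteration for $X^2 = A$, namely $X_{k+1} = \frac{1}{2}(X_k + X_k^{-1} A)$ with $X_0 = A$ (see [2]). We stress as an assumption that this simplified variant need not coincide with the exact Newton implementation timed in the tables; moreover, the starting matrix $X_0 = A$ presumes that $A$ is nonsingular.

        X = A;                                          % X_0 = A commutes with A
        for k = 1:100
            X = 0.5*(X + X\A);                          % Newton step for X^2 = A
            if norm(X*X - A, inf) <= 1e-12*norm(A, inf), break; end
        end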

    Example 5.1. Consider the nonsingular M-matrix

    $$A=\begin{pmatrix} D & -I & & \\ -I & D & \ddots & \\ & \ddots & \ddots & -I \\ & & -I & D \end{pmatrix} \in \mathbb{R}^{n\times n}, \qquad D=\begin{pmatrix} 4 & -1 & & \\ -1 & 4 & \ddots & \\ & \ddots & \ddots & -1 \\ & & -1 & 4 \end{pmatrix} \in \mathbb{R}^{m\times m},$$

    where $n = m^2$. For different values of $m$, we apply both methods to compute the square root of $A$ (see the construction sketch below). The numerical results are shown in Table 1.
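
    This test matrix is of discrete Laplacian type and can be generated, for example, with Kronecker products (a construction sketch of ours):

        m = 10; n = m^2;
        e = ones(m,1);
        D = spdiags([-e 4*e -e], -1:1, m, m);             % D = tridiag(-1, 4, -1)
        T = spdiags([e e], [-1 1], m, m);                 % sub-/super-diagonal pattern
        A = full(kron(speye(m), D) - kron(T, speye(m)));  % -I off-diagonal blocks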

    Table 1.  Numerical results of Example 5.1.
    m Method IT CPU RES
    10 Newton 6 0.0645 7.5213e-16
    SDA 6 0.0352 3.8339e-13
    15 Newton 6 0.3145 1.1951e-15
    SDA 7 0.1086 8.4507e-16
    20 Newton 6 1.3102 4.1972e-14
    SDA 7 0.5396 4.6757e-14
    25 Newton 7 6.5119 1.6059e-15
    SDA 8 1.8972 1.5041e-15
    30 Newton 7 19.8473 1.5212e-15
    SDA 8 4.9366 1.1492e-15


    It can be seen from the table that both methods attain the required accuracy for the various values of $m$. In particular, the Newton method needs slightly fewer iterations than SDA, while SDA requires much less CPU time.

    Example 5.2. Consider the nonsingular M-matrix generated by the following MATLAB commands:

        a = rand(n,n); A = diag(a*ones(n,1)) - a + eye(n);

    For different sizes of n, we apply both methods to calculate the square root of A. The numerical results are reported in Table 2.

    Table 2.  Numerical results of Example 5.2.
    n Method IT CPU RES
    100 Newton 7 0.0955 3.6745e-14
    SDA 7 0.0310 6.8434e-16
    200 Newton 8 0.4543 1.0868e-15
    SDA 7 0.1041 3.8228e-13
    300 Newton 8 1.1642 1.2259e-15
    SDA 8 0.2740 1.0488e-15
    400 Newton 8 2.6180 1.4675e-14
    SDA 8 0.5141 9.9678e-16
    500 Newton 8 5.3220 3.6847e-13
    SDA 8 0.9228 1.4307e-15


    From Table 2, it can be seen that both methods can obtain results satisfying the accuracy. In particular, SDA is much cheaper than the Newton method in terms of CPU time.

    Example 5.3. Consider the irreducible singular M-matrix generated by the following MATLAB commands:

        a = rand(n,n); A = diag(a*ones(n,1)) - a;

    For different sizes of n, the numerical results are reported in Table 3.

    Table 3.  Numerical results of Example 5.3.
    n Method IT CPU RES
    100 Newton 22 0.2288 7.1685e-13
    SDA 20 0.0820 4.5514e-13
    200 Newton 23 1.1957 2.8263e-13
    SDA 20 0.2292 4.5569e-13
    300 Newton 21 2.9924 3.5541e-13
    SDA 20 0.5114 4.5553e-13
    400 Newton 26 8.5343 2.7856e-13
    SDA 20 1.2232 4.5548e-13
    500 Newton 29 19.2009 3.6264e-13
    SDA 20 2.5535 4.5548e-13


    In this example, due to the singularity of $A$, the convergence rates of both methods are linear. In particular, SDA requires slightly fewer iterations than the Newton method and is much cheaper in terms of CPU time.

    Example 5.4. Consider the following regular singular M-matrix:

    $$A=\begin{pmatrix} 1 & -1 & 0 \\ -1 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix}.$$

    Its square root (rounded to four decimal places) is

    $$A^{1/2}=\begin{pmatrix} 0.7071 & -0.7071 & 0 \\ -0.7071 & 0.7071 & 0 \\ 0 & 0 & 0 \end{pmatrix}.$$
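
    This example can be reproduced with the SDA sketch given after Algorithm 4.1 (sqrtm_sda is our hypothetical helper, not code from the paper):

        A = [1 -1 0; -1 1 0; 0 0 0];
        X = sqrtm_sda(A, 1e-12, 100);   % converges linearly here since A is singular
        norm(X*X - A, inf)              % small residual; X matches A^{1/2} shown above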

    The numerical results are shown in Table 4. For this example, the Newton method fails to converge, but SDA still attains the required accuracy.

    Table 4.  Numerical results of Example 5.4.
    Method IT RES
    Newton - -
    SDA 21 1.5486e-16


    From the above four examples, it can be seen that the SDA proposed in this paper is feasible. In particular, it requires much less CPU time than the Newton method.

    We studied the square root of M-matrices in this paper and proved that the square root of a regular M-matrix exists and is itself a regular M-matrix. In addition, we proposed a structure-preserving doubling algorithm (SDA) to compute the square root. Theoretical analysis and numerical experiments have shown that SDA is feasible and effective under certain conditions.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    The authors would like to express their gratitude to the editor and the two anonymous reviewers for their valuable comments and suggestions, which have greatly improved the quality of the paper. This work was supported by the National Natural Science Foundation of China (12001395).

    The authors declare there are no conflicts of interest.



    [1] R. A. Horn, C. R. Johnson, Topics in Matrix Analysis, Cambridge University Press, Cambridge, 1991.
    [2] N. J. Higham, Functions of Matrices: Theory and Computation, SIAM, Philadelphia, 2008.
    [3] N. J. Higham, A. H. Al-Mohy, Computing matrix functions, Acta Numer., 19 (2010), 159–208. https://doi.org/10.1017/S0962492910000036
    [4] D. A. Bini, B. Iannazzo, B. Meini, Numerical Solution of Algebraic Riccati Equations, SIAM, Philadelphia, 2012.
    [5] G. Alefeld, N. Schneider, On square roots of M-matrices, Linear Algebra Appl., 42 (1982), 119–132. https://doi.org/10.1016/0024-3795(82)90143-4
    [6] L. Lin, Z. Y. Liu, On the square root of an H-matrix with positive diagonal elements, Ann. Oper. Res., 103 (2001), 339–350. https://doi.org/10.1023/A:1012931928589
    [7] C. R. Johnson, K. Okubo, R. Reams, Uniqueness of matrix square roots and an application, Linear Algebra Appl., 323 (2001), 51–60. https://doi.org/10.1016/S0024-3795(00)00243-3
    [8] Z. Y. Liu, Y. L. Zhang, R. Ralha, Computing the square roots of matrices with central symmetry, Appl. Math. Comput., 186 (2007), 715–726. https://doi.org/10.1016/j.amc.2006.08.032
    [9] J. R. Cardoso, C. S. Kenney, F. S. Leite, Computing the square root and logarithm of a real P-orthogonal matrix, Appl. Numer. Math., 46 (2003), 173–196. https://doi.org/10.1016/S0168-9274(03)00033-3
    [10] C. B. Lu, C. Q. Gu, The computation of the square root of circulant matrices, Appl. Math. Comput., 217 (2011), 6819–6829. https://doi.org/10.1016/j.amc.2011.01.018
    [11] Z. Y. Liu, Y. L. Zhang, J. Santos, R. Ralha, On computing complex square roots of real matrices, Appl. Math. Lett., 25 (2012), 1565–1568. https://doi.org/10.1016/j.aml.2012.01.015
    [12] P. D. Moral, A. Niclas, A Taylor expansion of the square root matrix function, J. Math. Anal. Appl., 465 (2018), 259–266. https://doi.org/10.1016/j.jmaa.2018.05.005
    [13] Å. Björck, S. Hammarling, A Schur method for the square root of a matrix, Linear Algebra Appl., 52/53 (1983), 127–140. https://doi.org/10.1016/0024-3795(83)80010-X
    [14] N. J. Higham, Computing real square roots of a real matrix, Linear Algebra Appl., 88–89 (1987), 405–430. https://doi.org/10.1016/0024-3795(87)90118-2
    [15] N. J. Higham, Newton's method for the matrix square root, Math. Comput., 46 (1986), 537–549. https://doi.org/10.2307/2007992
    [16] N. J. Higham, Stable iterations for the matrix square root, Numer. Algorithms, 15 (1997), 227–242. https://doi.org/10.1023/A:1019150005407
    [17] E. D. Denman, A. N. Beavers, The matrix sign function and computations in systems, Appl. Math. Comput., 2 (1976), 63–94. https://doi.org/10.1016/0096-3003(76)90020-5
    [18] M. A. Hasan, A power method for computing square roots of complex matrices, J. Math. Anal. Appl., 213 (1997), 393–405. https://doi.org/10.1006/jmaa.1997.5517
    [19] B. Meini, The matrix square root from a new functional perspective: Theoretical results and computational issues, SIAM J. Matrix Anal. Appl., 26 (2004), 362–376. https://doi.org/10.1137/S0895479803426656
    [20] D. A. Bini, B. Meini, The cyclic reduction algorithm: from Poisson equation to stochastic processes and beyond, Numer. Algorithms, 51 (2009), 23–60. https://doi.org/10.1007/s11075-008-9253-0
    [21] A. Frommer, B. Hashemi, Verified computation of square roots of a matrix, SIAM J. Matrix Anal. Appl., 31 (2009), 1279–1302. https://doi.org/10.1137/090757058
    [22] A. Sadeghi, Approximating the principal matrix square root using some novel third-order iterative methods, Ain Shams Eng. J., 9 (2018), 993–999. https://doi.org/10.1016/j.asej.2016.06.004
    [23] C. Mo, D. Gerontitis, P. S. Stanimirović, Solving the time-varying tensor square root equation by varying-parameters finite-time Zhang neural network, Neurocomputing, 445 (2021), 309–325. https://doi.org/10.1016/j.neucom.2021.03.011
    [24] E. S. Gawlik, Zolotarev iterations for the matrix square root, SIAM J. Matrix Anal. Appl., 40 (2019), 696–719. https://doi.org/10.1137/18M1178529
    [25] C. H. Guo, Explicit convergence regions of Newton's method and Chebyshev's method for the matrix pth root, Linear Algebra Appl., 583 (2019), 63–76. https://doi.org/10.1016/j.laa.2019.08.020
    [26] S. Miyajima, Fast enclosure for a matrix inverse square root, Linear Algebra Appl., 467 (2015), 116–135. https://doi.org/10.1016/j.laa.2014.11.007
    [27] X. F. Duan, C. Y. Wang, C. M. Li, Newton's method for solving the tensor square root problem, Appl. Math. Lett., 98 (2019), 57–62. https://doi.org/10.1016/j.aml.2019.05.031
    [28] D. Lu, C. H. Guo, Explicit p-dependent convergence regions of Newton's method for the matrix pth root, Appl. Math. Lett., 122 (2021), 107566. https://doi.org/10.1016/j.aml.2021.107566
    [29] A. Berman, R. J. Plemmons, Nonnegative Matrices in the Mathematical Sciences, Academic Press, New York, 1994.
    [30] C. H. Guo, On algebraic Riccati equations associated with M-matrices, Linear Algebra Appl., 439 (2013), 2800–2814. https://doi.org/10.1016/j.laa.2013.08.018
    [31] T. M. Huang, R. C. Li, W. W. Lin, Structure-Preserving Doubling Algorithms for Nonlinear Matrix Equations, SIAM, Philadelphia, 2018.
© 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0).