Research article

Structured conditioning theory for the total least squares problem with linear equality constraint and their estimation

  • Received: 23 August 2022 Revised: 02 December 2022 Accepted: 11 December 2022 Published: 13 March 2023
  • MSC : 15A12, 15A60, 65F20, 65F30, 65F35

  • This article is devoted to the structured and unstructured condition numbers for the total least squares with linear equality constraint (TLSE) problem. By making use of the dual techniques, we investigate three distinct kinds of unstructured condition numbers for a linear function of the TLSE solution and three structured condition numbers for this problem, i.e., normwise, mixed, and componentwise ones, and present their explicit expressions under both unstructured and structured componentwise perturbations. In addition, the relations between structured and unstructured normwise, componentwise, and mixed condition numbers for the TLSE problem are investigated. Furthermore, using the small-sample statistical condition estimation method, we also consider the statistical estimation of both unstructured and structured condition numbers and propose three algorithms. Theoretical and experimental results show that structured condition numbers are always smaller than the corresponding unstructured condition numbers.

    Citation: Mahvish Samar, Xinzhong Zhu. Structured conditioning theory for the total least squares problem with linear equality constraint and their estimation[J]. AIMS Mathematics, 2023, 8(5): 11350-11372. doi: 10.3934/math.2023575




    The total least squares with linear equality constraint (TLSE) problem is stated as

$$ \min_{G,h}\ \|[G,\ h]\|_F,\qquad \text{subject to}\quad (A+G)x=b+h,\quad Qx=d, \tag{1.1} $$

where $Q\in\mathbb{R}^{p\times n}$ has full row rank and $\begin{bmatrix}Q\\ A\end{bmatrix}$ has full column rank, with $A\in\mathbb{R}^{q\times n}$, $b\in\mathbb{R}^{q}$ and $d\in\mathbb{R}^{p}$. As proved in Eq (17) of [1], if the following genericity condition holds:

$$ \bar\sigma_{n-p}>\tilde\sigma_{n-p+1}, \tag{1.2} $$

then the TLSE problem has a unique solution

$$ x=Q_Ad+HA^Tb, \tag{1.3} $$

    where

$$ H=\big(B(A^TA-\tilde\sigma_{n-p+1}^2I_n)B\big)^{\dagger},\qquad Q_A=(I_n-HA^TA)Q^{\dagger},\qquad B=I_n-Q^{\dagger}Q. $$
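For concreteness, the closed-form solution (1.3) can be evaluated directly from the data. The following is a minimal NumPy sketch under the assumption that $\tilde\sigma_{n-p+1}$ has already been obtained as described in [1] (its computation is not shown; the function and variable names are ours).

```python
import numpy as np

def tlse_solution(A, b, Q, d, sigma_tilde):
    """Hedged sketch of the closed-form TLSE solution (1.3).

    sigma_tilde stands for the value appearing in the genericity condition (1.2);
    how it is obtained from the data is described in [1] and is taken as given here.
    """
    n = A.shape[1]
    Q_pinv = np.linalg.pinv(Q)
    B = np.eye(n) - Q_pinv @ Q                        # B = I_n - Q^+ Q
    H = np.linalg.pinv(B @ (A.T @ A - sigma_tilde**2 * np.eye(n)) @ B)
    Q_A = (np.eye(n) - H @ A.T @ A) @ Q_pinv          # Q_A = (I_n - H A^T A) Q^+
    return Q_A @ d + H @ A.T @ b                      # x = Q_A d + H A^T b
```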

The TLSE problem reduces to the total least squares (TLS) problem [2,3] when $Q=0$ and $d=0$, to the least squares problem with equality constraint [4,5,6,7,8] when $G=0$, and to the mixed least squares-total least squares problem [3,9] when $Q=0$, $d=0$ and some columns of $G$ are zero. The TLSE problem was first studied by Dowling et al. [10] in 1992, who introduced the problem and showed how to solve it by using SVD and QR matrix factorizations, whereas Schaffrin and Felus [11,12] investigated an iterative method of constrained total least squares estimation and an algorithmic approach to the TLSE problem with linear and quadratic constraints by using the Euler-Lagrange theorem. Liu et al. [13] suggested a QR-based inverse iteration method. Liu and colleagues [1,14] presented the perturbation analysis and condition numbers of the TLSE problem.

Condition numbers play a vital role in estimating the forward errors of algorithms [15,16,17]. Recent studies of condition numbers for related problems, such as the TLS, TLSE, multidimensional, mixed least squares, truncated and scaled TLS problems, can be found in [1,14,18,19,20,21,22,23,24]. Structured TLS problems have received significant attention in recent years (see [25,26,27]). This has drawn many authors' attention to structured condition numbers, including those of the TLS problem [28,29], the truncated TLS problem [30], the scaled TLS problem [31] and the mixed LS-TLS problem [32]. The analysis of structured perturbations of the input data is essential for structured TLS problems: structure-preserving methods retain the underlying matrix structure, which can improve both the accuracy and the efficiency of the computation, and this also motivates the study of the structured TLSE problem.

As far as we know, no work has been done on structured condition numbers for the TLSE problem. The main purpose of this work is therefore to study structured condition numbers for the TLSE problem, their relationships to the unstructured condition numbers, and their statistical estimation. In many situations, computing the exact unstructured and structured condition numbers is too expensive to be used directly for bounding the forward error, which makes reliable statistical estimation of these condition numbers attractive and interesting.

In particular, we derive the unstructured condition numbers for a linear function of the TLSE solution by using the dual technique in Section 3. The dual technique was first introduced in [33] for the mixed and componentwise condition numbers of the least squares problem. Later, it was applied to obtain the mixed and componentwise condition numbers of the total least squares problem [28], the weighted least squares problem [34] and the constrained and weighted least squares problem [35]. As described in [33], the dual technique allows us to derive condition numbers by maximizing a linear function over a space of smaller dimension than the data space. In Section 4, the explicit expressions of the relevant structured condition numbers are provided, and the links between the unstructured condition numbers of the TLSE problem and their structured counterparts are investigated. We also discuss how to recover the expressions of the structured condition numbers for the solution of the TLS problem from the derivative of the TLSE problem. Since it is expensive to compute these condition numbers exactly, we consider the statistical estimation of both the structured and unstructured condition numbers by using the small-sample statistical condition estimation (SSCE) method [36] and design three algorithms in Section 5, where we also provide numerical results demonstrating the accuracy of the proposed algorithms. Section 2 describes some useful preliminaries, and Section 6 gives brief conclusions.

In this section, we recall some necessary definitions and results about dual techniques, which will be used throughout the paper.

Consider a linear operator $J:X\to Y$ between two Euclidean spaces $X$ and $Y$ with scalar products $\langle\cdot,\cdot\rangle_X$ and $\langle\cdot,\cdot\rangle_Y$, respectively, and denote the corresponding norms by $\|\cdot\|_X$ and $\|\cdot\|_Y$. Here we recall the definitions of the adjoint operator and the dual norm.

Definition 2.1. The adjoint operator $J^*:Y\to X$ of $J$ is defined by

$$ \langle y,\ Jx\rangle_Y=\langle J^*y,\ x\rangle_X $$

for all $(x,y)\in X\times Y$.

Definition 2.2. The dual norm $\|\cdot\|_{X^*}$ of $\|\cdot\|_X$ is

$$ \|x\|_{X^*}=\max_{u\neq0}\frac{\langle x,\ u\rangle_X}{\|u\|_X}. $$

It is known that, for the canonical scalar product in $\mathbb{R}^{n}$, the dual norms of the standard vector norms are given by

$$ \|\cdot\|_{1^*}=\|\cdot\|_\infty,\qquad \|\cdot\|_{\infty^*}=\|\cdot\|_1\qquad\text{and}\qquad \|\cdot\|_{2^*}=\|\cdot\|_2. $$

For matrices in $\mathbb{R}^{m\times n}$ equipped with the scalar product $\langle A,\ B\rangle=\operatorname{trace}(A^TB)$, the dual of the Frobenius norm is the Frobenius norm itself, i.e., $\|A\|_{F^*}=\|A\|_F$.

Let $\|\cdot\|_{X,Y}$ denote the operator norm induced by $\|\cdot\|_X$ and $\|\cdot\|_Y$ for a linear operator from $X$ to $Y$, and let $\|\cdot\|_{Y^*,X^*}$ denote the operator norm induced by the dual norms $\|\cdot\|_{Y^*}$ and $\|\cdot\|_{X^*}$ for a linear operator from $Y$ to $X$.

    The above discussion implies the following results [33].

Lemma 2.3. Assume that $J$ is a linear operator from $X$ to $Y$; then

$$ \|J\|_{X,Y}=\|J^*\|_{Y^*,X^*}. $$

If the Euclidean space $Y$ has a smaller dimension than $X$, then it is preferable to compute $\|J^*\|_{Y^*,X^*}$ instead of $\|J\|_{X,Y}$, as explained in [33].

According to [15], the absolute condition number of $\varphi$ at $y\in X$ is defined as

$$ \kappa=\|d\varphi(y)\|_{X,Y}=\max_{\|z\|_X=1}\|d\varphi(y)\,z\|_Y, \tag{2.1} $$

where $\varphi$ is Fréchet differentiable in a neighborhood of $y\in X$ and $d\varphi(y)$ denotes the Fréchet derivative of $\varphi$ at $y$. The relative normwise condition number for nonzero $\varphi(y)$ is given by

$$ \kappa_n=\frac{\kappa\,\|y\|_X}{\|\varphi(y)\|_Y}. \tag{2.2} $$

By using Lemma 2.3, we can express $\kappa$ in terms of the adjoint operator and the dual norms as follows:

$$ \kappa=\max_{\|dy\|_X=1}\|d\varphi(y)\,dy\|_Y=\max_{\|z\|_{Y^*}=1}\big\|(d\varphi(y))^*z\big\|_{X^*}. \tag{2.3} $$

Let $X=\mathbb{R}^{n}$ be the data space for the componentwise metric and, for given input data $y\in\mathbb{R}^{n}$, let $X_y$ denote the subset of all elements $dy\in\mathbb{R}^{n}$ satisfying $dy_i=0$ whenever $y_i=0$, $1\le i\le n$. The perturbation $dy\in X_y$ of $y$ can then be measured by the following componentwise norm with respect to $y$:

$$ \|dy\|_c=\min\{\omega:\ |dy_i|\le\omega\,|y_i|,\ i=1,\dots,n\}. $$

    Equivalently,

$$ \|dy\|_c=\max\Big\{\frac{|dy_i|}{|y_i|}:\ y_i\neq0\Big\}=\Big\|\Big(\frac{|dy_i|}{|y_i|}\Big)_i\Big\|_\infty. \tag{2.4} $$

By Eq (2.16) in [28], the dual norm of (2.4) can be written as

$$ \|dy\|_{c^*}=\big\|\big(|dy_1|\,|y_1|,\ \dots,\ |dy_n|\,|y_n|\big)\big\|_1=\sum_{i=1}^{n}|dy_i|\,|y_i|. \tag{2.5} $$

    Using the above componentwise norm, we can rewrite the condition number κ.

Lemma 2.4. [28,33] Under the above assumptions and with the componentwise norm defined in (2.4), the condition number $\kappa$ can be expressed as

$$ \kappa=\max_{\|z\|_{Y^*}=1}\big\|(d\varphi(y))^*z\big\|_{c^*}, $$

where $\|\cdot\|_{c^*}$ is given by (2.5).

    Next, we present some necessary results for the TLSE problem, which will be used throughout the paper.

    Lemma 2.5. Let

$$ \tilde A=[A,\ b],\qquad \tilde Q=[Q,\ d],\qquad K=\begin{bmatrix}Q\\ A\end{bmatrix},\qquad f=\begin{bmatrix}d\\ b\end{bmatrix}. $$

    Consider the following linear function φ of the TLSE solution:

$$ \varphi:\mathbb{R}^{m\times n}\times\mathbb{R}^{m}\to\mathbb{R}^{l},\qquad (K,f)\mapsto\varphi(K,f)=L(Q_Ad+HA^Tb), \tag{2.6} $$

where $L\in\mathbb{R}^{l\times n}$ and $m=p+q$. By [14, Theorem 3.2], under the genericity assumption (1.2) the map $\varphi$ is continuous. Moreover, $\varphi$ is Fréchet differentiable at $(K,f)$, and its Fréchet derivative is

$$ \mathbf{g}:=d\varphi(K,f)(dK,df)=L\Big(\frac{2HA^Trt^T}{\|r\|_2^2}-[Q_A,\ HA^T]\Big)(dK\,x-df)-LH\,dK^Tt, \tag{2.7} $$

where $dK\in\mathbb{R}^{m\times n}$, $df\in\mathbb{R}^{m}$, $t^T=r^T[-\tilde A\tilde Q^{\dagger},\ I_q]$ and $r=Ax-b$.

Using the 'vec' operator and the identity $\mathrm{vec}(AXB)=(B^T\otimes A)\,\mathrm{vec}(X)$, we obtain

$$ d\varphi(K,f)(dK,df)=\Big[x^T\otimes L\Big(\frac{2HA^Trt^T}{\|r\|_2^2}-[Q_A,\ HA^T]\Big)-L(H\otimes t^T),\ \ -L\Big(\frac{2HA^Trt^T}{\|r\|_2^2}-[Q_A,\ HA^T]\Big)\Big]\begin{bmatrix}\mathrm{vec}(dK)\\ df\end{bmatrix}=W_{K,f}\begin{bmatrix}\mathrm{vec}(dK)\\ df\end{bmatrix}, \tag{2.8} $$

    where

$$ W_{K,f}=\Big[x^T\otimes L\Big(\frac{2HA^Trt^T}{\|r\|_2^2}-[Q_A,\ HA^T]\Big)-L(H\otimes t^T),\ \ -L\Big(\frac{2HA^Trt^T}{\|r\|_2^2}-[Q_A,\ HA^T]\Big)\Big]. $$
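The Kronecker identity used to obtain (2.8) can be verified numerically; a minimal check (the dimensions are arbitrary choices of ours):

```python
import numpy as np

# Check of vec(AXB) = (B^T kron A) vec(X); vec stacks columns, so arrays are
# flattened in Fortran (column-major) order.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
X = rng.standard_normal((4, 5))
B = rng.standard_normal((5, 2))

lhs = (A @ X @ B).flatten(order='F')
rhs = np.kron(B.T, A) @ X.flatten(order='F')
assert np.allclose(lhs, rhs)
```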

    Using (2.3) and (2.8), the absolute normwise condition number of φ for the TLSE solution can be expressed as follows:

$$ \kappa(K,f)=\|W_{K,f}\|_2. $$

    The relative normwise condition number corresponding to κ(K,f) is given by

$$ \kappa_n(K,f)=\frac{\kappa(K,f)\,\|[K,\ f]\|_F}{\|Lx\|_2}. \tag{2.9} $$

When the data are sparse or badly scaled, componentwise perturbation analysis is more appropriate for investigating the conditioning of the TLSE problem. Using the dual techniques discussed in the previous section, we now derive explicit expressions of the unstructured mixed and componentwise condition numbers of the TLSE problem. Additionally, we show that the derived expressions are mathematically equal to the earlier ones in [1]. Before moving on to the main results, we first present the following lemma.

Lemma 3.1. The adjoint operator of the Fréchet derivative $\mathbf{g}(dK,df)$ in (2.7) is given by

$$ \mathbf{g}^*:\mathbb{R}^{l}\to\mathbb{R}^{m\times n}\times\mathbb{R}^{m},\qquad u\mapsto\Big(\Big[\frac{2HA^Trt^T}{\|r\|_2^2}-[Q_A,\ HA^T]\Big]^TL^Tu\,x^T-t\,u^TLH,\ \ -\Big[\frac{2HA^Trt^T}{\|r\|_2^2}-[Q_A,\ HA^T]\Big]^TL^Tu\Big). $$

Proof. Let $\mathbf{g}_1$ and $\mathbf{g}_2$ denote the parts of (2.7) that depend on $dK$ and on $df$, respectively. For any $u\in\mathbb{R}^{l}$, using the scalar product in the matrix space we get

$$ \langle u,\mathbf{g}_1\rangle=u^TL\Big[\Big(\frac{2HA^Trt^T}{\|r\|_2^2}-[Q_A,\ HA^T]\Big)dK\,x-H\,dK^Tt\Big]=\operatorname{trace}\Big(x\,u^TL\Big(\frac{2HA^Trt^T}{\|r\|_2^2}-[Q_A,\ HA^T]\Big)dK\Big)-\operatorname{trace}\big(t\,u^TLH\,dK^T\big)=\Big\langle\Big[\frac{2HA^Trt^T}{\|r\|_2^2}-[Q_A,\ HA^T]\Big]^TL^Tu\,x^T-t\,u^TLH,\ dK\Big\rangle. $$

For $\mathbf{g}_2$, we have

$$ \langle u,\mathbf{g}_2\rangle=-u^TL\Big[\frac{2HA^Trt^T}{\|r\|_2^2}-[Q_A,\ HA^T]\Big]df=\Big\langle -\Big[\frac{2HA^Trt^T}{\|r\|_2^2}-[Q_A,\ HA^T]\Big]^TL^Tu,\ df\Big\rangle. $$

Let

$$ \mathbf{g}_1^*(u)=\Big[\frac{2HA^Trt^T}{\|r\|_2^2}-[Q_A,\ HA^T]\Big]^TL^Tu\,x^T-t\,u^TLH,\qquad \mathbf{g}_2^*(u)=-\Big[\frac{2HA^Trt^T}{\|r\|_2^2}-[Q_A,\ HA^T]\Big]^TL^Tu; $$

then

$$ \langle\mathbf{g}^*(u),(dK,df)\rangle=\langle(\mathbf{g}_1^*(u),\mathbf{g}_2^*(u)),(dK,df)\rangle=\langle u,\mathbf{g}(dK,df)\rangle, $$

which completes the proof.

Now, we present an explicit expression of the condition number $\kappa$ in (2.3) by applying the dual norm in the solution space.

Theorem 3.2. Let $\|\cdot\|_{\mathcal{Q}}$ denote the norm chosen on the solution space $\mathbb{R}^{l}$ and $\|\cdot\|_{\mathcal{Q}^*}$ its dual norm. The condition number (2.3) for the linear function $\varphi$ of the TLSE solution is expressed as

$$ \kappa=\max_{\|u\|_{\mathcal{Q}^*}=1}\big\|[VD_K,\ SD_f]^TL^Tu\big\|_1=\big\|[VD_K,\ SD_f]^TL^T\big\|_{\mathcal{Q}^*,1}, $$

where

$$ V=x^T\otimes\Big(\frac{2HA^Trt^T}{\|r\|_2^2}-[Q_A,\ HA^T]\Big)-H\otimes t^T,\qquad S=-\Big(\frac{2HA^Trt^T}{\|r\|_2^2}-[Q_A,\ HA^T]\Big), \tag{3.1} $$

and $D_X$ denotes the diagonal matrix $\mathrm{diag}(\mathrm{vec}(X))$ for any matrix (or vector) $X$.

Proof. Let $dk_{ij}$ and $df_i$ denote the entries of $dK$ and $df$, respectively. Using (2.5), we obtain

$$ \|(dK,df)\|_{c^*}=\sum_{i,j}|dk_{ij}|\,|k_{ij}|+\sum_{i}|df_i|\,|f_i|. $$

Using Lemma 3.1, we get

$$ \|\mathbf{g}^*(u)\|_{c^*}=\sum_{i,j=1}^{m,n}|k_{ij}|\,\Big|\Big(\Big[\frac{2HA^Trt^T}{\|r\|_2^2}-[Q_A,HA^T]\Big]^TL^Tu\,x^T-t\,u^TLH\Big)_{ij}\Big|+\sum_{i=1}^{m}|f_i|\,\Big|\Big(\Big[\frac{2HA^Trt^T}{\|r\|_2^2}-[Q_A,HA^T]\Big]^TL^Tu\Big)_i\Big| $$
$$ =\sum_{i,j=1}^{m,n}|k_{ij}|\,\Big|\Big[x_j\Big(\frac{2HA^Trt^T}{\|r\|_2^2}-[Q_A,HA^T]\Big)e_i-t_iH^Te_j\Big]^TL^Tu\Big|+\sum_{i=1}^{m}|f_i|\,\Big|\Big(\Big(\frac{2HA^Trt^T}{\|r\|_2^2}-[Q_A,HA^T]\Big)e_i\Big)^TL^Tu\Big|, $$

where $t_i$ is the $i$th component of $t$. Considering (3.1), it can be verified that $x_j\big(\frac{2HA^Trt^T}{\|r\|_2^2}-[Q_A,HA^T]\big)e_i-t_iH^Te_j$ is the $(m(j-1)+i)$th column of the $n\times(mn)$ matrix $V$. Thus, the above expression equals

$$ \Big\|\begin{bmatrix}D_KV^TL^Tu\\ D_fS^TL^Tu\end{bmatrix}\Big\|_1=\big\|[VD_K,\ SD_f]^TL^Tu\big\|_1. $$

    Then, by Lemma 2.4, we get the required result.

    Using Theorem 3.2, we can easily obtain the explicit expressions of the mixed condition number for the linear function φ of the TLSE solution.

Corollary 3.3. When the infinity norm is taken as the norm in the solution space, under the same assumptions as in Theorem 3.2 we obtain

$$ \kappa_\infty(K,f)=\big\|\,|LV|\,\mathrm{vec}(|K|)+|LS|\,|f|\,\big\|_\infty. $$

If the infinity norm is selected as the norm on the solution space $\mathbb{R}^{l}$, the corresponding mixed condition number is given by

$$ \kappa_m(K,f)=\frac{\big\||LV|\,\mathrm{vec}(|K|)+|LS|\,|f|\big\|_\infty}{\|Lx\|_\infty}=\frac{\Big\|\Big|L\Big[x^T\otimes\Big(\frac{2HA^Trt^T}{\|r\|_2^2}-[Q_A,HA^T]\Big)-H\otimes t^T\Big]\Big|\,\mathrm{vec}(|K|)+\Big|L\Big[\frac{2HA^Trt^T}{\|r\|_2^2}-[Q_A,HA^T]\Big]\Big|\,|f|\Big\|_\infty}{\|Lx\|_\infty}. \tag{3.2} $$

    Using Theorem 3.2, we can also find the explicit expressions of the componentwise condition number for the linear function φ of the TLSE solution.

Corollary 3.4. Consider the componentwise norm on the solution space given by

$$ \|y\|_c=\min\{\omega:\ |y_i|\le\omega\,|(Lx)_i|,\ i=1,\dots,l\}=\max\{|y_i|/|(Lx)_i|,\ i=1,\dots,l\}. \tag{3.3} $$

    The componentwise condition number for the linear function φ of the TLSE solution has the following expression:

$$ \kappa_c(K,f)=\big\|D_{Lx}^{-1}L[VD_K,\ SD_f]\big\|_\infty=\big\|\,|D_{Lx}^{-1}|\big(|LV|\,\mathrm{vec}(|K|)+|LS|\,|f|\big)\big\|_\infty=\Big\|\,|D_{Lx}^{-1}|\Big(\Big|L\Big[x^T\otimes\Big(\frac{2HA^Trt^T}{\|r\|_2^2}-[Q_A,HA^T]\Big)-H\otimes t^T\Big]\Big|\,\mathrm{vec}(|K|)+\Big|L\Big[\frac{2HA^Trt^T}{\|r\|_2^2}-[Q_A,HA^T]\Big]\Big|\,|f|\Big)\Big\|_\infty. \tag{3.4} $$
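The expressions (3.2) and (3.4) can be evaluated directly once the quantities of Section 1 are available; the following is a hedged NumPy sketch (the helper name and the assumption that $H$, $Q_A$ and $t$ have been formed as in Sections 1 and 2 are ours).

```python
import numpy as np

def mixed_and_componentwise_cond(L, A, b, Q, d, x, H, Q_A, t):
    """Sketch of (3.2) and (3.4) via the matrices V and S of (3.1)."""
    r = A @ x - b
    M = 2.0 * np.outer(H @ A.T @ r, t) / (r @ r) - np.hstack([Q_A, H @ A.T])
    V = np.kron(x.reshape(1, -1), M) - np.kron(H, t.reshape(1, -1))   # V in (3.1)
    S = -M                                                            # S in (3.1)
    K = np.vstack([Q, A])
    f = np.concatenate([d, b])
    num = np.abs(L @ V) @ np.abs(K).flatten(order='F') + np.abs(L @ S) @ np.abs(f)
    kappa_m = np.linalg.norm(num, np.inf) / np.linalg.norm(L @ x, np.inf)   # (3.2)
    kappa_c = np.linalg.norm(num / np.abs(L @ x), np.inf)                   # (3.4)
    return kappa_m, kappa_c
```

Forming $V$ explicitly requires roughly $n^2m$ storage, which is exactly what the Kronecker-free bounds of Corollary 3.7 and the structured expressions of Section 4 avoid.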

    By applying the 2-norm to the solution space, we get an upper bound for the relevant condition number in terms of the 2-norm.

Corollary 3.5. When the 2-norm is used in the solution space, we obtain

$$ \kappa_2(K,f)\le\sqrt{l}\,\kappa_\infty(K,f). \tag{3.5} $$

Proof. If $\|\cdot\|_{\mathcal{Q}}=\|\cdot\|_2$, then $\|\cdot\|_{\mathcal{Q}^*}=\|\cdot\|_2$. Utilizing Theorem 3.2, we obtain

$$ \kappa_2(K,f)=\big\|[VD_K,\ SD_f]^TL^T\big\|_{2,1}. $$

According to [37], for any matrix $W$, $\|W\|_{2,1}=\max_{\|u\|_2=1}\|Wu\|_1=\|W\hat u\|_1$, where $\hat u\in\mathbb{R}^{l}$ is a unit 2-norm vector. Using $\|\hat u\|_1\le\sqrt{l}\,\|\hat u\|_2$, we get

$$ \|W\|_{2,1}=\|W\hat u\|_1\le\|W\|_1\,\|\hat u\|_1\le\sqrt{l}\,\|W\|_1. $$

Substituting $[VD_K,\ SD_f]^TL^T$ for $W$ above, we have

$$ \kappa_2(K,f)\le\sqrt{l}\,\big\|[VD_K,\ SD_f]^TL^T\big\|_1, $$

    which implies (3.5).

Moreover, the dual techniques used to derive these condition number expressions allow us to reduce the computational cost, because the number of columns of the matrix $[VD_K,\ SD_f]^TL^T$ (namely $l$) is usually much smaller than its number of rows.

    Remark 3.6. Using the Kronecker product property and the fact that

$$ \frac{2HA^Trt^T}{\|r\|_2^2}=\frac{2Hxt^T}{\rho^2}, $$

as shown in Eq (3.3) of [14], we have

$$ V=x^T\otimes\Big(\frac{2HA^Trt^T}{\|r\|_2^2}-[Q_A,\ HA^T]\Big)-H\otimes t^T=x^T\otimes\Big(\frac{2Hxt^T}{\rho^2}-[Q_A,\ HA^T]\Big)-H(I_n\otimes t^T), $$
$$ S=-\Big(\frac{2HA^Trt^T}{\|r\|_2^2}-[Q_A,\ HA^T]\Big)=-\Big(\frac{2Hxt^T}{\rho^2}-[Q_A,\ HA^T]\Big). $$

    Applying these two facts together with (3.2) and (3.4) for the case where L=In allows us to recover the expressions of normwise, mixed and componentwise condition numbers of the TLSE problem, which are given in [1,Theorem 5].

The following corollary, based on the triangle inequality, yields upper bounds for $\kappa_m$ and $\kappa_c$ that are free of Kronecker products; its proof is omitted. Note that the following relationship holds for any matrix $W\in\mathbb{R}^{p\times q}$ and diagonal matrix $D_v\in\mathbb{R}^{q\times q}$ with diagonal $v$: $\|WD_v\|_\infty=\big\|\,|WD_v|\,e\,\big\|_\infty=\big\|\,|W|\,|D_v|\,e\,\big\|_\infty=\big\|\,|W|\,|v|\,\big\|_\infty$, where $e=[1,\dots,1]^T\in\mathbb{R}^{q}$.

Corollary 3.7. The mixed and componentwise condition numbers for the linear function $\varphi$ of the TLSE solution can be bounded as follows:

$$ \kappa_m(K,f)\le\kappa_m^{u}(K,f)=\frac{\Big\|\,\Big|L\Big(\frac{2HA^Trt^T}{\|r\|_2^2}-[Q_A,HA^T]\Big)\Big|\,|K|\,|x|+|LH|\,|K^T|\,|t|+\Big|L\Big(\frac{2HA^Trt^T}{\|r\|_2^2}-[Q_A,HA^T]\Big)\Big|\,|f|\,\Big\|_\infty}{\|Lx\|_\infty}, $$
$$ \kappa_c(K,f)\le\kappa_c^{u}(K,f)=\Big\|\,|D_{Lx}^{-1}|\Big(\Big|L\Big(\frac{2HA^Trt^T}{\|r\|_2^2}-[Q_A,HA^T]\Big)\Big|\,|K|\,|x|+|LH|\,|K^T|\,|t|+\Big|L\Big(\frac{2HA^Trt^T}{\|r\|_2^2}-[Q_A,HA^T]\Big)\Big|\,|f|\Big)\Big\|_\infty. $$

In this section, we study the sensitivity of a linear function of the structured TLSE solution to perturbations in the data $k$ and $f$, namely

$$ \varphi_s:\mathbb{R}^{\theta}\times\mathbb{R}^{m}\to\mathbb{R}^{l} \tag{4.1} $$

with $\varphi_s(k,f)=Lx=L(Q_Ad+HA^Tb)$, where $x$ is the solution of the structured TLSE problem. Assume that $K\in\mathcal{S}$ is a linearly structured matrix, for example a Toeplitz matrix, where $\mathcal{S}$ is a linear subspace of structured matrices of dimension $\theta$; then there exists a unique vector $k=[k_1,\dots,k_\theta]^T$ such that

$$ K=\sum_{i=1}^{\theta}k_iS_i, $$

where $S_1,\dots,S_\theta$ form a basis of $\mathcal{S}$. Note that

$$ dK=\sum_{i=1}^{\theta}dk_iS_i \tag{4.2} $$

and

$$ \mathrm{vec}(K)=\sum_{i=1}^{\theta}k_i\,\mathrm{vec}(S_i)=\Phi_Kk, $$

where $\Phi_K=[\mathrm{vec}(S_1),\ \mathrm{vec}(S_2),\ \dots,\ \mathrm{vec}(S_\theta)]$; by [31, Theorem 4.1], $\Phi_K$ is column orthogonal, has full column rank and has at most one nonzero entry in each row.

    vec([K,f])=ΦsK,fs:=[ΦK00Im][kf],

    when we restrict the perturbation matrices [ΔK,Δf] into [K,f], i.e., vec([ΔK,Δf])=ΦK,fϵ, where ϵRθ+m.

The following structured absolute normwise condition number of $\varphi_s$ can be obtained by using (2.3) and (2.8):

$$ \kappa_s(k,f)=\big\|W_{K,f}\,\Phi^s_{K,f}\big\|_2, $$

where

$$ W_{K,f}\,\Phi^s_{K,f}=\Big[x^T\otimes L\Big(\frac{2HA^Trt^T}{\|r\|_2^2}-[Q_A,\ HA^T]\Big)-L(H\otimes t^T),\ \ -L\Big(\frac{2HA^Trt^T}{\|r\|_2^2}-[Q_A,\ HA^T]\Big)\Big]\begin{bmatrix}\Phi_K&0\\ 0&I_m\end{bmatrix}=L\Big(\frac{2HA^Trt^T}{\|r\|_2^2}-[Q_A,\ HA^T]\Big)\big[S_1x,\ \dots,\ S_\theta x,\ -I_m\big]-LH\big[S_1^Tt,\ \dots,\ S_\theta^Tt,\ 0_{n\times m}\big]. $$

The structured relative normwise condition number corresponding to $\kappa_s(k,f)$ is

$$ \kappa_{s,n}(k,f)=\frac{\kappa_s(k,f)\,\big\|[k^T,\ f^T]^T\big\|_2}{\|Lx\|_2}, \tag{4.3} $$

which can be computed efficiently with little storage and is Kronecker-product-free.

With the help of (2.7), we can show that $\varphi_s$ given in (4.1) is Fréchet differentiable at $(k,f)$ and obtain its Fréchet derivative.

Lemma 4.1. Consider the following linear function $\varphi_s$ of the TLSE solution:

$$ \varphi_s:\mathbb{R}^{\theta}\times\mathbb{R}^{m}\to\mathbb{R}^{l},\qquad (k,f)\mapsto\varphi_s(k,f)=L(Q_Ad+HA^Tb), $$

where $L\in\mathbb{R}^{l\times n}$. By [14, Theorem 3.2], under the genericity assumption (1.2) the map $\varphi_s$ is continuous. Moreover, $\varphi_s$ is Fréchet differentiable at $(k,f)$, and its Fréchet derivative is

$$ \mathbf{g}_s:=d\varphi_s(k,f)(dk,df)=LU\,dk-L\Big(\frac{2HA^Trt^T}{\|r\|_2^2}-[Q_A,\ HA^T]\Big)df, \tag{4.4} $$

where $U=[u_1,\dots,u_\theta]\in\mathbb{R}^{n\times\theta}$, $u_i=\Big(\frac{2HA^Trt^T}{\|r\|_2^2}-[Q_A,\ HA^T]\Big)S_ix-HS_i^Tt$, $dk\in\mathbb{R}^{\theta}$ and $df\in\mathbb{R}^{m}$.

Lemma 4.2. The adjoint operator of the Fréchet derivative $\mathbf{g}_s(dk,df)$ in (4.4) is given by

$$ \mathbf{g}_s^*:\mathbb{R}^{l}\to\mathbb{R}^{\theta}\times\mathbb{R}^{m},\qquad u\mapsto\Big(U^TL^Tu,\ \ -\Big[\frac{2HA^Trt^T}{\|r\|_2^2}-[Q_A,\ HA^T]\Big]^TL^Tu\Big). $$

Theorem 4.3. The condition number of the linear function $\varphi_s$ for the structured TLSE problem can be deduced from (3.1) as follows:

$$ \kappa_s=\big\|[V_sD_k,\ SD_f]^TL^T\big\|_{\mathcal{Q}^*,1}, $$

where $V_s=U$ and $D_k=\mathrm{diag}(k)$.

    With the help of Theorem 4.3, we can simply determine the structured mixed condition number for the linear function φs of the TLSE solution.

Corollary 4.4. When the infinity norm is taken as the norm in the solution space, under the same assumptions as in Theorem 4.3, we get

$$ \kappa_{s,\infty}(k,f)=\big\|\,|LV_s|\,|k|+|LS|\,|f|\,\big\|_\infty. $$

If the infinity norm is selected as the norm on the solution space $\mathbb{R}^{l}$, the corresponding structured mixed condition number is given by

$$ \kappa_{s,m}(k,f)=\frac{\big\||LV_s|\,|k|+|LS|\,|f|\big\|_\infty}{\|Lx\|_\infty}=\frac{\Big\|\sum_{i=1}^{\theta}|k_i|\,\Big|L\Big(\Big(\frac{2HA^Trt^T}{\|r\|_2^2}-[Q_A,HA^T]\Big)S_ix-HS_i^Tt\Big)\Big|+\Big|L\Big(\frac{2HA^Trt^T}{\|r\|_2^2}-[Q_A,HA^T]\Big)\Big|\,|f|\Big\|_\infty}{\|Lx\|_\infty}. \tag{4.5} $$

Taking the 2-norm on the solution space, we can derive an upper bound for the associated structured condition number in terms of the 2-norm. Since the proof is similar to that of Corollary 3.5, it is omitted.

Corollary 4.5. When the 2-norm is used in the solution space, we get

$$ \kappa_{s,2}(k,f)\le\sqrt{l}\,\kappa_{s,\infty}(k,f). \tag{4.6} $$

Corollary 4.6. Assume that (3.3) is the componentwise norm on the solution space. The structured componentwise condition number for the linear function $\varphi_s$ of the TLSE solution has the following expressions:

$$ \kappa_{s,c}(k,f)=\big\|\,|D_{Lx}^{-1}|\big(|LV_s|\,|k|+|LS|\,|f|\big)\big\|_\infty=\Big\|\,|D_{Lx}^{-1}|\Big(\sum_{i=1}^{\theta}|k_i|\,\Big|L\Big(\Big(\frac{2HA^Trt^T}{\|r\|_2^2}-[Q_A,HA^T]\Big)S_ix-HS_i^Tt\Big)\Big|+\Big|L\Big(\frac{2HA^Trt^T}{\|r\|_2^2}-[Q_A,HA^T]\Big)\Big|\,|f|\Big)\Big\|_\infty. \tag{4.7} $$
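A corresponding sketch for the structured expressions (4.5) and (4.7), which avoids forming any Kronecker product; the argument list and the assumption that $H$, $Q_A$ and $t$ are available are ours.

```python
import numpy as np

def structured_mixed_componentwise_cond(L, A, b, Q, d, x, H, Q_A, t, basis, k):
    """Sketch of (4.5) and (4.7) for a linearly structured K = sum_i k_i S_i;
    `basis` is the list [S_1, ..., S_theta]."""
    r = A @ x - b
    M = 2.0 * np.outer(H @ A.T @ r, t) / (r @ r) - np.hstack([Q_A, H @ A.T])
    f = np.concatenate([d, b])
    acc = np.abs(L @ M) @ np.abs(f)                       # |LS||f| term (S = -M)
    for ki, Si in zip(k, basis):
        acc += abs(ki) * np.abs(L @ (M @ (Si @ x) - H @ (Si.T @ t)))
    Lx = L @ x
    kappa_sm = np.linalg.norm(acc, np.inf) / np.linalg.norm(Lx, np.inf)   # (4.5)
    kappa_sc = np.linalg.norm(acc / np.abs(Lx), np.inf)                   # (4.7)
    return kappa_sm, kappa_sc
```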

For a linearly structured matrix as in (4.2), we now verify that the structured absolute normwise, mixed and componentwise condition numbers $\kappa_s(k,f)$, $\kappa_{s,m}(k,f)$ and $\kappa_{s,c}(k,f)$ are never larger than the unstructured condition numbers $\kappa(K,f)$, $\kappa_m(K,f)$ and $\kappa_c(K,f)$, respectively.

Theorem 4.7. Using the notations above, we have $\kappa_s(k,f)\le\kappa(K,f)$. Moreover, suppose that the basis $\{S_1,S_2,\dots,S_\theta\}$ of $\mathcal{S}$ satisfies $|K|=\sum_{i=1}^{\theta}|k_i|\,|S_i|$ for any $K\in\mathcal{S}$; then

$$ \kappa_{s,m}(k,f)\le\kappa_m(K,f)\qquad\text{and}\qquad \kappa_{s,c}(k,f)\le\kappa_c(K,f). $$

Proof. The matrix $\Phi_K$ is column orthogonal according to [31, Theorem 4.1]. Therefore, $\|\Phi^s_{K,f}\|_2=1$, and it is simple to observe that $\kappa_s(k,f)\le\kappa(K,f)$ by comparing their expressions. By applying the monotonicity of the infinity norm and using the assumption that

$$ |K|=\sum_{i=1}^{\theta}|k_i|\,|S_i|, $$

we get

$$ \kappa_{s,m}(k,f)=\frac{\Big\|\sum_{i=1}^{\theta}|k_i|\,\Big|L\Big(\Big(\frac{2HA^Trt^T}{\|r\|_2^2}-[Q_A,HA^T]\Big)S_ix-HS_i^Tt\Big)\Big|+\Big|L\Big(\frac{2HA^Trt^T}{\|r\|_2^2}-[Q_A,HA^T]\Big)\Big|\,|f|\Big\|_\infty}{\|Lx\|_\infty}=\frac{\Big\|\,\big|L[U,\ S]\big|\begin{bmatrix}|k|\\ |f|\end{bmatrix}\Big\|_\infty}{\|Lx\|_\infty} \tag{4.8} $$

$$ \le\frac{\Big\|\,\big[|LV|\,|\Phi_K|,\ \ |LS|\big]\begin{bmatrix}|k|\\ |f|\end{bmatrix}\Big\|_\infty}{\|Lx\|_\infty}=\frac{\Big\|\,|LV|\sum_{i=1}^{\theta}|k_i|\,|\mathrm{vec}(S_i)|+|LS|\,|f|\Big\|_\infty}{\|Lx\|_\infty}=\frac{\big\|\,|LV|\,\mathrm{vec}(|K|)+|LS|\,|f|\,\big\|_\infty}{\|Lx\|_\infty}=\kappa_m(K,f), \tag{4.9} $$

where we used $LU=LV\,\Phi_K$.

Similarly, we can prove that $\kappa_{s,c}(k,f)\le\kappa_c(K,f)$.

Remark 4.8. By utilizing the intermediate results of Lemma 4.1, we can retrieve the structured condition numbers for the TLS problem [28]. Suppose that $Q=\Delta Q=0$ and $d=\Delta d=0$; then

$$ Q_A=0_{n\times p},\qquad H=(A^TA-\sigma_{n+1}^2I_n)^{-1}=:\bar P^{-1},\qquad t^T=[0_{1\times p},\ r^T],\qquad \bar r=b-Ax, $$

$$ \frac{2A^Trr^T}{\|r\|_2^2}=-\frac{2x\bar r^T}{1+x^Tx},\qquad\text{and} $$

$$ L\bar P^{-1}\Big[0_{n\times p},\ \ \frac{2A^Trr^T}{\|r\|_2^2}-A^T\Big]=-L\bar P^{-1}\Big[0_{n\times p},\ \ A^T+\frac{2x\bar r^T}{1+x^Tx}\Big]. $$

Considering the above facts and using (4.4), we obtain

$$ \bar{\mathbf{g}}_s:=d\varphi_s(a,b)(da,db)=L\bar P^{-1}\bar U\,da+L\bar P^{-1}\Big(A^T+\frac{2x\bar r^T}{1+x^Tx}\Big)db, $$

where $\bar U=[\bar u_1,\dots,\bar u_\theta]\in\mathbb{R}^{n\times\theta}$, $\bar u_i=-\Big(A^T+\frac{2x\bar r^T}{1+x^Tx}\Big)S_ix+S_i^T\bar r$, $da\in\mathbb{R}^{\theta}$ and $db\in\mathbb{R}^{m}$; the latter is just the result in [28, Lemma 3.2], with which we can recover the structured condition numbers for the TLS problem [28].

In this section, we study the statistical estimation of the unstructured and structured condition numbers of the TLSE problem and then present specific numerical examples. We first construct two algorithms to estimate the unstructured condition numbers. The first one, outlined in Algorithm A, is based on the SSCE method of [36], which has been used for various matrix problems [29,30,38,39,40,41,42]; it estimates the unstructured normwise condition number of the TLSE problem. The second one, outlined in Algorithm B, is also based on [36] and provides statistical estimates of the unstructured mixed and componentwise condition numbers.

Denote by $\kappa_i^{\mathrm{TLSE}}$ the normwise condition number of the function $z_i^Tx$, where the vectors $z_i$ are drawn from the unit sphere $S^{n-1}$ and are orthogonal. From (2.9), we have

$$ \kappa^{\mathrm{TLSE},(\gamma)}_{\mathrm{abs}}:=\frac{\omega_\gamma}{\omega_\beta}\sqrt{|\sigma_1|^2+|\sigma_2|^2+\cdots+|\sigma_\gamma|^2}, \tag{5.1} $$

where

$$ \sigma_i=L\Big(\frac{2HA^Trt^T}{\|r\|_2^2}-[Q_A,\ HA^T]\Big)(\Delta K_i\,x-\Delta f_i)-LH\,\Delta K_i^Tt, $$

with $[\Delta K_i,\ \Delta f_i]$ the orthonormalized perturbation directions generated in Algorithm A below.

The analysis in [36] shows that

$$ \kappa^{\mathrm{TLSE},(\gamma)}_{\mathrm{SCE}}=\frac{N^{\mathrm{TLSE},(\gamma)}_{\mathrm{SCE}}\,\|[K,\ f]\|_F}{\|Lx\|_2}, \tag{5.2} $$

where $N^{\mathrm{TLSE},(\gamma)}_{\mathrm{SCE}}:=\frac{\omega_\gamma}{\omega_\beta}\sqrt{\|\sigma_1\|_2^2+\|\sigma_2\|_2^2+\cdots+\|\sigma_\gamma\|_2^2}=\big\|\kappa^{\mathrm{TLSE},(\gamma)}_{\mathrm{abs}}\big\|_F$, is a good estimate of the normwise condition number (2.9). In the above expression, $\omega_\beta$ is the Wallis factor, with $\omega_1=1$, $\omega_2=2/\pi$ and, for $\beta>2$,

$$ \omega_\beta=\begin{cases}\dfrac{1\cdot3\cdot5\cdots(\beta-2)}{2\cdot4\cdot6\cdots(\beta-1)}, & \text{for }\beta\text{ odd},\\[2mm] \dfrac{2}{\pi}\cdot\dfrac{2\cdot4\cdot6\cdots(\beta-2)}{3\cdot5\cdot7\cdots(\beta-1)}, & \text{for }\beta\text{ even}.\end{cases} $$

The Wallis factor can be approximated by

$$ \omega_\beta\approx\sqrt{\frac{2}{\pi\big(\beta-\frac{1}{2}\big)}} \tag{5.3} $$

    with high accuracy. As a matter of fact, we can devise Algorithm A.
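As a quick sanity check of the product formula above against the approximation (5.3), one may compare the two numerically (a small sketch of ours):

```python
import numpy as np

def wallis_exact(beta):
    """Exact Wallis factor from the product formula above."""
    if beta == 1:
        return 1.0
    if beta == 2:
        return 2.0 / np.pi
    num = np.arange(beta - 2, 0, -2, dtype=float)    # beta-2, beta-4, ...
    den = np.arange(beta - 1, 0, -2, dtype=float)    # beta-1, beta-3, ...
    val = np.prod(num / den[: len(num)])
    return val if beta % 2 == 1 else 2.0 / np.pi * val

def wallis_approx(beta):
    """Approximation (5.3)."""
    return np.sqrt(2.0 / (np.pi * (beta - 0.5)))

# For moderate and large beta the two values agree closely,
# e.g. compare wallis_exact(1001) with wallis_approx(1001).
```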

    Algorithm A (SSCE method for the unstructured normwise condition number)

(1) Generate matrices $[\Delta K_1,\Delta f_1],\dots,[\Delta K_\gamma,\Delta f_\gamma]$ with each entry drawn from $\mathcal{N}(0,1)$, and orthonormalize the matrix

$$ \begin{bmatrix}\mathrm{vec}(\Delta K_1)&\mathrm{vec}(\Delta K_2)&\cdots&\mathrm{vec}(\Delta K_\gamma)\\ \Delta f_1&\Delta f_2&\cdots&\Delta f_\gamma\end{bmatrix} $$

to obtain $[\tau_1,\tau_2,\dots,\tau_\gamma]$ via the modified Gram-Schmidt orthogonalization process. Each $\tau_i$ can be converted back into the corresponding pair $[\Delta K_i,\ \Delta f_i]$ by applying the unvec operation.

    (2) Let β=m+mn. Approximate ωβ and ωγ by using (5.3).

(3) For $i=1,2,\dots,\gamma$, compute

$$ \sigma_i=L\Big(\frac{2HA^Trt^T}{\|r\|_2^2}-[Q_A,\ HA^T]\Big)(\Delta K_i\,x-\Delta f_i)-LH\,\Delta K_i^Tt. \tag{5.4} $$

(4) Compute the absolute condition vector by using (5.1), where the square operation is applied to each entry of $\sigma_i$, $i=1,2,\dots,\gamma$, and the square root is also applied componentwise.

    (5) Estimate the normwise condition number (2.9) by using (5.2).
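A compact sketch of Algorithm A (with QR used in place of modified Gram-Schmidt for the orthonormalization, which produces an equally valid orthonormal set; all names are ours, and $H$, $Q_A$, $t$ are assumed to be available):

```python
import numpy as np

def sce_normwise(L, A, b, Q, d, x, H, Q_A, t, gamma=2, seed=0):
    """Hedged sketch of Algorithm A: SSCE estimate of (2.9)."""
    rng = np.random.default_rng(seed)
    m, n = Q.shape[0] + A.shape[0], A.shape[1]
    r = A @ x - b
    M = 2.0 * np.outer(H @ A.T @ r, t) / (r @ r) - np.hstack([Q_A, H @ A.T])
    # Step (1): random perturbation directions, orthonormalized columnwise.
    Z, _ = np.linalg.qr(rng.standard_normal((m * n + m, gamma)))
    sigmas = []
    for i in range(gamma):
        dK = Z[: m * n, i].reshape(m, n, order='F')       # unvec
        df = Z[m * n :, i]
        sigmas.append(L @ (M @ (dK @ x - df)) - L @ (H @ (dK.T @ t)))   # (5.4)
    # Steps (2)-(4): Wallis factors via (5.3), absolute condition vector (5.1).
    beta = m + m * n
    w = lambda v: np.sqrt(2.0 / (np.pi * (v - 0.5)))
    kappa_abs = (w(gamma) / w(beta)) * np.sqrt(sum(s**2 for s in sigmas))
    # Step (5): normwise estimate (5.2).
    K = np.vstack([Q, A]); f = np.concatenate([d, b])
    Kf = np.hstack([K, f.reshape(-1, 1)])
    return np.linalg.norm(kappa_abs) * np.linalg.norm(Kf, 'fro') / np.linalg.norm(L @ x)
```

Algorithm B differs only in that each direction is scaled componentwise by $[K,\ f]$ before the directional derivatives are evaluated and in the final normalization.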

    Algorithm B (SSCE method for the unstructured mixed and componentwise condition numbers)

(1) Generate matrices $[\Delta K_1,\Delta f_1],[\Delta K_2,\Delta f_2],\dots,[\Delta K_\gamma,\Delta f_\gamma]$ with each entry drawn from $\mathcal{N}(0,1)$ and orthonormalize the matrix

$$ \begin{bmatrix}\mathrm{vec}(\Delta K_1)&\mathrm{vec}(\Delta K_2)&\cdots&\mathrm{vec}(\Delta K_\gamma)\\ \Delta f_1&\Delta f_2&\cdots&\Delta f_\gamma\end{bmatrix} $$

to obtain $[\tau_1,\tau_2,\dots,\tau_\gamma]$ via the modified Gram-Schmidt orthogonalization process. Apply the unvec operation to convert each $\tau_i$ into the corresponding pair $[\widetilde{\Delta K}_i,\ \widetilde{\Delta f}_i]$, and set $[\Delta K_i,\ \Delta f_i]$ to be $[\widetilde{\Delta K}_i,\ \widetilde{\Delta f}_i]$ multiplied componentwise by $[K,\ f]$.

    (2) Assume that β=m(n+1). Approximate ωβ and ωγ by using (5.3).

(3) For $i=1,2,\dots,\gamma$, compute

$$ y_i=L\Big(\frac{2HA^Trt^T}{\|r\|_2^2}-[Q_A,\ HA^T]\Big)(\Delta K_i\,x-\Delta f_i)-LH\,\Delta K_i^Tt. $$

Using the approximations for $\omega_\beta$ and $\omega_\gamma$, compute the absolute condition vector

$$ C^{\mathrm{TLSE},(\gamma)}_{\mathrm{abs}}=\frac{\omega_\gamma}{\omega_\beta}\sqrt{|y_1|^2+|y_2|^2+\cdots+|y_\gamma|^2}. $$

(4) Compute the relative condition vector $C^{\mathrm{TLSE},(\gamma)}_{\mathrm{rel}}=C^{\mathrm{TLSE},(\gamma)}_{\mathrm{abs}}/(Lx)$, where the division is componentwise. The mixed and componentwise condition estimates $m^{\mathrm{TLSE},(\gamma)}_{\mathrm{SCE}}$ and $c^{\mathrm{TLSE},(\gamma)}_{\mathrm{SCE}}$ are then

$$ m^{\mathrm{TLSE},(\gamma)}_{\mathrm{SCE}}:=\frac{\big\|C^{\mathrm{TLSE},(\gamma)}_{\mathrm{abs}}\big\|_\infty}{\|Lx\|_\infty},\qquad c^{\mathrm{TLSE},(\gamma)}_{\mathrm{SCE}}:=\big\|C^{\mathrm{TLSE},(\gamma)}_{\mathrm{rel}}\big\|_\infty=\Big\|\frac{C^{\mathrm{TLSE},(\gamma)}_{\mathrm{abs}}}{Lx}\Big\|_\infty. $$

    On the basis of the SSCE method [36], we propose Algorithm C to estimate the structured normwise, mixed and componentwise condition numbers.

    Algorithm C (SSCE method for the structured condition numbers)

(1) Generate vectors $\Delta k_1,\Delta k_2,\dots,\Delta k_\gamma$ and $\Delta f_1,\Delta f_2,\dots,\Delta f_\gamma$ with entries drawn from $\mathcal{N}(0,1)$, where $\Delta k_i\in\mathbb{R}^{\theta}$ and $\Delta f_i\in\mathbb{R}^{m}$. Orthonormalize the matrix

$$ \begin{bmatrix}\Delta k_1&\Delta k_2&\cdots&\Delta k_\gamma\\ \Delta f_1&\Delta f_2&\cdots&\Delta f_\gamma\end{bmatrix} $$

to get an orthonormal matrix $[\xi_1,\xi_2,\dots,\xi_\gamma]$ by using the modified Gram-Schmidt orthogonalization process, where each $\xi_i$ can be split back into the corresponding pair $[\Delta k_i,\ \Delta f_i]$.

(2) Let $\alpha=\theta+m$. Approximate $\omega_\alpha$ and $\omega_\gamma$ by using (5.3).

(3) For $j=1,2,\dots,\gamma$, compute $y_j$ from (5.4) with $\Delta K_j=\sum_{i=1}^{\theta}(\Delta k_j)_iS_i$ (equivalently, from (4.4)). Estimate the absolute condition vector

$$ \bar\kappa^{\mathrm{abs}}_{s}=\frac{\omega_\gamma}{\omega_\alpha}\sqrt{|y_1|^2+|y_2|^2+\cdots+|y_\gamma|^2}. $$

(4) Estimate the structured normwise condition number as

$$ \kappa^{\mathrm{STLSE},(\gamma)}_{\mathrm{SCE}}=\frac{\big\|\bar\kappa^{\mathrm{abs}}_{s}\big\|_2\,\big\|[k^T,\ f^T]^T\big\|_2}{\|Lx\|_2}. $$

(5) Compute the structured mixed condition estimate $m^{\mathrm{STLSE},(\gamma)}_{\mathrm{SCE}}$ and the structured componentwise condition estimate $c^{\mathrm{STLSE},(\gamma)}_{\mathrm{SCE}}$ as

$$ m^{\mathrm{STLSE},(\gamma)}_{\mathrm{SCE}}:=\frac{\big\|\bar\kappa^{\mathrm{abs}}_{s}\big\|_\infty}{\|Lx\|_\infty},\qquad c^{\mathrm{STLSE},(\gamma)}_{\mathrm{SCE}}:=\Big\|\frac{\bar\kappa^{\mathrm{abs}}_{s}}{Lx}\Big\|_\infty. $$

We now illustrate four specific examples. The first compares the unstructured condition numbers with our SSCE-based estimates and assesses the over-estimation ratios produced by Algorithms A and B. The second presents the efficiency of the statistical estimators of the structured normwise, mixed and componentwise condition numbers, the third compares the structured and unstructured condition numbers, and the fourth checks the over-estimation ratios obtained by applying Algorithm C under structured perturbations.

Example 5.1. To compare the unstructured normwise, mixed and componentwise condition numbers and to illustrate the effectiveness of Algorithms A and B, we employ random TLSE problems generated by the method given in [18]. We take a random matrix $[A,\ b]$, and the matrix $\tilde Q=[Q,\ d]$ is constructed as

$$ \tilde Q=Y\,[D,\ 0]\,Z^T, $$

where $Y=I_p-2yy^T$ and $Z=I_{n+1}-2zz^T$, with $y\in\mathbb{R}^{p}$ and $z\in\mathbb{R}^{n+1}$ random unit vectors, and $D$ is a $p\times p$ diagonal matrix with condition number $\kappa_{\tilde Q}$. For Algorithms A and B we take $q=300$, $p=75$ and $n=225$, and we create various TLSE problems for every chosen $\kappa_{\tilde Q}$. The matrix $L$ is used to select part of the solution. For instance, when $L=I_n$ ($l=n$), all $n$ components of the solution $x$ are selected; when $L=e_i^T$ ($l=1$), the $i$th row of $I_n$ is selected and only the $i$th component of the solution is considered. Let $x_{\max}$ and $x_{\min}$ denote the largest and smallest components of $x$ in absolute value, respectively. We consider the following choices of $L$:

$$ L_0=I_n,\qquad L_1=[e_1\ \ e_2]^T,\qquad L_2=e_{\max}^T,\qquad L_3=e_{\min}^T, $$

where $\max$ and $\min$ are the indices of $x_{\max}$ and $x_{\min}$, respectively. Thus, these four matrices select the whole vector $x$, the subvector $[x_1\ \ x_2]^T$, and the components $x_{\max}$ and $x_{\min}$, respectively.
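A hedged sketch of this test-problem generator (the function name and the use of logarithmically spaced diagonal entries for $D$ are our choices):

```python
import numpy as np

def random_tlse_data(q, p, n, cond_Q, seed=0):
    """Random TLSE data following Example 5.1: Qtilde = Y [D, 0] Z^T."""
    rng = np.random.default_rng(seed)
    A_b = rng.standard_normal((q, n + 1))                  # random [A, b]
    y = rng.standard_normal(p);      y /= np.linalg.norm(y)
    z = rng.standard_normal(n + 1);  z /= np.linalg.norm(z)
    Y = np.eye(p) - 2.0 * np.outer(y, y)                   # Householder-type factors
    Z = np.eye(n + 1) - 2.0 * np.outer(z, z)
    D = np.diag(np.logspace(0, np.log10(cond_Q), p))       # cond(D) = cond_Q
    Qtilde = Y @ np.hstack([D, np.zeros((p, n + 1 - p))]) @ Z.T
    A, b = A_b[:, :n], A_b[:, n]
    Q, d = Qtilde[:, :n], Qtilde[:, n]
    return A, b, Q, d
```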

Table 1 suggests that the mixed and componentwise condition numbers may convey the true conditioning of this TLSE problem more directly than the normwise condition number. We also found that Algorithms A and B yield accurate SSCE-based condition estimates.

    Table 1.  Efficiency of statistical condition estimates by Algorithms A and B.
    L = L_0:   κ_Q̃   κ_SCE^{TLSE,(γ)}   κ_n(K,f)   m_SCE^{TLSE,(γ)}   κ_m(K,f)   c_SCE^{TLSE,(γ)}   κ_c(K,f)
    10^0 3.6402e+04 4.4831e+04 4.2091e+00 5.6123e+00 5.2533e+00 6.3214e+00
    10^1 4.8304e+05 5.3057e+05 5.3421e+00 6.5216e+00 6.5503e+00 7.6052e+00
    10^3 6.0632e+05 7.9206e+05 5.3891e+00 6.6844e+00 6.7606e+00 7.8605e+00
    10^5 7.3281e+05 8.5316e+05 6.4371e+00 7.5211e+00 7.1976e+00 8.2906e+00
    10^8 3.1505e+08 3.8551e+08 7.7803e+00 8.4302e+00 8.6431e+00 8.7653e+00
    L = L_1:   κ_Q̃   κ_SCE^{TLSE,(γ)}   κ_n(K,f)   m_SCE^{TLSE,(γ)}   κ_m(K,f)   c_SCE^{TLSE,(γ)}   κ_c(K,f)
    10^0 5.0425e+03 6.1032e+03 4.1022e+00 5.3052e+00 5.1677e+00 6.2309e+00
    10^1 3.1146e+04 4.3204e+04 4.4781e+00 5.6893e+00 5.3501e+00 6.0814e+00
    10^3 5.6421e+04 6.8711e+04 4.515e+00 5.8661e+00 5.5432e+00 6.7322e+00
    10^5 7.3462e+04 8.4560e+04 5.0112e+00 6.1210e+00 6.1205e+00 7.1633e+00
    10^8 4.7311e+05 5.8773e+05 5.4423e+00 6.3404e+00 6.4065e+00 7.3211e+00
    L = L_2:   κ_Q̃   κ_SCE^{TLSE,(γ)}   κ_n(K,f)   m_SCE^{TLSE,(γ)}   κ_m(K,f)   c_SCE^{TLSE,(γ)}   κ_c(K,f)
    10^0 4.4502e+03 5.6732e+03 3.8303e+00 4.9326e+00 4.7522e+00 5.8220e+00
    10^1 1.8733e+04 2.3504e+04 2.5581e+00 3.6282e+00 3.4655e+00 4.5931e+00
    10^3 4.5611e+04 6.0532e+04 2.7442e+00 3.8934e+00 4.6997e+00 5.8906e+00
    10^5 6.3015e+04 7.1441e+04 2.9052e+00 3.9977e+00 5.0114e+00 6.1642e+00
    10^8 3.8522e+05 4.4913e+05 4.8437e+00 5.9042e+00 5.2309e+00 6.9732e+00
    L = L_3:   κ_Q̃   κ_SCE^{TLSE,(γ)}   κ_n(K,f)   m_SCE^{TLSE,(γ)}   κ_m(K,f)   c_SCE^{TLSE,(γ)}   κ_c(K,f)
    10^0 4.2618e+03 5.4501e+03 3.6214e+00 4.7406e+00 4.1088e+00 5.3876e+00
    10^1 1.7421e+04 2.1562e+04 2.3521e+00 3.5245e+00 3.3832e+00 4.4097e+00
    10^3 4.0669e+04 5.8660e+04 2.5880e+00 3.7621e+00 4.2452e+00 5.6421e+00
    10^5 6.0153e+04 6.9401e+04 2.0133e+00 3.8001e+00 4.8773e+00 6.0113e+00
    10^8 3.6542e+05 4.0903e+05 4.3415e+00 5.7553e+00 5.1066e+00 6.8302e+00


The rest of this example evaluates the over-estimation ratios of the estimates produced by Algorithms A and B. We take random perturbations

$$ [\Delta K,\ \Delta f]=10^{-12}\times\mathrm{rand}(p+q,\ n+1), $$

and set

$$ \epsilon=\frac{\|[\Delta K,\ \Delta f]\|_F}{\|[K,\ f]\|_F}. $$

When the perturbations $\left\|\begin{bmatrix}\Delta Q\\ \Delta A\end{bmatrix}\right\|_F$ and $\left\|\begin{bmatrix}\Delta d\\ \Delta b\end{bmatrix}\right\|_F$ are small enough, under the genericity condition (1.2) the perturbed TLSE problem

$$ \min_{G,h}\|[G,\ h]\|_F,\qquad\text{subject to}\quad ((A+\Delta A)+G)x=(b+\Delta b)+h,\quad (Q+\Delta Q)x=d+\Delta d $$

has a unique solution, denoted by $x+\Delta x$, where the perturbations $\Delta A$ of $A$, $\Delta Q$ of $Q$, $\Delta b$ of $b$ and $\Delta d$ of $d$ are collected as

$$ \Delta K=\begin{bmatrix}\Delta Q\\ \Delta A\end{bmatrix},\qquad \Delta f=\begin{bmatrix}\Delta d\\ \Delta b\end{bmatrix}. $$

To show the effectiveness of the unstructured condition estimates of Algorithms A and B, we determine the following over-estimation ratios:

$$ r_n:=\frac{\kappa^{\mathrm{TLSE},(\gamma)}_{\mathrm{SCE}}\,\epsilon}{\|\Delta x\|_2/\|x\|_2},\qquad r_m:=\frac{m^{\mathrm{TLSE},(\gamma)}_{\mathrm{SCE}}\,\epsilon}{\|\Delta x\|_\infty/\|x\|_\infty},\qquad r_c:=\frac{c^{\mathrm{TLSE},(\gamma)}_{\mathrm{SCE}}\,\epsilon}{\|\Delta x/x\|_\infty}. $$

To carry out the experiments, we generated 500 TLSE problems, where $\kappa^{\mathrm{TLSE},(\gamma)}_{\mathrm{SCE}}$, $m^{\mathrm{TLSE},(\gamma)}_{\mathrm{SCE}}$ and $c^{\mathrm{TLSE},(\gamma)}_{\mathrm{SCE}}$ are the outputs of Algorithms A and B. Generally, ratios in $(0.1,\ 10)$ are acceptable [37, Chapter 19]. Figure 1 indicates that the mixed estimate $r_m$ and the componentwise estimate $r_c$ are more effective than $r_n$, which may significantly overestimate the actual relative normwise error.

    Figure 1.  Results for Algorithms A and B.

Regarding the structured TLSE problem, it is reasonable to require that the perturbation $\Delta K$ have the same structure as $K$. For Toeplitz matrices, the assumption

$$ |K|=\sum_{i=1}^{\theta}|k_i|\,|S_i| $$

is satisfied for $\theta=m+n-1$ when

$$ S_1=\mathrm{toeplitz}(0,e_n),\ \dots,\ S_n=\mathrm{toeplitz}(0,e_1),\ S_{n+1}=\mathrm{toeplitz}(e_2,0),\ \dots,\ S_{m+n-1}=\mathrm{toeplitz}(e_m,0), $$

where the MATLAB-style notation $K=\mathrm{toeplitz}(T_c,T_r)\in\mathcal{S}$ denotes the Toeplitz matrix with $T_c\in\mathbb{R}^{m}$ as its first column and $T_r\in\mathbb{R}^{n}$ as its first row; then $K=\sum_{i=1}^{\theta}k_iS_i$ with $k=[T_c^T,\ T_r(2:\mathrm{end})]^T\in\mathbb{R}^{m+n-1}$.
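The basis just described consists of the $m+n-1$ "single diagonal" indicator matrices; a hedged sketch that builds them and the parameter vector $k$ (ordered here by diagonal, which is one possible ordering):

```python
import numpy as np
from scipy.linalg import toeplitz

def toeplitz_basis_and_parameters(K):
    """Return [S_1, ..., S_{m+n-1}] and k with K = sum_i k_i S_i for Toeplitz K."""
    m, n = K.shape
    basis, k = [], []
    for offset in range(n - 1, -m, -1):                 # diagonals j - i = n-1, ..., -(m-1)
        S = np.zeros((m, n))
        idx = np.arange(max(0, -offset), min(m, n - offset))
        S[idx, idx + offset] = 1.0
        basis.append(S)
        k.append(K[idx[0], idx[0] + offset])            # constant value on this diagonal
    return basis, np.array(k)

# Usage check on a small (hypothetical) Toeplitz matrix:
K = toeplitz(np.array([1.0, 2.0, 3.0, 4.0, 5.0]), np.array([1.0, -2.0, 3.0]))
basis, k = toeplitz_basis_and_parameters(K)
assert np.allclose(K, sum(ki * Si for ki, Si in zip(k, basis)))
assert np.allclose(np.abs(K), sum(abs(ki) * np.abs(Si) for ki, Si in zip(k, basis)))
```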

Example 5.2. This example comes from a signal restoration application and is derived from [25]. Let $\eta=1.21$ and $\nu=4$. The convolution matrix $\bar K$ is an $m\times(m-2\nu)$ Toeplitz matrix whose first column is given by

$$ \bar k_{i1}=\frac{1}{2\pi\eta^2}\exp\Big[-\frac{(\nu-i+1)^2}{2\eta^2}\Big],\qquad i=1,2,\dots,2\nu+1, $$

with the remaining entries of the first column equal to zero; in the first row, only $\bar k_{11}$ is nonzero. We then generate a Toeplitz matrix and a corresponding right-hand side vector as

$$ K=\bar K+G\qquad\text{and}\qquad f=\bar f+h. $$

Here, $G$ is a random Toeplitz matrix with the same structure as $\bar K$, and $\bar f$ is the vector of all ones. The entries of $G$ and $h$ are drawn from the standard normal distribution and scaled so that

$$ \frac{\|h\|_2}{\|\bar f\|_2}=\frac{\|G\|_2}{\|\bar K\|_2}=\omega. $$

We take $\omega=0.001$ and $m=300$ in our experiment. In this example we compare the structured normwise, mixed and componentwise condition numbers and assess the effectiveness of Algorithm C.

From the data presented in Table 2, we conclude that Algorithm C provides accurate estimates of the structured mixed and componentwise condition numbers, whereas the structured normwise condition estimate may significantly overestimate the true relative structured normwise condition number.

    Table 2.  Efficiency of statistical condition estimates by Algorithm C.
    m, n        L     κ_SCE^{STLSE,(γ)}   κ_{s,n}(k,f)   m_SCE^{STLSE,(γ)}   κ_{s,m}(k,f)   c_SCE^{STLSE,(γ)}   κ_{s,c}(k,f)
    200, 198 L0 1.1032e+02 2.8632e+02 4.1038e+00 6.4943e+00 5.2065e+00 6.5483e+00
    L1 2.3064e+01 4.3719e+01 3.7429e+00 4.3862e+00 4.8572e+00 6.1654e+00
    L2 4.1065e+01 5.2076e+01 2.4301e+00 3.5042e+00 2.8095e+00 3.9644e+00
    L3 4.1065e+01 5.2076e+01 2.4301e+00 3.5042e+00 2.8095e+00 3.9644e+00
    m, n        L     κ_SCE^{STLSE,(γ)}   κ_{s,n}(k,f)   m_SCE^{STLSE,(γ)}   κ_{s,m}(k,f)   c_SCE^{STLSE,(γ)}   κ_{s,c}(k,f)
    300, 298 L0 3.6432e+02 5.2467e+02 4.8436e+00 6.7935e+00 5.9764e+00 7.6043e+00
    L1 3.1753e+02 4.8711e+02 3.9875e+00 5.8622e+00 5.1342e+00 6.9664e+00
    L2 5.5045e+01 6.7762e+01 2.8503e+00 4.3329e+00 3.1065e+00 4.5483e+00
    L3 5.5045e+01 6.7762e+01 2.8503e+00 4.3329e+00 3.1065e+00 4.5483e+00
    m, n        L     κ_SCE^{STLSE,(γ)}   κ_{s,n}(k,f)   m_SCE^{STLSE,(γ)}   κ_{s,m}(k,f)   c_SCE^{STLSE,(γ)}   κ_{s,c}(k,f)
    400, 398 L0 1.7654e+03 3.5749e+03 5.4332e+00 7.1109e+00 6.8753e+00 8.0655e+00
    L1 1.2435e+03 2.5420e+03 4.7344e+00 6.3427e+00 6.4673e+00 7.7644e+00
    L2 2.6545e+02 4.3566e+02 3.2345e+00 5.6311e+00 5.3444e+00 6.8643e+00
    L3 2.6545e+02 4.3566e+02 3.2345e+00 5.6311e+00 5.3444e+00 6.8643e+00
    m, n        L     κ_SCE^{STLSE,(γ)}   κ_{s,n}(k,f)   m_SCE^{STLSE,(γ)}   κ_{s,m}(k,f)   c_SCE^{STLSE,(γ)}   κ_{s,c}(k,f)
    500, 498 L0 3.0441e+03 5.3325e+03 5.8643e+00 8.0776e+00 7.8061e+00 8.9077e+00
    L1 2.3064e+03 4.3719e+03 5.1065e+00 7.2460e+00 7.0572e+00 8.1103e+00
    L2 5.1427e+02 7.8761e+02 4.7436e+00 6.4935e+00 6.1743e+00 7.6043e+00
    L3 5.1427e+02 7.8761e+02 4.7436e+00 6.4935e+00 6.1743e+00 7.6043e+00


Example 5.3. In this example, we consider the data matrix $K$ and the vector $f$ from [3]:

$$ K=\begin{bmatrix}m-1&-1&\cdots&-1\\ -1&m-1&\cdots&-1\\ \vdots&\vdots&\ddots&\vdots\\ -1&-1&\cdots&m-1\\ -1&-1&\cdots&-1\\ -1&-1&\cdots&-1\end{bmatrix}\in\mathbb{R}^{m\times(m-2)},\qquad f=\begin{bmatrix}-1\\ \vdots\\ -1\\ m-1\\ -1\end{bmatrix}\in\mathbb{R}^{m}. $$
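A short sketch of how this test matrix can be assembled and its singular-value structure checked numerically (the helper name is ours):

```python
import numpy as np

def example_53(m):
    """[K, f] has m-1 on its main diagonal and -1 elsewhere; K is its first m-2 columns."""
    Kf = m * np.eye(m, m - 1) - np.ones((m, m - 1))
    return Kf[:, : m - 2], Kf[:, m - 2]

K, f = example_53(200)
s = np.linalg.svd(np.hstack([K, f[:, None]]), compute_uv=False)
# The leading m-2 singular values coincide and exceed the (m-1)th one.
```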

We note that the first $m-2$ singular values of $[K,\ f]$ are equal and larger than the $(m-1)$th singular value $\sigma_{m-1}$. Clearly, $K$ is a Toeplitz matrix. Here we fix $\gamma=2$ in all calculations for Algorithm C. For the Toeplitz matrix $K$ and the vector $f$, we find that Algorithm C gives reliable componentwise condition estimates. From Table 3 we conclude that, in accordance with Theorem 4.7, the structured normwise, mixed and componentwise condition numbers $\kappa_{s,n}(k,f)$, $\kappa_{s,m}(k,f)$ and $\kappa_{s,c}(k,f)$, respectively, are consistently smaller than the corresponding unstructured ones $\kappa_n(K,f)$, $\kappa_m(K,f)$ and $\kappa_c(K,f)$ for the various choices of $m$ and $n$.

    Table 3.  Comparison of structured and unstructured condition numbers.
    m, n        L     κ_{s,n}(k,f)   κ_n(K,f)   κ_{s,m}(k,f)   κ_m(K,f)   κ_{s,c}(k,f)   κ_c(K,f)
    200, 198 L0 5.3211e+03 3.7543e+04 9.4332e+01 2.4031e+02 7.1322e+01 4.5094e+02
    L1 3.6774e+03 1.8722e+04 4.8130e+01 1.7659e+02 5.8572e+01 3.1654e+02
    L2 7.2043e+02 4.7965e+03 7.3422e+00 5.3507e+01 8.3504e+00 9.5436e+01
    L3 6.8782e+02 2.0532e+03 4.9643e+00 3.8951e+01 6.6543e+00 7.8390e+01
    m, n        L     κ_{s,n}(k,f)   κ_n(K,f)   κ_{s,m}(k,f)   κ_m(K,f)   κ_{s,c}(k,f)   κ_c(K,f)
    300, 298 L0 6.5853e+03 2.4706e+04 7.6964e+01 6.3820e+02 8.5728e+01 6.0765e+02
    L1 5.1453e+03 1.9432e+04 6.0443e+01 3.6543e+02 6.6404e+01 5.8261e+02
    L2 9.4065e+02 8.2976e+03 4.2712e+01 2.8903e+02 5.9632e+01 4.5722e+02
    L3 6.1427e+02 7.8761e+03 3.6042e+01 1.7211e+02 4.2704e+01 3.4821e+02
    m, n        L     κ_{s,n}(k,f)   κ_n(K,f)   κ_{s,m}(k,f)   κ_m(K,f)   κ_{s,c}(k,f)   κ_c(K,f)
    400, 398 L0 3.6543e+04 4.6542e+05 6.5320e+02 5.5732e+03 7.0875e+02 6.2063e+03
    L1 2.4743e+04 3.0542e+05 3.7654e+02 4.0754e+03 6.8572e+02 5.8432e+03
    L2 4.7543e+03 9.8654e+04 7.0665e+01 3.8943e+03 9.3520e+01 4.7543e+03
    L3 4.3401e+03 7.2311e+04 5.9870e+01 2.9205e+03 8.2145e+01 4.4002e+03
    m, n        L     κ_{s,n}(k,f)   κ_n(K,f)   κ_{s,m}(k,f)   κ_m(K,f)   κ_{s,c}(k,f)   κ_c(K,f)
    500, 498 L0 9.1032e+04 7.6502e+05 7.2461e+02 6.4320e+03 9.3043e+02 8.9201e+03
    L1 2.3064e+04 6.2511e+05 5.8549e+02 5.7017e+03 8.1702e+02 6.3354e+03
    L2 4.1065e+03 3.9647e+05 3.6038e+02 4.9552e+03 6.0641e+02 5.8602e+03
    L3 5.1427e+03 2.6021e+05 3.0403e+02 4.3193e+03 5.9762e+02 5.3511e+03


Example 5.4. In this example, we check the efficiency of the structured condition estimates for the TLSE problem under structured perturbations by taking 1000 samples with the Toeplitz matrix $K$ and the vector $f$ of Example 5.3. For each sample, we construct the componentwise structured perturbation matrix $\Delta K$ and perturbation vector $\Delta f$ as

$$ \Delta K=\varepsilon\times(E\odot K),\qquad \Delta f=\varepsilon\times(g\odot f), $$

where $\varepsilon=10^{-8}$, $E$ is a random Toeplitz matrix, the entries of $E$ and $g$ are uniformly distributed in the open interval $(-1,\ 1)$, and $\odot$ denotes the componentwise product. The over-estimation ratios with respect to the componentwise structured perturbations $\Delta K$ and $\Delta f$ are

$$ r_{s,n}:=\frac{\kappa^{\mathrm{STLSE},(\gamma)}_{\mathrm{SCE}}\,\varepsilon}{\|\Delta x\|_2/\|x\|_2},\qquad r_{s,m}:=\frac{m^{\mathrm{STLSE},(\gamma)}_{\mathrm{SCE}}\,\varepsilon}{\|\Delta x\|_\infty/\|x\|_\infty},\qquad r_{s,c}:=\frac{c^{\mathrm{STLSE},(\gamma)}_{\mathrm{SCE}}\,\varepsilon}{\|\Delta x/x\|_\infty}. $$
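A sketch of how the componentwise structured perturbations above can be drawn (the names are ours):

```python
import numpy as np
from scipy.linalg import toeplitz

def structured_componentwise_perturbation(K, f, eps=1e-8, seed=0):
    """DeltaK = eps * (E * K) with E a random Toeplitz matrix (componentwise products),
    Deltaf = eps * (g * f)."""
    rng = np.random.default_rng(seed)
    m, n = K.shape
    E = toeplitz(rng.uniform(-1, 1, m), rng.uniform(-1, 1, n))   # random Toeplitz factor
    g = rng.uniform(-1, 1, m)
    return eps * (E * K), eps * (g * f)
```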

As shown in Figure 2, the structured condition estimates $m^{\mathrm{STLSE},(\gamma)}_{\mathrm{SCE}}$ and $c^{\mathrm{STLSE},(\gamma)}_{\mathrm{SCE}}$ are very reliable, whereas the structured normwise estimate $\kappa^{\mathrm{STLSE},(\gamma)}_{\mathrm{SCE}}$ may seriously overestimate the true relative normwise error for $\gamma=2$.

    Figure 2.  Results for Algorithm C.

Using dual techniques, we have obtained explicit expressions for both the unstructured and structured condition numbers of a linear function of the TLSE solution. We have also investigated how the new results relate to earlier findings and compared the structured and unstructured condition numbers. We showed that the known structured condition numbers of the TLS problem can be recovered from the structured condition numbers of the TLSE problem. To efficiently estimate the unstructured and structured normwise, mixed and componentwise condition numbers of the TLSE problem, we applied the SSCE method and constructed three algorithms. Finally, the performance of the proposed algorithms was illustrated by numerical results. We found that the structured condition numbers of the TLSE problem can be much smaller than their unstructured counterparts. In the future, we will continue our research on this problem.

    The authors are grateful to the handling editor and the anonymous referees for their constructive feedback and helpful suggestions. This work was supported by the Zhejiang Normal University Postdoctoral Research Fund (Grant No. ZC304022938), the Natural Science Foundation of China (Project No. 61976196) and the Zhejiang Provincial Natural Science Foundation of China under Grant No. LZ22F030003.

    The authors declare no conflict of interest.



    [1] Q. Liu, Z. Jia, On the condition number of the total least squares problem with linear equality constraint, Numer. Algor., 90 (2022), 363–385. https://doi.org/10.1007/s11075-021-01191-w doi: 10.1007/s11075-021-01191-w
[2] G. H. Golub, C. F. Van Loan, An analysis of the total least squares problem, SIAM J. Numer. Anal., 17 (1980), 883–893. https://doi.org/10.1137/0717073 doi: 10.1137/0717073
[3] S. Van Huffel, J. Vandewalle, The Total Least Squares Problem: Computational Aspects and Analysis, Philadelphia: SIAM, 1991.
    [4] G. H. Golub, C. F. Van Loan, Matrix Computations, Baltimore: Johns Hopkins University Press, 2013. https://doi.org/10.2307/3621013
    [5] G. Stewart, On the weighting method for least squares problems with linear equality constraints, BIT Numer. Math., 37 (1997), 961–967. https://doi.org/10.1007/BF02510363 doi: 10.1007/BF02510363
    [6] A. J. Cox, N. J. Higham, Accuracy and stability of the null space method for solving the equality constrained least squares problem, BIT Numer. Math., 39 (1999), 34–50. https://doi.org/10.1023/A:1022365107361 doi: 10.1023/A:1022365107361
    [7] H. Diao, Condition numbers for a linear function of the solution of the linear least squares problem with equality constraints, J. Comput. Appl. Math., 344 (2018), 640–656. https://doi.org/10.1016/j.cam.2018.05.050 doi: 10.1016/j.cam.2018.05.050
    [8] H. Li, S. Wang, Partial condition number for the equality constrained linear least squares problem, Calcolo, 54 (2017), 1121–1146. https://doi.org/10.1007/s10092-017-0221-8 doi: 10.1007/s10092-017-0221-8
    [9] Q. Liu, M. Wang, On the weighting method for mixed least squares-total least squares problems, Numer. Linear Algebra Appl., 24 (2017), e2094. https://doi.org/10.1002/nla.2094 doi: 10.1002/nla.2094
    [10] E. M. Dowling, R. D. Degroat, D. A. Linebarger, Total least squares with linear constraints, IEEE Int. Conf. Acoust., 5 (1992), 341–344. https://doi.org/10.1109/ICASSP.1992.226613 doi: 10.1109/ICASSP.1992.226613
    [11] B. Schaffrin, A note on constrained total least squares estimation, Linear Algebra Appl., 417 (2006), 245–258. https://doi.org/10.1016/j.laa.2006.03.044 doi: 10.1016/j.laa.2006.03.044
    [12] B. Schaffrin, Y. A. Felus, An algorithmic approach to the total least-squares problem with linear and quadratic constraints, Stud. Geophys. Geod., 53 (2009), 1–16. https://doi.org/10.1007/s11200-009-0001-2 doi: 10.1007/s11200-009-0001-2
    [13] Q. Liu, S. Jin, L. Yao, D. Shen, The revisited total least squares problems with linear equality constraint, Appl. Numer. Math., 152 (2020), 275–284. https://doi.org/10.1016/j.apnum.2019.11.021 doi: 10.1016/j.apnum.2019.11.021
    [14] Q. Liu, C. Chen, Q. Zhang, Perturbation analysis for total least squares problems with linear equality constraint, Appl. Numer. Math., 161 (2021), 69–81. https://doi.org/10.1016/j.apnum.2020.10.025 doi: 10.1016/j.apnum.2020.10.025
    [15] J. Rice, A theory of condition, SIAM J. Numer. Anal., 3 (1966), 287–310. https://doi.org/10.1137/0703023
    [16] I. Gohberg, I. Koltracht, Mixed, componentwise, and structured condition numbers, SIAM J. Matrix Anal Appl., 14 (1993), 688–704. https://doi.org/10.1137/0614049 doi: 10.1137/0614049
    [17] P. Burgisser, F. Cucker, Condition: The Geometry of Numerical Algorithms, Heidelberg: Springer, 2013. https://doi.org/10.1007/978-3-642-38896-5
    [18] M. Baboulin, S. Gratton, A contribution to the conditioning of the total least-squares problem, SIAM J.Matrix Anal. Appl., 32 (2011), 685–699. https://doi.org/10.1137/090777608 doi: 10.1137/090777608
    [19] S. Gratton, D. Titley-Peloquin, J. T. Ilunga, Sensitivity and conditioning of the truncated total least squares solution, SIAM J. Matrix Anal. Appl., 34 (2013), 1257–1276. https://doi.org/10.1137/120895019 doi: 10.1137/120895019
    [20] Z. Jia, B. Li, On the condition number of the total least squares problem, Numer. Math., 125 (2013), 61–87. https://doi.org/10.1007/s00211-013-0533-9 doi: 10.1007/s00211-013-0533-9
    [21] B. Zheng, L. Meng, Y. Wei, Condition numbers of the multidimensional total least squares problem, SIAM J. Matrix Anal. Appl., 38 (2017), 924–948. https://doi.org/10.1137/15M1053815 doi: 10.1137/15M1053815
    [22] B. Zheng, Z. Yang, Perturbation analysis for mixed least squares-total least squares problems, Numer. Linear Algebra Appl., 26 (2019), 22–39. https://doi.org/10.1002/nla.2239 doi: 10.1002/nla.2239
    [23] L. Zhou, L. Lin, Y. Wei, S. Qiao, Perturbation analysis and condition numbers of scaled total least squares problems, Numer. Algor., 51 (2009), 381–399. https://doi.org/10.1007/s11075-009-9269-0 doi: 10.1007/s11075-009-9269-0
    [24] S. Wang, H. Li, H. Yang, A note on the condition number of the scaled total least squares problem, Calcolo, 55 (2018), 46. https://doi.org/10.1007/s10092-018-0289-9 doi: 10.1007/s10092-018-0289-9
    [25] J. Kamm, J. G. Nagy, A total least squares method for Toeplitz systems of equations, BIT, 38 (1998), 560–582. https://doi.org/10.1007/BF02510260 doi: 10.1007/BF02510260
    [26] P. Lemmerling, S. Van Huffel, Analysis of the structured total least squares problem for Hankel/Toeplitz matrices, Numer. Algorithms, 27 (2001), 89–114. https://doi.org/10.1023/A:1016775707686 doi: 10.1023/A:1016775707686
    [27] I. Markovsky, S. Van Huffel, Overview of total least-squares methods, Signal Process., 87 (2007), 2283–2302. https://doi.org/10.1016/j.sigpro.2007.04.004 doi: 10.1016/j.sigpro.2007.04.004
    [28] H. Diao, Y. Sun, Mixed and componentwise condition numbers for a linear function of the solution of the total least squares problem, Linear Algebra Appl., 544 (2018), 1–29. https://doi.org/10.1016/j.laa.2018.01.008 doi: 10.1016/j.laa.2018.01.008
    [29] H. Diao, Y. Wei, P. Xie, Small sample statistical condition estimation for the total least squares problem, Numer. Algorithms, 75 (2017), 435–455. https://doi.org/10.1007/s11075-016-0185-9 doi: 10.1007/s11075-016-0185-9
    [30] Q. Meng, H. Diao, Z. Bai, Condition numbers for the truncated total least squares problem and their estimations, Numer. Linear Algebra Appl., 28 (2021), e2369. https://doi.org/10.1002/nla.2369 doi: 10.1002/nla.2369
    [31] B. Li, Z. Jia, Some results on condition numbers of the scaled total least squares problem, Linear Algebra Appl., 435 (2011), 674–686. https://doi.org/10.1016/J.LAA.2010.07.022 doi: 10.1016/J.LAA.2010.07.022
[32] Q. Liu, Q. Zhang, D. Shen, Condition numbers of the mixed least squares-total least squares problem revisited, Linear Multilinear Algebra, 2022. https://doi.org/10.1080/03081087.2022.2094861
    [33] M. Baboulin, S. Gratton, Using dual techniques to derive componentwise and mixed condition numbers for a linear function of a linear least squares solution, BIT Numer. Math., 49 (2009), 3–19. https://doi.org/10.1007/s10543-009-0213-4 doi: 10.1007/s10543-009-0213-4
    [34] H. Diao, L. Liang, S. Qiao, A condition analysis of the weighted linear least squares problem using dual norms, Linear Algebra Appl., 66 (2018), 1085–1103. https://doi.org/10.1080/03081087.2017.1337059 doi: 10.1080/03081087.2017.1337059
    [35] M. Samar, Condition numbers for a linear function of the solution to the constrained and weighted least squares problem and their statistical estimation, Taiwanese J. Math., 25 (2021), 717–741. https://doi.org/10.11650/tjm/201202 doi: 10.11650/tjm/201202
    [36] C. S. Kenney, A. J. Laub, Small-sample statistical condition estimates for general matrix functions, SIAM J. Sci. Comput., 15 (1994), 36–61. https://doi.org/10.1137/0915003 doi: 10.1137/0915003
    [37] N. J. Higham, Accuracy and Stability of Numerical Algorithms, Philadelphia: SIAM, 2002.
    [38] P. Xie, H. Xiang, Y. Wei, A contribution to perturbation analysis for total least squares problems, Numer. Algor., 75 (2017), 381–395. https://doi.org/10.1007/s11075-017-0285-1 doi: 10.1007/s11075-017-0285-1
    [39] S. Wang, H. Yang, H. Li, Condition numbers for the nonlinear matrix equation and their statistical estimation, Linear Algebra Appl., 482 (2015), 221–240. https://doi.org/10.1016/j.laa.2015.06.011 doi: 10.1016/j.laa.2015.06.011
    [40] M. Samar, H. Li, Y. Wei, Condition numbers for the K-weighted pseudoinverse LK and their statistical estimation, Linear Multilinear Algebra, 69 (2021), 752–770. https://doi.org/10.1080/03081087.2019.1618235 doi: 10.1080/03081087.2019.1618235
[41] M. Samar, F. Lin, Perturbation and condition numbers for the Tikhonov regularization of total least squares problem and their statistical estimation, J. Comput. Appl. Math., 411 (2022), 114230. https://doi.org/10.1016/j.cam.2022.114230 doi: 10.1016/j.cam.2022.114230
    [42] M. Baboulin, S. Gratton, R. Lacroix, A. J. Laub, Statistical estimates for the conditioning of linear least squares problems, Lect. Notes Comput. Sci., 8384 (2014), 124–133. https://doi.org/10.1007/978-3-642-55224-3-13 doi: 10.1007/978-3-642-55224-3-13
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)