Research article

Hermitian and skew-Hermitian splitting method on a progressive-iterative approximation for least squares fitting

  • Received: 18 May 2022 Revised: 17 July 2022 Accepted: 25 July 2022 Published: 29 July 2022
  • MSC : 41A15, 65D10, 65D17, 65F10, 65F15

  • To solve the problems of curve and surface approximation with normalized totally positive bases, a new progressive and iterative approximation for least squares fitting method called HSS-LSPIA is proposed, based on the HSS iterative approach for solving the linear equations of LSPIA. The HSS-LSPIA format includes two iterations whose iterative difference vectors are distinct from each other. The approximate optimal positive constant, as well as convergence analyses, are provided. Furthermore, the HSS-LSPIA method can be faster than the ELSPIA, LSPIA, and WHPIA methods in terms of convergence speed. Numerical results verify this phenomenon.

    Citation: Saknarin Channark, Poom Kumam, Juan Martinez-Moreno, Wachirapong Jirakitpuwapat. Hermitian and skew-Hermitian splitting method on a progressive-iterative approximation for least squares fitting[J]. AIMS Mathematics, 2022, 7(9): 17570-17591. doi: 10.3934/math.2022967




    Geometric iteration (GI) techniques are a subcategory of iterative linear-algebraic methods for solving linear systems that have gained popularity as data-fitting tools in recent years. Since their introduction, geometric iteration methods have been widely used in geometric design and in many other areas of research and practical application, including computer-aided design (CAD), differential equations, statistics, and approximation theory. The progressive and iterative approximation (PIA) method presented by Lin et al. [1] is an adaptable and stable data-fitting approach belonging to this category. The PIA method can be applied to interpolate and approximate curves and surfaces with normalized totally positive (NTP) bases.

    Moreover, the EPIA method [2] extends PIA to approximate a given set of data points. The EPIA organizes the data points into as many groups as there are control points, lets each group correspond to one control point, and then executes a PIA-format iterative approximation. However, the limit of the generated sequence of curves and surfaces is not the least squares fitting (LSF) result for the data points. Deng and Lin therefore proposed the LSPIA method [3], in which the limit of the generated sequence of curves and surfaces is the LSF result for the data set with the B-spline basis. Ebrahimi and Loghmani [4] presented a composite iterative procedure for progressive and iterative approximation that generates a sequence of matrices based on the Schulz iterative method, which is used to form the adjusting vectors. In addition, Lin et al. [5] gave a survey of interpolation and approximation by geometric iteration methods, covering local properties and acceleration techniques as well as convergence results.

    Recently, Wang [6] presented an extended progressive and iterative approximation for least squares fitting (ELSPIA) approach with global and local relaxation parameters, which converges faster than the LSPIA method [3] in terms of iterative steps and computing time. In this article, we propose a Hermitian and skew-Hermitian splitting method for least squares fitting on a progressive-iterative approximation, which can theoretically reduce the approximation error to zero.

    The PIA [1] is an iterative approach for solving a system of linear equations $BP=Q$, where $B=\{B_i(t_j)\}_{i,j=1}^{n,m}$ is a collocation matrix with $0=t_1<t_2<t_3<\cdots<t_m=1$ and $n=m$, $\{Q_j\}_{j=1}^{m}$ is a collection of data points, and $\{P_i\}_{i=1}^{n}$ is a collection of control points. When all of the entries in each row of the blending basis matrix $B$ add up to $1$, $B$ is normalized. A matrix is totally positive if all of its minors are non-negative, and a basis is totally positive if all of its collocation matrices are totally positive. As a result, the blending basis matrix $B$ is normalized totally positive (NTP), and the corresponding basis is an NTP basis. NTP bases have traditionally been related to shape-preserving representations [7]. The majority of curve and surface representations used in computer-aided design, such as the Bernstein basis and the B-spline basis, are based on NTP bases.
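    As a concrete illustration (our own, not from the paper), the collocation matrix of the cubic Bernstein basis — a classical NTP basis — can be built and its normalization checked numerically; the sampling points `ts` below are an assumption of this sketch:

```python
import numpy as np
from math import comb

def bernstein_collocation(deg, ts):
    """Collocation matrix with entry [j, i] = B_i(t_j) for the degree-`deg`
    Bernstein basis, a classical normalized totally positive (NTP) basis."""
    return np.array([[comb(deg, i) * t**i * (1.0 - t)**(deg - i)
                      for i in range(deg + 1)] for t in ts])

ts = np.linspace(0.0, 1.0, 9)        # 0 = t_1 < ... < t_m = 1
B = bernstein_collocation(3, ts)     # 9 x 4 collocation matrix (m >= n)
# Normalization: every row of B sums to 1, and all entries are non-negative.
```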

    Using the Hermitian and skew-Hermitian splitting, Bai et al. [8] presented the HSS iteration approach. Hu et al. [9] developed a new PIA iterative algorithm and its weighted form for progressive iterative interpolation of data points based on the HSS-iteration approach with NTP bases, namely HPIA and WHPIA, respectively. The iterative processes of these two approaches formally consist of two iterations, and the iterative difference vectors in the two iterations differ.

    As we know, the LSPIA [3,10] and ELSPIA [6] are iteration approaches for solving a system of linear equations $\mu B^{T}BP=\mu B^{T}Q$, where $B=\{B_i(t_j)\}_{i,j=1}^{n,m}$ is a collocation matrix with $0=t_1<t_2<t_3<\cdots<t_m=1$ and $m\geq n$, $\mu$ is a constant satisfying $0<\mu<2/\lambda_0$ with $\lambda_0$ the largest eigenvalue of $B^{T}B$, $\{Q_j\}_{j=1}^{m}$ is a collection of data points, and $\{P_i\}_{i=1}^{n}$ is a collection of control points.

    According to theoretical analysis [8], the HSS iteration converges unconditionally to the exact solution of the system of linear equations $Ax=b$, where $A\in\mathbb{C}^{n\times n}$ is non-singular. Using the HSS-iteration approach, we propose a new procedure, called the HSS-LSPIA method, for solving the system of linear equations of LSPIA with $A=\mu B^{T}B$, $x=P$ and $b=\mu B^{T}Q$, where $B=\{B_i(t_j)\}_{i,j=1}^{n,m}\in\mathbb{R}^{m\times n}$ is a collocation matrix with $m\geq n$.

    The paper is organized as follows: Section 2 presents preliminaries. Section 3 gives the iterative formats of the HSS-LSPIA method for curves and surfaces with NTP bases, respectively, and proves the convergence of HSS-LSPIA in both cases. In Section 4, we present experimental results for six examples showing that the HSS-LSPIA method is more efficient than the LSPIA [3], ELSPIA [6] and WHPIA [9] methods. Finally, in Section 5, some conclusions are summarized.

    For any blending curve or surface fitting, we review the progressive iterative approximation property. For the ordered point set $\{Q_j\}_{j=1}^{m}$, select a control point set $\{P_i\}_{i=1}^{n}$ from $\{Q_j\}_{j=1}^{m}$ with $m\geq n$, let $\{t_j\}_{j=1}^{m}$ be the parameters assigned to $\{Q_j\}_{j=1}^{m}$, and choose $B=\{B_i(t_j)\}_{i,j=1}^{n,m}$ as an NTP basis. For the ELSPIA method [6], let $\gamma_j^{0}=0$, $j=1,2,\ldots,m$, and $P_i^{0}=P_i$, $i=1,2,\ldots,n$, be the initial weighted sums and the initial control points, respectively. We can then obtain the $(k+1)$th blending curve,

    $C^{k+1}(t)=\sum_{i=1}^{n}B_i(t)P_i^{k+1},\quad t\in[0,1].$ (2.1)

    We update the control points

    $P_i^{k+1}=P_i^{k}+\Delta_i^{k+1},\quad i=1,2,\ldots,n,$ (2.2)

    by adjusting vector

    $\Delta_i^{k+1}=\mu\sum_{j=1}^{m}B_i(t_j)\gamma_j^{k+1},\quad i=1,2,\ldots,n,$ (2.3)

    with the inner iteration

    $\gamma_j^{k+1}=(1-\omega)\gamma_j^{k}+\omega\delta_j^{k},\quad j=1,2,\ldots,m,$ (2.4)

    and the difference vector

    $\delta_j^{k}=Q_j-\sum_{i=1}^{n}B_i(t_j)P_i^{k},\quad j=1,2,\ldots,m.$ (2.5)

    In this case, the parameters $\omega$ and $\mu$ both satisfy the condition

    $0<\omega<2,\quad 0<\mu<\dfrac{2(2-\omega)}{\omega\sigma_0^{2}},$ (2.6)

    where $\sigma_0$ stands for the largest singular value of the collocation matrix $B$.

    When the parameters $\omega$ and $\mu$ satisfy

    $\omega=1,\quad 0<\mu<\dfrac{2}{\sigma_0^{2}},$ (2.7)

    the LSPIA method [3] is a special case of the ELSPIA method for blending curve fitting.

    In the case of a surface, two NTP bases are required: $\{B_i(t)\}_{i=1}^{n_1}$ and $\{N_j(v)\}_{j=1}^{n_2}$ in the directions $t$ and $v$, whose collocation matrices form a Kronecker product. Given the initial control points $\{P_{ij}^{0}\}_{i,j=1}^{n_1,n_2}$, with real increasing parameter sets $\{t_h\}_{h=1}^{m_1}$, i.e. $0=t_1<t_2<\cdots<t_{m_1}=1$, and $\{v_l\}_{l=1}^{m_2}$, i.e. $0=v_1<v_2<\cdots<v_{m_2}=1$, we select $\{P_{ij}\}_{i,j=1}^{n_1,n_2}$ from the ordered point set $\{Q_{ij}\}_{i,j=1}^{m_1,m_2}$. The sequence of surface iterations $C^{k+1}(t,v)$ is analogous to the curve case, with $k\geq0$;

    $C^{k+1}(t,v)=\sum_{i=1}^{n_1}\sum_{j=1}^{n_2}B_i(t)N_j(v)P_{i,j}^{k+1}.$ (2.8)

    We update the control points

    $P_{ij}^{k+1}=P_{ij}^{k}+\Delta_{ij}^{k+1},\quad i=1,2,\ldots,n_1,\ j=1,2,\ldots,n_2,$ (2.9)

    by adjusting vector

    $\Delta_{ij}^{k+1}=\mu\sum_{h=1}^{m_1}\sum_{l=1}^{m_2}B_i(t_h)N_j(v_l)\gamma_{hl}^{k+1},\quad i=1,2,\ldots,n_1,\ j=1,2,\ldots,n_2,$ (2.10)

    with the inner iteration

    $\gamma_{ij}^{k+1}=(1-\omega)\gamma_{ij}^{k}+\omega\delta_{ij}^{k},\quad i=1,2,\ldots,m_1,\ j=1,2,\ldots,m_2,$ (2.11)

    and the difference vector

    $\delta_{ij}^{k}=Q_{ij}-\sum_{h=1}^{n_1}\sum_{l=1}^{n_2}B_h(t_i)N_l(v_j)P_{hl}^{k},$ (2.12)

    with $i=1,2,\ldots,m_1$ and $j=1,2,\ldots,m_2$, where $\mu$ is the global relaxation parameter and $\omega$ is the local relaxation parameter, satisfying

    $0<\omega<2,\quad 0<\mu<\dfrac{2(2-\omega)}{\omega\sigma_{B_0}^{2}\sigma_{N_0}^{2}},$ (2.13)

    where $\sigma_{B_0}$ and $\sigma_{N_0}$ are the largest singular values of the collocation matrices $B$ and $N$, which correspond to the bases $\{B_i(t)\}_{i=1}^{n_1}$ and $\{N_j(v)\}_{j=1}^{n_2}$, respectively.

    When the parameters $\omega$ and $\mu$ satisfy

    $\omega=1,\quad 0<\mu<\dfrac{2}{\sigma_{B_0}^{2}\sigma_{N_0}^{2}},$ (2.14)

    the LSPIA method [3] is a special case of the ELSPIA method for blending surface fitting.

    Let

    $\Gamma^{k+1}=[\gamma_1^{k+1},\gamma_2^{k+1},\ldots,\gamma_m^{k+1}]^{T},\quad k\geq0,$
    $P^{k+1}=[P_1^{k+1},P_2^{k+1},\ldots,P_n^{k+1}]^{T},\quad k\geq0,$

    and

    $Q=[Q_1,Q_2,\ldots,Q_m]^{T}.$

    According to the control-point update for curves (2.2), we rewrite (2.3)–(2.5) in matrix iterative form; the surface update (2.9) with (2.10)–(2.12) can be rewritten in the same matrix form, as follows:

    $\begin{cases}\Gamma^{k+1}=(1-\omega)\Gamma^{k}+\omega(Q-BP^{k}),\\ P^{k+1}=P^{k}+\mu B^{T}\Gamma^{k+1},\end{cases}\quad k\geq0.$ (2.15)
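    The matrix iteration (2.15) can be sketched numerically as follows; the function name, the toy collocation matrix, and the data points are our own illustrative assumptions. At the fixed point, $B^{T}(Q-BP)=0$, i.e. the least squares solution:

```python
import numpy as np

def elspia(B, Q, mu, omega, iters):
    """ELSPIA matrix iteration (2.15):
    Gamma^{k+1} = (1 - omega) Gamma^k + omega (Q - B P^k),
    P^{k+1}     = P^k + mu B^T Gamma^{k+1}."""
    P = np.zeros((B.shape[1], Q.shape[1]))   # one possible initialization
    Gamma = np.zeros_like(Q)                 # gamma_j^0 = 0
    for _ in range(iters):
        Gamma = (1.0 - omega) * Gamma + omega * (Q - B @ P)
        P = P + mu * B.T @ Gamma
    return P
```

    With $\omega=1$ this reduces to the LSPIA update, cf. (2.7); for $0<\omega<2$ the step size $\mu$ is restricted by (2.6).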

    Many problems in scientific computing lead to a system of linear equations

    $BP=Q,\quad B\in\mathbb{C}^{m\times m}\ \text{nonsingular},\quad P,Q\in\mathbb{C}^{m},$ (2.16)

    with B a large sparse non-Hermitian and positive definite matrix.

    Iterative methods for the system of linear equations (2.16) require efficient splittings of the coefficient matrix B. For example, the Jacobi and the Gauss-Seidel iterations [11,12,13,14,15,16] split the matrix B into its diagonal and off-diagonal part. Because the matrix B naturally possesses a Hermitian/skew-Hermitian splitting (HSS) [17,18]

    $B=H+S,$ (2.17)

    where

    $H=\frac{1}{2}(B+B^{T})\quad\text{and}\quad S=\frac{1}{2}(B-B^{T}),$ (2.18)

    Bai et al. presented the Hermitian/skew-Hermitian splitting (HSS) iteration approach [8] as a new way to solve the system of linear equations (2.16). Assume that the iteration sequence has been computed up to $P^{k}$; then $P^{k+1}$ ($k\geq0$) is computed by the HSS iteration format

    $\begin{cases}(\alpha I+H)P^{k+\frac12}=(\alpha I-S)P^{k}+Q,\\ (\alpha I+S)P^{k+1}=(\alpha I-H)P^{k+\frac12}+Q,\end{cases}$ (2.19)

    where I is the identity matrix and α is a positive constant.
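    A minimal numerical sketch of the HSS iteration (2.19); the $2\times2$ system below is an illustrative assumption (its symmetric part is positive definite), not an example from the paper:

```python
import numpy as np

def hss(B, Q, alpha, iters=100):
    """HSS iteration (2.19) for B P = Q, with H and S the Hermitian and
    skew-Hermitian parts of B from (2.18)."""
    I = np.eye(B.shape[0])
    H = 0.5 * (B + B.T)          # Hermitian part
    S = 0.5 * (B - B.T)          # skew-Hermitian part
    P = np.zeros_like(Q)
    for _ in range(iters):
        P_half = np.linalg.solve(alpha * I + H, (alpha * I - S) @ P + Q)
        P = np.linalg.solve(alpha * I + S, (alpha * I - H) @ P_half + Q)
    return P

B = np.array([[4.0, 1.0], [-1.0, 3.0]])   # non-symmetric, positive definite
Q = np.array([1.0, 2.0])
alpha = np.sqrt(4.0 * 3.0)                # sqrt(lambda_min * lambda_max) of H
P = hss(B, Q, alpha)
```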

    They have also proved that this method converges unconditionally to the exact solution of the system of linear equations (2.16). The upper bound of the contraction factor of the HSS iteration depends on the spectrum of the Hermitian part $H$ but is independent of the spectrum of the skew-Hermitian part $S$ as well as of the eigenvectors of the matrices $H$, $S$, and $B$.

    Hu et al. [9] presented a new PIA iterative technique and its weighted edition for progressive iterative interpolation of data points based on the HSS-iteration approach with NTP bases, called HPIA and WHPIA, respectively. The WHPIA method typically converges faster than the HPIA method. In the following, we briefly review the WHPIA iterative processes, which formally consist of two iterations with different iterative difference vectors, and set up a function that takes the NTP bases as variables, acting like a perturbation term in the iteration process.

    Consider the case of a curve with an NTP basis $\{B_i(t)\}_{i=1}^{m}$ and a collection of data points $\{Q_i\}_{i=1}^{m}$ to be interpolated. Then, with a real increasing parameter set $\{t_i\}_{i=1}^{m}$, i.e. $0=t_1<t_2<\cdots<t_m=1$, use these data points as the initial control points $\{P_i^{0}\}_{i=1}^{m}$ and $\{P_i^{\frac12}\}_{i=1}^{m}$. The initial blending curves can be formed as follows:

    $C^{0}(t)=\sum_{i=1}^{m}B_i(t)P_i^{0},\quad t\in[0,1],$ (2.20)

    and

    $C^{\frac12}(t)=\sum_{i=1}^{m}B_i(t)P_i^{\frac12},\quad t\in[0,1].$ (2.21)

    Then, the sequence of curves $C^{k}(t)$ for $k\geq1$ is given by

    $C^{k}(t)=\sum_{i=1}^{m}B_i(t)P_i^{k},\quad t\in[0,1],$ (2.22)

    where

    $\begin{cases}P_i^{k}=P_i^{k-\frac12}+\omega\Delta_{1,i}^{k},\\ \Delta_{1,i}^{k}=P_i^{0}-\left[\frac12\left(C^{k}(t_i)+C^{k-\frac12}(t_i)\right)+\frac12\left(C^{k-\frac12}(B_i)+C^{k}(B_i)\right)\right],\\ P_i^{k+\frac12}=P_i^{k}+\omega\Delta_{2,i}^{k},\\ \Delta_{2,i}^{k}=P_i^{0}-\left[\frac12\left(C^{k-1}(t_i)+C^{k-\frac12}(t_i)\right)+\frac12\left(C^{k-\frac12}(B_i)+C^{k-1}(B_i)\right)\right].\end{cases}$ (2.23)

    In (2.23), Ck(Bi) is defined as follows

    $C^{k}(B_i)=\sum_{j=1}^{m}B_i(t_j)P_j^{k},\quad i=1,2,\ldots,m,\ k=0,\tfrac12,1,\ldots,$

    which is a function that receives the blending bases as variables.

    Consider surface fitting with two NTP bases $\{B_i(t)\}_{i=1}^{n_1}$, $\{N_j(v)\}_{j=1}^{n_2}$, and a collection of data points $\{Q_{ij}\}_{i,j=1}^{n_1,n_2}$ to be interpolated. Then, use these data points as the initial control points $\{P_{ij}^{0}\}_{i,j=1}^{n_1,n_2}$, with real increasing parameter sets $\{t_i\}_{i=1}^{n_1}$, i.e. $0=t_1<t_2<\cdots<t_{n_1}=1$, and $\{v_j\}_{j=1}^{n_2}$, i.e. $0=v_1<v_2<\cdots<v_{n_2}=1$. The initial blending surface can be created as follows:

    $C^{0}(t,v)=\sum_{i=1}^{n_1}\sum_{j=1}^{n_2}B_i(t)N_j(v)P_{i,j}^{0},\quad t,v\in[0,1].$ (2.24)

    Then, the sequence of surfaces $C^{k}(t,v)$ for $k\geq1$ can be computed as follows:

    $C^{k}(t,v)=\sum_{i=1}^{n_1}\sum_{j=1}^{n_2}B_i(t)N_j(v)P_{i,j}^{k},$ (2.25)

    where

    $\begin{cases}P_{ij}^{k}=P_{ij}^{k-\frac12}+\omega\Delta_{1,ij}^{k},\\ \Delta_{1,ij}^{k}=P_{ij}^{0}-\left[\frac12\left(C^{k}(t_i,v_j)+C^{k-\frac12}(t_i,v_j)\right)+\frac12\left(C^{k-\frac12}(B_i,N_j)+C^{k}(B_i,N_j)\right)\right],\\ P_{ij}^{k+\frac12}=P_{ij}^{k}+\omega\Delta_{2,ij}^{k},\\ \Delta_{2,ij}^{k}=P_{ij}^{0}-\left[\frac12\left(C^{k-1}(t_i,v_j)+C^{k-\frac12}(t_i,v_j)\right)+\frac12\left(C^{k-\frac12}(B_i,N_j)+C^{k-1}(B_i,N_j)\right)\right].\end{cases}$ (2.26)

    The following expression in (2.26) serves as a perturbation term in the iteration process, as in the curve case:

    $C^{k}(B_i,N_j)=\sum_{h=1}^{n_1}\sum_{l=1}^{n_2}B_i(t_h)N_j(v_l)P_{hl}^{k},\quad k=0,\tfrac12,1,\ldots.$

    The WHPIA formats for curves and surfaces, which involve two iterations and replace the sequential iterative step lengths of the control points in PIA, are given by (2.23) and (2.26), respectively.

    As we know, the PIA [1] is an iterative approach for solving a system of linear equations $BP=Q$, $B\in\mathbb{C}^{m\times n}$ with $n=m$. The matrix form of the PIA method is

    $P^{k+1}=P^{k}+(Q-BP^{k})=(I-B)P^{k}+Q,$ (3.1)

    and the LSPIA [3,10] and ELSPIA [6] are iteration approaches for solving a system of linear equations $\mu B^{T}BP=\mu B^{T}Q$, $B\in\mathbb{C}^{m\times n}$ with $m\geq n$. The matrix form of the LSPIA method is

    $P^{k+1}=P^{k}+\mu B^{T}(Q-BP^{k})=(I-\mu B^{T}B)P^{k}+\mu B^{T}Q.$ (3.2)

    Given an NTP basis $\{B_i(t_j)\}_{i,j=1}^{n,m}$ and a set of data points $\{Q_j\}_{j=1}^{m}$ to be fitted in either $\mathbb{R}^{m\times2}$ or $\mathbb{R}^{m\times3}$, we use these data points to form the initial control points $\{P_i^{0}\}_{i=1}^{n}$ in $\mathbb{R}^{n\times2}$ or $\mathbb{R}^{n\times3}$ and combine the HSS iteration with the LSPIA method for solving the system of linear equations

    $\mu B^{T}BP=\mu B^{T}Q,\quad B\in\mathbb{C}^{m\times n}\ (m\geq n).$ (3.3)

    The matrix form of the HSS-LSPIA method is the following:

    $\begin{cases}(\alpha I+H)P^{k+\frac12}=(\alpha I-S)P^{k}+\mu B^{T}Q,\\ (\alpha I+S)P^{k+1}=(\alpha I-H)P^{k+\frac12}+\mu B^{T}Q,\end{cases}$ (3.4)

    where

    $H=\frac{\mu}{2}\left(B^{T}B+(B^{T}B)^{T}\right)=\frac{\mu}{2}\left(B^{T}B+B^{T}B\right)=\mu B^{T}B$ (3.5)

    and

    $S=\frac{\mu}{2}\left(B^{T}B-(B^{T}B)^{T}\right)=\frac{\mu}{2}\left(B^{T}B-B^{T}B\right)=0.$ (3.6)

    We can rewrite (3.4) as

    $\begin{cases}(\alpha I+\mu B^{T}B)P^{k+\frac12}=\alpha P^{k}+\mu B^{T}Q,\\ \alpha P^{k+1}=(\alpha I-\mu B^{T}B)P^{k+\frac12}+\mu B^{T}Q.\end{cases}$ (3.7)

    Then, we have

    $\alpha P^{k+\frac12}+\mu B^{T}BP^{k+\frac12}=\alpha P^{k}+\mu B^{T}Q,$ (3.8)

    and

    $\alpha P^{k+1}=\alpha P^{k+\frac12}-\mu B^{T}BP^{k+\frac12}+\mu B^{T}Q.$ (3.9)

    By (3.8) and (3.9), we get

    $\alpha P^{k+1}=\alpha P^{k}+\mu B^{T}Q-\mu B^{T}BP^{k+\frac12}+\mu B^{T}Q-\mu B^{T}BP^{k+\frac12}.$ (3.10)

    So,

    $P^{k+1}=P^{k}+\frac{\mu}{\alpha}B^{T}(Q-BP^{k+\frac12})+\frac{\mu}{\alpha}B^{T}(Q-BP^{k+\frac12}).$ (3.11)

    Let

    $P^{k+\frac12}=P^{k}+\frac{\mu}{\alpha}B^{T}(Q-BP^{k+\frac12})$

    and

    $\Delta^{k}=\frac{\mu}{\alpha}B^{T}(Q-BP^{k+\frac12}),$

    then

    $P^{k+1}=P^{k+\frac12}+\Delta^{k}.$ (3.12)
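    The derivation (3.7)–(3.12) can be sketched numerically: since $S=0$, each step is one implicit solve plus one explicit update. The function name, the Bernstein collocation matrix, and the data points below are our own illustrative assumptions:

```python
import numpy as np
from math import comb

def hss_lspia(B, Q, mu, alpha, iters=500):
    """HSS-LSPIA iteration (3.7):
    (alpha I + mu B^T B) P^{k+1/2} = alpha P^k + mu B^T Q,
    alpha P^{k+1} = (alpha I - mu B^T B) P^{k+1/2} + mu B^T Q."""
    n = B.shape[1]
    I = np.eye(n)
    A = mu * B.T @ B                  # Hermitian part H; skew part S = 0
    b = mu * B.T @ Q
    P = np.zeros((n, Q.shape[1]))
    for _ in range(iters):
        P_half = np.linalg.solve(alpha * I + A, alpha * P + b)   # implicit half-step
        P = ((alpha * I - A) @ P_half + b) / alpha               # explicit half-step
    return P

# Illustrative data: cubic Bernstein collocation matrix (an NTP basis).
ts = np.linspace(0.0, 1.0, 9)
B = np.array([[comb(3, i) * t**i * (1 - t)**(3 - i) for i in range(4)] for t in ts])
Q = np.column_stack([ts, ts**2])
lam = np.linalg.eigvalsh(B.T @ B)
mu = 2.0 / (lam[-1] + lam[0])             # mu as in [3]
alpha = mu * np.sqrt(lam[-1] * lam[0])    # approximate optimal alpha, cf. (3.24)
P = hss_lspia(B, Q, mu, alpha)
```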

    Remark 3.1. Since the skew-Hermitian part $S=0$, HSS-LSPIA can be called a shifted-splitting LSPIA [19].

    Consider an NTP basis $\{B_i(t_j)\}_{i,j=1}^{n,m}$ with $m\geq n$ and a set of data points $\{Q_j\}_{j=1}^{m}$ to be fitted in either $\mathbb{R}^{m\times2}$ or $\mathbb{R}^{m\times3}$. The initial blending curves $C^{0}(t)$ and $C^{\frac12}(t)$ can then be generated using these data points to form the initial control points $\{P_i^{0}\}_{i=1}^{n}$ in $\mathbb{R}^{n\times2}$ or $\mathbb{R}^{n\times3}$ and $\{P_i^{\frac12}\}_{i=1}^{n}$, with a real increasing parameter set $\{t_i\}_{i=1}^{m}$, i.e. $0=t_1<t_2<\cdots<t_m=1$. The initial blending curves can be formed as follows:

    $C^{0}(t)=\sum_{i=1}^{n}B_i(t)P_i^{0},\quad t\in[0,1]$ (3.13)

    and

    $C^{\frac12}(t)=\sum_{i=1}^{n}B_i(t)P_i^{\frac12},\quad t\in[0,1].$ (3.14)

    Then, the remaining curves of the sequence $C^{k+1}(t)$ for $k\geq0$ can be computed as follows:

    $C^{k+1}(t)=\sum_{i=1}^{n}B_i(t)P_i^{k+1},\quad t\in[0,1],$ (3.15)

    where

    $\begin{cases}P_i^{k+1}=P_i^{k+\frac12}+\Delta_i^{k},\\ \Delta_i^{k}=\frac{\mu}{\alpha}\sum_{j=1}^{m}B_i(t_j)\left(Q_j-C^{k+\frac12}(t_j)\right),\\ P_i^{k+\frac12}=P_i^{k}+\frac{\mu}{\alpha}\sum_{j=1}^{m}B_i(t_j)\left(Q_j-C^{k+\frac12}(t_j)\right).\end{cases}$ (3.16)

    In (3.16), $C^{k+\frac12}(t_j)$ is defined as follows:

    $C^{k+\frac12}(t_j)=\sum_{i=1}^{n}B_i(t_j)P_i^{k+\frac12},\quad k=0,1,2,\ldots.$ (3.17)

    The global parameter $\mu$ and the positive constant $\alpha$ satisfy

    $0<\mu<\frac{2}{\lambda_{\max}},\quad\alpha>0,$ (3.18)

    where $\lambda_{\max}$ denotes the largest eigenvalue of $B^{T}B$. If (3.16) converges, then

    $\lim_{k\to\infty}P_i^{k}=\lim_{k\to\infty}P_i^{k+\frac12}=\lim_{k\to\infty}P_i^{k+1},$ (3.19)

    and the HSS-LSPIA format for curves, consisting of two iterations and replacing the sequential iterative step from each control point in LSPIA, is given by (3.16).

    Remark 3.2. From (3.16), $\lim_{k\to\infty}C^{k+1}(t_i)$ exists if for any $\epsilon>0$ there is a natural number $A$ such that $\|P_i^{k+1}-P_i^{k}\|<\epsilon$ whenever $k>A$. Furthermore, convergence depends on whether the spectral radius of the iteration matrix of (3.11) is less than $1$.

    Lemma 3.3. [1] Given an NTP basis $\{B_i(t_j)\}_{i,j=1}^{n,m}$ with $m\geq n$ such that $B^{T}B$ is nonsingular, and assuming that $\lambda_i(B^{T}B)$, $i=1,2,\ldots,n$, are the eigenvalues of $B^{T}B$, then $0<\lambda_i(B^{T}B)\leq1$.

    Let $B\in\mathbb{C}^{m\times n}$ and let $\mu B^{T}B=M_i-N_i$ $(i=1,2)$ be two splittings of the matrix $\mu B^{T}B$. In matrix form, the two iterative processes of Eq (3.16) can be written as

    $\begin{cases}M_1P^{k+\frac12}=N_1P^{k}+\mu B^{T}Q,\\ M_2P^{k+1}=N_2P^{k+\frac12}+\mu B^{T}Q,\end{cases}$ (3.20)

    where $M_1=\alpha I+\mu B^{T}B$, $N_1=\alpha I$, $M_2=\alpha I$, $N_2=\alpha I-\mu B^{T}B$, and $I$ is the identity matrix. Then we can rewrite (3.20) as a unified matrix iteration format as follows:

    $P^{k+1}=M_2^{-1}N_2M_1^{-1}N_1P^{k}+\mu M_2^{-1}(I+N_2M_1^{-1})B^{T}Q,\quad k=0,1,2,\ldots,$ (3.21)

    or

    $P^{k+1}=MP^{k}+\mu M_0B^{T}Q,\quad k=0,1,2,\ldots,$ (3.22)

    where $M=M_2^{-1}N_2M_1^{-1}N_1$ and $M_0=M_2^{-1}(I+N_2M_1^{-1})$.

    Now we have to show that the iterative control point sequence $\{P^{k+1}\}$ converges to the unique solution $P^{*}$, i.e. $\rho(M)<1$.

    Theorem 3.4. Let $B\in\mathbb{C}^{m\times n}$ with $m\geq n$ be such that $B^{T}B$ is positive definite, let $H=\frac{\mu}{2}\left(B^{T}B+(B^{T}B)^{T}\right)=\mu B^{T}B$ and $S=\frac{\mu}{2}\left(B^{T}B-(B^{T}B)^{T}\right)=0$ be its Hermitian and skew-Hermitian parts, and let $\alpha$ be a positive constant. The iteration matrix $M(\alpha)$ of the HSS-LSPIA iteration is then given by

    $M(\alpha)=(\alpha I+S)^{-1}(\alpha I-H)(\alpha I+H)^{-1}(\alpha I-S)$

    or

    $M(\alpha)=(\alpha I)^{-1}(\alpha I-\mu B^{T}B)(\alpha I+\mu B^{T}B)^{-1}(\alpha I),$ (3.23)

    and the spectral radius $\rho(M(\alpha))$ of $M(\alpha)$ is bounded by

    $\sigma(\alpha)\equiv\max_{\lambda_i\in\lambda(H)}\left|\frac{\alpha-\lambda_i}{\alpha+\lambda_i}\right|,$

    where $\lambda(H)$ is the spectral set of the matrix $H$.

    As a consequence, it holds that

    $\rho(M(\alpha))\leq\sigma(\alpha)<1\quad\forall\,\alpha>0,$

    i.e., the HSS-LSPIA iteration converges to the unique solution $P^{*}$ of the linear system (3.3).

    Proof. By the similarity invariance of the matrix spectrum, we have

    $\begin{aligned}\rho(M(\alpha))&=\rho\left((\alpha I)^{-1}(\alpha I-\mu B^{T}B)(\alpha I+\mu B^{T}B)^{-1}(\alpha I)\right)\\&=\rho\left((\alpha I-\mu B^{T}B)(\alpha I+\mu B^{T}B)^{-1}(\alpha I)(\alpha I)^{-1}\right)\\&\leq\left\|(\alpha I-\mu B^{T}B)(\alpha I+\mu B^{T}B)^{-1}(\alpha I)(\alpha I)^{-1}\right\|_2\\&\leq\left\|(\alpha I-\mu B^{T}B)(\alpha I+\mu B^{T}B)^{-1}\right\|_2\cdot\left\|(\alpha I)(\alpha I)^{-1}\right\|_2.\end{aligned}$

    We know that $(\alpha I)(\alpha I)^{-1}=I$, which is a unitary matrix, i.e., $\|(\alpha I)(\alpha I)^{-1}\|_2=1$. Then, we get

    $\rho(M(\alpha))\leq\left\|(\alpha I-\mu B^{T}B)(\alpha I+\mu B^{T}B)^{-1}\right\|_2=\max_{\lambda_i\in\lambda(H)}\left|\frac{\alpha-\lambda_i}{\alpha+\lambda_i}\right|.$

    Since $\lambda_i>0$ $(i=1,2,3,\ldots,n)$ and $\alpha$ is a positive constant, it is easy to see that $\rho(M(\alpha))\leq\sigma(\alpha)<1$.

    Corollary 3.5. Under the same assumptions as Theorem 3.4, let $\lambda_{\max}$ and $\lambda_{\min}$ be the maximum and minimum eigenvalues of $B^{T}B$, respectively. Then the HSS-LSPIA method for curve fitting attains its approximately fastest convergence rate when

    $\alpha^{*}=\mu\sqrt{\lambda_{\max}(B^{T}B)\lambda_{\min}(B^{T}B)},$ (3.24)

    where $\mu=\frac{2}{\lambda_{\max}+\lambda_{\min}}$ as presented in [3].

    Proof. In [8], the optimal spectral radius is derived. Now,

    $\sigma(\alpha)\equiv\max\left\{\left|\frac{\alpha-\lambda_{\min}}{\alpha+\lambda_{\min}}\right|,\left|\frac{\alpha-\lambda_{\max}}{\alpha+\lambda_{\max}}\right|\right\}.$

    To compute an approximately optimal $\alpha>0$ such that the convergence factor $\rho(M(\alpha))$ of the HSS-LSPIA iteration is minimized, we can minimize the upper bound $\sigma(\alpha)$ of $\rho(M(\alpha))$ instead. If $\alpha^{*}$ is such a minimum point, then it must satisfy $\alpha^{*}-\lambda_{\min}>0$, $\alpha^{*}-\lambda_{\max}<0$, and

    $\frac{\alpha^{*}-\lambda_{\min}}{\alpha^{*}+\lambda_{\min}}=\frac{\lambda_{\max}-\alpha^{*}}{\lambda_{\max}+\alpha^{*}}.$

    Therefore,

    $\alpha^{*}=\sqrt{\lambda_{\max}(\mu B^{T}B)\lambda_{\min}(\mu B^{T}B)}$

    or

    $\alpha^{*}=\mu\sqrt{\lambda_{\max}(B^{T}B)\lambda_{\min}(B^{T}B)}.$

    Remark 3.6. From (3.16) and Corollary 3.5, we get $\frac{\mu}{\alpha^{*}}=\frac{\mu}{\mu\sqrt{\lambda_{\max}(B^{T}B)\lambda_{\min}(B^{T}B)}}=\frac{1}{\sqrt{\lambda_{\max}(B^{T}B)\lambda_{\min}(B^{T}B)}}=\omega$. Then, we use $\omega=\frac{1}{\sqrt{\lambda_{\max}(B^{T}B)\lambda_{\min}(B^{T}B)}}$ in (3.16) instead of $\frac{\mu}{\alpha}$.
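    Corollary 3.5 and Remark 3.6 can be checked numerically. The matrix below is a stand-in of our own (not from the paper), and the bound $\sigma(\alpha^{*})$ of Theorem 3.4 is evaluated at the extreme eigenvalues of $H=\mu B^{T}B$:

```python
import numpy as np

# Stand-in collocation-style matrix with m > n and full column rank.
B = np.vander(np.linspace(0.1, 0.9, 6), 3, increasing=True)
lam = np.linalg.eigvalsh(B.T @ B)             # ascending eigenvalues of B^T B
lam_min, lam_max = lam[0], lam[-1]

mu = 2.0 / (lam_max + lam_min)                # mu as in [3]
alpha = mu * np.sqrt(lam_max * lam_min)       # approximate optimal alpha (3.24)
omega = mu / alpha                            # = 1 / sqrt(lam_max * lam_min), Remark 3.6

# Upper bound sigma(alpha) from Theorem 3.4 at the extreme eigenvalues of H = mu B^T B.
sigma = max(abs((alpha - mu * lam_min) / (alpha + mu * lam_min)),
            abs((alpha - mu * lam_max) / (alpha + mu * lam_max)))
```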

    Let two NTP bases $\{B_i(t_{i'})\}_{i,i'=1}^{n_1,m_1}\in\mathbb{C}^{m_1\times n_1}$ and $\{N_j(v_{j'})\}_{j,j'=1}^{n_2,m_2}\in\mathbb{C}^{m_2\times n_2}$ and a set of data points $\{Q_{ij}\}_{i,j=1}^{m_1,m_2}\in\mathbb{R}^{(m_1\times m_2)\times3}$ to be fitted be given, where $m_1\geq n_1$ and $m_2\geq n_2$. The initial blending surfaces $C^{0}(t,v)$ and $C^{\frac12}(t,v)$ can then be generated using these data points to form the initial control points $\{P_{ij}^{0}\}_{i,j=1}^{n_1,n_2}\in\mathbb{R}^{(n_1\times n_2)\times3}$ and $\{P_{ij}^{\frac12}\}_{i,j=1}^{n_1,n_2}$, with real increasing parameter sets $\{t_i\}_{i=1}^{m_1}$, i.e. $0=t_1<t_2<\cdots<t_{m_1}=1$, and $\{v_j\}_{j=1}^{m_2}$, i.e. $0=v_1<v_2<\cdots<v_{m_2}=1$. The initial blending surfaces can be formed as follows:

    $C^{0}(t,v)=\sum_{i=1}^{n_1}\sum_{j=1}^{n_2}B_i(t)N_j(v)P_{i,j}^{0},\quad t,v\in[0,1],$ (3.25)

    and

    $C^{\frac12}(t,v)=\sum_{i=1}^{n_1}\sum_{j=1}^{n_2}B_i(t)N_j(v)P_{i,j}^{\frac12},\quad t,v\in[0,1].$ (3.26)

    Let $D=B^{T}B\otimes N^{T}N$, where $\otimes$ is the Kronecker product. Then, the remaining surfaces of the sequence $C^{k+1}(t,v)$ for $k\geq0$ can be computed as follows:

    $C^{k+1}(t,v)=\sum_{i=1}^{n_1}\sum_{j=1}^{n_2}B_i(t)N_j(v)P_{i,j}^{k+1},$ (3.27)

    where

    $\begin{cases}P_{i,j}^{k+1}=P_{i,j}^{k+\frac12}+\Delta_{i,j}^{k},\\ \Delta_{i,j}^{k}=\frac{\mu}{\alpha}D^{T}\left(Q_{i,j}-C^{k+\frac12}(t_i,v_j)\right),\\ P_{i,j}^{k+\frac12}=P_{i,j}^{k}+\frac{\mu}{\alpha}D^{T}\left(Q_{i,j}-C^{k+\frac12}(t_i,v_j)\right).\end{cases}$ (3.28)

    We can use the following expression in (3.28), as in the case of curves:

    $C^{k+\frac12}(t_i,v_j)=\sum_{h=1}^{n_1}\sum_{l=1}^{n_2}B_h(t_i)N_l(v_j)P_{h,l}^{k+\frac12},\quad k=0,1,2,\ldots.$ (3.29)

    If (3.28) converges, then

    $\lim_{k\to\infty}P_{i,j}^{k}=\lim_{k\to\infty}P_{i,j}^{k+\frac12}=\lim_{k\to\infty}P_{i,j}^{k+1},$ (3.30)

    and the HSS-LSPIA format for surfaces, consisting of two iterations and replacing the sequential iterative step length from each control point in LSPIA, is given by (3.28).

    Lemma 3.7. [1] Given NTP bases $\{B_i(t_{i'})\}_{i,i'=1}^{n_1,m_1}\in\mathbb{C}^{m_1\times n_1}$ and $\{N_j(v_{j'})\}_{j,j'=1}^{n_2,m_2}\in\mathbb{C}^{m_2\times n_2}$ such that $B^{T}B$ and $N^{T}N$ are nonsingular, where $m_1\geq n_1$ and $m_2\geq n_2$, and assuming that $\lambda_i(B^{T}B)$, $i=1,2,\ldots,n_1$, and $\lambda_j(N^{T}N)$, $j=1,2,\ldots,n_2$, are their respective eigenvalues, then

    (1) $0<\lambda_i(B^{T}B)\leq1$ and $0<\lambda_j(N^{T}N)\leq1$;

    (2) $0<\lambda_i(D)\leq1$, where $D=B^{T}B\otimes N^{T}N$ and $\otimes$ is the Kronecker product.
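    Lemma 3.7(2) relies on the fact that the eigenvalues of a Kronecker product are the pairwise products of the factors' eigenvalues. A quick numerical check with stand-in matrices whose spectra lie in $(0,1]$ (chosen by us for illustration):

```python
import numpy as np

BtB = np.array([[0.6, 0.2], [0.2, 0.6]])   # eigenvalues 0.4 and 0.8
NtN = np.array([[1.0, 0.0], [0.0, 0.5]])   # eigenvalues 1.0 and 0.5
D = np.kron(BtB, NtN)                      # stand-in for D = B^T B (kron) N^T N
eig = np.linalg.eigvalsh(D)                # pairwise products {0.4, 0.8} x {0.5, 1.0}
```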

    Let $\mu D^{T}D=\hat M_i-\hat N_i$ $(i=1,2)$ be two splittings of the matrix $\mu D^{T}D$. The two iterative processes of (3.28) can be written in the matrix form

    $\begin{cases}\hat M_1P^{k+\frac12}=\hat N_1P^{k}+\mu D^{T}Q,\\ \hat M_2P^{k+1}=\hat N_2P^{k+\frac12}+\mu D^{T}Q,\end{cases}$ (3.31)

    where $\hat M_1=\alpha I+\mu D^{T}D$, $\hat N_1=\alpha I$, $\hat M_2=\alpha I$, $\hat N_2=\alpha I-\mu D^{T}D$, and $I$ is the identity matrix. Then we can rewrite (3.31) as a unified matrix iteration format as follows:

    $P^{k+1}=\hat M_2^{-1}\hat N_2\hat M_1^{-1}\hat N_1P^{k}+\mu\hat M_2^{-1}(I+\hat N_2\hat M_1^{-1})D^{T}Q,\quad k=0,1,2,\ldots,$ (3.32)

    or

    $P^{k+1}=\hat MP^{k}+\mu\hat M_0D^{T}Q,\quad k=0,1,2,\ldots,$ (3.33)

    where $\hat M=\hat M_2^{-1}\hat N_2\hat M_1^{-1}\hat N_1$ and $\hat M_0=\hat M_2^{-1}(I+\hat N_2\hat M_1^{-1})$.

    Now we need to show that the iterative control point sequence $\{P^{k+1}\}$ converges to the unique solution $P^{*}$, i.e. $\rho(\hat M)<1$.

    Theorem 3.8. Let $D$ be a positive definite matrix, let $\hat H=\frac{\mu}{2}\left(D^{T}D+(D^{T}D)^{T}\right)=\mu D^{T}D$ and $\hat S=\frac{\mu}{2}\left(D^{T}D-(D^{T}D)^{T}\right)=0$ be its Hermitian and skew-Hermitian parts, and let $\alpha$ be a positive constant. The iteration matrix $\hat M(\alpha)$ of the HSS-LSPIA iteration is then given by

    $\hat M(\alpha)=(\alpha I+\hat S)^{-1}(\alpha I-\hat H)(\alpha I+\hat H)^{-1}(\alpha I-\hat S)$

    or

    $\hat M(\alpha)=(\alpha I)^{-1}(\alpha I-\mu D^{T}D)(\alpha I+\mu D^{T}D)^{-1}(\alpha I),$ (3.34)

    and the spectral radius $\rho(\hat M(\alpha))$ of $\hat M(\alpha)$ is bounded by

    $\sigma(\alpha)\equiv\max_{\lambda_i\in\lambda(\hat H)}\left|\frac{\alpha-\lambda_i}{\alpha+\lambda_i}\right|,$

    where $\lambda(\hat H)$ is the spectral set of the matrix $\hat H$.

    As a consequence, it holds that

    $\rho(\hat M(\alpha))\leq\sigma(\alpha)<1\quad\forall\,\alpha>0,$

    i.e., the HSS-LSPIA iteration converges to the unique solution $P^{*}$ of the linear system (3.3).

    Proof. By the similarity invariance of the matrix spectrum, we get

    $\begin{aligned}\rho(\hat M(\alpha))&=\rho\left((\alpha I)^{-1}(\alpha I-\mu D^{T}D)(\alpha I+\mu D^{T}D)^{-1}(\alpha I)\right)\\&=\rho\left((\alpha I-\mu D^{T}D)(\alpha I+\mu D^{T}D)^{-1}(\alpha I)(\alpha I)^{-1}\right)\\&\leq\left\|(\alpha I-\mu D^{T}D)(\alpha I+\mu D^{T}D)^{-1}(\alpha I)(\alpha I)^{-1}\right\|_2\\&\leq\left\|(\alpha I-\mu D^{T}D)(\alpha I+\mu D^{T}D)^{-1}\right\|_2\cdot\left\|(\alpha I)(\alpha I)^{-1}\right\|_2.\end{aligned}$

    We know that $(\alpha I)(\alpha I)^{-1}=I$, which is a unitary matrix, i.e., $\|(\alpha I)(\alpha I)^{-1}\|_2=1$. Then, we get

    $\rho(\hat M(\alpha))\leq\left\|(\alpha I-\mu D^{T}D)(\alpha I+\mu D^{T}D)^{-1}\right\|_2=\max_{\lambda_i\in\lambda(\hat H)}\left|\frac{\alpha-\lambda_i}{\alpha+\lambda_i}\right|.$

    Since $\lambda_i>0$ $(i=1,2,\ldots,n)$ and $\alpha$ is a positive constant, it is easy to see that $\rho(\hat M(\alpha))\leq\sigma(\alpha)<1$.

    Corollary 3.9. Under the same assumptions as Theorem 3.8, let $\lambda_{\max}$ and $\lambda_{\min}$ be the maximum and minimum eigenvalues of $D^{T}D$, respectively. Then the HSS-LSPIA method for surface fitting attains its approximately fastest convergence rate when

    $\alpha^{*}=\mu\sqrt{\lambda_{\max}(D^{T}D)\lambda_{\min}(D^{T}D)},$ (3.35)

    where $\mu=\frac{2}{\lambda_{\max}+\lambda_{\min}}$ as presented in [3].

    The proof of Corollary 3.9 is omitted since it is similar to that of Corollary 3.5.

    Remark 3.10. From (3.28) and Corollary 3.9, we get $\frac{\mu}{\alpha^{*}}=\frac{\mu}{\mu\sqrt{\lambda_{\max}(D^{T}D)\lambda_{\min}(D^{T}D)}}=\frac{1}{\sqrt{\lambda_{\max}(D^{T}D)\lambda_{\min}(D^{T}D)}}=\hat\omega$. Then, we use $\hat\omega=\frac{1}{\sqrt{\lambda_{\max}(D^{T}D)\lambda_{\min}(D^{T}D)}}$ in (3.28) instead of $\frac{\mu}{\alpha}$.

    In this section, we demonstrate the effectiveness of the HSS-LSPIA method on three B-spline curve fitting examples and three B-spline tensor product surface fitting examples, as follows:

    ● Example 1: A subdivision curve generated using the incenter subdivision technique yielded 100 points (music notation).

    ● Example 2: 68 points taken evenly from the polar coordinate equation $y=3\cos(\theta)$ ($\theta\in[0,6.28]$).

    ● Example 3: G-shape font outline characteristics, measured and smoothed, with 315 points.

    ● Example 4: the surface model $z=y^{2}\cos(x)$ with $21\times21$ (441) points.

    ● Example 5: the surface model $z=y\sin(x^{2})-x\cos(y^{2})$ with $21\times21$ (441) points.

    ● Example 6: the surface model $z=x^{2}\sin(y^{2})$ with $33\times33$ (1089) points.

    For all the examples, the number of control points is equal to the number of data points, and the number of control points is the same in each method. We show the details of the implementation of B-spline curve fitting with the HSS-LSPIA method in the following; the B-spline tensor product surface fitting proceeds in just the same way. Given an ordered point set $\{Q_j\}_{j=1}^{m}$, we assign the parameters $\{t_j\}_{j=1}^{m}$ to $\{Q_j\}_{j=1}^{m}$ with the normalized accumulated chord-length parameterization method [3,20], that is, $t_1=0$, $t_m=1$,

    $t_j=t_{j-1}+\frac{\|Q_j-Q_{j-1}\|}{D},\quad j\in\{2,3,4,\ldots,m-1\},$ (4.1)

    where $D=\sum_{j=2}^{m}\|Q_j-Q_{j-1}\|$ is the total chord length. In addition, the knot vector for the cubic B-spline fitting curve $C(t)=\sum_{i=1}^{n}B_{i,3}(t)P_i$ is defined as $\{0,0,0,0,\bar t_4,\bar t_5,\ldots,\bar t_n,1,1,1,1\}$, where

    $\bar t_{i+3}=(1-\alpha)t_{j-1}+\alpha t_j,$ (4.2)

    with $d=\frac{m}{n-2}$, $j=\lfloor id\rfloor$, $\alpha=id-j$, and $i\in\{1,2,3,\ldots,n-3\}$.
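    The parameterization (4.1) and the knot rule (4.2) can be sketched as follows; the paper's 1-based indices are mapped to 0-based arrays, so the index arithmetic is our own interpretation:

```python
import numpy as np

def chord_length_params(Q):
    """Normalized accumulated chord-length parameters (4.1):
    t_1 = 0, t_m = 1, interior values from accumulated chord lengths."""
    d = np.linalg.norm(np.diff(Q, axis=0), axis=1)   # ||Q_j - Q_{j-1}||
    return np.concatenate(([0.0], np.cumsum(d) / d.sum()))

def interior_knots(t, n):
    """Interior knots t-bar_{i+3}, i = 1, ..., n-3, via the averaging
    rule (4.2) with d = m / (n - 2); 0-based indexing of t assumed."""
    m = len(t)
    d = m / (n - 2)
    knots = []
    for i in range(1, n - 2):          # i = 1, ..., n-3
        j = int(np.floor(i * d))
        a = i * d - j
        knots.append((1 - a) * t[j - 1] + a * t[j])
    return np.array(knots)
```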

    In our implementation, we choose the parameter $\alpha$ defined by (3.24) for the HSS-LSPIA method. For the comparison of the LSPIA [3], ELSPIA [6], WHPIA [9] and HSS-LSPIA methods, the iteration process is stopped when $|E_{k+1}-E_k|<10^{-7}$, where the fitting error of the $k$th iteration is computed by

    $E_k=\sum_{j=1}^{m}\left\|Q_j-\sum_{i=1}^{n}B_i(t_j)P_i^{k}\right\|_2,\quad k\geq0,$ (4.3)

    and we compare $E_k$ at each iteration to show the efficiency and validity of the HSS-LSPIA.
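    The error (4.3) and the stopping rule can be sketched as follows (the function names and toy data are ours):

```python
import numpy as np

def fitting_error(B, P, Q):
    """Fitting error (4.3): E_k = sum_j || Q_j - sum_i B_i(t_j) P_i^k ||_2."""
    R = Q - B @ P                               # residuals at the parameter values
    return float(np.sum(np.linalg.norm(R, axis=1)))

def stop(E_next, E_prev, tol=1e-7):
    """Stopping criterion |E_{k+1} - E_k| < 10^{-7}."""
    return abs(E_next - E_prev) < tol
```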

    All the examples are operated on a laptop with a 2.4 GHz Quad-Core Intel Core i5 processor and 8 GB memory via MATLAB R2019a.

    In Examples 1–3, the cubic B-spline curves created by the LSPIA [3], ELSPIA [6], WHPIA [9] and HSS-LSPIA methods, together with the fitting results, are shown in Figures 1–3, respectively. In Examples 4–6, the B-spline tensor product surfaces created by the LSPIA [3], ELSPIA [6], WHPIA [9] and HSS-LSPIA methods, together with the fitting results, are shown in Figures 4–6, respectively. In each of the three curve examples, the blue points represent known points that form a dotted limit curve to be fitted, the green points represent the control points obtained at the corresponding step, and the red line shows the curve at each iteration.

    Figure 1.  The fitting results of Example 1 using LSPIA, ELSPIA, WHPIA and HSS-LSPIA.
    Figure 2.  The fitting results of Example 2 using LSPIA, ELSPIA, WHPIA and HSS-LSPIA.
    Figure 3.  The fitting results of Example 3 using LSPIA, ELSPIA, WHPIA and HSS-LSPIA.
    Figure 4.  The fitting results of Example 4 using LSPIA, ELSPIA, WHPIA and HSS-LSPIA, where 4c and 4d show the result of LSPIA, 4e and 4f show the result of ELSPIA, 4g and 4h show the result of WHPIA, and 4i and 4j show the result of HSS-LSPIA.
    Figure 5.  The fitting results of Example 5 using LSPIA, ELSPIA, WHPIA and HSS-LSPIA, where 5c and 5d show the result of LSPIA, 5e and 5f show the result of ELSPIA, 5g and 5h show the result of WHPIA, and 5i and 5j show the result of HSS-LSPIA.
    Figure 6.  The fitting results of Example 6 using LSPIA, ELSPIA, WHPIA and HSS-LSPIA, where 6c and 6d show the result of LSPIA, 6e and 6f show the result of ELSPIA, 6g and 6h show the result of WHPIA, and 6i and 6j show the result of HSS-LSPIA.

    In Figure 7a–7f, we can see that the HSS-LSPIA method has a smaller fitting error at the $k$th iteration than the ELSPIA, LSPIA and WHPIA methods, respectively.

    Figure 7.  The comparison of errors for Examples 1–6.

    Table 1 lists the columns "IT" (iteration numbers), "CPU time(s)" (the average CPU time over ten runs) and "$E_{k+1}$" (the fitting error at the $(k+1)$th iteration) for the LSPIA [3], ELSPIA [6], WHPIA [9] and HSS-LSPIA methods. We can observe that the HSS-LSPIA method requires fewer iterations, uses less CPU time, and has the smallest fitting error at the $(k+1)$th iteration. We can conclude from the experiments that the HSS-LSPIA method is faster than the ELSPIA, LSPIA and WHPIA methods, respectively, in terms of convergence speed. The examples provided in this section are intended only to show the effectiveness of the proposed method. Because the spectral radius varies depending on the NTP bases, we cannot guarantee that our method will converge faster than ELSPIA, LSPIA, and WHPIA in other examples.

    Table 1.  Comparison of IT, CPU time(s) and $E_{k+1}$.
    Examples Methods IT CPU time(s) Ek+1
    Example 1 LSPIA 101 0.0084870 1.6400E-05
    ELSPIA 59 0.0037290 1.5100E-05
    WHPIA 130 0.0082561 1.9899E-05
    HSS-LSPIA 47 0.0020687 3.0800E-06
    Example 2 LSPIA 410 0.0069510 3.2800E-05
    ELSPIA 105 0.0044940 4.7500E-05
    WHPIA 1877 0.0319717 5.2800E-04
    HSS-LSPIA 65 0.0013589 3.7900E-06
    Example 3 LSPIA 395 0.0458660 1.6100E-05
    ELSPIA 85 0.0091790 1.3100E-05
    WHPIA 763 0.0747946 2.4400E-04
    HSS-LSPIA 23 0.0059934 1.1220E-06
    Example 4 LSPIA 470 5.5093410 2.7500E-04
    ELSPIA 137 1.7922920 1.6703E-04
    WHPIA 2901 64.869787 2.0210E-03
    HSS-LSPIA 73 1.2409530 1.6634E-05
    Example 5 LSPIA 477 6.887048 3.9281E-04
    ELSPIA 407 5.638451 2.7947E-04
    WHPIA 3834 89.834053 1.9947E-03
    HSS-LSPIA 74 1.335767 2.5180E-05
    Example 6 LSPIA 1220 42.263918 2.0272E-04
    ELSPIA 623 7.318428 1.0700E-04
    WHPIA 3935 68.133107 5.6847E-03
    HSS-LSPIA 96 1.596564 3.7280E-05


    In this research, based on the HSS iterative approach for solving the system of linear equations of LSPIA, a new progressive and iterative approximation for least squares fitting method called HSS-LSPIA was developed to solve the problems of curve and surface approximation with NTP bases. The HSS-LSPIA format consists of two iterations whose iterative difference vectors differ from each other. We also provided an approximate value of the positive constant giving the fastest convergence. In terms of iterative steps and computing time, the numerical results indicate that the HSS-LSPIA method is more efficient than the ELSPIA, LSPIA, and WHPIA methods, respectively, and demonstrate that the HSS-LSPIA method can be faster than these methods in terms of convergence speed.

    The authors acknowledge the financial support provided by the Center of Excellence in Theoretical and Computational Science (TaCS-CoE), KMUTT. The first author was supported by the Petchra Pra Jom Klao Ph.D. Research Scholarship from King Mongkut's University of Technology Thonburi (Grant No. 54/2561). Moreover, this research project is supported by Thailand Science Research and Innovation (TSRI) Basic Research Fund: Fiscal year 2022 under project number FRB650048/0164.

    The authors declare that there is no conflict of interest regarding the publication of this article.



    [1] H. W. Lin, H. J. Bao, G. J. Wang, Totally positive bases and progressive iteration approximation, Comput. Math. Appl., 50 (2005), 575–586. https://doi.org/10.1016/j.camwa.2005.01.023 doi: 10.1016/j.camwa.2005.01.023
    [2] H. Lin, Z. Zhang, An extended iterative format for the progressive-iteration approximation, Comput. Graph., 35 (2011), 967–975. https://doi.org/10.1016/j.cag.2011.07.003 doi: 10.1016/j.cag.2011.07.003
    [3] C. Deng, H. Lin, Progressive and iterative approximation for least squares B-spline curve and surface fitting, Comput.-Aided Des., 47 (2014), 32–44. https://doi.org/10.1016/j.cad.2013.08.012 doi: 10.1016/j.cad.2013.08.012
    [4] A. Ebrahimi, G. B. Loghmani, A composite iterative procedure with fast convergence rate for the progressive-iteration approximation of curves, J. Comput. Appl. Math., 359 (2019), 1–15. https://doi.org/10.1016/j.cam.2019.03.025 doi: 10.1016/j.cam.2019.03.025
    [5] H. Lin, T. Maekawa, C. Deng, Survey on geometric iterative methods and their applications, Comput.-Aided Des., 95 (2018), 40–51. https://doi.org/10.1016/j.cad.2017.10.002 doi: 10.1016/j.cad.2017.10.002
    [6] H. Wang, On extended progressive and iterative approximation for least squares fitting, Vis. Comput., 38 (2022), 591–602. https://doi.org/10.1007/s00371-020-02036-8 doi: 10.1007/s00371-020-02036-8
    [7] J. M. Peña, Shape preserving representations in computer-aided geometric design, Commack, New York: Nova Science Publishers, 1999.
    [8] Z. Z. Bai, G. Golub, M. Ng, Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems, SIAM J. Matrix Anal. Appl., 24 (2003), 603–626. https://doi.org/10.1016/j.laa.2007.02.018 doi: 10.1016/j.laa.2007.02.018
    [9] L. Hu, H. Shou, Z. Dai, HSS-iteration-based iterative interpolation of curves and surfaces with NTP bases, In: Simulation tools and techniques, Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, Springer, 2019. https://doi.org/10.1007/978-3-030-32216-8_36
    [10] H. Lin, Z. Zhang, An efficient method for fitting large data sets using T-splines, SIAM J. Sci. Comput., 35 (2013), A3052–A3068. https://doi.org/10.1137/120888569 doi: 10.1137/120888569
    [11] G. H. Golub, C. F. Van Loan, Matrix computations, 3 Eds., Baltimore: Johns Hopkins University Press, 1996.
    [12] Y. Saad, Iterative methods for sparse linear systems, 2 Eds., SIAM, Philadelphia, 2003.
    [13] D. M. Young, Iterative solution of large linear systems, New York: Academic Press, 1971.
    [14] C. Liu, J. Li, L. Hu, Jacobi–PIA algorithm for bi-cubic B-spline interpolation surfaces, Graph. Models, 24 (2022), 101134. https://doi.org/10.1016/j.gmod.2022.101134 doi: 10.1016/j.gmod.2022.101134
    [15] F. H. Yusuf, Y. Jiang, H. Lin, Gauss-seidel progressive and iterative approximation for least squares fitting, J. Comput.-Aided Des. Comput. Graph., 33 (2021), 1–10. https://doi.org/10.3724/SP.J.1089.2021.18289 doi: 10.3724/SP.J.1089.2021.18289
    [16] C. Liu, X. Han, L. Zhang, Unconditional convergence of iterative approximation methods, Eng. Anal. Bound. Elem., 126 (2021), 161–168. https://doi.org/10.1016/j.enganabound.2021.03.001 doi: 10.1016/j.enganabound.2021.03.001
    [17] G. H. Golub, D. Vanderstraeten, On the preconditioning of matrices with a dominant skew-symmetric component, Numer. Algorithms, 25 (2000), 223–239.
    [18] C. L. Wang, Z. Z. Bai, Sufficient conditions for the convergent splittings of non-Hermitian positive definite matrices, Linear Algebra Appl., 330 (2001), 215–218. https://doi.org/10.1016/S0024-3795(01)00275-0 doi: 10.1016/S0024-3795(01)00275-0
    [19] Z. Z. Bai, J. F. Yin, Y. F. Su, Shift-splitting preconditioner for non-Hermitian positive definite matrices, J. Comput. Math., 24 (2006), 539–552.
    [20] I. J. Schoenberg, A. Whitney, On Pólya frequency functions. III. The positivity of translation determinants with an application to the interpolation problem by spline curves, Trans. Amer. Math. Soc., 74 (1953), 246–259.
    © 2022 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
