Research article

On a class of inverse palindromic eigenvalue problem

  • Received: 03 March 2021 Accepted: 11 May 2021 Published: 20 May 2021
  • MSC : 15A09, 15A24

  • In this paper we first consider the following inverse palindromic eigenvalue problem (IPEP): Given matrices $\Lambda=\mathrm{diag}\{\lambda_1,\ldots,\lambda_p\}\in\mathbb{C}^{p\times p}$ with $\lambda_i\neq\lambda_j$ for $i\neq j$, $i,j=1,\ldots,p$, and $X=[x_1,\ldots,x_p]\in\mathbb{C}^{n\times p}$ with $\mathrm{rank}(X)=p$, where both $\Lambda$ and $X$ are closed under complex conjugation in the sense that $\lambda_{2i}=\bar{\lambda}_{2i-1}\in\mathbb{C}$, $x_{2i}=\bar{x}_{2i-1}\in\mathbb{C}^{n}$ for $i=1,\ldots,m$, and $\lambda_j\in\mathbb{R}$, $x_j\in\mathbb{R}^{n}$ for $j=2m+1,\ldots,p$, find a matrix $A\in\mathbb{R}^{n\times n}$ such that $AX=A^{\top}X\Lambda$. We then consider a best approximation problem (BAP): Given $\tilde{A}\in\mathbb{R}^{n\times n}$, find $\hat{A}\in S_A$ such that $\|\hat{A}-\tilde{A}\|=\min_{A\in S_A}\|A-\tilde{A}\|$, where $\|\cdot\|$ is the Frobenius norm and $S_A$ is the solution set of the IPEP. By partitioning the matrix $\Lambda$ and using the QR-decomposition, the expression of the general solution of Problem IPEP is derived. Also, we show that the best approximation solution $\hat{A}$ is unique and derive an explicit formula for it.

    Citation: Jiao Xu, Yinlan Chen. On a class of inverse palindromic eigenvalue problem[J]. AIMS Mathematics, 2021, 6(8): 7971-7983. doi: 10.3934/math.2021463




    Since the 1960s, the rapid development of high-speed rail has made it an important means of transportation. However, vibration arises from the contact between the wheels of the train and the tracks during the operation of a high-speed train. The resulting analytical vibration model can be summarized mathematically as a quadratic palindromic eigenvalue problem (QPEP) (see [1,2])

    $(\lambda^{2}A_{1}+\lambda A_{0}+A_{1}^{\top})x=0,$

    with $A_{i}\in\mathbb{R}^{n\times n}$, $i=0,1$, and $A_{0}^{\top}=A_{0}$. The eigenvalues $\lambda$ and the corresponding eigenvectors $x$ are related to the vibration frequencies and the vibration shapes, respectively. Many scholars have proposed effective methods to solve the QPEP [3,4,5,6,7]. In addition, under mild assumptions, the quadratic palindromic eigenvalue problem can be converted to the following linear palindromic eigenvalue problem (see [8])

    $Ax=\lambda A^{\top}x,$   (1)

    where $A\in\mathbb{R}^{n\times n}$ is a given matrix, and $\lambda\in\mathbb{C}$ and the nonzero vectors $x\in\mathbb{C}^{n}$ are the desired eigenvalues and eigenvectors of the vibration model. Transposing Eq (1) gives $\frac{1}{\lambda}x^{\top}A^{\top}=x^{\top}A$, so $\lambda$ and $\frac{1}{\lambda}$ always come in pairs. Many methods have been proposed to solve the palindromic eigenvalue problem, such as the URV-decomposition based structured method [9], the QR-like algorithm [10], structure-preserving methods [11], and the palindromic doubling algorithm [12].
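
    The reciprocal pairing of the spectrum of the pencil $A-\lambda A^{\top}$ is easy to check numerically. The following minimal sketch (Python with NumPy/SciPy, using a randomly generated matrix as a hypothetical stand-in for a model matrix $A$) tests that every generalized eigenvalue of the pair $(A,A^{\top})$ has its reciprocal in the spectrum as well; it should print True for a generic random matrix.

import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))        # hypothetical stand-in for a model matrix

lam, _ = eig(A, A.T)                   # generalized eigenvalues of A x = lambda A^T x
# every eigenvalue should have its reciprocal among the computed eigenvalues
paired = all(np.min(np.abs(lam - 1.0 / mu)) < 1e-8 for mu in lam)
print(paired)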

    On the other hand, the modal data obtained from the mathematical model are often evidently different from the relevant experimental ones because of the complexity of the structure and inevitable factors of the actual model. Therefore, the coefficient matrices need to be modified so that the updated model satisfies the dynamic equation and closely matches the experimental data. Al-Ammari [13] considered the inverse quadratic palindromic eigenvalue problem. Batzke and Mehl [14] studied the inverse eigenvalue problem for T-palindromic matrix polynomials, excluding the case that both $+1$ and $-1$ are eigenvalues. Zhao et al. [15] updated $\ast$-palindromic quadratic systems with no spill-over. However, the linear inverse palindromic eigenvalue problem has not been extensively considered in recent years.

    In this work, we consider the linear inverse palindromic eigenvalue problem (IPEP), which can be stated as the following problem:

    Problem IPEP. Given a pair of matrices (Λ,X) in the form

    $\Lambda=\mathrm{diag}\{\lambda_{1},\ldots,\lambda_{p}\}\in\mathbb{C}^{p\times p},$

    and

    $X=[x_{1},\ldots,x_{p}]\in\mathbb{C}^{n\times p},$

    where the diagonal elements of $\Lambda$ are all distinct, $X$ is of full column rank $p$, and both $\Lambda$ and $X$ are closed under complex conjugation in the sense that $\lambda_{2i}=\bar{\lambda}_{2i-1}\in\mathbb{C}$, $x_{2i}=\bar{x}_{2i-1}\in\mathbb{C}^{n}$ for $i=1,\ldots,m$, and $\lambda_{j}\in\mathbb{R}$, $x_{j}\in\mathbb{R}^{n}$ for $j=2m+1,\ldots,p$, find a real-valued matrix $A$ that satisfies the equation

    $AX=A^{\top}X\Lambda.$   (2)

    Namely, each pair $(\lambda_{t},x_{t})$, $t=1,\ldots,p$, is an eigenpair of the matrix pencil

    $P(\lambda)=A-\lambda A^{\top}.$

    Since the mathematical model is known to be a "good" representation of the real system, we hope to find a solution that is closest to the original model. Therefore, we consider the following best approximation problem:

    Problem BAP. Given $\tilde{A}\in\mathbb{R}^{n\times n}$, find $\hat{A}\in S_{A}$ such that

    $\|\hat{A}-\tilde{A}\|=\min_{A\in S_{A}}\|A-\tilde{A}\|,$   (3)

    where $\|\cdot\|$ is the Frobenius norm, and $S_{A}$ is the solution set of Problem IPEP.

    In this paper, we put forward a new direct method to solve Problem IPEP and Problem BAP. By partitioning the matrix $\Lambda$ and using the QR-decomposition, the expression of the general solution of Problem IPEP is derived. Also, we show that the best approximation solution $\hat{A}$ of Problem BAP is unique and derive an explicit formula for it.

    We first rearrange the matrix Λ as

    $\Lambda=\begin{bmatrix}1 & 0 & 0\\ 0 & \Lambda_{1} & 0\\ 0 & 0 & \Lambda_{2}\end{bmatrix},$   (4)

    where the diagonal blocks are of orders $t$, $2s$ and $2(k+2l)$, respectively, $t+2s+2(k+2l)=p$, $t=0$ or $1$,

    $\Lambda_{1}=\mathrm{diag}\{\lambda_{1},\lambda_{2},\ldots,\lambda_{2s-1},\lambda_{2s}\},\ \lambda_{i}\in\mathbb{R},\ \lambda_{2i-1}^{-1}=\lambda_{2i},\ 1\leq i\leq s,$
    $\Lambda_{2}=\mathrm{diag}\{\delta_{1},\ldots,\delta_{k},\delta_{k+1},\delta_{k+2},\ldots,\delta_{k+2l-1},\delta_{k+2l}\},\ \delta_{j}\in\mathbb{C}^{2\times 2},$

    with

    $\delta_{j}=\begin{bmatrix}\alpha_{j}+\beta_{j}i & 0\\ 0 & \alpha_{j}-\beta_{j}i\end{bmatrix},\ i=\sqrt{-1},\ 1\leq j\leq k+2l,$
    $\delta_{j}^{-1}=\bar{\delta}_{j},\ 1\leq j\leq k,\qquad \delta_{k+2j-1}^{-1}=\delta_{k+2j},\ 1\leq j\leq l,$

    and the column vectors of $X$ are adjusted correspondingly to those of $\Lambda$.

    Define Tp as

    $T_{p}=\mathrm{diag}\Big\{I_{t+2s},\ \tfrac{1}{\sqrt{2}}\begin{bmatrix}1 & -i\\ 1 & i\end{bmatrix},\ \ldots,\ \tfrac{1}{\sqrt{2}}\begin{bmatrix}1 & -i\\ 1 & i\end{bmatrix}\Big\}\in\mathbb{C}^{p\times p},$   (5)

    where $i=\sqrt{-1}$. It is easy to verify that $T_{p}^{H}T_{p}=I_{p}$. Using the matrix $T_{p}$ of (5), we obtain

    $\tilde{\Lambda}=T_{p}^{H}\Lambda T_{p}=\begin{bmatrix}1 & 0 & 0\\ 0 & \Lambda_{1} & 0\\ 0 & 0 & \tilde{\Lambda}_{2}\end{bmatrix},$   (6)
    $\tilde{X}=XT_{p}=[x_{t},\ldots,x_{t+2s},\sqrt{2}y_{t+2s+1},\sqrt{2}z_{t+2s+1},\ldots,\sqrt{2}y_{p-1},\sqrt{2}z_{p-1}],$   (7)

    where

    $\tilde{\Lambda}_{2}=\mathrm{diag}\Big\{\begin{bmatrix}\alpha_{1} & \beta_{1}\\ -\beta_{1} & \alpha_{1}\end{bmatrix},\ldots,\begin{bmatrix}\alpha_{k+2l} & \beta_{k+2l}\\ -\beta_{k+2l} & \alpha_{k+2l}\end{bmatrix}\Big\}\triangleq\mathrm{diag}\{\tilde{\delta}_{1},\ldots,\tilde{\delta}_{k+2l}\},$

    and $\tilde{\Lambda}_{2}\in\mathbb{R}^{2(k+2l)\times 2(k+2l)}$, $\tilde{X}\in\mathbb{R}^{n\times p}$. Here $y_{t+2s+j}$ and $z_{t+2s+j}$ are, respectively, the real part and the imaginary part of the complex vector $x_{t+2s+j}$ for $j=1,3,\ldots,2(k+2l)-1$. Using (6) and (7), the matrix equation (2) is equivalent to

    $A\tilde{X}=A^{\top}\tilde{X}\tilde{\Lambda}.$   (8)
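
    The real transformation (5)–(7) is straightforward to carry out numerically. Below is a minimal sketch (NumPy), assuming a hypothetical small example with $t=1$, $s=0$, $k=1$, $l=0$ (so $p=3$): one real eigenvalue $1$ and one unimodular complex-conjugate pair. It builds $T_{p}$, applies (6) and (7), and confirms that $\tilde{\Lambda}$ and $\tilde{X}$ come out real.

import numpy as np

alpha, beta = np.cos(0.5), np.sin(0.5)                 # a unimodular conjugate pair alpha +/- beta*i
lam = np.array([1.0, alpha + 1j * beta, alpha - 1j * beta])

n = 4
rng = np.random.default_rng(1)
y, z = rng.standard_normal(n), rng.standard_normal(n)
X = np.column_stack([rng.standard_normal(n), y + 1j * z, y - 1j * z])   # conjugation-closed eigenvectors

block = np.array([[1.0, -1j], [1.0, 1j]]) / np.sqrt(2.0)
Tp = np.block([[np.eye(1), np.zeros((1, 2))],
               [np.zeros((2, 1)), block]])              # T_p of (5) for this block structure

Lam_t = Tp.conj().T @ np.diag(lam) @ Tp                 # (6): diag{1, [[alpha, beta], [-beta, alpha]]}
X_t = X @ Tp                                            # (7): [x_1, sqrt(2)*y, sqrt(2)*z]
print(np.allclose(Lam_t.imag, 0), np.allclose(X_t.imag, 0))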

    Note that $\mathrm{rank}(\tilde{X})=\mathrm{rank}(X)=p$. Now, let the QR-decomposition of $\tilde{X}$ be

    $\tilde{X}=Q\begin{bmatrix}R\\ 0\end{bmatrix},$   (9)

    where $Q=[Q_{1},Q_{2}]\in\mathbb{R}^{n\times n}$ is an orthogonal matrix and $R\in\mathbb{R}^{p\times p}$ is nonsingular. Let

    $Q^{\top}AQ=\begin{bmatrix}A_{11} & A_{12}\\ A_{21} & A_{22}\end{bmatrix},\quad A_{11}\in\mathbb{R}^{p\times p},\ A_{22}\in\mathbb{R}^{(n-p)\times(n-p)}.$   (10)
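
    In floating-point practice, the factorization (9) with a square orthogonal factor corresponds to the 'complete' mode of numpy.linalg.qr. A minimal sketch, with hypothetical dimensions $n=6$, $p=3$ and a random stand-in for $\tilde{X}$, extracts $Q_{1}$, $Q_{2}$ and the nonsingular factor $R$ used in (9)–(12):

import numpy as np

rng = np.random.default_rng(2)
n, p = 6, 3
X_t = rng.standard_normal((n, p))                 # stand-in for the real matrix of (7)

Q, R_full = np.linalg.qr(X_t, mode='complete')    # Q is n x n orthogonal, R_full is n x p
R = R_full[:p, :]                                 # the nonsingular p x p factor in (9)
Q1, Q2 = Q[:, :p], Q[:, p:]
print(np.allclose(X_t, Q1 @ R), np.allclose(Q.T @ Q, np.eye(n)))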

    Using (9) and (10), Eq (8) is equivalent to

    $A_{11}R=A_{11}^{\top}R\tilde{\Lambda},$   (11)
    $A_{21}R=A_{12}^{\top}R\tilde{\Lambda}.$   (12)

    Write

    $R^{\top}A_{11}R\triangleq F=\begin{bmatrix}f_{11} & F_{12} & F_{13}\\ F_{21} & F_{22} & F_{23}\\ F_{31} & F_{32} & F_{33}\end{bmatrix},$   (13)
    where the diagonal blocks $f_{11}$, $F_{22}$ and $F_{33}$ are of orders $t$, $2s$ and $2(k+2l)$, respectively;

    then Eq (11) is equivalent to

    $F_{12}=F_{21}^{\top}\Lambda_{1},\quad F_{21}=F_{12}^{\top},$   (14)
    $F_{13}=F_{31}^{\top}\tilde{\Lambda}_{2},\quad F_{31}=F_{13}^{\top},$   (15)
    $F_{23}=F_{32}^{\top}\tilde{\Lambda}_{2},\quad F_{32}=F_{23}^{\top}\Lambda_{1},$   (16)
    $F_{22}=F_{22}^{\top}\Lambda_{1},$   (17)
    $F_{33}=F_{33}^{\top}\tilde{\Lambda}_{2}.$   (18)

    Because the diagonal elements of $\Lambda_{1}$ and $\tilde{\Lambda}_{2}$ are distinct, we can obtain the following relations from Eqs (14)–(18):

    $F_{12}=0,\ F_{21}=0,\ F_{13}=0,\ F_{31}=0,\ F_{23}=0,\ F_{32}=0,$   (19)
    $F_{22}=\mathrm{diag}\left\{\begin{bmatrix}0 & h_{1}\\ \lambda_{1}h_{1} & 0\end{bmatrix},\ldots,\begin{bmatrix}0 & h_{s}\\ \lambda_{2s-1}h_{s} & 0\end{bmatrix}\right\},$   (20)
    $F_{33}=\mathrm{diag}\left\{G_{1},\ldots,G_{k},\begin{bmatrix}0 & G_{k+1}\\ G_{k+1}^{\top}\tilde{\delta}_{k+1} & 0\end{bmatrix},\ldots,\begin{bmatrix}0 & G_{k+l}\\ G_{k+l}^{\top}\tilde{\delta}_{k+2l-1} & 0\end{bmatrix}\right\},$   (21)

    where

    $G_{i}=a_{i}B_{i},\quad G_{k+j}=a_{k+2j-1}D_{1}+a_{k+2j}D_{2},\quad G_{k+j}^{\top}=G_{k+j},$
    $B_{i}=\begin{bmatrix}1 & \frac{1-\alpha_{i}}{\beta_{i}}\\ -\frac{1-\alpha_{i}}{\beta_{i}} & 1\end{bmatrix},\quad D_{1}=\begin{bmatrix}1 & 0\\ 0 & -1\end{bmatrix},\quad D_{2}=\begin{bmatrix}0 & 1\\ 1 & 0\end{bmatrix},$

    and $1\leq i\leq k$, $1\leq j\leq l$; here $h_{1},\ldots,h_{s},a_{1},\ldots,a_{k+2l}$ are arbitrary real numbers. It follows from Eq (12) that

    $A_{21}=A_{12}^{\top}E,$   (22)

    where $E=R\tilde{\Lambda}R^{-1}$.
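
    The block structures (20) and (21) can be sanity-checked numerically against (17) and (18). The following minimal sketch (NumPy), with hypothetical values $\lambda_{1}=2$, $h_{1}=0.7$ for one real reciprocal pair and $\alpha=\cos(0.62)$, $\beta=\sin(0.62)$, $a_{1}=0.4$ for one unimodular block, verifies both relations; each check should print True.

import numpy as np

# one block of (20) for the reciprocal pair (lam1, 1/lam1)
lam1, h1 = 2.0, 0.7
Lam1 = np.diag([lam1, 1.0 / lam1])
F22 = np.array([[0.0, h1],
                [lam1 * h1, 0.0]])
print(np.allclose(F22, F22.T @ Lam1))            # Eq (17): F22 = F22^T Lam1

# one diagonal block G_i = a_i B_i of (21) for a unimodular pair alpha +/- beta*i
alpha, beta = np.cos(0.62), np.sin(0.62)
delta_t = np.array([[alpha, beta], [-beta, alpha]])
B = np.array([[1.0, (1 - alpha) / beta],
              [-(1 - alpha) / beta, 1.0]])
G = 0.4 * B
print(np.allclose(G, G.T @ delta_t))             # Eq (18) restricted to this block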

    Theorem 1. Suppose that $\Lambda=\mathrm{diag}\{\lambda_{1},\ldots,\lambda_{p}\}\in\mathbb{C}^{p\times p}$ and $X=[x_{1},\ldots,x_{p}]\in\mathbb{C}^{n\times p}$, where the diagonal elements of $\Lambda$ are all distinct, $X$ is of full column rank $p$, and both $\Lambda$ and $X$ are closed under complex conjugation in the sense that $\lambda_{2i}=\bar{\lambda}_{2i-1}\in\mathbb{C}$, $x_{2i}=\bar{x}_{2i-1}\in\mathbb{C}^{n}$ for $i=1,\ldots,m$, and $\lambda_{j}\in\mathbb{R}$, $x_{j}\in\mathbb{R}^{n}$ for $j=2m+1,\ldots,p$. Rearrange the matrix $\Lambda$ as in (4), and adjust the column vectors of $X$ correspondingly. Let $\Lambda$ and $X$ be transformed into $\tilde{\Lambda}$ and $\tilde{X}$ by (6)–(7), and let the QR-decomposition of the matrix $\tilde{X}$ be given by (9). Then the general solution of (2) can be expressed as

    $S_{A}=\left\{A\ \middle|\ A=Q\begin{bmatrix}R^{-\top}\begin{bmatrix}f_{11} & 0 & 0\\ 0 & F_{22} & 0\\ 0 & 0 & F_{33}\end{bmatrix}R^{-1} & A_{12}\\ A_{12}^{\top}E & A_{22}\end{bmatrix}Q^{\top}\right\},$   (23)

    where $E=R\tilde{\Lambda}R^{-1}$, $f_{11}$ is an arbitrary real number, $A_{12}\in\mathbb{R}^{p\times(n-p)}$ and $A_{22}\in\mathbb{R}^{(n-p)\times(n-p)}$ are arbitrary real-valued matrices, and $F_{22}$, $F_{33}$ are given by (20) and (21).

    In order to solve Problem BAP, we need the following lemma.

    Lemma 1. [16] Let A,B be two real matrices, and X be an unknown variable matrix. Then

    $\frac{\partial\,\mathrm{tr}(BX)}{\partial X}=B^{\top},\quad \frac{\partial\,\mathrm{tr}(X^{\top}B)}{\partial X}=B,\quad \frac{\partial\,\mathrm{tr}(AXBX)}{\partial X}=(BXA+AXB)^{\top},$
    $\frac{\partial\,\mathrm{tr}(AX^{\top}BX^{\top})}{\partial X}=BX^{\top}A+AX^{\top}B,\quad \frac{\partial\,\mathrm{tr}(AXBX^{\top})}{\partial X}=A^{\top}XB^{\top}+AXB.$
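
    Such matrix-derivative identities are easy to double-check by finite differences. The sketch below (NumPy) compares the last identity of Lemma 1 with a central-difference approximation on random $4\times4$ matrices; the two gradients agree to well within the chosen tolerance, so the comparison should print True.

import numpy as np

rng = np.random.default_rng(3)
A, B, X = (rng.standard_normal((4, 4)) for _ in range(3))

grad = A.T @ X @ B.T + A @ X @ B                  # claimed gradient of tr(A X B X^T)
num = np.zeros_like(X)
eps = 1e-6
for i in range(4):
    for j in range(4):
        P = np.zeros_like(X)
        P[i, j] = eps                             # perturbation of the (i, j) entry
        num[i, j] = (np.trace(A @ (X + P) @ B @ (X + P).T)
                     - np.trace(A @ (X - P) @ B @ (X - P).T)) / (2 * eps)
print(np.allclose(grad, num, atol=1e-4))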

    By Theorem 1, we have obtained the explicit representation of the solution set $S_{A}$. It is easy to verify that $S_{A}$ is a closed convex subset of $\mathbb{R}^{n\times n}$. By the best approximation theorem (see Ref. [17]), we know that there exists a unique solution of Problem BAP. In the following we will seek the unique solution $\hat{A}$ in $S_{A}$. For the given matrix $\tilde{A}\in\mathbb{R}^{n\times n}$, write

    $Q^{\top}\tilde{A}Q=\begin{bmatrix}\tilde{A}_{11} & \tilde{A}_{12}\\ \tilde{A}_{21} & \tilde{A}_{22}\end{bmatrix},\quad \tilde{A}_{11}\in\mathbb{R}^{p\times p},\ \tilde{A}_{22}\in\mathbb{R}^{(n-p)\times(n-p)},$   (24)

    then

    $\|A-\tilde{A}\|^{2}=\left\|\begin{bmatrix}R^{-\top}\begin{bmatrix}f_{11} & 0 & 0\\ 0 & F_{22} & 0\\ 0 & 0 & F_{33}\end{bmatrix}R^{-1}-\tilde{A}_{11} & A_{12}-\tilde{A}_{12}\\ A_{12}^{\top}E-\tilde{A}_{21} & A_{22}-\tilde{A}_{22}\end{bmatrix}\right\|^{2}$
    $=\left\|R^{-\top}\begin{bmatrix}f_{11} & 0 & 0\\ 0 & F_{22} & 0\\ 0 & 0 & F_{33}\end{bmatrix}R^{-1}-\tilde{A}_{11}\right\|^{2}+\|A_{12}-\tilde{A}_{12}\|^{2}+\|A_{12}^{\top}E-\tilde{A}_{21}\|^{2}+\|A_{22}-\tilde{A}_{22}\|^{2}.$

    Therefore, $\|A-\tilde{A}\|=\min$ if and only if

    $\left\|R^{-\top}\begin{bmatrix}f_{11} & 0 & 0\\ 0 & F_{22} & 0\\ 0 & 0 & F_{33}\end{bmatrix}R^{-1}-\tilde{A}_{11}\right\|^{2}=\min,$   (25)
    $\|A_{12}-\tilde{A}_{12}\|^{2}+\|A_{12}^{\top}E-\tilde{A}_{21}\|^{2}=\min,$   (26)
    $A_{22}=\tilde{A}_{22}.$   (27)

    Let

    $R^{-1}=\begin{bmatrix}R_{1}\\ R_{2}\\ R_{3}\end{bmatrix},$   (28)

    then the relation (25) is equivalent to

    $\left\|R_{1}^{\top}f_{11}R_{1}+R_{2}^{\top}F_{22}R_{2}+R_{3}^{\top}F_{33}R_{3}-\tilde{A}_{11}\right\|^{2}=\min.$   (29)

    Write

    $R_{1}=[r_{1,t}],\quad R_{2}=\begin{bmatrix}r_{2,1}\\ \vdots\\ r_{2,2s}\end{bmatrix},\quad R_{3}=\begin{bmatrix}r_{3,1}\\ \vdots\\ r_{3,k+2l}\end{bmatrix},$   (30)

    where $r_{1,t}\in\mathbb{R}^{t\times p}$, $r_{2,i}\in\mathbb{R}^{1\times p}$, $r_{3,j}\in\mathbb{R}^{2\times p}$, $i=1,\ldots,2s$, $j=1,\ldots,k+2l$.

    Let

    $J_{t}=r_{1,t}^{\top}r_{1,t},$
    $J_{t+i}=\lambda_{2i-1}r_{2,2i}^{\top}r_{2,2i-1}+r_{2,2i-1}^{\top}r_{2,2i}\ \ (1\leq i\leq s),$
    $J_{r+i}=r_{3,i}^{\top}B_{i}r_{3,i}\ \ (1\leq i\leq k),$
    $J_{r+k+2i-1}=r_{3,k+2i}^{\top}D_{1}\tilde{\delta}_{k+2i-1}r_{3,k+2i-1}+r_{3,k+2i-1}^{\top}D_{1}r_{3,k+2i}\ \ (1\leq i\leq l),$
    $J_{r+k+2i}=r_{3,k+2i}^{\top}D_{2}\tilde{\delta}_{k+2i-1}r_{3,k+2i-1}+r_{3,k+2i-1}^{\top}D_{2}r_{3,k+2i}\ \ (1\leq i\leq l),$   (31)

    with $r=t+s$, $q=t+s+k+2l$. Then the relation (29) is equivalent to

    $g(f_{11},h_{1},\ldots,h_{s},a_{1},\ldots,a_{k+2l})=\|f_{11}J_{t}+h_{1}J_{t+1}+\cdots+h_{s}J_{r}+a_{1}J_{r+1}+\cdots+a_{k+2l}J_{q}-\tilde{A}_{11}\|^{2}=\min,$

    that is,

    $g(f_{11},h_{1},\ldots,h_{s},a_{1},\ldots,a_{k+2l})$
    $=\mathrm{tr}\big[(f_{11}J_{t}+h_{1}J_{t+1}+\cdots+h_{s}J_{r}+a_{1}J_{r+1}+\cdots+a_{k+2l}J_{q}-\tilde{A}_{11})^{\top}(f_{11}J_{t}+h_{1}J_{t+1}+\cdots+h_{s}J_{r}+a_{1}J_{r+1}+\cdots+a_{k+2l}J_{q}-\tilde{A}_{11})\big]$
    $=f_{11}^{2}c_{t,t}+2f_{11}h_{1}c_{t,t+1}+\cdots+2f_{11}h_{s}c_{t,r}+2f_{11}a_{1}c_{t,r+1}+\cdots+2f_{11}a_{k+2l}c_{t,q}-2f_{11}e_{t}$
    $\ \ +h_{1}^{2}c_{t+1,t+1}+\cdots+2h_{1}h_{s}c_{t+1,r}+2h_{1}a_{1}c_{t+1,r+1}+\cdots+2h_{1}a_{k+2l}c_{t+1,q}-2h_{1}e_{t+1}$
    $\ \ +\cdots+h_{s}^{2}c_{r,r}+2h_{s}a_{1}c_{r,r+1}+\cdots+2h_{s}a_{k+2l}c_{r,q}-2h_{s}e_{r}$
    $\ \ +a_{1}^{2}c_{r+1,r+1}+\cdots+2a_{1}a_{k+2l}c_{r+1,q}-2a_{1}e_{r+1}$
    $\ \ +\cdots+a_{k+2l}^{2}c_{q,q}-2a_{k+2l}e_{q}+\mathrm{tr}(\tilde{A}_{11}^{\top}\tilde{A}_{11}),$

    where $c_{i,j}=\mathrm{tr}(J_{i}^{\top}J_{j})$, $e_{i}=\mathrm{tr}(J_{i}^{\top}\tilde{A}_{11})$ $(i,j=t,\ldots,t+s+k+2l)$, and $c_{i,j}=c_{j,i}$.

    Consequently,

    $\frac{\partial g}{\partial f_{11}}=2f_{11}c_{t,t}+2h_{1}c_{t,t+1}+\cdots+2h_{s}c_{t,r}+2a_{1}c_{t,r+1}+\cdots+2a_{k+2l}c_{t,q}-2e_{t},$
    $\frac{\partial g}{\partial h_{1}}=2f_{11}c_{t+1,t}+2h_{1}c_{t+1,t+1}+\cdots+2h_{s}c_{t+1,r}+2a_{1}c_{t+1,r+1}+\cdots+2a_{k+2l}c_{t+1,q}-2e_{t+1},$
    $\ \ \vdots$
    $\frac{\partial g}{\partial h_{s}}=2f_{11}c_{r,t}+2h_{1}c_{r,t+1}+\cdots+2h_{s}c_{r,r}+2a_{1}c_{r,r+1}+\cdots+2a_{k+2l}c_{r,q}-2e_{r},$
    $\frac{\partial g}{\partial a_{1}}=2f_{11}c_{r+1,t}+2h_{1}c_{r+1,t+1}+\cdots+2h_{s}c_{r+1,r}+2a_{1}c_{r+1,r+1}+\cdots+2a_{k+2l}c_{r+1,q}-2e_{r+1},$
    $\ \ \vdots$
    $\frac{\partial g}{\partial a_{k+2l}}=2f_{11}c_{q,t}+2h_{1}c_{q,t+1}+\cdots+2h_{s}c_{q,r}+2a_{1}c_{q,r+1}+\cdots+2a_{k+2l}c_{q,q}-2e_{q}.$

    Clearly, $g(f_{11},h_{1},\ldots,h_{s},a_{1},\ldots,a_{k+2l})=\min$ if and only if

    $\frac{\partial g}{\partial f_{11}}=0,\ \ldots,\ \frac{\partial g}{\partial a_{k+2l}}=0.$

    Therefore,

    $f_{11}c_{t,t}+h_{1}c_{t,t+1}+\cdots+h_{s}c_{t,r}+a_{1}c_{t,r+1}+\cdots+a_{k+2l}c_{t,q}=e_{t},$
    $f_{11}c_{t+1,t}+h_{1}c_{t+1,t+1}+\cdots+h_{s}c_{t+1,r}+a_{1}c_{t+1,r+1}+\cdots+a_{k+2l}c_{t+1,q}=e_{t+1},$
    $\ \ \vdots$
    $f_{11}c_{r,t}+h_{1}c_{r,t+1}+\cdots+h_{s}c_{r,r}+a_{1}c_{r,r+1}+\cdots+a_{k+2l}c_{r,q}=e_{r},$
    $f_{11}c_{r+1,t}+h_{1}c_{r+1,t+1}+\cdots+h_{s}c_{r+1,r}+a_{1}c_{r+1,r+1}+\cdots+a_{k+2l}c_{r+1,q}=e_{r+1},$
    $\ \ \vdots$
    $f_{11}c_{q,t}+h_{1}c_{q,t+1}+\cdots+h_{s}c_{q,r}+a_{1}c_{q,r+1}+\cdots+a_{k+2l}c_{q,q}=e_{q}.$   (32)

    If we let

    $C=\begin{bmatrix}c_{t,t} & c_{t,t+1} & \cdots & c_{t,r} & c_{t,r+1} & \cdots & c_{t,q}\\ c_{t+1,t} & c_{t+1,t+1} & \cdots & c_{t+1,r} & c_{t+1,r+1} & \cdots & c_{t+1,q}\\ \vdots & \vdots & & \vdots & \vdots & & \vdots\\ c_{r,t} & c_{r,t+1} & \cdots & c_{r,r} & c_{r,r+1} & \cdots & c_{r,q}\\ c_{r+1,t} & c_{r+1,t+1} & \cdots & c_{r+1,r} & c_{r+1,r+1} & \cdots & c_{r+1,q}\\ \vdots & \vdots & & \vdots & \vdots & & \vdots\\ c_{q,t} & c_{q,t+1} & \cdots & c_{q,r} & c_{q,r+1} & \cdots & c_{q,q}\end{bmatrix},\quad h=\begin{bmatrix}f_{11}\\ h_{1}\\ \vdots\\ h_{s}\\ a_{1}\\ \vdots\\ a_{k+2l}\end{bmatrix},\quad e=\begin{bmatrix}e_{t}\\ e_{t+1}\\ \vdots\\ e_{r}\\ e_{r+1}\\ \vdots\\ e_{q}\end{bmatrix},$

    where $C$ is a symmetric matrix, then Eq (32) is equivalent to

    $Ch=e,$   (33)

    and the solution of Eq (33) is

    $h=C^{-1}e.$   (34)
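
    In effect, (32)–(34) form the Gram (normal-equation) system for the coefficients of the best Frobenius-norm approximation of $\tilde{A}_{11}$ by a linear combination of the matrices $J_{i}$. A minimal generic sketch (NumPy), in which the list Js and the matrix A11t are hypothetical stand-ins for $J_{t},\ldots,J_{q}$ and $\tilde{A}_{11}$:

import numpy as np

rng = np.random.default_rng(4)
p = 5
Js = [rng.standard_normal((p, p)) for _ in range(3)]            # stand-ins for J_t, ..., J_q of (31)
A11t = rng.standard_normal((p, p))                              # stand-in for the target block

C = np.array([[np.trace(Ji.T @ Jj) for Jj in Js] for Ji in Js]) # c_{i,j} = tr(J_i^T J_j)
e = np.array([np.trace(Ji.T @ A11t) for Ji in Js])              # e_i = tr(J_i^T A~_11)
h = np.linalg.solve(C, e)                                       # Eq (34): h = C^{-1} e
print(h)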

    Substituting (34) into (20) and (21), we can obtain $f_{11}$, $F_{22}$ and $F_{33}$ explicitly. Similarly, Eq (26) is equivalent to

    $g(A_{12})=\mathrm{tr}(A_{12}^{\top}A_{12})+\mathrm{tr}(\tilde{A}_{12}^{\top}\tilde{A}_{12})-2\,\mathrm{tr}(A_{12}^{\top}\tilde{A}_{12})+\mathrm{tr}(E^{\top}A_{12}A_{12}^{\top}E)+\mathrm{tr}(\tilde{A}_{21}^{\top}\tilde{A}_{21})-2\,\mathrm{tr}(E^{\top}A_{12}\tilde{A}_{21}).$

    Applying Lemma 1, we obtain

    $\frac{\partial g(A_{12})}{\partial A_{12}}=2A_{12}-2\tilde{A}_{12}+2EE^{\top}A_{12}-2E\tilde{A}_{21}^{\top}.$

    Setting $\frac{\partial g(A_{12})}{\partial A_{12}}=0$, we obtain

    $A_{12}=(I_{p}+EE^{\top})^{-1}(\tilde{A}_{12}+E\tilde{A}_{21}^{\top}).$   (35)
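
    Numerically, (35) is best evaluated with a linear solve rather than an explicit inverse, and (22) then gives the corresponding $A_{21}$. A minimal sketch (NumPy) with hypothetical dimensions $p=4$, $n=7$ and random stand-ins for $E$, $\tilde{A}_{12}$, $\tilde{A}_{21}$:

import numpy as np

rng = np.random.default_rng(5)
p, n = 4, 7
E = rng.standard_normal((p, p))                  # stand-in for E = R Lam~ R^{-1}
A12t = rng.standard_normal((p, n - p))           # stand-in for A~_12
A21t = rng.standard_normal((n - p, p))           # stand-in for A~_21

A12 = np.linalg.solve(np.eye(p) + E @ E.T, A12t + E @ A21t.T)   # Eq (35)
A21 = A12.T @ E                                                  # Eq (22)
print(A12.shape, A21.shape)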

    Theorem 2. Given $\tilde{A}\in\mathbb{R}^{n\times n}$, Problem BAP has a unique solution, and the unique solution of Problem BAP is

    $\hat{A}=Q\begin{bmatrix}R^{-\top}\begin{bmatrix}f_{11} & 0 & 0\\ 0 & F_{22} & 0\\ 0 & 0 & F_{33}\end{bmatrix}R^{-1} & A_{12}\\ A_{12}^{\top}E & \tilde{A}_{22}\end{bmatrix}Q^{\top},$   (36)

    where $E=R\tilde{\Lambda}R^{-1}$; $F_{22}$, $F_{33}$, $A_{12}$ and $\tilde{A}_{22}$ are given by (20), (21), (35) and (24), respectively; and $f_{11},h_{1},\ldots,h_{s},a_{1},\ldots,a_{k+2l}$ are given by (34).

    Based on Theorems 1 and 2, we can describe an algorithm for solving Problem BAP as follows.

    Algorithm 1.

    1) Input matrices Λ, X and ˜A;

    2) Rearrange Λ as in (4), and adjust the column vectors of X correspondingly;

    3) Form the unitary transformation matrix Tp by (5);

    4) Compute real-valued matrices ˜Λ,˜X by (6) and (7);

    5) Compute the QR-decomposition of ˜X by (9);

    6) Set $F_{12}=F_{21}=F_{13}=F_{31}=F_{23}=F_{32}=0$ by (19), and compute $E=R\tilde{\Lambda}R^{-1}$;

    7) Compute $\tilde{A}_{ij}=Q_{i}^{\top}\tilde{A}Q_{j}$, $i,j=1,2$;

    8) Compute $R^{-1}$ and partition it as in (28) to form $R_{1},R_{2},R_{3}$;

    9) Partition the matrices $R_{1},R_{2},R_{3}$ as in (30) to form $r_{1,t}$, $r_{2,i}$, $r_{3,j}$, $i=1,\ldots,2s$, $j=1,\ldots,k+2l$;

    10) Compute $J_{i}$, $i=t,\ldots,t+s+k+2l$, by (31);

    11) Compute $c_{i,j}=\mathrm{tr}(J_{i}^{\top}J_{j})$ and $e_{i}=\mathrm{tr}(J_{i}^{\top}\tilde{A}_{11})$, $i,j=t,\ldots,t+s+k+2l$;

    12) Compute $f_{11},h_{1},\ldots,h_{s},a_{1},\ldots,a_{k+2l}$ by (34);

    13) Compute $F_{22},F_{33}$ by (20), (21), and set $A_{22}=\tilde{A}_{22}$;

    14) Compute A12 by (35) and A21 by (22);

    15) Compute the matrix ˆA by (36).
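
    Once Algorithm 1 has produced $\hat{A}$, the quality of the update can be reported exactly as in the examples below: the residual of (2) for the prescribed data and the distance to the analytic model. A minimal helper sketch (NumPy); the function name check_update and its arguments are hypothetical, standing for the outputs of Algorithm 1 and the given data.

import numpy as np

def check_update(A_hat, A_tilde, Lam, X):
    # residual of the updated model (2): || A_hat X - A_hat^T X Lam || in the Frobenius norm
    residual = np.linalg.norm(A_hat @ X - A_hat.T @ X @ Lam)
    # Frobenius distance to the analytic model, i.e. the quantity minimized in (3)
    distance = np.linalg.norm(A_hat - A_tilde)
    return residual, distance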

    Example 1. Consider an 11-DOF system, where

    ˜A=[96.189818.184751.325049.086413.197364.911562.561981.762858.704531.110226.22120.463426.380340.180848.925394.205173.172278.022779.483120.774292.338060.284377.491014.55397.596733.771995.613564.77468.112664.431830.124643.020771.121681.730313.606923.991690.005457.520945.092492.938637.860947.092318.481622.174786.869586.929212.331936.92475.978054.700977.571381.158023.048890.488111.74188.443657.970518.390811.120323.478029.632148.679253.282684.430997.974829.667639.978354.986023.995378.025235.315974.469343.585935.072719.476443.887031.877825.987014.495541.726738.973982.119418.895544.678493.900222.592211.111942.416780.006885.30314.965424.16911.540368.677530.634987.594317.070825.806550.785843.141462.205590.271640.39124.302418.351150.850955.015622.766440.87208.551691.064835.095294.47879.645516.899036.848551.077262.247543.569959.489626.2482],

    the measured eigenvalue and eigenvector matrices Λ and X are given by

    $\Lambda=\mathrm{diag}\{1.0000,\ 1.8969,\ 0.5272,\ 0.1131+0.9936i,\ 0.1131-0.9936i,\ 1.9228+2.7256i,\ 1.9228-2.7256i,\ 0.1728-0.2450i,\ 0.1728+0.2450i\},$

    and

    X=[0.01321.00000.17530.0840+0.4722i0.08400.4722i0.09550.39370.11960.33020.1892i0.3302+0.1892i0.19920.52200.04010.39300.2908i0.3930+0.2908i0.07400.02870.62950.35870.3507i0.3587+0.3507i0.44250.36090.57450.45440.3119i0.4544+0.3119i0.45440.31920.24610.30020.1267i0.3002+0.1267i0.25970.33630.90460.23980.0134i0.2398+0.0134i0.11400.09660.08710.1508+0.0275i0.15080.0275i0.09140.03560.23870.18900.0492i0.1890+0.0492i0.24310.54281.00000.6652+0.3348i0.66520.3348i1.00000.24580.24300.2434+0.6061i0.24340.6061i0.6669+0.2418i0.66690.2418i0.25560.1080i0.2556+0.1080i0.11720.0674i0.1172+0.0674i0.55060.1209i0.5506+0.1209i0.55970.2765i0.5597+0.2765i0.3308+0.1936i0.33080.1936i0.72170.0566i0.7217+0.0566i0.73060.2136i0.7306+0.2136i0.0909+0.0713i0.09090.0713i0.5577+0.1291i0.55770.1291i0.1867+0.0254i0.18670.0254i0.2866+0.1427i0.28660.1427i0.53110.1165i0.5311+0.1165i0.38730.1096i0.3873+0.1096i0.2624+0.0114i0.26240.0114i0.6438+0.2188i0.64380.2188i0.06190.1504i0.0619+0.1504i0.27870.2166i0.2787+0.2166i0.32940.1718i0.3294+0.1718i0.9333+0.0667i0.93330.0667i0.4812+0.5188i0.48120.5188i0.64830.1950i0.6483+0.1950i].

    Using Algorithm 1, we obtain the unique solution of Problem BAP as follows:

    ˆA=[34.256341.782433.357333.629823.806442.077050.064137.570531.090848.616919.097218.856135.225235.959244.350231.991855.292055.305254.379331.390960.834516.954029.63597.680519.124917.718316.708240.063618.291649.943737.691315.60274.960358.878251.490647.897435.698545.688956.043453.090856.540255.512038.344735.889433.408746.96359.776741.421551.446652.105865.672460.12935.806162.013916.523131.658051.235924.797865.556761.784062.549458.936374.709952.210555.853244.392519.296151.233322.428056.934042.634845.845356.372961.555531.683667.952540.201241.279671.382134.414033.281777.439360.894432.1411108.505649.607819.835185.743464.089057.652419.128025.039439.052466.774020.902348.851214.469518.928424.834837.255032.325438.353459.735833.590254.026550.777070.201165.415958.072040.065228.130114.76388.950720.096325.590759.694030.855866.878130.480723.610712.9984],

    and

    $\|\hat{A}X-\hat{A}^{\top}X\Lambda\|=8.2431\times10^{-13}.$

    Therefore, the new model $\hat{A}X=\hat{A}^{\top}X\Lambda$ reproduces the prescribed eigenvalues (the diagonal elements of the matrix Λ) and eigenvectors (the column vectors of the matrix X).

    Example 2. (Example 4.1 of [12]) Given $\alpha=\cos(\theta)$, $\beta=\sin(\theta)$ with $\theta=0.62$, and $\lambda_{1}=0.2$, $\lambda_{2}=0.3$, $\lambda_{3}=0.4$. Let

    $J_{0}=\begin{bmatrix}0_{2} & \Gamma\\ I_{2} & I_{2}\end{bmatrix},\qquad J_{s}=\begin{bmatrix}0_{3} & \mathrm{diag}\{\lambda_{1},\lambda_{2},\lambda_{3}\}\\ I_{3} & 0_{3}\end{bmatrix},$

    where $\Gamma=\begin{bmatrix}\alpha & \beta\\ -\beta & \alpha\end{bmatrix}$. We construct

    $\tilde{A}=\begin{bmatrix}J_{0} & 0\\ 0 & J_{s}\end{bmatrix},$

    the measured eigenvalue and eigenvector matrices Λ and X are given by

    $\Lambda=\mathrm{diag}\{5,\ 0.2,\ 0.8139+0.5810i,\ 0.8139-0.5810i\},$

    and

    X=[0.41550.68750.21570.4824i0.2157+0.4824i0.42240.31480.3752+0.1610i0.37520.1610i0.07030.63020.59500.4050i0.5950+0.4050i1.00000.46670.22930.1045i0.2293+0.1045i0.26500.30510.2253+0.7115i0.22530.7115i0.90300.23270.48620.3311i0.4862+0.3311i0.67420.31320.55210.0430i0.5521+0.0430i0.63580.11720.06230.0341i0.0623+0.0341i0.41190.27680.1575+0.4333i0.15750.4333i0.20621.00000.17790.0784i0.1779+0.0784i].

    Using Algorithm 1, we obtain the unique solution of Problem BAP as follows:

    ˆA=[0.11690.23660.61720.71950.08360.28840.00920.04900.02020.01710.01140.09570.14620.61940.37380.16370.12910.00710.09720.12470.76070.04970.58030.03460.09790.29590.09370.10600.13230.03390.01090.67400.30130.73400.19420.08720.00540.00510.02970.08140.17830.22830.26430.03870.09860.31250.02920.29260.07170.05460.09530.10270.03600.26680.24180.12060.14060.05510.30710.20970.01060.23190.19460.02980.19350.01580.08860.02160.05600.24840.10440.12850.19020.22770.69610.16570.07280.02620.08310.00010.09060.00210.07640.12640.21440.67030.08500.07640.01040.01490.12450.08130.19520.07840.07600.08750.79780.00930.02060.1182],

    and

    $\|\hat{A}X-\hat{A}^{\top}X\Lambda\|=1.7538\times10^{-8}.$

    Therefore, the new model $\hat{A}X=\hat{A}^{\top}X\Lambda$ reproduces the prescribed eigenvalues (the diagonal elements of the matrix Λ) and eigenvectors (the column vectors of the matrix X).

    In this paper, we have developed a direct method to solve the linear inverse palindromic eigenvalue problem by partitioning the matrix Λ and using the QR-decomposition. The explicit best approximation solution is given. The numerical examples show that the proposed method is straightforward and easy to implement.

    The authors declare no conflict of interest.



    [1] A. Hilliges, C. Mehl, V. Mehrmann, On the solution of palindromic eigenvalue problems, In: 4th European Congress on Computational Methods in Applied Sciences and Engineering (ECCOMAS), Jyväskylä, Finland, 2004.
    [2] A. Hilliges, Numerische Lösung von quadratischen Eigenwertproblemen mit Anwendung in der Schienendynamik, Diplomarbeit, Technical University Berlin, Institut für Mathematik, Germany, 2004.
    [3] E. K. Chu, T. M. Hwang, W. W. Lin, C. T. Wu, Vibration of fast trains, palindromic eigenvalue problems and structure-preserving doubling algorithms, J. Comput. Appl. Math., 219 (2008), 237–252. doi: 10.1016/j.cam.2007.07.016
    [4] T. M. Huang, W. W. Lin, J. Qian, Structure-preserving algorithms for palindromic quadratic eigenvalue problems arising from vibration of fast trains, SIAM J. Matrix Anal. Appl., 30 (2009), 1566–1592. doi: 10.1137/080713550
    [5] B. Iannazzo, B. Meini, Palindromic matrix polynomials, matrix functions and integral representations, Linear Algebra Appl., 434 (2011), 174–184. doi: 10.1016/j.laa.2010.09.013
    [6] L. Z. Lu, F. Yuan, R. C. Li, A new look at the doubling algorithm for a structured palindromic quadratic eigenvalue problem, Numer. Linear Algebr., 22 (2015), 393–409. doi: 10.1002/nla.1962
    [7] L. Z. Lu, T. Wang, Y. C. Kuo, R. C. Li, W. W. Lin, A fast algorithm for fast train palindromic quadratic eigenvalue problems, SIAM J. Sci. Comput., 38 (2016), 3410–3429. doi: 10.1137/16M1063563
    [8] D. S. Mackey, N. Mackey, C. Mehl, V. Mehrmann, Structured polynomial eigenvalue problems: Good vibrations from good linearizations, SIAM J. Matrix Anal. Appl., 28 (2006), 1029–1051. doi: 10.1137/050628362
    [9] C. Schröder, URV decomposition based structured methods for palindromic and even eigenvalue problems, Technical report, Preprint 375, TU Berlin, Matheon, Germany, 2007.
    [10] C. Schröder, A QR-like algorithm for the palindromic eigenvalue problem, Technical report, Preprint 388, TU Berlin, Matheon, Germany, 2007.
    [11] D. S. Mackey, N. Mackey, C. Mehl, V. Mehrmann, Numerical methods for palindromic eigenvalue problems: Computing the anti-triangular Schur form, Numer. Linear Algebr., 16 (2009), 63–86. doi: 10.1002/nla.612
    [12] T. Li, C. Y. Chiang, E. K. Chu, W. W. Lin, The palindromic generalized eigenvalue problem $A^{\ast}x=\lambda Ax$: Numerical solution and applications, Linear Algebra Appl., 434 (2011), 2269–2284. doi: 10.1016/j.laa.2009.12.020
    [13] M. Al-Ammari, Analysis of structured polynomial eigenvalues problems, Ph.D. thesis, The University of Manchester, Manchester, UK, 2011.
    [14] L. Batzke, C. Mehl, On the inverse eigenvalue problem of T-alternating and T-palindromic matrix polynomials, Linear Algebra Appl., 452 (2014), 172–191. doi: 10.1016/j.laa.2014.03.037
    [15] K. Zhao, L. Cheng, A. Liao, Updating $\ast$-palindromic quadratic systems with no spill-over, Comput. Appl. Math., 37 (2018), 5587–5608. doi: 10.1007/s40314-018-0655-x
    [16] G. Rogers, Matrix Derivatives, In: Lecture Notes in Statistics, vol. 2, New York, 1980.
    [17] J. P. Aubin, Applied Functional Analysis, Wiley, New York, 1979.
  • © 2021 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)