Research article

Some identities of the generalized bi-periodic Fibonacci and Lucas polynomials

  • Received: 10 January 2024 Revised: 02 February 2024 Accepted: 05 February 2024 Published: 21 February 2024
  • MSC : 11B37, 11B39

  • In this paper, we considered the generalized bi-periodic Fibonacci polynomials and obtained some identities related to them using matrix theory. In addition, the generalized bi-periodic Lucas polynomial was defined by L_n(x) = bp(x)L_{n-1}(x) + q(x)L_{n-2}(x) (if n is even) or L_n(x) = ap(x)L_{n-1}(x) + q(x)L_{n-2}(x) (if n is odd), with initial conditions L_0(x) = 2, L_1(x) = ap(x), where p(x) and q(x) were nonzero polynomials in \mathbb{Q}[x]. We obtained a series of identities related to the generalized bi-periodic Fibonacci and Lucas polynomials.

    Citation: Tingting Du, Zhengang Wu. Some identities of the generalized bi-periodic Fibonacci and Lucas polynomials[J]. AIMS Mathematics, 2024, 9(3): 7492-7510. doi: 10.3934/math.2024363




    Since the 1960s, the rapid development of high-speed rail has made it a very important means of transportation. However, during the operation of a high-speed train, the contact between the wheels and the tracks causes vibration. The resulting analytical vibration model can be mathematically summarized as a quadratic palindromic eigenvalue problem (QPEP) (see [1,2])

    \begin{equation*} (\lambda^2 A_1 + \lambda A_0 + A_1^\top)x = 0, \end{equation*}

    with A_i \in {\mathbb{R}}^{n \times n}, i = 0, 1, and A_0 = A_0^\top. The eigenvalues \lambda and the corresponding eigenvectors x represent the vibration frequencies and the vibration shapes, respectively. Many scholars have put forward effective methods to solve the QPEP [3,4,5,6,7]. In addition, under mild assumptions, the quadratic palindromic eigenvalue problem can be converted to the following linear palindromic eigenvalue problem (see [8])

    \begin{equation} Ax = \lambda A^\top x, \end{equation} (1)

    where A \in {\mathbb{R}}^{n \times n} is a given matrix, and \lambda \in {\mathbb{C}} and the nonzero vectors x \in {\mathbb{C}}^{n} are the wanted eigenvalues and eigenvectors of the vibration model. Transposing Eq (1), we obtain \frac{1}{\lambda}x^\top A^\top = x^\top A. Thus, \lambda and \frac{1}{\lambda} always come in pairs. Many methods have been proposed to solve the palindromic eigenvalue problem, such as the URV-decomposition based structured method [9], the QR-like algorithm [10], structure-preserving methods [11], and the palindromic doubling algorithm [12].
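
    The reciprocal pairing of the spectrum can be checked numerically. Below is a minimal sketch (assuming NumPy and SciPy are available; the random matrix A and the tolerance are illustrative choices, not data from this paper) that computes the generalized eigenvalues of the pencil A - \lambda A^\top and verifies that they occur in (\lambda, 1/\lambda) pairs.

    ```python
    import numpy as np
    from scipy.linalg import eig

    # Illustrative data: any generic real square matrix defines the pencil A - lambda*A^T.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((6, 6))

    # Generalized eigenvalues of A x = lambda A^T x.
    lam, _ = eig(A, A.T)

    # For every eigenvalue lambda, 1/lambda should also appear in the spectrum.
    for l in lam:
        assert np.min(np.abs(lam - 1.0 / l)) < 1e-6, "reciprocal partner missing"
    print("eigenvalues of the palindromic pencil occur in (lambda, 1/lambda) pairs")
    ```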

    On the other hand, the modal data obtained from the mathematical model often differ evidently from the relevant experimental ones because of the complexity of the structure and inevitable factors of the actual model. Therefore, the coefficient matrices need to be modified so that the updated model satisfies the dynamic equation and closely matches the experimental data. Al-Ammari [13] considered the inverse quadratic palindromic eigenvalue problem. Batzke and Mehl [14] studied the inverse eigenvalue problem for T-palindromic matrix polynomials, excluding the case that both +1 and -1 are eigenvalues. Zhao et al. [15] updated palindromic quadratic systems with no spill-over. However, the linear inverse palindromic eigenvalue problem has not been extensively considered in recent years.

    In this work, we just consider the linear inverse palindromic eigenvalue problem (IPEP). It can be stated as the following problem:

    Problem IPEP. Given a pair of matrices (Λ,X) in the form

    \Lambda = \text{diag}\{\lambda_{1}, \cdots, \lambda_{p}\} \in {\mathbb{C}}^{p \times p},

    and

    X = [x_{1}, \cdots, x_{p}] \in {\mathbb{C}}^{n \times p},

    where the diagonal elements of \Lambda are all distinct, X is of full column rank p, and both \Lambda and X are closed under complex conjugation in the sense that \lambda_{2i} = \bar{\lambda}_{2i-1} \in {\mathbb{C}}, x_{2i} = \bar{x}_{2i-1} \in {\mathbb{C}}^{n} for i = 1, \cdots, m, and \lambda_{j} \in {\mathbb{R}}, x_{j} \in {\mathbb{R}}^{n} for j = 2m+1, \cdots, p, find a real-valued matrix A that satisfies the equation

    \begin{equation} AX = A^\top X\Lambda. \end{equation} (2)

    Namely, each pair (\lambda_t, x_t), t = 1, \cdots, p, is an eigenpair of the matrix pencil

    P(\lambda) = A - \lambda A^\top.

    Since the mathematical model is assumed to be a "good" representation of the system, we hope to find an updated model that is closest to the original one. Therefore, we consider the following best approximation problem:

    Problem BAP. Given \tilde{A} \in {\mathbb{R}}^{n \times n}, find \hat{A} \in {\mathcal{S}}_{A} such that

    \begin{equation} \|\hat{A}-\tilde{A}\| = \min\limits_{A \in {\mathcal{S}}_{A}}\|A-\tilde{A}\|, \end{equation} (3)

    where \|\cdot\| is the Frobenius norm, and {\mathcal{S}}_{A} is the solution set of Problem IPEP.

    In this paper, we will put forward a new direct method to solve Problem IPEP and Problem BAP. By partitioning the matrix \Lambda and using the QR-decomposition, the expression of the general solution of Problem IPEP is derived. Also, we show that the best approximation solution \hat{A} of Problem BAP is unique and derive an explicit formula for it.

    We first rearrange the matrix Λ as

    \begin{equation} \Lambda = \left[ \begin{array}{ccc} 1 & 0 & 0 \\ 0 & \Lambda_1 & 0 \\ 0 & 0 & \Lambda_2 \\ \end{array} \right]\begin{array}{c} t \\ 2s \\ 2(k+2l) \\ \end{array}, \end{equation} (4)

    where t+2s+2(k+2l) = p, t = 0 or 1,

    \begin{eqnarray*} && \Lambda_1 = \text{diag}\{\lambda_1, \lambda_2, \cdots, \lambda_{2s-1}, \lambda_{2s}\}, \ \lambda_i \in {\mathbb{R}}, \ \lambda_{2i-1}^{-1} = \lambda_{2i}, \ 1\leq i\leq s, \\ && \Lambda_2 = \text{diag}\{\delta_1, \cdots, \delta_k, \delta_{k+1}, \delta_{k+2}, \cdots, \delta_{k+2l-1}, \delta_{k+2l}\}, \ \delta_j \in {\mathbb{C}}^{2 \times 2}, \end{eqnarray*}

    with

    \begin{eqnarray*} && \delta_j = \left[\begin{array}{cc} \alpha_j+\beta_j i & 0 \\ 0 & \alpha_j-\beta_j i \\ \end{array}\right], \ i = \sqrt{-1}, \ 1\leq j\leq k+2l, \\ && \delta_j^{-1} = \bar{\delta}_j, \ 1\leq j\leq k, \quad \delta_{k+2j-1}^{-1} = \delta_{k+2j}, \ 1\leq j\leq l, \end{eqnarray*}

    and the adjustment of the column vectors of X corresponds to those of Λ.

    Define T_p as

    \begin{equation} T_p = \text{diag}\left\{I_{t+2s}, \frac{1}{\sqrt{2}}\left[\begin{array}{cc} 1 & -i \\ 1 & i \\ \end{array}\right], \cdots, \frac{1}{\sqrt{2}}\left[\begin{array}{cc} 1 & -i \\ 1 & i \\ \end{array}\right]\right\} \in {\mathbb{C}}^{p \times p}, \end{equation} (5)

    where i = \sqrt{-1}. It is easy to verify that T_p^H T_p = I_p. Using the matrix of (5), we obtain

    \begin{equation} \tilde{\Lambda} = T_p^H\Lambda T_p = \left[\begin{array}{ccc} 1 & 0 & 0 \\ 0 & \Lambda_1 & 0 \\ 0 & 0 & \tilde{\Lambda}_2 \\ \end{array}\right], \end{equation} (6)
    \begin{equation} \tilde{X} = XT_p = [x_1, \cdots, x_{t+2s}, \sqrt{2}y_{t+2s+1}, \sqrt{2}z_{t+2s+1}, \cdots, \sqrt{2}y_{p-1}, \sqrt{2}z_{p-1}], \end{equation} (7)

    where

    \tilde{\Lambda}_2 = \text{diag}\left\{\left[\begin{array}{cc} \alpha_1 & \beta_1 \\ -\beta_1 & \alpha_1 \\ \end{array}\right], \cdots, \left[\begin{array}{cc} \alpha_{k+2l} & \beta_{k+2l} \\ -\beta_{k+2l} & \alpha_{k+2l} \\ \end{array}\right]\right\} \triangleq \text{diag}\{\tilde{\delta}_1, \cdots, \tilde{\delta}_{k+2l}\}

    and \tilde{\Lambda}_2 \in {\mathbb{R}}^{2(k+2l) \times 2(k+2l)} , \tilde{X} \in {\mathbb{R}}^{n \times p} . Here y_{t+2s+j} and z_{t+2s+j} are, respectively, the real part and the imaginary part of the complex vector x_{t+2s+j} for j = 1, 3, \cdots, 2(k+2l)-1 . Using (6) and (7), the matrix equation (2) is equivalent to

    \begin{equation} A\tilde{X} = A^\top \tilde{X}\tilde{\Lambda}. \end{equation} (8)
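
    As an illustration of the transformation (5)-(7), the following sketch (a hypothetical helper, assuming NumPy; the block sizes t, s, k, l are whatever the rearranged \Lambda dictates) builds T_p, forms \tilde{\Lambda} = T_p^H\Lambda T_p and \tilde{X} = XT_p, and checks that both are real.

    ```python
    import numpy as np

    def real_transform(Lam, X, t, s, k, l):
        """Build T_p of (5) and return the real matrices (Lam_tilde, X_tilde) of (6)-(7).

        Lam is the rearranged diagonal matrix of (4) and X has its columns adjusted
        accordingly; t, s, k, l are the block sizes with p = t + 2s + 2(k + 2l).
        """
        p = t + 2 * s + 2 * (k + 2 * l)
        blocks = [np.eye(t + 2 * s, dtype=complex)]
        T2 = np.array([[1, -1j], [1, 1j]]) / np.sqrt(2)      # 2x2 block of (5)
        blocks += [T2] * (k + 2 * l)
        Tp = np.zeros((p, p), dtype=complex)
        i = 0
        for B in blocks:
            m = B.shape[0]
            Tp[i:i + m, i:i + m] = B
            i += m
        Lam_t = Tp.conj().T @ Lam @ Tp                        # (6)
        X_t = X @ Tp                                          # (7)
        # Imaginary parts should vanish up to rounding error.
        assert np.max(np.abs(Lam_t.imag)) < 1e-10 and np.max(np.abs(X_t.imag)) < 1e-10
        return Lam_t.real, X_t.real
    ```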

    Since T_p is unitary, \text{rank}(\tilde{X}) = \text{rank}(X) = p . Now, let the QR-decomposition of \tilde{X} be

    \begin{equation} \tilde{X} = Q\left[ \begin{array}{c} R \\ 0 \\ \end{array} \right], \end{equation} (9)

    where Q = [Q_1, Q_2]\in \mathbb{R}^{n \times n} is an orthogonal matrix and R \in \mathbb{R}^{p \times p} is nonsingular. Let

    \begin{equation} \begin{array}{cc} Q^\top AQ = \left[ \begin{array}{cc} A_{11}& A_{12} \\ A_{21}& A_{22} \\ \end{array}\right]&\begin{array}{c} p \\ n-p \\ \end{array} \\ \begin{array}{cc} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ p & \ n-p \\ \end{array}& \end{array}. \end{equation} (10)
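
    A small sketch of (9) and (10) with NumPy (the function name is hypothetical; mode='complete' gives the full orthogonal factor so that Q = [Q_1, Q_2]):

    ```python
    import numpy as np

    def qr_and_partition(X_tilde, A):
        """QR-decompose X_tilde as in (9) and partition Q^T A Q as in (10)."""
        n, p = X_tilde.shape
        Q, R_full = np.linalg.qr(X_tilde, mode='complete')   # Q is n x n, R_full is n x p
        R = R_full[:p, :]                                    # nonsingular p x p factor of (9)
        B = Q.T @ A @ Q
        A11, A12 = B[:p, :p], B[:p, p:]
        A21, A22 = B[p:, :p], B[p:, p:]
        return Q, R, A11, A12, A21, A22
    ```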

    Using (9) and (10), the equation of (8) is equivalent to

    \begin{eqnarray} && A_{11}R = A_{11}^\top R\tilde{\Lambda}, \end{eqnarray} (11)
    \begin{eqnarray} && A_{21}R = A_{12}^\top R\tilde{\Lambda}. \end{eqnarray} (12)

    Write

    \begin{equation} \begin{array}{ccc} R^\top A_{11}R\triangleq F = \left[ \begin{array}{ccc} f_{11} & F_{12} & \ \ \ F_{13} \\ F_{21} & F_{22} & \ \ \ F_{23} \\ F_{31} & F_{32} & \ \ \ F_{33} \\ \end{array} \right]&\begin{array}{c} t \\ 2s \\ 2(k+2l) \\ \end{array} \\ \begin{array}{ccc} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ t & \ \ \ 2s & \ 2(k+2l) \\ \end{array}& \end{array}, \end{equation} (13)

    then the equation of (11) is equivalent to

    \begin{eqnarray} && F_{12} = F_{21}^\top \Lambda_1, \ F_{21} = F_{12}^\top, \end{eqnarray} (14)
    \begin{eqnarray} && F_{13} = F_{31}^\top \tilde{\Lambda}_2, \ F_{31} = F_{13}^\top, \end{eqnarray} (15)
    \begin{eqnarray} && F_{23} = F_{32}^\top \tilde{\Lambda}_2, \ F_{32} = F_{23}^\top \Lambda_1, \end{eqnarray} (16)
    \begin{eqnarray} && F_{22} = F_{22}^\top \Lambda_1, \end{eqnarray} (17)
    \begin{eqnarray} && F_{33} = F_{33}^\top \tilde{\Lambda}_2. \end{eqnarray} (18)

    Because the elements of \Lambda_1, \tilde{\Lambda}_2 are distinct, we can obtain the following relations by Eqs (14)-(18)

    \begin{eqnarray} &&F_{12} = 0, \ F_{21} = 0, \ F_{13} = 0, \ F_{31} = 0, \ F_{23} = 0, \ F_{32} = 0, \end{eqnarray} (19)
    \begin{eqnarray} &&F_{22} = \text{diag} \left\{ \left[\begin{array}{cc} 0 & h_{1} \\ \lambda_1 h_{1} & 0 \\ \end{array}\right], \cdots, \left[\begin{array}{cc} 0 & h_{s} \\ \lambda_{2s-1} h_{s} & 0 \\ \end{array}\right] \right\}, \end{eqnarray} (20)
    \begin{eqnarray} &&F_{33} = \text{diag}\left\{G_{1}, \cdots, G_{k}, \left[\begin{array}{cc} 0 & G_{k+1} \\ G_{k+1}^\top\tilde{\delta}_{k+1} & 0 \\ \end{array}\right], \cdots, \left[\begin{array}{cc} 0 & G_{k+l} \\ G_{k+l}^\top\tilde{\delta}_{k+2l-1} & 0 \\ \end{array}\right]\right\}, \end{eqnarray} (21)

    where

    \begin{eqnarray*} && G_i = a_iB_i, \ G_{k+j} = a_{k+2j-1}D_1+a_{k+2j}D_2, \ G_{k+j}^\top = G_{k+j}, \\ && B_i = \left[\begin{array}{cc} 1 & \frac{1-\alpha_{i}}{\beta_{i}} \\ -\frac{1-\alpha_{i}}{\beta_{i}} & 1 \\ \end{array}\right], \ D_1 = \left[\begin{array}{cc} 1 & 0 \\ 0 & -1 \\ \end{array}\right], \ D_2 = \left[\begin{array}{cc} 0 & 1 \\ 1 & 0 \\ \end{array}\right], \end{eqnarray*}

    and 1\leq i\leq k, 1\leq j\leq l . h_{1}, \cdots, h_{s}, a_{1}, \cdots, a_{k+2l} are arbitrary real numbers. It follows from Eq (12) that

    \begin{equation} A_{21} = A_{12}^\top E, \end{equation} (22)

    where E = R\tilde{\Lambda}R^{-1} .
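
    To make (20) and (21) concrete, the following sketch (hypothetical helper functions, assuming NumPy/SciPy; the parameter vectors h, a and the data \alpha, \beta, \lambda are placeholders for whatever the problem supplies) assembles F_{22} and F_{33} from the free parameters.

    ```python
    import numpy as np
    from scipy.linalg import block_diag

    def build_F22(h, lam):
        """F22 of (20): h = [h_1,...,h_s], lam = [lambda_1,...,lambda_{2s}]."""
        blocks = [np.array([[0.0, h[i]], [lam[2 * i] * h[i], 0.0]]) for i in range(len(h))]
        return block_diag(*blocks)

    def build_F33(a, alpha, beta, k, l):
        """F33 of (21): a = [a_1,...,a_{k+2l}]; alpha, beta give the blocks of Lambda_2 tilde."""
        D1 = np.array([[1.0, 0.0], [0.0, -1.0]])
        D2 = np.array([[0.0, 1.0], [1.0, 0.0]])
        blocks = []
        for i in range(k):                                   # unimodular eigenvalues: G_i = a_i B_i
            Bi = np.array([[1.0, (1 - alpha[i]) / beta[i]],
                           [-(1 - alpha[i]) / beta[i], 1.0]])
            blocks.append(a[i] * Bi)
        for j in range(l):                                   # reciprocal complex pairs
            G = a[k + 2 * j] * D1 + a[k + 2 * j + 1] * D2
            m = k + 2 * j                                    # 0-based index of delta_tilde_{k+2j-1}
            delt = np.array([[alpha[m], beta[m]], [-beta[m], alpha[m]]])
            blocks.append(np.block([[np.zeros((2, 2)), G], [G.T @ delt, np.zeros((2, 2))]]))
        return block_diag(*blocks)
    ```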

    Theorem 1. Suppose that \Lambda = \text{diag} \{\lambda_{1}, \cdots, \lambda_{p}\} \in {\mathbb{C}}^{p \times p} , X = [x_{1}, \cdots, x_{p}] \in {\mathbb{C}}^{n \times p} , where the diagonal elements of \Lambda are all distinct, X is of full column rank p , and both \Lambda and X are closed under complex conjugation in the sense that \lambda_{2i} = \bar{\lambda}_{2i-1} \in {\mathbb{C}} , x_{2i} = \bar{x}_{2i-1} \in {\mathbb{C}}^{n} for i = 1, \cdots, m , and \lambda_{j} \in {\mathbb{R}} , x_{j} \in {\mathbb{R}}^{n} for j = 2m+1, \cdots, p . Rearrange the matrix \Lambda as (4) , and adjust the column vectors of X correspondingly. Let \Lambda and X be transformed into \tilde{\Lambda} and \tilde{X} by (6) and (7) , and let the QR-decomposition of the matrix \tilde{X} be given by (9) . Then the general solution of (2) can be expressed as

    \begin{equation} {\mathcal{S}}_{A} = \left\{A \left| A = Q\left[ \begin{array}{cc} R^{-\top}\left[ \begin{array}{ccc} f_{11} & 0 & 0 \\ 0 & F_{22} & 0 \\ 0 & 0 & F_{33} \\ \end{array} \right]R^{-1} & A_{12} \\ A_{12}^\top E & A_{22} \\ \end{array} \right]Q^\top \right. \right\}, \end{equation} (23)

    where E = R\tilde{\Lambda}R^{-1}, f_{11} is an arbitrary real number, A_{12} \in {\mathbb{R}}^{p \times (n-p)}, A_{22}\in {\mathbb{R}}^{(n-p) \times (n-p)} are arbitrary real-valued matrices, and F_{22}, F_{33} are given by (20) and (21) .

    In order to solve Problem BAP, we need the following lemma.

    Lemma 1. [16] Let A, B be two real matrices, and X be an unknown variable matrix. Then

    \begin{eqnarray*} && \frac{\partial tr(BX)}{\partial X} = B^\top, \ \frac{\partial tr(X^\top B^\top)}{\partial X} = B^\top, \ \frac{\partial tr(AXBX)}{\partial X} = (BXA+AXB)^\top, \\ && \frac{\partial tr(AX^\top BX^\top)}{\partial X} = BX^\top A+AX^\top B, \ \frac{\partial tr(AXBX^\top)}{\partial X} = AXB+A^\top XB^\top. \end{eqnarray*}
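
    These matrix-derivative formulas can be spot-checked numerically. A small sketch (assuming NumPy; the sizes and random matrices are arbitrary) compares the stated gradient of tr(AXBX^\top) with a central finite-difference approximation of one entry.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 5
    A, B, X = (rng.standard_normal((n, n)) for _ in range(3))

    f = lambda X: np.trace(A @ X @ B @ X.T)
    grad = A @ X @ B + A.T @ X @ B.T              # Lemma 1: d tr(A X B X^T)/dX

    # Finite-difference check of the (2, 3) entry of the gradient.
    eps = 1e-6
    E = np.zeros((n, n)); E[2, 3] = 1.0
    fd = (f(X + eps * E) - f(X - eps * E)) / (2 * eps)
    assert abs(fd - grad[2, 3]) < 1e-4
    ```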

    By Theorem 1 , we can obtain the explicit representation of the solution set {\mathcal{S}}_{A} . It is easy to verify that \mathcal{S}_A is a closed convex subset of {\mathbb{R}}^{n \times n}. By the best approximation theorem (see Ref. [17]), we know that there exists a unique solution of Problem BAP. In the following, we will seek the unique solution \hat{A} in \mathcal{S}_A. For the given matrix \tilde{A} \in {\mathbb{R}}^{n \times n}, write

    \begin{equation} \begin{array}{cc} Q^\top \tilde{A}Q = \left[ \begin{array}{cc} \tilde{A}_{11}& \tilde{A}_{12} \\ \tilde{A}_{21}& \tilde{A}_{22} \\ \end{array}\right]&\begin{array}{c} p \\ n-p \\ \end{array} \\ \begin{array}{cc} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ p & \ n-p \\ \end{array}& \end{array}, \end{equation} (24)

    then

    \begin{eqnarray*} \|A-\tilde{A}\|^2 & = & \left\|\left[ \begin{array}{cc} R^{- \top}\left[ \begin{array}{ccc} f_{11} & 0 & 0 \\ 0 & F_{22} & 0 \\ 0 & 0 & F_{33} \\ \end{array} \right]R^{-1}-\tilde{A}_{11} & A_{12}-\tilde{A}_{12} \\ A_{12}^\top E-\tilde{A}_{21} & A_{22}-\tilde{A}_{22} \\ \end{array} \right]\right\|^2\\ & = &\left\| R^{- \top}\left[ \begin{array}{ccc} f_{11} & 0 & 0 \\ 0 & F_{22} & 0 \\ 0 & 0 & F_{33} \\ \end{array} \right]R^{-1}-\tilde{A}_{11}\right\|^2\\ &+& \| A_{12}-\tilde{A}_{12}\|^2+\|A_{12}^\top E-\tilde{A}_{21} \|^2+\|A_{22}-\tilde{A}_{22}\|^2. \end{eqnarray*}

    Therefore, \|A-\tilde{A}\| = \min if and only if

    \begin{eqnarray} && \left\|R^{-\top}\left[ \begin{array}{ccc} f_{11} & 0 & 0 \\ 0 & F_{22} & 0 \\ 0 & 0 & F_{33} \\ \end{array} \right]R^{-1}-\tilde{A}_{11}\right\|^2 = \min, \end{eqnarray} (25)
    \begin{eqnarray} && \| A_{12}-\tilde{A}_{12}\|^2+\|A_{12}^\top E-\tilde{A}_{21} \|^2 = \min, \end{eqnarray} (26)
    \begin{eqnarray} &&A_{22} = \tilde{A}_{22}. \end{eqnarray} (27)

    Let

    \begin{equation} \begin{array}{c} R^{-1} = \left[ \begin{array}{c} R_1 \\ R_2 \\ R_3 \\ \end{array} \right] \end{array}, \end{equation} (28)

    then the relation of (25) is equivalent to

    \begin{equation} \|R_1^\top f_{11}R_1+ R_2^\top F_{22}R_2 + R_3^\top F_{33} R_3- \tilde{A}_{11} \|^2 = \min. \end{equation} (29)

    Write

    \begin{equation} R_1 = \left[r_{1, t}\right], \ R_2 = \left[ \begin{array}{c} r_{2, 1} \\ \vdots \\ r_{2, 2s}\\ \end{array} \right], \ R_3 = \left[ \begin{array}{c} r_{3, 1} \\ \vdots \\ r_{3, k+2l} \\ \end{array} \right], \end{equation} (30)

    where r_{1, t} \in {\mathbb{R}}^{t \times p}, r_{2, i} \in {\mathbb{R}}^{1 \times p}, r_{3, j} \in {\mathbb{R}}^{2 \times p}, \ i = 1, \cdots, 2s, \ j = 1, \cdots, k+2l .

    Let

    \begin{equation} \left\{ \begin{array}{rcl} && J_t = r_{1, t}^\top r_{1, t}, \\ && J_{t+i} = \lambda_{2i-1}r_{2, 2i}^\top r_{2, 2i-1}+r_{2, 2i-1}^\top r_{2, 2i} \ (1 \leq i \leq s), \\ && J_{r+i} = r_{3, i}^\top B_i r_{3, i} \ (1 \leq i \leq k), \\ && J_{r+k+2i-1} = r_{3, k+2i}^\top D_1 \tilde{\delta}_{k+2i-1}r_{3, k+2i-1}+r_{3, k+2i-1}^\top D_1 r_{3, k+2i} \ (1 \leq i \leq l), \\ && J_{r+k+2i} = r_{3, k+2i}^\top D_2 \tilde{\delta}_{k+2i-1}r_{3, k+2i-1}+r_{3, k+2i-1}^\top D_2 r_{3, k+2i} \ (1 \leq i \leq l), \end{array} \right. \end{equation} (31)

    with r = t+s, q = t+s+k+2l. Then the relation of (29) is equivalent to

    \begin{eqnarray*} && g(f_{11}, h_1, \cdots, h_s, a_1, \cdots, a_{k+2l}) = \\ &&\|f_{11}J_t+h_1J_{t+1}+\cdots+h_sJ_{r}+a_1J_{r+1}+\cdots+a_{k+2l}J_{q}-\tilde{A}_{11}\|^2 = \text{min}, \end{eqnarray*}

    that is,

    \begin{eqnarray*} && g(f_{11}, h_1, \cdots, h_s, a_1, \cdots, a_{k+2l}) \\ && = \text{tr} [(f_{11}J_t+h_1J_{t+1}+\cdots+h_sJ_{r}+a_1J_{r+1}+\cdots+a_{k+2l}J_{q}-\tilde{A}_{11})^\top \\ && (f_{11}J_t+h_1J_{t+1}+\cdots+h_sJ_{r}+a_1J_{r+1}+\cdots+a_{k+2l}J_{q}-\tilde{A}_{11})]\\ && = f_{11}^2c_{t, t}+2f_{11}h_1c_{t, t+1}+\cdots+2f_{11}h_sc_{t, r}+2f_{11}a_1c_{t, r+1}+\cdots+2f_{11}a_{k+2l}c_{t, q}-2f_{11}e_t\\ && +h_1^2c_{t+1, t+1}+\cdots+2h_1h_sc_{t+1, r}+2h_1a_1c_{t+1, r+1}+\cdots+2h_1a_{k+2l}c_{t+1, q}-2h_1e_{t+1} \\ && +\cdots \\ && +h_s^2c_{r, r}+2h_sa_1c_{r, r+1}+\cdots+2h_sa_{k+2l}c_{r, q}-2h_se_{r}\\ && +a_1^2c_{r+1, r+1}+\cdots+2a_1a_{k+2l}c_{r+1, q}-2a_1e_{r+1}\\ && +\cdots \\ && +a_{k+2l}^2c_{q, q}-2a_{k+2l}e_{q}+\text{tr} (\tilde{A}_{11}^\top \tilde{A}_{11}), \end{eqnarray*}

    where c_{i, j} = \text{tr} (J_i^\top J_j), e_i = \text{tr} (J_i^\top\tilde{A}_{11}) (i, j = t, \cdots, t+s+k+2l) and c_{i, j} = c_{j, i} .

    Consequently,

    \begin{eqnarray*} \frac{\partial g(f_{11}, h_1, \cdots, h_s, a_1, \cdots, a_{k+2l})}{\partial f_{11}}&& = 2f_{11}c_{t, t}+2h_1c_{t, t+1}+\cdots+2h_sc_{t, r}+2a_1c_{t, r+1}\\ &&+\cdots+2a_{k+2l}c_{t, q}-2e_t, \\ \frac{\partial g(f_{11}, h_1, \cdots, h_s, a_1, \cdots, a_{k+2l})}{\partial h_{1}}&& = 2f_{11}c_{t+1, t}+2h_1c_{t+1, t+1}+\cdots+2h_sc_{t+1, r}+2a_1c_{t+1, r+1}\\ &&+\cdots+2a_{k+2l}c_{t+1, q}-2e_{t+1}, \\ &&\cdots\\ \frac{\partial g(f_{11}, h_1, \cdots, h_s, a_1, \cdots, a_{k+2l})}{\partial h_{s}}&& = 2f_{11}c_{r, t}+2h_1c_{r, t+1}+\cdots+2h_sc_{r, r}+2a_1c_{r, r+1}\\ &&+\cdots+2a_{k+2l}c_{r, q}-2e_{r}, \\ \frac{\partial g(f_{11}, h_1, \cdots, h_s, a_1, \cdots, a_{k+2l})}{\partial a_{1}}&& = 2f_{11}c_{r+1, t}+2h_1c_{r+1, t+1}+\cdots+2h_sc_{r+1, r}+2a_1c_{r+1, r+1}\\ &&+\cdots+2a_{k+2l}c_{r+1, q}-2e_{r+1}, \\ &&\cdots\\ \frac{\partial g(f_{11}, h_1, \cdots, h_s, a_1, \cdots, a_{k+2l})}{\partial a_{k+2l}}&& = 2f_{11}c_{q, t}+2h_1c_{q, t+1}+\cdots+2h_sc_{q, r}+2a_1c_{q, r+1}\\ &&+\cdots+2a_{k+2l}c_{q, q}-2e_{q}. \end{eqnarray*}

    Clearly, g(f_{11}, h_1, \cdots, h_s, a_1, \cdots, a_{k+2l}) = \text{min} if and only if

    \frac{\partial g(f_{11}, h_1, \cdots, h_s, a_1, \cdots, a_{k+2l})}{\partial f_{11}} = 0, \cdots, \frac{\partial g(f_{11}, h_1, \cdots, h_s, a_1, \cdots, a_{k+2l})}{\partial a_{k+2l}} = 0.

    Therefore,

    \begin{equation} \begin{split} &f_{11}c_{t, t}+h_1c_{t, t+1}+\cdots+h_sc_{t, r}+a_1c_{t, r+1}+\cdots+a_{k+2l}c_{t, q} = e_t, \\ &f_{11}c_{t+1, t}+h_1c_{t+1, t+1}+\cdots+h_sc_{t+1, r}+a_1c_{t+1, r+1}+\cdots+a_{k+2l}c_{t+1, q} = e_{t+1}, \\ &\cdots\\ &f_{11}c_{r, t}+h_1c_{r, t+1}+\cdots+h_sc_{r, r}+a_1c_{r, r+1}+\cdots+a_{k+2l}c_{r, q} = e_{r}, \\ &f_{11}c_{r+1, t}+h_1c_{r+1, t+1}+\cdots+h_sc_{r+1, r}+a_1c_{r+1, r+1}+\cdots+a_{k+2l}c_{r+1, q} = e_{r+1}, \\ &\cdots\\ &f_{11}c_{q, t}+h_1c_{q, t+1}+\cdots+h_sc_{q, r}+a_1c_{q, r+1}+\cdots+a_{k+2l}c_{q, q} = e_{q}. \end{split} \end{equation} (32)

    If let

    C = \left[ \begin{array}{ccccccc} c_{t, t}&c_{t, t+1}&\cdots&c_{t, r}&c_{t, r+1}&\cdots&c_{t, q}\\ c_{t+1, t}&c_{t+1, t+1}&\cdots&c_{t+1, r}&c_{t+1, r+1}&\cdots&c_{t+1, q}\\ \vdots&\vdots& &\vdots&\vdots& &\vdots\\ c_{r, t}&c_{r, t+1}&\cdots&c_{r, r}&c_{r, r+1}&\cdots&c_{r, q}\\ c_{r+1, t}&c_{r+1, t+1}&\cdots&c_{r+1, r}&c_{r+1, r+1}&\cdots&c_{r+1, q}\\ \vdots&\vdots& &\vdots&\vdots& &\vdots\\ c_{q, t}&c_{q, t+1}&\cdots&c_{q, r}&c_{q, r+1}&\cdots&c_{q, q}\\ \end{array} \right], \ h = \left[ \begin{array}{c} f_{11}\\ h_1\\ \vdots\\ h_s\\ a_1\\ \vdots\\ a_{k+2l}\\ \end{array} \right], \ e = \left[ \begin{array}{c} e_t\\ e_{t+1}\\ \vdots\\ e_{r}\\ e_{r+1}\\ \vdots\\ e_{q}\\ \end{array} \right],

    where C is a symmetric matrix. Then the equation (32) is equivalent to

    \begin{equation} C h = e, \end{equation} (33)

    and the solution of the equation (33) is

    \begin{equation} h = C^{-1} e. \end{equation} (34)
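
    In code, (33) and (34) amount to filling C and e with traces and calling a linear solver. A minimal sketch (assuming NumPy; Js stands for the list of matrices J_t, ..., J_q from (31) and A11_tilde for the block from (24)) is:

    ```python
    import numpy as np

    def solve_free_parameters(Js, A11_tilde):
        """Assemble C and e of (32)-(33) from the matrices J_i of (31) and solve (34)."""
        C = np.array([[np.trace(Ji.T @ Jj) for Jj in Js] for Ji in Js])   # c_{i,j} = tr(J_i^T J_j)
        e = np.array([np.trace(Ji.T @ A11_tilde) for Ji in Js])           # e_i = tr(J_i^T A11~)
        return np.linalg.solve(C, e)        # h = [f_11, h_1,...,h_s, a_1,...,a_{k+2l}]
    ```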

    Substituting (34) into (20)-(21), we can obtain f_{11}, F_{22} and F_{33} explicitly. Similarly, the equation of (26) is equivalent to

    \begin{eqnarray*} &&g(A_{12}) = \text{tr} (A_{12}^\top A_{12})+\text{tr} (\tilde{A}_{12}^\top \tilde{A}_{12})-2\text{tr} (A_{12}^\top \tilde{A}_{12})\\ &&+\text{tr} (E^\top A_{12}A_{12}^\top E)+\text{tr} (\tilde{A}_{21}^\top \tilde{A}_{21})-2\text{tr} (E^\top A_{12}\tilde{A}_{21}). \end{eqnarray*}

    Applying Lemma 1, we obtain

    \frac{\partial g(A_{12})}{\partial A_{12}} = 2A_{12}-2\tilde{A}_{12}+2EE^\top A_{12}-2E\tilde{A}_{21}^\top,

    setting \frac{\partial g(A_{12})}{\partial A_{12}} = 0 , we obtain

    \begin{equation} A_{12} = (I_p+EE^\top)^{-1}(\tilde{A}_{12}+E\tilde{A}_{21}^\top). \end{equation} (35)
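
    Formula (35) is a single linear solve. A sketch (assuming NumPy; E, \tilde{A}_{12}, \tilde{A}_{21} as defined above, and the helper name is hypothetical) is:

    ```python
    import numpy as np

    def best_A12_A21(E, A12_tilde, A21_tilde):
        """A12 of (35), the minimizer of (26); A21 then follows from (22) as A12^T E."""
        p = E.shape[0]
        A12 = np.linalg.solve(np.eye(p) + E @ E.T, A12_tilde + E @ A21_tilde.T)
        return A12, A12.T @ E
    ```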

    Theorem 2. Given \tilde{A} \in {\mathbb{R}}^{n \times n} , then the Problem BAP has a unique solution and the unique solution of Problem BAP is

    \begin{equation} \hat{A} = Q\left[ \begin{array}{cc} R^{-\top}\left[ \begin{array}{ccc} f_{11} & 0 & 0 \\ 0 & F_{22} & 0 \\ 0 & 0 & F_{33} \\ \end{array} \right]R^{-1} & A_{12} \\ A_{12}^\top E & \tilde{A}_{22} \\ \end{array} \right]Q^\top, \end{equation} (36)

    where E = R\tilde{\Lambda} R^{-1} , F_{22}, F_{33}, A_{12}, \tilde{A}_{22} are given by (20), (21), (35), (24) and f_{11}, h_{1}, \cdots, h_s, a_1, \cdots, a_{k+2l} are given by (34) .

    Based on Theorems 1 and 2 , we can describe an algorithm for solving Problem BAP as follows.

    Algorithm 1.

    1) Input matrices \Lambda , X and \tilde{A} ;

    2) Rearrange \Lambda as (4), and adjust the column vectors of X correspondingly;

    3) Form the unitary transformation matrix T_p by (5);

    4) Compute real-valued matrices \tilde{\Lambda}, \tilde{X} by (6) and (7);

    5) Compute the QR-decomposition of \tilde{X} by (9);

    6) Set F_{12} = 0, F_{21} = 0, F_{13} = 0, F_{31} = 0, F_{23} = 0, F_{32} = 0 by (19), and compute E = R\tilde{\Lambda} R^{-1} ;

    7) Compute \tilde{A}_{ij} = Q_i^\top \tilde{A}Q_j, i, j = 1, 2 ;

    8) Compute R^{-1} by (28) to form R_1, R_2, R_3 ;

    9) Partition the matrices R_1, R_2, R_3 as in (30) to form r_{1, t}, r_{2, i}, r_{3, j}, i = 1, \cdots, 2s, j = 1, \cdots, k+2l ;

    10) Compute J_i, i = t, \cdots, t+s+k+2l, by (31);

    11) Compute c_{i, j} = \mbox{tr} (J_i^\top J_j), e_i = \mbox{tr} (J_i^\top\tilde{A}_{11}), i, j = t, \cdots, t+s+k+2l ;

    12) Compute f_{11}, h_1, \cdots, h_s, a_1, \cdots, a_{k+2l} by (34);

    13) Compute F_{22}, F_{33} by (20), (21) and A_{22} = \tilde{A}_{22} ;

    14) Compute A_{12} by (35) and A_{21} by (22);

    15) Compute the matrix \hat{A} by (36).
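
    Once \hat{A} has been computed, its quality can be assessed exactly as in the examples below, by the residual of Eq (2). A one-line sketch (assuming NumPy):

    ```python
    import numpy as np

    def palindromic_residual(A_hat, X, Lam):
        """Frobenius norm of A_hat X - A_hat^T X Lam, the accuracy measure in Examples 1 and 2."""
        return np.linalg.norm(A_hat @ X - A_hat.T @ X @ Lam)
    ```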

    Example 1. Consider an 11-DOF system, where

    \tilde{A} = \left[ {\begin{array}{rrrrrrrrrrr} 96.1898 & 18.1847 & 51.3250 & 49.0864 & 13.1973 & 64.9115 & 62.5619 & 81.7628 & 58.7045 & 31.1102 & 26.2212 \\ 0.4634 & 26.3803 & 40.1808 & 48.9253 & 94.2051 & 73.1722 & 78.0227 & 79.4831 & 20.7742 & 92.3380 & 60.2843 \\ 77.4910 & 14.5539 & 7.5967 & 33.7719 & 95.6135 & 64.7746 & 8.1126 & 64.4318 & 30.1246 & 43.0207 & 71.1216 \\ 81.7303 & 13.6069 & 23.9916 & 90.0054 & 57.5209 & 45.0924 & 92.9386 & 37.8609 & 47.0923 & 18.4816 & 22.1747 \\ 86.8695 & 86.9292 & 12.3319 & 36.9247 & 5.9780 & 54.7009 & 77.5713 & 81.1580 & 23.0488 & 90.4881 & 11.7418 \\ 8.4436 & 57.9705 & 18.3908 & 11.1203 & 23.4780 & 29.6321 & 48.6792 & 53.2826 & 84.4309 & 97.9748 & 29.6676 \\ 39.9783 & 54.9860 & 23.9953 & 78.0252 & 35.3159 & 74.4693 & 43.5859 & 35.0727 & 19.4764 & 43.8870 & 31.8778 \\ 25.9870 & 14.4955 & 41.7267 & 38.9739 & 82.1194 & 18.8955 & 44.6784 & 93.9002 & 22.5922 & 11.1119 & 42.4167 \\ 80.0068 & 85.3031 & 4.9654 & 24.1691 & 1.5403 & 68.6775 & 30.6349 & 87.5943 & 17.0708 & 25.8065 & 50.7858 \\ 43.1414 & 62.2055 & 90.2716 & 40.3912 & 4.3024 & 18.3511 & 50.8509 & 55.0156 & 22.7664 & 40.8720 & 8.5516 \\ 91.0648 & 35.0952 & 94.4787 & 9.6455 & 16.8990 & 36.8485 & 51.0772 & 62.2475 & 43.5699 & 59.4896 & 26.2482 \\ \end{array}} \right],

    the measured eigenvalue and eigenvector matrices \Lambda and X are given by

    \begin{eqnarray*} &&\Lambda = \mbox{diag} \{1.0000, \ -1.8969, \ -0.5272, \ -0.1131+0.9936i, -0.1131-0.9936i, \\ &&1.9228+2.7256i, \ 1.9228-2.7256i, \ 0.1728-0.2450i, \ 0.1728+0.2450i\}, \end{eqnarray*}

    and

    \begin{eqnarray*} &&X = \left[ {\begin{array}{rrrrr} -0.0132& -1.0000& 0.1753& 0.0840 + 0.4722i& 0.0840 - 0.4722i \\ -0.0955& 0.3937& 0.1196& -0.3302 - 0.1892i& -0.3302 + 0.1892i \\ -0.1992& 0.5220& -0.0401& 0.3930 - 0.2908i& 0.3930 + 0.2908i \\ 0.0740& 0.0287& 0.6295& -0.3587 - 0.3507i& -0.3587 + 0.3507i \\ 0.4425& -0.3609& -0.5745& 0.4544 - 0.3119i& 0.4544 + 0.3119i \\ 0.4544& -0.3192& -0.2461& -0.3002 - 0.1267i& -0.3002 + 0.1267i \\ 0.2597& 0.3363& 0.9046& -0.2398 - 0.0134i& -0.2398 + 0.0134i \\ 0.1140& 0.0966& 0.0871& 0.1508 + 0.0275i& 0.1508 - 0.0275i \\ -0.0914& -0.0356& -0.2387& -0.1890 - 0.0492i& -0.1890 + 0.0492i \\ 0.2431& 0.5428& -1.0000& 0.6652 + 0.3348i& 0.6652 - 0.3348i \\ 1.0000& -0.2458& 0.2430& -0.2434 + 0.6061i& -0.2434 - 0.6061i \\ \end{array}} \right. \\ &&\left. {\begin{array}{rrrr} 0.6669 + 0.2418i& 0.6669 - 0.2418i& 0.2556 - 0.1080i& 0.2556 + 0.1080i \\ -0.1172 - 0.0674i& -0.1172 + 0.0674i& -0.5506 - 0.1209i& -0.5506 + 0.1209i \\ 0.5597 - 0.2765i& 0.5597 + 0.2765i& -0.3308 + 0.1936i& -0.3308 - 0.1936i \\ -0.7217 - 0.0566i& -0.7217 + 0.0566i& -0.7306 - 0.2136i& -0.7306 + 0.2136i \\ 0.0909 + 0.0713i& 0.0909 - 0.0713i& 0.5577 + 0.1291i& 0.5577 - 0.1291i \\ 0.1867 + 0.0254i& 0.1867 - 0.0254i& 0.2866 + 0.1427i& 0.2866 - 0.1427i \\ -0.5311 - 0.1165i& -0.5311 + 0.1165i& -0.3873 - 0.1096i& -0.3873 + 0.1096i \\ 0.2624 + 0.0114i& 0.2624 - 0.0114i& -0.6438 + 0.2188i& -0.6438 - 0.2188i \\ -0.0619 - 0.1504i& -0.0619 + 0.1504i& 0.2787 - 0.2166i& 0.2787 + 0.2166i \\ 0.3294 - 0.1718i& 0.3294 + 0.1718i& 0.9333 + 0.0667i& 0.9333 - 0.0667i \\ -0.4812 + 0.5188i& -0.4812 - 0.5188i& 0.6483 - 0.1950i& 0.6483 + 0.1950i \\ \end{array}} \right]. \end{eqnarray*}

    Using Algorithm 1, we obtain the unique solution of Problem BAP as follows:

    \begin{eqnarray*} \hat{A} = \left[ {\begin{array}{rrrrrrrrrrr} 34.2563 & 41.7824 & 33.3573 & 33.6298 & 23.8064 & 42.0770 & 50.0641 & 37.5705 & 31.0908 & 48.6169 & 19.0972\\ 18.8561 & 35.2252 & 35.9592 & 44.3502 & 31.9918 & 55.2920 & 55.3052 & 54.3793 & 31.3909 & 60.8345 & 16.9540\\ 29.6359 & 7.6805 & 19.1249 & 17.7183 & 16.7082 & 40.0636 & 18.2916 & 49.9437 & 37.6913 & 15.6027 & 4.9603\\ 58.8782 & 51.4906 & 47.8974 & 35.6985 & 45.6889 & 56.0434 & 53.0908 & 56.5402 & 55.5120 & 38.3447 & 35.8894\\ 33.4087 & 46.9635 & 9.7767 & 41.4215 & 51.4466 & 52.1058 & 65.6724 & 60.1293 & 5.8061 & 62.0139 & 16.5231\\ 31.6580 & 51.2359 & 24.7978 & 65.5567 & 61.7840 & 62.5494 & 58.9363 & 74.7099 & 52.2105 & 55.8532 & 44.3925\\ 19.2961 & 51.2333 & 22.4280 & 56.9340 & 42.6348 & 45.8453 & 56.3729 & 61.5555 & 31.6836 & 67.9525 & 40.2012\\ 41.2796 & 71.3821 & 34.4140 & 33.2817 & 77.4393 & 60.8944 & 32.1411 & 108.5056 & 49.6078 & 19.8351 & 85.7434\\ 64.0890 & 57.6524 & 19.1280 & 25.0394 & 39.0524 & 66.7740 & 20.9023 & 48.8512 & 14.4695 & 18.9284 & 24.8348\\ 37.2550 & 32.3254 & 38.3534 & 59.7358 & 33.5902 & 54.0265 & 50.7770 & 70.2011 & 65.4159 & 58.0720 & 40.0652\\ 28.1301 & 14.7638 & 8.9507 & 20.0963 & 25.5907 & 59.6940 & 30.8558 & 66.8781 & 30.4807 & 23.6107 & 12.9984\\ \end{array}} \right], \end{eqnarray*}

    and

    \|\hat{A}X-\hat{A}^\top X\Lambda\| = 8.2431\times 10^{-13}.

    Therefore, the new model \hat{A}X = \hat{A}^\top X\Lambda reproduces the prescribed eigenvalues (the diagonal elements of the matrix \Lambda ) and eigenvectors (the column vectors of the matrix X ).

    Example 2. (Example 4.1 of [12]) Given \alpha = \cos(\theta) , \beta = \sin(\theta) with \theta = 0.62 and \lambda_1 = 0.2, \lambda_2 = 0.3, \lambda_3 = 0.4 . Let

    \begin{equation*} J_0 = \left[ {\begin{array}{rr} 0_2 & \Gamma\\ I_2 & I_2\\ \end{array}} \right], \ J_s = \left[ {\begin{array}{cc} 0_3 & \mbox{diag} \{\lambda_1, \lambda_2, \lambda_3 \}\\ I_3 & 0_3\\ \end{array}} \right], \end{equation*}

    where \Gamma = \left[{\begin{array}{cc} \alpha & -\beta\\ \beta & \alpha\\ \end{array}} \right]. We construct

    \begin{equation*} \tilde{A} = \left[ {\begin{array}{rr} J_0 & 0\\ 0 & J_s\\ \end{array}} \right], \end{equation*}

    the measured eigenvalue and eigenvector matrices \Lambda and X are given by

    \begin{eqnarray*} \Lambda = \mbox{diag} \{5, 0.2, 0.8139+0.5810i, 0.8139-0.5810i\}, \end{eqnarray*}

    and

    \begin{eqnarray*} X = \left[ {\begin{array}{rrrr} -0.4155& 0.6875& -0.2157 - 0.4824i& -0.2157 + 0.4824i\\ -0.4224& -0.3148& -0.3752 + 0.1610i& -0.3752 - 0.1610i\\ -0.0703& -0.6302& -0.5950 - 0.4050i& -0.5950 + 0.4050i\\ -1.0000& -0.4667& 0.2293 - 0.1045i& 0.2293 + 0.1045i\\ 0.2650& 0.3051& -0.2253 + 0.7115i& -0.2253 - 0.7115i\\ 0.9030& -0.2327& 0.4862 - 0.3311i& 0.4862 + 0.3311i\\ -0.6742& 0.3132& 0.5521 - 0.0430i& 0.5521 + 0.0430i\\ 0.6358& 0.1172& -0.0623 - 0.0341i& -0.0623 + 0.0341i\\ -0.4119& -0.2768& 0.1575 + 0.4333i& 0.1575 - 0.4333i\\ -0.2062& 1.0000& -0.1779 - 0.0784i& -0.1779 + 0.0784i\\ \end{array}} \right]. \end{eqnarray*}

    Using Algorithm 1, we obtain the unique solution of Problem BAP as follows:

    \begin{equation*} \hat{A} = \left[ {\begin{array}{rrrrrrrrrr} -0.1169& -0.2366& 0.6172& -0.7195& -0.0836& 0.2884& 0.0092& -0.0490& -0.0202& 0.0171\\ -0.0114& -0.0957& 0.1462& 0.6194& 0.3738& -0.1637& 0.1291& -0.0071& 0.0972& 0.1247\\ 0.7607& -0.0497& 0.5803& -0.0346& 0.0979& 0.2959& 0.0937& -0.1060& 0.1323& -0.0339\\ -0.0109& 0.6740& -0.3013& 0.7340& 0.1942& -0.0872& 0.0054& 0.0051& 0.0297& 0.0814\\ 0.1783& 0.2283& 0.2643& 0.0387& 0.0986& -0.3125& -0.0292& 0.2926& -0.0717& -0.0546\\ 0.0953& 0.1027& 0.0360& 0.2668& -0.2418& 0.1206& 0.1406& -0.0551& 0.3071& 0.2097\\ -0.0106& -0.2319& 0.1946& -0.0298& -0.1935& 0.0158& -0.0886& 0.0216& -0.0560& 0.2484\\ 0.1044& 0.1285& 0.1902& 0.2277& 0.6961& 0.1657& 0.0728& -0.0262& -0.0831& -0.0001\\ 0.0906& 0.0021& 0.0764& -0.1264& 0.2144& 0.6703& -0.0850& 0.0764& -0.0104& -0.0149\\ -0.1245& 0.0813& 0.1952& -0.0784& 0.0760& -0.0875& 0.7978& -0.0093& 0.0206& -0.1182\\ \end{array}} \right], \end{equation*}

    and

    \|\hat{A}X-\hat{A}^\top X\Lambda\| = 1.7538\times 10^{-8}.

    Therefore, the new model \hat{A}X = \hat{A}^\top X\Lambda reproduces the prescribed eigenvalues (the diagonal elements of the matrix \Lambda ) and eigenvectors (the column vectors of the matrix X ).

    In this paper, we have developed a direct method to solve the linear inverse palindromic eigenvalue problem by partitioning the matrix \Lambda and using the QR-decomposition. The explicit best approximation solution is given. The numerical examples show that the proposed method is straightforward and easy to implement.

    The authors declare no conflict of interest.



    [1] W. M. Abd-Elhameed, A. N. Philippou, N. A. Zeyada, Novel results for two generalized classes of Fibonacci and Lucas polynomials and their uses in the reduction of some radicals, Mathematics, 10 (2022), 2342. https://doi.org/10.3390/math10132342 doi: 10.3390/math10132342
    [2] W. M. Abd-Elhameed, N. A. Zeyada, New identities involving generalized Fibonacci and generalized Lucas numbers, Indian J. Pure Appl. Math., 49 (2018), 527–537. https://doi.org/10.1007/s13226-018-0282-7 doi: 10.1007/s13226-018-0282-7
    [3] W. M. Abd-Elhameed, A. Napoli, Some novel formulas of Lucas polynomials via different approaches, Symmetry, 15 (2023), 185. https://doi.org/10.3390/sym15010185 doi: 10.3390/sym15010185
    [4] W. M. Abd-Elhameed, A. Napoli, New formulas of convolved Pell polynomials, AIMS Math., 9 (2024), 565–593. https://doi.org/10.3934/math.2024030 doi: 10.3934/math.2024030
    [5] W. M. Abd-Elhameed, Y. H. Youssri, N. El-Sissi, M. Sadek, New hypergeometric connection formulae between Fibonacci and Chebyshev polynomials, Ramanujan J., 42 (2017), 347–361. https://doi.org/10.1007/s11139-015-9712-x doi: 10.1007/s11139-015-9712-x
    [6] Y. Yi, W. Zhang, Some identities involving the Fibonacci polynomials, Fibonacci Quart, 40 (2002), 314–318.
    [7] V. E. Hoggatt, M. Bicknell, Roots of Fibonacci polynomials, Fibonacci Quart, 11 (1973), 271–274.
    [8] Z. Wu, W. Zhang, The sums of the reciprocals of Fibonacci polynomials and Lucas polynomials, J. Inequal. Appl., 2012 (2012), 134. https://doi.org/10.1186/1029-242X-2012-134 doi: 10.1186/1029-242X-2012-134
    [9] U. K. Dutta, P. K. Ray, On the finite reciprocal sums of Fibonacci and Lucas polynomials, AIMS Math., 4 (2019), 1569–1581. https://doi.org/10.3934/math.2019.6.1569 doi: 10.3934/math.2019.6.1569
    [10] T. Koshy, Fibonacci and Lucas numbers with applications, John Wiley & Sons, Inc., 2001. https://doi.org/10.1002/9781118033067
    [11] R. Flórez, N. McAnally, A. Mukherjee, Identities for the generalized Fibonacci polynomial, arXiv, 2017. https://doi.org/10.48550/arXiv.1702.01855
    [12] A. Nalli, P. Haukkanen, On generalized Fibonacci and Lucas polynomials, Chaos Solitons Fract., 42 (2009), 3179–3186. https://doi.org/10.1016/j.chaos.2009.04.048 doi: 10.1016/j.chaos.2009.04.048
    [13] R. Flórez, R. A. Higuita, A. Mukherjee, Characterization of the strong divisibility property for generalized Fibonacci polynomials, arXiv, 2018. https://doi.org/10.48550/arXiv.1701.06722
    [14] R. Flórez, R. A. Higuita, A. Mukherjee, Alternating sums in the Hosoya polynomial triangle, J. Integer Seq., 17 (2014), 14.9.5.
    [15] N. Yilmaz, A. Coskun, N. Taskara, On properties of bi-periodic Fibonacci and Lucas polynomials, AIP Conf. Proc., 1863 (2017), 310002. https://doi.org/10.1063/1.4992478 doi: 10.1063/1.4992478
    [16] T. Du, Z. Wu, Some identities involving the bi-periodic Fibonacci and Lucas polynomials, AIMS Math., 8 (2023), 5838–5846. https://doi.org/10.3934/math.2023294 doi: 10.3934/math.2023294
    [17] B. Guo, E. Polatli, F. Qi, Determinantal formulas and recurrent relations for bi-periodic Fibonacci and Lucas polynomials, In: S. K. Paikray, H. Dutta, J. N. Mordeson, New trends in applied analysis and computational mathematics, Springer, Singapore, 1356 (2021), 263–276. https://doi.org/10.1007/978-981-16-1402-6_18
    [18] Y. Taşyurdu, Bi-periodic generalized Fibonacci polynomials, Turk. J. Sci., 7 (2022), 157–167.
    [19] H. W. Gould, A history of the Fibonacci Q-matrix and a higher dimensional problem, Fibonacci Quart, 19 (1981), 250–257.
    [20] J. R. Silvester, Fibonacci properties by matrix methods, Math. Gaz., 63 (1979), 188–191. https://doi.org/10.2307/3617892 doi: 10.2307/3617892
    [21] S. P. Jun, K. H. Choi, Some properties of the generalized Fibonacci sequence q_{n} by matrix methods, Korean J. Math., 24 (2016), 681–691. https://doi.org/10.11568/kjm.2016.24.4.681 doi: 10.11568/kjm.2016.24.4.681
    [22] E. Tan, A. B. Ekin, Some identities on conditional sequences by using matrix method, Miskolc Math. Notes, 18 (2017), 469–477. https://doi.org/10.18514/MMN.2017.1321 doi: 10.18514/MMN.2017.1321
  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)