
In this paper we expressed the eigenvalues of a class of heptadiagonal symmetric matrices as the zeros of explicit rational functions, establishing upper and lower bounds for each of them. From the prescribed eigenvalues, we computed eigenvectors for these matrices, also giving formulas, free of any unknown parameter, for their determinant and inverse. Potential applications of the results are also provided.
Citation: João Lita da Silva. Spectral properties for a type of heptadiagonal symmetric matrices[J]. AIMS Mathematics, 2023, 8(12): 29995-30022. doi: 10.3934/math.20231534
The spectral properties of tridiagonal matrices are a well-studied topic for which a vast literature can be found (e.g., [1,5,16,17,19,25,27,35], among others), and formulae for the inverses of these matrices were also discussed over the last decades of the twentieth century (see [15] and the references therein). Recently, taking advantage of basic properties of the Chebyshev polynomials, some authors have established localization theorems for the eigenvalues of real pentadiagonal and heptadiagonal symmetric Toeplitz matrices by expressing them as the zeros of explicit rational functions [12,32]. The eigenvalues of a special kind of heptadiagonal matrix were also derived in [26] by other methods, namely determinant properties and recurrence relations.
In fact, the above-mentioned matrices are typical examples of a much wider class called band matrices (see [30], page 13), and the idea of having explicit formulas to compute their eigenvalues and eigenvectors, or to establish some of their other properties, is both appealing and challenging by reason of their usefulness in many areas of science and engineering (see, for instance, [4,10,11,14,20,24,33]).
In order to contribute to this subject, we shall obtain the eigenvalues of the following n×n heptadiagonal matrix
\begin{equation} {{\bf{H}}}_{n} = \left[ \begin{array}{ccccccccccc} \xi & \eta & c & d & 0 & \cdots & \cdots & \cdots & \cdots & \cdots & 0 \\ \eta & a & b & c & d & \ddots & & & & & \vdots \\ c & b & a & b & c & \ddots & \ddots & & & & \vdots \\ d & c & b & a & b & \ddots & \ddots & \ddots & & & \vdots \\ 0 & d & c & b & a & \ddots & \ddots & \ddots & \ddots & & \vdots \\ \vdots & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots & \vdots \\ \vdots & & \ddots & \ddots & \ddots & \ddots & a & b & c & d & 0 \\ \vdots & & & \ddots & \ddots & \ddots & b & a & b & c & d \\ \vdots & & & & \ddots & \ddots & c & b & a & b & c \\ \vdots & & & & & \ddots & d & c & b & a & \eta \\ 0 & \cdots & \cdots & \cdots & \cdots & \cdots & 0 & d & c & \eta & \xi \end{array} \right] \end{equation} | (1.1)
as the zeros of explicit rational functions, also providing upper/lower bounds, not depending on any unknown parameter, for each of them. Further, we shall compute eigenvectors for this sort of matrices at the expense of the prescribed eigenvalues. To accomplish these purposes, we will obtain an orthogonal block diagonalization for the matrix (1.1) in which each block is a sum of a diagonal matrix and dyads, i.e.
\begin{equation} {{\mathrm{diag}}}(d_{1},d_{2},\ldots,d_{\kappa}) + {{\bf{u}}}_{1}{{\bf{v}}}_{1}^{\top} + {{\bf{u}}}_{2}{{\bf{v}}}_{2}^{\top} + \ldots + {{\bf{u}}}_{m}{{\bf{v}}}_{m}^{\top}, \end{equation} | (1.2)
where {{\bf{u}}}_{j},{{\bf{v}}}_{j} , j = 1,\ldots,m , are \kappa \times 1 matrices, by exploiting the modification technique introduced by Fasino in [13] for matrices of the type (1.1). This key ingredient allows us to get formulas for the characteristic polynomial of {{\bf{H}}}_{n} on one hand, and for the inverse of {{\bf{H}}}_{n} on the other (assuming, of course, its nonsingularity). With the aim of getting expressions as explicit as possible, we will use not only results on the secular equation of diagonal matrices perturbed by the addition of rank-one matrices, developed by Anderson in the nineties [2], but also Miller's formula of the eighties for the inverse of the sum of matrices [29]. In Section 4 of the paper, applications of the established results are given, showing their potential usage.
Since the class of matrices {{\bf{H}}}_{n} includes the ones considered in [12] and [32], our statements necessarily extend the results of these papers. Moreover, the current approach also points a way to localization formulas for the eigenvalues of general symmetric quasi-Toeplitz matrices. In detail, the eigenvalues of any symmetric quasi-Toeplitz matrix enjoying a block diagonalization with diagonal blocks of the form (1.2) are precisely the eigenvalues of each one of these diagonal blocks, which in turn can be located/computed by rational functions via Anderson's secular equation.
Throughout this paper, n is assumed to be an integer greater than or equal to four, and a,b,c,d,\xi,\eta in (1.1) will be taken as real numbers; in fact, this last restriction can be discarded, because the majority of the forthcoming statements remain valid when a,b,c,d,\xi,\eta are complex numbers. Moreover, {{\bf{S}}}_{n} will be the n \times n symmetric, involutory and orthogonal matrix defined by
\begin{equation} \left[{{\bf{S}}}_{n}\right]_{k,\ell} := \sqrt{\frac{2}{n+1}} \sin\left(\frac{k\ell\pi}{n+1}\right). \end{equation} | (2.1)
Our first auxiliary result is an orthogonal diagonalization for the following n×n heptadiagonal symmetric matrix
\begin{equation} \hat{{{\bf{H}}}}_{n} = \left[ \begin{array}{ccccccccccc} a-c & b-d & c & d & 0 & \cdots & \cdots & \cdots & \cdots & \cdots & 0 \\ b-d & a & b & c & d & \ddots & & & & & \vdots \\ c & b & a & b & c & \ddots & \ddots & & & & \vdots \\ d & c & b & a & b & \ddots & \ddots & \ddots & & & \vdots \\ 0 & d & c & b & a & \ddots & \ddots & \ddots & \ddots & & \vdots \\ \vdots & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots & \vdots \\ \vdots & & \ddots & \ddots & \ddots & \ddots & a & b & c & d & 0 \\ \vdots & & & \ddots & \ddots & \ddots & b & a & b & c & d \\ \vdots & & & & \ddots & \ddots & c & b & a & b & c \\ \vdots & & & & & \ddots & d & c & b & a & b-d \\ 0 & \cdots & \cdots & \cdots & \cdots & \cdots & 0 & d & c & b-d & a-c \end{array} \right]. \end{equation} | (2.2)
Lemma 1. Let a,b,c,d be real numbers and
\begin{equation} \lambda_{k} = a + 2b\cos\left(\frac{k\pi}{n+1}\right) + 2c\cos\left(\frac{2k\pi}{n+1}\right) + 2d\cos\left(\frac{3k\pi}{n+1}\right), \quad k = 1,\ldots,n. \end{equation} | (2.3)
If ˆHn is the n×n matrix (2.2), then
\begin{equation*} \hat{{{\bf{H}}}}_{n} = {{\bf{S}}}_{n} {\boldsymbol{\Lambda}}_{n} {{\bf{S}}}_{n}, \end{equation*}
where Λn=diag(λ1,…,λn), and Sn is the matrix (2.1).
Proof. Consider the n \times n matrix
\begin{equation*} {\boldsymbol{\Omega}}_{n} = \left[ \begin{array}{cccccc} 0 & 1 & 0 & \cdots & \cdots & 0 \\ 1 & 0 & 1 & \ddots & & \vdots \\ 0 & 1 & 0 & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & 1 & 0 \\ \vdots & & \ddots & 1 & 0 & 1 \\ 0 & \cdots & \cdots & 0 & 1 & 0 \end{array} \right]. \end{equation*}
it is a simple matter of routine to verify that
\begin{equation*} \hat{{{\bf{H}}}}_{n} = (a - 2c){{\bf{I}}}_{n} + (b - 3d){\boldsymbol{\Omega}}_{n} + c{\boldsymbol{\Omega}}_{n}^{2} + d{\boldsymbol{\Omega}}_{n}^{3}. \end{equation*}
Using the spectral decomposition
\begin{equation*} {\boldsymbol{\Omega}}_{n} = \sum\limits_{\ell = 1}^{n} 2\cos\left(\frac{\ell\pi}{n+1}\right) {{\bf{s}}}_{\ell} {{\bf{s}}}_{\ell}^{\top}, \end{equation*}
where
\begin{equation*} {{\bf{s}}}_{\ell} = \left[ \begin{array}{c} \sqrt{\frac{2}{n+1}}\sin\left(\frac{\ell\pi}{n+1}\right) \\ \sqrt{\frac{2}{n+1}}\sin\left(\frac{2\ell\pi}{n+1}\right) \\ \vdots \\ \sqrt{\frac{2}{n+1}}\sin\left(\frac{n\ell\pi}{n+1}\right) \end{array} \right] \end{equation*}
(i.e. the ℓth column of Sn), it follows
\begin{equation*} \hat{{{\bf{H}}}}_{n} = \sum\limits_{\ell = 1}^{n} \left[(a - 2c) + 2(b - 3d)\cos\left(\frac{\ell\pi}{n+1}\right) + 4c\cos^{2}\left(\frac{\ell\pi}{n+1}\right) + 8d\cos^{3}\left(\frac{\ell\pi}{n+1}\right)\right] {{\bf{s}}}_{\ell}{{\bf{s}}}_{\ell}^{\top} = \sum\limits_{\ell = 1}^{n} \lambda_{\ell} {{\bf{s}}}_{\ell}{{\bf{s}}}_{\ell}^{\top} = {{\bf{S}}}_{n}{\boldsymbol{\Lambda}}_{n}{{\bf{S}}}_{n}, \end{equation*}
where Λn=diag(λ1,…,λn), and Sn is the matrix (2.1). The proof is complete.
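Lemma 1 is easy to check numerically. The sketch below (our own illustration with arbitrary parameter values, not part of the proof) builds {{\bf{S}}}_{n} from (2.1), the \lambda_{k} from (2.3) and \hat{{{\bf{H}}}}_{n} through the identity \hat{{{\bf{H}}}}_{n} = (a-2c){{\bf{I}}}_{n} + (b-3d){\boldsymbol{\Omega}}_{n} + c{\boldsymbol{\Omega}}_{n}^{2} + d{\boldsymbol{\Omega}}_{n}^{3} used in the proof, and confirms the diagonalization.

```python
import numpy as np

def S(n):
    # Sine transform matrix (2.1): [S_n]_{k,l} = sqrt(2/(n+1)) sin(k*l*pi/(n+1))
    k = np.arange(1, n + 1)
    return np.sqrt(2.0 / (n + 1)) * np.sin(np.outer(k, k) * np.pi / (n + 1))

def eigenvalues(n, a, b, c, d):
    # lambda_k from (2.3)
    t = np.arange(1, n + 1) * np.pi / (n + 1)
    return a + 2*b*np.cos(t) + 2*c*np.cos(2*t) + 2*d*np.cos(3*t)

def H_hat(n, a, b, c, d):
    # \hat{H}_n = (a-2c)I + (b-3d)Omega + c Omega^2 + d Omega^3,
    # with Omega the 0/1 tridiagonal matrix of the proof
    Om = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    return (a - 2*c)*np.eye(n) + (b - 3*d)*Om + c*(Om @ Om) + d*(Om @ Om @ Om)

n, a, b, c, d = 9, 1.0, -2.0, 0.5, 0.3   # arbitrary test values
Sn = S(n)
Lam = np.diag(eigenvalues(n, a, b, c, d))
err = np.max(np.abs(H_hat(n, a, b, c, d) - Sn @ Lam @ Sn))
print(err)
```

For any parameter choice, `err` is at the level of machine precision, and `Sn` is indeed involutory.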
The next statement is an orthogonal block diagonalization for matrices Hn of the form (1.1) and it extends Proposition 3.1 in [7], which is valid only for heptadiagonal symmetric Toeplitz matrices.
Lemma 2. Let a,b,c,d,ξ,η be real numbers, λk, k=1,…,n be given by (2.3) and Hn be the n×n matrix (1.1).
(a) If n is even,
\begin{equation} {{\bf{x}}} = \left[ \begin{array}{c} \frac{2}{\sqrt{n+1}}\sin\left(\frac{\pi}{n+1}\right) \\ \frac{2}{\sqrt{n+1}}\sin\left(\frac{3\pi}{n+1}\right) \\ \vdots \\ \frac{2}{\sqrt{n+1}}\sin\left[\frac{(n-1)\pi}{n+1}\right] \end{array} \right], \qquad {{\bf{y}}} = \left[ \begin{array}{c} \frac{2}{\sqrt{n+1}}\sin\left(\frac{2\pi}{n+1}\right) \\ \frac{2}{\sqrt{n+1}}\sin\left(\frac{6\pi}{n+1}\right) \\ \vdots \\ \frac{2}{\sqrt{n+1}}\sin\left[\frac{(2n-2)\pi}{n+1}\right] \end{array} \right] \end{equation} | (2.4a)
and
\begin{equation} {{\bf{v}}} = \left[ \begin{array}{c} \frac{2}{\sqrt{n+1}}\sin\left(\frac{2\pi}{n+1}\right) \\ \frac{2}{\sqrt{n+1}}\sin\left(\frac{4\pi}{n+1}\right) \\ \vdots \\ \frac{2}{\sqrt{n+1}}\sin\left(\frac{n\pi}{n+1}\right) \end{array} \right], \qquad {{\bf{w}}} = \left[ \begin{array}{c} \frac{2}{\sqrt{n+1}}\sin\left(\frac{4\pi}{n+1}\right) \\ \frac{2}{\sqrt{n+1}}\sin\left(\frac{8\pi}{n+1}\right) \\ \vdots \\ \frac{2}{\sqrt{n+1}}\sin\left(\frac{2n\pi}{n+1}\right) \end{array} \right], \end{equation} | (2.4b)
then
\begin{equation*} {{\bf{H}}}_{n} = {{\bf{S}}}_{n} {{\bf{P}}}_{n} \left[ \begin{array}{cc} {\boldsymbol{\Phi}}_{\frac{n}{2}} & {{\bf{O}}} \\ {{\bf{O}}} & {\boldsymbol{\Psi}}_{\frac{n}{2}} \end{array} \right] {{\bf{P}}}_{n}^{\top} {{\bf{S}}}_{n}, \end{equation*}
where Sn is the n×n matrix (2.1), Pn is the n×n permutation matrix defined by
\begin{equation} \left[{{\bf{P}}}_{n}\right]_{k,\ell} = \begin{cases} 1, & \text{if } k = 2\ell - 1 \text{ or } k = 2\ell - n \\ 0, & \text{otherwise} \end{cases} \end{equation} | (2.4c)
and
\begin{equation} {\boldsymbol{\Phi}}_{\frac{n}{2}} = {{\mathrm{diag}}}\left(\lambda_{1},\lambda_{3},\ldots,\lambda_{n-1}\right) + (c + \xi - a){{\bf{x}}}{{\bf{x}}}^{\top} + (d + \eta - b){{\bf{x}}}{{\bf{y}}}^{\top} + (d + \eta - b){{\bf{y}}}{{\bf{x}}}^{\top} \end{equation} | (2.4d)
\begin{equation} {\boldsymbol{\Psi}}_{\frac{n}{2}} = {{\mathrm{diag}}}\left(\lambda_{2},\lambda_{4},\ldots,\lambda_{n}\right) + (c + \xi - a){{\bf{v}}}{{\bf{v}}}^{\top} + (d + \eta - b){{\bf{v}}}{{\bf{w}}}^{\top} + (d + \eta - b){{\bf{w}}}{{\bf{v}}}^{\top}. \end{equation} | (2.4e)
(b) If n is odd,
\begin{equation} {{\bf{x}}} = \left[ \begin{array}{c} \frac{2}{\sqrt{n+1}}\sin\left(\frac{\pi}{n+1}\right) \\ \frac{2}{\sqrt{n+1}}\sin\left(\frac{3\pi}{n+1}\right) \\ \vdots \\ \frac{2}{\sqrt{n+1}}\sin\left(\frac{n\pi}{n+1}\right) \end{array} \right], \qquad {{\bf{y}}} = \left[ \begin{array}{c} \frac{2}{\sqrt{n+1}}\sin\left(\frac{2\pi}{n+1}\right) \\ \frac{2}{\sqrt{n+1}}\sin\left(\frac{6\pi}{n+1}\right) \\ \vdots \\ \frac{2}{\sqrt{n+1}}\sin\left(\frac{2n\pi}{n+1}\right) \end{array} \right] \end{equation} | (2.5a)
and
\begin{equation} {{\bf{v}}} = \left[ \begin{array}{c} \frac{2}{\sqrt{n+1}}\sin\left(\frac{2\pi}{n+1}\right) \\ \frac{2}{\sqrt{n+1}}\sin\left(\frac{4\pi}{n+1}\right) \\ \vdots \\ \frac{2}{\sqrt{n+1}}\sin\left[\frac{(n-1)\pi}{n+1}\right] \end{array} \right], \qquad {{\bf{w}}} = \left[ \begin{array}{c} \frac{2}{\sqrt{n+1}}\sin\left(\frac{4\pi}{n+1}\right) \\ \frac{2}{\sqrt{n+1}}\sin\left(\frac{8\pi}{n+1}\right) \\ \vdots \\ \frac{2}{\sqrt{n+1}}\sin\left[\frac{2(n-1)\pi}{n+1}\right] \end{array} \right], \end{equation} | (2.5b)
then
\begin{equation*} {{\bf{H}}}_{n} = {{\bf{S}}}_{n} {{\bf{P}}}_{n} \left[ \begin{array}{cc} {\boldsymbol{\Phi}}_{\frac{n+1}{2}} & {{\bf{O}}} \\ {{\bf{O}}} & {\boldsymbol{\Psi}}_{\frac{n-1}{2}} \end{array} \right] {{\bf{P}}}_{n}^{\top} {{\bf{S}}}_{n}, \end{equation*}
where Sn is the n×n matrix (2.1), Pn is the n×n permutation matrix defined by
\begin{equation} \left[{{\bf{P}}}_{n}\right]_{k,\ell} = \begin{cases} 1, & \text{if } k = 2\ell - 1 \text{ or } k = 2\ell - n - 1 \\ 0, & \text{otherwise} \end{cases} \end{equation} | (2.5c)
and
\begin{equation} {\boldsymbol{\Phi}}_{\frac{n+1}{2}} = {{\mathrm{diag}}}\left(\lambda_{1},\lambda_{3},\ldots,\lambda_{n}\right) + (c + \xi - a){{\bf{x}}}{{\bf{x}}}^{\top} + (d + \eta - b){{\bf{x}}}{{\bf{y}}}^{\top} + (d + \eta - b){{\bf{y}}}{{\bf{x}}}^{\top} \end{equation} | (2.5d)
\begin{equation} {\boldsymbol{\Psi}}_{\frac{n-1}{2}} = {{\mathrm{diag}}}\left(\lambda_{2},\lambda_{4},\ldots,\lambda_{n-1}\right) + (c + \xi - a){{\bf{v}}}{{\bf{v}}}^{\top} + (d + \eta - b){{\bf{v}}}{{\bf{w}}}^{\top} + (d + \eta - b){{\bf{w}}}{{\bf{v}}}^{\top}. \end{equation} | (2.5e)
Proof. Consider a,b,c,d,ξ,η as real numbers, λk, k=1,…,n given by (2.3) and Hn as the n×n matrix (1.1). Setting θ:=c+ξ−a, ϑ:=d+η−b,
\begin{equation*} \hat{{{\bf{E}}}}_{n} = \left[ \begin{array}{ccccc} c + \xi - a & 0 & \cdots & \cdots & 0 \\ 0 & 0 & \ddots & & \vdots \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ \vdots & & \ddots & 0 & 0 \\ 0 & \cdots & \cdots & 0 & c + \xi - a \end{array} \right] \end{equation*}
and
\begin{equation*} \hat{{{\bf{F}}}}_{n} = \left[ \begin{array}{cccccc} 0 & d + \eta - b & 0 & \cdots & \cdots & 0 \\ d + \eta - b & 0 & 0 & \ddots & & \vdots \\ 0 & 0 & \ddots & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & 0 & 0 \\ \vdots & & \ddots & 0 & 0 & d + \eta - b \\ 0 & \cdots & \cdots & 0 & d + \eta - b & 0 \end{array} \right], \end{equation*}
we have from Lemma 1
\begin{equation*} {{\bf{S}}}_{n}{{\bf{H}}}_{n}{{\bf{S}}}_{n} = {{\bf{S}}}_{n}\left(\hat{{{\bf{H}}}}_{n} + \hat{{{\bf{E}}}}_{n} + \hat{{{\bf{F}}}}_{n}\right){{\bf{S}}}_{n} = {\boldsymbol{\Lambda}}_{n} + {{\bf{G}}}_{n} + {{\bf{K}}}_{n}, \end{equation*}
where Sn is the n×n matrix (2.1), ˆHn is the n×n matrix (2.2),
\begin{equation*} {\boldsymbol{\Lambda}}_{n} = {{\mathrm{diag}}}(\lambda_{1},\ldots,\lambda_{n}), \qquad \left[{{\bf{G}}}_{n}\right]_{k,\ell} = \frac{2\theta}{n+1}\sin\left(\frac{k\pi}{n+1}\right)\sin\left(\frac{\ell\pi}{n+1}\right)\left[1 + (-1)^{k+\ell}\right] \end{equation*}
and
\begin{equation*} \left[{{\bf{K}}}_{n}\right]_{k,\ell} = \frac{2\vartheta}{n+1}\left[\sin\left(\frac{k\pi}{n+1}\right)\sin\left(\frac{2\ell\pi}{n+1}\right) + \sin\left(\frac{2k\pi}{n+1}\right)\sin\left(\frac{\ell\pi}{n+1}\right)\right]\left[1 + (-1)^{k+\ell}\right]. \end{equation*}
Since [Gn]k,ℓ=[Kn]k,ℓ=0 whenever k+ℓ is odd, we can permute rows and columns of Λn+Gn+Kn according to the permutation matrices (2.4c) and (2.5c), yielding: for n even,
\begin{equation*} {{\bf{H}}}_{n} = {{\bf{S}}}_{n}{{\bf{P}}}_{n} \left[ \begin{array}{cc} {\boldsymbol{\Upsilon}}_{\frac{n}{2}} + \theta{{\bf{x}}}{{\bf{x}}}^{\top} + \vartheta{{\bf{x}}}{{\bf{y}}}^{\top} + \vartheta{{\bf{y}}}{{\bf{x}}}^{\top} & {{\bf{O}}} \\ {{\bf{O}}} & {\boldsymbol{\Delta}}_{\frac{n}{2}} + \theta{{\bf{v}}}{{\bf{v}}}^{\top} + \vartheta{{\bf{v}}}{{\bf{w}}}^{\top} + \vartheta{{\bf{w}}}{{\bf{v}}}^{\top} \end{array} \right] {{\bf{P}}}_{n}^{\top}{{\bf{S}}}_{n}, \end{equation*}
where {{\bf{P}}}_{n} is the matrix (2.4c), {\boldsymbol{\Upsilon}}_{\frac{n}{2}} = {{\mathrm{diag}}}(\lambda_{1},\lambda_{3},\ldots,\lambda_{n-1}) , {\boldsymbol{\Delta}}_{\frac{n}{2}} = {{\mathrm{diag}}}(\lambda_{2},\lambda_{4},\ldots,\lambda_{n}) and {{\bf{x}}}, {{\bf{y}}}, {{\bf{v}}}, {{\bf{w}}} are given by (2.4a) and (2.4b); for n odd,
\begin{equation*} {{\bf{H}}}_{n} = {{\bf{S}}}_{n}{{\bf{P}}}_{n} \left[ \begin{array}{cc} {\boldsymbol{\Upsilon}}_{\frac{n+1}{2}} + \theta{{\bf{x}}}{{\bf{x}}}^{\top} + \vartheta{{\bf{x}}}{{\bf{y}}}^{\top} + \vartheta{{\bf{y}}}{{\bf{x}}}^{\top} & {{\bf{O}}} \\ {{\bf{O}}} & {\boldsymbol{\Delta}}_{\frac{n-1}{2}} + \theta{{\bf{v}}}{{\bf{v}}}^{\top} + \vartheta{{\bf{v}}}{{\bf{w}}}^{\top} + \vartheta{{\bf{w}}}{{\bf{v}}}^{\top} \end{array} \right] {{\bf{P}}}_{n}^{\top}{{\bf{S}}}_{n}, \end{equation*}
with {{\bf{P}}}_{n} defined in (2.5c), {\boldsymbol{\Upsilon}}_{\frac{n+1}{2}} = {{\mathrm{diag}}}(\lambda_{1},\lambda_{3},\ldots,\lambda_{n}) , {\boldsymbol{\Delta}}_{\frac{n-1}{2}} = {{\mathrm{diag}}}(\lambda_{2},\lambda_{4},\ldots,\lambda_{n-1}) and {{\bf{x}}}, {{\bf{y}}}, {{\bf{v}}}, {{\bf{w}}} defined by (2.5a) and (2.5b). The proof is complete.
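Lemma 2 can be sanity-checked numerically. In the sketch below (an illustration of ours with arbitrary parameter values; the helper `build_H` is not from the paper), the spectrum of {{\bf{H}}}_{n} for even n must coincide with the union of the spectra of the two half-size blocks (2.4d) and (2.4e), since the similarity transformation preserves eigenvalues.

```python
import numpy as np

def build_H(n, a, b, c, d, xi, eta):
    # Assemble the heptadiagonal quasi-Toeplitz matrix (1.1).
    H = np.zeros((n, n))
    for i in range(n):
        H[i, i] = a
        for off, val in ((1, b), (2, c), (3, d)):
            if i + off < n:
                H[i, i + off] = H[i + off, i] = val
    H[0, 0] = H[-1, -1] = xi                       # perturbed corners
    H[0, 1] = H[1, 0] = H[-1, -2] = H[-2, -1] = eta
    return H

n, a, b, c, d, xi, eta = 10, 1.0, -2.0, 0.5, 0.3, 4.0, -1.0  # arbitrary, n even
t = np.arange(1, n + 1) * np.pi / (n + 1)
lam = a + 2*b*np.cos(t) + 2*c*np.cos(2*t) + 2*d*np.cos(3*t)  # (2.3)
s = lambda k: 2.0/np.sqrt(n + 1) * np.sin(k * np.pi / (n + 1))
odd, even = np.arange(1, n + 1, 2), np.arange(2, n + 1, 2)
x, y, v, w = s(odd), s(2*odd), s(even), s(2*even)            # (2.4a), (2.4b)
th, vt = c + xi - a, d + eta - b
Phi = np.diag(lam[odd - 1]) + th*np.outer(x, x) + vt*(np.outer(x, y) + np.outer(y, x))
Psi = np.diag(lam[even - 1]) + th*np.outer(v, v) + vt*(np.outer(v, w) + np.outer(w, v))
spec_H = np.sort(np.linalg.eigvalsh(build_H(n, a, b, c, d, xi, eta)))
spec_blocks = np.sort(np.concatenate([np.linalg.eigvalsh(Phi), np.linalg.eigvalsh(Psi)]))
gap = np.max(np.abs(spec_H - spec_blocks))
print(gap)
```

The two sorted spectra agree to machine precision for any real parameter choice.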
Remark 1. Let us point out that the decomposition for real heptadiagonal symmetric Toeplitz matrices established in Proposition 3.1 of [7] at the expense of the bordering technique is no longer useful for matrices having the shape (1.1). As a consequence, some results stated by those authors are necessarily extended here, in particular the referred decomposition and a formula to compute the determinant of real heptadiagonal symmetric Toeplitz matrices (Corollary 3.1 of [7]).
The orthogonal block diagonalization established in Lemma 2 will lead us to an explicit formula for the determinant of the matrix Hn.
Theorem 1. Let a,b,c,d,\xi,\eta be real numbers, \lambda_{k} , k = 1,\ldots,n , be given by (2.3), x_{k} = \sin\left(\frac{k\pi}{n+1}\right) , k = 1,\ldots,2n , and {{\bf{H}}}_{n} the n \times n matrix (1.1). If \theta := c + \xi - a , \vartheta := d + \eta - b and
(a) n is even, then
\begin{equation*} \begin{split} \det({{\bf{H}}}_{n}) = & \left[\prod\limits_{j = 1}^{\frac{n}{2}} \lambda_{2j} + \sum\limits_{k = 1}^{\frac{n}{2}} \frac{4\theta x_{2k}^{2} + 8\vartheta x_{2k}x_{4k}}{n+1} \prod\limits_{\substack{j = 1 \\ j \neq k}}^{\frac{n}{2}} \lambda_{2j} - \sum\limits_{1 \leqslant k < \ell \leqslant \frac{n}{2}} \frac{16\vartheta^{2}\left(x_{2k}x_{4\ell} - x_{2\ell}x_{4k}\right)^{2}}{(n+1)^{2}} \prod\limits_{\substack{j = 1 \\ j \neq k,\ell}}^{\frac{n}{2}} \lambda_{2j}\right] \\ & \times \left[\prod\limits_{j = 1}^{\frac{n}{2}} \lambda_{2j-1} + \sum\limits_{k = 1}^{\frac{n}{2}} \frac{4\theta x_{2k-1}^{2} + 8\vartheta x_{2k-1}x_{4k-2}}{n+1} \prod\limits_{\substack{j = 1 \\ j \neq k}}^{\frac{n}{2}} \lambda_{2j-1} - \sum\limits_{1 \leqslant k < \ell \leqslant \frac{n}{2}} \frac{16\vartheta^{2}\left(x_{2k-1}x_{4\ell-2} - x_{2\ell-1}x_{4k-2}\right)^{2}}{(n+1)^{2}} \prod\limits_{\substack{j = 1 \\ j \neq k,\ell}}^{\frac{n}{2}} \lambda_{2j-1}\right]. \end{split} \end{equation*}
(b) n is odd, then
\begin{equation*} \begin{split} \det({{\bf{H}}}_{n}) = & \left[\prod\limits_{j = 1}^{\frac{n-1}{2}} \lambda_{2j} + \sum\limits_{k = 1}^{\frac{n-1}{2}} \frac{4\theta x_{2k}^{2} + 8\vartheta x_{2k}x_{4k}}{n+1} \prod\limits_{\substack{j = 1 \\ j \neq k}}^{\frac{n-1}{2}} \lambda_{2j} - \sum\limits_{1 \leqslant k < \ell \leqslant \frac{n-1}{2}} \frac{16\vartheta^{2}\left(x_{2k}x_{4\ell} - x_{2\ell}x_{4k}\right)^{2}}{(n+1)^{2}} \prod\limits_{\substack{j = 1 \\ j \neq k,\ell}}^{\frac{n-1}{2}} \lambda_{2j}\right] \\ & \times \left[\prod\limits_{j = 1}^{\frac{n+1}{2}} \lambda_{2j-1} + \sum\limits_{k = 1}^{\frac{n+1}{2}} \frac{4\theta x_{2k-1}^{2} + 8\vartheta x_{2k-1}x_{4k-2}}{n+1} \prod\limits_{\substack{j = 1 \\ j \neq k}}^{\frac{n+1}{2}} \lambda_{2j-1} - \sum\limits_{1 \leqslant k < \ell \leqslant \frac{n+1}{2}} \frac{16\vartheta^{2}\left(x_{2k-1}x_{4\ell-2} - x_{2\ell-1}x_{4k-2}\right)^{2}}{(n+1)^{2}} \prod\limits_{\substack{j = 1 \\ j \neq k,\ell}}^{\frac{n+1}{2}} \lambda_{2j-1}\right]. \end{split} \end{equation*}
Proof. Since both assertions can be proven in the same way, we only prove (a). Consider a,b,c,d,\xi,\eta real numbers, x_{k} = \sin\left(\frac{k\pi}{n+1}\right) , k = 1,\ldots,2n , \lambda_{k} , k = 1,\ldots,n , as given by (2.3), \theta := c + \xi - a , \vartheta := d + \eta - b , and the notations used in Lemma 2. The determinant formula for block-triangular matrices (see [21], page 185) and Lemma 2 ensure \det({{\bf{H}}}_{n}) = \det({\boldsymbol{\Phi}}_{\frac{n}{2}})\det({\boldsymbol{\Psi}}_{\frac{n}{2}}) . We shall first assume \lambda_{k} \neq 0 for all k = 1,\ldots,n ,
\begin{equation} \frac{4\theta}{n+1} \sum\limits_{k = 1}^{\frac{n}{2}} \frac{x_{2k-1}^{2}}{\lambda_{2k-1}} \neq -1 \end{equation} | (3.1a)
\begin{equation} \frac{4\theta}{n+1} \sum\limits_{k = 1}^{\frac{n}{2}} \frac{x_{2k-1}^{2}}{\lambda_{2k-1}} + \frac{4\vartheta}{n+1} \sum\limits_{k = 1}^{\frac{n}{2}} \frac{x_{2k-1}x_{4k-2}}{\lambda_{2k-1}} \neq -1 \end{equation} | (3.1b)
\begin{equation} \sum\limits_{k = 1}^{\frac{n}{2}} \frac{4\theta x_{2k-1}^{2} + 8\vartheta x_{2k-1}x_{4k-2}}{(n+1)\lambda_{2k-1}} - \frac{16\vartheta^{2}}{(n+1)^{2}} \sum\limits_{1 \leqslant k < \ell \leqslant \frac{n}{2}} \frac{\left(x_{2k-1}x_{4\ell-2} - x_{2\ell-1}x_{4k-2}\right)^{2}}{\lambda_{2k-1}\lambda_{2\ell-1}} \neq -1 \end{equation} | (3.1c)
and
\begin{equation} \frac{4\theta}{n+1} \sum\limits_{k = 1}^{\frac{n}{2}} \frac{x_{2k}^{2}}{\lambda_{2k}} \neq -1 \end{equation} | (3.2a)
\begin{equation} \frac{4\theta}{n+1} \sum\limits_{k = 1}^{\frac{n}{2}} \frac{x_{2k}^{2}}{\lambda_{2k}} + \frac{4\vartheta}{n+1} \sum\limits_{k = 1}^{\frac{n}{2}} \frac{x_{2k}x_{4k}}{\lambda_{2k}} \neq -1 \end{equation} | (3.2b)
\begin{equation} \sum\limits_{k = 1}^{\frac{n}{2}} \frac{4\theta x_{2k}^{2} + 8\vartheta x_{2k}x_{4k}}{(n+1)\lambda_{2k}} - \frac{16\vartheta^{2}}{(n+1)^{2}} \sum\limits_{1 \leqslant k < \ell \leqslant \frac{n}{2}} \frac{\left(x_{2k}x_{4\ell} - x_{2\ell}x_{4k}\right)^{2}}{\lambda_{2k}\lambda_{2\ell}} \neq -1. \end{equation} | (3.2c)
Putting {\boldsymbol{\Upsilon}}_{\frac{n}{2}} := {{\mathrm{diag}}}(\lambda_{1},\lambda_{3},\ldots,\lambda_{n-1}) and {\boldsymbol{\Delta}}_{\frac{n}{2}} := {{\mathrm{diag}}}(\lambda_{2},\lambda_{4},\ldots,\lambda_{n}) , we have
\begin{equation*} \begin{split} \det\left({\boldsymbol{\Phi}}_{\frac{n}{2}}\right) & = \det\left({\boldsymbol{\Upsilon}}_{\frac{n}{2}} + \theta{{\bf{x}}}{{\bf{x}}}^{\top} + \vartheta{{\bf{x}}}{{\bf{y}}}^{\top} + \vartheta{{\bf{y}}}{{\bf{x}}}^{\top}\right) \\ & = \left[1 + \theta{{\bf{x}}}^{\top}{\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1}{{\bf{x}}} + 2\vartheta{{\bf{x}}}^{\top}{\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1}{{\bf{y}}} + \vartheta^{2}\left({{\bf{x}}}^{\top}{\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1}{{\bf{y}}}\right)^{2} - \vartheta^{2}\left({{\bf{x}}}^{\top}{\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1}{{\bf{x}}}\right)\left({{\bf{y}}}^{\top}{\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1}{{\bf{y}}}\right)\right] \det\left({\boldsymbol{\Upsilon}}_{\frac{n}{2}}\right) \\ & = \left[\prod\limits_{j = 1}^{\frac{n}{2}} \lambda_{2j-1} + \sum\limits_{k = 1}^{\frac{n}{2}} \frac{4\theta x_{2k-1}^{2} + 8\vartheta x_{2k-1}x_{4k-2}}{n+1} \prod\limits_{\substack{j = 1 \\ j \neq k}}^{\frac{n}{2}} \lambda_{2j-1} - \sum\limits_{1 \leqslant k < \ell \leqslant \frac{n}{2}} \frac{16\vartheta^{2}\left(x_{2k-1}x_{4\ell-2} - x_{2\ell-1}x_{4k-2}\right)^{2}}{(n+1)^{2}} \prod\limits_{\substack{j = 1 \\ j \neq k,\ell}}^{\frac{n}{2}} \lambda_{2j-1}\right] \end{split} \end{equation*}
and
\begin{equation*} \begin{split} \det\left({\boldsymbol{\Psi}}_{\frac{n}{2}}\right) & = \det\left({\boldsymbol{\Delta}}_{\frac{n}{2}} + \theta{{\bf{v}}}{{\bf{v}}}^{\top} + \vartheta{{\bf{v}}}{{\bf{w}}}^{\top} + \vartheta{{\bf{w}}}{{\bf{v}}}^{\top}\right) \\ & = \left[1 + \theta{{\bf{v}}}^{\top}{\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1}{{\bf{v}}} + 2\vartheta{{\bf{v}}}^{\top}{\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1}{{\bf{w}}} + \vartheta^{2}\left({{\bf{v}}}^{\top}{\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1}{{\bf{w}}}\right)^{2} - \vartheta^{2}\left({{\bf{v}}}^{\top}{\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1}{{\bf{v}}}\right)\left({{\bf{w}}}^{\top}{\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1}{{\bf{w}}}\right)\right] \det\left({\boldsymbol{\Delta}}_{\frac{n}{2}}\right) \\ & = \left[\prod\limits_{j = 1}^{\frac{n}{2}} \lambda_{2j} + \sum\limits_{k = 1}^{\frac{n}{2}} \frac{4\theta x_{2k}^{2} + 8\vartheta x_{2k}x_{4k}}{n+1} \prod\limits_{\substack{j = 1 \\ j \neq k}}^{\frac{n}{2}} \lambda_{2j} - \sum\limits_{1 \leqslant k < \ell \leqslant \frac{n}{2}} \frac{16\vartheta^{2}\left(x_{2k}x_{4\ell} - x_{2\ell}x_{4k}\right)^{2}}{(n+1)^{2}} \prod\limits_{\substack{j = 1 \\ j \neq k,\ell}}^{\frac{n}{2}} \lambda_{2j}\right] \end{split} \end{equation*}
(see [29], pages 69 and 70), i.e.
\begin{equation} \begin{split} \det({{\bf{H}}}_{n}) = & \left[\prod\limits_{j = 1}^{\frac{n}{2}} \lambda_{2j} + \sum\limits_{k = 1}^{\frac{n}{2}} \frac{4\theta x_{2k}^{2} + 8\vartheta x_{2k}x_{4k}}{n+1} \prod\limits_{\substack{j = 1 \\ j \neq k}}^{\frac{n}{2}} \lambda_{2j} - \sum\limits_{1 \leqslant k < \ell \leqslant \frac{n}{2}} \frac{16\vartheta^{2}\left(x_{2k}x_{4\ell} - x_{2\ell}x_{4k}\right)^{2}}{(n+1)^{2}} \prod\limits_{\substack{j = 1 \\ j \neq k,\ell}}^{\frac{n}{2}} \lambda_{2j}\right] \\ & \times \left[\prod\limits_{j = 1}^{\frac{n}{2}} \lambda_{2j-1} + \sum\limits_{k = 1}^{\frac{n}{2}} \frac{4\theta x_{2k-1}^{2} + 8\vartheta x_{2k-1}x_{4k-2}}{n+1} \prod\limits_{\substack{j = 1 \\ j \neq k}}^{\frac{n}{2}} \lambda_{2j-1} - \sum\limits_{1 \leqslant k < \ell \leqslant \frac{n}{2}} \frac{16\vartheta^{2}\left(x_{2k-1}x_{4\ell-2} - x_{2\ell-1}x_{4k-2}\right)^{2}}{(n+1)^{2}} \prod\limits_{\substack{j = 1 \\ j \neq k,\ell}}^{\frac{n}{2}} \lambda_{2j-1}\right]. \end{split} \end{equation} | (3.3)
Since both sides of (3.3) are polynomials in the variables a,b,c,d,\xi,\eta and agree on the dense subset of the parameter space where conditions (3.1a)–(3.2c) and \lambda_{k} \neq 0 hold, these assumptions can be dropped by continuity, and (3.3) is valid in general.
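Theorem 1(a) is straightforward to implement. The following sketch (our own illustration, with an ad hoc matrix builder and arbitrary parameter values) evaluates the closed formula for even n and compares it with a direct determinant computation.

```python
import numpy as np

def build_H(n, a, b, c, d, xi, eta):
    # Assemble the matrix (1.1).
    H = np.zeros((n, n))
    for i in range(n):
        H[i, i] = a
        for off, val in ((1, b), (2, c), (3, d)):
            if i + off < n:
                H[i, i + off] = H[i + off, i] = val
    H[0, 0] = H[-1, -1] = xi
    H[0, 1] = H[1, 0] = H[-1, -2] = H[-2, -1] = eta
    return H

def det_formula_even(n, a, b, c, d, xi, eta):
    # Theorem 1(a): det(H_n) as a product of two explicit half-size factors.
    th, vt = c + xi - a, d + eta - b
    x = lambda k: np.sin(k * np.pi / (n + 1))
    t = np.arange(1, n + 1) * np.pi / (n + 1)
    lam = a + 2*b*np.cos(t) + 2*c*np.cos(2*t) + 2*d*np.cos(3*t)

    def factor(idx):
        lams = lam[idx - 1]
        m = len(idx)
        total = np.prod(lams)
        for k in range(m):
            total += (4*th*x(idx[k])**2 + 8*vt*x(idx[k])*x(2*idx[k])) / (n + 1) \
                     * np.prod(np.delete(lams, k))
        for k in range(m):
            for l in range(k + 1, m):
                cross = (x(idx[k])*x(2*idx[l]) - x(idx[l])*x(2*idx[k]))**2
                total -= 16*vt**2*cross/(n + 1)**2 * np.prod(np.delete(lams, [k, l]))
        return total

    return factor(np.arange(2, n + 1, 2)) * factor(np.arange(1, n + 1, 2))

n, a, b, c, d, xi, eta = 8, 1.0, -2.0, 0.5, 0.3, 4.0, -1.0
d1 = det_formula_even(n, a, b, c, d, xi, eta)
d2 = np.linalg.det(build_H(n, a, b, c, d, xi, eta))
print(d1, d2)
```

The two values agree up to rounding error; the odd case (b) differs only in the summation limits.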
Example 1. Consider the following symmetric quasi-Toeplitz matrix
\begin{equation*} {{\bf{T}}}_{n} = \left[ \begin{array}{cccccccc} \xi & b & c & 0 & \cdots & \cdots & \cdots & 0 \\ b & a & b & c & \ddots & & & \vdots \\ c & b & a & b & \ddots & \ddots & & \vdots \\ 0 & c & b & a & \ddots & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots & 0 \\ \vdots & & \ddots & \ddots & \ddots & a & b & c \\ \vdots & & & \ddots & \ddots & b & a & b \\ 0 & \cdots & \cdots & \cdots & 0 & c & b & \xi \end{array} \right] \end{equation*}
(when ξ=a, we have a pentadiagonal symmetric Toeplitz matrix). Let us notice that Theorem 3 of [12] cannot be employed to compute det(Tn). However, according to our Theorem 1 we get (with d=0, η=b and ϑ=0)
\begin{equation*} \det({{\bf{T}}}_{n}) = \begin{cases} \left[\prod\limits_{j = 1}^{\frac{n}{2}} \lambda_{2j} + \sum\limits_{k = 1}^{\frac{n}{2}} \frac{4(c + \xi - a)x_{2k}^{2}}{n+1} \prod\limits_{\substack{j = 1 \\ j \neq k}}^{\frac{n}{2}} \lambda_{2j}\right] \left[\prod\limits_{j = 1}^{\frac{n}{2}} \lambda_{2j-1} + \sum\limits_{k = 1}^{\frac{n}{2}} \frac{4(c + \xi - a)x_{2k-1}^{2}}{n+1} \prod\limits_{\substack{j = 1 \\ j \neq k}}^{\frac{n}{2}} \lambda_{2j-1}\right], & n \text{ even} \\[10pt] \left[\prod\limits_{j = 1}^{\frac{n-1}{2}} \lambda_{2j} + \sum\limits_{k = 1}^{\frac{n-1}{2}} \frac{4(c + \xi - a)x_{2k}^{2}}{n+1} \prod\limits_{\substack{j = 1 \\ j \neq k}}^{\frac{n-1}{2}} \lambda_{2j}\right] \left[\prod\limits_{j = 1}^{\frac{n+1}{2}} \lambda_{2j-1} + \sum\limits_{k = 1}^{\frac{n+1}{2}} \frac{4(c + \xi - a)x_{2k-1}^{2}}{n+1} \prod\limits_{\substack{j = 1 \\ j \neq k}}^{\frac{n+1}{2}} \lambda_{2j-1}\right], & n \text{ odd} \end{cases} \end{equation*}
where
\begin{equation*} \lambda_{k} = a + 2b\cos\left(\frac{k\pi}{n+1}\right) + 2c\cos\left(\frac{2k\pi}{n+1}\right), \quad k = 1,\ldots,n \end{equation*}
and x_{k} = \sin\left(\frac{k\pi}{n+1}\right) , k = 1,\ldots,2n . Moreover, if \xi = a - c in {{\bf{T}}}_{n} , then \det({{\bf{T}}}_{n}) simply turns into \lambda_{1}\lambda_{2}\cdots\lambda_{n} (let us stress that this includes the particular case c = 0 , i.e. the determinant of tridiagonal symmetric Toeplitz matrices).
The following lemma allows us to express the eigenvalues of the key matrices of this paper as the zeros of explicit rational functions, providing, additionally, explicit upper and lower bounds for each one. We will denote the Euclidean norm by \|\cdot\| .
Lemma 3. Let a,b,c,d,ξ,η be real numbers and λk, k=1,…,n be given by (2.3).
(a) If n is even,
ⅰ. x,y are given by (2.4a) and the eigenvalues of
\begin{equation} {{\mathrm{diag}}}\left(\lambda_{1},\lambda_{3},\ldots,\lambda_{n-1}\right) + (c + \xi - a){{\bf{x}}}{{\bf{x}}}^{\top} + (d + \eta - b){{\bf{x}}}{{\bf{y}}}^{\top} + (d + \eta - b){{\bf{y}}}{{\bf{x}}}^{\top} \end{equation} | (3.4a)
are not of the form a + 2b\cos\left[\frac{(2k-1)\pi}{n+1}\right] + 2c\cos\left[\frac{2(2k-1)\pi}{n+1}\right] + 2d\cos\left[\frac{3(2k-1)\pi}{n+1}\right] , k = 1,\ldots,\frac{n}{2} , then the eigenvalues of (3.4a) are precisely the zeros of the rational function
\begin{equation} f(t) = 1 + \frac{4}{n+1} \sum\limits_{k = 1}^{\frac{n}{2}} \frac{(c + \xi - a)\sin^{2}\left[\frac{(2k-1)\pi}{n+1}\right] + 2(d + \eta - b)\sin\left[\frac{(2k-1)\pi}{n+1}\right]\sin\left[\frac{(4k-2)\pi}{n+1}\right]}{\lambda_{2k-1} - t} - \frac{16(d + \eta - b)^{2}}{(n+1)^{2}} \sum\limits_{1 \leqslant k < \ell \leqslant \frac{n}{2}} \frac{\left\{\sin\left[\frac{(2k-1)\pi}{n+1}\right]\sin\left[\frac{(4\ell-2)\pi}{n+1}\right] - \sin\left[\frac{(4k-2)\pi}{n+1}\right]\sin\left[\frac{(2\ell-1)\pi}{n+1}\right]\right\}^{2}}{(\lambda_{2k-1} - t)(\lambda_{2\ell-1} - t)}. \end{equation} | (3.4b)
Moreover, if \mu_{1} \leqslant \mu_{2} \leqslant \ldots \leqslant \mu_{\frac{n}{2}} are the eigenvalues of (3.4a) and \lambda_{\tau(1)} \leqslant \lambda_{\tau(3)} \leqslant \ldots \leqslant \lambda_{\tau(n-1)} are arranged in a nondecreasing order by some bijection \tau defined on \{1,3,\ldots,n-1\} , then
\begin{equation} \lambda_{\tau(2k-1)} + \frac{(c + \xi - a) - \sqrt{(c + \xi - a)^{2} + 4(d + \eta - b)^{2}}}{2} \leqslant \mu_{k} \leqslant \lambda_{\tau(2k-1)} + \frac{(c + \xi - a) + \sqrt{(c + \xi - a)^{2} + 4(d + \eta - b)^{2}}}{2} \end{equation} | (3.4c)
for each k = 1,\ldots,\frac{n}{2} .
ⅱ. v,w are given by (2.4b) and the eigenvalues of
\begin{equation} {{\mathrm{diag}}}\left(\lambda_{2},\lambda_{4},\ldots,\lambda_{n}\right) + (c + \xi - a){{\bf{v}}}{{\bf{v}}}^{\top} + (d + \eta - b){{\bf{v}}}{{\bf{w}}}^{\top} + (d + \eta - b){{\bf{w}}}{{\bf{v}}}^{\top} \end{equation} | (3.5a)
are not of the form a + 2b\cos\left(\frac{2k\pi}{n+1}\right) + 2c\cos\left(\frac{4k\pi}{n+1}\right) + 2d\cos\left(\frac{6k\pi}{n+1}\right) , k = 1,\ldots,\frac{n}{2} , then the eigenvalues of (3.5a) are precisely the zeros of the rational function
\begin{equation} g(t) = 1 + \frac{4}{n+1} \sum\limits_{k = 1}^{\frac{n}{2}} \frac{(c + \xi - a)\sin^{2}\left(\frac{2k\pi}{n+1}\right) + 2(d + \eta - b)\sin\left(\frac{2k\pi}{n+1}\right)\sin\left(\frac{4k\pi}{n+1}\right)}{\lambda_{2k} - t} - \frac{16(d + \eta - b)^{2}}{(n+1)^{2}} \sum\limits_{1 \leqslant k < \ell \leqslant \frac{n}{2}} \frac{\left[\sin\left(\frac{2k\pi}{n+1}\right)\sin\left(\frac{4\ell\pi}{n+1}\right) - \sin\left(\frac{4k\pi}{n+1}\right)\sin\left(\frac{2\ell\pi}{n+1}\right)\right]^{2}}{(\lambda_{2k} - t)(\lambda_{2\ell} - t)}. \end{equation} | (3.5b)
Furthermore, if \nu_{1} \leqslant \nu_{2} \leqslant \ldots \leqslant \nu_{\frac{n}{2}} are the eigenvalues of (3.5a) and \lambda_{\sigma(2)} \leqslant \lambda_{\sigma(4)} \leqslant \ldots \leqslant \lambda_{\sigma(n)} are arranged in a nondecreasing order by some bijection \sigma defined on \{2,4,\ldots,n\} , then
\begin{equation} \lambda_{\sigma(2k)} + \frac{(c + \xi - a) - \sqrt{(c + \xi - a)^{2} + 4(d + \eta - b)^{2}}}{2} \leqslant \nu_{k} \leqslant \lambda_{\sigma(2k)} + \frac{(c + \xi - a) + \sqrt{(c + \xi - a)^{2} + 4(d + \eta - b)^{2}}}{2} \end{equation} | (3.5c)
for every k = 1,\ldots,\frac{n}{2} .
(b) If n is odd,
ⅰ. x,y are given by (2.5a) and the eigenvalues of
\begin{equation} {{\mathrm{diag}}}\left(\lambda_{1},\lambda_{3},\ldots,\lambda_{n}\right) + (c + \xi - a){{\bf{x}}}{{\bf{x}}}^{\top} + (d + \eta - b){{\bf{x}}}{{\bf{y}}}^{\top} + (d + \eta - b){{\bf{y}}}{{\bf{x}}}^{\top} \end{equation} | (3.6a)
are not of the form a + 2b\cos\left[\frac{(2k-1)\pi}{n+1}\right] + 2c\cos\left[\frac{2(2k-1)\pi}{n+1}\right] + 2d\cos\left[\frac{3(2k-1)\pi}{n+1}\right] , k = 1,\ldots,\frac{n+1}{2} , then the eigenvalues of (3.6a) are precisely the zeros of the rational function
\begin{equation} f(t) = 1 + \frac{4}{n+1} \sum\limits_{k = 1}^{\frac{n+1}{2}} \frac{(c + \xi - a)\sin^{2}\left[\frac{(2k-1)\pi}{n+1}\right] + 2(d + \eta - b)\sin\left[\frac{(2k-1)\pi}{n+1}\right]\sin\left[\frac{(4k-2)\pi}{n+1}\right]}{\lambda_{2k-1} - t} - \frac{16(d + \eta - b)^{2}}{(n+1)^{2}} \sum\limits_{1 \leqslant k < \ell \leqslant \frac{n+1}{2}} \frac{\left\{\sin\left[\frac{(2k-1)\pi}{n+1}\right]\sin\left[\frac{(4\ell-2)\pi}{n+1}\right] - \sin\left[\frac{(4k-2)\pi}{n+1}\right]\sin\left[\frac{(2\ell-1)\pi}{n+1}\right]\right\}^{2}}{(\lambda_{2k-1} - t)(\lambda_{2\ell-1} - t)}. \end{equation} | (3.6b)
Moreover, if \mu_{1} \leqslant \mu_{2} \leqslant \ldots \leqslant \mu_{\frac{n+1}{2}} are the eigenvalues of (3.6a) and \lambda_{\tau(1)} \leqslant \lambda_{\tau(3)} \leqslant \ldots \leqslant \lambda_{\tau(n)} are arranged in a nondecreasing order by some bijection \tau defined on \{1,3,\ldots,n\} , then
\begin{equation} \lambda_{\tau(2k-1)} + \frac{(c + \xi - a) - \sqrt{(c + \xi - a)^{2} + 4(d + \eta - b)^{2}}}{2} \leqslant \mu_{k} \leqslant \lambda_{\tau(2k-1)} + \frac{(c + \xi - a) + \sqrt{(c + \xi - a)^{2} + 4(d + \eta - b)^{2}}}{2} \end{equation} | (3.6c)
for any k = 1,\ldots,\frac{n+1}{2} .
ⅱ. v,w are given by (2.5b) and the eigenvalues of
\begin{equation} {{\mathrm{diag}}}\left(\lambda_{2},\lambda_{4},\ldots,\lambda_{n-1}\right) + (c + \xi - a){{\bf{v}}}{{\bf{v}}}^{\top} + (d + \eta - b){{\bf{v}}}{{\bf{w}}}^{\top} + (d + \eta - b){{\bf{w}}}{{\bf{v}}}^{\top} \end{equation} | (3.7a)
are not of the form a + 2b\cos\left(\frac{2k\pi}{n+1}\right) + 2c\cos\left(\frac{4k\pi}{n+1}\right) + 2d\cos\left(\frac{6k\pi}{n+1}\right) , k = 1,\ldots,\frac{n-1}{2} , then the eigenvalues of (3.7a) are precisely the zeros of the rational function
\begin{equation} g(t) = 1 + \frac{4}{n+1} \sum\limits_{k = 1}^{\frac{n-1}{2}} \frac{(c + \xi - a)\sin^{2}\left(\frac{2k\pi}{n+1}\right) + 2(d + \eta - b)\sin\left(\frac{2k\pi}{n+1}\right)\sin\left(\frac{4k\pi}{n+1}\right)}{\lambda_{2k} - t} - \frac{16(d + \eta - b)^{2}}{(n+1)^{2}} \sum\limits_{1 \leqslant k < \ell \leqslant \frac{n-1}{2}} \frac{\left[\sin\left(\frac{2k\pi}{n+1}\right)\sin\left(\frac{4\ell\pi}{n+1}\right) - \sin\left(\frac{4k\pi}{n+1}\right)\sin\left(\frac{2\ell\pi}{n+1}\right)\right]^{2}}{(\lambda_{2k} - t)(\lambda_{2\ell} - t)}. \end{equation} | (3.7b)
Furthermore, if \nu_{1} \leqslant \nu_{2} \leqslant \ldots \leqslant \nu_{\frac{n-1}{2}} are the eigenvalues of (3.7a) and \lambda_{\sigma(2)} \leqslant \lambda_{\sigma(4)} \leqslant \ldots \leqslant \lambda_{\sigma(n-1)} are arranged in a nondecreasing order by some bijection \sigma defined on \{2,4,\ldots,n-1\} , then
\begin{equation} \lambda_{\sigma(2k)} + \frac{(c + \xi - a) - \sqrt{(c + \xi - a)^{2} + 4(d + \eta - b)^{2}}}{2} \leqslant \nu_{k} \leqslant \lambda_{\sigma(2k)} + \frac{(c + \xi - a) + \sqrt{(c + \xi - a)^{2} + 4(d + \eta - b)^{2}}}{2} \end{equation} | (3.7c)
for all k = 1,\ldots,\frac{n-1}{2} .
Proof. Suppose real numbers a,b,c,d,ξ,η, λk, k=1,…,n given by (2.3) and put θ:=c+ξ−a, ϑ:=d+η−b. We shall denote by S(k,m) the collection of all k-element subsets of {1,2,…,m} written in increasing order; additionally, for any rectangular matrix M, we shall indicate by det(M[I,J]) the minor determined by the subsets I={i1<i2<…<ik} and J={j1<j2<…<jk}. Supposing θ≠0,
\begin{equation*} {{\bf{X}}} = \left[ \begin{array}{cccc} \frac{2\sqrt{\theta}}{\sqrt{n+1}}\sin\left(\frac{\pi}{n+1}\right) & \frac{2\sqrt{\theta}}{\sqrt{n+1}}\sin\left(\frac{3\pi}{n+1}\right) & \cdots & \frac{2\sqrt{\theta}}{\sqrt{n+1}}\sin\left[\frac{(n-1)\pi}{n+1}\right] \\[6pt] \frac{2\sqrt{\theta}}{\sqrt{n+1}}\sin\left(\frac{\pi}{n+1}\right) & \frac{2\sqrt{\theta}}{\sqrt{n+1}}\sin\left(\frac{3\pi}{n+1}\right) & \cdots & \frac{2\sqrt{\theta}}{\sqrt{n+1}}\sin\left[\frac{(n-1)\pi}{n+1}\right] \\[6pt] \frac{2\vartheta}{\sqrt{\theta(n+1)}}\sin\left(\frac{2\pi}{n+1}\right) & \frac{2\vartheta}{\sqrt{\theta(n+1)}}\sin\left(\frac{6\pi}{n+1}\right) & \cdots & \frac{2\vartheta}{\sqrt{\theta(n+1)}}\sin\left[\frac{(2n-2)\pi}{n+1}\right] \end{array} \right] \end{equation*}
and
\begin{equation*} {{\bf{Y}}} = \left[ \begin{array}{cccc} \frac{2\sqrt{\theta}}{\sqrt{n+1}}\sin\left(\frac{\pi}{n+1}\right) & \frac{2\sqrt{\theta}}{\sqrt{n+1}}\sin\left(\frac{3\pi}{n+1}\right) & \cdots & \frac{2\sqrt{\theta}}{\sqrt{n+1}}\sin\left[\frac{(n-1)\pi}{n+1}\right] \\[6pt] \frac{2\vartheta}{\sqrt{\theta(n+1)}}\sin\left(\frac{2\pi}{n+1}\right) & \frac{2\vartheta}{\sqrt{\theta(n+1)}}\sin\left(\frac{6\pi}{n+1}\right) & \cdots & \frac{2\vartheta}{\sqrt{\theta(n+1)}}\sin\left[\frac{(2n-2)\pi}{n+1}\right] \\[6pt] \frac{2\sqrt{\theta}}{\sqrt{n+1}}\sin\left(\frac{\pi}{n+1}\right) & \frac{2\sqrt{\theta}}{\sqrt{n+1}}\sin\left(\frac{3\pi}{n+1}\right) & \cdots & \frac{2\sqrt{\theta}}{\sqrt{n+1}}\sin\left[\frac{(n-1)\pi}{n+1}\right] \end{array} \right]. \end{equation*}
Theorem 1 of [2] ensures that ζ is an eigenvalue of (3.4a) if, and only if,
\begin{equation*} 1 + \sum\limits_{k = 1}^{\frac{n}{2}} \sum\limits_{J \in S(k,\frac{n}{2})} \sum\limits_{I \in S(k,3)} \frac{\det\left({{\bf{X}}}[I,J]\right)\det\left({{\bf{Y}}}[I,J]\right)}{\prod_{j \in J}\left(\lambda_{2j-1} - \zeta\right)} = 0, \end{equation*}
provided that ζ is not an eigenvalue of diag(λ1,λ3,…,λn−1). Since
\begin{equation*} \begin{split} 1 + \sum\limits_{k = 1}^{\frac{n}{2}} \sum\limits_{J \in S(k,\frac{n}{2})} \sum\limits_{I \in S(k,3)} \frac{\det\left({{\bf{X}}}[I,J]\right)\det\left({{\bf{Y}}}[I,J]\right)}{\prod_{j \in J}\left(\lambda_{2j-1} - \zeta\right)} = &\ 1 + \frac{4}{n+1} \sum\limits_{k = 1}^{\frac{n}{2}} \frac{\theta\sin^{2}\left[\frac{(2k-1)\pi}{n+1}\right] + 2\vartheta\sin\left[\frac{(2k-1)\pi}{n+1}\right]\sin\left[\frac{(4k-2)\pi}{n+1}\right]}{\lambda_{2k-1} - \zeta} \\ & - \frac{16\vartheta^{2}}{(n+1)^{2}} \sum\limits_{1 \leqslant k < \ell \leqslant \frac{n}{2}} \frac{\left\{\sin\left[\frac{(2k-1)\pi}{n+1}\right]\sin\left[\frac{(4\ell-2)\pi}{n+1}\right] - \sin\left[\frac{(4k-2)\pi}{n+1}\right]\sin\left[\frac{(2\ell-1)\pi}{n+1}\right]\right\}^{2}}{(\lambda_{2k-1} - \zeta)(\lambda_{2\ell-1} - \zeta)}, \end{split} \end{equation*}
we obtain (3.4b). Considering now θ=0 and setting
\begin{equation*} {{\bf{X}}} = \left[ \begin{array}{cccc} \frac{2\sqrt{\vartheta}}{\sqrt{n+1}}\sin\left(\frac{\pi}{n+1}\right) & \frac{2\sqrt{\vartheta}}{\sqrt{n+1}}\sin\left(\frac{3\pi}{n+1}\right) & \cdots & \frac{2\sqrt{\vartheta}}{\sqrt{n+1}}\sin\left[\frac{(n-1)\pi}{n+1}\right] \\[6pt] \frac{2\sqrt{\vartheta}}{\sqrt{n+1}}\sin\left(\frac{2\pi}{n+1}\right) & \frac{2\sqrt{\vartheta}}{\sqrt{n+1}}\sin\left(\frac{6\pi}{n+1}\right) & \cdots & \frac{2\sqrt{\vartheta}}{\sqrt{n+1}}\sin\left[\frac{(2n-2)\pi}{n+1}\right] \end{array} \right], \qquad {{\bf{Y}}} = \left[ \begin{array}{cccc} \frac{2\sqrt{\vartheta}}{\sqrt{n+1}}\sin\left(\frac{2\pi}{n+1}\right) & \frac{2\sqrt{\vartheta}}{\sqrt{n+1}}\sin\left(\frac{6\pi}{n+1}\right) & \cdots & \frac{2\sqrt{\vartheta}}{\sqrt{n+1}}\sin\left[\frac{(2n-2)\pi}{n+1}\right] \\[6pt] \frac{2\sqrt{\vartheta}}{\sqrt{n+1}}\sin\left(\frac{\pi}{n+1}\right) & \frac{2\sqrt{\vartheta}}{\sqrt{n+1}}\sin\left(\frac{3\pi}{n+1}\right) & \cdots & \frac{2\sqrt{\vartheta}}{\sqrt{n+1}}\sin\left[\frac{(n-1)\pi}{n+1}\right] \end{array} \right] \end{equation*}
we still have that ζ is an eigenvalue of (3.4a) if, and only if,
\begin{equation*} 1 + \sum\limits_{k = 1}^{\frac{n}{2}} \sum\limits_{J \in S(k,\frac{n}{2})} \sum\limits_{I \in S(k,2)} \frac{\det\left({{\bf{X}}}[I,J]\right)\det\left({{\bf{Y}}}[I,J]\right)}{\prod_{j \in J}\left(\lambda_{2j-1} - \zeta\right)} = 0, \end{equation*}
assuming that ζ is not an eigenvalue of diag(λ1,λ3,…,λn−1). Hence,
\begin{equation*} \begin{split} 1 + \sum\limits_{k = 1}^{\frac{n}{2}} \sum\limits_{J \in S(k,\frac{n}{2})} \sum\limits_{I \in S(k,2)} \frac{\det\left({{\bf{X}}}[I,J]\right)\det\left({{\bf{Y}}}[I,J]\right)}{\prod_{j \in J}\left(\lambda_{2j-1} - \zeta\right)} = &\ 1 + \frac{8\vartheta}{n+1} \sum\limits_{k = 1}^{\frac{n}{2}} \frac{\sin\left[\frac{(2k-1)\pi}{n+1}\right]\sin\left[\frac{(4k-2)\pi}{n+1}\right]}{\lambda_{2k-1} - \zeta} \\ & - \frac{16\vartheta^{2}}{(n+1)^{2}} \sum\limits_{1 \leqslant k < \ell \leqslant \frac{n}{2}} \frac{\left\{\sin\left[\frac{(2k-1)\pi}{n+1}\right]\sin\left[\frac{(4\ell-2)\pi}{n+1}\right] - \sin\left[\frac{(4k-2)\pi}{n+1}\right]\sin\left[\frac{(2\ell-1)\pi}{n+1}\right]\right\}^{2}}{(\lambda_{2k-1} - \zeta)(\lambda_{2\ell-1} - \zeta)}, \end{split} \end{equation*}
and (3.4b) is established. Let \mu_{1} \leqslant \mu_{2} \leqslant \ldots \leqslant \mu_{\frac{n}{2}} be the eigenvalues of (3.4a) and \lambda_{\tau(1)} \leqslant \lambda_{\tau(3)} \leqslant \ldots \leqslant \lambda_{\tau(n-1)} be arranged in a nondecreasing order by some bijection \tau defined on \{1,3,\ldots,n-1\} . Thus,
\begin{equation} \lambda_{\tau(2k-1)} + \lambda_{\min}\left(\theta{{\bf{x}}}{{\bf{x}}}^{\top} + \vartheta{{\bf{x}}}{{\bf{y}}}^{\top} + \vartheta{{\bf{y}}}{{\bf{x}}}^{\top}\right) \leqslant \mu_{k} \leqslant \lambda_{\tau(2k-1)} + \lambda_{\max}\left(\theta{{\bf{x}}}{{\bf{x}}}^{\top} + \vartheta{{\bf{x}}}{{\bf{y}}}^{\top} + \vartheta{{\bf{y}}}{{\bf{x}}}^{\top}\right) \end{equation} | (3.8)
for each k = 1,\ldots,\frac{n}{2} (see [23], page 242). Since the characteristic polynomial of \theta{{\bf{x}}}{{\bf{x}}}^{\top} + \vartheta{{\bf{x}}}{{\bf{y}}}^{\top} + \vartheta{{\bf{y}}}{{\bf{x}}}^{\top} is
\begin{equation*} \det\left[t{{\bf{I}}}_{\frac{n}{2}} - \theta{{\bf{x}}}{{\bf{x}}}^{\top} - \vartheta{{\bf{x}}}{{\bf{y}}}^{\top} - \vartheta{{\bf{y}}}{{\bf{x}}}^{\top}\right] = t^{\frac{n}{2}-2}\left[t^{2} - \left(\theta{{\bf{x}}}^{\top}{{\bf{x}}} + \vartheta{{\bf{y}}}^{\top}{{\bf{x}}} + \vartheta{{\bf{x}}}^{\top}{{\bf{y}}}\right)t + \vartheta^{2}\left({{\bf{x}}}^{\top}{{\bf{y}}}\right)\left({{\bf{y}}}^{\top}{{\bf{x}}}\right) - \vartheta^{2}\left({{\bf{x}}}^{\top}{{\bf{x}}}\right)\left({{\bf{y}}}^{\top}{{\bf{y}}}\right)\right] = t^{\frac{n}{2}-2}\left\{t^{2} - \left(\theta\|{{\bf{x}}}\|^{2} + 2\vartheta{{\bf{x}}}^{\top}{{\bf{y}}}\right)t + \vartheta^{2}\left[\left({{\bf{x}}}^{\top}{{\bf{y}}}\right)^{2} - \|{{\bf{x}}}\|^{2}\|{{\bf{y}}}\|^{2}\right]\right\}, \end{equation*}
we have that its spectrum is
\begin{equation} {{\mathrm{Spec}}}\left(\theta{{\bf{x}}}{{\bf{x}}}^{\top} + \vartheta{{\bf{x}}}{{\bf{y}}}^{\top} + \vartheta{{\bf{y}}}{{\bf{x}}}^{\top}\right) = \{0, \alpha_{-}, \alpha_{+}\}, \end{equation} | (3.9)
where \alpha_{\pm} := \frac{\theta\|{{\bf{x}}}\|^{2} + 2\vartheta{{\bf{x}}}^{\top}{{\bf{y}}} \pm \sqrt{\left(\theta\|{{\bf{x}}}\|^{2} + 2\vartheta{{\bf{x}}}^{\top}{{\bf{y}}}\right)^{2} - 4\vartheta^{2}\left[\left({{\bf{x}}}^{\top}{{\bf{y}}}\right)^{2} - \|{{\bf{x}}}\|^{2}\|{{\bf{y}}}\|^{2}\right]}}{2} . From the identities
\begin{equation*} \sum\limits_{k = 1}^{\frac{n}{2}} \sin^{2}\left[\frac{(2k-1)\pi}{n+1}\right] = \frac{n+1}{4} = \sum\limits_{k = 1}^{\frac{n}{2}} \sin^{2}\left[\frac{(4k-2)\pi}{n+1}\right], \qquad \sum\limits_{k = 1}^{\frac{n}{2}} \sin\left[\frac{(2k-1)\pi}{n+1}\right]\sin\left[\frac{(4k-2)\pi}{n+1}\right] = 0, \end{equation*}
it follows that \|{{\bf{x}}}\| = \|{{\bf{y}}}\| = 1 and {{\bf{x}}}^{\top}{{\bf{y}}} = 0 . Hence, (3.8) and (3.9) yield (3.4c). The proofs of the remaining assertions proceed in the same way and so will be omitted.
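As a numerical illustration of Lemma 3(a)ⅰ (a sketch of ours with arbitrary parameter values, not part of the proof), one can build the block (3.4a) explicitly, evaluate the secular function f of (3.4b) at its eigenvalues, and check that f vanishes there and that the eigenvalues obey the bounds (3.4c).

```python
import numpy as np

n, a, b, c, d, xi, eta = 8, 1.0, -2.0, 0.5, 0.3, 4.0, -1.0  # arbitrary, n even
th, vt = c + xi - a, d + eta - b
odd = np.arange(1, n + 1, 2)
t = odd * np.pi / (n + 1)
lam = a + 2*b*np.cos(t) + 2*c*np.cos(2*t) + 2*d*np.cos(3*t)  # lambda_{2k-1}
sn = lambda k: np.sin(k * np.pi / (n + 1))
x = 2.0/np.sqrt(n + 1) * sn(odd)
y = 2.0/np.sqrt(n + 1) * sn(2*odd)
# The block (3.4a)
Phi = np.diag(lam) + th*np.outer(x, x) + vt*(np.outer(x, y) + np.outer(y, x))

def f(tv):
    # Secular function (3.4b)
    val = 1.0 + 4.0/(n + 1) * np.sum(
        (th*sn(odd)**2 + 2*vt*sn(odd)*sn(2*odd)) / (lam - tv))
    m = len(odd)
    for k in range(m):
        for l in range(k + 1, m):
            num = (sn(odd[k])*sn(2*odd[l]) - sn(2*odd[k])*sn(odd[l]))**2
            val -= 16*vt**2/(n + 1)**2 * num / ((lam[k] - tv)*(lam[l] - tv))
    return val

mus = np.linalg.eigvalsh(Phi)                         # sorted eigenvalues of (3.4a)
resid = max(abs(f(m)) for m in mus)                   # f should vanish at each
lo = np.sort(lam) + (th - np.hypot(th, 2*vt))/2       # lower bounds (3.4c)
hi = np.sort(lam) + (th + np.hypot(th, 2*vt))/2       # upper bounds (3.4c)
ok = bool(np.all(lo - 1e-9 <= mus) and np.all(mus <= hi + 1e-9))
print(resid, ok)
```

Here `np.hypot(th, 2*vt)` computes \sqrt{\theta^{2} + 4\vartheta^{2}} , so `lo` and `hi` are exactly the two-sided bounds of (3.4c) applied to the sorted \lambda 's.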
The next statement allows us to locate the eigenvalues of Hn, providing also explicit bounds for each of them.
Theorem 2. Let a,b,c,d,ξ,η be real numbers, λk, k=1,…,n be given by (2.3) and Hn be the n×n matrix (1.1).
(a) If n is even, the eigenvalues of {\boldsymbol{\Phi}}_{\frac{n}{2}} in (2.4d) are not of the form \lambda_{2k-1} , k = 1,\ldots,\frac{n}{2} , and the eigenvalues of {\boldsymbol{\Psi}}_{\frac{n}{2}} in (2.4e) are not of the form \lambda_{2k} , k = 1,\ldots,\frac{n}{2} , then the eigenvalues of {{\bf{H}}}_{n} are precisely the zeros of the rational functions f(t) and g(t) given by (3.4b) and (3.5b), respectively. Moreover, if \mu_{1} \leqslant \mu_{2} \leqslant \ldots \leqslant \mu_{\frac{n}{2}} are the zeros of f(t) and \nu_{1} \leqslant \nu_{2} \leqslant \ldots \leqslant \nu_{\frac{n}{2}} are the zeros of g(t) (counting multiplicities in both cases), then \mu_{k} , k = 1,\ldots,\frac{n}{2} , and \nu_{k} , k = 1,\ldots,\frac{n}{2} , satisfy (3.4c) and (3.5c), respectively.
(b) If n is odd, the eigenvalues of {\boldsymbol{\Phi}}_{\frac{n+1}{2}} in (2.5d) are not of the form \lambda_{2k-1} , k = 1,\ldots,\frac{n+1}{2} , and the eigenvalues of {\boldsymbol{\Psi}}_{\frac{n-1}{2}} in (2.5e) are not of the form \lambda_{2k} , k = 1,\ldots,\frac{n-1}{2} , then the eigenvalues of {{\bf{H}}}_{n} are precisely the zeros of the rational functions f(t) and g(t) given by (3.6b) and (3.7b), respectively. Furthermore, if \mu_{1} \leqslant \mu_{2} \leqslant \ldots \leqslant \mu_{\frac{n+1}{2}} are the zeros of f(t) and \nu_{1} \leqslant \nu_{2} \leqslant \ldots \leqslant \nu_{\frac{n-1}{2}} are the zeros of g(t) (counting multiplicities in both cases), then \mu_{k} , k = 1,\ldots,\frac{n+1}{2} , and \nu_{k} , k = 1,\ldots,\frac{n-1}{2} , satisfy (3.6c) and (3.7c), respectively.
Proof. Suppose a,b,c,d,ξ,η are real numbers and λk, k=1,…,n as given by (2.3).
(a) According to Lemma 2 and the determinant formula for block-triangular matrices (see [21], page 185), the characteristic polynomial of Hn for n even is
\begin{equation*} \det\left(t{{\bf{I}}}_{n} - {{\bf{H}}}_{n}\right) = \det\left(t{{\bf{I}}}_{\frac{n}{2}} - {\boldsymbol{\Phi}}_{\frac{n}{2}}\right)\det\left(t{{\bf{I}}}_{\frac{n}{2}} - {\boldsymbol{\Psi}}_{\frac{n}{2}}\right), \end{equation*}
where {\boldsymbol{\Phi}}_{\frac{n}{2}} and {\boldsymbol{\Psi}}_{\frac{n}{2}} are given by (2.4d) and (2.4e), respectively, so that the thesis is a direct consequence of Lemma 3.
(b) For n odd, we obtain
\begin{equation*} \det\left(t{{\bf{I}}}_{n} - {{\bf{H}}}_{n}\right) = \det\left(t{{\bf{I}}}_{\frac{n+1}{2}} - {\boldsymbol{\Phi}}_{\frac{n+1}{2}}\right)\det\left(t{{\bf{I}}}_{\frac{n-1}{2}} - {\boldsymbol{\Psi}}_{\frac{n-1}{2}}\right), \end{equation*}
where {\boldsymbol{\Phi}}_{\frac{n+1}{2}} and {\boldsymbol{\Psi}}_{\frac{n-1}{2}} are given by (2.5d) and (2.5e), respectively. The conclusion follows from Lemma 3.
From Geršgorin's theorem (see [23], Theorem 6.1.1), it can also be stated that all eigenvalues of {{\bf{H}}}_{n} ( n \geqslant 7 ) belong to [h_{\min}, h_{\max}] , where
\begin{equation*} h_{\min} := \min\left\{\xi - |c| - |d| - |\eta|,\ a - |b| - |c| - |d| - |\eta|,\ a - 2|b| - 2|c| - 2|d|\right\} \end{equation*}
and
\begin{equation*} h_{\max} := \max\left\{\xi + |c| + |d| + |\eta|,\ a + |b| + |c| + |d| + |\eta|,\ a + 2|b| + 2|c| + 2|d|\right\}. \end{equation*}
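This Geršgorin interval is cheap to verify numerically. In the sketch below (the matrix builder and parameter values are ours, for illustration only), every eigenvalue of {{\bf{H}}}_{n} must fall inside [h_{\min}, h_{\max}] .

```python
import numpy as np

def build_H(n, a, b, c, d, xi, eta):
    # Assemble the matrix (1.1).
    H = np.zeros((n, n))
    for i in range(n):
        H[i, i] = a
        for off, val in ((1, b), (2, c), (3, d)):
            if i + off < n:
                H[i, i + off] = H[i + off, i] = val
    H[0, 0] = H[-1, -1] = xi
    H[0, 1] = H[1, 0] = H[-1, -2] = H[-2, -1] = eta
    return H

n, a, b, c, d, xi, eta = 12, 0.0, -2.0, -1.0, 2.0, 9.0, -7.0
ev = np.linalg.eigvalsh(build_H(n, a, b, c, d, xi, eta))
# The three row types of (1.1) give three Gershgorin discs:
hmin = min(xi - abs(c) - abs(d) - abs(eta),
           a - abs(b) - abs(c) - abs(d) - abs(eta),
           a - 2*(abs(b) + abs(c) + abs(d)))
hmax = max(xi + abs(c) + abs(d) + abs(eta),
           a + abs(b) + abs(c) + abs(d) + abs(eta),
           a + 2*(abs(b) + abs(c) + abs(d)))
inside = bool(np.all((ev >= hmin - 1e-9) & (ev <= hmax + 1e-9)))
print(hmin, hmax, inside)
```

The discs of the remaining row types (rows 3, 4 and their mirror images) are contained in the interior-row disc, so the three terms above suffice.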
Further, all eigenvalues of the n×n heptadiagonal symmetric Toeplitz matrix
\begin{equation*} {{\mathrm{hepta}}}_{n}(d,c,b,a,b,c,d) = \left[ \begin{array}{ccccccccccc} a & b & c & d & 0 & \cdots & \cdots & \cdots & \cdots & \cdots & 0 \\ b & a & b & c & d & \ddots & & & & & \vdots \\ c & b & a & b & c & \ddots & \ddots & & & & \vdots \\ d & c & b & a & b & \ddots & \ddots & \ddots & & & \vdots \\ 0 & d & c & b & a & \ddots & \ddots & \ddots & \ddots & & \vdots \\ \vdots & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots & \vdots \\ \vdots & & \ddots & \ddots & \ddots & \ddots & a & b & c & d & 0 \\ \vdots & & & \ddots & \ddots & \ddots & b & a & b & c & d \\ \vdots & & & & \ddots & \ddots & c & b & a & b & c \\ \vdots & & & & & \ddots & d & c & b & a & b \\ 0 & \cdots & \cdots & \cdots & \cdots & \cdots & 0 & d & c & b & a \end{array} \right] \end{equation*}
are contained in the interval
\begin{equation*} \left[\min\limits_{-\pi \leqslant t \leqslant \pi} \varphi(t), \max\limits_{-\pi \leqslant t \leqslant \pi} \varphi(t)\right], \end{equation*}
where \varphi(t) = a + 2b\cos(t) + 2c\cos(2t) + 2d\cos(3t) , -\pi \leqslant t \leqslant \pi (see [18], Theorem 6.1). As an illustration, the eigenvalues of {{\bf{H}}}_{n} and those of {{\mathrm{hepta}}}_{n}(d,c,b,a,b,c,d) with a = 0 , b = -2 , c = -1 , d = 2 , \xi = 9 , \eta = -7 are depicted in the complex plane for increasing values of n.
A distinctive feature of the blue graphics is the existence of two outliers for {{\bf{H}}}_{n} , i.e. eigenvalues that do not belong to the interval [-\frac{154}{27}, 7] , which seem to reduce to just one as n \to \infty . This numerical experiment also reveals that, as the matrix size increases, the spectrum of the quasi-Toeplitz matrix {{\bf{H}}}_{n} approaches the spectrum of the Toeplitz matrix {{\mathrm{hepta}}}_{n}(2,-1,-2,0,-2,-1,2) plus the outliers; this scenario is consistent with the study presented in [6].
Remark 2. In [12] and [32], similar localization results were established for the eigenvalues of symmetric Toeplitz matrices (pentadiagonal and heptadiagonal, respectively). The referred papers make use of Chebyshev polynomials and their properties to obtain rational functions with a more concise form. However, their statements do not cover the broader class of matrices (1.1).
The decomposition presented in Lemma 2 also allows us to compute eigenvectors for {{\bf{H}}}_{n} in (1.1).
Theorem 3. Let a,b,c,d,ξ,η be real numbers, λk, k=1,…,n be given by (2.3) and Hn be the n×n matrix (1.1).
(a) If n is even, {{\bf{S}}}_{n} is the n \times n matrix (2.1), {{\bf{P}}}_{n} is the n \times n permutation matrix (2.4c), the zeros \mu_{1},\ldots,\mu_{\frac{n}{2}} of (3.4b) are not of the form \lambda_{2k-1} , k = 1,\ldots,\frac{n}{2} , the zeros \nu_{1},\ldots,\nu_{\frac{n}{2}} of (3.5b) are not of the form \lambda_{2k} , k = 1,\ldots,\frac{n}{2} ,
\begin{equation*} \sum\limits_{j = 1}^{\frac{n}{2}} \left\{\frac{(c + \xi - a)\sin^{2}\left[\frac{(2j-1)\pi}{n+1}\right] + (d + \eta - b)\sin\left[\frac{(2j-1)\pi}{n+1}\right]\sin\left[\frac{(4j-2)\pi}{n+1}\right]}{\mu_{k} - \lambda_{2j-1}}\right\} \neq \frac{n+1}{4}, \qquad \sum\limits_{j = 1}^{\frac{n}{2}} \left[\frac{(c + \xi - a)\sin\left(\frac{2j\pi}{n+1}\right)\sin\left(\frac{4j\pi}{n+1}\right) + (d + \eta - b)\sin^{2}\left(\frac{4j\pi}{n+1}\right)}{\nu_{k} - \lambda_{2j}}\right] \neq \frac{n+1}{4} \end{equation*}
and b≠d+η, then
\begin{equation} {{\bf{S}}}_{n} {{\bf{P}}}_{n} \left[ \begin{array}{c} \frac{2\sin\left(\frac{2\pi}{n+1}\right)}{\sqrt{n+1}(\mu_{k} - \lambda_{1})} + \frac{8 \underset{j = 1}{\overset{\frac{n}{2}}{\sum}} \left\{\frac{(c + \xi - a)\sin\left[\frac{(2j-1)\pi}{n+1}\right]\sin\left[\frac{(4j-2)\pi}{n+1}\right] + (d + \eta - b)\sin^{2}\left[\frac{(4j-2)\pi}{n+1}\right]}{\mu_{k} - \lambda_{2j-1}}\right\}}{n + 1 - 4 \underset{j = 1}{\overset{\frac{n}{2}}{\sum}} \left\{\frac{(c + \xi - a)\sin^{2}\left[\frac{(2j-1)\pi}{n+1}\right] + (d + \eta - b)\sin\left[\frac{(2j-1)\pi}{n+1}\right]\sin\left[\frac{(4j-2)\pi}{n+1}\right]}{\mu_{k} - \lambda_{2j-1}}\right\}} \frac{\sin\left(\frac{\pi}{n+1}\right)}{\sqrt{n+1}(\mu_{k} - \lambda_{1})} \\[25pt] \frac{2\sin\left(\frac{6\pi}{n+1}\right)}{\sqrt{n+1}(\mu_{k} - \lambda_{3})} + \frac{8 \underset{j = 1}{\overset{\frac{n}{2}}{\sum}} \left\{\frac{(c + \xi - a)\sin\left[\frac{(2j-1)\pi}{n+1}\right]\sin\left[\frac{(4j-2)\pi}{n+1}\right] + (d + \eta - b)\sin^{2}\left[\frac{(4j-2)\pi}{n+1}\right]}{\mu_{k} - \lambda_{2j-1}}\right\}}{n + 1 - 4 \underset{j = 1}{\overset{\frac{n}{2}}{\sum}} \left\{\frac{(c + \xi - a)\sin^{2}\left[\frac{(2j-1)\pi}{n+1}\right] + (d + \eta - b)\sin\left[\frac{(2j-1)\pi}{n+1}\right]\sin\left[\frac{(4j-2)\pi}{n+1}\right]}{\mu_{k} - \lambda_{2j-1}}\right\}} \frac{\sin\left(\frac{3\pi}{n+1}\right)}{\sqrt{n+1}(\mu_{k} - \lambda_{3})} \\ \vdots \\[3pt] \frac{2\sin\left[\frac{(2n-2)\pi}{n+1}\right]}{\sqrt{n+1}(\mu_{k} - \lambda_{n-1})} + \frac{8 \underset{j = 1}{\overset{\frac{n}{2}}{\sum}} \left\{\frac{(c + \xi - a)\sin\left[\frac{(2j-1)\pi}{n+1}\right]\sin\left[\frac{(4j-2)\pi}{n+1}\right] + (d + \eta - b)\sin^{2}\left[\frac{(4j-2)\pi}{n+1}\right]}{\mu_{k} - \lambda_{2j-1}}\right\}}{n + 1 - 4 \underset{j = 1}{\overset{\frac{n}{2}}{\sum}} \left\{\frac{(c + \xi - a)\sin^{2}\left[\frac{(2j-1)\pi}{n+1}\right] + (d + \eta - b)\sin\left[\frac{(2j-1)\pi}{n+1}\right]\sin\left[\frac{(4j-2)\pi}{n+1}\right]}{\mu_{k} - \lambda_{2j-1}}\right\}} \frac{\sin\left[\frac{(n-1)\pi}{n+1}\right]}{\sqrt{n+1}(\mu_{k} - \lambda_{n-1})} \\[15pt] 0 \\ \vdots \\[3pt] 0 \end{array} \right] \end{equation} | (3.10a)
is an eigenvector of {{\bf{H}}}_{n} associated to \mu_{k} , k = 1,\ldots,\frac{n}{2} , and
\begin{equation} {{\bf{S}}}_{n} {{\bf{P}}}_{n} \left[ \begin{array}{c} 0 \\ \vdots \\[3pt] 0 \\ \frac{2\sin\left(\frac{4\pi}{n+1}\right)}{\sqrt{n+1}(\nu_{k} - \lambda_{2})} + \frac{8 \underset{j = 1}{\overset{\frac{n}{2}}{\sum}} \left[\frac{(c + \xi - a)\sin\left(\frac{2j\pi}{n+1}\right)\sin\left(\frac{4j\pi}{n+1}\right) + (d + \eta - b)\sin^{2}\left(\frac{4j\pi}{n+1}\right)}{\nu_{k} - \lambda_{2j}}\right]}{n + 1 - 4 \underset{j = 1}{\overset{\frac{n}{2}}{\sum}} \left[\frac{(c + \xi - a)\sin\left(\frac{2j\pi}{n+1}\right)\sin\left(\frac{4j\pi}{n+1}\right) + (d + \eta - b)\sin^{2}\left(\frac{4j\pi}{n+1}\right)}{\nu_{k} - \lambda_{2j}}\right]} \frac{\sin\left(\frac{2\pi}{n+1}\right)}{\sqrt{n+1}(\nu_{k} - \lambda_{2})} \\[20pt] \frac{2\sin\left(\frac{8\pi}{n+1}\right)}{\sqrt{n+1}(\nu_{k} - \lambda_{4})} + \frac{8 \underset{j = 1}{\overset{\frac{n}{2}}{\sum}} \left[\frac{(c + \xi - a)\sin\left(\frac{2j\pi}{n+1}\right)\sin\left(\frac{4j\pi}{n+1}\right) + (d + \eta - b)\sin^{2}\left(\frac{4j\pi}{n+1}\right)}{\nu_{k} - \lambda_{2j}}\right]}{n + 1 - 4 \underset{j = 1}{\overset{\frac{n}{2}}{\sum}} \left[\frac{(c + \xi - a)\sin\left(\frac{2j\pi}{n+1}\right)\sin\left(\frac{4j\pi}{n+1}\right) + (d + \eta - b)\sin^{2}\left(\frac{4j\pi}{n+1}\right)}{\nu_{k} - \lambda_{2j}}\right]} \frac{\sin\left(\frac{4\pi}{n+1}\right)}{\sqrt{n+1}(\nu_{k} - \lambda_{4})} \\ \vdots \\ \frac{2\sin\left(\frac{2n\pi}{n+1}\right)}{\sqrt{n+1}(\nu_{k} - \lambda_{n})} + \frac{8 \underset{j = 1}{\overset{\frac{n}{2}}{\sum}} \left[\frac{(c + \xi - a)\sin\left(\frac{2j\pi}{n+1}\right)\sin\left(\frac{4j\pi}{n+1}\right) + (d + \eta - b)\sin^{2}\left(\frac{4j\pi}{n+1}\right)}{\nu_{k} - \lambda_{2j}}\right]}{n + 1 - 4 \underset{j = 1}{\overset{\frac{n}{2}}{\sum}} \left[\frac{(c + \xi - a)\sin\left(\frac{2j\pi}{n+1}\right)\sin\left(\frac{4j\pi}{n+1}\right) + (d + \eta - b)\sin^{2}\left(\frac{4j\pi}{n+1}\right)}{\nu_{k} - \lambda_{2j}}\right]} \frac{\sin\left(\frac{n\pi}{n+1}\right)}{\sqrt{n+1}(\nu_{k} - \lambda_{n})} \end{array} \right] \end{equation} | (3.10b)
is an eigenvector of {{\bf{H}}}_{n} associated to \nu_{k} , k = 1, \ldots, \frac{n}{2} .
(b) If n is odd, {{\bf{S}}}_{n} is the n \times n matrix (2.1), {{\bf{P}}}_{n} is the n \times n permutation matrix (2.5c), the zeros \mu_{1}, \ldots, \mu_{\frac{n+1}{2}} of (3.6b) are not of the form \lambda_{2k-1} , k = 1, \ldots, \frac{n+1}{2} , the zeros \nu_{1}, \ldots, \nu_{\frac{n-1}{2}} of (3.7b) are not of the form \lambda_{2k} , k = 1, \ldots, \frac{n-1}{2} ,
\begin{equation*} \underset{j = 1}{\overset{\frac{n+1}{2}}{\sum}} \left\{\frac{(c + \xi - a) \sin^{2} \left[\frac{(2j - 1)\pi}{n + 1} \right] + (d + \eta - b) \sin \left[\frac{(2j - 1)\pi}{n + 1} \right] \sin \left[\frac{(4j - 2)\pi}{n + 1} \right]}{\mu_{k} - \lambda_{2j-1}} \right\} \neq \frac{n+1}{4}, \end{equation*} |
\begin{equation*} \underset{j = 1}{\overset{\frac{n-1}{2}}{\sum}} \left[\frac{(c + \xi - a) \sin \left(\frac{2j\pi}{n + 1} \right) \sin \left(\frac{4j\pi}{n + 1} \right) + (d + \eta - b) \sin^{2} \left(\frac{4j\pi}{n + 1} \right)}{\nu_{k} - \lambda_{2j}} \right] \neq \frac{n+1}{4} \end{equation*} |
and b \neq d + \eta , then
\begin{equation} {{\bf{S}}}_{n} {{\bf{P}}}_{n} \left[ \begin{array}{c} \frac{2\sin\left(\frac{2\pi}{n+1} \right)}{\sqrt{n+1}(\mu_{k} - \lambda_{1})} + \frac{8 \underset{j = 1}{\overset{\frac{n+1}{2}}{\sum}} \left\{\frac{(c + \xi - a) \sin \left[\frac{(2j - 1)\pi}{n + 1} \right] \sin \left[\frac{(4j - 2)\pi}{n + 1} \right] + (d + \eta - b) \sin^{2} \left[\frac{(4j - 2)\pi}{n + 1} \right]}{\mu_{k} - \lambda_{2j-1}} \right\}}{n + 1 - 4 \underset{j = 1}{\overset{\frac{n+1}{2}}{\sum}} \left\{\frac{(c + \xi - a) \sin^{2} \left[\frac{(2j - 1)\pi}{n + 1} \right] + (d + \eta - b) \sin \left[\frac{(2j - 1)\pi}{n + 1} \right] \sin \left[\frac{(4j - 2)\pi}{n + 1} \right]}{\mu_{k} - \lambda_{2j-1}} \right\}} \frac{\sin \left(\frac{\pi}{n+1} \right)}{\sqrt{n+1}(\mu_{k} - \lambda_{1})} \\[25pt] \frac{2\sin\left(\frac{6\pi}{n+1} \right)}{\sqrt{n+1}(\mu_{k} - \lambda_{3})} + \frac{8 \underset{j = 1}{\overset{\frac{n+1}{2}}{\sum}} \left\{\frac{(c + \xi - a) \sin \left[\frac{(2j - 1)\pi}{n + 1} \right] \sin \left[\frac{(4j - 2)\pi}{n + 1} \right] + (d + \eta - b) \sin^{2} \left[\frac{(4j - 2)\pi}{n + 1} \right]}{\mu_{k} - \lambda_{2j-1}} \right\}}{n + 1 - 4 \underset{j = 1}{\overset{\frac{n+1}{2}}{\sum}} \left\{\frac{(c + \xi - a) \sin^{2} \left[\frac{(2j - 1)\pi}{n + 1} \right] + (d + \eta - b) \sin \left[\frac{(2j - 1)\pi}{n + 1} \right] \sin \left[\frac{(4j - 2)\pi}{n + 1} \right]}{\mu_{k} - \lambda_{2j-1}} \right\}} \frac{\sin \left(\frac{3\pi}{n+1} \right)}{\sqrt{n+1}(\mu_{k} - \lambda_{3})} \\ \vdots \\[3pt] \frac{2\sin\left(\frac{2n\pi}{n+1} \right)}{\sqrt{n+1}(\mu_{k} - \lambda_{n})} + \frac{8 \underset{j = 1}{\overset{\frac{n+1}{2}}{\sum}} \left\{\frac{(c + \xi - a) \sin \left[\frac{(2j - 1)\pi}{n + 1} \right] \sin \left[\frac{(4j - 2)\pi}{n + 1} \right] + (d + \eta - b) \sin^{2} \left[\frac{(4j - 2)\pi}{n + 1} \right]}{\mu_{k} - \lambda_{2j-1}} \right\}}{n + 1 - 4 \underset{j = 1}{\overset{\frac{n+1}{2}}{\sum}} \left\{\frac{(c + \xi - a) \sin^{2} \left[\frac{(2j - 1)\pi}{n + 1} \right] + (d + \eta - b) \sin \left[\frac{(2j - 1)\pi}{n + 1} \right] \sin \left[\frac{(4j - 2)\pi}{n + 1} \right]}{\mu_{k} - \lambda_{2j-1}} \right\}} \frac{\sin \left(\frac{n\pi}{n+1} \right)}{\sqrt{n+1}(\mu_{k} - \lambda_{n})} \\[15pt] 0 \\ \vdots \\[3pt] 0 \end{array} \right] \end{equation} | (3.11a)
is an eigenvector of {{\bf{H}}}_{n} associated to \mu_{k} , k = 1, \ldots, \frac{n+1}{2} , and
\begin{equation} {{\bf{S}}}_{n} {{\bf{P}}}_{n} \left[ \begin{array}{c} 0 \\ \vdots \\[3pt] 0 \\ \frac{2\sin\left(\frac{4\pi}{n+1} \right)}{\sqrt{n+1}(\nu_{k} - \lambda_{2})} + \frac{8 \underset{j = 1}{\overset{\frac{n-1}{2}}{\sum}} \left[\frac{(c + \xi - a) \sin \left(\frac{2j\pi}{n + 1} \right) \sin \left(\frac{4j\pi}{n + 1} \right) + (d + \eta - b) \sin^{2} \left(\frac{4j\pi}{n + 1} \right)}{\nu_{k} - \lambda_{2j}} \right]}{n + 1 - 4 \underset{j = 1}{\overset{\frac{n-1}{2}}{\sum}} \left[\frac{(c + \xi - a) \sin \left(\frac{2j\pi}{n + 1} \right) \sin \left(\frac{4j\pi}{n + 1} \right) + (d + \eta - b) \sin^{2} \left(\frac{4j\pi}{n + 1} \right)}{\nu_{k} - \lambda_{2j}} \right]} \frac{\sin \left(\frac{2\pi}{n+1} \right)}{\sqrt{n+1}(\nu_{k} - \lambda_{2})} \\[20pt] \frac{2\sin\left(\frac{8\pi}{n+1} \right)}{\sqrt{n+1}(\nu_{k} - \lambda_{4})} + \frac{8 \underset{j = 1}{\overset{\frac{n-1}{2}}{\sum}} \left[\frac{(c + \xi - a) \sin \left(\frac{2j\pi}{n + 1} \right) \sin \left(\frac{4j\pi}{n + 1} \right) + (d + \eta - b) \sin^{2} \left(\frac{4j\pi}{n + 1} \right)}{\nu_{k} - \lambda_{2j}} \right]}{n + 1 - 4 \underset{j = 1}{\overset{\frac{n-1}{2}}{\sum}} \left[\frac{(c + \xi - a) \sin \left(\frac{2j\pi}{n + 1} \right) \sin \left(\frac{4j\pi}{n + 1} \right) + (d + \eta - b) \sin^{2} \left(\frac{4j\pi}{n + 1} \right)}{\nu_{k} - \lambda_{2j}} \right]} \frac{\sin \left(\frac{4\pi}{n+1} \right)}{\sqrt{n+1}(\nu_{k} - \lambda_{4})} \\ \vdots \\ \frac{2\sin\left[\frac{2(n-1)\pi}{n+1} \right]}{\sqrt{n+1}(\nu_{k} - \lambda_{n-1})} + \frac{8 \underset{j = 1}{\overset{\frac{n-1}{2}}{\sum}} \left[\frac{(c + \xi - a) \sin \left(\frac{2j\pi}{n + 1} \right) \sin \left(\frac{4j\pi}{n + 1} \right) + (d + \eta - b) \sin^{2} \left(\frac{4j\pi}{n + 1} \right)}{\nu_{k} - \lambda_{2j}} \right]}{n + 1 - 4 \underset{j = 1}{\overset{\frac{n-1}{2}}{\sum}} \left[\frac{(c + \xi - a) \sin \left(\frac{2j\pi}{n + 1} \right) \sin \left(\frac{4j\pi}{n + 1} \right) + (d + \eta - b) \sin^{2} \left(\frac{4j\pi}{n + 1} \right)}{\nu_{k} - \lambda_{2j}} \right]} \frac{\sin \left[\frac{(n-1)\pi}{n+1} \right]}{\sqrt{n+1}(\nu_{k} - \lambda_{n-1})} \end{array} \right] \end{equation} | (3.11b)
is an eigenvector of {{\bf{H}}}_{n} associated to \nu_{k} , k = 1, \ldots, \frac{n-1}{2} .
Proof. Since both assertions can be proven in the same way, we only prove (a). Let n be even. We can rewrite the matrix equation (\mu_{k} {{\bf{I}}}_{n} - {{\bf{H}}}_{n}) {{\bf{q}}} = {{\bf{0}}} as
\begin{equation} {{\bf{S}}}_{n} {{\bf{P}}}_{n} \left[ \begin{array}{c|c} \mu_{k} {{\bf{I}}}_{\frac{n}{2}} - {\boldsymbol{\Phi}}_{\frac{n}{2}} & {{\bf{O}}} \\[2pt] \hline {{\bf{O}}} & \mu_{k} {{\bf{I}}}_{\frac{n}{2}} - {\boldsymbol{\Psi}}_{\frac{n}{2}} \end{array} \right] {{\bf{P}}}_{n}^{\top} {{\bf{S}}}_{n} {{\bf{q}}} = {{\bf{0}}}, \end{equation} | (3.12) |
where {{\bf{S}}}_{n} is the matrix (2.1), {{\bf{P}}}_{n} is the permutation matrix (2.4c) and {\boldsymbol{\Phi}}_{\frac{n}{2}} and {\boldsymbol{\Psi}}_{\frac{n}{2}} are given by (2.4d) and (2.4e), respectively. Thus,
\begin{gather*} \left[\mu_{k} {{\bf{I}}}_{\frac{n}{2}} - {{\mathrm{diag}}} \left(\lambda_{1},\lambda_{3},\ldots,\lambda_{n-1} \right) - (c + \xi - a) {{\bf{x}}} {{\bf{x}}}^{\top} - (d + \eta - b) {{\bf{x}}} {{\bf{y}}}^{\top} - (d + \eta - b) {{\bf{y}}} {{\bf{x}}}^{\top} \right] {{\bf{q}}}_{1} = {{\bf{0}}}, \\ \left[\mu_{k} {{\bf{I}}}_{\frac{n}{2}} - {{\mathrm{diag}}} \left(\lambda_{2},\lambda_{4},\ldots,\lambda_{n} \right) - (c + \xi - a) {{\bf{v}}} {{\bf{v}}}^{\top} - (d + \eta - b) {{\bf{v}}} {{\bf{w}}}^{\top} - (d + \eta - b) {{\bf{w}}} {{\bf{v}}}^{\top} \right] {{\bf{q}}}_{2} = {{\bf{0}}}, \\ \left[ \begin{array}{c} {{\bf{q}}}_{1} \\ {{\bf{q}}}_{2} \end{array} \right] = {{\bf{P}}}_{n}^{\top} {{\bf{S}}}_{n} {{\bf{q}}}. \end{gather*} |
That is,
\begin{gather*} {{\bf{q}}}_{1} = \alpha \left[\mu_{k} {{\bf{I}}}_{\frac{n}{2}} - {{\mathrm{diag}}} \left(\lambda_{1},\lambda_{3},\ldots,\lambda_{n-1} \right) - (c + \xi - a) {{\bf{x}}} {{\bf{x}}}^{\top} - (d + \eta - b) {{\bf{x}}} {{\bf{y}}}^{\top} \right]^{-1} {{\bf{y}}} \\ {{\bf{q}}}_{2} = {{\bf{0}}} \end{gather*} |
for \alpha \neq 0 (see [8], page 41), and
\begin{equation*} {{\bf{q}}} = {{\bf{S}}}_{n} {{\bf{P}}}_{n} \left[ \begin{array}{c} \alpha \left[\mu_{k} {{\bf{I}}}_{\frac{n}{2}} - {{\mathrm{diag}}} \left(\lambda_{1},\lambda_{3},\ldots,\lambda_{n-1} \right) - (c + \xi - a) {{\bf{x}}} {{\bf{x}}}^{\top} - (d + \eta - b) {{\bf{x}}} {{\bf{y}}}^{\top} \right]^{-1} {{\bf{y}}} \\ {{\bf{0}}} \end{array} \right] \end{equation*} |
is a nontrivial solution of (3.12). Thus, choosing \alpha = 1 , we conclude that (3.10a) is an eigenvector of {{\bf{H}}}_{n} associated to the eigenvalue \mu_{k} . Similarly, from (\nu_{k} {{\bf{I}}}_{n} - {{\bf{H}}}_{n}) {{\bf{q}}} = {{\bf{0}}} , we have
\begin{equation*} {{\bf{S}}}_{n} {{\bf{P}}}_{n} \left[ \begin{array}{c|c} \nu_{k} {{\bf{I}}}_{\frac{n}{2}} - {\boldsymbol{\Phi}}_{\frac{n}{2}} & {{\bf{O}}} \\[2pt] \hline {{\bf{O}}} & \nu_{k} {{\bf{I}}}_{\frac{n}{2}} - {\boldsymbol{\Psi}}_{\frac{n}{2}} \end{array} \right] {{\bf{P}}}_{n}^{\top} {{\bf{S}}}_{n} {{\bf{q}}} = {{\bf{0}}} \end{equation*} |
and
\begin{equation*} {{\bf{q}}} = {{\bf{S}}}_{n} {{\bf{P}}}_{n} \left[ \begin{array}{c} {{\bf{0}}} \\ \alpha \left[\nu_{k} {{\bf{I}}}_{\frac{n}{2}} - {{\mathrm{diag}}} \left(\lambda_{2},\lambda_{4},\ldots,\lambda_{n} \right) - (c + \xi - a) {{\bf{v}}} {{\bf{v}}}^{\top} - (d + \eta - b) {{\bf{v}}} {{\bf{w}}}^{\top} \right]^{-1} {{\bf{w}}} \end{array} \right] \end{equation*} |
for \alpha \neq 0 , which is an eigenvector of {{\bf{H}}}_{n} associated to the eigenvalue \nu_{k} .
The orthogonal block diagonalization presented in Lemma 2 and Miller's formula for the inverse of the sum of nonsingular matrices lead us to an explicit expression for the inverse of {{\bf{H}}}_{n} .
Theorem 4. Let a, b, c, d, \xi, \eta be real numbers, \lambda_{k} , k = 1, \ldots, n be given by (2.3) and {{\bf{H}}}_{n} be the n \times n matrix (1.1). If \lambda_{k} \neq 0 for every k = 1, \ldots, n , {{\bf{H}}}_{n} is nonsingular and:
(a) n is even, then
\begin{equation*} {{\bf{H}}}_{n}^{-1} = {{\bf{S}}}_{n} {{\bf{P}}}_{n} \left[ \begin{array}{cc} {{\bf{Q}}}_{\frac{n}{2}} & {{\bf{O}}} \\ {{\bf{O}}} & {{\bf{R}}}_{\frac{n}{2}} \end{array} \right] {{\bf{P}}}_{n}^{\top} {{\bf{S}}}_{n}, \end{equation*} |
where {{\bf{S}}}_{n} is the n \times n matrix (2.1), {{\bf{P}}}_{n} is the n \times n permutation matrix (2.4c),
\begin{equation} \begin{split} {{\bf{Q}}}_{\frac{n}{2}} & = {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1} - \tfrac{(d + \eta - b) + (d + \eta - b)^{2} {{\bf{y}}}^{\top} {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1} {{\bf{x}}}}{\rho} {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1} \left({{\bf{y}}} {{\bf{x}}}^{\top} + {{\bf{x}}} {{\bf{y}}}^{\top} \right) {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1} \\ & \quad + \tfrac{(d + \eta - b)^{2}{{\bf{y}}}^{\top} {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1} {{\bf{y}}} - (c + \xi - a)}{\rho} {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1} {{\bf{x}}} {{\bf{x}}}^{\top} {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1} + \tfrac{(d + \eta - b)^{2} {{\bf{x}}}^{\top} {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1} {{\bf{x}}}}{\rho} {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1} {{\bf{y}}} {{\bf{y}}}^{\top} {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1}, \end{split} \end{equation} | (3.13a) |
with {\boldsymbol{\Upsilon}}_{\frac{n}{2}} : = {{\mathrm{diag}}} \left(\lambda_{1}, \lambda_{3}, \ldots, \lambda_{n-1} \right) , {{\bf{x}}}, {{\bf{y}}} given by (2.4a),
\begin{equation} \rho = 1 + (c + \xi - a) {{\bf{x}}}^{\top} {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1} {{\bf{x}}} + 2 (d + \eta - b) {{\bf{y}}}^{\top} {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1} {{\bf{x}}} + (d + \eta - b)^{2} \left[\big({{\bf{x}}}^{\top} {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1} {{\bf{y}}} \big)^{2} - \big({{\bf{x}}}^{\top} {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1} {{\bf{x}}} \big) \big({{\bf{y}}}^{\top} {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1} {{\bf{y}}} \big) \right] \end{equation} | (3.13b) |
and
\begin{equation} \begin{split} {{\bf{R}}}_{\frac{n}{2}} & = {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1} - \tfrac{(d + \eta - b) + (d + \eta - b)^{2} {{\bf{w}}}^{\top} {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1} {{\bf{v}}}}{\varrho} {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1} \left({{\bf{w}}} {{\bf{v}}}^{\top} + {{\bf{v}}} {{\bf{w}}}^{\top} \right) {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1} \\ & \quad + \tfrac{(d + \eta - b)^{2}{{\bf{w}}}^{\top} {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1} {{\bf{w}}} - (c + \xi - a)}{\varrho} {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1} {{\bf{v}}} {{\bf{v}}}^{\top} {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1} + \tfrac{(d + \eta - b)^{2} {{\bf{v}}}^{\top} {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1} {{\bf{v}}}}{\varrho} {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1} {{\bf{w}}} {{\bf{w}}}^{\top} {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1}, \end{split} \end{equation} | (3.13c) |
with {\boldsymbol{\Delta}}_{\frac{n}{2}} : = {{\mathrm{diag}}} \left(\lambda_{2}, \lambda_{4}, \ldots, \lambda_{n} \right) , {{\bf{v}}}, {{\bf{w}}} given by (2.5a) and
\begin{equation} \varrho = 1 + (c + \xi - a) {{\bf{v}}}^{\top} {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1} {{\bf{v}}} + 2 (d + \eta - b) {{\bf{w}}}^{\top} {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1} {{\bf{v}}} + (d + \eta - b)^{2} \left[\big({{\bf{v}}}^{\top} {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1} {{\bf{w}}} \big)^{2} - \big({{\bf{v}}}^{\top} {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1} {{\bf{v}}} \big) \big({{\bf{w}}}^{\top} {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1} {{\bf{w}}} \big) \right]. \end{equation} | (3.13d) |
(b) n is odd, then
\begin{equation*} {{\bf{H}}}_{n}^{-1} = {{\bf{S}}}_{n} {{\bf{P}}}_{n} \left[ \begin{array}{cc} {{\bf{Q}}}_{\frac{n+1}{2}} & {{\bf{O}}} \\ {{\bf{O}}} & {{\bf{R}}}_{\frac{n-1}{2}} \end{array} \right] {{\bf{P}}}_{n}^{\top} {{\bf{S}}}_{n}, \end{equation*} |
where {{\bf{S}}}_{n} is the n \times n matrix (2.1), {{\bf{P}}}_{n} is the n \times n permutation matrix (2.5c),
\begin{equation} \begin{split} {{\bf{Q}}}_{\frac{n+1}{2}} & = {\boldsymbol{\Upsilon}}_{\frac{n+1}{2}}^{-1} - \tfrac{(d + \eta - b) + (d + \eta - b)^{2} {{\bf{y}}}^{\top} {\boldsymbol{\Upsilon}}_{\frac{n+1}{2}}^{-1} {{\bf{x}}}}{\rho} {\boldsymbol{\Upsilon}}_{\frac{n+1}{2}}^{-1} \left({{\bf{y}}} {{\bf{x}}}^{\top} + {{\bf{x}}} {{\bf{y}}}^{\top} \right) {\boldsymbol{\Upsilon}}_{\frac{n+1}{2}}^{-1} \\ & \quad + \tfrac{(d + \eta - b)^{2}{{\bf{y}}}^{\top} {\boldsymbol{\Upsilon}}_{\frac{n+1}{2}}^{-1} {{\bf{y}}} - (c + \xi - a)}{\rho} {\boldsymbol{\Upsilon}}_{\frac{n+1}{2}}^{-1} {{\bf{x}}} {{\bf{x}}}^{\top} {\boldsymbol{\Upsilon}}_{\frac{n+1}{2}}^{-1} + \tfrac{(d + \eta - b)^{2} {{\bf{x}}}^{\top} {\boldsymbol{\Upsilon}}_{\frac{n+1}{2}}^{-1} {{\bf{x}}}}{\rho} {\boldsymbol{\Upsilon}}_{\frac{n+1}{2}}^{-1} {{\bf{y}}} {{\bf{y}}}^{\top} {\boldsymbol{\Upsilon}}_{\frac{n+1}{2}}^{-1}, \end{split} \end{equation} | (3.14a) |
with {\boldsymbol{\Upsilon}}_{\frac{n+1}{2}} : = {{\mathrm{diag}}} \left(\lambda_{1}, \lambda_{3}, \ldots, \lambda_{n} \right) , {{\bf{x}}}, {{\bf{y}}} given by (2.5a),
\begin{equation} \begin{split} \rho & = 1 + (c + \xi - a) {{\bf{x}}}^{\top} {\boldsymbol{\Upsilon}}_{\frac{n+1}{2}}^{-1} {{\bf{x}}} + 2 (d + \eta - b) {{\bf{y}}}^{\top} {\boldsymbol{\Upsilon}}_{\frac{n+1}{2}}^{-1} {{\bf{x}}} \\&+ (d + \eta - b)^{2} \left[\big({{\bf{x}}}^{\top} {\boldsymbol{\Upsilon}}_{\frac{n+1}{2}}^{-1} {{\bf{y}}} \big)^{2} - \big({{\bf{x}}}^{\top} {\boldsymbol{\Upsilon}}_{\frac{n+1}{2}}^{-1} {{\bf{x}}} \big) \big({{\bf{y}}}^{\top} {\boldsymbol{\Upsilon}}_{\frac{n+1}{2}}^{-1} {{\bf{y}}} \big) \right] \end{split} \end{equation} | (3.14b) |
and
\begin{equation} \begin{split} {{\bf{R}}}_{\frac{n-1}{2}} & = {\boldsymbol{\Delta}}_{\frac{n-1}{2}}^{-1} - \tfrac{(d + \eta - b) + (d + \eta - b)^{2} {{\bf{w}}}^{\top} {\boldsymbol{\Delta}}_{\frac{n-1}{2}}^{-1} {{\bf{v}}}}{\varrho} {\boldsymbol{\Delta}}_{\frac{n-1}{2}}^{-1} \left({{\bf{w}}} {{\bf{v}}}^{\top} + {{\bf{v}}} {{\bf{w}}}^{\top} \right) {\boldsymbol{\Delta}}_{\frac{n-1}{2}}^{-1} \\ & \quad + \tfrac{(d + \eta - b)^{2}{{\bf{w}}}^{\top} {\boldsymbol{\Delta}}_{\frac{n-1}{2}}^{-1} {{\bf{w}}} - (c + \xi - a)}{\varrho} {\boldsymbol{\Delta}}_{\frac{n-1}{2}}^{-1} {{\bf{v}}} {{\bf{v}}}^{\top} {\boldsymbol{\Delta}}_{\frac{n-1}{2}}^{-1} + \tfrac{(d + \eta - b)^{2} {{\bf{v}}}^{\top} {\boldsymbol{\Delta}}_{\frac{n-1}{2}}^{-1} {{\bf{v}}}}{\varrho} {\boldsymbol{\Delta}}_{\frac{n-1}{2}}^{-1} {{\bf{w}}} {{\bf{w}}}^{\top} {\boldsymbol{\Delta}}_{\frac{n-1}{2}}^{-1}, \end{split} \end{equation} | (3.14c) |
with {\boldsymbol{\Delta}}_{\frac{n-1}{2}} : = {{\mathrm{diag}}} \left(\lambda_{2}, \lambda_{4}, \ldots, \lambda_{n-1} \right) , {{\bf{v}}}, {{\bf{w}}} in (2.5b),
\begin{equation} \begin{split} \varrho & = 1 + (c + \xi - a) {{\bf{v}}}^{\top} {\boldsymbol{\Delta}}_{\frac{n-1}{2}}^{-1} {{\bf{v}}} + 2 (d + \eta - b) {{\bf{w}}}^{\top} {\boldsymbol{\Delta}}_{\frac{n-1}{2}}^{-1} {{\bf{v}}} \\ &+ (d + \eta - b)^{2} \left[\big({{\bf{v}}}^{\top} {\boldsymbol{\Delta}}_{\frac{n-1}{2}}^{-1} {{\bf{w}}} \big)^{2} - \big({{\bf{v}}}^{\top} {\boldsymbol{\Delta}}_{\frac{n-1}{2}}^{-1} {{\bf{v}}} \big) \big({{\bf{w}}}^{\top} {\boldsymbol{\Delta}}_{\frac{n-1}{2}}^{-1} {{\bf{w}}} \big) \right] . \end{split} \end{equation} | (3.14d) |
Proof. Let a, b, c, d, \xi, \eta be real numbers, let \lambda_{k} \neq 0 , k = 1, \ldots, n , be given by (2.3) and suppose {{\bf{H}}}_{n} in (1.1) is nonsingular. Recall that if {{\bf{H}}}_{n} is nonsingular, then \rho and \varrho in (3.13b) and (3.13d), respectively, are both nonzero. Setting \theta : = c + \xi - a , \vartheta : = d + \eta - b and assuming that conditions (3.1a) and (3.1b) are satisfied (note that (3.1c) corresponds to \rho \neq 0 ), we have, from the main result of [29] (see pages 69 and 70),
\begin{equation*} \begin{split} \big({\boldsymbol{\Upsilon}}_{\frac{n}{2}} + \theta {{\bf{x}}} {{\bf{x}}}^{\top} \big)^{-1} & = {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1} - \tfrac{\theta}{1 + \theta {{\bf{x}}}^{\top} {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1} {{\bf{x}}}} {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1} {{\bf{x}}} {{\bf{x}}}^{\top} {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1}, \\ \big({\boldsymbol{\Upsilon}}_{\frac{n}{2}} + \theta {{\bf{x}}} {{\bf{x}}}^{\top} + \vartheta {{\bf{x}}} {{\bf{y}}}^{\top} \big)^{-1} & = \big({\boldsymbol{\Upsilon}}_{\frac{n}{2}} + \theta {{\bf{x}}} {{\bf{x}}}^{\top} \big)^{-1} - \tfrac{\vartheta}{1 + \vartheta {{\bf{y}}}^{\top} \big({\boldsymbol{\Upsilon}}_{\frac{n}{2}} + \theta {{\bf{x}}} {{\bf{x}}}^{\top} \big)^{-1} {{\bf{x}}}} \big({\boldsymbol{\Upsilon}}_{\frac{n}{2}} + \theta {{\bf{x}}} {{\bf{x}}}^{\top} \big)^{-1} {{\bf{x}}} {{\bf{y}}}^{\top} \big({\boldsymbol{\Upsilon}}_{\frac{n}{2}} + \theta {{\bf{x}}} {{\bf{x}}}^{\top} \big)^{-1} \\ & = {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1} - \tfrac{\theta}{1 + \theta {{\bf{x}}}^{\top} {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1} {{\bf{x}}} + \vartheta {{\bf{y}}}^{\top} {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1} {{\bf{x}}}} {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1} {{\bf{x}}} {{\bf{x}}}^{\top} {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1} - \tfrac{\vartheta}{1 + \theta {{\bf{x}}}^{\top} {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1} {{\bf{x}}} + \vartheta {{\bf{y}}}^{\top} {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1} {{\bf{x}}}} {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1} {{\bf{x}}} {{\bf{y}}}^{\top} {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1} \end{split} \end{equation*} |
and
\begin{equation} \begin{split} &\big({\boldsymbol{\Upsilon}}_{\frac{n}{2}} + \theta {{\bf{x}}} {{\bf{x}}}^{\top} + \vartheta {{\bf{x}}} {{\bf{y}}}^{\top} + \vartheta {{\bf{y}}} {{\bf{x}}}^{\top} \big)^{-1} \\ & \quad = \big({\boldsymbol{\Upsilon}}_{\frac{n}{2}} + \theta {{\bf{x}}} {{\bf{x}}}^{\top} + \vartheta {{\bf{x}}} {{\bf{y}}}^{\top} \big)^{-1} - \tfrac{\vartheta}{1 + \vartheta {{\bf{x}}}^{\top} \big({\boldsymbol{\Upsilon}}_{\frac{n}{2}} + \theta {{\bf{x}}} {{\bf{x}}}^{\top} + \vartheta {{\bf{x}}} {{\bf{y}}}^{\top} \big)^{-1} {{\bf{y}}}} \big({\boldsymbol{\Upsilon}}_{\frac{n}{2}} + \theta {{\bf{x}}} {{\bf{x}}}^{\top} + \vartheta {{\bf{x}}} {{\bf{y}}}^{\top} \big)^{-1} {{\bf{x}}} {{\bf{y}}}^{\top} \big({\boldsymbol{\Upsilon}}_{\frac{n}{2}} + \theta {{\bf{x}}} {{\bf{x}}}^{\top} + \vartheta {{\bf{x}}} {{\bf{y}}}^{\top} \big)^{-1} \\ & \quad = {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1} - \tfrac{\vartheta + \vartheta^{2} {{\bf{y}}}^{\top} {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1} {{\bf{x}}}}{\rho} {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1} \left({{\bf{y}}} {{\bf{x}}}^{\top} + {{\bf{x}}} {{\bf{y}}}^{\top} \right) {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1} + \tfrac{\vartheta^{2}{{\bf{y}}}^{\top} {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1} {{\bf{y}}} - \theta}{\rho} {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1} {{\bf{x}}} {{\bf{x}}}^{\top} {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1} + \tfrac{\vartheta^{2} {{\bf{x}}}^{\top} {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1} {{\bf{x}}}}{\rho} {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1} {{\bf{y}}} {{\bf{y}}}^{\top} {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1}, \end{split} \end{equation} | (3.15) |
with {\boldsymbol{\Upsilon}}_{\frac{n}{2}} : = {{\mathrm{diag}}} \left(\lambda_{1}, \lambda_{3}, \ldots, \lambda_{n-1} \right) , {{\bf{x}}}, {{\bf{y}}} given by (2.4a) and \rho in (3.13b). In the same way, supposing (3.2a) and (3.2b) (observe that (3.2c) is \varrho \neq 0 ), we obtain
\begin{equation*} \begin{split} \big({\boldsymbol{\Delta}}_{\frac{n}{2}} + \theta {{\bf{v}}} {{\bf{v}}}^{\top} \big)^{-1} & = {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1} - \tfrac{\theta}{1 + \theta {{\bf{v}}}^{\top} {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1} {{\bf{v}}}} {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1} {{\bf{v}}} {{\bf{v}}}^{\top} {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1}, \\ \big({\boldsymbol{\Delta}}_{\frac{n}{2}} + \theta {{\bf{v}}} {{\bf{v}}}^{\top} + \vartheta {{\bf{v}}} {{\bf{w}}}^{\top} \big)^{-1} & = \big({\boldsymbol{\Delta}}_{\frac{n}{2}} + \theta {{\bf{v}}} {{\bf{v}}}^{\top} \big)^{-1} - \tfrac{\vartheta}{1 + \vartheta {{\bf{w}}}^{\top} \big({\boldsymbol{\Delta}}_{\frac{n}{2}} + \theta {{\bf{v}}} {{\bf{v}}}^{\top} \big)^{-1} {{\bf{v}}}} \big({\boldsymbol{\Delta}}_{\frac{n}{2}} + \theta {{\bf{v}}} {{\bf{v}}}^{\top} \big)^{-1} {{\bf{v}}} {{\bf{w}}}^{\top} \big({\boldsymbol{\Delta}}_{\frac{n}{2}} + \theta {{\bf{v}}} {{\bf{v}}}^{\top} \big)^{-1} \\ & = {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1} - \tfrac{\theta}{1 + \theta {{\bf{v}}}^{\top} {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1} {{\bf{v}}} + \vartheta {{\bf{w}}}^{\top} {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1} {{\bf{v}}}} {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1} {{\bf{v}}} {{\bf{v}}}^{\top} {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1} - \tfrac{\vartheta}{1 + \theta {{\bf{v}}}^{\top} {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1} {{\bf{v}}} + \vartheta {{\bf{w}}}^{\top} {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1} {{\bf{v}}}} {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1} {{\bf{v}}} {{\bf{w}}}^{\top} {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1} \end{split} \end{equation*} |
and
\begin{equation} \begin{split} &\big({\boldsymbol{\Delta}}_{\frac{n}{2}} + \theta {{\bf{v}}} {{\bf{v}}}^{\top} + \vartheta {{\bf{v}}} {{\bf{w}}}^{\top} + \vartheta {{\bf{w}}} {{\bf{v}}}^{\top} \big)^{-1} \\ &\quad = \big({\boldsymbol{\Delta}}_{\frac{n}{2}} + \theta {{\bf{v}}} {{\bf{v}}}^{\top} + \vartheta {{\bf{v}}} {{\bf{w}}}^{\top} \big)^{-1} - \tfrac{\vartheta}{1 + \vartheta {{\bf{v}}}^{\top} \big({\boldsymbol{\Delta}}_{\frac{n}{2}} + \theta {{\bf{v}}} {{\bf{v}}}^{\top} + \vartheta {{\bf{v}}} {{\bf{w}}}^{\top} \big)^{-1} {{\bf{w}}}} \big({\boldsymbol{\Delta}}_{\frac{n}{2}} + \theta {{\bf{v}}} {{\bf{v}}}^{\top} + \vartheta {{\bf{v}}} {{\bf{w}}}^{\top} \big)^{-1} {{\bf{v}}} {{\bf{w}}}^{\top} \big({\boldsymbol{\Delta}}_{\frac{n}{2}} + \theta {{\bf{v}}} {{\bf{v}}}^{\top} + \vartheta {{\bf{v}}} {{\bf{w}}}^{\top} \big)^{-1} \\ & \quad = {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1} - \tfrac{\vartheta + \vartheta^{2} {{\bf{w}}}^{\top} {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1} {{\bf{v}}}}{\varrho} {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1} \left({{\bf{w}}} {{\bf{v}}}^{\top} + {{\bf{v}}} {{\bf{w}}}^{\top} \right) {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1} + \tfrac{\vartheta^{2}{{\bf{w}}}^{\top} {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1} {{\bf{w}}} - \theta}{\varrho} {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1} {{\bf{v}}} {{\bf{v}}}^{\top} {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1} + \tfrac{\vartheta^{2} {{\bf{v}}}^{\top} {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1} {{\bf{v}}}}{\varrho} {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1} {{\bf{w}}} {{\bf{w}}}^{\top} {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1}, \end{split} \end{equation} | (3.16) |
where {\boldsymbol{\Delta}}_{\frac{n}{2}} : = {{\mathrm{diag}}} \left(\lambda_{2}, \lambda_{4}, \ldots, \lambda_{n} \right) , {{\bf{v}}}, {{\bf{w}}} are given by (2.5a) and \varrho is defined in (3.13d). Since the nonsingularity of {{\bf{H}}}_{n} and \lambda_{k} \neq 0 for all k = 1, \ldots, n suffice for both sides of (3.15) and (3.16) to be well-defined, the conditions (3.1a), (3.1b), (3.2a) and (3.2b) assumed previously can be dropped. Hence, the block diagonalization provided in (a) of Lemma 2, together with 8.5b of [21] (see page 88), establishes the claim in (a). The proof of (b) is analogous, so we omit the details.
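The rank-one update identities used in the proof are easy to confirm numerically. The sketch below (illustrative diagonal and vector, not values from the paper) inverts D + \theta {\bf x}{\bf x}^{\top} for a nonsingular diagonal D via the Sherman-Morrison-type formula appearing in the first identity above, and checks the product against the identity matrix:

```python
# Rank-one update for a nonsingular diagonal matrix:
# (D + theta * x x^T)^{-1} = D^{-1} - theta / (1 + theta * x^T D^{-1} x) * D^{-1} x x^T D^{-1}.
# Illustrative values only; these are not the Upsilon, x of the paper.

def rank_one_inverse(d, x, theta):
    """Inverse of diag(d) + theta * outer(x, x); all entries of d nonzero."""
    n = len(d)
    dinv_x = [x[i] / d[i] for i in range(n)]                       # D^{-1} x
    denom = 1.0 + theta * sum(x[i] * dinv_x[i] for i in range(n))  # 1 + theta x^T D^{-1} x
    return [[(1.0 / d[i] if i == j else 0.0) - theta * dinv_x[i] * dinv_x[j] / denom
             for j in range(n)] for i in range(n)]

d = [2.0, 3.0, 5.0]          # diagonal of D (all nonzero, as required in Theorem 4)
x = [1.0, -1.0, 2.0]
theta = 0.7

B = [[(d[i] if i == j else 0.0) + theta * x[i] * x[j] for j in range(3)] for i in range(3)]
Binv = rank_one_inverse(d, x, theta)

# B @ Binv should be the 3x3 identity.
prod = [[sum(B[i][k] * Binv[k][j] for k in range(3)) for j in range(3)] for i in range(3)]
ok = all(abs(prod[i][j] - (1.0 if i == j else 0.0)) < 1e-12 for i in range(3) for j in range(3))
print(ok)
```

Iterating this update over the successive rank-one terms, as in the proof, yields (3.15) and (3.16).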
It is well known that the fourth derivative can be approximated by the following centered finite-difference formula
\begin{equation} f^{(4)}(x_{k}) \approx \frac{-f(x_{k-3}) + 12 f(x_{k-2}) - 39 f(x_{k-1}) + 56 f(x_{k}) - 39 f(x_{k+1}) + 12 f(x_{k+2}) - f(x_{k+3})}{6h^{4}} \end{equation} | (4.1) |
(see [9], page 556). Consider an interval [a, b] (a < b) , a mesh of points x_{k} = a + k h , k = 0, 1, \ldots, N , where h = (b - a)/N , and a function f\colon [a, b] \longrightarrow \mathbb{R} such that f(a) = 0 = f(b) . By setting
\begin{equation*} \begin{split} f(x_{-2}) : = \alpha f(x_{2}), \\ f(x_{-1}) : = \alpha f(x_{1}), \\ f(x_{N+1}) : = \alpha f(x_{N-1}), \\ f(x_{N+2}) : = \alpha f(x_{N-2}) \end{split} \end{equation*} |
for some \alpha\in \mathbb{R} , the matrix operator corresponding to (4.1) for the fourth derivative is
\begin{equation} \left[ \begin{array}{ccccccccccc} 12 \alpha + 56 & -(\alpha + 39) & 12 & -1 & 0 & \ldots & \ldots & \ldots & \ldots & \ldots & 0 \\ -(\alpha + 39) & 56 & -39 & 12 & -1 & \ddots & & & & & \vdots \\ 12 & -39 & 56 & -39 & 12 & \ddots & \ddots & & & & \vdots \\ -1 & 12 & -39 & 56 & -39 & \ddots & \ddots & \ddots & & & \vdots \\ 0 & -1 & 12 & -39 & 56 & \ddots & \ddots & \ddots & \ddots & & \vdots \\ \vdots & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots & \vdots \\ \vdots & & \ddots & \ddots & \ddots & \ddots & 56 & -39 & 12 & -1 & 0 \\ \vdots & & & \ddots & \ddots & \ddots & -39 & 56 & -39 & 12 & -1 \\ \vdots & & & & \ddots & \ddots & 12 & -39 & 56 & -39 & 12 \\ \vdots & & & & & \ddots & -1 & 12 & -39 & 56 & -(\alpha + 39) \\ 0 & \ldots & \ldots & \ldots & \ldots & \ldots & 0 & -1 & 12 & -(\alpha + 39) & 12 \alpha + 56 \end{array} \right]. \end{equation} | (4.2) |
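As a sanity check of the stencil (4.1) (a sketch with an illustrative test function, not taken from the paper), note that the formula is exact for polynomials of sufficiently low degree; for f(x) = x^{4} , whose fourth derivative is identically 24, it reproduces the exact value:

```python
# Centered 7-point approximation (4.1) of the fourth derivative.
def fourth_derivative(f, x, h):
    c = [-1.0, 12.0, -39.0, 56.0, -39.0, 12.0, -1.0]   # stencil weights of (4.1)
    return sum(c[k + 3] * f(x + k * h) for k in range(-3, 4)) / (6.0 * h ** 4)

# f(x) = x^4 has constant fourth derivative 24; the stencil is exact here.
approx = fourth_derivative(lambda x: x ** 4, 1.0, 0.5)
print(approx)   # → 24.0
```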
A remarkable example that involves the fourth derivative is the ordinary differential equation that governs the deflection of a laterally loaded symmetrical beam of length L ,
\begin{equation} {{\mathrm{E}}} \, {{\mathrm{I}}}(x) y^{(4)}(x) = q(x), \quad x \in ]0,L[, \end{equation} | (4.3) |
where {{\mathrm{E}}} is the modulus of elasticity of the beam material, {{\mathrm{I}}}(x) is the moment of inertia of the beam cross section and q(x) is the distributed load. The ordinary differential equation (4.3) can be equipped with the boundary conditions y(0) = 0 = y(L) (see, for instance, [22]).
The eigenvalues of derivative matrices are very useful. In fact, they can be compared with those of the exact (continuous) derivative operator to gauge the accuracy of the finite difference approximation. On the other hand, in the context of partial differential equations, the eigenvalues of the spatial operator are considered along with the stability diagram of the time-integration scheme to assess the stability of the numerical solution of the partial differential equation [3]. The statements of subsection 3.2 can be employed to locate (bound) the eigenvalues of (4.2).
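For instance, a crude localization of the spectrum of (4.2) already follows from the Geršgorin discs; the sketch below (illustrative \alpha and matrix size) computes the resulting interval in plain Python:

```python
# Gershgorin bound for the finite-difference operator (4.2): every eigenvalue of a
# real symmetric matrix lies in the union of intervals [a_ii - R_i, a_ii + R_i],
# where R_i is the sum of the absolute off-diagonal entries of row i.
# The value of alpha and the size below are illustrative, not taken from the paper.

def fd4_matrix(n, alpha):
    """The n x n operator (4.2) associated with the stencil (4.1) and ghost-point parameter alpha."""
    band = {0: 56.0, 1: -39.0, 2: 12.0, 3: -1.0}
    A = [[band.get(abs(i - j), 0.0) for j in range(n)] for i in range(n)]
    A[0][0] = A[n - 1][n - 1] = 12.0 * alpha + 56.0
    A[0][1] = A[1][0] = A[n - 1][n - 2] = A[n - 2][n - 1] = -(alpha + 39.0)
    return A

def gershgorin_interval(A):
    radii = [sum(abs(v) for j, v in enumerate(row) if j != i) for i, row in enumerate(A)]
    lo = min(A[i][i] - radii[i] for i in range(len(A)))
    hi = max(A[i][i] + radii[i] for i in range(len(A)))
    return lo, hi

print(gershgorin_interval(fd4_matrix(8, 1.0)))   # → (-48.0, 160.0)
```

The bounds of subsection 3.2 are sharper, since they locate each eigenvalue individually rather than the whole spectrum.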
Another example of a derivative matrix is
\begin{equation} \left[ \begin{array}{ccccccc} -\frac{2}{3} & \frac{2}{3} & 0 & \ldots & \ldots & \ldots & 0 \\ 1 & -2 & 1 & \ddots & & & \vdots \\ 0 & 1 & -2 & \ddots & \ddots & & \vdots \\ \vdots & \ddots & \ddots & \ddots & \ddots &\ddots & \vdots \\ \vdots & & \ddots & \ddots & -2 & 1 & 0 \\ \vdots & & & \ddots & 1 & -2 & 1 \\ 0 & \ldots & \ldots & \ldots & 0 & \frac{2}{3} & - \frac{2}{3} \end{array} \right], \end{equation} | (4.4) |
which appears in the discretization of the second-derivative operator via the three-point centered finite-difference formula with Neumann boundary conditions f'(x_{0}) = a and f'(x_{N}) = b (see [3], pages 133 and 134). Our results can also be used to locate (bound) its eigenvalues by noticing that the eigenvalues of (4.4) and
\begin{align*} &{{\mathrm{diag}}}\left(1,\frac{\sqrt{6}}{3},\ldots,\frac{\sqrt{6}}{3},1 \right) \left[ \begin{array}{ccccccc} -\frac{2}{3} & \frac{2}{3} & 0 & \ldots & \ldots & \ldots & 0 \\ 1 & -2 & 1 & \ddots & & & \vdots \\ 0 & 1 & -2 & \ddots & \ddots & & \vdots \\ \vdots & \ddots & \ddots & \ddots & \ddots &\ddots & \vdots \\ \vdots & & \ddots & \ddots & -2 & 1 & 0 \\ \vdots & & & \ddots & 1 & -2 & 1 \\ 0 & \ldots & \ldots & \ldots & 0 & \frac{2}{3} & - \frac{2}{3} \end{array} \right] {{\mathrm{diag}}}\left(1,\frac{\sqrt{6}}{2},\ldots,\frac{\sqrt{6}}{2},1 \right) \\ &\quad = \left[ \begin{array}{ccccccc} -\frac{2}{3} & \frac{\sqrt{6}}{3} & 0 & \ldots & \ldots & \ldots & 0 \\ \frac{\sqrt{6}}{3} & -2 & 1 & \ddots & & & \vdots \\ 0 & 1 & -2 & \ddots & \ddots & & \vdots \\ \vdots & \ddots & \ddots & \ddots & \ddots &\ddots & \vdots \\ \vdots & & \ddots & \ddots & -2 & 1 & 0 \\ \vdots & & & \ddots & 1 & -2 & \frac{\sqrt{6}}{3} \\ 0 & \ldots & \ldots & \ldots & 0 & \frac{\sqrt{6}}{3} & -\frac{2}{3} \end{array} \right] \end{align*} |
are exactly the same.
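This similarity can be checked numerically; the sketch below (illustrative size) conjugates (4.4) by the stated diagonal matrix and verifies that the result is the symmetric matrix displayed above, so both share the same real spectrum:

```python
import math

# Second-derivative operator (4.4) with Neumann boundary rows; size n is illustrative.
def neumann_matrix(n):
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        A[i][i] = -2.0
        if i > 0:
            A[i][i - 1] = 1.0
        if i < n - 1:
            A[i][i + 1] = 1.0
    A[0][0] = A[n - 1][n - 1] = -2.0 / 3.0
    A[0][1] = A[n - 1][n - 2] = 2.0 / 3.0
    return A

n = 7
A = neumann_matrix(n)
# D A D^{-1} with D = diag(1, sqrt(6)/3, ..., sqrt(6)/3, 1); note (sqrt(6)/3)^{-1} = sqrt(6)/2.
d = [1.0] + [math.sqrt(6.0) / 3.0] * (n - 2) + [1.0]
B = [[d[i] * A[i][j] / d[j] for j in range(n)] for i in range(n)]

symmetric = all(abs(B[i][j] - B[j][i]) < 1e-12 for i in range(n) for j in range(n))
corner_ok = abs(B[0][1] - math.sqrt(6.0) / 3.0) < 1e-12   # matches the displayed matrix
print(symmetric and corner_ok)
```

Since B is symmetric and similar to (4.4), the eigenvalues of (4.4) are real and can be bounded through B.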
Consider n pairs of observations (x_{1}, y_{1}), (x_{2}, y_{2}), \ldots, (x_{n}, y_{n}) such that
\begin{equation*} y_{k} = r(x_{k}) + \varepsilon_{k} \quad {\text{and}} \quad \mathbb{E}(\varepsilon_{k}) = 0 \qquad (k = 1,2,\ldots,n), \end{equation*} |
where r is the regression function to be estimated. The estimator of r(x) is usually denoted by \widehat{r}(x) and is called a smoother. An estimator \widehat{r} of r is a linear smoother if, for each x , there exists a vector {\boldsymbol{\varsigma}}(x) = (\varsigma_{1}(x), \ldots, \varsigma_{n}(x))^{\top} such that
\begin{equation*} \widehat{r}(x) = \sum\limits_{k = 1}^{n} \varsigma_{k}(x) y_{k}. \end{equation*} |
Defining the vector of fitted values {{\bf{\widehat{y}}}} = (\widehat{r}(x_{1}), \ldots, \widehat{r}(x_{n}))^{\top} , it follows that
\begin{equation*} {{\bf{\widehat{y}}}} = {\boldsymbol{\Sigma}} \, {{\bf{y}}}, \end{equation*} |
where {\boldsymbol{\Sigma}} is the n \times n matrix whose k^{{{\mathrm{th}}}} row is {\boldsymbol{\varsigma}}(x_{k})^{\top} , called the smoothing matrix, and {{\bf{y}}} = (y_{1}, \ldots, y_{n})^{\top} (see [34], page 66).
The eigendecomposition of the smoothing matrix {\boldsymbol{\Sigma}} provides a useful characterization of the properties of a smoother. In fact, if {\boldsymbol{\Sigma}} = \sum_{k = 1}^{n} \lambda_{k} {\boldsymbol{\sigma}}_{k} {\boldsymbol{\sigma}}_{k}^{\top} is the spectral decomposition of the smoothing matrix, where \lambda_{k} are the ordered eigenvalues and {\boldsymbol{\sigma}}_{k} the corresponding eigenvectors, we can meaningfully decompose the fit as {{\bf{\widehat{y}}}} = \sum_{k = 1}^{n} \alpha_{k} \lambda_{k} {\boldsymbol{\sigma}}_{k} , where the eigenvectors {\boldsymbol{\sigma}}_{k} show which sequences are preserved or compressed by scalar multiplication, and \alpha_{k} are the coefficients of the projection of {{\bf{y}}} onto the space spanned by the eigenvectors {\boldsymbol{\sigma}}_{k} , that is, {{\bf{y}}} = \sum_{k = 1}^{n} \alpha_{k} {\boldsymbol{\sigma}}_{k} . Moreover, {{\mathrm{tr}}}({\boldsymbol{\Sigma}}) = \sum_{k = 1}^{n} \lambda_{k} gives the number of degrees of freedom of the smoother, a measure of the equivalent number of parameters used to obtain the fit {{\bf{\widehat{y}}}} , which allows us to compare alternative filters according to their degree of smoothing (see [28] and the references therein).
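As a toy illustration of the trace rule (the smoother here is a simple running mean, not one discussed in the paper), the equivalent degrees of freedom are read off directly from the diagonal of the smoothing matrix:

```python
# Smoothing matrix of a centered 3-point moving average (boundary rows average 2 points).
# tr(Sigma) = sum of the eigenvalues = equivalent degrees of freedom of the smoother.
def moving_average_matrix(n):
    S = [[0.0] * n for _ in range(n)]
    for i in range(n):
        idx = [j for j in (i - 1, i, i + 1) if 0 <= j < n]
        for j in idx:
            S[i][j] = 1.0 / len(idx)
    return S

S = moving_average_matrix(6)
df = sum(S[i][i] for i in range(6))   # trace: 2 * (1/2) + 4 * (1/3)
print(df)
```

A heavier smoother has smaller trace, i.e., fewer effective parameters.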
The smoothing matrix associated with the Beveridge-Nelson smoother (see [31] for details), when the observed series is generated by an {{\mathrm{ARIMA}}}(1, 1, 0) model with -1 < \phi < 0 and filter (half) bandwidth m = 1 , is the following tridiagonal matrix:
\begin{equation*} {\boldsymbol{\Sigma}} = \left[ \begin{array}{ccccccc} \frac{1}{1 - \phi} & -\frac{\phi}{1 - \phi} & 0 & \ldots & \ldots & \ldots & 0 \\ -\frac{\phi}{(1 - \phi)^{2}} & \frac{1 + \phi^{2}}{(1 - \phi)^{2}} & -\frac{\phi}{(1 - \phi)^{2}} & \ddots & & & \vdots \\ 0 & -\frac{\phi}{(1 - \phi)^{2}} & \frac{1 + \phi^{2}}{(1 - \phi)^{2}} & \ddots & \ddots & & \vdots \\ \vdots & \ddots & \ddots & \ddots & \ddots &\ddots & \vdots \\ \vdots & & \ddots & \ddots & \frac{1 + \phi^{2}}{(1 - \phi)^{2}} & -\frac{\phi}{(1 - \phi)^{2}} & 0 \\ \vdots & & & \ddots & -\frac{\phi}{(1 - \phi)^{2}} & \frac{1 + \phi^{2}}{(1 - \phi)^{2}} & -\frac{\phi}{(1 - \phi)^{2}} \\ 0 & \ldots & \ldots & \ldots & 0 & -\frac{\phi}{1 - \phi} & \frac{1}{1 - \phi} \end{array} \right] \end{equation*} |
(see [28]). Since the matrices {\boldsymbol{\Sigma}} and
\begin{align*} &{{\mathrm{diag}}}\left(1,\sqrt{1 - \phi},\ldots,\sqrt{1 - \phi},1 \right) \, {\boldsymbol{\Sigma}} \, {{\mathrm{diag}}}\left(1,\frac{1}{\sqrt{1 - \phi}},\ldots,\frac{1}{\sqrt{1 - \phi}},1 \right) \\ &\quad = \left[ \begin{array}{ccccccc} \frac{1}{1 - \phi} & -\frac{\phi}{\sqrt{(1 - \phi)^{3}}} & 0 & \ldots & \ldots & \ldots & 0 \\ -\frac{\phi}{\sqrt{(1 - \phi)^{3}}} & \frac{1 + \phi^{2}}{(1 - \phi)^{2}} & -\frac{\phi}{(1 - \phi)^{2}} & \ddots & & & \vdots \\ 0 & -\frac{\phi}{(1 - \phi)^{2}} & \frac{1 + \phi^{2}}{(1 - \phi)^{2}} & \ddots & \ddots & & \vdots \\ \vdots & \ddots & \ddots & \ddots & \ddots &\ddots & \vdots \\ \vdots & & \ddots & \ddots & \frac{1 + \phi^{2}}{(1 - \phi)^{2}} & -\frac{\phi}{(1 - \phi)^{2}} & 0 \\ \vdots & & & \ddots & -\frac{\phi}{(1 - \phi)^{2}} & \frac{1 + \phi^{2}}{(1 - \phi)^{2}} & -\frac{\phi}{\sqrt{(1 - \phi)^{3}}} \\ 0 & \ldots & \ldots & \ldots & 0 & -\frac{\phi}{\sqrt{(1 - \phi)^{3}}} & \frac{1}{1 - \phi} \end{array} \right] \end{align*} |
share the same eigenvalues, we are able to locate (bound) the eigenvalues of {\boldsymbol{\Sigma}} by using the results of subsection 3.2. Moreover, from the prescribed eigenvalues, an eigendecomposition of {\boldsymbol{\Sigma}} can also be obtained by means of the statements in subsection 3.3.
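The symmetrization above is easy to confirm numerically; the sketch below (illustrative n and \phi ) builds {\boldsymbol{\Sigma}}, conjugates it by the stated diagonal matrix and checks that the result is symmetric:

```python
import math

# Beveridge-Nelson smoothing matrix for ARIMA(1,1,0), -1 < phi < 0, m = 1 (see [28]).
def bn_sigma(n, phi):
    a = 1.0 / (1.0 - phi)                 # corner diagonal entries
    b = -phi / (1.0 - phi)                # corner off-diagonal entries
    c = -phi / (1.0 - phi) ** 2           # interior off-diagonal entries
    e = (1.0 + phi ** 2) / (1.0 - phi) ** 2   # interior diagonal entries
    S = [[0.0] * n for _ in range(n)]
    for i in range(1, n - 1):
        S[i][i - 1] = S[i][i + 1] = c
        S[i][i] = e
    S[0][0] = S[n - 1][n - 1] = a
    S[0][1] = S[n - 1][n - 2] = b
    return S

n, phi = 6, -0.4
S = bn_sigma(n, phi)
# Conjugate by diag(1, sqrt(1 - phi), ..., sqrt(1 - phi), 1), as displayed above.
D = [1.0] + [math.sqrt(1.0 - phi)] * (n - 2) + [1.0]
T = [[D[i] * S[i][j] / D[j] for j in range(n)] for i in range(n)]

is_symmetric = all(abs(T[i][j] - T[j][i]) < 1e-12 for i in range(n) for j in range(n))
print(is_symmetric)
```

Symmetry of the conjugated matrix guarantees, in particular, that the eigenvalues of {\boldsymbol{\Sigma}} are real.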
In this paper, a procedure to express the eigenvalues and associated eigenvectors of a symmetric heptadiagonal quasi-Toeplitz matrix was presented, as well as an explicit formula for its inverse. The proposed method allowed us to obtain rational functions that locate the eigenvalues and closed-form formulas for the corresponding eigenvectors of the class of matrices under analysis, which are not covered by recent works on this subject, and, most of all, it leaves an open door for additional statements on symmetric quasi-Toeplitz matrices in general. The numerical example provided to highlight the differences between the quasi-Toeplitz and Toeplitz cases also raised some open questions. Indeed, although the Geršgorin theorem leads us to an interval containing all eigenvalues of generic quasi-Toeplitz matrices, it would be interesting to have a more precise tool, as in the "pure" Toeplitz case. A method that could predict the number of outliers and their asymptotic behavior as n tends to infinity would also be very welcome. Of course, another open problem closely related to the content of this paper would be to obtain a block diagonalization for nonsymmetric quasi-Toeplitz matrices in the spirit of Lemma 2.
The author declares he has not used Artificial Intelligence (AI) tools in the creation of this article.
The author would like to thank Professor Yongjian Hu for the invitation to submit the manuscript, and also the anonymous referees for their careful reading and very constructive comments, which greatly improved the final version of the paper.
This work is funded by national funds through the FCT - Fundação para a Ciência e a Tecnologia, I.P., under the scope of project UIDB/04035/2020 (GeoBioTec).
The author declares there is no conflict of interest.
[1] R. Álvarez-Nodarse, J. Petronilho, N. R. Quintero, On some tridiagonal k-Toeplitz matrices: Algebraic and analytical aspects. Applications, J. Comput. Appl. Math., 184 (2005), 518–537. https://doi.org/10.1016/j.cam.2005.01.025
[2] J. Anderson, A secular equation for the eigenvalues of a diagonal matrix perturbation, Linear Algebra Appl., 246 (1996), 49–70. https://doi.org/10.1016/0024-3795(94)00314-9
[3] H. Aref, S. Balachandar, A First Course in Computational Fluid Dynamics, Cambridge: Cambridge University Press, 2018. https://doi.org/10.1017/9781316823736
[4] S. O. Asplund, Finite boundary value problems solved by Green's matrix, Math. Scand., 7 (1959), 49–56. https://doi.org/10.7146/math.scand.a-10560
[5] I. Bar-On, Interlacing properties of tridiagonal symmetric matrices with applications to parallel computing, SIAM J. Matrix Anal. Appl., 17 (1996), 548–562. https://doi.org/10.1137/S0895479893252003
[6] R. M. Beam, R. F. Warming, The asymptotic spectra of banded Toeplitz and quasi-Toeplitz matrices, SIAM J. Sci. Comput., 14 (1993), 971–1006. https://doi.org/10.1137/0914059
[7] D. Bini, M. Capovani, Spectral and computational properties of band symmetric Toeplitz matrices, Linear Algebra Appl., 52/53 (1983), 99–126. https://doi.org/10.1016/0024-3795(83)80009-3
[8] J. R. Bunch, C. P. Nielsen, D. C. Sorensen, Rank-one modification of the symmetric eigenproblem, Numer. Math., 31 (1978), 31–48. https://doi.org/10.1007/BF01396012
[9] S. C. Chapra, Applied Numerical Methods with MATLABⓇ for Engineers and Scientists, 4^{th} edition, New York: McGraw-Hill, 2018.
[10] S. Demko, Inverses of band matrices and local convergence of spline projections, SIAM J. Numer. Anal., 14 (1977), 616–619. https://doi.org/10.1137/0714041
[11] S. E. Ekström, C. Garoni, A. Jozefiak, J. Perla, Eigenvalues and eigenvectors of tau matrices with applications to Markov processes and economics, Linear Algebra Appl., 627 (2021), 41–71. https://doi.org/10.1016/j.laa.2021.06.005
[12] M. Elouafi, An eigenvalue localization theorem for pentadiagonal symmetric Toeplitz matrices, Linear Algebra Appl., 435 (2011), 2986–2998. https://doi.org/10.1016/j.laa.2011.05.025
[13] D. Fasino, Spectral and structural properties of some pentadiagonal symmetric matrices, Calcolo, 25 (1988), 301–310. https://doi.org/10.1007/BF02575838
[14] C. F. Fischer, R. A. Usmani, Properties of some tridiagonal matrices and their application to boundary value problems, SIAM J. Numer. Anal., 6 (1969), 127–142. https://doi.org/10.1137/0706014
[15] C. M. da Fonseca, J. Petronilho, Explicit inverses of some tridiagonal matrices, Linear Algebra Appl., 325 (2001), 7–21. https://doi.org/10.1016/S0024-3795(00)00289-5
[16] |
C. M. da Fonseca, On the location of the eigenvalues of Jacobi matrices, Appl. Math. Lett., 19 (2006), 1168–1174. https://doi.org/10.1016/j.aml.2005.11.029 doi: 10.1016/j.aml.2005.11.029
![]() |
[17] |
S. Friedland, A. A. Melkman, On the eigenvalues of non-negative Jacobi matrices, Linear Algebra Appl., 25 (1979), 239–253. https://doi.org/10.1016/0024-3795(79)90021-1 doi: 10.1016/0024-3795(79)90021-1
![]() |
[18] C. Garoni, S. Serra-Capizzano, Generalized Locally Toeplitz Sequences: Theory and Applications, Cham: Springer, 2017. https://doi.org/10.1007/978-3-319-53679-8
[19] M. J. C. Gover, The eigenproblem of a tridiagonal 2-Toeplitz matrix, Linear Algebra Appl., 197/198 (1994), 63–78. https://doi.org/10.1016/0024-3795(94)90481-2
[20] S. Haley, Solution of band matrix equations by projection-recurrence, Linear Algebra Appl., 32 (1980), 33–48. https://doi.org/10.1016/0024-3795(80)90005-1
[21] D. A. Harville, Matrix Algebra From a Statistician's Perspective, New York: Springer-Verlag, 1997. https://doi.org/10.1007/b98818
[22] J. D. Hoffman, Numerical Methods for Engineers and Scientists, 2nd edition, New York: Marcel Dekker, 2001. https://doi.org/10.1201/9781315274508
[23] R. A. Horn, C. R. Johnson, Matrix Analysis, 2nd edition, New York: Cambridge University Press, 2013. https://doi.org/10.1017/CBO9781139020411
[24] A. J. Keeping, Band matrices arising from finite difference approximations to a third order partial differential equation, SIAM J. Numer. Anal., 7 (1970), 142–156. https://doi.org/10.1137/0707010
[25] S. Kouachi, Eigenvalues and eigenvectors of some tridiagonal matrices with non-constant diagonal entries, Appl. Math., 35 (2008), 107–120. https://doi.org/10.4064/am35-1-7
[26] S. Kouachi, Explicit eigenvalues of some perturbed heptadiagonal matrices via recurrent sequences, Lobachevskii J. Math., 36 (2015), 28–37. https://doi.org/10.1134/S1995080215010096
[27] D. Kulkarni, D. Schmidt, S. K. Tsui, Eigenvalues of tridiagonal pseudo-Toeplitz matrices, Linear Algebra Appl., 297 (1999), 63–80. https://doi.org/10.1016/S0024-3795(99)00114-7
[28] A. Luati, T. Proietti, On the spectral properties of matrices associated with trend filters, Econom. Theory, 26 (2010), 1247–1261. https://doi.org/10.1017/S0266466609990715
[29] K. S. Miller, On the inverse of the sum of matrices, Math. Mag., 54 (1981), 67–72. https://doi.org/10.1080/0025570X.1981.11976898
[30] S. Pissanetsky, Sparse Matrix Technology, London: Academic Press, 1984. https://doi.org/10.1016/C2013-0-11311-6
[31] T. Proietti, A. Harvey, A Beveridge–Nelson smoother, Econom. Lett., 67 (2000), 139–146. https://doi.org/10.1016/S0165-1765(99)00276-1
[32] M. S. Solary, Finding eigenvalues for heptadiagonal symmetric Toeplitz matrices, J. Math. Anal. Appl., 402 (2013), 719–730. https://doi.org/10.1016/j.jmaa.2013.02.008
[33] R. A. Usmani, T. H. Andres, D. J. Walton, Error estimation in the integration of ordinary differential equations, Int. J. Comput. Math., 5 (1975), 241–256. https://doi.org/10.1080/00207167608803115
[34] L. Wasserman, All of Nonparametric Statistics, New York: Springer Science+Business Media, 2006. https://doi.org/10.1007/0-387-30623-4
[35] A. R. Willms, Analytic results for the eigenvalues of certain tridiagonal matrices, SIAM J. Matrix Anal. Appl., 30 (2008), 639–656. https://doi.org/10.1137/070695411