This article aims to present novel identities for elementary and complete symmetric polynomials and explore their applications, particularly to generalized Vandermonde and special tri-diagonal matrices. It also extends existing results on Jacobi polynomials $P_n^{(\alpha,\beta)}(x)$ and introduces an explicit formula based on the zeros of $P_{n-1}^{(\alpha,\beta)}(x)$. Several illustrative examples are included.
Citation: Ahmed Arafat, Moawwad El-Mikkawy. Novel identities for elementary and complete symmetric polynomials with diverse applications[J]. AIMS Mathematics, 2024, 9(9): 23489-23511. doi: 10.3934/math.20241142
Symmetric polynomials are significant in various areas of mathematics, including computational linear algebra [1,2], representation theory [3], combinatorics [4,5,6], and others. There are several types of symmetric polynomials, including the power-sum, monomial, Schur, elementary, and complete symmetric polynomials. For more information, refer to [7].
According to the fundamental theorem of symmetric polynomials, the elementary symmetric polynomials are distinguished from other symmetric polynomials, as any symmetric polynomial can be uniquely represented in terms of the elementary symmetric polynomials (see [8]). Also, from the Jacobi–Trudi and Nägelsbach–Kostka identities, we see that the elementary and the complete symmetric polynomials are dual to each other (see [9]).
There are numerous studies presenting identities for symmetric polynomials, such as ([10,11,12]) and others listed in the references. For instance, the authors in [11] introduced some identities for the elementary and complete symmetric polynomials and used them to generalize Stirling numbers, in addition to proving a conjecture proposed in [13]. In [12], the author presented new relationships between elementary and complete symmetric polynomials and used them to provide a new representation for the Gaussian polynomials. Similarly, the author in [10] introduced identities for the elementary symmetric polynomials and used them to present elegant representations for Legendre polynomials. In the current paper, we will present additional identities for elementary and complete symmetric polynomials, supported by some applications. Some of these applications include improving the results presented in [2] related to the Vandermonde determinant, computing determinants for some special cases of tri-diagonal matrices, and generalizing the results presented in [10] related to Legendre polynomials.
To begin with, we introduce some basic definitions, notations, and well-known results, which we will use in the sequel. For further details, refer to [1,7,10,11,12,14,15,16]. Throughout this article, let $n\in\mathbb{N}$, $\mathbf{x}=(x_1,x_2,\ldots,x_n)\in\mathbb{C}^n$, and we use the notation $\mathbb{R}^n_{+}:=\{(x_1,x_2,\ldots,x_n)\mid x_i\in\mathbb{R}^{+},\ i=1,2,\ldots,n\}$. Let us start with the definitions of elementary and complete symmetric polynomials.

Definition 1.1. The elementary symmetric polynomial (for short, ESP) of degree $k$, denoted by $\sigma_k^{(n)}(\mathbf{x})$, is the sum of all possible products of $k$ distinct variables of $\{x_1,x_2,\ldots,x_n\}$, that is,
$$\sigma_k^{(n)}(\mathbf{x})=\begin{cases}0, & \text{if } k>n \text{ or } k<0,\\ 1, & \text{if } k=0,\\ \displaystyle\sum_{1\le i_1<i_2<\cdots<i_k\le n} x_{i_1}x_{i_2}\cdots x_{i_k}, & \text{if } k=1,2,\ldots,n.\end{cases} \tag{1.1}$$
The complete symmetric polynomial (for short, CSP) of degree $k$, denoted by $h_k^{(n)}(\mathbf{x})$, is defined as follows:
$$h_k^{(n)}(\mathbf{x})=\begin{cases}0, & \text{if } n<0 \text{ or } k<0 \text{ or } (n=0 \text{ with } k\neq 0),\\ 1, & \text{if } k=0,\\ \displaystyle\sum_{1\le i_1\le i_2\le\cdots\le i_k\le n} x_{i_1}x_{i_2}\cdots x_{i_k}, & \text{if } k=1,2,\ldots.\end{cases} \tag{1.2}$$
For instance, the ESP and CSP of degree 2 for $n=3$ are given by
$$\sigma_2^{(3)}(x_1,x_2,x_3)=x_1x_2+x_1x_3+x_2x_3,\qquad h_2^{(3)}(x_1,x_2,x_3)=x_1^2+x_2^2+x_3^2+x_1x_2+x_1x_3+x_2x_3.$$
It should be noticed that, for a fixed degree $k$, each $\sigma_k^{(n)}(\mathbf{x})$ involves $\binom{n}{k}$ terms, and each $h_k^{(n)}(\mathbf{x})$ involves $\binom{n+k-1}{k}$ terms. From the definitions of the ESP and the CSP, we see that they are homogeneous polynomials. Therefore, it is convenient to state the so-called Euler's theorem on homogeneous functions.
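To make the definitions concrete, here is a minimal computational sketch (our own illustration, not part of the original development; the helper names `esp` and `csp` are ours). It evaluates both polynomials by brute force, for the $n=3$ case illustrated above evaluated at $(2,3,5)$, and confirms the term counts $\binom{n}{k}$ and $\binom{n+k-1}{k}$ quoted above.

```python
# Illustrative sketch: brute-force evaluation of Definition 1.1.
from itertools import combinations, combinations_with_replacement
from math import comb, prod

def esp(k, xs):
    """Elementary symmetric polynomial sigma_k^{(n)}(x), Eq (1.1)."""
    if k < 0 or k > len(xs):
        return 0
    return sum(prod(c) for c in combinations(xs, k))        # k = 0 gives the empty product, i.e. 1

def csp(k, xs):
    """Complete symmetric polynomial h_k^{(n)}(x), Eq (1.2)."""
    if k < 0:
        return 0
    return sum(prod(c) for c in combinations_with_replacement(xs, k))

x = (2, 3, 5)
print(esp(2, x), csp(2, x))                                  # 31 and 69
print(len(list(combinations(x, 2))) == comb(3, 2))                          # C(n, k) terms
print(len(list(combinations_with_replacement(x, 2))) == comb(3 + 2 - 1, 2)) # C(n+k-1, k) terms
```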
Theorem 1.1 (Euler's theorem on homogeneous functions). Let $\mathbf{x}\in\mathbb{R}^n$. If the function $f:\mathbb{R}^n\to\mathbb{R}$ is homogeneous of degree $m$, then
$$\sum_{i=1}^{n} x_i\,\frac{\partial f(\mathbf{x})}{\partial x_i}=m\, f(\mathbf{x}).$$
The generating functions for the ESP and the CSP are given, respectively, by
$$E_n(t;\mathbf{x})=\prod_{j=1}^{n}(1+x_jt)=\sum_{k=0}^{n}\sigma_k^{(n)}(\mathbf{x})\,t^k, \tag{1.3}$$
and
$$H_n(t;\mathbf{x})=\prod_{j=1}^{n}(1-x_jt)^{-1}=\sum_{k=0}^{\infty}h_k^{(n)}(\mathbf{x})\,t^k. \tag{1.4}$$
We can rewrite (1.3) and (1.4), respectively, as follows:
$$\sum_{k=0}^{n}\sigma_k^{(n)}(\mathbf{x})\,t^k=(1+x_nt)\sum_{k=0}^{n-1}\sigma_k^{(n-1)}(x_1,x_2,\ldots,x_{n-1})\,t^k, \tag{1.5}$$
and
$$(1-x_nt)\sum_{k=0}^{\infty}h_k^{(n)}(\mathbf{x})\,t^k=\sum_{k=0}^{\infty}h_k^{(n-1)}(x_1,x_2,\ldots,x_{n-1})\,t^k. \tag{1.6}$$
From (1.3) and (1.4), we see that $E_n(t;\mathbf{x})\,H_n(-t;\mathbf{x})=1$; consequently [17, Eq (1)],
$$\sum_{k=0}^{n}(-1)^k\sigma_k^{(n)}(\mathbf{x})\,h_{n-k}^{(n)}(\mathbf{x})=0. \tag{1.7}$$
Furthermore, it is important to note that the ESP and the CSP of degree $k$ are interconnected through the Jacobi–Trudi and Nägelsbach–Kostka identities, respectively,
$$\sigma_k^{(n)}(\mathbf{x})=\det\!\left(\left[h_{1-i+j}^{(n)}(\mathbf{x})\right]_{1\le i,j\le k}\right), \tag{1.8}$$
$$h_k^{(n)}(\mathbf{x})=\det\!\left(\left[\sigma_{1-i+j}^{(n)}(\mathbf{x})\right]_{1\le i,j\le k}\right), \tag{1.9}$$
for any positive integer $k$ (see [9]).

Now, by comparing the coefficients of $t^k$ in (1.5), the ESP satisfies
$$\sigma_k^{(n)}(\mathbf{x})=\sigma_k^{(n-1)}(x_1,x_2,\ldots,x_{n-1})+x_n\,\sigma_{k-1}^{(n-1)}(x_1,x_2,\ldots,x_{n-1}). \tag{1.10}$$
Similarly, for the CSP, by comparing the coefficients of $t^k$ in (1.6), we have
$$h_k^{(n)}(\mathbf{x})=h_k^{(n-1)}(x_1,x_2,\ldots,x_{n-1})+x_n\,h_{k-1}^{(n)}(\mathbf{x}). \tag{1.11}$$
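The recurrences (1.10) and (1.11) also give an efficient way to tabulate all ESP and CSP values at once. The following sketch (our own illustration under assumed helper names, not code from the paper) builds the tables by adding one variable at a time and then checks the convolution identity (1.7) numerically.

```python
# Illustrative sketch: dynamic programming via the recurrences (1.10) and (1.11).
def esp_table(xs):
    """sigma[k] for k = 0..n, built with (1.10) by adding one variable at a time."""
    sigma = [1]                                   # sigma_0^{(0)} = 1
    for x in xs:
        sigma = [1] + [sigma[k] + x * sigma[k - 1] for k in range(1, len(sigma))] + [x * sigma[-1]]
    return sigma

def csp_table(xs, kmax):
    """h[k] for k = 0..kmax, built with (1.11): h_k^{(n)} = h_k^{(n-1)} + x_n h_{k-1}^{(n)}."""
    h = [1] + [0] * kmax                          # h_k^{(0)} = 0 for k >= 1
    for x in xs:
        for k in range(1, kmax + 1):
            h[k] = h[k] + x * h[k - 1]
    return h

x = (2.0, -1.0, 3.0, 0.5)
n = len(x)
sigma, h = esp_table(x), csp_table(x, n)
check = sum((-1) ** k * sigma[k] * h[n - k] for k in range(n + 1))   # Eq (1.7)
print(sigma, h, abs(check) < 1e-12)
```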
By using the symmetry property of $\sigma_k^{(n)}(\mathbf{x})$, we see that
$$\sigma_k^{(n)}(\mathbf{x})=\sigma_k^{(n-1)}(x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_n)+x_i\,\sigma_{k-1}^{(n-1)}(x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_n),$$
for all $k=0,1,2,\ldots$ and $i=1,2,\ldots,n$.

By differentiating the recurrence relation (1.10) with respect to $x_i$, we obtain
$$\frac{\partial\sigma_k^{(n)}(\mathbf{x})}{\partial x_i}=\sigma_{k,i}^{(n)}(\mathbf{x})=\sigma_{k-1}^{(n-1)}(x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_n), \tag{1.12}$$
for all $k=0,1,2,\ldots$ (see [11]). Moreover, using (1.3) and (1.12) yields
$$\sigma_{k-1}^{(n)}(\mathbf{x})=\sigma_{k,i}^{(n)}(\mathbf{x})+x_i\,\sigma_{k-1,i}^{(n)}(\mathbf{x}), \tag{1.13}$$
for all $k=0,1,2,\ldots$ and $i=1,2,\ldots,n$. Repeated application of (1.13) gives
$$\sigma_{k,i}^{(n)}(\mathbf{x})=\sum_{j=1}^{k}(-1)^{j-1}\sigma_{k-j}^{(n)}(\mathbf{x})\,x_i^{j-1}, \tag{1.14}$$
for all $k=0,1,2,\ldots$ and $i=1,2,\ldots,n$.

In a similar manner, the CSP satisfies
$$\frac{\partial h_k^{(n)}(\mathbf{x})}{\partial x_i}=h_{k,i}^{(n)}(\mathbf{x})=h_{k-1}^{(n)}(\mathbf{x})+x_i\,h_{k-1,i}^{(n)}(\mathbf{x}), \tag{1.15}$$
and repeated application of (1.15) gives
$$h_{k,i}^{(n)}(\mathbf{x})=\sum_{j=1}^{k}h_{k-j}^{(n)}(\mathbf{x})\,x_i^{j-1}, \tag{1.16}$$
for all $k=0,1,2,\ldots$ and $i=1,2,\ldots,n$.
The structure of the remaining sections of this article is outlined as follows: In Section 2, we introduce new identities for the ESP and the CSP. Section 3 includes applications of these identities, supplemented with numerical examples. Finally, Section 4 presents the conclusion of the article.
In this section, we are going to introduce novel identities for the ESP and the CSP. We begin with the following result, which comes directly from [15, Theorem 1.1].
Corollary 2.1. Let $n$ and $m$ be any positive integers. If $\mathbf{x}\in\mathbb{C}^n$ and $\mathbf{y}\in\mathbb{C}^m$, then
$$h_r^{(n+m)}(\mathbf{x},\mathbf{y})=\sum_{k=0}^{r}h_k^{(n)}(\mathbf{x})\,h_{r-k}^{(m)}(\mathbf{y}), \tag{2.1}$$
for all $r=0,1,2,\ldots$.

As a direct consequence of Corollary 2.1, we can infer that the CSP adheres to the following identity:
$$h_k^{(n)}(\mathbf{x})=\sum_{j=0}^{k}h_j^{(i)}(x_1,\ldots,x_i)\,h_{k-j}^{(n-i)}(x_{i+1},\ldots,x_n), \tag{2.2}$$
for a non-negative integer $k$. The following result is a particular case of Corollary 2.1.
Corollary 2.2. For any positive integer n and any non-negative integer k, we have
$$h_k^{(2n)}(\mathbf{x},-\mathbf{x})=\begin{cases}0, & \text{if } k \text{ is odd},\\ h_{k/2}^{(n)}(x_1^2,x_2^2,\ldots,x_n^2), & \text{if } k \text{ is even}.\end{cases} \tag{2.3}$$
The following result generalizes Corollary 2.2.

Proposition 2.1. For any positive integer $n$ and any non-negative integers $k$ and $r$ such that $0\le r\le\lceil n/2\rceil$, we have
$$h_k^{(n)}(x_1,\ldots,x_r,-x_1,\ldots,-x_r,y_1,\ldots,y_{n-2r})=\sum_{\substack{\ell=0\\ \ell\equiv 0\ (\mathrm{mod}\ 2)}}^{k}h_{\ell/2}^{(r)}(x_1^2,\ldots,x_r^2)\,h_{k-\ell}^{(n-2r)}(y_1,\ldots,y_{n-2r}), \tag{2.4}$$
for all $k=0,1,2,\ldots$, where $\lceil\cdot\rceil$ denotes the ceiling function.
Proof. Using (1.4), we have
$$\sum_{k=0}^{\infty}h_k^{(n)}(x_1,\ldots,x_r,-x_1,\ldots,-x_r,y_1,\ldots,y_{n-2r})\,t^k=\left(\prod_{i=1}^{r}(1-x_i^2t^2)^{-1}\right)\left(\prod_{i=1}^{n-2r}(1-y_it)^{-1}\right).$$
By applying Corollary 2.2 to the first term on the right-hand side of the previous equation, we obtain
$$\begin{aligned}\sum_{k=0}^{\infty}h_k^{(n)}(x_1,\ldots,x_r,-x_1,\ldots,-x_r,y_1,\ldots,y_{n-2r})\,t^k&=\left(\sum_{\substack{\ell=0\\ \ell\equiv 0\ (\mathrm{mod}\ 2)}}^{\infty}h_{\ell/2}^{(r)}(x_1^2,\ldots,x_r^2)\,t^{\ell}\right)\left(\sum_{\ell=0}^{\infty}h_{\ell}^{(n-2r)}(y_1,\ldots,y_{n-2r})\,t^{\ell}\right)\\&=\sum_{k=0}^{\infty}\left(\sum_{\substack{\ell=0\\ \ell\equiv 0\ (\mathrm{mod}\ 2)}}^{k}h_{\ell/2}^{(r)}(x_1^2,\ldots,x_r^2)\,h_{k-\ell}^{(n-2r)}(y_1,\ldots,y_{n-2r})\right)t^k.\end{aligned} \tag{2.5}$$
Equation (2.5) can be obtained by utilizing the Cauchy product of two infinite series. This concludes the proof.
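As a quick numerical illustration of Proposition 2.1 (a sketch with ad-hoc helper names, not taken from the paper), one can compare both sides of (2.4) directly:

```python
# Illustrative sketch: numerical check of identity (2.4).
from itertools import combinations_with_replacement
from math import prod, isclose

def csp(k, xs):
    return sum(prod(c) for c in combinations_with_replacement(xs, k)) if k >= 0 else 0

x = (1.3, -0.7)                       # the r paired variables
y = (2.0, 0.4, -1.1)                  # the n - 2r remaining variables
full = x + tuple(-v for v in x) + y   # n = 7 variables, arranged as in (2.4)
for k in range(6):
    lhs = csp(k, full)
    rhs = sum(csp(l // 2, tuple(v * v for v in x)) * csp(k - l, y)
              for l in range(0, k + 1, 2))
    assert isclose(lhs, rhs, rel_tol=1e-12, abs_tol=1e-12)
print("identity (2.4) verified for k = 0,...,5")
```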
The following identities hold for the ESP:
$$\sigma_r^{(n+m)}(\mathbf{x},\mathbf{y})=\sum_{k=0}^{r}\sigma_k^{(n)}(\mathbf{x})\,\sigma_{r-k}^{(m)}(\mathbf{y}), \tag{2.6}$$
and
$$\sigma_k^{(2n+1)}(\mathbf{x},0,-\mathbf{x})=\sigma_k^{(2n)}(\mathbf{x},-\mathbf{x})=\begin{cases}0, & \text{if } k \text{ is odd},\\ (-1)^{k/2}\sigma_{k/2}^{(n)}(x_1^2,x_2^2,\ldots,x_n^2), & \text{if } k \text{ is even},\end{cases} \tag{2.7}$$
for any non-negative integer $k$ and any positive integers $n$ and $m$ (see [10]). The formula (2.6) reduces to
$$\sigma_k^{(n)}(\mathbf{x})=\sum_{j=0}^{k}\sigma_j^{(i)}(x_1,\ldots,x_i)\,\sigma_{k-j}^{(n-i)}(x_{i+1},\ldots,x_n),$$
for any non-negative integer $k$, by splitting the variables $x_1,\ldots,x_n$ into two groups (see [15]). It should be noticed that, from (1.3) and (1.4), we conclude that $h_k^{(n)}(-\mathbf{x})=(-1)^k\,h_k^{(n)}(\mathbf{x})$ and $\sigma_k^{(n)}(-\mathbf{x})=(-1)^k\,\sigma_k^{(n)}(\mathbf{x})$. Now, we are ready to extend Lemma 2 presented in [10] as follows:
Proposition 2.2. For any positive integer $n$ and any non-negative integers $k$ and $r$ such that $0\le r\le\lceil n/2\rceil$, we have
$$\sigma_k^{(n)}(x_1,\ldots,x_r,-x_1,\ldots,-x_r,y_1,\ldots,y_{n-2r})=\sum_{\substack{\ell=0\\ \ell\equiv 0\ (\mathrm{mod}\ 2)}}^{k}(-1)^{\ell/2}\sigma_{\ell/2}^{(r)}(x_1^2,\ldots,x_r^2)\,\sigma_{k-\ell}^{(n-2r)}(y_1,\ldots,y_{n-2r}), \tag{2.8}$$
for all $k=0,1,2,\ldots$.
Proof. Using (1.3), we have
$$\begin{aligned}\sum_{k=0}^{n}\sigma_k^{(n)}(x_1,\ldots,x_r,-x_1,\ldots,-x_r,y_1,\ldots,y_{n-2r})\,t^k&=\prod_{i=1}^{r}(1-x_i^2t^2)\prod_{i=1}^{n-2r}(1+y_it)\\&=\sum_{\substack{\ell=0\\ \ell\equiv 0\ (\mathrm{mod}\ 2)}}^{2r}(-1)^{\ell/2}\sigma_{\ell/2}^{(r)}(x_1^2,\ldots,x_r^2)\,t^{\ell}\ \sum_{\ell=0}^{n-2r}\sigma_{\ell}^{(n-2r)}(y_1,\ldots,y_{n-2r})\,t^{\ell}\\&=\sum_{k=0}^{n}\left(\sum_{\substack{\ell=0\\ \ell\equiv 0\ (\mathrm{mod}\ 2)}}^{k}(-1)^{\ell/2}\sigma_{\ell/2}^{(r)}(x_1^2,\ldots,x_r^2)\,\sigma_{k-\ell}^{(n-2r)}(y_1,\ldots,y_{n-2r})\right)t^k.\end{aligned} \tag{2.9}$$
Comparing the coefficients of $t^k$ on both sides, the required result follows.
In particular, for $r=n$ paired variables $x_1,\ldots,x_n$ and a single additional variable $y_1$ (so that the total number of variables is $2n+1$), Proposition 2.2 gives
$$\sigma_k^{(2n+1)}(\mathbf{x},-\mathbf{x},y_1)=\begin{cases}(-1)^{\frac{k-1}{2}}\,y_1\,\sigma_{\frac{k-1}{2}}^{(n)}(x_1^2,x_2^2,\ldots,x_n^2), & \text{if } k \text{ is odd},\\ (-1)^{\frac{k}{2}}\,\sigma_{\frac{k}{2}}^{(n)}(x_1^2,x_2^2,\ldots,x_n^2), & \text{if } k \text{ is even}.\end{cases} \tag{2.10}$$
It is worth pointing out that if we set $y_1=0$ in Eq (2.10), we essentially arrive at Lemma 2 as presented in [10]. The following results may be obtained by using Corollary 2.1 together with Corollary 2.2.
Corollary 2.3. For $s=0,1,2,\ldots$, the following identities are satisfied:

(1) $\displaystyle\sum_{i=-s}^{s+1}(-1)^i\,h_{s+i}^{(n)}(\mathbf{x})\,h_{s-i+1}^{(n)}(\mathbf{x})=0$;

(2) $\displaystyle\sum_{i=-s}^{s}(-1)^i\,h_{s+i}^{(n)}(\mathbf{x})\,h_{s-i}^{(n)}(\mathbf{x})=h_s^{(n)}(x_1^2,x_2^2,\ldots,x_n^2)$.
Likewise, regarding the ESP, the authors in [18] demonstrated that the following identities hold true for $s=0,1,2,\ldots$:
$$\sum_{i=-s}^{s+1}(-1)^i\,\sigma_{s+i}^{(n)}(\mathbf{x})\,\sigma_{s-i+1}^{(n)}(\mathbf{x})=0, \tag{2.11}$$
and
$$\sum_{i=-s}^{s}(-1)^i\,\sigma_{s+i}^{(n)}(\mathbf{x})\,\sigma_{s-i}^{(n)}(\mathbf{x})=\sigma_s^{(n)}(x_1^2,x_2^2,\ldots,x_n^2). \tag{2.12}$$
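A short numerical check of (2.11) and (2.12) (again an illustrative sketch; the helper `esp` is ours, with $\sigma_k$ taken to vanish outside $0\le k\le n$):

```python
# Illustrative sketch: numerical check of (2.11) and (2.12).
from itertools import combinations
from math import prod, isclose

def esp(k, xs):
    return sum(prod(c) for c in combinations(xs, k)) if 0 <= k <= len(xs) else 0

x = (0.5, -2.0, 3.0, 1.25)
for s in range(4):
    first  = sum((-1) ** i * esp(s + i, x) * esp(s - i + 1, x) for i in range(-s, s + 2))
    second = sum((-1) ** i * esp(s + i, x) * esp(s - i, x)     for i in range(-s, s + 1))
    assert isclose(first, 0.0, abs_tol=1e-12)
    assert isclose(second, esp(s, tuple(v * v for v in x)), rel_tol=1e-12)
print("identities (2.11) and (2.12) verified for s = 0,...,3")
```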
It should be noticed that both $\sigma_k^{(n)}(\mathbf{x})$ and $h_k^{(n)}(\mathbf{x})$ are homogeneous polynomials of degree $k$. The following identities are satisfied by using Theorem 1.1.

Corollary 2.4. For $k=0,1,2,\ldots,n$, the ESP and the CSP satisfy:

(1) $\displaystyle\sum_{i=1}^{n}x_i\,\sigma_{k,i}^{(n)}(\mathbf{x})=k\,\sigma_k^{(n)}(\mathbf{x})$;

(2) $\displaystyle\sum_{i=1}^{n}x_i\,h_{k,i}^{(n)}(\mathbf{x})=k\,h_k^{(n)}(\mathbf{x})$.
The following result introduces some novel additional identities concerning the ESP and the CSP.
Theorem 2.5. For any positive integer n and any non-negative integer k, the ESP and the CSP satisfy the following identities:
(1) $\sigma_k^{(k+1)}(x_1,x_2,\ldots,x_{k+1})=(x_k+x_{k+1})\,\sigma_{k-1}^{(k)}(x_1,x_2,\ldots,x_k)-x_k^2\,\sigma_{k-2}^{(k-1)}(x_1,x_2,\ldots,x_{k-1})$;

(2) $h_k^{(2)}(x_1,x_2)=(x_1+x_2)\,h_{k-1}^{(2)}(x_1,x_2)-x_1x_2\,h_{k-2}^{(2)}(x_1,x_2)$;

(3) $h_k^{(n)}(\mathbf{x})=\displaystyle\sum_{i=1}^{n}x_i\,h_{k-1}^{(i)}(x_1,\ldots,x_i)$;

(4) If $x_1,x_2,\ldots,x_n$ are distinct non-zero variables, then
$$\sigma_k^{(n)}\!\left(\frac{1}{x_1},\frac{1}{x_2},\ldots,\frac{1}{x_n}\right)=\frac{\sigma_{n-k}^{(n)}(\mathbf{x})}{\sigma_n^{(n)}(\mathbf{x})},$$
for all $k=0,1,\ldots,n$;

(5) If $x_1,x_2,\ldots,x_n$ are distinct non-zero variables, then
$$h_k^{(n)}\!\left(\frac{1}{x_1},\frac{1}{x_2},\ldots,\frac{1}{x_n}\right)=\sum_{i=1}^{n}x_i^{-k}\prod_{\substack{j=1\\ j\neq i}}^{n}\frac{x_j}{x_j-x_i}; \tag{2.13}$$

(6) $\displaystyle\sum_{i=1}^{n}x_i^2\,\sigma_{k,i}^{(n)}(\mathbf{x})=\sigma_1^{(n)}(\mathbf{x})\,\sigma_k^{(n)}(\mathbf{x})-(k+1)\,\sigma_{k+1}^{(n)}(\mathbf{x})$;

(7) $\displaystyle\sum_{i=1}^{n}x_i^2\,h_{k,i}^{(n)}(\mathbf{x})=(k+1)\,h_{k+1}^{(n)}(\mathbf{x})-h_1^{(n)}(\mathbf{x})\,h_k^{(n)}(\mathbf{x})$;

(8) $\displaystyle\sum_{i=1}^{n}\sigma_{k+1,i}^{(n)}(\mathbf{x})=(n-k)\,\sigma_k^{(n)}(\mathbf{x})$;

(9) $\displaystyle\sum_{i=1}^{n}h_{k+1,i}^{(n)}(\mathbf{x})=(n+k)\,h_k^{(n)}(\mathbf{x})$;

(10) $\displaystyle\sum_{i=1}^{n}x_i\,\sigma_{k,i,j}^{(n)}(\mathbf{x})=(k-1)\,\sigma_{k,j}^{(n)}(\mathbf{x}),\quad j=1,2,\ldots,n$;

(11) $\displaystyle\sum_{i=1}^{n}x_i\,h_{k,i,j}^{(n)}(\mathbf{x})=(k-1)\,h_{k,j}^{(n)}(\mathbf{x}),\quad j=1,2,\ldots,n$;

(12) $\sigma_{k,i}^{(n)}(\mathbf{x})-\sigma_{k,j}^{(n)}(\mathbf{x})=(x_j-x_i)\,\sigma_{k,i,j}^{(n)}(\mathbf{x}),\quad i,j=1,2,\ldots,n$.
Proof.
● To prove Theorem 2.5 (1), we rewrite the right-hand side as follows:
$$(x_k+x_{k+1})\,\sigma_{k-1}^{(k)}(x_1,\ldots,x_k)-x_k^2\,\sigma_{k-2}^{(k-1)}(x_1,\ldots,x_{k-1})=x_k\left(\sigma_{k-1}^{(k)}(x_1,\ldots,x_k)-x_k\,\sigma_{k-2}^{(k-1)}(x_1,\ldots,x_{k-1})\right)+x_{k+1}\,\sigma_{k-1}^{(k)}(x_1,\ldots,x_k).$$
By applying the recurrence relation (1.10), this completes the proof of Theorem 2.5 (1).
● To prove Theorem 2.5 (2), we express the right-hand side in the following manner:
$$(x_1+x_2)\,h_{k-1}^{(2)}(x_1,x_2)-x_1x_2\,h_{k-2}^{(2)}(x_1,x_2)=x_1\left(h_{k-1}^{(2)}(x_1,x_2)-x_2\,h_{k-2}^{(2)}(x_1,x_2)\right)+x_2\,h_{k-1}^{(2)}(x_1,x_2).$$
By using the recurrence relation given in Eq (1.11), we have now completed the proof of Theorem 2.5 (2).
● To prove Theorem 2.5 (3), we directly apply and repeatedly use the recurrence relation (1.11).
● To prove Theorem 2.5 (4), we replace $x_j$ by $\frac{1}{x_j}$ in the generating function (1.3) of the elementary symmetric polynomials. Hence, we obtain
$$\sum_{k=0}^{n}\sigma_k^{(n)}\!\left(\frac{1}{x_1},\frac{1}{x_2},\ldots,\frac{1}{x_n}\right)t^k=\frac{\prod_{j=1}^{n}(t+x_j)}{x_1x_2\cdots x_n}=\frac{\sum_{k=0}^{n}\sigma_{n-k}^{(n)}(\mathbf{x})\,t^k}{\sigma_n^{(n)}(\mathbf{x})}.$$
Note that the numerator of the last term above comes from Vieta's theorem (see [19]). Furthermore, by comparing the coefficients of $t^k$, the proof of Theorem 2.5 (4) is complete.
● To prove Theorem 2.5 (5), since x1,x2,…,xn are distinct, then by partial fraction decomposition, we have
$$\sum_{k=0}^{\infty}h_k^{(n)}\!\left(\frac{1}{x_1},\frac{1}{x_2},\ldots,\frac{1}{x_n}\right)t^k=\prod_{i=1}^{n}\frac{1}{1-\frac{t}{x_i}}=\sum_{i=1}^{n}\lambda_i\left(1-\frac{t}{x_i}\right)^{-1},$$
where
$$\lambda_i=\left.\prod_{\substack{j=1\\ j\neq i}}^{n}\left(1-\frac{t}{x_j}\right)^{-1}\right|_{t=x_i}=\prod_{\substack{j=1\\ j\neq i}}^{n}\frac{x_j}{x_j-x_i},$$
for all $i=1,2,\ldots,n$. Consequently,
$$\sum_{k=0}^{\infty}h_k^{(n)}\!\left(\frac{1}{x_1},\frac{1}{x_2},\ldots,\frac{1}{x_n}\right)t^k=\sum_{i=1}^{n}\lambda_i\sum_{k=0}^{\infty}x_i^{-k}\,t^k=\sum_{k=0}^{\infty}\left(\sum_{i=1}^{n}\lambda_i\,x_i^{-k}\right)t^k=\sum_{k=0}^{\infty}\left(\sum_{i=1}^{n}x_i^{-k}\prod_{\substack{j=1\\ j\neq i}}^{n}\frac{x_j}{x_j-x_i}\right)t^k,$$
and the proof of Theorem 2.5 (5) is complete.
● To prove Theorem 2.5 (6), we rewrite the left-hand side as follows:
$$\sum_{i=1}^{n}x_i^2\,\sigma_{k,i}^{(n)}(\mathbf{x})=\sum_{i=1}^{n}x_i\left(x_i\,\sigma_{k,i}^{(n)}(\mathbf{x})\right).$$
Now, by using the recurrences (1.13) and the identity (1) in Corollary 2.4, we obtain Theorem 2.5 (6).
● To prove Theorem 2.5 (7), we use the recurrence relation (1.15), and applying the identity (2) from Corollary 2.4, we establish Theorem 2.5 (7).
● To prove Theorem 2.5 (8), we will rewrite (1.13) as
$$\sigma_k^{(n)}(\mathbf{x})=\sigma_{k+1,i}^{(n)}(\mathbf{x})+x_i\,\sigma_{k,i}^{(n)}(\mathbf{x}).$$
Summing both sides of the above identity over $i$ from 1 to $n$ and using identity (1) in Corollary 2.4, we obtain
$$n\,\sigma_k^{(n)}(\mathbf{x})=\sum_{i=1}^{n}\sigma_{k+1,i}^{(n)}(\mathbf{x})+k\,\sigma_k^{(n)}(\mathbf{x}).$$
● To prove Theorem 2.5 (9), similarly, we will rewrite (1.15) as
$$h_{k+1,i}^{(n)}(\mathbf{x})=h_k^{(n)}(\mathbf{x})+x_i\,h_{k,i}^{(n)}(\mathbf{x}).$$
Summing both sides over $i$ from 1 to $n$ and using identity (2) in Corollary 2.4, we directly obtain identity Theorem 2.5 (9).
● To prove Theorem 2.5 (10), we rewrite the identity (1) in Corollary 2.4 as follows:
$$\sum_{\substack{i=1\\ i\neq j}}^{n}x_i\,\sigma_{k,i}^{(n)}(\mathbf{x})+x_j\,\sigma_{k,j}^{(n)}(\mathbf{x})=k\,\sigma_k^{(n)}(\mathbf{x}).$$
Differentiating both sides with respect to $x_j$ gives
$$\sum_{\substack{i=1\\ i\neq j}}^{n}x_i\,\sigma_{k,i,j}^{(n)}(\mathbf{x})+\sigma_{k,j}^{(n)}(\mathbf{x})=k\,\sigma_{k,j}^{(n)}(\mathbf{x}).$$
Since $\sigma_{k,j,j}^{(n)}(\mathbf{x})=0$, we directly get identity Theorem 2.5 (10).

● To prove Theorem 2.5 (11), in a similar way, by using (2) in Corollary 2.4 and noting that $h_{k,j,j}^{(n)}(\mathbf{x})\neq 0$, we see that Theorem 2.5 (11) is satisfied.
● To prove Theorem 2.5 (12), we rewrite the recurrence relation (1.13) as
$$\sigma_{k,i}^{(n)}(\mathbf{x})=\sigma_{k-1}^{(n)}(\mathbf{x})-x_i\,\sigma_{k-1,i}^{(n)}(\mathbf{x}).$$
Partial differentiation with respect to $x_j$ then gives
$$\sigma_{k,i,j}^{(n)}(\mathbf{x})=\sigma_{k-1,j}^{(n)}(\mathbf{x})-x_i\,\sigma_{k-1,i,j}^{(n)}(\mathbf{x}). \tag{2.14}$$
Interchanging the roles of $i$ and $j$, we likewise conclude that
$$\sigma_{k,j,i}^{(n)}(\mathbf{x})=\sigma_{k-1,i}^{(n)}(\mathbf{x})-x_j\,\sigma_{k-1,j,i}^{(n)}(\mathbf{x}). \tag{2.15}$$
Since $\sigma_k^{(n)}(\mathbf{x})$ is symmetric, so that $\sigma_{k,i,j}^{(n)}(\mathbf{x})=\sigma_{k,j,i}^{(n)}(\mathbf{x})$, we complete the proof by subtracting (2.15) from (2.14).
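Several of the identities in Theorem 2.5 are easy to confirm numerically by evaluating the partial derivatives through (1.12). The following sketch (our own illustration; the helper names are assumptions, not the paper's) spot-checks identities (4), (6), and (8):

```python
# Illustrative sketch: numerical spot-check of Theorem 2.5 (4), (6), (8).
from itertools import combinations
from math import prod, isclose

def esp(k, xs):
    return sum(prod(c) for c in combinations(xs, k)) if 0 <= k <= len(xs) else 0

def esp_d(k, xs, i):
    """sigma_{k,i}^{(n)}(x): by (1.12) it equals sigma_{k-1} of x with x_i removed."""
    return esp(k - 1, xs[:i] + xs[i + 1:])

x = (1.5, -0.5, 2.0, 3.0, -1.25)      # distinct, non-zero values
n = len(x)
for k in range(n + 1):
    # (4): sigma_k(1/x_1,...,1/x_n) = sigma_{n-k}(x) / sigma_n(x)
    assert isclose(esp(k, tuple(1 / v for v in x)), esp(n - k, x) / esp(n, x), rel_tol=1e-12)
    # (6): sum_i x_i^2 sigma_{k,i} = sigma_1 sigma_k - (k+1) sigma_{k+1}
    lhs = sum(v * v * esp_d(k, x, i) for i, v in enumerate(x))
    assert isclose(lhs, esp(1, x) * esp(k, x) - (k + 1) * esp(k + 1, x), rel_tol=1e-12, abs_tol=1e-12)
    # (8): sum_i sigma_{k+1,i} = (n-k) sigma_k
    assert isclose(sum(esp_d(k + 1, x, i) for i in range(n)), (n - k) * esp(k, x), rel_tol=1e-12, abs_tol=1e-12)
print("Theorem 2.5 (4), (6), (8) verified numerically")
```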
Additionally, the authors in [16] showed that the CSP satisfies the following identity:
$$h_{k,i}^{(n)}(\mathbf{x})-h_{k,j}^{(n)}(\mathbf{x})=(x_i-x_j)\,h_{k,i,j}^{(n)}(\mathbf{x}), \tag{2.16}$$
for $i,j,k=1,2,\ldots,n$.

Based on the Schur-concavity of $\sigma_k^{(n)}(\mathbf{x})$ on $\mathbb{R}^n_{+}$ and identity (12) in Theorem 2.5, we see that $\sigma_{k,i,j}^{(n)}(\mathbf{x})\ge 0$ holds true for all $i,j,k=1,2,\ldots,n$ and $\mathbf{x}\in\mathbb{R}^n_{+}$ (see [20]). Similarly, the Schur-convexity of $h_k^{(n)}(\mathbf{x})$ of even degree on $\mathbb{R}^n$, combined with identity (2.16), yields $h_{k,i,j}^{(n)}(\mathbf{x})\ge 0$ for all $i,j=1,2,\ldots,n$ and every even positive integer $k$ (see [21]).
The complete symmetric polynomial can be written as a rational function, as shown by Jacobi (see [4]). The author in [22] provided a proof of this fact using matrix decomposition. The current paper gives a proof using partial fractions.

Theorem 2.6. Let $n$ be a positive integer and let $x_1,x_2,\ldots,x_n$ be distinct variables. Then
$$h_k^{(n)}(\mathbf{x})=\sum_{i=1}^{n}\frac{x_i^{n+k-1}}{\prod_{\substack{j=1\\ j\neq i}}^{n}(x_i-x_j)}. \tag{2.17}$$
Proof. Since the variables $x_1,x_2,\ldots,x_n$ are all distinct from each other, we can use partial fraction decomposition to write
$$H(t)=\prod_{i=1}^{n}\frac{1}{1-x_it}=\sum_{i=1}^{n}\frac{a_i}{1-x_it},$$
where, for all $i=1,2,\ldots,n$, $a_i$ is defined as
$$a_i=\left.\frac{1}{\prod_{\substack{j=1\\ j\neq i}}^{n}(1-x_jt)}\right|_{t=1/x_i}=\frac{x_i^{n-1}}{\prod_{\substack{j=1\\ j\neq i}}^{n}(x_i-x_j)}.$$
Hence,
$$\sum_{k=0}^{\infty}h_k^{(n)}(\mathbf{x})\,t^k=\sum_{i=1}^{n}a_i\sum_{k=0}^{\infty}x_i^k\,t^k=\sum_{k=0}^{\infty}\left(\sum_{i=1}^{n}a_i\,x_i^k\right)t^k=\sum_{k=0}^{\infty}\left(\sum_{i=1}^{n}\frac{x_i^{n+k-1}}{\prod_{\substack{j=1\\ j\neq i}}^{n}(x_i-x_j)}\right)t^k.$$
The required result follows.
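A brief numerical check of formula (2.17) (an illustrative sketch; the helper `csp` is ours, not the paper's):

```python
# Illustrative sketch: numerical check of the rational representation (2.17).
from itertools import combinations_with_replacement
from math import prod, isclose

def csp(k, xs):
    return sum(prod(c) for c in combinations_with_replacement(xs, k))

x = (1.0, 2.5, -0.75, 4.0)            # distinct nodes, as required by Theorem 2.6
n = len(x)
for k in range(6):
    rhs = sum(x[i] ** (n + k - 1) / prod(x[i] - x[j] for j in range(n) if j != i)
              for i in range(n))
    assert isclose(csp(k, x), rhs, rel_tol=1e-10, abs_tol=1e-9)
print("formula (2.17) verified for k = 0,...,5")
```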
The main objective of the current section is to demonstrate three potential applications. Firstly, we will concentrate on the inversion of a generalized Vandermonde matrix. Secondly, we will explore specific applications concerning the determinant of two special tri-diagonal matrices. Lastly, we will discuss the representation of Jacobi polynomials in terms of their zeros.
The Vandermonde matrix finds applications in various fields, including mathematics ([23,24,25]), engineering ([26,27]), and natural science ([28,29,30]).

Let $p\in\mathbb{R}$. A generalized Vandermonde matrix, denoted by $V_{n,p}(x_1,x_2,\ldots,x_n)$ (for short, $V_{n,p}$), is defined as $V_{n,p}=\left[x_j^{p+i-1}\right]_{i,j=1}^{n}$ for distinct nodes $x_1,x_2,\ldots,x_n\in\mathbb{C}$. Here, we assume that $V_{n,p}$ is an invertible matrix. It is clear that the classical Vandermonde matrix is a special case of $V_{n,p}(x_1,x_2,\ldots,x_n)$ with $p=0$. Following [1], the explicit formula for the determinant of a generalized Vandermonde matrix $V_{n,p}$ is given by
$$\det(V_{n,p})=x_1^p\prod_{i=2}^{n}\left(x_i^p\prod_{j=1}^{i-1}(x_i-x_j)\right).$$
In their recent work [2], concise and rigorous proofs were presented for the determinant and inverse formulas of a generalized Vandermonde matrix. For the convenience of the reader, we mention the following result:
Theorem 3.1 ([2]). Consider a generalized Vandermonde matrix $V_{n,p}$ with distinct nodes $x_1,x_2,\ldots,x_n\in\mathbb{C}$. Then, we have $V_{n,p}^{-1}=\left[\frac{N_{ij}}{D(x_i)}\right]_{i,j=1}^{n}$, where
$$N_{ij}=\sum_{\ell=0}^{n-j}\varrho_\ell\,x_i^{n-j-\ell}, \tag{3.1}$$
$$D(x_i)=\sum_{\ell=1}^{n}\ell\,\varrho_{n-\ell}\,x_i^{p+\ell-1}, \tag{3.2}$$
and
$$\varrho_\ell=(-1)^\ell\,\sigma_\ell^{(n)}(x_1,x_2,\ldots,x_n). \tag{3.3}$$
It is advantageous to note, according to formula (3.2), that to every Vandermonde node $x_i$ there corresponds a denominator $D(x_i)$; hence all entries in the $i$-th row of $V_{n,p}^{-1}$ share the same denominator. Moreover, in $V_{n,p}^{-1}(x_1,x_2,\ldots,x_{n/2},-x_1,-x_2,\ldots,-x_{n/2})$, we can infer that
$$D(-x_i)=(-1)^{p+1}D(x_i). \tag{3.4}$$
Due to the relationship described in Eq (3.4), there is a reduction in the computational cost of inverting the Vandermonde matrix $V_{n,p}(x_1,x_2,\ldots,x_{n/2},-x_1,-x_2,\ldots,-x_{n/2})$.

Proposition 2.2 enables us to introduce the following result, which encompasses Corollaries 1 and 2 in [2] and generalizes them for computing the inverse of $V_{n,p}$ with distinct nodes $x_1,x_2,\ldots,x_r,-x_1,-x_2,\ldots,-x_r,y_1,y_2,\ldots,y_{n-2r}\in\mathbb{C}$, where $r$ is a non-negative integer such that $0\le r\le\lceil n/2\rceil$.

Corollary 3.2. Consider the generalized Vandermonde matrix $V_{n,p}$ with distinct nodes $x_1,x_2,\ldots,x_r,-x_1,-x_2,\ldots,-x_r,y_1,y_2,\ldots,y_{n-2r}\in\mathbb{C}$, where $r$ is a non-negative integer such that $0\le r\le\lceil n/2\rceil$. Then we have
$$V_{n,p}^{-1}=\left[\frac{N_{ij}}{D(x_i)}\right]_{i,j=1}^{n},$$
where
$$N_{ij}=\sum_{\ell=0}^{n-j}\varrho_\ell\,x_i^{n-j-\ell}, \tag{3.5}$$
$$D(x_i)=\sum_{\ell=1}^{n}\ell\,\varrho_{n-\ell}\,x_i^{p+\ell-1}, \tag{3.6}$$
and
$$\varrho_\ell=\sum_{\substack{\kappa=0\\ \kappa\equiv 0\ (\mathrm{mod}\ 2)}}^{\ell}(-1)^{(\kappa+2\ell)/2}\,\sigma_{\kappa/2}^{(r)}(x_1^2,\ldots,x_r^2)\,\sigma_{\ell-\kappa}^{(n-2r)}(y_1,\ldots,y_{n-2r}). \tag{3.7}$$
Notice that, when $r=\frac n2$ in Corollary 3.2, we obtain Corollary 1 in [2], which involves computing the inverse of $V_{n,0}(x_1,x_2,\ldots,x_{n/2},-x_1,-x_2,\ldots,-x_{n/2})$. The special case $r=\frac{n-1}{2}$ and $y_1=0$ gives Corollary 2 in [2], which entails computing the inverse of $V_{n,0}(x_1,x_2,\ldots,x_{\frac{n-1}{2}},0,-x_1,-x_2,\ldots,-x_{\frac{n-1}{2}})$, with ones as the first-row entries. The benefit of Corollary 3.2 is that it reduces the computational cost of computing the inverse of $V_{n,p}$ with distinct nodes $x_1,x_2,\ldots,x_r,-x_1,-x_2,\ldots,-x_r,y_1,y_2,\ldots,y_{n-2r}\in\mathbb{C}$, as the number of ESP evaluations $\sigma_k^{(n)}$ in (3.7) decreases approximately $r$-fold.
Example 3.3. Consider the Vandermonde matrix
$$V_{5,\frac12}(-2,-1,2,1,3)=\begin{bmatrix}\sqrt{-2} & \sqrt{-1} & \sqrt{2} & 1 & \sqrt{3}\\ (\sqrt{-2})^3 & (\sqrt{-1})^3 & (\sqrt{2})^3 & 1 & (\sqrt{3})^3\\ (\sqrt{-2})^5 & (\sqrt{-1})^5 & (\sqrt{2})^5 & 1 & (\sqrt{3})^5\\ (\sqrt{-2})^7 & (\sqrt{-1})^7 & (\sqrt{2})^7 & 1 & (\sqrt{3})^7\\ (\sqrt{-2})^9 & (\sqrt{-1})^9 & (\sqrt{2})^9 & 1 & (\sqrt{3})^9\end{bmatrix}.$$
For this matrix, we have $n=5$, $x_1=-2$, $x_2=-1$, $x_3=2$, $x_4=1$, $x_5=3$, $p=\frac12$, and $r=2$. As stated in [2], through the implementation of the VMIEA algorithm and employing formula (3.7), we can infer that $\varrho_0=1$, $\varrho_1=-3$, $\varrho_2=-5$, $\varrho_3=15$, $\varrho_4=4$, and $\varrho_5=-12$. Furthermore, utilizing formula (3.6), we find that $D(-2)=60\sqrt{2}\,\mathrm{i}$, $D(-1)=-24\,\mathrm{i}$, $D(2)=-12\sqrt{2}$, $D(1)=12$, and $D(3)=40\sqrt{3}$, where $\mathrm{i}=\sqrt{-1}$. Thus, the inverse of $V_{5,\frac12}(-2,-1,2,1,3)$ can be expressed as
$$V_{5,\frac12}^{-1}=\begin{bmatrix}\frac{-6}{60\sqrt{2}\,\mathrm{i}} & \frac{5}{60\sqrt{2}\,\mathrm{i}} & \frac{5}{60\sqrt{2}\,\mathrm{i}} & \frac{-5}{60\sqrt{2}\,\mathrm{i}} & \frac{1}{60\sqrt{2}\,\mathrm{i}}\\[2pt] \frac{-12}{-24\,\mathrm{i}} & \frac{16}{-24\,\mathrm{i}} & \frac{-1}{-24\,\mathrm{i}} & \frac{-4}{-24\,\mathrm{i}} & \frac{1}{-24\,\mathrm{i}}\\[2pt] \frac{6}{-12\sqrt{2}} & \frac{1}{-12\sqrt{2}} & \frac{-7}{-12\sqrt{2}} & \frac{-1}{-12\sqrt{2}} & \frac{1}{-12\sqrt{2}}\\[2pt] \frac{12}{12} & \frac{8}{12} & \frac{-7}{12} & \frac{-2}{12} & \frac{1}{12}\\[2pt] \frac{4}{40\sqrt{3}} & \frac{0}{40\sqrt{3}} & \frac{-5}{40\sqrt{3}} & \frac{0}{40\sqrt{3}} & \frac{1}{40\sqrt{3}}\end{bmatrix}=\begin{bmatrix}\frac{\sqrt{2}}{20}\,\mathrm{i} & -\frac{\sqrt{2}}{24}\,\mathrm{i} & -\frac{\sqrt{2}}{24}\,\mathrm{i} & \frac{\sqrt{2}}{24}\,\mathrm{i} & -\frac{\sqrt{2}}{120}\,\mathrm{i}\\[2pt] -\frac{1}{2}\,\mathrm{i} & \frac{2}{3}\,\mathrm{i} & -\frac{1}{24}\,\mathrm{i} & -\frac{1}{6}\,\mathrm{i} & \frac{1}{24}\,\mathrm{i}\\[2pt] -\frac{1}{2\sqrt{2}} & -\frac{1}{12\sqrt{2}} & \frac{7}{12\sqrt{2}} & \frac{1}{12\sqrt{2}} & -\frac{1}{12\sqrt{2}}\\[2pt] 1 & \frac{2}{3} & -\frac{7}{12} & -\frac{1}{6} & \frac{1}{12}\\[2pt] \frac{1}{10\sqrt{3}} & 0 & -\frac{1}{8\sqrt{3}} & 0 & \frac{1}{40\sqrt{3}}\end{bmatrix}.$$
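The following sketch (our own illustration, not the VMIEA code of [2]) reproduces Example 3.3 numerically: it forms $V_{5,1/2}$ with the principal complex square root, evaluates $\varrho_\ell$ via (3.3) for brevity, $N_{ij}$ via (3.1), and $D(x_i)$ via (3.2), and compares the closed-form inverse with NumPy's numerical inverse.

```python
# Illustrative sketch: cross-checking the closed-form inverse of Example 3.3.
import numpy as np
from itertools import combinations
from math import prod

nodes = [-2, -1, 2, 1, 3]
n, p = 5, 0.5
V = np.array([[complex(x) ** (p + i) for x in nodes] for i in range(n)])   # row i: powers p+i

rho = [(-1) ** l * sum(prod(c) for c in combinations(nodes, l)) for l in range(n + 1)]  # (3.3)
N = np.array([[sum(rho[l] * complex(x) ** (n - j - l) for l in range(n - j + 1))
               for j in range(1, n + 1)] for x in nodes])                               # (3.1)
D = np.array([sum(l * rho[n - l] * complex(x) ** (p + l - 1) for l in range(1, n + 1))
              for x in nodes])                                                          # (3.2)

print(rho)                                              # [1, -3, -5, 15, 4, -12], as in the text
print(np.allclose(N / D[:, None], np.linalg.inv(V)))    # True
```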
The tri-diagonal matrix is defined as $T=[t_{ij}]_{i,j=1}^{n}$ with $t_{ij}=0$ for $|i-j|\ge 2$. This type of matrix is a common occurrence in various scientific and engineering fields, such as algebra [31], physics [32], parallel computing [33], and engineering [34].

Based on identities (1) and (2) in Theorem 2.5, we obtain the following result for calculating the determinants of two particular tri-diagonal matrices.

Corollary 3.4. Consider the real $n\times n$ tri-diagonal matrices of the form
$$A_n=\begin{bmatrix}x_1+x_2 & x_2 & 0 & \cdots & \cdots & 0\\ x_2 & x_2+x_3 & x_3 & \ddots & & \vdots\\ 0 & x_3 & x_3+x_4 & \ddots & \ddots & \vdots\\ \vdots & \ddots & \ddots & \ddots & \ddots & 0\\ \vdots & & \ddots & \ddots & \ddots & x_n\\ 0 & \cdots & \cdots & 0 & x_n & x_n+x_{n+1}\end{bmatrix},$$
and
$$B_n=\begin{bmatrix}a+b & b & 0 & \cdots & \cdots & 0\\ a & a+b & b & \ddots & & \vdots\\ 0 & a & a+b & \ddots & \ddots & \vdots\\ \vdots & \ddots & \ddots & \ddots & \ddots & 0\\ \vdots & & \ddots & \ddots & \ddots & b\\ 0 & \cdots & \cdots & 0 & a & a+b\end{bmatrix}.$$
Then $\det(A_n)=\sigma_n^{(n+1)}(x_1,x_2,\ldots,x_{n+1})$ and $\det(B_n)=h_n^{(2)}(a,b)$.
Proof. Let $\det(A_n)=\triangle_n$ and write $\triangle_1=x_1+x_2=\sigma_1^{(2)}(x_1,x_2)$. The determinant of $A_n$ can be computed via the three-term recurrence relation
$$\triangle_i=(x_i+x_{i+1})\,\triangle_{i-1}-x_i^2\,\triangle_{i-2}, \tag{3.8}$$
for $i=1,2,\ldots,n$, with $\triangle_0=1$ and $\triangle_{-1}=0$ (see [35]). According to identity Theorem 2.5 (1), we can deduce that
$$\triangle_2=(x_2+x_3)\,\sigma_1^{(2)}(x_1,x_2)-x_2^2\,\sigma_0^{(1)}(x_1)=\sigma_2^{(3)}(x_1,x_2,x_3).$$
By repeating this process, we get $\triangle_n=\sigma_n^{(n+1)}(x_1,x_2,\ldots,x_{n+1})$.

Similarly, define $\det(B_n)=\hat{\triangle}_n$ and write $\hat{\triangle}_1=a+b=h_1^{(2)}(a,b)$. Utilizing the three-term recurrence relation
$$\hat{\triangle}_i=(a+b)\,\hat{\triangle}_{i-1}-ab\,\hat{\triangle}_{i-2}, \tag{3.9}$$
for $i=1,2,\ldots,n$, with $\hat{\triangle}_0=1$ and $\hat{\triangle}_{-1}=0$ (see [35]), we can compute the determinant of $B_n$. Based on identity Theorem 2.5 (2), we obtain
$$\hat{\triangle}_2=(a+b)\,h_1^{(2)}(a,b)-ab\,h_0^{(2)}(a,b)=h_2^{(2)}(a,b).$$
By repeating this procedure, we complete the proof.
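Corollary 3.4 is straightforward to verify numerically. The sketch below (our own illustration; the helper `tridiag` is an assumption, not from the paper) builds $A_n$ and $B_n$ and compares their determinants with $\sigma_n^{(n+1)}(x_1,\ldots,x_{n+1})$ and $h_n^{(2)}(a,b)$:

```python
# Illustrative sketch: numerical check of Corollary 3.4.
import numpy as np
from itertools import combinations
from math import prod, isclose

def tridiag(diag, lower, upper):
    M = np.diag(diag).astype(float)
    for i in range(len(diag) - 1):
        M[i + 1, i], M[i, i + 1] = lower[i], upper[i]
    return M

x = [1.0, 2.0, -0.5, 3.0, 1.5, 0.25]                    # x_1, ..., x_{n+1} with n = 5
n = len(x) - 1
A = tridiag([x[i] + x[i + 1] for i in range(n)], x[1:n], x[1:n])
sigma_n = sum(prod(c) for c in combinations(x, n))      # sigma_n^{(n+1)}(x)
print(isclose(np.linalg.det(A), sigma_n, rel_tol=1e-10))

a, b = 0.7, -1.3
B = tridiag([a + b] * n, [a] * (n - 1), [b] * (n - 1))
h_n = sum(a ** i * b ** (n - i) for i in range(n + 1))  # h_n^{(2)}(a, b)
print(isclose(np.linalg.det(B), h_n, rel_tol=1e-10))
```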
The inverse of matrices An and Bn can be calculated using the methods described in [36] or the algorithm outlined in [37]. The following corollary presents some specific cases that can be derived from Corollary 3.4. The proof of this corollary is straightforward and will not be included here.
Corollary 3.5. Consider the tri-diagonal matrices $A_n$ and $B_n$ defined in Corollary 3.4.

(1) If $a=b=1$, then $\det(B_n)=n+1$;

(2) If $a=b=-1$, then $\det(B_n)=(-1)^n(n+1)$;

(3) If $x_i=i-1$, $i=1,2,\ldots,n+1$, then $\det(A_n)=n!$;

(4) If $x_i=i$, $i=1,2,\ldots,n+1$, then $\det(A_n)=(n+1)!\,H_{n+1}$, where $H_n$ denotes the $n$-th harmonic number;

(5) If $x_i=y$, $i=1,2,\ldots,n+1$, then $\det(A_n)=(n+1)\,y^n$;

(6) If $x_i=-y$, $i=1,2,\ldots,n+1$, then $\det(A_n)=(-1)^n(n+1)\,y^n$;

(7) If $x_i=q$ for $i=1,2,\ldots,\lceil n/2\rceil$, and $x_i=-q$ for $i=\lceil n/2\rceil+1,\ldots,n+1$, then
$$\det(A_n)=\begin{cases}0, & \text{if } n \text{ is odd},\\ (-1)^{n/2}\,q^n, & \text{if } n \text{ is even}.\end{cases}$$
In this subsection, we are going to focus on Jacobi polynomials and their zeros. A formula for Legendre polynomials in terms of their zeros was previously presented in [10]. The objective of the present study is to build upon this idea and derive a formula for Jacobi polynomials that expresses them in terms of their zeros.
Jacobi polynomials $P_n^{(\alpha,\beta)}(x)$ are polynomials of degree $n$ that are orthogonal on the interval $[-1,1]$ with respect to the weight function $\omega(x)=(1-x)^{\alpha}(1+x)^{\beta}$. The Jacobi polynomials $P_n^{(\alpha,\beta)}(x)$ are characterized by the two parameters $\alpha,\beta>-1$. According to formula (4.21.2) in [38], we can deduce that $P_n^{(\alpha,\beta)}(x)$ satisfies the following explicit formula:
$$P_n^{(\alpha,\beta)}(x)=\sum_{k=0}^{n}\binom{n+\alpha}{n-k}\binom{n+\alpha+\beta+k}{k}\left(\frac{x-1}{2}\right)^k. \tag{3.10}$$
Following formula (4.5.7) in [38], the Jacobi polynomials satisfy the recurrence relation
$$(2n+\alpha+\beta)(1-x^2)\frac{d}{dx}P_{n-1}^{(\alpha,\beta)}(x)=(n+\alpha+\beta)\bigl(\alpha-\beta+(2n+\alpha+\beta)x\bigr)P_{n-1}^{(\alpha,\beta)}(x)-2n(n+\alpha+\beta)P_n^{(\alpha,\beta)}(x). \tag{3.11}$$
There are various special instances of Jacobi polynomials. These include the Legendre polynomials $P_n(x)$ ($\alpha=\beta=0$), the Chebyshev polynomials of the first kind $T_n(x)$ ($\alpha=\beta=-1/2$), the Chebyshev polynomials of the second kind $U_n(x)$ ($\alpha=\beta=1/2$), the Chebyshev polynomials of the third kind $V_n(x)$ ($\alpha=-1/2,\ \beta=1/2$), the Chebyshev polynomials of the fourth kind $W_n(x)$ ($\alpha=1/2,\ \beta=-1/2$), and the ultraspherical (Gegenbauer) polynomials $C_n^{(\lambda)}(x)$ ($\alpha=\beta>-\frac12$), where $\lambda=\alpha+\frac12$. These polynomials have many applications in mathematics and physics, as demonstrated in works such as [39,40,41,42]. An alternative explicit formula for Jacobi polynomials, equivalent to formula (3.10), is provided by the following result:

Corollary 3.6. Consider the sequence of Jacobi polynomials $\{P_n^{(\alpha,\beta)}(x)\}_{n=0}^{\infty}$. Then the explicit formula (3.10) is equivalent to the following formula:
$$P_n^{(\alpha,\beta)}(x)=\sum_{k=0}^{n}\left[\sum_{\ell=0}^{n-k}\frac{(-1)^{\ell}}{2^{k+\ell}}\binom{n+\alpha}{n-k-\ell}\binom{n+\alpha+\beta+k+\ell}{k+\ell}\binom{k+\ell}{k}\right]x^k. \tag{3.12}$$
Proof. The proof is obtained by applying the binomial theorem to $(x-1)^k$ in formula (3.10), and then applying the associativity and commutativity properties of double summations.
Following this, we will proceed to introduce a result that allows us to express Jacobi polynomials in terms of their zeros.
Theorem 3.7. For any positive integer $n$, let $x_1,x_2,\ldots,x_n$ be the zeros of the Jacobi polynomial $P_n^{(\alpha,\beta)}(x)$. Then
$$P_n^{(\alpha,\beta)}(x)=2^{-n}\binom{2n+\alpha+\beta}{n}\sum_{k=0}^{n}(-1)^{n-k}\,\sigma_{n-k}^{(n)}(x_1,x_2,\ldots,x_n)\,x^k. \tag{3.13}$$
Proof. Using formula (3.12), we can deduce that the highest-degree term of $P_n^{(\alpha,\beta)}(x)$ has coefficient $2^{-n}\binom{2n+\alpha+\beta}{n}$. Since $x_1,x_2,\ldots,x_n$ are the zeros of $P_n^{(\alpha,\beta)}(x)$, the Jacobi polynomial $P_n^{(\alpha,\beta)}(x)$ can be expressed as follows:
$$P_n^{(\alpha,\beta)}(x)=2^{-n}\binom{2n+\alpha+\beta}{n}\,Q(x),$$
where $Q(x)$ is a monic polynomial of degree $n$ with zeros $x_1,x_2,\ldots,x_n$. Furthermore, $Q(x)$ can be written as the product of $n$ linear factors, that is, $Q(x)=\prod_{i=1}^{n}(x-x_i)$. Utilizing Vieta's formula [43, Theorem 33.3] on $Q(x)$ allows us to complete the proof and arrive at the desired result.
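As a numerical illustration of Theorem 3.7 (a sketch of our own; the helpers `binom` and `jacobi_coeffs` are assumptions), one can build $P_n^{(\alpha,\beta)}$ from formula (3.12), compute its zeros, and rebuild the polynomial as $2^{-n}\binom{2n+\alpha+\beta}{n}\prod_i(x-x_i)$, as (3.13) asserts:

```python
# Illustrative sketch: numerical check of the zero-based representation (3.13).
import numpy as np
from math import gamma

def binom(a, b):                                    # generalized binomial via the Gamma function
    return gamma(a + 1) / (gamma(b + 1) * gamma(a - b + 1))

def jacobi_coeffs(n, alpha, beta):
    """Coefficients of P_n^{(alpha,beta)}(x) in descending powers of x, from (3.12)."""
    c = [sum((-1) ** l / 2 ** (k + l)
             * binom(n + alpha, n - k - l)
             * binom(n + alpha + beta + k + l, k + l)
             * binom(k + l, k)
             for l in range(n - k + 1))
         for k in range(n + 1)]
    return np.array(c[::-1])

n, alpha, beta = 5, 0.4, -0.3
coeffs = jacobi_coeffs(n, alpha, beta)
zeros = np.roots(coeffs)                            # the x_1, ..., x_n of Theorem 3.7
lead = 2.0 ** (-n) * binom(2 * n + alpha + beta, n)
rebuilt = lead * np.real(np.poly(zeros))            # lead * prod(x - x_i), i.e. (3.13)
print(np.allclose(rebuilt, coeffs))                 # True
```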
By combining Corollary 3.6 and Theorem 3.7, we can derive the following result, whose proof will be omitted.
Lemma 3.8. Let $n$ be a positive integer and $k$ a non-negative integer. If $x_1,x_2,\ldots,x_n$ are the zeros of the Jacobi polynomial $P_n^{(\alpha,\beta)}(x)$, then
$$\sigma_k^{(n)}(x_1,x_2,\ldots,x_n)=\frac{2^k\,k!}{(2n+\alpha+\beta)!}\binom{n}{k}\sum_{\ell=0}^{k}\frac{(-1)^{\ell+k}}{2^{\ell}}\binom{n+\alpha}{k-\ell}\frac{(2n+\alpha+\beta-k+\ell)!}{\ell!}, \tag{3.14}$$
where factorials of non-integer arguments are understood via the Gamma function.
In addition, it can be shown that the Jacobi polynomial $P_n^{(\alpha,\beta)}(x)$ can be expressed using the zeros of $P_{n-1}^{(\alpha,\beta)}(x)$. The following result provides this fact.

Theorem 3.9. Let $n>1$ be a positive integer, and let $P_n^{(\alpha,\beta)}(x)$ and $P_{n-1}^{(\alpha,\beta)}(x)$ be Jacobi polynomials with zeros $x_1,x_2,\ldots,x_n$ and $y_1,y_2,\ldots,y_{n-1}$, respectively. Then
$$P_n^{(\alpha,\beta)}(x)=\frac{2^{-n}}{2n+\alpha+\beta-1}\binom{2n+\alpha+\beta}{n}\sum_{k=0}^{n}(-1)^k\,A_k\,x^{n-k}, \tag{3.15}$$
with
$$A_k=(2n+\alpha+\beta-k-1)\,\sigma_k^{(n-1)}(\mathbf{y})-\frac{(n+\alpha+\beta)(\alpha-\beta)}{2n+\alpha+\beta}\,\sigma_{k-1}^{(n-1)}(\mathbf{y})-(n-k+1)\,\sigma_{k-2}^{(n-1)}(\mathbf{y}), \tag{3.16}$$
where $\mathbf{y}=(y_1,y_2,\ldots,y_{n-1})$.

Proof. The validity of the result follows by using the recurrence relation (3.11) together with the modified formula (3.13) of the Jacobi polynomials.
If we set $\alpha=\beta$, we obtain the property $P_n^{(\alpha,\alpha)}(-x)=(-1)^n\,P_n^{(\alpha,\alpha)}(x)$. By virtue of Theorem 3.7, the following corollaries for $P_n^{(\alpha,\alpha)}(x)$ with even and odd orders can be derived, respectively.

Corollary 3.10. Let $n$ be an even positive integer, and let $x_1,x_2,\ldots,x_{n/2}$ be the positive zeros of the Jacobi polynomial $P_n^{(\alpha,\alpha)}(x)$. Then $P_n^{(\alpha,\alpha)}(x)$ can be expressed as follows:
$$P_n^{(\alpha,\alpha)}(x)=2^{-n}\binom{2n+2\alpha}{n}\sum_{k=0}^{n/2}(-1)^k\,\sigma_k^{(n/2)}(x_1^2,x_2^2,\ldots,x_{n/2}^2)\,x^{n-2k}. \tag{3.17}$$
Remark 1. Drawing from Corollary 3.10 and Theorem 2.5, let us delve into the following observations. Let $n$ be an even positive integer and $k=0,1,2,\ldots,\frac n2$.

(1) In case $\alpha>-\frac12$, consider that $x_1,x_2,\ldots,x_{n/2}$ are the positive zeros of the ultraspherical polynomial $C_n^{(\lambda)}(x)$ with $\lambda=\alpha+\frac12$; then
$$\sigma_k^{(n/2)}(x_1^2,x_2^2,\ldots,x_{n/2}^2)=\frac{2^{2n-2k}\binom{n-k}{k}\binom{n-k+\alpha-\frac12}{\alpha-\frac12}\binom{n+\alpha}{n}}{\binom{2n+2\alpha}{n}\binom{n+2\alpha}{2\alpha}},\qquad \sigma_k^{(n/2)}\!\left(\frac{1}{x_1^2},\frac{1}{x_2^2},\ldots,\frac{1}{x_{n/2}^2}\right)=\frac{4^k\binom{\frac n2+k}{\frac n2-k}\binom{\frac n2+k+\alpha-\frac12}{\alpha-\frac12}}{\binom{\frac n2+\alpha-\frac12}{\alpha-\frac12}}.$$

(2) In case $\alpha=0$, consider that $x_1,x_2,\ldots,x_{n/2}$ are the positive zeros of the Legendre polynomial $P_n(x)$; then
$$\sigma_k^{(n/2)}(x_1^2,x_2^2,\ldots,x_{n/2}^2)=\frac{\binom{n}{k}\binom{2n-2k}{n}}{\binom{2n}{n}},\qquad \sigma_k^{(n/2)}\!\left(\frac{1}{x_1^2},\frac{1}{x_2^2},\ldots,\frac{1}{x_{n/2}^2}\right)=\frac{\binom{n}{\frac n2-k}\binom{n+2k}{n}}{\binom{n}{\frac n2}}.$$

(3) In case $\alpha=-1/2$, consider that $x_1,x_2,\ldots,x_{n/2}$ are the positive zeros of the Chebyshev polynomial of the first kind $T_n(x)$; then
$$\sigma_k^{(n/2)}(x_1^2,x_2^2,\ldots,x_{n/2}^2)=\frac{n}{4^k(n-k)}\binom{n-k}{k},\qquad \sigma_k^{(n/2)}\!\left(\frac{1}{x_1^2},\frac{1}{x_2^2},\ldots,\frac{1}{x_{n/2}^2}\right)=\frac{4^k\,n}{n+2k}\binom{\frac n2+k}{2k}.$$

(4) In case $\alpha=1/2$, consider that $x_1,x_2,\ldots,x_{n/2}$ are the positive zeros of the Chebyshev polynomial of the second kind $U_n(x)$; then
$$\sigma_k^{(n/2)}(x_1^2,x_2^2,\ldots,x_{n/2}^2)=\frac{1}{4^k}\binom{n-k}{k},\qquad \sigma_k^{(n/2)}\!\left(\frac{1}{x_1^2},\frac{1}{x_2^2},\ldots,\frac{1}{x_{n/2}^2}\right)=4^k\binom{\frac n2+k}{2k}.$$
Corollary 3.11. Let $n$ be an even positive integer, and let $y_1,y_2,\ldots,y_{n/2}$ be the positive zeros of the Jacobi polynomial $P_{n+1}^{(\alpha,\alpha)}(x)$. Then $P_{n+1}^{(\alpha,\alpha)}(x)$ can be expressed as follows:
$$P_{n+1}^{(\alpha,\alpha)}(x)=2^{-(n+1)}\binom{2n+2\alpha+2}{n+1}\sum_{k=0}^{n/2}(-1)^k\,\sigma_k^{(n/2)}(y_1^2,y_2^2,\ldots,y_{n/2}^2)\,x^{n-2k+1}. \tag{3.18}$$
Remark 2. Referring to Corollary 3.11 and Theorem 2.5, we now turn our attention to the following insights. Let $n$ be an even positive integer and $k=0,1,2,\ldots,\frac n2$.

(1) If we set $\alpha>-\frac12$, let us assume that $y_1,y_2,\ldots,y_{n/2}$ represent the positive zeros of the ultraspherical polynomial $C_{n+1}^{(\lambda)}(x)$ with $\lambda=\alpha+\frac12$. Then
$$\sigma_k^{(n/2)}(y_1^2,y_2^2,\ldots,y_{n/2}^2)=\frac{2^{2n-2k+2}\binom{n-k+1}{k}\binom{n-k+\alpha+\frac12}{\alpha-\frac12}\binom{n+\alpha+1}{n+1}}{\binom{2n+2\alpha+2}{n+1}\binom{n+2\alpha+1}{2\alpha}},\qquad \sigma_k^{(n/2)}\!\left(\frac{1}{y_1^2},\frac{1}{y_2^2},\ldots,\frac{1}{y_{n/2}^2}\right)=\frac{2^{2k+1}\binom{\frac n2+k+1}{\frac n2-k}\binom{\frac n2+k+\alpha+\frac12}{\alpha-\frac12}}{(n+2)\binom{\frac n2+\alpha+\frac12}{\alpha-\frac12}}.$$

(2) If we set $\alpha=0$, let us assume that $y_1,y_2,\ldots,y_{n/2}$ represent the positive zeros of the Legendre polynomial $P_{n+1}(x)$. Then
$$\sigma_k^{(n/2)}(y_1^2,y_2^2,\ldots,y_{n/2}^2)=\frac{\binom{n+1}{k}\binom{2n-2k+2}{n+1}}{\binom{2n+2}{n+1}},\qquad \sigma_k^{(n/2)}\!\left(\frac{1}{y_1^2},\frac{1}{y_2^2},\ldots,\frac{1}{y_{n/2}^2}\right)=\frac{\binom{n+1}{\frac n2-k}\binom{n+2k+2}{n+1}}{(n+2)\binom{n+1}{\frac n2}}.$$

(3) If we set $\alpha=-1/2$, let us assume that $y_1,y_2,\ldots,y_{n/2}$ represent the positive zeros of the Chebyshev polynomial of the first kind $T_{n+1}(x)$. Then
$$\sigma_k^{(n/2)}(y_1^2,y_2^2,\ldots,y_{n/2}^2)=\frac{n+1}{4^k(n-k+1)}\binom{n-k+1}{k},\qquad \sigma_k^{(n/2)}\!\left(\frac{1}{y_1^2},\frac{1}{y_2^2},\ldots,\frac{1}{y_{n/2}^2}\right)=\frac{4^k}{2k+1}\binom{\frac n2+k}{2k}.$$

(4) If we set $\alpha=1/2$, let us assume that $y_1,y_2,\ldots,y_{n/2}$ represent the positive zeros of the Chebyshev polynomial of the second kind $U_{n+1}(x)$. Then
$$\sigma_k^{(n/2)}(y_1^2,y_2^2,\ldots,y_{n/2}^2)=\frac{1}{4^k}\binom{n-k+1}{k},\qquad \sigma_k^{(n/2)}\!\left(\frac{1}{y_1^2},\frac{1}{y_2^2},\ldots,\frac{1}{y_{n/2}^2}\right)=\frac{2^{2k+1}}{n+2}\binom{\frac n2+k+1}{2k+1}.$$
Clearly, setting $\alpha=0$ in Corollaries 3.10 and 3.11 allows us to obtain Corollaries 2 and 3, respectively, as derived in [10]. It is clear that we can also express the Jacobi polynomials $P_n^{(\alpha,\beta)}(x)$ in terms of the complete symmetric polynomials of their zeros via the relation (1.8).
Now, we will provide some illustrative examples concerning Theorem 3.7, Theorem 3.9, and Corollary 3.11.
Example 3.12. Taking $\alpha=0$ and $\beta=1$, the zeros of $P_2^{(0,1)}(x)$ are $x_1=\frac15-\frac{\sqrt6}{5}$ and $x_2=\frac15+\frac{\sqrt6}{5}$. Therefore, we have
$$\sigma_2^{(2)}(x_1,x_2)=-\frac15,\qquad \sigma_1^{(2)}(x_1,x_2)=\frac25.$$
Using Theorem 3.7, we can further deduce
$$P_2^{(0,1)}(x)=2^{-2}\binom{5}{2}\sum_{k=0}^{2}(-1)^{2-k}\,\sigma_{2-k}^{(2)}(x_1,x_2)\,x^k=\frac52\left[-\frac15-\frac25x+x^2\right]=-\frac12-x+\frac52x^2.$$
Consequently, by applying Theorem 3.9, we can formulate the Jacobi polynomial $P_3^{(0,1)}(x)$ in the following manner:
$$P_3^{(0,1)}(x)=\frac{2^{-3}}{6}\binom{7}{3}\sum_{k=0}^{3}(-1)^k\,A_k\,x^{3-k},$$
where the values of $A_k$ for $k=0,1,2,3$ are computed using formula (3.16) as follows:
$$A_0=6\,\sigma_0^{(2)}(x_1,x_2)=6,\qquad A_1=5\,\sigma_1^{(2)}(x_1,x_2)+\frac47\,\sigma_0^{(2)}(x_1,x_2)=\frac{18}{7},$$
$$A_2=4\,\sigma_2^{(2)}(x_1,x_2)+\frac47\,\sigma_1^{(2)}(x_1,x_2)-2\,\sigma_0^{(2)}(x_1,x_2)=-\frac{18}{7},\qquad A_3=\frac47\,\sigma_2^{(2)}(x_1,x_2)-\sigma_1^{(2)}(x_1,x_2)=-\frac{18}{35}.$$
Hence, the Jacobi polynomial $P_3^{(0,1)}(x)$ is given by
$$P_3^{(0,1)}(x)=\frac{35}{48}\left[6x^3-\frac{18}{7}x^2-\frac{18}{7}x+\frac{18}{35}\right]=\frac18\left(35x^3-15x^2-15x+3\right).$$
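The computation of Example 3.12 can be reproduced with a few lines of code (an illustrative sketch; the variable names are ours): the coefficients produced by (3.15) and (3.16) from the zeros of $P_2^{(0,1)}$ coincide with $(35x^3-15x^2-15x+3)/8$.

```python
# Illustrative sketch: reproducing Example 3.12 via Theorem 3.9.
import numpy as np
from math import comb, sqrt

n, alpha, beta = 3, 0, 1
y = [(1 - sqrt(6)) / 5, (1 + sqrt(6)) / 5]          # zeros of P_2^{(0,1)}
sigma = [1.0, y[0] + y[1], y[0] * y[1], 0.0]        # sigma_k^{(2)}(y); sigma_k = 0 for k > 2

def A(k):                                            # formula (3.16), with sigma_{-1} = sigma_{-2} = 0
    s = lambda j: sigma[j] if j >= 0 else 0.0
    return ((2 * n + alpha + beta - k - 1) * s(k)
            - (n + alpha + beta) * (alpha - beta) / (2 * n + alpha + beta) * s(k - 1)
            - (n - k + 1) * s(k - 2))

lead = 2.0 ** (-n) / (2 * n + alpha + beta - 1) * comb(2 * n + alpha + beta, n)
coeffs = lead * np.array([(-1) ** k * A(k) for k in range(n + 1)])   # descending powers, by (3.15)
print(np.allclose(coeffs, np.array([35, -15, -15, 3]) / 8))          # True
```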
Example 3.13. Let us consider the case where $\alpha=\beta=\frac52$. The zeros of $P_3^{(\frac52,\frac52)}(x)$ are $x_1=-\sqrt{\frac{3}{10}}$, $x_2=0$, and $x_3=\sqrt{\frac{3}{10}}$.

Utilizing Corollary 3.11, we obtain
$$P_3^{(\frac52,\frac52)}(x)=2^{-3}\binom{11}{3}\sum_{k=0}^{1}(-1)^k\,\sigma_k^{(1)}\!\left(\frac{3}{10}\right)x^{3-2k}=\frac{165}{8}\left[x^3-\frac{3}{10}x\right].$$
Applying formula (4.7.1) in [38], we obtain the normalized ultraspherical polynomial of degree 3 with $\lambda=3$ as
$$C_3^{(3)}(x)=\frac{\left(2+\frac12\right)!\;8!}{5!\;\left(5+\frac12\right)!}\,P_3^{(\frac52,\frac52)}(x)=8x\left(10x^2-3\right).$$
By employing Theorem 3.9, we can express the Jacobi polynomial $P_4^{(\frac52,\frac52)}(x)$ in terms of the zeros of $P_3^{(\frac52,\frac52)}(x)$ as follows:
$$P_4^{(\frac52,\frac52)}(x)=\frac{2^{-4}}{12}\binom{13}{4}\sum_{k=0}^{4}(-1)^k\,A_k\,x^{4-k},$$
where the coefficients $A_k$ are calculated using formula (3.16) as
$$A_k=(12-k)\,(-1)^{\frac k2}\,\sigma_{\frac k2}^{(1)}\!\left(\frac{3}{10}\right)-(5-k)\,(-1)^{\frac{k-2}{2}}\,\sigma_{\frac{k-2}{2}}^{(1)}\!\left(\frac{3}{10}\right),\qquad k=0,1,2,3,4.$$
Therefore, the Jacobi polynomial $P_4^{(\frac52,\frac52)}(x)$ is given by
$$P_4^{(\frac52,\frac52)}(x)=\frac{2^{-4}}{12}\binom{13}{4}\left[12x^4-6x^2+\frac{3}{10}\right].$$
Finally, the normalized ultraspherical polynomial of degree 4 with $\lambda=3$ can be represented as
$$C_4^{(3)}(x)=\frac{\left(2+\frac12\right)!\;9!}{5!\;\left(6+\frac12\right)!}\,P_4^{(\frac52,\frac52)}(x)=6\left[40x^4-20x^2+1\right].$$
In this article, we presented novel identities for elementary and complete symmetric polynomials and explored their practical implications. These identities help to deepen our understanding of elementary and complete symmetric polynomials, as well as their connections to various fields. For instance, we extended certain results presented in [2], specifically those concerning the inversion of a generalized Vandermonde matrix. Additionally, we applied some of the derived identities to compute the determinants of two special tri-diagonal matrices. Furthermore, we extended the results presented in [10] concerning orthogonal polynomials. Theorem 3.9 shows that the Jacobi polynomial $P_n^{(\alpha,\beta)}(x)$ can be expressed using the zeros of $P_{n-1}^{(\alpha,\beta)}(x)$.
Both authors A. Arafat and M. El-Mikkawy contributed equally to this work. Both authors have read and agreed to the published version of the manuscript.
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
The authors would like to extend their sincere gratitude to the reviewers for their valuable comments and constructive suggestions on this manuscript, which have greatly enhanced it.
The authors declare no conflicts of interest.
[1] M. El-Mikkawy, Explicit inverse of a generalized Vandermonde matrix, Appl. Math. Comput., 146 (2003), 643–651. https://doi.org/10.1016/S0096-3003(02)00609-4
[2] A. Arafat, M. El-Mikkawy, A fast novel recursive algorithm for computing the inverse of a generalized Vandermonde matrix, Axioms, 12 (2023), 27. https://doi.org/10.3390/axioms12010027
[3] D. Knutson, Lambda-rings and the representation theory of the symmetric group, Springer, 1973. https://doi.org/10.1007/BFb0069217
[4] R. P. Stanley, Enumerative combinatorics, Vol. 2, Cambridge University Press, 1999. https://doi.org/10.1017/CBO9780511609589
[5] T. Bickel, N. Galli, K. Simon, Birth processes and symmetric polynomials, Ann. Comb., 5 (2001), 123–139. https://doi.org/10.1007/PL00001295
[6] F. Bergeron, Algebraic combinatorics and coinvariant spaces, 1 Ed., A K Peters/CRC Press, 2009. https://doi.org/10.1201/b10583
[7] I. G. Macdonald, Symmetric functions and Hall polynomials, Oxford University Press, 1998.
[8] I. Stewart, Galois theory, 5 Eds., Chapman and Hall/CRC, 2022. https://doi.org/10.1201/9781003213949
[9] M. Merca, Some experiments with complete and elementary symmetric functions, Period. Math. Hung., 69 (2014), 182–189. https://doi.org/10.1007/s10998-014-0034-3
[10] M. S. Alatawi, On the elementary symmetric polynomials and the zeros of Legendre polynomials, J. Math., 2022 (2022), 4139728. https://doi.org/10.1155/2022/4139728
[11] M. El-Mikkawy, T. Sogabe, Notes on particular symmetric polynomials with applications, Appl. Math. Comput., 215 (2010), 3311–3317. https://doi.org/10.1016/j.amc.2009.10.019
[12] M. Merca, Two symmetric identities involving complete and elementary symmetric functions, Bull. Malays. Math. Sci. Soc., 43 (2020), 1661–1670. https://doi.org/10.1007/s40840-019-00764-2
[13] M. El-Mikkawy, On a connection between the Pascal, Vandermonde and Stirling matrices-II, Appl. Math. Comput., 146 (2003), 759–769. https://doi.org/10.1016/S0096-3003(02)00616-1
[14] S. L. Yang, Y. Y. Jia, Symmetric polynomial matrices and Vandermonde matrix, Indian J. Pure Appl. Math., 2009.
[15] M. Merca, A convolution for complete and elementary symmetric functions, Aequat. Math., 86 (2013), 217–229. https://doi.org/10.1007/s00010-012-0170-x
[16] M. El-Mikkawy, F. Atlan, Remarks on two symmetric polynomials and some matrices, Appl. Math. Comput., 219 (2013), 8770–8778. https://doi.org/10.1016/j.amc.2013.02.068
[17] M. Merca, Bernoulli numbers and symmetric functions, RACSAM, 114 (2020), 20. https://doi.org/10.1007/s13398-019-00774-6
[18] M. Merca, A. Cuza, A special case of the generalized Girard-Waring formula, J. Integer Seq., 15 (2012), 1–7.
[19] I. Gelfand, V. Retakh, Noncommutative Vieta theorem and symmetric functions, In: I. M. Gelfand, J. Lepowsky, M. M. Smirnov, The Gelfand mathematical seminars 1993–1995, Birkhäuser Boston, 1996, 93–100. https://doi.org/10.1007/978-1-4612-4082-2_6
[20] T. Zhang, A. Chen, H. Shi, B. Saheya, B. Xi, Schur-convexity for elementary symmetric composite functions and their inverse problems and applications, Symmetry, 13 (2021), 2351. https://doi.org/10.3390/sym13122351
[21] I. Rovenţa, L. E. Temereancă, A note on the positivity of the even degree complete homogeneous symmetric polynomials, Mediterr. J. Math., 16 (2019), 1. https://doi.org/10.1007/s00009-018-1275-9
[22] E. Cornelius Jr, Identities for complete homogeneous symmetric polynomials, JP J. Algebra Number Theory Appl., 21 (2011), 109–116.
[23] H. M. Moya-Cessa, F. Soto-Eguibar, Differential equations: an operational approach, Rinton Press, 2011.
[24] K. R. Rao, D. N. Kim, J. J. Hwang, Fast Fourier transform: algorithms and applications, Springer, 2010. https://doi.org/10.1007/978-1-4020-6629-0
[25] R. Vein, P. Dale, Determinants and their applications in mathematical physics, Springer Science & Business Media, 2006.
[26] C. Zhu, S. Liu, M. Wei, Analytic expression and numerical solution of ESD current, High Voltage Eng., 31 (2005), 22–24.
[27] K. Lundengård, M. Rančić, V. Javor, S. Silvestrov, On some properties of the multi-peaked analytically extended function for approximation of lightning discharge currents, In: S. Silvestrov, M. Rančić, Engineering mathematics I, Springer Proceedings in Mathematics & Statistics, Cham: Springer, 178 (2016), 151–172. https://doi.org/10.1007/978-3-319-42082-0_10
[28] E. Desurvire, Classical and quantum information theory: an introduction for the telecom scientist, Cambridge University Press, 2009.
[29] M. Cirafici, A. Sinkovics, R. J. Szabo, Cohomological gauge theory, quiver matrix models and Donaldson-Thomas theory, Nuclear Phys. B, 809 (2009), 452–518. https://doi.org/10.1016/j.nuclphysb.2008.09.024
[30] T. Scharf, J. Thibon, B. Wybourne, Powers of the Vandermonde determinant and the quantum Hall effect, J. Phys. A: Math. Gen., 27 (1994), 4211. https://doi.org/10.1088/0305-4470/27/12/026
[31] M. Koohestani, A. Rahnamai Barghi, A. Amiraslani, The application of tri-diagonal matrices in P-polynomial table algebras, Iran. J. Sci. Technol. Trans. A: Sci., 44 (2020), 1125–1129. https://doi.org/10.1007/s40995-020-00924-1
[32] I. Mazilu, D. Mazilu, H. Williams, Applications of tri-diagonal matrices in non-equilibrium statistical physics, Electron. J. Linear Algebra, 24 (2012), 7–17. https://doi.org/10.13001/1081-3810.1576
[33] W. Yang, K. Li, K. Li, A parallel solving method for block-tridiagonal equations on CPU-GPU heterogeneous computing systems, J. Supercomput., 73 (2017), 1760–1781. https://doi.org/10.1007/s11227-016-1881-x
[34] A. Klinkenberg, Three examples of tridiagonal matrices in description of cascades, Ind. Eng. Chem. Fundamen., 8 (1969), 169–170. https://doi.org/10.1021/i160029a028
[35] M. El-Mikkawy, A note on a three-term recurrence for a tridiagonal matrix, Appl. Math. Comput., 139 (2003), 503–511. https://doi.org/10.1016/S0096-3003(02)00212-6
[36] Y. Huang, W. McColl, Analytical inversion of general tridiagonal matrices, J. Phys. A: Math. Gen., 30 (1997), 7919. https://doi.org/10.1088/0305-4470/30/22/026
[37] M. El-Mikkawy, A. Karawia, Inversion of general tridiagonal matrices, Appl. Math. Lett., 19 (2006), 712–720. https://doi.org/10.1016/j.aml.2005.11.012
[38] G. Szegö, Orthogonal polynomials, Vol. 23, New York: American Mathematical Society, 1939.
[39] G. Freud, Orthogonal polynomials, Elsevier, 2014.
[40] A. Arafat, E. Porcu, M. Bevilacqua, J. Mateu, Equivalence and orthogonality of Gaussian measures on spheres, J. Multivar. Anal., 167 (2018), 306–318. https://doi.org/10.1016/j.jmva.2018.05.005
[41] A. Arafat, P. Gregori, E. Porcu, Schoenberg coefficients and curvature at the origin of continuous isotropic positive definite kernels on spheres, Stat. Probab. Lett., 156 (2020), 108618. https://doi.org/10.1016/j.spl.2019.108618
[42] Q. M. Luo, Contour integration for the improper rational functions, Montes Taurus J. Pure Appl. Math., 3 (2021), 135–139.
[43] S. Silvestrov, A. Malyarenko, M. Rančić, Algebraic structures and applications, Vol. 317, Springer, 2020. https://doi.org/10.1007/978-3-030-41850-2