In this paper, polynomial equations in one variable with real coefficients were considered, containing a small, positive, but specified and fixed parameter ε0≠0. By using the classical asymptotic method, roots of such polynomial equations have been constructed in the literature, and these roots were proved to be valid for sufficiently small ε-values (or equivalently for ε→0). In this paper, it was assumed that for some or all roots of a polynomial equation, the first few terms in a Taylor or Laurent series in a small parameter depending on ε exist and can be constructed. We also assumed that at least two approximations x1(ε) and x2(ε) for the real roots exist and can be constructed. For a complex root, we assumed that at least two real approximations a1(ε) and a2(ε) for the real part of this root, and at least two real approximations b1(ε) and b2(ε) for the imaginary part of this root, exist and can be constructed. Usually it was not clear whether the approximations were valid for ε=ε0. It was shown in this paper how the classical asymptotic method in combination with the bisection method could be used to prove how accurate the constructed approximations of the roots are on a given interval in ε (usually including the specified and fixed value ε0≠0). The method was illustrated by studying a polynomial equation of degree five with a small but fixed parameter ε0=0.1. It was shown how (absolute and relative) error estimates for the real and imaginary parts of the roots could be obtained for all values of the small parameter in the interval (0,ε0].
Citation: Fitriana Yuli Saptaningtyas, Wim T Van Horssen, Fajar Adi-Kusumo, Lina Aryati. On accurate asymptotic approximations of roots for polynomial equations containing a small, but fixed parameter[J]. AIMS Mathematics, 2024, 9(10): 28542-28559. doi: 10.3934/math.20241385
In the field of nonlinear dynamics, finding roots of polynomial equations plays an important role. Determining equilibrium points of systems of differential equations (or of systems of difference equations) with polynomial nonlinearities leads, in many cases, to solving a polynomial equation. Determining the stability (in the linearized sense) of equilibrium points of differential equations or of difference equations (maps) often leads to finding roots of polynomial equations with constant, real coefficients.
In this paper, we will consider polynomial equations of degree n for which the coefficients in the equation are real. The fundamental theorem of algebra tells us that such a polynomial equation of degree n has n roots (some of which may coincide). It is also well-known that polynomial equations up to degree 4 can be solved exactly and algebraically, and that for equations of degree 5 and higher this is in general not possible. A lot of research has been done to compute accurate approximations of the (real or complex) roots of these polynomial equations by means of numerical methods or by means of asymptotic methods (when a small or large parameter is present in the equation which tends to zero or infinity, respectively). Both methods find their origins in the works of Newton, Euler, and others. Numerical methods received a lot of attention with the introduction of computers in the 1960s, for instance methods to solve nonlinear transcendental equations, to study polynomial equations, and to analyze the associated errors; see [1,2]. Nowadays, numerical methods still receive a lot of attention. Roots of polynomials with complex coefficients and multi-precision algorithms are studied in [3,4]. Fast algorithms for root finding are studied in [5,6]. Furthermore, a two-step simultaneous method for all polynomial roots is studied in [7], and an iterative root-finding algorithm analyzed with polynomiography in [8].
Several root-finding algorithms exist that use hybrid methods to reduce computational time and save memory; see [9,10,11]. Asymptotic methods for solving polynomial equations approximately have also received a lot of attention in the last 60 years. Several root-finding studies use perturbation methods; see [12,13,14,15]. Singular problems have been studied in [16,17,18,19], and asymptotic analysis has been discussed in [20,21,22,23].
However, in all of these books and papers on perturbation methods to construct approximations of the roots (or a root) of the polynomial equation, it is only indicated or proved that the approximations are valid for a sufficiently small parameter or for the parameter tending to zero. These indications or proofs are based on different forms of the implicit function theorem. For instance, for a function f(x,ε), which is analytic in both variables:
Theorem 1. Let $x_0$ be a complex number such that $f(x_0,0)=0$, and let $f(x,\varepsilon)$ be analytic at $x=x_0$ and $\varepsilon=0$. If $\frac{\partial f}{\partial x}(x_0,0)\neq 0$, then there exist constants $a>0$ and $b>0$ such that for each $\varepsilon$ with $|\varepsilon|<a$, the equation $f(x,\varepsilon)=0$ has a unique and simple root $x=x(\varepsilon)$ for $|x-x_0|<b$, where $x(\varepsilon)$ is an analytic function in $\varepsilon$ for $|\varepsilon|<a$ with $x(0)=x_0$.
Or, for example, for a real-valued function f(x,ε) in real variables:
Theorem 2. Let $x_0$ be a real number such that $f(x_0,0)=0$, and let $f(x,\varepsilon)$ be a real-valued function for which all (mixed) partial derivatives up to order $m>0$ are continuous in a neighborhood of $x=x_0$ and $\varepsilon=0$. If $\frac{\partial f}{\partial x}(x_0,0)\neq 0$, then there exists an $m$ times continuously differentiable function $x=x(\varepsilon)$ in a neighborhood of $\varepsilon=0$ such that $f(x(\varepsilon),\varepsilon)=0$ with $x(0)=x_0$.
Now, it should be observed that Theorem 1 contains two unknown constants $a$ and $b$, and that Theorem 2 involves a similar vagueness concerning the neighborhood of $x=x_0$ and $\varepsilon=0$. In practice, $\varepsilon$ has a specified and fixed real value $\varepsilon_0$, and from both theorems it cannot be concluded that $\varepsilon_0$ satisfies the conditions mentioned in these theorems. Only in [12, p. 30] is this issue observed: "For most problems of actual interest, it is very difficult to carry out these calculations (to obtain these constants $a$ and $b$, or to obtain the neighborhood of $x=x_0$ and $\varepsilon=0$), and it is almost never done. In practice, a result as Theorem 1 or 2 provides a reason to trust the approximations to some extent, and then one can turn to numerical or experimental data for more detailed information about accuracy and range of validity". This is exactly what we see in the literature: the asymptotic approximation for $\varepsilon\to 0$ is compared to numerical approximations or to available exact roots for different values of $\varepsilon$. In [12], the author determines the accuracy of the roots of a quadratic polynomial on a certain interval in $\varepsilon$ by estimating the remainder term $R(\varepsilon)$ for the polynomial equation $P_2(x,\varepsilon)=0$, where $x(\varepsilon)$ is approximated by $x_0+\varepsilon x_1+\varepsilon^2 R(\varepsilon)$. The computations to obtain $R(\varepsilon)$ and to determine an interval of validity for $\varepsilon$ are already quite involved for such a simple problem. In this paper, we propose another method to determine the accuracy of a root on a given interval in $\varepsilon$. In fact, it will be a method based on the bisection method and the usual perturbation method to determine roots of polynomial equations.
We assume that at least two real approximations x1(ε) and x2(ε) for a real root x(ε) exist and can be constructed. For a complex root, we assume that at least two real approximations a1(ε) and a2(ε) for the real part of this root and that at least two real approximations b1(ε) and b2(ε) for the imaginary part of this root exist and can be constructed. This paper is organized as follows. In Section 2 of this paper, we formulate and prove two theorems in which the accuracies of the approximations of real and complex roots of polynomial equations can be obtained for all ε on a given interval for ε. In Section 3 of this paper, the two theorems will be applied to a polynomial equation of degree 5, which contains a small parameter ε. The equation contains real and complex roots, and some of the roots turn out to be large. Absolute and relative errors in the approximations of the roots will be determined for all ε∈(0,ε0] with ε0=0.1. In Section 4 of this paper, we will formulate a general algorithm to possibly determine approximations of roots of polynomial equations (including error estimates on a given interval for ε). Finally, in Section 5, we draw some conclusions and discuss some directions for future research.
In this section of the paper, we consider nth degree polynomial equations with real coefficients. Based on the fundamental theorem of algebra, we know that such a polynomial equation has n roots (which are not necessarily all different). These roots can be real or complex-valued. Information on how many positive (or negative) real roots might be present can be obtained from Descartes' rule of signs, which states for a polynomial equation pn(x)=0 in standard form that the number of positive real roots of pn(x)=0 is either equal to the number of variations in sign of pn(x) or less than that by an even number, and that the number of negative real roots is either equal to the number of variations in sign of pn(−x) or less than that by an even number. In Descartes' rule of signs, zero coefficients are ignored.
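As a minimal illustration (our addition, not part of the original paper), the sign-variation count in Descartes' rule can be implemented in a few lines; the cubic used below is a hypothetical example.

```python
def sign_variations(coeffs):
    """Count sign changes in a coefficient sequence, ignoring zero coefficients."""
    signs = [c > 0 for c in coeffs if c != 0]
    return sum(1 for s1, s2 in zip(signs, signs[1:]) if s1 != s2)

# Hypothetical example: p(x) = x^3 - 2x^2 - 5x + 6, coefficients listed from highest degree down.
p = [1, -2, -5, 6]
pos_bound = sign_variations(p)
# Coefficients of p(-x), up to an overall sign (a global sign does not change the count).
neg_bound = sign_variations([c * (-1) ** i for i, c in enumerate(p)])
print(pos_bound, neg_bound)  # 2 and 1: either 2 or 0 positive real roots, exactly 1 negative real root
```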
For polynomial equations, the implicit function theorem (as formulated in the introduction of this paper) can be applied. So, straightforward perturbation expansions in $\varepsilon$ (or in another small parameter depending on $\varepsilon$, obtained after a rescaling procedure) can be used to approximate roots of the polynomial equation for $\varepsilon$ tending to zero. In Theorem 3 below for real roots and in Theorem 4 for complex roots, it will be shown under what conditions these perturbation expansions can be used for all $\varepsilon\in(0,\varepsilon_0]$, where $\varepsilon_0$ is a fixed, real parameter. Moreover, explicit error estimates can be given, which are valid for all $\varepsilon\in(0,\varepsilon_0]$.
Theorem 3. Consider the polynomial equation of degree n with n∈Z+:
$$p_n(x,\varepsilon)\equiv a_n x^n+a_{n-1}x^{n-1}+\dots+a_1 x+a_0=0, \tag{1}$$
where the real coefficients $a_0,a_1,\dots,a_n$ depend on $\varepsilon$. If functions $x_1(\varepsilon)\in\mathbb{R}$ and $x_2(\varepsilon)\in\mathbb{R}$ can be found such that $p_n(x_1(\varepsilon),\varepsilon)>0$ and $p_n(x_2(\varepsilon),\varepsilon)<0$ for all $\varepsilon\in(0,\varepsilon_0]$, then at least one real root $x(\varepsilon)$ in between $x_1(\varepsilon)$ and $x_2(\varepsilon)$ exists, and this root can be approximated by $\frac{1}{2}\bigl(x_1(\varepsilon)+x_2(\varepsilon)\bigr)$ with an absolute error less than or equal to $\frac{1}{2}|x_2(\varepsilon)-x_1(\varepsilon)|$.
Proof. Let $x_1(\varepsilon)\in\mathbb{R}$ and $x_2(\varepsilon)\in\mathbb{R}$ be such that $p_n(x_1(\varepsilon),\varepsilon)$ and $p_n(x_2(\varepsilon),\varepsilon)$ have opposite signs for all $\varepsilon\in(0,\varepsilon_0]$. Since $p_n(x,\varepsilon)$ is a polynomial with real coefficients, we then also know that a real root exists, satisfying $x_1(\varepsilon)<x(\varepsilon)<x_2(\varepsilon)$ for all $\varepsilon\in(0,\varepsilon_0]$ (where it has been assumed without loss of generality that $x_1(\varepsilon)<x_2(\varepsilon)$ and that $p_n(x_1,\varepsilon)>0$ and $p_n(x_2,\varepsilon)<0$). Since $p_n(x,\varepsilon)$ is continuous in $x$, the intermediate value theorem guarantees that there exists an $x(\varepsilon)$ in between $x_1(\varepsilon)$ and $x_2(\varepsilon)$ such that $p_n(x(\varepsilon),\varepsilon)=0$. In fact, this is a simple application of the bisection method. From the inequality $x_1(\varepsilon)<x(\varepsilon)<x_2(\varepsilon)$, it follows that
$$x(\varepsilon)=\tfrac{1}{2}\bigl(x_1(\varepsilon)+x_2(\varepsilon)\bigr)+E_{abs}(\varepsilon),$$
where the absolute error $E_{abs}(\varepsilon)$ satisfies $|E_{abs}(\varepsilon)|<\tfrac{1}{2}\bigl(x_2(\varepsilon)-x_1(\varepsilon)\bigr)$ for all $\varepsilon\in(0,\varepsilon_0]$.
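In practice, Theorem 3 is applied by checking the opposite-sign condition for two candidate expansions. The following short script is our own numerical sketch (the function name check_bracket and the grid check are ours, not the paper's); it only supports, and does not replace, the analytic sign proof. As an example it uses the quintic $\varepsilon x^5+x^2-1=0$ that is studied in Section 3.

```python
import numpy as np

def check_bracket(p, x1, x2, eps_max=0.1, n=2000):
    """Spot-check p(x1(eps), eps) > 0 > p(x2(eps), eps) on a grid of eps in (0, eps_max]."""
    eps = np.linspace(eps_max / n, eps_max, n)
    ok = np.all(p(x1(eps), eps) > 0) and np.all(p(x2(eps), eps) < 0)
    mid = 0.5 * (x1(eps) + x2(eps))          # Theorem 3 approximation (midpoint)
    err = 0.5 * np.abs(x2(eps) - x1(eps))    # Theorem 3 error bound
    return ok, mid[-1], err[-1]              # values at eps = eps_max

# p5(x, eps) = eps*x^5 + x^2 - 1 with the bracket x1 = 1, x2 = 1 - eps/2 from Section 3.1.
p5 = lambda x, e: e * x**5 + x**2 - 1
ok, approx, bound = check_bracket(p5, lambda e: np.ones_like(e), lambda e: 1 - 0.5 * e)
print(ok, approx, bound)  # True, 0.975, 0.025 at eps = 0.1
```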
Theorem 4. Let x(ε)=a(ε)+ib(ε) be a complex valued root of the polynomial equation (1), and let a(ε) and b(ε) be real functions in ε. By substituting a(ε)+ib(ε) into the polynomial equation (1), and by taking apart the real and imaginary parts in the so-obtained equation, one obtains a system of two real and nonlinear polynomial equations
$$\begin{cases} f_1(a(\varepsilon),b(\varepsilon),\varepsilon)\equiv \operatorname{Re}\bigl(p_n(a(\varepsilon)+ib(\varepsilon),\varepsilon)\bigr)=0,\\ f_2(a(\varepsilon),b(\varepsilon),\varepsilon)\equiv \operatorname{Im}\bigl(p_n(a(\varepsilon)+ib(\varepsilon),\varepsilon)\bigr)=0. \end{cases} \tag{2}$$
This system of polynomial equations (2) can be reduced to a triangular system of real polynomial equations
$$\begin{cases} g_1(a(\varepsilon),\varepsilon)=0,\\ g_2(a(\varepsilon),b(\varepsilon),\varepsilon)=0, \end{cases}\quad\text{or}\quad \begin{cases} g_3(a(\varepsilon),b(\varepsilon),\varepsilon)=0,\\ g_4(b(\varepsilon),\varepsilon)=0. \end{cases} \tag{3}$$
If functions a1(ε),a2(ε),b1(ε), and b2(ε) can be constructed such that
$$g_1(a_1(\varepsilon),\varepsilon)>0,\quad g_1(a_2(\varepsilon),\varepsilon)<0,\quad g_4(b_1(\varepsilon),\varepsilon)>0,\quad\text{and}\quad g_4(b_2(\varepsilon),\varepsilon)<0$$
for all $\varepsilon\in(0,\varepsilon_0]$, then the polynomial equation (1) has a complex-valued root $a(\varepsilon)+ib(\varepsilon)$, which can be approximated by $\tfrac{1}{2}(a_1(\varepsilon)+a_2(\varepsilon))+\tfrac{i}{2}(b_1(\varepsilon)+b_2(\varepsilon))$, where the absolute errors in the real part and in the imaginary part are at most $\tfrac{1}{2}|a_2(\varepsilon)-a_1(\varepsilon)|$ and $\tfrac{1}{2}|b_2(\varepsilon)-b_1(\varepsilon)|$, respectively, for all $\varepsilon\in(0,\varepsilon_0]$.
Proof. Let a complex-valued root x(ε) be given by a(ε)+ib(ε), where a(ε) and b(ε) are real-valued functions in ε with b(ε)≠0. By taking apart the real part and the imaginary part of pn(a(ε)+ib(ε),ε)=0, we obtain
$$\begin{cases} \operatorname{Re}\bigl(p_n(a(\varepsilon)+ib(\varepsilon),\varepsilon)\bigr)=0,\\ \operatorname{Im}\bigl(p_n(a(\varepsilon)+ib(\varepsilon),\varepsilon)\bigr)=0. \end{cases}$$
We assume that $p_n(x,\varepsilon)$ is a polynomial of degree $n$. These real and imaginary parts are both real polynomials in $a(\varepsilon)$ and $b(\varepsilon)$. So, in fact, we obtain $f_1(a(\varepsilon),b(\varepsilon),\varepsilon)=0$ and $f_2(a(\varepsilon),b(\varepsilon),\varepsilon)=0$, where $f_1$ and $f_2$ are both polynomials in $a(\varepsilon)$ and $b(\varepsilon)$. Now, this system of polynomial equations can be transformed into regular chains (or equivalently, into a triangular system of polynomial equations) by using the triangular decomposition method for polynomial systems (see [2,5,24] for the existence proofs of this decomposition). There are a few algorithms to compute such a triangular decomposition of a polynomial system into regular chains or into regular semi-algebraic systems (see again [2,5,24]), and we obtain
$$\begin{cases} g_1(a(\varepsilon),\varepsilon)=0,\\ g_2(a(\varepsilon),b(\varepsilon),\varepsilon)=0, \end{cases}\quad\text{or}\quad \begin{cases} g_3(a(\varepsilon),b(\varepsilon),\varepsilon)=0,\\ g_4(b(\varepsilon),\varepsilon)=0, \end{cases}$$
where g1,g2,g3, and g4 are real polynomial functions in their arguments. Obviously, such triangular systems are ready to be solved by evaluating the unknown one after the other (just like for triangular, linear systems of equations). From g1(a(ε),ε)=0 and from g4(b(ε),ε)=0, we construct approximations for a(ε) and for b(ε). Observe that g1 and g4 are both real functions (and a(ε) and b(ε) are real). We assume that two approximations for a(ε) exist, let's say, a1(ε) and a2(ε) for which g1(a1(ε),ε) and g1(a2(ε),ε) have opposite signs for all ε∈(0,ε0]. Similarly for b(ε), we assume that two approximations for b(ε) exist, let's say, b1(ε) and b2(ε) for which g4(b1(ε),ε) and g4(b2(ε),ε) have opposite signs for all ε∈(0,ε0]. The bisection method then simply implies that (assuming without loss of generality that a1(ε)<a2(ε) and b1(ε)<b2(ε)):
$$a(\varepsilon)=\tfrac{1}{2}\bigl(a_1(\varepsilon)+a_2(\varepsilon)\bigr)+E^{a}_{abs}(\varepsilon),\qquad b(\varepsilon)=\tfrac{1}{2}\bigl(b_1(\varepsilon)+b_2(\varepsilon)\bigr)+E^{b}_{abs}(\varepsilon), \tag{4}$$
where the absolute errors $E^{a}_{abs}(\varepsilon)$ and $E^{b}_{abs}(\varepsilon)$ satisfy $|E^{a}_{abs}(\varepsilon)|<\tfrac{1}{2}\bigl(a_2(\varepsilon)-a_1(\varepsilon)\bigr)$ and $|E^{b}_{abs}(\varepsilon)|<\tfrac{1}{2}\bigl(b_2(\varepsilon)-b_1(\varepsilon)\bigr)$ for all $\varepsilon\in(0,\varepsilon_0]$.
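To illustrate the triangularization step (our sketch, not code from the paper), one convenient stand-in for the regular-chain algorithms of [2,5,24] is a resultant-based elimination in sympy; for the example equation (5) studied in Section 3 this produces a polynomial in $a$ and $\varepsilon$ alone that can serve as $g_1(a(\varepsilon),\varepsilon)$ in (3), possibly with extraneous factors.

```python
import sympy as sp

a, b, eps = sp.symbols('a b epsilon', real=True)

# Real and imaginary parts of p5(a + i*b, eps) for the example equation (5).
pe = sp.expand(eps * (a + sp.I * b)**5 + (a + sp.I * b)**2 - 1)
f1, f2 = sp.re(pe), sp.im(pe)          # the system (2) for this example

# A complex root has b != 0, so the factor b can be divided out of the imaginary part.
f2b = sp.cancel(f2 / b)

# Resultant-based elimination of b: the result vanishes at every common root and hence
# plays the role of g1(a, eps) in the triangular system (3).
g1 = sp.resultant(f1, f2b, b)
print(sp.Poly(g1, a).degree(), sp.factor(g1))
```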
So far, we have shown how the accuracy of approximations of (real or complex-valued) roots of polynomial equations can be obtained not only for $\varepsilon$ tending to zero, but also for all $\varepsilon\in(0,\varepsilon_0]$. In the next section, it will be shown for a nontrivial example how these two theorems can be applied.
In this section, we will consider the following 5th degree polynomial equation:
$$p_5(x,\varepsilon)\equiv \varepsilon x^5+x^2-1=0, \tag{5}$$
where ε is a small, real but fixed parameter, that is, ε=ε0=0.1. When we assume for equation (5) that ε is a small but positive parameter, that is, 0<ε≪1, then we can apply the classical perturbation method to construct the formal approximations of the five roots of equation (5) for ε sufficiently small. However, does "ε sufficiently small" include ε=ε0=0.1? This question is usually never answered in the classical literature on perturbation methods for polynomial equations (see, for instance, [12,13,14,15,16]). By using a straightforward perturbation expansion for the root x(ε), that is,
$$x(\varepsilon)\sim x_0+\varepsilon x_1+\varepsilon^2 x_2+\dots, \tag{6}$$
where the $x_i$ ($i=0,1,2,\dots$) are constants independent of $\varepsilon$, one will find two approximations for the roots. The other three approximations are found by the rescaling procedure (also referred to as the method of maximal balance, the method of distinguished limits, or Newton's polyhedron method):
$$x=\varepsilon^{\alpha}y, \tag{7}$$
where α is a constant and y=y(ε) is a strict O(1) function depending on ε. By substituting (7) into (5), one obtains:
$$\varepsilon^{1+5\alpha}y^5+\varepsilon^{2\alpha}y^2-1=0 \;\Longleftrightarrow\; \varepsilon^{1+5\alpha}y^5+\varepsilon^{2\alpha}y^2-\varepsilon^{0}\cdot 1=0. \tag{8}$$
For (8), the rescaling procedure implies that values for $\alpha$ should be determined such that two terms in (8) dominate the remaining term (or the remaining term is of the same order of magnitude as the other two terms). It is not difficult to see that only two values for $\alpha$ lead to what are called significant balances in the equation, that is, $\alpha=0$ or $\alpha=-\frac{1}{3}$. In Section 3.1 of this paper, we will study the approximations of the roots and their accuracies for $\varepsilon\in(0,0.1]$ when $\alpha=0$, and in Section 3.2 we will do the same for the case $\alpha=-\frac{1}{3}$. It will turn out in Section 3.1 that two real roots have to be approximated, and in Section 3.2 that one large real root and two large complex-valued roots have to be approximated.
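As an aside (our sketch, not from the paper), this search for significant balances can be mechanized: for each pair of terms, equate their $\varepsilon$-exponents, solve for $\alpha$, and keep only those values for which the balanced pair is not dominated by the remaining term.

```python
from fractions import Fraction
from itertools import combinations

# epsilon-exponent of each term of (8), written as c0 + c1*alpha:
# eps^{1+5*alpha}*y^5,  eps^{2*alpha}*y^2,  eps^0 * 1
terms = [(Fraction(1), Fraction(5)), (Fraction(0), Fraction(2)), (Fraction(0), Fraction(0))]

def exponent(term, alpha):
    c0, c1 = term
    return c0 + c1 * alpha

balances = set()
for i, j in combinations(range(len(terms)), 2):
    (ci0, ci1), (cj0, cj1) = terms[i], terms[j]
    if ci1 == cj1:
        continue                        # parallel exponents: these two terms never balance
    alpha = (cj0 - ci0) / (ci1 - cj1)   # exponent of term i equals exponent of term j
    common = exponent(terms[i], alpha)
    rest = [exponent(t, alpha) for k, t in enumerate(terms) if k not in (i, j)]
    if all(common <= r for r in rest):  # keep only balances not dominated by the third term
        balances.add(alpha)

print(sorted(balances))  # [Fraction(-1, 3), Fraction(0, 1)]: the two significant balances
```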
For the polynomial equation (5), the following theorem will be proved in the following subsections.
Theorem 5. Consider the polynomial equation (5). For all ε∈(0,0.1], the equation will have three real roots and two complex roots, and these roots can be approximated by (for all ε∈(0,0.1]):
(i) $1-\frac{1}{2}\varepsilon+\frac{9}{16}\varepsilon^2$ with an absolute error of at most $\frac{9}{16}\varepsilon^2$.
(ii) $-1-\frac{1}{2}\varepsilon-\frac{25}{16}\varepsilon^2$ with an absolute error of at most $\frac{7}{16}\varepsilon^2$.
(iii) $\varepsilon^{-1/3}\left(-1+\frac{1}{3}\varepsilon^{2/3}+\frac{1}{2}\varepsilon^{4/3}\right)$ with an absolute error of at most $\frac{1}{6}\varepsilon$.
(iv) $\varepsilon^{-1/3}\left(\frac{1}{2}-\frac{1}{6}\varepsilon^{2/3}+\frac{1}{6}\varepsilon^{4/3}\right)\pm i\,\varepsilon^{-1/3}\left(\frac{1}{2}\sqrt{3}+\frac{1}{6}\sqrt{3}\,\varepsilon^{2/3}-\frac{2}{9}\sqrt{3}\,\varepsilon^{4/3}\right)$ with an absolute error in the real part of at most $\frac{1}{6}\varepsilon$, and with an absolute error in the imaginary part of at most $\frac{1}{3}\sqrt{3}\,\varepsilon$.
In this subsection, we will determine accurate approximations of two roots of (5), which can be obtained by using the straightforward perturbation expansion (6). By substituting (6) into (5), and by collecting terms of O(1), terms of O(ε), and so on, one finds the following problems:
$$\begin{aligned} O(1)\text{-terms:}&\quad x_0^2-1=0,\\ O(\varepsilon)\text{-terms:}&\quad x_0^5+2x_0x_1=0,\\ O(\varepsilon^2)\text{-terms:}&\quad 2x_0x_2+x_1^2+5x_0^4x_1=0,\quad\text{and so on.} \end{aligned} \tag{9}$$
From (9), $x_0, x_1, x_2,\dots$ can readily be determined, yielding:
$$x_0=1,\;x_1=-\tfrac{1}{2},\;x_2=\tfrac{9}{8},\;x_3=-\tfrac{7}{2},\dots\qquad\text{or}\qquad x_0=-1,\;x_1=-\tfrac{1}{2},\;x_2=-\tfrac{9}{8},\;x_3=-\tfrac{7}{2},\dots$$
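For readers who want to reproduce (9) and the coefficients above symbolically, the following short sympy sketch (ours, not part of the paper) solves the order equations successively:

```python
import sympy as sp

eps = sp.symbols('epsilon', positive=True)
x0, x1, x2 = sp.symbols('x0 x1 x2')

# Substitute the expansion (6), truncated after x2, into p5 from (5) and expand in eps.
x_series = x0 + eps * x1 + eps**2 * x2
residual = sp.expand(eps * x_series**5 + x_series**2 - 1)

# Collect the O(1), O(eps), O(eps^2) equations of (9) and solve them one after the other.
eqs = [residual.coeff(eps, k) for k in range(3)]
for root0 in sp.solve(eqs[0], x0):                        # x0 = -1 or x0 = 1
    sol1 = sp.solve(eqs[1].subs(x0, root0), x1)[0]        # x1 = -1/2
    sol2 = sp.solve(eqs[2].subs({x0: root0, x1: sol1}), x2)[0]
    print(root0, sol1, sol2)                              # (-1, -1/2, -9/8) and (1, -1/2, 9/8)
```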
In this way, we found two formal approximations of two roots of $p_5(x,\varepsilon)=0$. At this moment, we do not know whether these roots are real or not, and we do not know how accurate these approximations are for all $\varepsilon\in(0,0.1]$. However, from the implicit function theorem, we know that a unique root of $p_5(x,\varepsilon)=0$ branches off from $x_0=1$, and that a unique root branches off from $x_0=-1$ for sufficiently small values of $\varepsilon$. Now, let's consider the first root with $x_0=1$, $x_1=-\frac{1}{2}$, $x_2=\frac{9}{8}$, and so on, and observe that for all $\varepsilon\in(0,0.1]$:
$$\begin{aligned} p_5(x_0,\varepsilon)&=p_5(1,\varepsilon)=\varepsilon>0,\\ p_5(x_0+\varepsilon x_1,\varepsilon)&=p_5\!\left(1-\tfrac{1}{2}\varepsilon,\varepsilon\right)=-\tfrac{9}{4}\varepsilon^2+\tfrac{5}{2}\varepsilon^3-\tfrac{5}{4}\varepsilon^4+\tfrac{5}{16}\varepsilon^5-\tfrac{1}{32}\varepsilon^6<0,\\ p_5(x_0+\varepsilon x_1+\varepsilon^2x_2,\varepsilon)&=p_5\!\left(1-\tfrac{1}{2}\varepsilon+\tfrac{9}{8}\varepsilon^2,\varepsilon\right)=7\varepsilon^3-\tfrac{719}{64}\varepsilon^4+\tfrac{685}{32}\varepsilon^5-\tfrac{1397}{64}\varepsilon^6+\tfrac{6165}{256}\varepsilon^7-\tfrac{2025}{128}\varepsilon^8+\tfrac{47385}{4096}\varepsilon^9-\tfrac{32805}{8192}\varepsilon^{10}+\tfrac{59049}{32768}\varepsilon^{11}>0, \end{aligned}$$
and so on. Now it should be observed that p5(x,ε) is a polynomial function in x with real coefficients, and that p5(x,ε) takes positive and negative values. Since p5(x,ε) is a continuous and real function for real values of x, the bisection method now implies that in between the real x-values for which a positive and a negative function value occur, there should exist a real zero (or root) of p5(x,ε)=0. For the root x(ε) of the polynomial equation (5) which branches off from x0=1, we can now conclude that this root is real and satisfies for all ε∈(0,0.1]:
$$1-\tfrac{1}{2}\varepsilon<x(\varepsilon)<1,\qquad 1-\tfrac{1}{2}\varepsilon<x(\varepsilon)<1-\tfrac{1}{2}\varepsilon+\tfrac{9}{8}\varepsilon^2.$$
From the last inequalities, we can conclude that (based on the bisection method)
$$x(\varepsilon)=1-\tfrac{1}{2}\varepsilon+\tfrac{9}{16}\varepsilon^2+E^{1}_{abs}(\varepsilon), \tag{10}$$
where the absolute error $E^{1}_{abs}(\varepsilon)$ satisfies $|E^{1}_{abs}(\varepsilon)|<\frac{9}{16}\varepsilon^2$ for all $\varepsilon\in(0,0.1]$. So, when we take $1-\frac{1}{2}\varepsilon+\frac{9}{16}\varepsilon^2$ as the approximation of this first root, we now know that both the relative error and the absolute error are less than 1% for all $\varepsilon\in(0,0.1]$. Now let's consider the second root with $x_0=-1$, $x_1=-\frac{1}{2}$, $x_2=-\frac{9}{8}$, and so on, and observe that for all $\varepsilon\in(0,0.1]$:
$$\begin{aligned} p_5(-1,\varepsilon)&=-\varepsilon<0,\\ p_5\!\left(-1-\tfrac{1}{2}\varepsilon,\varepsilon\right)&=-\tfrac{9}{4}\varepsilon^2-\tfrac{5}{2}\varepsilon^3-\tfrac{5}{4}\varepsilon^4-\tfrac{5}{16}\varepsilon^5-\tfrac{1}{32}\varepsilon^6<0,\\ p_5\!\left(-1-\tfrac{1}{2}\varepsilon-\tfrac{9}{8}\varepsilon^2,\varepsilon\right)&=-7\varepsilon^3-\tfrac{719}{64}\varepsilon^4-\tfrac{685}{32}\varepsilon^5-\tfrac{1397}{64}\varepsilon^6-\tfrac{6165}{256}\varepsilon^7-\tfrac{2025}{128}\varepsilon^8-\tfrac{47385}{4096}\varepsilon^9-\tfrac{32805}{8192}\varepsilon^{10}-\tfrac{59049}{32768}\varepsilon^{11}<0, \end{aligned}$$
and so on. So far, we have found a decreasing sequence of approximations for the unique root of $p_5(x,\varepsilon)$ which branches off from $x_0=-1$. For these approximations, we found that $p_5$ is negative. In order to apply the bisection method, we will now try to find an $x$-value for which $p_5$ is positive. For instance, consider $x=-1-\frac{1}{2}\varepsilon-2\varepsilon^2$. Then,
$$p_5\!\left(-1-\tfrac{1}{2}\varepsilon-2\varepsilon^2,\varepsilon\right)=\tfrac{7}{4}\varepsilon^2-\tfrac{21}{2}\varepsilon^3-\tfrac{69}{4}\varepsilon^4-\tfrac{885}{16}\varepsilon^5-\tfrac{2081}{32}\varepsilon^6-\tfrac{885}{8}\varepsilon^7-85\varepsilon^8-100\varepsilon^9-40\varepsilon^{10}-32\varepsilon^{11}>0$$
for all $\varepsilon\in(0,0.1]$. This can easily be checked analytically or by plotting $p_5\!\left(-1-\frac{1}{2}\varepsilon-2\varepsilon^2,\varepsilon\right)$ for $0\le\varepsilon\le 0.1$. As for the first root, we can now conclude for the second root that this root is real, and that it satisfies:
$$-1-\tfrac{1}{2}\varepsilon-2\varepsilon^2<x(\varepsilon)<-1-\tfrac{1}{2}\varepsilon-\tfrac{9}{8}\varepsilon^2\;\Rightarrow\; x(\varepsilon)=-1-\tfrac{1}{2}\varepsilon-\tfrac{25}{16}\varepsilon^2+E^{2}_{abs}(\varepsilon), \tag{11}$$
where the absolute error $E^{2}_{abs}(\varepsilon)$ satisfies $|E^{2}_{abs}(\varepsilon)|<\frac{7}{16}\varepsilon^2$. So, for the approximation $-1-\frac{1}{2}\varepsilon-\frac{25}{16}\varepsilon^2$ for the second root, we can now also conclude that the absolute error and the relative error are less than 1% for all $\varepsilon\in(0,0.1]$.
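The sign claims used in this subsection (for instance, that $p_5(-1-\frac{1}{2}\varepsilon-2\varepsilon^2,\varepsilon)>0$ on $(0,0.1]$) can also be spot-checked numerically; the following short script is our own check and not part of the paper's proof:

```python
import numpy as np

eps = np.linspace(1e-6, 0.1, 5000)
p5 = lambda x, e: e * x**5 + x**2 - 1

inner = -1 - 0.5 * eps - (9/8) * eps**2     # p5 < 0 here (last member of the decreasing sequence)
outer = -1 - 0.5 * eps - 2.0 * eps**2       # p5 > 0 here, closing the bracket
assert np.all(p5(inner, eps) < 0) and np.all(p5(outer, eps) > 0)

mid = 0.5 * (inner + outer)                 # -1 - eps/2 - (25/16) eps^2, cf. (11)
err = 0.5 * np.abs(outer - inner)           # (7/16) eps^2
print(mid[-1], err[-1])                     # about -1.065625 and 0.004375 at eps = 0.1
```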
In the previous subsection, we proved the existence of two real roots for Eq (5). Also, we constructed accurate approximations of these two real roots for all $\varepsilon\in(0,0.1]$. In this subsection, we will construct accurate approximations for the other three roots for all $\varepsilon\in(0,0.1]$. To do that, we have to use the rescaling (7) for $x$, that is, $x=\varepsilon^{\alpha}y$ with $\alpha=-\frac{1}{3}$. By substituting this rescaling into (5), we obtain:
$$y^5+y^2-\varepsilon^{2/3}=0. \tag{12}$$
To find approximations of the roots of the polynomial equation (12), we will use the following straightforward perturbation expansion
$$y(\varepsilon)\sim y_0+\delta y_1+\delta^2 y_2+\dots, \tag{13}$$
where $\delta=\varepsilon^{2/3}$, and where the $y_i$ ($i=0,1,2,\dots$) are constants independent of $\delta$. By substituting (13) into (12), and by collecting terms of $O(1)$, terms of $O(\delta)$, and so on, one finds the following equations to solve:
$$\begin{aligned} O(1)\text{-terms:}&\quad y_0^5+y_0^2=0,\\ O(\delta)\text{-terms:}&\quad 5y_0^4y_1+2y_0y_1-1=0,\\ O(\delta^2)\text{-terms:}&\quad 5y_0^4y_2+10y_0^3y_1^2+2y_0y_2+y_1^2=0, \end{aligned} \tag{14}$$
and so on. From Eq (14), $y_0, y_1, y_2,\dots$ can readily be computed. From $y_0^5+y_0^2=0$, it follows that $y_0=0$ (double root), $y_0=-1$, $y_0=\frac{1}{2}+\frac{i}{2}\sqrt{3}$, or $y_0=\frac{1}{2}-\frac{i}{2}\sqrt{3}$. The double root $y_0=0$ can be disregarded (because it leads to the case studied in Subsection 3.1). For the three other roots, it follows from the implicit function theorem that three large roots of equation (5) branch off from $y_0=-1$, $y_0=\frac{1}{2}+\frac{i}{2}\sqrt{3}$, and $y_0=\frac{1}{2}-\frac{i}{2}\sqrt{3}$. In Subsection 3.2.1, we will study the root which branches off from $y_0=-1$, and in Subsection 3.2.2 we will explain how for complex-valued roots the real and imaginary parts can be approximated and how the accuracies of the approximations of the real and imaginary parts can be determined.
From Eq (14), $y_1, y_2$, and so on can easily be determined, yielding $y_1=\frac{1}{3}$, $y_2=\frac{1}{3}$, and so on. For the polynomial equation (12), or equivalently for
$$h(y,\delta)\equiv y^5+y^2-\delta=0$$
with $\delta=\varepsilon^{2/3}$, it should be observed that for all $\varepsilon\in(0,0.1]$ (or equivalently, for all $\delta$ with $0<\delta\le 0.21544\ldots$), we have:
$$\begin{aligned} h(-1,\delta)&=-\delta<0,\\ h\!\left(-1+\tfrac{1}{3}\delta,\delta\right)&=-\delta^2+\tfrac{10}{27}\delta^3-\tfrac{5}{81}\delta^4+\tfrac{1}{243}\delta^5<0,\\ h\!\left(-1+\tfrac{1}{3}\delta+\tfrac{1}{3}\delta^2,\delta\right)&=-\tfrac{44}{27}\delta^3+\tfrac{4}{81}\delta^4+\tfrac{211}{243}\delta^5+\tfrac{5}{243}\delta^6-\tfrac{50}{243}\delta^7-\tfrac{5}{243}\delta^8+\tfrac{5}{243}\delta^9+\tfrac{1}{243}\delta^{10}<0, \end{aligned}$$
and so on. Observe that all $h$-values are negative. When we find a $y$-value for which $h(y,\delta)$ is positive, then we know that a real root exists and we are able to give an interval in which this root can be found. Now, take for instance $y=-1+\frac{1}{3}\delta+\frac{2}{3}\delta^2$. Then,
$$h\!\left(-1+\tfrac{1}{3}\delta+\tfrac{2}{3}\delta^2,\delta\right)=\delta^2-\tfrac{98}{27}\delta^3-\tfrac{149}{81}\delta^4+\tfrac{961}{243}\delta^5+\tfrac{370}{243}\delta^6-\tfrac{440}{243}\delta^7-\tfrac{160}{243}\delta^8+\tfrac{80}{243}\delta^9+\tfrac{32}{243}\delta^{10}>0,$$
for all $\delta$ with $0<\delta\le 0.21544\ldots$ (i.e., $0<\varepsilon\le 0.1$). So, a third, real root of the polynomial equation (5) exists. This root $x(\varepsilon)=\varepsilon^{-1/3}y(\varepsilon)$ satisfies
$$\varepsilon^{-1/3}\left(-1+\tfrac{1}{3}\varepsilon^{2/3}+\tfrac{1}{3}\varepsilon^{4/3}\right)<x(\varepsilon)<\varepsilon^{-1/3}\left(-1+\tfrac{1}{3}\varepsilon^{2/3}+\tfrac{2}{3}\varepsilon^{4/3}\right)$$
for all ε∈(0,0.1]. From these inequalities, it follows that the third real root
$$x(\varepsilon)=\varepsilon^{-1/3}\left(-1+\tfrac{1}{3}\varepsilon^{2/3}+\tfrac{1}{2}\varepsilon^{4/3}\right)+E^{3}_{abs}(\varepsilon), \tag{15}$$
where the absolute error $E^{3}_{abs}(\varepsilon)$ satisfies $|E^{3}_{abs}(\varepsilon)|<\frac{1}{6}\varepsilon$ for all $\varepsilon\in(0,0.1]$. As for the other two real roots, it can easily be shown that when this third real root is approximated by $\varepsilon^{-1/3}\left(-1+\frac{1}{3}\varepsilon^{2/3}+\frac{1}{2}\varepsilon^{4/3}\right)$, the absolute error and the relative error are both less than 1% for all $\varepsilon\in(0,0.1]$.
In Subsection 3.2, the existence of two complex-valued roots of the polynomial equation (5) is guaranteed by the implicit function theorem for ε sufficiently small. In this subsection, we will study the complex-valued roots of (5). We will study and approximate the real and imaginary parts of the roots separately. Moreover, we will show how accurate the real and imaginary parts are approximated for all ε∈(0,0.1]. Let's consider the polynomial equation (5) again, and let's assume that x=a+ib with a,b∈R, and b≠0. By substituting x=a+ib into (5), and by taking apart the real and imaginary parts in the so-obtained equation, one obtains:
$$\begin{cases} \varepsilon\bigl(a^5-10a^3b^2+5ab^4\bigr)+a^2-b^2-1=0,\\ \varepsilon\bigl(5a^4b-10a^2b^3+b^5\bigr)+2ab=0. \end{cases} \tag{16}$$
Since b≠0, a factor b can be divided out in the last equation of (16). System (16) can now be rewritten in the form:
$$\begin{cases} 5\varepsilon a\,b^4-\bigl(10\varepsilon a^3+1\bigr)b^2=1-a^2-\varepsilon a^5,\\ \varepsilon b^4-10\varepsilon a^2 b^2=-2a-5\varepsilon a^4, \end{cases} \tag{17}$$
or in matrix form $A\mathbf{b}=\mathbf{c}$, where
$$A=\begin{bmatrix} 5\varepsilon a & -(10\varepsilon a^3+1)\\ \varepsilon & -10\varepsilon a^2 \end{bmatrix},\qquad \mathbf{b}=\begin{bmatrix} b^4\\ b^2 \end{bmatrix},\qquad \mathbf{c}=\begin{bmatrix} 1-a^2-\varepsilon a^5\\ -2a-5\varepsilon a^4 \end{bmatrix}.$$
It should be observed that the determinant of $A$ is $\det A=\varepsilon-40\varepsilon^2a^3$. For $\varepsilon>0$, it can be shown elementarily that the case $\det A=0$ does not lead to solutions of the nonlinear system of equations (17). For $\det A\neq 0$, the system $A\mathbf{b}=\mathbf{c}$ can readily be solved for $\mathbf{b}$, yielding:
$$b^4=\frac{-2a+\varepsilon\bigl(-10a^2-15a^4\bigr)-40\varepsilon^2a^7}{\varepsilon-40\varepsilon^2a^3},\qquad b^2=\frac{-\varepsilon\bigl(9a^2+1\bigr)-24\varepsilon^2a^5}{\varepsilon-40\varepsilon^2a^3}. \tag{18}$$
Since $b\in\mathbb{R}\setminus\{0\}$, it follows from (18) that the right-hand sides should be positive. From (18), $b$ can be eliminated by observing that $b^4=(b^2)^2$, and after some manipulations one finds the following polynomial equation for $a$:
$$2a+\varepsilon\bigl(16a^4+28a^2+1\bigr)+\varepsilon^2\bigl(-352a^5-128a^7\bigr)-1024\varepsilon^3a^{10}=0, \tag{19}$$
where $a\in\mathbb{R}$ and $\varepsilon\in(0,0.1]$. To find asymptotic approximations for $a$, one can again apply the rescaling procedure $a=\varepsilon^{\alpha}\bar a$, and as before one finds $\alpha=-\frac{1}{3}$ as a significant rescaling parameter (the computations to obtain $\alpha=-\frac{1}{3}$ are omitted here for convenience; they are similar to the computations presented at the beginning of Section 3). By substituting $a=\varepsilon^{-1/3}\bar a$ into (19), one obtains:
$$g(\bar a,\varepsilon)\equiv 1024\bar a^{10}+128\bar a^{7}-16\bar a^{4}-2\bar a+\varepsilon^{2/3}\bigl(352\bar a^{5}-28\bar a^{2}\bigr)-\varepsilon^{4/3}=0. \tag{20}$$
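The elimination of $b$ leading to (19), and hence to (20), is routine but tedious by hand; the following sympy sketch (our illustration, not code from the paper) reproduces it from system (17), treating $b^2$ and $b^4$ as independent unknowns exactly as in the text:

```python
import sympy as sp

a, eps, B2, B4 = sp.symbols('a epsilon B2 B4', real=True)   # B2 and B4 stand for b^2 and b^4

# System (17), which is linear in (b^4, b^2):
eq1 = sp.Eq(5*eps*a*B4 - (10*eps*a**3 + 1)*B2, 1 - a**2 - eps*a**5)
eq2 = sp.Eq(eps*B4 - 10*eps*a**2*B2, -2*a - 5*eps*a**4)
sol = sp.solve([eq1, eq2], [B4, B2])                        # Cramer's rule, i.e. (18)

# Impose b^4 = (b^2)^2 and clear the common denominator eps - 40*eps^2*a^3:
numer, _ = sp.fraction(sp.together(sol[B2]**2 - sol[B4]))
print(sp.expand(sp.cancel(numer / eps)))   # the polynomial of (19), possibly up to a nonzero constant factor
```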
For $\bar a$, we now use the straightforward perturbation expansion
$$\bar a(\varepsilon)\sim a_0+\varepsilon^{2/3}a_1+\varepsilon^{4/3}a_2+\dots, \tag{21}$$
where the $a_i$ ($i=0,1,2,\dots$) are constants independent of $\varepsilon$, with $a_0\neq 0$. By substituting (21) into (20), and by collecting terms of $O(1)$, $O(\varepsilon^{2/3})$, and so on, one finds the following:
$$\begin{aligned} O(1)\text{-terms:}&\quad 1024a_0^{10}+128a_0^{7}-16a_0^{4}-2a_0=0,\\ O(\varepsilon^{2/3})\text{-terms:}&\quad 10240a_0^{9}a_1+896a_0^{6}a_1+352a_0^{5}-64a_0^{3}a_1-28a_0^{2}-2a_1=0,\\ O(\varepsilon^{4/3})\text{-terms:}&\quad 10240a_0^{9}a_2+46080a_0^{8}a_1^{2}+896a_0^{6}a_2+2688a_0^{5}a_1^{2}+1760a_0^{4}a_1-64a_0^{3}a_2-96a_0^{2}a_1^{2}-56a_0a_1-2a_2-1=0, \end{aligned} \tag{22}$$
and so on. The equation for $a_0$ in (22) can readily be solved, but the only feasible solution (satisfying $a\in\mathbb{R}$, $b\neq 0$, and for which the right-hand sides of (18) are positive) is $a_0=\frac{1}{2}$. Then, it follows from the second and the third equations in (22) that $a_1=-\frac{1}{6}$ and $a_2=\frac{1}{3}$. From (20), it now follows that
$$\begin{aligned} g(a_0,\varepsilon)&=g\!\left(\tfrac{1}{2},\varepsilon\right)=4\varepsilon^{2/3}-\varepsilon^{4/3},\\ g(a_0+\varepsilon^{2/3}a_1,\varepsilon)&=g\!\left(\tfrac{1}{2}-\tfrac{1}{6}\varepsilon^{2/3},\varepsilon\right)=-8\varepsilon^{4/3}+\tfrac{158}{27}\varepsilon^{2}-\tfrac{86}{81}\varepsilon^{8/3}-\tfrac{4}{9}\varepsilon^{10/3}+\tfrac{184}{729}\varepsilon^{4}-\tfrac{121}{2187}\varepsilon^{14/3}+\tfrac{5}{729}\varepsilon^{16/3}-\tfrac{10}{19683}\varepsilon^{6}+\tfrac{1}{59049}\varepsilon^{20/3},\\ g(a_0+\varepsilon^{2/3}a_1+\varepsilon^{4/3}a_2,\varepsilon)&=g\!\left(\tfrac{1}{2}-\tfrac{1}{6}\varepsilon^{2/3}+\tfrac{1}{3}\varepsilon^{4/3},\varepsilon\right)=\tfrac{176}{27}\varepsilon^{2}+\tfrac{1084}{81}\varepsilon^{8/3}-\tfrac{1736}{81}\varepsilon^{10/3}+\tfrac{54094}{729}\varepsilon^{4}-\tfrac{209119}{2187}\varepsilon^{14/3}+\tfrac{291257}{2187}\varepsilon^{16/3}-\tfrac{2515006}{19683}\varepsilon^{6}+\tfrac{7088149}{59049}\varepsilon^{20/3}-\tfrac{5304548}{59049}\varepsilon^{22/3}+\tfrac{421588}{6561}\varepsilon^{8}-\tfrac{750272}{19683}\varepsilon^{26/3}+\tfrac{422752}{19683}\varepsilon^{28/3}-\tfrac{21632}{2187}\varepsilon^{10}+\tfrac{85120}{19683}\varepsilon^{32/3}-\tfrac{28160}{19683}\varepsilon^{34/3}+\tfrac{8960}{19683}\varepsilon^{12}-\tfrac{5120}{59049}\varepsilon^{38/3}+\tfrac{1024}{59049}\varepsilon^{40/3}, \end{aligned}$$
and so on. For $\varepsilon\in(0,0.1]$, it can be shown (by considering the leading order terms, by comparing the other terms pair-wise with each other, or by plotting as a function of $\varepsilon$) that
$$g\!\left(\tfrac{1}{2},\varepsilon\right)>0,\qquad g\!\left(\tfrac{1}{2}-\tfrac{1}{6}\varepsilon^{2/3},\varepsilon\right)<0,\qquad g\!\left(\tfrac{1}{2}-\tfrac{1}{6}\varepsilon^{2/3}+\tfrac{1}{3}\varepsilon^{4/3},\varepsilon\right)>0,$$
and so on. So far, we can conclude that for all ε with 0<ε≤0.1, we have:
$$\varepsilon^{-1/3}\left(\tfrac{1}{2}-\tfrac{1}{6}\varepsilon^{2/3}\right)<a(\varepsilon)<\tfrac{1}{2}\varepsilon^{-1/3},\qquad \varepsilon^{-1/3}\left(\tfrac{1}{2}-\tfrac{1}{6}\varepsilon^{2/3}\right)<a(\varepsilon)<\varepsilon^{-1/3}\left(\tfrac{1}{2}-\tfrac{1}{6}\varepsilon^{2/3}+\tfrac{1}{3}\varepsilon^{4/3}\right).$$
These inequalities are based on the bisection method and imply that
$$a(\varepsilon)=\varepsilon^{-1/3}\left(\tfrac{1}{2}-\tfrac{1}{6}\varepsilon^{2/3}+\tfrac{1}{6}\varepsilon^{4/3}\right)+E^{a}_{abs}(\varepsilon), \tag{23}$$
where the absolute error $E^{a}_{abs}(\varepsilon)$ satisfies $|E^{a}_{abs}(\varepsilon)|<\frac{1}{6}\varepsilon$ for all $\varepsilon\in(0,0.1]$. To approximate the imaginary part $b(\varepsilon)$ of the roots of equation (5), we will now try to eliminate $a(\varepsilon)$ from system (17). This can be accomplished by introducing the transformation
$$b(\varepsilon)=a(\varepsilon)\hat b(\varepsilon), \tag{24}$$
into system (17), yielding
$$\begin{cases} 5\varepsilon\hat b^4-\left(10\varepsilon+\dfrac{1}{a^3}\right)\hat b^2=\dfrac{1}{a^5}-\dfrac{1}{a^3}-\varepsilon,\\[4pt] 5\varepsilon+\varepsilon\hat b^4-10\varepsilon\hat b^2=-\dfrac{2}{a^3}, \end{cases}$$
and this system can simply be rewritten into
$$\begin{cases} -\dfrac{\varepsilon}{2}\hat b^4-\dfrac{5}{2}\varepsilon\hat b^2-\dfrac{3}{2}\varepsilon+\dfrac{\varepsilon}{2}\hat b^6=\dfrac{1}{a^5},\\[4pt] 5\varepsilon\hat b^2-\dfrac{\varepsilon}{2}\hat b^4-\dfrac{5}{2}\varepsilon=\dfrac{1}{a^3}. \end{cases} \tag{25}$$
From system (25), we now easily eliminate a, yielding
$$\left(\tfrac{\varepsilon}{2}\hat b^6-\tfrac{\varepsilon}{2}\hat b^4-\tfrac{5}{2}\varepsilon\hat b^2-\tfrac{3}{2}\varepsilon\right)^{3}=\left(5\varepsilon\hat b^2-\tfrac{\varepsilon}{2}\hat b^4-\tfrac{5}{2}\varepsilon\right)^{5},\quad\text{or equivalently}\quad h(\hat b,\varepsilon)\equiv\left((\hat b^2-3)(\hat b^2+1)^2\right)^{3}-\tfrac{\varepsilon^2}{4}\left(10\hat b^2-\hat b^4-5\right)^{5}=0. \tag{26}$$
From (26), it can be deduced that $\hat b(\varepsilon)$ has to be expanded in the form:
$$\hat b(\varepsilon)=b_0+\varepsilon^{2/3}b_1+\varepsilon^{4/3}b_2+\dots, \tag{27}$$
where $b_0\neq 0, b_1, b_2,\dots$ are $\varepsilon$-independent, real constants. By substituting the expansion (27) into (26), by collecting terms of $O(1)$, $O(\varepsilon^{2/3})$, $O(\varepsilon^{4/3})$, and so on, and by solving the $O(1)$-equation, the $O(\varepsilon^{2/3})$-equation, and so on, one finally finds
$$b_0=\sqrt{3},\quad b_1=\tfrac{2}{3}\sqrt{3},\quad b_2=-\tfrac{4}{9}\sqrt{3},\dots$$
or
$$b_0=-\sqrt{3},\quad b_1=-\tfrac{2}{3}\sqrt{3},\quad b_2=+\tfrac{4}{9}\sqrt{3},\dots$$
For $\varepsilon\in(0,0.1]$, it can be shown (for instance, by plotting $h(\hat b(\varepsilon),\varepsilon)$ as a function of $\varepsilon$) that
$$\begin{aligned} h(b_0,\varepsilon)&=h(\pm\sqrt{3},\varepsilon)<0,\\ h(b_0+\varepsilon^{2/3}b_1,\varepsilon)&=h\!\left(\pm\sqrt{3}\pm\tfrac{2}{3}\sqrt{3}\,\varepsilon^{2/3},\varepsilon\right)>0,\\ h(b_0+\varepsilon^{2/3}b_1+\varepsilon^{4/3}b_2,\varepsilon)&=h\!\left(\pm\sqrt{3}\pm\tfrac{2}{3}\sqrt{3}\,\varepsilon^{2/3}\mp\tfrac{4}{9}\sqrt{3}\,\varepsilon^{4/3},\varepsilon\right)>0. \end{aligned} \tag{28}$$
From the first two inequalities in (28), it follows that:
$$\sqrt{3}<\hat b(\varepsilon)<\sqrt{3}+\tfrac{2}{3}\sqrt{3}\,\varepsilon^{2/3}$$
or
$$-\tfrac{2}{3}\sqrt{3}\,\varepsilon^{2/3}-\sqrt{3}<\hat b(\varepsilon)<-\sqrt{3}.$$
Since there is no sign change in the last two inequalities of (28), we cannot directly use the bisection approach. However, as before, we can look for nearby values of $\hat b(\varepsilon)$ such that $h(\hat b(\varepsilon),\varepsilon)$ is negative. By plotting $h(\hat b(\varepsilon),\varepsilon)$ as a function of $\varepsilon$, one can readily find for $\varepsilon\in(0,0.1]$ that
$$h\!\left(\pm\sqrt{3}\pm\tfrac{2}{3}\sqrt{3}\,\varepsilon^{2/3}\mp\tfrac{6}{9}\sqrt{3}\,\varepsilon^{4/3},\varepsilon\right)<0.$$
So, we obtain for $\hat b(\varepsilon)$ that
$$\sqrt{3}+\tfrac{2}{3}\sqrt{3}\,\varepsilon^{2/3}-\tfrac{6}{9}\sqrt{3}\,\varepsilon^{4/3}<\hat b(\varepsilon)<\sqrt{3}+\tfrac{2}{3}\sqrt{3}\,\varepsilon^{2/3}-\tfrac{4}{9}\sqrt{3}\,\varepsilon^{4/3},$$
or
$$-\sqrt{3}-\tfrac{2}{3}\sqrt{3}\,\varepsilon^{2/3}+\tfrac{4}{9}\sqrt{3}\,\varepsilon^{4/3}<\hat b(\varepsilon)<-\sqrt{3}-\tfrac{2}{3}\sqrt{3}\,\varepsilon^{2/3}+\tfrac{6}{9}\sqrt{3}\,\varepsilon^{4/3}.$$
This implies that
$$\hat b(\varepsilon)=\pm\sqrt{3}\pm\tfrac{2}{3}\sqrt{3}\,\varepsilon^{2/3}\mp\tfrac{5}{9}\sqrt{3}\,\varepsilon^{4/3}\pm E^{\hat b}_{abs}(\varepsilon), \tag{29}$$
where the absolute error $E^{\hat b}_{abs}(\varepsilon)$ satisfies $|E^{\hat b}_{abs}(\varepsilon)|<\frac{1}{9}\sqrt{3}\,\varepsilon^{4/3}$ for all $\varepsilon\in(0,0.1]$. From (23), (24), and (29), we can now determine approximations of the imaginary parts $b(\varepsilon)$ of the roots of the polynomial equation (5):
$$\begin{aligned} b(\varepsilon)=a(\varepsilon)\hat b(\varepsilon)&=\left(\varepsilon^{-1/3}\left(\tfrac{1}{2}-\tfrac{1}{6}\varepsilon^{2/3}+\tfrac{1}{6}\varepsilon^{4/3}\right)+E^{a}_{abs}(\varepsilon)\right)\left(\pm\sqrt{3}\pm\tfrac{2}{3}\sqrt{3}\,\varepsilon^{2/3}\mp\tfrac{5}{9}\sqrt{3}\,\varepsilon^{4/3}\pm E^{\hat b}_{abs}(\varepsilon)\right)\\ &=\dots=\pm\varepsilon^{-1/3}\left(\tfrac{1}{2}\sqrt{3}+\tfrac{1}{6}\sqrt{3}\,\varepsilon^{2/3}-\tfrac{2}{9}\sqrt{3}\,\varepsilon^{4/3}\right)+E^{b}_{abs}(\varepsilon), \end{aligned} \tag{30}$$
where the absolute error $E^{b}_{abs}(\varepsilon)$ satisfies $|E^{b}_{abs}(\varepsilon)|<\frac{1}{3}\sqrt{3}\,\varepsilon$ for all $\varepsilon\in(0,0.1]$. This completes the proof of Theorem 5.
So far, we constructed accurate approximations of the five roots of the polynomial equation (5) for all ε∈(0,0.1]. This example clearly shows how approximations of real or complex-valued roots can be constructed for all ε∈(0,0.1], and how Theorem 3 and Theorem 4 out of Section 2 of this paper can be applied to obtain these results.
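As an independent sanity check (our addition, not part of the proof of Theorem 5), the asymptotic approximations can be compared at $\varepsilon_0=0.1$ with numerically computed roots:

```python
import numpy as np

e = 0.1
# Numerical roots of eps*x^5 + x^2 - 1 = 0 at eps = 0.1 (coefficients listed from degree 5 down).
numeric = np.sort_complex(np.roots([e, 0, 0, 1, 0, -1]))

# Asymptotic approximations from Theorem 5, items (i)-(iv).
r1 = 1 - e/2 + (9/16)*e**2
r2 = -1 - e/2 - (25/16)*e**2
r3 = e**(-1/3) * (-1 + (1/3)*e**(2/3) + (1/2)*e**(4/3))
re4 = e**(-1/3) * (1/2 - (1/6)*e**(2/3) + (1/6)*e**(4/3))
im4 = e**(-1/3) * ((1/2)*np.sqrt(3) + (1/6)*np.sqrt(3)*e**(2/3) - (2/9)*np.sqrt(3)*e**(4/3))
asymptotic = np.sort_complex(np.array([r1, r2, r3, re4 + 1j*im4, re4 - 1j*im4]))

# The observed differences should stay within the error bounds stated in Theorem 5.
for z_num, z_asy in zip(numeric, asymptotic):
    print(complex(z_num), complex(z_asy), abs(z_num - z_asy))
```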
In this paper, we consider $n$-th degree polynomial equations with real coefficients ($a_i\in\mathbb{R}$, $i=0,\dots,n$, and $a_n\neq 0$), that is,
$$P_n(x,\varepsilon)\equiv a_nx^n+a_{n-1}x^{n-1}+\dots+a_1x+a_0=0,$$
where the coefficients ai depend on ε. We consider polynomial equations which contain a small, but fixed parameter ε (which is equal to ε0). For ε tending to zero, we apply perturbation expansions to approximate the roots of the polynomial equation. Implicitly, it is assumed that for ε=0 the (reduced) polynomial equation is solvable, or that after a rescaling (or balancing approach), the so-obtained equation is solvable (by setting a new parameter depending on ε equal to zero). Furthermore, it is assumed that perturbation expansions for these roots can be constructed.
For polynomial equations, the implicit function theorem (as formulated in the introduction of this paper) can be applied. So, straightforward perturbation expansions in $\varepsilon$ (or in another small parameter depending on $\varepsilon$, obtained after a rescaling procedure) can be used to approximate roots of the polynomial equation for $\varepsilon$ tending to zero. The question now is: can these perturbation expansions be used for all $\varepsilon\in(0,\varepsilon_0]$? The answer is affirmative when we have (or we can construct from the expansions) two real expansions, let's say, $x_1(\varepsilon)$ and $x_2(\varepsilon)$, for which $p_n(x_1(\varepsilon),\varepsilon)$ and $p_n(x_2(\varepsilon),\varepsilon)$ have opposite signs for all $\varepsilon\in(0,\varepsilon_0]$. Since $p_n(x,\varepsilon)$ is a polynomial with real coefficients, we then also know that a real root exists, satisfying $x_1(\varepsilon)<x(\varepsilon)<x_2(\varepsilon)$ for all $\varepsilon\in(0,\varepsilon_0]$ (where it has been assumed without loss of generality that $x_1(\varepsilon)<x_2(\varepsilon)$). In fact, this is a simple application of the bisection method. From the inequality for $x(\varepsilon)$, it follows that
$$x(\varepsilon)=\tfrac{1}{2}\bigl(x_1(\varepsilon)+x_2(\varepsilon)\bigr)+E_{abs}(\varepsilon),$$
where the absolute error $E_{abs}(\varepsilon)$ satisfies $|E_{abs}(\varepsilon)|<\tfrac{1}{2}\bigl(x_2(\varepsilon)-x_1(\varepsilon)\bigr)$ for all $\varepsilon\in(0,\varepsilon_0]$.
When application of the perturbation expansion method leads to complex-valued expansions, we only know that the expansions are valid for $\varepsilon$ tending to zero. Moreover, it is not clear beforehand how to prove that the approximations of the complex-valued roots are accurate for all $\varepsilon\in(0,\varepsilon_0]$. To prove this, we propose the following approach. Let a complex-valued root $x(\varepsilon)$ be given by $x(\varepsilon)=a(\varepsilon)+ib(\varepsilon)$ with $a(\varepsilon)$ and $b(\varepsilon)$ real-valued functions in $\varepsilon$ and $b(\varepsilon)\neq 0$. By taking apart the real part and the imaginary part of $p_n(a(\varepsilon)+ib(\varepsilon),\varepsilon)=0$, we obtain
$$\begin{cases} \operatorname{Re}\bigl(p_n(a(\varepsilon)+ib(\varepsilon),\varepsilon)\bigr)=0,\\ \operatorname{Im}\bigl(p_n(a(\varepsilon)+ib(\varepsilon),\varepsilon)\bigr)=0. \end{cases}$$
These real and imaginary parts are both polynomials in $a(\varepsilon)$ and $b(\varepsilon)$. So, in fact, we obtain $f_1(a(\varepsilon),b(\varepsilon),\varepsilon)=0$ and $f_2(a(\varepsilon),b(\varepsilon),\varepsilon)=0$, where $f_1$ and $f_2$ are both polynomials in $a(\varepsilon)$ and $b(\varepsilon)$. Now, this system of polynomial equations can be transformed into regular chains (or equivalently, into a triangular system of polynomial equations) by using the triangular decomposition method for polynomial systems. There are a few algorithms to compute such a triangular decomposition of a polynomial system into regular chains or into regular semi-algebraic systems (see [2], [3], [24]), and we obtain
$$\begin{cases} g_1(a(\varepsilon),\varepsilon)=0,\\ g_2(a(\varepsilon),b(\varepsilon),\varepsilon)=0, \end{cases}\quad\text{or}\quad \begin{cases} g_3(a(\varepsilon),b(\varepsilon),\varepsilon)=0,\\ g_4(b(\varepsilon),\varepsilon)=0, \end{cases}$$
where $g_1, g_2, g_3$, and $g_4$ are real polynomial functions in their arguments. It should be observed that formula manipulation packages like MAPLE or Mathematica have these triangular decomposition methods available in their libraries. Obviously, such triangular systems are ready to be solved by determining the unknowns one after the other. So, suppose we have (or we can construct from the expansions) two real expansions for $a(\varepsilon)$, let's say, $a_1(\varepsilon)$ and $a_2(\varepsilon)$, for which $g_1(a_1(\varepsilon),\varepsilon)$ and $g_1(a_2(\varepsilon),\varepsilon)$ have opposite signs for all $\varepsilon\in(0,\varepsilon_0]$, and similarly for $b(\varepsilon)$ two real expansions, let's say, $b_1(\varepsilon)$ and $b_2(\varepsilon)$, for which $g_4(b_1(\varepsilon),\varepsilon)$ and $g_4(b_2(\varepsilon),\varepsilon)$ have opposite signs for all $\varepsilon\in(0,\varepsilon_0]$. The bisection method then simply implies that (assuming without loss of generality that $a_1(\varepsilon)<a_2(\varepsilon)$ and $b_1(\varepsilon)<b_2(\varepsilon)$):
$$a(\varepsilon)=\tfrac{1}{2}\bigl(a_1(\varepsilon)+a_2(\varepsilon)\bigr)+E^{a}_{abs}(\varepsilon),\qquad b(\varepsilon)=\tfrac{1}{2}\bigl(b_1(\varepsilon)+b_2(\varepsilon)\bigr)+E^{b}_{abs}(\varepsilon), \tag{31}$$
where the absolute errors $E^{a}_{abs}(\varepsilon)$ and $E^{b}_{abs}(\varepsilon)$ satisfy $|E^{a}_{abs}(\varepsilon)|<\tfrac{1}{2}\bigl(a_2(\varepsilon)-a_1(\varepsilon)\bigr)$ and $|E^{b}_{abs}(\varepsilon)|<\tfrac{1}{2}\bigl(b_2(\varepsilon)-b_1(\varepsilon)\bigr)$ for all $\varepsilon\in(0,\varepsilon_0]$.
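The sign conditions required by this algorithmic approach must hold for all $\varepsilon\in(0,\varepsilon_0]$, not just at sample points. For polynomial data, one workable check (our sketch, not the authors' implementation) exploits that a univariate polynomial in $\varepsilon$ with no real root in $(0,\varepsilon_0]$ has a constant sign there, which can then be read off at $\varepsilon=\varepsilon_0$:

```python
import sympy as sp

def sign_on_interval(expr, eps, eps0):
    """Return +1 / -1 if the polynomial expr (in eps) is positive / negative on all of (0, eps0],
    and 0 if it vanishes or changes sign somewhere on that interval."""
    p = sp.Poly(sp.expand(expr), eps)
    if any(0 < r <= eps0 for r in p.real_roots()):   # a real root inside (0, eps0] kills the claim
        return 0
    return int(sp.sign(expr.subs(eps, eps0)))        # no interior root: the sign at eps0 holds throughout

eps = sp.symbols('epsilon', positive=True)
x2 = 1 - eps / 2    # candidate expansion bracketing the root of (5) near x0 = 1
print(sign_on_interval(eps * x2**5 + x2**2 - 1, eps, sp.Rational(1, 10)))   # -1: negative on (0, 0.1]
```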
So, in this section of this paper we indicated how in an algorithmic approach, accurate approximations of (real or complex-valued) roots of polynomial equations can be obtained not only for ε tending to zero, but also for all ε∈(0,ε0].
In this paper, accurate approximations of roots of polynomial equations (with real coefficients and in one variable) have been constructed. The polynomial equations contain a small, positive, and fixed parameter ε0≠0. It has been indicated and proved how accurate approximations of a real or complex-valued root can be obtained for all ε∈(0,ε0]. Of course, it is assumed in this paper (see also Theorem 3 and Theorem 4) that the asymptotic expansions for the roots of the polynomial equations (described by the first few terms in a Taylor or Laurent series in a small parameter depending on ε) exist and can be constructed. We assume that at least two real approximations x1(ε) and x2(ε) for a real root x(ε) exist and can be constructed. For a complex root, we assume that at least two real approximations a1(ε) and a2(ε) for the real part of this root, and at least two real approximations b1(ε) and b2(ε) for the imaginary part of this root, exist and can be constructed.

The emphasis in this paper is on the classical perturbation approach leading to (truncated) Taylor or Laurent series as approximations for the roots. All computations that might be necessary can be done by hand or on a simple laptop by using Mathematica or Maple. So, the computational costs are not high. Instead of the bisection method, other, more advanced methods can be used, such as the Newton-Raphson method. By using this method, one will obtain quotients of polynomials in ε as approximations for the roots. This is, of course, directly related to the famous Padé approximants. Again, the algorithmic approach to approximate roots as described in Section 4 of this paper can be followed. The so-obtained Padé approximants for the roots of the polynomial equation can be an interesting subject for further and future research.

This work was motivated by the first author's thesis [25], in which polynomial equations with small but fixed parameters occurred. Also in [25], a 5th-degree polynomial equation containing a small but fixed parameter was studied to determine the equilibrium points in a system of nonlinear differential equations describing the influence of a medical treatment on cancer cells, immune cells (effector cells), and a compound (IL-2). The polynomial equation in [25] only contains real roots, and for that reason we studied in this paper Eq (5), which contains both real and complex-valued roots.

In our opinion, the simple method as presented in this paper fills a gap in the literature concerning the justification of (asymptotic) approximations of roots of polynomial equations containing a small but fixed parameter. Furthermore, the method based on the perturbation approach and the bisection procedure can most likely be applied to a large class of problems, such as determining approximations of roots of systems of polynomial equations containing a small but fixed parameter, or determining approximations of roots of characteristic equations for differential-delay equations which contain a small but fixed parameter. These problems might be interesting subjects for future research.
Fitriana Yuli Saptaningtyas: Writing-original draft, conceptualization; Wim T Van Horssen: Writing-review & editing, Conceptualization, Supervision; Fajar Adi-Kusumo: Writing-review & editing, Conceptualization, Supervision; Lina Aryati: Writing-review & editing, Conceptualization, Supervision
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
This research is partially funded by Universitas Gadjah Mada through research scheme Rekognisi Tugas Akhir 2021 with the letter of assignment number 3143 / UN1.P.Ⅲ / D1T.L1T / PT / 2021. Furthermore, the authors would like to thank the anonymous reviewers for their useful suggestions to improve our paper.
The study in this paper is done analytically. There is no associated data, and no conflict of interest in this manuscript.
[1] | J. H. Wilkinson, The Algebraic Eigenvalue Problem, Oxford:Clarendon Press, (1965). |
[2] | A. Dickenstein, I. Z. Emiris, Solving Polynomial Equations: Foundations, Algorithms, and Applications: Algorithms and Computation in Mathematics, Berlin, New York: Springer, (2005). |
[3] | D. A. Bini, G. Fiorentino, Design, analysis, and implementation of a multiprecision polynomial rootfinder, Numer. Algorithms, 23 (2000), 127–173. doi: 10.1023/A:1019199917103
[4] | D. A. Bini, L. Robol, Solving secular and polynomial equations: A multiprecision algorithm, J. Comput. Appl. Math., 272 (2014), 276–292. doi: 10.1016/j.cam.2013.04.037
[5] | V. Y. Pan, Acceleration of subdivision root-finding for sparse polynomials, In: F. Boulier, M. England, T. M. Sadikov, E. V. Vorozhtsov, (eds.) CASC 2020, Switzerland: Springer Nature, (2020), 461–477.
[6] | V. Y. Pan, Structured Matrices and Polynomials: Unified Superfast Algorithms, Boston/New York: Birkhäuser/Springer, (2001).
[7] | M. Shams, N. Kausar, S. Araci, L. Kong, B. Carpentieri, Highly efficient family of two-step simultaneous method for all polynomial roots, AIMS Math., 9 (2024), 1755–1771. doi: 10.3934/math.2024085
[8] | S. Qureshi, I. K. Argyros, A. Soomro, K. Gdawiec, A. A. Shaikh, E. Hincal, A new optimal root-finding iterative algorithm: local and semilocal analysis with polynomiography, Numer. Algorithms, 95 (2024), 1715–1745. doi: 10.1007/s11075-023-01625-7
[9] | R. Sihwail, O. Solaiman, K. Ariffin, New robust hybrid Jarratt–Butterfly optimization algorithm for nonlinear models, J. King Saud University Comput. Inf. Sci., 34 (2022), 8207–8220. doi: 10.1016/j.jksuci.2022.08.004
[10] | R. Sihwail, O. Solaiman, K. Ariffin, M. Alswaitti, I. Hashim, A hybrid approach for solving systems of nonlinear equations using Harris hawks optimization and Newton's method, IEEE Access, 9 (2021), 95791–95807. doi: 10.1109/ACCESS.2021.3094471
[11] | O. Solaiman, R. Sihwail, H. Shehadeh, I. Hashim, K. Alieyan, Hybrid Newton-Sperm swarm optimization algorithm for nonlinear systems, Mathematics, 11 (2023), 1473. doi: 10.3390/math11061473
[12] | J. A. Murdock, Perturbations: Theory and Methods, Classics in Applied Mathematics 27, Philadelphia: SIAM, (1999).
[13] | M. H. Holmes, Introduction to Perturbation Methods, New York: Springer, (2010). |
[14] | M. Pakdemirli, G. Sari, A comprehensive perturbation theorem for estimating magnitudes of roots of polynomials, LMS J. Comput. Math., 16 (2013), 1–8. doi: 10.1112/S1461157012001192
[15] | A. Luongo, M. Ferreti, Can a semi-simple eigenvalue admit fractional sensitivities? Appl. Math. Comput., 225 (2015), 165–178. doi: 10.1016/j.amc.2014.01.178
[16] | J. Kevorkian, J. D. Cole, Multiple Scale and Singular Perturbation Methods, Applied Mathematical Sciences 114, New York: Springer, (1996). |
[17] | F. Verhulst, Methods and Applications of Singular Perturbations, Texts in Applied Mathematics, New York: Springer, (2005). |
[18] | R. S. Johnson, Singular Perturbation Theory, New York: Springer, (2005). |
[19] | J. C. Neu, Singular Perturbation in the Physical Sciences, Graduate Studies in Mathematics Vol 167, Providence: AMS, (2015). |
[20] | C. M. Bender, S. A. Orszag, Advanced Mathematical Methods for Scientists and Engineers, Asymptotic Methods and Perturbation Theory, New York Berlin: Springer, (1999). |
[21] | P. D. Miller, Applied Asymptotic Analysis. Graduate Studies in Mathematics, 75, Providence: AMS, (2006). |
[22] | I. V. Andrianov, L. I. Manevitch, Asymptotology, Mathematics and Its Applications, 2 Eds., New York Berlin: Springer, (2002). |
[23] | R. B. White, Asymptotic Analysis of Differential Equations, London:Imperial College Press, (2005). |
[24] | P. Aubry, M. M. Maza, Triangular sets for solving polynomial systems: A comparative implementation of four methods, J. Symb. Comput., 28 (1999), 125–154. doi: 10.1006/jsco.1999.0270
[25] | F. Y. Saptaningtyas, Spatial-Temporal Model of Interaction between the Immune System and Cancer Cells with Immunotherapy in Cervical Cancer, PhD.-Thesis (Draft), Universitas Gadjah Mada, Yogyakarta, Indonesia, (2024). |