1.
Introduction
Fractional calculus is the branch of analysis concerned with integrals and derivatives of arbitrary (non-integer) order; fractional differential equations are obtained by replacing the integer order of a differential equation with a fractional one. Because fractional derivatives possess heredity (memory) properties, fractional differential equations are widely used in many fields, and over the past few decades fractional derivatives have been studied by a large number of scholars. A $ (3-\theta) $-order convergent and stable scheme for the time-fractional derivative of order $ \theta $ at time steps $ k\geq 2 $ was first proved in [1]. A similar $ (3-\theta) $-order scheme for efficiently solving the time-fractional diffusion equation in time was first constructed in [2]. Based on the block-by-block approach, the reference [3] presented a general technique for constructing high order schemes for the numerical solution of fractional ordinary differential equations. The reference [4] established a fractional Gronwall inequality to prove stability and convergence estimates of schemes for fractional reaction-subdiffusion problems. In [5], a space-time finite element method was used to solve distributed-order time-fractional reaction-diffusion equations. A scheme with $ (3-\theta) $ convergence order for time steps $ k\geq 2 $ and $ (2-\theta) $ convergence order for the first time step $ k = 1 $ was constructed for the time-fractional derivative in [6]. The $ (2-\theta) $-order convergence and stability of a time-stepping scheme, where $ \theta $ is the order of the time-fractional derivative, was proved in [7]. A series of high order numerical approximations to $ \theta $-Caputo derivatives $ (0 < \theta < 2) $ and Riemann-Liouville derivatives was given in [8]. Numerical solutions of coupled nonlinear time-fractional reaction-diffusion equations obtained by Lie symmetry analysis were investigated in [9].
A robust finite difference scheme with a first-order convergence result was built for a nonlinear Caputo fractional differential equation on an adaptive grid in [10]. Spectral methods were widely used to treat fractional derivatives in [11,12,13]. An implicit numerical method of convergence order $ 3-\theta $ was constructed in [14] for the two-dimensional time-fractional diffusion-wave equation of order $ \theta $ $ (1 < \theta < 2) $. Piecewise linear and quadratic Lagrange interpolation functions were used in [15] to construct a $ (3-\theta) $-order numerical approximation of the Caputo fractional derivative. In [16], a fast high order numerical method for nonlinear fractional-order differential equations with non-singular kernel was constructed. A numerical scheme for nonlinear fractional differential equations was constructed by the Galerkin finite element method in [17]. The reference [18] established an $ hp $-version Chebyshev collocation method for nonlinear fractional differential equations, and a spectral collocation method for nonlinear Riemann-Liouville fractional differential equations was given in [19]. The high order schemes above are either implicit, use low-order approximations in the first step, or are obtained by discretizing an equivalent form of the original equation. In this paper, based on the ideas of [1,2,3], we therefore construct a high order numerical scheme with uniform convergence order by directly discretizing the original equation.
This paper is arranged as follows. In Section 2 the high order numerical scheme is proposed. The local truncation error of the constructed scheme is given in Section 3, and the convergence analysis is carried out in Section 4. In Section 5 some numerical examples are given. Finally, some conclusions are drawn in Section 6.
2.
A high order finite difference scheme for Caputo derivative
We consider the following initial value problem of nonlinear fractional ordinary differential equations
where $ \theta $ is the order of the fractional derivative, and $ _{0}D_{t}^{\theta} u(t) $ in (2.1) is the Caputo fractional derivative of order $ \theta $ given by
where $ \omega_{1-\theta} $ is defined by $ \omega_{1-\theta}(t) = \frac{t^{-\theta}}{\Gamma(1-\theta)}, $ and $ \Gamma(\cdot) $ denotes the Gamma function [20].
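To make the kernel concrete, here is a minimal Python sketch (ours, purely illustrative; `caputo_t3` is a hypothetical helper, not the paper's scheme). It approximates the Caputo derivative of $ u(t) = t^{3} $ by composite midpoint quadrature of $ \int_{0}^{t}\omega_{1-\theta}(t-s)\,u'(s)\,ds $ and compares the result with the closed form $ \frac{\Gamma(4)}{\Gamma(4-\theta)}t^{3-\theta} $.

```python
from math import gamma

def omega(beta, t):
    # kernel omega_beta(t) = t^{beta-1} / Gamma(beta);
    # in particular omega_{1-theta}(t) = t^{-theta} / Gamma(1-theta)
    return t**(beta - 1) / gamma(beta)

def caputo_t3(theta, t, n=100000):
    # Caputo derivative of u(s) = s^3:
    #   (1 / Gamma(1-theta)) * int_0^t (t-s)^{-theta} * u'(s) ds,  with u'(s) = 3 s^2.
    # The midpoint rule avoids evaluating the weakly singular kernel at s = t.
    h = t / n
    total = 0.0
    for i in range(n):
        s = (i + 0.5) * h
        total += omega(1 - theta, t - s) * 3 * s**2
    return total * h

theta, t = 0.5, 1.0
exact = gamma(4) / gamma(4 - theta) * t**(3 - theta)
approx = caputo_t3(theta, t)
```

For $ \theta = 0.5 $, $ t = 1 $, the quadrature agrees with the exact value to a few times $ 10^{-3} $; the remaining error comes from the singular factor at $ s = t $.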
First, a finite difference scheme is introduced to discretize the fractional derivative. Divide the interval $ [0, T] $ into $ K $ equal subintervals and set $ t_k = k\Delta t $, $ k = 0, 1, \ldots, K $, with $ \Delta t = \frac{T}{K} $. The numerical solution of (2.1) at $ t_k $ is denoted by $ u_k $, and we write $ _{0}D_{t}^{\theta} u\left(t_{k}\right) = {_{0}D}_{t}^{\theta} u_{k} $, $ f_{k} = f\left(t_{k}, u_{k}\right) $.
Next, we discretize the fractional derivative in (2.1). First, the values of $ u(t) $ at $ t_1 $ and $ t_2 $ are determined. Using quadratic Lagrange interpolation, the approximation of $ u(t) $ on the interval $ [t_0, t_2] $ is
where $ \varpi_{i, 0}(t) $, $ i = 0, 1, 2, $ are the quadratic Lagrange interpolation basis functions at the points $ t_0 $, $ t_1 $ and $ t_2 $, defined as follows
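For reference, the standard quadratic Lagrange basis (which is what the formulas above define on $ t_0, t_1, t_2 $) can be sketched in Python; the function names are ours, and the checks use the facts that the basis sums to one and reproduces quadratics exactly.

```python
def quad_basis(t, t0, t1, t2):
    # quadratic Lagrange basis functions at the nodes t0, t1, t2
    w0 = (t - t1) * (t - t2) / ((t0 - t1) * (t0 - t2))
    w1 = (t - t0) * (t - t2) / ((t1 - t0) * (t1 - t2))
    w2 = (t - t0) * (t - t1) / ((t2 - t0) * (t2 - t1))
    return w0, w1, w2

def interp(t, nodes, values):
    # quadratic interpolant J_{[t0,t2]} u evaluated at t
    w = quad_basis(t, *nodes)
    return sum(wi * ui for wi, ui in zip(w, values))

nodes = (0.0, 0.5, 1.0)
# partition of unity: the basis functions sum to 1 at any point
assert abs(sum(quad_basis(0.37, *nodes)) - 1.0) < 1e-12
# the interpolant reproduces the quadratic u(t) = t^2 exactly
for t in (0.1, 0.3, 0.8):
    assert abs(interp(t, nodes, [s**2 for s in nodes]) - t**2) < 1e-12
```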
Let $ \Delta u_{k} = u_{k}-u_{k-1} $, $ \Delta u_{0} = u_{0} $, $ \forall k\geq1 $. When $ k = 1, 2 $, we use $ _{0}D_{t}^{\theta}\left(J_{\left[t_{0}, t_{2}\right]} u\right)\left(t_{1}\right) $ and $ _{0}D_{t}^{\theta}\left(J_{\left[t_{0}, t_{2}\right]} u\right)\left(t_{2}\right) $ to approximate $ _{0}D_{t}^{\theta}u(t_1) $ and $ _{0}D_{t}^{\theta}u(t_2) $, respectively, and obtain
where
When $ k \geq 3 $, we have
from which we conclude that
where
According to the approximations (2.5), (2.6) and (2.8), the high order numerical scheme for Eq (2.1) with the initial value (2.2) is as follows
Remark 2.1. The numerical algorithm in this paper is different from that of [15]. In [15], in order to obtain a $ (3-\theta) $-order numerical scheme for the linear form of (2.1), they transformed (2.1) into the following equation
Then, using linear Lagrange interpolation of $ u(s) $ on $ [t_0, t_1] $, they obtained a $ (3-\theta) $-order numerical scheme. In this paper, we use quadratic Lagrange interpolation of $ u(s) $ on $ [t_0, t_2] $ in (2.1) and directly obtain the $ (3-\theta) $-order numerical scheme (2.5) and (2.6).
Next, we introduce a lemma that establishes the signs of the coefficients $ A_{k-j}^{k} $ and their ordering.
Lemma 2.1. $ A_{0}^{k} > 0 $, the sign of $ A_{1}^{k} $ is uncertain, $ A_{2}^{k} > A_{3}^{k} > \ldots > A_{k-1}^{k} > 0. $
Proof. Through calculation, we can get the following results
The sign of $ A_{1}^{k} $ is determined by the sign of $ M(\theta) $, which satisfies
Therefore $ M(\theta) $ is a decreasing function on $ \theta\in(0, 1) $. Since $ M(0) = 2 > 0 $ and $ M(1) = -\frac{1}{2} < 0 $, $ M(\theta) $ has exactly one zero $ \theta_0 $ in $ (0, 1) $, with $ M(\theta) > 0 $ for $ \theta\in(0, \theta_0) $ and $ M(\theta) < 0 $ for $ \theta\in(\theta_0, 1) $, which agrees with the conclusion of Lemma 2.1.
For $ j = 3, \ldots, k-2 $, we have
Let $ \bar j = k-j $, $ \bar j \geq 2 $; then, using a Taylor expansion, we can get the following results
where $ a_{i} = \prod_{n = 0}^{i}(1-\theta-n)\frac{1}{(i+2)!}\left[(-1)^{i} \left(\frac{-i}{2}\right)+\frac{i+4}{2}\right] $. Next, we prove that $ \bar j^{-\theta}\sum_{i = 0}^{+\infty} a_{i}\bar j^{-i} $ is a decreasing function of $ \bar j $. Because $ \sum_{i = 0}^{+\infty} a_{i}\bar j^{-i} $ is a convergent power series within its radius of convergence, we may exchange differentiation and summation.
We find that $ \sum_{i = 2}^{+\infty} (i+\theta)a_{i}\bar j^{-i-1} $ is an alternating series with positive first term, so $ \sum_{i = 2}^{+\infty} (i+\theta)a_{i}\bar j^{-i-1} > 0 $, and we have
Therefore, $ \bar j^{-\theta}\sum_{i = 0}^{+\infty} a_{i}\bar j^{-i} $ is a decreasing function of $ \bar j $. According to (2.15), we obtain
Next, we prove $ A_{k-3}^{k} > A_{k-2}^{k} > A_{k-1}^{k} > 0. $ Through careful calculation, we get
where
According to (1) in Lemma A.1, we get $ F_{1} > 0, F_{2} > 0 $. Therefore, we have
In the following step, we prove $ A_{k-1}^{k} > 0 $. By careful calculation, we have
According to (2.33), we have
Combining (2.12), (2.13), (2.16), (2.17) with (2.19), we already proved Lemma 2.1.
We let $ \Delta \bar u_{j} = \Delta (u_{j}-\rho u_{j-1}) $, $ \Delta \bar u_{0} = \Delta u_{0} $, and let $ \rho = \frac{1}{2}\left(1-\frac{A_1^k}{A_0^k}\right) = \frac{1}{2}\left(3+\frac{\theta-6}{4-\theta}\cdot2^{1-\theta}\right) $. Then, from (2.8), we get
where
Next, we discuss the ordering of the coefficients $ \bar A_{k-j}^{k} $.
Lemma 2.2. $ \bar A_{0}^{k} > \bar A_{1}^{k} > \bar A_{2}^{k} > \ldots > \bar A_{k-1}^{k} > 0 $.
Proof. When $ k = 3 $, we only need to prove $ \bar A_{0}^{3} > \bar A_{1}^{3} > \bar A_{2}^{3} > 0. $ By careful calculation, we can get
According to (2) in Lemma A.1, we get
By direct calculation, we have
According to (3) in Lemma A.1, we have
Next, we prove $ \bar A_{2}^{3} = A_{2}^{3}+\rho\bar A_{1}^{3} > 0 $. According to (2.41), $ \bar A_{1}^{3} > 0 $, so we only need to prove $ A_{2}^{3} > 0 $. By calculation, we have $ A_{2}^{3} = \frac{1}{2\Delta t^{\theta}\Gamma(3-\theta)}\left[4-\theta-\theta\cdot3^{2-\theta}\right] > 0 $; therefore,
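As a quick numerical sanity check (ours, not part of the proof), the positivity of the bracketed factor $ 4-\theta-\theta\cdot3^{2-\theta} $ on $ (0, 1) $ can be confirmed on a fine grid:

```python
def bracket(theta):
    # bracketed factor in A_2^3 = [4 - theta - theta * 3^(2-theta)] / (2 dt^theta Gamma(3-theta))
    return 4 - theta - theta * 3**(2 - theta)

assert abs(bracket(1.0)) < 1e-12                         # the factor vanishes at theta = 1
assert all(bracket(i / 100) > 0 for i in range(1, 100))  # and is positive on (0, 1)
```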
Combining (2.22), (2.23) and (2.24), when $ k = 3 $, we know that Lemma 2.2 holds.
When $ k\geq 4, $ through careful calculation, we can draw the following conclusions
According to (4) in Lemma A.1 we have $ \rho > 0 $, hence $ \bar A_{0}^{k} > \bar A_{1}^{k}. $ Because
where $ \tilde f(\theta) = -3(4-\theta)^2+6(4-\theta)(6-\theta)2^{1-\theta}-4(4-\theta)(8-\theta)3^{1-\theta} +(6-\theta)^2\cdot4^{1-\theta}. $ According to (5) in Lemma A.1, we can get $ \tilde f(\theta) > 0 $. It can be concluded that
By careful calculation, we can get $ \bar A_{3}^{k}-\bar A_{2}^{k} = A_{3}^{k}-A_{2}^{k}+\rho(\bar A_{2}^{k}-\bar A_{1}^{k}). $ According to $ \rho > 0 $, (2.26) and Lemma 2.1, we can get
By mathematical induction, and Lemma 2.1 we can conclude that
So we have
Next we prove that $ \bar A_{k-1}^{k} > 0. $ According to (2.21), we have
According to (2.37), we get $ \bar A_{1}^{k} > 0 $. Using Lemma 2.1, $ \rho > 0 $, we have
Combining (2.28) with (2.29), we already proved Lemma 2.2.
Lemma 2.3. There is a constant $ \pi_{A}\geq6 $ such that the discrete kernel satisfies the following lower bound.
Proof. When $ k\geq4 $, according to (2.21), Lemma 2.1 and Lemma 2.2, we can get
For $ j = 1 $, we have
Let $ \tilde{j} = k-1 $, $ \tilde{j}\geq 3 $. Expanding by the Taylor formula, we can get the following results
where $ a_{i} = \prod_{n = 0}^{i}(1-\theta-n)\frac{1}{(i+2)!}\left(\frac{1}{\tilde{j}}\right)^{i}\left[(-1)^{i+1}\frac{i}{2} +\frac{3i+4}{2}\right] $, $ a_{0} = 1-\theta $, $ a_{1} = \frac{8(1-\theta)(-\theta)}{12\tilde{j}} $, and $ \sum_{i = 2}^{+\infty}a_{i} $ is a convergent alternating series with $ a_{2} > 0 $; therefore $ 0 \leq \sum_{i = 2}^{+\infty}a_{i} \leq a_{2}, $ so
We have
For $ j = 2 $, we have
Let $ \hat{j} = k-2 $, $ \hat{j}\geq2 $; then, using a Taylor expansion similar to the case $ j = 1 $, we obtain by calculation
where $ a_{i} = \prod_{n = 0}^{i}(1-\theta-n)\frac{1}{(i+2)!}\left(\frac{1}{\hat{j}}\right)^{i} \left[\frac{i}{-2}\cdot(-1)^{i}+\frac{2-i}{2}2^{i+1}\right] $. Because $ \sum_{i = 3}^{+\infty}a_{i} $ is a convergent alternating series and $ a_{3} > 0 $, we have $ 0 < \sum_{i = 3}^{+\infty}a_{i} < a_{3}, $ and we get
therefore, we have
For $ j = 3, 4, \ldots, k-2 $, we have
let $ \bar j = k-j $, $ \bar j \geq 2 $, we have
then using Taylor expansion, we can get the following results
where $ a_{i} = \prod_{n = 0}^{i}(1-\theta-n)\frac{1}{(i+2)!}\left(\frac{1}{\bar j}\right)^{i}\left[(-1)^{i} \left(\frac{-i}{2}\right)+\frac{i+4}{2}\right]. $ Because $ \sum_{i = 2}^{+\infty} a_{i} $ is a convergent alternating series and $ a_{2} > 0 $, we have $ 0\leq \sum_{i = 2}^{+\infty} a_{i}\leq a_{2} $, so
so we can get
When $ j = k-1 $,
Therefore, we have
When $ j = k $, according to (2.30), we can get
We have
When $ k = 3 $, $ \bar A_{0}^{3} $ is given in (2.39), so $ \frac{1}{\pi_{A}} \leq 1 $. According to (2.30), we only need
Therefore, we have
Next, we calculate $ \bar A_{2}^{3} $,
Therefore, we have
Combining (2.33)–(2.38) with (2.12)–(2.43), we have already proved Lemma 2.3.
3.
Estimation of the truncation errors
Now we turn to derive an estimate for the truncation errors of the scheme (2.10). We start with deriving an error estimate for the finite difference operator $ {_{0}D}_{\Delta_{t}}^{\theta}u_{k} $.
Theorem 3.1. Assume $ u(t) \in C^{3}[0, T] $. Let
Then there exists a constant $ C_{u} $ depending only on the function u, such that for all $ \Delta t > 0 $,
Proof. Our error estimation will be established on the following Taylor theorem
where $ \xi(\tau) $ is a function defined on $ [t_{0}, t_{2}] $ with range $ (t_{0}, t_{2}) $. We first estimate $ r_1(\Delta t) $
This proves (3.2) for $ k = 1 $. The case $ k = 2 $ can be similarly proven, and here we omit the details.
When $ k \geq 3 $, we have
Similar to the proof of $ |r_1(\Delta t)| $, we have
For $ N $, we have
In the above inequality, we use $ |(\tau-t_{j-1})(\tau-t_{j})(\tau-t_{j+1})|\leq\frac{2\sqrt{3}}{9}\Delta t^{3}. $
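The bound $ \frac{2\sqrt{3}}{9}\Delta t^{3} $ is the maximum of the cubic nodal polynomial, attained at $ \tau = t_{j}\pm\frac{\Delta t}{\sqrt{3}} $; a quick numerical check (ours), with $ x = (\tau-t_{j})/\Delta t \in [-1, 1] $:

```python
from math import sqrt

# maximize |x (x - 1) (x + 1)| over x in [-1, 1]; in units of dt this is
# |(tau - t_{j-1}) (tau - t_j) (tau - t_{j+1})| / dt^3
n = 10**5
m = max(abs(x * (x - 1) * (x + 1))
        for x in (-1 + 2 * i / n for i in range(n + 1)))
assert m <= 2 * sqrt(3) / 9 + 1e-9      # never exceeds the stated bound
assert abs(m - 2 * sqrt(3) / 9) < 1e-8  # the bound is attained at x = ±1/sqrt(3)
```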
We can get the following conclusions
This completes the proof of Theorem 3.1.
4.
Convergence analysis
In order to carry out the convergence analysis, we now introduce an important tool: the complementary discrete convolution kernel. Since $ \omega_{\theta} \ast \omega_{\beta} = \omega_{\theta+\beta} $, we have
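For completeness, the convolution identity follows from the Beta integral, with $ \omega_{\gamma}(t) = t^{\gamma-1}/\Gamma(\gamma) $ (consistent with the definition of $ \omega_{1-\theta} $ in Section 2):

```latex
\begin{aligned}
(\omega_{\theta}\ast\omega_{\beta})(t)
&= \int_{0}^{t}\frac{(t-s)^{\theta-1}}{\Gamma(\theta)}\cdot
   \frac{s^{\beta-1}}{\Gamma(\beta)}\,ds
 = \frac{t^{\theta+\beta-1}}{\Gamma(\theta)\Gamma(\beta)}
   \int_{0}^{1}(1-\sigma)^{\theta-1}\sigma^{\beta-1}\,d\sigma
   \qquad (s = \sigma t)\\
&= \frac{t^{\theta+\beta-1}}{\Gamma(\theta)\Gamma(\beta)}\,B(\beta,\theta)
 = \frac{t^{\theta+\beta-1}}{\Gamma(\theta+\beta)}
 = \omega_{\theta+\beta}(t).
\end{aligned}
```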
A class of complementary discrete convolution kernels $ P_{k-i}^{k} $ with the analogous property is given by
According to [3], we have
where $ P_{i}^{k} > 0 $, $ i = 0, 1, \ldots, k-3 $, follows from Lemma 2.2.
Next, we introduce Lemma 2.1 of [3], as follows.
Lemma 4.1. In (4.3), the discrete kernel $ P_{i}^{k} $ has the property of (4.2) and satisfies the following conditions
Now we can analyze the convergence of $ _0D_{\Delta t}^{\theta}u_{j} $. First, we assume that $ f(t, u) $ satisfies the following differential mean value property, i.e., there exists a constant $ L $ such that
and $ |L_{k}|\leq L, \forall k\geq1 $.
Theorem 4.1. Assume that $ u $ is the exact solution of (2.1) and (2.2), and that $ \{u_{k}\}_{k = 1}^{K} $ is the numerical solution of (2.10). If $ \theta \in (0, 1) $ and the step size $ \Delta t $ satisfies
where $ \pi_A\geq6 $, and $ \Lambda = 12L $, then the following error estimates hold,
where $ C $ depends only on the function $ u $.
Proof. Since $ \Delta u_{1} = u_{1}-u_{0} $ and $ \Delta u_{2} = u_{2}-u_{1}, $ through calculation we can get
Let $ -B_{1}^{1} = \Delta t^{-\theta}\bar D_{0}^{1} $, $ B_{1}^{1}-B_{1}^{2} = \Delta t^{-\theta}\bar D_{1}^{1} $, $ B_{1}^{2} = \Delta t^{-\theta}\bar D_{2}^{1} $; $ -B_{2}^{1} = \Delta t^{-\theta}D_{0}^{1}, $ $ B_{2}^{1}-B_{2}^{2} = \Delta t^{-\theta}D_{1}^{1}, $ $ B_{2}^{2} = \Delta t^{-\theta}D_{2}^{1}. $ It is easy to obtain by calculation
Combining these with (4.6), we can get
That is
where $ D = \begin{pmatrix} \bar D^{1}_{1} & \bar D^{1}_{2}\\ D^{1}_{1} & D^{1}_{2}\\ \end{pmatrix}. $ Since the determinant $ \bar D^{1}_{1}D^{1}_{2}-\bar D^{1}_{2}D^{1}_{1} $ is bounded away from zero, then
therefore,
when $ j = 1, 2, $ (4.8) is proved.
When $ i\geq 3 $, we set $ \bar e_{i} = e_{i}-\rho e_{i-1} $, $ i = 1, \ldots, j $, and $ \bar e_{0} = e_{0} $; for (4.13), we have
Combining with (2.20) and (4.6), $ \bar e_{j} $, $ j\geq 3 $, satisfy
Because of $ e_{j} = \sum_{n = 1}^{j}\rho^{j-n}\bar e_{n} $, then
where
Multiplying both sides of (4.16) by $ 2\bar e_{k} $ and using Lemma A.1 of Liao [3], we can obtain that
To sum up, we can get the following conclusions
Replacing $ k $ by $ i $ in (4.18), multiplying both sides by $ P_{k-i}^{k} $ and summing over $ i $ from $ 3 $ to $ k $, we get
From (4.2) and $ \Delta u_{j} = u_{j}-u_{j-1} $, we can get the following results
Substituting (4.20) into (4.19) yields
Next, we will prove it by mathematical induction
where $ \Lambda = 12L $ and $ E_{\theta} $ stands for the Mittag-Leffler function, defined by $ E_{\theta}(z) = \sum_{j = 0}^{\infty}\frac{z^{j}}{\Gamma(1+j\theta)} $, with $ E_{\theta}(0) = 1 $ for all $ z\in R $, and $ E_{\theta}^{\prime}(z) > 0 $ for $ z > 0 $; hence $ F_{k}\geq F_{k-1}\geq2 $ for all $ k\geq2 $, where $ G_{k} = |\bar{e}_{2}|+2 {\max_{3 \leq j\leq k}} \sum_{i = 3}^{k} P_{j-i}^{j}\left|\bar{r}_{i}(\Delta t)\right| $.
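A truncated-series evaluation of $ E_{\theta} $ (our sketch; the truncation length is an arbitrary choice) confirms the properties used here, namely $ E_{\theta}(0) = 1 $ and monotone growth on $ z > 0 $:

```python
from math import gamma, exp

def mittag_leffler(theta, z, terms=100):
    # E_theta(z) = sum_{j>=0} z^j / Gamma(1 + j*theta), truncated after `terms` terms
    return sum(z**j / gamma(1 + j * theta) for j in range(terms))

assert abs(mittag_leffler(0.5, 0.0) - 1.0) < 1e-15       # E_theta(0) = 1
assert abs(mittag_leffler(1.0, 1.0) - exp(1.0)) < 1e-12  # E_1(z) = e^z
assert mittag_leffler(0.5, 2.0) > mittag_leffler(0.5, 1.0) > 1.0  # increasing for z > 0
```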
When $ k = 3 $: (i) if $ |\bar e_{3}| \leq |\bar e_{2}| $, then $ G_{3} = |\bar e_{2}|+2P_{0}^{3}|\bar r_{3}(\Delta t)| \geq |\bar e_{2}| $, and we have $ |\bar e_{3}|\leq |\bar e_{2}| \leq G_{3}\leq F_{3}G_{3}; $
(ii) if $ |\bar e_{3}| > |\bar e_{2}| $, from (4.21) it can be concluded that
According to (4.4), it can be concluded that
so $ |\bar e_{3}|\leq 2(|\bar{e}_{2}|+2P_{0}^{3}|\bar{r}_{3}(\Delta t)|)\leq 2G_{3}\leq F_{3}G_{3}. $
When $ 4\leq k\leq K $, we assume that $ |\bar e_{j}|\leq F_{j}G_{j} $ for $ 3 \leq j\leq k-1 $, and let $ | \bar e_{j(k)}| = \max_{2\leq i\leq k-1}|\bar e_{i}|. $
(i) When $ |\bar e_{k}|\leq |\bar e_{j(k)}| $, since $ F_{j} $ and $ G_{j} $ increase monotonically with $ j $, we have
(ii) When $ |\bar e_{k}|\geq |\bar e_{j(k)}| $, from (4.21) we can obtain that
From (4.4), the restriction on the maximum step size implies that
Using Lemma 2.3 of [3], for any real $ \mu > 0 $,
We have
To sum up, the proof of (4.22) is complete.
Next, we carefully estimate $ |\bar r_{i}(\Delta t)| $. According to (4.17), we have
Next, we estimate the upper bounds of $ \bar A_{k-1}^{k} $ and $ \bar A_{k-2}^{k} $. According to (2.37), Lemma 2.1 and (4) in Lemma A.1, we can obtain
Therefore, $ \bar A_{k-2}^{k}\leq \frac{109}{4\Delta t^{\theta}\Gamma(3-\theta)} $. According to Lemma 2.2, we can get $ \bar A_{k-1}^{k} < \bar{A}_{k-2}^{k}\leq \frac{109}{4\Delta t^{\theta}\Gamma(3-\theta)} $. Therefore, we can obtain from (4.23),
In the expression on the right side of (4.22),
according to (4.5) in Lemma 4.1, we can obtain that
Since $ \frac{1}{\omega_{1-\theta}(t_{i})} = \Gamma(1-\theta)t_{i}^{\theta} $, (4.24) becomes
Based on (4.13), (4.25) and (4.22), we obtain
According to $ e_{k} = \bar e_{k}+\rho e_{k-1} = \sum_{n = 0}^{k}\rho^{k-n}\bar e_{n} $, and since $ 0 < \rho < \frac{2}{3} $ gives $ \sum_{n = 0}^{k}\rho^{k-n} < \frac{1}{1-\rho} \leq 3 $, we have $ |e_{k}|\leq 3 \max_{0\leq n \leq k } |\bar e_{n}| $. Then the proof is completed.
5.
Numerical results
In this section, two examples are presented to verify the effectiveness of our numerical method (2.10).
Example 5.1. We consider the problem (2.1) with $ u(0) = 0 $, and
The exact solution of Eq (2.1) is $ u(t) = t^{3+\theta} $. We take $ T = 1 $ and choose the step sizes $ \Delta t = 2^{-l} $, $ l = 7, 8, \ldots, 11 $. The error displayed is defined by $ e_{\Delta t} = \max_{k = 1, 2, \cdots, K}|u(t_{k})-u_{k}|, K = T/\Delta t. $
In (5.1), $ f $ is linear in $ u $. The convergence order is computed by $ \log_2(e_{2\Delta t}/e_{\Delta t}) $. From Table 1 it is observed that for $ \theta = 0.3, 0.6 $ and $ 0.9 $, the convergence rates are close to $ 2.7, 2.4 $ and $ 2.1 $, respectively, in good agreement with the theoretical prediction. In Table 2, we take $ \theta = 0.01 $ and $ 0.99 $ and obtain convergence rates close to $ 2.99 $ and $ 2.01 $; this tells us that as $ \theta\rightarrow0 $ or $ 1 $, the convergence rate remains $ 3-\theta $. In (5.2), $ f $ is nonlinear in $ u $. From Tables 3 and 4, once again these results confirm that the convergence of the numerical solution is close to order $ 3-\theta $ for $ 0 < \theta < 1 $.
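The order computation used in the tables can be sketched as follows; the error values below are synthetic ($ e_{\Delta t} = C\,\Delta t^{3-\theta} $, with an arbitrary constant), since the table data themselves are not reproduced here.

```python
from math import log2

def observed_orders(errors):
    # errors[i] is the maximum error at step size 2^{-(7+i)};
    # halving the step gives the observed order log2(e_{2*dt} / e_dt)
    return [log2(errors[i] / errors[i + 1]) for i in range(len(errors) - 1)]

theta, C = 0.6, 4.2          # synthetic data: e = C * dt^(3 - theta)
p = 3 - theta
errs = [C * (2.0**-l)**p for l in range(7, 12)]
orders = observed_orders(errs)
assert all(abs(q - p) < 1e-9 for q in orders)   # recovers the order 3 - theta = 2.4
```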
Example 5.2. We consider the problem (2.1) and (2.2) with the following right-hand side function
The corresponding exact solution is $ u(t) = \sin(t) $, where $ Sin_{\alpha, \beta}(t) = \sum_{k = 1}^\infty (-1)^{k+1}\frac{t^{2k-1}}{\Gamma(\alpha(2k-1)+\beta)} $. $ f $ is linear in $ u $ in (5.3) and nonlinear in $ u $ in (5.4).
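A truncated evaluation of $ Sin_{\alpha, \beta} $ (our sketch; the truncation length is an arbitrary choice) confirms that for $ \alpha = \beta = 1 $, where $ \Gamma((2k-1)+1) = (2k-1)! $, the series reduces to the Maclaurin series of $ \sin t $:

```python
from math import gamma, sin

def sin_ab(alpha, beta, t, terms=40):
    # Sin_{alpha,beta}(t) = sum_{k>=1} (-1)^{k+1} t^{2k-1} / Gamma(alpha*(2k-1) + beta)
    return sum((-1)**(k + 1) * t**(2 * k - 1) / gamma(alpha * (2 * k - 1) + beta)
               for k in range(1, terms + 1))

for t in (0.5, 1.0, 1.5):
    assert abs(sin_ab(1.0, 1.0, t) - sin(t)) < 1e-12  # Sin_{1,1} = sin
```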
We repeat the same calculation as in Example 5.1 using the proposed numerical scheme. Tables 5 and 6 show the maximum errors and convergence rates for several values of $ \theta $ ranging from $ 0.3 $ to $ 0.9 $. The convergence rate is close to $ 3-\theta $ for $ 0 < \theta < 1 $, which is consistent with the theoretical prediction.
6.
Conclusions
In this paper, we presented a high order numerical method for solving Caputo nonlinear fractional ordinary differential equations. The numerical method was constructed using quadratic Lagrange interpolation. By careful error estimation, the proposed scheme is shown to be of order $ 3-\theta $ for $ 0 < \theta < 1 $. Finally, numerical experiments were carried out to verify the theoretical prediction. In the future, we plan to apply this kind of method to 3D fractional partial differential equations with time derivatives based on the ideas of [21] and [22].
Acknowledgments
This research was supported by the National Natural Science Foundation of China (Grant No. 11901135, 11961009), the Foundation of Guizhou Science and Technology Department (Grant No. [2020]1Y015, [2017]1086), and the Foundation for graduate students of Guizhou Provincial Department of Education (Grant No. YJSCXJH[2020]136). The authors thank the anonymous referees for their valuable suggestions to improve the quality of this work significantly.
Conflict of interest
The authors declare that there is no conflict of interests regarding the publication of this article.
A.
Proof of some inequalities
Lemma A.1.
Proof. (1) Firstly, we prove $ F_{1} > 0 $. Let $ k-2 = x $; the Taylor formula is used to obtain the following results
where $ a_{n} = \frac{\frac{1}{2}(n+1)\cdot(-1)^{n}-8(n+1)\cdot2^{n}}{(n+3)!} = \frac{\frac{1}{2}(n+1)[(-1)^{n}-16\cdot2^{n}]}{(n+3)!} < 0, $ so $ \sum_{n = 0}^{+\infty}\prod_{i = 0}^{n}(-\theta-i)(\frac{1}{x})^{n}\cdot a_{n} $ is an alternating series with positive first term, and hence $ \sum_{n = 0}^{+\infty}\prod_{i = 0}^{n}(-\theta-i)(\frac{1}{x})^{n}\cdot a_{n} > 0; $ therefore $ F_{1} > 0. $
Next, we prove $ F_2 > 0 $. Let $ k-2 = \bar{x} $; the Taylor formula is used to obtain the following results
where $ b_{n} = \frac{3}{2}(n+1)(-1)^{n+1}+2(n-1)\cdot2^{n}[1+(-1)^{n}], b_{0} = -\frac{11}{2}, b_{1} = 3; $ when $ n\geq 2 $: if $ n $ is odd, $ b_{n} > 0 $; if $ n $ is even, $ b_{n} = (n-1)[4\cdot2^{n}-\frac{3}{2}]-3 > [8-\frac{3}{2}]-3 > 0 $. So $ \sum_{n = 1}^{+\infty}\prod_{i = 0}^{n}(-\theta-i)(\frac{1}{\bar{x}})^{n}\frac{1}{(n+3)!}b_{n} $ is an alternating series with positive first term, and therefore $ \sum_{n = 0}^{+\infty}\prod_{i = 0}^{n}(-\theta-i)(\frac{1}{\bar{x}})^{n}\frac{1}{(n+3)!}b_{n} > (-\theta)\frac{1}{3!}b_{0}+0 > 0. $ We get $ F_2 > 0 $.
(2) Let $ f(\theta) = \frac{3}{2}(4-\theta)-(4+\theta)3^{1-\theta}+(6-\theta)2^{-\theta}, $ through careful calculation, we get
$ f^{\prime\prime}(\theta) = 3^{-\theta}\cdot g(\theta), $ where $ g(\theta) = 3(\ln3)[2-(4+\theta)\ln3]+(\frac{3}{2})^{\theta}(\ln2)[2+(6-\theta)\ln2], $ because
We can get $ H^{\prime}(\theta) = -\ln2\cdot \ln(\frac{3}{2}) < 0, $ so $ H(\theta) > H(1) > 0 $; therefore $ g^{\prime}(\theta) < g^{\prime}(1) < 0, $ so $ g(\theta) $ is monotonically decreasing with $ g(1) < g(\theta) < g(0) = -3.6227 < 0 $, that is, $ f^{\prime}(\theta) $ is monotonically decreasing, with $ 0 < f^{\prime}(1) < f^{\prime}(\theta) < f^{\prime}(0) $, where $ f^{\prime}(1) = -3+5\ln3-\frac{5}{2}\ln2 > 0 $. Hence $ f(\theta) $ is monotonically increasing, $ f(0) < f(\theta) < f(1) $, where $ f(0) = 0 $; to sum up, we get $ f(\theta) > 0 $.
(3) Let $ f_{1}(\theta) = -\frac{3}{4}+\frac{5\theta-4}{2(4-\theta)}\cdot3^{1-\theta}+\frac{(4+\theta)(6-\theta)}{(4-\theta)^2}\cdot3^{1-\theta}\cdot2^{-\theta} -\frac{(6-\theta)^2}{(4-\theta)^2}(2^{-\theta})^2 \doteq -\frac{3}{4}+a_{1}\cdot3^{1-\theta}+a_{2}\cdot3^{1-\theta}2^{-\theta}+a_{3}\cdot(2^{-\theta})^{2}, $ where
Next, we just need to prove that $ f_{1}(\theta) > 0, $ using a Taylor expansion yields
where
Because $ \frac{1-\theta}{1!}\cdot\frac{1}{2}+\frac{(1-\theta)(-\theta)}{2!}\cdot(\frac{1}{2})^{2} +\frac{(1-\theta)(-\theta)(-\theta-1)}{3!}(\frac{1}{2})^{3}+\ldots\doteq\sum_{k = 0}^{+\infty}a_{k} $ is an alternating series with positive first term, and $ \sum_{k = 0}^{+\infty}a_{k} = a_{0}+a_{1}+\sum_{k = 2}^{+\infty}a_{k} $, where $ \sum_{k = 2}^{+\infty}a_{k} $ is again an alternating series with positive first term, we have $ 0 < \sum_{k = 2}^{+\infty}a_{k} < a_{2} $, and so
To prove $ f_{1}(\theta) > 0 $, we only need $ -\frac{3}{4}+\frac{2^{-\theta}}{8(4-\theta)^{2}}[f_{2}(\theta)+f_{3}(\theta)] > 0\Longleftrightarrow f_{2}(\theta)+f_{3}(\theta) > \frac{3}{4}\cdot8(4-\theta)^{2}2^{\theta} = 6(4-\theta)^{2}2^{\theta}\doteq6f_{4}(\theta) $. Writing $ \bar{f}(\theta) = f_{2}(\theta)+f_{3}(\theta)-6f_{4}(\theta) $, since $ \bar{f}(\theta) $ first increases and then decreases, and at the two endpoints $ \bar{f}(0) = 0 $ and $ \bar{f}(1) = 16 > 0 $, $ \bar{f}(\theta) > 0 $ always holds, so $ f_{1}(\theta) > 0 $.
(4) Firstly, we prove $ \rho > 0 $. Since $ \rho = \frac{3(4-\theta)+(\theta-6)2^{1-\theta}}{2(4-\theta)}, $ let $ \bar{g}(\theta) = 3(4-\theta)+(\theta-6)2^{1-\theta} $. By calculation, $ \bar{g}(\theta) $ is a monotonically increasing function of $ \theta $, so $ \bar{g}(\theta) > \bar{g}(0) = 0 $; therefore, we have $ \rho > 0. $
In addition, $ \rho-\frac{2}{3} = \frac{5(4-\theta)+3(\theta-6)2^{1-\theta}}{6(4-\theta)}\doteq\frac{\bar{g}_{1}(\theta)}{6(4-\theta)}. $ We can directly verify that $ \bar{g}_{1}(\theta) < \bar{g}_{1}(1) = 0 $, so $ 0 < \rho < \frac{2}{3} $.
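The bounds $ 0 < \rho < \frac{2}{3} $ just proved can also be confirmed numerically with a short sketch (ours):

```python
def rho(theta):
    # rho = (3*(4 - theta) + (theta - 6) * 2^(1 - theta)) / (2*(4 - theta))
    return (3 * (4 - theta) + (theta - 6) * 2**(1 - theta)) / (2 * (4 - theta))

assert abs(rho(0.0)) < 1e-15            # rho(0) = 0
assert abs(rho(1.0) - 2 / 3) < 1e-12    # rho(1) = 2/3
assert all(0 < rho(i / 100) < 2 / 3 for i in range(1, 100))  # strict bounds on (0, 1)
```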
(5) We use Lagrange's mean value theorem $ 4^{1-\theta} > \frac{4}{3+\theta}\cdot 3^{1-\theta} $, and we have
Let $ a_{1} = -3(4-\theta)^{2} $, $ a_{2} = 6(4-\theta)(6-\theta) $, $ a_{3} = \frac{4}{3+\theta}[-(4-\theta)(8-\theta)(3+\theta)+(6-\theta)^{2}] $, $ a_{4} = 1+\frac{1-\theta}{1!}\frac{1}{2}+\frac{(1-\theta)(-\theta)}{2}(\frac{1}{2})^{2} $. Then (A.1) becomes
where $ b_{k} = \Pi_{i = 0}^{k}(-\theta-1-i)\cdot \frac{-1}{(k+3)!}(\frac{1}{2})^{k+3} $, with $ b_{0} = \frac{-(-\theta-1)}{3!}(\frac{1}{2})^{3} > 0 $ and $ |\frac{b_{k+1}}{b_{k}}| = \frac{\theta+2+k}{2(k+4)} < 1, $ so $ \sum_{k = 0}^{+\infty}b_{k} $ is a convergent alternating series, that is, $ 0 < \sum_{k = 0}^{+\infty}b_{k} < b_{0}. $ Since $ a_3 < 0 $, we get
where $ a_{5} = a_{4}+(1-\theta)\theta\cdot \frac{-(-\theta-1)}{3!}(\frac{1}{2})^{3} = \frac{48+24(1-\theta)-6(\theta-\theta^{2})+(\theta-\theta^{3})}{48}, $ by careful calculation, we can get
because $ \theta\in(0, 1) $ implies $ \theta^{3} < \theta^{2} $ and $ \theta^{6} > 0. $
where $ \tilde f_{4}(\theta) = -16\theta^{5}+97\theta^{4}-278\theta^{3}+88\theta^{2}+732\theta+864. $ By careful calculation, $ \tilde f^{\prime\prime}_{3}(\theta) < 0 $, that is, $ \tilde f^{\prime}_{3}(\theta) $ is monotonically decreasing, so $ \tilde f^{\prime}_{3}(1) < \tilde f^{\prime}_{3}(\theta) < \tilde f^{\prime}_{3}(0) $. By direct calculation, $ \tilde f^{\prime}_{3}(\theta) = 144(2\theta+2)+2^{1-\theta}[\tilde f_{4}(\theta)(-\ln2)+\tilde f^{\prime}_{4}(\theta)] $, with $ \tilde f^{\prime}_{3}(0) > 0 $ and $ \tilde f^{\prime}_{3}(1) < 0, $ so $ \tilde f^{\prime}_{3}(\theta) $ changes from positive to negative, and $ \tilde f_{3}(\theta) $ first increases and then decreases. Since $ \tilde f_{3}(0) = 0 $ and $ \tilde f_{3}(1) = 223 > 0, $ it follows that $ \tilde f_{3}(\theta) > 0 $, and therefore $ \tilde f(\theta) > 0 $.