
Research on reinforcement learning based on PPO algorithm for human-machine intervention in autonomous driving

  • Given current limitations in intelligence and processing capability, machine learning systems cannot yet handle the full diversity of driving scenarios, which restricts their ability to substitute for humans in practical applications. Recognizing the robustness and adaptability that human drivers show in complex environments, autonomous driving training has incorporated driving intervention mechanisms. By integrating such interventions into the Proximal Policy Optimization (PPO) algorithm, drivers can step in during training to correct irrational vehicle behaviors when necessary, significantly accelerating the improvement of model performance. A human-centric experience replay mechanism was developed to increase the efficiency with which driving intervention data are utilized. To evaluate the impact of driving intervention on agent performance, experiments were conducted at four distinct intervention frequencies in scenarios involving lane changes and navigation through congested roads. The results show that the designed intervention mechanism markedly improves model performance in the early stages of training and enables the model to escape local optima through timely interventions. Although a higher intervention frequency typically yields better performance, an excessively high intervention rate can harm training efficiency. To assess the practical applicability of the algorithm, a comprehensive test scenario comprising lane changes, traffic signals, and congested road sections was devised, and the trained model was evaluated under various traffic conditions. The outcomes reveal that the model adapts to different traffic flows, navigates the test segment safely, and maintains speeds close to the target.
These findings highlight the model's robustness and its potential for real-world application, emphasizing the critical role of human intervention in enhancing the safety and reliability of autonomous driving systems.

    Citation: Gaosong Shi, Qinghai Zhao, Jirong Wang, Xin Dong. Research on reinforcement learning based on PPO algorithm for human-machine intervention in autonomous driving[J]. Electronic Research Archive, 2024, 32(4): 2424-2446. doi: 10.3934/era.2024111




    This research focuses on the connection between analytic functions and their inverses, which provides new ideas for investigating coefficient estimates and inequalities. The outcome of the present study is particularly relevant in the framework of geometric function theory (GFT), where particular geometric properties are established for analytic functions using methods specific to this domain, but it may also find applications in related fields such as partial differential equation theory, engineering, fluid dynamics, and electronics. A tremendous impact on the development of GFT was made by the Bieberbach conjecture, an essential problem concerning coefficient estimates for functions in the family $\mathcal{S}$ of univalent functions. The conjecture asserts that for $f\in\mathcal{S}$, expressed through the Taylor–Maclaurin series expansion

    $f(\upsilon)=\upsilon+\sum_{k=2}^{\infty}d_{k}\upsilon^{k},\qquad \upsilon\in\mathbb{D},$ (1.1)

    where $\mathbb{D}:=\{\upsilon\in\mathbb{C}:|\upsilon|<1\}$, the coefficient inequality $|d_{k}|\le k$ holds for all $k\ge 2$. The family of analytic functions with the series representation (1.1) is denoted by $\mathcal{A}$. It is worth mentioning that Koebe first introduced $\mathcal{S}$ as a subclass of $\mathcal{A}$ in 1907. Bieberbach [1] originally proposed the conjecture in 1916, verifying it for the case $k=2$. Subsequent advancements by researchers including Löwner [2], Garabedian and Schiffer [3], Pederson and Schiffer [4], and Pederson [5] offered partial proofs for cases up to $k=6$. The complete conjecture remained unsolved until 1985, when de Branges [6] utilized hypergeometric functions to establish it for all $k\ge 2$.
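    For context, the sharpness of the bound $|d_{k}|\le k$ is witnessed by the Koebe function; the following display (added here as a standard illustration, not taken from the source) exhibits its coefficients:

```latex
% Koebe function: the extremal function for the Bieberbach conjecture
k(\upsilon)=\frac{\upsilon}{(1-\upsilon)^{2}}
           =\sum_{k=1}^{\infty}k\,\upsilon^{k}
           =\upsilon+2\upsilon^{2}+3\upsilon^{3}+\cdots,
\qquad \upsilon\in\mathbb{D}.
```

    Its rotations $e^{-i\theta}k\left(e^{i\theta}\upsilon\right)$ attain $|d_{k}|=k$ for every $k$, so no smaller bound is possible.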

    In 1960, Lawrence Zalcman conjectured that $\left|d_{k}^{2}-d_{2k-1}\right|\le(k-1)^{2}$ for $k\ge 2$ and $f\in\mathcal{S}$, a functional inequality which implies the Bieberbach conjecture. This has led to the publication of several papers [7,8,9] on the Zalcman conjecture and its generalized form $\left|\lambda d_{k}^{2}-d_{2k-1}\right|\le\lambda k^{2}-2k+1$ with $\lambda\ge 0$ for various subfamilies of the family $\mathcal{S}$. The conjecture remained unproven for a long time until Krushkal's breakthrough in 1999, when he proved it in [10] for $k\le 6$ and, utilizing the holomorphic homotopy of univalent functions, settled it for all $k\ge 2$ in an unpublished manuscript [11]. A power analogue involving $\left|d_{k}^{\,l}-d_{l(k-1)+1}\right|$ with $k,l\ge 2$ was also demonstrated for $f\in\mathcal{S}$. The Bieberbach conjecture landscape is further enhanced by other conjectures, such as the one presented by Ma [12] in 1999, which is

    $\left|d_{j}d_{k}-d_{j+k-1}\right|\le(j-1)(k-1),\qquad j,k\ge 2.$

    He restricted his proof to a subclass of $\mathcal{S}$; the problem for the full class $\mathcal{S}$ remains open.

    Now, let us recall the concept of subordination, which describes a relationship between analytic functions. An analytic function $g_{1}$ is subordinate to $g_{2}$, written $g_{1}\prec g_{2}$, if there exists a Schwarz function $\omega$ such that $g_{1}(\upsilon)=g_{2}(\omega(\upsilon))$. If $g_{2}$ is univalent in $\mathbb{D}$, then

    $g_{1}(\upsilon)\prec g_{2}(\upsilon)\quad(\upsilon\in\mathbb{D})$

    if and only if

    $g_{1}(0)=g_{2}(0)\quad\text{and}\quad g_{1}(\mathbb{D})\subseteq g_{2}(\mathbb{D}).$

    In essence, this relationship helps us understand how one function is "contained" within another, providing insights into their behavior within the complex plane. The family of univalent functions comprises three classic subclasses $\mathcal{C}$, $\mathcal{S}^{*}$, and $\mathcal{K}$, each distinguished by its unique properties. These subclasses are commonly known as convex functions, starlike functions, and close-to-convex functions, respectively. Let us define each class:

    $\mathcal{C}:=\left\{f\in\mathcal{S}:\ \dfrac{\left(\upsilon f'(\upsilon)\right)'}{f'(\upsilon)}\prec\dfrac{1+\upsilon}{1-\upsilon},\ \upsilon\in\mathbb{D}\right\},$
    $\mathcal{S}^{*}:=\left\{f\in\mathcal{S}:\ \dfrac{\upsilon f'(\upsilon)}{f(\upsilon)}\prec\dfrac{1+\upsilon}{1-\upsilon},\ \upsilon\in\mathbb{D}\right\}$

    and

    $\mathcal{K}:=\left\{f\in\mathcal{S}:\ \dfrac{\upsilon f'(\upsilon)}{h(\upsilon)}\prec\dfrac{1+\upsilon}{1-\upsilon},\ \upsilon\in\mathbb{D}\right\},$

    for some $h\in\mathcal{S}^{*}$. The family $\mathcal{K}$ reduces to the family $\mathcal{BT}$ of bounded turning functions by choosing $h(\upsilon)=\upsilon$. Moreover, a number of intriguing subfamilies of $\mathcal{S}^{*}$ have been examined by replacing $\frac{1+\upsilon}{1-\upsilon}$ with other special functions. For the reader's benefit, a few of them are listed below:

    (i) $\mathcal{S}_{e}^{*}\equiv\mathcal{S}^{*}\left(e^{z}\right)$ and $\mathcal{C}_{e}\equiv\mathcal{C}\left(e^{z}\right)$ [13], $\mathcal{S}_{SG}^{*}\equiv\mathcal{S}^{*}\left(\frac{2}{1+e^{-z}}\right)$ and $\mathcal{C}_{SG}\equiv\mathcal{C}\left(\frac{2}{1+e^{-z}}\right)$ [14];

    (ii) $\mathcal{S}_{cr}^{*}\equiv\mathcal{S}^{*}\left(z+\sqrt{1+z^{2}}\right)$ and $\mathcal{C}_{cr}\equiv\mathcal{C}\left(z+\sqrt{1+z^{2}}\right)$ [15], $\mathcal{S}_{Ne}^{*}\equiv\mathcal{S}^{*}\left(1+z-\frac{z^{3}}{3}\right)$ [16];

    (iii) $\mathcal{S}_{(n-1)L}^{*}\equiv\mathcal{S}^{*}\left(\Psi_{n-1}(z)\right)$ [17] with $\Psi_{n-1}(z)=1+\frac{n}{n+1}z+\frac{1}{n+1}z^{n}$ for $n\ge 2$;

    (iv) $\mathcal{S}_{\sinh}^{*}\equiv\mathcal{S}^{*}\left(1+\sinh(\lambda z)\right)$ with $0\le\lambda\le\ln\left(1+\sqrt{2}\right)$ [18].

    It is observed that a significant area of mathematics is the study of inverse functions for the various subclasses of $\mathcal{S}$. The well-known Koebe 1/4 theorem states that every univalent function $f$ defined on $\mathbb{D}$ has an inverse $f^{-1}$ defined at least on a disk of radius 1/4, with Taylor series of the form

    $f^{-1}(\omega):=\omega+\sum_{n=2}^{\infty}B_{n}\omega^{n},\qquad|\omega|<1/4.$ (1.2)

    Employing the identity $f(f^{-1}(\omega))=\omega$, we acquire

    $B_{2}=-d_{2},$ (1.3)
    $B_{3}=2d_{2}^{2}-d_{3},$ (1.4)
    $B_{4}=5d_{2}d_{3}-5d_{2}^{3}-d_{4},$ (1.5)
    $B_{5}=14d_{2}^{4}+3d_{3}^{2}-21d_{2}^{2}d_{3}+6d_{2}d_{4}-d_{5}.$ (1.6)
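    Formulas (1.3)–(1.6) can be checked mechanically by reverting the series $f(\upsilon)=\upsilon+\sum_{k\ge 2}d_{k}\upsilon^{k}$ term by term. The following sketch (a verification aid of ours, not part of the source) builds the inverse series order by order with exact rational arithmetic, for arbitrarily chosen sample coefficients, and compares it with the closed forms above:

```python
from fractions import Fraction as F

N = 6  # keep series terms up to upsilon^5 (indices 0..5)

def mul(a, b):
    """Cauchy product of two truncated power series given as coefficient lists."""
    c = [F(0)] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < N:
                c[i + j] += ai * bj
    return c

def compose(outer, inner):
    """outer(inner(w)), truncated; assumes inner has zero constant term."""
    result = [F(0)] * N
    power = [F(1)] + [F(0)] * (N - 1)  # inner**0
    for ck in outer:
        result = [r + ck * p for r, p in zip(result, power)]
        power = mul(power, inner)
    return result

# arbitrary rational test coefficients for f(u) = u + d2 u^2 + d3 u^3 + ...
d2, d3, d4, d5 = F(1, 3), F(-1, 2), F(2, 7), F(5, 11)
f = [F(0), F(1), d2, d3, d4, d5]

# Build g(w) = w + B2 w^2 + ... with f(g(w)) = w, order by order: at order n
# the unknown g[n] enters f(g) only through the linear term of f, so g[n] is
# minus the current order-n defect.
g = [F(0), F(1)] + [F(0)] * (N - 2)
for n in range(2, N):
    g[n] -= compose(f, g)[n]

assert compose(f, g) == [F(0), F(1)] + [F(0)] * (N - 2)

B2, B3, B4, B5 = g[2], g[3], g[4], g[5]
assert B2 == -d2                                                          # (1.3)
assert B3 == 2 * d2**2 - d3                                               # (1.4)
assert B4 == 5 * d2 * d3 - 5 * d2**3 - d4                                 # (1.5)
assert B5 == 14 * d2**4 + 3 * d3**2 - 21 * d2**2 * d3 + 6 * d2 * d4 - d5  # (1.6)
```

    The same order-by-order reversion also reproduces the class-specific expressions (3.9)–(3.12), (4.9)–(4.12), and (6.1)–(6.4) derived later.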

    We consider the Hankel determinant of $f^{-1}$ given by

    $\hat{H}_{\lambda,n}(f^{-1})=\begin{vmatrix} B_{n} & B_{n+1} & \cdots & B_{n+\lambda-1}\\ B_{n+1} & B_{n+2} & \cdots & B_{n+\lambda}\\ \vdots & \vdots & \ddots & \vdots\\ B_{n+\lambda-1} & B_{n+\lambda} & \cdots & B_{n+2\lambda-2}\end{vmatrix}.$

    Specifically, the second- and third-order Hankel determinants of $f^{-1}$ are, respectively,

    $\hat{H}_{2,2}(f^{-1})=\begin{vmatrix} B_{2} & B_{3}\\ B_{3} & B_{4}\end{vmatrix}=B_{2}B_{4}-B_{3}^{2},\qquad \hat{H}_{3,1}(f^{-1})=\begin{vmatrix} 1 & B_{2} & B_{3}\\ B_{2} & B_{3} & B_{4}\\ B_{3} & B_{4} & B_{5}\end{vmatrix}=B_{3}\left(B_{2}B_{4}-B_{3}^{2}\right)-B_{4}\left(B_{4}-B_{2}B_{3}\right)+B_{5}\left(B_{3}-B_{2}^{2}\right).$
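    The cofactor expansions above, together with the rearrangement $2B_{2}B_{3}B_{4}-B_{4}^{2}-B_{2}^{2}B_{5}-B_{3}^{3}+B_{3}B_{5}$ used repeatedly in the proofs below, can be sanity-checked against a direct Leibniz-rule determinant. The snippet below is an illustrative check of ours (not from the source), with arbitrary rational values for $B_{2},\dots,B_{5}$:

```python
from fractions import Fraction as F
from itertools import permutations

def det(m):
    """Leibniz expansion of a determinant; fine for 2x2 and 3x3 matrices."""
    n = len(m)
    total = F(0)
    for perm in permutations(range(n)):
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    sign = -sign
        prod = F(1)
        for i, p in enumerate(perm):
            prod *= m[i][p]
        total += sign * prod
    return total

# arbitrary rational inverse coefficients
B2, B3, B4, B5 = F(1, 2), F(-2, 3), F(3, 5), F(-1, 7)

H22 = det([[B2, B3], [B3, B4]])
assert H22 == B2 * B4 - B3**2

H31 = det([[F(1), B2, B3], [B2, B3, B4], [B3, B4, B5]])
assert H31 == B3 * (B2 * B4 - B3**2) - B4 * (B4 - B2 * B3) + B5 * (B3 - B2**2)
# the rearranged form used later in the proofs of Theorems 3.1, 4.7, and 6.7
assert H31 == 2 * B2 * B3 * B4 - B4**2 - B2**2 * B5 - B3**3 + B3 * B5
```

    Since the identities hold for arbitrary entries, they are algebraic identities rather than properties of any particular class.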

    Note that $f^{-1}$ need not itself be univalent on all of $\mathbb{D}$; thus, this concept is a natural generalization of the Hankel determinant whose entries are the coefficients of $f\in\mathcal{S}$. There are very few publications in the literature that address coefficient problems for the inverse function, particularly determinants of the type stated above. This gap motivated researchers and led to the publication of several articles [19,20,21,22,23] on the above-stated Hankel determinants.

    The key mathematical concept in this study is the Hankel determinant $\hat{H}_{\lambda,n}(f)$, where $n,\lambda\in\{1,2,\ldots\}$. This concept was introduced by Pommerenke [24,25]. It is composed of the coefficients of the function $f\in\mathcal{S}$ and is expressed as

    $\hat{H}_{\lambda,n}(f):=\begin{vmatrix} d_{n} & d_{n+1} & \cdots & d_{n+\lambda-1}\\ d_{n+1} & d_{n+2} & \cdots & d_{n+\lambda}\\ \vdots & \vdots & \ddots & \vdots\\ d_{n+\lambda-1} & d_{n+\lambda} & \cdots & d_{n+2\lambda-2}\end{vmatrix}.$

    This determinant is utilized in both pure mathematics and applied sciences, including non-stationary signal theory, the Hamburger moment problem, Markov process theory, and a variety of other fields. There are relatively few publications on estimates of the Hankel determinant for functions in the general class $\mathcal{S}$. Hayman established the best estimate for $f\in\mathcal{S}$ in [26], asserting that $|\hat{H}_{2,n}(f)|\le\eta\sqrt{n}$, where $\eta$ is an absolute constant. Moreover, it was demonstrated in [27] that $|\hat{H}_{2,2}(f)|\le\eta$, where $1\le\eta\le 11/3$, for $f\in\mathcal{S}$. The two determinants $\hat{H}_{2,1}(f)$ and $\hat{H}_{2,2}(f)$ for different subfamilies of univalent functions have been thoroughly examined in the literature. Notable work was done by Janteng et al. [28], Lee et al. [29], Ebadian et al. [30], and Cho et al. [31], who determined sharp estimates of the second-order Hankel determinant for certain subclasses of $\mathcal{S}$.

    The sharp estimate of the third-order Hankel determinant $\hat{H}_{3,1}(f)$ for analytic univalent functions is mathematically more difficult to find than that of the second-order determinant. Numerous articles on the third-order Hankel determinant have been published in which non-sharp limits of this determinant are determined for the fundamental subclasses of analytic functions. Following these arduous investigations, a few scholars were eventually able to obtain sharp bounds of this determinant for the classes $\mathcal{C}$, $\mathcal{BT}$, and $\mathcal{S}^{*}$, as reported in the recently published works [32,33,34], respectively. These estimates are given by

    $\left|\hat{H}_{3,1}(f)\right|\le\begin{cases}\dfrac{4}{135} & \text{for } f\in\mathcal{C},\\[4pt]\dfrac{1}{4} & \text{for } f\in\mathcal{BT},\\[4pt]\dfrac{4}{9} & \text{for } f\in\mathcal{S}^{*}.\end{cases}$

    Later on, Lecko et al. [35] established the sharp estimate of $|\hat{H}_{3,1}(f)|$ by utilizing similar approaches, specifically for functions belonging to the class $\mathcal{S}^{*}(1/2)$ of starlike functions of order 1/2. The articles [36,37,38] provide further investigations of the exact bounds of this third-order Hankel determinant.

    Now, let us consider the three function classes defined respectively by

    $\mathcal{S}_{Sg}^{*}:=\left\{f\in\mathcal{S}:\ \dfrac{2\upsilon f'(\upsilon)}{f(\upsilon)-f(-\upsilon)}\prec\dfrac{2}{1+e^{-\upsilon}},\ \upsilon\in\mathbb{D}\right\},$
    $\mathcal{S}_{3l,s}^{*}:=\left\{f\in\mathcal{S}:\ \dfrac{2\upsilon f'(\upsilon)}{f(\upsilon)-f(-\upsilon)}\prec 1+\dfrac{4}{5}\upsilon+\dfrac{1}{5}\upsilon^{4},\ \upsilon\in\mathbb{D}\right\}$

    and

    $\mathcal{SK}_{\exp}:=\left\{f\in\mathcal{S}:\ \dfrac{2\left(\upsilon f'(\upsilon)\right)'}{\left(f(\upsilon)-f(-\upsilon)\right)'}\prec e^{\upsilon},\ \upsilon\in\mathbb{D}\right\}.$

    These classes have been studied by Faisal et al. [39], Tang et al. [40], and Mendiratta et al. [13], respectively. In this paper, we improve the bound of the third-order Hankel determinant $|\hat{H}_{3,1}(f^{-1})|$ that was determined by Hu and Deng [41] and published recently in AIMS Mathematics. Furthermore, we obtain the bounds of the first three inverse coefficients, sharp bounds for the Krushkal, Zalcman, and Fekete–Szegö functionals, and upper bounds for the Hankel determinants $|\hat{H}_{2,2}(f^{-1})|$ and $|\hat{H}_{3,1}(f^{-1})|$.

    Let $\mathcal{B}_{0}$ be the class of Schwarz functions, i.e., analytic functions $\omega:\mathbb{D}\to\mathbb{D}$ with $\omega(0)=0$. Any $\omega\in\mathcal{B}_{0}$ can be written as

    $\omega(\upsilon)=\sum_{n=1}^{\infty}\sigma_{n}\upsilon^{n},\qquad\upsilon\in\mathbb{D}.$ (2.1)

    We require the following lemmas to prove our main results.

    Lemma 2.1. [42] Let $\omega(\upsilon)$ be a Schwarz function. Then, for any real numbers $\varrho$ and $\varsigma$ satisfying one of the conditions

    $(\varrho,\varsigma)\in\left\{|\varrho|\le\dfrac{1}{2},\ -1\le\varsigma\le 1\right\}\cup\left\{\dfrac{1}{2}\le|\varrho|\le 2,\ \dfrac{4}{27}\left(|\varrho|+1\right)^{3}-\left(|\varrho|+1\right)\le\varsigma\le 1\right\},$
    $(\varrho,\varsigma)\in\left\{2\le|\varrho|\le 4,\ \dfrac{2|\varrho|\left(|\varrho|+1\right)}{\varrho^{2}+2|\varrho|+4}\le\varsigma\le\dfrac{1}{12}\left(\varrho^{2}+8\right)\right\},$
    $(\varrho,\varsigma)\in\left\{\dfrac{1}{2}\le|\varrho|\le 2,\ -\dfrac{2}{3}\left(1+|\varrho|\right)\le\varsigma\le\dfrac{4}{27}\left(1+|\varrho|\right)^{3}-\left(1+|\varrho|\right)\right\},$

    the following sharp estimate holds:

    $\left|\sigma_{3}+\varrho\sigma_{1}\sigma_{2}+\varsigma\sigma_{1}^{3}\right|\le 1.$

    Lemma 2.2. [43] If $\omega(\upsilon)$ is a Schwarz function, then

    $|\sigma_{n}|\le 1,\qquad n\ge 1.$ (2.2)

    Moreover, for $\tau\in\mathbb{C}$, the following inequality holds:

    $\left|\sigma_{2}+\tau\sigma_{1}^{2}\right|\le\max\left\{1,|\tau|\right\}.$ (2.3)

    Lemma 2.3. [44] Let $\omega(\upsilon)$ be a Schwarz function. Then

    $|\sigma_{2}|\le 1-|\sigma_{1}|^{2},$ (2.4)
    $|\sigma_{3}|\le 1-|\sigma_{1}|^{2}-\dfrac{|\sigma_{2}|^{2}}{1+|\sigma_{1}|},$ (2.5)
    $|\sigma_{4}|\le 1-|\sigma_{1}|^{2}-|\sigma_{2}|^{2}.$ (2.6)

    Lemma 2.4. [45] Let $\omega(\upsilon)$ be a Schwarz function. Then

    $\left|\sigma_{1}\sigma_{3}-\sigma_{2}^{2}\right|\le 1-|\sigma_{1}|^{2}$

    and

    $\left|\sigma_{4}+(1+\Lambda)\sigma_{1}\sigma_{3}+\sigma_{2}^{2}+(1+2\Lambda)\sigma_{1}^{2}\sigma_{2}+\Lambda\sigma_{1}^{4}\right|\le\max\left\{1,|\Lambda|\right\},\qquad\Lambda\in\mathbb{C}.$ (2.7)

    In this section, we will improve the bound of the third-order Hankel determinant |ˆH3,1(f1)| with inverse coefficient entries for functions belonging to the class SSg.

    Theorem 3.1. Let $f^{-1}$, of the form (1.2), be the inverse of the function $f\in\mathcal{S}_{Sg}^{*}$. Then

    $\left|\hat{H}_{3,1}(f^{-1})\right|<0.0318.$

    Proof. Let $f\in\mathcal{S}_{Sg}^{*}$. Then, by the subordination relationship,

    $\dfrac{2\upsilon f'(\upsilon)}{f(\upsilon)-f(-\upsilon)}=\dfrac{2}{1+e^{-\omega(\upsilon)}},\qquad\upsilon\in\mathbb{D},$ (3.1)

    for some Schwarz function

    $\omega(\upsilon)=\sigma_{1}\upsilon+\sigma_{2}\upsilon^{2}+\sigma_{3}\upsilon^{3}+\sigma_{4}\upsilon^{4}+\cdots.$ (3.2)

    Using (1.1), we obtain

    $\dfrac{2\upsilon f'(\upsilon)}{f(\upsilon)-f(-\upsilon)}=1+2d_{2}\upsilon+2d_{3}\upsilon^{2}+\left(4d_{4}-2d_{2}d_{3}\right)\upsilon^{3}+\left(4d_{5}-2d_{3}^{2}\right)\upsilon^{4}+\cdots.$ (3.3)

    By an easy calculation utilizing the series expansion (3.2), we achieve

    $\dfrac{2}{1+e^{-\omega(\upsilon)}}=1+\dfrac{1}{2}\sigma_{1}\upsilon+\dfrac{1}{2}\sigma_{2}\upsilon^{2}+\left(\dfrac{1}{2}\sigma_{3}-\dfrac{1}{24}\sigma_{1}^{3}\right)\upsilon^{3}+\left(\dfrac{1}{2}\sigma_{4}-\dfrac{1}{8}\sigma_{1}^{2}\sigma_{2}\right)\upsilon^{4}+\cdots.$ (3.4)

    Now, by comparing (3.3) and (3.4), we obtain

    $d_{2}=\dfrac{1}{4}\sigma_{1},$ (3.5)
    $d_{3}=\dfrac{1}{4}\sigma_{2},$ (3.6)
    $d_{4}=-\dfrac{1}{96}\sigma_{1}^{3}+\dfrac{1}{8}\sigma_{3}+\dfrac{1}{32}\sigma_{1}\sigma_{2},$ (3.7)
    $d_{5}=\dfrac{1}{8}\sigma_{4}+\dfrac{1}{32}\sigma_{2}^{2}-\dfrac{1}{32}\sigma_{1}^{2}\sigma_{2}.$ (3.8)

    Substituting (3.5)–(3.8) into (1.3)–(1.6), we obtain

    $B_{2}=-\dfrac{1}{4}\sigma_{1},$ (3.9)
    $B_{3}=-\dfrac{1}{4}\sigma_{2}+\dfrac{1}{8}\sigma_{1}^{2},$ (3.10)
    $B_{4}=-\dfrac{13}{192}\sigma_{1}^{3}-\dfrac{1}{8}\sigma_{3}+\dfrac{9}{32}\sigma_{1}\sigma_{2},$ (3.11)
    $B_{5}=\dfrac{5}{32}\sigma_{2}^{2}-\dfrac{1}{4}\sigma_{1}^{2}\sigma_{2}+\dfrac{5}{128}\sigma_{1}^{4}-\dfrac{1}{8}\sigma_{4}+\dfrac{3}{16}\sigma_{1}\sigma_{3}.$ (3.12)

    The determinant $\hat{H}_{3,1}(f^{-1})$ can be reconfigured as follows:

    $\left|\hat{H}_{3,1}(f^{-1})\right|=\left|2B_{2}B_{3}B_{4}-B_{4}^{2}-B_{2}^{2}B_{5}-B_{3}^{3}+B_{3}B_{5}\right|.$

    From (3.9)–(3.12), we easily write

    $\left|\hat{H}_{3,1}(f^{-1})\right|=\dfrac{1}{64}\left|\sigma_{3}^{2}+\left(\dfrac{1}{2}\sigma_{2}+\dfrac{1}{6}\sigma_{1}^{2}\right)\sigma_{1}\sigma_{3}+\dfrac{5}{576}\sigma_{1}^{6}-\dfrac{3}{2}\sigma_{2}^{3}+2\sigma_{2}\sigma_{4}-\dfrac{5}{48}\sigma_{1}^{4}\sigma_{2}+\dfrac{5}{16}\sigma_{1}^{2}\sigma_{2}^{2}-\dfrac{1}{2}\sigma_{1}^{2}\sigma_{4}\right|.$

    We begin by utilizing Lemma 2.1 with $\varrho=\frac{1}{2}$ and $\varsigma=\frac{1}{6}$, which gives

    $\left|\sigma_{3}\left[\sigma_{3}+\dfrac{1}{2}\sigma_{1}\sigma_{2}+\dfrac{1}{6}\sigma_{1}^{3}\right]\right|\le|\sigma_{3}|,$

    and also, by Lemma 2.3, we have

    $|\sigma_{3}|\le 1-|\sigma_{1}|^{2}-\dfrac{|\sigma_{2}|^{2}}{1+|\sigma_{1}|}\le 1-|\sigma_{1}|^{2}-\dfrac{|\sigma_{2}|^{2}}{2}.$

    Applying these bounds, together with $|\sigma_{4}|\le 1-|\sigma_{1}|^{2}-|\sigma_{2}|^{2}$, we achieve

    $\left|\hat{H}_{3,1}(f^{-1})\right|\le\dfrac{1}{64}E(|\sigma_{1}|,|\sigma_{2}|),$

    where, writing $\sigma=|\sigma_{1}|$ and $t=|\sigma_{2}|$,

    $E(\sigma,t)=1-\sigma^{2}-\dfrac{1}{2}t^{2}+\dfrac{5}{576}\sigma^{6}+\dfrac{3}{2}t^{3}+2t\left(1-\sigma^{2}-t^{2}\right)+\dfrac{5}{48}\sigma^{4}t+\dfrac{5}{16}\sigma^{2}t^{2}+\dfrac{1}{2}\sigma^{2}\left(1-\sigma^{2}-t^{2}\right).$

    But $E$ is a decreasing function of the variable $\sigma$; consequently,

    $E(\sigma,t)\le E(0,t)=1-\dfrac{1}{2}t^{2}-\dfrac{1}{2}t^{3}+2t.$

    The function $E(0,t)$ attains its maximum on $[0,1]$ at $t=\frac{1}{3}\left(-1+\sqrt{13}\right)\approx 0.8685$, so $E(0,t)\le 2.0323$ and hence $\left|\hat{H}_{3,1}(f^{-1})\right|\le 2.0323/64<0.0318$, which completes the proof.
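    The final maximization can be checked numerically. The short script below (our own check, not part of the source) confirms that the critical point of $E(0,t)=1+2t-\frac{1}{2}t^{2}-\frac{1}{2}t^{3}$ solves $3t^{2}+2t-4=0$, that it is the maximizer on $[0,1]$, and that the resulting bound on $|\hat{H}_{3,1}(f^{-1})|$ is below 0.0318:

```python
import math

def E0(t):
    """E(0, t) = 1 + 2t - t^2/2 - t^3/2, the one-variable bound from the proof."""
    return 1 + 2 * t - t**2 / 2 - t**3 / 2

# E0'(t) = 2 - t - (3/2)t^2 vanishes where 3t^2 + 2t - 4 = 0
t_star = (-1 + math.sqrt(13)) / 3
assert abs(3 * t_star**2 + 2 * t_star - 4) < 1e-12
# confirm t_star is the maximizer of E0 on [0, 1] via a fine grid
assert all(E0(i / 10000) <= E0(t_star) + 1e-12 for i in range(10001))
# the resulting bound on |H_{3,1}(f^{-1})|
assert E0(t_star) / 64 < 0.0318
```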

    Conjecture 3.2. If the inverse of $f\in\mathcal{S}_{Sg}^{*}$ is of the form (1.2), then

    $\left|\hat{H}_{3,1}(f^{-1})\right|\le\dfrac{1}{64}.$

    Equality would be obtained by using (1.3)–(1.6) together with

    $\dfrac{2\upsilon f'(\upsilon)}{f(\upsilon)-f(-\upsilon)}=1+\dfrac{1}{2}\upsilon^{3}-\dfrac{1}{24}\upsilon^{9}+\cdots.$

    We begin this section by computing the estimates of the first three initial inverse coefficients for functions in the family S3l,s.

    Theorem 4.1. Let the inverse function of $f\in\mathcal{S}_{3l,s}^{*}$ have the series form (1.2). Then

    $|B_{2}|\le\dfrac{2}{5},\qquad|B_{3}|\le\dfrac{2}{5},\qquad|B_{4}|\le\dfrac{1}{5}.$

    Equality is obtained by utilizing (1.3)–(1.5) together with

    $\dfrac{2\upsilon f'(\upsilon)}{f(\upsilon)-f(-\upsilon)}=1+\dfrac{4}{5}\upsilon^{m}+\dfrac{1}{5}\upsilon^{4m}+\cdots,\qquad\text{for } m=1,2,3.$ (4.1)

    Proof. Let $f\in\mathcal{S}_{3l,s}^{*}$. Then we can write

    $\dfrac{2\upsilon f'(\upsilon)}{f(\upsilon)-f(-\upsilon)}=1+\dfrac{4}{5}\omega(\upsilon)+\dfrac{1}{5}\left(\omega(\upsilon)\right)^{4},\qquad\upsilon\in\mathbb{D},$

    where $\omega$ is a Schwarz function; let us assume that

    $\omega(\upsilon)=\sigma_{1}\upsilon+\sigma_{2}\upsilon^{2}+\sigma_{3}\upsilon^{3}+\sigma_{4}\upsilon^{4}+\cdots.$ (4.2)

    Using (1.1), we obtain

    $\dfrac{2\upsilon f'(\upsilon)}{f(\upsilon)-f(-\upsilon)}=1+2d_{2}\upsilon+2d_{3}\upsilon^{2}+\left(4d_{4}-2d_{2}d_{3}\right)\upsilon^{3}+\left(4d_{5}-2d_{3}^{2}\right)\upsilon^{4}+\cdots.$ (4.3)

    By an easy calculation utilizing the series expansion (4.2), we have

    $1+\dfrac{4}{5}\omega(\upsilon)+\dfrac{1}{5}\left(\omega(\upsilon)\right)^{4}=1+\dfrac{4}{5}\sigma_{1}\upsilon+\dfrac{4}{5}\sigma_{2}\upsilon^{2}+\dfrac{4}{5}\sigma_{3}\upsilon^{3}+\left(\dfrac{4}{5}\sigma_{4}+\dfrac{1}{5}\sigma_{1}^{4}\right)\upsilon^{4}+\cdots.$ (4.4)

    Now, by comparing (4.3) and (4.4), we obtain

    $d_{2}=\dfrac{2}{5}\sigma_{1},$ (4.5)
    $d_{3}=\dfrac{2}{5}\sigma_{2},$ (4.6)
    $d_{4}=\dfrac{1}{5}\sigma_{3}+\dfrac{2}{25}\sigma_{1}\sigma_{2},$ (4.7)
    $d_{5}=\dfrac{1}{5}\sigma_{4}+\dfrac{1}{20}\sigma_{1}^{4}+\dfrac{2}{25}\sigma_{2}^{2}.$ (4.8)

    Substituting (4.5)–(4.8) into (1.3)–(1.6), we obtain

    $B_{2}=-\dfrac{2}{5}\sigma_{1},$ (4.9)
    $B_{3}=\dfrac{8}{25}\sigma_{1}^{2}-\dfrac{2}{5}\sigma_{2},$ (4.10)
    $B_{4}=-\dfrac{8}{25}\sigma_{1}^{3}-\dfrac{1}{5}\sigma_{3}+\dfrac{18}{25}\sigma_{1}\sigma_{2},$ (4.11)
    $B_{5}=\dfrac{771}{2500}\sigma_{1}^{4}-\dfrac{144}{125}\sigma_{1}^{2}\sigma_{2}+\dfrac{12}{25}\sigma_{1}\sigma_{3}+\dfrac{2}{5}\sigma_{2}^{2}-\dfrac{1}{5}\sigma_{4}.$ (4.12)

    Using (2.2) in (4.9), we achieve

    $|B_{2}|\le\dfrac{2}{5}.$

    To prove the second inequality, we write (4.10) as

    $|B_{3}|=\dfrac{2}{5}\left|\sigma_{2}-\dfrac{4}{5}\sigma_{1}^{2}\right|.$

    Applying (2.3) to the above equation, we achieve

    $|B_{3}|\le\dfrac{2}{5}.$

    From (4.11), we deduce that

    $|B_{4}|=\dfrac{1}{5}\left|\sigma_{3}-\dfrac{18}{5}\sigma_{1}\sigma_{2}+\dfrac{8}{5}\sigma_{1}^{3}\right|.$

    Comparing this with Lemma 2.1, we note that

    $\varrho=-\dfrac{18}{5}\qquad\text{and}\qquad\varsigma=\dfrac{8}{5}.$

    It is clear that $2\le|\varrho|\le 4$ with

    $\dfrac{2|\varrho|\left(|\varrho|+1\right)}{\varrho^{2}+2|\varrho|+4}\le\varsigma\qquad\text{and}\qquad\varsigma\le\dfrac{1}{12}\left(\varrho^{2}+8\right)=\dfrac{131}{75}.$

    All the conditions of Lemma 2.1 are satisfied. Therefore,

    $|B_{4}|\le\dfrac{1}{5}.$

    The required proof is thus completed.

    Now, we compute the Fekete–Szegö functional bound for the inverse function of $f\in\mathcal{S}_{3l,s}^{*}$.

    Theorem 4.2. If $f^{-1}$ is the inverse of the function $f\in\mathcal{S}_{3l,s}^{*}$ with series expansion (1.2), then

    $\left|B_{3}-\tau B_{2}^{2}\right|\le\max\left\{\dfrac{2}{5},\left|\dfrac{4\tau-8}{25}\right|\right\},\qquad\tau\in\mathbb{C}.$

    This functional bound is sharp.

    Proof. Combining (4.9) and (4.10), we obtain

    $\left|B_{3}-\tau B_{2}^{2}\right|=\left|\dfrac{8}{25}\sigma_{1}^{2}-\dfrac{2}{5}\sigma_{2}-\dfrac{4\tau}{25}\sigma_{1}^{2}\right|=\dfrac{2}{5}\left|\sigma_{2}+\dfrac{2\tau-4}{5}\sigma_{1}^{2}\right|.$

    Application of (2.3) leads us to

    $\left|B_{3}-\tau B_{2}^{2}\right|\le\max\left\{\dfrac{2}{5},\left|\dfrac{4\tau-8}{25}\right|\right\}.$

    The bound of the above functional is best possible, which can easily be checked via (1.3), (1.4), and (4.1) with $m=2$.

    Setting $\tau=1$ in Theorem 4.2, we arrive at the following result.

    Corollary 4.3. If the inverse of the function $f\in\mathcal{S}_{3l,s}^{*}$ is $f^{-1}$ with series expansion (1.2), then

    $\left|B_{3}-B_{2}^{2}\right|\le\dfrac{2}{5}.$

    This estimate is sharp, and equality is obtained by using (1.3), (1.4), and (4.1) with $m=2$.

    Next, we investigate the Zalcman functional upper bound for the inverse of $f\in\mathcal{S}_{3l,s}^{*}$.

    Theorem 4.4. If $f\in\mathcal{S}_{3l,s}^{*}$ and its inverse function $f^{-1}$ has the form (1.2), then

    $\left|B_{2}B_{3}-B_{4}\right|\le\dfrac{1}{5}.$

    The above estimate is sharp.

    Proof. Making use of (4.9)–(4.11), we achieve

    $\left|B_{2}B_{3}-B_{4}\right|=\dfrac{1}{5}\left|\sigma_{3}-\dfrac{14}{5}\sigma_{1}\sigma_{2}+\dfrac{24}{25}\sigma_{1}^{3}\right|.$

    From Lemma 2.1, let

    $\varrho=-\dfrac{14}{5}\qquad\text{and}\qquad\varsigma=\dfrac{24}{25}.$

    It is clear that $2\le|\varrho|\le 4$ with

    $\dfrac{2|\varrho|\left(|\varrho|+1\right)}{\varrho^{2}+2|\varrho|+4}\le\varsigma\qquad\text{and}\qquad\varsigma\le\dfrac{1}{12}\left(\varrho^{2}+8\right)=\dfrac{33}{25}.$

    Thus, all the conditions of Lemma 2.1 are satisfied. Hence,

    $\left|B_{2}B_{3}-B_{4}\right|\le\dfrac{1}{5}.$

    The required estimate is best possible and is obtained by using (1.3)–(1.5) and (4.1) with $m=3$.

    Further, we compute the Krushkal functional bound for the family $\mathcal{S}_{3l,s}^{*}$.

    Theorem 4.5. If $f\in\mathcal{S}_{3l,s}^{*}$ and its inverse function $f^{-1}$ has the form (1.2), then

    $\left|B_{4}-B_{2}^{3}\right|\le\dfrac{1}{5}.$

    This estimate is sharp.

    Proof. Combining (4.9) and (4.11), we obtain

    $\left|B_{4}-B_{2}^{3}\right|=\dfrac{1}{5}\left|\sigma_{3}-\dfrac{18}{5}\sigma_{1}\sigma_{2}+\dfrac{32}{25}\sigma_{1}^{3}\right|.$

    From Lemma 2.1, let

    $\varrho=-\dfrac{18}{5}\qquad\text{and}\qquad\varsigma=\dfrac{32}{25}.$

    It is clear that $2\le|\varrho|\le 4$ with

    $\dfrac{2|\varrho|\left(|\varrho|+1\right)}{\varrho^{2}+2|\varrho|+4}\le\varsigma\qquad\text{and}\qquad\varsigma\le\dfrac{1}{12}\left(\varrho^{2}+8\right)=\dfrac{131}{75}.$

    Thus, all the conditions of Lemma 2.1 are satisfied. Hence,

    $\left|B_{4}-B_{2}^{3}\right|\le\dfrac{1}{5}.$

    This estimate is best possible and is confirmed by using (1.3), (1.5), and (4.1) with $m=3$.

    In the upcoming result, we investigate the estimate of $\hat{H}_{2,2}(f^{-1})$ for the family $\mathcal{S}_{3l,s}^{*}$.

    Theorem 4.6. Let the inverse function of $f\in\mathcal{S}_{3l,s}^{*}$ have the series expansion (1.2). Then

    $\left|\hat{H}_{2,2}(f^{-1})\right|\le\dfrac{4}{25}.$

    This inequality is sharp, and equality is achieved by using (1.3)–(1.5) and (4.1) with $m=2$.

    Proof. The determinant $\hat{H}_{2,2}(f^{-1})$ can be reconfigured as follows:

    $\hat{H}_{2,2}(f^{-1})=B_{2}B_{4}-B_{3}^{2}=-d_{3}^{2}+d_{2}d_{4}-d_{2}^{2}d_{3}+d_{2}^{4}.$

    Substituting (4.9)–(4.11), we achieve

    $\left|\hat{H}_{2,2}(f^{-1})\right|=\dfrac{4}{25}\left|-\dfrac{4}{25}\sigma_{1}^{4}+\dfrac{1}{5}\sigma_{1}^{2}\sigma_{2}-\dfrac{1}{2}\sigma_{1}\sigma_{3}+\sigma_{2}^{2}\right|=\dfrac{4}{25}\left|\dfrac{1}{2}\left(\sigma_{2}^{2}-\sigma_{1}\sigma_{3}\right)+\dfrac{1}{2}\left(-\dfrac{8}{25}\sigma_{1}^{4}+\dfrac{2}{5}\sigma_{1}^{2}\sigma_{2}+\sigma_{2}^{2}\right)\right|\le\dfrac{2}{25}Y_{1}+\dfrac{2}{25}Y_{2},$

    where

    $Y_{1}=\left|\sigma_{2}^{2}-\sigma_{1}\sigma_{3}\right|$

    and

    $Y_{2}=\left|-\dfrac{8}{25}\sigma_{1}^{4}+\dfrac{2}{5}\sigma_{1}^{2}\sigma_{2}+\sigma_{2}^{2}\right|.$

    Utilizing Lemma 2.4, we acquire $Y_{1}\le 1$. Applying (2.4) along with the triangle inequality to $Y_{2}$, we have

    $Y_{2}\le\left(1-|\sigma_{1}|^{2}\right)^{2}+\dfrac{8}{25}|\sigma_{1}|^{4}+\dfrac{2}{5}\left(1-|\sigma_{1}|^{2}\right)|\sigma_{1}|^{2}.$

    Setting $|\sigma_{1}|=\varkappa$ with $\varkappa\in[0,1]$, we obtain

    $Y_{2}\le\dfrac{23}{25}\varkappa^{4}-\dfrac{8}{5}\varkappa^{2}+1=N(\varkappa).$

    Since $N(\varkappa)-1=\varkappa^{2}\left(\dfrac{23}{25}\varkappa^{2}-\dfrac{8}{5}\right)\le 0$ on $[0,1]$, $N$ attains its maximum $N(0)=1$ there, so that

    $Y_{2}\le 1.$

    Therefore,

    $\left|\hat{H}_{2,2}(f^{-1})\right|\le\dfrac{2}{25}Y_{1}+\dfrac{2}{25}Y_{2}\le\dfrac{4}{25},$

    and so the required proof is accomplished.

    Theorem 4.7. Let $f^{-1}$ be the inverse of $f\in\mathcal{S}_{3l,s}^{*}$ with series expansion (1.2). Then

    $\left|\hat{H}_{3,1}(f^{-1})\right|<0.116.$

    Proof. The determinant $\hat{H}_{3,1}(f^{-1})$ can be expressed as follows:

    $\left|\hat{H}_{3,1}(f^{-1})\right|=\left|2B_{2}B_{3}B_{4}-B_{4}^{2}-B_{2}^{2}B_{5}-B_{3}^{3}+B_{3}B_{5}\right|.$

    From (4.9)–(4.12), we easily write

    $\left|\hat{H}_{3,1}(f^{-1})\right|=\dfrac{1}{25}\left|\sigma_{3}^{2}+\dfrac{4}{5}\sigma_{2}\sigma_{1}\sigma_{3}-\dfrac{61}{625}\sigma_{1}^{6}-\dfrac{12}{5}\sigma_{2}^{3}-\dfrac{67}{250}\sigma_{1}^{4}\sigma_{2}+\dfrac{52}{25}\sigma_{1}^{2}\sigma_{2}^{2}-\dfrac{4}{5}\sigma_{1}^{2}\sigma_{4}+2\sigma_{2}\sigma_{4}\right|.$

    The inequality below follows easily by using Lemma 2.1 with $\varrho=\frac{4}{5}$ and $\varsigma=0$:

    $\left|\sigma_{3}\left[\sigma_{3}+\dfrac{4}{5}\sigma_{1}\sigma_{2}+(0)\sigma_{1}^{3}\right]\right|\le|\sigma_{3}|,$

    and also, by virtue of Lemma 2.3, we have

    $|\sigma_{3}|\le 1-|\sigma_{1}|^{2}-\dfrac{|\sigma_{2}|^{2}}{1+|\sigma_{1}|}\le 1-|\sigma_{1}|^{2}-\dfrac{|\sigma_{2}|^{2}}{2}.$

    Applying these bounds, together with $|\sigma_{4}|\le 1-|\sigma_{1}|^{2}-|\sigma_{2}|^{2}$, we achieve

    $\left|\hat{H}_{3,1}(f^{-1})\right|\le\dfrac{1}{25}E(|\sigma_{1}|,|\sigma_{2}|),$

    where, writing $\sigma=|\sigma_{1}|$ and $t=|\sigma_{2}|$,

    $E(\sigma,t)=1-\sigma^{2}-\dfrac{1}{2}t^{2}+\dfrac{61}{625}\sigma^{6}+\dfrac{12}{5}t^{3}+\dfrac{67}{250}\sigma^{4}t+\dfrac{52}{25}\sigma^{2}t^{2}+\dfrac{4}{5}\sigma^{2}\left(1-\sigma^{2}-t^{2}\right)+2t\left(1-\sigma^{2}-t^{2}\right).$

    But $E$ is a decreasing function of the variable $\sigma$; consequently,

    $E(\sigma,t)\le E(0,t)=1-\dfrac{1}{2}t^{2}+\dfrac{2}{5}t^{3}+2t.$

    The function $E(0,t)$ is increasing on $[0,1]$ and attains its maximum at $t=1$, so $E(0,t)\le\dfrac{29}{10}$, which completes the proof.
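    As a quick numeric cross-check (ours, not part of the source), the derivative of $E(0,t)=1+2t-\frac{1}{2}t^{2}+\frac{2}{5}t^{3}$ is $2-t+\frac{6}{5}t^{2}$, which is positive on $[0,1]$, so the maximum $\frac{29}{10}$ is indeed taken at $t=1$ and yields $\frac{29}{250}=0.116$:

```python
def E0(t):
    """E(0, t) = 1 + 2t - t^2/2 + (2/5)t^3, from the proof of Theorem 4.7."""
    return 1 + 2 * t - t**2 / 2 + 2 * t**3 / 5

# E0'(t) = 2 - t + (6/5)t^2 > 0 on [0, 1], so E0 is increasing there
assert all(2 - i / 1000 + 1.2 * (i / 1000)**2 > 0 for i in range(1001))
assert abs(E0(1.0) - 29 / 10) < 1e-12
assert E0(1.0) / 25 <= 0.116 + 1e-12
```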

    Conjecture 4.8. If the inverse of $f\in\mathcal{S}_{3l,s}^{*}$ is of the form (1.2), then

    $\left|\hat{H}_{3,1}(f^{-1})\right|\le\dfrac{1}{25}.$

    This result is best possible.

    We begin this section by determining the estimates of the first four initial coefficients for functions in the family $\mathcal{SK}_{\exp}$.

    Theorem 5.1. If the function $f\in\mathcal{SK}_{\exp}$ has the series form (1.1), then

    $|d_{2}|\le\dfrac{1}{4},\qquad|d_{3}|\le\dfrac{1}{6},\qquad|d_{4}|\le\dfrac{1}{16},\qquad|d_{5}|\le\dfrac{1}{20}.$

    Equality is attained by the following extremal functions:

    $\dfrac{2\left(\upsilon f'(\upsilon)\right)'}{\left(f(\upsilon)-f(-\upsilon)\right)'}=1+\upsilon^{m}+\dfrac{1}{2}\upsilon^{2m}+\cdots,\qquad\text{for } m=1,2,3,4.$ (5.1)

    Proof. Let $f\in\mathcal{SK}_{\exp}$. Then there exists a Schwarz function $\omega$ such that

    $\dfrac{2\left(\upsilon f'(\upsilon)\right)'}{\left(f(\upsilon)-f(-\upsilon)\right)'}=e^{\omega(\upsilon)},\qquad\upsilon\in\mathbb{D},$ (5.2)

    where

    $\omega(\upsilon)=\sigma_{1}\upsilon+\sigma_{2}\upsilon^{2}+\sigma_{3}\upsilon^{3}+\sigma_{4}\upsilon^{4}+\cdots.$ (5.3)

    Using (1.1), we obtain

    $\dfrac{2\left(\upsilon f'(\upsilon)\right)'}{\left(f(\upsilon)-f(-\upsilon)\right)'}=1+4d_{2}\upsilon+6d_{3}\upsilon^{2}+\left(16d_{4}-12d_{2}d_{3}\right)\upsilon^{3}+\left(20d_{5}-18d_{3}^{2}\right)\upsilon^{4}+\cdots.$ (5.4)

    By an easy calculation utilizing the series expansion (5.3), we achieve

    $e^{\omega(\upsilon)}=1+\sigma_{1}\upsilon+\left(\sigma_{2}+\dfrac{1}{2}\sigma_{1}^{2}\right)\upsilon^{2}+\left(\sigma_{3}+\sigma_{1}\sigma_{2}+\dfrac{1}{6}\sigma_{1}^{3}\right)\upsilon^{3}+\left(\sigma_{4}+\sigma_{1}\sigma_{3}+\dfrac{1}{2}\sigma_{2}^{2}+\dfrac{1}{24}\sigma_{1}^{4}+\dfrac{1}{2}\sigma_{1}^{2}\sigma_{2}\right)\upsilon^{4}+\cdots.$ (5.5)

    Now, by comparing (5.4) and (5.5), we have

    $d_{2}=\dfrac{1}{4}\sigma_{1},$ (5.6)
    $d_{3}=\dfrac{1}{6}\sigma_{2}+\dfrac{1}{12}\sigma_{1}^{2},$ (5.7)
    $d_{4}=\dfrac{1}{16}\sigma_{3}+\dfrac{5}{192}\sigma_{1}^{3}+\dfrac{3}{32}\sigma_{1}\sigma_{2},$ (5.8)
    $d_{5}=\dfrac{1}{20}\sigma_{2}^{2}+\dfrac{1}{20}\sigma_{4}+\dfrac{1}{20}\sigma_{1}^{2}\sigma_{2}+\dfrac{1}{20}\sigma_{1}\sigma_{3}+\dfrac{1}{120}\sigma_{1}^{4}.$ (5.9)

    Using (2.2) in (5.6), we obtain

    $|d_{2}|\le\dfrac{1}{4}.$

    Rearranging (5.7), we obtain

    $|d_{3}|=\dfrac{1}{6}\left|\sigma_{2}+\dfrac{1}{2}\sigma_{1}^{2}\right|.$

    Applying (2.3) to the above equation, we achieve

    $|d_{3}|\le\dfrac{1}{6}.$

    For $d_{4}$, we can write (5.8) as

    $|d_{4}|=\dfrac{1}{16}\left|\sigma_{3}+\dfrac{3}{2}\sigma_{1}\sigma_{2}+\dfrac{5}{12}\sigma_{1}^{3}\right|.$

    From Lemma 2.1, let

    $\varrho=\dfrac{3}{2}\qquad\text{and}\qquad\varsigma=\dfrac{5}{12}.$

    It is clear that $\frac{1}{2}\le|\varrho|\le 2$ with

    $\dfrac{4}{27}\left(1+|\varrho|\right)^{3}-\left(1+|\varrho|\right)=-\dfrac{5}{27}\le\varsigma\qquad\text{and}\qquad\varsigma\le 1.$

    Hence the conditions of Lemma 2.1 are satisfied. Therefore,

    $|d_{4}|\le\dfrac{1}{16}.$

    From (5.9), we deduce that

    $|d_{5}|=\dfrac{1}{20}\left|\dfrac{1}{2}\left(2\sigma_{1}\sigma_{3}+\sigma_{4}+\sigma_{2}^{2}+\sigma_{1}^{4}+3\sigma_{1}^{2}\sigma_{2}\right)+\dfrac{1}{2}\left(\sigma_{4}+\sigma_{2}^{2}-\dfrac{2}{3}\sigma_{1}^{4}-\sigma_{1}^{2}\sigma_{2}\right)\right|.$ (5.10)

    The first segment is estimated by $\frac{1}{2}$ by utilizing (2.7) with $\Lambda=1$. Lemma 2.3 is used for the estimation of the second segment as follows:

    $\dfrac{1}{2}\left|-\dfrac{2}{3}\sigma_{1}^{4}+\sigma_{4}-\sigma_{1}^{2}\sigma_{2}+\sigma_{2}^{2}\right|\le\dfrac{1}{2}\left[1-|\sigma_{1}|^{2}-|\sigma_{2}|^{2}+\dfrac{2}{3}|\sigma_{1}|^{4}+|\sigma_{1}|^{2}\left(1-|\sigma_{1}|^{2}\right)+|\sigma_{2}|^{2}\right]=\dfrac{1}{2}-\dfrac{|\sigma_{1}|^{4}}{6}\le\dfrac{1}{2}.$

    By adding the bounds of the two segments of (5.10), we achieve

    $|d_{5}|\le\dfrac{1}{20}.$

    Thus, the proof is completed.

    Lastly, we investigate the estimates of the first three initial inverse coefficients for functions in the family $\mathcal{SK}_{\exp}$.

    Theorem 6.1. If the inverse function of $f\in\mathcal{SK}_{\exp}$ is of the form (1.2), then

    $|B_{2}|\le\dfrac{1}{4},\qquad|B_{3}|\le\dfrac{1}{6},\qquad|B_{4}|\le\dfrac{1}{16}.$

    Equality holds in these bounds, as confirmed by using (1.3)–(1.5) and (5.1) with $m=1,2,3$.

    Proof. Applying (5.6)–(5.9) in (1.3)–(1.6), we achieve

    $B_{2}=-\dfrac{1}{4}\sigma_{1},$ (6.1)
    $B_{3}=\dfrac{1}{24}\sigma_{1}^{2}-\dfrac{1}{6}\sigma_{2},$ (6.2)
    $B_{4}=\dfrac{11}{96}\sigma_{1}\sigma_{2}-\dfrac{1}{16}\sigma_{3},$ (6.3)
    $B_{5}=-\dfrac{1}{320}\sigma_{1}^{4}-\dfrac{43}{960}\sigma_{1}^{2}\sigma_{2}+\dfrac{7}{160}\sigma_{1}\sigma_{3}+\dfrac{1}{30}\sigma_{2}^{2}-\dfrac{1}{20}\sigma_{4}.$ (6.4)

    Using (2.2) in (6.1), we obtain

    $|B_{2}|\le\dfrac{1}{4}.$

    For $B_{3}$, we can write (6.2) as

    $|B_{3}|=\dfrac{1}{6}\left|\sigma_{2}-\dfrac{1}{4}\sigma_{1}^{2}\right|.$

    Applying (2.3) to the above equation, we achieve

    $|B_{3}|\le\dfrac{1}{6}.$

    For $B_{4}$, we consider

    $|B_{4}|=\dfrac{1}{16}\left|\sigma_{3}-\dfrac{11}{6}\sigma_{1}\sigma_{2}+(0)\sigma_{1}^{3}\right|.$

    From Lemma 2.1, let

    $\varrho=-\dfrac{11}{6}\qquad\text{and}\qquad\varsigma=0.$

    It is clear that $\frac{1}{2}\le|\varrho|\le 2$ with

    $-\dfrac{2}{3}\left(|\varrho|+1\right)=-\dfrac{17}{9}\le\varsigma\qquad\text{and}\qquad\varsigma\le\dfrac{4}{27}\left(1+|\varrho|\right)^{3}-\left(1+|\varrho|\right)=\dfrac{391}{729}.$

    This shows that all conditions of Lemma 2.1 are satisfied. Thus,

    $|B_{4}|\le\dfrac{1}{16}.$

    The required proof is thus completed.

    Thus, the required proof is completed.

    Theorem 6.2. If $f\in\mathcal{SK}_{\exp}$ has inverse function $f^{-1}$ with series form (1.2), then

    $\left|B_{3}-\tau B_{2}^{2}\right|\le\dfrac{1}{6}\max\left\{1,\left|\dfrac{3\tau-2}{8}\right|\right\},\qquad\tau\in\mathbb{C}.$

    This inequality is sharp.

    Proof. Employing (6.1) and (6.2), we have

    $\left|B_{3}-\tau B_{2}^{2}\right|=\dfrac{1}{6}\left|\sigma_{2}-\dfrac{1}{4}\sigma_{1}^{2}+\dfrac{3\tau}{8}\sigma_{1}^{2}\right|=\dfrac{1}{6}\left|\sigma_{2}+\dfrac{3\tau-2}{8}\sigma_{1}^{2}\right|.$

    Implementation of Lemma 2.2 along with the triangle inequality leads us to

    $\left|B_{3}-\tau B_{2}^{2}\right|\le\dfrac{1}{6}\max\left\{1,\left|\dfrac{3\tau-2}{8}\right|\right\}.$

    The functional bound is sharp and is obtained from (1.3), (1.4), and (5.1) with $m=2$.

    Setting $\tau=1$ in Theorem 6.2, we arrive at the following result.

    Corollary 6.3. If $f\in\mathcal{SK}_{\exp}$ has inverse function with series form (1.2), then

    $\left|B_{3}-B_{2}^{2}\right|\le\dfrac{1}{6}.$

    The functional bound is sharp; equality is achieved by utilizing (1.3), (1.4), and (5.1) with $m=2$.

    Theorem 6.4. If the inverse of the function $f\in\mathcal{SK}_{\exp}$ is expressed as in (1.2), then

    $\left|B_{4}-B_{2}B_{3}\right|\le\dfrac{1}{16}.$

    This outcome is sharp, as confirmed easily by using (1.3)–(1.5) and (5.1) with $m=3$.

    Proof. From (6.1)–(6.3), we have

    $\left|B_{4}-B_{2}B_{3}\right|=\dfrac{1}{16}\left|\sigma_{3}-\dfrac{7}{6}\sigma_{1}\sigma_{2}-\dfrac{1}{6}\sigma_{1}^{3}\right|.$

    From Lemma 2.1, let

    $\varrho=-\dfrac{7}{6}\qquad\text{and}\qquad\varsigma=-\dfrac{1}{6}.$

    It is clear that $\frac{1}{2}\le|\varrho|\le 2$ with

    $\dfrac{4}{27}\left(1+|\varrho|\right)^{3}-\left(1+|\varrho|\right)=-\dfrac{481}{729}\le\varsigma\qquad\text{and}\qquad\varsigma\le 1.$

    Thus, all the conditions of Lemma 2.1 are satisfied. Hence,

    $\left|B_{4}-B_{2}B_{3}\right|\le\dfrac{1}{16},$

    and the proof is completed.

    Theorem 6.5. If the inverse function of $f\in\mathcal{SK}_{\exp}$ is provided in (1.2), then

    $\left|B_{4}-B_{2}^{3}\right|\le\dfrac{1}{16}.$

    Equality holds by using (1.3), (1.5), and (5.1) with $m=3$.

    Proof. Combining (6.1) and (6.3), we have

    $\left|B_{4}-B_{2}^{3}\right|=\dfrac{1}{16}\left|\sigma_{3}-\dfrac{11}{6}\sigma_{1}\sigma_{2}-\dfrac{1}{4}\sigma_{1}^{3}\right|.$

    From Lemma 2.1, let

    $\varrho=-\dfrac{11}{6}\qquad\text{and}\qquad\varsigma=-\dfrac{1}{4}.$

    It is clear that $\frac{1}{2}\le|\varrho|\le 2$ with

    $-\dfrac{2}{3}\left(|\varrho|+1\right)=-\dfrac{17}{9}\le\varsigma\qquad\text{and}\qquad\varsigma\le\dfrac{4}{27}\left(1+|\varrho|\right)^{3}-\left(1+|\varrho|\right)=\dfrac{391}{729}.$

    Thus, all the conditions of Lemma 2.1 are satisfied. Hence,

    $\left|B_{4}-B_{2}^{3}\right|\le\dfrac{1}{16}.$

    Theorem 6.6. Let $f^{-1}$ be the inverse of $f\in\mathcal{SK}_{\exp}$ as defined in (1.2). Then

    $\left|\hat{H}_{2,2}(f^{-1})\right|=\left|B_{2}B_{4}-B_{3}^{2}\right|\le\dfrac{1}{36}.$

    Equality is achieved by using (1.3)–(1.5) and (5.1) with $m=2$.

    Proof. From (6.1)–(6.3), we have

    $\left|\hat{H}_{2,2}(f^{-1})\right|=\dfrac{1}{36}\left|\dfrac{1}{16}\sigma_{1}^{4}+\dfrac{17}{32}\sigma_{1}^{2}\sigma_{2}-\dfrac{9}{16}\sigma_{1}\sigma_{3}+\sigma_{2}^{2}\right|=\dfrac{1}{36}\left|\dfrac{1}{2}\left(\sigma_{2}^{2}-\sigma_{1}\sigma_{3}\right)+\dfrac{1}{2}\left(\dfrac{1}{8}\sigma_{1}^{4}-\dfrac{1}{8}\sigma_{1}\sigma_{3}+\dfrac{17}{16}\sigma_{1}^{2}\sigma_{2}+\sigma_{2}^{2}\right)\right|\le\dfrac{1}{72}R_{1}+\dfrac{1}{72}R_{2},$

    where

    $R_{1}=\left|\sigma_{2}^{2}-\sigma_{1}\sigma_{3}\right|$

    and

    $R_{2}=\left|\dfrac{1}{8}\sigma_{1}^{4}-\dfrac{1}{8}\sigma_{1}\sigma_{3}+\dfrac{17}{16}\sigma_{1}^{2}\sigma_{2}+\sigma_{2}^{2}\right|.$

    Utilizing Lemma 2.4, we obtain $R_{1}\le 1$. Also, by virtue of Lemma 2.3 applied to $R_{2}$, we achieve

    $R_{2}\le\dfrac{|\sigma_{1}|^{4}}{8}+\dfrac{17|\sigma_{2}||\sigma_{1}|^{2}}{16}+\left(1-\dfrac{|\sigma_{1}|}{8\left(|\sigma_{1}|+1\right)}\right)|\sigma_{2}|^{2}-\dfrac{|\sigma_{1}|^{3}}{8}+\dfrac{|\sigma_{1}|}{8}.$ (6.5)

    Since $1-\dfrac{|\sigma_{1}|}{8\left(|\sigma_{1}|+1\right)}>0$, we can substitute (2.4) into (6.5) to obtain

    $R_{2}\le\dfrac{|\sigma_{1}|^{4}}{8}-\dfrac{|\sigma_{1}|^{3}}{8}+\dfrac{17}{16}|\sigma_{1}|^{2}\left(1-|\sigma_{1}|^{2}\right)+\left(1-\dfrac{|\sigma_{1}|}{8\left(|\sigma_{1}|+1\right)}\right)\left(1-|\sigma_{1}|^{2}\right)^{2}+\dfrac{|\sigma_{1}|}{8}.$

    A routine maximization over $|\sigma_{1}|\in[0,1]$ then leads us to

    $R_{2}\le 1.$

    Hence,

    $\left|\hat{H}_{2,2}(f^{-1})\right|\le\dfrac{1}{72}R_{1}+\dfrac{1}{72}R_{2}\le\dfrac{1}{36}.$

    The proof is thus accomplished.

    Theorem 6.7. Let $f^{-1}$ be the inverse function of $f\in\mathcal{SK}_{\exp}$, expressed as in (1.2). Then

    $\left|\hat{H}_{3,1}(f^{-1})\right|<0.006672.$

    Proof. The determinant $\hat{H}_{3,1}(f^{-1})$ can be expressed as follows:

    $\left|\hat{H}_{3,1}(f^{-1})\right|=\left|2B_{2}B_{3}B_{4}-B_{4}^{2}-B_{2}^{2}B_{5}-B_{3}^{3}+B_{3}B_{5}\right|.$

    From (6.1)–(6.4), we easily write

    $\left|\hat{H}_{3,1}(f^{-1})\right|=\dfrac{1}{256}\left|\sigma_{3}^{2}+\left(\dfrac{7}{15}\sigma_{2}+\dfrac{1}{10}\sigma_{1}^{2}\right)\sigma_{1}\sigma_{3}-\dfrac{1}{540}\sigma_{1}^{6}-\dfrac{1}{60}\sigma_{1}^{4}\sigma_{2}-\dfrac{13}{180}\sigma_{1}^{2}\sigma_{2}^{2}+\dfrac{4}{15}\sigma_{1}^{2}\sigma_{4}-\dfrac{32}{135}\sigma_{2}^{3}+\dfrac{32}{15}\sigma_{2}\sigma_{4}\right|.$

    At the beginning, it should be noted that

    $\left|\sigma_{3}\left[\sigma_{3}+\dfrac{7}{15}\sigma_{1}\sigma_{2}+\dfrac{1}{10}\sigma_{1}^{3}\right]\right|\le|\sigma_{3}|,$

    where we have used Lemma 2.1 with $\varrho=\frac{7}{15}$ and $\varsigma=\frac{1}{10}$. Also, by using Lemma 2.3, we have

    $|\sigma_{3}|\le 1-|\sigma_{1}|^{2}-\dfrac{|\sigma_{2}|^{2}}{1+|\sigma_{1}|}\le 1-|\sigma_{1}|^{2}-\dfrac{|\sigma_{2}|^{2}}{2}.$

    Applying these bounds, together with $|\sigma_{4}|\le 1-|\sigma_{1}|^{2}-|\sigma_{2}|^{2}$, we achieve

    $\left|\hat{H}_{3,1}(f^{-1})\right|\le\dfrac{1}{256}E(|\sigma_{1}|,|\sigma_{2}|),$

    where, writing $\sigma=|\sigma_{1}|$ and $t=|\sigma_{2}|$,

    $E(\sigma,t)=1-\sigma^{2}-\dfrac{1}{2}t^{2}+\dfrac{1}{540}\sigma^{6}+\dfrac{1}{60}\sigma^{4}t+\dfrac{13}{180}\sigma^{2}t^{2}+\dfrac{4}{15}\sigma^{2}\left(1-\sigma^{2}-t^{2}\right)+\dfrac{32}{135}t^{3}+\dfrac{32}{15}t\left(1-\sigma^{2}-t^{2}\right).$

    But $E$ is a decreasing function of the variable $\sigma$; consequently,

    $E(\sigma,t)\le E(0,t)=1-\dfrac{1}{2}t^{2}-\dfrac{256}{135}t^{3}+\dfrac{32}{15}t.$

    The function $E(0,t)$ attains its maximum on $[0,1]$ at $t=\dfrac{-45+\sqrt{100329}}{512}\approx 0.5308$, so $E(0,t)\le 1.7079$, which completes the proof.
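    As with Theorem 3.1, the closing maximization can be verified numerically. The check below (ours, not part of the source) confirms that the stated critical point solves $256t^{2}+45t-96=0$, maximizes $E(0,t)$ on $[0,1]$, and gives a value below $1.708$, hence a determinant bound below $0.006672$:

```python
import math

def E0(t):
    """E(0, t) = 1 + (32/15)t - t^2/2 - (256/135)t^3, from the proof of Theorem 6.7."""
    return 1 + 32 * t / 15 - t**2 / 2 - 256 * t**3 / 135

# E0'(t) = 32/15 - t - (256/45)t^2 vanishes where 256t^2 + 45t - 96 = 0
t_star = (-45 + math.sqrt(100329)) / 512
assert abs(256 * t_star**2 + 45 * t_star - 96) < 1e-9
# confirm t_star is the maximizer of E0 on [0, 1] via a fine grid
assert all(E0(i / 10000) <= E0(t_star) + 1e-12 for i in range(10001))
assert E0(t_star) < 1.708 and E0(t_star) / 256 < 0.006672
```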

    Conjecture 6.8. If the inverse function of $f\in\mathcal{SK}_{\exp}$ is of the form (1.2), then the sharp bound

    $\left|\hat{H}_{3,1}(f^{-1})\right|\le\dfrac{1}{256}$

    holds, with equality achieved by using (1.3)–(1.5) and (5.1) with $m=3$.

    The study of Hankel determinant bounds is of great importance to the research community due to its vast applications in mathematical science. In the current article, we have considered Hankel determinants whose entries are the coefficients of inverse functions for various subclasses of analytic functions. This generalizes the classical definition of the Hankel determinant and may provide more knowledge of inverse functions. The main focus of this article is coefficient-related problems, along with Hankel determinants, for the inverses of functions belonging to families of symmetric starlike and symmetric convex functions associated with three different image domains. In particular, these problems include estimates of some initial inverse coefficients and sharp bounds of the Zalcman, Fekete–Szegö, and Krushkal inequalities, along with upper bounds for the second- and third-order Hankel determinants with inverse-coefficient entries, obtained by using the concept of a Schwarz function. We have also stated some conjectures that are strongly supported by our results. Our research introduces a framework for analyzing the Hankel determinant that emphasizes the role of inverse coefficients in analytic functions, potentially drawing more attention to coefficient-related problems. This study may be extended to meromorphic analytic functions, and the same methodology can be used to examine higher-order Hankel determinants, as studied in the articles [46,47,48].

    Huo Tang: Funding acquisition, Methodology, Project administration; Muhammad Abbas: Investigation, Writing-original draft; Reem K. Alhefthi: Formal analysis, Supervision, Writing-review and editing; Muhammad Arif: Formal analysis, Supervision, Writing-review and editing. All authors read and approved the final manuscript.

    The first author (Huo Tang) was partly supported by the Natural Science Foundation of China under Grant 11561001; the Program for Young Talents of Science and Technology in Universities of Inner Mongolia Autonomous Region under Grant NJYT18-A14; the Natural Science Foundation of Inner Mongolia of China under Grants 2022MS01004 and 2020MS01011; the Higher School Foundation of Inner Mongolia of China under Grant NJZY20200; the Program for Key Laboratory Construction of Chifeng University (No. CFXYZD202004); the Research and Innovation Team of Complex Analysis and Nonlinear Dynamic Systems of Chifeng University (No. cfxykycxtd202005); and the Youth Science Foundation of Chifeng University (No. cfxyqn202133).

    The third author (Reem K. Alhefthi) would like to extend their sincere appreciation to the Researchers Supporting Project number (RSPD2024R802), King Saud University, Riyadh, Saudi Arabia.

    The authors declare that they have no conflicts of interest.



  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
