Research article

Refinements of Jensen's inequality and applications

  • Received: 09 November 2021 Revised: 06 December 2021 Accepted: 23 December 2021 Published: 06 January 2022
  • MSC : 26A51, 26D15, 68P30

  • The principal aim of this research work is to establish refinements of the integral Jensen's inequality. For the intended refinements, we mainly use the notion of convexity and the concept of majorization. We derive some inequalities for power and quasi–arithmetic means while utilizing the main results. Moreover, we acquire several refinements of the Hölder inequality and also an improvement of the Hermite–Hadamard inequality as consequences of the obtained results. Furthermore, we secure several applications of the acquired results in information theory, which consist of bounds for the Shannon entropy, different divergences, the Bhattacharyya coefficient, triangular discrimination and various distances.

    Citation: Tareq Saeed, Muhammad Adil Khan, Hidayat Ullah. Refinements of Jensen's inequality and applications[J]. AIMS Mathematics, 2022, 7(4): 5328-5346. doi: 10.3934/math.2022297




    The field of mathematical inequalities and their applications has undergone dynamic and exponential advancement over the last few decades, with a dramatic effect on several areas of science [8,9]. Notably, many inventive concepts about mathematical inequalities and their applications can be obtained with the help of convexity [1,16,21]. Among these, Jensen's inequality is one of the most important inequalities made possible by convexity [11,17,19]. This inequality preserves some important structures, and a lot of inequalities are its direct consequences, for example the Hölder, Hermite–Hadamard, Ky Fan and Young inequalities [6,16]. Jensen's inequality also performs a very significant role in statistics, and many applications of this inequality have been observed, involving estimations for different divergences [5,12], several estimations for the Zipf–Mandelbrot law [2,3] and the Shannon entropy [4].

    In the following theorem, Jensen's integral inequality is stated:

    Theorem 1.1. Assume that $I$ is an arbitrary interval in $\mathbb{R}$ and $\omega,\psi:[a,b]\to I$ are integrable functions such that $\omega>0$. If $\Phi:I\to\mathbb{R}$ is a convex function and $\Phi\circ\psi$ is an integrable function on $[a,b]$, then

    $$\Phi\left(\frac{\int_a^b\omega(\upsilon)\,\psi(\upsilon)\,d\upsilon}{\int_a^b\omega(\upsilon)\,d\upsilon}\right)\le\frac{\int_a^b\omega(\upsilon)\,(\Phi\circ\psi)(\upsilon)\,d\upsilon}{\int_a^b\omega(\upsilon)\,d\upsilon}.\qquad(1.1)$$

    The inequality (1.1) is valid in the opposite direction if the function Φ is concave on I.
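As a quick numerical sanity check, the two sides of (1.1) can be compared after discretizing the integrals. The weight ω, the function ψ and the convex function Φ below are illustrative choices, not taken from the paper.

```python
import numpy as np

# Midpoint-rule discretization of the integrals in Jensen's inequality (1.1).
a, b, n = 0.0, 1.0, 100_000
u = a + (np.arange(n) + 0.5) * (b - a) / n       # midpoint grid on [a, b]
integ = lambda f: f.sum() * (b - a) / n          # midpoint-rule integral

omega = 1.0 + u                                  # positive weight omega > 0
psi = u                                          # integrable, maps into I = [0, 1]
Phi = np.exp                                     # convex function on R

mean_psi = integ(omega * psi) / integ(omega)     # weighted mean of psi
lhs = Phi(mean_psi)                              # Phi of the weighted mean
rhs = integ(omega * Phi(psi)) / integ(omega)     # weighted mean of Phi(psi)
print(lhs <= rhs)                                # True, as (1.1) predicts
```

For these choices lhs ≈ e^{5/9} ≈ 1.743 and rhs = e/1.5 ≈ 1.812, so the inequality holds strictly, as expected for a non-constant ψ.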

    The concept of majorization has also been an interesting topic for researchers since 1932 due to its important structure and properties [15]. Many researchers have worked in this direction, and a lot of results are devoted to this concept [16,18]. Now, we discuss some important literature about majorization. Let $\kappa\ge2$ be a fixed natural number and

    $$s_1=(\gamma_1,\gamma_2,\ldots,\gamma_\kappa),\qquad s_2=(\delta_1,\delta_2,\ldots,\delta_\kappa)$$

    be two $\kappa$-tuples with real entries. Assume that

    $$\gamma_{[1]}\ge\gamma_{[2]}\ge\cdots\ge\gamma_{[\kappa]},\qquad\delta_{[1]}\ge\delta_{[2]}\ge\cdots\ge\delta_{[\kappa]}$$

    are the components of the tuples $s_1$ and $s_2$ arranged in decreasing order, respectively.

    Definition 1.2. The $\kappa$-tuple $s_1$ is said to majorize the $\kappa$-tuple $s_2$ (or $s_2$ is said to be majorized by $s_1$) if

    $$\sum_{\ell=1}^{h}\gamma_{[\ell]}\ge\sum_{\ell=1}^{h}\delta_{[\ell]},\qquad h=1,2,\ldots,\kappa-1,$$

    and

    $$\sum_{\ell=1}^{\kappa}\gamma_{\ell}=\sum_{\ell=1}^{\kappa}\delta_{\ell}$$

    hold. In symbols, it is written as $s_1\succ s_2$.

    The following theorem, established for majorized tuples by means of convex functions, is famous in the literature as the majorization theorem.

    Theorem 1.3. ([15]) Let $I$ be an interval in $\mathbb{R}$ and $s_1,s_2$ be two $\kappa$-tuples with entries in $I$. Then the inequality

    $$\sum_{j=1}^{\kappa}\Phi(\gamma_j)\ge\sum_{j=1}^{\kappa}\Phi(\delta_j)\qquad(1.2)$$

    holds for every continuous convex function $\Phi$ on $I$ if and only if $s_1\succ s_2$.

    The inequality (1.2) is valid in the reverse direction if the function Φ is concave on I.
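The majorization conditions of Definition 1.2 and the inequality (1.2) can be checked directly on a small example; the tuples and the convex function below are illustrative.

```python
import numpy as np

# Definition 1.2: s1 majorizes s2 iff the partial sums of the decreasingly
# ordered entries dominate and the total sums agree.
def majorizes(s1, s2, tol=1e-12):
    g = np.sort(s1)[::-1]                        # gamma_[1] >= ... >= gamma_[k]
    d = np.sort(s2)[::-1]
    partial = np.all(np.cumsum(g)[:-1] >= np.cumsum(d)[:-1] - tol)
    total = abs(g.sum() - d.sum()) <= tol
    return bool(partial and total)

s1 = np.array([3.0, 1.0, 0.0])
s2 = np.array([2.0, 1.0, 1.0])
Phi = lambda x: x ** 2                           # continuous convex function

print(majorizes(s1, s2))                         # True: (3,1,0) majorizes (2,1,1)
print(Phi(s1).sum() >= Phi(s2).sum())            # True: 10 >= 6, as in (1.2)
```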

    For integrable functions, the definition of majorization can be stated as follows (see [18, p. 324]).

    Definition 1.4. Assume that $\varphi,\psi:[a,b]\to\mathbb{R}$ are any two functions. Then the function $\varphi$ is said to be majorized by the function $\psi$ (abbreviated $\psi\succ\varphi$) if both $\varphi,\psi$ are decreasing on $[a,b]$ and satisfy the conditions

    $$\int_a^x\psi(\upsilon)\,d\upsilon\ge\int_a^x\varphi(\upsilon)\,d\upsilon,\qquad x\in[a,b],\qquad(1.3)$$

    and

    $$\int_a^b\psi(\upsilon)\,d\upsilon=\int_a^b\varphi(\upsilon)\,d\upsilon.\qquad(1.4)$$

    The following theorem is the integral version of Theorem 1.3 (see [18, p. 325]).

    Theorem 1.5. Assume that $\varphi,\psi$ are decreasing functions on $[a,b]$. Then the inequality

    $$\int_a^b\Phi(\psi(\upsilon))\,d\upsilon\ge\int_a^b\Phi(\varphi(\upsilon))\,d\upsilon\qquad(1.5)$$

    holds for each continuous convex function $\Phi:[a,b]\to\mathbb{R}$ if and only if $\psi\succ\varphi$.

    If Φ is concave on [a,b], then (1.5) holds in the opposite direction.

    In 1995, Maligranda et al. [14] presented the weighted version of (1.5), which is given in the following theorem.

    Theorem 1.6. Assume that $\Phi:I\to\mathbb{R}$ is a convex function and $\varphi,\psi,\omega:[a,b]\to I$ are continuous functions such that $\omega(\upsilon)\ge0$ for $\upsilon\in[a,b]$, satisfying

    $$\int_a^x\omega(\upsilon)\psi(\upsilon)\,d\upsilon\ge\int_a^x\omega(\upsilon)\varphi(\upsilon)\,d\upsilon,\qquad x\in[a,b],\qquad(1.6)$$

    and

    $$\int_a^b\omega(\upsilon)\psi(\upsilon)\,d\upsilon=\int_a^b\omega(\upsilon)\varphi(\upsilon)\,d\upsilon.\qquad(1.7)$$

    If the function $\varphi$ is decreasing on $[a,b]$, then

    $$\int_a^b\omega(\upsilon)\,\Phi(\psi(\upsilon))\,d\upsilon\ge\int_a^b\omega(\upsilon)\,\Phi(\varphi(\upsilon))\,d\upsilon.\qquad(1.8)$$

    If the function Φ is concave on [a,b], then (1.8) is true in the reverse direction.

    In the following theorem, Dragomir [7] presented a weighted majorization inequality by taking certain integrable functions.

    Theorem 1.7. Assume that $\Phi:I\to\mathbb{R}$ is a convex function and $\psi,\varphi,\omega:[a,b]\to I$ are continuous functions such that $\omega>0$ on $[a,b]$. If $\varphi$ and $\psi-\varphi$ are monotonic in the same direction and satisfy condition (1.7), then (1.8) holds.

    In the following theorem, Dragomir [7] gave another majorization result under relaxed conditions on $\psi,\varphi$ and a stricter condition on $\Phi$.

    Theorem 1.8. Assume that the function $\Phi:I\to\mathbb{R}$ is non-decreasing and convex, and $\psi,\varphi,\omega:[a,b]\to I$ are continuous functions such that $\omega>0$ on $[a,b]$. Also, let $\varphi$ and $\psi-\varphi$ be monotonic in the same direction. If

    $$\int_a^b\omega(\upsilon)\psi(\upsilon)\,d\upsilon\ge\int_a^b\omega(\upsilon)\varphi(\upsilon)\,d\upsilon,$$

    then (1.8) holds.

    In recent decades, majorization has become a very popular area of research, and different generalizations [10,13], extensions [1] and improvements [10,23] of majorization-type inequalities have been obtained. Moreover, majorization-type inequalities have also been proved for other classes of convex functions [20,22,24].

    The aim of this research work is to obtain refinements of the celebrated Jensen inequality. The intended refinements are established by utilizing the theory of majorization and the notion of convexity. We present some fruitful consequences of the main results in the form of an improvement of the Hermite–Hadamard inequality and refinements of the Hölder inequality. Furthermore, we also present some inequalities for quasi–arithmetic and power means as consequences of the obtained results. Moreover, we give several applications of the constructed results in information theory. These applications provide bounds for the Csiszár divergence, the Kullback–Leibler divergence, the Shannon entropy, various distances, triangular discrimination and the Bhattacharyya coefficient.

    This section is dedicated to the refinements of Jensen's inequality. The intended refinements are made possible with the help of the notion of convexity and the concept of majorization. We commence this section with the following result, in which a refinement of the Jensen inequality is obtained with the support of Theorem 1.6.

    Theorem 2.1. Assume that $\Phi:I\to\mathbb{R}$ is a convex function and $\varphi,\psi,\omega:[a,b]\to I$ are integrable functions such that $\omega(\upsilon)>0$ for all $\upsilon\in[a,b]$, with $\bar\omega=\int_a^b\omega(\upsilon)\,d\upsilon$. Also, assume that conditions (1.6) and (1.7) are valid and $\bar\psi=\frac{1}{\bar\omega}\int_a^b\omega(\upsilon)\psi(\upsilon)\,d\upsilon$. If $\lambda\in[0,1]$ and the function $\varphi$ is decreasing on $[a,b]$, then

    $$\begin{aligned}\Phi(\bar\psi)&\le\frac{1}{\bar\omega}\int_a^b\omega(\upsilon)\,\Phi\big((1-\lambda)\bar\psi+\lambda\varphi(\upsilon)\big)\,d\upsilon\\&\le(1-\lambda)\,\Phi(\bar\psi)+\frac{\lambda}{\bar\omega}\int_a^b\omega(\upsilon)\,\Phi(\varphi(\upsilon))\,d\upsilon\\&\le\frac{1-\lambda}{\bar\omega}\int_a^b\omega(\upsilon)\,\Phi(\psi(\upsilon))\,d\upsilon+\frac{\lambda}{\bar\omega}\int_a^b\omega(\upsilon)\,\Phi(\varphi(\upsilon))\,d\upsilon\\&\le\frac{1}{\bar\omega}\int_a^b\omega(\upsilon)\,\Phi(\psi(\upsilon))\,d\upsilon.\end{aligned}\qquad(2.1)$$

    The aforementioned inequalities hold in the reverse sense if the function $\Phi$ is concave.

    Proof. By utilizing the condition (1.7), we may write

    $$\Phi\left(\frac{1}{\bar\omega}\int_a^b\omega(\upsilon)\psi(\upsilon)\,d\upsilon\right)=\Phi\left(\frac{1}{\bar\omega}\int_a^b\omega(\upsilon)\Big((1-\lambda)\bar\psi+\lambda\varphi(\upsilon)\Big)\,d\upsilon\right).\qquad(2.2)$$

    Applying Jensen's inequality to the right hand side of (2.2), we get

    $$\Phi\left(\frac{1}{\bar\omega}\int_a^b\omega(\upsilon)\Big((1-\lambda)\bar\psi+\lambda\varphi(\upsilon)\Big)\,d\upsilon\right)\le\frac{1}{\bar\omega}\int_a^b\omega(\upsilon)\,\Phi\Big((1-\lambda)\bar\psi+\lambda\varphi(\upsilon)\Big)\,d\upsilon.\qquad(2.3)$$

    Now, using the definition of convexity on the right-hand side of (2.3), we acquire

    $$\frac{1}{\bar\omega}\int_a^b\omega(\upsilon)\,\Phi\Big((1-\lambda)\bar\psi+\lambda\varphi(\upsilon)\Big)\,d\upsilon\le(1-\lambda)\,\Phi(\bar\psi)+\frac{\lambda}{\bar\omega}\int_a^b\omega(\upsilon)\,\Phi(\varphi(\upsilon))\,d\upsilon.\qquad(2.4)$$

    Applying Jensen's inequality to the first term on the right hand side of (2.4), we obtain

    $$(1-\lambda)\,\Phi(\bar\psi)+\frac{\lambda}{\bar\omega}\int_a^b\omega(\upsilon)\,\Phi(\varphi(\upsilon))\,d\upsilon\le\frac{1-\lambda}{\bar\omega}\int_a^b\omega(\upsilon)\,\Phi(\psi(\upsilon))\,d\upsilon+\frac{\lambda}{\bar\omega}\int_a^b\omega(\upsilon)\,\Phi(\varphi(\upsilon))\,d\upsilon.\qquad(2.5)$$

    Now, utilizing Theorem 1.6 on the right hand side of (2.5), we get

    $$\frac{1-\lambda}{\bar\omega}\int_a^b\omega(\upsilon)\,\Phi(\psi(\upsilon))\,d\upsilon+\frac{\lambda}{\bar\omega}\int_a^b\omega(\upsilon)\,\Phi(\varphi(\upsilon))\,d\upsilon\le\frac{1-\lambda}{\bar\omega}\int_a^b\omega(\upsilon)\,\Phi(\psi(\upsilon))\,d\upsilon+\frac{\lambda}{\bar\omega}\int_a^b\omega(\upsilon)\,\Phi(\psi(\upsilon))\,d\upsilon=\frac{1}{\bar\omega}\int_a^b\omega(\upsilon)\,\Phi(\psi(\upsilon))\,d\upsilon.\qquad(2.6)$$

    From (2.2)–(2.6), we obtain (2.1). □
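The four inequalities of the chain (2.1) can also be verified numerically. The functions below are illustrative: φ is decreasing, and the pair (ψ, φ) satisfies (1.6) and (1.7) on [0,1], since the partial integrals of ωψ dominate those of ωφ with equal totals.

```python
import numpy as np

# Numerical check of the chain (2.1). Illustrative choices:
# int_0^x psi >= int_0^x phi on [0, 1], with equality of totals at x = 1.
a, b, n = 0.0, 1.0, 200_000
u = a + (np.arange(n) + 0.5) * (b - a) / n       # midpoint grid
integ = lambda f: f.sum() * (b - a) / n          # midpoint-rule integral

omega = np.ones_like(u)                          # positive weight
psi = 1.5 - u
phi = 1.2 - 0.4 * u                              # decreasing on [0, 1]
Phi = lambda x: x ** 2                           # convex function
lam = 0.4

wbar = integ(omega)
psibar = integ(omega * psi) / wbar               # equals 1 here
t1 = Phi(psibar)
t2 = integ(omega * Phi((1 - lam) * psibar + lam * phi)) / wbar
t3 = (1 - lam) * Phi(psibar) + lam * integ(omega * Phi(phi)) / wbar
t4 = ((1 - lam) * integ(omega * Phi(psi)) + lam * integ(omega * Phi(phi))) / wbar
t5 = integ(omega * Phi(psi)) / wbar
print(t1 <= t2 <= t3 <= t4 <= t5)                # True: each step of (2.1)
```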

    Corollary 1. Suppose that all the assumptions of Theorem 2.1 hold. Moreover, if $\varphi(\upsilon)=\psi(\upsilon)$ for all $\upsilon\in[a,b]$, then

    $$\Phi(\bar\psi)\le\frac{1}{\bar\omega}\int_a^b\omega(\upsilon)\,\Phi\big((1-\lambda)\bar\psi+\lambda\psi(\upsilon)\big)\,d\upsilon\le(1-\lambda)\,\Phi(\bar\psi)+\frac{\lambda}{\bar\omega}\int_a^b\omega(\upsilon)\,\Phi(\psi(\upsilon))\,d\upsilon\le\frac{1}{\bar\omega}\int_a^b\omega(\upsilon)\,\Phi(\psi(\upsilon))\,d\upsilon.\qquad(2.7)$$

    If the function $\Phi$ is concave, then inequality (2.7) holds in the opposite sense.

    Proof. If we put $\varphi(\upsilon)=\psi(\upsilon)$ for $\upsilon\in[a,b]$, then all the conditions of Theorem 2.1 are satisfied. Therefore, using (2.1) with $\varphi(\upsilon)=\psi(\upsilon)$, we obtain (2.7). □

    In the following theorem, we obtain the inequalities given in (2.1) while using Theorem 1.7 instead of Theorem 1.6.

    Theorem 2.2. Assume that the function $\Phi:I\to\mathbb{R}$ is convex and $\psi,\varphi,\omega:[a,b]\to I$ are continuous functions such that $\omega(\upsilon)>0$ for all $\upsilon\in[a,b]$. Also, assume that $\varphi$ and $\psi-\varphi$ are monotonic in the same direction and satisfy

    $$\int_a^b\omega(\upsilon)\psi(\upsilon)\,d\upsilon=\int_a^b\omega(\upsilon)\varphi(\upsilon)\,d\upsilon.\qquad(2.8)$$

    If λ[0,1], then the inequalities given in (2.1) are valid.

    Proof. By adopting the idea used in the proof of Theorem 2.1 along with the result of Theorem 1.7, we acquire (2.1).

    Remark 1. Inequalities given in (2.1) can also be obtained under the conditions discussed in Theorem 1.8.

    In the following result, we obtain refinements of the Hölder inequality.

    Corollary 2. Suppose that the functions $\rho,g_1,g_2$ are non-negative and integrable on the interval $[a,b]$. Also, let $p,q>1$ be such that $\frac1p+\frac1q=1$, and let $\lambda\in[0,1]$. Then the following statements are true:

    (i) If $\int_a^b\rho(\upsilon)g_2^{q}(\upsilon)\,d\upsilon>0$, then

    $$\begin{aligned}\int_a^b\rho(\upsilon)g_1(\upsilon)g_2(\upsilon)\,d\upsilon&\le\left(\int_a^b\rho(\upsilon)g_2^{q}(\upsilon)\,d\upsilon\right)^{\frac{1}{q}}\left[\int_a^b\rho(\upsilon)g_2^{q}(\upsilon)\left((1-\lambda)\frac{\int_a^b\rho(\upsilon)g_1(\upsilon)g_2(\upsilon)\,d\upsilon}{\int_a^b\rho(\upsilon)g_2^{q}(\upsilon)\,d\upsilon}+\lambda\,g_1(\upsilon)g_2^{-\frac{q}{p}}(\upsilon)\right)^{p}d\upsilon\right]^{\frac{1}{p}}\\&\le\left[(1-\lambda)\left(\int_a^b\rho(\upsilon)g_1(\upsilon)g_2(\upsilon)\,d\upsilon\right)^{p}+\lambda\left(\int_a^b\rho(\upsilon)g_1^{p}(\upsilon)\,d\upsilon\right)\left(\int_a^b\rho(\upsilon)g_2^{q}(\upsilon)\,d\upsilon\right)^{p-1}\right]^{\frac{1}{p}}\\&\le\left(\int_a^b\rho(\upsilon)g_1^{p}(\upsilon)\,d\upsilon\right)^{\frac{1}{p}}\left(\int_a^b\rho(\upsilon)g_2^{q}(\upsilon)\,d\upsilon\right)^{\frac{1}{q}}.\end{aligned}\qquad(2.9)$$

    (ii) If $\int_a^b\rho(\upsilon)g_1^{p}(\upsilon)\,d\upsilon>0$, then

    $$\begin{aligned}\int_a^b\rho(\upsilon)g_1(\upsilon)g_2(\upsilon)\,d\upsilon&\le\left(\int_a^b\rho(\upsilon)g_1^{p}(\upsilon)\,d\upsilon\right)^{\frac{1}{p}}\left[\int_a^b\rho(\upsilon)g_1^{p}(\upsilon)\left((1-\lambda)\frac{\int_a^b\rho(\upsilon)g_1(\upsilon)g_2(\upsilon)\,d\upsilon}{\int_a^b\rho(\upsilon)g_1^{p}(\upsilon)\,d\upsilon}+\lambda\,g_2(\upsilon)g_1^{-\frac{p}{q}}(\upsilon)\right)^{q}d\upsilon\right]^{\frac{1}{q}}\\&\le\left[(1-\lambda)\left(\int_a^b\rho(\upsilon)g_1(\upsilon)g_2(\upsilon)\,d\upsilon\right)^{q}+\lambda\left(\int_a^b\rho(\upsilon)g_2^{q}(\upsilon)\,d\upsilon\right)\left(\int_a^b\rho(\upsilon)g_1^{p}(\upsilon)\,d\upsilon\right)^{q-1}\right]^{\frac{1}{q}}\\&\le\left(\int_a^b\rho(\upsilon)g_1^{p}(\upsilon)\,d\upsilon\right)^{\frac{1}{p}}\left(\int_a^b\rho(\upsilon)g_2^{q}(\upsilon)\,d\upsilon\right)^{\frac{1}{q}}.\end{aligned}\qquad(2.10)$$

    (iii) In the case when $0<p<1$, $q=\frac{p}{p-1}$ and $\int_a^b\rho(\upsilon)g_2^{q}(\upsilon)\,d\upsilon>0$, we have

    $$\begin{aligned}\left(\int_a^b\rho(\upsilon)g_1^{p}(\upsilon)\,d\upsilon\right)^{\frac{1}{p}}\left(\int_a^b\rho(\upsilon)g_2^{q}(\upsilon)\,d\upsilon\right)^{\frac{1}{q}}&\le\int_a^b\rho(\upsilon)g_2^{q}(\upsilon)\left((1-\lambda)\frac{\int_a^b\rho(\upsilon)g_1^{p}(\upsilon)\,d\upsilon}{\int_a^b\rho(\upsilon)g_2^{q}(\upsilon)\,d\upsilon}+\lambda\,g_1^{p}(\upsilon)g_2^{p-pq}(\upsilon)\right)^{\frac{1}{p}}d\upsilon\\&\le(1-\lambda)\left(\int_a^b\rho(\upsilon)g_1^{p}(\upsilon)\,d\upsilon\right)^{\frac{1}{p}}\left(\int_a^b\rho(\upsilon)g_2^{q}(\upsilon)\,d\upsilon\right)^{\frac{1}{q}}+\lambda\int_a^b\rho(\upsilon)g_1(\upsilon)g_2(\upsilon)\,d\upsilon\\&\le\int_a^b\rho(\upsilon)g_1(\upsilon)g_2(\upsilon)\,d\upsilon.\end{aligned}\qquad(2.11)$$

    (iv) In the case when $p<0$, $q=\frac{p}{p-1}$ and $\int_a^b\rho(\upsilon)g_1^{p}(\upsilon)\,d\upsilon>0$, we have

    $$\begin{aligned}\left(\int_a^b\rho(\upsilon)g_1^{p}(\upsilon)\,d\upsilon\right)^{\frac{1}{p}}\left(\int_a^b\rho(\upsilon)g_2^{q}(\upsilon)\,d\upsilon\right)^{\frac{1}{q}}&\le\int_a^b\rho(\upsilon)g_1^{p}(\upsilon)\left((1-\lambda)\frac{\int_a^b\rho(\upsilon)g_2^{q}(\upsilon)\,d\upsilon}{\int_a^b\rho(\upsilon)g_1^{p}(\upsilon)\,d\upsilon}+\lambda\,g_2^{q}(\upsilon)g_1^{q-pq}(\upsilon)\right)^{\frac{1}{q}}d\upsilon\\&\le(1-\lambda)\left(\int_a^b\rho(\upsilon)g_1^{p}(\upsilon)\,d\upsilon\right)^{\frac{1}{p}}\left(\int_a^b\rho(\upsilon)g_2^{q}(\upsilon)\,d\upsilon\right)^{\frac{1}{q}}+\lambda\int_a^b\rho(\upsilon)g_1(\upsilon)g_2(\upsilon)\,d\upsilon\\&\le\int_a^b\rho(\upsilon)g_1(\upsilon)g_2(\upsilon)\,d\upsilon.\end{aligned}\qquad(2.12)$$

    Proof. First, we prove (i). The function $\Phi(\upsilon)=\upsilon^{p}$ is convex for $\upsilon>0$, since $p>1$. Therefore, applying inequality (2.7) with $\Phi(\upsilon)=\upsilon^{p}$, $\omega(\upsilon)=\rho(\upsilon)g_2^{q}(\upsilon)$ and $\psi(\upsilon)=g_1(\upsilon)g_2^{-\frac{q}{p}}(\upsilon)$, we get (2.9).

    Now, we prove case (ii). For this, apply inequality (2.7) with $\Phi(\upsilon)=\upsilon^{q}$, $\upsilon>0$, $\omega(\upsilon)=\rho(\upsilon)g_1^{p}(\upsilon)$ and $\psi(\upsilon)=g_2(\upsilon)g_1^{-\frac{p}{q}}(\upsilon)$ to obtain (2.10).

    Next, we prove inequality (2.11). If $0<p<1$, then both $\frac1p$ and $(1-p)^{-1}$ are greater than one and their reciprocals sum to one. Therefore, using (2.9) with $p$, $q$, $g_1$, $g_2$ replaced by $\frac1p$, $(1-p)^{-1}$, $(g_1g_2)^{p}$ and $g_2^{-p}$ respectively, we get (2.11).

    Now, we prove the last case. Since $p<0$, we have $q=\frac{p}{p-1}\in(0,1)$. Thus, utilizing (2.10) with $p$, $q$, $g_1$, $g_2$ replaced by $(1-q)^{-1}$, $\frac1q$, $g_1^{-q}$ and $(g_1g_2)^{q}$ respectively, we obtain (2.12). □
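The chain (2.9) can be checked numerically for the classical case p = q = 2 (Cauchy–Schwarz); ρ, g1, g2 below are illustrative non-negative functions, not taken from the paper.

```python
import numpy as np

# Numerical check of the refined Hoelder chain (2.9) for p = q = 2.
a, b, n = 0.0, 1.0, 200_000
u = a + (np.arange(n) + 0.5) * (b - a) / n
integ = lambda f: f.sum() * (b - a) / n

p = q = 2.0
lam = 0.3
rho = np.ones_like(u)
g1 = 1.0 + u
g2 = np.exp(-u)

A = integ(rho * g1 * g2)                         # int rho g1 g2
W = integ(rho * g2 ** q)                         # int rho g2^q
B = integ(rho * g1 ** p)                         # int rho g1^p

m2 = W ** (1 / q) * integ(
    rho * g2 ** q * ((1 - lam) * A / W + lam * g1 * g2 ** (-q / p)) ** p
) ** (1 / p)
m3 = ((1 - lam) * A ** p + lam * B * W ** (p - 1)) ** (1 / p)
m4 = B ** (1 / p) * W ** (1 / q)
print(A <= m2 <= m3 <= m4)                       # True: (2.9) step by step
```

Each intermediate member interpolates between the two sides of the Hölder inequality, which is the point of the refinement.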

    In the following corollary, we establish some more refinements of the Hölder inequality.

    Corollary 3. Assume that $p,q>1$ are such that $\frac1p+\frac1q=1$. Also, assume that $\rho,g_1$ and $g_2$ are non-negative functions on $[a,b]$ such that $\rho g_1^{p},\rho g_2^{q},\rho g_1g_2\in L^1[a,b]$, and let $\lambda\in[0,1]$. Then the following statements are valid:

    (i) If $\int_a^b\rho(\upsilon)g_2^{q}(\upsilon)\,d\upsilon>0$, then

    $$\begin{aligned}\left(\int_a^b\rho(\upsilon)g_1^{p}(\upsilon)\,d\upsilon\right)^{\frac{1}{p}}\left(\int_a^b\rho(\upsilon)g_2^{q}(\upsilon)\,d\upsilon\right)^{\frac{1}{q}}&\ge\int_a^b\rho(\upsilon)g_2^{q}(\upsilon)\left((1-\lambda)\frac{\int_a^b\rho(\upsilon)g_1^{p}(\upsilon)\,d\upsilon}{\int_a^b\rho(\upsilon)g_2^{q}(\upsilon)\,d\upsilon}+\lambda\,g_1^{p}(\upsilon)g_2^{-q}(\upsilon)\right)^{\frac{1}{p}}d\upsilon\\&\ge(1-\lambda)\left(\int_a^b\rho(\upsilon)g_1^{p}(\upsilon)\,d\upsilon\right)^{\frac{1}{p}}\left(\int_a^b\rho(\upsilon)g_2^{q}(\upsilon)\,d\upsilon\right)^{\frac{1}{q}}+\lambda\int_a^b\rho(\upsilon)g_1(\upsilon)g_2(\upsilon)\,d\upsilon\\&\ge\int_a^b\rho(\upsilon)g_1(\upsilon)g_2(\upsilon)\,d\upsilon.\end{aligned}\qquad(2.13)$$

    (ii) If $\int_a^b\rho(\upsilon)g_1^{p}(\upsilon)\,d\upsilon>0$, then

    $$\begin{aligned}\left(\int_a^b\rho(\upsilon)g_1^{p}(\upsilon)\,d\upsilon\right)^{\frac{1}{p}}\left(\int_a^b\rho(\upsilon)g_2^{q}(\upsilon)\,d\upsilon\right)^{\frac{1}{q}}&\ge\int_a^b\rho(\upsilon)g_1^{p}(\upsilon)\left((1-\lambda)\frac{\int_a^b\rho(\upsilon)g_2^{q}(\upsilon)\,d\upsilon}{\int_a^b\rho(\upsilon)g_1^{p}(\upsilon)\,d\upsilon}+\lambda\,g_1^{-p}(\upsilon)g_2^{q}(\upsilon)\right)^{\frac{1}{q}}d\upsilon\\&\ge(1-\lambda)\left(\int_a^b\rho(\upsilon)g_1^{p}(\upsilon)\,d\upsilon\right)^{\frac{1}{p}}\left(\int_a^b\rho(\upsilon)g_2^{q}(\upsilon)\,d\upsilon\right)^{\frac{1}{q}}+\lambda\int_a^b\rho(\upsilon)g_1(\upsilon)g_2(\upsilon)\,d\upsilon\\&\ge\int_a^b\rho(\upsilon)g_1(\upsilon)g_2(\upsilon)\,d\upsilon.\end{aligned}\qquad(2.14)$$

    (iii) In the situation when $p\in(0,1)$ and $q=\frac{p}{p-1}$, with $\int_a^b\rho(\upsilon)g_2^{q}(\upsilon)\,d\upsilon>0$, we have

    $$\begin{aligned}\int_a^b\rho(\upsilon)g_1(\upsilon)g_2(\upsilon)\,d\upsilon&\ge\left(\int_a^b\rho(\upsilon)g_2^{q}(\upsilon)\,d\upsilon\right)^{\frac{1}{q}}\left[\int_a^b\rho(\upsilon)g_2^{q}(\upsilon)\left((1-\lambda)\frac{\int_a^b\rho(\upsilon)g_1(\upsilon)g_2(\upsilon)\,d\upsilon}{\int_a^b\rho(\upsilon)g_2^{q}(\upsilon)\,d\upsilon}+\lambda\,g_1(\upsilon)g_2^{1-q}(\upsilon)\right)^{p}d\upsilon\right]^{\frac{1}{p}}\\&\ge\left(\int_a^b\rho(\upsilon)g_2^{q}(\upsilon)\,d\upsilon\right)^{\frac{1}{q}}\left[(1-\lambda)\left(\int_a^b\rho(\upsilon)g_1(\upsilon)g_2(\upsilon)\,d\upsilon\right)^{p}\left(\int_a^b\rho(\upsilon)g_2^{q}(\upsilon)\,d\upsilon\right)^{1-p}+\lambda\int_a^b\rho(\upsilon)g_1^{p}(\upsilon)\,d\upsilon\right]^{\frac{1}{p}}\\&\ge\left(\int_a^b\rho(\upsilon)g_1^{p}(\upsilon)\,d\upsilon\right)^{\frac{1}{p}}\left(\int_a^b\rho(\upsilon)g_2^{q}(\upsilon)\,d\upsilon\right)^{\frac{1}{q}}.\end{aligned}\qquad(2.15)$$

    (iv) In the case when $p<0$, $q=\frac{p}{p-1}$ and $\int_a^b\rho(\upsilon)g_1^{p}(\upsilon)\,d\upsilon>0$, we have

    $$\begin{aligned}\int_a^b\rho(\upsilon)g_1(\upsilon)g_2(\upsilon)\,d\upsilon&\ge\left(\int_a^b\rho(\upsilon)g_1^{p}(\upsilon)\,d\upsilon\right)^{\frac{1}{p}}\left[\int_a^b\rho(\upsilon)g_1^{p}(\upsilon)\left((1-\lambda)\frac{\int_a^b\rho(\upsilon)g_1(\upsilon)g_2(\upsilon)\,d\upsilon}{\int_a^b\rho(\upsilon)g_1^{p}(\upsilon)\,d\upsilon}+\lambda\,g_2(\upsilon)g_1^{1-p}(\upsilon)\right)^{q}d\upsilon\right]^{\frac{1}{q}}\\&\ge\left(\int_a^b\rho(\upsilon)g_1^{p}(\upsilon)\,d\upsilon\right)^{\frac{1}{p}}\left[(1-\lambda)\left(\int_a^b\rho(\upsilon)g_1(\upsilon)g_2(\upsilon)\,d\upsilon\right)^{q}\left(\int_a^b\rho(\upsilon)g_1^{p}(\upsilon)\,d\upsilon\right)^{1-q}+\lambda\int_a^b\rho(\upsilon)g_2^{q}(\upsilon)\,d\upsilon\right]^{\frac{1}{q}}\\&\ge\left(\int_a^b\rho(\upsilon)g_1^{p}(\upsilon)\,d\upsilon\right)^{\frac{1}{p}}\left(\int_a^b\rho(\upsilon)g_2^{q}(\upsilon)\,d\upsilon\right)^{\frac{1}{q}}.\end{aligned}\qquad(2.16)$$

    Proof. First, we prove inequality (2.13). The function $\Phi(\upsilon)=\upsilon^{\frac1p}$ is concave on $(0,\infty)$, since $0<\frac1p<1$. Therefore, applying inequality (2.7) in its reversed (concave) form with $\Phi(\upsilon)=\upsilon^{\frac1p}$, $\omega(\upsilon)=\rho(\upsilon)g_2^{q}(\upsilon)$ and $\psi(\upsilon)=g_1^{p}(\upsilon)g_2^{-q}(\upsilon)$, we obtain (2.13).

    Next, we prove (2.14). For this, utilize inequality (2.7) in the same reversed form with $\Phi(\upsilon)=\upsilon^{\frac1q}$, $\upsilon>0$, $\omega(\upsilon)=\rho(\upsilon)g_1^{p}(\upsilon)$ and $\psi(\upsilon)=g_1^{-p}(\upsilon)g_2^{q}(\upsilon)$.

    For the proof of (2.15), utilize (2.13) with $p$, $q$, $g_1$, $g_2$ replaced by $\frac1p$, $(1-p)^{-1}$, $(g_1g_2)^{p}$ and $g_2^{-p}$ respectively.

    Finally, we prove the last inequality. Since $p<0$, we have $q=\frac{p}{p-1}\in(0,1)$; clearly, this case is the reflection of case (iii). Utilizing (2.14) while taking $(1-q)^{-1}$, $\frac1q$, $g_1^{-q}$, $(g_1g_2)^{q}$ in place of $p$, $q$, $g_1$, $g_2$ respectively, we acquire (2.16). □

    Now, we give the definitions of the power and quasi–arithmetic means.

    Definition 2.3. Suppose that $\omega$ and $g$ are positive integrable functions on $[a,b]$. Then the power mean of order $p\in\mathbb{R}$ is defined as:

    $$M_p(\omega,g)=\begin{cases}\left(\dfrac{\int_a^b\omega(\upsilon)g^{p}(\upsilon)\,d\upsilon}{\int_a^b\omega(\upsilon)\,d\upsilon}\right)^{\frac1p},&\text{if }p\ne0,\\[3mm]\exp\left(\dfrac{\int_a^b\omega(\upsilon)\log g(\upsilon)\,d\upsilon}{\int_a^b\omega(\upsilon)\,d\upsilon}\right),&\text{if }p=0.\end{cases}\qquad(2.17)$$

    Definition 2.4. Suppose that the functions $g,\omega$ are positive and integrable on $[a,b]$. Also, suppose that the function $h$ is continuous and strictly monotonic on $(0,\infty)$. Then the quasi–arithmetic mean is defined by:

    $$M_h(\omega,g)=h^{-1}\left(\frac{\int_a^b\omega(\upsilon)\,h(g(\upsilon))\,d\upsilon}{\int_a^b\omega(\upsilon)\,d\upsilon}\right).\qquad(2.18)$$

    In the following corollary, we obtain inequalities for the power mean with the help of Corollary 1.

    Corollary 4. Suppose that $g,\omega$ are positive integrable functions on $[a,b]$ with $\bar\omega:=\int_a^b\omega(\upsilon)\,d\upsilon$. If $\lambda\in[0,1]$ and $s,t\in\mathbb{R}$ are such that $s\le t$, then

    $$M_s(\omega,g)\le\left[\frac{1}{\bar\omega}\int_a^b\omega(\upsilon)\big((1-\lambda)M_s^{s}(\omega,g)+\lambda g^{s}(\upsilon)\big)^{\frac{t}{s}}\,d\upsilon\right]^{\frac{1}{t}}\le\big((1-\lambda)M_s^{t}(\omega,g)+\lambda M_t^{t}(\omega,g)\big)^{\frac{1}{t}}\le M_t(\omega,g),\qquad s,t\ne0,\qquad(2.19)$$
    $$M_s(\omega,g)\le\exp\left(\frac{1}{s\,\bar\omega}\int_a^b\omega(\upsilon)\log\big((1-\lambda)M_s^{s}(\omega,g)+\lambda g^{s}(\upsilon)\big)\,d\upsilon\right)\le M_s^{1-\lambda}(\omega,g)\,M_t^{\lambda}(\omega,g)\le M_t(\omega,g),\qquad t=0,\qquad(2.20)$$
    $$M_t(\omega,g)\ge\left[\frac{1}{\bar\omega}\int_a^b\omega(\upsilon)\big((1-\lambda)M_t^{t}(\omega,g)+\lambda g^{t}(\upsilon)\big)^{\frac{s}{t}}\,d\upsilon\right]^{\frac{1}{s}}\ge\big((1-\lambda)M_t^{s}(\omega,g)+\lambda M_s^{s}(\omega,g)\big)^{\frac{1}{s}}\ge M_s(\omega,g),\qquad s,t\ne0,\qquad(2.21)$$
    $$M_t(\omega,g)\ge\exp\left(\frac{1}{t\,\bar\omega}\int_a^b\omega(\upsilon)\log\big((1-\lambda)M_t^{t}(\omega,g)+\lambda g^{t}(\upsilon)\big)\,d\upsilon\right)\ge M_t^{1-\lambda}(\omega,g)\,M_s^{\lambda}(\omega,g)\ge M_s(\omega,g),\qquad s=0.\qquad(2.22)$$

    Proof. For $s,t\ne0$, the function $\Phi(\upsilon)=\upsilon^{\frac{t}{s}}$ is convex for $\upsilon>0$ whenever $\frac{t}{s}\ge1$ or $\frac{t}{s}<0$. Therefore, utilizing (2.7) for $\Phi(\upsilon)=\upsilon^{\frac{t}{s}}$, $\psi=g^{s}$, and then taking the power $\frac1t$, we get (2.19). For the case $t=0$, taking the limit of (2.19) as $t\to0$, we obtain (2.20).

    Similarly, using (2.7) with the concave function $\Phi(\upsilon)=\upsilon^{\frac{s}{t}}$, $\psi=g^{t}$, and then taking the power $\frac1s$, we get (2.21). When $s=0$, taking the limit of (2.21) as $s\to0$, we acquire (2.22). □
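The chain (2.19) can be illustrated numerically for s = 1 and t = 2; the weight and function below are illustrative choices.

```python
import numpy as np

# Numerical illustration of the power-mean chain (2.19) with s = 1, t = 2.
a, b, n = 0.0, 1.0, 200_000
u = a + (np.arange(n) + 0.5) * (b - a) / n
integ = lambda f: f.sum() * (b - a) / n

omega = np.ones_like(u)
g = 1.0 + u
s, t, lam = 1.0, 2.0, 0.5
wbar = integ(omega)

def power_mean(r):
    # power mean of order r != 0, as in (2.17)
    return (integ(omega * g ** r) / wbar) ** (1 / r)

Ms, Mt = power_mean(s), power_mean(t)
mid1 = (integ(omega * ((1 - lam) * Ms ** s + lam * g ** s) ** (t / s)) / wbar) ** (1 / t)
mid2 = ((1 - lam) * Ms ** t + lam * Mt ** t) ** (1 / t)
print(Ms <= mid1 <= mid2 <= Mt)                  # True: the chain (2.19)
```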

    With the help of Corollary 1, we obtain inequalities for quasi–arithmetic means, which are stated in the following corollary.

    Corollary 5. Suppose that $\omega,\psi:[a,b]\to I$ are integrable functions such that $\omega>0$, with $\bar\omega:=\int_a^b\omega(\upsilon)\,d\upsilon$, and let $h$ be a strictly monotonic and continuous function on $I$. If $\Phi\circ h^{-1}$ is a convex function and $\lambda\in[0,1]$, then

    $$\begin{aligned}\Phi(M_h(\omega,\psi))&\le\frac{1}{\bar\omega}\int_a^b\omega(\upsilon)\,\Phi\left(h^{-1}\Big((1-\lambda)\frac{\int_a^b\omega(\upsilon)\,h(\psi(\upsilon))\,d\upsilon}{\bar\omega}+\lambda\,h(\psi(\upsilon))\Big)\right)d\upsilon\\&\le(1-\lambda)\,\Phi(M_h(\omega,\psi))+\frac{\lambda}{\bar\omega}\int_a^b\omega(\upsilon)\,\Phi(\psi(\upsilon))\,d\upsilon\\&\le\frac{1}{\bar\omega}\int_a^b\omega(\upsilon)\,\Phi(\psi(\upsilon))\,d\upsilon.\end{aligned}\qquad(2.23)$$

    Proof. Substituting $\psi\to h\circ\psi$ and $\Phi\to\Phi\circ h^{-1}$ in (2.7), we obtain (2.23). □

    In the following corollary, we obtain a refinement of the Hermite–Hadamard inequality with the support of Corollary 1.

    Corollary 6. Suppose that the function $\Phi:[a,b]\to\mathbb{R}$ is convex and $\lambda\in[0,1]$. Then

    $$\Phi\left(\frac{a+b}{2}\right)\le\frac{1}{b-a}\int_a^b\Phi\left((1-\lambda)\frac{a+b}{2}+\lambda\upsilon\right)d\upsilon\le(1-\lambda)\,\Phi\left(\frac{a+b}{2}\right)+\frac{\lambda}{b-a}\int_a^b\Phi(\upsilon)\,d\upsilon\le\frac{1}{b-a}\int_a^b\Phi(\upsilon)\,d\upsilon.\qquad(2.24)$$

    Proof. Using Corollary 1 with $\omega(\upsilon)=1$ and $\psi(\upsilon)=\upsilon$ for all $\upsilon\in[a,b]$, we obtain (2.24). □
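A numerical check of the refined Hermite–Hadamard chain (2.24), with the illustrative choices Φ = exp on [0, 2] and λ = 1/2.

```python
import numpy as np

# Numerical check of the refined Hermite-Hadamard chain (2.24).
a, b, n = 0.0, 2.0, 200_000
u = a + (np.arange(n) + 0.5) * (b - a) / n
integ = lambda f: f.sum() * (b - a) / n

Phi = np.exp                                     # convex function
lam = 0.5
mid = (a + b) / 2

t1 = Phi(mid)
t2 = integ(Phi((1 - lam) * mid + lam * u)) / (b - a)
t3 = (1 - lam) * Phi(mid) + lam * integ(Phi(u)) / (b - a)
t4 = integ(Phi(u)) / (b - a)
print(t1 <= t2 <= t3 <= t4)                      # True: each step of (2.24)
```

The second member interpolates between the classical lower (midpoint) and upper (mean-value) Hermite–Hadamard bounds as λ moves from 0 to 1.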

    In this section, we present some important applications of our main results in information theory. These applications consist of refinements of different divergences, distances, the Bhattacharyya coefficient, triangular discrimination and the Shannon entropy.

    Before proceeding, we first give the definitions of the Shannon entropy and of various divergences.

    Definition 3.1. Let $\Phi:(0,\infty)\to\mathbb{R}$ be a convex function and $\psi,\varphi:[a,b]\to(0,\infty)$ be two positive integrable functions. Then the Csiszár divergence is defined as:

    $$C_d(\psi,\varphi)=\int_a^b\varphi(\upsilon)\,\Phi\left(\frac{\psi(\upsilon)}{\varphi(\upsilon)}\right)d\upsilon.$$

    Definition 3.2. Let $\psi:[a,b]\to(0,\infty)$ be a probability density function. Then the Shannon entropy is defined by:

    $$S_E(\psi)=-\int_a^b\psi(\upsilon)\log\psi(\upsilon)\,d\upsilon.$$

    Definition 3.3. Let $\psi,\varphi:[a,b]\to(0,\infty)$ be two probability densities. Then the Kullback–Leibler divergence is given by:

    $$KL_d(\psi,\varphi)=\int_a^b\psi(\upsilon)\log\left(\frac{\psi(\upsilon)}{\varphi(\upsilon)}\right)d\upsilon.$$

    Theorem 3.4. Suppose that the function $\Psi:(0,\infty)\to\mathbb{R}$ is convex and $\phi_1,\phi_2:[a,b]\to(0,\infty)$ are two integrable functions. If $\lambda\in[0,1]$, then

    $$\begin{aligned}\Psi\left(\frac{\int_a^b\phi_1(\upsilon)\,d\upsilon}{\int_a^b\phi_2(\upsilon)\,d\upsilon}\right)\int_a^b\phi_2(\upsilon)\,d\upsilon&\le\int_a^b\phi_2(\upsilon)\,\Psi\left((1-\lambda)\frac{\int_a^b\phi_1(\upsilon)\,d\upsilon}{\int_a^b\phi_2(\upsilon)\,d\upsilon}+\lambda\frac{\phi_1(\upsilon)}{\phi_2(\upsilon)}\right)d\upsilon\\&\le(1-\lambda)\,\Psi\left(\frac{\int_a^b\phi_1(\upsilon)\,d\upsilon}{\int_a^b\phi_2(\upsilon)\,d\upsilon}\right)\int_a^b\phi_2(\upsilon)\,d\upsilon+\lambda\,C_d(\phi_1,\phi_2)\\&\le C_d(\phi_1,\phi_2).\end{aligned}\qquad(3.1)$$

    Proof. Using Corollary 1 with $\omega=\phi_2$, $\psi=\frac{\phi_1}{\phi_2}$ and $\Phi=\Psi$, we get (3.1). □

    Corollary 7. Suppose that $\phi_1,\phi_2:[a,b]\to(0,\infty)$ are integrable functions on $[a,b]$. If $\phi_2$ is a probability density function and $\lambda\in[0,1]$, then

    $$\begin{aligned}\log\left(\int_a^b\phi_1(\upsilon)\,d\upsilon\right)&\ge\int_a^b\phi_2(\upsilon)\log\left((1-\lambda)\int_a^b\phi_1(\upsilon)\,d\upsilon+\lambda\frac{\phi_1(\upsilon)}{\phi_2(\upsilon)}\right)d\upsilon\\&\ge(1-\lambda)\log\left(\int_a^b\phi_1(\upsilon)\,d\upsilon\right)+\lambda\int_a^b\phi_2(\upsilon)\log\phi_1(\upsilon)\,d\upsilon+\lambda\,S_E(\phi_2)\\&\ge\int_a^b\phi_2(\upsilon)\log\phi_1(\upsilon)\,d\upsilon+S_E(\phi_2).\end{aligned}\qquad(3.2)$$

    Proof. Choosing $\Psi(\upsilon)=-\log\upsilon$ in (3.1) and multiplying the resulting chain by $-1$, we obtain (3.2). □

    Corollary 8. Suppose that $\phi_1,\phi_2:[a,b]\to(0,\infty)$ are probability density functions whose integrals exist on $[a,b]$. If $\lambda\in[0,1]$, then

    $$0\le\int_a^b\phi_2(\upsilon)\left(1+\lambda\Big(\frac{\phi_1(\upsilon)}{\phi_2(\upsilon)}-1\Big)\right)\log\left(1+\lambda\Big(\frac{\phi_1(\upsilon)}{\phi_2(\upsilon)}-1\Big)\right)d\upsilon\le\lambda\,KL_d(\phi_1,\phi_2)\le KL_d(\phi_1,\phi_2).\qquad(3.3)$$

    Proof. Taking Ψ(υ)=υlogυ, υ>0 in (3.1), we acquire (3.3).
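The bound (3.3) can be checked numerically; the densities below are illustrative: φ1(υ) = 2υ and the uniform φ2 on [0, 1].

```python
import numpy as np

# Numerical check of (3.3): the middle integral lies between 0 and
# lambda times the Kullback-Leibler divergence.
a, b, n = 0.0, 1.0, 200_000
u = a + (np.arange(n) + 0.5) * (b - a) / n
integ = lambda f: f.sum() * (b - a) / n

phi1 = 2.0 * u                                   # density on [0, 1]
phi2 = np.ones_like(u)                           # uniform density
lam = 0.5

kl = integ(phi1 * np.log(phi1 / phi2))           # KL divergence, = log 2 - 1/2
r = 1.0 + lam * (phi1 / phi2 - 1.0)              # perturbed ratio in (3.3)
middle = integ(phi2 * r * np.log(r))
print(0 <= middle <= lam * kl <= kl)             # True
```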

    Now, we give the definitions of different distances.

    Definition 3.5. Let $\phi_1,\phi_2:[a,b]\to\mathbb{R}$ be positive probability densities on $[a,b]$. Then the variational distance is defined by:

    $$V_d(\phi_1,\phi_2)=\int_a^b|\phi_1(\upsilon)-\phi_2(\upsilon)|\,d\upsilon.$$

    Definition 3.6. Let $\phi_1,\phi_2:[a,b]\to\mathbb{R}$ be two positive probability densities. Then the Jeffreys distance is defined by:

    $$J_d(\phi_1,\phi_2)=\int_a^b(\phi_1(\upsilon)-\phi_2(\upsilon))\log\left(\frac{\phi_1(\upsilon)}{\phi_2(\upsilon)}\right)d\upsilon.$$

    Definition 3.7. Let $\phi_1,\phi_2:[a,b]\to(0,\infty)$ be two probability densities. Then the Hellinger distance is defined by:

    $$H_d(\phi_1,\phi_2)=\int_a^b\left(\sqrt{\phi_1(\upsilon)}-\sqrt{\phi_2(\upsilon)}\right)^2 d\upsilon.$$

    Corollary 9. Suppose that $\phi_1,\phi_2:[a,b]\to(0,\infty)$ are probability densities whose integrals exist on $[a,b]$. If $\lambda\in[0,1]$, then

    $$0\le\lambda\int_a^b\phi_2(\upsilon)\left|\frac{\phi_1(\upsilon)}{\phi_2(\upsilon)}-1\right|d\upsilon=\lambda\,V_d(\phi_1,\phi_2)\le V_d(\phi_1,\phi_2).\qquad(3.4)$$

    Proof. Utilizing (3.1) with $\Psi(\upsilon)=|\upsilon-1|$, $\upsilon\in(0,\infty)$, we obtain (3.4). □

    Corollary 10. Suppose that all the hypotheses of Corollary 9 hold. Then

    $$0\le\lambda\int_a^b(\phi_1(\upsilon)-\phi_2(\upsilon))\log\left(1+\lambda\Big(\frac{\phi_1(\upsilon)}{\phi_2(\upsilon)}-1\Big)\right)d\upsilon\le\lambda\,J_d(\phi_1,\phi_2)\le J_d(\phi_1,\phi_2).\qquad(3.5)$$

    Proof. Using (3.1) for $\Psi(\upsilon)=(\upsilon-1)\log\upsilon$, $\upsilon\in(0,\infty)$, we get (3.5). □

    Corollary 11. Suppose that all the assumptions of Corollary 9 hold. Then

    $$0\le\int_a^b\phi_2(\upsilon)\left(\sqrt{1+\lambda\Big(\frac{\phi_1(\upsilon)}{\phi_2(\upsilon)}-1\Big)}-1\right)^2 d\upsilon\le\lambda\,H_d(\phi_1,\phi_2)\le H_d(\phi_1,\phi_2).\qquad(3.6)$$

    Proof. Applying Theorem 3.4 with $\Psi(\upsilon)=(\sqrt{\upsilon}-1)^2$, $\upsilon>0$, we obtain (3.6). □

    Now, we define the Bhattacharyya coefficient and the triangular discrimination.

    Definition 3.8. Let $\phi_1,\phi_2:[a,b]\to(0,\infty)$ be any probability densities. Then the Bhattacharyya coefficient is defined by:

    $$B_d(\phi_1,\phi_2)=\int_a^b\sqrt{\phi_1(\upsilon)\phi_2(\upsilon)}\,d\upsilon.$$

    Definition 3.9. Let $\phi_1,\phi_2$ be any positive probability density functions on $[a,b]$. Then the triangular discrimination is defined by:

    $$T_d(\phi_1,\phi_2)=\int_a^b\frac{(\phi_1(\upsilon)-\phi_2(\upsilon))^2}{\phi_1(\upsilon)+\phi_2(\upsilon)}\,d\upsilon.$$

    Corollary 12. Suppose that all the conditions of Corollary 9 are valid. Then

    $$1\ge\int_a^b\phi_2(\upsilon)\sqrt{1+\lambda\Big(\frac{\phi_1(\upsilon)}{\phi_2(\upsilon)}-1\Big)}\,d\upsilon\ge1+\lambda\,(B_d(\phi_1,\phi_2)-1)\ge B_d(\phi_1,\phi_2).\qquad(3.7)$$

    Proof. Using $\Psi(\upsilon)=-\sqrt{\upsilon}$, $\upsilon>0$, in (3.1), we get (3.7). □
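The chain (3.7) for the Bhattacharyya coefficient can also be verified numerically, using the same illustrative densities as before: φ1(υ) = 2υ and the uniform φ2 on [0, 1].

```python
import numpy as np

# Numerical check of the chain (3.7) for the Bhattacharyya coefficient.
a, b, n = 0.0, 1.0, 200_000
u = a + (np.arange(n) + 0.5) * (b - a) / n
integ = lambda f: f.sum() * (b - a) / n

phi1 = 2.0 * u
phi2 = np.ones_like(u)
lam = 0.5

Bd = integ(np.sqrt(phi1 * phi2))                 # Bhattacharyya coefficient
middle = integ(phi2 * np.sqrt(1.0 + lam * (phi1 / phi2 - 1.0)))
print(1.0 >= middle >= 1.0 + lam * (Bd - 1.0) >= Bd)   # True
```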

    Corollary 13. Suppose that all the assumptions of Corollary 9 hold. Then

    $$0\le\lambda^2\int_a^b\phi_2(\upsilon)\frac{\Big(\frac{\phi_1(\upsilon)}{\phi_2(\upsilon)}-1\Big)^2}{2+\lambda\Big(\frac{\phi_1(\upsilon)}{\phi_2(\upsilon)}-1\Big)}\,d\upsilon\le\lambda\,T_d(\phi_1,\phi_2)\le T_d(\phi_1,\phi_2).\qquad(3.8)$$

    Proof. Applying Theorem 3.4 with $\Psi(\upsilon)=\frac{(\upsilon-1)^2}{\upsilon+1}$, $\upsilon>0$, we obtain (3.8). □

    The Jensen inequality has seen exponential growth in the last few decades due to its remarkable properties. Several important inequalities, such as the Hölder, Hermite–Hadamard and Ky Fan inequalities, can easily be deduced from it. This article is devoted to refinements of Jensen's inequality and their applications. We acquired the refinements of this inequality with the help of majorization results and convex functions: utilizing certain functions satisfying majorization conditions, we obtained new refinements of Jensen's inequality. Furthermore, we utilized the obtained refinements to give inequalities for the power as well as quasi–arithmetic means. Moreover, we also acquired refinements of the Hölder and Hermite–Hadamard inequalities. In addition, we presented some applications of the obtained refinements in information theory. These applications include bounds for different divergences, the Shannon entropy, the Bhattacharyya coefficient, various distances and the triangular discrimination. The results given in the present article add to the literature on mathematical inequalities, and especially on Jensen's inequality.

    The authors declare that they have no competing interests.



    [1] M. Adil Khan, S. H. Wu, H. Ullah, Y. M. Chu, Discrete majorization type inequalities for convex functions on rectangles, J. Inequal. Appl., 2019 (2019), 1–18. https://doi.org/10.1186/s13660-019-1964-3 doi: 10.1186/s13660-019-1964-3
    [2] M. Adil Khan, Z. M. Al-sahwi, Y. M. Chu, New estimations for Shannon and Zipf-Mandelbrot entropies, Entropy, 20 (2018), 1–10. https://doi.org/10.3390/e20080608 doi: 10.3390/e20080608
    [3] M. Adil Khan, D. Pečarić, J. Pečarić, On Zipf-Mandelbrot entropy, J. Comput. Appl. Math., 346 (2019), 192–204. https://doi.org/10.1016/j.cam.2018.07.002 doi: 10.1016/j.cam.2018.07.002
    [4] M. Adil Khan, D. Pečarić, J. Pečarić, Bounds for Shannon and Zipf-mandelbrot entropies, Math. Methods Appl. Sci., 40 (2017), 7316–7322. https://doi.org/10.1002/mma.4531 doi: 10.1002/mma.4531
    [5] I. Ansari, K. A. Khan, A. Nosheen, D. Pečarić, J. Pečarić, Some inequalities for Csiszar divergence via theory of time scales, Adv. Differ. Equ., 2020 (2020), 1–21. https://doi.org/10.1186/s13662-020-03159-x doi: 10.1186/s13662-020-03159-x
    [6] Y. Deng, H. Ullah, M. Adil Khan, S. Iqbal, S. Wu, Refinements of Jensen's inequality via majorization results with applications in the information theory, J. Math., 2021 (2021), 1–12. https://doi.org/10.1155/2021/1951799 doi: 10.1155/2021/1951799
    [7] N. S. Barnett, P. Cerone, S. S. Dragomir, Majorisation inequalities for Stieltjes integrals, Appl. Math. Lett., 22 (2009), 416–421. https://doi.org/10.1016/j.aml.2008.06.009 doi: 10.1016/j.aml.2008.06.009
    [8] J. Borwein, A. S. Lewis, Convex Analysis and Nonlinear Optimization: Theory and Examples, Springer, New York, 2000. https://doi.org/10.1007/978-1-4757-9859-3
    [9] M. J. Cloud, B. C. Drachman, L. P. Lebedev, Inequalities with Applications to Engineering, Springer, Cham, 2014.
    [10] S. S. Dragomir, Some majorization type discrete inequalities for convex functions, Math. Inequal. Appl., 7 (2004), 207–216. https://doi.org/10.7153/mia-07-23 doi: 10.7153/mia-07-23
    [11] S. Furuichi, H. R. Moradi, A. Zardadi, Some new Karamata type inequalities and their applications to some entropies, Rep. Math. Phys., 84 (2019), 201–214. https://doi.org/10.1016/S0034-4877(19)30083-7 doi: 10.1016/S0034-4877(19)30083-7
    [12] L. Horváth, D. Pečarić, J. Pečarić, Estimations of f– and Rényi divergences by using a cyclic refinement of the Jensen's inequality, Bull. Malays. Math. Sci. Soc., 42 (2019), 933–946. https://doi.org/10.1007/s40840-017-0526-4 doi: 10.1007/s40840-017-0526-4
    [13] N. Latif, D. Pečarić, J. Pečarić, Majorization, useful Csiszar divergence and useful Zipf-Mandelbrot law, Open Math., 16 (2018), 1357–1373. https://doi.org/10.1515/math-2018-0113 doi: 10.1515/math-2018-0113
    [14] L. Maligranda, J. Pečarić, L. E. Persson, Weighted Favard and Berwald inequalities, J. Math. Anal. Appl., 190 (1995), 248–262. https://doi.org/10.1006/jmaa.1995.1075 doi: 10.1006/jmaa.1995.1075
    [15] A. W. Marshall, I. Olkin, B. Arnold, Inequalities: Theory of majorization and its applications, 2nd ed., Springer Series in Statistics, Springer, New York, 2011. https://doi.org/10.1007/978-0-387-68276-1
    [16] C. P. Niculescu, L. E. Persson, Convex Functions and Their Applications. A Contemporary Approach, 2nd ed., CMS Books in Mathematics vol. 23, Springer-Verlag, New York, 2018. https://doi.org/10.1007/978-3-319-78337-6
    [17] J. Pečarić, J. Perić, New improvement of the converse Jensen inequality, Math. Inequal. Appl., 21 (2018), 217–234. https://doi.org/10.7153/mia-2018-21-17 doi: 10.7153/mia-2018-21-17
    [18] J. Pečarić, F. Proschan, Y. L. Tong, Convex Functions, Partial Orderings and Statistical Applications, Academic Press, 1992.
    [19] M. Sababheh, H. R. Moradi, S. Furuichi, Integrals refining convex inequalities, Bull. Malays. Math. Sci. Soc., 2019 (2019), 1–17.
    [20] N. Siddique, M. Imran, K. A. Khan, J. Pečarić, Majorization inequalities via Green functions and Fink's identity with applications to Shannon entropy, J. Inequal. Appl., 2020 (2020), 1–14. https://doi.org/10.1186/s13660-020-02455-0 doi: 10.1186/s13660-020-02455-0
    [21] H. Ullah, M. Adil Khan, J. Pečarić, New bounds for soft margin estimator via concavity of Gaussian weighting function, Adv. Differ. Equ., 2020 (2020), 1–10. https://doi.org/10.1186/s13662-020-03103-z doi: 10.1186/s13662-020-03103-z
    [22] S. Z. Ullah, M. Adil Khan, Y. M. Chu, Majorization theorem for strongly convex function, J. Inequal. Appl., 2019 (2019), 1–13. https://doi.org/10.1186/s13660-019-1964-3 doi: 10.1186/s13660-019-1964-3
    [23] S. Wu, M. Adil Khan, H. U. Haleemzai, Refinements of majorization inequality involving convex functions via Taylor's theorem with mean value form of the remainder, Mathematics, 7 (2019), 1–7. https://doi.org/10.3390/math7080663 doi: 10.3390/math7080663
    [24] S. Wu, M. Adil Khan, A. Basir, R. Saadati, Some majorization integral inequalities for functions defined on rectangles, J. Inequal. Appl., 2018 (2018), 1–13. https://doi.org/10.1186/s13660-018-1739-2 doi: 10.1186/s13660-018-1739-2
  • © 2022 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
