Research article Special Issues

Neural architecture search via standard machine learning methodologies

  • Received: 24 March 2021 Revised: 19 October 2021 Accepted: 13 January 2022 Published: 11 February 2022
  • In the context of deep learning, the most expensive computational phase is the full training of the learning methodology. Indeed, its effectiveness depends on the choice of proper values for the so-called hyperparameters, namely the parameters that are not trained during the learning process, and such a selection typically requires an extensive numerical investigation involving a significant number of experimental trials. The aim of this paper is to investigate how to choose the hyperparameters related both to the architecture of a Convolutional Neural Network (CNN), such as the number of filters and the kernel size at each convolutional layer, and to the optimisation algorithm employed to train the CNN itself, such as the steplength, the mini-batch size and the potential adoption of variance reduction techniques. The main contribution of the paper is an automatic Machine Learning technique that sets these hyperparameters so that a measure of the CNN performance can be optimised. In particular, given a set of values for the hyperparameters, we propose a low-cost strategy to predict the performance of the corresponding CNN, based on its behaviour after only a few steps of the training process. To achieve this goal, we generate a dataset whose input samples are provided by a limited number of hyperparameter configurations, together with the corresponding CNN measures of performance obtained with only a few steps of the CNN training process, while the label of each input sample is the performance corresponding to a complete training of the CNN. Such a dataset is then used as the training set for Support Vector Regression and/or Random Forest techniques that predict the performance of the considered learning methodology, given its performance at the initial iterations of its learning process. Furthermore, by a probabilistic exploration of the hyperparameter space, we are able to find, at a quite low cost, the setting of the CNN hyperparameters which provides optimal performance. The results of an extensive numerical experimentation carried out on CNNs, together with the use of our performance predictor on NAS-Bench-101, highlight that the proposed methodology for the hyperparameter setting is very promising.

    Citation: Giorgia Franchini, Valeria Ruggiero, Federica Porta, Luca Zanni. Neural architecture search via standard machine learning methodologies[J]. Mathematics in Engineering, 2023, 5(1): 1-21. doi: 10.3934/mine.2023012

    Related Papers:

    [1] Shuang-Shuang Zhou, Saima Rashid, Muhammad Aslam Noor, Khalida Inayat Noor, Farhat Safdar, Yu-Ming Chu . New Hermite-Hadamard type inequalities for exponentially convex functions and applications. AIMS Mathematics, 2020, 5(6): 6874-6901. doi: 10.3934/math.2020441
    [2] Xinghua You, Ghulam Farid, Lakshmi Narayan Mishra, Kahkashan Mahreen, Saleem Ullah . Derivation of bounds of integral operators via convex functions. AIMS Mathematics, 2020, 5(5): 4781-4792. doi: 10.3934/math.2020306
    [3] Miguel Vivas-Cortez, Muhammad Aamir Ali, Artion Kashuri, Hüseyin Budak . Generalizations of fractional Hermite-Hadamard-Mercer like inequalities for convex functions. AIMS Mathematics, 2021, 6(9): 9397-9421. doi: 10.3934/math.2021546
    [4] Hengxiao Qi, Muhammad Yussouf, Sajid Mehmood, Yu-Ming Chu, Ghulam Farid . Fractional integral versions of Hermite-Hadamard type inequality for generalized exponentially convexity. AIMS Mathematics, 2020, 5(6): 6030-6042. doi: 10.3934/math.2020386
    [5] Muhammad Imran Asjad, Waqas Ali Faridi, Mohammed M. Al-Shomrani, Abdullahi Yusuf . The generalization of Hermite-Hadamard type Inequality with exp-convexity involving non-singular fractional operator. AIMS Mathematics, 2022, 7(4): 7040-7055. doi: 10.3934/math.2022392
    [6] Yousaf Khurshid, Muhammad Adil Khan, Yu-Ming Chu . Conformable integral version of Hermite-Hadamard-Fejér inequalities via η-convex functions. AIMS Mathematics, 2020, 5(5): 5106-5120. doi: 10.3934/math.2020328
    [7] Yue Wang, Ghulam Farid, Babar Khan Bangash, Weiwei Wang . Generalized inequalities for integral operators via several kinds of convex functions. AIMS Mathematics, 2020, 5(5): 4624-4643. doi: 10.3934/math.2020297
    [8] Wenfeng He, Ghulam Farid, Kahkashan Mahreen, Moquddsa Zahra, Nana Chen . On an integral and consequent fractional integral operators via generalized convexity. AIMS Mathematics, 2020, 5(6): 7632-7648. doi: 10.3934/math.2020488
    [9] Mehmet Eyüp Kiriş, Miguel Vivas-Cortez, Gözde Bayrak, Tuğba Çınar, Hüseyin Budak . On Hermite-Hadamard type inequalities for co-ordinated convex function via conformable fractional integrals. AIMS Mathematics, 2024, 9(4): 10267-10288. doi: 10.3934/math.2024502
    [10] Maryam Saddiqa, Ghulam Farid, Saleem Ullah, Chahn Yong Jung, Soo Hak Shim . On Bounds of fractional integral operators containing Mittag-Leffler functions for generalized exponentially convex functions. AIMS Mathematics, 2021, 6(6): 6454-6468. doi: 10.3934/math.2021379



    Over different time ranges, fractional calculus has had a great impact thanks to a diversity of applications contributing to several fields of the technical sciences and engineering [1,2,3,4,5,6,7,8,9,10,11,12]. One of the principal reasons behind the popularity of the area is that fractional-order differentiation and integration are more effective tools for expressing real-world problems than integer-order ones. Various studies in the literature on distinct fractional operators, such as the classical Riemann-Liouville, Caputo, Katugampola, Hadamard, and Marchaud versions, have shown versatility in modeling and control applications across various disciplines. However, such forms of fractional derivatives may not be able to describe dynamic behaviour accurately; hence, many authors have set out to find new fractional differentiations and integrations whose kernel depends on a function, which expands their range of applicability [13,14]. Furthermore, models based on these fractional operators provide excellent results when compared with models based on integer-order differentiation [15,16,17,18,19,20,21,22,23,24,25,26,27].

    The derivatives in this calculus seemed complicated and lost some of the basic properties that usual derivatives have, such as the product rule and the chain rule. However, the semigroup properties of these operators behave well in some cases. Recently, the authors in [28] defined a new, well-behaved, simple derivative called the "conformable fractional derivative", which depends only on the basic limit definition of the derivative. They define the derivative of higher order (i.e., of order $\delta>1$) and the integral of order $0<\delta\le 1$ only. They also prove the product rule and the mean value theorem, and solve some (conformable) differential equations in which the fractional exponential function $e^{\vartheta^{\delta}/\delta}$ plays an important role. Inequalities and their utilities assume a crucial role in the literature of pure and applied mathematics [29,30,31,32,33,34,35,36,37]. An assortment of distinct kinds of classical variants and their modifications has been built up by using the classical fractional operators.

    Convexity and its applications appear in almost every field of mathematics owing to their importance in several areas of science and technology, in particular in nonlinear programming and optimization theory. By utilizing the idea of convexity, numerous variants have been derived by researchers, for example, the Hardy, Opial, Ostrowski and Jensen inequalities, the most distinguished one being the Hermite-Hadamard inequality [38,39,40,41].

    Let $I\subseteq\mathbb{R}$ be an interval and $Q:I\to\mathbb{R}$ be a convex function. Then the double inequality

    $$(l_2-l_1)\,Q\!\left(\frac{l_1+l_2}{2}\right)\le\int_{l_1}^{l_2}Q(z)\,dz\le(l_2-l_1)\,\frac{Q(l_1)+Q(l_2)}{2} \tag{1.1}$$

    holds for all $l_1,l_2\in I$ with $l_1\le l_2$. Clearly, if $Q$ is concave on $I$, then inequality (1.1) is reversed. By taking fractional integral operators into account, several lower and upper bounds for the mean value of a convex function can be obtained by utilizing inequality (1.1).
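    As a quick sanity check of (1.1), the following Python snippet evaluates the three terms numerically; it is a minimal sketch, and the choice $Q(z)=e^{z}$ on $[0,2]$ is our own illustrative assumption.

```python
# Numerical sanity check of the Hermite-Hadamard inequality (1.1);
# Q(z) = exp(z) on [0, 2] is an illustrative convex function.
import numpy as np
from scipy.integrate import quad

Q = np.exp
l1, l2 = 0.0, 2.0

left = (l2 - l1) * Q((l1 + l2) / 2)        # (l2 - l1) * Q((l1 + l2)/2)
middle, _ = quad(Q, l1, l2)                # integral of Q over [l1, l2]
right = (l2 - l1) * (Q(l1) + Q(l2)) / 2    # (l2 - l1) * (Q(l1) + Q(l2))/2

assert left <= middle <= right
print(left, middle, right)                 # 5.4366... <= 6.3890... <= 8.3890...
```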

    Exponentially convex functions have emerged as a significant new class of convex functions, with potential applications in technology, data science, and statistics. In [42], Bernstein introduced the concept of an exponentially convex function in covariance formation; the idea was then extended by inserting the condition of $r$-convexity [43]. Following this tendency, Jakšetić and Pečarić introduced various kinds of exponentially convex functions in [44] and contemplated applications in Euler-Radau expansions and Stolarsky means. Our aim is to utilize the exponential convexity property of functions, as well as of the absolute values of their derivatives, in order to establish estimates for the conformable fractional integrals introduced by Abdeljawad [45] and Jarad et al. [46].

    Following the above propensity, we present a novel technique for establishing new generalizations of Hermite-Hadamard inequalities that involve exponentially tgs-convex functions and conformable fractional operators. The main point is that our consequences, which are more consistent and efficient, are obtained via fractional calculus techniques. In addition, our consequences also provide estimates for Hermite-Hadamard inequalities for exponentially tgs-convex functions. We also investigate applications of the two proposed conformable fractional operators to exponentially tgs-convex functions and fractional calculus. The numerical applications show that our results improve on some related results.

    Before coming to the main results, we provide some significant definitions, theorems and properties of fractional calculus in order to establish a mathematically sound theory that will serve the purpose of the current article.

    Awan et al. [47] proposed a new class of functions called exponentially convex functions.

    Definition 2.1. (See [47]) A positive real-valued function $Q:K\subseteq\mathbb{R}\to(0,\infty)$ is said to be exponentially convex on $K$ if the inequality

    $$Q(\vartheta l_1+(1-\vartheta)l_2)\le\vartheta\,\frac{Q(l_1)}{e^{\alpha l_1}}+(1-\vartheta)\,\frac{Q(l_2)}{e^{\alpha l_2}} \tag{2.1}$$

    holds for all $l_1,l_2\in\mathbb{R}$, $\alpha\in\mathbb{R}$ and $\vartheta\in[0,1]$.

    Now, we introduce a novel concept of convex function which is known as the exponentially tgs-convex function.

    Definition 2.2. A positive real-valued function $Q:K\subseteq\mathbb{R}\to(0,\infty)$ is said to be exponentially tgs-convex on $K$ if the inequality

    $$Q(\vartheta l_1+(1-\vartheta)l_2)\le\vartheta(1-\vartheta)\left[\frac{Q(l_1)}{e^{\alpha l_1}}+\frac{Q(l_2)}{e^{\alpha l_2}}\right] \tag{2.2}$$

    holds for all $l_1,l_2\in\mathbb{R}$, $\alpha\in\mathbb{R}$ and $\vartheta\in[0,1]$.
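    Since exponential tgs-convexity is a pointwise condition, it can be probed numerically. The sketch below is our own illustration (the sample $Q$, $\alpha$ and grid are assumptions, not taken from the paper): it reports the grid points at which inequality (2.2) fails for a given function.

```python
# Grid-based probe of Definition 2.2: collect (theta, x, y) triples where
# inequality (2.2) fails; an empty list is consistent with exponential
# tgs-convexity on the sampled grid (it is not a proof).
import numpy as np

def tgs_violations(Q, alpha, pts, thetas):
    bad = []
    for x in pts:
        for y in pts:
            core = Q(x) / np.exp(alpha * x) + Q(y) / np.exp(alpha * y)
            for t in thetas:
                if Q(t * x + (1 - t) * y) > t * (1 - t) * core:
                    bad.append((t, x, y))
    return bad

# Illustrative probe: Q(z) = z**2 with alpha = 1 on [1, 2]; a nonempty result
# simply means this particular Q is not exponentially tgs-convex there.
viol = tgs_violations(lambda z: z**2, 1.0, np.linspace(1, 2, 5), np.linspace(0.1, 0.9, 9))
print(len(viol), "violations on the grid")
```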

    The conformable fractional integral operator was introduced by Abdeljawad [45].

    Definition 2.3. (See [45]) Let $\rho\in(n,n+1]$ and $\delta=\rho-n$. Then the left- and right-sided conformable fractional integrals of order $\rho>0$ are defined by

    $$J^{\rho}_{l_1^{+}}Q(z)=\frac{1}{n!}\int_{l_1}^{z}(z-\vartheta)^{n}(\vartheta-l_1)^{\rho-n-1}Q(\vartheta)\,d\vartheta \tag{2.3}$$

    and

    $$J^{\rho}_{l_2^{-}}Q(z)=\frac{1}{n!}\int_{z}^{l_2}(\vartheta-z)^{n}(l_2-\vartheta)^{\rho-n-1}Q(\vartheta)\,d\vartheta. \tag{2.4}$$
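    The operators (2.3) and (2.4) can be evaluated by direct quadrature. The following sketch assumes SciPy; the test function and parameters are illustrative, and for $n=0$, $\rho=1$ both operators reduce to the ordinary integral.

```python
# Direct quadrature for the conformable fractional integrals (2.3)-(2.4).
import math
from scipy.integrate import quad

def J_left(Q, l1, z, rho, n):
    # (2.3): 1/n! * int_{l1}^{z} (z - t)^n (t - l1)^(rho - n - 1) Q(t) dt
    val, _ = quad(lambda t: (z - t)**n * (t - l1)**(rho - n - 1) * Q(t), l1, z)
    return val / math.factorial(n)

def J_right(Q, z, l2, rho, n):
    # (2.4): 1/n! * int_{z}^{l2} (t - z)^n (l2 - t)^(rho - n - 1) Q(t) dt
    val, _ = quad(lambda t: (t - z)**n * (l2 - t)**(rho - n - 1) * Q(t), z, l2)
    return val / math.factorial(n)

# With n = 0 and rho = 1, both reduce to the ordinary integral of Q.
print(J_left(lambda t: t**2, 0.0, 1.0, 1.0, 0))   # ~ 1/3
```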

    Next, we recall the following fractional integral operators introduced by Jarad et al. [46].

    Definition 2.4. (See [46]) Let $\delta\in\mathbb{C}$ and $\Re(\delta)>0$. Then the left- and right-sided generalized conformable fractional integral operators of order $\rho>0$ are defined by

    $$J^{\rho,\delta}_{l_1^{+}}Q(z)=\frac{1}{\Gamma(\delta)}\int_{l_1}^{z}\left(\frac{(z-l_1)^{\rho}-(\vartheta-l_1)^{\rho}}{\rho}\right)^{\delta-1}\frac{Q(\vartheta)}{(\vartheta-l_1)^{1-\rho}}\,d\vartheta \tag{2.5}$$

    and

    $$J^{\rho,\delta}_{l_2^{-}}Q(z)=\frac{1}{\Gamma(\delta)}\int_{z}^{l_2}\left(\frac{(l_2-z)^{\rho}-(l_2-\vartheta)^{\rho}}{\rho}\right)^{\delta-1}\frac{Q(\vartheta)}{(l_2-\vartheta)^{1-\rho}}\,d\vartheta. \tag{2.6}$$
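    A numerical rendering of (2.5) is equally short; this is again a sketch under the same SciPy assumption, and for $\rho=1$ the left-sided operator collapses to the Riemann-Liouville fractional integral of order $\delta$.

```python
# Quadrature for the left-sided generalized conformable integral (2.5).
import math
from scipy.integrate import quad

def J_left_gen(Q, l1, z, rho, delta):
    k = lambda t: (((z - l1)**rho - (t - l1)**rho) / rho)**(delta - 1) \
                  * Q(t) / (t - l1)**(1 - rho)
    val, _ = quad(k, l1, z)
    return val / math.gamma(delta)

# rho = 1, delta = 1: the kernel is 1, so we recover the plain integral.
print(J_left_gen(lambda t: t**2, 0.0, 1.0, 1.0, 1.0))  # ~ 1/3
```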

    We also recall the beta and incomplete beta functions:

    $$B(l_1,l_2)=\int_{0}^{1}\vartheta^{l_1-1}(1-\vartheta)^{l_2-1}\,d\vartheta,$$
    $$B_{v}(l_1,l_2)=\int_{0}^{v}\vartheta^{l_1-1}(1-\vartheta)^{l_2-1}\,d\vartheta,\qquad v\in[0,1].$$

    Further, the following relationships hold between the classical beta and incomplete beta functions:

    $$B(l_1,l_2)=B_{v}(l_1,l_2)+B_{1-v}(l_2,l_1),$$
    $$B_{v}(l_1+1,l_2)=\frac{l_1B_{v}(l_1,l_2)-v^{l_1}(1-v)^{l_2}}{l_1+l_2}$$

    and

    $$B_{v}(l_1,l_2+1)=\frac{l_2B_{v}(l_1,l_2)+v^{l_1}(1-v)^{l_2}}{l_1+l_2}.$$

    (At $v=\tfrac{1}{2}$, the boundary term $v^{l_1}(1-v)^{l_2}$ reduces to $(\tfrac{1}{2})^{l_1+l_2}$, the form in which these relations are used below.)
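    These relations are easy to confirm numerically; the check below is a minimal sketch (SciPy assumed, with arbitrary illustrative values of $l_1$, $l_2$, $v$).

```python
# Numerical confirmation of the beta / incomplete-beta relations above.
from scipy.integrate import quad

def B_inc(v, a, b):
    val, _ = quad(lambda t: t**(a - 1) * (1 - t)**(b - 1), 0.0, v)
    return val

a, b, v = 2.3, 1.7, 0.4
assert abs(B_inc(1.0, a, b) - (B_inc(v, a, b) + B_inc(1 - v, b, a))) < 1e-7
assert abs(B_inc(v, a + 1, b) - (a * B_inc(v, a, b) - v**a * (1 - v)**b) / (a + b)) < 1e-7
assert abs(B_inc(v, a, b + 1) - (b * B_inc(v, a, b) + v**a * (1 - v)**b) / (a + b)) < 1e-7
print("incomplete beta relations verified")
```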

    Throughout the article, let $I=[l_1,l_2]$ be an interval of the real line $\mathbb{R}$. In this section, we demonstrate some integral versions of exponentially tgs-convexity via conformable fractional integrals.

    Theorem 3.1. For $\rho\in(n,n+1]$ with $\rho>0$, let $Q:I\subseteq\mathbb{R}\to\mathbb{R}$ be an exponentially tgs-convex function such that $Q\in L_1([l_1,l_2])$. Then the following inequalities hold:

    $$\frac{4\,\Gamma(\rho-n)}{\Gamma(\rho+1)}\,Q\!\left(\frac{l_1+l_2}{2}\right)\le\frac{1}{(l_2-l_1)^{\rho}}\left[J^{\rho}_{l_1^{+}}\frac{Q(l_2)}{e^{\alpha l_2}}+J^{\rho}_{l_2^{-}}\frac{Q(l_1)}{e^{\alpha l_1}}\right]\le\frac{2(n+1)\,\Gamma(\rho-n+1)}{\Gamma(\rho+3)}\left(\frac{Q(l_1)}{e^{\alpha l_1}}+\frac{Q(l_2)}{e^{\alpha l_2}}\right). \tag{3.1}$$

    Proof. By the exponential tgs-convexity of $Q$, we have

    $$Q\!\left(\frac{x+y}{2}\right)\le\frac{1}{4}\left(\frac{Q(x)}{e^{\alpha x}}+\frac{Q(y)}{e^{\alpha y}}\right). \tag{3.2}$$

    Letting $x=\vartheta l_1+(1-\vartheta)l_2$ and $y=(1-\vartheta)l_1+\vartheta l_2$, we get

    $$4\,Q\!\left(\frac{l_1+l_2}{2}\right)\le\frac{Q(\vartheta l_1+(1-\vartheta)l_2)}{e^{\alpha(\vartheta l_1+(1-\vartheta)l_2)}}+\frac{Q(\vartheta l_2+(1-\vartheta)l_1)}{e^{\alpha[(1-\vartheta)l_1+\vartheta l_2]}}. \tag{3.3}$$

    If we multiply (3.3) by $\frac{1}{n!}\vartheta^{n}(1-\vartheta)^{\rho-n-1}$ with $\vartheta\in(0,1)$, $\rho>0$, and then integrate the resulting estimate with respect to $\vartheta$ over $[0,1]$, we find

    $$\frac{4}{n!}\,Q\!\left(\frac{l_1+l_2}{2}\right)\int_{0}^{1}\vartheta^{n}(1-\vartheta)^{\rho-n-1}\,d\vartheta\le\frac{1}{n!}\int_{0}^{1}\vartheta^{n}(1-\vartheta)^{\rho-n-1}\,\frac{Q(\vartheta l_1+(1-\vartheta)l_2)}{e^{\alpha(\vartheta l_1+(1-\vartheta)l_2)}}\,d\vartheta+\frac{1}{n!}\int_{0}^{1}\vartheta^{n}(1-\vartheta)^{\rho-n-1}\,\frac{Q(\vartheta l_2+(1-\vartheta)l_1)}{e^{\alpha[(1-\vartheta)l_1+\vartheta l_2]}}\,d\vartheta=I_1+I_2. \tag{3.4}$$

    Setting $u=\vartheta l_1+(1-\vartheta)l_2$, we have

    $$I_1=\frac{1}{n!}\int_{0}^{1}\vartheta^{n}(1-\vartheta)^{\rho-n-1}\,\frac{Q(\vartheta l_1+(1-\vartheta)l_2)}{e^{\alpha(\vartheta l_1+(1-\vartheta)l_2)}}\,d\vartheta=\frac{1}{n!\,(l_2-l_1)^{\rho}}\int_{l_1}^{l_2}(l_2-u)^{n}(u-l_1)^{\rho-n-1}\,\frac{Q(u)}{e^{\alpha u}}\,du=\frac{1}{(l_2-l_1)^{\rho}}\,J^{\rho}_{l_1^{+}}\frac{Q(l_2)}{e^{\alpha l_2}}. \tag{3.5}$$

    Analogously, setting $v=\vartheta l_2+(1-\vartheta)l_1$, we have

    $$I_2=\frac{1}{n!}\int_{0}^{1}\vartheta^{n}(1-\vartheta)^{\rho-n-1}\,\frac{Q(\vartheta l_2+(1-\vartheta)l_1)}{e^{\alpha[(1-\vartheta)l_1+\vartheta l_2]}}\,d\vartheta=\frac{1}{n!\,(l_2-l_1)^{\rho}}\int_{l_1}^{l_2}(v-l_1)^{n}(l_2-v)^{\rho-n-1}\,\frac{Q(v)}{e^{\alpha v}}\,dv=\frac{1}{(l_2-l_1)^{\rho}}\,J^{\rho}_{l_2^{-}}\frac{Q(l_1)}{e^{\alpha l_1}}. \tag{3.6}$$

    Thus by using (3.5) and (3.6) in (3.4), we get the first inequality of (3.1).

    Consider

    $$Q(\vartheta l_1+(1-\vartheta)l_2)\le\vartheta(1-\vartheta)\left(\frac{Q(l_1)}{e^{\alpha l_1}}+\frac{Q(l_2)}{e^{\alpha l_2}}\right)$$

    and

    $$Q(\vartheta l_2+(1-\vartheta)l_1)\le\vartheta(1-\vartheta)\left(\frac{Q(l_1)}{e^{\alpha l_1}}+\frac{Q(l_2)}{e^{\alpha l_2}}\right).$$

    Adding these two inequalities gives

    $$Q(\vartheta l_1+(1-\vartheta)l_2)+Q(\vartheta l_2+(1-\vartheta)l_1)\le 2\vartheta(1-\vartheta)\left(\frac{Q(l_1)}{e^{\alpha l_1}}+\frac{Q(l_2)}{e^{\alpha l_2}}\right). \tag{3.7}$$

    If we multiply (3.7) by $\frac{1}{n!}\vartheta^{n}(1-\vartheta)^{\rho-n-1}$ with $\vartheta\in(0,1)$, $\rho>0$, and then integrate the resulting inequality with respect to $\vartheta$ over $[0,1]$, we get

    $$\frac{1}{(l_2-l_1)^{\rho}}\left[J^{\rho}_{l_1^{+}}\frac{Q(l_2)}{e^{\alpha l_2}}+J^{\rho}_{l_2^{-}}\frac{Q(l_1)}{e^{\alpha l_1}}\right]\le\frac{2(n+1)\,\Gamma(\rho-n+1)}{\Gamma(\rho+3)}\left(\frac{Q(l_1)}{e^{\alpha l_1}}+\frac{Q(l_2)}{e^{\alpha l_2}}\right), \tag{3.8}$$

    which is the required result.
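    The change of variables behind (3.5) can be spot-checked numerically. The sketch below is our own illustration ($Q$, $\alpha$, $n$, $\rho$ are sample values, with $\rho=n+1$ so that the kernels are smooth): it compares the $\vartheta$-integral $I_1$ with $(l_2-l_1)^{-\rho}$ times the left conformable integral of $Q/e^{\alpha(\cdot)}$.

```python
# Spot-check of identity (3.5): the theta-integral I1 equals
# (l2 - l1)^(-rho) * J^rho_{l1+} applied to Q(u) * exp(-alpha * u) at u = l2.
import math
from scipy.integrate import quad

Q = lambda t: t**2 + 1.0
alpha, l1, l2, n, rho = 0.7, 0.0, 1.5, 1, 2.0   # rho in (n, n+1]

F = lambda t: Q(t) * math.exp(-alpha * t)
I1, _ = quad(lambda th: th**n * (1 - th)**(rho - n - 1)
             * F(th * l1 + (1 - th) * l2), 0.0, 1.0)
I1 /= math.factorial(n)

J, _ = quad(lambda u: (l2 - u)**n * (u - l1)**(rho - n - 1) * F(u), l1, l2)
J /= math.factorial(n)

assert abs(I1 - J / (l2 - l1)**rho) < 1e-8
print(I1)
```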

    Some special cases of the above theorem are stated as follows.

    Corollary 3.1. Choosing $\alpha=0$, Theorem 3.1 reduces to the new result

    $$\frac{4\,\Gamma(\rho-n)}{\Gamma(\rho+1)}\,Q\!\left(\frac{l_1+l_2}{2}\right)\le\frac{1}{(l_2-l_1)^{\rho}}\left[J^{\rho}_{l_1^{+}}Q(l_2)+J^{\rho}_{l_2^{-}}Q(l_1)\right]\le\frac{2(n+1)\,\Gamma(\rho-n+1)}{\Gamma(\rho+3)}\big(Q(l_1)+Q(l_2)\big).$$

    Remark 3.1. Choosing $\rho=n+1$ and $\alpha=0$, Theorem 3.1 reduces to Theorem 3.1 in [19].

    Our next result is the following lemma, which plays a dominating role in proving the results to come.

    Lemma 4.1. For $\rho\in(n,n+1]$ with $\rho>0$, let $Q:I\subseteq\mathbb{R}\to\mathbb{R}$ be a differentiable function on $I^{\circ}$ (the interior of $I$) with $l_1<l_2$, such that $Q'\in L_1([l_1,l_2])$. Then the following identity holds:

    $$B(n+1,\rho-n)\,\frac{Q(l_1)+Q(l_2)}{2}-\frac{n!}{2(l_2-l_1)^{\rho}}\left[J^{\rho}_{l_1^{+}}Q(l_2)+J^{\rho}_{l_2^{-}}Q(l_1)\right]=\frac{l_2-l_1}{2}\int_{0}^{1}\big(B_{1-\vartheta}(n+1,\rho-n)-B_{\vartheta}(n+1,\rho-n)\big)\,Q'(\vartheta l_1+(1-\vartheta)l_2)\,d\vartheta. \tag{4.1}$$

    Proof. It suffices to show that

    $$\int_{0}^{1}\big(B_{1-\vartheta}(n+1,\rho-n)-B_{\vartheta}(n+1,\rho-n)\big)\,Q'(\vartheta l_1+(1-\vartheta)l_2)\,d\vartheta=\int_{0}^{1}B_{1-\vartheta}(n+1,\rho-n)\,Q'(\vartheta l_1+(1-\vartheta)l_2)\,d\vartheta-\int_{0}^{1}B_{\vartheta}(n+1,\rho-n)\,Q'(\vartheta l_1+(1-\vartheta)l_2)\,d\vartheta=S_1-S_2. \tag{4.2}$$

    Integrating by parts, we have

    $$S_1=\int_{0}^{1}B_{1-\vartheta}(n+1,\rho-n)\,Q'(\vartheta l_1+(1-\vartheta)l_2)\,d\vartheta=\int_{0}^{1}\left(\int_{0}^{1-\vartheta}v^{n}(1-v)^{\rho-n-1}\,dv\right)Q'(\vartheta l_1+(1-\vartheta)l_2)\,d\vartheta$$
    $$=\frac{1}{l_2-l_1}\,B(n+1,\rho-n)\,Q(l_2)-\frac{1}{l_2-l_1}\int_{0}^{1}(1-\vartheta)^{n}\vartheta^{\rho-n-1}\,Q(\vartheta l_1+(1-\vartheta)l_2)\,d\vartheta$$
    $$=\frac{1}{l_2-l_1}\,B(n+1,\rho-n)\,Q(l_2)-\frac{1}{l_2-l_1}\int_{l_1}^{l_2}\left(\frac{z-l_1}{l_2-l_1}\right)^{n}\left(\frac{l_2-z}{l_2-l_1}\right)^{\rho-n-1}Q(z)\,\frac{dz}{l_2-l_1}$$
    $$=\frac{1}{l_2-l_1}\,B(n+1,\rho-n)\,Q(l_2)-\frac{n!}{(l_2-l_1)^{\rho+1}}\,J^{\rho}_{l_2^{-}}Q(l_1). \tag{4.3}$$

    Analogously,

    $$S_2=\int_{0}^{1}B_{\vartheta}(n+1,\rho-n)\,Q'(\vartheta l_1+(1-\vartheta)l_2)\,d\vartheta=\int_{0}^{1}\left(\int_{0}^{\vartheta}v^{n}(1-v)^{\rho-n-1}\,dv\right)Q'(\vartheta l_1+(1-\vartheta)l_2)\,d\vartheta$$
    $$=-\frac{1}{l_2-l_1}\,B(n+1,\rho-n)\,Q(l_1)+\frac{1}{l_2-l_1}\int_{0}^{1}\vartheta^{n}(1-\vartheta)^{\rho-n-1}\,Q(\vartheta l_1+(1-\vartheta)l_2)\,d\vartheta$$
    $$=-\frac{1}{l_2-l_1}\,B(n+1,\rho-n)\,Q(l_1)+\frac{1}{l_2-l_1}\int_{l_1}^{l_2}\left(\frac{l_2-z}{l_2-l_1}\right)^{n}\left(\frac{z-l_1}{l_2-l_1}\right)^{\rho-n-1}Q(z)\,\frac{dz}{l_2-l_1}$$
    $$=-\frac{1}{l_2-l_1}\,B(n+1,\rho-n)\,Q(l_1)+\frac{n!}{(l_2-l_1)^{\rho+1}}\,J^{\rho}_{l_1^{+}}Q(l_2). \tag{4.4}$$

    Substituting the values of $S_1$ and $S_2$ in (4.2) and then multiplying by $\frac{l_2-l_1}{2}$, we get (4.1).

    For the sake of simplicity, we use the following notation:

    $$\Upsilon_{Q}(\rho;B;n;l_1,l_2)=B(n+1,\rho-n)\,\frac{Q(l_1)+Q(l_2)}{2}-\frac{n!}{2(l_2-l_1)^{\rho}}\left[J^{\rho}_{l_1^{+}}Q(l_2)+J^{\rho}_{l_2^{-}}Q(l_1)\right].$$
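    Identity (4.1), and hence the quantity $\Upsilon_Q$, can be verified numerically for smooth test data. The following sketch (SciPy assumed; $Q$, $n$, $\rho$ and the interval are illustrative, with $\rho=n+1$ to keep the kernels smooth) computes both sides:

```python
# Numerical check of Lemma 4.1 / the definition of Upsilon_Q.
import math
from scipy.integrate import quad

def B_inc(v, a, b):
    val, _ = quad(lambda t: t**(a - 1) * (1 - t)**(b - 1), 0.0, v)
    return val

Q  = lambda z: z**3 + 2.0           # sample smooth function
dQ = lambda z: 3 * z**2             # its derivative
l1, l2, n, rho = 0.0, 1.0, 1, 2.0   # rho in (n, n+1]

Jp, _ = quad(lambda t: (l2 - t)**n * (t - l1)**(rho - n - 1) * Q(t), l1, l2)
Jm, _ = quad(lambda t: (t - l1)**n * (l2 - t)**(rho - n - 1) * Q(t), l1, l2)
Jp /= math.factorial(n)             # J^rho_{l1+} Q(l2)
Jm /= math.factorial(n)             # J^rho_{l2-} Q(l1)

B = B_inc(1.0, n + 1, rho - n)
upsilon = B * (Q(l1) + Q(l2)) / 2 - math.factorial(n) / (2 * (l2 - l1)**rho) * (Jp + Jm)

rhs, _ = quad(lambda th: (B_inc(1 - th, n + 1, rho - n) - B_inc(th, n + 1, rho - n))
              * dQ(th * l1 + (1 - th) * l2), 0.0, 1.0)
rhs *= (l2 - l1) / 2

assert abs(upsilon - rhs) < 1e-7    # both sides equal 1/8 for this data
print(upsilon, rhs)
```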

    Theorem 4.2. For $\rho\in(n,n+1]$ with $\rho>0$, let $Q:I\subseteq\mathbb{R}\to\mathbb{R}$ be a differentiable function on $I^{\circ}$ with $l_1<l_2$, such that $Q'\in L_1([l_1,l_2])$. If $|Q'|^{r}$, with $r\ge 1$, is an exponentially tgs-convex function, then the following inequality holds:

    $$\big|\Upsilon_{Q}(\rho;B;n;l_1,l_2)\big|\le\frac{l_2-l_1}{2}\big(B(n+1,\rho-n+1)-B(n+1,\rho-n)+B(n+2,\rho-n)\big)^{1-\frac{1}{r}}\left(\frac{e^{\alpha rl_2}|Q'(l_1)|^{r}+e^{\alpha rl_1}|Q'(l_2)|^{r}}{6\,e^{\alpha rl_1}e^{\alpha rl_2}}\right)^{\frac{1}{r}}. \tag{4.5}$$

    Proof. Using the exponential tgs-convexity of $|Q'|^{r}$, Lemma 4.1 and the power mean (Hölder) inequality, one obtains

    $$\big|\Upsilon_{Q}(\rho;B;n;l_1,l_2)\big|=\left|\frac{l_2-l_1}{2}\int_{0}^{1}\big(B_{1-\vartheta}(n+1,\rho-n)-B_{\vartheta}(n+1,\rho-n)\big)\,Q'(\vartheta l_1+(1-\vartheta)l_2)\,d\vartheta\right|$$
    $$\le\frac{l_2-l_1}{2}\left(\int_{0}^{1}\big(B_{1-\vartheta}(n+1,\rho-n)-B_{\vartheta}(n+1,\rho-n)\big)\,d\vartheta\right)^{1-\frac{1}{r}}\left(\int_{0}^{1}\big|Q'(\vartheta l_1+(1-\vartheta)l_2)\big|^{r}\,d\vartheta\right)^{\frac{1}{r}}$$
    $$\le\frac{l_2-l_1}{2}\big(B(n+1,\rho-n+1)-B(n+1,\rho-n)+B(n+2,\rho-n)\big)^{1-\frac{1}{r}}\left(\int_{0}^{1}\vartheta(1-\vartheta)\left(\left|\frac{Q'(l_1)}{e^{\alpha l_1}}\right|^{r}+\left|\frac{Q'(l_2)}{e^{\alpha l_2}}\right|^{r}\right)d\vartheta\right)^{\frac{1}{r}}$$
    $$=\frac{l_2-l_1}{2}\big(B(n+1,\rho-n+1)-B(n+1,\rho-n)+B(n+2,\rho-n)\big)^{1-\frac{1}{r}}\left(\frac{e^{\alpha rl_2}|Q'(l_1)|^{r}+e^{\alpha rl_1}|Q'(l_2)|^{r}}{6\,e^{\alpha rl_1}e^{\alpha rl_2}}\right)^{\frac{1}{r}},$$

    which is the required result.

    Theorem 4.3. For $\rho\in(n,n+1]$ with $\rho>0$, let $Q:I\subseteq\mathbb{R}\to\mathbb{R}$ be a differentiable function on $I^{\circ}$ with $l_1<l_2$, such that $Q'\in L_1([l_1,l_2])$. If $|Q'|^{r}$, with $r,s>1$ and $\frac{1}{s}+\frac{1}{r}=1$, is an exponentially tgs-convex function, then the following inequality holds:

    $$\big|\Upsilon_{Q}(\rho;B;n;l_1,l_2)\big|\le\frac{l_2-l_1}{2}\left(2\int_{0}^{\frac{1}{2}}\left(\int_{\vartheta}^{1-\vartheta}v^{n}(1-v)^{\rho-n-1}\,dv\right)^{s}d\vartheta\right)^{\frac{1}{s}}\left(\frac{e^{\alpha rl_2}|Q'(l_1)|^{r}+e^{\alpha rl_1}|Q'(l_2)|^{r}}{6\,e^{\alpha rl_1}e^{\alpha rl_2}}\right)^{\frac{1}{r}}. \tag{4.7}$$

    Proof. Using the exponential tgs-convexity of $|Q'|^{r}$ and the well-known Hölder inequality, one obtains

    $$\big|\Upsilon_{Q}(\rho;B;n;l_1,l_2)\big|=\left|\frac{l_2-l_1}{2}\int_{0}^{1}\big(B_{1-\vartheta}(n+1,\rho-n)-B_{\vartheta}(n+1,\rho-n)\big)\,Q'(\vartheta l_1+(1-\vartheta)l_2)\,d\vartheta\right|$$
    $$\le\frac{l_2-l_1}{2}\left(\int_{0}^{1}\big|B_{1-\vartheta}(n+1,\rho-n)-B_{\vartheta}(n+1,\rho-n)\big|^{s}\,d\vartheta\right)^{\frac{1}{s}}\left(\int_{0}^{1}\big|Q'(\vartheta l_1+(1-\vartheta)l_2)\big|^{r}\,d\vartheta\right)^{\frac{1}{r}}$$
    $$\le\frac{l_2-l_1}{2}\left(\int_{0}^{\frac{1}{2}}\big(B_{1-\vartheta}(n+1,\rho-n)-B_{\vartheta}(n+1,\rho-n)\big)^{s}\,d\vartheta+\int_{\frac{1}{2}}^{1}\big(B_{\vartheta}(n+1,\rho-n)-B_{1-\vartheta}(n+1,\rho-n)\big)^{s}\,d\vartheta\right)^{\frac{1}{s}}\left(\int_{0}^{1}\vartheta(1-\vartheta)\left(\frac{|Q'(l_1)|^{r}}{e^{\alpha rl_1}}+\frac{|Q'(l_2)|^{r}}{e^{\alpha rl_2}}\right)d\vartheta\right)^{\frac{1}{r}}$$
    $$=\frac{l_2-l_1}{2}\left(\int_{0}^{\frac{1}{2}}\left(\int_{\vartheta}^{1-\vartheta}v^{n}(1-v)^{\rho-n-1}\,dv\right)^{s}d\vartheta+\int_{\frac{1}{2}}^{1}\left(\int_{1-\vartheta}^{\vartheta}v^{n}(1-v)^{\rho-n-1}\,dv\right)^{s}d\vartheta\right)^{\frac{1}{s}}\left(\frac{e^{\alpha rl_2}|Q'(l_1)|^{r}+e^{\alpha rl_1}|Q'(l_2)|^{r}}{6\,e^{\alpha rl_1}e^{\alpha rl_2}}\right)^{\frac{1}{r}}$$
    $$=\frac{l_2-l_1}{2}\left(2\int_{0}^{\frac{1}{2}}\left(\int_{\vartheta}^{1-\vartheta}v^{n}(1-v)^{\rho-n-1}\,dv\right)^{s}d\vartheta\right)^{\frac{1}{s}}\left(\frac{e^{\alpha rl_2}|Q'(l_1)|^{r}+e^{\alpha rl_1}|Q'(l_2)|^{r}}{6\,e^{\alpha rl_1}e^{\alpha rl_2}}\right)^{\frac{1}{r}},$$

    which is the required result.

    This section is devoted to proving some new generalizations for exponentially tgs-convex functions via the generalized conformable integral operators.

    Theorem 5.1. For $\rho>0$, let $Q:[l_1,l_2]\subseteq\mathbb{R}\to\mathbb{R}$ be an exponentially tgs-convex function such that $Q\in L_1([l_1,l_2])$. Then the following inequalities hold:

    $$\frac{4}{\delta\rho^{\delta}}\,Q\!\left(\frac{l_1+l_2}{2}\right)\le\frac{\Gamma(\delta)}{(l_2-l_1)^{\rho\delta}}\left[J^{\rho,\delta}_{l_1^{+}}\frac{Q(l_2)}{e^{\alpha l_2}}+J^{\rho,\delta}_{l_2^{-}}\frac{Q(l_1)}{e^{\alpha l_1}}\right]\le\frac{2}{\rho^{\delta}}\left[B\!\left(\frac{\rho+1}{\rho},\delta\right)-B\!\left(\frac{\rho+2}{\rho},\delta\right)\right]\left(\frac{Q(l_1)}{e^{\alpha l_1}}+\frac{Q(l_2)}{e^{\alpha l_2}}\right). \tag{5.1}$$

    Proof. Multiplying (3.3) by $\left(\frac{1-\vartheta^{\rho}}{\rho}\right)^{\delta-1}\vartheta^{\rho-1}$ with $\vartheta\in(0,1)$, $\rho>0$, and then integrating the resulting estimate with respect to $\vartheta$ over $[0,1]$, we find

    $$4\,Q\!\left(\frac{l_1+l_2}{2}\right)\int_{0}^{1}\left(\frac{1-\vartheta^{\rho}}{\rho}\right)^{\delta-1}\vartheta^{\rho-1}\,d\vartheta\le\int_{0}^{1}\left(\frac{1-\vartheta^{\rho}}{\rho}\right)^{\delta-1}\vartheta^{\rho-1}\,\frac{Q(\vartheta l_1+(1-\vartheta)l_2)}{e^{\alpha(\vartheta l_1+(1-\vartheta)l_2)}}\,d\vartheta+\int_{0}^{1}\left(\frac{1-\vartheta^{\rho}}{\rho}\right)^{\delta-1}\vartheta^{\rho-1}\,\frac{Q(\vartheta l_2+(1-\vartheta)l_1)}{e^{\alpha(\vartheta l_2+(1-\vartheta)l_1)}}\,d\vartheta=R_1+R_2. \tag{5.2}$$

    Making the change of variable $u=\vartheta l_1+(1-\vartheta)l_2$, we have

    $$R_1=\int_{0}^{1}\left(\frac{1-\vartheta^{\rho}}{\rho}\right)^{\delta-1}\vartheta^{\rho-1}\,\frac{Q(\vartheta l_1+(1-\vartheta)l_2)}{e^{\alpha(\vartheta l_1+(1-\vartheta)l_2)}}\,d\vartheta=\int_{l_1}^{l_2}\left(\frac{1-\left(\frac{l_2-u}{l_2-l_1}\right)^{\rho}}{\rho}\right)^{\delta-1}\left(\frac{l_2-u}{l_2-l_1}\right)^{\rho-1}\frac{Q(u)}{e^{\alpha u}}\,\frac{du}{l_2-l_1}$$
    $$=\frac{1}{(l_2-l_1)^{\rho\delta}}\int_{l_1}^{l_2}\left(\frac{(l_2-l_1)^{\rho}-(l_2-u)^{\rho}}{\rho}\right)^{\delta-1}(l_2-u)^{\rho-1}\,\frac{Q(u)}{e^{\alpha u}}\,du=\frac{\Gamma(\delta)}{(l_2-l_1)^{\rho\delta}}\,J^{\rho,\delta}_{l_2^{-}}\frac{Q(l_1)}{e^{\alpha l_1}}. \tag{5.3}$$

    Substituting $v=\vartheta l_2+(1-\vartheta)l_1$, we have

    $$R_2=\int_{0}^{1}\left(\frac{1-\vartheta^{\rho}}{\rho}\right)^{\delta-1}\vartheta^{\rho-1}\,\frac{Q(\vartheta l_2+(1-\vartheta)l_1)}{e^{\alpha(\vartheta l_2+(1-\vartheta)l_1)}}\,d\vartheta=\int_{l_1}^{l_2}\left(\frac{1-\left(\frac{v-l_1}{l_2-l_1}\right)^{\rho}}{\rho}\right)^{\delta-1}\left(\frac{v-l_1}{l_2-l_1}\right)^{\rho-1}\frac{Q(v)}{e^{\alpha v}}\,\frac{dv}{l_2-l_1}$$
    $$=\frac{1}{(l_2-l_1)^{\rho\delta}}\int_{l_1}^{l_2}\left(\frac{(l_2-l_1)^{\rho}-(v-l_1)^{\rho}}{\rho}\right)^{\delta-1}(v-l_1)^{\rho-1}\,\frac{Q(v)}{e^{\alpha v}}\,dv=\frac{\Gamma(\delta)}{(l_2-l_1)^{\rho\delta}}\,J^{\rho,\delta}_{l_1^{+}}\frac{Q(l_2)}{e^{\alpha l_2}}. \tag{5.4}$$

    Thus, using (5.3) and (5.4) in (5.2), we get the first inequality of (5.1).

    Consider

    $$Q(\vartheta l_1+(1-\vartheta)l_2)\le\vartheta(1-\vartheta)\left(\frac{Q(l_1)}{e^{\alpha l_1}}+\frac{Q(l_2)}{e^{\alpha l_2}}\right)$$

    and

    $$Q(\vartheta l_2+(1-\vartheta)l_1)\le\vartheta(1-\vartheta)\left(\frac{Q(l_1)}{e^{\alpha l_1}}+\frac{Q(l_2)}{e^{\alpha l_2}}\right).$$

    Adding these two inequalities gives

    $$Q(\vartheta l_1+(1-\vartheta)l_2)+Q(\vartheta l_2+(1-\vartheta)l_1)\le 2\vartheta(1-\vartheta)\left(\frac{Q(l_1)}{e^{\alpha l_1}}+\frac{Q(l_2)}{e^{\alpha l_2}}\right). \tag{5.5}$$

    If we multiply (5.5) by $\left(\frac{1-\vartheta^{\rho}}{\rho}\right)^{\delta-1}\vartheta^{\rho-1}$ with $\vartheta\in(0,1)$, $\rho>0$, and then integrate the resulting estimate with respect to $\vartheta$ over $[0,1]$, we get

    $$\frac{\Gamma(\delta)}{(l_2-l_1)^{\rho\delta}}\left[J^{\rho,\delta}_{l_1^{+}}\frac{Q(l_2)}{e^{\alpha l_2}}+J^{\rho,\delta}_{l_2^{-}}\frac{Q(l_1)}{e^{\alpha l_1}}\right]\le\frac{2}{\rho^{\delta}}\left[B\!\left(\frac{\rho+1}{\rho},\delta\right)-B\!\left(\frac{\rho+2}{\rho},\delta\right)\right]\left(\frac{Q(l_1)}{e^{\alpha l_1}}+\frac{Q(l_2)}{e^{\alpha l_2}}\right), \tag{5.6}$$

    which is the right-hand side of (5.1).

    Our main results depend on the following identity.

    Lemma 5.2. For $\rho>0$, let $Q:I\subseteq\mathbb{R}\to\mathbb{R}$ be a differentiable function on $(l_1,l_2)$ with $l_1<l_2$, such that $Q'\in L_1([l_1,l_2])$. Then the following identity holds:

    $$\frac{Q(l_1)+Q(l_2)}{2}-\frac{\rho^{\delta}\,\Gamma(\delta+1)}{2(l_2-l_1)^{\rho\delta}}\left[J^{\rho,\delta}_{l_1^{+}}Q(l_2)+J^{\rho,\delta}_{l_2^{-}}Q(l_1)\right]=\frac{(l_2-l_1)\rho^{\delta}}{2}\int_{0}^{1}\left[\left(\frac{1-\vartheta^{\rho}}{\rho}\right)^{\delta}-\left(\frac{1-(1-\vartheta)^{\rho}}{\rho}\right)^{\delta}\right]Q'(\vartheta l_1+(1-\vartheta)l_2)\,d\vartheta. \tag{5.7}$$

    Proof. It suffices to show that

    $$\int_{0}^{1}\left[\left(\frac{1-\vartheta^{\rho}}{\rho}\right)^{\delta}-\left(\frac{1-(1-\vartheta)^{\rho}}{\rho}\right)^{\delta}\right]Q'(\vartheta l_1+(1-\vartheta)l_2)\,d\vartheta=\int_{0}^{1}\left(\frac{1-\vartheta^{\rho}}{\rho}\right)^{\delta}Q'(\vartheta l_1+(1-\vartheta)l_2)\,d\vartheta-\int_{0}^{1}\left(\frac{1-(1-\vartheta)^{\rho}}{\rho}\right)^{\delta}Q'(\vartheta l_1+(1-\vartheta)l_2)\,d\vartheta=M_1-M_2. \tag{5.8}$$

    Using integration by parts and a change of variable, we have

    $$M_1=\int_{0}^{1}\left(\frac{1-\vartheta^{\rho}}{\rho}\right)^{\delta}Q'(\vartheta l_1+(1-\vartheta)l_2)\,d\vartheta=\left[\left(\frac{1-\vartheta^{\rho}}{\rho}\right)^{\delta}\frac{Q(\vartheta l_1+(1-\vartheta)l_2)}{l_1-l_2}\right]_{0}^{1}-\frac{\delta}{l_2-l_1}\int_{0}^{1}\left(\frac{1-\vartheta^{\rho}}{\rho}\right)^{\delta-1}\vartheta^{\rho-1}\,Q(\vartheta l_1+(1-\vartheta)l_2)\,d\vartheta$$
    $$=\frac{Q(l_2)}{(l_2-l_1)\rho^{\delta}}-\frac{\delta\,\Gamma(\delta)}{(l_2-l_1)^{\rho\delta+1}}\,J^{\rho,\delta}_{l_2^{-}}Q(l_1).$$

    Analogously,

    $$M_2=\int_{0}^{1}\left(\frac{1-(1-\vartheta)^{\rho}}{\rho}\right)^{\delta}Q'(\vartheta l_1+(1-\vartheta)l_2)\,d\vartheta=\left[\left(\frac{1-(1-\vartheta)^{\rho}}{\rho}\right)^{\delta}\frac{Q(\vartheta l_1+(1-\vartheta)l_2)}{l_1-l_2}\right]_{0}^{1}+\frac{\delta}{l_2-l_1}\int_{0}^{1}\left(\frac{1-(1-\vartheta)^{\rho}}{\rho}\right)^{\delta-1}(1-\vartheta)^{\rho-1}\,Q(\vartheta l_1+(1-\vartheta)l_2)\,d\vartheta$$
    $$=-\frac{Q(l_1)}{(l_2-l_1)\rho^{\delta}}+\frac{\delta\,\Gamma(\delta)}{(l_2-l_1)^{\rho\delta+1}}\,J^{\rho,\delta}_{l_1^{+}}Q(l_2). \tag{5.9}$$

    Substituting the values of $M_1$ and $M_2$ in (5.8) and then multiplying both sides by $\frac{(l_2-l_1)\rho^{\delta}}{2}$, we get the desired result.
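    Identity (5.7) can be checked numerically in the same spirit. Below is a minimal sketch (SciPy assumed; $\rho=1$, $\delta=1.5$ and the test function are our own illustrative choices that keep the kernels nonsingular):

```python
# Numerical check of Lemma 5.2, identity (5.7).
import math
from scipy.integrate import quad

Q  = lambda z: math.cos(z) + 2.0
dQ = lambda z: -math.sin(z)
l1, l2, rho, delta = 0.0, 1.0, 1.0, 1.5

def J_left(z):   # J^{rho,delta}_{l1+} Q(z), Definition (2.5)
    val, _ = quad(lambda t: (((z - l1)**rho - (t - l1)**rho) / rho)**(delta - 1)
                  * Q(t) / (t - l1)**(1 - rho), l1, z)
    return val / math.gamma(delta)

def J_right(z):  # J^{rho,delta}_{l2-} Q(z), Definition (2.6)
    val, _ = quad(lambda t: (((l2 - z)**rho - (l2 - t)**rho) / rho)**(delta - 1)
                  * Q(t) / (l2 - t)**(1 - rho), z, l2)
    return val / math.gamma(delta)

lhs = (Q(l1) + Q(l2)) / 2 - rho**delta * math.gamma(delta + 1) \
      / (2 * (l2 - l1)**(rho * delta)) * (J_left(l2) + J_right(l1))
rhs, _ = quad(lambda th: (((1 - th**rho) / rho)**delta
                          - ((1 - (1 - th)**rho) / rho)**delta)
              * dQ(th * l1 + (1 - th) * l2), 0.0, 1.0)
rhs *= (l2 - l1) * rho**delta / 2

assert abs(lhs - rhs) < 1e-7
print(lhs, rhs)
```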

    Theorem 5.3. For $\rho>0$, let $Q:I\subseteq\mathbb{R}\to\mathbb{R}$ be a differentiable function on $I^{\circ}$ with $l_1<l_2$, such that $Q'\in L_1([l_1,l_2])$. If $|Q'|^{r}$, with $r\ge 1$, is an exponentially tgs-convex function, then the following inequality holds:

    $$\left|\frac{Q(l_1)+Q(l_2)}{2}-\frac{\rho^{\delta}\,\Gamma(\delta+1)}{2(l_2-l_1)^{\rho\delta}}\left[J^{\rho,\delta}_{l_1^{+}}Q(l_2)+J^{\rho,\delta}_{l_2^{-}}Q(l_1)\right]\right|\le\frac{(l_2-l_1)\rho^{\delta}}{2}\left(\frac{1}{\rho^{\delta+1}}B\!\left(\frac{1}{\rho},\delta+1\right)+\frac{1}{\rho^{\delta+2}}B\!\left(\frac{1}{\rho^{2}},\delta+1\right)\right)^{1-\frac{1}{r}}\left(\frac{e^{\alpha rl_2}|Q'(l_1)|^{r}+e^{\alpha rl_1}|Q'(l_2)|^{r}}{6\,e^{\alpha rl_1}e^{\alpha rl_2}}\right)^{\frac{1}{r}}. \tag{5.10}$$

    Proof. Using the exponential tgs-convexity of $|Q'|^{r}$, Lemma 5.2 and the power mean (Hölder) inequality, we have

    $$\left|\frac{Q(l_1)+Q(l_2)}{2}-\frac{\rho^{\delta}\,\Gamma(\delta+1)}{2(l_2-l_1)^{\rho\delta}}\left[J^{\rho,\delta}_{l_1^{+}}Q(l_2)+J^{\rho,\delta}_{l_2^{-}}Q(l_1)\right]\right|=\left|\frac{(l_2-l_1)\rho^{\delta}}{2}\int_{0}^{1}\left[\left(\frac{1-\vartheta^{\rho}}{\rho}\right)^{\delta}-\left(\frac{1-(1-\vartheta)^{\rho}}{\rho}\right)^{\delta}\right]Q'(\vartheta l_1+(1-\vartheta)l_2)\,d\vartheta\right|$$
    $$\le\frac{(l_2-l_1)\rho^{\delta}}{2}\left(\int_{0}^{1}\left(\frac{1-\vartheta^{\rho}}{\rho}\right)^{\delta}d\vartheta+\int_{0}^{1}\left(\frac{1-(1-\vartheta)^{\rho}}{\rho}\right)^{\delta}d\vartheta\right)^{1-\frac{1}{r}}\left(\int_{0}^{1}\big|Q'(\vartheta l_1+(1-\vartheta)l_2)\big|^{r}\,d\vartheta\right)^{\frac{1}{r}}$$
    $$\le\frac{(l_2-l_1)\rho^{\delta}}{2}\left(\int_{0}^{1}\left(\frac{1-\vartheta^{\rho}}{\rho}\right)^{\delta}d\vartheta+\int_{0}^{1}\left(\frac{1-(1-\vartheta)^{\rho}}{\rho}\right)^{\delta}d\vartheta\right)^{1-\frac{1}{r}}\left(\int_{0}^{1}\vartheta(1-\vartheta)\left(\frac{|Q'(l_1)|^{r}}{e^{\alpha rl_1}}+\frac{|Q'(l_2)|^{r}}{e^{\alpha rl_2}}\right)d\vartheta\right)^{\frac{1}{r}}$$
    $$=\frac{(l_2-l_1)\rho^{\delta}}{2}\left(\frac{1}{\rho^{\delta+1}}B\!\left(\frac{1}{\rho},\delta+1\right)+\frac{1}{\rho^{\delta+2}}B\!\left(\frac{1}{\rho^{2}},\delta+1\right)\right)^{1-\frac{1}{r}}\left(\frac{e^{\alpha rl_2}|Q'(l_1)|^{r}+e^{\alpha rl_1}|Q'(l_2)|^{r}}{6\,e^{\alpha rl_1}e^{\alpha rl_2}}\right)^{\frac{1}{r}},$$

    which is the required result.

    Let $l_1,l_2>0$ with $l_1\ne l_2$. Then the arithmetic mean $A(l_1,l_2)$, geometric mean $G(l_1,l_2)$, harmonic mean $H(l_1,l_2)$, logarithmic mean $L(l_1,l_2)$ and $n$-th generalized logarithmic mean $L_n(l_1,l_2)$ are defined by

    $$A(l_1,l_2)=\frac{l_1+l_2}{2},\qquad G(l_1,l_2)=\sqrt{l_1l_2},\qquad H(l_1,l_2)=\frac{2l_1l_2}{l_1+l_2},\qquad L(l_1,l_2)=\frac{l_2-l_1}{\ln l_2-\ln l_1}$$

    and

    $$L_n(l_1,l_2)=\left[\frac{l_2^{\,n+1}-l_1^{\,n+1}}{(n+1)(l_2-l_1)}\right]^{\frac{1}{n}}\quad(n\ne 0,-1),$$

    respectively. Recently, bivariate means have attracted the attention of many researchers [47,48,49,50,51,52,53,54,55,56,57,58] because they are closely related to special functions.
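    For the applications below, the means translate directly into code; the following is a small helper sketch (function names are our own):

```python
# Bivariate means used in this section (l2 > l1 > 0 assumed).
import math

A  = lambda a, b: (a + b) / 2                              # arithmetic mean
G  = lambda a, b: math.sqrt(a * b)                         # geometric mean
H  = lambda a, b: 2 * a * b / (a + b)                      # harmonic mean
L  = lambda a, b: (b - a) / (math.log(b) - math.log(a))    # logarithmic mean
Ln = lambda a, b, n: ((b**(n + 1) - a**(n + 1))
                      / ((n + 1) * (b - a)))**(1 / n)      # generalized log mean

print(A(1, 2), G(1, 2), H(1, 2), L(1, 2), Ln(1, 2, 2))
```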

    In this section, we use the results obtained in Section 5 to establish several novel inequalities involving the special bivariate means mentioned above.

    Proposition 6.1. Let $l_1,l_2>0$ with $l_2>l_1$. Then

    $$\big|A(l_1^{2},l_2^{2})-L_2^{2}(l_1,l_2)\big|\le\frac{l_2-l_1}{6^{\frac{1}{r}}\,e^{\alpha(l_1+l_2)}}\left[(e^{\alpha l_2}l_1)^{r}+(e^{\alpha l_1}l_2)^{r}\right]^{\frac{1}{r}}.$$

    Proof. The result follows from Theorem 5.3 with $\rho=\delta=1$ and $Q(z)=z^{2}$.

    Proposition 6.2. Let $l_1,l_2>0$ with $l_2>l_1$. Then

    $$\big|H^{-1}(l_1,l_2)-L^{-1}(l_1,l_2)\big|\le\frac{l_2-l_1}{2\cdot 6^{\frac{1}{r}}\,e^{\alpha(l_1+l_2)}}\left[\frac{(e^{\alpha l_2}l_2^{2})^{r}+(e^{\alpha l_1}l_1^{2})^{r}}{(l_1l_2)^{2r}}\right]^{\frac{1}{r}}.$$

    Proof. The result follows from Theorem 5.3 with $\rho=\delta=1$ and $Q(z)=\frac{1}{z}$.

    Proposition 6.3. Let $l_1,l_2>0$ with $l_2>l_1$. Then

    $$\big|A(l_1^{n},l_2^{n})-L_n^{n}(l_1,l_2)\big|\le\frac{(l_2-l_1)\,|n|}{2}\left[\frac{(e^{\alpha l_2}l_1^{\,n-1})^{r}+(e^{\alpha l_1}l_2^{\,n-1})^{r}}{6\,e^{\alpha r(l_1+l_2)}}\right]^{\frac{1}{r}}.$$

    Proof. The result follows from Theorem 5.3 with $\rho=\delta=1$ and $Q(z)=z^{n}$.
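    As a concrete numerical illustration of Proposition 6.1, the sketch below uses our own sample values $l_1=1$, $l_2=2$, $\alpha=1$, $r=2$ and evaluates both sides of the bound as reconstructed above:

```python
# Numerical illustration of Proposition 6.1.
import math

l1, l2, alpha, r = 1.0, 2.0, 1.0, 2.0
A2  = (l1**2 + l2**2) / 2                                  # A(l1^2, l2^2)
L22 = (l2**3 - l1**3) / (3 * (l2 - l1))                    # L_2^2(l1, l2)
lhs = abs(A2 - L22)                                        # = 1/6 for this data
rhs = (l2 - l1) / (6**(1 / r) * math.exp(alpha * (l1 + l2))) \
      * ((math.exp(alpha * l2) * l1)**r + (math.exp(alpha * l1) * l2)**r)**(1 / r)
assert lhs <= rhs
print(lhs, rhs)    # ~0.1667 <= ~0.1865
```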

    In this paper, we proposed a novel technique, with two different approaches, for deriving several generalizations for exponentially tgs-convex functions via conformable fractional integral operators. We generalized the Hermite-Hadamard type inequalities for exponentially tgs-convex functions, and by choosing different parametric values of $\rho$ and $\delta$ we analyzed the behavior of our proposed results in the form of corollaries and remarks. To show the effectiveness of our novel generalizations, we note that our results have potential applications in fractional integrodifferential equations and fractional Schrödinger equations. The numerical applications show that our findings are consistent and efficient. Finally, we remark that, beyond the framework of the conformable fractional integral operator, it would be of interest to extend our results to the Riemann-Liouville, Hadamard and Katugampola fractional integral operators. Our ideas and approach may lead to much follow-up research.

    The authors would like to thank the anonymous referees for their valuable comments and suggestions, which led to considerable improvement of the article.

    The work was supported by the Natural Science Foundation of China (Grant Nos. 61673169, 11971142, 11701176, 11626101, 11601485).

    The authors declare no conflict of interest.



    [1] T. Akiba, S. Sano, T. Yanase, T. Ohta, M. Koyama, Optuna: A next-generation hyperparameter optimization framework, In: Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2019, 2623–2631. http://dx.doi.org/10.1145/3292500.3330701
    [2] B. Baker, O. Gupta, R. Raskar, N. Naik, Accelerating neural architecture search using performance prediction, 2017, arXiv: 1705.10823.
    [3] B. Baker, O. Gupta, N. Naik, R. Raskar, Designing neural network architectures using reinforcement learning, 2017, arXiv: 1611.02167.
    [4] J. F. Barrett, N. Keat, Artifacts in CT: recognition and avoidance, RadioGraphics, 24 (2004), 1679–1691. http://dx.doi.org/10.1148/rg.246045065 doi: 10.1148/rg.246045065
    [5] J. Bergstra, R. Bardenet, Y. Bengio, B. Kégl, Algorithms for hyper-parameter optimization, In: Advances in Neural Information Processing Systems, 2011, 2546–2554.
    [6] J. Bergstra, Y. Bengio, Random search for hyper-parameter optimization, J. Mach. Learn. Res., 13 (2012), 281–305.
    [7] L. Bottou, F. E. Curtis, J. Nocedal, Optimization methods for large-scale machine learning, SIAM Rev., 60 (2018), 223–311. http://dx.doi.org/10.1137/16M1080173 doi: 10.1137/16M1080173
    [8] L. Breiman, Random forests, Machine Learning, 45 (2001), 5–32. http://dx.doi.org/10.1023/A:1010933404324 doi: 10.1023/A:1010933404324
    [9] C. J. C. Burges, A tutorial on support vector machines for pattern recognition, Data Mining and Knowledge Discovery, 2 (1998), 121–167. http://dx.doi.org/10.1023/A:1009715923555 doi: 10.1023/A:1009715923555
    [10] H. Cai, T. Chen, W. Zhang, Y. Yu, J. Wang, Efficient architecture search by network transformation, 2017, arXiv: 1707.04873.
    [11] T. Domhan, J. T. Springenberg, F. Hutter, Speeding Up Automatic Hyperparameter Optimization of Deep Neural Networks by Extrapolation of Learning Curves, In: IJCAI International Joint Conference on Artificial Intelligence, 2015, 3460–3468.
    [12] T. Elsken, J. H. Metzen, F. Hutter, Neural architecture search: a survey, J. Mach. Learn. Res., 20 (2019), 1997–2017.
    [13] T. Elsken, J.-H. Metzen, F. Hutter, Simple and efficient architecture search for convolutional neural networks, 2017, arXiv: 1711.04528.
    [14] G. Franchini, M. Galinier, M. Verucchi, Mise en abyme with artificial intelligence: how to predict the accuracy of NN, applied to hyper-parameter tuning, In: INNSBDDL 2019: Recent advances in big data and deep learning, Cham: Springer, 2020,286–295. http://dx.doi.org/10.1007/978-3-030-16841-4_30
    [15] D. E. Goldberg, Genetic algorithms in search, optimization, and machine learning, Addison Wesley Publishing Co. Inc., 1989.
    [16] T. Hospedales, A. Antoniou, P. Micaelli, A. Storkey, Meta-learning in neural networks: a survey, IEEE Transactions on Pattern Analysis and Machine Intelligence, in press. http://dx.doi.org/10.1109/TPAMI.2021.3079209
    [17] F. Hutter, L. Kotthoff, J. Vanschoren, Automatic machine learning: methods, systems, challenges, Cham: Springer, 2019. http://dx.doi.org/10.1007/978-3-030-05318-5
    [18] F. Hutter, H. Hoos, K. Leyton-Brown, Sequential model-based optimization for general algorithm configuration, In: LION 2011: Learning and Intelligent Optimization, Berlin, Heidelberg: Springer, 2011,507–523. http://dx.doi.org/10.1007/978-3-642-25566-3_40
    [19] D. P. Kingma, J. Ba, Adam: a method for stochastic optimization, 2017, arXiv: 1412.6980.
    [20] N. Loizou, P. Richtarik, Momentum and stochastic momentum for stochastic gradient, Newton, proximal point and subspace descent methods, Comput. Optim. Appl., 77 (2020), 653–710. http://dx.doi.org/10.1007/s10589-020-00220-z doi: 10.1007/s10589-020-00220-z
    [21] J. Mockus, V. Tiesis, A. Zilinskas, The application of Bayesian methods for seeking the extremum, In: Towards global optimisation, North-Holland, 2012, 117–129.
    [22] B. Shahriari, K. Swersky, Z. Wang, R. P. Adams, N. De Freitas, Taking the human out of the loop: A review of bayesian optimization, Proc. IEEE, 104 (2016), 148–175. http://dx.doi.org/10.1109/JPROC.2015.2494218 doi: 10.1109/JPROC.2015.2494218
    [23] S. Thrun, L. Pratt, Learning to learn: introduction and overview, In: Learning to learn, Boston, MA: Springer, 1998, 3–17. http://dx.doi.org/10.1007/978-1-4615-5529-2_1
    [24] C. Ying, A. Klein, E. Real, E. Christiansen, K. Murphy, F. Hutter, NAS-Bench-101: Towards reproducible neural architecture search, In: Proceedings of the 36–th International Conference on Machine Learning, 2019, 7105–7114.
    [25] Z. Zhong, J. Yan, W. Wei, J. Shao, C.-L. Liu, Practical block-wise neural network architecture generation, In: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018, 2423–2432. http://dx.doi.org/10.1109/CVPR.2018.00257
    [26] B. Zoph, Q. V. Le, Neural architecture search with reinforcement learning, 2017, arXiv: 1611.01578.
  • This article has been cited by:

    1. Humaira Kalsoom, Muhammad Idrees, Artion Kashuri, Muhammad Uzair Awan, Yu-Ming Chu, Some New $(p_1p_2,q_1q_2)$-Estimates of Ostrowski-type integral inequalities via n-polynomials s-type convexity, 2020, 5, 2473-6988, 7122, 10.3934/math.2020456
    2. Thabet Abdeljawad, Saima Rashid, A. A. El-Deeb, Zakia Hammouch, Yu-Ming Chu, Certain new weighted estimates proposing generalized proportional fractional operator in another sense, 2020, 2020, 1687-1847, 10.1186/s13662-020-02935-z
    3. Thabet Abdeljawad, Saima Rashid, Zakia Hammouch, İmdat İşcan, Yu-Ming Chu, Some new Simpson-type inequalities for generalized p-convex function on fractal sets with applications, 2020, 2020, 1687-1847, 10.1186/s13662-020-02955-9
    4. Shu-Bo Chen, Saima Rashid, Muhammad Aslam Noor, Rehana Ashraf, Yu-Ming Chu, A new approach on fractional calculus and probability density function, 2020, 5, 2473-6988, 7041, 10.3934/math.2020451
    5. Shuang-Shuang Zhou, Saima Rashid, Saima Parveen, Ahmet Ocak Akdemir, Zakia Hammouch, New computations for extended weighted functionals within the Hilfer generalized proportional fractional integral operators, 2021, 6, 2473-6988, 4507, 10.3934/math.2021267
    6. Shuang-Shuang Zhou, Saima Rashid, Muhammad Aslam Noor, Khalida Inayat Noor, Farhat Safdar, Yu-Ming Chu, New Hermite-Hadamard type inequalities for exponentially convex functions and applications, 2020, 5, 2473-6988, 6874, 10.3934/math.2020441
    7. Tie-Hong Zhao, Zai-Yin He, Yu-Ming Chu, On some refinements for inequalities involving zero-balanced hypergeometric function, 2020, 5, 2473-6988, 6479, 10.3934/math.2020418
    8. Shu-Bo Chen, Saima Rashid, Muhammad Aslam Noor, Zakia Hammouch, Yu-Ming Chu, New fractional approaches for n-polynomial P-convexity with applications in special function theory, 2020, 2020, 1687-1847, 10.1186/s13662-020-03000-5
    9. Muhammad Uzair Awan, Sadia Talib, Artion Kashuri, Muhammad Aslam Noor, Khalida Inayat Noor, Yu-Ming Chu, A new q-integral identity and estimation of its bounds involving generalized exponentially μ-preinvex functions, 2020, 2020, 1687-1847, 10.1186/s13662-020-03036-7
    10. Shyam S. Santra, Omar Bazighifan, Hijaz Ahmad, Yu-Ming Chu, Fateh Mebarek-Oudina, Second-Order Differential Equation: Oscillation Theorems and Applications, 2020, 2020, 1563-5147, 1, 10.1155/2020/8820066
    11. Saad Ihsan Butt, Muhammad Umar, Saima Rashid, Ahmet Ocak Akdemir, Yu-Ming Chu, New Hermite–Jensen–Mercer-type inequalities via k-fractional integrals, 2020, 2020, 1687-1847, 10.1186/s13662-020-03093-y
    12. Imran Abbas Baloch, Aqeel Ahmad Mughal, Yu-Ming Chu, Absar Ul Haq, Manuel De La Sen, A variant of Jensen-type inequality and related results for harmonic convex functions, 2020, 5, 2473-6988, 6404, 10.3934/math.2020412
    13. Artion Kashuri, Sajid Iqbal, Saad Ihsan Butt, Jamshed Nasir, Kottakkaran Sooppy Nisar, Thabet Abdeljawad, Basil K. Papadopoulos, Trapezium-Type Inequalities for k -Fractional Integral via New Exponential-Type Convexity and Their Applications, 2020, 2020, 2314-4785, 1, 10.1155/2020/8672710
    14. Maysaa Al Qurashi, Saima Rashid, Sobia Sultana, Hijaz Ahmad, Khaled A. Gepreel, New formulation for discrete dynamical type inequalities via h-discrete fractional operator pertaining to nonsingular kernel, 2021, 18, 1551-0018, 1794, 10.3934/mbe.2021093
    15. Yu‐ming Chu, Saima Rashid, Jagdev Singh, A novel comprehensive analysis on generalized harmonically ψ ‐convex with respect to Raina's function on fractal set with applications , 2021, 0170-4214, 10.1002/mma.7346
    16. Chahn Yong Jung, Ghulam Farid, Hafsa Yasmeen, Yu-Pei Lv, Josip Pečarić, Refinements of some fractional integral inequalities for refined (α,hm)-convex function, 2021, 2021, 1687-1847, 10.1186/s13662-021-03544-0
    17. Mubashir Qayyum, Efaza Ahmad, Sidra Afzal, Tanveer Sajid, Wasim Jamshed, Awad Musa, El Sayed M. Tag El Din, Amjad Iqbal, Fractional analysis of unsteady squeezing flow of Casson fluid via homotopy perturbation method, 2022, 12, 2045-2322, 10.1038/s41598-022-23239-0
    18. Saima Rashid, Aasma Khalid, Omar Bazighifan, Georgia Irina Oros, New Modifications of Integral Inequalities via ℘-Convexity Pertaining to Fractional Calculus and Their Applications, 2021, 9, 2227-7390, 1753, 10.3390/math9151753
    19. Ahmed A. El‐Deeb, Novel dynamic Hardy‐type inequalities on time scales, 2023, 46, 0170-4214, 5299, 10.1002/mma.8834
    20. Ahmed A. El-Deeb, Dumitru Baleanu, Nehad Ali Shah, Ahmed Abdeldaim, On some dynamic inequalities of Hilbert's-type on time scales, 2023, 8, 2473-6988, 3378, 10.3934/math.2023174
    21. Naqash Sarfraz, Muhammad Aslam, Mir Zaman, Fahd Jarad, Estimates for p-adic fractional integral operator and its commutators on p-adic Morrey–Herz spaces, 2022, 2022, 1029-242X, 10.1186/s13660-022-02829-6
    22. Wei Liu, Fangfang Shi, Guoju Ye, Dafang Zhao, Some inequalities for cr-log-h-convex functions, 2022, 2022, 1029-242X, 10.1186/s13660-022-02900-2
    23. JIAN-GEN LIU, XIAO-JUN YANG, YI-YING FENG, LU-LU GENG, ON THE GENERALIZED WEIGHTED CAPUTO-TYPE DIFFERENTIAL OPERATOR, 2022, 30, 0218-348X, 10.1142/S0218348X22500323
    24. Waewta Luangboon, Kamsing Nonlaopon, Jessada Tariboon, Sotiris K. Ntouyas, Simpson- and Newton-Type Inequalities for Convex Functions via (p,q)-Calculus, 2021, 9, 2227-7390, 1338, 10.3390/math9121338
    25. Shasha Li, Ghulam Farid, Atiq Ur Rehman, Hafsa Yasmeen, Ahmet Ocak Akdemir, Fractional Versions of Hadamard-Type Inequalities for Strongly Exponentially $(\alpha,h-m)$-Convex Functions, 2021, 2021, 2314-4785, 1, 10.1155/2021/2555974
    26. Ahmed A. El-Deeb, On dynamic inequalities in two independent variables on time scales and their applications for boundary value problems, 2022, 2022, 1687-2770, 10.1186/s13661-022-01636-8
    27. Artion Kashuri, Soubhagya Kumar Sahoo, Bibhakar Kodamasingh, Muhammad Tariq, Ahmed A. Hamoud, Homan Emadifar, Faraidun K. Hamasalh, Nedal M. Mohammed, Masoumeh Khademi, Guotao Wang, Integral Inequalities of Integer and Fractional Orders for n-Polynomial Harmonically tgs-Convex Functions and Their Applications, 2022, 2022, 2314-4785, 1, 10.1155/2022/2493944
    28. MAYSAA AL-QURASHI, SAIMA RASHID, YELIZ KARACA, ZAKIA HAMMOUCH, DUMITRU BALEANU, YU-MING CHU, ACHIEVING MORE PRECISE BOUNDS BASED ON DOUBLE AND TRIPLE INTEGRAL AS PROPOSED BY GENERALIZED PROPORTIONAL FRACTIONAL OPERATORS IN THE HILFER SENSE, 2021, 29, 0218-348X, 2140027, 10.1142/S0218348X21400272
    29. Saima Rashid, Zakia Hammouch, Rehana Ashraf, Yu-Ming Chu, New Computation of Unified Bounds via a More General Fractional Operator Using Generalized Mittag–Leffler Function in the Kernel, 2021, 126, 1526-1506, 359, 10.32604/cmes.2021.011782
    30. YunPeng Chang, LiangJuan Yu, LinQi Sun, HuangZhi Xia, $L\log L$ Type Estimates for Commutators of Fractional Integral Operators on the p-Adic Vector Space, 2024, 18, 1661-8254, 10.1007/s11785-024-01514-4
    31. Fangfang Shi, Guoju Ye, Wei Liu, Dafang Zhao, A class of nonconvex fuzzy optimization problems under granular differentiability concept, 2023, 211, 03784754, 430, 10.1016/j.matcom.2023.04.021
    32. Amit Prakash, Vijay Verma, Dumitru Baleanu, Two Novel Methods for Fractional Nonlinear Whitham–Broer–Kaup Equations Arising in Shallow Water, 2023, 9, 2349-5103, 10.1007/s40819-023-01497-4
    33. Umair Manzoor, Hassan Waqas, Taseer Muhammad, Hamzah Naeem, Ahmed Alshehri, Characteristics of hybrid nanofluid induced by curved surface with the consequences of thermal radiation: an entropy optimization, 2023, 1745-5030, 1, 10.1080/17455030.2023.2226251
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
