Research article Special Issues

How artificial intelligence reduces human bias in diagnostics?

  • Received: 10 October 2024; Revised: 07 December 2024; Accepted: 07 February 2025; Published: 12 February 2025
  • Accurate diagnostics of neurological disorders often rely on behavioral assessments, yet traditional methods rooted in manual observation and scoring are labor-intensive, subjective, and prone to human bias. Artificial Intelligence (AI), particularly Deep Neural Networks (DNNs), offers transformative potential to overcome these limitations by automating behavioral analyses and reducing biases in diagnostic practices. DNNs excel at processing complex, high-dimensional data, allowing for the detection of subtle behavioral patterns critical for diagnosing neurological disorders such as Parkinson's disease, stroke, or spinal cord injuries. This review explores how AI-driven approaches can mitigate observer biases, emphasizing the use of explainable DNNs to enhance objectivity in diagnostics. Explainable AI techniques identify which features in the data a DNN uses to make its decisions; this data-driven perspective can uncover novel insights that may elude human experts. For instance, explainable DNN techniques have revealed previously unnoticed diagnostic markers, such as posture changes, which can enhance the sensitivity of behavioral diagnostic assessments. Furthermore, by providing interpretable outputs, explainable DNNs build trust in AI-driven systems and support the development of unbiased, evidence-based diagnostic tools. This review also discusses challenges such as data quality, model interpretability, and ethical considerations. By illustrating the role of AI in reshaping diagnostic methods, this paper highlights its potential to revolutionize clinical practices, paving the way for more objective and reliable assessments of neurological disorders.

    Citation: Artur Luczak. How artificial intelligence reduces human bias in diagnostics?[J]. AIMS Bioengineering, 2025, 12(1): 69-89. doi: 10.3934/bioeng.2025004




    Fractional calculus has had a great impact over different time ranges, owing to a diversity of applications that have contributed to several fields of technical science and engineering [1,2,3,4,5,6,7,8,9,10,11,12]. One of the principal reasons behind the popularity of the area is that fractional-order differentiation and integration are more effective tools for describing real-world problems than integer-order ones. Various studies in the literature on distinct fractional operators, such as the classical Riemann-Liouville, Caputo, Katugampola, Hadamard, and Marchaud versions, have shown versatility in modeling and control applications across various disciplines. However, such forms of fractional derivatives may not describe the dynamic performance accurately; hence, many authors have sought new fractional derivatives and integrals whose kernels depend on another function, which expands the range of the definition [13,14]. Furthermore, models based on these fractional operators provide results that compare favorably with integer-order models [15,16,17,18,19,20,21,22,23,24,25,26,27].

    The derivatives in this calculus initially seemed complicated and lost some of the basic properties that ordinary derivatives enjoy, such as the product rule and the chain rule; however, the semigroup properties of these operators behave well in some cases. Recently, the authors in [28] defined a new, well-behaved, simple derivative called the "conformable fractional derivative", which depends only on the basic limit definition of the derivative. It defines the derivative of higher order (i.e., order $\delta>1$) as well as the integral of order $0<\delta\le 1$ only, it satisfies the product rule and a mean value theorem, and it allows some (conformable) differential equations to be solved, in which the fractional exponential function $e^{\vartheta^{\delta}/\delta}$ plays an important role. Inequalities and their applications play a crucial role in the literature of pure and applied mathematics [29,30,31,32,33,34,35,36,37]. An assortment of distinct classical variants and their modifications has been built up by using classical fractional operators.

    Convexity and its applications appear in almost every field of mathematics due to their importance in several areas of science and technology, in particular in nonlinear programming and optimization theory. By utilizing the idea of convexity, numerous variants have been derived by researchers, for example the Hardy, Opial, Ostrowski, and Jensen inequalities, and the most distinguished one is the Hermite-Hadamard inequality [38,39,40,41].

    Let $I\subseteq\mathbb{R}$ be an interval and $Q:I\to\mathbb{R}$ be a convex function. Then the double inequality

    $$(l_2-l_1)\,Q\!\left(\frac{l_1+l_2}{2}\right)\le\int_{l_1}^{l_2}Q(z)\,dz\le(l_2-l_1)\,\frac{Q(l_1)+Q(l_2)}{2} \tag{1.1}$$

    holds for all $l_1,l_2\in I$ with $l_1\le l_2$. Clearly, if $Q$ is concave on $I$, then one has the reverse of inequality (1.1). By taking into account fractional integral operators, several lower and upper bounds for the mean value of a convex function can be obtained by utilizing inequality (1.1).
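    As a quick illustration of (1.1), the following Python sketch (not part of the original paper; the convex test function $Q(z)=z^{2}$ and the interval are arbitrary choices) evaluates the three members of the double inequality numerically.

```python
# Numerical sanity check of the Hermite-Hadamard inequality (1.1).
from scipy.integrate import quad

def hermite_hadamard_terms(Q, l1, l2):
    """Return the three members of (1.1) for a function Q on [l1, l2]."""
    left = (l2 - l1) * Q((l1 + l2) / 2)
    middle, _ = quad(Q, l1, l2)            # integral of Q over [l1, l2]
    right = (l2 - l1) * (Q(l1) + Q(l2)) / 2
    return left, middle, right

left, middle, right = hermite_hadamard_terms(lambda z: z ** 2, 1.0, 3.0)
print(left, middle, right)                 # 8.0 <= 8.666... <= 10.0
assert left <= middle <= right
```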

    Exponentially convex functions have emerged as a significant new class of convex functions with potential applications in technology, data science, and statistics. In [42], Bernstein introduced the concept of exponentially convex functions in the context of covariance formation; the idea was later extended by inserting the condition of $r$-convexity [43]. Following this tendency, Jakšetić and Pečarić introduced various kinds of exponentially convex functions in [44] and contemplated applications to Euler-Radau expansions and Stolarsky means. Our aim is to utilize the exponential convexity of functions, as well as of the absolute values of their derivatives, in order to establish estimates for the conformable fractional integrals introduced by Abdeljawad [45] and Jarad et al. [46].

    Following the above line of research, we present a novel technique for establishing new generalizations of Hermite-Hadamard inequalities that involve exponentially tgs-convex functions and conformable fractional operator techniques. The main point is that our results, which are more general and efficient, are obtained via fractional calculus techniques. In addition, they provide estimates of Hermite-Hadamard type for exponentially tgs-convex functions. We also investigate applications of the two considered conformable fractional operators to exponentially tgs-convex functions and fractional calculus; the resulting examples indicate that our results extend several related ones.

    Before coming to the main results, we provide some significant definitions, theorems and properties of fractional calculus in order to establish a mathematically sound theory that will serve the purpose of the current article.

    Awan et al. [47] proposed a new class of functions called exponentially convex functions.

    Definition 2.1. (See [47]) A positive real-valued function $Q:K\subseteq\mathbb{R}\to(0,\infty)$ is said to be exponentially convex on $K$ if the inequality

    $$Q(\vartheta l_1+(1-\vartheta)l_2)\le\vartheta\frac{Q(l_1)}{e^{\alpha l_1}}+(1-\vartheta)\frac{Q(l_2)}{e^{\alpha l_2}} \tag{2.1}$$

    holds for all $l_1,l_2\in K$, $\alpha\in\mathbb{R}$ and $\vartheta\in[0,1]$.

    Now, we introduce a novel class of convex functions, known as exponentially tgs-convex functions.

    Definition 2.2. A positive real-valued function $Q:K\subseteq\mathbb{R}\to(0,\infty)$ is said to be exponentially tgs-convex on $K$ if the inequality

    $$Q(\vartheta l_1+(1-\vartheta)l_2)\le\vartheta(1-\vartheta)\left[\frac{Q(l_1)}{e^{\alpha l_1}}+\frac{Q(l_2)}{e^{\alpha l_2}}\right] \tag{2.2}$$

    holds for all $l_1,l_2\in K$, $\alpha\in\mathbb{R}$ and $\vartheta\in[0,1]$.
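    Condition (2.2) is easy to test numerically. The sketch below (our own illustration, not from the paper; the helper name and the sample function are arbitrary) checks the defining inequality on a grid of points and parameters; whether a particular $Q$ passes depends strongly on $Q$, $\alpha$ and the interval.

```python
# Brute-force grid check of the exponentially tgs-convexity condition (2.2).
import numpy as np

def is_exp_tgs_convex(Q, l1, l2, alpha, num=60, eps=1e-12):
    """Check Q(t*x + (1-t)*y) <= t*(1-t)*[Q(x)/e^(alpha*x) + Q(y)/e^(alpha*y)]
    for x, y on a grid in [l1, l2] and t on a grid in (0, 1)."""
    xs = np.linspace(l1, l2, num)
    ts = np.linspace(0.01, 0.99, 99)
    for x in xs:
        for y in xs:
            rhs = ts * (1 - ts) * (Q(x) * np.exp(-alpha * x) + Q(y) * np.exp(-alpha * y))
            lhs = Q(ts * x + (1 - ts) * y)
            if np.any(lhs > rhs + eps):
                return False
    return True

# Example: report whether Q(z) = z**2 satisfies (2.2) on [1, 2] with alpha = 0.
print(is_exp_tgs_convex(lambda z: z ** 2, 1.0, 2.0, alpha=0.0))
```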

    The conformable fractional integral operator was introduced by Abdeljawad [45].

    Definition 2.3. (See [45]) Let $\rho\in(n,n+1]$ and $\delta=\rho-n$. Then the left- and right-sided conformable fractional integrals of order $\rho>0$ are defined by

    $$J_{\rho}^{l_1^{+}}Q(z)=\frac{1}{n!}\int_{l_1}^{z}(z-\vartheta)^{n}(\vartheta-l_1)^{\delta-1}Q(\vartheta)\,d\vartheta \tag{2.3}$$

    and

    $$J_{\rho}^{l_2^{-}}Q(z)=\frac{1}{n!}\int_{z}^{l_2}(\vartheta-z)^{n}(l_2-\vartheta)^{\delta-1}Q(\vartheta)\,d\vartheta. \tag{2.4}$$
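    For concreteness, the operators (2.3)-(2.4) can be evaluated by ordinary quadrature. The following Python sketch (our own helper names; it assumes the kernel exponent $\delta-1$ with $\delta=\rho-n$, as in Definition 2.3) does this and checks the simple case $\rho=n+1$, where the left integral reduces to the $(n+1)$-fold Cauchy integral.

```python
# Quadrature evaluation of the conformable fractional integrals (2.3)-(2.4).
from math import factorial
from scipy.integrate import quad

def J_left(Q, l1, z, rho, n):
    """Left-sided conformable fractional integral J_rho^{l1+} Q(z), eq. (2.3)."""
    delta = rho - n                       # delta = rho - n lies in (0, 1]
    val, _ = quad(lambda t: (z - t) ** n * (t - l1) ** (delta - 1) * Q(t), l1, z)
    return val / factorial(n)

def J_right(Q, l2, z, rho, n):
    """Right-sided conformable fractional integral J_rho^{l2-} Q(z), eq. (2.4)."""
    delta = rho - n
    val, _ = quad(lambda t: (t - z) ** n * (l2 - t) ** (delta - 1) * Q(t), z, l2)
    return val / factorial(n)

# Sanity check: for rho = n + 1 and Q = 1 the left integral equals (z - l1)^(n+1)/(n+1)!.
n, rho, l1, z = 2, 3.0, 0.0, 2.0
print(J_left(lambda t: 1.0, l1, z, rho, n), (z - l1) ** (n + 1) / factorial(n + 1))
```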

    Next, we recall the following fractional integral operators introduced by Jarad et al. [46].

    Definition 2.4. (See [46]) Let $\delta\in\mathbb{C}$ with $\Re(\delta)>0$. Then the left- and right-sided fractional conformable integral operators of order $\rho>0$ are given by

    $${}^{\rho,\delta}J_{l_1^{+}}Q(z)=\frac{1}{\Gamma(\delta)}\int_{l_1}^{z}\left(\frac{(z-l_1)^{\rho}-(\vartheta-l_1)^{\rho}}{\rho}\right)^{\delta-1}\frac{Q(\vartheta)}{(\vartheta-l_1)^{1-\rho}}\,d\vartheta \tag{2.5}$$

    and

    $${}^{\rho,\delta}J_{l_2^{-}}Q(z)=\frac{1}{\Gamma(\delta)}\int_{z}^{l_2}\left(\frac{(l_2-z)^{\rho}-(l_2-\vartheta)^{\rho}}{\rho}\right)^{\delta-1}\frac{Q(\vartheta)}{(l_2-\vartheta)^{1-\rho}}\,d\vartheta. \tag{2.6}$$
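    A quadrature sketch analogous to the previous one can be used for (2.5)-(2.6); the helper names below are our own, and the sanity check uses the fact that for $\rho=1$ the operators reduce to Riemann-Liouville integrals of order $\delta$.

```python
# Quadrature evaluation of the fractional conformable operators (2.5)-(2.6) of Jarad et al.
from math import gamma
from scipy.integrate import quad

def J_jarad_left(Q, l1, z, rho, delta):
    """Left-sided operator of eq. (2.5)."""
    f = lambda t: (((z - l1) ** rho - (t - l1) ** rho) / rho) ** (delta - 1) \
                  * Q(t) / (t - l1) ** (1 - rho)
    val, _ = quad(f, l1, z)
    return val / gamma(delta)

def J_jarad_right(Q, l2, z, rho, delta):
    """Right-sided operator of eq. (2.6)."""
    f = lambda t: (((l2 - z) ** rho - (l2 - t) ** rho) / rho) ** (delta - 1) \
                  * Q(t) / (l2 - t) ** (1 - rho)
    val, _ = quad(f, z, l2)
    return val / gamma(delta)

# Sanity check: for rho = 1 and Q = 1, the left operator gives (z - l1)^delta / Gamma(delta + 1).
l1, z, delta = 0.0, 2.0, 1.5
print(J_jarad_left(lambda t: 1.0, l1, z, rho=1.0, delta=delta),
      (z - l1) ** delta / gamma(delta + 1))
```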

    We also recall the beta and incomplete beta functions,

    $$B(l_1,l_2)=\int_{0}^{1}\vartheta^{l_1-1}(1-\vartheta)^{l_2-1}\,d\vartheta,$$
    $$B_{v}(l_1,l_2)=\int_{0}^{v}\vartheta^{l_1-1}(1-\vartheta)^{l_2-1}\,d\vartheta,\qquad v\in[0,1].$$

    Further, the following relationships hold between the classical and incomplete beta functions:

    $$B(l_1,l_2)=B_{v}(l_1,l_2)+B_{1-v}(l_2,l_1),$$
    $$B_{v}(l_1+1,l_2)=\frac{l_1B_{v}(l_1,l_2)-v^{l_1}(1-v)^{l_2}}{l_1+l_2}$$

    and

    $$B_{v}(l_1,l_2+1)=\frac{l_2B_{v}(l_1,l_2)+v^{l_1}(1-v)^{l_2}}{l_1+l_2}.$$
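    These identities are easy to confirm numerically; the sketch below (our own illustration) checks the splitting identity by direct quadrature, and the two recurrences can be checked in the same way.

```python
# Numerical check of the incomplete beta splitting identity stated above.
from scipy.integrate import quad
from scipy.special import beta as B        # complete beta function B(l1, l2)

def B_incomplete(v, a, b):
    """Incomplete beta function B_v(a, b) = int_0^v t^(a-1) * (1-t)^(b-1) dt."""
    val, _ = quad(lambda t: t ** (a - 1) * (1 - t) ** (b - 1), 0, v)
    return val

a, b, v = 3.0, 2.5, 0.3
print(B(a, b), B_incomplete(v, a, b) + B_incomplete(1 - v, b, a))   # the two values agree
```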

    Throughout the article, let $I=[l_1,l_2]$ be an interval in the real line $\mathbb{R}$. In this section, we establish some integral versions of the Hermite-Hadamard inequality for exponentially tgs-convex functions via conformable fractional integrals.

    Theorem 3.1. Let $\rho\in(n,n+1]$ with $\rho>0$, and let $Q:I\subseteq\mathbb{R}\to\mathbb{R}$ be an exponentially tgs-convex function such that $Q\in L_1([l_1,l_2])$. Then the following inequalities hold:

    $$4\,\frac{\Gamma(\rho-n)}{\Gamma(\rho+1)}\,Q\!\left(\frac{l_1+l_2}{2}\right)\le\frac{1}{(l_2-l_1)^{\rho}}\left[\frac{J_{\rho}^{l_1^{+}}Q(l_2)}{e^{\alpha l_2}}+\frac{J_{\rho}^{l_2^{-}}Q(l_1)}{e^{\alpha l_1}}\right]\le 2(n+1)\,\frac{\Gamma(\rho-n+1)}{\Gamma(\rho+3)}\left(\frac{Q(l_1)}{e^{\alpha l_1}}+\frac{Q(l_2)}{e^{\alpha l_2}}\right). \tag{3.1}$$

    Proof. By the exponentially tgs-convexity of $Q$ (with $\vartheta=\tfrac{1}{2}$), we have

    $$Q\!\left(\frac{x+y}{2}\right)\le\frac{1}{4}\left(\frac{Q(x)}{e^{\alpha x}}+\frac{Q(y)}{e^{\alpha y}}\right). \tag{3.2}$$

    Letting $x=\vartheta l_1+(1-\vartheta)l_2$ and $y=(1-\vartheta)l_1+\vartheta l_2$, we get

    $$4\,Q\!\left(\frac{l_1+l_2}{2}\right)\le\frac{Q(\vartheta l_1+(1-\vartheta)l_2)}{e^{\alpha(\vartheta l_1+(1-\vartheta)l_2)}}+\frac{Q(\vartheta l_2+(1-\vartheta)l_1)}{e^{\alpha((1-\vartheta)l_1+\vartheta l_2)}}. \tag{3.3}$$

    Multiplying (3.3) by $\frac{1}{n!}\vartheta^{n}(1-\vartheta)^{\rho-n-1}$ with $\vartheta\in(0,1)$, $\rho>0$, and then integrating the resulting estimate with respect to $\vartheta$ over $[0,1]$, we find

    $$\frac{4}{n!}\,Q\!\left(\frac{l_1+l_2}{2}\right)\int_{0}^{1}\vartheta^{n}(1-\vartheta)^{\rho-n-1}\,d\vartheta\le\frac{1}{n!}\int_{0}^{1}\vartheta^{n}(1-\vartheta)^{\rho-n-1}\frac{Q(\vartheta l_1+(1-\vartheta)l_2)}{e^{\alpha(\vartheta l_1+(1-\vartheta)l_2)}}\,d\vartheta+\frac{1}{n!}\int_{0}^{1}\vartheta^{n}(1-\vartheta)^{\rho-n-1}\frac{Q(\vartheta l_2+(1-\vartheta)l_1)}{e^{\alpha((1-\vartheta)l_1+\vartheta l_2)}}\,d\vartheta=I_1+I_2. \tag{3.4}$$

    By setting $u=\vartheta l_1+(1-\vartheta)l_2$, we have

    $$I_1=\frac{1}{n!}\int_{0}^{1}\vartheta^{n}(1-\vartheta)^{\rho-n-1}\frac{Q(\vartheta l_1+(1-\vartheta)l_2)}{e^{\alpha(\vartheta l_1+(1-\vartheta)l_2)}}\,d\vartheta=\frac{1}{n!\,(l_2-l_1)^{\rho}}\int_{l_1}^{l_2}(l_2-u)^{n}(u-l_1)^{\rho-n-1}\frac{Q(u)}{e^{\alpha u}}\,du=\frac{1}{(l_2-l_1)^{\rho}}\frac{J_{\rho}^{l_1^{+}}Q(l_2)}{e^{\alpha l_2}}. \tag{3.5}$$

    Analogously, by setting $v=\vartheta l_2+(1-\vartheta)l_1$, we have

    $$I_2=\frac{1}{n!}\int_{0}^{1}\vartheta^{n}(1-\vartheta)^{\rho-n-1}\frac{Q(\vartheta l_2+(1-\vartheta)l_1)}{e^{\alpha(\vartheta l_2+(1-\vartheta)l_1)}}\,d\vartheta=\frac{1}{n!\,(l_2-l_1)^{\rho}}\int_{l_1}^{l_2}(v-l_1)^{n}(l_2-v)^{\rho-n-1}\frac{Q(v)}{e^{\alpha v}}\,dv=\frac{1}{(l_2-l_1)^{\rho}}\frac{J_{\rho}^{l_2^{-}}Q(l_1)}{e^{\alpha l_1}}. \tag{3.6}$$

    Thus, using (3.5) and (3.6) in (3.4), we get the first inequality of (3.1).

    On the other hand, by exponentially tgs-convexity,

    $$Q(\vartheta l_1+(1-\vartheta)l_2)\le\vartheta(1-\vartheta)\left(\frac{Q(l_1)}{e^{\alpha l_1}}+\frac{Q(l_2)}{e^{\alpha l_2}}\right)$$

    and

    $$Q(\vartheta l_2+(1-\vartheta)l_1)\le\vartheta(1-\vartheta)\left(\frac{Q(l_1)}{e^{\alpha l_1}}+\frac{Q(l_2)}{e^{\alpha l_2}}\right).$$

    Adding these inequalities gives

    $$Q(\vartheta l_1+(1-\vartheta)l_2)+Q(\vartheta l_2+(1-\vartheta)l_1)\le 2\vartheta(1-\vartheta)\left(\frac{Q(l_1)}{e^{\alpha l_1}}+\frac{Q(l_2)}{e^{\alpha l_2}}\right). \tag{3.7}$$

    Multiplying (3.7) by $\frac{1}{n!}\vartheta^{n}(1-\vartheta)^{\rho-n-1}$ with $\vartheta\in(0,1)$, $\rho>0$, and then integrating the resulting inequality with respect to $\vartheta$ over $[0,1]$, we get

    $$\frac{1}{(l_2-l_1)^{\rho}}\left[\frac{J_{\rho}^{l_1^{+}}Q(l_2)}{e^{\alpha l_2}}+\frac{J_{\rho}^{l_2^{-}}Q(l_1)}{e^{\alpha l_1}}\right]\le 2(n+1)\,\frac{\Gamma(\rho-n+1)}{\Gamma(\rho+3)}\left(\frac{Q(l_1)}{e^{\alpha l_1}}+\frac{Q(l_2)}{e^{\alpha l_2}}\right), \tag{3.8}$$

    which is the required result.

    Some special cases of the above theorem are stated as follows.

    Corollary 3.1. Choosing $\alpha=0$, Theorem 3.1 reduces to the new result

    $$4\,\frac{\Gamma(\rho-n)}{\Gamma(\rho+1)}\,Q\!\left(\frac{l_1+l_2}{2}\right)\le\frac{1}{(l_2-l_1)^{\rho}}\left[J_{\rho}^{l_1^{+}}Q(l_2)+J_{\rho}^{l_2^{-}}Q(l_1)\right]\le 2(n+1)\,\frac{\Gamma(\rho-n+1)}{\Gamma(\rho+3)}\left(Q(l_1)+Q(l_2)\right).$$

    Remark 3.1. Choosing ρ=n+1 and α=0, then Theorem 3.1 reduces to Theorem 3.1 in [19].
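    The three expressions appearing in Corollary 3.1 can be computed numerically for any given $Q$; the sketch below (our own illustration, reusing quadrature versions of (2.3)-(2.4); the sample $Q$ and parameters are arbitrary and are not claimed to be exponentially tgs-convex) is convenient for experimenting with the estimate.

```python
# Evaluate the three expressions of Corollary 3.1 (the alpha = 0 case of Theorem 3.1).
from math import factorial, gamma
from scipy.integrate import quad

def J_left(Q, l1, z, rho, n):
    """Left-sided conformable fractional integral (2.3)."""
    val, _ = quad(lambda t: (z - t) ** n * (t - l1) ** (rho - n - 1) * Q(t), l1, z)
    return val / factorial(n)

def J_right(Q, l2, z, rho, n):
    """Right-sided conformable fractional integral (2.4)."""
    val, _ = quad(lambda t: (t - z) ** n * (l2 - t) ** (rho - n - 1) * Q(t), z, l2)
    return val / factorial(n)

def corollary_3_1_terms(Q, l1, l2, rho, n):
    left = 4 * gamma(rho - n) / gamma(rho + 1) * Q((l1 + l2) / 2)
    middle = (J_left(Q, l1, l2, rho, n) + J_right(Q, l2, l1, rho, n)) / (l2 - l1) ** rho
    right = 2 * (n + 1) * gamma(rho - n + 1) / gamma(rho + 3) * (Q(l1) + Q(l2))
    return left, middle, right

print(corollary_3_1_terms(lambda z: z ** 2, 1.0, 2.0, rho=1.5, n=1))
```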

    Our next result is the following lemma, which plays a key role in proving the results that follow.

    Lemma 4.1. Let $\rho\in(n,n+1]$ with $\rho>0$, and let $Q:I\subseteq\mathbb{R}\to\mathbb{R}$ be a differentiable function on $I^{\circ}$ (the interior of $I$) with $l_1<l_2$ such that $Q'\in L_1([l_1,l_2])$. Then the following identity holds:

    $$B(n+1,\rho-n)\left(\frac{Q(l_1)+Q(l_2)}{2}\right)-\frac{n!}{2(l_2-l_1)^{\rho}}\left[J_{\rho}^{l_1^{+}}Q(l_2)+J_{\rho}^{l_2^{-}}Q(l_1)\right]=\frac{l_2-l_1}{2}\int_{0}^{1}\left(B_{1-\vartheta}(n+1,\rho-n)-B_{\vartheta}(n+1,\rho-n)\right)Q'(\vartheta l_1+(1-\vartheta)l_2)\,d\vartheta. \tag{4.1}$$

    Proof. It suffices to note that

    $$\int_{0}^{1}\left(B_{1-\vartheta}(n+1,\rho-n)-B_{\vartheta}(n+1,\rho-n)\right)Q'(\vartheta l_1+(1-\vartheta)l_2)\,d\vartheta=\int_{0}^{1}B_{1-\vartheta}(n+1,\rho-n)Q'(\vartheta l_1+(1-\vartheta)l_2)\,d\vartheta-\int_{0}^{1}B_{\vartheta}(n+1,\rho-n)Q'(\vartheta l_1+(1-\vartheta)l_2)\,d\vartheta=S_1-S_2. \tag{4.2}$$

    Then, by integration by parts, we have

    $$S_1=\int_{0}^{1}B_{1-\vartheta}(n+1,\rho-n)Q'(\vartheta l_1+(1-\vartheta)l_2)\,d\vartheta=\int_{0}^{1}\left(\int_{0}^{1-\vartheta}v^{n}(1-v)^{\rho-n-1}\,dv\right)Q'(\vartheta l_1+(1-\vartheta)l_2)\,d\vartheta$$
    $$=\frac{1}{l_2-l_1}B(n+1,\rho-n)Q(l_2)-\frac{1}{l_2-l_1}\int_{0}^{1}(1-\vartheta)^{n}\vartheta^{\rho-n-1}Q(\vartheta l_1+(1-\vartheta)l_2)\,d\vartheta$$
    $$=\frac{1}{l_2-l_1}B(n+1,\rho-n)Q(l_2)-\frac{1}{(l_2-l_1)^{\rho+1}}\int_{l_1}^{l_2}(z-l_1)^{n}(l_2-z)^{\rho-n-1}Q(z)\,dz$$
    $$=\frac{1}{l_2-l_1}B(n+1,\rho-n)Q(l_2)-\frac{n!}{(l_2-l_1)^{\rho+1}}J_{\rho}^{l_2^{-}}Q(l_1). \tag{4.3}$$

    Analogously,

    $$S_2=\int_{0}^{1}B_{\vartheta}(n+1,\rho-n)Q'(\vartheta l_1+(1-\vartheta)l_2)\,d\vartheta=\int_{0}^{1}\left(\int_{0}^{\vartheta}v^{n}(1-v)^{\rho-n-1}\,dv\right)Q'(\vartheta l_1+(1-\vartheta)l_2)\,d\vartheta$$
    $$=-\frac{1}{l_2-l_1}B(n+1,\rho-n)Q(l_1)+\frac{1}{l_2-l_1}\int_{0}^{1}\vartheta^{n}(1-\vartheta)^{\rho-n-1}Q(\vartheta l_1+(1-\vartheta)l_2)\,d\vartheta$$
    $$=-\frac{1}{l_2-l_1}B(n+1,\rho-n)Q(l_1)+\frac{1}{(l_2-l_1)^{\rho+1}}\int_{l_1}^{l_2}(l_2-z)^{n}(z-l_1)^{\rho-n-1}Q(z)\,dz$$
    $$=-\frac{1}{l_2-l_1}B(n+1,\rho-n)Q(l_1)+\frac{n!}{(l_2-l_1)^{\rho+1}}J_{\rho}^{l_1^{+}}Q(l_2). \tag{4.4}$$

    Substituting the values of $S_1$ and $S_2$ in (4.2) and multiplying both sides by $\frac{l_2-l_1}{2}$, we get (4.1).

    For the sake of simplicity, we use the following notation:

    $$\Upsilon_{Q}(\rho;B;n;l_1,l_2)=B(n+1,\rho-n)\left(\frac{Q(l_1)+Q(l_2)}{2}\right)-\frac{n!}{2(l_2-l_1)^{\rho}}\left[J_{\rho}^{l_1^{+}}Q(l_2)+J_{\rho}^{l_2^{-}}Q(l_1)\right].$$

    Theorem 4.2. Let $\rho\in(n,n+1]$ with $\rho>0$, and let $Q:I\subseteq\mathbb{R}\to\mathbb{R}$ be a differentiable function on $I^{\circ}$ with $l_1<l_2$ such that $Q'\in L_1([l_1,l_2])$. If $|Q'|^{r}$, with $r\ge 1$, is an exponentially tgs-convex function, then the following inequality holds:

    $$\left|\Upsilon_{Q}(\rho;B;n;l_1,l_2)\right|\le\frac{l_2-l_1}{2}\Big(B(n+1,\rho-n+1)-B(n+1,\rho-n)+B(n+2,\rho-n)\Big)^{1-\frac{1}{r}}\left(\frac{e^{\alpha rl_2}|Q'(l_1)|^{r}+e^{\alpha rl_1}|Q'(l_2)|^{r}}{6\,e^{\alpha rl_1}e^{\alpha rl_2}}\right)^{\frac{1}{r}}. \tag{4.5}$$

    Proof. Using the exponentially tgs-convexity of $|Q'|^{r}$, Lemma 4.1, and the power-mean (Hölder) inequality, one obtains

    $$\left|\Upsilon_{Q}(\rho;B;n;l_1,l_2)\right|=\left|\frac{l_2-l_1}{2}\int_{0}^{1}\left(B_{1-\vartheta}(n+1,\rho-n)-B_{\vartheta}(n+1,\rho-n)\right)Q'(\vartheta l_1+(1-\vartheta)l_2)\,d\vartheta\right|$$
    $$\le\frac{l_2-l_1}{2}\left(\int_{0}^{1}\left(B_{1-\vartheta}(n+1,\rho-n)-B_{\vartheta}(n+1,\rho-n)\right)d\vartheta\right)^{1-\frac{1}{r}}\left(\int_{0}^{1}\left|Q'(\vartheta l_1+(1-\vartheta)l_2)\right|^{r}d\vartheta\right)^{\frac{1}{r}}$$
    $$\le\frac{l_2-l_1}{2}\Big(B(n+1,\rho-n+1)-B(n+1,\rho-n)+B(n+2,\rho-n)\Big)^{1-\frac{1}{r}}\left(\int_{0}^{1}\vartheta(1-\vartheta)\left(\left|\frac{Q'(l_1)}{e^{\alpha l_1}}\right|^{r}+\left|\frac{Q'(l_2)}{e^{\alpha l_2}}\right|^{r}\right)d\vartheta\right)^{\frac{1}{r}}$$
    $$=\frac{l_2-l_1}{2}\Big(B(n+1,\rho-n+1)-B(n+1,\rho-n)+B(n+2,\rho-n)\Big)^{1-\frac{1}{r}}\left(\frac{e^{\alpha rl_2}|Q'(l_1)|^{r}+e^{\alpha rl_1}|Q'(l_2)|^{r}}{6\,e^{\alpha rl_1}e^{\alpha rl_2}}\right)^{\frac{1}{r}},$$

    which is the required result.

    Theorem 4.3. Let $\rho\in(n,n+1]$ with $\rho>0$, and let $Q:I\subseteq\mathbb{R}\to\mathbb{R}$ be a differentiable function on $I^{\circ}$ with $l_1<l_2$ such that $Q'\in L_1([l_1,l_2])$. If $|Q'|^{r}$, with $r,s>1$ and $\frac{1}{s}+\frac{1}{r}=1$, is an exponentially tgs-convex function, then the following inequality holds:

    $$\left|\Upsilon_{Q}(\rho;B;n;l_1,l_2)\right|\le\frac{l_2-l_1}{2}\left(2\int_{0}^{\frac{1}{2}}\left(\int_{u}^{1-u}v^{n}(1-v)^{\rho-n-1}\,dv\right)^{s}du\right)^{\frac{1}{s}}\left(\frac{e^{\alpha rl_2}|Q'(l_1)|^{r}+e^{\alpha rl_1}|Q'(l_2)|^{r}}{6\,e^{\alpha rl_1}e^{\alpha rl_2}}\right)^{\frac{1}{r}}. \tag{4.7}$$

    Proof. Using the exponentially tgs-convexity of $|Q'|^{r}$, Lemma 4.1, and the well-known Hölder inequality, one obtains

    $$\left|\Upsilon_{Q}(\rho;B;n;l_1,l_2)\right|=\left|\frac{l_2-l_1}{2}\int_{0}^{1}\left(B_{1-\vartheta}(n+1,\rho-n)-B_{\vartheta}(n+1,\rho-n)\right)Q'(\vartheta l_1+(1-\vartheta)l_2)\,d\vartheta\right|$$
    $$\le\frac{l_2-l_1}{2}\left(\int_{0}^{1}\left|B_{1-\vartheta}(n+1,\rho-n)-B_{\vartheta}(n+1,\rho-n)\right|^{s}d\vartheta\right)^{\frac{1}{s}}\left(\int_{0}^{1}\left|Q'(\vartheta l_1+(1-\vartheta)l_2)\right|^{r}d\vartheta\right)^{\frac{1}{r}}$$
    $$\le\frac{l_2-l_1}{2}\left(\int_{0}^{\frac{1}{2}}\left(B_{1-u}(n+1,\rho-n)-B_{u}(n+1,\rho-n)\right)^{s}du+\int_{\frac{1}{2}}^{1}\left(B_{u}(n+1,\rho-n)-B_{1-u}(n+1,\rho-n)\right)^{s}du\right)^{\frac{1}{s}}\left(\int_{0}^{1}\vartheta(1-\vartheta)\left(\frac{|Q'(l_1)|^{r}}{e^{\alpha rl_1}}+\frac{|Q'(l_2)|^{r}}{e^{\alpha rl_2}}\right)d\vartheta\right)^{\frac{1}{r}}$$
    $$=\frac{l_2-l_1}{2}\left(\int_{0}^{\frac{1}{2}}\left(\int_{u}^{1-u}v^{n}(1-v)^{\rho-n-1}\,dv\right)^{s}du+\int_{\frac{1}{2}}^{1}\left(\int_{1-u}^{u}v^{n}(1-v)^{\rho-n-1}\,dv\right)^{s}du\right)^{\frac{1}{s}}\left(\frac{e^{\alpha rl_2}|Q'(l_1)|^{r}+e^{\alpha rl_1}|Q'(l_2)|^{r}}{6\,e^{\alpha rl_1}e^{\alpha rl_2}}\right)^{\frac{1}{r}}$$
    $$=\frac{l_2-l_1}{2}\left(2\int_{0}^{\frac{1}{2}}\left(\int_{u}^{1-u}v^{n}(1-v)^{\rho-n-1}\,dv\right)^{s}du\right)^{\frac{1}{s}}\left(\frac{e^{\alpha rl_2}|Q'(l_1)|^{r}+e^{\alpha rl_1}|Q'(l_2)|^{r}}{6\,e^{\alpha rl_1}e^{\alpha rl_2}}\right)^{\frac{1}{r}},$$

    which is the required result.

    This section is devoted to proving some new generalizations for exponentially tgs-convex functions within the generalized conformable integral operator.

    Theorem 5.1. Let $\rho,\delta>0$, and let $Q:[l_1,l_2]\subseteq\mathbb{R}\to\mathbb{R}$ be an exponentially tgs-convex function such that $Q\in L_1[l_1,l_2]$. Then the following inequality holds:

    $$\frac{4}{\delta\rho^{\delta}}\,Q\!\left(\frac{l_1+l_2}{2}\right)\le\frac{\Gamma(\delta)}{(l_2-l_1)^{\rho\delta}}\left[\frac{{}^{\rho,\delta}J_{l_1^{+}}Q(l_2)}{e^{\alpha l_2}}+\frac{{}^{\rho,\delta}J_{l_2^{-}}Q(l_1)}{e^{\alpha l_1}}\right]\le\frac{1}{\rho}\left[B\!\left(\frac{\rho+1}{\rho},\delta\right)+B\!\left(\frac{\rho+2}{\rho},\delta\right)\right]\left(\frac{Q(l_1)}{e^{\alpha l_1}}+\frac{Q(l_2)}{e^{\alpha l_2}}\right). \tag{5.1}$$

    Proof. Multiplying (3.3) by $\left(\frac{1-\vartheta^{\rho}}{\rho}\right)^{\delta-1}\vartheta^{\rho-1}$ with $\vartheta\in(0,1)$, $\rho>0$, and then integrating the resulting estimate with respect to $\vartheta$ over $[0,1]$, we find

    $$4\,Q\!\left(\frac{l_1+l_2}{2}\right)\int_{0}^{1}\left(\frac{1-\vartheta^{\rho}}{\rho}\right)^{\delta-1}\vartheta^{\rho-1}\,d\vartheta\le\int_{0}^{1}\left(\frac{1-\vartheta^{\rho}}{\rho}\right)^{\delta-1}\vartheta^{\rho-1}\frac{Q(\vartheta l_1+(1-\vartheta)l_2)}{e^{\alpha(\vartheta l_1+(1-\vartheta)l_2)}}\,d\vartheta+\int_{0}^{1}\left(\frac{1-\vartheta^{\rho}}{\rho}\right)^{\delta-1}\vartheta^{\rho-1}\frac{Q(\vartheta l_2+(1-\vartheta)l_1)}{e^{\alpha(\vartheta l_2+(1-\vartheta)l_1)}}\,d\vartheta=R_1+R_2. \tag{5.2}$$

    By making the change of variable $u=\vartheta l_1+(1-\vartheta)l_2$, we have

    $$R_1=\int_{0}^{1}\left(\frac{1-\vartheta^{\rho}}{\rho}\right)^{\delta-1}\vartheta^{\rho-1}\frac{Q(\vartheta l_1+(1-\vartheta)l_2)}{e^{\alpha(\vartheta l_1+(1-\vartheta)l_2)}}\,d\vartheta=\int_{l_1}^{l_2}\left(\frac{1-\left(\frac{l_2-u}{l_2-l_1}\right)^{\rho}}{\rho}\right)^{\delta-1}\left(\frac{l_2-u}{l_2-l_1}\right)^{\rho-1}\frac{Q(u)}{e^{\alpha u}}\,\frac{du}{l_2-l_1}$$
    $$=\frac{1}{(l_2-l_1)^{\rho\delta}}\int_{l_1}^{l_2}\left(\frac{(l_2-l_1)^{\rho}-(l_2-u)^{\rho}}{\rho}\right)^{\delta-1}(l_2-u)^{\rho-1}\frac{Q(u)}{e^{\alpha u}}\,du=\frac{\Gamma(\delta)}{(l_2-l_1)^{\rho\delta}}\,\frac{{}^{\rho,\delta}J_{l_2^{-}}Q(l_1)}{e^{\alpha l_1}}. \tag{5.3}$$

    Similarly, substituting $v=\vartheta l_2+(1-\vartheta)l_1$, we have

    $$R_2=\int_{0}^{1}\left(\frac{1-\vartheta^{\rho}}{\rho}\right)^{\delta-1}\vartheta^{\rho-1}\frac{Q(\vartheta l_2+(1-\vartheta)l_1)}{e^{\alpha(\vartheta l_2+(1-\vartheta)l_1)}}\,d\vartheta=\int_{l_1}^{l_2}\left(\frac{1-\left(\frac{v-l_1}{l_2-l_1}\right)^{\rho}}{\rho}\right)^{\delta-1}\left(\frac{v-l_1}{l_2-l_1}\right)^{\rho-1}\frac{Q(v)}{e^{\alpha v}}\,\frac{dv}{l_2-l_1}$$
    $$=\frac{1}{(l_2-l_1)^{\rho\delta}}\int_{l_1}^{l_2}\left(\frac{(l_2-l_1)^{\rho}-(v-l_1)^{\rho}}{\rho}\right)^{\delta-1}(v-l_1)^{\rho-1}\frac{Q(v)}{e^{\alpha v}}\,dv=\frac{\Gamma(\delta)}{(l_2-l_1)^{\rho\delta}}\,\frac{{}^{\rho,\delta}J_{l_1^{+}}Q(l_2)}{e^{\alpha l_2}}. \tag{5.4}$$

    Since $\int_{0}^{1}\left(\frac{1-\vartheta^{\rho}}{\rho}\right)^{\delta-1}\vartheta^{\rho-1}\,d\vartheta=\frac{1}{\delta\rho^{\delta}}$, using (5.3) and (5.4) in (5.2) we get the first inequality of (5.1).

    On the other hand, by exponentially tgs-convexity,

    $$Q(\vartheta l_1+(1-\vartheta)l_2)\le\vartheta(1-\vartheta)\left(\frac{Q(l_1)}{e^{\alpha l_1}}+\frac{Q(l_2)}{e^{\alpha l_2}}\right)$$

    and

    $$Q(\vartheta l_2+(1-\vartheta)l_1)\le\vartheta(1-\vartheta)\left(\frac{Q(l_1)}{e^{\alpha l_1}}+\frac{Q(l_2)}{e^{\alpha l_2}}\right).$$

    Adding these inequalities gives

    $$Q(\vartheta l_1+(1-\vartheta)l_2)+Q(\vartheta l_2+(1-\vartheta)l_1)\le 2\vartheta(1-\vartheta)\left(\frac{Q(l_1)}{e^{\alpha l_1}}+\frac{Q(l_2)}{e^{\alpha l_2}}\right). \tag{5.5}$$

    Multiplying (5.5) by $\left(\frac{1-\vartheta^{\rho}}{\rho}\right)^{\delta-1}\vartheta^{\rho-1}$ with $\vartheta\in(0,1)$, $\rho>0$, and then integrating the resulting estimate with respect to $\vartheta$ over $[0,1]$, we get

    $$\frac{\Gamma(\delta)}{(l_2-l_1)^{\rho\delta}}\left[\frac{{}^{\rho,\delta}J_{l_1^{+}}Q(l_2)}{e^{\alpha l_2}}+\frac{{}^{\rho,\delta}J_{l_2^{-}}Q(l_1)}{e^{\alpha l_1}}\right]\le\frac{1}{\rho}\left[B\!\left(\frac{\rho+1}{\rho},\delta\right)+B\!\left(\frac{\rho+2}{\rho},\delta\right)\right]\left(\frac{Q(l_1)}{e^{\alpha l_1}}+\frac{Q(l_2)}{e^{\alpha l_2}}\right), \tag{5.6}$$

    which is the right-hand inequality of (5.1).

    Our main results depend on the following identity.

    Lemma 5.2. Let $\rho,\delta>0$, and let $Q:I\subseteq\mathbb{R}\to\mathbb{R}$ be a differentiable function on $(l_1,l_2)$ with $l_1<l_2$ such that $Q'\in L_1[l_1,l_2]$. Then the following identity holds:

    $$\left(\frac{Q(l_1)+Q(l_2)}{2}\right)-\frac{\rho^{\delta}\Gamma(\delta+1)}{2(l_2-l_1)^{\rho\delta}}\left[{}^{\rho,\delta}J_{l_1^{+}}Q(l_2)+{}^{\rho,\delta}J_{l_2^{-}}Q(l_1)\right]=\frac{(l_2-l_1)\rho^{\delta}}{2}\int_{0}^{1}\left[\left(\frac{1-\vartheta^{\rho}}{\rho}\right)^{\delta}-\left(\frac{1-(1-\vartheta)^{\rho}}{\rho}\right)^{\delta}\right]Q'(\vartheta l_1+(1-\vartheta)l_2)\,d\vartheta. \tag{5.7}$$

    Proof. It suffices to note that

    $$\int_{0}^{1}\left[\left(\frac{1-\vartheta^{\rho}}{\rho}\right)^{\delta}-\left(\frac{1-(1-\vartheta)^{\rho}}{\rho}\right)^{\delta}\right]Q'(\vartheta l_1+(1-\vartheta)l_2)\,d\vartheta=\int_{0}^{1}\left(\frac{1-\vartheta^{\rho}}{\rho}\right)^{\delta}Q'(\vartheta l_1+(1-\vartheta)l_2)\,d\vartheta-\int_{0}^{1}\left(\frac{1-(1-\vartheta)^{\rho}}{\rho}\right)^{\delta}Q'(\vartheta l_1+(1-\vartheta)l_2)\,d\vartheta=M_1-M_2. \tag{5.8}$$

    Using integration by parts and a change of variable, we have

    $$M_1=\int_{0}^{1}\left(\frac{1-\vartheta^{\rho}}{\rho}\right)^{\delta}Q'(\vartheta l_1+(1-\vartheta)l_2)\,d\vartheta=\frac{1}{l_1-l_2}\left(\frac{1-\vartheta^{\rho}}{\rho}\right)^{\delta}Q(\vartheta l_1+(1-\vartheta)l_2)\Big|_{0}^{1}+\frac{\delta}{l_1-l_2}\int_{0}^{1}\left(\frac{1-\vartheta^{\rho}}{\rho}\right)^{\delta-1}\vartheta^{\rho-1}Q(\vartheta l_1+(1-\vartheta)l_2)\,d\vartheta$$
    $$=\frac{Q(l_2)}{(l_2-l_1)\rho^{\delta}}-\frac{\delta}{l_2-l_1}\int_{0}^{1}\left(\frac{1-\vartheta^{\rho}}{\rho}\right)^{\delta-1}\vartheta^{\rho-1}Q(\vartheta l_1+(1-\vartheta)l_2)\,d\vartheta=\frac{Q(l_2)}{(l_2-l_1)\rho^{\delta}}-\frac{\delta\Gamma(\delta)}{(l_2-l_1)^{\rho\delta+1}}\,{}^{\rho,\delta}J_{l_2^{-}}Q(l_1).$$

    Analogously,

    $$M_2=\int_{0}^{1}\left(\frac{1-(1-\vartheta)^{\rho}}{\rho}\right)^{\delta}Q'(\vartheta l_1+(1-\vartheta)l_2)\,d\vartheta=\frac{1}{l_1-l_2}\left(\frac{1-(1-\vartheta)^{\rho}}{\rho}\right)^{\delta}Q(\vartheta l_1+(1-\vartheta)l_2)\Big|_{0}^{1}-\frac{\delta}{l_1-l_2}\int_{0}^{1}\left(\frac{1-(1-\vartheta)^{\rho}}{\rho}\right)^{\delta-1}(1-\vartheta)^{\rho-1}Q(\vartheta l_1+(1-\vartheta)l_2)\,d\vartheta$$
    $$=-\frac{Q(l_1)}{(l_2-l_1)\rho^{\delta}}+\frac{\delta}{l_2-l_1}\int_{0}^{1}\left(\frac{1-(1-\vartheta)^{\rho}}{\rho}\right)^{\delta-1}(1-\vartheta)^{\rho-1}Q(\vartheta l_1+(1-\vartheta)l_2)\,d\vartheta=-\frac{Q(l_1)}{(l_2-l_1)\rho^{\delta}}+\frac{\delta\Gamma(\delta)}{(l_2-l_1)^{\rho\delta+1}}\,{}^{\rho,\delta}J_{l_1^{+}}Q(l_2). \tag{5.9}$$

    Substituting the values of $M_1$ and $M_2$ in (5.8) and then multiplying both sides by $\frac{(l_2-l_1)\rho^{\delta}}{2}$, we get the desired result.

    Theorem 5.3. Let $\rho,\delta>0$, and let $Q:I\subseteq\mathbb{R}\to\mathbb{R}$ be a differentiable function on $I^{\circ}$ with $l_1<l_2$ such that $Q'\in L_1([l_1,l_2])$. If $|Q'|^{r}$, with $r\ge 1$, is an exponentially tgs-convex function, then the following inequality holds:

    $$\left|\left(\frac{Q(l_1)+Q(l_2)}{2}\right)-\frac{\rho^{\delta}\Gamma(\delta+1)}{2(l_2-l_1)^{\rho\delta}}\left[{}^{\rho,\delta}J_{l_1^{+}}Q(l_2)+{}^{\rho,\delta}J_{l_2^{-}}Q(l_1)\right]\right|\le\frac{(l_2-l_1)\rho^{\delta}}{2}\left(\frac{2}{\rho^{\delta+1}}B\!\left(\frac{1}{\rho},\delta+1\right)\right)^{1-\frac{1}{r}}\left(\frac{e^{\alpha rl_2}|Q'(l_1)|^{r}+e^{\alpha rl_1}|Q'(l_2)|^{r}}{6\,e^{\alpha rl_1}e^{\alpha rl_2}}\right)^{\frac{1}{r}}. \tag{5.10}$$

    Proof. Using the exponentially tgs-convexity of $|Q'|^{r}$, Lemma 5.2, and the power-mean (Hölder) inequality, we have

    $$\left|\left(\frac{Q(l_1)+Q(l_2)}{2}\right)-\frac{\rho^{\delta}\Gamma(\delta+1)}{2(l_2-l_1)^{\rho\delta}}\left[{}^{\rho,\delta}J_{l_1^{+}}Q(l_2)+{}^{\rho,\delta}J_{l_2^{-}}Q(l_1)\right]\right|=\left|\frac{(l_2-l_1)\rho^{\delta}}{2}\int_{0}^{1}\left[\left(\frac{1-\vartheta^{\rho}}{\rho}\right)^{\delta}-\left(\frac{1-(1-\vartheta)^{\rho}}{\rho}\right)^{\delta}\right]Q'(\vartheta l_1+(1-\vartheta)l_2)\,d\vartheta\right|$$
    $$\le\frac{(l_2-l_1)\rho^{\delta}}{2}\left(\int_{0}^{1}\left|\left(\frac{1-\vartheta^{\rho}}{\rho}\right)^{\delta}-\left(\frac{1-(1-\vartheta)^{\rho}}{\rho}\right)^{\delta}\right|\,d\vartheta\right)^{1-\frac{1}{r}}\left(\int_{0}^{1}\left|Q'(\vartheta l_1+(1-\vartheta)l_2)\right|^{r}d\vartheta\right)^{\frac{1}{r}}$$
    $$\le\frac{(l_2-l_1)\rho^{\delta}}{2}\left(\int_{0}^{1}\left(\frac{1-\vartheta^{\rho}}{\rho}\right)^{\delta}d\vartheta+\int_{0}^{1}\left(\frac{1-(1-\vartheta)^{\rho}}{\rho}\right)^{\delta}d\vartheta\right)^{1-\frac{1}{r}}\left(\int_{0}^{1}\vartheta(1-\vartheta)\left(\frac{|Q'(l_1)|^{r}}{e^{\alpha rl_1}}+\frac{|Q'(l_2)|^{r}}{e^{\alpha rl_2}}\right)d\vartheta\right)^{\frac{1}{r}}$$
    $$=\frac{(l_2-l_1)\rho^{\delta}}{2}\left(\frac{2}{\rho^{\delta+1}}B\!\left(\frac{1}{\rho},\delta+1\right)\right)^{1-\frac{1}{r}}\left(\frac{e^{\alpha rl_2}|Q'(l_1)|^{r}+e^{\alpha rl_1}|Q'(l_2)|^{r}}{6\,e^{\alpha rl_1}e^{\alpha rl_2}}\right)^{\frac{1}{r}},$$

    where we used $\int_{0}^{1}\left(\frac{1-\vartheta^{\rho}}{\rho}\right)^{\delta}d\vartheta=\int_{0}^{1}\left(\frac{1-(1-\vartheta)^{\rho}}{\rho}\right)^{\delta}d\vartheta=\frac{1}{\rho^{\delta+1}}B\!\left(\frac{1}{\rho},\delta+1\right)$ and $\int_{0}^{1}\vartheta(1-\vartheta)\,d\vartheta=\frac{1}{6}$. This is the required result.

    Let $l_1,l_2>0$ with $l_1\ne l_2$. The arithmetic mean $A(l_1,l_2)$, geometric mean $G(l_1,l_2)$, harmonic mean $H(l_1,l_2)$, logarithmic mean $L(l_1,l_2)$ and $n$-th generalized logarithmic mean $L_n(l_1,l_2)$ are defined by

    $$A(l_1,l_2)=\frac{l_1+l_2}{2},\qquad G(l_1,l_2)=\sqrt{l_1l_2},\qquad H(l_1,l_2)=\frac{2l_1l_2}{l_1+l_2},\qquad L(l_1,l_2)=\frac{l_2-l_1}{\ln l_2-\ln l_1}$$

    and

    $$L_n(l_1,l_2)=\left[\frac{l_2^{n+1}-l_1^{n+1}}{(n+1)(l_2-l_1)}\right]^{\frac{1}{n}}\qquad(n\ne 0,-1),$$

    respectively. Recently, bivariate means have attracted the attention of many researchers [47,48,49,50,51,52,53,54,55,56,57,58] because they are closely related to special functions.
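    For reference, these means are implemented below (standard formulas; the snippet is ours, not from the paper).

```python
# Bivariate means used in the propositions of this section.
from math import log, sqrt

def A(l1, l2):   return (l1 + l2) / 2                     # arithmetic mean
def G(l1, l2):   return sqrt(l1 * l2)                     # geometric mean
def H(l1, l2):   return 2 * l1 * l2 / (l1 + l2)           # harmonic mean
def L(l1, l2):   return (l2 - l1) / (log(l2) - log(l1))   # logarithmic mean (l1 != l2)

def L_n(l1, l2, n):
    """n-th generalized logarithmic mean, n != 0, -1."""
    return ((l2 ** (n + 1) - l1 ** (n + 1)) / ((n + 1) * (l2 - l1))) ** (1 / n)

print(A(1, 3), G(1, 3), H(1, 3), L(1, 3), L_n(1, 3, 2))
```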

    In this section, we use our obtained results in section 5 to provide several novel inequalities involving the special bivariate means mentioned above.

    Proposition 6.1. Let $l_1,l_2>0$ with $l_2>l_1$. Then

    $$\left|A(l_1^{2},l_2^{2})-L_{2}^{2}(l_1,l_2)\right|\le\frac{l_2-l_1}{6^{\frac{1}{r}}\,e^{\alpha(l_1+l_2)}}\left[(e^{\alpha l_2}l_1)^{r}+(e^{\alpha l_1}l_2)^{r}\right]^{\frac{1}{r}}.$$

    Proof. Let $\rho=\delta=1$ and $Q(z)=z^{2}$. Then the desired result follows from Theorem 5.3.

    Proposition 6.2. Let $l_1,l_2>0$ with $l_2>l_1$. Then

    $$\left|H^{-1}(l_1,l_2)-L^{-1}(l_1,l_2)\right|\le\frac{l_2-l_1}{2\cdot 6^{\frac{1}{r}}\,e^{\alpha(l_1+l_2)}}\left[\frac{(e^{\alpha l_2}l_2^{2})^{r}+(e^{\alpha l_1}l_1^{2})^{r}}{(l_1l_2)^{2r}}\right]^{\frac{1}{r}}.$$

    Proof. Let $\rho=\delta=1$ and $Q(z)=\frac{1}{z}$. Then the desired result follows from Theorem 5.3.

    Proposition 6.3. Let $l_1,l_2>0$ with $l_2>l_1$. Then

    $$\left|A(l_1^{n},l_2^{n})-L_{n}^{n}(l_1,l_2)\right|\le\frac{(l_2-l_1)|n|}{2}\left[\frac{(e^{\alpha l_2}l_1^{n-1})^{r}+(e^{\alpha l_1}l_2^{n-1})^{r}}{6\,e^{\alpha r(l_1+l_2)}}\right]^{\frac{1}{r}}.$$

    Proof. Let $\rho=\delta=1$ and $Q(z)=z^{n}$. Then the desired result follows from Theorem 5.3.
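    The two sides of Proposition 6.3 are easy to tabulate for concrete values; the sketch below (our own helper, with arbitrarily chosen parameters) evaluates both sides so the estimate can be explored numerically.

```python
# Evaluate both sides of Proposition 6.3 for chosen l1, l2, alpha, r and n.
from math import exp

def A(l1, l2):
    return (l1 + l2) / 2                                   # arithmetic mean

def L_n(l1, l2, n):
    return ((l2 ** (n + 1) - l1 ** (n + 1)) / ((n + 1) * (l2 - l1))) ** (1 / n)

def proposition_6_3(l1, l2, alpha, r, n):
    lhs = abs(A(l1 ** n, l2 ** n) - L_n(l1, l2, n) ** n)
    rhs = (l2 - l1) * abs(n) / 2 * (
        ((exp(alpha * l2) * l1 ** (n - 1)) ** r + (exp(alpha * l1) * l2 ** (n - 1)) ** r)
        / (6 * exp(alpha * r * (l1 + l2)))
    ) ** (1 / r)
    return lhs, rhs

print(proposition_6_3(l1=1.0, l2=2.0, alpha=0.1, r=2.0, n=3))
```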

    In this paper, we proposed a novel technique, with two different approaches, for deriving several generalizations of the Hermite-Hadamard inequality for exponentially tgs-convex functions via conformable integral operators. By choosing particular parameter values $\rho$ and $\delta$, we recovered special cases of our results in the form of corollaries and remarks. To show the effectiveness of the new generalizations, we note that the results have potential applications in fractional integro-differential and fractional Schrödinger equations, and the applications to bivariate means indicate that our findings are consistent and efficient. Finally, we remark that, beyond the framework of the conformable fractional integral operator, it is of interest to extend our results to the Riemann-Liouville, Hadamard, and Katugampola fractional integral operators. Our ideas and approach may lead to much follow-up research.

    The authors would like to thank the anonymous referees for their valuable comments and suggestions, which led to considerable improvement of the article.

    The work was supported by the Natural Science Foundation of China (Grant Nos. 61673169, 11971142, 11701176, 11626101, 11601485).

    The authors declare no conflict of interest.


    Acknowledgments



    The author developed AI agents to work with them, like with a good MSc student. Agents helped to identify the most relevant literature, design a plan for the paper, implement suggested improvements, and draft paper sections and rewrite them based on comments provided by the author. The author assumes full responsibility for the accuracy of the content presented here.

    Conflict of interest



    The author has no conflicts of interest to declare.

    [1] Bakeman R, Quera V (2011) Sequential Analysis and Observational Methods for the Behavioral Sciences.Cambridge University Press. https://doi.org/10.1017/CBO9781139017343
    [2] Metz GA, Whishaw IQ (2002) Cortical and subcortical lesions impair skilled walking in the ladder rung walking test: a new task to evaluate fore-and hindlimb stepping, placing, and co-ordination. J Neurosci Meth 115: 169-179. https://doi.org/10.1016/S0165-0270(02)00012-2
    [3] Spano R (2005) Potential sources of observer bias in police observational data. Soc Sci Res 34: 591-617. https://doi.org/10.1016/j.ssresearch.2004.05.003
    [4] Asan O, Montague E (2014) Using video-based observation research methods in primary care health encounters to evaluate complex interactions. J Innov Health Inform 21: 161-170. https://doi.org/10.14236/jhi.v21i4.72
    [5] Moran RW, Schneiders AG, Major KM, et al. (2016) How reliable are functional movement screening scores? A systematic review of rater reliability. Brit J Sport Med 50: 527-536. https://doi.org/10.1136/bjsports-2015-094913
    [6] Mathis MW, Mathis A (2020) Deep learning tools for the measurement of animal behavior in neuroscience. Curr Opin Neurobiol 60: 1-11. https://doi.org/10.1016/j.conb.2019.10.008
    [7] Gautam R, Sharma M (2020) Prevalence and diagnosis of neurological disorders using different deep learning techniques: a meta-analysis. J Med Syst 44: 49. https://doi.org/10.1007/s10916-019-1519-7
    [8] Singh KR, Dash S (2023) Early detection of neurological diseases using machine learning and deep learning techniques: a review. Artif Intell Neurol Diso 2023: 1-24. https://doi.org/10.1016/B978-0-323-90277-9.00001-8
    [9] Arac A, Zhao P, Dobkin BH, et al. (2019) DeepBehavior: A deep learning toolbox for automated analysis of animal and human behavior imaging data. Front Syst Neurosci 13: 20. https://doi.org/10.3389/fnsys.2019.00020
    [10] Sewak M, Sahay SK, Rathore H (2020) An overview of deep learning architecture of deep neural networks and autoencoders. J Comput Theor Nanos 17: 182-188. https://doi.org/10.1166/jctn.2020.8648
    [11] Brattoli B, Büchler U, Dorkenwald M, et al. (2021) Unsupervised behaviour analysis and magnification (uBAM) using deep learning. Nat Mach Intell 3: 495-506. https://doi.org/10.1038/s42256-021-00326-x
    [12] ul Haq A, Li JP, Agbley BLY, et al. (2022) A survey of deep learning techniques based Parkinson's disease recognition methods employing clinical data. Expert Syst Appl 208: 118045. https://doi.org/10.1016/j.eswa.2022.118045
    [13] Nilashi M, Abumalloh RA, Yusuf SYM, et al. (2023) Early diagnosis of Parkinson's disease: a combined method using deep learning and neuro-fuzzy techniques. Comput Biol Chem 102: 107788. https://doi.org/10.1016/j.compbiolchem.2022.107788
    [14] Shahid AH, Singh MP (2020) A deep learning approach for prediction of Parkinson's disease progression. Biomed Eng Lett 10: 227-239. https://doi.org/10.1007/s13534-020-00156-7
    [15] Chintalapudi N, Battineni G, Hossain MA, et al. (2022) Cascaded deep learning frameworks in contribution to the detection of parkinson's disease. Bioengineering 9: 116. https://doi.org/10.3390/bioengineering9030116
    [16] Almuqhim F, Saeed F (2021) ASD-SAENet: a sparse autoencoder, and deep-neural network model for detecting autism spectrum disorder (ASD) using fMRI data. Front Comput Neurosci 15: 654315. https://doi.org/10.3389/fncom.2021.654315
    [17] Zhang L, Wang M, Liu M, et al. (2020) A survey on deep learning for neuroimaging-based brain disorder analysis. Front Neurosci 14: 779. https://doi.org/10.3389/fnins.2020.00779
    [18] Uddin MZ, Shahriar MA, Mahamood MN, et al. (2024) Deep learning with image-based autism spectrum disorder analysis: a systematic review. Eng Appl Artif Intel 127: 107185. https://doi.org/10.1016/j.engappai.2023.107185
    [19] Gupta C, Chandrashekar P, Jin T, et al. (2022) Bringing machine learning to research on intellectual and developmental disabilities: taking inspiration from neurological diseases. J Neurodev Disord 14: 28. https://doi.org/10.1186/s11689-022-09438-w
    [20] Saleh AY, Chern LH (2021) Autism spectrum disorder classification using deep learning. IJOE 17: 103-114. https://doi.org/10.3991/ijoe.v17i08.24603
    [21] Koppe G, Meyer-Lindenberg A, Durstewitz D (2021) Deep learning for small and big data in psychiatry. Neuropsychopharmacology 46: 176-190. https://doi.org/10.1038/s41386-020-0767-z
    [22] Gütter J, Kruspe A, Zhu XX, et al. (2022) Impact of training set size on the ability of deep neural networks to deal with omission noise. Front Remote Sens 3: 932431. https://doi.org/10.3389/frsen.2022.932431
    [23] Sturman O, von Ziegler L, Schläppi C, et al. (2020) Deep learning-based behavioral analysis reaches human accuracy and is capable of outperforming commercial solutions. Neuropsychopharmacology 45: 1942-1952. https://doi.org/10.1038/s41386-020-0776-y
    [24] He T, Kong R, Holmes AJ, et al. (2020) Deep neural networks and kernel regression achieve comparable accuracies for functional connectivity prediction of behavior and demographics. NeuroImage 206: 116276. https://doi.org/10.1016/j.neuroimage.2019.116276
    [25] Chen M, Li H, Wang J, et al. (2019) A multichannel deep neural network model analyzing multiscale functional brain connectome data for attention deficit hyperactivity disorder detection. Radiol Artif Intell 2: e190012. https://doi.org/10.1148/ryai.2019190012
    [26] Golshan HM, Hebb AO, Mahoor MH (2020) LFP-Net: a deep learning framework to recognize human behavioral activities using brain STN-LFP signals. J Neurosci Meth 335: 108621. https://doi.org/10.1016/j.jneumeth.2020.108621
    [27] Sutoko S, Masuda A, Kandori A, et al. (2021) Early identification of Alzheimer's disease in mouse models: Application of deep neural network algorithm to cognitive behavioral parameters. Iscience 24: 102198. https://doi.org/10.1016/j.isci.2021.102198
    [28] Tarigopula P, Fairhall SL, Bavaresco A, et al. (2023) Improved prediction of behavioral and neural similarity spaces using pruned DNNs. Neural Networks 168: 89-104. https://doi.org/10.1016/j.neunet.2023.08.049
    [29] Uyulan C, Ergüzel TT, Unubol H, et al. (2021) Major depressive disorder classification based on different convolutional neural network models: deep learning approach. Clin EEG Neurosci 52: 38-51. https://doi.org/10.1177/1550059420916634
    [30] Wen J, Thibeau-Sutre E, Diaz-Melo M, et al. (2020) Convolutional neural networks for classification of Alzheimer's disease: overview and reproducible evaluation. Med Image Anal 63: 101694. https://doi.org/10.1016/j.media.2020.101694
    [31] Karthik R, Menaka R, Johnson A, et al. (2020) Neuroimaging and deep learning for brain stroke detection-A review of recent advancements and future prospects. Comput Meth Prog Bio 197: 105728. https://doi.org/10.1016/j.cmpb.2020.105728
    [32] Iqbal MS, Heyat MBB, Parveen S, et al. (2024) Progress and trends in neurological disorders research based on deep learning. Comput Med Imag Grap 116: 102400. https://doi.org/10.1016/j.compmedimag.2024.102400
    [33] Kim S, Pathak S, Parise R, et al. (2024) The thriving influence of artificial intelligence in neuroscience. Application of Artificial Intelligence in Neurological Disorders. Singapore: Springer Nature Singapore 157-184. https://doi.org/10.1007/978-981-97-2577-9_9
    [34] Lima AA, Mridha MF, Das SC, et al. (2022) A comprehensive survey on the detection, classification, and challenges of neurological disorders. Biology 11: 469. https://doi.org/10.3390/biology11030469
    [35] Mulpuri RP, Konda N, Gadde ST, et al. (2024) Artificial intelligence and machine learning in neuroregeneration: a systematic review. Cureus 16: e61400. https://doi.org/10.7759/cureus.61400
    [36] Keserwani PK, Das S, Sarkar N (2024) A comparative study: prediction of parkinson's disease using machine learning, deep learning and nature inspired algorithm. Multimed Tools Appl 83: 69393-69441. https://doi.org/10.1007/s11042-024-18186-z
    [37] Fatima A, Masood S (2024) Machine learning approaches for neurological disease prediction: a systematic review. Expert Syst 41: e13569. https://doi.org/10.1111/exsy.13569
    [38] Surianarayanan C, Lawrence JJ, Chelliah PR, et al. (2023) Convergence of artificial intelligence and neuroscience towards the diagnosis of neurological disorders—a scoping review. Sensors 23: 3062. https://doi.org/10.3390/s23063062
    [39] Lombardi A, Diacono D, Amoroso N, et al. (2021) Explainable deep learning for personalized age prediction with brain morphology. Front Neurosci 15: 674055. https://doi.org/10.3389/fnins.2021.674055
    [40] Choo YJ, Chang MC (2022) Use of machine learning in stroke rehabilitation: a narrative review. Brain Neurorehab 15: e26. https://doi.org/10.12786/bn.2022.15.e26
    [41] Ryait H, Bermudez-Contreras E, Harvey M, et al. (2019) Data-driven analyses of motor impairments in animal models of neurological disorders. PLoS Biol 17: e3000516. https://doi.org/10.1371/journal.pbio.3000516
    [42] Nguyen HS, Ho DKN, Nguyen NN, et al. (2024) Predicting EGFR mutation status in non-small cell lung cancer using artificial intelligence: a systematic review and meta-analysis. Acad Radiol 31: 660-683. https://doi.org/10.1016/j.acra.2023.03.040
    [43] Zhang Y, Yao Q, Yue L, et al. (2023) Emerging drug interaction prediction enabled by a flow-based graph neural network with biomedical network. Nat Comput Sci 3: 1023-1033. https://doi.org/10.1038/s43588-023-00558-4
    [44] Le NQK (2023) Predicting emerging drug interactions using GNNs. Nat Comput Sci 3: 1007-1008. https://doi.org/10.1038/s43588-023-00555-7
    [45] Abed Mohammed A, Sumari P (2024) Hybrid k-means and principal component analysis (PCA) for diabetes prediction. Int J Comput Dig Syst 15: 1719-1728. https://doi.org/10.12785/ijcds/1501121
    [46] Mostafa F, Hasan E, Williamson M, et al. (2021) Statistical machine learning approaches to liver disease prediction. Livers 1: 294-312. https://doi.org/10.3390/livers1040023
    [47] Jackins V, Vimal S, Kaliappan M, et al. (2021) AI-based smart prediction of clinical disease using random forest classifier and Naive Bayes. J Supercomput 77: 5198-5219. https://doi.org/10.1007/s11227-020-03481-x
    [48] Cho G, Yim J, Choi Y, et al. (2019) Review of machine learning algorithms for diagnosing mental illness. Psychiat Invest 16: 262. https://doi.org/10.30773/pi.2018.12.21.2
    [49] Aljrees T (2024) Improving prediction of cervical cancer using KNN imputer and multi-model ensemble learning. Plos One 19: e0295632. https://doi.org/10.1371/journal.pone.0295632
    [50] Hajare S, Rewatkar R, Reddy KTV (2024) Design of an iterative method for enhanced early prediction of acute coronary syndrome using XAI analysis. AIMS Bioeng 11: 301-322. https://doi.org/10.3934/bioeng.2024016
    [51] Schjetnan AGP, Luczak A (2011) Recording large-scale neuronal ensembles with silicon probes in the anesthetized rat. J Vis Exp 56: e3282. https://doi.org/10.3791/3282-v
    [52] Luczak A, Narayanan NS (2005) Spectral representation-analyzing single-unit activity in extracellularly recorded neuronal data without spike sorting. J Neurosci Meth 144: 53-61. https://doi.org/10.1016/j.jneumeth.2004.10.009
    [53] Luczak A, Hackett TA, Kajikawa Y, et al. (2004) Multivariate receptive field mapping in marmoset auditory cortex. J Neurosci Meth 136: 77-85. https://doi.org/10.1016/j.jneumeth.2003.12.019
    [54] Luczak A (2010) Measuring neuronal branching patterns using model-based approach. Front Comput Neurosci 4: 135. https://doi.org/10.3389/fncom.2010.00135
    [55] Luczak A, Kubo Y (2022) Predictive neuronal adaptation as a basis for consciousness. Front Syst Neurosci 15: 767461. https://doi.org/10.3389/fnsys.2021.767461
    [56] Lepakshi VA (2022) Machine learning and deep learning based AI tools for development of diagnostic tools. Computational Approaches for Novel Therapeutic and Diagnostic Designing to Mitigate SARS-CoV-2 Infection.Academic Press 399-420. https://doi.org/10.1016/B978-0-323-91172-6.00011-X
    [57] Montavon G, Binder A, Lapuschkin S, et al. (2019) Layer-wise relevance propagation: an overview. Explainable AI: Interpreting, Explaining and Visualizing Deep Learning. Cham: Springer 193-209. https://doi.org/10.1007/978-3-030-28954-6_10
    [58] Nazir S, Dickson DM, Akram MU (2023) Survey of explainable artificial intelligence techniques for biomedical imaging with deep neural networks. Comput Biol Med 156: 106668. https://doi.org/10.1016/j.compbiomed.2023.106668
    [59] Torabi R, Jenkins S, Harker A, et al. (2021) A neural network reveals motoric effects of maternal preconception exposure to nicotine on rat pup behavior: a new approach for movement disorders diagnosis. Front Neurosci 15: 686767. https://doi.org/10.3389/fnins.2021.686767
    [60] Shahtalebi S, Atashzar SF, Patel RV, et al. (2021) A deep explainable artificial intelligent framework for neurological disorders discrimination. Sci Rep 11: 9630. https://doi.org/10.1038/s41598-021-88919-9
    [61] Morabito FC, Ieracitano C, Mammone N (2023) An explainable artificial intelligence approach to study MCI to AD conversion via HD-EEG processing. Clin EEG Neurosci 54: 51-60. https://doi.org/10.1177/15500594211063662
    [62] Goodwin NL, Nilsson SRO, Choong JJ, et al. (2022) Toward the explainability, transparency, and universality of machine learning for behavioral classification in neuroscience. Curr Opin Neurobiol 73: 102544. https://doi.org/10.1016/j.conb.2022.102544
    [63] Lindsay GW (2024) Grounding neuroscience in behavioral changes using artificial neural networks. Curr Opin Neurobiol 84: 102816. https://doi.org/10.1016/j.conb.2023.102816
    [64] Dan T, Kim M, Kim WH, et al. (2023) Developing explainable deep model for discovering novel control mechanism of neuro-dynamics. IEEE T Med Imaging 43: 427-438. https://doi.org/10.1109/TMI.2023.3309821
    [65] Fellous JM, Sapiro G, Rossi A, et al. (2019) Explainable artificial intelligence for neuroscience: behavioral neurostimulation. Front Neurosci 13: 1346. https://doi.org/10.3389/fnins.2019.01346
    [66] Bartle AS, Jiang Z, Jiang R, et al. (2022) A critical appraisal on deep neural networks: bridge the gap between deep learning and neuroscience via XAI. HANDBOOK ON COMPUTER LEARNING AND INTELLIGENCE: Volume 2: Deep Learning, Intelligent Control and Evolutionary Computation 2022: 619-634. https://doi.org/10.1142/9789811247323_0015
    [67] Lemon RN (1997) Mechanisms of cortical control of hand function. Neuroscientist 3: 389-398. https://doi.org/10.1177/107385849700300612
    [68] Alaverdashvili M, Whishaw IQ (2013) A behavioral method for identifying recovery and compensation: hand use in a preclinical stroke model using the single pellet reaching task. Neurosci Biobehav R 37: 950-967. https://doi.org/10.1016/j.neubiorev.2013.03.026
    [69] Metz GAS, Whishaw IQ (2000) Skilled reaching an action pattern: stability in rat (Rattus norvegicus) grasping movements as a function of changing food pellet size. Behav Brain Res 116: 111-122. https://doi.org/10.1016/S0166-4328(00)00245-X
    [70] Faraji J, Gomez-Palacio-Schjetnan A, Luczak A, et al. (2013) Beyond the silence: bilateral somatosensory stimulation enhances skilled movement quality and neural density in intact behaving rats. Behav Brain Res 253: 78-89. https://doi.org/10.1016/j.bbr.2013.07.022
    [71] Sheu Y (2020) Illuminating the black box: interpreting deep neural network models for psychiatric research. Front Psychiatry 11: 551299. https://doi.org/10.3389/fpsyt.2020.551299
    [72] Fan FL, Xiong J, Li M, et al. (2021) On interpretability of artificial neural networks: a survey. IEEE T Radiat Plasma 5: 741-760. https://doi.org/10.1109/TRPMS.2021.3066428
    [73] Smucny J, Shi G, Davidson I (2022) Deep learning in neuroimaging: overcoming challenges with emerging approaches. Front Psychiatry 13: 912600. https://doi.org/10.3389/fpsyt.2022.912600
    [74] Kohlbrenner M, Bauer A, Nakajima S, et al. (2020) Towards best practice in explaining neural network decisions with LRP. 2020 International Joint Conference on Neural Networks (IJCNN).IEEE 1-7. https://doi.org/10.1109/IJCNN48605.2020.9206975
    [75] Farahani FV, Fiok K, Lahijanian B, et al. (2022) Explainable AI: a review of applications to neuroimaging data. Front Neurosci 16: 906290. https://doi.org/10.3389/fnins.2022.906290
    [76] Böhle M, Eitel F, Weygandt M, et al. (2019) Layer-wise relevance propagation for explaining deep neural network decisions in MRI-based Alzheimer's disease classification. Front Aging Neurosci 11: 456892. https://doi.org/10.3389/fnagi.2019.00194
    [77] Marques dos Santos JD, Marques dos Santos JP (2023) Path-weights and layer-wise relevance propagation for explainability of ANNs with fMRI data. International Conference on Machine Learning, Optimization, and Data Science. Cham: Springer Nature Switzerland 433-448. https://doi.org/10.1007/978-3-031-53966-4_32
    [78] Filtjens B, Ginis P, Nieuwboer A, et al. (2021) Modelling and identification of characteristic kinematic features preceding freezing of gait with convolutional neural networks and layer-wise relevance propagation. BMC Med Inform Decis Mak 21: 341. https://doi.org/10.1186/s12911-021-01699-0
    [79] Li H, Tian Y, Mueller K, et al. (2019) Beyond saliency: understanding convolutional neural networks from saliency prediction on layer-wise relevance propagation. Image Vision Comput 83: 70-86. https://doi.org/10.1016/j.imavis.2019.02.005
    [80] Nam H, Kim JM, Choi W, et al. (2023) The effects of layer-wise relevance propagation-based feature selection for EEG classification: a comparative study on multiple datasets. Front Hum Neurosci 17: 1205881. https://doi.org/10.3389/fnhum.2023.1205881
    [81] Korda AI, Ruef A, Neufang S, et al. (2021) Identification of voxel-based texture abnormalities as new biomarkers for schizophrenia and major depressive patients using layer-wise relevance propagation on deep learning decisions. Psychiat Res-Neuroim 313: 111303. https://doi.org/10.1016/j.pscychresns.2021.111303
    [82] von Ziegler L, Sturman O, Bohacek J (2021) Big behavior: challenges and opportunities in a new era of deep behavior profiling. Neuropsychopharmacology 46: 33-44. https://doi.org/10.1038/s41386-020-0751-7
    [83] Marks M, Jin Q, Sturman O, et al. (2022) Deep-learning-based identification, tracking, pose estimation and behaviour classification of interacting primates and mice in complex environments. Nat Mach Intell 4: 331-340. https://doi.org/10.1038/s42256-022-00477-5
    [84] Bohnslav JP, Wimalasena NK, Clausing KJ, et al. (2021) DeepEthogram, a machine learning pipeline for supervised behavior classification from raw pixels. Elife 10: e63377. https://doi.org/10.7554/eLife.63377
    [85] Wang PY, Sapra S, George VK, et al. (2021) Generalizable machine learning in neuroscience using graph neural networks. Front Artif Intell 4: 618372. https://doi.org/10.3389/frai.2021.618372
    [86] Watson DS, Krutzinna J, Bruce IN, et al. (2019) Clinical applications of machine learning algorithms: beyond the black box. Bmj 364: l886. https://doi.org/10.2139/ssrn.3352454
    [87] Jain A, Salas M, Aimer O, et al. (2024) Safeguarding patients in the AI era: ethics at the forefront of pharmacovigilance. Drug Safety 48: 119-127. https://doi.org/10.1007/s40264-024-01483-9
    [88] Murdoch B (2021) Privacy and artificial intelligence: challenges for protecting health information in a new era. BMC Med Ethics 22: 1-5. https://doi.org/10.1186/s12910-021-00687-3
    [89] Ziesche S (2021) AI ethics and value alignment for nonhuman animals. Philosophies 6: 31. https://doi.org/10.3390/philosophies6020031
    [90] Bossert L, Hagendorff T (2021) Animals and AI. the role of animals in AI research and application-an overview and ethical evaluation. Technol Soc 67: 101678. https://doi.org/10.1016/j.techsoc.2021.101678
    [91] Gong Y, Liu G, Xue Y, et al. (2023) A survey on dataset quality in machine learning. Inform Software Tech 162: 107268. https://doi.org/10.1016/j.infsof.2023.107268
    [92] Bolaños LA, Xiao D, Ford NL, et al. (2021) A three-dimensional virtual mouse generates synthetic training data for behavioral analysis. Nat Methods 18: 378-381. https://doi.org/10.1038/s41592-021-01103-9
    [93] Lashgari E, Liang D, Maoz U (2020) Data augmentation for deep-learning-based electroencephalography. J Neurosci Methods 346: 108885. https://doi.org/10.1016/j.jneumeth.2020.108885
    [94] Barile B, Marzullo A, Stamile C, et al. (2021) Data augmentation using generative adversarial neural networks on brain structural connectivity in multiple sclerosis. Comput Meth Prog Bio 206: 106113. https://doi.org/10.1016/j.cmpb.2021.106113
    [95] Memar S, Jiang E, Prado VF, et al. (2023) Open science and data sharing in cognitive neuroscience with MouseBytes and MouseBytes+. Sci Data 10: 210. https://doi.org/10.1038/s41597-023-02106-1
    [96] Jleilaty S, Ammounah A, Abdulmalek G, et al. (2024) Distributed real-time control architecture for electrohydraulic humanoid robots. Robot Intell Automat 44: 607-620. https://doi.org/10.1108/RIA-01-2024-0013
    [97] Zhao J, Wang Z, Lv Y, et al. (2024) Data-driven learning for H∞ control of adaptive cruise control systems. IEEE Trans Veh Technol 73: 18348-18362. https://doi.org/10.1109/TVT.2024.3447060
    [98] Kelly CJ, Karthikesalingam A, Suleyman M, et al. (2019) Key challenges for delivering clinical impact with artificial intelligence. BMC Med 17: 1-9. https://doi.org/10.1186/s12916-019-1426-2
    [99] Kulkarni PA, Singh H (2023) Artificial intelligence in clinical diagnosis: opportunities, challenges, and hype. Jama 330: 317-318. https://doi.org/10.1001/jama.2023.11440
    [100] Choudhury A, Asan O (2020) Role of artificial intelligence in patient safety outcomes: systematic literature review. JMIR Med inf 8: e18599. https://doi.org/10.2196/18599
    [101] Ratwani RM, Sutton K, Galarraga JE (2024) Addressing AI algorithmic bias in health care. Jama 332: 1051-1052. https://doi.org/10.1001/jama.2024.13486
    [102] Chen C, Sundar SS (2024) Communicating and combating algorithmic bias: effects of data diversity, labeler diversity, performance bias, and user feedback on AI trust. Hum-Comput Interact 2024: 1-37. https://doi.org/10.1080/07370024.2024.2392494
    [103] Chen F, Wang L, Hong J, et al. (2024) Unmasking bias in artificial intelligence: a systematic review of bias detection and mitigation strategies in electronic health record-based models. J Am Medl Inform Assn 31: 1172-1183. https://doi.org/10.1093/jamia/ocae060
    [104] Ienca M, Ignatiadis K (2020) Artificial intelligence in clinical neuroscience: methodological and ethical challenges. AJOB Neurosci 11: 77-87. https://doi.org/10.1080/21507740.2020.1740352
    [105] Avberšek LK, Repovš G (2022) Deep learning in neuroimaging data analysis: applications, challenges, and solutions. Front Neuroimag 1: 981642. https://doi.org/10.3389/fnimg.2022.981642
  • © 2025 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
