Research article

A modified inertial viscosity extragradient type method for equilibrium problems application to classification of diabetes mellitus: Machine learning methods

  • Received: 30 June 2022 Revised: 29 September 2022 Accepted: 09 October 2022 Published: 17 October 2022
  • MSC : 65K15, 47H05, 49M37

• Diabetes is one of the four major types of noncommunicable diseases (cardiovascular disease, diabetes, cancer and chronic respiratory diseases). It is a chronic condition that occurs when the body does not produce enough insulin, which results in raised blood sugar levels. Insulin is a hormone that regulates blood sugar after food consumption. Without proper treatment, organs such as the kidneys, nervous system and eyes can deteriorate. It is therefore important to predict diabetes as early as possible, because the disease can lead to serious damage to many of the body's systems. In this paper, we modify the extragradient method with an inertial extrapolation step and a viscosity-type method to solve equilibrium problems with a pseudomonotone bifunction in real Hilbert spaces. A strong convergence result is obtained under the assumption that the bifunction satisfies a Lipschitz-type condition. Moreover, we show that the stepsize parameter can be chosen in several ways, which makes our algorithm flexible to use. Finally, we apply our algorithm to the classification of diabetes mellitus in machine learning and demonstrate its efficiency by comparing it with existing algorithms.

    Citation: Suthep Suantai, Watcharaporn Yajai, Pronpat Peeyada, Watcharaporn Cholamjiak, Petcharaporn Chachvarat. A modified inertial viscosity extragradient type method for equilibrium problems application to classification of diabetes mellitus: Machine learning methods[J]. AIMS Mathematics, 2023, 8(1): 1102-1126. doi: 10.3934/math.2023055




Let $H$ be a real Hilbert space and $K$ a nonempty, closed and convex subset of $H$. The equilibrium problem (EP) is to find an element $u \in K$ such that

$$f(u,v) \geq 0, \quad \forall v \in K, \qquad (1.1)$$

where $f : K \times K \to \mathbb{R}$ is a bifunction satisfying $f(z,z) = 0$ for all $z \in H$, and $EP(f,K)$ denotes the solution set of EP (1.1). EP (1.1) generalizes many problems in optimization, such as variational inequality problems, Nash equilibrium problems and linear programming problems, among others.
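
For instance (a standard special case, recalled here only for illustration), the classical variational inequality for an operator $A : K \to H$ is obtained from (1.1) by taking the bifunction

$$f(x,y) := \langle Ax,\ y - x\rangle, \qquad x, y \in K,$$

so that $u$ solves (1.1) precisely when $\langle Au,\ v - u\rangle \geq 0$ for all $v \in K$.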

To solve EP (1.1), the two-step extragradient method (TSEM) was proposed by Tran et al. [25] in 2008, inspired by Korpelevich's extragradient method for variational inequalities [10]. The convergence theorem was proved under a Lipschitz-type condition on the bifunction $f$; however, only weak convergence was obtained in Hilbert spaces. Strong convergence was achieved by Hieu [8] with the Halpern subgradient extragradient method (HSEM), which was adapted from the method of Kraikaew and Saejung [11] for variational inequalities. This method is defined by $u \in H$ and

$$\begin{cases} y_i = \operatorname*{arg\,min}_{y \in K}\big\{\lambda f(x_i,y) + \tfrac{1}{2}\|x_i - y\|^2\big\},\\[1mm] T_i = \big\{v \in H : \langle (x_i - \lambda t_i) - y_i,\ v - y_i\rangle \leq 0\big\}, \quad t_i \in \partial_2 f(x_i,y_i),\\[1mm] z_i = \operatorname*{arg\,min}_{y \in T_i}\big\{\lambda f(y_i,y) + \tfrac{1}{2}\|x_i - y\|^2\big\},\\[1mm] x_{i+1} = \alpha_i u + (1-\alpha_i) z_i, \end{cases} \qquad (1.2)$$

where $\lambda$ is a constant whose admissible range depends on the Lipschitz-type constants of the bifunction $f$, and $\{\alpha_i\} \subset (0,1)$ satisfies the standard conditions

$$\lim_{i\to\infty}\alpha_i = 0, \qquad \sum_{i=1}^{\infty}\alpha_i = \infty. \qquad (1.3)$$

Very recently, Muangchoo [14] combined a viscosity-type method with the extragradient algorithm to obtain a strong convergence theorem for EP (1.1). This method is defined by

$$\begin{cases} v_i = \operatorname*{arg\,min}_{v \in K}\big\{\lambda_i f(z_i,v) + \tfrac{1}{2}\|z_i - v\|^2\big\},\\[1mm] \varepsilon_i = \big\{t \in H : \langle z_i - \lambda_i \omega_i - v_i,\ t - v_i\rangle \leq 0\big\}, \quad \omega_i \in \partial_2 f(z_i,v_i),\\[1mm] t_i = \operatorname*{arg\,min}_{v \in \varepsilon_i}\big\{\mu\lambda_i f(v_i,v) + \tfrac{1}{2}\|z_i - v\|^2\big\},\\[1mm] z_{i+1} = \alpha_i g(z_i) + (1-\alpha_i) t_i, \end{cases} \qquad (1.4)$$

where $\mu \in (0,\sigma) \subset \big(0, \min\{1, \tfrac{1}{2c_1}, \tfrac{1}{2c_2}\}\big)$, $g$ is a contraction on $H$ with contraction constant $\xi \in [0,1)$ (that is, $\|g(x) - g(y)\| \leq \xi\|x - y\|$ for all $x,y \in H$), $\{\alpha_i\}$ satisfies the standard conditions $\lim_{i\to\infty}\alpha_i = 0$ and $\sum_{i=1}^{+\infty}\alpha_i = +\infty$, and the step sizes $\{\lambda_i\}$ are updated without prior knowledge of the Lipschitz-type constants of the bifunction $f$ according to the following rule:

$$\lambda_{i+1} = \begin{cases} \min\Big\{\sigma,\ \dfrac{\mu f(v_i,t_i)}{K_i}\Big\}, & \text{if } \dfrac{\mu f(v_i,t_i)}{K_i} > 0,\\[2mm] \lambda_0, & \text{otherwise}, \end{cases} \qquad (1.5)$$

where $K_i = f(z_i,t_i) - f(z_i,v_i) - c_1\|z_i - v_i\|^2 - c_2\|t_i - v_i\|^2 + 1$.

Techniques to speed up the convergence of such algorithms have attracted the interest of many mathematicians. One of them is the inertial technique, first introduced by Polyak [16]. Very recently, Shehu et al. [22] combined the inertial technique with a Halpern-type algorithm and the subgradient extragradient method to obtain strong convergence to a solution of $EP(f,K)$ when $f$ is pseudomonotone. This method is defined by $\gamma \in H$ and

$$\begin{cases} w_i = \alpha_i\gamma + (1-\alpha_i)z_i + \delta_i(z_i - z_{i-1}),\\[1mm] v_i = \operatorname*{arg\,min}_{v \in K}\big\{\lambda_i f(w_i,v) + \tfrac{1}{2}\|w_i - v\|^2\big\},\\[1mm] \varepsilon_i = \big\{t \in H : \langle (w_i - \lambda_i\omega_i) - v_i,\ t - v_i\rangle \leq 0\big\}, \quad \omega_i \in \partial_2 f(w_i,v_i),\\[1mm] t_i = \operatorname*{arg\,min}_{v \in \varepsilon_i}\big\{\lambda_i f(v_i,v) + \tfrac{1}{2}\|w_i - v\|^2\big\},\\[1mm] z_{i+1} = \tau w_i + (1-\tau) t_i, \end{cases} \qquad (1.6)$$

where the inertial parameter $\delta_i \in [0,\tfrac{1}{3})$, $\tau \in (0,\tfrac{1}{2}]$, and the updated step size $\{\lambda_i\}$ satisfies the following:

$$\lambda_{i+1} = \begin{cases} \min\Big\{\dfrac{\mu}{2}\,\dfrac{\|w_i - v_i\|^2 + \|t_i - v_i\|^2}{f(w_i,t_i) - f(w_i,v_i) - f(v_i,t_i)},\ \lambda_i\Big\}, & \text{if } f(w_i,t_i) - f(w_i,v_i) - f(v_i,t_i) > 0,\\[2mm] \lambda_i, & \text{otherwise}, \end{cases} \qquad (1.7)$$

and $\{\alpha_i\}$ still satisfies the standard conditions $\lim_{i\to\infty}\alpha_i = 0$ and $\sum_{i=1}^{\infty}\alpha_i = \infty$. This update rule for $\{\lambda_i\}$ is restrictive in computation and cannot easily be modified in another way.

In this paper, motivated and inspired by the above works, we introduce a modified inertial viscosity extragradient type method for solving equilibrium problems. Moreover, we apply our main result to solve a data classification problem in machine learning and show the performance of our algorithm by comparing it with several existing methods.

Let us begin with some concepts of monotonicity of a bifunction [2,15]. Let $K$ be a nonempty, closed and convex subset of $H$. A bifunction $f : H \times H \to \mathbb{R}$ is said to be:

(i) strongly monotone on $K$ if there exists a constant $\gamma > 0$ such that

$$f(x,y) + f(y,x) \leq -\gamma\|x - y\|^2, \quad \forall x,y \in K;$$

(ii) monotone on $K$ if $f(x,y) \leq -f(y,x)$, for all $x,y \in K$;

(iii) pseudomonotone on $K$ if $f(x,y) \geq 0 \Rightarrow f(y,x) \leq 0$, for all $x,y \in K$;

(iv) satisfying the Lipschitz-type condition on $K$ if there exist two positive constants $c_1, c_2$ such that

$$c_1\|x - y\|^2 + c_2\|y - z\|^2 \geq f(x,z) - f(x,y) - f(y,z), \quad \forall x,y,z \in K.$$

The normal cone $N_K$ to $K$ at a point $x \in K$ is defined by $N_K(x) = \{w \in H : \langle w,\ x - y\rangle \geq 0,\ \forall y \in K\}$. For every $x \in H$, the metric projection $P_K x$ of $x$ onto $K$ is the nearest point to $x$ in $K$, that is, $P_K x = \arg\min\{\|y - x\| : y \in K\}$. Since $K$ is nonempty, closed and convex, $P_K x$ exists and is unique. For each $x, z \in H$, by $\partial_2 f(z,x)$ we denote the subdifferential of the convex function $f(z,\cdot)$ at $x$, i.e.,

$$\partial_2 f(z,x) = \{u \in H : f(z,y) \geq \langle u,\ y - x\rangle + f(z,x),\ \forall y \in H\}.$$

In particular, for $z \in K$,

$$\partial_2 f(z,z) = \{u \in H : f(z,y) \geq \langle u,\ y - z\rangle,\ \forall y \in H\}.$$

    For proving the convergence of the proposed algorithm, we need the following lemmas.

Lemma 2.1. [1] For each $x \in H$ and $\lambda > 0$,

$$\frac{1}{\lambda}\langle x - \mathrm{prox}_{\lambda g}(x),\ y - \mathrm{prox}_{\lambda g}(x)\rangle \leq g(y) - g(\mathrm{prox}_{\lambda g}(x)), \quad \forall y \in K,$$

where $\mathrm{prox}_{\lambda g}(x) = \arg\min\{\lambda g(y) + \tfrac{1}{2}\|x - y\|^2 : y \in K\}$.

For any point $u \in H$, there exists a unique point $P_K u \in K$ such that

$$\|u - P_K u\| \leq \|u - y\|, \quad \forall y \in K.$$

$P_K$ is called the metric projection of $H$ onto $K$. We know that $P_K$ is a nonexpansive mapping of $H$ onto $K$, that is, $\|P_K x - P_K y\| \leq \|x - y\|$ for all $x,y \in H$.

Lemma 2.2. [18] Let $g : K \to \mathbb{R}$ be a convex, subdifferentiable and lower semicontinuous function on $K$. Suppose the convex set $K$ has nonempty interior, or $g$ is continuous at a point $x \in K$. Then, $x$ is a solution to the convex problem $\min\{g(x) : x \in K\}$ if and only if $0 \in \partial g(x) + N_K(x)$, where $\partial g(\cdot)$ denotes the subdifferential of $g$ and $N_K(x)$ is the normal cone of $K$ at $x$.

Lemma 2.3. [19,22] Let $S \subset H$ be a nonempty, closed and convex subset of a real Hilbert space $H$. Let $u \in H$ be arbitrarily given, $z = P_S u$, and $\Omega = \{x \in H : \langle x - u,\ x - z\rangle \leq 0\}$. Then $\Omega \cap S = \{z\}$.

Lemma 2.4. [28] Let $\{a_i\}$ and $\{c_i\}$ be nonnegative sequences of real numbers such that $\sum_{i=1}^{\infty} c_i < \infty$, and let $\{b_i\}$ be a sequence of real numbers such that $\limsup_{i\to\infty} b_i \leq 0$. If for any $i \in \mathbb{N}$,

$$a_{i+1} \leq (1 - \gamma_i)a_i + \gamma_i b_i + c_i,$$

where $\{\gamma_i\}$ is a sequence in $(0,1)$ such that $\sum_{i=1}^{\infty}\gamma_i = \infty$, then $\lim_{i\to\infty} a_i = 0$.

Lemma 2.5. [29] Let $\{a_i\}$, $\{b_i\}$ and $\{c_i\}$ be positive sequences such that

$$a_{i+1} \leq (1 + c_i)a_i + b_i, \quad \forall i \geq 1.$$

If $\sum_{i=1}^{\infty} c_i < +\infty$ and $\sum_{i=1}^{\infty} b_i < +\infty$, then $\lim_{i\to+\infty} a_i$ exists.

The convergence of Algorithm 3.1 will be established under the following conditions.

Condition 2.6. (A1) $f$ is pseudomonotone on $K$ with $\mathrm{int}(K) \neq \emptyset$, or $f(x,\cdot)$ is continuous at some $z \in K$ for every $x \in K$;

(A2) $f$ satisfies the Lipschitz-type condition on $H$ with two constants $c_1$ and $c_2$;

(A3) $f(\cdot,y)$ is sequentially weakly upper semicontinuous on $K$ for each fixed $y \in K$, i.e., if $\{x_i\} \subset K$ is a sequence converging weakly to $x \in K$, then $f(x,y) \geq \limsup_{i\to\infty} f(x_i,y)$;

(A4) for each $x \in H$, $f(x,\cdot)$ is convex, lower semicontinuous and subdifferentiable on $H$;

(A5) $V : H \to H$ is a contraction with contraction constant $\alpha$.

Now, we are in a position to present a modification of the extragradient algorithm (EGM) in [25] for equilibrium problems.

    Algorithm 3.1. (Modified viscosity type inertial extragradient algorithm for EPs)
Initialization. Let $x_0, x_1 \in H$, $0 < \lambda_i \leq \lambda < \frac{1}{2\max\{c_1,c_2\}}$, $\tau \in (0,\frac{1}{2}]$.
Step 1. Given the current iterates $x_{i-1}$ and $x_i$ ($i \geq 1$) and $\alpha_i \in (0,1)$, $\theta_i \in [0,\frac{1}{3})$, compute
$$w_i = \alpha_i V(x_i) + (1-\alpha_i)x_i + \theta_i(x_i - x_{i-1}), \qquad y_i = \operatorname*{arg\,min}_{y \in K}\big\{\lambda_i f(w_i,y) + \tfrac{1}{2}\|y - w_i\|^2\big\}.$$
If $y_i = w_i$, then stop and $y_i$ is a solution. Otherwise, go to Step 2.
Step 2. Compute
$$z_i = \operatorname*{arg\,min}_{y \in K}\big\{\lambda_i f(y_i,y) + \tfrac{1}{2}\|y - w_i\|^2\big\}.$$
Step 3. Compute
$$x_{i+1} = (1-\tau)w_i + \tau z_i.$$
Set $i = i + 1$ and return to Step 1.
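
To make the iteration concrete, below is a minimal numerical sketch of Algorithm 3.1 for the illustrative special case $f(x,y) = \langle Ax,\ y - x\rangle$ with a positive definite matrix $A$ and $K$ a Euclidean ball, in which both arg-min steps reduce to projections; the choices of $A$, $K$, $V(x) = x/2$ and all parameter values are assumptions made for this sketch, not prescriptions from the paper.

```python
import numpy as np

# Sketch of Algorithm 3.1 for f(x, y) = <Ax, y - x> (a variational inequality
# written as an EP); both arg-min steps reduce to projections onto K.
rng = np.random.default_rng(0)
n = 5
M = rng.standard_normal((n, n))
A = M @ M.T + np.eye(n)                 # positive definite, so f is monotone
L = np.linalg.norm(A, 2)                # Lipschitz constant of A; here c1 = c2 = L/2

def proj_ball(x, r=10.0):               # projection onto K = {x : ||x|| <= r}
    nx = np.linalg.norm(x)
    return x if nx <= r else (r / nx) * x

V = lambda x: 0.5 * x                   # an illustrative contraction
lam = 0.5 / L                           # 0 < lambda < 1 / (2 max{c1, c2}) = 1/L
tau, theta = 0.5, 0.3                   # tau in (0, 1/2], theta in [0, 1/3)

x_prev = rng.standard_normal(n)
x = rng.standard_normal(n)
for i in range(1, 201):
    alpha = 1.0 / (10 * i + 1)
    w = alpha * V(x) + (1 - alpha) * x + theta * (x - x_prev)   # Step 1
    y = proj_ball(w - lam * (A @ w))                            # arg-min = projection
    if np.allclose(y, w):
        break                                                   # y solves the EP
    z = proj_ball(w - lam * (A @ y))                            # Step 2
    x_prev, x = x, (1 - tau) * w + tau * z                      # Step 3
print("approximate solution:", x)
```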

    In this section, we will analyse the convergence of Algorithm 3.1.

    For the rest of this paper, we assume the following condition.

Condition 3.2. (i) $\{\alpha_i\} \subset (0,1]$ is non-increasing with $\lim_{i\to\infty}\alpha_i = 0$ and $\sum_{i=1}^{\infty}\alpha_i = \infty$;

(ii) $0 \leq \theta_i \leq \theta_{i+1} \leq \theta < \frac{1}{3}$ and $\lim_{i\to\infty}\frac{\theta_i}{\alpha_i}\|x_i - x_{i-1}\| = 0$;

(iii) $EP(f,K) \neq \emptyset$.

    Before we prove the strong convergence result, we need some lemmas below.

    Lemma 3.3. Assume that Conditions 2.6 and 3.2 hold. Let {xi} be generated by Algorithm 3.1. Then there exists N>0 such that

$$\|x_{i+1} - u\|^2 \leq \|w_i - u\|^2 - \|x_{i+1} - w_i\|^2, \quad \forall u \in EP(f,K),\ \forall i \geq N. \qquad (3.1)$$

    Proof. By the definition of yi, and Lemma 2.1, we have

$$\frac{1}{\lambda_i}\langle w_i - y_i,\ y - y_i\rangle \leq f(w_i,y) - f(w_i,y_i), \quad \forall y \in K. \qquad (3.2)$$

    Putting y=zi into (3.2), we obtain

$$\frac{1}{\lambda_i}\langle y_i - w_i,\ y_i - z_i\rangle \leq f(w_i,z_i) - f(w_i,y_i). \qquad (3.3)$$

    By the definition of zi, we have

$$\frac{1}{\lambda_i}\langle w_i - z_i,\ y - z_i\rangle \leq f(y_i,y) - f(y_i,z_i), \quad \forall y \in K. \qquad (3.4)$$

    (3.3) and (3.4) imply that

$$\frac{2}{\lambda_i}\langle w_i - z_i,\ y - z_i\rangle + \frac{2}{\lambda_i}\langle y_i - w_i,\ y_i - z_i\rangle \leq 2f(y_i,y) + 2\big(f(w_i,z_i) - f(w_i,y_i) - f(y_i,z_i)\big). \qquad (3.5)$$

If $f(w_i,z_i) - f(w_i,y_i) - f(y_i,z_i) > 0$, then

$$f(w_i,z_i) - f(w_i,y_i) - f(y_i,z_i) \leq c_1\|w_i - y_i\|^2 + c_2\|z_i - y_i\|^2. \qquad (3.6)$$

Observe that (3.6) is also satisfied if $f(w_i,z_i) - f(w_i,y_i) - f(y_i,z_i) \leq 0$. By (3.5) and (3.6), we have

$$\langle w_i - z_i,\ y - z_i\rangle + \langle y_i - w_i,\ y_i - z_i\rangle \leq \lambda_i f(y_i,y) + \lambda_i c_1\|w_i - y_i\|^2 + \lambda_i c_2\|z_i - y_i\|^2. \qquad (3.7)$$

    Note that

$$\langle w_i - z_i,\ z_i - y\rangle = \tfrac{1}{2}\big(\|w_i - y\|^2 - \|w_i - z_i\|^2 - \|z_i - y\|^2\big) \qquad (3.8)$$

    and

$$\langle w_i - y_i,\ z_i - y_i\rangle = \tfrac{1}{2}\big(\|w_i - y_i\|^2 + \|z_i - y_i\|^2 - \|w_i - z_i\|^2\big). \qquad (3.9)$$

Using (3.8) and (3.9) in (3.7), we obtain, for all $y \in K$,

$$\|z_i - y\|^2 \leq \|w_i - y\|^2 - (1 - 2\lambda_i c_1)\|w_i - y_i\|^2 - (1 - 2\lambda_i c_2)\|z_i - y_i\|^2 + 2\lambda_i f(y_i,y). \qquad (3.10)$$

Taking $y = u \in EP(f,K) \subset K$, one has $f(u,y_i) \geq 0$ for all $i$. By (A1), we obtain $f(y_i,u) \leq 0$ for all $i$. Hence, we obtain from (3.10) that

$$\|z_i - u\|^2 \leq \|w_i - u\|^2 - (1 - 2\lambda_i c_1)\|w_i - y_i\|^2 - (1 - 2\lambda_i c_2)\|z_i - y_i\|^2. \qquad (3.11)$$

It follows from $\lambda_i \in \big(0, \frac{1}{2\max\{c_1,c_2\}}\big)$ and (3.11) that

$$\|z_i - u\| \leq \|w_i - u\|. \qquad (3.12)$$

    On the other hand, we have

$$\|x_{i+1} - u\|^2 = (1-\tau)\|w_i - u\|^2 + \tau\|z_i - u\|^2 - (1-\tau)\tau\|z_i - w_i\|^2. \qquad (3.13)$$

    Substituting (3.11) into (3.13), we obtain

$$\|x_{i+1} - u\|^2 \leq \|w_i - u\|^2 - \tau(1 - 2\lambda_i c_1)\|w_i - y_i\|^2 - \tau(1 - 2\lambda_i c_2)\|z_i - y_i\|^2 - (1-\tau)\tau\|z_i - w_i\|^2. \qquad (3.14)$$

Moreover, we have $z_i - w_i = \frac{1}{\tau}(x_{i+1} - w_i)$, which together with (3.14) gives

$$\begin{aligned}\|x_{i+1} - u\|^2 &\leq \|w_i - u\|^2 - \tau(1 - 2\lambda_i c_1)\|w_i - y_i\|^2 - \tau(1 - 2\lambda_i c_2)\|z_i - y_i\|^2 - (1-\tau)\tau\frac{1}{\tau^2}\|x_{i+1} - w_i\|^2\\ &\leq \|w_i - u\|^2 - \frac{1-\tau}{\tau}\|x_{i+1} - w_i\|^2\\ &= \|w_i - u\|^2 - \epsilon\|x_{i+1} - w_i\|^2, \quad \forall i \geq N, \end{aligned} \qquad (3.15)$$

where $\epsilon = \frac{1-\tau}{\tau}$.

Lemma 3.4. Assume that Conditions 2.6 and 3.2 hold. Let $\{x_i\}$ be generated by Algorithm 3.1. Then, for all $u \in EP(f,K)$,

$$\begin{aligned} 2\alpha_i\langle x_i - u,\ x_i - V(x_i)\rangle &\leq \|x_i - u\|^2 - \|x_{i+1} - u\|^2 - 2\theta_{i+1}\|x_{i+1} - x_i\|^2 + 2\theta_i\|x_i - x_{i-1}\|^2 + \alpha_{i+1}\|V(x_i) - x_{i+1}\|^2\\ &\quad + \alpha_i\|x_i - V(x_i)\|^2 + \theta_i\|x_i - u\|^2 - \theta_{i-1}\|x_{i-1} - u\|^2 - (1 - 3\theta_{i+1} - \alpha_i)\|x_i - x_{i+1}\|^2. \end{aligned} \qquad (3.16)$$

Proof. By Lemma 3.3, we have

$$\|x_{i+1} - u\|^2 \leq \|w_i - u\|^2 - \|x_{i+1} - w_i\|^2. \qquad (3.17)$$

    Moreover, from the definition of wi, we obtain that

$$\begin{aligned}\|w_i - u\|^2 &= \|x_i - u\|^2 + \|\theta_i(x_i - x_{i-1}) - \alpha_i(x_i - V(x_i))\|^2 + 2\langle x_i - u,\ \theta_i(x_i - x_{i-1}) - \alpha_i(x_i - V(x_i))\rangle\\ &= \|x_i - u\|^2 + \|\theta_i(x_i - x_{i-1}) - \alpha_i(x_i - V(x_i))\|^2 + 2\theta_i\langle x_i - u,\ x_i - x_{i-1}\rangle - 2\alpha_i\langle x_i - u,\ x_i - V(x_i)\rangle. \end{aligned} \qquad (3.18)$$

    Replacing u by xi+1 in (3.18), we have

$$\|w_i - x_{i+1}\|^2 = \|x_i - x_{i+1}\|^2 + \|\alpha_i(x_i - V(x_i)) - \theta_i(x_i - x_{i-1})\|^2 + 2\theta_i\langle x_i - x_{i+1},\ x_i - x_{i-1}\rangle - 2\alpha_i\langle x_i - x_{i+1},\ x_i - V(x_i)\rangle. \qquad (3.19)$$

    Substituting (3.18) and (3.19) into (3.17), we have

$$\begin{aligned}\|x_{i+1} - u\|^2 &\leq \|x_i - u\|^2 + \|\theta_i(x_i - x_{i-1}) - \alpha_i(x_i - V(x_i))\|^2 + 2\theta_i\langle x_i - u,\ x_i - x_{i-1}\rangle - 2\alpha_i\langle x_i - u,\ x_i - V(x_i)\rangle\\ &\quad - \|x_i - x_{i+1}\|^2 - 2\theta_i\langle x_i - x_{i+1},\ x_i - x_{i-1}\rangle + 2\alpha_i\langle x_i - x_{i+1},\ x_i - V(x_i)\rangle - \|\alpha_i(x_i - V(x_i)) - \theta_i(x_i - x_{i-1})\|^2\\ &= \|x_i - u\|^2 - \|x_i - x_{i+1}\|^2 + 2\theta_i\langle x_i - u,\ x_i - x_{i-1}\rangle - 2\alpha_i\langle x_i - u,\ x_i - V(x_i)\rangle + 2\alpha_i\langle x_i - x_{i+1},\ x_i - V(x_i)\rangle\\ &\quad + \theta_i\|x_i - x_{i+1}\|^2 + \theta_i\|x_i - x_{i-1}\|^2 - \theta_i\|(x_i - x_{i+1}) + (x_i - x_{i-1})\|^2. \end{aligned} \qquad (3.20)$$

    Therefore, we obtain

$$\begin{aligned} &\|x_{i+1} - u\|^2 - \|x_i - u\|^2 - \theta_i\|x_i - x_{i-1}\|^2 + \|x_i - x_{i+1}\|^2 - \theta_i\|x_i - x_{i+1}\|^2 - 2\theta_i\langle x_i - u,\ x_i - x_{i-1}\rangle\\ &\leq -2\alpha_i\langle x_i - u,\ x_i - V(x_i)\rangle + 2\alpha_i\langle x_i - x_{i+1},\ x_i - V(x_i)\rangle\\ &= -2\alpha_i\langle x_i - u,\ x_i - V(x_i)\rangle - \alpha_i\|V(x_i) - x_{i+1}\|^2 + \alpha_i\|x_{i+1} - x_i\|^2 + \alpha_i\|x_i - V(x_i)\|^2. \end{aligned} \qquad (3.21)$$

    It follows that

$$\begin{aligned} 2\alpha_i\langle x_i - u,\ x_i - V(x_i)\rangle &\leq \|x_i - u\|^2 - \|x_{i+1} - u\|^2 + 2\theta_i\|x_i - x_{i-1}\|^2 + \theta_i\|x_i - u\|^2 - \theta_i\|x_{i-1} - u\|^2\\ &\quad + \alpha_i\|x_i - V(x_i)\|^2 - \alpha_i\|V(x_i) - x_{i+1}\|^2 - (1 - \theta_i - \alpha_i)\|x_i - x_{i+1}\|^2. \end{aligned} \qquad (3.22)$$

    Since θi is non-decreasing and αi is non-increasing, we then obtain

$$\begin{aligned} 2\alpha_i\langle x_i - u,\ x_i - V(x_i)\rangle &\leq \|x_i - u\|^2 - \|x_{i+1} - u\|^2 - 2\theta_{i+1}\|x_{i+1} - x_i\|^2 + 2\theta_i\|x_i - x_{i-1}\|^2 + \alpha_{i+1}\|V(x_i) - x_{i+1}\|^2\\ &\quad + \alpha_i\|x_i - V(x_i)\|^2 + \theta_i\|x_i - u\|^2 - \theta_{i-1}\|x_{i-1} - u\|^2 - (1 - 3\theta_{i+1} - \alpha_i)\|x_i - x_{i+1}\|^2. \end{aligned}$$

    Lemma 3.5. Assume that Conditions 2.6 and 3.2 hold. Then {xi} generated by Algorithm 3.1 is bounded.

    Proof. From (3.15) and Condition 3.2 (ii), there exists K>0 such that

$$\begin{aligned}\|x_{i+1} - u\| &\leq \|w_i - u\| = \|\alpha_i V(x_i) + (1-\alpha_i)x_i + \theta_i(x_i - x_{i-1}) - u\|\\ &\leq \alpha_i\|V(x_i) - u\| + (1-\alpha_i)\|x_i - u\| + \theta_i\|x_i - x_{i-1}\|\\ &= \alpha_i\|V(x_i) - u\| + (1-\alpha_i)\|x_i - u\| + \alpha_i\frac{\theta_i}{\alpha_i}\|x_i - x_{i-1}\| \qquad (3.23)\\ &\leq \alpha_i\|V(x_i) - u\| + (1-\alpha_i)\|x_i - u\| + \alpha_i K \leq \alpha_i\big(\|V(x_i) - V(u)\| + \|V(u) - u\|\big) + (1-\alpha_i)\|x_i - u\| + \alpha_i K \qquad (3.24)\\ &\leq \big(1 - \alpha_i(1-\alpha)\big)\|x_i - u\| + \alpha_i(1-\alpha)\frac{\|V(u) - u\| + K}{1-\alpha} \qquad (3.25)\\ &\leq \max\Big\{\|x_i - u\|,\ \frac{\|V(u) - u\| + K}{1-\alpha}\Big\}. \qquad (3.26)\end{aligned}$$

This implies that $\|x_{i+1} - u\| \leq \max\big\{\|x_1 - u\|,\ \frac{\|V(u) - u\| + K}{1-\alpha}\big\}$. This shows that $\{x_i\}$ is bounded.

Lemma 3.6. Assume that Conditions 2.6 and 3.2 hold. Let $\{x_i\}$ be generated by Algorithm 3.1. For each $i \geq 1$, define

$$u_i = \|x_i - u\|^2 - \theta_{i-1}\|x_{i-1} - u\|^2 + 2\theta_i\|x_i - x_{i-1}\|^2 + \alpha_i\|x_i - V(x_i)\|^2.$$

Then $u_i \geq 0$.

Proof. Since $\{\theta_i\}$ is non-decreasing with $0 \leq \theta_i < \frac{1}{3}$, and $2\langle x,\ y\rangle = \|x\|^2 + \|y\|^2 - \|x - y\|^2$ for all $x,y \in H$, we have

$$\begin{aligned} u_i &= \|x_i - u\|^2 - \theta_{i-1}\big[\|x_{i-1} - x_i\|^2 + \|x_i - u\|^2 + 2\langle x_{i-1} - x_i,\ x_i - u\rangle\big] + 2\theta_i\|x_i - x_{i-1}\|^2 + \alpha_i\|x_i - V(x_i)\|^2\\ &= \|x_i - u\|^2 - \theta_{i-1}\big[2\|x_{i-1} - x_i\|^2 + 2\|x_i - u\|^2 - \|x_{i-1} - 2x_i + u\|^2\big] + 2\theta_i\|x_i - x_{i-1}\|^2 + \alpha_i\|x_i - V(x_i)\|^2\\ &\geq \|x_i - u\|^2 - 2\theta_i\|x_{i-1} - x_i\|^2 - \tfrac{2}{3}\|x_i - u\|^2 + \theta_{i-1}\|x_{i-1} - 2x_i + u\|^2 + 2\theta_i\|x_i - x_{i-1}\|^2 + \alpha_i\|x_i - V(x_i)\|^2\\ &\geq \tfrac{1}{3}\|x_i - u\|^2 + \alpha_i\|x_i - V(x_i)\|^2 \geq 0. \end{aligned}$$

    This completes the proof.

    Lemma 3.7. Assume that Conditions 2.6 and 3.2 hold. Let {xi} be generated by Algorithm 3.1. Suppose

$$\lim_{i\to\infty}\|x_{i+1} - x_i\| = 0,$$

    and

$$\lim_{i\to\infty}\big(\|x_{i+1} - u\|^2 - \theta_i\|x_i - u\|^2\big) = 0.$$

Then $\{x_i\}$ converges strongly to $u \in EP(f,K)$.

    Proof. By our assumptions, we have

$$0 = \lim_{i\to\infty}\big(\|x_{i+1} - u\|^2 - \theta_i\|x_i - u\|^2\big) = \lim_{i\to\infty}\Big[\big(\|x_{i+1} - u\| + \sqrt{\theta_i}\|x_i - u\|\big)\big(\|x_{i+1} - u\| - \sqrt{\theta_i}\|x_i - u\|\big)\Big]. \qquad (3.27)$$

    In the case

$$\lim_{i\to\infty}\big(\|x_{i+1} - u\| + \sqrt{\theta_i}\|x_i - u\|\big) = 0,$$

this implies that $\{x_i\}$ converges strongly to $u$ immediately. Assume this limit does not hold. Then there exist a subset $N \subset \mathbb{N}$ and a constant $\rho > 0$ such that

$$\|x_{i+1} - u\| + \sqrt{\theta_i}\|x_i - u\| \geq \rho, \quad \forall i \in N. \qquad (3.28)$$

Using (3.27) and $\theta_i \leq \theta < 1$, for $i \in N$ it then follows that

$$\begin{aligned} 0 &= \lim_{i\to\infty}\big(\|x_{i+1} - u\| - \sqrt{\theta_i}\|x_i - u\|\big) \geq \limsup_{i\to\infty}\big(\|x_i - u\| - \|x_{i+1} - x_i\| - \sqrt{\theta_i}\|x_i - u\|\big)\\ &\geq \limsup_{i\to\infty}\big((1 - \sqrt{\theta})\|x_i - u\| - \|x_{i+1} - x_i\|\big) = (1 - \sqrt{\theta})\limsup_{i\to\infty}\|x_i - u\| - \lim_{i\to\infty}\|x_{i+1} - x_i\| = (1 - \sqrt{\theta})\limsup_{i\to\infty}\|x_i - u\|. \end{aligned}$$

Consequently, we have $\limsup_{i\to\infty}\|x_i - u\| \leq 0$. Since $\liminf_{i\to\infty}\|x_i - u\| \geq 0$ obviously holds, it follows that $\lim_{i\to\infty}\|x_i - u\| = 0$. This implies, by (3.28),

$$\|x_{i+1} - x_i\| \geq \|x_{i+1} - u\| - \|x_i - u\| = \|x_{i+1} - u\| + \sqrt{\theta_i}\|x_i - u\| - (1 + \sqrt{\theta_i})\|x_i - u\| \geq \frac{\rho}{2},$$

for all $i \in N$ sufficiently large, a contradiction to the assumption that $\lim_{i\to\infty}\|x_{i+1} - x_i\| = 0$. This completes the proof.

    We now give the following strong convergence result of Algorithm 3.1.

Theorem 3.8. Assume that Conditions 2.6 and 3.2 hold. Then $\{x_i\}$ generated by Algorithm 3.1 converges strongly to the solution $u = P_{EP(f,K)}V(u)$.

    Proof. From Lemma 3.6 and (3.16), we have

$$u_{i+1} - u_i \leq \alpha_{i+1}\|x_{i+1} - V(x_{i+1})\|^2 + \alpha_{i+1}\|V(x_i) - x_{i+1}\|^2 - (1 - 3\theta_{i+1} - \alpha_i)\|x_i - x_{i+1}\|^2 - 2\alpha_i\langle x_i - u,\ x_i - V(x_i)\rangle. \qquad (3.29)$$

Since $P_{EP(f,K)}V$ is a contraction, by the Banach fixed point theorem there exists a unique $u = P_{EP(f,K)}V(u)$. It follows from Lemma 3.3 that

$$\begin{aligned}\|x_{i+1} - u\|^2 &\leq \|w_i - u\|^2 = \|\alpha_i(V(x_i) - u) + (1-\alpha_i)(x_i - u) + \theta_i(x_i - x_{i-1})\|^2\\ &\leq \|(1-\alpha_i)(x_i - u) + \theta_i(x_i - x_{i-1})\|^2 + 2\alpha_i\langle V(x_i) - u,\ w_i - u\rangle\\ &= \|(1-\alpha_i)(x_i - u) + \theta_i(x_i - x_{i-1})\|^2 + 2\alpha_i\langle V(x_i) - V(u),\ w_i - u\rangle + 2\alpha_i\langle V(u) - u,\ w_i - u\rangle\\ &\leq \|(1-\alpha_i)(x_i - u) + \theta_i(x_i - x_{i-1})\|^2 + 2\alpha_i\langle V(u) - u,\ w_i - u\rangle + 2\alpha_i\alpha\|x_i - u\|\|w_i - u\|\\ &\leq \|(1-\alpha_i)(x_i - u) + \theta_i(x_i - x_{i-1})\|^2 + 2\alpha_i\langle V(u) - u,\ w_i - u\rangle + \alpha_i\alpha\big(\|x_i - u\|^2 + \|w_i - u\|^2\big), \end{aligned}$$

which implies

$$\begin{aligned}\|x_{i+1} - u\|^2 &\leq \frac{1}{1-\alpha_i\alpha}\Big(\|(1-\alpha_i)(x_i - u) + \theta_i(x_i - x_{i-1})\|^2 + \alpha_i\alpha\|x_i - u\|^2 + 2\alpha_i\langle V(u) - u,\ w_i - u\rangle\Big)\\ &\leq \frac{1}{1-\alpha_i\alpha}\Big((1-\alpha_i)^2\|x_i - u\|^2 + 2\theta_i\langle x_i - x_{i-1},\ (1-\alpha_i)(x_i - u) + \theta_i(x_i - x_{i-1})\rangle + \alpha_i\alpha\|x_i - u\|^2 + 2\alpha_i\langle V(u) - u,\ w_i - u\rangle\Big)\\ &= \frac{(1-\alpha_i)^2 + \alpha_i\alpha}{1-\alpha_i\alpha}\|x_i - u\|^2 + \frac{1}{1-\alpha_i\alpha}\Big(2\theta_i\langle x_i - x_{i-1},\ (1-\alpha_i)(x_i - u) + \theta_i(x_i - x_{i-1})\rangle + 2\alpha_i\langle V(u) - u,\ w_i - u\rangle\Big)\\ &= \Big(1 - \frac{2\alpha_i(1-\alpha)}{1-\alpha_i\alpha} + \frac{\alpha_i^2}{1-\alpha_i\alpha}\Big)\|x_i - u\|^2 + \frac{1}{1-\alpha_i\alpha}\Big(2\theta_i\langle x_i - x_{i-1},\ (1-\alpha_i)(x_i - u) + \theta_i(x_i - x_{i-1})\rangle + 2\alpha_i\langle V(u) - u,\ w_i - u\rangle\Big)\\ &= \Big(1 - \frac{2\alpha_i(1-\alpha)}{1-\alpha_i\alpha}\Big)\|x_i - u\|^2 + \frac{2\alpha_i(1-\alpha)}{1-\alpha_i\alpha}\Big(\frac{\alpha_i}{2(1-\alpha)}\|x_i - u\|^2 + \frac{\theta_i}{\alpha_i(1-\alpha)}\langle x_i - x_{i-1},\ (1-\alpha_i)(x_i - u) + \theta_i(x_i - x_{i-1})\rangle + \frac{1}{1-\alpha}\langle V(u) - u,\ w_i - u\rangle\Big). \end{aligned} \qquad (3.30)$$

We consider two cases.

Case 1. Suppose that $u_{i+1} \leq u_i + t_i$ for all $i \geq i_0$, for some $i_0 \in \mathbb{N}$, where $t_i \geq 0$ and $\sum_{i=1}^{\infty}t_i < +\infty$. Since $u_i \geq 0$ for all $i \geq 1$, Lemma 2.5 implies that $\lim_{i\to\infty}u_i = \lim_{i\to\infty}u_{i+1}$ exists. Since $\{x_i\}$ is bounded by Lemma 3.5, there exists $M_1 > 0$ such that $2|\langle x_i - u,\ x_i - V(x_i)\rangle| \leq M_1$ and $M_2 > 0$ such that $\|x_{i+1} - V(x_{i+1})\|^2 + \|V(x_i) - x_{i+1}\|^2 \leq M_2$. Since $0 \leq \theta_i \leq \theta_{i+1} \leq \theta < \frac{1}{3}$ and $\lim_{i\to\infty}\alpha_i = 0$, there exist $N \in \mathbb{N}$ and $\gamma_1 > 0$ such that $1 - 3\theta_{i+1} - \alpha_i \geq \gamma_1$ for all $i \geq N$. Therefore, for $i \geq N$, we obtain from (3.29) that

$$\gamma_1\|x_{i+1} - x_i\|^2 \leq \alpha_i M_1 + \alpha_{i+1}M_2 + u_i - u_{i+1} \to 0, \qquad (3.31)$$

as $i \to \infty$. Hence $\lim_{i\to\infty}\|x_{i+1} - x_i\| = 0$. For $u \in EP(f,K)$, we have

$$\begin{aligned}\|w_i - u\|^2 &= \|\alpha_i V(x_i) + (1-\alpha_i)x_i + \theta_i(x_i - x_{i-1}) - u\|^2\\ &\leq \|\alpha_i V(x_i) + (1-\alpha_i)x_i - u\|^2 + 2\theta_i\langle x_i - x_{i-1},\ w_i - u\rangle\\ &\leq \alpha_i\|V(x_i) - u\|^2 + (1-\alpha_i)\|x_i - u\|^2 + 2\theta_i\|x_i - x_{i-1}\|\|w_i - u\|\\ &\leq \alpha_i\|V(x_i) - u\|^2 + (1-\alpha_i)\|x_i - u\|^2 + 2\frac{\theta_i}{\alpha_i}\|x_i - x_{i-1}\|\|w_i - u\|\\ &\leq \alpha_i\|V(x_i) - u\|^2 + \|x_i - u\|^2 + 2\frac{\theta_i}{\alpha_i}\|x_i - x_{i-1}\|\|w_i - u\|, \end{aligned} \qquad (3.32)$$

    and from (3.14), we have

$$\begin{aligned}\|x_{i+1} - u\|^2 &\leq \|w_i - u\|^2 - \tau(1 - 2\lambda_i c_1)\|w_i - y_i\|^2 - \tau(1 - 2\lambda_i c_2)\|z_i - y_i\|^2 - (1-\tau)\tau\frac{1}{\tau^2}\|x_{i+1} - w_i\|^2\\ &\leq \alpha_i\|V(x_i) - u\|^2 + \|x_i - u\|^2 + 2\frac{\theta_i}{\alpha_i}\|x_i - x_{i-1}\|\|w_i - u\| - \tau(1 - 2\lambda_i c_1)\|w_i - y_i\|^2\\ &\quad - \tau(1 - 2\lambda_i c_2)\|z_i - y_i\|^2 - \frac{1-\tau}{\tau}\|x_{i+1} - w_i\|^2. \end{aligned} \qquad (3.33)$$

    This implies that

$$\tau(1 - 2\lambda_i c_1)\|w_i - y_i\|^2 + \tau(1 - 2\lambda_i c_2)\|z_i - y_i\|^2 + \frac{1-\tau}{\tau}\|x_{i+1} - w_i\|^2 \leq \alpha_i\|V(x_i) - u\|^2 + \|x_i - u\|^2 + 2\frac{\theta_i}{\alpha_i}\|x_i - x_{i-1}\|\|w_i - u\| - \|x_{i+1} - u\|^2. \qquad (3.34)$$

    By our condition and (3.31), we obtain

$$\lim_{i\to\infty}\|w_i - y_i\| = \lim_{i\to\infty}\|z_i - y_i\| = \lim_{i\to\infty}\|x_{i+1} - w_i\| = 0. \qquad (3.35)$$

Since $\{x_i\}$ is bounded, there exists a subsequence $\{x_{i_k}\}$ of $\{x_i\}$ such that $x_{i_k} \rightharpoonup x^*$ for some $x^* \in H$. From (3.31) and (3.35), we get $w_{i_k} \rightharpoonup x^*$ and $y_{i_k} \rightharpoonup x^*$ as $k \to \infty$.

    By the definition of zi and (3.6), we have

$$\begin{aligned}\lambda_{i_k}f(y_{i_k},y) &\geq \lambda_{i_k}f(y_{i_k},z_{i_k}) + \langle w_{i_k} - z_{i_k},\ y - z_{i_k}\rangle\\ &\geq \lambda_{i_k}f(w_{i_k},z_{i_k}) - \lambda_{i_k}f(w_{i_k},y_{i_k}) - \lambda_{i_k}c_1\|w_{i_k} - y_{i_k}\|^2 - \lambda_{i_k}c_2\|z_{i_k} - y_{i_k}\|^2 + \langle w_{i_k} - z_{i_k},\ y - z_{i_k}\rangle\\ &\geq \langle y_{i_k} - w_{i_k},\ y_{i_k} - z_{i_k}\rangle + \langle w_{i_k} - z_{i_k},\ y - z_{i_k}\rangle - \lambda_{i_k}c_1\|w_{i_k} - y_{i_k}\|^2 - \lambda_{i_k}c_2\|z_{i_k} - y_{i_k}\|^2. \end{aligned}$$

It follows from the boundedness of $\{z_{i_k}\}$, $0 < \lambda_{i_k} \leq \lambda < \frac{1}{2\max\{c_1,c_2\}}$ and Condition 2.6 (A3) that $0 \leq \limsup_{k\to\infty}f(y_{i_k},y) \leq f(x^*,y)$ for all $y \in H$. This implies that $f(x^*,y) \geq 0$ for all $y \in K$, which shows that $x^* \in EP(f,K)$. Then, we have

$$\limsup_{i\to\infty}\langle V(u) - u,\ w_i - u\rangle = \lim_{k\to\infty}\langle V(u) - u,\ w_{i_k} - u\rangle = \langle V(u) - u,\ x^* - u\rangle \leq 0, \qquad (3.36)$$

by $u = P_{EP(f,K)}V(u)$. Applying (3.36) to inequality (3.30) together with Lemma 2.4, we conclude that $x_i \to u = P_{EP(f,K)}V(u)$ as $i \to \infty$.

Case 2. In the other case, we let $\phi : \mathbb{N} \to \mathbb{N}$ be the map defined for all $i \geq i_0$ (for some $i_0 \in \mathbb{N}$ large enough) by

$$\phi(i) = \max\{k \in \mathbb{N} : k \leq i,\ u_k + t_k \leq u_{k+1}\}. \qquad (3.37)$$

Clearly, $\phi(i)$ is non-decreasing, $\phi(i) \to \infty$ as $i \to \infty$, and $u_{\phi(i)} + t_{\phi(i)} \leq u_{\phi(i)+1}$ for all $i \geq i_0$. Hence, similarly to the proof of Case 1, we obtain from (3.31) that

$$\gamma_1\|x_{\phi(i)+1} - x_{\phi(i)}\|^2 \leq \alpha_{\phi(i)}M_1 + \alpha_{\phi(i)+1}M_2 + u_{\phi(i)} - u_{\phi(i)+1} \to 0 \qquad (3.38)$$

for some constants $M_1, M_2 > 0$. Thus

$$\lim_{i\to\infty}\|x_{\phi(i)+1} - x_{\phi(i)}\| = 0. \qquad (3.39)$$

By the same argument as in Case 1, one also derives

$$\lim_{i\to\infty}\|x_{\phi(i)+1} - w_{\phi(i)}\| = \lim_{i\to\infty}\|w_{\phi(i)} - x_{\phi(i)}\| = \lim_{i\to\infty}\|x_{\phi(i)} - z_{\phi(i)}\| = 0. \qquad (3.40)$$

Again observe that, for $j \geq 0$, by (3.29) we have $u_{j+1} < u_j + t_j$ whenever $x_j \notin \Omega := \{x \in H : \langle x - V(u),\ x - u\rangle \leq 0\}$. Hence $x_{\phi(i)} \in \Omega$ for all $i \geq i_0$, since $u_{\phi(i)} + t_{\phi(i)} \leq u_{\phi(i)+1}$. Since $\{x_{\phi(i)}\}$ is bounded, there exists a subsequence of $\{x_{\phi(i)}\}$, still denoted by $\{x_{\phi(i)}\}$, which converges weakly to some $x^* \in H$. As $\Omega$ is a closed and convex set, it is weakly closed, and so $x^* \in \Omega$. Using (3.40), one can see as in Case 1 that $x^* \in EP(f,K)$, and $\Omega \cap EP(f,K)$ contains $u$ as its only element. We therefore have $x^* = u$. Furthermore,

$$\|x_{\phi(i)} - u\|^2 = \langle x_{\phi(i)} - V(u),\ x_{\phi(i)} - u\rangle - \langle u - V(u),\ x_{\phi(i)} - u\rangle \leq -\langle u - V(u),\ x_{\phi(i)} - u\rangle,$$

due to $x_{\phi(i)} \in \Omega$. This gives

$$\limsup_{i\to\infty}\|x_{\phi(i)} - u\| \leq 0.$$

    Hence

$$\lim_{i\to\infty}\|x_{\phi(i)} - u\| = 0. \qquad (3.41)$$

By the definition of $u_{\phi(i)+1}$, we have

$$\begin{aligned} u_{\phi(i)+1} &= \|x_{\phi(i)+1} - u\|^2 - \theta_{\phi(i)}\|x_{\phi(i)} - u\|^2 + 2\theta_{\phi(i)+1}\|x_{\phi(i)+1} - x_{\phi(i)}\|^2 + \alpha_{\phi(i)+1}\|x_{\phi(i)+1} - V(x_{\phi(i)+1})\|^2\\ &\leq \big(\|x_{\phi(i)+1} - x_{\phi(i)}\| + \|x_{\phi(i)} - u\|\big)^2 - \theta_{\phi(i)}\|x_{\phi(i)} - u\|^2 + 2\theta_{\phi(i)+1}\|x_{\phi(i)+1} - x_{\phi(i)}\|^2 + \alpha_{\phi(i)+1}\|x_{\phi(i)+1} - V(x_{\phi(i)+1})\|^2. \end{aligned} \qquad (3.42)$$

By Condition 3.2 (i), (3.39) and (3.41), we obtain $\lim_{i\to\infty}u_{\phi(i)+1} = 0$. We next show that we actually have $\lim_{i\to\infty}u_i = 0$. To this end, first observe that, for $i \geq i_0$, one has $u_i + t_i \leq u_{\phi(i)+1}$ if $i \neq \phi(i)$. It follows that for all $i \geq i_0$, we have $u_i \leq \max\{u_{\phi(i)}, u_{\phi(i)+1}\} = u_{\phi(i)+1} \to 0$, since $\lim_{i\to\infty}t_i = 0$; hence $\limsup_{i\to\infty}u_i \leq 0$. On the other hand, Lemma 3.6 implies that $\liminf_{i\to\infty}u_i \geq 0$. Hence, we obtain $\lim_{i\to\infty}u_i = 0$. Consequently, the boundedness of $\{x_i\}$, $\lim_{i\to\infty}\alpha_i = 0$, and (3.29) show that $\|x_i - x_{i+1}\| \to 0$ as $i \to \infty$. Hence the definition of $u_i$ yields $\|x_{i+1} - u\|^2 - \theta_i\|x_i - u\|^2 \to 0$ as $i \to \infty$. By Lemma 3.7, we obtain the desired conclusion immediately.

Setting $V(x) = x_0$ for all $x \in H$, we obtain the following modified Halpern inertial extragradient algorithm for EPs:

    Algorithm 3.9. (Modified Halpern inertial extragradient algorithm for EPs)
Initialization. Let $x_0, x_1 \in H$, $0 < \lambda_i \leq \lambda < \frac{1}{2\max\{c_1,c_2\}}$, $\tau \in (0,\frac{1}{2}]$.
Step 1. Given the current iterates $x_{i-1}$ and $x_i$ ($i \geq 1$) and $\alpha_i \in (0,1)$, $\theta_i \in [0,\frac{1}{3})$, compute
$$w_i = \alpha_i x_0 + (1-\alpha_i)x_i + \theta_i(x_i - x_{i-1}), \qquad y_i = \operatorname*{arg\,min}_{y \in K}\big\{\lambda_i f(w_i,y) + \tfrac{1}{2}\|y - w_i\|^2\big\}.$$
If $y_i = w_i$, then stop and $y_i$ is a solution. Otherwise, go to Step 2.
Step 2. Compute
$$z_i = \operatorname*{arg\,min}_{y \in K}\big\{\lambda_i f(y_i,y) + \tfrac{1}{2}\|y - w_i\|^2\big\}.$$
Step 3. Compute
$$x_{i+1} = (1-\tau)w_i + \tau z_i.$$
Set $i = i + 1$ and return to Step 1.


In Algorithm 3.1, the convergence depends on the parameter $\{\lambda_i\}$ through the condition $0 < \lambda_i \leq \lambda < \frac{1}{2\max\{c_1,c_2\}}$, so the step size $\{\lambda_i\}$ can be chosen in many ways. Applying the step-size concept of Shehu et al. [22], we obtain the following modified viscosity type inertial extragradient stepsize algorithm for EPs:

    Algorithm 3.10. (Modified viscosity type inertial extragradient stepsize algorithm for EPs)
Initialization. Let $x_0, x_1 \in H$, $\lambda_1 \in \big(0, \frac{1}{2\max\{c_1,c_2\}}\big)$, $\mu \in (0,1)$, $\tau \in (0,\frac{1}{2}]$.
Step 1. Given the current iterates $x_{i-1}$ and $x_i$ ($i \geq 1$) and $\alpha_i \in (0,1)$, $\theta_i \in [0,\frac{1}{3})$, compute
$$w_i = \alpha_i V(x_i) + (1-\alpha_i)x_i + \theta_i(x_i - x_{i-1}), \qquad y_i = \operatorname*{arg\,min}_{y \in K}\big\{\lambda_i f(w_i,y) + \tfrac{1}{2}\|y - w_i\|^2\big\}.$$
If $y_i = w_i$, then stop and $y_i$ is a solution. Otherwise, go to Step 2.
Step 2. Compute
$$z_i = \operatorname*{arg\,min}_{y \in K}\big\{\lambda_i f(y_i,y) + \tfrac{1}{2}\|y - w_i\|^2\big\}.$$
Step 3. Compute
$$x_{i+1} = (1-\tau)w_i + \tau z_i,$$
and
$$\lambda_{i+1} = \begin{cases}\min\Big\{\dfrac{\mu}{2}\,\dfrac{\|w_i - y_i\|^2 + \|z_i - y_i\|^2}{f(w_i,z_i) - f(w_i,y_i) - f(y_i,z_i)},\ \lambda_i\Big\}, & \text{if } f(w_i,z_i) - f(w_i,y_i) - f(y_i,z_i) > 0,\\[2mm] \lambda_i, & \text{otherwise}.\end{cases}$$
Set $i = i + 1$ and return to Step 1.
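
For completeness, here is a small sketch (under the same illustrative assumptions as the earlier sketch, i.e. $f(x,y) = \langle Ax,\ y - x\rangle$) of the self-adaptive step-size update used in Algorithm 3.10, which requires no knowledge of the Lipschitz-type constants $c_1, c_2$:

```python
import numpy as np

def update_lambda(lam, mu, f, w, y, z):
    # Step-size rule of Algorithm 3.10: shrink lambda only when the
    # Lipschitz-type quantity f(w,z) - f(w,y) - f(y,z) is positive.
    denom = f(w, z) - f(w, y) - f(y, z)
    if denom > 0:
        num = np.dot(w - y, w - y) + np.dot(z - y, z - y)
        return min(0.5 * mu * num / denom, lam)
    return lam

# Illustrative use with f(x, y) = <Ax, y - x>:
#   f = lambda a, b: (A @ a) @ (b - a)
#   lam = update_lambda(lam, 0.2, f, w, y, z)   # once per iteration, after Step 2
```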

Remark 3.11. (i) Since $V(x) = x_0$ for all $x \in H$ is a contraction, the modified Halpern inertial extragradient Algorithm 3.9 converges strongly to $x^* = P_{EP(f,K)}x_0$ under Conditions 2.6 and 3.2;

(ii) Since the step size $\{\lambda_i\}$ in Algorithm 3.10 is a monotonically non-increasing sequence with lower bound $\min\big\{\lambda_1, \frac{1}{2\max\{c_1,c_2\}}\big\}$ [22], Algorithm 3.10 converges strongly to the solution $u = P_{EP(f,K)}V(u)$ by Theorem 3.8.

We now give an example in the infinite-dimensional space $L^2[0,1]$ to support the main theorem.

Example 3.12. Let $V : L^2[0,1] \to L^2[0,1]$ be defined by $V(x(t)) = \frac{x(t)}{2}$, where $x(t) \in L^2[0,1]$. We choose $x_0(t) = \frac{\sin(t)}{2}$ and $x_1(t) = \sin(t)$. The stopping criterion is $\|x_i - x_{i-1}\| < 10^{-2}$.

    We set the following parameters for each algorithm, as seen in Table 1.

Table 1.  Chosen parameters of each algorithm.
         Algorithm 3.1   Algorithm 3.9   Algorithm 3.10
    λ_i  0.1             0.1             -
    λ_1  -               -               0.12
    θ_i  0.29            0.29            0.29
    α_i  1/(100i+1)      1/(100i+1)      1/(100i+1)
    τ    0.15            0.1             0.15
    μ    -               -               0.2


    Next, we compare the performance of Algorithms 3.1, 3.9 and 3.10. We obtain the results as seen in Table 2.

    Table 2.  The performance of each algorithm.
    Algorithm 3.1 Algorithm 3.9 Algorithm 3.10
    CPU Time 1.2626 1.2010 177.9459
    Iter. No. 2 2 2


    From Figure 1, we see that the performance of Algorithm 3.10 is better than Algorithms 3.1 and 3.9.

Figure 1.  The Cauchy error versus the number of iterations.

According to the International Diabetes Federation (IDF), there are approximately 463 million people with diabetes worldwide, and it is estimated that by 2045 there will be 629 million people with diabetes. In Thailand, the incidence of diabetes is continuously increasing. There are about 300,000 new cases per year, and 3.2 million people with diabetes are registered in the Ministry of Public Health's registration system, causing huge losses in health care costs. Diabetes alone accounts for average treatment costs of up to 47,596 million baht per year. This has led to an ongoing campaign about the dangers of the disease. Furthermore, diabetes mellitus brings additional noncommunicable diseases that present a high risk for the patient, and patients are susceptible to infectious diseases such as COVID-19 [23]. Because it is a chronic disease that cannot be cured, there is a risk of complications spreading to the extent of the loss of vital organs of the body. The International Diabetes Federation and the World Health Organization (WHO) have designated November 14 of each year as World Diabetes Day to recognize the importance of this disease.

In this research, we used the PIMA Indians diabetes dataset, which was downloaded from Kaggle (https://www.kaggle.com/uciml/pima-indians-diabetesdatabase) and is publicly available in the UCI repository, for the training process of our proposed algorithm. The dataset contains 768 pregnant female patients, of which 500 were non-diabetic and 268 were diabetic. There were 9 variables in the dataset; eight variables contain information about the patients, and the 9th variable is the class labeling the patients as diabetic or non-diabetic. The attributes are: number of times pregnant; plasma glucose concentration at 2 hours in an oral glucose tolerance test (GTIT); diastolic blood pressure (mm Hg); triceps skin fold thickness (mm); 2-hour serum insulin (lh/ml); body mass index [weight in kg/(height in m)²]; diabetes pedigree function; age (years); and a binary value indicating non-diabetic/diabetic. For the implementation of the machine learning algorithms, 614 samples were used as the training dataset and 154 as the testing dataset, using 5-fold cross-validation [12]. For benchmarking the classifier, we consider the following methods which have been proposed to classify diabetes (see Table 3):

    Table 3.  Classification accuracy of different methods with literature.
    Authors Methods Accuracy (%)
    Li [13] Ensemble of SVM, ANN, and NB 58.3
    Brahim-Belhouari and Bermak [4] NB, SVM, DT 76.30
    Smith et al.[24] Neural ADAP algorithm 76
    Quinlan [17] C4.5 Decision trees 71.10
    Bozkurt et al.[3] Artificial neural network 76.0
    Sahan et al.[20] Artificial immune System 75.87
    Smith et al.[24] Ensemble of MLP and NB 64.1
    Chatrati et al.[5] Linear discriminant analysis 72
    Deng and Kasabov [7] Self-organizing maps 78.40
    Choubey et al. [6] Ensemble of RF and XB 78.9
    Saxena et al. [21] Feature selection of KNN, RF, DT, MLP 79.8
    Our Algorithm 3.1 Extreme learning machine 80.03


We focus on the extreme learning machine (ELM) proposed by Huang et al. [9] for applying our algorithms to solve data classification problems. It is defined as follows:

Let $E := \{(x_n, t_n) : x_n \in \mathbb{R}^n,\ t_n \in \mathbb{R}^m,\ n = 1, 2, \ldots, P\}$ be a training set of $P$ distinct samples, where $x_n$ is an input training datum and $t_n$ is a training target. The output function of ELM for single-hidden-layer feedforward neural networks (SLFNs) with $M$ hidden nodes and activation function $U$ is

$$O_n = \sum_{j=1}^{M}\Theta_j U(w_j, b_j, x_n),$$

where $w_j$ and $b_j$ are the weight and bias parameters, respectively, and $\Theta_j$ is the optimal output weight at the $j$-th hidden node. The hidden layer output matrix $H$ is defined as follows:

$$H = \begin{bmatrix} U(w_1,b_1,x_1) & \cdots & U(w_M,b_M,x_1)\\ \vdots & \ddots & \vdots\\ U(w_1,b_1,x_P) & \cdots & U(w_M,b_M,x_P)\end{bmatrix}.$$

To solve ELM is to find the optimal output weight $\Theta = [\Theta_1^T, \ldots, \Theta_M^T]^T$ such that $H\Theta = T$, where $T = [t_1^T, \ldots, t_P^T]^T$ is the training target matrix. In some cases, one finds $\Theta = H^{\dagger}T$, where $H^{\dagger}$ is the Moore-Penrose generalized inverse of $H$. However, if $H^{\dagger}$ does not exist, then finding such a solution $\Theta$ through convex minimization can overcome this difficulty.
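
As a brief illustration (a minimal sketch; the random weight initialization, sigmoid activation and stand-in data are assumptions consistent with standard ELM practice, not details from the paper), the hidden layer output matrix $H$ and the pseudo-inverse solution $\Theta = H^{\dagger}T$ can be computed as follows:

```python
import numpy as np

def elm_hidden_matrix(X, M, rng):
    """Hidden layer output matrix H with M sigmoid nodes (one row per sample)."""
    W = rng.standard_normal((X.shape[1], M))    # random input weights w_j
    b = rng.standard_normal(M)                  # random biases b_j
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))   # U(w_j, b_j, x_n)

rng = np.random.default_rng(0)
X = rng.random((768, 8))                               # stand-in for the PIMA features
T = rng.integers(0, 2, size=(768, 1)).astype(float)    # stand-in for the targets

H = elm_hidden_matrix(X, M=160, rng=rng)
Theta = np.linalg.pinv(H) @ T                   # Theta = H^dagger T (Moore-Penrose)
print("training MSE:", np.mean((H @ Theta - T) ** 2))
```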

In this section, we perform some experiments on the classification problem. This problem can be seen as the following convex minimization problem:

$$\min_{\Theta \in \mathbb{R}^M}\big\{\|H\Theta - T\|_2^2 + \lambda\|\Theta\|_1\big\}, \qquad (4.1)$$

where $\lambda$ is a regularization parameter. This problem is called the least absolute shrinkage and selection operator (LASSO) [26]. We set $f(\Theta,\zeta) = \langle H^T(H\Theta - T),\ \zeta - \Theta\rangle$ and $V(x) = Cx$, where $C$ is a constant in $(0,1)$.
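
Concretely, a short sketch of this setup (taking the matrix written $A$ in the experiments below to be the ELM matrix $H$, which is an assumption on our part):

```python
import numpy as np

def f(theta, zeta, H, T):
    # Bifunction f(Theta, zeta) = <H^T (H Theta - T), zeta - Theta>.
    return float((H.T @ (H @ theta - T)).ravel() @ (zeta - theta).ravel())

def step_size(H, S=0.99):
    # lambda_i = S / max eigenvalue(H^T H), the scaling used in the experiments.
    return S / np.linalg.eigvalsh(H.T @ H).max()
```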

The binary cross-entropy loss function, used together with the sigmoid activation function for binary classification, calculates the loss of an example by computing the following average:

$$\mathrm{Loss} = -\frac{1}{K}\sum_{j=1}^{K}\big[y_j\log\hat{y}_j + (1 - y_j)\log(1 - \hat{y}_j)\big],$$

where $\hat{y}_j$ is the $j$-th scalar value in the model output, $y_j$ is the corresponding target value, and $K$ is the number of scalar values in the model output.
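
For reference, a direct sketch of this loss (the small clipping constant is an implementation detail added for numerical stability, not part of the formula):

```python
import numpy as np

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    # Mean of -[y log(y_hat) + (1 - y) log(1 - y_hat)] over all outputs.
    y_pred = np.clip(y_pred, eps, 1.0 - eps)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))
```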

In this work, the performance of the machine learning techniques is measured for all classes. The accuracy is calculated by dividing the total number of correct predictions by the total number of predictions. The performance parameters precision and recall are also measured. The three measures [27] are defined as follows:

$$\mathrm{Precision\ (Pre)} = \frac{TP}{TP + FP}. \qquad (4.2)$$
$$\mathrm{Recall\ (Rec)} = \frac{TP}{TP + FN}. \qquad (4.3)$$
$$\mathrm{Accuracy\ (Acc)} = \frac{TP + TN}{TP + FP + TN + FN} \times 100\%, \qquad (4.4)$$

where TP, TN, FP, and FN denote the numbers of true positives, true negatives, false positives, and false negatives in the confusion matrix of original and predicted classes, respectively. Similarly, P and N are the positive and negative populations of the diabetic and non-diabetic cases, respectively.
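
A small helper (an illustrative sketch; the function and variable names are ours) that computes these three measures from binary predictions and true labels:

```python
import numpy as np

def classification_metrics(y_true, y_pred):
    # Confusion-matrix counts for binary labels in {0, 1}.
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = 100.0 * (tp + tn) / (tp + fp + tn + fn)
    return precision, recall, accuracy
```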

To start our computation, we set the activation function as sigmoid, hidden nodes $M = 160$, regularization parameter $\lambda = 1\times 10^{-5}$, $\theta_i = 0.3$, $\alpha_i = \frac{1}{i+1}$, $\tau = 0.5$, $\mu = 0.2$ for Algorithms 3.1, 3.9 and 3.10, and $C = 0.9999$ for Algorithms 3.1 and 3.10. The stopping criterion is 250 iterations. We obtain the results for the different parameters $S$, where $\lambda_i = \frac{S}{\max(\mathrm{eigenvalue}(A^TA))}$ for Algorithms 3.1 and 3.9, and for the different parameters $\lambda_1$ for Algorithm 3.10, as seen in Table 4.

    Table 4.  Training and validation loss and training time of the different parameter λi and λ1.
    Loss
    S, λ1 Training Time Training Validation
    0.2 0.4164 0.286963 0.275532
    0.4 0.4337 0.283279 0.273650
    Algorithm 3.1 0.6 0.4164 0.286963 0.275532
    0.9 0.4459 0.278714 0.272924
    0.99 0.4642 0.278144 0.272921
    0.2 0.4283 0.291883 0.279878
    0.4 0.5293 0.288831 0.277365
    Algorithm 3.9 0.6 0.4246 0.286890 0.276099
    0.9 0.4247 0.284851 0.275079
    0.99 0.5096 0.284356 0.274879
    0.2 1.3823 0.286963 0.275532
    0.4 1.5652 0.283279 0.273650
    Algorithm 3.10 0.6 1.4022 0.281060 0.273120
    0.9 1.9170 0.278714 0.272924
    0.99 1.3627 0.278144 0.272921


We can see that $S = 0.99$ (i.e., $\lambda_i = \lambda_1 = \frac{0.99}{\max(\mathrm{eigenvalue}(A^TA))}$) gives the best performance of Algorithms 3.1, 3.9 and 3.10. Therefore, we choose it as the default step-size parameter for the next computation.

We next choose $\lambda_i = \frac{0.99}{\max(\mathrm{eigenvalue}(A^TA))}$, $\alpha_i = \frac{1}{i+1}$, $\tau = 0.5$ for Algorithms 3.1 and 3.9, $C = 0.9999$ for Algorithm 3.1, and $\lambda_1 = \frac{0.99}{\max(\mathrm{eigenvalue}(A^TA))}$, $\alpha_i = \frac{1}{i+1}$, $\tau = 0.5$, $C = 0.9999$, $\mu = 0.2$ for Algorithm 3.10. The stopping criterion is 250 iterations. We consider the different initialization parameter $\theta$, where

$$\theta_i = \begin{cases}\theta, & \text{if } x_i = x_{i-1} \text{ and } i \leq N,\\[1mm] \dfrac{\theta}{i^2\|x_i - x_{i-1}\|}, & \text{otherwise},\end{cases}$$

where $N$ is the number of iterations at which we want to stop. We obtain the numerical results as seen in Table 5.

    Table 5.  Training and validation loss and training time of the different parameter θ.
    Loss
    θ Training Time Training Validation
    0.1 0.4608 0.279629 0.272965
    0.2 0.4515 0.278938 0.272931
    Algorithm 3.1 0.3 0.4523 0.278144 0.272921
1/i 0.4591 0.280107 0.273004
1/(‖x_i − x_{i−1}‖ + i²) 0.5003 0.280221 0.273015
    0.1 0.4723 0.284808 0.274993
    0.2 0.4641 0.284587 0.274935
    Algorithm 3.9 0.3 0.4634 0.284356 0.274879
1/i 0.5297 0.285004 0.275049
1/(‖x_i − x_{i−1}‖ + i²) 0.4825 0.285019 0.275053
    0.1 1.4071 0.279629 0.272965
    0.2 1.3505 0.278938 0.272931
    Algorithm 3.10 0.3 1.4819 0.278144 0.272921
1/i 1.3276 0.280107 0.273004
1/(‖x_i − x_{i−1}‖ + i²) 1.4228 0.280221 0.273015


We can see that $\theta = 0.3$ gives the best performance of Algorithms 3.1, 3.9 and 3.10. Therefore, we choose it as the default inertial parameter for the next computation.

We next set $\lambda_i = \frac{0.99}{\max(\mathrm{eigenvalue}(A^TA))}$, $\theta_i = 0.3$, $\tau = 0.5$ for Algorithms 3.1 and 3.9, $C = 0.9999$ for Algorithm 3.1, and $\lambda_1 = \frac{0.99}{\max(\mathrm{eigenvalue}(A^TA))}$, $\theta_i = 0.3$, $\tau = 0.5$, $C = 0.9999$, $\mu = 0.2$ for Algorithm 3.10. The stopping criterion is 250 iterations. We consider the different initialization parameter $\alpha_i$. The numerical results are shown in Table 6.

    Table 6.  Training and validation loss and training time of the different parameter αi.
    Loss
    αi Training Time Training Validation
1/(i+1) 0.4407 0.278144 0.272921
Algorithm 3.1 1/(10i+1) 0.4054 0.278143 0.272921
1/(i²+1) 0.4938 0.278143 0.272921
1/(10i²+1) 0.4876 0.278143 0.272921
1/(i+1) 0.4163 0.284356 0.274879
Algorithm 3.9 1/(10i+1) 0.4274 0.279201 0.273129
1/(i²+1) 0.5150 0.278294 0.272931
1/(10i²+1) 0.5960 0.278160 0.272922
1/(i+1) 1.4292 0.278144 0.272921
Algorithm 3.10 1/(10i+1) 1.3803 0.278143 0.272921
1/(i²+1) 1.2452 0.278143 0.272921
1/(10i²+1) 1.4100 0.278143 0.272921


We can see that $\alpha_i = \frac{1}{10i+1}$ gives the best performance of Algorithm 3.1, $\alpha_i = \frac{1}{10i^2+1}$ gives the best performance of Algorithm 3.9, and $\alpha_i = \frac{1}{i^2+1}$ gives the best performance of Algorithm 3.10. Therefore, we choose these as the default parameters for the next computation.

We next calculate the numerical results by setting $\lambda_i = \frac{0.99}{\max(\mathrm{eigenvalue}(A^TA))}$, $\theta_i = 0.3$, $\alpha_i = \frac{1}{10i+1}$ and $C = 0.9999$ for Algorithm 3.1; $\lambda_i = \frac{0.99}{\max(\mathrm{eigenvalue}(A^TA))}$, $\theta_i = 0.3$, $\alpha_i = \frac{1}{10i^2+1}$ for Algorithm 3.9; and $\lambda_1 = \frac{0.99}{\max(\mathrm{eigenvalue}(A^TA))}$, $\theta_i = 0.3$, $C = 0.9999$, $\alpha_i = \frac{1}{i^2+1}$, $\mu = 0.2$ for Algorithm 3.10. The stopping criterion is 250 iterations. We consider the different initialization parameter $\tau$. The numerical results are shown in Table 7.

    Table 7.  Training and validation loss and training time of the different parameter τ.
    Loss
    τ Training Time Training Validation
    0.1 0.4278 0.300531 0.299144
    Algorithm 3.1 0.3 0.4509 0.299074 0.293717
    0.5 0.5239 0.278143 0.272921
i/(2i+1) 0.4708 0.282187 0.274017
    0.1 0.4592 0.300531 0.299144
    Algorithm 3.9 0.3 0.4900 0.299074 0.293717
    0.5 0.4261 0.278160 0.272922
i/(2i+1) 0.5224 0.282191 0.274018
    0.1 1.3401 0.300531 0.299144
    Algorithm 3.10 0.3 1.3771 0.299074 0.293717
    0.5 1.8681 0.278143 0.272921
i/(2i+1) 1.4671 0.282187 0.274017


We can see that $\tau = 0.5$ gives the best performance of Algorithms 3.1, 3.9 and 3.10. Therefore, we choose it as the default parameter for the next computation.

We next calculate the numerical results by setting $\lambda_i = \frac{0.99}{\max(\mathrm{eigenvalue}(A^TA))}$, $\theta_i = 0.3$, $\tau = 0.5$ for Algorithms 3.1 and 3.9, with $\alpha_i = \frac{1}{10i+1}$ for Algorithm 3.1 and $\alpha_i = \frac{1}{10i^2+1}$ for Algorithm 3.9, and $\lambda_1 = \frac{0.99}{\max(\mathrm{eigenvalue}(A^TA))}$, $\theta_i = 0.3$, $\alpha_i = \frac{1}{i^2+1}$, $\tau = 0.5$, $\mu = 0.2$ for Algorithm 3.10. The stopping criterion is 250 iterations. We obtain the results for the different parameters $C$, where $V(x) = Cx$, for Algorithms 3.1 and 3.10, as seen in Table 8.

Table 8.  Training and validation loss and training time for the different parameter C.
    Loss
    C Training Time Training Validation
    0.3 0.4796 0.278902 0.273066
    0.5 0.4270 0.278695 0.273024
    Algorithm 3.1 0.7 0.4190 0.278480 0.272982
    0.9 0.4209 0.278257 0.272941
    0.9999 0.4844 0.278143 0.272921
    0.3 1.5886 0.278251 0.272928
    0.5 1.6358 0.278222 0.272926
    Algorithm 3.10 0.7 1.3808 0.278191 0.272924
    0.9 1.5176 0.278159 0.272922
    0.9999 1.4598 0.278143 0.272921


From Tables 3–8, we choose the parameters for Algorithm 3.1 to compare with the existing algorithms from the literature. Table 9 shows the chosen parameters of each algorithm.

    Table 9.  Chosen parameters of each algorithm.
        Algorithm in (1.2)       Algorithm in (1.4)       Algorithm in (1.6)         Algorithm 3.1            Algorithm 3.9            Algorithm 3.10
    μ   -                        0.3                      0.3                        -                        -                        0.2
    λ_1 -                        0.5/max(eig(AᵀA))        0.9999/max(eig(AᵀA))       -                        -                        0.99/max(eig(AᵀA))
    λ_i 0.5/max(eig(AᵀA))        -                        -                          0.99/max(eig(AᵀA))       0.99/max(eig(AᵀA))       -
    θ_i -                        -                        0.3                        0.3                      0.3                      0.3
    α_i 1/(100i+1)               1/(100i+1)               1/(2i+1)                   1/(10i+1)                1/(10i²+1)               1/(i²+1)
    τ   -                        -                        0.5                        0.5                      0.5                      0.5
    C   -                        -                        -                          0.9999                   -                        0.9999


For comparison, we set sigmoid as the activation function, hidden nodes $M = 160$ and regularization parameter $\lambda = 1\times 10^{-5}$.

Table 10 shows that Algorithm 3.1 has the highest efficiency in precision, recall, and accuracy, and it also has the lowest number of iterations. It has the highest probability of correctly classifying patients compared with the other algorithms examined. We present the training and validation loss together with the training accuracy to show that Algorithm 3.1 does not overfit the training dataset.

    Table 10.  The performance of each algorithm.
    Algorithm Iteration No. Training Time Pre Rec Acc (%)
    Algorithm in (1.2) 25 0.0537 80.97 97.50 80.03
    Algorithm in (1.4) 25 0.3132 80.97 97.50 80.03
    Algorithm in (1.6) 30 0.1182 80.97 97.50 80.03
    Algorithm 3.1 18 0.0375 80.97 97.50 80.03
    Algorithm 3.9 18 0.0401 80.97 97.50 80.03
    Algorithm 3.10 18 0.1045 80.97 97.50 80.03


From Figures 2 and 3, we see that Algorithm 3.1 yields a well-fitted model; this means that Algorithm 3.1 suitably learns the training dataset and generalizes well in classifying the PIMA Indians diabetes dataset.

    Figure 2.  Training and validation loss plots of Algorithm 3.1.
    Figure 3.  Training and validation accuracy plots of Algorithm 3.1.

In general, screening for diabetes in pregnancy follows the American College of Obstetricians and Gynecologists (ACOG) recommendations. The accuracy of our method is 80.03%, and this high accuracy may be useful for correctly predicting diabetes in pregnancy in the future.

In this paper, we introduced a modified extragradient method with an inertial extrapolation step and a viscosity-type method to solve equilibrium problems with a pseudomonotone bifunction in real Hilbert spaces. We then proved a strong convergence theorem for the proposed algorithm under the assumption that the bifunction satisfies a Lipschitz-type condition. Moreover, we showed that the stepsize parameter $\{\lambda_i\}$ can be chosen in several ways, which makes our algorithm flexible to use; see Algorithms 3.1 and 3.10. Finally, we showed that our algorithms perform better than existing algorithms in solving the diabetes mellitus classification problem in machine learning.

    This research was also supported by Fundamental Fund 2022, Chiang Mai University and the NSRF via the Program Management Unit for Human Resources and Institutional Development, Research and Innovation (grant number B05F640183). W. Cholamjiak would like to thank National Research Council of Thailand (N42A650334) and Thailand Science Research and Innovation, the University of Phayao (FF65-UOE).

    The authors declare no conflict of interest.



    [1] H. H. Bauschke, P. L. Combettes, Convex analysis and monotone operator theory in Hilbert spaces, Springer, New York, 2011. https://doi.org/10.1007/978-3-319-48311-5
    [2] E. Blum, W. Oettli, From optimization and variational inequalities to equilibrium problems, Math. Student., 63 (1994), 123–145.
    [3] M. R. Bozkurt, N. Yurtay, Z. Yilmaz, C. Setkaya, Comparison of different methodologies for determining diabetes, Turk. J. Electr. Eng. Co., 22 (2014), 1044–1055. https://doi.org/10.3906/elk-1209-82 doi: 10.3906/elk-1209-82
    [4] S. Brahim-Belhouari, A. Bermak, Gaussian process for nonstationary time series prediction, Comput. Stat. Data Anal., 47 (2014), 705–712. https://doi.org/10.1016/j.csda.2004.02.006 doi: 10.1016/j.csda.2004.02.006
    [5] S. P. Chatrati, G. Hossain, A. Goyal, A. Bhan, S. Bhattacharya, D. Gaurav, et al., Smart home health monitoring system for predicting type 2 diabetes and hypertension, J. King Saud Univ.-Com., 34 (2020), 862–870. https://doi.org/10.1016/j.jksuci.2020.01.010 doi: 10.1016/j.jksuci.2020.01.010
    [6] D. K. Choubey, M. Kumar, V. Shukla, S. Tripathi, V. K. Dhandhania, Comparative analysis of classification methods with PCA and LDA for diabetes, Curr. Diabetes Rev., 16 (2020), 833–850. https://doi.org/10.2174/1573399816666200123124008 doi: 10.2174/1573399816666200123124008
    [7] D. Deng, N. Kasabov, On-line pattern analysis by evolving self-organizing maps, Neurocomputing, 51 (2003), 87–103. https://doi.org/10.1016/S0925-2312(02)00599-4 doi: 10.1016/S0925-2312(02)00599-4
    [8] D. V. Hieu, Halpern subgradient extragradient method extended to equilibrium problems, RACSAM Rev. R. Acad. A, 111 (2017), 823–840. https://doi.org/10.1007/s13398-016-0328-9 doi: 10.1007/s13398-016-0328-9
    [9] G. B. Huang, Q. Y. Zhu, C. K. Siew, Extreme learning machine: Theory and applications, Neurocomputing, 70 (2006), 489–501. https://doi.org/10.1016/j.neucom.2005.12.126 doi: 10.1016/j.neucom.2005.12.126
    [10] G. M. Korpelevich, The extragradient method for finding saddle points and other problems, Matecon, 12 (1976), 747–756. Available from: https://cs.uwaterloo.ca/y328yu/classics/extragrad.pdf.
    [11] R. Kraikaew, S. Saejung, Strong convergence of the Halpern subgradient extragradient method for solving variational inequalities in Hilbert spaces, J. Optim. Theory Appl., 163 (2014), 399–412. https://doi.org/10.1007/s10957-013-0494-2 doi: 10.1007/s10957-013-0494-2
    [12] V. A. Kumari, R. Chitra, Classification of diabetes disease using support vector machine, Int. J. Eng. Res. Appl., 3 (2013), 1797–1801.
    [13] L. Li, Diagnosis of diabetes using a weight-adjusted voting approach, IEEE Int. Conf. Bioinform. Bioeng., 2014,320–324. https://doi.org/10.1109/BIBE.2014.27
    [14] K. Muangchoo, A new strongly convergent algorithm to solve pseudomonotone equilibrium problems in a real Hilbert space, J. Math. Comput. Sci., 24 (2022), 308–322. http://dx.doi.org/10.22436/jmcs.024.04.03 doi: 10.22436/jmcs.024.04.03
    [15] L. D. Muu, W. Oettli, Convergence of an adaptive penalty scheme for finding constrained equilibria, Nonlinear Anal.-Theor., 18 (1992), 1159–1166. http://dx.doi.org/10.1016/0041-5553(86)90159-X doi: 10.1016/0041-5553(86)90159-X
    [16] B. T. Polyak, Some methods of speeding up the convergence of iteration methods, USSR Comput. Math. Math. Phys., 4 (1964), 1–17. https://doi.org/10.1016/0041-5553(64)90137-5 doi: 10.1016/0041-5553(64)90137-5
    [17] J. R. Quinlan, C4.5: Programs for machine learning, Elsevier, 2014.
    [18] R. T. Rockafellar, Convex analysis, Princeton University Press, 1970.
    [19] Y. Shehu, O. S. Iyiola, Weak convergence for variational inequalities with inertial-type method, Appl. Anal., 101 (2022), 192–216. https://doi.org/10.1080/00036811.2020.1736287 doi: 10.1080/00036811.2020.1736287
    [20] S. Sahan, K. Polat, H. Kodaz, S. Gunes, The medical applications of attribute weighted artificial immune system (AWAIS): Diagnosis of heart and diabetes diseas, International Conference on Artificial Immune Systems, Springer, 3627 (2005), 456–468. https://doi.org/10.1007/11536444_35
    [21] R. Saxena, S. K. Sharma, M. Gupta, G. C. Sampada, A novel approach for feature selection and classification of diabetes mellitus: Machine learning methods, Comput. Intell. Neurosci., 2022 (2022). https://doi.org/10.1155/2022/3820360
    [22] Y. Shehu, C. Izuchukwu, J. C. Yao, X. Qin, Strongly convergent inertial extragradient type methods for equilibrium problems, Appl. Anal., 2021, 1–29. https://doi.org/10.1080/00036811.2021.2021187
    [23] World Health Organization, Global action plan for the prevention and control of NCDs 2013–2020, World Health Organization, 2013. Available from: https://apps.who.int/iris/bitstream/handle/10665/94384/9789241506236%20_eng.pdf?sequence=1.
    [24] J. W. Smith, J. E. Everhart, W. C. Dickson, W. C. Knowler, R. S. Johannes, Using the Adap learning algorithm to forecast the onset of diabetes mellitus, Proc. Annu. Symp. Comput. Appl. Med. Care, 9 (1988), 261–265. Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2245318/.
    [25] D. Q. Tran, M. L. Dung, V. H. Hguyen, Extragradient algorithms extended to equilibrium problems, Optimization, 57 (2008), 749–776. https://doi.org/10.1080/02331930601122876 doi: 10.1080/02331930601122876
    [26] R. Tibshirani, Regression shrinkage and selection via the lasso, J. Roy. Stat. Soc. B, 58 (1996), 267–288. https://doi.org/10.1111/j.2517-6161.1996.tb02080.x doi: 10.1111/j.2517-6161.1996.tb02080.x
    [27] T. Thomas, N. Pradhan, V. S. Dhaka, Comparative analysis to predict breast cancer using machine learning algorithms: A survey, IEEE Int. Conf. Invent. Comput. Technol., 2020,192–196. https://doi.org/10.1109ICICT48043.2020.9112464
    [28] H. K. Xu, Iterative algorithms for nonlinear operators, J. London Math. Soc., 66 (2002), 240–256. https://doi.org/10.1112/S0024610702003332 doi: 10.1112/S0024610702003332
    [29] M. O. Osilike, S. C. Aniagbosor, B. G. Akuchu, Fixed points of asymptotically demicontractive mappings in arbitrary Banach spaces, Panamerican Math. J., 12 (2002), 77–88.
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)