Research article

Viscosity-type inertial iterative methods for variational inclusion and fixed point problems

  • In this paper, we have introduced some viscosity-type inertial iterative methods for solving fixed point and variational inclusion problems in Hilbert spaces. Our methods compute the viscosity approximation, the fixed point iteration, and the inertial extrapolation jointly at the start of each iteration. Under suitable assumptions, we have demonstrated strong convergence theorems without computing the resolvent of the associated monotone operators. We have used some numerical examples to illustrate the efficiency of our iterative approaches and compared them with related work.

    Citation: Mohammad Dilshad, Fahad Maqbul Alamrani, Ahmed Alamer, Esmail Alshaban, Maryam G. Alshehri. Viscosity-type inertial iterative methods for variational inclusion and fixed point problems[J]. AIMS Mathematics, 2024, 9(7): 18553-18573. doi: 10.3934/math.2024903




    A fixed point problem (FPP) is a significant problem that provides a natural framework for studying a broad range of nonlinear problems with applications. The fixed point problem for a mapping T is defined as

    Fix(T) = {s ∈ E : T(s) = s}, (1.1)

    where E is a real Hilbert space and T : E → E is a nonexpansive mapping.

    For a single-valued monotone operator Q : E → E and a set-valued operator G : E → 2^E, the variational inclusion problem (VIsP) is to find s ∈ E such that

    0 ∈ Q(s) + G(s). (1.2)

    Several problems, such as image recovery, optimization, and variational inequality problems, can be transformed into an FPP or a VIsP. Due to such applicability, over the last decades several iterative methods have been formulated to solve FPPs and VIsPs in linear and nonlinear spaces; see, for example, [4,8,9,12,13,15,32].

    Douglas and Rachford [11] formulated the forward-backward splitting method for the VIsP:

    s_{n+1} = R^G_{μ_n}[I − μ_nQ](s_n), (1.3)

    where μ_n > 0, R^G_{μ_n} = [I + μ_nG]^{−1} is the resolvent of G (also known as the backward operator), and [I − μ_nQ] is known as the forward operator. We can rewrite (1.3) as

    (s_n − s_{n+1})/μ_n ∈ Q(s_n) + G(s_{n+1}), (1.4)

    which was studied by Ansari and Babu [2] in nonlinear spaces. If Q = 0, the monotone inclusion problem (MIsP) is to find s ∈ E such that

    0G(s), (1.5)

    which was studied in [26]. The proximal point method (or regularization method), studied by Lions and Mercier [18], is one of the renowned methods for the MIsP:

    s_{n+1} = [I + μ_nG]^{−1}(s_n). (1.6)

    Since the operator R^G_{μ_n} appearing in the backward step is nonexpansive, such algorithms have been studied widely by numerous authors; see, for example, [7,10,15,16,17,19,23,27].
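As an illustration of the forward-backward iteration (1.3), the sketch below applies it to a toy finite-dimensional problem of our own choosing (Q and G are diagonal linear operators, so the resolvent reduces to a linear solve); it is not one of the paper's experiments.

```python
import numpy as np

# Sketch of the forward-backward iteration (1.3):
#   s_{n+1} = (I + mu_n G)^{-1} (I - mu_n Q) s_n.
# Toy illustrative choice: Q(s) = A s and G(s) = B s with A, B
# diagonal positive definite, so the resolvent is a linear solve.
A = np.diag([0.5, 1.0, 2.0])    # forward operator Q (monotone, Lipschitz)
B = np.diag([1.0, 3.0, 0.5])    # backward operator G (maximal monotone)
I = np.eye(3)

s = np.array([5.0, -2.0, 7.0])  # arbitrary starting point
mu = 0.4                        # step size mu_n (held constant here)
for _ in range(200):
    forward = (I - mu * A) @ s                # forward (explicit) step
    s = np.linalg.solve(I + mu * B, forward)  # backward (resolvent) step

# The unique zero of Q + G is s = 0, so the iterates vanish.
print(np.linalg.norm(s))
```

For these diagonal operators each coordinate contracts by the factor (1 − μa_i)/(1 + μb_i), which is why the iterates collapse to the origin.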

    An essential development in the field of nonlinear science is the inertial extrapolation, introduced by Polyak [22] for the fast convergence of algorithms. Alvarez and Attouch [6] implemented the inertial extrapolation to obtain the inertial proximal point method for solving the MIsP: for μ_n > 0, find s_{n+1} ∈ E such that

    0 ∈ μ_nG(s_{n+1}) + s_{n+1} − s_n − β_n(s_n − s_{n−1}), (1.7)

    and equivalently

    s_{n+1} = R^G_{μ_n}[s_n + β_n(s_n − s_{n−1})], (1.8)

    where β_n ∈ [0,1) is the extrapolation coefficient and β_n(s_n − s_{n−1}) is known as the inertial step. They proved the weak convergence of (1.8) assuming

    ∑_{n=1}^{∞} β_n‖s_n − s_{n−1}‖² < +∞. (1.9)

    Inertial extrapolation has been demonstrated to have good convergence properties and a high convergence rate; therefore, it has been improved and used in a variety of nonlinear problems, see [3,5,13,14,28,29] and the references therein.

    The following inertial proximal point approach was presented by Moudafi and Oliny in [21] to solve VIsP:

    u_n = s_n + β_n(s_n − s_{n−1}),  s_{n+1} = [I + μ_nG]^{−1}(u_n − μ_nQu_n), (1.10)

    where μ_n < 2/κ and κ is the Lipschitz constant of the operator Q. They proved the weak convergence of (1.10) using the same assumption (1.9). Recently, Duang et al. [30] studied the VIsP and FPP and proposed the following viscosity inertial method (Algorithm 1.1) for estimating a common solution in Hilbert spaces.

    Algorithm 1.1 (Algorithm 3 of [30]) Viscosity inertial method (VIM)
    Choose arbitrary points s_0 and s_1 and set n = 1.
    Step 1. Compute
    u_n = s_n + θ_n(s_n − s_{n−1}),
    v_n = [I + λG]^{−1}(I − λQ)u_n.
    If u_n = v_n, then stop (s_n is a solution of the VIsP). If not, proceed to Step 2.
    Step 2. Compute
    s_{n+1} = ψ_nk(u_n) + (1 − ψ_n)Tv_n.
    Let n = n + 1 and proceed to Step 1.

    In the above algorithm, Q is η-inverse strongly monotone (η-ism, for short), G is a maximal monotone operator, k is a contraction, T is a nonexpansive mapping, λ ∈ (0, 2η), and the control sequences fulfill the requirements listed below:

    (ⅰ) ψ_n ∈ (0,1), lim_{n→∞} ψ_n = 0, ∑_{n=1}^{∞} ψ_n = ∞, lim_{n→∞} ψ_{n−1}/ψ_n = 1,

    (ⅱ) θ_n ∈ [0, θ), θ > 0, lim_{n→∞} (θ_n/ψ_n)‖s_n − s_{n−1}‖ = 0.
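To make the scheme concrete, here is a small numerical sketch of Algorithm 1.1 with toy linear operators of our own choosing (not the paper's experiments); with T the identity and G linear, the resolvent step is a linear solve.

```python
import numpy as np

# Sketch of Algorithm 1.1 (VIM) for illustrative diagonal operators:
# Q(s) = A s, G(s) = B s, T = identity, k(s) = s/6 (a 1/6-contraction).
A = np.diag([1.0 / 2.0, 1.0 / 3.0, 1.0 / 4.0])
B = np.diag([1.0 / 6.0, 1.0 / 5.0, 1.0 / 4.0])
I = np.eye(3)
lam = 0.5                                 # lambda in (0, 2*eta)

s_prev = np.array([1.0, 7.0, 9.0])        # s_0
s = np.array([1.0, 3.0, 4.0])             # s_1
for n in range(1, 301):
    theta_n = 1.0 / (1.0 + n) ** 2        # inertial coefficient theta_n
    psi_n = 1.0 / (10.0 + n) ** 0.1       # viscosity coefficient psi_n
    u = s + theta_n * (s - s_prev)        # Step 1: inertial extrapolation
    v = np.linalg.solve(I + lam * B, u - lam * (A @ u))  # forward-backward
    s_prev, s = s, psi_n * (u / 6.0) + (1 - psi_n) * v   # Step 2 (T = I)

# Zero is the unique solution of 0 in Q(s) + G(s), and T and k fix it.
print(np.linalg.norm(s))
```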

    Recently, Reich and Taiwo [24] investigated hybrid viscosity-type iterative schemes for solving variational inclusion problems in which the viscosity approximation and the inertial extrapolation are computed jointly. Alamer and Dilshad [1] studied a Halpern-type iterative method for solving split common null point problems in which the Halpern iteration and the inertial iteration are computed simultaneously at the start of every iteration.

    Motivated by the work in [24,30], we present two viscosity-type inertial iteration methods for computing common solutions of VIsPs and FPPs. In our algorithms, we implement the viscosity iteration, the fixed point iteration, and the inertial extrapolation in the first step of each iteration. Our methods do not need the inverse strong monotonicity assumptions on the operators Q and G that are considered in the literature. We prove the strong convergence of the presented methods without calculating the resolvent of the associated monotone operators Q and G.

    We organize the paper as follows: In Section 2, we discuss some basic definitions and useful lemmas. In Section 3, we propose viscosity-type iterative methods for solving VIsPs and FPPs and prove the strong convergence theorems. In Section 4, as a consequence of our methods, we present Halpern-type inertial iterative methods for VIsPs and FPPs. Section 5 describes some applications for solving variational inequality and optimization problems. In Section 6, we show the efficiency of the suggested methods by comparing them with Algorithm 3 of [30].

    Let {s_n} be a sequence in E. Then s_n → s denotes the strong convergence of {s_n} to s and s_n ⇀ s denotes weak convergence. The weak ω-limit set of {s_n} is defined by

    ω_w(s_n) = {s ∈ E : s_{n_j} ⇀ s as j → ∞, where {s_{n_j}} is a subsequence of {s_n}}.

    The following useful inequality is well-known in the Hilbert space E:

    ‖s_1 ± w_1‖² = ‖s_1‖² ± 2⟨s_1, w_1⟩ + ‖w_1‖². (2.1)

    Definition 2.1. A mapping k : E → E is called

    (i) a contraction, if ‖k(s_1) − k(w_1)‖ ≤ τ‖s_1 − w_1‖, ∀ s_1, w_1 ∈ E, τ ∈ (0,1);

    (ii) nonexpansive, if ‖k(s_1) − k(w_1)‖ ≤ ‖s_1 − w_1‖, ∀ s_1, w_1 ∈ E.

    Definition 2.2. Let Q : E → E. Then

    (i) Q is called monotone, if ⟨Q(s_1) − Q(w_1), s_1 − w_1⟩ ≥ 0, ∀ s_1, w_1 ∈ E;

    (ii) Q is called η-ism, if there exists η > 0 such that

    ⟨Q(s_1) − Q(w_1), s_1 − w_1⟩ ≥ η‖Q(s_1) − Q(w_1)‖², ∀ s_1, w_1 ∈ E;

    (iii) Q is called δ-strongly monotone, if there exists δ > 0 such that

    ⟨Q(s_1) − Q(w_1), s_1 − w_1⟩ ≥ δ‖s_1 − w_1‖², ∀ s_1, w_1 ∈ E;

    (iv) Q is called κ-Lipschitz continuous, if there exists κ > 0 such that

    ‖Q(s_1) − Q(w_1)‖ ≤ κ‖s_1 − w_1‖, ∀ s_1, w_1 ∈ E.

    Definition 2.3. Let G : E → 2^E. Then

    (i) the graph of G is defined by Graph(G) = {(s_1, w_1) ∈ E × E : w_1 ∈ G(s_1)};

    (ii) G is called monotone, if ⟨w_1 − w_2, s_1 − s_2⟩ ≥ 0 for all (s_1, w_1), (s_2, w_2) ∈ Graph(G);

    (iii) G is called maximal monotone, if G is monotone and (I + μG)(E) = E for all μ > 0.

    Lemma 2.1. [31] Let {s_n} ⊂ ℝ be a nonnegative sequence such that

    s_{n+1} ≤ (1 − λ_n)s_n + λ_nξ_n, ∀ n ≥ n_0 ∈ ℕ,

    where λ_n ∈ (0,1) and ξ_n ∈ ℝ fulfill the requirements given below:

    lim_{n→∞} λ_n = 0, ∑_{n=1}^{∞} λ_n = ∞, and limsup_{n→∞} ξ_n ≤ 0.

    Then s_n → 0 as n → ∞.
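Lemma 2.1 can also be checked numerically. The sketch below uses the illustrative choices λ_n = 1/(n + 1) and ξ_n = 1/√(n + 1) (our own, satisfying all three requirements) and shows the recursion driving s_n toward 0 at the slow rate the harmonic weights allow.

```python
import math

# Numerical illustration of Lemma 2.1 with illustrative choices:
# lambda_n = 1/(n+1) (in (0,1), divergent sum) and
# xi_n = 1/sqrt(n+1) (so limsup xi_n = 0).
s = 5.0                           # nonnegative starting value
for n in range(1, 200001):
    lam = 1.0 / (n + 1)
    xi = 1.0 / math.sqrt(n + 1)
    s = (1 - lam) * s + lam * xi  # s_{n+1} <= (1-lambda_n)s_n + lambda_n*xi_n

print(s)  # small and still shrinking, consistent with s_n -> 0
```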

    Lemma 2.2. [20] Let {y_n} ⊂ ℝ be a sequence that does not decrease at infinity, in the sense that there exists a subsequence {y_{n_k}} of {y_n} such that y_{n_k} < y_{n_k+1} for all k ≥ 0. Also consider the sequence of integers {Υ(n)}_{n≥n_0} defined by

    Υ(n) = max{k ≤ n : y_k ≤ y_{k+1}}.

    Then {Υ(n)}_{n≥n_0} is a nondecreasing sequence verifying lim_{n→∞} Υ(n) = ∞, and for all n ≥ n_0, the following inequalities hold:

    y_{Υ(n)} ≤ y_{Υ(n)+1},  y_n ≤ y_{Υ(n)+1}.

    In the present section, we define our viscosity-type inertial iteration methods for solving the FPP and the VIsP. We denote the solution set of the FPP by Λ and that of the VIsP by Δ, and assume that Λ ∩ Δ ≠ ∅. We adopt the following assumptions in order to prove the convergence of the sequences obtained from the suggested methods:

    (S1) k : E → E is a τ-contraction with 0 < τ < 1;

    (S2) Q : E → E is a δ-strongly monotone and κ-Lipschitz continuous operator and G : E → 2^E is a maximal monotone operator;

    (S3) {μ_n} is a sequence such that 0 < μ̄ ≤ μ_n ≤ μ < 1/(2δ) and κ ≤ 2δ;

    (S4) {λ_n} ⊂ (0,1) satisfies lim_{n→∞} λ_n = 0 and ∑_{n=1}^{∞} λ_n = ∞;

    (S5) {σ_n} is a positive sequence satisfying ∑_{n=1}^{∞} σ_n < ∞ and lim_{n→∞} σ_n/λ_n = 0.

    Theorem 3.1. If the Assumptions (S1)–(S5) are fulfilled, then the sequences induced by Algorithm 3.1 converge strongly to s ∈ Δ ∩ Λ, which solves the following variational inequality:

    ⟨k(s) − s, y − s⟩ ≤ 0, ∀ y ∈ Δ ∩ Λ. (3.1)

    Algorithm 3.1. Viscosity-type inertial iterative method-I (VIIM-I)
    Let β ∈ [0,1) and μ_n > 0 be given. Choose arbitrary points s_0 and s_1 and set n = 1.
    Iterative step. For the iterates s_n and s_{n−1} (n ≥ 1), select 0 < β_n < β̄_n, where
    β̄_n = min{σ_n/‖s_n − s_{n−1}‖, β} if s_n ≠ s_{n−1}, and β̄_n = β otherwise, (3.2)
    and compute
    u_n = λ_nk(s_n) + (1 − λ_n)[T(s_n) + β_n(s_n − s_{n−1})], (3.3)
    0 ∈ Q(u_n) + G(s_{n+1}) + (s_{n+1} − u_n)/μ_n. (3.4)
    If s_{n+1} = u_n, then stop. If not, set n = n + 1 and proceed to the iterative step.
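For intuition, the following sketch runs Algorithm 3.1 on toy diagonal operators of our own choosing (not the paper's experiments). For a linear G, the implicit relation (3.4) reduces to a linear solve, so the iteration can be carried out explicitly.

```python
import numpy as np

# Sketch of Algorithm 3.1 (VIIM-I) for toy diagonal operators:
# Q(s) = A s, G(s) = B s, T = identity, k(s) = s/6. For linear G,
# the implicit relation (3.4) becomes the linear solve
#   (I + mu_n B) s_{n+1} = u_n - mu_n A u_n.
A = np.diag([1.0 / 2.0, 1.0 / 3.0, 1.0 / 4.0])   # delta = 1/4, kappa = 1/2
B = np.diag([1.0 / 6.0, 1.0 / 5.0, 1.0 / 4.0])   # maximal monotone
I = np.eye(3)
beta = 0.3

s_prev = np.array([1.0, 7.0, 9.0])    # s_0
s = np.array([1.0, 3.0, 4.0])         # s_1
for n in range(1, 401):
    lam = 1.0 / (100.0 + n)           # lambda_n
    sigma = 1.0 / (1.0 + n) ** 2      # sigma_n
    mu = 1.5 - 1.0 / (10.0 + n)       # mu_n < 1/(2*delta) = 2
    gap = np.linalg.norm(s - s_prev)
    bar_beta = min(sigma / gap, beta) if gap > 0 else beta  # (3.2)
    beta_n = 0.5 * bar_beta           # any value in (0, bar_beta_n)
    # (3.3): viscosity, fixed-point, and inertial terms computed jointly
    u = lam * (s / 6.0) + (1 - lam) * (s + beta_n * (s - s_prev))
    # (3.4): implicit backward step, here a linear solve
    s_prev, s = s, np.linalg.solve(I + mu * B, u - mu * (A @ u))

print(np.linalg.norm(s))  # the common solution here is the origin
```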

    Remark 3.1. From (3.2), we have β_n‖s_n − s_{n−1}‖ ≤ σ_n. Since β_n > 0 and {σ_n} satisfies ∑_{n=1}^{∞} σ_n < ∞, we obtain lim_{n→∞} β_n‖s_n − s_{n−1}‖ = 0 and lim_{n→∞} β_n‖s_n − s_{n−1}‖/λ_n ≤ lim_{n→∞} σ_n/λ_n = 0.

    Proof. Let s ∈ Δ ∩ Λ; then −Q(s) ∈ G(s) and, using (3.4), we have (u_n − s_{n+1})/μ_n − Q(u_n) ∈ G(s_{n+1}). Since G is monotone, we get

    ⟨(u_n − s_{n+1})/μ_n − Q(u_n) + Q(s), s_{n+1} − s⟩ ≥ 0. (3.5)

    Since Q is strongly monotone with constant δ>0, we have

    ⟨Q(s_{n+1}) − Q(s), s_{n+1} − s⟩ ≥ δ‖s_{n+1} − s‖². (3.6)

    By adding (3.5) and (3.6), we get

    ⟨(u_n − s_{n+1})/μ_n + Q(s_{n+1}) − Q(u_n), s_{n+1} − s⟩ ≥ δ‖s_{n+1} − s‖² (3.7)

    or

    (1/μ_n)⟨u_n − s_{n+1}, s_{n+1} − s⟩ + ⟨Q(s_{n+1}) − Q(u_n), s_{n+1} − s⟩ ≥ δ‖s_{n+1} − s‖². (3.8)

    By using the Cauchy–Schwarz inequality and the Lipschitz continuity of Q, we have

    ⟨Q(s_{n+1}) − Q(u_n), s_{n+1} − s⟩ ≤ ‖Q(s_{n+1}) − Q(u_n)‖‖s_{n+1} − s‖ ≤ κ‖s_{n+1} − u_n‖‖s_{n+1} − s‖ ≤ (κ/2){‖s_{n+1} − u_n‖² + ‖s_{n+1} − s‖²}. (3.9)

    By using (2.1), we have

    ‖u_n − s‖² = ‖u_n − s_{n+1} + s_{n+1} − s‖² = ‖u_n − s_{n+1}‖² + ‖s_{n+1} − s‖² + 2⟨u_n − s_{n+1}, s_{n+1} − s⟩. (3.10)

    Considering (3.8)–(3.10), we get

    ‖s_{n+1} − s‖² ≤ ‖u_n − s‖² − ‖u_n − s_{n+1}‖² + μ_nκ{‖s_{n+1} − u_n‖² + ‖s_{n+1} − s‖²} − 2μ_nδ‖s_{n+1} − s‖². (3.11)

    Since κ ≤ 2δ, we have

    ‖s_{n+1} − s‖² ≤ ‖u_n − s‖² − (1 − 2δμ_n)‖s_{n+1} − u_n‖² (3.12)

    or

    ‖s_{n+1} − s‖² ≤ ‖u_n − s‖². (3.13)

    Since lim_{n→∞} β_n‖s_n − s_{n−1}‖/λ_n = 0 (Remark 3.1), there exists K_1 > 0 such that β_n‖s_n − s_{n−1}‖/λ_n ≤ K_1, that is, β_n‖s_n − s_{n−1}‖ ≤ λ_nK_1. By using (3.13) and mathematical induction, bearing in mind that k is a contraction and T is nonexpansive, it follows from (3.3) that

    ‖u_n − s‖ = ‖λ_nk(s_n) + (1 − λ_n)[T(s_n) + β_n(s_n − s_{n−1})] − s‖
    ≤ λ_n‖k(s_n) − s‖ + (1 − λ_n)[‖T(s_n) − s‖ + β_n‖s_n − s_{n−1}‖]
    ≤ λ_n‖k(s_n) − k(s)‖ + λ_n‖k(s) − s‖ + (1 − λ_n)[‖s_n − s‖ + β_n‖s_n − s_{n−1}‖]
    ≤ λ_nτ‖s_n − s‖ + λ_n‖k(s) − s‖ + (1 − λ_n)‖s_n − s‖ + λ_nK_1
    = [1 − λ_n(1 − τ)]‖s_n − s‖ + λ_n(1 − τ)·(‖k(s) − s‖ + K_1)/(1 − τ)
    ≤ max{‖s_n − s‖, (‖k(s) − s‖ + K_1)/(1 − τ)}
    ≤ max{‖u_{n−1} − s‖, (‖k(s) − s‖ + K_1)/(1 − τ)}
    ≤ ⋯ ≤ max{‖u_0 − s‖, (‖k(s) − s‖ + K_1)/(1 − τ)},

    meaning that {u_n} is bounded and hence {s_n} is also bounded. Let v_n = T(s_n) + β_n(s_n − s_{n−1}). Note that {v_n} is also bounded. By using (3.3), we get

    ‖u_n − s‖² = ‖λ_nk(s_n) + (1 − λ_n)v_n − s‖² = λ_n²‖k(s_n) − s‖² + (1 − λ_n)²‖v_n − s‖² + 2λ_n(1 − λ_n)⟨k(s_n) − s, v_n − s⟩. (3.14)

    Now, we need to calculate

    ‖v_n − s‖² = ‖T(s_n) + β_n(s_n − s_{n−1}) − s‖² ≤ ‖T(s_n) − s‖² + 2β_n⟨s_n − s_{n−1}, v_n − s⟩ ≤ ‖T(s_n) − s‖² + 2β_n‖s_n − s_{n−1}‖‖v_n − s‖ ≤ ‖s_n − s‖² + 2Θ_n‖v_n − s‖, (3.15)

    where Θ_n = β_n‖s_n − s_{n−1}‖, and

    ⟨k(s_n) − s, v_n − s⟩ = ⟨k(s_n) − k(s), v_n − s⟩ + ⟨k(s) − s, v_n − s⟩ ≤ ‖k(s_n) − k(s)‖‖v_n − s‖ + ⟨k(s) − s, v_n − s⟩ ≤ (1/2){τ²‖s_n − s‖² + ‖v_n − s‖²} + ⟨k(s) − s, v_n − s⟩ (3.16)

    and

    ⟨k(s) − s, v_n − s⟩ = ⟨k(s) − s, T(s_n) + β_n(s_n − s_{n−1}) − s⟩ = ⟨k(s) − s, T(s_n) − s⟩ + ⟨k(s) − s, β_n(s_n − s_{n−1})⟩ ≤ ⟨k(s) − s, T(s_n) − s⟩ + β_n‖k(s) − s‖‖s_n − s_{n−1}‖ ≤ ⟨k(s) − s, T(s_n) − s⟩ + Θ_n‖k(s) − s‖. (3.17)

    By using (3.14)–(3.17), we get

    ‖u_n − s‖² ≤ λ_n²‖k(s_n) − s‖² + (1 − λ_n)²{‖s_n − s‖² + 2Θ_n‖v_n − s‖} + λ_n(1 − λ_n)τ²‖s_n − s‖² + λ_n(1 − λ_n)‖v_n − s‖² + 2λ_n(1 − λ_n)⟨k(s) − s, T(s_n) − s⟩ + 2λ_n(1 − λ_n)Θ_n‖k(s) − s‖
    ≤ [1 − λ_n(1 − τ²)]‖s_n − s‖² + λ_n{λ_n‖k(s_n) − s‖² + 2(1 − λ_n)⟨k(s) − s, T(s_n) − s⟩ + 2(1 − λ_n)Θ_n‖k(s) − s‖ + 2(Θ_n/λ_n)‖v_n − s‖}. (3.18)

    Let γ_n = λ_n(1 − τ²). Then it follows from (3.12) and (3.18) that

    ‖s_{n+1} − s‖² ≤ (1 − γ_n)‖s_n − s‖² + γ_nU_n − (1 − 2δμ_n)‖s_{n+1} − u_n‖², (3.19)

    where

    U_n = [λ_n‖k(s_n) − s‖² + 2(1 − λ_n)⟨k(s) − s, T(s_n) − s⟩ + 2(1 − λ_n)Θ_n‖k(s) − s‖ + 2(Θ_n/λ_n)‖v_n − s‖]/(1 − τ²). (3.20)

    Now, we continue the proof in the following two cases:

    Case Ⅰ: If {‖s_n − s‖} is monotonically decreasing, then there exists a number N_1 such that ‖s_{n+1} − s‖ ≤ ‖s_n − s‖ for all n ≥ N_1. Hence, the boundedness of {‖s_n − s‖} implies that {‖s_n − s‖} is convergent. Therefore, using (3.19), we have

    (1 − 2δμ_n)‖s_{n+1} − u_n‖² ≤ ‖s_n − s‖² − ‖s_{n+1} − s‖² − γ_n‖s_n − s‖² + γ_nU_n. (3.21)

    Since 2δμ_n < 1 and lim_{n→∞} γ_n = 0, we obtain

    lim_{n→∞} ‖s_{n+1} − u_n‖ = 0. (3.22)

    By using (3.22) and Remark 3.1, we get

    lim_{n→∞} ‖v_n − T(s_n)‖ = lim_{n→∞} β_n‖s_n − s_{n−1}‖ = 0. (3.23)

    The boundedness of {s_n} and {v_n} implies that there exist M_1 and M_2 such that ‖s_n − s‖ ≤ M_1 and ‖k(s) − v_n‖ ≤ M_2; hence

    ‖u_n − v_n‖ = λ_n‖k(s_n) − v_n‖ ≤ λ_n[‖k(s_n) − k(s)‖ + ‖k(s) − v_n‖] ≤ λ_n[τ‖s_n − s‖ + ‖k(s) − v_n‖] ≤ λ_n[τM_1 + M_2] → 0 as n → ∞. (3.24)

    The following can be obtained easily by using (3.22) and (3.23):

    lim_{n→∞} ‖Ts_n − s_n‖ = lim_{n→∞} ‖s_n − v_n‖ = 0. (3.25)

    Since {s_n} is bounded, there exists a subsequence {s_{n_k}} of {s_n} such that s_{n_k} ⇀ s̄. As a consequence, from (3.22) and (3.25), it follows that u_{n_k} ⇀ s̄ and v_{n_k} ⇀ s̄. Now, we will show that s̄ ∈ Δ ∩ Λ. Since T is nonexpansive, by (3.25) we obtain s̄ ∈ Fix(T). From (3.4), we have

    z_{n_k} = (u_{n_k} − s_{n_k+1})/μ_{n_k} − Q(u_{n_k}) ∈ G(s_{n_k+1}). (3.26)

    Since 0 < μ̄ ≤ μ_n ≤ μ and, from (3.22), s_{n_k+1} − u_{n_k} → 0, by the Lipschitz continuity of Q we get

    z_{n_k} → −Q(s̄) as k → ∞. (3.27)

    Letting k → ∞, since the graph of a maximal monotone operator is weakly–strongly closed, we get −Q(s̄) ∈ G(s̄), that is, 0 ∈ Q(s̄) + G(s̄). Thus s̄ ∈ Δ ∩ Λ.

    Next, we show that {sn} strongly converges to s. From (3.19), it immediately follows that

    ‖s_{n+1} − s‖² ≤ (1 − γ_n)‖s_n − s‖² + γ_nU_n (3.28)

    and

    limsup_{n→∞} U_n = limsup_{n→∞} [λ_n‖k(s_n) − s‖² + 2(1 − λ_n)⟨k(s) − s, T(s_n) − s⟩ + 2(1 − λ_n)Θ_n‖k(s) − s‖ + 2(Θ_n/λ_n)‖v_n − s‖]/(1 − τ²) = (2/(1 − τ²))⟨k(s) − s, s̄ − s⟩ ≤ 0.

    By using Lemma 2.1, we deduce that {s_n} converges strongly to s, where s is the solution of the variational inequality (3.1). Further, it follows that s_n − u_n → 0, u_n ⇀ s̄ ∈ Δ ∩ Λ, and s_n → s as n → ∞; thus s̄ = s. This completes the proof.

    Case Ⅱ: If Case Ⅰ is false, then the function ρ : ℕ → ℕ defined by ρ(n) = max{m ≤ n : ‖s_m − s‖ ≤ ‖s_{m+1} − s‖} is increasing, ρ(n) → ∞ as n → ∞, and

    0 ≤ ‖s_{ρ(n)} − s‖ ≤ ‖s_{ρ(n)+1} − s‖, ∀ n ≥ n_0. (3.29)

    For the same reasons as in the proof of Case Ⅰ, we obtain ‖s_{ρ(n)+1} − u_{ρ(n)}‖ → 0, ‖s_{ρ(n)} − u_{ρ(n)}‖ → 0, and limsup_{n→∞} U_{ρ(n)} ≤ 0 as n → ∞. By using (3.19) and (3.29), we obtain

    0 ≤ ‖s_{ρ(n)} − s‖² ≤ U_{ρ(n)}. (3.30)

    Thus, we get ‖s_{ρ(n)} − s‖ → 0 as n → ∞. Keeping in mind Lemma 2.2, we have

    0 ≤ ‖s_n − s‖ ≤ max{‖s_n − s‖, ‖s_{ρ(n)} − s‖} ≤ ‖s_{ρ(n)+1} − s‖. (3.31)

    Consequently, from (3.31), ‖s_n − s‖ → 0 as n → ∞. Therefore, s_n → s as n → ∞, where s is a solution of the variational inequality (3.1).

    Theorem 3.2. If the Assumptions (S1)–(S5) are satisfied, then the sequences induced by Algorithm 3.2 converge strongly to s ∈ Λ ∩ Δ, which solves the following variational inequality:

    ⟨k(s) − s, w − s⟩ ≤ 0, ∀ w ∈ Λ ∩ Δ.

    Algorithm 3.2. Viscosity-type inertial iterative method-II (VIIM-II)
    Let β ∈ [0,1) and μ_n > 0 be given. Choose arbitrary points s_0 and s_1 and set n = 1.
    Iterative step. For the iterates s_n and s_{n−1} (n ≥ 1), select 0 < β_n < β̄_n, where
    β̄_n = min{σ_n/‖s_n − s_{n−1}‖, β} if s_n ≠ s_{n−1}, and β̄_n = β otherwise, (3.32)
    and compute
    u_n = λ_nk(s_n) + (1 − λ_n)T(s_n) + β_n(s_n − s_{n−1}), (3.33)
    0 ∈ Q(u_n) + G(s_{n+1}) + (s_{n+1} − u_n)/μ_n. (3.34)
    If s_{n+1} = u_n, then stop. If not, set n = n + 1 and go back to the iterative step.

    Proof. Let s ∈ Λ ∩ Δ. By Remark 3.1, there exists M_1 > 0 such that β_n‖s_n − s_{n−1}‖/λ_n ≤ M_1. Then, using (3.33), we obtain

    ‖u_n − s‖ = ‖λ_nk(s_n) + (1 − λ_n)T(s_n) + β_n(s_n − s_{n−1}) − s‖
    ≤ λ_n‖k(s_n) − s‖ + (1 − λ_n)‖T(s_n) − s‖ + β_n‖s_n − s_{n−1}‖
    ≤ λ_n‖k(s_n) − k(s)‖ + λ_n‖k(s) − s‖ + (1 − λ_n)‖s_n − s‖ + β_n‖s_n − s_{n−1}‖
    ≤ λ_n[τ‖s_n − s‖ + ‖k(s) − s‖ + (β_n/λ_n)‖s_n − s_{n−1}‖] + (1 − λ_n)‖s_n − s‖
    ≤ [1 − λ_n(1 − τ)]‖s_n − s‖ + λ_n(1 − τ)·(‖k(s) − s‖ + M_1)/(1 − τ)
    ≤ max{‖s_n − s‖, (‖k(s) − s‖ + M_1)/(1 − τ)}
    ≤ max{‖u_{n−1} − s‖, (‖k(s) − s‖ + M_1)/(1 − τ)}
    ≤ ⋯ ≤ max{‖u_0 − s‖, (‖k(s) − s‖ + M_1)/(1 − τ)}, (3.35)

    implying that {u_n} is bounded and so is {s_n}. Let x_n = λ_nk(s_n) + (1 − λ_n)T(s_n); then by using (2.1), we get

    ‖u_n − s‖² = ‖x_n + β_n(s_n − s_{n−1}) − s‖² = ‖x_n − s‖² + 2⟨x_n − s, β_n(s_n − s_{n−1})⟩ + β_n²‖s_n − s_{n−1}‖², (3.36)

    and

    ‖x_n − s‖² = ‖λ_nk(s_n) + (1 − λ_n)T(s_n) − s‖²
    = λ_n²‖k(s_n) − s‖² + 2λ_n(1 − λ_n)⟨k(s_n) − s, T(s_n) − s⟩ + (1 − λ_n)²‖T(s_n) − s‖²
    = λ_n²‖k(s_n) − s‖² + 2λ_n(1 − λ_n)⟨k(s_n) − k(s), T(s_n) − s⟩ + 2λ_n(1 − λ_n)⟨k(s) − s, T(s_n) − s⟩ + (1 − λ_n)²‖T(s_n) − s‖²
    ≤ λ_n²‖k(s_n) − s‖² + (1 − λ_n)²‖T(s_n) − s‖² + 2λ_n‖k(s_n) − k(s)‖‖T(s_n) − s‖ + 2λ_n⟨k(s) − s, T(s_n) − s⟩ + 2λ_n²‖k(s) − s‖‖T(s_n) − s‖
    ≤ λ_n²‖k(s_n) − s‖² + (1 − λ_n)²‖s_n − s‖² + 2λ_nτ‖s_n − s‖² + 2λ_n⟨k(s) − s, T(s_n) − s⟩ + 2λ_n²‖k(s) − s‖‖s_n − s‖
    ≤ [1 − 2λ_n(1 − τ)]‖s_n − s‖² + λ_n{λ_n‖k(s_n) − s‖² + λ_n‖s_n − s‖² + 2λ_n‖k(s) − s‖‖s_n − s‖ + 2⟨k(s) − s, T(s_n) − s⟩} (3.37)

    and

    ⟨x_n − s, β_n(s_n − s_{n−1})⟩ = ⟨λ_nk(s_n) + (1 − λ_n)T(s_n) − s, β_n(s_n − s_{n−1})⟩
    = λ_n⟨k(s_n) − s, β_n(s_n − s_{n−1})⟩ + (1 − λ_n)⟨T(s_n) − s, β_n(s_n − s_{n−1})⟩
    ≤ λ_n‖k(s_n) − s‖β_n‖s_n − s_{n−1}‖ + (1 − λ_n)‖T(s_n) − s‖β_n‖s_n − s_{n−1}‖
    ≤ β_n‖s_n − s_{n−1}‖{‖k(s_n) − s‖ + ‖s_n − s‖}. (3.38)

    From (3.36)–(3.38), we get

    ‖u_n − s‖² ≤ [1 − λ_n(1 − τ)]‖s_n − s‖² + λ_n{λ_n‖k(s_n) − s‖² + λ_n‖s_n − s‖² + 2λ_n‖k(s) − s‖‖s_n − s‖ + 2⟨k(s) − s, T(s_n) − s⟩ + 2(β_n/λ_n)‖s_n − s_{n−1}‖(‖k(s_n) − s‖ + ‖s_n − s‖) + (β_n²/λ_n)‖s_n − s_{n−1}‖²}

    or

    ‖u_n − s‖² ≤ (1 − ς_n)‖s_n − s‖² + ς_nV_n, (3.39)

    where ς_n = λ_n(1 − τ) and

    V_n = (1/(1 − τ))[λ_n‖k(s_n) − s‖² + λ_n‖s_n − s‖² + 2λ_n‖k(s) − s‖‖s_n − s‖ + 2⟨k(s) − s, T(s_n) − s⟩ + 2(β_n/λ_n)‖s_n − s_{n−1}‖(‖k(s_n) − s‖ + ‖s_n − s‖) + (β_n²/λ_n)‖s_n − s_{n−1}‖²].

    Combining (3.12) and (3.39), we obtain

    ‖s_{n+1} − s‖² ≤ (1 − ς_n)‖s_n − s‖² + ς_nV_n − (1 − 2δμ_n)‖s_{n+1} − u_n‖².

    We obtain the intended outcomes by following the same procedures as in the proof of Theorem 3.1.

    Some Halpern-type inertial iterative methods for VIsPs and FPPs follow as consequences of our suggested methods.

    Corollary 4.1. Suppose that the Assumptions (S2)–(S5) hold. The sequence {s_n} induced by Algorithm 4.1 converges strongly to y = P_{Λ∩Δ}(z).

    Algorithm 4.1. Halpern-type inertial iteration method-1
    Let β ∈ [0,1) and μ_n > 0 be given. Choose arbitrary points s_0 and s_1 and z ∈ E, and set n = 1.
    Iterative step. For the iterates s_n and s_{n−1} (n ≥ 1), select 0 < β_n < β̄_n, where
    β̄_n = min{σ_n/‖s_n − s_{n−1}‖, β} if s_n ≠ s_{n−1}, and β̄_n = β otherwise,
    and compute
    u_n = λ_nz + (1 − λ_n)[T(s_n) + β_n(s_n − s_{n−1})],
    0 ∈ Q(u_n) + G(s_{n+1}) + (s_{n+1} − u_n)/μ_n.
    If s_{n+1} = u_n, then stop. If not, set n = n + 1 and go back to the iterative step.

    Proof. By replacing k(s_n) by z in Algorithm 3.1 and following the proof of Theorem 3.1, we get the desired result.

    Corollary 4.2. Suppose that the Assumptions (S2)–(S5) hold. The sequence {s_n} induced by Algorithm 4.2 converges strongly to y = P_{Λ∩Δ}(z).

    Algorithm 4.2. Halpern-type inertial iteration method-2
    Let β ∈ [0,1) and μ_n > 0 be given. Choose arbitrary points s_0 and s_1 and z ∈ E, and set n = 1.
    Iterative step. For the iterates s_n and s_{n−1} (n ≥ 1), select 0 < β_n < β̄_n, where
    β̄_n = min{σ_n/‖s_n − s_{n−1}‖, β} if s_n ≠ s_{n−1}, and β̄_n = β otherwise,
    and compute
    u_n = λ_nz + (1 − λ_n)T(s_n) + β_n(s_n − s_{n−1}),
    0 ∈ Q(u_n) + G(s_{n+1}) + (s_{n+1} − u_n)/μ_n.
    If s_{n+1} = u_n, then stop. If not, set n = n + 1 and go back to the iterative step.

    Proof. By replacing k(s_n) by z in Algorithm 3.2 and following the proof of Theorem 3.2, we get the result.

    Now, we present some theoretical applications of our methods for solving variational inequality and optimization problems together with the fixed point problem.

    Let Ω ⊆ E and let Q : E → E be a monotone operator. The variational inequality problem (VItP) is to find s ∈ Ω such that

    ⟨Q(s), w − s⟩ ≥ 0, ∀ w ∈ Ω. (5.1)

    The normal cone to Ω at z is defined by

    N_Ω(z) = {u ∈ E : ⟨u, w − z⟩ ≤ 0, ∀ w ∈ Ω}. (5.2)

    It is well known that s solves the (VItP) if and only if

    0 ∈ Q(s) + N_Ω(s). (5.3)

    The indicator function of Ω is defined by

    I_Ω(w) = 0 if w ∈ Ω, and I_Ω(w) = +∞ if w ∉ Ω.

    Since IΩ is a proper lower semicontinuous convex function on E, the subdifferential of IΩ is defined as

    ∂I_Ω(z) = {u ∈ E : ⟨u, w − z⟩ ≤ 0, ∀ w ∈ Ω}, (5.4)

    which is maximal monotone (see [26]). From (5.2) and (5.4), we can write (5.3) as

    0 ∈ Q(s) + ∂I_Ω(s).

    By replacing G by ∂I_Ω in Algorithms 3.1 and 3.2, we get viscosity-type inertial iteration methods for common solutions of VItPs and FPPs.
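Since the resolvent (I + μ∂I_Ω)^{−1} is exactly the metric projection P_Ω (independently of μ), the backward step then amounts to a projection. A small sketch with an illustrative box constraint of our own choosing:

```python
import numpy as np

# The resolvent of dI_Omega is the metric projection P_Omega, so the
# backward step becomes a projection. Illustrative toy choice:
# Omega = [-1, 1]^3 and Q(s) = s - a, whose unconstrained zero a lies
# outside the box.
lo, hi = -1.0, 1.0

def proj_box(v):
    # P_Omega for a box is a componentwise clip
    return np.clip(v, lo, hi)

a = np.array([2.0, 0.3, -5.0])
s = np.zeros(3)
mu = 0.5
for _ in range(100):
    s = proj_box(s - mu * (s - a))   # s_{n+1} = P_Omega[(I - mu Q)s_n]

print(s)  # converges to the projection of a onto the box
```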

    Let Ω ⊆ E be a nonempty closed convex subset and let f_1, f_2 be proper, lower semicontinuous convex functions. Assume that f_1 is differentiable and that ∇f_1 is δ-strongly monotone (hence monotone) and κ-Lipschitz continuous. The subdifferential of f_2 is defined by

    ∂f_2(y) = {z ∈ E : f_2(y) − f_2(w) ≤ ⟨y − w, z⟩, ∀ w ∈ E}

    and is maximal monotone [25]. The following convex minimization problem (COP) is taken into consideration:

    min_{y∈Ω} F(y) = min_{y∈Ω} {f_1(y) + f_2(y)}.

    Therefore, by taking Q = ∇f_1 and G = ∂f_2 in Algorithms 3.1 and 3.2, we get two viscosity-type inertial iteration methods for common solutions of COPs and FPPs.

    Example 6.1. Let E = ℝ³. For s = (s_1, s_2, s_3) and w = (w_1, w_2, w_3) ∈ ℝ³, the usual inner product is defined by ⟨s, w⟩ = s_1w_1 + s_2w_2 + s_3w_3 and ‖w‖² = |w_1|² + |w_2|² + |w_3|². We define the operators Q and G by

    Q(w_1, w_2, w_3) = (w_1/2, w_2/3, w_3/4) and G(w_1, w_2, w_3) = (w_1/6, w_2/5, w_3/4),

    that is, Q = diag(1/2, 1/3, 1/4) and G = diag(1/6, 1/5, 1/4).

    It is trivial to show that the mapping Q is η-inverse strongly monotone with η = 2, δ-strongly monotone (hence monotone) with δ = 1/4, and κ-Lipschitz continuous with κ = 1/2. The mapping G is maximal monotone. We define the mappings T and k as follows:

    T(w_1, w_2, w_3) = (w_1, w_2, w_3) (the identity mapping) and k(w_1, w_2, w_3) = (w_1/6, w_2/6, w_3/6).

    The mapping T is nonexpansive and k is a τ-contraction with τ = 1/6. For Algorithms 3.1 and 3.2, we choose β = 0.3, λ_n = 1/(100 + n), σ_n = 1/(1 + n)², μ_n = 3/2 − 1/(10 + n); β_n is selected randomly from (0, β̄_n), and β̄_n is calculated by (3.2). For Algorithm 1.1, we choose θ = 0.5 and θ_n = 1/(1 + n)² ∈ (0, θ), λ = 0.5 ∈ (0, 2η), and ψ_n = 1/(10 + n)^{0.1}. We compute the results of Algorithms 3.1 and 3.2 and then compare them with Algorithm 1.1. The stopping criterion for our calculation is Tol_n < 10^{−15}, where Tol_n = ‖s_{n+1} − s_n‖. We select some different cases of initial values as given below:

    Case (a): w_0 = (1, 7, 9), w_1 = (1, 3, 4);

    Case (b): w_0 = (30, 53, 91), w_1 = (1/2, 3/4, 4/11);

    Case (c): w_0 = (1/2, 14, 0), w_1 = (0, 23, 1/4);

    Case (d): w_0 = (0.1, 10, 200), w_1 = (100, 2, 1/4).

    The experimental findings are shown in Table 1 and Figures 1–4.

    Table 1. Comparison table of VIIM-1, VIIM-2, and VIM by using Cases (a)–(d).
    Case                      VIIM-1      VIIM-2      VIM
    (a)  Iterations           22          22          31
         Time in seconds      8.7e-006    1.3e-005    1.06e-005
    (b)  Iterations           21          23          31
         Time in seconds      8.6e-006    8.9e-006    8.1e-006
    (c)  Iterations           17          19          28
         Time in seconds      8.9e-006    1.22e-005   1.08e-005
    (d)  Iterations           23          26          35
         Time in seconds      8.8e-006    9.5e-006    1.08e-005

    Figure 1. Graphical behavior of ‖s_{n+1} − s_n‖ from VIIM-1, VIIM-2, and VIM by choosing Case (a).
    Figure 2. Graphical behavior of ‖s_{n+1} − s_n‖ from VIIM-1, VIIM-2, and VIM by choosing Case (b).
    Figure 3. Graphical behavior of ‖s_{n+1} − s_n‖ from VIIM-1, VIIM-2, and VIM by choosing Case (c).
    Figure 4. Graphical behavior of ‖s_{n+1} − s_n‖ from VIIM-1, VIIM-2, and VIM by choosing Case (d).

    Example 6.2. Let us consider the infinite-dimensional real Hilbert space E_1 = E_2 = ℓ_2 := {u = (u_1, u_2, u_3, …, u_n, …), u_n ∈ ℝ : ∑_{n=1}^{∞} |u_n|² < ∞} with inner product ⟨u, v⟩ = ∑_{n=1}^{∞} u_nv_n and norm ‖u‖ = (∑_{n=1}^{∞} |u_n|²)^{1/2}. We define the monotone mappings by Q(u) := u/5 = (u_1/5, u_2/5, u_3/5, …, u_n/5, …) and G(u) := u = (u_1, u_2, u_3, …, u_n, …). Let k(u) := u/15 be the contraction, and let the nonexpansive map T be defined by T(u) := u/3 = (u_1/3, u_2/3, u_3/3, …, u_n/3, …).

    It can be seen that Q is δ-strongly monotone with δ = 1/5, κ-Lipschitz continuous with κ = 1/5, and also η-inverse strongly monotone with η = 5; G is maximal monotone; k is a τ-contraction with τ = 1/15. We choose β = 0.4, λ_n = 1/(n + 200)^{0.25}, σ_n = 1/(10 + n)³, μ_n = 4/3 − 1/(n + 50); β_n is selected randomly, and β̄_n is calculated by (3.2). We choose θ = 0.4 and θ_n = 1/(10 + n)³ ∈ (0, θ), λ = 0.7 ∈ (0, 2η), and ψ_n = 1/(200 + n)^{0.25}. We compute the results of Algorithms 3.1 and 3.2 and then compare them with Algorithm 1.1. The stopping criterion for our computation is Tol_n < 10^{−15}, where Tol_n = (1/2)‖s_{n+1} − s_n‖. We consider the following four cases of initial values:

    Case (a'): w_0 = {1/n}_{n=1}^{∞}, w_1 = {1/(1 + n²)}_{n=0}^{∞};

    Case (b'): w_0 = {1/(n + 1) if n is odd, 0 if n is even}, w_1 = {1/(1 + n³)}_{n=1}^{∞};

    Case (c'): w_0 = (0, 0, 0, 0, …), w_1 = (1, 2, 3, 4, 0, 0, 0, …);

    Case (d'): w_0 = {(−1)ⁿ/n}_{n=1}^{∞}, w_1 = {0 if n is odd, 1/n² if n is even}.
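Such ℓ_2 iterations can be simulated by truncating every sequence to its first N coordinates. Since Q, G, T, and k act coordinatewise, the implicit step (3.4) has the closed form s_{n+1} = (u_n − μ_nu_n/5)/(1 + μ_n). The sketch below uses Case (a')-style starting points and an illustrative truncation level N = 50 of our own choosing.

```python
import numpy as np

# Truncated l2 simulation of Algorithm 3.1 with Q(u) = u/5, G(u) = u,
# T(u) = u/3, k(u) = u/15. Coordinatewise operators give the implicit
# step (3.4) the closed form s_{n+1} = (u - mu*u/5) / (1 + mu).
N = 50
s_prev = np.array([1.0 / j for j in range(1, N + 1)])          # w_0 = {1/n}
s = np.array([1.0 / (1.0 + j * j) for j in range(1, N + 1)])   # w_1 ~ {1/(1+n^2)}
beta = 0.4
for n in range(1, 501):
    lam = 1.0 / (n + 200.0) ** 0.25
    sigma = 1.0 / (10.0 + n) ** 3
    mu = 4.0 / 3.0 - 1.0 / (n + 50.0)
    gap = np.linalg.norm(s - s_prev)
    bar_beta = min(sigma / gap, beta) if gap > 0 else beta
    beta_n = 0.5 * bar_beta
    u = lam * (s / 15.0) + (1 - lam) * (s / 3.0 + beta_n * (s - s_prev))
    s_prev, s = s, (u - mu * u / 5.0) / (1.0 + mu)

print(np.linalg.norm(s))  # the common solution is the zero sequence
```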

    The experimental findings are shown in Table 2 and Figures 5–8.

    Table 2. Comparison table of VIIM-1, VIIM-2, and VIM by using Cases (a')–(d').
    Case                      VIIM-1      VIIM-2      VIM
    (a') Iterations           37          40          90
         Time in seconds      9.4e-006    1.2e-005    8.5e-006
    (b') Iterations           36          39          90
         Time in seconds      1.39e-005   1.02e-005   1.06e-005
    (c') Iterations           32          34          39
         Time in seconds      9.6e-006    9.7e-006    1.66e-005
    (d') Iterations           22          32          50
         Time in seconds      1.35e-005   2.1e-005    1.41e-005

    Figure 5. Graphical behavior of ‖s_{n+1} − s_n‖ from VIIM-1, VIIM-2, and VIM by choosing Case (a').
    Figure 6. Graphical behavior of ‖s_{n+1} − s_n‖ from VIIM-1, VIIM-2, and VIM by choosing Case (b').
    Figure 7. Graphical behavior of ‖s_{n+1} − s_n‖ from VIIM-1, VIIM-2, and VIM by choosing Case (c').
    Figure 8. Graphical behavior of ‖s_{n+1} − s_n‖ from VIIM-1, VIIM-2, and VIM by choosing Case (d').

    We have suggested two viscosity-type inertial iteration methods for solving the VIsP and FPP in Hilbert spaces. Our methods compute the viscosity approximation, the fixed point iteration, and the inertial extrapolation simultaneously at the beginning of each iteration. We proved the strong convergence of the proposed methods without calculating the resolvent of the associated monotone operators. Some consequences and theoretical applications were also discussed. Finally, we illustrated the proposed methods by using some suitable numerical examples. The numerical examples indicate that our algorithms perform well in terms of CPU time and the number of iterations.

    M. Dilshad: Conceptualization, Methodology, Formal analysis, Investigation, Writing-original draft, Software, Writing-review & editing; A. Alamer: Conceptualization, Methodology, Formal analysis, Software, Writing-review & editing; Maryam G. Alshehri: Conceptualization, Methodology, Formal analysis, Software, Writing-review & editing; Esmail Alshaban: Investigation, Writing-original draft; Fahad M. Alamrani: Investigation, Writing-original draft. All authors have read and approved the final version of the manuscript for publication.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    The authors declare no conflicts of interest.



    [1] A. Alamer, M. Dilshad, Halpern-type inertial iteration methods with self-adaptive step size for split common null point problem, Mathematics, 12 (2024), 747. http://dx.doi.org/10.3390/math12050747 doi: 10.3390/math12050747
    [2] Q. Ansari, F. Babu, Proximal point algorithm for inclusion problems in Hadamard manifolds with applications, Optim. Lett., 15 (2021), 901–921. http://dx.doi.org/10.1007/s11590-019-01483-0 doi: 10.1007/s11590-019-01483-0
    [3] A. Adamu, D. Kitkuan, A. Padcharoen, C. Chidume, P. Kumam, Inertial viscosity-type iterative method for solving inclusion problems with applications, Math. Comput. Simulat., 194 (2022), 445–459. http://dx.doi.org/10.1016/j.matcom.2021.12.007 doi: 10.1016/j.matcom.2021.12.007
    [4] M. Akram, M. Dilshad, A. Rajpoot, F. Babu, R. Ahmad, J. Yao, Modified iterative schemes for a fixed point problem and a split variational inclusion problem, Mathematics, 10 (2022), 2098. http://dx.doi.org/10.3390/math10122098 doi: 10.3390/math10122098
    [5] M. Akram, M. Dilshad, A unified inertial iterative approach for general quasi variational inequality with application, Fractal Fract., 6 (2022), 395. http://dx.doi.org/10.3390/fractalfract6070395 doi: 10.3390/fractalfract6070395
    [6] F. Alvarez, H. Attouch, An inertial proximal method for maximal monotone operators via discretization of a nonlinear Oscillator with damping, Set-Valued Anal., 9 (2001), 3–11. http://dx.doi.org/10.1023/A:1011253113155 doi: 10.1023/A:1011253113155
    [7] J. Cruz, T. Nghia, On the convergence of the forward-backward splitting method with linesearches, Optim. Method. Softw., 31 (2016), 1209–1238. http://dx.doi.org/10.1080/10556788.2016.1214959 doi: 10.1080/10556788.2016.1214959
    [8] P. Combettes, V. Wajs, Signal recovery by proximal forward-backward splitting, Multiscale Model. Sim., 4 (2005), 1168–1200. http://dx.doi.org/10.1137/050626090 doi: 10.1137/050626090
    [9] P. Combettes, The convex feasibility problem in image recovery, Adv. Imag. Elect. Phys., 95 (1996), 155–270. http://dx.doi.org/10.1016/S1076-5670(08)70157-5 doi: 10.1016/S1076-5670(08)70157-5
    [10] Q. Dong, D. Jiang, P. Cholamjiak, Y. Shehu, A strong convergence result involving an inertial forward-backward algorithm for monotone inclusions, J. Fixed Point Theory Appl., 19 (2017), 3097–3118. http://dx.doi.org/10.1007/s11784-017-0472-7 doi: 10.1007/s11784-017-0472-7
    [11] J. Douglas, H. Rachford, On the numerical solution of heat conduction problems in two and three space variables, Trans. Amer. Math. Soc., 82 (1956), 421–439. http://dx.doi.org/10.2307/1993056 doi: 10.2307/1993056
    [12] M. Dilshad, A. Khan, M. Akram, Splitting type viscosity methods for inclusion and fixed point problems on Hadamard manifolds, AIMS Mathematics, 6 (2021), 5205–5221. http://dx.doi.org/10.3934/math.2021309 doi: 10.3934/math.2021309
    [13] M. Dilshad, M. Akram, Md. Nasiruzzaman, D. Filali, A. Khidir, Adaptive inertial Yosida approximation iterative algorithms for split variational inclusion and fixed point problems, AIMS Mathematics, 8 (2023), 12922–12942. http://dx.doi.org/10.3934/math.2023651 doi: 10.3934/math.2023651
    [14] D. Filali, M. Dilshad, L. Alyasi, M. Akram, Inertial iterative algorithms for split variational inclusion and fixed point problems, Axioms, 12 (2023), 848. http://dx.doi.org/10.3390/axioms12090848 doi: 10.3390/axioms12090848
    [15] D. Kitkuan, P. Kumam, J. Martínez-Moreno, Generalized Halpern-type forward-backward splitting methods for convex minimization problems with application to image restoration problems, Optimization, 69 (2020), 1557–1581. http://dx.doi.org/10.1080/02331934.2019.1646742 doi: 10.1080/02331934.2019.1646742
    [16] G. López, V. Martín-Márquez, F. Wang, H. Xu, Forward-backward splitting methods for accretive operators in Banach spaces, Abstr. Appl. Anal., 2012 (2012), 109236. http://dx.doi.org/10.1155/2012/109236 doi: 10.1155/2012/109236
    [17] D. Lorenz, T. Pock, An inertial forward-backward algorithm for monotone inclusions, J. Math. Imaging Vis., 51 (2015), 311–325. http://dx.doi.org/10.1007/s10851-014-0523-2 doi: 10.1007/s10851-014-0523-2
    [18] P. Lion, B. Mercier, Splitting algorithms for the sum of two nonlinear operators, SIAM J. Numer. Anal., 16 (1979), 964–979. http://dx.doi.org/10.1137/0716071 doi: 10.1137/0716071
    [19] Y. Malitsky, M. Tam, A forward-backward splitting method for monotone inclusions without cocoercivity, SIAM J. Optimiz., 30 (2020), 1451–1472. http://dx.doi.org/10.1137/18M1207260 doi: 10.1137/18M1207260
    [20] P. Mainge, Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization, Set-Valued Anal., 16 (2008), 899–912. http://dx.doi.org/10.1007/s11228-008-0102-z doi: 10.1007/s11228-008-0102-z
    [21] A. Moudafi, M. Oliny, Convergence of a splitting inertial proximal method for monotone operators, J. Comput. Appl. Math., 155 (2003), 447–454. http://dx.doi.org/10.1016/S0377-0427(02)00906-8 doi: 10.1016/S0377-0427(02)00906-8
    [22] B. Polyak, Some methods of speeding up the convergence of iteration methods, USSR Comput. Math. Math. Phys., 4 (1964), 1–17. http://dx.doi.org/10.1016/0041-5553(64)90137-5 doi: 10.1016/0041-5553(64)90137-5
    [23] M. Rahaman, R. Ahmad, M. Dilshad, I. Ahmad, Relaxed η-proximal operator for solving a variational-like inclusion problem, Math. Model. Anal., 20 (2015), 819–835. http://dx.doi.org/10.3846/13926292.2015.1117026 doi: 10.3846/13926292.2015.1117026
    [24] S. Reich, A. Taiwo, Fast hybrid iterative schemes for solving variational inclusion problems, Math. Methods. Appl. Sci., 46 (2023), 17177–17198. http://dx.doi.org/10.1002/mma.9494 doi: 10.1002/mma.9494
    [25] R. Rockafellar, On the maximal monotonicity of subdifferential mappings, Pac. J. Math., 33 (1970), 209–216. http://dx.doi.org/10.2140/pjm.1970.33.209 doi: 10.2140/pjm.1970.33.209
    [26] R. Rockafellar, Monotone operators and the proximal point algorithm, SIAM J. Control Optim., 14 (1976), 877–898. http://dx.doi.org/10.1137/0314056 doi: 10.1137/0314056
    [27] W. Takahashi, N. Wong, J. Yao, Two generalized strong convergence theorems of Halpern's type in Hilbert spaces and applications, Taiwan. J. Math., 16 (2012), 1151–1172. http://dx.doi.org/10.11650/twjm/1500406684 doi: 10.11650/twjm/1500406684
    [28] Y. Tang, H. Lin, A. Gibali, Y. Cho, Convergence analysis and applications of the inertial algorithm solving inclusion problems, Appl. Numer. Math., 175 (2022), 1–17. http://dx.doi.org/10.1016/j.apnum.2022.01.016 doi: 10.1016/j.apnum.2022.01.016
    [29] Y. Tang, Y. Zhang, A. Gibali, New self-adaptive inertial-like proximal point methods for the split common null point problem, Symmetry, 13 (2021), 2316. http://dx.doi.org/10.3390/sym13122316 doi: 10.3390/sym13122316
    [30] D. Thong, N. Vinh, Inertial methods for fixed point problems and zero point problems of the sum of two monotone mappings, Optimization, 68 (2019), 1037–1072. http://dx.doi.org/10.1080/02331934.2019.1573240 doi: 10.1080/02331934.2019.1573240
    [31] H. Xu, Iterative algorithms for nonlinear operators, J. Lond. Math. Soc., 66 (2002), 240–256. http://dx.doi.org/10.1112/S0024610702003332 doi: 10.1112/S0024610702003332
    [32] P. Yodjai, P. Kumam, D. Kitkuan, W. Jirakitpuwapat, S. Plubtieng, The Halpern approximation of three operators splitting method for convex minimization problems with an application to image inpainting, Bangmod Int. J. Math. Comp. Sci., 5 (2019), 58–75.
  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)