Research article

Accelerated modified inertial Mann and viscosity algorithms to find a fixed point of α-inverse strongly monotone operators

  • Received: 23 April 2021 Accepted: 11 June 2021 Published: 16 June 2021
  • MSC : 47H06, 47H09, 47J25

  • In this paper, strong convergence results for α-inverse strongly monotone operators under new algorithms in the framework of Hilbert spaces are discussed. Our algorithms combine the inertial Mann forward-backward method with the CQ-shrinking projection method and the viscosity algorithm, and they accelerate the modified inertial Mann-Halpern and viscosity algorithms. Finally, numerical examples are presented to illustrate the applications, performance, and effectiveness of our algorithms.

    Citation: Hasanen A. Hammad, Habib ur Rehman, Manuel De la Sen. Accelerated modified inertial Mann and viscosity algorithms to find a fixed point of α-inverse strongly monotone operators[J]. AIMS Mathematics, 2021, 6(8): 9000-9019. doi: 10.3934/math.2021522




    Assume that $C$ is a nonempty closed convex subset of a Hilbert space $\mathcal{H}$. A self-mapping $T:C\to C$ is called nonexpansive if

    $\|T\kappa-T\omega\|\le\|\kappa-\omega\|,$

    for all $\kappa,\omega\in C$. The set $F(T)=\{\kappa\in C:\ T\kappa=\kappa\}$ denotes the set of fixed points of the mapping $T$.

    In this paper, we discuss the following inclusion problem: Find $\tilde\kappa\in\mathcal{H}$ such that

    $0\in\Xi\tilde\kappa+\Pi\tilde\kappa,$ (1.1)

    where $\Xi:\mathcal{H}\to\mathcal{H}$ is an operator and $\Pi:\mathcal{H}\to 2^{\mathcal{H}}$ is a set-valued operator. Many real-world problems can be recast as fixed point problems for suitable mappings; for example, convex optimization and feasibility problems, image restoration problems, and monotone variational inequalities (see [1,2,3]). More precisely, several concrete problems in machine learning and in linear inverse problems can be modeled mathematically in this form.

    The classical approach to problem (1.1) (whose solution set is denoted by $(\Xi+\Pi)^{-1}(0)$) is the forward-backward splitting method [4,5,6,7,8], which is presented as follows: $\kappa_1\in\mathcal{H}$ and

    $\kappa_{n+1}=(I+\tau\Pi)^{-1}(\kappa_n-\tau\Xi\kappa_n),\quad n\ge 1,$ (1.2)

    where $\tau>0$ and $I$ is the identity mapping. In this scheme, the step involving $\Xi$ is the forward step and the resolvent step involving $\Pi$ is the backward step; the two operators are used separately rather than through their sum. As special cases, this technique covers the gradient method [9,10] and the proximal point algorithm [11,12].
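    As a concrete illustration (not taken from the paper), the following Python sketch runs iteration (1.2) when $\Xi$ is the gradient of a least-squares term and $\Pi$ is the normal cone operator of a box, so that the resolvent $(I+\tau\Pi)^{-1}$ reduces to a coordinatewise projection; the data $A$, $b$, the box and the step-size rule are illustrative assumptions.

```python
# Minimal sketch of the forward-backward iteration (1.2), assuming
# Xi = gradient of h(k) = 0.5*||A k - b||^2 (single-valued, Lipschitz) and
# Pi = normal cone of the box [lo, hi]^d, whose resolvent (I + tau*Pi)^{-1}
# is the coordinatewise projection onto the box.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))          # illustrative data, not from the paper
b = rng.standard_normal(20)
lo, hi = -1.0, 1.0                        # box defining the backward (resolvent) step

L = np.linalg.norm(A.T @ A, 2)            # Lipschitz constant of the forward operator
tau = 1.0 / L                             # step-size tau > 0

kappa = np.zeros(5)                       # kappa_1
for n in range(200):
    forward = kappa - tau * (A.T @ (A @ kappa - b))   # forward step with Xi
    kappa = np.clip(forward, lo, hi)                  # backward step: resolvent of Pi
print("approximate zero of Xi + Pi:", kappa)
```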

    In 1979, two splitting algorithms in a real Hilbert space were constructed by Lions and Mercier [13] as follows:

    $\kappa_{n+1}=(2J_\tau^{\Xi}-I)(2J_\tau^{\Pi}-I)\kappa_n,\quad n\ge 1,$ (1.3)

    and

    $\kappa_{n+1}=J_\tau^{\Xi}(2J_\tau^{\Pi}-I)\kappa_n+(I-J_\tau^{\Pi})\kappa_n,\quad n\ge 1,$ (1.4)

    where $J_\tau^{\Omega}=(I+\tau\Omega)^{-1}$. The algorithms (1.3) and (1.4) are usually called the Peaceman-Rachford algorithm [7] and the Douglas-Rachford algorithm [14], respectively. In general, both algorithms converge only weakly [15].

    Another direction concerning problem (1.1) involves two monotone and accretive mappings in Hilbert and Banach spaces: a stationary solution of the initial value problem

    $0\in\dfrac{d\kappa(t)}{dt}+(\Xi+\Pi)\kappa(t),\quad \kappa(0)=\kappa_0,$

    can be rewritten as (1.1) when the governing maximal monotone operator takes the form $\Xi+\Pi$ [13]. In optimization, one often needs [4] to solve a minimization problem of the form

    $\min_{\kappa\in\mathcal{H}}\ h(\kappa)+m(\kappa),$ (1.5)

    where $h,m:\mathcal{H}\to(-\infty,+\infty]$ are proper (that is, not identically $+\infty$) and lower semi-continuous convex functions such that $h$ is differentiable with an $L$-Lipschitz gradient, and the proximal mapping of $m$ is

    $\kappa\mapsto\operatorname*{argmin}_{\omega\in\mathcal{H}}\left\{m(\omega)+\dfrac{\|\kappa-\omega\|^{2}}{2\tau}\right\}.$

    In particular, if $\Xi=\nabla h$ and $\Pi=\partial m$, where $\nabla h$ is the gradient of $h$ and $\partial m$ is the subdifferential of $m$, defined by $\partial m(\kappa)=\{q\in\mathcal{H}:m(\omega)\ge m(\kappa)+\langle q,\omega-\kappa\rangle\ \text{for all}\ \omega\in\mathcal{H}\}$, then problem (1.1) becomes (1.5) and (1.2) becomes

    $\kappa_{n+1}=\operatorname{prox}_{\tau m}(\kappa_n-\tau\nabla h(\kappa_n)),\quad n\ge 1,$

    where $\tau>0$ is the step-size and $\operatorname{prox}_{\tau m}=(I+\tau\partial m)^{-1}$ is the proximity operator of $m$.
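    A minimal sketch of this proximal gradient iteration, assuming $h(\kappa)=\frac{1}{2}\|A\kappa-b\|^{2}$ and $m(\kappa)=\mu\|\kappa\|_{1}$ (illustrative choices, not from the paper), is given below; the proximity operator of $\tau m$ is then the familiar soft-thresholding map.

```python
# Hedged sketch of kappa_{n+1} = prox_{tau*m}(kappa_n - tau*grad h(kappa_n))
# for h(k) = 0.5*||A k - b||^2 and m(k) = mu*||k||_1.
import numpy as np

def soft_threshold(x, t):
    """Proximity operator of t*||.||_1: componentwise shrinkage toward zero."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)
mu = 0.1

L = np.linalg.norm(A.T @ A, 2)     # Lipschitz constant of grad h
tau = 1.0 / L                      # step-size

kappa = np.zeros(10)
for n in range(500):
    grad = A.T @ (A @ kappa - b)                           # forward (gradient) step
    kappa = soft_threshold(kappa - tau * grad, tau * mu)   # backward (proximal) step
print(kappa)
```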

    The rest of the paper is organized as follows. Section 2 reviews previously existing algorithms related to the well-known Mann iteration and some of its modifications and extensions. Section 3 collects the preliminary lemmas and definitions that are used to derive the main results. The new strong convergence results and their associated iterative algorithms, in particular the so-called inertial CQ-projection algorithm and the inertial shrinking CQ-projection viscosity algorithm, are given in Section 4. Section 5 contains the numerical experiments.

    The iteration $\kappa_{n+1}=T\kappa_n=\cdots=T^{n+1}\kappa_0$ is called the Picard iteration, where $\kappa_0$ is a starting point. It is one of the simplest iterative methods, but it has a defect: its convergence cannot be guaranteed even in the weak topology. One effective way to overcome this defect is the Mann iteration algorithm, which generates the iterative sequence $\{\kappa_n\}$ through the following convex combination:

    $\kappa_{n+1}=\zeta_n\kappa_n+(1-\zeta_n)T\kappa_n,\quad n\ge 0.$ (2.1)

    For nonexpansive mappings, the iteration (2.1) is useful for solving the fixed point problem and provides a unified framework for different algorithms. However, it also has a shortcoming: even in a Hilbert space, under the usual conditions the iterative sequence $\{\kappa_n\}$ defined by (2.1) converges only weakly. Many attempts to obtain strong convergence have been presented in [16,17,18].
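    The following toy sketch (an illustration, not part of the paper) shows the behaviour described above for (2.1): a plane rotation is nonexpansive with the origin as its unique fixed point, the Picard iterates merely circle around that fixed point, while the Mann averages with $\zeta_n=1/2$ converge to it.

```python
# Mann iteration (2.1) for a toy nonexpansive map: a rotation of R^2 by pi/2.
# zeta_n = 1/2 is an illustrative choice.
import numpy as np

R = np.array([[0.0, -1.0],
              [1.0,  0.0]])          # rotation by pi/2: nonexpansive, Fix = {0}
T = lambda x: R @ x

kappa = np.array([1.0, 2.0])
for n in range(100):
    zeta = 0.5
    kappa = zeta * kappa + (1.0 - zeta) * T(kappa)   # convex combination (2.1)
print(kappa)   # essentially the zero vector, while T alone would only rotate it
```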

    In 2001, Alvarez and Attouch [19] applied the heavy ball method of Polyak [20] to the proximal point algorithm for maximal monotone operators, obtaining the inertial proximal point algorithm

    $\begin{cases}\omega_n=\kappa_n+\theta_n(\kappa_n-\kappa_{n-1}),\\ \kappa_{n+1}=(I+\tau_n\Pi)^{-1}\omega_n,\end{cases}\quad n\ge 1.$ (2.2)

    It was proved that if $\{\tau_n\}$ is nondecreasing and $\{\theta_n\}\subset[0,1)$ with

    $\sum_{n=1}^{\infty}\theta_n\|\kappa_n-\kappa_{n-1}\|^{2}<\infty,$ (2.3)

    then algorithm (2.2) converges weakly to a zero of $\Pi$. In particular, condition (2.3) holds for $\theta_n<1/3$. Here $\theta_n$ is an extrapolation factor and the inertia is represented by the term $\theta_n(\kappa_n-\kappa_{n-1})$.

    Moudafi and Oliny [21] added a single-valued, co-coercive and Lipschitz continuous operator $\Xi$ to the inertial proximal point algorithm and built the following algorithm:

    $\begin{cases}\omega_n=\kappa_n+\theta_n(\kappa_n-\kappa_{n-1}),\\ \kappa_{n+1}=(I+\tau_n\Pi)^{-1}(\omega_n-\tau_n\Xi\kappa_n),\end{cases}\quad n\ge 1.$ (2.4)

    Under condition (2.3), a weak convergence result for algorithm (2.4) was obtained provided that $\tau_n<2/L$, where $L$ is the Lipschitz constant of $\Xi$. Note that for $\theta_n>0$ the algorithm (2.4) does not take the form of a forward-backward splitting algorithm, since the operator $\Xi$ is still evaluated at the point $\kappa_n$ rather than at $\omega_n$.
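    A hedged sketch of (2.4) is given below for the illustrative choice $\Xi(\kappa)=A^{T}(A\kappa-b)$ and $\Pi=\partial(\mu\|\cdot\|_{1})$, so that the resolvent is soft-thresholding; the constant extrapolation factor is a simplification (in theory condition (2.3) has to be enforced), and setting $\Xi=0$ recovers the inertial proximal point algorithm (2.2).

```python
# Moudafi-Oliny scheme (2.4): inertial extrapolation, then a resolvent step, with the
# single-valued operator Xi evaluated at kappa_n.  Data and parameters are illustrative.
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 10)); b = rng.standard_normal(30); mu = 0.1
L = np.linalg.norm(A.T @ A, 2)
tau = 1.0 / L                      # tau_n < 2/L, kept constant for simplicity
theta = 0.3                        # extrapolation factor, theta_n < 1/3

kappa_prev = np.zeros(10)
kappa = np.zeros(10)
for n in range(300):
    omega = kappa + theta * (kappa - kappa_prev)          # inertial step
    Xi_k = A.T @ (A @ kappa - b)                          # forward operator at kappa_n
    kappa_prev, kappa = kappa, soft_threshold(omega - tau * Xi_k, tau * mu)
print(kappa)
```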

    Of course, strong convergence is preferable to weak convergence: strongly convergent methods are usually more desirable, since the resulting approximations are more efficient and robust in potential applications.

    A strongly convergent modification of the Mann algorithm, called the CQ-algorithm, was presented by Nakajo and Takahashi [16]:

    $\begin{cases}\kappa_0\in C\ \text{chosen arbitrarily},\\ \omega_n=\sigma_n\kappa_n+(1-\sigma_n)T\kappa_n,\\ C_n=\{p\in C:\|\omega_n-p\|\le\|\kappa_n-p\|\},\\ Q_n=\{p\in C:\langle\kappa_0-\kappa_n,\kappa_n-p\rangle\ge 0\},\\ \kappa_{n+1}=P_{C_n\cap Q_n}\kappa_0,\end{cases}$ (2.5)

    for each $n\ge 0$, where $C$ is as defined in the previous section. They obtained strong convergence of the sequence $\{\kappa_n\}$ to $P_{\mathrm{Fix}(T)}\kappa_0$, provided that the sequence $\{\sigma_n\}$ is bounded above by a constant smaller than one. For further results on CQ-algorithms for nonexpansive mappings, we refer to [22].
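    To illustrate how (2.5) can be realized in $\mathbb{R}^d$, the sketch below takes $C=\mathbb{R}^d$ (an assumption made for brevity), so that $C_n$ and $Q_n$ are half-spaces, and computes the projection onto their intersection with a short Dykstra loop; this is one possible implementation, not the one used in [16].

```python
# CQ-algorithm (2.5) in R^2 with C = R^2, so C_n and Q_n are half-spaces.
import numpy as np

def proj_halfspace(x, a, beta):
    """Projection onto {p : <a, p> <= beta}."""
    viol = a @ x - beta
    return x if viol <= 0 else x - (viol / (a @ a)) * a

def proj_intersection(x, halfspaces, iters=200):
    """Dykstra's algorithm for the projection onto an intersection of half-spaces."""
    y = x.copy()
    corrections = [np.zeros_like(x) for _ in halfspaces]
    for _ in range(iters):
        for i, (a, beta) in enumerate(halfspaces):
            z = proj_halfspace(y + corrections[i], a, beta)
            corrections[i] = y + corrections[i] - z
            y = z
    return y

theta = np.pi / 3
T = lambda x: np.array([[np.cos(theta), -np.sin(theta)],
                        [np.sin(theta),  np.cos(theta)]]) @ x   # nonexpansive, Fix = {0}

kappa0 = np.array([2.0, 1.0])
kappa = kappa0.copy()
for n in range(30):
    sigma = 0.5                                   # bounded above by a constant < 1
    omega = sigma * kappa + (1 - sigma) * T(kappa)
    # C_n = {p : ||omega-p|| <= ||kappa-p||}  <=>  <2(kappa-omega), p> <= ||kappa||^2 - ||omega||^2
    Cn = (2 * (kappa - omega), kappa @ kappa - omega @ omega)
    # Q_n = {p : <kappa0-kappa, kappa-p> >= 0}  <=>  <kappa0-kappa, p> <= <kappa0-kappa, kappa>
    Qn = (kappa0 - kappa, (kappa0 - kappa) @ kappa)
    kappa = proj_intersection(kappa0, [Cn, Qn])
print(kappa)   # approaches P_{Fix(T)} kappa0 = 0
```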

    Motivated by the algorithm (2.5), Dong et al. [23] discussed a strong convergence result involving an inertial forward-backward algorithm for monotone inclusions: Let $\Xi:\mathcal{H}\to\mathcal{H}$ be an $\alpha$-inverse strongly monotone operator and $\Pi:\mathcal{H}\to 2^{\mathcal{H}}$ be a maximal monotone operator such that $(\Xi+\Pi)^{-1}(0)\ne\emptyset$. Let $\{\alpha_n\}\subset\mathbb{R}$ and let the sequence $\{\kappa_n\}$ be generated by $\kappa_0,\kappa_1\in\mathcal{H}$ and, for all $n\ge 1$,

    $\begin{cases}\omega_n=\kappa_n+\alpha_n(\kappa_n-\kappa_{n-1}),\\ \upsilon_n=(I+\tau_n\Pi)^{-1}(\omega_n-\tau_n\Xi\omega_n),\\ C_n=\{p\in\mathcal{H}:\|\upsilon_n-p\|^{2}\le\|\kappa_n-p\|^{2}-2\alpha_n\langle\kappa_n-p,\kappa_{n-1}-\kappa_n\rangle+\alpha_n^{2}\|\kappa_{n-1}-\kappa_n\|^{2}\},\\ Q_n=\{p\in\mathcal{H}:\langle\kappa_0-\kappa_n,\kappa_n-p\rangle\ge 0\},\\ \kappa_{n+1}=P_{C_n\cap Q_n}\kappa_0.\end{cases}$

    Recently, Kim and Xu [17] proposed the following modified Mann iteration algorithm based on the Halpern iterative algorithm [24] and the Mann iteration algorithm (2.1):

    $\begin{cases}\omega_n=\alpha_n\kappa_n+(1-\alpha_n)T\kappa_n,\\ \kappa_{n+1}=\zeta_n\kappa+(1-\zeta_n)\omega_n,\end{cases}\quad n\ge 0,$ (2.6)

    for a fixed anchor point $\kappa\in C$, where $T:C\to C$ is a nonexpansive mapping with $\mathrm{Fix}(T)\ne\emptyset$ and $\{\alpha_n\}$, $\{\zeta_n\}$ are sequences in $(0,1)$. Under mild conditions, the sequence $\{\kappa_n\}$ generated by (2.6) converges strongly to a fixed point of $T$. Many authors have worked in this direction and obtained strong convergence to a fixed point under appropriate conditions; see [25,26,27,28].
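    The next sketch illustrates the two-step structure of (2.6), a Mann step followed by a Halpern-type anchoring; the nonexpansive map, the anchor and the parameter sequences are illustrative assumptions.

```python
# Kim-Xu type iteration (2.6): Mann step, then anchoring toward a fixed element.
import numpy as np

R = np.array([[0.0, -1.0],
              [1.0,  0.0]])              # rotation by pi/2, Fix = {0}
T = lambda x: R @ x

anchor = np.array([1.0, -1.0])           # the fixed element kappa in (2.6)
x = np.array([3.0, 2.0])
for n in range(1, 5000):
    alpha, zeta = 0.5, 1.0 / (n + 1)     # zeta_n -> 0, sum of zeta_n diverges
    omega = alpha * x + (1 - alpha) * T(x)   # Mann step
    x = zeta * anchor + (1 - zeta) * omega   # Halpern anchoring
print(x)   # converges (slowly, at a Halpern-type rate) toward the fixed point 0
```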

    Chen et al. [18] generalized the results of [24] by introducing a new modified Mann iteration algorithm that combines the viscosity approximation algorithm [29] with the modified Mann iteration algorithm [17]. They established a strong convergence result under fewer restrictions. The above results have been extended to more general operators and wider classes of Banach spaces, such as quasi-nonexpansive, asymptotically quasi-nonexpansive and strictly pseudo-contractive mappings; see for instance [30,31,32,33,34,35,36].

    Inspired by the contributions of [16,17,23], we obtain new algorithms by combining the concepts of the inertial Mann forward-backward method, the CQ-shrinking projection method and the viscosity algorithm, and we discuss strong convergence results for these algorithms. Finally, numerical results are presented to illustrate the applications and the good acceleration performance of our algorithms. Our results extend and unify many papers in this direction, such as Kim and Xu [17], Chen et al. [18], Suzuki [37] and the papers [38,39,40].

    Throughout this paper, $\{\kappa_n\}$ denotes a sequence in $\mathcal{H}$, "$\to$" denotes strong convergence, "$\rightharpoonup$" denotes weak convergence and $P_C:\mathcal{H}\to C$ is the nearest point projection, that is, $\|\kappa-P_C\kappa\|\le\|\kappa-\omega\|$ for all $\kappa\in\mathcal{H}$ and $\omega\in C$; $P_C$ is called the metric projection. It is well known that $P_C$ satisfies the following inequality:

    $\|P_C\kappa-P_C\omega\|^{2}\le\langle P_C\kappa-P_C\omega,\kappa-\omega\rangle,$

    for all $\kappa,\omega\in\mathcal{H}$. In other words, the metric projection $P_C$ is firmly nonexpansive. Hence $\langle\kappa-P_C\kappa,\omega-P_C\kappa\rangle\le 0$ holds for all $\kappa\in\mathcal{H}$ and $\omega\in C$; see [41].

    Lemma 3.1. [41] Suppose that $\mathcal{H}$ is a real Hilbert space. Then for each $\kappa,\omega\in\mathcal{H}$ and every real number $\lambda$, we have

    (i) $\|\kappa+\omega\|^{2}\le\|\kappa\|^{2}+2\langle\omega,\kappa+\omega\rangle$,

    (ii) $\|\lambda\kappa+(1-\lambda)\omega\|^{2}=\lambda\|\kappa\|^{2}+(1-\lambda)\|\omega\|^{2}-\lambda(1-\lambda)\|\kappa-\omega\|^{2}$.

    Lemma 3.2. [42, Theorem 1.9.10, p. 39 and Theorem 2.2.13, p. 57] Suppose that $\mathcal{H}$ is a real Hilbert space and $\{\kappa_n\}$ is a sequence in $\mathcal{H}$. Then the following properties are fulfilled:

    (i) If $\kappa_n\rightharpoonup\kappa$ and $\|\kappa_n\|\to\|\kappa\|$ as $n\to\infty$, then $\lim_{n\to\infty}\kappa_n=\kappa$; that is, $\mathcal{H}$ has the Kadec-Klee property.

    (ii) If $\kappa_n\rightharpoonup\kappa$ as $n\to\infty$, then $\|\kappa\|\le\liminf_{n\to\infty}\|\kappa_n\|$.

    Lemma 3.3. [43] Let $C$ be a closed convex subset of a real Hilbert space $\mathcal{H}$. Then for each $\kappa,\omega,\upsilon\in\mathcal{H}$ and $\eth\in\mathbb{R}$, the following set is closed and convex:

    $\{\eta\in C:\|\omega-\eta\|^{2}\le\|\kappa-\eta\|^{2}+\langle\upsilon,\eta\rangle+\eth\}.$

    Lemma 3.4. [21] Let $C$ be a closed convex subset of a real Hilbert space $\mathcal{H}$ and let $P_C:\mathcal{H}\to C$ be the metric projection. Then

    $\|\omega-P_C\kappa\|^{2}+\|\kappa-P_C\kappa\|^{2}\le\|\kappa-\omega\|^{2}$

    for all $\kappa\in\mathcal{H}$ and $\omega\in C$.

    Lemma 3.5. [44] Let $T$ be a nonexpansive self-mapping of a closed convex subset $C$ of a Hilbert space $\mathcal{H}$. Then the mapping $I-T$ is demiclosed; that is, whenever $\{\kappa_n\}$ is a sequence in $C$ which converges weakly to some $\kappa\in C$ and the sequence $\{(I-T)\kappa_n\}$ converges strongly to some $\omega$, it follows that $(I-T)\kappa=\omega$.

    Definition 3.6. Suppose that $D(\Xi)$ and $R(\Xi)$ are the domain and the range of an operator $\Xi$, respectively. An operator $\Xi$ is called:

    1). monotone if

    $\langle\kappa-\omega,\Xi\kappa-\Xi\omega\rangle\ge 0\ \text{ for all }\kappa,\omega\in D(\Xi),$

    2). maximal monotone if it is monotone and its graph

    $G(\Xi)=\{(\kappa,\Xi\kappa):\kappa\in D(\Xi)\}$

    is not properly contained in the graph of any other monotone mapping,

    3). $\beta$-strongly monotone if there exists $\beta>0$ such that

    $\langle\kappa-\omega,\Xi\kappa-\Xi\omega\rangle\ge\beta\|\kappa-\omega\|^{2}\ \text{ for all }\kappa,\omega\in D(\Xi),$

    4). $\alpha$-inverse strongly monotone (for short, $\alpha$-ism) if there exists $\alpha>0$ such that

    $\langle\kappa-\omega,\Xi\kappa-\Xi\omega\rangle\ge\alpha\|\Xi\kappa-\Xi\omega\|^{2}\ \text{ for all }\kappa,\omega\in D(\Xi)$

    (a numerical check of this last inequality for a simple operator is sketched below).
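    The following small numerical check (an illustration, not part of the paper) verifies the $\alpha$-ism inequality of item 4) for the gradient of a convex quadratic, for which the Baillon-Haddad theorem gives $\alpha=1/\|A^{T}A\|$.

```python
# Sanity check of alpha-inverse strong monotonicity for Xi(k) = A^T A k.
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((8, 5))
M = A.T @ A
alpha = 1.0 / np.linalg.norm(M, 2)       # 1/L with L = ||A^T A||
Xi = lambda k: M @ k

ok = True
for _ in range(1000):
    k, w = rng.standard_normal(5), rng.standard_normal(5)
    lhs = (k - w) @ (Xi(k) - Xi(w))
    rhs = alpha * np.linalg.norm(Xi(k) - Xi(w)) ** 2
    ok &= lhs >= rhs - 1e-9
print("alpha-ism inequality verified on random samples:", ok)
```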

    Lemma 3.7. [5] Let $\mathcal{H}$ be a real Hilbert space, $\Xi:\mathcal{H}\to\mathcal{H}$ be an $\alpha$-inverse strongly monotone operator and $\Pi:\mathcal{H}\to 2^{\mathcal{H}}$ be a maximal monotone operator. For each $\tau>0$, we define

    $T_\tau=J_\tau^{\Pi}(I-\tau\Xi)=(I+\tau\Pi)^{-1}(I-\tau\Xi);$

    then, we get

    (i) For $\tau>0$, $F(T_\tau)=(\Xi+\Pi)^{-1}(0)$;

    (ii) For $0<s\le\tau$ and $\kappa\in\mathcal{H}$, $\|\kappa-T_s\kappa\|\le 2\|\kappa-T_\tau\kappa\|$.

    Lemma 3.8. [45] Let $\mathcal{H}$ be a real Hilbert space, $\Xi:\mathcal{H}\to\mathcal{H}$ be an $\alpha$-inverse strongly monotone operator and $\Pi:\mathcal{H}\to 2^{\mathcal{H}}$ be a maximal monotone operator. For each $\tau>0$, we have

    $\|T_\tau\kappa-T_\tau\omega\|^{2}\le\|\kappa-\omega\|^{2}-\tau(2\alpha-\tau)\|\Xi\kappa-\Xi\omega\|^{2},$

    for all $\kappa,\omega\in\mathcal{H}$.

    Based on the inertial CQ and shrinking projection techniques combined with the Halpern algorithm and the viscosity algorithm, respectively, we build new algorithms in this section and discuss their strong convergence in a Hilbert space.

    Theorem 4.1. (Inertial shrinking projection algorithm). Let $\mathcal{H}$ be a real Hilbert space, $\Xi:\mathcal{H}\to\mathcal{H}$ be an $\alpha$-inverse strongly monotone operator, $\Pi:\mathcal{H}\to 2^{\mathcal{H}}$ be a maximal monotone operator such that $\Theta=(\Xi+\Pi)^{-1}(0)\ne\emptyset$, and let $\{\alpha_n\}$ be a bounded sequence. Given two sequences $\{\lambda_n\}$ and $\{\rho_n\}$ in $(0,1)$, a sequence $\{\kappa_n\}$ is generated by

    $\begin{cases}\omega_n=\kappa_n+\alpha_n(\kappa_n-\kappa_{n-1}),\\ \varpi_n=\lambda_n\omega_n+(1-\lambda_n)\Upsilon_n\omega_n,\\ \upsilon_n=\rho_n\kappa_1+(1-\rho_n)\varpi_n,\\ C_{n+1}=\{p\in C_n:\|\upsilon_n-p\|^{2}\le\|\kappa_n-p\|^{2}+\alpha_n^{2}\|\kappa_{n-1}-\kappa_n\|^{2}-2\alpha_n(1-\rho_n)\langle\kappa_n-p,\kappa_{n-1}-\kappa_n\rangle+2\rho_n\langle\kappa_1-p,\upsilon_n-p\rangle\},\\ \kappa_{n+1}=P_{C_{n+1}}(\kappa_1),\end{cases}$ (4.1)

    for each $n\ge 1$, where $C_1=\mathcal{H}$, $\kappa_0,\kappa_1\in\mathcal{H}$ and $\Upsilon_n=(I+\tau_n\Pi)^{-1}(I-\tau_n\Xi)$. If the following hypotheses hold:

    (1) $\sum_{n=0}^{\infty}\rho_n=\infty$ and $\lim_{n\to\infty}\rho_n=0$,

    (2) $0<\liminf_{n\to\infty}\tau_n\le\limsup_{n\to\infty}\tau_n<2\alpha$,

    then the sequence $\{\kappa_n\}$ generated by (4.1) converges strongly to the point $\theta=P_\Theta(\kappa_1)$.

    Proof. We split the proof into the following steps:

    Step (i). Show that $P_{C_{n+1}}\kappa_1$ is well-defined for each $\kappa_1\in\mathcal{H}$ and $n\ge 1$, and that $\Theta\subset C_{n+1}$. It follows from condition (2) and Lemma 3.8 that $T_{\tau_n}=(I+\tau_n\Pi)^{-1}(I-\tau_n\Xi)$ is a nonexpansive mapping. Lemma 3.7 guarantees that $\Theta$ is a closed and convex set, and Lemma 3.3 shows that $C_{n+1}$ is closed and convex for all $n\ge 1$.

    Let $p\in\Theta$; we have

    $\|\omega_n-p\|^{2}=\|(\kappa_n-p)-\alpha_n(\kappa_{n-1}-\kappa_n)\|^{2}=\|\kappa_n-p\|^{2}-2\alpha_n\langle\kappa_n-p,\kappa_{n-1}-\kappa_n\rangle+\alpha_n^{2}\|\kappa_{n-1}-\kappa_n\|^{2}.$ (4.2)

    Furthermore, by Lemma 3.1 (ii) and Lemma 3.8, we can write

    $\begin{aligned}\|\varpi_n-p\|^{2}&=\|\lambda_n\omega_n+(1-\lambda_n)(I+\tau_n\Pi)^{-1}(\omega_n-\tau_n\Xi\omega_n)-p\|^{2}=\|\lambda_n(\omega_n-p)+(1-\lambda_n)(T_{\tau_n}\omega_n-p)\|^{2}\\&=\lambda_n\|\omega_n-p\|^{2}+(1-\lambda_n)\|T_{\tau_n}\omega_n-p\|^{2}-\lambda_n(1-\lambda_n)\|T_{\tau_n}\omega_n-\omega_n\|^{2}\\&\le\lambda_n\|\omega_n-p\|^{2}+(1-\lambda_n)\|T_{\tau_n}\omega_n-T_{\tau_n}p\|^{2}\\&\le\lambda_n\|\omega_n-p\|^{2}+(1-\lambda_n)\left(\|\omega_n-p\|^{2}-\tau_n(2\alpha-\tau_n)\|\Xi\omega_n-\Xi p\|^{2}\right)\\&\le\lambda_n\|\omega_n-p\|^{2}+(1-\lambda_n)\|\omega_n-p\|^{2}=\|\omega_n-p\|^{2}.\end{aligned}$ (4.3)

    Also, by Lemma 3.1 (i), we have

    $\|\upsilon_n-p\|^{2}=\|(1-\rho_n)(\varpi_n-p)+\rho_n(\kappa_1-p)\|^{2}\le(1-\rho_n)\|\varpi_n-p\|^{2}+2\rho_n\langle\kappa_1-p,\upsilon_n-p\rangle.$ (4.4)

    Applying (4.2) and (4.3) in (4.4), and using $(1-\rho_n)<1$, we get

    $\begin{aligned}\|\upsilon_n-p\|^{2}&\le(1-\rho_n)\|\kappa_n-p\|^{2}+\alpha_n^{2}(1-\rho_n)\|\kappa_{n-1}-\kappa_n\|^{2}-2\alpha_n(1-\rho_n)\langle\kappa_n-p,\kappa_{n-1}-\kappa_n\rangle+2\rho_n\langle\kappa_1-p,\upsilon_n-p\rangle\\&\le\|\kappa_n-p\|^{2}+\alpha_n^{2}\|\kappa_{n-1}-\kappa_n\|^{2}-2\alpha_n(1-\rho_n)\langle\kappa_n-p,\kappa_{n-1}-\kappa_n\rangle+2\rho_n\langle\kappa_1-p,\upsilon_n-p\rangle.\end{aligned}$ (4.5)

    It is clear that $\Theta\subset C_1=\mathcal{H}$. Assume that $\Theta\subset C_n$ for some $n\ge 1$. Then $p\in C_n$ and, by (4.5), $p\in C_{n+1}$. Thus $\Theta\subset C_{n+1}$ for all $n\ge 1$; that is, $P_{C_{n+1}}\kappa_1$ is well-defined.

    Step (ii). Prove that $\{\kappa_n\}$ is bounded. Since $\Theta$ is a nonempty, closed and convex subset of $\mathcal{H}$, the projection $P_\Theta\kappa_1$ is well-defined and unique. Since $\kappa_n=P_{C_n}\kappa_1$, $C_{n+1}\subset C_n$ and $\kappa_{n+1}\in C_{n+1}\subset C_n$ for all $n\ge 1$, we can get

    $\|\kappa_n-\kappa_1\|\le\|\kappa_{n+1}-\kappa_1\|,\quad\text{for all }n\ge 1.$ (4.6)

    Further, as $\Theta\subset C_n$, we obtain

    $\|\kappa_n-\kappa_1\|\le\|P_\Theta\kappa_1-\kappa_1\|,\quad\text{for all }n\ge 1.$ (4.7)

    It follows from (4.6) and (4.7) that $\lim_{n\to\infty}\|\kappa_n-\kappa_1\|$ exists. Consequently, $\{\kappa_n\}$ is bounded.

    Step (iii). Show that $\lim_{n\to\infty}\kappa_n=\theta$. By the definition of $C_n$, for $m>n$ we observe that $\kappa_m=P_{C_m}\kappa_1\in C_m\subset C_n$. From Lemma 3.4, we have

    $\|\kappa_m-\kappa_n\|^{2}\le\|\kappa_m-\kappa_1\|^{2}-\|\kappa_n-\kappa_1\|^{2}.$

    Applying Step (ii), we conclude that $\lim_{n,m\to\infty}\|\kappa_m-\kappa_n\|^{2}=0$. Thus $\{\kappa_n\}$ is a Cauchy sequence and hence $\lim_{n\to\infty}\kappa_n=\theta$ for some $\theta\in\mathcal{H}$. In particular, we get

    $\lim_{n\to\infty}\|\kappa_{n+1}-\kappa_n\|=0.$ (4.8)

    Step (iv). Prove that $\theta\in\Theta$. It follows from the boundedness of the sequence $\{\alpha_n\}$ and (4.8) that

    $\|\omega_n-\kappa_n\|=|\alpha_n|\,\|\kappa_n-\kappa_{n-1}\|\to 0\ \text{as}\ n\to\infty.$ (4.9)

    By (4.5) with $p=\kappa_n$, (4.8) and condition (1), we get

    $\begin{aligned}\|\upsilon_n-\kappa_n\|^{2}&\le\|\kappa_n-\kappa_n\|^{2}+\alpha_n^{2}\|\kappa_{n-1}-\kappa_n\|^{2}-2\alpha_n(1-\rho_n)\langle\kappa_n-\kappa_n,\kappa_{n-1}-\kappa_n\rangle+2\rho_n\langle\kappa_1-\kappa_n,\upsilon_n-\kappa_n\rangle\\&=\alpha_n^{2}\|\kappa_{n-1}-\kappa_n\|^{2}+2\rho_n\langle\kappa_1-\kappa_n,\upsilon_n-\kappa_n\rangle\to 0\ \text{as}\ n\to\infty.\end{aligned}$ (4.10)

    Applying (4.8)-(4.10), we can write

    $\|\kappa_{n+1}-\omega_n\|\le\|\kappa_{n+1}-\kappa_n\|+\|\omega_n-\kappa_n\|\to 0\ \text{as}\ n\to\infty,$ (4.11)
    $\|\kappa_{n+1}-\upsilon_n\|\le\|\kappa_{n+1}-\kappa_n\|+\|\upsilon_n-\kappa_n\|\to 0\ \text{as}\ n\to\infty.$ (4.12)

    By (4.3) and (4.11), we observe that

    $\|\varpi_n-\kappa_{n+1}\|\le\|\omega_n-\kappa_{n+1}\|\to 0\ \text{as}\ n\to\infty.$ (4.13)

    The inequalities (4.12) and (4.13) lead to

    $\|\upsilon_n-\omega_n\|\le\|\upsilon_n-\kappa_{n+1}\|+\|\kappa_{n+1}-\omega_n\|\to 0\ \text{as}\ n\to\infty,$ (4.14)

    and

    $\|\upsilon_n-\varpi_n\|\le\|\upsilon_n-\kappa_{n+1}\|+\|\varpi_n-\kappa_{n+1}\|\to 0\ \text{as}\ n\to\infty.$ (4.15)

    Now, we have

    $\begin{aligned}\|T_{\tau_n}\omega_n-\omega_n\|&=\left\|\frac{1}{1-\lambda_n}\big[\varpi_n-\lambda_n\omega_n\big]-\omega_n\right\|=\frac{1}{1-\lambda_n}\|\varpi_n-\lambda_n\omega_n-(1-\lambda_n)\omega_n\|\\&=\frac{1}{1-\lambda_n}\|\varpi_n-\omega_n\|\le\frac{1}{1-\lambda_n}\big[\|\varpi_n-\upsilon_n\|+\|\upsilon_n-\omega_n\|\big].\end{aligned}$

    It follows from (4.14) and (4.15) that

    $\lim_{n\to\infty}\|T_{\tau_n}\omega_n-\omega_n\|=0.$ (4.16)

    Since $\liminf_{n\to\infty}\tau_n>0$, there is $\varepsilon\in(0,2\alpha)$ such that $\tau_n\ge\varepsilon$ for all $n\ge 1$. Then by Lemma 3.7 (ii) and (4.16), we have

    $\|T_\varepsilon\omega_n-\omega_n\|\le 2\|T_{\tau_n}\omega_n-\omega_n\|\to 0\ \text{as}\ n\to\infty.$ (4.17)

    From (4.9), since $\kappa_n\to\theta$, we also have $\omega_n\to\theta$. Since $T_\varepsilon$ is a nonexpansive and continuous mapping, from (4.17) we obtain $\theta\in\Theta$.

    Step (v). Show that $\theta=P_\Theta(\kappa_1)$. Since $\kappa_n=P_{C_n}\kappa_1$ and $\Theta\subset C_n$, we can get

    $\langle\kappa_1-\kappa_n,\kappa_n-p\rangle\ge 0,\quad p\in\Theta.$ (4.18)

    Letting $n\to\infty$ in (4.18), we have

    $\langle\kappa_1-\theta,\theta-p\rangle\ge 0,\quad p\in\Theta.$

    This shows that $\theta=P_\Theta(\kappa_1)$. The proof is finished.

    Theorem 4.2. (Inertial CQ-projection algorithm) (ICQMHA). Assume that all requirements of Theorem 4.1 are met. Then the sequence $\{\kappa_n\}$ generated by

    $\begin{cases}\omega_n=\kappa_n+\alpha_n(\kappa_n-\kappa_{n-1}),\\ \varpi_n=\lambda_n\omega_n+(1-\lambda_n)\Upsilon_n\omega_n,\\ \upsilon_n=\rho_n\kappa_1+(1-\rho_n)\varpi_n,\\ C_n=\{p\in\mathcal{H}:\|\upsilon_n-p\|^{2}\le\|\kappa_n-p\|^{2}+\alpha_n^{2}\|\kappa_{n-1}-\kappa_n\|^{2}-2\alpha_n(1-\rho_n)\langle\kappa_n-p,\kappa_{n-1}-\kappa_n\rangle+2\rho_n\langle\kappa_1-p,\upsilon_n-p\rangle\},\\ Q_n=\{p\in\mathcal{H}:\langle p-\kappa_n,\kappa_1-\kappa_n\rangle\le 0\},\\ \kappa_{n+1}=P_{C_n\cap Q_n}(\kappa_1),\end{cases}\quad n\ge 1,$ (4.19)

    converges strongly to the point $\theta=P_\Theta(\kappa_1)$.

    Proof. The proof is divided into the following steps:

    Step (1). Demonstrate that $\{\kappa_n\}_{n=0}^{\infty}$ is well-defined for each $\kappa_1\in\mathcal{H}$ and that $\Theta\subset Q_n\cap C_n$ for all $n\ge 1$.

    It is clear by Lemma 3.3 that $C_n$ is a closed and convex subset of $\mathcal{H}$. Also, if we rewrite the set $Q_n$ as

    $Q_n=\{p\in\mathcal{H}:\ \langle\kappa_1-\kappa_n,p\rangle\le\langle\kappa_1-\kappa_n,\kappa_n\rangle\},$

    we obtain that $Q_n$ is also a closed and convex subset of $\mathcal{H}$, so $Q_n\cap C_n$ is closed and convex for all $n\ge 1$. Assume that $p\in\Theta$. In the same manner as in Theorem 4.1, we have

    $\|\upsilon_n-p\|^{2}\le\|\kappa_n-p\|^{2}+\alpha_n^{2}\|\kappa_{n-1}-\kappa_n\|^{2}-2\alpha_n(1-\rho_n)\langle\kappa_n-p,\kappa_{n-1}-\kappa_n\rangle+2\rho_n\langle\kappa_1-p,\upsilon_n-p\rangle.$

    Thus $p\in C_n$ for all $n\ge 1$, which implies that $\Theta\subset C_n$ for all $n\ge 1$.

    For $n=1$, we have $Q_1=\mathcal{H}$ and hence $\Theta\subset C_1\cap Q_1$. Suppose that $\Theta\subset C_l\cap Q_l$ for some $l\ge 1$. Since $\kappa_{l+1}=P_{C_l\cap Q_l}(\kappa_1)$, we get

    $\langle b-\kappa_{l+1},\kappa_1-\kappa_{l+1}\rangle\le 0,$

    for each $b\in C_l\cap Q_l$. Since $\Theta\subset C_l\cap Q_l$ and $p\in\Theta$, we have

    $\langle p-\kappa_{l+1},\kappa_1-\kappa_{l+1}\rangle\le 0.$

    This leads to $p\in Q_{l+1}$, and hence $\Theta\subset Q_{l+1}$. Therefore, we get $\Theta\subset C_{l+1}\cap Q_{l+1}$. Based on the above results, $\{\kappa_n\}$ is well defined and $\Theta\subset C_n\cap Q_n$ for all $n\ge 1$.

    Step (2). Clarify that $\{\kappa_n\}$ is bounded. By Algorithm (4.19), we can write

    $\langle\xi-\kappa_n,\kappa_1-\kappa_n\rangle\le 0,\quad\text{for all }\xi\in Q_n,\ n\ge 1.$

    This implies that $\kappa_n=P_{Q_n}(\kappa_1)$. Since $\Theta\subset Q_n$, we get

    $\|\kappa_n-\kappa_1\|\le\|\kappa_1-\xi\|,\quad\text{for all }\xi\in\Theta.$ (4.20)

    Again, since $\kappa_{n+1}\in Q_n$, we have

    $\|\kappa_n-\kappa_1\|\le\|\kappa_{n+1}-\kappa_1\|.$ (4.21)

    It follows from (4.20) and (4.21) that $\lim_{n\to\infty}\|\kappa_n-\kappa_1\|$ exists. Hence $\{\kappa_n\}$ is bounded.

    Step (3). Prove that $\lim_{n\to\infty}\|\kappa_{n+1}-\kappa_n\|=0$. Since $\kappa_{n+1}\in Q_n$ and $\kappa_n=P_{Q_n}(\kappa_1)$, it follows from Lemma 3.4 that

    $\|\kappa_{n+1}-\kappa_n\|^{2}\le\|\kappa_{n+1}-\kappa_1\|^{2}-\|\kappa_n-\kappa_1\|^{2}\to 0\ \text{as}\ n\to\infty.$

    This implies that $\lim_{n\to\infty}\|\kappa_{n+1}-\kappa_n\|=0$.

    Step (4). Show that $\theta\in\Theta$. This follows immediately as in Step (iv) of Theorem 4.1.

    Step (5). Illustrate that $\theta=P_\Theta(\kappa_1)$. By the same arguments as in Step (iv) of Theorem 4.1, we obtain

    $\|T_\varepsilon\omega_n-\omega_n\|\to 0,\quad\|\omega_n-\kappa_n\|\to 0\ \text{as}\ n\to\infty,$ (4.22)

    where $\varepsilon\in(0,2\alpha)$. The nonexpansivity of $T_\varepsilon$ yields

    $\|T_\varepsilon\kappa_n-\kappa_n\|\le\|T_\varepsilon\kappa_n-T_\varepsilon\omega_n\|+\|T_\varepsilon\omega_n-\omega_n\|+\|\omega_n-\kappa_n\|\le 2\|\omega_n-\kappa_n\|+\|T_\varepsilon\omega_n-\omega_n\|.$ (4.23)

    From (4.22) and (4.23), we can obtain

    $\|T_\varepsilon\kappa_n-\kappa_n\|\to 0\ \text{as}\ n\to\infty.$ (4.24)

    Since $\{\kappa_n\}$ is bounded, there is a subsequence $\{\kappa_{n_k}\}$ of $\{\kappa_n\}$ such that $\kappa_{n_k}\rightharpoonup\kappa^{\ast}$. Combining this with (4.24) and using Lemma 3.5, we obtain $\kappa^{\ast}\in F(T_\varepsilon)$, that is, $\kappa^{\ast}\in\Theta$.

    Since $\theta=P_\Theta(\kappa_1)$ and $\kappa^{\ast}\in\Theta$, (4.20) and Lemma 3.2 (ii) imply that

    $\|\kappa_1-\theta\|\le\|\kappa_1-\kappa^{\ast}\|\le\liminf_{k\to\infty}\|\kappa_{n_k}-\kappa_1\|\le\limsup_{k\to\infty}\|\kappa_{n_k}-\kappa_1\|\le\|\kappa_1-\theta\|.$

    Using the uniqueness of the nearest point $\theta$, we now see that $\theta=\kappa^{\ast}$. We also have $\|\kappa_{n_k}-\kappa_1\|\to\|\kappa_1-\theta\|$ and, from Lemma 3.2 (i), $\kappa_{n_k}\to\theta$ as $k\to\infty$. Since the limit does not depend on the choice of the subsequence, we deduce that $\kappa_n\to\theta$ as $n\to\infty$.

    This ends the proof.

    If we replace $\kappa_1$ with $f(\kappa_1)$, where $f:C\to C$ is a contraction mapping, in (4.1) and (4.19), we have the following inertial shrinking CQ-projection viscosity algorithms:

    Theorem 4.3. (Inertial shrinking projection viscosity algorithm) Assume that all requirements of Theorem 4.1 are satisfied. Let $f:C\to C$ be a $\mu$-contraction with $\mu\in[0,1)$, that is, $\|f(\kappa)-f(\omega)\|\le\mu\|\kappa-\omega\|$ for all $\kappa,\omega\in C$. Then the sequence $\{\kappa_n\}$ generated by

    $\begin{cases}\omega_n=\kappa_n+\alpha_n(\kappa_n-\kappa_{n-1}),\\ \varpi_n=\lambda_n\omega_n+(1-\lambda_n)\Upsilon_n\omega_n,\\ \upsilon_n=\rho_n f(\kappa_1)+(1-\rho_n)\varpi_n,\\ C_{n+1}=\{p\in C_n:\|\upsilon_n-p\|^{2}\le\|\kappa_n-p\|^{2}+\alpha_n^{2}\|\kappa_{n-1}-\kappa_n\|^{2}-2\alpha_n(1-\rho_n)\langle\kappa_n-p,\kappa_{n-1}-\kappa_n\rangle+2\rho_n\langle f(\kappa_1)-p,\upsilon_n-p\rangle\},\\ \kappa_{n+1}=P_{C_{n+1}}(f(\kappa_1)),\end{cases}\quad n\ge 1,$ (4.25)

    converges strongly to a point $\theta=P_\Theta(\kappa_1)$.

    Theorem 4.4. (Inertial shrinking CQ-projection viscosity algorithm) (ICQMVA). Suppose that all requirements of Theorem 4.1 are verified. Let $f:C\to C$ be a $\mu$-contraction with $\mu\in[0,1)$. Then the sequence $\{\kappa_n\}$ generated by

    $\begin{cases}\omega_n=\kappa_n+\alpha_n(\kappa_n-\kappa_{n-1}),\\ \varpi_n=\lambda_n\omega_n+(1-\lambda_n)\Upsilon_n\omega_n,\\ \upsilon_n=\rho_n f(\kappa_1)+(1-\rho_n)\varpi_n,\\ C_n=\{p\in\mathcal{H}:\|\upsilon_n-p\|^{2}\le\|\kappa_n-p\|^{2}+\alpha_n^{2}\|\kappa_{n-1}-\kappa_n\|^{2}-2\alpha_n(1-\rho_n)\langle\kappa_n-p,\kappa_{n-1}-\kappa_n\rangle+2\rho_n\langle f(\kappa_1)-p,\upsilon_n-p\rangle\},\\ Q_n=\{p\in\mathcal{H}:\langle p-\kappa_n,f(\kappa_1)-\kappa_n\rangle\le 0\},\\ \kappa_{n+1}=P_{C_n\cap Q_n}(f(\kappa_1)),\end{cases}\quad n\ge 1,$ (4.26)

    converges strongly to a point $\theta=P_\Theta(\kappa_1)$.

    Remark 4.5. If we neglect the CQ-shrinking projection terms, then the proposed algorithms in this manuscript extend and improve the results of [38,39,40], of Kim and Xu [17] (if $\alpha_n=0$ and $\Upsilon_n=I$, the identity mapping, in algorithms (4.1) and (4.19)), of Chen et al. [18] (if $\alpha_n=0$ and $\Upsilon_n=I$ in (4.25) and (4.26)) and of Suzuki [37].
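    To make the structure of the iteration (4.19) concrete, the following Python sketch (not the authors' MATLAB implementation) runs ICQMHA in $\mathbb{R}^d$ with $\Upsilon_n$ abstracted as a generic nonexpansive map; the metric projection onto $C_n\cap Q_n$ is delegated to a general-purpose constrained solver (SciPy's SLSQP), which is only a convenience for this schematic sketch, and the parameter choices mimic the ones used in the numerical section below.

```python
# Schematic sketch of the ICQMHA iteration (4.19) in R^d.
# Upsilon stands for the resolvent composition (I + tau_n*Pi)^{-1}(I - tau_n*Xi) and is
# passed in as a generic nonexpansive map; the projection onto C_n ∩ Q_n is computed
# numerically, purely for illustration.
import numpy as np
from scipy.optimize import minimize

def project_Cn_Qn(kappa1, kappa, kappa_prev, upsilon, alpha, rho, x0):
    """argmin_{p in C_n ∩ Q_n} ||p - kappa1||^2, with C_n, Q_n as in (4.19)."""
    def c_Cn(p):  # constraint written in the form g(p) >= 0
        lhs = np.sum((upsilon - p) ** 2)
        rhs = (np.sum((kappa - p) ** 2)
               + alpha ** 2 * np.sum((kappa_prev - kappa) ** 2)
               - 2 * alpha * (1 - rho) * (kappa - p) @ (kappa_prev - kappa)
               + 2 * rho * (kappa1 - p) @ (upsilon - p))
        return rhs - lhs
    def c_Qn(p):
        return -(p - kappa) @ (kappa1 - kappa)
    res = minimize(lambda p: np.sum((p - kappa1) ** 2), x0, method="SLSQP",
                   constraints=[{"type": "ineq", "fun": c_Cn},
                                {"type": "ineq", "fun": c_Qn}])
    return res.x

def icqmha(Upsilon, kappa1, iters=50):
    kappa_prev, kappa = kappa1.copy(), kappa1.copy()
    for n in range(1, iters + 1):
        alpha_n = 10.0 / (n + 1) ** 2          # bounded inertial parameters
        lam_n = 1.0 / (100 * (n + 1) ** 2)
        rho_n = 1.0 / (n + 1)
        omega = kappa + alpha_n * (kappa - kappa_prev)
        varpi = lam_n * omega + (1 - lam_n) * Upsilon(omega)
        upsilon = rho_n * kappa1 + (1 - rho_n) * varpi
        new = project_Cn_Qn(kappa1, kappa, kappa_prev, upsilon, alpha_n, rho_n, kappa)
        kappa_prev, kappa = kappa, new
    return kappa

# Illustrative run: Upsilon is a rotation by 60 degrees (nonexpansive, Fix = {0}).
R = np.array([[0.5, -np.sqrt(3) / 2], [np.sqrt(3) / 2, 0.5]])
print(icqmha(lambda x: R @ x, np.array([1.0, 1.0])))
```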

    In this section, we compare the strong convergence of our algorithms numerically with the modified inertial Mann-Halpern and viscosity algorithms of [46]. The numerical results show that our methods are faster and more effective than the methods of [46]. The codes used to obtain the numerical results are MATLAB codes run in MATLAB version 9.5 (R2018b) on an Intel(R) Core(TM) i5-6200 CPU @ 2.30GHz 2.40GHz, 8.00 GB RAM PC.

    For simplicity:

    (1) the modified inertial Mann-Halpern and viscosity algorithms of Tan et al. [46] are denoted shortly by MIMHA and MIMVA, respectively;

    (2) our proposed algorithms (4.19) and (4.26) are denoted shortly by ICQMHA and ICQMVA, respectively.

    Example 5.1. Let $C_i\subset\mathbb{R}^n$ be nonempty closed convex sets for each $i=0,1,\dots,m$. We consider the convex feasibility problem of finding

    $\kappa\in C=\bigcap_{i=0}^{m}C_i.$

    Define a map $T:\mathbb{R}^n\to\mathbb{R}^n$ by

    $T=P_{C_0}\Big(\frac{1}{m}\sum_{i=1}^{m}P_{C_i}\Big),$ (5.1)

    where $P_{C_i}$ ($i=0,1,\dots,m$) denotes the metric projection onto $C_i$. Since each $P_{C_i}$ ($i=0,1,\dots,m$) is nonexpansive, the mapping $T$ defined by (5.1) is also nonexpansive. Moreover, we can see that

    $\mathrm{Fix}(T)=\mathrm{Fix}(P_{C_0})\cap\bigcap_{i=1}^{m}\mathrm{Fix}(P_{C_i})=C_0\cap\bigcap_{i=1}^{m}C_i=C.$ (5.2)

    During this experiment, we use $C_i$ ($i=0,1,\dots,m$) as closed balls with centers $c_i\in\mathbb{R}^n$ and radii $r_i>0$. Thus $P_{C_i}$ can be determined as

    $P_{C_i}(\kappa)=\begin{cases}c_i+\dfrac{r_i}{\|c_i-\kappa\|}(\kappa-c_i)&\text{if }\|c_i-\kappa\|>r_i,\\ \kappa&\text{if }\|c_i-\kappa\|\le r_i.\end{cases}$

    Choose $r_i=1$ ($i=0,1,\dots,m$), $c_0=(0,0,\dots,0)$, $c_1=(1,0,\dots,0)$ and $c_2=(-1,0,\dots,0)$. Moreover, $c_i\in\left(-\frac{1}{n},\frac{1}{n}\right)^n$ ($i=3,4,\dots,m$) are randomly chosen. From the choice of $c_1,c_2,r_1,r_2$, it follows that $\mathrm{Fix}(T)=\{0\}$. Moreover, we use $\kappa_0=\kappa_1=(1,1,\dots,1)$, $\alpha_n=\frac{10}{(n+1)^{2}}$, $\eta=4$, $\lambda_n=\frac{1}{100(n+1)^{2}}$, $\rho_n=\frac{1}{n+1}$ and $f(\kappa)=0.1\kappa$. The numerical results are shown in Figures 1 and 2.

    Figure 1.  Example 5.1 for n=30 and m=30.
    Figure 2.  Example 5.1 for n=100 and m=30.
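    For reference, the following sketch builds the operator $T$ of (5.1) with the ball projections above; to keep it short, a plain Mann iteration on $T$ is run instead of the projection algorithms compared in Figures 1 and 2, and the random centers are an assumption consistent with the description of the experiment.

```python
# Operator T of (5.1) for Example 5.1 with closed balls, plus a simple Mann iteration.
import numpy as np

def proj_ball(x, c, r):
    d = np.linalg.norm(x - c)
    return x if d <= r else c + (r / d) * (x - c)

n_dim, m = 30, 30
rng = np.random.default_rng(4)
centers = [np.zeros(n_dim), np.eye(n_dim)[0], -np.eye(n_dim)[0]]       # c_0, c_1, c_2
centers += [rng.uniform(-1.0 / n_dim, 1.0 / n_dim, n_dim) for _ in range(m - 2)]
radii = [1.0] * (m + 1)

def T(x):
    avg = sum(proj_ball(x, c, r) for c, r in zip(centers[1:], radii[1:])) / m
    return proj_ball(avg, centers[0], radii[0])

kappa = np.ones(n_dim)
for k in range(200):
    kappa = 0.5 * kappa + 0.5 * T(kappa)       # Mann step toward Fix(T) = {0}
print(np.linalg.norm(kappa))
```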

    Example 5.2. Let $F:C\to\mathcal{H}$ be an operator. The variational inequality problem is defined in the following way:

    $\text{Find }\kappa^{\ast}\in C\ \text{such that}\ \langle F(\kappa^{\ast}),\omega-\kappa^{\ast}\rangle\ge 0,\quad\forall\omega\in C.$

    Define $T:C\to C$ by

    $T:=P_C(I-\lambda F),$ (5.3)

    where $0<\lambda<\frac{2}{L}$ and $L$ is the Lipschitz constant of the mapping $F$. Let $F:\mathcal{H}=\mathbb{R}^{2}\to\mathbb{R}^{2}$ be defined by

    $F\begin{pmatrix}\kappa_1\\\kappa_2\end{pmatrix}=\begin{pmatrix}2\kappa_1+2\kappa_2+\sin(\kappa_1)\\-2\kappa_1+2\kappa_2+\sin(\kappa_2)\end{pmatrix}.$ (5.4)

    The authors in [47] showed that $F$ is Lipschitz continuous with $L=\sqrt{26}$ and strongly monotone. Therefore the variational inequality associated with (5.4) has a unique solution (see, e.g., [48]), and $(0,0)$ is that solution. We use $\alpha_n=\frac{10}{(n+1)^{2}}$, $\eta=4$, $\lambda_n=\frac{1}{100(n+1)^{2}}$, $\rho_n=\frac{1}{n+1}$ and $f(\kappa)=0.1\kappa$. The numerical results are shown in Figures 3-5.

    Figure 3.  Example 5.2 for κ0=κ1=(1,1)T.
    Figure 4.  Example 5.2 for κ0=κ1=(80,30)T.
    Figure 5.  Example 5.2 for κ0=κ1=(2,5)T.
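    The next sketch evaluates the operator $F$ of (5.4); to keep it self-contained, $C$ is taken to be the whole plane (an assumption), so $T=I-\lambda F$, and a small step-size makes $T$ a contraction whose plain Picard iterates already approach the solution $(0,0)$.

```python
# Operator F of (5.4) and the map T = P_C(I - lambda*F) with C = R^2 (assumption),
# so that P_C is the identity; lambda is chosen small enough that T is a contraction.
import numpy as np

def F(x):
    return np.array([ 2 * x[0] + 2 * x[1] + np.sin(x[0]),
                     -2 * x[0] + 2 * x[1] + np.sin(x[1])])

L_const = np.sqrt(26.0)            # Lipschitz constant reported in [47]
lam = 1.0 / L_const ** 2           # small step-size
T = lambda x: x - lam * F(x)

x = np.array([80.0, 30.0])
for k in range(1500):
    x = T(x)
print(x)   # approaches the unique solution (0, 0)
```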

    Example 5.3. We consider the Fermat-Weber (FW) problem, a well-known model in location theory. In mathematical terms, the Fermat-Weber problem is described as follows:

    $\text{Find }\kappa\in\mathbb{R}^n\ \text{that minimizes}\ f(\kappa):=\sum_{i=1}^{m}\omega_i\|\kappa-a_i\|,$

    in which $a_i\in\mathbb{R}^n$ are anchor points and $\omega_i$ are non-negative weights (see [49] for more details). The above problem can be recast as a fixed point problem as follows: a mapping $T:\mathbb{R}^n\to\mathbb{R}^n$ is defined by

    $T(\kappa):=\dfrac{1}{\sum_{i=1}^{m}\frac{\omega_i}{\|\kappa-a_i\|}}\sum_{i=1}^{m}\dfrac{\omega_i a_i}{\|\kappa-a_i\|},$

    where $\omega_i=1$ for all $i$. Moreover, we consider a three-dimensional example with $n=3$, $m=8$ and the collection of anchor points

    $a_1=(0,0,0)^T,\ a_2=(10,0,0)^T,\ a_3=(0,10,0)^T,\ a_4=(10,10,0)^T,$
    $a_5=(0,0,10)^T,\ a_6=(10,0,10)^T,\ a_7=(0,10,10)^T,\ a_8=(10,10,10)^T.$

    From the above assumptions it follows that the optimal solution of this problem is $\kappa^{\ast}=(5,5,5)^T$. In this example, we use the fixed element $\kappa_1=(1,2,3)^T$ and $\alpha_n=\frac{10}{(n+1)^{2}}$, $\eta=4$, $\lambda_n=\frac{1}{100(n+1)^{2}}$, $\rho_n=\frac{1}{n+1}$ and $f(\kappa)=0.19\kappa$. The numerical results are shown in Figures 6 and 7.

    Figure 6.  Example 5.3 for κ0=κ1=(10,20,30)T.
    Figure 7.  Example 5.3 for κ0=κ1=(1,1,1)T.
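    The Weiszfeld-type operator $T$ above is easy to code; the sketch below iterates it directly (as a simple stand-in for the compared algorithms) and recovers the optimal point $(5,5,5)^T$ for the eight anchors listed.

```python
# Weiszfeld-type operator for the Fermat-Weber data of Example 5.3 (unit weights).
# Note: T is undefined at the anchor points themselves; the iterates below stay away.
import numpy as np

anchors = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [10, 10, 0],
                    [0, 0, 10], [10, 0, 10], [0, 10, 10], [10, 10, 10]], dtype=float)

def T(x):
    w = 1.0 / np.linalg.norm(x - anchors, axis=1)   # omega_i / ||x - a_i||, omega_i = 1
    return (w @ anchors) / w.sum()

x = np.array([1.0, 2.0, 3.0])
for k in range(200):
    x = T(x)
print(x)   # -> approximately (5, 5, 5)
```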

    Example 5.4. Let $C=\{\kappa\in\mathbb{R}^{3}:\|\kappa\|\le 1\}$ be the closed unit ball and let $T:C\to C$ [50] be defined by

    $T\begin{pmatrix}\kappa_1\\\kappa_2\\\kappa_3\end{pmatrix}=\begin{pmatrix}\frac{1}{3}\sin(\kappa_1+\kappa_3)\\ \frac{1}{3}\sin(\kappa_1+\kappa_3)\\ \frac{1}{3}(\kappa_1+\kappa_2)\end{pmatrix}.$ (5.5)

    We use $\alpha_n=\frac{10}{(n+1)^{2}}$, $\eta=4$, $\lambda_n=\frac{1}{100(n+1)^{2}}$, $\rho_n=\frac{1}{n+1}$ and $f(\kappa)=0.1\kappa$. The numerical results are shown in Figures 8 and 9.

    Figure 8.  Example 5.4 for κ0=κ1=(1,0,1)T.
    Figure 9.  Example 5.4 for κ0=κ1=(1,1,1)T.
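    A short sketch of the mapping (5.5) is given below; since $\|T(\kappa)\|\le\sqrt{6}/3<1$, $T$ maps $C$ into itself, and a plain Mann iteration (used here only as a simple stand-in for the compared algorithms) converges to its fixed point.

```python
# Mapping T of (5.5) on the closed unit ball of R^3 and a Mann iteration toward its
# fixed point (the zero vector); the starting point and coefficient 1/2 are assumptions.
import numpy as np

def T(x):
    return np.array([np.sin(x[0] + x[2]) / 3.0,
                     np.sin(x[0] + x[2]) / 3.0,
                     (x[0] + x[1]) / 3.0])

x = np.array([1.0, 0.0, 0.0])          # a starting point in C
for k in range(100):
    x = 0.5 * x + 0.5 * T(x)
print(x)   # converges to the unique fixed point of T, the zero vector
```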

    In the study of iterative algorithms, efficiency and effectiveness are measured by two main factors: the first is reaching the desired point in as few iterations as possible, and the second is the computation time; the shorter the time needed to obtain strong convergence, the better the result. The paper [46] addressed many algorithms and proved, under suitable conditions, that its algorithms accelerate the earlier ones. According to Examples 5.1-5.4, we were able to verify that our algorithms converge faster than the algorithms of [46], and hence faster than all the algorithms included in [46]. The numerical results (figures) also show that our algorithms need a small number of iterations to reach the desired target. This makes our methods successful in speeding up the algorithms presented in [46].

    In this manuscript, strong convergence results for $\alpha$-inverse strongly monotone operators under new algorithms in the framework of Hilbert spaces have been discussed and several algorithms have been developed. The proposed algorithms combine the inertial Mann forward-backward method with the CQ-shrinking projection method and the viscosity algorithm. The main algorithms presented and discussed are the so-called "Inertial CQ-projection algorithm" (ICQMHA) and "Inertial shrinking CQ-projection viscosity algorithm" (ICQMVA). It has been proved theoretically that our algorithms lead to an acceleration of the previous modified inertial Mann-Halpern and viscosity algorithms. Also, some numerical examples have been performed to illustrate the applications and to test the computational performance and effectiveness of the proposed algorithms.

    The authors thank the Basque Government for Grant IT1207-19.

    The authors declare that they have no competing interests concerning the publication of this article.



    [1] H. H. Bauschke, P. L. Combettes, Convex analysis and monotone operator theory in Hilbert spaces, Berlin: Springer, 2011.
    [2] H. H. Bauschke, J. M. Borwein, On projection algorithms for solving convex feasibility problems, SIAM Rev., 38 (1996), 367–426. doi: 10.1137/S0036144593251710
    [3] P. J. Chen, J. G. Huang, X. Q. Zhang, A primal-dual fixed point algorithm for convex separable minimization with applications to image restoration, Inverse Probl., 29 (2013), 025011. doi: 10.1088/0266-5611/29/2/025011
    [4] P. L. Combettes, V. R. Wajs, Signal recovery by proximal forward-backward splitting, Multiscale Model. Simul., 4 (2005), 1168–1200. doi: 10.1137/050626090
    [5] G. López, V. Martín-Márquez, F. H. Wang, H. K. Xu, Forward-Backward splitting methods for accretive operators in Banach spaces, Abstr. Appl. Anal., 2012 (2012), 109236.
    [6] G. B. Passty, Ergodic convergence to a zero of the sum of monotone operators in Hilbert space, J. Math. Anal. Appl., 72 (1979), 383–390. doi: 10.1016/0022-247X(79)90234-8
    [7] D. H. Peaceman, H. H. Rachford, The numerical solution of parabolic and elliptic differential equations, J. Soc. Industr. Appl. Math., 3 (1955), 28–41. doi: 10.1137/0103003
    [8] P. Tseng, A modified forward-backward splitting method for maximal monotone mappings, SIAM J. Control. Optim., 38 (2000), 431–446. doi: 10.1137/S0363012998338806
    [9] N. H. Xiu, C. Y. Wang, L. C. Kong, A note on the gradient projection method with exact stepsize rule, J. Comput. Math., 25 (2007), 221–230.
    [10] H. K. Xu, Averaged mappings and the gradient-projection algorithm, J. Optimiz. Theory Appl., 150 (2011), 360–378. doi: 10.1007/s10957-011-9837-z
    [11] R. E. Bruck, S. Reich, Nonexpansive projections and resolvents of accretive operators in Banach spaces, Houston J. Math., 3 (1977), 459–470.
    [12] N. Parikh, S. Boyd, Proximal algorithms, Found. Trends Optim., 1 (2014), 123–231.
    [13] P. L. Lions, B. Mercier, Splitting algorithms for the sum of two nonlinear operators, SIAM J. Numer. Anal., 16 (1979), 964–979. doi: 10.1137/0716071
    [14] J. Douglas, H. H. Rachford, On the numerical solution of the heat conduction problem in 2 and 3 space variables, T. Am. Math. Soc., 82 (1956), 421–439. doi: 10.1090/S0002-9947-1956-0084194-4
    [15] H. H. Bauschke, P. L. Combettes, A weak-to-strong convergence principle for Fejér monotone methods in Hilbert spaces, Math. Oper. Res., 26 (2001), 248–264. doi: 10.1287/moor.26.2.248.10558
    [16] K. Nakajo, W. Takahashi, Strong convergence theorems for nonexpansive mappings and nonexpansive semi groups, J. Math. Anal. Appl., 279 (2003), 372–379. doi: 10.1016/S0022-247X(02)00458-4
    [17] T. H. Kim, H. K. Xu, Strong convergence of modified Mann iterations, Nonlinear Anal.-Theor., 61 (2005), 51–60. doi: 10.1016/j.na.2004.11.011
    [18] Y. H. Yao, R. D. Chen, J. C. Yao, Strong convergence and certain control conditions for modified Mann iteration, Nonlinear Anal.-Theor., 68 (2008), 1687–1693. doi: 10.1016/j.na.2007.01.009
    [19] F. Alvarez, H. Attouch, An inertial proximal method for monotone operators via discretization of a nonlinear oscillator with damping, Set-Valued Anal., 9 (2001), 3–11. doi: 10.1023/A:1011253113155
    [20] B. T. Polyak, Introduction to optimization, New York: Optimization Software, 1987.
    [21] A. Moudafi, M. Oliny, Convergence of a splitting inertial proximal method for monotone operators, J. Comput. Appl. Math., 155 (2003), 447–454. doi: 10.1016/S0377-0427(02)00906-8
    [22] Q. L. Dong, H. B. Yuan, Y. J. Cho, T. M. Rassias, Modified inertial Mann algorithm and inertial CQ-algorithm for nonexpansive mappings, Optim. Lett., 12 (2018), 87–102. doi: 10.1007/s11590-016-1102-9
    [23] Q. L. Dong, D. Jiang, P. Cholamjiak, Y. Shehu, A strong convergence result involving an inertial forward-backward algorithm for monotone inclusions, J. Fixed Point Theory Appl., 19 (2017), 3097–3118. doi: 10.1007/s11784-017-0472-7
    [24] B. Halpern, Fixed points of nonexpanding maps, Bull. Am. Math. Soc., 73 (1967), 957–961. doi: 10.1090/S0002-9904-1967-11864-0
    [25] B. Tan, S. X. Li, Strong convergence of inertial Mann algorithms for solving hierarchical fixed point problems, J. Nonlinear Var. Anal., 4 (2020), 337–355.
    [26] M. Tian, G. Xu, Inertial modified Tseng's extragradient algorithms for solving monotone variational inequalities and fixed point problems, J. Nonlinear Funct. Anal., 2020 (2020), 35.
    [27] F. U. Ogbuisi, The projection method with inertial extrapolation for solving split equilibrium problems in Hilbert spaces, Appl. Set-Valued Anal. Optim., 3 (2021), 239–255.
    [28] H. A. Hammad, H. ur Rahman, M. De la Sen, Shrinking projection Methods for accelerating relaxed inertial Tseng-type algorithm with applications, Math. Probl. Eng., 2020 (2020), 7487383.
    [29] A. Moudafi, Viscosity approximation methods for fixed-points problems, J. Math. Anal. Appl., 241 (2000), 46–55. doi: 10.1006/jmaa.1999.6615
    [30] X. L. Qin, S. Y. Cho, S. M. Kang, On hybrid projection methods for asymptotically quasi-ψ-nonexpansive mappings, Appl. Math. Comput., 215 (2010), 3874–3883.
    [31] S. Y. Cho, S. M. Kang, Approximation of common solutions of variational inequalities via strict pseudocontractions, Acta Math. Sci., 32 (2012), 1607–1618. doi: 10.1016/S0252-9602(12)60127-1
    [32] H. A. Hammad, H. ur Rahman, Y. U. Gaba, Solving a split feasibility problem by the strong convergence of two projection algorithms in Hilbert spaces, J. Funct. Spaces, 2021 (2021), 5562694.
    [33] H. A. Hammad, H. ur Rehman, M. De la Sen, Advanced algorithms and common solutions to variational inequalities, Symmetry, 12 (2020), 1198. doi: 10.3390/sym12071198
    [34] M. O. Aibinu, J. K. Kim, On the rate of convergence of viscosity implicit iterative algorithms, Nonlinear Funct. Anal. Appl., 25 (2020), 135–152.
    [35] M. O. Aibinu, J. Kim, Convergence analysis of viscosity implicit rules of nonexpansive mappings in Banach spaces, Nonlinear Funct. Anal. Appl., 24 (2019), 691–713.
    [36] Y. Kimura, Shrinking projection methods for a family of maximal monotone operators, Nonlinear Funct. Anal. Appl., 16 (2011), 481–489.
    [37] T. Suzuki, Moudafi's viscosity approximations with meir-keeler contractions, J. Math. Anal. Appl., 325 (2007), 342–352. doi: 10.1016/j.jmaa.2006.01.080
    [38] A. Beck, M. Teboulle, Fast iterative shrinkage-thresholding algorithm for linear inverse problem, SIAM J. Imaging Sci., 2 (2009), 183–202. doi: 10.1137/080716542
    [39] H. Attouch, J. Peypouquet, The rate of convergence of Nesterov's accelerated forward-backward method is actually faster than 1/k2, SIAM J. Optim., 26 (2016), 1824–1834. doi: 10.1137/15M1046095
    [40] W. Cholamjiak, P. Cholamjiak, S. Suantai, An inertial forward–backward splitting method for solving inclusion problems in Hilbert spaces, J. Fixed Point Theory Appl., 20 (2018), 42. doi: 10.1007/s11784-018-0526-5
    [41] W. Takahashi, Nonlinear functional analysis, Yokohama: Yokohama Publishers, 2000.
    [42] R. P. Agarwal, D. O'Regan, D. R. Sahu, Fixed point theory for Lipschitzian-type mappings with applications, New York: Springer, 2009.
    [43] C. Martinez-Yanes, H. K. Xu, Strong convergence of the CQ method for fixed point iteration processes, Nonlinear Anal.-Theor., 64 (2006), 2400–2411. doi: 10.1016/j.na.2005.08.018
    [44] K. Goebel, W. A. Kirk, Topics in metric fixed point theory, Cambridge, UK: Cambridge University Press, 1990.
    [45] T. M. Tuyen, H. A. Hammad, Effect of shrinking projection and CQ-methods on two inertial forward–backward algorithms for solving variational inclusion problems, Rend. Circ. Mat. Palermo, II. Ser, DOI: 10.1007/s12215-020-00581-8.
    [46] B. Tan, Z. Zhou, S. X. Li, Strong convergence of modified inertial Mann algorithms for nonexpansive mappings, Mathematics, 8 (2020), 462. doi: 10.3390/math8040462
    [47] Q. L. Dong, Y. J. Cho, L. L. Zhong, T. M. Rassias, Inertial projection and contraction algorithms for variational inequalities, J. Glob. Optim., 70 (2018), 687–704. doi: 10.1007/s10898-017-0506-0
    [48] H. Y. Zhou, Y. Zhou, G. H. Feng, Iterative methods for solving a class of monotone variational inequality problems with applications, J. Inequal. Appl., 2015 (2015), 68. doi: 10.1186/s13660-015-0590-y
    [49] A. Beck, S. Sabach, Weiszfeld's method: Old and new results, J. Optim. Theory Appl., 164 (2015), 1–40. doi: 10.1007/s10957-014-0586-7
    [50] Q. L. Dong, H. B. Yuan, Accelerated Mann and CQ algorithms for finding a fixed point of a nonexpansive mapping, Fixed Point Theory Appl., 2015 (2015), 125. doi: 10.1186/s13663-015-0374-6
  • © 2021 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)