Research article

Modified Tseng's splitting algorithms for the sum of two monotone operators in Banach spaces

  • Received: 28 December 2020 Accepted: 25 February 2021 Published: 01 March 2021
  • MSC : 47H09, 47H10, 47J25

  • In this work, we introduce two modified Tseng's splitting algorithms with a new non-monotone adaptive step size for solving monotone inclusion problem in the framework of Banach spaces. Under some mild assumptions, we establish the weak and strong convergence results of the proposed algorithms. Moreover, we also apply our results to variational inequality problem, convex minimization problem and signal recovery, and provide several numerical experiments including comparisons with other related algorithms.

    Citation: Jun Yang, Prasit Cholamjiak, Pongsakorn Sunthrayuth. Modified Tseng's splitting algorithms for the sum of two monotone operators in Banach spaces[J]. AIMS Mathematics, 2021, 6(5): 4873-4900. doi: 10.3934/math.2021286




Let $E$ be a real Banach space with dual space $E^*$. In this paper, we study the so-called monotone inclusion problem:

find $z \in E$ such that $0 \in (A+B)z$, (1.1)

where $A : E \to E^*$ is a single-valued mapping and $B : E \to 2^{E^*}$ is a multi-valued mapping. The set of solutions of the problem (1.1) is denoted by $(A+B)^{-1}0 := \{x \in E : 0 \in (A+B)x\}$. This problem draws much attention since it stands at the core of many mathematical problems, such as variational inequalities, split feasibility problems and minimization problems, with applications in machine learning, statistical regression, image processing and signal recovery (see [17,33,44]).

A classical method for solving the problem (1.1) in a Hilbert space $H$ is the forward-backward splitting algorithm (FBSA) [15,29], which generates an iterative sequence $\{x_n\}$ as follows:

$x_1 \in H$, $x_{n+1} = J_\lambda^B(I - \lambda A)x_n$, $n \ge 1$, (1.2)

where $J_\lambda^B := (I + \lambda B)^{-1}$ is the resolvent operator of the operator $B$ and $I$ denotes the identity operator on $H$. It was proved that the sequence generated by (1.2) converges weakly to an element of $(A+B)^{-1}0$ under the assumption that $A$ is $\alpha$-cocoercive, that is,

$\langle Ax - Ay, x - y \rangle \ge \alpha\|Ax - Ay\|^2$, $\forall x, y \in H$,

and that $\lambda$ is chosen in $(0, 2\alpha)$. In fact, FBSA includes, as special cases, the proximal point algorithm (when $A = 0$) [11,20,34] and the gradient method [18].
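As a concrete illustration, the iteration (1.2) can be sketched in the Hilbert space $H = \mathbb{R}^n$. The instance below is ours, not from the paper: we take $A = \nabla f$ with $f(x) = \frac{1}{2}\|x - b\|^2$ (so $A$ is $1$-cocoercive) and $B = \partial(\mathrm{reg}\cdot\|\cdot\|_1)$, whose resolvent is the well-known soft-thresholding map.

```python
import numpy as np

def soft_threshold(v, t):
    # Resolvent (proximal map) of B = ∂(t * ||.||_1): (I + tB)^{-1} v.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fbsa(b, reg, lam, n_iter=200):
    # Forward-backward splitting (1.2) for 0 ∈ Ax + Bx with
    # A x = x - b (the gradient of 0.5*||x - b||^2, which is 1-cocoercive)
    # and B = ∂(reg * ||.||_1); lam must lie in (0, 2α) = (0, 2).
    x = np.zeros_like(b)
    for _ in range(n_iter):
        x = soft_threshold(x - lam * (x - b), lam * reg)
    return x

b = np.array([3.0, 0.5, -2.0])
x = fbsa(b, reg=1.0, lam=1.0)
# With lam = 1 the iteration reaches its fixed point soft_threshold(b, 1),
# i.e. the minimizer of 0.5*||x - b||^2 + ||x||_1.
```

Here `fbsa` and `soft_threshold` are our illustrative names; with $\lambda = 1$ the fixed point is exactly the proximal map of $\|\cdot\|_1$ applied to $b$.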

In order to obtain a strong convergence result, Takahashi et al. [41] introduced the following algorithm:

$x_1, u \in H$, $x_{n+1} = \alpha_n u + (1 - \alpha_n)J_{\lambda_n}^B(x_n - \lambda_n Ax_n)$, $n \ge 1$, (1.3)

where $A$ is an $\alpha$-cocoercive mapping on $H$. It was shown that if $\{\lambda_n\} \subset (0, \infty)$ and $\{\alpha_n\} \subset (0, 1)$ satisfy the assumptions

$0 < a \le \lambda_n \le b < 2\alpha$, $\sum_{n=1}^\infty |\lambda_{n+1} - \lambda_n| < \infty$,
$\lim_{n\to\infty} \alpha_n = 0$, $\sum_{n=1}^\infty \alpha_n = \infty$ and $\sum_{n=1}^\infty |\alpha_{n+1} - \alpha_n| < \infty$,

then the sequence $\{x_n\}$ defined by (1.3) converges strongly to an element of $(A+B)^{-1}0$.

In 2016, Cholamjiak [12] introduced the following FBSA in a uniformly convex and $q$-uniformly smooth Banach space $E$:

$x_1, u \in E$, $x_{n+1} = \alpha_n u + \beta_n x_n + \gamma_n J_{\lambda_n}^B(x_n - \lambda_n Ax_n)$, $n \ge 1$, (1.4)

where $J_{\lambda_n}^B := (I + \lambda_n B)^{-1}$ is the resolvent operator of an $m$-accretive operator $B$ and $A$ is an $\alpha$-cocoercive mapping. He proved that the sequence generated by (1.4) converges strongly to a solution of the problem (1.1) under the following assumptions:

$\{\alpha_n\}, \{\beta_n\}, \{\gamma_n\} \subset (0,1)$ with $\alpha_n + \beta_n + \gamma_n = 1$,
$\lim_{n\to\infty} \alpha_n = 0$, $\sum_{n=1}^\infty \alpha_n = \infty$ and $\liminf_{n\to\infty} \gamma_n > 0$,
$0 < \liminf_{n\to\infty} \lambda_n \le \limsup_{n\to\infty} \lambda_n < \left(\frac{\alpha q}{\kappa_q}\right)^{\frac{1}{q-1}}$,

where $\kappa_q$ is the $q$-uniform smoothness coefficient of $E$.

In recent years, the FBSA for solving the monotone inclusion problem (1.1) with $A$ $\alpha$-cocoercive was studied and modified by many authors in various settings (see, e.g., [1,9,10,13,26,27,32,37,38,46]). It is important to remark that the $\alpha$-cocoercivity of the operator $A$ is a strong assumption. To relax this assumption, Tseng [45] introduced the following so-called Tseng's splitting method:

$x_1 \in H$, $y_n = J_{\lambda_n}^B(x_n - \lambda_n Ax_n)$, $x_{n+1} = y_n - \lambda_n(Ay_n - Ax_n)$, $n \ge 1$, (1.5)

where $A$ is monotone and $L$-Lipschitz continuous with $L > 0$. It was proved that the sequence $\{x_n\}$ generated by (1.5) converges weakly to an element of $(A+B)^{-1}0$ provided the step size $\lambda_n$ is chosen in $(0, 1/L)$. It is worth noting that Tseng's splitting method requires knowledge of the Lipschitz constant of the mapping; unfortunately, Lipschitz constants are often unknown or difficult to approximate.
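To see what the correction step in (1.5) buys, the following illustrative instance (ours, not from the paper) uses a rotation operator on $\mathbb{R}^2$, which is monotone and $1$-Lipschitz but not cocoercive; with $B = 0$ the resolvent is the identity, and Tseng's two steps still drive the iterates to the unique zero.

```python
import numpy as np

# Tseng's splitting (1.5) with B = 0 (so J_lam^B = I) and the monotone,
# 1-Lipschitz but NOT cocoercive operator A x = M x, M skew-symmetric.
M = np.array([[0.0, 1.0], [-1.0, 0.0]])   # <x, Mx> = 0: monotone, L = 1
A = lambda v: M @ v

lam = 0.5                      # fixed step size in (0, 1/L)
x = np.array([1.0, 1.0])
for _ in range(100):
    y = x - lam * A(x)         # forward step (resolvent of B = 0 is I)
    x = y - lam * (A(y) - A(x))  # Tseng's correction step

# The iterates are driven to the unique zero of A + B, the origin.
```

For this skew-symmetric $A$, a plain forward step $x - \lambda Ax$ never decreases the norm, while the corrected iterate contracts by a factor $\sqrt{(1-\lambda^2)^2 + \lambda^2} < 1$.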

Very recently, Shehu [37] extended Tseng's result to Banach spaces. He proposed the following iterative process for approximating a solution of the problem (1.1) in a 2-uniformly convex Banach space $E$ which is also uniformly smooth:

$x_1 \in E$, $y_n = J_{\lambda_n}^B J^{-1}(Jx_n - \lambda_n Ax_n)$, $x_{n+1} = J^{-1}(Jy_n - \lambda_n(Ay_n - Ax_n))$, $n \ge 1$, (1.6)

where $A : E \to E^*$ is monotone and $L$-Lipschitz continuous, $J_{\lambda_n}^B := (J + \lambda_n B)^{-1}J$ is the resolvent of $B$ and $J$ is the duality mapping from $E$ into $E^*$. He obtained a weak convergence theorem for the problem (1.1) provided the step size $\lambda_n$ is chosen in an interval determined by $L$, the 2-uniform convexity constant $\mu$ of $E$ and the 2-uniform smoothness constant $\kappa$ of $E^*$. At the same time, he also proposed a variant of (1.6) with a linesearch for solving the problem (1.1). It is known that any algorithm with a linesearch needs an inner loop with some stopping criterion over the iterations.

In this paper, motivated by Shehu [37], we propose two modifications of Tseng's splitting method with non-monotone adaptive step sizes for solving the problem (1.1) in the framework of Banach spaces. The step size of our methods does not require prior knowledge of the Lipschitz constant of the operator, and no linesearch procedure is needed. The remainder of this paper is organized as follows: we recall some definitions and lemmas in Section 2; our methods are presented and analyzed in Section 3; theoretical applications to the variational inequality problem and the convex minimization problem are considered in Section 4; finally, in Section 5, we provide some numerical experiments to illustrate the behaviour of our methods.

Let $\mathbb{R}$ and $\mathbb{N}$ be the set of real numbers and the set of positive integers, respectively. Let $E$ be a real Banach space with dual space $E^*$. We denote by $\langle x, f \rangle$ the value of a functional $f \in E^*$ at $x \in E$, that is, $\langle x, f \rangle = f(x)$. For a sequence $\{x_n\}$ in $E$, the strong convergence and the weak convergence of $\{x_n\}$ to $x \in E$ are denoted by $x_n \to x$ and $x_n \rightharpoonup x$, respectively. Let $S_E = \{x \in E : \|x\| = 1\}$. The space $E$ is said to be smooth if the limit

$\lim_{t \to 0} \frac{\|x + ty\| - \|x\|}{t}$ (2.1)

exists for all $x, y \in S_E$. The space $E$ is said to be uniformly smooth if the limit (2.1) converges uniformly in $x, y \in S_E$. It is said to be strictly convex if $\|(x+y)/2\| < 1$ whenever $x, y \in S_E$ and $x \neq y$. The space $E$ is said to be uniformly convex if and only if $\delta_E(\epsilon) > 0$ for all $\epsilon \in (0, 2]$, where $\delta_E$ is the modulus of convexity of $E$ defined by

$\delta_E(\epsilon) = \inf\left\{1 - \frac{\|x+y\|}{2} : x, y \in S_E, \ \|x - y\| \ge \epsilon\right\}$

for all $\epsilon \in [0, 2]$. Let $p \ge 2$. The space $E$ is said to be $p$-uniformly convex if there is a $c > 0$ such that $\delta_E(\epsilon) \ge c\epsilon^p$ for all $\epsilon \in (0, 2]$. Let $1 < q \le 2$. The space $E$ is said to be $q$-uniformly smooth if there exists a $\kappa > 0$ such that $\rho_E(t) \le \kappa t^q$ for all $t > 0$, where $\rho_E$ is the modulus of smoothness of $E$ defined by

$\rho_E(t) = \sup\left\{\frac{\|x + ty\| + \|x - ty\|}{2} - 1 : x, y \in S_E\right\}$

for all $t \ge 0$. Let $1 < q \le 2 \le p < \infty$ with $\frac{1}{p} + \frac{1}{q} = 1$. Every $p$-uniformly convex ($q$-uniformly smooth) space is a uniformly convex (uniformly smooth) space. It is known that $E$ is $p$-uniformly convex ($q$-uniformly smooth) if and only if its dual $E^*$ is $q$-uniformly smooth ($p$-uniformly convex) (see [2]). If $E$ is uniformly convex then $E$ is reflexive and strictly convex, and if $E$ is uniformly smooth then $E$ is reflexive and smooth (see [14]). Moreover, for every $p > 1$, the spaces $L_p$ and $\ell_p$ are $\min\{p,2\}$-uniformly smooth and $\max\{p,2\}$-uniformly convex, while every Hilbert space is 2-uniformly smooth and 2-uniformly convex (see [4,23,47] for more details).

Definition 2.1. The normalized duality mapping $J : E \to 2^{E^*}$ is defined by

$Jx = \{f \in E^* : \langle x, f \rangle = \|x\|^2 = \|f\|^2\}$, $x \in E$,

where $\langle \cdot, \cdot \rangle$ denotes the duality pairing between $E$ and $E^*$.

If $E$ is a Hilbert space, then $J = I$ is the identity mapping on $E$. It is known that $E$ is smooth if and only if $J$ is single-valued from $E$ into $E^*$, and if $E$ is reflexive, smooth and strictly convex, then $J^{-1}$ is single-valued, one-to-one, surjective and coincides with the duality mapping from $E^*$ into $E$. Moreover, if $E$ is uniformly smooth, then $J$ is norm-to-norm uniformly continuous on bounded subsets of $E$ (see [2,14] for more details). A duality mapping $J$ from a smooth Banach space $E$ into $E^*$ is said to be weakly sequentially continuous if for any sequence $\{x_n\} \subset E$, $x_n \rightharpoonup x$ implies $Jx_n \rightharpoonup^* Jx$.

Lemma 2.2. [39] Let $E$ be a smooth Banach space and $J$ be the duality mapping on $E$. Then $\langle x - y, Jx - Jy \rangle \ge 0$ for all $x, y \in E$. Further, if $E$ is strictly convex and $\langle x - y, Jx - Jy \rangle = 0$, then $x = y$.

Definition 2.3. A mapping $A : E \to E^*$ is said to be:

$\alpha$-cocoercive if there exists a constant $\alpha > 0$ such that $\langle x - y, Ax - Ay \rangle \ge \alpha\|Ax - Ay\|^2$ for all $x, y \in E$;

monotone if $\langle x - y, Ax - Ay \rangle \ge 0$ for all $x, y \in E$;

$L$-Lipschitz continuous if there exists a constant $L > 0$ such that $\|Ax - Ay\| \le L\|x - y\|$ for all $x, y \in E$;

hemicontinuous if for each $x, y \in E$, the mapping $f : [0,1] \to E^*$ defined by $f(t) = A(tx + (1-t)y)$ is continuous with respect to the weak$^*$ topology of $E^*$.

Remark 2.4. It is easy to see that if $A$ is cocoercive, then $A$ is monotone and Lipschitz continuous, but the converse is not true in general.

    The next lemma can be found in [49] (see also [47]).

Lemma 2.5. (i) Let $E$ be a 2-uniformly smooth Banach space. Then there exists a constant $\kappa > 0$ such that

$\|x - y\|^2 \le \|x\|^2 - 2\langle y, Jx \rangle + \kappa\|y\|^2$, $\forall x, y \in E$.

(ii) Let $E$ be a 2-uniformly convex Banach space. Then there exists a constant $c > 0$ such that

$\|x - y\|^2 \ge \|x\|^2 - 2\langle y, Jx \rangle + c\|y\|^2$, $\forall x, y \in E$.

Remark 2.6. It is well known that $\kappa = c = 1$ whenever $E$ is a Hilbert space. In that case both inequalities reduce to the well-known polarization identity:

$\|x - y\|^2 = \|x\|^2 - 2\langle x, y \rangle + \|y\|^2$.

Moreover, we refer to [49] for the exact values of the constants $\kappa$ and $c$.

Next, we recall the following Lyapunov function, which was introduced in [3]:

Definition 2.7. Let $E$ be a smooth Banach space. The Lyapunov functional $\phi : E \times E \to \mathbb{R}$ is defined by

$\phi(x, y) = \|x\|^2 - 2\langle x, Jy \rangle + \|y\|^2$, $\forall x, y \in E$. (2.2)

If $E$ is a Hilbert space, then $\phi(x, y) = \|x - y\|^2$ for all $x, y \in E$. In addition, the Lyapunov function $\phi$ has the following properties:

$(\|x\| - \|y\|)^2 \le \phi(x, y) \le (\|x\| + \|y\|)^2$, $\forall x, y \in E$; (2.3)
$\phi(x, J^{-1}(\alpha Jy + (1 - \alpha)Jz)) \le \alpha\phi(x, y) + (1 - \alpha)\phi(x, z)$, $\forall x, y, z \in E$, $\alpha \in [0, 1]$; (2.4)
$\phi(x, y) = \phi(x, z) - \phi(y, z) + 2\langle y - x, Jy - Jz \rangle$, $\forall x, y, z \in E$. (2.5)

Lemma 2.8. [6] Let $E$ be a 2-uniformly convex Banach space. Then there exists a constant $c > 0$ such that

$c\|x - y\|^2 \le \phi(x, y)$, $\forall x, y \in E$,

where $c$ is the constant in Lemma 2.5 (ii).

We make use of the following functional $V : E \times E^* \to \mathbb{R}$ studied in [3]:

$V(x, x^*) = \|x\|^2 - 2\langle x, x^* \rangle + \|x^*\|^2$, $x \in E$, $x^* \in E^*$. (2.6)

Obviously, $V(x, x^*) = \phi(x, J^{-1}x^*)$ for all $x \in E$ and $x^* \in E^*$.

Lemma 2.9. [3] Let $E$ be a reflexive, strictly convex and smooth Banach space. Then the following statement holds:

$V(x, x^*) + 2\langle J^{-1}x^* - x, y^* \rangle \le V(x, x^* + y^*)$, $\forall x \in E$, $x^*, y^* \in E^*$.

Let $E$ be a reflexive, strictly convex and smooth Banach space and let $C$ be a closed and convex subset of $E$. Then for any $x \in E$, there exists a unique element $z \in C$ such that

$\phi(z, x) = \min_{y \in C} \phi(y, x)$.

The mapping $\Pi_C : E \to C$ defined by $z = \Pi_C(x)$ is called the generalized projection of $E$ onto $C$. If $E$ is a Hilbert space, then $\Pi_C$ coincides with the metric projection, denoted by $P_C$.

Lemma 2.10. [3] Let $E$ be a reflexive, strictly convex and smooth Banach space and $C$ be a closed and convex subset of $E$. Let $x \in E$ and $z \in C$. Then the following statements hold:

(i) $z = \Pi_C(x)$ if and only if $\langle y - z, Jx - Jz \rangle \le 0$ for all $y \in C$.

(ii) $\phi(y, \Pi_C(x)) + \phi(\Pi_C(x), x) \le \phi(y, x)$ for all $y \in C$.

Lemma 2.11. [25] Let $C$ be a closed and convex subset of a smooth and uniformly convex Banach space $E$. Let $\{x_n\}$ be a sequence in $E$ such that $\phi(p, x_{n+1}) \le \phi(p, x_n)$ for all $p \in C$ and $n \ge 1$. Then the sequence $\{\Pi_C(x_n)\}$ converges strongly to some element of $C$.

Let $B : E \to 2^{E^*}$ be a multi-valued mapping. The effective domain of $B$ is denoted by $D(B) = \{x \in E : Bx \neq \emptyset\}$ and the range of $B$ is denoted by $R(B) = \cup\{Bx : x \in D(B)\}$. The set of zeros of $B$ is denoted by $B^{-1}0 = \{x \in D(B) : 0 \in Bx\}$. A multi-valued mapping $B$ is said to be monotone if

$\langle x - y, u - v \rangle \ge 0$, $\forall x, y \in D(B)$, $u \in Bx$ and $v \in By$.

A monotone operator $B$ on $E$ is said to be maximal if its graph $G(B) = \{(x, y) \in E \times E^* : x \in D(B), y \in Bx\}$ is not properly contained in the graph of any other monotone operator on $E$. In other words, the maximality of $B$ is equivalent to $R(J + \lambda B) = E^*$ for all $\lambda > 0$ (see [5, Theorem 1.2]). It is known that if $B$ is maximal monotone, then $B^{-1}0$ is closed and convex (see [39]).

For a maximal monotone operator $B$, we define the resolvent of $B$ by $J_\lambda^B(x) = (J + \lambda B)^{-1}Jx$ for $x \in E$ and $\lambda > 0$. It is also known that $B^{-1}0 = F(J_\lambda^B)$, the fixed point set of $J_\lambda^B$.

Lemma 2.12. [5] Let $E$ be a reflexive Banach space. Let $A : E \to E^*$ be a monotone, hemicontinuous and bounded operator and $B : E \to 2^{E^*}$ be a maximal monotone operator. Then $A + B$ is maximal monotone.

Lemma 2.13. [48] Assume that $\{a_n\}$ is a sequence of nonnegative real numbers such that

$a_{n+1} \le (1 - \gamma_n)a_n + \gamma_n\delta_n$, $n \ge 1$,

where $\{\gamma_n\}$ is a sequence in $(0, 1)$ and $\{\delta_n\}$ is a sequence of real numbers such that

(i) $\sum_{n=1}^\infty \gamma_n = \infty$;

(ii) $\limsup_{n\to\infty} \delta_n \le 0$ or $\sum_{n=1}^\infty |\gamma_n\delta_n| < \infty$.

Then $\lim_{n\to\infty} a_n = 0$.

Lemma 2.14. [30] Let $\{\Gamma_n\}$ be a sequence of real numbers that does not decrease at infinity, in the sense that there exists a subsequence $\{\Gamma_{n_i}\}$ of $\{\Gamma_n\}$ which satisfies $\Gamma_{n_i} < \Gamma_{n_i+1}$ for all $i \in \mathbb{N}$. Define the sequence $\{\sigma(n)\}$ of integers by

$\sigma(n) = \max\{k \le n : \Gamma_k < \Gamma_{k+1}\}$

for all $n \ge n_0$ (for some $n_0$ large enough). Then $\{\sigma(n)\}_{n \ge n_0}$ is a non-decreasing sequence such that $\lim_{n\to\infty} \sigma(n) = \infty$, and it holds that

$\Gamma_{\sigma(n)} \le \Gamma_{\sigma(n)+1}$ and $\Gamma_n \le \Gamma_{\sigma(n)+1}$.

Lemma 2.15. [42] Assume that $\{\lambda_n\}$ and $\{\theta_n\}$ are two nonnegative real sequences such that

$\lambda_{n+1} \le \lambda_n + \theta_n$, $n \ge 1$.

If $\sum_{n=1}^\infty \theta_n < \infty$, then $\lim_{n\to\infty} \lambda_n$ exists.

In this section, we introduce two modified Tseng's splitting algorithms for solving the monotone inclusion problem in Banach spaces. In order to prove the convergence results of these algorithms, we make the following assumptions:

Assumption 3.1. (A1) The Banach space $E$ is real, 2-uniformly convex and uniformly smooth.

(A2) The mapping $A : E \to E^*$ is monotone and $L$-Lipschitz continuous, and $B : E \to 2^{E^*}$ is maximal monotone.

(A3) The solution set of the problem (1.1) is nonempty, that is, $(A+B)^{-1}0 \neq \emptyset$.

Algorithm 1: Tseng type splitting algorithm for the monotone inclusion problem
Step 0. Given $\lambda_1 > 0$ and $\mu \in (0, \sqrt{c/\kappa})$. Choose a nonnegative real sequence $\{\theta_n\}$ such that $\sum_{n=1}^\infty \theta_n < \infty$. Let $x_1 \in E$ be arbitrary. Set $n = 1$.
Step 1. Compute
$y_n = J_{\lambda_n}^B J^{-1}(Jx_n - \lambda_n Ax_n)$. (3.1)
If $x_n = y_n$, then stop and $x_n$ is a solution of the problem (1.1). Otherwise, go to Step 2.
Step 2. Compute
$x_{n+1} = J^{-1}(Jy_n - \lambda_n(Ay_n - Ax_n))$, (3.2)
where the step sizes are adaptively updated as follows:
$\lambda_{n+1} = \min\left\{\dfrac{\mu\|x_n - y_n\|}{\|Ax_n - Ay_n\|}, \ \lambda_n + \theta_n\right\}$ if $Ax_n - Ay_n \neq 0$, and $\lambda_{n+1} = \lambda_n + \theta_n$ otherwise. (3.3)
Set $n := n + 1$ and go to Step 1.
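In the Hilbert-space case ($J = I$ and $c = \kappa = 1$), Algorithm 1 can be sketched as follows. The helper names and the toy instance (a skew rotation with $B = 0$) are ours, not the paper's; only the three update formulas (3.1)-(3.3) come from the algorithm above.

```python
import numpy as np

def tseng_adaptive(A, JB, x, lam=1.0, mu=0.9, n_iter=500, tol=1e-10):
    # Algorithm 1 in the Hilbert-space case (J = I, c = kappa = 1):
    #   y_n     = J_{lam_n}^B (x_n - lam_n * A x_n)          -- (3.1)
    #   x_{n+1} = y_n - lam_n * (A y_n - A x_n)              -- (3.2)
    # with the non-monotone step-size rule (3.3), theta_n = 100 / n^1.1.
    for n in range(1, n_iter + 1):
        Ax = A(x)
        y = JB(x - lam * Ax, lam)
        if np.linalg.norm(x - y) < tol:
            return y                       # x_n = y_n: y solves the inclusion
        Ay = A(y)
        denom = np.linalg.norm(Ax - Ay)
        new_lam = lam + 100.0 / n**1.1     # theta_n lets lam increase again
        if denom > 0:                      # rule (3.3)
            new_lam = min(mu * np.linalg.norm(x - y) / denom, new_lam)
        x = y - lam * (Ay - Ax)
        lam = new_lam
    return x

# Toy instance: A x = M x with M skew-symmetric, so A is monotone and
# 1-Lipschitz but not cocoercive; B = 0, so its resolvent is the identity.
M = np.array([[0.0, 1.0], [-1.0, 0.0]])
sol = tseng_adaptive(lambda v: M @ v, lambda v, lam: v, np.array([1.0, 1.0]))
```

Note that the rule (3.3) needs no knowledge of $L$: on this instance the quotient $\mu\|x_n - y_n\|/\|Ax_n - Ay_n\|$ settles the step size at $\mu/L = 0.9$ automatically.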


Lemma 3.2. Assume that Assumption 3.1 holds. Let $\{x_n\}$, $\{y_n\}$ and $\{\lambda_n\}$ be the sequences generated by Algorithm 1. Then the following statements hold:

(i) If $x_n = y_n$ for some $n \in \mathbb{N}$, then $x_n \in (A+B)^{-1}0$.

(ii) $\lim_{n\to\infty} \lambda_n = \lambda \in [\min\{\mu/L, \lambda_1\}, \lambda_1 + \theta]$, where $\theta = \sum_{n=1}^\infty \theta_n$. Moreover,

$\|Ax_n - Ay_n\| \le \frac{\mu}{\lambda_{n+1}}\|x_n - y_n\|$, $\forall n \ge 1$.

Proof. (i) If $x_n = y_n$, then $x_n = J_{\lambda_n}^B J^{-1}(Jx_n - \lambda_n Ax_n)$. It follows that $x_n = (J + \lambda_n B)^{-1}(Jx_n - \lambda_n Ax_n)$, that is, $Jx_n - \lambda_n Ax_n \in Jx_n + \lambda_n Bx_n$, which implies that $0 \in (A+B)x_n$. Hence $x_n \in (A+B)^{-1}0$.

(ii) In the case $Ax_n - Ay_n \neq 0$, using the Lipschitz continuity of $A$, we have

$\frac{\mu\|x_n - y_n\|}{\|Ax_n - Ay_n\|} \ge \frac{\mu\|x_n - y_n\|}{L\|x_n - y_n\|} = \frac{\mu}{L}$.

From (3.3) and mathematical induction, the sequence $\{\lambda_n\}$ has upper bound $\lambda_1 + \theta$ and lower bound $\min\{\mu/L, \lambda_1\}$. From Lemma 2.15, $\lim_{n\to\infty} \lambda_n$ exists, and we denote $\lambda = \lim_{n\to\infty} \lambda_n$. It is obvious that $\lambda \in [\min\{\mu/L, \lambda_1\}, \lambda_1 + \theta]$. By the definition of $\lambda_{n+1}$, whenever $Ax_n - Ay_n \neq 0$ we have

$\lambda_{n+1} = \min\left\{\frac{\mu\|x_n - y_n\|}{\|Ax_n - Ay_n\|}, \lambda_n + \theta_n\right\} \le \frac{\mu\|x_n - y_n\|}{\|Ax_n - Ay_n\|}$.

This implies that

$\|Ax_n - Ay_n\| \le \frac{\mu}{\lambda_{n+1}}\|x_n - y_n\|$, $\forall n \ge 1$. (3.4)

Lemma 3.3. Assume that Assumption 3.1 holds. Let $\{x_n\}$ be the sequence generated by Algorithm 1. Then

$\phi(z, x_{n+1}) \le \phi(z, x_n) - \left(1 - \frac{\kappa\mu^2}{c}\frac{\lambda_n^2}{\lambda_{n+1}^2}\right)\phi(y_n, x_n)$, $\forall z \in (A+B)^{-1}0$, (3.5)

where $c$ and $\kappa$ are the constants in Lemma 2.5.

Proof. Let $z \in (A+B)^{-1}0$. From Lemma 2.5 (i) and (2.5), we have

$\phi(z, x_{n+1}) = \phi(z, J^{-1}(Jy_n - \lambda_n(Ay_n - Ax_n)))$
$= V(z, Jy_n - \lambda_n(Ay_n - Ax_n))$
$= \|z\|^2 - 2\langle z, Jy_n - \lambda_n(Ay_n - Ax_n)\rangle + \|Jy_n - \lambda_n(Ay_n - Ax_n)\|^2$
$\le \|z\|^2 - 2\langle z, Jy_n\rangle + 2\lambda_n\langle z, Ay_n - Ax_n\rangle + \|Jy_n\|^2 - 2\lambda_n\langle y_n, Ay_n - Ax_n\rangle + \kappa\|\lambda_n(Ay_n - Ax_n)\|^2$
$= \|z\|^2 - 2\langle z, Jy_n\rangle + \|y_n\|^2 - 2\lambda_n\langle y_n - z, Ay_n - Ax_n\rangle + \kappa\lambda_n^2\|Ay_n - Ax_n\|^2$
$= \phi(z, y_n) - 2\lambda_n\langle y_n - z, Ay_n - Ax_n\rangle + \kappa\lambda_n^2\|Ay_n - Ax_n\|^2$
$= \phi(z, x_n) - \phi(y_n, x_n) + 2\langle y_n - z, Jy_n - Jx_n\rangle - 2\lambda_n\langle y_n - z, Ay_n - Ax_n\rangle + \kappa\lambda_n^2\|Ay_n - Ax_n\|^2$
$= \phi(z, x_n) - \phi(y_n, x_n) + \kappa\lambda_n^2\|Ay_n - Ax_n\|^2 - 2\langle y_n - z, Jx_n - Jy_n - \lambda_n(Ax_n - Ay_n)\rangle$. (3.6)

Combining (3.4) and (3.6), we have

$\phi(z, x_{n+1}) \le \phi(z, x_n) - \phi(y_n, x_n) + \frac{\kappa\mu^2\lambda_n^2}{\lambda_{n+1}^2}\|y_n - x_n\|^2 - 2\langle y_n - z, Jx_n - Jy_n - \lambda_n(Ax_n - Ay_n)\rangle$. (3.7)

By Lemma 2.8, we have

$\phi(z, x_{n+1}) \le \phi(z, x_n) - \left(1 - \frac{\kappa\mu^2}{c}\frac{\lambda_n^2}{\lambda_{n+1}^2}\right)\phi(y_n, x_n) - 2\langle y_n - z, Jx_n - Jy_n - \lambda_n(Ax_n - Ay_n)\rangle$. (3.8)

Now, we will show that

$\langle y_n - z, Jx_n - Jy_n - \lambda_n(Ax_n - Ay_n)\rangle \ge 0$.

From the definition of $y_n$, we note that $Jx_n - \lambda_n Ax_n \in Jy_n + \lambda_n By_n$. Since $B$ is maximal monotone, there exists $v_n \in By_n$ such that $Jx_n - \lambda_n Ax_n = Jy_n + \lambda_n v_n$; hence

$v_n = \frac{1}{\lambda_n}(Jx_n - Jy_n - \lambda_n Ax_n)$. (3.9)

Since $0 \in (A+B)z$ and $Ay_n + v_n \in (A+B)y_n$, it follows from Lemma 2.12 that $A + B$ is maximal monotone, and hence

$\langle y_n - z, Ay_n + v_n \rangle \ge 0$. (3.10)

Substituting (3.9) into (3.10), we have

$\frac{1}{\lambda_n}\langle y_n - z, Jx_n - Jy_n - \lambda_n Ax_n + \lambda_n Ay_n \rangle \ge 0$.

Hence

$\langle y_n - z, Jx_n - Jy_n - \lambda_n(Ax_n - Ay_n)\rangle \ge 0$. (3.11)

Combining (3.8) and (3.11), the lemma is proved.

Theorem 3.4. Assume that Assumption 3.1 holds. Suppose, in addition, that $J$ is weakly sequentially continuous on $E$. Then the sequence $\{x_n\}$ generated by Algorithm 1 converges weakly to an element of $(A+B)^{-1}0$.

Proof. Since $\lim_{n\to\infty} \lambda_n$ exists and $\mu \in (0, \sqrt{c/\kappa})$, it follows that $\lim_{n\to\infty}\left(1 - \frac{\kappa\mu^2}{c}\frac{\lambda_n^2}{\lambda_{n+1}^2}\right) = 1 - \frac{\kappa\mu^2}{c} > 0$. Thus there exists $n_0 \in \mathbb{N}$ such that

$1 - \frac{\kappa\mu^2}{c}\frac{\lambda_n^2}{\lambda_{n+1}^2} > 0$, $\forall n \ge n_0$. (3.12)

Combining (3.5) and (3.12), we have

$\phi(z, x_{n+1}) \le \phi(z, x_n)$, $\forall n \ge n_0$.

This shows that $\lim_{n\to\infty} \phi(z, x_n)$ exists; hence $\{\phi(z, x_n)\}$ is bounded, and by Lemma 2.8 the sequence $\{x_n\}$ is bounded as well. From (3.5), we have

$\left(1 - \frac{\kappa\mu^2}{c}\frac{\lambda_n^2}{\lambda_{n+1}^2}\right)\phi(y_n, x_n) \le \phi(z, x_n) - \phi(z, x_{n+1})$. (3.13)

Thus we have

$\lim_{n\to\infty} \phi(y_n, x_n) = 0$.

Applying Lemma 2.8, we also have

$\lim_{n\to\infty} \|x_n - y_n\| = 0$. (3.14)

Since $J$ is norm-to-norm uniformly continuous on bounded subsets of $E$, we have

$\lim_{n\to\infty} \|Jx_n - Jy_n\| = 0$. (3.15)

Using the fact that $A$ is Lipschitz continuous, we have

$\lim_{n\to\infty} \|Ax_n - Ay_n\| = 0$. (3.16)

By the boundedness of $\{x_n\}$, there exists a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ such that $x_{n_k} \rightharpoonup \hat{x} \in E$. From (3.14), we also have $y_{n_k} \rightharpoonup \hat{x}$. We will show that $\hat{x} \in (A+B)^{-1}0$. Let $(v, w) \in G(A+B)$, so that $w - Av \in Bv$. From the definition of $y_{n_k}$, we note that

$Jx_{n_k} - \lambda_{n_k}Ax_{n_k} \in Jy_{n_k} + \lambda_{n_k}By_{n_k}$,

which implies that

$\frac{1}{\lambda_{n_k}}(Jx_{n_k} - Jy_{n_k} - \lambda_{n_k}Ax_{n_k}) \in By_{n_k}$.

By the maximal monotonicity of $B$, we have

$\left\langle v - y_{n_k}, \ w - Av - \frac{1}{\lambda_{n_k}}(Jx_{n_k} - Jy_{n_k} - \lambda_{n_k}Ax_{n_k})\right\rangle \ge 0$,

and by the monotonicity of $A$, we have

$\langle v - y_{n_k}, w \rangle \ge \left\langle v - y_{n_k}, \ Av + \frac{1}{\lambda_{n_k}}(Jx_{n_k} - Jy_{n_k} - \lambda_{n_k}Ax_{n_k})\right\rangle$
$= \langle v - y_{n_k}, Av - Ax_{n_k}\rangle + \frac{1}{\lambda_{n_k}}\langle v - y_{n_k}, Jx_{n_k} - Jy_{n_k}\rangle$
$= \langle v - y_{n_k}, Av - Ay_{n_k}\rangle + \langle v - y_{n_k}, Ay_{n_k} - Ax_{n_k}\rangle + \frac{1}{\lambda_{n_k}}\langle v - y_{n_k}, Jx_{n_k} - Jy_{n_k}\rangle$
$\ge \langle v - y_{n_k}, Ay_{n_k} - Ax_{n_k}\rangle + \frac{1}{\lambda_{n_k}}\langle v - y_{n_k}, Jx_{n_k} - Jy_{n_k}\rangle$.

Since $\lim_{k\to\infty} \lambda_{n_k} = \lambda > 0$ and $y_{n_k} \rightharpoonup \hat{x}$, it follows from (3.15) and (3.16) that

$\langle v - \hat{x}, w \rangle \ge 0$.

By the maximal monotonicity of $A + B$, we get $0 \in (A+B)\hat{x}$, that is, $\hat{x} \in (A+B)^{-1}0$. Hence every weak cluster point of $\{x_n\}$ belongs to $(A+B)^{-1}0$. Note that $(A+B)^{-1}0$ is closed and convex. Put $u_n = \Pi_{(A+B)^{-1}0}(x_n)$. It follows from Lemma 2.11 that there exists $x^\dagger \in (A+B)^{-1}0$ such that $u_n \to x^\dagger$. Finally, we show that $x_n \rightharpoonup x^\dagger$. Let $\{x_{n_k}\}$ be any subsequence of $\{x_n\}$ such that $x_{n_k} \rightharpoonup \bar{x} \in (A+B)^{-1}0$. By Lemma 2.10 (i), we have

$\langle \bar{x} - u_{n_k}, Jx_{n_k} - Ju_{n_k}\rangle \le 0$

for all $k \in \mathbb{N}$. Since $u_n \to x^\dagger$, $x_{n_k} \rightharpoonup \bar{x}$ and $J$ is weakly sequentially continuous, we have

$\langle \bar{x} - x^\dagger, J\bar{x} - Jx^\dagger\rangle \le 0$.

By the strict monotonicity of $J$ (Lemma 2.2), we obtain $\bar{x} = x^\dagger$. In summary, we have shown that every subsequence of $\{x_n\}$ has a further subsequence which converges weakly to $x^\dagger$. We conclude that $x_n \rightharpoonup x^\dagger = \lim_{n\to\infty}\Pi_{(A+B)^{-1}0}(x_n)$. This completes the proof.

Algorithm 2: Halpern-Tseng type splitting algorithm for the monotone inclusion problem
Step 0. Given $\lambda_1 > 0$ and $\mu \in (0, \sqrt{c/\kappa})$. Choose a nonnegative real sequence $\{\theta_n\}$ such that $\sum_{n=1}^\infty \theta_n < \infty$. Let $u, x_1 \in E$ be arbitrary. Set $n = 1$.
Step 1. Compute
$y_n = J_{\lambda_n}^B J^{-1}(Jx_n - \lambda_n Ax_n)$. (3.17)
If $x_n = y_n$, then stop and $x_n$ is a solution of the problem (1.1). Otherwise, go to Step 2.
Step 2. Compute
$z_n = J^{-1}(Jy_n - \lambda_n(Ay_n - Ax_n))$. (3.18)
Step 3. Compute
$x_{n+1} = J^{-1}(\alpha_n Ju + (1 - \alpha_n)Jz_n)$, (3.19)
where the step sizes are adaptively updated as follows:
$\lambda_{n+1} = \min\left\{\dfrac{\mu\|x_n - y_n\|}{\|Ax_n - Ay_n\|}, \ \lambda_n + \theta_n\right\}$ if $Ax_n - Ay_n \neq 0$, and $\lambda_{n+1} = \lambda_n + \theta_n$ otherwise. (3.20)
Set $n := n + 1$ and go to Step 1.
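In the Hilbert-space case ($J = I$), the only change relative to Algorithm 1 is the Halpern step (3.19), which pulls each iterate toward the anchor $u$ and yields strong convergence. The sketch below is an illustrative instance of ours (skew rotation, $B = 0$, $\alpha_n = 1/(n+1)$, $\theta_n = 0$), not the paper's experiment.

```python
import numpy as np

def halpern_tseng(A, JB, u, x, lam=1.0, mu=0.9, n_iter=2000):
    # Algorithm 2 with J = I: Tseng steps (3.17)-(3.18) followed by the
    # Halpern step (3.19) with alpha_n = 1/(n+1); theta_n = 0 for simplicity.
    for n in range(1, n_iter + 1):
        Ax = A(x)
        y = JB(x - lam * Ax, lam)                # step (3.17)
        Ay = A(y)
        z = y - lam * (Ay - Ax)                  # step (3.18)
        denom = np.linalg.norm(Ax - Ay)
        new_lam = lam if denom == 0 else min(mu * np.linalg.norm(x - y) / denom, lam)
        alpha = 1.0 / (n + 1)
        x = alpha * u + (1 - alpha) * z          # Halpern step (3.19)
        lam = new_lam
    return x

# Toy instance: A x = M x (monotone, not cocoercive), B = 0 (resolvent = I).
M = np.array([[0.0, 1.0], [-1.0, 0.0]])
sol = halpern_tseng(lambda v: M @ v, lambda v, lam: v,
                    u=np.array([0.2, -0.1]), x=np.array([1.0, 1.0]))
```

Since the solution set here is $\{0\}$, the iterates approach the origin, the generalized projection of the anchor $u$ onto the solution set.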


Theorem 3.5. Assume that Assumption 3.1 holds. If $\{\alpha_n\} \subset (0, 1)$ with $\lim_{n\to\infty} \alpha_n = 0$ and $\sum_{n=1}^\infty \alpha_n = \infty$, then the sequence $\{x_n\}$ generated by Algorithm 2 converges strongly to an element $x^\dagger \in (A+B)^{-1}0$.

Proof. We first show that $\{x_n\}$ is bounded. Let $z \in (A+B)^{-1}0$. Using the same arguments as in the proof of Lemma 3.3, we can show that

$\phi(z, z_n) \le \phi(z, x_n) - \left(1 - \frac{\kappa\mu^2}{c}\frac{\lambda_n^2}{\lambda_{n+1}^2}\right)\phi(y_n, x_n)$. (3.21)

Since $\lim_{n\to\infty} \lambda_n$ exists and $\mu \in (0, \sqrt{c/\kappa})$, it follows that $\lim_{n\to\infty}\left(1 - \frac{\kappa\mu^2}{c}\frac{\lambda_n^2}{\lambda_{n+1}^2}\right) = 1 - \frac{\kappa\mu^2}{c} > 0$. Thus there exists $n_0 \in \mathbb{N}$ such that

$1 - \frac{\kappa\mu^2}{c}\frac{\lambda_n^2}{\lambda_{n+1}^2} > 0$, $\forall n \ge n_0$. (3.22)

Combining (3.21) and (3.22), we have

$\phi(z, z_n) \le \phi(z, x_n)$, $\forall n \ge n_0$. (3.23)

By (2.4), for $n \ge n_0$ we have

$\phi(z, x_{n+1}) = \phi(z, J^{-1}(\alpha_n Ju + (1 - \alpha_n)Jz_n))$
$\le \alpha_n\phi(z, u) + (1 - \alpha_n)\phi(z, z_n)$
$\le \alpha_n\phi(z, u) + (1 - \alpha_n)\phi(z, x_n)$
$\le \max\{\phi(z, u), \phi(z, x_n)\}$
$\le \cdots \le \max\{\phi(z, u), \phi(z, x_{n_0})\}$.

This implies that $\{\phi(z, x_n)\}$ is bounded, and by Lemma 2.8 the sequence $\{x_n\}$ is bounded as well.

Let $x^\dagger = \Pi_{(A+B)^{-1}0}(u)$. From (3.21), we have

$\phi(x^\dagger, x_{n+1}) = \phi(x^\dagger, J^{-1}(\alpha_n Ju + (1 - \alpha_n)Jz_n))$
$\le \alpha_n\phi(x^\dagger, u) + (1 - \alpha_n)\phi(x^\dagger, z_n)$
$\le \alpha_n\phi(x^\dagger, u) + (1 - \alpha_n)\phi(x^\dagger, x_n) - (1 - \alpha_n)\left(1 - \frac{\kappa\mu^2}{c}\frac{\lambda_n^2}{\lambda_{n+1}^2}\right)\phi(y_n, x_n)$.

This implies that

$(1 - \alpha_n)\left(1 - \frac{\kappa\mu^2}{c}\frac{\lambda_n^2}{\lambda_{n+1}^2}\right)\phi(y_n, x_n) \le \phi(x^\dagger, x_n) - \phi(x^\dagger, x_{n+1}) + \alpha_n K$, (3.24)

where $K = \sup_{n\in\mathbb{N}}\{|\phi(x^\dagger, u) - \phi(x^\dagger, x_n)|\}$.

Now, we divide the rest of the proof into two cases.

Case 1. Suppose that there exists $N \in \mathbb{N}$ such that $\phi(x^\dagger, x_{n+1}) \le \phi(x^\dagger, x_n)$ for all $n \ge N$. Then $\lim_{n\to\infty} \phi(x^\dagger, x_n)$ exists. By our assumptions, we have from (3.24) that

$\lim_{n\to\infty} \phi(y_n, x_n) = 0$,

and hence

$\lim_{n\to\infty} \|x_n - y_n\| = 0$. (3.25)

Since $J$ is norm-to-norm uniformly continuous on bounded subsets of $E$, we have

$\lim_{n\to\infty} \|Jx_n - Jy_n\| = 0$. (3.26)

Using the fact that $A$ is Lipschitz continuous, we have

$\lim_{n\to\infty} \|Ax_n - Ay_n\| = 0$.

Then from (3.18), we have

$\|Jz_n - Jy_n\| = \lambda_n\|Ax_n - Ay_n\| \to 0$. (3.27)

Moreover, from (3.26) and (3.27), we obtain

$\|Jx_{n+1} - Jx_n\| \le \|Jx_{n+1} - Jz_n\| + \|Jz_n - Jy_n\| + \|Jy_n - Jx_n\|$
$= \alpha_n\|Ju - Jz_n\| + \|Jz_n - Jy_n\| + \|Jy_n - Jx_n\| \to 0$.

Since $J^{-1}$ is norm-to-norm uniformly continuous on bounded subsets of $E^*$, we have

$\lim_{n\to\infty} \|x_{n+1} - x_n\| = 0$. (3.28)

By the boundedness of $\{x_n\}$, there exists a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ such that $x_{n_k} \rightharpoonup \hat{x} \in E$ and

$\limsup_{n\to\infty} \langle x_n - x^\dagger, Ju - Jx^\dagger\rangle = \lim_{k\to\infty} \langle x_{n_k} - x^\dagger, Ju - Jx^\dagger\rangle$,

where $x^\dagger = \Pi_{(A+B)^{-1}0}(u)$. By an argument similar to that of Theorem 3.4, we can show that $\hat{x} \in (A+B)^{-1}0$. Thus, by Lemma 2.10 (i), we have

$\limsup_{n\to\infty} \langle x_n - x^\dagger, Ju - Jx^\dagger\rangle = \langle \hat{x} - x^\dagger, Ju - Jx^\dagger\rangle \le 0$.

From (3.28), we also have

$\limsup_{n\to\infty} \langle x_{n+1} - x^\dagger, Ju - Jx^\dagger\rangle \le 0$. (3.29)

Finally, we show that $x_n \to x^\dagger$. From Lemma 2.9, we have

$\phi(x^\dagger, x_{n+1}) = \phi(x^\dagger, J^{-1}(\alpha_n Ju + (1 - \alpha_n)Jz_n))$
$= V(x^\dagger, \alpha_n Ju + (1 - \alpha_n)Jz_n)$
$\le V(x^\dagger, \alpha_n Ju + (1 - \alpha_n)Jz_n - \alpha_n(Ju - Jx^\dagger)) + 2\alpha_n\langle x_{n+1} - x^\dagger, Ju - Jx^\dagger\rangle$
$= V(x^\dagger, \alpha_n Jx^\dagger + (1 - \alpha_n)Jz_n) + 2\alpha_n\langle x_{n+1} - x^\dagger, Ju - Jx^\dagger\rangle$
$\le \alpha_n\phi(x^\dagger, x^\dagger) + (1 - \alpha_n)\phi(x^\dagger, z_n) + 2\alpha_n\langle x_{n+1} - x^\dagger, Ju - Jx^\dagger\rangle$
$\le (1 - \alpha_n)\phi(x^\dagger, x_n) + 2\alpha_n\langle x_{n+1} - x^\dagger, Ju - Jx^\dagger\rangle$. (3.30)

This together with (3.29) allows us to conclude from Lemma 2.13 that $\phi(x^\dagger, x_n) \to 0$. Therefore $x_n \to x^\dagger$.

Case 2. Put $\Gamma_n = \phi(x^\dagger, x_n)$ for all $n \in \mathbb{N}$, and suppose that there exists a subsequence $\{\Gamma_{n_i}\}$ of $\{\Gamma_n\}$ such that $\Gamma_{n_i} < \Gamma_{n_i+1}$ for all $i \in \mathbb{N}$. In this case, we define $\sigma : \mathbb{N} \to \mathbb{N}$ by

$\sigma(n) = \max\{k \le n : \Gamma_k < \Gamma_{k+1}\}$

for all $n \ge n_0$ (for some $n_0$ large enough). From Lemma 2.14, $\{\sigma(n)\}$ is non-decreasing with $\lim_{n\to\infty} \sigma(n) = \infty$, and the following inequalities hold for all $n \ge n_0$:

$\Gamma_{\sigma(n)} \le \Gamma_{\sigma(n)+1}$ and $\Gamma_n \le \Gamma_{\sigma(n)+1}$. (3.31)

As proved in Case 1 (see (3.24)), we obtain

$(1 - \alpha_{\sigma(n)})\left(1 - \frac{\kappa\mu^2}{c}\frac{\lambda_{\sigma(n)}^2}{\lambda_{\sigma(n)+1}^2}\right)\phi(y_{\sigma(n)}, x_{\sigma(n)}) \le \phi(x^\dagger, x_{\sigma(n)}) - \phi(x^\dagger, x_{\sigma(n)+1}) + \alpha_{\sigma(n)}K \le \alpha_{\sigma(n)}K$,

where $K = \sup_{n\in\mathbb{N}}\{|\phi(x^\dagger, u) - \phi(x^\dagger, x_{\sigma(n)})|\}$. By our assumptions, we have

$\lim_{n\to\infty} \phi(y_{\sigma(n)}, x_{\sigma(n)}) = 0$

and hence

$\lim_{n\to\infty} \|x_{\sigma(n)} - y_{\sigma(n)}\| = 0$.

Using the same arguments as in the proof of Case 1, we can show that

$\lim_{n\to\infty} \|x_{\sigma(n)+1} - x_{\sigma(n)}\| = 0$

and

$\limsup_{n\to\infty} \langle x_{\sigma(n)+1} - x^\dagger, Ju - Jx^\dagger\rangle \le 0$.

From (3.30) and (3.31), we have

$\phi(x^\dagger, x_{\sigma(n)+1}) \le (1 - \alpha_{\sigma(n)})\phi(x^\dagger, x_{\sigma(n)}) + 2\alpha_{\sigma(n)}\langle x_{\sigma(n)+1} - x^\dagger, Ju - Jx^\dagger\rangle$
$\le (1 - \alpha_{\sigma(n)})\phi(x^\dagger, x_{\sigma(n)+1}) + 2\alpha_{\sigma(n)}\langle x_{\sigma(n)+1} - x^\dagger, Ju - Jx^\dagger\rangle$.

This implies that

$\phi(x^\dagger, x_n) \le \phi(x^\dagger, x_{\sigma(n)+1}) \le 2\langle x_{\sigma(n)+1} - x^\dagger, Ju - Jx^\dagger\rangle$.

Hence $\limsup_{n\to\infty} \phi(x^\dagger, x_n) \le 0$, and so $\lim_{n\to\infty} \phi(x^\dagger, x_n) = 0$. Therefore $x_n \to x^\dagger$. This completes the proof.

Let $C$ be a nonempty, closed and convex subset of $E$ and let $A : C \to E^*$ be a mapping. The variational inequality problem is to find $x^* \in C$ such that

$\langle y - x^*, Ax^* \rangle \ge 0$, $\forall y \in C$. (4.1)

The set of solutions of the problem (4.1) is denoted by $VI(C, A)$. In particular, if $A$ is continuous and monotone, then $VI(C, A)$ is closed and convex (see [7,24]). Recall that the indicator function of $C$ is given by

$i_C(x) = 0$ if $x \in C$, and $i_C(x) = \infty$ if $x \notin C$.

It is known that $i_C$ is a proper, convex and lower semicontinuous function and its subdifferential $\partial i_C$ is maximal monotone (see [35]). From [2], we know that

$\partial i_C(v) = N_C(v) = \{u \in E^* : \langle y - v, u \rangle \le 0, \ \forall y \in C\}$,

where $N_C$ is the normal cone of $C$ at a point $v$. Thus we can define the resolvent of $\partial i_C$ for $\lambda > 0$ by

$J_\lambda^{\partial i_C}(x) = (J + \lambda\partial i_C)^{-1}Jx$, $x \in E$.

As shown in [40], for any $x \in E$ and $z \in C$, $z = J_\lambda^{\partial i_C}(x) \iff z = \Pi_C(x)$, where $\Pi_C$ is the generalized projection from $E$ onto $C$.

Lemma 4.1. [36] Let $C$ be a nonempty, closed and convex subset of a Banach space $E$. Let $A : C \to E^*$ be a monotone and hemicontinuous operator and $T : E \to 2^{E^*}$ be the operator defined by

$Tv = Av + N_C(v)$ if $v \in C$, and $Tv = \emptyset$ if $v \notin C$.

Then $T$ is maximal monotone and $T^{-1}0 = VI(C, A)$.

If we set $B = \partial i_C$, then we obtain the following results regarding the problem (4.1).

Assumption 4.2. (A1) The feasible set $C$ is a nonempty, closed and convex subset of a real 2-uniformly convex and uniformly smooth Banach space $E$.

(A2) The mapping $A : E \to E^*$ is monotone and $L$-Lipschitz continuous.

(A3) The solution set of the problem (4.1) is nonempty, that is, $VI(C, A) \neq \emptyset$.

Algorithm 3: Tseng type splitting algorithm for the variational inequality problem
Step 0. Given $\lambda_1 > 0$ and $\mu \in (0, \sqrt{c/\kappa})$. Choose a nonnegative real sequence $\{\theta_n\}$ such that $\sum_{n=1}^\infty \theta_n < \infty$. Let $x_1 \in C$ be arbitrary. Set $n = 1$.
Step 1. Compute
$y_n = \Pi_C J^{-1}(Jx_n - \lambda_n Ax_n)$. (4.2)
Step 2. Compute
$x_{n+1} = J^{-1}(Jy_n - \lambda_n(Ay_n - Ax_n))$, (4.3)
where the step sizes are adaptively updated as follows:
$\lambda_{n+1} = \min\left\{\dfrac{\mu\|x_n - y_n\|}{\|Ax_n - Ay_n\|}, \ \lambda_n + \theta_n\right\}$ if $Ax_n - Ay_n \neq 0$, and $\lambda_{n+1} = \lambda_n + \theta_n$ otherwise. (4.4)
Set $n := n + 1$ and go to Step 1.
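In the Hilbert-space case ($J = I$, $\Pi_C$ the metric projection), Algorithm 3 is Tseng's extragradient-type method for VI$(C, A)$. The affine instance below is illustrative and ours: $Ax = Mx + q$ with $M$ positive definite and $C = \mathbb{R}^2_+$, so the projection is a componentwise maximum with $0$ and the unique solution is interior.

```python
import numpy as np

# Algorithm 3 with J = I on VI(C, A): A x = M x + q, C = R^2_+.
M = np.array([[4.0, 1.0], [1.0, 2.0]])   # positive definite => A monotone
q = np.array([-2.0, -1.0])
A = lambda v: M @ v + q
proj_C = lambda v: np.maximum(v, 0.0)    # metric projection onto R^2_+

x = np.zeros(2)
lam = 0.4 / np.linalg.norm(M, 2)         # same style of lambda_1 as Section 5
mu = 0.9
for n in range(1, 2001):
    Ax = A(x)
    y = proj_C(x - lam * Ax)             # step (4.2)
    Ay = A(y)
    denom = np.linalg.norm(Ax - Ay)
    new_lam = lam if denom == 0 else min(mu * np.linalg.norm(x - y) / denom, lam)
    x = y - lam * (Ay - Ax)              # step (4.3), theta_n = 0
    lam = new_lam

# The unique solution is interior: M x* + q = 0, i.e. x* = (3/7, 2/7).
```

Since the solution lies in the interior of $C$, the VI reduces to the linear system $Mx^* = -q$, which makes the limit easy to verify by hand.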


Theorem 4.3. Assume that Assumption 4.2 holds. Suppose, in addition, that $J$ is weakly sequentially continuous on $E$. Then the sequence $\{x_n\}$ generated by Algorithm 3 converges weakly to an element of $VI(C, A)$.

Algorithm 4: Halpern-Tseng type splitting algorithm for the variational inequality problem
Step 0. Given $\lambda_1 > 0$ and $\mu \in (0, \sqrt{c/\kappa})$. Choose a nonnegative real sequence $\{\theta_n\}$ such that $\sum_{n=1}^\infty \theta_n < \infty$. Let $u, x_1 \in C$ be arbitrary. Set $n = 1$.
Step 1. Compute
$y_n = \Pi_C J^{-1}(Jx_n - \lambda_n Ax_n)$. (4.5)
Step 2. Compute
$z_n = J^{-1}(Jy_n - \lambda_n(Ay_n - Ax_n))$. (4.6)
Step 3. Compute
$x_{n+1} = J^{-1}(\alpha_n Ju + (1 - \alpha_n)Jz_n)$, (4.7)
where the step sizes are adaptively updated as follows:
$\lambda_{n+1} = \min\left\{\dfrac{\mu\|x_n - y_n\|}{\|Ax_n - Ay_n\|}, \ \lambda_n + \theta_n\right\}$ if $Ax_n - Ay_n \neq 0$, and $\lambda_{n+1} = \lambda_n + \theta_n$ otherwise. (4.8)
Set $n := n + 1$ and go to Step 1.


Theorem 4.4. Assume that Assumption 4.2 holds. If $\{\alpha_n\} \subset (0, 1)$ with $\lim_{n\to\infty} \alpha_n = 0$ and $\sum_{n=1}^\infty \alpha_n = \infty$, then the sequence $\{x_n\}$ generated by Algorithm 4 converges strongly to an element $x^\dagger \in VI(C, A)$.

Let $f : E \to \mathbb{R}$ be a convex function and $g : E \to \mathbb{R}$ be a convex, lower semicontinuous and possibly non-smooth function. We consider the following convex minimization problem:

$\min_{x\in E} f(x) + g(x)$. (4.9)

By Fermat's rule, this problem is equivalent to the problem of finding $x^* \in E$ such that

$0 \in \nabla f(x^*) + \partial g(x^*)$, (4.10)

where $\nabla f$ is the gradient of $f$ and $\partial g$ is the subdifferential of $g$. In this situation, we assume that $f$ is convex and differentiable and that its gradient $\nabla f$ is $L$-Lipschitz continuous. Then $\nabla f$ is cocoercive with constant $1/L$ (see [31, Theorem 3.13]), which implies that $\nabla f$ is monotone and Lipschitz continuous. Moreover, $\partial g$ is maximal monotone (see [35, Theorem A]). From this point of view, we set $A = \nabla f$ and $B = \partial g$ and obtain the following results regarding the problem (4.9).

Assumption 4.5. (A1) The Banach space $E$ is real, 2-uniformly convex and uniformly smooth.

(A2) The function $f : E \to \mathbb{R}$ is convex and differentiable, its gradient $\nabla f$ is $L$-Lipschitz continuous, and $g : E \to \mathbb{R}$ is convex and lower semicontinuous such that $f + g$ attains a minimizer.

Algorithm 5: Tseng type splitting algorithm for the convex minimization problem
Step 0. Given $\lambda_1 > 0$ and $\mu \in (0, \sqrt{c/\kappa})$. Choose a nonnegative real sequence $\{\theta_n\}$ such that $\sum_{n=1}^\infty \theta_n < \infty$. Let $x_1 \in E$ be arbitrary. Set $n = 1$.
Step 1. Compute
$y_n = J_{\lambda_n}^{\partial g}J^{-1}(Jx_n - \lambda_n\nabla f(x_n))$. (4.11)
Step 2. Compute
$x_{n+1} = J^{-1}(Jy_n - \lambda_n(\nabla f(y_n) - \nabla f(x_n)))$, (4.12)
where the step sizes are adaptively updated as follows:
$\lambda_{n+1} = \min\left\{\dfrac{\mu\|x_n - y_n\|}{\|\nabla f(y_n) - \nabla f(x_n)\|}, \ \lambda_n + \theta_n\right\}$ if $\nabla f(y_n) - \nabla f(x_n) \neq 0$, and $\lambda_{n+1} = \lambda_n + \theta_n$ otherwise. (4.13)
Set $n := n + 1$ and go to Step 1.
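In the Hilbert-space case ($J = I$), the resolvent $J_{\lambda}^{\partial g}$ is the proximal map of $g$. The sketch below applies Algorithm 5 to an illustrative LASSO-type instance of ours, $f(x) = \frac{1}{2}\|Dx - b\|^2$ and $g(x) = \mathrm{reg}\cdot\|x\|_1$, whose proximal map is soft-thresholding; the data $D$, $b$ and the conservative choice of $\lambda_1$ are our assumptions, not the paper's.

```python
import numpy as np

# Algorithm 5 with J = I on min_x 0.5*||D x - b||^2 + reg*||x||_1.
def soft_threshold(v, t):
    # Proximal map (resolvent J_lam^{dg}) of g = t * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(1)
D = rng.standard_normal((20, 10))
b = rng.standard_normal(20)
reg = 0.5
grad_f = lambda v: D.T @ (D @ v - b)

# A conservative lambda_1 for this toy instance; rule (4.13) then keeps
# the step size in a provably safe range without knowing L explicitly.
x, lam, mu = np.zeros(10), 0.01, 0.9
for n in range(1, 2001):
    gx = grad_f(x)
    y = soft_threshold(x - lam * gx, lam * reg)          # step (4.11)
    gy = grad_f(y)
    denom = np.linalg.norm(gx - gy)
    new_lam = lam if denom == 0 else min(mu * np.linalg.norm(x - y) / denom, lam)
    x = y - lam * (gy - gx)                              # step (4.12), theta_n = 0
    lam = new_lam

# At a minimizer, 0 ∈ grad_f(x) + reg * ∂||x||_1 (coordinatewise condition).
```

The limit can be checked against Fermat's rule (4.10): on the support of $x$ the gradient coordinate equals $-\mathrm{reg}\cdot\mathrm{sign}(x_i)$, and elsewhere it is bounded by $\mathrm{reg}$ in absolute value.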


    Theorem 4.6. Assume that Assumption 4.5 holds. Suppose, in addition, that J is weakly sequentially continuous on E. Then the sequence {xn} generated by Algorithm 5 converges weakly to a minimizer of f+g.

Algorithm 6: Halpern-Tseng type splitting algorithm for the convex minimization problem
Step 0. Given $\lambda_1 > 0$ and $\mu \in (0, \sqrt{c/\kappa})$. Choose a nonnegative real sequence $\{\theta_n\}$ such that $\sum_{n=1}^\infty \theta_n < \infty$. Let $u, x_1 \in E$ be arbitrary. Set $n = 1$.
Step 1. Compute
$y_n = J_{\lambda_n}^{\partial g}J^{-1}(Jx_n - \lambda_n\nabla f(x_n))$. (4.14)
Step 2. Compute
$z_n = J^{-1}(Jy_n - \lambda_n(\nabla f(y_n) - \nabla f(x_n)))$. (4.15)
Step 3. Compute
$x_{n+1} = J^{-1}(\alpha_n Ju + (1 - \alpha_n)Jz_n)$, (4.16)
where the step sizes are adaptively updated as follows:
$\lambda_{n+1} = \min\left\{\dfrac{\mu\|x_n - y_n\|}{\|\nabla f(y_n) - \nabla f(x_n)\|}, \ \lambda_n + \theta_n\right\}$ if $\nabla f(y_n) - \nabla f(x_n) \neq 0$, and $\lambda_{n+1} = \lambda_n + \theta_n$ otherwise. (4.17)
Set $n := n + 1$ and go to Step 1.


Theorem 4.7. Assume that Assumption 4.5 holds. If $\{\alpha_n\} \subset (0, 1)$ with $\lim_{n\to\infty} \alpha_n = 0$ and $\sum_{n=1}^\infty \alpha_n = \infty$, then the sequence $\{x_n\}$ generated by Algorithm 6 converges strongly to a minimizer of $f + g$.

    In this section, we provide some numerical experiments to illustrate the behaviour of our methods and compare them with some existing methods.

    Example 5.1. We consider the HpHard problem which is taken from [22]. Let A:RmRm be an operator defined by Ax=Mx+q with qRm and

    M=NNT+S+D,

    where N is an m×m matrix, S is an m×m skew-symmetric matrix and D is an m×m positive definite diagonal matrix. The feasible set is C=R+m. It is clear that A is monotone and Lipschitz continuous with L=M. In this experiments, we compare our Algorithm 3 and Algorithm 4 with the extragradient method (EGM) proposed in [28] and the subetaadient extragradient method (SEGM) proposed in [8]. The parameters are chosen as follows:

    Algorithm 3: $\lambda_1 = 0.4/\|M\|$ and $\mu = 0.9$;

    Algorithm 4: $\lambda_1 = 0.4/\|M\|$, $\mu = 0.9$, $\alpha_n = \frac{1}{10000(n+2)}$ and $u = x_1$;

    EGM and SEGM: $\lambda = 0.4/\|M\|$.

    All entries of $N$ and $S$ are generated randomly in $(-5,5)$, those of $D$ in $(0,0.3)$, and the entries of $q$ uniformly in $(-500,0)$. For every $m$, we generated two random samples with different choices of $M$ and $q$. We perform the numerical experiments with three different cases of $m$ ($m = 100, 500, 1000$). We take the starting point $x_1 = (1,1,1,\ldots,1)^T \in \mathbb{R}^m$ and use the stopping criterion $\|x_n - y_n\| \le \varepsilon = 10^{-6}$. The numerical results are reported in Table 1.
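    A sketch of how such a test instance can be generated (NumPy; the seed is an arbitrary choice). Monotonicity of $A$ follows because the symmetric part $NN^T + D$ of $M$ is positive semidefinite:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    m = 100
    N = rng.uniform(-5, 5, (m, m))
    S0 = rng.uniform(-5, 5, (m, m))
    S = S0 - S0.T                            # skew-symmetric matrix
    D = np.diag(rng.uniform(0, 0.3, m))      # positive definite diagonal matrix
    M = N @ N.T + S + D
    q = rng.uniform(-500, 0, m)

    A = lambda x: M @ x + q                  # monotone and Lipschitz with L = ||M||
    L = np.linalg.norm(M, 2)                 # spectral norm of M
    proj_C = lambda x: np.maximum(x, 0.0)    # projection onto C = R^m_+

    # Sanity check: x -> Mx is monotone iff the symmetric part of M is PSD
    eigs = np.linalg.eigvalsh((M + M.T) / 2)
    ```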

    Table 1.  Numerical results for Example 5.1 (each cell: iter. / time in seconds; two random samples per $m$).

    | $m$ | Algorithm 3 ($\theta_n=0$) | Algorithm 3 ($\theta_n=100/n^{1.1}$) | Algorithm 4 ($\theta_n=0$) | Algorithm 4 ($\theta_n=100/n^{1.1}$) | EGM | SEGM |
    |---|---|---|---|---|---|---|
    | 100 | 2454 / 0.02 | 1162 / 0.01 | 35112 / 1.31 | 25204 / 0.65 | 2454 / 0.03 | 2454 / 0.04 |
    |     | 1920 / 0.04 | 917 / 0.02  | 35072 / 1.48 | 25203 / 0.66 | 1920 / 0.03 | 1920 / 0.05 |
    | 500 | 2275 / 0.95 | 1104 / 0.29 | 35010 / 7.28 | 25201 / 5.12 | 2275 / 0.50 | 2275 / 0.65 |
    |     | 2291 / 0.93 | 1107 / 0.43 | 34989 / 7.20 | 25198 / 5.06 | 2291 / 0.47 | 2291 / 0.59 |
    | 1000 | 2027 / 8.08 | 996 / 4.25 | 34993 / 113.2 | 25200 / 78.2 | 2027 / 7.83 | 2027 / 7.96 |
    |      | 2017 / 7.80 | 987 / 3.87 | 35003 / 109.8 | 25200 / 78.0 | 2017 / 7.01 | 2017 / 7.16 |


    Example 5.2. We consider the problem (4.1) in $L^2([0,2\pi])$ with the inner product $\langle x, y\rangle = \int_0^{2\pi} x(t)y(t)\,dt$ and the norm $\|x\| = \big(\int_0^{2\pi} x^2(t)\,dt\big)^{1/2}$ for all $x, y \in L^2([0,2\pi])$. Let $A: L^2([0,2\pi]) \to L^2([0,2\pi])$ be an operator defined by

    $$(Ax)(t) = \frac{1}{2}\max\{0, x(t)\}$$

    for all $x \in L^2([0,2\pi])$ and $t \in [0,2\pi]$. It can be easily verified that $A$ is monotone and Lipschitz continuous with $L = 1$ (see [50,51]). The feasible set is $C = \{x \in L^2([0,2\pi]) : \int_0^{2\pi}(t^2+1)x(t)\,dt \le 1\}$. Observe that $0 \in VI(C,A)$ and so $VI(C,A) \neq \emptyset$. In this numerical experiment, we take the parameters $\alpha_n$, $\lambda_n$ and $\mu$ the same as in Example 5.1. We perform numerical experiments with three different cases of starting point $x_1$ and use the stopping criterion $\|x_n - y_n\| \le \varepsilon = 10^{-3}$. The numerical results are reported in Table 2.
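    Implementing this example requires discretizing $[0,2\pi]$ and projecting onto $C$; since $C = \{x : \langle a, x\rangle \le 1\}$ with $a(t) = t^2 + 1$ is a half-space, the metric projection has a closed form. A minimal sketch (grid size is an arbitrary choice):

    ```python
    import numpy as np

    # Discretize [0, 2*pi]; the L^2 inner product becomes a weighted dot product
    n = 1000
    t = np.linspace(0.0, 2.0 * np.pi, n)
    dt = t[1] - t[0]
    inner = lambda u, v: np.sum(u * v) * dt

    A = lambda x: 0.5 * np.maximum(0.0, x)   # the operator of Example 5.2

    # Half-space projection: P_C(x) = x if <a, x> <= 1,
    # and x + (1 - <a, x>)/<a, a> * a otherwise
    a = t**2 + 1.0
    def proj_C(x):
        s = inner(a, x)
        return x if s <= 1.0 else x + (1.0 - s) / inner(a, a) * a

    x1 = np.sin(t) / 100.0          # one of the starting points in Table 2
    p = proj_C(np.ones(n))          # project an infeasible point onto C
    ```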

    Table 2.  Numerical results for Example 5.2 (each cell: iter. / time in seconds).

    | $x_1$ | Algorithm 3 ($\theta_n=0$) | Algorithm 3 ($\theta_n=0.001/(1.01)^n$) | Algorithm 4 ($\theta_n=0$) | Algorithm 4 ($\theta_n=0.001/(1.01)^n$) |
    |---|---|---|---|---|
    | $\frac{1}{100}\sin(t)$ | 7 / 9.9 | 7 / 8.9 | 7 / 9.8 | 7 / 10.1 |
    | $\frac{1}{3}t^2e^{-4t}$ | 5 / 0.4 | 5 / 0.3 | 5 / 0.3 | 5 / 0.3 |
    | $\frac{1}{70}(1-t^2)$ | 6 / 3.2 | 6 / 2.5 | 6 / 2.7 | 6 / 2.7 |


    Example 5.3. Consider the minimization problem:

    $$\min_{x\in\mathbb{R}^3}\ \|x\|_1 + 2\|x\|_2^2 + (1,2,5)x + 1,$$

    where $x = (w_1, w_2, w_3)^T \in \mathbb{R}^3$. Let $f(x) = 2\|x\|_2^2 + (1,2,5)x + 1$ and $g(x) = \|x\|_1$. Thus we have $\nabla f(x) = 4x + (1,2,5)^T$. It is easy to check that $f$ is a convex and differentiable function and its gradient $\nabla f$ is Lipschitz continuous with $L = 4$. Moreover, $g$ is a convex and lower semicontinuous function but not differentiable on $\mathbb{R}^3$. From [21], we know that

    $$J_\lambda^g(x) = (I + \lambda\partial g)^{-1}(x) = \big(\max\{|w_1| - \lambda, 0\}\,\mathrm{sgn}(w_1),\ \max\{|w_2| - \lambda, 0\}\,\mathrm{sgn}(w_2),\ \max\{|w_3| - \lambda, 0\}\,\mathrm{sgn}(w_3)\big)^T$$

    for $\lambda > 0$. In this experiment, we compare our Algorithm 5 and Algorithm 6 with Algorithm (1.4) of Cholamjiak [12]. The parameters are chosen as follows:

    Algorithm 5: $\lambda_1 = 0.1$ and $\mu = 0.9$;

    Algorithm 6: $\lambda_1 = 0.1$, $\mu = 0.9$, $\alpha_n = \frac{1}{10000(n+1)}$ and $u = x_1$;

    Algorithm (1.4): all parameters $\alpha_n$, $\lambda_n$, $\delta_n$, $r_n$ and $e_n$ are the same as in Example 4.2 of [12], and $u = x_1$.

    We perform the numerical experiments with four different cases of starting point $x_1$ and use the stopping criterion $\|x_{n+1} - x_n\| \le \varepsilon = 10^{-12}$. The numerical results are reported in Table 3.
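    The resolvent $J_\lambda^g$ in this example is componentwise soft-thresholding. As a quick numerical check (a Hilbert-space sketch using a fixed step $\lambda = 0.1 < 1/L$ instead of the paper's adaptive rule), Tseng's iteration drives the iterates to the minimizer, which one can verify directly from the optimality condition $0 \in \partial\|x\|_1 + 4x + (1,2,5)^T$:

    ```python
    import numpy as np

    def prox_l1(x, lam):
        """J_lambda^g for g = ||.||_1: componentwise soft-thresholding."""
        return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

    grad_f = lambda x: 4.0 * x + np.array([1.0, 2.0, 5.0])  # gradient of f in Example 5.3

    x = np.array([1.0, 2.0, 4.0])   # a starting point from Table 3
    lam = 0.1                       # fixed step below 1/L = 1/4 (illustrative choice)
    for _ in range(2000):
        y = prox_l1(x - lam * grad_f(x), lam)     # forward-backward point
        x = y - lam * (grad_f(y) - grad_f(x))     # Tseng correction
    ```

    The optimality condition gives $x^* = (0, -1/4, -1)^T$: for $w_1 = 0$ we need $-1 \in [-1,1]$, and for $w_2, w_3 < 0$ we solve $-1 + 4w_i + c_i = 0$ with $c = (1,2,5)^T$.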

    Table 3.  Numerical results for Example 5.3 (each cell: iter. / time in seconds).

    | $x_1$ | Algorithm 5 ($\theta_n=0$) | Algorithm 5 ($\theta_n=100/n^{1.1}$) | Algorithm 6 ($\theta_n=0$) | Algorithm 6 ($\theta_n=100/n^{1.1}$) | Algorithm (1.4) |
    |---|---|---|---|---|---|
    | $(1,2,4)^T$ | 101 / 0.003 | 284 / 0.003 | 27818 / 0.10 | 25263 / 0.08 | 263957 / 0.33 |
    | $(1,7,3)^T$ | 103 / 0.002 | 288 / 0.003 | 27809 / 0.12 | 25264 / 0.08 | 314417 / 0.38 |
    | $(100,100,50)^T$ | 111 / 0.004 | 315 / 0.004 | 27802 / 0.11 | 25252 / 0.09 | 1313442 / 1.58 |
    | $(1000,5000,800)^T$ | 127 / 0.005 | 356 / 0.01 | 27787 / 0.11 | 25241 / 0.07 | 8004199 / 9.4 |


    Example 5.4. In signal processing, compressed sensing can be modeled as the following underdetermined linear equation system:

    $$y = Dx + \varepsilon, \tag{5.1}$$

    where $x \in \mathbb{R}^N$ is a vector with $m$ nonzero components to be recovered, $y \in \mathbb{R}^M$ is the observed or measured data contaminated by noise $\varepsilon$, and $D: \mathbb{R}^N \to \mathbb{R}^M$ ($M < N$) is a bounded linear operator. It is known that solving (5.1) can be cast as the LASSO problem:

    $$\min_{x\in\mathbb{R}^N}\ \frac{1}{2}\|Dx - y\|_2^2 + \lambda\|x\|_1, \tag{5.2}$$

    where $\lambda > 0$. Following [19], we define $Ax := \nabla\big(\frac{1}{2}\|Dx - y\|_2^2\big) = D^T(Dx - y)$ and $Bx := \partial(\lambda\|x\|_1)$. It is known that $A$ is $\|D\|^2$-Lipschitz continuous and monotone. Moreover, $B$ is maximal monotone (see [35]).

    In this experiment, the sparse vector $x \in \mathbb{R}^N$ is generated from the uniform distribution on the interval $[-2,2]$ with $m$ nonzero elements. The matrix $D \in \mathbb{R}^{M\times N}$ is generated from a normal distribution with mean zero and unit variance. The observation $y$ is corrupted by white Gaussian noise with signal-to-noise ratio (SNR) $= 40$. The restoration accuracy is measured by the mean squared error (MSE):

    $$E_n = \frac{1}{N}\|x_n - x\|_2^2 < 10^{-4}, \tag{5.3}$$

    where $x_n$ is an estimated signal of $x$.
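    A small self-contained sketch of this setup, using the plain forward-backward iteration (FBSA) on (5.2) rather than the paper's Algorithms 1-2; the dimensions, seed and $\lambda$ are illustrative choices, and the noise term is omitted for brevity:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    Mdim, Ndim, m_nz = 64, 128, 5
    D = rng.normal(0.0, 1.0, (Mdim, Ndim))            # Gaussian sensing matrix (mean 0, unit variance)
    x_true = np.zeros(Ndim)
    support = rng.choice(Ndim, m_nz, replace=False)
    x_true[support] = rng.uniform(-2.0, 2.0, m_nz)    # sparse signal to recover
    y = D @ x_true                                    # noiseless observation for simplicity

    lam = 0.01                                        # LASSO regularization weight
    A_op = lambda x: D.T @ (D @ x - y)                # A = gradient of 0.5*||Dx - y||^2
    soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)  # resolvent of B

    s = 1.0 / np.linalg.norm(D, 2) ** 2               # step size below 1/||D||^2
    x = np.zeros(Ndim)
    for _ in range(5000):
        x = soft(x - s * A_op(x), s * lam)            # forward-backward step

    mse = np.sum((x - x_true) ** 2) / Ndim            # the criterion E_n of (5.3)
    ```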

    We compare our proposed Algorithm 1 and Algorithm 2 with the forward-backward splitting algorithm (FBSA) (1.2), Tseng's splitting algorithm (TSA) (1.5) and the contraction forward-backward splitting algorithm (CFBSA) proposed in [43, Algorithm 3.1]. The parameters are chosen as follows:

    Algorithm 1: $\theta_n = 0$, $\lambda_1 = 0.0013$ and $\mu = 0.5$;

    Algorithm 2: $\theta_n = 0$, $\lambda_1 = 0.0013$, $\mu = 0.5$, $\alpha_n = \frac{1}{200n+5}$ and $u = (1,1,\ldots,1)^T$;

    CFBSA: $\alpha_n = \frac{1}{200n+5}$, $\mu = 0.5$, $\delta = 0.5$, $l = 0.5$, $\gamma = 0.45$ and $f(x) = \frac{x}{5}$;

    TSA: $\lambda_n = \frac{0.2}{\|D\|^2}$;

    FBSA: $\lambda = 2\times 10^{-5}$.

    The starting points $x_1$ of all methods are randomly chosen in $\mathbb{R}^N$. We perform the numerical test with the following three cases:

    Case 1: $N = 512$, $M = 256$ and $m = 20$;

    Case 2: $N = 1024$, $M = 512$ and $m = 30$;

    Case 3: $N = 2048$, $M = 1024$ and $m = 60$.

    The numerical results for all tests are reported in Figures 1-6.

    Figure 1.  Comparison of recovered signal by using different algorithms in Case 1.
    Figure 2.  The plotting of MSE versus number of iterations in Case 1.
    Figure 3.  Comparison of recovered signal by using different algorithms in Case 2.
    Figure 4.  The plotting of MSE versus number of iterations in Case 2.
    Figure 5.  Comparison of recovered signal by using different algorithms in Case 3.
    Figure 6.  The plotting of MSE versus number of iterations in Case 3.

    In this paper, we propose Tseng's splitting algorithms with non-monotone adaptive step sizes for finding zeros of the sum of two monotone operators in the framework of Banach spaces. Under suitable conditions, we prove weak and strong convergence of the algorithms without knowledge of the Lipschitz constant of the mapping. Some applications of the obtained results are presented. Finally, several numerical experiments illustrate the convergence of our algorithms and compare them with several known methods.

    P. Cholamjiak was supported by Thailand Science Research and Innovation under the project IRN62W0007 and P. Sunthrayuth was supported by RMUTT Research Grant for New Scholar under Grant NSF62D0602.

    The authors declare no conflict of interest.



    [1] H. A. Abass, C. Izuchukwu, O. T. Mewomo, Q. L. Dong, Strong convergence of an inertial forward-backward splitting method for accretive operators in real Banach space, Fixed Point Theory, 21 (2020), 397-412. doi: 10.24193/fpt-ro.2020.2.28
    [2] R. P. Agarwal, D. O'Regan, D. R. Sahu, Fixed Point Theory for Lipschitzian-type Mappings with Applications, New York: Springer, 2009.
    [3] Y. I. Alber, Metric and generalized projection operators in Banach spaces: properties and applications, In: A. G. Kartsatos, Theory and Applications of Nonlinear Operator of Accretive and Monotone Type, New York: Marcel Dekker, (1996), 15-50.
    [4] K. Ball, E. A. Carlen, E. H. Lieb, Sharp uniform convexity and smoothness inequalities for trace norms, Inventiones Math., 115 (1994), 463-482. doi: 10.1007/BF01231769
    [5] V. Barbu, Nonlinear Semigroups and Differential Equations in Banach Spaces, Netherlands: Springer, 1976.
    [6] T. Bonesky, K. S. Kazimierski, P. Maass, F. Schöpfer, T. Schuster, Minimization of Tikhonov Functionals in Banach Spaces, Abstr. Appl. Anal., 2008 (2008), 192679.
    [7] F. Browder, Nonlinear monotone operators and convex sets in Banach spaces, Bull. Am. Math. Soc., 71 (1965), 780-785. doi: 10.1090/S0002-9904-1965-11391-X
    [8] Y. Censor, A. Gibali, S. Reich, The subgradient extragradient method for solving variational inequalities in Hilbert space, J. Optim. Theory Appl., 148 (2011), 318-335. doi: 10.1007/s10957-010-9757-3
    [9] S. S. Chang, C. F. Wen, J. C. Yao, Generalized viscosity implicit rules for solving quasi-inclusion problems of accretive operators in Banach spaces, Optimization, 66 (2017), 1105-1117. doi: 10.1080/02331934.2017.1325888
    [10] S. S. Chang, C. F. Wen, J. C. Yao, A generalized forward-backward splitting method for solving a system of quasi variational inclusions in Banach spaces, RACSAM, 113 (2019), 729-747. doi: 10.1007/s13398-018-0511-2
    [11] G. H. Chen, R. T. Rockafellar, Convergence rates in forward-backward splitting, SIAM J. Optim., 7 (1997), 421-444. doi: 10.1137/S1052623495290179
    [12] P. Cholamjiak, A generalized forward-backward splitting method for solving quasi inclusion problems in Banach spaces, Numer. Algorithms, 71 (2016), 915-932. doi: 10.1007/s11075-015-0030-6
    [13] P. Cholamjiak, N. Pholasa, S. Suantai, P. Sunthrayuth, The generalized viscosity explicit rules for solving variational inclusion problems in Banach spaces, Optimization, 2020. Available from: https://doi.org/10.1080/02331934.2020.1789131.
    [14] I. Cioranescu, Geometry of Banach Spaces, Duality Mappings and Nonlinear Problems, Dordrecht: Kluwer Academic, 1990.
    [15] P. L. Combettes, V. R. Wajs, Signal recovery by proximal forward-backward splitting, Multiscale Model. Simul., 4 (2005), 1168-1200. doi: 10.1137/050626090
    [16] I. Daubechies, M. Defrise, C. De Mol, An iterative thresholding algorithm for linear inverse problems with a sparsity constraint, Commun. Pure Appl. Math., 57 (2004), 1413-1457. doi: 10.1002/cpa.20042
    [17] J. Duchi, Y. Singer, Efficient online and batch learning using forward-backward splitting, J. Mach. Learn. Res., 10 (2009), 2899-2934.
    [18] J. C. Dunn, Convexity, monotonicity, and gradient processes in Hilbert space, J. Math. Anal. Appl., 53 (1976), 145-158. doi: 10.1016/0022-247X(76)90152-9
    [19] A. Gibali, D. V. Thong, Tseng type methods for solving inclusion problems and its applications, Calcolo, 55 (2018), 49. doi: 10.1007/s10092-018-0292-1
    [20] O. Güler, On the convergence of the proximal point algorithm for convex minimization, SIAM J. Control Optim., 29 (1991), 403-419. doi: 10.1137/0329022
    [21] E. T. Hale, W. Yin, Y. Zhang, A fixed-point continuation method for $\ell_1$-regularized minimization with applications to compressed sensing, CAAM Technical Report TR07-07, 2007.
    [22] P. T. Harker, J. S. Pang, A damped-Newton method for the linear complementarity problem, In: G. Allgower, K. Georg, Computational Solution of Nonlinear Systems of Equations, AMS Lectures on Applied Mathematics, 26 (1990), 265-284.
    [23] O. Hanner, On the uniform convexity of $L^p$ and $\ell^p$, Arkiv Matematik, 3 (1956), 239-244. doi: 10.1007/BF02589410
    [24] P. Hartman, G. Stampacchia, On some non linear elliptic differential functional equations, Acta Math., 115 (1966), 271-310. doi: 10.1007/BF02392210
    [25] H. Iiduka, W. Takahashi, Weak convergence of a projection algorithm for variational inequalities in a Banach space, J. Math. Anal. Appl., 339 (2008), 668-679. doi: 10.1016/j.jmaa.2007.07.019
    [26] C. Izuchukwu, C. C. Okeke, F. O. Isiogugu, Viscosity iterative technique for split variational inclusion problem and fixed point problem between Hilbert space and Banach space, J. Fixed Point Theory Appl., 20 (2018), 1-25. doi: 10.1007/s11784-018-0489-6
    [27] C. C. Okeke, C. Izuchukwu, Strong convergence theorem for split feasibility problem and variational inclusion problem in real Banach spaces, Rend. Circolo Mat. Palermo, 2020. Available from: https://doi.org/10.1007/s12215-020-00508-3.
    [28] G. M. Korpelevich, The extragradient method for finding saddle points and other problems, Ekonomika i Mat. Metody, 12 (1976), 747-756.
    [29] P. L. Lions, B. Mercier, Splitting algorithms for the sum of two nonlinear operators, SIAM J. Numer. Anal., 16 (1979), 964-979. doi: 10.1137/0716071
    [30] P. E. Maingé, Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization, Set-Valued Anal., 16 (2008), 899-912. doi: 10.1007/s11228-008-0102-z
    [31] J. Peypouquet, Convex Optimization in Normed Spaces: Theory, Methods and Examples, Springer Briefs in Optimization, 2015.
    [32] N. Pholasa, P. Cholamjiak, The regularization method for solving variational inclusion problems, Thai J. Math., 14 (2016), 369-381.
    [33] H. Raguet, J. Fadili, G. Peyre, A generalized forward-backward splitting, SIAM J. Imaging Sci., 6 (2013), 1199-1226. doi: 10.1137/120872802
    [34] R. T. Rockafellar, Monotone operators and the proximal point algorithm, SIAM J. Control Optim., 14 (1976), 877-898. doi: 10.1137/0314056
    [35] R. T. Rockafellar, On the maximal monotonicity of subdifferential mappings, Pac. J. Math., 33 (1970), 209-216. doi: 10.2140/pjm.1970.33.209
    [36] R. T. Rockafellar, On the maximality of sums of nonlinear monotone operators, Trans. Amer. Math. Soc., 149 (1970), 75-88. doi: 10.1090/S0002-9947-1970-0282272-5
    [37] Y. Shehu, Convergence results of forward-backward algorithms for sum of monotone operators in Banach spaces, Results Math., 74 (2019), 138. doi: 10.1007/s00025-019-1061-4
    [38] Y. Shehu, G. Cai, Strong convergence result of forward-backward splitting methods for accretive operators in banach spaces with applications, RACSAM, 112 (2018), 71. doi: 10.1007/s13398-016-0366-3
    [39] W. Takahashi, Nonlinear Functional Analysis, Yokohama: Yokohama Publishers, 2000.
    [40] S. Takahashi, W. Takahashi, Split common null point problem and shrinking projection method for generalized resolvents in two Banach spaces, J. Nonlinear Convex Anal., 17 (2016), 2171-2182.
    [41] W. Takahashi, N. C. Wong, J. C. Yao, Two generalized strong convergence theorems of Halpern's type in Hilbert spaces and applications, Taiwan. J. Math., 16 (2012), 1151-1172. doi: 10.11650/twjm/1500406684
    [42] K. K. Tan, H. K. Xu, Approximating fixed points of nonexpansive mappings by the Ishikawa iteration process, J. Math. Anal. Appl., 178 (1993), 301-308. doi: 10.1006/jmaa.1993.1309
    [43] D. V. Thong, P. Cholamjiak, Strong convergence of a forward-backward splitting method with a new step size for solving monotone inclusions, Comput. Appl. Math., 38 (2019), 94. doi: 10.1007/s40314-019-0855-z
    [44] R. Tibshirani, Regression shrinkage and selection via the lasso, J. Royal Statist. Soc., 58 (1996), 267-288. Available from: https://doi.org/10.1111/j.2517-6161.1996.tb02080.x.
    [45] P. Tseng, A modified forward-backward splitting method for maximal monotone mappings, SIAM J. Control Optim., 38 (2000), 431-446. doi: 10.1137/S0363012998338806
    [46] Y. Wang, F. Wang, Strong convergence of the forward-backward splitting method with multiple parameters in Hilbert spaces, Optimization, 67 (2018), 493-505. doi: 10.1080/02331934.2017.1411485
    [47] H. K. Xu, Inequalities in Banach spaces with applications, Nonlinear Anal., 16 (1991), 1127-1138. doi: 10.1016/0362-546X(91)90200-K
    [48] H. K. Xu, Iterative algorithms for nonlinear operators, J. London Math. Soc., 66 (2002), 240-256. doi: 10.1112/S0024610702003332
    [49] Z. B. Xu, G. F. Roach, Characteristic inequalities of uniformly convex and uniformly smooth Banach spaces, J. Math. Anal. Appl., 157 (1991), 189-210. doi: 10.1016/0022-247X(91)90144-O
    [50] J. Yang, H. Liu, Strong convergence result for solving monotone variational inequalities in Hilbert space, Numer. Algorithms, 80 (2019), 741-752. doi: 10.1007/s11075-018-0504-4
    [51] J. Yang, H. Liu, G. Li, Convergence of a subgradient extragradient algorithm for solving monotone variational inequalities, Numer. Algorithms, 84 (2020), 389-405. doi: 10.1007/s11075-019-00759-x
  • © 2021 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)