Research article

Modified Tseng's splitting algorithms for the sum of two monotone operators in Banach spaces

  • Received: 28 December 2020 Accepted: 25 February 2021 Published: 01 March 2021
  • MSC : 47H09, 47H10, 47J25

  • In this work, we introduce two modified Tseng's splitting algorithms with a new non-monotone adaptive step size for solving monotone inclusion problem in the framework of Banach spaces. Under some mild assumptions, we establish the weak and strong convergence results of the proposed algorithms. Moreover, we also apply our results to variational inequality problem, convex minimization problem and signal recovery, and provide several numerical experiments including comparisons with other related algorithms.

    Citation: Jun Yang, Prasit Cholamjiak, Pongsakorn Sunthrayuth. Modified Tseng's splitting algorithms for the sum of two monotone operators in Banach spaces[J]. AIMS Mathematics, 2021, 6(5): 4873-4900. doi: 10.3934/math.2021286




Let E be a real Banach space with its dual space E^* . In this paper, we study the so-called monotone inclusion problem:

\begin{eqnarray} \text{find}\; z\in E\; \text{such that}\; 0\in(A+B)z, \end{eqnarray} (1.1)

where A:E\to E^* is a single-valued mapping and B:E\to 2^{E^*} is a multi-valued mapping. The set of solutions of the problem (1.1) is denoted by (A+B)^{-1}0: = \{x\in E:0\in(A+B)x\} . This problem draws much attention since it stands at the core of many mathematical problems, such as variational inequalities, split feasibility problems and minimization problems, with applications in machine learning, statistical regression, image processing and signal recovery (see [17,33,44]).

A classical method for solving the problem (1.1) in a Hilbert space H is known as the forward-backward splitting algorithm (FBSA) [15,29], which generates an iterative sequence \{x_n\} by the following algorithm:

\begin{eqnarray} \begin{cases} \begin{array}{ll} x_1\in H,\\ x_{n+1} = J_{\lambda}^{B}(I-\lambda A)x_n,\; \forall n\geq1, \end{array} \end{cases} \end{eqnarray} (1.2)

where J_{\lambda}^{B}: = (I+\lambda B)^{-1} is the resolvent operator of an operator B . Here, I denotes the identity operator on H . It was proved that the sequence generated by (1.2) converges weakly to an element in (A+B)^{-1}0 under the assumption of the \alpha -cocoercivity of the operator A , that is,

\begin{eqnarray*} \langle Ax-Ay,x-y\rangle\geq\alpha\|Ax-Ay\|^2,\; \forall x,y\in H \end{eqnarray*}

and \lambda is chosen in (0,2\alpha) . In fact, FBSA includes, as special cases, the proximal point algorithm (when A = 0 ) [11,20,34] and the gradient method [18].
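To make the iteration (1.2) concrete, here is a minimal numerical sketch in the Hilbert space \mathbb{R}^n , assuming A is the gradient of a least-squares term (hence cocoercive with \alpha = 1/\|D\|^2 ) and B = \partial\|\cdot\|_1 , whose resolvent is soft-thresholding; the data D , y , the dimensions and the iteration count are illustrative only and are not taken from this paper.

    import numpy as np

    def fbsa(A, resolvent_B, x1, lam, n_iter=2000):
        # forward-backward splitting (1.2): x_{n+1} = J_lam^B (x_n - lam*A(x_n));
        # lam should lie in (0, 2*alpha) when A is alpha-cocoercive
        x = np.asarray(x1, dtype=float)
        for _ in range(n_iter):
            x = resolvent_B(x - lam * A(x), lam)
        return x

    rng = np.random.default_rng(0)
    D = rng.standard_normal((20, 50)); y = rng.standard_normal(20)
    A = lambda x: D.T @ (D @ x - y)                       # gradient of 0.5*||Dx - y||^2
    soft = lambda z, lam: np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)  # resolvent of d||.||_1
    alpha = 1.0 / np.linalg.norm(D, 2) ** 2               # cocoercivity constant of A
    x_sol = fbsa(A, soft, np.zeros(50), lam=1.5 * alpha)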

In order to get a strong convergence result, Takahashi et al. [41] introduced the following algorithm:

\begin{eqnarray} \begin{cases} \begin{array}{ll} x_1,u\in H,\\ x_{n+1} = \alpha_{n}u+(1-\alpha_{n})J_{\lambda_n}^{B}(x_{n}-\lambda_{n}Ax_n),\; \forall n\geq1, \end{array} \end{cases} \end{eqnarray} (1.3)

where A is an \alpha -cocoercive mapping on H . It was shown that if \{\lambda_n\}\subset(0,\infty) and \{\alpha_n\}\subset(0,1) satisfy the following assumptions:

\begin{eqnarray*} 0 < a\leq\lambda_n\leq b < 2\alpha,\; \sum\limits_{n = 1}^{\infty}|\lambda_{n+1}-\lambda_n| < \infty, \end{eqnarray*}
\begin{eqnarray*} \lim\limits_{n\to\infty}\alpha_n = 0\; \text{and}\; \sum\limits_{n = 1}^{\infty}\alpha_n = \infty, \end{eqnarray*}

    then the sequence \{x_n\} defined by (1.3) converges strongly to an element in (A+B)^{-1}0 .

    In 2016, Cholamjiak [12] introduced the following FBSA in a uniformly convex and q -uniformly smooth Banach space E :

    \begin{eqnarray} \begin{cases} \begin{array}{ll} x_1,u\in E,\\ x_{n+1} = \alpha_{n}u+\beta_nx_n+\gamma_nJ_{\lambda_n}^{B}(x_{n}-\lambda_{n}Ax_n),\; \forall n\geq1, \end{array} \end{cases} \end{eqnarray} (1.4)

    where J_{\lambda_n}^{B}: = (I+\lambda_n B)^{-1} is the resolvent operator of an m -accretive operator B and A is an \alpha -cocoercive mapping. He proved that the sequence generated by (1.4) converges strongly to a solution of the problem (1.1) under the following assumptions:

    \begin{eqnarray*} \{\alpha_n\},\{\beta_n\},\{\gamma_n\}\subset(0,1)\; \text{with}\; \alpha_n+\beta_n+\gamma_n = 1, \end{eqnarray*}
    \begin{eqnarray*} \lim\limits_{n\to\infty}\alpha_n = 0,\; \sum\limits_{n = 1}^{\infty}\alpha_n = \infty\; \text{and}\; \liminf\limits_{n\to\infty}\gamma_n > 0, \end{eqnarray*}
    \begin{eqnarray*} 0 < \liminf\limits_{n\to\infty}\lambda_n\leq \limsup\limits_{n\to\infty}\lambda_n < \big(\frac{\alpha q}{\kappa_q}\big)^{\frac{1}{q-1}}, \end{eqnarray*}

    where \kappa_q is the q -uniform smoothness coefficient of E .

    In recent years, the FBSA for solving the monotone inclusion problem (1.1), when A is \alpha -cocoercive, was studied and modified by many authors in various settings (see, e.g., [1,9,10,13,26,27,32,37,38,46]). It is important to remark that the \alpha -cocoercivity of the operator A is a strong assumption. To relax this assumption, Tseng [45] introduced the following so-called Tseng's splitting method:

    \begin{equation} \begin{cases} \begin{array}{ll} x_1\in H,\\ y_n = J_{\lambda_n}^{B}(x_n-\lambda_nAx_n),\\ x_{n+1} = y_n-\lambda_n(Ay_n-Ax_n),\; \; \forall n\geq1, \end{array} \end{cases} \end{equation} (1.5)

where A is monotone and L -Lipschitz continuous with L > 0 . It was proved that the sequence \{x_n\} generated by (1.5) converges weakly to an element in (A+B)^{-1}0 provided the step size \lambda_n is chosen in \Big(0, \frac{1}{L}\Big) . It is worth noting that Tseng's splitting method requires knowledge of the Lipschitz constant of the mapping. Unfortunately, Lipschitz constants are often unknown or difficult to approximate.
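For comparison, a minimal sketch of the iteration (1.5) in a Hilbert space is given below; A and resolvent_B are user-supplied callables and the fixed step lam is assumed to lie in (0, 1/L) . The example operator is an illustrative skew-symmetric affine map, which is monotone and Lipschitz continuous but not cocoercive.

    import numpy as np

    def tseng(A, resolvent_B, x1, lam, n_iter=2000):
        # Tseng's splitting (1.5): only monotonicity and L-Lipschitz continuity
        # of A are needed; the fixed step lam must lie in (0, 1/L)
        x = np.asarray(x1, dtype=float)
        for _ in range(n_iter):
            Ax = A(x)
            y = resolvent_B(x - lam * Ax, lam)        # forward-backward step
            x = y - lam * (A(y) - Ax)                 # correction step
        return x

    rng = np.random.default_rng(0)
    K = rng.standard_normal((30, 30)); M = K - K.T    # skew-symmetric: monotone, not cocoercive
    q = rng.standard_normal(30)
    A = lambda x: M @ x + q                           # Lipschitz constant L = ||M||
    proj = lambda z, lam: np.maximum(z, 0.0)          # resolvent of the normal cone of R^n_+
    x_sol = tseng(A, proj, np.ones(30), lam=0.5 / np.linalg.norm(M, 2))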

    Very recently, Shehu [37] extended Tseng's result to Banach spaces. He proposed the following iterative process for approximating a solution of the problem (1.1) in a 2 -uniformly convex Banach space E which is also uniformly smooth:

    \begin{equation} \begin{cases} \begin{array}{ll} x_1\in E,\\ y_n = J_{\lambda_n}^{B}J^{-1}(Jx_n-\lambda_nAx_n),\\ x_{n+1} = Jy_n-\lambda_n(Ay_n-Ax_n),\; \; \forall n\geq1, \end{array} \end{cases} \end{equation} (1.6)

where A:E\to E^* is monotone and L -Lipschitz continuous, J_{\lambda_n}^{B}: = (J+\lambda_nB)^{-1}J is the resolvent of B and J is the duality mapping from E into E^* . He obtained a weak convergence theorem for the problem (1.1) provided the step size \lambda_n is chosen in \Big(0, \frac{1}{\sqrt{2\mu}\kappa L}\Big) , where \mu is the 2 -uniform convexity constant of E and \kappa is the 2 -uniform smoothness constant of E^* . At the same time, he also proposed a variant of (1.6) with a linesearch for solving the problem (1.1). It is known that any algorithm with a linesearch needs an inner loop with a stopping criterion at each iteration.

In this paper, motivated by Shehu [37], we propose two modifications of Tseng's splitting method with non-monotone adaptive step sizes for solving the problem (1.1) in the framework of Banach spaces. The step sizes of our methods do not require prior knowledge of the Lipschitz constant of the operator, and no linesearch procedure is needed. The remainder of this paper is organized as follows: We recall some definitions and lemmas in Section 2. Our methods are presented and analyzed in Section 3. Theoretical applications to the variational inequality problem and the convex minimization problem are considered in Section 4 and finally, in Section 5, we provide some numerical experiments to illustrate the behaviour of our methods.

Let \mathbb{R} and \mathbb{N} be the set of real numbers and the set of positive integers, respectively. Let E be a real Banach space with its dual space E^* . We denote by \langle x, f\rangle the value of a functional f\in E^* at x\in E , that is, \langle x, f\rangle = f(x) . For a sequence \{x_n\} in E , the strong convergence and the weak convergence of \{x_n\} to x\in E are denoted by x_n\to x and x_n\rightharpoonup x , respectively. Let S_E = \{x\in E:\|x\| = 1\} . The space E is said to be smooth if the limit

    \begin{eqnarray} \lim\limits_{t\to 0}\frac{\|x+ty\|-\|x\|}{t} \end{eqnarray} (2.1)

    exists for all x, y\in S_E . The space E is said to be uniformly smooth if the limit (2.1) converges uniformly in x, y\in S_E . It is said to be strictly convex if \|(x+y)/2\| < 1 whenever x, y\in S_E and x\neq y . The space E is said to be uniformly convex if and only if \delta_E(\epsilon) > 0 for all \epsilon\in (0, 2] , where \delta_E is the modulus of convexity of E defined by

    \begin{equation*} \begin{array}{lcl} \delta_E(\epsilon) = \inf\bigg\{1-\frac{\|x+y\|}{2}:x,y\in S_E, \|x-y\|\geq\epsilon\bigg\} \end{array} \end{equation*}

    for all \epsilon\in[0, 2] . Let p\geq 2 . The space E is said to be p -uniformly convex if there is a c > 0 such that \delta_E(\epsilon)\geq c\epsilon^p for all \epsilon\in (0, 2] . Let 1 < q\leq 2 . The space E is said to be q -uniformly smooth if there exists a \kappa > 0 such that \rho_E(t)\leq \kappa t^q for all t > 0 , where \rho_E is the modulus of smoothness of E defined by

    \begin{equation*} \begin{array}{lcl} \rho_E(t) = \sup\bigg\{\frac{\|x+t y\|+\|x- t y\|}{2}-1:x,y\in S_E\bigg\} \end{array} \end{equation*}

for all t\geq0 . Let 1 < q\leq 2 < p < \infty with \frac{1}{p}+\frac{1}{q} = 1 . It is observed that every p -uniformly convex ( q -uniformly smooth) space is a uniformly convex (uniformly smooth) space. It is known that E is p -uniformly convex ( q -uniformly smooth) if and only if its dual E^* is q -uniformly smooth ( p -uniformly convex) (see [2]). If E is uniformly convex, then E is reflexive and strictly convex, and if E is uniformly smooth, then E is reflexive and smooth (see [14]). Moreover, we know that for every p > 1 , L_p and \ell_p are \min\{p, 2\} -uniformly smooth and \max\{p, 2\} -uniformly convex, while every Hilbert space is 2 -uniformly smooth and 2 -uniformly convex (see [4,23,47] for more details).

    Definition 2.1. The normalized duality mapping J:E\to 2^{E^*} is defined by

    \begin{eqnarray*} Jx = \{f\in E^*:\langle x,f\rangle = \|x\|^2 = \|f\|^2\},\; \forall x\in E, \end{eqnarray*}

    where \langle\cdot, \cdot\rangle denotes the duality pairing between E and E^* .

If E is a Hilbert space, then J = I is the identity mapping on E . It is known that E is smooth if and only if J is single-valued from E into E^* , and if E is reflexive, smooth and strictly convex, then J^{-1} is single-valued, one-to-one, surjective and it is the duality mapping from E^* into E . Moreover, if E is uniformly smooth, then J is norm-to-norm uniformly continuous on bounded subsets of E (see [2,14] for more details). A duality mapping J from a smooth Banach space E into E^* is said to be weakly sequentially continuous if for any sequence \{x_n\}\subset E , x_n\rightharpoonup x implies Jx_n\rightharpoonup^* Jx .

    Lemma 2.2. [39] Let E be a smooth Banach space and J be the duality mapping on E . Then \langle x-y, Jx-Jy\rangle\geq0 for all x, y\in E . Further, if E is strictly convex and \langle x-y, Jx-Jy\rangle = 0 , then x = y .

    Definition 2.3. A mapping A:E\to E^* is said to be:

    \bullet \alpha -cocoercive if there exists a constant \alpha > 0 such that \langle x-y, Ax-Ay\rangle\geq\alpha\|Ax-Ay\|^2 for all x, y\in E ;

    \bullet monotone if \langle x-y, Ax-Ay\rangle\geq0 for all x, y\in E ;

    \bullet L -Lipschitz continuous if there exists a constant L > 0 such that \|Ax-Ay\|\leq L\|x-y\| for all x, y\in E ;

    \bullet hemicontinuous if for each x, y\in E , the mapping f:[0, 1]\to E^* defined by f(t) = A(tx+(1-t)y) is continuous with respect to the weak ^* topology of E^* .

    Remark 2.4. It is easy to see that if A is cocoercive, then A is monotone and Lipschitz continuous but the converse is not true in general.

    The next lemma can be found in [49] (see also [47]).

    Lemma 2.5. (i) Let E be a 2 -uniformly smooth Banach space. Then there exists a constant \kappa > 0 such that

    \begin{equation*} \begin{array}{lcl} \|x-y\|^2\leq\|x\|^2-2\langle y,Jx\rangle+\kappa\|y\|^2,\; \forall x,y\in E. \end{array} \end{equation*}

    (ii) Let E be a 2 -uniformly convex Banach space. Then there exists a constant c > 0 such that

    \begin{equation*} \begin{array}{lcl} \|x-y\|^2\geq\|x\|^2-2\langle y,Jx\rangle+c\|y\|^2,\; \forall x,y\in E. \end{array} \end{equation*}

    Remark 2.6. It is well-known that \kappa = c = 1 whenever E is a Hilbert space. Hence these inequalities reduce to the following well-known polarization identity:

    \begin{equation*} \begin{array}{lcl} \|x-y\|^2 = \|x\|^2-2\langle x,y\rangle+\|y\|^2. \end{array} \end{equation*}

    Moreover, we refer to [49] for the exact values of constants \kappa and c .

    Next, we recall the following Lyapunov function which was introduced in [3]:

    Definition 2.7. Let E be a smooth Banach space. The Lyapunov functional \phi:E\times E\to\mathbb{R} is defined by

    \begin{eqnarray} \phi(x,y) = \|x\|^2-2\langle x,Jy\rangle+\|y\|^2,\; \forall x,y\in E. \end{eqnarray} (2.2)

    If E is a Hilbert space, then \phi(x, y) = \|x-y\|^2 for all x, y\in E . In addition, the Lyapunov function \phi has the following properties:

    \begin{eqnarray} (\|x\|-\|y\|)^2\leq\phi(x,y)\leq(\|x\|+\|y\|)^2,\; \forall x,y\in E. \end{eqnarray} (2.3)
\begin{eqnarray} \phi(x,J^{-1}(\alpha Jy+(1-\alpha)Jz))\leq \alpha\phi(x,y)+(1-\alpha)\phi(x,z),\; \forall x,y,z\in E,\; \alpha\in[0,1]. \end{eqnarray} (2.4)
    \begin{eqnarray} \phi(x,y) = \phi(x,z)-\phi(y,z)+2\langle y-x,Jy-Jz\rangle,\; \forall x,y,z\in E. \end{eqnarray} (2.5)
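For completeness, identity (2.5) can be checked directly from the definition (2.2), using \langle y, Jy\rangle = \|y\|^2 :

\begin{eqnarray*} \phi(x,z)-\phi(y,z)+2\langle y-x,Jy-Jz\rangle& = &\|x\|^2-\|y\|^2-2\langle x,Jz\rangle+2\langle y,Jz\rangle\\ &&+2\langle y,Jy\rangle-2\langle y,Jz\rangle-2\langle x,Jy\rangle+2\langle x,Jz\rangle\\ & = &\|x\|^2-2\langle x,Jy\rangle+\|y\|^2 = \phi(x,y). \end{eqnarray*}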

    Lemma 2.8. [6] Let E be a 2 -uniformly convex Banach space, then there exists a constant c > 0 such that

    \begin{eqnarray*} c\|x-y\|^2\leq \phi(x,y), \end{eqnarray*}

    where c is a constant in Lemma 2.5 (ii) .

    We make use of the following functional V:E\times E^*\to\mathbb{R} studied in [3]:

    \begin{eqnarray} V(x,x^*) = \|x\|^2-2\langle x,x^*\rangle+\|x^*\|^2,\; \forall x\in E,\; x^*\in E^*. \end{eqnarray} (2.6)

    Obviously, V(x, x^*) = \phi(x, J^{-1}x^*) for all x\in E and x^*\in E^* .

    Lemma 2.9. [3] Let E be a reflexive, strictly convex and smooth Banach space. Then the following statement holds:

    \begin{eqnarray*} V(x,x^*)+2\langle J^{-1}x^*-x,y^*\rangle\leq V(x,x^*+y^*),\; \forall x\in E,\; x^*,y^*\in E^*. \end{eqnarray*}

    Let E be a reflexive, strictly convex and smooth Banach space. Let C be a closed and convex subset of E . Then for any x\in E , there exists a unique element z\in C such that

    \begin{eqnarray*} \phi(z,x) = \min\limits_{y\in C}\phi(y,x). \end{eqnarray*}

    Such a mapping \Pi_C:E\to C defined by z = \Pi_C(x) is called the generalized projection of E onto C . If E is a Hilbert space, then \Pi_C is coincident with the metric projection denoted by P_C .

    Lemma 2.10. [3] Let E be a reflexive, strictly convex and smooth Banach space and C be a closed and convex subset of E . Let x\in E and z\in C . Then the following statements hold:

    (i) z = \Pi_C(x) if and only if \langle y-z, Jx-Jz\rangle\leq0 , \forall y\in C .

    (ii) \phi(y, \Pi_{C}(x))+\phi(\Pi_{C}(x), x)\leq \phi(y, x) , \forall y\in C .

    Lemma 2.11. [25] Let C be a closed and convex subset of a smooth and uniformly convex Banach space E . Let \{x_n\} be a sequence in E such that \phi(p, x_{n+1})\leq\phi(p, x_n) for all p\in C and n\geq1 . Then the sequence \{\Pi_{C}(x_n)\} converges strongly to some element x^*\in C .

    Let B:E\to 2^{E^*} be a multi-valued mapping. The effective domain of B is denoted by D(B) = \{x\in E : Bx\neq\emptyset\} and the range of B is also denoted by R(B) = \bigcup\{Bx:x\in D(B)\} . The set of zeros of B is denoted by B^{-1}0 = \{x\in D(B): 0\in Bx\} . A multi-valued mapping B is said to be monotone if

    \begin{eqnarray*} \langle x-y,u-v\rangle\geq0,\; \; \forall x,y\in D(B),\; u\in Bx\; \text{and}\; v\in By. \end{eqnarray*}

    A monotone operator B on E is said to be maximal if its graph G(B) = \{(x, y)\in E\times E^*:x\in D(B), y\in Bx\} is not properly contained in the graph of any other monotone operator on E . In other words, the maximality of B is equivalent to R(J+\lambda B) = E^* for all \lambda > 0 (see [5, Theorem 1.2]). It is known that if B is maximal monotone, then B^{-1}0 is closed and convex (see [39]).

    For a maximal monotone operator B , we define the resolvent of B by J_{\lambda}^{B}(x) = (J+\lambda B)^{-1}Jx for x\in E and \lambda > 0 . It is also known that B^{-1}0 = F(J_{\lambda}^{B}) .

    Lemma 2.12. [5] Let E be a reflexive Banach space. Let A:E\to E^* be a monotone, hemicontinuous and bounded operator and B:E\to 2^{E^*} be a maximal monotone operator. Then A + B is maximal monotone.

Lemma 2.13. ([48]) Assume that \{a_n\} is a sequence of nonnegative real numbers such that

    \begin{eqnarray*} a_{n+1}\leq (1-\gamma_n)a_n+\gamma_n\delta_n,\; \forall n\geq1, \end{eqnarray*}

where \{\gamma_n\} is a sequence in (0, 1) and \{\delta_n\} is a sequence of real numbers such that

    (i) \sum_{n = 1}^{\infty}\gamma_n = \infty ;

    (ii) \limsup_{n\to\infty}\delta_n\leq0 or \sum_{n = 1}^{\infty}|\gamma_n\delta_n| < \infty .

    Then \lim_{n\to\infty}a_n = 0 .

Lemma 2.14. ([30]) Let \{\Gamma_n\} be a sequence of real numbers that does not decrease at infinity in the sense that there exists a subsequence \{\Gamma_{n_{i}}\} of \{\Gamma_n\} which satisfies \Gamma_{n_{i}} < \Gamma_{n_{i}+1} for all i\in\mathbb{N} . Define the sequence \{\sigma(n)\} of integers as follows:

    \begin{eqnarray*} \sigma(n) = \max\{k\leq n:\Gamma_k < \Gamma_{k+1}\}, \end{eqnarray*}

    for all n\geq n_0 (for some n_0 large enough). Then \{\sigma(n)\}_{n\geq n_0} is a non-decreasing sequence such that \lim_{n\to\infty}\sigma(n) = \infty , and it holds that

    \begin{eqnarray*} \Gamma_{\sigma(n)}\leq\Gamma_{\sigma(n)+1}\; \mathit{\text{and}}\; \Gamma_n\leq\Gamma_{\sigma(n)+1}. \end{eqnarray*}

    Lemma 2.15. ([42]) Assume that \{\lambda_n\} and \{\theta_n\} are two nonnegative real sequences such that

    \begin{eqnarray*} \lambda_{n+1}\leq \lambda_n+\theta_n,\; \forall n\geq1. \end{eqnarray*}

    If \sum_{n = 1}^\infty \theta_n < \infty , then \lim\limits_{n\rightarrow\infty}\lambda_n exists.

In this section, we introduce two modified Tseng's splitting algorithms for solving the monotone inclusion problem in Banach spaces. In order to prove the convergence results of these algorithms, we need the following assumptions:

Assumption 3.1. (A1) The Banach space E is a real 2 -uniformly convex and uniformly smooth Banach space.

(A2) The mapping A:E\to E^* is monotone and L -Lipschitz continuous, and B:E\to 2^{E^*} is maximal monotone.

    (A3) The solution set of the problem (1.1) is nonempty, that is, (A+B)^{-1}0\neq\emptyset .

    Algorithm 1: Tseng type splitting algorithm for monotone inclusion problem
    Step 0. Given \lambda_1 > 0 and \mu\in\big(0, \sqrt{\frac{c}{\kappa}}\big) . Choose a nonnegative real sequence \{\theta_n\} such that \sum_{n=1}^\infty \theta_n < \infty . Let x_1\in E be arbitrary. Set n=1 .
    Step 1. Compute
    \begin{eqnarray} y_n = J_{\lambda_n}^{B}J^{-1}(Jx_n-\lambda_nAx_n). \end{eqnarray}\;\;\;\;\;\;\;\;\;\;\;\;(3.1)
    If x_n=y_n , then stop and x_n is a solution of the problem (1.1). Otherwise, go to Step 2.
    Step 2. Compute
    \begin{eqnarray} x_{n+1} = J^{-1}(Jy_n-\lambda_n(Ay_n-Ax_n)), \end{eqnarray} \;\;\;\;\;\;\;\;\;\;\;\;(3.2)
where the step sizes are adaptively updated as follows:
    \begin{equation} \begin{array}{lcl} \lambda_{n+1} = \begin{cases} \begin{array}{ll} \min\bigg\{\cfrac{\mu\|x_n-y_n\|}{\|Ax_n-Ay_n\|},\lambda_n+\theta_n\bigg\} \mbox{if}\; \; Ax_n-Ay_n\neq0,\\ \lambda_n+\theta_n \;\;\;\;\;\;\;\;\mbox{otherwise.} \end{array} \end{cases} \end{array} \end{equation} \;\;\;\;\;\;\;\;\;\;\;\;(3.3)
    Set n:=n+1 and go to Step 1.

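The following is a minimal sketch of Algorithm 1 specialized to a real Hilbert space, where J = J^{-1} = I and \kappa = c = 1 (so \mu is taken in (0,1) ); the callables A , resolvent_B and theta as well as the iteration cap and tolerance are user-supplied and illustrative.

    import numpy as np

    def algorithm1(A, resolvent_B, x1, lam1, mu, theta, n_iter=5000, tol=1e-10):
        x, lam = np.asarray(x1, dtype=float), float(lam1)
        for n in range(1, n_iter + 1):
            Ax = A(x)
            y = resolvent_B(x - lam * Ax, lam)            # Step 1, Eq. (3.1)
            if np.linalg.norm(x - y) <= tol:
                return x                                  # x_n = y_n, so x_n solves (1.1)
            Ay = A(y)
            x_next = y - lam * (Ay - Ax)                  # Step 2, Eq. (3.2)
            d = np.linalg.norm(Ax - Ay)
            if d > 0:                                     # step-size update, Eq. (3.3)
                lam = min(mu * np.linalg.norm(x - y) / d, lam + theta(n))
            else:
                lam = lam + theta(n)
            x = x_next
        return x

For instance, theta = lambda n: 100.0/n**1.1 corresponds to the summable choice \theta_n = 100/n^{1.1} used in Section 5, while theta = lambda n: 0.0 yields a non-increasing step size.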

    Lemma 3.2. Assume that Assumption 3.1 holds. Let \{x_n\} , \{y_n\} and \{\lambda_n\} be sequences generated by Algorithm 1. Then the following statements hold:

    (i) If x_n = y_n for all n\in\mathbb{N} , then x_n\in(A+B)^{-1}0 .

    (ii) \lim\limits_{n\rightarrow\infty}\lambda_n = \lambda\in \Big[\min\{{\frac{\mu}{L }, \lambda_1 }\}, \lambda_1+\theta\Big] , where \theta = \sum_{n = 1}^\infty \theta_n . Moreover

    \begin{eqnarray*} \|Ax_n-Ay_n\|\leq\frac{\mu}{\lambda_{n+1}}\|x_n-y_n\|,\; \forall n\geq1. \end{eqnarray*}

    Proof. (i) If x_n = y_n , then x_n = J_{\lambda_n}^{B}J^{-1}(Jx_n-\lambda_nAx_n) . It follows that x_n = (J+\lambda_n B)^{-1} J\circ J^{-1}(J-\lambda_n A)x_n , that is, Jx_n-\lambda_n Ax_n\in Jx_n+\lambda_n Bx_n , which implies that 0\in(A+B)x_n . Hence x_n\in (A+B)^{-1}0 .

    (ii) In the case Ax_n-Ay_n\neq0 , using the Lipschitz continuity of A , we have

    \begin{eqnarray*} \cfrac{\mu\|x_n-y_n\|}{\|Ax_n-Ay_n\|}\geq\frac{\mu\|x_n-y_n\|}{L\|x_n-y_n\|} = \frac{\mu}{L}. \end{eqnarray*}

From (3.3) and mathematical induction, the sequence \{\lambda_n\} is bounded above by \lambda_1+\theta and below by \min\{{\frac{\mu}{L}, \lambda_1}\} . By Lemma 2.15, \lim\limits_{n\rightarrow\infty}\lambda_n exists and we denote \lambda = \lim\limits_{n\rightarrow\infty}\lambda_n. It is obvious that \lambda\in\Big[\min\{{\frac{\mu}{L }, \lambda_1 }\}, \lambda_1+\theta\Big] . By the definition of \lambda_n , we have

    \begin{eqnarray*} \lambda_{n+1} = \min\bigg\{\cfrac{\mu\|x_n-y_n\|}{\|Ax_n-Ay_n\|},\lambda_n+\theta_n\bigg\} \leq\frac{\mu\|x_n-y_n\|}{\|Ax_n-Ay_n\|}. \end{eqnarray*}

    This implies that

    \begin{eqnarray} \|Ax_n-Ay_n\|\leq\frac{\mu}{\lambda_{n+1}}\|x_n-y_n\|,\; \forall n\geq1. \end{eqnarray} (3.4)

Lemma 3.3. Assume that Assumption 3.1 holds. Let \{x_n\} be a sequence generated by Algorithm 1. Then

    \begin{eqnarray} \phi(z,x_{n+1})\leq \phi(z,x_n)-\bigg(1-\cfrac{\kappa\mu^2}{c}\frac{\lambda_{n}^{2}}{\lambda_{n+1}^{2}}\bigg)\phi(y_n,x_n),\; \; \forall z\in(A+B)^{-1}0, \end{eqnarray} (3.5)

    where c and \kappa are constants in Lemma 2.5.

    Proof. Let z\in(A+B)^{-1}0 . From Lemma 2.5 (i) and (2.5), we have

    \begin{eqnarray} \phi(z,x_{n+1})& = &\phi(z,J^{-1}(Jy_n-\lambda_n(Ay_n-Ax_n)))\\ & = &V(z,Jy_n-\lambda_n(Ay_n-Ax_n))\\ & = &\|z\|^2-2\langle z,Jy_n-\lambda_n(Ay_n-Ax_n)\rangle+\|Jy_n-\lambda_n(Ay_n-Ax_n)\|^2\\ &\leq&\|z\|^2-2\langle z,Jy_n\rangle+2\lambda_n\langle z,Ay_n-Ax_n\rangle+\|Jy_n\|^2-2\lambda_n\langle y_n,Ay_n-Ax_n\rangle+\kappa\|\lambda_n(Ay_n-Ax_n)\|^2\\ & = &\|z\|^2-2\langle z,Jy_n\rangle+\|y_n\|^2-2\lambda_n\langle y_n-z,Ay_n-Ax_n\rangle+\kappa\lambda_{n}^{2}\|Ay_n-Ax_n\|^2\\ & = &\phi(z,y_n)-2\lambda_n\langle y_n-z,Ay_n-Ax_n\rangle+\kappa\lambda_{n}^{2}\|Ay_n-Ax_n\|^2\\ & = &\phi(z,x_n)-\phi(y_n,x_n)+2\langle y_n-z,Jy_n-Jx_n\rangle-2\lambda_n\langle y_n-z,Ay_n-Ax_n\rangle+\kappa\lambda_{n}^{2}\|Ay_n-Ax_n\|^2\\ & = &\phi(z,x_n)-\phi(y_n,x_n)+\kappa\lambda_{n}^{2}\|Ay_n-Ax_n\|^2-2\langle y_n-z, Jx_n-Jy_n-\lambda_n(Ax_n-Ay_n)\rangle. \end{eqnarray} (3.6)

    Combining (3.4) and (3.6), we have

    \begin{eqnarray} \phi(z,x_{n+1})&\leq&\phi(z,x_n)-\phi(y_n,x_n)+\kappa\lambda_{n}^{2}\frac{\mu^2}{\lambda_{n+1}^{2}}\|y_n-x_n\|^2\\ &&-2\langle y_n-z,Jx_n-Jy_n-\lambda_n(Ax_n-Ay_n)\rangle. \end{eqnarray} (3.7)

    By Lemma 2.8, we have

    \begin{eqnarray} \phi(z,x_{n+1})&\leq&\phi(z,x_n)-\bigg(1-\frac{\kappa\mu^2}{c}\frac{\lambda_{n}^{2}}{\lambda_{n+1}^{2}}\bigg)\phi(y_n,x_n)\\ &&-2\langle y_n-z,Jx_n-Jy_n-\lambda_n(Ax_n-Ay_n)\rangle. \end{eqnarray} (3.8)

    Now, we will show that

    \begin{eqnarray*} \langle y_n-z,Jx_n-Jy_n-\lambda_n(Ax_n-Ay_n)\rangle\geq0. \end{eqnarray*}

    From the definition of y_n , we note that Jx_n-\lambda_n Ax_n\in Jy_n+\lambda_n By_n . Since B is maximal monotone, there exists v_n\in By_n such that Jx_n-\lambda_n Ax_n = Jy_n+\lambda_nv_n , we have

    \begin{eqnarray} v_n = \frac{1}{\lambda_n}\big(Jx_n-Jy_n-\lambda_n Ax_n\big). \end{eqnarray} (3.9)

    Since 0\in (A+B)z and Ay_n+v_n\in (A+B)y_n , it follows from Lemma 2.12 that A+B is maximal monotone. Hence

    \begin{eqnarray} \langle y_n-z,Ay_n+v_n\rangle\geq 0. \end{eqnarray} (3.10)

    Substituting (3.9) into (3.10), we have

    \begin{eqnarray*} \frac{1}{\lambda_n}\langle y_n-z,Jx_n-Jy_n-\lambda_n Ax_n+\lambda_nAy_n\rangle\geq 0. \end{eqnarray*}

    Hence

    \begin{eqnarray} \langle y_n-z,Jx_n-Jy_n-\lambda_n(Ax_n-Ay_n)\rangle\geq0. \end{eqnarray} (3.11)

Combining (3.8) and (3.11), we obtain (3.5). This completes the proof.

    Theorem 3.4. Assume that Assumption 3.1 holds. Suppose, in addition, that J is weakly sequentially continuous on E . Then the sequence \{x_n\} generated by Algorithm 1 converges weakly to an element in (A+B)^{-1}0 .

    Proof. Since \lim_{n\to\infty}\lambda_n exists and \mu\in\big(0, \sqrt{\frac{c}{\kappa}}\big) , it follows that \lim_{n\to\infty}\bigg(1-\frac{\kappa\mu^2}{c}\frac{\lambda_{n}^{2}}{\lambda_{n+1}^{2}}\bigg) = 1-\frac{\kappa\mu^2}{c} > 0 . Thus there exists n_0\in\mathbb{N} such that

    \begin{eqnarray} 1-\frac{\kappa\mu^2}{c}\frac{\lambda_{n}^{2}}{\lambda_{n+1}^{2}} > 0,\; \; \forall n\geq n_0. \end{eqnarray} (3.12)

    Combining (3.5) and (3.12), we have

    \begin{eqnarray*} \phi(z,x_{n+1})\leq \phi(z,x_n),\; \; \forall n\geq n_0. \end{eqnarray*}

This shows that \lim_{n\to\infty}\phi(z, x_{n}) exists and hence \{\phi(z, x_{n})\} is bounded. Applying Lemma 2.8, we see that \{x_n\} is also bounded. From (3.5), we have

    \begin{eqnarray} \bigg(1-\frac{\kappa\mu^2}{c}\frac{\lambda_{n}^{2}}{\lambda_{n+1}^{2}}\bigg)\phi(y_n,x_n)\leq \phi(z,x_n)-\phi(z,x_{n+1}). \end{eqnarray} (3.13)

    Thus we have

    \begin{eqnarray*} \lim\limits_{n\to\infty}\phi(y_n,x_n) = 0. \end{eqnarray*}

    Applying Lemma 2.8, we also have

    \begin{eqnarray} \lim\limits_{n\to\infty}\|x_n-y_n\| = 0. \end{eqnarray} (3.14)

    Since J is norm-to-norm uniformly continuous on bounded subsets of E , we have

    \begin{eqnarray} \lim\limits_{n\to\infty}\|Jx_n-Jy_n\| = 0. \end{eqnarray} (3.15)

    Using the fact that A is Lipschitz continuous, we have

    \begin{eqnarray} \lim\limits_{n\to\infty}\|Ax_n-Ay_n\| = 0. \end{eqnarray} (3.16)

By the boundedness of \{x_n\} , there exists a subsequence \{x_{n_k}\} of \{x_n\} such that x_{n_k}\rightharpoonup x^*\in E . From (3.14), we have y_{n_k}\rightharpoonup x^* . We will show that x^*\in(A+B)^{-1}0 . Let (v, w)\in G(A+B) ; then w-Av\in Bv . From the definition of y_{n_k} , we note that

    \begin{eqnarray*} Jx_{n_k}-\lambda_{n_k} Ax_{n_k}\in Jy_{n_k}+\lambda_{n_k} By_{n_k}, \end{eqnarray*}

    which implies that

    \begin{eqnarray*} \frac{1}{\lambda_{n_k}}\big(Jx_{n_k}-Jy_{n_k}-\lambda_{n_k} Ax_{n_k}\big)\in By_{n_k}. \end{eqnarray*}

    By the maximal monotonicity of B , we have

    \begin{eqnarray*} \bigg\langle v-y_{n_k},w-Av-\frac{1}{\lambda_{n_k}}\big(Jx_{n_k}-Jy_{n_k}-\lambda_{n_k}Ax_{n_k}\big)\bigg\rangle\geq0 \end{eqnarray*}

    and by the monotonicity of A , we have

    \begin{eqnarray*} \langle v-y_{n_k},w\rangle&\geq&\bigg\langle v-y_{n_k},Av+\frac{1}{\lambda_{n_k}}\big(Jx_{n_k}-Jy_{n_k}-\lambda_{n_k}Ax_{n_k})\bigg\rangle\nonumber\\ & = &\langle v-y_{n_k},Av-Ax_{n_k}\rangle+\frac{1}{\lambda_{n_k}}\langle v-y_{n_k},Jx_{n_k}-Jy_{n_k}\rangle\nonumber\\ & = &\langle v-y_{n_k},Av-Ay_{n_k}\rangle+\langle v-y_{n_k},Ay_{n_k}-Ax_{n_k}\rangle+\frac{1}{\lambda_{n_k}}\langle v-y_{n_k},Jx_{n_k}-Jy_{n_k}\rangle\nonumber\\ &\geq&\langle v-y_{n_k},Ay_{n_k}-Ax_{n_k}\rangle+\frac{1}{\lambda_{n_k}}\langle v-y_{n_k},Jx_{n_k}-Jy_{n_k}\rangle. \end{eqnarray*}

    Since \lim_{k\to\infty}\lambda_{n_k} = \lambda > 0 and y_{n_k}\rightharpoonup x^* , it follows from (3.15) and (3.16) that

    \begin{eqnarray*} \langle v-x^*,w\rangle\geq0. \end{eqnarray*}

By the maximal monotonicity of A+B , we get 0\in(A+B)x^* , that is, x^*\in(A+B)^{-1}0 . Note that (A+B)^{-1}0 is closed and convex. Put u_n = \Pi_{(A+B)^{-1}0}(x_n) . It follows from Lemma 2.11 that there exists x^*\in (A+B)^{-1}0 such that u_n\to x^* . Finally, we show that x_n\rightharpoonup x^* . Let \{x_{n_k}\} be a subsequence of \{x_n\} such that x_{n_k}\rightharpoonup \hat{x}\in (A+B)^{-1}0 . Then we have

    \begin{eqnarray*} \langle\hat{x}-u_{n_k},Jx_{n_k}-Ju_{n_k}\rangle\leq0 \end{eqnarray*}

    for all k\in\mathbb{N} . Since u_n\to x^* , x_{n_k}\rightharpoonup \hat{x} and J is weakly sequentially continuous, we have

    \begin{eqnarray*} \langle\hat{x}-x^*,J\hat{x}-Jx^*\rangle\leq0. \end{eqnarray*}

    By the strict monotonicity of J , we obtain x^* = \hat{x} . In summary, we have shown that every subsequence of \{x_n\} has a further subsequence which converges weakly to x^* . We conclude that x_n\rightharpoonup x^* = \lim_{n\to\infty}\Pi_{(A+B)^{-1}0}(x_n) . This completes the proof.

    Algorithm 2: Halpern-Tseng type splitting algorithm for monotone inclusion problem
    Step 0. Given \lambda_1 > 0 and \mu\in\big(0, \sqrt{\frac{c}{\kappa}}\big) . Choose a nonnegative real sequence \{\theta_n\} such that \sum_{n=1}^\infty \theta_n < \infty . Let u, x_1\in E be arbitrary. Set n=1 .
    Step 1. Compute
    \begin{eqnarray} y_n = J_{\lambda_n}^{B}J^{-1}(Jx_n-\lambda_nAx_n). \end{eqnarray} \;\;\;\;\;\;\;\;\;(3.17)
    If x_n=y_n , then stop and x_n is a solution of the problem (1.1). Otherwise, go to Step 2.
    Step 2. Compute
    \begin{eqnarray} z_n = J^{-1}(Jy_n-\lambda_n(Ay_n-Ax_n)). \end{eqnarray} \;\;\;\;\;\;\;\;\;(3.18)
    Step 3. Compute
    \begin{eqnarray} x_{n+1} = J^{-1}(\alpha_nJu+(1-\alpha_n) Jz_n), \end{eqnarray} \;\;\;\;\;\;\;\;\;(3.19)
    where the step sizes are adaptively updated as follows:
    \begin{equation} \begin{array}{lcl} \lambda_{n+1} = \begin{cases} \begin{array}{ll} \min\bigg\{\cfrac{\mu\|x_n-y_n\|}{\|Ax_n-Ay_n\|},\lambda_n+\theta_n\bigg\} \mbox{if}\; \; Ax_n-Ay_n\neq0,\\ \lambda_n+\theta_n \;\;\;\;\;\;\;\;\;\mbox{otherwise.} \end{array} \end{cases} \end{array} \end{equation} \;\;\;\;\;\;\;\;\;(3.20)
    Set n:=n+1 and go to Step 1.

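Analogously, a sketch of Algorithm 2 in a Hilbert space ( J = I ) is given below; alpha is a callable n\mapsto\alpha_n with \alpha_n\to0 and \sum_{n}\alpha_n = \infty (for example alpha = lambda n: 1.0/(n+1) ), and the remaining inputs are as in the sketch of Algorithm 1.

    import numpy as np

    def algorithm2(A, resolvent_B, u, x1, lam1, mu, theta, alpha, n_iter=5000):
        x, lam = np.asarray(x1, dtype=float), float(lam1)
        u = np.asarray(u, dtype=float)
        for n in range(1, n_iter + 1):
            Ax = A(x)
            y = resolvent_B(x - lam * Ax, lam)            # Step 1, Eq. (3.17)
            Ay = A(y)
            z = y - lam * (Ay - Ax)                       # Step 2, Eq. (3.18)
            d = np.linalg.norm(Ax - Ay)
            if d > 0:                                     # step-size update, Eq. (3.20)
                lam_next = min(mu * np.linalg.norm(x - y) / d, lam + theta(n))
            else:
                lam_next = lam + theta(n)
            x = alpha(n) * u + (1 - alpha(n)) * z         # Step 3, Eq. (3.19), Halpern step
            lam = lam_next
        return x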

    Theorem 3.5. Assume that Assumption 3.1 holds. If \{\alpha_n\}\subset(0, 1) with \lim_{n\to\infty}\alpha_n = 0 and \sum_{n = 1}^{\infty}\alpha_n = \infty , then the sequence \{x_n\} generated by Algorithm 2 converges strongly to x^*\in(A+B)^{-1}0 .

    Proof. We will show that \{x_n\} is bounded. Let z\in(A+B)^{-1}0 . Using the same arguments as in the proof of Lemma 3.3, we can show that

    \begin{eqnarray} \phi(z,z_n)\leq \phi(z,x_n)-\bigg(1-\frac{\kappa\mu^2}{c}\frac{\lambda_{n}^{2}}{\lambda_{n+1}^{2}}\bigg)\phi(y_n,x_n). \end{eqnarray} (3.21)

    Since \lim_{n\to\infty}\lambda_n exists and \mu\in\big(0, \sqrt{\frac{c}{\kappa}}\big) , it follows that \lim_{n\to\infty}\bigg(1-\frac{\kappa\mu^2}{c}\frac{\lambda_{n}^{2}}{\lambda_{n+1}^{2}}\bigg) = 1-\frac{\kappa\mu^2}{c} > 0 . Thus there exists n_0\in\mathbb{N} such that

    \begin{eqnarray} 1-\frac{\kappa\mu^2}{c}\frac{\lambda_{n}^{2}}{\lambda_{n+1}^{2}} > 0,\; \; \forall n\geq n_0. \end{eqnarray} (3.22)

    Combining (3.21) and (3.22), we have

    \begin{eqnarray} \phi(z,z_n)\leq \phi(z,x_n),\; \; \forall n\geq n_0. \end{eqnarray} (3.23)

    By (2.4), we have

    \begin{eqnarray*} \phi(z,x_{n+1})& = &\phi(z,J^{-1}(\alpha_nJu+(1-\alpha_n) Jz_n))\nonumber\\ &\leq&\alpha_n\phi(z,u)+(1-\alpha_n)\phi(z,z_n)\nonumber\\ &\leq&\alpha_n\phi(z,u)+(1-\alpha_n)\phi(z,x_n)\nonumber\\ &\leq&\max\{\phi(z,u),\phi(z,x_n)\}\nonumber\\ &\vdots&\nonumber\\ &\leq&\max\{\phi(z,u),\phi(z,x_{n_0})\}. \end{eqnarray*}

This implies that \{\phi(z, x_{n})\} is bounded. Applying Lemma 2.8, we see that \{x_n\} is also bounded.

    Let x^* = \Pi_{(A+B)^{-1}0}(u) . From (3.21), we have

    \begin{eqnarray*} \phi(x^*,x_{n+1})& = &\phi(x^*,J^{-1}(\alpha_nJu+(1-\alpha_n) Jz_n))\nonumber\\ &\leq&\alpha_n\phi(x^*,u)+(1-\alpha_n)\phi(x^*,z_n)\nonumber\\ &\leq&\alpha_n\phi(x^*,u)+(1-\alpha_n)\phi(x^*,x_n)-(1-\alpha_n)\bigg(1-\frac{\kappa\mu^2}{c}\frac{\lambda_{n}^{2}}{\lambda_{n+1}^{2}}\bigg)\phi(y_n,x_n). \end{eqnarray*}

    This implies that

    \begin{eqnarray} (1-\alpha_n)\bigg(1-\frac{\kappa\mu^2}{c}\frac{\lambda_{n}^{2}}{\lambda_{n+1}^{2}}\bigg)\phi(y_n,x_n)\leq \phi(x^*,x_{n})-\phi(x^*,x_{n+1})+\alpha_nK, \end{eqnarray} (3.24)

    where K = \sup_{n\in\mathbb{N}}\{|\phi(x^*, u)-\phi(x^*, x_n)|\} .

    Now, we will divide the rest of the proof into two cases.

    Case 1. Suppose that there exists N\in\mathbb{N} such that \phi(x^*, x_{n+1})\leq \phi(x^*, x_n) for all n\geq N . Hence \lim_{n\to\infty}\phi(x^*, x_n) exists. By our assumptions, we have from (3.24) that

    \begin{eqnarray*} \lim\limits_{n\to\infty}\phi(y_n,x_n) = 0 \end{eqnarray*}

    and hence

    \begin{eqnarray} \lim\limits_{n\to\infty}\|x_n-y_n\| = 0. \end{eqnarray} (3.25)

    Since J is norm-to-norm uniformly continuous on bounded subsets of E , we have

    \begin{eqnarray} \lim\limits_{n\to\infty}\|Jx_n-Jy_n\| = 0. \end{eqnarray} (3.26)

    Using the fact that A is Lipschitz continuous, we have

    \begin{eqnarray*} \lim\limits_{n\to\infty}\|Ax_n-Ay_n\| = 0. \end{eqnarray*}

    Then from (3.18), we have

    \begin{eqnarray} \|Jz_n-Jy_n\| = \lambda_n\|Ax_n-Ay_n\|\to0. \end{eqnarray} (3.27)

    Moreover from (3.26) and (3.27), we obtain

    \begin{eqnarray*} \|Jx_{n+1}-Jx_n\|&\leq&\|Jx_{n+1}-Jz_n\|+\|Jz_n-Jy_n\|+\|Jy_n-Jx_n\|\nonumber\\ & = &\alpha_n\|Ju-Jz_n\|+\|Jz_n-Jy_n\|+\|Jy_n-Jx_n\|\nonumber\\ &\to&0. \end{eqnarray*}

    Since J^{-1} is norm-to-norm uniformly continuous on bounded subset of E^* , we have

    \begin{eqnarray} \lim\limits_{n\to\infty}\|x_{n+1}-x_n\| = 0. \end{eqnarray} (3.28)

    By the boundedness of \{x_n\} , there exists a subsequence \{x_{n_k}\} of \{x_n\} such that x_{n_k}\rightharpoonup \hat{x}\in E and

    \begin{eqnarray*} \limsup\limits_{n\to \infty}\langle x_n-x^*,Ju-Jx^*\rangle = \lim\limits_{k\to \infty}\langle x_{n_k}-x^*,Ju-Jx^*\rangle, \end{eqnarray*}

    where x^* = \Pi_{(A+B)^{-1}0}(u) . By a similar argument to that of Theorem 3.4, we can show that \hat{x}\in(A+B)^{-1}0 . Thus we have

    \begin{eqnarray*} \limsup\limits_{n\to \infty}\langle x_n-x^*,Ju-Jx^*\rangle = \langle \hat{x}-x^*,Ju-Jx^*\rangle\leq0. \end{eqnarray*}

    From (3.28), we also have

    \begin{eqnarray} \limsup\limits_{n\to \infty}\langle x_{n+1}-x^*,Ju-Jx^*\rangle\leq0. \end{eqnarray} (3.29)

    Finally, we show that x_n\to x^* . From Lemma 2.9, we have

    \begin{eqnarray} \phi(x^*,x_{n+1})& = &\phi(x^*,J^{-1}(\alpha_nJu+(1-\alpha_n) Jz_n))\\ & = &V(x^*,\alpha_nJu+(1-\alpha_n) Jz_n)\\ &\leq&V(x^*,\alpha_n Ju+(1-\alpha_n)Jz_n-\alpha_n(Ju-Jx^*))+2\alpha_n\langle x_{n+1}-x^*,Ju-Jx^*\rangle\\ & = &V(x^*,\alpha_nJx^*+(1-\alpha_n)Jz_n)+2\alpha_n\langle x_{n+1}-x^*,Ju-Jx^*\rangle\\ & = &\alpha_n\phi(x^*,x^*)+(1-\alpha_n)\phi(x^*,z_n)+2\alpha_n\langle x_{n+1}-x^*,Ju-Jx^*\rangle\\ &\leq& (1-\alpha_n)\phi(x^*,x_n)+2\alpha_n\langle x_{n+1}-x^*,Ju-Jx^*\rangle. \end{eqnarray} (3.30)

Combining this with (3.29) and (3.30), we can conclude by Lemma 2.13 that \phi(x^*, x_n)\to 0 . Therefore x_n\to x^* .

Case 2. Put \Gamma_n = \phi(x^*, x_n) for all n\in\mathbb{N} . Suppose that there exists a subsequence \{\Gamma_{n_i}\} of \{\Gamma_n\} such that \Gamma_{n_i} < \Gamma_{n_{i}+1} for all i\in\mathbb{N} . In this case, we define \sigma:\mathbb{N}\to\mathbb{N} by

    \begin{eqnarray*} \sigma(n) = \max\{k\leq n:\Gamma_k < \Gamma_{k+1}\} \end{eqnarray*}

    for all n\geq n_0 (for some n_0 large enough). From Lemma 2.14, we have \sigma(n) is non-decreasing such that \lim_{n\to\infty}\sigma(n) = \infty and the following inequalities hold for all n\geq n_0 :

    \begin{eqnarray} \Gamma_{\sigma(n)} < \Gamma_{\sigma(n)+1}\; \; \text{and}\; \; \Gamma_{n}\leq\Gamma_{\sigma(n)+1}. \end{eqnarray} (3.31)

As proved in Case 1, we obtain

    \begin{eqnarray*} (1-\alpha_{\sigma(n)})\bigg(1-\frac{\kappa\mu^2}{c}\frac{\lambda_{{\sigma(n)}}^{2}}{\lambda_{{\sigma(n)}+1}^{2}}\bigg)\phi(y_{\sigma(n)},x_{\sigma(n)})&\leq& \phi(x^*,x_{\sigma(n)})-\phi(x^*,x_{{\sigma(n)}+1})+\alpha_{\sigma(n)}K\\ &\leq&\alpha_{\sigma(n)}K, \end{eqnarray*}

    where K = \sup_{n\in\mathbb{N}}\{|\phi(x^*, u)-\phi(x^*, x_{\sigma(n)})|\} . By our assumptions, we have

    \begin{eqnarray*} \lim\limits_{n\to\infty}\phi(y_{\sigma(n)},x_{\sigma(n)}) = 0 \end{eqnarray*}

    and hence

    \begin{eqnarray*} \lim\limits_{n\to\infty}\|x_{\sigma(n)}-y_{\sigma(n)}\| = 0. \end{eqnarray*}

    Using the same arguments as in the proof of Case 1, we can show that

    \begin{eqnarray*} \lim\limits_{n\to\infty}\|x_{{\sigma(n)}+1}-x_{\sigma(n)}\| = 0 \end{eqnarray*}

    and

    \begin{eqnarray*} \limsup\limits_{n\to \infty}\langle x_{{\sigma(n)}+1}-x^*,Ju-Jx^*\rangle\leq0. \end{eqnarray*}

    From (3.30) and (3.31), we have

\begin{eqnarray*} \phi(x^*,x_{{\sigma(n)}+1})&\leq&(1-\alpha_{\sigma(n)})\phi(x^*,x_{\sigma(n)})+2\alpha_{\sigma(n)}\langle x_{{\sigma(n)}+1}-x^*,Ju-Jx^*\rangle\nonumber\\ &\leq&(1-\alpha_{\sigma(n)})\phi(x^*,x_{\sigma(n)+1})+2\alpha_{\sigma(n)}\langle x_{{\sigma(n)}+1}-x^*,Ju-Jx^*\rangle. \end{eqnarray*}

    This implies that

\begin{eqnarray*} \phi(x^*,x_{n})\leq \phi(x^*,x_{{\sigma(n)}+1})\leq 2\langle x_{{\sigma(n)}+1}-x^*,Ju-Jx^*\rangle. \end{eqnarray*}

    Hence \limsup_{n\to\infty}\phi(x^*, x_{n}) = 0 and so \lim_{n\to\infty}\phi(x^*, x_{n}) = 0 . Therefore x_n\to x^* . This completes the proof.

    Let C be a nonempty, closed and convex subset of E . Let A : C \to E^* be a mapping. The variational inequality problem is to find x^*\in C such that

    \begin{eqnarray} \langle y-x^*,Ax^*\rangle\geq0,\; \; \forall y\in C. \end{eqnarray} (4.1)

The set of solutions of the problem (4.1) is denoted by VI(C, A ). In particular, if A is a continuous and monotone mapping, then VI(C, A) is closed and convex (see [7,24]). Recall that the indicator function of C is given by

    \begin{eqnarray*} i_C(x) = \left \{\begin{array}{l} \; 0,\text{if}\; \; x\in C, \\ \infty,\text{if}\; \; x\notin C. \end{array}\right. \end{eqnarray*}

It is known that i_C is a proper, convex and lower semicontinuous function whose subdifferential \partial i_C is maximal monotone (see [35]). From [2], we know that

    \begin{eqnarray*} \partial i_C(v) = N_C(v) = \{u\in E^*:\langle y-v,u\rangle\leq0,\; \forall y\in C\}, \end{eqnarray*}

    where N_C is the normal cone for C at a point v . Thus we can define the resolvent of \partial i_C for \lambda > 0 by

    \begin{eqnarray*} J_{\lambda}^{\partial i_C}(x) = (J+\lambda \partial i_C)^{-1} Jx,\; \; \forall x\in E. \end{eqnarray*}

    As shown in [40], for any x\in E and z\in C , z = J_{\lambda}^{\partial i_C}(x)\iff z = \Pi_{C}(x) , where \Pi_{C} is the generalized projection from E onto C .

    Lemma 4.1. [36] Let C be a nonempty, closed convex subset of a Banach space E . Let A:C\to E^* be a monotone and hemicontinuous operator and T:E\to 2^{E^*} be an operator defined as follows:

    \begin{eqnarray*} \begin{array}{lcl} Tv = \begin{cases} \begin{array}{ll} Av+N_C(v)\;\; \mathit{\mbox{if}}\; \; v\in C,\\ \emptyset \;\;\;\;\;\;\;\;\; \mathit{\mbox{if}}\; \; v\notin C. \end{array} \end{cases} \end{array} \end{eqnarray*}

    Then T is maximal monotone and T^{-1}0 = VI(C, A) .

    If we set B = \partial i_C , then we obtain the following results regarding the problem (4.1).

    Assumption 4.2. (A1) The feasible set C is a nonempty, closed and convex subset of a real 2 -uniformly convex and uniformly smooth Banach space E .

    (A2) The mapping A:E\to E^* is monotone and L -Lipschitz continuous.

    (A3) The solution set of the problem (4.1) is nonempty, that is, VI(C, A)\neq\emptyset .

    Algorithm 3: Tseng type splitting algorithm for variational inequality problem
    Step 0. Given \lambda_1 > 0 and \mu\in\big(0, \sqrt{\frac{c}{\kappa}}\big) . Choose a nonnegative real sequence \{\theta_n\} such that \sum_{n=1}^\infty \theta_n < \infty . Let x_1\in C be arbitrary. Set n=1 .
    Step 1. Compute
    \begin{eqnarray} y_n = \Pi_CJ^{-1}(Jx_n-\lambda_nAx_n). \end{eqnarray}\;\;\;\;\;\;\;\;\;(4.2)
    Step 2. Compute
    \begin{eqnarray} x_{n+1} = J^{-1}(Jy_n-\lambda_n(Ay_n-Ax_n)), \end{eqnarray} \;\;\;\;\;\;\;\;\;(4.3)
    where the step sizes are adaptively updated as follows:
    \begin{equation} \begin{array}{lcl} \lambda_{n+1} = \begin{cases} \begin{array}{ll} \min\bigg\{\cfrac{\mu\|x_n-y_n\|}{\|Ax_n-Ay_n\|},\lambda_n+\theta_n\bigg\} \mbox{if}\; \; Ax_n-Ay_n\neq0,\\ \lambda_n+\theta_n \;\;\;\;\;\;\;\;\;\mbox{otherwise.} \end{array} \end{cases} \end{array} \end{equation} \;\;\;\;\;\;\;\;\;(4.4)
    Set n:=n+1 and go to Step 1.

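A sketch of Algorithm 3 in a Hilbert space, where \Pi_C reduces to the metric projection P_C , is given below; the operator A , the projection proj_C and the sequence theta are user-supplied, and the stopping test \|x_n-y_n\|\leq\varepsilon mirrors the criterion used in Section 5.

    import numpy as np

    def algorithm3(A, proj_C, x1, lam1, mu, theta, n_iter=10000, eps=1e-6):
        x, lam = np.asarray(x1, dtype=float), float(lam1)
        for n in range(1, n_iter + 1):
            Ax = A(x)
            y = proj_C(x - lam * Ax)                      # Step 1, Eq. (4.2)
            if np.linalg.norm(x - y) <= eps:              # stopping criterion of Section 5
                return x
            Ay = A(y)
            x_next = y - lam * (Ay - Ax)                  # Step 2, Eq. (4.3)
            d = np.linalg.norm(Ax - Ay)
            if d > 0:                                     # step-size update, Eq. (4.4)
                lam = min(mu * np.linalg.norm(x - y) / d, lam + theta(n))
            else:
                lam = lam + theta(n)
            x = x_next
        return x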

Theorem 4.3. Assume that Assumption 4.2 holds. Suppose, in addition, that J is weakly sequentially continuous on E . Then the sequence \{x_n\} generated by Algorithm 3 converges weakly to an element of VI(C, A) .

    Algorithm 4: Halpern-Tseng type splitting algorithm for variational inequality problem
    Step 0. Given \lambda_1 > 0 and \mu\in\big(0, \sqrt{\frac{c}{\kappa}}\big) . Choose a nonnegative real sequence \{\theta_n\} such that \sum_{n=1}^\infty \theta_n < \infty . Let u, x_1\in C be arbitrary. Set n=1 .
    Step 1. Compute
    \begin{eqnarray} y_n = \Pi_CJ^{-1}(Jx_n-\lambda_nAx_n). \end{eqnarray} \;\;\;\;\;\;\;\;\;\;(4.5)
    Step 2. Compute
    \begin{eqnarray} z_n = J^{-1}(Jy_n-\lambda_n(Ay_n-Ax_n)). \end{eqnarray}\;\;\;\;\;\;\;\;\;\;(4.6)
    Step 3. Compute
    \begin{eqnarray} x_{n+1} = J^{-1}(\alpha_nJu+(1-\alpha_n) Jz_n), \end{eqnarray} \;\;\;\;\;\;\;\;\;\;(4.7)
    where the step sizes are adaptively updated as follows:
    \begin{equation} \begin{array}{lcl} \lambda_{n+1} = \begin{cases} \begin{array}{ll} \min\bigg\{\cfrac{\mu\|x_n-y_n\|}{\|Ax_n-Ay_n\|},\lambda_n+\theta_n\bigg\} \mbox{if}\; \; Ax_n-Ay_n\neq0,\\ \lambda_n+\theta_n \;\;\;\;\;\;\;\;\;\mbox{otherwise.} \end{array} \end{cases} \end{array} \end{equation} \;\;\;\;\;\;\;\;\;\;(4.8)
    Set n:=n+1 and go to Step 1.


    Theorem 4.4. Assume that Assumption 4.2 holds. If \{\alpha_n\}\subset(0, 1) with \lim_{n\to\infty}\alpha_n = 0 and \sum_{n = 1}^{\infty}\alpha_n = \infty , then the sequence \{x_n\} generated by Algorithm 4 converges strongly to x^*\in VI(C, A) .

    Let f:E\to\mathbb{R} be a convex function and g : E\to\mathbb{R} be a convex, lower semicontinuous and non-smooth function. We consider the following convex minimization problem:

    \begin{eqnarray} \min\limits_{x\in E}f(x)+g(x). \end{eqnarray} (4.9)

By Fermat's rule, we know that the above problem is equivalent to the problem of finding x\in E such that

    \begin{eqnarray} 0\in\nabla f(x)+\partial g(x), \end{eqnarray} (4.10)

where \nabla f is the gradient of f and \partial g is the subdifferential of g . In this situation, we assume that f is a convex and differentiable function whose gradient \nabla f is L -Lipschitz continuous. Then \nabla f is cocoercive with constant 1/L (see [31, Theorem 3.13]), which implies that \nabla f is monotone and Lipschitz continuous. Moreover, \partial g is maximal monotone (see [35, Theorem A]). From this point of view, we set A = \nabla f and B = \partial g , and then we obtain the following results regarding the problem (4.9).

Assumption 4.5. (A1) The Banach space E is a real 2 -uniformly convex and uniformly smooth Banach space.

(A2) The function f:E\to\mathbb{R} is convex and differentiable with L -Lipschitz continuous gradient \nabla f , g:E\to\mathbb{R} is convex and lower semicontinuous, and f+g attains a minimizer.

    Algorithm 5: Tseng type splitting algorithm for convex minimization problem
    Step 0. Given \lambda_1 > 0 and \mu\in\big(0, \sqrt{\frac{c}{\kappa}}\big) . Choose a nonnegative real sequence \{\theta_n\} such that \sum_{n=1}^\infty \theta_n < \infty . Let x_1\in E be arbitrary. Set n=1 .
    Step 1. Compute
    \begin{eqnarray} y_n = J_{\lambda_n}^{\partial g}J^{-1}(Jx_n-\lambda_n\nabla f(x_n)). \end{eqnarray} \;\;\;\;\;\;\;\;\;(4.11)
    Step 2. Compute
    \begin{eqnarray} x_{n+1} = J^{-1}(Jy_n-\lambda_n(\nabla f(y_n)-\nabla f(x_n))), \end{eqnarray} \;\;\;\;\;\;\;\;\;(4.12)
    where the step sizes are adaptively updated as follows:
    \begin{equation} \begin{array}{lcl} \lambda_{n+1} = \begin{cases} \begin{array}{ll} \min\bigg\{\cfrac{\mu\|x_n-y_n\|}{\|\nabla f(y_n)-\nabla f(x_n)\|},\lambda_n+\theta_n\bigg\} \mbox{if}\; \; \nabla f(y_n)-\nabla f(x_n)\neq0,\\ \lambda_n+\theta_n \;\;\;\;\;\;\;\;\;\mbox{otherwise.} \end{array} \end{cases} \end{array} \end{equation}\;\;\;\;\;\;\;\;\;(4.13)
    Set n:=n+1 and go to Step 1.

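A sketch of Algorithm 5 in a Hilbert space, where J_{\lambda}^{\partial g} becomes the proximity operator of g , is given below; the concrete instance is in the spirit of Example 5.3 below, with f(x) = 2\|x\|_{2}^{2}+(-1,2,5)x+1 and g(x) = \|x\|_1 (so the resolvent is soft-thresholding), and the parameter values are illustrative.

    import numpy as np

    def algorithm5(grad_f, prox_g, x1, lam1, mu, theta, n_iter=10000, eps=1e-12):
        x, lam = np.asarray(x1, dtype=float), float(lam1)
        for n in range(1, n_iter + 1):
            gx = grad_f(x)
            y = prox_g(x - lam * gx, lam)                 # Step 1, Eq. (4.11)
            gy = grad_f(y)
            x_next = y - lam * (gy - gx)                  # Step 2, Eq. (4.12)
            d = np.linalg.norm(gx - gy)
            if d > 0:                                     # step-size update, Eq. (4.13)
                lam = min(mu * np.linalg.norm(x - y) / d, lam + theta(n))
            else:
                lam = lam + theta(n)
            if np.linalg.norm(x_next - x) <= eps:
                return x_next
            x = x_next
        return x

    # instance in the spirit of Example 5.3: f(x) = 2||x||_2^2 + c^T x + 1, g(x) = ||x||_1
    c = np.array([-1.0, 2.0, 5.0])
    grad_f = lambda x: 4.0 * x + c                        # gradient of f
    soft = lambda z, lam: np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)   # prox of ||.||_1
    x_min = algorithm5(grad_f, soft, np.array([1.0, 2.0, 4.0]),
                       lam1=0.1, mu=0.9, theta=lambda n: 0.0)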

    Theorem 4.6. Assume that Assumption 4.5 holds. Suppose, in addition, that J is weakly sequentially continuous on E . Then the sequence \{x_n\} generated by Algorithm 5 converges weakly to a minimizer of f+g .

    Algorithm 6: Halpern-Tseng type splitting algorithm for convex minimization problem
    Step 0. Given \lambda_1 > 0 and \mu\in\big(0, \sqrt{\frac{c}{\kappa}}\big) . Choose a nonnegative real sequence \{\theta_n\} such that \sum_{n=1}^\infty \theta_n < \infty . Let u, x_1\in E be arbitrary. Set n=1 .
    Step 1. Compute
    \begin{eqnarray} y_n = J_{\lambda_n}^{\partial g}J^{-1}(Jx_n-\lambda_n\nabla f(x_n)). \end{eqnarray} \;\;\;\;\;\;\;\;\;\;\;\;(4.14)
    Step 2. Compute
    \begin{eqnarray} z_n = J^{-1}(Jy_n-\lambda_n(\nabla f(y_n)-\nabla f(x_n))). \end{eqnarray} \;\;\;\;\;\;\;\;\;\;\;\;(4.15)
    Step 3. Compute
    \begin{eqnarray} x_{n+1} = J^{-1}(\alpha_nJu+(1-\alpha_n) Jz_n), \end{eqnarray}\;\;\;\;\;\;\;\;\;\;\;\;(4.16)
    where the step sizes are adaptively updated as follows:
    \begin{equation} \begin{array}{lcl} \lambda_{n+1} = \begin{cases} \begin{array}{ll} \min\bigg\{\cfrac{\mu\|x_n-y_n\|}{\|\nabla f(y_n)-\nabla f(x_n)\|},\lambda_n+\theta_n\bigg\} \mbox{if}\; \; \nabla f(y_n)-\nabla f(x_n)\neq0,\\ \lambda_n+\theta_n \;\;\;\;\;\;\;\;\;\mbox{otherwise.} \end{array} \end{cases} \end{array} \end{equation} \;\;\;\;\;\;\;\;\;\;\;\;(4.17)
    Set n:=n+1 and go to Step 1.


    Theorem 4.7. Assume that Assumption 4.5 holds. If \{\alpha_n\}\subset(0, 1) with \lim_{n\to\infty}\alpha_n = 0 and \sum_{n = 1}^{\infty}\alpha_n = \infty , then the sequence \{x_n\} generated by Algorithm 6 converges strongly to a minimizer of f+g .

    In this section, we provide some numerical experiments to illustrate the behaviour of our methods and compare them with some existing methods.

    Example 5.1. We consider the HpHard problem which is taken from [22]. Let A:\mathbb{R}^m\to\mathbb{R}^m be an operator defined by Ax = Mx+q with q\in\mathbb{R}^m and

    \begin{eqnarray*} M = NN^T+S+D, \end{eqnarray*}

where N is an m\times m matrix, S is an m\times m skew-symmetric matrix and D is an m\times m positive definite diagonal matrix. The feasible set is C = \mathbb{R}^m_+ . It is clear that A is monotone and Lipschitz continuous with L = \|M\| . In these experiments, we compare our Algorithm 3 and Algorithm 4 with the extragradient method (EGM) proposed in [28] and the subgradient extragradient method (SEGM) proposed in [8]. The parameters are chosen as follows:

    \bullet Algorithm 3: \lambda_1 = 0.4/\|M\| and \mu = 0.9 ;

    \bullet Algorithm 4: \lambda_1 = 0.4/\|M\| , \mu = 0.9 , \alpha_n = \frac{1}{10000(n+2)} and u = x_1 ;

    \bullet EGM and SEGM: \lambda = 0.4/\|M\| .

All entries of N and S are generated randomly in (-5, 5) , the entries of D in (0, 0.3) , and the entries of q uniformly in (-500, 0) . For every m , we have generated two random samples with different choices of M and q . We perform the numerical experiments with three different cases of m (m = 100, 500, 1000) . We take the starting point x_1 = (1, 1, 1, \ldots, 1)^T\in\mathbb{R}^m and use the stopping criterion \|x_n-y_n\|\leq\varepsilon = 10^{-6} . The numerical results are reported in Table 1, and a sketch of the data generation is given after the table.

    Table 1.  Numerical results for Example 5.1.
    m Algorithm 3 ( \theta_n=0 ) Algorithm 3 ( \theta_n=100/n^{1.1} ) Algorithm 4 ( \theta_n=0 ) Algorithm 4( \theta_n=100/n^{1.1} ) EGM SEGM
    iter. time iter. time iter. time iter. time iter. time iter. time
    100 2454 0.02 1162 0.01 35112 1.31 25204 0.65 2454 0.03 2454 0.04
    1920 0.04 917 0.02 35072 1.48 25203 0.66 1920 0.03 1920 0.05
    500 2275 0.95 1104 0.29 35010 7.28 25201 5.12 2275 0.50 2275 0.65
    2291 0.93 1107 0.43 34989 7.20 25198 5.06 2291 0.47 2291 0.59
    1000 2027 8.08 996 4.25 34993 113.2 25200 78.2 2027 7.83 2027 7.96
    2017 7.80 987 3.87 35003 109.8 25200 78.0 2017 7.01 2017 7.16

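The test data of Example 5.1 can be generated along the following lines (a sketch; the particular way of producing the skew-symmetric matrix S from uniform entries and the random seed are our own choices, not specified in the text):

    import numpy as np

    rng = np.random.default_rng(0)
    m = 100                                                # problem size (also 500, 1000)
    N = rng.uniform(-5, 5, (m, m))
    K = rng.uniform(-5, 5, (m, m)); S = (K - K.T) / 2      # one way to obtain a skew-symmetric S
    D = np.diag(rng.uniform(0, 0.3, m))                    # positive definite diagonal matrix
    M = N @ N.T + S + D
    q = rng.uniform(-500, 0, m)
    A = lambda x: M @ x + q                                # monotone, Lipschitz with L = ||M||
    proj_C = lambda z: np.maximum(z, 0.0)                  # projection onto C = R^m_+
    lam1 = 0.4 / np.linalg.norm(M, 2)                      # initial step size used in Section 5
    x1 = np.ones(m)                                        # starting point (1, 1, ..., 1)^T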

    Example 5.2. We consider the problem (4.1) in L_2([0, 2\pi ]) with the inner product \langle x, y\rangle = \int_{0}^{2\pi} x(t)y(t)dt and the norm \| x\| = \bigg(\int_0^{2\pi} x^2(t)dt\bigg)^{1/2} for all x, y\in L_2([0, 2\pi ]) . Let A:L_2([0, 2\pi ])\to L_2([0, 2\pi ]) be an operator defined by

    \begin{eqnarray*} (Ax)(t) = \frac{1}{2}\max\{0,x(t)\} \end{eqnarray*}

for all x\in L_2([0, 2\pi ]) and t\in[0, 2\pi] . It can be easily verified that A is monotone and Lipschitz continuous with L = 1 (see [50,51]). The feasible set is C = \{x\in L_2([0, 2\pi ]):\int_{0}^{2\pi}(t^2+1)x(t)dt\leq 1\} . Observe that 0\in VI(C, A) and so VI(C, A)\neq\emptyset . In this numerical experiment, we take the parameters \alpha_n , \lambda_1 and \mu to be the same as in Example 5.1. We perform numerical experiments with three different cases of starting point x_1 and use the stopping criterion \|x_{n}-y_n\|\leq\varepsilon = 10^{-3} . The numerical results are reported in Table 2, and a sketch of the projection onto C is given after the table.

    Table 2.  Numerical results for Example 5.2.
    x_1 Algorithm 3 ( \theta_n=0 ) Algorithm 3 ( \theta_n=0.001/(1.01)^n ) Algorithm 4 ( \theta_n=0 ) Algorithm 4 ( \theta_n=0.001/(1.01)^n )
    iter. time iter. time iter. time iter. time
    \frac{1}{100}\sin(t) 7 9.9 7 8.9 7 9.8 7 10.1
    \frac{1}{3}t^{2}e^{-4t} 5 0.4 5 0.3 5 0.3 5 0.3
    \frac{1}{70}(1-t^{2}) 6 3.2 6 2.5 6 2.7 6 2.7

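In Example 5.2 the feasible set is the half-space C = \{x\in L_2([0,2\pi]):\langle a, x\rangle\leq1\} with a(t) = t^2+1 , so the projection required by Algorithms 3 and 4 has the closed form P_C(x) = x-\frac{\max\{0,\langle a,x\rangle-1\}}{\|a\|^2}\,a . A discretized sketch (uniform grid on [0, 2\pi] with trapezoidal quadrature; the grid size is our choice):

    import numpy as np

    n_grid = 1000
    t = np.linspace(0.0, 2.0 * np.pi, n_grid)
    w = np.full(n_grid, t[1] - t[0]); w[0] *= 0.5; w[-1] *= 0.5   # trapezoidal quadrature weights
    inner = lambda x, y: np.sum(w * x * y)                        # discretized L_2 inner product
    a = t ** 2 + 1.0

    def proj_C(x):
        # projection onto the half-space {x : <a, x> <= 1} in the discretized L_2 space
        s = inner(a, x)
        return x if s <= 1.0 else x - (s - 1.0) / inner(a, a) * a

    A = lambda x: 0.5 * np.maximum(x, 0.0)                        # (Ax)(t) = 0.5*max{0, x(t)}
    x1 = np.sin(t) / 100.0                                        # one of the starting points in Table 2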

    Example 5.3. Consider the minimization problem:

    \begin{eqnarray*} \min\limits_{x\in\mathbb{R}^3}\|x\|_1+2\|x\|_{2}^{2}+(-1,2,5)x+1, \end{eqnarray*}

    where x = (w_1, w_2, w_3)^T\in \mathbb{R}^{3} . Let f(x) = 2\|x\|_{2}^{2}+(-1, 2, 5)x+1 and g(x) = \|x\|_1 . Thus we have \nabla f(x) = 4x+(-1, 2, 5)^T . It is easy to check that f is a convex and differentiable function and its gradient \nabla f is Lipschitz continuous with L = 4 . Moreover, g is a convex and lower semicontinuous function but not differentiable on \mathbb{R}^3 . From [21], we know that

    \begin{eqnarray*} J_{\lambda}^{\partial g}(x)& = &(I+\lambda\partial g)^{-1}(x)\\ & = &\big(\max\{|w_1|-\lambda,0\}\text{sgn}(w_1),\max\{|w_2|-\lambda,0\}\text{sgn}(w_2),\max\{|w_3|-\lambda,0\}\text{sgn}(w_3)\big)^T \end{eqnarray*}

for \lambda > 0 . In these experiments, we compare our Algorithm 5 and Algorithm 6 with Algorithm (1.4) of Cholamjiak [12]. The parameters are chosen as follows:

    \bullet Algorithm 5: \lambda_1 = 0.1 and \mu = 0.9 ;

    \bullet Algorithm 6: \lambda_1 = 0.1 , \mu = 0.9 , \alpha_n = \frac{1}{10000(n+1)} and u = x_1 ;

    \bullet Algorithm (1.4): all parameters \alpha_n , \lambda_n , \delta_n , r_n and e_n are the same as Example 4.2 in [12], and u = x_1 .

We perform the numerical experiments with four different cases of starting point x_1 and use the stopping criterion \|x_{n+1}-x_{n}\|\leq\varepsilon = 10^{-12} . The numerical results are reported in Table 3.

    Table 3.  Numerical results for Example 5.3.
    x_1 Algorithm 5 ( \theta_n=0 ) Algorithm 5 ( \theta_n=100/n^{1.1} ) Algorithm 6 ( \theta_n=0 ) Algorithm 6 ( \theta_n=100/n^{1.1} ) Algorithm (1.4)
    iter. time iter. time iter. time iter. time iter. time
    (1, 2, 4)^T 101 0.003 284 0.003 27818 0.10 25263 0.08 263957 0.33
    (1, -7, 3)^T 103 0.002 288 0.003 27809 0.12 25264 0.08 314417 0.38
    (-100,100, 50)^T 111 0.004 315 0.004 27802 0.11 25252 0.09 1313442 1.58
    (-1000, -5000, -800)^T 127 0.005 356 0.01 27787 0.11 25241 0.07 8004199 9.4


    Example 5.4. In signal processing, compressed sensing can be modeled as the following under-determinated linear equation system:

    \begin{equation} y = Dx+\varepsilon, \end{equation} (5.1)

where x\in\mathbb{R}^N is a vector with m nonzero components to be recovered, y\in\mathbb{R}^M is the observed or measured data with noise \varepsilon , and D:\mathbb{R}^N\rightarrow\mathbb{R}^M (M < N) is a bounded linear operator. It is known that solving (5.1) can be cast as the LASSO problem:

    \begin{eqnarray} \min\limits_{x\in\mathbb{R}^N}\frac{1}{2}\|Dx-y\|_{2}^{2} \;+ \lambda\|x\|_{1}, \end{eqnarray} (5.2)

    where \lambda > 0 . Following [19], we define Ax: = \nabla\Big(\frac{1}{2}\|Dx-y\|_{2}^{2}\Big) = D^{T}(Dx-y) and Bx: = \partial (\lambda\|x\|_1) . It is known that A is \|D\|^{2} -Lipschitz continuous and monotone. Moreover, B is maximal monotone (see [35]).
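A sketch of this setting combined with Algorithm 1 (in \mathbb{R}^N , so J = I ) is given below; the resolvent of B = \partial(\lambda\|\cdot\|_1) with parameter \gamma is soft-thresholding at level \gamma\lambda . The regularization parameter, the noise level and the iteration count below are illustrative and do not reproduce the exact experimental setup (SNR = 40) described next.

    import numpy as np

    rng = np.random.default_rng(42)
    N, M, m = 512, 256, 20                                 # Case 1: signal length, measurements, nonzeros
    x_true = np.zeros(N)
    x_true[rng.choice(N, m, replace=False)] = rng.uniform(-2, 2, m)
    D = rng.standard_normal((M, N))
    y = D @ x_true + 0.01 * rng.standard_normal(M)         # noisy observation (illustrative noise)

    reg = 0.01                                             # LASSO parameter lambda (illustrative)
    A = lambda x: D.T @ (D @ x - y)                        # A = grad(0.5*||Dx - y||^2)
    prox_B = lambda z, gam: np.sign(z) * np.maximum(np.abs(z) - gam * reg, 0.0)

    x = np.zeros(N)
    lam, mu = 0.0013, 0.5                                  # parameters of Algorithm 1 in Section 5
    theta = lambda n: 0.0                                  # theta_n = 0
    for n in range(1, 2001):                               # Algorithm 1 with J = I
        Ax = A(x)
        yv = prox_B(x - lam * Ax, lam)
        Ay = A(yv)
        x_next = yv - lam * (Ay - Ax)
        d = np.linalg.norm(Ax - Ay)
        if d > 0:
            lam = min(mu * np.linalg.norm(x - yv) / d, lam + theta(n))
        else:
            lam = lam + theta(n)
        x = x_next
    mse = np.linalg.norm(x - x_true) ** 2 / N              # restoration accuracy E_n of (5.3)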

In this experiment, the sparse vector x\in\mathbb{R}^N is generated from the uniform distribution on the interval [-2, 2] with m nonzero elements. The matrix D\in\mathbb{R}^{M\times N} is generated from a normal distribution with mean zero and unit variance. The observation y is generated by adding white Gaussian noise with signal-to-noise ratio SNR = 40. The restoration accuracy is measured by the mean squared error (MSE) as follows:

    \begin{equation} E_n = \frac{1}{N}\|x_n-x\|_{2}^2 < 10^{-4}, \end{equation} (5.3)

    where x_n is an estimated signal of x .

We compare our proposed Algorithm 1 and Algorithm 2 with the forward-backward splitting algorithm (FBSA) (1.2), Tseng's splitting algorithm (TSA) (1.5) and the contraction forward-backward splitting algorithm (CFBSA) proposed in [43, Algorithm 3.1]. The parameters are chosen as follows:

    \bullet Algorithm 1: \theta_n = 0 , \lambda_1 = 0.0013 and \mu = 0.5 ;

    \bullet Algorithm 2: \theta_n = 0 , \lambda_1 = 0.0013 , \mu = 0.5 , \alpha_n = \frac{1}{200n+5} and u = (1, 1, \ldots, 1)^T ;

    \bullet CFBSA: \alpha_n = \frac{1}{200n+5} , \mu = 0.5 , \delta = 0.5 , l = 0.5 , \gamma = 0.45 and f(x) = \frac{x}{5} ;

    \bullet TSA: \lambda_n = \frac{0.2}{\|D\|^2} ;

    \bullet FBSA: \lambda = 2\times 10^{-5} .

    The starting points x_1 of all methods are randomly chosen in \mathbb{R}^N . We perform the numerical test with the following three cases:

    Case 1: N = 512 , M = 256 and m = 20 ;

    Case 2: N = 1024 , M = 512 and m = 30 ;

    Case 3: N = 2048 , M = 1024 and m = 60 ;

    The numerical results for all test are reported in Figures 1-6.

    Figure 1.  Comparison of the signals recovered by the different algorithms in Case 1.
    Figure 2.  MSE versus the number of iterations in Case 1.
    Figure 3.  Comparison of the signals recovered by the different algorithms in Case 2.
    Figure 4.  MSE versus the number of iterations in Case 2.
    Figure 5.  Comparison of the signals recovered by the different algorithms in Case 3.
    Figure 6.  MSE versus the number of iterations in Case 3.

    In this paper, we propose Tseng's splitting algorithms with non-monotone adaptive step sizes for finding zeros of the sum of two monotone operators in the framework of Banach spaces. Under suitable conditions, we prove weak and strong convergence results for the proposed algorithms without requiring knowledge of the Lipschitz constant of the mapping. Some applications of the obtained results are presented. Finally, several numerical experiments are performed to illustrate the convergence behavior of our algorithms and to compare them with several known methods.

    P. Cholamjiak was supported by Thailand Science Research and Innovation under the project IRN62W0007, and P. Sunthrayuth was supported by the RMUTT Research Grant for New Scholar under Grant NSF62D0602.

    The authors declare no conflict of interest.



    [1] H. A. Abass, C. Izuchukwu, O. T. Mewomo, Q. L. Dong, Strong convergence of an inertial forward-backward splitting method for accretive operators in real Banach space, Fixed Point Theory, 21 (2020), 397-412. doi: 10.24193/fpt-ro.2020.2.28
    [2] R. P. Agarwal, D. O'Regan, D. R. Sahu, Fixed Point Theory for Lipschitzian-type Mappings with Applications, New York: Springer, 2009.
    [3] Y. I. Alber, Metric and generalized projection operators in Banach spaces: properties and applications, In: A. G. Kartsatos, Theory and Applications of Nonlinear Operator of Accretive and Monotone Type, New York: Marcel Dekker, (1996), 15-50.
    [4] K. Ball, E. A. Carlen, E. H. Lieb, Sharp uniform convexity and smoothness inequalities for trace norms, Inventiones Math., 115 (1994), 463-482. doi: 10.1007/BF01231769
    [5] V. Barbu, Nonlinear Semigroups and Differential Equations in Banach Spaces, Netherlands: Springer, 1976.
    [6] T. Bonesky, K. S. Kazimierski, P. Maass, F. Schöpfer, T. Schuster, Minimization of Tikhonov Functionals in Banach Spaces, Abstr. Appl. Anal., 2008 (2008), 192679.
    [7] F. Browder, Nonlinear monotone operators and convex sets in Banach spaces, Bull. Am. Math. Soc., 71 (1965), 780-785. doi: 10.1090/S0002-9904-1965-11391-X
    [8] Y. Censor, A. Gibali, S. Reich, The subgradient extragradient method for solving variational inequalities in Hilbert space, J. Optim. Theory Appl., 148 (2011), 318-335. doi: 10.1007/s10957-010-9757-3
    [9] S. S. Chang, C. F. Wen, J. C. Yao, Generalized viscosity implicit rules for solving quasi-inclusion problems of accretive operators in Banach spaces, Optimization, 66 (2017), 1105-1117. doi: 10.1080/02331934.2017.1325888
    [10] S. S. Chang, C. F. Wen, J. C. Yao, A generalized forward-backward splitting method for solving a system of quasi variational inclusions in Banach spaces, RACSAM, 113 (2019), 729-747. doi: 10.1007/s13398-018-0511-2
    [11] G. H. Chen, R. T. Rockafellar, Convergence rates in forward-backward splitting, SIAM J. Optim., 7 (1997), 421-444. doi: 10.1137/S1052623495290179
    [12] P. Cholamjiak, A generalized forward-backward splitting method for solving quasi inclusion problems in Banach spaces, Numer. Algorithms, 71 (2016), 915-932. doi: 10.1007/s11075-015-0030-6
    [13] P. Cholamjiak, N. Pholasa, S. Suantai, P. Sunthrayuth, The generalized viscosity explicit rules for solving variational inclusion problems in Banach spaces, Optimization, 2020. Available from: https://doi.org/10.1080/02331934.2020.1789131.
    [14] I. Cioranescu, Geometry of Banach Spaces, Duality Mappings and Nonlinear Problems, Dordrecht: Kluwer Academic, 1990.
    [15] P. L. Combettes, V. R. Wajs, Signal recovery by proximal forward-backward splitting, Multiscale Model. Simul., 4 (2005), 1168-1200. doi: 10.1137/050626090
    [16] I. Daubechies, M. Defrise, C. De Mol, An iterative thresholding algorithm for linear inverse problems with a sparsity constraint, Commun. Pure Appl. Math., 57 (2004), 1413-1457. doi: 10.1002/cpa.20042
    [17] J. Duchi, Y. Singer, Efficient online and batch learning using forward-backward splitting, J. Mach. Learn. Res., 10 (2009), 2899-2934.
    [18] J. C. Dunn, Convexity, monotonicity, and gradient processes in Hilbert space, J. Math. Anal. Appl., 53 (1976), 145-158. doi: 10.1016/0022-247X(76)90152-9
    [19] A. Gibali, D. V. Thong, Tseng type methods for solving inclusion problems and its applications, Calcolo, 55 (2018), 49. doi: 10.1007/s10092-018-0292-1
    [20] O. Güler, On the convergence of the proximal point algorithm for convex minimization, SIAM J. Control Optim., 29 (1991), 403-419. doi: 10.1137/0329022
    [21] E. T. Hale, W. Yin, Y. Zhang, A fixed-point continuation method for \ell_1-regularized minimization with applications to compressed sensing, CAAM Technical Report TR07-07, 2007.
    [22] P. T. Harker, J. S. Pang, A damped-Newton method for the linear complementarity problem, In: G. Allgower, K. Georg, Computational Solution of Nonlinear Systems of Equations, AMS Lectures on Applied Mathematics, 26 (1990), 265-284.
    [23] O. Hanner, On the uniform convexity of L_p and \ell_p, Arkiv Matematik, 3 (1956), 239-244. doi: 10.1007/BF02589410
    [24] P. Hartman, G. Stampacchia, On some non-linear elliptic differential functional equations, Acta Math., 115 (1966), 271-310. doi: 10.1007/BF02392210
    [25] H. Iiduka, W. Takahashi, Weak convergence of a projection algorithm for variational inequalities in a Banach space, J. Math. Anal. Appl., 339 (2008), 668-679. doi: 10.1016/j.jmaa.2007.07.019
    [26] C. Izuchukwu, C. C. Okeke, F. O. Isiogugu, Viscosity iterative technique for split variational inclusion problem and fixed point problem between Hilbert space and Banach space, J. Fixed Point Theory Appl., 20 (2018), 1-25. doi: 10.1007/s11784-018-0489-6
    [27] C. C. Okeke, C. Izuchukwu, Strong convergence theorem for split feasibility problem and variational inclusion problem in real Banach spaces, Rend. Circolo Mat. Palermo, 2020. Available from: https://doi.org/10.1007/s12215-020-00508-3.
    [28] G. M. Korpelevich, The extragradient method for finding saddle points and other problems, Ekonomika i Mat. Metody, 12 (1976), 747-756.
    [29] P. L. Lions, B. Mercier, Splitting algorithms for the sum of two nonlinear operators, SIAM J. Numer. Anal., 16 (1979), 964-979. doi: 10.1137/0716071
    [30] P. E. Maingé, Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization, Set-Valued Anal., 16 (2008), 899-912. doi: 10.1007/s11228-008-0102-z
    [31] J. Peypouquet, Convex Optimization in Normed Spaces: Theory, Methods and Examples, Springer Briefs in Optimization, 2015.
    [32] N. Pholasa, P. Cholamjiak, The regularization method for solving variational inclusion problems, Thai J. Math., 14 (2016), 369-381.
    [33] H. Raguet, J. Fadili, G. Peyre, A generalized forward-backward splitting, SIAM J. Imaging Sci., 6 (2013), 1199-1226. doi: 10.1137/120872802
    [34] R. T. Rockafellar, Monotone operators and the proximal point algorithm, SIAM J. Control Optim., 14 (1976), 877-898. doi: 10.1137/0314056
    [35] R. T. Rockafellar, On the maximal monotonicity of subdifferential mappings, Pac. J. Math., 33 (1970), 209-216. doi: 10.2140/pjm.1970.33.209
    [36] R. T. Rockafellar, On the maximality of sums of nonlinear monotone operators, Trans. Amer. Math. Soc., 149 (1970), 75-88. doi: 10.1090/S0002-9947-1970-0282272-5
    [37] Y. Shehu, Convergence results of forward-backward algorithms for sum of monotone operators in Banach spaces, Results Math., 74 (2019), 138. doi: 10.1007/s00025-019-1061-4
    [38] Y. Shehu, G. Cai, Strong convergence result of forward-backward splitting methods for accretive operators in Banach spaces with applications, RACSAM, 112 (2018), 71. doi: 10.1007/s13398-016-0366-3
    [39] W. Takahashi, Nonlinear Functional Analysis, Yokohama: Yokohama Publishers, 2000.
    [40] S. Takahashi, W. Takahashi, Split common null point problem and shrinking projection method for generalized resolvents in two Banach spaces, J. Nonlinear Convex Anal., 17 (2016), 2171-2182.
    [41] W. Takahashi, N. C. Wong, J. C. Yao, Two generalized strong convergence theorems of Halpern's type in Hilbert spaces and applications, Taiwan. J. Math., 16 (2012), 1151-1172. doi: 10.11650/twjm/1500406684
    [42] K. K. Tan, H. K. Xu, Approximating fixed points of nonexpansive mappings by the Ishikawa iteration process, J. Math. Anal. Appl., 178 (1993), 301-308. doi: 10.1006/jmaa.1993.1309
    [43] D. V. Thong, P. Cholamjiak, Strong convergence of a forward-backward splitting method with a new step size for solving monotone inclusions, Comput. Appl. Math., 38 (2019), 94. doi: 10.1007/s40314-019-0855-z
    [44] R. Tibshirani, Regression shrinkage and selection via the lasso, J. Royal Statist. Soc., 58 (1996), 267-288. Available from: https://doi.org/10.1111/j.2517-6161.1996.tb02080.x.
    [45] P. Tseng, A modified forward-backward splitting method for maximal monotone mappings, SIAM J. Control Optim., 38 (2000), 431-446. doi: 10.1137/S0363012998338806
    [46] Y. Wang, F. Wang, Strong convergence of the forward-backward splitting method with multiple parameters in Hilbert spaces, Optimization, 67 (2018), 493-505. doi: 10.1080/02331934.2017.1411485
    [47] H. K. Xu, Inequalities in Banach spaces with applications, Nonlinear Anal., 16 (1991), 1127-1138. doi: 10.1016/0362-546X(91)90200-K
    [48] H. K. Xu, Iterative algorithms for nonlinear operators, J. London Math. Soc., 66 (2002), 240-256. doi: 10.1112/S0024610702003332
    [49] Z. B. Xu, G. F. Roach, Characteristic inequalities of uniformly convex and uniformly smooth Banach spaces, J. Math. Anal. Appl., 157 (1991), 189-210. doi: 10.1016/0022-247X(91)90144-O
    [50] J. Yang, H. Liu, Strong convergence result for solving monotone variational inequalities in Hilbert space, Numer. Algorithms, 80 (2019), 741-752. doi: 10.1007/s11075-018-0504-4
    [51] J. Yang, H. Liu, G. Li, Convergence of a subgradient extragradient algorithm for solving monotone variational inequalities, Numer. Algorithms, 84 (2020), 389-405. doi: 10.1007/s11075-019-00759-x
    © 2021 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)