
In this paper, strong convergence results for α−inverse strongly monotone operators under new algorithms in the framework of Hilbert spaces are discussed. Our algorithms combine the inertial Mann forward-backward method with the CQ-shrinking projection method and the viscosity algorithm. Our methods accelerate the modified inertial Mann Halpern and viscosity algorithms. Finally, numerical examples are presented to illustrate the applications, performance, and effectiveness of our algorithms.
Citation: Hasanen A. Hammad, Habib ur Rehman, Manuel De la Sen. Accelerated modified inertial Mann and viscosity algorithms to find a fixed point of α−inverse strongly monotone operators[J]. AIMS Mathematics, 2021, 6(8): 9000-9019. doi: 10.3934/math.2021522
Assume that C is a nonempty closed convex subset of a Hilbert space ℷ. A self-mapping T:C→C is called nonexpansive if
‖Tκ−Tω‖≤‖κ−ω‖, |
for all κ,ω∈C. We denote by F(T)={κ∈C: Tκ=κ} the set of fixed points of a mapping T.
In this paper, we discuss the following inclusion problem: Find ˜κ∈ℷ such that
0∈Ξ˜κ+Π˜κ, | (1.1) |
where Ξ:ℷ→ℷ is an operator and Π:ℷ→2ℷ is a set-valued operator. Many real-world problems can be recast in the form (1.1), for example convex optimization and feasibility problems, image restoration problems, and monotone variational inequalities (see [1,2,3]). More concretely, several problems in machine learning and the linear inverse problem can be modeled mathematically with this formulation.
The classical approach to problem (1.1) (whose solution set is denoted by (Ξ+Π)−1(0)) is the forward-backward splitting method [4,5,6,7,8], which is presented as follows: κ1∈ℷ and
κn+1=(I+τΠ)−1(κn−τΞκn), n≥1, | (1.2) |
where τ>0 and I is the identity mapping. In this scheme, the step involving Ξ is the forward step and the step involving Π is the backward step; the two operators are never evaluated together as a sum. As special cases, this technique includes classical results such as the gradient method [9,10] and the proximal point algorithm [11,12].
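The forward-backward iteration (1.2) is short to implement. The sketch below is a minimal illustration with operators of our own choosing (not from the paper): Ξκ=κ−3, the gradient of h(κ)=(κ−3)2/2, and Π the normal cone of [0,1], whose resolvent (I+τΠ)−1 clips its argument to [0,1].

```python
import numpy as np

def forward_backward(x0, xi, resolvent, tau, n_iter=50):
    """Iterate x_{n+1} = (I + tau*Pi)^{-1}(x_n - tau*Xi(x_n)), scheme (1.2)."""
    x = x0
    for _ in range(n_iter):
        x = resolvent(x - tau * xi(x), tau)   # forward step, then backward step
    return x

# Illustrative choices (not from the paper):
xi = lambda x: x - 3.0                                   # gradient of (x - 3)^2 / 2
resolvent = lambda v, tau: float(np.clip(v, 0.0, 1.0))   # resolvent of the normal cone of [0, 1]

x_star = forward_backward(5.0, xi, resolvent, tau=1.0)
# The unique zero of Xi + Pi on [0, 1] is x = 1: Xi(1) = -2 lies in the normal cone at 1.
```

Here τ=1 is admissible since Ξ, being the gradient of a convex function with 1-Lipschitz gradient, is 1-inverse strongly monotone.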
In 1979, strong splitting algorithms in a real Hilbert space were constructed by Lions and Mercier [13] as follows:
κn+1=(2JΞτ−I)(2JΠτ−I)κn, n≥1, | (1.3) |
and
κn+1=JΞτ(2JΠτ−I)κn+(I−JΠτ)κn, n≥1, | (1.4) |
where JΩτ=(I+τΩ)−1. The algorithms (1.3) and (1.4) are usually called the Peaceman-Rachford algorithm [7] and the Douglas-Rachford algorithm [14], respectively. In general, both algorithms converge only weakly [15].
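As a small sanity check of the Douglas-Rachford scheme (1.4), one can take resolvents with closed forms (our illustrative choices, not from the paper): JΞτ as the proximity operator of |x| (soft-thresholding) and JΠτ as the proximity operator of (x−3)2/2. The scheme then finds the zero of ∂|·|+(·−3), namely x=2, as the limit of the shadow sequence JΠτκn.

```python
import numpy as np

def soft_threshold(v, t):
    """Resolvent of t*d|.|, i.e. the proximity operator of t|x|."""
    return float(np.sign(v) * max(abs(v) - t, 0.0))

def prox_quad(v, t, b=3.0):
    """Resolvent of t*d[(x-b)^2/2]: argmin_u (u-b)^2/2 + (u-v)^2/(2t)."""
    return (v + t * b) / (1.0 + t)

tau, kappa = 1.0, 0.0
for _ in range(80):
    x = prox_quad(kappa, tau)                                  # J^Pi_tau(kappa_n)
    kappa = soft_threshold(2 * x - kappa, tau) + (kappa - x)   # scheme (1.4)
shadow = prox_quad(kappa, tau)
# 0 = sign(x) + (x - 3) has the unique solution x = 2.
```

The governing sequence {κn} converges to 1, while the shadow sequence JΠτκn converges to the solution 2.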
Problem (1.1) also arises for two monotone and accretive mappings in Hilbert and Banach spaces: a stationary solution to the initial value problem
0∈∂℘/∂t−ℜ℘, ℘(0)=℘0, |
can be rewritten as (1.1) when the governing maximal monotone operator ℜ takes the form ℜ=Ξ+Π [13]. In optimization, one often needs [4] to solve a minimization problem of the form
minκ∈ℷh(κ)+m(κ), | (1.5) |
where h,m:ℷ→(−∞,∞] are proper (that is, not identically +∞) and lower semi-continuous convex functions such that h is differentiable with an L-Lipschitz gradient, and the proximal mapping of m is
κ⟼argminω∈ℷ{m(ω)+‖κ−ω‖2/(2τ)}. |
In particular, if Ξ=∇h and Π=∂m, where ∇h is the gradient of h and ∂m is the subdifferential of m which is defined by ∂m(κ)={q∈ℷ:m(ω)≥m(κ)+⟨q,ω−κ⟩, for all ω∈ℷ}, therefore problem (1.1) becomes (1.5) and (1.2) becomes
κn+1=proxτm(κn−τ∇h(κn)), n≥1, |
where τ>0 is the step-size and proxτm=(I+τ∂m)−1 is the proximity operator of m.
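For a concrete instance of this proximal gradient iteration, take h(x)=‖x−b‖2/2 and m(x)=λ‖x‖1, so that prox_{τm} is componentwise soft-thresholding; the data b and the weight λ below are arbitrary illustrative choices, and the minimizer has the closed form given by soft-thresholding b by λ.

```python
import numpy as np

def soft_threshold(v, t):
    """prox of t*||.||_1, applied componentwise."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(b, lam, tau=0.5, n_iter=200):
    """Minimize h(x) + m(x) with h(x) = ||x - b||^2/2 and m(x) = lam*||x||_1."""
    x = np.zeros_like(b)
    for _ in range(n_iter):
        x = soft_threshold(x - tau * (x - b), tau * lam)  # prox_{tau*m}(x - tau*grad h(x))
    return x

b = np.array([3.0, -0.5, 1.2])
x_min = proximal_gradient(b, lam=1.0)
# Closed-form minimizer: soft_threshold(b, 1.0) = [2.0, 0.0, 0.2]
```

Since L=1 here, any step-size τ∈(0,2) is admissible; τ=0.5 makes the update a contraction toward the minimizer.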
The rest of the paper is organized as follows. Section 2 compiles previously existing algorithms related to the well-known Mann iteration together with some of its modifications and extensions. Section 3 gives preliminary lemmas and definitions which are then used to derive the main results. Section 4 presents the new strong convergence results and their associated iterative algorithms, in particular the so-called inertial CQ-projection algorithm and the so-called inertial shrinking CQ-projection viscosity algorithm.
The iteration κn+1=Tκn=...=Tnκ0 is called the Picard iteration, where κ0 is a starting point. It is one of the simplest iterative methods, but it has a defect: its convergence cannot be guaranteed even in the weak topology. The Mann iteration algorithm is an effective way to overcome this defect; it generates the iterative sequence {κn} through the following convex combination:
κn+1=ζnκn+(1−ζn)Tκn, n≥0. | (2.1) |
For nonexpansive mappings, the iteration (2.1) is useful for solving fixed point problems and provides a unified framework for different algorithms. However, it also has a shortcoming: even in a Hilbert space, under certain conditions, the iterative sequence {κn} defined by (2.1) converges only weakly. Many attempts to obtain strong convergence were presented in [16,17,18].
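The contrast between Picard and Mann iterates is easy to see on a toy example. For the nonexpansive map T(κ)=−κ (an illustrative choice with F(T)={0}), Picard iterates oscillate forever, while the Mann scheme (2.1) with ζn=0.4 averages the oscillation away:

```python
T = lambda x: -x   # nonexpansive, unique fixed point 0

# Picard iteration: x, Tx, T^2 x, ... oscillates and never converges.
x, picard = 1.0, [1.0]
for _ in range(4):
    x = T(x)
    picard.append(x)       # [1.0, -1.0, 1.0, -1.0, 1.0]

# Mann iteration (2.1) with zeta_n = 0.4: x_{n+1} = 0.4*x_n + 0.6*T(x_n) = -0.2*x_n.
x = 1.0
for _ in range(50):
    x = 0.4 * x + 0.6 * T(x)
# |x_n| = 0.2^n -> 0, so the Mann iterates converge to the fixed point.
```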
In 2001, Alvarez and Attouch [19] applied the heavy ball method of Polyak [20] to the proximal point algorithm for maximal monotone operators, obtaining the inertial proximal point algorithm. The algorithm takes the form
{ωn=κn+θn(κn−κn−1),κn+1=(I+τnΠ)−1ωn, n≥1. | (2.2) |
It was proved that if {τn} is nondecreasing and {θn}⊂[0,1) with
∞∑n=1θn‖κn−κn−1‖2<∞, | (2.3) |
then algorithm (2.2) converges weakly to a zero of Π. In particular, condition (2.3) holds for θn<1/3. Here θn is an extrapolation factor and the inertia is represented by the term θn(κn−κn−1).
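A minimal sketch of the inertial algorithm (2.2), under the illustrative assumptions Π(κ)=κ (a maximal monotone operator with resolvent (I+τΠ)−1v=v/(1+τ) and unique zero 0) and a constant extrapolation factor θn=0.2<1/3:

```python
resolvent = lambda v, tau: v / (1.0 + tau)   # (I + tau*Pi)^{-1} for Pi(x) = x

theta, tau = 0.2, 1.0        # theta_n < 1/3, so condition (2.3) holds here
x_prev = x = 2.0
for _ in range(100):
    w = x + theta * (x - x_prev)          # inertial extrapolation, scheme (2.2)
    x_prev, x = x, resolvent(w, tau)      # backward (proximal) step
# The iterates converge to the unique zero of Pi, namely 0.
```

In this linear example the update is x_{n+1} = 0.6 x_n − 0.1 x_{n−1}, whose characteristic roots have modulus √0.1 < 1, so the convergence is geometric.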
Moudafi and Oliny [21] added a single-valued, co-coercive and Lipschitz continuous operator Ξ to the inertial proximal point algorithm to build the following algorithm:
{ωn=κn+ℏn(κn−κn−1),κn+1=(I+τnΠ)−1(ωn−τnΞκn), n≥1. | (2.4) |
Under condition (2.3), a weak convergence result for algorithm (2.4) was obtained, provided that τn<2/L, where L is the Lipschitz constant of Ξ. Note that for ℏn>0 the algorithm (2.4) does not take the form of a forward-backward splitting algorithm, since the operator Ξ is still evaluated at the point κn.
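Algorithm (2.4) can be sketched on the same kind of toy problem (illustrative choices, not from the paper): Ξκ=κ−3 with Lipschitz constant L=1, Π the normal cone of [0,1] with resolvent given by clipping, and Ξ evaluated at the current iterate κn as noted above:

```python
import numpy as np

xi = lambda x: x - 3.0                                   # Lipschitz with L = 1
resolvent = lambda v, tau: float(np.clip(v, 0.0, 1.0))   # (I + tau*Pi)^{-1}, Pi the normal cone of [0, 1]

h, tau = 0.2, 1.0          # step-size tau < 2/L
x_prev = x = 5.0
for _ in range(60):
    w = x + h * (x - x_prev)                        # inertial step of (2.4)
    x_prev, x = x, resolvent(w - tau * xi(x), tau)  # Xi evaluated at kappa_n, not omega_n
# The unique zero of Xi + Pi on [0, 1] is x = 1.
```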
Strong convergence is often preferable to weak convergence, since strongly convergent methods tend to be more efficient and robust in potential applications.
A strongly convergent modification of the Mann algorithm, called the CQ-algorithm, was presented by Nakajo and Takahashi [16]:
{κ0∈C chosen arbitrarily,ωn=ℏnκn+(1−ℏn)Tκn,Cn={p∈C:‖ωn−p‖≤‖κn−p‖},Qn={p∈C:⟨κ0−κn,κn−p⟩≥0},κn+1=PQn∩Cnκ0, | (2.5) |
for each n≥0, where C is defined as in the previous section. They obtained strong convergence of the sequence {κn} to PFix(T)κ0, provided that the sequence {ℏn} is bounded above by 1. For more results on CQ-algorithms for nonexpansive mappings, we refer to [22].
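Both Cn and Qn in (2.5) are halfspaces (the quadratic terms in the defining inequality of Cn cancel), so the algorithm can be run exactly in Rd once the projection onto the intersection of two halfspaces is available. The sketch below is an illustration with our own choices T(κ)=−κ and ℏn=1/2; the projection is computed by case analysis, the third case solving a 2×2 Gram system for the two active boundary hyperplanes.

```python
import numpy as np

def proj_halfspace(x, a, b):
    """Project x onto the halfspace {p : <a, p> <= b} (all of R^d if a = 0)."""
    nrm2 = a @ a
    viol = a @ x - b
    return x if nrm2 == 0 or viol <= 0 else x - (viol / nrm2) * a

def proj_two_halfspaces(x, a1, b1, a2, b2):
    """Project x onto {<a1,p> <= b1} n {<a2,p> <= b2}, assumed nonempty."""
    p = proj_halfspace(x, a1, b1)
    if a2 @ p <= b2 + 1e-12:
        return p
    p = proj_halfspace(x, a2, b2)
    if a1 @ p <= b1 + 1e-12:
        return p
    # Both constraints active: project onto the intersection of the boundaries.
    G = np.array([[a1 @ a1, a1 @ a2], [a1 @ a2, a2 @ a2]])
    lam = np.linalg.solve(G, np.array([a1 @ x - b1, a2 @ x - b2]))
    return x - lam[0] * a1 - lam[1] * a2

T = lambda x: -x          # nonexpansive with Fix(T) = {0}
hbar = 0.5                # the coefficient of (2.5), kept constant here
k0 = np.array([1.0, 0.5])
k = k0.copy()
for _ in range(25):
    w = hbar * k + (1 - hbar) * T(k)
    # C_n: ||w - p|| <= ||k - p||  rewritten as  <2(k - w), p> <= ||k||^2 - ||w||^2
    a1, b1 = 2 * (k - w), k @ k - w @ w
    # Q_n: <k0 - k, k - p> >= 0  rewritten as  <k0 - k, p> <= <k0 - k, k>
    a2, b2 = k0 - k, (k0 - k) @ k
    k = proj_two_halfspaces(k0, a1, b1, a2, b2)
# The iterates converge strongly to P_Fix(T)(k0) = 0.
```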
Motivated by the algorithm (2.5), Dong et al. [23] discussed a strong convergence result involving an inertial forward-backward algorithm for monotone inclusions: Let Ξ:ℷ→ℷ be an α−inverse strongly monotone operator and Π:ℷ→2ℷ be a maximal monotone operator such that (Ξ+Π)−1(0)≠∅. Let {αn}⊂R and the sequence {κn}⊂ℷ be generated by κ0,κ1∈ℷ and for all n≥1
{ωn=κn+αn(κn−κn−1),υn=(I+τnΠ)−1(ωn−τnΞωn),Cn={p∈ℷ:‖υn−p‖2≤‖κn−p‖2−2αn⟨κn−p,κn−1−κn⟩+α2n‖κn−1−κn‖2},Qn={p∈ℷ:⟨κ0−κn,κn−p⟩≥0},κn+1=PQn∩Cnκ0. |
Recently, Kim and Xu [17] proposed the following modified Mann iteration algorithm by combining the Halpern iterative algorithm [24] with the Mann iteration algorithm (2.1):
{ωn=αnκn+(1−αn)ℑκn,κn+1=ζnκ+(1−ζn)ωn, n≥0, | (2.6) |
for some fixed point κ∈C, where ℑ:C→C is a nonexpansive mapping with Fix(ℑ)≠∅ and {αn}, {ζn} are sequences in (0, 1). Under mild conditions, the sequence {κn} generated by (2.6) converges to a fixed point of ℑ. Many authors have worked in this direction and obtained strong convergence to a fixed point under appropriate conditions; see [25,26,27,28].
Chen et al. [18] generalized the results of [24] by introducing a new modified Mann iteration algorithm that combines the viscosity approximation algorithm [29] with the modified Mann iteration algorithm [17]. They established a strong convergence result under fewer restrictions. These results were extended to more general operators and wider classes of Banach spaces, such as quasi-nonexpansive, asymptotically quasi-nonexpansive and strictly pseudo-contractive mappings; see for instance [30,31,32,33,34,35,36].
Inspired by the contributions of [16,17,23], we obtain new algorithms by combining the inertial Mann forward-backward method, the CQ-shrinking projection method and the viscosity algorithm, and we discuss strong convergence results for these algorithms. Finally, numerical results are presented to illustrate the applications and the good acceleration performance of our algorithms. Our results extend and unify many papers in this direction, such as Kim and Xu [17], Chen et al. [18], Suzuki [37] and the papers cited in [38,39,40].
Throughout this paper, {κn} denotes a sequence in ℷ, "→" denotes strong convergence, "⇀" denotes weak convergence and PC:ℷ→C is the nearest point projection, that is, ‖κ−PCκ‖≤‖κ−ω‖ for all κ∈ℷ and ω∈C. PC is called the metric projection. It is well known that PC satisfies the following inequality:
‖PCκ−PCω‖2≤⟨PCκ−PCω,κ−ω⟩, |
for all κ,ω∈ℷ. In other words, the metric projection PC is firmly nonexpansive. Hence ⟨κ−PCκ,ω−PCκ⟩≤0 holds for all κ∈ℷ and ω∈C, see [41].
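Firm nonexpansiveness is easy to confirm numerically. The sketch below takes C to be the closed unit ball (an illustrative choice for which PC has a closed form) and checks the inequality above on random pairs:

```python
import numpy as np

def proj_ball(x):
    """Metric projection onto the closed unit ball of R^d."""
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

rng = np.random.default_rng(0)
for _ in range(1000):
    k, w = 3 * rng.normal(size=3), 3 * rng.normal(size=3)
    pk, pw = proj_ball(k), proj_ball(w)
    # firm nonexpansiveness: ||P k - P w||^2 <= <P k - P w, k - w>
    assert np.linalg.norm(pk - pw) ** 2 <= (pk - pw) @ (k - w) + 1e-9
```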
Lemma 3.1. [41] Suppose that ℷ is a real Hilbert space. Then for each κ,ω∈ℷ and a real number ℧, we have
(i) ‖κ+ω‖2≤‖κ‖2+2⟨ω,κ+ω⟩,
(ii) ‖℧κ+(1−℧)ω‖2=℧‖κ‖2+(1−℧)‖ω‖2−℧(1−℧)‖κ−ω‖2.
Lemma 3.2. [42,Theorem 1.9.10,p. 39 and Theorem 2.2.13,p. 57] Suppose that ℷ is a real Hilbert space and {κn} is a sequence in ℷ. Then the following properties are fulfilled:
(i) If κn⇀κ and ‖κn‖→‖κ‖ as n→∞, then limn→∞κn=κ; that is, ℷ has the Kadec-Klee property.
(ii) If κn⇀κ as n→∞, then ‖κ‖≤lim infn→∞‖κn‖.
Lemma 3.3. [43] Let C≠∅ be a closed convex subset of a real Hilbert space ℷ. Then for each κ,ω,υ∈ℷ and ð∈R, the following set is closed and convex:
{η∈C:‖ω−η‖2≤‖κ−η‖2+⟨υ,η⟩+ð}. |
Lemma 3.4. [21] Let C≠∅ be a closed convex subset of a real Hilbert space ℷ and let PC:ℷ→C be the metric projection. Then
‖ω−PCκ‖2+‖κ−PCκ‖2≤‖κ−ω‖2 |
for all κ∈ℷ and ω∈C.
Lemma 3.5. [44] Let T be a nonexpansive self-mapping of a closed convex subset C of a Hilbert space ℷ. Then the mapping I−T is demiclosed; that is, whenever {κn} is a sequence in C which weakly converges to some κ∈C and the sequence {(I−T)(κn)} strongly converges to some ω, it follows that (I−T)(κ)=ω.
Definition 3.6. Suppose that D(Ξ)⊂ℷ and R(Ξ)⊂ℷ are the domain and the range of an operator Ξ, respectively. An operator Ξ is called:
1). monotone if
⟨κ−ω,Ξκ−Ξω⟩≥0 for all κ,ω∈D(Ξ), |
2). maximal monotone if it is monotone and its graph
G(Ξ)={(κ,Ξκ):κ∈ℷ} |
is not properly contained in the graph of any other monotone mapping,
3). β−strongly monotone if there exists β>0 such that
⟨κ−ω,Ξκ−Ξω⟩≥β‖κ−ω‖2 for all κ,ω∈D(Ξ), |
4). α−inverse strongly monotone (for short α-ism) if there exists α>0 such that
⟨κ−ω,Ξκ−Ξω⟩≥α‖Ξκ−Ξω‖2 for all κ,ω∈D(Ξ). |
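As a quick numerical illustration of 4), the linear operator Ξκ=Aκ given by a symmetric positive definite matrix A is α-inverse strongly monotone with α=1/λmax(A), where λmax(A) is the largest eigenvalue, since ⟨d,Ad⟩≥(1/λmax)‖Ad‖2 for every d. The sketch below (with an arbitrary 2×2 matrix of our own choosing) verifies the defining inequality on random pairs:

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])      # symmetric positive definite
alpha = 1.0 / np.linalg.eigvalsh(A).max()   # Xi(x) = A x is alpha-ism with alpha = 1/lambda_max

rng = np.random.default_rng(1)
for _ in range(1000):
    k, w = rng.normal(size=2), rng.normal(size=2)
    d = k - w
    # defining inequality: <k - w, Xi k - Xi w> >= alpha * ||Xi k - Xi w||^2
    assert d @ (A @ d) >= alpha * np.linalg.norm(A @ d) ** 2 - 1e-9
```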
Lemma 3.7. [5] Let ℷ be a real Hilbert space, Ξ:ℷ→ℷ be an α-inverse strongly monotone operator and Π: ℷ→2ℷ be a maximal monotone operator. For each τ>0, we define
Tτ=JΠτ(I−τΞ)=(I+τΠ)−1(I−τΞ), |
then, we get
(i) For τ>0, F(Tτ)=(Ξ+Π)−1(0);
(ii) For 0<s≤τ and κ∈ℷ, ‖κ−Tsκ‖≤2‖κ−Tτκ‖.
Lemma 3.8. [45] Let ℷ be a real Hilbert space, Ξ: ℷ→ℷ be an α−inverse strongly monotone operator and Π:ℷ→2ℷ be a maximal monotone operator. For each τ>0, we have
‖Tτκ−Tτω‖2≤‖κ−ω‖2−τ(2α−τ)‖Ξκ−Ξω‖2, |
for all κ,ω∈ℷ.
According to the notions of inertial CQ and shrinking projection technique with the Halpern algorithm and viscosity algorithm, respectively, we build two new algorithms in this section and their strong convergence in a Hilbert space is discussed.
Theorem 4.1. (Inertial shrinking projection algorithm). Let ℷ be a real Hilbert space, Ξ:ℷ→ℷ be an α−inverse strongly monotone operator, Π:ℷ→2ℷ be a maximal monotone operator such that Θ=(Ξ+Π)−1(0)≠∅, let {αn} be a bounded sequence, and let {λn} and {ρn} be two given sequences in (0,1). A sequence {κn} is generated by
{ωn=κn+αn(κn−κn−1),ϖn=λnωn+(1−λn)Υnωnυn=ρnκ1+(1−ρn)ϖn,Cn+1={p∈Cn:‖υn−p‖2≤‖κn−p‖2+α2n‖κn−1−κn‖2−2αn(1−ρn)⟨κn−p,κn−1−κn⟩+2ρn⟨κ1−p,υn−p⟩},κn+1=PCn+1(κ1), | (4.1) |
for each n≥1 and κ0,κ1∈ℷ, where Υn=(I+τnΠ)−1(I−τnΞ). If the following hypotheses hold:
(\blacktriangleright _{1}) \sum_{n = 0}^{\infty }\rho _{n} = \infty and \lim_{n\rightarrow \infty }\rho _{n} = 0,
(\blacktriangleright _{2}) 0 < \lim \inf_{n\rightarrow \infty }\tau _{n}\leq \lim \sup_{n\rightarrow \infty }\tau _{n} < 2\alpha,
then the sequence \{\kappa _{n}\} generated by (4.1) converges strongly to a point \theta = P_{\Theta }(\kappa _{1}).
Proof. We split the proof into the following steps:
Step (i). Show that P_{C_{n+1}}\kappa _{1} is well-defined for each \kappa _{1}\in \gimel, n\geq 1 and \Theta \subset C_{n+1}. It follows from the condition (\blacktriangleright _{2}) and Lemma 3.8 that T_{\tau _{n}} = (I+\tau _{n}\Pi)^{-1}(I-\tau _{n}\Xi) is a nonexpansive mapping. Lemma 3.7 guarantees that \Theta is a closed and convex set, and Lemma 3.3 shows that C_{n+1} is closed and convex for all n\geq 1.
Let p\in \Theta, we have
\begin{align} \left\Vert \omega _{n}-p\right\Vert ^{2}& = \left\Vert (\kappa _{n}-p)-\alpha _{n}(\kappa _{n-1}-\kappa _{n})\right\Vert ^{2} \\ & \leq \left\Vert \kappa _{n}-p\right\Vert ^{2}-2\alpha _{n}\langle \kappa _{n}-p, \kappa _{n-1}-\kappa _{n}\rangle +\alpha _{n}^{2}\left\Vert \kappa _{n-1}-\kappa _{n}\right\Vert ^{2}. \end{align} | (4.2) |
Furthermore, by Lemma 3.1 (ii) and Lemma 3.8, we can write
\begin{eqnarray} \left\Vert \varpi _{n}-p\right\Vert ^{2} & = &\left\Vert \lambda _{n}\omega _{n}+(1-\lambda _{n})(I+\tau _{n}\Pi )^{-1}(\omega _{n}-\tau _{n}\Xi \omega _{n})-p\right\Vert ^{2} \\ & = &\left\Vert \lambda _{n}(\omega _{n}-p)+(1-\lambda _{n})(T_{\tau _{n}}\omega _{n}-p)\right\Vert ^{2} \\ & = &(1-\lambda _{n})\left\Vert T_{\tau _{n}}\omega _{n}-p\right\Vert ^{2}+\lambda _{n}\left\Vert \omega _{n}-p\right\Vert ^{2}-\lambda _{n}(1-\lambda _{n})\left\Vert T_{\tau _{n}}\omega _{n}-\omega _{n}\right\Vert ^{2} \\ &\leq &\lambda _{n}\left\Vert \omega _{n}-p\right\Vert ^{2}+(1-\lambda _{n})\left\Vert T_{\tau _{n}}\omega _{n}-p\right\Vert ^{2} \\ & = &\lambda _{n}\left\Vert \omega _{n}-p\right\Vert ^{2}+(1-\lambda _{n})\left\Vert T_{\tau _{n}}\omega _{n}-T_{\tau _{n}}p\right\Vert ^{2} \\ &\leq &\lambda _{n}\left\Vert \omega _{n}-p\right\Vert ^{2}+(1-\lambda _{n})\left( \left\Vert \omega _{n}-p\right\Vert ^{2}-\tau _{n}(2\alpha -\tau _{n})\left\Vert \Xi \omega _{n}-\Xi p\right\Vert ^{2}\right) \\ &\leq &\lambda _{n}\left\Vert \omega _{n}-p\right\Vert ^{2}+(1-\lambda _{n})\left\Vert \omega _{n}-p\right\Vert ^{2} \\ & = &\left\Vert \omega _{n}-p\right\Vert ^{2}. \end{eqnarray} | (4.3) |
Also, by Lemma 3.1 (i), we have
\begin{eqnarray} \left\Vert \upsilon _{n}-p\right\Vert ^{2} & = &\left\Vert (1-\rho _{n})\left( \varpi _{n}-p\right) +\rho _{n}\left( \kappa _{1}-p\right) \right\Vert ^{2} \\ & = &(1-\rho _{n})\left\Vert \varpi _{n}-p\right\Vert ^{2}+2\rho _{n}\langle \kappa _{1}-p, \upsilon _{n}-p\rangle . \end{eqnarray} | (4.4) |
Applying (4.2) and (4.3) in (4.4), and since (1-\rho _{n}) < 1, we get
\begin{eqnarray} \left\Vert \upsilon _{n}-p\right\Vert ^{2} &\leq &(1-\rho _{n})\left\Vert \kappa _{n}-p\right\Vert ^{2}+\alpha _{n}^{2}(1-\rho _{n})\left\Vert \kappa _{n-1}-\kappa _{n}\right\Vert ^{2} \\ &&-2\alpha _{n}(1-\rho _{n})\langle \kappa _{n}-p, \kappa _{n-1}-\kappa _{n}\rangle +2\rho _{n}\langle \kappa _{1}-p, \upsilon _{n}-p\rangle \\ &\leq &\left\Vert \kappa _{n}-p\right\Vert ^{2}+\alpha _{n}^{2}\left\Vert \kappa _{n-1}-\kappa _{n}\right\Vert ^{2} \\ &&-2\alpha _{n}(1-\rho _{n})\langle \kappa _{n}-p, \kappa _{n-1}-\kappa _{n}\rangle +2\rho _{n}\langle \kappa _{1}-p, \upsilon _{n}-p\rangle . \end{eqnarray} | (4.5) |
It is clear that \Theta \subset C_{1} = \gimel . Assume that \Theta \subset C_{n} for some n\geq 1 . Then p\in C_{n} and, by (4.5), p\in C_{n+1} . Thus \Theta \subset C_{n+1} for all n\geq 1 , that is, P_{C_{n+1}}\kappa _{1} is well-defined.
Step (ii). Prove that \{\kappa _{n}\} is bounded. Since \Theta is a nonempty, closed and convex subset of \gimel , there is a unique \wp \in \Theta such that \wp = P_{\Theta }\kappa _{1} . Since \kappa _{n} = P_{C_{n}}\kappa _{1} , C_{n+1}\subset C_{n} and \kappa _{n+1}\in C_{n} for all n\geq 1 , we can get
\begin{equation} \Vert \kappa _{n}-\kappa _{1}\Vert \leq \Vert \kappa _{n+1}-\kappa _{1}\Vert , \text{ for all }n\geq 1. \end{equation} | (4.6) |
Further, as \Theta \subset C_{n}, we obtain
\begin{equation} \Vert \kappa _{n}-\kappa _{1}\Vert \leq \Vert \wp-\kappa _{1}\Vert , \text{ for all }n\geq 1. \end{equation} | (4.7) |
It follows from (4.6) and (4.7) that \lim_{n\rightarrow \infty }\Vert \kappa _{n}-\kappa _{1}\Vert exists. This implies that \{\kappa _{n}\} is bounded.
Step (iii). Show that \{\kappa _{n}\} converges strongly to some \theta \in \gimel . By the definition of C_{n} , for m > n , we observe that \kappa _{m} = P_{C_{m}}\kappa _{1}\in C_{m}\subset C_{n} . From Lemma 3.4, we have
\begin{equation*} \Vert \kappa _{m}-\kappa _{n}\Vert ^{2}\leq \Vert \kappa _{m}-\kappa _{1}\Vert ^{2}-\Vert \kappa _{n}-\kappa _{1}\Vert ^{2}. \end{equation*} |
Applying Step (ii), we conclude that \lim_{n, m\rightarrow \infty }\Vert \kappa _{m}-\kappa _{n}\Vert ^{2} = 0 . Thus \{\kappa _{n}\} is a Cauchy sequence, and hence \kappa _{n}\rightarrow \theta as n\rightarrow \infty for some \theta \in \gimel . In particular, we get
\begin{equation} \lim\limits_{n\rightarrow \infty }\Vert \kappa _{n+1}-\kappa _{n}\Vert = 0. \end{equation} | (4.8) |
Step (iv). Prove that \theta \in \Theta. It follows from the boundedness of the sequence \{\alpha _{n}\} and (4.8) that
\begin{equation} \Vert \omega _{n}-\kappa _{n}\Vert = \left\vert \alpha _{n}\right\vert \Vert \kappa _{n}-\kappa _{n-1}\Vert \rightarrow 0\text{ as }n\rightarrow \infty . \end{equation} | (4.9) |
By (4.5), (4.8) and the condition (\blacktriangleright _{1}), we get
\begin{eqnarray} \Vert \upsilon _{n}-\kappa _{n}\Vert ^{2} &\leq &\left\Vert \kappa _{n}-\kappa _{n}\right\Vert ^{2}+\alpha _{n}^{2}\left\Vert \kappa _{n-1}-\kappa _{n}\right\Vert ^{2} \\ &&-2\alpha _{n}(1-\rho _{n})\langle \kappa _{n}-\kappa _{n}, \kappa _{n-1}-\kappa _{n}\rangle +2\rho _{n}\langle \kappa _{1}-\kappa _{n}, \upsilon _{n}-\kappa _{n}\rangle \\ & = &\alpha _{n}^{2}\left\Vert \kappa _{n-1}-\kappa _{n}\right\Vert ^{2}+2\rho _{n}\langle \kappa _{1}-\kappa _{n}, \upsilon _{n}-\kappa _{n}\rangle \rightarrow 0 \text{ as }n\rightarrow \infty . \end{eqnarray} | (4.10) |
Applying (4.8)–(4.10), we can write
\begin{equation} \Vert \kappa _{n+1}-\omega _{n}\Vert \leq \Vert \kappa _{n+1}-\kappa _{n}\Vert +\Vert \omega _{n}-\kappa _{n}\Vert \rightarrow 0\text{ as } n\rightarrow \infty , \end{equation} | (4.11) |
\begin{equation} \Vert \kappa _{n+1}-\upsilon _{n}\Vert \leq \Vert \kappa _{n+1}-\kappa _{n}\Vert +\Vert \upsilon _{n}-\kappa _{n}\Vert \rightarrow 0\text{ as } n\rightarrow \infty. \end{equation} | (4.12) |
By (4.3) and (4.11), we observe that
\begin{equation} \left\Vert \varpi _{n}-\kappa _{n+1}\right\Vert \leq \left\Vert \omega _{n}-\kappa _{n+1}\right\Vert \rightarrow 0\text{ as }n\rightarrow \infty . \end{equation} | (4.13) |
The inequalities (4.12) and (4.13) lead to
\begin{equation} \left\Vert \upsilon _{n}-\omega _{n}\right\Vert \leq \left\Vert \upsilon _{n}-\kappa _{n+1}\right\Vert +\left\Vert \kappa _{n+1}-\omega _{n}\right\Vert \rightarrow 0\text{ as }n\rightarrow \infty , \end{equation} | (4.14) |
and
\begin{equation} \left\Vert \upsilon _{n}-\varpi _{n}\right\Vert \leq \left\Vert \upsilon _{n}-\kappa _{n+1}\right\Vert +\left\Vert \varpi _{n}-\kappa _{n+1}\right\Vert \rightarrow 0\text{ as }n\rightarrow \infty. \end{equation} | (4.15) |
Now, we have
\begin{eqnarray*} \left\Vert T_{\tau _{n}}\omega _{n}-\omega _{n}\right\Vert & = &\left\Vert \frac{1}{(1-\lambda _{n})}\left[ \varpi _{n}-\lambda _{n}\omega _{n}\right] -\omega _{n}\right\Vert \\ & = &\frac{1}{(1-\lambda _{n})}\left\Vert \varpi _{n}-\lambda _{n}\omega _{n}-(1-\lambda _{n})\omega _{n}\right\Vert \\ & = &\frac{1}{(1-\lambda _{n})}\left\Vert \varpi _{n}-\omega _{n}\right\Vert \\ &\leq &\frac{1}{(1-\lambda _{n})}\left[ \left\Vert \varpi _{n}-\upsilon _{n}\right\Vert +\left\Vert \upsilon _{n}-\omega _{n}\right\Vert \right] . \end{eqnarray*} |
It follows from (4.14) and (4.15) that
\begin{equation} \lim\limits_{n\rightarrow \infty }\left\Vert T_{\tau _{n}}\omega _{n}-\omega _{n}\right\Vert = 0. \end{equation} | (4.16) |
Since \liminf_{n\rightarrow \infty }\tau _{n} > 0, there is \varepsilon > 0 such that \tau _{n}\geq \varepsilon and \varepsilon \in (0, 2\alpha) for all n\geq 1. Then by Lemma 3.7 (ii) and (4.16), we have
\begin{equation} \Vert T_{\varepsilon}\omega _{n}-\omega _{n}\Vert \leq 2\Vert T_{\tau _{n}}\omega _{n}-\omega _{n}\Vert \rightarrow 0\text{ as }n\rightarrow \infty . \end{equation} | (4.17) |
From (4.9), since \kappa _{n}\rightarrow \theta , we also have \omega _{n}\rightarrow \theta . Since T_{\varepsilon } is a nonexpansive and continuous mapping, (4.17) implies \theta \in F(T_{\varepsilon }) = \Theta .
Step (v). Show that \theta = P_{\Theta }(\kappa _{1}). Since \kappa _{n} = P_{C_{n}}\kappa _{1}, and \Theta \subset C_{n}, we can get
\begin{equation} \langle \kappa _{1}-\kappa _{n}, \kappa _{n}-p\rangle \geq 0, \ \forall p\in \Theta . \end{equation} | (4.18) |
Setting n\rightarrow \infty in (4.18), we have
\begin{equation*} \langle \kappa _{1}-\theta , \theta -p\rangle \geq 0, \ \forall p\in \Theta . \end{equation*} |
This shows that \theta = P_{\Theta }(\kappa _{1}) . The proof is finished.
Theorem 4.2. (Inertial CQ-projection algorithm) (ICQMHA). Assume that all requirements of Theorem 4.1 are met. Then the sequence \{\kappa _{n}\} generated by
\begin{equation} \left\{ \begin{array}{l} \omega _{n} = \kappa _{n}+\alpha _{n}(\kappa _{n}-\kappa _{n-1}), \\ \varpi _{n} = \lambda _{n}\omega _{n}+(1-\lambda _{n})\Upsilon _{n}\omega _{n} \\ \upsilon _{n} = \rho _{n}\kappa _{1}+(1-\rho _{n})\varpi _{n}, \\ C_{n} = \left\{ \begin{array}{c} p\in \gimel :\left\Vert \upsilon _{n}-p\right\Vert ^{2}\leq \left\Vert \kappa _{n}-p\right\Vert ^{2}+\alpha _{n}^{2}\left\Vert \kappa _{n-1}-\kappa _{n}\right\Vert ^{2} \\ -2\alpha _{n}(1-\rho _{n})\langle \kappa _{n}-p, \kappa _{n-1}-\kappa _{n}\rangle +2\rho _{n}\langle \kappa _{1}-p, \upsilon _{n}-p\rangle \end{array} \right\} , \\ Q_{n} = \left\{ p\in \gimel :\langle p-\kappa _{n}, \kappa _{1}-\kappa _{n}\rangle \leq 0\right\} , \\ \kappa _{n+1} = P_{C_{n}\cap Q_{n}}(\kappa _{1}), \mathit{\text{}}n\geq 1, \end{array} \right. \end{equation} | (4.19) |
converges strongly to a point \theta = P_{\Theta }(\kappa _{1}).
Proof. The proof is divided into the following steps:
Step (1). Demonstrate that \{\kappa _{n}\} is well-defined for each \kappa _{1}\in \gimel and that \Theta \subset Q_{n}\cap C_{n} for all n\geq 1 .
By Lemma 3.3, C_{n} is a closed and convex subset of \gimel . Also, rewriting the set Q_{n} as
\begin{equation*} Q_{n} = \{p\in \gimel :\ \langle \kappa _{1}-\kappa _{n}, p\rangle \leq \langle \kappa _{1}-\kappa _{n}, \kappa _{n}\rangle \}, \end{equation*} |
we see that Q_{n} is also a closed and convex subset of \gimel . So Q_{n}\cap C_{n} is closed and convex for all n\geq 1. Assume that p\in \Theta . In the same manner as in Theorem 4.1, we have
\begin{eqnarray*} \left\Vert \upsilon_{n} -p\right\Vert ^{2} &\leq &\left\Vert \kappa _{n}-p\right\Vert ^{2}+\alpha _{n}^{2}\left\Vert \kappa _{n-1}-\kappa _{n}\right\Vert ^{2} \\ &&-2\alpha _{n}(1-\rho _{n})\langle \kappa _{n}-p, \kappa _{n-1}-\kappa _{n}\rangle +2\rho _{n}\langle \kappa _{1}-p, \upsilon _{n}-p\rangle . \end{eqnarray*} |
Thus, p\in C_{n} for all n\geq 1, it implies that \Theta \subset C_{n} for all n\geq 1.
For n = 1 , we have Q_{1} = \gimel and hence \Theta \subset C_{1}\cap Q_{1} . Suppose that \Theta \subset C_{l}\cap Q_{l} for some l\geq 1 . Since \kappa _{l+1} = P_{C_{l}\cap Q_{l}}(\kappa _{1}) , we get
\begin{equation*} \langle b-\kappa _{l+1}, \kappa _{1}-\kappa _{l+1}\rangle \leq 0, \end{equation*} |
for each b\in C_{l}\cap Q_{l}. Since \Theta \subseteq C_{l}\cap Q_{l} and p\in \Theta , we have
\begin{equation*} \langle p-\kappa _{l+1}, \kappa _{1}-\kappa _{l+1}\rangle \leq 0. \end{equation*} |
This leads to p\in Q_{l+1} , and hence \Theta \subseteq Q_{l+1}. Hence, we get \Theta \subseteq C_{l+1}\cap Q_{l+1} . Based on the above results, \{\kappa _{n}\} is well defined and \Theta \subset C_{n}\cap Q_{n}.
Step (2). Show that \{\kappa _{n}\} is bounded. By Algorithm (4.19), we can write
\begin{equation*} \langle \xi -\kappa _{n}, \kappa _{1}-\kappa _{n}\rangle \leq 0, \text{ for all }\xi \in Q_{n}\text{, }n\geq 1. \end{equation*} |
This implies that \kappa _{n} = P_{Q_{n}}(\kappa _{1}). Since \Theta \subset Q_{n}, we get
\begin{equation} \Vert \kappa _{n}-\kappa _{1}\Vert \leq \Vert \kappa _{1}-\xi \Vert , \text{ for all }\xi \in \Theta. \end{equation} | (4.20) |
Again, since \kappa _{n+1}\in Q_{n}, we have
\begin{equation} \Vert \kappa _{n}-\kappa _{1}\Vert \leq \Vert \kappa _{n+1}-\kappa _{1}\Vert . \end{equation} | (4.21) |
It follows from (4.20) and (4.21) that \lim_{n\rightarrow \infty }\Vert \kappa _{n}-\kappa _{1}\Vert exists. Hence \{\kappa _{n}\} is bounded.
Step (3). Prove that \lim_{n\rightarrow \infty }\Vert \kappa _{n+1}-\kappa _{n}\Vert = 0 . Since \kappa _{n+1}\in Q_{n} and \kappa _{n} = P_{Q_{n}}(\kappa _{1}) , it follows from Lemma 3.4 that
\begin{equation*} \Vert \kappa _{n+1}-\kappa _{n}\Vert ^{2}\leq \Vert \kappa _{n+1}-\kappa _{1}\Vert ^{2}-\Vert \kappa _{n}-\kappa _{1}\Vert ^{2}\rightarrow 0\text{ as }n\rightarrow \infty . \end{equation*} |
This implies that \lim_{n\rightarrow \infty }\Vert \kappa _{n+1}-\kappa _{n}\Vert = 0 .
Step (4). Show that \theta \in \Theta . This follows as in Step (iv) of Theorem 4.1.
Step (5). Show that \theta = P_{\Theta }(\kappa _{1}) . By the same argument as in Step (iv) of Theorem 4.1, we obtain
\begin{equation} \Vert T_{\varepsilon }\omega _{n}-\omega _{n}\Vert \rightarrow 0, \Vert \omega _{n}-\kappa _{n}\Vert \rightarrow 0\text{ as }n\rightarrow \infty, \end{equation} | (4.22) |
where \varepsilon \in (0, 2\alpha) . The nonexpansivity of T_{\varepsilon } yields,
\begin{align} \Vert T_{\varepsilon }\kappa _{n}-\kappa _{n}\Vert & \leq \Vert T_{\varepsilon }\kappa _{n}-T_{\varepsilon }\omega _{n}\Vert +\Vert T_{\varepsilon }\omega _{n}-\omega _{n}\Vert +\Vert \omega _{n}-\kappa _{n}\Vert \\ & \leq 2\Vert \omega _{n}-\kappa _{n}\Vert +\Vert T_{\varepsilon }\omega _{n}-\omega _{n}\Vert . \end{align} | (4.23) |
From (4.22) and (4.23), we can obtain
\begin{equation} \Vert T_{\varepsilon }\kappa _{n}-\kappa _{n}\Vert \rightarrow 0\text{ as } n\rightarrow \infty . \end{equation} | (4.24) |
Since \{\kappa _{n}\} is bounded, there is a subsequence \{\kappa _{n_{k}}\} of \{\kappa _{n}\} such that \kappa _{n_{k}}\rightharpoonup \kappa ^{\ast } . This combines with (4.24) and by using Lemma 3.5, we obtain that \kappa ^{\ast }\in F(T_{\varepsilon }) , that is, \kappa ^{\ast }\in \Theta .
Since \theta = P_{\Theta }(\kappa _{1}) and \kappa ^{\ast }\in \Theta , (4.20) and Lemma 3.2 (ii) imply that
\begin{align*} \Vert \kappa _{1}-\theta \Vert & \leq \Vert \kappa _{1}-\kappa ^{\ast }\Vert \leq \liminf\limits_{k\rightarrow \infty }\Vert \kappa _{n_{k}}-\kappa _{1}\Vert \\ & \leq \limsup\limits_{k\rightarrow \infty }\Vert \kappa _{n_{k}}-\kappa _{1}\Vert \leq \Vert \kappa _{1}-\theta \Vert . \end{align*} |
Using the uniqueness of the nearest point \theta , we now see that \theta = \kappa ^{\ast } . We also have \Vert \kappa _{n_{k}}-\kappa _{1}\Vert \rightarrow \Vert \kappa _{1}-\theta \Vert and from Lemma 3.2 (i), we get that \kappa _{n_{k}}\rightarrow \theta as k\rightarrow \infty . Using again the uniqueness of \theta , we deduce that \kappa _{n}\rightarrow \theta as n\rightarrow \infty .
This ends the proof.
If we replace \kappa _{1} by \Im (\kappa _{1}) in (4.1) and (4.19), where \Im :C\rightarrow C is a contraction mapping, we obtain the following inertial shrinking CQ-projection viscosity algorithms:
Theorem 4.3. (Inertial shrinking projection viscosity algorithm) Assume that all requirements of Theorem 4.1 are satisfied. Let \Im :C\rightarrow C be a \mu - contraction with \mu \in \lbrack 0, 1), that is \left\Vert \Im \kappa -\Im \omega \right\Vert \leq \mu \left\Vert \kappa -\omega \right\Vert for all \kappa, \omega \in C. Then the sequence \{\kappa _{n}\} generated by
\begin{equation} \left\{ \begin{array}{l} \omega _{n} = \kappa _{n}+\alpha _{n}(\kappa _{n}-\kappa _{n-1}), \\ \varpi _{n} = \lambda _{n}\omega _{n}+(1-\lambda _{n})\Upsilon _{n}\omega _{n} \\ \upsilon _{n} = \rho _{n}\Im (\kappa _{1})+(1-\rho _{n})\varpi _{n}, \\ C_{n+1} = \left\{ \begin{array}{c} p\in C_{n}:\left\Vert \upsilon _{n}-p\right\Vert ^{2}\leq \left\Vert \kappa _{n}-p\right\Vert ^{2}+\alpha _{n}^{2}\left\Vert \kappa _{n-1}-\kappa _{n}\right\Vert ^{2} \\ -2\alpha _{n}(1-\rho _{n})\langle \kappa _{n}-p, \kappa _{n-1}-\kappa _{n}\rangle +2\rho _{n}\langle \Im (\kappa _{1})-p, \upsilon _{n}-p\rangle \end{array} \right\} , \\ \kappa _{n+1} = P_{C_{n+1}}(\Im (\kappa _{1})), \mathit{\text{}}n\geq 1, \end{array} \right. \end{equation} | (4.25) |
converges strongly to a point \theta = P_{\Theta }(\kappa _{1}).
Theorem 4.4. (Inertial shrinking CQ-projection viscosity algorithm) (ICQMVA). Suppose that all requirements of Theorem 4.1 are satisfied. Let \Im :C\rightarrow C be a \mu-contraction with \mu \in \lbrack 0, 1). Then the sequence \{\kappa _{n}\} generated by
\begin{equation} \left\{ \begin{array}{l} \omega _{n} = \kappa _{n}+\alpha _{n}(\kappa _{n}-\kappa _{n-1}), \\ \varpi _{n} = \lambda _{n}\omega _{n}+(1-\lambda _{n})\Upsilon _{n}\omega _{n} \\ \upsilon _{n} = \rho _{n}\Im (\kappa _{1})+(1-\rho _{n})\varpi _{n}, \\ C_{n} = \left\{ \begin{array}{c} p\in \gimel :\left\Vert \upsilon _{n}-p\right\Vert ^{2}\leq \left\Vert \kappa _{n}-p\right\Vert ^{2}+\alpha _{n}^{2}\left\Vert \kappa _{n-1}-\kappa _{n}\right\Vert ^{2} \\ -2\alpha _{n}(1-\rho _{n})\langle \kappa _{n}-p, \kappa _{n-1}-\kappa _{n}\rangle +2\rho _{n}\langle \Im (\kappa _{1})-p, \upsilon _{n}-p\rangle \end{array} \right\} , \\ Q_{n} = \left\{ p\in \gimel :\langle p-\kappa _{n}, \Im (\kappa _{1})-\kappa _{n}\rangle \leq 0\right\} , \\ \kappa _{n+1} = P_{C_{n}\cap Q_{n}}(\Im (\kappa _{1})), \mathit{\text{}}n\geq 1, \end{array} \right. \end{equation} | (4.26) |
converges strongly to a point \theta = P_{\Theta }(\kappa _{1}).
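To make the update concrete, the following is a minimal numerical sketch of the first three lines of (4.25)/(4.26) (the inertial, Mann, and viscosity steps). The projection onto C_{n+1} (or C_{n}\cap Q_{n}) is omitted, and the nonexpansive operator \Upsilon, the contraction \Im (written f below), and all parameter values are illustrative assumptions, not the paper's test settings.

```python
# Sketch of the inertial viscosity update in (4.25)/(4.26), without the
# CQ-projection step. Upsilon (a nonexpansive map), the contraction f,
# and the parameters alpha, lam, rho are illustrative assumptions.

def inertial_viscosity_step(k_prev, k_curr, k1, Upsilon, f, alpha, lam, rho):
    omega = k_curr + alpha * (k_curr - k_prev)        # inertial extrapolation
    varpi = lam * omega + (1 - lam) * Upsilon(omega)  # Mann step
    upsilon = rho * f(k1) + (1 - rho) * varpi         # viscosity step
    return upsilon

# One scalar step with Upsilon(x) = x/2 (a 1/2-contraction) and f(x) = 0.1 x:
u = inertial_viscosity_step(1.0, 1.0, 1.0,
                            lambda x: x / 2, lambda x: 0.1 * x,
                            alpha=0.5, lam=0.5, rho=0.5)
print(u)  # 0.425
```

Here omega = 1, varpi = 0.75, and upsilon = 0.5(0.1) + 0.5(0.75) = 0.425, illustrating how the viscosity term pulls the Mann iterate toward f(\kappa_{1}).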
Remark 4.5. If we neglect the CQ-shrinking projection terms, then the proposed algorithms in this manuscript extend and improve the results of [38,39,40], Kim and Xu [17] (if \alpha _{n} = 0 and \Upsilon _{n} = I (the identity mapping) in algorithms (4.1) and (4.19)), Chen et al. [18] (if \alpha _{n} = 0 and \Upsilon _{n} = I in (4.25) and (4.26)) and Suzuki [37].
In this section, a numerical comparison between our algorithms and the modified inertial Mann Halpern and viscosity algorithms of [46] is presented. Through numerical calculations we found that our methods are faster and more effective than those of [46]. The numerical results were obtained with MATLAB codes run in MATLAB version 9.5 (R2018b) on an Intel(R) Core(TM) i5-6200 CPU @ 2.30GHz 2.40GHz PC with 8.00 GB RAM.
For simplicity:
(1) the modified inertial Mann Halpern and viscosity algorithms of Tan et al. [46] are abbreviated MIMHA and MIMVA, respectively;
(2) our proposed algorithms are abbreviated ICQMHA and ICQMVA, respectively.
Example 5.1. Let C_{i} \subset \mathbb{R}^{n} be a nonempty closed convex set for each i = 0, 1, \ldots, m. We consider the convex feasibility problem of finding
\kappa^{*} \in C = \bigcap\limits_{i = 1}^{m} C_{i}. |
Define a map T : \mathbb{R}^{n} \rightarrow \mathbb{R}^{n} by
\begin{equation} T = P_{C_0} \Big( \frac{1}{m} \sum\limits_{i = 1}^{m} P_{C_i} \Big) \end{equation} | (5.1) |
where P_{C_i} (i = 0, 1, \cdots, m) denotes the metric projection onto C_{i}. Since each P_{C_i} (i = 0, 1, \cdots, m) is nonexpansive, the mapping T defined by (5.1) is also nonexpansive. Moreover, we can see that
\begin{equation} Fix(T) = Fix(P_{C_0}) \bigcap\limits_{i = 1}^{m} Fix(P_{C_i}) = C_{0} \bigcap\limits_{i = 1}^{m} C_{i} = C. \end{equation} | (5.2) |
In this experiment, we take each C_{i} (i = 0, 1, \cdots, m) to be a closed ball with center c_{i} \in \mathbb{R}^{n} and radius r_{i} > 0. Then P_{C_i} is given by
P_{C_i} (\kappa) = \left\{ \begin{array}{ll} c_{i} + \frac{r_{i}}{\|c_{i} - \kappa\|} (\kappa - c_{i}) \quad \quad & \text{if} \, \, \|c_{i} - \kappa\| > r_{i}, \\\\ \kappa \quad \quad & \text{if} \, \, \|c_{i} - \kappa\| \leq r_{i}. \end{array} \right. |
Choose r_{i} = 1 (i = 0, 1, \cdots, m) , c_{0} = (0, 0, \cdots, 0), c_{1} = (1, 0, \cdots, 0), and c_{2} = (-1, 0, \cdots, 0). Moreover, c_{i} \in \Big(\frac{-1}{\sqrt{n}}, \frac{1}{\sqrt{n}} \Big)^{n} (i = 3, 4, \cdots, m) are chosen randomly. From the choice of c_{1}, c_{2}, r_{1}, r_{2}, it follows that Fix(T) = \{0\}. Moreover, we use \kappa_{0} = \kappa_{1} = (1, 1, \cdots, 1), \alpha_{n} = \frac{10}{(n+1)^{2}}, \eta = 4, \lambda_{n} = \frac{1}{100(n+1)^{2}}, \rho_{n} = \frac{1}{n+1} and f(\kappa) = 0.1\kappa. The numerical results are shown in Figures 1–2.
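The ball projections and the map T of (5.1) can be sketched as follows. This is a small Python illustration (the paper's experiments use MATLAB) with n = 3 and only the two deterministic centers c_{1}, c_{2}, so the unique common point is the origin; plain Picard iteration is used here instead of the paper's algorithms.

```python
import numpy as np

# Projection onto a closed ball B(c, r), as in the displayed formula:
# identity inside the ball, radial pullback outside.
def proj_ball(x, c, r):
    d = np.linalg.norm(x - c)
    return x if d <= r else c + (r / d) * (x - c)

n, r = 3, 1.0
c0 = np.zeros(n)
c1 = np.array([1.0, 0.0, 0.0])
c2 = np.array([-1.0, 0.0, 0.0])
balls = [c1, c2]

# T = P_{C_0} ( (1/m) * sum_i P_{C_i} ), formula (5.1) with m = 2.
def T(x):
    avg = np.mean([proj_ball(x, c, r) for c in balls], axis=0)
    return proj_ball(avg, c0, r)

x = np.ones(n)
for _ in range(2000):   # Picard iteration; T is averaged, hence convergent here
    x = T(x)
print(np.round(x, 4))   # close to the unique common point 0
```

Since the two unit balls touch only at the origin, convergence of this plain iteration is slow (sublinear), which is exactly the situation where the accelerated algorithms of Section 4 are of interest.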
Example 5.2. Let F:C\subset \mathbb{\gimel }\rightarrow \mathbb{\gimel } be an operator. The variational inequality problem is defined in the following way:
\text{Find}\, \, \kappa ^{\ast }\in C\, \, \text{such that}\, \, \big\langle F(\kappa ^{\ast }), \omega -\kappa ^{\ast }\big\rangle\geq 0, \; \forall \omega \in C. |
Define T:C\subset \mathbb{\gimel }\rightarrow \mathbb{\gimel } by
\begin{equation} T : = P_{C} (I - \lambda F) \end{equation} | (5.3) |
where 0 < \lambda < \frac{2}{L} and L is the Lipschitz constant of the mapping F. Let F : H = \mathbb{R}^{2} \rightarrow \mathbb{R}^{2} be defined by
\begin{equation} F \begin{pmatrix} \kappa_{1} \\ \kappa_{2} \end{pmatrix} = \begin{pmatrix} 2 \kappa_{1} + 2 \kappa_{2} + \sin(\kappa_{1}) \\ \\ - 2 \kappa_{1} + 2 \kappa_{2} + \sin(\kappa_{2}) \end{pmatrix}. \end{equation} | (5.4) |
The authors in [47] showed that F is Lipschitz continuous with L = \sqrt{26} and strongly monotone. Therefore the variational inequality with F given by (5.4) has a unique solution (see, e.g., [48]), namely (0, 0). We use \alpha_{n} = \frac{10}{(n+1)^{2}}, \eta = 4, \lambda_{n} = \frac{1}{100(n+1)^{2}}, \rho_{n} = \frac{1}{n+1} and f(\kappa) = 0.1\kappa. The numerical results are shown in Figures 3–5.
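The fixed point reformulation T = P_{C}(I - \lambda F) of (5.3) can be sketched numerically. In this sketch we assume C = \mathbb{R}^{2}, so P_{C} is the identity, and the step size \lambda = 0.05 < 2/L is an illustrative choice; since F is strongly monotone, the plain iteration x \leftarrow x - \lambda F(x) already converges to the unique solution (0, 0).

```python
import numpy as np

# F from (5.4): Lipschitz continuous and strongly monotone, cf. [47].
def F(x):
    x1, x2 = x
    return np.array([2 * x1 + 2 * x2 + np.sin(x1),
                     -2 * x1 + 2 * x2 + np.sin(x2)])

lam = 0.05                    # illustrative step size, 0 < lam < 2/L
x = np.array([1.0, 1.0])
for _ in range(2000):
    x = x - lam * F(x)        # T(x) with P_C the identity
print(np.round(x, 6))         # converges to the unique solution (0, 0)
```

Strong monotonicity makes I - \lambda F a contraction for small \lambda, which is why even this unaccelerated iteration reaches (0, 0); the algorithms of Section 4 aim to reach it in fewer iterations.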
Example 5.3. We consider the Fermat-Weber (FW) problem, a well-known model in location theory. In mathematical terms, the Fermat-Weber problem is described as follows:
\text{Find} \, \, \kappa \in \mathbb{R}^{n} \, \, \text{minimizing} \, \, f(\kappa) : = \sum\limits_{i = 1}^{m} \omega_{i} \|\kappa - a_{i}\| |
in which a_{i} \in \mathbb{R}^{n} are anchor points and \omega_{i} are non-negative weights (see [49] for more details). This problem can be recast as a fixed point problem as follows: a mapping T : \mathbb{R}^{n} \rightarrow \mathbb{R}^{n} is defined by
T(\kappa) : = \frac{1}{\sum_{i = 1}^{m} \frac{\omega_{i}}{\|\kappa - a_{i}\|}} \sum\limits_{i = 1}^{m} \frac{\omega_{i} a_{i}}{\|\kappa - a_{i}\|}
where \omega_{i} = 1 for all i. Moreover, we consider a three-dimensional example with n = 3 , m = 8 , and the collection \Pi of anchor points
a_{1} = (0, 0, 0)^{T}, a_{2} = (10, 0, 0)^{T}, a_{3} = (0, 10, 0)^{T}, a_{4} = (10, 10, 0)^{T}, |
a_{5} = (0, 0, 10)^{T}, a_{6} = (10, 0, 10)^{T}, a_{7} = (0, 10, 10)^{T}, a_{8} = (10, 10, 10)^{T}. |
From these assumptions it follows that the optimal solution of the above problem is \kappa^{*} = (5, 5, 5)^{T}. In this example, we use the fixed element \kappa_{1} = (1, 2, 3)^{T} and \alpha_{n} = \frac{10}{(n+1)^{2}}, \eta = 4, \lambda_{n} = \frac{1}{100(n+1)^{2}}, \rho_{n} = \frac{1}{n+1} and f(\kappa) = 0.19\kappa. The numerical results are shown in Figures 6–7.
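The Weiszfeld-type map T above can be sketched directly with the eight cube-corner anchors and unit weights. This Python illustration (the paper's experiments use MATLAB) simply iterates T from \kappa_{1} = (1, 2, 3)^{T}; note that T is undefined at the anchor points themselves, and we only evaluate it away from them.

```python
import numpy as np

# The eight anchor points a_1, ..., a_8 (corners of a cube), as in the text.
A = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [10, 10, 0],
              [0, 0, 10], [10, 0, 10], [0, 10, 10], [10, 10, 10]], dtype=float)

# The Weiszfeld-type map T with unit weights omega_i = 1.
def T(x):
    d = np.linalg.norm(A - x, axis=1)            # distances to the anchors
    w = 1.0 / d                                  # weights omega_i / ||x - a_i||
    return (w[:, None] * A).sum(axis=0) / w.sum()

x = np.array([1.0, 2.0, 3.0])                    # kappa_1 from the text
for _ in range(200):
    x = T(x)
print(np.round(x, 4))                            # approaches (5, 5, 5)
```

By symmetry the cube center (5, 5, 5)^{T} is an exact fixed point of T, consistent with the optimal solution stated above.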
Example 5.4. Let C = \{\kappa \in \mathbb{R}^{3} : \|\kappa\| \leq 1 \} be the closed unit ball and let T : C \rightarrow C [50] be defined by
\begin{equation} T \begin{pmatrix} \kappa_{1} \\ \kappa_{2} \\ \kappa_{3} \end{pmatrix} = \begin{pmatrix} \frac{1}{\sqrt{3}} \sin (\kappa_{1} + \kappa_{3}) \\ \\ \frac{1}{\sqrt{3}} \sin (\kappa_{1} + \kappa_{3}) \\\\ \frac{1}{\sqrt{3}} (\kappa_{1} + \kappa_{2}) \end{pmatrix}. \end{equation} | (5.5) |
We use \alpha_{n} = \frac{10}{(n+1)^{2}}, \eta = 4, \lambda_{n} = \frac{1}{100(n+1)^{2}}, \rho_{n} = \frac{1}{n+1} and f(\kappa) = 0.1\kappa. The numerical results are shown in Figures 8–9.
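The mapping (5.5) can be transcribed and sanity-checked as follows; this Python sketch takes the formula exactly as printed and only verifies that the origin is a fixed point of T, the kind of point the iterative schemes approximate.

```python
import numpy as np

# The mapping T of (5.5) on the closed unit ball of R^3, as printed.
def T(x):
    k1, k2, k3 = x
    return np.array([np.sin(k1 + k3),
                     np.sin(k1 + k3),
                     k1 + k2]) / np.sqrt(3)

print(T(np.zeros(3)))   # [0. 0. 0.] -- the origin is a fixed point of T
```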
In the study of algorithms, efficiency and effectiveness are measured by two main factors: reaching the desired point in as few iterations as possible, and the elapsed time; the shorter the time needed to obtain strong convergence, the better the results. The paper [46] addressed many algorithms and proved, under suitable conditions, that its algorithms converge faster than earlier ones. According to Examples 5.1–5.4, our algorithms converge faster than the algorithms of [46], and hence faster than all the algorithms covered in that paper. The numerical results (figures) also show that our algorithms need only a small number of iterations to reach the desired target. This confirms that our methods successfully accelerate the algorithms presented in [46].
In this manuscript, strong convergence results for \alpha-inverse strongly monotone operators under new algorithms in the framework of Hilbert spaces have been discussed and several algorithms have been developed. The proposed algorithms combine the inertial Mann forward-backward method with the CQ-shrinking projection method and the viscosity algorithm. The main algorithms presented and discussed are the "inertial CQ-projection algorithm" (ICQMHA) and the "inertial shrinking CQ-projection viscosity algorithm" (ICQMVA). It has been proved theoretically that our algorithms accelerate the previous modified inertial Mann Halpern and viscosity algorithms. Numerical examples have also been performed to illustrate the applications and to test the computational performance and effectiveness of the proposed algorithms.
The authors thank the Basque Government for Grant IT1207-19.
The authors declare that they have no competing interests concerning the publication of this article.
[1] | H. H. Bauschke, P. L. Combettes, Convex analysis and monotone operator theory in Hilbert spaces, Berlin: Springer, 2011. |
[2] | H. H. Bauschke, J. M. Borwein, On projection algorithms for solving convex feasibility problems, SIAM Rev., 38 (1996), 367–426. doi: 10.1137/S0036144593251710 |
[3] | P. J. Chen, J. G. Huang, X. Q. Zhang, A primal-dual fixed point algorithm for convex separable minimization with applications to image restoration, Inverse Probl., 29 (2013), 025011. doi: 10.1088/0266-5611/29/2/025011 |
[4] | P. L. Combettes, V. R. Wajs, Signal recovery by proximal forward-backward splitting, Multiscale Model. Simul., 4 (2005), 1168–1200. doi: 10.1137/050626090 |
[5] | G. López, V. Martín-Márquez, F. H. Wang, H. K. Xu, Forward-backward splitting methods for accretive operators in Banach spaces, Abstr. Appl. Anal., 2012 (2012), 109236. |
[6] | G. B. Passty, Ergodic convergence to a zero of the sum of monotone operators in Hilbert space, J. Math. Anal. Appl., 72 (1979), 383–390. doi: 10.1016/0022-247X(79)90234-8 |
[7] | D. H. Peaceman, H. H. Rachford, The numerical solution of parabolic and elliptic differential equations, J. Soc. Ind. Appl. Math., 3 (1955), 28–41. doi: 10.1137/0103003 |
[8] | P. Tseng, A modified forward-backward splitting method for maximal monotone mappings, SIAM J. Control Optim., 38 (2000), 431–446. doi: 10.1137/S0363012998338806 |
[9] | N. H. Xiu, C. Y. Wang, L. C. Kong, A note on the gradient projection method with exact stepsize rule, J. Comput. Math., 25 (2007), 221–230. |
[10] | H. K. Xu, Averaged mappings and the gradient-projection algorithm, J. Optimiz. Theory Appl., 150 (2011), 360–378. doi: 10.1007/s10957-011-9837-z |
[11] | R. E. Bruck, S. Reich, Nonexpansive projections and resolvents of accretive operators in Banach spaces, Houston J. Math., 3 (1977), 459–470. |
[12] | N. Parikh, S. Boyd, Proximal algorithms, Found. Trends Optim., 1 (2014), 123–231. |
[13] | P. L. Lions, B. Mercier, Splitting algorithms for the sum of two nonlinear operators, SIAM J. Numer. Anal., 16 (1979), 964–979. doi: 10.1137/0716071 |
[14] | J. Douglas, H. H. Rachford, On the numerical solution of the heat conduction problem in 2 and 3 space variables, T. Am. Math. Soc., 82 (1956), 421–439. doi: 10.1090/S0002-9947-1956-0084194-4 |
[15] | H. H. Bauschke, P. L. Combettes, A weak-to-strong convergence principle for Fejér-monotone methods in Hilbert spaces, Math. Oper. Res., 26 (2001), 248–264. doi: 10.1287/moor.26.2.248.10558 |
[16] | K. Nakajo, W. Takahashi, Strong convergence theorems for nonexpansive mappings and nonexpansive semigroups, J. Math. Anal. Appl., 279 (2003), 372–379. doi: 10.1016/S0022-247X(02)00458-4 |
[17] | T. H. Kim, H. K. Xu, Strong convergence of modified Mann iterations, Nonlinear Anal.-Theor., 61 (2005), 51–60. doi: 10.1016/j.na.2004.11.011 |
[18] | Y. H. Yao, R. D. Chen, J. C. Yao, Strong convergence and certain control conditions for modified Mann iteration, Nonlinear Anal.-Theor., 68 (2008), 1687–1693. doi: 10.1016/j.na.2007.01.009 |
[19] | F. Alvarez, H. Attouch, An inertial proximal method for monotone operators via discretization of a nonlinear oscillator with damping, Set-Valued Anal., 9 (2001), 3–11. doi: 10.1023/A:1011253113155 |
[20] | B. T. Polyak, Introduction to optimization, New York: Optimization Software, 1987. |
[21] | A. Moudafi, M. Oliny, Convergence of a splitting inertial proximal method for monotone operators, J. Comput. Appl. Math., 155 (2003), 447–454. doi: 10.1016/S0377-0427(02)00906-8 |
[22] | Q. L. Dong, H. B. Yuan, Y. J. Cho, T. M. Rassias, Modified inertial Mann algorithm and inertial CQ-algorithm for nonexpansive mappings, Optim. Lett., 12 (2018), 87–102. doi: 10.1007/s11590-016-1102-9 |
[23] | Q. L. Dong, D. Jiang, P. Cholamjiak, Y. Shehu, A strong convergence result involving an inertial forward-backward algorithm for monotone inclusions, J. Fixed Point Theory Appl., 19 (2017), 3097–3118. doi: 10.1007/s11784-017-0472-7 |
[24] | B. Halpern, Fixed points of nonexpanding maps, Bull. Am. Math. Soc., 73 (1967), 957–961. doi: 10.1090/S0002-9904-1967-11864-0 |
[25] | B. Tan, S. X. Li, Strong convergence of inertial Mann algorithms for solving hierarchical fixed point problems, J. Nonlinear Var. Anal., 4 (2020), 337–355. |
[26] | M. Tian, G. Xu, Inertial modified Tseng's extragradient algorithms for solving monotone variational inequalities and fixed point problems, J. Nonlinear Funct. Anal., 2020 (2020), 35. |
[27] | F. U. Ogbuisi, The projection method with inertial extrapolation for solving split equilibrium problems in Hilbert spaces, Appl. Set-Valued Anal. Optim., 3 (2021), 239–255. |
[28] | H. A. Hammad, H. ur Rahman, M. De la Sen, Shrinking projection methods for accelerating relaxed inertial Tseng-type algorithm with applications, Math. Probl. Eng., 2020 (2020), 7487383. |
[29] | A. Moudafi, Viscosity approximation methods for fixed-points problems, J. Math. Anal. Appl., 241 (2000), 46–55. doi: 10.1006/jmaa.1999.6615 |
[30] | X. L. Qin, S. Y. Cho, S. M. Kang, On hybrid projection methods for asymptotically quasi-\psi -nonexpansive mappings, Appl. Math. Comput., 215 (2010), 3874–3883. |
[31] | S. Y. Cho, S. M. Kang, Approximation of common solutions of variational inequalities via strict pseudocontractions, Acta Math. Sci., 32 (2012), 1607–1618. doi: 10.1016/S0252-9602(12)60127-1 |
[32] | H. A. Hammad, H. ur Rahman, Y. U. Gaba, Solving a split feasibility problem by the strong convergence of two projection algorithms in Hilbert spaces, J. Funct. Spaces, 2021 (2021), 5562694. |
[33] | H. A. Hammad, H. ur Rehman, M. De la Sen, Advanced algorithms and common solutions to variational inequalities, Symmetry, 12 (2020), 1198. doi: 10.3390/sym12071198 |
[34] | M. O. Aibinu, J. K. Kim, On the rate of convergence of viscosity implicit iterative algorithms, Nonlinear Funct. Anal. Appl., 25 (2020), 135–152. |
[35] | M. O. Aibinu, J. Kim, Convergence analysis of viscosity implicit rules of nonexpansive mappings in Banach spaces, Nonlinear Funct. Anal. Appl., 24 (2019), 691–713. |
[36] | Y. Kimura, Shrinking projection methods for a family of maximal monotone operators, Nonlinear Funct. Anal. Appl., 16 (2011), 481–489. |
[37] | T. Suzuki, Moudafi's viscosity approximations with Meir-Keeler contractions, J. Math. Anal. Appl., 325 (2007), 342–352. doi: 10.1016/j.jmaa.2006.01.080 |
[38] | A. Beck, M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems, SIAM J. Imaging Sci., 2 (2009), 183–202. doi: 10.1137/080716542 |
[39] | H. Attouch, J. Peypouquet, The rate of convergence of Nesterov's accelerated forward-backward method is actually faster than 1/k^{2}, SIAM J. Optim., 26 (2016), 1824–1834. doi: 10.1137/15M1046095 |
[40] | W. Cholamjiak, P. Cholamjiak, S. Suantai, An inertial forward-backward splitting method for solving inclusion problems in Hilbert spaces, J. Fixed Point Theory Appl., 20 (2018), 42. doi: 10.1007/s11784-018-0526-5 |
[41] | W. Takahashi, Nonlinear functional analysis, Yokohama: Yokohama Publishers, 2000. |
[42] | R. P. Agarwal, D. O'Regan, D. R. Sahu, Fixed point theory for Lipschitzian-type mappings with applications, New York: Springer, 2009. |
[43] | C. Martinez-Yanes, H. K. Xu, Strong convergence of the CQ method for fixed point iteration processes, Nonlinear Anal.-Theor., 64 (2006), 2400–2411. doi: 10.1016/j.na.2005.08.018 |
[44] | K. Goebel, W. A. Kirk, Topics in metric fixed point theory, Cambridge, UK: Cambridge University Press, 1990. |
[45] | T. M. Tuyen, H. A. Hammad, Effect of shrinking projection and CQ-methods on two inertial forward–backward algorithms for solving variational inclusion problems, Rend. Circ. Mat. Palermo, II. Ser, DOI: 10.1007/s12215-020-00581-8. |
[46] | B. Tan, Z. Zhou, S. X. Li, Strong convergence of modified inertial Mann algorithms for nonexpansive mappings, Mathematics, 8 (2020), 462. doi: 10.3390/math8040462 |
[47] | Q. L. Dong, Y. J. Cho, L. L. Zhong, T. M. Rassias, Inertial projection and contraction algorithms for variational inequalities, J. Glob. Optim., 70 (2018), 687–704. doi: 10.1007/s10898-017-0506-0 |
[48] | H. Y. Zhou, Y. Zhou, G. H. Feng, Iterative methods for solving a class of monotone variational inequality problems with applications, J. Inequal. Appl., 2015 (2015), 68. doi: 10.1186/s13660-015-0590-y |
[49] | A. Beck, S. Sabach, Weiszfeld's method: Old and new results, J. Optim. Theory Appl., 164 (2015), 1–40. doi: 10.1007/s10957-014-0586-7 |
[50] | Q. L. Dong, H. B. Yuan, Accelerated Mann and CQ algorithms for finding a fixed point of a nonexpansive mapping, Fixed Point Theory Appl., 2015 (2015), 125. doi: 10.1186/s13663-015-0374-6 |