1.
Introduction
A fixed point problem (FPP) provides a natural framework for studying a broad range of nonlinear problems and their applications. The fixed point problem for a mapping T is defined as
where E is a real Hilbert space and T:E→E is a nonexpansive mapping.
For a single-valued monotone operator Q:E→E and a set-valued operator G:E⇉E, the variational inclusion problem (VIsP) is to find s⋆∈E such that
Several problems, such as image recovery, optimization, and variational inequality problems, can be reformulated as an FPP or a VIsP. Owing to this applicability, over the last decades several iterative methods have been formulated to solve FPPs and VIsPs in linear and nonlinear spaces; see, for example, [4,8,9,12,13,15,32].
Douglas and Rachford [11] formulated the forward-backward splitting method for the VIsP:
where μn>0, RGμn=[I+μnG]−1 is the resolvent of G (also known as the backward operator), and [I−μnQ] is known as the forward operator. We can rewrite (1.3) as
which was studied by Ansari and Babu [2] in the nonlinear space setting. If Q=0, the monotone inclusion problem (MIsP) is to find s⋆∈E such that
which was studied in [26]. The proximal point method, also known as the regularization method, is one of the best-known methods for the MIsP; it was studied by Lions and Mercier [18]:
Since the resolvent RGμn appearing in the backward step is nonexpansive, such algorithms have been widely studied by numerous authors; see, for example, [7,10,15,16,17,19,23,27].
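For illustration, the forward-backward iteration (1.3) can be sketched numerically. In the minimal Python example below, the operators Q(s)=s/2 and G(s)=s are toy, hypothetical choices (not operators from this paper); for G equal to the identity, the resolvent [I+μG]−1 reduces to division by 1+μ.

```python
# Minimal sketch of the forward-backward splitting iteration (1.3):
#   s_{n+1} = [I + mu*G]^(-1) (s_n - mu*Q(s_n)).
# Toy, hypothetical operators: Q(s) = s/2 and G(s) = s; for G = I the
# resolvent [I + mu*G]^(-1) is simply division by (1 + mu).

def forward_backward(s0, mu=0.5, iters=100):
    """Run the forward-backward iteration with a constant step size mu."""
    s = s0
    for _ in range(iters):
        forward = s - mu * (s / 2.0)   # forward (explicit) step with Q(s) = s/2
        s = forward / (1.0 + mu)       # backward step: resolvent of G(s) = s
    return s
```

The unique zero of Q+G in this toy setting is s⋆=0, and the iterates approach it linearly.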
An essential development in nonlinear analysis is inertial extrapolation, introduced by Polyak [22] to accelerate the convergence of algorithms. Alvarez and Attouch [6] implemented inertial extrapolation to obtain the inertial proximal point method for solving the MIsP: for μn>0, find sn+1∈E such that
and equivalently
where βn∈[0,1) is the extrapolation coefficient and βn(sn−sn−1) is known as the inertial step. They proved the weak convergence of (1.8) assuming
Inertial extrapolation has been shown to enjoy good convergence properties and a high convergence rate; it has therefore been refined and applied to a variety of nonlinear problems; see [3,5,13,14,28,29] and the references therein.
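The inertial proximal point iteration (1.8) can be sketched as follows. The operator G(s)=s and the constant coefficient β=0.3∈[0,1) are illustrative assumptions only; the method as stated allows a variable βn.

```python
# Hedged sketch of the inertial proximal point iteration (1.8):
#   s_{n+1} = [I + mu*G]^(-1)( s_n + beta*(s_n - s_{n-1}) ).
# G(s) = s and the constant beta = 0.3 are toy assumptions for illustration.

def inertial_proximal_point(s0, s1, mu=1.0, beta=0.3, iters=200):
    """Two starting points are needed because of the inertial term."""
    s_prev, s = s0, s1
    for _ in range(iters):
        w = s + beta * (s - s_prev)      # inertial extrapolation step
        s_prev, s = s, w / (1.0 + mu)    # resolvent (backward) step for G = I
    return s
```

For this toy G, the iterates converge to the unique zero s⋆=0.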
The following inertial proximal point approach was presented by Moudafi and Oliny in [21] to solve VIsP:
where μn<2/κ and κ is the Lipschitz constant of the operator Q. They proved the weak convergence of (1.10) under the same assumption (1.9). Recently, Duang et al. [30] studied the VIsP and the FPP jointly. They proposed the following viscosity inertial method (Algorithm 1.1) for approximating a common solution in Hilbert spaces.
In the above algorithm, Q is η-inverse strongly monotone (η-ism, for short), G is a maximal monotone operator, k is a contraction, T is a nonexpansive mapping, λ∈(0,2η), and the control sequences fulfill the following requirements:
(ⅰ) ψn∈(0,1), limn→∞ψn=0, ∑∞n=1ψn=∞, limn→∞ψn−1ψn=0;
(ⅱ) θn∈[0,θ) with θ>0, and limn→∞ (θn/ψn)‖sn−sn−1‖=0.
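The inertial forward-backward step of the Moudafi–Oliny iteration (1.10) can be sketched similarly. The operators Q(s)=s/2 (so κ=1/2 and μ=0.5<2/κ) and G(s)=s, and the constant β, are toy assumptions; the forward evaluation Q(sn) is taken at the current iterate, following the usual statement of the method in [21].

```python
# Hedged sketch of the Moudafi-Oliny iteration (1.10):
#   s_{n+1} = [I + mu*G]^(-1)( s_n + beta*(s_n - s_{n-1}) - mu*Q(s_n) ),
# with the forward evaluation Q(s_n) at the current iterate. The operators
# Q(s) = s/2 (kappa = 1/2, so mu = 0.5 < 2/kappa) and G(s) = s are toy
# assumptions, as is the constant inertial coefficient beta.

def moudafi_oliny(s0, s1, Q, resolvent, mu=0.5, beta=0.2, iters=200):
    s_prev, s = s0, s1
    for _ in range(iters):
        w = s + beta * (s - s_prev) - mu * Q(s)  # inertial + forward step
        s_prev, s = s, resolvent(w, mu)          # backward (resolvent) step
    return s

root = moudafi_oliny(3.0, 2.0, lambda s: s / 2.0, lambda x, mu: x / (1.0 + mu))
```

Here the unique zero of Q+G is 0, and `root` lies close to it.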
Recently, Reich and Taiwo [24] investigated hybrid viscosity-type iterative schemes for solving variational inclusion problems in which the viscosity approximation and the inertial extrapolation are computed jointly. Ahmed and Dilshad [1] studied a Halpern-type iterative method for solving split common null point problems in which the Halpern iteration and the inertial step are computed simultaneously at the start of every iteration.
Motivated by the works [24,30], we present two viscosity-type inertial iteration methods for finding common solutions of VIsPs and FPPs. In our algorithms, the viscosity iteration, the fixed point iteration, and the inertial extrapolation are all carried out in the first step of each iteration. Our methods do not need the inverse strong monotonicity assumptions on the operators Q and G that are commonly imposed in the literature. We prove the strong convergence of the presented methods without calculating the resolvent of the associated monotone operators Q and G.
We organize the paper as follows: In Section 2, we discuss some basic definitions and useful lemmas. In Section 3, we propose viscosity-type iterative methods for solving VIsPs and FPPs and prove the strong convergence theorems. In Section 4, as a consequence of our methods, we present Halpern-type inertial iterative methods for VIsPs and FPPs. Section 5 describes some applications to solving variational inequality and optimization problems. In Section 6, we show the efficiency of the suggested methods by comparing them with Algorithm 3 of [30].
2.
Preliminaries
Let {sn} be a sequence in E. Then sn→s denotes the strong convergence of {sn} to s, and sn⇀s denotes weak convergence. The weak w-limit set of {sn} is defined by
The following useful inequality is well-known in the Hilbert space E:
Definition 2.1. A mapping k:E→E is called
(i) a contraction, if ‖k(s1)−k(w1)‖≤τ‖s1−w1‖,∀s1,w1∈E,τ∈(0,1);
(ii) nonexpansive, if ‖k(s1)−k(w1)‖≤‖s1−w1‖,∀s1,w1∈E.
Definition 2.2. Let Q:E→E. Then
(i) Q is called monotone, if ⟨Q(s1)−Q(w1),s1−w1⟩≥0,∀s1,w1∈E;
(ii) Q is called η-ism, if there exists η>0 such that
(iii) Q is called δ-strongly monotone, if there exists δ>0 such that
(iv) Q is called κ-Lipschitz continuous, if there exists κ>0 such that
Definition 2.3. Let G:E→2E. Then
(i) the graph of G is defined by Graph(G)={(s1,w1)∈E×E:w1∈G(s1)};
(ii) G is called monotone, if for all (s1,w1),(s2,w2)∈Graph(G),⟨w1−w2,s1−s2⟩≥0;
(iii) G is called maximal monotone, if G is monotone and (I+μG)(E)=E for all μ>0.
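A concrete resolvent may help here: for the set-valued subdifferential G=∂|⋅| of the absolute value on R (a standard maximal monotone operator, used only for illustration and not taken from this paper), the resolvent [I+μG]−1 is the single-valued soft-thresholding map.

```python
# The resolvent [I + mu*G]^(-1) of a maximal monotone operator is single-valued
# and firmly nonexpansive. For G = subdiff|.| on R, it is soft-thresholding.
# (Standard illustrative example; not one of the operators of the paper.)

def resolvent_abs_subdiff(x, mu):
    """(I + mu * subdiff|.|)^(-1)(x): shrink x toward 0 by the threshold mu."""
    if x > mu:
        return x - mu
    if x < -mu:
        return x + mu
    return 0.0       # the whole interval [-mu, mu] maps to the kink at 0
```

Note that the set-valued operator yields a single-valued resolvent, which is what makes backward steps computable.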
Lemma 2.1. [31] Let {sn}⊂R be a sequence of nonnegative real numbers such that
where λn∈(0,1) and ξn∈R fulfill the requirements given below:
Then sn→0 as n→∞.
Lemma 2.2. [20] Let {yn}⊂R be a sequence that does not decrease at infinity, in the sense that there exists a subsequence {ynk} of {yn} such that ynk<ynk+1 for all k≥0. Consider the sequence of integers {Υ(n)}n≥n0 defined by
Then {Υ(n)}n≥n0 is a nondecreasing sequence verifying limn→∞Υ(n)=∞, and for all n≥n0, the following inequalities hold:
3.
Main results
In the present section, we define our viscosity-type inertial iteration methods for solving the FPP and the VIsP. We denote the solution set of the FPP by Λ and that of the VIsP by Δ, and assume that Λ∩Δ≠∅. We adopt the following assumptions in order to prove the convergence of the sequences obtained from the suggested methods:
(S1) k:E→E is a τ-contraction with 0<τ<1;
(S2) Q:E→E is a δ-strongly monotone and κ-Lipschitz continuous operator and G:E⇉E is a maximal monotone operator;
(S3) {μn} is a sequence such that 0<ˉμ≤μn≤μ<1/(2δ), and κ≤2δ;
(S4) λn∈(0,1) satisfies limn→∞λn=0 and ∞∑n=1λn=∞;
(S5) {σn} is a positive sequence satisfying ∑∞n=1σn<∞ and limn→∞ σn/λn=0.
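The sequences used later in Example 6.1, λn=1/√(100+n) and σn=1/(1+n)^2, satisfy (S4) and (S5); the short numerical check below illustrates this (the truncation points are arbitrary).

```python
# Numerical sanity check that lambda_n = 1/sqrt(100+n) and
# sigma_n = 1/(1+n)^2 fit (S4)-(S5): lambda_n -> 0 with a divergent sum,
# sigma_n summable, and sigma_n/lambda_n -> 0.
import math

def lam(n):
    return 1.0 / math.sqrt(100 + n)

def sig(n):
    return 1.0 / (1 + n) ** 2

partial_lam = sum(lam(n) for n in range(1, 100001))  # grows like 2*sqrt(n)
partial_sig = sum(sig(n) for n in range(1, 100001))  # stays below pi^2/6 - 1
ratio = sig(10**6) / lam(10**6)                      # ~ n^(-3/2) -> 0
```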
Theorem 3.1. If Assumptions (S1)–(S5) are fulfilled, then the sequences induced by Algorithm 3.1 converge strongly to s⋆∈Δ∩Λ, which solves the following variational inequality:
Remark 3.1. From (3.2), we have βn≤σn/‖sn−sn−1‖. Since βn>0 and σn satisfies ∑∞n=1σn<∞, we obtain limn→∞ βn‖sn−sn−1‖=0 and limn→∞ (βn/λn)‖sn−sn−1‖ ≤ limn→∞ σn/λn = 0.
Proof. Let s⋆∈Δ∩Λ; then −Q(s⋆)∈G(s⋆), and using (3.4), we have (un−sn+1)/μn − Q(un) ∈ G(sn+1). Since G is monotone, we get
Since Q is strongly monotone with constant δ>0, we have
By adding (3.5) and (3.6), we get
or
By using the Cauchy–Schwarz inequality and the Lipschitz continuity of Q, we have
By using (2.1), we have
Considering (3.8)–(3.10), we get
Since κ≤2δ, we have
or
Since limn→∞ (βn/λn)‖sn−sn−1‖=0 (Remark 3.1), there exists K1>0 such that (βn/λn)‖sn−sn−1‖≤K1, that is, βn‖sn−sn−1‖≤λnK1. By using (3.13) and mathematical induction, bearing in mind that k is a contraction and T is nonexpansive, it follows from (3.3) that
meaning that {un} is bounded and hence {sn} is also bounded. Let vn=T(sn)+βn(sn−sn−1). Note that vn is also bounded. By using (3.3), we get
Now, we need to calculate
where Θn=βn‖zn−zn−1‖, and
and
By using (3.14)–(3.17), we get
Let γn=λn(1−τ2). Then it follows from (3.12) and (3.18) that
where
Now, we continue the proof in the following two cases:
Case Ⅰ: If {‖sn−s⋆‖} is monotonically decreasing then there exists a number N1 such that ‖sn+1−s⋆‖≤‖sn−s⋆‖ for all n≥N1. Hence, boundedness of {‖sn−s⋆‖} implies that {‖sn−s⋆‖} is convergent. Therefore, using (3.19), we have
Since 2δμn<1 and limn→∞γn=0, we obtain
By using (3.22) and Remark 3.1, we get
The boundedness of {sn} and {vn} implies that there exist M1 and M2 such that ‖sn−s⋆‖≤M1 and ‖k(s⋆)−vn‖≤M2; hence
The following can be obtained easily by using (3.22) and (3.23):
Since {sn} is bounded, there exists a subsequence {snk} of {sn} such that snk⇀ˉs. As a consequence of (3.22) and (3.25), it follows that unk⇀ˉs and vnk⇀ˉs. Now we show that ˉs∈Δ∩Λ. Since T is nonexpansive, (3.25) and the demiclosedness of I−T yield ˉs∈Fix(T). From (3.4), we have
Since 0<ˉμ<μn<μ and (3.22) holds, we have ‖snk+1−unk‖→0, and by the Lipschitz continuity of Q we get
Letting k→∞ and using the fact that the graph of a maximal monotone operator is weakly–strongly closed, we get −Q(ˉs)∈G(ˉs), that is, 0∈Q(ˉs)+G(ˉs). Thus ˉs∈Δ∩Λ.
Next, we show that {sn} strongly converges to s⋆. From (3.19), it immediately follows that
and
By using Lemma 2.1, we deduce that {sn} converges strongly to s⋆, where s⋆ is the solution of the variational inequality (3.1). Further, since ‖sn−un‖→0, un⇀ˉy∈Δ∩Λ, and sn→s⋆ as n→∞, we have ˉy=s⋆. This completes the proof of Case Ⅰ.
Case Ⅱ: If Case Ⅰ does not hold, then the function ρ:N→N defined by ρ(n)=max{m≤n:‖sm−s⋆‖≤‖sm+1−s⋆‖} is nondecreasing, ρ(n)→∞ as n→∞, and
For the same reasons as in the proof of Case Ⅰ, we obtain ‖sρ(n)−s⋆‖→0 and ‖sρ(n)−uρ(n)‖→0 as n→∞. By using (3.19) and (3.29), we obtain
Thus, we get ‖sρ(n)−s⋆‖→0 as n→∞. Keeping in mind Lemma 2.2, we have
Consequently, from (3.31), ‖sn−s⋆‖→0 as n→∞. Therefore, sn→s⋆ as n→∞, where s⋆ is a solution of the variational inequality (3.1). □
Theorem 3.2. If the Assumptions (S1)–(S5) are satisfied then the sequences induced by the Algorithm 3.2 converge strongly to s⋆∈Λ∩Δ, which solves the following variational inequality:
Proof. Let s⋆∈Λ∩Δ, then by using (3.33), we obtain
implying that {un} is bounded and so is {sn}. Let xn=λnk(sn)+(1−λn)T(sn), then by using (2.1), we get
and
and
From (3.36)–(3.38), we get
or
where ςn=λn(1−τ) and
By taking together (3.12) and (3.39), we obtain
We obtain the intended outcomes by following the same procedures as in the proof of Theorem 3.1. □
4.
Consequences
As consequences of our suggested methods, we obtain some Halpern-type inertial iterative methods for VIsPs and FPPs.
Corollary 4.1. Suppose that the assumptions (S2)–(S5) hold. The sequence {sn} induced by Algorithm 4.1 converges strongly to y⋆=PΛ∩Δ(z).
Proof. By replacing k(y) by z in Algorithm 3.1 and following the proof of Theorem 3.1, we get the desired result. □
Corollary 4.2. Suppose that the assumptions (S2)–(S5) hold. The sequence {sn} induced by Algorithm 4.2 converges strongly to y⋆=PΛ∩Δ(z).
Proof. By replacing k(y) by z in Algorithm 3.2 and following the proof of Theorem 3.2, we get the result. □
5.
Applications
Now, we present some theoretical applications of our methods for solving variational inequality and optimization problems together with the fixed point problem.
5.1. Variational inequality problem
Let Ω⊆E be a nonempty, closed, convex set and let Q:E→E be a monotone operator. The variational inequality problem (VItP) is to find s⋆∈Ω such that
The normal cone to Ω at z is defined by
It is well known that s⋆ solves the VItP if and only if
The indicator function of Ω is defined by
Since IΩ is a proper lower semicontinuous convex function on E, the subdifferential of IΩ is defined as
which is maximal monotone (see [26]). From (5.2) and (5.4), we can write (5.3) as
By replacing G by ∂IΩ in Algorithms 3.1 and 3.2, we obtain viscosity-type inertial iteration methods for finding common solutions of VItPs and FPPs.
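With G=∂IΩ, the backward step [I+μG]−1 reduces to the metric projection PΩ, so the forward-backward step becomes a projected gradient step. The sketch below uses the hypothetical data Ω=[0,1] and Q(s)=s−0.5 (strongly monotone and Lipschitz), whose VItP solution is s⋆=0.5; both choices are assumptions for illustration only.

```python
# With G = subdiff I_Omega, the resolvent [I + mu*G]^(-1) is the metric
# projection P_Omega, turning forward-backward into projected gradient.
# Hypothetical data: Omega = [0, 1], Q(s) = s - 0.5, solution s* = 0.5.

def project_interval(x, lo=0.0, hi=1.0):
    """P_Omega for Omega = [lo, hi]: the resolvent of the normal cone map."""
    return min(max(x, lo), hi)

def projected_forward_backward(s0, mu=0.5, iters=100):
    s = s0
    for _ in range(iters):
        s = project_interval(s - mu * (s - 0.5))  # forward step, then P_Omega
    return s
```

Starting inside or outside Ω, the iterates are drawn to the solution s⋆=0.5.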
5.2. Optimization problem
Let Ω⊆E be a nonempty closed convex subset and let f1,f2 be proper, lower semicontinuous, convex functions. Assume that f1 is differentiable and ∇f1 is δ-strongly monotone (hence monotone) and κ-Lipschitz continuous. The subdifferential of f2 is defined by
and is maximal monotone [25]. The following convex minimization problem (COP) is taken into consideration:
Therefore, by taking Q=∇f1 and G=∂f2 in Algorithms 3.1 and 3.2, we get two viscosity-type inertial iteration methods for common solutions to COPs and FPPs.
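With Q=∇f1 and G=∂f2, the forward-backward step is the classical proximal gradient step. The sketch below uses the hypothetical instance f1(s)=(1/2)(s−2)^2 and f2(s)=|s|, whose minimizer is s⋆=1 (since 0 must lie in (s−2)+∂|s|); this instance is an assumption for illustration, not a problem from the paper.

```python
# With Q = grad f1 and G = subdiff f2, forward-backward is proximal gradient.
# Hypothetical instance: f1(s) = (1/2)(s - 2)^2, f2(s) = |s|, minimizer s* = 1.

def soft_threshold(x, t):
    """Proximal map of t*|.|: shrink x toward 0 by the threshold t."""
    return max(abs(x) - t, 0.0) * (1.0 if x >= 0 else -1.0)

def proximal_gradient(s0, mu=0.5, iters=100):
    s = s0
    for _ in range(iters):
        grad = s - 2.0                          # grad f1 at s
        s = soft_threshold(s - mu * grad, mu)   # prox of mu*f2
    return s
```

The gradient step pulls the iterate toward 2, while the proximal step of |⋅| shrinks it by μ, balancing at s⋆=1.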
6.
Numerical experiments
Example 6.1. Let E=R3. For s=(s1,s2,s3) and w=(w1,w2,w3)∈R3, the usual inner product is defined by ⟨s,w⟩=s1w1+s2w2+s3w3 and ‖w‖2=|w1|2+|w2|2+|w3|2. We define the operators Q and G by
It is easy to show that the mapping Q is η-inverse strongly monotone with η=2, δ-strongly monotone (hence monotone) with δ=1/4, and κ-Lipschitz continuous with κ=1/2. The mapping G is maximal monotone. We define the mappings T and k as follows:
The mapping T is nonexpansive and k is a τ-contraction with τ=1/6. For Algorithms 3.1 and 3.2, we choose β=0.3, λn=1/√(100+n), σn=1/(1+n)^2, μn=3/2−1/(10+n); βn is selected randomly from (0,ˉβn), where ˉβn is calculated by (3.2). For Algorithm 1.1, we choose θ=0.5, θn=1/(1+n)^2∈(0,θ), λ=0.5∈(0,2η), and ψn=1/(10+n)^0.1. We compute the results of Algorithms 3.1 and 3.2 and then compare them with Algorithm 1.1. The stopping criterion for our computation is Toln<10−15, where Toln=‖sn+1−sn‖. We consider the following cases of initial values:
Case (a): w0=(1,7,−9), w1=(1,−3,4);
Case (b): w0=(30,53,91), w1=(1/2,−3/4,−4/11);
Case (c): w0=(1/2,−14,0), w1=(0,−23,1/4);
Case (d): w0=(0.1,−10,200), w1=(100,−2,1/4).
The experimental findings are shown in Table 1 and Figures 1–4.
Example 6.2. Let us consider the infinite-dimensional real Hilbert space E1=E2=l2:={u:=(u1,u2,u3,⋯,un,⋯), un∈R : ∑∞n=1|un|^2<∞} with inner product ⟨u,v⟩=∑∞n=1 unvn and norm ‖u‖=(∑∞n=1|un|^2)^{1/2}. We define the monotone mappings by Q(u):=u/5=(u1/5,u2/5,u3/5,⋯,un/5,⋯) and G(u):=u=(u1,u2,u3,⋯,un,⋯). Let k(u):=u/15 be the contraction, and let the nonexpansive map T be defined by T(u):=u/3=(u1/3,u2/3,u3/3,⋯,un/3,⋯).
It can be seen that Q is δ-strongly monotone with δ=1/5, κ-Lipschitz continuous with κ=1/5, and η-inverse strongly monotone with η=5; G is maximal monotone; and k is a τ-contraction with τ=1/15. We choose β=0.4, λn=1/(n+200)^0.25, σn=1/(10+n)^3, μn=4/3−1/(n+50); βn is selected randomly from (0,ˉβn), where ˉβn is calculated by (3.2). We choose θ=0.4, θn=1/(10+n)^3∈(0,θ), λ=0.7∈(0,2η), and ψn=1/(200+n)^0.25. We compute the results of Algorithms 3.1 and 3.2 and then compare them with Algorithm 1.1. The stopping criterion for our computation is Toln<10−15, where Toln=(1/2)‖sn+1−sn‖. We consider the following four cases of initial values:
Case (a'): w0={1/n}∞n=1, w1={1/(1+n^2)}∞n=0;
Case (b'): w0={1/(n+1), if n is odd; 0, if n is even}, w1={1/(1+n^3)}∞n=1;
Case (c'): w0=(0,0,0,0,⋯), w1=(1,2,3,4,0,0,0,⋯);
Case (d'): w0={(−1)^n/n}∞n=1, w1={0, if n is odd; 1/n^2, if n is even}.
The experimental findings are shown in Table 2 and Figures 5–8.
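Since the display of Algorithm 3.1 is not reproduced in this excerpt, the sketch below is one possible reading of it for Example 6.2, assembled from (3.3) and (3.4): un=λnk(sn)+(1−λn)[T(sn)+βn(sn−sn−1)] followed by sn+1=[I+μnG]−1(un−μnQ(un)). This reconstructed update, the constant inertia βn=0.4, and the truncation of l2 iterates to four coordinates are all assumptions made for illustration.

```python
# Hedged sketch of ONE POSSIBLE reading of Algorithm 3.1 for Example 6.2.
# The update
#   u_n = lam_n*k(s_n) + (1 - lam_n)*(T(s_n) + beta_n*(s_n - s_{n-1})),
#   s_{n+1} = [I + mu_n*G]^(-1)(u_n - mu_n*Q(u_n)),
# assembled from (3.3)-(3.4), is an assumed form; the constant beta_n = 0.4
# and the truncation of l2 iterates to four coordinates are also assumptions.

Q = lambda u: [x / 5.0 for x in u]    # delta = kappa = 1/5
T = lambda u: [x / 3.0 for x in u]    # nonexpansive
k = lambda u: [x / 15.0 for x in u]   # tau = 1/15 contraction
# G(u) = u, so [I + mu*G]^(-1)(x) = x / (1 + mu)

def run(s0, s1, iters=300):
    s_prev, s = list(s0), list(s1)
    for n in range(1, iters + 1):
        lam = 1.0 / (n + 200) ** 0.25
        beta = 0.4
        mu = 4.0 / 3.0 - 1.0 / (n + 50)
        Ts, ks = T(s), k(s)
        u = [lam * ks[i] + (1 - lam) * (Ts[i] + beta * (s[i] - s_prev[i]))
             for i in range(len(s))]
        Qu = Q(u)
        s_prev, s = s, [(u[i] - mu * Qu[i]) / (1.0 + mu) for i in range(len(s))]
    return s

# The unique common solution of 0 in Q(s)+G(s) and s = T(s) is s* = 0.
sol = run([1.0, 0.5, 0.25, 0.0], [0.5, 0.25, 0.125, 0.0])
```

Under this reading, the truncated iterates shrink toward the common solution 0, consistent with the strong convergence asserted by Theorem 3.1.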
7.
Conclusions
We have suggested two viscosity-type inertial iteration methods for solving the VIsP and the FPP in Hilbert spaces. Our methods compute the viscosity approximation, the fixed point iteration, and the inertial extrapolation simultaneously at the beginning of each iteration. We proved the strong convergence of the proposed methods without calculating the resolvent of the associated monotone operators. Some consequences and theoretical applications were also discussed. Finally, we illustrated the proposed methods with suitable numerical examples, from which we deduced that our algorithms perform well in terms of CPU time and number of iterations.
Author contributions
M. Dilshad: Conceptualization, Methodology, Formal analysis, Investigation, Writing-original draft, Software, Writing-review & editing; A. Alamer: Conceptualization, Methodology, Formal analysis, Software, Writing-review & editing; Maryam G. Alshahri: Conceptualization, Methodology, Formal analysis, Software, Writing-review & editing; Esmail Alshaban: Investigation, Writing-original draft; Fahad M. Alamrani: Investigation, Writing-original draft. All authors have read and approved the final version of the manuscript for publication.
Use of AI tools declaration
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
Conflict of interest
The authors declare no conflicts of interest.