
In this article, we suggest and analyze splitting-type viscosity methods for the inclusion problem and the fixed point problem of a nonexpansive mapping in the setting of Hadamard manifolds. We derive the convergence of the sequences generated by the proposed iterative methods under suitable assumptions. Several special cases of the proposed iterative methods are also discussed. Finally, some applications to variational inequality, optimization and fixed point problems on Hadamard manifolds are given.
Citation: Mohammad Dilshad, Aysha Khan, Mohammad Akram. Splitting type viscosity methods for inclusion and fixed point problems on Hadamard manifolds[J]. AIMS Mathematics, 2021, 6(5): 5205-5221. doi: 10.3934/math.2021309
Let M:H⇉H be a set-valued maximal monotone mapping and K be a nonempty closed convex subset of a Hilbert space H. The inclusion problem:
Find x∈K such that x∈M−1(0), | (1.1) |
was introduced by Rockafellar [19]. The iconic method for solving the inclusion problem (1.1) is the proximal point method, which was first introduced and studied by Martinet [15] for optimization problems and later generalized by Rockafellar [19] to solve the inclusion problem (1.1).
Many problems arising in nonlinear analysis, such as optimization, variational inequality problems, equilibrium problems and partial differential equations, are convertible to the inclusion problem (1.1). Therefore, in the recent past, many authors have extended and generalized the inclusion problem (1.1) in different directions using novel and innovative techniques; see, for example, [1,4,7,9,11,12,13,20,24] and the references cited therein.
The fixed point problem of a nonexpansive self mapping S:K→K is defined as:
Find x∈K such that x∈Fix(S). | (1.2) |
Most iterative methods for finding fixed points of nonexpansive mappings trace back to Mann [14]. Moudafi [16] proposed the viscosity method by combining the nonexpansive mapping S with a given contraction mapping φ on K: for an arbitrary x0∈K, compute the sequence {xn} generated by
xn+1=βnφ(xn)+(1−βn)S(xn), n≥0, |
where βn∈(0,1) tends slowly to zero. The sequence {xn} obtained from this iterative method converges strongly to a fixed point of S. The problem of finding a common solution of the fixed point problem (1.2) for a nonexpansive self-mapping S and a variational inclusion problem was studied by Takahashi et al. [22] in Hilbert spaces; it is defined as:
Find x∈K such that x∈Fix(S)∩(M+F)−1(0), | (1.3) |
where F is a single-valued monotone mapping and M,S are the same as defined above. Recently, Ansari et al. [1] extended problem (1.3) to Hadamard manifolds, studied Halpern- and Mann-type algorithms to solve it, and discussed several applications on Hadamard manifolds. Very recently, Al-Homidan et al. [2] extended the viscosity method to hierarchical variational inequality problems and discussed several special cases on Hadamard manifolds. Khammahawong et al. [10] studied splitting algorithms for common solutions of equilibrium and inclusion problems on Hadamard manifolds.
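To make the viscosity scheme recalled above concrete, the following is a minimal numerical sketch in the Hilbert space R2 (our own toy illustration; the particular maps S, φ and the choice βn=1/(n+2) are assumptions made only for this example and are not taken from [16]):

```python
import numpy as np

# Sketch of Moudafi's viscosity iteration x_{n+1} = b_n*phi(x_n) + (1-b_n)*S(x_n)
# in the Euclidean (Hilbert) space R^2.

def S(x):
    """Nonexpansive map: metric projection of R^2 onto the closed unit ball."""
    nrm = np.linalg.norm(x)
    return x if nrm <= 1.0 else x / nrm

def phi(x):
    """Contraction with constant 1/2, anchored at an arbitrary point."""
    return 0.5 * x + np.array([0.3, 0.4])

x = np.array([5.0, -3.0])          # arbitrary starting point x_0
for n in range(200):
    beta = 1.0 / (n + 2)           # beta_n -> 0 slowly, sum of beta_n diverges
    x = beta * phi(x) + (1.0 - beta) * S(x)

print(x)  # approximates the fixed point of S selected by the viscosity scheme
```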
In this article, encouraged and inspired by the work in [1,10,16], our aim is to introduce and study a splitting-type viscosity method to find a common solution of the inclusion problem (1.1) and the fixed point problem (1.2) on Hadamard manifolds, that is,
Find x∈K such that x∈Fix(S)∩(M)−1(0), | (1.4) |
where K is a nonempty closed convex subset of a Hadamard manifold D. Our suggested method is a double backward method for inclusion and fixed point problems and can be seen as a refinement of the work studied in [1]. The article is organized as follows:
The next section consists of preliminaries and some useful results of Riemannian manifolds. Section 3 deals with the main results explaining the splitting type viscosity method and convergence of the sequences obtained from it. In the last section, some applications of the proposed method and its convergence theorem to solve variational inequality, optimization and fixed point problems are given.
Let D be a finite dimensional differentiable manifold. For a point p∈D, the tangent space of D at p is denoted by TpD and the tangent bundle by TD=∪p∈DTpD. The tangent space TpD at p is a vector space and has the same dimension as D. An inner product ℜp(⋅,⋅) on TpD is called a Riemannian metric on TpD. A tensor field ℜ(⋅,⋅) is called a Riemannian metric on D if, for each p∈D, the tensor ℜp(⋅,⋅) is a Riemannian metric on TpD. We assume that D is endowed with the Riemannian metric ℜp(⋅,⋅) with the corresponding norm ‖.‖p. The angle between 0≠x,y∈TpD, denoted by ∠p(x,y), is defined by cos∠p(x,y)=ℜp(x,y)/(‖x‖‖y‖). For the sake of simplicity, we denote ‖.‖p=‖.‖, ℜp(⋅,⋅)=ℜ(⋅,⋅) and ∠p(x,y)=∠(x,y).
For a given piecewise smooth curve γ:[a,b]→D joining p to q (i.e., γ(a)=p and γ(b)=q), the length of γ is defined as
L(γ)=∫_a^b ‖γ′(s)‖ ds. |
The Riemannian distance d(p,q), which minimizes the length over the set of all such curves joining p to q, induces the original topology on D.
Let ∇ be the Levi-Civita connection associated with the Riemannian manifold D. A vector field U is said to be parallel along a smooth curve γ if ∇γ′(s)U=0. If γ′ is parallel along γ, i.e., ∇γ′(s)γ′(s)=0, then γ is called a geodesic; in this case ‖γ′‖ is constant, and if ‖γ′‖=1, then γ is said to be a normalized geodesic. A geodesic joining p to q in D is called a minimal geodesic if its length is equal to d(p,q). A Riemannian manifold is called (geodesically) complete if for any p∈D, all geodesics emanating from p are defined for all s∈(−∞,∞). By the Hopf-Rinow Theorem [21], if D is a Riemannian manifold, then the following are equivalent:
(I) D is complete.
(II) Any pair of points in D can be joined by a minimal geodesic.
(III) (D,d) is a complete metric space.
(IV) Bounded closed subsets of D are compact.
Let γ:[0,1]→D be a geodesic joining p to q. Then
d(γ(s1),γ(s2))=|s1−s2|d(p,q),∀s1,s2∈[0,1]. | (2.1) |
Assuming D is a complete Riemannian manifold, the exponential mapping expp:TpD→D at p is defined by expp(ϑ)=γϑ(1,p) for each ϑ∈TpD, where γ(⋅)=γϑ(⋅,p) is the geodesic starting at p with velocity ϑ (i.e., γ(0)=p and γ′(0)=ϑ). We know that expp(sϑ)=γϑ(s,p) for each real number s. One can easily see that expp0=γϑ(0;p)=p, where 0 is the zero tangent vector. The exponential mapping expp is differentiable on TpD for any p∈D. It is known that the derivative of expp at 0 is the identity mapping of TpD. Therefore, by the inverse mapping theorem, there exists an inverse exponential mapping exp−1p:D→TpD. Moreover, for any p,q∈D, we have d(p,q)=‖exp−1pq‖.
A complete, simply connected Riemannian manifold of non-positive sectional curvature is called a Hadamard manifold.
Proposition 2.1. [21] Let D be a Hadamard manifold. Then expp:TpD→D is a diffeomorphism for each p∈D, and for any two points p,q∈D there exists a unique normalized geodesic γ:[0,1]→D joining p=γ(0) to q=γ(1), which is in fact a minimal geodesic, given by
γ(s)=expp(s exp−1pq), for all s∈[0,1]. | (2.2) |
A subset K⊂D is said to be convex if for any two points p,q∈K, the geodesic joining p to q is contained in K, that is, if γ:[a,b]→D is a geodesic such that p=γ(a) and q=γ(b), then γ((1−s)a+sb)∈K for all s∈[0,1]. From now on, K⊂D will denote a nonempty, closed and convex subset of a Hadamard manifold D. The projection onto K is defined by
PK(p)={r∈K:d(p,r)≤d(p,q), for all q∈K}, for all p∈D. | (2.3) |
A function g:K→R is said to be convex if for any geodesic γ:[a,b]→D, the composition function g∘γ:[a,b]→R is convex, that is,
(g∘γ)(as+(1−s)b)≤s(g∘γ)(a)+(1−s)(g∘γ)(b), for all s∈[0,1] and for all a,b∈R. |
Proposition 2.2. [21] The Riemannian distance d:D×D→R is a convex function with respect to the product Riemannian metric, i.e., given any pair of geodesics γ1:[0,1]→D and γ2:[0,1]→D, the following inequality holds for all s∈[0,1]:
d(γ1(s),γ2(s))≤(1−s)d(γ1(0),γ2(0))+sd(γ1(1),γ2(1)). | (2.4) |
In particular, for each p∈D, the function d(⋅,p):D→R is a convex function.
If D is a finite dimensional manifold with dimension n, then Proposition 2.1 shows that D is diffeomorphic to the Euclidean space Rn. Thus, we see that D has the same topology and differential structure as Rn. Moreover, Hadamard manifolds and Euclidean spaces have several similar geometrical properties. We describe some of them in the following results.
Recall that a geodesic triangle Δ(q1,q2,q3) of a Riemannian manifold is a set consisting of three points q1,q2,q3 and three minimal geodesics γj joining qj to qj+1, where j=1,2,3 (mod 3).
Lemma 2.1. [13] Let Δ(q1,q2,q3) be a geodesic triangle in Hadamard manifold D. Then there exist q′1,q′2,q′3∈R2 such that
d(q1,q2)=‖q′1−q′2‖, d(q2,q3)=‖q′2−q′3‖, and d(q3,q1)=‖q′3−q′1‖. |
The points q′1,q′2,q′3 are called the comparison points to q1,q2,q3, respectively. The triangle Δ(q′1,q′2,q′3) is called the comparison triangle of the geodesic triangle Δ(q1,q2,q3); it is unique up to isometry of R2.
Lemma 2.2. [13] Let Δ(q1,q2,q3) be a geodesic triangle in Hadamard manifold D and Δ(q′1,q′2,q′3)∈R2 be its comparison triangle.
(i) Let θ1,θ2,θ3 (respectively, θ′1,θ′2,θ′3) be the angles of Δ(q1,q2,q3) (respectively, Δ(q′1,q′2,q′3)) at the vertices (q1,q2,q3) (respectively, q′1,q′2,q′3). Then the following inequality holds:
θ′1≥θ1,θ′2≥θ2,θ′3≥θ3. |
(ii) Let p be a point on the geodesic joining q1 to q2 and p′ be its comparison point in the interval [q′1,q′2]. Suppose that d(p,q1)=‖p′−q′1‖ and d(p,q2)=‖p′−q′2‖. Then
d(p,q3)≤‖p′−q′3‖. |
Proposition 2.3. [21] (Comparison Theorem for Triangles) Let Δ(q1,q2,q3) be a geodesic triangle. For each j=1,2,3 (mod 3), denote by γj:[0,lj]→D the geodesic joining qj to qj+1, and set lj=L(γj), θj=∠(γ′j(0),−γ′j−1(lj−1)). Then
θ1+θ2+θ3≤π, | (2.5) |
l2j+l2j+1−2ljlj+1cosθj+1≤l2j−1. | (2.6) |
In terms of distance and exponential mapping, (2.6) can be rewritten as
d2(qj,qj+1)+d2(qj+1,qj+2)−2ℜ(exp−1qj+1qj,exp−1qj+1qj+2)≤d2(qj−1,qj), | (2.7) |
since
ℜ(exp−1qj+1qj,exp−1qj+1qj+2)=d(qj,qj+1)d(qj+1,qj+2)cosθj+1. | (2.8) |
The following proposition characterizes the projection mapping.
Proposition 2.4. [23] Let K be a nonempty closed convex subset of a Hadamard manifold D. Then for any p∈D, PK(p) is a singleton set and the following inequality holds:
ℜ(exp−1PK(p)p,exp−1PK(p)q)≤0,∀q∈D. | (2.9) |
We denote by Ω(D) the set of all single-valued vector fields M:D→TD such that M(p)∈TpD for all p∈D, and by χ(D) the set of all set-valued vector fields M:D⇉TD such that M(p)⊆TpD for all p∈D(M), where D(M) is the domain of M, defined by D(M)={p∈D:M(p)≠∅}.
Definition 2.1. [17] A single-valued vector field M∈Ω(D) is said to be monotone if for all p,q∈D,
ℜ(M(p),exp−1pq)≤ℜ(M(q),−exp−1qp). |
Definition 2.2. [12] A single-valued mapping T:K⊆D→D is said to be firmly nonexpansive if for all p,q∈K, the mapping ψ:[0,1]→[0,∞) defined by
ψ(s)=d(expp(s exp−1pT(p)),expq(s exp−1qT(q))), ∀s∈[0,1], |
is nonincreasing.
Definition 2.3. [8] A set-valued vector field M∈χ(D) is said to be monotone if for all p,q∈D(M),
ℜ(u,exp−1pq)≤ℜ(v,−exp−1qp),∀u∈M(p),∀v∈M(q). |
Definition 2.4. [12] Let M∈χ(D). The resolvent of M of order λ>0 is the set-valued mapping JMλ:D⇉D(M) defined by
JMλ(p)={q∈D:p∈expqλM(q)},∀p∈D. |
Theorem 2.1. [12] Let λ>0 and M∈χ(D). Then vector field M is monotone if and only if JMλ is single-valued and firmly nonexpansive.
Lemma 2.3. [13] Let {an} and {bn} be two sequences of positive real numbers such that limn→∞bnan=0 and ∞∑n=1an=+∞. Let {xn} be a sequence of positive real numbers satisfying the recursive inequality:
xn+1≤(1−an)xn+anbn,∀n∈N, |
then limn→∞xn=0.
We propose the following splitting type viscosity method for problem (1.4) on Hadamard manifold.
Algorithm 3.1. Suppose that K is a nonempty, closed and convex subset of a Hadamard manifold D. Let M:K⇉TD be a set-valued vector field, φ:K→K be a contraction and S:K→K be a nonexpansive mapping such that Fix(S)∩(M)−1(0)≠∅. For an arbitrary x0∈K, αn,βn∈(0,1) and λ>0, compute the sequences {yn} and {xn} as follows:
yn=expxn(1−αn)exp−1xnJMλ(xn),
xn+1=expφ(xn)(1−βn)exp−1φ(xn)S(yn), |
or, equivalently
xn+1=γn(1−βn),∀n≥0, |
where γn:[0,1]→D is the geodesic joining φ(xn) to S(yn), that is, γn(0)=φ(xn) and γn(1)=S(yn) for all n≥0.
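The two steps of Algorithm 3.1 can be summarized in the following sketch, assuming the manifold is supplied through callables exp(p,v) and log(p,q) standing for expp(v) and exp−1p(q), and that the resolvent JMλ, the contraction φ and the nonexpansive mapping S are given; all names are placeholders introduced here for illustration only:

```python
def splitting_viscosity(x0, exp, log, resolvent, phi, S, alphas, betas, n_iter=100):
    """One possible reading of Algorithm 3.1:
        y_n     = exp_{x_n}((1 - alpha_n) * exp_{x_n}^{-1} J_lambda^M(x_n))
        x_{n+1} = exp_{phi(x_n)}((1 - beta_n) * exp_{phi(x_n)}^{-1} S(y_n)).
    `exp(p, v)` and `log(p, q)` stand for exp_p(v) and exp_p^{-1}(q); `resolvent`
    is J_lambda^M with lambda fixed in advance; `alphas(n)` and `betas(n)` return
    the step parameters alpha_n, beta_n in (0,1).
    """
    x = x0
    for n in range(n_iter):
        a, b = alphas(n), betas(n)
        y = exp(x, (1.0 - a) * log(x, resolvent(x)))   # backward (resolvent) step
        v = phi(x)
        x = exp(v, (1.0 - b) * log(v, S(y)))           # viscosity step toward S(y_n)
    return x
```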
For the convergence of Algorithm 3.1, we require the following conditions on the sequences {αn} and {βn} :
(A1) limn→∞αn=0,limn→∞βn=0;
(A2) ∞∑n=0αn=∞,∞∑n=0βn=∞;
(A3) ∞∑n=0|αn+1−αn|<∞,∞∑n=0|βn+1−βn|<∞
If S=I, the identity mapping on K, then Algorithm 3.1 reduces to the following algorithm to find the solution of problem (1.1).
Algorithm 3.2. Suppose that K is a nonempty, closed and convex subset of a Hadamard manifold D. Let M:K⇉TD be a set-valued vector field and φ:K→K be a self-mapping. For an arbitrary x0∈K, compute the sequences {yn} and {xn} as follows:
yn=expxn(1−αn)exp−1xnJMλ(xn),
xn+1=expφ(xn)(1−βn)exp−1φ(xn)yn, |
where αn,βn∈(0,1) and λ>0 are same as given in Algorithm 3.1.
We can obtain the following proposition by substituting A=0, the zero vector field, in Proposition 3.2 of [3].
Proposition 3.1. For any x∈K, the following assertions are equivalent:
(i)x∈(M)−1(0);
(ii) x=JMλ(x), for all λ>0.
Remark 3.1. It can be easily seen that for a nonexpansive mapping S, the set Fix(S) is geodesic convex, for more details, (see [1,12]). Since JMλ is nonexpansive, by Proposition 3.1, it follows that Fix(JMλ)=(M)−1(0). Therefore (M)−1(0) is closed and geodesic convex in D. Hence, Fix(S)∩(M)−1(0) is closed and geodesic convex in D.
Theorem 3.1. Let D be a Hadamard manifold and K be a nonempty, closed and convex subset of D. Let S:K→K be a nonexpansive mapping and φ:K→K be a contraction with constant κ. Let M:K⇉TD be a set-valued monotone vector field. Suppose {αn} and {βn} are sequences in (0,1) satisfying the conditions A1−A3. If Fix(S)∩(M)−1(0)≠∅, then the sequences generated by Algorithm 3.1 converge to w∈Fix(S)∩(M)−1(0), where w=PFix(S)∩(M)−1(0)φ(w).
Proof. We divide the proof into the following five steps.
Step I. We show that {yn},{xn},{φ(xn)} and {S(yn)} are bounded.
Let x⋆∈Fix(S)∩(M)−1(0); then x⋆∈Fix(S) and x⋆∈(M)−1(0). Since yn=γn(1−αn), where γn here denotes the geodesic joining xn to JMλ(xn), by Proposition 3.1 and the nonexpansive property of JMλ, we have
d(yn,x⋆)=d(γn(1−αn),x⋆)≤αnd(γn(0),x⋆)+(1−αn)d(γn(1),x⋆)≤αnd(xn,x⋆)+(1−αn)d(JMλ(xn),x⋆)≤αnd(xn,x⋆)+(1−αn)d(xn,x⋆)=d(xn,x⋆). | (3.1) |
Since xn+1=γn(1−βn), then by convexity of Riemannian distance, we have
d(xn+1,x⋆)=d(γn(1−βn),x⋆)≤βnd(γn(0),x⋆)+(1−βn)d(γn(1),x⋆)=βnd(φ(xn),x⋆)+(1−βn)d(S(yn),x⋆)≤βn[d(φ(xn),φ(x⋆))+d(φ(x⋆),x⋆)]+(1−βn)d(S(yn),S(x⋆))≤βn[κd(xn,x⋆)+d(φ(x⋆),x⋆)]+(1−βn)d(yn,x⋆)≤βn[κd(xn,x⋆)+d(φ(x⋆),x⋆)]+(1−βn)d(xn,x⋆)≤[1−βn(1−κ)]d(xn,x⋆)+βnd(φ(x⋆),x⋆)⋮≤max{d(x0,x⋆),11−κd(φ(x⋆),x⋆)}, | (3.2) |
which implies that the sequence {xn} is bounded and, using (3.1), {yn} is also bounded. Since S is nonexpansive and φ is a contraction, we conclude that the sequences {S(yn)} and {φ(xn)} are also bounded.
Step II. We show that limn→∞d(xn+1,xn)=0.
Since S is nonexpansive and φ is a contraction, then using (2.1), (2.4) and Proposition 2.2, we have
d(xn+1,xn)=d(γn(1−βn),γn−1(1−βn−1))≤d(γn(1−βn),γn−1(1−βn))+d(γn−1(1−βn),γn−1(1−βn−1))≤βnd(γn(0),γn−1(0))+(1−βn)d(γn(1),γn−1(1))+|βn−βn−1|d(φ(xn−1),S(yn−1))≤βnd(φ(xn),φ(xn−1))+(1−βn)d(S(yn),S(yn−1))+|βn−βn−1|d(φ(xn−1),S(yn−1))≤βnκd(xn,xn−1)+(1−βn)d(yn,yn−1)+|βn−βn−1|d(φ(xn−1),S(yn−1)). | (3.3) |
Again, by Algorithm 3.1 and nonexpansive property of JMλ, we have
d(yn,yn−1)=d(γn(1−αn),γn−1(1−αn−1))≤d(γn(1−αn),γn−1(1−αn))+d(γn−1(1−αn),γn−1(1−αn−1))≤αnd(γn(0),γn−1(0))+(1−αn)d(γn(1),γn−1(1))+|αn−αn−1|d(xn−1,JMλ(xn−1))≤αnd(xn,xn−1)+(1−αn)d(JMλ(xn),JMλ(xn−1))+|αn−αn−1|d(xn−1,JMλ(xn−1))≤αnd(xn,xn−1)+(1−αn)d(xn,xn−1)+|αn−αn−1|d(xn−1,JMλ(xn−1))≤d(xn,xn−1)+|αn−αn−1|d(xn−1,JMλ(xn−1)). | (3.4) |
Since {xn}, {φ(xn)} and {JMλ(xn)} are bounded, there exist constants C1,C2 and C3 such that d(xn−1,JMλ(xn−1))≤C1, d(φ(xn),x⋆)≤C2 and d(xn,x⋆)≤C3. Thus, we have
d(yn,yn−1)≤d(xn,xn−1)+|αn−αn−1|C1, | (3.5) |
and
d(φ(xn−1),S(yn−1))≤d(φ(xn−1),x⋆)+d(S(yn−1),x⋆)≤d(φ(xn−1),x⋆)+d(yn−1,x⋆)≤d(φ(xn−1),x⋆)+d(xn−1,x⋆)≤C2+C3:=C4. | (3.6) |
d(xn,xn−1)≤d(xn,x⋆)+d(xn−1,x⋆)≤C3+C3=2C3:=C5. | (3.7) |
Combining (3.4), (3.5), (3.6) and (3.7), (3.3) becomes
d(xn+1,xn)≤[1−βn(1−κ)]C5+|αn−αn−1|C1+|βn−βn−1|C4≤(1−¯βn)C5+|αn−αn−1|C1+|βn−βn−1|C4, |
where ¯βn=βn(1−κ). Let m≤n, then
d(xn+1,xn)≤C5n∏i=m(1−ˉβi)+C1n∑i=m|αi−αi−1|+C4n∑i=m|βi−βi−1|. |
Taking limit n→∞, we have
d(xn+1,xn)≤C5∞∏i=m(1−ˉβi)+C1∞∑i=m|αi−αi−1|+C4∞∑i=m|βi−βi−1|. |
From condition A2, we have ∞∏i=m(1−ˉβi)=0, and from A3 the tails ∞∑i=m|αi−αi−1| and ∞∑i=m|βi−βi−1| tend to 0 as m→∞. Thus, by letting m→∞, we get
limn→∞d(xn+1,xn)=0. | (3.8) |
Step III. Next, we show that limn→∞d(xn,yn)=0. Since φ is a contraction, then by using Algorithm 3.1 and (3.1), we obtain
d(xn,yn)≤d(xn,x⋆)+d(yn,x⋆)≤d(xn,x⋆)+d(xn,x⋆)=2d(xn,x⋆)=2{d(γn−1(1−βn−1),x⋆)}≤2{βn−1d(γn−1(0),x⋆)+(1−βn−1)d(γn−1(1),x⋆)}≤2{βn−1d(φ(xn−1),x⋆)+(1−βn−1)d(S(yn−1),x⋆)}≤2{βn−1[d(φ(xn−1),φ(x⋆))+d(φ(x⋆),x⋆)]+(1−βn−1)d(S(yn−1),x⋆)}≤2{βn−1κd(xn−1,x⋆)+βn−1d(φ(x⋆),x⋆)+(1−βn−1)d(xn−1,x⋆)}=2{[1−βn−1(1−κ)]d(xn−1,x⋆)+βn−1d(φ(x⋆),x⋆)}=2{[1−ˉβn−1]d(xn−1,x⋆)+βn−1d(φ(x⋆),x⋆)}. | (3.9) |
Let m≤n, then we have
d(xn,yn)<2C3n−1∏j=m(1−ˉβj)+2n−1∑j=m{βjn−1∏i=j+1(1−¯βi)}d(φ(x⋆),x⋆). |
By taking limit n→∞, we have
d(xn,yn)<2C3∞∏j=m(1−ˉβj)+2∞∑j=m{βj∞∏i=j+1(1−¯βi)}d(φ(x⋆),x⋆). |
From A2, it follows that limm→∞∞∏j=m(1−ˉβj)=0 and from A1−A2, limm→∞∞∑j=m{βj∞∏i=j+1(1−¯βi)}=0. Hence by letting limit m→∞, we get
limn→∞d(xn,yn)=0. | (3.10) |
Step IV. Since {xn} is bounded, there exists a subsequence {xnk} of {xn} such that xnk→w as k→∞. Let un=JMλ(xn); by Algorithm 3.1, yn=expxn(1−αn)exp−1xnJMλ(xn). Then we have d(yn,un)=αnd(xn,un), and hence d(yn,un)→0 as n→∞. Thus
d(xn,un)≤d(xn,yn)+d(yn,un)→0 as n→∞. | (3.11) |
By the continuity of JMλ, as k→∞, we have
0=limk→∞d(xnk,unk)=limk→∞d(xnk,JMλ(xnk))=d(w,JMλ(w)). | (3.12) |
This implies that JMλ(w)=w; hence, by Proposition 3.1, we get w∈(M)−1(0).
Again, by using the convexity of the Riemannian distance, we have
d(xn+1,S(yn))=d(γn(1−βn),S(yn))≤βnd(γn(0),S(yn))+(1−βn)d(γn(1),S(yn))=βnd(φ(xn),S(yn))+(1−βn)d(S(yn),S(yn))=βnd(φ(xn),S(yn)). | (3.13) |
Since {xn} is bounded and φ is a κ-contraction, we get
d(φ(xn),S(yn))≤d(φ(xn),φ(x⋆))+d(φ(x⋆),S(yn))≤κd(xn,x⋆)+d(φ(x⋆),x⋆)+d(S(yn),x⋆)≤κd(xn,x⋆)+d(φ(x⋆),x⋆)+d(yn,x⋆)≤κd(xn,x⋆)+d(φ(x⋆),x⋆)+d(xn,x⋆)≤(1+κ)d(xn,x⋆)+d(φ(x⋆),x⋆)≤(1+κ)C3+C2:=C6. | (3.14) |
This together with the condition A1, implies that
limn→∞d(xn+1,S(yn))≤limn→∞βnC6=0. | (3.15) |
Also, from (3.10) and for the subsequence {ynk} of {yn}, we have
limk→∞d(ynk,w)≤limk→∞d(ynk,xnk)+limk→∞d(xnk,w)=0, | (3.16) |
that is, {ynk} converges to w as k→∞. Then, we obtain
d(S(w),w)≤d(S(w),S(ynk))+d(S(ynk),xnk+1)+d(xnk+1,w)≤d(w,ynk)+d(S(ynk),xnk+1)+d(xnk+1,w)→0, as k→∞, | (3.17) |
and so, w∈Fix(S). Thus, we have w∈Fix(S)∩(M)−1(0).
Step V. Finally, we show that limn→∞d(xn,z)=0.
To prove the last step, we need to show that lim supn→∞ℜ(exp−1zφ(z),exp−1zS(yn))≤0, where z is a fixed point of the mapping PFix(S)∩(M)−1(0)φ.
Since w∈Fix(S)∩(M)−1(0) and z=PFix(S)∩(M)−1(0)φ(z), by Proposition 2.4 we have ℜ(exp−1zφ(z),exp−1zw)≤0. The boundedness of {yn} implies that {ℜ(exp−1zφ(z),exp−1zS(yn))} is bounded. Then, we have
lim supn→∞ℜ(exp−1zφ(z),exp−1zS(yn))=limk→∞ℜ(exp−1zφ(z),exp−1zS(ynk)). | (3.18) |
Since ynk→w as k→∞ and by using continuity of S, we obtain
limk→∞ℜ(exp−1zφ(z),exp−1zS(ynk))=ℜ(exp−1zφ(z),exp−1zS(w))≤0, |
therefore,
lim supn→∞ℜ(exp−1zφ(z),exp−1zS(yn))≤0. | (3.19) |
For n≥0, set v=φ(xn), q=S(yn) and consider geodesic triangles Δ(v,q,z), Δ(φ(z),q,v) and Δ(φ(z),q,z), with their comparison triangles Δ(v′,q′,z′), Δ(φ(z)′,q′,v′) and Δ(φ(z)′,q′,z′). From Lemma 2.1, we have
d(φ(xn),z)=d(v,z)=‖v′−z′‖ and d(S(yn),z)=d(q,z)=‖q′−z′‖, |
d(φ(z),z)=‖φ(z)′−z′‖ and d(S(yn),z)=d(q,z)=‖q′−z′‖. |
Recall that xn+1=expφ(xn)(1−βn)exp−1φ(xn)S(yn)=expv(1−βn)exp−1vq. The comparison point of xn+1 in R2 is x′n+1=βnv′+(1−βn)q′. Let θ and θ′ denote the angles at q and q′ in the triangles Δ(φ(z),q,z) and Δ(φ(z)′,q′,z′), respectively. Therefore, θ≤θ′, and then, cosθ′≤cosθ. By Lemma 2.2 (ii), using nonexpansive property of S and contraction property of φ, we have
d2(xn+1,z)≤‖x′n+1−z′‖2=‖βnv′+(1−βn)q′−z′‖2=‖βn(v′−z′)+(1−βn)(q′−z′)‖2=β2n‖v′−z′‖2+(1−βn)2‖q′−z′‖2+2βn(1−βn)‖v′−z′‖‖q′−z′‖cosθ′≤β2nd2(φ(xn),z)+(1−βn)2d2(S(yn),z)+2βn(1−βn)d(φ(xn),z)d(S(yn),z)cosθ≤β2nd2(φ(xn),z)+(1−βn)2d2(S(yn),z)+2βn(1−βn)[d(φ(z),z)+d(φ(xn),φ(z))]d(S(yn),z)cosθ≤β2nd2(φ(xn),z)+(1−βn)2d2(xn,z)+2βn(1−βn)[d(φ(z),z)+d(φ(xn),φ(z))]d(xn,z)cosθ≤β2nd2(φ(xn),z)+(1−βn)2d2(xn,z)+2βn(1−βn)[d(φ(z),z)d(xn,z)+d(φ(xn),φ(z))d(xn,z)]cosθ≤β2nd2(φ(xn),z)+(1−βn)2d2(xn,z)+2βn(1−βn)[ℜ(exp−1zφ(z),exp−1zxn)+κd2(xn,z)]=[1−2βn+β2n+2βn(1−βn)κ]d2(xn,z)+β2nd2(φ(xn),z)+2βn(1−βn)ℜ(exp−1zφ(z),exp−1zxn)=(1−bn)d2(xn,z)+bncn, |
where bn=2βn−β2n−2βn(1−βn)κ, so that 1−bn=1−2βn+β2n+2βn(1−βn)κ, and cn=(1/bn)[β2nd2(φ(xn),z)+2βn(1−βn)ℜ(exp−1zφ(z),exp−1zxn)]. By (3.19), lim supn→∞cn≤0, and by conditions A1 and A2, we have limn→∞bn=0 and ∞∑n=1bn=∞, respectively. Hence, by Lemma 2.3, limn→∞d(xn,z)=0. This completes the proof.
We obtain the following convergence result for Algorithm 3.2, by replacing S=I, the identity mapping in Theorem 3.1.
Theorem 3.2. Let K be a nonempty, closed and convex subset of Hadamard manifold D. Let φ:K→K be a contraction mapping with constant κ and M:K⇉TD be a set-valued monotone vector field. If (M)−1(0)≠∅, then the sequence achieved by Algorithm 3.2 converges to w∈(M)−1(0), where w=P(M)−1(0)φ(w).
Remark 3.2. We can obtain splitting type Mann's iterative methods for the said problems by putting φ=I, the identity mapping on K in Algorithm 3.1 and Algorithm 3.2 and by putting κ=1 in Theorem 3.1 and Theorem 3.2, we can obtain the convergence theorems.
To illustrate the convergence of our algorithms, we extend the example which was also considered in [4].
Example 3.1. Let D=R++={x∈R:x>0}. Then D is a Riemannian manifold with the Riemannian metric ⟨⋅,⋅⟩ defined by ⟨u,v⟩:=g(x)uv for all u,v∈TxD, where g:R++→(0,+∞) is given by g(x)=x−2. It follows directly that the tangent plane TxD at x∈D is equal to R for all x∈D. The Riemannian distance d:D×D→R+ is given by
d(x,y):=|lnxy|,∀x,y∈D. |
Therefore, (R++,⟨⋅,⋅⟩) is a Hadamard manifold and the unique geodesic γ:R→D starting from x=γ(0) with v=˙γ(0)∈TxD is defined by γ(t):=xe(v/x)t. In other words, γ(t), in terms of initial point γ(0)=x and terminal point γ(1)=y, is defined as γ(t):=x1−tyt. The inverse of exponential mapping is given by
γ′(0)=exp−1xy=xlnyx. |
Consider a vector field M:D⇉R defined by
M(x):={x},∀x∈D(M). |
Note that M is a monotone vector field and the resolvent of M is given by
JMλ(x):=xe−λ,∀λ>0. |
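For completeness, the closed form of the resolvent can be checked directly from Definition 2.4 and the formula expq(v)=q e^{v/q} obtained from the geodesic above (a short verification of our own):

```latex
q = J^{M}_{\lambda}(x)
  \iff x \in \exp_{q}\bigl(\lambda M(q)\bigr)
  \iff x = \exp_{q}(\lambda q) = q\, e^{\lambda q / q} = q\, e^{\lambda}
  \iff q = x\, e^{-\lambda}.
```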
Let φ be a contraction and S be a nonexpansive mapping defined by φ(x)=x/2 and S(x)=x for all x∈D, respectively. Clearly, the solution set of problem (1.4) is {0}. Choose the initial guess x0=1, λ=1/2, and the step sizes αn=βn=1/√(n+1) or αn=βn=1/(n+1)^{1/3}. Then all the conditions of Theorem 3.1 are satisfied, and hence the sequence {xn}∞n=0 generated by Algorithm 3.1 converges to a solution of problem (1.4). The convergence of the sequence is shown in Figure 1.
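A short numerical sketch of this example (our own illustration; the helper names exp_map and log_map are not from the paper) runs Algorithm 3.1 with S=I and the data above; the iterates decrease toward 0:

```python
import math

# Sketch of Example 3.1 on D = (0, inf) with metric <u,v>_x = u*v/x^2, where
# exp_x(v) = x*exp(v/x), exp_x^{-1}(y) = x*log(y/x), J_lambda^M(x) = x*exp(-lambda),
# phi(x) = x/2 and S = identity. Step sizes alpha_n = beta_n = 1/sqrt(n+1)
# are one of the two choices given in the example.

def exp_map(x, v):      # exp_x(v) on R_{++}
    return x * math.exp(v / x)

def log_map(x, y):      # exp_x^{-1}(y) on R_{++}
    return x * math.log(y / x)

lam = 0.5
phi = lambda x: 0.5 * x
resolvent = lambda x: x * math.exp(-lam)

x = 1.0                                   # initial guess x_0 = 1
for n in range(1, 51):
    a = b = 1.0 / math.sqrt(n + 1)
    y = exp_map(x, (1.0 - a) * log_map(x, resolvent(x)))
    v = phi(x)
    x = exp_map(v, (1.0 - b) * log_map(v, y))   # S = identity, so S(y_n) = y_n
    if n % 10 == 0:
        print(n, x)                        # iterates decrease toward 0
```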
By adopting the techniques and methodologies of [1,2,3,4,5,6], we derive algorithms and convergence results for variational inequality and optimization problems using the proposed iterative methods.
Let K be a nonempty, closed and convex subset of a Hadamard manifold D and A:K→TD be a single-valued vector field. Németh [18] introduced the variational inequality problem VI(A,K): find x⋆∈K such that
⟨A(x⋆),exp−1x⋆y⟩≥0,∀y∈K. | (4.1) |
It is known to us that x∈K is a solution of VI(A,K) if and only if x satisfies (for more details, see [11])
0∈A(x)+NK(x), | (4.2) |
where NK(x) denotes the normal cone to K at x∈K, defined as
NK(x)={w∈TxD:ℜ(w,exp−1xy)≤0,∀y∈K}. |
Let IK be the indicator function of K, i.e.,
IK(x)=0 if x∈K, and IK(x)=+∞ if x∉K.
Since IK is proper, lower semicontinuous and convex, the subdifferential ∂IK of IK is maximal monotone, and it is given by
∂IK(x)={u∈TxD:ℜ(u,exp−1xy)≤IK(y)−IK(x),∀y∈D}. |
Thus, for every x∈K, we have
∂IK(x)={v∈TxD:ℜ(v,exp−1xy)≤0,∀y∈K}=NK(x). | (4.3) |
Let J∂IKλ be the resolvent of ∂IK, defined as
J∂IKλ(x)={v∈D:x∈expvλ∂IK(v)}=PK(x), ∀x∈D, λ>0. |
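Indeed, the identity J∂IKλ=PK can be verified from Definition 2.4, (4.3) and the characterization (2.9) of the projection (a short verification of our own):

```latex
q = J^{\partial I_K}_{\lambda}(x)
  \iff x \in \exp_{q}\bigl(\lambda\, \partial I_K(q)\bigr)
  \iff \exp_{q}^{-1}x \in \lambda\, N_K(q) = N_K(q)
  \iff \Re\bigl(\exp_{q}^{-1}x,\ \exp_{q}^{-1}y\bigr) \le 0 \quad \forall\, y \in K
  \iff q = P_K(x).
```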
Thus, for A:K→TD and for all x∈K, we have
x∈(A+∂IK)−1(0)⟺−A(x)∈∂IK(x)⟺ℜ(−A(x),exp−1xy)≤0,∀y∈K⟺x∈VI(A,K). | (4.4) |
Now, we can state some results for the common solution of VI(A, K) and Fix(S).
Theorem 4.1. Let D be a Hadamard manifold and K be a nonempty, closed and convex subset of D. Let S:K→K be a nonexpansive mapping, φ:K→K be a contraction mapping and A:K→TD be a continuous vector field. Suppose {αn} and {βn} are sequences in (0,1) satisfying the conditions A1−A3. If Fix(S)∩VI(A,K)≠∅, then the sequences {yn} and {xn} generated by
yn=expxn(1−αn)exp−1xnPK(expxn(−λAxn)),
xn+1=expφ(xn)(1−βn)exp−1φ(xn)S(yn), |
converge to a solution of VI(A,K)∩Fix(S), which is a fixed point of the mapping PVI(A,K)∩Fix(S)φ.
Corollary 4.1. Let D be a Hadamard manifold and K be a nonempty, closed and convex subset of D. Let φ:K→K be a contraction mapping and A:K→TD be a continuous vector field. Suppose {αn} and {βn} are sequences in (0,1) satisfying the conditions A1−A3. If VI(A,K)≠∅, then the sequences {yn} and {xn} generated by
yn=expxn(1−αn)exp−1xnPK(expxn(−λAxn)),
xn+1=expφ(xn)(1−βn)exp−1φ(xn)(yn), |
converge to a solution of VI(A,K), which is a fixed point of the mapping PVI(A,K)φ.
For a proper lower semicontinuous and geodesic convex function h:D→(−∞,+∞], the minimization problem is
minp∈Dh(p). | (4.5) |
We know that, the subdifferential ∂h(p) at p is closed and geodesic convex [1] and is defined as
∂h(p)={u∈TpD:ℜ(u,exp−1pq)≤h(q)−h(p),∀q∈D}. | (4.6) |
Lemma 4.1. Let h:D→(−∞,+∞] be a proper lower semicontinuous and geodesic convex function on a Hadamard manifold D. Then the subdifferential ∂h of h is a maximal monotone vector field.
If the solution set of minimization problem (4.5) is Ω, then it can be easily seen that
p∈Ω⇔0∈∂h(p). | (4.7) |
Now, we can state some results for minimization problem (4.5), using Algorithm 3.1 and Algorithm 3.2.
Theorem 4.2. Let D be a Hadamard manifold and K be a nonempty, closed and convex subset of D. Let h:D→(−∞,+∞] be a proper lower semicontinuous and geodesic convex function, S:K→K be a nonexpansive mapping and φ:K→K be a κ-contraction. Suppose {αn} and {βn} are sequences in (0,1) satisfying the conditions A1−A3 and λ>0. If Fix(S)∩Ω≠∅, then the sequences {yn} and {xn} generated by
yn=expxn(1−αn)exp−1xnJ∂hλ(xn),
xn+1=expφ(xn)(1−βn)exp−1φ(xn)S(yn), |
converge to an element of Ω∩Fix(S), which is a fixed point of the mapping PFix(S)∩Ωφ.
Corollary 4.2. Let D be a Hadamard manifold and K be a nonempty, closed and convex subset of D. Let h:D→(−∞,+∞] be a proper lower semicontinuous and geodesic convex function and φ:K→K be a κ-contraction. Suppose {αn} and {βn} are sequences in (0,1) satisfying the conditions A1−A3 and λ>0. If Ω≠∅, then the sequences {yn} and {xn} generated by
yn=expxn(1−αn)exp−1xnJ∂hλ(xn),
xn+1=expφ(xn)(1−βn)exp−1φ(xn)(yn), |
converge to an element of Ω, which is a fixed point of the mapping PΩφ.
In this paper, we studied splitting-type viscosity methods for the inclusion and fixed point problem of a nonexpansive mapping on Hadamard manifolds. We proved the convergence of the iterative sequences obtained from the proposed methods. Our method is new and can be seen as a refinement of the methods studied in [1]. Some applications of the proposed method are given to variational inequality, optimization and fixed point problems. We expect that the method presented in this paper can be used to study some generalized inclusion and fixed point problems in geodesic spaces.
The second author would like to thank the Deanship of Scientific Research, Prince Sattam bin Abdulaziz University for supporting this work.
The authors declare no conflict of interest in this paper.
[1] S. Al-Homidan, Q. H. Ansari, F. Babu, Halpern and Mann type algorithms for fixed points and inclusion problems on Hadamard manifolds, Numer. Funct. Anal. Optim., 40 (2019), 621-653. doi: 10.1080/01630563.2018.1553887
[2] S. Al-Homidan, Q. H. Ansari, F. Babu, J. C. Yao, Viscosity method with a ϕ-contraction mapping for hierarchical variational inequalities on Hadamard manifolds, Fixed Point Theory, 21 (2020), 561-584. doi: 10.24193/fpt-ro.2020.2.40
[3] Q. H. Ansari, F. Babu, X.-B. Li, Variational inclusion problems on Hadamard manifolds, J. Nonlinear Convex Anal., 19 (2018), 219-237.
[4] Q. H. Ansari, F. Babu, Proximal point algorithm for inclusion problems in Hadamard manifolds with applications, Optim. Lett., (2019), 1-21.
[5] Q. H. Ansari, M. Islam, J. C. Yao, Nonsmooth variational inequalities on Hadamard manifolds, Appl. Anal., 99 (2020), 340-358. doi: 10.1080/00036811.2018.1495329
[6] D. W. Boyd, J. S. Wong, On nonlinear contractions, Proc. Amer. Math. Soc., 20 (1969), 335-341.
[7] M. Dilshad, Solving Yosida inclusion problem in Hadamard manifold, Arab. J. Math., 9 (2020), 357-366. doi: 10.1007/s40065-019-0261-9
[8] J. X. Da Cruz Neto, O. P. Ferreira, L. R. Lucambio Pérez, Monotone point-to-set vector fields, Balkan J. Geom. Appl., 5 (2000), 69-79.
[9] D. Filali, M. Dilshad, M. Akram, F. Babu, I. Ahmad, Viscosity method for hierarchical variational inequalities and variational inclusions on Hadamard manifolds, to appear in J. Inequal. Appl., 2021.
[10] K. Khammahawong, P. Kumam, P. Chaipunya, Splitting algorithms of common solutions between equilibrium and inclusion problems on Hadamard manifolds, arXiv: 1097.00364, 2019.
[11] C. Li, G. López, V. Martín-Márquez, Monotone vector fields and the proximal point algorithm on Hadamard manifolds, J. Lond. Math. Soc., 79 (2009), 663-683. doi: 10.1112/jlms/jdn087
[12] C. Li, G. López, V. Martín-Márquez, J. H. Wang, Resolvents of set-valued monotone vector fields in Hadamard manifolds, Set-Valued Var. Anal., 19 (2011), 361-383. doi: 10.1007/s11228-010-0169-1
[13] C. Li, G. López, V. Martín-Márquez, Iterative algorithms for nonexpansive mappings on Hadamard manifolds, Taiwanese J. Math., 14 (2010), 541-559. doi: 10.11650/twjm/1500405806
[14] W. R. Mann, Mean value methods in iteration, Proc. Amer. Math. Soc., 4 (1953), 506-510. doi: 10.1090/S0002-9939-1953-0054846-3
[15] B. Martinet, Régularisation d'inéquations variationnelles par approximations successives, Rev. Fr. Inform. Oper., 4 (1970), 154-158.
[16] A. Moudafi, Viscosity approximation methods for fixed-points problems, J. Math. Anal. Appl., 241 (2000), 46-55. doi: 10.1006/jmaa.1999.6615
[17] S. Z. Németh, Monotone vector fields, Publ. Math. Debrecen, 54 (1999), 437-449.
[18] S. Z. Németh, Variational inequalities on Hadamard manifolds, Nonlinear Anal., 52 (2003), 1491-1498. doi: 10.1016/S0362-546X(02)00266-3
[19] R. T. Rockafellar, Monotone operators and the proximal point algorithm, SIAM J. Control Optim., 14 (1976), 877-898.
[20] D. R. Sahu, Q. H. Ansari, J. C. Yao, The Prox-Tikhonov forward method and application, Taiwanese J. Math., 19 (2015), 481-503.
[21] T. Sakai, Riemannian Geometry, Translations of Mathematical Monographs, Amer. Math. Soc., Providence, RI, 1996.
[22] S. Takahashi, W. Takahashi, M. Toyoda, Strong convergence theorems for maximal monotone operators with nonlinear mappings in Hilbert spaces, J. Optim. Theory Appl., 147 (2010), 27-41. doi: 10.1007/s10957-010-9713-2
[23] R. Walter, On the metric projections onto convex sets in Riemannian spaces, Arch. Math., 25 (1974), 91-98. doi: 10.1007/BF01238646
[24] N. C. Wong, D. R. Sahu, J. C. Yao, Solving variational inequalities involving nonexpansive type mappings, Nonlinear Anal., 69 (2008), 4732-4753. doi: 10.1016/j.na.2007.11.025