Research article

A novel algorithm with an inertial technique for fixed points of nonexpansive mappings and zeros of accretive operators in Banach spaces

  • Received: 01 December 2023 Revised: 24 January 2024 Accepted: 01 February 2024 Published: 05 February 2024
  • MSC : 46E15, 47H10, 47H09, 54H25

  • The purpose of this paper was to prove that a novel algorithm with an inertial approach, used to generate an iterative sequence, strongly converges to a fixed point of a nonexpansive mapping in a real uniformly convex Banach space with a uniformly Gâteaux differentiable norm. Furthermore, zeros of accretive mappings were obtained. The proposed algorithm was implemented and tested via numerical simulation in MATLAB. The simulation results show that the algorithm converges to the optimal configurations and demonstrate its effectiveness.

    Citation: Kaiwich Baewnoi, Damrongsak Yambangwai, Tanakit Thianwan. A novel algorithm with an inertial technique for fixed points of nonexpansive mappings and zeros of accretive operators in Banach spaces[J]. AIMS Mathematics, 2024, 9(3): 6424-6444. doi: 10.3934/math.2024313




    Let $C$ be a nonempty, closed, and convex subset of a real Banach space $B$ with dual $B^*$. Let $J : B \to 2^{B^*}$ denote the normalized duality mapping given by

    $$J(v) = \{\varkappa \in B^* : \langle v, \varkappa\rangle = \|v\|^2 = \|\varkappa\|^2\},$$

    where $\langle\cdot,\cdot\rangle$ denotes the generalized duality pairing (see, for example, [1]). $J$ is single-valued if $B^*$ is strictly convex. In what follows, we denote the single-valued normalized duality mapping by $j$. A Banach space $B$ is said to be uniformly convex [2,3] if, for any sequences $\{v_m\}$ and $\{\hbar_m\}$ in $B$ with $\|v_m\| = \|\hbar_m\| = 1$, $\lim_{m\to\infty}\|v_m + \hbar_m\| = 2$ implies $\lim_{m\to\infty}\|v_m - \hbar_m\| = 0$. The modulus of smoothness $\rho_B(\cdot)$ of $B$ is the function $\rho_B : [0, +\infty) \to [0, +\infty)$ defined by

    $$\rho_B(\tau) = \sup\Big\{\tfrac{1}{2}\big(\|v + \hbar\| + \|v - \hbar\|\big) - 1 : v, \hbar \in B,\ \|v\| = 1,\ \|\hbar\| \le \tau\Big\}.$$

    It is well known that $B$ is uniformly smooth if, and only if, $\rho_B(\tau)/\tau \to 0$ as $\tau \to 0$. Let $q > 1$ be a real number. A Banach space $B$ is said to be $q$-uniformly smooth if there exists a positive constant $K_q$ such that $\rho_B(\tau) \le K_q\tau^q$ for any $\tau > 0$. It is obvious that a $q$-uniformly smooth Banach space must be uniformly smooth. A mapping $T : C \to C$ is said to be $L$-Lipschitzian if there exists $L \ge 0$ such that

    $$\|Tv - T\hbar\| \le L\|v - \hbar\|, \quad \forall v, \hbar \in C.$$

    $T$ is said to be a contraction if $L \in [0,1)$, and $T$ is said to be nonexpansive if $L = 1$ (see [1,2,3,4,5,6]).

    In this paper, we are interested in formulating a numerical method for solving the fixed point problem

    $$\text{find } v \in C \text{ such that } v = T(v), \quad (1.1)$$

    where $T : C \to C$ is a nonexpansive mapping. We assume that $F(T) \neq \emptyset$, where $F(T)$ denotes the set of all fixed points of $T$, that is, $F(T) := \{v \in C : v = T(v)\}$.

    The most natural approach when looking for a fixed point of a contraction mapping $T : M \to M$ defined on a complete metric space $(M, d_M)$, where $d_M$ is the metric, is the following process, also called the Banach-Picard iteration:

    $$v_{m+1} = T(v_m), \quad m \ge 0, \quad (1.2)$$

    where v0 is a starting point.

    According to the Banach-Picard fixed point theorem, if $T$ is a contraction, namely, $T$ is Lipschitz continuous with modulus $\delta \in [0,1)$, then the sequence $\{v_m\}_{m\ge 0}$ generated by (1.2) converges strongly to the unique fixed point of $T$ with a linear convergence rate.

    If $T$ is merely nonexpansive, then this statement is no longer true. To illustrate this, it is enough to choose $T = -\mathrm{Id}$, where $\mathrm{Id}$ denotes the identity mapping, and $v_0 \neq 0$, in which case the Banach-Picard iteration oscillates between $v_0$ and $-v_0$ and fails to approach the fixed point of $T$.
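The contrast between the two cases can be checked numerically. The following Python sketch (the paper's own experiments use MATLAB) runs the Banach-Picard iteration (1.2) on an illustrative contraction, $T(v) = \cos v$ on the real line, whose iterates converge to its unique fixed point:

```python
# Banach-Picard iteration v_{m+1} = T(v_m) for a contraction on R.
# Illustrative mapping (not from the paper): T(v) = cos(v), which is a
# contraction near its unique fixed point (the Dottie number).
import math

def picard(T, v0, tol=1e-10, max_iter=1000):
    """Iterate v <- T(v) until successive iterates are within tol."""
    v = v0
    for _ in range(max_iter):
        v_next = T(v)
        if abs(v_next - v) < tol:
            return v_next
        v = v_next
    return v

fp = picard(math.cos, 1.0)
# fp satisfies cos(fp) = fp, i.e., fp ~ 0.739085
```

Replacing `math.cos` by `lambda v: -v` (the nonexpansive $-\mathrm{Id}$) with $v_0 \neq 0$ makes the same loop oscillate forever, as described above.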

    In order to overcome the restrictive contraction assumption on $T$, Krasnoselskii proposed in [7] to apply the Banach-Picard iteration (1.2) to the operator $\frac{1}{2}\mathrm{Id} + \frac{1}{2}T$ instead of $T$. The Krasnoselskii-Mann iteration is written as follows:

    $$v_{m+1} = (1-\eta_m)v_m + \eta_m T(v_m), \quad m \ge 0, \quad (1.3)$$

    where $\{\eta_m\}$ is a sequence in $(0,1)$. This iteration is often said to be a segmenting Mann iteration (see [8,9,10]) or to be of Krasnoselskii type (see, e.g., [11,12,13,14,15,16,17]). It was shown that the sequence $\{v_m\}$ generated by (1.3) converges weakly to a fixed point of $T$ under the condition $F(T) \neq \emptyset$ and mild assumptions imposed on $\{\eta_m\}$.
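A minimal sketch of (1.3), under illustrative choices not taken from the paper: the mapping is a plane rotation by 90°, which is nonexpansive with unique fixed point $0$; plain Picard iteration circles forever, while the averaged iteration converges to $0$.

```python
# Krasnoselskii-Mann iteration v_{m+1} = (1 - eta_m) v_m + eta_m T(v_m).
# Hypothetical test mapping: rotation of the plane by 90 degrees.

def rotate90(v):
    x, y = v
    return (-y, x)

def krasnoselskii_mann(T, v0, eta=0.5, iters=200):
    v = v0
    for _ in range(iters):
        tv = T(v)
        # convex combination of v_m and T(v_m)
        v = tuple((1 - eta) * vi + eta * ti for vi, ti in zip(v, tv))
    return v

v = krasnoselskii_mann(rotate90, (1.0, 0.0))
# v is close to the unique fixed point (0, 0)
```

With $\eta_m \equiv 1/2$ the averaged operator $\frac{1}{2}\mathrm{Id} + \frac{1}{2}T$ has spectral radius $\sqrt{2}/2 < 1$ here, which is why the iterates shrink to the fixed point.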

    It turned out that a fundamental step in proving the convergence of the iterates of (1.3) is to show that $\|v_m - T(v_m)\| \to 0$ as $m \to +\infty$, as was done by Browder and Petryshyn in [18] in the constant case $\eta_m \equiv \eta \in (0,1)$. The weak convergence of the iterates was then studied in various settings in [9,19,20,21,22].

    It should be noted that, even in real Hilbert spaces, all previous modifications to the Krasnoselskii-Mann method for nonexpansive mappings only provide weak convergence; for further information, see [23].

    Bot et al. [24] recently presented a new form of Mann's method to address the previously mentioned issues: for arbitrary $v_0$ in a real Hilbert space $H$ and $m \ge 0$,

    $$v_{m+1} = \eta_m v_m + \zeta_m\big(T(\eta_m v_m) - \eta_m v_m\big). \quad (1.4)$$

    They proved that the iterative sequence $\{v_m\}$ produced by (1.4) is strongly convergent under appropriate assumptions on $\{\eta_m\}$ and $\{\zeta_m\}$. The sequence $\{\zeta_m\}$, also known as the Tikhonov regularization sequence, plays a significant role in accelerating (1.4). Dong et al. [25], Fan et al. [26], and Polyak [27] have provided several theoretical and numerical discussions of strong convergence utilizing the Tikhonov regularization technique.
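Scheme (1.4) can be sketched as follows; the parameter choices $\eta_m = 1 - \frac{1}{m+2}$ and $\zeta_m = \frac{1}{2}$ are illustrative assumptions (not the authors'), applied to a plane rotation by 90°, a nonexpansive map whose unique fixed point is $0$:

```python
# Tikhonov-regularized Mann scheme (1.4):
#   v_{m+1} = eta_m v_m + zeta_m (T(eta_m v_m) - eta_m v_m).
# Assumed parameters: eta_m = 1 - 1/(m+2) (so eta_m -> 1), zeta_m = 1/2.

def rotate90(v):
    x, y = v
    return (-y, x)

def regularized_mann(T, v0, iters=300):
    v = v0
    for m in range(iters):
        eta = 1.0 - 1.0 / (m + 2)            # vanishing Tikhonov shrinkage
        zeta = 0.5
        w = tuple(eta * vi for vi in v)      # regularized point eta_m * v_m
        tw = T(w)
        v = tuple(wi + zeta * (ti - wi) for wi, ti in zip(w, tw))
    return v

v = regularized_mann(rotate90, (1.0, 0.0))
# v approaches the minimal-norm fixed point (0, 0)
```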

    Recent years have seen the development and introduction of additional algorithms, such as the inertial algorithm initially presented by Polyak [27]. He minimized a smooth convex function by use of inertial extrapolation. It is important to note that these simple adjustments improved the efficiency and efficacy of these algorithms. Researchers have been able to study several vital applications after adopting this concept. For example, see [25,26,28,29,30,31,32,33,34,35,36].

    An operator $\Upsilon : D(\Upsilon) \subset B \to R(\Upsilon) \subset B$ is called accretive (see [1]) if, for all $t > 0$ and all $v, \hbar \in D(\Upsilon)$, where $D(\Upsilon)$ denotes the domain of $\Upsilon$, we have

    $$\|v - \hbar\| \le \|v - \hbar + t(\Upsilon v - \Upsilon\hbar)\|.$$

    Furthermore, $\Upsilon$ is accretive if, and only if, for each $v, \hbar \in D(\Upsilon)$, there exists $j(v - \hbar) \in J(v - \hbar)$ such that

    $$\langle \Upsilon v - \Upsilon\hbar, j(v - \hbar)\rangle \ge 0.$$

    An accretive operator $\Upsilon$ is said to be m-accretive (see, for example, [1]) if $R(I + e\Upsilon) = B$ for all $e > 0$, where $R(I + e\Upsilon)$ is the range of $I + e\Upsilon$. $\Upsilon$ is said to satisfy the range condition if $\overline{D(\Upsilon)} \subset R(I + e\Upsilon)$ for all $e > 0$, where $\overline{D(\Upsilon)}$ is the closure of the domain of $\Upsilon$. Moreover, if $\Upsilon$ is accretive [37], then the mapping $J_\Upsilon : R(I + \Upsilon) \to D(\Upsilon)$ defined by $J_\Upsilon = (I + \Upsilon)^{-1}$ is single-valued and nonexpansive, and $F(J_\Upsilon) = N(\Upsilon)$, where $N(\Upsilon) = \{v \in D(\Upsilon) : 0 \in \Upsilon v\}$ and $F(J_\Upsilon) = \{v \in B : J_\Upsilon v = v\}$.

    Browder [38] and Kato [39] independently introduced the accretive operators. Due to their close relation to the existence theory for nonlinear equations of evolving in Banach spaces, the study of such mappings is very fascinating.

    In suitable Banach spaces, accretive operators play a crucial role in many physically relevant situations that may be characterized as initial value problems of the form

    $$\frac{d\mu}{d\tau} + \Upsilon\mu = 0, \quad \mu(0) = \mu_0. \quad (1.5)$$

    Many evolution equations are embedded in this model, including the Schrödinger, heat, and wave equations [40]. According to Browder [38], (1.5) has a solution if $\Upsilon$ is locally Lipschitzian and accretive on $B$. He also proved that if $\Upsilon$ is m-accretive, then there is a solution to the equation

    $$\Upsilon\mu = 0. \quad (1.6)$$

    Ray [40] used the fixed point theory of Caristi [41] to improve Browder's conclusions elegantly and concisely. Robert and Martin [42] showed that problem (1.5) is solvable in the space $B$ if $\Upsilon$ is continuous and accretive. Utilizing this result, Martin [43] proved that if $\Upsilon$ is continuous and accretive, then $\Upsilon$ is m-accretive.

    See Browder [44] and Deimling [45] for further information on the theorems for zeros of accretive operators.

    One should note that, if $\mu$ is independent of $\tau$ in (1.5), then $\frac{d\mu}{d\tau} = 0$. Because of this, (1.5) reduces to (1.6), whose solution describes the stable or equilibrium state of the system. This in turn is of great interest in a variety of applications, including, but not limited to, economics, physics, and ecology. Significant efforts have been devoted to solving (1.6) when $\Upsilon$ is accretive. Since $\Upsilon$ is, in general, nonlinear and there is no known procedure for finding a closed-form solution of this equation, researchers have investigated fixed point theory and approximate iterative methods for zeros of m-accretive mappings. As a result, research in this field has flourished up to the present. Some related work can be found in [46,47] and the references therein.

    Motivated by the research above, we construct an iterative sequence via a novel algorithm with an inertial technique and establish its strong convergence in a real uniformly convex Banach space with a uniformly Gâteaux differentiable norm. In addition, we find zeros of accretive mappings. Moreover, a numerical example is presented to illustrate the behavior of our algorithm.

    In this section, we summarize notations and lemmas that play a significant role in the convergence analysis of our algorithm.

    A real normed linear space $B$ is said to have a Gâteaux differentiable norm if the limit

    $$\lim_{\tau\to 0}\frac{\|v + \tau\hbar\| - \|v\|}{\tau}$$

    exists for all $v, \hbar \in S$, where $S$ denotes the unit sphere of $B$ (i.e., $S = \{v \in B : \|v\| = 1\}$); in this case, $B$ is called smooth. $B$ is said to be uniformly smooth if the limit is attained uniformly for $v, \hbar \in S$, and $B$ is said to have a uniformly Gâteaux differentiable norm if, for each $\hbar \in S$, the limit is attained uniformly for $v \in S$.

    If B is smooth, it is clear that every duality mapping on B is a single-valued mapping. If B has a uniformly Gâteaux differentiable norm, then the duality mapping is norm-to-weak* uniformly continuous on bounded subsets of B.

    Let $\Delta$ be a nonempty, closed, convex, and bounded subset of a real Banach space $B$, and let the diameter of $\Delta$ be defined by $d(\Delta) = \sup\{\|v - \hbar\| : v, \hbar \in \Delta\}$. The Chebyshev radius of $\Delta$ is given by $w(\Delta) = \inf\{w(v, \Delta) : v \in \Delta\}$, where, for $v \in \Delta$,

    $$w(v, \Delta) = \sup\{\|v - \hbar\| : \hbar \in \Delta\}.$$

    Bynum [48] proposed the normal structure coefficient $N(B)$ of $B$ as follows:

    $$N(B) = \inf\left\{\frac{d(\Delta)}{w(\Delta)} : d(\Delta) > 0\right\}.$$

    If $N(B) > 1$, then $B$ is said to have uniform normal structure.

    Every space with a uniform normal structure is reflexive, which means that all uniformly convex and uniformly smooth Banach spaces have a uniform normal structure. See [1,49] for more details.

    In the sequel, the following lemmas are needed to prove our main results.

    Lemma 1. [50] Suppose that $B$ is a real uniformly convex Banach space. For arbitrary $u > 0$, let $B_u(0) = \{v \in B : \|v\| \le u\}$. Then there is a continuous strictly increasing convex function $r : [0, 2u] \to \mathbb{R}$ with $r(0) = 0$ such that, for all $v, \hbar \in B_u(0)$ and $\alpha \in [0,1]$,

    $$\|\alpha v + (1-\alpha)\hbar\|^2 \le \alpha\|v\|^2 + (1-\alpha)\|\hbar\|^2 - \alpha(1-\alpha)\,r(\|v - \hbar\|).$$

    Lemma 2. [45] Suppose that $B$ is a real normed linear space. Then, for any $v, \hbar \in B$ and any $j(v+\hbar) \in J(v+\hbar)$, the following inequality holds:

    $$\|v + \hbar\|^2 \le \|v\|^2 + 2\langle \hbar, j(v+\hbar)\rangle.$$

    Lemma 3. [5] Let $B$ be a uniformly convex Banach space and $\Delta$ a nonempty, closed, and convex subset of $B$. Suppose that $T : \Delta \to \Delta$ is a nonexpansive mapping with fixed points. Let $\{v_m\}$ be a sequence in $\Delta$ such that $v_m \rightharpoonup v$ and $v_m - Tv_m \to \hbar$; then $v - Tv = \hbar$.

    Lemma 4. [49] Let $B$ be a Banach space with uniform normal structure and $\Delta$ a nonempty, bounded subset of $B$. Suppose that $T : \Delta \to \Delta$ is a uniformly $L$-Lipschitzian mapping with $L < N(B)^{1/2}$. If there is a nonempty, bounded, closed, convex subset $R$ of $\Delta$ with property (D), that is,

    $$v \in R \implies \varpi_w(v) \subset R,$$

    then $T$ has a fixed point in $\Delta$.

    Note that $\varpi_w(v) = \{\hbar \in B : \hbar = \text{weak-}\lim_j T^{n_j}v \text{ for some } n_j \to \infty\}$ is the weak $\varpi$-limit set of $T$ at $v$.

    Lemma 5. [51] Suppose that $(v_0, v_1, v_2, \ldots) \in l^\infty$ is such that $\delta_m(v_m) \le 0$ for all Banach limits $\delta$. If $\limsup_{m\to\infty}(v_{m+1} - v_m) \le 0$, then $\limsup_{m\to\infty} v_m \le 0$.

    Lemma 6. [52] Let $\{e_m\}$ be a sequence of nonnegative real numbers such that

    $$e_{m+1} \le (1-c_m)e_m + c_m\sigma_m + \pi_m, \quad m \ge 1.$$

    If

    (i) $\{c_m\} \subset [0,1]$, $\sum_{m=1}^{\infty} c_m = \infty$, and $\limsup_{m\to\infty}\sigma_m \le 0$;

    (ii) $\pi_m \ge 0$ for each $m \ge 0$ and $\sum_{m=1}^{\infty}\pi_m < \infty$;

    then $\lim_{m\to\infty} e_m = 0$.

    We now prove the following strong convergence results.

    Theorem 1. Let $C$ be a nonempty, closed, convex subset of a real uniformly convex Banach space $B$ which has a uniformly Gâteaux differentiable norm, and let $T : C \to C$ be a nonexpansive mapping such that $F(T) \neq \emptyset$. Assume that the following conditions hold:

    (i) $\lim_{m\to\infty}\xi_m = 0$, $\lim_{m\to\infty}\sigma_m = 0$, $\sum_{m=1}^{\infty}\sigma_m = \infty$, $\xi_m, \sigma_m \in (0,1)$, and $\rho_m \in [l_1, l_2] \subset (0,1)$;

    (ii) $\pi_m \ge 0$ for all $m \in \mathbb{N}$ and $\sum_{m=1}^{\infty}\pi_m < \infty$.

    For arbitrary $v_0, v_1 \in C$, let $\{v_m\}$ be the sequence generated by

    $$\begin{cases} \hbar_m = v_m + \pi_m(v_m - v_{m-1}),\\ \psi_m = (1-\xi_m)(1-\sigma_m)\hbar_m,\\ v_{m+1} = (1-\rho_m)\psi_m + \rho_m T\psi_m, \quad m \ge 1. \end{cases} \quad (3.1)$$

    Then $\{v_m\}$ converges strongly to a point in $F(T)$.

    Proof. Let $d \in F(T)$. Set $w_m = (1-\sigma_m)\hbar_m$, so that $\psi_m = (1-\xi_m)w_m$. Using (3.1), we have

    $$\begin{aligned}
    \|v_{m+1} - d\| &= \|(1-\rho_m)(\psi_m - d) + \rho_m(T\psi_m - d)\|\\
    &\le (1-\rho_m)\|\psi_m - d\| + \rho_m\|T\psi_m - d\|\\
    &\le (1-\rho_m)\|\psi_m - d\| + \rho_m\|\psi_m - d\| = \|\psi_m - d\|\\
    &= \|(1-\xi_m)(w_m - d) - \xi_m d\|\\
    &\le (1-\xi_m)\|w_m - d\| + \xi_m\|d\|\\
    &= (1-\xi_m)\|(1-\sigma_m)(\hbar_m - d) - \sigma_m d\| + \xi_m\|d\|\\
    &\le (1-\xi_m)(1-\sigma_m)\|\hbar_m - d\| + (1-\xi_m)\sigma_m\|d\| + \xi_m\|d\|\\
    &\le (1-\sigma_m)\|\hbar_m - d\| + \|d\|\\
    &\le (1-\sigma_m)\|v_m - d\| + (1-\sigma_m)\pi_m\|v_m - v_{m-1}\| + \|d\|\\
    &\le \max\{\|v_m - d\|, \|v_m - v_{m-1}\|, \|d\|\}.
    \end{aligned}$$

    By mathematical induction, one can obtain

    $$\|v_m - d\| \le \max\{\|v_1 - d\|, \|v_1 - v_0\|, \|d\|\}.$$

    This shows that $\{v_m\}$ is bounded, and hence $\{\hbar_m\}$, $\{w_m\}$, and $\{\psi_m\}$ are also bounded. Together with condition (ii), this implies $\sum_{m=1}^{\infty}\pi_m\|v_m - v_{m-1}\| < \infty$. Using Lemmas 1 and 2 and (3.1), we have

    $$\begin{aligned}
    \|v_{m+1}-d\|^2 &= \|(1-\rho_m)(\psi_m-d)+\rho_m(T\psi_m-d)\|^2\\
    &\le (1-\rho_m)\|\psi_m-d\|^2 + \rho_m\|T\psi_m-d\|^2 - \rho_m(1-\rho_m)\,r(\|T\psi_m-\psi_m\|)\\
    &\le \|\psi_m-d\|^2 - \rho_m(1-\rho_m)\,r(\|T\psi_m-\psi_m\|)\\
    &\le \|w_m-d\|^2 + 2\xi_m\langle -d, j(\psi_m-d)\rangle - \rho_m(1-\rho_m)\,r(\|T\psi_m-\psi_m\|)\\
    &\le \|\hbar_m-d\|^2 + 2\sigma_m\langle -d, j(w_m-d)\rangle + 2\xi_m\langle -d, j(\psi_m-d)\rangle - \rho_m(1-\rho_m)\,r(\|T\psi_m-\psi_m\|)\\
    &\le \|v_m-d\|^2 + 2\pi_m\langle v_m-v_{m-1}, j(\hbar_m-d)\rangle + 2\sigma_m\langle -d, j(w_m-d)\rangle\\
    &\quad + 2\xi_m\langle -d, j(\psi_m-d)\rangle - \rho_m(1-\rho_m)\,r(\|T\psi_m-\psi_m\|).
    \end{aligned}$$

    On the other hand, one can write

    $$\rho_m(1-\rho_m)\,r(\|T\psi_m-\psi_m\|) \le \|v_m-d\|^2 - \|v_{m+1}-d\|^2 + 2\pi_m\langle v_m-v_{m-1}, j(\hbar_m-d)\rangle + 2\sigma_m\langle -d, j(w_m-d)\rangle + 2\xi_m\langle -d, j(\psi_m-d)\rangle. \quad (3.2)$$

    The boundedness of $\{v_m\}$, $\{\hbar_m\}$, $\{w_m\}$, and $\{\psi_m\}$ yields constants $\Lambda_1, \Lambda_2, \Lambda_3 > 0$ such that, for all $m \ge 1$,

    $$\langle v_m-v_{m-1}, j(\hbar_m-d)\rangle \le \Lambda_1, \quad \langle -d, j(w_m-d)\rangle \le \Lambda_2, \quad \langle -d, j(\psi_m-d)\rangle \le \Lambda_3. \quad (3.3)$$

    Applying (3.3) in (3.2), we have

    $$\rho_m(1-\rho_m)\,r(\|T\psi_m-\psi_m\|) \le \|v_m-d\|^2 - \|v_{m+1}-d\|^2 + 2\pi_m\Lambda_1 + 2\sigma_m\Lambda_2 + 2\xi_m\Lambda_3. \quad (3.4)$$

    To establish the strong convergence of $\{v_m\}$, we consider the following two cases:

    Case (a). If the sequence $\{\|v_m - d\|\}$ is monotonically decreasing, then it is convergent. We see that

    $$\|v_{m+1}-d\|^2 - \|v_m-d\|^2 \to 0$$

    as $m \to \infty$. By (3.4), we have

    $$\rho_m(1-\rho_m)\,r(\|T\psi_m-\psi_m\|) \to 0.$$

    Using the properties of $r$ and $\rho_m \in [l_1, l_2] \subset (0,1)$, we have

    $$\|T\psi_m - \psi_m\| \to 0. \quad (3.5)$$

    Combining (3.1) and (3.5), we find that

    $$\|v_{m+1} - \psi_m\| = \rho_m\|T\psi_m - \psi_m\| \to 0. \quad (3.6)$$

    Using (3.1) and condition (i), we have

    $$\|\psi_m - w_m\| = \xi_m\|w_m\| \to 0. \quad (3.7)$$

    From (3.1) and condition (i), we get

    $$\|w_m - \hbar_m\| = \sigma_m\|\hbar_m\| \to 0. \quad (3.8)$$

    It follows from (3.7) and (3.8) that

    $$\|\psi_m - \hbar_m\| \le \|\psi_m - w_m\| + \|w_m - \hbar_m\| \to 0. \quad (3.9)$$

    From $\sum_{m=1}^{\infty}\pi_m\|v_m - v_{m-1}\| < \infty$, we get

    $$\|\hbar_m - v_m\| = \pi_m\|v_m - v_{m-1}\| \to 0. \quad (3.10)$$

    Based on (3.9) and (3.10), we can write

    $$\|\psi_m - v_m\| \le \|\psi_m - \hbar_m\| + \|\hbar_m - v_m\| \to 0. \quad (3.11)$$

    Using (3.6) and (3.11), we have

    $$\|v_{m+1} - v_m\| \le \|v_{m+1} - \psi_m\| + \|\psi_m - v_m\| \to 0 \quad \text{as } m \to \infty.$$

    Using (3.5), (3.9), and (3.10), we have

    $$\|Tv_m - v_m\| \le \|Tv_m - T\psi_m\| + \|T\psi_m - \psi_m\| + \|\psi_m - v_m\| \le 2\|v_m - \psi_m\| + \|T\psi_m - \psi_m\| \le 2(\|\psi_m - \hbar_m\| + \|\hbar_m - v_m\|) + \|T\psi_m - \psi_m\| \to 0.$$

    Since $\{v_m\}$ is bounded, there exists a subsequence $\{v_{m_b}\}$ of $\{v_m\}$ that converges weakly to some $d \in B$. In addition, using Lemma 3, we have $d \in F(T)$. Now, we prove that

    $$\limsup_{m\to\infty}\langle -d, j(w_m - d)\rangle \le 0.$$

    Suppose that $\chi : B \to \mathbb{R}$ is given by

    $$\chi(v) = \delta_m\|w_m - v\|^2, \quad v \in B,$$

    where $\delta$ denotes a Banach limit. Then $\chi(v) \to \infty$ as $\|v\| \to \infty$, and $\chi$ is convex and continuous. Since $B$ is reflexive, there exists $\hbar \in B$ such that $\chi(\hbar) = \min_{a\in B}\chi(a)$. Hence, the set $\hat{R} \neq \emptyset$, where

    $$\hat{R} = \{v \in B : \chi(v) = \min_{a\in B}\chi(a)\}.$$

    It follows from $\lim_{m\to\infty}\|T\psi_m - \psi_m\| = 0$ and $\lim_{m\to\infty}\|\psi_m - w_m\| = 0$ that

    $$\|Tw_m - w_m\| \le \|Tw_m - T\psi_m\| + \|T\psi_m - \psi_m\| + \|\psi_m - w_m\| \le \|T\psi_m - \psi_m\| + 2\|\psi_m - w_m\| \to 0 \quad (\text{as } m \to \infty).$$

    Since $\lim_{m\to\infty}\|Tw_m - w_m\| = 0$, it follows by induction that $\lim_{m\to\infty}\|T^n w_m - w_m\| = 0$ for all $n \ge 1$. Thus, using Lemma 4, if $v \in \hat{R}$ and $\hbar = \varpi\text{-}\lim_j T^{n_j}v$, then from the weak lower semicontinuity of $\chi$ and $\lim_{m\to\infty}\|T^n w_m - w_m\| = 0$, we get

    $$\begin{aligned}
    \chi(\hbar) &\le \liminf_j \chi(T^{n_j}v) \le \limsup_n \chi(T^n v) = \limsup_n\big(\delta_m\|w_m - T^n v\|^2\big)\\
    &= \limsup_n\big(\delta_m\|w_m - T^n w_m + T^n w_m - T^n v\|^2\big)\\
    &\le \limsup_n\big(\delta_m\|T^n w_m - T^n v\|^2\big) \le \limsup_n\big(\delta_m\|w_m - v\|^2\big)\\
    &= \chi(v) = \inf_{a\in B}\chi(a).
    \end{aligned}$$

    Hence, $\hbar \in \hat{R}$; that is, $\hat{R}$ satisfies property (D). It follows from Lemma 4 that $T$ has a fixed point in $\hat{R}$, so $\hat{R} \cap F(T) \neq \emptyset$. Without loss of generality, assume that $d \in \hat{R} \cap F(T)$. Let $\tau \in (0,1)$; then, by the minimality of $\chi$ at $d$, it is easy to see that $\chi(d) \le \chi(d - \tau d)$. With the help of Lemma 2, we have

    $$\|w_m - d + \tau d\|^2 \le \|w_m - d\|^2 + 2\tau\langle d, j(w_m - d + \tau d)\rangle.$$

    By the properties of $\chi$, taking the Banach limit on both sides, we can write

    $$\chi(d - \tau d) \le \chi(d) + 2\tau\,\delta_m\langle d, j(w_m - d + \tau d)\rangle.$$

    By rearranging the above inequality and using $\chi(d) \le \chi(d - \tau d)$, we have

    $$2\tau\,\delta_m\langle d, j(w_m - d + \tau d)\rangle \ge \chi(d - \tau d) - \chi(d) \ge 0.$$

    This leads to

    $$\delta_m\langle -d, j(w_m - d + \tau d)\rangle \le 0.$$

    In addition,

    $$\begin{aligned}
    \delta_m\langle -d, j(w_m - d)\rangle &= \delta_m\langle -d, j(w_m - d) - j(w_m - d + \tau d)\rangle + \delta_m\langle -d, j(w_m - d + \tau d)\rangle\\
    &\le \delta_m\langle -d, j(w_m - d) - j(w_m - d + \tau d)\rangle. \quad (3.12)
    \end{aligned}$$

    Since the normalized duality mapping is norm-to-weak* uniformly continuous on bounded subsets of $B$, we have, as $\tau \to 0$ and uniformly in $m$,

    $$\langle -d, j(w_m - d)\rangle - \langle -d, j(w_m - d + \tau d)\rangle \to 0.$$

    Thus, for each $\epsilon > 0$, there is $\varsigma_\epsilon > 0$ such that, for all $\tau \in (0, \varsigma_\epsilon)$ and all $m$,

    $$\langle -d, j(w_m - d)\rangle - \langle -d, j(w_m - d + \tau d)\rangle < \epsilon.$$

    Thus,

    $$\delta_m\langle -d, j(w_m - d)\rangle \le \delta_m\langle -d, j(w_m - d + \tau d)\rangle + \epsilon \le \epsilon.$$

    Since $\epsilon$ is arbitrary, using (3.12), we obtain

    $$\delta_m\langle -d, j(w_m - d)\rangle \le 0.$$

    By the triangle inequality, we have

    $$\|w_{m+1} - w_m\| \le \|w_{m+1} - \hbar_{m+1}\| + \|\hbar_{m+1} - v_{m+1}\| + \|v_{m+1} - \psi_m\| + \|\psi_m - w_m\|.$$

    Using (3.6)–(3.8) and (3.10), we have

    $$\lim_{m\to\infty}\|w_{m+1} - w_m\| = 0.$$

    Again, since the normalized duality mapping is norm-to-weak* uniformly continuous on bounded subsets of $B$, we have

    $$\lim_{m\to\infty}\big(\langle -d, j(w_m - d)\rangle - \langle -d, j(w_{m+1} - d)\rangle\big) = 0.$$

    Using Lemma 5, we have

    $$\limsup_{m\to\infty}\langle -d, j(w_m - d)\rangle \le 0.$$

    From (3.1), we obtain

    $$\psi_m = (1-\xi_m)w_m = (1-\xi_m)(1-\sigma_m)\hbar_m.$$

    Thus,

    $$\|\psi_m - d\|^2 \le \|w_m - d\|^2 = \|(1-\sigma_m)(\hbar_m - d) - \sigma_m d\|^2. \quad (3.13)$$

    Using (3.1), (3.13), Lemma 2, and $\sum_{m=1}^{\infty}\pi_m\|v_m - v_{m-1}\| < \infty$, we have

    $$\begin{aligned}
    \|v_{m+1}-d\|^2 &= \|(1-\rho_m)(\psi_m-d)+\rho_m(T\psi_m-d)\|^2\\
    &\le (1-\rho_m)\|\psi_m-d\|^2 + \rho_m\|T\psi_m-d\|^2\\
    &\le \|\psi_m-d\|^2 \le \|(1-\sigma_m)(\hbar_m-d)-\sigma_m d\|^2\\
    &\le (1-\sigma_m)^2\|\hbar_m-d\|^2 + 2\sigma_m\langle -d, j(w_m-d)\rangle\\
    &\le (1-\sigma_m)\|(v_m-d)+\pi_m(v_m-v_{m-1})\|^2 + 2\sigma_m\langle -d, j(w_m-d)\rangle\\
    &\le (1-\sigma_m)\|v_m-d\|^2 + 2\pi_m\langle v_m-v_{m-1}, j(\hbar_m-d)\rangle + 2\sigma_m\langle -d, j(w_m-d)\rangle\\
    &\le (1-\sigma_m)\|v_m-d\|^2 + 2\sigma_m\langle -d, j(w_m-d)\rangle + 2\pi_m\Lambda_1. \quad (3.14)
    \end{aligned}$$

    Applying Lemma 6 with $e_m = \|v_m - d\|^2$, $c_m = \sigma_m$, the $\sigma_m$-term equal to $2\langle -d, j(w_m - d)\rangle$ (whose limit superior is nonpositive by the above), and the summable $\pi_m$-term equal to $2\pi_m\Lambda_1$, we conclude that $\{v_m\}$ converges to $d$.

    Case (b). Suppose that the sequence $\{\|v_m - d\|\}$ is not monotonically decreasing. Let $\Xi_m = \|v_m - d\|^2$, and let $\Pi : \mathbb{N} \to \mathbb{N}$ be defined by

    $$\Pi(m) = \max\{\ell \in \mathbb{N} : \ell \le m,\ \Xi_\ell \le \Xi_{\ell+1}\}.$$

    Obviously, $\Pi$ is a nondecreasing sequence such that $\lim_{m\to\infty}\Pi(m) = \infty$ and $\Xi_{\Pi(m)} \le \Xi_{\Pi(m)+1}$ for $m \ge m_0$ (for some $m_0$ large enough). Using (3.4), we have

    $$\begin{aligned}
    \rho_{\Pi(m)}(1-\rho_{\Pi(m)})\,r(\|T\psi_{\Pi(m)} - \psi_{\Pi(m)}\|) &\le \|v_{\Pi(m)}-d\|^2 - \|v_{\Pi(m)+1}-d\|^2 + 2\pi_{\Pi(m)}\Lambda_1 + 2\sigma_{\Pi(m)}\Lambda_2 + 2\xi_{\Pi(m)}\Lambda_3\\
    &= \Xi_{\Pi(m)} - \Xi_{\Pi(m)+1} + 2\pi_{\Pi(m)}\Lambda_1 + 2\sigma_{\Pi(m)}\Lambda_2 + 2\xi_{\Pi(m)}\Lambda_3\\
    &\le 2\pi_{\Pi(m)}\Lambda_1 + 2\sigma_{\Pi(m)}\Lambda_2 + 2\xi_{\Pi(m)}\Lambda_3 \to 0 \quad \text{as } m \to \infty.
    \end{aligned}$$

    In addition, we get

    $$\|T\psi_{\Pi(m)} - \psi_{\Pi(m)}\| \to 0 \quad \text{as } m \to \infty.$$

    Proceeding as in Case (a), we can show that $\|Tv_{\Pi(m)} - v_{\Pi(m)}\| \to 0$ as $m \to \infty$ and $\limsup_{m\to\infty}\langle -d, j(w_{\Pi(m)} - d)\rangle \le 0$. For all $m \ge m_0$, we obtain by (3.14) that

    $$0 \le \|v_{\Pi(m)+1}-d\|^2 - \|v_{\Pi(m)}-d\|^2 \le \sigma_{\Pi(m)}\big[2\langle -d, j(w_{\Pi(m)}-d)\rangle - \|v_{\Pi(m)}-d\|^2\big].$$

    This implies that

    $$\|v_{\Pi(m)}-d\|^2 \le 2\langle -d, j(w_{\Pi(m)}-d)\rangle.$$

    Since $\limsup_{m\to\infty}\langle -d, j(w_{\Pi(m)}-d)\rangle \le 0$, taking the limit as $m \to \infty$ in the above inequality, we have

    $$\lim_{m\to\infty}\|v_{\Pi(m)}-d\|^2 = 0.$$

    Thus,

    $$\lim_{m\to\infty}\Xi_{\Pi(m)} = \lim_{m\to\infty}\Xi_{\Pi(m)+1} = 0.$$

    Moreover, for all $m \ge m_0$, it is easy to notice that $\Xi_m \le \Xi_{\Pi(m)+1}$ if $m \neq \Pi(m)$ (that is, $\Pi(m) < m$), since $\Xi_i > \Xi_{i+1}$ for $\Pi(m)+1 \le i \le m$. As a result, for all $m \ge m_0$, we get

    $$0 \le \Xi_m \le \max\{\Xi_{\Pi(m)}, \Xi_{\Pi(m)+1}\} = \Xi_{\Pi(m)+1}.$$

    Hence, $\lim_{m\to\infty}\Xi_m = 0$, which shows that $\{v_m\}$ converges strongly to the point $d \in F(T)$. This finishes the proof.

    Since every uniformly smooth Banach space has a uniformly Gâteaux differentiable norm, our theorem can also be stated in a uniformly convex Banach space that is, in addition, uniformly smooth. Therefore, we obtain the following result without proof.

    Corollary 1. Let $C$ be a nonempty, closed, convex subset of a real uniformly convex Banach space $B$ which is also uniformly smooth, and let $T : C \to C$ be a nonexpansive mapping such that $F(T) \neq \emptyset$. Let $\{v_m\}$ be a sequence generated iteratively by (3.1); then $\{v_m\}$ converges strongly to a point in $F(T)$.
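Algorithm (3.1) is straightforward to implement. The following Python sketch uses illustrative parameter sequences satisfying conditions (i)-(ii) of Theorem 1 and a simple contractive (hence nonexpansive) test map on $\mathbb{R}$ with fixed point $2$; none of these choices are from the paper's experiments.

```python
# A minimal sketch of Algorithm (3.1):
#   h_m   = v_m + pi_m (v_m - v_{m-1})            (inertial step)
#   psi_m = (1 - xi_m)(1 - sigma_m) h_m           (Halpern-type shrinkage)
#   v_{m+1} = (1 - rho_m) psi_m + rho_m T(psi_m)  (Krasnoselskii-Mann step)

def algorithm_31(T, v0, v1, iters=500):
    v_prev, v = v0, v1
    for m in range(1, iters + 1):
        xi = 1.0 / (m + 2)            # xi_m -> 0
        sigma = 1.0 / (m + 2)         # sigma_m -> 0, sum sigma_m = infinity
        rho = 0.5                     # rho_m in [l1, l2] subset of (0, 1)
        pi = 1.0 / (m + 1) ** 2       # pi_m >= 0, summable
        h = v + pi * (v - v_prev)
        psi = (1 - xi) * (1 - sigma) * h
        v_prev, v = v, (1 - rho) * psi + rho * T(psi)
    return v

# Illustrative test map T(v) = v/2 + 1 with fixed point 2.
fp = algorithm_31(lambda v: 0.5 * v + 1.0, 0.0, 1.0)
```

Because the shrinkage factors $\xi_m, \sigma_m$ vanish only slowly, the iterates approach the fixed point with a small, decaying bias toward $0$, consistent with the Halpern-type structure of the scheme.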

    In the remainder of this section, we prove the following theorem for finding zeros of accretive mappings.

    Theorem 2. Let $C$ be a nonempty, closed, convex subset of a real uniformly convex Banach space $B$ which has a uniformly Gâteaux differentiable norm, and let $\Upsilon : C \to C$ be a continuous and accretive mapping such that $N(\Upsilon) \neq \emptyset$. For arbitrary $v_0, v_1 \in C$, let $\{v_m\}$ be the sequence generated by

    $$\begin{cases} \hbar_m = v_m + \pi_m(v_m - v_{m-1}),\\ \psi_m = (1-\xi_m)(1-\sigma_m)\hbar_m,\\ v_{m+1} = (1-\rho_m)\psi_m + \rho_m J_\Upsilon\psi_m, \quad m \ge 1, \end{cases}$$

    where $J_\Upsilon = (I + \Upsilon)^{-1}$. Assume that the following conditions hold:

    (i) $\lim_{m\to\infty}\xi_m = 0$, $\lim_{m\to\infty}\sigma_m = 0$, $\sum_{m=1}^{\infty}\sigma_m = \infty$, $\xi_m, \sigma_m \in (0,1)$, and $\rho_m \in [l_1, l_2] \subset (0,1)$;

    (ii) $\pi_m \ge 0$ for all $m \in \mathbb{N}$ and $\sum_{m=1}^{\infty}\pi_m < \infty$.

    Then $\{v_m\}$ converges strongly to a point in $N(\Upsilon)$.

    Proof. According to the results of Martin [42,43,44] and Cioranescu [37], $\Upsilon$ is m-accretive. This implies that $J_\Upsilon = (I + \Upsilon)^{-1}$ is nonexpansive and $F(J_\Upsilon) = N(\Upsilon)$. Setting $T = J_\Upsilon$ in Theorem 1 and proceeding exactly as there, we obtain the desired result.
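The resolvent $J_\Upsilon$ at the heart of Theorem 2 can be computed numerically when $B = \mathbb{R}$. The following sketch uses an illustrative accretive map (not from the paper): $\Upsilon(v) = v^3 - 8$ is continuous and monotone increasing, hence accretive, with unique zero $v = 2$; $J_\Upsilon(v)$ is the unique $w$ with $w + w^3 - 8 = v$, found by bisection.

```python
# Resolvent J_Y = (I + Y)^{-1} of the illustrative accretive map
# Y(v) = v^3 - 8 on R. Since w -> w + Y(w) is strictly increasing,
# bisection finds the unique w with w + w^3 - 8 = v.

def resolvent(v, lo=-100.0, hi=100.0, tol=1e-12):
    f = lambda w: w + w ** 3 - 8.0 - v
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Iterating the nonexpansive resolvent approximates the zero of Y,
# since fixed points of J_Y are exactly the zeros of Y.
z = 0.0
for _ in range(50):
    z = resolvent(z)
# z is close to 2, the zero of Y
```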

    In the following experiments, we examine the behavior of Algorithm (3.1) in approximating a fixed point. We present the convergence results discussed in this study graphically and in tables of numerical values.

    Example 1. Consider the fixed point problem taken from [53], in which $B = \mathbb{R}$ is the usual real number space with the usual norm. A mapping $T : A \to A$ is defined by

    $$T(v) = (5v^2 - 2v + 48)^{1/3}, \quad v \in A,$$

    where $A = \{v \in \mathbb{R} : 0 \le v \le 50\}$.

    Experiment 1. For the control parameters $\xi_m = \sigma_m = \frac{1}{km+2}$, we used the values $k = 1, 2, 3, 5, 10$. We took $\rho_m = 0.80$, $v_0 = v_1 = 10$, $\pi_m = \frac{10}{(m+1)^2}$, and $D_m = |v_m - v_{m-1}|$ (see Table 1 and Figure 1).

    Table 1.  Some terms of the sequence generated by Algorithm (3.1) with $\xi_m = \sigma_m = \frac{1}{km+2}$, and the elapsed time, for the indicated values of $n$.
    k number of iterations (n) elapsed time
    1 449 0.013728
    2 319 0.011591
    3 262 0.021587
    5 204 0.024854
    10 145 0.036621

    Figure 1.  Graph showing the convergence of Algorithm (3.1) with $\xi_m = \sigma_m = \frac{1}{km+2}$; the numbers of iterations are 449, 319, 262, 204, and 145.
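A Python sketch reproducing the shape of Experiment 1 (the paper's own implementation is in MATLAB, and the exact stopping rule behind the iteration counts in Table 1 is not restated here): Algorithm (3.1) with $T(v) = (5v^2 - 2v + 48)^{1/3}$, whose fixed point in $[0, 50]$ is $v = 6$.

```python
# Experiment 1 setup: xi_m = sigma_m = 1/(k m + 2), rho_m = 0.8,
# pi_m = 10/(m+1)^2, v0 = v1 = 10, with k = 10 as one of the tested values.

def T(v):
    # 5v^2 - 2v + 48 > 0 for all real v, so the cube root is well defined.
    return (5.0 * v * v - 2.0 * v + 48.0) ** (1.0 / 3.0)

def run(k, iters=5000):
    v_prev, v = 10.0, 10.0
    for m in range(1, iters + 1):
        xi = sigma = 1.0 / (k * m + 2.0)
        rho = 0.8
        pi = 10.0 / (m + 1) ** 2
        h = v + pi * (v - v_prev)
        psi = (1 - xi) * (1 - sigma) * h
        v_prev, v = v, (1 - rho) * psi + rho * T(psi)
    return v

v = run(k=10)
# v approaches the fixed point 6 as the iteration proceeds
```

One can check that $v = 6$ is indeed fixed: $6^3 = 216 = 5\cdot 36 - 12 + 48 + \ldots$; more precisely, $5\cdot 6^2 - 2\cdot 6 + 48 = 216 = 6^3$.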

    Experiment 2. For the control parameter $\rho_m = k$, we used the values $k = 0.15, 0.35, 0.55, 0.75, 0.95$. We also took $\xi_m = \sigma_m = \frac{1}{2m+2}$, $v_0 = v_1 = 10$, $\pi_m = \frac{10}{(m+1)^2}$, and $D_m = |v_m - v_{m-1}|$ (see Table 2 and Figure 2).

    Table 2.  Some terms of the sequence generated by Algorithm (3.1) with $\rho_m = k$, and the elapsed time, for the indicated values of $n$.
    k number of iterations (n) elapsed time
    0.15 897 0.026490
    0.35 557 0.019869
    0.55 419 0.024898
    0.75 336 0.028761
    0.95 276 0.022688

    Figure 2.  Graph showing the convergence of Algorithm (3.1) with $\rho_m = k$; the numbers of iterations are 897, 557, 419, 336, and 276.

    Symmetry considerations arise in signal processing, especially when signals satisfy certain symmetries. We now focus on applying Algorithm (3.1) to signal recovery problems. In signal processing, compressed sensing can be modeled as the following underdetermined linear system:

    $$y = Av + \nu,$$

    where $v \in \mathbb{R}^n$ is the original signal with $n$ components to be recovered, $\nu, y \in \mathbb{R}^m$ are the noise and the observed signal with noise ($m$ components), respectively, and $A \in \mathbb{R}^{m\times n}$ is a degraded matrix. Finding solutions of this underdetermined linear system can be viewed as solving the least absolute shrinkage and selection operator (LASSO) problem:

    $$\min_{v\in\mathbb{R}^n} \frac{1}{2}\|y - Av\|_2^2 + \lambda\|v\|_1,$$

    where $\lambda > 0$. Various techniques and iterative schemes have been developed to solve the LASSO problem. Our method can be applied by setting $Tv = \mathrm{prox}_{\mu g}(v - \mu\nabla f(v))$, where $f(v) = \frac{1}{2}\|y - Av\|_2^2$, $g(v) = \lambda\|v\|_1$, and $\nabla f(v) = A^T(Av - y)$.
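The forward-backward mapping $T$ above has an explicit form, since the proximal operator of the $\ell_1$ norm is soft-thresholding. The following sketch uses an illustrative small problem and the simpler step size $\mu = 1/\|A\|_2^2$ (the paper's experiments use $\mu = 1.8/\|A^T A\|_2$):

```python
# Forward-backward map T v = prox_{mu g}(v - mu grad f(v)) for the LASSO,
# f(v) = (1/2)||y - A v||^2, g(v) = lambda ||v||_1. Problem data here are
# randomly generated for illustration only.
import numpy as np

def soft_threshold(v, t):
    # prox of t * ||.||_1: shrink each entry toward 0 by t
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def make_T(A, y, lam):
    mu = 1.0 / np.linalg.norm(A, 2) ** 2   # step size from the spectral norm
    def T(v):
        grad = A.T @ (A @ v - y)           # grad f(v) = A^T (A v - y)
        return soft_threshold(v - mu * grad, mu * lam)
    return T

rng = np.random.default_rng(0)
A = rng.standard_normal((32, 64))          # underdetermined: m = 32 < n = 64
x_true = np.zeros(64)
x_true[[3, 17, 40]] = [1.5, -2.0, 1.0]     # 3-sparse ground truth
y = A @ x_true + 0.01 * rng.standard_normal(32)

T = make_T(A, y, lam=0.5)
v = np.zeros(64)
for _ in range(5000):                      # plain fixed-point iteration of T
    v = T(v)
# v is now (approximately) a fixed point of T, i.e., a LASSO minimizer
```

Any fixed-point scheme for nonexpansive maps, including (3.1), can be wrapped around this $T$; the plain iteration above is only the simplest baseline.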

    Next, we provide an example of applying our algorithm to signal recovery problems.

    Example 2. Let $A \in \mathbb{R}^{m\times n}$ $(m < n)$ be a degraded matrix and $y, \nu \in \mathbb{R}^m$. We propose the following method to find a solution of the signal recovery problem:

    $$\begin{cases} \hbar_m = v_m + \pi_m(v_m - v_{m-1}),\\ \psi_m = (1-\xi_m)(1-\sigma_m)\hbar_m,\\ v_{m+1} = (1-\rho_m)\psi_m + \rho_m T\psi_m, \quad m \ge 1, \end{cases}$$

    where $Tv = \mathrm{prox}_{\mu g}\big(v - \mu A^T(Av - y)\big)$ and $\mu = 1.8/\|A^T A\|_2$. Moreover, we randomly choose vectors $v_0$ and $v_1$ and apply them to the proposed method, where $\sigma_m = \xi_m = \frac{1}{10(m+1)}$, $\rho_m = \frac{m}{m+1}$, and $\pi_m = \frac{1}{(m+100)^2}$ for each $m \in \mathbb{N}$.

    A straightforward check confirms that the conditions of Theorem 1 are satisfied. Next, we conduct experiments to showcase the convergence and effectiveness of the proposed algorithm in recovering $k$-sparse signals $v_k$ with $k = 70, 35, 18, 9$ (see Figure 3).

    Figure 3.  The k-sparse signal with k=70,35,18,9, respectively.

    A signal of size $n = 1024$ elements, generated uniformly within the interval $[-2, 2]$, is utilized to produce the observed signals $y_k = Av_k + \nu$, where $m = 512$ (see Figure 4).

    Figure 4.  Degraded observations of the $k$-sparse signals with $k = 70, 35, 18, 9$, respectively.

    The white Gaussian noise ν is depicted in Figure 5.

    Figure 5.  Noise Signal ν.

    The process starts with randomly selected initial signal data v0 and v1, each comprising n=1024 randomly chosen elements (see Figure 6).

    Figure 6.  Initial signals v0 and v1.

    In addressing the challenge of recovering $k$-sparse signals, we reconstructed the observed signals depicted in Figure 4 to obtain the $k$-nonzero signals shown in Figure 3. Throughout this recovery process, we carefully considered the optimal regularization parameter $\lambda$, chosen to maximize the signal-to-noise ratio (SNR). The performance of the proposed method at the $m$th iteration is measured quantitatively by means of the SNR, which is defined by

    $$\mathrm{SNR}(v_m) = 20\log_{10}\left(\frac{\|v_m\|_2}{\|v_m - v\|_2}\right),$$

    where $v_m$ is the recovered signal at the $m$th iteration of the proposed method. The SNR quality, as influenced by the regularization parameter $\lambda$ within the range $[5, 75]$, is visualized in Figure 7.

    Figure 7.  Plots of the best SNR quality of the proposed method as affected by the regularization parameter $\lambda$ over 1,000 iterations.

    The figure above illustrates that the proposed algorithm can solve the sparse signal recovery challenge. Moreover, we present the evolution of the SNR and of the relative error (measured in the max-norm) over the number of iterations during the recovery of $k$-sparse signals with $k = 70, 35, 18, 9$ (see Figure 8). This is done while identifying the optimal regularization parameter $\lambda$ that achieves the highest SNR quality, as illustrated in the figure above.

    Figure 8.  The SNR and relative-error plots of the proposed algorithm with the optimal regularization parameter $\lambda$ in recovering the observed sparse signals.

    Notably, the plot of the signal's relative error decreases continuously until it converges to a constant value. In the SNR quality plot, it is evident that the SNR value progressively rises until it stabilizes at a constant value. Additionally, Figure 9 demonstrates the best recovery of $k$-sparse signals with $k = 70, 35, 18, 9$ within 400 iterations using the proposed algorithm with its optimal regularization parameter $\lambda$.

    Figure 9.  The best recovery of $k$-sparse signals with $k = 70, 35, 18, 9$, respectively, by the proposed algorithm within 400 iterations.

    Based on these findings, it can be inferred that the proposed algorithm successfully enhances the quality of the recovered signal in solving the signal recovery problem.

    We constructed a novel algorithm with an inertial technique to approximate a fixed point of a nonexpansive mapping in a real uniformly convex Banach space with a uniformly Gâteaux differentiable norm. Furthermore, we found zeros of accretive mappings. An illustrative example was provided as Example 1, and an application of the algorithm to a signal recovery problem was presented. We proved strong convergence, which is a stronger conclusion than the weak convergence guaranteed by many classical Mann-type methods.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    This project is funded by the National Research Council of Thailand (NRCT) and University of Phayao Grant No. N42A660382.

    The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.



    [1] C. E. Chidume, Geometric properties of Banach spaces and nonlinear iterations, Lecture Notes in Mathematics 1965, Springer, 2009. https://doi.org/10.1007/978-1-84882-190-3
    [2] R. P. Agarwal, D. O'Regan, D. R. Sahu, Fixed point theory for Lipschitz-type mappings with applications, Berlin: Springer, 2008.
    [3] W. Takahashi, Nonlinear functional analysis: Fixed point theory and its applications, Yokohama: Yokohama Publishers, 2000.
    [4] V. Berinde, Iterative approximation of fixed points, Lecture Notes 1912, Springer, 2002.
    [5] F. E. Browder, Nonexpansive nonlinear operators in Banach spaces, P. Nat. Acad. Sci., 54 (1965), 1041–1044. https://doi.org/10.1073/pnas.54.4.1041 doi: 10.1073/pnas.54.4.1041
    [6] Z. Opial, Weak convergence of successive approximations for nonexpansive mappings, B. Am. Math. Soc., 73 (1967), 591–597. https://doi.org/10.1090/S0002-9904-1967-11761-0 doi: 10.1090/S0002-9904-1967-11761-0
    [7] M. A. Krasnosel'skii, Two remarks on the method of successive approximations, Uspekhi Mat. Nauk, 10 (1955), 123–127.
    [8] W. R. Mann, Mean value methods in iteration, P. Am. Math. Soc., 4 (1953), 506–510. https://doi.org/10.1090/S0002-9939-1953-0054846-3 doi: 10.1090/S0002-9939-1953-0054846-3
    [9] C. W. Groetsch, A note on segmenting Mann iterates, J. Math. Anal. Appl., 40 (1972), 369–372. https://doi.org/10.1016/0022-247X(72)90056-X doi: 10.1016/0022-247X(72)90056-X
    [10] T. L. Hicks, J. D. Kubicek, On the Mann iteration process in a Hilbert space, J. Math. Anal. Appl., 59 (1977), 498–504. https://doi.org/10.1016/0022-247X(77)90076-2 doi: 10.1016/0022-247X(77)90076-2
    [11] B. P. Hillam, A generalization of Krasnoselski's theorem on the real line, Math. Mag., 48 (1975), 167–168. https://doi.org/10.1080/0025570X.1975.11976471 doi: 10.1080/0025570X.1975.11976471
    [12] M. Edelstein, R. C. O'Brien, Nonexpansive mappings, asymptotic regularity and successive approximations, J. Lond. Math. Soc., 2 (1978), 547–554. https://doi.org/10.1112/jlms/s2-17.3.547 doi: 10.1112/jlms/s2-17.3.547
    [13] M. Bravo, R. Cominetti, M. P. Signé, Rates of convergence for inexact Krasnosel'skii-Mann iterations in Banach spaces, Math. Program., 175 (2019), 241–262. https://doi.org/10.1007/s10107-018-1240-1 doi: 10.1007/s10107-018-1240-1
    [14] Q. L. Dong, J. Huang, X. H. Li, Y. J. Cho, Th. M. Rassias, MiKM: Multi-step inertial Krasnosel'skii-Mann algorithm and its applications, J. Global Optim., 73 (2019), 801–824. https://doi.org/10.1007/s10898-018-0727-x doi: 10.1007/s10898-018-0727-x
    [15] Q. L. Dong, X. H. Li, Y. J. Cho, T. M. Rassias, Multi-step inertial Krasnosel'skii-Mann iteration with new inertial parameters arrays, J. Fix. Point Theory A., 23 (2021), 1–18. https://doi.org/10.1007/s11784-021-00879-9 doi: 10.1007/s11784-021-00879-9
    [16] S. He, Q. L. Dong, H. Tian, X. H. Li, On the optimal parameters of Krasnosel'skii-Mann iteration, Optimization, 70 (2021), 1959–1986. https://doi.org/10.1080/02331934.2020.1767101 doi: 10.1080/02331934.2020.1767101
    [17] Q. L. Dong, Y. J. Cho, S. He, P. M. Pardalos, T. M. Rassias, The Krasnosel'skii-Mann iterative method: Recent progress and applications, Springer, 2022. https://doi.org/10.1007/978-3-030-91654-1
    [18] F. E. Browder, W. V. Petryshyn, The solution by iteration of nonlinear functional equations in Banach spaces, B. Am. Math. Soc., 72 (1966), 571–575. https://doi.org/10.1090/S0002-9904-1966-11544-6 doi: 10.1090/S0002-9904-1966-11544-6
    [19] H. H. Bauschke, P. L. Combettes, Convex analysis and monotone operator theory in Hilbert spaces, 2 Eds., CMS Books in Mathematics, New York: Springer, 2017. https://doi.org/10.1007/978-3-319-48311-5
    [20] J. Borwein, S. Reich, I. Shafrir, Krasnoselski-Mann iterations in normed spaces, Can. Math. Bull., 35 (1992), 21–28. https://doi.org/10.4153/CMB-1992-003-0 doi: 10.4153/CMB-1992-003-0
    [21] S. Ishikawa, Fixed points and iteration of a nonexpansive mapping in a Banach space, P. Am. Math. Soc., 59 (1976), 65–71. https://doi.org/10.1090/S0002-9939-1976-0412909-X doi: 10.1090/S0002-9939-1976-0412909-X
    [22] S. Reich, Weak convergence theorems for nonexpansive mappings in Banach spaces, J. Math. Anal. Appl., 67 (1979), 274–276. https://doi.org/10.1016/0022-247X(79)90024-6 doi: 10.1016/0022-247X(79)90024-6
    [23] A. Genel, J. Lindenstrauss, An example concerning fixed points, Isr. J. Math., 22 (1975), 81–86. https://doi.org/10.1007/BF02757276 doi: 10.1007/BF02757276
    [24] R. I. Bot, E. R. Csetnek, D. Meier, Inducing strong convergence into the asymptotic behavior of proximal splitting algorithms in Hilbert spaces, Optim. Method. Softw., 34 (2019), 489–514. https://doi.org/10.1080/10556788.2018.1457151 doi: 10.1080/10556788.2018.1457151
    [25] Q. L. Dong, Y. Y. Lu, J. Yang, The extragradient algorithm with inertial effects for solving the variational inequality, Optimization, 65 (2016), 2217–2226. https://doi.org/10.1080/02331934.2016.1239266 doi: 10.1080/02331934.2016.1239266
    [26] J. Fan, L. Liu, X. Qin, A subgradient extragradient algorithm with inertial effects for solving strongly pseudomonotone variational inequalities, Optimization, 69 (2020), 2199–2215. https://doi.org/10.1080/02331934.2019.1625355 doi: 10.1080/02331934.2019.1625355
    [27] B. T. Polyak, Some methods of speeding up the convergence of iteration methods, USSR Comput. Math. Math. Phys., 4 (1964), 1–17. https://doi.org/10.1016/0041-5553(64)90137-5 doi: 10.1016/0041-5553(64)90137-5
    [28] Q. L. Dong, H. B. Yuan, Y. J. Cho, T. M. Rassias, Modified inertial Mann algorithm and inertial CQ-algorithm for nonexpansive mappings, Optim. Lett., 12 (2018), 87–102. https://doi.org/10.1007/s11590-016-1102-9 doi: 10.1007/s11590-016-1102-9
    [29] H. A. Hammad, H. ur Rehman, M. De la Sen, Advanced algorithms and common solutions to variational inequalities, Symmetry, 12 (2020), 1198. https://doi.org/10.3390/sym12071198 doi: 10.3390/sym12071198
    [30] P. E. Maingé, Convergence theorems for inertial KM-type algorithms, J. Comput. Appl. Math., 219 (2008), 223–236. https://doi.org/10.1016/j.cam.2007.07.021 doi: 10.1016/j.cam.2007.07.021
    [31] Y. Shehu, X. H. Li, Q. L. Dong, An efficient projection-type method for monotone variational inequalities in Hilbert spaces, Numer. Algorithms, 84 (2020), 365–388. https://doi.org/10.1007/s11075-019-00758-y doi: 10.1007/s11075-019-00758-y
    [32] B. Tan, S. Xu, S. Li, Inertial shrinking projection algorithms for solving hierarchical variational inequality problems, J. Nonlinear Convex A., 21 (2020), 871–884.
    [33] L. Liu, B. Tan, S. Y. Cho, On the resolution of variational inequality problems with a double-hierarchical structure, J. Nonlinear Convex A., 21 (2020), 377–386.
    [34] F. Akutsah, O. K. Narain, J. K. Kim, Improved generalized M-iteration for quasi-nonexpansive multivalued mappings with application in real Hilbert spaces, Nonlinear Funct. Anal. Appl., 27 (2022), 59–82.
    [35] N. D. Truong, J. K. Kim, T. H. H. Anh, Hybrid inertial contraction projection methods extended to variational inequality problems, Nonlinear Funct. Anal. Appl., 27 (2022), 203–221.
    [36] J. A. Abuchu, G. C. Ugunnadi, O. K. Narain, Inertial proximal and contraction methods for solving monotone variational inclusion and fixed point problems, Nonlinear Funct. Anal. Appl., 28 (2023), 175–203. https://doi.org/10.23952/jnfa.2023.19 doi: 10.23952/jnfa.2023.19
    [37] I. Cioranescu, Geometry of Banach spaces, duality mappings and nonlinear problems, Dordrecht: Kluwer Academic, 1990. https://doi.org/10.1007/978-94-009-2121-4
    [38] F. E. Browder, Nonlinear mappings of nonexpansive and accretive type in Banach spaces, B. Am. Math. Soc., 73 (1967), 875–882. https://doi.org/10.1090/S0002-9904-1967-11823-8 doi: 10.1090/S0002-9904-1967-11823-8
    [39] T. Kato, Nonlinear semigroups and evolution equations, J. Math. Soc. Jpn., 19 (1967), 508–520. https://doi.org/10.2969/jmsj/01940508 doi: 10.2969/jmsj/01940508
    [40] W. O. Ray, An elementary proof of surjectivity for a class of accretive operators, P. Am. Math. Soc., 75 (1979), 255–258. https://doi.org/10.1090/S0002-9939-1979-0532146-0 doi: 10.1090/S0002-9939-1979-0532146-0
    [41] J. V. Caristi, The fixed point theory for mappings satisfying inwardness conditions, Ph.D. Thesis, The University of Iowa, Iowa City, 1975.
    [42] R. H. Martin, Nonlinear operators and differential equations in Banach spaces, SIAM Rev., 20 (1978), 202–204. https://doi.org/10.1137/1020032 doi: 10.1137/1020032
    [43] R. H. Martin, A global existence theorem for autonomous differential equations in Banach spaces, P. Am. Math. Soc., 26 (1970), 307–314. https://doi.org/10.1090/S0002-9939-1970-0264195-6 doi: 10.1090/S0002-9939-1970-0264195-6
    [44] F. E. Browder, Nonlinear elliptic boundary value problems, B. Am. Math. Soc., 69 (1963), 862–874. https://doi.org/10.1090/S0002-9904-1963-11068-X
    [45] K. Deimling, Nonlinear functional analysis, Berlin: Springer, 1985. https://doi.org/10.1007/978-3-662-00547-7
    [46] L. Wei, Q. Zhang, Y. Zhang, R. P. Agarwal, Iterative algorithm for zero points of the sum of countable accretive-type mappings and variational inequalities, J. Nonlinear Funct. Anal., 2022 (2022), Article ID 3. https://doi.org/10.23952/jnfa.2022.3
    [47] H. K. Xu, N. Altwaijry, I. Alzughaibi, S. Chebbi, The viscosity approximation method for accretive operators in Banach spaces, J. Nonlinear Var. Anal., 6 (2022), 37–50. https://doi.org/10.23952/jnva.6.2022.1.03 doi: 10.23952/jnva.6.2022.1.03
    [48] W. L. Bynum, Normal structure coefficients for Banach spaces, Pac. J. Math., 86 (1980), 427–436. https://doi.org/10.2140/pjm.1980.86.427 doi: 10.2140/pjm.1980.86.427
    [49] T. C. Lim, H. K. Xu, Fixed point theorems for asymptotically nonexpansive mappings, Nonlinear Anal. TMA, 22 (1994), 1345–1355. https://doi.org/10.1016/0362-546X(94)90116-3 doi: 10.1016/0362-546X(94)90116-3
    [50] H. K. Xu, Inequalities in Banach spaces with applications, Nonlinear Anal. TMA, 16 (1991), 1127–1138. https://doi.org/10.1016/0362-546X(91)90200-K doi: 10.1016/0362-546X(91)90200-K
    [51] S. Shioji, W. Takahashi, Strong convergence of approximated sequences for nonexpansive mappings in Banach spaces, P. Am. Math. Soc., 125 (1997), 3641–3645. https://doi.org/10.1090/S0002-9939-97-04033-1 doi: 10.1090/S0002-9939-97-04033-1
    [52] H. K. Xu, Iterative algorithms for nonlinear operators, J. London Math. Soc., 66 (2002), 240–256. https://doi.org/10.1112/S0024610702003332 doi: 10.1112/S0024610702003332
    [53] W. Shatanawi, A. Bataihah, A. Tallafha, Four-step iteration scheme to approximate fixed point for weak contractions, CMC-Comput. Mater. Con., 64 (2020), 1491–1504. https://doi.org/10.32604/cmc.2020.010365 doi: 10.32604/cmc.2020.010365
  • This article has been cited by:

    1. Imo Kalu Agwu, Faeem Ali, Donatus Ikechi Igbokwe, The solution of split common fixed point problems for enriched asymptotically nonexpansive mappings in Banach spaces, 2024, 0971-3611, 10.1007/s41478-024-00849-7
    2. Papatsara Inkrong, Papinwich Paimsang, Prasit Cholamjiak, A recent fixed point method based on two inertial terms, 2024, 0971-3611, 10.1007/s41478-024-00845-x
    3. Salah Benhiouna, Azzeddine Bellour, Reemah Alhuzally, Ahmad M. Alghamdi, Existence of Solutions for Generalized Nonlinear Fourth-Order Differential Equations, 2024, 12, 2227-7390, 4002, 10.3390/math12244002
  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)

