Research article

Iterative manner involving sunny nonexpansive retractions for nonlinear operators from the perspective of convex programming as applicable to differential problems, image restoration and signal recovery

  • Received: 28 August 2022 Revised: 09 December 2022 Accepted: 04 January 2023 Published: 11 January 2023
  • MSC : 46T99, 47H09, 47H10, 47J25, 49M37, 54H25

• In this paper, using sunny nonexpansive retractions, which are different from the metric projection in Banach spaces, we develop the CR-iteration algorithm in view of two quasi-nonexpansive nonself mappings and give a convergence analysis for the proposed method in the setting of uniformly convex Banach spaces. Furthermore, our results can be applied to finding common zeros of accretive operators, convexly constrained least squares problems and convex minimization problems. Regarding applications, some numerical experiments involving real-world problems are provided, with focus on differential problems, image restoration problems and signal recovery problems.

    Citation: Damrongsak Yambangwai, Chonjaroen Chairatsiripong, Tanakit Thianwan. Iterative manner involving sunny nonexpansive retractions for nonlinear operators from the perspective of convex programming as applicable to differential problems, image restoration and signal recovery[J]. AIMS Mathematics, 2023, 8(3): 7163-7195. doi: 10.3934/math.2023361




Let $E$ be a real Banach space. Assume that $A: E \to E$ is an operator and $B: E \to 2^E$ is a set-valued operator. We study the following zero point problem: find an element $u \in E$ such that

$$0 \in Au + Bu. \tag{1.1}$$

It is well known that this problem includes, as special cases, convex programming, variational inequalities, split feasibility problems and minimization problems [1,2,3,4,5,6,7]. These problems have applications in machine learning, image processing [4,5], linear inverse problems and signal processing, all of which can be modeled mathematically in the form (1.1).

Fixed-point approximation by iterative methods has attracted the interest of many mathematicians in recent years. With the substantial progress in fixed-point theory over the last several years, iterative schemes have become a topic of growing importance. The interplay between the type of iterative sequence and the class of operators involved has not been fully investigated and is still actively discussed. Because of their considerable usefulness in fixed-point theory and its applications, classes of nonlinear operators occupy a prominent place in current studies employing algorithmic approaches. A number of works have approximated fixed points by iterative techniques [8]. It is worth noting that the class of operators used becomes important in fixed-point studies.

Let $K$ be a nonempty closed convex subset of $E$ and $S: K \to K$ be an operator with at least one fixed point. In order to find a fixed point, many researchers have proposed various algorithms for computing an approximate solution. One of the most popular iterative processes, the Picard iteration process [9], is defined by $u_1 \in K$ and

$$u_{n+1} = S u_n, \quad n \ge 1.$$

In addition, other iterative methods improving on the Picard iteration have been studied extensively, such as the following.

The Mann iteration process [10] is defined by $u_1 \in K$ and

$$u_{n+1} = (1 - \eta_n) u_n + \eta_n S u_n, \quad n \ge 1,$$

where $\{\eta_n\}$ is a sequence in $(0,1)$.

The Ishikawa iteration process [11] is defined by $u_1 \in K$ and

$$u_{n+1} = (1 - \eta_n) u_n + \eta_n S\big((1 - \vartheta_n) u_n + \vartheta_n S u_n\big), \quad n \ge 1,$$

where $\{\eta_n\}$ and $\{\vartheta_n\}$ are sequences in $(0,1)$. The S-iteration process was introduced by Agarwal et al. [12] in a Banach space as follows:

$$u_{n+1} = (1 - \eta_n) S u_n + \eta_n S\big((1 - \vartheta_n) u_n + \vartheta_n S u_n\big), \quad n \ge 1,$$

where $\{\eta_n\}$ and $\{\vartheta_n\}$ are sequences in $(0,1)$. They proved that the S-iteration method is independent of the Mann and Ishikawa iteration methods and converges faster than both of them. In 2012, Chugh et al. [13] introduced the CR-iteration procedure as follows:

Let $K$ be a nonempty convex subset of a normed linear space $E$, and let $\{\eta_n\}$, $\{\vartheta_n\}$, $\{\zeta_n\}$ be sequences in $(0,1)$. For $u_1 \in K$, the sequence $\{u_n\}$ is defined by

$$\begin{cases} w_n = (1 - \zeta_n) u_n + \zeta_n S u_n, \\ z_n = (1 - \vartheta_n) S u_n + \vartheta_n S w_n, \\ u_{n+1} = (1 - \eta_n) z_n + \eta_n S z_n, \end{cases} \quad n \ge 1.$$

    The authors showed numerically that this scheme is equivalent to and faster than the Picard, Mann, Ishikawa and Agarwal iterative schemes. Some of the other well-known three-step iteration processes were developed by Noor [14], Ullah and Arshad [15] (AK-iteration), Sahu et al. [16], Thakur et al. [17] and Phuengrattana and Suantai [18] (SP-iteration). There are a number of papers that have studied fixed points by using some iterative schemes (see [8]).
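For readers who want to experiment with the scheme, the following is a minimal Python sketch of the CR iteration for a single mapping $S$ with constant parameters; the affine contraction used as $S$, the parameter values and the stopping rule are our own illustrative assumptions, not taken from [13].

```python
import numpy as np

def cr_iteration(S, u1, eta=0.9, theta=0.9, zeta=0.9, tol=1e-10, max_iter=10_000):
    """CR iteration of Chugh et al. for a mapping S with constant parameters."""
    u = np.asarray(u1, dtype=float)
    for n in range(max_iter):
        w = (1 - zeta) * u + zeta * S(u)
        z = (1 - theta) * S(u) + theta * S(w)
        u_next = (1 - eta) * z + eta * S(z)
        if np.linalg.norm(u_next - u) < tol:
            return u_next, n + 1
        u = u_next
    return u, max_iter

# Illustrative example: S is an affine contraction, so it has a unique fixed point.
M = np.array([[0.5, 0.1], [0.0, 0.4]])
c = np.array([1.0, 2.0])
u_star, iters = cr_iteration(lambda u: M @ u + c, np.zeros(2))
print(u_star, iters)  # approximates the solution of u = Mu + c
```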

Many real-world problems can be modeled as common-solution problems. Therefore, the study of solving such problems is important and has received the attention of many mathematicians.

    To date, fixed-point theory has been applied to solve various problems in science, engineering, economics, physics and data science, with applications such as signal/image processing [19,20], and intensity-modulated radiation therapy treatment planning [21,22].

    In the field of image processing, the image restoration problem is an interesting and important topic. For this problem, there are several optimizations and fixed-point methods; see [23,24,25,26,27] for more detail.

Signal processing and numerical optimization are independent scientific fields that have always mutually influenced each other. Perhaps the most convincing example in which the two fields have met is compressed sensing [2]. Several surveys dedicated to these algorithms and their applications in signal processing have appeared [3,6,7,20].

    In this work, our contribution is twofold:

1) We develop the CR-iteration process for approximating common solutions of common fixed-point problems and apply our results to solving the problem in (1.1).

2) We find common solutions of convexly constrained least squares problems and convex minimization problems and apply them to differential problems, image restoration and signal processing.

Let $E$ be a real Banach space with norm $\|\cdot\|$ and let $E^*$ be its dual. The value of $f \in E^*$ at $u \in E$ is denoted by $\langle u, f \rangle$. Throughout the paper, we denote by $B_\lambda[v]$ the closed ball with center $v$ and radius $\lambda$:

$$B_\lambda[v] = \{u \in E : \|v - u\| \le \lambda\}.$$

A Banach space $E$ is said to be uniformly convex if for any $x$, $0 < x \le 2$, there exists $\delta = \delta(x) > 0$ such that the inequalities $\|u\| \le 1$, $\|v\| \le 1$ and $\|u - v\| \ge x$ imply $\frac{1}{2}\|u + v\| \le 1 - \delta$. Note that every Hilbert space is a uniformly convex Banach space. The (normalized) duality mapping $J$ from $E$ into the family of nonempty (by the Hahn-Banach theorem) weak-star compact subsets of its dual $E^*$ is defined by

$$J(u) = \{f \in E^* : \langle u, f \rangle = \|u\|^2 = \|f\|^2\}$$

for each $u \in E$, where $\langle \cdot, \cdot \rangle$ denotes the generalized duality pairing.

For a set-valued operator $A: E \to 2^E$, we denote its domain, range and graph as follows:

$$D(A) = \{u \in E : Au \ne \emptyset\}, \quad R(A) = \bigcup\{Ap : p \in D(A)\}$$

and

$$G(A) = \{(u, v) \in E \times E : u \in D(A), \ v \in Au\},$$

respectively. The inverse $A^{-1}$ of $A$ is defined by $u \in A^{-1}v$ if and only if $v \in Au$. $A$ is called accretive if, for all $u_i \in D(A)$ and $v_i \in A u_i$ $(i = 1, 2)$, there exists $j \in J(u_1 - u_2)$ such that $\langle v_1 - v_2, j \rangle \ge 0$.

An accretive operator $A$ in a Banach space $E$ is said to satisfy the range condition if $\overline{D(A)} \subset R(I + \mu A)$ for all $\mu > 0$, where $\overline{D(A)}$ denotes the closure of the domain of $A$.

It is well known that, for an accretive operator $A$ satisfying the range condition, $A^{-1}0 = \mathrm{Fix}(J^A_\mu)$ for all $\mu > 0$, where $J^A_\mu = (I + \mu A)^{-1}$ denotes the resolvent of $A$.

Let $H$ be a real Hilbert space. An accretive operator $A: H \to 2^H$ with $R(I + \mu A) = H$ for all $\mu > 0$ is called m-accretive (see [28,29,30]), or maximal accretive [31]; in a Hilbert space, $A$ is m-accretive if and only if $A$ is maximal monotone [32]. Denote by $\mathrm{dom}(h)$ the domain of a function $h: H \to (-\infty, \infty]$, i.e.,

$$\mathrm{dom}(h) = \{u \in H : h(u) < \infty\}.$$

The subdifferential of $h \in \Gamma_0(H)$ is the set

$$\partial h(u) = \{z \in H : h(v) \ge h(u) + \langle z, v - u \rangle, \ \forall v \in H\},$$

where $\Gamma_0(H)$ denotes the class of all lower semicontinuous convex functions from $H$ to $(-\infty, \infty]$ with nonempty domains.

A point $u \in K$ is called a fixed point of $S$ if $Su = u$. We shall denote by $\mathrm{Fix}(S)$ the set of fixed points of $S$, i.e., $\mathrm{Fix}(S) = \{u \in K : Su = u\}$. The mapping $S$ is called $L$-Lipschitz if there exists a constant $L > 0$ such that

$$\|Su - Sv\| \le L\|u - v\|$$

for all $u, v \in K$. The mapping $S$ is called nonexpansive if $S$ is $1$-Lipschitz. The mapping $S$ is called quasi-nonexpansive if $\mathrm{Fix}(S) \ne \emptyset$ and

$$\|Su - v\| \le \|u - v\|$$

for all $u \in K$ and $v \in \mathrm{Fix}(S)$.

A subset $K$ of a Banach space $E$ is said to be a retract of $E$ if there is a continuous mapping $Q$ from $E$ onto $K$ such that $Qu = u$ for all $u \in K$. We call such a $Q$ a retraction of $E$ onto $K$. It follows that, if a mapping $Q$ is a retraction, then $Qv = v$ for all $v$ in the range of $Q$. A retraction $Q$ is called sunny if $Q(Qu + \lambda(u - Qu)) = Qu$ for all $u \in E$ and $\lambda \ge 0$. If a sunny retraction $Q$ is also nonexpansive, then $K$ is called a sunny nonexpansive retract of $E$ [33].

Let $E$ be a strictly convex reflexive Banach space and $K$ be a nonempty closed convex subset of $E$. Denote by $P_K$ the (metric) projection from $E$ onto $K$; namely, for $u \in E$, $P_K(u)$ is the unique point in $K$ with the property

$$\inf\{\|u - v\| : v \in K\} = \|u - P_K(u)\|.$$

Let $H$ be a real Hilbert space with inner product $\langle \cdot, \cdot \rangle$ and induced norm $\|\cdot\|$. Let $K$ be a nonempty closed convex subset of $H$; then the nearest point projection $P_K$ from $H$ onto $K$ is the unique sunny nonexpansive retraction of $H$ onto $K$. It is also known that $P_K(u) \in K$ and

$$\langle u - P_K(u), P_K(u) - v \rangle \ge 0, \quad u \in H, \ v \in K.$$

Sunny nonexpansive retractions play a role in Banach spaces similar to that of metric projections in Hilbert spaces. We emphasize that, if $K$ is a closed convex subset of a uniformly smooth Banach space $E$ (that is, the norm of $E$ is uniformly Fréchet differentiable), then there exists a sunny nonexpansive retraction from $E$ onto $K$ and it is unique.
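As a concrete illustration (ours, not from the paper), the sketch below numerically checks the sunny property $Q(Qu + \lambda(u - Qu)) = Qu$ for the metric projection onto a Euclidean ball, one of the few projections available in closed form.

```python
import numpy as np

def proj_ball(u, center, radius):
    """Metric projection onto the closed ball B_radius[center] in R^d."""
    d = u - center
    nd = np.linalg.norm(d)
    return u if nd <= radius else center + radius * d / nd

rng = np.random.default_rng(0)
center, radius = np.zeros(3), 1.0
u = 5.0 * rng.normal(size=3)            # a point outside the ball
Qu = proj_ball(u, center, radius)
for lam in [0.0, 0.5, 2.0, 10.0]:       # sunny: every point on the ray through u projects to Qu
    v = Qu + lam * (u - Qu)
    assert np.allclose(proj_ball(v, center, radius), Qu)
print("sunny property verified numerically for the ball projection")
```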

Recall that a Banach space $H$ is said to satisfy Opial's condition [34] if, for each sequence $\{u_n\}$ in $H$ which converges weakly to a point $u \in H$,

$$\liminf_{n \to \infty} \|u_n - u\| < \liminf_{n \to \infty} \|u_n - v\|, \quad \forall v \in H, \ v \ne u.$$

    In the sequel, the following lemmas are needed to prove our main results.

Lemma 1. ([35]) Let $h \in \Gamma_0(H)$. Then, $\partial h$ is maximal monotone.

Lemma 2. ([36]) Let $E$ be a Banach space and let $p > 1$ and $R > 0$ be two fixed numbers. Then, $E$ is uniformly convex if and only if there exists a continuous, strictly increasing and convex function $\varphi: [0, \infty) \to [0, \infty)$ with $\varphi(0) = 0$ such that

$$\|\alpha u + (1 - \alpha) v\|^p \le \alpha\|u\|^p + (1 - \alpha)\|v\|^p - \alpha(1 - \alpha)\varphi(\|u - v\|)$$

for all $u, v \in B_R[0]$ and $\alpha \in [0, 1]$.

Lemma 3. ([37]) Let $K$ be a nonempty subset of a Banach space $E$ and $S: K \to E$ be a uniformly continuous mapping. Let $\{u_n\} \subset K$ be an approximating fixed-point sequence of $S$, i.e., $\lim_{n\to\infty}\|u_n - S u_n\| = 0$. Then, $\{v_n\}$ is an approximating fixed-point sequence of $S$ whenever $\{v_n\}$ is in $K$ and $\lim_{n\to\infty}\|u_n - v_n\| = 0$.

Lemma 4. ([30]) Let $K$ be a nonempty closed convex subset of a uniformly convex Banach space $E$. If $S: K \to E$ is a nonexpansive mapping, then $I - S$ has the demiclosed property with respect to $0$.

Let $K$ be a nonempty closed convex subset of a Banach space $E$ with $Q_K$ as a sunny nonexpansive retraction. We will denote the set of common fixed points of $S$ and $T$ by $\Psi$, that is, $\Psi := \mathrm{Fix}(S) \cap \mathrm{Fix}(T)$. The following lemma is needed.

Lemma 5. Let $K$ be a nonempty closed convex subset of a Banach space $E$ with $Q_K$ as the sunny nonexpansive retraction. Let $S, T: K \to E$ be quasi-nonexpansive mappings with $\Psi \ne \emptyset$, and let $\{\eta_n\}, \{\vartheta_n\}$ and $\{\zeta_n\}$ be sequences in $(0,1)$ for all $n \in \mathbb{N}$. Define the sequence $\{u_n\}$ by using Algorithm 1. Then, for each $\bar{u} \in \Psi$, $\lim_{n\to\infty}\|u_n - \bar{u}\|$ exists and

$$\|w_n - \bar{u}\| \le \|u_n - \bar{u}\| \quad \text{and} \quad \|z_n - \bar{u}\| \le \|u_n - \bar{u}\|, \quad \forall n \in \mathbb{N}. \tag{3.1}$$
Algorithm 1: CR-Iteration Algorithm with Sunny Nonexpansive Retraction
initialization: $\eta_n, \vartheta_n, \zeta_n \in (0,1)$, $u_1 \in K$ and $n = 1$.
while stopping criterion not met do
    $w_n = Q_K((1 - \zeta_n) u_n + \zeta_n S u_n)$,
    $z_n = Q_K((1 - \vartheta_n) T u_n + \vartheta_n S w_n)$,
    $u_{n+1} = Q_K((1 - \eta_n) z_n + \eta_n T z_n)$.
end


Proof. Let $\bar{u} \in \Psi$. Then, for each $n \ge 1$, we have

$$\begin{aligned} \|w_n - \bar{u}\| &= \|Q_K((1-\zeta_n)u_n + \zeta_n S u_n) - \bar{u}\| \le \|(1-\zeta_n)(u_n - \bar{u}) + \zeta_n(S u_n - \bar{u})\| \\ &\le (1-\zeta_n)\|u_n - \bar{u}\| + \zeta_n\|S u_n - \bar{u}\| \le (1-\zeta_n)\|u_n - \bar{u}\| + \zeta_n\|u_n - \bar{u}\| = \|u_n - \bar{u}\|, \end{aligned} \tag{3.2}$$

$$\begin{aligned} \|z_n - \bar{u}\| &= \|Q_K((1-\vartheta_n)T u_n + \vartheta_n S w_n) - \bar{u}\| \le \|(1-\vartheta_n)(T u_n - \bar{u}) + \vartheta_n(S w_n - \bar{u})\| \\ &\le (1-\vartheta_n)\|T u_n - \bar{u}\| + \vartheta_n\|S w_n - \bar{u}\| \le (1-\vartheta_n)\|u_n - \bar{u}\| + \vartheta_n\|w_n - \bar{u}\| \\ &\le (1-\vartheta_n)\|u_n - \bar{u}\| + \vartheta_n\|u_n - \bar{u}\| = \|u_n - \bar{u}\| \end{aligned} \tag{3.3}$$

and

$$\begin{aligned} \|u_{n+1} - \bar{u}\| &= \|Q_K((1-\eta_n)z_n + \eta_n T z_n) - \bar{u}\| \le \|(1-\eta_n)(z_n - \bar{u}) + \eta_n(T z_n - \bar{u})\| \\ &\le (1-\eta_n)\|z_n - \bar{u}\| + \eta_n\|T z_n - \bar{u}\| \le (1-\eta_n)\|z_n - \bar{u}\| + \eta_n\|z_n - \bar{u}\| \\ &= \|z_n - \bar{u}\| \le \|u_n - \bar{u}\|. \end{aligned} \tag{3.4}$$

Therefore,

$$\|u_{n+1} - \bar{u}\| \le \|u_n - \bar{u}\| \le \cdots \le \|u_1 - \bar{u}\|, \quad \forall n \in \mathbb{N}. \tag{3.5}$$

Since $\{\|u_n - \bar{u}\|\}$ is monotonically decreasing (and bounded below by zero), the sequence $\{\|u_n - \bar{u}\|\}$ is convergent.

Theorem 1. Let $K$ be a nonempty closed convex subset of a uniformly convex Banach space $E$ with $Q_K$ as the sunny nonexpansive retraction. Let $S, T: K \to E$ be quasi-nonexpansive mappings with $\Psi \ne \emptyset$. Let $\{\eta_n\}$, $\{\vartheta_n\}$ and $\{\zeta_n\}$ be sequences of real numbers such that $0 < c_1 \le \eta_n \le \hat{c}_1 < 1$, $0 < c_2 \le \vartheta_n \le \hat{c}_2 < 1$ and $0 < c_3 \le \zeta_n \le \hat{c}_3 < 1$ for all $n \in \mathbb{N}$. From an arbitrary $u_1 \in K$, with $P_\Psi(u_1) = u^*$, define the sequence $\{u_n\}$ by Algorithm 1. Then, we have the following:

(i) $\{u_n\}$ is in a closed convex bounded set $B_\lambda[u^*] \cap K$, where $\lambda$ is a constant in $(0, \infty)$ such that $\|u_1 - u^*\| \le \lambda$.

(ii) If $S$ is uniformly continuous, then $\lim_{n\to\infty}\|u_n - S u_n\| = 0$ and $\lim_{n\to\infty}\|u_n - T u_n\| = 0$.

(iii) If $E$ fulfills Opial's condition and $I - S$ and $I - T$ are demiclosed at $0$, then $\{u_n\}$ converges weakly to an element of $\Psi \cap B_\lambda[u^*]$.

Proof. (ⅰ) Since $u^* \in \Psi$, from (3.5), we obtain

$$\|u_{n+1} - u^*\| \le \|u_n - u^*\| \le \cdots \le \|u_1 - u^*\| \le \lambda, \quad \forall n \in \mathbb{N}. \tag{3.6}$$

Therefore, $\{u_n\}$ is in the closed convex bounded set $B_\lambda[u^*] \cap K$.

(ⅱ) Suppose that $S$ is uniformly continuous. Using Lemma 5, we get that $\{u_n\}$, $\{z_n\}$ and $\{w_n\}$ are in $B_\lambda[u^*] \cap K$, and, hence, from (3.1), we obtain

$$\|T u_n - u^*\| \le \lambda, \quad \|T z_n - u^*\| \le \lambda, \quad \|S w_n - u^*\| \le \lambda \quad \text{and} \quad \|S u_n - u^*\| \le \lambda$$

for all $n \in \mathbb{N}$. Using Lemma 2 with $p = 2$ and $R = \lambda$, from the relations in Algorithm 1, we obtain

$$\begin{aligned} \|w_n - u^*\|^2 &\le (1-\zeta_n)\|u_n - u^*\|^2 + \zeta_n\|S u_n - u^*\|^2 - \zeta_n(1-\zeta_n)\varphi(\|u_n - S u_n\|) \\ &\le (1-\zeta_n)\|u_n - u^*\|^2 + \zeta_n\|u_n - u^*\|^2 - \zeta_n(1-\zeta_n)\varphi(\|u_n - S u_n\|) \\ &= \|u_n - u^*\|^2 - \zeta_n(1-\zeta_n)\varphi(\|u_n - S u_n\|). \end{aligned} \tag{3.7}$$

It follows from (3.7) that

$$\begin{aligned} \|z_n - u^*\|^2 &\le (1-\vartheta_n)\|u_n - u^*\|^2 + \vartheta_n\|w_n - u^*\|^2 - \vartheta_n(1-\vartheta_n)\varphi(\|T u_n - S w_n\|) \\ &\le (1-\vartheta_n)\|u_n - u^*\|^2 + \vartheta_n\big(\|u_n - u^*\|^2 - \zeta_n(1-\zeta_n)\varphi(\|u_n - S u_n\|)\big) - \vartheta_n(1-\vartheta_n)\varphi(\|T u_n - S w_n\|) \\ &= \|u_n - u^*\|^2 - \vartheta_n\zeta_n(1-\zeta_n)\varphi(\|u_n - S u_n\|) - \vartheta_n(1-\vartheta_n)\varphi(\|T u_n - S w_n\|). \end{aligned} \tag{3.8}$$

Using (3.8) and Lemma 2, we have

$$\begin{aligned} \|u_{n+1} - u^*\|^2 &\le \|(1-\eta_n)(z_n - u^*) + \eta_n(T z_n - u^*)\|^2 \\ &\le (1-\eta_n)\|z_n - u^*\|^2 + \eta_n\|T z_n - u^*\|^2 - \eta_n(1-\eta_n)\varphi(\|z_n - T z_n\|) \\ &\le (1-\eta_n)\|z_n - u^*\|^2 + \eta_n\|z_n - u^*\|^2 - \eta_n(1-\eta_n)\varphi(\|z_n - T z_n\|) \\ &= \|z_n - u^*\|^2 - \eta_n(1-\eta_n)\varphi(\|z_n - T z_n\|) \\ &\le \|u_n - u^*\|^2 - \vartheta_n\zeta_n(1-\zeta_n)\varphi(\|u_n - S u_n\|) - \vartheta_n(1-\vartheta_n)\varphi(\|T u_n - S w_n\|) - \eta_n(1-\eta_n)\varphi(\|z_n - T z_n\|). \end{aligned} \tag{3.9}$$

From (3.9), we obtain the following two important inequalities:

$$\vartheta_n(1-\vartheta_n)\varphi(\|T u_n - S w_n\|) \le \|u_n - u^*\|^2 - \|u_{n+1} - u^*\|^2 \tag{3.10}$$

and

$$\vartheta_n\zeta_n(1-\zeta_n)\varphi(\|u_n - S u_n\|) \le \|u_n - u^*\|^2 - \|u_{n+1} - u^*\|^2. \tag{3.11}$$

Note that $c_2(1 - \hat{c}_2) \le \vartheta_n(1 - \vartheta_n)$ and $c_2 c_3(1 - \hat{c}_3) \le \vartheta_n\zeta_n(1 - \zeta_n)$.

Using (3.10) and (3.11), we obtain

$$c_2(1-\hat{c}_2)\sum_{i=1}^{n}\varphi(\|T u_i - S w_i\|) \le \|u_1 - u^*\|^2 - \|u_{n+1} - u^*\|^2, \quad \forall n \in \mathbb{N} \tag{3.12}$$

and

$$c_2 c_3(1-\hat{c}_3)\sum_{i=1}^{n}\varphi(\|u_i - S u_i\|) \le \|u_1 - u^*\|^2 - \|u_{n+1} - u^*\|^2, \quad \forall n \in \mathbb{N}. \tag{3.13}$$

It follows from (3.12) and (3.13) that

$$\sum_{n=1}^{\infty}\varphi(\|T u_n - S w_n\|) < \infty \tag{3.14}$$

and

$$\sum_{n=1}^{\infty}\varphi(\|u_n - S u_n\|) < \infty. \tag{3.15}$$

Using (3.14) and (3.15), we have

$$\lim_{n\to\infty}\|T u_n - S w_n\| = 0 \tag{3.16}$$

and

$$\lim_{n\to\infty}\|u_n - S u_n\| = 0. \tag{3.17}$$

In addition, using (3.17) and the nonexpansiveness of $Q_K$, we have

$$\|w_n - u_n\| = \|Q_K((1-\zeta_n)u_n + \zeta_n S u_n) - Q_K(u_n)\| \le \zeta_n\|S u_n - u_n\| \to 0 \quad (\text{as } n \to \infty). \tag{3.18}$$

Since $S$ is uniformly continuous, it follows from Lemma 3 that

$$\lim_{n\to\infty}\|w_n - S w_n\| = 0. \tag{3.19}$$

Thus, from (3.16)–(3.19), we have

$$\begin{aligned} \|u_n - T u_n\| &\le \|u_n - S u_n\| + \|S u_n - T u_n\| \\ &\le \|u_n - S u_n\| + \|S u_n - S w_n\| + \|S w_n - T u_n\| \\ &\le \|u_n - S u_n\| + \|S u_n - u_n\| + \|u_n - S w_n\| + \|S w_n - T u_n\| \\ &\le \|u_n - S u_n\| + \|S u_n - u_n\| + \|u_n - w_n\| + \|w_n - S w_n\| + \|S w_n - T u_n\| \to 0 \quad (\text{as } n \to \infty). \end{aligned} \tag{3.20}$$

(ⅲ) By assumption, $E$ satisfies Opial's condition. Let $w \in \Psi$ be such that $w \in B_\lambda[u^*] \cap K$. From Lemma 5, we have that $\lim_{n\to\infty}\|u_n - w\|$ exists. Suppose that there are two subsequences $\{u_{n_q}\}$ and $\{u_{m_l}\}$ which converge weakly to two distinct points $u'$ and $v'$ in $B_\lambda[u^*] \cap K$, respectively. Then, since both $I - S$ and $I - T$ have the demiclosed property at $0$, we have that $Su' = Tu' = u'$ and $Sv' = Tv' = v'$. Moreover, using Opial's condition, we have

$$\lim_{n\to\infty}\|u_n - u'\| = \lim_{q\to\infty}\|u_{n_q} - u'\| < \lim_{l\to\infty}\|u_{m_l} - v'\| = \lim_{n\to\infty}\|u_n - v'\|.$$

Similarly, we obtain

$$\lim_{n\to\infty}\|u_n - v'\| < \lim_{n\to\infty}\|u_n - u'\|,$$

which is a contradiction. Therefore, $u' = v'$. Hence, the sequence $\{u_n\}$ converges weakly to an element of $\Psi \cap B_\lambda[u^*] \cap K$.

Since every nonexpansive mapping is uniformly continuous, by using the same ideas and techniques as in Theorem 1 together with Lemma 4, we can state the following result without proof.

Theorem 2. Let $K$ be a nonempty closed convex subset of a uniformly convex Banach space $E$ with $Q_K$ as the sunny nonexpansive retraction. Let $S, T: K \to E$ be nonexpansive mappings with $\Psi \ne \emptyset$. Let $\{\eta_n\}$, $\{\vartheta_n\}$ and $\{\zeta_n\}$ be sequences of real numbers such that $0 < c_1 \le \eta_n \le \hat{c}_1 < 1$, $0 < c_2 \le \vartheta_n \le \hat{c}_2 < 1$ and $0 < c_3 \le \zeta_n \le \hat{c}_3 < 1$ for all $n \in \mathbb{N}$. From an arbitrary $u_1 \in K$, with $P_\Psi(u_1) = u^*$, define the sequence $\{u_n\}$ by using Algorithm 1. Then, we have the following:

(i) $\{u_n\}$ is in a closed convex bounded set $B_\lambda[u^*] \cap K$, where $\lambda$ is a constant in $(0, \infty)$ such that $\|u_1 - u^*\| \le \lambda$.

(ii) $\lim_{n\to\infty}\|u_n - S u_n\| = 0$ and $\lim_{n\to\infty}\|u_n - T u_n\| = 0$.

(iii) If $E$ fulfills Opial's condition, then $\{u_n\}$ converges weakly to an element of $\Psi \cap B_\lambda[u^*]$.

Suppose now that $S$ and $T$ are nonexpansive self-mappings on a nonempty closed convex subset $K$ of a real Hilbert space $H$. Then, from Theorem 1, we can obtain the following result.

Corollary 1. Let $K$ be a nonempty closed convex subset of a real Hilbert space $H$, let $S, T: K \to K$ be nonexpansive mappings with $\Psi \ne \emptyset$ and let $\{\eta_n\}$, $\{\vartheta_n\}$ and $\{\zeta_n\}$ be sequences of real numbers for which $0 < c_1 \le \eta_n \le \hat{c}_1 < 1$, $0 < c_2 \le \vartheta_n \le \hat{c}_2 < 1$ and $0 < c_3 \le \zeta_n \le \hat{c}_3 < 1$ for all $n \in \mathbb{N}$. From an arbitrary $u_1 \in K$, define the sequence $\{u_n\}$ by

$$\begin{cases} w_n = (1-\zeta_n)u_n + \zeta_n S u_n, \\ z_n = (1-\vartheta_n)T u_n + \vartheta_n S w_n, \\ u_{n+1} = (1-\eta_n)z_n + \eta_n T z_n, \end{cases} \quad n \in \mathbb{N}. \tag{3.21}$$

Then, $\{u_n\}$ converges weakly to an element of $\Psi$.
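A small numerical illustration of Corollary 1 (our own construction, with assumed sets and parameters): $S$ and $T$ are metric projections onto a halfspace and a ball in $\mathbb{R}^2$; both are nonexpansive, and their common fixed points form the intersection of the two sets.

```python
import numpy as np

def S(u):  # projection onto the halfspace {x : x[0] + x[1] <= 1}
    a, b = np.array([1.0, 1.0]), 1.0
    return u - max(a @ u - b, 0.0) * a / (a @ a)

def T(u):  # projection onto the closed ball of radius 2 centered at the origin
    nu = np.linalg.norm(u)
    return u if nu <= 2.0 else 2.0 * u / nu

eta = theta = zeta = 0.9                # constant parameters in (0, 1)
u = np.array([5.0, 5.0])
for _ in range(200):                    # iteration (3.21)
    w = (1 - zeta) * u + zeta * S(u)
    z = (1 - theta) * T(u) + theta * S(w)
    u = (1 - eta) * z + eta * T(z)
print(u, np.linalg.norm(S(u) - u), np.linalg.norm(T(u) - u))  # Su = Tu = u approximately
```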

In this section, we shall utilize the methods mentioned above to study common zeros of accretive operators, convexly constrained least squares problems and convex minimization problems.

Setting $S = J^A_\mu$ and $T = J^B_\mu$ and using (3.21), we derive the convergence analysis for solving (1.1).

Theorem 3. Let $K$ be a nonempty closed convex subset of a real uniformly convex Banach space $E$ satisfying Opial's condition. Let $A: D(A) \subset K \to 2^E$ and $B: D(B) \subset K \to 2^E$ be accretive operators such that $\overline{D(A)} \subset K \subset \bigcap_{\mu>0} R(I + \mu A)$, $\overline{D(B)} \subset K \subset \bigcap_{\mu>0} R(I + \mu B)$ and $A^{-1}(0) \cap B^{-1}(0) \ne \emptyset$. Let $\{\eta_n\}$, $\{\vartheta_n\}$ and $\{\zeta_n\}$ be sequences of real numbers such that $0 < c_1 \le \eta_n \le \hat{c}_1 < 1$, $0 < c_2 \le \vartheta_n \le \hat{c}_2 < 1$ and $0 < c_3 \le \zeta_n \le \hat{c}_3 < 1$ for all $n \in \mathbb{N}$. Let $\mu > 0$, $u_1 \in K$ and $P_{A^{-1}(0) \cap B^{-1}(0)}(u_1) = u^*$. Let $\{u_n\}$ be defined by

$$\begin{cases} w_n = (1-\zeta_n)u_n + \zeta_n J^A_\mu u_n, \\ z_n = (1-\vartheta_n)J^B_\mu u_n + \vartheta_n J^A_\mu w_n, \\ u_{n+1} = (1-\eta_n)z_n + \eta_n J^B_\mu z_n, \end{cases} \quad n \in \mathbb{N}. \tag{4.1}$$

    Then, we have the following:

(i) $\{u_n\}$ is in a closed convex bounded set $B_\lambda[u^*] \cap K$, where $\lambda$ is a constant in $(0, \infty)$ such that $\|u_1 - u^*\| \le \lambda$.

(ii) $\lim_{n\to\infty}\|u_n - J^A_\mu u_n\| = 0$ and $\lim_{n\to\infty}\|u_n - J^B_\mu u_n\| = 0$.

(iii) $\{u_n\}$ converges weakly to an element of $A^{-1}(0) \cap B^{-1}(0) \cap B_\lambda[u^*]$.

Proof. By the assumption $\overline{D(A)} \subset K \subset \bigcap_{\mu>0} R(I + \mu A)$, we know that $J^A_\mu, J^B_\mu: K \to K$ are nonexpansive. Note that $D(A) \cup D(B) \subset K$; hence,

$$\begin{aligned} u \in A^{-1}(0) \cap B^{-1}(0) &\iff u \in D(A) \cap D(B) \ \text{with} \ 0 \in Au \ \text{and} \ 0 \in Bu \\ &\iff u \in K \ \text{with} \ J^A_\mu u = u \ \text{and} \ J^B_\mu u = u \\ &\iff u \in \mathrm{Fix}(J^A_\mu) \cap \mathrm{Fix}(J^B_\mu) \subset K. \end{aligned}$$

Set $S = J^A_\mu$ and $T = J^B_\mu$. Hence, Theorem 3 follows from Theorem 2.

Let $A, B \in \mathcal{B}(H)$ (the space of bounded linear operators on $H$) and $y, z \in H$. Define $\varphi, \psi: H \to \mathbb{R}$ by

$$\varphi(u) = \frac{1}{2}\|Au - y\|^2 \quad \text{and} \quad \psi(u) = \frac{1}{2}\|Bu - z\|^2$$

for all $u \in H$, where $H$ is a real Hilbert space. Let $K$ be a nonempty closed convex subset of $H$. The objective is to find $b \in K$ such that

$$b \in \operatorname*{argmin}_{u \in K}\varphi(u) \cap \operatorname*{argmin}_{u \in K}\psi(u), \tag{4.2}$$

where $\operatorname*{argmin}_{u \in K}\varphi(u) := \{v \in K : \varphi(v) = \inf_{u \in K}\varphi(u)\}$.

We give an application of Theorem 2 for finding common solutions to two convexly constrained least squares problems.

Proposition 1. ([20]) Let $H$ be a real Hilbert space, $A \in \mathcal{B}(H)$ with adjoint $A^*$ and $y \in H$. Let $K$ be a nonempty closed convex subset of $H$. Let $b \in H$ and $\delta \in (0, \infty)$. Then, the following statements are equivalent:

(i) $b$ solves the following problem:

$$\min_{u \in K} \frac{1}{2}\|Au - y\|^2.$$

(ii) $b = P_K(b - \delta A^*(Ab - y))$.

(iii) $\langle Av - Ab, y - Ab \rangle \le 0$ for all $v \in K$.

Theorem 4. Let $K$ be a nonempty closed convex subset of a real Hilbert space $H$, $y, z \in H$ and $A, B \in \mathcal{B}(H)$ such that the solution set of the problem in (4.2) is nonempty. Let $\{\eta_n\}$, $\{\vartheta_n\}$ and $\{\zeta_n\}$ be sequences of real numbers such that $0 < c_1 \le \eta_n \le \hat{c}_1 < 1$, $0 < c_2 \le \vartheta_n \le \hat{c}_2 < 1$ and $0 < c_3 \le \zeta_n \le \hat{c}_3 < 1$ for all $n \in \mathbb{N}$. Let $P_{\operatorname{argmin}_{u\in K}\varphi(u) \cap \operatorname{argmin}_{u\in K}\psi(u)}(u_1) = u^*$ and $\delta \in \left(0, 2\min\left\{\frac{1}{\|A\|^2}, \frac{1}{\|B\|^2}\right\}\right)$. From an arbitrary $u_1 \in K$, define the sequence $\{u_n\}$ by

$$\begin{cases} w_n = (1-\zeta_n)u_n + \zeta_n S u_n, \\ z_n = (1-\vartheta_n)T u_n + \vartheta_n S w_n, \\ u_{n+1} = (1-\eta_n)z_n + \eta_n T z_n, \end{cases} \quad n \in \mathbb{N}, \tag{4.3}$$

where $S, T: K \to K$ are defined by $Su = P_K(u - \delta A^*(Au - y))$ and $Tu = P_K(u - \delta B^*(Bu - z))$ for all $u \in K$. Then, we have the following:

(i) $\{u_n\}$ is in the closed ball $B_\lambda[u^*]$, where $\lambda$ is a constant in $(0, \infty)$ such that $\|u_1 - u^*\| \le \lambda$.

(ii) $\lim_{n\to\infty}\|u_n - S u_n\| = 0$ and $\lim_{n\to\infty}\|u_n - T u_n\| = 0$.

(iii) $\{u_n\}$ converges weakly to an element of $\operatorname*{argmin}_{u\in K}\varphi(u) \cap \operatorname*{argmin}_{u\in K}\psi(u) \cap B_\lambda[u^*]$.

Proof. Note that $\nabla\varphi(u) = A^*(Au - y)$ for all $u \in H$; we obtain that $\|\nabla\varphi(u) - \nabla\varphi(v)\| = \|A^*(Au - y) - A^*(Av - y)\| \le \|A\|^2\|u - v\|$ for all $u, v \in H$. Thus, $\nabla\varphi$ is $\frac{1}{\|A\|^2}$-inverse strongly monotone; hence, $I - \sigma\nabla\varphi$ is nonexpansive from $K$ into $H$ for $\sigma \in (0, 2/\|A\|^2)$. Therefore, $S = P_K(I - \sigma\nabla\varphi)$ and $T = P_K(I - \tau\nabla\psi)$ are nonexpansive mappings from $K$ into itself for $\sigma \in (0, 2/\|A\|^2)$ and $\tau \in (0, 2/\|B\|^2)$, respectively. Thus, Theorem 4 follows from Theorem 2.
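To make Theorem 4 concrete, here is a sketch of ours under assumed data: random matrices, a box constraint $K$ (so that $P_K$ is a componentwise clip) and a step size $\delta$ inside the admissible interval. Both least squares problems are built to share the minimizer `u_true`.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20
P_K = lambda u: np.clip(u, -1.0, 1.0)         # K is the box [-1, 1]^n

u_true = P_K(rng.normal(size=n))              # a common solution inside K
A, B = rng.normal(size=(n, n)), rng.normal(size=(n, n))
y, z_dat = A @ u_true, B @ u_true             # both problems share the minimizer u_true

delta = 0.9 * 2 * min(1 / np.linalg.norm(A, 2)**2, 1 / np.linalg.norm(B, 2)**2)
S = lambda u: P_K(u - delta * A.T @ (A @ u - y))
T = lambda u: P_K(u - delta * B.T @ (B @ u - z_dat))

eta = theta = zeta = 0.9
u = np.zeros(n)
for _ in range(500):                          # iteration (4.3)
    w = (1 - zeta) * u + zeta * S(u)
    zz = (1 - theta) * T(u) + theta * S(w)
    u = (1 - eta) * zz + eta * T(zz)
print(np.linalg.norm(u - u_true))             # small: u approaches the common minimizer
```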

Let $H$ be a Hilbert space and let $g_1, g_2: H \to (-\infty, \infty]$ be proper convex and lower semicontinuous functions. We consider the following problem of finding $x \in H$ such that

$$x \in (\partial g_1)^{-1}(0) \cap (\partial g_2)^{-1}(0). \tag{4.4}$$

Note that $J^{\partial g_1}_\mu = \mathrm{prox}_{\mu g_1}$. We give an application to common solutions of convex programming problems in a Hilbert space.

Theorem 5. Let $K$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $g_1, g_2 \in \Gamma_0(H)$ such that the solution set of the problem in (4.4) is nonempty. Let $\{\eta_n\}$, $\{\vartheta_n\}$ and $\{\zeta_n\}$ be sequences of real numbers such that $0 < c_1 \le \eta_n \le \hat{c}_1 < 1$, $0 < c_2 \le \vartheta_n \le \hat{c}_2 < 1$ and $0 < c_3 \le \zeta_n \le \hat{c}_3 < 1$ for all $n \in \mathbb{N}$. Let $\mu > 0$, $u_1 \in K$ and $P_{(\partial g_1)^{-1}(0) \cap (\partial g_2)^{-1}(0)}(u_1) = u^*$. From an arbitrary $u_1 \in K$, define the sequence $\{u_n\}$ by

$$\begin{cases} w_n = (1-\zeta_n)u_n + \zeta_n \mathrm{prox}_{\mu g_1}(u_n), \\ z_n = (1-\vartheta_n)\mathrm{prox}_{\mu g_2}(u_n) + \vartheta_n \mathrm{prox}_{\mu g_1}(w_n), \\ u_{n+1} = (1-\eta_n)z_n + \eta_n \mathrm{prox}_{\mu g_2}(z_n), \end{cases} \quad n \in \mathbb{N}. \tag{4.5}$$

    Then, we have the following:

(i) $\{u_n\}$ is in the closed ball $B_\lambda[u^*]$, where $\lambda$ is a constant in $(0, \infty)$ such that $\|u_1 - u^*\| \le \lambda$.

(ii) $\lim_{n\to\infty}\|u_n - \mathrm{prox}_{\mu g_1}(u_n)\| = 0$ and $\lim_{n\to\infty}\|u_n - \mathrm{prox}_{\mu g_2}(u_n)\| = 0$.

(iii) $\{u_n\}$ converges weakly to an element of $(\partial g_1)^{-1}(0) \cap (\partial g_2)^{-1}(0) \cap B_\lambda[u^*]$.

Proof. Using Lemma 1, we have that $\partial g_1$ is maximal monotone. By the maximal monotonicity of $\partial g_1$, we see that $R(I + \mu \partial g_1) = H$. It follows that $J^{\partial g_1}_\mu = \mathrm{prox}_{\mu g_1}: H \to H$ is nonexpansive. Similarly, $J^{\partial g_2}_\mu = \mathrm{prox}_{\mu g_2}: H \to H$ is nonexpansive. Thus, Theorem 5 follows from Theorem 2.
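As a toy illustration of Theorem 5 (ours, with assumed functions), take two separable quadratics whose proximal mappings are available in closed form; the common zero of the two subdifferentials is then known exactly, so the convergence of (4.5) can be observed directly.

```python
import numpy as np

mu = 1.0

# g1(u) = 0.5*(u[0] - 1)^2 and g2(u) = 0.5*(u[1] - 2)^2: the zero sets of their
# subdifferentials are {u : u[0] = 1} and {u : u[1] = 2}, intersecting at (1, 2).
def prox_g1(u):
    v = u.copy()
    v[0] = (u[0] + mu * 1.0) / (1.0 + mu)     # closed-form prox of a 1-D quadratic
    return v

def prox_g2(u):
    v = u.copy()
    v[1] = (u[1] + mu * 2.0) / (1.0 + mu)
    return v

eta = theta = zeta = 0.9
u = np.array([10.0, -7.0])
for _ in range(300):                          # iteration (4.5)
    w = (1 - zeta) * u + zeta * prox_g1(u)
    z = (1 - theta) * prox_g2(u) + theta * prox_g1(w)
    u = (1 - eta) * z + eta * prox_g2(z)
print(u)                                      # approaches the common zero (1, 2)
```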

In this section, some real-world problems, such as differential problems, image restoration problems and signal recovery problems, are used to illustrate the effectiveness of our algorithm.

Let us consider the following simple and well-known one-dimensional heat equation with Dirichlet boundary conditions and initial data:

$$\begin{aligned} &u_t = \beta u_{xx} + f(x,t), && 0 < x < l, \ t > 0, \\ &u(x,0) = u_0(x), && 0 < x < l, \\ &u(0,t) = \gamma_1(t), \quad u(l,t) = \gamma_2(t), && t > 0, \end{aligned} \tag{5.1}$$

where $\beta$ is a constant, $u(x,t)$ represents the temperature at a point $(x,t)$, and $f(x,t)$, $\gamma_1(t)$ and $\gamma_2(t)$ are sufficiently smooth functions. Below, we use the notations $u^n_i$ and $(u_{xx})^n_i$ to represent the numerical approximations of $u(x_i, t_n)$ and $u_{xx}(x_i, t_n)$, with $t_n = n\Delta t$, where $\Delta t$ denotes the temporal mesh size. A set of schemes to solve the problem of (5.1) is based on the following well-known Crank-Nicolson type of scheme [38,39]:

$$\frac{u^{n+1}_i - u^n_i}{\Delta t} = \frac{\beta}{2}\left[(u_{xx})^{n+1}_i + (u_{xx})^n_i\right] + f^{n+1/2}_i, \quad i = 2, \ldots, N-1 \tag{5.2}$$

with the initial data

$$u^0_i = u_0(x_i), \quad i = 2, \ldots, N-1 \tag{5.3}$$

and Dirichlet boundary conditions

$$u^{n+1}_1 = \gamma_1(t_{n+1}), \quad u^{n+1}_N = \gamma_2(t_{n+1}). \tag{5.4}$$

For the approximation of $(u_{xx})^k_i$, $k = n, n+1$, we use the standard centered discretization in space. The matrix form of the second-order finite difference scheme (FDS) for solving the heat problem of (5.1) can be written as

$$A u^{n+1} = G^n, \tag{5.5}$$

where $G^n = B u^n + f^{n+1/2}$,

$$A = \begin{bmatrix} 1+\eta & -\frac{\eta}{2} & & \\ -\frac{\eta}{2} & 1+\eta & -\frac{\eta}{2} & \\ & \ddots & \ddots & \ddots \\ & & -\frac{\eta}{2} & 1+\eta \end{bmatrix}, \quad B = \begin{bmatrix} 1-\eta & \frac{\eta}{2} & & \\ \frac{\eta}{2} & 1-\eta & \frac{\eta}{2} & \\ & \ddots & \ddots & \ddots \\ & & \frac{\eta}{2} & 1-\eta \end{bmatrix},$$

$$u^n = \begin{bmatrix} u^n_2 \\ u^n_3 \\ \vdots \\ u^n_{N-2} \\ u^n_{N-1} \end{bmatrix}, \quad f^{n+1/2} = \begin{bmatrix} \frac{\eta}{2}\gamma^{n+1/2}_1 + \Delta t f^{n+1/2}_2 \\ \Delta t f^{n+1/2}_3 \\ \vdots \\ \Delta t f^{n+1/2}_{N-2} \\ \frac{\eta}{2}\gamma^{n+1/2}_2 + \Delta t f^{n+1/2}_{N-1} \end{bmatrix},$$

$\eta = \beta\Delta t/(\Delta x)^2$, $\gamma^{n+1/2}_i = \gamma_i(t_{n+1/2})$, $i = 1, 2$, and $f^{n+1/2}_i = f(x_i, t_{n+1/2})$, $i = 2, \ldots, N-1$.
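For reference, a brief sketch (ours) that assembles the Crank-Nicolson matrices of (5.5) and advances the solution by a direct solve; the grid, $\beta$ and time step mirror the experimental settings reported below, while the forcing and boundary contributions in $G^n$ are omitted for brevity.

```python
import numpy as np

def cn_matrices(N, eta):
    """Tridiagonal matrices A, B of the Crank-Nicolson system (5.5) for the interior nodes."""
    m = N - 2                                   # number of interior unknowns
    off = np.diag(np.ones(m - 1), 1) + np.diag(np.ones(m - 1), -1)
    return (1 + eta) * np.eye(m) - (eta / 2) * off, (1 - eta) * np.eye(m) + (eta / 2) * off

# Illustrative setup matching the experiments: beta = 25, 101 nodes, dt = dx^2/10.
beta, N, l = 25.0, 101, 1.0
dx = l / (N - 1); dt = dx**2 / 10; eta = beta * dt / dx**2
A, B = cn_matrices(N, eta)
u = np.cos(4 * np.pi * np.linspace(dx, l - dx, N - 2)) / 10   # initial data of (5.9)
for _ in range(10):                # zero forcing/boundary terms assumed for brevity
    u = np.linalg.solve(A, B @ u)  # direct solve of A u^{n+1} = G^n
print(u[:3])
```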

From Eq (5.5), the matrix $A$ is square and symmetric positive definite. Traditionally, iterative methods have been used to find the solution of the linear system (5.5). The well-known weighted Jacobi (WJ) and successive over-relaxation (SOR) methods [39,40] have been chosen for exemplification here (see Table 1).

Table 1.  Iterative formulas of the WJ and SOR methods for solving the linear system of (5.5).

| Linear system | Iterative method | Specific name |
|---|---|---|
| $Au^{n+1} = G^n$ | $D u^{(n+1,s+1)} = (D - \omega A)u^{(n+1,s)} + \omega G^n$ | WJ |
| | $(D - \omega L)u^{(n+1,s+1)} = ((D - \omega L) - \omega A)u^{(n+1,s)} + \omega G^n$ | SOR |


Here, $\omega$ is a weight parameter, $D$ is the diagonal part of the matrix $A$ and $L$ is the lower triangular part of the matrix $D - A$.

    The discussion on the stability of WJ and SOR in solving the linear system (5.5) can be found in [39,40].

Let us consider the linear system

$$Au = G, \tag{5.6}$$

where $A: \mathbb{R}^l \to \mathbb{R}^l$ is a linear and positive operator and $u, G \in \mathbb{R}^l$. To find the solution of the linear system (5.6), we recast it as a fixed-point equation $u = T(u)$. For example, the well-known weighted Jacobi (WJ), successive over-relaxation (SOR) and Gauss-Seidel (GS; SOR with $\omega = 1$) methods [38,39,40] recast the linear system (5.6) in fixed-point form as $T_{WJ}(u) = u$, $T_{SOR}(u) = u$ and $T_{GS}(u) = u$, respectively (see Table 2).

Table 2.  Different ways of rearranging the linear system (5.6) into the form $u = T(u)$.

| Linear system | Fixed-point mapping $T(u)$ |
|---|---|
| $Au = G$ | $T_{WJ}(u) = (I - \omega D^{-1}A)u + \omega D^{-1}G$ |
| | $T_{SOR}(u) = (I - \omega(D - \omega L)^{-1}A)u + \omega(D - \omega L)^{-1}G$ |


Let $Tu = Su + c$, where $u, c \in K$, and let $S$ be a linear self-mapping on a nonempty closed convex subset $K$ of a uniformly convex Banach space $E$ with $\|S\| < 1$; then $T$ is a nonexpansive mapping on $K$. Indeed, for all $u, v \in K$, we have

$$\|Tu - Tv\| = \|Su - Sv\| \le \|S\|\|u - v\| < \|u - v\|.$$

Therefore, to make the operators $T_{WJ}$ and $T_{SOR}$, written in the form $T_{WJ}u = S_{WJ}u + c_{WJ}$, where

$$S_{WJ} = I - \omega D^{-1}A, \quad c_{WJ} = \omega D^{-1}G,$$

and $T_{SOR}u = S_{SOR}u + c_{SOR}$, where

$$S_{SOR} = I - \omega(D - \omega L)^{-1}A, \quad c_{SOR} = \omega(D - \omega L)^{-1}G,$$

nonexpansive mappings, their weight parameter must be chosen properly. The implemented weight parameter $\omega$ for the operators $S_{WJ}$ and $S_{SOR}$ is chosen so that the corresponding operator norm is less than one. Moreover, the optimal weight parameter $\omega_o$, yielding the smallest norm for each type of operator $S$, is indicated in Table 3.

Table 3.  Implemented weight parameter and optimal weight parameter of the operator $S$.

| Operator $S$ | Implemented weight parameter $\omega$ | Optimal weight parameter $\omega_o$ |
|---|---|---|
| $S_{WJ}$ | $0 < \omega < \frac{2}{\lambda_{\max}(D^{-1}A)}$ | $\omega_o = \frac{2}{\lambda_{\min}(D^{-1}A) + \lambda_{\max}(D^{-1}A)}$ |
| $S_{SOR}$ | $0 < \omega < 2$ | $\omega_o = \frac{2}{1 + \sqrt{1 - \rho^2}}$ |


The parameters $\lambda_{\max}(D^{-1}A)$ and $\lambda_{\min}(D^{-1}A)$ are the maximum and minimum eigenvalues of the matrix $D^{-1}A$, respectively, and $\rho$ is the spectral radius of the iteration matrix of the Jacobi method.
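Assuming $D^{-1}A$ has real eigenvalues (true for the symmetric, constant-diagonal heat matrix above), the following sketch of ours builds the fixed-point maps of Table 2 with the optimal weights of Table 3.

```python
import numpy as np

def wj_sor_operators(A, G):
    """Fixed-point maps T_WJ and T_SOR of Table 2, using the optimal weights of Table 3."""
    D = np.diag(np.diag(A))
    L = np.tril(D - A, k=-1)                        # lower triangular part of D - A
    Dinv = np.linalg.inv(D)
    eig = np.sort(np.real(np.linalg.eigvals(Dinv @ A)))
    omega_wj = 2.0 / (eig[0] + eig[-1])             # omega_o for S_WJ (Table 3)
    rho = np.max(np.abs(1.0 - eig))                 # spectral radius of I - D^{-1}A (Jacobi)
    omega_sor = 2.0 / (1.0 + np.sqrt(1.0 - rho**2)) # omega_o for S_SOR (Table 3)
    Minv = np.linalg.inv(D - omega_sor * L)
    T_wj = lambda u: u - omega_wj * Dinv @ (A @ u - G)
    T_sor = lambda u: u - omega_sor * Minv @ (A @ u - G)
    return T_wj, T_sor

# Example: the 99x99 Crank-Nicolson matrix from (5.5) with eta = 2.5 (the beta = 25 setup).
eta, m = 2.5, 99
off = np.diag(np.ones(m - 1), 1) + np.diag(np.ones(m - 1), -1)
T_wj, T_sor = wj_sor_operators((1 + eta) * np.eye(m) - (eta / 2) * off, np.ones(m))
```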

Next, we introduce the proposed method for solving the linear system (5.5) by using the nonexpansive mappings $T_i$ and $T_j$. The sequence $\{u_n\}$ is generated iteratively by $u^0 \in \mathbb{R}^n$ and

$$\begin{cases} w^{(n,s+1)} = (1-\zeta_n)u^{(n,s)} + \zeta_n T_i u^{(n,s)}, \\ z^{(n,s+1)} = (1-\vartheta_n)T_j u^{(n,s)} + \vartheta_n T_i w^{(n,s+1)}, \\ u^{(n+1,s+1)} = (1-\eta_n)z^{(n,s+1)} + \eta_n T_j z^{(n,s+1)}, \end{cases} \quad n \ge 0, \tag{5.7}$$

where the second superscript $s$ denotes the iteration number, $s = 0, 1, \ldots, \hat{S}_n$; we set

$$\zeta_n = \vartheta_n = \eta_n = 0.9 \tag{5.8}$$

as the default parameter. Since we focus on the convergence of the proposed algorithm, the stability analysis in terms of choosing the time step sizes is not discussed in detail. The time step for the proposed algorithm is based on the smallest step size chosen for the WJ and SOR methods when solving the linear system (5.5) generated from the discretization of the considered problem (5.1). In all computations, we used $\beta = 25$, $\Delta t = \Delta x^2/10$ (time step size) and $\epsilon_d = 10^{-10}$. For testing purposes only, both computations are performed for $0 \le t \le 0.01$ (when $t \ge 0.05$, $u(x,t) \approx 0$). The following stopping criterion is used:

$$\|u^{(n+1,\hat{S}_n+1)} - u^{(n+1,\hat{S}_n)}\|_2 < \epsilon_d,$$

where $\hat{S}_n$ denotes the number of the last iteration at time $t_n$, and the final iterate of the previous level is used as the initial guess, $u^{(n+1,0)} = u^{(n,\hat{S}_n+1)}$. All computations are performed by using a uniform grid of 101 nodes, which corresponds to the solution of the linear system (5.5) of size $99 \times 99$. The weight parameter $\omega$ of the proposed algorithm is set to its optimal weight parameter ($\omega_o$) defined in Table 3. We apply the WJ, SOR, GS and proposed algorithms with two different operators $T_i$ and $T_j$ (the proposed algorithm with $T_i$-$T_j$) to obtain the solution of the linear system (5.5) for the heat problem with Dirichlet boundary conditions and initial data described by (5.1). For the proposed algorithm, the nonexpansive mappings $T_i$ and $T_j$ are chosen from the following operators: $T_{GS}$, $T_{WJ}$ and $T_{SOR}$.
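A minimal sketch (ours) of the inner iteration (5.7) at a single time level: the fixed-point maps $T_i$ and $T_j$ (e.g., `T_wj` and `T_sor` from the sketch above, rebuilt with the current right-hand side $G^n$) are supplied by the caller, and the tolerance and parameters follow (5.8) and the stopping criterion above.

```python
import numpy as np

def proposed_inner_loop(T_i, T_j, u0, eps_d=1e-10, max_s=100_000,
                        zeta=0.9, theta=0.9, eta=0.9):
    """One time level of scheme (5.7): iterate until the stopping criterion is met."""
    u = u0.copy()
    for s in range(max_s):
        w = (1 - zeta) * u + zeta * T_i(u)
        z = (1 - theta) * T_j(u) + theta * T_i(w)
        u_next = (1 - eta) * z + eta * T_j(z)
        if np.linalg.norm(u_next - u) < eps_d:
            return u_next, s + 1   # solution at the next time level, iterations used
        u = u_next
    return u, max_s
```

In a full time loop, $G^n = Bu^n + f^{n+1/2}$ is recomputed at every step and the final iterate of the previous level serves as the initial guess $u^{(n+1,0)}$.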

Let us consider the following heat problem:

$$\begin{aligned} &u_t = \beta u_{xx} + 0.4\beta(4\pi^2 - 1)e^{-4\beta t}\cos(4\pi x), \quad 0 \le x \le 1, \ 0 < t < t_s, \\ &u(x,0) = \cos(4\pi x)/10, \\ &u(0,t) = e^{-4\beta t}/10, \quad u(1,t) = e^{-4\beta t}/10, \end{aligned} \tag{5.9}$$

whose exact solution is $u(x,t) = e^{-4\beta t}\cos(4\pi x)/10$.

The results of the basic iterative methods (WJ, GS, SOR) and the proposed algorithms are demonstrated and discussed for the following cases:

Case Ⅰ: WJ method, Case Ⅱ: GS method, Case Ⅲ: SOR method,

Case Ⅳ: The proposed algorithm with $T_{WJ}$-$T_{GS}$,

Case Ⅴ: The proposed algorithm with $T_{WJ}$-$T_{SOR}$,

Case Ⅵ: The proposed algorithm with $T_{GS}$-$T_{WJ}$,

Case Ⅶ: The proposed algorithm with $T_{GS}$-$T_{SOR}$,

Case Ⅷ: The proposed algorithm with $T_{SOR}$-$T_{WJ}$,

Case Ⅸ: The proposed algorithm with $T_{SOR}$-$T_{GS}$.

The exact error is measured by using $\|u^n - u\|_2$. Figure 1 shows the approximate solution of problem (5.9) with 101 nodes at $t = 0.01$, as obtained by using the basic iterative methods and the proposed algorithm for Cases Ⅰ–Ⅸ.

    Figure 1.  Approximate solutions of problem (5.9) for all cases of the basic iterative methods and the proposed algorithms with 101 nodes.

It can be seen in Figure 1 that all numerical solutions match the analytical solution reasonably well. Thus, it can be concluded that all sequences generated by the proposed method with two different operators $T_i$ and $T_j$ converge to their common fixed-point solution.

Figure 2 shows the trend of the iteration number for the basic iterative methods and the proposed algorithm for Cases Ⅳ–Ⅸ when solving problem (5.5), as generated from the discretization of the considered problem (5.9) with 101 nodes.

Figure 2.  Evolution of the iteration number for GS, WJ, SOR and the proposed algorithm as applied to problem (5.1) with $\beta = 25$ and $t \in (0, 1]$.

It can be seen in Figure 2 that the proposed method with two different operators $T_i$ and $T_j$ uses fewer iterations at each time step compared with the basic iterative methods, and the proposed methods with $T_{GS}$-$T_{SOR}$ and $T_{SOR}$-$T_{GS}$ give the smallest number of iterations at each time step.

Next, the proposed algorithm with $T_{SOR}$-$T_{GS}$ is chosen to test and verify the order of accuracy of the presented FDS in solving the heat equation problem (5.9). All computations have been performed by using uniform grids of 11, 21, 41, 81 and 161 nodes, which correspond to the solution of the discretization of the heat equation problem (5.9) with $\Delta x = 0.1, 0.05, 0.025, 0.0125, 0.00625$, respectively. We found that the proposed algorithm with $T_{SOR}$-$T_{GS}$ exhibits second-order accuracy (Figure 3) when the distances between the graphs for all computational grid sizes are compared. That is, the order of accuracy of the proposed algorithm with $T_{SOR}$-$T_{GS}$ agrees with the construction of the FDS. Figure 4 shows the trend of the average iteration number for the basic iterative methods as compared with all cases of the proposed algorithm when applied for solving the discretization of the considered problem (5.9) with various grid sizes.

Figure 3.  Evolution of the iteration number for the basic iterative methods and the proposed algorithm for problem (5.1) with 101 nodes and $t \in (0, 1]$.
Figure 4.  Evolution of the average iteration number at each time step for GS, WJ, SOR and the proposed algorithm for problem (5.1) with $\beta = 25$ and $t \in (0, 1]$.

It can be seen in this figure that the average number of iterations at each time step for the proposed method in Cases Ⅳ–Ⅸ is less than that of the WJ, GS and SOR methods. Moreover, the proposed algorithm with $T_{SOR}$-$T_{GS}$ gives the smallest average number of iterations for all of the considered grid sizes. Although the small number of iterations per time step shows the excellent performance of the proposed method, the stability condition of the proposed algorithm needs to be considered carefully when choosing the time step.

Let $B$ be the degraded image of the true image $U$ in matrix form with $\tilde{m}$ rows and $\tilde{n}$ columns ($B, U \in \mathbb{R}^{\tilde{m} \times \tilde{n}}$). The key to obtaining the image restoration model is to rearrange the elements of the images $B$ and $U$ into column vectors by stacking the columns of these images into two long vectors $b$ and $u$, where $b = \mathrm{vec}(B)$ and $u = \mathrm{vec}(U)$, both of length $n = \tilde{m}\tilde{n}$. The image restoration problem can be modeled in a one-dimensional vector form by using the following linear equation system:

$$b = Mu, \tag{5.10}$$

where $u \in \mathbb{R}^n$ is the original image, $b \in \mathbb{R}^n$ is the observed image and $M \in \mathbb{R}^{n \times n}$ is the blurring operator, with $n = \tilde{m}\tilde{n}$. In order to solve problem (5.10), we aim to approximate the original image $u$ from the observed image $b$, which leads to the following least squares (LS) problem:

$$\min_u \frac{1}{2}\|b - Mu\|_2^2, \tag{5.11}$$

where $\|\cdot\|_2$ is defined by $\|u\|_2 = \sqrt{\sum_{i=1}^{n}|u_i|^2}$.

Next, we will apply our method to solve the LS problem (5.11) and the image restoration problem (5.10) with the following settings. Let $M \in \mathbb{R}^{n \times n}$ be a degraded matrix and $b \in \mathbb{R}^n$. We obtain the following proposed method to find the solution of the image restoration problem:

$$\begin{cases} w_n = (1-\zeta_n)u_n + \zeta_n T u_n, \\ z_n = (1-\vartheta_n)T u_n + \vartheta_n T w_n, \\ u_{n+1} = (1-\eta_n)z_n + \eta_n T z_n, \end{cases} \tag{5.12}$$

where $Tu = u - \mu M^T(Mu - b)$, $\mu \in (0, 2/\|M^T M\|_2)$ and $\eta_n, \vartheta_n, \zeta_n \in (0,1)$ for all $n \in \mathbb{N}$.

The goal of the image restoration problem is to recover the original image from the observed image; however, the blurring matrix $M$ must be known when applying the algorithm (5.12). Now, we present a new idea for solving the image restoration problem when the observed images $b_1, b_2, \ldots, b_N$, of different qualities, can be restored by using the blurring matrices $M_1, M_2, \ldots, M_N$ ($b_i = M_i u$, $i = 1, 2, \ldots, N$). Let us consider the following problems:

$$\min_{u\in\mathbb{R}^n} \frac{1}{2}\|M_1 u - b_1\|_2^2, \quad \min_{u\in\mathbb{R}^n} \frac{1}{2}\|M_2 u - b_2\|_2^2, \quad \ldots, \quad \min_{u\in\mathbb{R}^n} \frac{1}{2}\|M_N u - b_N\|_2^2, \tag{5.13}$$

where $u$ is the original true image (the common solution), $M_i$ is a blurring matrix and $b_i$ is the image blurred by the matrix $M_i$. However, many degraded matrices must then be known. For simplicity, we give an example of applying the algorithm (5.12) to find the original image $u$ with a pair of blurring matrices $M_i$ and $M_j$:

$$\begin{cases} w_n = (1-\zeta_n)u_n + \zeta_n T_i u_n, \\ z_n = (1-\vartheta_n)T_j u_n + \vartheta_n T_i w_n, \\ u_{n+1} = (1-\eta_n)z_n + \eta_n T_j z_n, \end{cases} \tag{5.14}$$

where $T_k u = u - \mu_k M_k^T(M_k u - b_k)$. The implemented algorithm (5.14) is proposed for solving the image restoration problem by using a pair of blurring matrices $M_i$ and $M_j$, with the default parameter (5.8) and $\mu_k = 1/\|M_k^T M_k\|_2$; it is called the proposed algorithm with $T_i$-$T_j$. In the case of $T_i = T_j$, we call it the proposed algorithm with $T_i$.
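To make (5.14) concrete, the following self-contained sketch (ours; it assumes SciPy and replaces the paper's RGB test image with a small synthetic gray-scale image) runs the pair $T_1$-$T_2$ built from two symmetric blur kernels. For symmetric kernels, $M^T$ can be approximated by $M$ itself, and $\mu_k = 1$ is admissible because these averaging kernels satisfy $\|M_k^T M_k\|_2 \le 1$.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

# Blur operators with symmetric kernels, so M^T is approximated by M itself.
M1 = lambda x: gaussian_filter(x, sigma=4.0)   # cf. blur Type I below
M2 = lambda x: uniform_filter(x, size=13)      # a crude stand-in for out-of-focus blur

u_true = np.zeros((64, 64)); u_true[20:44, 20:44] = 1.0   # synthetic "image"
b1, b2 = M1(u_true), M2(u_true)

mu = 1.0   # admissible since these averaging kernels have spectral norm <= 1
T1 = lambda u: u - mu * M1(M1(u) - b1)
T2 = lambda u: u - mu * M2(M2(u) - b2)

eta = theta = zeta = 0.9   # default parameters (5.8)
u = np.zeros_like(u_true)
for _ in range(500):       # iteration (5.14) with the pair T1-T2
    w = (1 - zeta) * u + zeta * T1(u)
    z = (1 - theta) * T2(u) + theta * T1(w)
    u = (1 - eta) * z + eta * T2(z)
print(np.linalg.norm(u - u_true) / np.linalg.norm(u_true))  # relative error decreases
```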

The original RGB image shown in Figure 5 is used to demonstrate the practicability of the proposed algorithm. The Cauchy error and the relative error are measured by using the max-norm $\|u_n - u_{n-1}\|_\infty$ and $\|u_n - u\|/\|u\|$, respectively. The performance of the compared algorithms at $u_n$ for the image deblurring process can be measured quantitatively by using the peak signal-to-noise ratio (PSNR), which is defined by

$$\mathrm{PSNR}(u_n) = 20\log_{10}\left(\frac{255^2}{\mathrm{MSE}}\right),$$

where $\mathrm{MSE} = \|u_n - u\|_2^2$.

Figure 5.  Original RGB image with matrix size $380 \times 440 \times 3$.

Three different types of the original RGB image degraded by the blurring matrices $M_1, M_2$ and $M_3$ are shown in Figure 6. These have been used to test the implemented algorithm.

Figure 6.  Original RGB image degraded by the blurring matrices $M_1$–$M_3$, respectively.

    Next, we present the restoration of images that have been corrupted by the following blur types:

Type Ⅰ. Gaussian blur of filter size $9 \times 9$ with standard deviation $\sigma = 4$. (The original image has been degraded by the blurring matrix $M_1$.)

Type Ⅱ. Out-of-focus blur (disk) with radius $r = 6$. (The original image has been degraded by the blurring matrix $M_2$.)

Type Ⅲ. Motion blur with a motion length of 21 pixels and motion orientation of $11°$ ($\theta = 11°$). (The original image has been degraded by the blurring matrix $M_3$.)

The image $U$ and the three different types of blurred image $B$ (Figure 6) are represented in red-green-blue components. We denote by $U_r, U_g, U_b$ and $B_r, B_g, B_b$ the gray-scale images that constitute the red, green and blue channels of the image $U$ and the blurred image $B$, respectively. Thus, we define the column vectors $u$ and $b$ from the color images $U$ and $B$, both of length $n = 3\tilde{m}\tilde{n}$. After that, we applied the proposed algorithms to obtain the common solution of the image restoration problem with these three blurring matrices. Both the theoretical and experimental convergence properties of the proposed algorithm with permutations of the blurring matrices $M_1, M_2$ and $M_3$ are demonstrated and discussed for the following cases:

Case Ⅰ: Algorithm (5.12) with $T_1$, Case Ⅱ: Algorithm (5.12) with $T_2$,

Case Ⅲ: Algorithm (5.12) with $T_3$, Case Ⅳ: Algorithm (5.14) with $T_1$-$T_2$,

Case Ⅴ: Algorithm (5.14) with $T_1$-$T_3$, Case Ⅵ: Algorithm (5.14) with $T_2$-$T_1$,

Case Ⅶ: Algorithm (5.14) with $T_2$-$T_3$, Case Ⅷ: Algorithm (5.14) with $T_3$-$T_1$,

Case Ⅸ: Algorithm (5.14) with $T_3$-$T_2$.

Figure 7 shows the behavior of the Cauchy error and relative error of the reconstructed RGB image in all cases.

    Figure 7.  Cauchy norm and relative norm plots for the proposed algorithm for Cases Ⅰ–Ⅸ.

The Cauchy error plots and the relative error plots guarantee that the presented method converges to the common solution of the restoration problem (5.13) in all cases. Thus, the Cauchy and relative error plots validate and confirm the convergence of the proposed method. Figure 8 shows the PSNR plots for the proposed method in all cases over 100,000 iterations. Based on the PSNR plots in Figure 8, all images restored by using the proposed algorithm to solve the deblurring problem improve in quality as the iteration number increases. Moreover, the PSNR quality of the observed image is improved when the proposed method with a pair $T_i$-$T_j$, $i \ne j$, is used to solve the deblurring problem, compared with the proposed method in which only one blurring matrix is used. The best case for recovering the observed image occurs when the proposed method with $T_2$-$T_3$ or $T_3$-$T_2$ is used.

    Figure 8.  Comparison plots of PSNR quality for the proposed algorithm for Cases Ⅰ–Ⅸ.

Figure 9 shows crops of the reconstructed RGB images at the 500th iteration obtained by using the proposed algorithm with the operators $T_1$, $T_2$, $T_3$ and $T_3$-$T_2$, respectively. It can be seen in these figures that the image restored by using the proposed algorithm with $T_3$-$T_2$ has the smoothest quality among the considered cases.

Figure 9.  Reconstructed images resulting from using the proposed algorithm with the operators $T_1$, $T_2$, $T_3$ and $T_3$-$T_2$, respectively, at the 500th iteration.

Next, we compare the effectiveness of the proposed methods with the Mann iteration [10], Ishikawa iteration [11], S-iteration [12], Noor iteration [14], AK-iteration [15], the iteration of Sahu et al. [16] and the SP-iteration [18] through the PSNR plots in Figure 10. The parameters $\zeta_n, \vartheta_n$ and $\eta_n$ of the comparative algorithms are set to the default parameter given in (5.8). Since all comparative methods use only one blurring matrix in their algorithms, the proposed method with Cases Ⅰ–Ⅲ is used to compare the effectiveness of our methods.

    Figure 10.  PSNR plots for all methods compared with the proposed method, presented in 100,000 iterations.

It can be seen in Figure 10 that the PSNR quality from the proposed method is better than that of the methods of Mann, Ishikawa, S, Noor and Sahu et al., while the proposed method is as effective as the remaining methods. However, the proposed method is designed to be used with a pair $T_i$-$T_j$, $i \ne j$. With this advantage, we found that the proposed method in Cases Ⅳ–Ⅸ is more effective when the number of iterations is large enough (see Figure 8).

In signal processing, compressed sensing can be modeled as the following underdetermined linear equation system:

$$y = Au + \nu,$$

where $u \in \mathbb{R}^n$ is an original signal with $n$ components to be recovered, $\nu, y \in \mathbb{R}^m$ are the noise and the observed signal with noise ($m$ components), respectively, and $A \in \mathbb{R}^{m \times n}$ is a degraded matrix. Finding the solutions of this underdetermined linear system can be cast as solving the LASSO problem

$$\min_{u\in\mathbb{R}^n} \frac{1}{2}\|y - Au\|_2^2 + \lambda\|u\|_1, \tag{5.15}$$

where $\lambda > 0$. As a result, various techniques and iterative schemes have been developed to solve the LASSO problem. We can apply our method to solve the LASSO problem (5.15) by setting $Tu = \mathrm{prox}_{\mu g}(u - \mu\nabla f(u))$, where $f(u) = \frac{1}{2}\|y - Au\|_2^2$, $g(u) = \lambda\|u\|_1$ and $\nabla f(u) = A^T(Au - y)$.

Next, we give an example of applying our algorithm to signal recovery problems. Let $A \in \mathbb{R}^{m \times n}$ $(m < n)$ be a degraded matrix and $y, \nu \in \mathbb{R}^m$; we obtain the following proposed method to find the solution of the signal recovery problem:

$$\begin{cases} w_n = (1-\zeta_n)u_n + \zeta_n T u_n, \\ z_n = (1-\vartheta_n)T u_n + \vartheta_n T w_n, \\ u_{n+1} = (1-\eta_n)z_n + \eta_n T z_n, \end{cases} \tag{5.16}$$

where $Tu = \mathrm{prox}_{\mu g}(u - \mu A^T(Au - y))$, $\mu \in (0, 2/\|A^T A\|_2)$ and $\eta_n, \vartheta_n, \zeta_n \in (0,1)$ for all $n \in \mathbb{N}$. The goal of a signal recovery problem is to find the original signal from the observed signal, where the noise $\nu$ is unknown; however, the degraded signal operator $A$ must be known when applying the algorithm (5.16). Now, we present the method for solving the signal recovery problem when the observed signals $y_1, y_2, \ldots, y_N$ can be recovered by using the known degraded matrices $A_1, A_2, \ldots, A_N$ ($y_i = A_i u + \nu_i$, $i = 1, 2, \ldots, N$). Let us consider the following problems:

$$\min_{u\in\mathbb{R}^n} \frac{1}{2}\|A_1 u - y_1\|_2^2 + \lambda_1\|u\|_1, \quad \min_{u\in\mathbb{R}^n} \frac{1}{2}\|A_2 u - y_2\|_2^2 + \lambda_2\|u\|_1, \quad \ldots, \quad \min_{u\in\mathbb{R}^n} \frac{1}{2}\|A_N u - y_N\|_2^2 + \lambda_N\|u\|_1, \tag{5.17}$$

where the true signal $u$ is a common solution of problem (5.17). That is, we will find the true signal $u$ through the common solution of the $N$ LASSO problems in (5.17). However, many degraded signal operators must then be known. For simplicity, we give an example of applying our algorithm to find the common solution $u$ of a pair of LASSO problems from (5.17):

$$\begin{cases} w_n = (1-\zeta_n)u_n + \zeta_n T_j u_n, \\ z_n = (1-\vartheta_n)T_k u_n + \vartheta_n T_j w_n, \\ u_{n+1} = (1-\eta_n)z_n + \eta_n T_k z_n, \end{cases} \tag{5.18}$$

where

$$T_i u = \mathrm{prox}_{\mu_i g_i}(u - \mu_i A_i^T(A_i u - y_i)), \quad g_i(u) = \lambda_i\|u\|_1$$

and

$$\mu_i = 1/\|A_i^T A_i\|_2,$$

$i = 1, 2, \ldots, N$. The implemented algorithm (5.18) is proposed for solving the signal recovery problem by using a pair of degraded signal operators $A_j$ and $A_k$, with $\zeta_n = \vartheta_n = \eta_n = 0.9$; it is called the proposed algorithm with the operator $T_j$-$T_k$. In the case of $T_j = T_k$, we call it the proposed algorithm with $T_j$.
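The sketch below (ours, with an assumed noise level and regularization parameters) instantiates (5.18) for a pair of LASSO problems; the proximal mapping of $\lambda\|\cdot\|_1$ is componentwise soft thresholding, and the SNR is computed as defined later in this section.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, k = 1024, 512, 70
u_true = np.zeros(n)
u_true[rng.choice(n, k, replace=False)] = rng.uniform(-2, 2, k)  # k-sparse signal

def make_T(lam):
    A = rng.normal(size=(m, n))
    y = A @ u_true + 0.01 * rng.normal(size=m)       # assumed small Gaussian noise
    mu = 1.0 / np.linalg.norm(A.T @ A, 2)
    soft = lambda x, t: np.sign(x) * np.maximum(np.abs(x) - t, 0.0)  # prox of t*||.||_1
    return lambda u: soft(u - mu * A.T @ (A @ u - y), mu * lam)

T1, T2 = make_T(0.1), make_T(0.1)
eta = theta = zeta = 0.9                             # default parameters (5.8)
u = np.zeros(n)
for _ in range(2000):                                # iteration (5.18) with the pair T1-T2
    w = (1 - zeta) * u + zeta * T1(u)
    z = (1 - theta) * T2(u) + theta * T1(w)
    u = (1 - eta) * z + eta * T2(z)
snr = 20 * np.log10(np.linalg.norm(u) / np.linalg.norm(u - u_true))
print(f"SNR of the recovered signal: {snr:.1f} dB")
```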

Next, some experiments are provided to illustrate the convergence and effectiveness of the proposed algorithm (5.18). The original signal $u$ with $n = 1024$, generated by the uniform distribution on the interval $[-2, 2]$ with $k = 70$ nonzero elements, is used to create the observed signals $y_i = A_i u + \nu_i$, $i = 1, 2, 3$, with $m = 512$ (see Figure 11).

    Figure 11.  Original signal (u) with 70 nonzero elements.

The process is started with an initial signal $u_0$ with $k = 70$ and $n = 1024$ picked randomly (see Figure 12).

    Figure 12.  Initial signal u0.

The observed signals $y_i$ are shown in Figure 13.

    Figure 13.  Degraded signals y1, y2, and y3.

The matrices $A_i$ are generated by the normal distribution with mean zero and variance one, and the white Gaussian noises $\nu_i$, $i = 1, 2, 3$, are shown in Figure 14.

    Figure 14.  Noise signals ν1, ν2, and ν3.

Both the theoretical and experimental convergence properties of the proposed algorithm with permutations of the degraded matrices $A_1, A_2$ and $A_3$ are demonstrated and discussed for the following cases:

Case Ⅰ: Algorithm (5.16) with $T_1$, Case Ⅱ: Algorithm (5.16) with $T_2$,

Case Ⅲ: Algorithm (5.16) with $T_3$, Case Ⅳ: Algorithm (5.18) with $T_1$-$T_2$,

Case Ⅴ: Algorithm (5.18) with $T_1$-$T_3$, Case Ⅵ: Algorithm (5.18) with $T_2$-$T_1$,

Case Ⅶ: Algorithm (5.18) with $T_2$-$T_3$, Case Ⅷ: Algorithm (5.18) with $T_3$-$T_1$,

Case Ⅸ: Algorithm (5.18) with $T_3$-$T_2$.

The Cauchy error and the relative signal error can also be measured by using the max-norm. The performance of the proposed method at the $n$th iteration can be measured quantitatively by using the signal-to-noise ratio (SNR), which is defined by

$$\mathrm{SNR}(u_n) = 20\log_{10}\left(\frac{\|u_n\|_2}{\|u_n - u\|_2}\right),$$

    where un is the recovered signal at the nth iteration by using the proposed method. The Cauchy error, signal relative error and SNR quality of the proposed methods for recovering the degraded signal are shown in Figure 15.

    Figure 15.  Cauchy norm and relative error norm plots for the proposed method for Cases Ⅰ–Ⅸ to recover the observed signal.

The Cauchy error plot guarantees that all cases of the presented method converge to a common solution of the signal recovery problem (5.17) with $N = 2$. It is remarkable that the signal relative error plot decreases until it converges to some constant value. Figure 16 shows the SNR plots for the proposed method in all cases over 100,000 iterations. From the SNR quality plot in Figure 16, it can be seen that the SNR value increases until it converges to some constant value. From these results, it can be concluded that solving the signal recovery problem by the proposed algorithm achieves quality improvements of the observed signal. Moreover, the SNR quality of the observed signal is greatly improved when the proposed method with $T_j$-$T_k$, $T_j \ne T_k$, is used for solving the signal recovery problem. The optimal case for recovering the observed signal occurred when the proposed method with $T_3$-$T_1$ was used.

    Figure 16.  Comparison plots of the SNR quality for the proposed algorithm for Cases Ⅰ–Ⅸ to recover the observed signal.

    Figure 17 shows the signals restored by using the optimal proposed algorithm with the operators T3T1 as compared with the proposed algorithms with the operators T1, T2 and T3.

Figure 17.  Recovered signals obtained by using the proposed algorithm with the operators $T_1$, $T_2$, $T_3$ and $T_3$-$T_1$, respectively, presented at 5,000 iterations.

Next, we compare the effectiveness of the proposed method with the Mann iteration [10], Ishikawa iteration [11], S-iteration [12], Noor iteration [14], AK-iteration [15], the iteration of Sahu et al. [16] and the SP-iteration [18] through the SNR plots in Figure 18. The parameters $\zeta_n, \vartheta_n$ and $\eta_n$ of the comparative algorithms are set to the default parameter of (5.8). Since all comparative methods use only one degraded matrix in their algorithms, the proposed method with Cases Ⅰ–Ⅲ is used to compare the effectiveness of our methods.

    Figure 18.  SNR plots for all methods compared with the proposed method, presented in 100,000 iterations.

It can be seen in Figure 18 that the SNR quality from the proposed and SP methods is better than that of the remaining methods. However, the proposed method is designed to be used with a pair $T_j$-$T_k$, $j \ne k$. With this advantage, we found that the proposed method in Cases Ⅳ–Ⅸ is more effective when the number of iterations is large enough (see Figure 16).

Moreover, we show the efficiency of the proposed method with the operators $T_3$-$T_1$ in solving the $k$-sparse signal recovery problem with $k = 70, 35, 18, 9$. To obtain the $k$-nonzero signals, the observed signal in Figure 11 was attenuated by half until $k = 9$. When solving these $k$-sparse signal recovery problems using the suggested technique with the operators $T_3$-$T_1$, the regularization parameters $\lambda_1$ and $\lambda_3$ are set as $\lambda_1 = \lambda_3 = \lambda$. The SNR quality and the relative signal error as functions of the regularization parameter $\lambda$, where $\lambda \in [5, 75]$, are shown in Figure 19.

    Figure 19.  SNR quality and relative error norm results for the proposed method with the operators T3T1 and the regularization parameter λ as applied to recover the observed signal within 10,000 iterations.

The most recent figure illustrates that the proposed algorithms can solve the sparse signal recovery challenge. This figure shows that, with the optimal regularization parameter $\lambda$, the SNR quality increased and the relative error decreased as the nonzero elements of the sparse signal were attenuated by half. Following that, we compared the numerical simulations of the proposed method to the well-known LASSO-ADMM methodology (see [41,42]). When using the LASSO-ADMM algorithm to solve the sparse signal recovery problem and comparing it with the proposed method with the operators $T_3$-$T_1$, we set the penalty parameter $\rho = 1$ and varied the regularization parameter $\lambda$ from 0 to 1.

The SNR and relative error norm plots in Figures 19 and 20, respectively, show that the reconstruction via the recommended technique was more accurate than that resulting from the LASSO-ADMM algorithm at its ideal regularization parameter $\lambda$.

    Figure 20.  SNR and relative error norm plots of the LASSO-ADMM algorithm with the regularization parameter λ as applied to recover the observed sparse signal.

However, the LASSO-ADMM algorithm needs fewer observations than our technique to obtain the same probability of exact reconstruction.

This is because the LASSO-ADMM technique is solved implicitly (the inverse of the noise covariance matrix is necessary), whereas the suggested algorithm is not. Furthermore, when determining the best regularization parameter $\lambda$ to achieve maximum SNR quality, the SNR plot for the LASSO-ADMM algorithm can provide an adequate regularization parameter compared to the proposed approaches.

Splitting methods have received much attention lately because many nonlinear problems arising in applied areas such as signal processing and image restoration are mathematically modeled as a nonlinear operator equation, where the operator is decomposed as the sum of two nonlinear operators. Most investigations of splitting methods are carried out in Hilbert spaces. This work developed an iterative scheme in Banach spaces. By using sunny nonexpansive retractions, which are different from the metric projection in Banach spaces, we have developed the CR-iteration algorithm in view of two quasi-nonexpansive nonself-mappings in the setting of uniformly convex Banach spaces. We proved the convergence theorem of our iterative scheme and presented applications to common zeros of accretive operators, convexly constrained least squares problems, convex minimization problems, differential problems, image restoration problems and signal recovery problems. As applications, we found that, when the proposed method is used to find the common solution of image and signal recovery problems, it expands the quality range of the recovered image and signal. Our algorithm was found to be flexible and of better quality than the Mann, Ishikawa, S, Noor, AK, Sahu et al. and SP iterations (see Figures 10 and 18). We also presented the idea of applying pairs of basic iterative methods, such as WJ, GS and SOR, within the proposed method to compute the unique solution of the well-known one-dimensional heat equation with Dirichlet boundary conditions and initial data. Furthermore, the proposed algorithm can solve the sparse signal recovery challenge. The simulation results demonstrate the efficacy of the proposed algorithm for solving $k$-sparse signal recovery problems, and the numerical experiments show the excellent performance of the proposed method.

    This research project was supported by the Thailand Science Research and Innovation Fund and the University of Phayao Grant No. UoE65002. T. Thianwan would like to thank the Thailand Science Research and Innovation Fund and the University of Phayao Grant No. FF65-RIM072. Also, the third author acknowledges the partial support provided by the revenue budget in 2022 (PBTSC65014), School of Science, University of Phayao.

    The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.



    [1] K. Kankam, N. Pholasa, P. Cholamjiak, On convergence and complexity of the modified forward-backward method involving new linesearches for convex minimization, Math. Meth. Appl. Sci. 42 (2019), 1352–1362. https://doi.org/10.1002/mma.5420 doi: 10.1002/mma.5420
    [2] E. J. Candès, M. B. Wakin, An introduction to compressive sampling, IEEE Signal Process. Mag., 25 (2008), 21–30.
    [3] S. Suantai, S. Kesornprom, P. Cholamjiak, A new hybrid CQ algorithm for the split feasibility problem in Hilbert spaces and its applications to compressed sensing, Mathematics, 7 (2019), 789. https://doi.org/10.3390/math7090789 doi: 10.3390/math7090789
    [4] D. Kitkuan, P. Kumam, A. Padcharoen, W. Kumam, P. Thounthong, Algorithms for zeros of two accretive operators for solving convex minimization problems and its application to image restoration problems, J. Comput. Appl. Math., 354 (2019), 471–495. https://doi.org/10.1016/j.cam.2018.04.057 doi: 10.1016/j.cam.2018.04.057
    [5] A. Padcharoen, P. Kumam, Y. J. Cho, Split common fixed point problems for demicontractive operators, Numer. Algorithms, 82 (2019), 297–320. https://doi.org/10.1007/s11075-018-0605-0 doi: 10.1007/s11075-018-0605-0
    [6] P. Cholamjiak, Y. Shehu, Inertial forward-backward splitting method in Banach spaces with application to compressed sensing, Appl. Math., 64 (2019), 409–435.
    [7] W. Jirakitpuwapat, P. Kumam, Y. J. Cho, K. Sitthithakerngkiet, A general algorithm for the split common fixed point problem with its applications to signal processing, Mathematics, 7 (2019), 226. https://doi.org/10.3390/math7030226 doi: 10.3390/math7030226
    [8] V. Berinde, Iterative approximation of fixed points: Lecture notes in mathematics, 2 Eds., Springer: Berlin, Germany, 2007.
[9] E. Picard, Mémoire sur la théorie des équations aux dérivées partielles et la méthode des approximations successives, J. Math. Pures Appl., 6 (1890), 145–210.
[10] W. R. Mann, Mean value methods in iteration, Proc. Am. Math. Soc., 4 (1953), 506–510. https://doi.org/10.1090/S0002-9939-1953-0054846-3 doi: 10.1090/S0002-9939-1953-0054846-3
    [11] S. Ishikawa, Fixed points by a new iteration method, Proc. Am. Math. Soc., 44 (1974), 147–150. https://doi.org/10.1090/S0002-9939-1974-0336469-5 doi: 10.1090/S0002-9939-1974-0336469-5
    [12] R. P. Agarwal, D. O'Regan, D. R. Sahu, Iterative construction of fixed points of nearly asymptotically nonexpansive mappings, J. Nonlinear Convex Anal., 8 (2007), 61–79.
    [13] R. Chugh, V. Kumar, S. Kumar, Strong convergence of a new three step iterative scheme in Banach spaces, Amer. J. Compu. Math., 2 (2012), 345–357. https://doi.org/10.4236/ajcm.2012.24048 doi: 10.4236/ajcm.2012.24048
    [14] M. A. Noor, New approximation schemes for general variational inequalities, J. Math. Anal. Appl., 251 (2000), 217–229. https://doi.org/10.1006/jmaa.2000.7042 doi: 10.1006/jmaa.2000.7042
    [15] K. Ullah, M. Arshad, On different results for new three step iteration process in Banach spaces, SpringerPlus, 5 (2016), 1616. https://doi.org/10.1186/s40064-016-3056-x doi: 10.1186/s40064-016-3056-x
    [16] V. K. Sahu, H. K. Pathak, R. Tiwari, Convergence theorems for new iteration scheme and comparison results, Aligarh Bull. Math., 35 (2016), 19–42.
    [17] B. S. Thakur, D. Thakur, M. Postolache, New iteration scheme for approximating fixed point of nonexpansive mappings, Filomat, 30 (2016), 2711–2720. https://doi.org/10.2298/FIL1610711T doi: 10.2298/FIL1610711T
[18] W. Phuengrattana, S. Suantai, On the rate of convergence of Mann, Ishikawa, Noor and SP-iterations for continuous functions on an arbitrary interval, J. Comput. Appl. Math., 235 (2011), 3006–3014.
    [19] C. Byrne, A unified treatment of some iterative algorithms in signal processing and image reconstruction, Inverse Probl., 20 (2004), 103–120. https://doi.org/10.1088/0266-5611/20/1/006 doi: 10.1088/0266-5611/20/1/006
    [20] P. L. Combettes, V. R. Wajs, Signal recovery by proximal forward-backward splitting, Multiscale Model Simul., 4 (2005), 1168–1200. https://doi.org/10.1137/050626090 doi: 10.1137/050626090
    [21] Y. Censor, T. Bortfeld, B. Martin, A. Trofimov, A unified approach for inversion problems in intensity-modulated radiation therapy, Phys. Med. Biol., 51 (2006), 2353–2365. https://doi.org/10.1088/0031-9155/51/10/001 doi: 10.1088/0031-9155/51/10/001
    [22] Y. Censor, T. Elfving, N. Kopf, T. Bortfeld, The multiple set split feasibility problem and its applications, Inverse Probl., 21 (2005), 2071–2084.
    [23] A. Ben-Tal, A. Nemirovski, Lectures on modern convex optimization, analysis, algorithms, and engineering applications, MPS/SIAM Ser. Optim., SIAM: Philadelphia, PA, USA, 2001.
    [24] J. Bioucas-Dias, M. Figueiredo, A new TwIST: Two-step iterative shrinkage/thresholding algorithms for image restoration, IEEE Trans. Image Process., 16 (2007), 2992–3004. https://doi.org/10.1109/TIP.2007.909319 doi: 10.1109/TIP.2007.909319
    [25] S. S. Chen, D. L. Donoho, M. A. Saunders, Atomic decomposition by basis pursuit, SIAM J. Sci. Comput., 20 (1998), 33–61. https://doi.org/10.1137/S1064827596304010 doi: 10.1137/S1064827596304010
    [26] D. L. Donoho, I. M. Johnstone, Adapting to unknown smoothness via wavelet shrinkage, J. Am. Statist. Assoc., 90 (1995), 1200–1224. https://doi.org/10.1080/01621459.1995.10476626 doi: 10.1080/01621459.1995.10476626
    [27] M. A. T. Figueiredo, R. D. Nowak, S. J. Wright, Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems, IEEE J. Sel. Top. Signal Process., 1 (2007), 586–597. https://doi.org/10.1109/JSTSP.2007.910281 doi: 10.1109/JSTSP.2007.910281
    [28] S. S. Chang, C. F. Wen, J. C. Yao, Zero point problem of accretive operators in Banach spaces, Bull. Malays. Math. Sci. Soc., 42 (2019), 105–118. https://doi.org/10.1007/s40840-017-0470-3 doi: 10.1007/s40840-017-0470-3
    [29] F. E. Browder, Nonlinear mappings of nonexpansive and accretive type in Banach spaces, Bull. Am. Math. Soc., 73 (1967), 875–882. https://doi.org/10.1090/S0002-9904-1967-11823-8 doi: 10.1090/S0002-9904-1967-11823-8
    [30] F. E. Browder, Semicontractive and semiaccretive nonlinear mappings in Banach spaces, Bull. Am. Math. Soc., 7 (1968), 660–665. https://doi.org/10.1090/S0002-9904-1968-11983-4 doi: 10.1090/S0002-9904-1968-11983-4
    [31] I. Cioranescu, Geometry of Banach spaces, duality mapping and nonlinear problems, Kluwer: Amsterdam, Netherlands, 1990.
    [32] W. Takahashi, Nonlinear functional analysis. Fixed point theory and its applications, Yokohama Publishers: Yokohama, Japan, 2000.
    [33] K. Goebel, S. Reich, Uniform convexity, hyperbolic geometry and non-expansive mappings, Marcel Dekker lnc.: New York, USA, 1984.
    [34] Z. Opial, Weak convergence of the sequence of successive approximations for nonexpansive mappings, Bull. Am. Math. Soc., 73 (1967), 591–597. https://doi.org/10.1090/S0002-9904-1967-11761-0 doi: 10.1090/S0002-9904-1967-11761-0
    [35] R. T. Rockafellar, On the maximal monotonicity of subdifferential mappings, Pac. J. Math., 33 (1970), 209–216. https://doi.org/10.2140/pjm.1970.33.209 doi: 10.2140/pjm.1970.33.209
    [36] H. K. Xu, Inequalities in Banach spaces with applications, Nonlinear Anal., 16 (1991), 1127–1138. https://doi.org/10.1016/0362-546X(91)90200-K doi: 10.1016/0362-546X(91)90200-K
    [37] D. R. Sahu, A. Pitea, M. Verma, A new iteration technique for nonlinear operators as concerns convex programming and feasibility problems, Numer. Algorithms, 83 (2020), 421–449. https://doi.org/10.1007/s11075-019-00688-9 doi: 10.1007/s11075-019-00688-9
[38] D. Yambangwai, N. Moshkin, Deferred correction technique to construct high-order schemes for the heat equation with Dirichlet and Neumann boundary conditions, Eng. Lett., 21 (2013), 61–67.
    [39] D. Yambangwai, W. Cholamjiak, T. Thianwan, H. Dutta, On a new weight tridiagonal iterative method and its applications, Soft Comput., 25 (2021), 725–740.
    [40] S. M. Grzegorski, On optimal parameter not only for the SOR method, Appl. Comput. Math., 8 (2019), 82–87. https://doi.org/10.11648/j.acm.20190805.11 doi: 10.11648/j.acm.20190805.11
    [41] I. Daubechies, M. Defrise, C. D. Mol, An iterative thresholding algorithm for linear inverse problems with a sparsity constraint, Commun. Pure Appl. Math., 57 (2004), 1413–1457. https://doi.org/10.1002/cpa.20042 doi: 10.1002/cpa.20042
    [42] A. Elgabli, A. Elghariani, A. O. Al-Abbasi, M. Bell, Two-stage LASSO ADMM signal detection algorithm for large scale MIMO, 2017 51st Asilomar Conference on Signals, Systems, and Computers, 2017, 1660–1664.
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)