In this paper, using sunny nonexpansive retractions, which are different from the metric projection in Banach spaces, we develop the CR-iteration algorithm for two quasi-nonexpansive nonself-mappings and give a convergence analysis for the proposed method in the setting of uniformly convex Banach spaces. Furthermore, our results can be applied to finding common zeros of accretive operators, convexly constrained least square problems and convex minimization problems. Regarding applications, some numerical experiments involving real-world problems are provided, with a focus on differential problems, image restoration problems and signal recovery problems.
Citation: Damrongsak Yambangwai, Chonjaroen Chairatsiripong, Tanakit Thianwan. Iterative manner involving sunny nonexpansive retractions for nonlinear operators from the perspective of convex programming as applicable to differential problems, image restoration and signal recovery[J]. AIMS Mathematics, 2023, 8(3): 7163-7195. doi: 10.3934/math.2023361
Let E be a real Banach space. Assume that A:E→E is an operator and B:E→2E is a set-valued operator. We study the following zero point problem: find an element u∈E such that
0∈Au+Bu. | (1.1) |
It is well known that this problem includes, as special cases, convex programming, variational inequalities, split feasibility problems and minimization problems [1,2,3,4,5,6,7]. These problems have applications in machine learning, image processing [4,5], linear inverse problems and signal processing, and they can all be modeled mathematically in the form (1.1).
Many mathematicians have been interested in approximating fixed points by iterative methods in recent years. With the considerable improvements in fixed-point theory over the last several years, iterative schemes have emerged as a topic of increasing interest, and the interplay between the type of iterative sequence and the class of operators employed is still being investigated and debated. Because of their considerable usefulness in fixed-point theory and its implementations, operator classes occupy a prominent place in present scientific studies employing algorithmic approaches, and a number of studies have approximated fixed points by using iterative techniques [8]. It is worth noting that the class of operators used in fixed-point studies is therefore important.
Let K be a nonempty closed convex subset of E and S:K→K be an operator with at least one fixed point. In order to find a fixed point, many researchers have proposed various algorithms for finding the approximate solution. One of the most popular iterative processes, called the Picard iteration process [9], was defined by u1∈K:
un+1=Sun,n≥1. |
In addition, other iterative methods improving on the Picard iteration method have been studied extensively, such as the following.
The Mann iteration process [10] is defined by u1∈K and
un+1=(1−ηn)un+ηnSun,n≥1, |
where {ηn} is a sequence in (0,1).
The Ishikawa iteration process [11] is defined by u1∈K and
un+1=(1−ηn)un+ηnS((1−ϑn)un+ϑnSun),n≥1, |
where {ηn} and {ϑn} are sequences in (0,1). The S-iteration process was introduced by Agarwal et al. [12] in a Banach space as follows:
un+1=(1−ηn)Sun+ηnS((1−ϑn)un+ϑnSun),n≥1, |
where {ηn} and {ϑn} are sequences in (0,1). They proved that the S-iteration method is independent of the Mann and Ishikawa iteration methods and converges faster than both of them. In 2012, Chugh et al. [13] introduced the CR-iteration procedure as follows:
Let K be a nonempty convex subset of a normed linear space E, and let {ηn}, {ϑn}, {ζn} be sequences in (0,1). For u1∈K, the sequence {un} is defined by
wn=(1−ζn)un+ζnSun,zn=(1−ϑn)Sun+ϑnSwn,un+1=(1−ηn)zn+ηnSzn,n≥1. |
The authors showed numerically that this scheme is equivalent to and faster than the Picard, Mann, Ishikawa and Agarwal iterative schemes. Some of the other well-known three-step iteration processes were developed by Noor [14], Ullah and Arshad [15] (AK-iteration), Sahu et al. [16], Thakur et al. [17] and Phuengrattana and Suantai [18] (SP-iteration). There are a number of papers that have studied fixed points by using some iterative schemes (see [8]).
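To make the scheme concrete, the following is a minimal Python sketch of the CR iteration for a single mapping S; the toy affine contraction, the constant parameters and all numerical values are illustrative assumptions, not data from the paper.

```python
import numpy as np

def cr_iteration(S, u1, eta=0.9, theta=0.9, zeta=0.9, n_iter=50):
    """CR iteration of Chugh et al. for one self-mapping S on a convex set.

    Constant parameters are used here; the paper allows sequences in (0, 1)."""
    u = u1
    for _ in range(n_iter):
        w = (1 - zeta) * u + zeta * S(u)          # w_n
        z = (1 - theta) * S(u) + theta * S(w)     # z_n
        u = (1 - eta) * z + eta * S(z)            # u_{n+1}
    return u

# Toy example: a contraction on R^2 with fixed point (1, 1).
p = np.array([1.0, 1.0])
S = lambda u: 0.5 * (u - p) + p
print(cr_iteration(S, np.array([5.0, -3.0])))     # converges to approximately [1, 1]
```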
Many real-world problems can be modeled as common fixed-point problems. Therefore, the study of solving these problems is important and has received the attention of many mathematicians.
To date, fixed-point theory has been applied to solve various problems in science, engineering, economics, physics and data science, with applications such as signal/image processing [19,20], and intensity-modulated radiation therapy treatment planning [21,22].
In the field of image processing, the image restoration problem is an interesting and important topic. For this problem, there are several optimizations and fixed-point methods; see [23,24,25,26,27] for more detail.
Signal processing and numerical optimization are independent scientific fields that have always been mutually influencing each other. Perhaps the most convincing example whereby the two fields have met is compressed sensing [2]. Several surveys dedicated to these algorithms and their applications in signal processing have appeared [3,6,7,20].
In this work, our contribution is twofold:
1) We develop the CR-iteration process for computing common solutions of common fixed-point problems and apply our results to solving the problem in (1.1).
2) We find common solutions of convexly constrained least square problems and convex minimization problems and apply them to differential problems, image restoration and signal processing.
Let E be a real Banach space with a norm ‖⋅‖ and E∗ be its dual. The value of f∈E∗ at u∈E is denoted by ⟨u,f⟩. Throughout the paper, we denote by Bλ[v] a closed ball with the center at v and radius λ:
Bλ[v]={u∈E:‖v−u‖≤λ}. |
A Banach space E is said to be uniformly convex if, for any x with 0<x≤2, there exists δ=δ(x)>0 such that the inequalities ‖u‖≤1, ‖v‖≤1 and ‖u−v‖≥x imply 12‖u+v‖≤1−δ. Note that every Hilbert space is a uniformly convex Banach space. The (normalized) duality mapping J from E into the family of nonempty (by the Hahn-Banach theorem) weak-star compact subsets of its dual E∗ is defined by
J(u)={f∈E∗:⟨u,f⟩=‖u‖2=‖f‖2} |
for each u∈E, where ⟨⋅,⋅⟩ denotes the generalized duality pairing.
For a set-valued operator A:E→2E, we denote its domain, range and graph as follows:
D(A)={u∈E:Au≠∅},R(A)=∪{Ap:p∈D(A)} |
and
G(A)={(u,v)∈E×E:u∈D(A),v∈Au}, |
respectively. The inverse A−1 of A is defined by u∈A−1v if and only if v∈Au. A is called accretive if, for all ui∈D(A) and vi∈Aui (i=1,2), there exists j∈J(u1−u2) such that ⟨v1−v2,j⟩≥0.
An accretive operator A in a Banach space E is said to satisfy the range condition if ¯D(A)⊂R(I+μA) for all μ>0, where ¯D(A) denotes the closure of the domain of A.
It is well known that for an accretive operator A which satisfies the range condition, A−10=Fix(JAμ) for all μ>0.
Let H be a real Hilbert space. An operator A:H→2H is called m-accretive (see [28,29,30]) if A is accretive and R(I+μA)=H for all μ>0; every m-accretive operator is maximal accretive [31], and, in a Hilbert space, A is m-accretive if and only if A is maximal monotone [32]. Denote by dom(h) the domain of a function h:H→(−∞,∞], i.e.,
dom(h)={u∈H:h(u)<∞}. |
The subdifferential of h∈Γ0(H) at u∈H is the set
∂h(u)={z∈H:h(v)≥h(u)+⟨z,v−u⟩,∀v∈H}, |
where Γ0(H) denotes the class of all lower semicontinuous convex functions from H to (−∞,∞] with nonempty domains.
A point u∈K is called a fixed point of S if Su=u. We shall denote with Fix(S) the set of fixed points of S, i.e., Fix(S)={u∈K:Su=u}. The mapping S is called L-Lipschitz if there exists a constant L>0 such that
‖Su−Sv‖≤L‖u−v‖ |
for all u,v∈K. The mapping S is called nonexpansive if S is 1-Lipschitz. The mapping S is called quasi-nonexpansive if Fix(S)≠∅ and
‖Su−v‖≤‖u−v‖ |
for all u∈K and v∈Fix(S).
A subset K of Banach space E is said to be a retract of E if there is a continuous mapping Q from E onto K such that Qu=u for all u∈K. We call such a Q a retraction of E onto K. It follows that, if a mapping Q is a retraction, then Qv=v for all v in the range of Q. A retraction Q is called sunny if Q(Qu+λ(u−Qu))=Qu for all u∈E and λ≥0. If a sunny retraction Q is also nonexpansive, then K is called a sunny nonexpansive retract of E [33].
Let E be a strictly convex reflexive Banach space and K be a nonempty closed convex subset of E. Denote by PK the (metric) projection from E onto K, namely, for u∈E, PK(u) is the unique point in K with the property
inf{‖u−v‖:v∈K}=‖u−PK(u)‖. |
Let H be a real Hilbert space with an inner product ⟨⋅,⋅⟩ and the induced norm ‖⋅‖. Let K be a nonempty closed convex subset of H; then the nearest point projection PK from H onto K is the unique sunny nonexpansive retraction of H onto K. It is also known that PK(u)∈K and
⟨u−PK(u),PK(u)−v⟩≥0,∀u∈H,v∈K. |
Sunny nonexpansive retractions play a similar role in Banach spaces as projections do in Hilbert spaces. We emphasize that, if K is a closed and convex subset of a uniformly smooth Banach space E (that is, the norm of E is uniformly Fréchet differentiable), then there exists a sunny nonexpansive retraction from E onto K and it is unique.
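Since PK has a closed form for simple sets, the retraction can be illustrated numerically. The sketch below assumes, purely for illustration, that K is a closed Euclidean ball, and it checks the characterizing inequality (2.1)-type property ⟨u−PK(u),PK(u)−v⟩≥0 at random points.

```python
import numpy as np

def proj_ball(u, center, radius):
    """Metric projection P_K onto the closed ball K = B_radius[center] in R^d;
    in a Hilbert space P_K is the unique sunny nonexpansive retraction onto K."""
    d = u - center
    dist = np.linalg.norm(d)
    return u if dist <= radius else center + (radius / dist) * d

rng = np.random.default_rng(0)
c, r = np.zeros(3), 1.0
u = 10.0 * rng.normal(size=3)                # a point (almost surely) outside K
p = proj_ball(u, c, r)
for _ in range(5):                           # sample points v in K and verify
    v = proj_ball(rng.normal(size=3), c, r)  # <u - P_K(u), P_K(u) - v> >= 0
    assert np.dot(u - p, p - v) >= -1e-12
```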
Recall that a Banach space E is said to satisfy Opial's condition [34] if, for each sequence {un} in E which converges weakly to a point u∈E,
lim infn→∞‖un−u‖<lim infn→∞‖un−v‖,∀v∈E,v≠u.
In the sequel, the following lemmas are needed to prove our main results.
Lemma 1. ([35]) Let h∈Γ0(H). Then, ∂h is maximal monotone.
Lemma 2. ([36]) Let E be a Banach space and p>1 and R>0 be two fixed numbers. Then, E is uniformly convex if and only if there exists a continuous, strictly increasing and convex function φ:[0,∞)→[0,∞) with φ(0)=0 such that
‖αu+(1−α)v‖p≤α‖u‖p+(1−α)‖v‖p−α(1−α)φ(‖u−v‖)
for all u,v∈BR[0] and α∈[0,1].
Lemma 3. ([37]) Let K be a nonempty subset of a Banach space E and S:K→E be a uniformly continuous mapping. Let {un}⊂K be an approximating fixed point sequence of S, i.e., limn→∞‖un−Sun‖=0. Then, {vn} is an approximating fixed-point sequence of S whenever {vn} is in K such that limn→∞‖un−vn‖=0.
Lemma 4. ([30]) Let K be a nonempty closed convex subset of a uniformly convex Banach space E. If S:K→E is a nonexpansive mapping, then I-S has the demiclosed property with respect to 0.
Let K be a nonempty closed convex subset of a Banach space E with QK as a sunny nonexpansive retraction. We will denote the set of common fixed points of S and T by Ψ, that is, Ψ:=Fix(S)∩Fix(T). The following lemma is needed.
Lemma 5. Let K be a nonempty closed convex subset of a Banach space E with QK as the sunny nonexpansive retraction. Let S,T:K→E be quasi-nonexpansive mappings with Ψ≠∅ and let {ηn},{ϑn} and {ζn} be sequences in (0,1) for all n∈N. Define the sequence {un} by using Algorithm 1. Then, for each ˉu∈Ψ, limn→∞‖un−ˉu‖ exists and
‖wn−ˉu‖≤‖un−ˉu‖and‖zn−ˉu‖≤‖un−ˉu‖,∀n∈N. | (3.1) |
Algorithm 1: CR-Iteration Algorithm with Sunny Nonexpansive Retraction
Initialization: ηn,ϑn,ζn∈(0,1), u1∈K and n=1.
while stopping criterion not met do
wn=QK((1−ζn)un+ζnSun),
zn=QK((1−ϑn)Tun+ϑnSwn),
un+1=QK((1−ηn)zn+ηnTzn).
end
Proof. Let ˉu∈Ψ. Then, for each n≥1, we have
‖wn−ˉu‖=‖QK((1−ζn)un+ζnSun)−ˉu‖≤‖(1−ζn)(un−ˉu)+ζn(Sun−ˉu)‖≤(1−ζn)‖un−ˉu‖+ζn‖Sun−ˉu‖≤(1−ζn)‖un−ˉu‖+ζn‖un−ˉu‖=‖un−ˉu‖, | (3.2) |
‖zn−ˉu‖=‖QK((1−ϑn)Tun+ϑnSwn)−ˉu‖≤‖(1−ϑn)(Tun−ˉu)+ϑn(Swn−ˉu)‖≤(1−ϑn)‖Tun−ˉu‖+ϑn‖Swn−ˉu‖≤(1−ϑn)‖un−ˉu‖+ϑn‖wn−ˉu‖≤(1−ϑn)‖un−ˉu‖+ϑn‖un−ˉu‖=‖un−ˉu‖ | (3.3) |
and
‖un+1−ˉu‖=‖QK((1−ηn)zn+ηnTzn)−ˉu‖≤‖(1−ηn)(zn−ˉu)+ηn(Tzn−ˉu)‖≤(1−ηn)‖zn−ˉu‖+ηn‖Tzn−ˉu‖≤(1−ηn)‖zn−ˉu‖+ηn‖zn−ˉu‖=‖zn−ˉu‖≤‖un−ˉu‖. | (3.4) |
Therefore,
‖un+1−ˉu‖≤‖un−ˉu‖≤...≤‖u1−ˉu‖,∀n∈N. | (3.5) |
Since {‖un−ˉu‖} is monotonically decreasing and bounded below by zero, the sequence {‖un−ˉu‖} is convergent.
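As a numerical illustration of Lemma 5, the following sketch runs Algorithm 1 with QK taken as the projection onto a closed ball and with two hypothetical mappings S and T whose only common fixed point is 0 (all of these choices are assumptions made for illustration); it verifies that ‖un−ˉu‖ is nonincreasing, as in (3.5).

```python
import numpy as np

def proj_ball(u, center, radius):
    d = u - center
    dist = np.linalg.norm(d)
    return u if dist <= radius else center + (radius / dist) * d

# Illustrative mappings (not from the paper): S is nonexpansive, T is a
# contraction, and Fix(S) ∩ Fix(T) = {0} inside K = B_2[0].
S = lambda u: -u
T = lambda u: 0.5 * u
QK = lambda u: proj_ball(u, np.zeros(2), 2.0)

u, ubar = np.array([1.5, -1.0]), np.zeros(2)
dists = [np.linalg.norm(u - ubar)]
for _ in range(30):                       # Algorithm 1 with constant parameters 0.9
    w = QK(0.1 * u + 0.9 * S(u))
    z = QK(0.1 * T(u) + 0.9 * S(w))
    u = QK(0.1 * z + 0.9 * T(z))
    dists.append(np.linalg.norm(u - ubar))
assert all(b <= a + 1e-12 for a, b in zip(dists, dists[1:]))   # inequality (3.5)
print(u)                                  # approaches the common fixed point 0
```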
Theorem 1. Let K be a nonempty closed convex subset of a uniformly convex Banach space E with QK as the sunny nonexpansive retraction. Let S,T:K→E be quasi-nonexpansive mappings with Ψ≠∅. Let {ηn}, {ϑn} and {ζn} be sequences of real numbers such that 0<c1≤ηn≤ˆc1<1, 0<c2≤ϑn≤ˆc2<1 and 0<c3≤ζn≤ˆc3<1 for all n∈N. From an arbitrary u1∈K, PΨ(u1)=u∗, define the sequence {un} by Algorithm 1. Then, we have the following:
(i) {un} is in a closed convex bounded set Bλ[u∗]∩K, where λ is a constant in (0,∞) such that ‖u1−u∗‖≤λ.
(ii) If S is uniformly continuous, then limn→∞‖un−Sun‖=0 and limn→∞‖un−Tun‖=0.
(iii) If E fulfills Opial's condition and I−S and I−T are demiclosed at 0, then {un} converges weakly to an element of Ψ∩Bλ[u∗].
Proof. (ⅰ) Since u∗∈Ψ, from (3.5), we obtain
‖un+1−u∗‖≤‖un−u∗‖≤...≤‖u1−u∗‖≤λ,∀n∈N. | (3.6) |
Therefore, {un} is in the closed convex bounded set Bλ[u∗]∩K.
(ⅱ) Suppose that S is uniformly continuous. Using Lemma 5, we get that {un}, {zn} and {wn} are in Bλ[u∗]∩K, and, hence, from (3.1), we obtain
‖Tun−u∗‖≤λ,‖Tzn−u∗‖≤λ,‖Swn−u∗‖≤λand‖Sun−u∗‖≤λ |
for all n∈N. Using Lemma 2 for p=2 and R=λ, from the relations in Algorithm 1, we obtain
‖wn−u∗‖2≤(1−ζn)‖un−u∗‖2+ζn‖Sun−u∗‖2−ζn(1−ζn)φ(‖un−Sun‖)≤(1−ζn)‖un−u∗‖2+ζn‖un−u∗‖2−ζn(1−ζn)φ(‖un−Sun‖)=‖un−u∗‖2−ζn(1−ζn)φ(‖un−Sun‖). | (3.7) |
It follows from (3.7) that
‖zn−u∗‖2≤(1−ϑn)‖un−u∗‖2+ϑn‖wn−u∗‖2−ϑn(1−ϑn)φ(‖Tun−Swn‖)≤(1−ϑn)‖un−u∗‖2+ϑn(‖un−u∗‖2−ζn(1−ζn)φ(‖un−Sun‖))−ϑn(1−ϑn)φ(‖Tun−Swn‖)=‖un−u∗‖2−ϑnζn(1−ζn)φ(‖un−Sun‖)−ϑn(1−ϑn)φ(‖Tun−Swn‖). | (3.8) |
Using (3.8) and Lemma 2, we have
‖un+1−u∗‖2≤‖(1−ηn)(zn−u∗)+ηn(Tzn−u∗)‖2≤(1−ηn)‖zn−u∗‖2+ηn‖Tzn−u∗‖2−ηn(1−ηn)φ(‖zn−Tzn‖)≤(1−ηn)‖zn−u∗‖2+ηn‖zn−u∗‖2−ηn(1−ηn)φ(‖zn−Tzn‖)=‖zn−u∗‖2−ηn(1−ηn)φ(‖zn−Tzn‖)≤‖un−u∗‖2−ϑnζn(1−ζn)φ(‖un−Sun‖)−ϑn(1−ϑn)φ(‖Tun−Swn‖)−ηn(1−ηn)φ(‖zn−Tzn‖). | (3.9) |
From (3.9), we have the following two important inequalities:
ϑn(1−ϑn)φ(‖Tun−Swn‖)≤‖un−u∗‖2−‖un+1−u∗‖2 | (3.10) |
and
ϑnζn(1−ζn)φ(‖un−Sun‖)≤‖un−u∗‖2−‖un+1−u∗‖2. | (3.11) |
Note that c2(1−ˆc2)≤ϑn(1−ϑn) and c2c3(1−ˆc3)≤ϑnζn(1−ζn).
Using (3.10) and (3.11), we obtain
c2(1−ˆc2)n∑i=1φ(‖Tui−Swi‖)≤‖u1−u∗‖2−‖un+1−u∗‖2,∀n∈N | (3.12) |
and
c2c3(1−ˆc3)n∑i=1φ(‖ui−Sui‖)≤‖u1−u∗‖2−‖un+1−u∗‖2,∀n∈N. | (3.13) |
It follows from (3.12) and (3.13) that we obtain
∞∑n=1φ(‖Tun−Swn‖)<∞ | (3.14) |
and
∞∑n=1φ(‖un−Sun‖)<∞. | (3.15) |
Using (3.14) and (3.15), we have
limn→∞‖Tun−Swn‖=0 | (3.16) |
and
limn→∞‖un−Sun‖=0. | (3.17) |
In addition, using (3.17), we have
‖wn−un‖=‖QK((1−ζn)un+ζnSun)−QK(un)‖≤ζn‖Sun−un‖→0(asn→∞). | (3.18) |
Since S is uniformly continuous, it follows from Lemma 3 that
limn→∞‖wn−Swn‖=0. | (3.19) |
Thus from (3.16)–(3.19), we have
‖un−Tun‖≤‖un−Sun‖+‖Sun−Tun‖≤‖un−Sun‖+‖Sun−Swn‖+‖Swn−Tun‖≤‖un−Sun‖+‖Sun−un‖+‖un−Swn‖+‖Swn−Tun‖≤‖un−Sun‖+‖Sun−un‖+‖un−wn‖+‖wn−Swn‖+‖Swn−Tun‖→0(asn→∞). | (3.20) |
(ⅲ) By assumption, E satisfies Opial's condition. For each w∈Ψ∩Bλ[u∗]∩K, Lemma 5 shows that limn→∞‖un−w‖ exists. Suppose that two subsequences {unq} and {uml} of {un} converge weakly to two distinct points p∗ and q∗ in Bλ[u∗]∩K, respectively. Then, since, by (ii), {unq} and {uml} are approximating fixed-point sequences of S and T, and since both I−S and I−T have the demiclosed property at 0, we have that Sp∗=Tp∗=p∗ and Sq∗=Tq∗=q∗. Moreover, using Opial's condition, we have
limn→∞‖un−p∗‖=limq→∞‖unq−p∗‖<limq→∞‖unq−q∗‖=limn→∞‖un−q∗‖.
Similarly, we obtain
limn→∞‖un−q∗‖<limn→∞‖un−p∗‖,
which is a contradiction. Therefore, p∗=q∗. Hence, the sequence {un} converges weakly to an element of Ψ∩Bλ[u∗]∩K.
Since every nonexpansive mapping is uniformly continuous, by using the same ideas and techniques as in Theorem 1 together with Lemma 4, we can state the following result without proof.
Theorem 2. Let K be a nonempty closed convex subset of a uniformly convex Banach space E with QK as the sunny nonexpansive retraction. Let S,T:K→E be nonexpansive mappings with Ψ≠∅. Let {ηn}, {ϑn} and {ζn} be sequences of real numbers such that 0<c1≤ηn≤ˆc1<1, 0<c2≤ϑn≤ˆc2<1 and 0<c3≤ζn≤ˆc3<1 for all n∈N. From an arbitrary u1∈K, PΨ(u1)=u∗, define the sequence {un} by using Algorithm 1. Then, we have the following:
(i) {un} is in a closed convex bounded set Bλ[u∗]∩K, where λ is a constant in (0,∞) such that ‖u1−u∗‖≤λ.
(ii) limn→∞‖un−Sun‖=0 and limn→∞‖un−Tun‖=0.
(iii) If E fulfills Opial's condition, then {un} converges weakly to an element of Ψ∩Bλ[u∗].
If S and T are nonexpansive self-mappings on a nonempty closed convex subset K of a real Hilbert space H, then the retraction QK is no longer needed, and from Theorem 1 we obtain the following result.
Corollary 1. Let K be a nonempty closed convex subset of a real Hilbert space H, let S,T:K→K be nonexpansive mappings with Ψ≠∅ and let {ηn}, {ϑn} and {ζn} be sequences of real numbers for which 0<c1≤ηn≤ˆc1<1, 0<c2≤ϑn≤ˆc2<1 and 0<c3≤ζn≤ˆc3<1 for all n∈N. From an arbitrary u1∈K, define the sequence {un} by
{wn=(1−ζn)un+ζnSunzn=(1−ϑn)Tun+ϑnSwnun+1=(1−ηn)zn+ηnTzn,∀n∈N. | (3.21) |
Then, {un} converges weakly to an element of Ψ.
In this section, we shall utilize the methods mentioned above to study common zeros of accretive operators, convexly constrained least square problems and convex minimization problems.
Setting S=JAμ and T=JBμ in (3.21), we derive the following convergence analysis for solving (1.1).
Theorem 3. Let K be a nonempty closed convex subset of a real uniformly convex Banach space E satisfying Opial's condition. Let A:D(A)⊆K→2E and B:D(B)⊆K→2E be accretive operators such that ¯D(A)⊆K⊆∩μ>0R(I+μA), ¯D(B)⊆K⊆∩μ>0R(I+μB) and A−1(0)∩B−1(0)≠∅. Let {ηn}, {ϑn} and {ζn} be sequences of real numbers such that 0<c1≤ηn≤ˆc1<1, 0<c2≤ϑn≤ˆc2<1 and 0<c3≤ζn≤ˆc3<1 for all n∈N. Let μ>0, u1∈K and PA−1(0)∩B−1(0)(u1)=u∗. Let {un} be defined by
{wn=(1−ζn)un+ζnJAμun,zn=(1−ϑn)JBμun+ϑnJAμwn,un+1=(1−ηn)zn+ηnJBμzn,∀n∈N. | (4.1) |
Then, we have the following:
(i) {un} is in a closed convex bounded set Bλ[u∗]∩K, where λ is a constant in (0,∞) such that ‖u1−u∗‖≤λ.
(ii) limn→∞‖un−JAμun‖=0 and limn→∞‖un−JBμun‖=0.
(iii) {un} converges weakly to an element of A−1(0)∩B−1(0)∩Bλ[u∗].
Proof. By the assumptions ¯D(A)⊆K⊆∩μ>0R(I+μA) and ¯D(B)⊆K⊆∩μ>0R(I+μB), we know that JAμ,JBμ:K→K are nonexpansive. Note that D(A)∩D(B)⊆K; hence,
u∗∈A−1(0)∩B−1(0)⇒u∗∈D(A)∩D(B)with0∈Au∗and0∈Bu∗⇒u∗∈KwithJAμu∗=u∗andJBμu∗=u∗⇒u∗∈Fix(JAμ)∩Fix(JBμ)∩K. |
Set S=JAμ and T=JBμ. Hence, Theorem 3 follows from Theorem 2.
Let A,B∈B(H) and y,z∈H. Define φ,ψ:H→R by
φ(u)=12‖Au−y‖2andψ(u)=12‖Bu−z‖2 |
for all u∈H, where H is a real Hilbert space. Let K be a nonempty closed convex subset of H. The objective is to find b∈K such that
b∈argminu∈Kφ(u)∩argminu∈Kψ(u), | (4.2) |
where argminu∈Kφ(u):={u∗∈K:φ(u∗)=infu∈Kφ(u)}.
We give an application of Theorem 2 for finding common solutions to two convexly constrained least square problems.
Proposition 1. ([20]) Let H be a real Hilbert space, A∈B(H) with the adjoint A∗ and y∈H. Let K be a nonempty closed convex subset of H. Let b∈H and δ∈(0,∞). Then, the following statements are equivalent:
(i) b solves the following problem:
minu∈K12‖Au−y‖2. |
(ii) b=PK(b−δA∗(Ab−y)).
(iii) ⟨Av−Ab,y−Ab⟩≤0 for all v∈K.
Theorem 4. Let K be a nonempty closed convex subset of a real Hilbert space H, y,z∈H and A,B∈B(H) such that the solution set of the problem in (4.2) is nonempty. Let {ηn}, {ϑn} and {ζn} be sequences of real numbers such that 0<c1≤ηn≤ˆc1<1, 0<c2≤ϑn≤ˆc2<1 and 0<c3≤ζn≤ˆc3<1 for all n∈N. Let u1∈K, Pargminu∈Kφ(u)∩argminu∈Kψ(u)(u1)=u∗ and δ∈(0,2min{1/‖A‖2,1/‖B‖2}). Define the sequence {un} by
{wn=(1−ζn)un+ζnSun,zn=(1−ϑn)Tun+ϑnSwn,un+1=(1−ηn)zn+ηnTzn,∀n∈N, | (4.3) |
where S,T:K→K are defined by Su=PK(u−δA∗(Au−y)) and Tu=PK(u−δB∗(Bu−z)) for all u∈K. Then, we have the following:
(i) {un} is in the closed ball Bλ[u∗], where λ is a constant in (0,∞) such that ‖u1−u∗‖≤λ.
(ii) limn→∞‖un−Sun‖=0 and limn→∞‖un−Tun‖=0.
(iii) {un} converges weakly to an element of argminu∈Kφ(u)∩argminu∈Kψ(u)∩ Bλ[u∗].
Proof. Note that ∇φ(u)=A∗(Au−y) for all u∈H; we obtain that ‖∇φ(u)−∇φ(v)‖=‖A∗A(u−v)‖≤‖A‖2‖u−v‖ for all u,v∈H. Thus, ∇φ is 1/‖A‖2-inverse strongly monotone; hence, I−δ∇φ is nonexpansive from K into H for δ∈(0,2/‖A‖2), and, similarly, I−δ∇ψ is nonexpansive for δ∈(0,2/‖B‖2). Therefore, S=PK(I−δ∇φ) and T=PK(I−δ∇ψ) are nonexpansive mappings from K into itself for δ∈(0,2min{1/‖A‖2,1/‖B‖2}). Thus, Theorem 4 follows from Theorem 2.
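A minimal sketch of scheme (4.3) under the simplifying assumption K=H=Rn, so that PK is the identity; the random matrices and the consistent data, which force the two least square problems to share the minimizer u_true, are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20
u_true = rng.normal(size=n)
A, B = rng.normal(size=(n, n)), rng.normal(size=(n, n))
y, z_data = A @ u_true, B @ u_true        # consistent data: common minimizer u_true

# delta < 2 * min(1/||A||^2, 1/||B||^2); P_K is the identity since K = R^n.
delta = 1.0 / max(np.linalg.norm(A, 2) ** 2, np.linalg.norm(B, 2) ** 2)
S = lambda u: u - delta * A.T @ (A @ u - y)        # P_K(I - delta * grad(phi))
T = lambda u: u - delta * B.T @ (B @ u - z_data)   # P_K(I - delta * grad(psi))

u = np.zeros(n)
for _ in range(5000):                              # scheme (4.3), parameters 0.9
    w = 0.1 * u + 0.9 * S(u)
    zn = 0.1 * T(u) + 0.9 * S(w)
    u = 0.1 * zn + 0.9 * T(zn)
# Both least square residuals decrease toward 0 as the iteration proceeds.
print(np.linalg.norm(A @ u - y), np.linalg.norm(B @ u - z_data))
```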
Let H be a Hilbert space and let g1,g2:H→(−∞,∞] be proper convex and lower-semicontinuous functions. We consider the following problem of finding x∈H such that
x∈(∂g1)−1(0)∩(∂g2)−1(0). | (4.4) |
Note that J∂g1μ=proxμg1. We give an application to common solutions for convex programming problems in a Hilbert space.
Theorem 5. Let K be a nonempty closed convex subset of a real Hilbert space H. Let g1,g2∈Γ0(H) be such that the solution set of the problem in (4.4) is nonempty. Let {ηn}, {ϑn} and {ζn} be sequences of real numbers such that 0<c1≤ηn≤ˆc1<1, 0<c2≤ϑn≤ˆc2<1 and 0<c3≤ζn≤ˆc3<1 for all n∈N. Let μ>0, u1∈H and P(∂g1)−1(0)∩(∂g2)−1(0)(u1)=u∗. Define the sequence {un} by
{wn=(1−ζn)un+ζnproxμg1(un),zn=(1−ϑn)proxμg2(un)+ϑnproxμg1(wn),un+1=(1−ηn)zn+ηnproxμg2(zn),∀n∈N. | (4.5) |
Then, we have the following:
(i) {un} is in the closed ball Bλ[u∗], where λ is a constant in (0,∞) such that ‖u1−u∗‖≤λ.
(ii) limn→∞‖un−proxμg1(un)‖=0 and limn→∞‖un−proxμg2(un)‖=0.
(iii) {un} converges weakly to an element of (∂g1)−1(0)∩(∂g2)−1(0)∩Bλ[u∗].
Proof. Using Lemma 1, we have that ∂g1 is maximal monotone. We see that R(I+μ∂g1)=H by the maximal monotonicity of ∂g1. It follows that J∂g1μ=proxμg1:H→H is nonexpansive. Similarly, J∂g2μ=proxμg2:H→H is nonexpansive. Thus, Theorem 5 follows from Theorem 2.
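The following sketch instantiates scheme (4.5) with the hypothetical pair g1(u)=‖u‖1 and g2(u)=(1/2)‖u‖2, chosen only because both proximity operators are explicit and the two subdifferentials share the unique zero u=0.

```python
import numpy as np

def soft(u, t):
    """prox of t * ||.||_1 (soft-thresholding)."""
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

mu = 0.5
prox_g1 = lambda u: soft(u, mu)           # prox_{mu g1}, g1 = ||.||_1
prox_g2 = lambda u: u / (1.0 + mu)        # prox_{mu g2}, g2 = 0.5 * ||.||^2

u = np.array([3.0, -2.0, 1.0])
for _ in range(100):                      # scheme (4.5) with constant parameters 0.9
    w = 0.1 * u + 0.9 * prox_g1(u)
    z = 0.1 * prox_g2(u) + 0.9 * prox_g1(w)
    u = 0.1 * z + 0.9 * prox_g2(z)
print(u)                                  # converges to the common zero [0, 0, 0]
```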
In this section, some real-world problems, such as differential problems, image restoration problems and signal recovery problems, are used to illustrate the effectiveness of our algorithm.
Let us consider the following simple and well known one-dimensional heat equation with Dirichlet boundary conditions and initial data:
ut=βuxx+f(x,t),0<x<l,t>0,
u(x,0)=u0(x),0<x<l, | (5.1) |
u(0,t)=γ1(t),u(l,t)=γ2(t),t>0, |
where β is a constant, u(x,t) represents the temperature at a point (x,t) and f(x,t), γ1(t) and γ2(t) are sufficiently smooth functions. Below, we use the notations uni and (uxx)ni to represent the numerical approximations of u(xi,tn) and uxx(xi,tn), with tn=nΔt, where Δt denotes the temporal mesh size. A set of schemes to solve the problem of (5.1) is based on the following well-known Crank-Nicolson type of scheme [38,39]:
un+1i−uniΔt=β2[(uxx)n+1i+(uxx)ni]+fn+1/2i,i=2,…,N−1 | (5.2) |
with the initial data
u0i=u0(xi),i=2,…,N−1 | (5.3) |
and Dirichlet boundary conditions
un+11=γ1(tn+1),un+1N=γ2(tn+1). | (5.4) |
For the approximate term of (uxx)ki, k=n,n+1, we use the standard centered discretization in space. The matrix form of the second-order finite difference scheme (FDS) for solving the heat problem of (5.1) can be written as
Aun+1=Gn, | (5.5) |
where Gn=Bun+fn+1/2,
A=tridiag(−η/2, 1+η, −η/2) and B=tridiag(η/2, 1−η, η/2) are (N−2)×(N−2) tridiagonal matrices,
un=[un2,un3,…,unN−2,unN−1]T, fn+1/2=[(η/2)γn+1/21+Δtfn+1/22, Δtfn+1/23, …, Δtfn+1/2N−2, (η/2)γn+1/22+Δtfn+1/2N−1]T,
η=βΔt/(Δx2), γn+1/2i=γi(tn+1/2), i=1,2, and fn+1/2i=f(xi,tn+1/2), i=2,…,N−1.
In Eq (5.5), the matrix A is square, symmetric and positive definite. Traditionally, iterative methods have been used to find the solution of the linear system (5.5). The well-known weighted Jacobi (WJ) and successive over-relaxation (SOR) methods [39,40] have been chosen for exemplification here (see Table 1).
Linear system | Iterative method | Specific name |
Aun+1=Gn | Du(n+1,s+1)=(D−ωA)u(n+1,s)+ωGn | WJ |
(D−ωL)u(n+1,s+1)=((D−ωL)−ωA)u(n+1,s)+ωGn | SOR |
Here, ω is a weight parameter, D is the diagonal part of the matrix A and L is the lower triangular part of the matrix D−A.
The discussion on the stability of WJ and SOR in solving the linear system (5.5) can be found in [39,40].
Let us consider the linear system
Au=G, | (5.6) |
where A:Rl→Rl is a linear and positive operator and u,G∈Rl. To find the solution of the linear system (5.6), we manipulate it into the form of the fixed-point equation u=T(u). For example, the well-known weighted Jacobi (WJ), successive over-relaxation (SOR) and Gauss-Seidel (GS; SOR with ω=1) methods [38,39,40] present the linear system (5.6) in the fixed-point forms TWJ(u)=u, TSOR(u)=u and TGS(u)=u, respectively (see Table 2).
Linear system | Fixed-point mapping T(u) |
Au=G | TWJ(u)=(I−ωD−1A)u+ωD−1G |
TSOR(u)=(I−ω(D−ωL)−1A)u+ω(D−ωL)−1G |
Let Tu=Su+c, where u,c∈K, and let S be a linear self-mapping on a nonempty closed convex subset K of a uniformly convex Banach space E with ∥S∥<1; then, T is a contraction and, in particular, a nonexpansive mapping on K. Indeed, for all u,v∈K, we have
∥Tu−Tv∥=∥Su−Sv∥≤∥S∥∥u−v∥<∥u−v∥. |
Therefore, we write the operators TWJ and TSOR in the forms TWJu=SWJu+cWJ, where
SWJ=I−ωD−1A,cWJ=ωD−1G,
and TSORu=SSORu+cSOR, where
SSOR=I−ω(D−ωL)−1A,cSOR=ω(D−ωL)−1G.
To make TWJ and TSOR nonexpansive mappings, the weight parameter ω must be chosen so that the norms of SWJ and SSOR are less than one. Moreover, the optimal weight parameter ωo, which yields the smallest norm for each type of operator S, is indicated in Table 3.
Different types of operator S | Implemented weight parameter ω | Optimal weight parameter ωo |
SWJ | 0<ω<2/λmax(D−1A) | ωo=2/(λmin(D−1A)+λmax(D−1A))
SSOR | 0<ω<2 | ωo=2/(1+√(1−ρ2))
The parameters λmax(D−1A) and λmin(D−1A) are the maximum and minimum eigenvalues of the matrix D−1A, respectively, and ρ is the spectral radius of the iteration matrix of the Jacobi method.
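A small sketch, assuming illustrative values of η and of the grid size, that builds the matrix of scheme (5.5), computes the optimal weights of Table 3 and verifies that the corresponding operators S have spectral radius less than one.

```python
import numpy as np

eta, m = 0.5, 99                                     # illustrative eta = beta*dt/dx^2
A = (np.diag(np.full(m, 1 + eta))
     + np.diag(np.full(m - 1, -eta / 2), -1)
     + np.diag(np.full(m - 1, -eta / 2), 1))         # matrix of scheme (5.5)
D = np.diag(np.diag(A))
L = np.tril(D - A, -1)                               # lower triangular part of D - A

lam = np.real(np.linalg.eigvals(np.linalg.inv(D) @ A))
w_wj = 2.0 / (lam.min() + lam.max())                 # omega_o for S_WJ (Table 3)
rho = np.abs(1.0 - lam).max()                        # spectral radius, Jacobi matrix
w_sor = 2.0 / (1.0 + np.sqrt(1.0 - rho ** 2))        # omega_o for S_SOR (Table 3)

S_wj = np.eye(m) - w_wj * np.linalg.inv(D) @ A
S_sor = np.eye(m) - w_sor * np.linalg.inv(D - w_sor * L) @ A
spec = lambda M: np.abs(np.linalg.eigvals(M)).max()
print(spec(S_wj), spec(S_sor))   # both < 1; for the symmetric S_WJ this is its 2-norm
```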
Next, we introduce the proposed method for solving the linear system (5.5) by using the nonexpansive mappings Ti and Tj. The generated sequence {un} is created iteratively by u0∈Rn and
w(n,s+1)=(1−ζn)u(n,s)+ζnTiu(n,s),z(n,s+1)=(1−ϑn)Tju(n,s)+ϑnTiw(n,s+1),u(n+1,s+1)=(1−ηn)z(n,s+1)+ηnTjz(n,s+1),n≥0, | (5.7) |
where the second superscript "s'' denotes the number of iterations s=0,1,…,ˆSn; set
ζn=ϑn=ηn=0.9 | (5.8) |
as the default parameter. Since we focus on the convergence of the proposed algorithm, the stability analysis in terms of choosing the step sizes for time is not discussed in detail. The step size for time for the proposed algorithm is based on the smallest step size chosen for the WJ and SOR methods when solving the linear system (5.5) generated from the discretization of the considered problem (5.1). In all computations, we used β=25, Δt=Δx2/10 (step size for time) and ϵd=10−10. For testing purposes only, both computations are performed for 0≤t≤0.01 (when t≫0.05,u(x,t)→0). The following stopping criterion is used:
‖u(n+1,ˆSn+1)−u(n+1,ˆSn)‖2<ϵd, |
where "ˆSn" denotes the number of the last iteration at time tn and set u(n+1,ˆSn+1)=u(n,0). All computations can be performed by using a uniform grid of 101 nodes, which corresponds to the solution of the linear system (5.5) with a 99×99 size. The weight parameter ω of the proposed algorithm should be set as its optimum weight parameter (ωo) defined in Table 3. We apply the WJ, SOR, GS and proposed algorithms with two different operators Ti and Tj (proposed algorithm with Ti−Tj) to obtain the solution of the linear system (5.5) of the heat problem with Dirichlet boundary conditions and the initial data described by (5.1). For the proposed algorithm, the nonexpansive mapping Ti is chosen from the following operators: TGS,TWJ and TSOR.
Let us consider the following heat problem:
ut=βuxx+0.4β(4π2−1)e−4βtcos(4πx), 0≤x≤1, 0<t<ts,
u(x,0)=cos(4πx)/10,
u(0,t)=e−4βt/10, u(1,t)=e−4βt/10,
with exact solution u(x,t)=e−4βtcos(4πx)/10. | (5.9) |
The results of the basic iterative methods (WJ, GS, SOR) and the proposed algorithms are demonstrated and discussed for the following cases:
Case Ⅰ: WJ method, Case Ⅱ: GS method, Case Ⅲ: SOR method,
Case Ⅳ: The proposed algorithm with TWJ−TGS,
Case Ⅴ: The proposed algorithm with TWJ−TSOR,
Case Ⅵ: The proposed algorithm with TGS−TWJ,
Case Ⅶ: The proposed algorithm with TGS−TSOR,
Case Ⅷ: The proposed algorithm with TSOR−TWJ,
Case Ⅸ: The proposed algorithm with TSOR−TGS.
The exact error can be measured by using ‖un−u‖2. Figure 1 shows the approximate solution of problem (5.9) with 101 nodes at t=0.01, as obtained by using the basic iterative methods and the proposed algorithm for Cases Ⅰ–Ⅸ.
It can be seen in Figure 1 that all numerical solutions match the analytical solution reasonably well. Thus, it can be concluded that all sequences generated by the proposed method with two different operators Ti and Tj converge to their common fixed-point solution.
Figure 2 shows the trend of the iteration number for the basic iterative methods and the proposed algorithm for Cases Ⅳ–Ⅸ to solve problem (5.5), as generated from the discretization of the considered problem (5.9) with 101 nodes.
It can be seen in Figure 2 that the proposed method with two different operators Ti and Tj uses fewer iterations at each time step compared with the basic iterative methods. The proposed methods with TGS−TSOR and TSOR−TGS require the smallest number of iterations at each time step.
Next, the proposed algorithm with TSOR−TGS is chosen to test and verify the order of accuracy of the presented FDS in solving the heat equation problem (5.9). All computations have been performed by using uniform grids of 11, 21, 41, 81 and 161 nodes, which correspond to the solution of the discretization of the heat equation problem (5.9) with Δx=0.1, 0.05, 0.025, 0.0125 and 0.00625, respectively. We found that the proposed algorithm with TSOR−TGS exhibits second-order accuracy (Figure 3) when the distances between the graphs for all computational grid sizes are compared. That is, the order of accuracy of the proposed algorithm with TSOR−TGS agrees with the construction of the FDS. Figure 4 shows the trend of the average iteration number for the basic iterative methods as compared with all cases of the proposed algorithm when applied for solving the discretization of the considered problem (5.9) with various grid sizes.
It can be seen in this figure that the average number of iterations per time step for the proposed method with Cases Ⅳ–Ⅸ is less than that for the WJ, GS and SOR methods. Moreover, the proposed algorithm with TSOR−TGS gives the smallest average number of iterations for all of the considered grid sizes. Although the small number of iterations per time step shows the excellent performance of the proposed method, the stability condition of the proposed algorithm needs to be considered carefully when choosing the step size for time.
Let B be the degraded image of the true image U in the matrix form of ˜m rows and ˜n columns (B,U∈R˜mטn). The key to obtaining the image restoration model is to rearrange the elements of the images B and U into the column vectors by stacking the columns of these images into two long vectors b and u where b=vec(B) and u=vec(U), both of length n=˜m˜n. The image restoration problem can be modeled in a one-dimensional vector by using the following linear equation system:
b=Mu, | (5.10) |
where u∈Rn is an original image, b∈Rn is the observed image, M∈Rn×n is the blurring operation and n=˜m˜n. In order to solve problem (5.10), we aim to approximate the original image u from the observed image b, which leads to the following least squares (LS) problem:
minu12‖b−Mu‖22, | (5.11) |
where ‖.‖2 is defined by ‖u‖2=√∑ni=1|ui|2.
Next, we will apply our method for solving the LS problem (5.11) and the image restoration problem (5.10) with the following settings: Let M∈Rn×n be a degraded matrix and b∈Rn. We obtain the following proposed method to find the solution of the image restoration problem:
wn=(1−ζn)un+ζnTun,zn=(1−ϑn)Tun+ϑnTwn,un+1=(1−ηn)zn+ηnTzn, | (5.12) |
where Tu=u−μMT(Mu−b), μ∈(0,2/‖MTM‖2) and ηn,ϑn,ζn∈(0,1) for all n∈N.
The goal of the image restoration problem is to recover the original image from the observed image; the blurring matrix M, however, must be known when applying the algorithm (5.12). Now, we present a new idea for solving the image restoration problem when the observed images b1,b2,...,bN can be restored by using the blurring matrices M1,M2,...,MN with different qualities (bi=Miu, i=1,2,…,N). Let us consider the following problems:
minu∈Rn12‖M1u−b1‖22,minu∈Rn12‖M2u−b2‖22,...,minu∈Rn12‖MNu−bN‖22, | (5.13) |
where u is the original true image (the common solution), Mi is a blurring matrix and bi is the image blurred by the blurring matrix Mi. However, many degraded matrices must then be known. For simplicity, we give an example of applying the algorithm (5.12) to find the original image u with a pair of blurring matrices Mi and Mj:
wn=(1−ζn)un+ζnTiun,zn=(1−ϑn)Tjun+ϑnTiwn,un+1=(1−ηn)zn+ηnTjzn, | (5.14) |
where Tku=u−μkMTk(Mku−bk). The implemented algorithm (5.14) is proposed for the purpose of solving the image restoration problem by using a pair of the blurring matrices Mi and Mj with the default parameter (5.8) and μk=1/‖MTkMk‖2, and it is called the proposed algorithm with Ti−Tj. In the case of Ti=Tj, we call it the proposed algorithm with Ti.
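The sketch below is a small synthetic stand-in for scheme (5.14): the two well-conditioned random "blurring" matrices and the random vectorized "image" are assumptions, chosen only so that a common solution exists and the run finishes quickly; real blurring matrices would be used in practice.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 64
u_true = rng.random(n)                               # vectorized "image"
M1 = np.eye(n) + 0.01 * rng.normal(size=(n, n))      # synthetic degradation matrices
M2 = np.eye(n) + 0.01 * rng.normal(size=(n, n))
b1, b2 = M1 @ u_true, M2 @ u_true                    # two observed "images"

def make_T(M, b):
    mu = 1.0 / np.linalg.norm(M.T @ M, 2)            # mu_k = 1 / ||M_k^T M_k||_2
    return lambda u: u - mu * M.T @ (M @ u - b)

T1, T2 = make_T(M1, b1), make_T(M2, b2)
u = np.zeros(n)
for _ in range(500):                                 # scheme (5.14), parameters (5.8)
    w = 0.1 * u + 0.9 * T1(u)
    z = 0.1 * T2(u) + 0.9 * T1(w)
    u = 0.1 * z + 0.9 * T2(z)
print(np.linalg.norm(u - u_true) / np.linalg.norm(u_true))  # relative error -> small
```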
The original RGB image shown in Figure 5 is used to demonstrate the practicability of the proposed algorithm. The Cauchy error and the relative error are measured by using the max-norm ‖un−un−1‖∞ and ‖un−u‖∞/‖u‖∞, respectively. The performance of the compared algorithms at un for the image deblurring process can be measured quantitatively by using the peak signal-to-noise ratio (PSNR), which is defined by
PSNR(un)=10log10(2552/MSE),
where MSE=‖un−u‖22. Three different types of the original RGB image degraded by the blurring matrices M1,M2 and M3 are shown in Figure 6. These have been used to test the implemented algorithm.
Next, we present the restoration of images that have been corrupted by the following blur types:
Type Ⅰ. Gaussian blur of filter size 9×9 with standard deviation σ=4. (The original image has been degraded by the blurring matrix M1.)
Type Ⅱ. Out of focus blur (disk) with radius r=6. (The original image has been degraded by the blurring matrix M2.)
Type Ⅲ. Motion blur with a motion length of 21 pixels and a motion orientation of 11∘ (θ=11∘). (The original image has been degraded by the blurring matrix M3.)
The image U and the three different types of the blurred image B (Figure 6) are represented in red-green-blue components. We denote Ur,Ug,Ub and Br,Bg,Bb as the gray-scale images that constitute the red-green-blue channels of the applied image U and the blurred image B, respectively. Thus, we define the column vectors u and b from the color images U and B, each of length n=3˜m˜n. After that, we applied the proposed algorithms to obtain the common solution of the image restoration problem with these three blurring matrices. The convergence properties of the proposed algorithm with the permutations of the blurring matrices M1,M2 and M3 are demonstrated and discussed for the following cases:
Case Ⅰ: Algorithm (5.12) with T1, Case Ⅱ: Algorithm (5.12) with T2,
Case Ⅲ: Algorithm (5.12) with T3, Case Ⅳ: Algorithm (5.14) with T1−T2,
Case Ⅴ: Algorithm (5.14) with T1−T3, Case Ⅵ: Algorithm (5.14) with T2−T1,
Case Ⅶ: Algorithm (5.14) with T2−T3, Case Ⅷ: Algorithm (5.14) with T3−T1,
Case Ⅸ: Algorithm (5.14) with T3−T2.
Figure 7 shows the plot behavior for the Cauchy error and relative error of the reconstructed RGB image in all cases.
The Cauchy error plots and the relative error plots guarantee that the presented method converges to the common solution of the restoration problem (5.13) in all cases; thus, they show the validity and confirm the convergence of the proposed method. Figure 8 shows all cases of the PSNR plot for the proposed method with 100,000 iterations. Based on the PSNR plots in Figure 8, the quality of all images restored by using the proposed algorithm to solve the deblurring problem improves as the iteration number increases. Moreover, the PSNR quality of the observed image is improved when the proposed method with a pair Ti−Tj, i≠j, is used to solve the deblurring problem, compared with the proposed method in which only one blurring matrix is used. The best cases for recovering the observed image occur when the proposed method with T2−T3 or T3−T2 is used.
Figure 9 demonstrates crops of the reconstructed RGB images at the 500th iteration obtained by using the proposed algorithm with the operators T1, T2, T3 and T3−T2, respectively. It can be seen in these figures that the proposed algorithm with T3−T2 restores the applied degraded image with the smoothest quality.
Next, we compare the effectiveness of the proposed methods with the Mann iteration [10], Ishikawa iteration [11], S-iteration [12], Noor iteration [14], AK-iteration [15], Sahu et al. iteration [16] and SP-iteration [18] through the PSNR plots in Figure 10. The parameters ζn, ϑn and ηn of the comparative algorithms are set to the default parameter presented in (5.8). Since all comparative methods use only one blurring matrix in their algorithms, the proposed method with Cases Ⅰ–Ⅲ is used to compare the effectiveness of our methods.
It can be seen in Figure 10 that the PSNR quality of the proposed method is better than that of the methods of Mann, Ishikawa, S, Noor and Sahu et al., while the proposed method is as effective as the remaining methods. However, the proposed method has been designed to be used with a pair Ti−Tj, where i≠j. With this advantage, we found that the proposed method with Cases Ⅳ–Ⅸ is more effective when the number of iterations is large enough (see Figure 8).
In signal processing, compressed sensing can be modeled as the following underdetermined linear equation system:
y=Au+ν,
where u∈Rn is an original signal with n components to be recovered, ν,y∈Rm are the noise and the observed signal with noise, each with m components, respectively, and A∈Rm×n (m<n) is a degraded matrix. Finding the solutions of this underdetermined linear equation system can be seen as solving the LASSO problem
minu∈Rn12‖y−Au‖22+λ‖u‖1, | (5.15) |
where λ>0. As a result, various techniques and iterative schemes have been developed to solve the LASSO problem. We can apply our method to solve the LASSO problem (5.15) by setting Tu=proxμg(u−μ∇f(u)), where f(u)=‖y−Au‖22/2, g(u)=λ‖u‖1 and ∇f(u)=AT(Au−y).
Next, we give an example by applying our algorithm to signal recovery problems. Let A∈Rm×n (m<n) be a degraded matrix and y,ν∈Rm; we obtain the following proposed method to find the solution of the signal recovery problem:
wn=(1−ζn)un+ζnTun,zn=(1−ϑn)Tun+ϑnTwn,un+1=(1−ηn)zn+ηnTzn, | (5.16) |
where Tu=proxμg(u−μAT(Au−y)), μ∈(0,2/‖ATA‖2) and ηn,ϑn,ζn∈(0,1) for all n∈N. The goal of a signal recovery problem is to find the original signal from the observed signal without knowing the noise ν; the degraded signal operator A, however, must be known when applying the algorithm (5.16). Now, we present the method for solving the signal recovery problem when the observed signals y1,y2,...,yN can be recovered by using the known degraded matrices A1,A2,...,AN (yi=Aiu+νi, i=1,2,…,N). Let us consider the following problems:
minu∈Rn12‖A1u−y1‖22+λ1‖u‖1,minu∈Rn12‖A2u−y2‖22+λ2‖u‖1,⋮minu∈Rn12‖ANu−yN‖22+λN‖u‖1, | (5.17) |
where the true signal u is a common solution of problem (5.17). That is, we will find the true signal u through the common solution of the N LASSO problems in (5.17). However, many degraded signal operators must then be known. For simplicity, we give an example of applying our algorithm to find the common solution u of a pair of the LASSO problems in (5.17):
wn=(1−ζn)un+ζnTjun,zn=(1−ϑn)Tkun+ϑnTjwn,un+1=(1−ηn)zn+ηnTkzn, | (5.18) |
where Tiu=proxμigi(u−μiATi(Aiu−yi)), gi(u)=λi‖u‖1 and μi=1/‖ATiAi‖2 for i=1,2,...,N. The implemented algorithm (5.18) is proposed for the purpose of solving the signal recovery problem by using a pair of the degraded signal operators Aj and Ak with the default parameter (5.8), and it is called the proposed algorithm with the operator Tj−Tk. In the case of Tj=Tk, we call it the proposed algorithm with Tj.
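The sketch below is a reduced synthetic instance of scheme (5.18); the dimensions, the noiseless data and the value of λ are assumptions made for illustration, with λ chosen small so that the fixed points of T1 and T2 nearly coincide with the sparse signal.

```python
import numpy as np

def soft(u, t):                                    # prox of t * ||.||_1
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

rng = np.random.default_rng(4)
m, n, k = 64, 128, 10
u_true = np.zeros(n)
u_true[rng.choice(n, k, replace=False)] = rng.uniform(-2, 2, k)  # k-sparse signal
A1, A2 = rng.normal(size=(m, n)), rng.normal(size=(m, n))
y1, y2 = A1 @ u_true, A2 @ u_true                  # noiseless for simplicity

lam = 0.01                                         # hypothetical regularization value
def make_T(A, y):
    mu = 1.0 / np.linalg.norm(A.T @ A, 2)          # mu_i = 1 / ||A_i^T A_i||_2
    return lambda u: soft(u - mu * A.T @ (A @ u - y), mu * lam)

T1, T2 = make_T(A1, y1), make_T(A2, y2)
u = np.zeros(n)
for _ in range(3000):                              # scheme (5.18), parameters (5.8)
    w = 0.1 * u + 0.9 * T1(u)
    z = 0.1 * T2(u) + 0.9 * T1(w)
    u = 0.1 * z + 0.9 * T2(z)
snr = 20 * np.log10(np.linalg.norm(u) / np.linalg.norm(u - u_true))
print(snr)                                         # SNR grows with the iterations
```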
Next, some experiments are provided to illustrate the convergence and the effectiveness of the proposed algorithm (5.18). The original signal u with n=1024 generated by the uniform distribution in the interval [−2,2] with k=70 nonzero elements is used to create the observation signal yi=Aiu+νi,i=1,2,3 with m=512 (see Figure 11).
The process is started when the signal initial data u0 with k=70 and n=1024 is picked randomly (see Figure 12).
The observation signal yi is shown in Figure 13.
The matrices Ai are generated by the normal distribution with a mean of zero and a variance of one, and the white Gaussian noises νi, i=1,2,3, are shown in Figure 14.
The convergence properties of the proposed algorithm with the permutations of the degraded matrices A1,A2 and A3 are demonstrated and discussed, both theoretically and experimentally, for the following cases:
Case Ⅰ: Algorithm (5.16) with T1, Case Ⅱ: Algorithm (5.16) with T2,
Case Ⅲ: Algorithm (5.16) with T3, Case Ⅳ: Algorithm (5.18) with T1−T2,
Case Ⅴ: Algorithm (5.18) with T1−T3, Case Ⅵ: Algorithm (5.18) with T2−T1,
Case Ⅶ: Algorithm (5.18) with T2−T3, Case Ⅷ: Algorithm (5.18) with T3−T1,
Case Ⅸ: Algorithm (5.18) with T3−T2.
The Cauchy error and the relative signal error can also be measured by using the max-norm. The performance of the proposed method at the nth iteration can be measured quantitatively by using the signal-to-noise ratio (SNR), which is defined by
SNR(un)=20log10(‖un‖2/‖un−u‖2),
where un is the recovered signal at the nth iteration by using the proposed method. The Cauchy error, signal relative error and SNR quality of the proposed methods for recovering the degraded signal are shown in Figure 15.
The Cauchy error plot guarantees that all cases of the presented method converge to a common solution of the signal recovery problem (5.17) with N=2. It is remarkable that the signal relative error decreases until it converges to some constant value. Figure 16 shows all cases of the SNR plot for the proposed method with 100,000 iterations. From the SNR quality plot in Figure 16, it can be seen that the SNR value increases until it converges to some constant value. Through these results, it can be concluded that solving the signal recovery problem by the proposed algorithm achieves quality improvements of the observed signal. Moreover, the SNR quality of the observed signal is greatly improved when the proposed method with Tj−Tk, Tj≠Tk, is used for solving the signal recovery problem. The optimal case for recovering the observed signal occurred when the proposed method with T3−T1 was used.
Figure 17 shows the signals restored by using the optimal proposed algorithm with the operators T3−T1 as compared with the proposed algorithms with the operators T1, T2 and T3.
Next, we compare the effectiveness of the proposed method with the Mann iteration [10], Ishikawa iteration [11], S-iteration [12], Noor iteration [14], AK-iteration [15], Sahu et al. iteration [16] and SP-iteration [18] through the SNR plots in Figure 18. The parameters ζn, ϑn and ηn of the comparative algorithms are set to the default parameter of (5.8). Since all comparative methods use only one degraded matrix in their algorithms, the proposed method with Cases Ⅰ–Ⅲ is used to compare the effectiveness of our methods.
It can be seen in Figure 18 that the SNR quality of the proposed and SP methods is better than that of the remaining methods. However, the proposed method has been designed to be used with a pair Tj−Tk, where j≠k. With this advantage, we found that the proposed method with Cases Ⅳ–Ⅸ is more effective when the number of iterations is large enough (see Figure 16).
Moreover, we show the efficiency of the proposed method with the operators T3−T1 in solving the k-sparse signal recovery problem with k=70,35,18,9. To obtain these k-sparse signals, the number of nonzero elements of the signal in Figure 11 was halved repeatedly until k=9. When solving these k-sparse signal recovery problems by using the suggested technique with the operators T3−T1, the regularization parameters λ1 and λ3 are set as λ1=λ3=λ. The SNR quality and the relative signal error as functions of the regularization parameter λ∈[5,75] are shown in Figure 19.
The most recent figure illustrates that the offered algorithm can solve the sparse signal recovery challenge. It shows that, with the optimal regularization parameter λ, the SNR quality increased and the relative error decreased as the number of nonzero elements of the sparse signal was halved. Following that, we compared the numerical simulations of the proposed method to the well-known LASSO-ADMM methodology (see [41,42]). When using the LASSO-ADMM algorithm to solve the sparse signal recovery problem and comparing it with the proposed method with the operators T3−T1, we set the penalty parameter ρ=1 and varied the regularization parameter λ from 0 to 1.
The SNR and relative error norm plots in Figures 19 and 20, respectively, show that the reconstruction via the recommended technique is more accurate than that resulting from the LASSO-ADMM algorithm at its ideal regularization parameter λ. However, the LASSO-ADMM algorithm needs fewer observations than our technique to obtain the same probability of exact reconstruction. This is due to the fact that the LASSO-ADMM technique is solved implicitly (the inverse of the noise covariance matrix is necessary), whereas the suggested algorithm is not. Furthermore, when determining the best regularization parameter λ to achieve the maximum SNR quality, the SNR plot for the LASSO-ADMM algorithm can provide an adequate regularization parameter compared with the proposed approaches.
Splitting methods have received a lot of attention lately because many nonlinear problems arising in applied areas, such as signal processing and image restoration, are modeled mathematically as a nonlinear equation whose operator is decomposed as the sum of two nonlinear operators. Most investigations of splitting methods are carried out in Hilbert spaces. This work developed an iterative scheme in Banach spaces. By using sunny nonexpansive retractions, which are different from the metric projection in Banach spaces, we have developed the CR-iteration algorithm for two quasi-nonexpansive nonself-mappings in the setting of uniformly convex Banach spaces. We proved convergence theorems for our iterative scheme and presented applications to common zeros of accretive operators, convexly constrained least square problems, convex minimization problems, differential problems, image restoration problems and signal recovery problems. As applications, we found that, when the proposed method is used to find the common solution of image and signal recovery problems, it expands the quality range of the recovered image and signal. Our algorithm was found to be flexible and of better quality than the Mann, Ishikawa, S, Noor, AK, Sahu et al. and SP iterations (see Figures 10 and 18). We also presented the idea of applying pairs of basic iterative methods, such as WJ, GS and SOR, within the proposed method to obtain the unique solution of the well-known one-dimensional heat equation with Dirichlet boundary conditions and initial data. Furthermore, the proposed algorithm can solve the sparse signal recovery challenge: the simulation results demonstrate its efficacy for solving k-sparse signal recovery problems, and the numerical experiments show the excellent performance of the proposed method.
This research project was supported by the Thailand Science Research and Innovation Fund and the University of Phayao Grant No. UoE65002. T. Thianwan would like to thank the Thailand Science Research and Innovation Fund and the University of Phayao Grant No. FF65-RIM072. Also, the third author acknowledges the partial support provided by the revenue budget in 2022 (PBTSC65014), School of Science, University of Phayao.
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
[1] K. Kankam, N. Pholasa, P. Cholamjiak, On convergence and complexity of the modified forward-backward method involving new linesearches for convex minimization, Math. Meth. Appl. Sci., 42 (2019), 1352–1362. https://doi.org/10.1002/mma.5420
[2] E. J. Candès, M. B. Wakin, An introduction to compressive sampling, IEEE Signal Process. Mag., 25 (2008), 21–30.
[3] S. Suantai, S. Kesornprom, P. Cholamjiak, A new hybrid CQ algorithm for the split feasibility problem in Hilbert spaces and its applications to compressed sensing, Mathematics, 7 (2019), 789. https://doi.org/10.3390/math7090789
[4] D. Kitkuan, P. Kumam, A. Padcharoen, W. Kumam, P. Thounthong, Algorithms for zeros of two accretive operators for solving convex minimization problems and its application to image restoration problems, J. Comput. Appl. Math., 354 (2019), 471–495. https://doi.org/10.1016/j.cam.2018.04.057
[5] A. Padcharoen, P. Kumam, Y. J. Cho, Split common fixed point problems for demicontractive operators, Numer. Algorithms, 82 (2019), 297–320. https://doi.org/10.1007/s11075-018-0605-0
[6] P. Cholamjiak, Y. Shehu, Inertial forward-backward splitting method in Banach spaces with application to compressed sensing, Appl. Math., 64 (2019), 409–435.
[7] W. Jirakitpuwapat, P. Kumam, Y. J. Cho, K. Sitthithakerngkiet, A general algorithm for the split common fixed point problem with its applications to signal processing, Mathematics, 7 (2019), 226. https://doi.org/10.3390/math7030226
[8] V. Berinde, Iterative approximation of fixed points: Lecture notes in mathematics, 2 Eds., Springer: Berlin, Germany, 2007.
[9] E. Picard, Mémoire sur la théorie des équations aux dérivées partielles et la méthode des approximations successives, J. Math. Pures Appl., 231 (1890), 145–210.
[10] W. R. Mann, Mean value methods in iteration, Proc. Am. Math. Soc., 4 (1953), 506–510. https://doi.org/10.1090/S0002-9939-1953-0054846-3
[11] S. Ishikawa, Fixed points by a new iteration method, Proc. Am. Math. Soc., 44 (1974), 147–150. https://doi.org/10.1090/S0002-9939-1974-0336469-5
[12] R. P. Agarwal, D. O'Regan, D. R. Sahu, Iterative construction of fixed points of nearly asymptotically nonexpansive mappings, J. Nonlinear Convex Anal., 8 (2007), 61–79.
[13] R. Chugh, V. Kumar, S. Kumar, Strong convergence of a new three step iterative scheme in Banach spaces, Amer. J. Comput. Math., 2 (2012), 345–357. https://doi.org/10.4236/ajcm.2012.24048
[14] M. A. Noor, New approximation schemes for general variational inequalities, J. Math. Anal. Appl., 251 (2000), 217–229. https://doi.org/10.1006/jmaa.2000.7042
[15] K. Ullah, M. Arshad, On different results for new three step iteration process in Banach spaces, SpringerPlus, 5 (2016), 1616. https://doi.org/10.1186/s40064-016-3056-x
[16] V. K. Sahu, H. K. Pathak, R. Tiwari, Convergence theorems for new iteration scheme and comparison results, Aligarh Bull. Math., 35 (2016), 19–42.
[17] B. S. Thakur, D. Thakur, M. Postolache, New iteration scheme for approximating fixed point of nonexpansive mappings, Filomat, 30 (2016), 2711–2720. https://doi.org/10.2298/FIL1610711T
[18] W. Phuengrattana, S. Suantai, On the rate of convergence of Mann, Ishikawa, Noor and SP-iterations for continuous functions on an arbitrary interval, J. Comput. Appl. Math., 235 (2011), 3006–3014.
[19] C. Byrne, A unified treatment of some iterative algorithms in signal processing and image reconstruction, Inverse Probl., 20 (2004), 103–120. https://doi.org/10.1088/0266-5611/20/1/006
[20] P. L. Combettes, V. R. Wajs, Signal recovery by proximal forward-backward splitting, Multiscale Model. Simul., 4 (2005), 1168–1200. https://doi.org/10.1137/050626090
[21] Y. Censor, T. Bortfeld, B. Martin, A. Trofimov, A unified approach for inversion problems in intensity-modulated radiation therapy, Phys. Med. Biol., 51 (2006), 2353–2365. https://doi.org/10.1088/0031-9155/51/10/001
[22] Y. Censor, T. Elfving, N. Kopf, T. Bortfeld, The multiple set split feasibility problem and its applications, Inverse Probl., 21 (2005), 2071–2084.
[23] A. Ben-Tal, A. Nemirovski, Lectures on modern convex optimization: analysis, algorithms, and engineering applications, MPS/SIAM Ser. Optim., SIAM: Philadelphia, PA, USA, 2001.
[24] J. Bioucas-Dias, M. Figueiredo, A new TwIST: Two-step iterative shrinkage/thresholding algorithms for image restoration, IEEE Trans. Image Process., 16 (2007), 2992–3004. https://doi.org/10.1109/TIP.2007.909319
[25] S. S. Chen, D. L. Donoho, M. A. Saunders, Atomic decomposition by basis pursuit, SIAM J. Sci. Comput., 20 (1998), 33–61. https://doi.org/10.1137/S1064827596304010
[26] D. L. Donoho, I. M. Johnstone, Adapting to unknown smoothness via wavelet shrinkage, J. Am. Statist. Assoc., 90 (1995), 1200–1224. https://doi.org/10.1080/01621459.1995.10476626
[27] M. A. T. Figueiredo, R. D. Nowak, S. J. Wright, Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems, IEEE J. Sel. Top. Signal Process., 1 (2007), 586–597. https://doi.org/10.1109/JSTSP.2007.910281
[28] S. S. Chang, C. F. Wen, J. C. Yao, Zero point problem of accretive operators in Banach spaces, Bull. Malays. Math. Sci. Soc., 42 (2019), 105–118. https://doi.org/10.1007/s40840-017-0470-3
[29] F. E. Browder, Nonlinear mappings of nonexpansive and accretive type in Banach spaces, Bull. Am. Math. Soc., 73 (1967), 875–882. https://doi.org/10.1090/S0002-9904-1967-11823-8
[30] F. E. Browder, Semicontractive and semiaccretive nonlinear mappings in Banach spaces, Bull. Am. Math. Soc., 74 (1968), 660–665. https://doi.org/10.1090/S0002-9904-1968-11983-4
[31] I. Cioranescu, Geometry of Banach spaces, duality mappings and nonlinear problems, Kluwer: Amsterdam, Netherlands, 1990.
[32] W. Takahashi, Nonlinear functional analysis: Fixed point theory and its applications, Yokohama Publishers: Yokohama, Japan, 2000.
[33] K. Goebel, S. Reich, Uniform convexity, hyperbolic geometry and nonexpansive mappings, Marcel Dekker Inc.: New York, USA, 1984.
[34] Z. Opial, Weak convergence of the sequence of successive approximations for nonexpansive mappings, Bull. Am. Math. Soc., 73 (1967), 591–597. https://doi.org/10.1090/S0002-9904-1967-11761-0
[35] R. T. Rockafellar, On the maximal monotonicity of subdifferential mappings, Pac. J. Math., 33 (1970), 209–216. https://doi.org/10.2140/pjm.1970.33.209
[36] H. K. Xu, Inequalities in Banach spaces with applications, Nonlinear Anal., 16 (1991), 1127–1138. https://doi.org/10.1016/0362-546X(91)90200-K
[37] D. R. Sahu, A. Pitea, M. Verma, A new iteration technique for nonlinear operators as concerns convex programming and feasibility problems, Numer. Algorithms, 83 (2020), 421–449. https://doi.org/10.1007/s11075-019-00688-9
[38] D. Yambangwai, N. Moshkin, Deferred correction technique to construct high-order schemes for the heat equation with Dirichlet and Neumann boundary conditions, Eng. Lett., 21 (2013), 61–67.
[39] D. Yambangwai, W. Cholamjiak, T. Thianwan, H. Dutta, On a new weight tridiagonal iterative method and its applications, Soft Comput., 25 (2021), 725–740.
[40] S. M. Grzegorski, On optimal parameter not only for the SOR method, Appl. Comput. Math., 8 (2019), 82–87. https://doi.org/10.11648/j.acm.20190805.11
[41] I. Daubechies, M. Defrise, C. D. Mol, An iterative thresholding algorithm for linear inverse problems with a sparsity constraint, Commun. Pure Appl. Math., 57 (2004), 1413–1457. https://doi.org/10.1002/cpa.20042
[42] A. Elgabli, A. Elghariani, A. O. Al-Abbasi, M. Bell, Two-stage LASSO ADMM signal detection algorithm for large scale MIMO, 2017 51st Asilomar Conference on Signals, Systems, and Computers, 2017, 1660–1664.