Research article

An efficient iterative algorithm for common solutions of three G-nonexpansive mappings in Banach spaces involving a graph with applications to signal and image restoration problems

  • Received: 26 May 2021 Accepted: 20 October 2021 Published: 25 October 2021
  • MSC : 47E10, 47H09, 47H10, 54H25

  • In this paper, three G-nonexpansive mappings are implemented and analyzed using an efficient modified three-step iteration algorithm. Assuming coordinate-convexity in a uniformly convex Banach space endowed with a directed graph, conditions for the weak and strong convergence of the scheme are determined. We give numerical comparisons to back up our main theorem, and compare our algorithm's convergence behavior to that of the three-step Noor iteration and SP-iteration. We use our proposed algorithm to solve image deblurring problems as an application. In addition, we discuss a novel approach to signal recovery in situations where the type of noise is unknown.

    Citation: Damrongsak Yambangwai, Tanakit Thianwan. An efficient iterative algorithm for common solutions of three G-nonexpansive mappings in Banach spaces involving a graph with applications to signal and image restoration problems[J]. AIMS Mathematics, 2022, 7(1): 1366-1398. doi: 10.3934/math.2022081




    Fixed point theory is an extremely active research area because of its multifaceted applications. The typical situation is that a self-map on a set admits a fixed point under certain conditions. In a complete metric space, Banach [1] proved the existence of a unique fixed point for contractions. The Banach contraction principle has been generalized and extended in several ways due to its applications in mathematics and other related disciplines. A recent version of the theorem was proved in Banach spaces endowed with a graph G, where G=(V(G),E(G)) is a directed graph whose vertex set V(G) is a subset of the space and whose edge set E(G) contains all loops.

    In order to extend the Banach contraction principle by combining concepts from fixed point theory and graph theory, Jachymski [2] introduced Banach G-contractions in complete metric spaces endowed with a graph G whose vertex set is a convex subset of the underlying metric space; see, e.g., [3,4,5,6,7,8,9,10].

    In the last few decades, the approximation of fixed points by iterative schemes for G-contraction, G-nonexpansive and G-monotone nonexpansive mappings has been studied extensively by various authors. In 2012, Aleomraninejad et al. [11] presented some iterative schemes for G-contraction and G-nonexpansive mappings in a metric space with a directed graph and established the convergence of such schemes. In 2015, Alfuraidan and Khamsi [12] defined the concept of G-monotone nonexpansive multivalued mappings on a hyperbolic metric space with a graph. Alfuraidan [13] studied the existence of fixed points of monotone nonexpansive mappings on a Banach space endowed with a directed graph. Tiammee et al. [14] proved the Browder convergence theorem for G-nonexpansive mappings and studied the convergence of the Halpern iteration process to the projection of the initial point onto the set of fixed points of a G-nonexpansive mapping in a Hilbert space with a directed graph. Tripak [15] defined an Ishikawa-type iteration process for two G-nonexpansive mappings and gave some weak and strong convergence theorems for such iterations in a Banach space endowed with a graph.

    The three-step iterative scheme [16], usually called the Noor iteration method, can be used to approximate a common fixed point of three G-nonexpansive mappings \mathcal{T}_{1}, \mathcal{T}_{2} and \mathcal{T}_{3} via

    \begin{equation} \begin{split} z_{n} & = (1 - \gamma_{n})x_{n} + \gamma_{n}\mathcal{T}_{3}x_{n}, \\ y_{n} & = (1 - \beta_{n})x_{n} + \beta_{n}\mathcal{T}_{2}z_{n}, \\ x_{n+1} & = (1 - \alpha_{n})x_{n} + \alpha_{n}\mathcal{T}_{1}y_{n}, \, n\geq 0, \end{split} \end{equation} (1.1)

    where \{\alpha_{n}\}, \{\beta_{n}\} and \{\gamma_{n}\} are sequences in [0, 1].
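As a concrete illustration, the Noor scheme (1.1) can be sketched numerically. The mappings below are illustrative contractions on the real line (hence nonexpansive) with common fixed point 0; they, and the parameter choices, are assumptions made for the demonstration, not the mappings used in the paper's experiments.

```python
# Minimal sketch of the three-step Noor iteration (1.1) on the real line.
# T1, T2, T3 are illustrative contractions sharing the fixed point 0.

def noor_iteration(T1, T2, T3, x0, alpha, beta, gamma, steps):
    x = x0
    for n in range(steps):
        z = (1 - gamma) * x + gamma * T3(x)   # z_n
        y = (1 - beta) * x + beta * T2(z)     # y_n built from x_n
        x = (1 - alpha) * x + alpha * T1(y)   # x_{n+1} built from x_n
    return x

T1 = lambda x: x / 2.0
T2 = lambda x: x / 3.0
T3 = lambda x: -x / 2.0

x_final = noor_iteration(T1, T2, T3, x0=1.0, alpha=0.5, beta=0.5, gamma=0.5, steps=50)
print(abs(x_final))  # distance to the common fixed point 0
```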

    Glowinski and Le Tallec [17] employed three-step iterative approaches in elastoviscoplasticity, eigenvalue computation and the theory of liquid crystals. It was shown in [17] that the three-step iterative process yields better numerical results than the corresponding one-step and two-step approximate iterations. In 1998, Haubruge, Nguyen and Strodiot [18] investigated the convergence analysis of the three-step methods of Glowinski and Le Tallec [17] and applied these methods to obtain new splitting-type algorithms for solving variational inequalities, separable convex programming and the minimization of a sum of convex functions. They also proved that three-step iterations lead to highly parallelized algorithms under certain conditions.

    Recently, Sridarat et al. [19] modified the SP-iteration process for three G-nonexpansive mappings \mathcal{T}_{1}, \mathcal{T}_{2} and \mathcal{T}_{3} as follows:

    \begin{equation} \begin{split} z_{n} & = (1 - \gamma_{n})x_{n} + \gamma_{n}\mathcal{T}_{3}x_{n}, \\ y_{n} & = (1 - \beta_{n})z_{n} + \beta_{n}\mathcal{T}_{2}z_{n}, \\ x_{n+1} & = (1 - \alpha_{n})y_{n} + \alpha_{n}\mathcal{T}_{1}y_{n}, \, n\geq 0, \end{split} \end{equation} (1.2)

    where \{\alpha_{n}\}, \{\beta_{n}\} and \{\gamma_{n}\} are appropriate real sequences in [0, 1]. They studied the weak and strong convergence of the iterative scheme (1.2) under proper conditions in a uniformly convex Banach space endowed with a graph.
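The structural difference between (1.1) and (1.2) is that the SP-scheme feeds each intermediate point forward (y_n is built from z_n, and x_{n+1} from y_n). A numerical sketch, using the same illustrative contractions with common fixed point 0 as assumed above (not the paper's experimental mappings):

```python
# Minimal sketch of the SP-iteration (1.2) on the real line.
# T1, T2, T3 are illustrative contractions sharing the fixed point 0.

def sp_iteration(T1, T2, T3, x0, alpha, beta, gamma, steps):
    x = x0
    for n in range(steps):
        z = (1 - gamma) * x + gamma * T3(x)   # z_n
        y = (1 - beta) * z + beta * T2(z)     # y_n built from z_n
        x = (1 - alpha) * y + alpha * T1(y)   # x_{n+1} built from y_n
    return x

T1 = lambda x: x / 2.0
T2 = lambda x: x / 3.0
T3 = lambda x: -x / 2.0

x_sp = sp_iteration(T1, T2, T3, x0=1.0, alpha=0.5, beta=0.5, gamma=0.5, steps=50)
print(abs(x_sp))  # distance to the common fixed point 0
```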

    The main purpose of this paper is to construct an efficient modified three-step iteration algorithm for approximating common fixed points of three G-nonexpansive mappings in a uniformly convex Banach space endowed with a graph. This paper is organized as follows.

    The notations, basic definitions, and some lemmas for proving our main result are given in Section 2. Our main result is presented in Section 3. In this section, we also prove weak and strong convergence results of the proposed method under mild conditions in a uniformly convex Banach space endowed with a graph. In Section 4, we study the convergence behavior of our method, and compare its efficiency with competing methods. In Section 5, we apply our proposed method to solve certain image deblurring and signal recovering problems.

    In this section, we recall a few basic notions concerning the connectivity of graphs. All of these notions can be found, e.g., in [20].

    Let \mathcal{C} be a nonempty subset of a real Banach space \mathcal{X}. We identify the graph G with the pair (V(G), E(G)), where the set V(G) of its vertices coincides with \mathcal{C} and the set of edges E(G) contains all loops, i.e., \{(x, x) : x\in \mathcal{C}\} \subseteq E(G). Also, G is such that no two edges are parallel. A mapping \mathcal{T} : \mathcal{C}\rightarrow \mathcal{C} is said to be a G-contraction if \mathcal{T} preserves edges of G (or \mathcal{T} is edge-preserving), i.e.,

    (x, y)\in E(G) \Rightarrow (\mathcal{T}x, \mathcal{T}y)\in E(G),

    and \mathcal{T} decreases weights of edges of G in the following way: there exists \alpha\in(0, 1) such that

    (x, y)\in E(G) \Rightarrow \|\mathcal{T}x - \mathcal{T}y\| \leq \alpha\|x - y\|.

    A mapping \mathcal{T} : \mathcal{C}\rightarrow \mathcal{C} is said to be G-nonexpansive (see [12], Definition 2.3 (iii)) if \mathcal{T} preserves edges of G, i.e.,

    (x, y)\in E(G) \Rightarrow (\mathcal{T}x, \mathcal{T}y)\in E(G),

    and \mathcal{T} does not increase weights of edges of G in the following way:

    (x, y)\in E(G) \Rightarrow \|\mathcal{T}x - \mathcal{T}y\| \leq \|x - y\|.
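The two defining conditions of a G-nonexpansive mapping can be checked mechanically when the graph is finite. The vertex set, edge set and map below are toy assumptions: vertices are the integers 0..4, edges run from each vertex to every vertex at least as large (so all loops are present), and T shifts each point one step toward its fixed point 0.

```python
# Toy verification of the G-nonexpansive conditions on a small directed graph.
V = [0, 1, 2, 3, 4]
E = {(x, y) for x in V for y in V if x <= y}   # contains all loops (x, x)

def T(x):
    return max(x - 1, 0)   # 1-Lipschitz, order-preserving, fixed point 0

# edge-preserving: (x, y) in E(G) implies (Tx, Ty) in E(G)
edge_preserving = all((T(x), T(y)) in E for (x, y) in E)
# does not increase weights of edges: |Tx - Ty| <= |x - y|
nonexpansive_on_edges = all(abs(T(x) - T(y)) <= abs(x - y) for (x, y) in E)
print(edge_preserving, nonexpansive_on_edges)  # True True
```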

    If x and y are vertices in a graph G, then a path in G from x to y of length N (N\in \mathbb{N}\cup\{0\}) is a sequence \{x_{i}\}_{i = 0}^{N} of N+1 vertices such that x_{0} = x, x_{N} = y and (x_{i}, x_{i+1})\in E(G) for i = 0, 1, \ldots, N-1. A graph G is connected if there is a path between any two vertices. A directed graph G = (V(G), E(G)) is said to be transitive if, for any x, y, z\in V(G) such that (x, y) and (y, z) are in E(G), we have (x, z)\in E(G). We denote by G^{-1} the converse of a graph G, that is, the graph obtained from G by reversing the direction of its edges, so that

    E(G^{-1}) = \{(x, y)\in \mathcal{X}\times \mathcal{X} : (y, x)\in E(G)\}.

    Let x_{0}\in V(G) and let A be a subset of V(G). We say that A is dominated by x_{0} if (x_{0}, x)\in E(G) for all x\in A, and that A dominates x_{0} if (x, x_{0})\in E(G) for each x\in A.

    In this paper, we use \rightarrow and \rightharpoonup to denote strong convergence and weak convergence, respectively.

    A mapping \mathcal{T} : \mathcal{C}\rightarrow \mathcal{C} is said to be G-demiclosed at 0 if, for any sequence \{x_{n}\} in \mathcal{C} such that (x_{n}, x_{n+1})\in E(G), x_{n} \rightharpoonup x and \mathcal{T}x_{n} \rightarrow 0 imply \mathcal{T}x = 0.

    Let \mathcal{X} be a Banach space with \dim \mathcal{X}\geq 2. The modulus of convexity of \mathcal{X} is the function \delta_{\mathcal{X}} : (0, 2]\rightarrow [0, 1] defined by

    \delta_{\mathcal{X}}(\epsilon) = \inf\Big\{1-\Big\|\frac{x+y}{2}\Big\| : \|x\| = \|y\| = 1, \, \epsilon = \|x - y\|\Big\}.

    A Banach space \mathcal{X} is uniformly convex if and only if \delta_{\mathcal{X}}(\epsilon) > 0 for all \epsilon\in(0, 2].
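For a Hilbert space the modulus of convexity is known in closed form, \delta(\epsilon) = 1 - \sqrt{1 - (\epsilon/2)^{2}} > 0 for \epsilon\in(0, 2], consistent with uniform convexity. The sketch below (an illustration in \mathbb{R}^{2} with the Euclidean norm, not part of the paper) evaluates 1 - \|(x+y)/2\| for unit vectors at the angle forced by \|x - y\| = \epsilon and compares it with the closed form.

```python
import math

def hilbert_modulus(eps):
    # For unit vectors separated by angle theta, ||x - y|| = 2 sin(theta/2),
    # so eps determines theta, and every admissible pair gives the same value.
    theta = 2.0 * math.asin(eps / 2.0)
    x = (1.0, 0.0)
    y = (math.cos(theta), math.sin(theta))
    midpoint_norm = math.hypot((x[0] + y[0]) / 2.0, (x[1] + y[1]) / 2.0)
    return 1.0 - midpoint_norm

eps = 1.0
closed_form = 1.0 - math.sqrt(1.0 - (eps / 2.0) ** 2)
print(abs(hilbert_modulus(eps) - closed_form))  # ~0
```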

    Recall that a Banach space \mathcal{X} is said to satisfy Opial's condition [21] if, whenever x_{n} \rightharpoonup x and x\neq y, we have

    \limsup\limits_{n\rightarrow \infty}\|x_{n} - x\| < \limsup\limits_{n\rightarrow \infty}\|x_{n} - y\|.

    Let \mathcal{C} be a nonempty closed convex subset of a real uniformly convex Banach space \mathcal{X} . Recall that the mappings \mathcal{T}_{i}(i = 1, 2, 3) on \mathcal{C} are said to satisfy condition (C) [19] if there exists a nondecreasing function f : [0, \infty)\rightarrow [0, \infty) with f(0) = 0 and f(r) > 0 for all r > 0 such that for all x\in \mathcal{C} ,

    max\{\|x - \mathcal{T}_{1}x\|, \|x - \mathcal{T}_{2}x\|, \|x - \mathcal{T}_{3}x\|\} \geq f(d(x, \mathcal{F})),

    where \mathcal{F} = \mathcal{F} (\mathcal{T}_{1}) \cap \mathcal{F} (\mathcal{T}_{2}) \cap \mathcal{F} (\mathcal{T}_{3}), \mathcal{F} (\mathcal{T}_{i}) (i = 1, 2, 3) are the sets of fixed points of \mathcal{T}_i and d(x, \mathcal{F}) = \inf\{\|x - q\| : q\in \mathcal{F}\}.

    Let \mathcal{C} be a subset of a metric space (\mathcal{X}, d) . A mapping \mathcal{T} : \mathcal{C}\rightarrow \mathcal{C} is semi-compact [22] if for a sequence \{x_{n}\} in \mathcal{C} with \lim_{n \to \infty}d(x_{n}, \mathcal{T}x_{n}) = 0 , there exists a subsequence \{x_{n_{j}}\} of \{x_{n}\} such that x_{n_{j}}\rightarrow p\in \mathcal{C} .

    Let \mathcal{C} be a nonempty subset of a normed space \mathcal{X} and let G = (V (G), E(G)) be a directed graph such that V (G) = \mathcal{C} . Then, \mathcal{C} is said to have Property G (see [19]) if, for each sequence \{x_{n}\} in \mathcal{C} converging weakly to x\in \mathcal{C} with (x_{n}, x_{n+1})\in E(G) , there is a subsequence \{x_{n_{j}}\} of \{x_{n}\} such that (x_{n_{j}}, x)\in E(G) for all j\in \mathbb{N} .

    Remark 1. If G is transitive, then Property G is equivalent to the following property: if \{x_n\} is a sequence in \mathcal{C} with (x_{n}, x_{n+1})\in E(G) that converges weakly to x in \mathcal{X} , then (x_{n}, x)\in E(G) for all n\in \mathbb{N} .

    Definition 1. ([23], Definition 3.1) Let \mathcal{X} be a vector space and \mathcal{D} be a nonempty subset of \mathcal{X} \times \mathcal{X}. Then \mathcal{D} is said to be coordinate-convex if for all (p, u), (p, v), (u, p), (v, p) \in \mathcal{D} and for all t \in [0, 1], we have

    t(p, u) + (1 - t)(p, v) \in \mathcal{D} \, \, \text{and}\, \, t(u, p) + (1 - t)(v, p) \in \mathcal{D}.

    Remark 2. If \mathcal{D} is convex in \mathcal{X} \times \mathcal{X}, then \mathcal{D} is coordinate-convex in \mathcal{X} \times \mathcal{X}.
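Definition 1 and Remark 2 can be illustrated on a concrete set. The set D = \{(x, y)\in \mathbb{R}\times \mathbb{R} : |x - y| \leq 1\} below is convex, hence coordinate-convex by Remark 2; the set and the sampled points are illustrative assumptions for this check only.

```python
# Toy check of coordinate-convexity (Definition 1) on a convex strip in R x R.
in_D = lambda pair: abs(pair[0] - pair[1]) <= 1.0

def combo(a, b, t):
    # componentwise convex combination t*a + (1 - t)*b of two pairs
    return (t * a[0] + (1 - t) * b[0], t * a[1] + (1 - t) * b[1])

p, u, v = 0.0, 0.5, -0.5
# the hypothesis of Definition 1: all four pairs lie in D
assert all(in_D(pair) for pair in [(p, u), (p, v), (u, p), (v, p)])
# the conclusion: both families of combinations stay in D
ok = all(in_D(combo((p, u), (p, v), t)) and in_D(combo((u, p), (v, p), t))
         for t in (0.0, 0.25, 0.5, 0.75, 1.0))
print(ok)  # True
```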

    In the sequel, the following lemmas are needed to prove our main results.

    Lemma 1. ([24]) Let \mathcal{X} be a uniformly convex Banach space, and \{\alpha_{n}\} a sequence in [\delta, 1 - \delta] for some \delta\in(0, 1) . Suppose that sequences \{x_{n}\} and \{y_{n}\} in \mathcal{X} are such that \limsup_{n \to \infty}||x_n||\leq c, \limsup_{n \to \infty}||y_n||\leq c and \limsup_{n \to \infty}||\alpha_{n}x_n +(1-\alpha_{n})y_n|| = c for some c\geq 0. Then \lim_{n \to \infty} ||x_n - y_n|| = 0.

    Lemma 2. ([25]) Let \mathcal{X} be a Banach space that satisfies Opial's condition and let \{x_{n}\} be a sequence in \mathcal{X} . Let u, v \in \mathcal{X} be such that \lim_{n \to \infty} ||x_n -u|| and \lim_{n \to \infty} ||x_n -v|| exist. If \{x_{n_{j}}\} and \{x_{n_{k}}\} are subsequences of \{x_{n}\} that converge weakly to u and v , respectively, then u = v .

    The following lemmas give some useful consequences of Property G of \mathcal{C} in a uniformly convex Banach space.

    Lemma 3. ([19]) Suppose that \mathcal{X} is a Banach space satisfying Opial's condition, \mathcal{C} has Property G and let \mathcal{T} : \mathcal{C}\rightarrow \mathcal{C} be a G -nonexpansive mapping. Then, I - \mathcal{T} is G -demiclosed at 0, i.e., if x_{n} \rightharpoonup x and x_{n} - \mathcal{T}x_{n} \rightarrow 0, then x\in \mathcal{F}(\mathcal{T}), where \mathcal{F}(\mathcal{T}) is the set of fixed points of \mathcal{T}.

    Lemma 4. ([26]) Let \mathcal{C} be a nonempty closed convex subset of a uniformly convex Banach space \mathcal{X} and suppose that \mathcal{C} has Property G . Let \mathcal{T} be a G -nonexpansive mapping on \mathcal{C}. Then I-\mathcal{T} is G -demiclosed at 0.

    The following useful result is due to [27].

    Lemma 5. ([27]) Let \{x_{n}\} be a bounded sequence in a reflexive Banach space \mathcal{X}. If for any weakly convergent subsequence \{x_{n_{j}}\} of \{x_{n}\} , both \{x_{n_{j}}\} and \{x_{n_{j}+1}\} converge weakly to the same point in \mathcal{X}, then the sequence \{x_{n}\} is weakly convergent.

    Throughout the section, we let \mathcal{C} be a nonempty closed convex subset of a Banach space \mathcal{X} endowed with a directed graph G such that V(G) = \mathcal{C} and E(G) is coordinate-convex. We also suppose that the graph G is transitive. The mappings \mathcal{T}_i (i = 1, 2, 3) are G -nonexpansive from \mathcal{C} to \mathcal{C} with \mathcal{F} = \mathcal{F}(\mathcal{T}_{1})\cap \mathcal{F}(\mathcal{T}_{2})\cap \mathcal{F}(\mathcal{T}_{3}) nonempty. For an arbitrary x_{0}\in \mathcal{C}, define the sequence \{x_n\} by

    \begin{equation} \begin{split} z_{n} & = (1 - \gamma_{n})x_{n} + \gamma_{n}\mathcal{T}_{3}x_{n}, \\ y_{n} & = (1 - \beta_{n})\mathcal{T}_{3}x_{n} + \beta_{n}\mathcal{T}_{2}z_{n}, \\ x_{n+1} & = (1 - \alpha_{n})\mathcal{T}_{1}y_{n} + \alpha_{n}\mathcal{T}_{2}z_{n}, \, n\geq 0, \end{split} \end{equation} (3.1)

    where \{\alpha_{n}\} , \{\beta_{n}\} and \{\gamma_{n}\} are appropriate real sequences in [0, 1].
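The proposed scheme (3.1) can be sketched numerically in the same way as (1.1) and (1.2). The mappings below are illustrative nonexpansive contractions on the real line sharing the fixed point 0, chosen as assumptions for the demonstration; the printed residual \|x_{n} - \mathcal{T}_{1}x_{n}\| shrinking alongside \|x_{n}\| is the behavior guaranteed by Lemma 6 (ii) below.

```python
# Minimal sketch of the proposed three-step scheme (3.1) on the real line.
# T1, T2, T3 are illustrative contractions sharing the fixed point 0.

def proposed_iteration(T1, T2, T3, x0, alpha, beta, gamma, steps):
    x = x0
    for n in range(steps):
        z = (1 - gamma) * x + gamma * T3(x)        # z_n
        y = (1 - beta) * T3(x) + beta * T2(z)      # y_n built from T3(x_n) and z_n
        x = (1 - alpha) * T1(y) + alpha * T2(z)    # x_{n+1} built from T1(y_n) and T2(z_n)
    return x

T1 = lambda x: x / 2.0
T2 = lambda x: x / 3.0
T3 = lambda x: x / 4.0

x50 = proposed_iteration(T1, T2, T3, x0=1.0, alpha=0.5, beta=0.5, gamma=0.5, steps=50)
print(abs(x50), abs(x50 - T1(x50)))  # iterate and residual both near 0
```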

    Proposition 1. Let q_{0} \in \mathcal{F} be such that (x_{0}, q_{0}), (q_{0}, x_{0}) are in E(G) . Then (x_{n}, q_{0}), (z_{n}, q_{0}), (y_{n}, q_{0}), (q_{0}, x_{n}), (q_{0}, z_{n}), (q_{0}, y_{n}), (x_{n}, y_{n}), (x_{n}, z_{n}) and (x_{n}, x_{n+1}) are in E(G) .

    Proof. We proceed by induction. Since \mathcal{T}_{3} is edge-preserving and (x_{0}, q_{0})\in E(G) , we have (\mathcal{T}_{3}x_{0}, q_{0})\in E(G) . Note that

    (z_{0} , q_{0}) = ((1 - \gamma_{0})x_{0} + \gamma_{0}\mathcal{T}_{3}x_{0}, q_{0}) = (1 - \gamma_{0})(x_{0}, q_{0}) + \gamma_{0}(\mathcal{T}_{3}x_{0}, q_{0}).

    Using (x_{0}, q_{0}), (\mathcal{T}_{3}x_{0}, q_{0}) \in E(G) and the coordinate-convexity of E(G), we have (z_{0}, q_{0}) \in E(G). Again, by edge-preserving of \mathcal{T}_{2} and (z_{0}, q_{0})\in E(G) , we have (\mathcal{T}_{2}z_{0}, q_{0})\in E(G). We also have

    (y_{0} , q_{0}) = ((1 - \beta_{0})\mathcal{T}_{3}x_{0} + \beta_{0}\mathcal{T}_{2}z_{0}, q_{0}) = (1 - \beta_{0})(\mathcal{T}_{3}x_{0}, q_{0}) + \beta_{0}(\mathcal{T}_{2}z_{0}, q_{0}).

    Since (\mathcal{T}_{3}x_{0}, q_{0}), (\mathcal{T}_{2}z_{0}, q_{0}) \in E(G) by the coordinate-convexity of E(G), we have (y_{0}, q_{0})\in E(G). Then, since \mathcal{T}_{1} is edge-preserving, we get (\mathcal{T}_{1}y_{0}, q_{0})\in E(G) . Note that

    (x_{1}, q_{0}) = ((1 - \alpha_{0})\mathcal{T}_{1}y_{0} + \alpha_{0}\mathcal{T}_{2}z_{0}, q_{0}) = (1 - \alpha_{0})(\mathcal{T}_{1}y_{0}, q_{0}) + \alpha_{0}(\mathcal{T}_{2}z_{0}, q_{0}).

    By the coordinate-convexity of E(G) and (\mathcal{T}_{2}z_{0}, q_{0}), (\mathcal{T}_{1}y_{0}, q_{0}) \in E(G) , we get (x_{1}, q_{0})\in E(G) . Thus, by edge-preserving of \mathcal{T}_{3} , (\mathcal{T}_{3}x_{1}, q_{0})\in E(G) . Again, by the coordinate-convexity of E(G) and (\mathcal{T}_{3}x_{1}, q_{0}), (x_{1}, q_{0}) \in E(G) , we have

    (z_{1}, q_{0}) = ((1 - \gamma_{1})x_{1} + \gamma_{1}\mathcal{T}_{3}x_{1}, q_{0}) = (1 - \gamma_{1})(x_{1}, q_{0}) + \gamma_{1}(\mathcal{T}_{3}x_{1}, q_{0}) \in E(G).

    Then, since \mathcal{T}_{2} is edge-preserving, (\mathcal{T}_{2}z_{1}, q_{0})\in E(G) . By the coordinate-convexity of E(G) and (\mathcal{T}_{2}z_{1}, q_{0}), (\mathcal{T}_{3}x_{1}, q_{0}) \in E(G) , we get

    (y_{1}, q_{0}) = ((1 - \beta_{1})\mathcal{T}_{3}x_{1} + \beta_{1}\mathcal{T}_{2}z_{1}, q_{0}) = (1 - \beta_{1})(\mathcal{T}_{3}x_{1}, q_{0}) + \beta_{1}(\mathcal{T}_{2}z_{1}, q_{0}) \in E(G),

    and hence, (\mathcal{T}_{1}y_{1}, q_{0})\in E(G) .

    Next, we assume that (x_{k}, q_{0}) \in E(G). Since \mathcal{T}_{3} is edge-preserving, we get (\mathcal{T}_{3}x_{k}, q_{0})\in E(G) . Thus

    (z_{k}, q_{0}) = ((1 - \gamma_{k})x_{k} + \gamma_{k}\mathcal{T}_{3}x_{k}, q_{0}) = (1 - \gamma_{k})(x_{k}, q_{0}) + \gamma_{k}(\mathcal{T}_{3}x_{k}, q_{0}) \in E(G),

    since E(G) is coordinate-convex. Hence, by edge-preserving of \mathcal{T}_{2} and (z_{k}, q_{0})\in E(G) , we have (\mathcal{T}_{2}z_{k}, q_{0})\in E(G). By the coordinate-convexity of E(G) and (\mathcal{T}_{3}x_{k}, q_{0}), (\mathcal{T}_{2}z_{k}, q_{0})\in E(G), we get

    (y_{k}, q_{0}) = ((1 - \beta_{k})\mathcal{T}_{3}x_{k} + \beta_{k}\mathcal{T}_{2}z_{k}, q_{0}) = (1 - \beta_{k})(\mathcal{T}_{3}x_{k}, q_{0}) + \beta_{k}(\mathcal{T}_{2}z_{k}, q_{0}) \in E(G).

    Also, since \mathcal{T}_{1} is edge-preserving and (y_{k}, q_{0})\in E(G) , we get (\mathcal{T}_{1}y_{k}, q_{0})\in E(G) . By the coordinate-convexity of E(G) and (\mathcal{T}_{2}z_{k}, q_{0}), (\mathcal{T}_{1}y_{k}, q_{0})\in E(G), we obtain

    (x_{k+1}, q_{0}) = ((1 - \alpha_{k})\mathcal{T}_{1}y_{k} + \alpha_{k}\mathcal{T}_{2}z_{k}, q_{0}) = (1 - \alpha_{k})(\mathcal{T}_{1}y_{k}, q_{0}) + \alpha_{k}(\mathcal{T}_{2}z_{k}, q_{0})\in E(G).

    Hence, by edge-preserving of \mathcal{T}_{3} , we get (\mathcal{T}_{3}x_{k+1}, q_{0})\in E(G) , and so (z_{k+1}, q_{0}) = ((1 - \gamma_{k+1})x_{k+1} + \gamma_{k+1}\mathcal{T}_{3}x_{k+1}, q_{0}) = (1 - \gamma_{k+1})(x_{k+1}, q_{0}) + \gamma_{k+1}(\mathcal{T}_{3}x_{k+1}, q_{0}) \in E(G), since E(G) is coordinate-convex. Again, by edge-preserving of \mathcal{T}_{2} , we obtain (\mathcal{T}_{2}z_{k+1}, q_{0})\in E(G). By the coordinate-convexity of E(G) and (\mathcal{T}_{3}x_{k+1}, q_{0}), (\mathcal{T}_{2}z_{k+1}, q_{0}) \in E(G) , we obtain (y_{k+1}, q_{0}) = ((1 - \beta_{k+1})\mathcal{T}_{3}x_{k+1} + \beta_{k+1}\mathcal{T}_{2}z_{k+1}, q_{0}) = (1 - \beta_{k+1})(\mathcal{T}_{3}x_{k+1}, q_{0}) + \beta_{k+1}(\mathcal{T}_{2}z_{k+1}, q_{0}) \in E(G). Therefore, (x_{n}, q_{0}), (z_{n}, q_{0}), (y_{n}, q_{0})\in E(G) for all n\geq 1 .

    Since \mathcal{T}_{3} is edge-preserving and (q_{0}, x_{0})\in E(G) , we have (q_{0}, \mathcal{T}_{3}x_{0})\in E(G) , and so (q_{0}, z_{0}) = (q_{0}, (1 - \gamma_{0})x_{0} + \gamma_{0}\mathcal{T}_{3}x_{0}) = (1 - \gamma_{0})(q_{0}, x_{0}) + \gamma_{0}(q_{0}, \mathcal{T}_{3}x_{0}) \in E(G). Again, since \mathcal{T}_{2} is edge-preserving and (q_{0}, z_{0})\in E(G) , we have (q_{0}, \mathcal{T}_{2}z_{0}) \in E(G). By the coordinate-convexity of E(G) and (q_{0}, \mathcal{T}_{3}x_{0}), (q_{0}, \mathcal{T}_{2}z_{0}) \in E(G) , so (q_{0}, y_{0}) = (q_{0}, (1 - \beta_{0})\mathcal{T}_{3}x_{0} + \beta_{0}\mathcal{T}_{2}z_{0}) = (1 - \beta_{0})(q_{0}, \mathcal{T}_{3}x_{0}) + \beta_{0}(q_{0}, \mathcal{T}_{2}z_{0}) \in E(G).

    Using a similar argument, we can show that (q_{0}, x_{n}), (q_{0}, z_{n}), (q_{0}, y_{n})\in E(G) under the assumption that (q_{0}, x_{0}), (q_{0}, z_{0}), (q_{0}, y_{0})\in E(G) . By transitivity of G, we get (x_{n}, y_{n}), (x_{n}, z_{n}), (x_{n}, x_{n+1})\in E(G) . This completes the proof.

    Lemma 6. Let \mathcal{X} be a uniformly convex Banach space. Suppose that \{\alpha_n\}, \{\beta_n\} and \{\gamma_n\} are real sequences in [\delta, 1 - \delta] for some \delta \in (0, 1) and (x_{0}, q_{0}), (q_{0}, x_{0})\in E(G) for arbitrary x_{0} \in \mathcal{C} and q_{0} \in \mathcal{F}. Then

    (i) \lim_{n\rightarrow \infty}\|x_{n}-q_{0}\| exists;

    (ii) \lim_{n\rightarrow \infty}\|x_{n}-\mathcal{T}_{1}x_{n}\| = \lim_{n\rightarrow \infty}\|x_{n}-\mathcal{T}_{2}x_{n}\| = \lim_{n\rightarrow \infty}\|x_{n}-\mathcal{T}_{3}x_{n}\| = 0.

    Proof. (i) Let q_{0} \in \mathcal{F} . By Proposition 1, we have (x_{n}, q_{0}), (y_{n}, q_{0}) and (z_{n}, q_{0})\in E(G) . Then, by G -nonexpansiveness of \mathcal{T}_{i}(i = 1, 2, 3) and using (3.1) , we have

    \begin{align} \|z_{n}-q_{0}\| & = \|(1-\gamma_{n})x_{n} + \gamma_{n}\mathcal{T}_{3}x_{n} - q_{0}\|\\ & = \|(1-\gamma_{n})(x_{n}-q_{0}) + \gamma_{n}(\mathcal{T}_{3}x_{n}-q_{0})\|\\ &\leq (1-\gamma_{n})\|x_{n}-q_{0}\| + \gamma_{n}\|\mathcal{T}_{3}x_{n}-q_{0}\|\\ &\leq (1-\gamma_{n})\|x_{n}-q_{0}\| + \gamma_{n}\|x_{n}-q_{0}\|\\ & = \|x_{n}-q_{0}\|, \end{align} (3.2)
    \begin{align} \|y_{n}-q_{0}\| & = \|(1-\beta_{n})\mathcal{T}_{3}x_{n} + \beta_{n}\mathcal{T}_{2}z_{n} - q_{0}\|\\ & = \|(1-\beta_{n})(\mathcal{T}_{3}x_{n}-q_{0}) + \beta_{n}(\mathcal{T}_{2}z_{n}-q_{0})\|\\ &\leq (1-\beta_{n})\|\mathcal{T}_{3}x_{n}-q_{0}\| + \beta_{n}\|\mathcal{T}_{2}z_{n}-q_{0}\|\\ &\leq (1-\beta_{n})\|x_{n}-q_{0}\| + \beta_{n}\|z_{n}-q_{0}\|\\ &\leq (1-\beta_{n})\|x_{n}-q_{0}\| + \beta_{n}\|x_{n}-q_{0}\|\\ & = \|x_{n}-q_{0}\|, \end{align} (3.3)

    and so

    \begin{align} \|x_{n+1}-q_{0}\| & = \|(1-\alpha_{n})\mathcal{T}_{1}y_{n}+\alpha_{n}\mathcal{T}_{2}z_{n}-q_{0}\|\\ & = \|(1-\alpha_{n})(\mathcal{T}_{1}y_{n} - q_{0}) + \alpha_{n}(\mathcal{T}_{2}z_{n} - q_{0})\|\\ &\leq (1-\alpha_{n})\|\mathcal{T}_{1}y_{n}-q_{0}\| + \alpha_{n}\|\mathcal{T}_{2}z_{n}-q_{0}\|\\ &\leq (1-\alpha_{n})\|y_{n}-q_{0}\| + \alpha_{n}\|z_{n}-q_{0}\|\\ &\leq (1-\alpha_{n})\|x_{n}-q_{0}\| + \alpha_{n}\|x_{n}-q_{0}\|\\ & = \|x_{n}-q_{0}\|. \end{align} (3.4)

    Thus \{\|x_{n}-q_{0}\| : n \in \mathbb{N}\} is a decreasing sequence, which is bounded below by 0, so \lim_{n\rightarrow \infty}\|x_{n} - q_{0}\| exists. In particular, the sequence \{x_{n}\} is bounded.

    (ii) Assume that \lim_{n\rightarrow \infty} \|x_{n} - q_{0} \| = c. If c = 0 , then by G-nonexpansiveness of \mathcal{T}_{i} (i = 1, 2, 3) , we get

    \begin{align*} \|x_{n}-\mathcal{T}_{i}x_{n}\| &\leq \|x_{n}-q_{0}\|+\|q_{0}-\mathcal{T}_{i}x_{n}\|\\ &\leq \|x_{n}-q_{0}\|+\|q_{0}-x_{n}\|. \end{align*}

    Therefore, the result follows. Suppose that c > 0. Taking the lim sup on both sides in the inequality (3.3), we obtain

    \begin{align} \limsup\limits_{n\rightarrow \infty} \|y_{n} - q_{0} \|\leq \limsup\limits_{n\rightarrow \infty} \|x_{n} - q_{0} \| = c. \end{align} (3.5)

    In addition, by G-nonexpansiveness of \mathcal{T}_{1}, we have \|\mathcal{T}_{1}y_{n} - q_{0}\| \leq \|y_{n} - q_{0}\|, taking the lim sup on both sides in this inequality and using (3.5), we obtain

    \begin{align} \limsup\limits_{n\rightarrow \infty} \|\mathcal{T}_{1}y_{n} - q_{0} \| &\leq c. \end{align} (3.6)

    Taking the lim sup on both sides in the inequality (3.2), we obtain

    \begin{align} \limsup\limits_{n\rightarrow \infty} \|z_{n} - q_{0} \|\leq \limsup\limits_{n\rightarrow \infty} \|x_{n} - q_{0} \| = c. \end{align} (3.7)

    In addition, by G -nonexpansiveness of \mathcal{T}_{2}, we have \|\mathcal{T}_{2}z_{n} - q_{0}\| \leq \|z_{n} - q_{0}\|, taking the lim sup on both sides in this inequality and using (3.7), we obtain

    \begin{align} \limsup\limits_{n\rightarrow \infty} \|\mathcal{T}_{2}z_{n} - q_{0} \| &\leq c. \end{align} (3.8)

    Since \lim_{n\rightarrow \infty} \|x_{n+1} - q_{0} \| = c, letting n \rightarrow \infty in the inequality (3.4), we have

    \begin{align} \lim\limits_{n\rightarrow \infty} \|(1-\alpha_{n})(\mathcal{T}_{1}y_{n} - q_{0}) + \alpha_{n}(\mathcal{T}_{2}z_{n} - q_{0})\| = c. \end{align} (3.9)

    By using (3.6), (3.8) and (3.9) and Lemma 1, we have

    \begin{eqnarray} \lim\limits_{n\rightarrow \infty}\|\mathcal{T}_{2}z_{n} - \mathcal{T}_{1}y_{n}\| = 0. \end{eqnarray} (3.10)

    Moreover, we see that

    \begin{align} \|x_{n+1}-q_{0}\| & = \|(1-\alpha_{n})\mathcal{T}_{1}y_{n}+\alpha_{n}\mathcal{T}_{2}z_{n}-q_{0}\|\\ &\leq (1-\alpha_{n})\|\mathcal{T}_{1}y_{n} - q_{0}\| + \alpha_{n}\|\mathcal{T}_{2}z_{n} - q_{0}\|\\ &\leq (1-\alpha_{n})(\|\mathcal{T}_{1}y_{n}-\mathcal{T}_{2}z_{n}\| + \|\mathcal{T}_{2}z_{n} - q_{0}\|) + \alpha_{n}\|\mathcal{T}_{2}z_{n}-q_{0}\|\\ & = (1-\alpha_{n})\|\mathcal{T}_{1}y_{n}-\mathcal{T}_{2}z_{n}\| + (1-\alpha_{n})\|\mathcal{T}_{2}z_{n} - q_{0}\| + \alpha_{n}\|\mathcal{T}_{2}z_{n}-q_{0}\|\\ & = (1-\alpha_{n})\|\mathcal{T}_{1}y_{n}-\mathcal{T}_{2}z_{n}\| + \|\mathcal{T}_{2}z_{n}- q_{0}\|. \end{align} (3.11)

    Using (3.10) and (3.11), we have

    \begin{eqnarray} \liminf\limits_{n\rightarrow \infty}\|\mathcal{T}_{2}z_{n} - q_{0}\|\geq c, \end{eqnarray} (3.12)

    and so by (3.8) and (3.12), we have

    \begin{eqnarray} \lim\limits_{n\rightarrow \infty}\|\mathcal{T}_{2}z_{n} - q_{0}\| = c. \end{eqnarray} (3.13)

    On the other hand,

    \begin{align*} \|\mathcal{T}_{2}z_{n}-q_{0}\| &\leq \|\mathcal{T}_{2}z_{n}-\mathcal{T}_{1}y_{n}\|+\|\mathcal{T}_{1}y_{n}-q_{0}\|\\ &\leq \|\mathcal{T}_{2}z_{n}-\mathcal{T}_{1}y_{n}\|+\|y_{n}-q_{0}\|, \end{align*}

    which yields

    \begin{eqnarray} \liminf\limits_{n\rightarrow \infty}\|y_{n} - q_{0}\| \geq c. \end{eqnarray} (3.14)

    From (3.5) and (3.14),

    \begin{eqnarray} \lim\limits_{n\rightarrow \infty}\|y_{n} - q_{0}\| = c. \end{eqnarray} (3.15)

    In addition, by G-nonexpansiveness of \mathcal{T}_{3}, we have \|\mathcal{T}_{3}x_{n} - q_{0}\| \leq \|x_{n} - q_{0}\| ; taking the lim sup on both sides in this inequality, we obtain

    \begin{eqnarray} \limsup\limits_{n\rightarrow \infty}\|\mathcal{T}_{3}x_{n} - q_{0}\| \leq c. \end{eqnarray} (3.16)

    By (3.3) and (3.15), we have

    \begin{eqnarray} \lim\limits_{n\rightarrow \infty}\|(1-\beta_{n})(\mathcal{T}_{3}x_{n}-q_{0}) + \beta_{n}(\mathcal{T}_{2}z_{n}-q_{0})\| = c. \end{eqnarray} (3.17)

    Using (3.8), (3.16) and (3.17) and Lemma 1,

    \begin{eqnarray} \lim\limits_{n\rightarrow \infty}\|\mathcal{T}_{2}z_{n} - \mathcal{T}_{3}x_{n}\| = 0. \end{eqnarray} (3.18)

    Moreover,

    \begin{align} \|x_{n+1}-q_{0}\| & = \|(1-\alpha_{n})\mathcal{T}_{1}y_{n}+\alpha_{n}\mathcal{T}_{2}z_{n}-q_{0}\|\\ &\leq (1-\alpha_{n})\|\mathcal{T}_{1}y_{n} - q_{0}\| + \alpha_{n}\|\mathcal{T}_{2}z_{n} - q_{0}\|\\ &\leq (1-\alpha_{n})\|\mathcal{T}_{1}y_{n} - q_{0}\| + \alpha_{n}(\|\mathcal{T}_{2}z_{n} - \mathcal{T}_{1}y_{n}\| + \|\mathcal{T}_{1}y_{n} - q_{0}\|)\\ & = (1-\alpha_{n})\|\mathcal{T}_{1}y_{n} - q_{0}\| + \alpha_{n}\|\mathcal{T}_{2}z_{n} - \mathcal{T}_{1}y_{n}\| + \alpha_{n} \|\mathcal{T}_{1}y_{n} - q_{0}\|\\ & = \|\mathcal{T}_{1}y_{n} - q_{0}\| + \alpha_{n}\|\mathcal{T}_{2}z_{n} - \mathcal{T}_{1}y_{n}\| . \end{align} (3.19)

    Using (3.10) and (3.19), we have

    \begin{eqnarray} \liminf\limits_{n\rightarrow \infty}\|\mathcal{T}_{1}y_{n} - q_{0}\|\geq c, \end{eqnarray} (3.20)

    and so by (3.6) and (3.20), we get

    \begin{eqnarray} \lim\limits_{n\rightarrow \infty}\|\mathcal{T}_{1}y_{n} - q_{0}\| = c. \end{eqnarray} (3.21)

    On the other hand,

    \begin{align*} \|\mathcal{T}_{1}y_{n}-q_{0}\| &\leq \|\mathcal{T}_{1}y_{n}-\mathcal{T}_{2}z_{n}\|+\|\mathcal{T}_{2}z_{n}-q_{0}\|\\ &\leq \|\mathcal{T}_{1}y_{n}-\mathcal{T}_{2}z_{n}\|+\|z_{n}-q_{0}\|, \end{align*}

    which yields

    \begin{eqnarray} \liminf\limits_{n\rightarrow \infty}\|z_{n} - q_{0}\| \geq c. \end{eqnarray} (3.22)

    From (3.7) and (3.22), we get

    \begin{eqnarray} \lim\limits_{n\rightarrow \infty}\|z_{n} - q_{0}\| = c. \end{eqnarray} (3.23)

    From (3.2) and (3.23), we have

    \begin{eqnarray} \lim\limits_{n\rightarrow \infty}\|(1-\gamma_{n})(x_{n}-q_{0}) + \gamma_{n}(\mathcal{T}_{3}x_{n}-q_{0})\| = c. \end{eqnarray} (3.24)

    By using (3.16), (3.24) and Lemma 1,

    \begin{eqnarray} \lim\limits_{n\rightarrow \infty}\|\mathcal{T}_{3}x_{n} - x_{n}\| = 0. \end{eqnarray} (3.25)

    Thus, it follows from (3.25) that

    \begin{align} \|z_{n}-x_{n}\| & = \|(1-\gamma_{n})x_{n}+\gamma_{n}\mathcal{T}_{3}x_{n}-x_{n}\|\\ &\leq \gamma_{n}\|\mathcal{T}_{3}x_{n}-x_{n}\|\\ &\rightarrow 0 \ ( as \ n \rightarrow \infty). \end{align} (3.26)

    Using (3.18), (3.25) and (3.26) together with G -nonexpansiveness of \mathcal{T}_{2} ,

    \begin{align} \|\mathcal{T}_{2}x_{n}-x_{n}\| & = \|\mathcal{T}_{2}x_{n}-\mathcal{T}_{2}z_{n}+\mathcal{T}_{2}z_{n}-x_{n}\|\\ &\leq \|x_{n}-z_{n}\|+\|\mathcal{T}_{2}z_{n}-\mathcal{T}_{3}x_{n}\|+\|\mathcal{T}_{3}x_{n}-x_{n}\|\\ &\rightarrow 0 \ ( as \ n \rightarrow \infty). \end{align} (3.27)

    From (3.18) and (3.25), we have

    \begin{align} \|y_{n}-x_{n}\| & = \|(1-\beta_{n})(\mathcal{T}_{3}x_{n} - x_{n}) + \beta_{n}(\mathcal{T}_{2}z_{n}-x_{n})\|\\ &\leq (1-\beta_{n})\|\mathcal{T}_{3}x_{n} - x_{n}\| + \beta_{n}\|\mathcal{T}_{2}z_{n}-x_{n}\|\\ &\leq (1-\beta_{n})\|\mathcal{T}_{3}x_{n} - x_{n}\| + \beta_{n}(\|\mathcal{T}_{2}z_{n}-\mathcal{T}_{3}x_{n}\| + \|\mathcal{T}_{3}x_{n}-x_{n}\|)\\ & = (1-\beta_{n})\|\mathcal{T}_{3}x_{n} - x_{n}\| + \beta_{n}\|\mathcal{T}_{2}z_{n}-\mathcal{T}_{3}x_{n}\| + \beta_{n}\|\mathcal{T}_{3}x_{n}-x_{n}\|\\ & = \|\mathcal{T}_{3}x_{n} - x_{n}\| + \beta_{n}\|\mathcal{T}_{2}z_{n}-\mathcal{T}_{3}x_{n}\|\\ &\rightarrow 0 \ ( as \ n \rightarrow \infty). \end{align} (3.28)

    Using (3.10), (3.18), (3.25) and (3.28) together with G -nonexpansiveness of \mathcal{T}_{1} , we have

    \begin{align*} \|\mathcal{T}_{1}x_{n}-x_{n}\| &\leq \|\mathcal{T}_{1}x_n-\mathcal{T}_{1}y_{n}\|+\|\mathcal{T}_{1}y_{n}-x_{n}\|\nonumber\\ &\leq \|x_n-y_{n}\|+\|\mathcal{T}_{1}y_{n}-\mathcal{T}_{2}z_{n}\|+\|\mathcal{T}_{2}z_{n}-\mathcal{T}_{3}x_{n}\|+\|\mathcal{T}_{3}x_{n}-x_{n}\|\nonumber\\ &\rightarrow 0 \ ( as \ n \rightarrow \infty). \end{align*}

    Therefore, we conclude that \lim_{n\rightarrow \infty}\|x_{n}-\mathcal{T}_{1}x_{n}\| = \lim_{n\rightarrow \infty}\|x_{n}-\mathcal{T}_{2}x_{n}\| = \lim_{n\rightarrow \infty}\|x_{n}-\mathcal{T}_{3}x_{n}\| = 0. This completes the proof.

    We now prove the weak convergence of the sequence generated by the new modified three-step iteration method (3.1) for three G -nonexpansive mappings in a uniformly convex Banach space satisfying Opial's condition.

Theorem 1. Let \mathcal{X} be a uniformly convex Banach space which satisfies Opial's condition and \mathcal{C} has Property G. Suppose that \{\alpha_n\} , \{\beta_n\} and \{\gamma_n\} are real sequences in [\delta, 1 - \delta] for some \delta \in (0, 1) . If (x_{0}, q_{0}), (q_{0}, x_{0})\in E(G) for arbitrary x_{0} \in \mathcal{C} and q_{0} \in \mathcal{F}, then \{x_{n}\} converges weakly to a common fixed point of \mathcal{T}_{1}, \mathcal{T}_{2} and \mathcal{T}_{3} .

Proof. Let q_{0}\in \mathcal{F} be such that (x_{0}, q_{0}), (q_{0}, x_{0})\in E(G) . From Lemma 6 (i) , \lim_{n\rightarrow \infty}\|x_{n}-q_{0}\| exists, thus \{x_{n}\} is bounded. It follows from Lemma 6 (ii) that \lim_{n\rightarrow \infty}\|x_{n}-\mathcal{T}_{1}x_{n}\| = \lim_{n\rightarrow \infty}\|x_{n}-\mathcal{T}_{2}x_{n}\| = \lim_{n\rightarrow \infty}\|x_{n}-\mathcal{T}_{3}x_{n}\| = 0. Since \mathcal{X} is uniformly convex and \{x_{n}\} is bounded, we may assume without loss of generality that x_{n} \rightharpoonup u as n \rightarrow \infty. By Lemma 3, we have u \in \mathcal{F}. Suppose that subsequences \{x_{n_{k}}\} and \{x_{n_{j}}\} of \{x_{n}\} converge weakly to u and v, respectively. By Lemma 6 (ii), we obtain that \|x_{n_{k}}-\mathcal{T}_{i}x_{n_{k}}\|\rightarrow 0 and \|x_{n_{j}}-\mathcal{T}_{i}x_{n_{j}}\|\rightarrow 0 as k, j\rightarrow \infty . Using Lemma 3, we have u, v \in \mathcal{F}. By Lemma 6 (i), \lim_{n\rightarrow \infty}\|x_{n}-u\| and \lim_{n\rightarrow \infty}\|x_{n}-v\| exist. It follows from Lemma 3 that u = v . Therefore, \{x_{n}\} converges weakly to a common fixed point of \mathcal{T}_{1} , \mathcal{T}_{2} and \mathcal{T}_{3}.

    It is worth noting that Opial's condition remains crucial in proving weak convergence theorems. Every l^p (1 \leq p < \infty) satisfies Opial's condition, whereas L^p does not have the property unless p = 2.

    Next, we deal with the weak convergence of the sequence \{x_{n}\} generated by (3.1) for three G -nonexpansive mappings, without assuming Opial's condition, in a uniformly convex Banach space with a directed graph.

Theorem 2. Let \mathcal{X} be a uniformly convex Banach space. Suppose that \mathcal{C} has Property G, \{\alpha_n\}, \{\beta_n\} and \{\gamma_n\} are real sequences in [\delta, 1 - \delta] for some \delta \in (0, 1), \mathcal{F} is dominated by x_{0} and \mathcal{F} dominates x_{0} . If (x_{0}, q_{0}), (q_{0}, x_{0}) \in E(G) for arbitrary x_{0}\in \mathcal{C} and q_{0}\in \mathcal{F}, then \{x_{n}\} converges weakly to a common fixed point of \mathcal{T}_{1}, \mathcal{T}_{2} and \mathcal{T}_{3} .

Proof. Let q_{0}\in \mathcal{F} be such that (x_0, q_0), (q_0, x_0) are in E(G). From Lemma 6 (i) , \lim_{n\rightarrow \infty}\|x_{n}-q_{0}\| exists, so \{x_{n}\} is bounded in \mathcal{C} . Since \{x_{n}\} is a bounded sequence in the nonempty closed convex subset \mathcal{C} of the uniformly convex (hence reflexive) Banach space \mathcal{X}, there exists a subsequence \{x_{n_{j}}\} of the sequence \{x_{n}\} such that \{x_{n_{j}}\} converges weakly to some point p \in \mathcal{C}. By Lemma 6 (ii) we obtain that

    \begin{eqnarray} \lim\limits_{j\rightarrow \infty}\|x_{n_{j}}-\mathcal{T}_{1}x_{n_{j}}\| = \lim\limits_{j\rightarrow \infty}\|x_{n_{j}}-\mathcal{T}_{2}x_{n_{j}}\| = \lim\limits_{j\rightarrow \infty}\|x_{n_{j}}-\mathcal{T}_{3}x_{n_{j}}\| = 0. \end{eqnarray} (3.29)

    In addition, \|y_{n} -z_{n}\| \leq \|y_{n} - x_{n}\| + \|x_{n} - z_{n}\|. Using (3.26) and (3.28), we have

    \begin{eqnarray} \lim\limits_{n\rightarrow \infty}\|y_{n} -z_{n}\| = 0. \end{eqnarray} (3.30)

    From (3.26) and G -nonexpansiveness of \mathcal{T}_{2},

    \begin{align} \|\mathcal{T}_{2}z_{n}-z_{n}\| &\leq \|\mathcal{T}_{2}z_{n}-\mathcal{T}_{2}x_{n}\| + \|\mathcal{T}_{2}x_{n}-x_{n}\| + \|x_{n}-z_{n}\|\\ &\leq \|z_{n}-x_{n}\| + \|\mathcal{T}_{2}x_{n}-x_{n}\| + \|x_{n}-z_{n}\|\\ &\rightarrow 0 \ ( as \ n \rightarrow \infty). \end{align} (3.31)

    Thus, it follows from (3.10), (3.30) and (3.31) that

    \begin{align} \|\mathcal{T}_{1}y_{n} - y_{n}\| &\leq \|\mathcal{T}_{1}y_{n} - \mathcal{T}_{2}z_{n}\| + \|\mathcal{T}_{2}z_{n} - z_{n}\| + \|y_{n} - z_{n}\|\\ &\rightarrow 0 \ ( as \ n \rightarrow \infty). \end{align} (3.32)

    By Lemma 4, I-\mathcal{T}_{1} , I-\mathcal{T}_{2} and I-\mathcal{T}_{3} are G -demiclosed at 0 , so that p\in \mathcal{F}. To complete the proof, it suffices to show that \{x_{n}\} converges weakly to p. To this end, we need to show that \{x_{n}\} satisfies the hypothesis of Lemma 5. Let \{x_{n_{j}}\} be a subsequence of \{x_{n}\} which converges weakly to some q \in \mathcal{C}. By arguments similar to those above, q \in \mathcal{F}. Now for each j \geq 1, using (3.1), we have

    \begin{eqnarray} x_{n_{j}+1} & = (1 - \alpha_{n_{j}})\mathcal{T}_{1}y_{n_{j}} + \alpha_{n_{j}}\mathcal{T}_{2}z_{n_{j}}. \end{eqnarray} (3.33)

    It follows from (3.29) that

    \begin{eqnarray} \mathcal{T}_{3}x_{n_{j}} & = (\mathcal{T}_{3}x_{n_{j}} - x_{n_{j}}) + x_{n_{j}} \rightharpoonup q. \end{eqnarray} (3.34)

    Now from (3.1) and (3.34),

    \begin{eqnarray} z_{n_{j}} & = (1 - \gamma_{n_{j}})x_{n_{j}} + \gamma_{n_{j}}\mathcal{T}_{3}x_{n_{j}} \rightharpoonup q. \end{eqnarray} (3.35)

    Using (3.31) and (3.35), we have

    \begin{eqnarray} \mathcal{T}_{2}z_{n_{j}} & = (\mathcal{T}_{2}z_{n_{j}} - z_{n_{j}}) + z_{n_{j}} \rightharpoonup q. \end{eqnarray} (3.36)

    Now from (3.1), (3.34) and (3.36),

    \begin{eqnarray} y_{n_{j}} & = (1 - \beta_{n_{j}})\mathcal{T}_{3}x_{n_{j}} + \beta_{n_{j}}\mathcal{T}_{2}z_{n_{j}} \rightharpoonup q. \end{eqnarray} (3.37)

    Also from (3.32) and (3.37),

    \begin{eqnarray} \mathcal{T}_{1}y_{n_{j}} & = (\mathcal{T}_{1}y_{n_{j}} - y_{n_{j}}) + y_{n_{j}} \rightharpoonup q. \end{eqnarray} (3.38)

    It follows from (3.33), (3.36) and (3.38) that

    \begin{eqnarray*} x_{n_{j}+1} &\rightharpoonup q. \end{eqnarray*}

    Therefore, the sequence \{x_{n}\} satisfies the hypothesis of Lemma 5 which in turn implies that \{x_{n}\} weakly converges to q so that p = q. This completes the proof.

    The strong convergence of the sequence generated by the new modified three-step iteration method (3.1) for three G -nonexpansive mappings in a uniformly convex Banach space with a directed graph is discussed in the rest of this section.

    Theorem 3. Let \mathcal{X} be a uniformly convex Banach space. Suppose that \{\alpha_n\}, \{\beta_n\} and \{\gamma_n\} are real sequences in [\delta, 1 - \delta] for some \delta \in (0, 1) , \mathcal{T}_{i}(i = 1, 2, 3) satisfy condition (C), \mathcal{F} is dominated by x_{0} and \mathcal{F} dominates x_{0}. Then \{x_{n}\} converges strongly to a common fixed point of \mathcal{T}_{1}, \mathcal{T}_{2} and \mathcal{T}_{3} .

    Proof. By Lemma 6 (i), \lim_{n\rightarrow \infty}\|x_{n}-q\| exists and so \lim_{n\rightarrow \infty}d(x_{n}, \mathcal{F}) exists for any q\in \mathcal{F}. Also by Lemma 6 (ii), \lim_{n\rightarrow \infty}\|x_{n}-\mathcal{T}_{1}x_{n}\| = \lim_{n\rightarrow \infty}\|x_{n}-\mathcal{T}_{2}x_{n}\| = \lim_{n\rightarrow \infty}\|x_{n}-\mathcal{T}_{3}x_{n}\| = 0. It follows from condition (C) that \lim_{n\rightarrow \infty}f(d(x_{n}, \mathcal{F})) = 0. Since f: [0, \infty) \rightarrow [0, \infty) is a nondecreasing function satisfying f(0) = 0, f(r) > 0 for all r\in (0, \infty), we obtain that \lim_{n\rightarrow \infty}d(x_{n}, \mathcal{F}) = 0 . Hence, we can find a subsequence \{x_{n_{j}}\} of \{x_{n}\} and a sequence \{u_{j}\}\subset \mathcal{F} such that \|x_{n_{j}}-u_{j}\|\leq\frac{1}{2^{j}} . Put n_{j+1} = n_{j}+k for some k\geq1 . Then

    \begin{align*} \|x_{n_{j+1}}-u_{j}\|\leq \|x_{n_{j}+k-1}-u_{j}\|\leq \|x_{n_{j}}-u_{j}\|\leq\frac{1}{2^{j}}. \end{align*}

    We obtain that \|u_{j+1}-u_{j}\|\leq\frac{3}{2^{j+1}} , thus \{u_{j}\} is a Cauchy sequence. We assume that u_{j}\rightarrow q_{0}\in \mathcal{C} as j\rightarrow \infty . Since \mathcal{F} is closed, we get q_{0}\in \mathcal{F} . Therefore, we have x_{n_{j}}\rightarrow q_{0}\in \mathcal{C} as j\rightarrow \infty . Since \lim_{n\rightarrow \infty}\|x_{n}-q_{0}\| exists, we obtain x_{n}\rightarrow q_{0} . This completes the proof.

    Theorem 4. Let \mathcal{X} be a uniformly convex Banach space. Suppose that \mathcal{C} has Property G, \{\alpha_n\}, \{\beta_n\} and \{\gamma_n\} are real sequences in [\delta, 1 - \delta] for some \delta \in (0, 1), \mathcal{F} is dominated by x_{0} and \mathcal{F} dominates x_{0} . If one of \mathcal{T}_{i} (i = 1, 2, 3) is semi-compact, then \{x_{n}\} converges strongly to a common fixed point of \mathcal{T}_{1}, \mathcal{T}_{2} and \mathcal{T}_{3} .

    Proof. It follows from Lemma 6 that \{x_{n}\} is bounded and \lim_{n\rightarrow \infty}\|x_{n}-\mathcal{T}_{1}x_{n}\| = \lim_{n\rightarrow \infty}\|x_{n}-\mathcal{T}_{2}x_{n}\| = \lim_{n\rightarrow \infty}\|x_{n}-\mathcal{T}_{3}x_{n}\| = 0. Since one of \mathcal{T}_1 , \mathcal{T}_2 and \mathcal{T}_3 is semi-compact, there exists a subsequence \{x_{n_{j}}\} of \{x_{n}\} such that x_{n_{j}}\rightarrow q \in \mathcal{C} as j\rightarrow \infty . Since \mathcal{C} has Property G and the graph G is transitive, we obtain (x_{n_{j}}, q)\in E(G) . Notice that, for each i\in \{1, 2, 3\}, \lim_{j\rightarrow \infty}\|x_{n_{j}}-\mathcal{T}_{i}x_{n_{j}}\| = 0. Then

    \begin{align*} \|q-\mathcal{T}_{i}q\| &\leq \|q-x_{n_{j}}\|+\|x_{n_{j}}-\mathcal{T}_{i}x_{n_{j}}\|+\|\mathcal{T}_{i}x_{n_{j}}-\mathcal{T}_{i}q\|\\ &\leq \|q-x_{n_{j}}\|+\|x_{n_{j}}-\mathcal{T}_{i}x_{n_{j}}\|+\|x_{n_{j}}-q\|\\ &\rightarrow 0 \, \, \, (\text{as} \, \, j\rightarrow \infty ) . \end{align*}

    Hence q\in \mathcal{F} . Thus \lim_{n\rightarrow \infty}d(x_{n}, \mathcal{F}) exists by Theorem 3. We note that d(x_{n_{j}}, \mathcal{F}) \leq \|x_{n_{j}}- q\|\rightarrow 0 as j\rightarrow \infty , hence \lim_{n\rightarrow \infty}d(x_{n}, \mathcal{F}) = 0 . It follows, as in the proof of Theorem 3, that \{x_{n}\} converges strongly to a common fixed point of \mathcal{T}_{1} , \mathcal{T}_{2} and \mathcal{T}_{3} . This completes the proof.

    To support our main theorems, we discuss approximate solutions of common fixed-point problems for a class of G -nonexpansive mappings on a closed interval. The following definition will be useful in this context.

    Definition 2. [28] Suppose {\{\zeta_{n}\}} is a sequence that converges to \zeta , with \zeta_{n}\neq \zeta for all n. If positive constants \nu and \psi exist with

    \lim\limits_{n\rightarrow \infty}\frac{|\zeta_{n+1}-\zeta|}{|\zeta_{n}-\zeta|^{\psi}} = \nu,

    then {\{\zeta_{n}\}} converges to \zeta of order \psi , with asymptotic error constant \nu . If \psi = 1 (and \nu < 1 ), the sequence is linearly convergent.

    In 2011, Phuengrattana and Suantai [29] showed that the Ishikawa iteration converges faster than the Mann iteration for a class of continuous functions on a closed interval of the real line. To study the order of convergence of a real sequence {\{\zeta_{n}\}} converging to \zeta, we use the standard terminology of numerical analysis; see [28], for example.
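Definition 2 can be illustrated with a short computation. The sketch below (an illustrative example, not taken from the source) estimates the successive error ratios |\zeta_{n+1}-\zeta|/|\zeta_{n}-\zeta| for a synthetic sequence whose asymptotic error constant is known to be \nu = 0.5 with \psi = 1 :

```python
# Estimate the asymptotic error constant of Definition 2 from a sequence.
# Synthetic example: zeta_n = zeta + 0.5**n converges linearly with nu = 0.5.

def error_ratios(seq, limit):
    """Successive ratios |z_{n+1} - z| / |z_n - z| (psi = 1 in Definition 2)."""
    errs = [abs(z - limit) for z in seq]
    return [errs[i + 1] / errs[i] for i in range(len(errs) - 1) if errs[i] != 0]

zeta = 1.0
seq = [zeta + 0.5 ** n for n in range(1, 20)]
ratios = error_ratios(seq, zeta)
print(ratios[-1])  # -> 0.5, the asymptotic error constant
```

When the ratios settle near a constant \nu < 1, the sequence is linearly convergent in the sense of Definition 2; this is exactly the quantity plotted in the figures below.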

    Example 1. Let \mathcal{X} = \mathbb{R} , \mathcal{C} = [0, 2] and G = (V (G), E(G)) be a directed graph defined by V (G) = \mathcal{C} and (x, y) \in E(G) if and only if 0.50\leq x \neq y \leq 1.70 or x = y \in \mathcal{C}. Then E(G) is coordinate-convex and \{(x, x) : x \in V(G)\} \subset E(G). Define mappings \mathcal{T}_{1}, \mathcal{T}_{2}, \mathcal{T}_{3} : \mathcal{C} \rightarrow \mathcal{C} where

    \mathcal{T}_{1}x = \frac{2}{3}\arcsin(x-1)+1, \, \mathcal{T}_{2}x = \frac{1}{3}\tan(x-1)+1, \, \mathcal{T}_{3}x = \sqrt{x},

    for any x \in \mathcal{C} . It is easy to show that \mathcal{T}_{1}, \mathcal{T}_{2}, \mathcal{T}_{3} are G -nonexpansive, but they are not nonexpansive because

    |\mathcal{T}_{1}x-\mathcal{T}_{1}y| > 0.50 = |x - y|,
    |\mathcal{T}_{2}u - \mathcal{T}_{2}v| > 0.07 = |u - v|,

    and

    |\mathcal{T}_{3}p - \mathcal{T}_{3}q| > 0.45 = |p - q|,

    when x = 1.95, y = 1.45, u = 0.08, v = 0.01, p = 0.5 and q = 0.05 .
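These inequalities can be checked numerically. The following sketch (plain Python, with the mappings transcribed from Example 1) evaluates the three pairs of points quoted above:

```python
import math

# Hedged numerical check that T1, T2, T3 from Example 1 violate plain
# nonexpansiveness at the test points quoted in the text.
T1 = lambda x: (2.0 / 3.0) * math.asin(x - 1.0) + 1.0
T2 = lambda x: (1.0 / 3.0) * math.tan(x - 1.0) + 1.0
T3 = lambda x: math.sqrt(x)

x, y = 1.95, 1.45
u, v = 0.08, 0.01
p, q = 0.5, 0.05

print(abs(T1(x) - T1(y)) > abs(x - y))  # |T1 x - T1 y| > 0.50 -> True
print(abs(T2(u) - T2(v)) > abs(u - v))  # |T2 u - T2 v| > 0.07 -> True
print(abs(T3(p) - T3(q)) > abs(p - q))  # |T3 p - T3 q| > 0.45 -> True
```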

    Let

    \begin{equation} \alpha_{n} = \frac{n+1}{5n+3}, \beta_{n} = \frac{n+2}{\sqrt{8n + 5}} \, \, \text{and}\, \, \gamma_{n} = \frac{n+4}{10n+7}. \end{equation} (4.1)

    Let {\{x_{n}\}} be the sequence generated by (3.1), and let {\{y_{n}\}} and {\{z_{n}\}} be the sequences generated by the three-step Noor iteration (1.1) and the SP-iteration (1.2), respectively. Example 1 shows the convergence behavior of these three comparative methods; x = 1 is a common fixed point of \mathcal{T}_{1}, \mathcal{T}_{2} and \mathcal{T}_{3} . We choose z_{1} = y_{1} = w_{1} = x_{1} = 1.65 and use the relative error |\zeta_n - x|/|x| < 1.00e-08 as the stopping criterion, where {\{\zeta_{n}\}} denotes each of the comparative sequences. The results of the three comparative algorithms under permutations of the operators \mathcal{T}_{1}, \mathcal{T}_{2} and \mathcal{T}_{3} are presented. The permutations of these three operators give 27 case studies in principle, but we restrict attention to the cases in which no operator is applied twice in the proposed algorithm (3.1). That is, the following 6 cases of the three comparative algorithms (the proposed algorithm, the three-step Noor iteration (1.1) and the SP-iteration (1.2)) are demonstrated and discussed.

    Case I. Three comparative algorithms with the operator order \mathcal{T}_1-\mathcal{T}_2-\mathcal{T}_3 .

    Case II. Three comparative algorithms with the operator order \mathcal{T}_1-\mathcal{T}_3-\mathcal{T}_2 .

    Case III. Three comparative algorithms with the operator order \mathcal{T}_2-\mathcal{T}_1-\mathcal{T}_3 .

    Case IV. Three comparative algorithms with the operator order \mathcal{T}_2-\mathcal{T}_3-\mathcal{T}_1 .

    Case V. Three comparative algorithms with the operator order \mathcal{T}_3-\mathcal{T}_1-\mathcal{T}_2 .

    Case VI. Three comparative algorithms with the operator order \mathcal{T}_3-\mathcal{T}_2-\mathcal{T}_1 .
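For orientation, the proposed scheme on Example 1 can be sketched in a few lines. The update rules below are reconstructed from (3.33), (3.35) and (3.37); constant parameters \alpha_n = \beta_n = \gamma_n = 0.5 are used for simplicity in place of (4.1), so this is an illustrative sketch rather than a reproduction of the reported experiments:

```python
import math

# Minimal sketch of the proposed three-step scheme on Example 1:
#   z_n     = (1 - gamma_n) x_n + gamma_n T3 x_n
#   y_n     = (1 - beta_n) T3 x_n + beta_n T2 z_n
#   x_{n+1} = (1 - alpha_n) T1 y_n + alpha_n T2 z_n
T1 = lambda x: (2.0 / 3.0) * math.asin(x - 1.0) + 1.0
T2 = lambda x: (1.0 / 3.0) * math.tan(x - 1.0) + 1.0
T3 = lambda x: math.sqrt(x)

x = 1.65                    # same starting point as in the text
alpha = beta = gamma = 0.5  # constant parameters, an assumption here
for n in range(50):
    z = (1 - gamma) * x + gamma * T3(x)
    y = (1 - beta) * T3(x) + beta * T2(z)
    x = (1 - alpha) * T1(y) + alpha * T2(z)
print(abs(x - 1.0))         # distance to the common fixed point x* = 1
```

Fifty iterations already drive the error far below the 10^{-8} stopping tolerance used in the experiments.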

    All numerical experiments for the common fixed point obtained by the three comparative methods under permutations of the operators \mathcal{T}_{1}, \mathcal{T}_{2} and \mathcal{T}_{3} are shown in Figures 1–6. Each figure contains three graphs showing, respectively, the behavior of the numerical solution sequences, the relative error sequences and the asymptotic error sequences for the three comparative methods. The first two graphs of each figure show that all sequences generated by these methods converge to the common fixed point x = 1 and that the relative errors decrease to zero as the number of iterations increases. The remaining graph of each figure shows the tendency of the asymptotic error constant \sigma for a sequence \{\zeta_n\} , computed from the ratio \left| \zeta_{n+1} - 1\right| / \left| \zeta_n - 1\right| , for the three-step Noor, SP and proposed methods.

    Figure 1.  Numerical solution, relative error and asymptotic error constant of three comparative methods for case I.
    Figure 2.  Numerical solution, relative error and asymptotic error constant of three comparative methods for case II.
    Figure 3.  Numerical solution, relative error and asymptotic error constant of three comparative methods for case III.
    Figure 4.  Numerical solution, relative error and asymptotic error constant of three comparative methods for case IV.
    Figure 5.  Numerical solution, relative error and asymptotic error constant of three comparative methods for case V.
    Figure 6.  Numerical solution, relative error and asymptotic error constant of three comparative methods for case VI.

    It can be seen from the remaining graph of each figure that all methods are linearly convergent in the sense of Definition 2. Since a smaller asymptotic error constant implies faster convergence of the corresponding sequence, it can be concluded from the remaining graphs that the proposed method converges faster than the three-step Noor and SP methods.

    Thus, it can be concluded from Figures 1–6 that the proposed method, when no duplicate operator is applied, converges faster than the three-step Noor and SP methods. The fastest case occurs when the proposed method with the operator order \mathcal{T}_3-\mathcal{T}_1-\mathcal{T}_2 is used.

    Now, we apply our proposed algorithms to solve certain image deblurring and signal recovery problems. All codes were written in Matlab and run on a laptop with an Intel Core i7 processor, 16.00 GB RAM and Windows 10 (64-bit).

    The minimization problem of the sum of two functions is to find a solution of

    \begin{align} \min\limits_{\substack{x \in \mathbb{R}^{n}}} \{F(x): = f(x) + h(x)\}, \end{align} (5.1)

    where h : \mathbb{R}^{n} \rightarrow \mathbb{R} \cup \{\infty\} is a proper, convex and lower semicontinuous function, and f : \mathbb{R}^{n} \rightarrow \mathbb{R} is a convex differentiable function whose gradient \nabla f is L -Lipschitz continuous for some L > 0. The solution of (5.1) can be characterized by using Fermat's rule, Theorem 16.3 of Bauschke and Combettes [30], as follows:

    x^* \, \, \text{is a minimizer of}\, \, (f + h) \, \, \text{if and only if}\, \, 0 \in \partial h(x^*) + \nabla f (x^*),

    where \partial h is the subdifferential of h and \nabla f is the gradient of f . The subdifferential of h at x^*, denoted by \partial h(x^*), is defined by

    \partial h(x^*) : = \{u : h(x) - h(x^*) \geq \langle u, x - x^*\rangle \, \, \text{for all}\, \, x\}.
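    For a concrete instance of this definition, take h(x) = \lambda |x| on \mathbb{R} with \lambda > 0 . A direct computation from the inequality above gives

    \partial h(x^*) = \begin{cases} \{\lambda\, \mathrm{sign}(x^*)\}, & x^* \neq 0, \\ [-\lambda, \lambda], & x^* = 0, \end{cases}

    so Fermat's rule at x^* = 0 reduces to the familiar condition |\nabla f(0)| \leq \lambda .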

    It is also well-known that the solution of (5.1) is characterized by the following fixed point problem:

    x^* \, \, \text{is a minimizer of}\, \, (f + h) \, \, \text{if and only if}\, \, x^* = prox_{\mu h}(I - \mu \nabla f) (x^*),

    where \mu > 0 and prox_{h} is the proximity operator of h defined by prox_{h}(x) : = argmin_{y} \{h(y)+ \frac{1}{2}\left\|x - y\right\|_{2}^{2}\} ; see [31] for more details. It is also known that prox_{\mu h}(I - \mu \nabla f) is a nonexpansive mapping when \mu \in (0, \frac{2}{L}).
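For intuition, when h(x) = \lambda\|x\|_1 the proximity operator is the componentwise soft-thresholding map, so one forward-backward step prox_{\mu h}(I - \mu \nabla f) can be sketched as follows. This is a generic illustration of the operator, under an assumed toy objective f , not the paper's experiment:

```python
import numpy as np

# One forward-backward step x -> prox_{mu*h}(x - mu*grad_f(x)) for the
# common case h(x) = lam*||x||_1, whose proximity operator is the
# componentwise soft-thresholding map (a standard fact).

def soft_threshold(v, t):
    """prox of t*||.||_1: shrink each entry toward zero by t."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def forward_backward_step(x, grad_f, mu, lam):
    return soft_threshold(x - mu * grad_f(x), mu * lam)

# Toy example (assumption): f(x) = 0.5*||x - b||^2, so grad f(x) = x - b.
b = np.array([3.0, -0.2, 0.0])
grad_f = lambda x: x - b
x = np.zeros(3)
for _ in range(200):
    x = forward_backward_step(x, grad_f, mu=1.0, lam=0.5)
print(x)  # fixed point is soft_threshold(b, 0.5) = [2.5, 0, 0]
```

The fixed point of this step is exactly a minimizer of f + h , which is the characterization used above.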

    Let \mathcal{B} be the degraded image of the true image \mathcal{X} , both in matrix form with \tilde{m} rows and \tilde{n} columns ( \mathcal{B}, \mathcal{X} \in \mathbb{R}^{\tilde{m} \times \tilde{n}} ). The key to obtaining the image restoration model is to rearrange the elements of the images \mathcal{B} and \mathcal{X} into column vectors by stacking the columns of these images into two long vectors b and x , where b = \rm{vec}(\mathcal{B}) and x = \rm{vec}(\mathcal{X}) , both of length n = \tilde{m}\tilde{n} . The image restoration problem can then be modelled in one-dimensional vector form by the following linear equation system:

    \begin{align} b = Mx, \end{align} (5.2)

    where x \in \mathbb{R}^{n} is the original image, b \in \mathbb{R}^{n} is the observed image, M\in\mathbb{R}^{n \times n} is the blurring matrix and n = \tilde{m}\tilde{n} . In order to solve problem (5.2), we aim to approximate the original image x by solving the following least squares (LS) problem:

    \begin{align} \min\limits_{x}\frac{1}{2}\|b-Mx\|_{2}^{2}, \end{align} (5.3)

    where \|.\|_{2} is defined by \|x\|_{2} = \sqrt{\sum_{i = 1}^{n}|x_{i}|^{2}} . We can apply the minimization problem of the sum of two functions (5.1) to the LS-problem (5.3) by setting h(x) = 0 and f(x) = \frac{1}{2}\|b-Mx\|_2^2 , where \nabla f(x) = M^T(Mx-b) . Thus, the LS-problem can be solved with our method (3.1) by setting

    \mathcal{T}x = x - \mu M^T(M x - b), \quad \mu \in \left( 0, \frac{2}{||M^T M||_2} \right).

    The following proposed method is then obtained for finding the solution of the image deblurring problem:

    \begin{align} z_{n} & = (1 - \gamma_n)x_{n} + \gamma_n\left( x_{n} - \mu M^T(M x_{n} - b) \right), \\ y_{n} & = (1 - \beta_n)\left( x_{n} - \mu M^T(M x_{n} - b) \right) + \beta_n\left( z_{n} - \mu M^T(M z_{n} - b) \right), \\ x_{n+1} & = (1 - \alpha_n)\left( y_{n} - \mu M^T(M y_n - b) \right) + \alpha_n\left( z_{n} - \mu M^T(M z_n - b) \right), \end{align} (5.4)

    where \mu \in \left(0, \frac{2}{\|M^T M\|_2} \right) and \{ \alpha_n \}, \{ \beta_n \} and \{ \gamma_n \} are sequences in [ \delta, 1 - \delta ] for all n \in \mathbb{N} and for some \delta in (0, 1) . The algorithm (5.4) is used in solving the image deblurring problem (5.2) with the default parameter (4.1) and \mu = 1/\|M^T M\|_2 .
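Scheme (5.4) can be sketched compactly. In the sketch below a small random well-conditioned matrix stands in for the blurring matrix M , and constant parameters replace (4.1); both are assumptions made so the example stays self-contained:

```python
import numpy as np

# Self-contained sketch of scheme (5.4) on a synthetic well-posed system
# b = Mx (a 12x12 near-identity matrix stands in for a blurring matrix;
# the real problem uses the vectorized image and is far larger).
rng = np.random.default_rng(0)
n = 12
M = np.eye(n) + 0.05 * rng.standard_normal((n, n))
x_true = rng.standard_normal(n)
b = M @ x_true

mu = 1.0 / np.linalg.norm(M.T @ M, 2)     # mu = 1/||M^T M||_2 as in the text
T = lambda v: v - mu * M.T @ (M @ v - b)  # gradient step for 0.5*||b - Mv||^2

alpha = beta = gamma = 0.5                # constants in place of (4.1)
x = np.zeros(n)
for _ in range(2000):
    z = (1 - gamma) * x + gamma * T(x)
    y = (1 - beta) * T(x) + beta * T(z)
    x = (1 - alpha) * T(y) + alpha * T(z)
print(np.linalg.norm(x - x_true, np.inf))  # near zero for invertible M
```

For an invertible M the iterates recover x exactly; for genuine blurring matrices the system is ill-conditioned and only an approximation of the original image is obtained.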

    The goal of the image deblurring problem is to recover the original image from the observed image. However, the blurring matrix M must be known in order to apply algorithm (5.4). We now present a new idea for solving the image deblurring problem when the observed images b_1, b_2, ..., b_N can be restored by using the blurring matrices M_1, M_2, ..., M_N , respectively, in which

    \begin{equation} b_i = M_i x, \quad i = 1, 2, \ldots, N. \end{equation} (5.5)

    That is, the original image x is a common solution of the N linear equations (5.5). Now, let us consider the following LS-problems:

    \begin{align} \min\limits_{x\in\mathbb{R}^{n}}\frac{1}{2}\|M_1x-b_{1}\|_{2}^{2}, \min\limits_{x\in\mathbb{R}^{n}}\frac{1}{2}\|M_2x-b_{2}\|_{2}^{2}, ..., \min\limits_{x\in\mathbb{R}^{n}}\frac{1}{2}\|M_Nx-b_{N}\|_{2}^{2}, \end{align} (5.6)

    where x is the common solution of problem (5.6). To apply the proposed algorithm (5.4) to finding the original image x , only a group of three blurring matrices M_j, M_k, M_l chosen from the known blurring matrices M_1, M_2, ..., M_N is needed. We can then apply the minimization problem of the sum of two functions with our method (3.1) for solving the LS-problems by setting

    \mathcal{T}_ix = x -\mu_i \nabla f_i(x)

    with h_i(x) = 0 and f_i(x) = \frac{1}{2}\|b_i-M_ix\|_2^2 , where \nabla f_i(x) = M_i^T(M_ix-b_i) . We now present the proposed algorithm with M_j-M_k-M_l :

    \begin{align} z_{n} & = (1 - \gamma_n)x_{n} + \gamma_n \left( x_{n} - \mu_j M_j^T(M_j x_{n} - b_j) \right), \\ y_{n} & = (1 - \beta_n)\left( x_{n} - \mu_j M_j^T(M_j x_{n} - b_j) \right) + \beta_n \left( z_{n} - \mu_k M_k^T(M_k z_{n} - b_k) \right), \\ x_{n+1} & = (1 - \alpha_n)\left( y_{n} - \mu_l M_l^T(M_l y_n - b_l) \right) + \alpha_n \left( z_{n} - \mu_k M_k^T(M_k z_n - b_k) \right), \end{align} (5.7)

    with the default parameter (4.1) and \mu_j = 1/\|M_j^T M_j\|_2, \mu_k = 1/\|M_k^T M_k\|_2, \mu_l = 1/\|M_l^T M_l\|_2 ; we call it the proposed algorithm with M_j-M_k-M_l . The implemented algorithm (5.7) is applied to the image deblurring problem using every group of three blurring matrices chosen from the N known blurring matrices, with the default parameter (4.1). The original RGB color image shown in Figure 7 is used to demonstrate the practicability of the proposed algorithm. The relative error used as the stopping criterion of the proposed algorithms is defined by \|x_{n} - x \|_{\infty}/\| x \|_{\infty} < 10^{-3} . The performance of the comparative algorithms at x_{n} in the image deblurring process is measured quantitatively by means of the peak signal-to-noise ratio (PSNR), which is defined by

    \begin{align*} \text{PSNR}(x_{n} ) = 10\text{log}_{10}\left(\frac{255^{2}}{\text{MSE}}\right), \end{align*}
    Figure 7.  The original RGB image with matrix size 248 \times 356 \times 3 .

    where \text{MSE} = \|x_{n} - x\|_{2}^{2} . Three versions of the original RGB image degraded by the blurring matrices M_{1}, M_{2} and M_{3} are shown in Figure 8; these are used to test the implemented algorithm.
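A minimal sketch of the PSNR computation follows. It uses the common 8-bit convention 10\log_{10}(255^2/\text{MSE}) with MSE taken as the mean squared error per pixel; the per-pixel normalization is an assumption here, since the text writes \text{MSE} = \|x_{n}-x\|_2^2 without dividing by the pixel count:

```python
import numpy as np

# PSNR for 8-bit images: 10*log10(255^2 / MSE), with MSE averaged per
# pixel (normalization assumption; larger PSNR means a better restoration).

def psnr(x_n, x, peak=255.0):
    x_n, x = np.asarray(x_n, float), np.asarray(x, float)
    mse = np.mean((x_n - x) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

x = np.full((4, 4), 100.0)
print(psnr(x + 1.0, x))   # small error  -> high PSNR
print(psnr(x + 10.0, x))  # larger error -> lower PSNR
```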

    Figure 8.  The original RGB image degraded by blurred matrices M_{1} , M_{2} and M_{3} respectively.

    Next, we present restoration of images that have been corrupted by the following blur types:

    Type I. Gaussian blur of filter size 9\times9 with standard deviation \sigma = 4 (The original image has been degraded by the blurring matrix M_{1} ).

    Type II. Out of focus blur (Disk) with radius r = 6 (The original image has been degraded by the blurring matrix M_{2} ).

    Type III. Motion blur specifying with motion length of 21 pixels (len = 21 ) and motion orientation 11^{\circ} ( \theta = 11 ) (The original image has been degraded by the blurring matrix M_{3} ).

    The test image \mathcal{X} and the three degraded images \mathcal{B} (see Figure 8) are represented by their red-green-blue components. We denote by \mathcal{X}_r, \mathcal{X}_g, \mathcal{X}_b and \mathcal{B}_r, \mathcal{B}_g, \mathcal{B}_b the gray-scale images that constitute the red-green-blue channels of the test image \mathcal{X} and the degraded image \mathcal{B} , respectively. Thus, we define the column vectors x and b from the color images \mathcal{X} and \mathcal{B} , both of length n = 3\tilde{m}\tilde{n} . We then apply the proposed algorithms to obtain the common solution of the image deblurring problem with these three blurring matrices, and we also compare them with the three-step Noor iteration (1.1) and the SP-iteration (1.2).

    The permutations of the group of these three blurring matrices give 27 case studies in principle. First, we are interested in the three simple cases in which all three blurring matrices are the same. Second, we are interested in the cases with no duplicate blurring matrices. That is, the following 9 cases of the three comparative algorithms (the proposed algorithm (5.7), the three-step Noor iteration (1.1) and the SP-iteration (1.2)) are demonstrated and discussed.

    Case I. Three comparative algorithms with M_1-M_1-M_1 .

    Case II. Three comparative algorithms with M_1-M_2-M_3 .

    Case III. Three comparative algorithms with M_1-M_3-M_2 .

    Case IV. Three comparative algorithms with M_2-M_2-M_2 .

    Case V. Three comparative algorithms with M_2-M_1-M_3 .

    Case VI. Three comparative algorithms with M_2-M_3-M_1 .

    Case VII. Three comparative algorithms with M_3-M_3-M_3 .

    Case VIII. Three comparative algorithms with M_3-M_1-M_2 .

    Case IX. Three comparative algorithms with M_3-M_2-M_1 .

    Figures 9–17 show the comparative behavior of the Cauchy error, the relative error and the PSNR quality of the reconstructed RGB image for the 9 cases.

    Figure 9.  The Cauchy norm, relative norm and PSNR quality plots of three comparative algorithms with case I.
    Figure 10.  The Cauchy norm, relative norm and PSNR quality plots of three comparative algorithms with case II.
    Figure 11.  The Cauchy norm, relative norm and PSNR quality plots of three comparative algorithms with case III.
    Figure 12.  The Cauchy norm, relative norm and PSNR quality plots of three comparative algorithms with case IV.
    Figure 13.  The Cauchy norm, relative norm and PSNR quality plots of three comparative algorithms with case V.
    Figure 14.  The Cauchy norm, relative norm and PSNR quality plots of three comparative algorithms with case VI.
    Figure 15.  The Cauchy norm, relative norm and PSNR quality plots of three comparative algorithms with case VII.
    Figure 16.  The Cauchy norm, relative norm and PSNR quality plots of three comparative algorithms with case VIII.
    Figure 17.  The Cauchy norm, relative norm and PSNR quality plots of three comparative algorithms with case IX.

    It is remarkable that the Cauchy error and relative error plots for each case of the three comparative algorithms decrease as the iteration number increases. Since the Cauchy and relative error plots show the validity of and confirm the convergence of the presented methods, all methods in Figures 9–17 are guaranteed to converge to a common solution of the deblurring problem (5.5). Based on the PSNR plots of each case, all images restored by these three comparative algorithms improve in quality as the iteration number increases. It can be concluded from the comparative plots in Figures 9–17 that the proposed method is more efficient than the three-step Noor and SP methods. Moreover, the PSNR quality of the restored image is much better when the proposed method with M_j-M_k-M_l and distinct j, k, l is used for solving the deblurring problem than with M_1-M_1-M_1 , M_2-M_2-M_2 or M_3-M_3-M_3 . The best cases for recovering the observed image occur when the proposed methods with M_2-M_1-M_3 and M_3-M_1-M_2 are used. Figure 18 shows crops of the reconstructed RGB images at the 10, 000^{th} iteration obtained by using the proposed algorithms for the common solution of the deblurring problem with the operators M_1-M_1-M_1 , M_2-M_2-M_2 , M_3-M_3-M_3 and M_3-M_1-M_2 , respectively.

    It can be seen from these figures that restoration using the proposed algorithm with the operators M_3-M_2-M_1 yields a smooth restored image from the degraded input.

    Next, we give some numerical examples to the signal recovery.

    In signal processing, compressed sensing can be modeled as the following underdetermined linear equation:

    y = Ax + \nu,

    where x \in \mathbb{R}^{n} is the original signal with n components to be recovered, \nu, y \in \mathbb{R}^{m} are the noise and the noisy observed signal, each with m components, respectively, and A \in \mathbb{R}^{m \times n} is a degraded matrix. Finding the solutions of this underdetermined linear equation can be seen as solving the LASSO problem:

    \begin{eqnarray} \min\limits_{x \in \mathbb{R}^{n}} \frac{1}{2} \| y - Ax \|^{2}_{2} +\lambda\|x\|_1, \end{eqnarray} (5.8)

    where \lambda > 0 . A new idea for solving the signal recovery problem is presented when the observed signals y_1, y_2, ..., y_N can be recovered by using the known degraded matrices A_1, A_2, ..., A_N , in which

    \begin{equation} y_i = A_i \mathbf{x} + \nu_i, \quad i = 1, 2, \ldots, N. \end{equation} (5.9)

    where the true signal x is a common solution of problem (5.9).

    Let us consider the following LASSO problems:

    \begin{eqnarray} \min\limits_{x \in \mathbb{R}^{n}} \frac{1}{2} \| A_{1}x - y_{1} \|^{2}_{2} &+&\lambda_1\|x\|_1 , \\ \min\limits_{x \in \mathbb{R}^{n}} \frac{1}{2} \| A_{2}x - y_{2} \|^{2}_{2} &+&\lambda_2\|x\|_1 , \\ &\vdots& \\ \min\limits_{x \in \mathbb{R}^{n}} \frac{1}{2} \| A_{N}x - y_{N} \|^{2}_{2} &+&\lambda_N\|x\|_1 , \end{eqnarray} (5.10)

    where the true signal x is a common solution of problem (5.10). We can apply the minimization problem of the sum of two functions with our method (3.1) for solving the LASSO problems (5.10) by setting

    \mathcal{T}_ix = prox_{\mu_i h_i}\big(x -\mu_i\nabla f_i(x)\big), \quad \mu_i \in \left( 0, \frac{2}{||A_i^T A_i||_2} \right), \quad i = 1, 2, \ldots, N,

    with h_i(x) = \lambda_i\|x\|_1 and f_i(x) = \frac{1}{2}\|y_i-A_ix\|_2^2 , where \nabla f_i(x) = A_i^T(A_ix-y_i) . We now obtain the following proposed method for finding the common solution of the LASSO problems (5.10) for a group of three degraded matrices A_j, A_k, A_l chosen from the N degraded matrices A_1, A_2, ..., A_N :

    \begin{align} z_{n} & = (1 - \gamma_{n})x_{n} + \gamma_{n} prox_{\mu_j h_j}\left( x_{n} - \mu_j A_j^T(A_jx_n - y_j) \right), \\ y_{n} & = (1 - \beta_{n}) prox_{\mu_j h_j}\left( x_{n} - \mu_j A_j^T(A_j x_n - y_j) \right) + \beta_{n} prox_{\mu_k h_k}\left( z_{n} - \mu_k A_k^T(A_k z_{n} - y_k) \right), \\ x_{n+1} & = (1 - \alpha_{n})prox_{\mu_l h_l} \left( y_{n} -\mu_l A_l^T(A_l y_{n} - y_l) \right) + \alpha_{n} prox_{\mu_k h_k}\left( z_{n} -\mu_k A_k^T(A_k z_{n} - y_k) \right), \end{align} (5.11)

    with the default parameter (4.1) and \mu_j = 1/\|A_j^T A_j\|_2, \mu_k = 1/\|A_k^T A_k\|_2, \mu_l = 1/\|A_l^T A_l\|_2 ; we call it the proposed algorithm with A_j-A_k-A_l . The implemented algorithm (5.11) is applied to the signal recovery problem using every group of three degraded matrices from the N degraded matrices in Eq (5.9), with the default parameter (4.1). We also compare the proposed algorithm (5.11) with the three-step Noor iteration (1.1) and the SP-iteration (1.2). The following 9 cases of the three comparative algorithms (the proposed algorithm (5.11), the three-step Noor iteration (1.1) and the SP-iteration (1.2)) are demonstrated and discussed.

    Case I. Three comparative algorithms with A_1-A_1-A_1 .

    Case II. Three comparative algorithms with A_1-A_2-A_3 .

    Case III. Three comparative algorithms with A_1-A_3-A_2 .

    Case IV. Three comparative algorithms with A_2-A_2-A_2 .

    Case V. Three comparative algorithms with A_2-A_1-A_3 .

    Case VI. Three comparative algorithms with A_2-A_3-A_1 .

    Case VII. Three comparative algorithms with A_3-A_3-A_3 .

    Case VIII. Three comparative algorithms with A_3-A_1-A_2 .

    Case IX. Three comparative algorithms with A_3-A_2-A_1 .

    Next, some experiments are provided to illustrate the convergence and effectiveness of the proposed algorithm (5.11). The original signal x with n = 1024 , generated from the uniform distribution on the interval [-2, 2] with 70 nonzero elements, is used to create the observed signals y_i = A_i x + \nu_i, i = 1, 2, 3 , with m = 512 (see Figure 19).

    Figure 18.  The reconstructed images obtained using the proposed algorithms with cases I, IV, VI and VIII, respectively, at the 10,000^{ \rm{th}} iteration.
    Figure 19.  Original Signal ( x ) with m = 70 .

    The observed signals y_i = A_ix + \nu_i are shown in Figure 20.

    Figure 20.  Degraded Signals y_1 , y_2 , and y_3 , respectively.

    The process starts from an initial signal x_0 with n = 1024 chosen randomly (see Figure 21).

    Figure 21.  Initial Signals x_0 .

    The matrices A_i are generated from the normal distribution with mean zero and variance one, and \nu_i is white Gaussian noise, for all i = 1, 2, 3 (see Figure 22).

    Figure 22.  Noise Signals \nu_1 , \nu_2 , and \nu_3 respectively.
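The experimental setup above can be sketched as follows; the noise level noise_std is an assumption, since the variance of \nu_i is not specified in the text, and the function name is illustrative:

```python
import numpy as np

def make_problem(n=1024, m=512, k=70, n_obs=3, noise_std=0.01, seed=0):
    # sparse true signal: k nonzeros drawn uniformly from [-2, 2]
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    support = rng.choice(n, size=k, replace=False)
    x[support] = rng.uniform(-2.0, 2.0, size=k)
    # Gaussian sensing matrices A_i and noisy observations y_i = A_i x + nu_i
    As = [rng.standard_normal((m, n)) for _ in range(n_obs)]
    ys = [A @ x + noise_std * rng.standard_normal(m) for A in As]
    return x, As, ys
```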

    The Cauchy error and the relative signal error are measured in the max-norm as \|x_{n} - x_{n-1} \|_\infty and \|x_{n} -x\|_\infty/\|x\|_\infty , respectively. The performance of each comparative method at the n^{th} iteration is measured quantitatively by means of the signal-to-noise ratio (SNR), defined by

    \text{SNR}(x_{n} ) = 20 \log_{10} \left( \dfrac{ \|x_{n} \|_2}{ \|x_{n} - x\|_2} \right),
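A sketch of these three error measures (function names are illustrative):

```python
import numpy as np

def cauchy_error(x_n, x_prev):
    # max-norm difference between successive iterates
    return np.max(np.abs(x_n - x_prev))

def relative_error(x_n, x_true):
    # max-norm error relative to the true signal
    return np.max(np.abs(x_n - x_true)) / np.max(np.abs(x_true))

def snr(x_n, x_true):
    # SNR(x_n) = 20 log10( ||x_n||_2 / ||x_n - x_true||_2 )
    return 20.0 * np.log10(np.linalg.norm(x_n) / np.linalg.norm(x_n - x_true))
```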

    where x_{n} is the signal recovered at the n^{th} iteration. The Cauchy error plots and relative error plots for each case of the three comparative methods in Figures 23–31 guarantee that all presented methods converge to a common solution of the signal recovery problem. Based on the SNR plots for each case, all signals recovered using the proposed method, the three-step Noor method and the SP method show quality improvement as the iteration number increases. It can be concluded from the comparative behavior in Figures 23–31 that the proposed method is more efficient than the three-step Noor and SP methods.

    Figure 23.  The Cauchy norm, relative norm plots and SNR quality plots of three comparative algorithms with case I.
    Figure 24.  The Cauchy norm, relative norm plots and SNR quality plots of three comparative algorithms with case II.
    Figure 25.  The Cauchy norm, relative norm plots and SNR quality plots of three comparative algorithms with case III.
    Figure 26.  The Cauchy norm, relative norm plots and SNR quality plots of three comparative algorithms with case IV.
    Figure 27.  The Cauchy norm, relative norm plots and SNR quality plots of three comparative algorithms with case V.
    Figure 28.  The Cauchy norm, relative norm plots and SNR quality plots of three comparative algorithms with case VI.
    Figure 29.  The Cauchy norm, relative norm plots and SNR quality plots of three comparative algorithms with case VII.
    Figure 30.  The Cauchy norm, relative norm plots and SNR quality plots of three comparative algorithms with case VIII.
    Figure 31.  The Cauchy norm, relative norm plots and SNR quality plots of three comparative algorithms with case IX.

    Moreover, the SNR quality of the recovered signal is significantly improved when the proposed method with A_j-A_k-A_l and j \neq k \neq l is used, compared with the proposed methods with A_1-A_1-A_1 , A_2-A_2-A_2 and A_3-A_3-A_3 . The best recovery of the observed signal occurs when the proposed method with the operator orders A_2-A_1 -A_3 and A_3-A_1 -A_2 is used. Figure 32 shows the excellent quality of the signal restored using the proposed algorithm with A_3-A_1-A_2 .

    Figure 32.  Recovered signals obtained using the proposed algorithms with cases I, IV, VII and VIII, respectively, at the 10,000^{ \rm{th}} iteration.

    In this article, an efficient modified three-step iteration algorithm is proposed for approximating a common fixed point of three G-nonexpansive mappings in Banach spaces involving a graph. Assuming coordinate-convexity in a uniformly convex Banach space endowed with a directed graph, we proved a strong convergence theorem for the above algorithm and mappings using condition (C), which generalizes condition (A) [32], and a weak convergence theorem using Opial's condition [21]. We also proved a weak convergence theorem without Opial's condition. The conditions for convergence of the method were established by systematic proof. Numerical examples illustrating the performance of the suggested algorithm were provided. All numerical experiments for the common fixed point solution using the three-step Noor iteration, the SP-iteration and the proposed method with permutations of the three operators are shown in Figures 1–6. Our algorithm was found to be faster than the Noor and SP iterations. As applications, we applied the proposed algorithm to image restoration problems with permutations of the three blurring operators (see Figures 9–17), and to signal recovery in situations where the type of noise is unknown (see Figures 23–31). We found that the proposed algorithm is flexible and performs well for common types of blur and noise in image deblurring and signal recovery problems. Moreover, when the proposed method is used to find the common solution of image and signal recovery problems, it enhances the quality of the recovered image and signal.

    This research project was supported by the Thailand Science Research and Innovation Fund and the University of Phayao (Grant No. FF64-UoE011). The authors acknowledge the partial support provided by Thailand Science Research and Innovation under Project IRN62W0007. The opinions in this research report are those of the researchers; Thailand Science Research and Innovation need not agree with them. T. Thianwan was also supported by Contract No. PBTSC64019.

    The authors declare no conflict of interest.



    [1] S. Banach, Sur les opérations dans les ensembles abstraits et leur application aux équations intégrales, Fund. Math., 3 (1922), 133–181. doi: 10.4064/fm-3-1-133-181
    [2] J. Jachymski, The contraction principle for mappings on a metric space with a graph, Proc. Amer. Math. Soc., 136 (2008), 1359–1373. doi: 10.1090/S0002-9939-07-09110-1
    [3] R. P. Kelisky, T. J. Rivlin, Iterates of Bernstein polynomials, Pac. J. Math., 21 (1967), 511–520. doi: 10.2140/pjm.1967.21.511
    [4] M. Samreen, T. Kamran, Fixed point theorems for integral G-contractions, Fixed Point Theory Appl., 2013 (2013), 149. doi: 10.1186/1687-1812-2013-149
    [5] J. Tiammee, S. Suantai, Coincidence point theorems for graph-preserving multivalued mappings, Fixed Point Theory Appl., 2014 (2014), 70. doi: 10.1186/1687-1812-2014-70
    [6] A. Nicolae, D. O'Regan, A. Petrusel, Fixed point theorems for single-valued and multivalued generalized contractions in metric spaces endowed with a graph, Georgian Math. J., 18 (2011), 307–327. doi: 10.1515/gmj.2011.0019
    [7] F. Bojor, Fixed point of \phi-contraction in metric spaces endowed with a graph, Ann. Univ. Craiova Mat., 37 (2010), 85–92.
    [8] F. Bojor, Fixed point theorems for Reich type contractions on metric spaces with a graph, Nonlinear Anal., 75 (2012), 3895–3901. doi: 10.1016/j.na.2012.02.009
    [9] F. Bojor, Fixed points of Kannan mappings in metric spaces endowed with a graph, An. St. Univ. Ovidius Constanta, 20 (2012), 31–40.
    [10] J. H. Asl, B. Mohammadi, S. Rezapour, S. M. Vaezpour, Some fixed point results for generalized quasi-contractive multifunctions on graphs, Filomat, 27 (2013), 311–315. doi: 10.2298/FIL1302311A
    [11] S. M. A. Aleomraninejad, S. Rezapour, N. Shahzad, Some fixed point result on a metric space with a graph, Topol. Appl., 159 (2012), 659–663. doi: 10.1016/j.topol.2011.10.013
    [12] M. R. Alfuraidan, M. A. Khamsi, Fixed points of monotone nonexpansive mappings on a hyperbolic metric space with a graph, Fixed Point Theory Appl., 2015 (2015), 44. doi: 10.1186/s13663-015-0294-5
    [13] M. R. Alfuraidan, Fixed points of monotone nonexpansive mappings with a graph, Fixed Point Theory Appl., 2015 (2015), 49. doi: 10.1186/s13663-015-0299-0
    [14] J. Tiammee, A. Kaewkhao, S. Suantai, On Browder's convergence theorem and Halpern iteration process for G-nonexpansive mappings in Hilbert spaces endowed with graphs, Fixed Point Theory Appl., 2015 (2015), 187. doi: 10.1186/s13663-015-0436-9
    [15] O. Tripak, Common fixed points of G-nonexpansive mappings on Banach spaces with a graph, Fixed Point Theory Appl., 2016 (2016), 87. doi: 10.1186/s13663-016-0578-4
    [16] M. A. Noor, New approximation schemes for general variational inequalities, J. Math. Anal. Appl., 251 (2000), 217–229. doi: 10.1006/jmaa.2000.7042
    [17] R. Glowinski, P. L. Tallec, Augmented Lagrangian and operator-splitting methods in nonlinear mechanics, Society for Industrial and Applied Mathematics, 1989.
    [18] S. Haubruge, V. H. Nguyen, J. J. Strodiot, Convergence analysis and applications of the Glowinski Le Tallec splitting method for finding a zero of the sum of two maximal monotone operators, J. Optimiz. Theory App., 97 (1998), 645–673. doi: 10.1023/a:1022646327085
    [19] P. Sridarat, R. Suparaturatorn, S. Suantai, Y. J. Cho, Convergence analysis of SP-iteration for G-nonexpansive mappings with directed graphs, Bull. Malays. Math. Sci. Soc., 42 (2019), 2361–2380. doi: 10.1007/s40840-018-0606-0
    [20] R. Johnsonbaugh, Discrete mathematics, Prentice Hall, 1997.
    [21] Z. Opial, Weak convergence of the sequence of successive approximations for nonexpansive mappings, Bull. Amer. Math. Soc., 73 (1967), 591–597. doi: 10.1090/S0002-9904-1967-11761-0
    [22] S. Shahzad, R. Al-Dubiban, Approximating common fixed points of nonexpansive mappings in Banach spaces, Georgian Math. J., 13 (2006), 529–537. doi: 10.1515/GMJ.2006.529
    [23] N. V. Dung, N. T. Hieu, Convergence of a new three-step iteration process to common fixed points of three G-nonexpansive mappings in Banach spaces with directed graphs, RACSAM, 114 (2020), 140. doi: 10.1007/s13398-020-00872-w
    [24] J. Schu, Weak and strong convergence to fixed points of asymptotically nonexpansive mappings, B. Aust. Math. Soc., 43 (1991), 153–159. doi: 10.1017/S0004972700028884
    [25] S. Suantai, Weak and strong convergence criteria of Noor iterations for asymptotically nonexpansive mappings, J. Math. Anal. Appl., 331 (2005), 506–517. doi: 10.1016/j.jmaa.2005.03.002
    [26] D. Yambangwai, S. Aunruean, T. Thianwan, A new modified three-step iteration method for G-nonexpansive mappings in Banach spaces with a graph, Numer. Algor., 20 (2019), 1–29. doi: 10.1007/s11075-019-00768-w
    [27] M. G. Sangago, Convergence of iterative schemes for nonexpansive mappings, Asian-Eur. J. Math., 4 (2011), 671–682. doi: 10.1142/S1793557111000551
    [28] R. L. Burden, J. D. Faires, Numerical analysis, Brooks/Cole, Cengage Learning, 2010.
    [29] W. Phuengrattana, S. Suantai, On the rate of convergence of Mann, Ishikawa, Noor and SP-iterations for continuous functions on an arbitrary interval, J. Comput. Appl. Math., 235 (2011), 3006–3014. doi: 10.1016/j.cam.2010.12.022
    [30] H. H. Bauschke, P. L. Combettes, Convex analysis and monotone operator theory in Hilbert spaces, New York: Springer, 2011.
    [31] J. J. Moreau, Proximité et dualité dans un espace hilbertien, B. Soc. Math. Fr., 93 (1965), 273–299.
    [32] H. F. Senter, W. G. Dotson, Approximating fixed points of nonexpansive mappings, Proc. Amer. Math. Soc., 44 (1974), 375–380. doi: 10.1090/S0002-9939-1974-0346608-8
  • © 2022 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)