Research article

An extrapolated fixed-point optimization method for strongly convex smooth optimizations

  • Received: 21 November 2023 Revised: 21 December 2023 Accepted: 05 January 2024 Published: 16 January 2024
  • MSC : 47H09, 47J05, 65K10, 90C25

  • In this work, we focused on minimizing a strongly convex smooth function over the common fixed-point constraints. We proposed an extrapolated fixed-point optimization method, which is a modified version of the extrapolated sequential constraint method with conjugate gradient direction. We proved the convergence of the generated sequence to the unique solution to the considered problem without boundedness assumption. We also investigated some numerical experiments to underline the effectiveness and performance of the proposed method.

    Citation: Duangdaw Rakjarungkiat, Nimit Nimana. An extrapolated fixed-point optimization method for strongly convex smooth optimizations[J]. AIMS Mathematics, 2024, 9(2): 4259-4280. doi: 10.3934/math.2024210




    Let $H$ be a real Hilbert space equipped with an inner product $\langle\cdot,\cdot\rangle$ and its induced norm $\|\cdot\|$. In this work, we deal with the minimization of a strongly convex smooth function over common fixed-point constraints, which is of the following form:

    $$\text{minimize } f(x)\quad\text{subject to } x\in\bigcap_{i=1}^{m}\operatorname{Fix}T_i, \tag{1.1}$$

    where $\operatorname{Fix}T_i:=\{x\in H: T_i(x)=x\}$ and the objective function and the constrained operators satisfy the following assumptions:

    (A1) The function $f:H\to\mathbb{R}$ is $\eta$-strongly convex and $L$-smooth.

    (A2) The operators $T_i:H\to H$, $i=1,\dots,m$, are cutters with $\bigcap_{i=1}^{m}\operatorname{Fix}T_i\neq\emptyset$.

    Thanks to the closedness and convexity of the common fixed-point set $\bigcap_{i=1}^{m}\operatorname{Fix}T_i$ and the strong convexity of $f$, the solution set of the problem (1.1) is a singleton. In this situation, we denote the solution point of the problem (1.1) by $x^*$ throughout this work.

    The minimization problem and its generalizations (such as the variational inequality problem) over common fixed-point constraints have been investigated by many authors; we briefly note some related works as follows. Bauschke [1] considered the problem (1.1) in the case when the objective function is given by $f(x):=\frac{1}{2}\|x-a\|^2$, where $a\in H$ is a given point and the operators $T_i$, $i=1,2,\dots,m$, are nonexpansive. He proposed the following method: Given $x_1\in H$ and a nonnegative sequence $\{\beta_k\}_{k=1}^{\infty}$, for all $k\in\mathbb{N}$, compute

    $$x_{k+1}=T_{[k+1]}(x_k)-\beta_{k+1}\left(T_{[k+1]}(x_k)-a\right), \tag{1.2}$$

    where $[\cdot]$ is the modulo-$m$ function taking values in $\{1,\dots,m\}$. Bauschke proved that the sequence $\{x_k\}_{k=1}^{\infty}$ generated by the proposed method (1.2) converges strongly to the unique solution of the considered problem, provided that the constrained set satisfies

    $$\bigcap_{i=1}^{m}\operatorname{Fix}T_i=\operatorname{Fix}(T_m\cdots T_1)=\operatorname{Fix}(T_1T_m\cdots T_3T_2)=\cdots=\operatorname{Fix}(T_{m-1}T_{m-2}\cdots T_1T_m), \tag{1.3}$$

    and the sequence $\{\beta_k\}_{k=1}^{\infty}\subset[0,1)$ satisfies $\lim_{k\to\infty}\beta_k=0$, $\sum_{k=1}^{\infty}\beta_k=\infty$, and $\sum_{k=1}^{\infty}|\beta_{k+m}-\beta_k|<\infty$.
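    Bauschke's scheme (1.2) can be sketched in a few lines. The following is a minimal illustration, not the paper's code: the operators, the anchor point $a$, and the choice $\beta_k=1/(k+1)$ (which satisfies all three conditions above for $m=2$) are our own toy assumptions, with the nonexpansive operators taken as projections onto two half-spaces.

```python
import numpy as np

def bauschke_cyclic(ops, a, x1, n_iter=2000):
    """Method (1.2): cyclic application of the operators T_i with a
    steering step beta_k toward the anchor point a."""
    x = x1.astype(float)
    m = len(ops)
    for k in range(1, n_iter + 1):
        Tx = ops[k % m](x)           # T_[k+1]: cyclic (modulo-m) control
        beta = 1.0 / (k + 1)         # beta_k -> 0, sum beta_k = infinity
        x = Tx - beta * (Tx - a)
    return x

# Toy instance (assumed data): projections onto two half-spaces in R^2.
proj1 = lambda x: x - max(x[0] - 1.0, 0.0) * np.array([1.0, 0.0])  # {x : x1 <= 1}
proj2 = lambda x: x - max(x[1] - 1.0, 0.0) * np.array([0.0, 1.0])  # {x : x2 <= 1}
a = np.array([3.0, 0.5])
sol = bauschke_cyclic([proj1, proj2], a, np.array([5.0, 5.0]))
# sol approximates the point of C = {x : x1 <= 1, x2 <= 1} nearest to a.
```

Since $f(x)=\frac12\|x-a\|^2$, the limit is the metric projection of $a$ onto the intersection, here $(1,0.5)$.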

    After that, Yamada [2] considered a more general setting of the problem (1.1), namely, the variational inequality problem governed by a strongly monotone and Lipschitz continuous operator $F:H\to H$. Thanks to the optimality condition, it is well known that such a problem reduces to the problem (1.1) by setting $F:=\nabla f$. He proposed the so-called hybrid steepest descent method, which is of the following form: Given $x_1\in H$ and a nonnegative sequence $\{\beta_k\}_{k=1}^{\infty}$, for all $k\in\mathbb{N}$, compute

    $$x_{k+1}=T_{[k+1]}(x_k)-\beta_{k+1}F\left(T_{[k+1]}(x_k)\right). \tag{1.4}$$

    Under the same assumption (1.3) on the constrained set and similar conditions on the control sequence $\{\beta_k\}_{k=1}^{\infty}$, the strong convergence of the method (1.4) to the unique solution of the considered problem was obtained.

    Xu and Kim [3] investigated the problem in the same setting as Yamada [2] and obtained the convergence of the method (1.4) under a variant condition on the control sequence $\{\beta_k\}_{k=1}^{\infty}\subset(0,1]$, which satisfies $\lim_{n\to\infty}\beta_n/\beta_{n+1}=1$ in place of the condition $\sum_{k=1}^{\infty}|\beta_{k+m}-\beta_k|<\infty$ given in Yamada [2]. The other conditions on the control sequence $\{\beta_k\}_{k=1}^{\infty}$ and the assumption (1.3) on the constrained set remain the same as above. After the works of Yamada [2] and Xu and Kim [3], many authors considered variants and generalizations; see, for instance, [4,5,6,7,8,9].

    Prangprakhon et al. [10] also considered the variational inequality problem over the intersection of fixed-point sets of firmly nonexpansive operators. They presented an iterative algorithm that can be viewed as a generalization of the hybrid steepest descent method in which appropriate additional information is allowed when computing operator values; this added information can be interpreted as possible numerical errors in the computation of the operators' values. They subsequently proved the strong convergence of the proposed method without the assumption (1.3) on the constrained set.

    Prangprakhon and Nimana [11] subsequently proposed the so-called extrapolated sequential constraint method with conjugate gradient direction (ESCoM-CGD) for solving the variational inequality problem over the intersection of the fixed-point sets of cutters. ESCoM-CGD is motivated by the ideas of the hybrid conjugate gradient method [12], which is an accelerated version of the hybrid steepest descent method, together with the extrapolated cyclic cutter methods [13,14]. The method has two notable features: the extrapolation stepsize function $\sigma(y_k)$ (see Step 2 of Algorithm 1) and the search direction $d_{k+1}$, which combines the current direction $-\nabla f(x_{k+1})$ with the previous search direction $d_k$. By assuming the boundedness of the generated sequence $\{x_k\}_{k=1}^{\infty}$ together with assumptions on the control sequences, they also proved the strong convergence of the proposed method.

    Very recently, Petrot et al. [15] proposed the so-called dynamic distributed three-term conjugate gradient method for solving the strongly monotone variational inequality problem over the intersection of fixed-point sets of firmly nonexpansive operators. Unlike [11], this method has a simultaneous structure, allowing the independent computation of each operator along with a dynamic weight that is updated at every iteration. Moreover, the strong convergence of the method was obtained without assuming that the sequence $\{x_k\}_{k=1}^{\infty}$ is bounded.

    In this work, we present a modified version of ESCoM-CGD using the search direction considered in [15]. We prove the convergence of the proposed method without assuming that the generated sequence is bounded. It is worth underlining that our proof is simpler and shorter than that of ESCoM-CGD given in [11, Theorem 3]. We also show that the proposed algorithm and its convergence result are applicable to binary classification via support vector machine learning.

    The remainder of this work is organized as follows. In the rest of this section, we collect some mathematical preliminaries consisting of useful definitions and facts needed in the proofs. In Section 2, we present the modified version of ESCoM-CGD for finding the solution to the problem (1.1); the main convergence theorem and some important remarks are also given there. In Section 3, we demonstrate the usefulness of the proposed method and its convergence result in solving the minimum-norm problem for a system of homogeneous linear inequalities, as well as binary classification via support vector machine learning. In Section 4, we provide some technical convergence properties of the generated sequences and, subsequently, prove the main convergence theorem. Lastly, we give a concluding remark.

    In this subsection, we will recall some definitions, properties, and key tools in proving our convergence results. The readers can consult the books [16,17] for more details.

    We denote by $\operatorname{Id}$ the identity operator on a Hilbert space $H$. The strong and weak convergence of a sequence $\{x_k\}_{k=1}^{\infty}$ to a point $x\in H$ are denoted by $x_k\to x$ and $x_k\rightharpoonup x$, respectively.

    We now recall the convexity and smoothness of a function. Let $f:H\to\mathbb{R}$ be a real-valued function and let $\alpha>0$ be given. We say that the function $f$ is convex if, for all $x,y\in H$ and for all $\lambda\in[0,1]$, it holds that

    $$f((1-\lambda)x+\lambda y)\le(1-\lambda)f(x)+\lambda f(y).$$

    In particular, we say that the function $f$ is $\alpha$-strongly convex if, for all $x,y\in H$ and for all $\lambda\in[0,1]$, it holds that

    $$f((1-\lambda)x+\lambda y)\le(1-\lambda)f(x)+\lambda f(y)-\tfrac{1}{2}\alpha\lambda(1-\lambda)\|x-y\|^2.$$

    The constant $\alpha$ is called the strong convexity parameter.
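    As a concrete instance, which is used again in Section 3, the halved squared norm is $1$-strongly convex; the verification below is a one-line computation from the definition, using the standard expansion of the squared norm in a Hilbert space:

```latex
% For f(x) = \tfrac{1}{2}\|x\|^2, expanding inner products gives the identity
%   \|(1-\lambda)x + \lambda y\|^2
%     = (1-\lambda)\|x\|^2 + \lambda\|y\|^2 - \lambda(1-\lambda)\|x-y\|^2,
% so, for all x, y \in H and \lambda \in [0,1],
f\big((1-\lambda)x + \lambda y\big)
  = (1-\lambda)f(x) + \lambda f(y) - \tfrac{1}{2}\lambda(1-\lambda)\|x-y\|^2,
% which is the strong convexity inequality with \alpha = 1 (in fact with equality).
```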

    Let $x\in H$ be given. We say that the function $f$ is Fréchet differentiable or, in short, differentiable at $x$ if there exists a vector $g\in H$ such that

    $$\lim_{\|h\|\to0}\frac{f(x+h)-f(x)-\langle g,h\rangle}{\|h\|}=0.$$

    Moreover, we call this unique vector $g$ the gradient of $f$ at $x$ and denote it by $\nabla f(x)$.

    It is well known that the strong convexity of $f$ can be characterized in terms of differentiability as follows.

    Fact 1.1. [16, Theorem 5.24] Let $f:H\to\mathbb{R}$ be a real-valued differentiable function, then $f$ is $\alpha$-strongly convex if, and only if, for all $x,y\in H$, we have

    $$\langle\nabla f(x)-\nabla f(y),x-y\rangle\ge\alpha\|x-y\|^2.$$

    The following fact is the cornerstone of our proving lines. It is the necessary and sufficient condition for the optimality of a convex function over a nonempty closed and convex set.

    Fact 1.2. [16, Corollary 3.68] Let $f:H\to\mathbb{R}$ be a continuously differentiable convex function and $C\subset H$ be a closed and convex set, then $f$ attains its minimum over $C$ at a point $x\in C$ if, and only if, for all $y\in C$, it holds that

    $$\langle\nabla f(x),y-x\rangle\ge0.$$

    Let $L\ge0$ be given. We say that the function $f$ is $L$-smooth if it is differentiable and, for all $x,y\in H$, it satisfies

    $$\|\nabla f(x)-\nabla f(y)\|\le L\|x-y\|.$$

    The constant $L$ is called the smoothness parameter. It is clear that an $L$-smooth function is continuously differentiable.

    The following fact is a very useful tool in obtaining the convergence result. The proof can be found in [2, Lemma 3.1(b)].

    Fact 1.3. Let $f:H\to\mathbb{R}$ be $\eta$-strongly convex and $L$-smooth. For any $\mu\in(0,2\eta/L^2)$ and $\beta\in(0,1]$, if we define the operator $F_\beta:H\to H$ by $F_\beta(x):=x-\mu\beta\nabla f(x)$ for all $x\in H$, then

    $$\|F_\beta(x)-F_\beta(y)\|\le(1-\beta\tau)\|x-y\|,$$

    for all $x,y\in H$, where $\tau:=1-\sqrt{1+\mu^2L^2-2\mu\eta}\in(0,1]$.
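    The contraction estimate in Fact 1.3 is easy to check numerically. The snippet below is a sanity check on an assumed quadratic objective $f(x)=\frac12 x^{T}Qx$ with $Q=\operatorname{diag}(1,2,4)$, so that $\eta=1$, $L=4$, and $\mu$ must lie in $(0,2\eta/L^2)=(0,0.125)$; the data and names are ours, not the paper's.

```python
import numpy as np

# f(x) = x^T Q x / 2 with Q = diag(1, 2, 4): eta = 1 (smallest eigenvalue),
# L = 4 (largest eigenvalue); grad f(x) = Q x.
rng = np.random.default_rng(0)
Q = np.diag([1.0, 2.0, 4.0])
eta, L = 1.0, 4.0
mu = 0.9 * 2 * eta / L**2                       # mu in (0, 2*eta/L^2)
tau = 1.0 - np.sqrt(1.0 + mu**2 * L**2 - 2.0 * mu * eta)

def F(x, beta):
    # F_beta(x) := x - mu * beta * grad f(x)
    return x - mu * beta * (Q @ x)

max_gap = -np.inf
for _ in range(200):
    x, y = rng.standard_normal(3), rng.standard_normal(3)
    beta = rng.uniform(0.0, 1.0)
    lhs = np.linalg.norm(F(x, beta) - F(y, beta))
    rhs = (1.0 - beta * tau) * np.linalg.norm(x - y)
    max_gap = max(max_gap, lhs - rhs)           # <= 0 iff the bound holds here
```

For this quadratic, $\|F_\beta(x)-F_\beta(y)\|=\|(I-\mu\beta Q)(x-y)\|\le(1-\mu\beta\eta)\|x-y\|$, which is indeed below $(1-\beta\tau)\|x-y\|$, so `max_gap` stays nonpositive.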

    Next, we recall the definition of cutters and some useful properties.

    Let $T:H\to H$ be an operator with $\operatorname{Fix}T:=\{x\in H: T(x)=x\}$. We say that the operator $T$ is a cutter if, for all $x\in H$ and $z\in\operatorname{Fix}T$, we have

    $$\langle x-T(x),z-T(x)\rangle\le0.$$

    The following are important properties of cutters.

    Fact 1.4. [17, Remark 2.1.31 and Lemma 2.1.36] Let $T:H\to H$ be a cutter, then the following statements hold:

    (i) $\operatorname{Fix}T$ is closed and convex.

    (ii) For all $x\in H$ and for all $z\in\operatorname{Fix}T$, we have $\langle T(x)-x,z-x\rangle\ge\|T(x)-x\|^2$.

    Next, we recall relaxations of an operator. Let $T:H\to H$ be an operator and $\lambda\in[0,2]$ be given. The relaxation of the operator $T$ is defined by $T_\lambda(x):=(1-\lambda)x+\lambda T(x)$ for all $x\in H$. In this case, we call $\lambda$ the relaxation parameter. Moreover, let $\sigma:H\to(0,\infty)$ be a function. The generalized relaxation of the operator $T$ is defined by

    $$T_{\sigma,\lambda}(x):=x+\lambda\sigma(x)(T(x)-x),$$

    for all $x\in H$. In this case, we call the function $\sigma$ a stepsize function. If $\sigma(x)\ge1$ for all $x\in H$, then we call the operator $T_{\sigma,\lambda}$ an extrapolation of $T_\lambda$. We denote $T_\sigma:=T_{\sigma,1}$. It is worth noting that, for all $x\in H$,

    $$T_{\sigma,\lambda}(x)-x=\lambda\sigma(x)(T(x)-x)=\lambda(T_\sigma(x)-x),$$

    and, for all $\lambda\neq0$, we also have

    $$\operatorname{Fix}T_{\sigma,\lambda}=\operatorname{Fix}T_\sigma=\operatorname{Fix}T.$$

    For simplicity, we denote the compositions of the operators by

    $$T:=T_mT_{m-1}\cdots T_1,\qquad S_0:=\operatorname{Id},$$

    and

    $$S_i:=T_iT_{i-1}\cdots T_1,\qquad i=1,2,\dots,m.$$

    We next recall properties that will be useful in the convergence analysis.

    Fact 1.5. [17, Section 4.10] Let $T_i:H\to H$, $i=1,2,\dots,m$, be cutters with $\bigcap_{i=1}^{m}\operatorname{Fix}T_i\neq\emptyset$. Let $\sigma:H\to(0,\infty)$ be defined by

    $$\sigma(x):=\begin{cases}\dfrac{\sum_{i=1}^{m}\langle T(x)-S_{i-1}(x),S_i(x)-S_{i-1}(x)\rangle}{\|T(x)-x\|^2}, & \text{if } x\notin\bigcap_{i=1}^{m}\operatorname{Fix}T_i,\\[2mm] 1, & \text{otherwise},\end{cases} \tag{1.5}$$

    then the following properties hold:

    (i) For all $x\notin\bigcap_{i=1}^{m}\operatorname{Fix}T_i$, we have

    $$\sigma(x)\ge\frac{1}{2}\,\frac{\sum_{i=1}^{m}\|S_i(x)-S_{i-1}(x)\|^2}{\|T(x)-x\|^2}\ge\frac{1}{2m}.$$

    (ii) The operator $T_\sigma$ is a cutter.

    Next, we will recall the notion of demi-closedness principle as follows.

    Let $T:H\to H$ be an operator having a fixed point. The operator $T$ is said to satisfy the demi-closedness principle if, for every sequence $\{x_k\}\subset H$ such that $x_k\rightharpoonup x\in H$ and $T(x_k)-x_k\to0$, we have $x\in\operatorname{Fix}T$.

    We close this section by recalling the well-known metric projection. Let $C$ be a nonempty subset of $H$ and $x\in H$. If there is a point $y\in C$ such that $\|x-y\|\le\|x-z\|$ for any $z\in C$, then $y$ is said to be a metric projection of $x$ onto $C$ and is denoted by $P_C(x)$. If $P_C(x)$ exists and is uniquely determined for all $x\in H$, then the operator $P_C:H\to H$ is said to be the metric projection onto $C$. Actually, we need some additional properties of the set $C$ to ensure the existence and uniqueness of the metric projection $P_C(x)$ for a point $x\in H$, as in the following fact.

    Fact 1.6. [17, Theorem 1.2.3.] Let C be a nonempty closed convex subset of H, then for any xH, there exists a metric projection PC(x) and it is uniquely determined.

    Moreover, we finally note that the metric projection onto a nonempty closed convex set $C$ is a cutter with $\operatorname{Fix}P_C=C$; see [17, Theorem 2.2.21] for further details.

    In this section, we present a modified version of the extrapolated sequential constraint method with conjugate gradient direction (ESCoM-CGD) for solving the problem (1.1). We call the proposed method the modified extrapolated sequential constraint method with conjugate gradient direction (in short, MESCoM-CGD), stated as the following algorithm.

    Algorithm 1: MESCoM-CGD
    Initialization: Choose nonnegative sequences $\{\beta_k\}_{k=1}^{\infty}$, $\{\varphi_k\}_{k=1}^{\infty}$ and $\{\lambda_k\}_{k=1}^{\infty}$, and a parameter $\mu\in(0,2\eta/L^2)$. Choose an initial point $x_1\in H$ arbitrarily and set $d_1=-\nabla f(x_1)$.
    Iterative Step ($k\in\mathbb{N}$): For a current iterate $x_k\in H$ and direction $d_k\in H$, repeat the following steps:
    Step 1. Compute the iterate $y_k\in H$ as
            $$y_k:=x_k+\mu\beta_k\frac{d_k}{\max\{1,\|d_k\|\}}.$$
    Step 2. Compute the stepsize $\sigma(y_k)$ as
            $$\sigma(y_k):=\begin{cases}\dfrac{\sum_{i=1}^{m}\langle T(y_k)-S_{i-1}(y_k),S_i(y_k)-S_{i-1}(y_k)\rangle}{\|T(y_k)-y_k\|^2}, & \text{if } y_k\notin\bigcap_{i=1}^{m}\operatorname{Fix}T_i,\\[2mm] 1, & \text{otherwise}.\end{cases}$$
    Step 3. Compute the iterate $x_{k+1}\in H$ and the search direction $d_{k+1}\in H$ as
            $$x_{k+1}:=T_{\sigma,\lambda_k}(y_k)$$
    and
            $$d_{k+1}:=-\nabla f(x_{k+1})+\varphi_{k+1}\frac{d_k}{\max\{1,\|d_k\|\}}.$$
    Step 4. Update $k:=k+1$ and return to Step 1.
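    The four steps above can be transcribed directly. The following is a minimal sketch, not the authors' implementation: the cutters are passed as callables, the schedules $\beta_k=\varphi_k=1/(k+1)$ and $\lambda_k=1$ are our assumed choices (consistent with (H1)–(H3)), and the toy instance at the end minimizes $f(x)=\frac12\|x\|^2$ over two half-spaces whose minimum-norm point is $(1,1)$.

```python
import numpy as np

def mescom_cgd(grad_f, cutters, x1, mu, n_iter=3000,
               beta=lambda k: 1.0 / (k + 1),
               phi=lambda k: 1.0 / (k + 1),
               lam=lambda k: 1.0):
    """Sketch of Algorithm 1 (MESCoM-CGD). `cutters` holds T_1, ..., T_m as
    callables; grad_f is the gradient of the strongly convex smooth objective."""
    x = np.asarray(x1, dtype=float)
    d = -grad_f(x)                                  # d_1 = -grad f(x_1)
    for k in range(1, n_iter + 1):
        # Step 1: predictor with the normalized search direction.
        y = x + mu * beta(k) * d / max(1.0, np.linalg.norm(d))
        # Step 2: extrapolation stepsize sigma(y_k) from (1.5).
        S = [y]
        for T in cutters:
            S.append(T(S[-1]))                      # S_i(y_k) = T_i ... T_1(y_k)
        Ty = S[-1]                                  # T(y_k) = T_m ... T_1(y_k)
        denom = np.linalg.norm(Ty - y) ** 2
        if denom > 1e-15:
            sigma = sum(np.dot(Ty - S[i - 1], S[i] - S[i - 1])
                        for i in range(1, len(S))) / denom
        else:
            sigma = 1.0          # y_k is (numerically) a common fixed point
        # Step 3: generalized relaxation step and new direction d_{k+1}.
        x = y + lam(k) * sigma * (Ty - y)
        d = -grad_f(x) + phi(k + 1) * d / max(1.0, np.linalg.norm(d))
    return x

# Toy instance (assumed data): minimum-norm point of
# {x : x[0] >= 1} ∩ {x : x[0] + x[1] >= 2}, which is (1, 1).
P1 = lambda x: x + max(1.0 - x[0], 0.0) * np.array([1.0, 0.0])
P2 = lambda x: x + max(2.0 - x[0] - x[1], 0.0) * np.array([0.5, 0.5])
sol = mescom_cgd(grad_f=lambda x: x, cutters=[P1, P2],
                 x1=np.array([3.0, 0.0]), mu=1.0)
```

Here $f(x)=\frac12\|x\|^2$ is $1$-strongly convex and $1$-smooth, so any $\mu\in(0,2)$ is admissible.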


    Some comments and particular situations of MESCoM-CGD are the following:

    Remark 2.1. (ⅰ) As mentioned above, the convergence of ESCoM-CGD (see [11, Theorem 3]) relies on the assumption that the generated sequence $\{x_k\}_{k=1}^{\infty}$ is bounded. However, it is also pointed out in [11, Remark 3] that the boundedness of the search directions $\{d_k\}_{k=1}^{\infty}\subset H$ and of the sequence $\{\varphi_k\}_{k=1}^{\infty}$, together with the definition of the search direction, yields the boundedness of the sequence $\{x_k\}_{k=1}^{\infty}$. In this situation, if we construct the bounded search direction $\{d_k/\max\{1,\|d_k\|\}\}_{k=1}^{\infty}\subset H$ in place of the previous version, then the boundedness of the generated sequence $\{x_k\}_{k=1}^{\infty}$ becomes provable as well (see Lemma 3.3 below). One can notice that the key idea of this bounding strategy is nothing else but restricting the search direction to the unit ball centered at the origin.

    (ⅱ) If the search direction $\{d_k\}_{k=1}^{\infty}$ is guaranteed to stay within the unit ball centered at the origin, then $\max\{1,\|d_k\|\}=1$, so that MESCoM-CGD reduces to ESCoM-CGD in the case when the operator $T_m=\operatorname{Id}$.

    (ⅲ) If $m=1$, the relaxation parameter $\lambda_k=1$, the stepsize $\sigma(y_k)=1$, and $\max\{1,\|d_k\|\}=1$ for all $k\in\mathbb{N}$, then MESCoM-CGD is nothing else but the hybrid conjugate gradient method proposed in [12].

    Let us now state the assumptions needed for the convergence result.

    Assumption 2.2. Assume that

    (H1) The sequence of stepsizes $\{\beta_k\}_{k=1}^{\infty}\subset(0,1]$ satisfies $\sum_{k=1}^{\infty}\beta_k=\infty$ and $\sum_{k=1}^{\infty}\beta_k^2<\infty$.

    (H2) The sequence of parameters $\{\varphi_k\}_{k=1}^{\infty}\subset[0,\infty)$ satisfies $\lim_{k\to\infty}\varphi_k=0$.

    (H3) There is a constant $\varepsilon\in(0,1)$ such that the relaxation parameters satisfy $\{\lambda_k\}_{k=1}^{\infty}\subset[\varepsilon,2-\varepsilon]$.

    (H4) The operators Ti, i=1,,m, satisfy the demi-closedness principle.

    Let us briefly discuss some particular situations in which hypotheses (H1)–(H4) hold as follows:

    Remark 2.3. (ⅰ) An example of stepsizes $\{\beta_k\}_{k=1}^{\infty}$ satisfying (H1) is, for instance, $\beta_k=\beta_0/(k+1)^b$ with $\beta_0\in(0,1]$ and $b\in(1/2,1]$ for all $k\in\mathbb{N}$.

    (ⅱ) An example of parameters $\{\varphi_k\}_{k=1}^{\infty}$ satisfying (H2) is, for instance, $\varphi_k=\varphi_0/(k+1)^c$ with $\varphi_0\ge0$ and $c>0$ for all $k\in\mathbb{N}$.

    (ⅲ) The role of the constant $\varepsilon\in(0,1)$ is to ensure that the term $\lambda_k(2-\lambda_k)$ stays away from zero for all $k\in\mathbb{N}$, so that this term can be canceled in the estimates. One can take, for instance, $\lambda_k=\lambda_0$ for some $\lambda_0\in(0,2)$ as a trivial example of relaxation parameters $\{\lambda_k\}_{k=1}^{\infty}$ satisfying hypothesis (H3).

    (ⅳ) The demi-closedness principle for the operators $T_i$, $i=1,\dots,m$, in (H4) is a key property and is typically assumed when dealing with convergence results for common fixed-point type problems. Actually, the demi-closedness principle is satisfied by any nonexpansive operator, i.e., $\|T_i(x)-T_i(y)\|\le\|x-y\|$ for all $x,y\in H$; see [17, Lemma 3.2.5] for the proof with some historical notes. This yields that the metric projection onto a nonempty closed and convex set also satisfies the demi-closedness principle; see [17, Theorem 2.2.21]. Moreover, it is worth mentioning that the demi-closedness principle is also satisfied by the subgradient projection operator $P_f$ of a convex continuous function $f:H\to\mathbb{R}$ that is Lipschitz continuous relative to every bounded subset of $H$; see [17, Theorem 4.2.7] for more details.

    The main result of this work is the following theorem:

    Theorem 2.4. Let the sequence $\{x_k\}_{k=1}^{\infty}$ be generated by MESCoM-CGD and assume that assumptions (A1) and (A2) and hypotheses (H1)–(H4) hold, then the sequence $\{x_k\}_{k=1}^{\infty}$ converges strongly to the unique solution $x^*$ of the problem (1.1).

    In this section, we will provide some technical convergence properties and close this section by proving the convergence of the proposed method to the unique solution of the problem (1.1).

    For simplicity of notation, we denote

    $$\tau:=1-\sqrt{1+\mu^2L^2-2\mu\eta}\in(0,1]$$

    and, for every $k\in\mathbb{N}$ and $x\in\bigcap_{i=1}^{m}\operatorname{Fix}T_i$, we denote

    $$\varepsilon_k:=\mu^2\beta_k^2+\frac{2\mu\beta_k}{\max\{1,\|d_k\|\}}\langle x_k-x,d_k\rangle$$

    and

    $$\alpha_k:=\frac{\beta_k\tau}{\max\{1,\|d_k\|\}}.$$

    Moreover, for every $k\ge2$, we denote

    $$\delta_k:=\frac{2\mu}{\tau}\left(\frac{\varphi_k}{\max\{1,\|d_{k-1}\|\}}\langle d_{k-1},y_k-x\rangle-\langle\nabla f(x),y_k-x\rangle\right).$$

    In order to prove the convergence result in Theorem 2.4, we collect several facts that will be useful in what follows. We first state a lemma relating the distances from the iterates $x_k$ to a common fixed point. The analysis is similar to that given in [11, Lemma 4]; however, we state it here again for the sake of completeness.

    Lemma 3.1. For all $k\in\mathbb{N}$ and $x\in\bigcap_{i=1}^{m}\operatorname{Fix}T_i$, we have

    $$\|x_{k+1}-x\|^2\le\|x_k-x\|^2-\frac{\lambda_k(2-\lambda_k)}{4m}\sum_{i=1}^{m}\|S_i(y_k)-S_{i-1}(y_k)\|^2+\varepsilon_k.$$

    Proof. Let $k\in\mathbb{N}$ and $x\in\bigcap_{i=1}^{m}\operatorname{Fix}T_i$ be given. By using the properties of the extrapolation and Facts 1.4 (ⅱ) and 1.5, we note that

    $$\begin{aligned}\|x_{k+1}-x\|^2&=\|T_{\sigma,\lambda_k}(y_k)-x\|^2\\&=\|y_k+\lambda_k\sigma(y_k)(T(y_k)-y_k)-x\|^2\\&=\|y_k-x\|^2+\lambda_k^2\|\sigma(y_k)(T(y_k)-y_k)\|^2+2\lambda_k\langle y_k-x,\sigma(y_k)(T(y_k)-y_k)\rangle\\&=\|y_k-x\|^2+\lambda_k^2\|T_\sigma(y_k)-y_k\|^2+2\lambda_k\langle y_k-x,T_\sigma(y_k)-y_k\rangle\\&\le\|y_k-x\|^2+\lambda_k^2\|T_\sigma(y_k)-y_k\|^2-2\lambda_k\|T_\sigma(y_k)-y_k\|^2\\&=\|y_k-x\|^2-\lambda_k(2-\lambda_k)\|T_\sigma(y_k)-y_k\|^2\\&=\|y_k-x\|^2-\lambda_k(2-\lambda_k)\sigma^2(y_k)\|T(y_k)-y_k\|^2\\&\le\|y_k-x\|^2-\frac{\lambda_k(2-\lambda_k)}{4}\,\frac{\left(\sum_{i=1}^{m}\|S_i(y_k)-S_{i-1}(y_k)\|^2\right)^2}{\|T(y_k)-y_k\|^2}\\&\le\|y_k-x\|^2-\frac{\lambda_k(2-\lambda_k)}{4m}\sum_{i=1}^{m}\|S_i(y_k)-S_{i-1}(y_k)\|^2.\end{aligned} \tag{3.1}$$

    We note from the definition of $\{y_k\}_{k=1}^{\infty}$ that

    $$\begin{aligned}\|y_k-x\|^2&=\left\|x_k+\mu\beta_k\frac{d_k}{\max\{1,\|d_k\|\}}-x\right\|^2\\&=\|x_k-x\|^2+\frac{\mu^2\beta_k^2\|d_k\|^2}{\max\{1,\|d_k\|\}^2}+\frac{2\mu\beta_k}{\max\{1,\|d_k\|\}}\langle x_k-x,d_k\rangle\\&\le\|x_k-x\|^2+\mu^2\beta_k^2+\frac{2\mu\beta_k}{\max\{1,\|d_k\|\}}\langle x_k-x,d_k\rangle\\&=\|x_k-x\|^2+\varepsilon_k.\end{aligned}$$

    By invoking this relation in (3.1), we obtain

    $$\|x_{k+1}-x\|^2\le\|x_k-x\|^2-\frac{\lambda_k(2-\lambda_k)}{4m}\sum_{i=1}^{m}\|S_i(y_k)-S_{i-1}(y_k)\|^2+\varepsilon_k,$$

    as required.

    In order to prove the boundedness of the generated sequence {xk}k=1, we need the following proposition [18, Lemma 3.1].

    Proposition 3.2. Let $\{a_k\}_{k=1}^{\infty}$ be a sequence of nonnegative real numbers such that there exists a subsequence $\{a_{k_j}\}_{j=1}^{\infty}$ of $\{a_k\}_{k=1}^{\infty}$ with $a_{k_j}<a_{k_j+1}$ for all $j\in\mathbb{N}$. If, for all $k\ge k_0$, we define

    $$v(k)=\max\{\bar{k}\in\mathbb{N}: k_0\le\bar{k}\le k,\ a_{\bar{k}}<a_{\bar{k}+1}\},$$

    then the sequence $\{v(k)\}_{k\ge k_0}$ is nondecreasing, $a_{v(k)}\le a_{v(k)+1}$, and $a_k\le a_{v(k)+1}$ for every $k\ge k_0$.

    Now, we are in a position to prove the boundedness of the generated sequence $\{x_k\}_{k=1}^{\infty}$ as the following lemma. The idea of the proof is based on [15, Lemma 5].

    Lemma 3.3. The sequence $\{x_k\}_{k=1}^{\infty}$ is bounded.

    Proof. Let $k\in\mathbb{N}$ and $x\in\bigcap_{i=1}^{m}\operatorname{Fix}T_i$ be given. Since $\lambda_k\in[\varepsilon,2-\varepsilon]$, we have $\varepsilon^2\le\lambda_k(2-\lambda_k)$, and so

    $$\frac{\lambda_k(2-\lambda_k)}{4m}\sum_{i=1}^{m}\|S_i(y_k)-S_{i-1}(y_k)\|^2\ge0.$$

    Thus, by using this property together with the inequality obtained in Lemma 3.1, we get

    $$\|x_{k+1}-x\|^2\le\|x_k-x\|^2+\mu^2\beta_k^2+\frac{2\mu\beta_k}{\max\{1,\|d_k\|\}}\langle x_k-x,d_k\rangle. \tag{3.2}$$

    For the sake of simplicity, we set

    $$\Gamma_k:=\|x_k-x\|^2-\mu^2\sum_{j=1}^{k-1}\beta_j^2$$

    for all $k\in\mathbb{N}$. In this case, the inequality (3.2) can be rewritten as

    $$\Gamma_{k+1}\le\Gamma_k+\frac{2\mu\beta_k}{\max\{1,\|d_k\|\}}\langle x_k-x,d_k\rangle. \tag{3.3}$$

    Next, we divide the proof into two cases based on the behaviors of the sequence {Γk}k=1:

    Case Ⅰ: Suppose that there exists $k_0\in\mathbb{N}$ such that $\Gamma_{k+1}\le\Gamma_k$ for all $k\ge k_0$. In this case, we have $\Gamma_k\le\Gamma_{k_0}$ for all $k\ge k_0$, that is,

    $$\|x_k-x\|^2\le\Gamma_{k_0}+\mu^2\sum_{j=1}^{k-1}\beta_j^2$$

    for all $k\ge k_0$. Since the series $\sum_{k=1}^{\infty}\beta_k^2$ converges, we obtain that the sequence $\{\|x_k-x\|^2\}_{k=1}^{\infty}$ is bounded and, subsequently, the sequence $\{x_k\}_{k=1}^{\infty}$ is also bounded.

    Case Ⅱ: Suppose that there exists a subsequence $\{\Gamma_{k_j}\}_{j=1}^{\infty}$ of $\{\Gamma_k\}_{k=1}^{\infty}$ such that $\Gamma_{k_j}<\Gamma_{k_j+1}$ for all $j\in\mathbb{N}$, and let $\{v(k)\}_{k\ge k_0}$ be defined as in Proposition 3.2. This yields, for all $k\ge k_0$, that

    $$\Gamma_{v(k)}<\Gamma_{v(k)+1} \tag{3.4}$$

    and

    $$\Gamma_k\le\Gamma_{v(k)+1}. \tag{3.5}$$

    By using the relation (3.4) in the inequality (3.3) and the fact that the term $\frac{2\mu\beta_k}{\max\{1,\|d_k\|\}}$ is positive, we have

    $$\langle x_{v(k)}-x,d_{v(k)}\rangle\ge0.$$

    Now, let us note that

    $$0\le\langle x_{v(k)}-x,d_{v(k)}\rangle=-\langle x_{v(k)}-x,\nabla f(x_{v(k)})\rangle+\frac{\varphi_{v(k)}}{\max\{1,\|d_{v(k)-1}\|\}}\langle x_{v(k)}-x,d_{v(k)-1}\rangle$$

    and, since $0\le\varphi_k\le1$, we have

    $$\langle x_{v(k)}-x,\nabla f(x_{v(k)})\rangle\le\frac{\varphi_{v(k)}}{\max\{1,\|d_{v(k)-1}\|\}}\langle x_{v(k)}-x,d_{v(k)-1}\rangle\le\varphi_{v(k)}\|x_{v(k)}-x\|\le\|x_{v(k)}-x\|. \tag{3.6}$$

    Since $f$ is $\eta$-strongly convex, we have from Fact 1.1 that

    $$\begin{aligned}\eta\|x_{v(k)}-x\|^2&\le\langle\nabla f(x_{v(k)})-\nabla f(x),x_{v(k)}-x\rangle\\&=\langle\nabla f(x_{v(k)}),x_{v(k)}-x\rangle-\langle\nabla f(x),x_{v(k)}-x\rangle\\&\le\|x_{v(k)}-x\|+\|\nabla f(x)\|\|x_{v(k)}-x\|,\end{aligned}$$

    where the last inequality holds by the inequality (3.6). Thus, we obtain that

    $$\|x_{v(k)}-x\|\le\eta^{-1}(1+\|\nabla f(x)\|),$$

    which implies that the sequence $\{\|x_{v(k)}-x\|\}_{k\ge k_0}$ is bounded. Now, let us observe that

    $$\Gamma_{v(k)+1}=\|x_{v(k)+1}-x\|^2-\mu^2\sum_{j=1}^{v(k)}\beta_j^2\le\|x_{v(k)+1}-x\|^2,$$

    which means that the sequence $\{\Gamma_{v(k)+1}\}_{k\ge k_0}$ is bounded above. Finally, by using the inequality (3.5), we obtain that $\{\Gamma_k\}_{k=1}^{\infty}$ is bounded and, subsequently, $\{x_k\}_{k=1}^{\infty}$ is also bounded.

    A simple consequence of Lemma 3.3 is the boundedness of related sequences.

    Lemma 3.4. The sequences $\{\nabla f(x_k)\}_{k=1}^{\infty}$, $\{d_k\}_{k=1}^{\infty}$, and $\{y_k\}_{k=1}^{\infty}$ are bounded.

    Proof. Let $k\in\mathbb{N}$ and $x\in\bigcap_{i=1}^{m}\operatorname{Fix}T_i$ be given. We first note from the $L$-smoothness of $f$ that

    $$\|\nabla f(x_k)\|\le\|\nabla f(x_k)-\nabla f(x)\|+\|\nabla f(x)\|\le L\|x_k-x\|+\|\nabla f(x)\|\le L(M_1+\|x\|)+\|\nabla f(x)\|=:M,$$

    where $M_1:=\sup_{k\in\mathbb{N}}\|x_k\|$. This means that the boundedness of the sequence $\{\nabla f(x_k)\}_{k=1}^{\infty}$ is obtained.

    Next, by the construction of $d_k$, we note for all $k\ge1$ that

    $$\|d_{k+1}\|=\left\|-\nabla f(x_{k+1})+\varphi_{k+1}\frac{d_k}{\max\{1,\|d_k\|\}}\right\|\le\|\nabla f(x_{k+1})\|+\varphi_{k+1}\le M+1.$$

    Thus, we have $\|d_k\|\le\max\{M+1,\|d_1\|\}$ for all $k\in\mathbb{N}$, and the boundedness of $\{d_k\}_{k=1}^{\infty}$ is obtained. Finally, these two results immediately yield the boundedness of the sequence $\{y_k\}_{k=1}^{\infty}$.

    Lemma 3.5. For all $k\ge2$ and $x\in\bigcap_{i=1}^{m}\operatorname{Fix}T_i$, it holds that

    $$\|x_{k+1}-x\|^2\le(1-\alpha_k)\|x_k-x\|^2+\alpha_k\delta_k.$$

    Proof. Let $k\ge2$ and $x\in\bigcap_{i=1}^{m}\operatorname{Fix}T_i$ be given. We note from the inequality (3.1) that

    $$\begin{aligned}\|x_{k+1}-x\|^2&\le\|y_k-x\|^2\\&=\left\|x_k+\mu\beta_k\frac{d_k}{\max\{1,\|d_k\|\}}-x\right\|^2\\&=\left\|x_k+\frac{\mu\beta_k}{\max\{1,\|d_k\|\}}\left(-\nabla f(x_k)+\varphi_k\frac{d_{k-1}}{\max\{1,\|d_{k-1}\|\}}\right)-x\right\|^2\\&=\left\|\left(x_k-\frac{\mu\beta_k}{\max\{1,\|d_k\|\}}\nabla f(x_k)\right)-\left(x-\frac{\mu\beta_k}{\max\{1,\|d_k\|\}}\nabla f(x)\right)+\frac{\mu\beta_k}{\max\{1,\|d_k\|\}}\left(\varphi_k\frac{d_{k-1}}{\max\{1,\|d_{k-1}\|\}}-\nabla f(x)\right)\right\|^2\\&\le\left\|\left(x_k-\frac{\mu\beta_k}{\max\{1,\|d_k\|\}}\nabla f(x_k)\right)-\left(x-\frac{\mu\beta_k}{\max\{1,\|d_k\|\}}\nabla f(x)\right)\right\|^2+\frac{2\mu\beta_k}{\max\{1,\|d_k\|\}}\left\langle\varphi_k\frac{d_{k-1}}{\max\{1,\|d_{k-1}\|\}}-\nabla f(x),y_k-x\right\rangle\\&\le\left(1-\frac{\beta_k\tau}{\max\{1,\|d_k\|\}}\right)\|x_k-x\|^2+\frac{2\mu\beta_k}{\max\{1,\|d_k\|\}}\left\langle\varphi_k\frac{d_{k-1}}{\max\{1,\|d_{k-1}\|\}}-\nabla f(x),y_k-x\right\rangle\\&=\left(1-\frac{\beta_k\tau}{\max\{1,\|d_k\|\}}\right)\|x_k-x\|^2+\frac{\beta_k\tau}{\max\{1,\|d_k\|\}}\cdot\frac{2\mu}{\tau}\left(\frac{\varphi_k}{\max\{1,\|d_{k-1}\|\}}\langle d_{k-1},y_k-x\rangle-\langle\nabla f(x),y_k-x\rangle\right)\\&=(1-\alpha_k)\|x_k-x\|^2+\alpha_k\delta_k,\end{aligned}$$

    where the second inequality holds by the fact that $\|x+y\|^2\le\|x\|^2+2\langle y,x+y\rangle$ for all $x,y\in H$, and the third inequality holds by Fact 1.3.

    The following proposition will be a key tool for obtaining the convergence result in Theorem 2.4. The idea and its proof can be found in [19, Lemma 2.6] and [20, Lemma 2.4].

    Proposition 3.6. Let $\{a_k\}_{k=1}^{\infty}$ be a nonnegative real sequence, $\{\delta_k\}_{k=1}^{\infty}$ be a real sequence, and $\{\alpha_k\}_{k=1}^{\infty}$ be a real sequence in $[0,1]$ such that $\sum_{k=1}^{\infty}\alpha_k=\infty$. Suppose that

    $$a_{k+1}\le(1-\alpha_k)a_k+\alpha_k\delta_k\quad\text{for all }k\in\mathbb{N}.$$

    If $\limsup_{j\to\infty}\delta_{k_j}\le0$ for every subsequence $\{a_{k_j}\}_{j=1}^{\infty}$ of $\{a_k\}_{k=1}^{\infty}$ satisfying

    $$\liminf_{j\to\infty}(a_{k_j+1}-a_{k_j})\ge0,$$

    then $\lim_{k\to\infty}a_k=0$.

    Now, we are in a position to prove Theorem 2.4.

    Proof. Let $x^*$ be the unique solution to the problem (1.1). For simplicity, we denote $a_k:=\|x_k-x^*\|^2$ for all $k\in\mathbb{N}$. Now, by using the facts obtained in Lemmata 3.3 and 3.4 and the fact that $\lim_{k\to\infty}\beta_k=0$, we obtain

    $$\lim_{k\to\infty}|\varepsilon_k|=\lim_{k\to\infty}\left|\mu^2\beta_k^2+\frac{2\mu\beta_k}{\max\{1,\|d_k\|\}}\langle x_k-x^*,d_k\rangle\right|=0,$$

    which implies that $\lim_{k\to\infty}\varepsilon_k=0$.

    Next, let $\{a_{k_j}\}_{j=1}^{\infty}$ be a subsequence of the sequence $\{a_k\}_{k=1}^{\infty}$ satisfying

    $$\liminf_{j\to\infty}(a_{k_j+1}-a_{k_j})\ge0$$

    or, equivalently,

    $$\limsup_{j\to\infty}(a_{k_j}-a_{k_j+1})\le0.$$

    By utilizing the inequality obtained in Lemma 3.1 (with $x=x^*$) and the fact that $\lim_{k\to\infty}\varepsilon_k=0$, we obtain

    $$\begin{aligned}0&\le\limsup_{j\to\infty}\frac{\lambda_{k_j}(2-\lambda_{k_j})}{4m}\sum_{i=1}^{m}\|S_i(y_{k_j})-S_{i-1}(y_{k_j})\|^2\\&\le\limsup_{j\to\infty}(a_{k_j}-a_{k_j+1}+\varepsilon_{k_j})\\&=\limsup_{j\to\infty}(a_{k_j}-a_{k_j+1})+\lim_{j\to\infty}\varepsilon_{k_j}\le0.\end{aligned}$$

    This implies that

    $$\lim_{j\to\infty}\frac{\lambda_{k_j}(2-\lambda_{k_j})}{4m}\sum_{i=1}^{m}\|S_i(y_{k_j})-S_{i-1}(y_{k_j})\|^2=0.$$

    Since $\varepsilon^2\le\lambda_k(2-\lambda_k)$, we obtain that

    $$\lim_{j\to\infty}\|S_i(y_{k_j})-S_{i-1}(y_{k_j})\|=0\quad\text{for all }i=1,2,\dots,m. \tag{3.7}$$

    On the other hand, since $\{y_k\}_{k=1}^{\infty}$ is bounded, so is the sequence $\{\langle y_{k_j}-x^*,\nabla f(x^*)\rangle\}_{j=1}^{\infty}$. Now, let $\{y_{k_{j_\ell}}\}_{\ell=1}^{\infty}$ be a subsequence of $\{y_{k_j}\}_{j=1}^{\infty}$ such that

    $$\liminf_{j\to\infty}\langle y_{k_j}-x^*,\nabla f(x^*)\rangle=\lim_{\ell\to\infty}\langle y_{k_{j_\ell}}-x^*,\nabla f(x^*)\rangle.$$

    Due to the boundedness of the sequence $\{y_{k_{j_\ell}}\}_{\ell=1}^{\infty}$, there exist a weak cluster point $z\in H$ and a subsequence of $\{y_{k_{j_\ell}}\}_{\ell=1}^{\infty}$, again denoted by $\{y_{k_{j_\ell}}\}_{\ell=1}^{\infty}$, such that $y_{k_{j_\ell}}\rightharpoonup z\in H$. According to the fact obtained in (3.7), we note that

    $$\lim_{\ell\to\infty}\|T_1(y_{k_{j_\ell}})-y_{k_{j_\ell}}\|=\lim_{\ell\to\infty}\|S_1(y_{k_{j_\ell}})-S_0(y_{k_{j_\ell}})\|=0.$$

    Since $T_1$ satisfies the demi-closedness principle, we obtain that $z\in\operatorname{Fix}T_1$. Furthermore, the facts $y_{k_{j_\ell}}\rightharpoonup z$ and

    $$\lim_{\ell\to\infty}\|T_1(y_{k_{j_\ell}})-y_{k_{j_\ell}}\|=0$$

    yield that $T_1(y_{k_{j_\ell}})\rightharpoonup T_1(z)=z$. Furthermore, we observe that

    $$\lim_{\ell\to\infty}\|T_2(T_1(y_{k_{j_\ell}}))-T_1(y_{k_{j_\ell}})\|=\lim_{\ell\to\infty}\|S_2(y_{k_{j_\ell}})-S_1(y_{k_{j_\ell}})\|=0.$$

    By invoking the assumption that $T_2$ satisfies the demi-closedness principle, we also obtain that $z\in\operatorname{Fix}T_2$. By continuing the same argument, we obtain that $z\in\operatorname{Fix}T_i$ for all $i=1,2,\dots,m$, and hence $z\in\bigcap_{i=1}^{m}\operatorname{Fix}T_i$. Since $x^*$ is the unique solution to the problem (1.1), we note from the optimality condition in Fact 1.2 that

    $$\liminf_{j\to\infty}\langle y_{k_j}-x^*,\nabla f(x^*)\rangle=\lim_{\ell\to\infty}\langle y_{k_{j_\ell}}-x^*,\nabla f(x^*)\rangle=\langle z-x^*,\nabla f(x^*)\rangle\ge0. \tag{3.8}$$

    Now, the assumption that $\lim_{k\to\infty}\varphi_k=0$, the boundedness of the sequences $\{y_k\}_{k=1}^{\infty}$ and $\{d_k\}_{k=1}^{\infty}$, and the relation (3.8) yield that

    $$\limsup_{j\to\infty}\delta_{k_j}=\limsup_{j\to\infty}\frac{2\mu}{\tau}\left(\frac{\varphi_{k_j}}{\max\{1,\|d_{k_j-1}\|\}}\langle d_{k_j-1},y_{k_j}-x^*\rangle-\langle\nabla f(x^*),y_{k_j}-x^*\rangle\right)\le0.$$

    Hence, by applying Proposition 3.6, we conclude that $\lim_{k\to\infty}a_k=0$. The proof is complete.

    In this section, we provide some simple consequences of the main theorem. We start with the minimization problem over a system of linear inequalities; this constraint is nothing else but an intersection of half-spaces. We subsequently show that the proposed algorithm and convergence result are also applicable to the well-known support vector machine learning.

    In this subsection, we assume that the function $f:H\to\mathbb{R}$ is $\eta$-strongly convex and $L$-smooth as in assumption (A1). Now, let $a_i\in H$ with $a_i\neq0$ and $b_i\in\mathbb{R}$, $i=1,\dots,m$. We consider the minimization of a strongly convex smooth function over the intersection of nonempty closed and convex sets, which is of the following form:

    $$\text{minimize } f(x)\quad\text{subject to }\langle a_i,x\rangle\le b_i,\quad i=1,\dots,m. \tag{4.1}$$

    We assume the consistency of the system of linear inequalities, and we denote the solution point of the problem (4.1) by $x^*$. Now, for each $i=1,\dots,m$, we let $H_i:=\{x\in H:\langle a_i,x\rangle\le b_i\}$ be the half-space corresponding to $a_i$ and $b_i$. Furthermore, we set $T_i:=P_{H_i}$, the metric projection onto the half-space $H_i$, for all $i=1,\dots,m$. It is worth noting that the metric projection onto a half-space has the closed-form expression

    $$T_i(x):=P_{H_i}(x)=x-\frac{\max\{\langle a_i,x\rangle-b_i,0\}}{\|a_i\|^2}a_i;$$

    see [17, Subsection 4.1.3] for further details. Note that $H_i$ is a closed and convex set and the fixed-point set is $\operatorname{Fix}T_i=H_i$ for all $i=1,\dots,m$. To derive an iterative method for solving the problem (4.1), we recall the notations $T:=T_mT_{m-1}\cdots T_1$, $S_0:=\operatorname{Id}$, and $S_i:=T_iT_{i-1}\cdots T_1$, for all $i=1,2,\dots,m$. Now, for every $x\in H$, by setting $u_i:=S_i(x)$, we have $u_0=x$ and $u_i=T_i(u_{i-1})$, for all $i=1,2,\dots,m$. In this case, the stepsize function $\sigma:H\to(0,\infty)$ can be written as

    $$\sigma(x):=\begin{cases}\dfrac{\sum_{i=1}^{m}(\langle a_i,x\rangle-b_i)\left(\dfrac{\max\{\langle a_i,u_i\rangle-b_i,0\}}{\|a_i\|^2}\right)}{\|u_m-x\|^2}, & \text{if } x\notin\bigcap_{i=1}^{m}H_i,\\[2mm] 1, & \text{otherwise};\end{cases}$$

    see [13, Section 4.2] for further details.
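    The closed-form half-space projection above is a one-liner in code. The following is a small illustration with assumed toy data; note that a point already in the half-space is left unchanged, since the `max` term vanishes there.

```python
import numpy as np

def proj_halfspace(x, a, b):
    """Metric projection of x onto H = {z : <a, z> <= b}, with a != 0,
    via the closed-form expression above."""
    return x - (max(np.dot(a, x) - b, 0.0) / np.dot(a, a)) * a

# Project (2, 3) onto {z : z1 <= 1}: only the violating coordinate moves.
p = proj_halfspace(np.array([2.0, 3.0]), np.array([1.0, 0.0]), 1.0)
# p = [1., 3.]
```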

    In order to solve the problem (4.1), we consider a particular situation of MESCoM-CGD as the following algorithm.

    Algorithm 2: MESCoM-CGD for minimizing over a system of linear inequalities
    Initialization: Choose nonnegative sequences $\{\beta_k\}_{k=1}^{\infty}$, $\{\varphi_k\}_{k=1}^{\infty}$ and $\{\lambda_k\}_{k=1}^{\infty}$, and a parameter $\mu\in(0,2\eta/L^2)$. Choose an initial point $x_1\in H$ arbitrarily and set $d_1=-\nabla f(x_1)$.
    Iterative Step ($k\in\mathbb{N}$): For a current iterate $x_k\in H$ and direction $d_k\in H$, repeat the following steps:
    Step 1. Compute the iterate $y_k\in H$ as
           $$y_k:=x_k+\mu\beta_k\frac{d_k}{\max\{1,\|d_k\|\}}.$$
    Step 2. Put $u_{k,0}:=y_k$. For each $i=1,2,\dots,m$, compute
           $$u_{k,i}:=u_{k,i-1}-\frac{\max\{\langle a_i,u_{k,i-1}\rangle-b_i,0\}}{\|a_i\|^2}a_i.$$
    Compute the stepsize $\sigma(y_k)$ as
           $$\sigma(y_k):=\begin{cases}\dfrac{\sum_{i=1}^{m}(\langle a_i,y_k\rangle-b_i)\left(\dfrac{\max\{\langle a_i,u_{k,i}\rangle-b_i,0\}}{\|a_i\|^2}\right)}{\|u_{k,m}-y_k\|^2}, & \text{if } y_k\notin\bigcap_{i=1}^{m}H_i,\\[2mm] 1, & \text{otherwise}.\end{cases}$$
    Step 3. Compute the iterate $x_{k+1}\in H$ and the search direction $d_{k+1}\in H$ as
           $$x_{k+1}:=y_k+\lambda_k\sigma(y_k)(u_{k,m}-y_k)$$
    and
           $$d_{k+1}:=-\nabla f(x_{k+1})+\varphi_{k+1}\frac{d_k}{\max\{1,\|d_k\|\}}.$$
    Step 4. Update $k:=k+1$ and return to Step 1.
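    Algorithm 2 for the minimum-norm objective $f(x)=\frac12\|x\|^2$ (so $\nabla f(x)=x$, $\eta=L=1$, $\mu\in(0,2)$) can be sketched as follows. This is our own illustration, not the authors' experiment code: the parameter schedules follow Remark 2.3 with $\beta_k=\beta_0/(k+1)$, $\varphi_k=\varphi_0/(k+1)$, $\lambda_k=\lambda_0$, the stepsize is computed as the general formula (1.5) evaluated along the projection trajectory $u_{k,0},\dots,u_{k,m}$, and the two-constraint instance at the end is assumed toy data.

```python
import numpy as np

def mescom_cgd_linear(A, b, x1, mu=1.0, n_iter=3000,
                      beta0=1.0, phi0=1.0, lam0=1.0):
    """Sketch of Algorithm 2 for: minimize ||x||^2 / 2 subject to A x <= b."""
    x = np.asarray(x1, dtype=float)
    d = -x.copy()                               # d_1 = -grad f(x_1) = -x_1
    norms2 = np.sum(A * A, axis=1)              # ||a_i||^2 for each row a_i
    for k in range(1, n_iter + 1):
        y = x + mu * (beta0 / (k + 1)) * d / max(1.0, np.linalg.norm(d))
        # Step 2: sequential half-space projections u_{k,1}, ..., u_{k,m}.
        u = y.copy()
        us = [u.copy()]
        for i in range(A.shape[0]):
            u = u - (max(A[i] @ u - b[i], 0.0) / norms2[i]) * A[i]
            us.append(u.copy())
        # Extrapolation stepsize: general formula (1.5) on the trajectory.
        denom = np.linalg.norm(u - y) ** 2
        if denom > 1e-15:
            sigma = sum(np.dot(u - us[i - 1], us[i] - us[i - 1])
                        for i in range(1, len(us))) / denom
        else:
            sigma = 1.0                         # y_k already feasible
        # Step 3: update the iterate and the search direction.
        x = y + lam0 * sigma * (u - y)
        d = -x + (phi0 / (k + 2)) * d / max(1.0, np.linalg.norm(d))
    return x

# Toy instance (assumed data): x0 >= 1 and x0 + x1 >= 2, written as A x <= b;
# the minimum-norm solution is (1, 1).
A = np.array([[-1.0, 0.0], [-1.0, -1.0]])
b = np.array([-1.0, -2.0])
sol = mescom_cgd_linear(A, b, x1=np.array([3.0, 0.0]))
```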


    We obtain an immediate consequence of Theorem 2.4 in the following corollary.

    Corollary 4.1. Let the sequence $\{x_k\}_{k=1}^{\infty}$ be generated by Algorithm 2 and assume that assumption (A1) and hypotheses (H1)–(H3) hold. Then the sequence $\{x_k\}_{k=1}^{\infty}$ converges strongly to the unique solution $x^*$ of the problem (4.1).

    To examine the behavior of Algorithm 2 and the convergence given in Corollary 4.1, we consider solving the minimum-norm problem for the system of homogeneous linear inequalities. We assume that the whole space $\mathcal{H}$ is finite dimensional, so that $\mathcal{H} = \mathbb{R}^n$. Given a matrix $A = [a_1 \,|\, \cdots \,|\, a_m]^{\top}$ of predictors $a_i = (a_{1i}, \dots, a_{ni}) \in \mathbb{R}^n$, for all $i = 1, \dots, m$, and a vector $b = (b_1, \dots, b_m) \in \mathbb{R}^m$, the minimum-norm problem is to find the vector $x \in \mathbb{R}^n$ that solves the problem

    $$\text{minimize } \tfrac{1}{2}\|x\|^2 \quad \text{subject to } Ax \le b \tag{4.2}$$

    or, equivalently, in the explicit form

    $$\text{minimize } \tfrac{1}{2}\|x\|^2 \quad \text{subject to } \langle a_i, x\rangle \le b_i, \quad i = 1, \dots, m.$$

    Now, by putting the constrained sets $H_i := \{x \in \mathbb{R}^n : \langle a_i, x\rangle \le b_i\}$, $i = 1, \dots, m$, as half-spaces and $T_i := P_{H_i}$, $i = 1, \dots, m$, the metric projections onto $H_i$ with $\mathrm{Fix}\,T_i = H_i$ satisfy the demi-closedness principle. Furthermore, the function $f(x) := \tfrac{1}{2}\|x\|^2$ is 1-strongly convex and 1-smooth. In this situation, the problem (4.2) is nothing else but a special case of the problem (4.1), which yields that Algorithm 2 and the convergence given in Corollary 4.1 are applicable.
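    For completeness, the strong-convexity and smoothness constants of $f$ can be read off directly from its gradient:

    ```latex
    \nabla f(x) = x
    \quad\Longrightarrow\quad
    \langle \nabla f(x) - \nabla f(y),\, x - y\rangle = \|x - y\|^2
    \quad\text{and}\quad
    \|\nabla f(x) - \nabla f(y)\| = \|x - y\|,
    ```

    so $\eta = L = 1$, and the admissible range for the parameter is $\mu \in (0, 2\eta/L^2) = (0, 2)$; in particular, the choice $\mu = 1.9$ used below lies in this interval.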

    To perform a numerical illustration in solving the problem (4.1), we generate the matrix $A \in \mathbb{R}^{m \times n}$, where $m = 200$ and $n = 100$, with entries uniformly distributed at random in $(-5, 5)$, and set the vector $b = 0$. According to Remark 2.3 (i)–(iii), we examine the influence of the parameters $\beta_0 \in [0.8, 1]$, $\varphi_0 \in [0.1, 1]$, and $\lambda_0 \in [0.1, 1.9]$. We fix the corresponding parameter $\mu = 1.9$. We terminated the experiment using the error of norm, that is, $\|x_{k+1} - x_k\| < \epsilon$. We manually chose the parameters $\beta_0 = 1$, $\varphi_0 = 1$, and $\lambda_0 = 1$, which gave the smallest number of iterations when the tolerance $\epsilon = 10^{-4}$ was met.

    Next, we show the behavior of Algorithm 2 when solving the problem (4.2) for various error tolerances $\epsilon$. We set the number of variables to $n = 100$ and consider several numbers of inequalities, namely, $m = 100, 200, 300, 400$, and $500$. Based on the above experiments, we set the parameters $\beta_0 = 1$, $\varphi_0 = 1$, and $\lambda_0 = 1$. We plot the number of iterations and the computational time in seconds in Figure 1.

    Figure 1.  Behavior of Algorithm 2 for various errors of tolerance.

    It can be noticed from Figure 1 that a smaller number $m$ required a larger number of iterations for all error tolerances. Moreover, for $m = 100, 200$, and $300$, it can be seen that a smaller number $m$ also required a larger computational time.

    In this subsection, we consider constructing a classifier for a binary classification problem, starting from given training datasets of two classes. More precisely, we are given the $i$th training data $a_i \in \mathbb{R}^n$ and the $i$th label $b_i \in \{-1, +1\}$, for all $i = 1, \dots, m$. Support vector machine learning trains a weight $u \in \mathbb{R}^n$ so that a (linear) classifier $c(a) := \langle a, u\rangle$ can assign the correct class to every new tested data $a \in \mathbb{R}^n$. In this situation, $a$ is identified as being in the class $-1$ if $c(a) < 0$, and in the class $+1$ otherwise. Mathematically, support vector machine learning can be formulated as the following minimization problem:

    $$\text{minimize } \tfrac{1}{2}\|u\|^2 + \tfrac{1}{2}\sum_{i=1}^{m}\xi_i^2 \quad \text{subject to } b_i\langle a_i, u\rangle \ge 1 - \xi_i,\ i = 1, \dots, m,\quad \xi_i \ge 0,\ i = 1, \dots, m, \tag{4.3}$$

    where $\xi_i \ge 0$ is a nonnegative slack variable corresponding to the $i$th training data, for all $i = 1, \dots, m$. By introducing a new variable $x := (u, \xi_1, \dots, \xi_m) \in \mathbb{R}^{n+m}$ and using the idea given in, for instance, [15,21], the problem (4.3) can be written as

    $$\text{minimize } \tfrac{1}{2}\|x\|^2 \quad \text{subject to } Ax \le b, \tag{4.4}$$

    where the matrix $A \in \mathbb{R}^{2m \times (n+m)}$ is given by

    $$A = \begin{bmatrix} -b_1 a_1^{\top} & -1 & 0 & \cdots & 0\\ -b_2 a_2^{\top} & 0 & -1 & \cdots & 0\\ \vdots & & & \ddots & \\ -b_m a_m^{\top} & 0 & \cdots & 0 & -1\\ 0_{\mathbb{R}^n}^{\top} & -1 & 0 & \cdots & 0\\ 0_{\mathbb{R}^n}^{\top} & 0 & -1 & \cdots & 0\\ \vdots & & & \ddots & \\ 0_{\mathbb{R}^n}^{\top} & 0 & \cdots & 0 & -1 \end{bmatrix} \in \mathbb{R}^{2m \times (n+m)}$$

    and the vector $b \in \mathbb{R}^{2m}$ is given by

    $$b = \begin{pmatrix} -1_{\mathbb{R}^m} \\ 0_{\mathbb{R}^m} \end{pmatrix}.$$

    This problem (4.4) is nothing else but a particular case of the problem (4.2) and, subsequently, Algorithm 2 and the convergence given in Corollary 4.1 are also applicable.
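    Assuming the training points are stacked in an $(m, n)$ array and the labels in an $(m,)$ array of $\pm 1$ entries, the block matrix $A$ and vector $b$ of problem (4.4) can be assembled as in the following sketch (the function name is illustrative):

    ```python
    import numpy as np

    def build_svm_system(data, labels):
        """Assemble A in R^{2m x (n+m)} and b in R^{2m} for problem (4.4).

        The first m rows encode -b_i <a_i, u> - xi_i <= -1 and the last
        m rows encode -xi_i <= 0, matching the block matrix above."""
        m, n = data.shape
        top = np.hstack([-labels[:, None] * data, -np.eye(m)])
        bottom = np.hstack([np.zeros((m, n)), -np.eye(m)])
        A = np.vstack([top, bottom])
        b = np.concatenate([-np.ones(m), np.zeros(m)])
        return A, b
    ```

    For $m = 2$ points in $\mathbb{R}^2$, this produces a $4 \times 4$ matrix whose first two rows carry $-b_i a_i^{\top}$ alongside $-I$, and whose right-hand side is $(-1, -1, 0, 0)$.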

    In the first experiment, we aim to classify the 28×28 gray-scale images of handwritten digits from the MNIST dataset, which is available at https://cs.nyu.edu/roweis/data.html. We used a dataset of 5000 images of the handwritten digit 9 and a dataset of 5000 images of the handwritten digits 0–8. The images are labeled by the classes +1 and −1, respectively. We perform 10-fold cross-validation on the given datasets. In each run, we take a fold of 1000 images to be the testing data and the remaining 9 folds, consisting of 9000 images, to be the training data. We repeat the cross-validation process 10 times so that each fold serves once as the testing data.
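    The fold construction described above can be sketched with a plain NumPy splitter (the function name and seed are illustrative):

    ```python
    import numpy as np

    def ten_fold_indices(n_samples, n_folds=10, seed=0):
        """Yield (train_idx, test_idx) pairs for k-fold cross-validation:
        each fold serves once as the testing data, the rest as training data."""
        rng = np.random.default_rng(seed)
        perm = rng.permutation(n_samples)          # shuffle once, then split
        folds = np.array_split(perm, n_folds)
        for i in range(n_folds):
            test_idx = folds[i]
            train_idx = np.concatenate([folds[j] for j in range(n_folds) if j != i])
            yield train_idx, test_idx
    ```

    With 10000 samples this yields ten disjoint test folds of 1000 samples each, whose complements are the 9000-sample training sets.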

    Recalling that the class of digit 9 and the class of digits 0–8 are labeled by +1 (positive) and −1 (negative), respectively, we count the tested data as follows. If a tested data labeled as positive is classified as positive, it is counted as a true positive (TP); if it is classified as negative, it is counted as a false negative (FN). If a tested data labeled as negative is classified as negative, it is counted as a true negative (TN); if it is classified as positive, it is counted as a false positive (FP). These numbers are summarized as follows:

                            Classified Class
    Actual Class            Positive (+1)            Negative (−1)
    Positive (+1)           TP := True Positive      FN := False Negative
    Negative (−1)           FP := False Positive     TN := True Negative


    To measure the performance of each classifier obtained from each cross-validation, we consider the classification performance metrics, namely, accuracy, precision, recall, specificity, and F-measure. These performance metrics are computed as follows:

    $$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN},\quad \text{Precision} = \frac{TP}{TP + FP},\quad \text{Recall} = \frac{TP}{TP + FN},\quad \text{Specificity} = \frac{TN}{TN + FP},\quad \text{F-measure} = \frac{2 \times \text{Recall} \times \text{Precision}}{\text{Recall} + \text{Precision}}.$$
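    The five metrics follow directly from the four confusion-matrix counts; a small helper (name illustrative) makes the formulas concrete:

    ```python
    def classification_metrics(tp, fn, fp, tn):
        """Compute accuracy, precision, recall, specificity, and F-measure
        from the confusion-matrix counts TP, FN, FP, TN."""
        accuracy = (tp + tn) / (tp + tn + fp + fn)
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        specificity = tn / (tn + fp)
        f_measure = 2 * recall * precision / (recall + precision)
        return accuracy, precision, recall, specificity, f_measure
    ```

    For example, counts TP = 90, FN = 10, FP = 5, TN = 95 give an accuracy of 0.925 and an F-measure of 12/13 ≈ 0.9231.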

    In the experiment, we terminate when the number of iterations reaches $k = 50$ and, subsequently, average each performance metric over the 10 repetitions of the cross-validation process. The classification performance metrics for each parameter setting are presented in Table 1.

    Table 1.  Classification performance metrics for varying parameters β0 and φ0.
    β0     φ0    Accuracy   Precision   Recall   Specificity   F-measure   Time (s)
    0.5    0.1   0.9325     0.9507      0.9173   0.9489        0.9337      340.23
    0.5    0.3   0.9325     0.9507      0.9173   0.9489        0.9337      339.46
    0.5    0.5   0.9325     0.9507      0.9173   0.9489        0.9337      347.93
    0.5    0.7   0.9325     0.9507      0.9173   0.9489        0.9337      341.14
    0.5    0.9   0.9325     0.9507      0.9173   0.9489        0.9337      352.12
    0.6    0.1   0.9323     0.9507      0.9171   0.9489        0.9335      338.62
    0.6    0.3   0.9323     0.9507      0.9171   0.9489        0.9335      339.44
    0.6    0.5   0.9323     0.9507      0.9171   0.9489        0.9335      343.61
    0.6    0.7   0.9323     0.9507      0.9171   0.9489        0.9335      341.39
    0.6    0.9   0.9323     0.9507      0.9171   0.9489        0.9335      356.79
    0.7    0.1   0.9321     0.9507      0.9167   0.9489        0.9333      339.02
    0.7    0.3   0.9321     0.9507      0.9167   0.9489        0.9333      340.37
    0.7    0.5   0.9321     0.9507      0.9167   0.9489        0.9333      343.67
    0.7    0.7   0.9321     0.9507      0.9167   0.9489        0.9333      341.12
    0.7    0.9   0.9321     0.9507      0.9167   0.9489        0.9333      357.62
    0.8    0.1   0.9322     0.9509      0.9167   0.9491        0.9335      341.26
    0.8    0.3   0.9322     0.9509      0.9167   0.9491        0.9335      341.02
    0.8    0.5   0.9322     0.9509      0.9167   0.9491        0.9335      342.17
    0.8    0.7   0.9322     0.9509      0.9167   0.9491        0.9335      347.23
    0.8    0.9   0.9322     0.9509      0.9167   0.9491        0.9335      342.92
    0.9    0.1   0.9322     0.9509      0.9167   0.9491        0.9335      339.05
    0.9    0.3   0.9322     0.9509      0.9167   0.9491        0.9335      340.88
    0.9    0.5   0.9322     0.9509      0.9167   0.9491        0.9335      340.90
    0.9    0.7   0.9322     0.9509      0.9167   0.9491        0.9335      346.44
    0.9    0.9   0.9322     0.9509      0.9167   0.9491        0.9335      342.06
    1.0    0.1   0.9323     0.9511      0.9168   0.9493        0.9336      338.86
    1.0    0.3   0.9323     0.9511      0.9168   0.9493        0.9336      340.88
    1.0    0.5   0.9323     0.9511      0.9168   0.9493        0.9336      340.48
    1.0    0.7   0.9323     0.9511      0.9168   0.9493        0.9336      346.35
    1.0    0.9   0.9323     0.9511      0.9168   0.9493        0.9336      341.66


    The results given in Table 1 show that the parameters $\beta_0$ and $\varphi_0$ only slightly affected the classification performance. The highest accuracy and F-measure values were obtained for the case $\beta_0 = 0.5$. In fact, we can observe that these five metrics, as well as the computational running times, were not much different. This suggests the stability of the proposed method, in the sense that the corresponding parameters do not greatly affect its convergence.

    We presented a modified version of the extrapolated sequential constraint method proposed in [11] for solving the minimization problem over the intersection of the fixed-point sets of cutter operators. We not only presented a simple version of the strong convergence result for the proposed method, but also omitted the boundedness assumption used in [11]. We applied the proposed method to binary classification using support vector machines.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    The authors are grateful to three anonymous reviewers for their comments and suggestions that improved the paper's quality and presentation. This work has received funding support from the NSRF via the Program Management Unit for Human Resources & Institutional Development, Research and Innovation [grant number B05F650018]. D. Rakjarungkiat gratefully acknowledges the research capability enhancement program through graduate student scholarship, Faculty of Science, Khon Kaen University.

    The authors declare that there is no conflict of interest regarding the publication of this article.



    [1] H. H. Bauschke, The approximation of fixed points of compositions of nonexpansive mappings in Hilbert space, J. Math. Anal. Appl., 202 (1996), 150–159. https://doi.org/10.1006/jmaa.1996.0308 doi: 10.1006/jmaa.1996.0308
    [2] I. Yamada, The hybrid steepest descent method for the variational inequality problem over the intersection of fixed point sets of nonexpansive mappings, In: Inherently parallel algorithms in feasibility and optimization and their applications, Elsevier, 2001,473–504.
    [3] H. K. Xu, T. H. Kim, Convergence of hybrid steepest-descent methods for variational inequalities, J. Optim. Theory Appl., 119 (2003), 185–201. https://doi.org/10.1023/B:JOTA.0000005048.79379.b6 doi: 10.1023/B:JOTA.0000005048.79379.b6
    [4] A. Cegielski, Extrapolated simultaneous subgradient projection method for variational inequality over the intersection of convex subsets, J. Nonlinear Convex Anal., 15 (2014), 211–218.
    [5] A. Cegielski, A. Gibali, S. Reich, R. Zalas, An algorithm for solving the variational inequality problem over the fixed point set of a quasi-nonexpansive operator in Euclidean space, Numer. Funct. Anal. Optim., 34 (2013), 1067–1096. https://doi.org/10.1080/01630563.2013.771656 doi: 10.1080/01630563.2013.771656
    [6] A. Cegielski, R. Zalas, Methods for variational inequality problem over the intersection of fixed point sets of quasi-nonexpansive operators, Numer. Funct. Anal. Optim., 34 (2013), 255–283. https://doi.org/10.1080/01630563.2012.716807 doi: 10.1080/01630563.2012.716807
    [7] S. Sabach, S. Shtern, A first order method for solving convex bilevel optimization problems, SIAM J. Optim., 27 (2017), 640–660. https://doi.org/10.1137/16M105592X doi: 10.1137/16M105592X
    [8] B. Tan, S. Li, Strong convergence of inertial Mann algorithms for solving hierarchical fixed point problems, J. Nonlinear Var. Anal., 4 (2020), 337–355. http://dx.doi.org/10.23952/jnva.4.2020.3.02 doi: 10.23952/jnva.4.2020.3.02
    [9] B. Tan, X. Qin, A. Gibali, Three approximation methods for solving constraint variational inequalities and related problems, Pure Appl. Funct. Anal., 8 (2023), 965–986.
    [10] M. Prangprakhon, N. Nimana, N. Petrot, A sequential constraint method for solving variational inequality over the intersection of fixed point sets, Thai J. Math., 18 (2020), 1105–1123.
    [11] M. Prangprakhon, N. Nimana, Extrapolated sequential constraint method for variational inequality over the intersection of fixed-point sets, Numer. Algorithms, 88 (2021), 1051–1075. https://doi.org/10.1007/s11075-021-01067-z doi: 10.1007/s11075-021-01067-z
    [12] H. Iiduka, I. Yamada, A use of conjugate gradient direction for the convex optimization problem over the fixed point set of a nonexpansive mapping, SIAM J. Optim., 19 (2009), 1881–1893. https://doi.org/10.1137/070702497 doi: 10.1137/070702497
    [13] A. Cegielski, Y. Censor, Extrapolation and local acceleration of an iterative process for common fixed point problems, J. Math. Anal. Appl., 394 (2012), 809–818. https://doi.org/10.1016/j.jmaa.2012.04.072 doi: 10.1016/j.jmaa.2012.04.072
    [14] A. Cegielski, N. Nimana, Extrapolated cyclic subgradient projection methods for the convex feasibility problems and their numerical behaviour, Optimization, 68 (2019), 145–161. https://doi.org/10.1080/02331934.2018.1509214 doi: 10.1080/02331934.2018.1509214
    [15] N. Petrot, M. Prangprakhon, P. Promsinchai, N. Nimana, A dynamic distributed conjugate gradient method for variational inequality problem over the common fixed-point constraints, Numer. Algorithms, 93 (2023), 639–668. https://doi.org/10.1007/s11075-022-01430-8 doi: 10.1007/s11075-022-01430-8
    [16] A. Beck, First-order methods in optimization, Philadelphia: SIAM, 2017.
    [17] A. Cegielski, Iterative methods for fixed point problems in Hilbert spaces, Berlin, Heidelberg: Springer, 2012. https://doi.org/10.1007/978-3-642-30901-4
    [18] P. E. Mainge, Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization, Set Valued Anal., 16 (2008), 899–912. https://doi.org/10.1007/s11228-008-0102-z doi: 10.1007/s11228-008-0102-z
    [19] S. Saejung, P. Yotkaew, Approximation of zeros of inverse strongly monotone operators in Banach spaces, Nonlinear Anal., 75 (2012), 742–750. https://doi.org/10.1016/j.na.2011.09.005 doi: 10.1016/j.na.2011.09.005
    [20] C. Jaipranop, S. Saejung, On Halpern-type sequences with applications in variational inequality problems, Optimization, 71 (2020), 675–710. https://doi.org/10.1080/02331934.2020.1812065 doi: 10.1080/02331934.2020.1812065
    [21] R. I. Boţ, E. R. Csetnek, N. Nimana, Gradient-type penalty method with inertial effects for solving constrained convex optimization problems with smooth data, Optim. Lett., 12 (2018), 17–33. https://doi.org/10.1007/s11590-017-1158-1 doi: 10.1007/s11590-017-1158-1
  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)