Research article

A new two-step inertial algorithm for solving convex bilevel optimization problems with application in data classification problems

  • Received: 12 January 2024 Revised: 21 February 2024 Accepted: 23 February 2024 Published: 28 February 2024
  • MSC : 47H10, 65K10, 90C25

  • In this paper, we propose a new accelerated algorithm for solving convex bilevel optimization problems using some fixed point and two-step inertial techniques. Our focus is on analyzing the convergence behavior of the proposed algorithm. We establish a strong convergence theorem for our algorithm under some control conditions. To demonstrate the effectiveness of our algorithm, we utilize it as a machine learning algorithm to solve data classification problems of some noncommunicable diseases, and compare its efficacy with BiG-SAM and iBiG-SAM.

    Citation: Puntita Sae-jia, Suthep Suantai. A new two-step inertial algorithm for solving convex bilevel optimization problems with application in data classification problems[J]. AIMS Mathematics, 2024, 9(4): 8476-8496. doi: 10.3934/math.2024412




Data classification is an important data mining technique with a wide variety of applications, used to classify the many kinds of data that arise in practically every aspect of our lives. It has been recognized as a critical topic in machine learning and data mining.

We begin by reviewing the history of various mathematical models and related techniques used for this purpose. The convex bilevel optimization problem plays an important role in real-world applications; for example, it can be applied to data classification, see [1,2,3,4]. The convex bilevel optimization problem consists of the constrained minimization problem known as the outer level,

$\min_{u\in\Lambda}\phi(u), \quad (1.1)$

where $H$ is a real Hilbert space, $\phi:H\to\mathbb{R}$ is a strongly convex differentiable function, and $\Lambda$ is the nonempty set of minimizers of the inner level problem given by

$\operatorname*{argmin}_{u\in\mathbb{R}^m}\{\varphi(u)+\psi(u)\}, \quad (1.2)$

where $\varphi:\mathbb{R}^m\to\mathbb{R}$ is a convex differentiable function such that $\nabla\varphi$ is $L_\varphi$-Lipschitzian, and $\psi\in\Gamma_0(\mathbb{R}^m)$, the set of proper lower semicontinuous convex functions from $\mathbb{R}^m$ to $\mathbb{R}\cup\{+\infty\}$. Problems (1.1) and (1.2) together are labeled as a bilevel optimization problem.

Furthermore, solving (1.2) can be restated as the problem of finding $\hat{u}\in\mathbb{R}^m$ such that

$0\in\nabla\varphi(\hat{u})+\partial\psi(\hat{u}). \quad (1.3)$

Parikh and Boyd [5] introduced the proximal gradient technique for solving (1.3); that is, $\hat{u}$ is a solution of (1.3) if and only if $\hat{u}\in F(T)$, where $T$ is the prox-grad mapping defined by

$T:=\mathrm{prox}_{t\psi}(I-t\nabla\varphi),$

for $t>0$, and $F(T)$ is the set of fixed points of $T$. It is well known that if $t\in(0,2/L_\varphi)$, then $T$ is nonexpansive and $F(T)=\operatorname*{argmin}_{u\in\mathbb{R}^m}\{\varphi(u)+\psi(u)\}$. We also note that the set of all common fixed points of the mappings $T_n=\mathrm{prox}_{c_n\psi}(I-c_n\nabla\varphi)$, with $c_n\in(0,2/L_\varphi)$, is the set of minimizers of the inner level problem (1.2).
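For completeness, this fixed-point characterization can be verified in one line from the optimality condition of the minimization defining the proximity operator (see Definition 2.7 below):

$\hat{u}=\mathrm{prox}_{t\psi}\big(\hat{u}-t\nabla\varphi(\hat{u})\big) \iff \hat{u}-t\nabla\varphi(\hat{u})\in\hat{u}+t\,\partial\psi(\hat{u}) \iff 0\in\nabla\varphi(\hat{u})+\partial\psi(\hat{u}).$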

Furthermore, $u^*\in F(T)$ is a solution for problem (1.1) if $u^*$ satisfies the condition

$\langle\nabla\phi(u^*),v-u^*\rangle\geq0, \quad \forall v\in F(T).$

Hereafter, we give some background on iterative methods for finding a fixed point of the nonexpansive mapping $T$, that is, finding a point $u\in C$ such that $Tu=u$.

Let $H$ be a real Hilbert space with norm $\|\cdot\|$ and inner product $\langle\cdot,\cdot\rangle$, and let $C$ be a nonempty closed convex subset of $H$. One of the most popular iterative methods for finding a fixed point of a nonexpansive mapping is the Mann iteration, which was first introduced by Mann [6]. Later, Reich [7] modified it to the general version

$u_{n+1}=\lambda_nu_n+(1-\lambda_n)Tu_n, \quad n\geq1, \quad (1.4)$

where $u_1\in H$ and $\{\lambda_n\}$ is a real sequence in $[0,1]$. He proved the weak convergence of (1.4) under the condition $\sum_{n=1}^{\infty}\lambda_n(1-\lambda_n)=\infty$.

Later, Halpern [8] introduced an iterative method known as the Halpern iteration for finding a fixed point of nonexpansive mappings in real Hilbert spaces. His algorithm was given in the following form:

$u_{n+1}=\lambda_nu_0+(1-\lambda_n)Tu_n, \quad n\geq1, \quad (1.5)$

where $u_0,u_1\in C$ and $\{\lambda_n\}\subset[0,1]$. Under some conditions on $\{\lambda_n\}$, he established a strong convergence theorem for (1.5) when $u_0=0$. Later, Reich [9] extended the Halpern iteration (1.5) to uniformly smooth Banach spaces.

In 1974, by modifying the Mann iteration, Ishikawa [10] introduced the Ishikawa iteration process as follows:

$\begin{cases}v_n=(1-\lambda_n)u_n+\lambda_nTu_n,\\ u_{n+1}=(1-\delta_n)u_n+\delta_nTv_n,\end{cases} \quad n\geq1, \quad (1.6)$

where $u_1\in H$ and $\{\lambda_n\},\{\delta_n\}\subset[0,1]$.

In 2000, Moudafi [11] introduced a viscosity approximation method for a nonexpansive mapping, defined as

$u_{n+1}=\lambda_nf(u_n)+(1-\lambda_n)Tu_n, \quad n\geq1, \quad (1.7)$

where $u_1\in H$, $\{\lambda_n\}\subset[0,1]$, and $f$ is a contraction mapping. He proved that, under certain conditions, $\{u_n\}$ generated by (1.7) converges strongly to $u^*\in F(T)$.

By modifying the Ishikawa iteration, Agarwal et al. [12] presented the S-iteration process as follows:

$\begin{cases}v_n=(1-\lambda_n)u_n+\lambda_nTu_n,\\ u_{n+1}=(1-\delta_n)Tu_n+\delta_nTv_n,\end{cases} \quad n\geq1, \quad (1.8)$

where $\{\lambda_n\},\{\delta_n\}\subset[0,1]$ and $u_1$ is arbitrarily chosen. Furthermore, they demonstrated that the convergence behavior of the S-iteration is better than that of the Mann and Ishikawa iterations.

Now, we give some background on iterative methods for finding a common fixed point of a countable family of nonexpansive mappings $\{T_n\}$.

Aoyama et al. [13] demonstrated a Halpern-type iteration

$u_{n+1}=\lambda_nu+(1-\lambda_n)T_nu_n, \quad n\geq1, \quad (1.9)$

where $\{\lambda_n\}\subset[0,1]$ and $u_1,u\in C$ are arbitrarily chosen. Further, they showed that, under some conditions on $\{\lambda_n\}$, $u_n\to u^*\in\bigcap_{n=1}^{\infty}F(T_n)$.

Thereafter, Takahashi [14] demonstrated the iteration process

$u_{n+1}=\lambda_nf(u_n)+(1-\lambda_n)T_nu_n, \quad n\geq1, \quad (1.10)$

where $\{\lambda_n\}\subset[0,1]$, and established a strong convergence theorem for (1.10) under some constraints on $\{\lambda_n\}$.

In 2010, Klin-eam and Suantai [15] introduced the following algorithm:

$\begin{cases}v_n=\lambda_nf(u_n)+(1-\lambda_n)T_nu_n,\\ u_{n+1}=(1-\delta_n)v_n+\delta_nT_nv_n,\end{cases} \quad n\geq1, \quad (1.11)$

where $\{\lambda_n\},\{\delta_n\}\subset[0,1]$ and $u_1\in C$, and showed that, under certain conditions, $\{u_n\}$ generated by (1.11) converges strongly to a common fixed point of $\{T_n\}$.

Polyak [16] developed an inertial methodology for improving the convergence behavior of iterative methods. Since then, inertial techniques have frequently been employed to accelerate the convergence of various methods, such as the fast iterative shrinkage-thresholding algorithm (FISTA), defined as follows:

$\begin{cases}v_n=Tu_n,\\ t_{n+1}=\dfrac{1+\sqrt{1+4t_n^2}}{2},\\ \theta_n=\dfrac{t_n-1}{t_{n+1}},\\ u_{n+1}=v_n+\theta_n(v_n-v_{n-1}),\end{cases} \quad n\geq1, \quad (1.12)$

where $u_1=v_0\in\mathbb{R}^m$, $t_1=1$, and $T=\mathrm{prox}_{\lambda g}(I-\lambda\nabla f)$ for $\lambda>0$. FISTA was introduced by Beck and Teboulle [17], who applied it to image restoration problems and showed that its performance was better than that of the existing methods in the literature.
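To make the update rule (1.12) concrete, the following is a minimal Python sketch of FISTA applied to the LASSO objective $\|Hu-T\|_2^2+\lambda\|u\|_1$ that reappears in Section 4; the helper names (`soft_threshold`, `fista`) are ours, not from [17].

```python
import numpy as np

def soft_threshold(z, kappa):
    # Proximity operator of kappa*||.||_1 (componentwise soft-thresholding).
    return np.sign(z) * np.maximum(np.abs(z) - kappa, 0.0)

def fista(H, T, lam, n_iter=500):
    # FISTA (1.12) for min ||Hu - T||_2^2 + lam*||u||_1.
    L = 2 * np.linalg.norm(H.T @ H, 2)   # Lipschitz constant of the gradient
    step = 1.0 / L
    u = np.zeros((H.shape[1], T.shape[1]))
    v_prev, t = u.copy(), 1.0
    for _ in range(n_iter):
        grad = 2 * H.T @ (H @ u - T)                     # gradient of ||Hu - T||_2^2
        v = soft_threshold(u - step * grad, step * lam)  # v_n = T u_n (forward-backward step)
        t_next = (1 + np.sqrt(1 + 4 * t * t)) / 2
        u = v + ((t - 1) / t_next) * (v - v_prev)        # inertial extrapolation
        v_prev, t = v, t_next
    return v_prev
```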

A new accelerated viscosity algorithm (NAVA) was proposed by Puangpee and Suantai [18] for finding a common fixed point of $\{T_n\}$. It was defined as follows:

$\begin{cases}v_n=u_n+\theta_n(u_n-u_{n-1}),\\ w_n=(1-\sigma_n)v_n+\sigma_nT_nv_n,\\ u_{n+1}=\lambda_nf(u_n)+\delta_nT_nv_n+\gamma_nT_nw_n,\end{cases} \quad n\geq1, \quad (1.13)$

where $u_0,u_1\in H$, and $\{\sigma_n\},\{\lambda_n\},\{\delta_n\},\{\gamma_n\}\subset(0,1)$. Moreover, they obtained a strong convergence theorem for (1.13) under certain control conditions.

Polyak [19] also highlighted how multi-step inertial methods can accelerate optimization approaches, although neither the convergence nor the rate of such multi-step inertial methods is proven in [19].

After that, Q. L. Dong et al. [20] presented the general inertial Mann algorithm as follows:

$\begin{cases}v_n=u_n+\theta_n(u_n-u_{n-1}),\\ w_n=u_n+\zeta_n(u_n-u_{n-1}),\\ u_{n+1}=(1-\gamma_n)v_n+\gamma_nT(w_n),\end{cases} \quad (1.14)$

for each $n\geq1$, where $\{\theta_n\}\subset[0,\theta]$ and $\{\zeta_n\}\subset[0,\zeta]$ with $\theta_1=\zeta_1=0$ and $\theta,\zeta\in[0,1)$.

From here on, we describe some direct methods for solving problem (1.1), namely, the Bilevel Gradient Sequential Averaging Method (BiG-SAM) and the inertial Bilevel Gradient Sequential Averaging Method (iBiG-SAM).

In 2017, Sabach and Shtern [21] presented the BiG-SAM process (Algorithm 1) as follows:

Algorithm 1 BiG-SAM
  Input: $u_1\in\mathbb{R}^m$, $\lambda_n\in(0,1)$, $\iota\in(0,1/L_\varphi)$, and $s\in(0,2/(L_\phi+\sigma))$.
  For $n\geq1$:
  Compute:

$\begin{cases}v_n=\mathrm{prox}_{\iota\psi}\big(u_n-\iota\nabla\varphi(u_n)\big),\\ u_{n+1}=\lambda_n\big(u_n-s\nabla\phi(u_n)\big)+(1-\lambda_n)v_n.\end{cases}$

They showed that $u_n\to u^*$, where $u^*$ is a solution of (1.1) and (1.2).
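As an illustration, here is a minimal Python sketch of Algorithm 1 for the LASSO inner problem and the outer objective $\phi(u)=\frac{1}{2}\|u\|_2^2$ used later in Section 4 (so $\nabla\phi(u)=u$ and $L_\phi=\sigma=1$); it reuses `soft_threshold` from the FISTA sketch above, and the step-size values are our illustrative choices within the stated ranges.

```python
def big_sam(H, T, lam, s=0.01, n_iter=500):
    # BiG-SAM (Algorithm 1): inner min ||Hu - T||^2 + lam*||u||_1,
    # outer min 0.5*||u||^2 (so grad phi(u) = u).
    L_grad = 2 * np.linalg.norm(H.T @ H, 2)   # Lipschitz constant of grad varphi
    iota = 1.0 / L_grad
    u = np.zeros((H.shape[1], T.shape[1]))
    for n in range(1, n_iter + 1):
        lam_n = 1.0 / n                        # lambda_n as in the experiments of Section 4
        v = soft_threshold(u - iota * 2 * H.T @ (H @ u - T), iota * lam)
        u = lam_n * (u - s * u) + (1 - lam_n) * v   # average outer gradient step with v_n
    return u
```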

Later, Shehu et al. [22] introduced iBiG-SAM (Algorithm 2) by combining an inertial technique with BiG-SAM as follows:

Algorithm 2 iBiG-SAM
  Input: $u_0,u_1\in\mathbb{R}^m$, $\alpha\geq3$, $\lambda_n\in(0,1)$, $\iota\in(0,2/L_\varphi)$, and $s\in(0,2/(L_\phi+\sigma)]$ such that $\{\lambda_n\}$ and $\{\epsilon_n\}$ satisfy Assumption 1.1.
  For $n\geq1$:
  Choose $\theta_n\in[0,\bar{\theta}_n]$ with $\bar{\theta}_n$ defined by

$\bar{\theta}_n:=\begin{cases}\min\left\{\dfrac{n-1}{n+\alpha-1},\dfrac{\epsilon_n}{\|u_n-u_{n-1}\|}\right\}&\text{if }u_n\neq u_{n-1},\\[2mm] \dfrac{n-1}{n+\alpha-1}&\text{otherwise}.\end{cases}$

  Compute:

$\begin{cases}v_n=u_n+\theta_n(u_n-u_{n-1}),\\ t_n=\mathrm{prox}_{\iota\psi}\big(v_n-\iota\nabla\varphi(v_n)\big),\\ w_n=v_n-s\nabla\phi(v_n),\\ u_{n+1}=\lambda_nw_n+(1-\lambda_n)t_n.\end{cases}$

They proved a strong convergence theorem for Algorithm 2 under Assumption 1.1, stated as follows:

Assumption 1.1. Suppose $\{\lambda_n\}_{n=1}^{\infty}\subset(0,1)$ and $\{\epsilon_n\}_{n=1}^{\infty}$ are positive sequences that satisfy the following conditions:

(1) $\lim_{n\to\infty}\lambda_n=0$ and $\sum_{n=1}^{\infty}\lambda_n=\infty$;

(2) $\epsilon_n=o(\lambda_n)$, i.e., $\lim_{n\to\infty}(\epsilon_n/\lambda_n)=0$.
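A matching sketch of Algorithm 2 in the same setting; here $\lambda_n=1/n$ and $\epsilon_n=1/n^2$ are our illustrative choices satisfying Assumption 1.1, and `soft_threshold` is reused from the FISTA sketch above.

```python
def ibig_sam(H, T, lam, s=0.01, alpha=3, n_iter=500):
    # iBiG-SAM (Algorithm 2) with outer phi(u) = 0.5*||u||^2.
    L_grad = 2 * np.linalg.norm(H.T @ H, 2)
    iota = 1.0 / L_grad
    u_prev = np.zeros((H.shape[1], T.shape[1]))
    u = u_prev.copy()
    for n in range(1, n_iter + 1):
        lam_n, eps_n = 1.0 / n, 1.0 / n**2     # satisfy Assumption 1.1 (our choice)
        d = np.linalg.norm(u - u_prev)
        theta_bar = (n - 1) / (n + alpha - 1)
        theta = min(theta_bar, eps_n / d) if d > 0 else theta_bar
        v = u + theta * (u - u_prev)           # one-step inertial extrapolation
        t = soft_threshold(v - iota * 2 * H.T @ (H @ v - T), iota * lam)
        w = v - s * v                          # w_n = v_n - s*grad phi(v_n)
        u_prev, u = u, lam_n * w + (1 - lam_n) * t
    return u
```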

    Motivated by ongoing research in this area, we are interested in introducing a new accelerated algorithm for solving convex bilevel optimization problems and applying it to solve data classification problems.

    The following describes the way this paper is organized: Section 2 contains some fundamental definitions and helpful lemmas. The main results of this work are presented in Section 3. In this part, we provide a new accelerated algorithm for solving convex bilevel optimization problems and prove its strong convergence theorem. In Section 4, we also use our main finding to solve data classification problems. Finally, Section 5 contains a conclusion of our work.

Throughout this paper, let $C$ be a nonempty closed convex subset of a real Hilbert space $H$, and let $T:C\to C$ be a mapping. Let the strong and weak convergence of $\{u_n\}$ to $u\in H$ be denoted by $u_n\to u$ and $u_n\rightharpoonup u$, respectively. A point $u\in C$ is said to be a fixed point of $T$ if $Tu=u$, and the set of all fixed points of $T$ is denoted by $F(T)$.

A set $C$ is said to be convex if $\alpha u+(1-\alpha)v\in C$ for all $u,v\in C$ and $\alpha\in[0,1]$.

Definition 2.1. Let $f:H\to\bar{\mathbb{R}}$. Then, the function $f$ is convex on $C$ if

$f(\lambda u+(1-\lambda)v)\leq\lambda f(u)+(1-\lambda)f(v), \quad \forall u,v\in C \text{ and } \lambda\in(0,1).$

Definition 2.2. A function $f:H\to\mathbb{R}$ is strongly convex with constant $\sigma>0$ if, for any $u,v\in H$ and $\lambda\in[0,1]$,

$f(\lambda u+(1-\lambda)v)\leq\lambda f(u)+(1-\lambda)f(v)-\frac{\sigma}{2}\lambda(1-\lambda)\|u-v\|^2.$

Definition 2.3. A scalar-valued function $f:\mathbb{R}^m\to\mathbb{R}$ is differentiable at $\bar{u}$ if there exists $f'(\bar{u})\in\mathbb{R}^m$, called the derivative of $f$ at $\bar{u}$, such that

$\lim_{h\to0}\frac{f(\bar{u}+h)-f(\bar{u})-\langle f'(\bar{u}),h\rangle}{\|h\|}=0.$

A function $f$ is differentiable if it is differentiable at every $u\in\mathbb{R}^m$.

Definition 2.4. Let $f:\mathbb{R}^m\to\mathbb{R}$ be convex differentiable. The gradient of $f$ at $u$, denoted by $\nabla f(u)$, is defined by

$\nabla f(u):=\left[\frac{\partial f(u)}{\partial u_1},\ldots,\frac{\partial f(u)}{\partial u_m}\right]^{\mathsf{T}}.$

    Hereafter, we will recall some important definitions, lemmas, and propositions that will be used to prove our main results.

Definition 2.5. If there exists $\tau\geq0$ such that

$\|Tu-Tv\|\leq\tau\|u-v\|, \quad \forall u,v\in C,$

then $T:C\to C$ is said to be Lipschitzian.

In the above inequality, if $0\leq\tau<1$, $T$ is called a contraction, and if $\tau=1$, $T$ is called nonexpansive. It is known that $F(T)$ is closed and convex if $T$ is nonexpansive.

Definition 2.6. Let $u\in H$. An element $u^*\in C$ is said to be a metric projection of $u$ on $C$ if

$\|u-u^*\|\leq\|v-u\|, \quad \forall v\in C,$

and $u^*$ is denoted by $P_Cu$.

The mapping $P_C$ is called the metric projection of $H$ onto $C$, and it is well known that $P_C$ is nonexpansive. Moreover,

$\langle u-P_Cu,v-P_Cu\rangle\leq0 \quad (2.1)$

holds for all $u\in H$ and $v\in C$. More information and properties of $P_C$ can be found in [23].

For finding a common fixed point of a family of nonexpansive mappings $\{T_n\}$, we need some important conditions, one of which is the NST-condition introduced by Nakajo et al. [24].

Let $\{T_n\}$ and $\mathcal{T}$ be two families of nonexpansive mappings of $H$ into itself with $\emptyset\neq F(\mathcal{T})\subset\bigcap_{n=1}^{\infty}F(T_n)$, where $F(\mathcal{T})$ is the set of all common fixed points of the mappings $T\in\mathcal{T}$. We say that $\{T_n\}$ satisfies NST-condition (I) with $\mathcal{T}$ if, for each bounded sequence $\{u_n\}$,

$\lim_{n\to\infty}\|u_n-T_nu_n\|=0 \implies \lim_{n\to\infty}\|u_n-Tu_n\|=0, \quad \forall T\in\mathcal{T}.$

In particular, if $\mathcal{T}=\{T\}$, then $\{T_n\}$ is said to satisfy NST-condition (I) with $T$.

Definition 2.7. Let $\psi\in\Gamma_0(H)$ and $t>0$. The proximity operator of $t\psi$ at $v\in H$, denoted by $\mathrm{prox}_{t\psi}(v)$, is defined as

$\mathrm{prox}_{t\psi}(v)=\operatorname*{argmin}_{u\in H}\left\{\psi(u)+\frac{\|u-v\|^2}{2t}\right\}.$

The forward-backward operator $T$ of $\varphi$ and $\psi$ with respect to $t$ is defined by $T:=\mathrm{prox}_{t\psi}(I-t\nabla\varphi)$. Furthermore, if $t\in(0,2/L_\varphi)$, where $L_\varphi$ is the Lipschitz constant of $\nabla\varphi$, it is well known that $T$ is nonexpansive.
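For example, for $\psi=\lambda\|\cdot\|_1$ on $\mathbb{R}^m$ with $\lambda>0$, which is the regularizer used in Section 4, the proximity operator has the componentwise closed form of soft-thresholding:

$\big(\mathrm{prox}_{t\lambda\|\cdot\|_1}(v)\big)_i=\mathrm{sign}(v_i)\max\{|v_i|-t\lambda,0\}, \quad i=1,\ldots,m.$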

    The following lemma is required to prove our main results.

Lemma 2.8. [25,27] The following hold for all $u,w\in H$ and any real number $\lambda\in[0,1]$:

(1) $\|\lambda u+(1-\lambda)w\|^2=\lambda\|u\|^2+(1-\lambda)\|w\|^2-\lambda(1-\lambda)\|u-w\|^2$;

(2) $\|u\pm w\|^2=\|u\|^2\pm2\langle u,w\rangle+\|w\|^2$;

(3) $\|u+w\|^2\leq\|u\|^2+2\langle w,u+w\rangle$.

The following equality holds for all $u,v,w\in H$ by utilizing Lemma 2.8 (1):

$\|\alpha u+\beta v+\gamma w\|^2=\alpha\|u\|^2+\beta\|v\|^2+\gamma\|w\|^2-\alpha\beta\|u-v\|^2-\beta\gamma\|v-w\|^2-\alpha\gamma\|u-w\|^2, \quad (2.2)$

where $\alpha,\beta,\gamma\in[0,1]$ with $\alpha+\beta+\gamma=1$.

Lemma 2.9. [26] Let $\psi\in\Gamma_0(H)$, and let $\varphi:H\to\mathbb{R}$ be convex differentiable such that $\nabla\varphi$ is $L_\varphi$-Lipschitzian with $L_\varphi>0$. Let $\{c_n\}\subset(0,2/L_\varphi)$ and $c\in(0,2/L_\varphi)$ be such that $c_n\to c$. Define $T_n:=\mathrm{prox}_{c_n\psi}(I-c_n\nabla\varphi)$; then $\{T_n\}$ satisfies NST-condition (I) with $T$, where $T:=\mathrm{prox}_{c\psi}(I-c\nabla\varphi)$.

Lemma 2.10. [18] Let $T$ be a nonexpansive mapping, and let $\{T_n\}$ be a family of nonexpansive mappings such that $\emptyset\neq F(T)\subset\bigcap_{n=1}^{\infty}F(T_n)$. If $\{T_n\}$ satisfies NST-condition (I) with $T$, then, for any subsequence $\{n_k\}$ of $\{n\}$, $\{T_{n_k}\}$ also satisfies NST-condition (I) with $T$.

Proposition 2.11. [21] Let $\phi$ be a strongly convex differentiable function from $\mathbb{R}^m$ into $\mathbb{R}$ with parameter $\sigma>0$ such that $\nabla\phi$ is $L_\phi$-Lipschitzian. Define $T_s:=I-s\nabla\phi$, where $I$ is the identity mapping. Then, $T_s$ is a contraction for all $s\leq\frac{2}{L_\phi+\sigma}$, that is,

$\|u-s\nabla\phi(u)-(v-s\nabla\phi(v))\|\leq\sqrt{1-\frac{2s\sigma L_\phi}{\sigma+L_\phi}}\,\|u-v\|, \quad \forall u,v\in\mathbb{R}^m.$

Lemma 2.12. [28] Let $T:H\to H$ be a nonexpansive mapping with $F(T)\neq\emptyset$. Then, $I-T$ is demiclosed at zero, that is,

$u_n-Tu_n\to0 \implies u\in F(T)$

for any sequence $\{u_n\}\subset H$ such that $u_n\rightharpoonup u\in H$.

Lemma 2.13. [29,30] Let $\{p_n\}$ and $\{r_n\}$ be sequences of nonnegative real numbers, $\{\alpha_n\}$ a sequence in $[0,1]$, and $\{q_n\}$ a sequence of real numbers such that

$p_{n+1}\leq(1-\alpha_n)p_n+\alpha_nq_n+r_n$

for all $n\in\mathbb{N}$. If the following conditions hold:

(1) $\sum_{n=1}^{\infty}\alpha_n=\infty$;

(2) $\sum_{n=1}^{\infty}r_n<\infty$;

(3) $\limsup_{n\to\infty}q_n\leq0$;

then $\lim_{n\to\infty}p_n=0$.

Lemma 2.14. [31] Let $\{\vartheta_n\}$ be a sequence of real numbers that does not decrease at infinity, in the sense that there exists a subsequence $\{\vartheta_{n_k}\}$ such that $\vartheta_{n_k}<\vartheta_{n_k+1}$ for all $k\in\mathbb{N}$. Define the sequence $\{\pi(n)\}_{n\geq n_0}$ by

$\pi(n):=\max\{j\leq n:\vartheta_j<\vartheta_{j+1}\},$

where $n_0\in\mathbb{N}$ is such that $\{j\leq n_0:\vartheta_j<\vartheta_{j+1}\}\neq\emptyset$. Then, the following hold:

(1) $\pi(n_0)\leq\pi(n_0+1)\leq\cdots$ and $\pi(n)\to\infty$;

(2) $\vartheta_{\pi(n)}\leq\vartheta_{\pi(n)+1}$ and $\vartheta_n\leq\vartheta_{\pi(n)+1}$ for all $n\geq n_0$.

In this section, we propose a new accelerated algorithm for finding a common fixed point of a family of nonexpansive mappings in $H$ by combining the two-step inertial technique with the viscosity approximation method, and we establish a strong convergence theorem for it under suitable conditions.

    To do this, we start by introducing a new two-step inertial algorithm for estimating a solution for a common fixed point problem (Algorithm 3).

Algorithm 3 Two-step Inertial and Viscosity Algorithm
  Initialize: Take $u_{-1},u_0,u_1\in H$. Let $\{\mu_n\}\subset(0,\infty)$ and $\{\rho_n\}\subset(-\infty,0)$.
  For $n\geq1$:
  Set

$\theta_n=\begin{cases}\min\left\{\mu_n,\dfrac{\eta_n\lambda_n}{\|u_n-u_{n-1}\|}\right\}&\text{if }u_n\neq u_{n-1};\\[2mm] \mu_n&\text{otherwise},\end{cases} \qquad \zeta_n=\begin{cases}\max\left\{\rho_n,\dfrac{-\eta_n\lambda_n}{\|u_{n-1}-u_{n-2}\|}\right\}&\text{if }u_{n-1}\neq u_{n-2};\\[2mm] \rho_n&\text{otherwise}.\end{cases}$

  Compute

$\begin{cases}v_n=u_n+\theta_n(u_n-u_{n-1})+\zeta_n(u_{n-1}-u_{n-2}),\\ w_n=\iota_nf(v_n)+(1-\iota_n)T_nv_n,\\ u_{n+1}=(1-\lambda_n-\delta_n)v_n+\lambda_nT_nw_n+\delta_nT_nv_n.\end{cases}$
Throughout this section, let $\{T_n\}$ be a family of nonexpansive mappings of $H$ into itself, let $f$ be a $\tau$-contraction mapping on $H$ with $\tau\in(0,1)$, $\{\eta_n\}\subset(0,\infty)$, and $\{\lambda_n\},\{\delta_n\},\{\iota_n\}\subset(0,1)$.
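A minimal Python sketch of Algorithm 3, treating the mappings and parameter sequences as user-supplied callables (all names are ours; `T(n, x)` plays the role of $T_n$, and `eta`, `lam`, `delta`, `iota`, `mu`, `rho` map $n$ to the corresponding parameter value):

```python
import numpy as np

def two_step_inertial_viscosity(T, f, u_m1, u0, u1,
                                eta, lam, delta, iota, mu, rho, n_iter=500):
    # Algorithm 3: two-step inertial extrapolation combined with a viscosity step.
    u_pp, u_p, u = u_m1, u0, u1
    for n in range(1, n_iter + 1):
        d1 = np.linalg.norm(u - u_p)
        d2 = np.linalg.norm(u_p - u_pp)
        theta = min(mu(n), eta(n) * lam(n) / d1) if d1 > 0 else mu(n)
        zeta = max(rho(n), -eta(n) * lam(n) / d2) if d2 > 0 else rho(n)
        v = u + theta * (u - u_p) + zeta * (u_p - u_pp)   # two-step inertia
        w = iota(n) * f(v) + (1 - iota(n)) * T(n, v)      # viscosity step
        u_next = (1 - lam(n) - delta(n)) * v + lam(n) * T(n, w) + delta(n) * T(n, v)
        u_pp, u_p, u = u_p, u, u_next
    return u
```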

    Next, we prove a strong convergence theorem of Algorithm 3.

Theorem 3.1. Let $T:H\to H$ be a nonexpansive mapping with $F(T)\neq\emptyset$. Assume that $F(T)\subset\bigcap_{n=1}^{\infty}F(T_n)$ and that $\{T_n\}$ satisfies NST-condition (I) with $T$. Let $\{u_n\}$ be a sequence generated by Algorithm 3 such that the following additional conditions hold:

(1) $\lim_{n\to\infty}\eta_n=0$;

(2) $\lim_{n\to\infty}\iota_n=0$ and $\sum_{n=1}^{\infty}\iota_n=\infty$;

(3) $0<a<\lambda_n$ for some $a\in\mathbb{R}$;

(4) $0<b<\delta_n<\lambda_n+\delta_n<c<1$ for some $b,c\in\mathbb{R}$.

Then, the sequence $\{u_n\}$ converges strongly to $u^*\in F(T)$, where $u^*=P_{F(T)}f(u^*)$.

Proof. Let $u^*\in F(T)$ be such that $u^*=P_{F(T)}f(u^*)$. First, we show that $\{u_n\}$ is bounded. According to the definitions of $v_n$ and $w_n$, we obtain

$\|v_n-u^*\|=\|u_n+\theta_n(u_n-u_{n-1})+\zeta_n(u_{n-1}-u_{n-2})-u^*\|\leq\|u_n-u^*\|+\theta_n\|u_n-u_{n-1}\|+|\zeta_n|\|u_{n-1}-u_{n-2}\|, \quad (3.1)$

    and

$\begin{aligned}\|w_n-u^*\|&=\|\iota_nf(v_n)+(1-\iota_n)T_nv_n-u^*\|\\ &\leq\iota_n\|f(v_n)-f(u^*)\|+\iota_n\|f(u^*)-u^*\|+(1-\iota_n)\|T_nv_n-u^*\|\\ &\leq\iota_n\tau\|v_n-u^*\|+\iota_n\|f(u^*)-u^*\|+(1-\iota_n)\|v_n-u^*\|\\ &=(1-(1-\tau)\iota_n)\|v_n-u^*\|+\iota_n\|f(u^*)-u^*\|\\ &\leq\|v_n-u^*\|+\iota_n\|f(u^*)-u^*\|.\end{aligned} \quad (3.2)$

    We also know from (3.1) and (3.2) that

$\begin{aligned}\|u_{n+1}-u^*\|&=\|\lambda_nT_nw_n+\delta_nT_nv_n+(1-\lambda_n-\delta_n)v_n-u^*\|\\ &\leq\lambda_n\|T_nw_n-u^*\|+\delta_n\|T_nv_n-u^*\|+(1-\lambda_n-\delta_n)\|v_n-u^*\|\\ &\leq\lambda_n\|w_n-u^*\|+\delta_n\|v_n-u^*\|+(1-\lambda_n-\delta_n)\|v_n-u^*\|\\ &=\lambda_n\|w_n-u^*\|+(1-\lambda_n)\|v_n-u^*\|\\ &\leq\lambda_n\big((1-(1-\tau)\iota_n)\|v_n-u^*\|+\iota_n\|f(u^*)-u^*\|\big)+(1-\lambda_n)\|v_n-u^*\|\\ &=(1-(1-\tau)\lambda_n\iota_n)\|v_n-u^*\|+\lambda_n\iota_n\|f(u^*)-u^*\|\\ &\leq(1-(1-\tau)\lambda_n\iota_n)\|u_n-u^*\|+(1-(1-\tau)\lambda_n\iota_n)\big[\theta_n\|u_n-u_{n-1}\|+|\zeta_n|\|u_{n-1}-u_{n-2}\|\big]+\lambda_n\iota_n\|f(u^*)-u^*\|\\ &=(1-(1-\tau)\lambda_n\iota_n)\|u_n-u^*\|+(1-\tau)\lambda_n\iota_n\frac{1-(1-\tau)\lambda_n\iota_n}{(1-\tau)\iota_n}\frac{\theta_n}{\lambda_n}\|u_n-u_{n-1}\|\\ &\quad+(1-\tau)\lambda_n\iota_n\left[\frac{1-(1-\tau)\lambda_n\iota_n}{(1-\tau)\iota_n}\frac{|\zeta_n|}{\lambda_n}\|u_{n-1}-u_{n-2}\|+\frac{\|f(u^*)-u^*\|}{1-\tau}\right].\end{aligned} \quad (3.3)$

In accordance with Assumption (1) and the definitions of $\theta_n$ and $\zeta_n$, we have

$\frac{\theta_n}{\lambda_n}\|u_n-u_{n-1}\|\to0 \quad\text{and}\quad \frac{|\zeta_n|}{\lambda_n}\|u_{n-1}-u_{n-2}\|\to0 \quad\text{as } n\to\infty.$

Then, positive constants $M_1,M_2$ exist such that

$\frac{\theta_n}{\lambda_n}\|u_n-u_{n-1}\|\leq M_1 \quad\text{and}\quad \frac{|\zeta_n|}{\lambda_n}\|u_{n-1}-u_{n-2}\|\leq M_2.$

From (3.3), we have

$\begin{aligned}\|u_{n+1}-u^*\|&\leq(1-(1-\tau)\lambda_n\iota_n)\|u_n-u^*\|+(1-\tau)\lambda_n\iota_n\frac{\xi}{1-\tau}\frac{\theta_n}{\lambda_n}\|u_n-u_{n-1}\|\\ &\quad+(1-\tau)\lambda_n\iota_n\left[\frac{\xi}{1-\tau}\frac{|\zeta_n|}{\lambda_n}\|u_{n-1}-u_{n-2}\|+\frac{\|f(u^*)-u^*\|}{1-\tau}\right]\\ &\leq(1-(1-\tau)\lambda_n\iota_n)\|u_n-u^*\|+(1-\tau)\lambda_n\iota_n\left[\frac{\xi(M_1+M_2)+\|f(u^*)-u^*\|}{1-\tau}\right]\\ &\leq\max\left\{\|u_n-u^*\|,\ \frac{\xi(M_1+M_2)+\|f(u^*)-u^*\|}{1-\tau}\right\}\\ &\;\;\vdots\\ &\leq\max\left\{\|u_1-u^*\|,\ \frac{\xi(M_1+M_2)+\|f(u^*)-u^*\|}{1-\tau}\right\},\end{aligned}$

where $\xi=\sup_n\left\{\frac{1-(1-\tau)\lambda_n\iota_n}{\iota_n}\right\}$. As a result, $\{u_n\}$ is bounded. Moreover, $\{v_n\}$, $\{w_n\}$, $\{f(u_n)\}$, and $\{T_nv_n\}$ are all bounded.

Using Lemma 2.8 (2), we also have

$\begin{aligned}\|v_n-u^*\|^2&=\|u_n+\theta_n(u_n-u_{n-1})+\zeta_n(u_{n-1}-u_{n-2})-u^*\|^2\\ &=\|u_n-u^*\|^2+2\theta_n\langle u_n-u^*,u_n-u_{n-1}\rangle+2\zeta_n\langle u_n-u^*,u_{n-1}-u_{n-2}\rangle+\|\theta_n(u_n-u_{n-1})+\zeta_n(u_{n-1}-u_{n-2})\|^2\\ &=\|u_n-u^*\|^2+2\theta_n\langle u_n-u^*,u_n-u_{n-1}\rangle+2\zeta_n\langle u_n-u^*,u_{n-1}-u_{n-2}\rangle+\theta_n^2\|u_n-u_{n-1}\|^2\\ &\quad+2\theta_n\zeta_n\langle u_n-u_{n-1},u_{n-1}-u_{n-2}\rangle+\zeta_n^2\|u_{n-1}-u_{n-2}\|^2\\ &\leq\|u_n-u^*\|^2+2\theta_n\|u_n-u^*\|\|u_n-u_{n-1}\|+2|\zeta_n|\|u_n-u^*\|\|u_{n-1}-u_{n-2}\|+\theta_n^2\|u_n-u_{n-1}\|^2\\ &\quad+2\theta_n|\zeta_n|\|u_n-u_{n-1}\|\|u_{n-1}-u_{n-2}\|+\zeta_n^2\|u_{n-1}-u_{n-2}\|^2.\end{aligned} \quad (3.4)$

Using Lemma 2.8 (3) and (3.4), we have

$\begin{aligned}\|u_{n+1}-u^*\|^2&=\|\lambda_nT_nw_n+\delta_nT_nv_n+(1-\lambda_n-\delta_n)v_n-u^*\|^2\\ &\leq\lambda_n\|T_nw_n-u^*\|^2+\delta_n\|T_nv_n-u^*\|^2+(1-\lambda_n-\delta_n)\|v_n-u^*\|^2\\ &\leq\lambda_n\|w_n-u^*\|^2+\delta_n\|v_n-u^*\|^2+(1-\lambda_n-\delta_n)\|v_n-u^*\|^2\\ &=\lambda_n\|\iota_nf(v_n)+(1-\iota_n)T_nv_n-u^*\|^2+(1-\lambda_n)\|v_n-u^*\|^2\\ &\leq\lambda_n\|\iota_n(f(v_n)-f(u^*))+(1-\iota_n)(T_nv_n-u^*)\|^2+2\lambda_n\iota_n\langle f(u^*)-u^*,w_n-u^*\rangle+(1-\lambda_n)\|v_n-u^*\|^2\\ &\leq\lambda_n\big[\iota_n\|f(v_n)-f(u^*)\|^2+(1-\iota_n)\|T_nv_n-u^*\|^2\big]+2\lambda_n\iota_n\langle f(u^*)-u^*,w_n-u^*\rangle+(1-\lambda_n)\|v_n-u^*\|^2\\ &\leq\lambda_n\iota_n\tau\|v_n-u^*\|^2+\lambda_n(1-\iota_n)\|v_n-u^*\|^2+2\lambda_n\iota_n\langle f(u^*)-u^*,w_n-u^*\rangle+(1-\lambda_n)\|v_n-u^*\|^2\\ &=(1-(1-\tau)\lambda_n\iota_n)\|v_n-u^*\|^2+2\lambda_n\iota_n\langle f(u^*)-u^*,w_n-u^*\rangle\\ &\leq(1-(1-\tau)\lambda_n\iota_n)\big[\|u_n-u^*\|^2+2\theta_n\|u_n-u^*\|\|u_n-u_{n-1}\|+2|\zeta_n|\|u_n-u^*\|\|u_{n-1}-u_{n-2}\|\\ &\quad+\theta_n^2\|u_n-u_{n-1}\|^2+2\theta_n|\zeta_n|\|u_n-u_{n-1}\|\|u_{n-1}-u_{n-2}\|+\zeta_n^2\|u_{n-1}-u_{n-2}\|^2\big]+2\lambda_n\iota_n\langle f(u^*)-u^*,w_n-u^*\rangle.\end{aligned} \quad (3.5)$

Since

$\theta_n\|u_n-u_{n-1}\|=\lambda_n\cdot\frac{\theta_n}{\lambda_n}\|u_n-u_{n-1}\|\to0$

and

$|\zeta_n|\|u_{n-1}-u_{n-2}\|=\lambda_n\cdot\frac{|\zeta_n|}{\lambda_n}\|u_{n-1}-u_{n-2}\|\to0$

as $n\to\infty$, there exist positive constants $M_3,M_4$ such that

$\theta_n\|u_n-u_{n-1}\|\leq M_3, \qquad |\zeta_n|\|u_{n-1}-u_{n-2}\|\leq M_4.$

It follows from (3.5) that

$\begin{aligned}\|u_{n+1}-u^*\|^2&\leq(1-(1-\tau)\lambda_n\iota_n)\|u_n-u^*\|^2\\ &\quad+(1-(1-\tau)\lambda_n\iota_n)\theta_n\|u_n-u_{n-1}\|\big(2\|u_n-u^*\|+\theta_n\|u_n-u_{n-1}\|+2|\zeta_n|\|u_{n-1}-u_{n-2}\|\big)\\ &\quad+(1-(1-\tau)\lambda_n\iota_n)|\zeta_n|\|u_{n-1}-u_{n-2}\|\big(2\|u_n-u^*\|+|\zeta_n|\|u_{n-1}-u_{n-2}\|\big)+2\lambda_n\iota_n\langle f(u^*)-u^*,w_n-u^*\rangle\\ &\leq(1-(1-\tau)\lambda_n\iota_n)\|u_n-u^*\|^2+(1-(1-\tau)\lambda_n\iota_n)\big[5M_5\theta_n\|u_n-u_{n-1}\|+3M_5|\zeta_n|\|u_{n-1}-u_{n-2}\|\big]\\ &\quad+2\lambda_n\iota_n\langle f(u^*)-u^*,w_n-u^*\rangle\\ &\leq(1-(1-\tau)\lambda_n\iota_n)\|u_n-u^*\|^2+(1-\tau)\lambda_n\iota_n\left[\frac{5M_5\xi}{1-\tau}\frac{\theta_n}{\lambda_n}\|u_n-u_{n-1}\|+\frac{3M_5\xi}{1-\tau}\frac{|\zeta_n|}{\lambda_n}\|u_{n-1}-u_{n-2}\|\right.\\ &\quad\left.+\frac{2}{1-\tau}\langle f(u^*)-u^*,w_n-u^*\rangle\right],\end{aligned} \quad (3.6)$

where $M_5=\max\{\sup_n\|u_n-u^*\|,M_3,M_4\}$. From (3.6), we set

$p_n:=\|u_n-u^*\|^2, \qquad \alpha_n:=(1-\tau)\lambda_n\iota_n,$

and

$q_n:=\frac{5M_5\xi}{1-\tau}\frac{\theta_n}{\lambda_n}\|u_n-u_{n-1}\|+\frac{3M_5\xi}{1-\tau}\frac{|\zeta_n|}{\lambda_n}\|u_{n-1}-u_{n-2}\|+\frac{2}{1-\tau}\langle f(u^*)-u^*,w_n-u^*\rangle.$

Hence, we obtain

$p_{n+1}\leq(1-\alpha_n)p_n+\alpha_nq_n. \quad (3.7)$

    After that, we examine the following two cases:

Case 1. Assume there is an $n_0\in\mathbb{N}$ such that the sequence $\{\|u_n-u^*\|\}_{n\geq n_0}$ is nonincreasing. As a result, $\{\|u_n-u^*\|\}$ converges, since it is bounded from below by 0. By Assumptions (2) and (3), we infer that $\sum_{n=1}^{\infty}\alpha_n=\infty$. Then, in order to apply Lemma 2.13, we claim that

$\limsup_{n\to\infty}\langle f(u^*)-u^*,w_n-u^*\rangle\leq0.$

Indeed, by (3.2), we have

$\|w_n-u^*\|^2-\|v_n-u^*\|^2\leq\big(\|v_n-u^*\|+\iota_n\|f(u^*)-u^*\|\big)^2-\|v_n-u^*\|^2=2\iota_n\|v_n-u^*\|\|f(u^*)-u^*\|+\iota_n^2\|f(u^*)-u^*\|^2. \quad (3.8)$

By Lemma 2.8 (1), (3.4), and (3.8), we have

$\begin{aligned}\|u_{n+1}-u^*\|^2&=\|\lambda_nT_nw_n+\delta_nT_nv_n+(1-\lambda_n-\delta_n)v_n-u^*\|^2\\ &\leq\lambda_n\|T_nw_n-u^*\|^2+\delta_n\|T_nv_n-u^*\|^2+(1-\lambda_n-\delta_n)\|v_n-u^*\|^2-\delta_n(1-\lambda_n-\delta_n)\|v_n-T_nv_n\|^2\\ &\leq\lambda_n\|w_n-u^*\|^2+\delta_n\|v_n-u^*\|^2+(1-\lambda_n-\delta_n)\|v_n-u^*\|^2-\delta_n(1-\lambda_n-\delta_n)\|v_n-T_nv_n\|^2\\ &=\lambda_n\big[\|w_n-u^*\|^2-\|v_n-u^*\|^2\big]+\|v_n-u^*\|^2-\delta_n(1-\lambda_n-\delta_n)\|v_n-T_nv_n\|^2\\ &\leq2\lambda_n\iota_n\|v_n-u^*\|\|f(u^*)-u^*\|+\lambda_n\iota_n^2\|f(u^*)-u^*\|^2+\|u_n-u^*\|^2+2\theta_n\|u_n-u^*\|\|u_n-u_{n-1}\|\\ &\quad+2|\zeta_n|\|u_n-u^*\|\|u_{n-1}-u_{n-2}\|+\theta_n^2\|u_n-u_{n-1}\|^2+2\theta_n|\zeta_n|\|u_n-u_{n-1}\|\|u_{n-1}-u_{n-2}\|\\ &\quad+\zeta_n^2\|u_{n-1}-u_{n-2}\|^2-\delta_n(1-\lambda_n-\delta_n)\|v_n-T_nv_n\|^2.\end{aligned} \quad (3.9)$

This implies that

$\begin{aligned}\delta_n(1-\lambda_n-\delta_n)\|v_n-T_nv_n\|^2&\leq2\lambda_n\iota_n\|v_n-u^*\|\|f(u^*)-u^*\|+\lambda_n\iota_n^2\|f(u^*)-u^*\|^2+\|u_n-u^*\|^2-\|u_{n+1}-u^*\|^2\\ &\quad+\theta_n\|u_n-u_{n-1}\|\big(2\|u_n-u^*\|+\theta_n\|u_n-u_{n-1}\|+2|\zeta_n|\|u_{n-1}-u_{n-2}\|\big)\\ &\quad+|\zeta_n|\|u_{n-1}-u_{n-2}\|\big(2\|u_n-u^*\|+|\zeta_n|\|u_{n-1}-u_{n-2}\|\big).\end{aligned} \quad (3.10)$

Assumptions (2) and (4), together with the convergence of $\{\|u_n-u^*\|\}$ and the facts that $\theta_n\|u_n-u_{n-1}\|\to0$ and $|\zeta_n|\|u_{n-1}-u_{n-2}\|\to0$, imply that

$\|v_n-T_nv_n\|\to0 \text{ as } n\to\infty. \quad (3.11)$

Since $\{T_n\}$ satisfies NST-condition (I) with $T$, we obtain

$\|v_n-Tv_n\|\to0 \text{ as } n\to\infty. \quad (3.12)$

As a result of the definitions of $v_n$ and $w_n$, we have

$\|v_n-w_n\|=\|v_n-\iota_nf(v_n)-(1-\iota_n)T_nv_n\|\leq\iota_n\|f(v_n)-v_n\|+(1-\iota_n)\|T_nv_n-v_n\|. \quad (3.13)$

We can conclude from (3.11) and Assumption (2) that

$\|v_n-w_n\|\to0 \text{ as } n\to\infty. \quad (3.14)$

By the definition of $u_{n+1}$, we have

$\begin{aligned}\|u_{n+1}-v_n\|&\leq\|u_{n+1}-T_nv_n\|+\|T_nv_n-v_n\|\\ &=\|\lambda_nT_nw_n+\delta_nT_nv_n+(1-\lambda_n-\delta_n)v_n-T_nv_n\|+\|T_nv_n-v_n\|\\ &\leq\lambda_n\|T_nw_n-T_nv_n\|+(1-\lambda_n-\delta_n)\|T_nv_n-v_n\|+\|T_nv_n-v_n\|\\ &\leq\lambda_n\|w_n-v_n\|+(2-\lambda_n-\delta_n)\|T_nv_n-v_n\|,\end{aligned} \quad (3.15)$

which implies

$\|u_{n+1}-v_n\|\to0 \text{ as } n\to\infty. \quad (3.16)$

We can also conclude the following fact from the definition of $v_n$:

$\|v_n-u_n\|\leq\theta_n\|u_n-u_{n-1}\|+|\zeta_n|\|u_{n-1}-u_{n-2}\|\to0 \text{ as } n\to\infty. \quad (3.17)$

Hence,

$\|u_{n+1}-u_n\|\leq\|u_{n+1}-v_n\|+\|v_n-u_n\|\to0 \text{ as } n\to\infty. \quad (3.18)$

Set

$V=\limsup_{n\to\infty}\langle f(u^*)-u^*,w_n-u^*\rangle. \quad (3.19)$

So, there is a subsequence $\{w_{n_k}\}$ of $\{w_n\}$ such that

$V=\lim_{k\to\infty}\langle f(u^*)-u^*,w_{n_k}-u^*\rangle. \quad (3.20)$

Because $\{w_{n_k}\}$ is bounded, it has a further subsequence that converges weakly to some $w\in H$. Without losing generality, we can assume that $w_{n_k}\rightharpoonup w$ and that (3.20) holds.

We may conclude from (3.14) that $v_{n_k}\rightharpoonup w$, and we obtain that $w\in F(T)$ by using this fact, (3.12), and Lemma 2.12. Furthermore, we obtain the following by using $u^*=P_{F(T)}f(u^*)$ and (2.1):

$V=\lim_{k\to\infty}\langle f(u^*)-u^*,w_{n_k}-u^*\rangle=\langle f(u^*)-u^*,w-u^*\rangle\leq0. \quad (3.21)$

Hence,

$V=\limsup_{n\to\infty}\langle f(u^*)-u^*,w_n-u^*\rangle\leq0, \quad (3.22)$

which implies $\limsup_{n\to\infty}q_n\leq0$, by using $\frac{\theta_n}{\lambda_n}\|u_n-u_{n-1}\|\to0$ and $\frac{|\zeta_n|}{\lambda_n}\|u_{n-1}-u_{n-2}\|\to0$.

Using Lemma 2.13, we can conclude that $u_n\to u^*$.

Case 2. Assume that, for any $n_0$, the sequence $\{\|u_n-u^*\|\}_{n\geq n_0}$ is not monotonically nonincreasing. We define

$\vartheta_n:=\|u_n-u^*\|^2.$

So, there is a subsequence $\{\vartheta_{n_k}\}$ of $\{\vartheta_n\}$ such that $\vartheta_{n_k}<\vartheta_{n_k+1}$ for all $k\in\mathbb{N}$. We define $\pi:\{n:n\geq n_0\}\to\mathbb{N}$ by

$\pi(n):=\max\{j\in\mathbb{N}:j\leq n,\ \vartheta_j<\vartheta_{j+1}\}.$

For any $n\geq n_0$, we have $\vartheta_{\pi(n)}\leq\vartheta_{\pi(n)+1}$ by Lemma 2.14, that is,

$\|u_{\pi(n)}-u^*\|\leq\|u_{\pi(n)+1}-u^*\|. \quad (3.23)$

As in Case 1, by applying (3.23) we obtain

$\begin{aligned}\delta_{\pi(n)}(1-\lambda_{\pi(n)}-\delta_{\pi(n)})\|v_{\pi(n)}-T_{\pi(n)}v_{\pi(n)}\|^2&\leq2\lambda_{\pi(n)}\iota_{\pi(n)}\|v_{\pi(n)}-u^*\|\|f(u^*)-u^*\|+\lambda_{\pi(n)}\iota_{\pi(n)}^2\|f(u^*)-u^*\|^2\\ &\quad+\|u_{\pi(n)}-u^*\|^2-\|u_{\pi(n)+1}-u^*\|^2\\ &\quad+\theta_{\pi(n)}\|u_{\pi(n)}-u_{\pi(n)-1}\|\big(2\|u_{\pi(n)}-u^*\|+\theta_{\pi(n)}\|u_{\pi(n)}-u_{\pi(n)-1}\|+2|\zeta_{\pi(n)}|\|u_{\pi(n)-1}-u_{\pi(n)-2}\|\big)\\ &\quad+|\zeta_{\pi(n)}|\|u_{\pi(n)-1}-u_{\pi(n)-2}\|\big(2\|u_{\pi(n)}-u^*\|+|\zeta_{\pi(n)}|\|u_{\pi(n)-1}-u_{\pi(n)-2}\|\big)\\ &\leq2\lambda_{\pi(n)}\iota_{\pi(n)}\|v_{\pi(n)}-u^*\|\|f(u^*)-u^*\|+\lambda_{\pi(n)}\iota_{\pi(n)}^2\|f(u^*)-u^*\|^2\\ &\quad+\theta_{\pi(n)}\|u_{\pi(n)}-u_{\pi(n)-1}\|\big(2\|u_{\pi(n)}-u^*\|+\theta_{\pi(n)}\|u_{\pi(n)}-u_{\pi(n)-1}\|+2|\zeta_{\pi(n)}|\|u_{\pi(n)-1}-u_{\pi(n)-2}\|\big)\\ &\quad+|\zeta_{\pi(n)}|\|u_{\pi(n)-1}-u_{\pi(n)-2}\|\big(2\|u_{\pi(n)}-u^*\|+|\zeta_{\pi(n)}|\|u_{\pi(n)-1}-u_{\pi(n)-2}\|\big),\end{aligned} \quad (3.24)$

which implies

$\|v_{\pi(n)}-T_{\pi(n)}v_{\pi(n)}\|\to0 \text{ as } n\to\infty. \quad (3.25)$

Similar to the proof in Case 1, we get

$\|v_{\pi(n)}-w_{\pi(n)}\|\to0, \quad (3.26)$

$\|u_{\pi(n)+1}-v_{\pi(n)}\|\to0, \quad (3.27)$

and

$\|v_{\pi(n)}-u_{\pi(n)}\|\to0, \quad (3.28)$

as $n\to\infty$, and so

$\|u_{\pi(n)+1}-u_{\pi(n)}\|\to0 \text{ as } n\to\infty. \quad (3.29)$

As in Case 1, we then demonstrate that $\limsup_{n\to\infty}\langle f(u^*)-u^*,w_{\pi(n)}-u^*\rangle\leq0$. Set

$V=\limsup_{n\to\infty}\langle f(u^*)-u^*,w_{\pi(n)}-u^*\rangle. \quad (3.30)$

There exists a subsequence $\{w_{\pi(t)}\}$ of $\{w_{\pi(n)}\}$ such that $w_{\pi(t)}\rightharpoonup w\in H$ and

$V=\lim_{t\to\infty}\langle f(u^*)-u^*,w_{\pi(t)}-u^*\rangle. \quad (3.31)$

By Lemma 2.10, $\{T_{\pi(t)}\}$ satisfies NST-condition (I) with $T$. Due to inequality (3.24), $\|v_{\pi(t)}-T_{\pi(t)}v_{\pi(t)}\|\to0$, and we obtain

$\|v_{\pi(t)}-Tv_{\pi(t)}\|\to0 \text{ as } t\to\infty. \quad (3.32)$

As in Case 1, we can conclude from (3.26) that $v_{\pi(t)}\rightharpoonup w$, and hence $w\in F(T)$. Using $u^*=P_{F(T)}f(u^*)$ and (2.1), we obtain

$V=\lim_{t\to\infty}\langle f(u^*)-u^*,w_{\pi(t)}-u^*\rangle=\langle f(u^*)-u^*,w-u^*\rangle\leq0. \quad (3.33)$

Then,

$V=\limsup_{n\to\infty}\langle f(u^*)-u^*,w_{\pi(n)}-u^*\rangle\leq0. \quad (3.34)$

Since $\vartheta_{\pi(n)}\leq\vartheta_{\pi(n)+1}$, from (3.6) together with $(1-\tau)\lambda_{\pi(n)}\iota_{\pi(n)}>0$, we obtain

$\|u_{\pi(n)}-u^*\|^2\leq\frac{5M_5\xi}{1-\tau}\frac{\theta_{\pi(n)}}{\lambda_{\pi(n)}}\|u_{\pi(n)}-u_{\pi(n)-1}\|+\frac{3M_5\xi}{1-\tau}\frac{|\zeta_{\pi(n)}|}{\lambda_{\pi(n)}}\|u_{\pi(n)-1}-u_{\pi(n)-2}\|+\frac{2}{1-\tau}\langle f(u^*)-u^*,w_{\pi(n)}-u^*\rangle. \quad (3.35)$

From $\frac{\theta_{\pi(n)}}{\lambda_{\pi(n)}}\|u_{\pi(n)}-u_{\pi(n)-1}\|\to0$, $\frac{|\zeta_{\pi(n)}|}{\lambda_{\pi(n)}}\|u_{\pi(n)-1}-u_{\pi(n)-2}\|\to0$, and (3.34), we obtain

$\limsup_{n\to\infty}\|u_{\pi(n)}-u^*\|^2\leq0,$

and so $\|u_{\pi(n)}-u^*\|\to0$ as $n\to\infty$.

By (3.29), this implies that $\|u_{\pi(n)+1}-u^*\|\to0$ as $n\to\infty$. From Lemma 2.14 (2), we get $\vartheta_n\leq\vartheta_{\pi(n)+1}$, that is,

$\|u_n-u^*\|\leq\|u_{\pi(n)+1}-u^*\|\to0 \text{ as } n\to\infty.$

Therefore, $u_n\to u^*$.

For solving problem (1.1), we make the following assumptions:

Assumption 3.2. Let $\Phi$ be the set of all solutions of problem (1.1), where

(1) $\phi:\mathbb{R}^m\to\mathbb{R}$ is strongly convex with parameter $\sigma_\phi>0$;

(2) $\phi$ is a continuously differentiable function such that $\nabla\phi$ is Lipschitz continuous with constant $L_\phi$.

For solving problem (1.2), we assume:

Assumption 3.3. Let $\Lambda$ be the nonempty set of minimizers of problem (1.2), where

(1) $\varphi:\mathbb{R}^m\to\mathbb{R}$ is convex and continuously differentiable, and $\nabla\varphi$ is Lipschitz continuous with constant $L_\varphi$;

(2) $\psi\in\Gamma_0(\mathbb{R}^m)$.

    Next, we will present an algorithm (Algorithm 4) for solving problem (1.1).

Algorithm 4 Two-step Inertial Forward-Backward Algorithm
  Input: $c_n\in(0,2/L_\varphi)$, $s\in(0,2/(L_\phi+\sigma))$.
  Initialize: Take $u_{-1},u_0,u_1\in\mathbb{R}^m$. Let $\{\mu_n\}\subset(0,\infty)$ and $\{\rho_n\}\subset(-\infty,0)$.
  For $n\geq1$:
  Set

$\theta_n=\begin{cases}\min\left\{\mu_n,\dfrac{\eta_n\lambda_n}{\|u_n-u_{n-1}\|}\right\}&\text{if }u_n\neq u_{n-1};\\[2mm] \mu_n&\text{otherwise},\end{cases} \qquad \zeta_n=\begin{cases}\max\left\{\rho_n,\dfrac{-\eta_n\lambda_n}{\|u_{n-1}-u_{n-2}\|}\right\}&\text{if }u_{n-1}\neq u_{n-2};\\[2mm] \rho_n&\text{otherwise}.\end{cases}$

  Compute

$\begin{cases}v_n=u_n+\theta_n(u_n-u_{n-1})+\zeta_n(u_{n-1}-u_{n-2}),\\ w_n=\iota_n(I-s\nabla\phi)(v_n)+(1-\iota_n)\mathrm{prox}_{c_n\psi}(I-c_n\nabla\varphi)v_n,\\ u_{n+1}=(1-\lambda_n-\delta_n)v_n+\lambda_n\mathrm{prox}_{c_n\psi}(I-c_n\nabla\varphi)w_n+\delta_n\mathrm{prox}_{c_n\psi}(I-c_n\nabla\varphi)v_n.\end{cases}$

Theorem 3.4. Let $\phi$ be a function satisfying Assumption 3.2, and let $\varphi$ and $\psi$ be functions satisfying Assumption 3.3. Let $\{c_n\}\subset(0,2/L_\varphi)$ and $c\in(0,2/L_\varphi)$ be such that $c_n\to c$ as $n\to\infty$. Let $\{u_n\}$ be a sequence generated by Algorithm 4 under the same conditions as in Theorem 3.1. Then, $u_n\to u^*\in\Phi$.

Proof. Set $T_n=\mathrm{prox}_{c_n\psi}(I-c_n\nabla\varphi)$, $T=\mathrm{prox}_{c\psi}(I-c\nabla\varphi)$, and $f=I-s\nabla\phi$. We know that $T_n$ and $T$ are nonexpansive mappings, and, by Lemma 2.9, $\{T_n\}$ satisfies NST-condition (I) with $T$. According to Proposition 2.11, $f$ is a contraction with constant $\tau=\sqrt{1-\frac{2s\sigma L_\phi}{\sigma+L_\phi}}$ for $s\leq\frac{2}{L_\phi+\sigma}$. Theorem 3.1 then shows that $u_n\to u^*\in F(T)$, where $u^*=P_{F(T)}f(u^*)$. We next claim that $u^*\in\Phi$. By using (2.1), we have, for any $v\in F(T)$,

$\langle f(u^*)-u^*,v-u^*\rangle\leq0 \iff \langle(I-s\nabla\phi)(u^*)-u^*,v-u^*\rangle\leq0 \iff \langle-s\nabla\phi(u^*),v-u^*\rangle\leq0 \iff \langle\nabla\phi(u^*),v-u^*\rangle\geq0. \quad (3.36)$

Therefore, $u^*$ is a solution of problem (1.1).

    In this section, we utilize our algorithm as a machine learning algorithm for data classification of Parkinson's disease and diabetes, and compare its effectiveness with BiG-SAM and iBiG-SAM.

Let $\{(x_k,t_k)\in\mathbb{R}^n\times\mathbb{R}^m:k=1,2,\ldots,s\}$ be a training set with $s$ samples, with $x_k$ representing an input and $t_k$ representing a target. The mathematical model of single-hidden-layer feedforward neural networks (SLFNs) is given by

$o_k=\sum_{j=1}^{h}\alpha_jg(\langle\omega_j,x_k\rangle+b_j), \quad k=1,2,\ldots,s,$

where $o_k$ is an output of the extreme learning machine (ELM) [32] for SLFNs, $h$ is the number of hidden nodes, $g$ is an activation function, $b_j$ is the bias, and $\alpha_j$ and $\omega_j$ are the weight vectors connecting the $j$-th hidden node with the output and input nodes, respectively.

The hidden-layer output matrix, denoted by $H$, is given by

$H=\begin{bmatrix}g(\langle\omega_1,x_1\rangle+b_1)&\cdots&g(\langle\omega_h,x_1\rangle+b_h)\\ \vdots&\ddots&\vdots\\ g(\langle\omega_1,x_s\rangle+b_1)&\cdots&g(\langle\omega_h,x_s\rangle+b_h)\end{bmatrix}_{s\times h}.$

The target of standard SLFNs is to approximate these $s$ samples with zero error, that is, $\sum_{k=1}^{s}\|o_k-t_k\|=0$. Then, there exist $\alpha_j,\omega_j$, and $b_j$ such that

$t_k=\sum_{j=1}^{h}\alpha_jg(\langle\omega_j,x_k\rangle+b_j), \quad k=1,2,\ldots,s.$

The $s$ equations above can be written compactly as

$Hu=T, \quad (4.1)$

where $u=[\alpha_1^{\mathsf{T}},\ldots,\alpha_h^{\mathsf{T}}]^{\mathsf{T}}$ and $T=[t_1^{\mathsf{T}},\ldots,t_s^{\mathsf{T}}]^{\mathsf{T}}$.

For solving the ELM, it is necessary only to calculate a $u$ that satisfies (4.1) with random $\omega_j$ and $b_j$. If the pseudo-inverse $H^{+}$ of $H$ is available, $u=H^{+}T$ is a solution of (4.1). If $H^{+}$ cannot be computed, we can obtain a solution in terms of the least squares problem, that is,

$\min_{u}\|Hu-T\|_2^2. \quad (4.2)$
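A minimal sketch of this ELM pipeline in Python; the sigmoid activation, random weight scheme, and placeholder data shapes are our illustrative choices.

```python
import numpy as np

def elm_features(X, h, rng):
    # Random hidden layer of an ELM: H[k, j] = g(<w_j, x_k> + b_j).
    W = rng.standard_normal((X.shape[1], h))         # input-to-hidden weights w_j
    b = rng.standard_normal(h)                       # hidden biases b_j
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))        # sigmoid activation g

rng = np.random.default_rng(0)
X = rng.standard_normal((195, 22))                   # e.g., the Parkinson's data shape
T = rng.integers(0, 2, size=(195, 1)).astype(float)  # placeholder binary targets
H = elm_features(X, h=30, rng=rng)                   # hidden-layer output matrix
u = np.linalg.pinv(H) @ T                            # least squares solution u = H^+ T
```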

In machine learning, model fitness plays an essential role in training accuracy. We cannot employ an overfitted model to predict unknown data; instead, we utilize the most common regularization technique, known as the least absolute shrinkage and selection operator (LASSO). It is formulated as

$\min_{u}\|Hu-T\|_2^2+\lambda\|u\|_1, \quad (4.3)$

where $\|\cdot\|_1$ is the $l_1$-norm defined by $\|(x_1,\ldots,x_n)\|_1=\sum_{i=1}^{n}|x_i|$, and $\lambda>0$ is a regularization parameter. We may reduce problem (4.3) to problem (1.2) by setting $\varphi(u):=\|Hu-T\|_2^2$ and $\psi(u):=\lambda\|u\|_1$. For problem (1.1), we set $\phi(u):=\frac{1}{2}\|u\|_2^2$ with $L_\phi=1$ and $\sigma_\phi=1$.
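Putting the pieces together, a sketch of Algorithm 4 for this bilevel LASSO setting; it uses $\nabla\varphi(u)=2H^{\mathsf{T}}(Hu-T)$ and $\nabla\phi(u)=u$, the parameter sequences follow our reading of the experimental settings listed below, and `soft_threshold` is reused from the FISTA sketch in Section 1.

```python
def algorithm4_lasso(H, T, lam, s=0.01, alpha=3, n_iter=500):
    # Algorithm 4 specialized to: inner min ||Hu - T||^2 + lam*||u||_1,
    # outer min 0.5*||u||^2 over the inner solution set.
    L = 2 * np.linalg.norm(H.T @ H, 2)               # L_varphi = 2*lambda_max(H^T H)
    c = 1.0 / L                                      # c_n = 1/L_varphi
    fb = lambda z: soft_threshold(z - c * 2 * H.T @ (H @ z - T), c * lam)
    u_pp = np.zeros((H.shape[1], T.shape[1])); u_p = u_pp.copy(); u = u_pp.copy()
    for n in range(1, n_iter + 1):
        lam_n = 0.5 + 1 / (33 * n); iota_n = 1 / (33 * n); delta_n = 0.9 - lam_n
        eta_n = 33e20 / n; mu_n = (n - 1) / (n + alpha - 1); rho_n = -1e-4
        d1 = np.linalg.norm(u - u_p); d2 = np.linalg.norm(u_p - u_pp)
        theta = min(mu_n, eta_n * lam_n / d1) if d1 > 0 else mu_n
        zeta = max(rho_n, -eta_n * lam_n / d2) if d2 > 0 else rho_n
        v = u + theta * (u - u_p) + zeta * (u_p - u_pp)      # two-step inertia
        w = iota_n * (1 - s) * v + (1 - iota_n) * fb(v)      # f(v) = (I - s*grad phi)v = (1-s)v
        u_pp, u_p = u_p, u
        u = (1 - lam_n - delta_n) * v + lam_n * fb(w) + delta_n * fb(v)
    return u
```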

In this experiment, we aim to classify the Parkinson's disease and diabetes datasets from UCI and Kaggle, respectively.

Parkinson's disease dataset [33]. There are 195 examples in this dataset, each with 22 features. We classified two classes of data in this dataset.

Diabetes dataset [34]. There are 768 examples in this dataset, each with 8 features. We classified two classes of data in this dataset.

In this experiment, we establish the default settings by selecting the most advantageous choice of each parameter of each algorithm, in order to reach the best level of performance, as follows:

(1) For the inner level: $\nabla\varphi(u)=2H^{\mathsf{T}}(Hu-T)$ and $L_\varphi=2\lambda_{\max}(H^{\mathsf{T}}H)$, where $\lambda_{\max}(H^{\mathsf{T}}H)$ is the maximum eigenvalue of $H^{\mathsf{T}}H$.

(2) For Algorithm 1 (BiG-SAM) and Algorithm 2 (iBiG-SAM):

$\iota=\frac{1}{L_\varphi}$ and $\lambda_n=\frac{1}{n}$.

(3) For Algorithm 4 (our algorithm):

$\lambda_n=0.5+\frac{1}{33n}$, $\delta_n=0.9-\lambda_n$, $\iota_n=\frac{1}{33n}$, $c_n=\frac{1}{L_\varphi}$, $\eta_n=\frac{33\times10^{20}}{n}$, $\mu_n=\frac{n-1}{n+\alpha-1}$, $\rho_n=-0.0001$.

(4) For all algorithms:

● Regularization parameter: $\lambda=10^{-5}$.

● Hidden nodes: $h=30$.

● Number of iterations $n=500$, $\alpha=3$, and $s=0.01$.

● 10-fold cross-validation.

The following experiment uses the Parkinson's disease and diabetes datasets. We compare the effectiveness of Algorithms 1, 2, and 4 at the 500th iteration, as shown in Tables 1 and 2.

Table 1. The efficacy of each algorithm at the 500th iteration with 10-fold CV on the Parkinson's disease dataset (accuracy in %).

| | Algorithm 4 acc. train | Algorithm 4 acc. test | BiG-SAM acc. train | BiG-SAM acc. test | iBiG-SAM acc. train | iBiG-SAM acc. test |
|---|---|---|---|---|---|---|
| Fold 1 | 86.93 | 94.74 | 85.80 | 94.74 | 85.80 | 94.74 |
| Fold 2 | 86.29 | 75.00 | 86.29 | 75.00 | 86.29 | 75.00 |
| Fold 3 | 86.86 | 85.00 | 86.86 | 85.00 | 86.86 | 85.00 |
| Fold 4 | 88.57 | 85.00 | 87.43 | 85.00 | 87.43 | 85.00 |
| Fold 5 | 84.57 | 95.00 | 84.00 | 95.00 | 84.00 | 95.00 |
| Fold 6 | 86.86 | 85.00 | 86.86 | 85.00 | 86.86 | 85.00 |
| Fold 7 | 88.07 | 78.95 | 87.50 | 78.95 | 87.50 | 78.95 |
| Fold 8 | 85.23 | 89.47 | 84.66 | 89.47 | 84.66 | 89.47 |
| Fold 9 | 87.50 | 84.21 | 85.80 | 84.21 | 85.80 | 84.21 |
| Fold 10 | 84.66 | 89.47 | 84.66 | 84.21 | 84.66 | 84.21 |
| Average acc. | 86.55 | 86.18 | 85.98 | 85.66 | 85.98 | 85.66 |
Table 2. The efficacy of each algorithm at the 500th iteration with 10-fold CV on the diabetes dataset (accuracy in %).

| | Algorithm 4 acc. train | Algorithm 4 acc. test | BiG-SAM acc. train | BiG-SAM acc. test | iBiG-SAM acc. train | iBiG-SAM acc. test |
|---|---|---|---|---|---|---|
| Fold 1 | 77.46 | 67.11 | 77.02 | 67.11 | 77.02 | 67.11 |
| Fold 2 | 76.12 | 71.43 | 75.40 | 71.43 | 75.40 | 71.43 |
| Fold 3 | 78.15 | 74.03 | 77.13 | 74.03 | 77.28 | 74.03 |
| Fold 4 | 75.69 | 68.83 | 74.24 | 67.53 | 74.24 | 67.53 |
| Fold 5 | 74.38 | 77.92 | 73.23 | 77.92 | 73.23 | 77.92 |
| Fold 6 | 74.53 | 84.42 | 73.81 | 83.12 | 73.81 | 83.12 |
| Fold 7 | 76.56 | 76.62 | 76.12 | 74.03 | 76.12 | 74.03 |
| Fold 8 | 75.83 | 79.22 | 75.54 | 79.22 | 75.83 | 79.22 |
| Fold 9 | 75.11 | 79.22 | 75.11 | 76.62 | 75.11 | 76.62 |
| Fold 10 | 75.58 | 75.00 | 73.70 | 68.42 | 73.70 | 68.42 |
| Average acc. | 75.94 | 75.38 | 75.13 | 73.94 | 75.17 | 73.94 |

The results in Tables 1 and 2 reveal that Algorithm 4 provides better accuracy for data classification than the other algorithms.

We provide a new two-step inertial accelerated algorithm in this paper. First, we analyze the convergence behavior of this algorithm and establish a strong convergence theorem under relevant conditions. Next, we utilize our algorithm as a machine learning algorithm to solve data classification problems of some noncommunicable diseases and compare its efficacy with BiG-SAM and iBiG-SAM. We find that our algorithm outperforms BiG-SAM and iBiG-SAM in terms of accuracy. In our future work, we would like to employ our proposed algorithm as a machine learning algorithm for the prediction and classification of some noncommunicable diseases, using data collected from the Sriphat Medical Center, Faculty of Medicine, Chiang Mai University, Chiang Mai, Thailand. We also aim to build new innovations in the form of web applications, mobile applications, and computer systems for data prediction and classification of noncommunicable diseases. These applications will benefit hospitals, communities, and citizens in terms of screening for and preventing noncommunicable diseases.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    This work was partially supported by Chiang Mai University and Fundamental Fund 2024 (FF030/2567), Chiang Mai University.

    The first author would like to thank CMU Presidential Scholarship for the financial support.

    All authors declare no conflicts of interest in this paper.



    [1] P. Thongpaen, W. Inthakon, T. Leerapun, S. Suantai, A new accelerated algorithm for convex Bilevel optimization problems and applications in data classification, Symmetry, 14 (2022), 2617. https://doi.org/10.3390/sym14122617 doi: 10.3390/sym14122617
    [2] K. Janngam, S. Suantai, Y. J. Cho, A. Kaewkhao, A novel inertial viscosity algorithm for Bilevel optimization problems applied to classification problems, Mathematics, 11 (2023), 2617. https://doi.org/10.3390/math11143241 doi: 10.3390/math11143241
    [3] P. Thongsri, B. Panyanak, S. Suantai, A new accelerated algorithm based on fixed point method for convex Bilevel optimization problems with applications, Mathematics, 11 (2023), 702. https://doi.org/10.3390/math11030702 doi: 10.3390/math11030702
[4] P. Sae-jia, S. Suantai, A novel algorithm for convex Bi-level optimization problems in Hilbert spaces with applications, Thai J. Math., 21 (2023), 625–645.
[5] N. Parikh, S. Boyd, Proximal algorithms, Found. Trends Optim., 1 (2014), 127–239. https://doi.org/10.1561/2400000003
[6] W. R. Mann, Mean value methods in iteration, P. Am. Math. Soc., 4 (1953), 506–510.
[7] S. Reich, Weak convergence theorems for nonexpansive mappings in Banach spaces, J. Math. Anal. Appl., 67 (1979), 274–276.
[8] B. Halpern, Fixed points of nonexpanding maps, B. Am. Math. Soc., 73 (1967), 957–961.
[9] S. Reich, Strong convergence theorems for resolvents of accretive operators in Banach spaces, J. Math. Anal. Appl., 75 (1980), 287–292.
    [10] S. Ishikawa, Fixed points by a new iteration method, P. Am. Math. Soc., 44 (1974), 147–150. https://doi.org/10.2307/2039245 doi: 10.2307/2039245
    [11] A. Moudafi, Viscosity approximation methods for fixed points problems, J. Math. Anal. Appl., 241 (2000), 46–55. https://doi.org/10.1006/jmaa.1999.6615 doi: 10.1006/jmaa.1999.6615
[12] R. P. Agarwal, D. O'Regan, D. R. Sahu, Iterative construction of fixed points of nearly asymptotically nonexpansive mappings, J. Nonlinear Convex A., 8 (2007), 61–79.
[13] K. Aoyama, Y. Kimura, W. Takahashi, M. Toyoda, Approximation of common fixed points of a countable family of nonexpansive mappings in a Banach space, Nonlinear Anal., 67 (2007), 2350–2360.
    [14] W. Takahashi, Viscosity approximation methods for countable family of nonexpansive mapping in Banach spaces, Nonlinear Anal., 70 (2009), 719–734. https://doi.org/10.1016/j.na.2008.01.005 doi: 10.1016/j.na.2008.01.005
    [15] C. Klin-eam, S. Suantai, Strong convergence of composite iterative schemes for a countable family of nonexpansive mappings in Banach spaces, Nonlinear Anal., 73 (2010), 431–439. https://doi.org/10.1016/j.na.2010.03.034 doi: 10.1016/j.na.2010.03.034
    [16] B. T. Polyak, Some methods of speeding up the convergence of iteration methods, USSR Comput. Math. Math. Phys., 4 (1964), 1–17. https://doi.org/10.1016/0041-5553(64)90137-5 doi: 10.1016/0041-5553(64)90137-5
    [17] A. Beck, M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems, SIAM J. Imaging Sci., 2 (2009), 183–202. https://doi.org/10.1137/080716542 doi: 10.1137/080716542
    [18] J. Puangpee, S. Suantai, A new accelerated viscosity iterative method for an infinite family of nonexpansive mappings with applications to image restoration problems, Mathematics, 8 (2020), 615. https://doi.org/10.3390/math8040615 doi: 10.3390/math8040615
    [19] B. T. Polyak, Introduction to optimization. optimization software, New York: Publications Division, 1987.
    [20] Q. L. Dong, Y. J. Cho, T. M. Rassias, General inertial Mann algorithms and their convergence analysis for nonexpansive mappings, Appl. Nonlinear Anal., 2018,175–191. https://doi.org/10.1007/978-3-319-89815-5_7 doi: 10.1007/978-3-319-89815-5_7
    [21] S. Sabach, S. Shtern, A first order method for solving convex bilevel optimization problems, SIAM J. Optim., 27 (2017), 640–660. https://doi.org/10.1137/16M105592 doi: 10.1137/16M105592
    [22] Y. Shehu, P. T. Vuong, A. Zemkoho, An inertial extrapolation method for convex simple bilevel optimization, Optim. Method. Softw., 36 (2021), 1–19. https://doi.org/10.48550/arXiv.1809.06250 doi: 10.48550/arXiv.1809.06250
    [23] K. Goebel, S. Reich, Uniform convexity, hyperbolic geometry, and nonexpansive mappings, New York: Marcel Dekker, 1984.
    [24] K. Nakajo, K. Shimoji, W. Takahashi, Strong convergence to common fixed points of families of nonexpansive mappings in Banach spaces, J. Nonlinear Convex A., 8 (2007), 11.
    [25] W. Takahashi, Introduction to nonlinear and convex analysis, Yokohama Publishers, 2009.
[26] L. Bussaban, S. Suantai, A. Kaewkhao, A parallel inertial S-iteration forward-backward algorithm for regression and classification problems, Carpathian J. Math., 36 (2020), 35–44.
    [27] W. Takahashi, Nonlinear functional analysis, Yokohama Publishers, 2000.
    [28] K. Goebel, W. A. Kirk, Topic in metric fixed point theory, Cambridge University Press, 1990.
[29] K. Aoyama, Y. Kimura, W. Takahashi, M. Toyoda, On a strongly nonexpansive sequence in a Hilbert space, J. Nonlinear Convex A., 8 (2007), 471–490.
    [30] H. K. Xu, Another control condition in an iterative method for nonexpansive mappings, B. Aust. Math. Soc., 65 (2002), 109–113. https://doi.org/10.1017/S0004972700020116 doi: 10.1017/S0004972700020116
[31] P. E. Maingé, Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization, Set-Valued Anal., 16 (2008), 899–912. https://doi.org/10.1007/s11228-008-0102-z
    [32] G. B. Huang, Q. Y. Zhu, C. K. Siew, Extreme learning machine: Theory and applications, Neurocomputing, 70 (2006), 489–501. https://doi.org/10.1016/j.neucom.2005.12.126 doi: 10.1016/j.neucom.2005.12.126
[33] M. Little, Parkinsons, UCI Machine Learning Repository, 2008. Available from: https://archive.ics.uci.edu/dataset/174/parkinsons.
    [34] Kaggle, Diabetes dataset, Kaggle, 1990. Available from: https://www.kaggle.com/datasets/mathchi/diabetes-data-set/data.
  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)