
Linear convergence of a primal-dual algorithm for distributed interval optimization

  • In this paper, we investigate a distributed interval optimization problem whose local functions are interval functions rather than scalar functions. Focusing on distributed interval optimization, this paper presents a distributed primal-dual algorithm. A criterion is introduced under which linear convergence to the Pareto solution of distributed interval optimization problems can be achieved without strong convexity. Lastly, a numerical simulation is presented to illustrate the linear convergence of the proposed algorithm.

    Citation: Yinghui Wang, Jiuwei Wang, Xiaobo Song, Yanpeng Hu. Linear convergence of a primal-dual algorithm for distributed interval optimization[J]. Electronic Research Archive, 2024, 32(2): 857-873. doi: 10.3934/era.2024041




    Due to its theoretical significance and wide range of applications in areas such as machine learning, multi-agent system coordination, sensor networks, and smart grids, distributed optimization has received considerable attention from researchers in recent years. Various distributed algorithms for solving distributed optimization problems have been introduced, in which agents collaborate with their neighboring agents to attain a global minimum; see the recent works [1,2,3,4,5,6,7].

    The objective functions in the aforementioned works are scalar functions. In practice, however, scalar functions often cannot express the objective functions of distributed networks explicitly or precisely (see [8,9,10]). Instead, interval functions are employed to describe such problems, as exemplified by applications in smart grids and economic systems [11,12]. To address the challenges posed by interval functions, interval optimization problems (IOPs) have been proposed [13,14,15,16,17,18,19]. IOPs were initially studied in [13] and subsequently investigated in [14,15]. Existence conditions for Pareto solutions of IOPs have been presented in [11,20], and [21,22,23,24] detail algorithms designed for centralized IOPs. Without conducting a theoretical analysis, [11,12] present distributed applications of IOPs in economic systems and smart infrastructures. The line search algorithms of [21,22,23,24], however, fail in distributed environments.

    Given this context, it is natural to consider the design of efficient algorithms that solve distributed interval optimization problems (DIOPs) over multi-agent networks. DIOPs, nevertheless, remain a subject of ongoing research. This may be because line search algorithms (e.g., Wolfe's or Lemke's algorithms [21,22,23,24]) cannot be readily applied in distributed settings, and very few papers [25] with related theoretical results have been published. In addition, algorithm design is made difficult by the partial order of interval functions.

    Furthermore, there is growing interest in the convergence rates of distributed algorithms for distributed optimization with scalar functions. When the local objective functions are strongly convex, the algorithms of [2,26,27] achieve linear convergence rates in centralized and distributed settings. In many practical applications, however, the local scalar functions are not strongly convex. A number of scholars [1,28,29,30] have therefore investigated substitutes for the strong convexity conditions that dictate linear convergence rates. For example, [1] analyzed four distinct categories of function conditions and deduced the linear convergence of numerous centralized algorithms. The authors of [28,29] demonstrated the linear rates of their distributed algorithms under metric sub-regularity and Polyak-Lojasiewicz conditions, respectively.

    In this paper, we investigate the Pareto solutions of a DIOP whose local functions are interval functions rather than scalar functions. The DIOP is given as follows:

    $$(\text{DIOP})\qquad \min_{s} G(s),\quad G(s)=\sum_{i=1}^{n}G_i(s)$$

    where $G_i=[L_i,R_i]$ is a convex interval function for each agent $i$, and $L_i(s)\le R_i(s)$ holds for every given $s$. Each agent can only access the gradient information of its own interval function $G_i$; the global Pareto solution is obtained by means of neighborhood information exchange. The contributions of this paper are summarized as follows:

    (a) We investigate the Pareto solutions of a DIOP whose local functions are interval functions. By exploiting convexity and well-defined partial orderings of interval functions, we convert the DIOP [11,20,31] into a solvable distributed scalarized interval optimization problem (DSIOP) with convex global constraints.

    (b) In this reformulation, the optimal solutions of the DSIOP correspond to the Pareto solutions of the DIOP. With this relationship, we propose a distributed primal-dual algorithm to find a Pareto solution of the DIOP.

    (c) We discuss a crucial criterion that, for Pareto solutions of a DIOP, weakens the strict or strong convexity required for linear convergence. Since this paper investigates DIOPs, the supplied criterion differs from those delineated in [1,28,29]. In addition, the criterion is essential for evaluating the convergence of distributed algorithms for DIOPs.

    The rest of the paper is organized as follows. The preliminaries of this paper are given in Section 2. In Section 3, the DIOP is analyzed. The primal-dual algorithm is further given to find a Pareto solution of the DIOP in Section 4 and a numerical example is given in Section 5. Finally, the conclusion of this paper is offered in Section 6.

    Notations. Denote by $\mathbb{R}$ the set of real numbers, by $I_n\in\mathbb{R}^{n\times n}$ the identity matrix, and by $\mathbf{1}_n=[1,1,\ldots,1]^\top\in\mathbb{R}^n$ the all-ones vector. Denote by $\langle\cdot,\cdot\rangle$ the inner product and by $\|\cdot\|$ the Euclidean norm in $\mathbb{R}^n$.

    In this section, we present an introduction to convex analysis for scalar functions [32], graph theory, and interval optimization [33].

    Define $\mathcal{N}=\{1,2,\ldots,n\}$ as the agent set and $\mathcal{E}\subseteq\mathcal{N}\times\mathcal{N}$ as the set of edges between agents. The communication between the $n$ agents is described by an undirected graph $\mathcal{G}=(\mathcal{N},\mathcal{E})$. If $(i,j)\in\mathcal{E}$, then agent $i$ can communicate with agent $j$. Therefore, each agent $i\in\mathcal{N}$ can communicate with the agents in its neighborhood $\mathcal{N}_i=\{j\mid (i,j)\in\mathcal{E}\}\cup\{i\}$. Denote by $A\in\mathbb{R}^{n\times n}$ the communication matrix between agents, whose elements $a_{ij}$ satisfy the following conditions:

    $$a_{ij}=\begin{cases} a_{ii}, & \text{if } i=j,\\ a_{ij}, & \text{if } i\neq j \text{ and } (i,j)\in\mathcal{E},\\ 0, & \text{otherwise}. \end{cases} \qquad(2.1)$$

    Denote by $d_i$ the degree of agent $i$, i.e., $d_i=\sum_{j=1}^{n}a_{ij}$. Further, denote by $D$ the $n\times n$ diagonal degree matrix $D=\mathrm{diag}\big(\sum_{j=1}^{n}a_{1j},\ldots,\sum_{j=1}^{n}a_{nj}\big)$. Then, the associated Laplacian matrix $P\in\mathbb{R}^{n\times n}$ is $P:=D-A$.
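    As an illustration of these graph objects, the following Python sketch (ours, not from the paper) builds $A$, $D$, and $P$ in numpy; the four-agent ring topology and the uniform edge weight $0.25$ are assumptions made only for this example.

```python
# Illustrative sketch (not from the paper): the graph objects of this section.
# Assumptions: a 4-agent undirected ring and uniform edge weight 0.25.
import numpy as np

n = 4
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]        # undirected edge set E

A = np.zeros((n, n))                            # weight matrix as in (2.1)
for i, j in edges:
    A[i, j] = A[j, i] = 0.25                    # a_ij > 0 iff (i, j) in E
A += np.diag(1.0 - A.sum(axis=1))               # a_ii chosen so each row sums to 1

D = np.diag(A.sum(axis=1))                      # diagonal degree matrix
P = D - A                                       # Laplacian P = D - A

assert np.allclose(P @ np.ones(n), 0)           # Laplacian rows sum to zero
assert np.all(np.linalg.eigvalsh(P) >= -1e-12)  # P is positive semidefinite
```

    With rows of $A$ summing to one (a common convention for consensus weights), $A$ is doubly stochastic, which is what the consensus step of the algorithm in Section 3 relies on.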

    The following assumption is made on the communication topology $\mathcal{G}=(\mathcal{N},\mathcal{E})$ between agents over the network:

    Assumption 1. The undirected graph $\mathcal{G}=(\mathcal{N},\mathcal{E})$ is connected.

    Assumption 1 is extensively employed in the literature (e.g., [28]); it ensures that the agents' vectors can reach consensus over the network.

    Prior to proceeding with the discussion of interval functions, we define convexity and the Lipschitz continuity of scalar functions.

    Definition 1. (a) A scalar function $f:\Omega\to\mathbb{R}$ is convex if for any $s_1,s_2\in\Omega$ and $\lambda\in[0,1]$, $f(\lambda s_2+(1-\lambda)s_1)\le \lambda f(s_2)+(1-\lambda)f(s_1)$ holds.

    (b) A scalar function $f:\mathbb{R}^n\to\mathbb{R}$ is $\kappa$-Lipschitz continuous with respect to a constant $\kappa>0$ if

    $$\|f(s_2)-f(s_1)\|\le \kappa\|s_2-s_1\|,\quad \forall s_1,s_2\in\mathbb{R}^n.$$

    The following lemma is crucial for the analysis of convergence in distributed optimization problems involving scalar functions and interval functions.

    Lemma 1. [32, Lemma 11, Chapter 2.2] Define $\{v_k\}_{k\ge1}$ and $\{w_k\}_{k\ge1}$ as two nonnegative scalar sequences with $\sum_{k=1}^{\infty}w_k<\infty$. Define $\{h_k\}_{k\ge1}$ as a scalar sequence that is uniformly bounded from below. If there exists a nonnegative scalar sequence $\{\eta_k\}$ with $\sum_{k=1}^{\infty}\eta_k<\infty$ and

    $$h_{k+1}\le(1+\eta_k)h_k-v_k+w_k,\quad \forall k\ge1,$$

    then $\{h_k\}_{k\ge1}$ converges and $\sum_{k=1}^{\infty}v_k<\infty$.

    Let $G:\mathbb{R}^p\to\mathcal{I}(\mathbb{R})$ be an interval-valued map, where $G(s)=[L(s),R(s)]$ is a non-empty compact interval in $\mathbb{R}$. Now, we consider the following IOP:

    $$(\text{IOP})\qquad \min_{s} G(s)\quad \text{s.t.}\ s\in\Omega.$$

    The Pareto optimal solution to an IOP is defined as follows:

    Definition 2. [34] A point $s^*\in\Omega$ is said to be a Pareto optimal solution to the IOP iff, for any $\bar{s}\in\Omega$, $L(\bar{s})\le L(s^*)$ and $R(\bar{s})\le R(s^*)$ both holding implies that $L(s^*)\le L(\bar{s})$ and $R(s^*)\le R(\bar{s})$.

    An example of an IOP is presented below. In this example, no solution exists other than Pareto solutions.

    Example 1. The IOP illustrated in Figure 1 does not have a solution that minimizes $L$ and $R$ simultaneously. However, the set of Pareto optimal solutions to the given problem is the interval $[s_1,s_2]$:

    (a) For $y\le s_1$, we have that $R(y)\ge R(s_1)$ and $L(y)\ge L(s_1)$, and $s_1$ is a Pareto solution to the IOP.

    (b) For $y\ge s_2$, we have that $R(y)\ge R(s_2)$ and $L(y)\ge L(s_2)$, and $s_2$ is a Pareto solution to the IOP.

    (c) For $s_1\le y\le s_2$, we have that $R(y)\le R(s_1)$, $L(y)\ge L(s_1)$, $R(y)\ge R(s_2)$, and $L(y)\le L(s_2)$.

    Hence, for $s_1\le y\le s_2$ and any $\bar{s}\in\Omega$, $L(\bar{s})<L(y)$ and $R(\bar{s})<R(y)$ cannot hold concurrently.

    Figure 1.  $L(s)$ and $R(s)$.

    According to Definition 2, all points of $[s_1,s_2]$ are Pareto optimal solutions to the given problem.

    To investigate the Pareto solutions of an IOP, let us consider the IOP in conjunction with its scalarization (SIOP):

    $$(\text{SIOP})\qquad \min_{s}\ \lambda L(s)+(1-\lambda)R(s)\quad \text{s.t.}\ s\in\Omega$$

    where $\lambda\in[0,1]$. The following lemma, which relates Pareto solutions of IOPs to solutions of SIOPs, holds according to [34]. Furthermore, it remains valid in distributed settings.

    Lemma 2. [34] Assume that $G$ is compact-valued and convex with respect to $s$:

    (a) If there exists a real number $\lambda\in(0,1)$ such that $s^*\in\Omega$ is a solution to the SIOP, then $s^*\in\Omega$ is a Pareto optimal solution of the IOP.

    (b) If a point $s^*\in\Omega$ is a Pareto optimal solution of the IOP, then there exists a real number $\lambda\in[0,1]$ such that $s^*\in\Omega$ is an optimal solution of the SIOP.

    In this section, we consider a DIOP and introduce its distributed primal-dual algorithm.

    Consider the following DIOP:

    $$(\text{DIOP})\qquad \min_{s}G(s),\quad G(s)=\sum_{i=1}^{n}G_i(s_i)\quad \text{s.t.}\ s_i=s_j \qquad(3.1)$$

    where $s=[s_1^\top,s_2^\top,\ldots,s_n^\top]^\top\in\mathbb{R}^{np}$, $s_i\in\mathbb{R}^p$, and $G_i=[L_i,R_i]$ with $L_i,R_i:\mathbb{R}^p\to\mathbb{R}$ convex functions. For any given $s_i$, $L_i(s_i)\le R_i(s_i)$ holds. Each agent $i$ knows only its local interval function $G_i$.

    Define L(s) and R(s) as

    $$L(s)=\sum_{i=1}^{n}L_i(s_i),\qquad R(s)=\sum_{i=1}^{n}R_i(s_i). \qquad(3.2)$$

    With (3.2), the definition of Pareto solutions for the DIOP is given as follows.

    Definition 3. [34] $s^*\in\Omega$ is a Pareto solution of the DIOP iff, for any $\bar{s}\in\Omega$, $L(\bar{s})\le L(s^*)$ and $R(\bar{s})\le R(s^*)$ both holding implies that $L(s^*)\le L(\bar{s})$ and $R(s^*)\le R(\bar{s})$.

    The existence of Pareto solutions for the DIOP is guaranteed by Assumption 2, which is consistent with its centralized counterpart [35].

    Assumption 2. (a) Li(s) and Ri(s) are strongly convex, continuous functions.

    (b) Problem (3.1) has at least one Pareto solution.

    (c) Gradients of Li(s) and Ri(s) are Lipschitz continuous.

    Lemma 2 also establishes a theoretical framework for Pareto solutions of the DIOP. Consider the following scalarization of the DIOP. Define $F:\mathbb{R}^{np}\times\mathbb{R}^{n}\to\mathbb{R}$ and $f_i:\mathbb{R}^p\times[0,1]\to\mathbb{R}$ as

    $$F(s,z)\triangleq\sum_{i=1}^{n}f_i(s_i,z_i) \qquad(3.3)$$
    $$f_i(s_i,z_i)\triangleq z_iL_i(s_i)+(1-z_i)R_i(s_i) \qquad(3.4)$$

    where $z=[z_1,z_2,\ldots,z_n]^\top\in(0,1)^n$ and $s=[s_1^\top,s_2^\top,\ldots,s_n^\top]^\top\in\mathbb{R}^{np}$. Let $z^*=z^0\mathbf{1}_n$ with $z^0\in(0,1)$. The DIOP (3.1) can then be rewritten in scalarized form as follows:

    $$(\text{DSIOP})\qquad \min_{s}F(s,z),\quad F(s,z)=\sum_{i=1}^{n}f_i(s_i,z_i)\quad \text{s.t.}\ s_i=s_j,\ z_i=z_j \qquad(3.5)$$

    where each agent $i$ possesses the following information: $f_i$, $s_i$, $z_i\in(0,1)$, and $s_j$ for $j\in\mathcal{N}_i$. Problem (3.5) can be modeled as a distributed optimization problem with scalar objectives [28,36,37] when $z$ is a common vector known to every agent $i$. Additionally, under Assumption 2, the following lemma holds:

    Lemma 3. [34,35]

    (a) $f_i(s,z)$ is linear with respect to $z$, and $f_i(s,z)$ is convex with respect to $s$.

    (b) There are Lipschitz constants $k_{i1}$ and $K_1$ such that the partial derivative $\frac{\partial f_i}{\partial s}(s,z)$ is Lipschitz continuous with respect to $s$ with constant $k_{i1}$, and $\frac{\partial F}{\partial s}(s,z)$ is Lipschitz continuous with respect to $s$ with constant $K_1$.

    (c) There are Lipschitz constants $k_{i2}$ and $K_2$ such that $f_i(s,z)$ is Lipschitz continuous with respect to $z$ with constant $k_{i2}$, and $F(s,z)$ is Lipschitz continuous with respect to $z$ with constant $K_2$.

    (d) There are Lipschitz constants $k_{i3}$ and $K_3$ such that the partial derivative $\frac{\partial f_i}{\partial s}(s,z)$ is Lipschitz continuous with respect to $z$ with constant $k_{i3}$, and $\frac{\partial F}{\partial s}(s,z)$ is Lipschitz continuous with respect to $z$ with constant $K_3$.

    It should be noted that although $f_i(s_i,z_i)$ is convex with respect to $s_i$ and linear with respect to $z_i$ separately, $f_i(s_i,z_i)$ is not jointly convex in $(s_i,z_i)$. Owing to this non-convexity, the standard criteria for linear convergence rates of distributed algorithms are no longer applicable to the DIOP.
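    To make the scalarization (3.3)-(3.4) concrete, the following sketch (ours, not from the paper) implements $f_i$ and its partial gradient for the quadratic interval family $G_i(s)=[\upsilon_1,\upsilon_2]\|s-\rho_i\|^2$ used later in Section 5; the closed-form gradient is specific to this assumed quadratic family.

```python
# Sketch (ours) of the scalarization (3.4) for an assumed quadratic family
# G_i(s) = [v1, v2] * ||s - rho_i||^2, i.e., L_i(s) = v1*||s - rho_i||^2 and
# R_i(s) = v2*||s - rho_i||^2 with v1 <= v2.
import numpy as np

def make_fi(rho_i, v1, v2):
    L = lambda s: v1 * np.sum((s - rho_i) ** 2)
    R = lambda s: v2 * np.sum((s - rho_i) ** 2)

    def fi(s, z):
        # f_i(s_i, z_i) = z_i L_i(s_i) + (1 - z_i) R_i(s_i)   -- Eq. (3.4)
        return z * L(s) + (1 - z) * R(s)

    def grad_s_fi(s, z):
        # partial gradient of f_i with respect to s_i, used later in (3.9a)
        return 2 * (z * v1 + (1 - z) * v2) * (s - rho_i)

    return fi, grad_s_fi

# Example: fi, g = make_fi(np.array([1.0, -2.0]), 0.5, 2.0); g(np.zeros(2), 0.3)
```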

    During distributed optimization, $s_1,\ldots,s_n$ and $z_1,\ldots,z_n$ are not necessarily equal at all times. Therefore, it is natural to treat these variables separately and impose the constraints $s_1=\cdots=s_n$ and $z_1=\cdots=z_n$. By using the Laplacian matrix $P$, these constraints are equivalent to $\mathbf{P}s=0$ and $Pz=0$, where $z=[z_1,z_2,\ldots,z_n]^\top\in(0,1)^n$, $s=[s_1^\top,s_2^\top,\ldots,s_n^\top]^\top\in\mathbb{R}^{np}$, and $\mathbf{P}=P\otimes I_p$. Consequently, problem (3.5) is reformulated as follows:

    $$\min_{s}F(s,z),\quad F(s,z)=\sum_{i=1}^{n}f_i(s_i,z_i)\quad \text{s.t.}\ \mathbf{P}s=0,\ Pz=0,\ z\in(0,1)^n. \qquad(3.6)$$

    Let $t=[t_1^\top,t_2^\top,\ldots,t_n^\top]^\top$. Recall that the dual problem of (3.6) is

    $$\min_{s\in\mathbb{R}^{np}}\Big[F(s,z)+\max_{t\in\mathbb{R}^{np}}\langle t,\mathbf{P}s\rangle\Big]\quad \text{s.t.}\ Pz=0,\ z\in(0,1)^n, \qquad(3.7)$$

    and the augmented Lagrangian function of (3.7) with respect to $s$ is

    $$\tilde{L}(s,z,t)=F(s,z)+\langle t,\mathbf{P}s\rangle+\frac{1}{2}\langle s,\mathbf{P}s\rangle. \qquad(3.8)$$
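    For reference, a minimal sketch (ours) of evaluating (3.8), assuming a callable F implementing (3.3) and the lifted Laplacian $\mathbf{P}=P\otimes I_p$ passed in as a dense array:

```python
# Minimal sketch (ours) of the augmented Lagrangian (3.8) in stacked notation.
import numpy as np

def aug_lagrangian(F, s, z, t, bP):
    """tilde-L(s, z, t) = F(s, z) + <t, bP s> + (1/2) <s, bP s>, cf. (3.8)."""
    Ps = bP @ s
    return F(s, z) + t @ Ps + 0.5 * (s @ Ps)

# bP can be formed from the n x n Laplacian P as bP = np.kron(P, np.eye(p)).
```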

    Define $\bar{z}^0=\frac{1}{n}\sum_{i=1}^{n}z_i^0$ and $\bar{\mathbf{z}}^0=\bar{z}^0\mathbf{1}_n\in\mathbb{R}^n$, where $z_i^0\in(0,1)$ is the initial value of agent $i$. For the vector $\bar{\mathbf{z}}^0$, denote by $S^*$ the optimal solution set of problem (3.6) and by $T^*$ the saddle point set of problem (3.7), respectively. According to Assumption 2, for a properly given $z^0$, there exists $t^*$ such that $(s^*,t^*)\in S^*\times T^*$. Any $(s^*,t^*)\in S^*\times T^*$ also satisfies the following lemma, which is a basis for the convergence analysis:

    Lemma 4. (Karush-Kuhn-Tucker conditions, [38, Theorems 3.25-3.27]) With Assumption 2, for a given $\bar{\mathbf{z}}^0=\bar{z}^0\mathbf{1}_n\in(0,1)^n$, $(s^*,t^*)$ is a solution to (3.7) if

    $$\begin{cases} 0=\nabla_s\tilde{L}(s^*,t^*)=\frac{\partial F}{\partial s}(s^*,\bar{\mathbf{z}}^0)+\mathbf{P}s^*+\mathbf{P}t^*,\\ 0=\nabla_t\tilde{L}(s^*,t^*)=\mathbf{P}s^*. \end{cases}$$

    With Lemma 4, we introduce a distributed primal-dual algorithm as follows:

    $$s_i^{k+1}=s_i^k-h\Big(\frac{\partial f_i}{\partial s_i}(s_i^k,z_i^k)+\sum_{j=1}^{n}a_{ij}(s_i^k-s_j^k)+\sum_{j=1}^{n}a_{ij}(t_i^k-t_j^k)\Big) \qquad(3.9a)$$
    $$z_i^{k+1}=\sum_{j=1}^{n}a_{ij}z_j^k \qquad(3.9b)$$
    $$t_i^{k+1}=t_i^k+h\sum_{j=1}^{n}a_{ij}(s_i^k-s_j^k) \qquad(3.9c)$$

    where the step-size $h$ satisfies $0<h<\frac{2}{K_1+4\sigma}$, and $\sigma$ is the largest eigenvalue of the Laplacian matrix $P$. At the $k$-th iteration, each agent $i\in\mathcal{N}=\{1,2,\ldots,n\}$ only obtains the partial gradient $\frac{\partial f_i}{\partial s_i}(s_i^k,z_i^k)$ of its local function $f_i(s_i^k,z_i^k)$, and it cooperates with its neighbors to achieve a Pareto solution of problem (3.1).
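    The following runnable sketch (ours, not from the paper) performs one synchronous iteration of (3.9) for all agents at once; the stacked-array layout and the rewriting of the neighborhood sums as Laplacian products are implementation assumptions.

```python
# Sketch (ours) of one iteration of (3.9), stacked over all n agents.
import numpy as np

def primal_dual_step(S, Z, T, A, grads, h):
    """S, T : (n, p) arrays stacking the local variables s_i, t_i.
    Z    : (n,) array of local scalarization weights z_i.
    grads: list of callables; grads[i](s_i, z_i) returns the partial
           gradient of f_i with respect to s_i, as in (3.4)."""
    n = A.shape[0]
    deg = A.sum(axis=1)                        # row sums of the weight matrix
    lap = lambda X: deg[:, None] * X - A @ X   # row i: sum_j a_ij (x_i - x_j)

    G = np.stack([grads[i](S[i], Z[i]) for i in range(n)])
    S_new = S - h * (G + lap(S) + lap(T))      # primal update (3.9a)
    Z_new = A @ Z                              # consensus on z (3.9b)
    T_new = T + h * lap(S)                     # dual update (3.9c)
    return S_new, Z_new, T_new
```

    One would call this in a loop, e.g. `S, Z, T = primal_dual_step(S, Z, T, A, grads, h)`, with `A` built as in (2.1) and `grads[i]` the partial gradient of $f_i$.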

    The constraint $P\lim_{k\to\infty}z^k=0$, $z_i\in(0,1)$ in (3.6) is satisfied through (3.9b) and the initialization $z_i^0\in(0,1)$, while the constraint $\mathbf{P}\lim_{k\to\infty}s^k=0$ and the minimization of $F(s,z)$ are handled by (3.9a) and (3.9c). Define $s^k=\mathrm{col}\{s_1^k,\ldots,s_n^k\}$, $t^k=\mathrm{col}\{t_1^k,\ldots,t_n^k\}$, and $z^k=\mathrm{col}\{z_1^k,\ldots,z_n^k\}$. Then, with $w\triangleq\mathrm{col}\{s,t\}\in\mathbb{R}^{2np}$, let $w^*\triangleq\mathrm{col}\{s^*,t^*\}\in W^*\subseteq S^*\times T^*$ for a properly given $\bar{\mathbf{z}}^0$, where $W^*$ is the primal-dual solution set of problem (3.7). Algorithm (3.9) can be rewritten in the following compact form in terms of $\{w,z\}$:

    $$\begin{cases} w^{k+1}=w^k-hI(w^k,z^k)\\ z^{k+1}=Az^k \end{cases} \qquad(3.10)$$

    where

    $$I(w,z)\triangleq\begin{bmatrix}I_1(w,z)\\ I_2(w,z)\end{bmatrix}=\begin{bmatrix}\frac{\partial F}{\partial s}(s,z)+\mathbf{P}s+\mathbf{P}t\\ -\mathbf{P}s\end{bmatrix}. \qquad(3.11)$$

    We have the following basic result, whose proof is in the Appendix.

    Theorem 1. Under Assumptions 1 and 2, $\{s^k,t^k\}$ converges to the primal-dual solution set $W^*$.

    Consider the Lyapunov function

    $$V(w,z)=V_a(w,z)+V_b(w,z)+V_c(w,z) \qquad(3.12)$$

    where $V_a(w,z)=\sigma d^2(w,W^*)$, $V_b(w,z)=F(s,z)-F(s^*,\bar{\mathbf{z}}^0)+\frac{1}{2}\langle s,\mathbf{P}s\rangle+\langle s,\mathbf{P}t\rangle$, $V_c(w,z)=K_2\|z-\bar{\mathbf{z}}^0\|$, and $K_2$ is the Lipschitz constant given in Lemma 3. Theorem 1 is based on Lemmas 5 and 6, whose proofs are also given in the Appendix.

    Lemma 5. With Assumption 1, $\{z^k\}$ converges to $\bar{\mathbf{z}}^0$ with a linear convergence rate $\gamma_1\in(0,1)$: $\lim_{k\to\infty}z^k=\bar{\mathbf{z}}^0$ and $\|z^k-\bar{\mathbf{z}}^0\|\le\gamma_1\|z^{k-1}-\bar{\mathbf{z}}^0\|$. Moreover, $\|z^k-\bar{\mathbf{z}}^0\|$ is summable with respect to $k$: $\sum_{k=1}^{\infty}\|z^k-\bar{\mathbf{z}}^0\|<\infty$.
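    A quick numerical illustration of Lemma 5 (ours; the doubly stochastic ring weights are an assumption): iterating (3.9b) drives $z^k$ to the average $\bar{\mathbf{z}}^0$ at a geometric rate.

```python
# Illustration (ours) of Lemma 5: z^k = A^k z^0 -> average consensus,
# with geometric error decay governed by the spectrum of A.
import numpy as np

n = 4
A = 0.5 * np.eye(n) + 0.25 * (np.eye(n, k=1) + np.eye(n, k=-1))
A[0, -1] = A[-1, 0] = 0.25                   # doubly stochastic ring weights

rng = np.random.default_rng(0)
z = rng.uniform(0.1, 0.9, size=n)            # z_i^0 in (0, 1)
zbar = z.mean() * np.ones(n)                 # consensus limit  zbar^0 * 1_n
for k in range(1, 6):
    z = A @ z                                # consensus step (3.9b)
    print(k, np.linalg.norm(z - zbar))       # error shrinks geometrically
```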

    Lemma 6 is additionally presented to bound $V(w,z)$ from below and above.

    Lemma 6. With Assumptions 1 and 2, the following inequality holds for the Lyapunov function $V(w,z)$:

    $$\frac{\sigma}{2}\big[\|s-s^*\|^2+\|t-t^*\|^2\big]\le V(w,z)\le \frac{K_1+4\sigma}{2}\big[\|s-s^*\|^2+\|t-t^*\|^2\big]+2K_2\|z-\bar{\mathbf{z}}^0\|$$

    where $K_1,K_2$ are the Lipschitz constants given in Lemma 3, and $\sigma$ is the largest eigenvalue of the Laplacian matrix $P$.

    The asymptotic convergence of (3.9) is established by Theorem 1, which is consistent with the results of [28] for distributed optimization. It should be noted that the inclusion of the partial gradient term $\frac{\partial F}{\partial s}(s,z)$ renders the contraction mapping principle inapplicable. In contrast to numerous distributed algorithms whose proofs rely on the contraction mapping principle [26,28,37,39], this work employs the martingale-type convergence theorem (Lemma 1) in the proof of Theorem 1.

    In this section, we present our main results. A criterion without strong convexity is first introduced for the DIOP which, together with (3.9), implies linear convergence. Our criterion for (3.9) to achieve linear (exponential) convergence is as follows.

    Criterion. The continuously differentiable augmented Lagrangian function $\tilde{L}$ defined in (3.8) has restricted quadratic gradient growth; that is, there exists a constant $\kappa_L>0$ such that for any $w$, with $w^*=P_{W^*}(w)$, we have

    $$\langle I(w,\bar{\mathbf{z}}^0)-I(w^*,\bar{\mathbf{z}}^0),\,w-w^*\rangle\ge\kappa_L\|w-w^*\|^2. \qquad(4.1)$$

    The criterion given in this paper differs from the quadratic convexity condition of [1] and the metric sub-regularity condition of [28] for distributed optimization problems with scalar functions: it is given for DIOPs. On the other hand, regarding the dynamics (3.9), we will show that (4.1) is sufficient to achieve linear convergence.
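    Since (4.1) is generally hard to verify analytically, the following sketch (ours, not from the paper) estimates the best constant $\kappa_L$ empirically by sampling, under the simplifying assumption that $W^*$ is a known singleton $\{w^*\}$ so that the projection $P_{W^*}$ is trivial.

```python
# Sketch (ours): empirical lower estimate of kappa_L in (4.1), assuming the
# primal-dual solution set W* is a known singleton {w_star}.
import numpy as np

def estimate_kappa_L(I, w_star, zbar0, n_samples=1000, radius=1.0, seed=0):
    """Minimum over samples of
       <I(w, zbar0) - I(w*, zbar0), w - w*> / ||w - w*||^2,
    which (4.1) requires to be bounded below by kappa_L > 0."""
    rng = np.random.default_rng(seed)
    I_star = I(w_star, zbar0)          # zero at a saddle point, by Lemma 4
    ratios = []
    for _ in range(n_samples):
        w = w_star + radius * rng.standard_normal(w_star.shape)
        d = w - w_star
        ratios.append(float((I(w, zbar0) - I_star) @ d / (d @ d)))
    return min(ratios)
```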

    Theorem 2. Under Assumptions 1 and 2 and criterion (4.1), $\{s^k,t^k\}$ converges linearly to the optimal set $W^*$.

    Proof. If $w=w^*$, we have that $I(w^*,\bar{\mathbf{z}}^0)=0$. Further, consider the case $w\neq w^*$. With Lemma 3 and (4.1), we obtain

    $$\begin{aligned}\langle I(w,z),w-w^*\rangle&=\langle I(w,z)-I(w,\bar{\mathbf{z}}^0),w-w^*\rangle+\langle I(w,\bar{\mathbf{z}}^0)-I(w^*,\bar{\mathbf{z}}^0),w-w^*\rangle\\ &\ge\kappa_L\|w-w^*\|^2-K_3\|z-\bar{\mathbf{z}}^0\|\,\|w-w^*\| \end{aligned} \qquad(4.2)$$

    where the last inequality follows from (4.1), the Lipschitz continuity of $I$ with respect to $z$ (Lemma 3), and the Cauchy-Schwarz inequality. Still,

    $$\langle I(w,z),w-w^*\rangle\le\|I(w,z)\|\,\|w-w^*\|. \qquad(4.3)$$

    Inequalities (4.2) and (4.3) indicate that $\|I(w,z)\|\ge\kappa_L\|w-w^*\|-K_3\|z-\bar{\mathbf{z}}^0\|$. By Lemma 6,

    $$\begin{aligned}\|I(w^k,z^k)\|^2&\ge\kappa_L^2\|w^k-w^*\|^2+K_3^2\|z^k-\bar{\mathbf{z}}^0\|^2-2\kappa_LK_3\|w^k-w^*\|\,\|z^k-\bar{\mathbf{z}}^0\|\\ &\ge\frac{2\kappa_L^2}{K_1+4\sigma}\big[V(w^k,z^k)-2K_2\|z^k-\bar{\mathbf{z}}^0\|\big]-2\kappa_LK_3\|w^k-w^*\|\,\|z^k-\bar{\mathbf{z}}^0\|+K_3^2\|z^k-\bar{\mathbf{z}}^0\|^2. \end{aligned} \qquad(4.4)$$

    Substituting (4.4) into (A12) yields

    $$V(w^{k+1},z^{k+1})\le T_1^k+T_2^k+T_3^k+T_4^k-T_5^k$$

    where $T_1^k=\Big(1-\frac{h(2-\nu_0h)\kappa_L^2}{K_1+4\sigma}\Big)V(w^k,z^k)$, $T_2^k=2hK_3\sigma\|z^k-\bar{\mathbf{z}}^0\|\,\|s^k-s^*\|$, $T_3^k=K_2\big(\|z^{k+1}-z^k\|+(1-\gamma_1)\|z^k-\bar{\mathbf{z}}^0\|\big)$, $T_4^k=\frac{2hK_2(2-\nu_0h)\kappa_L^2}{K_1+4\sigma}\|z^k-\bar{\mathbf{z}}^0\|$, and $T_5^k=\frac{h(2-\nu_0h)K_3^2}{2}\|z^k-\bar{\mathbf{z}}^0\|^2$.

    Still, according to Lemma 5, $\|z^k-\bar{\mathbf{z}}^0\|$ converges linearly at rate $\gamma_1$. Therefore, the residual terms $T_2^k$, $T_3^k$, $T_4^k$, and $T_5^k$ diminish at linear rates. Since $\nu_0\le K_1+4\sigma$, the main term $T_1^k$ converges at a linear rate no slower than that given by the contraction factor $1-\frac{h(2-(K_1+4\sigma)h)\kappa_L^2}{2(K_1+4\sigma)}$. With Lemma 6, we obtain $\|s^{k+1}-s^*\|^2+\|t^{k+1}-t^*\|^2\le\frac{2}{\sigma}V(w^{k+1},z^{k+1})$, which completes the proof.

    As shown in Theorem 2, (4.1) plays an important role in achieving linear convergence even in the absence of strong convexity of $f_i(s_i,z_i)$. In this paper, we extend the quadratic convexity condition of [1] to (4.1) for interval functions. Criterion (4.1) also describes another linear gradient growth condition for distributed optimization problems.

    In this section, we demonstrate the proposed algorithm on the following simulation:

    $$\min G(s)=\sum_{i=1}^{9}[\upsilon_{1i},\upsilon_{2i}]\,\|s-\rho_i\|^2$$

    where $\upsilon_{1i},\upsilon_{2i}\in\mathbb{R}$ and $\rho_i\in\mathbb{R}^p$. The problem is motivated by both a centralized IOP [35] and distributed optimization [40]. The communication topology between the agents is described by Figure 2.

    Figure 2.  Communication topology between agents.

    Define $[\upsilon_{1i},\upsilon_{2i}]=[0.5,2]$. Take $\rho_1=5$, $\rho_2=4$, $\rho_3=3$, $\rho_4=2$, $\rho_5=1$, $\rho_6=0$, $\rho_7=1$, $\rho_8=2$, and $\rho_9=3$. Next, initialize (3.9) by setting the step-size $h=0.1$, $z_i^0$ as random numbers in $[0,1]$, and $s_i^0=0$. We then investigate the convergence of (3.9). Figures 3 and 4 show the consensus of $z_i^k$ and the convergence of $s_i^k$ for the proposed algorithm; after 1000 iterations we obtain the Pareto solution $(0.4695;1.002)$. Figure 5 shows the convergence of $s_i^k$ under a centralized primal-dual algorithm (generated according to the solution properties in [35]) for each agent $i$, where $z_i$ denotes random numbers in $[0,1]$. In addition, we take the performance index $R_k=\log\|s^k-s^*\|^2$. The evolution of $R_k$ is shown in Figure 6, which indicates the linear convergence of (3.9).

    Figure 3.  $z_i^k$ for agent $i$ under (3.9).
    Figure 4.  $s_i^k$ for agent $i$ under (3.9).
    Figure 5.  $s_i^k$ for agent $i$ under the centralized primal-dual algorithm.
    Figure 6.  Convergence rate of (3.9).
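    For reproducibility, the following end-to-end sketch (ours, not from the paper) runs an experiment of this kind under stated assumptions: the centers $\rho_i$ are taken exactly as printed above (some may have lost minus signs in extraction), a doubly stochastic ring stands in for the topology of Figure 2, and $p=1$.

```python
# End-to-end sketch (ours) of the Section 5 experiment under assumptions
# stated in the text: 9 agents, G_i(s) = [0.5, 2] * ||s - rho_i||^2, h = 0.1,
# an assumed ring topology, and scalar states (p = 1).
import numpy as np

n, h, iters = 9, 0.1, 1000
rho = np.array([5.0, 4, 3, 2, 1, 0, 1, 2, 3])  # centers as printed in the text
v1, v2 = 0.5, 2.0                              # interval coefficients [0.5, 2]

A = 0.5 * np.eye(n) + 0.25 * (np.eye(n, k=1) + np.eye(n, k=-1))
A[0, -1] = A[-1, 0] = 0.25                     # doubly stochastic ring weights
lap = lambda X: X - A @ X                      # sum_j a_ij (x_i - x_j); rows of A sum to 1

rng = np.random.default_rng(1)
S, T = np.zeros(n), np.zeros(n)                # s_i^0 = 0, t_i^0 = 0
Z = rng.uniform(0.05, 0.95, size=n)            # z_i^0 random in (0, 1)

for _ in range(iters):
    grad = 2 * (Z * v1 + (1 - Z) * v2) * (S - rho)   # d f_i / d s_i for (3.4)
    S, Z, T = S - h * (grad + lap(S) + lap(T)), A @ Z, T + h * lap(S)

print(S)   # agents agree on (approximately) one scalarized minimizer
```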

    We have investigated a DIOP whose local functions are interval functions. With distributed interval optimization as its primary focus, this paper has introduced a distributed primal-dual algorithm. A criterion has been proposed that enables linear convergence to a Pareto solution of the DIOP without strong convexity. Finally, a numerical simulation has been executed to demonstrate the linear convergence of the proposed algorithm. Given that the existing research on DIOPs primarily focuses on interval objective functions, the investigation of distributed problems with interval constraints should be pursued in the future.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    This research was supported by the NSFC (72101026, 62203045) and Operation Expenses for Universities' Basic Scientific Research of Central Authorities (FRF-TP-22-141A1).

    The authors declare that there is no conflict of interest.

    Proof of Lemma 5

    Proof. According to Assumption 1, the weight matrix $A$ is irreducible and aperiodic. By [33, Theorem 6.64], $\lim_{k\to\infty}A^k=B$ with a linear convergence rate $\gamma_1\in(0,1)$, where $B=\frac{1}{n}\mathbf{1}_n\mathbf{1}_n^\top$. With (3.9b), we have

    $$\lim_{k\to\infty}z^k=\lim_{k\to\infty}A^kz^0=Bz^0=\bar{\mathbf{z}}^0. \qquad(A1)$$

    According to [37, Lemma 3], $\sum_{k=1}^{\infty}\|A^k-B\|<\infty$ holds, which completes the proof.

    Proof of Lemma 6

    Proof. (a) Lower bound of the Lyapunov function $V(w,z)$: Let $w^*=\mathrm{col}\{s^*,t^*\}$ be the projection of $w$ onto the optimal set $W^*$. Since $\mathbf{P}$ is symmetric, Lemma 4 gives $\frac{\partial F}{\partial s}(s^*,\bar{\mathbf{z}}^0)=-\mathbf{P}t^*$ and $\mathbf{P}s^*=0$. We further obtain $\langle s-s^*,\frac{\partial F}{\partial s}(s^*,\bar{\mathbf{z}}^0)\rangle=-\langle s-s^*,\mathbf{P}t^*\rangle$ and $\langle s,\mathbf{P}t^*\rangle=\langle s-s^*,\mathbf{P}t^*\rangle$. $V_b(w,z)$ can be further written as

    $$\begin{aligned}V_b(w,z)&=F(s,z)-F(s^*,\bar{\mathbf{z}}^0)+\frac{1}{2}\langle s,\mathbf{P}s\rangle+\langle s,\mathbf{P}t\rangle\\ &=F(s,z)-F(s^*,\bar{\mathbf{z}}^0)+\langle s-s^*,\mathbf{P}(t-t^*)\rangle+\langle s-s^*,\mathbf{P}t^*\rangle+\frac{1}{2}\langle s-s^*,\mathbf{P}(s-s^*)\rangle\\ &=F(s,z)-F(s^*,\bar{\mathbf{z}}^0)-\langle s-s^*,\tfrac{\partial F}{\partial s}(s^*,\bar{\mathbf{z}}^0)\rangle+\langle s-s^*,\mathbf{P}(t-t^*)\rangle+\frac{1}{2}\langle s-s^*,\mathbf{P}(s-s^*)\rangle. \end{aligned} \qquad(A2)$$

    According to Lemma 3, F(s,z) is convex with respect to s and Lipschitz continuous with respect to z. Therefore,

    $$\begin{aligned}&F(s,z)-F(s^*,\bar{\mathbf{z}}^0)-\langle s-s^*,\tfrac{\partial F}{\partial s}(s^*,\bar{\mathbf{z}}^0)\rangle\\ &=F(s,\bar{\mathbf{z}}^0)-F(s^*,\bar{\mathbf{z}}^0)-\langle s-s^*,\tfrac{\partial F}{\partial s}(s^*,\bar{\mathbf{z}}^0)\rangle+F(s,z)-F(s,\bar{\mathbf{z}}^0)\\ &\ge -K_2\|z-\bar{\mathbf{z}}^0\|. \end{aligned}$$

    Since $\mathbf{P}$ is positive semidefinite,

    $$\frac{1}{2}\langle s-s^*,\mathbf{P}(s-s^*)\rangle\ge 0.$$

    Therefore, $V_b(w,z)\ge\langle s-s^*,\mathbf{P}(t-t^*)\rangle-K_2\|z-\bar{\mathbf{z}}^0\|\ge-\frac{\sigma}{2}\big[\|s-s^*\|^2+\|t-t^*\|^2\big]-K_2\|z-\bar{\mathbf{z}}^0\|$, which implies the lower bound $V(w,z)\ge\frac{\sigma}{2}\big[\|s-s^*\|^2+\|t-t^*\|^2\big]$.

    (b) Upper bound of the Lyapunov function $V(w,z)$: According to Lemma 3 (the $K_1$-Lipschitz continuity of $\frac{\partial F}{\partial s}(s,z)$ with respect to $s$) and Assumption 2, $F(s,\bar{\mathbf{z}}^0)-F(s^*,\bar{\mathbf{z}}^0)-\langle s-s^*,\frac{\partial F}{\partial s}(s^*,\bar{\mathbf{z}}^0)\rangle\le\frac{K_1}{2}\|s-s^*\|^2$. Since, by Lemma 3, $F(s,z)$ is Lipschitz continuous with respect to $z$, we have $F(s,z)-F(s,\bar{\mathbf{z}}^0)\le K_2\|z-\bar{\mathbf{z}}^0\|$. Note that

    $$\frac{1}{2}\langle s-s^*,\mathbf{P}(s-s^*)\rangle\le\frac{\sigma}{2}\|s-s^*\|^2.$$

    Moreover, $\langle s-s^*,\mathbf{P}(t-t^*)\rangle\le\sigma\|s-s^*\|\,\|t-t^*\|\le\frac{\sigma}{2}\big[\varepsilon\|s-s^*\|^2+\frac{1}{\varepsilon}\|t-t^*\|^2\big]$ for any $\varepsilon>0$. Choosing $\varepsilon=\frac{\sigma}{K_1+\sigma}$, we get

    $$V_b(w,z)\le\frac{K_1+\sigma}{2}\|s-s^*\|^2+\frac{\sigma^2}{2(K_1+\sigma)}\|s-s^*\|^2+\frac{K_1+\sigma}{2}\|t-t^*\|^2+K_2\|z-\bar{\mathbf{z}}^0\|\le\frac{K_1+2\sigma}{2}\big[\|s-s^*\|^2+\|t-t^*\|^2\big]+K_2\|z-\bar{\mathbf{z}}^0\|,$$

    which implies that $V(w,z)\le\frac{K_1+4\sigma}{2}\big[\|s-s^*\|^2+\|t-t^*\|^2\big]+2K_2\|z-\bar{\mathbf{z}}^0\|$.

    Proof of Theorem 1

    Proof. It follows from Lemma 3 (the $K_1$-Lipschitz continuity of $\frac{\partial F}{\partial s}(s,z)$ with respect to $s$ and the $K_2$-Lipschitz continuity of $F(s,z)$ with respect to $z$) that

    $$\begin{aligned}F(s^{k+1},z^{k+1})-F(s^k,z^k)&=F(s^{k+1},z^{k+1})-F(s^{k+1},z^k)+F(s^{k+1},z^k)-F(s^k,z^k)\\ &\le\langle\tfrac{\partial F}{\partial s}(s^k,z^k),s^{k+1}-s^k\rangle+\frac{K_1}{2}\|s^{k+1}-s^k\|^2+K_2\|z^{k+1}-z^k\|\\ &\le -h\langle\tfrac{\partial F}{\partial s}(s^k,z^k),I_1(w^k,z^k)\rangle+\frac{h^2K_1}{2}\|I_1(w^k,z^k)\|^2+K_2\|z^{k+1}-z^k\|, \end{aligned} \qquad(A3)$$

    where the second inequality uses $s^{k+1}-s^k=-hI_1(w^k,z^k)$ from (3.10). Since $\|\mathbf{P}\|\le\sigma$, we have

    $$\langle s^{k+1},\mathbf{P}s^{k+1}\rangle-\langle s^k,\mathbf{P}s^k\rangle\le -2h\langle I_1(w^k,z^k),\mathbf{P}s^k\rangle+h^2\sigma\|I_1(w^k,z^k)\|^2 \qquad(A4)$$

    and

    $$\begin{aligned}\langle t^{k+1},\mathbf{P}s^{k+1}\rangle-\langle t^k,\mathbf{P}s^k\rangle\le&-h\langle I_2(w^k,z^k),\mathbf{P}s^k\rangle+\frac{h^2\sigma}{2}\|I_2(w^k,z^k)\|^2\\ &-h\langle I_1(w^k,z^k),\mathbf{P}t^k\rangle+\frac{h^2\sigma}{2}\|I_1(w^k,z^k)\|^2. \end{aligned} \qquad(A5)$$

    Combining (A3)-(A5) with the definition of $V_b(w,z)$, we get

    $$\begin{aligned}V_b(w^{k+1},z^{k+1})-V_b(w^k,z^k)\le&-h\|I_1(w^k,z^k)\|^2+h\|I_2(w^k,z^k)\|^2+\frac{h^2(K_1+2\sigma)}{2}\|I_1(w^k,z^k)\|^2\\ &+\frac{h^2\sigma}{2}\|I_2(w^k,z^k)\|^2+K_2\|z^{k+1}-z^k\|, \end{aligned} \qquad(A6)$$

    which is based on $\langle\mathbf{P}s,t\rangle=\langle\mathbf{P}t,s\rangle$.

    With Lemma 4 and $\|\mathbf{P}\|\le\sigma$, we obtain

    $$\begin{aligned}\langle I(w^k,z^k),w^k-w^*\rangle&=\langle s^k-s^*,\tfrac{\partial F}{\partial s}(s^k,z^k)+\mathbf{P}t^k+\mathbf{P}s^k\rangle-\langle t^k-t^*,\mathbf{P}s^k\rangle\\ &=\langle s^k-s^*,\tfrac{\partial F}{\partial s}(s^k,z^k)\rangle+\langle s^k,\mathbf{P}t^*\rangle+\langle s^k,\mathbf{P}s^k\rangle\\ &=\langle s^k-s^*,\tfrac{\partial F}{\partial s}(s^k,z^k)-\tfrac{\partial F}{\partial s}(s^*,\bar{\mathbf{z}}^0)\rangle+\langle s^k,\mathbf{P}s^k\rangle\\ &=\langle s^k-s^*,\tfrac{\partial F}{\partial s}(s^k,z^k)-\tfrac{\partial F}{\partial s}(s^*,z^k)\rangle+\langle s^k-s^*,\tfrac{\partial F}{\partial s}(s^*,z^k)-\tfrac{\partial F}{\partial s}(s^*,\bar{\mathbf{z}}^0)\rangle+\langle s^k,\mathbf{P}s^k\rangle. \end{aligned} \qquad(A7)$$

    Since $F(\cdot,z)$ is a convex function with respect to $s$,

    $$\langle s^k-s^*,\tfrac{\partial F}{\partial s}(s^k,z^k)-\tfrac{\partial F}{\partial s}(s^*,z^k)\rangle\ge 0. \qquad(A8)$$

    According to the $K_3$-Lipschitz continuity of $\frac{\partial F}{\partial s}(s,z)$ with respect to $z$ in Lemma 3, we have

    $$\langle s^k-s^*,\tfrac{\partial F}{\partial s}(s^*,z^k)-\tfrac{\partial F}{\partial s}(s^*,\bar{\mathbf{z}}^0)\rangle\ge -K_3\|z^k-\bar{\mathbf{z}}^0\|\,\|s^k-s^*\|. \qquad(A9)$$

    Combining (A8) and (A9) with (A7) yields

    $$\langle I(w^k,z^k),w^k-w^*\rangle\ge\frac{1}{\sigma}\|\mathbf{P}s^k\|^2-K_3\|z^k-\bar{\mathbf{z}}^0\|\,\|s^k-s^*\|. \qquad(A10)$$

    According to the Lipschitz continuity of $\nabla_wV_a(w,z)$ in (3.12) with respect to $w$, we have

    $$\begin{aligned}V_a(w^{k+1},z^{k+1})-V_a(w^k,z^k)&\le\langle\nabla_wV_a(w^k,z^k),w^{k+1}-w^k\rangle+\frac{\sigma}{2}\|w^{k+1}-w^k\|^2\\ &\le -2h\sigma\langle I(w^k,z^k),w^k-w^*\rangle+\frac{\sigma h^2}{2}\|I(w^k,z^k)\|^2\\ &\le -2h\|I_2(w^k,z^k)\|^2+\frac{\sigma h^2}{2}\|I(w^k,z^k)\|^2+2hK_3\sigma\|z^k-\bar{\mathbf{z}}^0\|\,\|s^k-s^*\|. \end{aligned} \qquad(A11)$$

    Therefore, by using (A6), (A11), and the definition of $V(w,z)$ in (3.12), we have

    $$V(w^{k+1},z^{k+1})-V(w^k,z^k)\le -\frac{h(2-\nu_0h)}{2}\|I(w^k,z^k)\|^2+2hK_3\sigma\|z^k-\bar{\mathbf{z}}^0\|\,\|s^k-s^*\|+K_2M_k \qquad(A12)$$

    where $\nu_0=4\sigma+K_1$ and $M_k=\|z^{k+1}-z^k\|+\|z^{k+1}-\bar{\mathbf{z}}^0\|-\|z^k-\bar{\mathbf{z}}^0\|$.

    According to Lemma 5 and Assumption 2,

    $$\sum_{k=1}^{\infty}2hK_3\sigma\|z^k-\bar{\mathbf{z}}^0\|\,\|s^k-s^*\|=\sum_{k=1}^{\infty}2hK_3\sigma\|(A^k-B)z^0\|\,\|s^k-s^*\|<\infty, \qquad(A13)$$

    and

    $$\sum_{k=1}^{\infty}K_2M_k\le\sum_{k=1}^{\infty}K_2\big(\|(A-I_n)A^kz^0\|+\|(A-I_n)(A^k-B)z^0\|\big)<\infty. \qquad(A14)$$

    Consequently, with Lemma 1, $V(w^k,z^k)$ converges and $\sum_{k=0}^{\infty}\|I(w^k,z^k)\|^2<+\infty$, which implies $\lim_{k\to\infty}I(w^k,z^k)=0$. By Lemma 4 and the continuity of $I$, $\lim_{k\to\infty}\|s^k-s^*\|=0$ and $\lim_{k\to\infty}\|t^k-t^*\|=0$.



    [1] I. Necoara, Y. Nesterov, F. Glineur, Linear convergence of first order methods for non-strongly convex optimization, Math. Program., 175 (2019), 69–107. https://doi.org/10.1007/s10107-018-1232-1 doi: 10.1007/s10107-018-1232-1
    [2] A. Makhdoumi, A. Ozdaglar, Convergence rate of distributed admm over networks, IEEE Trans. Autom. Control, 62 (2017), 5082–5095. https://doi.org/10.1109/TAC.2017.2677879 doi: 10.1109/TAC.2017.2677879
    [3] X. He, T. Huang, J. Yu, C. Li, Y. Zhang, A continuous-time algorithm for distributed optimization based on multiagent networks, IEEE Trans. Syst. Man Cybern.: Syst., 49 (2017), 2700–2709. https://doi.org/10.1109/TSMC.2017.2780194 doi: 10.1109/TSMC.2017.2780194
    [4] H. Li, Q. Lü, X. Liao, T. Huang, Accelerated convergence algorithm for distributed constrained optimization under time-varying general directed graphs, IEEE Trans. Syst. Man Cybern.: Syst., 50 (2018), 2612–2622. https://doi.org/10.1109/TSMC.2018.2823901 doi: 10.1109/TSMC.2018.2823901
    [5] Q. Wang, J. Chen, B. Xin, X. Zeng, Distributed optimal consensus for euler-lagrange systems based on event-triggered control, IEEE Trans. Syst. Man Cybern.: Syst., 51 (2021), 4588–4598. https://doi.org/10.1109/TSMC.2019.2944857 doi: 10.1109/TSMC.2019.2944857
    [6] J. Guo, R. Jia, R. Su, Y. Zhao, Identification of fir systems with binary-valued observations against data tampering attacks, IEEE Trans. Syst. Man Cybern.: Syst., 53 (2023), 5861–5873. https://doi.org/10.1109/TSMC.2023.3276352 doi: 10.1109/TSMC.2023.3276352
    [7] J. Guo, X. Wang, W. Xue, Y. Zhao, System identification with binary-valued observations under data tampering attacks, IEEE Trans. Autom. Control, 66 (2020), 3825–3832. https://doi.org/10.1109/TAC.2020.3029325 doi: 10.1109/TAC.2020.3029325
    [8] X. Zeng, Y. Peng, Y. Hong, Distributed algorithm for robust resource allocation with polyhedral uncertain allocation parameters, J. Syst. Sci. Complexity, 31 (2018), 103–119. https://doi.org/10.1007/s11424-018-7145-5 doi: 10.1007/s11424-018-7145-5
    [9] V. Kekatos, G. B. Giannakis, Distributed robust power system state estimation, IEEE Trans. Power Syst., 28 (2013), 1617–1626. https://doi.org/10.1109/TPWRS.2012.2219629 doi: 10.1109/TPWRS.2012.2219629
    [10] S. Sra, S. Nowozin, S. J. Wright, Optimization for Machine Learning, Mit Press, 2012.
    [11] B. Q. Hu, S. Wang, A novel approach in uncertain programming part Ⅰ: new arithmetic and order relation for interval numbers, J. Ind. Manage. Optim., 2 (2006), 351–371. https://doi.org/10.3934/jimo.2006.2.351 doi: 10.3934/jimo.2006.2.351
    [12] L. Wu, M. Shahidehpour, Z. Li, Comparison of scenario-based and interval optimization approaches to stochastic SCUC, IEEE Trans. Power Syst., 27 (2012), 913–921. https://doi.org/10.1109/TPWRS.2011.2164947 doi: 10.1109/TPWRS.2011.2164947
    [13] A. Neumaier, Interval Methods for Systems of Equations, Cambridge University Press, 1990.
    [14] J. Rohn, Positive definiteness and stability of interval matrices, SIAM J. Matrix Anal. Appl., 15 (1994), 175–184. https://doi.org/10.1137/S0895479891219216 doi: 10.1137/S0895479891219216
    [15] V. I. Levin, Nonlinear optimization under interval uncertainty, Cybern. Syst. Anal., 35 (1999), 297–306. https://doi.org/10.1007/BF02733477 doi: 10.1007/BF02733477
    [16] T. Saeed, S. Treană, New classes of interval-valued variational problems and inequalities, Results Control Optim., 13 (2023), 100324. https://doi.org/10.1016/j.rico.2023.100324 doi: 10.1016/j.rico.2023.100324
    [17] M. Ciontescu, S. Treană, On some connections between interval-valued variational control problems and the associated inequalities, Results Control Optim., 12 (2023), 100300. https://doi.org/10.1016/j.rico.2023.100300 doi: 10.1016/j.rico.2023.100300
    [18] Y. Guo, G. Ye, W. Liu, D. Zhao, S. Treanţǎ, Solving nonsmooth interval optimization problems based on interval-valued symmetric invexity, Chaos, Solitons Fractals, 174 (2023), 113834. https://doi.org/10.1016/j.chaos.2023.113834 doi: 10.1016/j.chaos.2023.113834
    [19] S. Treană, T. Saeed, On weak variational control inequalities via interval analysis, Mathematics, 11 (2023), 2177. https://doi.org/10.3390/math11092177 doi: 10.3390/math11092177
    [20] I. Hisao, T. Hideo, Multiobjective programming in optimization of the interval objective function, Eur. J. Oper. Res., 48 (1990), 219–225. https://doi.org/10.1016/0377-2217(90)90375-L doi: 10.1016/0377-2217(90)90375-L
    [21] S. T. Liu, R. T. Wang, A numerical solution method to interval quadratic programming, Appl. Math. Comput., 189 (2007), 1274–1281. https://doi.org/10.1016/j.amc.2006.12.007 doi: 10.1016/j.amc.2006.12.007
    [22] C. Jiang, X. Han, G. Liu, G. Liu, A nonlinear interval number programming method for uncertain optimization problems, Eur. J. Oper. Res., 188 (2008), 1–13. https://doi.org/10.1016/j.ejor.2007.03.031 doi: 10.1016/j.ejor.2007.03.031
    [23] A. Jayswal, I. Stancu-Minasian, I. Ahmad, On sufficiency and duality for a class of interval-valued programming problems, Appl. Math. Comput., 218 (2011), 4119–4127. https://doi.org/10.1016/j.amc.2011.09.041 doi: 10.1016/j.amc.2011.09.041
    [24] M. Hladık, Interval linear programming: a survey, in Linear Programming-New Frontiers in Theory and Applications, (2012), 85–120.
    [25] A. Bellet, Y. Liang, A. B. Garakani, M. F. Balcan, F. Sha, A distributed Frank-Wolfe algorithm for communication-efficient sparse learning, in Proceedings of the 2015 SIAM International Conference on Data Mining (SDM), (2015), 478–486. https://doi.org/10.1137/1.9781611974010.54
    [26] G. Qu, N. Li, Accelerated distributed Nesterov gradient descent, IEEE Trans. Autom. Control, 65 (2020), 2566–2581. https://doi.org/10.1109/TAC.2019.2937496 doi: 10.1109/TAC.2019.2937496
    [27] A. Nedic, A. Olshevsky, W. Shi, Achieving geometric convergence for distributed optimization over time-varying graphs, SIAM J. Optim., 27 (2017), 2597–2633. https://doi.org/10.1137/16M1084316 doi: 10.1137/16M1084316
    [28] S. Liang, L. Y. Wang, G. Yin, Exponential convergence of distributed primal–dual convex optimization algorithm without strong convexity, Automatica, 105 (2019), 298–306. https://doi.org/10.1016/j.automatica.2019.04.004 doi: 10.1016/j.automatica.2019.04.004
    [29] X. Yi, S. Zhang, T. Yang, K. H. Johansson, T. Chai, Exponential convergence for distributed optimization under the restricted secant inequality condition, IFAC-PapersOnLine, 53 (2020), 2672–2677. https://doi.org/10.1016/j.ifacol.2020.12.383 doi: 10.1016/j.ifacol.2020.12.383
    [30] S. Treană, Lu-optimality conditions in optimization problems with mechanical work objective functionals, IEEE Trans. Neural Networks Learn. Syst., 33 (2021), 4971–4978. https://doi.org/10.1109/TNNLS.2021.3066196 doi: 10.1109/TNNLS.2021.3066196
    [31] H. C. Wu, On interval-valued nonlinear programming problems, J. Math. Anal. Appl., 338 (2008), 299–316. https://doi.org/10.1016/j.jmaa.2007.05.023 doi: 10.1016/j.jmaa.2007.05.023
    [32] B. T. Polyak, Introduction to Optimization, Chapman and Hall, 1987.
    [33] R. Durrett, Probability: Theory and Examples, Cambridge University Press, 2010. https://doi.org/10.1017/CBO9780511779398
    [34] T. Maeda, On optimization problems with set-valued objective maps: existence and optimality, J. Optim. Theory Appl., 153 (2012), 263–279. https://doi.org/10.1007/s10957-011-9952-x doi: 10.1007/s10957-011-9952-x
    [35] A. K. Bhurjee, G. Panda, Efficient solution of interval optimization problem, Math. Methods Oper. Res., 76 (2012), 273–288. https://doi.org/10.1007/s00186-012-0399-0 doi: 10.1007/s00186-012-0399-0
    [36] S. S. Ram, A. Nedić, V. V. Veeravalli, Distributed stochastic subgradient projection algorithms for convex optimization, J. Optim. Theory Appl., 147 (2010), 516–545. https://doi.org/10.1007/s10957-010-9737-7 doi: 10.1007/s10957-010-9737-7
    [37] A. Nedic, A. Ozdaglar, Distributed subgradient methods for multi-agent optimization, IEEE Trans. Autom. Control, 54 (2009), 48–61. https://doi.org/10.1109/TAC.2008.2009515 doi: 10.1109/TAC.2008.2009515
    [38] A. P. Ruszczyński, A. Ruszczynski, Nonlinear Optimization, Princeton University Press, 2006. https://doi.org/10.1515/9781400841059
    [39] A. Nedic, A. Ozdaglar, P. A. Parrilo, Constrained consensus and optimization in multi-agent networks, IEEE Trans. Autom. Control, 55 (2010), 922–938. https://doi.org/10.1109/TAC.2010.2041686 doi: 10.1109/TAC.2010.2041686
    [40] A. Nedić, A. Olshevsky, Stochastic gradient-push for strongly convex functions on time-varying directed graphs, IEEE Trans. Autom. Control, 61 (2016), 3936–3947. https://doi.org/10.1109/TAC.2016.2529285 doi: 10.1109/TAC.2016.2529285
  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
