
In this paper, we investigate a distributed interval optimization problem whose local functions are interval functions rather than scalar functions. Focusing on distributed interval optimization, this paper presents a distributed primal-dual algorithm. A criterion is introduced under which linear convergence to the Pareto solution of distributed interval optimization problems can be achieved without strong convexity. Finally, a numerical simulation is presented to illustrate the linear convergence of the proposed algorithm.
Citation: Yinghui Wang, Jiuwei Wang, Xiaobo Song, Yanpeng Hu. Linear convergence of a primal-dual algorithm for distributed interval optimization[J]. Electronic Research Archive, 2024, 32(2): 857-873. doi: 10.3934/era.2024041
Due to its theoretical significance and wide range of applications in areas such as machine learning, multi-agent system coordination, sensor networks, and smart grids, distributed optimization has received considerable attention from researchers in recent years. Various distributed algorithms for solving distributed optimization problems have been introduced, in which agents collaborate with their neighboring agents to attain global minimization; see the recent works [1,2,3,4,5,6,7].
The objective functions in the aforementioned works are scalar functions. In practice, however, scalar functions are frequently incapable of expressing the objective functions of distributed networks explicitly or precisely (see [8,9,10]). Instead, interval functions are employed to describe such problems, as exemplified in applications to smart grids and economic systems [11,12]. To address the challenges presented by interval functions, interval optimization problems (IOPs) have been proposed [13,14,15,16,17,18,19]. Initial studies on IOPs were conducted by the authors of [13] and subsequently extended in [14,15]. Existence conditions for Pareto solutions of IOPs have been presented in [11,20]. In addition, [21,22,23,24] detail algorithms designed for centralized IOPs. Without conducting a theoretical analysis, [11,12] present distributed applications of IOPs in economic systems and smart infrastructures. The line search algorithms of [21,22,23,24], however, fail in distributed environments.
Given this context, it is natural to consider the design of efficient algorithms for solving distributed interval optimization problems (DIOPs) over multi-agent networks. DIOPs, nevertheless, remain a subject of ongoing research. This may be due to the difficulty of applying line search algorithms (e.g., the Wolfe or Lemke algorithms [21,22,23,24]) in distributed settings, and very few papers [25] with related theoretical results have been published. In addition, algorithm design is complicated by the partial order of interval functions.
Furthermore, there is growing interest in the convergence rates of distributed algorithms for distributed optimization with scalar functions. When local objective functions are strongly convex, the algorithms of [2,26,27] achieve linear convergence rates in the centralized and distributed settings. However, in a number of practical applications the local scalar functions are not strongly convex. A group of scholars [1,28,29,30] has therefore investigated substitutes for the strong convexity conditions that dictate linear convergence rates. For example, [1] analyzed four distinct categories of function conditions and deduced the linear convergence of numerous centralized algorithms. The authors of [28,29] demonstrated the linear rates of their distributed algorithms under metric sub-regularity and Polyak-Łojasiewicz conditions, respectively.
In this paper, we investigate the Pareto solutions of a DIOP whose local functions are interval functions rather than scalar functions. The DIOP is given as follows:
$$\text{(DIOP)}\qquad \min_{s}\; G(s),\qquad G(s)=\sum_{i=1}^{n}G_i(s),$$
where $G_i=[L_i,R_i]$ is a convex interval function for each agent $i$, and $L_i(s)\leqslant R_i(s)$ holds for every given $s$. Each agent can only access the gradient information of its own interval function $G_i$. By means of neighborhood information exchange, a global Pareto solution is obtained. The contributions of this paper are summarized as follows:
(a) We investigate the Pareto solutions of a DIOP whose local functions are interval functions. By exploiting convexity and a well-defined partial ordering of interval functions, we convert the DIOP [11,20,31] into a solvable distributed scalarized interval optimization problem (DSIOP) with convex global constraints.
(b) In this reformulation, the optimal solutions of the DSIOP correspond to the Pareto solutions of the DIOP. With this relationship, we propose a distributed primal-dual algorithm to find a Pareto solution of the DIOP.
(c) We discuss a crucial criterion that, when applied to Pareto solutions of a DIOP, weakens the strict or strong convexity required for linear convergence. Given that this paper investigates DIOPs, the supplied criterion differs from those delineated in [1,28,29]. In addition, the criterion is essential for evaluating the convergence of distributed algorithms for DIOPs.
The rest of the paper is organized as follows. The preliminaries of this paper are given in Section 2. In Section 3, the DIOP is analyzed. The primal-dual algorithm is further given to find a Pareto solution of the DIOP in Section 4 and a numerical example is given in Section 5. Finally, the conclusion of this paper is offered in Section 6.
Notations. Denote by $\mathbb{R}$ the set of real numbers, by $I_n\in\mathbb{R}^{n\times n}$ the identity matrix, and by $\mathbf{1}_n=[1,1,\dots,1]^{\top}\in\mathbb{R}^{n}$ the all-ones vector. Denote by $\langle\cdot,\cdot\rangle$ the inner product and by $\|\cdot\|$ the Euclidean norm in $\mathbb{R}^{n}$.
In this section, we present an introduction to convex analysis for scalar functions [32], graph theory, and interval optimization [33].
Define $\mathcal{N}=\{1,2,\dots,n\}$ as the agent set and $\mathcal{E}\subset\mathcal{N}\times\mathcal{N}$ as the set of edges between agents. The communication between the $n$ agents is described by an undirected graph $\mathcal{G}=(\mathcal{N},\mathcal{E})$. If $(i,j)\in\mathcal{E}$, then agent $i$ can communicate with agent $j$. Therefore, each agent $i\in\mathcal{N}$ can communicate with the agents in its neighborhood $\mathcal{N}_i=\{j\,|\,(i,j)\in\mathcal{E}\}\cup\{i\}$. Denote by $A\in\mathbb{R}^{n\times n}$ the communication matrix between agents, whose elements $a_{ij}$ satisfy the following conditions:
$$a_{ij}\begin{cases}>0, & \text{if } i=j \text{ or } (i,j)\in\mathcal{E},\\ =0, & \text{otherwise}.\end{cases}\qquad(2.1)$$
Denote by $d_i$ the degree of agent $i$, i.e., $d_i=\sum_{j=1}^{n}a_{ij}$. Further, denote by $D$ the $n\times n$ diagonal degree matrix $D=\mathrm{diag}\big(\sum_{j=1}^{n}a_{1j},\dots,\sum_{j=1}^{n}a_{nj}\big)$. Then, the associated Laplacian matrix $P\in\mathbb{R}^{n\times n}$ is $P:=D-A$.
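For a concrete picture of these objects, the adjacency, degree, and Laplacian matrices can be assembled directly from the edge set; the graph and weights below are illustrative, not taken from the paper:

```python
import numpy as np

def laplacian(n, edges, weight=1.0):
    """Laplacian P = D - A of an undirected graph on n nodes."""
    A = np.zeros((n, n))
    for i, j in edges:
        A[i, j] = A[j, i] = weight   # symmetric adjacency entries
    D = np.diag(A.sum(axis=1))       # diagonal degree matrix
    return D - A

# Path graph on three nodes: 0 - 1 - 2.
P = laplacian(3, [(0, 1), (1, 2)])
# Rows of P sum to zero, so the all-ones vector lies in its null space;
# this is what makes P x = 0 encode consensus among the agents.
row_sums = P @ np.ones(3)
```

The zero row sums and symmetry of $P$ are exactly the properties used later when consensus constraints are written through the Laplacian.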
The following assumption is made on the communication topology $\mathcal{G}=(\mathcal{N},\mathcal{E})$ between agents over the network:
Assumption 1. The undirected graph G=(N,E) is connected.
Assumption 1 is extensively employed, e.g., in [28]; it ensures that the agents' vectors can reach consensus over the network.
Prior to proceeding with the discussion of interval functions, we define convexity and the Lipschitz continuity of scalar functions.
Definition 1. (a) A scalar function $f:\Omega\to\mathbb{R}$ is convex if, for any $s_1,s_2\in\Omega$ and $\lambda\in[0,1]$, $f(\lambda s_2+(1-\lambda)s_1)\leq\lambda f(s_2)+(1-\lambda)f(s_1)$ holds.
(b) A scalar function $f:\mathbb{R}^n\to\mathbb{R}$ is $\kappa$-Lipschitz continuous with respect to a constant $\kappa>0$ if
$$|f(s_2)-f(s_1)|\leqslant\kappa\|s_2-s_1\|,\qquad\forall s_1,s_2\in\mathbb{R}^{n}.$$
The following lemma is crucial for the analysis of convergence in distributed optimization problems involving scalar functions and interval functions.
Lemma 1. [32, Lemma 11, Chapter 2.2] Let $\{v_k\}_{k\geqslant1}$ and $\{w_k\}_{k\geqslant1}$ be two nonnegative scalar sequences with $\sum_{k=1}^{\infty}w_k<\infty$, and let $\{h_k\}_{k\geqslant1}$ be a scalar sequence that is uniformly bounded from below. If there exists a nonnegative sequence $\eta_k\geqslant0$ with $\sum_{k=1}^{\infty}\eta_k<\infty$ and
$$h_{k+1}\leqslant(1+\eta_k)h_k-v_k+w_k,\qquad\forall k\geqslant1,$$
then $\{h_k\}_{k\geqslant1}$ converges and $\sum_{k=1}^{\infty}v_k<\infty$.
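Lemma 1 can be illustrated numerically. The specific sequences below are our own choices, not the paper's: with summable $\eta_k$ and $w_k$, the recursion drives $h_k$ to a finite limit while the partial sums of $v_k$ stay bounded.

```python
# Numerical illustration of Lemma 1 (all sequences are illustrative):
# h_{k+1} <= (1 + eta_k) h_k - v_k + w_k with summable eta_k and w_k
# forces {h_k} to converge and {v_k} to be summable.
K = 100_000
h = 1.0
hs = []
v_sum = 0.0
for k in range(1, K):
    eta = 1.0 / k**2        # summable multiplicative perturbation
    w = 1.0 / k**2          # summable additive noise
    v = 0.5 / k**1.5        # decrease term; its partial sums stay bounded
    h = (1 + eta) * h - v + w   # recursion taken with equality
    hs.append(h)
    v_sum += v
converged = abs(hs[-1] - hs[-2]) < 1e-7   # successive iterates nearly equal
```

This is precisely the mechanism used later: the Lyapunov function plays the role of $h_k$, and summability of the residual terms yields convergence without any contraction argument.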
Let $G:\mathbb{R}^p\rightrightarrows\mathbb{R}$ be an interval-valued map, i.e., $G(s)=[L(s),R(s)]$ is a non-empty compact interval in $\mathbb{R}$ for every $s$. Now, we consider the following IOP:
$$\text{(IOP)}\qquad\min_{s\in\mathbb{R}^p} G(s).$$
The Pareto optimal solution to an IOP is defined as follows:
Definition 2. [34] A point $s^{*}$ is said to be a Pareto optimal solution to an IOP iff, whenever $L(s)\leqslant L(s^{*})$ and $R(s)\leqslant R(s^{*})$ both hold for some $s$, it follows that $L(s)=L(s^{*})$ and $R(s)=R(s^{*})$.
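The endpoint-wise comparison behind Definition 2 is only a partial order, which is why Pareto (rather than minimum) solutions are the right notion. A minimal sketch; the `Interval` class and method names are ours, purely for illustration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    """A closed interval [lo, hi] with lo <= hi."""
    lo: float
    hi: float

    def __add__(self, other):
        # Minkowski sum: [a, b] + [c, d] = [a + c, b + d]
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def preceq(self, other):
        # Endpoint-wise (LU) order: [a, b] <= [c, d] iff a <= c and b <= d
        return self.lo <= other.lo and self.hi <= other.hi

# The order is partial: overlapping intervals can be incomparable.
a, b = Interval(0.0, 3.0), Interval(1.0, 2.0)
incomparable = (not a.preceq(b)) and (not b.preceq(a))
```

Because pairs such as $[0,3]$ and $[1,2]$ are incomparable, an IOP generally has a set of Pareto solutions instead of a single minimizer.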
An example is presented below; the IOP in this example has Pareto optimal solutions even though it admits no solution in the usual sense.
Example 1. The IOP illustrated in Figure 1 does not have a solution. However, the Pareto optimal solutions to the given problem are .
(a) For , we have that and , and is a Pareto solution to the IOP.
(b) For , we have that and , and is a Pareto solution to the IOP.
(c) For , we have that , , and .
For , , and could not hold concurrently.
According to Definition 2, are Pareto optimal solutions to this given problem.
To investigate the Pareto solutions of an IOP, let us consider the following IOP in conjunction with its scalarization (SIOP):
where . The following lemma, relating Pareto solutions of IOPs to solutions of SIOPs, holds according to [34]. Furthermore, it remains valid in distributed settings.
Lemma 2. [34] We assume that is compact-valued and convex with respect to :
(a) If there exists a real number such that is a solution to the SIOP, then is a Pareto optimal solution of the IOP.
(b) If a point is a Pareto optimal solution of the IOP, then there exists a real number such that is an optimal solution of the SIOP.
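Lemma 2 suggests the standard computational recipe: fix a scalarization weight, minimize the resulting scalar function, and read off a Pareto point. The sketch below uses the common convex-combination scalarization $\beta L(s)+(1-\beta)R(s)$ with illustrative quadratic endpoints; the paper's exact SIOP weighting is not legible in this copy, so treat the form as an assumption:

```python
# Illustrative interval objective G(s) = [L(s), R(s)] with L(s) <= R(s).
L = lambda s: (s - 1.0) ** 2                   # lower endpoint function
R = lambda s: (s - 1.0) ** 2 + s ** 2 + 1.0    # upper endpoint, R >= L

def scalarized_min(beta, s0=0.0, step=0.1, iters=500):
    """Gradient descent on f_beta(s) = beta * L(s) + (1 - beta) * R(s)."""
    grad = lambda s: beta * 2 * (s - 1) + (1 - beta) * (2 * (s - 1) + 2 * s)
    s = s0
    for _ in range(iters):
        s -= step * grad(s)
    return s

# Different weights trace out different Pareto candidates of the IOP.
s_half = scalarized_min(0.5)   # minimizer of 0.5*L + 0.5*R, i.e. s = 2/3
s_lower = scalarized_min(1.0)  # beta = 1 minimizes L alone, i.e. s = 1
```

Sweeping the weight from 0 to 1 traces out the Pareto front, which is the mechanism Lemma 2(b) guarantees in the other direction.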
In this section, we consider a DIOP and introduce its distributed primal-dual algorithm.
Consider the following DIOP:
(3.1) |
where $G_i=[L_i,R_i]$, and $L_i$, $R_i$ are convex functions. For any given $s$, $L_i(s)\leqslant R_i(s)$ holds. Each agent $i$ knows only its local interval function $G_i$.
Define and as
(3.2) |
With (3.2), the definition of Pareto solutions for the DIOP is then given.
Definition 3. [34] A point $s^{*}$ is a Pareto solution of the DIOP iff, whenever $L(s)\leqslant L(s^{*})$ and $R(s)\leqslant R(s^{*})$ both hold for some $s$, it follows that $L(s)=L(s^{*})$ and $R(s)=R(s^{*})$.
The existence of Pareto solutions for the DIOP is guaranteed by Assumption 2, which is consistent with the centralized counterpart [35].
Assumption 2. (a) and are strongly convex, continuous functions.
(b) Problem (3.1) has at least one Pareto solution.
(c) Gradients of and are Lipschitz continuous.
Lemma 2 also establishes a theoretical framework for Pareto solutions for the DIOP. Consider the following scalarization of the DIOP as well. Define and as
(3.3) |
(3.4) |
where and . Let with . The DIOP (3.1) can then be rewritten as the following DSIOP:
(3.5) |
where each agent possesses the following information: , , and . The given problem (3.5) can be modeled as a distributed optimization problem [28,36,37] with scalar functions when represents a common vector known to each agent . Additionally, under Assumption 2, the following lemma holds:
Lemma 3. (a) is linear with respect to and is convex with respect to .
(b) There are Lipschitz constants and such that the partial derivative is Lipschitz continuous with respect to with and is Lipschitz continuous with respect to with .
(c) There are Lipschitz constants and such that is Lipschitz continuous with respect to with constant and is Lipschitz continuous with respect to with constant .
(d) There are Lipschitz constants and such that the partial derivative is Lipschitz continuous with respect to with constant and is Lipschitz continuous with respect to with constant .
It should be noted that although is convex with respect to and , is not a convex function. Owing to the non-convexity of , the criteria for linear convergence rates of algorithms are no longer applicable to the DIOP.
During distributed optimization, , are not necessarily equal at all times. Therefore, it is natural to treat these variables separately and impose the soft constraints , . By using the Laplacian matrix , these constraints are equivalent to and , where , , and . Consequently, problem (3.5) is reformulated as follows:
(3.6) |
Let . Recall that the dual problem of (3.6) is
(3.7) |
and the augmented Lagrangian function of (3.7) with respect to is
(3.8) |
Define by , , where is an initial value for any agent . For the vector , denote as the optimal solution set of problem (3.6) and as the saddle point set of problem (3.7), respectively. According to Assumption 2, for a proper given , there exists such that . also satisfies the following lemma, which is also a basis for the analysis of convergence:
Lemma 4. (Karush-Kuhn-Tucker condition, [38, Theorems 3.25–3.27]) With Assumption 2, for a particular given , is a solution to (3.7) if the corresponding Karush-Kuhn-Tucker conditions hold.
With Lemma 4, we introduce a distributed primal-dual algorithm as follows:
(3.9a) |
(3.9b) |
(3.9c) |
where the step-size satisfies the stated bound, in which is the largest eigenvalue of the Laplacian matrix . At the -th iteration, for all , each agent obtains only a partial gradient of its local function , and it cooperates with its neighbors to achieve a Pareto solution of problem (3.1).
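The updates (3.9a)-(3.9c) are not legible in this copy, so the following is only a generic Laplacian-based primal-dual iteration of the kind analyzed in [28], run on a toy scalar consensus problem $\min_s \sum_i (s-b_i)^2/2$ whose data are entirely ours:

```python
import numpy as np

# Toy consensus problem: min_s sum_i f_i(s) with f_i(s) = (s - b_i)^2 / 2,
# whose minimizer is the average of the b_i.
b = np.array([1.0, 2.0, 6.0])       # local data held by three agents
P = np.array([[ 1.0, -1.0,  0.0],   # Laplacian of the path graph 0 - 1 - 2
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  1.0]])

alpha = 0.05                        # step-size, small enough for stability
x = np.zeros(3)                     # local primal copies, one per agent
v = np.zeros(3)                     # local dual variables for consensus

for _ in range(5000):
    grad = x - b                                # local gradients of f_i
    x_next = x - alpha * (grad + P @ x + v)     # descent + disagreement + dual
    v = v + alpha * (P @ x)                     # ascent on the consensus gap
    x = x_next

# All local copies x_i agree on the global minimizer mean(b) = 3.
```

Each agent's update touches only its own gradient and its neighbors' states (through the sparsity of $P$), which is the defining feature shared with (3.9).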
The constraint in (3.6) is satisfied through (3.9b) and the initialization of in (3.9), while the constraint and the minimization of are handled through (3.9a) and (3.9c). Define , and . Then, with , for a proper given , where is the primal-dual solution set of problem (3.7). Algorithm (3.9) can be rewritten in the following compact form in terms of :
(3.10) |
where
(3.11) |
We have the following basic result, whose proof is in the Appendix.
Theorem 1. Under Assumptions 1 and 2, converges to the Pareto solution set .
Consider a Lyapunov function
(3.12) |
where , , and is a Lipschitz constant given in Lemma 3. Theorem 1 is based on Lemmas 5 and 6, whose proofs are also given in the Appendix.
Lemma 5. With Assumption 1, converges to with a linear convergence rate whose elements belong to : . is also summable with respect to : .
Lemma 6 is additionally presented to establish lower and upper bounds on the Lyapunov function .
Lemma 6. With Assumptions 1 and 2, the following inequality holds for the Lyapunov function :
where are Lipschitz constants given in Lemma 3, and is the largest eigenvalue of the Laplacian matrix .
The asymptotic convergence of (3.9) is established by Theorem 1, which is consistent with that of [28] for distributed optimization. It should be noted that the inclusion of the partial gradient term renders the contraction mapping principle inapplicable. In contrast to numerous distributed algorithms whose proofs rely on the contraction mapping principle [26,28,37,39], this work employs the martingale convergence theorem (Lemma 1) in the proof of Theorem 1.
In this section, we present our main results. A criterion without strong convexity is first introduced for the DIOP which, together with (3.9), implies linear convergence. Our criterion for (3.9) to achieve linear convergence is as follows.
Criterion. The continuously differentiable function has restricted quadratic gradient growth; that is, there exists a constant such that for any , , we have
(4.1) |
where is the augmented Lagrangian function defined in (3.8).
The criterion given in this paper differs from the quadratic convexity condition given in [1] and the metric sub-regularity condition discussed in [28] for distributed optimization problems with scalar functions; this criterion is given for DIOPs. On the other hand, regarding the dynamics (3.9), we will show that (4.1) is sufficient to achieve linear convergence.
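Growth conditions of this kind can indeed hold without strong convexity. For instance, a rank-deficient least-squares objective is convex but not strongly convex, yet satisfies the related Polyak-Łojasiewicz-type bound $\|\nabla f(x)\|^2\geqslant 2\mu(f(x)-f^*)$. The check below is for that PL-type bound on our own toy function, not for criterion (4.1) itself:

```python
import numpy as np

# f(x) = 0.5 * ||A x - y||^2 with a rank-deficient A is convex but NOT
# strongly convex, yet obeys the PL-type bound
#   ||grad f(x)||^2 >= 2 * mu * (f(x) - f*),
# where mu is the smallest nonzero eigenvalue of A^T A.
A = np.array([[1.0, -1.0, 0.5],
              [2.0, -2.0, 1.0]])        # rank 1, so A^T A is singular
y = A @ np.array([1.0, 0.0, 0.0])       # y in range(A), hence f* = 0
f = lambda x: 0.5 * np.sum((A @ x - y) ** 2)
grad = lambda x: A.T @ (A @ x - y)

eigs = np.linalg.eigvalsh(A.T @ A)
mu = min(e for e in eigs if e > 1e-9)   # smallest NONZERO eigenvalue

rng = np.random.default_rng(0)
ok = all(
    np.dot(grad(x), grad(x)) >= 2.0 * mu * f(x) - 1e-9
    for x in rng.standard_normal((100, 3))
)
```

Along the null space of $A$ the objective is flat, so no strong convexity constant exists, yet the gradient still grows quadratically relative to the optimality gap; this is the behavior the criterion exploits.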
Theorem 2. Under Assumptions 1 and 2 and (4.1), converges linearly to the optimal set .
Proof. If , we have that . Further, consider the case when . With Lemma 5, we obtain
(4.2) |
where the last inequality holds by Still,
(4.3) |
Equations (4.2) and (4.3) indicate that . Therefore, if Assumption 2 holds, . By Lemma 6,
(4.4) |
Substituting (4.4) into (A12) yields
where , , , and
Still, according to Lemma 5, converges linearly at a rate . Therefore, the residual terms , , and diminish at linear rates. Since , the main term converges at a linear rate, which is no less than . With Lemma 6, we obtain that , which completes the proof.
As shown in Theorem 2, (4.1) plays an important role in achieving linear convergence even in the absence of strong convexity of . In this paper, we extend the quadratic convexity condition given in [1] to (4.1) for interval functions. Criterion (4.1) also describes another linear growth condition on gradients for distributed optimization problems.
In this section, we demonstrate the following simulation:
where , and . The problem is motivated by both a centralized IOP [35] and distributed optimization [40]. The communication topology between agents is described by Figure 2.
Define . Take , , , , , , , , and . Next, initialize (3.9) by setting the step-size , with random numbers in , and . Then we investigate the convergence of (3.9). Figures 3 and 4 show the consensus of and the convergence of for the proposed algorithm. We obtain a Pareto solution after iterations. Figure 5 shows the convergence of for a centralized primal-dual algorithm (generated according to the properties of solutions in [35]) for each agent , where denotes random numbers in . In addition, we take the performance index . Its evolution is shown in Figure 6, which indicates the linear convergence of (3.9).
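Linear convergence such as that reported in Figure 6 can be checked empirically: if the residuals satisfy $e_k\approx C\rho^k$, then $\log e_k$ is affine in $k$ and the slope recovers $\log\rho$. A sketch on synthetic residuals (illustrative data, not the paper's):

```python
import numpy as np

def estimate_rate(errors):
    """Slope of log(errors) vs. iteration; exp(slope) estimates rho."""
    k = np.arange(len(errors))
    slope, _ = np.polyfit(k, np.log(errors), 1)
    return np.exp(slope)

# Synthetic residuals e_k = 0.9^k standing in for the performance index.
errors = 0.9 ** np.arange(50, dtype=float)
rho = estimate_rate(errors)
```

A fitted $\rho$ strictly below 1 over the full run is the empirical signature of linear convergence, as opposed to the slowing decay of sublinear rates.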
In this paper, we have investigated a DIOP in which the local functions are interval functions. Focusing on distributed interval optimization, this article has introduced a distributed primal-dual algorithm. A criterion has been proposed that enables linear convergence to the Pareto solution of a DIOP without strong convexity. Finally, a numerical simulation has been presented to demonstrate the linear convergence of the proposed algorithm. Given that the existing research on DIOPs primarily focuses on objective interval functions, the investigation of distributed problems involving interval constraints should be expanded in the future.
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
This research was supported by the NSFC (72101026, 62203045) and the Fundamental Research Funds for the Central Universities (FRF-TP-22-141A1).
The authors declare that there is no conflict of interest.
Proof of Lemma 5
Proof. According to Assumption 1, the adjacency matrix is irreducible and aperiodic. By [33, Theorem 6.64], with a linear convergence rate , where . With (3.9b), we have
(A1) |
According to [37,Lemma 3], holds, which completes the proof.
Proof of Lemma 6
Proof. (a) Lower bound of the Lyapunov function : Let be the projection of onto the optimal set . Since is symmetric, given Lemma 4, and . We further obtain that , and . can be further written as
(A2) |
According to Lemma 3, is convex with respect to and Lipschitz continuous with respect to . Therefore,
Since is positive semidefinite,
Therefore, which implies the lower bound of the Lyapunov function
(b) Upper bound of the Lyapunov function : According to Lemma 3 (-Lipschitz continuity of with respect to ) and Assumption 2, . Since, according to Lemma 3, is Lipschitz continuous with respect to , we have that . Note that
Moreover, for any . Through choosing , we get
which implies that .
Proof of Theorem 1
Proof.
It follows from the -Lipschitz of in Lemma 3 that
(A3) |
where the second inequality builds on the definition of . Since , we have
(A4) |
and
(A5) |
Combine (A3)–(A5). Given the definition of , we get
(A6) |
which is based on .
With Lemma 5 and , we obtain
(A7) |
Since is a convex function with respect to ,
(A8) |
According to the -Lipschitz continuity of in Lemma 3, we have
(A9) |
Combining (A8) and (A9) with (A7) yields
(A10) |
According to the -Lipschitz continuity of with respect to in (3.12), we have
(A11) |
Therefore, by using (A6), (A11), and the definition of in (3.12), we have
(A12) |
where . and .
According to Lemma 5 and Assumption 2,
(A13) |
and
(A14) |
Consequently, with Lemma 1, converges with , which implies that . By Lemma 4 and the continuity of , and .
[1] I. Necoara, Y. Nesterov, F. Glineur, Linear convergence of first order methods for non-strongly convex optimization, Math. Program., 175 (2019), 69–107. https://doi.org/10.1007/s10107-018-1232-1
[2] A. Makhdoumi, A. Ozdaglar, Convergence rate of distributed ADMM over networks, IEEE Trans. Autom. Control, 62 (2017), 5082–5095. https://doi.org/10.1109/TAC.2017.2677879
[3] X. He, T. Huang, J. Yu, C. Li, Y. Zhang, A continuous-time algorithm for distributed optimization based on multiagent networks, IEEE Trans. Syst. Man Cybern.: Syst., 49 (2017), 2700–2709. https://doi.org/10.1109/TSMC.2017.2780194
[4] H. Li, Q. Lü, X. Liao, T. Huang, Accelerated convergence algorithm for distributed constrained optimization under time-varying general directed graphs, IEEE Trans. Syst. Man Cybern.: Syst., 50 (2018), 2612–2622. https://doi.org/10.1109/TSMC.2018.2823901
[5] Q. Wang, J. Chen, B. Xin, X. Zeng, Distributed optimal consensus for Euler-Lagrange systems based on event-triggered control, IEEE Trans. Syst. Man Cybern.: Syst., 51 (2021), 4588–4598. https://doi.org/10.1109/TSMC.2019.2944857
[6] J. Guo, R. Jia, R. Su, Y. Zhao, Identification of FIR systems with binary-valued observations against data tampering attacks, IEEE Trans. Syst. Man Cybern.: Syst., 53 (2023), 5861–5873. https://doi.org/10.1109/TSMC.2023.3276352
[7] J. Guo, X. Wang, W. Xue, Y. Zhao, System identification with binary-valued observations under data tampering attacks, IEEE Trans. Autom. Control, 66 (2020), 3825–3832. https://doi.org/10.1109/TAC.2020.3029325
[8] X. Zeng, Y. Peng, Y. Hong, Distributed algorithm for robust resource allocation with polyhedral uncertain allocation parameters, J. Syst. Sci. Complexity, 31 (2018), 103–119. https://doi.org/10.1007/s11424-018-7145-5
[9] V. Kekatos, G. B. Giannakis, Distributed robust power system state estimation, IEEE Trans. Power Syst., 28 (2013), 1617–1626. https://doi.org/10.1109/TPWRS.2012.2219629
[10] S. Sra, S. Nowozin, S. J. Wright, Optimization for Machine Learning, MIT Press, 2012.
[11] B. Q. Hu, S. Wang, A novel approach in uncertain programming part I: new arithmetic and order relation for interval numbers, J. Ind. Manage. Optim., 2 (2006), 351–371. https://doi.org/10.3934/jimo.2006.2.351
[12] L. Wu, M. Shahidehpour, Z. Li, Comparison of scenario-based and interval optimization approaches to stochastic SCUC, IEEE Trans. Power Syst., 27 (2012), 913–921. https://doi.org/10.1109/TPWRS.2011.2164947
[13] A. Neumaier, Interval Methods for Systems of Equations, Cambridge University Press, 1990.
[14] J. Rohn, Positive definiteness and stability of interval matrices, SIAM J. Matrix Anal. Appl., 15 (1994), 175–184. https://doi.org/10.1137/S0895479891219216
[15] V. I. Levin, Nonlinear optimization under interval uncertainty, Cybern. Syst. Anal., 35 (1999), 297–306. https://doi.org/10.1007/BF02733477
[16] T. Saeed, S. Treanţă, New classes of interval-valued variational problems and inequalities, Results Control Optim., 13 (2023), 100324. https://doi.org/10.1016/j.rico.2023.100324
[17] M. Ciontescu, S. Treanţă, On some connections between interval-valued variational control problems and the associated inequalities, Results Control Optim., 12 (2023), 100300. https://doi.org/10.1016/j.rico.2023.100300
[18] Y. Guo, G. Ye, W. Liu, D. Zhao, S. Treanţă, Solving nonsmooth interval optimization problems based on interval-valued symmetric invexity, Chaos, Solitons Fractals, 174 (2023), 113834. https://doi.org/10.1016/j.chaos.2023.113834
[19] S. Treanţă, T. Saeed, On weak variational control inequalities via interval analysis, Mathematics, 11 (2023), 2177. https://doi.org/10.3390/math11092177
[20] I. Hisao, T. Hideo, Multiobjective programming in optimization of the interval objective function, Eur. J. Oper. Res., 48 (1990), 219–225. https://doi.org/10.1016/0377-2217(90)90375-L
[21] S. T. Liu, R. T. Wang, A numerical solution method to interval quadratic programming, Appl. Math. Comput., 189 (2007), 1274–1281. https://doi.org/10.1016/j.amc.2006.12.007
[22] C. Jiang, X. Han, G. Liu, G. Liu, A nonlinear interval number programming method for uncertain optimization problems, Eur. J. Oper. Res., 188 (2008), 1–13. https://doi.org/10.1016/j.ejor.2007.03.031
[23] A. Jayswal, I. Stancu-Minasian, I. Ahmad, On sufficiency and duality for a class of interval-valued programming problems, Appl. Math. Comput., 218 (2011), 4119–4127. https://doi.org/10.1016/j.amc.2011.09.041
[24] M. Hladík, Interval linear programming: a survey, in Linear Programming - New Frontiers in Theory and Applications, (2012), 85–120.
[25] A. Bellet, Y. Liang, A. B. Garakani, M. F. Balcan, F. Sha, A distributed Frank-Wolfe algorithm for communication-efficient sparse learning, in Proceedings of the 2015 SIAM International Conference on Data Mining (SDM), (2015), 478–486. https://doi.org/10.1137/1.9781611974010.54
[26] G. Qu, N. Li, Accelerated distributed Nesterov gradient descent, IEEE Trans. Autom. Control, 65 (2020), 2566–2581. https://doi.org/10.1109/TAC.2019.2937496
[27] A. Nedic, A. Olshevsky, W. Shi, Achieving geometric convergence for distributed optimization over time-varying graphs, SIAM J. Optim., 27 (2017), 2597–2633. https://doi.org/10.1137/16M1084316
[28] S. Liang, L. Y. Wang, G. Yin, Exponential convergence of distributed primal-dual convex optimization algorithm without strong convexity, Automatica, 105 (2019), 298–306. https://doi.org/10.1016/j.automatica.2019.04.004
[29] X. Yi, S. Zhang, T. Yang, K. H. Johansson, T. Chai, Exponential convergence for distributed optimization under the restricted secant inequality condition, IFAC-PapersOnLine, 53 (2020), 2672–2677. https://doi.org/10.1016/j.ifacol.2020.12.383
[30] S. Treanţă, LU-optimality conditions in optimization problems with mechanical work objective functionals, IEEE Trans. Neural Networks Learn. Syst., 33 (2021), 4971–4978. https://doi.org/10.1109/TNNLS.2021.3066196
[31] H. C. Wu, On interval-valued nonlinear programming problems, J. Math. Anal. Appl., 338 (2008), 299–316. https://doi.org/10.1016/j.jmaa.2007.05.023
[32] B. T. Polyak, Introduction to Optimization, Chapman and Hall, 1987.
[33] R. Durrett, Probability: Theory and Examples, Cambridge University Press, 2010. https://doi.org/10.1017/CBO9780511779398
[34] T. Maeda, On optimization problems with set-valued objective maps: existence and optimality, J. Optim. Theory Appl., 153 (2012), 263–279. https://doi.org/10.1007/s10957-011-9952-x
[35] A. K. Bhurjee, G. Panda, Efficient solution of interval optimization problem, Math. Methods Oper. Res., 76 (2012), 273–288. https://doi.org/10.1007/s00186-012-0399-0
[36] S. S. Ram, A. Nedić, V. V. Veeravalli, Distributed stochastic subgradient projection algorithms for convex optimization, J. Optim. Theory Appl., 147 (2010), 516–545. https://doi.org/10.1007/s10957-010-9737-7
[37] A. Nedic, A. Ozdaglar, Distributed subgradient methods for multi-agent optimization, IEEE Trans. Autom. Control, 54 (2009), 48–61. https://doi.org/10.1109/TAC.2008.2009515
[38] A. P. Ruszczyński, Nonlinear Optimization, Princeton University Press, 2006. https://doi.org/10.1515/9781400841059
[39] A. Nedic, A. Ozdaglar, P. A. Parrilo, Constrained consensus and optimization in multi-agent networks, IEEE Trans. Autom. Control, 55 (2010), 922–938. https://doi.org/10.1109/TAC.2010.2041686
[40] A. Nedić, A. Olshevsky, Stochastic gradient-push for strongly convex functions on time-varying directed graphs, IEEE Trans. Autom. Control, 61 (2016), 3936–3947. https://doi.org/10.1109/TAC.2016.2529285