Unconstrained optimization problems often arise from mining of big data and scientific computing. On the basis of a modified gradient-difference, this article aims to present a new three-term conjugate gradient algorithm to efficiently solve unconstrained optimization problems. Compared with the existing nonlinear conjugate gradient algorithms, the search directions in this algorithm are always sufficiently descent independent of any line search, as well as possessing a conjugacy property. Using the standard Wolfe line search, global and local convergence of the proposed algorithm is proved under mild assumptions. By implementing the developed algorithm to solve 750 benchmark test problems available in the literature, it is shown that its numerical performance is remarkable, especially in comparison with that of other similar efficient algorithms.
Citation: Jie Guo, Zhong Wan. A new three-term conjugate gradient algorithm with modified gradient-differences for solving unconstrained optimization problems[J]. AIMS Mathematics, 2023, 8(2): 2473-2488. doi: 10.3934/math.2023128
Since unconstrained optimization problems often arise from scientific computing and mining of big data [1,2,3,4], it is valuable to develop efficient numerical algorithms to solve these problems. However, no algorithm available in the literature appears to be in a commanding position over other similar algorithms on all unconstrained optimization problems [5,6,7,8,9,10]. For this reason, many researchers have been studying new numerical methods to solve unconstrained optimization problems [1,11].
Mathematically, an unconstrained optimization problem is written as
\min f(x), \quad x \in \mathbb{R}^n, | (1.1) |
where f:Rn→R is continuously differentiable such that its gradient function g: Rn→Rn is available. By gk we denote the gradient of f at xk, i.e., gk=g(xk).
Owing to their modest requirements of computation and storage, conjugate gradient (CG) methods are usually used to solve problem (1.1). In a CG method, the iterative format to generate a sequence of approximate optimal solutions is
x_{k+1} = x_k + \alpha_k d_k, \quad k = 0, 1, 2, \ldots, | (1.2) |
where x0 is an arbitrarily chosen initial solution, dk is a search direction along which an optimal solution of problem (1.1) is sought, and αk>0 is a step size found by line search along dk. In general, the search directions in the classical CG methods are given by
d_k = \begin{cases} -g_k, & \text{if } k = 0, \\ -g_k + \beta_k d_{k-1}, & \text{otherwise}, \end{cases} | (1.3) |
where βk is the so-called conjugate parameter, which is often computed by one of the following classical formulas [2,12]:
\beta_k^{HS} = \frac{g_k^T y_{k-1}}{d_{k-1}^T y_{k-1}}, \quad \beta_k^{FR} = \frac{\|g_k\|^2}{\|g_{k-1}\|^2}, \quad \beta_k^{PRP} = \frac{g_k^T y_{k-1}}{\|g_{k-1}\|^2}, \quad \beta_k^{CD} = \frac{\|g_k\|^2}{-d_{k-1}^T g_{k-1}}, \quad \beta_k^{LS} = \frac{g_k^T y_{k-1}}{-d_{k-1}^T g_{k-1}}, \quad \beta_k^{DY} = \frac{\|g_k\|^2}{d_{k-1}^T y_{k-1}}. | (1.4) |
In (1.4), yk−1=gk−gk−1 when k≥1. When αk in (1.2) is the exact step size and problem (1.1) is a strictly convex quadratic minimization problem, the values of all βk in (1.4) are the same. However, for a generic nonlinear objective function, it is often difficult to find the exact step size. Thus, an inexact line search with lower computational cost is generally adopted. For instance, using the strong Wolfe inexact line search, Riahi and Qattan[13] established global convergence theory of the Fletcher-Reeves CG method and proved its property of local linear convergence. Unfortunately, in most cases, when the Armijo-type line search is used to find the step size αk, it is often difficult to establish global convergence of the classical CG methods, where the search direction dk is not necessarily descent. For this reason, many variants of CG methods have been proposed to overcome the above difficulty. For instance, using a modified Armijo-type line search, an improved spectral conjugate algorithm was developed in [6] and its global convergence was proved. Numerical tests also showed the advantages of this algorithm.
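To make the formulas in (1.4) concrete, the following minimal Python sketch evaluates the six classical conjugate parameters for given gradients and a previous direction; the function name and the random test data are ours and serve only as an illustration.

```python
import numpy as np

def classical_betas(g_new, g_old, d_old):
    """Evaluate the six classical conjugate parameters in (1.4).

    g_new, g_old: gradients g_k and g_{k-1}; d_old: previous direction d_{k-1}.
    """
    y = g_new - g_old                       # y_{k-1} = g_k - g_{k-1}
    return {
        "HS":  g_new @ y / (d_old @ y),
        "FR":  g_new @ g_new / (g_old @ g_old),
        "PRP": g_new @ y / (g_old @ g_old),
        "CD":  g_new @ g_new / (-(d_old @ g_old)),
        "LS":  g_new @ y / (-(d_old @ g_old)),
        "DY":  g_new @ g_new / (d_old @ y),
    }

# Quick illustration with arbitrary data (not taken from any test problem).
rng = np.random.default_rng(0)
g_old = rng.standard_normal(5)
g_new = rng.standard_normal(5)
print(classical_betas(g_new, g_old, -g_old))    # d_0 = -g_0
```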
As remarkable extensions of the classical CG methods, three-term CG methods have been attracting extensive research interest [8,9,10,14,15,16]. The first three-term CG method was proposed in [14], which chooses the search directions by
d_{k+1} = -g_{k+1} + \beta_k d_k + \gamma_k d_t, | (1.5) |
where βk=βHSk (or βFRk, βDYk, etc.), dt (t≤k−1) is a restart direction, and
\gamma_k = \begin{cases} 0, & \text{if } t = k-1; \\ \dfrac{g_{k+1}^T y_t}{d_t^T y_t}, & \text{if } t < k-1. \end{cases} | (1.6) |
By numerical tests, it was shown in [14] that the automatic restarts based on the gradient information in the third term of (1.5), given by (1.6), may improve convergence of the algorithm.
Nazareth [15] presented another method of choosing the directions, given by
d_{k+1} = -y_k + \frac{y_k^T y_k}{y_k^T d_k} d_k + \frac{y_{k-1}^T y_k}{y_{k-1}^T d_{k-1}} d_{k-1}, | (1.7) |
where d−1=d0=0. It was proved that, without requiring the exact line search, the algorithm based on (1.7) terminates finitely when applied to convex quadratic minimization problems.
In [10,17], two three-term conjugate gradient methods were given by
d_{k+1} = -g_{k+1} + \beta_k^{PRP} d_k - \frac{g_{k+1}^T d_k}{g_k^T g_k} y_k, | (1.8) |
and
d_{k+1} = -g_{k+1} + \beta_k^{HS} d_k - \frac{g_{k+1}^T d_k}{d_k^T y_k} y_k, | (1.9) |
respectively. It was proved that, independent of any line search, the directions in (1.8) and (1.9) are sufficiently descent. Since (1.8) and (1.9) can reduce to the standard PRP and HS conjugate gradient methods in (1.4) under the exact line search, respectively, they are regarded as two modified versions of the standard CG methods. It is noteworthy that the search directions in the standard PRP and HS conjugate gradient methods are not necessarily descent in general.
In [8,9,16], Andrei suggested three descent three-term CG methods, in which the search directions are computed by the following different formats, where sk=xk+1−xk and yk=gk+1−gk:
d_{k+1} = -\frac{y_k^T s_k}{\|g_k\|^2} g_{k+1} + \frac{y_k^T g_{k+1}}{\|g_k\|^2} s_k - \frac{s_k^T g_{k+1}}{\|g_k\|^2} y_k, | (1.10) |
d_{k+1} = -g_{k+1} - \left( \left(1 + \frac{\|y_k\|^2}{y_k^T s_k}\right) \frac{s_k^T g_{k+1}}{y_k^T s_k} - \frac{y_k^T g_{k+1}}{y_k^T s_k} \right) s_k - \frac{s_k^T g_{k+1}}{y_k^T s_k} y_k, | (1.11) |
and
d_{k+1} = -g_{k+1} - \left( \left(1 + 2\frac{\|y_k\|^2}{y_k^T s_k}\right) \frac{s_k^T g_{k+1}}{y_k^T s_k} - \frac{y_k^T g_{k+1}}{y_k^T s_k} \right) s_k - \frac{s_k^T g_{k+1}}{y_k^T s_k} y_k. | (1.12) |
All of them satisfy the conjugacy condition; except for (1.10), the search directions in (1.11) and (1.12) are descent when the Wolfe line search is used. Numerical experiments indicated that the CG method in [8] outperforms the other six algorithms available in the literature.
Recently, Liu et al. [18] constructed two three-term CG methods, specified by:
d_{k+1} = -g_{k+1} + \frac{g_{k+1}^T y_k}{\|d_k\|^2} d_k - \frac{g_{k+1}^T d_k}{\|d_k\|^2} y_k, | (1.13) |
and
d_{k+1} = -g_{k+1} + \frac{g_{k+1}^T (y_k - d_k)}{\|d_k\|^2} d_k - \frac{g_{k+1}^T d_k}{\|d_k\|^2} y_k, | (1.14) |
respectively. A remarkable property of these search directions is that they were proved to be sufficiently descent under any line search. However, it is unclear whether these directions in (1.13) and (1.14) satisfy any conjugacy condition or not.
Motivated by a need to further improve numerical efficiency of algorithms, we intend to develop a novel three-term CG algorithm such that the search directions may simultaneously possess descent and conjugate properties. Then, using the standard Wolfe inexact line search, we attempt to prove its global and local convergence under appropriate assumptions, and test its numerical performance by solving benchmark test problems.
The remainder of this article is organized as follows. The new three-term CG algorithm is first developed in Section 2. In Section 3, global convergence of this algorithm is proved. Section 4 is devoted to testing of its numerical performance. Conclusions are drawn in the last section.
In this section, we state the ideas used to develop a new algorithm, and then present its computational framework.
Combining the ideas in [6,18], we construct the search direction by
d_{k+1} = \begin{cases} -g_{k+1}, & \text{if } w_k = 0, \\ -g_{k+1} + \dfrac{g_{k+1}^T (y_k - s_k)}{w_k} s_k - \dfrac{g_{k+1}^T s_k}{w_k} y_k, & \text{otherwise}, \end{cases} | (2.1) |
where w_k = \max\{|s_k^T \bar{y}_k|, \; s_k^T y_k\}, d_0 = -g_0, and
\bar{y}_k = \left(I_n - \frac{g_{k+1} g_{k+1}^T}{\|g_{k+1}\|^2}\right) y_k
is defined as done in [6]. Clearly, ˉyk can be regarded as a modified difference of gradients. For this reason, we call the proposed CG method in this paper, where the search directions are defined by (2.1), a three-term CG method with modified gradient-differences. In essence, this three-term CG method is an extension of the two-term spectral conjugate gradient method in [6].
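The following is a minimal Python sketch of the modified gradient-difference and the search direction (2.1); the helper names are ours, and the random-data check at the end simply illustrates the sufficient descent property established in Proposition 1 below.

```python
import numpy as np

def modified_gradient_difference(g_next, y):
    """bar{y}_k = (I - g_{k+1} g_{k+1}^T / ||g_{k+1}||^2) y_k, computed without forming I."""
    return y - g_next * (g_next @ y) / (g_next @ g_next)

def nttcg_direction(g_next, s, y):
    """Search direction d_{k+1} given by (2.1)."""
    y_bar = modified_gradient_difference(g_next, y)
    w = max(abs(s @ y_bar), s @ y)
    if w == 0.0:
        return -g_next
    return -g_next + (g_next @ (y - s)) / w * s - (g_next @ s) / w * y

# Sanity check of the sufficient descent property on random data:
# g_{k+1}^T d_{k+1} <= -||g_{k+1}||^2 (up to rounding).
rng = np.random.default_rng(1)
g_next, s, y = rng.standard_normal((3, 6))
d = nttcg_direction(g_next, s, y)
assert g_next @ d <= -(g_next @ g_next) + 1e-12
```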
We first prove the following property of the search directions in (2.1).
Proposition 1. Let dk be given by (2.1). Then, for any k≥0, the following inequality holds:
g_k^T d_k \le -\|g_k\|^2. | (2.2) |
Proof. By definition, when k=0, we have g_0^T d_0 = -\|g_0\|^2. When k>0, we have g_k^T d_k = -\|g_k\|^2 if w_{k-1} = 0; otherwise, it is true that
g_k^T d_k = -g_k^T g_k + \frac{g_k^T (y_{k-1} - s_{k-1})}{w_{k-1}} g_k^T s_{k-1} - \frac{g_k^T s_{k-1}}{w_{k-1}} g_k^T y_{k-1} = -\|g_k\|^2 - \frac{(g_k^T s_{k-1})^2}{w_{k-1}} \le -\|g_k\|^2. | (2.3) |
Consequently, for any k≥0,
g_k^T d_k \le -\|g_k\|^2, | (2.4) |
i.e., dk is always sufficiently descent.
Remark 1. As pointed out in [19,20,21,22], such a sufficiently descent condition like (2.4) plays a critical role in proving global convergence of CG methods.
Based on the above nice property of the search directions (2.1) in Proposition 1, we now state the framework of our algorithm for solving the unconstrained optimization problem (1.1).
Algorithm 1. (New three-term conjugate gradient algorithm (NTTCG))
Step 0. Take an initial (approximate) solution x0∈Rn and an initial search direction d0=−g0. Choose the parameters 0<ρ<σ<1 used in the line search. The error tolerance is ε∈(0,1). Set k:=0.
Step 1. If ‖gk‖∞≤ε, then the algorithm stops.
Step 2. Determine the step size αk by the following standard Wolfe line search:
\begin{cases} f(x_k + \alpha_k d_k) - f(x_k) \le \rho \alpha_k g_k^T d_k, \\ g_{k+1}^T d_k \ge \sigma g_k^T d_k. \end{cases} | (2.5) |
Step 3. Update the solution by xk+1:=xk+αkdk. Compute gk+1, and compute a search direction dk+1 given in (2.1).
Step 4. Set k:=k+1. Return to Step 1.
Remark 2. By Proposition 1, we know that dk+1 in Step 3 of Algorithm 1 is a sufficiently descent direction at xk+1, which ensures that the conducted line search in Step 2 of Algorithm 1 stops in finitely many steps [23].
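For readers who prefer code, the rough Python sketch below shows one possible implementation of Algorithm 1; it is not the authors' Fortran 77 code, and the helper name nttcg is ours. The Wolfe step is delegated to scipy.optimize.line_search, which enforces the strong Wolfe conditions and therefore also the standard Wolfe conditions (2.5); the fallback step size used when the search fails is our own safeguard.

```python
import numpy as np
from scipy.optimize import line_search

def nttcg(f, grad, x0, eps=1e-6, rho=1e-4, sigma=0.01, max_iter=10000):
    """A rough sketch of Algorithm 1 (NTTCG); not the authors' implementation."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                              # Step 0: d_0 = -g_0
    for _ in range(max_iter):
        if np.max(np.abs(g)) <= eps:                    # Step 1: ||g_k||_inf <= eps
            break
        # Step 2: scipy's line_search enforces the strong Wolfe conditions,
        # which in particular imply the standard Wolfe conditions (2.5).
        alpha = line_search(f, grad, x, d, gfk=g, c1=rho, c2=sigma)[0]
        if alpha is None:                               # our own safeguard if the search fails
            alpha = 1e-4
        s = alpha * d                                   # Step 3: x_{k+1} = x_k + alpha_k d_k
        x = x + s
        g_new = grad(x)
        gn2 = g_new @ g_new
        if gn2 == 0.0:                                  # exact stationary point: stop here
            g = g_new
            break
        y = g_new - g
        y_bar = y - g_new * (g_new @ y) / gn2           # modified gradient-difference
        w = max(abs(s @ y_bar), s @ y)
        d = -g_new if w == 0.0 else (-g_new + (g_new @ (y - s)) / w * s
                                     - (g_new @ s) / w * y)   # direction (2.1)
        g = g_new                                       # Step 4
    return x

# Tiny illustration on the strictly convex quadratic f(x) = x^T x (not one of the paper's test problems).
print(nttcg(lambda x: x @ x, lambda x: 2.0 * x, [3.0, -4.0, 5.0]))
```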
Remark 3. It follows from (2.4) that the inequalities:
\|g_k\|^2 \le |d_k^T g_k| \le \|d_k\| \|g_k\|
hold for any k≥0. Thus, ‖dk‖≥‖gk‖ is true for any k≥0.
In this section, we study conjugacy property of the search directions defined by (2.1), and establish global and local convergence theory of Algorithm 1.
We first study the conjugacy property of the search directions generated by Algorithm 1 in the case that the objective function in problem (1.1) is convex quadratic.
Specifically, a problem of convex quadratic minimization is written as:
\min f(x) = \frac{1}{2} x^T Q x + q^T x + c, | (3.1) |
where Q∈Rn×n is a given positive definite matrix and q∈Rn is a given vector. When Algorithm 1 is applied to solve problem (3.1), we can prove that the search directions in (2.1) have the following property.
Proposition 2. For problem (3.1), let dk be chosen by (2.1). Then, by the exact line search, dk+1 and dk are conjugate with respect to Q for any k≥0.
Proof. With the exact line search, we have
g_{k+1}^T s_k = 0, | (3.2) |
and
s_k^T \bar{y}_k = s_k^T y_k = s_k^T (g_{k+1} - g_k) = s_k^T Q s_k. | (3.3) |
Consequently,
d_{k+1} = -g_{k+1} + \frac{g_{k+1}^T (y_k - s_k)}{\max\{|s_k^T \bar{y}_k|, s_k^T y_k\}} s_k - \frac{g_{k+1}^T s_k}{\max\{|s_k^T \bar{y}_k|, s_k^T y_k\}} y_k = -g_{k+1} + \frac{g_{k+1}^T y_k}{|s_k^T \bar{y}_k|} s_k = -g_{k+1} + \frac{g_{k+1}^T Q s_k}{s_k^T Q s_k} s_k = -g_{k+1} + \frac{g_{k+1}^T Q d_k}{d_k^T Q d_k} d_k. | (3.4) |
Thus, for k=0,
d_1^T Q d_0 = g_1^T Q g_0 - \frac{g_1^T Q g_0}{g_0^T Q g_0} g_0^T Q g_0 = 0. | (3.5) |
For k>0, we have
d_{k+1}^T Q d_k = -g_{k+1}^T Q d_k + \frac{g_{k+1}^T Q d_k}{d_k^T Q d_k} d_k^T Q d_k = 0. | (3.6) |
In other words, dk+1 and dk are conjugate with respect to Q for any k≥0.
Remark 4. The basic idea behind the search directions (2.1) is to guarantee their sufficient descent property by appropriately modifying the steepest descent direction −gk+1 (see Proposition 1). It is well known that the conjugate directions in the classic conjugate direction method are not necessarily descent. Proposition 2 demonstrates that our search directions also satisfy the so-called conjugacy condition in the case that the objective function is convex quadratic, although it is not true that any two directions (for example, dk+2 and dk) are conjugate with respect to the matrix Q, as in the classic conjugate direction method. In a word, compared with the majority of the existing three-term conjugate gradient methods in the literature, the search directions (2.1) have the advantage of simultaneously possessing descent and conjugate properties.
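The conjugacy stated in Proposition 2 is easy to check numerically. The short sketch below applies the direction (2.1) with the exact step size α_k = −g_k^T d_k / (d_k^T Q d_k) to a randomly generated instance of (3.1) and prints d_{k+1}^T Q d_k, which should vanish up to rounding; the test data are ours.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 6
A = rng.standard_normal((n, n))
Q = A @ A.T + n * np.eye(n)              # a symmetric positive definite matrix
q = rng.standard_normal(n)
grad = lambda x: Q @ x + q               # gradient of f(x) = 0.5 x^T Q x + q^T x

x = rng.standard_normal(n)
g = grad(x)
d = -g
for k in range(4):
    alpha = -(g @ d) / (d @ Q @ d)       # exact line search for the quadratic model (3.1)
    s = alpha * d
    x = x + s
    g_new = grad(x)
    y = g_new - g                        # equals Q s for a quadratic objective
    y_bar = y - g_new * (g_new @ y) / (g_new @ g_new)
    w = max(abs(s @ y_bar), s @ y)
    d_new = -g_new + (g_new @ (y - s)) / w * s - (g_new @ s) / w * y   # direction (2.1)
    print(f"k = {k}:  d_(k+1)^T Q d_k = {d_new @ Q @ d:.2e}")          # ~ 0, as in Proposition 2
    d, g = d_new, g_new
```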
The following result further states global convergence of Algorithm 1 when it is implemented to solve a uniformly convex optimization problem, an extension of the convex quadratic minimization problem (3.1).
Theorem 1. Let f:Rn→R be twice continuously differentiable and uniformly convex on a level set Ω={x∈Rn|f(x)≤f(x0)}, i.e., there exists a positive constant μ such that for all x,y∈Ω,
(g(x) - g(y))^T (x - y) \ge \mu \|x - y\|^2
holds. Let {gk} be the gradient sequence generated by Algorithm 1. Then,
\liminf_{k \to \infty} \|g_k\| = 0. | (3.7) |
Proof. For the sake of contradiction, we suppose that there exists a constant ε>0 such that ‖gk‖≥ε for all k∈N.
Since the step size satisfies (2.5), the sequence {f(xk)} generated by Algorithm 1 is decreasing, and hence all the iterates xk lie in the level set Ω. Since f is twice continuously differentiable and uniformly convex on Ω, the level set Ω is a bounded, closed and convex set, i.e., there exists a positive constant B>0 such that
\|x\| \le B, \quad \forall x \in \Omega. | (3.8) |
In addition, the gradient of f is also Lipschitz continuous on Ω, i.e., there exists a constant L>0 such that for all x,y∈Ω, the following inequality holds:
\|g(x) - g(y)\| \le L \|x - y\|. | (3.9) |
Furthermore, we can prove boundedness of the sequence {dk}. Actually, from (3.9) and the uniform convexity of f, we have ‖yk‖≤L‖sk‖ and sTkyk≥μ‖sk‖2. Thus, by the Cauchy-Schwarz inequality, we have
\|d_{k+1}\| \le \|g_{k+1}\| + \frac{\|g_{k+1}\| (\|y_k\| + \|s_k\|) \|s_k\|}{s_k^T y_k} + \frac{\|g_{k+1}\| \|s_k\| \|y_k\|}{s_k^T y_k} \le \|g_{k+1}\| + \frac{\|g_{k+1}\| (2\|y_k\| + \|s_k\|) \|s_k\|}{\mu \|s_k\|^2} \le \|g_{k+1}\| + \frac{2L \|g_{k+1}\| \|s_k\|}{\mu \|s_k\|} + \frac{\|g_{k+1}\|}{\mu} = \left(1 + \frac{2L + 1}{\mu}\right) \|g_{k+1}\|. | (3.10) |
Take M = 1 + \frac{2L + 1}{\mu}. From (3.10) and ‖gk‖≥ε, it follows that
\sum_{k=0}^{\infty} \frac{\|g_k\|^4}{\|d_k\|^2} \ge \sum_{k=0}^{\infty} \frac{\varepsilon^2}{M^2} = +\infty. | (3.11) |
Using (3.9) and the second inequality in (2.5), we obtain
(\sigma - 1) g_k^T d_k \le (g_{k+1} - g_k)^T d_k \le \|g_{k+1} - g_k\| \|d_k\| \le \alpha_k L \|d_k\|^2. | (3.12) |
Consequently,
f(x_k) - f(x_{k+1}) \ge -\rho \alpha_k g_k^T d_k \ge \rho \alpha_k \|g_k\|^2 \ge \frac{\rho (1 - \sigma) \|g_k\|^4}{L \|d_k\|^2}, | (3.13) |
hence,
f(x_0) - \lim_{k \to \infty} f(x_{k+1}) \ge \frac{\rho (1 - \sigma)}{L} \sum_{k=0}^{\infty} \frac{\|g_k\|^4}{\|d_k\|^2}. | (3.14) |
From (3.14) and the boundedness of f on Ω, we know that
\sum_{k=0}^{\infty} \frac{\|g_k\|^4}{\|d_k\|^2} \le \frac{L \left( f(x_0) - \lim_{k \to \infty} f(x_{k+1}) \right)}{\rho (1 - \sigma)} < +\infty, | (3.15) |
which contradicts (3.11). The proof is completed.
We can also prove that Algorithm 1 is R-linearly convergent when the objective function is uniformly convex.
Theorem 2. Suppose that f:Rn→R is twice continuously differentiable and uniformly convex on the level set Ω, and that the sequence {xk} generated by Algorithm 1 converges to the unique optimal solution x∗. Then for all k>0, there exist constants a>0 and b∈(0,1) such that
|f(x_k) - f(x^*)| \le a b^k. | (3.16) |
Proof. Since f is twice continuously differentiable and uniformly convex on Ω, it follows from (3.2)-(3.4) and (3.12) in [24] that there exist constants \hat{\lambda} > \lambda > 0 and \hat{\zeta} > \zeta > 0 such that for all x∈Ω, the following inequalities hold:
\zeta \|x - x^*\|^2 \le \lambda \|g(x)\|^2 \le f(x) - f(x^*) \le \hat{\lambda} \|g(x)\|^2 \le \hat{\zeta} \|x - x^*\|^2. | (3.17) |
Thus, from the first inequality in (2.5), we have
f(x_{k+1}) - f(x^*) \le (f(x_k) - f(x^*)) + \rho \alpha_k g_k^T d_k \le (f(x_k) - f(x^*)) - \frac{\rho (1 - \sigma) \|g_k\|^2}{L \|d_k\|^2} \|g_k\|^2 \le (f(x_k) - f(x^*)) - \frac{\rho (1 - \sigma)}{L M^2} \|g_k\|^2 \le (f(x_k) - f(x^*)) - \frac{\rho (1 - \sigma)}{\hat{\lambda} L M^2} (f(x_k) - f(x^*)) = \left(1 - \frac{\rho (1 - \sigma)}{\hat{\lambda} L M^2}\right) (f(x_k) - f(x^*)), | (3.18) |
where the second inequality follows from (3.12) and (2.2), the third inequality follows from (3.10), and the last inequality follows from (3.17). Consequently,
f(x_{k+1}) - f(x^*) \le \left(1 - \frac{\rho (1 - \sigma)}{\hat{\lambda} L M^2}\right)^{k+1} (f(x_0) - f(x^*)). | (3.19) |
Taking a = f(x_0) - f(x^*) and b = 1 - \frac{\rho (1 - \sigma)}{\hat{\lambda} L M^2}, we obtain the desired result (3.16).
For non-convex minimization problems, we can prove that the search directions in (2.1) satisfy an approximate Dai-Liao conjugate condition.
Proposition 3. Suppose that sTkyk>|sTkˉyk|. Then, dk+1 in (2.1) satisfies the following approximate Dai-Liao conjugate condition:
d_{k+1}^T y_k = -t_k g_{k+1}^T s_k. | (3.20) |
Proof. When sTkyk>|sTkˉyk|, it holds that
d_{k+1}^T y_k = -g_{k+1}^T y_k + \frac{g_{k+1}^T (y_k - s_k)}{s_k^T y_k} s_k^T y_k - \frac{g_{k+1}^T s_k}{s_k^T y_k} \|y_k\|^2 = -\left(1 + \frac{\|y_k\|^2}{s_k^T y_k}\right) g_{k+1}^T s_k = -t_k g_{k+1}^T s_k, | (3.21) |
where t_k = 1 + \frac{\|y_k\|^2}{s_k^T y_k} > 0. The result (3.20) has been proved.
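As a quick numerical illustration of Proposition 3 (not part of the original analysis), the following sketch draws random vectors, keeps only the draws satisfying the hypothesis s_k^T y_k > |s_k^T \bar{y}_k|, and verifies the identity (3.20).

```python
import numpy as np

rng = np.random.default_rng(5)
hits = 0
for _ in range(100):
    g_next, s, y = rng.standard_normal((3, 8))
    y_bar = y - g_next * (g_next @ y) / (g_next @ g_next)
    if s @ y <= abs(s @ y_bar):
        continue                                  # hypothesis of Proposition 3 not satisfied
    w = s @ y                                     # = max{|s^T y_bar|, s^T y} under the hypothesis
    d_next = -g_next + (g_next @ (y - s)) / w * s - (g_next @ s) / w * y
    t = 1.0 + (y @ y) / (s @ y)
    assert np.isclose(d_next @ y, -t * (g_next @ s))   # identity (3.20)
    hits += 1
print(f"(3.20) verified on {hits} random draws satisfying the hypothesis")
```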
Remark 5. Although the condition (3.20) in Proposition 3 does not always hold at any iteration, it does not affect global convergence of Algorithm 1. By a simple example, we can show that this condition is often satisfied (see Table 1). In addition, our numerical tests will also show advantages of the search directions given by (2.1).
Table 1. Some values of s_k^T y_k and \left|s_k^T \bar{y}_k\right| generated by Algorithm 1 for Example 1.
k | 10 | 11 | 12 | 13 | 14 | 15 |
s_k^T y_k | 0.246690 | 0.391931 | 0.320335 | 0.225285 | 0.192523 | 0.555195 |
\left|s_k^T \bar{y}_k\right| | 0.245609 | 0.391931 | 0.302341 | 0.230323 | 0.192523 | 0.257161 |
Example 1. Consider the Rosenbrock problem
\min f(x) = 100 (x_2 - x_1^2)^2 + (1 - x_1)^2
with the initial point x_0 = (-1.2, 1).
We implement Algorithm 1 to solve the Rosenbrock problem, and partly present the obtained values of sTkyk and |sTkˉyk| in Table 1.
From Table 1, it is easy to see that at the 10th, 12th and 15th iterations, the inequality s_k^T y_k > |s_k^T \bar{y}_k| holds, and hence the condition (3.20) is satisfied at these iterations.
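A rough Python sketch of this experiment is given below; it monitors s_k^T y_k and |s_k^T \bar{y}_k| along NTTCG-type iterations on the Rosenbrock problem. Since the line-search implementation used here (scipy's strong Wolfe search) and its safeguards differ from the authors' Fortran code, the printed values will not reproduce Table 1 exactly.

```python
import numpy as np
from scipy.optimize import line_search

def rosenbrock(x):
    return 100.0 * (x[1] - x[0]**2)**2 + (1.0 - x[0])**2

def rosenbrock_grad(x):
    return np.array([-400.0 * x[0] * (x[1] - x[0]**2) - 2.0 * (1.0 - x[0]),
                     200.0 * (x[1] - x[0]**2)])

x = np.array([-1.2, 1.0])                 # initial point of Example 1
g = rosenbrock_grad(x)
d = -g
for k in range(16):
    alpha = line_search(rosenbrock, rosenbrock_grad, x, d, gfk=g, c1=1e-4, c2=0.01)[0]
    if alpha is None:                     # our own safeguard when the Wolfe search fails
        alpha = 1e-3
    s = alpha * d
    x = x + s
    g_new = rosenbrock_grad(x)
    y = g_new - g
    y_bar = y - g_new * (g_new @ y) / (g_new @ g_new)   # modified gradient-difference
    print(f"k = {k:2d}   s^T y = {s @ y: .6f}   |s^T y_bar| = {abs(s @ y_bar): .6f}")
    w = max(abs(s @ y_bar), s @ y)
    d = -g_new if w == 0.0 else (-g_new + (g_new @ (y - s)) / w * s - (g_new @ s) / w * y)
    g = g_new
```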
Before stating global convergence of Algorithm 1 in the non-convex case, we first make the following mild assumptions.
Assumption 1. The level set Ω={x∈Rn|f(x)≤f(x0)} is bounded, i.e., there exists a positive constant B>0 such that (3.8) holds for all x∈Ω.
Assumption 2. In some neighborhood N of Ω, f is continuously differentiable and its gradient is Lipschitz continuous. That is to say, there exists a constant L>0 such that (3.9) holds for all x,y∈N.
Theorem 3. Let {gk} be a gradient sequence generated by Algorithm 1. Suppose that there exists a constant τ>0 such that sTkyk≥τ for any k≥1. Under Assumptions 1 and 2, it is true that
\liminf_{k \to \infty} \|g_k\| = 0. | (3.22) |
Proof. From Assumptions 1 and 2, we have
\|d_{k+1}\| \le \|g_{k+1}\| + \frac{|g_{k+1}^T y_k|}{s_k^T y_k} \|s_k\| + \frac{|g_{k+1}^T s_k|}{s_k^T y_k} \|s_k\| + \frac{|g_{k+1}^T s_k|}{s_k^T y_k} \|y_k\| \le \|g_{k+1}\| + \frac{\|g_{k+1}\| \|y_k\|}{\tau} \|s_k\| + \frac{\|g_{k+1}\| \|s_k\|}{\tau} \|s_k\| + \frac{\|g_{k+1}\| \|s_k\|}{\tau} \|y_k\| \le \|g_{k+1}\| + \frac{4LB^2 \|g_{k+1}\|}{\tau} + \frac{4B^2 \|g_{k+1}\|}{\tau} + \frac{4LB^2 \|g_{k+1}\|}{\tau} = \left(1 + \frac{8L + 4}{\tau} B^2\right) \|g_{k+1}\|, | (3.23) |
where the first and second inequalities follow from the Cauchy-Schwarz inequality and sTkyk≥τ, and the last inequality follows from (3.8) and (3.9).
Similar to the proof of Theorem 1, we can also prove that (3.15) holds under Assumptions 1 and 2. Together with (3.23), it is concluded that (3.22) holds.
In order to present R-linear convergence of Algorithm 1 in the non-convex case, we need the following assumption:
Assumption 3. (1) f:Rn→R is twice continuously differentiable.
(2) The sequence {xk} generated by Algorithm 1 satisfies xk→x∗, where ∇f(x∗)=0 and ∇2f(x∗) is positive definite.
(3) There exists a positive constant τ such that sTkyk>τ holds for all sufficiently large k>0.
Theorem 4. Let {xk} be the sequence generated by Algorithm 1, which converges to a solution x∗ satisfying (2) in Assumption 3. Suppose that Assumptions 1 and 2 hold. Then, for the sufficiently large k>0, there exist constants a>0 and b∈(0,1) such that
|f(x_k) - f(x^*)| \le a b^k. | (3.24) |
Proof. From (3) in Assumption 3, it follows that ‖dk‖ is bounded for all k>0. Moreover, from (4.1)–(4.3) in [18], it is clear that there exists a neighborhood of x∗, denoted by U(x∗), such that (3.17) holds for all x∈U(x∗). The rest of the proof is similar to that of Theorem 2, so we omit it here.
In this section, by numerical experiments, we study effectiveness and robustness of Algorithm 1 when it is employed to solve unconstrained optimization problems.
Algorithm 1 (NTTCG) is tested by solving the 75 benchmark test problems with variable dimensions from 1000 to 10000. These problems are from [25] or CUTE [26]. The code is written in Fortran 77 and run on a personal computer with a 2.2 GHz CPU, 8 GB of memory and the Windows 10 operating system.
To show advantages of our algorithm (NTTCG), we compare it with four similar algorithms: TMRMIL in [18], ISCG in [6], CG_DESCENT in [7] and THREECG in [8]. For all the compared algorithms, the termination condition is ‖gk‖∞ ≤ ε = 10^{-6}, or the number of iterations exceeding 10,000. In Algorithm 1, ρ=0.0001 and σ=0.01, and the parameters not mentioned here are consistent with the corresponding literature. We show the differences in numerical performance among these five algorithms by the Dolan and Moré performance profiles [27]. Let S be the set of all compared methods, P be the set of test problems, n_p be the number of problems in P, and t_{p,s} be the number of iterations or the CPU time needed to solve problem p∈P by method s∈S. Then, the performance ratio is computed by r_{p,s} = t_{p,s}/\min\{t_{p,s}: s\in S\}, and the overall performance of method s is measured by \rho_s(\tau) = \frac{1}{n_p}\,\mathrm{size}\{p\in P: r_{p,s}\le\tau\}. In fact, \rho_s(\tau) is the probability for method s that its performance ratio r_{p,s} is within a factor \tau\in\mathbb{R} of the best possible ratio; that is, \rho_s(\tau) is the distribution function of the performance ratio r_{p,s}. We report the numerical results in Figures 1 and 2. From Figures 1 and 2, we can see that our algorithm (NTTCG) performs the best among the five algorithms, with respect to both the number of iterations and the elapsed CPU time.
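The sketch below shows one possible way to compute the Dolan and Moré profile \rho_s(\tau) from a matrix of iteration counts or CPU times; the function name and the toy data are ours, and failures (the entry "F" in Table 3) can be encoded as infinity.

```python
import numpy as np

def performance_profile(t, taus):
    """Dolan-More performance profile rho_s(tau).

    t is an (n_p x n_s) array of costs (iterations or CPU time); rows are problems,
    columns are solvers, and failures may be encoded as np.inf.
    """
    t = np.asarray(t, dtype=float)
    best = t.min(axis=1, keepdims=True)            # best cost on each problem
    r = t / best                                   # performance ratios r_{p,s}
    n_p = t.shape[0]
    return np.array([(r <= tau).sum(axis=0) / n_p for tau in taus])

# Toy data (3 problems x 2 solvers), invented purely for illustration; inf marks a failure.
t = np.array([[12.0, 20.0],
              [30.0, 15.0],
              [40.0, np.inf]])
print(performance_profile(t, taus=[1.0, 2.0, 4.0]))
```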
We also underline the best numerical results obtained by NTTCG in Table 3. In Table 3, Problem, Ni and CPU stand for the problem number listed in Table 2, the number of iterations and the CPU time consumed in centiseconds (cs), respectively. From the underlined results in Table 3, we know that our algorithm (NTTCG) performs very well on these test problems.
No. | Problem | Dim | No. | Problem | Dim |
1 | Extended Trigonometric | 7000 | 15 | Extended Quadratic Penalty QP1 | 2000 |
2 | Extended Rosenbrock | 10000 | 16 | Extended Tridiagonal 2 | 9000 |
3 | Extended White & Holst | 9000 | 17 | BDQRTIC (CUTE) | 3000 |
4 | Diagonal 3 | 6000 | 18 | TRIDIA (CUTE) | 8000 |
5 | Raydan 1 | 10000 | 19 | NONDIA (CUTE) | 6000 |
6 | Diagonal 1 | 9000 | 20 | DQDRTIC (CUTE) | 10000 |
7 | Diagonal 2 | 1000 | 21 | DIXMAANC (CUTE) | 10000 |
8 | Diagonal 3 | 1000 | 22 | LIARWHD (CUTE) | 9000 |
9 | Extended Himmelblau | 8000 | 23 | DIXMAANG (CUTE) | 3000 |
10 | Extended Powell | 10000 | 24 | DIXMAANJ (CUTE) | 3000 |
11 | Extended Block-Diagonal BD1 | 6000 | 25 | DIXMAANL (CUTE) | 9000 |
12 | Extended Maratos | 8000 | 26 | SINQUAD (CUTE) | 9000 |
13 | Extended Cliff | 6000 | 27 | BIGGSB1 (CUTE) | 7000 |
14 | Quadratic QF1 | 10000 | 28 | Scaled Quadratic SQ2 | 9000 |
Problem | NTTCG | TMRMIL | ISCG | CG_DESCENT | THREECG |
 | Ni/CPU(cs) | Ni/CPU(cs) | Ni/CPU(cs) | Ni/CPU(cs) | Ni/CPU(cs) |
1 | \underline{50}/46 | 58/34 | 79/73 | 85/51 | 52/43 |
2 | \underline{16}/\underline{7} | 26/14 | 22/13 | 35/21 | 28/8 |
3 | \underline{10}/\underline{2} | 13/3 | 11/4 | 17/6 | 12/3 |
4 | \underline{8}/\underline{1} | 10/2 | 10/2 | 21/22 | 10/2 |
5 | \underline{2}/\underline{1} | 3/1 | 3/2 | 4/1 | 3/2 |
6 | \underline{431}/\underline{98} | 5835/1899 | 473/204 | F/F | 477/171 |
7 | \underline{229}/20 | 1372/307 | 230/25 | 231/12 | 231/28 |
8 | \underline{43}/\underline{2} | 69/5 | 45/3 | 44/2 | 45/2 |
9 | \underline{198}/\underline{118} | 5161/1800 | 215/163 | 731/486 | 312/140 |
10 | \underline{11}/5 | 15/3 | 12/5 | 17/6 | 12/6 |
11 | \underline{57}/\underline{6} | 60/6 | 61/12 | 59/6 | 75/13 |
12 | \underline{4}/\underline{1} | 8/2 | 5/2 | 15/3 | 7/3 |
13 | \underline{221}/25 | 6552/649 | 223/38 | 245/22 | 225/28 |
14 | \underline{7}/\underline{2} | 10/4 | 8/5 | 17/3 | 9/4 |
15 | \underline{18}/\underline{4} | 33/5 | 39/15 | 41/18 | 34/6 |
16 | \underline{92}/\underline{55} | 476/166 | 106/104 | 8119/2891 | 123/96 |
17 | \underline{599}/\underline{31} | 5452/368 | 601/51 | 600/41 | 601/45 |
18 | \underline{4}/\underline{1} | 5/1 | 10001/2730 | 9/3 | 5/2 |
19 | \underline{593}/\underline{159} | 1818/473 | 637/401 | 6005/2342 | 722/290 |
20 | \underline{15}/\underline{14} | 5480/2287 | 263/727 | 442/356 | 2177/1444 |
21 | \underline{369}/\underline{227} | 5415/4426 | 386/335 | 376/331 | 385/272 |
22 | \underline{2}/\underline{1} | 3/1 | 3/1 | 4/1 | 3/1 |
23 | \underline{10}/\underline{1} | 11/2 | 11/2 | 12/3 | 12/2 |
24 | \underline{111}/32 | 128/22 | 121/55 | 633/123 | 235/52 |
25 | \underline{7}/\underline{2} | 8/2 | 8/4 | 11/5 | 8/3 |
26 | \underline{26}/\underline{5} | 41/6 | 29/7 | 35/8 | 29/8 |
27 | \underline{3572}/987 | 5628/919 | 3877/998 | 6500/837 | 4754/877 |
28 | \underline{2}/\underline{1} | 6/1 | 6/1 | 11/11 | 6/2 |
In this paper, we have developed a novel three-term CG algorithm (NTTCG) based on modified gradient-differences for solving unconstrained optimization problems. Global convergence has been proved for this algorithm.
By applying our method to solve the 750 benchmark test problems, the numerical results have demonstrated that NTTCG outperforms the four compared algorithms in the literature. In particular, compared with the existing methods, NTTCG can find the optimal solutions of the unconstrained optimization problems with fewer iterations or less CPU time.
In future research, it is valuable to study methods that obtain the search direction by minimizing a quadratic approximation model of the objective function, or a conic model, in a specific subspace spanned by gk+1, sk and yk.
It is also interesting to extend the algorithm proposed in this paper to solve nonlinear systems of monotone equations, since it has been shown in [28,29,30,31,32,33,34] that recovering sparse signals and restoring blurred images can be formulated as systems of equations, and CG methods can solve nonlinear systems of monotone equations efficiently. Furthermore, as done in [35], we can also modify our algorithm to solve symmetric systems of nonlinear equations.
The authors are very grateful to the editors and referees for their constructive advice.
This research is supported by the National Social Science Foundation of China (Grant No. 21BGL122).
The data and material used to support the findings in this research can be provided by the corresponding author upon request.
The authors declare that they have no conflicts of interest regarding the submission and publication of this article.
[1] J. Y. Tang, Z. Wan, Orthogonal dual graph-regularized nonnegative matrix factorization for co-clustering, J. Sci. Comput., 87 (2021), 1–37. https://doi.org/10.1007/s10915-021-01489-w
[2] T. Li, Z. Wan, New adaptive Barzilai-Borwein step size and its application in solving large scale optimization problems, ANZIAM J., 61 (2019), 76–98. https://doi.org/10.1017/S1446181118000263
[3] C. M. Hu, Z. M. Wan, S. Zhu, Z. Wan, An integrated stochastic model and algorithm for constrained multi-item newsvendor problems by two-stage decision-making approach, Math. Comput. Simul., 193 (2022), 280–300. https://doi.org/10.1016/j.matcom.2021.10.018
[4] Z. Wan, C. Q. Yu, Reutilization of the existent sales network for recycling unwanted smartphones by nonlinear optimization, J. Cleaner Prod., 350 (2022), 131349. https://doi.org/10.1016/j.jclepro.2022.131349
[5] S. Huang, Z. Wan, A new nonmonotone spectral residual method for nonsmooth nonlinear equations, J. Comput. Appl. Math., 313 (2017), 82–101. https://doi.org/10.1016/j.cam.2016.09.014
[6] S. Deng, Z. Wan, X. Chen, An improved spectral conjugate gradient algorithm for nonconvex unconstrained optimization problems, J. Optim. Theory Appl., 157 (2013), 820–842. https://doi.org/10.1007/s10957-012-0239-7
[7] W. W. Hager, H. C. Zhang, A new conjugate gradient method with guaranteed descent and an efficient line search, SIAM J. Optim., 16 (2005), 170–192. https://doi.org/10.1137/030601880
[8] N. Andrei, A simple three-term conjugate gradient algorithm for unconstrained optimization, J. Comput. Appl. Math., 241 (2013), 19–29. https://doi.org/10.1016/j.cam.2012.10.002
[9] N. Andrei, On three-term conjugate gradient algorithms for unconstrained optimization, Appl. Math. Comput., 219 (2013), 6316–6327. https://doi.org/10.1016/j.amc.2012.11.097
[10] L. Zhang, W. Zhou, D. H. Li, A descent modified Polak-Ribière-Polyak conjugate gradient method and its global convergence, IMA J. Numer. Anal., 26 (2006), 629–640. https://doi.org/10.1093/imanum/drl016
[11] W. L. Liu, Z. M. Wan, Z. Wan, B. Gong, Sustainable recycle network of heterogeneous pharmaceuticals with governmental subsidies and service-levels of third-party logistics by bi-level programming approach, J. Cleaner Prod., 249 (2020), 119324. https://doi.org/10.1016/j.jclepro.2019.119324
[12] J. Guo, Z. Wan, A new three-term spectral conjugate gradient algorithm with higher numerical performance for solving large scale optimization problems based on Quasi-Newton equation, Int. J. Model. Simul. Sci. Comput., 12 (2021), 2150053. https://doi.org/10.1142/S1793962321500537
[13] M. K. Riahi, I. A. Qattan, On the convergence rate of Fletcher-Reeves nonlinear conjugate gradient methods satisfying strong Wolfe conditions: application to parameter identification in problems governed by general dynamics, Math. Methods Appl. Sci., 45 (2022), 3644–3664. https://doi.org/10.1002/mma.8009
[14] M. Powell, Restart procedures for the conjugate gradient method, Math. Program., 12 (1977), 241–254. https://doi.org/10.1007/BF01593790
[15] L. Nazareth, A conjugate direction algorithm without line searches, J. Optim. Theory Appl., 23 (1977), 373–387. https://doi.org/10.1007/BF00933447
[16] N. Andrei, A modified Polak-Ribière-Polyak conjugate gradient algorithm for unconstrained optimization, Optimization, 60 (2011), 1457–1471. https://doi.org/10.1080/02331931003653187
[17] L. Zhang, W. Zhou, D. H. Li, Some descent three-term conjugate gradient methods and their global convergence, Optim. Methods Software, 22 (2007), 697–711. https://doi.org/10.1080/10556780701223293
[18] J. K. Liu, Y. M. Feng, L. M. Zou, Some three-term conjugate gradient methods with the inexact line search condition, Calcolo, 55 (2018), 16. https://doi.org/10.1007/s10092-018-0258-3
[19] D. Touati-Ahmed, C. Storey, Efficient hybrid conjugate gradient techniques, J. Optim. Theory Appl., 64 (1990), 379–397. https://doi.org/10.1007/bf00939455
[20] M. Al-Baali, Descent property and global convergence of the Fletcher-Reeves method with inexact line search, IMA J. Numer. Anal., 5 (1985), 121–124. https://doi.org/10.1093/imanum/5.1.121
[21] J. C. Gilbert, J. Nocedal, Global convergence properties of conjugate gradient methods for optimization, SIAM J. Optim., 2 (1992), 21–42. https://doi.org/10.1137/0802003
[22] Y. F. Hu, C. Storey, Global convergence result for conjugate gradient methods, J. Optim. Theory Appl., 71 (1991), 399–405. https://doi.org/10.1007/bf00939927
[23] S. Huang, Z. Wan, J. Zhang, An extended nonmonotone line search technique for large-scale unconstrained optimization, J. Comput. Appl. Math., 330 (2018), 586–604. https://doi.org/10.1016/j.cam.2017.09.026
[24] H. C. Zhang, W. W. Hager, A nonmonotone line search technique and its application to unconstrained optimization, SIAM J. Optim., 14 (2004), 1043–1056. https://doi.org/10.1137/S1052623403428208
[25] N. Andrei, An unconstrained optimization test functions collection, Adv. Model. Optim., 10 (2008), 147–161.
[26] I. Bongartz, A. R. Conn, N. Gould, P. L. Toint, CUTE: Constrained and unconstrained testing environment, ACM Trans. Math. Software, 21 (1995), 123–160. https://doi.org/10.1145/200979.201043
[27] E. D. Dolan, J. J. Moré, Benchmarking optimization software with performance profiles, Math. Program., 91 (2002), 201–213. https://doi.org/10.1007/s101070100263
[28] J. Guo, Z. Wan, A modified spectral PRP conjugate gradient projection method for solving large-scale monotone equations and its application in compressed sensing, Math. Probl. Eng., 2019 (2019), 5261830. https://doi.org/10.1155/2019/5261830
[29] A. Althobaiti, J. Sabi'u, H. Emadifar, P. Junsawang, S. K. Sahoo, A scaled Dai-Yuan projection-based conjugate gradient method for solving monotone equations with applications, Symmetry, 14 (2022), 1401. https://doi.org/10.3390/sym14071401
[30] J. Sabi'u, K. O. Aremu, A. Althobaiti, A. Shah, Scaled three-term conjugate gradient methods for solving monotone equations with application, Symmetry, 14 (2022), 936. https://doi.org/10.3390/sym14050936
[31] J. Sabi'u, A. Shah, M. Y. Waziri, A modified Hager-Zhang conjugate gradient method with optimal choices for solving monotone nonlinear equations, Int. J. Comput. Math., 99 (2022), 332–354. https://doi.org/10.1080/00207160.2021.1910814
[32] N. Ullah, J. Sabi'u, A. Shah, A derivative-free scaling memoryless Broyden-Fletcher-Goldfarb-Shanno method for solving a system of monotone nonlinear equations, Numer. Linear Algebra Appl., 28 (2021), 2374. https://doi.org/10.1002/nla.2374
[33] A. B. Abubakar, J. Sabi'u, P. Kumam, A. Shah, Solving nonlinear monotone operator equations via modified SR1 update, J. Appl. Math. Comput., 67 (2021), 343–373. https://doi.org/10.1007/s12190-020-01461-1
[34] J. Sabi'u, A. Shah, M. Y. Waziri, K. Ahmed, Modified Hager-Zhang conjugate gradient methods via singular value analysis for solving monotone nonlinear equations with convex constraint, Int. J. Comput. Methods, 18 (2021), 2050043. https://doi.org/10.1142/S0219876220500437
[35] J. Sabi'u, K. Muangchoo, A. Shah, A. B. Abubakar, K. O. Aremu, An inexact optimal hybrid conjugate gradient method for solving symmetric nonlinear equations, Symmetry, 13 (2021), 1829. https://doi.org/10.3390/sym13101829