Research article

A new family of hybrid three-term conjugate gradient method for unconstrained optimization with application to image restoration and portfolio selection

  • Received: 29 July 2022; Revised: 08 September 2022; Accepted: 15 September 2022; Published: 27 September 2022
  • MSC : 65K10, 90C52, 90C26

  • The conjugate gradient (CG) method is an optimization method that, in practice, converges quickly. Many CG methods have been developed to improve computational performance, and they have been applied to real-world problems. In this paper, a new hybrid three-term CG method is proposed for solving unconstrained optimization problems. The search direction is a three-term hybrid of the Hestenes-Stiefel (HS) and Polak-Ribière-Polyak (PRP) CG coefficients, and it satisfies the sufficient descent condition. In addition, the global convergence properties of the proposed method are proved under the weak Wolfe line search. Numerical results on several test functions show that the proposed method is more efficient than some existing methods. Finally, the proposed method is applied to practical problems in image restoration and portfolio selection.

    Citation: Maulana Malik, Ibrahim Mohammed Sulaiman, Auwal Bala Abubakar, Gianinna Ardaneswari, Sukono. A new family of hybrid three-term conjugate gradient method for unconstrained optimization with application to image restoration and portfolio selection. AIMS Mathematics, 2023, 8(1): 1-28. doi: 10.3934/math.2023001




    We consider the unconstrained optimization problem of the form

    $$\min_{x \in \mathbb{R}^n} f(x), \quad (1.1)$$

    where $f:\mathbb{R}^n \to \mathbb{R}$ is a continuously differentiable function that is bounded below. There are many methods for solving (1.1), such as Newton methods, quasi-Newton methods, the steepest descent method, and Levenberg-Marquardt methods [36]. However, these methods are efficient for low-dimensional problems, whereas high-dimensional problems require many iterations and long computation times. To overcome these drawbacks, nonlinear conjugate gradient (CG) methods were proposed. The CG method is generally used to solve large-scale optimization problems because it has simple iterations, fast convergence properties, and low memory requirements [5,22,36].

    The CG method was first introduced by Hestenes and Stiefel in 1952 to solve systems of linear equations. Subsequently, in 1964, Fletcher and Reeves extended the CG method to large-scale nonlinear systems of equations and used it to solve the general unconstrained optimization problem. This extension by Fletcher and Reeves triggered many further studies [36]. For instance, in [24], Ibrahim et al. proposed a new hybrid method by combining the Liu-Storey [29] and Kamandi-Amini [28] conjugate gradient parameters. Likewise, Jian et al. [26] proposed a simple spectral CG method for solving large-scale unconstrained optimization problems, based on the Fletcher-Reeves [17] and Dai-Yuan [11] methods; the convergence analysis was presented under the weak Wolfe line search. In [44], Salleh et al. proposed a modified Liu and Storey [29] method by formulating a new parameter. Under the strong Wolfe line search, the search direction satisfies the descent condition and fulfills the convergence properties. Besides this, Zheng and Shi [53] proposed another formula for the CG parameter, obtained by replacing the denominator of the PRP formula. The direction satisfies a trust region property, and global convergence was proved using the Armijo line search. Furthermore, motivated by the idea of Zhang et al. in [52], Tian et al. [49] proposed a new descent hybrid three-term CG algorithm. The new method satisfies the sufficient descent condition independently of the line search structure. For uniformly convex objective functions, global convergence is established under mild conditions; using a modified secant condition, global convergence is also established for general functions, without the assumption of convexity. According to the numerical results, the algorithm proposed by Tian et al. [49] is effective and reliable.

    The CG method is an iterative method in which each step produces an approximation point via the formula:

    $$x_{k+1} = x_k + \alpha_k d_k, \quad k = 0, 1, 2, \ldots,$$

    where $d_k$ is the search direction, and $x_{k+1}$ and $x_k$ are the new and current points, respectively. Here $\alpha_k > 0$ is the stepsize, obtained by an exact or inexact line search. Considering efficiency, inexact line searches are used more often than exact ones. The two most frequently used inexact line searches are the weak Wolfe and strong Wolfe line searches. In the weak Wolfe line search, $\alpha_k$ is computed such that it satisfies

    $$f(x_k + \alpha_k d_k) \le f(x_k) + \delta \alpha_k g_k^T d_k, \quad (1.2)$$
    $$g(x_k + \alpha_k d_k)^T d_k \ge \sigma g_k^T d_k, \quad (1.3)$$

    and in the strong Wolfe line search, $\alpha_k$ is computed to satisfy

    $$f(x_k + \alpha_k d_k) \le f(x_k) + \delta \alpha_k g_k^T d_k, \quad |g(x_k + \alpha_k d_k)^T d_k| \le -\sigma g_k^T d_k,$$

    where $g_k = g(x_k) = \nabla f(x_k)$ is the gradient of $f$ at the point $x_k$, and the parameters $\delta$ and $\sigma$ are required to satisfy $0 < \delta < \sigma < 1$. One condition that must be met by the CG method is the descent condition. This condition guarantees that the approximation points move toward a minimum point, and it is defined as follows:

    $$g_k^T d_k < 0.$$
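    As an aside, a step satisfying the weak Wolfe conditions (1.2) and (1.3) can be found by a standard bisection scheme. The following minimal Python sketch (ours, not the authors' implementation; the callables f and grad and the default parameter values are illustrative) shows one such search:

    ```python
    import numpy as np

    def weak_wolfe(f, grad, x, d, delta=1e-4, sigma=0.9, max_iter=60):
        """Bisection search for a step alpha satisfying the weak Wolfe
        conditions (1.2)-(1.3); requires 0 < delta < sigma < 1."""
        lo, hi = 0.0, np.inf
        alpha = 1.0
        fx, gxd = f(x), grad(x) @ d          # f(x_k) and g_k^T d_k
        for _ in range(max_iter):
            if f(x + alpha * d) > fx + delta * alpha * gxd:   # (1.2) fails
                hi = alpha
                alpha = 0.5 * (lo + hi)
            elif grad(x + alpha * d) @ d < sigma * gxd:       # (1.3) fails
                lo = alpha
                alpha = 2.0 * lo if hi == np.inf else 0.5 * (lo + hi)
            else:
                return alpha                                   # both hold
        return alpha
    ```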

    Later, the Omani mathematician Al-Baali [4] proposed another version of the descent condition, called the sufficient descent condition, which plays an important role in the convergence of CG methods. The definition is given below.

    Definition 1. Let $f:\mathbb{R}^n \to \mathbb{R}$ be a continuously differentiable function. If the search direction $d_k$ satisfies

    $$g_k^T d_k \le -C \|g_k\|^2, \quad \forall k, \quad (1.4)$$

    where $C > 0$ is a constant, then $d_k$ is said to fulfill the sufficient descent condition.

    For the standard CG method, the search direction $d_k$ is defined by

    $$d_k := \begin{cases} -g_k, & \text{if } k = 0, \\ -g_k + \beta_k d_{k-1}, & \text{if } k > 0, \end{cases}$$

    where $\beta_k$ is a parameter corresponding to a distinct CG method. Some of the well-known standard CG methods are the Hestenes-Stiefel (HS) method [19], the Fletcher-Reeves (FR) method [17], the Polak-Ribière-Polyak (PRP) method [40,41], the Conjugate-Descent (CD) method [16], the Dai-Yuan (DY) method [11], the Liu-Storey (LS) method [29], and the Rivaie-Mustafa-Ismail-Leong (RMIL) method [43], whose $\beta_k$ parameters are

    $$\beta_k^{HS} = \frac{g_k^T r_{k-1}}{d_{k-1}^T r_{k-1}}, \quad \beta_k^{FR} = \frac{\|g_k\|^2}{\|g_{k-1}\|^2}, \quad \beta_k^{PRP} = \frac{g_k^T r_{k-1}}{\|g_{k-1}\|^2}, \quad \beta_k^{CD} = -\frac{\|g_k\|^2}{d_{k-1}^T g_{k-1}},$$
    $$\beta_k^{DY} = \frac{\|g_k\|^2}{d_{k-1}^T r_{k-1}}, \quad \beta_k^{LS} = -\frac{g_k^T r_{k-1}}{g_{k-1}^T d_{k-1}}, \quad \beta_k^{RMIL} = \frac{g_k^T r_{k-1}}{\|d_{k-1}\|^2},$$

    respectively, where $r_{k-1} := g_k - g_{k-1}$ and $\|\cdot\|$ denotes the Euclidean norm on $\mathbb{R}^n$. It is known that the HS, PRP, and LS methods are efficient in practice but fail to satisfy the convergence property for non-convex functions, even under exact line searches. To improve the convergence of the PRP, HS, and LS methods, Powell [42] suggested restricting the parameter values to be non-negative. On the other hand, while this makes the FR, PRP, DY, and RMIL methods robust and convergent, their numerical performance remains inefficient. Compared with standard CG methods, hybrid [30,33] and three-term [8,23,25,38,51] CG methods typically show good theoretical properties and numerical performance, such as the sufficient descent property regardless of the line search structure.
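    For a quick illustration (our sketch, not part of the original paper), these classical parameters can be evaluated directly from the current gradient $g_k$, the previous gradient $g_{k-1}$, and the previous direction $d_{k-1}$:

    ```python
    import numpy as np

    def cg_betas(g, g_prev, d_prev):
        """Classical CG parameters; g, g_prev, d_prev are 1-D numpy arrays."""
        r = g - g_prev                          # r_{k-1} = g_k - g_{k-1}
        return {
            "HS":   (g @ r) / (d_prev @ r),
            "FR":   (g @ g) / (g_prev @ g_prev),
            "PRP":  (g @ r) / (g_prev @ g_prev),
            "CD":   -(g @ g) / (d_prev @ g_prev),
            "DY":   (g @ g) / (d_prev @ r),
            "LS":   -(g @ r) / (g_prev @ d_prev),
            "RMIL": (g @ r) / (d_prev @ d_prev),
        }
    ```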

    Recently, Abubakar et al. [1] proposed a hybrid three-term CG method in which the search direction is generated from the limited memory Broyden-Fletcher-Goldfarb-Shanno (LBFGS) quasi-Newton method. The method satisfies the sufficient descent condition and fulfills the trust region property. Under suitable conditions, the global convergence properties were established, and, compared with some existing methods, the method is efficient. Likewise, Deepho et al. [12] proposed a modification of the hybrid three-term CG method, obtained by combining existing conjugate gradient parameters. Numerical experiments on several test problems showed good results compared with other existing methods. In addition, these methods have also been applied to solve risk optimization problems in portfolio selection.

    Motivated by the above contributions, in this paper, we propose a hybrid three-term CG method based on the structure of the LBFGS method of Nocedal [39] and Shanno [45], which can give a better numerical performance. The following are some of this paper's contributions:

    (1) Based on the LBFGS method, a new hybrid three-term CG method for solving unconstrained optimization is proposed.

    (2) The search direction of the proposed method satisfies the sufficient descent property without requiring any line search.

    (3) The global convergence of the proposed method is demonstrated using the weak Wolfe line search.

    (4) The computational performance of the new method is presented on several standard test problems.

    (5) Finally, the proposed method is effectively applied to image restoration and minimizing risk in portfolio selection problems.

    The paper is organized as follows. In Section 2, we present a modified hybrid three-term CG method and give the algorithm for our proposed method. In Section 3, we establish the sufficient descent condition and prove the global convergence property of our proposed method under a certain line search. Numerical experiments are outlined in Section 4 to assess the computational performance on several test functions and compare it with other existing methods. In Section 5, we apply our proposed method to image restoration and portfolio selection problems. Finally, conclusions are presented in Section 6.

    We start this section by describing our formulation and then present an algorithm of our proposed method.

    In [1], Abubakar et al. proposed a hybrid three-term HTT CG method in which the search direction is formulated as follows:

    $$d_0 := -g_0, \quad d_k := -g_k + \beta_k^{HTT} d_{k-1} + \gamma_k g_k, \quad k \ge 1,$$

    where

    $$\beta_k^{HTT} := \frac{\|g_k\|^2}{z_k} - \frac{\|g_k\|^2 g_k^T d_{k-1}}{z_k^2}, \quad \gamma_k := \frac{v_k g_k^T d_{k-1}}{z_k},$$

    and

    $$z_k := \max\{\lambda \|d_{k-1}\| \|g_k\|,\; d_{k-1}^T r_{k-1},\; \|g_{k-1}\|^2\}, \quad \lambda > 0, \quad 0 \le v_k \le \bar{v} < 1.$$

    Similarly, Deepho et al. [12] proposed a hybrid three-term TTCDDY CG method in which the search direction has the form

    $$d_0 := -g_0, \quad d_k := -g_k + \beta_k^{TTCDDY} d_{k-1} + \varrho_k g_k, \quad k \ge 1,$$

    where

    $$\beta_k^{TTCDDY} := \frac{\|g_k\|^2}{h_k} - \frac{\|g_k\|^2 g_k^T d_{k-1}}{h_k^2}, \quad \varrho_k := \frac{e_k g_k^T d_{k-1}}{h_k},$$

    and

    $$h_k := \max\{\varpi \|d_{k-1}\| \|g_k\|,\; -d_{k-1}^T g_{k-1},\; d_{k-1}^T r_{k-1}\}, \quad \varpi > 0, \quad 0 \le e_k \le \bar{e} < 1.$$

    Under some assumptions, the HTT and TTCDDY methods satisfy the sufficient descent condition, and their global convergence was proved. The numerical experiments showed that the HTT and TTCDDY methods perform better than other existing methods.

    Motivated by the HTT and TTCDDY methods, we propose a new hybrid three-term CG method based on the LBFGS quasi-Newton method. We first recall the search direction of the LBFGS method:

    $$d_k^{LBFGS} := -M_k g_k, \quad M_k = I - \frac{s_{k-1} r_{k-1}^T + r_{k-1} s_{k-1}^T}{s_{k-1}^T r_{k-1}} + \frac{\|r_{k-1}\|^2}{(s_{k-1}^T r_{k-1})^2}\, s_{k-1} s_{k-1}^T + \frac{s_{k-1} s_{k-1}^T}{s_{k-1}^T r_{k-1}},$$

    where $s_{k-1} = x_k - x_{k-1} = \alpha_{k-1} d_{k-1}$ and $I$ is the identity matrix. Simplifying the form of $d_k^{LBFGS}$, we can write the search direction as follows:

    $$d_k^{LBFGS} := -g_k + \left(\beta_k^{HS} - \frac{\|r_{k-1}\|^2 g_k^T d_{k-1}}{(d_{k-1}^T r_{k-1})^2}\right) d_{k-1} + \frac{g_k^T d_{k-1}}{d_{k-1}^T r_{k-1}} (r_{k-1} - s_{k-1}). \quad (2.1)$$
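    As an illustration, (2.1) transcribes directly into Python (a sketch with variable names of our choosing):

    ```python
    import numpy as np

    def d_lbfgs(g, g_prev, d_prev, alpha_prev):
        """Memoryless BFGS direction in the three-term form (2.1)."""
        r = g - g_prev                     # r_{k-1}
        s = alpha_prev * d_prev            # s_{k-1} = alpha_{k-1} d_{k-1}
        dtr = d_prev @ r                   # d_{k-1}^T r_{k-1}
        beta_hs = (g @ r) / dtr            # beta_k^{HS}
        coef = beta_hs - (r @ r) * (g @ d_prev) / dtr**2
        return -g + coef * d_prev + ((g @ d_prev) / dtr) * (r - s)
    ```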

    Next, replacing the term $(r_{k-1} - s_{k-1})$ in (2.1) with $c_k r_{k-1}$, where $c_k$ is a parameter, we get the following three-term search direction:

    $$d_k^{TTHS} := -g_k + \left(\beta_k^{HS} - \frac{\|r_{k-1}\|^2 g_k^T d_{k-1}}{(d_{k-1}^T r_{k-1})^2}\right) d_{k-1} + c_k \frac{g_k^T d_{k-1}}{d_{k-1}^T r_{k-1}} r_{k-1}. \quad (2.2)$$

    Further, replacing $\beta_k^{HS}$ with $\beta_k^{PRP}$, the term $\frac{\|r_{k-1}\|^2 g_k^T d_{k-1}}{(d_{k-1}^T r_{k-1})^2}$ with $\frac{\|g_k\|^2 g_k^T d_{k-1}}{\|g_{k-1}\|^4}$, and $(r_{k-1} - s_{k-1})$ with $c_k r_{k-1}$ in (2.1), we can write a three-term search direction as

    $$d_k^{TTPRP} := -g_k + \left(\beta_k^{PRP} - \frac{\|g_k\|^2 g_k^T d_{k-1}}{\|g_{k-1}\|^4}\right) d_{k-1} + c_k \frac{g_k^T d_{k-1}}{\|g_{k-1}\|^2} r_{k-1}. \quad (2.3)$$

    In the following, we recall how the parameter $c_k$ is found, as in [1,12]. The parameter $c_k$ is obtained by solving the following univariate problem:

    $$\min_{c \in \mathbb{R}} \|(r_{k-1} - s_{k-1}) - c g_k\|_F,$$

    where $\|\cdot\|_F$ is the Frobenius norm.

    Let $A_k = (r_{k-1} - s_{k-1}) - c g_k$; then

    $$\begin{aligned} A_k A_k^T &= [(r_{k-1} - s_{k-1}) - c g_k][(r_{k-1} - s_{k-1}) - c g_k]^T \\ &= [(r_{k-1} - s_{k-1}) - c g_k][(r_{k-1} - s_{k-1})^T - c g_k^T] \\ &= c^2 g_k g_k^T - c[(r_{k-1} - s_{k-1}) g_k^T + g_k (r_{k-1} - s_{k-1})^T] + (r_{k-1} - s_{k-1})(r_{k-1} - s_{k-1})^T. \end{aligned}$$

    Letting $B_k = r_{k-1} - s_{k-1}$, then

    $$A_k A_k^T = c^2 g_k g_k^T - c[B_k g_k^T + g_k B_k^T] + B_k B_k^T,$$

    and

    $$\mathrm{tr}(A_k A_k^T) = c^2 \|g_k\|^2 - c[\mathrm{tr}(B_k g_k^T) + \mathrm{tr}(g_k B_k^T)] + \|B_k\|^2 = c^2 \|g_k\|^2 - 2 c\, g_k^T B_k + \|B_k\|^2.$$

    Differentiating the above with respect to c and equating to zero, we have

    $$2 c \|g_k\|^2 - 2 g_k^T B_k = 0,$$

    which implies

    $$c = \frac{g_k^T (r_{k-1} - s_{k-1})}{\|g_k\|^2}.$$

    Hence, we select $c_k$ as

    $$c_k := \min\{\bar{c}, \max\{0, c\}\}, \quad (2.4)$$

    which implies $0 \le c_k \le \bar{c} < 1$.
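    In code, (2.4) is a one-line clip of the least-squares solution. A minimal sketch (ours), using $\bar{c} = 0.105$, the value adopted in the experiments of Section 4, as the default:

    ```python
    import numpy as np

    def c_k(g, r_prev, s_prev, c_bar=0.105):
        """Clipped least-squares parameter (2.4); c_bar as in Section 4."""
        c = g @ (r_prev - s_prev) / (g @ g)   # unclipped solution c
        return min(c_bar, max(0.0, c))        # 0 <= c_k <= c_bar < 1
    ```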

    Based on the search directions of the three-term CG methods defined in (2.2) and (2.3), we construct a new search direction for the hybrid three-term CG method as follows:

    $$d_k := \begin{cases} -g_0, & k = 0, \\ -g_k + \beta_k^{HTHP} d_{k-1} + \kappa_k^{HTHP} r_{k-1}, & k \ge 1, \end{cases} \quad (2.5)$$

    where

    $$\beta_k^{HTHP} := \frac{g_k^T r_{k-1}}{n_k} - \frac{\|r_{k-1}\|^2 g_k^T d_{k-1}}{n_k^2}, \quad (2.6)$$
    $$\kappa_k^{HTHP} := c_k \frac{g_k^T d_{k-1}}{n_k}, \quad (2.7)$$

    and

    $$n_k := \max\{\mu \|d_{k-1}\| \|r_{k-1}\|,\; d_{k-1}^T r_{k-1},\; \|g_{k-1}\|^2\}, \quad \mu > 0. \quad (2.8)$$
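    Putting (2.4)–(2.8) together, a compact sketch of the HTHP search direction (illustrative Python of ours, with the parameter values used later in Section 4 as defaults):

    ```python
    import numpy as np

    def hthp_direction(g, g_prev, d_prev, s_prev, mu=0.02, c_bar=0.105):
        """Search direction (2.5)-(2.8) of the HTHP method (sketch)."""
        r = g - g_prev                                           # r_{k-1}
        n = max(mu * np.linalg.norm(d_prev) * np.linalg.norm(r),
                d_prev @ r, g_prev @ g_prev)                     # n_k, (2.8)
        c = min(c_bar, max(0.0, g @ (r - s_prev) / (g @ g)))     # c_k, (2.4)
        beta = (g @ r) / n - (r @ r) * (g @ d_prev) / n**2       # (2.6)
        kappa = c * (g @ d_prev) / n                             # (2.7)
        return -g + beta * d_prev + kappa * r                    # (2.5)
    ```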

    Now, we present the framework of our algorithm as follows:

    Algorithm 1: New hybrid three-term HS-PRP CG method (HTHP)
    Input: Select the starting point $x_0 \in \mathbb{R}^n$ and the parameters $\epsilon > 0$, $0 < \delta < \sigma < 1$, $0 \le \bar{c} < 1$, and $\mu > 0$.

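    The remaining steps of Algorithm 1 follow the pattern described above: test $\|g_k\| \le \epsilon$, compute $d_k$ by (2.5), obtain $\alpha_k$ from the weak Wolfe line search, and update $x_{k+1} = x_k + \alpha_k d_k$. A minimal Python sketch of this loop (ours, not the authors' MATLAB code), assuming the weak_wolfe and hthp_direction helpers sketched earlier:

    ```python
    import numpy as np

    def hthp(f, grad, x0, eps=1e-6, max_iter=2000, mu=0.02, c_bar=0.105):
        """Sketch of Algorithm 1 (HTHP); defaults mirror Section 4."""
        x = np.asarray(x0, dtype=float)
        g = grad(x)
        d = -g                                   # d_0 = -g_0
        for _ in range(max_iter):
            if np.linalg.norm(g) <= eps:         # stopping test ||g_k|| <= eps
                break
            alpha = weak_wolfe(f, grad, x, d)    # step satisfying (1.2)-(1.3)
            x_new = x + alpha * d                # x_{k+1} = x_k + alpha_k d_k
            g_new = grad(x_new)
            d = hthp_direction(g_new, g, d, alpha * d, mu, c_bar)  # (2.5)
            x, g = x_new, g_new
        return x
    ```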

    In this section, we provide the global convergence result of the HTHP method under the following assumptions.

    Assumption 1. The level set $B := \{x \in \mathbb{R}^n : f(x) \le f(x_0)\}$ is bounded, where $x_0$ is the starting point.

    Assumption 2. In some neighborhood $\mathcal{L}$ of $B$, the gradient of the function $f$ is Lipschitz continuous. That is, we can find $L > 0$ such that

    $$\|g(x) - g(v)\| \le L \|x - v\|, \quad \forall x, v \in \mathcal{L}.$$

    In other words, Assumption 1 states that there exists a constant $T > 0$ such that:

    $$\|x\| \le T, \quad \forall x \in B.$$

    Furthermore, observe that from Assumptions 1 and 2, we can obtain a positive constant $F$ such that:

    $$\|g(x)\| \le F, \quad \forall x \in B.$$

    Next, we will present the sufficient descent condition for the HTHP method.

    Lemma 1. Let $d_k$ be generated by (2.5). Then we obtain

    $$g_k^T d_k \le -\left(1 - \frac{1}{4}(1 + \bar{c})^2\right) \|g_k\|^2. \quad (3.1)$$

    So, the search direction given by (2.5) satisfies the sufficient descent condition (1.4).

    Proof. For $k = 0$, we have $g_0^T d_0 = -\|g_0\|^2$, and relation (3.1) is obvious since $0 \le c_k \le \bar{c} < 1$. Meanwhile, for $k \ge 1$, multiplying both sides of (2.5) by $g_k^T$, we get

    $$g_k^T d_k = -\|g_k\|^2 + \left(\frac{g_k^T r_{k-1}}{n_k} - \frac{\|r_{k-1}\|^2 g_k^T d_{k-1}}{n_k^2}\right) g_k^T d_{k-1} + c_k \frac{g_k^T d_{k-1}}{n_k} g_k^T r_{k-1} = -\|g_k\|^2 + (1 + c_k) \frac{g_k^T r_{k-1}}{n_k} g_k^T d_{k-1} - \frac{\|r_{k-1}\|^2}{n_k^2} (g_k^T d_{k-1})^2. \quad (3.2)$$

    Using the inequality $a_k^T b_k \le \frac{1}{2}(\|a_k\|^2 + \|b_k\|^2)$ with

    $$a_k = \frac{1}{\sqrt{2}}(1 + c_k) g_k, \quad b_k = \frac{\sqrt{2}\,(g_k^T d_{k-1})}{n_k} r_{k-1},$$

    we obtain

    $$(1 + c_k) \frac{g_k^T r_{k-1}}{n_k} g_k^T d_{k-1} \le \frac{1}{4}(1 + c_k)^2 \|g_k\|^2 + \frac{(g_k^T d_{k-1})^2 \|r_{k-1}\|^2}{n_k^2}. \quad (3.3)$$

    Combining (3.2) with (3.3), we get

    $$\begin{aligned} g_k^T d_k &\le -\|g_k\|^2 + \frac{1}{4}(1 + c_k)^2 \|g_k\|^2 + \frac{(g_k^T d_{k-1})^2 \|r_{k-1}\|^2}{n_k^2} - \frac{\|r_{k-1}\|^2}{n_k^2}(g_k^T d_{k-1})^2 \\ &= -\|g_k\|^2 + \frac{1}{4}(1 + c_k)^2 \|g_k\|^2 = -\left(1 - \frac{1}{4}(1 + c_k)^2\right)\|g_k\|^2 \le -\left(1 - \frac{1}{4}(1 + \bar{c})^2\right)\|g_k\|^2. \end{aligned}$$

    The proof is completed.

    Remark 1. The lemma above indicates that the HTHP method always satisfies the sufficient descent condition, independently of any line search.

    Next, we will establish the global convergence properties of the HTHP method.

    Theorem 1. Let Assumptions 1 and 2 hold, and assume conditions (1.2) and (1.3) are satisfied. Then

    $$\liminf_{k \to \infty} \|g_k\| = 0. \quad (3.4)$$

    Proof. We prove the theorem by contradiction; that is, we assume that (3.4) is not true. Then there exists a constant $\zeta > 0$ such that

    $$\|g_k\| \ge \zeta, \quad \text{for all } k \ge 0. \quad (3.5)$$

    From (2.6), we have

    $$|\beta_k^{HTHP}| = \left|\frac{g_k^T r_{k-1}}{n_k} - \frac{\|r_{k-1}\|^2 g_k^T d_{k-1}}{n_k^2}\right| \le \frac{\|g_k\| \|r_{k-1}\|}{\mu \|d_{k-1}\| \|r_{k-1}\|} + \frac{\|r_{k-1}\|^2 \|g_k\| \|d_{k-1}\|}{(\mu \|d_{k-1}\| \|r_{k-1}\|)^2} = \left(\frac{1}{\mu} + \frac{1}{\mu^2}\right) \frac{\|g_k\|}{\|d_{k-1}\|}. \quad (3.6)$$

    Next, from (2.7), we have

    $$|\kappa_k^{HTHP}| = \left|c_k \frac{g_k^T d_{k-1}}{n_k}\right| = c_k \left|\frac{g_k^T d_{k-1}}{n_k}\right| \le \frac{\bar{c}\,\|g_k\| \|d_{k-1}\|}{n_k} \le \frac{\bar{c}\,\|g_k\| \|d_{k-1}\|}{\mu \|d_{k-1}\| \|r_{k-1}\|} = \frac{\bar{c}\,\|g_k\|}{\mu \|r_{k-1}\|}. \quad (3.7)$$

    Furthermore, from (2.5)–(2.8), (3.6) and (3.7), we have

    $$\begin{aligned} \|d_k\| &= \|-g_k + \beta_k^{HTHP} d_{k-1} + \kappa_k^{HTHP} r_{k-1}\| \le \|g_k\| + |\beta_k^{HTHP}| \|d_{k-1}\| + |\kappa_k^{HTHP}| \|r_{k-1}\| \\ &\le \|g_k\| + \left(\frac{1}{\mu} + \frac{1}{\mu^2}\right) \frac{\|g_k\|}{\|d_{k-1}\|} \|d_{k-1}\| + \frac{\bar{c}\,\|g_k\|}{\mu \|r_{k-1}\|} \|r_{k-1}\| = \left(1 + \frac{1}{\mu} + \frac{1}{\mu^2} + \frac{\bar{c}}{\mu}\right) \|g_k\| \le \left(1 + \frac{1}{\mu} + \frac{1}{\mu^2} + \frac{\bar{c}}{\mu}\right) F. \end{aligned}$$

    Hence, the sequence $\{\|d_k\|\}$ generated by the HTHP method is bounded above, i.e.,

    $$\|d_k\| \le Y, \quad \forall k \ge 0, \quad (3.8)$$

    where $Y = \left(1 + \frac{1}{\mu} + \frac{1}{\mu^2} + \frac{\bar{c}}{\mu}\right) F$.

    Now, from (1.2), using Lemma 1 together with $0 \le \bar{c} < 1$, $\delta > 0$, and $\alpha_k > 0$, we have

    $$f(x_k + \alpha_k d_k) \le f(x_k) + \delta \alpha_k g_k^T d_k \le f(x_k) - \delta \alpha_k \left(1 - \frac{1}{4}(1 + \bar{c})^2\right) \|g_k\|^2 \le f(x_k).$$

    Expanding the above result and using Assumption 1, we obtain

    $$f(x_k + \alpha_k d_k) = f(x_{k+1}) \le f(x_k) \le f(x_{k-1}) \le \ldots \le f(x_0) < +\infty. \quad (3.9)$$

    Also, subtracting $g_k^T d_k$ from both sides of condition (1.3) yields

    $$g(x_k + \alpha_k d_k)^T d_k - g_k^T d_k \ge \sigma g_k^T d_k - g_k^T d_k = -(1 - \sigma) g_k^T d_k.$$

    Applying Lemma 1 and Assumption 2 to the relation above, we now have

    $$-(1 - \sigma) g_k^T d_k \le (g_{k+1} - g_k)^T d_k \le \|g_{k+1} - g_k\| \|d_k\| \le L \|x_{k+1} - x_k\| \|d_k\|.$$

    Using the equation $\|x_{k+1} - x_k\| = \|\alpha_k d_k\| = \alpha_k \|d_k\|$, the above relation becomes

    $$-(1 - \sigma) g_k^T d_k \le L \|d_k\|^2 \alpha_k. \quad (3.10)$$

    Multiplying (3.10) by $-\delta g_k^T d_k \ge 0$ and combining with (1.2), we get

    $$\delta (1 - \sigma) (g_k^T d_k)^2 \le -L \|d_k\|^2 \delta \alpha_k g_k^T d_k \le L \|d_k\|^2 (f(x_k) - f(x_{k+1})),$$

    or

    $$\frac{\delta (1 - \sigma) (g_k^T d_k)^2}{L \|d_k\|^2} \le f(x_k) - f(x_{k+1}). \quad (3.11)$$

    Summing (3.11) over $k$ and applying (3.9), we have

    $$\frac{\delta (1 - \sigma)}{L} \sum_{k=0}^{\infty} \frac{(g_k^T d_k)^2}{\|d_k\|^2} \le (f(x_0) - f(x_1)) + (f(x_1) - f(x_2)) + \ldots = f(x_0) - \lim_{k \to \infty} f(x_k) < +\infty,$$

    since $f$ is bounded below.

    This implies

    $$\sum_{k=0}^{+\infty} \frac{(g_k^T d_k)^2}{\|d_k\|^2} < +\infty. \quad (3.12)$$

    Now, from inequalities (3.5) and (3.1), we get

    $$-g_k^T d_k \ge \left(1 - \frac{1}{4}(1 + \bar{c})^2\right) \|g_k\|^2 \ge \left(1 - \frac{1}{4}(1 + \bar{c})^2\right) \zeta^2. \quad (3.13)$$

    Squaring both sides of (3.13), dividing by $\|d_k\|^2$, and using (3.8), we obtain

    $$\sum_{k=0}^{+\infty} \frac{(g_k^T d_k)^2}{\|d_k\|^2} \ge \left(1 - \frac{1}{4}(1 + \bar{c})^2\right)^2 \zeta^4 \sum_{k=0}^{+\infty} \frac{1}{\|d_k\|^2} = +\infty.$$

    This result contradicts (3.12). Therefore, the condition (3.4) holds.

    This section analyzes the performance of the new HTHP CG algorithm on several benchmark test functions taken from Andrei [5] and Moré et al. [37], with dimensions ranging from 2 to 1,000,000 (see Table 1). To illustrate its efficiency, the proposed method was compared with the existing TTCDDY [12], HTT [1], and MPRP [52] methods, based on the following metrics:

    ● Number of iterations denoted as NOI.

    ● Number of function evaluations presented as NOF.

    ● Central processing unit time denoted as CPU time.

    Table 1.  The problems and their dimensions.
    No Problem/Dimension No Problem/Dimension
    1 COSINE 6000 67 Extended DENSCHNB 300,000
    2 COSINE 100,000 68 Generalized Quartic 9000
    3 COSINE 800,000 69 Generalized Quartic 90,000
    4 DIXMAANA 2000 70 Generalized Quartic 500,000
    5 DIXMAANA 30,000 71 BIGGSB1 110
    6 DIXMAANB 8000 72 BIGGSB1 200
    7 DIXMAANB 16,000 73 SINE 100,000
    8 DIXMAANC 900 74 SINE 50,000
    9 DIXMAANC 9000 75 FLETCBV 15
    10 DIXMAAND 4000 76 FLETCBV 55
    11 DIXMAAND 30,000 77 NONSCOMP 5000
    12 DIXMAANE 800 78 NONSCOMP 80,000
    13 DIXMAANE 16,000 79 POWER 150
    14 DIXMAANF 5000 80 POWER 90
    15 DIXMAANF 20,000 81 RAYDAN1 500
    16 DIXMAANG 4000 82 RAYDAN1 5000
    17 DIXMAANG 30,000 83 RAYDAN2 2000
    18 DIXMAANH 2000 84 RAYDAN2 20,000
    19 DIXMAANH 50,000 85 RAYDAN2 500,000
    20 DIXMAANI 120 86 DIAGONAL1 800
    21 DIXMAANI 12 87 DIAGONAL1 2000
    22 DIXMAANJ 1000 88 DIAGONAL2 100
    23 DIXMAANJ 5000 89 DIAGONAL2 1000
    24 DIXMAANK 4000 90 DIAGONAL3 500
    25 DIXMAANK 40 91 DIAGONAL3 2000
    26 DIXMAANL 800 92 Discrete Boundary Value 2000
    27 DIXMAANL 8000 93 Discrete Boundary Value 20,000
    28 DIXON3DQ 150 94 Discrete Integral Equation 500
    29 DIXON3DQ 15 95 Discrete Integral Equation 1500
    30 DQDRTIC 9000 96 Extended Powell Singular 1000
    31 DQDRTIC 90,000 97 Extended Powell Singular 2000
    32 QUARTICM 5000 98 Linear Full Rank 100
    33 QUARTICM 150,000 99 Linear Full Rank 500
    34 EDENSCH 7000 100 Osborne 2 11
    35 EDENSCH 40,000 101 Penalty1 200
    36 EDENSCH 500,000 102 Penalty1 1000
    37 EG2 100 103 Penalty2 100
    38 EG2 35 104 Penalty2 110
    39 FLETCHCR 1000 105 Extended Rosenbrock 500
    40 FLETCHCR 50,000 106 Extended Rosenbrock 1000
    41 FLETCHCR 200,000 107 Broyden Tridiagonal 500
    42 Freudenstein&Roth 460 108 Broyden Tridiagonal 50
    43 Freudenstein&Roth 10 109 HIMMELH 70,000
    44 Generalized Rosenbrock 10,000 110 HIMMELH 240,000
    45 Generalized Rosenbrock 100 111 Brown Badly Scaled 2
    46 HIMMELBG 70,000 112 Brown and Dennis 4
    47 HIMMELBG 240,000 113 Biggs EXP6 6
    48 LIARWHD 15 114 Osborne1 5
    49 LIARWHD 1000 115 Extended Beale 5000
    50 Extended Penalty 1000 116 Extended Beale 10,000
    51 Extended Penalty 8000 117 HIMMELBC 500,000
    52 QUARTC 4000 118 HIMMELBC 1,000,000
    53 QUARTC 80,000 119 ARWHEAD 100
    54 QUARTC 500,000 120 ARWHEAD 1000
    55 TRIDIA 300 121 ENGVAL1 500,000
    56 TRIDIA 50 122 ENGVAL1 1,000,000
    57 Extended Woods 150,000 123 DENSCHNA 500,000
    58 Extended Woods 200,000 124 DENSCHNA 1,000,000
    59 BDEXP 5000 125 DENSCHNB 500,000
    60 BDEXP 50,000 126 DENSCHNB 1,000,000
    61 BDEXP 500,000 127 DENSCHNC 10
    62 DENSCHNF 90,000 128 DENSCHNC 500
    63 DENSCHNF 280,000 129 DENSCHNF 500,000
    64 DENSCHNF 600,000 130 DENSCHNF 1,000,000
    65 DENSCHNB 6000 131 ENGVAL8 500,000
    66 DENSCHNB 24,000 132 ENGVAL8 1,000,000


    The executions were carried out under the weak Wolfe line search. For the proposed method, the parameter values $\sigma = 0.009$, $\delta = 0.0001$, $\mu = 0.02$, and $\bar{c} = 0.105$ were used in the numerical experiments. For the TTCDDY, HTT, and MPRP methods, the parameter values defined in the respective studies were maintained. The termination criterion for all algorithms was $\|g_k\| \le 10^{-6}$, and an algorithm is said to fail (failure denoted as "NaN") if any of the following conditions hold:

    ● The stopping condition $\|g_k\| \le 10^{-6}$ is not satisfied.

    ● The NOI exceeds 2000.

    All algorithms were coded in MATLAB as in [27], and the host computer was an Intel Core i7 processor with 16 GB RAM running 64-bit Windows 10 Pro. The complete experimental results for the TTCDDY, HTT, MPRP, and HTHP methods are available at https://github.com/malik1106/HTHP.git, and the results are further evaluated graphically using the performance profile tool introduced by Dolan and Moré [14], as shown in Figures 1 (NOI), 2 (NOF), and 3 (CPU time), respectively. According to the performance profile rule, the algorithm with the highest curve is the most efficient among those compared.
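    For reference, the sketch below shows how a Dolan-Moré performance profile is computed from a cost table (a simplified illustration of ours, not the authors' plotting code):

    ```python
    import numpy as np

    def performance_profile(T):
        """Dolan-More profile. T[i, s]: cost of solver s on problem i
        (np.inf marks a failure). Returns the ratio grid taus and, for each
        solver, the fraction of problems solved within factor tau of the best."""
        best = T.min(axis=1, keepdims=True)      # best cost per problem
        R = T / best                             # performance ratios r_{p,s}
        taus = np.sort(np.unique(R[np.isfinite(R)]))
        rho = np.array([[np.mean(R[:, s] <= t) for t in taus]
                        for s in range(T.shape[1])])
        return taus, rho                         # plot rho[s] versus taus
    ```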

    Referring to our plots, it is clear from Figures 1–3 that the curve of the proposed HTHP method lies above those of TTCDDY, HTT, and MPRP for all three performance metrics, namely NOI, NOF, and CPU time. This implies that HTHP is the most efficient algorithm among the methods compared.

    Figure 1.  Performance profiles on NOI.
    Figure 2.  Performance profiles on NOF.
    Figure 3.  Performance profiles on CPU time.

    To date, CG methods have been widely used for solving various problems, such as regression analysis [47,48], image restoration [10,27], motion control [2,3], and portfolio selection [1,6,12,13,31,32]. In this section, we apply the proposed method to image restoration and portfolio selection problems.

    The problem of restoring images corrupted by noise during transmission or acquisition is among the difficult optimization problems due to its nonsmooth nature. Most of the available gradient-based algorithms are unable to solve such problems directly. With recent advances in gradient-based methods, a more efficient and reliable noise suppression process, capable of producing better and more accurate results, can be achieved. One of the classical noise models considered by several researchers is impulse noise. Recently, researchers have investigated the performance of some gradient-based methods on image restoration problems (see [7,50]).

    In this section, we demonstrate the performance of the proposed HTHP CG method in recovering the original Camera, Lena, and Goldhill $256 \times 256$ grey-level images $x$ that have been corrupted by salt-and-pepper impulse noise. For this purpose, we first consider the index set of noise candidates:

    $$K = \{(i,j) \in W \;|\; \bar{\xi}_{ij} \ne \xi_{ij},\; \xi_{ij} = s_{\min} \text{ or } s_{\max}\},$$

    where $x_{i,j}$ represents the grey level of the true image $x$ at pixel location $(i,j)$, $W = \{1, 2, \ldots, M\} \times \{1, 2, \ldots, N\}$, and $\bar{\xi}$ is the adaptive median filter output of the observed noisy image $\xi$ of $x$ corrupted by salt-and-pepper impulse noise. Also, $s_{\min}$ and $s_{\max}$ denote the minimum and maximum values of a noisy pixel, respectively. Based on the above, we define the image restoration problem as follows:

    $$\min_u G(u),$$

    where

    $$G(u) = \sum_{(i,j) \in K} \Big\{ \sum_{(m,n) \in V_{i,j} \setminus K} \phi_\alpha(u_{i,j} - \xi_{m,n}) + \frac{1}{2} \sum_{(m,n) \in V_{i,j} \cap K} \phi_\alpha(u_{i,j} - u_{m,n}) \Big\},$$

    where $V_{i,j} = \{(i, j-1), (i, j+1), (i-1, j), (i+1, j)\}$ is the neighborhood of $(i,j)$. From the above equation, it is clear that the regularity of $G$ relies on the function $\phi_\alpha$, which is chosen as the edge-preserving potential function $\phi_\alpha(t) = \sqrt{t^2 + \alpha}$ with $\alpha = 1$.
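    For concreteness, the following sketch (ours; a straightforward transcription of the definitions above) evaluates $G(u)$ for a grey-level image, given the observed noisy image $\xi$ and a Boolean mask marking the noise-candidate set $K$; out-of-range border neighbors are simply skipped:

    ```python
    import numpy as np

    def phi(t, alpha=1.0):
        # edge-preserving potential phi_alpha(t) = sqrt(t^2 + alpha)
        return np.sqrt(t * t + alpha)

    def restoration_objective(u, xi, noise_mask, alpha=1.0):
        """G(u): u is the current estimate (full image), xi the observed
        noisy image, noise_mask[i, j] True iff (i, j) is in K."""
        M, N = xi.shape
        total = 0.0
        for i in range(M):
            for j in range(N):
                if not noise_mask[i, j]:
                    continue                       # sum only over (i, j) in K
                for m, n in ((i, j - 1), (i, j + 1), (i - 1, j), (i + 1, j)):
                    if 0 <= m < M and 0 <= n < N:  # neighborhood V_{i,j}
                        if noise_mask[m, n]:       # neighbor in K
                            total += 0.5 * phi(u[i, j] - u[m, n], alpha)
                        else:                      # neighbor not in K
                            total += phi(u[i, j] - xi[m, n], alpha)
        return total
    ```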

    To demonstrate the suitability of the proposed HTHP method, we compare its performance with that of the TTCDDY, HTT, and MPRP methods based on three metrics: CPU time (CPUT), relative error (RelErr), and peak signal-to-noise ratio (PSNR). All methods were implemented in MATLAB on an Intel Core i7 computer with 16 GB RAM. The quality of the restored images is assessed at noise levels of 30, 50, and 80 percent, respectively.

    From the results presented in Tables 2–7, we can see that the proposed method outperformed the other methods considered in this study on all three metrics employed: CPUT, RelErr, and PSNR. In addition, Figures 4–6 show that the proposed method was able to remove noise from the corrupted Camera, Lena, and Goldhill images with better accuracy compared with the other methods. Based on these results, we conclude that the proposed HTHP CG method is suitable and effective.

    Figure 4.  Images corrupted by 30% salt-and-pepper noise: (a–c), the restored images via TTCDDY: (d–f), the restored images via HTT: (g–i), the restored images via HTHP: (j–l), and the restored images via MPRP: (m–o).
    Figure 5.  Images corrupted by 50% salt-and-pepper noise: (a–c), the restored images via TTCDDY: (d–f), the restored images via HTT: (g–i), the restored images via HTHP: (j–l), and the restored images via MPRP: (m–o).
    Figure 6.  Images corrupted by 70% salt-and-pepper noise: (a–c), the restored images via TTCDDY: (d–f), the restored images via HTT: (g–i), the restored images via HTHP: (j–l), and the restored images via MPRP: (m–o).
    Table 2.  Image restoration outputs for Camera based on CPUT, RelErr, and PSNR.
    Noise TTCDDY HTT
    CPUT RelErr PSNR CPUT RelErr PSNR
    30% 48.7915 1.05125 30.7558 48.5623 1.05126 30.7558
    50% 116.132 1.71497 27.4747 117.190 1.71497 27.4747
    80% 190.563 3.15721 23.5289 186.039 2.99665 23.6932

    Table 3.  Image restoration outputs for Camera based on CPUT, RelErr, and PSNR.
    Noise HTHP MPRP
    CPUT RelErr PSNR CPUT RelErr PSNR
    30% 48.5041 1.05112 30.7567 59.1099 1.2126 30.7433
    50% 117.053 1.71051 27.3803 114.1785 1.8109 27.2048
    80% 189.843 2.93330 23.8340 182.7823 3.2413 23.7697

    Table 4.  Image restoration outputs for Lena based on CPUT, RelErr, and PSNR.
    Noise TTCDDY HTT
    CPUT RelErr PSNR CPUT RelErr PSNR
    30% 37.1322 0.89482 33.8083 37.2949 0.89494 33.8069
    50% 86.0965 1.34941 30.2385 81.7588 1.34964 30.2370
    80% 185.258 25.3976 26.0779 149.421 2.48394 26.3320

    Table 5.  Image restoration outputs for Lena based on CPUT, RelErr, and PSNR.
    Noise HTHP MPRP
    CPUT RelErr PSNR CPUT RelErr PSNR
    30% 37.4014 0.89483 33.8086 46.7401 0.9161 33.6496
    50% 80.0735 1.34933 30.2309 110.7538 1.4301 30.2379
    80% 150.912 2.62686 25.9123 145.4011 2.4176 26.4453

    Table 6.  Image restoration outputs for Goldhill based on CPUT, RelErr, and PSNR.
    Noise TTCDDY HTT
    CPUT RelErr PSNR CPUT RelErr PSNR
    30% 47.9128 0.98980 32.0271 48.1301 0.88681 32.2260
    50% 80.1133 1.49985 29.4045 81.8965 1.44497 29.2995
    80% 125.720 2.63321 25.9414 123.6915 2.60388 25.8423

    Table 7.  Image restoration outputs for Goldhill based on CPUT, RelErr, and PSNR.
    Noise HTHP MPRP
    CPUT RelErr PSNR CPUT RelErr PSNR
    30% 48.0866 0.85528 32.1903 46.2320 0.8965 32.0978
    50% 81.8959 1.52717 29.2131 78.0435 1.4165 29.4420
    80% 124.789 2.59839 25.9093 142.8682 2.6428 25.8946


    When investing in bonds or stocks, investors must pay attention to two basic components of financial instruments, namely risk and return. Risk is the possibility of loss when an investment is made, while return is the possible profit that can be obtained from investing. In practice, there is a positive correlation between the expected return and the risk that must be borne; the greater the expected return, the greater the risk, and vice versa [15]. One way to make sound investment decisions is to build a portfolio. An investment portfolio is a collection of investment instruments in several financial securities, which may or may not be of the same kind, assembled to minimize risk and/or maximize returns. By creating a portfolio, investors can identify which securities to choose and how much capital to invest in the selected securities. Naturally, investors will choose an efficient portfolio in which to invest [34].

    An efficient portfolio aims to minimize risk or maximize return. This study focuses on selecting an efficient portfolio by minimizing risk. Here, portfolio risk is quantified with a risk measure; several such measures exist, and variance is used here [9]. To minimize portfolio variance, several approaches from mathematical optimization theory can be used, one of which is the CG method. This method is still widely used and developed by researchers; among its advantages are low memory usage and high convergence speed [5].

    The development of technology and civilization has led to a better understanding of the benefits of short-term and long-term investments. The main purpose of investment is to obtain larger funds in the future. The capital market in every country is one of the places where investors carry out investment activities. One of the investment products found in the capital market is stocks. Simply put, a stock is proof of ownership of a small part of a company. This entitles the shareholder to a share of the company's assets and profits in proportion to the number of stocks owned [35]. There are at least two things that need to be considered when investing in stocks, namely risk and rate of return.

    Return on investment is a financial measure that is widely used to assess the profitability of an investment. Returns can be returns that have already occurred (realized returns) or expected returns, which have not yet occurred but are expected in the future. Realized returns can be calculated from historical data, which is important because it serves as a measure of an asset's performance. In addition, realized returns are also the basis for determining expected returns in the future [18].

    For stock A, the realized returns can be defined as

    $$r_{i,A} = \frac{p_i - p_{i-1}}{p_{i-1}}, \quad (5.1)$$

    where $r_{i,A}$ is the return of stock A at time $i$, $p_i$ is the stock price at time $i$, and $p_{i-1}$ is the stock price at time $i-1$ [46]. The expected return can be formulated as follows:

    $$E(R_{i,A}) = \mu_A = \sum_{i=1}^{N} r_{i,A} f(r_{i,A}), \quad (5.2)$$

    where $R_{i,A}$ is a random variable representing the return of stock A at time $i$, $E(R_{i,A}) = \mu_A$ is the expected return of stock A, $R_{i,A}$ follows a certain distribution with density function $f(r_{i,A})$, and $i = 1, 2, \ldots, N$ [20,21].

    In addition to return, risk also needs to be considered when investing. Risk is often associated with deviations from the expected outcome. One way to calculate risk is the standard deviation method, which measures the deviation of realized values from the expected value. The variance of stock A can be expressed as follows [20,21]:

    $$\sigma_A^2 = E[(R_{i,A} - \mu_A)^2] = \sum_{i=1}^{N} (r_{i,A} - \mu_A)^2 f(r_{i,A}). \quad (5.3)$$
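    As an illustration, the sketch below computes (5.1)–(5.3) from a daily closing-price series. It uses empirical relative frequencies $1/N$ in place of the fitted densities $f(r_{i,A})$ employed later in this section, so it is only an approximation of the paper's procedure:

    ```python
    import numpy as np

    def stock_stats(prices):
        """Realized returns (5.1), expected return (5.2), variance (5.3),
        with equal weights 1/N standing in for a fitted density f(r)."""
        p = np.asarray(prices, dtype=float)
        r = (p[1:] - p[:-1]) / p[:-1]      # r_i = (p_i - p_{i-1}) / p_{i-1}
        mu = r.mean()                      # E(R) with f(r_i) = 1/N
        var = ((r - mu) ** 2).mean()       # sigma^2
        return r, mu, var
    ```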

    In investing, Harry Markowitz advises not to put all of one's capital into just one asset, because if that asset fails, all the capital invested in it will be lost. Thus, one way investors can minimize risk is to diversify their investments in the form of a portfolio. Forming a portfolio is one way an investor can maximize the expected return and/or minimize the level of risk to be faced. Of course, the portfolio formed must be optimal. To form an optimal portfolio, Harry Markowitz proposed a method known as the mean-variance method, which uses the mean and variance of historical stock price data [34]. The main result of this method is the proportion of each stock, from which an optimal portfolio can be formed.

    Markowitz's portfolio theory concerns how to diversify a stock portfolio to minimize risk. Portfolio risk is not just the weighted average of the individual risks in the portfolio; it must also account for the relationships between the stocks. This relationship is known as covariance. Covariance is a measure that expresses the joint variability of two random variables, defined as follows [20,21]:

    $$\sigma_{AB} = E[(R_{i,A} - \mu_A)(R_{i,B} - \mu_B)] = \sum_{i=1}^{N} (r_{i,A} - \mu_A)(r_{i,B} - \mu_B) f(r_{i,A}, r_{i,B}), \quad (5.4)$$

    where $\sigma_{AB}$ is the covariance of returns between stocks A and B. Next, we consider the return, expected return, and variance and covariance of the portfolio return. Suppose an investment portfolio consists of $K$ stocks, with the return of each stock being $r_{i,1}, r_{i,2}, \ldots, r_{i,K}$. If $r^T$ is the vector of stock returns in the investment portfolio, then we can express it as $r^T = (r_{i,1}, r_{i,2}, \ldots, r_{i,K})$. It is assumed that the first and second moments of the returns on these assets exist. Let $\mu^T$ and $w^T$ represent the transposes of the mean vector and weight vector, respectively: $\mu^T = (\mu_1, \mu_2, \ldots, \mu_K)$ and $w^T = (w_1, w_2, \ldots, w_K)$, where $\mu_j = E[r_{i,j}]$ and $w_j$ is the weight/proportion of funds allocated to stock $j$, $j = 1, 2, \ldots, K$. Based on these notations, the return of the portfolio can be formulated as follows [46]:

    $$r_p = \sum_{j=1}^{K} w_j r_{i,j} = w^T r. \quad (5.5)$$

    According to (5.5), the expected return of the portfolio can be expressed as the following equation:

    $$\mu_p = E[r_p] = E[w^T r] = w^T E[r] = w^T \mu. \quad (5.6)$$

    By using (5.5), the variance of portfolio return can be expressed as follows:

    $$\sigma_p^2 = \mathrm{Var}(r_p) = \sum_{j=1}^{K} \sum_{l=1}^{K} w_j w_l \sigma_{jl} = w^T \Sigma w, \quad (5.7)$$

    where

    $$\Sigma = \begin{pmatrix} \sigma_{11} & \sigma_{12} & \cdots & \sigma_{1K} \\ \sigma_{21} & \sigma_{22} & \cdots & \sigma_{2K} \\ \vdots & \vdots & \ddots & \vdots \\ \sigma_{K1} & \sigma_{K2} & \cdots & \sigma_{KK} \end{pmatrix},$$

    and $\sigma_{11}, \sigma_{12}, \ldots, \sigma_{KK}$ can be determined by (5.4).

    Knowing the expected return and variance of the portfolio return, the next problem is how to choose an efficient portfolio, namely a portfolio that has a high expected return with low risk as measured by variance. Markowitz [34] popularized the method of selecting an efficient portfolio by minimizing portfolio risk as measured by variance. Therefore, the portfolio selection optimization problem to be solved is as follows:

    $$\begin{cases} \text{minimize:} & \sigma_p^2 = \sum_{j=1}^{K} \sum_{l=1}^{K} w_j w_l \sigma_{jl} = w^T \Sigma w, \\ \text{subject to:} & \sum_{j=1}^{K} w_j = 1. \end{cases} \quad (5.8)$$
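    Given a weight vector and the covariance matrix, (5.6) and (5.7) reduce to two inner products; a minimal sketch (ours):

    ```python
    import numpy as np

    def portfolio_stats(w, mu, Sigma):
        """Expected return (5.6) and variance (5.7) of a portfolio w."""
        w, mu = np.asarray(w), np.asarray(mu)
        return w @ mu, w @ Sigma @ w   # mu_p = w^T mu, sigma_p^2 = w^T Sigma w
    ```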

    Now, we consider the problem of determining the proportion of each stock in a portfolio by applying the proposed CG method, such that an optimal portfolio is produced. The stock data analyzed in this portfolio problem are for stocks traded on the Indonesian capital market through the Indonesia Stock Exchange (IDX). The historical data used are the daily closing prices of the stocks included in the IDX30 list, accessed through the website http://finance.yahoo.com. The names of the top 20 stocks are listed in Table 8.

    Furthermore, for the 20 stocks selected for portfolio formation, daily historical data were sought for the period June 1, 2020 to May 31, 2022. The historical data for these stocks contain the opening, highest, lowest, and closing prices, respectively. For analysis purposes, in this research we only need the daily closing prices. After selecting the stocks to be included in the portfolio, their distribution models, expected returns, and return variances were estimated. The distribution model for each stock's return was identified by computing the returns using formula (5.1) and then fitting a distribution using the EasyFit software. After obtaining the estimated distribution function, the expected return and variance were calculated using (5.2) and (5.3), respectively. In addition, we also calculated the ratio between expected return and variance. The estimated distribution, expected return ($\mu$), variance ($\sigma^2$), and ratio ($\mu/\sigma^2$) of each stock's return are given in Table 9.

    Table 8.  List of stocks included in the IDX30 index for the period February 2022–July 2022.
    No Code Name
    1 ADRO Adaro Energy Tbk.
    2 ANTM Aneka Tambang (Persero) Tbk
    3 ASII Astra International Tbk.
    4 BBCA Bank Central Asia Tbk.
    5 BBNI Bank Negara Indonesia (Persero) Tbk.
    6 BBRI Bank Rakyat Indonesia (Persero) Tbk.
    7 BBTN Bank Tabungan Negara (Persero) Tbk.
    8 BMRI Bank Mandiri (Persero) Tbk.
    9 BRPT Barito Pacific Tbk.
    10 CPIN Charoen Pokphand Indonesia Tbk.
    11 ICBP Indofood CBP Sukses Makmur Tbk.
    12 INCO Vale Indonesia Tbk.
    13 INDF Indofood Sukses Makmur Tbk.
    14 KLBF Kalbe Farma Tbk.
    15 PGAS Perusahaan Gas Negara (Persero) Tbk.
    16 SMGR Semen Indonesia (Persero) Tbk.
    17 PTBA Tambang Batubara Bukit Asam (Persero) Tbk.
    18 TLKM Telekomunikasi Indonesia (Persero) Tbk.
    19 WSKT Waskita Karya (Persero) Tbk.
    20 UNVR Unilever Indonesia Tbk.


    Suppose the investor forms a portfolio consisting of the five best stocks. Therefore, from the 20 stocks in Table 9, the five best stocks are selected based on the largest ratio values. The covariances among the five selected stocks are summarized in Table 10. Since five stocks were selected for the portfolio, the optimization problem (5.8) becomes

    $$\begin{cases} \text{minimize:} & \sigma_p^2 = \sum_{j=1}^{5} \sum_{l=1}^{5} w_j w_l \sigma_{jl} = w^T \Sigma w, \\ \text{subject to:} & w_1 + w_2 + \ldots + w_5 = 1. \end{cases} \quad (5.9)$$

    Note that the proposed HTHP method solves unconstrained optimization problems, so to solve problem (5.9) with the HTHP method, we need to convert it into an unconstrained problem. Let $w_5 = 1 - w_1 - w_2 - w_3 - w_4$, where $w_1, w_2, w_3, w_4$, and $w_5$ are the proportions of the UNVR, SMGR, BRPT, WSKT, and CPIN stocks, respectively. Substituting this, together with the covariance values of the selected stocks in Table 10, we obtain the following unconstrained portfolio selection problem:

    $$\min_{(w_1, w_2, w_3, w_4) \in \mathbb{R}^4} \; \sigma_p^2 = \sum_{j=1}^{5} \sum_{l=1}^{5} w_j w_l \sigma_{jl}, \qquad w_5 = 1 - w_1 - w_2 - w_3 - w_4,$$

    where the covariances $\sigma_{jl}$ take the values given in Table 10.
    Table 9.  Estimated distribution, expected return, variance, and ratio of each stock.
    Stock Estimated distribution μ σ2 (μ/σ2)
    ADRO Dagum (4P) -0.00167 0.00076 -2.18694
    ANTM Dagum (4P) -0.00278 0.00136 -2.0364
    ASII Dagum (4P) -0.00056 0.00043 -1.29461
    BBCA Gen. Logistic -0.00075 0.00024 -3.06997
    BBNI Gen. Logistic -0.00144 0.00053 -2.71164
    BBRI Gen. Logistic -0.00067 0.00045 -1.49148
    BBTN Gen. Logistic -0.00106 0.00074 -1.43336
    BMRI Gen. Logistic -0.00098 0.00043 -2.26977
    BRPT Log-Logistic (3P) 0.00145 0.00101 1.440135
    CPIN Gen. Logistic 0.00029 0.00051 0.576654
    ICBP Laplace 0.00012 0.00027 0.433568
    INCO Gen. Logistic -0.00165 0.00085 -1.93436
    INDF Burr (4P) -0.00002 0.00025 -0.06185
    KLBF Burr (4P) -0.00014 0.00039 -0.35195
    PGAS Gen. Logistic -0.00105 0.00079 -1.32354
    SMGR Burr (4P) 0.00093 0.00059 1.569875
    PTBA Gen. Logistic -0.00130 0.00052 -2.50288
    TLKM Gen. Logistic -0.00037 0.00037 -1.00194
    WSKT Gen. Logistic 0.00090 0.00118 0.761808
    UNVR Gen. Logistic 0.00135 0.00039 3.48021

    Table 10.  Covariance among the selected stocks.
    Stock UNVR SMGR BRPT WSKT CPIN
    UNVR 0.00039 0.00012 0.00008 0.00007 0.00010
    SMGR 0.00012 0.00059 0.00023 0.00026 0.00019
    BRPT 0.00008 0.00023 0.00096 0.00022 0.00022
    WSKT 0.00007 0.00026 0.00022 0.00118 0.00010
    CPIN 0.00010 0.00019 0.00022 0.00010 0.00051

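    To make the conversion concrete, the sketch below (ours) builds $\Sigma$ from Table 10, substitutes $w_5 = 1 - w_1 - w_2 - w_3 - w_4$, and minimizes the resulting unconstrained objective. A generic quasi-Newton solver from SciPy stands in for Algorithm 1 here; it recovers weights and a portfolio variance close to the values reported next:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Covariance matrix from Table 10 (order: UNVR, SMGR, BRPT, WSKT, CPIN).
    Sigma = np.array([
        [0.00039, 0.00012, 0.00008, 0.00007, 0.00010],
        [0.00012, 0.00059, 0.00023, 0.00026, 0.00019],
        [0.00008, 0.00023, 0.00096, 0.00022, 0.00022],
        [0.00007, 0.00026, 0.00022, 0.00118, 0.00010],
        [0.00010, 0.00019, 0.00022, 0.00010, 0.00051],
    ])

    def objective(w4):
        """Unconstrained objective: w^T Sigma w with w5 = 1 - w1 - w2 - w3 - w4."""
        w = np.append(w4, 1.0 - w4.sum())
        return w @ Sigma @ w

    res = minimize(objective, x0=np.full(4, 0.2))   # generic solver, not HTHP
    w = np.append(res.x, 1.0 - res.x.sum())
    print(w, w @ Sigma @ w)                         # weights and variance
    ```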

    Applying Algorithm 1 to solve the above problem, we obtain $w_1 = 0.4347$, $w_2 = 0.1349$, $w_3 = 0.0858$, $w_4 = 0.0973$, and $w_5 = 0.2473$. After obtaining the weight of each stock in the efficient portfolio, the next step is to calculate the expected return of the portfolio using (5.6) and the portfolio variance using (5.7). The expected portfolio return is $\mu_p = 0.000949$ and the portfolio variance is $\sigma_p^2 = 0.000224$. From the results of the analysis, the optimal portfolio composed of the five stocks has the stock composition shown in Table 11.

    From Table 11, the proportion of UNVR is 0.4347. This value indicates that UNVR makes up 43.47% of the total allocation of funds in the formed portfolio. The second proportion, SMGR, is 0.1349, so 13.49% of the funds are allocated to it. The third proportion, BRPT, is 0.0858, so 8.58% of the funds are allocated to it. The fourth proportion, WSKT, receives 9.73% of the funds, and the fifth, CPIN, receives 24.73%. By allocating each stock according to the proportions in Table 11, the investment will provide a rate of return of 0.0949% of the total allocated funds, with a risk (variance) of 0.0224% of the total funds.

    Table 11.  Optimal portfolio weight composition.
    Stock UNVR SMGR BRPT WSKT CPIN
    Proportion 0.4347 0.1349 0.0858 0.0973 0.2473


    We have presented a hybrid three-term CG method for solving unconstrained optimization problems. The method is a combination of three-term HS and PRP types. Under some conditions, the global convergence properties of the method were established. Using several test functions, the numerical results showed that the method is more efficient than the TTCDDY, HTT, and MPRP methods. Moreover, our method was able to solve image restoration and portfolio selection problems.

    This research is funded by the Directorate of Research and Development, Universitas Indonesia, under Hibah PUTI 2022 (Grant No. NKB-655/UN2.RST/HKP.05.00/2022). The third author acknowledges with thanks the Department of Mathematics and Applied Mathematics at Sefako Makgatho Health Sciences University.

    The authors declare that they have no conflicts of interest.



    [1] A. B. Abubakar, P. Kumam, M. Malik, P. Chaipunya, A. H. Ibrahim, A hybrid FR-DY conjugate gradient algorithm for unconstrained optimization with application in portfolio selection, AIMS Math., 6 (2021), 6506–6527. https://doi.org/10.3934/math.2021383 doi: 10.3934/math.2021383
    [2] A. B. Abubakar, P. Kumam, M. Malik, A. H. Ibrahim, A hybrid conjugate gradient based approach for solving unconstrained optimization and motion control problems, Math. Comput. Simulat., 201 (2022), 640–657. https://doi.org/10.1016/j.matcom.2021.05.038 doi: 10.1016/j.matcom.2021.05.038
    [3] A. B. Abubakar, M. Malik, P. Kumam, H. Mohammad, M. Sun, A. H. Ibrahim, et al., A Liu-Storey-type conjugate gradient method for unconstrained minimization problem with application in motion control, J. King Saud Univ., Sci., 34 (2022). https://doi.org/10.1016/j.jksus.2022.101923
    [4] M. Al-Baali, Descent property and global convergence of the Fletcher-Reeves method with inexact line search, IMA J. Numer. Anal., 5 (1985), 121–124. https://doi.org/10.1093/imanum/5.1.121 doi: 10.1093/imanum/5.1.121
    [5] N. Andrei, Nonlinear optimization with financial applications, Cham, Switzerland: Springer, 2020.
    [6] A. M. Awwal, I. M. Sulaiman, M. Malik, M. Mamat, P. Kumam, K. Sitthithakerngkiet, A spectral RMIL+ conjugate gradient method for unconstrained optimization with applications in portfolio selection and motion control, IEEE Access, 9 (2021), 75398–75414. https://doi.org/10.1109/ACCESS.2021.3081570 doi: 10.1109/ACCESS.2021.3081570
    [7] S. Babaie-Kafaki, N. Mirhoseini, Z. Aminifard, A descent extension of a modified Polak-Ribière-Polyak method with application in image restoration problem, Optim. Lett., 2021, 1–17. https://doi.org/10.1007/s11590-022-01878-6
    [8] S. Babaie-Kafaki, R. Ghanbari, Two modified three-term conjugate gradient methods with sufficient descent property, Optim. Lett., 8 (2014), 2285–2297. https://doi.org/10.1007/s11590-014-0736-8 doi: 10.1007/s11590-014-0736-8
    [9] M. Bartholomew-Biggs, Nonlinear conjugate gradient methods for unconstrained optimization, Springer Science & Business Media, 2006.
    [10] J. Cao, J. Wu, A conjugate gradient algorithm and its applications in image restoration, Appl. Numer. Math., 152 (2020), 243–252. https://doi.org/10.1016/j.apnum.2019.12.002 doi: 10.1016/j.apnum.2019.12.002
    [11] Y. H. Dai, Y. Yuan, A nonlinear conjugate gradient method with a strong global convergence property, SIAM J. Optim., 10 (1999), 177–182. https://doi.org/10.1137/S1052623497318992 doi: 10.1137/S1052623497318992
    [12] J. Deepho, A. B. Abubakar, M. Malik, I. K. Argyros, Solving unconstrained optimization problems via hybrid CD-DY conjugate gradient methods with applications, J. Comput. Appl. Math., 405 (2022). https://doi.org/10.1016/j.cam.2021.113823
    [13] S. Devila, M. Malik, W. Giyarti, A new hybrid PRP-MMSIS conjugate gradient method and its application in portfolio selection, JRAM, 5 (2021), 47–59. https://doi.org/10.26740/jram.v5n1 doi: 10.26740/jram.v5n1
    [14] E. D. Dolan, J. J. Moré, Benchmarking optimization software with performance profiles, Math. Program., 91 (2002), 201–213. https://doi.org/10.1007/s101070100263 doi: 10.1007/s101070100263
    [15] F. J. Fabozzi, H. M. Markowitz, F. Gupta, Handbook of finance, Hoboken, NJ, USA: Wiley, 2008. https://doi.org/10.1002/9780470404324.hof002057
    [16] R. Fletcher, Practical methods of optimization, Hoboken, NJ, USA: Wiley, 2013.
    [17] R. Fletcher, C. M. Reeves, Function minimization by conjugate gradients, Comput. J., 7 (1964), 149–154. https://doi.org/10.1093/comjnl/7.2.149 doi: 10.1093/comjnl/7.2.149
    [18] G. T. Friedlob, F. J. Jr. Plewa, Understanding return on investment, John Wiley & Sons, 1996.
    [19] M. R. Hestenes, E. Stiefel, Methods of conjugate gradients for solving linear systems, J. Res. Nat. Bur. Stand., 49 (1952), 409–438.
    [20] R. V. Hogg, A. T. Craig, Introduction to mathematical statistics, Pearson, 2018.
    [21] R. V. Hogg, S. A. Klugman, Loss distributions, John Wiley & Sons, 2009.
    [22] A. H. Ibrahim, M. Kimiaei, P. Kumam, A new black box method for monotone nonlinear equations, Optimization, 2021, 1–19. https://doi.org/10.1080/02331934.2021.2002326
    [23] A. H. Ibrahim, P. Kumam, W. Kumam, A family of derivative-free conjugate gradient methods for constrained nonlinear equations and image restoration, IEEE Access, 8 (2020), 162714–162729. https://doi.org/10.1109/ACCESS.2020.3020969 doi: 10.1109/ACCESS.2020.3020969
    [24] A. H. Ibrahim, P. Kumam, A. Kamandi, A. B. Abubakar, An efficient hybrid conjugate gradient method for unconstrained optimization, Optim. Method. Softw., 8 (2022), 1–14. https://doi.org/10.1080/10556788.2021.1998490 doi: 10.1080/10556788.2021.1998490
    [25] J. Jian, W. Chen, X. Jiang, P. Liu, A three-term conjugate gradient method with accelerated subspace quadratic optimization, J. Appl. Math. Comput., 68 (2021), 2407–2433. https://doi.org/10.1007/s12190-021-01622-w doi: 10.1007/s12190-021-01622-w
    [26] J. Jian, L. Yang, X. Jiang, P. Liu, M. Liu, A spectral conjugate gradient method with descent property, Mathematics, 8 (2020). https://doi.org/10.3390/math8020280
    [27] X. Jiang, W. Liao, J. Yin, J. Jian, A new family of hybrid three-term conjugate gradient methods with applications in image restoration, Numer. Algorithms, 91 (2022), 161–191. https://doi.org/10.1007/s11075-022-01258-2 doi: 10.1007/s11075-022-01258-2
    [28] A. Kamandi, K. Amini, A globally convergent gradient-like method based on the armijo line search, J. Math. Model., 9 (2021), 665–676. https://doi.org/10.22124/JMM.2021.18854.1612 doi: 10.22124/JMM.2021.18854.1612
    [29] Y. Liu, C. Storey, Efficient generalized conjugate gradient algorithms, part 1: Theory, J. Optimiz. Theory. App., 69 (1991), 129–137. https://doi.org/10.1007/BF00940464 doi: 10.1007/BF00940464
    [30] M. Malik, S. S. Abas, M. Mamat, Sukono, I. S. Mohammed, A new hybrid conjugate gradient method with global convergence properties, Int. J. Adv. Sci. Techn., 29 (2020), 199–210.
    [31] M. Malik, A. B. Abubakar, S. M. Ibrahim, M. Mamat, S. S. Abas, S. Firman, A new three-term conjugate gradient method for unconstrained optimization with applications in portfolio selection and robotic motion control, IAENG Int. J. Appl. Math., 51 (2021), 471–486.
    [32] M. Malik, I. M. Sulaiman, M. Mamat, S. S. Abas, Sukono, A new class nonlinear conjugate gradient method for unconstrained optimization models and its application in portfolio selection, Nonlinear Funct. An. Appl., 26 (2021), 811–837.
    [33] M. Malik, M. Mamat, S. S. Abas, I. M. Sulaiman, Sukono, Performance analysis of new spectral and hybrid conjugate gradient methods for solving unconstrained optimization problems, IAENG Int. J. Comput. Sci., 48 (2021), 66–79.
    [34] H. M. Markowitz, G. P. Todd, Mean-variance analysis in portfolio choice and capital markets, John Wiley & Sons, 2000.
    [35] H. B. Mayo, Investments: An introduction, Cengage Learning, 2020.
    [36] S. K. Mishra, B. Ram, Introduction to unconstrained optimization with R, Springer Nature, 2019.
    [37] J. J. Moré, B. S. Garbow, K. E. Hillstrom, Testing unconstrained optimization software, ACM T. Math. Software, 7 (1981), 17–41.
    [38] Y. Narushima, H. Yabe, J. A. Ford, A three-term conjugate gradient method with sufficient descent property for unconstrained optimization, SIAM J. Optimiz., 21 (2011), 212–230. https://doi.org/10.1137/080743573 doi: 10.1137/080743573
    [39] J. Nocedal, Updating quasi-Newton matrices with limited storage, Math. Comp., 35 (1980), 773–782. https://doi.org/10.1090/S0025-5718-1980-0572855-7 doi: 10.1090/S0025-5718-1980-0572855-7
    [40] E. Polak, G. Ribiere, Note sur la convergence de méthodes de directions conjuguées, Math. Model. Numer. Anal., 3 (1969), 35–43.
    [41] B. T. Polyak, The conjugate gradient method in extremal problems, USSR Comput. Math. Phys., 9 (1969), 94–112. https://doi.org/10.1016/0041-5553(69)90035-4 doi: 10.1016/0041-5553(69)90035-4
    [42] M. J. D. Powell, Nonconvex minimization calculations and the conjugate gradient method, Lect. Notes. Math., 1984,122–141. https://doi.org/10.1007/BFb0099521
    [43] M. Rivaie, M. Mamat, L. W. June, I. Mohd, A new class of nonlinear conjugate gradient coeficients with global convergence properties, Appl. Math. Comput., 218 (2012), 11323–11332. https://doi.org/10.1016/j.amc.2012.05.030 doi: 10.1016/j.amc.2012.05.030
    [44] Z. Salleh, G. Alhamzi, I. Masmali, A. Alhawarat, A modified liu and storey conjugate gradient method for large scale unconstrained optimization problems, Algorithms, 14 (2021), 227. https://doi.org/10.3390/a14080227 doi: 10.3390/a14080227
    [45] D. F. Shanno, Conjugate gradient methods with inexact searches, Math. Oper. Res., 3 (1978), 244–256. https://doi.org/10.1287/moor.3.3.244 doi: 10.1287/moor.3.3.244
    [46] R. Steven, Introduction to the mathematics of finance: From risk management to options pricing, Springer Science & Business Media, 2004.
    [47] I. M. Sulaiman, M. Malik, A. M. Awwal, P. Kumam, M. Mamat, S. Al-Ahmad, On three-term conjugate gradient method for optimization problems with applications on COVID-19 model and robotic motion control, Adv. Cont. Discr. Mod., 2022 (2022), 1–22. https://doi.org/10.1186/s13662-021-03638-9 doi: 10.1186/s13662-021-03638-9
    [48] I. M. Sulaiman, M. Mamat, A new conjugate gradient method with descent properties and its application to regression analysis, J. Numer. Anal. Ind. Appl. Math., 14 (2020), 25–39.
    [49] Q. Tian, X. Wang, L. Pang, M. Zhang, F. Meng, A new hybrid three-term conjugate gradient algorithm for large-scale unconstrained problems, Mathematics, 9 (2021), 1353. https://doi.org/10.3390/math9121353 doi: 10.3390/math9121353
    [50] G. Yu, J. Huang, Y. Zhou, A descent spectral conjugate gradient method for impulse noise removal, Appl. Math. Lett., 23 (2010), 555–560. https://doi.org/10.1016/j.aml.2010.01.010 doi: 10.1016/j.aml.2010.01.010
    [51] J. Yin, J. Jian, X. Jiang, M. Liu, L. Wang, A hybrid three-term conjugate gradient projection method for constrained nonlinear monotone equations with applications, Numer. Algor., 88 (2021), 389–418. https://doi.org/10.1007/s11075-020-01043-z doi: 10.1007/s11075-020-01043-z
    [52] L. Zhang, W. Zhou, D. Li, A descent modified Polak-Ribière-Polyak conjugate gradient method and its global convergence, IMA J. Numer. Anal., 26 (2006), 629–640. https://doi.org/10.1093/imanum/drl016 doi: 10.1093/imanum/drl016
    [53] X. Zheng, J. Shi, A modified sufficient descent Polak-Ribière-Polyak type conjugate gradient method for unconstrained optimization problems, Algorithms, 11 (2018), 133. https://doi.org/10.3390/a11090133 doi: 10.3390/a11090133
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
