Research article

A hybrid FR-DY conjugate gradient algorithm for unconstrained optimization with application in portfolio selection

  • Received: 29 January 2021 Accepted: 02 April 2021 Published: 15 April 2021
  • MSC : 65K10, 90C26, 90C52

  • In this paper, we present a new hybrid conjugate gradient (CG) approach for solving unconstrained optimization problems. The search direction is a hybrid of the Fletcher-Reeves (FR) and Dai-Yuan (DY) CG parameters and is close to the direction of the memoryless Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method. Independently of the line search, the search direction of the new approach satisfies the descent condition and possesses the trust-region property. We establish the global convergence of the approach for general functions under Wolfe-type and Armijo-type line searches. Numerical results on problems from the CUTEr library show that the proposed approach is more efficient than some existing approaches. Furthermore, we give a practical application of the new approach to optimizing risk in portfolio selection. (Illustrative sketches of the classical FR and DY ingredients follow the citation below.)

    Citation: Auwal Bala Abubakar, Poom Kumam, Maulana Malik, Parin Chaipunya, Abdulkarim Hassan Ibrahim. A hybrid FR-DY conjugate gradient algorithm for unconstrained optimization with application in portfolio selection[J]. AIMS Mathematics, 2021, 6(6): 6506-6527. doi: 10.3934/math.2021383
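
For context, the standard nonlinear CG iteration and the two classical parameters named in the abstract are recalled below; the specific hybrid $\beta_k$ proposed in the paper appears only in the full text and is not reproduced here. Writing $g_k = \nabla f(x_k)$ and $y_k = g_{k+1} - g_k$:

```latex
x_{k+1} = x_k + \alpha_k d_k, \qquad
d_{k+1} = -g_{k+1} + \beta_k d_k, \qquad d_0 = -g_0,

\beta_k^{FR} = \frac{\|g_{k+1}\|^2}{\|g_k\|^2}, \qquad
\beta_k^{DY} = \frac{\|g_{k+1}\|^2}{d_k^{\top} y_k}.
```

The descent condition and trust-region property mentioned in the abstract are conventionally stated as $g_k^{\top} d_k \le -c\,\|g_k\|^2$ and $\|d_k\| \le C\,\|g_k\|$ for some constants $c, C > 0$, holding independently of the line search.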
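To make the overall scheme concrete, here is a minimal sketch of a generic hybrid FR/DY CG loop with a backtracking Armijo-type line search. The fixed convex combination `theta * beta_fr + (1 - theta) * beta_dy` is a placeholder assumption, not the paper's hybrid parameter, and the steepest-descent restart is a common safeguard rather than part of the proposed method.

```python
import numpy as np

def hybrid_fr_dy_cg(f, grad, x0, tol=1e-6, max_iter=1000, theta=0.5):
    """Illustrative hybrid FR/DY conjugate gradient loop (not the paper's method)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        if g @ d >= 0:  # safeguard: restart with steepest descent
            d = -g
        # Backtracking Armijo-type line search.
        alpha, rho, c1, fx = 1.0, 0.5, 1e-4, f(x)
        while f(x + alpha * d) > fx + c1 * alpha * (g @ d):
            alpha *= rho
        x_new = x + alpha * d
        g_new = grad(x_new)
        y = g_new - g
        beta_fr = (g_new @ g_new) / (g @ g)
        dty = d @ y
        beta_dy = (g_new @ g_new) / dty if abs(dty) > 1e-12 else 0.0
        # Placeholder hybrid: fixed convex combination of FR and DY.
        beta = theta * beta_fr + (1.0 - theta) * beta_dy
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Example: minimize the Rosenbrock function from a standard start point.
f = lambda x: (1 - x[0])**2 + 100.0 * (x[1] - x[0]**2)**2
fgrad = lambda x: np.array([
    -2.0 * (1 - x[0]) - 400.0 * x[0] * (x[1] - x[0]**2),
    200.0 * (x[1] - x[0]**2),
])
print(hybrid_fr_dy_cg(f, fgrad, np.array([-1.2, 1.0])))
```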
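The paper's portfolio experiment uses its own data and risk model, which are not on this page. As a toy illustration of the application, a minimum-variance portfolio $\min_w w^{\top}\Sigma w$ with the budget constraint $\sum_i w_i = 1$ enforced by a quadratic penalty can be passed to the sketch above; `Sigma` and the penalty weight `mu_pen` are made-up values.

```python
# Toy minimum-variance portfolio: minimize w' Sigma w subject to sum(w) = 1,
# with the constraint enforced by a quadratic penalty (mu_pen is arbitrary).
Sigma = np.array([[0.10, 0.02, 0.04],
                  [0.02, 0.08, 0.01],
                  [0.04, 0.01, 0.12]])  # hypothetical 3-asset covariance
mu_pen = 100.0

risk = lambda w: w @ Sigma @ w + mu_pen * (w.sum() - 1.0)**2
risk_grad = lambda w: (2.0 * Sigma @ w
                       + 2.0 * mu_pen * (w.sum() - 1.0) * np.ones_like(w))

w = hybrid_fr_dy_cg(risk, risk_grad, np.full(3, 1.0 / 3.0))
print(w, w.sum())  # weights should sum to approximately 1
```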

References


    [1] M. Al-Baali, Descent property and global convergence of the Fletcher-Reeves method with inexact line search, IMA J. Numer. Anal., 5 (1985), 121–124. doi: 10.1093/imanum/5.1.121
    [2] N. Andrei, A simple three-term conjugate gradient algorithm for unconstrained optimization, J. Comput. Appl. Math., 241 (2013), 19–29. doi: 10.1016/j.cam.2012.10.002
    [3] N. Andrei, An adaptive conjugate gradient algorithm for large-scale unconstrained optimization, J. Comput. Appl. Math., 292 (2016), 83–91. doi: 10.1016/j.cam.2015.07.003
    [4] N. Andrei, Nonlinear Conjugate Gradient Methods for Unconstrained Optimization, Springer, 2020.
    [5] Y. H. Dai, J. Y. Han, G. H. Liu, D. F. Sun, H. X. Yin, Y. X. Yuan, Convergence properties of nonlinear conjugate gradient methods, SIAM J. Optim., 10 (2000), 345–358. doi: 10.1137/S1052623494268443
    [6] Y. H. Dai, Y. Yuan, A nonlinear conjugate gradient method with a strong global convergence property, SIAM J. Optim., 10 (1999), 177–182. doi: 10.1137/S1052623497318992
    [7] E. D. Dolan, J. J. Moré, Benchmarking optimization software with performance profiles, Math. Program., 91 (2002), 201–213. doi: 10.1007/s101070100263
    [8] R. Fletcher, Practical methods of optimization, John Wiley & Sons, 2013.
    [9] R. Fletcher, C. M. Reeves, Function minimization by conjugate gradients, Comput. J., 7 (1964), 149–154. doi: 10.1093/comjnl/7.2.149
    [10] J. C. Gilbert, J. Nocedal, Global convergence properties of conjugate gradient methods for optimization, SIAM J. Optim., 2 (1992), 21–42. doi: 10.1137/0802003
    [11] L. Grippo, S. Lucidi, A globally convergent version of the Polak-Ribière conjugate gradient method, Math. Program., 78 (1997), 375–391.
    [12] W. W. Hager, H. C. Zhang, A new conjugate gradient method with guaranteed descent and an efficient line search, SIAM J. Optim., 16 (2005), 170–192. doi: 10.1137/030601880
    [13] W. W. Hager, H. C. Zhang, Algorithm 851: CG-DESCENT, a conjugate gradient method with guaranteed descent, ACM T. Math. Software, 32 (2006), 113–137. doi: 10.1145/1132973.1132979
    [14] W. Hager, H. C. Zhang, A survey of nonlinear conjugate gradient methods, Pac. J. Optim., 2 (2006), 35–58.
    [15] M. R. Hestenes, E. Stiefel, Methods of conjugate gradients for solving linear systems, J. Res. Natl. Bur. Stand., 49 (1952), 409–436. doi: 10.6028/jres.049.044
    [16] J. B. Jian, L. Han, X. Z. Jiang, A hybrid conjugate gradient method with descent property for unconstrained optimization, Appl. Math. Model., 39 (2015), 1281–1290. doi: 10.1016/j.apm.2014.08.008
    [17] J. Nocedal, Updating quasi-Newton matrices with limited storage, Math. Comput., 35 (1980), 773–782. doi: 10.1090/S0025-5718-1980-0572855-7
    [18] M. Li, A modified Hestenes-Stiefel conjugate gradient method close to the memoryless BFGS quasi-Newton method, Optim. Methods Softw., 33 (2018), 336–353. doi: 10.1080/10556788.2017.1325885
    [19] M. Li, A three-term Polak-Ribière-Polyak conjugate gradient method close to the memoryless BFGS quasi-Newton method, J. Ind. Manage. Optim., 16 (2020), 245–260.
    [20] X. L. Li, J. J. Shi, X. L. Dong, J. L. Yu, A new conjugate gradient method based on quasi-Newton equation for unconstrained optimization, J. Comput. Appl. Math., 350 (2019), 372–379. doi: 10.1016/j.cam.2018.10.035
    [21] J. K. Liu, S. J. Li, New hybrid conjugate gradient method for unconstrained optimization, Appl. Math. Comput., 245 (2014), 36–43. doi: 10.1016/j.amc.2014.07.096
    [22] Y. Liu, C. Storey, Efficient generalized conjugate gradient algorithms, part 1: Theory, J. Optim. Theory Appl., 69 (1991), 129–137. doi: 10.1007/BF00940464
    [23] J. T. Mo, N. Z. Gu, Z. X. Wei, Hybrid conjugate gradient methods for unconstrained optimization, Optim. Methods Softw., 22 (2007), 297–307. doi: 10.1080/10556780500518653
    [24] R. Pike, B. Neale, P. Linsley, Corporate finance and investment-decisions and strategies, Pearson Education Limited, 2006.
    [25] E. Polak, G. Ribiere, Note sur la convergence de méthodes de directions conjuguées, ESAIM-Math. Model. Num., 3 (1969), 35–43.
    [26] B. T. Polyak, The conjugate gradient method in extremal problems, USSR Comput. Math. Math. Phys., 9 (1969), 94–112. doi: 10.1016/0041-5553(69)90035-4
    [27] D. F. Shanno, Conjugate gradient methods with inexact searches, Math. Oper. Res., 3 (1978), 244–256. doi: 10.1287/moor.3.3.244
    [28] R. Steven, Introduction to the mathematics of finance: from risk management to options pricing, Springer, 2004.
    [29] W. Sun, Y. X. Yuan, Optimization theory and methods: Nonlinear programming, Springer, 1992.
    [30] D. Touati-Ahmed, C. Storey, Efficient hybrid conjugate gradient techniques, J. Optim. Theory Appl., 64 (1990), 379–397. doi: 10.1007/BF00939455
    [31] L. Zhang, W. J. Zhou, D. H. Li, A descent modified Polak–Ribière–Polyak conjugate gradient method and its global convergence, IMA J. Numer. Anal., 26 (2006), 629–640. doi: 10.1093/imanum/drl016
  • © 2021 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
