Research article

Nash equilibrium realization of population games based on social learning processes


  • Received: 11 May 2023 | Revised: 16 August 2023 | Accepted: 21 August 2023 | Published: 04 September 2023
  • In the two-population game model, we assume that the players have a certain capacity for imitative learning. To simulate the players' learning process, we propose a new swarm intelligence algorithm that builds on the particle swarm optimization algorithm, in which each player can be regarded as a particle. We run simulations for three typical games: the prisoner's dilemma game (with a single pure-strategy Nash equilibrium), the coin-flip game (with a single fully mixed Nash equilibrium), and the coordination game (with two pure-strategy Nash equilibria and one fully mixed Nash equilibrium). The results show that when a game has a pure-strategy Nash equilibrium, the algorithm converges to that equilibrium. When a game has no pure-strategy Nash equilibrium, the algorithm instead converges periodically to the unique mixed-strategy Nash equilibrium, and the amplitude of this periodic oscillation is inversely proportional to the introspection rate. Experiments further show that our algorithm outperforms the Meta Equilibrium Q-learning algorithm in realizing mixed-strategy Nash equilibria.
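    The abstract describes the algorithm only at a high level. As a concrete illustration, the following minimal Python sketch simulates an imitation-based social-learning dynamic on a two-population matrix game. Everything in it is an illustrative assumption rather than the paper's actual method: the payoff table assumes the "coin-flip game" has a matching-pennies structure, the random-matching and imitate-a-better-paid-peer rules are generic stand-ins for the paper's PSO-based update, and the parameter name introspection_rate is ours.

```python
import random

# Assumed matching-pennies payoff bimatrix: (row payoff, column payoff),
# indexed by the pure strategies (0 or 1) of the two matched players.
PAYOFFS = {
    (0, 0): (1, -1), (0, 1): (-1, 1),
    (1, 0): (-1, 1), (1, 1): (1, -1),
}

def simulate(n=100, rounds=500, introspection_rate=0.1, seed=0):
    """Each player is a 'particle' holding one pure strategy (0 or 1).
    Each round, players are randomly matched across the two populations;
    then, with probability introspection_rate, a player samples a peer
    from its own population and imitates it if that peer earned more."""
    rng = random.Random(seed)
    pop1 = [rng.randint(0, 1) for _ in range(n)]
    pop2 = [rng.randint(0, 1) for _ in range(n)]
    for t in range(rounds):
        order = list(range(n))
        rng.shuffle(order)                      # random cross-population matching
        pay1, pay2 = [0.0] * n, [0.0] * n
        for i, j in enumerate(order):
            u1, u2 = PAYOFFS[(pop1[i], pop2[j])]
            pay1[i], pay2[j] = u1, u2
        for pop, pay in ((pop1, pay1), (pop2, pay2)):
            old = pop[:]                        # imitate last round's strategies
            for i in range(n):
                if rng.random() < introspection_rate:
                    k = rng.randrange(n)        # peer from the same population
                    if pay[k] > pay[i]:
                        pop[i] = old[k]
        if t % 100 == 0:
            # Share of each population currently playing strategy 1.
            print(t, sum(pop1) / n, sum(pop2) / n)

simulate()
```

    Under these assumptions the population shares do not settle on a pure profile; they oscillate around the fully mixed equilibrium (1/2, 1/2), which matches the qualitative behavior the abstract reports for games without a pure-strategy Nash equilibrium.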

    Citation: Zhiyan Xing, Yanlong Yang, Zuopeng Hu. Nash equilibrium realization of population games based on social learning processes[J]. Mathematical Biosciences and Engineering, 2023, 20(9): 17116-17137. doi: 10.3934/mbe.2023763

  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
