    Game theory is an important branch of operations research, and population games are a classical game model within it. Because population games can reveal the essential features of collaboration and competition and provide profound insights into interactions among populations, they have been widely used in the social sciences, biology, economics, and other fields, and they remain a hot topic of academic research. In game theory, the Nash equilibrium [1,2] is a central concept. However, the traditional Nash equilibrium requires that players be perfectly rational and have complete information. Fudenberg and Levine [3] propose an alternative interpretation of equilibrium: "The equilibrium is the long-term outcome of the process by which imperfectly rational players seek to optimize over time". Influenced by this interpretation, we consider how to find the path to Nash equilibrium under imperfect rationality and incomplete information. In reality, players aim to maximize their benefits, and equilibrium emerges after repeated games. The Nash equilibrium is an integral component of this equilibrium, and because it is difficult to establish, investigating the process of Nash equilibrium formation holds intrinsic value. The players involved are not perfectly smart and their abilities are limited; to depict their strategic interactions, we develop an algorithm that simulates their gaming processes. Among swarm intelligence algorithms, the particle swarm optimization (PSO) algorithm [4,5] is based on the feeding behavior of a bird flock. Both the PSO algorithm and the realization of Nash equilibrium are based on the concept of optimization, albeit with distinct approaches: the PSO algorithm emphasizes collective optimization, whereas Nash equilibrium realization centers on individual optimization. We can therefore glean insights from the PSO algorithm to develop an algorithm suitable for achieving Nash equilibrium. The algorithmic realization of Nash equilibrium is rooted in the decisions made by imperfectly rational players, with algorithms developed to simulate the evolution toward equilibrium. However, limited research exists on achieving Nash equilibrium in population games using swarm intelligence algorithms.

    In the field of game theory, Nash equilibrium theory holds significant importance, and learning rules provide a perspective for studying Nash equilibrium from the players' viewpoint. Currently, three primary types of learning models exist. The first type is the virtual action (fictitious play) learning theory [6,7,8,9,10,11,12,13], first proposed by Fudenberg and Levine [3]. It assumes that the opponent's strategy remains uncertain in each game, so the opponent's moves must be anticipated: the theory weighs the opponent's prior strategy choices and uses the weighted outcome to predict the opponent's subsequent strategy. The second type is the social learning model [14,15,16,17,18] for population games, also proposed by Fudenberg and Levine [19]. Within this model, players can glean information about fellow players who achieve superior benefits within the population, and this collective learning process eventually drives the system to a stable state. The third type is the reinforcement learning model [20,21,22,23,24]; Littman [25] proposed a two-player model for zero-sum games, which assumes that players retain the memory of strategies and their associated benefits from previous games and, through continuous reflective learning, strive to achieve Nash equilibrium. In addition, Borgers and Sarin [26] proposed the stimulus-response learning model based on the reinforcement learning model. In this model, players can only recall their own past strategy selections and the associated benefits, so they use their previous actions to guide future strategic decisions; well-performing actions are positively reinforced, while poorly performing actions are negatively reinforced. Jordan [27] proposed the Bayesian learning model, and Camerer and Ho [28] proposed the experience-weighted attraction (EWA) model; these two models are also important learning models built on the virtual action, social, and reinforcement learning models.

    Previous research on realizing Nash equilibrium has been mainly based on reinforcement learning theory. In 2000, Singh et al. [29] first proposed the infinitesimal gradient algorithm (IGA), which enables each player to adjust its strategy based on the gradient of its expected benefit. This algorithm converges to a particular Nash equilibrium. After that, Zinkevich [30] proposed the generalized infinitesimal gradient algorithm (GIGA), which extends the applicability of the IGA algorithm from just two strategies to encompass multi-strategy scenarios. Wang et al. [31] expanded their investigation to meta-games, achieving a path towards meta-equilibrium using Q-learning. However, the existing realizations of Nash equilibrium mainly focus on inter-player games, and further exploration and refinement are necessary for the realization of Nash equilibrium in population games.

    This paper develops the population game particle swarm optimization (PGPSO) algorithm, which takes social learning and population imitation as its theoretical sources. We theoretically prove the convergence of the PGPSO algorithm and show that a unique mixed-strategy Nash equilibrium is the center of a stable limit cycle of the algorithm's dynamics. Using the PGPSO algorithm, we simulate the evolution of three two-population games and search for the realization paths of their Nash equilibria. The experimental outcomes validate the efficacy of the PGPSO algorithm in uncovering Nash equilibria. Additionally, the effects of the introspection rate and the initial state on the realization of Nash equilibrium by the PGPSO algorithm are further explored.

    This section will mainly introduce the fundamental concepts of population games, social learning theory, and population imitation theory.

    The two-population game is denoted by {Γ,X,F} [32].

    1) Γ = {1,2} denotes the two populations. For each population p ∈ Γ, S^p = {1,2} denotes the set of pure strategies available to population p.

    2) For population 1, x1 denotes the proportion of players who choose strategy 1; for population 2, y1 denotes the proportion of players who choose strategy 1. Further, x = (x1, x2) denotes the pure strategy distribution state of population 1, and y = (y1, y2) denotes the pure strategy distribution state of population 2. The strategy choice of the i-th player in population 1 is denoted as xi, and likewise, the strategy choice of the j-th player in population 2 is denoted as yj. Given that players can only select pure strategies, xi, yj ∈ {0,1}. If xi (yj) = 1, it means that the i-th (j-th) player has chosen strategy 1; if xi (yj) = 0, it means that the i-th (j-th) player has chosen strategy 2. The combined set X = (x, y) represents the social state of the two populations Γ.

    3) For any given population p ∈ Γ, F_s^p: X → R denotes the expected benefit associated with the pure strategy s ∈ S^p. Collecting these over the pure strategy set S^p, F^p: X → R^2 represents the expected benefit of population p, and the overall expected benefit function of the entire society Γ is denoted as F = (F^1; F^2): X → R^4.

    The following definition is the Nash equilibrium definition of the two populations game {Γ,X,F} [32].

    Definition 1. Let {Γ,X,F} be a two-population game. If the social state z̄ = (x̄, ȳ) ∈ X satisfies, for every p ∈ Γ and every s ∈ S^p with x̄_s > 0 (when p = 1) or ȳ_s > 0 (when p = 2), F_s^p(z̄) = max_{r ∈ S^p} F_r^p(z̄), then we define z̄ = (x̄, ȳ) as the Nash equilibrium of the population game {Γ,X,F}, and denote the set containing all Nash equilibria as E(F).

    According to the above definition of Nash equilibrium for the two-population game, the Nash equilibrium of the prisoner's dilemma game is (x̄, ȳ) = ((1,0),(1,0)). The Nash equilibrium of the coin-flip game is (x̄, ȳ) = ((1/2,1/2),(1/2,1/2)). The Nash equilibrium set of the coordination game is E(F) = {((1,0),(1,0)), ((0,1),(0,1)), ((1/3,2/3),(1/3,2/3))}. The benefit matrices for the three games are presented in Examples 4.1-4.3 later on.
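    For a 2x2 two-population game, the equilibria quoted above can be checked mechanically: a pure profile is a Nash equilibrium if neither population gains by deviating, and an interior mixed equilibrium follows from the indifference conditions ȳ1 = a2/(a1+a2), x̄1 = b2/(b1+b2) used later in the convergence analysis. The following Python snippet is a minimal sketch of such a check (not part of the original paper); it uses the benefit matrices of Examples 4.1-4.3, with rows indexing population 1's strategies and columns indexing population 2's.

```python
# Minimal sketch (not from the paper): enumerate the Nash equilibria of a 2x2
# two-population game with benefit matrices A (population 1) and B (population 2).
def nash_equilibria(A, B):
    eqs = []
    for i in (0, 1):                      # pure profiles: population 1 plays i, population 2 plays j
        for j in (0, 1):
            if A[i][j] >= A[1 - i][j] and B[i][j] >= B[i][1 - j]:
                eqs.append(((1 - i, i), (1 - j, j)))   # as distributions over (strategy 1, strategy 2)
    dA = A[0][0] - A[0][1] - A[1][0] + A[1][1]
    dB = B[0][0] - B[0][1] - B[1][0] + B[1][1]
    if dA != 0 and dB != 0:
        y = (A[1][1] - A[0][1]) / dA      # population 1 indifferent, i.e. y = a2/(a1+a2)
        x = (B[1][1] - B[1][0]) / dB      # population 2 indifferent, i.e. x = b2/(b1+b2)
        if 0 < x < 1 and 0 < y < 1:
            eqs.append(((x, 1 - x), (y, 1 - y)))
    return eqs

print(nash_equilibria([[-5, 0], [-8, -1]], [[-5, -8], [0, -1]]))   # prisoner's dilemma: ((1,0),(1,0))
print(nash_equilibria([[-1, 1], [1, -1]], [[1, -1], [-1, 1]]))     # coin-flip: ((1/2,1/2),(1/2,1/2))
print(nash_equilibria([[2, 0], [0, 1]], [[2, 0], [0, 1]]))         # coordination game: three equilibria
```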

    Fudenberg and Levine [3] proposed the social learning theory to explain the formation of the Nash equilibrium of the population game. In a single iteration, the initial population is called the "parent", denoted as q(t). The population that completes strategy adjustment is called the "offspring", denoted as q(t+1). There is an intermediate generation between the parent and the offspring, called the pending generation, denoted as q′(t). It is crucial that the overall strategy distribution of the pending generation is the same as that of the parent, i.e., x′_s(t) = x_s(t), with pending players corresponding one-to-one with their parent players.

    During each iteration, for one of the populations p, a proportion α of pending players chooses to adjust their strategies, while the remaining pending players will keep their original strategy. Björnerstedt and Weibull [15] interpret the phenomenon as an introspective phenomenon, where certain players in the population actively imitate and learn from others. Players who adjust their strategies are called "introspective players", whereas those who keep their original strategies are called "non-introspective players". In the context of a game, following the principle of random matching, all players choose the pure strategy. This process can be illustrated using the game model presented in [3] as an example.

             L         R
    U      9, 0      0, 0
    D      2, 0      2, 0

    The model is based on a game framework featuring a virtual population 2, as proposed by Fudenberg and Levine [3]. The social learning theory for strategy updating is as follows: x1(t) denotes the proportion of parents in population 1 who choose strategy U, x2(t) denotes the proportion of parents in population 1 who choose strategy D, y1(t) denotes the proportion of parents in population 2 who choose strategy L, and y2(t) denotes the proportion of parents who choose strategy R. For population 1, the proportion of players who keep strategy U without introspection is (1 − α)x1(t). According to the social learning theory, an introspective player's strategy remains unchanged if it is consistent with that of the parent player it observes, and for strategy U this accounts for a proportion of αx1(t)^2.

    When a player's strategy does not match their parent's strategy, in that case, it is divided into two small populations according to the encountered opponents, and the player imitates the strategy of the small population with the highest expected benefit. For example, when a player encounters an opponent who chooses strategy L, he will choose strategy U, and the proportion is 2αy1(t)x1(t)x2(t). Similarly, if he encounters an opponent who chooses strategy R, he will choose strategy D, and the proportion is 2αy2(t)x1(t)x2(t).

    With the above variation of strategies, the proportional update formula for the offspring selection strategy U of population 1 can be obtained as:

    x1(t+1) = (1 − α)x1(t) + α(x1(t)^2 + 2y1(t)x1(t)x2(t)). (2.1)
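    As a quick illustration (not from the paper), the update (2.1) can be iterated directly; the values of α, y1, and the starting point below are arbitrary assumptions, and population 2 is held fixed. Under this update, switches toward U outnumber switches toward D exactly when y1 > 1/2, so the share of U grows toward 1 in that case and decays toward 0 otherwise.

```python
# Minimal sketch (not from the paper): iterate the social-learning update (2.1) for
# population 1, holding the virtual population 2 fixed. Parameter values are assumed.
alpha = 0.1    # share of pending players that adjusts per iteration
y1 = 0.7       # fixed proportion of population 2 choosing L
x1 = 0.3       # initial proportion of population 1 choosing U

for t in range(300):
    x2 = 1.0 - x1
    # keep U without adjustment, keep U after observing a U parent,
    # or switch to U after a parent mismatch and an L opponent (Eq (2.1))
    x1 = (1 - alpha) * x1 + alpha * (x1**2 + 2 * y1 * x1 * x2)

print(round(x1, 4))   # close to 1 here; rerun with y1 < 0.5 and the share of U decays to 0 instead
```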

    For the population game model, {Γ,X,F}, Schlag [14] presents an alternative perspective, asserting that each player can observe the strategies and expected benefits of others within their population. This perspective eliminates the notion of parent and offspring from social learning theory. Schlag argues that the emergence of Nash equilibrium in population games hinges solely on the phenomenon of imitation. The rules of imitation, as defined by Schlag, are as follows:

    1) Following imitative behavior, i.e., change behavior exclusively by imitating others.

    2) Never imitate someone who performs worse than you do.

    Rule 2) means that players only imitate those who exhibit superior expected benefits: a player evaluates its own strategy against the expected benefits of other players and imitates a player with a superior expected benefit in order to improve its own. The essence of the imitation rule is that players adjust their strategies based on the strategies and benefits of other players.

    In 1995, inspired by the regularity of birds' flock feeding behavior, Kennedy and Eberhart [4,5] developed a simplified algorithm model, which later evolved into the particle swarm optimization (PSO) algorithm through subsequent enhancements. The idea of the PSO algorithm originated from studying the birds' flock feeding behavior, where the birds share information collectively so that the flock can find the optimal destination. In the PSO algorithm, the feasible solution of each optimization problem can be considered as a point in the d-dimensional search space. Let the position of the i-th particle be denoted as li=(li1,li2,...,lid), and the best position it has experienced is denoted as pi=(pi1,pi2,...,pid), and also known as pbest. The index number of the best position experienced by all particles is denoted by the symbol gbest. The velocity of the i-th particle is denoted as vi=(vi1,vi2,...,vid). For each iteration, the velocity and position of the particle change according to the following:

    v_i^{k+1} = w·v_i^k + c1·r1·(pbest_i^k − l_i^k) + c2·r2·(gbest^k − l_i^k), (3.1)
    l_i^{k+1} = l_i^k + v_i^{k+1}, (3.2)

    where c1, c2 are learning factors; r1, r2 are random numbers varying within (0,1); w is the inertia weight; and k is the iteration number.
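    For concreteness, the following is a minimal sketch of the standard PSO update (3.1)-(3.2) applied to a toy quadratic objective; it is not part of the original paper, and all parameter values (w, c1, c2, swarm size) are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of the textbook PSO update (3.1)-(3.2) on a toy objective.
rng = np.random.default_rng(0)
f = lambda l: np.sum((l - np.array([3.0, -2.0]))**2, axis=-1)   # toy objective, minimum at (3, -2)

m, d, w, c1, c2 = 20, 2, 0.7, 1.5, 1.5
l = rng.uniform(-10, 10, (m, d))            # positions
v = np.zeros((m, d))                        # velocities
pbest = l.copy()                            # best position of each particle
gbest = l[np.argmin(f(l))].copy()           # best position found by the swarm

for k in range(100):
    r1, r2 = rng.random((m, d)), rng.random((m, d))
    v = w * v + c1 * r1 * (pbest - l) + c2 * r2 * (gbest - l)   # Eq (3.1)
    l = l + v                                                    # Eq (3.2)
    better = f(l) < f(pbest)
    pbest[better] = l[better]
    gbest = pbest[np.argmin(f(pbest))].copy()

print(gbest)   # close to (3, -2)
```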

    The PGPSO algorithm builds upon the framework of the PSO algorithm to realize Nash equilibrium and find the realization path, which introduces social learning and population imitation theory into the PSO algorithm. In the PGPSO algorithm, each Nash equilibrium represents a solution to the problem, and each player in the population is considered a particle of the PSO algorithm. Players with different strategies reflect the differences in position among particles, and different benefits reflect the differences in expected benefits among particles, both of which constitute particle diversity.

    According to population imitation theory, in the population updating process, particles (players) with high expected benefits are imitated. However, if all players with low benefits changed their strategies by imitation in the first iteration, the algorithm would stop immediately, and the Nash equilibrium would lose the opportunity to be learned. Therefore, this paper introduces the introspection rate into the PSO algorithm. This serves two purposes: first, only some particles (players) choose to introspect, which effectively maintains the diversity of particles (players) in each iteration; second, the introspection rate reflects the lag in strategy updating of players in actual games.

    During each iteration, the player chooses a pure strategy, i.e., the position of the particles. Given the nature of the two-population game, the PGPSO algorithm accommodates two distinct particle populations. The benefit matrices for a two-population game are defined as follows:

    A = [a11, a12; a21, a22],  B = [b11, b12; b21, b22], where the rows index population 1's pure strategies and the columns index population 2's pure strategies.

    At the k-th iteration, the expected benefit of the i-th player in population 1 is calculated as per references [33,34].

    F_{1s}(k) = (xi, 1 − xi) A (y1, 1 − y1)^T = (a11 − a12 − a21 + a22)xi·y1 + (a12 − a22)xi + (a21 − a22)y1 + a22, (3.3)

    where xi denotes the i-th player's strategy choice in population 1, xi ∈ {0,1}. If xi = 1, then the player has chosen strategy 1; if xi = 0, then the player has chosen strategy 2. y1 indicates the proportion of players in population 2 who choose strategy 1.

    The expected benefit of the j-th player in population 2 is

    F_{2s}(k) = (x1, 1 − x1) B (yj, 1 − yj)^T = (b11 − b12 − b21 + b22)x1·yj + (b12 − b22)x1 + (b21 − b22)yj + b22, (3.4)

    where yj denotes the j-th player's strategy choice in population 2, yj ∈ {0,1}. If yj = 1, then the player has chosen strategy 1; if yj = 0, then the player has chosen strategy 2. x1 denotes the proportion of players in population 1 who choose strategy 1.
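    As a sanity check (not part of the paper), the expanded forms (3.3) and (3.4) can be compared against the bilinear products they come from; the helper names below are ours, and the matrices used for the check are the hawk-dove matrices given later in the experiments.

```python
import numpy as np

# Sketch (not from the paper): expected benefit of one player per Eqs (3.3)/(3.4),
# checked against the bilinear products (xi, 1-xi) A (y1, 1-y1)^T and (x1, 1-x1) B (yj, 1-yj)^T.
def benefit_pop1(xi, y1, A):
    (a11, a12), (a21, a22) = A
    return (a11 - a12 - a21 + a22) * xi * y1 + (a12 - a22) * xi + (a21 - a22) * y1 + a22  # Eq (3.3)

def benefit_pop2(x1, yj, B):
    (b11, b12), (b21, b22) = B
    return (b11 - b12 - b21 + b22) * x1 * yj + (b12 - b22) * x1 + (b21 - b22) * yj + b22  # Eq (3.4)

A = np.array([[-1.0, 4.0], [0.0, 2.0]])   # hawk-dove benefit matrix of population 1 (used later)
B = np.array([[-1.0, 0.0], [4.0, 2.0]])   # hawk-dove benefit matrix of population 2
xi, yj, x1, y1 = 1, 0, 0.7, 0.4
assert np.isclose(benefit_pop1(xi, y1, A), np.array([xi, 1 - xi]) @ A @ np.array([y1, 1 - y1]))
assert np.isclose(benefit_pop2(x1, yj, B), np.array([x1, 1 - x1]) @ B @ np.array([yj, 1 - yj]))
```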

    The Nash equilibrium of the population game represents a state where all players maximize their benefits. The players are selfish and aim to maximize their benefits, so the benefits defined in Eqs (3.3) and (3.4) are obtained based on that consideration. Since players in a population who choose the same strategy receive indistinguishable benefits, determining the benefit corresponding to each strategy can be treated as an optimization problem, and solving for the Nash equilibrium is equivalent to finding the optimal solution of this optimization problem. The key difference is that an ordinary optimization problem involves a single decision maker, whereas a game is a multi-player optimization problem; this is the essential distinction between the two.

    In the iterative process of the particles, population imitation theory guides introspective players to adopt the strategy associated with the highest expected benefit. In the k-th iteration of a population, the particle with the highest expected benefit, determined by Eqs (3.3) and (3.4), is denoted as i_best^k; an introspective particle changes its strategy to that of i_best^k, whereas a non-introspective particle keeps its strategy unchanged. At the k-th iteration, the set of all particles' ordinal numbers is denoted I^k, and the set of introspective players' ordinal numbers is denoted I_α^k.

    Taking Eqs (3.1) and (3.2) as the basis, the PGPSO algorithm's iteration function is

    v_i^{k+1} = i_best^k − l_i^k, (3.5)
    l_i^{k+1} = l_i^k + v_i^{k+1} if i ∈ I_α^k, and l_i^{k+1} = l_i^k if i ∉ I_α^k. (3.6)

    This is the iterative formulation of the PGPSO algorithm, which draws on the formula form of the Eqs (3.1) and (3.2). However, the idea is derived from the theory of population imitation and social learning, where players with relatively lower benefits adopt strategies observed from the players with the highest benefit.

    The implementing steps of the PGPSO algorithm are as follows; a code sketch of these steps is given after the list.

    (a) The PGPSO algorithm initializes the parameter values. These include the introspection rates α, β, the lower bound of the search space popmin, the upper bound of the search space popmax, the population size m, n, and the number of iterations genmax.

    (b) Two populations are created, each containing m and n particles, respectively. The algorithm randomly generates the strategy xi for population 1 of m particles, xi satisfies xi=0 or 1. Then the algorithm randomly generates the strategy yj for population 2 of n particles, yj satisfies yj=0 or 1.

    (c) Expected benefits of all particles in both populations are computed using Eqs (3.3) and (3.4), and the particle strategy with the highest expected benefit is found, denoted as ibest1 and ibest2, respectively.

    (d) All particles update their positions according to Eqs (3.5) and (3.6).

    (e) Whether to end the iteration is determined according to the number of iterations genmax. If the iteration ends, the algorithm outputs the two populations' optimal benefits and particle position figures. Otherwise, the algorithm turns to (c).
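    The following Python sketch implements steps (a)-(e) under stated assumptions: the function and variable names are ours, the stochastic choice of which particles introspect is one reasonable reading of the introspection rate, and a constant introspection rate is used (the piecewise schedule of the later experiments can be substituted).

```python
import numpy as np

# A compact sketch of steps (a)-(e); not the authors' code. Strategies are coded as
# 1 ("strategy 1") or 0 ("strategy 2"), and an alpha/beta fraction of particles
# introspects each iteration and imitates the strategy of its population's best particle.
def pgpso(A, B, m=48, n=48, alpha=1/12, beta=1/12, genmax=50, seed=0):
    rng = np.random.default_rng(seed)
    A, B = np.asarray(A, float), np.asarray(B, float)
    x = rng.integers(0, 2, m)                # (b) random pure strategies for population 1
    y = rng.integers(0, 2, n)                #     and population 2
    history = []
    for k in range(genmax):
        x1, y1 = x.mean(), y.mean()          # shares currently choosing strategy 1
        f1 = np.stack([x, 1 - x], axis=1) @ A @ np.array([y1, 1 - y1])   # (c) Eq (3.3) per particle
        f2 = np.array([x1, 1 - x1]) @ B @ np.stack([y, 1 - y])           #     Eq (3.4) per particle
        ibest1, ibest2 = x[np.argmax(f1)], y[np.argmax(f2)]
        introspect1 = rng.random(m) < alpha  # (d) only these particles imitate the best strategy
        introspect2 = rng.random(n) < beta
        x = np.where(introspect1, ibest1, x)
        y = np.where(introspect2, ibest2, y)
        history.append((x.mean(), y.mean())) # (e) record the realization path
    return history
```

    For example, calling pgpso with the prisoner's dilemma matrices of Example 4.1 drives both population shares close to 1, consistent with the equilibrium ((1,0),(1,0)).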

    In the PGPSO algorithm, the update rule of players' strategies is as follows: a proportion α of players in population 1 chooses to adjust their strategies, and the remaining players do not; a proportion β of players in population 2 chooses to adjust their strategies, and the remaining players do not. The players who choose to adjust their strategies can observe the expected benefits of the players in their own population, i.e., the benefits derived from Eqs (3.3) and (3.4), and then apply population imitation theory to select an optimal strategy. After the above adjustment, taking Eq (2.1) as the basis, the update formula for the proportion of each population choosing strategy 1 is obtained as follows:

    x1(t+1) = (1 − α)x1(t) + α[p1(t)x1(t) + p2(t)(1 − x1(t))],
    y1(t+1) = (1 − β)y1(t) + β[q1(t)y1(t) + q2(t)(1 − y1(t))], (4.1)

    where x1(t+1) and y1(t+1) are the proportions of players choosing strategy 1 at iteration t+1 in the two populations, respectively, with x1(t+1), y1(t+1) ∈ [0,1]. If x1(t+1) and y1(t+1) are both 0, both populations choose strategy 2; according to the benefit matrices A and B, all players in the first population then receive the benefit a22, and all players in the second population receive the benefit b22.

    p1(t) = 1 if F11(t) ≥ F12(t) and 0 if F11(t) < F12(t);  p2(t) = 1 if F11(t) > F12(t) and 0 if F11(t) ≤ F12(t);
    q1(t) = 1 if F21(t) ≥ F22(t) and 0 if F21(t) < F22(t);  q2(t) = 1 if F21(t) > F22(t) and 0 if F21(t) ≤ F22(t).

    The updating Eq (4.1) is essentially the population-level, discrete-time form of the particle updates (3.5) and (3.6). Compared with Eq (2.1) in social learning theory, Eq (4.1) can be applied to any two-population, two-strategy game model. Moreover, the social learning rule relies on the concepts of a parent, a pending generation, and an offspring, with the parent influencing strategy updating, whereas Eq (4.1) removes the parent and pending generation, thereby dropping heritability as a condition for strategy adjustment.
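    A minimal sketch (not the authors' code) of one step of the aggregate update (4.1), using the indicator rules for p1, p2, q1, q2 above; the example run uses the prisoner's dilemma matrices of Example 4.1 with assumed rates α = 0.1 and β = 0.2.

```python
# Sketch of the aggregate update (4.1); rows of A and B index population 1's strategies,
# columns index population 2's strategies, as in Eqs (3.3)/(3.4).
def step(x1, y1, A, B, alpha, beta):
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    F11, F12 = a11 * y1 + a12 * (1 - y1), a21 * y1 + a22 * (1 - y1)  # strategy benefits in population 1
    F21, F22 = b11 * x1 + b21 * (1 - x1), b12 * x1 + b22 * (1 - x1)  # strategy benefits in population 2
    p1, p2 = float(F11 >= F12), float(F11 > F12)
    q1, q2 = float(F21 >= F22), float(F21 > F22)
    x1_next = (1 - alpha) * x1 + alpha * (p1 * x1 + p2 * (1 - x1))   # Eq (4.1)
    y1_next = (1 - beta) * y1 + beta * (q1 * y1 + q2 * (1 - y1))
    return x1_next, y1_next

x1, y1 = 0.3, 0.6
for _ in range(300):   # prisoner's dilemma of Example 4.1: strategy 1 dominates in both populations
    x1, y1 = step(x1, y1, [[-5, 0], [-8, -1]], [[-5, -8], [0, -1]], alpha=0.1, beta=0.2)
print(round(x1, 3), round(y1, 3))   # both shares approach 1, i.e. the Nash equilibrium ((1,0),(1,0))
```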

    For the benefit matrices of the two-population game:

    A = [a11, a12; a21, a22],  B = [b11, b12; b21, b22].

    Let a1 = a11 − a21, a2 = a22 − a12, b1 = b11 − b12, b2 = b22 − b21.

    The benefit matrices then simplify (preserving each population's comparison between its two strategies) to

    A = [a1, 0; 0, a2],  B = [b1, 0; 0, b2],

    where a1 ≠ 0, a2 ≠ 0, b1 ≠ 0, b2 ≠ 0.

    Theorem 4.1. In a non-cooperative repeated two-population, two-strategy game with benefit matrices A and B, when each population updates its strategy following Eq (4.1), the convergence outcome is either a Nash equilibrium or a stable limit cycle centered on the mixed-strategy Nash equilibrium.

    Proof. The updating Eq (4.1) is transformed to continuous time, and the imitation dynamic equation is obtained using the difference method.

    dx1/dt = −αx1 + α[p1(t)x1 + p2(t)(1 − x1)],
    dy1/dt = −βy1 + β[q1(t)y1 + q2(t)(1 − y1)]. (4.2)

    The two-population game is classified into three types according to their equilibria: one pure strategy Nash equilibrium, one mixed strategy Nash equilibrium, and three Nash equilibria (two pure strategy Nash equilibria, one mixed strategy Nash equilibrium). Next, we discuss the classification.

    For the first type, if the Nash equilibrium is (x̄, ȳ) = ((1,0),(1,0)), then a1 > 0, a2 < 0, b1 > 0, b2 < 0, which gives F11 = a1·y1, F12 = a2(1 − y1), F21 = b1·x1 and F22 = b2(1 − x1), implying that F11 > F12 and F21 > F22. From Eq (4.2) we then get dx1/dt = −αx1 + α ≥ 0 and dy1/dt = −βy1 + β ≥ 0, so the imitation dynamic Eq (4.2) converges to x1 = y1 = 1, that is, to the Nash equilibrium ((1,0),(1,0)). When (x̄, ȳ) = ((1,0),(0,1)), ((0,1),(1,0)) or ((0,1),(0,1)), the analysis is similar, and Eq (4.2) converges to the corresponding Nash equilibrium. This proves the first type.

    For the second type, if the Nash equilibrium is (x̄, ȳ) = ((b2/(b1+b2), b1/(b1+b2)), (a2/(a1+a2), a1/(a1+a2))), then a1 < 0, a2 < 0, b1 > 0, b2 > 0. For Eq (4.1), we get:

    p1(t) = 1 if y1(t) ≤ a2/(a1+a2) and 0 if y1(t) > a2/(a1+a2);  p2(t) = 1 if y1(t) < a2/(a1+a2) and 0 if y1(t) ≥ a2/(a1+a2);
    q1(t) = 1 if x1(t) ≥ b2/(b1+b2) and 0 if x1(t) < b2/(b1+b2);  q2(t) = 1 if x1(t) > b2/(b1+b2) and 0 if x1(t) ≤ b2/(b1+b2).

    When x1(0) = b2/(b1+b2) and y1(0) = a2/(a1+a2), we have dx1/dt = dy1/dt = 0, i.e., the imitation dynamic Eq (4.2) remains at the Nash equilibrium ((b2/(b1+b2), b1/(b1+b2)), (a2/(a1+a2), a1/(a1+a2))).

    If the initial point is not the Nash equilibrium, let b = b2/(b1+b2), a = a2/(a1+a2), and introduce the shifted variables x̃1 = x1 − b, ỹ1 = y1 − a, so that Eq (4.2) becomes a differential equation whose equilibrium lies at the origin.

    Then Eq (4.2) becomes

    dx̃1/dt = αp1(t)(x̃1 + b) + αp2(t)(1 − b − x̃1) − α(x̃1 + b),
    dỹ1/dt = βq1(t)(ỹ1 + a) + βq2(t)(1 − a − ỹ1) − β(ỹ1 + a). (4.3)

    Taking the polar coordinates x̃1 = r·cosθ, ỹ1 = r·sinθ, Eq (4.3) yields

    dr/dt = α(p1(t) − p2(t) − 1)r·cos²θ + α(b·p1(t) − b·p2(t) − b + p2(t))cosθ + β(q1(t) − q2(t) − 1)r·sin²θ + β(a·q1(t) − a·q2(t) − a + q2(t))sinθ. (4.4)

    Eq (4.4) is analyzed over the ranges 0 < θ < π/2, π/2 < θ < π, π < θ < 3π/2, 3π/2 < θ < 2π and at θ = 0, π/2, π, 3π/2, 2π. These cases are discussed in turn.

    1) When 0 < θ < π/2, we have x̃1(t), ỹ1(t) > 0 (i.e., x1 > b and y1 > a), so p1(t) = p2(t) = 0 and q1(t) = q2(t) = 1, and Eq (4.4) reduces to

    dr/dt = −(α·cos²θ + β·sin²θ)r − αb·cosθ + β(1 − a)sinθ.

    It follows that dr/dt = 0 when r = (β(1 − a)sinθ − αb·cosθ)/(α·cos²θ + β·sin²θ), i.e., there is a special solution

    r(θ) = (β(1 − a)sinθ − αb·cosθ)/(α·cos²θ + β·sin²θ).

    The solution is a curve in the phase plane centered at the origin.

    When r > (β(1 − a)sinθ − αb·cosθ)/(α·cos²θ + β·sin²θ), Eq (4.4) gives dr/dt < 0, so the trajectory approaches the curve from outside; when r < (β(1 − a)sinθ − αb·cosθ)/(α·cos²θ + β·sin²θ), Eq (4.4) gives dr/dt > 0, so the trajectory approaches the curve from inside. Therefore, for 0 < θ < π/2 the system has a stable limit cycle centered at the origin. A stable limit cycle is a periodic solution around a non-isolated equilibrium point: when a solution trajectory starts from a point in the solution space, it converges to this limit cycle and then moves periodically along it.

    2) When π/2 < θ < π, π < θ < 3π/2, and 3π/2 < θ < 2π, dr/dt is, respectively,

    −(α·cos²θ + β·sin²θ)r − αb·cosθ − βa·sinθ,
    −(α·cos²θ + β·sin²θ)r + α(1 − b)cosθ − βa·sinθ,
    −(α·cos²θ + β·sin²θ)r + α(1 − b)cosθ + β(1 − a)sinθ.

    The analysis and the result are similar to those for 0 < θ < π/2.

    3) When θ = 0, π/2, π, 3π/2, 2π, we have dr/dt = 0.

    Hence the polar differential Eq (4.4) has a stable limit cycle centered at the origin; that is, the imitation dynamic Eq (4.2) has a stable limit cycle centered at ((b2/(b1+b2), b1/(b1+b2)), (a2/(a1+a2), a1/(a1+a2))). The imitation dynamic Eq (4.2) therefore converges either to the Nash equilibrium or to a stable limit cycle centered on the Nash equilibrium.

    This completes the proof for the second type.

    For the third type, if the Nash equilibrium set is E(F) = {((1,0),(1,0)), ((0,1),(0,1)), ((b2/(b1+b2), b1/(b1+b2)), (a2/(a1+a2), a1/(a1+a2)))}, then a1 > 0, a2 > 0, b1 > 0, b2 > 0. For Eq (4.1), we get

    p1(t) = 1 if y1(t) ≥ a2/(a1+a2) and 0 if y1(t) < a2/(a1+a2);  p2(t) = 1 if y1(t) > a2/(a1+a2) and 0 if y1(t) ≤ a2/(a1+a2);
    q1(t) = 1 if x1(t) ≥ b2/(b1+b2) and 0 if x1(t) < b2/(b1+b2);  q2(t) = 1 if x1(t) > b2/(b1+b2) and 0 if x1(t) ≤ b2/(b1+b2).

    According to the method of variation of parameters, the solution of the imitation dynamic Eq (4.2) is

    x1(t) = ( e^{α(p1(t)−p2(t)−1)t}[α(p1(t)−p2(t)−1)x1(0) + αp2(t)] − αp2(t) ) / ( α(p1(t)−p2(t)−1) ),
    y1(t) = ( e^{β(q1(t)−q2(t)−1)t}[β(q1(t)−q2(t)−1)y1(0) + βq2(t)] − βq2(t) ) / ( β(q1(t)−q2(t)−1) ).

    For any ε > 0 there exists δ > 0 such that when (x1(0) − 1)² + (y1(0) − 1)² < δ, we have p1(t) = p2(t) = q1(t) = q2(t) = 1. From Eq (4.2), we get

    lim_{t→+∞} [(x1(t) − 1)² + (y1(t) − 1)²] = (1 − 1)² + (1 − 1)² = 0.

    This proves the asymptotic stability of ((1,0),(1,0)).

    When x1(0)² + y1(0)² < δ, we have p1(t) = p2(t) = q1(t) = q2(t) = 0. From Eq (4.2), we get

    lim_{t→+∞} [x1(t)² + y1(t)²] = 0² + 0² = 0.

    This proves the asymptotic stability of ((0,1),(0,1)).

    If and only if x1(0) = b2/(b1+b2) and y1(0) = a2/(a1+a2) do we get lim_{t→+∞} x1(t) = b2/(b1+b2) and lim_{t→+∞} y1(t) = a2/(a1+a2).

    According to the above analysis, a solution trajectory of Eq (4.2) starting from any position in the solution space converges to an element of the Nash equilibrium set E(F).

    When the Nash equilibrium set

    E(F) = {((1,0),(0,1)), ((0,1),(1,0)), ((b2/(b1+b2), b1/(b1+b2)), (a2/(a1+a2), a1/(a1+a2)))},

    the analysis is similar, and Eq (4.2) again converges to one of the Nash equilibria. This completes the proof of the third type, and hence of Theorem 4.1.
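    As a numerical check (not in the paper), the piecewise closed-form solution above can be compared with a direct Euler integration of Eq (4.2) on a region where p1, p2, q1, q2 stay constant; here all four indicators equal 1, corresponding to a start close to ((1,0),(1,0)) in a coordination game, and the rates, start and horizon below are assumed values.

```python
import math

# Sketch (not from the paper): compare the closed-form solution of Eq (4.2) with a small-step
# Euler integration on a region where p1(t), p2(t) are constant (here both equal 1).
alpha, p1, p2 = 0.1, 1.0, 1.0
x0, T, dt = 0.8, 30.0, 1e-3

def closed_form(t):
    c = alpha * (p1 - p2 - 1)                       # exponent coefficient in the solution formula
    return (math.exp(c * t) * (c * x0 + alpha * p2) - alpha * p2) / c

x = x0
for _ in range(int(T / dt)):                        # Euler step of dx1/dt = -a*x1 + a*(p1*x1 + p2*(1-x1))
    x += dt * (-alpha * x + alpha * (p1 * x + p2 * (1 - x)))

print(abs(x - closed_form(T)))                      # only a small discretization error remains
```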

    Example 4.1. Take the prisoner's dilemma game with the following benefit matrices. We set the introspection rates α = 0.1 and β = 0.2; the initial positions are spaced 0.2 apart horizontally and vertically, and the arrows indicate the direction of the solution trajectories. The solution trajectories are shown in Figure 1.

    A1 = [−5, 0; −8, −1],  B1 = [−5, −8; 0, −1]
    Figure 1.  Solution trajectories of the imitation dynamic Eq (4.2) for the prisoner's dilemma game.

    From the Figure 1, it can be seen that solution trajectories eventually converge to (1,1) for all initial points, i.e., the solution trajectories converge to Nash equilibrium (ˉx,ˉy)=((1,0),(1,0)).

    Example 4.2. Take the coin-flip game with the following benefit matrices. We set the introspection rates α = 0.1 and β = 0.2; the initial positions are spaced 0.25 apart horizontally and vertically, and the arrows indicate the direction of the solution trajectories. The solution trajectories are shown in Figure 2.

    A2 = [−1, 1; 1, −1],  B2 = [1, −1; −1, 1]
    Figure 2.  Solution trajectory of the imitation dynamic Eq (4.2) for the coin-flip game.

    From Figure 2, we can see that when the initial point is (0.5, 0.5), the solution trajectory converges to (0.5, 0.5), i.e., the Nash equilibrium (x̄, ȳ) = ((1/2,1/2),(1/2,1/2)); when the initial point is anything other than (0.5, 0.5), the solution trajectory evolves counterclockwise around (0.5, 0.5) and converges to the limit cycle centered at (0.5, 0.5), i.e., a stable limit cycle centered at the Nash equilibrium (x̄, ȳ) = ((1/2,1/2),(1/2,1/2)).
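    This cycling behavior can be reproduced numerically (a sketch under assumed step size and rates, not the authors' code) by Euler-integrating the imitation dynamics (4.2) with the coin-flip matrices; the printed points keep circling (0.5, 0.5) rather than settling on it.

```python
# Sketch: Euler integration of the imitation dynamics (4.2) for the coin-flip matrices.
A, B = [[-1, 1], [1, -1]], [[1, -1], [-1, 1]]
alpha, beta, dt = 0.1, 0.2, 0.01
x1, y1 = 0.9, 0.9

for k in range(200000):
    F11 = A[0][0] * y1 + A[0][1] * (1 - y1); F12 = A[1][0] * y1 + A[1][1] * (1 - y1)
    F21 = B[0][0] * x1 + B[1][0] * (1 - x1); F22 = B[0][1] * x1 + B[1][1] * (1 - x1)
    p1, p2 = float(F11 >= F12), float(F11 > F12)
    q1, q2 = float(F21 >= F22), float(F21 > F22)
    x1 += dt * (-alpha * x1 + alpha * (p1 * x1 + p2 * (1 - x1)))
    y1 += dt * (-beta * y1 + beta * (q1 * y1 + q2 * (1 - y1)))
    if k % 50000 == 0:
        print(round(x1, 3), round(y1, 3))   # the trajectory keeps orbiting (0.5, 0.5)
```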

    Example 4.3. Take the coordination game with the following benefit matrices. We set the introspection rates α = 0.1 and β = 0.2; the initial positions are spaced 1/6 apart horizontally and vertically, and the arrows indicate the direction of the solution trajectories. The solution trajectories are shown in Figure 3.

    A3 = [2, 0; 0, 1],  B3 = [2, 0; 0, 1]
    Figure 3.  Solution trajectory of the imitation dynamic Eq (4.2) for the coordination game.

    From Figure 3, when the initial point is (1/3, 1/3), the solution trajectory converges to (1/3, 1/3), i.e., the Nash equilibrium (x̄, ȳ) = ((1/3,2/3),(1/3,2/3)); when the initial point is to the upper right of (1/3, 1/3), the solution trajectory converges to (1,1), i.e., the Nash equilibrium ((1,0),(1,0)); when the initial point is to the lower left of (1/3, 1/3), the solution trajectory converges to (0,0), i.e., the Nash equilibrium ((0,1),(0,1)).

    Research in the realm of realizing or computing Nash equilibrium through algorithmic approaches has led to various contributions. Zhang et al. [35] proposed the PMR-IGA algorithm, and Zhang et al. [36] proposed the SA-IGA algorithm. Both algorithms are based on the reinforcement learning theory, striving to guide individuals within a population through iterative processes that ultimately lead to Nash equilibrium convergence. For the computation of Nash equilibrium, Li et al. [37] proposed the GPDEPSO algorithm to compute the Nash equilibrium of a finite non-cooperative game. This approach equates solving the Nash equilibrium to solving an optimization problem and considers the algorithmic process stochastic. Stochastic generalized function theory is adopted to prove the convergence to the Nash equilibrium.

    On the other hand, the inspiration for the PGPSO algorithm differs from the above-mentioned approaches, as it is based on the theory of social learning and population imitation. Its convergence is demonstrated by transforming the iterative process into differential equations, which proves the asymptotic stability of the pure-strategy Nash equilibria from an equation-driven standpoint. In this setting, the unique mixed-strategy Nash equilibrium is the center of a stable limit cycle in the solution space, and solution trajectories from any point converge to that limit cycle.

    In this section, we study the Nash equilibrium realizations of the prisoner's dilemma game, the coin-flip game, and the hawk-dove game using the PGPSO algorithm, respectively. The prisoner's dilemma game has only one pure strategy Nash equilibrium, the coin-flip game has only one mixed strategy Nash equilibrium, and the hawk-dove game has three Nash equilibria, i.e., two pure strategy Nash equilibria and one mixed strategy Nash equilibrium.

    The prisoner's dilemma game, typically a two-person game, has been employed by Wang [38] to corroborate the presence of the "baiting effect" within social populations. Notably, the benefit matrices characteristic of the inter-player game can be seamlessly applied to the population game model. Therefore, this game model can be used as an example of the population game. The prisoner's dilemma game has only one pure-strategy Nash equilibrium. Both populations choose strategy 1, i.e., (ˉx,ˉy)=((1,0),(1,0)). The following are the benefit matrices and expected benefits of the prisoner's dilemma game:

    A1 = [−5, 0; −8, −1],  B1 = [−5, −8; 0, −1],
    F1 = 2xi·y1 + xi − 7y1 − 1;  F2 = 2x1·yj − 7x1 + yj − 1.

    In the PGPSO algorithm of the prisoner's dilemma game, we set the introspection rate α = β = 1/12 for k ≤ 25 and 1/24 for k > 25, the population size m = n = 48, the search space range from popmin = 0 to popmax = 1, and the number of iterations genmax = 50. The initial states of the populations are chosen randomly by the system. The two populations' optimal benefit and location figures are shown in Figure 4.
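    The piecewise introspection-rate schedule above can be written as a small helper (a sketch; the function name is ours):

```python
def introspection_rate(k):
    """Introspection rate used in the experiments: 1/12 for the first 25 iterations, 1/24 afterwards."""
    return 1 / 12 if k <= 25 else 1 / 24
```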

    Figure 4.  The figures of two populations in the prisoners' dilemma game.

    From Figure 4(a), (b) we can see that all the particles of population 1 converge to x1 = 1, i.e., all players of population 1 choose strategy 1, and the best benefit of population 1 converges to −5. All particles of population 2 converge to y1 = 1, i.e., all players of population 2 choose strategy 1, and the best benefit of population 2 converges to −5. This outcome is consistent with the Nash equilibrium ((1,0),(1,0)) of the prisoner's dilemma game. Therefore, the PGPSO algorithm accurately finds the particles' positions and benefits corresponding to its Nash equilibrium strategy, and it completely records the path to the Nash equilibrium of the prisoner's dilemma game.

    Consider the coin-flip game as a population game, where two populations play against each other according to the benefit matrices of the coin-flip game. The coin-flip game has a mixed-strategy Nash equilibrium, i.e., (x̄, ȳ) = ((1/2,1/2),(1/2,1/2)). The following are the benefit matrices and expected benefits of the coin-flip game:

    A2 = [−1, 1; 1, −1],  B2 = [1, −1; −1, 1],
    F1 = −4xi·y1 + 2xi + 2y1 − 1;  F2 = 4x1·yj − 2x1 − 2yj + 1.

    In the PGPSO algorithm of the coin-flip game, we set the introspection rate α = β = 1/12 for k ≤ 25 and 1/24 for k > 25, the population size m = n = 48, the search space range from popmin = 0 to popmax = 1, and the number of iterations genmax = 50. The initial states of the populations are chosen randomly by the system. The two populations' optimal benefit and location figures are shown in Figure 5.

    Figure 5.  The figures of two populations in the coin-flip game.

    From Figure 5(a), (b), we can see that all particles of population 1 converge cyclically to x1 = 1/2, i.e., nearly half of the players in population 1 choose strategy 1, and the optimal benefit of population 1 converges cyclically to 0.025. All particles of population 2 converge cyclically to y1 = 1/2, i.e., nearly half of the players in population 2 choose strategy 1, and the optimal benefit of population 2 converges cyclically to 0.025. All particles thus converge cyclically to ((1/2,1/2),(1/2,1/2)): the two populations evolve toward the Nash equilibrium and eventually converge to a neighborhood centered on it, which corresponds to the stable limit cycle around the mixed-strategy Nash equilibrium in the imitation dynamic Eq (4.2). From Figure 5(c), (d), we can see that when the initial strategy distribution of the two populations is the Nash equilibrium, all particles of population 1 converge to x1 = 1/2 and the optimal benefit of population 1 converges to 0, while all particles of population 2 converge to y1 = 1/2 and the optimal benefit of population 2 converges to 0. This outcome is consistent with the Nash equilibrium ((1/2,1/2),(1/2,1/2)) of the coin-flip game. Therefore, the PGPSO algorithm accurately finds the particle positions and benefits corresponding to its Nash equilibrium strategy, and it completely records the path to the Nash equilibrium of the coin-flip game.

    The hawk-dove game is an important population game model in evolutionary game theory. The model has three Nash equilibria, and the set of Nash equilibria is E(F) = {((1,0),(0,1)), ((0,1),(1,0)), ((2/3,1/3),(2/3,1/3))}. The following are the benefit matrices and expected benefits of the hawk-dove game:

    A4 = [−1, 4; 0, 2],  B4 = [−1, 0; 4, 2],
    F1 = −3xi·y1 + 2xi − 2y1 + 2;  F2 = −3x1·yj − 2x1 + 2yj + 2.

    In the PGPSO algorithm of the hawk-dove game, we set the introspection rate α = β = 1/12 for k ≤ 25 and 1/24 for k > 25, the population size m = n = 48, the search space range from popmin = 0 to popmax = 1, and the number of iterations genmax = 50. The initial states of the populations are chosen randomly by the system. The two populations' optimal benefit and location figures are shown in Figure 6.

    Figure 6.  The figures of two populations in the hawk-dove game.

    From Figure 6(a), (b), it can be seen that all particles of population 1 converge to x1 = 1 and the optimal benefit converges to 4, while all particles of population 2 converge to y1 = 0 and the optimal benefit converges to 0. From Figure 6(c), (d), all particles of population 1 converge to x1 = 0 and the optimal benefit converges to 0, while all particles of population 2 converge to y1 = 1 and the optimal benefit converges to 4. From Figure 6(e), (f), when the initial strategy distribution of the two populations is the mixed Nash equilibrium ((2/3,1/3),(2/3,1/3)), all particles of population 1 converge to x1 = 2/3 and the optimal benefit converges to 2/3, while all particles of population 2 converge to y1 = 2/3 and the optimal benefit converges to 2/3. These outcomes are consistent with the three Nash equilibria of the hawk-dove game, E(F) = {((1,0),(0,1)), ((0,1),(1,0)), ((2/3,1/3),(2/3,1/3))}. Therefore, the PGPSO algorithm accurately finds the particle positions and benefits corresponding to its Nash equilibrium strategies, and it completely records the path to the Nash equilibrium of the hawk-dove game.

    In the PGPSO algorithm with introspection rate sensitivity of the three games, we set the introspection rate α=β, the population size m=n=48, the search space range from popmin=0 to popmax=1, and the number of iterations genmax=20. The initial states of the prisoner's dilemma and the hawk-dove game are x1(0)=y1(0)=0.5, and the initial states of the coin-flip game are x1(0)=y1(0)=0.4. The position figures of the two populations are shown in Figure 7.

    Figure 7.  The relationship between Nash equilibrium realization and introspection rate for the three games. The solid line represents population 1, and the dashed line represents population 2.

    From Figure 7(a), (c), it can be seen that in the prisoner's dilemma and the hawk-dove game, both populations converge to the Nash equilibrium in fewer iterations as the introspection rate α increases, meaning that increasing the introspection rate α speeds up the evolution toward Nash equilibrium realization. From Figure 7(b), it can be seen that in the coin-flip game, the magnitude of the cyclic convergence increases as the introspection rate α increases, which means that increasing the introspection rate α expands the range of the cyclic fluctuations.
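    The speed-up for the pure-strategy case can be seen in a few lines (a sketch under assumed values, not the paper's experiment): in the prisoner's dilemma, strategy 1 is dominant, so the aggregate update (4.1) reduces to x1 ← (1 − α)x1 + α, and the number of iterations needed to come within 1% of the equilibrium shrinks as α grows.

```python
import math

# Count iterations of x1 <- (1 - a)*x1 + a (update (4.1) when strategy 1 is dominant)
# until the population is within eps of the equilibrium share 1, for several assumed rates.
x0, eps = 0.5, 0.01
for a in (0.05, 0.1, 0.2, 0.4):
    x1, k = x0, 0
    while 1 - x1 > eps:
        x1 = (1 - a) * x1 + a
        k += 1
    print(a, k, math.ceil(math.log(eps / (1 - x0)) / math.log(1 - a)))  # simulated vs closed-form count
```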

    In the PGPSO algorithm for the initial state sensitivity of the three games, we set the introspection rate α = β = 1/12 for k ≤ 25 and 1/24 for k > 25, the population size m = n = 48, the search space range from popmin = 0 to popmax = 1, and the number of iterations genmax = 50. For the prisoner's dilemma and the hawk-dove game, the initial state range of population 1 is 0.1-0.9, and the positions are chosen every 0.1. The initial state range of population 2 is chosen with the same rules as population 1. A total of 81 parameter configurations are generated by combining the two populations. For each parameter configuration, a series of 50 experiments are conducted to observe the system's steady state. Two initial states (0.1, 0.1) and (0.9, 0.9) are selected for the coin-flip game. The location figures of the two populations are shown in Figure 8.

    Figure 8.  The relationship between Nash equilibrium realization and initial state for the three games. The solid line represents population 1, and the dashed line represents population 2.

    From Figure 8(a), it can be seen that in the prisoner's dilemma game, no matter which initial state the populations start from, the two populations converge to the Nash equilibrium ((1,0),(1,0)), which means that changing the initial state does not affect the Nash equilibrium realization. From Figure 8(b), it can be seen that in the coin-flip game, from both initial states the two populations converge cyclically to the mixed-strategy Nash equilibrium ((1/2,1/2),(1/2,1/2)), so again the initial state does not affect the Nash equilibrium realization. From Figure 8(c), it can be seen that in the hawk-dove game, the two populations converge to different Nash equilibria from different initial states. Specifically, when the initial state is on the right side of the diagonal x = y, the two populations converge to the Nash equilibrium ((1,0),(0,1)); conversely, when the initial state is on the left side of this diagonal, the two populations converge to the Nash equilibrium ((0,1),(1,0)). Subtle nuances emerge for initial states on the diagonal itself: the first five positions lead to one of the two pure equilibria and the last four to the other. These results show that the initial state affects the Nash equilibrium realization in this game.
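    The dependence on the initial state can also be probed with the deterministic aggregate update (4.1) on a coarse grid of starting points (a sketch with assumed rates and grid, not the paper's 81-configuration experiment): off-diagonal starts reach one of the two pure equilibria, while symmetric starts hover near the mixed one, because the deterministic update cannot break the tie that the stochastic PGPSO runs eventually do.

```python
# Scan initial states for the hawk-dove matrices and record where the aggregate update (4.1) ends up.
A, B = [[-1, 4], [0, 2]], [[-1, 0], [4, 2]]
alpha = beta = 1 / 12

def run(x1, y1, steps=2000):
    for _ in range(steps):
        F11, F12 = A[0][0] * y1 + A[0][1] * (1 - y1), A[1][0] * y1 + A[1][1] * (1 - y1)
        F21, F22 = B[0][0] * x1 + B[1][0] * (1 - x1), B[0][1] * x1 + B[1][1] * (1 - x1)
        x1 = (1 - alpha) * x1 + alpha * ((F11 >= F12) * x1 + (F11 > F12) * (1 - x1))
        y1 = (1 - beta) * y1 + beta * ((F21 >= F22) * y1 + (F21 > F22) * (1 - y1))
    return round(x1, 2), round(y1, 2)

for x0 in (0.2, 0.5, 0.8):
    # (1.0, 0.0) corresponds to ((1,0),(0,1)) and (0.0, 1.0) to ((0,1),(1,0))
    print([run(x0, y0) for y0 in (0.2, 0.5, 0.8)])
```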

    Taking the welfare game in [31] as an example, we compare the PGPSO algorithm with the Meta Equilibrium Q-learning algorithm in finding the realization path of the mixed-strategy Nash equilibrium.

    Example 5.1. This game model has a mixed-strategy Nash equilibrium, i.e., (x̄, ȳ) = ((1/2,1/2),(1/4,3/4)), and the following are the benefit matrices and expected benefits of the welfare game:

    A5 = [3, −1; −1, 0],  B5 = [2, 3; 1, 0],
    F1 = 5xi·y1 − xi − y1;  F2 = −2x1·yj + 3x1 + yj.

    The Meta Equilibrium Q-learning algorithm is an enhancement of the Nash Q-learning algorithm. The rationale behind this improvement stems from a limitation of the Nash Q-learning algorithm: its inability to produce a path that realizes a mixed-strategy Nash equilibrium when each player opts for a pure strategy. To address this problem, the Meta Equilibrium Q-learning algorithm transforms the welfare game into a meta-game and uses the pure-strategy meta-equilibrium to represent the mixed-strategy Nash equilibrium of the welfare game; the meta-equilibrium's realization path then replaces the mixed strategy's realization path, so a path to the mixed-strategy Nash equilibrium can be obtained. However, this transformation comes at the expense of increased complexity in locating the realization path of the mixed-strategy Nash equilibrium.

    In the PGPSO algorithm of the welfare game, we set the introspection rate α = β = 1/12 for k ≤ 25 and 1/24 for k > 25, the population size m = n = 48, the search space range from popmin = 0 to popmax = 1, and the number of iterations genmax = 50. The computer system randomly selects the initial states of the populations. The position figures of the two populations are shown in Figure 9.

    Figure 9.  The figures of two populations in the welfare game.

    From Figure 9(a), it can be seen that after 25 iterations all particles of population 1 converge cyclically to x1 = 1/2 and all particles of population 2 converge cyclically to y1 = 1/4; the difference between the maximum and minimum values of the cyclic convergence is 0.0625. From Figure 9(b), when the initial strategy distribution of the two populations is the Nash equilibrium, all particles of population 1 converge to x1 = 1/2 and all particles of population 2 converge to y1 = 1/4. This outcome is consistent with the Nash equilibrium of the welfare game. Therefore, for players who choose pure strategies in the welfare game, the PGPSO algorithm converges to the mixed-strategy Nash equilibrium and finds the path to realize the equilibrium.

    In the welfare game, to find the realization path of the mixed-strategy Nash equilibrium, the Meta Equilibrium Q-learning algorithm transforms the welfare game into a meta-game, and the meta-equilibrium's realization path replaces the mixed strategy's realization path. From Figure 9(a), it can be seen that the PGPSO algorithm converges to the stable limit cycle centered on the mixed-strategy Nash equilibrium. The PGPSO algorithm finds the realization path directly, which means that it reduces complexity by eliminating the operation of transforming the welfare game into a meta-game.

    Nash equilibria of population games arise commonly in human societies and biological populations, and their realization is important to understand from the perspective of individual players within a population. In a population of finitely many players, players can optimize their strategies by imitating other players with higher benefits, depending on their knowledge of their strategic environment, and the rule of imitation has been widely studied in different game models. In this paper, we combine social learning theory and population imitation theory, develop the PGPSO algorithm, and apply it to the Nash equilibrium realization of three two-population game models.

    The motivation for studying the imitation learning rule is to explore whether imitation learning rules can realize the Nash equilibrium of population games. Specifically, the learning rule is turned into a swarm intelligence algorithm, which is used to simulate the behavioral dynamics of the players in the game. For the PGPSO algorithm's iterative formulation, the convergence analysis is performed from the perspective of differential equations. The result is that the solution trajectories of the differential equations converge to the pure-strategy Nash equilibria, and they converge to the mixed-strategy Nash equilibrium exactly when the initial position is that equilibrium. Moreover, in the coin-flip game, the mixed-strategy Nash equilibrium is the center of a stable limit cycle of the differential equation, and when the initial position is not the mixed-strategy Nash equilibrium, all initial points converge to this limit cycle.

    Using the PGPSO algorithm, we simulate the Nash equilibrium realization process for three two-population games. The simulation outcomes demonstrate that the PGPSO algorithm successfully realizes Nash equilibrium and clearly shows the path of its realization. According to the analysis of the effects of the introspection rate and the initial state, increasing the introspection rate accelerates the evolution toward pure-strategy Nash equilibrium realization, but it expands the range of the cyclic fluctuations around the mixed-strategy Nash equilibrium. Changing the initial state does not affect the Nash equilibrium realization of the prisoner's dilemma and the coin-flip game, but it determines which Nash equilibrium the hawk-dove game converges to.

    The following abbreviations are used in this manuscript.

    PSO    Particle swarm optimization
    EWA    Experience-weighted attraction
    IGA    Infinitesimal gradient algorithm
    GIGA   Generalized infinitesimal gradient algorithm
    PGPSO  Population game particle swarm optimization


    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    This work was supported by the National Natural Science Foundation of China (Grant No. 71961003) and Science and Technology Program of Guizhou Province (Grant No. 7223), and the Doctoral Foundation Project of Guizhou University (Grant No. 49).

    The authors declare there is no conflict of interest.



    [34] Z. Zhu, J. Kleeff, H. Friess, L. Wang, A. Zimmermann, Y. Yarden, et al., Epiregulin is up-regulated in pancreatic cancer and stimulates pancreatic cancer cell growth, Biochem. Biophys. Res. Commun., 273 (2000), 1019-1024. doi: 10.1006/bbrc.2000.3033.
    [35] D. J. Riese, R. L. Cullum, Epiregulin: roles in normal physiology and cancer, Semin. Cell Dev. Biol., 28 (2014), 49-56. doi: 10.1016/j.semcdb.2014.03.005. doi: 10.1016/j.semcdb.2014.03.005
    [36] F. Bormann, S. Stinzing, S. Tierling, M. Morkel, M. R. Markelova, J. Walter, et al., Epigenetic regulation of amphiregulin and epiregulin in colorectal cancer, Int. J. Cancer, 144 (2019), 569-581. doi: 10.1002/ijc.31892.
    [37] J. Zhang, K. Iwanaga, K. C. Choi, M. Wislez, M. G. Raso, W. Wei, et al., Intratumoral epiregulin is a marker of advanced disease in non-small cell lung cancer patients and confers invasive properties on EGFR-mutant cells, Cancer Prev. Res. (Phila), 1 (2008), 201-207. doi: 10.1158/1940-6207.CAPR-08-0014.
    [38] R. S. Herbst, Review of epidermal growth factor receptor biology, Int. J. Radiat. Oncol. Biol. Phys., 59 (2004), 21-26. doi: 10.1016/j.ijrobp.2003.11.041. doi: 10.1016/j.ijrobp.2003.11.041
    [39] C. M. Sloss, F. Wang, M. A. Palladino, J. C. Cusack, Activation of EGFR by proteasome inhibition requires HB-EGF in pancreatic cancer cells, Oncogene, 29 (2010), 3146-3152. doi: 10.1038/onc.2010.52. doi: 10.1038/onc.2010.52
    [40] V. Bernard, J. Young, P. Chanson, N. Binart, New insights in prolactin: pathological implications, Nat. Rev. Endocrinol., 11 (2015), 265-275. doi: 10.1038/nrendo.2015.36. doi: 10.1038/nrendo.2015.36
    [41] P. Dandawate, G. Kaushik, C. Ghosh, D. Standing, A. A. Ali Sayed, S. Choudhury, et al., Diphenylbutylpiperidine antipsychotic drugs inhibit prolactin receptor signaling to reduce growth of pancreatic ductal adenocarcinoma in mice, Gastroenterology, 158 (2020). doi: 10.1053/j.gastro.2019.11.279.
    [42] M. Tandon, G. M. Coudriet, A. Criscimanna, M. Socorro, M. Eliliwi, A. D. Singhi, et al., Prolactin promotes fibrosis and pancreatic cancer progression, Cancer Res., 79 (2019), 5316-5327. doi: 10.1158/0008-5472.CAN-18-3064.
    [43] H. Nie, P. Q. Huang, S. H. Jiang, Q. Yang, L. P. Hu, X. M. Yang, et al., The short isoform of PRLR suppresses the pentose phosphate pathway and nucleotide synthesis through the NEK9-Hippo axis in pancreatic cancer, Theranostics, 11 (2021), 3898-3915. doi: 10.7150/thno.51712.
    [44] J. Yang, Y. Li, Z. Sun, H. Zhan, Macrophages in pancreatic cancer: An immunometabolic perspective, Cancer Lett., 498 (2021), 188-200. doi: 10.1016/j.canlet.2020.10.029. doi: 10.1016/j.canlet.2020.10.029
    [45] S. S. Linton, T. Abraham, J. Liao, G. A. Clawson, P. J. Butler, T. Fox, et al., Tumor-promoting effects of pancreatic cancer cell exosomes on THP-1-derived macrophages, PLoS One, 13 (2018), e0206759. doi: 10.1371/journal.pone.0206759.
Supplementary material: mbe-19-01-010-Supplementary.pdf
© 2022 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0).