Research article

Invariable distribution of co-evolutionary complex adaptive systems with agent's behavior and local topological configuration

  • Received: 06 December 2023 Revised: 24 January 2024 Accepted: 25 January 2024 Published: 01 February 2024
  • In this study, we developed a dynamical Multi-Local-Worlds (MLW) complex adaptive system with co-evolution of agents' behavior and local topological configuration to predict whether agents' behavior converges to a certain invariable distribution, and to derive the conditions that the invariable distribution of the optimal strategies should satisfy in a dynamical system structure. To this end, a Markov process controlled by agents' behavior and the local graph topology was constructed to describe the interaction properties of the dynamic case. The invariable distribution of the system was then obtained using stochastic-process methods. Three kinds of agents (smart, normal, and irrational), coupled with their corresponding behaviors, were introduced as an example to prove that their strategies converge to a certain invariable distribution. The results show that an agent selects his/her behavior according to the evolution of a random complex network driven by preferential attachment and a volatility mechanism tied to its payoff, which drives the evolution of the complex adaptive system. We conclude that the corresponding invariable distribution is determined by the agents' behavior, the system's topological configuration, the behavior noise, and the system population. The invariable distribution obtained as the behavior noise tends to zero differs from that obtained as the population tends to infinity. This universal conclusion, covering the properties of both the dynamical MLW complex adaptive system and cooperative/non-cooperative games, is much closer to the common properties of actual economic and management events than has been analyzed before, and it is instrumental in substantiating managers' decision-making in the development of traffic systems, urban models, industrial clusters, technology innovation centers, and other applications.

    Citation: Hebing Zhang, Xiaojing Zheng. Invariable distribution of co-evolutionary complex adaptive systems with agent's behavior and local topological configuration[J]. Mathematical Biosciences and Engineering, 2024, 21(2): 3229-3261. doi: 10.3934/mbe.2024143




    Over the past two decades, complex adaptive systems have been actively developed, including the evolution of economic systems [1,2], theory of emergence [3,4], as well as social [5,6], ecological [7], and epidemic [8,9] characteristics of bifurcation [10]. Minor perturbations were found to escalate into tectonic shifts in the system, or even abrupt changes of the system's properties and functions [11], resulting in symmetry-breaking [12] or synchronization [13]. In essence, spontaneous switches in group behavior derive from interactions between individuals [14], during which some behaviors are learned [15] or propagate [16], causing the structure and behaviors of the system (or collective) to change [17] or reach a critical state [18]. The common property is that their detailed structure cannot be described exactly from a mathematical viewpoint. To this end, stochastic differential game theory has been introduced [19], reflecting the interaction behavior of agents and the optimal strategy coupled with a temporarily deterministic structure [20] and stochastic complex networks [21]. Atar and Budhiraja described the evolution law under various agent interaction rules in different fields [22]. However, the interactions in a Multi-Local-Worlds (MLW) system are both synthesized (they consist of cooperative as well as non-cooperative games) and stochastic in a dynamical configuration (an arbitrary agent always selects interaction partners according to his/her benefit). This makes the problem much more difficult to resolve mathematically, and most existing results fail to capture real complex adaptive systems.

    The classic analysis methods cannot deal with this dual randomness because the agents' diverse behaviors and the system's configuration both change randomly with time. Few results obtained with classic methods reveal what the system's behavior converges to as time tends to infinity or becomes relatively large. Furthermore, most studies have either considered the mixed interaction of non-cooperative/cooperative games on a stable MLW graph or considered random MLW complex networks with a Boolean game between individuals, both of which are far from the properties of real complex adaptive systems. In this sense, new modeling methods should be introduced. We constructed a multi-agent model to analyze the evolution law of complex adaptive systems. We posit that the system behavior must satisfy an invariable distribution if agents act according to this model. Furthermore, once this invariable distribution law is determined, strategies for economic issues, political events, social questions, and environmental influence can be devised scientifically.

    Generally, each agent in a complex adaptive system can interact only with local agents; their interaction relies on the system's local topological configuration. Brian concluded that imperfect information from the other agents with whom an arbitrary agent acts directly determines this agent's behavior [23]. Jiang et al. analyzed how the topological configuration of nematic disclination networks affects the interaction between agents and agents' behavior [24]. Furthermore, Maia et al. reported that if each agent in the system can change its interaction targets (i.e., its "neighbors") to obtain more benefits, then complex nonlinear interactions arise between subjects and between subjects and environments, which lead to the phenomenon of "emergence in large numbers" in the system [25]. Thus, the evolution of microscopic individuals makes the macro system display a new state and a new structure [26]. In this sense, the local topological configuration is not stable but dynamic [27]. More importantly, the system's state and properties are affected adaptively by the environment [28], and the environment is in turn affected by the system's state and properties [29], which produces an adaptive and evolutionary process [30]. Scholars have studied complex adaptive systems with these properties. If the interaction between agents is very simple (yes or no, for example), the problem is one of complex networks; if the system structure is deterministic and constant, the problem is one of game theory. However, most complex adaptive systems have both properties. Therefore, neither the theory of complex networks nor stochastic game theory alone can resolve the problem, and new methods must be introduced to study the properties of optimal agent strategies in complex adaptive systems.

    Generally, a system can be subdivided into multiple subsystems, with interactions between individuals occurring within the same subsystem and across different subsystems. Miguel et al. analyzed individual and flock behavior in heterogeneous social complex systems and found that much of the complexity comes from the relationships between these subsystems [31]. Similarly, Hassler et al. investigated individual behavior between intergroups under social environmental change [32]. Co-evolution of agent behavior and local topological configuration has also been considered [33]. Interactions occur not merely between neighboring individuals, and long-range interactions in the spatial dimension significantly affect the critical phase transition of the system. Neffke focused on the phase transition between co-workers who interact over long ranges [34]. Levin et al. considered political polarization and the corresponding reversal of political forces and found that indirect interaction induces political polarization more easily [35]. Priol et al. constructed an avalanche model to describe phase-transition properties driven by long-range interactions between agents [36], and the impact of network topology on the resilience of vehicle platoons has also been studied [37]. In addition, the rules governing interactions between individuals within an economic or management system are far more complex than the rules regulating interactions between individuals in the natural world, such as the conservation of momentum that regulates collisions of particles, or the black box of biology (such as the behavior adjustment strategies defined by the Ising and Vicsek models). Narizuka and Yoshihiro analyzed the lifetime distribution of adjacency relationships by invoking the corresponding Ising model [38]. Tiokhin et al. studied the evolution of priority in social complex systems by constructing a corresponding Vicsek model [39]. Colwell reported how simple behavior changes when the environment is disrupted [40]. Moreover, Algeier et al. substantiated that the system structure determined by interactions between individuals is a key contributing factor to the function and nature of the system [41]. Tóth et al. investigated emergence from the structure and function of a system with a leader-follower hierarchy among players and concluded that collective behavior becomes much more unstable if the interaction between agents and the leadership of managers in an arbitrary multi-level complex system crosses different layers [42]. Tump et al. studied emergent collective intelligence driven by the interaction between irrational agents and found that collective intelligence can become polarized, depending on system structure, the nature of the interactions, and the population size of agents [43]. Berner et al. revealed the phenomenon of desynchronization transitions occurring when the multi-layered structure satisfies certain conditions [44]. Zhang et al. analyzed the phase diagram of the symmetric iterated prisoner's dilemma of two companies with a partial imitation rule, using cases where individuals interact in varied structures, such as sparse and dense graphs, random and complete graphs, scale-free networks, and small-world networks [45]. Alternatively, Chen studied the diverse motion under small noise in the Vicsek model on dense scale-free networks [46].

    However, the available random complex network models fail to accurately describe economic and management systems because the interactions in such systems are much more complex than the Boolean interactions defined in these models. Similarly, the various game models are also ineffective because the interaction configuration between agents changes dynamically; both properties must be considered in a comprehensive model. Furthermore, there are many unknown and unseen scenarios in reality, and due to the lack of real-world data, the conclusions regarding concerted changes in collective behavior reached by classical analysis methods do not apply to unknown scenarios. In this paper, an MLW economic and management complex adaptive system with co-evolving agent behavior and local configuration is considered. This partially bridges the gap between reality and the results of previous studies.

    The rest of this paper is organized as follows. In Section 2, the characteristics of a complex adaptive system are analyzed, and a hypothesis is proposed. In Section 3, agents' behavior in the system is analyzed and abstracted into six sub-processes. In Sections 4 and 5, the agent local topology evolution model is constructed, and some theorems are formulated and proven. Section 6 discusses the invariable distributions as parameter $\beta$ tends to zero and parameter $N$ tends to infinity, respectively. Section 7 concludes this study.

    The innovative features and major contributions of this paper can be listed as follows:

    (1) Different from previous studies, we consider network growth and decline by treating the network as a multi-local-world one. Furthermore, the preferential attachment mechanism of an agent is not based on degree; instead, the preferential attachment probability is determined by the income earned over a short time scale. If, and only if, the state transition equation based on preferential attachment is determined can the evolution characteristics of the system be obtained. Besides the behavior and adaptability of agents, the interaction between the environment and the system is also considered, and the system can reach the corresponding measure coupled with the invariable distribution.

    (2) In the case where the agents' behavior noise approaches zero ($\beta\to0$), the invariable distributions $\mu^{\beta,\tau,N}$ are proven to satisfy the maximum deviation principle. This implies that the invariable distribution converges to a certain subset of the state space, on which $\beta\log\mu^{\beta,\tau,N}$ converges precisely to the minimum value of the rate function, so that the rate function can be estimated perfectly, as shown in Theorem 3. The deterministic state $\omega$ of the evolved complex adaptive system must be estimated according to the invariable distribution coupled with the optimal strategy and the local topological structure of the agent, according to Theorem 4.

    (3) We prove that if the population of agents in the system tends to infinity, the invariable distribution of a complex adaptive system with co-evolving agent behavior and local topological configuration converges into a certain interval with rate function $r^{\beta}(\sigma,q)$, according to Theorem 5.

    Definition 1. The connected sub-graphs $G_i$, $i=1,2,\dots,m$, of the topological structure of the complex adaptive system $G$, where $G_i\subseteq G$, are called Local Worlds (LWs).

    To model this system, some variables were introduced, as listed in Table 1.

    Table 1.  Variables.
    Notation : Description
    $j_i$ : The $j$th agent in $LW_i$
    $x$ : Resource
    $a_v$ : Strategy space
    $A$ : Strategy space collection
    $I$ : Set of all agents
    $r$ : Game radius
    $a_r$ : The most effective strategy within the game radius $r$
    $g$ : Topological configuration of the system
    $\beta$ : Environment noise
    $N$ : Population of agents in the system
    $\phi^{j_i}(t,x_t^{j_i})$ : The best strategy vector of agent $j_i$ for a specific pure strategy at time $t$ and state $x_t$
    $\pi^{j_i}$ : The income of agent $j_i$
    $q_1,q_2,q_3$ : Probabilities of an agent choosing to adjust strategy, to create a new interaction in the same LW, and to create a new interaction with agents in other LWs, respectively
    $q_4,q_5,q_6$ : Probabilities of an agent choosing to delete a present interaction, to create a game relationship with a new agent entering the system, and to retreat from the system, respectively
    $b^{j_i,\beta}(\cdot\,|\,\omega)$ : Probability of an agent changing his/her behavior
    $w^{j_i,\beta}_{l_k,(\text{sub-process }m)}$ : Probability of agent $j_i$ creating a new interaction with agent $l_k$ in sub-process $m$, $m=2,3,5$
    $v^{j_i}_{k_i,(\text{sub-process }m)}$ : Probability of agent $j_i$ deleting the present interaction with agent $k_i$ in sub-process $m$, $m=4,6$
    $\overline N_{j_i}$ : The set of neighbors of agent $j_i$
    $N_{j_i}$ : The set consisting of agent $j_i$ and his/her neighbors
    $\varsigma$ : Noise of the agent's behavior
    $\omega=(\alpha,g)$ : System state determined by the behaviors $\alpha$ and the system configuration $g$
    $\lambda^{j_i}_{(\text{sub-process }m)}$ : The rate function in sub-process $m$, $m=2,3,4,5,6$
    $\tau$ : Agents' profile structure
    $(Y^\beta(t))_{t\ge0}$ : Stochastic process of the economic and management complex adaptive system
    $\Gamma^\beta$ : The co-evolution model of the complex adaptive system with behavior and topological configuration, $\Gamma^\beta=\langle(G,A,\pi),(\Omega,\mathcal F,P,(X_t^\beta)_{t\ge T_0})\rangle_{\beta\in\mathbb R_+}$
    $P:\mathcal F\to[0,1]$ : Measurement
    $(J_n)_{n\in\mathbb N_0}$ : Time points at which a certain state occurs
    $S_{n+1}$ : Holding time of a certain state, i.e., $S_{n+1}:=J_{n+1}-J_n$
    $(\eta^{\beta,\tau,N}_{\omega,\omega'})_{\omega,\omega'\in\Omega^N}$ : Probability of transition from state $\omega$ to state $\omega'$
    $C^{\beta,\tau,N}$ : Mechanism of preferential attachment
    $\Xi^{\beta,N}_{j_ik_i}(\tau)$ : Volatile mechanism of the economic and management complex adaptive system


    As mentioned above, at any one time, an arbitrary agent selects one behavior from the six sub-processes with probabilities $(q_1,\dots,q_6)$, respectively. At the next time, he/she can select another behavior. Thus, if we regard the agent's behavior as his/her state, the state satisfies a certain state transition equation. Combining this property with the theory of stochastic processes, this complex adaptive system can be simulated by a stochastic process model, as sketched below. An optimal strategy path must exist for a certain system configuration, but since the latter always changes randomly, the optimal strategies vary accordingly.
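    The following minimal sketch (illustrative Python, not the authors' implementation; the values of $(q_1,\dots,q_6)$ are hypothetical placeholders) shows one such update step, in which each agent independently draws one of the six sub-processes:

```python
# A minimal sketch (not from the paper): each agent draws one of the six
# sub-processes for the current time step with probabilities (q1, ..., q6).
# The probability values below are hypothetical placeholders.
import random

SUB_PROCESSES = [
    "adjust strategy",                        # q1
    "new interaction in same LW",             # q2
    "new interaction with other LW",          # q3
    "delete present interaction",             # q4
    "game with newly entering agent",         # q5
    "retreat from system",                    # q6
]

def step(agents, q):
    """Assign each agent one sub-process for this step; q must sum to 1."""
    assert abs(sum(q) - 1.0) < 1e-9
    return {a: random.choices(SUB_PROCESSES, weights=q, k=1)[0] for a in agents}

q = [0.4, 0.15, 0.1, 0.15, 0.1, 0.1]          # hypothetical (q1, ..., q6)
print(step(["j1", "j2", "j3"], q))
```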

    The agent's irrational behavior will be discussed in the following sections, and then the co-evolutional stochastic process model driven by irrational behavior and the system's configuration will be constructed. The universal approximate analytic solution will be calculated, and two precise solutions will be derived for (i) agent's behavior noise tending to zero and (ii) agents' population tending to infinity.

    An arbitrary agent, $j_i$, changes his/her strategy with probability $q_1(\omega)\in[0,1]$. The probability $b^{j_i,\beta}(\cdot\,|\,\omega)$ of the agent changing his/her behavior in a certain system configuration should satisfy several conditions, specified as follows:

    $b^{j_i}(a_r\,|\,\omega)\triangleq P\left(a_r\in\arg\max_{a_v\in A}\left(\pi^{j_i}((a_v^{j_i},\alpha^{-j_i}),g)+\varepsilon^{j_i}_{a_v}\right)\,\Big|\,\omega\right)$ (1)

    where $a_r$ is the most effective strategy within the game radius $r$, $a_v$ is the strategy space, $A$ is the strategy space collection, $\pi^{j_i}(\cdot)$ is the income of agent $j_i$, $(a_v^{j_i},\alpha^{-j_i})$ is the profile in which agent $j_i$ plays $a_v$ while the other agents keep their strategies, and $\varepsilon$ is noise. Furthermore, this decision relies not only on the neighbors' strategies and the topological structure $g$, but also on the environment noise $\beta$:

    $\lim_{\beta\to0}-\beta\log b^{j_i,\beta}(a\,|\,\omega)=c_1^{j_i}(\omega,((a,\alpha^{-j_i}),g))=v^{(t)}_{j_i}(t,x_t^{j_i},\phi^{j_i}(t,x_t^{j_i}))$ (2)

    where the strategy $\phi^{j_i}(t,x_t^{j_i})$ for time $t$ and state $x_t$ refers to the best strategy vector for a specific pure strategy, $\phi^{j_i}(t,x_t^{j_i})=(\phi^{j_1}(t,x_t^{j_1}),\phi^{j_2}(t,x_t^{j_2}),\dots,\phi^{j_\vartheta}(t,x_t^{j_\vartheta}))^{\mathrm T}$, and $\vartheta$ is the spatial dimension of agent $j_i$.

    Obviously, when an agent selects a strategy $a_r$ from the strategy space $A$, the condition must be satisfied that agent $j_i$ obtains a relatively higher payoff with maximum probability when selecting the new strategy. This probability can be rewritten as $\exp\left[-\frac{1}{\beta}\left(v^{(t)}_{j_i}(t,x_t^{j_i},\phi^{j_i}(t,x_t^{j_i}))+o(1)\right)\right]$.
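    The choice rule behind Eq (1) and the exponential form above is a logit (softmax) rule with noise level $\beta$. A minimal sketch, with hypothetical payoff values, illustrates how the choice probability concentrates on the argmax as $\beta\to0$:

```python
# A minimal sketch of the logit (softmax) choice rule behind Eq (1):
# each strategy is chosen with probability proportional to exp(payoff / beta).
# The payoff values are hypothetical.
import math

def logit_choice_probs(payoffs, beta):
    """Softmax with noise level beta; beta -> 0 concentrates on the argmax."""
    m = max(payoffs.values())                  # subtract max for stability
    w = {a: math.exp((p - m) / beta) for a, p in payoffs.items()}
    z = sum(w.values())
    return {a: wi / z for a, wi in w.items()}

payoffs = {"a1": 1.0, "a2": 1.2, "a3": 0.7}
for beta in (1.0, 0.1, 0.01):
    print(beta, logit_choice_probs(payoffs, beta))
```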

    Suppose that an arbitrary agent $j_i$ creates a new game with a new agent who is not his/her neighbor with the following probability:

    $w^{j_i,\beta}_{(\text{sub-process }2)}(\omega)\triangleq\left(w^{j_i,\beta}_{k_i,(\text{sub-process }2)}(\omega)\right)_{k_i\in I}=\lambda^{j_i}_{(\text{sub-process }2)}(\omega)\big/\bar\lambda_{(\text{sub-process }2)}(\omega),$

    which relies on the rate $\lambda^{j_i}_{(\text{sub-process }2)}:\Omega\to\mathbb{R}_+$ that satisfies $\lambda^{j_i}(\omega)=0$ whenever $\kappa^{j_i}(\omega)=N_i-1$ (i.e., no new partner is available), and

    $(\forall j_i\in I)(\forall\omega\in\Omega):\ \lambda^{j_i}_{(\text{sub-process }2)}(\omega)=\sum_{k_i\notin\overline N_{j_i}(\omega)}\exp\left(\widetilde W^{(t)}_{\{j_i,k_i\}}(t,x_t^{j_i},x_t^{k_i},\phi^{j_i}(t,x_t^{j_i}),\phi^{k_i}(t,x_t^{k_i}))/\beta\right)$ (3)
    $\bar\lambda^{\beta}_{(\text{sub-process }2)}(\omega)\triangleq\sum_{j_i\in I}\lambda^{j_i,\beta}_{(\text{sub-process }2)}(\omega)=\sum_{j_i}\sum_{k_i>j_i}\exp\left(\widetilde W^{(t)}_{\{j_i,k_i\}}(t,x_t^{j_i},x_t^{k_i},\phi^{j_i}(t,x_t^{j_i}),\phi^{k_i}(t,x_t^{k_i}))/\beta\right)(1-g_{j_ik_i})$ (4)
    $(\forall j_i,k_i\in I):\ w^{k_i,\beta}_{j_i,(\text{sub-process }2)}(\omega)=\hat w^{k_i,\beta}_{j_i,(\text{sub-process }2)}(\alpha)(1-g_{j_ik_i})$

    The probability of agent $j_i$ creating a new game with agent $k_i$ is defined as $\lambda^{j_i}_{(\text{sub-process }2)}(\omega)/\bar\lambda_{(\text{sub-process }2)}(\omega)$, implying that the payoff of the coalition of agent $j_i$ and agent $k_i$ is larger than or equal to any other coalition's payoff affected by the noise $\varsigma^{j_i}=(\varsigma^{j_ik_i})_{k_i\notin\overline N_{j_i}(\omega)}$. In this respect, we get

    $w^{j_i}_{k_i}(\omega)\triangleq P\left(\left(\widetilde W^{(t)}_{\{j_i,k_i\}}(t,x_t^{j_i},x_t^{k_i},\phi^{j_i}(t,x_t^{j_i}),\phi^{k_i}(t,x_t^{k_i}))+\varsigma^{j_ik_i}\right)\ge\left(\widetilde W^{(t)}_{\{j_i,l_i\}}(t,x_t^{j_i},x_t^{l_i},\phi^{j_i}(t,x_t^{j_i}),\phi^{l_i}(t,x_t^{l_i}))+\varsigma^{j_il_i}\right),\ \forall l_i\notin\overline N_{j_i}\,\Big|\,\omega\right)$ (5)

    and

    $(\forall j_i\in I)(\forall\omega\in\Omega):\ \lim_{\beta\to0}\beta\log w^{j_i,\beta}_{k_i,(\text{sub-process }2)}(\omega)=c_2^{j_i}(\omega,(a,g\oplus(j_i,k_i)))=\widetilde W^{(t)}_{\{j_i,k_i\}}(t,x^{(t)}_{\{j_i,k_i\}},\phi^{j_i}(t,x_t^{j_i}),\phi^{k_i}(t,x_t^{k_i}),g\oplus(j_i,k_i))-\max_{l_i\notin\overline N_{j_i}(\omega)}\widetilde W^{(t)}_{\{j_i,l_i\}}(t,x^{(t)}_{\{j_i,l_i\}},\phi^{j_i}(t,x_t^{j_i}),\phi^{l_i}(t,x_t^{l_i}),g\oplus(j_i,l_i))$ (6)

    Thus, the inequality $P\{W_{\{j_i,k_l\}}(t,x_t)\ge W_{\{j_i,\tilde k_l\}}(t,x_t)\}=1$, for all $\tilde k_l\ne k_l$, $\tilde k_l\notin\overline N_{j_i}$, must be satisfied to create a new game relationship with an agent from $N_i\setminus\overline N_{j_i}$ who did not yet interact with agent $j_i$. This complies with the so-called preferential attachment mechanism, implying that agents prefer to select a game partner who can bring them more payoff than the others, so that each agent is selected according to the payoff coupled with the optimal strategy in the corresponding short time interval. This probability is a multi-dimensional logit function, which means there exists a critical point $\widetilde W^{(t),0}_{\{j_i,k_i\}}$ coupled with the agent's payoff in the selection process, such that the probability that a certain agent is selected is far smaller than 0.5 if the agent's payoff is smaller than $\widetilde W^{(t),0}_{\{j_i,k_i\}}$, but far larger than 0.5 and close to 1 if the agent's payoff exceeds $\widetilde W^{(t),0}_{\{j_i,k_i\}}$.
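    The threshold behavior of this multi-dimensional logit can be seen in a small numerical sketch (illustrative Python; the payoff values, and the value 0.5 standing in for $\widetilde W^{(t),0}_{\{j_i,k_i\}}$, are assumptions): with two candidate partners, the selection probability rises steeply through 0.5 as the pair payoff crosses the critical point, the more sharply the smaller $\beta$ is.

```python
# A minimal sketch of payoff-driven preferential attachment: candidate
# partners are weighted by exp(W/beta). With two candidates, the selection
# probability is logistic in the payoff gap and rises steeply through 0.5
# near the critical payoff (here 0.5 plays the role of the threshold).
import math

def attach_probs(pair_payoffs, beta):
    """pair_payoffs: {candidate: pair payoff W}; returns selection probs."""
    m = max(pair_payoffs.values())
    w = {k: math.exp((v - m) / beta) for k, v in pair_payoffs.items()}
    z = sum(w.values())
    return {k: wi / z for k, wi in w.items()}

beta = 0.05
for W in (0.3, 0.45, 0.5, 0.55, 0.7):
    p = attach_probs({"k": W, "l": 0.5}, beta)["k"]
    print(f"W = {W:.2f}  P(select k) = {p:.3f}")
```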

    In sub-process 3, where agent $j_i$ creates a new game with an agent $k_i$ from another local world, the following conditions must be satisfied:

    $(\forall j_i\in I)(\forall\omega\in\Omega):\ \lambda^{j_i,\beta}_{(\text{sub-process }3)}(\omega)=\sum_{k_i\notin\overline N_{j_i}(\omega)}\exp\left(\widetilde W^{(t)}_{\{j_i,k_i\}}(t,x_t^{j_i},x_t^{k_i},\phi^{j_i}(t,x_t^{j_i}),\phi^{k_i}(t,x_t^{k_i}))/\beta\right)$ (7)
    $\bar\lambda^{\beta}_{(\text{sub-process }3)}(\omega)\triangleq\sum_{j_i\in I}\lambda^{j_i,\beta}_{(\text{sub-process }3)}(\omega)$
    $w^{j_i}_{k_j,(\text{sub-process }3)}(\omega)\triangleq P\left((\pi(\alpha^{j_i},\alpha^{k_i})+\varsigma^{j_ik_i})\ge(\pi(\alpha^{j_i},\alpha^{l_i})+\varsigma^{j_il_i}),\ \forall l_i\notin\overline N_{j_i}(\omega)\right)=P\left(\left(\widetilde W^{(t)}_{\{j_i,k_i\}}(t,x_t^{j_i},x_t^{k_i},\phi^{j_i}(t,x_t^{j_i}),\phi^{k_i}(t,x_t^{k_i}))+\varsigma^{j_ik_i}\right)\ge\left(\widetilde W^{(t)}_{\{j_i,l_i\}}(t,x_t^{j_i},x_t^{l_i},\phi^{j_i}(t,x_t^{j_i}),\phi^{l_i}(t,x_t^{l_i}))+\varsigma^{j_il_i}\right),\ \forall l_i\notin\overline N_{j_i}(\omega)\right)$ (8)

    This yields

    $(\forall j_i\in I)(\forall k_j\notin\overline N_{j_i}(\omega)):\ w^{j_i,\beta}_{k_j,(\text{sub-process }3)}(\omega)=\frac{\exp(\pi(\alpha^{j_i},\alpha^{k_j})/\beta)}{\sum_{l_j\notin\overline N_{j_i}}\exp(\pi(\alpha^{j_i},\alpha^{l_j})/\beta)}=\frac{\exp\left(\widetilde W^{(t)}_{\{j_i,k_i\}}(t,x_t^{j_i},x_t^{k_i},\phi^{j_i}(t,x_t^{j_i}),\phi^{k_i}(t,x_t^{k_i}))/\beta\right)}{\sum_{l_i\notin\overline N_{j_i}}\exp\left(\widetilde W^{(t)}_{\{j_i,l_i\}}(t,x_t^{j_i},x_t^{l_i},\phi^{j_i}(t,x_t^{j_i}),\phi^{l_i}(t,x_t^{l_i}))/\beta\right)}$ (9)
    $\lambda^{\beta}_{(\text{sub-process }3)}(\omega\to\hat\omega)=\lambda^{j_i}_{(\text{sub-process }3)}(\omega)\,w^{j_i}_{k_j,(\text{sub-process }3)}(\omega)+\lambda^{k_j}_{(\text{sub-process }3)}(\omega)\,w^{k_j}_{j_i,(\text{sub-process }3)}(\omega)$ (10)
    $\lambda^{\beta}_{(\text{sub-process }3)}(\omega\to\hat\omega)=\exp\left(\widetilde W^{(t)}_{\{j_i,k_i\}}(t,x_t^{j_i},x_t^{k_i},\phi^{j_i}(t,x_t^{j_i}),\phi^{k_i}(t,x_t^{k_i}))/\beta\right)$ (11)

    Assume that an arbitrary link $(j_i,k_i)$ disappears at rate $\xi>0$. That is, if the link is deleted with probability $\xi h+o(h)$ during a small enough time interval $[t,t+h]$, the expected lifetime of the link is $1/\xi$. Therefore, starting from the system state $\omega=(\alpha,g)$, the rate at which the system transits to state $\hat\omega=(\alpha,g\ominus(j_i,k_i))$ must be $\eta^{\beta}_{(\text{sub-process }4)}(\omega\to\hat\omega)=\xi$, and

    $(\forall j_i\in I)(\forall\omega\in\Omega):\ \eta^{j_i,\beta}_{(\text{sub-process }4)}(\omega)=\sum_{k_i\in\overline N_{j_i}(\omega)}\exp\left(\widetilde W^{(t)}_{\{j_i,k_i\}}(t,x_t^{j_i},x_t^{k_i},\phi^{j_i}(t,x_t^{j_i}),\phi^{k_i}(t,x_t^{k_i}))/\beta\right)$ (12)
    $\eta^{\beta}_{(\text{sub-process }4)}(\omega)=\sum_{j_i\in I}\eta^{j_i,\beta}_{(\text{sub-process }4)}(\omega)$ (13)
    $v^{j_i}_{k_i,(\text{sub-process }4)}(\omega)\triangleq P\left((\pi(\alpha^{j_i},\alpha^{k_i})+\varsigma^{j_ik_i})\le(\pi(\alpha^{j_i},\alpha^{l_i})+\varsigma^{j_il_i}),\ \forall l_i\in\overline N_{j_i}(\omega)\right)=P\left(\left(\widetilde W^{(t)}_{\{j_i,k_i\}}(t,x_t^{j_i},x_t^{k_i},\phi^{j_i}(t,x_t^{j_i}),\phi^{k_i}(t,x_t^{k_i}))+\varsigma^{j_ik_i}\right)\le\left(\widetilde W^{(t)}_{\{j_i,l_i\}}(t,x_t^{j_i},x_t^{l_i},\phi^{j_i}(t,x_t^{j_i}),\phi^{l_i}(t,x_t^{l_i}))+\varsigma^{j_il_i}\right),\ \forall l_i\in\overline N_{j_i}(\omega)\right)$ (14)
    $(\forall j_i\in I)(\forall k_j\in\overline N_{j_i}(\omega)):\ v^{j_i,\beta}_{k_j,(\text{sub-process }4)}(\omega)=1-\frac{\exp(\pi(\alpha^{j_i},\alpha^{k_j})/\beta)}{\sum_{l_j\in\overline N_{j_i}}\exp(\pi(\alpha^{j_i},\alpha^{l_j})/\beta)}=1-\frac{\exp\left(\widetilde W^{(t)}_{\{j_i,k_i\}}(t,x_t^{j_i},x_t^{k_i},\phi^{j_i}(t,x_t^{j_i}),\phi^{k_i}(t,x_t^{k_i}))/\beta\right)}{\sum_{l_i\in\overline N_{j_i}}\exp\left(\widetilde W^{(t)}_{\{j_i,l_i\}}(t,x_t^{j_i},x_t^{l_i},\phi^{j_i}(t,x_t^{j_i}),\phi^{l_i}(t,x_t^{l_i}))/\beta\right)}$ (15)
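    A quick simulation check of the link-dissolution assumption above (illustrative Python; the value of $\xi$ is arbitrary): if a link dies with probability $\xi h+o(h)$ on $[t,t+h]$, its lifetime is exponential with rate $\xi$, so the empirical mean lifetime should be close to $1/\xi$.

```python
# Quick check (illustrative, not the authors' code): a link that dies with
# probability xi*h + o(h) on [t, t+h] has an Exponential(xi) lifetime,
# so the empirical mean lifetime should be close to 1/xi.
import random

xi = 0.25                                          # arbitrary dissolution rate
lifetimes = [random.expovariate(xi) for _ in range(100_000)]
print(sum(lifetimes) / len(lifetimes), "should be close to", 1 / xi)
```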

    When agent $N+1$ enters the complex adaptive system, it enters Local World $i$ with probability $1/m$, is relabeled $N_i+1$, and then creates a game relationship with an arbitrary agent $j_i$ with probability $w^{j_i,\beta}_{N_i+1,(\text{sub-process }5)}$, where:

    $(\forall j_i\in I)(\forall\omega\in\Omega):\ \lambda^{j_i,\beta}_{(\text{sub-process }5)}(\omega)=\exp\left(\widetilde W^{(t)}_{\{j_i,N+1\}}(t,x_t^{j_i},x_t^{N+1},\phi^{j_i}(t,x_t^{j_i}),\phi^{N+1}(t,x_t^{N+1}))/\beta\right)$ (16)
    $w^{j_i}_{N_i+1,(\text{sub-process }5)}(\omega)\triangleq P\left((\pi(\alpha^{j_i},\alpha^{N_i+1})+\varsigma^{j_iN_i+1})\ge(\pi(\alpha^{l_i},\alpha^{N_i+1})+\varsigma^{l_iN_i+1}),\ \forall l_i\in i(\omega)\right)=P\left(\left(\widetilde W^{(t)}_{\{j_i,N+1\}}(t,x_t^{j_i},x_t^{N+1},\phi^{j_i}(t,x_t^{j_i}),\phi^{N+1}(t,x_t^{N+1}))+\varsigma^{j_iN+1}\right)\ge\left(\widetilde W^{(t)}_{\{l_i,N+1\}}(t,x_t^{l_i},x_t^{N+1},\phi^{l_i}(t,x_t^{l_i}),\phi^{N+1}(t,x_t^{N+1}))+\varsigma^{l_iN+1}\right),\ \forall l_i\in i(\omega)\right)$ (17)
    $(\forall j_i\in I)(\forall l_i\in i(\omega),l_i\ne j_i):\ w^{j_i,\beta}_{N_i+1,(\text{sub-process }5)}(\omega)=\frac{\exp(\pi(\alpha^{j_i},\alpha^{N_i+1})/\beta)}{\sum_{l_i\in i(\omega),l_i\ne j_i}\exp(\pi(\alpha^{l_i},\alpha^{N_i+1})/\beta)}$ (18)
    $\lambda^{\beta}_{(\text{sub-process }5)}(\omega\to\hat\omega)=\lambda^{j_i}_{(\text{sub-process }5)}(\omega)\,w^{j_i}_{N+1,(\text{sub-process }5)}(\omega)$ (19)

    Obviously, when an agent is deleted, the links that expressed its game relationships must be deleted.

    $(\forall j_i\in I)(\forall\omega\in\Omega):\ \eta^{j_i,\beta}_{(\text{sub-process }6)}(\omega)=\sum_{k_j\in\overline N_{j_i}(\omega)}\exp(\pi(\alpha^{j_i},\alpha^{k_j})/\beta)=\sum_{k_i\in\overline N_{j_i}(\omega)}\exp\left(\widetilde W^{(t)}_{\{j_i,k_i\}}(t,x_t^{j_i},x_t^{k_i},\phi^{j_i}(t,x_t^{j_i}),\phi^{k_i}(t,x_t^{k_i}))/\beta\right)$ (20)
    $v^{j_i}_{(\text{sub-process }6)}(\omega)\triangleq P\left(\sum_{k_i\in N_{j_i}}(\pi(\alpha^{j_i},\alpha^{k_i})+\varsigma^{j_ik_i})\le\sum_{l_j\in N_{k_j}}(\pi(\alpha^{l_j},\alpha^{k_j})+\varsigma^{l_jk_j})\right)=P\left(\sum_{k_i\in N_{j_i}}\left(\widetilde W^{(t)}_{\{j_i,k_i\}}(t,x_t^{j_i},x_t^{k_i},\phi^{j_i}(t,x_t^{j_i}),\phi^{k_i}(t,x_t^{k_i}))+\varsigma^{j_ik_i}\right)\le\sum_{l_i\in N_{k_j}}\left(\widetilde W^{(t)}_{\{j_i,l_i\}}(t,x_t^{j_i},x_t^{l_i},\phi^{j_i}(t,x_t^{j_i}),\phi^{l_i}(t,x_t^{l_i}))+\varsigma^{j_il_i}\right)\right)$ (21)
    $(\forall j_i\in I)(\forall k_i\in N_{j_i})(\forall l_j\in N_{k_j}):\ v^{j_i,\beta}_{(\text{sub-process }6)}(\omega)=1-\frac{\sum_{k_i\in N_{j_i}}\exp(\pi(\alpha^{j_i},\alpha^{k_i})/\beta)}{\sum_{k_j\in N_{j_i}}\sum_{l_j\in N_{k_j}}\exp(\pi(\alpha^{l_j},\alpha^{k_j})/\beta)}$ (22)

    The system evolves according to a state transition equation matched with the six sub-processes, which forms a stochastic process, a weak Markov process. By analyzing this process, the invariable distribution can be determined.

    The complex adaptive system was analyzed and re-described as follows. Suppose that there are several kinds of inhomogeneous agents in a system, whose profile structure is denoted by $\tau$, and suppose that the system state $\omega=(a,g)$ consists of the agents' behaviors and the agents' local topological configuration, where $\beta$ is the behavior noise; the environment parameter $N$ above is the total number of agents in the system. For an arbitrary state $\omega=(a,g)$, the mappings $\alpha:\Omega^N\to A^N$ and $\gamma:\Omega^N\to\mathcal G[N]$ are defined to describe the co-evolutionary complex adaptive system with agents' behavior and local topology in detail; i.e., an infinitesimal generator $(\eta^{\beta,\tau,N}_{\omega,\omega'})_{\omega,\omega'\in\Omega^N}$ coupled with the corresponding set of rate functions is generated. Thus, a corresponding stochastic process model is constructed, and the transition rate from state $\omega$ to $\omega'$ is defined as follows:

    $\eta^{\beta,\tau,N}_{\omega,\omega'}=\begin{cases}\dfrac{\exp\left[-\frac{1}{\beta}v^{(t)}_{j_i}(t,x_t^{j_i},\phi^{j_i}_t(t,x_t^{j_i}))\right]}{\sum_{k_i\in N_i}\exp\left[-\frac{1}{\beta}v^{(t)}_{k_i}(t,x_t^{k_i},\phi^{k_i}_t(t,x_t^{k_i}))\right]},&\text{(sub-process 1)}\\[2ex]\dfrac{\exp\left[\widetilde W^{(t)}_{\{j_i,k_i\}}(\cdot,g\oplus(j_i,k_i))/\beta\right]}{\sum_{l_i\notin\overline N_{j_i}(\omega)}\exp\left[\widetilde W^{(t)}_{\{j_i,l_i\}}(\cdot,g\oplus(j_i,l_i))/\beta\right]},&\text{(sub-process 2)}\\[2ex]\dfrac{\exp\left[\widetilde W^{(t)}_{\{j_i,k_i\}}(\cdot,g\oplus(j_i,k_i))/\beta\right]}{\sum_{l_i\notin\overline N_{j_i}(\omega)}\exp\left[\widetilde W^{(t)}_{\{j_i,l_i\}}(\cdot,g\oplus(j_i,l_i))/\beta\right]},&\text{(sub-process 3)}\\[2ex]1-\dfrac{\exp\left[\widetilde W^{(t)}_{\{j_i,k_i\}}(\cdot,g\ominus(j_i,k_i))/\beta\right]}{\sum_{l_i\in\overline N_{j_i}(\omega)}\exp\left[\widetilde W^{(t)}_{\{j_i,l_i\}}(\cdot,g\ominus(j_i,l_i))/\beta\right]},&\text{(sub-process 4)}\\[2ex]\dfrac{\exp\left[\widetilde W^{(t)}_{\{j_i,N+1\}}(\cdot,g\oplus(j_i,N+1))/\beta\right]}{\sum_{l_i\in i(\omega),l_i\ne j_i}\exp\left[\widetilde W^{(t)}_{\{l_i,N+1\}}(\cdot,g\oplus(l_i,N+1))/\beta\right]},&\text{(sub-process 5)}\\[2ex]1-\dfrac{\sum_{k_i\in N_{j_i}}\exp\left[\widetilde W^{(t)}_{\{j_i,k_i\}}(\cdot,g\ominus\overline N_{j_i})/\beta\right]}{\sum_{k_i\in N_{j_i}}\sum_{l_i\in N_{k_i}}\exp\left[\widetilde W^{(t)}_{\{j_i,l_i\}}(\cdot,g\ominus\overline N_{j_i})/\beta\right]},&\text{(sub-process 6)}\end{cases}$

    where $\overline N_{j_i}(\omega)$ is the set of neighbors of agent $j_i$, whose payment over the corresponding small time scale is $\widetilde W^{(t)}_{\{j_i,k_i\}}\triangleq\widetilde W^{(t)}_{\{j_i,k_i\}}(t,x^t_{\{j_i,k_i\}},\phi^{j_i}_t(t,x_t^{j_i}),\phi^{k_i}_t(t,x_t^{k_i}))$. The above transition equation describes the six sub-processes; i.e., $\omega'$ takes one of six forms: $((a',\alpha^{-j_i}(\omega)),\gamma(\omega))$, $(\alpha(\omega),\gamma(\omega)\oplus(j_i,k_i))$ within the same LW, $(\alpha(\omega),\gamma(\omega)\oplus(j_i,k_i))$ across LWs, $(\alpha(\omega),\gamma(\omega)\ominus(j_i,k_i))$, $(\alpha(\omega),\gamma(\omega)\oplus(j_i,N+1))$, and $(\alpha(\omega),\gamma(\omega)\ominus\overline N_{j_i})$.

    Thus, the process $(Y^\beta(t))_{t\ge0}$ can be described using the time scales $\tau_\alpha$ and $\tau_g$ for behavior changes and topological configuration changes, respectively. The ratio $\tau\triangleq\tau_g/\tau_\alpha$ controls the relative speed of topological change with respect to behavior change.

    In essence, all the processes involved should be regarded as different Poisson processes, that is, counting processes. As mentioned above, the co-evolution model $\Gamma^\beta=\langle(G,A,\pi),(\Omega,\mathcal F,P,(X_t^\beta)_{t\ge T_0})\rangle_{\beta\in\mathbb R_+}$ of a complex adaptive system with behavior and topological configuration consists of the following information: the agents' behavior, $\alpha$, and the graph topological configuration, $g$, which make up the finite state space $\Omega=A^I\times\mathcal G[I]$; the measurement $P:\mathcal F\to[0,1]$ of the state transition probability, such that the randomly changing system is measurable; and the Markov renewal process $(Y^\beta(t))_{t\ge0}$, which depends on two parameters, continuous time ($t\ge0$) and noise ($\beta\ge0$). Thus, for an arbitrary sequence $0\le t_0\le t_1\le\dots\le t_k\le t$, the stochastic process $(Y^\beta(t))_{t\ge0}$ is controlled by the random variables generated by $J_0=0$ and, for $n\ge1$, $J_n:=\inf\{t\ge J_{n-1}:Y^\beta(t)\ne Y^\beta(J_{n-1})\}$, whose statistical properties can be described via the jump times $(J_n)_{n\in\mathbb N_0}$ and the holding times $S_{n+1}\triangleq J_{n+1}-J_n$.

    As analyzed above, the distribution of the agents' states is what matters most for decision-making. In this study, the process is described by the number of jumps $\{J_n\}_{n\ge0}$; the properties of this sample chain determine the system's properties. For an arbitrary graph $g$ with behavior configuration $a\in A^N$, suppose that $J_0=0$ and $X^{\beta,\tau,N}(0)=X_0^{\beta,\tau,N}$. To specify how the system transfers from one state to another, several parameters need to be introduced. For all $n\ge1$, set $J_n<\infty$. Furthermore, for all $\omega\in\Omega^N$, define $\eta^{\beta,\tau,N}_{\omega}\triangleq\sum_{\omega'\in\Omega^N\setminus\{\omega\}}\eta^{\beta,\tau,N}_{\omega,\omega'}\in(0,\infty)$ as the rate at which the system leaves state $\omega$. Thus, for all $t\ge0$, the transition probability of the system at state $X^{\beta,\tau,N}(t)$ can be described as follows:

    $P\left(X^{\beta,\tau,N}_{J_{n+1}}=\omega',\ J_{n+1}-J_n>t\,\Big|\,\mathcal F_N\right)=P\left(X^{\beta,\tau,N}_{J_{n+1}}=\omega',\ J_{n+1}-J_n>t\,\Big|\,X^{\beta,\tau,N}_{J_n}\right)=\exp\left(-t\,\eta^{\beta,\tau,N}_{\omega}\right)\frac{\eta^{\beta,\tau,N}_{\omega,\omega'}}{\eta^{\beta,\tau,N}_{\omega}}.$
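    Operationally, this transition law says: hold in state $\omega$ for an exponentially distributed time with rate $\eta^{\beta,\tau,N}_{\omega}$, then jump to $\omega'$ with probability $\eta^{\beta,\tau,N}_{\omega,\omega'}/\eta^{\beta,\tau,N}_{\omega}$. A minimal Gillespie-style sketch (illustrative Python with toy rates, not the authors' code):

```python
# A minimal sketch of simulating the jump chain (J_n, X_n) from rates
# eta[w][w']: the holding time in state w is Exponential(eta_w) with
# eta_w = sum of outgoing rates, and the next state is drawn with
# probability eta[w][w'] / eta_w, exactly as in the transition law above.
import random

def simulate(eta, w0, t_end):
    t, w, path = 0.0, w0, [(0.0, w0)]
    while True:
        out = eta[w]                               # outgoing rates from w
        eta_w = sum(out.values())
        t += random.expovariate(eta_w)             # holding time S_{n+1}
        if t > t_end:
            return path
        states, rates = zip(*out.items())
        w = random.choices(states, weights=rates, k=1)[0]
        path.append((t, w))

eta = {"w1": {"w2": 1.0, "w3": 0.5}, "w2": {"w1": 0.3}, "w3": {"w1": 2.0}}
print(simulate(eta, "w1", 5.0))                    # toy rates only
```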

    Thus, the invariable distribution of the agents' behavior relies on the parameters $\eta^{\beta,\tau,N}_{\omega}$ and $\eta^{\beta,\tau,N}_{\omega,\omega'}$. From the operator $\eta^{\beta,\tau,N}_{\omega,\omega'}$, a random graph process $G^{\beta,\tau,N}=\{G^{\beta,\tau,N}(t)\}_{t\ge0}$, a branching process, can be constructed. To specify this process, the birth of the link $(j_i,k_i)$ is governed by a deterministic scalar

    $A^{\beta,N}_{j_ik_i}(a,\tau):=f\left(\widetilde W^{\beta,N}_{j_ik_i,(\text{sub-process }2)}(a,\tau),\widetilde W^{\beta,N}_{j_ik_i,(\text{sub-process }3)}(a,\tau),\widetilde W^{\beta,N}_{j_i,N+1,(\text{sub-process }5)}(a,\tau)\right)$

    Moreover, the death case is governed by another deterministic scalar

    $M^{\beta,N}_{j_ik_i}(\tau)=f\left(\widetilde W^{\beta,N}_{j_ik_i,(\text{sub-process }4)}(a,\tau),\widetilde W^{\beta,N}_{j_i,\overline N_{j_i},(\text{sub-process }6)}(a,\tau)\right)$

    The evolution of these two parameters can be determined via the following two theorems, whose proofs will be given in the next section.

    Theorem 1. There exist invariable distributions of complex adaptive systems with preferential attachment $C^{\beta,\tau,N}$ and volatile mechanism $\Xi^{\beta,N}_{j_ik_i}(\tau)$ for the inhomogeneous random graph process $G^{\beta,\tau,N}$, if $\mu^{\beta,\tau,N}(\omega\,|\,\Omega_a^N)=\prod_{j_i=1}^{N_i}\prod_{k_i>j_i}p^{\beta,N}_{j_ik_i}(a,\tau)^{\gamma_{j_ik_i}(\omega)}\left(1-p^{\beta,N}_{j_ik_i}(a,\tau)\right)^{1-\gamma_{j_ik_i}(\omega)}$.

    The interaction probability of the model $G\left[N,(p^{\beta,\tau,N}_{j_ik_i}(a))_{k_i>j_i}\right]$ can be written as

    $p^{\beta,N}_{j_ik_i}(a,\tau)=\frac{A^{\beta,N}_{j_ik_i}(a,\tau)}{A^{\beta,N}_{j_ik_i}(a,\tau)+M^{\beta,N}_{j_ik_i}(\tau)}=\frac{\widetilde W^{\beta,N}_{j_ik_i,(\text{sub-process }2)}(a,\tau)+\widetilde W^{\beta,N}_{j_ik_i,(\text{sub-process }3)}(a,\tau)+\widetilde W^{\beta,N}_{j_i,N+1,(\text{sub-process }5)}(a,\tau)}{\widetilde W^{\beta,N}_{j_ik_i,(\text{sub-process }2)}(a,\tau)+\widetilde W^{\beta,N}_{j_ik_i,(\text{sub-process }3)}(a,\tau)+\widetilde W^{\beta,N}_{j_ik_i,(\text{sub-process }4)}(a,\tau)+\widetilde W^{\beta,N}_{j_i,N+1,(\text{sub-process }5)}(a,\tau)+\widetilde W^{\beta,N}_{j_i,\overline N_{j_i},(\text{sub-process }6)}(a,\tau)}$

    where $A^{\beta,N}_{j_ik_i}(a,\tau):=f\left(\widetilde W^{\beta,N}_{j_ik_i,(\text{sub-process }2)}(a,\tau),\widetilde W^{\beta,N}_{j_ik_i,(\text{sub-process }3)}(a,\tau),\widetilde W^{\beta,N}_{j_i,N+1,(\text{sub-process }5)}(a,\tau)\right)$ and $M^{\beta,N}_{j_ik_i}(\tau)=f\left(\widetilde W^{\beta,N}_{j_ik_i,(\text{sub-process }4)}(a,\tau),\widetilde W^{\beta,N}_{j_i,\overline N_{j_i},(\text{sub-process }6)}(a,\tau)\right)$.
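    Given the aggregated birth and death weights, Theorem 1 says the invariable graph law is a product of independent Bernoulli links with $p=A/(A+M)$. A minimal sampling sketch (illustrative Python; the weight values are hypothetical):

```python
# A minimal sketch of the invariable graph law in Theorem 1: each link
# (ji, ki) is an independent Bernoulli(p) draw with p = A / (A + M), where
# A aggregates the birth weights (sub-processes 2, 3, 5) and M the death
# weights (4, 6). The weight values below are hypothetical.
import itertools
import random

def link_prob(A, M):
    return A / (A + M)

def sample_graph(n, birth, death):
    """birth[i][j], death[i][j]: aggregated weights for pair (i, j)."""
    g = set()
    for i, j in itertools.combinations(range(n), 2):
        if random.random() < link_prob(birth[i][j], death[i][j]):
            g.add((i, j))
    return g

n = 4
birth = [[1.5] * n for _ in range(n)]   # sum of W(2), W(3), W(5), illustrative
death = [[1.0] * n for _ in range(n)]   # sum of W(4), W(6), illustrative
print(sample_graph(n, birth, death))    # each link present w.p. 1.5/2.5 = 0.6
```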

    If an arbitrary cross-section is taken from the strategy space, the link probability on this cross-section can be determined. Thus, $p^{\beta,N}_{j_ik_i}:=(p^{\beta,N}_{j_ik_i}(a,b))_{(a,b)\in A^2}\in\mathbb R^{n\times n}_+$ and $p^{\beta,N}:=(p^{\beta,N}_{j_ik_i})_{1\le j_i,k_i\le K}$ denote the profile of the interactivity of the agents in the complex adaptive system. Reconsidering the behavior coupled with the payoff over the corresponding short time scale, the detailed strategies of the system can be determined. Furthermore, whether the invariable distribution can be described as a function of the initial state is determined via Theorem 2.

    Theorem 2. The invariable distribution of the Markov process $\{X^{\beta,\tau,N}(t)\}_{t\ge0}$ is a Gibbs measurement $\mu^{\beta,\tau,N}(\omega)=\frac{\exp(\beta^{-1}H^{\beta,N}(\omega,\tau))}{\sum_{\omega'\in\Omega^N}\exp(\beta^{-1}H^{\beta,N}(\omega',\tau))}=\frac{\mu_0^{\beta,\tau,N}(\omega)\exp(\beta^{-1}V(\omega,\tau))}{\sum_{\omega'\in\Omega^N}\mu_0^{\beta,\tau,N}(\omega')\exp(\beta^{-1}V(\omega',\tau))}$, where, for all $\omega\in\Omega^N$, $H^{\beta,N}(\omega,\tau)\triangleq V(\omega,\tau)+\beta\log\mu_0^{\beta,\tau,N}(\omega)$ and $\mu_0^{\beta,\tau,N}(\omega)\triangleq\prod_{j_i=1,k_i>j_i}^{N}\left(\frac{2}{NM^N}\right)$.
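    The two expressions in Theorem 2 describe the same measure, since $\exp(\beta^{-1}H)=\mu_0\exp(\beta^{-1}V)$ when $H=V+\beta\log\mu_0$. A toy numerical check (illustrative Python; the values of $V$ and $\mu_0$ are hypothetical):

```python
# A minimal sketch of the Gibbs form in Theorem 2 on a toy state space:
# mu(w) is proportional to exp(H(w)/beta) with H = V + beta*log(mu0), which
# equals mu0(w)*exp(V(w)/beta) after normalization. V and mu0 are hypothetical.
import math

V = {"w1": 1.0, "w2": 0.8, "w3": 0.2}       # potential of each state
mu0 = {"w1": 0.2, "w2": 0.5, "w3": 0.3}     # graph-dependent prior factor
beta = 0.5

H = {w: V[w] + beta * math.log(mu0[w]) for w in V}
z1 = sum(math.exp(H[w] / beta) for w in V)
z2 = sum(mu0[w] * math.exp(V[w] / beta) for w in V)
for w in V:   # both columns agree, confirming the two forms coincide
    print(w, math.exp(H[w] / beta) / z1, mu0[w] * math.exp(V[w] / beta) / z2)
```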

    For a better perception of the logical structure and the sequential nature of the proposed mathematical derivations, a flowchart providing a visual overview of the progression and interrelationships between various analytical steps undertaken in this work is plotted in Figure 1. This shows the process development from the initial assumptions and conditions to the conclusion, highlighting key decision points and derivation milestones.

    Figure 1.  The proposed derivation process flowchart.

    The above process can be described via the jump times $\{J_n\}_{n\ge0}$ as in [47]. First, suppose that the jump times comprise a sample chain $X^{\beta,\tau,N}(J_n)=X_n^{\beta,\tau,N}$, and set $\mathcal F_n=\sigma(\{J_0,J_1,\dots,J_n\},\{X_0^{\beta,\tau,N},X_1^{\beta,\tau,N},\dots,X_n^{\beta,\tau,N}\})$, $n\ge0$. For an arbitrary graph $g$ with behavior configuration $a\in A^N$, suppose that $J_0=0$ and $X^{\beta,\tau,N}(0)=X_0^{\beta,\tau,N}$. To specify how the system transfers from one state to another, several parameters have to be introduced. For all $n\ge1$, set $J_n<\infty$; furthermore, for all $\omega\in\Omega^N$, define

    $\eta^{\beta,\tau,N}_{\omega}\triangleq\sum_{\omega'\in\Omega^N\setminus\{\omega\}}\eta^{\beta,\tau,N}_{\omega,\omega'}\in(0,\infty)$ (23)

    as the rate at which the system leaves state $\omega$. Therefore, for all $t\ge0$, the transition probability of the system at state $X^{\beta,\tau,N}(t)$ can be described as

    $P\left(X^{\beta,\tau,N}_{J_{n+1}}=\omega',\ J_{n+1}-J_n>t\,\Big|\,\mathcal F_N\right)=P\left(X^{\beta,\tau,N}_{J_{n+1}}=\omega',\ J_{n+1}-J_n>t\,\Big|\,X^{\beta,\tau,N}_{J_n}\right)=\exp\left(-t\,\eta^{\beta,\tau,N}_{\omega}\right)\frac{\eta^{\beta,\tau,N}_{\omega,\omega'}}{\eta^{\beta,\tau,N}_{\omega}}$ (24)

    where $\eta^{\beta,\tau,N}_{\omega}$ and $\eta^{\beta,\tau,N}_{\omega,\omega'}$ were defined above.

    As seen from the above, the agent's historical behavior, the behaviors of others, and the interaction between the environment and the system are crucial for choosing a rational strategy, and these factors should be captured by the complex adaptive system. The environment in which an agent operates comprises the external environment plus the behaviors and payoffs of his/her neighbors. Moreover, the history of an agent's behaviors should be considered when selecting a strategy at an arbitrary time. Furthermore, the behavior of one agent is affected by his/her neighbors, which means that the noises of the agents' behaviors are superimposed; the total noise may reach a critical value in the process of system development, and an inhomogeneous environment can occur when a larger or more complex noise exceeds this critical value. Thus, two cases (namely, (i) growth, controlled by sub-processes 2, 3, and 5, and (ii) decay, controlled by sub-processes 4 and 6) are introduced to describe the system evolution. Insofar as these sub-processes are independent, they can be superimposed via the addition theorem for Poisson processes.

    Using the operator $\eta^{\beta,\tau,N}_{\omega,\omega'}$, a random graph process $G^{\beta,\tau,N}=\{G^{\beta,\tau,N}(t)\}_{t\ge0}$, which is a branching process, can be constructed. To specify this process, the birth of the link $(j_i,k_i)$ is governed by the deterministic scalar $A^{\beta,N}_{j_ik_i}(a,\tau):=f(\widetilde W^{\beta,N}_{j_ik_i,(\text{sub-process }2)}(a,\tau),\widetilde W^{\beta,N}_{j_ik_i,(\text{sub-process }3)}(a,\tau),\widetilde W^{\beta,N}_{j_i,N+1,(\text{sub-process }5)}(a,\tau))$, while the death case is governed by the deterministic scalar $M^{\beta,N}_{j_ik_i}(\tau)=f(\widetilde W^{\beta,N}_{j_ik_i,(\text{sub-process }4)}(a,\tau),\widetilde W^{\beta,N}_{j_i,\overline N_{j_i},(\text{sub-process }6)}(a,\tau))$. Obviously, these two scalars express the growth and decay mechanisms introduced above. Suppose that $f(x)$ is linear and equal to the occurrence probabilities of the six sub-processes, which are integrated into one pure process. Furthermore, because the co-evolutionary process constructed here is a branching process, i.e., a special Markov process, there must exist an invariable distribution in the development of the complex adaptive system [47].

    Reconsidering the complex adaptive system, each agent selects one kind of behavior from the six sub-processes with a certain probability, which is affected by the behaviors of his/her neighbors. In other words, agents in this system are inhomogeneous; each can select one behavior randomly, and thus change both their behavior and their property. Invoking the agent property configuration $\tau=(\tau_{1_1},\dots,\tau_{n_1},\dots,\tau_{1_m},\dots,\tau_{n_m})$, it can be seen that an agent changes his/her property by changing his/her behavior. A random variable, $\Theta=\{\theta_1,\dots,\theta_\vartheta\}$, is introduced to express the fact that an agent changes his/her property randomly. Similarly, the behaviors of the agents in the system can be denoted by a vector $a=(a_{1_1},\dots,a_{n_1},\dots,a_{1_m},\dots,a_{n_m})$. We call agents $j_i$, $j_l$, $k_i$, and $k_l$ homogeneous if $\tau_{j_i}=\theta_{j_l}$ and $\tau_{k_i}=\theta_{k_l}$ are satisfied. Furthermore, the distribution law of these agents' behaviors can be obtained from the distribution of $(\tau,a)$. The corresponding result, Theorem 1, and its proof are given as follows.

    Definition 3. A scalar $C^{\beta,\tau,N}$ satisfying an exponential form is called the preferential attachment mechanism of the complex adaptive system if $C^{\beta,N}_{j_ik_l}(a,\tau)=\frac{2}{N}\exp(v(a_{j_i},a_{k_l})/\beta)$.

    Definition 4. The admissible volatile mechanism $\Xi^{\beta,N}_{j_ik_i}(\tau)$ is half-anonymous if $\xi^{\beta,N}_{j_ik_i}(\tau)=\xi^{\beta,N}_{j_lk_l}(\tau)$ holds whenever $\tau_{j_i}=\theta_{j_l}$ and $\tau_{k_i}=\theta_{k_l}$, for all $N\ge2$, $\tau\in\Theta^N$, and all agents $j_i,k_l\in N$.

    Proof of Theorem 1. For the equilibrium condition considered, the invariable distribution must satisfy the following detailed balance condition for all $\omega,\omega'\in\Omega^N_a$:

    $\mu^{\beta,\tau,N}(\omega\,|\,\Omega_a^N)\,\eta^{\beta,\tau,N}_{\omega,\omega'}=\mu^{\beta,\tau,N}(\omega'\,|\,\Omega_a^N)\,\eta^{\beta,\tau,N}_{\omega',\omega}$ (25)

    Upon normalization with a constant $Z^{\beta,\tau,N}(a)$, the above equation has the unique solution

    $\mu^{\beta,\tau,N}(\omega\,|\,\Omega_a^N)=Z^{\beta,\tau,N}(a)\prod_{j_i=1}^{N}\prod_{k_i>j_i}\left(\frac{A^{\beta,N}_{j_ik_i}(a,\tau)}{M^{\beta,N}_{j_ik_i}(\tau)}\right)^{\gamma_{j_ik_i}(\omega)}=Z^{\beta,\tau,N}(a)\prod_{j_i=1}^{N}\prod_{k_i>j_i}\left(\frac{\widetilde W^{\beta,N}_{j_ik_i,(\text{sub-process }2)}(a,\tau)+\widetilde W^{\beta,N}_{j_ik_i,(\text{sub-process }3)}(a,\tau)+\widetilde W^{\beta,N}_{j_i,N+1,(\text{sub-process }5)}(a,\tau)}{\widetilde W^{\beta,N}_{j_ik_i,(\text{sub-process }4)}(a,\tau)+\widetilde W^{\beta,N}_{j_i,\overline N_{j_i},(\text{sub-process }6)}(a,\tau)}\right)^{\gamma_{j_ik_i}(\omega)}$ (26)

    For all $j_i=1_1,2_1,\dots,N_1,1_2,\dots,N_2,\dots,N$ and $k_i>j_i$, define

    $\chi^{\beta,N}_{j_ik_i}(a,\tau)\triangleq\log\left(\frac{A^{\beta,N}_{j_ik_i}(a,\tau)}{M^{\beta,N}_{j_ik_i}(\tau)}\right)=\log\left(\frac{\widetilde W^{\beta,N}_{j_ik_i,(\text{sub-process }2)}(a,\tau)+\widetilde W^{\beta,N}_{j_ik_i,(\text{sub-process }3)}(a,\tau)+\widetilde W^{\beta,N}_{j_i,N+1,(\text{sub-process }5)}(a,\tau)}{\widetilde W^{\beta,N}_{j_ik_i,(\text{sub-process }4)}(a,\tau)+\widetilde W^{\beta,N}_{j_i,\overline N_{j_i},(\text{sub-process }6)}(a,\tau)}\right)$ (27)

    and define the following function on $\Omega^N_a$:

    $H_0(\omega\,|\,\Omega_a^N)=\sum_{j_i=1}^{N}\sum_{k_i>j_i}\chi^{\beta,N}_{j_ik_i}(a,\tau)\,\gamma_{j_ik_i}(\omega)$ (28)

    Substituting the above parameters into (26), the invariable distribution can be written as:

    $\mu^{\beta,\tau,N}(\omega\,|\,\Omega_a^N)=\frac{\exp H_0(\omega\,|\,\Omega_a^N)}{\sum_{\omega'\in\Omega_a^N}\exp H_0(\omega'\,|\,\Omega_a^N)}$ (29)

    Setting $p^{\beta,N}_{j_ik_i}(a,\tau)=\frac{A^{\beta,N}_{j_ik_i}(a,\tau)}{A^{\beta,N}_{j_ik_i}(a,\tau)+M^{\beta,N}_{j_ik_i}(\tau)}$, the denominator of formula (29) can be derived as

    $\sum_{\omega\in\Omega_a^N}\exp H_0(\omega\,|\,\Omega_a^N)=\sum_{\omega\in\Omega_a^N}\prod_{j_i=1}^{N}\prod_{k_i>j_i}\left(\exp\chi^{\beta,N}_{j_ik_i}(a,\tau)\right)^{\gamma_{j_ik_i}(\omega)}=\prod_{j_i=1}^{N}\prod_{k_i>j_i}\left(1+\exp\chi^{\beta,N}_{j_ik_i}(a,\tau)\right)=\prod_{j_i=1}^{N}\prod_{k_i>j_i}\left(1+\frac{A^{\beta,N}_{j_ik_i}(a,\tau)}{M^{\beta,N}_{j_ik_i}(\tau)}\right)=\prod_{j_i=1}^{N}\prod_{k_i>j_i}\left(1-p^{\beta,N}_{j_ik_i}(a,\tau)\right)^{-1}$ (30)

    Furthermore, for all $\omega\in\Omega_a^N$, we have

    $\exp H_0(\omega\,|\,\Omega_a^N)=\prod_{j_i=1}^{N}\prod_{k_i>j_i}\left(\frac{p^{\beta,N}_{j_ik_i}(a,\tau)}{1-p^{\beta,N}_{j_ik_i}(a,\tau)}\right)^{\gamma_{j_ik_i}(\omega)}$ (31)

    Combining (30) and (31), the measurement of $\omega\in\Omega_a^N$ can be obtained directly:

    $\mu^{\beta,\tau,N}(\omega\,|\,\Omega_a^N)=\prod_{j_i=1}^{N_i}\prod_{k_i>j_i}p^{\beta,N}_{j_ik_i}(a,\tau)^{\gamma_{j_ik_i}(\omega)}\left(1-p^{\beta,N}_{j_ik_i}(a,\tau)\right)^{1-\gamma_{j_ik_i}(\omega)}$ (32)

    Thus, Theorem 1 is proven: QED.

    Due to the property of the invariable distribution, the following detailed balance relation should be noted:

    $\mu^{\beta,\tau,N}(\omega\,|\,\Omega_a^N)\,\eta^{\beta,\tau,N}_{\omega,\omega'}=\mu^{\beta,\tau,N}(\omega'\,|\,\Omega_a^N)\,\eta^{\beta,\tau,N}_{\omega',\omega}$ (33)

    Using the recursion method, this invariable distribution can be expressed in a form involving a certain initial distribution. To do this, consider the function $H_0:\Omega_a^N\to\mathbb R$:

    $H_0(\omega\,|\,\Omega_a^N):=\sum_{j_i=1}^{N}\sum_{k_i>j_i}x^{\beta,N}_{j_ik_i}(a,\tau)\,\gamma_{j_ik_i}(\omega),$ (34)

    where

    $x^{\beta,N}_{j_ik_i}(a,\tau):=\log\left(\frac{A^{\beta,N}_{j_ik_i}(a,\tau)}{M^{\beta,N}_{j_ik_i}(\tau)}\right)$ (35)

    It is concluded that, when the time scale is relatively large, the behavior of an arbitrary agent interacting with others in the system must satisfy the distribution over states, composed of the agents' behavior and the agents' local topological structure, that arises in the evolutionary process of the complex adaptive system; that is, the system reaches the corresponding measure coupled with the invariable distribution. Because the system state consists of the agents' behavior and their local topological configuration, the distribution of the corresponding optimal strategy coupled with a constant graph topology can be obtained by analyzing the invariable distribution of the system's states. This reveals which strategies of an agent are most probable over a certain small time scale and how long these strategies hold during the evolution of the complex adaptive system.

    Note 1. The preferential attachment mechanism is what makes the complex adaptive system operate. Furthermore, the half-anonymous volatile mechanism is separable, which means the system's character can be obtained more easily. Consider the likelihood ratio function:

    $\frac{p^{\beta,N}_{j_ik_i}(a)}{1-p^{\beta,N}_{j_ik_i}(a)}=\frac{A^{\beta,N}_{j_ik_i}(a,\tau)}{M^{\beta,N}_{j_ik_i}(\tau)}$

    For all $1\le j_i,k_i\le K$ and $a,b\in A$, define the scalar

    $\varphi^{\beta,N}_{j_ik_i}(a,b)\triangleq\frac{2\beta\exp(v(a,b)/\beta)}{A^{\beta,N}_{j_ik_i}(a,\tau)+M^{\beta,N}_{j_ik_i}(\tau)}$ (36)

    and the symmetric matrices

    $\varphi^{\beta,N}_{j_ik_i}\triangleq\left(\varphi^{\beta,N}_{j_ik_i}(a,b)\right)_{(a,b)\in A^2},\quad\varphi^{\beta,N}\triangleq\left(\varphi^{\beta,N}_{j_ik_i}\right)_{1\le j_i,k_i\le K}$ (37)

    Therefore, when considering an arbitrary agent $j_i$ with strategy $a$ and agent $k_i$ with strategy $b$, the probability of interactivity between them can be written as:

    $p^{\beta,N}_{j_ik_i}(a,b)=\frac{\varphi^{\beta,N}_{j_ik_i}(a,b)}{N\beta+\varphi^{\beta,N}_{j_ik_i}(a,b)}$ (38)

    Equation (38) describes the case where agent $j_i$ interacts with agent $k_i$ with a certain probability under some noise. Since the parameters $a$ and $b$ are arbitrary, this probability can be characterized on an arbitrary cross-section of the strategy space. That is, if an arbitrary cross-section is taken from the strategy space, the link probability on this cross-section can be determined. In this sense, the following matrices denote the profile of the interactivity of the agents in the complex adaptive system:

    $p^{\beta,N}_{j_ik_i}\triangleq\left(p^{\beta,N}_{j_ik_i}(a,b)\right)_{(a,b)\in A^2}\in\mathbb R^{n\times n}_+,\quad p^{\beta,N}\triangleq\left(p^{\beta,N}_{j_ik_i}\right)_{1\le j_i,k_i\le K}$ (39)

    Reconsidering the behavior coupled with the payoff in the corresponding short time scale, the detailed strategies of the system can be determined. Furthermore, whether the invariable distribution can be described as a function of the initial state can be determined via Theorem 2. However, to prove Theorem 2, the following Lemma should be invoked first.

    Lemma 1. The jump Markov process $\{X_t^{\beta,\tau,N}\}_{t\ge0}$ with the above infinitesimal generator has the invariable distribution

    $\mu^{\beta,\tau,N}(\omega)=(Z^{\beta,\tau,N})^{-1}\prod_{j_i=1}^{N}\prod_{k_i>j_i}\left(\frac{2\exp(v(\alpha_{j_i}(\omega),\alpha_{k_i}(\omega))/\beta)}{N\,M^{\beta,\tau,N}_{j_ik_i}}\right)^{\gamma_{j_ik_i}(\omega)}\times\exp(\tau_{j_i}(\alpha_{j_i}(\omega))/\beta)$

    where the normalization constant $Z^{\beta,\tau,N}$ is chosen such that Eq (33) holds for all $\omega,\omega'\in\Omega^N_a$ under the equilibrium condition.

    Proof of Lemma 1. For all $\omega,\omega'\in\Omega_a^N$, let $\omega=(a,g)$ and $\omega'=(a,g\oplus(j_i,k_i))$, where $g\oplus(j_i,k_i)$ denotes the graph $g$ with the link $(j_i,k_i)$ added. Thus, we have

    $\frac{\eta^{\beta,\tau,N}_{\omega,\omega'}}{\eta^{\beta,\tau,N}_{\omega',\omega}}=\frac{2\exp(v(a_{j_i},a_{k_i})/\beta)}{N\,M^{\beta,N}_{\tau_{j_i},\tau_{k_i}}}$ (40)

    Consider the ratio $\eta^{\beta,\tau,N}_{\omega,\omega'}/\eta^{\beta,\tau,N}_{\omega',\omega}$. Due to the multiplicative structure, all factors that appear in both measurements cancel except the factor (40). Considering the two states $\omega=(a,g)$ and $\hat\omega=((a',a^{-j_i}),g)$, $a'\in A$, their likelihood ratio can be calculated as follows:

    $\frac{\eta^{\beta,\tau,N}(\omega)}{\eta^{\beta,\tau,N}(\hat\omega)}=\prod_{k_i=1}^{N}\prod_{l_i>k_i}\frac{\left[\frac{2\exp(v(\alpha_{k_i}(\omega),\alpha_{l_i}(\omega))/\beta)}{N M^{\beta,N}_{\tau_{k_i},\tau_{l_i}}}\right]^{\gamma_{k_il_i}(\omega)}\exp(\tau_{k_i}(\alpha_{k_i}(\omega))/\beta)}{\left[\frac{2\exp(v(\alpha_{k_i}(\hat\omega),\alpha_{l_i}(\hat\omega))/\beta)}{N M^{\beta,N}_{\tau_{k_i},\tau_{l_i}}}\right]^{\gamma_{k_il_i}(\hat\omega)}\exp(\tau_{k_i}(\alpha_{k_i}(\hat\omega))/\beta)}$ (41)

    The factors not involving agent $j_i$ are identical in $\omega$ and $\hat\omega$, so their ratio is equal to 1. Multiplying out the remaining factors and considering the symmetry of the payment function, we have:

    $\frac{\eta^{\beta,\tau,N}(\omega)}{\eta^{\beta,\tau,N}(\hat\omega)}=\exp\left(\sum_{k_i=1}^{N}\frac{v(\alpha_{j_i}(\omega),\alpha_{k_i}(\omega))-v(\alpha_{j_i}(\hat\omega),\alpha_{k_i}(\hat\omega))+\tau_{j_i}(\omega)-\tau_{j_i}(\hat\omega)}{\beta}\right)=\frac{\eta^{\beta,\tau,N}_{\omega,\hat\omega}}{\eta^{\beta,\tau,N}_{\hat\omega,\omega}}$ (42)

    Thus, Lemma 1 is proven: QED.

    Proof of Theorem 2. As defined, for all $j_i,k_i\in N$ and $\omega\in\Omega^N_a$, the function

    $x^{\beta,N}_{j_ik_i}(\omega,\tau)\triangleq\log\left(\frac{2\exp(v(\alpha_{j_i}(\omega),\alpha_{k_i}(\omega))/\beta)}{N\,M^{\beta,N}_{\tau_{j_i},\tau_{k_i}}(\tau)}\right)=\frac{1}{\beta}\,v(\alpha_{j_i}(\omega),\alpha_{k_i}(\omega))+\log\left(\frac{2}{N\,M^{\beta,N}_{\tau_{j_i},\tau_{k_i}}(\tau)}\right)$ (43)

    Invoking Lemma 1, the invariable distribution is transformed to:

    $\mu^{\beta,\tau,N}(\omega)=(Z^{\beta,\tau,N})^{-1}\prod_{j_i=1}^{N}\prod_{k_i>j_i}\exp\left(x^{\beta,N}_{j_ik_i}(\omega,\tau)\,\gamma_{j_ik_i}(\omega)+\beta^{-1}\tau_{j_i}(\alpha_{j_i}(\omega))\right)=(Z^{\beta,\tau,N})^{-1}\exp\left[\beta^{-1}\sum_{j_i=1}^{N}\sum_{k_i>j_i}\left(x^{\beta,N}_{j_ik_i}(\omega,\tau)\,\gamma_{j_ik_i}(\omega)+\tau_{j_i}(\alpha_{j_i}(\omega))\right)\right]=(Z^{\beta,\tau,N})^{-1}\exp\left[\beta^{-1}\left(V(\omega,\tau)+\beta\log\mu_0^{\beta,\tau,N}(\omega)\right)\right]=(Z^{\beta,\tau,N})^{-1}\exp\left[\beta^{-1}H^{\beta,N}(\omega,\tau)\right]$ (44)

    Thus, Theorem 2 is proven: QED.

    Insofar as $\mu_0^{\beta,\tau,N}$ is controlled by the decay mechanism $M$, it can be seen from the definition of $\mu_0^{\beta,\tau,N}$ that the probability of emergence of the invariable distribution is large if the decay in the complex adaptive system is stronger; otherwise, the probability is relatively small. Therefore, it is the noise of the agents' behaviors that decides the stability of the complex adaptive system. Furthermore, when the property of an arbitrary agent changes randomly, the invariable distribution becomes more complex, while once the parameters are determined, the invariable distribution is a deterministic one. The invariable distribution of the complex adaptive system relies on two external parameters: the noise of the agents' behavior, $\beta$, and the population of agents in the system, $N$. The following subsections consider what the invariable distribution becomes if these two parameters tend to their respective limits, that is, $\beta\to0$ and $N\to\infty$.

    Since it was concluded that the system's behavior satisfies an exponential distribution with parameter $\eta$, Theorems 1 and 2 specify this parameter. More precisely, $P(X^{\beta,\tau,N}_{J_{n+1}}=\omega',J_{n+1}-J_n>t\,|\,\mathcal F_N)=\exp(-t\,\eta^{\beta,\tau,N}_{\omega})\frac{\eta^{\beta,\tau,N}_{\omega,\omega'}}{\eta^{\beta,\tau,N}_{\omega}}$, where $\eta$ is the rate of the system leaving a certain state, which relies on the local topological configuration of the interaction relationships between agents and on the strategy configuration. According to Theorem 1, there exists an invariable distribution $\mu$ of the system behavior, which relies on the variables $\omega$, $\tau$, $\beta$, and $N$. Of these, $\omega$ and $\tau$ are the most important controllable variables, while $\beta$ and $N$ are scenario variables. If $\omega$ and $\tau$ are fixed, the statistical distribution of the system's behavior relies on two parameters: the noise $\beta$ and the agent population $N$. Similarly, if $\beta$ and $N$ are fixed, i.e., the scenario is fixed, the statistical distribution of the system's behavior relies on the agents' behavior strategies and the interaction configuration; the latter relies on the preferential attachment of a certain agent.

    The ways agents adjust their behaviors, on both small and large time scales, depend on the rules expressed by Equations (1)-(22) and the co-evolution model $\Gamma^\beta=\langle(G,A,\pi),(\Omega,\mathcal F,P,(X_t^\beta)_{t\ge T_0})\rangle_{\beta\in\mathbb R_+}$ defined in Subsection 4.2. By Theorem 1, the invariable distribution depends on the interaction probability $p^{\beta,N}_{j_i,k_i}$ determined by the payoffs obtained in the different sub-processes: $\mu^{\beta,\tau,N}(\omega\,|\,\Omega_a^N)=\prod_{j_i=1}^{N_i}\prod_{k_i>j_i}p^{\beta,N}_{j_ik_i}(a,\tau)^{\gamma_{j_ik_i}(\omega)}(1-p^{\beta,N}_{j_ik_i}(a,\tau))^{1-\gamma_{j_ik_i}(\omega)}$. Therefore, the behavior of $p^{\beta,N}_{j_i,k_i}$ is crucial for the invariable distribution. Insofar as $p^{\beta,N}_{j_i,k_i}$ is the ratio of payoffs coming from creating an interaction relationship, the sub-processes of deleting old interactions should be avoided, because deleted links bring the largest profit losses. Thus, when the strategies of all agents are optimal, $p^{\beta,N}_{j_i,k_i}$ takes large values; if only some agents' strategies are optimal, $p^{\beta,N}_{j_i,k_i}$ is relatively small.

    By setting $F(x)=x(1-x)^{-a}$, $0<a<1$, it is easy to see that $F'(x)=(1-x)^{-a}+a\,x\,(1-x)^{-a-1}>0$, so $\mu$ grows monotonically with $p^{\beta,N}_{j_i,k_i}$ and decreases monotonically with the agent population $N$.
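    A quick numeric check of this monotonicity claim for $F(x)=x(1-x)^{-a}$, $0<a<1$ (illustrative Python; the value of $a$ is arbitrary):

```python
# A quick numeric check of the monotonicity claim for F(x) = x(1-x)^(-a),
# 0 < a < 1; the value of a is arbitrary.
a = 0.5
F = lambda x: x * (1 - x) ** (-a)
xs = [i / 100 for i in range(1, 100)]
assert all(F(x2) > F(x1) for x1, x2 in zip(xs, xs[1:]))
print("F is strictly increasing on (0, 1) for a =", a)
```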

    Thus, the combined effect of this invariable distribution and of noise on the system behavior is quite complex. Intuitively, the larger the noise $\beta$, the more difficult it is to select the optimal strategy and the best interaction partner. Thus, $p^{\beta,N}_{j_i,k_i}$ is expected to increase with $\beta$.

    Given the combined effects of $\omega$, $\tau$, $\beta$, and $N$, the system behavior is hard to predict, yielding only approximate solutions. Precise solutions were further derived for the following two limiting cases: $\beta\to0$ and $N\to\infty$. First, we formulated the following hypothesis.

    Hypothesis 3. Under the two limiting scenarios ($\beta\to0$ and $N\to\infty$), the system behavior properties are not equivalent.

    The above two scenarios are discussed in Subsections 6.1 and 6.2, respectively.

    First, the term stochastic stability should be defined.

    Definition 5. In the limit of small behavior noise $\beta$, the system configuration $\omega\in\Omega^N$ is stochastically stable if $\lim_{\beta\to0}\beta\log\mu^{\beta,\tau,N}(\omega)=0$.

    Lemma 2. For fixed $N\ge2$ and an arbitrary agent property profile $\tau\in\Theta^N$, we get $\lim_{\beta\to0}\max_{\omega\in\Omega^N}|H^{\beta,N}(\omega,\tau)-V(\omega,\tau)|=0$ if, and only if, $\lim_{\beta\to0}\max_{\omega\in\Omega^N}\beta\,|\log\mu_0^{\beta,\tau,N}(\omega)|=0$.

    Proof of Lemma 2. It follows directly from Theorem 2: QED.

    If the co-evolutionary dynamics of the agents' behavior and local topological configuration follows an admissible volatile mechanism, then the perturbation introduced by the graph-dependent factor $\mu_0^{\beta,\tau,N}$ must be controlled by the potential function when the behavior noise is sufficiently small. The corresponding result is given in Theorem 3, which states that the class of invariable distributions $\{\mu^{\beta,\tau,N}\}_{\beta>0}$ satisfies the maximum deviation principle, so that the invariable distribution concentrates on a certain subset of the state space, converging logarithmically to the minimum value of the rate function, which can be precisely estimated.

    Theorem 3. If $\Xi^{\beta,\tau,N}$ is an admissible volatile mechanism, then there exists a rate function $R(\omega,\tau)\triangleq\max_{\omega'\in\Omega^N}V(\omega',\tau)-V(\omega,\tau)$ satisfying the maximum deviation principle, such that, for all $\omega\in\Omega^N$, the class of invariable distributions $\{\mu^{\beta,\tau,N}\}_{\beta>0}$ satisfies $\lim_{\beta\to0}\beta\log\mu^{\beta,\tau,N}(\omega)=-R(\omega,\tau)$.

    Proof of Theorem 3: According to Theorem 2, for all $\omega\in\Omega^N$, we have

    $\mu^{\beta,\tau,N}(\omega)=\frac{\exp(\beta^{-1}H^{\beta,N}(\omega,\tau))}{\sum_{\omega'\in\Omega^N}\exp(\beta^{-1}H^{\beta,N}(\omega',\tau))}$ (45)

    Furthermore, if the volatile mechanism is admissible, it satisfies in particular (SNB). Then, it follows from Lemma 2 that the Hamiltonian function converges uniformly, as $\beta\to0$, to the potential function of the game. So, for all $\omega\in\Omega^N$, we get

    $\lim_{\beta\to0}\beta\log\mu^{\beta,\tau,N}(\omega)=\lim_{\beta\to0}H^{\beta,N}(\omega,\tau)-\max_{\omega'\in\Omega^N}\lim_{\beta\to0}H^{\beta,N}(\omega',\tau)=V(\omega,\tau)-\max_{\omega'\in\Omega^N}V(\omega',\tau)=-R(\omega,\tau)$ (46)

    Thus, Theorem 3 is proven: QED.
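    A numerical illustration of Theorem 3 (illustrative Python; the potential values are hypothetical, and $\mu^{\beta,\tau,N}\propto\exp(V/\beta)$ is taken as the limiting Gibbs form): $\beta\log\mu^{\beta,\tau,N}(\omega)$ approaches $-R(\omega,\tau)$ as $\beta\to0$, so only potential maximizers are stochastically stable.

```python
# For a Gibbs measure mu proportional to exp(V/beta), beta*log(mu(w)) tends
# to -R(w) with R(w) = max V - V(w) as beta -> 0. Potential values are toy data.
import math

V = {"w1": 1.0, "w2": 0.8, "w3": 0.2}            # potential of each state
R = {w: max(V.values()) - V[w] for w in V}       # rate function

for beta in (1.0, 0.1, 0.01, 0.001):
    m = max(V.values())
    # log of the partition function, computed stably (log-sum-exp)
    logz = m / beta + math.log(sum(math.exp((v - m) / beta) for v in V.values()))
    blogmu = {w: V[w] - beta * logz for w in V}  # beta * log mu(w)
    print(beta, {w: round(x, 4) for w, x in blogmu.items()})

print("-R:", {w: -r for w, r in R.items()})
```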

    Notably, two corollaries following from Theorem 3 are given in Appendix 1.

    To get the profile of individual rationality of agents in a complex adaptive system, the following definition of equilibrium was introduced.

    Definition 6. The four-tuple $\left\langle\Omega^N,\mu^{\beta,\tau,N},(P^{j_i})_{j_i\in[N]},(s^{j_i})_{j_i\in[N]}\right\rangle$ is in a relative equilibrium if, at $\beta^{*}>0$ and $\rho>0$, the following inequality holds for all $j_i\in[N_i]\subseteq N$, all strategies $\hat s^{j_i}$, and all $\beta<\beta^{*}$: $\sum_{\omega\in\Omega^N}\mu^{\beta,\tau,N}(\omega)\,U^{j_i}(s(\omega),\gamma(\omega),\tau_{j_i})\ge\sum_{\omega\in\Omega^N}\mu^{\beta,\tau,N}(\omega)\,U^{j_i}(\hat s^{j_i}(\omega),s^{-j_i}(\omega),\gamma(\omega),\tau_{j_i})-\rho$.

    Theorem 4. For all $j_i\in[N_i]\subseteq N$ with strategy $s^{j_i}(\omega)=\alpha_{j_i}(\omega)$, $\omega\in\Omega^N$, the tuple $\left\langle\Omega^N,\mu^{\beta,\tau,N},(P^{j_i})_{j_i\in[N]},(s^{j_i})_{j_i\in[N]}\right\rangle$ comprises a relative equilibrium of $(\beta,\rho)$.

    Proof. For all $j_i\in[N_i]\subseteq N$ and an arbitrary alternative strategy $\hat s_{j_i}$, the deviation payoff of agent $j_i$ is bounded by:

    $\sum_{\omega\in\Omega^N}\mu^{\beta,\tau,N}(\omega)\left\{U_{j_i}[\hat s(\omega),s_{-j_i}(\omega),\gamma(\omega),\tau_{j_i}]-U_{j_i}[s(\omega),\gamma(\omega),\tau_{j_i}]\right\}$
    $=\sum_{\omega\in\Omega^{*,N}(\tau)}\mu^{\beta,\tau,N}(\omega)\left\{V(\hat s(\omega),s_{-j_i}(\omega),\gamma(\omega),\tau)-V(s(\omega),\gamma(\omega),\tau_{j_i})\right\}+\sum_{\omega\notin\Omega^{*,N}(\tau)}\mu^{\beta,\tau,N}(\omega)\left\{V(\hat s(\omega),s_{-j_i}(\omega),\gamma(\omega),\tau)-V(s(\omega),\gamma(\omega),\tau_{j_i})\right\}$
    $\le\mu^{\beta,\tau,N}(\Omega^N\backslash\Omega^{*,N}(\tau))\,C$ (47)

    where $C\triangleq\max_{\omega\notin\Omega^{*,N}(\tau)}\left\{V(\hat s(\omega),s_{-j_i}(\omega),\gamma(\omega),\tau)-V(s(\omega),\gamma(\omega),\tau_{j_i})\right\}$. The upper bound follows directly because the first sum is non-positive by the definition of $\Omega^{*,N}(\tau)$. If $C<0$, Theorem 4 holds. If $C\ge0$, by invoking the exponential convergence of the invariable distribution and Corollary 2, for $\beta\to0$ and $\delta(\beta)\to0$ at their respective limits, there exists $\varepsilon>0$ such that $\mu^{\beta,\tau,N}(\Omega^N\backslash\Omega^{*,N}(\tau))\le\exp\left(-\frac{\varepsilon}{\beta}(1+o(1))\right)\triangleq\delta(\beta)$ holds. Thus, for each $\rho>0$, a small enough $\beta$ can be selected such that the corresponding upper bound is pushed below $\rho$. This proves Theorem 4: QED.

    If the behavior noise is small enough, each agent will use the equilibrium as his/her optimal strategy, with a little deviation permitted. In this sense, a deterministic state of the complex adaptive system that has evolved ( \omega ) should be estimated according to the invariable distribution coupled with the optimal strategy and the local topological structure of the agent.

    In this section, a positive noise $\beta>0$ is fixed, and the population of agents in the complex adaptive system is regarded as a selectable parameter, in order to analyze the specification of the invariable distribution of the states. Similarly to the analysis of the noise limit, the preferential attachment mechanism is set to a logarithmic form, as in Hypothesis 3, and the volatile mechanism is half-anonymous. The invariable distribution $\mu^{\beta,\tau,N}$ over states $\omega\in\Omega^N$ is the most important consideration when the population of the complex adaptive system is varied. Thus, when considering the interactivity of agents, the focus is on whether the different types of agents select similar strategies and merge into certain LWs; here, the system structure, represented by the prior distribution $\hat\sigma^N=(\hat\sigma_1^N,\hat\sigma_2^N,\dots,\hat\sigma_K^N)$, is the key consideration. Selecting strategies in this manner is called a Bayesian strategy, where every element of the strategy $\hat\sigma_k$ is taken as a certain probability over the behavior set $A$, and the coordinates of $\hat\sigma_k$ are denoted by $\hat\sigma_k(a)$, $a\in A$, which, for all $a\in A$, $1\le k\le K$, occur with probability:

    {\widehat{\sigma }}_{k}^{N}\left(a\right)(\omega , \tau )\triangleq \frac{1}{N{M}_{k}^{N}\left(\tau \right)}{\sum }_{{j}_{i} = 1}^{N}{1}_{a}\left({\alpha }_{{j}_{i}}\right(\omega \left)\right){1}_{{\theta }_{k}}\left({\tau }_{{j}_{i}}\right) (48)

    where $1_{(\cdot)}$ is the indicator function. Since $\hat\sigma$ can be regarded as a mapping from a certain type space $\varTheta$ of agents to the mixing space $\varDelta(A)$, the corresponding classical Bayesian strategy can be found. Denoting by $\varSigma\triangleq\varDelta(A)^{K}$ the set of all Bayesian strategies, then for $(\sigma,m)\in\varSigma\times\varDelta\varTheta^{N}$, the measurement set generated by the mapping $\hat\sigma^N$ can be defined as:

[\sigma , m]\triangleq \left\{(\omega , \tau )\in {\varOmega }^{N}\times {\varTheta }^{N}\left|{\widehat{\sigma }}^{N}(\omega , \tau ) = \sigma \ \& \ {M}^{N}\left(\tau \right) = m\right.\right\} (49)

Invoking the invariable distribution as a measure {\mu }^{\beta , N}\in M({\varOmega }^{N}\times {\varTheta }^{N}) , an approximate expression for it is needed. When the system population tends to infinity, coupled with the set [\sigma , m] , the measure can be described as:

{\mu }^{\beta , N}\left([\sigma , m]\right) = {\sum }_{(\omega , \tau )\in [\sigma , m]}{\mu }^{\beta , N}(\omega , \tau ) = {\sum }_{\tau \in {T}^{N}\left(m\right)}{P}_{q}({\tilde{\tau }}^{N} = \tau ){\sum }_{\omega \in ({\widehat{\sigma }}^{N, \tau }{)}^{-1}(\sigma )}{\mu }^{\beta , \tau , N}\left(\omega \right) , (50)

where {\widehat{\sigma }}^{N, \tau }\left(\cdot \right) = {\widehat{\sigma }}^{N}(\cdot , \tau ) , and ({\widehat{\sigma }}^{N, \tau }{)}^{-1}(\sigma ) = \left\{\omega \in {\varOmega }^{N}\left|{\widehat{\sigma }}^{N}\right.(\omega , \tau ) = \sigma \right\} .
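
To make the aggregation in Equation (48) concrete, the following minimal Python sketch computes the empirical Bayesian strategy {\widehat{\sigma }}^{N}(\omega , \tau ) from a behavior profile and a type profile. The function and array names, as well as the toy profiles, are illustrative assumptions for this sketch, not code or data belonging to the model.

```python
import numpy as np

def empirical_bayesian_strategy(behaviors, types, n_actions, n_types):
    """Empirical strategy sigma_hat[k, a]: the fraction of type-k agents
    currently playing action a (cf. Equation (48))."""
    sigma_hat = np.zeros((n_types, n_actions))
    for a, k in zip(behaviors, types):
        sigma_hat[k, a] += 1.0
    # Normalize each type's row by the number of agents of that type,
    # i.e., by N * M_k^N(tau) in the paper's notation.
    counts = sigma_hat.sum(axis=1, keepdims=True)
    return np.divide(sigma_hat, counts, out=np.zeros_like(sigma_hat),
                     where=counts > 0)

# Toy profile: 6 agents, 2 actions, 2 types (illustrative only).
behaviors = np.array([0, 1, 1, 0, 0, 1])
types = np.array([0, 0, 1, 1, 1, 1])
print(empirical_bayesian_strategy(behaviors, types, n_actions=2, n_types=2))
```

Each row of the result lies on the grid of empirically realizable mixed strategies, which is exactly the object measured through the sets [\sigma , m] above.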

Based on the definition of the Bayesian strategy, all states in the set [\sigma , m] stand, for all \tau \in {T}^{N}\left(m\right) , in {\varOmega }_{a}^{N}\times \left\{\tau \right\} . Therefore, one can define an equivalence relation {~}_{[\sigma , m]} such that (a, \tau ){~}_{[\sigma , m]}(a{'}, \tau {'}) holds if, and only if, \tau , \tau {'}\in {T}^{N}\left(m\right) and {\widehat{\sigma }}^{N}({\varOmega }_{a}^{N}, \tau ) = {\widehat{\sigma }}^{N}({\varOmega }_{a{'}}^{N}, \tau {'}) = \sigma ; that is, two pairs of agent-behavior and agent-type profiles are equivalent under {~}_{[\sigma , m]} if the profiles generate the same aggregate [\sigma , m] . Therefore, ({\widehat{\sigma }}^{N, \tau }{)}^{-1}(\sigma ) defines the a- behavior coalition. Furthermore, the class of {\mu }^{\beta , \tau , N}\left({\varOmega }_{a}^{N}\right) provides the approximate expression for all a\in A . Thus, conditional on half-anonymity, this measure relies only on the ratio of agents of a certain type with a certain behavior. Counting the unordered arrangements of such agents, a Bayesian measure with a finite population can be obtained as follows:

{\psi }^{\beta , N}\left(\sigma \left|m\right.\right) = {K}^{\beta , N}(m{)}^{-1}{\prod }_{k = 1}^{K}\frac{\left(N{m}_{k}\right)!}{{\prod }_{a\in A}\left(N{m}_{k}{\sigma }_{k}(a)\right)!}\mathit{exp}\left(N{m}_{k}{f}_{k}^{\beta , N}(\sigma , m)\right) (51)

In this expression, the factor {K}^{\beta , N}(m{)}^{-1} is the normalization constant of the probability measure, so the probability distribution {\psi }^{\beta , N}\left(\left|m\right.\right) must be supported on the subset

\text{supp}\left({\psi }^{\beta , N}\left(\left|m\right.\right)\right) = {\varSigma }^{N}\left(m\right)\triangleq \left\{\sigma \in \varSigma \left|N{m}_{k}{\sigma }_{k}\left(a\right)\in \mathbb{N}\right., \forall a\in A, 1\le k\le K\right\} , (52)

which is a grid of interior points approximating the polyhedron of Bayesian strategies \varSigma .
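
As an illustration of Equation (51), the hedged Python sketch below evaluates the unnormalized logarithmic weight of a Bayesian strategy on the grid {\varSigma }^{N}\left(m\right) : one multinomial coefficient per type times the exponential payoff term. The payoff f_k is left as a user-supplied callable because its concrete form depends on the game; this is a sketch under those assumptions, not the paper's computation.

```python
import numpy as np
from math import lgamma

def log_multinomial(n, counts):
    """log of n! / prod(counts!), via log-gamma for numerical stability."""
    return lgamma(n + 1) - sum(lgamma(c + 1) for c in counts)

def log_bayesian_weight(sigma, m, N, f_k, beta):
    """Unnormalized log psi^{beta,N}(sigma | m) as in Equation (51).
    sigma: (K, n) array of per-type mixed strategies on the grid Sigma^N(m);
    m: (K,) type distribution; f_k: callable k -> f_k^{beta,N}(sigma, m)."""
    total = 0.0
    for k, mk in enumerate(m):
        counts = np.rint(N * mk * sigma[k]).astype(int)  # N m_k sigma_k(a) integers
        total += log_multinomial(int(round(N * mk)), counts)
        total += N * mk * f_k(k)
    return total
```

Exponentiating these weights and normalizing over the grid {\varSigma }^{N}\left(m\right) reproduces the distribution {\psi }^{\beta , N}\left(\left|m\right.\right) .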

The function {f}_{k}^{\beta , N}:\varSigma \times \varDelta \left(\varTheta \right)\to \mathbb{R} describes the payoff an agent of type k obtains from interacting with agents of type l\ge k , considering all possible sub-networks coupled with its preferential behavior toward the others. Although the finite-population case does not yield a fully explicit result, when the total population tends to infinity the sequence {\left\{{f}_{k}^{\beta , N}\right\}}_{N\ge {N}_{0}} converges a.s. to the limit function

{f}_{k}^{\beta }(\sigma , m)\triangleq ⟨{\sigma }_{k}, {\theta }_{k}⟩+{\sum }_{l\ge k}\frac{{m}_{l}}{1+{\delta }_{kl}}⟨{\sigma }_{k}, {\varphi }_{kl}^{\beta }{\sigma }_{l}⟩ (53)

where the n-dimensional vector {\theta }_{k} = {\left({\theta }_{k}\left(a\right)\right)}_{a\in A} identifies the probability measure of type {\theta }_{k}:A\to \mathbb{R} via Equation (51). The rate function r:\varSigma \times \varDelta \left(\varTheta \right)\to \mathbb{R} that governs the large deviation principle for the class {\left\{{\psi }^{\beta , N}\left(\left|{M}^{N}\right.\right)\right\}}_{N\ge {N}_{0}} should be calculated along a convergent sequence {\left\{({\sigma }^{N}, {m}^{N})\right\}}_{N\ge {N}_{0}} . After analysis, we have:

    -\underset{N\to \infty }{lim}\frac{\beta }{N}\mathit{log}\left({\psi }^{\beta , N}\left({\sigma }^{N}\left|{m}^{N}\right.\right)\right) = {r}^{\beta }(\sigma , m) (54)

Extending this expression, when the type distribution m is given and the population of agents tends to infinity, the probability of a Bayesian strategy \sigma \in \varSigma is of the order of \mathit{exp}\left(-\frac{N}{\beta }r(\sigma , m)\right) on the logarithmic scale. Hence, the Bayesian strategy with the largest probability on the logarithmic scale must satisfy r(\sigma , m) = 0 , implying that the problem of the probability distribution of strategies coupled with the local topological structure can be transferred to the problem of identifying the potential function of the game. It was proven earlier that the logit function is an exact one; that is, the sought function should satisfy the following condition:

(1\le k\le K):{\tilde{f}}_{k}^{\beta , N}(\sigma , m): = {f}_{k}^{\beta , N}(\sigma , m)+\beta h\left({\sigma }_{k}\right)\\ {\tilde{f}}^{\beta , N}(\sigma , m)\triangleq {\sum }_{k = 1}^{K}{m}_{k}{\tilde{f}}_{k}^{\beta , N}(\sigma , m) (55)

where h\left(x\right) = -{\sum }_{i}{x}_{i}\mathit{log}{x}_{i} is the entropy of the distribution x . The type distribution relies on the growing population of agents and must change with time. The case of a relatively large population, in which {M}^{N}\stackrel{\hspace{1em}a.s.\hspace{1em}}{\to }q as N\to \infty , will be discussed next.
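
A minimal sketch of the perturbed potential in Equation (55) may clarify the construction: the per-type payoff of Equation (53) plus the entropy term \beta h\left({\sigma }_{k}\right) , aggregated with weights {m}_{k} . The arrays theta and phi below are illustrative placeholders for the type payoffs and interaction matrices, not model data.

```python
import numpy as np

def entropy(x, eps=1e-12):
    """h(x) = -sum_i x_i log x_i, with a small floor to avoid log(0)."""
    x = np.clip(x, eps, 1.0)
    return -np.sum(x * np.log(x))

def perturbed_potential(sigma, m, theta, phi, beta):
    """f~^beta(sigma, m) = sum_k m_k [ f_k^beta(sigma, m) + beta h(sigma_k) ],
    where f_k^beta(sigma, m) = <sigma_k, theta_k>
        + sum_{l >= k} m_l / (1 + delta_kl) <sigma_k, phi[k][l] sigma_l>
    (cf. Equations (53) and (55))."""
    K = len(m)
    total = 0.0
    for k in range(K):
        f_k = sigma[k] @ theta[k]
        for l in range(k, K):
            f_k += m[l] / (1.0 + (k == l)) * (sigma[k] @ phi[k][l] @ sigma[l])
        total += m[k] * (f_k + beta * entropy(sigma[k]))
    return total
```

The maximizers of this function over \varSigma are exactly the candidates singled out by the condition r(\sigma , m) = 0 discussed above.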

Assuming that N is large enough, the near-natural assignment of agents' types leads to a type distribution close to the prior probability q . We focus on the realizations with {M}^{N}\to q , and on the class of conditional measures {\left\{{\psi }^{\beta , N}\left(\left|{M}^{N}\right.\right)\right\}}_{N\ge {N}_{0}} into which the type distribution converges, up to a {P}_{q} -null set. This leads to Theorem 5.

Theorem 5. Let (m{)}_{N\ge {N}_{0}} be a type distribution sequence converging to the prior probability q . The class {\left\{{\psi }^{\beta , N}\left(\left|{m}^{N}\right.\right)\right\}}_{N\ge {N}_{0}} generated under the permissive half-anonymous mechanism satisfies the large deviation principle with rate function {r}^{\beta }(\sigma , q): = \underset{\sigma {'}\in \varSigma }{max}{\tilde{f}}^{\beta }(\sigma {'}, q)-{\tilde{f}}^{\beta }(\sigma , q) for all \sigma \in \varSigma . That is, for each sequence with {\sigma }^{N}\in {\varSigma }^{N}\left({m}^{N}\right), \forall N\ge {N}_{0} , and {\sigma }^{N}\to \sigma , the class satisfies: \underset{N\to \infty }{lim}\frac{\beta }{N}\mathit{log}{\psi }^{\beta , N}\left({\sigma }^{N}\left|{m}^{N}\right.\right) = -{r}^{\beta }(\sigma , q) .

By the large deviation principle introduced in Theorem 5, the family of measures {\left\{{\psi }^{\beta , N}\left(\left|{M}^{N}\right.\right)\right\}}_{N\ge {N}_{0}} concentrates, on the logarithmic scale, on the Bayesian strategies that are optimal solutions of the following program:

    \underset{\sigma {'}\in \varSigma }{max}{\tilde{f}}^{\beta }(\sigma {'}, q) (56)

The solution of this program is the logit equilibrium; it follows directly from the definition via the fixed-point condition of the Bayesian strategy. The corresponding fixed-point condition is:

{\sigma }_{k}^{*}\left(a\right) = \frac{\mathit{exp}\left({\beta }^{-1}\left({\pi }_{a}^{k}({\sigma }^{*}, q)+{\theta }_{k}(a)\right)\right)}{{\sum }_{b\in A}\mathit{exp}\left({\beta }^{-1}\left({\pi }_{b}^{k}({\sigma }^{*}, q)+{\theta }_{k}(b)\right)\right)} (57)

    where the following equality holds for all a\in A and 1\le k\le K :

{\pi }_{a}^{k}(\sigma , q) = {\sum }_{l = 1}^{K}{q}_{l}{\sum }_{b\in A}{\varphi }_{kl}^{\beta }(a, b){\sigma }_{l}\left(b\right)\equiv {\sum }_{l = 1}^{K}{q}_{l}{\left({\varphi }_{kl}^{\beta }{\sigma }_{l}\right)}_{a} (58)
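
The fixed-point condition (57)–(58) can be solved numerically by simple iteration of the logit map, as in the hedged Python sketch below. Convergence of plain iteration for the chosen \beta and the random payoff matrices is an assumption of this example, not a claim of the paper; the instance data are invented for illustration.

```python
import numpy as np

def logit_fixed_point(theta, phi, q, beta, iters=500, tol=1e-10):
    """Iterate sigma_k(a) ∝ exp((pi_a^k + theta_k(a)) / beta), with
    pi^k(sigma, q) = sum_l q_l (phi[k][l] @ sigma_l), cf. Eqs. (57)-(58).
    theta: (K, n); phi: (K, K, n, n); q: (K,)."""
    K, n = theta.shape
    sigma = np.full((K, n), 1.0 / n)          # start from the uniform strategy
    for _ in range(iters):
        new = np.empty_like(sigma)
        for k in range(K):
            pi_k = sum(q[l] * (phi[k][l] @ sigma[l]) for l in range(K))
            logits = (pi_k + theta[k]) / beta
            logits -= logits.max()             # stabilize the exponentials
            w = np.exp(logits)
            new[k] = w / w.sum()
        if np.abs(new - sigma).max() < tol:
            break
        sigma = new
    return sigma

# Illustrative instance: K = 2 types, n = 3 actions, symmetrized random payoffs.
rng = np.random.default_rng(0)
K, n = 2, 3
theta = rng.normal(size=(K, n))
phi = rng.normal(size=(K, K, n, n))
phi = (phi + np.transpose(phi, (1, 0, 3, 2))) / 2  # phi_kl(a, b) = phi_lk(b, a)
sigma_star = logit_fixed_point(theta, phi, q=np.array([0.5, 0.5]), beta=1.0)
print(sigma_star)   # rows sum to one; the log-ratio condition (63) holds here
```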

To specify this conclusion, the program in Equation (56) can be reduced to an optimization problem with standard constraints. Therefore, the corresponding solution can be obtained by the Lagrange method as follows:

L = {\tilde{f}}^{\beta }(\sigma , q)-{\sum }_{k = 1}^{K}{\lambda }_{k}\left({\sum }_{a\in A}{\sigma }_{k}\left(a\right)-1\right) (59)

The corresponding first-order conditions are necessary and sufficient for the optimal solution, implying that a unique solution exists if the positive \beta is large enough. In general, for all 1\le k\le K and a, b\in A , the corresponding first-order condition takes the following form:

    \frac{\partial {\tilde{f}}^{\beta }(\sigma , q)}{\partial {\sigma }_{k}\left(a\right)}-\frac{\partial {\tilde{f}}^{\beta }(\sigma , q)}{\partial {\sigma }_{k}\left(b\right)} = 0 (60)

    Due to the symmetry of {\phi }_{kl}^{\beta } , we have:

\frac{\partial {\tilde{f}}_{l}^{\beta }(\sigma , q)}{\partial {\sigma }_{k}\left(a\right)} = \left\{\begin{array}{cc}0& l > k\\ {\theta }_{k}\left(a\right)+{\sum }_{l{'}\ge k}{q}_{l{'}}({\varphi }_{kl{'}}^{\beta }{\sigma }_{l{'}}{)}_{a}& l = k\\ {q}_{l}({\varphi }_{kl}^{\beta }{\sigma }_{l}{)}_{a}& l < k\end{array}\right. (61)

    and:

\frac{\partial {\tilde{f}}^{\beta }(\sigma , q)}{\partial {\sigma }_{k}\left(a\right)} = {q}_{k}\left[{\theta }_{k}\left(a\right)+{\sum }_{l{'}\ge k}{q}_{l{'}}({\varphi }_{kl{'}}^{\beta }{\sigma }_{l{'}}{)}_{a}-\beta \left(\mathit{log}{\sigma }_{k}\left(a\right)+1\right)\right] (62)

    Extending the first-order condition to the general case, the following condition should be satisfied:

    \begin{gathered} \log \frac{{{\sigma _k}(a)}}{{{\sigma _k}(b)}} = \frac{1}{\beta }\left[ {\sum\limits_{l = 1}^K {{q_l}{{\left( {\boldsymbol{\varphi}_{kl}^\beta {\sigma _l} + {\theta _k}} \right)}_a}} - \sum\limits_{l = 1}^K {{q_l}{{\left( {\boldsymbol{\varphi}_{kl}^\beta {\sigma _l} + {\theta _k}} \right)}_b}} } \right] \\ = \frac{1}{\beta }\left[ {\left( {\pi _a^k(\boldsymbol{\sigma} , \boldsymbol{q}) + {\theta _k}(a)} \right) - \left( {\pi _b^k(\boldsymbol{\sigma} , \boldsymbol{q}) + {\theta _k}(b)} \right)} \right] \\ \end{gathered} (63)

The remaining components follow directly from the constraint condition {\sum }_{a\in A}{\sigma }_{k}\left(a\right) = 1 . Therefore, when the population tends to infinity, the invariable distribution of a complex adaptive system with co-evolution of the agent's behavior and its local topological configuration converges into a certain interval with rate function -{r}^{\beta }(\sigma , q) , according to Theorem 5.

In the limit of small behavior noise, the system's invariable distribution under co-evolution of agent's behavior and local topological configuration must concentrate on the maximizers of the potential function. In the limit of a large population, however, the invariable distribution converges with a different rate function. Therefore, small noise of agent's behavior is not identical to a large population for this co-evolutionary complex adaptive system with agent's behavior and its local topological configuration.

As mentioned above, the invariable distribution is much more complex: no universal analytical solution can be derived when the corresponding parameters change gradually and continuously. This problem concerns decision-making based on non-structural analysis of system scenarios. Facing such a complex system, one can select a certain scenario and then adjust the corresponding parameters so that the scenario changes dynamically. Finally, after several discrete scenarios are studied, the corresponding conclusion can be drawn by induction.

    Although we defined "irrational behavior", we did not analyze particular irrational behavior patterns such as competing (vying) and comparing, the anchoring effect, the loss of real demand caused by free offers, the nullity of money incentives under the dual effects of social norms and market regulations, sense and sensibility, and the high price of ownership. We plan to delve deeply into these issues in the follow-up study.

    The authors declare that they have not used Artificial Intelligence (AI) tools in creating this article.

    This paper is supported by the National Natural Science Foundation of China (Grant No. 71671054) and the Natural Science Foundation of Shandong Province (Grant No. ZR2020MG004).

    The authors declare that there are no conflicts of interest.

    According to Theorem 3, two corollaries were derived as follows.

Corollary 1. Let {\varXi }^{\beta , \tau , N} be an admissible volatile mechanism. Then:

(1) The set {\varOmega }^{*, N}\left(\tau \right)\triangleq \left\{\omega \in {\varOmega }^{N}\left|\underset{\beta \to 0}{lim}\beta \mathit{log}{\mu }^{\beta , \tau , N}\left(\omega \right) = 0\right.\right\} of stochastically stable states of a realizable agent-type configuration \tau under small noise coincides with {\varOmega }^{*, N}\left(\tau \right) = \mathit{arg}\underset{\omega \in \varOmega }{max}V(\omega , \tau ) ;

(2) The invariable distribution converges exponentially; that is, for arbitrary \varepsilon > 0 , there exists a subset {X}_{\varepsilon }\subseteq {\varOmega }^{N} such that \underset{\beta \to 0}{lim}\beta \mathit{log}{\mu }^{\beta , \tau , N}\left({X}_{\varepsilon }\right) < -\varepsilon holds.

Proof. Only the second claim needs to be proven. Denote the sub-level sets of the rate function by {L}_{R}\left(\varepsilon \right)\triangleq \left\{\omega \in {\varOmega }^{N}\left|R(\omega , \tau )\le \varepsilon \right.\right\} for all \varepsilon > 0 . These sets are non-empty because {\varOmega }^{*, N}\left(\tau \right)\subseteq {L}_{R}\left(\varepsilon \right) always holds, with equality as \varepsilon \to 0 . Fix an \varepsilon > 0 and consider the set {X}_{\varepsilon }\triangleq {\varOmega }^{N}\backslash {L}_{R}\left(\varepsilon \right) ; then {R}_{{X}_{\varepsilon }}\left(\tau \right)\triangleq \underset{\omega \in {X}_{\varepsilon }}{min}R(\omega , \tau ) > 0 . So, we have

    \begin{gathered} \mathop {\lim }\limits_{\beta \to 0} \beta \log {\mu ^{\beta , \tau , N}}({X_\varepsilon }) = \mathop {\lim }\limits_{\beta \to 0} \beta \log \left( {\sum\limits_{\omega \in {X_\varepsilon }} {{\mu ^{\beta , \tau , N}}(\omega )} } \right) \\ = \mathop {\max }\limits_{\omega \in {X_\varepsilon }} \mathop {\lim }\limits_{\beta \to 0} \beta \log {\mu ^{\beta , \tau , N}}(\omega ) = - \mathop {\min }\limits_{\omega \in {X_\varepsilon }} R(\omega ) < - \varepsilon \\ \end{gathered} (A.1)

    Similar to Theorem 3, for some functions {B}_{{X}_{\varepsilon }}^{\beta, N}, {r}_{{X}_{\varepsilon }}\left(\beta \right) , we have

    {\sum }_{\omega \in {X}_{\varepsilon }}{\mu }^{\beta , \tau , N}\left(\omega \right) = \mathit{exp}\left(-{\beta }^{-1}{R}_{{X}_{\varepsilon }}\left(\tau \right)\right){B}_{{X}_{\varepsilon }}^{\beta , N}{r}_{{X}_{\varepsilon }}\left(\beta \right) (A.2)

Taking logarithms on both sides and multiplying by \beta , we have

    \beta \mathit{log}\left({\sum }_{\omega \in {X}_{\varepsilon }}{\mu }^{\beta , \tau , N}\left(\omega \right)\right) = -{R}_{{X}_{\varepsilon }}\left(\tau \right)+\beta \mathit{log}{B}_{{X}_{\varepsilon }}^{\beta , N}+\beta {r}_{{X}_{\varepsilon }}\left(\beta \right) (A.3)

As \beta \to 0 , the right-hand side becomes -{R}_{{X}_{\varepsilon }}\left(\tau \right)(1+o(1)) < -\varepsilon . Corollary 1 is proven: QED.

Theorem 3 indicates that, under small noise, for each type configuration, the invariable distribution concentrates on the set of maximizers of the potential function. Similarly, Corollary 1 states that the corresponding stochastically stable states must be the maximizers of the potential function. Furthermore, the class of measures {\left\{{\mu }^{\beta , \tau , N}\right\}}_{\beta > 0} assigns arbitrary weight to a deterministic subset of the state space, which makes the agent select the optimal strategy.
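
A small numerical sketch can illustrate Theorem 3 and Corollary 1: build a Gibbs-type invariable distribution {\mu }^{\beta }\left(\omega \right)\propto \mathit{exp}(V(\omega )/\beta ) on a toy state space and observe that \beta \mathit{log}{\mu }^{\beta }\left(\omega \right)\to V\left(\omega \right)-\mathit{max}V , so the mass concentrates on the potential maximizers as \beta \to 0 . The state space and potential values below are invented for illustration only.

```python
import numpy as np

# Toy potential over 5 states (illustrative values only).
V = np.array([1.0, 2.5, 2.5, 0.3, 1.7])

for beta in (1.0, 0.1, 0.01, 0.001):
    w = np.exp((V - V.max()) / beta)   # shift by max V for numerical stability
    mu = w / w.sum()                    # mu^beta(omega) ∝ exp(V(omega)/beta)
    # beta * log mu -> V - max V, which vanishes exactly on argmax V
    # (Corollary 1(1)); all other states get exponentially small weight.
    print(beta, np.round(beta * np.log(mu), 3))
```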

In the dynamical game model of the complex adaptive system, suppose that an agent always implements his/her optimal strategy when interacting with others, and does so in a rational manner. A notion is needed to express this kind of equilibrium; thus, the Aumann correlated equilibrium, a fitness order parameter, is introduced. The Aumann correlated equilibrium regards the state space {\varOmega }^{N} as the set of states that could potentially appear, and {\mu }^{\beta , \tau , N} is abbreviated to \mu . The information of agent {j}_{i} is denoted by {P}_{{j}_{i}} , given by the sets {P}_{{j}_{i}}\left(a\right)\triangleq \left\{\omega \in {\varOmega }^{N}\left|{\alpha }_{{j}_{i}}\left(\omega \right) = a\right.\right\} , a\in A . The strategy of agent {j}_{i} is a mapping {s}_{{j}_{i}}:{\varOmega }^{N}\to A that is measurable with respect to the information {P}_{{j}_{i}} ; that is, for any states \omega , \omega {'}\in {P}_{{j}_{i}}\left(a\right) , we have {s}_{{j}_{i}}\left(\omega \right) = {s}_{{j}_{i}}\left(\omega {'}\right) = a , and the profile of strategies is denoted by s = ({s}_{{j}_{i}}{)}_{{j}_{i}\in \left[{N}_{i}\right]\subseteq N} . If this co-evolutionary complex adaptive system converges into a certain dynamical equilibrium state, then the agents' type configuration must be ({\tau }_{11}, ...{\tau }_{1{n}_{1}}, {\tau }_{21}, ..., {\tau }_{N}) and the information structure must be ⟨{\varOmega }^{N}, \mu , {P}_{{j}_{i}}⟩ . Supposing that each agent uses the strategy {s}_{{j}_{i}}\left(\omega \right) = {\alpha }_{{j}_{i}}\left(\omega \right), \forall \omega \in {\varOmega }^{N} , an interesting question arises: would agent {j}_{i} use this strategy when interacting with its neighbors as the system evolves?

Because the measure \mu is effective if, and only if, the noise is positive, all states of the system appear with positive probability. However, not all strategies that are measurable with respect to {P}_{{j}_{i}} are good. Invoking Theorem 3, the exact states that occur with maximal non-vanishing probability can be identified. This kind of Nash equilibrium means that the behavior of agents is very complex.

Corollary 2. Let \tau \in {\varTheta }^{N} be an arbitrary type profile. For all {j}_{i}\in \left[{N}_{i}\right]\subseteq N , under the strategy {s}_{{j}_{i}}\left(\omega \right) = {\alpha }_{{j}_{i}}\left(\omega \right), \forall \omega \in {\varOmega }^{N} , the profile s is a Nash equilibrium in every state \omega \in {\varOmega }^{*, N}\left(\tau \right) .

Proof. The strategy {s}_{{j}_{i}} is measurable with respect to {P}_{{j}_{i}} ; thus, {s}_{{j}_{i}} describes the strategy selected by the agent. Fix an arbitrary agent {j}_{i}\in \left[{N}_{i}\right]\subseteq N , let {\widehat{s}}_{{j}_{i}} be an arbitrary strategy that is not optimal, and fix \omega \in {\varOmega }^{*, N}\left(\tau \right) . The deviation payoff of agent {j}_{i} will be

{U}_{{j}_{i}}[s(\omega ), \gamma (\omega ), {\tau }_{{j}_{i}}]-{U}_{{j}_{i}}[\widehat{s}(\omega ), {s}_{-{j}_{i}}(\omega ), \gamma (\omega ), {\tau }_{{j}_{i}}]\\ = V(s(\omega ), \gamma (\omega ), \tau )-V(\widehat{s}(\omega ), {s}_{-{j}_{i}}(\omega ), \gamma (\omega ), \tau ) (A.4)

This payoff is non-negative on {\varOmega }^{*, N}\left(\tau \right) . Therefore, Corollary 2 is proven: QED.

Proof of Theorem 5. The proof is involved, first requiring some parameters to be defined and some lemmas to be invoked.

The first step is to obtain the marginal distribution on {A}^{N} for an arbitrary type profile \tau \in {\varTheta }^{N} , such that the distribution of the Bayesian strategy conditional on {T}^{N}\left(m\right) , m\in {L}_{N} , is given, in order to analyze the probability that an agent stands in a given type class. To analyze the aggregation phenomenon, we partition the agents in the complex adaptive system according to behavior and type, so that agents of the same kind stand in the same subset. Now, for all 1\le k\le K and a\in A , define the set

    {I}_{k}^{\tau }\left(a\right)\left(\omega \right)\triangleq \left\{{j}_{i}\in \left[N\right]\left|{\alpha }_{{j}_{i}}\left(\omega \right) = a\&{\tau }_{{j}_{i}} = {\theta }_{k}\right.\right\} (A.5)

Obviously, for an arbitrary type profile \tau \in {\varTheta }^{N} , the class of sets {\left\{{\left\{{I}_{k}^{\tau }\left(a\right)\right\}}_{a\in A}\right\}}_{1\le k\le K} is a partition of \left[N\right] . Under the half-anonymous mechanism, the measure on random complex networks regards the edge between agents {j}_{i}\in {I}_{k}^{\tau }\left(a\right) and {j}_{i{'}}^{{'}}\in {I}_{l}^{\tau }\left(b\right) as an i.i.d. random variable. Thus, a random variable following the binomial distribution with parameter {P}_{kl}(a, b) is defined as:

{E}_{kl}^{N, \tau }(a, b)\left(\omega \right)\triangleq {\sum }_{({j}_{i}, {j}_{i{'}}^{{'}})\in [{I}_{k}^{\tau }(a)\cup {I}_{l}^{\tau }(b){]}^{\left(2\right)}}{\gamma }_{{j}_{i}{j}_{i{'}}^{{'}}}\left(\omega \right) (A.6)

Given a type profile \tau and the a- cross-section {\varOmega }_{a}^{N} , denoting by {E}_{kl}^{N, \tau }(a, b) the maximum number of edges between agents with behavior a and type k and agents with behavior b and type l , and by {e}_{kl} a realization of the random variable {E}_{kl}^{N, \tau }(a, b)\left(\right) , several lemmas are necessary. They are introduced as follows.

    Lemma 3. Consider a given type profile \tau \in {T}^{N}\left(m\right) and a half-anonymous volatile mechanism {\mathfrak{M}}^{\beta, \tau, N} :

    (1) The stable expression of a- game, on a- cross-section {\varOmega }_{a}^{N} , should be

{\sigma }_{k}\left(a\right) = {\widehat{\sigma }}_{k}\left(a\right)(\omega , \tau ), \forall \omega \in {\varOmega }_{a}^{N} , where \sigma = ({\sigma }_{k}(a); 1\le k\le K, a\in A)\in {\varSigma }^{N}\left(m\right)

(2) We have {\mu }^{\beta , \tau , N}\left({\varOmega }_{a}^{N}\right)\propto {\prod }_{k = 1}^{K}{\prod }_{a = 1}^{n}{\varPhi }_{k}^{a}(\sigma , \beta , N{)}^{N{m}_{k}{\sigma }_{k}\left(a\right)} , where, for all types 1\le k\le K and behaviors 1\le a\le n , {\varPhi }_{k}^{a}\left(\right) satisfies {\varPhi }_{k}^{a}(\sigma , \beta , N)\triangleq {\prod }_{l\ge k}{\varPhi }_{kl}^{a}(\sigma , \beta , N) , {\varPhi }_{kk}^{a}(\sigma , \beta , N)\triangleq \mathit{exp}\left(\frac{{\theta }_{k}\left(a\right)}{\beta }\right){\prod }_{b\ge a}{\left(1+\frac{1}{N\beta }{\varphi }_{kk}^{\beta , N}(a, b)\right)}^{\frac{N{m}_{k}{\sigma }_{k}\left(b\right)-{\delta }_{a, b}}{1+{\delta }_{a, b}}} , and {\varPhi }_{kl}^{a}(\sigma , \beta , N)\triangleq {\prod }_{b = 1}^{n}{\left(1+\frac{1}{N\beta }{\varphi }_{kl}^{\beta , N}(a, b)\right)}^{N{m}_{l}{\sigma }_{l}\left(b\right)} for l > k .

Proof: Denote by {z}_{k}\left(a\right) = N{m}_{k}{\sigma }_{k}\left(a\right) the number of agents of type {\theta }_{k} selecting behavior a in the a- game; then (1) follows directly.

    To prove (2), the following process should be introduced:

For all \omega \in {\varOmega }^{N} , define \rho (\omega , \tau ): = {\mu }_{0}\left(\omega \right)\mathit{exp}(V(\omega , \tau )/\beta ) ; invoking the pairwise weights {x}_{{j}_{i}{j}_{i{'}}^{{'}}} (see Equation (A.8) below), this mapping can be rewritten as:

\rho (\omega , \tau ) = {\prod }_{{j}_{i} = 1}^{N}\mathit{exp}\left({\theta }_{{\tau }_{{j}_{i}}}({\alpha }_{{j}_{i}}(\omega ))/\beta \right){\prod }_{{j}_{i{'}}^{{'}} > {j}_{i}}\mathit{exp}\left({x}_{{j}_{i}{j}_{i{'}}^{{'}}}(\omega , \tau ){\gamma }_{{j}_{i}{j}_{i{'}}^{{'}}}\left(\omega \right)\right) (A.7)

    Therefore, for all \omega \in {\varOmega }^{N} , 1\le k\le K and a\in A , we have {I}_{k}^{\tau }(a, \omega) = {I}_{k}^{\tau }\left(a\right) . Furthermore, for all agents with {j}_{i}\in {I}_{k}^{\tau }\left(a\right), {j}_{i{'}}\in {I}_{l}^{\tau }\left(b\right) , we can observe that:

    {x}_{{j}_{i}j{{'}}_{i{'}}}(\omega , \tau )\equiv {x}_{kl}(\omega , \tau )\triangleq \frac{1}{\beta }v(a, b)+\mathit{log}\left(\frac{2}{N{\xi }_{kl}}\right) (A.8)

    Then, Equation (A.7) can be reduced to

\begin{gathered} \rho (\boldsymbol{\omega} , \boldsymbol{\tau}) = {{\tilde \rho }_{[\sigma , m]}}(\omega ) \triangleq \prod\limits_{k = 1}^K {\prod\limits_{a = 1}^n {\exp \left( {\frac{{{\theta _k}(a){z_k}(a)}}{\beta }} \right)} \prod\limits_{b > a} {\exp {{[{x_{kk}}(a, b)]}^{E_{kk}^{N, \tau }(a, b)(\omega )}}} } \\ \times \prod\limits_{k, l > k} {\prod\limits_{a, b \in \mathscr{A}} {\exp {{[{x_{kl}}(a, b)]}^{E_{kl}^{N, \tau }(a, b)(\omega )}}} } \\ \end{gathered} (A.9)

The latter expression relies on the system's state \omega only through the numbers of edges in the network. We then evaluate this expression for all states \omega \in {\varOmega }^{N} , which requires integration over all possible edge configurations. The process can be described in detail as follows:

Initialization: set k = 1, a = 1, l = k ;

First cycle: considering the special situation l = k , integrate over all possible edges {e}_{kl}(a, b) for b\ge a — when b = n , set l = l+1 and go to the next cycle;

Second cycle: integrate over all possible edges {e}_{kl}(a, b) for all b\in A — if l\le K-1 , set l\to l+1 and repeat this cycle; otherwise, go to the third cycle;

Third cycle: if a\le n-1 and k\le K-1 , then, keeping the same k and setting a\to a+1 , go to the first cycle; if a = n and k\le K-1 , go to the first cycle with k\to k+1 and a\to 1 ; if a = n and k = K , stop the calculation (a loop skeleton mirroring these cycles is sketched below).
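
As referenced above, the following Python skeleton mirrors the nesting of the three cycles. The function integrate_edges is a hypothetical placeholder for summing out the edge variables {e}_{kl}(a, b) , since the actual integrand comes from Equation (A.9); only the loop structure is asserted here.

```python
def integrate_edges(k, l, a, b):
    """Hypothetical placeholder: sum out all possible edge counts e_{kl}(a, b)
    between agents in I_k(a) and I_l(b), as in Equations (A.10)-(A.11)."""
    pass

def run_cycles(K, n):
    # For each type k and action a: first the within-type case l = k, taking
    # only b >= a to avoid double counting; then the cross-type cases l > k
    # with all actions b. The third cycle's bookkeeping (advancing a and k)
    # is exactly what the outer for-loops encode.
    for k in range(1, K + 1):
        for a in range(1, n + 1):
            for b in range(a, n + 1):          # first cycle: l = k, b >= a
                integrate_edges(k, k, a, b)
            for l in range(k + 1, K + 1):      # second cycle: l > k
                for b in range(1, n + 1):
                    integrate_edges(k, l, a, b)
```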

All possible links between agents in {I}_{1}^{\tau }\left(1\right) are integrated within the first cycle. Note that the only factor affecting the calculation result is \mathit{exp}({x}_{11}\left(\mathrm{1, 1}\right){)}^{{E}_{11}^{N, \tau }\left(\mathrm{1, 1}\right)\left(\omega \right)} , \omega \in {\varOmega }_{a}^{N} ; hence the convolution term is not affected by the universal term {B}_{1} , in the sense that \rho (\omega , \tau ) = {B}_{1}\mathit{exp}({x}_{11}\left(\mathrm{1, 1}\right){)}^{{E}_{11}^{N, \tau }\left(\mathrm{1, 1}\right)\left(\omega \right)} . Furthermore, because the realizations of the edge counts are {E}_{11}^{N, \tau }\left(\mathrm{1, 1}\right) = {e}_{11}\left(\mathrm{1, 1}\right) , for an arbitrary agent in this complex adaptive system there exist several agents with whom he/she can interact, and the binomial identities representing the multiple possible games must be combined, which adjusts the result of the first cycle as follows:

{B}_{1}\sum\limits_{{e}_{11}\left(\mathrm{1, 1}\right) = 0}^{{E}_{11}^{N, \tau }\left(\mathrm{1, 1}\right)}\left(\begin{array}{c}{E}_{11}^{N, \tau }\left(\mathrm{1, 1}\right)\\ {e}_{11}\left(\mathrm{1, 1}\right)\end{array}\right)\mathit{exp}({x}_{11}\left(\mathrm{1, 1}\right){)}^{{e}_{11}\left(\mathrm{1, 1}\right)} = {B}_{1}{\left(1+\mathit{exp}({x}_{11}\left(\mathrm{1, 1}\right))\right)}^{{E}_{11}^{N, \tau }\left(\mathrm{1, 1}\right)} \\ = {B}_{1}{\left(1+\frac{1}{N\beta }{\varphi }_{11}^{\beta , N}\left(\mathrm{1, 1}\right)\right)}^{\frac{{z}_{1}\left(1\right)({z}_{1}\left(1\right)-1)}{2}} (A.10)

The next cycle of the algorithm calculates all links between agents in {I}_{1}^{\tau }\left(1\right) and {I}_{1}^{\tau }\left(2\right) . The factors relative to the universal term {B}_{1} must be extracted. The integration of the whole process up to this point is therefore:

    {B}_{2}{\left(1+\frac{1}{N\beta }{\varphi }_{11}^{\beta , N}\left(\mathrm{1, 1}\right)\right)}^{\frac{{z}_{1}\left(1\right)\left({z}_{1}\right(1)-1)}{2}}{\left(1+\frac{1}{N\beta }{\varphi }_{11}^{\beta , N}\left(\mathrm{1, 2}\right)\right)}^{{z}_{1}\left(1\right){z}_{1}\left(2\right)} (A.11)

    Repeating this algorithm, we obtain the corresponding function of the nth step:

{\varPhi }_{11}^{1}(\sigma , \beta , N{)}^{{z}_{1}\left(1\right)} = \mathit{exp}\left(\frac{{\theta }_{1}\left(1\right){z}_{1}\left(1\right)}{\beta }\right){\prod }_{b\ge 1}{\left(1+\frac{1}{N\beta }{\varphi }_{11}^{\beta , N}(1, b)\right)}^{\frac{{z}_{1}\left(1\right)({z}_{1}\left(b\right)-{\delta }_{1, b})}{1+{\delta }_{1, b}}} (A.12)

Recalling that {z}_{k}\left(a\right) = N{m}_{k}{\sigma }_{k}\left(a\right) , the function {\varPhi }_{11}^{1}(\sigma , \beta , N) holds, and the result of Lemma 3 is obtained by computing the remaining steps through the recurrence relation. Thus, Lemma 3 is proven: QED.

The invariable distribution on an a- cross-section is given in Lemma 3. It can be seen from its proof that not only the behavior profile but also the Bayesian strategies control the invariable distribution.

    Lemma 4. If \tau, \tau {'}\in {T}^{N}\left(m\right) and {\widehat{\sigma }}^{N}({\varOmega }_{a}^{N}, \tau) = {\widehat{\sigma }}^{N}({\varOmega }_{a{'}}^{N}, \tau {'}) , then {\mu }^{\beta, \tau, N}\left({\varOmega }_{a}^{N}\right)/{\mu }^{\beta, \tau {'}, N}\left({\varOmega }_{a{'}}^{N}\right) = 1

Proof. Denoting by \sigma the common strategy on the respective subsets {\varOmega }_{a}^{N} and {\varOmega }_{a{'}}^{N} , it follows from Equation (A.9) that, for arbitrary \omega \in {\varOmega }_{a}^{N} and \omega {'}\in {\varOmega }_{a{'}}^{N} , we have:

    \rho (\omega , \tau ) = {\tilde{\rho }}_{[\sigma , m]}\left(\omega \right), \rho (\omega {'}, \tau {'}) = {\tilde{\rho }}_{[\sigma , m]}\left(\omega {'}\right) (A.13)

    Lemma 4 can be proven if we can show that

    {\sum }_{\omega \in {\varOmega }_{a}^{N}}{\tilde{\rho }}_{[\sigma , m]}\left(\omega \right) = {\sum }_{\omega {'}\in {\varOmega }_{a{'}}^{N}}{\tilde{\rho }}_{[\sigma , m]}\left(\omega {'}\right) (A.14)

It can be deduced from the proof of Lemma 3 that this operator is driven by the algorithm: the random complex networks produced in {\varOmega }_{a}^{N} and those produced in {\varOmega }_{a{'}}^{N} are isomorphic graphs, so Equation (A.14) holds and Lemma 4 is proven: QED.

Lemma 5. The conditional probability distribution of the Bayesian strategy of the set \varSigma on the type class {T}^{N}\left(m\right) , for all 1\le k\le K , must be {\psi }^{\beta , N}\left(\sigma \left|m\right.\right) = {K}^{\beta , N}(m{)}^{-1}{\prod }_{k = 1}^{K}{\upsilon }_{k}^{\beta , N}\left(\sigma \left|m\right.\right) , where {\upsilon }_{k}^{\beta , N}\left(\sigma \left|m\right.\right)\triangleq \frac{\left(N{m}_{k}\right)!}{{\prod }_{a\in A}\left(N{m}_{k}{\sigma }_{k}(a)\right)!}{\prod }_{a = 1}^{n}{\varPhi }_{k}^{a}(\sigma , \beta , N{)}^{N{m}_{k}{\sigma }_{k}\left(a\right)} . The support of this probability distribution, \text{supp}\left({\psi }^{\beta , N}\left(\left|m\right.\right)\right) = {\varSigma }^{N}\left(m\right) = \left\{\sigma \in \varSigma \left|N{m}_{k}{\sigma }_{k}\left(a\right)\in \mathbb{N}, \forall a\in A, 1\le k\le K\right.\right\} , is a grid of interior points approximating the polyhedron of Bayesian strategies \varSigma .

Proof. If (a{'}, \tau {'}) transfers into (a, \tau ) after relabeling the agents in the complex adaptive system, then {\mu }^{\beta , \tau {'}, N}\left({\varOmega }_{a{'}}^{N}\right) = {\mu }^{\beta , \tau , N}\left({\varOmega }_{a}^{N}\right) via Lemma 4. Furthermore, if the Bayesian strategy \sigma is used universally by agents with \tau \in {T}^{N}\left(m\right) on {\varOmega }_{a}^{N} , then there are N{m}_{k} agents of type {\theta }_{k} , of whom N{m}_{k}{\sigma }_{k}\left(a\right) display behavior a\in A , for each sub-generation 1\le k\le K ; this yields \frac{\left(N{m}_{k}\right)!}{{\prod }_{a\in A}\left(N{m}_{k}{\sigma }_{k}(a)\right)!} arrangements, and similarly for every profile \tau \in {T}^{N}\left(m\right) of all types. Therefore, Lemma 5 is proven: QED.

This gives the closest compact form of the invariable distribution over Bayesian strategies, and it is this measure, coupled with the agents' population, that must be studied when the population of the complex adaptive system is large enough. Therefore, we introduce the following functions

(1\le k\le K):{f}_{k}^{\beta , N}(\sigma , m)\triangleq \sum\limits_{a\in A}{\sigma }_{k}\left(a\right)\sum\limits_{l\ge k}\mathit{log}{\varPhi }_{kl}^{a}(\sigma , \beta , N) \\{f}^{\beta , N}(\sigma , m)\triangleq {\sum }_{k = 1}^{K}{m}_{k}{f}_{k}^{\beta , N}(\sigma , m) (A.15)

    According to these mappings, the conditional probability in Lemma 5 takes the following form:

    {\psi }^{\beta , N}\left(\sigma \left|m\right.\right) = {K}^{\beta , N}(m{)}^{-1}{\prod }_{k = 1}^{K}\frac{\left(N{m}_{k}\right)!}{{\prod }_{a\in A}\left(N{m}_{k}{\sigma }_{k}\right(a\left)\right)!}\mathit{exp}\left(N{m}_{k}{f}_{k}^{\beta , N}(\sigma , m)\right) (A.16)

The evolution law of the agents' strategies, when the population is large enough, relies strongly on the convergence property of the functions {\left\{{f}_{k}^{\beta , N}\right\}}_{N > {N}_{0}} . For {m}^{N}\in {L}_{N} , the set {\varSigma }^{N}\left(m\right)\times {L}_{N} approximates the continuous space \varSigma \times \varDelta \left(\varTheta \right) as N\to \infty . This yields the following lemma.

Lemma 6. For each arbitrary Bayesian strategy \sigma \in \varSigma and type distribution m\in \mathit{int}\varDelta \left(\varTheta \right) , there exists a sequence {\left\{\left({\sigma }^{N}, {m}^{N}\right)\right\}}_{N\ge {N}_{0}} with {\sigma }^{N}\in {\varSigma }^{N}\left({m}^{N}\right) and {m}^{N}\in {L}_{N} , for all N\ge {N}_{0} , such that ({\sigma }^{N}, {m}^{N})\to (\sigma , m) holds as N\to \infty .

Proof: We prove this lemma in two steps. First, we find a sequence {m}^{N}\in {L}_{N} that converges to m in total variation distance as N\to \infty . Then, we construct the Bayesian strategy sequence from the sequence {m}^{N}\in {L}_{N} .

In the first step, we define the total variation distance between two distributions x, y\in \varDelta \left(\varTheta \right) as follows:

    {‖x-y‖}_{TV, \varTheta }\triangleq \frac{1}{2}{\sum }_{k = 1}^{K}\left|{x}_{k}-{y}_{k}\right| (A.17)

It is known that {m}_{k}^{N}\in \left\{0, \frac{1}{N}, ..., \frac{N}{N}\right\} if {m}^{N}\in {L}_{N} . Therefore, if m\in \varDelta \left(\varTheta \right) , then for each 1\le k\le K there exists {m}_{k}^{N}\in \left\{0, \frac{1}{N}, ..., \frac{N}{N}\right\} such that \left|{m}_{k}-{m}_{k}^{N}\right|\le \frac{1}{N} holds. Thus, for each N , a vector {m}^{N} can be found such that {‖{m}^{N}-m‖}_{TV, \varTheta }\le \frac{K}{2N} . Hence, for small enough \delta > 0 , the set {N}^{\delta }\left(m\right)\triangleq \left\{y\in \varDelta \left(\varTheta \right)\left|{‖y-m‖}_{TV, \varTheta } < \delta \right.\right\} is an open ball around m that contains all {m}^{N} with N\ge N\left(\delta \right) , where N\left(\delta \right) is an appropriate integer. Therefore, {m}^{N}\to m in total variation distance.
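
A short sketch of this first step: round a type distribution m onto the lattice {L}_{N} and check the total variation bound {‖{m}^{N}-m‖}_{TV, \varTheta }\le \frac{K}{2N} . The rounding rule used below (floor, then repair the largest remainders) is one convenient choice assumed for the sketch, not a rule prescribed by the paper.

```python
import numpy as np

def tv_distance(x, y):
    """Total variation distance (A.17): half the L1 distance."""
    return 0.5 * np.sum(np.abs(x - y))

def round_to_lattice(m, N):
    """Round m onto L_N = {0, 1/N, ..., N/N}^K: floor N*m, then assign the
    leftover 1/N units to the largest remainders (largest-remainder rule)."""
    scaled = N * np.asarray(m, dtype=float)
    base = np.floor(scaled).astype(int)
    leftover = N - base.sum()
    order = np.argsort(scaled - base)[::-1]
    base[order[:leftover]] += 1
    return base / N

m = np.array([0.21, 0.46, 0.33])
for N in (10, 100, 1000):
    mN = round_to_lattice(m, N)
    print(N, mN, tv_distance(m, mN), "<=", len(m) / (2 * N))
```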

In the second step, given the identified prior distribution sequence {\left({m}^{N}\right)}_{N\ge {N}_{0}} , for all N\ge {N}_{0} we take {\sigma }^{N}\in {\varSigma }^{N}\left({m}^{N}\right) and measure distances via the maximum norm on the space \varSigma . That is, for all \sigma , \sigma {'}\in \varSigma , we have:

    {‖\sigma -\sigma {'}‖}_{TV, \varSigma }\triangleq \underset{1\le k\le K}{max}{‖{\sigma }_{k}-{\sigma }_{k}^{{'}}‖}_{TV} (A.18)

As in the first step, for all 1\le k\le K , the distance between {\sigma }_{k}^{N} and {\sigma }_{k} is bounded by:

    {‖{\sigma }_{k}^{N}-{\sigma }_{k}‖}_{TV}\le \frac{n}{2N{m}_{k}^{N}} (A.19)

    Then, for all large enough values of N , we have:

    {‖\sigma -{\sigma }^{N}‖}_{TV, \varSigma }\le \frac{n}{2N}\underset{1\le k\le K}{max}\frac{1}{{m}_{k}^{N}} (A.20)

Because {m}^{N}\to m\in \mathit{int}\varDelta \left(\varTheta \right) , for large enough N and all 1\le k\le K there exists \varepsilon > 0 such that {m}_{k}^{N}\ge \varepsilon > 0 holds. For small enough \delta > 0 , the neighborhood {N}^{\delta }\left(\sigma \right) of the kind introduced in the first step can be found, and it is observed that {\sigma }^{N}\in {N}^{\delta }\left(\sigma \right) for N\ge N\left(\delta \right) . Thus, Lemma 6 is proven: QED.

The above reasoning proves that every pair (\sigma , m)\in \varSigma \times \varDelta \left(\varTheta \right) can be approximated by a convergent discrete sequence ({\sigma }^{N}, {m}^{N}) , so the limit can be measured directly in this limiting process.

Lemma 7. For all 1\le k\le K and every sequence {\left\{\left({\sigma }^{N}, {m}^{N}\right)\right\}}_{N\ge {N}_{0}} with limit (\sigma , m)\in \varSigma \times \varDelta \left(\varTheta \right) , where {\sigma }^{N}\in {\varSigma }^{N}\left({m}^{N}\right), {m}^{N}\in {L}_{N} , we have \underset{N\to \infty }{lim}{f}_{k}^{\beta , N}({\sigma }^{N}, {m}^{N}) = \frac{1}{\beta }{f}_{k}^{\beta }(\sigma , m) , where {f}_{k}^{\beta }:\varSigma \times \varDelta \left(\varTheta \right)\to \mathbb{R} is the continuous function {f}_{k}^{\beta }(\sigma , m)\triangleq ⟨{\sigma }_{k}, {\theta }_{k}⟩+{\sum }_{l\ge k}\frac{{m}_{l}}{1+{\delta }_{kl}}⟨{\sigma }_{k}, {\varphi }_{kl}^{\beta }{\sigma }_{l}⟩ .

Proof. The asymptotic behavior of the function {\varPhi }_{k}^{a}\left(\right) , whose dominant component involves the deterministic quantity {\varphi }_{k, l}^{\beta , N}(a, b) = \frac{2\beta \mathit{exp}(v(a, b)/\beta )}{{\xi }_{kl}^{\beta }} , should be quantified. Thus, for all 1\le k, l\le K and a, b\in A , we have:

    \underset{N\to \infty }{lim}\frac{1}{N}{\varphi }_{k, l}^{\beta , N}(a, b) = 0, \underset{N\to \infty }{lim}{\varphi }_{k, l}^{\beta , N}(a, b) = \frac{2\mathit{exp}(v(a, b)/\beta )}{{\xi }_{kl}^{\beta }} (A.21)

This implies that the first-order approximation \mathit{log}\left(1+\frac{{\varphi }_{kl}^{\beta , N}(a, b)}{N\beta }\right) = \frac{{\varphi }_{kl}^{\beta , N}(a, b)}{N\beta }+O\left({N}^{-2}{\beta }^{-1}\right) properly describes the asymptotic behavior for large enough N . Furthermore, for all a\in A and 1\le k\le l\le K , we find that:

\mathit{log}{\varPhi }_{kk}^{a}({\sigma }^{N}, \beta , N) = \frac{{\theta }_{k}\left(a\right)}{\beta }+{\sum }_{b\ge a}\left(\frac{N{m}_{k}^{N}{\sigma }_{k}^{N}\left(b\right)-{\delta }_{a, b}}{1+{\delta }_{a, b}}\right)\mathit{log}\left(1+\frac{{\varphi }_{kk}^{\beta , N}(a, b)}{N\beta }\right) \\ = \frac{1}{\beta }\left[{\theta }_{k}\left(a\right)+\frac{1}{2}{m}_{k}^{N}{\sigma }_{k}^{N}\left(a\right){\varphi }_{kk}^{\beta , N}(a, a)+{\sum }_{b > a}{m}_{k}^{N}{\sigma }_{k}^{N}\left(b\right){\varphi }_{kk}^{\beta , N}(a, b)+O\left(\frac{1}{N}\right)\right] (A.22)

    and:

    \mathit{log}{\varPhi }_{kl}^{a}({\sigma }^{N}, \beta , N) = \frac{1}{\beta }\left[{m}_{l}^{N}{\sum }_{b\in A}{\sigma }_{l}^{N}\left(b\right){\varphi }_{kl}^{\beta , N}(a, b)+O\left(\frac{1}{N}\right)\right] (A.23)

    Therefore, for all 1\le k\le l\le K , we have:

    \begin{gathered} f_k^{\beta , N}({\boldsymbol{\sigma} ^N}, {\boldsymbol{m}^N}) = \sum\limits_{a \in \mathscr{A}} {{\sigma} _k^N(a)\sum\limits_{l \geqslant k} {\log \Phi _{kl}^a({\boldsymbol{\sigma} ^N}, \beta , N)} } \\ = \frac{1}{\beta }\left[ {\left\langle {\boldsymbol{\sigma} _k^N, {\boldsymbol{\theta}_k}} \right\rangle + \sum\limits_{l \geqslant k} {\frac{{m_l^N}}{{1 + {\delta _{kl}}}}(\boldsymbol{\sigma} _k^N, \varphi _{kl}^{\beta , N}\boldsymbol{\sigma} _l^N)} + O(1/N)} \right] \\ = \frac{1}{\beta }\left( {f_k^\beta ({\boldsymbol{\sigma} ^N}, {\boldsymbol{m}^N}) + O(1/N)} \right) \\ \end{gathered} (A.24)

Furthermore, the function {f}_{k}^{\beta , N}({\sigma }^{N}, {m}^{N}) so defined has a limit as N\to \infty . Thus, Lemma 7 is proven: QED.

    Corollary 3. Function \{{f}^{\beta, N}{\}}_{N\ge {N}_{0}} converges, a.s., to limit function {f}^{\beta } .

    Proof: It directly follows from Lemmas 6 and 7: QED.

As far as the processes obtained are concerned, all states can be determined via a generated type sequence whose distribution follows the common law q with the i.i.d. property. Therefore, we have:

    Lemma 8. At N\to \infty , we get {M}^{N}\stackrel{\hspace{1em}a.s.\hspace{1em}}{\to }q .

Proof. Let {‖m-q‖}_{TV}\triangleq \frac{1}{2}{\sum }_{k = 1}^{K}\left|{m}_{k}-{q}_{k}\right| , for all m, q\in \varDelta \left(\varTheta \right) , be the total variation distance on \varDelta \left(\varTheta \right) . Recall the common law q\in \mathit{int}\varDelta \left(\varTheta \right) of the types {\tilde{\tau }}_{{j}_{i}}^{\left(N\right)} and consider the countable class of open sets \left\{{B}_{q, \varepsilon }\right\} , where \varepsilon \ge 0 and {B}_{q, \varepsilon }\triangleq \{m\in \varDelta (\varTheta ){\left|‖m-q‖\right.}_{TV} > \varepsilon \} . This law can be distributed among these sets according to the prior process \{{M}^{N}{\}}_{N\ge {N}_{0}} :

    {\widehat{P}}_{q}^{N}\left({B}_{q, \varepsilon }\right) = {\widehat{P}}_{q}^{N}\left(\left\{\tau \left|{M}^{N}\left(\tau \right)\in {B}_{q, \varepsilon }\right.\right\}\right) (A.25)

    By invoking Sanov's theorem, we obtain:

    \underset{N\to \infty }{lim}\frac{1}{N}\mathit{log}{\widehat{P}}_{q}^{N}\left({B}_{q, \varepsilon }\right) = -\underset{m\in {B}_{q, \varepsilon }}{inf}h\left(m\left|q\right.\right) (A.26)

where h\left(m\left|q\right.\right)\triangleq {\sum }_{k = 1}^{K}{m}_{k}\mathit{log}\frac{{m}_{k}}{{q}_{k}} is the relative entropy. By the Jensen inequality, h\left(\left|q\right.\right)\ge 0 , with equality holding if, and only if, • = q . Because q\notin {B}_{q, \varepsilon } for all \varepsilon \ge 0 , for each \varepsilon there always exists a constant {c}_{\varepsilon }\in (0, \infty ) such that {\widehat{P}}_{q}^{N}\left({B}_{q, \varepsilon }\right)\le {e}^{-N{c}_{\varepsilon }} . Thus, the set {B}_{q, \varepsilon } can be reduced to an event concerning the prior type distribution. Then, we construct this event as a set as follows:

    {A}_{N}\left(\varepsilon \right)\triangleq \left\{\tau \left|{M}^{N}\left(\tau \right)\in {B}_{q, \varepsilon }\right.\right\} = \left\{\tau \left|{‖{M}^{N}\left(\tau \right)-q‖}_{TV} > \varepsilon \right.\right\} (A.27)

This is an event of {P}_{q} -probability {\widehat{P}}_{q}^{N}\left({B}_{q, \varepsilon }\right) . Using the exponential bound above, we get:

    {\sum }_{N\ge {N}_{0}}{P}_{q}\left({A}_{N}\right(\varepsilon \left)\right) = {\sum }_{N\ge {N}_{0}}{P}_{q}^{N}\left({B}_{q, \varepsilon }\right)\le {\sum }_{N\ge {N}_{0}}{e}^{-N{c}_{\varepsilon }} < \infty (A.28)

By invoking the first Borel–Cantelli lemma, we have, for all \varepsilon \in {\mathbb{Q}}_{+} , {P}_{q}\left({\mathit{limsup}}_{N\to \infty }{A}_{N}\left(\varepsilon \right)\right) = 0 , which implies the almost sure convergence of the prior process {\left\{{M}^{N}\left(\tau \right)\right\}}_{N\ge {N}_{0}} to q . Therefore, Lemma 8 is proven: QED.
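
A numerical sketch of the concentration behind Lemma 8: compute the relative entropy h\left(m\left|q\right.\right) of Equation (A.26), and check by Monte Carlo that the empirical type distribution {M}^{N} of i.i.d. draws from q leaves the ball {B}_{q, \varepsilon } with rapidly vanishing probability. The sample sizes, tolerance, and prior q are illustrative assumptions.

```python
import numpy as np

def relative_entropy(m, q, eps=1e-12):
    """h(m | q) = sum_k m_k log(m_k / q_k); zero iff m = q (Jensen)."""
    m = np.clip(m, eps, 1.0)
    return float(np.sum(m * np.log(m / np.asarray(q))))

rng = np.random.default_rng(1)
q = np.array([0.2, 0.5, 0.3])
print("h(m|q) for m=[0.3,0.4,0.3]:", round(relative_entropy([0.3, 0.4, 0.3], q), 4))

eps_ball, trials = 0.05, 2000
for N in (50, 200, 800):
    exits = 0
    for _ in range(trials):
        MN = rng.multinomial(N, q) / N        # empirical type distribution
        if 0.5 * np.sum(np.abs(MN - q)) > eps_ball:   # outside B_{q, eps}
            exits += 1
    # The exit frequency decays roughly like exp(-N c_eps), as in (A.28).
    print(N, "P(M^N outside ball) ≈", exits / trials)
```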

The preferential attachment used in this study is a logit function, and the class {\left\{{\psi }^{\beta , N}\left(\left|{M}^{N}\right.\right)\right\}}_{N\ge {N}_{0}} satisfies the large deviation property. This yields:

    (1\le k\le K):{\overline{f}}_{k}^{\beta , N}(\sigma , m)\triangleq {f}_{k}^{\beta , N}(\sigma , m)+\beta h\left({\sigma }_{k}\right) \\{\overline{f}}^{\beta , N}(\sigma , m)\triangleq {\sum }_{k = 1}^{K}{m}_{k}{\overline{f}}_{k}^{\beta , N}(\sigma , m) (A.29)

Theorem 5 can now be proven as follows. According to Lemma 8, we take a sequence that converges to q with probability one, and denote by {e}_{1} = \left[{e}_{1}\left(1\right), \cdots , {e}_{K}\left(1\right)\right] the Bayesian strategy in which agents of every type play behavior 1. That is, for each 1\le k\le K , {e}_{k}\left(1\right) is the unit vector in {\mathbb{R}}^{n} with unity in the first coordinate and zeros in the other n-1 coordinates. Thus, for all N we have {e}_{1}\in {\varSigma }^{N}\left({m}^{N}\right) . Therefore, for all \sigma \in {\varSigma }^{N}\left({m}^{N}\right) the following equation holds:

    \frac{{\psi }^{\beta , N}\left(\sigma {\left|m\right.}^{N}\right)}{{\psi }^{\beta , N}\left({e}_{1}{\left|m\right.}^{N}\right)} = {\prod }_{k = 1}^{K}\frac{\left(N{m}_{k}\right)!}{{\prod }_{a\in A}\left(N{m}_{k}^{N}{\sigma }_{k}^{N}\right(a\left)\right)!}\mathit{exp}\left[N{m}_{k}^{N}\left({f}_{k}^{\beta , N}({\sigma }^{N}, {m}^{N})-{f}_{k}^{\beta , N}({e}_{1}, {m}^{N})\right)\right] (A.30)

    Taking the logarithm of both sides and multiplying by \beta /N , we get:

    \begin{gathered} \frac{\beta }{N}\log \frac{{{\psi ^{\beta , N}}(\boldsymbol{\sigma} {{\left| \boldsymbol{m} \right.}^N})}}{{{\psi ^{\beta , N}}({\boldsymbol{e}_1}{{\left| \boldsymbol{m} \right.}^N})}} = \frac{\beta }{N}\sum\limits_{k = 1}^K {\log \left( {\frac{{(N{m_k})!}}{{\prod\limits_{a \in \mathscr{A}} {(Nm_k^N\sigma _k^N(a))!} }}} \right)} \\ + \sum\limits_{k = 1}^K {m_k^N\left( {f_k^{\beta , N}({\boldsymbol{\sigma} ^N}, {\boldsymbol{m}^N}) - f_k^{\beta , N}({\boldsymbol{e}_1}, {\boldsymbol{m}^N})} \right)} \\ \end{gathered} (A.31)

    Taking the limitation of the combination term and considering the Stirling formula n!\cong \sqrt{2\pi n}(n/e{)}^{n} , we obtain:

\frac{1}{N}\mathit{log}\left(\frac{\left(N{m}_{k}^{N}\right)!}{{\prod }_{a\in A}\left(N{m}_{k}^{N}{\sigma }_{k}^{N}(a)\right)!}\right) = {m}_{k}^{N}\left(h\left({\sigma }_{k}^{N}\right)+O(1/N)\right) (A.32)
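
The Stirling step in Equation (A.32) can be checked numerically: \frac{1}{N} times the log of the multinomial coefficient approaches {m}_{k}h\left({\sigma }_{k}\right) . The sketch below compares the exact log-gamma computation with the entropy approximation for growing N ; the particular {\sigma }_{k} and {m}_{k} are invented for the check.

```python
import numpy as np
from math import lgamma

def log_multinomial(n, counts):
    return lgamma(n + 1) - sum(lgamma(c + 1) for c in counts)

sigma_k = np.array([0.5, 0.3, 0.2])
m_k = 0.4
for N in (100, 1000, 10000):
    counts = np.rint(N * m_k * sigma_k).astype(int)
    n_k = counts.sum()                        # = N * m_k up to rounding
    lhs = log_multinomial(n_k, counts) / N
    rhs = m_k * (-np.sum(sigma_k * np.log(sigma_k)))  # m_k * h(sigma_k)
    print(N, round(lhs, 5), round(rhs, 5))    # lhs -> rhs as N grows
```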

Invoking Lemma 8, we deduce that {\left\{({\sigma }^{N}, {m}^{N})\right\}}_{N\ge {N}_{0}} converges and that {f}^{\beta , N}({\sigma }^{N}, {m}^{N})\to {f}^{\beta }(\sigma , q) , so that:

\underset{N\to \infty }{lim}\frac{\beta }{N}\mathit{log}\frac{{\psi }^{\beta , N}\left(\sigma {\left|m\right.}^{N}\right)}{{\psi }^{\beta , N}\left({e}_{1}{\left|m\right.}^{N}\right)} = {\tilde{f}}^{\beta }(\sigma , q)-{\tilde{f}}^{\beta }({e}_{1}, q) (A.33)

where {\tilde{f}}^{\beta }\left(\left|\right.\right) incorporates the preferential attachment mechanism, i.e., {\tilde{f}}^{\beta }\left(\left|\right.\right) is a logit-type function. Next, let {\sigma }_{*}^{N} be a maximizer of:

    {\tilde{f}}^{\beta , N}({\sigma }^{N}, {m}^{N})\triangleq {\sum }_{k = 1}^{K}{m}_{k}^{N}\left[{\tilde{f}}_{k}^{\beta , N}({\sigma }^{N}, {m}^{N})+\beta h\left({\sigma }_{k}^{N}\right)\right], {\sigma }^{N}\in {\varSigma }^{N}\left({m}^{N}\right) (A.34)

Based on the uniform convergence principle, as N\to \infty we get {\tilde{f}}^{\beta , N}({\sigma }_{*}^{N}, {m}^{N})\to {\tilde{f}}^{\beta }({\sigma }_{*}, q) , and the limit point is the maximum value of {\tilde{f}}^{\beta }\left(\left|q\right.\right) . That is,

    \underset{N\to \infty }{lim}\frac{\beta }{N}\mathit{log}{\psi }^{\beta , N}\left({\sigma }_{*}^{N}\left|{m}^{N}\right.\right) = 0 (A.35)

    Considering Equation (A.30), we have, for all {\sigma }^{N}\to \sigma \in \varSigma :

\begin{gathered} \mathop {\lim }\limits_{N \to \infty } \frac{\beta }{N}\log {\psi ^{\beta , N}}\left( {{\boldsymbol{\sigma}^N}\left| {{\boldsymbol{m}^N}} \right.} \right) = \mathop {\lim }\limits_{N \to \infty } \left[ {\frac{\beta }{N}\log \frac{{{\psi ^{\beta , N}}({\boldsymbol{\sigma} ^N}\left| {{\boldsymbol{m}^N}} \right.)}}{{{\psi ^{\beta , N}}({\boldsymbol{e}_1}\left| {{\boldsymbol{m}^N}} \right.)}}} \right. \\ \left. { - \frac{\beta }{N}\log \frac{{{\psi ^{\beta , N}}(\boldsymbol{\sigma} _*^N\left| {{\boldsymbol{m}^N}} \right.)}}{{{\psi ^{\beta , N}}({\boldsymbol{e}_1}\left| {{\boldsymbol{m}^N}} \right.)}} + \frac{\beta }{N}\log {\psi ^{\beta , N}}(\boldsymbol{\sigma} _*^N\left| {{\boldsymbol{m}^N}} \right.)} \right] \\ = {{\tilde f}^\beta }(\boldsymbol{\sigma} , \boldsymbol{q}) - {{\tilde f}^\beta }({\boldsymbol{\sigma} _*}, \boldsymbol{q}) \\ = - {r^\beta }(\boldsymbol{\sigma} , \boldsymbol{q}) \\ \end{gathered} (A.36)

    Thus, Theorem 5 is proven: QED.



© 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0).