1. Introduction
Over the past two decades, complex adaptive systems have been actively studied, including the evolution of economic systems [1,2], the theory of emergence [3,4], and the bifurcation characteristics [10] of social [5,6], ecological [7], and epidemic [8,9] systems. Minor perturbations were found to escalate into tectonic shifts in the system, or even abrupt changes of the system's properties and functions [11], resulting in symmetry-breaking [12] or synchronization [13]. In essence, spontaneous switches in group behavior derive from interactions between individuals [14], during which some behaviors are learned [15] or propagate [16], causing the structure and behavior of the system (or collective) to change [17] or reach a critical state [18]. The common property is that their detailed structure cannot be explained exactly from a mathematical viewpoint. To this end, stochastic differential game theory has been introduced [19], reflecting the interactive behavior of agents and the optimal strategy coupled with a temporary deterministic structure [20] and stochastic complex networks [21]. Atar and Budhiraja described the evolution laws under various agent interaction rules in different fields [22]. However, the interactions that happen in a Multi-Local-Worlds (MLW) system are both synthesized (they comprise not only cooperative but also non-cooperative games) and stochastic in dynamical configuration (each agent selects interaction partners according to his/her benefit), making the problem much more difficult to resolve mathematically; most existing results therefore fail to capture real complex adaptive systems.
Classic analysis methods cannot deal with this dual randomness because the agent's diverse behaviors and the system's configuration change randomly over time. Few results obtained with classic methods reveal what the system's behavior converges to as time tends to infinity or becomes relatively large. Furthermore, most studies have either considered the mixed interaction of non-cooperative/cooperative games on a stable MLW graph or considered random MLW complex networks with a Boolean game between individuals, both of which are far from the properties of real complex adaptive systems. In this sense, new modeling methods should be introduced. We constructed a multi-agent model to analyze the evolution law of complex adaptive systems. We posit that the system's behavior must satisfy an invariable distribution if agents act according to this model. Furthermore, once this invariable distribution law is determined, strategies for economic issues, political events, social questions, and environmental influence can be made scientifically.
Generally, each agent in a complex adaptive system can interact only with local agents; their interaction relies on the system's local topological configuration. Brian concluded that imperfect information from the other agents, with whom an arbitrary agent acts directly, determines this agent's behavior [23]. Jiang et al. analyzed how the topological configuration of nematic disclination networks affects the interaction between agents and agents' behavior [24]. Furthermore, Maia et al. reported that if each agent in the system can change its interacting targets (i.e., its "neighbors") to obtain more benefits, then complex nonlinear interactions arise between subjects and between subjects and environments, which lead to the phenomenon of "emergence in large numbers" in the system [25]. Thus, the evolution of microscopic individuals makes the macro system display a new state and a new structure [26]. In this sense, the local topological configuration is not stable but dynamic [27]. More importantly, the system's state and properties are affected adaptively by the environment [28], and the environment is in turn affected by the system's state and properties [29], which produces an adaptive and evolutionary process [30]. Scholars have studied complex adaptive systems with these properties. If the interactivity between agents is very simple (yes or no, for example), the problem is one of complex networks; if the system structure is deterministic and constant, the problem is one of game theory. However, most complex adaptive systems have both properties. Therefore, neither the theory of complex networks nor stochastic game theory alone can resolve the problem, and new methods must be introduced to study the properties of optimal agent strategies in complex adaptive systems.
Generally, a system can be subdivided into multiple subsystems, with interactions between individuals occurring within the same subsystem and across different subsystems. Miguel et al. analyzed individual and flock behavior in heterogeneous social complex systems and found that much of the complexity comes from the relationships between these subsystems [31]. Similarly, Hassler et al. investigated individual behavior between intergroups under social environmental change [32]. In addition, A et al. considered the co-evolution of agent behavior and local topology configuration [33]. Interactions occur not merely between neighboring individuals; long-range interactions in the spatial dimension significantly affect the critical phase transition of the system. Neffke focused on the phase transition between co-workers who interacted over long ranges [34]. Levin et al. considered political polarization and the corresponding reversal of political forces and found that indirect interaction induces political polarization more easily [35]. Priol et al. constructed an avalanche model to describe the phase transition property driven by long-range interactions between agents [36]. A et al. studied the impact of network topology on the resilience of vehicle platoons [37]. In addition, the rules governing interactions between individuals within an economic or management system are far more complex than those regulating interactions between individuals in the natural world, such as the conservation of momentum that regulates collisions of particles and the black box of biology (such as the behavior adjustment strategies defined by the Ising and Vicsek models). Narizuka and Yoshihiro analyzed the lifetime distribution of adjacency relationships by invoking the corresponding Ising model [38]. Tiokhin et al. studied priority evolution in the social complex system by constructing a corresponding Vicsek model [39].
Colwell reported how simple behavior would change if the environment was disrupted [40]. Moreover, Algeier et al. substantiated that the system structure determined by interactions between individuals is a key contributing factor to the function and nature of the system [41]. Tóth et al. investigated emergence from the structure and function of a system with a leader-follower hierarchy among players and concluded that collective behavior becomes much less stable if the interactions between agents and the leadership of the managers in an arbitrary multi-level complex system cross different layers [42]. Tump et al. studied emergent collective intelligence driven by the interaction between irrational agents and found that the collective intelligence may become polarized, depending on system structure, the nature of interactions, and agent population size [43]. Berner et al. revealed the phenomenon of desynchronization transitions occurring when the multi-layered structure satisfies certain conditions [44]. Zhang et al. analyzed the phase diagram of the symmetric iterated prisoner's dilemma of two companies with a partial imitation rule in a sparse graph, using cases where individuals interacted in varied structures, such as sparse and dense graphs, random and complete graphs, scale-free networks, and small-world networks [45]. Alternatively, Chen studied diverse motion under small noise in the Vicsek model on dense scale-free networks [46].
However, the available random complex network models fail to accurately describe economic and management systems because the interactions in such systems are much more complex than the Boolean interaction defined in those models. Similarly, the various game models are also ineffective because the interaction configuration between agents changes dynamically; a comprehensive model must account for both properties. Furthermore, there are many unknown and unseen scenarios in reality. Due to the lack of real-world data, the conclusions regarding concerted changes in collective behavior reached by classical analysis methods do not apply to unknown scenarios. In this paper, an MLW economic and management complex adaptive system with co-evolving agent behavior and local configuration is considered. This partially closes the gap between reality and the results of previous studies.
The rest of this paper is organized as follows. In Section 2, the characteristics of a complex adaptive system are analyzed, and a hypothesis is proposed. In Section 3, the agent's behavior in the system is analyzed and abstracted into six processes. In Sections 4 and 5, the agent local topology evolution model is constructed, and some theorems are formulated and proven. Section 6 discusses invariable distributions where the parameters β and N tend to zero and infinity, respectively. Section 7 concludes this study.
The innovative features and major contributions of this paper can be listed as follows:
(1) Unlike previous studies, we consider network growth and decline by treating the network as a multi-local-world one. Furthermore, the preferential connection mechanism of an agent is not based on degree; instead, the preferential connection probability is determined by the income over a short time scale. If, and only if, the phase transfer equation based on the preferential link is determined can the evolution characteristics of the system be obtained. Besides the behavior and adaptability of agents, the interaction between the environment and the system is also considered, so the system can reach the corresponding measure coupled with the invariable distribution.
(2) In the case where the agent's behavior noise approaches zero (β→0), the invariable distributions μβ,τ,N0 are proven to satisfy the large deviation principle. This implies that the invariable distribution converges to a certain subset of the state space, concentrating logarithmically on the minimum of the rate function, so that the rate function can be estimated precisely, as shown in Theorem 3. The deterministic state ω of the evolved complex adaptive system can then be estimated from the invariable distribution coupled with the optimal strategy and the local topological structure of the agent, according to Theorem 4.
(3) We prove that if the population of agents in the system tends to infinity, the invariable distribution of a complex adaptive system with co-evolving agent behavior and local topological configuration converges to a certain interval with rate function −rβ(σ,q), according to Theorem 5.
2. Characteristic analysis of an economic and management co-evolutional complex adaptive system
Definition 1. The connected sub-graph Gi,i=1,2,...,m of the topological structure of the Complex Adaptive System G, where Gi⊆G, is called Local World (LW).
To model this system, some variables were introduced, as listed in Table 1.
As mentioned above, at any one time, an arbitrary agent can select one behavior from six sub-processes with probabilities (p1,...,p6), respectively. At the next moment, he/she can select another behavior. Thus, if we regard the agent's behavior as his/her state, the state satisfies a certain state transition equation. Combining this property with the theory of stochastic processes, this complex adaptive system can be simulated by a stochastic process model. An optimal strategy path must exist for a certain system configuration, but since the configuration changes randomly, the optimal strategies vary accordingly.
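As an illustrative sketch of this state-transition view, an agent's behavior at each step can be modeled as a random draw over the six sub-processes. The probabilities (p1,...,p6) and the sub-process labels below are placeholders, since their actual values depend on the system state defined later in the paper:

```python
import random

# Labels for the six sub-processes described in Section 3 (names are
# illustrative shorthand, not the paper's notation).
SUB_PROCESSES = ["adjust", "link_same_LW", "link_other_LW",
                 "delete_link", "link_new_agent", "delete_agent"]

def select_behavior(probs, rng=random.Random(0)):
    """Draw one of the six sub-processes with probabilities (p1,...,p6)."""
    assert abs(sum(probs) - 1.0) < 1e-9, "probabilities must sum to 1"
    return rng.choices(SUB_PROCESSES, weights=probs, k=1)[0]
```

Repeating such draws over time yields exactly the behavior chain that the state transition equation formalizes.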
The agent's irrational behavior will be discussed in the following sections, and then the co-evolutional stochastic process model driven by irrational behavior and the system's configuration will be constructed. The universal approximate analytic solution will be calculated, and two precise solutions will be derived for (i) agent's behavior noise tending to zero and (ii) agents' population tending to infinity.
3. Model of a co-evolutional complex adaptive system
3.1. Agent's behavior
3.1.1. Adjust behavior
An arbitrary agent, ji, will change his/her strategy with probability q1(ω)∈[0,1]. The probability of the agent changing his/her behavior bji,β(⋅|ω) in a certain system configuration should satisfy the following conditions:
where ar is the most effective strategy within the game radius r, av is a strategy in the strategy space, A is the strategy space collection, πji(⋅) is the income of agent ji, a is the strategy of agent ji in the strategy space, and ε is noise. Furthermore, this decision relies not only on the neighbors' strategies and the topological structure g, but also on the environment β.
where the strategy ϕ∗ji(t,xt∗ji) for time t and state xt refers to the best strategy vector for a specific pure strategy, ϕ∗ji(t,xt∗ji)=(ϕ∗j1(t,xt∗j1),ϕ∗j2(t,xt∗j2),⋯,ϕ∗jϑ(t,xt∗jϑ))T and ϑ is the spatial dimension of the agent ji.
Obviously, when an agent selects a strategy ar from the strategy space A, the condition must be satisfied that agent ji obtains a relatively higher payoff with maximum probability when selecting the new strategy. This probability can be rewritten as exp[−β^{−1}(v(t)ji(t,xt∗ji,ϕ∗ji(t,xt∗ji))+o(1))].
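The exponential form above is the standard logit (softmax) choice rule. A minimal numerical sketch (function name and payoff values are illustrative, not from the paper) shows how the noise β interpolates between near-deterministic best response and near-uniform random choice:

```python
import math

def logit_choice_probs(payoffs, beta):
    """Logit choice over strategies: P(a) is proportional to exp(payoff(a)/beta).
    As beta -> 0 the choice concentrates on the maximal-payoff strategy;
    as beta -> infinity it approaches the uniform distribution."""
    m = max(payoffs)  # subtract the max before exponentiating, for stability
    weights = [math.exp((p - m) / beta) for p in payoffs]
    z = sum(weights)
    return [w / z for w in weights]
```

For example, with payoffs [1.0, 0.0], a small β such as 0.01 puts almost all mass on the first strategy, while β = 100 yields nearly 50/50.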
3.1.2. Create a new game relationship with another agent in the same LW
Suppose that an arbitrary agent ji creates a new game with a new agent who is not his/her neighbor with the following probability:
which relies on the rate λ(sub-process 2)ji:Ω→R+ that satisfies κji(ω)=Ni−1⇒λji(ω)=0 and
The probability of agent ji creating a new game with agent ki is defined as λ(sub-process 2)ji(ω)/¯λ(sub-process 2)(ω), implying that the payoff of the coalition of agents ji and ki is larger than or equal to any other coalition's payoff affected by noise ςji=(ςjiki)ki∉¯Nji(ω). In this respect, we get
and
Thus, the inequality ∃˜kl≠kl,˜kl∉¯Nji,P{W{ji,kl}(t,xt∗)≥W{ji,˜kl}(t,xt∗)}=1 must be satisfied to create a new game relationship with agents from Ni−¯Nji who have not interacted with agent ji. This complies with the so-called preferential attachment mechanism, implying that agents prefer to select a game partner who can bring them more payoff than others. Hence each agent is selected according to its payoff coupled with the optimal strategy over the corresponding short time interval. This probability is a multi-dimensional logit function, which means there exists a critical point ˜W(t){ji,ki}0 of the payoff in the selection process such that the probability that a certain agent is selected is far smaller than 0.5 if the agent's payoff is below ˜W(t){ji,ki}0, but far larger than 0.5 and close to 1 if the agent's payoff exceeds ˜W(t){ji,ki}0.
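The threshold behavior around the critical payoff described above can be illustrated with a one-dimensional logistic function. This is a sketch under the simplifying assumption of a scalar payoff; W0 (the critical point) and β are illustrative parameters:

```python
import math

def selection_probability(W, W0, beta):
    """Logistic selection probability around the critical payoff W0:
    well below W0 the probability is near 0, well above W0 it is near 1,
    and exactly at W0 it equals 0.5. Smaller beta sharpens the transition."""
    return 1.0 / (1.0 + math.exp(-(W - W0) / beta))
```

This captures the stated property: payoffs below the critical point give a selection probability far smaller than 0.5, and payoffs above it give a probability close to 1.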
3.1.3. Create a new game relationship with another agent in a different LW (similar to sub-process 2)
In the case where the agent ji creates a new game, the agent ki′ must satisfy the following conditions:
This yields
3.1.4. Delete an existing game relationship
Assume that an arbitrary link (ji,ki′) disappears at rate ξ>0. That is, if this link is deleted with probability ξh+o(h) during a sufficiently small time interval [t,t+h], its expected lifetime is 1/ξ. Therefore, starting from the system state ω=(α,g), the rate at which the system transits to state ˆω=(α,g−(ji,ki′)) must be η(Sub-process 4)β(ω→ˆω)=ξ
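The claim that a constant deletion hazard ξ yields an exponentially distributed link lifetime with mean 1/ξ can be checked numerically. This is a sketch; the value ξ = 2.0, the sample size, and the seed are arbitrary choices:

```python
import random

def mean_link_lifetime(xi, n_samples=100_000, seed=1):
    """Estimate the mean lifetime of a link that disappears with hazard
    rate xi. Lifetimes are exponential, so the estimate should approach
    the theoretical mean 1/xi as n_samples grows."""
    rng = random.Random(seed)
    return sum(rng.expovariate(xi) for _ in range(n_samples)) / n_samples
```

With ξ = 2.0 the estimated mean is close to 1/ξ = 0.5, matching the expected-lifetime statement above.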
3.1.5. Create a game relationship with a new agent in the system
When agent N+1 enters the complex adaptive system, it enters Local World i with probability 1/m and is relabeled Ni+1; it then creates a game relationship with an arbitrary agent ji with probability w(sub-process 5),ji,βNi+1, where:
3.1.6. An agent is deleted from the system
Obviously, when an agent is deleted, the links that expressed its game relationships must be deleted.
The system evolves according to a state transition equation matched with the six sub-processes, which forms a stochastic process, namely a weak Markov process. By analyzing this process, the invariable distribution can be determined.
3.2. Agent local topology evolution model
The complex adaptive system was analyzed and re-described as follows. Suppose that there are several kinds of inhomogeneous agents in the system, whose profile structure is denoted by τ, and suppose that the system state ω=(a,g) consists of the agents' behaviors and the agents' local topological configuration, where β is the behavior noise and N is the total number of agents in the system. For an arbitrary state ω=(a,g), the mappings α:ΩN→AN and γ:ΩN→G[N] are defined to describe the co-evolutionary complex adaptive system with agents' behavior and local topology in detail; i.e., an infinitesimal generator (ηβ,τ,Nω,ω′)ω,ω′∈ΩN coupled with the corresponding rate function set is generated. Thus, a corresponding stochastic process model is constructed, and the transition probability from state ω to ω′ is defined as follows:
where ¯Nji(ω) is the set of neighbors of agent ji, whose payoff in the corresponding small time scale is ˜W(t){ji,ki}≜˜W(t){ji,ki}(t,xt∗{ji,ki},ϕt∗ji(t,xt∗ji),ϕt∗ki(t,xt∗ki)). The above transition probability equation describes the six sub-processes, i.e., ω′ consists of six cases: ((a,a−1(ω)),γ(ω)), (α(ω),γ(ω)⊕(ji,ki)), (α(ω),γ(ω)⊕(ji,ki′)), (α(ω),γ(ω)−(ji,ki′)), (α(ω),γ(ω)⊕(ji,N+1)), and (α(ω),γ(ω)−¯Nji).
Thus, the process (Yβ(t))t≥0 evolves on two time scales, τα and τg, corresponding to behavior change and topological configuration change, respectively. In this respect, the ratio τ≜τg/τα controls the speed of topological configuration change relative to behavior change.
In essence, all the processes involved should be regarded as different Poisson processes, that is, counting processes. As mentioned above, the complex adaptive system with co-evolving behavior and topological configuration, modeled by Γβ=(G,A,π)≡(Ω,F,P,(Xβt)t∈T0)β∈R+, consists of the following information: the agents' behavior α; the graph topological configuration g, drawn from the finite state space Ω=AI×G[I]; the measure P:F→[0,1] of the state transition probability, which makes the randomly changing system measurable; and the Markov renewal process (Yβ(t))t≥0, which depends on two parameters, continuous time (t≥0) and noise (β≥0). Thus, for an arbitrary sequence 0≤t0≤t1≤...≤tk≤t, the stochastic process (Yβ(t))t≥0 is controlled by the random variables generated by J0=0 and, for n≥1, Jn≜inf{t≥Jn−1:Yβ(t)≠Yβ(Jn−1)}; its statistical properties can be described via the jump times (Jn)n∈N0 and holding times Sn+1≜Jn+1−Jn.
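The jump-time/holding-time description of (Yβ(t))t≥0 is the standard construction of a continuous-time Markov chain, which can be sketched as follows. This is illustrative only: the `rates` dictionary stands in for the infinitesimal generator, whose entries the paper defines through the six sub-processes:

```python
import random

def simulate_jump_chain(rates, x0, t_max, rng=random.Random(0)):
    """Simulate a continuous-time Markov chain given its jump rates.
    rates[x] is a dict {y: rate of jumping from x to y}. Holding times
    S_{n+1} = J_{n+1} - J_n are exponential with parameter equal to the
    total outgoing rate; the next state is chosen proportionally to its rate."""
    t, x = 0.0, x0
    path = [(0.0, x0)]            # (jump time J_n, state) pairs
    while True:
        out = rates[x]
        total = sum(out.values())
        if total == 0:            # absorbing state
            break
        t += rng.expovariate(total)
        if t >= t_max:
            break
        x = rng.choices(list(out), weights=list(out.values()), k=1)[0]
        path.append((t, x))
    return path
```

Running this on a toy two-state generator produces a strictly increasing sequence of jump times with the state changing at every jump, exactly the (Jn, Sn) structure described above.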
4. Major results
As analyzed above, the distribution of the agents' state is most important for decision-making. In this study, the process is described by its jump times {Jn}n≥0. The properties of this sample chain determine the system's properties. For an arbitrary graph g with behavior configuration a∈AN, suppose that J0=0 and Xβ,τ,N(0)=Xβ,τ,N0. To specify clearly how the system transfers from one state to another, several parameters must be introduced. For all n≥1, set Jn<∞. Furthermore, for all ω∈ΩN, define ηβ,τ,Nω≜∑ω′∈ΩN∖{ω}ηβ,τ,Nω,ω′∈(0,∞) as the rate at which the system leaves state ω. Thus, for all t≥0, the transition probability of the system in state Xβ,τ,N(t) can be described as follows:
Thus, the invariable distribution of the agent's behavior relies on the parameters ηβ,τ,Nω and ηβ,τ,Nω,ω′. Using the operator ηβ,τ,Nω,ω′, a random graph process Gβ,τ,N={Gβ,τ,N(t)}t≥0, a branching process, can be constructed. To specify this process, the birth of the link (ji,ki′) is governed by a deterministic scalar
Moreover, the death case is denoted by another deterministic scalar
The evolution of these two parameters can be determined via the following two theorems, whose proofs will be given in the next section.
Theorem 1. There exist invariable distributions of complex adaptive systems with preferential attachment Cβ,τ,N and volatile mechanism Ξβ,Njiki′(τ) for the inhomogeneous random graph process Gβ,τ,N, if {\mu ^{\beta, \tau, N}}\left({\omega \left| {\varOmega _a^N} \right.} \right) = \prod\limits_{{j_i} = 1}^{{N_i}} {\prod\limits_{{k_{i'}} > {j_i}} {p_{{j_i}{k_{i'}}}^{\beta, N}{{(\boldsymbol{a}, \boldsymbol{\tau})}^{{\gamma _{{j_i}{k_{i'}}}}(\omega)}}{{\left({1 - p_{{j_i}{k_{i'}}}^{\beta, N}(\boldsymbol{a}, \boldsymbol{\tau})} \right)}^{1 - {\gamma _{{j_i}{k_{i'}}}}(\omega)}}} } .
The interaction probability of the model G\left[N, {\left({p}_{{j}_{i}{k}_{i{'}}}^{\beta, \tau, N}\left(a\right)\right)}_{{k}_{i{'}} > {j}_{i}}\right] can be written as
where {\mathfrak{A}}_{{j}_{i}{k}_{i{'}}}^{\beta, N}(a, \tau): = f\left({\tilde{W}}_{{j}_{i}{k}_{i}}^{\left(\text{sub-process 2}\right)\beta, N}(a, \tau), {\tilde{W}}_{{j}_{i}{k}_{i{'}}}^{\left(\text{sub-process 3}\right)\beta, N}(a, \tau), {\tilde{W}}_{{j}_{i}N+1}^{\left(\text{sub-process 5}\right)\beta, N}(a, \tau)\right) , and {\mathfrak{M}}_{{j}_{i}{k}_{i{'}}}^{\beta, N}\left(\tau \right) = f{'}\left({\tilde{W}}_{{j}_{i}{k}_{i{'}}}^{\left(\text{sub-process 4}\right)\beta, N}(a, \tau), {\tilde{W}}_{{j}_{i}, {\overline{N}}^{{j}_{i}}}^{\left(\text{sub-process 6}\right)\beta, N}(a, \tau)\right) .
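Since the proof of Theorem 1 later sets the link probability to the ratio of the birth scalar to the sum of the birth and death scalars, p = 𝔄/(𝔄+𝔐), this relationship can be illustrated with a minimal sketch (the rate values in the test are placeholders):

```python
def link_probability(birth_rate, death_rate):
    """Stationary link probability when a link is created at rate
    `birth_rate` (the scalar A) and deleted at rate `death_rate`
    (the scalar M): p = A / (A + M), so the odds p / (1 - p) equal A / M."""
    return birth_rate / (birth_rate + death_rate)
```

This is the stationary occupation probability of a two-state (present/absent) birth-death link, which is why the odds reduce to the ratio of the two mechanisms.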
If an arbitrary cross-section is taken from the strategy space, the link probability on this cross-section can be determined. So, {p}_{{j}_{i}{k}_{i{'}}}^{\beta, N}: = {\left({p}_{{j}_{i}{k}_{i{'}}}^{\beta, N}(a, b)\right)}_{(a, b)\in {A}^{2}}\in {\mathbb{R}}_{+}^{n\times n}, {p}^{\beta, N}: = {\left({p}_{{j}_{i}{k}_{i{'}}}^{\beta, N}\right)}_{1\le {j}_{i}, {k}_{i{'}}\le K} denotes the profile of the interactivity of the agents in the complex adaptive system. Reconsidering the behavior coupled with the payoff in the corresponding short time scale, the detailed strategies of the system can be determined. Furthermore, whether the invariable distribution can be described as a function of the initial state is defined via Theorem 2.
Theorem 2. The invariable distribution of the Markov process {\left\{{X}^{\beta, \tau, N}\left(t\right)\right\}}_{t\ge 0} is a Gibbs measure {\mu }^{\beta, \tau, N}\left(\omega \right) = \frac{\mathit{exp}({\beta }^{-1}{H}^{\beta, N}(\omega, \tau))}{{\sum }_{\omega {'}\in {\varOmega }^{N}}\mathit{exp}({\beta }^{-1}{H}^{\beta, N}(\omega {'}, \tau))} = \frac{{\mu }_{0}^{\beta, \tau, N}\left(\omega \right)\mathit{exp}({\beta }^{-1}V(\omega, \tau))}{{\sum }_{\omega {'}\in {\varOmega }^{N}}{\mu }_{0}^{\beta, \tau, N}\left(\omega {'}\right)\mathit{exp}({\beta }^{-1}V(\omega {'}, \tau))} , where, for all \omega \in {\varOmega }^{N} , {H}^{\beta, N}(\omega, \tau)\triangleq V(\omega, \tau)+\beta \mathit{log}{\mu }_{0}^{\beta, \tau, N}\left(\omega \right) , and {\mu }_{0}^{\beta, \tau, N}\left(\omega \right)\triangleq {\prod }_{{j}_{i} = 1, {k}_{i{'}} > {j}_{i}}^{N}\left(\frac{2}{N{\mathfrak{M}}_{-}^{N}}\right) .
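The Gibbs form in Theorem 2 can be computed directly on a small finite state space. This is a sketch: the potential values H are placeholders, and the max-shift before exponentiating is a numerical-stability device, not part of the theorem:

```python
import math

def gibbs_measure(H, beta):
    """Gibbs measure mu(w) = exp(H(w)/beta) / sum_w' exp(H(w')/beta)
    over a finite state space given as a dict {state: potential H(state)}."""
    m = max(H.values())  # shift by the max so the exponentials cannot overflow
    weights = {s: math.exp((h - m) / beta) for s, h in H.items()}
    z = sum(weights.values())
    return {s: w / z for s, w in weights.items()}
```

As β shrinks, the measure concentrates on the states maximizing H, which is the mechanism behind the β→0 results announced in contribution (2).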
To better convey the logical structure and sequence of the proposed mathematical derivations, Figure 1 provides a flowchart giving a visual overview of the progression and interrelationships of the analytical steps undertaken in this work. It shows the development from the initial assumptions and conditions to the conclusion, highlighting key decision points and derivation milestones.
5. Proofs of Theorems 1 and 2
The above process can be described via jump times \{{J}_{n}{\}}_{n\ge 0} as in [47]. First, suppose that jump times comprise a sample chain of set {X}^{\beta, \tau, N}\left({J}_{n}\right) = {X}_{n}^{\beta, \tau, N} , then set {F}_{n} = \sigma \left(\{{J}_{0}, {J}_{1}, ..., {J}_{n}\}, \{{X}_{0}^{\beta, \tau, N}, {X}_{1}^{\beta, \tau, N}, ..., {X}_{n}^{\beta, \tau, N}\}\right), n\ge 0 . For an arbitrary graph g with behavior configuration a\in {A}^{N} , suppose that {J}_{0} = 0 and {X}^{\beta, \tau, N}\left(0\right) = {X}_{0}^{\beta, \tau, N} . To specify the phenomenon that the system can be transferred from one state to another clearly, several parameters have to be introduced. For all n\ge 1 , set {J}_{n} < \infty ; furthermore, for all \omega \in {\varOmega }^{N} , define
as the rate at which the system leaves the state \omega . Therefore, for all t\ge 0 , the transition probability of the system in state {X}^{\beta, \tau, N}\left(t\right) can be described as
where {\eta }_{\omega }^{\beta, \tau, N} and {\eta }_{\omega, \omega {'}}^{\beta, \tau, N} were defined above.
As seen from the above, the agent's historical behavior, the behaviors of others, and the interaction between the environment and the system are crucial for choosing a rational strategy and should be grasped by the complex adaptive system. The environment in which an agent operates comprises the external environment and the agents' behaviors together with the payoffs of his/her neighbors. Moreover, the history of the agent's behaviors should be considered when selecting a strategy at an arbitrary time. Furthermore, the behavior of one agent is affected by his/her neighbors, which means the noises of each agent's behavior are superimposed, so the total noise may reach a critical value as the system develops. An inhomogeneous environment can occur when a larger or more complex noise exceeds this critical value. Thus, two cases (namely, (i) growth, controlled by sub-processes 2, 3, and 5, and (ii) decay, controlled by sub-processes 4 and 6) are introduced to describe system evolution. Insofar as these sub-processes are independent, they can be superimposed via the addition theorem.
Using the operator {\eta }_{\omega, \omega {'}}^{\beta, \tau, N} , a random graph process {G}^{\beta, \tau, N} = {\left\{{G}^{\beta, \tau, N}\left(t\right)\right\}}_{t\ge 0} , which is a branching process, can be constructed. To specify this process, the birth of the link ({j}_{i}, {k}_{i{'}}) is generated by a deterministic scalar {\mathfrak{A}}_{{j}_{i}{k}_{i{'}}}^{\beta, N}(a, \tau): = f\left({\tilde{W}}_{{j}_{i}{k}_{i}}^{\left(\text{sub-process 2}\right)\beta, N}(a, \tau), {\tilde{W}}_{{j}_{i}{k}_{i{'}}}^{\left(\text{sub-process 3}\right)\beta, N}(a, \tau), {\tilde{W}}_{{j}_{i}N+1}^{\left(\text{sub-process 5}\right)\beta, N}(a, \tau)\right) , while the death case is denoted by another deterministic scalar {\mathfrak{M}}_{{j}_{i}{k}_{i{'}}}^{\beta, N}\left(\tau \right) = f{'}\left({\tilde{W}}_{{j}_{i}{k}_{i{'}}}^{\left(\text{sub-process 4}\right)\beta, N}(a, \tau), {\tilde{W}}_{{j}_{i}, {\overline{N}}^{{j}_{i}}}^{\left(\text{sub-process 6}\right)\beta, N}(a, \tau)\right) . Obviously, these two scalars express the growth and decay mechanisms introduced above. Suppose that f\left(x\right), f{'}\left(x\right) are linear and equal to the occurrence probabilities of these six sub-processes. These six sub-processes should be integrated into a pure process. Furthermore, because the co-evolutionary process constructed is a branching process, i.e., a special Markov process, there must exist an invariable distribution in the development of the complex adaptive system [47].
Reconsidering the complex adaptive system, each agent selects one kind of behavior from the six sub-processes with a certain probability, which is affected by the behaviors of his/her neighbors. In other words, agents in this system are inhomogeneous and can select a behavior randomly, thereby changing their behavior and their property. Invoking the agent property configuration \tau = ({\tau }_{{1}_{1}}, ..., {\tau }_{{n}_{1}}, ..., {\tau }_{{1}_{m}}, ..., {\tau }_{{n}_{m}}) , it can be seen that an agent changes his/her property by changing his/her behavior. A random variable, \varTheta = \{{\theta }_{1}, ..., {\theta }_{\vartheta }\} , is introduced to express the fact that an agent changes his/her property randomly. Similarly, the behaviors of agents in the system can be denoted by a scalar a = ({a}_{{1}_{1}}, ..., {a}_{{n}_{1}}, ..., {a}_{{1}_{m}}, ..., {a}_{{n}_{m}}) . We call agents {j}_{i} , {j}_{l} , {k}_{i{'}} , and {k}_{l{'}} homogeneous if {\tau }_{{j}_{i}} = {\theta }_{{j}_{l}}, {\tau }_{{k}_{i{'}}} = {\theta }_{{k}_{l{'}}} are satisfied. Furthermore, the distribution law of these agents' behaviors can be obtained from the distribution of \tau, a . The corresponding result, Theorem 1, and its proof are given as follows.
Definition 3. A scalar {C}^{\beta, \tau, N} satisfying an exponential form is called the preferential attachment mechanism of the complex adaptive system if {C}_{{j}_{i}{k}_{l}}^{\beta, N}(a, \tau) = \frac{2}{N}\mathit{exp}(v({a}_{{j}_{i}}, {a}_{{k}_{l}})) .
Definition 4. The admissible volatile mechanism {\varXi }_{{j}_{i}{k}_{i{'}}}^{\beta, N}\left(\tau \right) is half-anonymous, if {\xi }_{{j}_{i}{k}_{i{'}}}^{\beta, N}\left(\tau \right) = {\xi }_{{j}_{l}{k}_{l{'}}}^{\beta, N} when {\tau }_{{j}_{i}} = {\theta }_{{j}_{l}}, {\tau }_{{k}_{i{'}}} = {\theta }_{{k}_{l{'}}} is satisfied, for all N\ge 2, \tau \in {\varTheta }^{N} and all agents {j}_{i}, {k}_{l}\in N .
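The rate in Definition 3 can be evaluated directly. This is a sketch: the payoff v(a_ji, a_kl) is passed in as a plain number rather than computed from the paper's payoff function:

```python
import math

def preferential_attachment_rate(N, v_payoff):
    """Definition 3 sketch: C = (2/N) * exp(v(a_ji, a_kl)), where v_payoff
    stands for the pairwise payoff v(a_ji, a_kl). Larger payoffs give
    exponentially larger attachment rates; larger populations dilute them."""
    return (2.0 / N) * math.exp(v_payoff)
```

The exponential dependence on the payoff is what makes high-payoff partners overwhelmingly more likely to be selected, consistent with the preferential attachment discussion in Section 3.1.2.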
Proof of Theorem 1. For the equilibrium condition considered, the invariable distribution is set to the following form, holding for all \omega, \omega {'}\in {\varOmega }_{a}^{N} :
Upon normalization, given a constant {Z}^{\beta, \tau, N}\left(a\right) , the above equation has the following unique solution
For all {j}_{i} = {1}_{1}, {2}_{1}, ..., {N}_{1}, {1}_{2}, ..., {N}_{2}, ..., N and {k}_{i{'}} > {j}_{i} , define
Define the function f on {\varOmega }_{a}^{N} :
Substituting the above parameters into (26), the invariable distribution should be written as:
Setting {p}_{{j}_{i}{k}_{i{'}}}^{\beta, N}(a, \tau) = \frac{{\mathfrak{A}}_{{j}_{i}{k}_{i{'}}}^{\beta, N}(a, \tau)}{{\mathfrak{A}}_{{j}_{i}{k}_{i{'}}}^{\beta, N}(a, \tau)+{\mathfrak{M}}_{{j}_{i}{k}_{i{'}}}^{\beta, N}\left(\tau \right)} , the denominator of formula (28) can be derived as
Furthermore, for all \omega \in {\varOmega }_{a}^{N} , we have
Combining (30) and (31), the measure of \omega \in {\varOmega }_{a}^{N} can be obtained directly:
Thus, Theorem 1 is proven. QED.
Due to the property of the invariable distribution, the following property should be noted:
Using the recursion method, this invariable distribution can be expressed in a form with a certain initial state. To do this, consider the function f on {\varOmega }_{a}^{N} :
where
It is concluded that, when the time scale is relatively large, the behavior of an arbitrary agent interacting with others must satisfy the distribution over states (agents' behavior and agents' local topological structure) arising in the evolutionary process of the complex adaptive system; that is, it reaches the measure coupled with the invariable distribution. Because the system state consists of the agents' behavior and local topological configuration, the distribution of the corresponding optimal strategy coupled with a constant graph topology can be obtained by analyzing the invariable distribution of the system state; this reveals which strategies of an agent are most probable in a certain small time scale and how long these strategies hold during the evolution of the complex adaptive system.
Note 1. The preferential attachment mechanism drives the operation of the complex adaptive system. Furthermore, the half-anonymous volatile mechanism is separable, which means the system's character can be obtained more easily. Consider the likelihood ratio function:
\frac{{p}_{{j}_{i}{k}_{i{'}}}^{\beta, N}\left(a\right)}{1-{p}_{{j}_{i}{k}_{i{'}}}^{\beta, N}\left(a\right)} = \frac{{\mathfrak{A}}_{{j}_{i}{k}_{i{'}}}^{\beta, N}(a, \tau)}{{\mathfrak{M}}_{{j}_{i}{k}_{i{'}}}^{\beta, N}\left(\tau \right)}
For all 1\le {j}_{i}, {k}_{i{'}}\le K and a, b\in A , define the scalar
and the symmetric matrix
Therefore, when considering an arbitrary agent {j}_{i} with strategy a and agent {k}_{i{'}} with strategy b , the probability that they interact can be written as:
Equation (38) describes the case in which agent {j}_{i} interacts with agent {k}_{i{'}} according to a certain probability under some noise. Since parameters a and b are taken arbitrarily, this probability can be characterized on an arbitrary cross-section of the strategy space. That is, if an arbitrary cross-section is taken from the strategy space, the link probability on this cross-section can be determined. In this sense, the following matrix denotes the profile of the interactivity of the agents in the complex adaptive system:
Reconsidering the behavior coupled with the payoff in the corresponding short time scale, the detailed strategies of the system can be determined. Furthermore, whether the invariable distribution can be described as a function of the initial state can be determined via Theorem 2. However, to prove Theorem 2, the following Lemma should be invoked first.
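Since {p}_{{j}_{i}{k}_{i{'}}}^{\beta, N} = \mathfrak{A}/(\mathfrak{A}+\mathfrak{M}) , an exponential attachment weight gives the link probability a logistic form in the payoff; the following sketch assumes that exponential weight and a unit decay weight purely for illustration (they are not the paper's calibration):

```python
import math

def link_probability(payoff_gain, decay_weight, beta):
    """p = A / (A + M): attachment weight A = exp(payoff_gain / beta)
    against decay weight M; a logistic function of the payoff gain."""
    attach = math.exp(payoff_gain / beta)
    return attach / (attach + decay_weight)
```

With a unit decay weight this reduces to the standard logit choice probability; as \beta \to 0 , a positive payoff gain forces the link probability toward 1.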
Lemma 1. The Markov jump process \{{X}_{t}^{\beta, \tau, N}{\}}_{t\ge 0} with the given infinitesimal generator has the invariable distribution
where the constant {Z}^{\beta, \tau, N}\left(a\right) , obtained by a normalization approach, ensures that Eq. (33) has a unique solution for all \omega, \omega {'}\in {\varOmega }_{a}^{N} .
Proof of Lemma 1. For all \omega, \omega {'}\in {\varOmega }_{a}^{N} , let \omega = (a, g), \omega {'} = (a, g \oplus ({j}_{i}, {k}_{i{'}}\left)\right) , where \oplus stands for either adding or deleting the edge ({j}_{i}, {k}_{i{'}}) . Thus, we have
Consider {\eta }_{\omega, \omega {'}}^{\beta, \tau, N}/{\eta }_{\omega {'}, \omega }^{\beta, \tau, N} . Due to the multiplicative structure, the factors that appear in these two measures cancel, except for factor (40). Considering the two states \omega = (a, g), \widehat{\omega } = \left(\right(a{'}, a\left)g\right) , a{'}\in A , their likelihood ratio can be calculated as follows:
The second term is independent of agent {j}_{i} ; therefore, this likelihood ratio is equal to 1. Multiplying the first term and considering the symmetry of the payment function, we have:
Thus, Lemma 1 is proven: QED.
Proof of Theorem 2. As defined, for all {j}_{i}, {k}_{i{'}}\in N and \omega, \omega {'}\in {\varOmega }_{a}^{N} , function
Invoking Lemma 1, it is transformed to:
Thus, Theorem 2 is proven: QED.
Insofar as {\mu }_{0}^{\beta, \tau, N} is controlled by the decay mechanism \mathfrak{M} , it can be seen from the definition of {\mu }_{0}^{\beta, \tau, N} that the probability of emergence of the invariable distribution is large if the decay in the complex adaptive system is strong; otherwise, the probability is relatively small. Therefore, it is the noise of the agents' behaviors that decides the stability of the complex adaptive system. Furthermore, when the property of an arbitrary agent changes randomly, the invariable distribution becomes more complex; once the parameters are determined, the invariable distribution is deterministic. The invariable distribution of the complex adaptive system relies on two external parameters: the noise of the agents' behavior, \beta , and the population of the agents in the system, N . The following subsections consider what the invariable distribution would be if these two parameters tend to their respective limits, that is, \beta \to 0 and N\to \infty .
6.
Discussion
Since it was concluded that the system's behavior satisfies an exponential distribution with parameter \eta , Theorems 1 and 2 specify this parameter. More precisely, \mathbb{P}\left(X_{J_{n+1}}^{\beta, \tau, N} = \omega^{\prime}, J_{n+1}-J_n>t \mid \mathscr{F}_N\right) = \exp \left(-t \eta_\omega^{\beta, \tau, N}\right) \frac{\eta_{\omega, \omega^{\prime}}^{\beta, \tau, N}}{\eta_\omega^{\beta, \tau, N}}, where \eta is the rate of the system leaving a certain state, which relies on the local topological configuration of the interaction relationships between agents and on the strategy configuration. According to Theorem 1, there exists an invariable distribution of system behavior \mu , which relies on variables \omega , \tau , \beta , and N . Of these, parameters \omega and \tau are the most important controllable variables, while \beta and N are the scenario variables. If \omega and \tau are fixed, the statistical distribution of the system's behavior relies on two parameters: the noise \beta and the agent population N of the system. Similarly, if parameters \beta and N are fixed, i.e., the scenario is fixed, the statistical distribution of the system's behavior relies on the agents' behavior strategies and interaction configuration. The latter relies on the preferential attachment of a certain agent.
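The displayed identity says the chain holds a state for an exponential time with total rate {\eta }_{\omega } and then jumps to \omega {'} with probability {\eta }_{\omega, \omega {'}}/{\eta }_{\omega } ; a minimal simulation sketch of one such jump (the two-state rates below are hypothetical):

```python
import random

def sample_jump(rates, rng):
    """One step of the jump process: rates[w'] = eta_{w, w'}.
    Holding time ~ Exp(eta_w) with eta_w = sum of the rates; the
    next state is drawn with probability eta_{w, w'} / eta_w."""
    eta_total = sum(rates.values())
    holding = rng.expovariate(eta_total)
    u, acc = rng.random() * eta_total, 0.0
    for state, r in rates.items():
        acc += r
        if u <= acc:
            return holding, state
    return holding, state  # guard against rounding at the boundary

rng = random.Random(0)
demo = [sample_jump({'w1': 2.0, 'w2': 1.0}, rng) for _ in range(20000)]
frac_w1 = sum(1 for _, s in demo if s == 'w1') / len(demo)
mean_holding = sum(t for t, _ in demo) / len(demo)
# frac_w1 is close to 2/3 and mean_holding to 1/3, per the identity.
```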
The ways agents adjust their behaviors and whether the time scale is small or large depend on the rules expressed by Equations (1)-(22) and the co-evolution model {\varGamma }_{\beta } = (G, A, \pi)\equiv (\varOmega, F, \mathbb{P}, ({X}_{t}^{\beta }{)}_{t\in {T}_{0}}{)}_{\beta \in {\mathbb{R}}_{+}} defined in sub-section 4.2. By Theorem 1, the invariable distribution depends on the interaction probability {p}_{{j}_{i}, {k}_{i{'}}}^{\beta, N} determined by the payoff obtained in different sub-processes: {\mu }^{\beta, \tau, N}\left(\omega \left|{\varOmega }_{a}^{N}\right.\right) = {\prod }_{{j}_{i} = 1}^{{N}_{i}}{\prod }_{{k}_{i{'}} > {j}_{i}}{p}_{{j}_{i}{k}_{i{'}}}^{\beta, N}(a, \tau {)}^{{\gamma }_{{j}_{i}{k}_{i{'}}}\left(\omega \right)}{\left(1-{p}_{{j}_{i}{k}_{i{'}}}^{\beta, N}(a, \tau)\right)}^{1-{\gamma }_{{j}_{i}{k}_{i{'}}}\left(\omega \right)} . Therefore, the property of {p}_{{j}_{i}, {k}_{i{'}}}^{\beta, N} is crucial for the invariable distribution. Insofar as {p}_{{j}_{i}, {k}_{i{'}}}^{\beta, N} is the ratio of payoffs coming from creating an interaction relationship, the sub-processes of deleting old interactions should be omitted because deleting links brings the maximum profit losses. Thus, optimal strategies of all agents correspond to large values of {p}_{{j}_{i}, {k}_{i{'}}}^{\beta, N} ; otherwise, if only some agents' strategies are optimal, the {p}_{{j}_{i}, {k}_{i{'}}}^{\beta, N} values are relatively small.
By setting F = x(1-x{)}^{a}, 0 < a < 1 , it is easy to see that {F}_{x}^{{'}} = (1-x{)}^{a}-ax(1-x{)}^{a-1} = (1-x{)}^{a-1}\left(1-(1+a)x\right) > 0 for x < 1/(1+a) , so \mu monotonically grows with {p}_{{j}_{i}, {k}_{i{'}}}^{\beta, N} in this range and monotonically decreases with the agents' population N .
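A quick numerical check of this factor (the exponent a = 0.5 is an arbitrary illustration): F(x) = x(1-x{)}^{a} increases up to x = 1/(1+a) and decreases beyond it.

```python
def F(x, a):
    """The factor F(x) = x * (1 - x)**a analyzed in the text."""
    return x * (1 - x) ** a

a = 0.5                      # hypothetical exponent, 0 < a < 1
turn = 1 / (1 + a)           # F' changes sign at x = 1/(1+a)
xs = [i / 100 for i in range(1, 100)]
increasing = all(F(x + 1e-4, a) > F(x, a) for x in xs if x + 1e-4 < turn)
decreasing = all(F(x + 1e-4, a) < F(x, a) for x in xs if x > turn)
```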
Thus, the combined effect of this invariable distribution and the noise on the system behavior is quite complex. Intuitively, the larger the noise \beta , the more difficult it is to select the optimal strategy and the best partner to interact with. Thus, {p}_{{j}_{i}, {k}_{i{'}}}^{\beta, N} is expected to increase with \beta .
Given the combined effect of \omega , \tau , \beta , and N , the system behavior is hard to predict, yielding only approximate solutions. Precise solutions were further derived for the following two limiting cases: \beta \to 0 and N\to \infty . First, we formulated and proved the following hypothesis.
Hypothesis 3. Under two limiting scenarios ( \beta \to 0 and N\to \infty ), the system behavior properties are not equivalent.
The above two scenarios are discussed in Subsections 6.1 and 6.2, respectively.
6.1. Analytical solution of the case of \beta \to 0
First, the term stochastic stability should be defined.
Definition 5. In the limit of small behavior noise \beta , the system configuration \omega \in {\varOmega }^{N} is stochastically stable if \underset{\beta \to 0}{lim}\beta \mathit{log}{\mu }^{\beta, \tau, N}\left(\omega \right) = 0 .
Lemma 2. Under fixed N\ge 2 and an arbitrary agent's property \tau \in {\varTheta }^{N} , we get \underset{\beta \to 0}{lim}\underset{\omega \in {\varOmega }^{N}}{max}\left|{H}^{\beta, N}(\omega, \tau)-V(\omega, \tau)\right| = 0 , if and only if \underset{\beta \to 0}{lim}\underset{\omega \in {\varOmega }^{N}}{max}\left|\mathit{log}{\mu }_{0}^{\beta, \tau, N}\left(\omega \right)\right| = 0 .
Proof of Lemma 2. It follows directly from Theorem 2: QED.
If the co-evolutionary dynamics of agents' behavior and local topological configuration follows an admissible volatile mechanism, then the perturbation of the graph function {\mu }_{0}^{\beta, \tau, N} must be controlled by the potential function when the behavior noise is sufficiently small. The corresponding result is given in Theorem 3, which states that the class of invariable distributions {\mu }^{\beta, \tau, N} satisfies a large deviation principle, so that the invariable distribution converges logarithmically to the subset of the state space on which the rate function attains its minimum value, which can be precisely estimated.
Theorem 3. If {\varXi }^{\beta, \tau, N} is an admissible volatile mechanism, then the class of invariable distributions \{{\mu }^{\beta, \tau, N}{\}}_{\beta > 0} satisfies a large deviation principle with rate function R(\omega, \tau)\triangleq \underset{\omega {'}\in {\varOmega }^{N}}{max}V(\omega {'}, \tau)-V(\omega, \tau) ; that is, for all \omega \in {\varOmega }^{N} , \underset{\beta \to 0}{lim}\beta \mathit{log}{\mu }^{\beta, \tau, N}\left(\omega \right) = -R(\omega, \tau) .
Proof of Theorem 3: According to Theorem 2, for all \omega \in {\varOmega }^{N} , we have
Furthermore, if the volatile mechanism is admissible, it satisfies in particular (SNB). It then follows from Lemma 2 that the Hamiltonian function converges uniformly, as \beta \to 0 , to the potential function of the game. So, for all \omega \in {\varOmega }^{N} , we get
Thus, Theorem 3 is proven: QED.
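Theorem 3 can be illustrated numerically with a toy Gibbs measure \mu ^{\beta }(\omega)\propto \mathit{exp}(V(\omega)/\beta) on a three-state space; the potential values are hypothetical, and the sub-exponential {\mu }_{0} factor is ignored since it does not affect the limit:

```python
import math

def gibbs(V, beta):
    """mu^beta(w) proportional to exp(V(w) / beta) on a finite state
    space; the max is subtracted inside exp for numerical stability."""
    vmax = max(V.values())
    w = {s: math.exp((v - vmax) / beta) for s, v in V.items()}
    z = sum(w.values())
    return {s: x / z for s, x in w.items()}

V = {'w1': 1.0, 'w2': 0.4, 'w3': 0.0}  # hypothetical potential V(w)
beta = 0.01
mu = gibbs(V, beta)
# -beta * log mu(w) approximates R(w, tau) = max V - V(w).
rate = {s: -beta * math.log(mu[s]) for s in V}
```

Only the maximizer 'w1' has a vanishing rate, so the measure concentrates there as \beta \to 0 , exactly the large deviation statement of Theorem 3.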
Notably, two corollaries following from Theorem 3 are given in Appendix 1.
To characterize the individual rationality of agents in a complex adaptive system, the following definition of equilibrium is introduced.
Definition 6. The four tuple ⟨{\varOmega }^{N}, {\mu }^{\beta, \tau, N}, ({P}_{{j}_{i}}{)}_{{j}_{i}\in \left[N\right]}, ({s}_{{j}_{i}}{)}_{{j}_{i}\in \left[N\right]}⟩ is in a (\beta, \rho) relative equilibrium if, for \beta > 0 and \rho > 0 , the following inequality holds for all {j}_{i}\in \left[{N}_{i}\right]\subseteq N , all strategies {\widehat{s}}_{{j}_{i}} , and all \beta {'} < \beta : {\sum }_{\omega \in {\varOmega }^{N}}{\mu }^{\beta {'}, \tau, N}\left(\omega \right){U}_{{j}_{i}}\left(s\right(\omega), \gamma (\omega), {\tau }_{{j}_{i}})\ge {\sum }_{\omega \in {\varOmega }^{N}}{\mu }^{\beta {'}, \tau, N}\left(\omega \right){U}_{{j}_{i}}\left({\widehat{s}}_{{j}_{i}}\right(\omega), {s}_{-{j}_{i}}(\omega), \gamma (\omega), {\tau }_{{j}_{i}})-\rho .
Theorem 4. For all {j}_{i}\in \left[{N}_{i}\right]\subseteq N with strategy {s}_{{j}_{i}}\left(\omega \right) = {\alpha }_{{j}_{i}}\left(\omega \right) , \forall \omega \in {\varOmega }^{N} , the tuples ⟨{\varOmega }^{N}, {\mu }^{\beta, \tau, N}, ({P}_{{j}_{i}}{)}_{{j}_{i}\in \left[N\right]}, ({s}_{{j}_{i}}{)}_{{j}_{i}\in \left[N\right]}⟩ of agent {j}_{i} comprise a (\beta, \rho) relative equilibrium.
Proof. For all {j}_{i}\in \left[{N}_{i}\right]\subseteq N and an arbitrary alternative strategy, {\widehat{s}}_{{j}_{i}} , the deviation payoff of agent {j}_{i} has a boundary of:
where C\triangleq \underset{\omega \notin {\varOmega }^{*, N}\left(\tau \right)}{max}\left\{V\left(\widehat{s}\right(\omega), {s}_{-{j}_{i}}(\omega), \gamma (\omega), \tau)-V\left(s\right(\omega), \gamma (\omega), {\tau }_{{j}_{i}})\right\} . The upper bound can be obtained directly from the first term of the second line and the non-positivity condition due to the definition of {\varOmega }^{*, N}\left(\tau \right) . If C < 0 , Theorem 4 holds. If C\ge 0 , by invoking the exponential convergence of the invariable distribution and Corollary 2, for \beta \to 0 and \delta \left(\beta \right)\to 0 at their respective limits, there exists \varepsilon > 0 such that {\mu }^{\beta, \tau, N}({\varOmega }^{N}\backslash {\varOmega }^{*, N}(\tau \left)\right)\le \mathit{exp}\left(-\frac{\varepsilon }{\beta }(1+o(1\left)\right)\right)\triangleq \delta \left(\beta \right) holds. Thus, for each \rho > 0 , a small enough \beta can be selected such that the corresponding upper bound falls below \rho . This proves Theorem 4: QED.
If the behavior noise is small enough, each agent will use the equilibrium as his/her optimal strategy, with a small deviation permitted. In this sense, a deterministic state \omega of the evolved complex adaptive system should be estimated according to the invariable distribution coupled with the optimal strategy and the local topological structure of the agent.
6.2. Analytical solution of the case of N\to \infty
In this section, a positive noise \beta > 0 is fixed, and the population of agents in the complex adaptive system is regarded as a selectable parameter to analyze the specification of the invariable distribution of the states. Similar to the analysis for the noise limit, the preferential attachment mechanism is set to a logarithmic form, as in Hypothesis 3, and the volatile mechanism is half-anonymous. The invariable distribution {\mu }^{\beta, \tau, N} is the most important consideration when the population of the complex adaptive system is changed. Thus, when considering the interactivity of agents, the focus is on whether the different types of agents would select similar strategies and aggregate into certain LWs; here the system structure, as the prior distribution {\widehat{\sigma }}^{N} = ({\widehat{\sigma }}_{1}^{N}, {\widehat{\sigma }}_{2}^{N}, ..., {\widehat{\sigma }}_{K}^{N}) , is the key consideration. Selection of strategies in this manner is called the Bayesian strategy, where every element of strategy {\widehat{\sigma }}_{k} is taken as a certain probability on the behavior set A , and the coordinates of {\widehat{\sigma }}_{k} are denoted by {\widehat{\sigma }}_{k}\left(a\right), a\in A , which, for all a\in A, 1\le k\le K , occur with the probability:
where 1 is the indicator function. Since \widehat{\sigma } can be regarded as a mapping from a certain type space \varTheta of agents to a mixing space \varDelta \left(A\right) , the corresponding classical Bayesian strategy can be found. Denoting by \varSigma \triangleq \varDelta (A{)}^{K} the set of all Bayesian strategies, then for (\sigma, m)\in \varSigma \times \varDelta {\varTheta }^{N} , the measurement set generated by mapping {\widehat{\sigma }}^{N} can be defined as:
Invoking the measure of the invariable distribution, {\mu }^{\beta, N}\in M({\varOmega }^{N}\times {\varTheta }^{N}) , an approximate expression for it is needed. When the system population tends to infinity, coupled with set [\sigma, m] , the measure can be described as:
where {\widehat{\sigma }}^{N, \tau }\left(\right) = {\widehat{\sigma }}^{N}(, \tau) , and \left({\widehat{\sigma }}^{N, \tau }{)}^{-1}\right(\sigma) = \left\{\omega \in {\varOmega }^{N}\left|{\widehat{\sigma }}^{N}\right.(\omega, \tau) = \sigma \right\} .
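The aggregation mapping {\widehat{\sigma }}^{N} can be sketched as indicator averages over type classes; the behavior and type profiles below are hypothetical:

```python
from collections import Counter

def empirical_bayesian_strategy(behaviors, types, actions, type_set):
    """sigma_hat_k(a): the fraction of type-k agents whose behavior
    equals a, i.e., an average of indicator functions over the
    type-k class."""
    sigma = {}
    for k in type_set:
        members = [b for b, t in zip(behaviors, types) if t == k]
        counts = Counter(members)
        sigma[k] = {a: counts[a] / len(members) for a in actions}
    return sigma

behaviors = ['a', 'b', 'a', 'a', 'b', 'a']  # alpha_j (hypothetical)
types = [1, 1, 1, 2, 2, 2]                  # tau_j (hypothetical)
sigma = empirical_bayesian_strategy(behaviors, types, ['a', 'b'], [1, 2])
```

Each row of the result is a probability vector on A , so {\widehat{\sigma }}^{N} maps a state and type profile to a point of the Bayesian strategy polyhedron \varSigma .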
Based on the Bayesian strategy definition, all states in the set [\sigma, m] must stand in {\varOmega }_{a}^{N}\times \left\{\tau \right\} for all \tau \in {T}^{N}\left(m\right) . Therefore, one can define an equivalence relation {~}_{[\sigma, m]} such that (a, \tau){~}_{[\sigma, m]}(a{'}, \tau {'}) holds if, and only if, \tau, \tau {'}\in {T}^{N}\left(m\right) and {\widehat{\sigma }}^{N}({\varOmega }_{a}^{N}, \tau) = {\widehat{\sigma }}^{N}({\varOmega }_{a{'}}^{N}, \tau {'}) = \sigma , meaning that two pairs of agents' behavior and agent type profiles are equivalent under {~}_{[\sigma, m]} if these profiles generate the same aggregate [\sigma, m] . Therefore, \left({\widehat{\sigma }}^{N, \tau }{)}^{-1}\right(\sigma) defines the a- behavior coalition. Furthermore, the class of {\mu }^{\beta, \tau, N}\left({\varOmega }_{a}^{N}\right) is the approximate expression for all a\in A . Thus, conditional on the half-anonymous mechanism, it is concluded that this measure relies only on the ratio of agents of a certain type with certain behavior. Considering all possible unordered combinations, a Bayesian measure can be obtained for a finite population as follows:
In this expression, because the factor {K}^{\beta, C}(m{)}^{-1} is the normalization constant of the probability measure, this probability distribution {\psi }^{\beta, N}\left(\left|m\right.\right) must come from the subset of
which is approximately the set of interior points of the polyhedron of Bayesian strategies \varSigma .
The function {f}_{k}^{\beta, N}:\varSigma \times \varDelta \left(\varTheta \right)\to \mathbb{R} describes the payoff of an agent of type k obtained from interacting with an agent of type l\ge k , considering all possible sub-networks coupled with its preferential behavior toward the others. Although the result for a finite total population is not fully tractable, it can be seen that when the total population tends to infinity, the sequence {\left\{{f}_{k}^{\beta, N}\right\}}_{N\ge {N}_{0}} converges a.s. to the limit function,
where the n-dimensional vector {\theta }_{k} = {\left({\theta }_{k}\left(a\right)\right)}_{a\in A} , identifying a probability measure of type {\theta }_{k}:A\to \mathbb{R} via Equation (51), is the best alternative satisfying the large deviation principle for the class {\left\{{\phi }^{\beta, N}\left(\left|{M}^{N}\right.\right)\right\}}_{N\ge {N}_{0}} , for which the rate function r:\varSigma \times \varDelta \left(\varTheta \right)\to \mathbb{R} along a convergent sequence {\left\{({\sigma }^{N}, {m}^{N})\right\}}_{N\ge {N}_{0}} should be calculated. After analysis, we have:
Extending this expression when the type distribution m is given and the population of agents tends to infinity, we obtain that the probability of Bayesian strategy \sigma \in \varSigma is of the order of \mathit{exp}\left(-\frac{N}{\beta }r(\sigma, m)\right) on a logarithmic scale. So, the Bayesian strategy with the largest probability on this scale must be the strategy that satisfies r(\sigma, m) = 0 , implying that the problem of the probability distribution of strategies coupled with local topological structure can be transferred to the problem of identifying the potential function of the game. It was proven earlier that the logit function is precise; that is, the sought function should satisfy the following condition:
where h\left(x\right) = -{\sum }_{i}{x}_{i}\mathit{log}{x}_{i} is the entropy of distribution x . As the population of agents grows, the type distribution must change with time. The case of a relatively large population, implying that M\stackrel{\hspace{1em}a.s.\hspace{1em}}{\to }q at N\to \infty , is discussed next.
Assuming that N is large enough, almost every realization of the agents' type assignment leads to a type distribution close to the prior probability q . We focus on the set of realizations with {M}^{N}\to q and on the type distributions converging within the measure class {\left\{{\varphi }^{\beta, N}\left(\left|{M}^{N}\right.\right)\right\}}_{N\ge {N}_{0}} , {P}_{q}- almost surely. This leads to Theorem 5.
Theorem 5. Let (m{)}_{N\ge {N}_{0}} be a sequence of type distributions converging to the prior probability q . The class {\left\{{\varphi }^{\beta, N}\left(\left|{M}^{N}\right.\right)\right\}}_{N\ge {N}_{0}} , generated under an admissible half-anonymous mechanism, satisfies, for all \sigma \in \varSigma , a large deviation principle with rate function {r}^{\beta }(\sigma, q): = \underset{\sigma {'}\in \varSigma }{max}{\tilde{f}}^{\beta }(\sigma {'}, m)-{\tilde{f}}^{\beta }(\sigma, m) . For each sequence, the class {\left\{{\varphi }^{\beta, N}\left(\left|{M}^{N}\right.\right)\right\}}_{N\ge {N}_{0}} satisfies \underset{N\to \infty }{lim}\frac{\beta }{N}\mathit{log}{\psi }^{\beta, N}\left({\sigma }^{N}\left|{m}^{N}\right.\right) = -{r}^{\beta }(\sigma, q) , where {\sigma }^{N}\in {\varSigma }^{N}\left({m}^{N}\right), \forall N\ge {N}_{0} and \boldsymbol{\sigma}^N \rightarrow \boldsymbol{\sigma} .
By the large deviation principle introduced in Theorem 5, the measure family {\left\{{\varphi }^{\beta, N}\left(\left|{M}^{N}\right.\right)\right\}}_{N\ge {N}_{0}} concentrates, on a logarithmic scale, on the Bayesian strategies that are optimal solutions of the following program:
The solution of this program is the logit equilibrium; that is, it can be obtained directly from the definition of the fixed-point condition of the Bayesian strategy. Furthermore, the corresponding fixed-point condition is:
where the following equality holds for all a\in A and 1\le k\le K :
To specify this conclusion, Equation (56) can be reduced to an optimization problem with standard constraints. Therefore, the corresponding solution can be obtained by the Lagrange method as follows:
The corresponding first-order condition is necessary and sufficient for the optimal solution, implying that there exists a unique solution if the positive \beta is large enough. In general, for all 1\le k\le K and a, b\in A , the first-order condition takes the following form:
Due to the symmetry of {\phi }_{kl}^{\beta } , we have:
and:
Extending the first-order condition to the general case, the following condition should be satisfied:
The other parts can be obtained directly from the constraint condition {\sum }_{a\in A}{\sigma }_{k}\left(a\right) = 1 . Therefore, when the population tends to infinity, the invariable distribution of a complex adaptive system with co-evolution of agents' behavior and local topological configuration converges into a certain interval with rate function {r}^{\beta }(\sigma, q) , according to Theorem 5.
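The fixed-point condition above is a softmax (logit) map, and for large enough \beta it can be solved by direct iteration; the sketch below uses a hypothetical two-type coordination payoff, which is illustrative and not Eq. (56):

```python
import math

def logit_fixed_point(payoff, sigma0, beta, iters=200):
    """Iterate sigma_k(a) proportional to exp(u_k(a, sigma) / beta);
    for large enough beta the softmax map is a contraction, so the
    iteration converges to the unique logit equilibrium."""
    sigma = {k: dict(v) for k, v in sigma0.items()}
    for _ in range(iters):
        new = {}
        for k, actions in sigma.items():
            u = {a: payoff(k, a, sigma) for a in actions}
            m = max(u.values())  # subtract max for stability
            w = {a: math.exp((u[a] - m) / beta) for a in actions}
            z = sum(w.values())
            new[k] = {a: w[a] / z for a in actions}
        sigma = new
    return sigma

def payoff(k, a, sigma):
    """Hypothetical payoff: a bonus for action 'a' plus a gain from
    matching the other type's mixed strategy."""
    other = 2 if k == 1 else 1
    return (0.5 if a == 'a' else 0.0) + sigma[other][a]

sigma0 = {1: {'a': 0.5, 'b': 0.5}, 2: {'a': 0.5, 'b': 0.5}}
eq = logit_fixed_point(payoff, sigma0, beta=1.0)
```

With \beta = 1 the iteration contracts to a symmetric equilibrium that tilts both types toward the bonus action, consistent with the uniqueness claim for large \beta .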
7.
Conclusions
In the limit of small behavior noise, the system's invariable distribution of the co-evolution of agents' behavior and local topological configuration must concentrate on the maximizer set of the potential function. However, in the limit of large population, the invariable distribution converges according to a different rate function. Therefore, for this co-evolutionary complex adaptive system, a small noise of agents' behavior is not equivalent to a large population.
As mentioned above, the invariable distribution is much more complex. No universal analytical solution can be derived if corresponding parameters change gradually and continuously. This problem concerns decision-making based on non-structural analysis of the system scenario. When facing this complex system, one can select a certain scenario and then adjust the corresponding parameters such that the scenario changes dynamically. Finally, if several discrete scenarios are studied, the corresponding conclusion would be drawn by induction.
Although we defined "irrational behavior", we did not analyze particular irrational behavior patterns such as competing (vying) and comparing, the anchoring effect, the loss of real demand caused by free offers, the nullity of money incentives under the dual effects of social norms and market regulations, sense and sensibility, and the high price of ownership. We plan to delve deeply into these issues in the follow-up study.
Use of AI tools declaration
The authors declare that they have not used Artificial Intelligence (AI) tools in creating this article.
Acknowledgments
This paper is supported by the National Natural Science Foundation of China (Grant No. 71671054) and the Natural Science Foundation of Shandong Province (Grant No. ZR2020MG004).
Conflict of interest
The authors declare that there are no conflicts of interest.
Appendix 1. Two corollaries from Theorem 3
According to Theorem 3, two corollaries were derived as follows.
Corollary 1. Suppose {\varXi }^{\beta, \tau, N} is an admissible volatile mechanism. Then:
(1) The set {\varOmega }^{*, N}\left(\tau \right)\triangleq \left\{\omega \in {\varOmega }^{N}\left|\underset{\beta \to 0}{lim}\beta \mathit{log}{\mu }^{\beta, \tau, N}\left(\omega \right) = 0\right.\right\} of stochastically stable states of realizable configurations of agent type \tau under small noise coincides with {\varOmega }^{*, N}\left(\tau \right) = \mathit{arg}\underset{\omega \in \varOmega }{max}V(\omega, \tau) .
(2) The invariable distribution converges exponentially; that is, for arbitrary \varepsilon > 0 , there exists a subset {X}_{\varepsilon }\subseteq {\varOmega }^{N} such that \underset{\beta \to 0}{lim}\beta \mathit{log}{\mu }^{\beta, \tau, N}\left({X}_{\varepsilon }\right) < -\varepsilon holds.
Proof. Only the second statement needs to be proven. Denote the sublevel sets of the rate function by {L}_{R}\left(\varepsilon \right)\triangleq \left\{\omega \in {\varOmega }^{N}\left|R(\omega, \tau)\le \varepsilon \right.\right\} for all \varepsilon > 0 . These sets are non-empty because {\varOmega }^{*, N}\left(\tau \right)\subseteq {L}_{R}\left(\varepsilon \right) always holds, and the two sets coincide as \varepsilon \to 0 . Then fix an \varepsilon > 0 and consider the set {X}_{\varepsilon }\triangleq {\varOmega }^{N}\backslash {L}_{R}\left(\varepsilon \right) ; then {R}_{{X}_{\varepsilon }}\left(\tau \right)\triangleq \underset{\omega \in {X}_{\varepsilon }}{min}R(\omega, \tau) > 0 . So, we have
Similar to Theorem 3, for some functions {B}_{{X}_{\varepsilon }}^{\beta, N}, {r}_{{X}_{\varepsilon }}\left(\beta \right) , we have
Taking the logarithm of both sides and multiplying by \beta , we have
At \beta \to 0 , the left side becomes -{R}_{{X}_{\varepsilon }}\left(\tau \right)(1+o(1\left)\right) . Corollary 1 is proven: QED.
Theorem 3 indicates that, under small noise, for each type configuration, the invariable distribution concentrates on the set of maximizers of the potential function. Similarly, Corollary 1 expresses that the corresponding stochastically stable states must be the maximizers of the potential function. Furthermore, the measure class {\left\{{\mu }^{\beta, \tau, N}\right\}}_{\beta > 0} concentrates its weight on a deterministic subset of the state space, which makes the agents select the optimal strategy.
In the dynamical game model of the complex adaptive system, suppose that an agent always implements his/her optimal strategy when interacting with others, and does so in a rational manner. A notion is needed to express this kind of equilibrium; thus, the Aumann correlated equilibrium, a fitness order parameter, is introduced. The Aumann correlated equilibrium regards state space {\varOmega }^{N} as the set of states that could potentially appear, and {\mu }^{\beta, \tau, N} is simplified to \mu . The information of {j}_{i} is denoted by {P}_{{j}_{i}} , decided by the set {P}_{{j}_{i}}\left(a\right)\triangleq \left\{\omega \in {\varOmega }^{N}\left|{\alpha }_{{j}_{i}}\left(\omega \right) = a\right.\right\} , a\in A . The strategy of agent {j}_{i} is a mapping {s}_{{j}_{i}}:{\varOmega }^{N}\to A that is measurable with respect to information {P}_{{j}_{i}} . That is, for any states \omega, \omega {'}\in {P}_{{j}_{i}}\left(a\right) , we have {s}_{{j}_{i}}\left(\omega \right) = {s}_{{j}_{i}}\left(\omega {'}\right) = a , and the profile of strategies is denoted by S = ({s}_{{j}_{i}}{)}_{{j}_{i}\in \left[{N}_{i}\right]\subseteq N} . If this co-evolutionary complex adaptive system converges into a certain dynamical equilibrium state, then the agents' type configuration must be ({\tau }_{11}, ...{\tau }_{1{n}_{1}}, {\tau }_{21}, ..., {\tau }_{N}) and the information configuration must be ⟨{\varOmega }^{N}, \mu, {P}_{{j}_{i}}⟩ . Supposing that each agent uses strategy {s}_{{j}_{i}}\left(\omega \right) = {\alpha }_{{j}_{i}}\left(\omega \right), \forall \omega \in {\varOmega }^{N} , an interesting question arises: would agent {j}_{i} use this strategy when interacting with its neighbors as the system evolves?
Because the measure \mu is effective if, and only if, the noise is positive, all states of the system appear with positive probability. However, not all strategies coupled with the {P}_{{j}_{i}} -measurable functions are good. Invoking Theorem 3, the exact states of the system that occur with maximum non-zero probability can be identified. This kind of Nash equilibrium means that the behavior of agents is very complex.
Corollary 2. Let \tau \in {\varTheta }^{N} be an arbitrary type profile. For all {j}_{i}\in \left[{N}_{i}\right]\subseteq N , when considering strategy {s}_{{j}_{i}}\left(\omega \right) = {\alpha }_{{j}_{i}}\left(\omega \right), \forall \omega \in {\varOmega }^{N} , the profile s constitutes Nash equilibrium behavior in every state \omega \in {\varOmega }^{*, N}\left(\tau \right) .
Proof. The strategy {s}_{{j}_{i}} is measurable with respect to {P}_{{j}_{i}} ; thus, {s}_{{j}_{i}} describes the strategy selected by an agent. Fixing an arbitrary agent {j}_{i}\in \left[{N}_{i}\right]\subseteq N and setting {\widehat{s}}_{{j}_{i}} as an arbitrary strategy that is not optimal, we also fix \omega \in {\varOmega }^{*, N}\left(\tau \right) . The deviation payoff of agent {j}_{i} will be
This payoff is non-negative on {\varOmega }^{*, N}\left(\tau \right) . Therefore, Corollary 2 is proven to hold: QED.
Appendix 2. Proof of Theorem 5
Proof. The proof is complex, first requiring some parameters to be defined and some lemmas to be invoked.
The first step is to obtain the marginal distribution on {A}^{N} for an arbitrary type profile \tau \in {\varTheta }^{N} , such that the distribution of the Bayesian strategy conditional on {T}^{N}\left(m\right) , m\in {L}_{N} , is given on this set, in order to analyze the probability that an agent stands in a given type class. To analyze the phenomenon of aggregation, we partition the agents in the complex adaptive system according to the agent's behavior and type, so that agents of the same kind stand in the same subset. Now, for all 1\le k\le K and a\in A , define the set
Obviously, for an arbitrary type profile \tau \in {\varTheta }^{N} , the class of sets \left\{{\left\{{I}_{k}^{\tau }\left(a\right)\right\}}_{a\in A}\right\} is a partition of \left[N\right] . Under the half-anonymous mechanism, the measure of random complex networks regards each edge between agents {j}_{i}\in {I}_{k}^{\tau }\left(a\right) and {j}_{{i}^{{'}}}^{{'}}\in {I}_{l}^{\tau }\left(b\right) as an i.i.d. random variable. Thus, a random variable satisfying the binomial distribution with parameter {P}_{kl}(a, b) is defined as:
Given one type profile \tau and the a- cross-section {\varOmega }_{a}^{N} , denoting by {E}_{kl}^{N, \tau }(a, b) the number of edges between agents with behavior a and type k and agents with behavior b and type l , and by {e}_{kl} a realization of the random variable {E}_{kl}^{N, \tau }(a, b)\left(\right) , several lemmas are necessary and are introduced below.
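Under the half-anonymous mechanism the cross-block edge count is binomial; a toy sampler illustrates this (the block labels, sizes, and probability {P}_{kl}(a, b) below are hypothetical):

```python
import random

def sample_block_edges(sizes, P, rng):
    """Draw each of the sizes[u] * sizes[v] potential edges between
    blocks u and v i.i.d. with probability P[(u, v)], so the count
    E_kl(a, b) is Binomial(sizes[u] * sizes[v], P[(u, v)])."""
    counts = {}
    blocks = sorted(sizes)
    for i, u in enumerate(blocks):
        for v in blocks[i + 1:]:
            trials = sizes[u] * sizes[v]
            counts[(u, v)] = sum(
                rng.random() < P[(u, v)] for _ in range(trials))
    return counts

rng = random.Random(1)
sizes = {('k1', 'a'): 3, ('k2', 'b'): 4}   # hypothetical blocks
P = {(('k1', 'a'), ('k2', 'b')): 0.5}      # hypothetical P_kl(a, b)
reps = [sample_block_edges(sizes, P, rng)[(('k1', 'a'), ('k2', 'b'))]
        for _ in range(2000)]
mean_edges = sum(reps) / len(reps)
# mean_edges is close to 12 * 0.5 = 6, the binomial mean.
```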
Lemma 3. Consider a given type profile \tau \in {T}^{N}\left(m\right) and a half-anonymous volatile mechanism {\mathfrak{M}}^{\beta, \tau, N} :
(1) The stable expression of a- game, on a- cross-section {\varOmega }_{a}^{N} , should be
{\sigma }_{k}\left(a\right) = {\widehat{\sigma }}_{k}\left(a\right)(\omega, \tau), \forall \omega \in {\varOmega }_{a}^{N} where \sigma :\left({\sigma }_{k}\right(a); 1\le k\le K, a\in A)\in {\sum }^{N}\left(m\right)
(2) We have {\mu }^{\beta, \tau, N}\left({\varOmega }_{a}^{N}\right)\propto {\prod }_{k > 1}^{K}{\prod }_{a = 1}^{n}{\varPhi }_{k}^{a}(\sigma, \beta, N){l}^{N{m}_{k}{\sigma }_{k}\left(a\right)} where, for all types of 1\le k\le K , and behavior of 1\le a\le n , {\varPhi }_{k}^{a}\left(\right) satisfies {\varPhi }_{k}^{a}(\sigma, \beta, N)\triangleq {\prod }_{l\ge k}{\varPhi }_{kl}^{a}(\sigma, \beta, N) , {\varPhi }_{kk}^{a}(\sigma, \beta, N)\triangleq \mathit{exp}\left(\frac{{\theta }_{k}\left(a\right)}{\beta }\right){\prod }_{b\ge a}{\left(1+\frac{1}{N\beta }{\phi }_{kk}^{\beta, N}(a, b)\right)}^{\frac{N{m}_{k}{\sigma }_{k}\left(a\right)-{\delta }_{a, b}}{1+{\delta }_{a, b}}} , {\varPhi }_{kl}^{a}(\sigma, \beta, N)\triangleq {\prod }_{b = 1}^{n}{\left(1+\frac{1}{N\beta }{\phi }_{kl}^{\beta, N}(a, b)\right)}^{N{m}_{l}{\sigma }_{l}\left(a\right)}
Proof: Set the modulus of the a-game of type {\theta }_{k} as {z}_{k}\left(a\right) = N{m}_{k}{\sigma }_{k}\left(a\right) . Therefore, (1) is proven.
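The edge model underlying this construction can be illustrated with a short simulation. The sketch below (function and variable names are ours, not from the text) draws each potential edge between two agent classes, standing in for {I}_{k}^{\tau }\left(a\right) and {I}_{l}^{\tau }\left(b\right) , as an i.i.d. Bernoulli variable with success probability {P}_{kl}(a, b) , so the total edge count follows the binomial distribution described above:

```python
import random

def sample_class_edges(class_a, class_b, p, rng):
    """Draw each potential edge between two disjoint agent classes as an
    i.i.d. Bernoulli(p) variable; the resulting edge count is then a
    Binomial(|class_a| * |class_b|, p) random variable."""
    return [(i, j) for i in class_a for j in class_b if rng.random() < p]

# One realization e_kl of the edge count between a class of 50 agents
# (type k, behavior a) and a class of 40 agents (type l, behavior b).
rng = random.Random(0)
edges = sample_class_edges(range(50), range(50, 90), p=0.3, rng=rng)
count = len(edges)
```

Repeating the draw and averaging `count` would approach 50 × 40 × 0.3 = 600, the binomial mean.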
To prove (2), the following process should be introduced:
For all \omega \in {\varOmega }^{N} , define \rho (\omega, \tau): = {\mu }_{0}\left(\omega \right)\mathit{exp}(V(\omega, \tau)/\beta) ; invoking the function {x}_{{j}_{i}{j}_{i{'}}^{{'}}}\left(\cdot \left|\cdot \right.\right) of Equation (A.6), this mapping becomes:
Therefore, for all \omega \in {\varOmega }^{N} , 1\le k\le K and a\in A , we have {I}_{k}^{\tau }(a, \omega) = {I}_{k}^{\tau }\left(a\right) . Furthermore, for all agents with {j}_{i}\in {I}_{k}^{\tau }\left(a\right), {j}_{i{'}}\in {I}_{l}^{\tau }\left(b\right) , we can observe that:
Then, Equation (A.7) can be reduced to
The latter equation relies only on the system's state \omega and the number of edges in the networks. We then evaluate this expression for all states \omega \in {\varOmega }^{N} , which requires integration over all possible states. The process can be described in detail as follows:
Initialization: set k = 1, a = 1 ;
First cycle: considering the special situation l = k , integrate all possible edges {e}_{kl}(a, b) over b\ge a ; if b = n , set l = l+1 and go to the next cycle;
Second cycle: integrate all possible edges {e}_{kl}(a, b) over all b\in A ; if l\le K-1 , set l\to l+1 and repeat this cycle; otherwise, go to the third cycle;
Third cycle: if a\le n-1 and k\le K-1 , then, keeping the same k and setting a\to a+1 , go to the first cycle; if a = n and k\le K-1 , then go to the first cycle with k\to k+1 and a\to 1 ; if a = n and k = K , stop the calculation.
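The three cycles above can be expressed as nested loops. The following sketch is our own rendering of the iteration order, with 1-based indices as in the text; it enumerates each unordered pair of type–behavior blocks exactly once, the first cycle covering l = k with b\ge a and the second covering l > k with all b :

```python
def cycle_order(K, n):
    """Enumerate the (k, a, l, b) blocks in the order of the three cycles.

    For fixed (k, a): the first cycle handles l = k with b >= a (each
    same-type unordered pair once); the second cycle handles l > k with
    all b.  The third cycle then advances (k, a) lexicographically."""
    visited = []
    for k in range(1, K + 1):              # third cycle: advance k
        for a in range(1, n + 1):          # third cycle: advance a
            for b in range(a, n + 1):      # first cycle: l = k, b >= a
                visited.append((k, a, k, b))
            for l in range(k + 1, K + 1):  # second cycle: l > k, all b
                for b in range(1, n + 1):
                    visited.append((k, a, l, b))
    return visited

blocks = cycle_order(K=2, n=2)
```

For K = 2 types and n = 2 behaviors there are four type–behavior classes, hence 10 unordered block pairs (6 distinct pairs plus 4 self-pairs), each visited once.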
All possible links between agents with behaviors of {I}_{1}^{\tau }\left(1\right) are integrated within the first cycle. Note that the only factor affecting the calculation result is \mathit{exp}({x}_{11}\left(\mathrm{1, 1}\right){)}^{{E}_{11}^{N, \tau }\left(\mathrm{1, 1}\right)\left(\omega \right)} , \omega \in {\varOmega }_{a}^{N} , so the convolution term is not affected by the universal term {B}_{1} ; in this sense, \rho (\omega, \tau) = {B}_{1}\mathit{exp}({x}_{11}\left(\mathrm{1, 1}\right){)}^{{E}_{11}^{N, \tau }\left(\mathrm{1, 1}\right)\left(\omega \right)} . Furthermore, because the agents' respective behaviors give {E}_{11}^{N, \tau }\left(\mathrm{1, 1}\right) = {e}_{11}\left(\mathrm{1, 1}\right) , an arbitrary agent in this complex adaptive system has several agents with whom he/she can interact, so the combined identical equations representing the multiple possible games must be considered, which may require adjusting the results of the first cycle as follows:
The algorithm of the next cycle is to calculate all links between agents with respective behaviors of {I}_{1}^{\tau }\left(1\right) and {I}_{1}^{\tau }\left(2\right) . The relative factors of the universal term {B}_{1} must be extracted. Therefore, we provide the integration of the whole above process as follows:
Repeating this algorithm, we obtain the corresponding function of the nth step:
Recall {z}_{k}\left(a\right) = N{m}_{k}{\sigma }_{k}\left(a\right) ; therefore, the function {\varPhi }_{11}^{1}(\sigma, \beta, N) holds, and the result of Lemma 3 can be obtained by calculating the remaining steps through the recurrence relation. Thus, Lemma 3 is proven: QED.
The invariable distribution on an a-cross-section is given in Lemma 3. It can be seen from its proof that the invariable distribution is controlled not only by the behavior profile but also by the Bayesian strategies.
Lemma 4. If \tau, \tau {'}\in {T}^{N}\left(m\right) and {\widehat{\sigma }}^{N}({\varOmega }_{a}^{N}, \tau) = {\widehat{\sigma }}^{N}({\varOmega }_{a{'}}^{N}, \tau {'}) , then {\mu }^{\beta, \tau, N}\left({\varOmega }_{a}^{N}\right)/{\mu }^{\beta, \tau {'}, N}\left({\varOmega }_{a{'}}^{N}\right) = 1
Proof. Denoting by \sigma the common strategy on the subsets corresponding to {\varOmega }_{a}^{N} and {\varOmega }_{a{'}}^{N} , it follows from Equation (A.9) that, for arbitrary \omega \in {\varOmega }_{a}^{N} and \omega {'}\in {\varOmega }_{a{'}}^{N} , we have:
Lemma 4 can be proven if we can show that
It can be deduced from the proof of Lemma 3 that this quantity is determined by the algorithm: the random complex networks produced in {\varOmega }_{a}^{N} and those produced in {\varOmega }_{a{'}}^{N} are isomorphic graphs, so Equation (A.14) holds and Lemma 4 is proven: QED.
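The isomorphism argument can be checked mechanically: relabeling the agents by a permutation maps a network in {\varOmega }_{a}^{N} to an isomorphic one in {\varOmega }_{a{'}}^{N} and preserves every class-to-class edge count. A minimal sketch (helper names and the toy graph are ours):

```python
def edges_between(edges, class_a, class_b):
    """Count undirected edges with one endpoint in each class."""
    sa, sb = set(class_a), set(class_b)
    return sum(1 for (u, v) in edges
               if (u in sa and v in sb) or (u in sb and v in sa))

def relabel(edges, perm):
    """Apply a permutation of agent labels to every edge (an isomorphism)."""
    return [(perm[u], perm[v]) for (u, v) in edges]

edges = [(0, 2), (0, 3), (1, 2)]
perm = {0: 3, 1: 2, 2: 1, 3: 0}  # an arbitrary relabeling of 4 agents
before = edges_between(edges, [0, 1], [2, 3])
after = edges_between(relabel(edges, perm),
                      [perm[i] for i in (0, 1)],
                      [perm[i] for i in (2, 3)])
```

Since `before == after`, the measure assigned to the two cross-sections agrees, which is the content of Equation (A.14).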
Lemma 5. The conditional probability distribution of the Bayesian strategies in the set \varSigma on the type class {T}^{N}\left(m\right) , for all 1\le k\le K , must be {\psi }^{\beta, N}\left(\sigma \left|m\right.\right) = {K}^{\beta, N}(m{)}^{-1}{\prod }_{k = 1}^{K}{\upsilon }_{k}^{\beta, N}\left(\sigma \left|m\right.\right) , where {\upsilon }_{k}^{\beta, N}\left(\sigma \left|m\right.\right)\triangleq \frac{\left(N{m}_{k}\right)!}{{\prod }_{a\in A}\left(N{m}_{k}{\sigma }_{k}\left(a\right)\right)!}{\prod }_{a = 1}^{n}{\varPhi }_{k}^{a}(\sigma, \beta, N{)}^{N{m}_{k}{\sigma }_{k}\left(a\right)} . The support of this probability distribution, \text{supp}\left({\psi }^{\beta, N}\left(\cdot \left|m\right.\right)\right) = {\varSigma }^{N}\left(m\right) = \left\{\sigma \in \varSigma \left|N{m}_{k}{\sigma }_{k}\left(a\right)\in \mathbb{N}, \forall a\in A, 1\le k\le K\right.\right\} , is a finite grid of interior points that approximates the polyhedron of Bayesian strategies \varSigma .
Proof. If (a{'}, \tau {'}) transfers into (a, \tau) after relabeling the agents in the complex adaptive system, then {\mu }^{\beta, \tau {'}, N}\left({\varOmega }_{a{'}}^{N}\right) = {\mu }^{\beta, \tau, N}\left({\varOmega }_{a}^{N}\right) via Lemma 4. Furthermore, if the Bayesian strategies are used universally by agents with \tau \in {T}^{N}\left(m\right) on {\varOmega }_{a}^{N} , then there must be N{m}_{k} agents of type {\theta }_{k} , of whom N{m}_{k}{\sigma }_{k}\left(a\right) exhibit behavior a\in A , for each sub-generation 1\le k\le K ; the number of such assignments is \frac{\left(N{m}_{k}\right)!}{{\prod }_{a\in A}\left(N{m}_{k}{\sigma }_{k}\left(a\right)\right)!} , and the analogous count holds for the profile \tau \in {T}^{N}\left(m\right) over all types. Therefore, Lemma 5 is proven: QED.
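The multinomial coefficient in Lemma 5 counts the assignments of the N{m}_{k} agents of type {\theta }_{k} to behaviors. The sketch below computes only this combinatorial factor of {\upsilon }_{k}^{\beta, N} ; the {\varPhi }_{k}^{a} factors are omitted, and the example counts are illustrative:

```python
from math import factorial, prod

def multinomial_weight(counts):
    """(sum counts)! / prod(counts!): the number of ways to assign the
    N*m_k agents of one type to behaviors with the given per-behavior
    counts N*m_k*sigma_k(a)."""
    return factorial(sum(counts)) // prod(factorial(c) for c in counts)

# 10 agents of one type split as (5, 3, 2) over three behaviors:
w = multinomial_weight([5, 3, 2])  # 10!/(5!3!2!) = 2520
```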
This closed form compactly compresses the invariable distribution of Bayesian strategies, so the measure can be studied when the agent population of this complex adaptive system is large enough. Therefore, we introduce the following function:
According to these mappings, the conditional probability in Lemma 5 takes the following form:
The evolution law of the agents' strategies, when the population is large enough, relies strongly on the convergence of the functions {\left\{{f}_{k}^{\beta, N}\right\}}_{N > {N}_{0}} . For {m}^{N}\in {L}_{N} , we propose that the set {\varSigma }^{N}\left(m\right)\times {L}_{N} approximates the continuous space \varSigma \times \varDelta \left(\varTheta \right) as N\to \infty . This yields the following lemma.
Lemma 6. For each Bayesian strategy \sigma \in \varSigma and type distribution m\in \mathit{int}\varDelta \left(\varTheta \right) , there exists a sequence {\left\{\left({\sigma }^{N}, {m}^{N}\right)\right\}}_{N\ge {N}_{0}} with {\sigma }^{N}\in {\varSigma }^{N}\left({m}^{N}\right) and {m}^{N}\in {L}_{N} , for all N\ge {N}_{0} , such that ({\sigma }^{N}, {m}^{N})\to (\sigma, m) as N\to \infty .
Proof: Let us prove this lemma in two steps. First, we find a sequence {m}^{N}\in {L}_{N} that converges to m in the total-variation distance as N\to \infty . Then, we construct the Bayesian strategy sequence from the sequence {m}^{N}\in {L}_{N} .
In the first step, we define the total-variation distance between two distributions x, y\in \varDelta \left(\varTheta \right) on \varDelta \left(\varTheta \right) as follows:
It is known that {m}_{k}^{N}\in \left\{0, \frac{1}{N}, ..., \frac{N}{N}\right\} if {m}^{N}\in {L}_{N} . Therefore, if m\in \varDelta \left(\varTheta \right) , then for each 1\le k\le K , there exists {m}_{k}^{N}\in \left\{0, \frac{1}{N}, ..., \frac{N}{N}\right\} such that \left|{m}_{k}-{m}_{k}^{N}\right|\le \frac{1}{N} holds. Thus, for each N , a vector {m}^{N} can be found such that {‖{m}^{N}-m‖}_{TV, \varTheta }\le \frac{K}{2N} . Hence, for small enough \delta > 0 , the set {N}^{\delta }\left(m\right)\triangleq \left\{y\in \varDelta \left(\varTheta \right)\left|{‖y-m‖}_{TV, \varTheta } < \delta \right.\right\} is an open ball around m that contains all {m}^{N} with N\ge N\left(\delta \right) , where N\left(\delta \right) is an appropriate integer. Therefore, {m}^{N}\to m in the total-variation distance.
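The rounding argument of the first step can be made concrete. The sketch below rounds a distribution coordinatewise onto the grid {L}_{N} and absorbs the rounding drift into the largest coordinate (that drift-fixing step is our own simplification); the resulting total-variation error stays within the K/(2N) bound used above:

```python
def round_to_grid(m, N):
    """Round a type distribution m coordinatewise onto the grid
    L_N = {0, 1/N, ..., N/N}^K, then absorb the rounding drift into the
    largest coordinate so the result still sums to 1.  Each coordinate
    moves by at most 1/(2N) before the fix, giving ||m^N - m||_TV <= K/(2N)."""
    mN = [round(x * N) / N for x in m]
    mN[mN.index(max(mN))] += 1.0 - sum(mN)
    return mN

def tv_distance(x, y):
    """Total-variation distance (1/2) * sum_k |x_k - y_k| on Delta(Theta)."""
    return 0.5 * sum(abs(a - b) for a, b in zip(x, y))

m = [0.337, 0.445, 0.218]   # illustrative distribution, K = 3
mN = round_to_grid(m, N=100)
```

Increasing N shrinks the bound K/(2N) , which is exactly why {m}^{N}\to m in total variation.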
In the second step, given the prior distribution sequence {\left({m}^{N}\right)}_{N\ge {N}_{0}} identified above, take, for all N\ge {N}_{0} , {\sigma }^{N}\in {\varSigma }^{N}\left({m}^{N}\right) ; we can measure their distance via the maximum norm on the space \varSigma . That is, for all \sigma, \sigma {'}\in \varSigma , we have:
As in the first step, for all 1\le k\le K , the distance between \sigma and \sigma {'} is bounded by:
Then, for all large enough values of N , we have:
Because {m}^{N}\to m\in \mathit{int}\varDelta \left(\varTheta \right) , for large enough N and all 1\le k\le K , there exists \varepsilon > 0 such that {m}_{k}^{N}\ge \varepsilon > 0 holds. For small enough \delta > 0 , the neighborhood {N}^{\delta }\left(\sigma \right) of the first step can be constructed, and {\sigma }^{N}\in {N}^{\delta }\left(\sigma \right) for N\ge N\left(\delta \right) . Thus, Lemma 6 is proven: QED.
The above reasoning proves that any pair (\sigma, m)\in \varSigma \times \varDelta \left(\varTheta \right) can be approximated by a convergent discrete sequence ({\sigma }^{N}, {m}^{N}) , so the measure can be evaluated directly in this limiting process.
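The second step's construction can be sketched in the same spirit: project each type's mixed strategy onto the grid \left\{0, \frac{1}{N{m}_{k}}, ..., 1\right\} and watch the maximum-norm error vanish as N grows (all names, and the drift-fixing step, are ours):

```python
def round_strategy(sigma, N, mk):
    """Project one type's mixed strategy onto the grid of multiples of
    1/(N*mk), then absorb the rounding drift into the largest coordinate.
    The max-norm error is O(1/(N*mk)), so sigma^N -> sigma as N grows."""
    cells = round(N * mk)                     # number of agents of this type
    counts = [round(s * cells) for s in sigma]
    counts[counts.index(max(counts))] += cells - sum(counts)
    return [c / cells for c in counts]

def max_norm(x, y):
    """Maximum norm used to measure strategy distance on Sigma."""
    return max(abs(a - b) for a, b in zip(x, y))

sigma = [0.55, 0.30, 0.15]                    # illustrative mixed strategy
errs = [max_norm(round_strategy(sigma, N, mk=0.5), sigma)
        for N in (10, 100, 1000)]
```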
Lemma 7. For all 1\le k\le K and any sequence {\left\{\left({\sigma }^{N}, {m}^{N}\right)\right\}}_{N\ge {N}_{0}} with limit (\sigma, m)\in \varSigma \times \varDelta \left(\varTheta \right) , where {\sigma }^{N}\in {\varSigma }^{N}\left({m}^{N}\right), {m}^{N}\in {L}_{N} , we have \underset{N\to \infty }{lim}{f}_{k}^{\beta, N}({\sigma }^{N}, {m}^{N}) = \frac{1}{\beta }{f}_{k}^{\beta }(\sigma, m) , where {f}_{k}^{\beta }:\varSigma \times \varDelta \left(\varTheta \right)\to \mathbb{R} is the continuous function {f}_{k}^{\beta }(\sigma, m)\triangleq ⟨{\sigma }_{k}, {m}_{k}⟩+{\sum }_{l\ge k}\frac{{m}_{l}}{1+{\delta }_{kl}}⟨{\sigma }_{k}, {\varphi }_{kl}^{\beta }{\sigma }_{l}⟩
Proof. The asymptotic behavior of the function {\varPhi }_{k}^{a}\left(\cdot \right) , which involves the deterministic quantity {\varphi }_{kl}^{\beta, N}(a, b) = \frac{2\beta \mathit{exp}(v(a, b)/\beta)}{{\xi }_{kl}^{\beta }} , should be quantified. Thus, for all 1\le k, l\le K and a, b\in A , we have:
This implies that the first-order approximation \mathit{log}\left(1+\frac{{\varphi }_{kl}^{\beta, N}(a, b)}{N\beta }\right) = \frac{{\varphi }_{kl}^{\beta, N}(a, b)}{N\beta }+O\left({N}^{-2}{\beta }^{-1}\right) properly describes the asymptotic behavior for large enough N . Furthermore, for all a\in A and 1\le k\le l\le K , we find that:
and:
Therefore, for all 1\le k\le l\le K , we have:
Therefore, the function {f}_{k}^{\beta, N}({\sigma }^{N}, {m}^{N}) has the stated limit as N\to \infty . Thus, Lemma 7 is proven: QED.
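The first-order approximation at the heart of this proof, \mathit{log}(1+x)\approx x for x = \varphi /(N\beta) , can be verified numerically; the error shrinks quadratically in 1/N , consistent with the O\left({N}^{-2}{\beta }^{-1}\right) term (the values of \varphi and \beta below are illustrative):

```python
import math

phi, beta = 1.7, 0.5  # illustrative values of phi_kl(a, b) and beta

def first_order_error(N):
    """Gap between log(1 + phi/(N*beta)) and its first-order expansion
    phi/(N*beta); by Taylor's theorem it behaves like (phi/(N*beta))^2 / 2."""
    x = phi / (N * beta)
    return abs(math.log(1.0 + x) - x)

# Doubling N should cut the error by roughly a factor of four.
errs = [first_order_error(N) for N in (100, 200, 400)]
```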
Corollary 3. The sequence of functions \{{f}^{\beta, N}{\}}_{N\ge {N}_{0}} converges, a.s., to the limit function {f}^{\beta } .
Proof: It directly follows from Lemmas 6 and 7: QED.
All states of the processes obtained can be determined via a generalized type sequence whose distribution follows the common law q with the i.i.d. property. Therefore, we have:
Lemma 8. At N\to \infty , we get {M}^{N}\stackrel{\hspace{1em}a.s.\hspace{1em}}{\to }q .
Proof. Let {‖m-q‖}_{TV}\triangleq \frac{1}{2}{\sum }_{k = 1}^{K}\left|{m}_{k}-{q}_{k}\right| , for all m, q\in \varDelta \left(\varTheta \right) , be the total-variation distance on \varDelta \left(\varTheta \right) . Recall the common law q\in \mathit{int}\varDelta \left(\varTheta \right) of the types {\tilde{\tau }}_{{j}_{i}}^{\left(N\right)} and consider the countable class of open sets \left\{{B}_{q, \varepsilon }\right\} , where \varepsilon > 0 and {B}_{q, \varepsilon }\triangleq \{m\in \varDelta (\varTheta){\left|‖m-q‖\right.}_{TV} > \varepsilon \} . This law can be distributed among these sets according to the prior process \{{M}^{N}{\}}_{N\ge {N}_{0}} :
By invoking Sanov's theorem, we obtain:
where h\left(m\left|q\right.\right)\triangleq {\sum }_{k = 1}^{K}{m}_{k}\mathit{log}\frac{{m}_{k}}{{q}_{k}} is the relative entropy. By Jensen's inequality, h\left(m\left|q\right.\right)\ge 0 , with equality if and only if m = q . Because q\notin {B}_{q, \varepsilon } for all \varepsilon > 0 , for each \varepsilon there always exists a constant {c}_{\varepsilon }\in (0, \infty) such that {\widehat{p}}_{q}^{N}\left({B}_{q, \varepsilon }\right)\le {e}^{-N{c}_{\varepsilon }} . Thus, the set {B}_{q, \varepsilon } can be reduced to a prior-type-distribution event. Then, we can construct the event with a set as follows:
It can be seen that this is a case of {P}_{q}- probability ( {\widehat{p}}_{q}^{N}\left({B}_{q, \varepsilon }\right) ). Using Equation (A.21), we get:
By invoking the first Borel–Cantelli lemma, we have, for all \varepsilon \in {\mathbb{Q}}_{+} , {P}_{q}\left({\mathit{limsup}}_{N\to \infty }{A}_{N}\left(\varepsilon \right)\right) = 0 , so the prior process {\left\{{M}^{N}\left(\tau \right)\right\}}_{N\ge {N}_{0}} converges to q . Therefore, Lemma 8 is proven: QED.
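The two ingredients of this proof, the relative entropy h\left(m\left|q\right.\right) and the almost-sure convergence {M}^{N}\to q , can be illustrated with a short simulation (the law q , the sample size, and the seed below are illustrative):

```python
import math
import random

def relative_entropy(m, q):
    """h(m|q) = sum_k m_k log(m_k / q_k), with the convention 0*log 0 = 0.
    By Jensen's inequality h(m|q) >= 0, with equality iff m = q."""
    return sum(mk * math.log(mk / qk) for mk, qk in zip(m, q) if mk > 0)

def empirical_distribution(q, N, rng):
    """Empirical type distribution M^N of N i.i.d. draws from the law q."""
    counts = [0] * len(q)
    for _ in range(N):
        counts[rng.choices(range(len(q)), weights=q)[0]] += 1
    return [c / N for c in counts]

rng = random.Random(7)
q = [0.2, 0.5, 0.3]
mN = empirical_distribution(q, 20000, rng)
h = relative_entropy(mN, q)  # close to 0 for large N, as M^N -> q
```

Sanov's theorem sharpens this picture: the probability that {M}^{N} lands in {B}_{q, \varepsilon } decays exponentially with rate given by the infimum of h\left(\cdot \left|q\right.\right) over that set, which is the {e}^{-N{c}_{\varepsilon }} bound used above.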
The preferential attachment used in this study is a logit function describing the property of {\left\{{\phi }^{\beta, N}\left({M}^{N}\right)\right\}}_{N\ge {N}_{0}} under the maximum-deviation measure. This yields:
Theorem 5 can be proven as follows. According to Lemma 8, we take a sequence that converges to q with probability one and denote the agents' Bayesian strategy in which every type plays behavior 1 by {e}_{1} = \left[{e}_{1}\left(1\right), \cdots, {e}_{K}\left(1\right)\right] . That is, for each 1\le k\le K , {e}_{k}\left(1\right) can be regarded as the unit vector in {\mathbb{R}}^{n} with value one in the first component and zero in the other n-1 components. Thus, for all N we have {e}_{k}\left(1\right)\in {\varSigma }^{N}\left({m}^{N}\right) . Therefore, for all \sigma \in {\varSigma }^{N}\left({m}^{N}\right) , the following equation holds:
Taking the logarithm of both sides and multiplying by \beta /N , we get:
Taking the limit of the combinatorial term and applying the Stirling formula n!\cong \sqrt{2\pi n}(n/e{)}^{n} , we obtain:
Invoking Lemma 8 and Corollary 3, we deduce that {\left\{({\sigma }^{N}, {m}^{N})\right\}}_{N\ge {N}_{0}} converges and {f}^{\beta, N}({\sigma }^{N}, {m}^{N})\to {f}^{\beta }(\sigma, q) .
where {\tilde{f}}_{k}^{\beta }\left(\cdot \left|\cdot \right.\right) is the preferential attachment mechanism, i.e., {\tilde{f}}_{k}^{\beta }\left(\cdot \left|\cdot \right.\right) is a logit function. Next, let {\sigma }_{*}^{N} denote the point attaining the following maximum:
Based on the uniform convergence principle, as N\to \infty we get {\tilde{f}}^{\beta, N}({\sigma }_{*}^{N}, {m}^{N})\to {\tilde{f}}_{k}^{\beta }({\sigma }_{*}, q) , and the limit point is the maximizer of {\tilde{f}}^{\beta }\left(\cdot \left|q\right.\right) . That is,
Considering Equation (A.30), we have, for all {\sigma }^{N}\to \sigma \in \varSigma :
Thus, Theorem 5 is proven: QED.
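For completeness, the logit choice rule that serves as the preferential attachment mechanism in the above proof assigns each behavior a probability proportional to \mathit{exp}(\text{payoff}/\beta) . A minimal sketch (the payoff values and \beta settings are illustrative):

```python
import math

def logit_choice(payoffs, beta):
    """Logit (softmax) rule: behavior a is chosen with probability
    proportional to exp(payoff(a) / beta).  Small beta concentrates the
    mass on the best behavior; large beta approaches uniform choice."""
    weights = [math.exp(u / beta) for u in payoffs]
    total = sum(weights)
    return [w / total for w in weights]

sharp = logit_choice([1.0, 0.5, 0.0], beta=0.05)  # near-deterministic
flat = logit_choice([1.0, 0.5, 0.0], beta=50.0)   # near-uniform
```

The parameter \beta thus interpolates between best-response selection and uniform random attachment, which is the role it plays in {\mathfrak{M}}^{\beta, \tau, N} .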