Existing methods and algorithms in the literature often assume that the variables involved are independent, but this assumption is frequently implausible: in many stochastic models and statistical applications the variables are dependent. Hence, it is important and meaningful to extend results for independent variables to dependent cases. One such dependence structure is weak dependence (i.e., $\rho^*$-mixing or $\tilde{\rho}$-mixing), which has attracted the attention of many researchers.
Definition 1.1. Let $\{X_n;\ n\ge 1\}$ be a sequence of random variables defined on a probability space $(\Omega,\mathcal{F},P)$. For any $S\subset\mathbb{N}=\{1,2,\ldots\}$, define $\mathcal{F}_S=\sigma(X_i,\ i\in S)$, and let $L_2(\mathcal{F}_S)$ be the class of all $\mathcal{F}_S$-measurable random variables with finite second moment. For an integer $s\ge 1$, the mixing coefficient is defined by

$$\rho^*(s)=\sup\{\rho(\mathcal{F}_S,\mathcal{F}_T):\ S,T\subset\mathbb{N},\ \operatorname{dist}(S,T)\ge s\}, \tag{1.1}$$

where

$$\rho(\mathcal{F}_S,\mathcal{F}_T)=\sup\left\{\frac{|EXY-EX\,EY|}{\sqrt{\operatorname{Var}X}\cdot\sqrt{\operatorname{Var}Y}}:\ X\in L_2(\mathcal{F}_S),\ Y\in L_2(\mathcal{F}_T)\right\}, \tag{1.2}$$

and $\operatorname{dist}(S,T)=\inf\{|i-j|:\ i\in S,\ j\in T\}$. Obviously, $0\le\rho^*(s+1)\le\rho^*(s)\le 1$ and $\rho^*(0)=1$. The sequence $\{X_n;\ n\ge 1\}$ is called $\rho^*$-mixing if there exists $s\in\mathbb{N}$ such that $\rho^*(s)<1$. Clearly, if $\{X_n;\ n\ge 1\}$ is a sequence of independent random variables, then $\rho^*(s)=0$ for all $s\ge 1$.
$\rho^*$-mixing looks similar to another dependence structure, $\rho$-mixing, but the two are quite different. $\rho^*$-mixing also covers a wide class of dependence structures; it was first introduced into limit theorems by Bradley [4]. Since then, many scholars have investigated the limit theory for $\rho^*$-mixing random variables, and a number of important applications of $\rho^*$-mixing have been established. For more details, we refer to [12,16,18,19,21,23,24], among others.
The concept of complete convergence was first given by Hsu and Robbins [9] as follows: a sequence of random variables $\{X_n;\ n\ge 1\}$ converges completely to a constant $\lambda$ if $\sum_{n=1}^{\infty}P(|X_n-\lambda|>\varepsilon)<\infty$ for all $\varepsilon>0$. By the Borel–Cantelli lemma, this implies that $X_n\to\lambda$ almost surely (a.s.). Thus, complete convergence plays a crucial role in investigating the limit theory for sums of random variables as well as for weighted sums.
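To spell out the Borel–Cantelli step (a standard argument, recorded here only for completeness):

```latex
\sum_{n=1}^{\infty} P\bigl(|X_n-\lambda|>\varepsilon\bigr)<\infty
\;\Longrightarrow\;
P\Bigl(\limsup_{n\to\infty}\{|X_n-\lambda|>\varepsilon\}\Bigr)=0
\quad\text{for every fixed } \varepsilon>0,
```

so, taking $\varepsilon_k\downarrow 0$ along a sequence and a countable union over $k$, it follows that $X_n\to\lambda$ a.s.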
Chow [8] introduced the following notion of complete moment convergence: let $\{Z_n;\ n\ge 1\}$ be a sequence of random variables, and let $a_n>0$, $b_n>0$, $q>0$. If $\sum_{n=1}^{\infty}a_nE\bigl(b_n^{-1}|Z_n|-\varepsilon\bigr)_+^q<\infty$ for all $\varepsilon\ge 0$, then the sequence $\{Z_n;\ n\ge 1\}$ is said to satisfy complete $q$-th moment convergence. Complete moment convergence is a more general version of complete convergence, and is in fact strictly stronger than the latter (see Remark 2.1).
Following the related statements of Rosalsky and Thành [14] and of Thành [17], we recall the definition of stochastic domination.
Definition 1.2. A sequence of random variables $\{X_n;\ n\ge 1\}$ is said to be stochastically dominated by a random variable $X$ if for all $x\ge 0$,

$$\sup_{n\ge 1}P(|X_n|\ge x)\le P(|X|\ge x).$$
The concept of stochastic domination is a slight generalization of identical distribution. Clearly, stochastic domination of $\{X_n;\ n\ge 1\}$ by the random variable $X$ implies $E|X_n|^p\le E|X|^p$ whenever the $p$-th moment of $|X|$ exists, i.e., $E|X|^p<\infty$.
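This moment inequality follows at once from the tail-integral representation of moments; we record the one-line derivation:

```latex
E|X_n|^p=\int_0^{\infty}p\,x^{p-1}P(|X_n|>x)\,dx
\;\le\;\int_0^{\infty}p\,x^{p-1}P(|X|>x)\,dx
\;=\;E|X|^p.
```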
As is well known, weighted sums of random variables are widely used in important linear statistics (such as least-squares estimators, nonparametric regression function estimators, and jackknife estimates). For this reason, many probabilists and statisticians have devoted themselves to investigating the limiting behavior of weighted sums of random variables; see, for example, Bai and Cheng [3], Cai [5], Chen and Sung [6], Cheng et al. [7], Lang et al. [11], Peng et al. [13], Sung [15,16], and Wu [20], among others.
Recently, Li et al. [12] extended the corresponding result of Chen and Sung [6] from negatively associated random variables to $\rho^*$-mixing ones by a totally different method, and obtained the following theorem.
Theorem A. Let $\{X, X_n;\ n\ge 1\}$ be a sequence of identically distributed $\rho^*$-mixing random variables with $EX_n=0$, and let $\{a_{ni};\ 1\le i\le n,\ n\ge 1\}$ be an array of real constants such that $\sum_{i=1}^{n}|a_{ni}|^{\alpha}=O(n)$ for some $1<\alpha\le 2$. Set $b_n=n^{1/\alpha}(\log n)^{1/\gamma}$ for $0<\gamma<\alpha$. If $E|X|^{\alpha}/(\log(1+|X|))^{\alpha/\gamma-1}<\infty$, then

$$\sum_{n=1}^{\infty}\frac{1}{n}P\left(\max_{1\le j\le n}\left|\sum_{i=1}^{j}a_{ni}X_i\right|>\varepsilon b_n\right)<\infty\quad\text{for all }\varepsilon>0. \tag{1.3}$$
In addition, Huang et al.[10] proved the following complete α-th moment convergence theorem for weighted sums of ρ∗-mixing random variables under some moment conditions.
Theorem B. Let $\{X_n;\ n\ge 1\}$ be a sequence of $\rho^*$-mixing random variables stochastically dominated by a random variable $X$, and let $\{a_{ni};\ 1\le i\le n,\ n\ge 1\}$ be an array of real constants such that $\sum_{i=1}^{n}|a_{ni}|^{\alpha}=O(n)$ for some $0<\alpha\le 2$. Set $b_n=n^{1/\alpha}(\log n)^{1/\gamma}$ for some $\gamma>0$. Assume further that $EX_n=0$ when $1<\alpha\le 2$. If

$$\begin{cases}E|X|^{\alpha}<\infty, & \text{for }\alpha>\gamma,\\ E|X|^{\alpha}\log(1+|X|)<\infty, & \text{for }\alpha=\gamma,\\ E|X|^{\gamma}<\infty, & \text{for }\alpha<\gamma,\end{cases} \tag{1.4}$$

then

$$\sum_{n=1}^{\infty}\frac{1}{n}E\left(\frac{1}{b_n}\max_{1\le j\le n}\left|\sum_{i=1}^{j}a_{ni}X_i\right|-\varepsilon\right)_+^{\alpha}<\infty\quad\text{for all }\varepsilon>0. \tag{1.5}$$
It is interesting to find the optimal moment conditions for (1.5). Huang et al. [10] also posed the open problem of whether (1.5) holds for the case $\alpha>\gamma$ under the almost optimal moment condition $E|X|^{\alpha}/(\log(1+|X|))^{\alpha/\gamma-1}<\infty$.
Mainly inspired by the related results of Li et al. [12], Chen and Sung [6], and Huang et al. [10], we further study the convergence rate for weighted sums of $\rho^*$-mixing random variables without the assumption of identical distribution. Under the almost optimal moment condition $E|X|^{\alpha}/(\log(1+|X|))^{\alpha/\gamma-1}<\infty$ for $0<\gamma<\alpha$ with $1<\alpha\le 2$, a version of the complete $\alpha$-th moment convergence theorem for weighted sums of $\rho^*$-mixing random variables is established. The main result not only improves the corresponding results of Li et al. [12] and Chen and Sung [6], but also partially settles the open problem posed by Huang et al. [10].
Now, we state the main result as follows. Some important auxiliary lemmas and the proof of the theorem will be detailed in the next section.
Theorem 1.1. Let $\{X_n;\ n\ge 1\}$ be a sequence of $\rho^*$-mixing random variables with $EX_n=0$, stochastically dominated by a random variable $X$, and let $\{a_{ni};\ 1\le i\le n,\ n\ge 1\}$ be an array of real constants such that $\sum_{i=1}^{n}|a_{ni}|^{\alpha}=O(n)$ for some $0<\alpha\le 2$. Set $b_n=n^{1/\alpha}(\log n)^{1/\gamma}$ for $\gamma>0$. If $E|X|^{\alpha}/(\log(1+|X|))^{\alpha/\gamma-1}<\infty$ for $\alpha>\gamma$ with $1<\alpha\le 2$, then (1.5) holds.
Throughout this paper, $I(A)$ denotes the indicator function of the event $A$ and $I(A,B)=I(A\cap B)$. The symbol $C$ always denotes a positive constant, which may differ from place to place, and $a_n=O(b_n)$ means $a_n\le Cb_n$.
To prove our main result of this paper, we need the following important lemmas.
Lemma 2.1. (Utev and Peligrad [18]) Let $p\ge 2$, and let $\{X_n;\ n\ge 1\}$ be a sequence of $\rho^*$-mixing random variables with $EX_n=0$ and $E|X_n|^p<\infty$ for all $n\ge 1$. Then there exists a positive constant $C$ depending only on $p$, $s$, and $\rho^*(s)$ such that

$$E\left(\max_{1\le j\le n}\left|\sum_{i=1}^{j}X_i\right|^p\right)\le C\left(\sum_{i=1}^{n}E|X_i|^p+\left(\sum_{i=1}^{n}EX_i^2\right)^{p/2}\right). \tag{2.1}$$

In particular, if $p=2$,

$$E\left(\max_{1\le j\le n}\left|\sum_{i=1}^{j}X_i\right|^2\right)\le C\sum_{i=1}^{n}EX_i^2. \tag{2.2}$$
The following lemma is a basic property of stochastic domination; for details, see Adler and Rosalsky [1], Adler et al. [2], or Wu [22]. In fact, the constant $C$ appearing in those references can be removed, since it was proved in [14, Theorem 2.4] (or [17, Corollary 2.3]) that the definition with a general constant is equivalent to the one with $C=1$.
Lemma 2.2. Let $\{X_n;\ n\ge 1\}$ be a sequence of random variables stochastically dominated by a random variable $X$. For all $\beta>0$ and $b>0$, the following statements hold:

$$E|X_n|^{\beta}I(|X_n|\le b)\le E|X|^{\beta}I(|X|\le b)+b^{\beta}P(|X|>b), \tag{2.3}$$

$$E|X_n|^{\beta}I(|X_n|>b)\le E|X|^{\beta}I(|X|>b). \tag{2.4}$$

Consequently, $E|X_n|^{\beta}\le E|X|^{\beta}$.
Lemma 2.3. Under the conditions of Theorem 1.1, if $E|X|^{\alpha}/(\log(1+|X|))^{\alpha/\gamma-1}<\infty$ for $0<\gamma<\alpha$ with $0<\alpha\le 2$, then

$$\sum_{n=1}^{\infty}\frac{1}{n}\int_{1}^{\infty}\sum_{i=1}^{n}P\left(|a_{ni}X_i|>b_nt^{1/\alpha}\right)dt<\infty. \tag{2.5}$$
Proof. By Definition 1.2, we note that

$$\begin{aligned}\sum_{n=1}^{\infty}\frac{1}{n}\int_{1}^{\infty}\sum_{i=1}^{n}P\left(|a_{ni}X_i|>b_nt^{1/\alpha}\right)dt&\le\sum_{n=1}^{\infty}\frac{1}{n}\int_{1}^{\infty}\sum_{i=1}^{n}P\left(|a_{ni}X|>b_nt^{1/\alpha}\right)dt\\&\le\sum_{n=1}^{\infty}\frac{1}{n}\int_{0}^{\infty}\sum_{i=1}^{n}P\left(\frac{|a_{ni}X|^{\alpha}}{b_n^{\alpha}}>t\right)dt\\&\le\sum_{n=1}^{\infty}n^{-1}b_n^{-\alpha}\sum_{i=1}^{n}E|a_{ni}X|^{\alpha}.\end{aligned} \tag{2.6}$$
It is easy to show that

$$\begin{aligned}\sum_{n=1}^{\infty}n^{-1}b_n^{-\alpha}\sum_{i=1}^{n}|a_{ni}|^{\alpha}E|X|^{\alpha}I(|X|\le b_n)&\le C\sum_{n=1}^{\infty}b_n^{-\alpha}E|X|^{\alpha}I(|X|\le b_n)\\&\le C\sum_{n=1}^{\infty}b_n^{-\alpha}\sum_{k=1}^{n}E|X|^{\alpha}I(b_k<|X|\le b_{k+1})\\&\le C\sum_{k=1}^{\infty}E|X|^{\alpha}I(b_k<|X|\le b_{k+1})(\log k)^{1-\alpha/\gamma}\\&\le CE|X|^{\alpha}/(\log(1+|X|))^{\alpha/\gamma-1}<\infty,\end{aligned} \tag{2.7}$$

and

$$\begin{aligned}\sum_{n=1}^{\infty}n^{-1}b_n^{-\alpha}\sum_{i=1}^{n}|a_{ni}|^{\alpha}E|X|^{\alpha}I(|X|>b_n)&\le C\sum_{n=1}^{\infty}b_n^{-\alpha}E|X|^{\alpha}I(|X|>b_n)\\&=C\sum_{n=1}^{\infty}b_n^{-\alpha}\sum_{j=n}^{\infty}E|X|^{\alpha}I(b_j<|X|\le b_{j+1})\\&=C\sum_{j=1}^{\infty}E|X|^{\alpha}I(b_j<|X|\le b_{j+1})\sum_{n=1}^{j}n^{-1}(\log n)^{-\alpha/\gamma}\\&\le C\sum_{j=1}^{\infty}(\log j)^{1-\alpha/\gamma}E|X|^{\alpha}I(b_j<|X|\le b_{j+1})\\&\le CE|X|^{\alpha}/(\log(1+|X|))^{\alpha/\gamma-1}<\infty.\end{aligned} \tag{2.8}$$
Hence, (2.5) holds by (2.6)–(2.8).
Proof of Theorem 1.1. For any given $\varepsilon>0$, observe that

$$\begin{aligned}&\sum_{n=1}^{\infty}\frac{1}{n}E\left(\frac{1}{b_n}\max_{1\le j\le n}\left|\sum_{i=1}^{j}a_{ni}X_i\right|-\varepsilon\right)_+^{\alpha}=\sum_{n=1}^{\infty}\frac{1}{n}\int_{0}^{\infty}P\left(\frac{1}{b_n}\max_{1\le j\le n}\left|\sum_{i=1}^{j}a_{ni}X_i\right|-\varepsilon>t^{1/\alpha}\right)dt\\&=\sum_{n=1}^{\infty}\frac{1}{n}\int_{0}^{1}P\left(\frac{1}{b_n}\max_{1\le j\le n}\left|\sum_{i=1}^{j}a_{ni}X_i\right|>\varepsilon+t^{1/\alpha}\right)dt+\sum_{n=1}^{\infty}\frac{1}{n}\int_{1}^{\infty}P\left(\frac{1}{b_n}\max_{1\le j\le n}\left|\sum_{i=1}^{j}a_{ni}X_i\right|>\varepsilon+t^{1/\alpha}\right)dt\\&\le\sum_{n=1}^{\infty}\frac{1}{n}P\left(\max_{1\le j\le n}\left|\sum_{i=1}^{j}a_{ni}X_i\right|>\varepsilon b_n\right)+\sum_{n=1}^{\infty}\frac{1}{n}\int_{1}^{\infty}P\left(\max_{1\le j\le n}\left|\sum_{i=1}^{j}a_{ni}X_i\right|>b_nt^{1/\alpha}\right)dt\\&\triangleq I+J.\end{aligned} \tag{2.9}$$
By Theorem A of Li et al. [12], stated in the first section, we directly get $I<\infty$. In order to prove (1.5), it suffices to show that $J<\infty$.
Without loss of generality, assume that $a_{ni}\ge 0$. For all $t\ge 1$ and $1\le i\le n$, $n\in\mathbb{N}$, define

$$Y_i=a_{ni}X_iI\left(|a_{ni}X_i|\le b_nt^{1/\alpha}\right).$$
It is easy to check that

$$\left(\max_{1\le j\le n}\left|\sum_{i=1}^{j}a_{ni}X_i\right|>b_nt^{1/\alpha}\right)\subset\left(\max_{1\le j\le n}\left|\sum_{i=1}^{j}Y_i\right|>b_nt^{1/\alpha}\right)\cup\left(\bigcup_{i=1}^{n}\left(|a_{ni}X_i|>b_nt^{1/\alpha}\right)\right),$$

which implies

$$P\left(\max_{1\le j\le n}\left|\sum_{i=1}^{j}a_{ni}X_i\right|>b_nt^{1/\alpha}\right)\le P\left(\max_{1\le j\le n}\left|\sum_{i=1}^{j}Y_i\right|>b_nt^{1/\alpha}\right)+P\left(\bigcup_{i=1}^{n}\left(|a_{ni}X_i|>b_nt^{1/\alpha}\right)\right). \tag{2.10}$$
To prove $J<\infty$, it suffices to show that

$$J_1=\sum_{n=1}^{\infty}\frac{1}{n}\int_{1}^{\infty}P\left(\max_{1\le j\le n}\left|\sum_{i=1}^{j}Y_i\right|>b_nt^{1/\alpha}\right)dt<\infty,$$

$$J_2=\sum_{n=1}^{\infty}\frac{1}{n}\int_{1}^{\infty}P\left(\bigcup_{i=1}^{n}\left(|a_{ni}X_i|>b_nt^{1/\alpha}\right)\right)dt<\infty.$$
Since

$$P\left(\bigcup_{i=1}^{n}\left(|a_{ni}X_i|>b_nt^{1/\alpha}\right)\right)\le\sum_{i=1}^{n}P\left(|a_{ni}X_i|>b_nt^{1/\alpha}\right),$$

it follows from Lemma 2.3 that

$$J_2\le\sum_{n=1}^{\infty}\frac{1}{n}\int_{1}^{\infty}\sum_{i=1}^{n}P\left(|a_{ni}X_i|>b_nt^{1/\alpha}\right)dt<\infty.$$
Next, we prove that

$$\sup_{t\ge 1}\frac{1}{b_nt^{1/\alpha}}\max_{1\le j\le n}\left|\sum_{i=1}^{j}EY_i\right|\to 0. \tag{2.11}$$
By $EX_n=0$ and (2.4) of Lemma 2.2, it follows that

$$\begin{aligned}\sup_{t\ge 1}\frac{1}{b_nt^{1/\alpha}}\max_{1\le j\le n}\left|\sum_{i=1}^{j}EY_i\right|&=\sup_{t\ge 1}\frac{1}{b_nt^{1/\alpha}}\max_{1\le j\le n}\left|\sum_{i=1}^{j}Ea_{ni}X_iI\left(|a_{ni}X_i|\le b_nt^{1/\alpha}\right)\right|\\&=\sup_{t\ge 1}\frac{1}{b_nt^{1/\alpha}}\max_{1\le j\le n}\left|\sum_{i=1}^{j}Ea_{ni}X_iI\left(|a_{ni}X_i|>b_nt^{1/\alpha}\right)\right|\\&\le C\sup_{t\ge 1}\frac{1}{b_nt^{1/\alpha}}\sum_{i=1}^{n}E|a_{ni}X|I\left(|a_{ni}X|>b_nt^{1/\alpha}\right).\end{aligned}$$
Observe that

$$E|a_{ni}X|I\left(|a_{ni}X|>b_nt^{1/\alpha}\right)=E|a_{ni}X|I\left(|a_{ni}X|>b_nt^{1/\alpha},\ |X|\le b_n\right)+E|a_{ni}X|I\left(|a_{ni}X|>b_nt^{1/\alpha},\ |X|>b_n\right). \tag{2.12}$$
For $0<\gamma<\alpha$ and $1<\alpha\le 2$, it is clear that

$$\begin{aligned}E|a_{ni}X|I\left(|a_{ni}X|>b_nt^{1/\alpha},\ |X|\le b_n\right)&\le Cb_n^{1-\alpha}t^{(1/\alpha)-1}|a_{ni}|^{\alpha}E|X|^{\alpha}I(|X|\le b_n)\\&\le Cb_n^{1-\alpha}t^{(1/\alpha)-1}|a_{ni}|^{\alpha}E\left(\frac{|X|^{\alpha}}{(\log(1+|X|))^{\alpha/\gamma-1}}\,(\log(1+|X|))^{\alpha/\gamma-1}\right)I(|X|\le b_n)\\&\le Ct^{(1/\alpha)-1}n^{-1+(1/\alpha)}|a_{ni}|^{\alpha}(\log n)^{(1/\gamma)-1},\end{aligned} \tag{2.13}$$

and

$$\begin{aligned}E|a_{ni}X|I\left(|a_{ni}X|>b_nt^{1/\alpha},\ |X|>b_n\right)&\le C|a_{ni}|E|X|I(|X|>b_n)\\&\le Cb_n^{1-\alpha}(\log(1+b_n))^{(\alpha/\gamma)-1}|a_{ni}|\\&\le Cn^{-1+(1/\alpha)}(\log n)^{-1+(1/\gamma)}|a_{ni}|.\end{aligned} \tag{2.14}$$
Thus,

$$\sup_{t\ge 1}\frac{1}{b_nt^{1/\alpha}}\sum_{i=1}^{n}E|a_{ni}X|I\left(|a_{ni}X|>b_nt^{1/\alpha},\ |X|\le b_n\right)\le Cb_n^{-1}n^{-1+(1/\alpha)}(\log n)^{(1/\gamma)-1}\sum_{i=1}^{n}|a_{ni}|^{\alpha}\le C(\log n)^{-1}\to 0, \tag{2.15}$$

and

$$\sup_{t\ge 1}\frac{1}{b_nt^{1/\alpha}}\sum_{i=1}^{n}E|a_{ni}X|I\left(|a_{ni}X|>b_nt^{1/\alpha},\ |X|>b_n\right)\le Cb_n^{-1}n^{-1+(1/\alpha)}(\log n)^{-1+(1/\gamma)}\sum_{i=1}^{n}|a_{ni}|\le C(\log n)^{-1}\to 0. \tag{2.16}$$
Then (2.11) holds by (2.12)–(2.16). Hence, for $n$ sufficiently large, $\max_{1\le j\le n}\left|\sum_{i=1}^{j}EY_i\right|\le \frac{b_nt^{1/\alpha}}{2}$ holds uniformly for all $t\ge 1$. Therefore,

$$J_1\le\sum_{n=1}^{\infty}\frac{1}{n}\int_{1}^{\infty}P\left(\max_{1\le j\le n}\left|\sum_{i=1}^{j}(Y_i-EY_i)\right|>\frac{b_nt^{1/\alpha}}{2}\right)dt. \tag{2.17}$$
By Markov's inequality, (2.2) of Lemma 2.1, and (2.3) of Lemma 2.2, we get that

$$\begin{aligned}J_1&\le C\sum_{n=1}^{\infty}\frac{1}{n}\int_{1}^{\infty}\frac{1}{b_n^2t^{2/\alpha}}E\left(\max_{1\le j\le n}\left|\sum_{i=1}^{j}(Y_i-EY_i)\right|^2\right)dt\\&\le C\sum_{n=1}^{\infty}\frac{1}{n}\int_{1}^{\infty}\frac{1}{b_n^2t^{2/\alpha}}\left(\sum_{i=1}^{n}E|Y_i-EY_i|^2\right)dt\\&\le C\sum_{n=1}^{\infty}\frac{1}{n}\int_{1}^{\infty}\frac{1}{b_n^2t^{2/\alpha}}\left(\sum_{i=1}^{n}E|a_{ni}X_i|^2I\left(|a_{ni}X_i|\le b_nt^{1/\alpha}\right)\right)dt\\&\le C\sum_{n=1}^{\infty}\frac{1}{n}\int_{1}^{\infty}\frac{1}{b_n^2t^{2/\alpha}}\left(\sum_{i=1}^{n}E|a_{ni}X|^2I\left(|a_{ni}X|\le b_nt^{1/\alpha}\right)\right)dt+C\sum_{n=1}^{\infty}\frac{1}{n}\int_{1}^{\infty}\sum_{i=1}^{n}P\left(|a_{ni}X|>b_nt^{1/\alpha}\right)dt\\&\le C\sum_{n=1}^{\infty}\frac{1}{n}\int_{1}^{\infty}\frac{1}{b_n^2t^{2/\alpha}}\left(\sum_{i=1}^{n}E|a_{ni}X|^2I(|a_{ni}X|\le b_n)\right)dt+C\sum_{n=1}^{\infty}\frac{1}{n}\int_{1}^{\infty}\frac{1}{b_n^2t^{2/\alpha}}\left(\sum_{i=1}^{n}E|a_{ni}X|^2I\left(b_n<|a_{ni}X|\le b_nt^{1/\alpha}\right)\right)dt\\&\quad+C\sum_{n=1}^{\infty}\frac{1}{n}\int_{1}^{\infty}\sum_{i=1}^{n}P\left(|a_{ni}X|>b_nt^{1/\alpha}\right)dt\\&\triangleq J_{11}+J_{12}+J_{13}.\end{aligned} \tag{2.18}$$
Based on formula (2.2) of Lemma 2.2 in Li et al. [12], we get that

$$J_{11}=\sum_{n=1}^{\infty}\frac{1}{n}\int_{1}^{\infty}\frac{1}{b_n^2t^{2/\alpha}}\left(\sum_{i=1}^{n}E|a_{ni}X|^2I(|a_{ni}X|\le b_n)\right)dt\le C\sum_{n=1}^{\infty}\frac{1}{n}\,\frac{1}{b_n^{\alpha}}\left(\sum_{i=1}^{n}E|a_{ni}X|^{\alpha}I(|a_{ni}X|\le b_n)\right)<\infty. \tag{2.19}$$
Substituting $t=x^{\alpha}$, and using (2.3) of Lemma 2.2, Markov's inequality, and Lemma 2.3, we also get that

$$\begin{aligned}J_{12}&=\sum_{n=1}^{\infty}\frac{1}{n}\int_{1}^{\infty}\frac{1}{b_n^2t^{2/\alpha}}\left(\sum_{i=1}^{n}E|a_{ni}X|^2I\left(b_n<|a_{ni}X|\le b_nt^{1/\alpha}\right)\right)dt\\&\le C\sum_{n=1}^{\infty}\frac{1}{nb_n^2}\int_{1}^{\infty}x^{\alpha-3}\sum_{i=1}^{n}E|a_{ni}X|^2I(b_n<|a_{ni}X|\le b_nx)\,dx\\&\le C\sum_{n=1}^{\infty}\frac{1}{nb_n^2}\sum_{m=1}^{\infty}\int_{m}^{m+1}x^{\alpha-3}\sum_{i=1}^{n}E|a_{ni}X|^2I(b_n<|a_{ni}X|\le b_nx)\,dx\\&\le C\sum_{n=1}^{\infty}\frac{1}{nb_n^2}\sum_{m=1}^{\infty}m^{\alpha-3}\sum_{i=1}^{n}E|a_{ni}X|^2I(b_n<|a_{ni}X|\le b_n(m+1))\\&=C\sum_{n=1}^{\infty}\frac{1}{nb_n^2}\sum_{i=1}^{n}\sum_{m=1}^{\infty}\sum_{s=1}^{m}m^{\alpha-3}E|a_{ni}X|^2I(b_ns<|a_{ni}X|\le b_n(s+1))\\&=C\sum_{n=1}^{\infty}\frac{1}{nb_n^2}\sum_{i=1}^{n}\sum_{s=1}^{\infty}E|a_{ni}X|^2I(b_ns<|a_{ni}X|\le b_n(s+1))\sum_{m=s}^{\infty}m^{\alpha-3}\\&\le C\sum_{n=1}^{\infty}\frac{1}{nb_n^2}\sum_{i=1}^{n}\sum_{s=1}^{\infty}E|a_{ni}X|^2I(b_ns<|a_{ni}X|\le b_n(s+1))s^{\alpha-2}\\&\le C\sum_{n=1}^{\infty}\frac{1}{nb_n^{\alpha}}\sum_{i=1}^{n}E|a_{ni}X|^{\alpha}I(|a_{ni}X|>b_n)\\&\le CE|X|^{\alpha}/(\log(1+|X|))^{\alpha/\gamma-1}<\infty.\end{aligned} \tag{2.20}$$
Analogously to the argument of Lemma 2.3, it is easy to show that

$$J_{13}=\sum_{n=1}^{\infty}\frac{1}{n}\int_{1}^{\infty}\sum_{i=1}^{n}P\left(|a_{ni}X|>b_nt^{1/\alpha}\right)dt\le CE|X|^{\alpha}/(\log(1+|X|))^{\alpha/\gamma-1}<\infty. \tag{2.21}$$
Hence, the desired result J1<∞ holds by the above statements. The proof of Theorem 1.1 is completed.
Remark 2.1. Under the conditions of Theorem 1.1, note that

$$\begin{aligned}\infty&>\sum_{n=1}^{\infty}\frac{1}{n}E\left(\frac{1}{b_n}\max_{1\le j\le n}\left|\sum_{i=1}^{j}a_{ni}X_i\right|-\varepsilon\right)_+^{\alpha}=\sum_{n=1}^{\infty}\frac{1}{n}\int_{0}^{\infty}P\left(\frac{1}{b_n}\max_{1\le j\le n}\left|\sum_{i=1}^{j}a_{ni}X_i\right|-\varepsilon>t^{1/\alpha}\right)dt\\&\ge C\sum_{n=1}^{\infty}\frac{1}{n}\int_{0}^{\varepsilon^{\alpha}}P\left(\frac{1}{b_n}\max_{1\le j\le n}\left|\sum_{i=1}^{j}a_{ni}X_i\right|>\varepsilon+t^{1/\alpha}\right)dt\\&\ge C\sum_{n=1}^{\infty}\frac{1}{n}P\left(\max_{1\le j\le n}\left|\sum_{i=1}^{j}a_{ni}X_i\right|>2\varepsilon b_n\right)\quad\text{for all }\varepsilon>0.\end{aligned} \tag{2.22}$$
Since $\varepsilon>0$ is arbitrary, it follows from (2.22) that complete moment convergence is strictly stronger than complete convergence. Compared with the corresponding results of Li et al. [12] and Chen and Sung [6], it is worth pointing out that Theorem 1.1 of this paper extends and improves those results under the same moment condition. In addition, the main result partially settles the open problem posed by Huang et al. [10] for the case $0<\gamma<\alpha$ with $1<\alpha\le 2$.
In this work, we considered the problem of complete moment convergence for weighted sums of weakly dependent (i.e., $\rho^*$-mixing) random variables. The main results are presented as the main theorem and a remark, together with Lemma 2.3, which plays a vital role in proving the main theorem. The main theorem improves and generalizes the corresponding complete convergence results of Li et al. [12] and Chen and Sung [6].
The authors are most grateful to the Editor and the anonymous referees for carefully reading the manuscript and offering valuable suggestions and comments, which greatly helped improve this paper. This work was supported by the Doctor and Professor Natural Science Foundation of Guilin University of Aerospace Technology.
All authors declare no conflicts of interest in this paper.