
Mathematical studies of the dynamics of finite-size binary neural networks: A review of recent progress

Abstract: Several mathematical approaches to studying analytically the dynamics of neural networks rely on mean-field approximations, which are rigorously applicable only to networks of infinite size. However, all existing real biological networks have finite size, and many of them, such as microscopic circuits in invertebrates, are composed of only a few tens of neurons. Thus, it is important to extend to small-size networks our ability to study neural dynamics analytically. Analytical solutions of the dynamics of small-size neural networks have remained elusive for many decades, because the powerful methods of statistical analysis, such as the central limit theorem and the law of large numbers, do not apply to small networks. In this article, we critically review recent progress on the study of the dynamics of small networks composed of binary neurons. In particular, we review the mathematical techniques we developed for studying the bifurcations of the network dynamics, the dualism between neural activity and membrane potentials, cross-neuron correlations, and pattern storage in stochastic networks. Then, we compare our results with existing mathematical techniques for studying networks composed of a finite number of neurons. Finally, we highlight key challenges that remain open, future directions for further progress, and possible implications of our results for neuroscience.

    Citation: Diego Fasoli, Stefano Panzeri. Mathematical studies of the dynamics of finite-size binary neural networks: A review of recent progress[J]. Mathematical Biosciences and Engineering, 2019, 16(6): 8025-8059. doi: 10.3934/mbe.2019404



    Understanding the dynamics of networks of neurons, and of how such networks represent, process and exchange information by means of the temporal evolution of their activity, is one of the central problems in neuroscience. Real networks of neurons are highly complex in terms of both structure and physiology, and introducing the details of this complexity greatly reduces the tractability of the models. Thus, a mathematical model of neural networks needs to be carefully designed, by finding a compromise between the elements of biological complexity and plausibility that are introduced, and the analytical tractability of the resulting model [1].

    A wide set of mathematical models has been proposed to investigate the behavior of biological neural networks [2,3,4]. Typically, these models attempt to simplify as much as possible the original system they describe, without losing the properties that give rise to the most interesting emergent phenomena observed in biological systems. Binary neural network models [5,6,7,8,9] represent one of the most successful examples in finding a good compromise between keeping the model simple enough to remain tractable and achieving a rich dynamics with a set of complex emergent network properties.

    Binary models describe the dynamical properties of networks composed of threshold units, which integrate their inputs to produce a binary output, namely a high (respectively low) output firing rate when their membrane potential does (respectively does not) exceed a given threshold (see section (2) for more details). Among the wide set of neural network models proposed by computational neuroscientists, threshold units represent one of the most convenient tools for studying the dynamical and statistical properties of neural circuits. The relative ease with which these models can be investigated analytically is a consequence of their thresholding activation function, which can be considered as the simplest (piecewise-constant) approximation of the non-linear (and typically sigmoidal shaped [10,11]) graded input-output relationship of biological neurons. Despite their simplicity, as shown both by classic work [5,6,7,12,13], as well as by our work reviewed here [14,15,16], the jump discontinuity of their activation function at the threshold is sufficient to endow binary networks with a complex set of useful emergent dynamical properties and non-linear phenomena, such as attractor dynamics [17], formation of patterns and oscillatory waves [18], chaos [19], and information processing capabilities [20], which are reminiscent of neuronal activity in biological networks.

    The importance of binary network models is further strengthened by their close relationship with spin networks studied in physics [21,22,23,24]. The temporal evolution of a binary network in the zero-noise limit is isomorphic to the dynamics of kinetic models of spin networks at zero temperature [25]. This allowed computational neuroscientists to study the behavior of large-size binary networks by applying the powerful techniques of statistical mechanics already developed for spin models (see e.g. [26]).

    Sizes of brains and of specialized neural networks within brains change considerably across animal species, ranging from a few tens of neurons in invertebrates such as rotifers and nematodes, to billions of neurons in cetaceans and primates [27]. Network size also changes across levels of spatial organization, ranging from microscopic and mesoscopic levels of organization in cortical micro-columns and columns (from a few tens [28] to a few tens of thousands of neurons [29,30]), to several orders of magnitude more in large macroscopic networks, such as the resting state networks in the human brain, which involve many brain areas [31,32]. For this reason, it is important to be able to study mathematically the dynamics of binary neural network models (or of any network model, see e.g. [33,34,35]) for a wide range of different network sizes.

    Large-scale networks composed of several thousands of neurons or more are typically studied by taking advantage of the powerful techniques of statistical mechanics, such as the law of large numbers and the central limit theorem, under the assumption of random connectivity matrices and/or noise-driven dynamics (see e.g. [26,13,36,14,16]). These theories, such as those developed in physics for spin models, typically approximate the interaction of all the other neurons with a given neuron by a mean field, namely an effective interaction obtained by averaging the interactions of the other neurons. This allows one to dimensionally reduce the model, by transforming the set of equations of a large network into a single-neuron equation. Therefore mean-field theories represent a powerful tool for gaining insight into the behavior of large networks at a relatively low cost.

    However, statistical mechanics does not apply to small-scale networks containing only few tens of neurons, which therefore prove much more difficult to study mathematically. Importantly, several studies, see e.g. [37,36,38,35,14,16], have shown that the dynamics of neural network models in the large-size limit can be qualitatively, not only quantitatively, very different from that of the same models with small or finite size. For this reason, the computational investigation of networks composed of a finite number of neurons requires the development of new specialized analytical and numerical techniques. Some notable examples of the techniques that have been recently introduced for studying finite-size networks include linear response theory [39,40,41,42] and mesoscopic population equations [43] for networks of integrate-and-fire neurons, the cumulant expansion of binary networks [44], and much more (see Subsection (6.2) for a deeper discussion).

    Recently, we proposed an alternative theory to describe the dynamics of finite-size neural networks composed of binary threshold units [14,15,16], which we critically review in this paper. In section (2) we introduce the binary network model that we analyze in this review, while in section (3) we focus specifically on the zero-noise limit of small networks, and we characterize mathematically the bifurcation points of the network dynamics in single network realizations. In section (4) we describe the techniques that we developed for studying small networks when external sources of noise are added to the neural equations. In particular, in Subsection (4.1) we describe the dualism between neural activity and membrane potentials, and we derive a complete description of the probabilistic behavior of the network in the long-time limit. In Subsections (4.2) and (4.3), we introduce, under some assumptions on the nature of the noise sources, exact analytical expressions of the cross-neuron correlations, and a learning rule for storing patterns and sequences of neural activity. Then, in section (5), we extend the study of bifurcations of section (3) to networks with quenched disorder across multiple network realizations. To conclude, in section (6) we discuss the advantages and weaknesses of our techniques, also in relation to previous analytical work on finite-size networks. Moreover, we highlight key challenges that remain open, as well as future directions for further progress in the mathematical study of binary networks, and possible implications of our results for neuroscience.

    In this review we assume that neural activity evolves in discrete time steps, and that the threshold units are synchronously updated. These assumptions are often used in studying network dynamics, see e.g. [5,6,12,2]. A threshold unit, here referred to as an artificial neuron, is a logic gate or a mathematical function that mimics the working mechanisms of a biological neuron. Typically the unit receives several inputs, which can be loosely interpreted as postsynaptic potentials at neural dendrites, and sums them to produce a binary or digital output (also known as activation or neural activity). Usually, each input of a threshold unit is multiplied by a so-called synaptic weight, which represents the strength of the connection between pairs of neurons. Moreover, the sum of the weighted inputs is passed through a piecewise-constant (or Heaviside) thresholding function, also known as activation function or transfer function. If the sum of the weighted inputs exceeds a threshold, the output is set to one, and the artificial neuron is said to fire. On the contrary, if the sum is below the threshold, the output is set to zero, and the neuron is quiescent. For this reason, the binary output of a threshold unit can be loosely interpreted as the firing rate of the postsynaptic neuron, namely as the number of action potentials (spikes) per second that propagate along its axon toward the other neurons in the network.

    The basic set of equations defining the dynamics of the discrete-time binary network is:

    $$A_i(t+1)=H\!\left(\sum_{j=0}^{N-1}J_{i,j}A_j(t)+I_i+\sum_{j=0}^{N-1}\sigma^{\mathcal{N}}_{i,j}\mathcal{N}_j(t)-\theta_i\right),\quad i=0,\dots,N-1, \tag{2.1}$$

    which describes the temporal evolution of the neural activity $A_i$ of the $i$th neuron, from the time instant $t$ to the time instant $t+1$. In Eq. (2.1), $N$ represents the size (namely the number of threshold units) of the network. The matrix $J=[J_{i,j}]_{i,j=0,\dots,N-1}$ is the (generally asymmetric) synaptic connectivity matrix of the network, whose entries $J_{i,j}$ are time-independent and represent the strength or weight of the synaptic connection from the $j$th (presynaptic) neuron to the $i$th (postsynaptic) neuron. Moreover, $I_i+\sum_{j=0}^{N-1}\sigma^{\mathcal{N}}_{i,j}\mathcal{N}_j(t)$ represents the total external input current (i.e. the stimulus) to the $i$th neuron. In more detail, $I_i$ is the time-independent deterministic component of the stimulus, while its stochastic component is the sum of $N$ random variables $\sigma^{\mathcal{N}}_{i,j}\mathcal{N}_j(t)$, each one having zero mean and standard deviation $\sigma^{\mathcal{N}}_{i,j}$. The vector $\boldsymbol{\mathcal{N}}\overset{\mathrm{def}}{=}[\mathcal{N}_0,\dots,\mathcal{N}_{N-1}]^T$ represents a collection of stochastic variables with unit standard deviation, whose joint probability distribution $p_{\boldsymbol{\mathcal{N}}}$ is arbitrary. Then, in Eq. (2.1), $H(\cdot)$ is the Heaviside activation function, which compares the total input with the firing threshold $\theta_i$ and is defined as follows:

    $$H(x)=\begin{cases}0, & \text{if } x<0\\ 1, & \text{if } x\geq 0.\end{cases}$$
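
    As a minimal illustration of how Eq. (2.1) can be simulated, the sketch below implements the synchronous update rule in Python/NumPy; the network size, weights, stimuli, thresholds and the Gaussian choice for the noise sources are arbitrary assumptions made only for this example, not the parameters used in the rest of the review.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 5                                    # network size (number of threshold units)
J = rng.normal(0.0, 1.0, size=(N, N))    # (generally asymmetric) synaptic connectivity matrix
np.fill_diagonal(J, 0.0)                 # no self-connections (illustrative choice)
I = np.full(N, 0.5)                      # deterministic stimuli I_i
theta = np.ones(N)                       # firing thresholds theta_i
Sigma = 0.2 * np.eye(N)                  # matrix of noise standard deviations sigma_{i,j}

def step(A):
    """One synchronous update of Eq. (2.1): A_i(t+1) = H(sum_j J_ij A_j(t) + I_i + noise_i - theta_i)."""
    noise = Sigma @ rng.standard_normal(N)   # stochastic input sum_j sigma_{i,j} N_j(t), Gaussian example
    return (J @ A + I + noise - theta >= 0).astype(int)

A = rng.integers(0, 2, size=N)           # random initial activity vector
for t in range(100):
    A = step(A)
```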

    It is important to observe that, unlike the classic Hopfield network [7], which is symmetric and asynchronously updated, a Lyapunov function for synchronous networks with asymmetric synaptic weights, like ours, is generally not known. For this reason, the analytical investigation of the network dynamics determined by Eq. (2.1) proves much more challenging. In Sections (3)-(5), we will review the techniques that we developed in [14,15,16] for investigating the dynamical and probabilistic properties of Eq. (2.1) in small networks.

    An important problem in the theory of binary networks is the study of the qualitative changes in the dynamics of their neuronal activity, which typically are elicited by variations in the external stimuli. In mathematical terms, these changes of dynamics are called bifurcations [45]. Seminal work in physics focused on the study of bifurcations in infinite-size spin networks, see e.g. [21,46,23]. On the other hand, the theory of bifurcations of non-smooth dynamical systems composed of a finite number of units, including those with discontinuous functions such as binary networks, has been developed mostly for continuous-time models, see e.g. [47,48,49,50,51], and for piecewise-smooth continuous maps [52,53]. Discontinuous maps are formally described by the following recurrence relation:

    $$\boldsymbol{x}(t+1)=\boldsymbol{f}(\boldsymbol{x}(t),p)=\begin{cases}\boldsymbol{f}_1(\boldsymbol{x}(t),p), & \boldsymbol{x}(t)\in S_1\\ \boldsymbol{f}_2(\boldsymbol{x}(t),p), & \boldsymbol{x}(t)\in S_2\\ \quad\vdots\\ \boldsymbol{f}_n(\boldsymbol{x}(t),p), & \boldsymbol{x}(t)\in S_n,\end{cases}$$

    where the function $\boldsymbol{f}$ is discontinuous across the boundaries of the regions $S_1,\dots,S_n$ of the phase space, while $p$ is a bifurcation parameter. Despite the important role that discontinuous maps play in computational neuroscience, the development of new techniques for studying their bifurcation structure has received much less attention [54].

    In [14,15] we tackled the problem of deriving the bifurcations in the dynamics of the neural activity for finite-size binary networks with arbitrary connectivity matrix, which are described by the discontinuous map Eq. (2.1). As is common practice, we performed the bifurcation analysis in the zero-noise limit, i.e. for $\sigma^{\mathcal{N}}_{i,j}\to 0\ \forall i,j$ (see Eq. (2.1)). In particular, we studied how the dynamics of neural activity switches between stationary states and neural oscillations when varying the external stimuli to the network. Because of the discrete nature of the neural activity, there exists only a finite number of stationary and oscillatory solutions to Eq. (2.1). This allowed us to introduce a combinatorial brute-force approach for studying the bifurcation structure of binary networks, which we describe briefly below.

    We introduce the vector $\boldsymbol{A}\overset{\mathrm{def}}{=}[A_0,\dots,A_{N-1}]^T$ containing the activities of all $N$ neurons, and the sequence $\mathcal{S}^{(0,T)}$ of activity vectors $\boldsymbol{A}^{(0)}\to\boldsymbol{A}^{(1)}\to\dots\to\boldsymbol{A}^{(T)}$, for some $1\leq T\leq 2^N$. Given a network with $P$ distinct input currents $I_0,\dots,I_{P-1}$, we also define $\Gamma_{I_\alpha}$ to be the set of neurons that share the same external current $I_\alpha$ (namely $\Gamma_{I_\alpha}\overset{\mathrm{def}}{=}\{i\in\{0,\dots,N-1\}:I_i=I_\alpha\}$), and $\Gamma^{(j)}_{I_\alpha,x}\overset{\mathrm{def}}{=}\{i\in\Gamma_{I_\alpha}:A^{(j)}_i=x\}$ for $x\in\{0,1\}$. Then, in [15] we proved that the sequence $\mathcal{S}^{(0,T)}$ is a solution of Eq. (2.1) in the time range $[0,T]$ (i.e. $\boldsymbol{A}(t=j)=\boldsymbol{A}^{(j)}$ for $j=0,\dots,T$), for every combination of stimuli $(I_0,\dots,I_{P-1})\in\mathcal{V}=\mathcal{V}_0\times\dots\times\mathcal{V}_{P-1}$, where:

    $$\mathcal{V}_\alpha\overset{\mathrm{def}}{=}\begin{cases}(-\infty,\Xi_\alpha) & \text{if }\Gamma^{(j+1)}_{I_\alpha,1}=\emptyset\ \ \forall j\in\mathcal{T}\\ [\Lambda_\alpha,+\infty) & \text{if }\Gamma^{(j+1)}_{I_\alpha,0}=\emptyset\ \ \forall j\in\mathcal{T}\\ [\Lambda_\alpha,\Xi_\alpha) & \text{otherwise}\end{cases}\qquad \mathcal{T}\overset{\mathrm{def}}{=}\{0,\dots,T-1\},\quad \mathcal{T}_{\alpha,x}\overset{\mathrm{def}}{=}\left\{j\in\mathcal{T}:\Gamma^{(j+1)}_{I_\alpha,x}\neq\emptyset\right\}$$
    $$\Lambda_\alpha\overset{\mathrm{def}}{=}\max_{j\in\mathcal{T}_{\alpha,1}}\left(\max_{i\in\Gamma^{(j+1)}_{I_\alpha,1}}\mathfrak{I}^{(j)}_i\right),\qquad \Xi_\alpha\overset{\mathrm{def}}{=}\min_{j\in\mathcal{T}_{\alpha,0}}\left(\min_{i\in\Gamma^{(j+1)}_{I_\alpha,0}}\mathfrak{I}^{(j)}_i\right),\qquad \mathfrak{I}^{(j)}_i\overset{\mathrm{def}}{=}\theta_i-\sum_{k=0}^{N-1}J_{i,k}A^{(j)}_k. \tag{3.1}$$

    In Eq. (3.1), $\mathcal{V}_\alpha$ for $\alpha=0,\dots,P-1$ represent the edges of the $P$-dimensional hyperrectangle $\mathcal{V}$ within which the sequence $\mathcal{S}^{(0,T)}$ is stable. $\mathcal{S}^{(0,T)}$ loses its stability, namely it turns into another sequence, at the boundaries $\Lambda_\alpha$ and $\Xi_\alpha$, which therefore represent the coordinates of the bifurcation points of the neural activity sequence. It is important to observe that $\Lambda_\alpha$ and $\Xi_\alpha$ are explicitly calculated from Eq. (3.1) in terms of the connectivity matrix and the firing thresholds. For this reason, by using our approach, there is no need to perform any fine (and computationally expensive) binning of the stimulus space in order to detect the bifurcation points of the sequence $\mathcal{S}^{(0,T)}$.
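
    To make the use of Eq. (3.1) concrete, the following sketch computes the stimulus interval $[\Lambda_\alpha,\Xi_\alpha)$ in the simplest case of a candidate stationary state ($T=1$), for one group of neurons sharing the same external current; the function name and calling convention are introduced here only for illustration.

```python
import numpy as np

def stationary_interval(A, J, theta, group):
    """Stimulus interval [Lambda, Xi) over which the activity vector A stays stationary,
    for the neurons in `group` sharing the same external current (Eq. (3.1) with T = 1).
    If Lambda >= Xi the interval is empty and A is never a fixed point for this group."""
    frak_I = theta - J @ A                        # the quantities theta_i - sum_k J_ik A_k of Eq. (3.1)
    active = [i for i in group if A[i] == 1]      # neurons required to fire
    silent = [i for i in group if A[i] == 0]      # neurons required to stay quiescent
    Lam = max(frak_I[i] for i in active) if active else -np.inf
    Xi = min(frak_I[i] for i in silent) if silent else np.inf
    return Lam, Xi

# hypothetical usage: stimulus range keeping A = [1,0,1,1,0] stationary for a group {0, 1}
# Lam, Xi = stationary_interval(np.array([1, 0, 1, 1, 0]), J, theta, group=[0, 1])
```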

    In this review, we focus specifically on the subset of sequences that satisfy the additional constraint $\boldsymbol{A}^{(0)}=\boldsymbol{A}^{(T)}$: these sequences represent the candidate oscillatory solutions with period $T$ of Eq. (2.1). We also observe that, in the special case $T=1$, we obtain the set of candidate stationary solutions of the network equations. For this reason, the bifurcation diagram of a binary network can be decomposed into two panels, the oscillation and the multistability diagrams. These diagrams describe the relationship between the oscillatory/stationary solutions of Eq. (2.1) and the set of stimuli. In other words, these diagrams display the fragmentation of the stimulus space into areas where several oscillatory solutions occur, and/or where the network is (multi)stable.

    It is important to note that only the sequences whose hyperrectangles $\mathcal{V}$ have positive hypervolumes (i.e. the sequences that satisfy the condition $\Lambda_\alpha<\Xi_\alpha$, for every $\alpha$ and $j$ such that $\Gamma^{(j+1)}_{I_\alpha,0},\Gamma^{(j+1)}_{I_\alpha,1}\neq\emptyset$) are solutions of Eq. (2.1), for some combinations of stimuli. On the contrary, if the hypervolume of $\mathcal{V}$ is zero, the corresponding neural sequence is never a solution of Eq. (2.1). Unfortunately, the sequences with positive hypervolumes are not known a priori, therefore they must be found through a brute-force searching procedure. Because of the combinatorial explosion of the number of possible sequences for increasing $N$, brute-force algorithms typically have at least exponential complexity with respect to the network size. Therefore they can be applied only to small networks (typically $N<30$), regardless of the density of their synaptic connections. However, real cortical circuits are typically very sparse. At macroscopic scales of spatial organization, the average density of the synaptic connections (i.e. the ratio between the actual and the maximum possible number of connections among the neurons within the considered network) is approximately $10^{-6}$–$10^{-7}$ across the whole cortex, and it can increase up to $0.2$–$0.4$ in connection pathways linking cortical areas [55]. Moreover, mesoscopic structures such as cortical columns contain thousands of neurons and millions of synaptic connections. In [56], the authors simulated a model of a cortical column consisting of about 80,000 neurons and 0.3 billion synapses, so that the average density of the synaptic connections of the model is about 0.047. Another estimate of the sparseness of the neural circuits within a cortical column was reported in [57], where the authors showed that more than 80% of the synapses on each neuron in a cortical column originate from cells outside the local volume. To account for network sparseness, in [15] we developed an efficient algorithm specifically designed for networks with a low density of synaptic connections. This algorithm takes advantage of the information provided by the absence of synaptic connections among the threshold units to speed up the detection of the oscillatory and stationary solutions of Eq. (2.1). In other words, the sparse-efficient algorithm avoids checking the sequences of neural activity vectors that are not compatible with the topology of the synaptic connections, resulting in a much faster calculation of the bifurcation structure of the network model. For example, in [15] we showed that, for networks composed of 50 neurons and about 70 synapses (network sparseness 0.028), the algorithm derived the complete multistability diagram in less than 2 seconds on a laptop, by identifying the stationary states of the network among a set of about $10^{15}$ possible states. Moreover, the algorithm detected the whole set of network oscillations with period $T=2$ in 7 minutes, among a set of about $10^{29}$ possible oscillations with that period. The interested reader is referred to [15] for a detailed discussion of the algorithm.
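
    For comparison with the sparse-efficient algorithm of [15] (which is not reproduced here), the sketch below shows the naive brute-force search it improves upon: at fixed stimuli and in the zero-noise limit, it enumerates all $2^N$ activity vectors and keeps those that are fixed points of Eq. (2.1), which is feasible only for small $N$.

```python
import numpy as np
from itertools import product

def stationary_states(J, I, theta):
    """Brute-force search (exponential in N) for the stationary solutions of Eq. (2.1)
    in the zero-noise limit: all A in {0,1}^N such that A = H(J A + I - theta)."""
    N = len(theta)
    fixed = []
    for bits in product((0, 1), repeat=N):
        A = np.array(bits)
        if np.array_equal((J @ A + I - theta >= 0).astype(int), A):
            fixed.append(A)
    return fixed
```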

    In Figure (1) we show an example of a bifurcation diagram, obtained from Eq. (3.1), in the specific case of the network parameters reported in Table (1). In this example, we consider a network with heterogeneous random synaptic weights, which is composed of 3 excitatory neurons and 2 inhibitory neurons. Moreover, in Figure (2), we show two examples of state-to-state transitions, obtained by solving Eq. (2.1) in the zero-noise limit, for all the $2^N$ initial conditions of the network dynamics (i.e. from $\boldsymbol{A}(t=0)=[0,\dots,0]^T$ to $\boldsymbol{A}(t=0)=[1,\dots,1]^T$).

    Figure 1.  An example of bifurcation diagram. This figure shows the bifurcation diagram of the binary network, obtained for the parameters in Table (1). We supposed that the excitatory neurons with indexes $i=0$ and $i=1$ receive an arbitrary external stimulus $I_0=I_1=I_E$, which represents the first bifurcation parameter, while the excitatory neuron with index $i=2$ receives a fixed stimulus $I_2=10$. Moreover, we assumed that the inhibitory neuron with index $i=3$ receives an arbitrary stimulus $I_3=I_I$, which represents the second bifurcation parameter, while the inhibitory neuron with index $i=4$ receives a fixed stimulus $I_4=5$. Then, we plotted the multistability and oscillation diagrams in the $I_E$–$I_I$ plane, according to Eq. (3.1). A) Multistability diagram. Each color represents a different degree of multistability (white = astable, red = monostable, green = bistable, blue = tristable). B) Oscillation diagram. Each color represents a different set of oscillatory solutions of Eq. (2.1) (the notation $x:y$ reveals the formation of $y$ distinct oscillations with period $T=x$). For example, for every combination of stimuli $(I_E,I_I)$ that lies in the yellow area, Eq. (2.1) has 2 oscillatory solutions with period $T=2$, while for every combination in the green areas, the equation has an oscillatory solution with period $T=3$. Note that, for other connectivity matrices $J$, oscillations with distinct periods may coexist in the same area.
    Table 1.  Network parameters 1. This table reports the values of the network parameters that we used for plotting Figures (1)-(4).
    J=[01717436250153321010107502960157285950],θ0==θ4=1

    Figure 2.  Examples of state-to-state transitions of a binary network. This figure shows the allowed transitions between states of neural activity, obtained for the network parameters in Table (1) and $I_2=10$, $I_4=5$. The nodes in the graphs represent the $2^N$ states of the neural activity vector $\boldsymbol{A}$, namely they are numbered by taking the activity states as binary words (e.g. the node 26 corresponds to the state $\boldsymbol{A}=[1,1,0,1,0]^T$), while the arrows represent the allowed transitions between these states. A) State-to-state transitions, obtained for $I_E=10.5$ and $I_I=6$. We highlighted in red 3 stationary states (i.e. the nodes 2, 5 and 14) and 2 oscillations of period $T=2$ (i.e. $0\to 7\to 0$ and $6\to 10\to 6$). Note that, as expected, the point in the stimulus plane with coordinates $(I_E,I_I)=(10.5,6)$ lies in the blue area of the multistability diagram (see Figure (1), panel A), which corresponds to tristability, and in the yellow area of the oscillation diagram (see Figure (1), panel B), which corresponds to the formation of 2 oscillations with period $T=2$. B) State-to-state transitions, obtained for $I_E=5$ and $I_I=55$. We highlighted in red an oscillation of period $T=5$ (i.e. $6\to 8\to 21\to 29\to 31\to 6$). Note that the point in the stimulus plane with coordinates $(I_E,I_I)=(5,55)$ lies in the white area of the multistability diagram, where Eq. (2.1) has no stationary solutions, and in the cyan area of the oscillation diagram, which corresponds to the formation of an oscillation with period $T=5$.
    Figure 3.  Probability distributions in stochastic networks. This figure shows the probability mass function $P(\boldsymbol{A},t)$ and the marginal probability density function $p(V_i,t)$ of the 0th neuron, obtained for the network parameters in Table (1), $\boldsymbol{I}=[0,3,2,4,0]^T$, and $\Psi=\mathrm{diag}(2,3,2,3,3)$. A) - B) Probability distributions obtained for correlated normally-distributed noise sources, with uniform cross-correlation (namely $\mathrm{Corr}(\mathcal{N}_i(t),\mathcal{N}_j(t))=0.8$ $\forall i,j$ such that $i\neq j$, and $\mathrm{Corr}(\mathcal{N}_i(t),\mathcal{N}_j(s))=0$ $\forall t,s$ such that $t\neq s$). C) - D) Probability distributions obtained for independent noise sources with Laplace distributions, namely $p_{\boldsymbol{\mathcal{N}}}(\boldsymbol{x})=\prod_{i=0}^{N-1}p_{\mathcal{N}_i}(x_i)$, where $p_{\mathcal{N}_i}(x_i)=\frac{\sqrt{2}}{2}\exp\!\left(-\sqrt{2}\,|x_i|\right)$. The red bars in panels A and C are calculated analytically from Eq. (4.6), while the red curves in panels B and D are calculated according to Eq. (4.8). The blue bars in panels A, C, and the blue dots in panels B, D are calculated numerically through a Monte Carlo method, namely by solving iteratively Eqs. (2.1) and (4.2) 10,000 times in the time range [0,100], and then by calculating the probability distributions across the repetitions at $t=100$.
    Figure 4.  Cross-neuron correlations. This figure shows the dependence of the cross-neuron correlation of the binary network on the external stimulus, in the specific case of independent normally-distributed noise sources. Correlation is calculated between the neurons with indexes $i=0$ and $i=4$, for the values of the parameters reported in Table (1), $\boldsymbol{I}=I\,[1,1,1,1,1]^T$, $I\in[-14,14]$, and $\Psi=\mathrm{diag}(2,3,2,3,3)$. A) Correlation between neural activities. The red curve is calculated analytically from Eq. (4.9). B) Correlation between membrane potentials. The red curve is calculated according to Eq. (4.10). In both panels, the blue dots are calculated numerically through a Monte Carlo method over 10,000 repetitions, collected at $t=100$. Moreover, the vertical dashed lines correspond to the stimuli $I=12$ and $I=8.5$. We chose these values of the stimuli to show that low (respectively high) correlations between neural activities do not necessarily correspond to low (respectively high) correlations between the membrane potentials ($\mathrm{Corr}(A_0,A_4)\approx 0.99$ and $\mathrm{Corr}(V_0,V_4)\approx 0.02$ for $I=12$, while $\mathrm{Corr}(A_0,A_4)\approx 0.06$ and $\mathrm{Corr}(V_0,V_4)\approx 0.65$ for $I=8.5$).

    For networks of synchronously-updated binary neurons, such as those considered in this article, it is possible to derive explicit dynamical equations of the membrane potentials, and to show the existence of a dualism between them and the neural activity states. The membrane potentials and the activity states are intrinsically related, but have distinct dynamical aspects. On the other hand, networks of asynchronously-updated binary neurons (see e.g. [7]), as well as maximum-entropy models and information theoretic approaches (see e.g. [58,59]), do not model membrane dynamics, therefore they are not considered in this review.

    As we explained in section (2), the term $\sum_{j=0}^{N-1}J_{i,j}A_j(t)$ in Eq. (2.1) can be loosely interpreted as the weighted sum of postsynaptic potentials at neural dendrites. Therefore this term, plus the external stimulus (if any) to the $i$th neuron, can be interpreted as the total membrane potential $V_i$ of that neuron, namely:

    $$V_i(t+1)=\sum_{j=0}^{N-1}J_{i,j}A_j(t)+I_i+\sum_{j=0}^{N-1}\sigma^{\mathcal{N}}_{i,j}\mathcal{N}_j(t). \tag{4.1}$$

    Interestingly, we show that it is possible to exactly derive the set of equations satisfied by the membrane potentials, and that these equations provide a complementary description of the network dynamics with respect to Eq. (2.1). Under the change of variables Eq. (4.1), we observe that Eq. (2.1) can be transformed into the following set of equations for the membrane potentials:

    $$V_i(t+1)=\sum_{j=0}^{N-1}J_{i,j}H(V_j(t)-\theta_j)+I_i+\sum_{j=0}^{N-1}\sigma^{\mathcal{N}}_{i,j}\mathcal{N}_j(t),\quad i=0,\dots,N-1. \tag{4.2}$$

    Note also that Eqs. (2.1) and (4.1) imply $A_i(t)=H(V_i(t)-\theta_i)\in\{0,1\}$, and that the binary output of a threshold unit can be loosely interpreted as the firing rate of that neuron: $A_i(t)=0$ if the $i$th neuron is not firing at time $t$, and $A_i(t)=1$ if it is firing at unit rate.
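
    The dualism $A_i(t)=H(V_i(t)-\theta_i)$ can be checked numerically with the short sketch below, which drives Eqs. (2.1) and (4.2) with the same noise realizations; the parameter values are arbitrary and Gaussian noise is assumed only for this illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5
J = rng.normal(size=(N, N)); np.fill_diagonal(J, 0.0)
I = np.full(N, 0.5); theta = np.ones(N); Sigma = 0.2 * np.eye(N)

A = rng.integers(0, 2, size=N)
V = np.where(A == 1, theta + 0.1, theta - 0.1)   # any V(0) consistent with A(0) = H(V(0) - theta)
for t in range(200):
    noise = Sigma @ rng.standard_normal(N)       # the same realization drives both equations
    A = (J @ A + I + noise - theta >= 0).astype(int)        # neural activity, Eq. (2.1)
    V = J @ (V - theta >= 0).astype(int) + I + noise        # membrane potentials, Eq. (4.2)
    assert np.array_equal(A, (V - theta >= 0).astype(int))  # dualism: A(t) = H(V(t) - theta)
```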

    Since Eq. (2.1) does not depend on the variables $V_i(t)$, it can be solved without knowing the behavior of the membrane potentials. Solving the equations that govern the microscopic dynamics of the membrane potentials in rate models is typically difficult, and several authors opted either for calculating population-averaged potentials, or for deriving approximate solutions in the large-network-size limit (see e.g. [60,36]). For these reasons, the exact calculation of the microscopic dynamics of $V_i(t)$ from Eq. (4.2) has always been neglected in the literature. On the contrary, in [14] we introduced a technique for solving Eq. (4.2), and we found the exact joint probability distribution of the microscopic membrane potentials in the long-time limit, which is reported later in this subsection.

    We observe that the neural activities are discrete random variables, therefore they are described by probability mass functions (pmfs). We introduce the vector $\boldsymbol{A}=[A_0,\dots,A_{N-1}]^T$ containing the activities of all $N$ neurons at time $t+1$, and the vector $\boldsymbol{A}'$ of the activities of all neurons at time $t$. We also define:

    $$\Psi\overset{\mathrm{def}}{=}\begin{bmatrix}\sigma^{\mathcal{N}}_{0,0} & \cdots & \sigma^{\mathcal{N}}_{0,N-1}\\ \vdots & \ddots & \vdots\\ \sigma^{\mathcal{N}}_{N-1,0} & \cdots & \sigma^{\mathcal{N}}_{N-1,N-1}\end{bmatrix},\quad \boldsymbol{\theta}\overset{\mathrm{def}}{=}\begin{bmatrix}\theta_0\\ \vdots\\ \theta_{N-1}\end{bmatrix},\quad \boldsymbol{I}\overset{\mathrm{def}}{=}\begin{bmatrix}I_0\\ \vdots\\ I_{N-1}\end{bmatrix},\quad \boldsymbol{H}(\boldsymbol{x}-\boldsymbol{\theta})\overset{\mathrm{def}}{=}\begin{bmatrix}H(x_0-\theta_0)\\ \vdots\\ H(x_{N-1}-\theta_{N-1})\end{bmatrix}.$$

    $\Psi$ represents the matrix of the standard deviations of the noise sources, and it is invertible by hypothesis. $\boldsymbol{\theta}$ and $\boldsymbol{I}$ represent, respectively, the vectors of the firing thresholds and of the deterministic external stimuli, while $\boldsymbol{H}$ is the element-wise Heaviside activation function. Moreover, we introduce the matrix $C=[C_{i,j}]_{i,j=0,\dots,2^N-2}$, such that:

    $$C_{i,j}=\delta_{i,j}+G_{i,2^N-1}-G_{i,j}, \tag{4.3}$$

    where $\delta_{i,j}$ is the Kronecker delta, and:

    $$G_{i,j}\overset{\mathrm{def}}{=}\frac{1}{|\det(\Psi)|}\int_{\mathcal{V}^{(N)}_i}p_{\boldsymbol{\mathcal{N}}}\!\left(\Psi^{-1}\left[\boldsymbol{x}-J\boldsymbol{\mathcal{B}}^{(N)}_j-\boldsymbol{I}\right]\right)d\boldsymbol{x},\quad i,j=0,\dots,2^N-1, \tag{4.4}$$

    where we recall that $p_{\boldsymbol{\mathcal{N}}}$ is the joint probability distribution of the stochastic variables $\mathcal{N}_0,\dots,\mathcal{N}_{N-1}$. In Eq. (4.4), $\boldsymbol{\mathcal{B}}^{(N)}_i$ is the $N\times 1$ vector whose entries are the digits of the binary representation of the index $i$ (e.g. $\boldsymbol{\mathcal{B}}^{(5)}_{26}=[1,1,0,1,0]^T$). Moreover, the set $\mathcal{V}^{(N)}_i$ is defined as follows:

    $$\mathcal{V}^{(N)}_i\overset{\mathrm{def}}{=}\left\{\boldsymbol{x}\in\mathbb{R}^N : i=\mathfrak{D}\!\left(\boldsymbol{H}(\boldsymbol{x}-\boldsymbol{\theta})\right)\right\},$$

    where $\mathfrak{D}(\boldsymbol{\nu})$ is the decimal representation of the binary vector $\boldsymbol{\nu}$. For example, for $N=2$, we get:

    $$\begin{aligned}\mathcal{V}^{(2)}_0&=\{(x_0,x_1)\in\mathbb{R}^2 : x_0<\theta_0,\ x_1<\theta_1\}\\ \mathcal{V}^{(2)}_1&=\{(x_0,x_1)\in\mathbb{R}^2 : x_0<\theta_0,\ x_1\geq\theta_1\}\\ \mathcal{V}^{(2)}_2&=\{(x_0,x_1)\in\mathbb{R}^2 : x_0\geq\theta_0,\ x_1<\theta_1\}\\ \mathcal{V}^{(2)}_3&=\{(x_0,x_1)\in\mathbb{R}^2 : x_0\geq\theta_0,\ x_1\geq\theta_1\}\end{aligned}$$

    (note for example that, for any $(x_0,x_1)\in\mathcal{V}^{(2)}_3$, we get $[H(x_0-\theta_0)\ \ H(x_1-\theta_1)]^T=[1\ \ 1]^T$, whose decimal representation is 3). Finally, we define:

    $$\widetilde{\boldsymbol{\mathcal{H}}}=[\mathcal{H}_0,\dots,\mathcal{H}_{2^N-2}]^T\overset{\mathrm{def}}{=}C^{-1}\widetilde{\boldsymbol{G}},\qquad \widetilde{\boldsymbol{G}}\overset{\mathrm{def}}{=}[G_{i,2^N-1}]_{i=0,\dots,2^N-2},\qquad \mathcal{H}_{2^N-1}\overset{\mathrm{def}}{=}1-\sum_{j=0}^{2^N-2}\mathcal{H}_j.$$

    Then, in [14] we proved that the conditional probability distribution of $\boldsymbol{A}$ given $\boldsymbol{A}'$, and the stationary joint distribution of $\boldsymbol{A}$ in the limit $t\to+\infty$, are:

    $$P(\boldsymbol{A},t+1\,|\,\boldsymbol{A}',t)=\frac{1}{|\det(\Psi)|}\int_{\mathcal{V}^{(N)}_{\mathfrak{D}(\boldsymbol{A})}}p_{\boldsymbol{\mathcal{N}}}\!\left(\Psi^{-1}\left[\boldsymbol{x}-J\boldsymbol{A}'-\boldsymbol{I}\right]\right)d\boldsymbol{x} \tag{4.5}$$
    $$\lim_{t\to+\infty}P(\boldsymbol{A},t)=\frac{1}{|\det(\Psi)|}\sum_{j=0}^{2^N-1}\mathcal{H}_j\int_{\mathcal{V}^{(N)}_{\mathfrak{D}(\boldsymbol{A})}}p_{\boldsymbol{\mathcal{N}}}\!\left(\Psi^{-1}\left[\boldsymbol{x}-J\boldsymbol{\mathcal{B}}^{(N)}_j-\boldsymbol{I}\right]\right)d\boldsymbol{x}. \tag{4.6}$$

    Note that, according to the theory of Markov processes (see e.g. [8,9]), $\lim_{t\to+\infty}P(\boldsymbol{A},t)$ can alternatively be obtained as the eigenvector with eigenvalue 1 of the transition matrix $P(\boldsymbol{A},t+1\,|\,\boldsymbol{A}',t)$. However, the Markov formulation would obscure the dualism between the neural activity and the membrane potentials, as we will see in what follows. For this reason, Eq. (4.6) is a more convenient description of the joint probability distribution of the neural activity.
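
    As a numerical cross-check of Eq. (4.6) (feasible only for small $N$, since the state space has $2^N$ elements), the sketch below builds the transition matrix for independent Gaussian noise with diagonal $\Psi$ and extracts its unit-eigenvalue eigenvector; the factorized form of the transition probabilities used here is specific to that noise assumption.

```python
import numpy as np
from itertools import product
from scipy.stats import norm

def stationary_pmf(J, I, theta, sigma):
    """Stationary pmf of the neural activity for independent Gaussian noise (diagonal Psi),
    obtained as the unit-eigenvalue left eigenvector of the Markov transition matrix."""
    N = len(theta)
    states = np.array(list(product((0, 1), repeat=N)))      # all 2^N activity vectors
    P = np.zeros((2**N, 2**N))
    for a, A in enumerate(states):                          # A = state at time t
        p_fire = 1.0 - norm.cdf((theta - J @ A - I) / sigma)   # P(neuron i fires at t+1 | A)
        for b, B in enumerate(states):                      # B = state at time t+1
            P[a, b] = np.prod(np.where(B == 1, p_fire, 1.0 - p_fire))
    w, v = np.linalg.eig(P.T)                               # left eigenvectors of P
    pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    return states, pi / pi.sum()
```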

    On the other hand, the membrane potentials $V_i(t)$ are continuous variables, therefore they are described by probability density functions (pdfs). By introducing the vector $\boldsymbol{V}\overset{\mathrm{def}}{=}[V_0,\dots,V_{N-1}]^T$, which contains the membrane potentials of all $N$ neurons at time $t+1$, and the vector $\boldsymbol{V}'$ of the membrane potentials of all neurons at time $t$, the conditional probability distribution of $\boldsymbol{V}$ given $\boldsymbol{V}'$, and the stationary joint distribution of $\boldsymbol{V}$ in the limit $t\to+\infty$, are [14]:

    $$p(\boldsymbol{V},t+1\,|\,\boldsymbol{V}',t)=\frac{1}{|\det(\Psi)|}\,p_{\boldsymbol{\mathcal{N}}}\!\left(\Psi^{-1}\left[\boldsymbol{V}-J\boldsymbol{H}(\boldsymbol{V}'-\boldsymbol{\theta})-\boldsymbol{I}\right]\right) \tag{4.7}$$
    $$\lim_{t\to+\infty}p(\boldsymbol{V},t)=\frac{1}{|\det(\Psi)|}\sum_{j=0}^{2^N-1}\mathcal{H}_j\,p_{\boldsymbol{\mathcal{N}}}\!\left(\Psi^{-1}\left[\boldsymbol{V}-J\boldsymbol{\mathcal{B}}^{(N)}_j-\boldsymbol{I}\right]\right). \tag{4.8}$$

    Note that Eq. (4.8) can be obtained in two different, non-trivial ways: either from Eqs. (4.1) and (4.6), or by solving Eq. (4.2) in the long-time limit. In this work we followed the latter approach, whose details are reported in the supplemental information of [14]. There we also showed that $\mathcal{H}_j=\lim_{t\to+\infty}\int_{\mathcal{V}^{(N)}_j}p(\boldsymbol{V},t)\,d\boldsymbol{V}$, therefore the coefficient $\mathcal{H}_j$ in Eqs. (4.6) and (4.8) can be finally interpreted as the probability that the vector $\boldsymbol{A}(t)$ equals the binary representation of the index $j$ in the long-time limit (for example, in a network of size $N=5$, we get $\mathcal{H}_{26}=P(\boldsymbol{A}(t)=[1,1,0,1,0]^T)$ for $t\to+\infty$).

    By comparing Eqs. (4.5) and (4.6) with, respectively, Eqs. (4.7) and (4.8), we observe that there exists a close relationship between the neural activities $\boldsymbol{A}$ and the membrane potentials $\boldsymbol{V}$, which we further investigate in Subsection (4.2). It is also important to note that, generally, the integrals in Eqs. (4.4)-(4.6) can be calculated only numerically or through analytical approximations. However, in the specific case when the matrix $\Psi$ is diagonal, and the stochastic variables $\mathcal{N}_i(t)$ are independent, exact analytical solutions can be found in terms of the cumulative distribution functions of the noise sources (see [14]).

    In Figure (3) we show an example of the joint probability distributions $P(\boldsymbol{A},t)$ and $p(\boldsymbol{V},t)$ in the long-time limit, obtained for the network parameters that we reported in Table (1). In this figure we considered two distinct distributions of the noise sources (namely correlated normally-distributed variables $\mathcal{N}_i(t)$, as well as independent sources with Laplace distributions), and we showed how they differently shape the probability distributions of the neural activity and of the membrane potentials.

    The joint probability distributions $P(\boldsymbol{A},t)$ and $p(\boldsymbol{V},t)$ that we reported in the previous subsection provide a complete probabilistic description of the network in the limit $t\to+\infty$. In particular, these distributions can be used for calculating cross-neuron correlations, which represent a powerful tool for quantifying the exchange of information between neurons. In [14], we derived exact analytical expressions of the Pearson correlation coefficient for $t\to+\infty$, in the case when the matrix $\Psi$ is diagonal (i.e. $\Psi=\mathrm{diag}(\sigma^{\mathcal{N}}_0,\dots,\sigma^{\mathcal{N}}_{N-1})$, where $\sigma^{\mathcal{N}}_i\overset{\mathrm{def}}{=}\sigma^{\mathcal{N}}_{i,i}\ \forall i$), while the noise sources are independent and normally distributed. By applying Eq. (4.6), we found that the pairwise correlation between the neural activities of two neurons with indexes $i$ and $j$ (such that $i\neq j$) is:

    $$\begin{aligned}\mathrm{Corr}(A_i,A_j)&=\frac{\mathrm{Cov}(A_i,A_j)}{\sqrt{\mathrm{Var}(A_i)\mathrm{Var}(A_j)}}\\ \mathrm{Cov}(A_i,A_j)&=\frac{1}{4}\sum_{n=0}^{2^N-1}\mathcal{H}_n\left(1-2\overline{A}_i-E_{n,i}\right)\left(1-2\overline{A}_j-E_{n,j}\right)\\ \mathrm{Var}(A_i)&=\overline{A}_i-\left(\overline{A}_i\right)^2\\ \overline{A}_i&=\frac{1}{2}\left(1-\sum_{n=0}^{2^N-1}\mathcal{H}_n E_{n,i}\right)\\ E_{n,i}&=\mathrm{erf}\!\left(\frac{\theta_i-\sum_{m=0}^{N-1}J_{i,m}\mathcal{B}^{(N)}_{n,m}-I_i}{\sqrt{2}\,\sigma^{\mathcal{N}}_i}\right),\end{aligned} \tag{4.9}$$

    where $\mathcal{B}^{(N)}_{n,m}\overset{\mathrm{def}}{=}\left[\boldsymbol{\mathcal{B}}^{(N)}_n\right]_m$ is the $m$th entry of the $N\times 1$ vector $\boldsymbol{\mathcal{B}}^{(N)}_n$. In a similar way, from Eq. (4.8) we derived the following formula for the pairwise correlation between the membrane potentials:

    $$\begin{aligned}\mathrm{Corr}(V_i,V_j)&=\frac{\mathrm{Cov}(V_i,V_j)}{\sqrt{\mathrm{Var}(V_i)\mathrm{Var}(V_j)}}\\ \mathrm{Cov}(V_i,V_j)&=\sum_{n=0}^{2^N-1}\mathcal{H}_n\,\mathcal{R}^{(N)}_{n,i}\,\mathcal{R}^{(N)}_{n,j}\\ \mathrm{Var}(V_i)&=\left(\sigma^{\mathcal{N}}_i\right)^2+\sum_{n=0}^{2^N-1}\mathcal{H}_n\left(\mathcal{R}^{(N)}_{n,i}\right)^2\\ \mathcal{R}^{(N)}_{n,i}&\overset{\mathrm{def}}{=}\sum_{m=0}^{N-1}\left[\left(\mathcal{B}^{(N)}_{n,m}-\sum_{k=0}^{2^N-1}\mathcal{H}_k\,\mathcal{B}^{(N)}_{k,m}\right)J_{i,m}\right].\end{aligned} \tag{4.10}$$

    Eqs. (4.9) and (4.10) show that the cross-neuron correlations are generally very complex functions of the network parameters. However, these formulas simplify considerably in special limiting cases, such as the weak-noise or strong-stimulus limits, which are not discussed here. The interested reader is referred to the supplemental information of [14] for a rigorous derivation of these simplified formulas.
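
    A simple way to validate Eqs. (4.9) and (4.10) numerically (and the approach behind the blue dots of Figure (4)) is a Monte Carlo estimate over independent repetitions of the network dynamics; the sketch below assumes independent Gaussian noise with a vector `sigma` of per-neuron standard deviations, and the function name is introduced only for this example.

```python
import numpy as np

def mc_correlations(J, I, theta, sigma, reps=10000, t_end=100, seed=0):
    """Monte Carlo estimate of the stationary cross-neuron correlations: simulate Eqs. (2.1)
    and (4.1) over `reps` independent repetitions and correlate the samples at t = t_end."""
    rng = np.random.default_rng(seed)
    N = len(theta)
    A = rng.integers(0, 2, size=(reps, N))          # one random initial condition per repetition
    for t in range(t_end):
        noise = sigma * rng.standard_normal((reps, N))
        V = A @ J.T + I + noise                     # membrane potentials, Eq. (4.1)
        A = (V - theta >= 0).astype(int)            # neural activities, Eq. (2.1)
    corr_A = np.corrcoef(A, rowvar=False)           # estimates of Corr(A_i, A_j)
    corr_V = np.corrcoef(V, rowvar=False)           # estimates of Corr(V_i, V_j)
    return corr_A, corr_V
```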

    In Figure (4) we plotted some examples of cross-correlations, obtained for the network parameters that we reported in Table (1). This figure shows that variations of the external stimuli $\boldsymbol{I}$ switch the binary network between synchronous (i.e. highly correlated) and asynchronous (i.e. uncorrelated) states. Moreover, we observe that low (respectively high) correlations between neural activities do not necessarily correspond to low (respectively high) correlations between the membrane potentials. In other words, the linear relationship between the neural activity and the membrane potentials, as given by Eq. (4.1), is not reflected by the correlation structure of these variables. This result proves that, despite the similarity of the corresponding equations (which we already observed in Subsection (4.1), by comparing Eqs. (4.5) and (4.6) with, respectively, Eqs. (4.7) and (4.8)) and their linear relationship, the neural activity and the membrane potentials represent two distinct aspects of binary networks.

    The interested reader is referred to [14] for a detailed description of the conditions under which synchronous and asynchronous states occur in the network, and for the extension of Eqs. (4.9) and (4.10) to encompass higher-order (i.e. groupwise) correlations among an arbitrary number of neurons.

    In this section we consider the problem of storing $D$ sequences of neural activity vectors $\boldsymbol{A}^{(i,0)}\to\boldsymbol{A}^{(i,1)}\to\dots\to\boldsymbol{A}^{(i,T_i)}$, for some $T_i$ (which represents the number of transitions between activity vectors in the $i$th sequence), and $i=0,\dots,D-1$. In the context of content-addressable memories, one aims to determine a synaptic connectivity matrix $J$ that stores these sequences in the binary network, so that each sequence can be retrieved by initializing the network state to $\boldsymbol{A}(t=0)=\boldsymbol{A}^{(i,0)}$, even in the presence of noise. Any method for calculating such a connectivity matrix is typically called a learning rule.

    Each transition $\boldsymbol{A}^{(i,n_i)}\to\boldsymbol{A}^{(i,n_i+1)}$ in the sequences is noise-resistant whenever $P\left(\boldsymbol{A}^{(i,n_i+1)},t_{n_i+1}\,|\,\boldsymbol{A}^{(i,n_i)},t_{n_i}\right)\approx 1$, since under this condition the probability that the state $\boldsymbol{A}^{(i,n_i)}$ switches to a state other than $\boldsymbol{A}^{(i,n_i+1)}$ at the time instant $t_{n_i+1}$ is negligible. Therefore, according to Eq. (4.5), the sequences of neural activity can be stored in the network by solving the following set of equations:

    $$\int_{\mathcal{V}^{(N)}_{\mathfrak{D}(\boldsymbol{A})}}p_{\boldsymbol{\mathcal{N}}}\!\left(\Psi^{-1}\left[\boldsymbol{x}-J\boldsymbol{A}'-\boldsymbol{I}\right]\right)d\boldsymbol{x}\approx|\det(\Psi)|,\quad (\boldsymbol{A}',\boldsymbol{A})=\left(\boldsymbol{A}^{(i,n_i)},\boldsymbol{A}^{(i,n_i+1)}\right),\quad n_i=0,\dots,T_i-1,\quad i=0,\dots,D-1 \tag{4.11}$$

    with respect to the connectivity matrix J. Generally, these equations can be solved only numerically or through analytical approximations. However, in the specific case when the matrix Ψ is diagonal and the stochastic variables Ni(t) are independent, exact analytical solutions can be found.

    In [14], we considered the case of independent normally-distributed noise sources, and we found that Eq. (4.11) reduces to:

    $$\mathrm{erf}\!\left(\frac{\theta_j-\sum_{k=0}^{N-1}J_{j,k}A^{(i,n_i)}_k-I_j}{\sqrt{2}\,\sigma^{\mathcal{N}}_j}\right)\approx(-1)^{A^{(i,n_i+1)}_j}$$

    (where erf() is the error function), or in other words:

    $$\frac{\theta_j-\sum_{k=0}^{N-1}J_{j,k}A^{(i,n_i)}_k-I_j}{\sqrt{2}\,\sigma^{\mathcal{N}}_j}=(-1)^{A^{(i,n_i+1)}_j}K^{(i,n_i)}_j, \tag{4.12}$$

    for $n_i=0,\dots,T_i-1$, $i=0,\dots,D-1$ and for any sufficiently large and positive constant $K^{(i,n_i)}_j$. If the network is fully-connected without self-connections (so that $J_{i,i}=0$), Eq. (4.12) can be interpreted as the following $N$ systems of linear algebraic equations:

    $$\Omega^{(j)}\boldsymbol{J}^{(j)}=\boldsymbol{u}^{(j)},\quad j=0,\dots,N-1. \tag{4.13}$$

    In Eq. (4.13), $\boldsymbol{J}^{(j)}$ is the $(N-1)\times 1$ vector with entries $J_{j,k}$ for $k\neq j$. Moreover, if we define $\mathcal{T}\overset{\mathrm{def}}{=}\sum_{i=0}^{D-1}T_i$, $\mathcal{T}_i\overset{\mathrm{def}}{=}\sum_{k=1}^{i}T_{k-1}$ (for $i>0$) and $\mathcal{T}_0\overset{\mathrm{def}}{=}0$, then $\boldsymbol{u}^{(j)}$ is a $\mathcal{T}\times 1$ vector with entries:

    $$\left[\boldsymbol{u}^{(j)}\right]_{\mathcal{T}_i+n_i}=\theta_j-(-1)^{A^{(i,n_i+1)}_j}K^{(i,n_i)}_j\sqrt{2}\,\sigma^{\mathcal{N}}_j-I_j,\quad n_i=0,\dots,T_i-1,\quad i=0,\dots,D-1. \tag{4.14}$$

    Moreover, in Eq. (4.13), $\Omega^{(j)}$ is the $\mathcal{T}\times(N-1)$ matrix obtained by removing the $j$th column of the following matrix:

    $$\Omega=\begin{bmatrix}A^{(0,0)}_0 & \cdots & A^{(0,0)}_{N-1}\\ \vdots & \ddots & \vdots\\ A^{(0,T_0-1)}_0 & \cdots & A^{(0,T_0-1)}_{N-1}\\ \vdots & \ddots & \vdots\\ A^{(D-1,0)}_0 & \cdots & A^{(D-1,0)}_{N-1}\\ \vdots & \ddots & \vdots\\ A^{(D-1,T_{D-1}-1)}_0 & \cdots & A^{(D-1,T_{D-1}-1)}_{N-1}\end{bmatrix}.$$

    In particular, we observe that whenever $\boldsymbol{A}^{(i,0)}=\boldsymbol{A}^{(i,T_i)}$, the $i$th neural sequence is an oscillatory solution of Eq. (2.1) with period $T_i$, so that if the matrix $J$ is calculated by solving Eq. (4.13) and the network is initialized to any state of the oscillation, the network will cycle repeatedly through the same set of states. Moreover, in the special case $T_i=1$, the neural sequence represents a stationary solution of Eq. (2.1).
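
    A compact sketch of this learning rule is given below: it builds the matrix $\Omega$ from the source patterns of the transitions, fills the vectors $\boldsymbol{u}^{(j)}$ according to Eq. (4.14), and solves each system of Eq. (4.13) in the least-squares sense. Independent Gaussian noise, the absence of self-connections, and the use of `numpy.linalg.lstsq` are assumptions of this illustration rather than prescriptions of the original rule.

```python
import numpy as np

def learn_patterns(sequences, theta, I, sigma, K=10.0):
    """Learning rule of Eqs. (4.13)-(4.14): each element of `sequences` is a list of binary
    activity vectors A^(i,0) -> A^(i,1) -> ... to be stored. Returns a connectivity matrix J
    with zero diagonal. A stationary pattern p is stored by passing the sequence [p, p]."""
    N = len(theta)
    sources, targets = [], []
    for seq in sequences:
        for A_now, A_next in zip(seq[:-1], seq[1:]):
            sources.append(A_now)
            targets.append(A_next)
    Omega = np.array(sources, dtype=float)              # the T x N matrix Omega
    targets = np.array(targets)
    J = np.zeros((N, N))
    for j in range(N):
        # entries of u^(j), Eq. (4.14)
        u = theta[j] - ((-1.0) ** targets[:, j]) * K * np.sqrt(2.0) * sigma[j] - I[j]
        cols = [k for k in range(N) if k != j]          # drop the j-th column (J_jj = 0)
        J[j, cols] = np.linalg.lstsq(Omega[:, cols], u, rcond=None)[0]
    return J
```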

    In Figure (5), we show some examples of storage of stationary patterns and oscillatory sequences with $T=3$. This figure is obtained for the network parameters reported in Table (2), and proves that the learning rule Eq. (4.13) can be used to safely store sequences of neural activity also in very noisy networks. To conclude, we observe that generally the entries of the matrices $J$ that store these patterns in very noisy networks have large absolute values (see Table (2)). This is a consequence of the fact that $J_{ij}\propto\sigma^{\mathcal{N}}$ for $\sigma^{\mathcal{N}}\gg 1$, according to Eqs. (4.13) and (4.14). Large entries of the connectivity matrix ensure that the membrane potentials fluctuate, under noise perturbations, far away from the firing thresholds, so that undesired transitions between the activity states are unlikely. This mechanism protects the sequences of activity patterns from the disruptive effects of noise fluctuations, allowing their storage also in highly noisy media such as the cortical tissue.

    Figure 5.  Storage of activity patterns and neural sequences. This figure shows examples of three distinct sets of activity patterns and neural sequences (highlighted in red), stored in a stochastic binary network driven by independent and normally distributed noise sources, for $N=4$, $\boldsymbol{I}=[0,0,0,0]^T$, and $\Psi=\sigma^{\mathcal{N}}\,\mathrm{diag}(1,1,1,1)$. The color gradation of the blue arrows is proportional to the magnitude of $P\left(\boldsymbol{A}^{(i,n_i+1)},t_{n_i+1}\,|\,\boldsymbol{A}^{(i,n_i)},t_{n_i}\right)$ (see Eq. (4.5)), so that the arrows are white for every pair of states $\left(\boldsymbol{A}^{(i,n_i)},\boldsymbol{A}^{(i,n_i+1)}\right)$ such that $P\left(\boldsymbol{A}^{(i,n_i+1)},t_{n_i+1}\,|\,\boldsymbol{A}^{(i,n_i)},t_{n_i}\right)\approx 0$, while they are blue if the conditional probability is close to 1. The entries of the matrix $J$ are calculated from Eq. (4.13) for $\sigma^{\mathcal{N}}=50$ (see the highlighted central panels) and $K^{(i,n_i)}_j=10\ \forall i,j,n_i$, in order to store 3 stationary states (panel B), an oscillation with period $T=3$ (panel E), and a stationary state plus an oscillation with period $T=2$ (panel H). The matrices $\Omega$ and the corresponding solutions $J$ are reported in Table (2). Then the stability of the stored patterns under noise perturbations is tested for $\sigma^{\mathcal{N}}=0.1$ (panels A, D, G) and $\sigma^{\mathcal{N}}=1000$ (panels C, F, I), without changing the matrix $J$. Note that the stationary states and the oscillations are noise-resistant, despite the presence of strong noise sources ($\sigma^{\mathcal{N}}=50$). Moreover, in the case of weak noise sources ($\sigma^{\mathcal{N}}=0.1$), new stable patterns may appear, while for extreme noise intensity ($\sigma^{\mathcal{N}}=1000$) the stored patterns become unstable, and the network is fully random.
    Table 2.  Network parameters 2. This table reports the matrices Ω and the corresponding matrices J, that we used for plotting Figure (5).
    Panel A Panel B Panel C
    $\Omega=\begin{bmatrix}1&1&0&1\\0&1&1&1\\1&0&1&1\end{bmatrix}$ $\qquad$ $\Omega=\begin{bmatrix}1&0&1&1\\1&1&0&1\\1&0&1&0\end{bmatrix}$ $\qquad$ $\Omega=\begin{bmatrix}0&1&0&1\\1&1&1&0\\1&0&0&1\end{bmatrix}$
    J[0141414142122141401414212214141414021223543543540] J[07087080212001414141470814140141417077070] J[00706708141402122708106035403547087067060]


    The results reported in Sections (3) and (4) are valid for binary neural networks with arbitrary topology of the synaptic connections (which does not evolve over time). For this reason, they can be applied to networks with regular connectivity matrices $J$, as well as to random networks with frozen synaptic weights (see e.g. the network parameters reported in Table (1)). In other words, the connectivity matrix of random networks can be interpreted as a single realization of the synaptic wiring among neurons, generated according to some known probability distribution $p_J$. These models are said to present quenched disorder [21,22,61].

    Each realization of the connectivity matrix, generated according to the distribution $p_J$, usually produces a distinct matrix $J$, which in turn gives rise to distinct dynamical properties of the neural activity. In particular, each realization typically produces distinct bifurcation diagrams. For this reason, in order to obtain statistically representative results, one needs to average the coordinates of the bifurcation points over the variability of the matrix $J$. More generally, one would be interested in determining the probability distribution of the bifurcation points over the matrix $J$.

    In [16] we derived semi-analytical expressions of these probability distributions. For simplicity, we focused on the bifurcation points of the stationary states in the zero-noise limit $\sigma^{\mathcal{N}}_{i,j}\to 0\ \forall i,j$ (the possibility to extend our approach to the study of neural oscillations is discussed briefly in Subsection (6.3.4)). In [16] we supposed that the entries $J_{i,j}$ of the connectivity matrix can be decomposed as the product of a synaptic weight, $W_{i,j}$ (which represents the random strength of the interaction of the $j$th neuron on the $i$th neuron), with another random variable, $T_{i,j}$ (which represents either the presence, for $T_{i,j}=1$, or the absence, for $T_{i,j}=0$, of a synaptic connection from the $j$th to the $i$th neuron). The random variable $W_{i,j}$ is supposed to be continuous, and distributed according to some distribution $p_{W_{i,j}}$. On the other hand, the variable $T_{i,j}$ is discrete, and such that $T_{i,j}=1$ with probability $P_{i,j}\in[0,1]$, while $T_{i,j}=0$ with probability $1-P_{i,j}$. We also supposed that the variables $\{W_{i,j},T_{i,j}\}_{i,j=0,\dots,N-1}$ are statistically independent. Because of these assumptions, the random variable $\mathfrak{I}_i$ (as given by Eq. (3.1); note that the superscript $(j)$ can be omitted in the case of the stationary states studied in this section) is distributed as follows:

    $$\begin{aligned}p_{\mathfrak{I}_i}(x)&=a_i(\theta_i-x)+b_i\,\delta(\theta_i-x)\\ a_i(x)&=\sum_{\mathcal{S}\in\mathscr{P}(\mathcal{R})\setminus\{\emptyset\}}\left[\prod_{j\in\mathcal{S}}P_{i,j}\right]\left[\prod_{j\in\mathcal{R}\setminus\mathcal{S}}(1-P_{i,j})\right]\left[\Big(\mathop{\ast}_{j\in\mathcal{S}}p_{W_{i,j}}\Big)(x)\right]\\ b_i&=\prod_{j\in\mathcal{R}}(1-P_{i,j}),\end{aligned}$$

    where $\mathscr{P}(\mathcal{R})$ represents the power set of $\mathcal{R}\overset{\mathrm{def}}{=}\{i\in\{0,\dots,N-1\}:A_i=1\}$, and $\ast$ denotes the convolution of the densities $p_{W_{i,j}}$ over the indexes in $\mathcal{S}$. Moreover, we call $F_{\mathfrak{I}_i}$ the cumulative distribution function of $\mathfrak{I}_i$. Then, the coordinates of the bifurcation points, $\Lambda_\alpha$ and $\Xi_\alpha$ (see again Eq. (3.1)), are distributed as follows:

    $$p_X(x)=p_{X^c}(x)+\sum_{q\in\mathcal{D}}\left[F_X(x_q)-\lim_{x\to x_q^-}F_X(x)\right]\delta(x-x_q), \tag{5.1}$$

    for $X\in\{\Lambda_\alpha,\Xi_\alpha\}$. In Eq. (5.1), $\delta(\cdot)$ is the Dirac delta function, $p_{X^c}$ is the component of $p_X$ that describes the statistical behavior of the continuous values of $X$, and $F_X$ is the cumulative distribution function of $X$. Since, given the activity vector $\boldsymbol{A}$, the coordinates $\Lambda_\alpha$ and $\Xi_\alpha$ are, respectively, the maximum and minimum of conditionally independent variables $\mathfrak{I}_i$ (see Eq. (3.1)), they must be distributed according to order statistics [62,63,64,65]. By calling $\mathrm{per}(\cdot)$ the matrix permanent (which is an analog of a matrix determinant, but with all the signs in the Laplace expansion by cofactors taken as positive), in [16] we proved that $X^c$ is distributed as follows:

    $$\begin{aligned}p_{\Lambda^c_\alpha}(x)&=\frac{1}{(\gamma_{\alpha,1}-1)!}\,\mathrm{per}\!\left(\left[\boldsymbol{a}_{\alpha,1}(\boldsymbol{\theta}-x),\ \boldsymbol{F}^{(\gamma_{\alpha,1}-1)}_{\alpha,1}(x)\right]\right)\\ p_{\Xi^c_\alpha}(x)&=\frac{1}{(\gamma_{\alpha,0}-1)!}\,\mathrm{per}\!\left(\left[\boldsymbol{a}_{\alpha,0}(\boldsymbol{\theta}-x),\ \boldsymbol{I}_{\gamma_{\alpha,0},\gamma_{\alpha,0}-1}-\boldsymbol{F}^{(\gamma_{\alpha,0}-1)}_{\alpha,0}(x)\right]\right),\end{aligned} \tag{5.2}$$

    where $\gamma_{\alpha,u}\overset{\mathrm{def}}{=}|\Gamma_{I_\alpha,u}|$, while $\left[\boldsymbol{a}_{\alpha,1}(\boldsymbol{\theta}-x),\ \boldsymbol{F}^{(\gamma_{\alpha,1}-1)}_{\alpha,1}(x)\right]$ and $\left[\boldsymbol{a}_{\alpha,0}(\boldsymbol{\theta}-x),\ \boldsymbol{I}_{\gamma_{\alpha,0},\gamma_{\alpha,0}-1}-\boldsymbol{F}^{(\gamma_{\alpha,0}-1)}_{\alpha,0}(x)\right]$ are $\gamma_{\alpha,1}\times\gamma_{\alpha,1}$ and $\gamma_{\alpha,0}\times\gamma_{\alpha,0}$ matrices respectively, $\boldsymbol{a}_{\alpha,u}(\boldsymbol{\theta}-x)\overset{\mathrm{def}}{=}[a_i(\theta_i-x)]_{i\in\Gamma_{I_\alpha,u}}$ and $\boldsymbol{F}_{\alpha,u}(x)\overset{\mathrm{def}}{=}[F_{\mathfrak{I}_i}(x)]_{i\in\Gamma_{I_\alpha,u}}$ are $\gamma_{\alpha,u}\times 1$ column vectors, $\boldsymbol{F}^{(v)}_{\alpha,u}(x)\overset{\mathrm{def}}{=}[\underbrace{\boldsymbol{F}_{\alpha,u}(x),\dots,\boldsymbol{F}_{\alpha,u}(x)}_{v\ \text{times}}]$ is a $\gamma_{\alpha,u}\times v$ matrix, and $\boldsymbol{I}_{\gamma_{\alpha,0},\gamma_{\alpha,0}-1}$ is the $\gamma_{\alpha,0}\times(\gamma_{\alpha,0}-1)$ all-ones matrix. Moreover, in Eq. (5.1), $\{x_q\}_{q\in\mathcal{D}}$ represents the set of the discrete values of $X$, at which the cumulative distribution function $F_X$, namely:

    $$\begin{aligned}F_{\Lambda_\alpha}(x)&=\frac{1}{\gamma_{\alpha,1}!}\,\mathrm{per}\!\left(\left[\boldsymbol{F}^{(\gamma_{\alpha,1})}_{\alpha,1}(x)\right]\right)\\ F_{\Xi_\alpha}(x)&=\sum_{n=1}^{\gamma_{\alpha,0}}\frac{1}{n!\,(\gamma_{\alpha,0}-n)!}\,\mathrm{per}\!\left(\left[\boldsymbol{F}^{(n)}_{\alpha,0}(x),\ \boldsymbol{I}_{\gamma_{\alpha,0},\gamma_{\alpha,0}-n}-\boldsymbol{F}^{(\gamma_{\alpha,0}-n)}_{\alpha,0}(x)\right]\right),\end{aligned} \tag{5.3}$$

    is (possibly) discontinuous. Note that $\mathcal{D}=\Gamma_{I_\alpha,1}$ and $\mathcal{D}=\Gamma_{I_\alpha,0}$, for $\Lambda_\alpha$ and $\Xi_\alpha$ respectively, while $x_q=\theta_q$.

    The mean multistability diagram of the network is the plot of the bifurcation points $\Lambda_\alpha$ and $\Xi_\alpha$, averaged over the realizations of the synaptic connectivity matrix $J$. In other words, the mean bifurcation points $\langle\Lambda_\alpha\rangle$ and $\langle\Xi_\alpha\rangle$ (where the brackets represent the mean over the realizations) correspond to the values of the stimulus $I_\alpha$ at which a given neural activity state $\boldsymbol{A}$ loses its stability on average, turning into another stationary state or an oscillation. In [16] we proved that:

    $$\langle X\rangle=\int_{-\infty}^{+\infty}x\,p_{X^c}(x)\,dx+\sum_{q\in\mathcal{D}}x_q\left[F_X(x_q)-\lim_{x\to x_q^-}F_X(x)\right]=\int_{-\infty}^{+\infty}\left[H(x)-F_X(x)\right]dx, \tag{5.4}$$

    where the functions $p_{X^c}(x)$ and $F_X(x)$ are given, respectively, by Eqs. (5.2) and (5.3).

    The probability that a given activity state $\boldsymbol{A}$ is stationary for a fixed combination of stimuli $\hat{\boldsymbol{I}}=[\hat{I}_0,\dots,\hat{I}_{P-1}]^T$ corresponds to the probability that $\hat{\boldsymbol{I}}\in\mathcal{V}$, where the coordinates of the hyperrectangle $\mathcal{V}$ (i.e. the bifurcation points where $\boldsymbol{A}$ becomes unstable) are random variables, whose density functions are calculated from Eq. (3.1). In [16] we proved that $P(\hat{\boldsymbol{I}}\in\mathcal{V})$ can be calculated from the cumulative distribution of the bifurcation points as follows:

    $$P(\hat{\boldsymbol{I}}\in\mathcal{V})=\prod_{\alpha=0}^{P-1}P(\hat{I}_\alpha\in\mathcal{V}_\alpha),\qquad P(\hat{I}_\alpha\in\mathcal{V}_\alpha)=\begin{cases}1-F_{\Xi_\alpha}(\hat{I}_\alpha), & \text{if }\Gamma_{I_\alpha,1}=\emptyset\\ F_{\Lambda_\alpha}(\hat{I}_\alpha), & \text{if }\Gamma_{I_\alpha,0}=\emptyset\\ F_{\Lambda_\alpha}(\hat{I}_\alpha)\left[1-F_{\Xi_\alpha}(\hat{I}_\alpha)\right], & \text{otherwise.}\end{cases} \tag{5.5}$$

    Moreover, the probability to observe the state $\boldsymbol{A}$ in the whole multistability diagram of a single realization of the matrix $J$ (i.e. the probability that $\boldsymbol{A}$ is stationary, regardless of the specific combination of stimuli) corresponds to the probability that the hyperrectangle $\mathcal{V}$ has positive hypervolume $\mathrm{vol}(\mathcal{V})$. In [16] we proved that, according to Eq. (3.1), this probability has the following expression:

    $$P(\mathrm{vol}(\mathcal{V})>0)=\prod_{\alpha=0}^{P-1}\left[\int_{-\infty}^{+\infty}p_{\Xi^c_\alpha}(x)\,F_{\Lambda_\alpha}(x)\,dx+\sum_{q\in\Gamma_{I_\alpha,0}}\left[F_{\Xi_\alpha}(\theta_q)-\lim_{x\to\theta_q^-}F_{\Xi_\alpha}(x)\right]F_{\Lambda_\alpha}(\theta_q)\right]. \tag{5.6}$$

    Eqs. (5.1)-(5.6) provide a complete description of the statistical properties of the stationary states in networks with quenched disorder. It is also important to note that these equations are semi-analytical, since they are expressed in terms of 1D integrals containing the distribution $p_{W_{i,j}}$. These integrals may be calculated exactly for some $p_{W_{i,j}}$, for example in the case of normally-distributed weights. However, for simplicity, in this review and in [16], they are calculated through numerical integration schemes, because fully-analytical expressions may be very cumbersome.
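
    The semi-analytical formulas above can also be cross-checked by a Monte Carlo estimate over the quenched disorder, as done for the blue dots and bars of Figure (7); the sketch below samples connectivity matrices $J=W\circ T$ and applies Eq. (3.1) to each realization, and it assumes, purely for illustration, a single shift $S$ and width $W$ shared by all connections of the power-law distribution Eq. (5.7).

```python
import numpy as np

def powerlaw_sampler(S, Wd):
    """Inverse-transform sampler for Eq. (5.7): the CDF is ((x - S)/Wd)^3 on [S, S + Wd]."""
    return lambda rng, shape: S + Wd * rng.random(shape) ** (1.0 / 3.0)

def mc_bifurcation_samples(A, group, P_conn, weight_sampler, theta, reps=10000, seed=0):
    """Monte Carlo samples of the bifurcation point Lambda of a given activity state A:
    draw quenched matrices J = W * T and evaluate Eq. (3.1) (stationary case) for each one.
    The empirical CDF of the returned samples approximates F_Lambda of Eq. (5.3)."""
    rng = np.random.default_rng(seed)
    N = len(theta)
    active = [i for i in group if A[i] == 1]     # neurons of the group that are active in A
    samples = np.empty(reps)
    for r in range(reps):
        T = rng.random((N, N)) < P_conn          # T_ij = 1 with probability P_ij
        W = weight_sampler(rng, (N, N))          # synaptic weights W_ij
        J = W * T
        frak_I = theta - J @ A
        samples[r] = max(frak_I[i] for i in active)   # Lambda = max over the active neurons
    return samples
```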

    In Figures (6) and (7) we show an example of these results for a specific distribution of the connectivity matrix. We consider a network composed of 3 excitatory neurons (with indexes $i=0,1,2$) and 2 inhibitory neurons ($i=3,4$). The excitatory and inhibitory neurons receive, respectively, external stimuli $I_E$ and $I_I$. We also assume that the synaptic weights $W_{i,j}$ are distributed according to the following power-law distribution:

    $$p_{W_{i,j}}(x)=\begin{cases}\dfrac{3}{W}\left(\dfrac{x-S}{W}\right)^2, & \text{if } S\leq x\leq S+W\\[1mm] 0, & \text{otherwise,}\end{cases} \tag{5.7}$$
    Figure 6.  Probability distribution of the synaptic connections. This figure reports the probability distribution of the network topology and of the synaptic weights, given the values of the network parameters reported in Table (3). A) Probability distribution of the network topology, namely the graph of the matrix $[P_{i,j}]_{i,j=0,\dots,N-1}$. The $N=5$ nodes in the graph represent the neurons in the network, while the arrow from the $j$th to the $i$th neuron represents the probability to observe the $j\to i$ synaptic connection in a single realization of the matrix $J$ (note that the color gradation is proportional to $P_{i,j}$). B) Examples of the power-law probability distributions of the synaptic weights $W_{i,j}$, see Eq. (5.7).
    Table 3.  Network parameters 3. This table contains the values of the parameters that we used for plotting Figures (6) and (7). The symbol $\times$ in the matrices $S$ and $W$ means that the probability distributions of the stationary states and of the bifurcation points are not affected by those parameters, since the corresponding synaptic connections are absent ($P_{i,j}=0$).
    $P=\begin{bmatrix}0.2&0.1&0&1&0\\0&0.2&0.7&0.3&0\\0.5&0&0&0.8&0.2\\0&0&0.3&0.8&1\\1&0&0.8&0.4&0\end{bmatrix}$, $\qquad\boldsymbol{\theta}=\begin{bmatrix}0\\3\\2\\1\\2\end{bmatrix}$
    $S=\begin{bmatrix}5&5&\times&1&\times\\\times&3&1&2&\times\\3&\times&\times&3&4\\\times&\times&2&1&1\\5&\times&1&3&\times\end{bmatrix}$, $\qquad W=\begin{bmatrix}1&2&\times&1&\times\\\times&1&1&2&\times\\2&\times&\times&1&3\\\times&\times&2&1&1\\1&\times&1&2&\times\end{bmatrix}$

    Figure 7.  Stationary behavior of a binary network with quenched disorder. This figure reports the probabilistic properties of the stationary states of a binary network with quenched disorder. The matrix $J$ is generated randomly from the power-law distribution Eq. (5.7), and for the network parameters in Table (3) (see also Figure (6)). A) Cumulative distribution function $F_{\Lambda_E}(x)$ of the activity state $\boldsymbol{A}=[1,0,1,1,0]^T$. The red curve is derived semi-analytically from Eq. (5.3), while the blue dots are calculated numerically through a Monte Carlo method over 10,000 repetitions of the synaptic connectivity matrix. B) Mean multistability diagram of the network, obtained semi-analytically from Eq. (5.4). C) - D) Occurrence probability of the stationary states, obtained for the fixed stimuli $[I_E\ \ I_I]^T=\hat{\boldsymbol{I}}=[1\ \ 0]^T$ (panel C), and regardless of the stimuli (panel D). The red bars in panel C (respectively, panel D) are derived semi-analytically from Eq. (5.5) (respectively, Eq. (5.6)), while the blue bars are calculated numerically through a Monte Carlo method (see [16] for more details).

    where $S$ and $W$ represent, respectively, the horizontal shift and the width of the support of the distribution. To conclude, in Table (3) we reported the values of the parameters $P$, $\boldsymbol{\theta}$, $S$ and $W$ that we chose for this network. Figure (6) reports the graph of the matrix $[P_{i,j}]_{i,j=0,\dots,N-1}$ and some examples of the power-law distribution of the synaptic weights $W_{i,j}$ (see Eq. (5.7)). Moreover, in Figure (7) we show the mean multistability diagram of the network, as well as the occurrence probability of the activity states for fixed stimuli (i.e. $I_E=1$ and $I_I=0$) and regardless of the stimuli (see, respectively, Eqs. (5.4), (5.5) and (5.6)).

    New mathematical techniques for analytically investigating finite-size and small-size neural network models are invaluable theoretical tools for studying the brain at its multiple scales of spatial organization, that complement the already existing mean-field approaches. Studying how the complexity and dynamics of neuronal network activity change with the network size is of fundamental importance to understand why networks in the brain appear organized at multiple spatial scales. In this article, we reviewed the effort we made in this direction, trying to fill the gap in the current neuro-mathematical literature. In the following, we discuss strengths and weaknesses of the approach we developed so far, the implications of our work for specific issues related to bifurcation dynamics and learning, and for future progress in the understanding of the function of networks in the brain.

    An effective tool for studying spin networks in physics is their energy (Hamiltonian) function. In order to study the low-temperature physical properties of the network at thermodynamic equilibrium, one is often interested in finding the global (and possibly degenerate) minimum-energy state of the network. This is known as the ground state of the system, and it can be calculated by minimizing the energy function over the space of all possible spin configurations at zero temperature. This is an optimization problem which, for networks on non-planar or three- or higher-dimensional lattices, has been proven to be NP-hard [66].

    In the study of discrete-time binary neural networks, energy (or, more generally, Lyapunov) functions are typically known only for asynchronously updated neurons with symmetric synaptic connections [7]. For neural networks with asymmetric connectivity matrices and/or synchronous update, like the one we considered in this review, the search for the ground state(s) turns into the more general problem of determining the long-time (non-equilibrium) states in the zero-noise limit. It is important to observe that this problem is even more formidable than the search for the ground states, due to the intractable number of oscillatory sequences that may be observed during the network dynamics. We are not aware of any algorithm that efficiently performs this highly demanding combinatorial analysis in synchronously-updated networks with asymmetric connections. The algorithm that we introduced in [15] represents an attempt to tackle this problem, in the specific case of networks with sparse synaptic connections.
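    The brute-force version of this combinatorial analysis is easy to state, even though it scales as $2^N$: follow every possible initial condition under the deterministic synchronous update until the trajectory revisits a state, and collect the resulting fixed points and oscillations. The sketch below does this for a toy model with a Heaviside-type update rule $A(t+1)=\Theta(JA(t)-\theta)$; this rule and the random parameters are illustrative assumptions and may differ in detail from the equations of the reviewed model.

```python
import itertools
import numpy as np

def update(state, J, theta):
    """Synchronous, deterministic update: a_i <- 1 if sum_j J_ij a_j >= theta_i, else 0."""
    return tuple((J @ np.array(state) >= theta).astype(int))

def attractors(J, theta):
    """Enumerate all 2^N initial states; each trajectory is followed until it revisits a
    state, and the repeated segment is a fixed point (period 1) or an oscillation."""
    N = J.shape[0]
    found = set()
    for init in itertools.product([0, 1], repeat=N):
        seen, state = {}, init
        while state not in seen:
            seen[state] = len(seen)
            state = update(state, J, theta)
        cycle = [s for s, idx in sorted(seen.items(), key=lambda kv: kv[1])][seen[state]:]
        # canonical rotation so that identical cycles are counted once
        k = cycle.index(min(cycle))
        found.add(tuple(cycle[k:] + cycle[:k]))
    return found

# Example: a small asymmetric network
rng = np.random.default_rng(1)
J = rng.normal(size=(5, 5))
theta = np.zeros(5)
for cyc in attractors(J, theta):
    print(f"period {len(cyc)}:", cyc)
```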

    Once the set of all the possible stationary and oscillatory solutions has been evaluated, Eq. (3.1) provides a fast, analytical way to calculate the bifurcation diagram of the network. On the other hand, the bifurcation analysis of networks composed of neurons with graded output typically requires numerical continuation techniques [45], which do not provide any analytical insight into the mechanisms underlying the changes of dynamics.

    The analytical results reported in Subsections (4.1) and (4.2) provide a complete description of the probabilistic behavior of the neural activity and of the membrane potentials in the long-time regime of single network realizations. These results provide new qualitative insights into the mechanisms underlying stochastic neuronal dynamics, which hold for any network size, and therefore are not restricted to small networks only.

    However, the main drawback of Eqs. (4.6) and (4.8) (and, as a consequence, also of Eqs. (4.9) and (4.10)) lies in the quantitative evaluation of the probability distributions and of the cross-correlations. This requires the calculation of a number of coefficients $H_i$ that increases exponentially with the network size, and therefore proves intractable already for networks composed of a few tens of neurons. A more efficient estimation of these quantities for large networks can be performed numerically through Monte Carlo methods.
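    A minimal sketch of such a Monte Carlo estimate is given below: the long-time distribution over activity states is approximated by directly simulating a stochastic binary network and counting the visited states. The sigmoidal firing probability $P(a_i=1)=\sigma((\sum_j J_{ij}a_j-\theta_i)/T)$, the noise parameter T and all numerical values are illustrative assumptions rather than the exact update rule of the reviewed model.

```python
import numpy as np
from collections import Counter

def simulate_stationary(J, theta, T=0.5, steps=20000, burn_in=2000, seed=0):
    """Estimate the long-time distribution over activity states by direct simulation.
    Each neuron fires with probability sigmoid((J a - theta)/T), updated synchronously."""
    rng = np.random.default_rng(seed)
    N = J.shape[0]
    a = rng.integers(0, 2, size=N)
    counts = Counter()
    for t in range(steps):
        p = 1.0 / (1.0 + np.exp(-(J @ a - theta) / T))   # firing probabilities
        a = (rng.random(N) < p).astype(int)
        if t >= burn_in:
            counts[tuple(a)] += 1
    total = sum(counts.values())
    return {state: c / total for state, c in counts.items()}

# Example: print the five most frequently visited activity states
rng = np.random.default_rng(2)
J = rng.normal(size=(5, 5)); theta = np.zeros(5)
dist = simulate_stationary(J, theta)
for state, p in sorted(dist.items(), key=lambda kv: -kv[1])[:5]:
    print(state, round(p, 3))
```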

    An alternative approach we developed to deal with single realizations of large-size networks is the derivation of the corresponding Sznitman mean-field equations. In [14] we focused on the specific case of fully-connected multi-population networks driven by independent sources of noise, and we found that, under this assumption, the neurons become independent in the thermodynamic limit. This phenomenon simplifies considerably the complexity of the network equations, and it allowed us to perform a detailed bifurcation analysis of the model.

    In order to provide a complete description of the stationary behavior of networks with quenched disorder, the calculation of the probability distributions of the bifurcation points across network realizations, which we reported in section (5) (see Eqs. (5.1)-(5.4)), as well as the calculation of the occurrence probability of the stationary states (Eqs. (5.5) and (5.6)), must be performed for every stationary state of the model. Unfortunately, these states are not known a priori, therefore the calculation of the probability distributions must be repeated for all the $2^N$ combinations of the neural activity states. A possible solution to this problem, in the specific case of sparse networks, is represented by the sparse-efficient algorithm that we introduced in [15]. This algorithm allows a fast evaluation of the stationary solutions of the network, so that the calculation of the probability distributions can be performed only on the actual stationary states detected by the sparse-efficient algorithm.

    Similarly to the case of single realizations of large networks that we discussed above, an alternative approach for studying large networks with quenched disorder is the derivation of asymptotic formulas. In [16] we focused specifically on the case of arbitrarily sparse networks composed of statistically-homogeneous populations, which are often considered to be a good approximation of biological circuits, see e.g. [67,61,68]. In particular, by applying the Fisher-Tippett-Gnedenko theorem, we found that the coordinates of the bifurcation points $\Lambda_\alpha$ and $\Xi_\alpha$ in large networks are distributed according to the Gumbel (double-exponential) probability distribution.
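    The content of the Fisher-Tippett-Gnedenko theorem is easy to check numerically: suitably centered maxima of many independent samples converge to the Gumbel law. The sketch below illustrates this with exponentially distributed samples, whose maxima centered by $\ln n$ are a textbook example of the Gumbel domain of attraction; it reproduces only the general extreme-value mechanism, not the specific derivation for the bifurcation points carried out in [16].

```python
import numpy as np

# Fisher-Tippett-Gnedenko illustration: the maximum of n i.i.d. Exp(1) variables,
# centered by ln(n), converges in distribution to the standard Gumbel law.
rng = np.random.default_rng(3)
n, reps = 500, 10000
maxima = rng.exponential(size=(reps, n)).max(axis=1) - np.log(n)

xs = np.linspace(-2, 6, 9)
empirical = [(maxima <= x).mean() for x in xs]
gumbel_cdf = np.exp(-np.exp(-xs))            # standard Gumbel CDF
for x, e, g in zip(xs, empirical, gumbel_cdf):
    print(f"x={x:+.1f}  empirical={e:.3f}  Gumbel={g:.3f}")
```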

    To conclude, another disadvantage of Eqs. (5.1)-(5.6) lies in the numerical calculation of the matrix permanent, which is computationally demanding. The fastest known technique for calculating the permanent of arbitrary matrices is the Balasubramanian-Bax-Franklin-Glynn (BBFG) formula [69,70,71,72], which has complexity $O(2^{N-1}N^2)$. In order to alleviate this computational bottleneck, in [16] we derived a closed-form analytical expression of the permanent of uniform block matrices, which proved much faster than the BBFG formula. This solution allowed us to speed up considerably the calculation of the bifurcation structure of statistically-homogeneous multi-population networks with quenched disorder.
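    For reference, a direct transcription of the BBFG (Glynn) formula is sketched below: it sums $2^{N-1}$ terms of cost $O(N^2)$ each, which is exactly the bottleneck discussed above, and it is not the faster block-matrix expression derived in [16].

```python
import itertools
import numpy as np

def permanent_bbfg(A):
    """Permanent via the Balasubramanian-Bax-Franklin-Glynn formula:
       perm(A) = 2^(1-N) * sum_delta (prod_k delta_k) * prod_j (sum_i delta_i A[i,j]),
       where delta ranges over {-1,+1}^N with the first component fixed to +1."""
    A = np.asarray(A, dtype=float)
    N = A.shape[0]
    total = 0.0
    for tail in itertools.product([1.0, -1.0], repeat=N - 1):
        delta = np.array((1.0,) + tail)
        total += np.prod(delta) * np.prod(delta @ A)   # delta @ A gives the signed column sums
    return total / 2 ** (N - 1)

# Sanity check against the permutation-sum definition on a small random matrix
A = np.random.default_rng(4).random((4, 4))
brute = sum(np.prod([A[i, p[i]] for i in range(4)]) for p in itertools.permutations(range(4)))
print(permanent_bbfg(A), brute)
```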

    The algorithm we introduced in Subsection (4.3) for storing desired sequences of neural activity was obtained in [14] by manipulating the conditional probability distribution of the neural activity (see Eq. (4.5)). This distribution does not depend on the coefficients $H_i$, therefore the learning rule has polynomial complexity. For this reason, its applicability is not restricted to small networks only.

    Another interesting property of this learning rule is the possibility of storing sequences of neural activity also in noisy networks. Typically, noise can break a neural sequence if the stochastic fluctuations are sufficiently strong. However, our algorithm is designed to be noise-resistant, namely the probability of breaking the sequence under the influence of noise can be made arbitrarily small.

    In the literature, several learning rules have been proposed for networks of binary neurons. A mechanism for storing and retrieving static patterns of neural activity in networks with symmetric connectivity was proposed by Hopfield [7]. The storage of sequences of temporally evolving patterns was investigated by Sompolinsky, Kanter and Kleinfeld [73,74] for networks with non-instantaneous synaptic transmission between neurons. In [75], Dehaene et al. introduced an alternative approach based on temporally evolving synapses. Then, Buhmann and Schulten [76] showed that asymmetric connectivity and noise are sufficient conditions for storing and retrieving temporal sequences, without further assumptions on the biophysical properties of the synaptic connections.

    It is important to observe that the learning rule introduced in Subsection (4.3) does not require transmission delays, time-dependent synaptic strengths, or the presence of noise. Only asymmetric synaptic connections are required. This is compatible with experimental observations, in that the vast majority of synapses in real biological networks are asymmetric [77]. Our result can be considered as an extension to stochastic networks of the associative learning rule for deterministic models introduced by Personnaz et al. in [78].
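    To make the deterministic baseline concrete, the sketch below stores a cyclic sequence of ±1 patterns with a pseudo-inverse (projection) rule in the spirit of Personnaz et al. [78]: the generally asymmetric weight matrix maps each pattern onto its successor in one synchronous step, with no delays, noise, or time-dependent synapses. The specific rule, pattern format and parameters shown here are illustrative assumptions and should not be read as the noise-resistant stochastic rule of Subsection (4.3).

```python
import numpy as np

rng = np.random.default_rng(5)
N, P = 20, 4
patterns = rng.choice([-1.0, 1.0], size=(P, N))          # a cyclic sequence of P patterns

# Pseudo-inverse (projection) rule: choose J such that sign(J xi^mu) = xi^(mu+1)
X = patterns.T                                            # columns are the stored patterns
Y = np.roll(patterns, -1, axis=0).T                       # columns are their successors
J = Y @ np.linalg.pinv(X)                                 # generally asymmetric weight matrix

# Deterministic synchronous retrieval of the whole sequence from its first pattern
state = patterns[0].copy()
for step in range(P):
    state = np.sign(J @ state)
    assert np.array_equal(state, patterns[(step + 1) % P])
print("sequence retrieved without errors")
```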

    Unlike previous mathematical techniques introduced for studying finite-size effects in neural network models, our analytical results are exact for arbitrary network sizes (including small ones), in that they do not rely on any mathematical approximation. In contrast to linear response theories [39,40,41,42], our approach works far away from stationary states and can be applied also to the study of neural oscillations. Compared to the method introduced in [43], which represents a mean-field description for the mesoscopic dynamics of interacting populations of integrate-and-fire neurons, our technique does not rely on any mean-field approximation and it can be applied to networks of arbitrary size. Moreover, while the cumulant expansion introduced in [44] for networks of binary neurons yields an approximate analytical description of correlations between the activities of individual pairs of units, which goes beyond averaged pairwise correlations, our approach calculates exact group-wise correlations among an arbitrary number of neurons (see [14]). An alternative technique for the calculation of the correlation structure in finite-size networks of binary neurons was introduced in [79]. This approach assumes that the fluctuations of neuronal activity are sufficiently small, and it can be applied only to asynchronous and irregular network states. In [80] the authors describe neuronal dynamics by means of a spike-response model with escape noise, and they perform a linear-noise approximation of the equations which allows them to investigate the theoretical properties of finite-size activity fluctuations in the asynchronous regime. We observe that, unlike [79,80], our approach can describe both asynchronous and synchronous states with arbitrary levels of correlations between neurons. In [81] the authors consider a network composed of phase neurons, and they develop a perturbation theory in the inverse system size $N^{-1}$. Consequently, this approach is expected to be inaccurate for small network sizes, while our technique provides exact results also for arbitrarily small networks. The approach introduced in [82] applies to large but finite networks of sparsely connected rate-based neurons. Unlike this technique, our results are not restricted to sparse networks only, and they provide exact formulas for neural activity fluctuations also in highly dense networks.

    To conclude, it is important to observe that, while our approach to the study of finite-size effects in recurrent neural networks presents several advantages compared to previous work, as we discussed above, our network equations describe binary firing rates that evolve in discrete-time steps. On the other hand, most of the techniques cited above deal with continuous-time equations of spiking neurons, which are a more realistic description of biological networks. Moreover, our derivation of the exact solutions of the binary equations comes with a cost, namely an exponential increase in the complexity of our results with the network size. For this reason, our technique has a higher computational load compared to other methods, as we discussed in more detail in Subsection (6.1.2).

    While in [15] we proposed an efficient solution for performing the bifurcation analysis of binary networks with sparse connectivity, fast algorithms for dense networks proved more difficult to develop. In [15] we showed that, in the specific case of homogeneous networks with regular topologies, it is possible to take advantage of the symmetries of the network equations to speed up the calculation of the bifurcation diagram. This observation allowed us to introduce an algorithm which runs in linear time with respect to the network size. However, the development of efficient algorithms for dense networks with arbitrary topology of the synaptic connections represents a much more difficult challenge, and it still remains an open problem to be addressed in future work.

    Another open problem is the bifurcation analysis of large and medium-size networks. Since computers' processing power increases over time, the size of the networks that can be studied through combinatorial approaches, such as the one we introduced in [15], is expected to increase accordingly. For this reason, we believe that these methods are key to the future development of computational neuroscience and of the physics of complex systems. Beyond brute-force processing power, other techniques can be developed for accelerating combinatorial algorithms; in particular, the algorithm that we developed in [15] lends itself to parallelization over several processors, as sketched below. It is also important to note that our algorithm calculates exact bifurcation diagrams, while the calculation of approximate bifurcation diagrams, through a heuristic search of the oscillatory and stationary solutions of the network equations, would prove much faster.
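    The sketch below illustrates the kind of parallelization meant here: the $2^N$ initial conditions of a brute-force attractor search are simply split across worker processes. The toy Heaviside update rule, the use of Python's multiprocessing module and all parameter values are illustrative assumptions, not the actual implementation of the algorithm in [15].

```python
import numpy as np
from multiprocessing import Pool

N = 5
rng = np.random.default_rng(6)
J = rng.normal(size=(N, N))
theta = np.zeros(N)

def attractor_of(init_index):
    """Follow the trajectory started from the binary expansion of init_index and
    return the attractor (cycle) it falls onto, in a canonical rotation."""
    state = tuple((init_index >> k) & 1 for k in range(N))
    seen = {}
    while state not in seen:
        seen[state] = len(seen)
        state = tuple((J @ np.array(state) >= theta).astype(int))
    cycle = [s for s, idx in sorted(seen.items(), key=lambda kv: kv[1])][seen[state]:]
    k = cycle.index(min(cycle))
    return tuple(cycle[k:] + cycle[:k])

if __name__ == "__main__":
    # Distribute the 2^N independent trajectories over several worker processes
    with Pool(processes=4) as pool:
        cycles = set(pool.map(attractor_of, range(2 ** N)))
    print(f"{len(cycles)} distinct attractors found")
```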

    In section (3) we calculated the bifurcation points of the network in the zero-noise limit. An interesting extension of our formalism that deserves further investigation is the study of bifurcations in noisy networks. Note that in Subsection (4.3) we studied the conditions under which the stored sequences of activity patterns are disrupted by noise. For this reason, it is natural to speculate that a theory of bifurcations in noisy networks of binary units must be closely related to the theory we developed in that subsection.

    In [83] the authors introduced a technique for quantifying the average number of oscillations in asymmetric fully-connected binary networks with quenched disorder. Their approach can be applied to networks in the large-size limit, provided the systems have no autapses (i.e. self-connections) and no external input currents. It is important to observe that in principle the techniques developed in [16] for studying the stationary states in networks with quenched disorder can be extended to neural oscillations, by applying order statistics twice to Eq. (3.1). This approach would provide a theoretical estimation of the average number of oscillations which, unlike that derived in [83], is not restricted to large fully-connected networks.

    To conclude, an important open problem in the study of networks with quenched disorder concerns models with correlated synaptic weights. In section (5), we calculated the probability distribution of the bifurcation points of the stationary states through the results derived in [62,63,64,65] for the order statistics of a set of independent random variables. On the other hand, it is known that synaptic weights in real cortical circuits are correlated, as a consequence, for example, of synaptic plasticity mechanisms. However, a generalization of order statistics to sets of arbitrarily correlated random variables is still out of reach. More generally, the analytical investigation of networks with correlated synaptic weights has proven a formidable problem in mathematical neuroscience, which has also challenged the theories of large-size systems [84,85]. Because of its biological relevance, the presence of synaptic correlations represents an important ingredient in the study of neural network dynamics, which needs to be addressed in future research to increase the biological plausibility of the models.

    Being able to understand analytically the dynamics of finite-size networks from the circuit's equations is potentially important for improving our understanding of how neural circuits work and of how and why their function is impaired in certain neural disorders. The pattern and power of the external inputs, the pattern of anatomical connectivity, the synaptic strength, and the relative firing rate of excitatory and inhibitory neurons are all key elements in determining the functional organization and output of a circuit. Yet, it is not known how exactly these factors combine to produce brain functions and to cause dysfunctions. Mathematical work toward understanding neural networks of arbitrary size could be useful to address two questions relevant to these issues.

    The first question regards the relationship between anatomy, functional coupling and population coding in local neural circuits. Our own work (e.g. [86,87,88,89]), and that of many others [90,91,92,93,94,95], has shown that both the dynamics of individual neurons and of populations, and the functional coupling between cells, are crucial for shaping information in population codes and for behavior. Functional coupling between cells may arise because of anatomical connectivity between neurons, but also because of other factors such as common inputs. As a result, the relationship between functional coupling, the circuit's anatomy, and population-level information coding has remained largely unaddressed. However, recent advances in experimental techniques allow the simultaneous functional imaging of the activity of several neurons in mice during sensory or cognitive tasks, as well as the post-mortem measurement by electron microscopy of the anatomical connectivity of the same set of neurons that were functionally imaged in vivo [96]. The mathematical work reviewed here develops a set of tools that could be used to complement the measurement of anatomy and physiology from the same circuits, and help bridge the gap between these two measures. Our tools, when coupled with modern experimental techniques such as those described above, could be used, in particular, to understand the consequences of specific patterns of recurrent anatomical connectivity on population coding and circuit dynamics, and then to test these theoretical relationships on real data.

    A second possible direction of relevance for neuroscience of our work is to use these tools to understand the neural origin of certain brain disorders. For example, it is thought that Autism Spectrum Disorders (ASD) result, at least in part, from abnormal changes in the functional organization and dynamics of neural circuits [97,98,99,100]. However, although many changes in parameters such as the strength of synaptic connections and/or the firing properties of certain classes of neurons have been observed in ASD, it is still unclear how different elementary changes in neural parameters combine to change the circuit's function. Being able to test directly, and to understand mathematically at a deep level, how the changes of such basic neural properties affect the circuit's dynamics at different spatial scales, including scales that involve finite-size neural networks, could be useful to understand the origin and consequences of aberrant circuit function in ASD conditions, as well as in other neural disorders.

    This research was in part supported by the Simons Foundation (SFARI, Explorer Grant No. 602849) and by the BRAIN Initiative (Grant No. R01 NS108410). The funders had no role in study design, data collection and analysis, decision to publish, interpretation of results, or preparation of the manuscript.

    The authors declare no conflict of interest.



    [1] W. W. Lytton, From computer to brain:Foundations of computational neuroscience, Springer-Verlag New York, 2002, URL https://doi.org/10.1007/b98859.
    [2] B. Ermentrout, Neural networks as spatio-temporal pattern-forming systems, Rep. Prog. Phys., 61 (1998), 353-430, URL http://doi.org/10.1088/0034-4885/61/4/002.
    [3] E. M. Izhikevich, Dynamical systems in neuroscience, MIT Press, 2007, URL https://mitpress.mit.edu/books/dynamical-systems-neuroscience.
    [4] P. Ashwin, S. Coombes and R. Nicks, Mathematical frameworks for oscillatory network dynamics in neuroscience, J. Math. Neurosci., 6 (2016), 2, URL https://doi.org/10.1186/s13408-015-0033-6.
    [5] S. I. Amari, Learning patterns and pattern sequences by self-organizing nets of threshold elements, IEEE Trans. Comput., C-21 (1972), 1197-1206, URL https://doi.org/10.1109/T-C.1972.223477.
    [6] W. A. Little, The existence of persistent states in the brain, Math. Biosci., 19 (1974), 101-120, URL https://doi.org/10.1016/0025-5564(74)90031-5.
    [7] J. J. Hopfield, Neural networks and physical systems with emergent collective computational abilities, Proc. Natl. Acad. Sci. U.S.A., 79 (1982), 2554-2558, URL http://dx.doi.org/10.1073/pnas.79.8.2554.
    [8] A. C. C. Coolen, Chapter 14, Statistical mechanics of recurrent neural networks I:Statics, vol. 4 of Handbook of Biological Physics, North-Holland, 2001, 553-618, URL https://doi.org/10.1016/S1383-8121(01)80017-8.
    [9] A. C. C. Coolen, Chapter 15, Statistical mechanics of recurrent neural networks II:Dynamics, vol. 4 of Handbook of Biological Physics, North-Holland, 2001, 619-684, URL https://doi.org/10.1016/S1383-8121(01)80018-X.
    [10] D. Kernell, The adaptation and the relation between discharge frequency and current strength of cat lumbosacral motoneurones stimulated by long-lasting injected currents, Acta Physiol. Scand., 65 (1965), 65-73, URL https://doi.org/10.1111/j.1748-1716.1965.tb04250.x.
    [11] D. Kernell, High-frequency repetitive firing of cat lumbosacral motoneurones stimulated by longlasting injected currents, Acta Physiol. Scand., 65 (1965), 74-86, URL https://doi.org/10.1111/j.1748-1716.1965.tb04251.x.
    [12] P. Peretto, Collective properties of neural networks:A statistical physics approach, Biol. Cybern., 50 (1984), 51-62, URL https://doi.org/10.1007/BF00317939.
    [13] I. Ginzburg and H. Sompolinsky, Theory of correlations in stochastic neural networks, Phys. Rev. E, 50 (1994), 3171-3191, URL https://doi.org/10.1103/PhysRevE.50.3171.
    [14] D. Fasoli, A. Cattani and S. Panzeri, Pattern storage, bifurcations and groupwise correlation structure of an exactly solvable asymmetric neural network model, Neural Comput., 30 (2018), 1258-1295, URL https://doi.org/10.1162/NECO_a_01069.
    [15] D. Fasoli and S. Panzeri, Optimized brute-force algorithms for the bifurcation analysis of a binary neural network model, Phys. Rev. E, 99 (2019), 012316, URL https://doi.org/10.1103/PhysRevE.99.012316.
    [16] D. Fasoli and S. Panzeri, Stationary-state statistics of a binary neural network model with quenched disorder, Entropy, 21 (2019), 630, URL https://doi.org/10.3390/e21070630.
    [17] D. J. Amit, Modeling brain function:The world of attractor neural networks, Cambridge University Press, 1989, URL https://doi.org/10.1017/CBO9780511623257.
    [18] S. I. Amari, Dynamics of pattern formation in lateral-inhibition type neural fields, Biol. Cybern., 27 (1977), 77-87, URL https://doi.org/10.1007/BF00337259.
    [19] C. van Vreeswijk and H. Sompolinsky, Chaos in neuronal networks with balanced excitatory and inhibitory activity, Science, 274 (1996), 1724-1726, URL https://doi.org/10.1126/science.274.5293.1724.
    [20] S. I. Amari, Homogeneous nets of neuron-like elements, Biol. Cybern., 17 (1975), 211-220, URL https://doi.org/10.1007/BF00339367.
    [21] D. Sherrington and S. Kirkpatrick, Solvable model of a spin-glass, Phys. Rev. Lett., 35 (1976), 1792-1796, URL https://doi.org/10.1103/PhysRevLett.35.1792.
    [22] S. Kirkpatrick and D. Sherrington, Infinite-ranged models of spin-glasses, Phys. Rev. B, 17 (1978), 4384-4403, URL https://doi.org/10.1103/PhysRevB.17.4384.
    [23] M. Mézard, N. Sourlas, G. Toulouse, et al., Replica symmetry breaking and the nature of the spin glass phase, J. Phys., 45 (1984), 843-854, URL https://doi.org/10.1051/jphys:01984004505084300.
    [24] M. Mézard, G. Parisi and M. Virasoro, Spin glass theory and beyond:An introduction to the replica method and its applications, World Scientific Singapore, 1986, URL http://www.worldscientific.com/worldscibooks/10.1142/0271.
    [25] R. J. Glauber, Time dependent statistics of the Ising model, J. Math. Phys., 4 (1963), 294-307, URL https://doi.org/10.1063/1.1703954.
    [26] A. C. C. Coolen and D. Sherrington, Dynamics of fully connected attractor neural networks near saturation, Phys. Rev. Lett., 71 (1993), 3886-3889, URL https://doi.org/10.1103/PhysRevLett.71.3886.
    [27] R. W. Williams and K. Herrup, The control of neuron number, Ann. Rev. Neurosci., 11 (1988), 423-453, URL https://doi.org/10.1146/annurev.ne.11.030188.002231.
    [28] V. B. Mountcastle, The columnar organization of the neocortex, Brain, 120 (1997), 701-722, URL https://doi.org/10.1093/brain/120.4.701.
    [29] M. Helmstaedter, C. P. de Kock, D. Feldmeyer, et al., Reconstruction of an average cortical column in silico, Brain Res. Rev., 55 (2007), 193-203, URL https://doi.org/10.1016/j.brainresrev.2007.07.011.
    [30] H. S. Meyer, V. C. Wimmer, M. Oberlaender, et al., Number and laminar distribution of neurons in a thalamocortical projection column of rat vibrissal cortex, Cereb. Cortex, 20 (2010), 2277-2286, URL https://doi.org/10.1093/cercor/bhq067.
    [31] M. De Luca, C. F. Beckmann, N. De Stefano, et al., fMRI resting state networks define distinct modes of long-distance interactions in the human brain, NeuroImage, 29 (2006), 1359-1367, URL https://doi.org/10.1016/j.neuroimage.2005.08.035.
    [32] D. Mantini, M. G. Perrucci, C. Del Gratta, et al., Electrophysiological signatures of resting state networks in the human brain, Proc. Natl. Acad. Sci. USA, 104 (2007), 13170-13175, URL https://doi.org/10.1073/pnas.0700668104.
    [33] R. D. Beer, On the dynamics of small continuous-time recurrent neural networks, Adapt. Behav., 3 (1995), 469-509, URL https://doi.org/10.1177/105971239500300405.
    [34] F. Pasemann, Complex dynamics and the structure of small neural networks, Network-Comp. Neural, 13 (2002), 195-216, URL http://doi.org/10.1080/net.13.2.195.216.
    [35] D. Fasoli, A. Cattani and S. Panzeri, The complexity of dynamics in small neural circuits, PLoS Comput. Biol., 12 (2016), e1004992, URL https://doi.org/10.1371/journal.pcbi.1004992.
    [36] C. van Vreeswijk and H. Sompolinsky, Chaotic balanced state in a model of cortical circuits, Neural Comput., 10 (1998), 1321-1371, URL https://doi.org/10.1162/089976698300017214.
    [37] B. Cessac, Increase in complexity in random neural networks, J. Phys. I France, 5 (1995), 409-432, URL http://doi.org/10.1051/jp1:1995135.
    [38] A. Renart, J. De La Rocha, P. Bartho, et al., The asynchronous state in cortical circuits, Science, 327 (2010), 587-590, URL https://doi.org/10.1126/science.1179850.
    [39] V. Pernice, B. Staude, S. Cardanobile, et al., Recurrent interactions in spiking networks with arbitrary topology, Phys. Rev. E, 85 (2012), 031916, URL https://doi.org/10.1103/PhysRevE.85.031916.
    [40] J. Trousdale, Y. Hu, E. Shea-Brown, et al., Impact of network structure and cellular response on spike time correlations, PLoS Comput. Biol., 8 (2012), e1002408, URL https://doi.org/10.1371/journal.pcbi.1002408.
    [41] T. Tetzlaff, M. Helias, G. T. Einevoll, et al., Decorrelation of neural-network activity by inhibitory feedback, PLoS Comput. Biol., 8 (2012), e1002596, URL https://doi.org/10.1371/journal.pcbi.1002596.
    [42] M. Helias, T. Tetzlaff and M. Diesmann, Echoes in correlated neural systems, New J. Phys., 15 (2013), 023002, URL https://doi.org/10.1088/1367-2630/15/2/023002.
    [43] T. Schwalger, M. Deger and W. Gerstner, Towards a theory of cortical columns:From spiking neurons to interacting neural populations of finite size, PLoS Comput. Biol., 13 (2017), e1005507, URL https://doi.org/10.1371/journal.pcbi.1005507.
    [44] D. Dahmen, H. Bos and M. Helias, Correlated fluctuations in strongly coupled binary networks beyond equilibrium, Phys. Rev. X, 6 (2016), 031024, URL https://doi.org/10.1103/PhysRevX.6.031024.
    [45] Y. A. Kuznetsov, Elements of applied bifurcation theory, vol. 112, Springer-Verlag New York, 1998, URL https://doi.org/10.1007/978-1-4757-3978-7.
    [46] J. R. L. De Almeida and D. J. Thouless, Stability of the Sherrington-Kirkpatrick solution of a spin glass model, J. Phys. A:Math. Gen., 11 (1978), 983-990, URL https://doi.org/10.1088/0305-4470/11/5/028.
    [47] R. I. Leine, D. H. Van Campen and B. L. Van De Vrande, Bifurcations in nonlinear discontinuous systems, Nonlinear Dyn., 23 (2000), 105-164, URL https://doi.org/10.1023/A:1008384928636.
    [48] J. Awrejcewicz and C. H. Lamarque, Bifurcation and chaos in nonsmooth mechanical systems, World Scientific, 2003, URL http://doi.org/10.1142/5342.
    [49] R. I. Leine and D. H. Van Campen, Bifurcation phenomena in non-smooth dynamical systems, Eur. J. Mech. A-Solid, 25 (2006), 595-616, URL https://doi.org/10.1016/j.euromechsol.2006.04.004.
    [50] O. Makarenkov and J. S. W. Lamb, Dynamics and bifurcations of nonsmooth systems:A survey, Physica D, 241 (2012), 1826-1844, URL https://doi.org/10.1016/j.physd.2012.08.002.
    [51] J. Harris and B. Ermentrout, Bifurcations in the Wilson-Cowan equations with nonsmooth firing rate, SIAM J. Appl. Dyn. Syst., 14 (2015), 43-72, URL https://doi.org/10.1137/140977953.
    [52] S. Parui and S. Banerjee, Border collision bifurcations at the change of state-space dimension, Chaos, 12 (2002), 1054-1069, URL https://doi.org/10.1063/1.1521390.
    [53] B. Cessac and T. Viéville, On dynamics of integrate-and-fire neural networks with conductance based synapses, Front. Comput. Neurosci., 2 (2008), 2, URL https://doi.org/10.3389/neuro.10.002.2008.
    [54] V. Avrutin, M. Schanz and S. Banerjee, Multi-parametric bifurcations in a piecewise-linear discontinuous map, Nonlinearity, 19 (2006), 1875-1906, URL https://doi.org/10.1088/0951-7715/19/8/007.
    [55] R. Kötter, Neuroscience databases:A practical guide, Springer US, 2003, URL http://doi.org/10.1007/978-1-4615-1079-6.
    [56] T. C. Potjans and M. Diesmann, The cell-type specific cortical microcircuit:Relating structure and activity in a full-scale spiking network model, Cereb. Cortex, 24 (2014), 785-806, URL https://doi.org/10.1093/cercor/bhs358.
    [57] C. Boucsein, M. Nawrot, P. Schnepel, et al., Beyond the cortical column:Abundance and physiology of horizontal connections imply a strong role for inputs from the surround, Front. Neurosci., 5 (2011), 32, URL https://doi.org/10.3389/fnins.2011.00032.
    [58] S. E. Palmer, O. Marre, M. J. Berry, et al., Predictive information in a sensory population, Proc. Natl. Acad. Sci. USA, 112 (2015), 6908-6913, URL https://doi.org/10.1073/pnas.1506855112.
    [59] V. Rostami, P. Porta Mana, S. Grün, et al., Bistability, non-ergodicity, and inhibition in pairwise maximum-entropy models, PLoS Comput. Biol., 13 (2017), e1005762, URL https://doi.org/10.1371/journal.pcbi.1005762.
    [60] S. I. Amari, Characteristics of random nets of analog neuron-like elements, IEEE Trans. Syst. Man Cybern., SMC-2 (1972), 643-657, URL https://doi.org/10.1109/TSMC.1972.4309193.
    [61] G. Hermann and J. Touboul, Heterogeneous connections induce oscillations in large-scale networks, Phys. Rev. Lett., 109 (2012), 018702, URL https://doi.org/10.1103/PhysRevLett.109.018702.
    [62] R. J. Vaughan and W. N. Venables, Permanent expressions for order statistic densities, J. R. Stat. Soc. Ser. B, 34 (1972), 308-310, URL http://www.jstor.org/stable/2985190.
    [63] R. B. Bapat and M. I. Beg, Order statistics for nonidentically distributed variables and permanents, Sankhyā Ser. A, 51 (1989), 79-93, URL http://www.jstor.org/stable/25050725.
    [64] R. B. Bapat, Permanents in probability and statistics, Linear Algebra Appl., 127 (1990), 3-25, URL https://doi.org/10.1016/0024-3795(90)90332-7.
    [65] S. Hande, A note on order statistics for nonidentically distributed variables, Sankhyā Ser. A, 56 (1994), 365-368, URL http://www.jstor.org/stable/25050995.
    [66] C. H. Papadimitriou and K. Steiglitz, Combinatorial optimization:Algorithms and complexity, Prentice Hall, 1982.
    [67] O. Faugeras, J. Touboul and B. Cessac, A constructive mean-field analysis of multi-population neural networks with random synaptic weights and stochastic inputs, Front. Comput. Neurosci., 3 (2009), 1, URL https://doi.org/10.3389/neuro.10.001.2009.
    [68] T. Cabana and J. Touboul, Large deviations, dynamics and phase transitions in large stochastic and disordered neural networks, J. Stat. Phys., 153 (2013), 211-269, URL https://doi.org/10.1007/s10955-013-0818-5.
    [69] K. Balasubramanian, Combinatorics and diagonals of matrices, PhD thesis, Indian Statistical Institute, 1980, URL http://library.isical.ac.in:8080/jspui/handle/10263/3603.
    [70] E. Bax and J. Franklin, A finite-difference sieve to compute the permanent, Technical Report CalTech-CS-TR-96-04, 1996.
    [71] E. Bax, Finite-difference algorithms for counting problems, PhD thesis, California Institute of Technology, 1998, URL https://thesis.library.caltech.edu/223/.
    [72] D. G. Glynn, The permanent of a square matrix, Eur. J. Combin., 31 (2010), 1887-1891, URL https://doi.org/10.1016/j.ejc.2010.01.010.
    [73] H. Sompolinsky and I. Kanter, Temporal association in asymmetric neural networks, Phys. Rev. Lett., 57 (1986), 2861-2864, URL http://dx.doi.org/10.1103/PhysRevLett.57.2861.
    [74] D. Kleinfeld, Sequential state generation by model neural networks, Proc. Natl. Acad. Sci. U.S.A., 83 (1986), 9469-9473, URL http://dx.doi.org/10.1073/pnas.83.24.9469.
    [75] S. Dehaene, J. P. Changeux and J. P. Nadal, Neural networks that learn temporal sequences by selection, Proc. Natl. Acad. Sci. U.S.A., 84 (1987), 2727-2731, URL https://doi.org/10.1073/pnas.84.9.2727.
    [76] J. Buhmann and K. Schulten, Noise-driven temporal association in neural networks, Europhys. Lett., 4 (1987), 1205, URL http://dx.doi.org/10.1209/0295-5075/4/10/021.
    [77] J. DeFelipe, P. Marco, I. Busturia, et al., Estimation of the number of synapses in the cerebral cortex:Methodological considerations, Cereb. Cortex, 9 (1999), 722, URL http://dx.doi.org/10.1093/cercor/9.7.722.
    [78] L. Personnaz, I. Guyon and G. Dreyfus, Collective computational properties of neural networks:New learning mechanisms, Phys. Rev. A, 34 (1986), 4217-4228, URL http://dx.doi.org/10.1103/PhysRevA.34.4217.
    [79] M. Helias, T. Tetzlaff and M. Diesmann, The correlation structure of local neuronal networks intrinsically results from recurrent dynamics, PLoS Comput. Biol., 10 (2014), e1003428, URL https://doi.org/10.1371/journal.pcbi.1003428.
    [80] G. Dumont, A. Payeur and A. Longtin, A stochastic-field description of finite-size spiking neural networks, PLoS Comput. Biol., 13 (2017), e1005691, URL https://doi.org/10.1371/journal.pcbi.1005691.
    [81] M. A. Buice and C. C. Chow, Generalized activity equations for spiking neural network dynamics, Front. Comput. Neurosci., 7 (2013), 162, URL https://doi.org/10.3389/fncom.2013.00162.
    [82] V. Klinshov and I. Franović, Mean-field dynamics of a random neural network with noise, Phys. Rev. E, 92 (2015), 062813, URL https://doi.org/10.1103/PhysRevE.92.062813.
    [83] S. Hwang, V. Folli, E. Lanza, et al., On the number of limit cycles in asymmetric neural networks, J. Stat. Mech.:Theory Exp., 2019 (2019), 053402, URL https://doi.org/10.1088/1742-5468/ab11e3.
    [84] O. Faugeras and J. MacLaurin, Asymptotic description of neural networks with correlated synaptic weights, Entropy, 17 (2015), 4701-4743, URL https://doi.org/10.3390/e17074701.
    [85] D. Martí, N. Brunel and S. Ostojic, Correlations between synapses in pairs of neurons slow down dynamics in randomly connected neural networks, Phys. Rev. E, 97 (2018), 062314, URL https://doi.org/10.1103/PhysRevE.97.062314.
    [86] S. Panzeri, S. R. Schultz, A. Treves, et al., Correlations and the encoding of information in the nervous system, Proc. Biol. Sci., 266 (1999), 1001-1012, URL https://doi.org/10.1098/rspb.1999.0736.
    [87] S. Panzeri, J. H. Macke, J. Gross, et al., Neural population coding:Combining insights from microscopic and mass signals, Trends Cogn. Sci., 19 (2015), 162-172, URL https://doi.org/10.1016/j.tics.2015.01.002.
    [88] Y. Zuo, H. Safaai, G. Notaro, et al., Complementary contributions of spike timing and spike rate to perceptual decisions in rat S1 and S2 cortex, Curr. Biol., 25 (2015), 357-363, URL https://doi.org/10.1016/j.cub.2014.11.065.
    [89] C. A. Runyan, E. Piasini, S. Panzeri, et al., Distinct timescales of population coding across cortex, Nature, 548 (2017), 92-96, URL https://doi.org/10.1038/nature23020.
    [90] E. Zohary, M. N. Shadlen and W. T. Newsome, Correlated neuronal discharge rate and its implications for psychophysical performance, Nature, 370 (1994), 140-143, URL https://doi.org/10.1038/370140a0.
    [91] W. Singer, Neuronal synchrony:A versatile code for the definition of relations?, Neuron, 24 (1999), 49-65, URL https://doi.org/10.1016/S0896-6273(00)80821-1.
    [92] A. S. Ecker, P. Berens, G. A. Keliris, et al., Decorrelated neuronal firing in cortical microcircuits, Science, 327 (2010), 584-587, URL https://doi.org/10.1126/science.1179867.
    [93] M. R. Cohen and A. Kohn, Measuring and interpreting neuronal correlations, Nat. Neurosci., 14 (2011), 811-819, URL https://doi.org/10.1038/nn.2842.
    [94] R. Brette, Computing with neural synchrony, PLoS Comput. Biol., 8 (2012), e1002561, URL https://doi.org/10.1371/journal.pcbi.1002561.
    [95] R. Moreno-Bote, J. Beck, I. Kanitscheider, et al., Information-limiting correlations, Nat. Neurosci., 17 (2014), 1410-1417, URL https://doi.org/10.1038/nn.3807.
    [96] W.-C. A. Lee, V. Bonin, M. Reed, et al., Anatomy and function of an excitatory network in the visual cortex, Nature, 532 (2016), 370-374, URL https://doi.org/10.1038/nature17192.
    [97] J. L. R. Rubenstein and M. M. Merzenich, Model of autism:Increased ratio of excitation/inhibition in key neural systems, Genes Brain Behav., 2 (2003), 255-267, URL https://doi.org/10.1034/j.1601-183X.2003.00037.x.
    [98] H. Y. Zoghbi, Postnatal neurodevelopmental disorders:Meeting at the synapse?, Science, 302 (2003), 826-830, URL https://doi.org/10.1126/science.1089071.
    [99] M. Sahin and M. Sur, Genes, circuits, and precision therapies for autism and related neurodevelopmental disorders, Science, 350 (2015), aab3897, URL https://doi.org/10.1126/science.aab3897.
    [100] J. P. K. Ip, N. Mellios and M. Sur, Rett syndrome:Insights into genetic, molecular and circuit mechanisms, Nat. Rev. Neurosci., 19 (2018), 368-382, URL https://doi.org/10.1038/s41583-018-0006-3.
    © 2019 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0).