
Inference on the effect of non homogeneous inputs in Ornstein-Uhlenbeck neuronal modeling

  • A non-homogeneous Ornstein-Uhlenbeck (OU) diffusion process is considered as a model for the membrane potential activity of a single neuron. We assume that, in the absence of stimuli, the neuron activity is described via a time-homogeneous process with linear drift and constant infinitesimal variance. When a sequence of inhibitory and excitatory post-synaptic potentials occurs with generally time-dependent rates, the membrane potential is then modeled by means of a non-homogeneous OU-type process. From a biological point of view it becomes important to understand the behavior of the membrane potential in the presence of such stimuli. From a statistical point of view, this means making inference on the resulting process modeling the phenomenon. To this aim, we derive some probabilistic properties of a non-homogeneous OU-type process and we provide a statistical procedure to fit the constant parameters and the time-dependent functions involved in the model. The proposed methodology is based on two steps: the first estimates the constant parameters, while the second fits the non-homogeneous terms of the process. For the second step, two methods are considered. Some numerical evaluations based on simulation studies are performed to validate and to compare the proposed procedures.

    Citation: Giuseppina Albano, Virginia Giorno. Inference on the effect of non homogeneous inputs in Ornstein-Uhlenbeck neuronal modeling. Mathematical Biosciences and Engineering, 2020, 17(1): 328-348. doi: 10.3934/mbe.2020018



    The first attempt to construct a stochastic model able to describe the spontaneous activity of a single neuron dates back to 1964, with Gerstein and Mandelbrot [1]. The authors assumed that the behavior of the neuronal membrane potential, subject to several simultaneous and independently acting inputs, is described by means of a stochastic diffusion process. They also proved that, by suitably choosing the involved parameters, the histograms of the experimentally recorded interspike intervals (ISI) can be approximated by the first passage time (FPT) probability density function (pdf) of a time-homogeneous Wiener process. Later, other models were proposed to capture more characteristics of the behaviour of the membrane potential. Specifically, to describe the effect of inhibitory and excitatory inputs and to introduce the exponential decay of the membrane potential, models based on Ornstein-Uhlenbeck (OU) and Feller processes were considered (see, for example, [2] and references therein).

    Recently, studies have also focused on the estimation of the parameters involved in neuronal models (see [3,4,5,6,7]) in order to understand information processing in neuronal mechanisms. An extensive review is provided in [8]. In [3,4,6] estimators of the input parameters of the OU neuronal model are derived by looking at the firing regime of the process. Moreover, in [5], several estimation methods are compared in three different regimes: sub-threshold, threshold and supra-threshold. In the sub-threshold regime the equilibrium point of the process is much smaller than the threshold, firing is caused only by random fluctuations, and the firings can be approximated by a Poisson process. In the threshold regime the equilibrium point of the process is very close to the firing threshold. In the supra-threshold regime the equilibrium point of the process is far above the firing threshold and the ISIs are relatively regular (deterministic firing, meaning that the neuron is active also in the absence of noise). In [5] it is shown that the moments method works best in the moderate sub-threshold regime, the Laplace transform method works only in the supra-threshold regime, and the integral equation method works in the entire parameter space. In [7] Bayesian estimators of the rates of the excitatory and inhibitory inputs are provided with the aim of describing the stimulation effects on the synaptic input.

    In [9,10,11] the OU process modeling the membrane potential is generalized by assuming that, in addition to the constant input, there is a random component changing between subsequent interspike intervals, generally caused by naturally occurring variations of environment signaling or by other sources of noise not included in the model. In particular, in [9,10], a Gaussian assumption is made for the random effect and the estimators of the parameters are explicitly obtained by MLE. In [11] nonparametric estimators of the random effect are provided, by using a kernel strategy and the deconvolution method.

    A wide literature on neuronal processing focuses on inputs arriving with generally non-constant frequencies and, in particular, on periodic inputs (see, for example, [12,13,14,15,16,17]). Generally, the periodic input signals are handled by means of a first passage time, viewed as the interval between two consecutive spikes.

    In the present paper we consider a model for the activity of a single neuron based on a time non-homogeneous OU process and we propose a two-step procedure to infer the model. Specifically, we assume that, in the absence of input, the neuron membrane potential exponentially decays to a resting potential with a given time constant. Moreover, we assume that the neuron is stimulated by a sequence of inhibitory and excitatory postsynaptic potentials (PSP's) of constant amplitude occurring according to a Poisson process with time-dependent rates. In this way we have two different processes: one, referred to as the control group, describes the membrane potential activity when the neuron is not stimulated; the other, denoted as the treated group, represents the neuronal membrane potential under the effect of the inhibitory and excitatory PSP's. Under the assumption of exponential decay, the control group is described by a time-homogeneous OU process, whereas the treated group is generally described by a non-homogeneous OU-type diffusion process.

    The estimation procedure for the two processes consists of a first step in which the constant parameters of the control group are estimated via Maximum Likelihood Estimation (MLE); in the second step, by using the parameters estimated in the first step, the time-dependent functions involved in the treated group are fitted by means of a generalized method of moments. For the second step, two methods are proposed: the first is based on the sample variance, the second on the sample covariance of the paths of the process.

    We point out that the focus is on the time interval between two consecutive spikes, in which the OU process is unrestricted, in the sense that no absorbing or reflecting condition is considered. Nevertheless, our procedures can be generalized to the case in which the control group as well as the treated group are described via an OU process restricted by suitable boundary conditions. This topic is beyond the aim of the present paper and will be treated in future works.

    The layout of the paper is the following. In Section 2 we describe the model and derive some theoretical results for the resulting non-homogeneous diffusion process. Section 3 provides the two-step procedures to infer the neuronal model. In Section 4 some simulation experiments are performed to evaluate the effectiveness of the proposed procedures and to compare their performances.

    We assume that, in the absence of stimuli, the behavior of the membrane potential of the neuron is described by a time-homogeneous OU diffusion process $\{X(t), t \ge t_0\}$ defined in $\mathbb{R}$ with drift and infinitesimal variance:

    $$A_1(x) = -\frac{x}{\vartheta} + \mu, \qquad A_2 = \sigma^2, \tag{2.1}$$

    respectively. In Eq. (2.1), the parameter ϑ>0 is the time constant characterizing the exponential decay; μ is related to the resting potential ϱ=ϑμ; finally, σ²>0 represents the noise intensity. Hence, we assume that in the absence of inputs the membrane potential, described by the diffusion process X(t), spontaneously decays to the resting potential ϱ with time constant ϑ.

    By stimulating the neuron with a sequence of constant inputs occurring with time-dependent rates, and making use of a standard procedure (see, for example, [23]), it can be proved that the evolution of the neuronal membrane potential is described via a generally time non-homogeneous OU diffusion process. Specifically, we assume that the neuron is reached by inhibitory and excitatory PSP's of constant magnitude ε occurring with rates:

    $$\alpha_i(t) = \frac{a_i(t)}{\varepsilon} + \frac{u(t)}{2\varepsilon^2}, \qquad \alpha_e(t) = \frac{a_e(t)}{\varepsilon} + \frac{u(t)}{2\varepsilon^2}, \tag{2.2}$$

    where $a_i(t)$, $a_e(t)$, $u(t)$ are positive functions of time. Here, $\alpha_i(t)$ denotes the rate of the inhibitory inputs whereas $\alpha_e(t)$ represents the rate of the excitatory inputs. It can be proved that the dynamics of the neuronal membrane potential is described via a generally time non-homogeneous OU diffusion process $\{X_S(t), t \ge t_0\}$ defined in $\mathbb{R}$ whose infinitesimal moments are related to the rates given in (2.2). In particular, the drift and infinitesimal variance of $X_S(t)$ are

    $$B_1(x,t) = -\frac{x}{\vartheta} + \mu + m(t), \qquad B_2(t) = u(t), \tag{2.3}$$

    respectively, with

    $$m(t) = \lim_{\varepsilon \to 0} \varepsilon\, [\alpha_e(t) - \alpha_i(t)], \qquad u(t) = \lim_{\varepsilon \to 0} \varepsilon^2\, [\alpha_e(t) + \alpha_i(t)].$$

    The sample paths of $X_S(t)$ are solutions of the following stochastic differential equation:

    $$dX_S(t) = \left[-\frac{X_S(t)}{\vartheta} + \mu + m(t)\right] dt + \sqrt{u(t)}\, dW(t), \qquad P[X_S(t_0) = x_0] = 1, \tag{2.4}$$

    where t0 is the initial time; x0 denotes the starting value and it is often assumed equal to the resting potential; W(t) is a standard Brownian motion. The solution of (2.4) is

    $$X_S(t) = x_0\, e^{-(t-t_0)/\vartheta} + \int_{t_0}^{t} e^{-(t-\xi)/\vartheta} [\mu + m(\xi)]\, d\xi + \int_{t_0}^{t} e^{-(t-\xi)/\vartheta} \sqrt{u(\xi)}\, dW(\xi). \tag{2.5}$$

    Clearly, the equation for the control group X(t) can be obtained from (2.5) by taking m(t)=0 and u(t)=σ².
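As a concrete illustration, Eq. (2.4) can be integrated numerically. The sketch below (Python/NumPy; the function name and parameter values are ours, not the paper's, and the resting level μϑ = −70 is an illustrative choice) applies the Euler-Maruyama scheme; the control group X(t) is recovered by the constant choices m(t)=0, u(t)=σ²:

```python
import numpy as np

def simulate_ou_paths(m, u, theta=1.0, mu=-70.0, x0=-70.0,
                      t0=0.0, dt=0.1, n_steps=500, n_paths=50, rng=None):
    """Euler-Maruyama discretization of Eq. (2.4):
    dX_S = [-X_S/theta + mu + m(t)] dt + sqrt(u(t)) dW."""
    rng = np.random.default_rng(rng)
    t = t0 + dt * np.arange(n_steps)
    x = np.empty((n_paths, n_steps))
    x[:, 0] = x0                                   # all paths start at x0
    for k in range(1, n_steps):
        drift = -x[:, k-1] / theta + mu + m(t[k-1])
        noise = np.sqrt(u(t[k-1]) * dt) * rng.standard_normal(n_paths)
        x[:, k] = x[:, k-1] + drift * dt + noise
    return t, x

# Control group: m(t) = 0 and u(t) = sigma^2 give the homogeneous OU process X(t)
t, paths = simulate_ou_paths(m=lambda t: 0.0, u=lambda t: 0.05**2)
```

Passing time-dependent callables for `m` and `u` produces sample paths of the treated group instead.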

    We note that (2.3) and (2.4) characterize a non-homogeneous OU process, also known as the leaky integrate-and-fire (LIF) model, describing the evolution of the membrane potential (see, for example, [13,18,19,20] and references therein). The theoretical study of the LIF model essentially concerns the analysis of the first passage time density, viewed as the interspike density, its asymptotic approximations (see, for example, [17,21]) and the analysis in the presence of a lower reflecting boundary (cf. [15,17,22]).

    In the present paper we focus on the inference of model (2.3) and (2.4). Differently from [3,4,5,6], in which the focus is on the firing regime, in our assumptions, both X(t) and XS(t) describe the neuronal membrane potential between two consecutive spikes, so no boundary conditions are assumed. This case was also considered in [9,10,11], in which the membrane potential, during a generic ISI, is a stationary OU process with a random equilibrium point having an assigned distribution; so the effect of the neuronal input remains constant within each ISI and it changes randomly from one ISI to another. Further, the input is able to influence only the drift of the process.

    In contrast, we assume that the membrane potential, during the generic ISI, is a generally non-stationary process. Moreover, in our model, the neuronal inputs can influence both the drift and the infinitesimal variance by means of the two time-dependent functions m(t) and u(t).

    The transition pdf $f_S(x,t|x_0,t_0)$ of $X_S(t)$ is the solution of the Kolmogorov equation

    $$\frac{\partial f_S(x,t|x_0,t_0)}{\partial t_0} + \left[-\frac{x_0}{\vartheta} + \mu + m(t_0)\right] \frac{\partial f_S(x,t|x_0,t_0)}{\partial x_0} + \frac{u(t_0)}{2}\, \frac{\partial^2 f_S(x,t|x_0,t_0)}{\partial x_0^2} = 0 \tag{2.6}$$

    and of the Fokker-Planck equation

    $$\frac{\partial f_S(x,t|x_0,t_0)}{\partial t} = -\frac{\partial}{\partial x} \left\{ \left[-\frac{x}{\vartheta} + \mu + m(t)\right] f_S(x,t|x_0,t_0) \right\} + \frac{u(t)}{2}\, \frac{\partial^2 f_S(x,t|x_0,t_0)}{\partial x^2}. \tag{2.7}$$

    Moreover, the transition pdf satisfies the initial delta condition

    $$\lim_{t \downarrow t_0} f_S(x,t|x_0,t_0) = \lim_{t_0 \uparrow t} f_S(x,t|x_0,t_0) = \delta(x - x_0).$$

    By using the transformation:

    $$\psi(x,t) = x\, e^{t/\vartheta} - \int^{t} [\mu + m(\xi)]\, e^{\xi/\vartheta}\, d\xi, \qquad \varphi(t) = \int^{t} u(\xi)\, e^{2\xi/\vartheta}\, d\xi,$$

    Eqs. (2.6) and (2.7) can be reduced to the analogous equations of a standard Wiener process for which the end points of the diffusion interval, ±∞, are natural boundaries. We then obtain that $f_S(x,t|x_0,t_0)$ is a normal pdf:

    $$f_S(x,t|x_0,t_0) = \frac{1}{\sqrt{2\pi V_S(t|t_0)}} \exp\left\{ -\frac{[x - M_S(t|x_0,t_0)]^2}{2 V_S(t|t_0)} \right\} \tag{2.8}$$

    with conditional mean and variance given by

    $$M_S(t|x_0,t_0) = x_0\, e^{-(t-t_0)/\vartheta} + \mu\vartheta \left[1 - e^{-(t-t_0)/\vartheta}\right] + \int_{t_0}^{t} m(\xi)\, e^{-(t-\xi)/\vartheta}\, d\xi, \qquad V_S(t|t_0) = \int_{t_0}^{t} u(\xi)\, e^{-2(t-\xi)/\vartheta}\, d\xi, \tag{2.9}$$

    respectively.

    The transition pdf $f(x,t|x_0,t_0)$ of X(t) can be obtained from (2.8) by setting m(t)=0 and u(t)=σ²; so, for t ≥ t₀, it is normal with conditional mean and variance

    $$M(t|x_0,t_0) = x_0\, e^{-(t-t_0)/\vartheta} + \mu\vartheta \left[1 - e^{-(t-t_0)/\vartheta}\right], \tag{2.10}$$
    $$V(t|t_0) = \frac{\sigma^2 \vartheta}{2} \left[1 - e^{-2(t-t_0)/\vartheta}\right]. \tag{2.11}$$
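The closed-form moments (2.10)-(2.11) can be checked against a brute-force simulation. A minimal sketch, with illustrative parameter values of our choosing (σ is enlarged here so that the variance is clearly visible):

```python
import numpy as np

# Illustrative parameter values (not the paper's simulation settings)
theta, mu, sigma = 1.0, -70.0, 0.5
x0, t0, t_end = -60.0, 0.0, 2.0

# Closed-form conditional mean and variance, Eqs. (2.10)-(2.11)
M = x0 * np.exp(-(t_end - t0) / theta) + mu * theta * (1 - np.exp(-(t_end - t0) / theta))
V = sigma**2 * theta / 2 * (1 - np.exp(-2 * (t_end - t0) / theta))

# Monte Carlo estimate via a fine Euler discretization of the homogeneous SDE
rng = np.random.default_rng(1)
n, dt = 100_000, 2e-3
x = np.full(n, x0)
for _ in range(int((t_end - t0) / dt)):
    x += (-x / theta + mu) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
# x.mean() and x.var() should match M and V to within ~1e-2
```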

    We note that

    $$M_S(t|x_0,t_0) = M(t|x_0,t_0) + \int_{t_0}^{t} m(\xi)\, e^{-(t-\xi)/\vartheta}\, d\xi. \tag{2.12}$$

    Further, the process X(t) admits an asymptotic behaviour with steady-state density:

    $$w(x) = \lim_{t \to \infty} f(x,t|x_0,t_0) = \frac{1}{\sqrt{\pi \sigma^2 \vartheta}} \exp\left\{ -\frac{[x - \mu\vartheta]^2}{\sigma^2 \vartheta} \right\}.$$

    Under suitable hypotheses on the functions m(t) and u(t), $X_S(t)$ also admits an asymptotic behaviour. In particular, if the following limits:

    $$L_1 = \lim_{t \to +\infty} \int_{t_0}^{t} m(\xi)\, e^{-(t-\xi)/\vartheta}\, d\xi, \qquad L_2 = \lim_{t \to +\infty} \int_{t_0}^{t} u(\xi)\, e^{-2(t-\xi)/\vartheta}\, d\xi$$

    exist and are finite, then the steady-state pdf $w_S(x)$ of $X_S(t)$ is normal with mean $\mu\vartheta + L_1$ and variance $L_2$.

    Finally, we remark that $X_S(t)$ is a Gauss-Markov process with mean and covariance functions:

    $$E[X_S(t)] = \mu\vartheta\, (1 - e^{-t/\vartheta}) + \int_{0}^{t} m(\xi)\, e^{-(t-\xi)/\vartheta}\, d\xi = M_S(t|0,0), \tag{2.13}$$
    $$c_S(\tau,t) = \mathrm{cov}[X_S(\tau), X_S(t)] = e^{-(t-\tau)/\vartheta} \int_{0}^{\tau} u(\xi)\, e^{-2(\tau-\xi)/\vartheta}\, d\xi = e^{-(t-\tau)/\vartheta}\, V_S(\tau|0), \tag{2.14}$$

    with t>0 and 0<τ<t, respectively. From (2.13) and (2.14) it is easy to obtain the mean and covariance for the control group X(t):

    $$E[X(t)] = \mu\vartheta\, (1 - e^{-t/\vartheta}) = M(t|0,0), \qquad c(\tau,t) = \mathrm{cov}[X(\tau), X(t)] = \frac{\sigma^2 \vartheta}{2}\, e^{-(t-\tau)/\vartheta} \left[1 - e^{-2\tau/\vartheta}\right] = e^{-(t-\tau)/\vartheta}\, V(\tau|0). \tag{2.15}$$

    In this section we consider a two-step procedure to infer the model. Firstly, we use the control group, modeled by the process X(t) given in (2.1), to estimate the constant parameters ϑ, μ and σ² via MLE; secondly, we use the treated group, described by $X_S(t)$ defined in (2.3), to fit the unknown functions m(t) and u(t). We suppose that the control group and the treated group are observed at the same discrete time instants. In the neuronal context this means recording the data of two groups of identical neurons, the first without time-dependent stimuli and the second with generally time-dependent inputs.

    In the following we consider a discrete sampling of the process X(t) in (2.1) based on $d_1$ sample paths observed at the times $t_{ij}$ for $i=1,\dots,d_1$ and $j=1,\dots,n$, with $t_{i1}=t_1$ for $i=1,\dots,d_1$. Let $x_{ij}$ be the observed values at times $t_{ij}$. We assume that the observations are equally spaced, $\Delta = t_{ij} - t_{i,j-1}$ for all $i,j$. Further, we assume $x_{i1} = x_0$ for $i=1,2,\dots,d_1$.

    The likelihood function for the parameters (ϑ,μ,σ2) is

    $$L(\vartheta, \mu, \sigma^2) = \prod_{i=1}^{d_1} \prod_{j=2}^{n} f(x_{ij}, t_j | x_{i,j-1}, t_{j-1}).$$

    The MLE can be explicitly obtained (see, for example, [3,4]):

    $$\hat\vartheta = -\frac{\Delta}{\log \hat\beta_1}, \qquad \hat\mu = \frac{\hat\beta_2}{\hat\vartheta}, \qquad \hat\sigma^2 = \frac{2 \hat\beta_3}{\hat\vartheta\, (1 - \hat\beta_1^2)}, \tag{3.1}$$

    where

    $$\hat\beta_1 = \frac{[d_1(n-1)]^{-1} \sum_{i=1}^{d_1} \sum_{j=2}^{n} x_{ij}\, x_{i,j-1} - [d_1(n-1)]^{-2} \sum_{i=1}^{d_1} \sum_{j=2}^{n} x_{ij} \sum_{i=1}^{d_1} \sum_{j=2}^{n} x_{i,j-1}}{[d_1(n-1)]^{-1} \sum_{i=1}^{d_1} \sum_{j=2}^{n} x_{i,j-1}^2 - [d_1(n-1)]^{-2} \left( \sum_{i=1}^{d_1} \sum_{j=2}^{n} x_{i,j-1} \right)^2},$$
    $$\hat\beta_2 = \frac{[d_1(n-1)]^{-1} \sum_{i=1}^{d_1} \sum_{j=2}^{n} \left( x_{ij} - \hat\beta_1 x_{i,j-1} \right)}{1 - \hat\beta_1}, \qquad \hat\beta_3 = [d_1(n-1)]^{-1} \sum_{i=1}^{d_1} \sum_{j=2}^{n} \left\{ x_{ij} - \hat\beta_1 x_{i,j-1} - \hat\beta_2 (1 - \hat\beta_1) \right\}^2.$$
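The estimators above pool all $d_1(n-1)$ observed transitions, since the transition densities depend only on the (constant) step Δ. A sketch of the computation, under the assumption that the paths are stored as rows of a NumPy array (the function name is ours):

```python
import numpy as np

def ou_mle(x, dt):
    """MLE of (theta, mu, sigma^2) for a homogeneous OU process, Eq. (3.1).
    x: array of shape (d1, n) -- d1 sample paths observed with step dt."""
    prev, curr = x[:, :-1].ravel(), x[:, 1:].ravel()   # pooled transitions
    # beta1: AR(1) regression coefficient, estimates exp(-dt/theta)
    b1 = (np.mean(curr * prev) - np.mean(curr) * np.mean(prev)) / \
         (np.mean(prev**2) - np.mean(prev)**2)
    b2 = np.mean(curr - b1 * prev) / (1 - b1)          # estimates mu * theta
    b3 = np.mean((curr - b1 * prev - b2 * (1 - b1))**2)  # conditional variance
    theta = -dt / np.log(b1)
    mu = b2 / theta
    sigma2 = 2 * b3 / (theta * (1 - b1**2))
    return theta, mu, sigma2
```

Applied to paths simulated with known parameters, the function should recover them up to sampling error.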

    Since the process $X_S(t)$ is characterized by the same constant parameters as the control group X(t), the estimates $\hat\vartheta$, $\hat\mu$, $\hat\sigma^2$ obtained from (3.1) will be used in the second step of the procedure to fit the unknown functions m(t) and u(t) in (2.4).

    In order to fit the functions m(t) and u(t) in (2.4), in Propositions 1, 2 and 3 we provide equations relating them to suitable characteristics of the process $X_S(t)$.

    Proposition 1. The function m(t) in (2.4) is given by

    $$m(t) = \frac{1}{\vartheta}\, h(t) + \frac{d}{dt} h(t), \tag{3.2}$$

    with

    $$h(t) \equiv h(t|x_0,t_0) = M_S(t|x_0,t_0) - M(t|x_0,t_0), \tag{3.3}$$

    where $M_S(t|x_0,t_0)$ and $M(t|x_0,t_0)$ are the conditional means of the processes $X_S(t)$ and X(t), respectively.

    Proof. From (2.12), we obtain:

    $$\int_{t_0}^{t} m(\xi)\, e^{\xi/\vartheta}\, d\xi = e^{t/\vartheta}\, h(t), \tag{3.4}$$

    with h(t) given in (3.3). Differentiating (3.4) with respect to t, we obtain (3.2).
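Identity (3.2) can be verified numerically for an assumed input function: compute h(t) by quadrature and recover m(t) as h/ϑ + h′. A sketch (the choice m(t) = 0.1 sin t and the grid are ours):

```python
import numpy as np

theta, t0 = 1.0, 0.0
m = lambda s: 0.1 * np.sin(s)          # assumed input function (our choice)

# h(t) = int_{t0}^t m(xi) e^{-(t-xi)/theta} dxi on a fine grid, computed
# via a cumulative trapezoidal rule applied to m(xi) e^{xi/theta}, cf. (3.4)
t = np.linspace(t0, 10.0, 2001)
dt = t[1] - t[0]
g = m(t) * np.exp(t / theta)
integ = np.concatenate(([0.0], np.cumsum((g[1:] + g[:-1]) / 2 * dt)))
h = np.exp(-t / theta) * integ

# Recover m via Eq. (3.2): m(t) = h(t)/theta + h'(t) (central differences)
m_rec = h[1:-1] / theta + (h[2:] - h[:-2]) / (2 * dt)
err = np.max(np.abs(m_rec - m(t[1:-1])))   # small O(dt^2) discretization error
```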

    Proposition 2. The function u(t) in (2.4) is given by

    $$u(t) = \frac{2}{\vartheta}\, V_S(t|t_0) + \frac{d V_S(t|t_0)}{dt}, \tag{3.5}$$

    where $V_S(t|t_0)$ is the conditional variance of the treated group $X_S(t)$.

    Proof. Differentiating both sides of (2.9) with respect to t, we obtain:

    $$\frac{d V_S(t|t_0)}{dt} = u(t) - \frac{2}{\vartheta}\, V_S(t|t_0),$$

    from which Eq. (3.5) follows. Alternatively, the unknown function u(t) in (2.4) can be obtained by looking at the covariance function $c_S(\cdot,\cdot)$ of the process $X_S(t)$, as shown in the following proposition.

    Proposition 3. The function u(t) in (2.4) is given by

    $$u(\tau) = e^{(t-\tau)/\vartheta} \left\{ \frac{c_S(\tau,t)}{\vartheta} + \frac{\partial c_S(\tau,t)}{\partial \tau} \right\} \qquad (0 < \tau < t). \tag{3.6}$$

    Proof. Eq. (3.6) is obtained from (2.14) by differentiating both sides with respect to τ (for fixed t).

    Proposition 1 can be used to fit the function m(t) in (2.4), while Propositions 2 and 3 provide two different estimates of the function u(t). In the following we specify the procedures.

    Let us consider a discrete sampling of the process $X_S(t)$ in (2.4) based on $d_2$ sample paths at the times $t_{ij}$ for $i=1,\dots,d_2$ and $j=1,\dots,n$, with $t_{i1}=t_1$ for $i=1,\dots,d_2$. Let $x^{(S)}_{ij}$ be the observed values at times $t_{ij}$, $i=1,\dots,d_2$ and $j=1,\dots,n$. We assume, as for the control group, that the step between two consecutive observations is Δ and that $x^{(S)}_{i1} = x_0$ for $i=1,2,\dots,d_2$. Note that, in order to apply Eq. (3.2), data from the control and treated groups have to be observed at the same time instants. Considering Proposition 1 and Proposition 2, the following procedure can be formulated:

    1. From the data of the control group, estimate the parameters of the process X(t), obtaining the ML estimates $\hat\vartheta$, $\hat\mu$, $\hat\sigma^2$.

    2. Let

    $$\bar{x}_j = \frac{1}{d_1} \sum_{i=1}^{d_1} x_{i,j} \qquad \text{and} \qquad \bar{x}^{(S)}_j = \frac{1}{d_2} \sum_{i=1}^{d_2} x^{(S)}_{i,j} \qquad (j=1,2,\dots,n)$$

    be the sample means of X(t) and $X_S(t)$, respectively, for $j=1,2,\dots,n$.

    3. Obtain the points:

    $$h_j = \bar{x}^{(S)}_j - \bar{x}_j. \tag{3.7}$$

    4. Interpolate the points $h_j$ in (3.7) to obtain an estimate, $\hat{h}(t)$, of the function h(t) in (3.3).

    5. Consider

    $$\hat{m}(t) = \frac{1}{\hat\vartheta}\, \hat{h}(t) + \frac{d}{dt} \hat{h}(t)$$

    as an approximation of the function m(t).

    6. Obtain the points

    $$v_j = \frac{1}{d_2 - 1} \sum_{i=1}^{d_2} \left[ x^{(S)}_{i,j} - \bar{x}^{(S)}_j \right]^2. \tag{3.8}$$

    7. Interpolate the points $v_j$ in (3.8) to obtain an estimate, $\hat{v}(t)$, of the function $V_S(t|t_0)$ in (2.9).

    8. A fitted function of u(t) in (2.4) is:

    $$\hat{u}_v(t) = \frac{2}{\hat\vartheta}\, \hat{v}(t) + \frac{d}{dt} \hat{v}(t).$$
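Steps 2-8 above can be sketched in a few lines. As a simple stand-in for the interpolation steps 4 and 7, the derivatives of $\hat h$ and $\hat v$ are taken here by central differences on the observation grid; the function name and array layout are ours:

```python
import numpy as np

def fit_procedure1(x_ctrl, x_treat, dt, theta_hat):
    """Steps 2-8 of Procedure 1: fit m(t) and u(t) from sample moments.
    x_ctrl: (d1, n) paths of X(t); x_treat: (d2, n) paths of X_S(t).
    Derivatives are approximated by central differences, a simple
    stand-in for the interpolation step of the paper."""
    xbar = x_ctrl.mean(axis=0)                 # step 2: sample means
    xbar_s = x_treat.mean(axis=0)
    h = xbar_s - xbar                          # step 3: h_j, Eq. (3.7)
    v = x_treat.var(axis=0, ddof=1)            # step 6: v_j, Eq. (3.8)
    m_hat = h / theta_hat + np.gradient(h, dt)       # steps 4-5
    u_hat = 2 * v / theta_hat + np.gradient(v, dt)   # steps 7-8
    return m_hat, u_hat
```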

    An alternative procedure can be obtained by using Eq. (3.6) instead of Eq. (3.5). Precisely, considering a discrete sampling of the process $X_S(t)$ as in Procedure 1, Procedure 2 works as before up to step 5, whereas the function u(t) is fitted by using Proposition 3. Therefore, the last steps change to:

    6. Obtain the points

    $$c_j = \frac{1}{d_2 - 1} \sum_{i=1}^{d_2} \left[ x^{(S)}_{i,j-1} - \bar{x}^{(S)}_{j-1} \right] \left[ x^{(S)}_{i,j} - \bar{x}^{(S)}_j \right]. \tag{3.9}$$

    7. Interpolate the points $c_j$ in (3.9) to obtain an estimate, $\hat{c}(t)$, of the covariance function $c_S(t, t+\Delta)$ in (2.14).

    8. A fitted function of u(t) in (2.4) is:

    $$\hat{u}_c(t) = e^{\Delta/\hat\vartheta} \left\{ \frac{\hat{c}(t)}{\hat\vartheta} + \frac{d \hat{c}(t)}{dt} \right\}.$$
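The alternative steps 6-8 can be sketched in the same style (function name ours). One caveat in our sketch: the points $c_j$ move along the diagonal $t = \tau + \Delta$, so by the chain rule the partial derivative appearing in (3.6) equals $d\hat c/d\tau + \hat c/\vartheta$, and the code below therefore uses the resulting factor $2/\vartheta$ when differentiating the interpolated covariance along the grid:

```python
import numpy as np

def fit_u_covariance(x_treat, dt, theta_hat):
    """Steps 6-8 of Procedure 2: fit u(t) from the lag-one sample
    covariances c_j of Eq. (3.9), with central differences in place
    of the interpolation step."""
    d2, n = x_treat.shape
    dev = x_treat - x_treat.mean(axis=0)       # deviations from sample means
    c = (dev[:, :-1] * dev[:, 1:]).sum(axis=0) / (d2 - 1)  # c_j, Eq. (3.9)
    # c_j is observed along the diagonal t = tau + dt, so the partial
    # derivative in (3.6) equals dc/dtau + c/theta (chain rule)
    dc = np.gradient(c, dt)
    return np.exp(dt / theta_hat) * (2 * c / theta_hat + dc)
```

On simulated homogeneous paths this should recover u(t) ≈ σ² away from the end points of the grid.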

    We point out that the two suggested procedures differ only in the fit of the infinitesimal variance u(t).

    In order to evaluate the proposed procedures we present some experiments in which 50 sample paths, each consisting of 500 observations, of the processes X(t) and $X_S(t)$ are simulated, with t₀=0, Δ=0.1 and x₀=−70 (representing the resting potential).

    We point out that the simulation experiments can be performed starting from the classical Euler discretization of the corresponding stochastic differential equation. Since the process $X_S(t)$ is Gauss-Markov, an alternative way to simulate sample paths of $X_S(t)$ in (2.4) is to consider the following discretization (see, for example, [24]):

    $$X_S(t_k) = E[X_S(t_k)] + \left\{ X_S(t_{k-1}) - E[X_S(t_{k-1})] \right\} e^{-\Delta/\vartheta} + \xi_k \sqrt{V_S(t_k|t_{k-1})}, \tag{4.1}$$

    with $t_k = t_0 + k\Delta$, $k=1,\dots,499$. Here, $\xi_1, \xi_2, \dots$ is a sequence of independent and identically distributed standard Gaussian random variables. Clearly, setting m(t)=0 and u(t)=σ² for all t in (4.1), sample paths of X(t) can be obtained.
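The exact discretization (4.1) can be sketched as follows (function name and parameter values are ours); unlike the Euler scheme, it introduces no time-discretization bias, since the Gauss-Markov transition law is sampled exactly:

```python
import numpy as np

def simulate_gm(mean_fn, var_fn, theta, t0, dt, n_steps, n_paths, rng=None):
    """Exact discretization (4.1) of the Gauss-Markov process X_S(t):
    X_S(t_k) = E[X_S(t_k)] + {X_S(t_{k-1}) - E[X_S(t_{k-1})]} e^{-dt/theta}
               + xi_k * sqrt(V_S(t_k | t_{k-1})).
    mean_fn(t) returns E[X_S(t)]; var_fn(t, s) returns V_S(t|s)."""
    rng = np.random.default_rng(rng)
    t = t0 + dt * np.arange(n_steps)
    x = np.empty((n_paths, n_steps))
    x[:, 0] = mean_fn(t0)                      # paths start at x0 = E[X_S(t0)]
    a = np.exp(-dt / theta)
    for k in range(1, n_steps):
        sd = np.sqrt(var_fn(t[k], t[k-1]))
        x[:, k] = mean_fn(t[k]) + (x[:, k-1] - mean_fn(t[k-1])) * a \
                  + sd * rng.standard_normal(n_paths)
    return t, x
```

For the homogeneous case, `mean_fn` is the conditional mean (2.10) from $x_0$ and `var_fn` is the variance (2.11).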

    We simulated the sample paths of the processes X(t) and $X_S(t)$ by using both Eq. (2.4) and Eq. (4.1); since the estimates obtained in the two cases do not show relevant differences, in the following our simulation experiments are based on the simpler Euler discretization.

    We have chosen several functions m(t) and u(t), obtaining four non-homogeneous OU processes $X_S(t)$. Specifically, we have considered periodic functions in the drift and in the infinitesimal variance of the process, so as to model the periodic neuronal inputs widely considered in the literature. Further, the infinitesimal variance of Case 3 was also chosen in [17]. Finally, some test cases are also considered to validate our procedures. In all the cases, for the process modeling the control group we have chosen the following values of the parameters: ϑ=1, μ=−70, σ=0.05, so the original homogeneous model has infinitesimal drift and variance

    $$A_1(x) = -x - 70, \qquad A_2 = (0.05)^2.$$

    For each process we apply the procedures proposed in Sections 3.3 - 3.4 and we compare the true functions with their fitted versions. Moreover, to find out whether there are significant differences between the two fitting methods for the function u(t), based on the sample variance and on the sample covariance, we compare the theoretical conditional mean and variance of the process XS(t) with the corresponding fitted functions. Finally, the simulation study includes the calculation of the Mean Absolute Error (MAE) for the conditional mean and the variance of XS(t).

    First of all, by focusing on the control group, we remark that the use of MLE as described in Section 3.1 has provided the following results:

    $$\hat\vartheta = 1.01027, \qquad \hat\mu = -69.28929, \qquad \hat\sigma^2 = 0.00253.$$

    We consider the process $X^{(1)}_S(t)$ characterized by the following infinitesimal moments:

    $$B^{(1)}_1(x,t) = -\frac{x}{\vartheta} + \mu + m_1(t), \qquad B^{(1)}_2(x,t) = u_1(t), \tag{4.2}$$

    where $m_1(t) = c\,[b + \sin(\omega t)]$ and $u_1(t) = \sigma^2$. In this case we assume that the input rates influence only the drift of the treated process. For this process, from (2.9), (2.12), (2.13) and (2.14) we have:

    $$M^{(1)}_S(t|y,\tau) = M(t|y,\tau) + bc\vartheta \left[1 - e^{-(t-\tau)/\vartheta}\right] + \frac{c\vartheta}{1 + \omega^2\vartheta^2} \left\{ \sin(\omega t) - \omega\vartheta \cos(\omega t) - e^{-(t-\tau)/\vartheta} \left[ \sin(\omega\tau) - \omega\vartheta \cos(\omega\tau) \right] \right\},$$
    $$E[X^{(1)}_S(t)] = M^{(1)}_S(t|0,0), \qquad V^{(1)}_S(t|\tau) = V(t|\tau), \qquad c^{(1)}_S(\tau,t) = c(\tau,t),$$

    where M(t|y,τ) and V(t|τ), defined in (2.10) and (2.11), are the conditional mean and variance of the control group, whereas c(τ,t) is the covariance function of X(t) defined in (2.15).

    We assume b=0, c=0.1 and ω=1. In Figure 1 the function $\hat{m}_1(t)$ (green line) is shown on the top, along with the true function $m_1(t)$ (red line). A smoothed version of $\hat{m}_1(t)$ is also displayed. On the bottom the results obtained for $u_1(t)$ are plotted: on the left we show the results obtained making use of Procedure 1, whereas on the right the results obtained by Procedure 2 are plotted. We note that both procedures give satisfactory results. Moreover, the approximation of $u_1(t)$ based on Procedure 1 shows a higher variability with respect to the one obtained by Procedure 2. Further, the approximation based on Procedure 2 seems to systematically underestimate the true function $u_1(t)$. To further investigate the comparison of the two suggested procedures, in Figure 2 the conditional mean (on the left) and variance (on the right) of the process (4.2) are plotted together with their fitted versions. We can see that the fitted conditional mean is able to capture the periodic structure of the conditional mean and also the amplitude of its periodicity. Further, the conditional variance of $X^{(1)}_S(t)$ in (4.2) seems to be well fitted by using Procedure 2, while Procedure 1 seems to overestimate the true conditional variance.

    Figure 1.  For the process $X^{(1)}_S(t)$ defined in (4.2), on the top the estimate of $m_1(t)$; on the bottom the estimate of $u_1(t) = \sigma^2$ via Procedure 1 (on the left) and via Procedure 2 (on the right).
    Figure 2.  Comparison between $M^{(1)}_S(t|-70,0)$ and the corresponding fitting function (on the left), and between $V^{(1)}_S(t|0)$ and its fitting functions (on the right).

    Finally, we simulate N=50 replications of model (4.2) for several choices of the parameter σ and of the observation step Δ. For each Monte Carlo replication, indexed by $r=1,2,\dots,N$, the absolute errors of both procedures with respect to the mean and the variance of the process $X_S(t)$ have been calculated. In particular, denoting by $(\hat{M}_S(t|x_0,t_0))_r$ the fitted conditional mean at replication r, and by $(\hat{V}^{(v)}_S(t|t_0))_r$ and $(\hat{V}^{(c)}_S(t|t_0))_r$ the fitted conditional variances obtained by Procedures 1 and 2, respectively, we consider the following absolute errors

    $$e_{r,M}(t) = \left| M_S(t|x_0,t_0) - (\hat{M}_S(t|x_0,t_0))_r \right|, \qquad e^{(v)}_{r,V}(t) = \left| V_S(t|t_0) - (\hat{V}^{(v)}_S(t|t_0))_r \right|, \qquad e^{(c)}_{r,V}(t) = \left| V_S(t|t_0) - (\hat{V}^{(c)}_S(t|t_0))_r \right|.$$

    Then, we derive the mean absolute errors as follows

    $$MAE_M(t) = \frac{1}{N} \sum_{r=1}^{N} e_{r,M}(t), \qquad MAE^{(v)}_V(t) = \frac{1}{N} \sum_{r=1}^{N} e^{(v)}_{r,V}(t), \qquad MAE^{(c)}_V(t) = \frac{1}{N} \sum_{r=1}^{N} e^{(c)}_{r,V}(t).$$

    We point out that the MAEs are time-dependent. In order to give a single measure of the errors, we consider the mean over time of the MAE for each estimated function; for example, for $MAE_M(t)$:

    $$\mathrm{mean}(MAE_M(t)) = \frac{1}{500} \sum_{k=0}^{499} MAE_M(t_k).$$
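The time-averaged MAE above reduces to a two-line computation once the replications are stacked into an array; a sketch with hypothetical array shapes (the function name is ours):

```python
import numpy as np

def mean_mae(true_values, fitted_runs):
    """Time-averaged mean absolute error over N Monte Carlo replications.
    true_values: shape (n_times,) -- the true function on the time grid;
    fitted_runs: shape (N, n_times) -- one fitted curve per replication."""
    mae_t = np.mean(np.abs(fitted_runs - true_values), axis=0)  # MAE(t_k)
    return mae_t.mean()                                         # mean over time
```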

    The obtained values of $\mathrm{mean}(MAE_M(t))$, $\mathrm{mean}(MAE^{(v)}_V(t))$ and $\mathrm{mean}(MAE^{(c)}_V(t))$ are listed in Table 1 for several choices of the parameter σ and of the observation step Δ. Clearly, in almost all cases the errors increase as σ and the step Δ increase. This is more evident in the column reporting the errors on the conditional variance of $X^{(1)}_S(t)$ obtained by Procedure 1. Further, comparing the errors on the conditional variance given by the two procedures, we can see that Procedure 2 always provides smaller mean errors, suggesting that it is preferable when one is interested in the conditional mean and variance.

    Table 1.  Errors for the process $X^{(1)}_S(t)$.
    σ  Step  mean(MAE_M(t))  mean(MAE^(v)_V(t))  mean(MAE^(c)_V(t))
    σ=0.05 Δ=0.01 0.04178 0.00115 0.00041
    Δ=0.1 0.04438 0.00125 0.00020
    Δ=0.5 0.04940 0.00131 0.00049
    σ=0.1 Δ=0.01 0.04456 0.00443 0.00150
    Δ=0.1 0.04621 0.00500 0.00082
    Δ=0.5 0.04548 0.00522 0.00196
    σ=0.5 Δ=0.01 0.10452 0.10903 0.04237
    Δ=0.1 0.09540 0.12596 0.02002
    Δ=0.5 0.06412 0.13169 0.04851
    σ=1 Δ=0.01 0.18915 0.45787 0.17219
    Δ=0.1 0.17724 0.49276 0.08436
    Δ=0.5 0.10775 0.52634 0.19468


    Let $X^{(2)}_S(t)$ be the stochastic diffusion process having infinitesimal moments:

    $$B^{(2)}_1(x,t) = -\frac{x}{\vartheta} + \mu + m_2(t), \qquad B^{(2)}_2(x,t) = u_2(t), \tag{4.3}$$

    with $m_2(t) = ct + d$ and $u_2(t) = \sigma^2$. Also in this case the input rates influence only the drift of the treated process. For the process $X^{(2)}_S(t)$, from (2.9), (2.12), (2.13) and (2.14) we have:

    $$M^{(2)}_S(t|y,\tau) = M(t|y,\tau) + d\vartheta \left[1 - e^{-(t-\tau)/\vartheta}\right] + c\vartheta \left[t - \vartheta + (\vartheta - \tau)\, e^{-(t-\tau)/\vartheta}\right],$$
    $$E[X^{(2)}_S(t)] = M^{(2)}_S(t|0,0), \qquad V^{(2)}_S(t|\tau) = V(t|\tau), \qquad c^{(2)}_S(\tau,t) = c(\tau,t),$$

    where M(t|y,τ) and V(t|τ), defined in (2.10) and (2.11), are the conditional mean and variance of the control group, whereas c(τ,t) is the covariance function of X(t) defined in (2.15).

    We assume c=0.01 and d=0. In Figure 3 and in Figure 4 the same analysis as in Figures 1 and 2 is performed. Also in this case the fitted functions seem to be very close to the true ones, although Procedure 2 underestimates the true infinitesimal variance $u_2(t)$. Further, Figure 4 shows that the fitted conditional mean recognizes well the linear trend of the conditional mean $M^{(2)}_S(t|x_0,t_0)$ and only slightly overestimates its slope. By looking at the conditional variance $V^{(2)}_S(t|\tau)$, we observe results similar to those of the case in Section 4.1.

    In Table 2 the errors on the conditional variance by the two procedures for the process X(2)S(t) are listed for the same choices of Section 4.1, and they confirm the results of Table 1.

    Figure 3.  For the process $X^{(2)}_S(t)$ defined in (4.3), on the top the estimate of $m_2(t) = 0.01t$; on the bottom the estimate of $u_2(t) = \sigma^2$ via Procedure 1 (on the left) and via Procedure 2 (on the right).
    Figure 4.  Comparison between $M^{(2)}_S(t|-70,0)$ and the corresponding fitting function (on the left), and between $V^{(2)}_S(t|0)$ and its fitting functions (on the right).
    Table 2.  Errors for the process $X^{(2)}_S(t)$.
    σ  Step  mean(MAE_M(t))  mean(MAE^(v)_V(t))  mean(MAE^(c)_V(t))
    σ=0.05 Δ=0.01 0.01984 0.00115 0.00043
    Δ=0.1 0.23981 0.00126 0.00021
    Δ=0.5 1.23502 0.00131 0.00049
    σ=0.1 Δ=0.01 0.02817 0.00458 0.00183
    Δ=0.1 0.23958 0.00505 0.00082
    Δ=0.5 1.23497 0.00522 0.00196
    σ=0.5 Δ=0.01 0.09679 0.11184 0.00424
    Δ=0.1 0.25660 0.12555 0.02065
    Δ=0.5 1.23836 0.13106 0.04882
    σ=1 Δ=0.01 0.18682 0.44296 0.16401
    Δ=0.1 0.28073 0.50265 0.08271
    Δ=0.5 1.24534 0.52284 0.19582


    We consider the process $X^{(3)}_S(t)$ with infinitesimal moments:

    $$B^{(3)}_1(x,t) = -\frac{x}{\vartheta} + \mu + m_3(t), \qquad B^{(3)}_2(x,t) = u_3(t), \tag{4.4}$$

    where $m_3(t) = c\,[b + \sin(\omega t)]$ and $u_3(t) = \sigma^2 \left(1 - e^{-2t/\vartheta}\right)^2$. In this case the input rates influence both the drift and the infinitesimal variance of the treated process. For the process $X^{(3)}_S(t)$, from (2.9), (2.12), (2.13) and (2.14) we have:

    $$M_S^{(3)}(t|y,\tau)=M_S^{(1)}(t|y,\tau),\qquad E[X_S^{(3)}(t)]=E[X_S^{(1)}(t)],$$
    $$V_S^{(3)}(t|\tau)=e^{-2t/\vartheta}\,\sigma^2c^2\left[-2t+2\tau+\vartheta\sinh\!\left(\frac{2t}{\vartheta}\right)-\vartheta\sinh\!\left(\frac{2\tau}{\vartheta}\right)\right],$$
    $$c_S^{(3)}(\tau,t)=e^{-2(t-\tau)/\vartheta}\,V_S^{(3)}(\tau|0),\qquad V_S^{(3)}(\tau|0)=e^{-2\tau/\vartheta}\,\sigma^2c^2\left[-2\tau+\vartheta\sinh\!\left(\frac{2\tau}{\vartheta}\right)\right],$$

    with M(1)S(t|y,τ) given in (2.10). We set c=0.1, b=1.2 and ω=1. In Figure 5, on the top, the function m3(t) is plotted along with its approximation and its smoothed version. On the bottom, the results obtained for u3(t) by the two procedures are shown. Also in this case, the approximation of u3(t) based on the sample variance of the process shows a higher variability than the one obtained by Procedure 2, while the latter seems to underestimate the function u3(t). Figure 6, showing the conditional mean and the conditional variance (the latter approximated by the two procedures), confirms the behaviour observed in Sections 4.1 and 4.2. In Table 3 the mean errors of the conditional mean and variance are listed for various choices of σ2 and of the step Δ.
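    The smoothed version of the pointwise estimate of m3(t) mentioned above can be produced with any standard smoother. The following sketch uses a centered moving average on synthetic noisy values of m3(t); the paper's actual smoother and the noise level are not specified here, so all numerical choices are illustrative.

```python
import numpy as np

# Moving-average smoothing of a noisy pointwise estimate of m3(t) = c(b + sin(wt)).
# The noise level (0.05) and the window length (51) are illustrative assumptions.
rng = np.random.default_rng(2)
c, b, omega = 0.1, 1.2, 1.0
t = np.linspace(0.0, 10.0, 1001)
m3_true = c * (b + np.sin(omega * t))
m3_noisy = m3_true + 0.05 * rng.standard_normal(t.size)

w = 51                                          # odd window length
m3_smooth = np.convolve(m3_noisy, np.ones(w) / w, mode="same")
# mode="same" zero-pads at the borders, so only the interior values are reliable
```

A longer window reduces the noise further but attenuates the sinusoidal component, so the window length trades variance against bias.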

    Figure 5.  For the process X(3)S(t) defined in (4.4): on the top, the estimate of m3(t)=0.1(1.2+sin(t)); on the bottom, the estimate of u3(t)=σ^2(1−e^{−2t/ϑ})^2 via Procedure 1 (on the left) and via Procedure 2 (on the right).
    Figure 6.  Comparison between M(3)S(t|70,0) and the corresponding fitting function (on the left) and between V(3)S(t|0) and its fitting functions (on the right).
    Table 3.  Errors for Case 3.
    σ Step mean(MAE_E[X]) mean(MAE^(v)_Var[X]) mean(MAE^(c)_Var[X])
    σ=0.05 Δ=0.01 0.12285 0.00093 0.00037
    Δ=0.1 0.11852 0.00123 0.00020
    Δ=0.5 0.12010 0.00129 0.00049
    σ=0.1 Δ=0.01 0.12328 0.00370 0.00161
    Δ=0.1 0.11865 0.00489 0.00080
    Δ=0.5 0.12025 0.00524 0.00193
    σ=0.5 Δ=0.01 0.14681 0.09447 0.04388
    Δ=0.1 0.14135 0.12339 0.01991
    Δ=0.5 0.12405 0.12975 0.04859
    σ=1 Δ=0.01 0.21759 0.36846 0.15951
    Δ=0.1 0.19573 0.49398 0.07963
    Δ=0.5 0.14582 0.52598 0.19158


    For the last simulation experiment we consider the process X(4)S(t) with:

    $$B_1^{(4)}(x,t)=-\frac{x}{\vartheta}+\mu+m_4(t),\qquad B_2^{(4)}(x,t)=u_4(t), \tag{4.5}$$

    where m4(t) = 0 and u4(t) = σ^2 c (b + sin(ωt)).

    In this case the input rates influence only the infinitesimal variance of the treated process. For the process X(4)S(t), from (2.9), (2.12), (2.13) and (2.14) we have:

    $$M_S^{(4)}(t|y,\tau)=M(t|y,\tau),\qquad E[X_S^{(4)}(t)]=M_S^{(4)}(t|0,0),$$
    $$V_S^{(4)}(t|\tau)=\sigma^2c\left\{\frac{b\vartheta}{2}\left[1-e^{-2(t-\tau)/\vartheta}\right]+\frac{\vartheta}{4+\omega^2\vartheta^2}\Bigl[2\sin(\omega t)-\omega\vartheta\cos(\omega t)+e^{-2(t-\tau)/\vartheta}\bigl(\omega\vartheta\cos(\omega\tau)-2\sin(\omega\tau)\bigr)\Bigr]\right\},$$
    $$c_S^{(4)}(\tau,t)=e^{-(t-\tau)/\vartheta}\,V_S^{(4)}(\tau|0),$$

    where M(t|y,τ) denotes the conditional mean of the control group modeled via X(t). The numerical study is performed for c=0.1, b=1.2 and ω=1. The fitted functions along with the true ones are plotted in Figure 7. As shown in Figure 8, both the suggested procedures are able to detect the periodicity in the infinitesimal variance, although Procedure 2 provides a fitted conditional variance slightly delayed with respect to the true one. In Table 4 the related mean errors are listed for various choices of σ2 and of the step Δ.
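    The closed form of V(4)S(t|τ) can be checked numerically: by the standard representation of the OU conditional variance, it must equal the integral of e^{−2(t−s)/ϑ} u4(s) over [τ, t]. The sketch below performs this check; ϑ and σ are illustrative, while c, b, ω follow the text.

```python
import numpy as np

# Check of the closed-form conditional variance of Case 4 against the integral
# representation V(t|tau) = int_tau^t exp(-2(t-s)/theta) * u4(s) ds,
# with u4(s) = sigma^2 * c * (b + sin(omega*s)).
# theta and sigma are illustrative; c, b, omega are the values used in the text.
theta, sigma, c, b, omega = 2.0, 0.5, 0.1, 1.2, 1.0
tau, t = 0.0, 3.0

def v4_closed(t, tau):
    e = np.exp(-2.0 * (t - tau) / theta)
    trig = (2.0 * np.sin(omega * t) - omega * theta * np.cos(omega * t)
            + e * (omega * theta * np.cos(omega * tau) - 2.0 * np.sin(omega * tau)))
    return sigma**2 * c * (b * theta / 2.0 * (1.0 - e)
                           + theta / (4.0 + omega**2 * theta**2) * trig)

# trapezoidal quadrature of the integral representation
s = np.linspace(tau, t, 200_001)
integrand = np.exp(-2.0 * (t - s) / theta) * sigma**2 * c * (b + np.sin(omega * s))
v4_num = float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(s)))
# v4_closed(t, tau) and v4_num agree up to quadrature error
```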

    Figure 7.  For the process X(4)S(t) defined in (4.5): on the top, the estimate of m4(t)=0; on the bottom, the estimate of u4(t)=0.1σ2(1.2+sin(t)) via Procedure 1 (on the left) and via Procedure 2 (on the right).
    Figure 8.  Comparison between M(4)S(t|70,0) and the corresponding fitting function (on the left) and between V(4)S(t|0) and its fitting functions (on the right).
    Table 4.  Errors for Case 4.
    σ Step mean(MAE_E[X]) mean(MAE^(v)_Var[X]) mean(MAE^(c)_Var[X])
    σ=0.05 Δ=0.01 0.00675 0.00017 0.00010
    Δ=0.1 0.00632 0.00015 0.00009
    Δ=0.5 0.00350 0.00016 0.00008
    σ=0.1 Δ=0.01 0.01417 0.00066 0.00038
    Δ=0.1 0.01288 0.00061 0.00040
    Δ=0.5 0.00731 0.00064 0.00033
    σ=0.5 Δ=0.01 0.06852 0.01630 0.00879
    Δ=0.1 0.06479 0.01525 0.00971
    Δ=0.5 0.03586 0.01587 0.00828
    σ=1 Δ=0.01 0.13745 0.06381 0.03816
    Δ=0.1 0.12409 0.05984 0.03734
    Δ=0.5 0.07120 0.06305 0.03330


    In this paper we have considered a time non-homogeneous OU process. It can be viewed as a model for the membrane potential activity of a single neuron stimulated by a sequence of inhibitory and excitatory post-synaptic potentials with generally time-dependent rates. We have proposed a two-step procedure to make inference on the model. It uses a control group, modeled by means of a homogeneous OU process, and a treated group. In the first step the constant parameters of the model are estimated by maximum likelihood applied to the control group. In the second step the non-homogeneous terms of the process are fitted: using the estimates obtained in the first step, a generalized method of moments provides the fit of the time-dependent functions in the process describing the treated group. To estimate the infinitesimal variance of the non-homogeneous OU process we propose two procedures: the first is based on the conditional variance and the second on the covariance function of the treated group. Several numerical simulations show that the first procedure seems able to capture the time-dependent infinitesimal variance well, whereas the second one better fits the conditional variance of the process.
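    The first step can be sketched as follows, assuming equally spaced observations of the control group: the homogeneous OU process dX = (−X/ϑ + μ)dt + σ dW has an exact AR(1) transition, so ϑ, μ and σ2 can be recovered from the lag-one regression of the sample. This is a conditional least-squares sketch of ours (asymptotically close to the MLE used in the paper, but not the authors' code); all names and values are illustrative.

```python
import numpy as np

# Step-1 sketch: fit the homogeneous OU process dX = (-X/theta + mu) dt + sigma dW
# from an equally spaced path, exploiting the exact AR(1) transition
#   X_{k+1} = a + rho * X_k + eps_k,
# with rho = exp(-dt/theta), a = mu*theta*(1 - rho),
# and Var(eps) = sigma^2 * theta * (1 - rho^2) / 2.
def fit_homogeneous_ou(x, dt):
    x_prev, x_next = x[:-1], x[1:]
    rho, a = np.polyfit(x_prev, x_next, 1)       # OLS slope and intercept
    theta = -dt / np.log(rho)
    mu = a / ((1.0 - rho) * theta)               # long-run mean of X is mu*theta
    resid = x_next - (a + rho * x_prev)
    sigma2 = 2.0 * np.var(resid) / (theta * (1.0 - rho**2))
    return theta, mu, sigma2

# Usage on an exactly simulated path with known parameters:
rng = np.random.default_rng(1)
theta0, mu0, sigma0, dt, n = 2.0, 1.0, 0.5, 0.01, 200_000
rho0 = np.exp(-dt / theta0)
sd0 = np.sqrt(sigma0**2 * theta0 * (1.0 - rho0**2) / 2.0)
x = np.empty(n)
x[0] = mu0 * theta0                              # start at the long-run mean
eps = sd0 * rng.standard_normal(n - 1)
for k in range(n - 1):
    x[k + 1] = mu0 * theta0 * (1.0 - rho0) + rho0 * x[k] + eps[k]
theta_hat, mu_hat, sigma2_hat = fit_homogeneous_ou(x, dt)
```

With these estimates plugged in, the second step only has to fit the time-dependent functions of the treated group.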

    This research is partially supported by MIUR – PRIN 2017, Project "Stochastic Models for Complex Systems" and by INDAM-GNCS. The authors acknowledge the constructive criticism of anonymous reviewers on an earlier version of this paper.

    The authors declare no conflict of interest.



    © 2020 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)