Impact of a transient signaling input on postsynaptic spiking

We consider a specified presynaptic neuron that provides the transient signaling input, while the rest of the presynaptic neurons produce noisy background inputs to the postsynaptic neuron (Fig. 1a). The question is how a single spike of the signaling input affects the spike timing of the postsynaptic neuron (Fig. 1b). We use the LIF model for the postsynaptic neuron; its membrane potential, V, evolves as:

$$\begin{array}{@{}rcl@{}} \tau_{\mathrm{m}} \frac{d V}{dt} = - (V - V_{r}) + I(t), \end{array} $$ (1)

where τ_m is the membrane time constant and I(t) is the total input current. The neuron produces a spike when its voltage reaches the threshold, V_θ. The membrane voltage then resets to its resting value, V_r, which we assume to be zero without loss of generality.
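As a numerical companion to Eq. (1), the sketch below integrates the LIF dynamics with the Euler-Maruyama scheme and records the first threshold crossing. This is our own minimal illustration, not the simulation code used for the figures; parameter values are taken from the figure captions (τ_m = 20 ms, V_θ = 20 mV, D = 0.74 mV²·ms), and the background input \(\bar{I} + \xi(t)\) anticipates Eq. (2) below.

```python
import math
import random

def first_passage_time(tau_m=20.0, v_theta=20.0, i_bar=20.0, d=0.74,
                       dt=0.05, t_end=500.0, seed=0):
    """Euler-Maruyama integration of Eq. (1) with I(t) = i_bar + xi(t).

    The white noise obeys <xi(t) xi(t')> = 2 D delta(t - t'), so each time
    step receives an independent Gaussian increment of std sqrt(2 D dt).
    Returns the first threshold-crossing time in ms, or None if the
    trajectory has not spiked by t_end.
    """
    rng = random.Random(seed)
    v, t = 0.0, 0.0                     # V_r = 0; trajectories start at rest
    while t < t_end:
        dw = math.sqrt(2.0 * d * dt) * rng.gauss(0.0, 1.0)
        v += (-v + i_bar) * dt / tau_m + dw / tau_m
        t += dt
        if v >= v_theta:                # absorbing threshold
            return t
    return None
```

In the threshold regime (\(\bar{I} = V_{\theta}\)) used throughout this section, the first-passage times produced by this routine cluster around the peak time of roughly 93 ms derived later in the section.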

The effects of the specified signaling input and of the remaining excitatory/inhibitory presynaptic neurons enter through the input current I(t):

$$\begin{array}{@{}rcl@{}} I(t) = I_{0}(t) + {\Delta} I(t, t^{*}), \end{array} $$ (2)

where I_0(t) is the background input induced by the presynaptic neurons, and ΔI(t,t*) is the signaling input from the specified neuron; here t* denotes the arrival time of the signaling input. For a large number of presynaptic neurons, the uncorrelated background input is approximated as \(I_{0}(t) = \bar {I} + \xi (t)\), where \(\bar {I}\) is the mean input strength and ξ(t) is zero-mean Gaussian noise (i.e., ⟨ξ(t)⟩ = 0 and ⟨ξ(t)ξ(t′)⟩ = 2Dδ(t − t′), where δ(t) is the Dirac delta function).

When a spike from a presynaptic neuron arrives at the synaptic terminal, the postsynaptic current instantaneously increases according to the strength of the synapse and decays with a time constant τ_s. For τ_s ≪ τ_m, as measured for the fast currents generated by AMPA and GABA_A receptors (Destexhe et al. 1998), we can model the input current as (Stern et al. 1992):

$$\begin{array}{@{}rcl@{}} {\Delta} I(t, t^{*})=A \times \left\{\begin{array}{ll} 0 &{\qquad\qquad} t < t^{*},\\ 1/{\Delta} t & {\qquad\qquad}t^{*} \leq t \leq t^{*}+{\Delta} t,\\ 0 & {\qquad\qquad} t^{*}+{\Delta} t < t, \end{array}\right. \end{array} $$ (3)

where Δt ∼ τ_s. This represents a current that begins at t = t*, remains constant over a short time window, t ∈ [t*, t* + Δt], and vanishes afterward. It transmits a net charge of A regardless of Δt; it also converges to the Dirac delta function in the limit Δt → 0 (i.e., ΔI(t,t*) → A δ(t − t*)). In this limit, the process in Eq. (1) converges to a jump-diffusion process (Kou and Wang 2003).
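The charge-invariance property of the pulse in Eq. (3) is easy to verify numerically. The sketch below (the helper names are ours) integrates the pulse with a midpoint rule and recovers the net charge A for different widths Δt:

```python
def square_pulse(t, t_star, a, dt_pulse):
    """Square signaling current of Eq. (3): height A / Delta_t on [t*, t* + Delta_t]."""
    return a / dt_pulse if t_star <= t <= t_star + dt_pulse else 0.0

def net_charge(a, dt_pulse, t_star=100.0, n=200000):
    """Midpoint-rule time integral of the pulse; approaches A for any Delta_t."""
    t_end = t_star + 2.0 * dt_pulse
    h = t_end / n
    return sum(square_pulse((k + 0.5) * h, t_star, a, dt_pulse) * h
               for k in range(n))
```

Narrowing the pulse raises its height as 1/Δt but leaves the transmitted charge fixed, which is exactly the delta-function limit described above.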

As mentioned, a key element of this article is to predict when the postsynaptic neuron spikes if the signaling input arrives at t*. We take the last spike of the postsynaptic neuron as the origin of time, t = 0, and analytically predict when the first postsynaptic spike will occur (Fig. 1b). Because of the stochastic term in the background input, ξ(t), we cannot predict the exact time of the next spike, but we can describe its probability density, J(V_θ,t).

We consider an ensemble of many postsynaptic neurons (or many repetitions of the same experiment with a single postsynaptic neuron); the life of each neuron in this ensemble is described by a trajectory of V(t), which is governed by the LIF equation (see Fig. 1c, top-left). All trajectories begin from the same point (V = 0 at t = 0) but do not follow the same path, because different values are realized for the stochastic background noise, ξ(t). The neuron initiates a spike when a trajectory passes the threshold voltage, V_θ. Then J(V_θ,t), which we call the first-passage time density for short, is the probability density that a trajectory passes V_θ at time t. To obtain J(V_θ,t), however, we first need P(V,t), the probability density that a trajectory has the potential V at time t. This membrane potential probability density satisfies the Fokker-Planck (FP) equation (Risken 1984; Kardar 2007):

$$\begin{array}{@{}rcl@{}} \frac{\partial}{\partial t}P(V,t)&=&\frac{D}{{\tau_{\mathrm{m}}}^{2}}\frac{\partial^ 2}{\partial V^{2}}P(V,t)\\ &&+\frac{1}{\tau_{\mathrm{m}}}\frac{\partial}{\partial V}[(V-\bar{I}-{\Delta} I(t, t^{*}))\,P(V,t)]. \end{array} $$ (4)

Here, the temporal evolution of P(V,t) is governed by (i) a diffusion term, which is a signature of the stochastic input ξ(t), and (ii) a drift term, which represents both the leak and the non-stochastic currents (i.e., \(\bar {I}\) and ΔI(t,t*)).

The threshold nonlinearity of neuronal spike generation enters Eq. (4) as a boundary condition. The LIF neuron spikes when it passes the threshold voltage. Since each membrane trajectory ends at the threshold, there is no neuron with V > V_θ. In the continuum limit, this yields the absorbing boundary condition P(V ≥ V_θ,t) = 0 (Gerstner et al. 2014). We do not consider the reappearance of the trajectory from the resting potential after each spike, because we are interested in the neuron's first spike only (Fig. 1c, bottom). This leads to the absorbing boundary condition instead of the periodic boundary condition widely used to derive firing rates (Brunel and Hakim 1999; Brunel et al. 2001; Lindner and Schimansky-Geier 2001; Richardson and Swarbrick 2010).

Finally, we obtain P(V,t) for t > 0 by solving Eq. (4) under this boundary condition, once we specify an initial distribution of the membrane potential at t = 0. Here we use P(V,0) = δ(V), since all membrane trajectories start from V = 0.

Unfortunately, an analytical solution for P(V,t), and consequently for J(V_θ,t), is not attainable in general, even if we discard the signaling input from the equation. However, we can obtain an analytical solution in a particular regime, known as the threshold regime, which we describe in detail below. For a fixed noise strength D, the simple ratio \(\bar {I}/V_{\theta }\) determines how P(V,t) and the corresponding first-passage time density J(V_θ,t) behave. The neuron spikes regularly if \(\bar {I}\) significantly exceeds V_θ, because the high mean input robustly drives the neuron to its threshold. If, on the other hand, \(\bar {I} \ll V_{\theta }\), there are only occasional spikes, whenever the noise or the signaling input happens to be strong enough to overcome the gap between \(\bar {I}\) and V_θ. An interesting regime exists in between: for \(\bar {I} \simeq V_{\theta }\), a modest signaling input or ordinary noise can induce a spike of the postsynaptic neuron. This is the near-threshold regime. It has been observed empirically (Shadlen and Newsome 1998; Tan et al. 2014) and suggested as a basis of the high variability in neural networks (van Vreeswijk and Sompolinsky 1996, 1998). The quest for a closed-form analytical solution of Eq. (4) also leads us to the very same regime: a closed-form solution for P(V,t), and consequently for J(V_θ,t), exists if (i) \(\bar {I}=V_{\theta }\) and (ii) no signaling input is applied, ΔI(t,t*) = 0 (Wang and Uhlenbeck 1945; Sugiyama et al. 1970; Bulsara et al. 1996). Below, we describe the reason behind this peculiarity of the threshold regime and revisit the closed-form solution without the signaling input. We then extend this analytical solution to include the effect of the signaling input.

The first-passage time density in the absence of signaling input

We begin with the simpler condition in which the signaling input is turned off (ΔI(t,t*) = 0). Even in this case, solutions for P(V,t) are, in general, available only in non-closed form, such as inverse Laplace transforms (Siegert 1951; Ricciardi and Sato 1988; Ostojic 2011). However, a closed-form analytical solution exists for the particular case of the threshold regime, \(\bar {I}=V_{\theta }\) (Wang and Uhlenbeck 1945; Sugiyama et al. 1970; Tuckwell 1988).

A closed-form solution for the probability density would exist for arbitrary \(\bar {I}\) if we could neglect the absorbing boundary condition. Suppose we drop the absorbing boundary condition and let the membrane potential have the definite value V = V_0 at time t = t_0. Then Eq. (4) admits a closed-form analytical solution for t > t_0: the free Green's function, which reads (Uhlenbeck and Ornstein 1930):

$$\begin{array}{@{}rcl@{}} G_{f}(V,t;\,V_{0},t_{0}) &=& \sqrt{\frac{\tau_{\mathrm{m}}}{2 \pi D} \frac{1}{1-r^{2}(t-t_{0})}}\\ &&\times \exp\left[-\frac{\tau_{\mathrm{m}}}{2D} \frac{(V-\bar{I}-(V_{0}-\bar{I})\,r(t-t_{0}))^{2}}{1-r^{2}(t-t_{0})}\right]. \end{array} $$ (5)

Here V_0 and t_0 denote the initial condition, and r(t) = exp[−t/τ_m]. The free Green's function describes the probability density of membrane trajectories that all begin from the point O_1 = (t_0, V_0) but follow different paths due to the noise (see Fig. 1c, right). Since we have neglected the threshold for the moment, trajectories do not end when they pass the threshold line, V = V_θ. We are therefore free to choose any initiating point for the trajectories, even above the threshold line. Let it be \(\textbf {O}_{2}=(t_{0},2\bar {I}-V_{0})\), the mirror image of O_1 with respect to the \(V=\bar {I}\) line (Fig. 1c, right). The probability density for this initiating point is \(G_{f}(V,t;\,2\bar {I}-V_{0},t_{0})\). Crucially, the two Green's functions take equal values on the mirror line: \(G_{f}(V=\bar {I},t;\,V_{0},t_{0})=G_{f}(V=\bar {I},t;\,2\bar {I}-V_{0},t_{0})\). We therefore define the main Green's function as:

$$\begin{array}{@{}rcl@{}} G(V,t;\,V_{0},t_{0}) = G_{f}(V,t;\,V_{0},t_{0})-G_{f}(V,t;\,2\bar{I}-V_{0},t_{0}). \end{array} $$ (6)

This linear combination of free Green's functions satisfies the linear Fokker-Planck equation, Eq. (4). It also vanishes on the \(V=\bar {I}\) line. If we now choose \(\bar {I}\) equal to V_θ, then G(V,t; V_0,t_0) also satisfies the absorbing boundary condition. Thus the analytical free Green's functions yield an analytical solution under the absorbing boundary condition only in the threshold regime.
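The two properties just used, that the free Green's functions of Eq. (5) agree on the \(V=\bar{I}\) line and that their difference, Eq. (6), therefore vanishes there, can be checked numerically. Below is a minimal sketch with the caption parameters; the function names are ours:

```python
import math

TAU_M, D, I_BAR = 20.0, 0.74, 20.0      # threshold regime: I_BAR = V_theta

def g_free(v, t, v0, t0):
    """Free Ornstein-Uhlenbeck Green's function, Eq. (5)."""
    r = math.exp(-(t - t0) / TAU_M)
    var = (D / TAU_M) * (1.0 - r * r)    # variance of the free process
    mean = I_BAR + (v0 - I_BAR) * r      # mean relaxes toward I_BAR
    return math.exp(-(v - mean) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def g_image(v, t, v0, t0):
    """Image-method Green's function, Eq. (6): the free solution minus its
    mirror image about the V = I_BAR line."""
    return g_free(v, t, v0, t0) - g_free(v, t, 2.0 * I_BAR - v0, t0)
```

On the mirror line the two free solutions coincide, so g_image vanishes there identically; below the threshold it stays positive, as a probability density must.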

For our main problem, in which the postsynaptic neuron has the definite voltage V = 0 at t = 0, the probability density of the membrane potential is simply:

$$\begin{array}{@{}rcl@{}} P_{0}(V,t)=G(V,t;\,0,0). \end{array} $$ (7)

The probability of spiking between t and t + dt is proportional to the number of voltage trajectories that pass the threshold in [t, t + dt] (see Fig. 1c, left). This equals \(J_{0}(V,t)|_{V=V_{\theta }}dt\), where J_0(V,t) is the current density:

$$\begin{array}{@{}rcl@{}} J_{0}(V,t)=-\frac{D}{{\tau_{\mathrm{m}}}^{2}} \frac{\partial}{\partial V} P_{0}(V,t)-\frac{V-\bar{I}}{\tau_{\mathrm{m}}}P_{0}(V,t). \end{array} $$ (8)

For V = V_θ, Eq. (8) yields the first-passage time density; it simplifies to (Wang and Uhlenbeck 1945; Sugiyama et al. 1970; Tuckwell 1988; Bulsara et al. 1996):

$$\begin{array}{@{}rcl@{}} J_{0}(V_{\theta},t)&=&\frac{1}{\tau_{\mathrm{m}}}\sqrt{\frac{2}{\pi}\frac{\tau_{\mathrm{m}} V_{\theta}^{2}}{ D }\frac{r^{2}(t)}{ (1-r^{2}(t))^{3}}} \\ &&\times \exp{\left[-\frac{\tau_{\mathrm{m}} V_{\theta}^{2}}{2 D}\frac{ r^{2}(t)}{1-r^{2}(t)}\right]}, \end{array} $$ (9)

where r(t) = exp[−t/τ_m]. Apart from the 1/τ_m pre-factor in Eq. (9), which carries the inverse-time dimensionality, the overall shape of the function is characterized by the dimensionless ratio D/(τ_m V_θ²). This ratio quantifies the strength of the background input noise relative to the other competing factors, and other relevant quantities are characterized by it as well. For example, the maximum of the first-passage time density, J_0(V_θ,t), occurs at \(t_{\max }=\tau _{\mathrm {m}}\,{\textbf {h}}(\,D/(\tau _{\mathrm {m}} V_{\theta }^{2})\,)\), where:

$$\begin{array}{@{}rcl@{}} {\textbf{h}} (x)=\frac{1}{2}\ln\left( \frac{1}{2x}\left( 1-x+\sqrt{9x^{2}-2x + 1}\right)\right). \end{array} $$ (10)

For weak enough noise (i.e., \(x=D/(\tau _{\mathrm {m}} V_{\theta }^{2}) \ll 1\)), this function simplifies to \(0.5\times \ln (\tau _{\mathrm {m}} V_{\theta }^{2}/D)\). Finally, it is important to extend this formalism to the more plausible sub/supra-threshold cases. In Appendix I, we show how a scaling approach makes this possible.
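As a numerical cross-check of Eqs. (9) and (10), the sketch below evaluates the closed-form density and its predicted peak location with the caption parameters (τ_m = 20 ms, V_θ = 20 mV, D = 0.74 mV²·ms); the function names are ours:

```python
import math

TAU_M, V_TH, D = 20.0, 20.0, 0.74

def j0(t):
    """Closed-form first-passage time density at the threshold regime, Eq. (9)."""
    r = math.exp(-t / TAU_M)
    x = D / (TAU_M * V_TH ** 2)          # scaled diffusion coefficient
    pref = math.sqrt((2.0 / math.pi) * r * r / (x * (1.0 - r * r) ** 3)) / TAU_M
    return pref * math.exp(-r * r / (2.0 * x * (1.0 - r * r)))

def t_peak():
    """Peak location t_max = tau_m * h(x), with h(x) from Eq. (10)."""
    x = D / (TAU_M * V_TH ** 2)
    h = 0.5 * math.log((1.0 - x + math.sqrt(9.0 * x * x - 2.0 * x + 1.0))
                       / (2.0 * x))
    return TAU_M * h
```

With these parameters t_peak() evaluates to roughly 93 ms, consistent with the weak-noise approximation \(0.5\,\tau_{\mathrm{m}}\ln(\tau_{\mathrm{m}} V_{\theta}^{2}/D)\); a grid search over j0 lands on the same point, and the density integrates to one, since every trajectory is eventually absorbed at the threshold.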

The transient signaling input modifies the probability density and first-passage time density

In this subsection, we extend the above analysis to the case in which the signaling input is additionally applied to the neuron. The presence of the signaling input modifies P(V,t) and consequently J(V_θ,t). To obtain a clear causal picture, we rewrite Eq. (4) as:

$$\begin{array}{@{}rcl@{}} \left( \frac{\partial}{\partial t}-\frac{D}{{\tau_{\mathrm{m}}}^{2}}\frac{\partial^ 2}{\partial V^{2}} -\frac{1}{\tau_{\mathrm{m}}} -\frac{V-\bar{I}}{\tau_{\mathrm{m}}}\frac{\partial}{\partial V} \right) P(V,t)\\=-\frac{\Delta I(t, t^{*})}{\tau_{\mathrm{m}}}\frac{\partial}{\partial V}P(V,t). \end{array} $$ (11)

We assume that ΔI(t,t*) corrects the threshold-regime solution P_0(V,t) to P(V,t) = P_0(V,t) + ΔP(V,t); here P_0(V,t) is the analytical solution for the membrane potential density in the absence of the signaling input, and ΔP(V,t) is the correction due to the signaling input. ΔP(V,t) would be zero if the signaling input did not exist, A = 0 (see Eq. (3)). For an arbitrary signaling input, ΔP(V,t) is a function of A, the signaling input's strength. This lets us write a Taylor series for ΔP(V,t):

$$\begin{array}{@{}rcl@{}} {\Delta} P(V,t)={\Sigma}_{n = 1}^{n=\infty} \delta P_{n}(V,t), \end{array} $$ (12)

where δP_n(V,t) ∝ A^n. Since ΔP(V,t) vanishes for A = 0, the constant term, δP_{n=0}(V,t) ∝ A^0, is not included in the series. We plug \(P(V,t)=P_{0}(V,t)+ {\Sigma }_{n = 1}^{n=\infty } \delta P_{n}(V,t)\) into Eq. (11); for each n, we collect the terms proportional to A^n on both sides of the equality as a separate equation. For n = 1 the equation reads:

$$\begin{array}{@{}rcl@{}} \left( \frac{\partial}{\partial t}-\frac{D}{{\tau_{\mathrm{m}}}^{2}}\frac{\partial^ 2}{\partial V^{2}} -\frac{1}{\tau_{\mathrm{m}}} -\frac{V-\bar{I}}{\tau_{\mathrm{m}}}\frac{\partial}{\partial V} \right)\delta P_{1}(V,t)\\ =-\frac{\Delta I(t, t^{*})}{\tau_{\mathrm{m}}}\frac{\partial}{\partial V}P_{0}(V,t), \end{array} $$ (13)

This is the first-order perturbation equation, as both of its sides are proportional to A^1. To address its boundary conditions, we note that ΔP(V,t) is zero before the arrival of the signaling input (i.e., t < t*); consequently, δP_1(V,t < t*) = 0. Moreover, the absorbing boundary condition at V = V_θ gives ΔP(V_θ,t) = 0, from which we conclude that δP_1(V_θ,t) = 0. These conditions let us use the aforementioned Green's function and write δP_1(V,t) as a Green's integral over the source term, the right-hand side of Eq. (13):

$$\begin{array}{@{}rcl@{}} \delta P_{1}(V,t)&=& {\int}_{t_{0}= 0}^{t}dt_{0} {\int}_{V_{0}=-\infty}^{V_{\theta}} d V_{0}\,G(V,t;V_{0},t_{0} )\\ && \times \left[ -\frac{\Delta I(t_{0}, t^{*})}{\tau_{\mathrm{m}}}\frac{\partial}{\partial V_{0}} P_{0} (V_{0} ,t_{0} )\right]. \end{array} $$ (14)

Equation (14) can be simplified further if we note that ΔI(t_0,t*) is zero for t_0 < t* or t_0 > t* + Δt; thus the t_0 in G(V,t;V_0,t_0) and (∂/∂V_0)P_0(V_0,t_0) always belongs to [t*, t* + Δt], a short time interval. As Δt ≪ τ_m, the two functions are almost constant during this interval, and we approximate them by G(V,t;V_0,t*) and (∂/∂V_0)P_0(V_0,t*), respectively. This simplifies Eq. (14) to:

$$\begin{array}{@{}rcl@{}} \delta P_{1}(V,t)& = & -{\int}_{t_{0}= 0}^{t} \frac{\Delta I(t_{0}, t^{*})}{\tau_{\mathrm{m}}}\, dt_{0} \\ &&\times {\int}_{V_{0}=-\infty}^{V_{\theta}}G(V,t;V_{0},t^{*}) \,\frac{\partial}{\partial V_{0}}P_{0}(V_{0},t^{*})\,d V_{0}. \end{array} $$ (15)

Taking the time integral in Eq. (15), we have:

$$\begin{array}{@{}rcl@{}} \delta P_{1}(V,t) & = & f_{1}(t,t^{*}) \left( \frac{A}{\tau_{\mathrm{m}}}\right) \\ &&\times {\int}_{V_{0}=-\infty}^{V_{\theta}}G(V,t;V_{0},t^{*})\,\frac{\partial}{\partial V_{0}}P_{0}(V_{0},t^{*})\,d V_{0}, \end{array} $$ (16)

where

$$\begin{array}{@{}rcl@{}} f_{1}(t,t^{*})=-\left\{\begin{array}{l} 0 {\qquad\qquad\qquad\quad}\,\, t < t^{*},\\ (t-t^{*})/{\Delta} t {\qquad\quad} t^{*} \leq t \leq t^{*}+{\Delta} t,\\ 1 {\qquad\qquad\qquad\quad}\,\, t^{*}+{\Delta} t < t. \end{array}\right. \end{array} $$ (17)

For n ≥ 2, the nth-order perturbation equation is obtained from Eq. (13) by replacing δP_1(V,t) and P_0(V,t) with δP_n(V,t) and δP_{n−1}(V,t), respectively. A recursive calculation then yields:

$$\begin{array}{@{}rcl@{}} \delta P_{n}(V,t) & = & f_{n}(t,t^{*}) \left( \frac{A}{\tau_{\mathrm{m}}}\right)^{n}\\ &&\times {\int}_{V_{0}=-\infty}^{V_{\theta}}G(V,t;V_{0},t^{*}) \,\frac{\partial^{n}}{\partial {V_{0}^{n}}}P_{0}(V_{0},t^{*})\,d V_{0}, \end{array} $$ (18)

where

$$\begin{array}{@{}rcl@{}} f_{n}(t,t^{*})=\frac{(-1)^{n}}{n!}\left\{\begin{array}{l} 0 {\qquad\qquad\qquad\qquad} t < t^{*},\\ {[(t-t^{*})/{\Delta} t]}^{n} {\quad\quad}\,\,\, t^{*} \leq t \leq t^{*}+{\Delta} t,\\ 1 {\qquad\qquad\qquad\qquad} t^{*}+{\Delta} t < t. \end{array}\right. \end{array} $$ (19)

This helps us to calculate the series in Eq. (12) and obtain ΔP(V,t). For t > t∗ + Δt, for example, it reads:

$$\begin{array}{@{}rcl@{}} {\Delta} P(V,t) &=& {\int}_{V_{0}=-\infty}^{V_{\theta}} G(V,t;V_{0},t^{*})\\ &&\times\left[ {\Sigma}_{n = 1}^{n=\infty} \frac{1}{n!} \left( \frac{-A}{\tau_{\mathrm{m}}}\right)^{n} \frac{\partial^{n}}{\partial {V_{0}^{n}}}P_{0}(V_{0} ,t^{*})\right]\,d V_{0} \end{array} $$ (20)

The summation in the bracket is the Taylor expansion of P_0(V_0 − A/τ_m, t*) − P_0(V_0, t*); thus, for t > t* + Δt:

$$\begin{array}{@{}rcl@{}} {\Delta} P(V,t) &=& {\int}_{V_{0}=-\infty}^{V_{\theta}} G(V,t;V_{0},t^{*})\\ &&\times \left[ P_{0}\left( V_{0}-\frac{A}{\tau_{\mathrm{m}}}\,, t^{*}\right)-P_{0}(V_{0},t^{*})\right ]\,d V_{0}. \end{array} $$ (21)

For t* ≤ t ≤ t* + Δt, similar reasoning yields:

$$\begin{array}{@{}rcl@{}} {\Delta} P(V,t) & = & {\int}_{V_{0}=-\infty}^{V_{\theta}} G(V,t;V_{0},t^{*})\\ &&\times \left[ P_{0}\left( V_{0}-\frac{A}{\tau_{\mathrm{m}}}\times\frac{t-t^{*}}{\Delta t}\,, t^{*}\right)-P_{0}\left(V_{0},t^{*}\right)\right]d V_{0}. \end{array} $$ (22)

Evidently, for t < t*, δP_n(V,t) = 0. We also use the combination rule for propagators:

$$\begin{array}{@{}rcl@{}} P_{0}(V,t) = {\int}_{V_{0}=-\infty}^{V_{\theta}} G(V,t;V_{0},t_{0}) P_{0}(V_{0}, t_{0}) d V_{0}. \end{array} $$ (23)

The final result, P(V,t) = P_0(V,t) + ΔP(V,t), simplifies to:

$$\begin{array}{@{}rcl@{}} P(V, t)=\left\{\begin{array}{ll} P_{0}(V, t), & t < t^{*},\\ \\ {\int}_{V_{0}=-\infty}^{V_{\theta}} G(V,t;V_{0},t^{*})\, P_{0}\left( V_{0}- {\frac{A}{\tau_{\mathrm{m}}}\times\frac{t-t^{*}}{\Delta t}}\,, t^{*}\right) d V_{0}, & t^{*} \leq t \leq t^{*}+{\Delta} t,\\ \\ {\int}_{V_{0}=-\infty}^{V_{\theta}} G(V,t;V_{0},t^{*})\, P_{0}\left( V_{0}- {\frac{A}{\tau_{\mathrm{m}}}}\,, t^{*}\right) d V_{0}, & t^{*}+{\Delta} t < t. \end{array}\right. \end{array} $$ (24)

Finally, we use P(V,t) to obtain the corrected first-passage time density in the presence of the signaling input:

$$\begin{array}{@{}rcl@{}} J(V_{\theta},t)=-\frac{D}{{\tau_{\mathrm{m}}}^{2}} \frac{\partial}{\partial V} P(V,t)|_{V=V_{\theta}}. \end{array} $$ (25)
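To make Eqs. (24) and (25) concrete, the sketch below evaluates the branch t > t* + Δt by direct quadrature and differentiates the result at V_θ. The discretization choices (integration range, grid size, finite-difference step) and function names are ours; parameters follow the captions:

```python
import math

TAU_M, V_TH, D = 20.0, 20.0, 0.74
I_BAR = V_TH                             # threshold regime

def g_free(v, t, v0, t0):
    """Free Green's function, Eq. (5)."""
    r = math.exp(-(t - t0) / TAU_M)
    var = (D / TAU_M) * (1.0 - r * r)
    mean = I_BAR + (v0 - I_BAR) * r
    return math.exp(-(v - mean) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def g(v, t, v0, t0):
    """Image-method Green's function, Eq. (6)."""
    return g_free(v, t, v0, t0) - g_free(v, t, 2.0 * I_BAR - v0, t0)

def p0(v, t):
    """No-signal membrane potential density, Eq. (7)."""
    return g(v, t, 0.0, 0.0)

def p_signal(v, t, t_star, a, lo=-20.0, n=2000):
    """Eq. (24), branch t > t* + Delta_t: propagate, from the arrival
    time t*, the density shifted by the voltage kick A / tau_m."""
    h = (V_TH - lo) / n
    total = 0.0
    for k in range(n):
        v0 = lo + (k + 0.5) * h
        total += g(v, t, v0, t_star) * p0(v0 - a / TAU_M, t_star) * h
    return total

def j_signal(t, t_star, a, eps=1e-4):
    """Corrected first-passage time density, Eq. (25), via a one-sided
    finite difference at V = V_theta (where P itself vanishes)."""
    return -(D / TAU_M ** 2) * (p_signal(V_TH, t, t_star, a)
                                - p_signal(V_TH - eps, t, t_star, a)) / eps
```

Setting A = 0 collapses Eq. (24) onto the combination rule, so the routine must reproduce both P_0 and the closed-form J_0 of Eq. (9); an excitatory kick, in turn, can only deplete the probability mass that survives below threshold.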

While our formalism is applicable to arbitrary values of the transient signaling input, here we focus on the effect of a strong excitatory/inhibitory signaling input on the postsynaptic neuron's response. Figure 1c (top-left) shows how voltage trajectories rise almost uniformly during the arrival of an excitatory signaling input (i.e., t ∈ [t*, t* + Δt]). The short duration of signal arrival (Δt ≪ τ_m) guarantees this uniform increase and results in an overall rise of A/τ_m. Consequently, if the membrane potential of a particular trajectory is larger than V_θ − A/τ_m upon signal arrival, it passes the threshold during signal arrival: the neuron fires. If, on the other hand, it is smaller than V_θ − A/τ_m, the additional signaling input does not push the membrane potential across the threshold: the neuron does not fire. This simple picture helps us understand the results shown in Fig. 2.

Fig. 1 The framework to study the impact of a specified presynaptic neuron on postsynaptic spiking activity. a The effect of the specified presynaptic neuron is separated from the noisy background input, which is produced by the rest of the presynaptic neurons. b A schematic view of the signaling input activity and its effect on the membrane potential and spiking of the postsynaptic neuron. c Postsynaptic membrane potential versus time. Top-left: different trajectories arise due to the noise. Any trajectory that has not reached the threshold before signal arrival at t* shows a sharp increase during the t* to t* + Δt window. Top-right: for the particular case of \(\bar {I}=V_{\theta }\), we can use the image method. The membrane potential trajectory and its mirror image coincide on the \(V=\bar {I}\) line, which fulfills the thresholding criterion when this line coincides with V = V_θ. Bottom: the membrane potential trajectory of the postsynaptic neuron. When the voltage reaches V_θ, the neuron fires and the voltage resets to 0. Here we consider the first spike after t = 0 in particular

Fig. 2 First-passage time density when the neuron receives the signaling input and background noise at the threshold regime. The black vertical arrows show when the signaling input arrives; it changes the initial ISI distribution (dashed black curve) to the modified distribution (thick red curve). Even when the signaling input is remarkably strong, the analytic modified distribution matches the simulation results (empty green circles) well. a Left: an early excitatory signaling input at t* = 50 ms causes a leftward temporal shift. Right: a drastic change occurs if the signaling input arrives at t* = 100 ms, close enough to the peak of the initial spiking density. b Left: an inhibitory signaling input imposes a mere temporal shift, a delay, if it occurs too early, t* = 50 ms. Right: if the inhibitory signaling input occurs close enough to the peak, \(t=t_{\max }\), the spiking density almost divides into two distinct regions. c The first-passage time density as a function of the presynaptic spike timing, t*, for small excitatory signaling inputs, A = 0.2 mV·ms. The values of the parameters are chosen from the physiologically plausible range (McCormick et al. 1985). Here, we use a membrane time constant of τ_m = 20 ms, A = 10 mV·ms, a threshold voltage of V_θ = 20 mV, a diffusion coefficient of D = 0.74 mV²·ms, and a simulation time step of Δt = 0.05 ms

Figure 2a and b show that excitatory and inhibitory signaling inputs can result in quite different spiking behaviors, depending on their arrival time. The dashed black curve in all panels of Fig. 2a, b shows the first-passage time density in the absence of the signaling input, J_0(V_θ,t). In the left panels, the signaling input arrives at t* = 50 ms; it has A = 10 mV·ms, which increases or decreases the membrane potential by A/τ_m = 0.5 mV. In this case, the signaling input shifts the original density leftward for excitation or rightward for inhibition (thick red curves). In contrast, the right panels show the spiking density when the signaling input occurs at t* = 100 ms. This is close to \(t_{\max }= 0.5\times \tau _{\mathrm {m}}\,\ln (\,\tau _{\mathrm {m}} V_{\theta }^{2}/D\,)\simeq 93~\text {ms}\), at which J_0(V_θ,t) is maximized. Therefore, when the signal arrives, many trajectories are already close to the threshold potential (i.e., above V_θ − A/τ_m). All such trajectories spike, due to the aforementioned excitatory rise of A/τ_m; this results in a sudden sharp increase of the first-passage time density at t = t* (see Fig. 2a, right, and its inset). In contrast, an inhibitory input prevents all trajectories from reaching the threshold, which makes a sharp depletion in the first-passage time density (Fig. 2b, right). The effect of the inhibitory input fades away after a while and the trajectories again approach the threshold; this leads to the second rise of the first-passage time density (Fig. 2b, right). These theoretical predictions are confirmed by numerical simulation of the LIF model (green dots). It is also instructive to consider what happens if the signaling input arrives much later than t_max. In that case, most of the trajectories have already reached the threshold voltage, and only a tiny portion of them remains; consequently, the signaling input does not induce much change in J(V_θ,t). In other words, if the signaling input comes too late, the postsynaptic neuron has most likely already fired, and the signal cannot change its first spike time any more (Fig. 2c). This figure also demonstrates that the first-passage time density undergoes the maximum change when the excitatory input arrives around t_max, the peak time of the no-signaling-input case.

It is also important to note that there is a critical excitatory signal strength above which the postsynaptic neuron fires regardless of the signal arrival time. For a signal with A ≥ τ_m V_θ, the aforementioned rise, A/τ_m, reaches V_θ, which results in the spiking of almost all trajectories irrespective of signal arrival time. This introduces another dimensionless ratio, A/(τ_m V_θ), which quantifies the strength of the signaling input.

Figure 3 shows how the first-passage time density changes with the diffusion coefficient. When the scaled diffusion coefficient increases, the first-passage time density and t_max shift to the left, so signaling inputs that arrive earlier than t_max, at 50 ms, can modulate the density (Fig. 3a, left). When the signaling input arrives at t* = 100 ms, close to t_max, the first-passage time density is modulated in the low-diffusion regimes (Fig. 3a, right). In the case of an inhibitory presynaptic spike (Fig. 3b), for a high scaled diffusion coefficient, the first-passage time density drops at the time of the presynaptic spike but, because of the high diffusion, recovers quickly. As the scaled diffusion decreases, the recovery from inhibition takes longer, which widens the gap between the two separated parts of the distribution.

Fig. 3 First-passage time density as a function of the scaled diffusion coefficient, D/(τ_m V_θ²). a Excitatory signaling input: (Left) the signaling input occurs at t = 50 ms (black arrow). Variation of the scaled diffusion modifies the overall picture of the ISI distribution. There is an apparent jump in the spiking density when D/(τ_m V_θ²) ≃ 0.25 × 10⁻². This is because D/(τ_m V_θ²) controls \(t_{\max }\), the maximum of the ISI distribution in the absence of the signaling input, and \(t_{\max } \simeq 50\) ms for D/(τ_m V_θ²) ≃ 0.25 × 10⁻². For higher/lower values of the scaled diffusion coefficient, however, we see only minor modification, as \(t_{\max }\) is then much smaller/larger than the signaling input arrival time. (Right) The signaling input occurs at t = 100 ms; we chose a much weaker diffusion coefficient, so that \(t_{\max }\) is comparable to this signaling input arrival time. The modification is drastically larger than in the left panel, which shows that the influence of the signaling input is amplified for weaker diffusion/noise strength. b The same as in (a) but with an inhibitory signaling input. Reducing the scaled diffusion coefficient, from left to right, drastically amplifies the modification imposed by the signaling input. The values of the intrinsic parameters are chosen from the physiologically plausible range, V_θ = 20 mV and τ_m = 20 ms (McCormick et al. 1985). The amplitude of the signaling input is fixed at A = 1 mV·ms

The first-passage time density also depends on the scaled amplitude of the signaling input (Fig. 4). For an excitatory input arriving at a time t* much earlier than t_max, the density moves to the left as the amplitude increases (Fig. 4a, left), until the amplitude is large enough to make all trajectories spike at the time of signal arrival (Fig. 4a, middle). When an inhibitory signaling input arrives near t_max, the density breaks into two parts (Fig. 4b, right). As the amplitude increases, the spiking density is zero not only at the time of presynaptic spiking but also for a duration after that. This duration depends nonlinearly on the strength of the signaling input (Fig. 4b, right).

Fig. 4 First-passage time density as a function of the scaled signaling input amplitude, A/(τ_m V_θ). a Excitatory signaling input: when the amplitude of the signaling input increases, the first-passage time density gradually shifts to the left (a, left); at a sufficiently strong amplitude, the density collapses to a peak, because the signaling input is so strong that it makes all the trajectories fire at the time of signal arrival (a, middle). The signaling input arrives at t* = 50 ms (left and middle) and at t* = 100 ms (right). b Inhibitory signaling input: the inhibitory signal arrives at t* = 50 ms (left and middle) and t* = 100 ms (right). When the input arrival time is much less than \(t_{\max }= 90~\text {ms}\), the effect of increasing the amplitude of the inhibitory input is just to push the density to the right (b, left and middle). For cases where t* is comparable to \(t_{\max }\), the first-passage time density breaks at the time of input arrival (b, right). As the amplitude increases, the recovery of the distribution from that break takes more time, and the distance between the separated parts of the distribution increases nonlinearly. The signaling input arrival is shown by black arrows, and the diffusion coefficient is fixed at D = 1 mV²·ms. The values of the intrinsic parameters (V_θ = 20 mV and τ_m = 20 ms) are chosen from the physiologically plausible range (McCormick et al. 1985)

Arbitrary shapes of the transient signaling input

One merit of this formalism is its flexibility to address other shapes of the transient signaling input. We solve the first-passage time problem for an exponentially decaying input, which is more physiologically plausible (i.e., ΔI(t,t*) ∝ exp(−(t − t*)/τ_s); see Appendix II for the derivation and results). Figure 5a and b show how an exponential transient signaling input modifies the first-passage time density.

Fig. 5 First-passage time density when the neuron receives an exponentially decaying signaling input and background noise at the threshold regime. The black vertical arrows show when the signaling input arrives; it changes the initial ISI distribution (dashed black curve) to the modified distribution (thick red curve). Even when the signaling input is remarkably strong, the analytic modified distribution matches the simulation results (empty green circles) well. a Left: an early excitatory signaling input at t* = 50 ms causes a leftward temporal shift. The result is very similar to the case of the square input (Fig. 2a, left). Right: a drastic change occurs if the signaling input arrives at t* = 100 ms, close enough to the peak of the initial spiking density. The change in density is smoother compared with Fig. 2a, right. The inset shows the shape of the signaling input. b Left: an inhibitory signaling input imposes a mere temporal shift, a delay, if it occurs too early, t* = 50 ms. Right: if the inhibitory signaling input occurs close enough to the peak, \(t=t_{\max }\), the spiking density almost divides into two distinct regions. The inset shows the shape of the signaling input. c The first-passage time density for the cases in which the signaling input is a gamma function with parameter γ = 1 (top) and γ = 0.25 (bottom). The values of the parameters are chosen from the physiologically plausible range (McCormick et al. 1985). Here, we use a membrane time constant of τ_m = 20 ms, τ_s = 2 ms, A = 10 mV·ms, a threshold voltage of V_θ = 20 mV, a diffusion coefficient of D = 0.74 mV²·ms, and a simulation time step of Δt = 0.05 ms

For early signal arrival, the modification due to an excitatory (inhibitory) input is a leftward (rightward) shift, very similar to the changes we found for the square input (Fig. 2a, b). For an excitatory input at t∗ = 100 ms (Fig. 5a right) there is a jump upon signal arrival; compared with the square excitatory input (Fig. 2a right) the jump is less sharp but wider. Similarly, the inhibitory exponential input at t∗ = 100 ms induces a fall (Fig. 5b right), similar to but less steep than the fall observed for the square inhibitory input (Fig. 2b right). Numerical simulations verify these analytic results (green circles in Fig. 5a, b).
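The kind of simulation behind such green circles can be sketched with a simple Euler scheme. The following is a minimal sketch, not the authors' code: the noise convention (the white noise ξ enters Eq. (1) divided by τ m) and the threshold-regime choice \(\bar{I}=V_{\theta}\) are our assumptions, while τ m, V 𝜃, D, and Δt follow the values quoted in the caption of Fig. 5; here a square pulse (Eq. (3)) is used for simplicity.

```python
import numpy as np

def lif_first_passage(t_star=50.0, area=0.0, n_trials=2000, dt=0.05,
                      t_max=400.0, tau_m=20.0, v_theta=20.0, i_bar=20.0,
                      d_coef=0.74, delta_t=1.0, seed=0):
    """Monte Carlo first-passage times (ms) of the LIF neuron, Eq. (1),
    receiving the square pulse of Eq. (3) with total area `area` at t_star.
    Assumed noise convention: tau_m dV = (-(V - V_r) + I) dt + sqrt(2 D dt) N;
    i_bar = v_theta puts the neuron in the threshold regime."""
    rng = np.random.default_rng(seed)
    v = np.zeros(n_trials)                 # all trajectories start at V_r = 0
    fpt = np.full(n_trials, np.nan)        # first-passage time per trial
    alive = np.ones(n_trials, dtype=bool)  # trials that have not yet fired
    for k in range(int(t_max / dt)):
        t = k * dt
        pulse = area / delta_t if t_star <= t <= t_star + delta_t else 0.0
        drift = (-v[alive] + i_bar + pulse) * dt / tau_m
        kick = np.sqrt(2.0 * d_coef * dt) / tau_m * rng.standard_normal(alive.sum())
        v[alive] = v[alive] + drift + kick
        crossed = alive & (v >= v_theta)   # threshold crossing = first spike
        fpt[crossed] = t + dt
        alive &= ~crossed
    return fpt[np.isfinite(fpt)]
```

Histogramming the returned times with and without the pulse qualitatively reproduces the shift and jump discussed above.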

The analytical solutions for the square and exponential inputs suggest a general formula for the probability density in the presence of an arbitrary input current, ΔI(t,t∗), arriving at t∗. If the duration of the signal is much shorter than the membrane time constant (i.e. τ s ≪ τ m ), we conjecture the probability density to be:

$$\begin{array}{@{}rcl@{}} P(V,t)\,=\,{\int}_{V_{0}=-\infty}^{V_{\theta}} \!G(V,t,V_{0},t^{*})\,P_{0}\left( V_{0}\,-\,\!{{\int}_{0}^{t}}\frac{\Delta I(s,t^{*})}{\tau_{\mathrm{m}}} ds ,\, t^{*}\right)\, \!d V_{0}.\\ \end{array} $$ (26)

This conjecture covers our results for both the square input (i.e. Eq. (24)) and the exponential decay (i.e. Eq. (54) in Appendix II). We also test it by comparing the analytical first-passage time density (obtained using Eq. (26)) with simulation results (Fig. 5c) for the gamma-function input current:

$$\begin{array}{@{}rcl@{}} {\Delta} I(t,t^{*})=A\times\left\{\begin{array}{ll} 0 & t < t^{*},\\ \frac{1}{\Gamma(1+\gamma)}\left( \frac{t-t^{*}}{\tau_{\mathrm{s}}}\right)^{\gamma} \frac{1}{\tau_{\mathrm{s}}}\,\exp\left( -\frac{t-t^{*}}{\tau_{\mathrm{s}}}\right) & t^{*} \leq t; \end{array}\right. \end{array} $$ (27)

where Γ(n) is Euler's Gamma function (Davis 1959). The signaling inputs are shown in the insets of Fig. 5c, top (bottom) for γ = 1 (γ = 0.25). The inputs arrive at t∗ = 100 ms and have a short duration, τ s = 2 ms ≪ τ m = 20 ms; we find good agreement between simulation and analytical results. The method can also be extended to the case where more than one presynaptic spike arrives; the derivation for multiple presynaptic spikes is given in Appendix III.
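For reference, Eq. (27) can be evaluated directly. A small sketch (parameter defaults taken from the Fig. 5 caption) that also checks the normalization: the prefactor 1/Γ(1+γ) makes the time integral of the pulse equal A for every γ.

```python
import numpy as np
from math import gamma as gamma_fn

def delta_i_gamma(t, t_star, area=10.0, tau_s=2.0, gam=1.0):
    """Gamma-shaped signaling input, Eq. (27); defaults follow Fig. 5
    (A = 10 mV ms, tau_s = 2 ms). The 1/Gamma(1+gamma) prefactor makes
    the time integral of the pulse equal `area` for every gamma."""
    t = np.asarray(t, dtype=float)
    s = np.clip((t - t_star) / tau_s, 0.0, None)   # zero before arrival
    return np.where(t < t_star, 0.0,
                    area / gamma_fn(1.0 + gam) * s**gam * np.exp(-s) / tau_s)
```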

It is worthwhile to use Eq. (26) to obtain a general solution for the first-passage time density due to a transient signaling input of arbitrary shape. Using Eq. (8), the first-passage time density reads:

$$\begin{array}{@{}rcl@{}} J(V_{\theta},t)& = & -\frac{D}{\tau_{\mathrm{m}}^{2}}{\int}_{V_{0}=-\infty}^{V_{\theta}} \tilde{G}(V_{\theta},t,V_0,t^*)\\ &&\times P_{0}\left( V_{0}- {{\int}_{0}^{t}}\frac{\Delta I(s,t^{*})}{\tau_{\mathrm{m}}} ds , t^{*}\right)\,d V_{0}, \end{array} $$ (28)

where \(\tilde {G}(V_{\theta },t,V_{0},t^{*})=\partial G(V,t,V_{0},t^{*})/\partial V|_{V=V_{\theta }}\); it reads:

$$\begin{array}{@{}rcl@{}} \tilde{G}(V_{\theta},t,V_{0},t^{*})&=&\sqrt{\frac{2 \tau_{\mathrm{m}}^{3}}{\pi D^{3}}} \frac{r(t-t^{*}) \, (V_{0}-V_{\theta})}{(1-r^{2}(t-t^{*}))^{3/2}}\\ &&\times \exp{\left[ -\frac{\tau_{\mathrm{m}}}{2D}\frac{r^{2}(t-t^{*})\,(V_{0}-V_{\theta})^{2}}{1-r^{2}(t-t^{*})}\right]}. \end{array} $$ (29)

We use Eq. (7) for the probability density, P 0 (V), and calculate the integral in Eq. (28) (see Appendix IV). This yields an expression for the first-passage time density in the presence of an arbitrary signaling input (τ s ≪ τ m ):

$$\begin{array}{@{}rcl@{}} J(V_{\theta},t)&=&\frac{\sqrt{\kappa\,\omega}}{\pi \tau_{\mathrm{m}}} \left\{ \exp\left( -\frac{\varphi_{+}^{2}}{2}\right)\left[1+\sqrt{\frac{\pi}{2\kappa}}\,\varphi_{+}\exp\left( \frac{\varphi_{+}^{2}}{2\kappa}\right)\left( 1+\text{erf}\left( \frac{\varphi_{+}}{\sqrt{2\kappa}}\right)\right)\right]\right.\\ &&\left.-\exp\left( -\frac{\varphi_{-}^{2}}{2}\right)\left[1+\sqrt{\frac{\pi}{2\kappa}}\,\varphi_{-}\exp\left( \frac{\varphi_{-}^{2}}{2\kappa}\right)\left( 1+\text{erf}\left( \frac{\varphi_{-}}{\sqrt{2\kappa}}\right)\right)\right] \right\}, \end{array} $$ (30)

where \(\text {erf}(x)=(2/\sqrt {\pi }){{\int }_{0}^{x}} \exp (-t^{2})dt\), and κ(t,t∗), ω(t,t∗) and φ ± (t,t∗) are:

$$\begin{array}{@{}rcl@{}} \kappa(t,t^{*}) & \,=\, & (1-r(t)^{2})/(1\,-\,r(t-t^{*})^{2}),\\ \omega(t,t^{*}) & \,=\, & \frac{r(t-t^{*})^{2}(1-r(t^{*})^{2})}{(1-r(t)^{2})^{3}},\\ \varphi_{\pm}(t,t^{*}) & \,=\, & {\sqrt{\frac{\tau_{\mathrm{m}}V_{\mathrm \theta}^{2}}{D(1\,-\,r(t^{*})^{2})}}} \left\{\pm r(t^{*})\,-\,{{\int}_{0}^{t}} \frac{ds}{\tau_{\mathrm{m}}}\frac{\Delta I(s, t^{*})}{V_{\theta}}\right\}. \end{array} $$ (31)

The first-passage time density, Eq. (30), is the probability density of the first spike of the postsynaptic neuron when the signaling input arrives at t∗; here t∗ measures the time elapsed from the last spike of the postsynaptic neuron to the signal arrival.

There are experimental studies in which the postsynaptic neuron's spike times both before and after signal arrival are recorded (Blot et al. 2016). In such studies, t∗ is observed or assumed. For example, studies of spike-timing-dependent plasticity assume knowledge of the last postsynaptic spike (Froemke and Dan 2002; Wang et al. 2005). The timing of the last spike may also be learned by intrinsic mechanisms (Johansson et al. 2016; Jirenhed et al. 2017). In such cases, our formalism can be applied directly to the analysis. However, there are experimental studies that do not assume such knowledge of the last postsynaptic spike timing (Panzeri et al. 2014); a downstream neuron may have no access to information about the neuron's former spikes. In that case, we should average over all possible values of t∗, a statistical procedure known as marginalization.

First spike-timing density after signaling input’s arrival

We observed that the timing of the signaling input, relative to the last postsynaptic spike, significantly affects the first-spiking density of the postsynaptic neuron. However, the postsynaptic neuron (as well as downstream neurons) may have no access to this elapsed time. Suppose that we monitor the arrival of a signaling input; it is then convenient to take the arrival time as the time origin. We therefore reset the time origin accordingly and ask for the probability density f(τ) that the postsynaptic neuron fires at time τ after signaling input arrival.

As a building block to address this question, we need the probability that the last postsynaptic spike occurred in a time window between t∗ and t∗ + dt∗ before the signal arrival, P back (t∗) dt∗. In the language of voltage trajectories, this probability can be computed by requiring that (A) a trajectory began from V r = 0 in the mentioned time window, but (B) it has not yet reached the threshold at V = V 𝜃 . The answer is the product of the probabilities associated with the two conditions (A) and (B):

$$\begin{array}{@{}rcl@{}} P_{\text{back}}(t^{*})\,dt^{*}= \lambda\,dt^{*} \times (1 - {\int}_{0}^{t^{*}} J_{0}(V_{\theta},s) \, ds ), \end{array} $$ (32)

where \(\lambda =({\int }_{0}^{\infty } s J_{0}(V_{\theta },s) \, ds)^{-1}\) is the spike rate of the postsynaptic neuron in the absence of any signaling input, and J 0 (V 𝜃 ,s) is given by Eq. (9). P back (t∗) is known as the density function of the backward recurrence time in point process theory (Cox 1962).
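Numerically, Eq. (32) only needs the ISI density on a grid. The sketch below uses a hypothetical gamma-distributed ISI density as a stand-in for Eq. (9), which we do not reproduce here; λ and the survival factor are computed with the trapezoid rule.

```python
import numpy as np
from math import gamma as gamma_fn

def backward_recurrence(t, j0):
    """Eq. (32): P_back(t*) = lambda * (1 - int_0^{t*} J_0 ds), with
    lambda = 1/<ISI> the stationary spike rate (trapezoid rule throughout)."""
    dt = np.diff(t)
    step = 0.5 * (j0[1:] + j0[:-1]) * dt
    cum = np.concatenate(([0.0], np.cumsum(step)))   # int_0^{t*} J_0 ds
    mean_isi = np.sum(0.5 * ((t * j0)[1:] + (t * j0)[:-1]) * dt)
    return (1.0 / mean_isi) * (1.0 - cum)

# hypothetical stand-in for J_0 of Eq. (9): a gamma(4, 25) ISI density, mean 100 ms
t = np.arange(0.0, 1500.0, 0.1)
j0 = t**3 * np.exp(-t / 25.0) / (gamma_fn(4) * 25.0**4)
p_back = backward_recurrence(t, j0)
```

Since \({\int}_0^\infty (1-{\int}_0^{t^*} J_0\,ds)\,dt^* = \langle\text{ISI}\rangle\), the resulting P back integrates to one, as a density must.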

The next question is which portion of the trajectories addressed above will reach the threshold between τ and τ + dτ after signal arrival. This sets a temporal distance of t∗ + τ between the beginning point of the trajectories, at V = V r = 0, and their spiking point, at V = V 𝜃 . The answer is a conditional probability which reads:

$$\begin{array}{@{}rcl@{}} f (\tau | t^{*} ) d\tau = \frac{ J(V_{\theta},\tau + t^{*}) d\tau}{ 1 - {\int}_{0}^{t^{*}} J_{0}(V_{\theta},s) ds }. \end{array} $$ (33)

Note that J(V 𝜃 ,τ + t∗) is given by Eq. (30), where the signaling input arrives at t∗ after the last postsynaptic spike. The denominator is a normalization term ensuring \({\int }_{0}^{\infty } f (\tau | t^{*} )d\tau = 1\).

Having f(τ|t∗) and P back (t∗), we integrate over all possible values of the backward recurrence time, t∗, to obtain f(τ):

$$\begin{array}{@{}rcl@{}} f(\tau) &=& {\int}_{0}^{\infty} f(\tau|t^{*}) \times P_{\text{back}}(t^{*}) \, dt^{*} \\ &=& \lambda {\int}_{0}^{\infty} J(V_{\theta},\tau + t^{*}) \,dt^{*} . \end{array} $$ (34)
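A quick numerical check of Eq. (34): with no signaling input, J(V 𝜃 ,τ + t∗) reduces to J 0 (τ + t∗), and the marginalization must give f(τ) = λ(1 − ∫ 0 τ J 0 ds). The sketch below verifies this identity with the same hypothetical gamma ISI density as a stand-in for J 0 .

```python
import numpy as np
from math import gamma as gamma_fn

def marginal_first_spike(tau_grid, t, j, lam):
    """Eq. (34): f(tau) = lam * int_0^inf J(V_theta, tau + t*) dt*.
    j is J(V_theta, .) sampled on the grid t; the shift is done by
    interpolation and the t* integral by the trapezoid rule."""
    dt = t[1] - t[0]
    f = np.empty_like(tau_grid)
    for i, tau in enumerate(tau_grid):
        shifted = np.interp(tau + t, t, j, right=0.0)  # J(tau + t*), zero past grid
        f[i] = lam * np.sum(0.5 * (shifted[1:] + shifted[:-1])) * dt
    return f

# hypothetical stand-in for the no-input density J_0: gamma(4, 25) ISI pdf
t = np.arange(0.0, 1500.0, 0.1)
j0 = t**3 * np.exp(-t / 25.0) / (gamma_fn(4) * 25.0**4)
lam = 1.0 / 100.0                    # 1/<ISI> for this stand-in (mean = 4*25 ms)
f_null = marginal_first_spike(np.array([0.0, 50.0, 100.0, 200.0]), t, j0, lam)
```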

The result in Eq. (34) gives the probability density of first-spike timing after input arrival; the time of input arrival (stimulus onset) is assumed to be known through some mechanism in the cortex (Van Rullen et al. 2005; Panzeri et al. 2014).

It is worth seeing how P back (t∗) connects us to the existing stationary solution for the membrane potential (Brunel and Hakim 1999; Brunel 2000). We want the probability of finding the membrane potential between V 0 and V 0 + dV 0 at an arbitrary observation time. We split the task into two questions. First, what is the probability that the last postsynaptic spike happened in a time window of t∗ to t∗ + dt∗ before the observation time? Second, what is the conditional probability that voltage trajectories initiated t∗ before the observation time have a potential V ∈ [V 0 ,V 0 + dV 0 ] at the time of observation? The answer to the first question is simply the probability density of the backward recurrence time. The answer to the second question is a conditional probability:

$$\begin{array}{@{}rcl@{}} \tilde{p}(V_{0} | t^{*}) dV_{0}=\frac{P_{0}(V_{0}, t^{*}) dV_{0}}{{\int}_{-\infty}^{V_{\theta}} P_{0}(V, t^{*}) dV}. \end{array} $$ (35)

P 0 (V,t∗) is given by Eq. (7). The denominator is a normalizing factor ensuring \({\int }_{-\infty }^{V_{\theta }} \tilde {p}(V_{0} | t^{*}) dV_{0}= 1\). It has the very same origin as the denominator in Eq. (33); in fact, it is easy to verify that the two denominators are equal, due to the conservation of probability: \({\int }_{-\infty }^{V_{\theta }} P_{0}(V, t^{*}) dV = 1 - {\int }_{0}^{t^{*}} J_{0}(V_{\theta },s) ds\). Combining the answers to the two questions, we obtain the stationary probability density:

$$\begin{array}{@{}rcl@{}} P_{\mathrm{s}}(V_{0}) &=& {\int}_{0}^{\infty} \tilde{p}(V_{0} | t^{*}) \times P_{\text{back}}(t^{*}) \, dt^{*} \\ &=& \lambda {\int}_{0}^{\infty} P_{0}(V_{0}, t^{*}) \, dt^{*}. \end{array} $$ (36)

The inset of Fig. 6b shows that P s (V 0 ) nicely coincides with the existing stationary solution found by Brunel and Hakim (Brunel and Hakim 1999; Brunel 2000). To use their solution, we simply set the mean input current equal to the threshold potential, \(\bar {I}=V_{\theta }\).
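P s (V 0 ) can also be estimated directly from one long simulated voltage trace with resets, which is a useful cross-check of Eq. (36). The sketch below reuses our assumed noise convention and the threshold-regime parameters; a histogram of the returned samples approximates P s .

```python
import numpy as np

def stationary_voltage_samples(n_steps=400_000, dt=0.05, tau_m=20.0,
                               v_theta=20.0, i_bar=20.0, d_coef=0.74,
                               burn_in=20_000, seed=0):
    """One long LIF trajectory with threshold resets; the returned voltage
    samples estimate the stationary density P_s(V_0) of Eq. (36).
    Assumed noise convention: tau_m dV = (-V + I) dt + sqrt(2 D dt) N,
    with the mean input at threshold, i_bar = v_theta."""
    rng = np.random.default_rng(seed)
    noise = np.sqrt(2.0 * d_coef * dt) / tau_m * rng.standard_normal(n_steps)
    samples = np.empty(n_steps - burn_in)
    v = 0.0
    for k in range(n_steps):
        v += (-v + i_bar) * dt / tau_m + noise[k]
        if v >= v_theta:
            v = 0.0                        # spike: reset to V_r = 0
        if k >= burn_in:
            samples[k - burn_in] = v
    return samples
```

Because the approach to threshold slows down near V 𝜃 , most of the time is spent close to the threshold, which is where P s concentrates in the low-noise curves of Fig. 6b.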

Fig. 6 a, Top: Fisher information with respect to the amplitude of the signaling input, in logarithmic scale, as a function of the scaled amplitude, A/(τ m V 𝜃 ), and the scaled diffusion coefficient, D/(τ m V 𝜃 ²). We use an exponentially decaying signaling input with τ s = 1 ms (see Eq. (46)). For high noise levels, the Fisher information has one maximum at strong amplitude, but below a specific noise level it splits into two maxima, at strong and weak amplitudes. The dashed black line shows the maxima of the Fisher information. Notably, the Fisher information does not decrease monotonically with the noise level; except for A/(τ m V 𝜃 ) ≃ 1 and A/(τ m V 𝜃 ) < 0.1, the Fisher information is maximized at a certain noise level (stochastic resonance). Bottom: in the low noise regime, D/(τ m V 𝜃 ²) ≤ 0.088, the Fisher information is maximized either by small amplitudes of the signaling input, A/(τ m V 𝜃 ) < 0.3, or by strong amplitudes, A/(τ m V 𝜃 ) ≃ 1. The two maxima approach each other as the diffusion coefficient increases, and merge for D/(τ m V 𝜃 ²) > 0.088. This single maximum is robust to changes of the amplitude over a wide range. b Stationary distribution of the membrane potential for different scaled diffusion coefficients. When a strong-amplitude input arrives in the low-diffusion regime, nearly all trajectories reach the threshold and only a small portion remains (hatched red area). The inset compares the stationary distributions from Eqs. (36) and (77) (Brunel and Hakim 1999; Brunel 2000). c f(τ) in logarithmic scale for weak and strong signaling inputs: the solid colored lines are obtained using Eq. (34), while the black dashed lines are the results of Eq. (37). Their coincidence shows that the two equations produce the same result for the same signaling input amplitude. The scaled noise level is D/(τ m V 𝜃 ²) = 0.04

P s (V 0 ) provides an alternative approach to finding f(τ). It determines the probability density that the postsynaptic neuron has a membrane potential of V = V 0 upon signal arrival (i.e. t = t∗). We have also obtained how the probability density evolves after signal arrival (i.e. t > t∗) for a square (see Eq. (24)) or exponential (see Eq. (55)) signaling input. We note that the framework of solutions leading to Eq. (24) or Eq. (55) does not depend on the initial choice of P 0 (V 0 ,t) or P s (V 0 ). Consequently, if we want to determine the first-spiking density with no prior knowledge of the last postsynaptic spike, we simply replace P 0 (V 0 ,t∗) with P s (V 0 ) (see Appendix V). This lets us follow our expression for J(V 𝜃 ,t) in the presence of an arbitrary transient input, Eq. (30), and obtain f(τ) accordingly:

$$\begin{array}{@{}rcl@{}} f(\tau)&=&-\frac{D}{\tau_{\mathrm{m}}^{2}}{\int}_{V_{0}=-\infty}^{V_{\theta}} \tilde{G}(V_{\theta},\tau,V_{0},0)\\ &&\times P_{\mathrm{s}}\left( V_{0}- {\int}_{0}^{\tau}\frac{\Delta I(s,0)}{\tau_{\mathrm{m}}} ds\right)\,d V_{0}, \end{array} $$ (37)

where \(\tilde {G}(V_{\theta },\tau ,V_{0},0)\) is given by Eq. (29). In Fig. 6c, we depict f(τ) using both Eq. (37), dashed lines, and Eq. (34), full lines. The close coincidence between the two sets of curves shows the consistency of the two approaches. Each approach has its own advantage. Since Eq. (34) has just one temporal integral, and J(V 𝜃 ,τ) is already well simplified in Eq. (30), it is computationally easier and faster to work with. At first glance, Eq. (37) also has one temporal integral; however, another integral is hidden in P s (V ) to reach the stationary solution (see Eq. (36)). Consequently, Eq. (34) is computationally faster to use, but Eq. (37) provides more intuition about how f(τ) behaves.

Fisher information

The analytical first-spiking density after input arrival, Eq. (34), allows us to quantify the minimum error of any unbiased estimator that decodes properties of the signaling input, such as its amplitude (input strength). By the Cramér-Rao inequality (Rao 1973), the Fisher information provides a lower bound on the estimator's variance (\(\sigma ^{2}_{\text {est}}\geq 1/\mathcal {I}_{FI} \)). Applied to the spike-timing density, a larger Fisher information therefore means a smaller minimum error in decoding an input parameter (e.g. the signal's amplitude) from the spiking activity. Spike timing of the postsynaptic neuron contains information that the spike count does not carry (Rieke et al. 1999; van Vreeswijk 2001; Toyoizumi et al. 2006); indeed, discarding spike-timing information (specifically first-spike timing) leads to a loss of information (Panzeri et al. 2001). Here, we therefore investigate the Fisher information of the spike timing with respect to the strength of the signaling input. In this scenario, the decoder must know the input arrival time and the first spike time after it. As discussed above, the input arrival time may serve as a time reference provided by, for example, network oscillations or other mechanisms in cortical/sensory systems (Van Rullen et al. 2005; Panzeri et al. 2014). Depending on the noise level, we want to find the signaling input amplitudes for which an optimal decoder can make the best possible discrimination.

The Fisher information is defined as:

$$\begin{array}{@{}rcl@{}} \mathcal{I}_{\text{FI}}(A) = E \left[ \left( \frac{\partial \log f(\tau)} {\partial A} \right)^{2} \right]={\int}_{0}^{\infty} \frac{(\partial f(\tau)/\partial A)^{2}} {f(\tau) }\,d\tau, \end{array} $$ (38)

where the expectation is taken with respect to f(τ) itself (see Eq. (34)). We assume an exponentially decaying signaling input (i.e. ΔI(t,0) ∝ exp(−t/τ s ); see Eq. (46) for details).
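Given f(τ; A) on a grid, Eq. (38) can be evaluated by a central finite difference in A. The sketch below demonstrates the recipe on a stand-in exponential family with mean A (not the model's f(τ)), for which the Fisher information is known to be 1/A².

```python
import numpy as np

def fisher_information(f_of_a, a, tau, eps=1e-4):
    """Eq. (38) by a central finite difference in the amplitude A and a
    trapezoid integral over tau; f_of_a(A) returns f(tau; A) on the grid."""
    f0 = f_of_a(a)
    dfda = (f_of_a(a + eps) - f_of_a(a - eps)) / (2.0 * eps)
    integrand = dfda**2 / np.maximum(f0, 1e-300)   # guard against f = 0
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(tau)))
```

Sweeping this over a grid of A and D with the model's f(τ) from Eq. (34) is how a plot like Fig. 6a would be assembled.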

Figure 6a, top, shows the Fisher information as a function of two scaled variables: the amplitude, A/(τ m V 𝜃 ), and the noise level, \(D / (\tau _{\mathrm {m}}V_{\theta }^{2})\). The dashed black lines locate the local maxima. At high noise levels, \(D/(\tau _{\mathrm {m}}V_{\theta }^{2})>0.088\), the Fisher information is maximized at a single amplitude. This single maximum, however, splits into two maxima as the noise decreases. Figure 6a, bottom, depicts the same \(\mathcal {I}_{\text {FI}}(A)\) as a function of the signal's amplitude for specific values of the noise level. There are two distinct maxima in the dark-red curve, \(D/(\tau _{\mathrm {m}}V_{\theta }^{2})\simeq 0.01\). The two peaks go down and approach each other as the noise level increases; they finally merge into one peak for \(D/(\tau _{\mathrm {m}}V_{\theta }^{2})\gtrsim 0.09\) (the blue and dark-blue curves).

The two maxima that appear in the low noise regime show that noise plays a major role in optimal decoding. Unlike in the high noise regime, in the low noise regime the best discrimination occurs for two distinct kinds of input strength. The crucial role of noise has also been studied in the context of mutual information, where the maximally informative solution for a neural population splits into two as the noise level decreases (Kastner et al. 2015). Figure 6a shows two branches for the maximum of the Fisher information. The left branch indicates that the maximizing amplitude diminishes as the noise decreases. A similar behavior has been seen in an extension of the perfect integrate-and-fire model (Levakova et al. 2016); however, those authors observed a single maximum rather than two. The existence of the second maximum, produced by strong signaling input, is therefore less expected and needs more exploration. Here, we offer a heuristic argument that intuitively explains the existence of the second peak for strong amplitudes at low noise levels.

The Fisher information (Eq. (38)) involves an integral over τ; we may expect that maximizing its integrand with respect to A, over certain domains of τ, maximizes the whole \(\mathcal {I}_{FI}(A)\). The integrand is a fraction with ∂f(τ)/∂A in its numerator and f(τ) in its denominator. Figure 6c shows, in logarithmic scale, how f(τ) changes as A increases. For large signal amplitudes, as A/(τ m V 𝜃 ) varies from 0.7 to 1, we see no significant change of f(τ) in the domain 0 ≤ τ ≲ 2.5τ s . On the contrary, for \(\tau \gtrsim 2.5 \tau _{\mathrm {s}}\), we see a significant decrease in f(τ); increasing A/(τ m V 𝜃 ) by a constant step of 0.1 always results in a significant downward shift of f(τ). The shift in the last step (A/(τ m V 𝜃 ) : 0.9 → 1) is almost twice as large as the shift in the previous step, in logarithmic scale. This picture suggests that as A/(τ m V 𝜃 ) → 1, f(τ) decreases drastically, whereas ∂f(τ)/∂A remains finite. This results in the growth of the integrand, and hence of the integral, over the domain \(\tau \gtrsim 2.5 \tau _{\mathrm {s}}\).

Going back to the picture of voltage trajectories, we can understand this trend further. A large transient signaling input boosts all trajectories with an upward shift of A/τ m . Consequently, those trajectories that are closer to the threshold than A/τ m upon signal arrival will fire immediately. What remains to fire afterward are the trajectories that were below V 𝜃 − A/τ m upon signal occurrence. This picture lets us introduce two distinct sources for f(τ), during and after signal arrival. \(F_{\text {during}}(A)={\int }_{V_{\theta }-A/\tau _{\mathrm {m}}}^{V_{\theta }} P_{\mathrm {s}}(V_{0})dV_{0}\) measures the portion of trajectories that fire during signal arrival (e.g. 0 < τ < 2.5τ s ), whereas \(F_{\text {after}}(A)={\int }_{-\infty }^{V_{\theta }-A/\tau _{\mathrm {m}}} P_{\mathrm {s}}(V_{0})dV_{0}\) measures the portion that will reach the threshold after the signal arrival. For the case of an exponentially decaying input, this roughly implies that:

$$\begin{array}{@{}rcl@{}} F_{\text{after}}(A)= {\int}_{-\infty}^{V_{\theta}-A/\tau_{\mathrm{m}}} P_{\mathrm{s}}(V_{0})\,dV_{0} \sim {\int}_{2.5 \tau_{\mathrm{s}}}^{\infty} f(\tau) d\tau. \end{array} $$ (39)
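The split into F during and F after is just two partial integrals of P s about the edge V 𝜃 − A/τ m . A minimal sketch; the test below uses a hypothetical truncated-Gaussian stand-in for P s , not the model's stationary density.

```python
import numpy as np

def firing_split(v_grid, p_s, v_theta, area, tau_m=20.0):
    """F_during(A) and F_after(A): the mass of P_s above and below
    V_theta - A/tau_m, i.e. trajectories fired immediately by the pulse
    versus those left to cross the threshold later."""
    edge = v_theta - area / tau_m
    mid = 0.5 * (v_grid[1:] + v_grid[:-1])
    w = 0.5 * (p_s[1:] + p_s[:-1]) * np.diff(v_grid)   # trapezoid weights
    f_after = float(np.sum(w[mid < edge]))
    f_during = float(np.sum(w[mid >= edge]))
    return f_during, f_after
```

By construction F during + F after = 1, and a stronger pulse (larger A) moves mass from F after to F during.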

Equation (39) does not fully determine how f(τ) behaves after signal arrival, but it provides a hint on how f(τ) varies with A. Figure 6c lets us go one step further; it shows a linear tail for f(τ) in logarithmic scale, f(τ) ∝ exp(−ατ), and all tails show almost the same slope, \(\alpha \sim \tau _{\mathrm {m}}^{-1}\). This decouples the dependence of f(τ) on A from its temporal dependence: f(τ) ∝ F after (A) × exp(−ατ). Consequently, we can estimate ∂f(τ)/∂A as (∂F after (A)/∂A) × exp(−ατ), where ∂F after (A)/∂A is simply − (1/τ m )P s (V 𝜃 − A/τ m ). These points clarify how the integrand in Eq. (38) varies with A in the domain 2.5τ s < τ. The integrand, and the whole integral, behave like:

$$\begin{array}{@{}rcl@{}} {\int}_{2.5 \tau_{\mathrm{s}}}^{\infty} \frac{(\partial f(\tau)/\partial A)^{2}} {f(\tau) }\,d\tau \propto \frac{P_{\mathrm{s}}(V_{\theta}-A/\tau_{\mathrm{m}})^{2}}{{\int}_{-\infty}^{V_{\theta}-A/\tau_{\mathrm{m}}} P_{\mathrm{s}}(V_{0})\,dV_{0}}. \end{array} $$ (40)

The right side of Eq. (40) has a simple geometrical interpretation. Consider the green curve in Fig. 6b (D/(τ m V 𝜃 ²) = 0.2): the denominator of Eq. (40) equals the hatched area below the curve, while its numerator is simply the square of the height of that curve, P s (V ). As A/τ m → V 𝜃 , the height of the curve remains finite (i.e. P s (0)), whereas the hatched area decreases to \({\int }_{-\infty }^{0} P_{\mathrm {s}}(V_{0})\,dV_{0}\).

We compare how the curve changes as the noise decreases from the green to the red curve (Fig. 6b), \(D/(\tau _{\mathrm {m}}V_{\theta }^{2})\,= 0.2\rightarrow 0.04\). There is almost a ratio of 2 between the heights of the curves at V = 0, which means that the numerator decreases by a factor of 1/2²; however, the hatched area decreases by a factor larger than 4. The right side of Eq. (40) therefore grows as the noise level decreases. This trend is much stronger as the noise level further decreases to \(D/(\tau _{\mathrm {m}}V_{\theta }^{2})\,= 0.004\); the height decreases by a factor of 0.63, whereas the enclosed area shrinks to something hardly recognizable. In fact, we can show that the right side of Eq. (40) diverges like \((\tau _{\mathrm {m}}V_{\theta }^{2}/D)/\ln {(\tau _{\mathrm {m}}V_{\theta }^{2}/D)}\) as the noise level approaches zero. So the integral of the Fisher information over the after-arrival domain diverges if both A/τ m → V 𝜃 and \(D/(\tau _{\mathrm {m}}V_{\theta }^{2}) \rightarrow 0\). This intuitive picture explains why the second peak arises for large amplitudes in the low diffusion regime.
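The tail slope α used in the argument above (the linear tail of f(τ) in logarithmic scale, Fig. 6c) can be extracted by a least-squares fit of log f(τ) on its tail. A minimal sketch; the test uses a synthetic exponential tail with α = 1/τ m = 0.05 ms⁻¹ rather than the model's f(τ).

```python
import numpy as np

def tail_decay_rate(tau, f, tail_start):
    """Least-squares slope of log f(tau) on the tail: the alpha of
    f(tau) ~ exp(-alpha * tau)."""
    mask = (tau >= tail_start) & (f > 0.0)
    slope, _ = np.polyfit(tau[mask], np.log(f[mask]), 1)
    return -float(slope)
```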

Figure 6a shows that for weak amplitudes (0.02 ≤ A/(τ m V 𝜃 ) < 0.1), the Fisher information decreases monotonically as the noise level increases. The same holds when A/(τ m V 𝜃 ) ≃ 1. For the remaining amplitudes, however, the Fisher information is a non-monotonic function of the noise level (see the dashed lines in Fig. 6a, which mark the maxima of the function); there is a certain noise level that maximizes the Fisher information (stochastic resonance) (Bulsara et al. 1991).