[pointprocesses] better incorporated Poisson spike train

This commit is contained in:
Jan Benda 2021-01-17 23:57:06 +01:00
parent bd610a9b1d
commit 0f0dfafd56
3 changed files with 283 additions and 233 deletions

View File

@ -222,11 +222,26 @@
{\vspace{-2ex}\lstset{#1}\noindent\minipage[t]{1\linewidth}}%
{\endminipage}
%%%%% english and german terms: %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\usepackage{ifthen}
% \enterm[en-index]{term}: Typeset term and add it to the index of english terms
%
% \determ[de-index]{term}: Typeset term and add it to the index of german terms
%
% \entermde[en-index]{de-index}{term}: Typeset term and add it to the index of english terms
% and de-index to the index of german terms
%
% how to specify an index:
% \enterm{term} - just put term into the index
% \enterm[en-index]{term} - typeset term and put en-index into the index
% \enterm[statistics!mean]{term} - mean is a subentry of statistics
% \enterm[statistics!average|see{statistics!mean}]{term} - cross reference to statistics mean
% \enterm[statistics@\textbf{statistics}]{term} - put index at statistics but use
% \textbf{statistics} for typesetting in the index
% \enterm[english index entry]{<english term>}
% typeset the term in italics and add it (or rather the optional argument) to
% the english index.
\newcommand{\enterm}[2][]{\textit{#2}\ifthenelse{\equal{#1}{}}{\protect\sindex[enterm]{#2}}{\protect\sindex[enterm]{#1}}}

View File

@ -93,6 +93,8 @@ def plot_hom_isih(ax):
ax.set_ylim(0.0, 31.0)
ax.set_xticks(np.arange(0.0, 151.0, 50.0))
ax.set_yticks(np.arange(0.0, 31.0, 10.0))
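# overlay the exponential interval distribution p(T) = rate*exp(-rate*T)
# expected for a homogeneous Poisson process (time axis scaled to ms):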
tt = np.linspace(0.0, 0.15, 100)
ax.plot(1000.0*tt, rate*np.exp(-rate*tt), **lsB)
plotisih(ax, isis(homspikes), 0.005)

View File

@ -34,7 +34,7 @@ process]{Punktprozess}{point processes}.
trial. Shown is a stationary point process (homogeneous point
process with a rate $\lambda=20$\;Hz, left) and a non-stationary
point process with a rate that varies in time (noisy perfect
integrate-and-fire neuron driven by Ornstein-Uhlenbeck noise with
a time-constant $\tau=100$\,ms, right).}
\end{figure}
@ -87,190 +87,6 @@ certain time window $n_i$ (\figref{pointprocesssketchfig}).
are stored as vectors of times within a cell array.
\end{exercise}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Interval statistics}
The intervals $T_i=t_{i+1}-t_i$ between successive events are real
positive numbers. In the context of action potentials they are
referred to as \entermde{Interspikeintervalle}{interspike
intervals}. The statistics of interspike intervals are described
using common measures for describing the statistics of stochastic
real-valued variables:
\begin{figure}[t]
\includegraphics[width=0.96\textwidth]{isihexamples}\vspace{-2ex}
\titlecaption{\label{isihexamplesfig}Interspike-interval
histograms}{of the spike trains shown in \figref{rasterexamplesfig}.}
\end{figure}
\begin{exercise}{isis.m}{}
Implement a function \varcode{isis()} that calculates the interspike
intervals from several spike trains. The function should return a
single vector of intervals. The spike times (in seconds) of each
trial are stored as vectors within a cell-array.
\end{exercise}
%\subsection{First order interval statistics}
\begin{itemize}
\item Probability density $p(T)$ of the intervals $T$
(\figref{isihexamplesfig}). Normalized to $\int_0^{\infty} p(T) \; dT
= 1$.
\item Average interval: $\mu_{ISI} = \langle T \rangle =
\frac{1}{n}\sum\limits_{i=1}^n T_i$.
\item Standard deviation of the interspike intervals: $\sigma_{ISI} = \sqrt{\langle (T - \langle T
\rangle)^2 \rangle}$\vspace{1ex}
\item \entermde[coefficient of variation]{Variationskoeffizient}{Coefficient of variation}:
$CV_{ISI} = \frac{\sigma_{ISI}}{\mu_{ISI}}$.
\item \entermde[diffusion coefficient]{Diffusionskoeffizient}{Diffusion coefficient}: $D_{ISI} =
\frac{\sigma_{ISI}^2}{2\mu_{ISI}^3}$.
\end{itemize}
\begin{exercise}{isihist.m}{}
Implement a function \varcode{isiHist()} that calculates the normalized
interspike interval histogram. The function should take two input
arguments; (i) a vector of interspike intervals and (ii) the width
of the bins used for the histogram. It further returns the
probability density as well as the centers of the bins.
\end{exercise}
\begin{exercise}{plotisihist.m}{}
Implement a function that takes the return values of
\varcode{isiHist()} as input arguments and then plots the data. The
plot should show the histogram with the x-axis scaled to
milliseconds and should be annotated with the average ISI, the
standard deviation and the coefficient of variation.
\end{exercise}
\subsection{Interval correlations}
So called \entermde[return map]{return map}{return maps} are used to
illustrate interdependencies between successive interspike
intervals. The return map plots the delayed interval $T_{i+k}$ against
the interval $T_i$. The parameter $k$ is called the \enterm{lag}
(\determ{Verz\"ogerung}). Stationary and non-stationary return
maps are distinctly different (\figref{returnmapfig}).
\begin{figure}[tp]
\includegraphics[width=1\textwidth]{serialcorrexamples}
\titlecaption{\label{returnmapfig}Interspike-interval
correlations}{of the spike trains shown in
\figref{rasterexamplesfig}. Upper panels show the return maps and
lower panels the serial correlations of successive intervals
separated by lag $k$. All the interspike intervals of the
stationary spike trains are independent of each other --- this is
a so called \enterm{renewal process}
(\determ{Erneuerungsprozess}). In contrast, the ones of the
non-stationary spike trains show positive correlations that decay
for larger lags. The positive correlations in this example are
caused by a common stimulus that slowly increases and decreases
the mean firing rate of the spike trains.}
\end{figure}
Such dependencies can be further quantified using the
\entermde[correlation!serial]{Korrelation!serielle}{serial
correlations} \figref{returnmapfig}. The serial correlation is the
correlation coefficient of the intervals $T_i$ and the intervals
delayed by the lag $T_{i+k}$:
\[ \rho_k = \frac{\langle (T_{i+k} - \langle T \rangle)(T_i - \langle T \rangle) \rangle}{\langle (T_i - \langle T \rangle)^2\rangle} = \frac{{\rm cov}(T_{i+k}, T_i)}{{\rm var}(T_i)}
= {\rm corr}(T_{i+k}, T_i) \] The resulting correlation coefficient
$\rho_k$ is usually plotted against the lag $k$
\figref{returnmapfig}. $\rho_0=1$ is the correlation of each interval
with itself and is always 1.
\begin{exercise}{isiserialcorr.m}{}
Implement a function \varcode{isiserialcorr()} that takes a vector of
interspike intervals as input argument and calculates the serial
correlation. The function should further plot the serial
correlation.
\end{exercise}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Count statistics}
\begin{figure}[t]
\includegraphics{countexamples}
\titlecaption{\label{countstatsfig}Count statistics.}{Probability
distributions of counting $k$ events (blue) within windows of
20\,ms (left) or 200\,ms duration (right) of a homogeneous Poisson
spike train with a rate of 20\,Hz
(\figref{rasterexamplesfig}). For Poisson spike trains these
distributions are given by Poisson distributions (red).}
\end{figure}
The most commonly used measure for characterizing spike trains is the
\enterm[firing rate!average]{average firing rate}. The firing rate $r$
is the average number of spikes counted within some time interval $W$
\begin{equation}
\label{firingrate}
r = \frac{\langle n \rangle}{W}
\end{equation}
and is measured in Hertz. The average of the spike counts is taken
over trials. For stationary spike trains (no change in statistics, in
particular the firing rate, over time), the firing rate based on the
spike count equals the inverse average interspike interval
$1/\mu_{ISI}$.
The firing rate based on averaged spike counts is one example of
many statistics based on event counts. Stationary spike trains can be
split into many segments $i$, each of duration $W$, and the number of
events $n_i$ in each of the segments can be counted. The integer event
counts can be quantified in the usual ways:
\begin{itemize}
\item Histogram of the counts $n_i$ (\figref{countstatsfig}).
\item Average number of counts: $\mu_n = \langle n \rangle$.
\item Variance of counts:
$\sigma_n^2 = \langle (n - \langle n \rangle)^2 \rangle$.
\end{itemize}
Because spike counts are unitless and positive numbers, the
\begin{itemize}
\item \entermde{Fano Faktor}{Fano factor} (variance of counts divided
by average count): $F = \frac{\sigma_n^2}{\mu_n}$.
\end{itemize}
is an additional measure quantifying event counts.
Note that all of these statistics depend on the chosen window length
$W$. The average spike count, for example, grows linearly with $W$ for
sufficiently large time windows: $\langle n \rangle = r W$,
\eqnref{firingrate}. Doubling the counting window doubles the spike
count. As does the spike-count variance (\figref{fanofig}). At smaller
time windows the statistics of the event counts might depend on the
particular duration of the counting window. There might be an optimal
time window for which the variance of the spike count is minimal. The
Fano factor plotted as a function of the time window illustrates such
properties of point processes (\figref{fanofig}).
This also has consequences for information transmission in neural
systems. The lower the variance in spike count relative to the
averaged count, the higher the signal-to-noise ratio at which
information encoded in the mean spike count is transmitted.
\begin{figure}[t]
\includegraphics{fanoexamples}
\titlecaption{\label{fanofig}
Count variance and Fano factor.}{Variance of event counts as a
function of mean counts obtained by varying the duration of the
count window (left). Dividing the count variance by the respective
mean results in the Fano factor that can be plotted as a function
of the count window (right). For Poisson spike trains the variance
always equals the mean counts and consequently the Fano factor
equals one irrespective of the count window (top). A spike train
with positive correlations between interspike intervals (caused by
Ornstein-Uhlenbeck noise) has a minimum in the Fano factor, that
is an analysis window for which the relative count variance is
minimal somewhere close to the correlation time scale of the
interspike intervals (bottom).}
\end{figure}
\begin{exercise}{counthist.m}{}
Implement a function \varcode{counthist()} that calculates and plots
the distribution of spike counts observed in a certain time
window. The function should take two input arguments: (i) a
cell-array of vectors containing the spike times in seconds observed
in a number of trials, and (ii) the duration of the time window that
is used to evaluate the counts.
\end{exercise}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Homogeneous Poisson process}
@ -303,45 +119,6 @@ In an \entermde[Poisson process!inhomogeneous]{Poissonprozess!inhomogener}{inhom
\eqnref{hompoissonprob} to generate the event times.
\end{exercise}
\begin{figure}[t]
\includegraphics[width=1\textwidth]{poissonraster100hz}
\titlecaption{\label{hompoissonfig}Rasterplot of spikes of a
homogeneous Poisson process with a rate $\lambda=100$\,Hz.}{}
\end{figure}
\begin{figure}[t]
\includegraphics[width=0.45\textwidth]{poissonisihexp20hz}\hfill
\includegraphics[width=0.45\textwidth]{poissonisihexp100hz}
\titlecaption{\label{hompoissonisihfig}Distribution of interspike
intervals of two Poisson processes.}{The processes differ in their
rate (left: $\lambda=20$\,Hz, right: $\lambda=100$\,Hz). The red
lines indicate the corresponding exponential interval distribution
\eqnref{poissonintervals}.}
\end{figure}
The homogeneous Poisson process has the following properties:
\begin{itemize}
\item Intervals $T$ are exponentially distributed (\figref{hompoissonisihfig}):
\begin{equation}
\label{poissonintervals}
p(T) = \lambda e^{-\lambda T} \; .
\end{equation}
\item The average interval is $\mu_{ISI} = \frac{1}{\lambda}$ .
\item The variance of the intervals is $\sigma_{ISI}^2 = \frac{1}{\lambda^2}$ .
\item Thus, the coefficient of variation is always $CV_{ISI} = 1$ .
\item The serial correlation is $\rho_k =0$ for $k>0$, since the
occurrence of an event is independent of all previous events. Such a
process is also called a \enterm{renewal process} (\determ{Erneuerungsprozess}).
\item The number of events $k$ within a temporal window of duration
$W$ is Poisson distributed:
\begin{equation}
\label{poissoncounts}
P(k) = \frac{(\lambda W)^k e^{-\lambda W}}{k!}
\end{equation}
(\figref{hompoissoncountfig})
\item The Fano factor is always $F=1$ .
\end{itemize}
\begin{exercise}{hompoissonspikes.m}{}
Implement a function \varcode{hompoissonspikes()} that uses a
homogeneous Poisson process to generate spike events at a given rate
@ -353,16 +130,272 @@ The homogeneous Poisson process has the following properties:
times.
\end{exercise}
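As an aside, a minimal Python sketch (not the exercise's MATLAB solution; the function name and parameters are chosen here for illustration) that generates homogeneous Poisson spike times by comparing a uniform random number per time bin against the event probability $\lambda\,\Delta t$:

import numpy as np

def hom_poisson_spikes(rate, tmax, dt=1e-4, rng=None):
    # In each bin of width dt an event occurs with probability rate*dt,
    # a good approximation as long as rate*dt << 1.
    rng = np.random.default_rng() if rng is None else rng
    p = rng.random(int(tmax/dt))              # one uniform number per time bin
    return np.flatnonzero(p < rate*dt)*dt     # bins with an event -> spike times

# ten trials of 0.5 s duration at 100 Hz:
trials = [hom_poisson_spikes(100.0, 0.5) for k in range(10)]
print(np.mean([len(spikes) for spikes in trials]))   # about 100 Hz * 0.5 s = 50 spikes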
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Interval statistics}
The intervals $T_i=t_{i+1}-t_i$ between successive events are real
positive numbers. In the context of action potentials they are
referred to as \entermde[interspike
interval]{Interspikeintervall}{interspike intervals}, in short
\entermde[ISI|see{interspike
interval}]{ISI|see{Interspikeintervall}}{ISI}s. The statistics of
interspike intervals are described using common measures for
describing the statistics of real-valued variables:
\begin{figure}[t]
\includegraphics[width=0.96\textwidth]{isihexamples}\vspace{-2ex}
\titlecaption{\label{isihexamplesfig}Interspike-interval
histograms}{of the spike trains shown in \figref{rasterexamplesfig}.}
\end{figure}
\begin{exercise}{isis.m}{}
Implement a function \varcode{isis()} that calculates the interspike
intervals from several spike trains. The function should return a
single vector of intervals. The spike times (in seconds) of each
trial are stored as vectors within a cell-array.
\end{exercise}
\begin{itemize}
\item Probability density $p(T)$ of the intervals $T$
(\figref{isihexamplesfig}). Normalized to $\int_0^{\infty} p(T) \;
dT = 1$. Commonly referred to as the \enterm[interspike
interval!histogram]{interspike interval histogram}. Its shape
reveals many interesting aspects like locking or bursting that
cannot be inferred from the mean or standard deviation. A particular
reference is the exponential distribution of intervals
\begin{equation}
\label{hompoissonexponential}
p_{exp}(T) = \lambda e^{-\lambda T}
\end{equation}
of a homogeneous Poisson spike train with rate $\lambda$.
\item Mean interval: $\mu_{ISI} = \langle T \rangle =
\frac{1}{n}\sum\limits_{i=1}^n T_i$. The average time it takes from
one event to the next. The inverse of the mean interval is identical
with the mean rate $\lambda$ (number of events per time, see below)
of the process.
\item Standard deviation of intervals: $\sigma_{ISI} = \sqrt{\langle
(T - \langle T \rangle)^2 \rangle}$. Periodically spiking neurons
have little variability in their intervals, whereas many cortical
neurons cover a wide range of intervals. The standard
deviation of homogeneous Poisson spike trains, $\sigma_{ISI} =
\frac{1}{\lambda}$, also equals the inverse rate. Whether the
standard deviation of intervals is low or high, however, is better
quantified by the
\item \entermde[coefficient of
variation]{Variationskoeffizient}{Coefficient of variation}, the
standard deviation of the ISIs relative to their mean:
\begin{equation}
\label{cvisi}
CV_{ISI} = \frac{\sigma_{ISI}}{\mu_{ISI}}
\end{equation}
Homogeneous Poisson spike trains have a CV of exactly one. The
lower the CV, the more regularly a neuron is firing. CVs larger
than one are also possible, for example in spike trains where short
intervals are separated by very long ones.
%\item \entermde[diffusion coefficient]{Diffusionskoeffizient}{Diffusion coefficient}: $D_{ISI} =
% \frac{\sigma_{ISI}^2}{2\mu_{ISI}^3}$.
\end{itemize}
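The interval measures listed above are straightforward to compute. A small Python sketch, assuming the intervals are given in seconds as a numpy array (function and variable names are illustrative):

import numpy as np

def isi_statistics(isis):
    mu = np.mean(isis)     # mean interval mu_ISI
    sigma = np.std(isis)   # standard deviation sigma_ISI
    cv = sigma/mu          # coefficient of variation CV_ISI
    return mu, sigma, cv

# intervals of a homogeneous Poisson process are exponentially distributed,
# so the CV should come out close to one:
isis = np.random.exponential(1.0/20.0, 10000)   # 20 Hz
print(isi_statistics(isis))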
\begin{exercise}{isihist.m}{}
Implement a function \varcode{isiHist()} that calculates the normalized
interspike interval histogram. The function should take two input
arguments; (i) a vector of interspike intervals and (ii) the width
of the bins used for the histogram. It further returns the
probability density as well as the centers of the bins.
\end{exercise}
\begin{exercise}{plotisihist.m}{}
Implement a function that takes the return values of
\varcode{isiHist()} as input arguments and then plots the data. The
plot should show the histogram with the x-axis scaled to
milliseconds and should be annotated with the average ISI, the
standard deviation and the coefficient of variation.
\end{exercise}
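In the spirit of the \varcode{isiHist()} exercise, a possible Python sketch of a normalized interspike-interval histogram (the exercise itself asks for a MATLAB implementation; names here are illustrative):

import numpy as np

def isi_hist(isis, binwidth=0.001):
    bins = np.arange(0.0, np.max(isis) + binwidth, binwidth)
    # density=True normalizes the histogram so that it integrates to one:
    pdf, edges = np.histogram(isis, bins, density=True)
    centers = edges[:-1] + 0.5*binwidth
    return pdf, centers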
\subsection{Interval correlations}
Intervals are not just numbers without an order, like weights of
tigers. Intervals are temporally ordered and there could be temporal
structure in the sequence of intervals. For example, short intervals
could preferentially be followed by longer ones, and vice versa. Such
dependencies in the sequence of intervals do not show up in the
interval histogram. We need additional measures to also quantify the
temporal structure of the sequence of intervals.
We can use the same techniques we know for visualizing and quantifying
correlations in bivariate data sets, i.e. scatter plots and
correlation coefficients. We form $(x,y)$ data pairs by taking the
series of intervals $T_i$ as $x$-data values and pairing them with the
$k$-th next intervals $T_{i+k}$ as $y$-data values. The parameter $k$
is called \enterm{lag} (\determ{Verz\"ogerung}). For lag one we pair
each interval with the next one. A \entermde[return map]{return
map}{return map} illustrates dependencies between successive
intervals by simply plotting the intervals $T_{i+k}$ against the
intervals $T_i$ in a scatter plot (\figref{returnmapfig}). For Poisson
spike trains there is no structure beyond the one expected from the
exponential interspike interval distribution, hinting at neighboring
interspike intervals being independent of each other. For the spike
train based on an Ornstein-Uhlenbeck process the return map is more
clustered along the diagonal, hinting at a positive correlation
between succeeding intervals. That is, short intervals are more likely
to be followed by short ones and long intervals more likely by long
ones. This temporal structure was already clearly visible in the spike
raster shown in \figref{rasterexamplesfig}.
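A return map is easily produced with a scatter plot. A short Python sketch, assuming the intervals are stored in a numpy array (function name is illustrative):

import numpy as np
import matplotlib.pyplot as plt

def plot_return_map(ax, isis, lag=1):
    # pair each interval T_i with the lagged interval T_{i+lag}:
    ax.scatter(1000.0*isis[:-lag], 1000.0*isis[lag:], s=10)
    ax.set_xlabel('interval $T_i$ [ms]')
    ax.set_ylabel('interval $T_{i+k}$ [ms]')

isis = np.random.exponential(0.05, 1000)   # intervals of a 20 Hz Poisson process
fig, ax = plt.subplots()
plot_return_map(ax, isis, lag=1)
plt.show()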
\begin{figure}[tp]
\includegraphics[width=1\textwidth]{serialcorrexamples}
\titlecaption{\label{returnmapfig}Interspike-interval
correlations}{of the spike trains shown in
\figref{rasterexamplesfig}. Upper panels show the return maps and
lower panels the serial correlations of successive intervals
separated by lag $k$. All the interspike intervals of the
stationary spike trains are independent of each other --- this is
a so called \enterm{renewal process}
(\determ{Erneuerungsprozess}). In contrast, the ones of the
non-stationary spike trains show positive correlations that decay
for larger lags. The positive correlations in this example are
caused by a common stimulus that slowly increases and decreases
the mean firing rate of the spike trains.}
\end{figure}
Such dependencies can be further quantified by
\entermde[correlation!serial]{Korrelation!serielle}{serial
correlations}. These are the correlation coefficients between the
intervals $T_{i+k}$ and $T_i$ as a function of the lag $k$:
\begin{equation}
\label{serialcorrelation}
\rho_k = \frac{\langle (T_{i+k} - \langle T \rangle)(T_i - \langle T \rangle) \rangle}{\langle (T_i - \langle T \rangle)^2\rangle} = \frac{{\rm cov}(T_{i+k}, T_i)}{{\rm var}(T_i)}
= {\rm corr}(T_{i+k}, T_i)
\end{equation}
The serial correlations $\rho_k$ are usually plotted against the lag
$k$ for a small range of lags
(\figref{returnmapfig}). $\rho_0$ is the correlation of each
interval with itself and thus always equals one.
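Serial correlations can be computed directly from \eqnref{serialcorrelation} with one correlation coefficient per lag. A minimal Python sketch (not the \varcode{isiserialcorr()} exercise solution; names are illustrative):

import numpy as np

def serial_correlations(isis, max_lag=10):
    rhos = [1.0]   # rho_0: correlation of each interval with itself
    for k in range(1, max_lag + 1):
        # correlation coefficient between T_i and the intervals lagged by k:
        rhos.append(np.corrcoef(isis[:-k], isis[k:])[0, 1])
    return np.array(rhos)

isis = np.random.exponential(0.05, 10000)   # renewal process
print(serial_correlations(isis, 5))         # rho_k close to zero for k > 0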
If the serial correlations all equal zero, $\rho_k =0$ for $k>0$, then
the length of an interval is independent of all the previous
ones. Such a process is a \enterm{renewal process}
(\determ{Erneuerungsprozess}). Each event, each action potential,
erases the history. The occurrence of the next event is independent of
what happened before. To a first approximation an action potential
erases all information about the past from the membrane voltage and
thus spike trains may approximate renewal processes.
However, other variables like the intracellular calcium concentration
or the states of slowly switching ion channels may carry information
from one interspike interval to the next and thus introduce
correlations. Such non-renewal dynamics can then be described by
non-zero serial correlations (\figref{returnmapfig}).
\begin{exercise}{isiserialcorr.m}{}
Implement a function \varcode{isiserialcorr()} that takes a vector of
interspike intervals as input argument and calculates the serial
correlation. The function should further plot the serial
correlation.
\end{exercise}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Count statistics}
\begin{figure}[t]
\includegraphics{countexamples}
\titlecaption{\label{countstatsfig}Count statistics.}{Probability
distributions of counting $k$ events (blue) within windows of
20\,ms (left) or 200\,ms duration (right) of a homogeneous Poisson
spike train with a rate of 20\,Hz
(\figref{rasterexamplesfig}). For Poisson spike trains these
distributions are given by Poisson distributions (red).}
\end{figure}
The most commonly used measure for characterizing spike trains is the
\enterm[firing rate!average]{average firing rate}. The firing rate $r$
is the average number of spikes counted within some time interval $W$
\begin{equation}
\label{firingrate}
r = \frac{\langle n \rangle}{W}
\end{equation}
and is measured in Hertz. The average of the spike counts is taken
over trials. For stationary spike trains (no change in statistics, in
particular the firing rate, over time), the firing rate based on the
spike count equals the inverse average interspike interval
$1/\mu_{ISI}$.
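For illustration, a small Python sketch of \eqnref{firingrate}, assuming the spike times of several trials are stored as a list of numpy arrays (names are illustrative):

import numpy as np

def firing_rate(spike_trains, duration):
    # average spike count over trials divided by the window duration W:
    return np.mean([len(spikes) for spikes in spike_trains])/duration

# stationary surrogate data: about 50 spikes uniformly placed in 0.5 s per trial
spike_trains = [np.sort(0.5*np.random.rand(np.random.poisson(50))) for k in range(10)]
isis = np.hstack([np.diff(spikes) for spikes in spike_trains])
print(firing_rate(spike_trains, 0.5))   # about 100 Hz
print(1.0/np.mean(isis))                # close to the count-based rate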
The firing rate based on averaged spike counts is one example of
many statistics based on event counts. Stationary spike trains can be
split into many segments $i$, each of duration $W$, and the number of
events $n_i$ in each of the segments can be counted. The integer event
counts can be quantified in the usual ways:
\begin{itemize}
\item Histogram of the counts $n_i$. For homogeneous Poisson spike
trains with rate $\lambda$ the resulting probability distributions
follow a Poisson distribution (\figref{countstatsfig}), where the
probability of counting $k$ events within a time window $W$ is given by
\begin{equation}
\label{poissondist}
P(k) = \frac{(\lambda W)^k e^{-\lambda W}}{k!}
\end{equation}
\item Average number of counts: $\mu_n = \langle n \rangle$.
\item Variance of counts:
$\sigma_n^2 = \langle (n - \langle n \rangle)^2 \rangle$.
\end{itemize}
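A possible Python sketch of such count statistics (function names are illustrative; the Poisson probability implements \eqnref{poissondist}):

import numpy as np
from math import factorial

def count_histogram(spike_times, window, tmax):
    # count events in consecutive, non-overlapping windows of duration `window`:
    counts, _ = np.histogram(spike_times, np.arange(0.0, tmax + window, window))
    return counts

def poisson_prob(k, rate, window):
    # probability of counting k events within the window:
    return (rate*window)**k*np.exp(-rate*window)/factorial(k)

spikes = np.cumsum(np.random.exponential(1.0/20.0, 2000))   # 20 Hz Poisson spike train
counts = count_histogram(spikes, 0.2, spikes[-1])
print(np.mean(counts))              # about rate*window = 4 events
print(poisson_prob(4, 20.0, 0.2))   # corresponding Poisson probability P(4)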
Because spike counts are unitless and positive numbers, the
\begin{itemize}
\item \entermde{Fano Faktor}{Fano factor} (variance of counts divided
by average count)
\begin{equation}
\label{fano}
F = \frac{\sigma_n^2}{\mu_n}
\end{equation}
is a commonly used measure for quantifying the variability of event
counts relative to the mean number of events. In particular for
homogeneous Poisson processes the Fano factor equals one,
independently of the time window $W$.
\end{itemize}
Note that all of these statistics depend in general on the chosen
window length $W$. The average spike count, for example, grows
linearly with $W$ for sufficiently large time windows: $\langle n
\rangle = r W$, \eqnref{firingrate}. Doubling the counting window
doubles the spike count, and so does the spike-count variance
(\figref{fanofig}). At smaller time windows the statistics of the
event counts might depend on the particular duration of the counting
window. There might be an optimal time window for which the variance
of the spike count is minimal. The Fano factor plotted as a function
of the time window illustrates such properties of point processes
(\figref{fanofig}).
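The dependence on the window length can be explored numerically. A sketch in Python, assuming a single long, stationary spike train (names are illustrative):

import numpy as np

def fano_factors(spike_times, windows, tmax):
    fanos = []
    for w in windows:
        counts, _ = np.histogram(spike_times, np.arange(0.0, tmax, w))
        # Fano factor: count variance divided by mean count for this window:
        fanos.append(np.var(counts)/np.mean(counts))
    return np.array(fanos)

spikes = np.cumsum(np.random.exponential(1.0/20.0, 5000))   # 20 Hz Poisson spike train
windows = [0.01, 0.03, 0.1, 0.3, 1.0]
print(fano_factors(spikes, windows, spikes[-1]))   # close to one for all windows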
This also has consequences for information transmission in neural
systems. The lower the variance in spike count relative to the
averaged count, the higher the signal-to-noise ratio at which
information encoded in the mean spike count is transmitted.
\begin{figure}[t]
\includegraphics{fanoexamples}
\titlecaption{\label{fanofig}
Count variance and Fano factor.}{Variance of event counts as a
function of mean counts obtained by varying the duration of the
count window (left). Dividing the count variance by the respective
mean results in the Fano factor that can be plotted as a function
of the count window (right). For Poisson spike trains the variance
always equals the mean counts and consequently the Fano factor
equals one irrespective of the count window (top). A spike train
with positive correlations between interspike intervals (caused by
an Ornstein-Uhlenbeck process) has a minimum in the Fano factor,
that is, there is an analysis window, close to the correlation time
scale of the interspike intervals, for which the relative count
variance is minimal (bottom).}
\end{figure}
\begin{exercise}{counthist.m}{}
Implement a function \varcode{counthist()} that calculates and plots
the distribution of spike counts observed in a certain time
window. The function should take two input arguments: (i) a
cell-array of vectors containing the spike times in seconds observed
in a number of trials, and (ii) the duration of the time window that
is used to evaluate the counts.
\end{exercise}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Time-dependent firing rate}