Worked on point process script
This commit is contained in:
parent 573c4ceb07
commit 0d2c5e91f9
@@ -5,7 +5,7 @@
In vielen Situationen wollen wir einen oder mehrere Parameter $\theta$
einer Wahrscheinlichkeitsverteilung sch\"atzen, so dass die Verteilung
die Daten $x_1, x_2, \ldots x_n$ am besten beschreibt.
-Maximum-Likelihood-Sch\"atzer w\"ahlen wir die Parameter so, dass die
+Maximum-Likelihood-Sch\"atzer w\"ahlen die Parameter so, dass die
Wahrscheinlichkeit, dass die Daten aus der Verteilung stammen, am
gr\"o{\ss}ten ist.
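The idea in this hunk (choose the parameter that maximizes the probability of the data) can be sketched numerically; the grid search below is an illustration with made-up numbers, not part of the repository:

```python
import numpy as np

def norm_loglik(x, mu, sigma=1.0):
    # log-likelihood of i.i.d. normal data for a candidate mean mu
    return np.sum(-0.5*np.log(2.0*np.pi*sigma**2) - (x - mu)**2/(2.0*sigma**2))

rng = np.random.RandomState(0)
x = rng.randn(1000) + 2.0          # data with true mean 2 (assumed example)

mus = np.linspace(0.0, 4.0, 401)   # candidate parameter values
ll = [norm_loglik(x, mu) for mu in mus]
mle = mus[np.argmax(ll)]           # maximum-likelihood estimate of the mean

print(mle, np.mean(x))
```

As the text's example of the arithmetic mean shows, the maximizer coincides (up to the grid resolution) with the sample mean.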
@@ -16,10 +16,9 @@ $\theta$'') die Wahrscheinlichkeits(dichte)verteilung von $x$ mit dem
Parameter(n) $\theta$. Das k\"onnte die Normalverteilung
\begin{equation}
\label{normpdfmean}
-p(x|\theta) = \frac{1}{\sqrt{2\pi \sigma^2}}e^{-\frac{(x-\theta)^2}{2\sigma^2}}
+p(x|\theta) = \frac{1}{\sqrt{2\pi \sigma^2}}e^{-\frac{(x-\mu)^2}{2\sigma^2}}
\end{equation}
-sein mit
-fester Standardverteilung $\sigma$ und dem Mittelwert $\mu$ als
+sein mit dem Mittelwert $\mu$ und der Standardabweichung $\sigma$ als
Parameter $\theta$.

Wenn nun den $n$ unabh\"angigen Beobachtungen $x_1, x_2, \ldots x_n$
@@ -59,9 +58,10 @@ das Maximum der logarithmierten Likelihood (``Log-Likelihood'') gesucht:
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Beispiel: Das arithmetische Mittel}

-Wenn die Me{\ss}daten $x_1, x_2, \ldots x_n$ der Normalverteilung \eqnref{normpdfmean}
-entstammen, und wir den Mittelwert $\mu$ als einzigen Parameter der Verteilung betrachten,
-welcher Wert von $\theta$ maximiert dessen Likelhood?
+Wenn die Me{\ss}daten $x_1, x_2, \ldots x_n$ der Normalverteilung
+\eqnref{normpdfmean} entstammen, und wir den Mittelwert $\mu=\theta$ als
+einzigen Parameter der Verteilung betrachten, welcher Wert von
+$\theta$ maximiert dessen Likelihood?

\begin{figure}[t]
\includegraphics[width=1\textwidth]{mlemean}
@@ -101,7 +101,7 @@ Normalverteilung mit diesem Mittelwert gezogen worden sind.
die Log-Likelihood (aus der Summe der logarithmierten
Wahrscheinlichkeiten) f\"ur den Mittelwert als Parameter. Vergleiche
die Position der Maxima mit den aus den Daten berechneten
-Mittelwerte.
+Mittelwert.
\end{exercise}

@@ -125,19 +125,19 @@ gegeben sind.
Der Parameter $\theta$ soll so gew\"ahlt werden, dass die
Log-Likelihood maximal wird. Der erste Term der Summe ist
unabh\"angig von $\theta$ und kann deshalb bei der Suche nach dem
-Maximum weggelassen werden.
+Maximum weggelassen werden:
\begin{eqnarray*}
& = & - \frac{1}{2} \sum_{i=1}^n \left( \frac{y_i-f(x_i;\theta)}{\sigma_i} \right)^2
\end{eqnarray*}
Anstatt nach dem Maximum zu suchen, k\"onnen wir auch das Vorzeichen der Log-Likelihood
-umdrehen und nach dem Minimum suchen. Dabei k\"onnen wir auch den Faktor $1/2$ vor der Summe vernachl\"assigen --- auch das \"andert nichts an der Position des Minimums.
+umdrehen und nach dem Minimum suchen. Dabei k\"onnen wir auch den Faktor $1/2$ vor der Summe vernachl\"assigen --- auch das \"andert nichts an der Position des Minimums:
\begin{equation}
\label{chisqmin}
\theta_{mle} = \text{argmin}_{\theta} \; \sum_{i=1}^n \left( \frac{y_i-f(x_i;\theta)}{\sigma_i} \right)^2 \;\; = \;\; \text{argmin}_{\theta} \; \chi^2
\end{equation}
Die Summe der quadratischen Abst\"ande normiert auf die jeweiligen
Standardabweichungen wird auch mit $\chi^2$ bezeichnet. Der Wert des
-Parameters $\theta$ welcher den quadratischen Abstand minimiert ist
+Parameters $\theta$, welcher den quadratischen Abstand minimiert, ist
also identisch mit der Maximierung der Wahrscheinlichkeit, dass die
Daten tats\"achlich aus der Funktion stammen k\"onnen. Minimierung des
$\chi^2$ ist also eine Maximum-Likelihood Sch\"atzung. Aber nur, wenn
@@ -169,33 +169,33 @@ und setzen diese gleich Null:
\end{eqnarray}
Damit haben wir nun einen analytischen Ausdruck f\"ur die Bestimmung
der Steigung $\theta$ der Regressionsgeraden gewonnen. Ein
-Gradientenabstieg ist f\"ur das Fitten der Geradensteigung also gar nicht
-n\"otig. Das gilt allgemein f\"ur das Fitten von Koeffizienten von
-linear kombinierten Basisfunktionen. Parameter die nichtlinear in
-einer Funktion enthalten sind k\"onnen aber nicht analytisch aus den
-Daten berechnet werden. Da bleibt dann nur auf numerische Verfahren
-zur Optimierung der Kostenfunktion, wie z.B. der Gradientenabstieg,
-zur\"uckzugreifen.
+Gradientenabstieg ist f\"ur das Fitten der Geradensteigung also gar
+nicht n\"otig. Das gilt allgemein f\"ur das Fitten von Koeffizienten
+von linear kombinierten Basisfunktionen. Parameter, die nichtlinear in
+einer Funktion enthalten sind, k\"onnen im Gegensatz dazu nicht
+analytisch aus den Daten berechnet werden. F\"ur diesen Fall bleibt
+dann nur auf numerische Verfahren zur Optimierung der Kostenfunktion,
+wie z.B. der Gradientenabstieg, zur\"uckzugreifen.


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Fits von Wahrscheinlichkeitsverteilungen}
Zum Abschluss betrachten wir noch den Fall, bei dem wir die Parameter
einer Wahrscheinlichkeitsdichtefunktion (z.B. Mittelwert und
-Standardabweichung der Normalverteilung) an ein Datenset fitten wolle.
+Standardabweichung der Normalverteilung) an ein Datenset fitten wollen.

Ein erster Gedanke k\"onnte sein, die
Wahrscheinlichkeitsdichtefunktion durch Minimierung des quadratischen
-Abstands an ein Histogram der Daten zu fitten. Das ist aber aus
+Abstands an ein Histogramm der Daten zu fitten. Das ist aber aus
folgenden Gr\"unden nicht die Methode der Wahl: (i)
Wahrscheinlichkeitsdichten k\"onnen nur positiv sein. Darum k\"onnen
insbesondere bei kleinen Werten die Daten nicht symmetrisch streuen,
-wie es bei normalverteilte Daten der Fall ist. (ii) Die Datenwerte
+wie es bei normalverteilten Daten der Fall ist. (ii) Die Datenwerte
sind nicht unabh\"angig, da das normierte Histogramm sich zu Eins
aufintegriert. Die beiden Annahmen normalverteilte und unabh\"angige
Daten, die die Minimierung des quadratischen Abstands
\eqnref{chisqmin} zu einem Maximum-Likelihood Sch\"atzer machen, sind
-also verletzt. (iii) Das Histgramm h\"angt von der Wahl der
+also verletzt. (iii) Das Histogramm h\"angt von der Wahl der
Klassenbreite ab.

Den direkten Weg, eine Wahrscheinlichkeitsdichtefunktion an ein
@@ -213,7 +213,6 @@ z.B. dem Gradientenabstieg, gel\"ost wird.
Wahrscheinlichkeitsdichtefunktion. Links: die 100 Datenpunkte, die
aus der Gammaverteilung 2. Ordnung (rot) gezogen worden sind. Der
Maximum-Likelihood-Fit ist orange dargestellt. Rechts: das
-normierte Histogramm der Daten zusammen mit der \"uber Minimierung
-des quadratischen Abstands zum Histogramm berechneten Fits ist
-potentiell schlechter.}
+normierte Histogramm der Daten zusammen mit dem \"uber Minimierung
+des quadratischen Abstands zum Histogramm berechneten Fit.}
\end{figure}
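The contrast drawn in this figure (direct maximum likelihood versus a least-squares fit to a histogram) can be reproduced in a few lines. This is a sketch with an exponential distribution and arbitrary numbers, not the gamma-distribution code behind the figure:

```python
import numpy as np

rng = np.random.RandomState(1)
lam = 5.0                              # assumed true rate parameter
x = rng.exponential(1.0/lam, 100)      # 100 data points, as in the figure

# direct maximum likelihood: for the exponential distribution the MLE is 1/mean
lam_mle = 1.0/np.mean(x)

# histogram least squares: pick lambda minimizing the squared distance
# between a normalized histogram and the density lam*exp(-lam*t)
h, edges = np.histogram(x, bins=10, density=True)
t = 0.5*(edges[:-1] + edges[1:])       # bin centers
lams = np.linspace(1.0, 10.0, 901)
chisq = [np.sum((h - lc*np.exp(-lc*t))**2) for lc in lams]
lam_hist = lams[np.argmin(chisq)]

print(lam_mle, lam_hist)
```

Both estimates land near the true rate here, but the histogram fit depends on the bin width, which is exactly the objection (iii) raised in the text.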

101 pointprocesses/lecture/isihexamples.py Normal file
@@ -0,0 +1,101 @@
import numpy as np
import matplotlib.pyplot as plt


def hompoisson(rate, trials, duration) :
    # homogeneous Poisson process: draw exponential intervals until duration is filled
    spikes = []
    for k in range(trials) :
        times = []
        t = 0.0
        while t < duration :
            t += np.random.exponential(1/rate)
            times.append( t )
        spikes.append( times )
    return spikes


def inhompoisson(rate, trials, dt) :
    # inhomogeneous Poisson process: spike in each bin with probability rate*dt
    spikes = []
    p = rate*dt
    for k in range(trials) :
        x = np.random.rand(len(rate))
        times = dt*np.nonzero(x<p)[0]
        spikes.append( times )
    return spikes


def pifspikes(input, trials, dt, D=0.1) :
    # perfect integrate-and-fire neuron driven by input plus white noise
    vreset = 0.0
    vthresh = 1.0
    tau = 1.0
    spikes = []
    for k in range(trials) :
        times = []
        v = vreset
        noise = np.sqrt(2.0*D)*np.random.randn(len(input))/np.sqrt(dt)
        for j in range(len(noise)) :
            v += (input[j]+noise[j])*dt/tau
            if v >= vthresh :
                v = vreset
                times.append(j*dt)
        spikes.append( times )
    return spikes


def isis( spikes ) :
    # collect interspike intervals from all trials
    isi = []
    for k in range(len(spikes)) :
        isi.extend(np.diff(spikes[k]))
    return isi


def plotisih( ax, isis, binwidth=None ) :
    if binwidth is None :
        nperbin = 200.0 # average number of isis per bin
        bins = len(isis)/nperbin # number of bins
        binwidth = np.max(isis)/bins
        if binwidth < 5e-4 : # half a millisecond
            binwidth = 5e-4
    h, b = np.histogram(isis, np.arange(0.0, np.max(isis)+binwidth, binwidth), density=True)
    ax.text(0.9, 0.85, 'rate={:.0f}Hz'.format(1.0/np.mean(isis)), ha='right', transform=ax.transAxes)
    ax.text(0.9, 0.75, 'mean={:.0f}ms'.format(1000.0*np.mean(isis)), ha='right', transform=ax.transAxes)
    ax.text(0.9, 0.65, 'CV={:.2f}'.format(np.std(isis)/np.mean(isis)), ha='right', transform=ax.transAxes)
    ax.set_xlabel('ISI [ms]')
    ax.set_ylabel('p(ISI) [1/s]')
    ax.bar( 1000.0*b[:-1], h, 1000.0*np.diff(b) )


# parameter:
rate = 20.0
drate = 50.0
trials = 10
duration = 100.0
dt = 0.001
tau = 0.1

# homogeneous spike trains:
homspikes = hompoisson(rate, trials, duration)

# OU noise:
rng = np.random.RandomState(54637281)
time = np.arange(0.0, duration, dt)
x = np.zeros(time.shape)+rate
n = rng.randn(len(time))*drate*tau/np.sqrt(dt)+rate
for k in range(1,len(x)) :
    x[k] = x[k-1] + (n[k]-x[k-1])*dt/tau
x[x<0.0] = 0.0

# pif spike trains:
inhspikes = pifspikes(x, trials, dt, D=0.3)

fig = plt.figure( figsize=(9,4) )
ax = fig.add_subplot(1, 2, 1)
ax.set_title('stationary')
ax.set_xlim(0.0, 200.0)
ax.set_ylim(0.0, 40.0)
plotisih(ax, isis(homspikes))

ax = fig.add_subplot(1, 2, 2)
ax.set_title('non-stationary')
ax.set_xlim(0.0, 200.0)
ax.set_ylim(0.0, 40.0)
plotisih(ax, isis(inhspikes))

plt.tight_layout()
plt.savefig('isihexamples.pdf')
plt.show()
@@ -223,3 +223,49 @@
\include{pointprocesses}

\end{document}


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{\tr{Homogeneous Poisson process}{Homogener Poisson Prozess}}

\begin{figure}[t]
\includegraphics[width=1\textwidth]{poissonraster100hz}
\caption{\label{hompoissonfig}Rasterplot von Poisson-Spikes.}
\end{figure}

The probability $p(t)\delta t$ of an event occurring at time $t$
is independent of $t$ and independent of any previous event
(independent of event history).

The probability $P$ for an event occurring within a time bin of width $\Delta t$
is
\[ P=\lambda \cdot \Delta t \]
for a Poisson process with rate $\lambda$.
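A quick bin-based simulation illustrates $P=\lambda\cdot\Delta t$; the rate, bin width, and duration below are arbitrary choices:

```python
import numpy as np

rate = 100.0          # lambda in Hertz (assumed)
dt = 0.001            # bin width in seconds
duration = 100.0      # total simulated time

rng = np.random.RandomState(2)
# a spike falls in a bin with probability P = rate*dt, independent of history
spikes = rng.rand(int(duration/dt)) < rate*dt
est_rate = np.sum(spikes)/duration

print(est_rate)
```

The recovered event rate is close to the nominal 100 Hz, as long as `rate*dt` stays well below 1.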
\subsection{Statistics of homogeneous Poisson process}

\begin{figure}[t]
\includegraphics[width=0.45\textwidth]{poissonisihexp20hz}\hfill
\includegraphics[width=0.45\textwidth]{poissonisihexp100hz}
\caption{\label{hompoissonisihfig}Interspike interval histograms of a Poisson spike train.}
\end{figure}

\begin{itemize}
\item Exponential distribution of intervals $T$: $p(T) = \lambda e^{-\lambda T}$
\item Mean interval $\mu_{ISI} = \frac{1}{\lambda}$
\item Variance of intervals $\sigma_{ISI}^2 = \frac{1}{\lambda^2}$
\item Coefficient of variation $CV_{ISI} = 1$
\item Serial correlation $\rho_k =0$ for $k>0$ (renewal process!)
\item Fano factor $F=1$
\end{itemize}
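These properties are easy to verify by drawing exponential intervals directly (a minimal sketch, not one of the diffed scripts):

```python
import numpy as np

lam = 20.0                                   # rate of the Poisson process (assumed)
rng = np.random.RandomState(3)
isi = rng.exponential(1.0/lam, 100000)       # intervals are exponentially distributed

mu = np.mean(isi)                            # should be near 1/lam = 0.05 s
cv = np.std(isi)/mu                          # coefficient of variation, near 1
rho1 = np.corrcoef(isi[:-1], isi[1:])[0, 1]  # lag-1 serial correlation, near 0

print(mu, cv, rho1)
```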
\subsection{Count statistics of Poisson process}

\begin{figure}[t]
\includegraphics[width=0.48\textwidth]{poissoncounthistdist100hz10ms}\hfill
\includegraphics[width=0.48\textwidth]{poissoncounthistdist100hz100ms}
\caption{\label{hompoissoncountfig}Count statistics of a Poisson spike train.}
\end{figure}

Poisson distribution:
\[ P(k) = \frac{(\lambda W)^k e^{-\lambda W}}{k!} \]
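The count distribution can be checked against simulated window counts; the numbers below are illustrative, and counts are drawn with numpy's Poisson generator rather than from simulated spike times:

```python
import numpy as np
from math import factorial, exp

lam = 100.0   # rate in Hz (assumed)
W = 0.01      # window of 10 ms, so lam*W = 1
rng = np.random.RandomState(4)
counts = rng.poisson(lam*W, 100000)   # event counts in windows of width W

# empirical fraction of windows with k events vs the Poisson distribution
k = 2
p_emp = np.mean(counts == k)
p_theo = (lam*W)**k*exp(-lam*W)/factorial(k)

print(p_emp, p_theo)
```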
@@ -28,71 +28,51 @@ erzeugt. Zum Beispiel:
Dynamik des Nervensystems und des Muskelapparates erzeugt.
\end{itemize}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Rate eines Punktprozesses}
Rate of events $r$ (``spikes per time'') measured in Hertz.
\begin{itemize}
\item Number of events $N$ per observation time $W$: $r = \frac{N}{W}$
\item Without boundary effects: $r = \frac{N-1}{t_N-t_1}$
\item Inverse interval: $r = \frac{1}{\mu_{ISI}}$
\end{itemize}
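The three rate estimates in this list can be compared on a simulated Poisson spike train (a sketch with assumed parameters):

```python
import numpy as np

rng = np.random.RandomState(5)
rate = 20.0        # true rate in Hz (assumed)
duration = 100.0   # observation time W in seconds

# homogeneous Poisson spike train: cumulative sum of exponential intervals
t = np.cumsum(rng.exponential(1.0/rate, 3000))
spikes = t[t < duration]

r_count = len(spikes)/duration                     # N per observation time W
r_bound = (len(spikes)-1)/(spikes[-1]-spikes[0])   # without boundary effects
r_isi = 1.0/np.mean(np.diff(spikes))               # inverse mean interval

print(r_count, r_bound, r_isi)
```

For a stationary process all three land near the true rate; they differ in how they treat the window boundaries.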
\begin{figure}[t]
\includegraphics[width=1\textwidth]{rasterexamples}
\caption{\label{rasterexamplesfig}Raster-Plot von jeweils 10
Realisierungen eines station\"aren Punktprozesses (homogener
Poisson Prozess mit Rate $\lambda=20$\;Hz, links) und eines
nicht-station\"aren Punktprozesses (perfect integrate-and-fire
Neuron getrieben mit Ornstein-Uhlenbeck Rauschen mit
Zeitkonstante $\tau=100$\,ms, rechts).}
\end{figure}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-\section{Intervall Statistiken}
+\section{Intervall Statistik}

\begin{figure}[t]
-\includegraphics[width=0.45\textwidth]{poissonisih100hz}\hfill
-\includegraphics[width=0.45\textwidth]{lifisih16}
-\caption{\label{isihfig}Interspike-Intervall Histogramme von einem Poisson Prozess (links)
-  und einem Integrate-and-Fire Neuron (rechts).}
+\includegraphics[width=1\textwidth]{isihexamples}\hfill
+\caption{\label{isihexamplesfig}Interspike-Intervall Histogramme der in
+  \figref{rasterexamplesfig} gezeigten Spikes.}
\end{figure}

-\subsection{First order (Interspike) interval statistics}
+\subsection{(Interspike) Intervall Statistik erster Ordnung}
\begin{itemize}
-\item Histogram $p(T)$ of intervals $T$. Normalized to $\int_0^{\infty} p(T) \; dT = 1$
-\item Mean interval $\mu_{ISI} = \langle T \rangle = \frac{1}{n}\sum\limits_{i=1}^n T_i$
-\item Variance of intervals $\sigma_{ISI}^2 = \langle (T - \langle T \rangle)^2 \rangle$\vspace{1ex}
-\item Coefficient of variation $CV_{ISI} = \frac{\sigma_{ISI}}{\mu_{ISI}}$
-\item Diffusion coefficient $D_{ISI} = \frac{\sigma_{ISI}^2}{2\mu_{ISI}^3}$
+\item Histogramm $p(T)$ der Intervalle $T$. Normiert auf $\int_0^{\infty} p(T) \; dT = 1$.
+\item Mittleres Intervall $\mu_{ISI} = \langle T \rangle = \frac{1}{n}\sum\limits_{i=1}^n T_i$.
+\item Varianz der Intervalle $\sigma_{ISI}^2 = \langle (T - \langle T \rangle)^2 \rangle$\vspace{1ex}
+\item Variationskoeffizient (``Coefficient of variation'') $CV_{ISI} = \frac{\sigma_{ISI}}{\mu_{ISI}}$.
+\item Diffusions-Koeffizient $D_{ISI} = \frac{\sigma_{ISI}^2}{2\mu_{ISI}^3}$.
\end{itemize}
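The first-order interval statistics listed above, computed for simulated exponential intervals; the helper `isi_stats` and all numbers are illustrative, not part of the repository:

```python
import numpy as np

def isi_stats(isi):
    # first-order interval statistics as defined in the list above
    mu = np.mean(isi)
    var = np.var(isi)
    cv = np.sqrt(var)/mu
    d = var/(2.0*mu**3)    # diffusion coefficient D_ISI
    return mu, var, cv, d

rng = np.random.RandomState(6)
isi = rng.exponential(0.05, 10000)   # Poisson-like intervals, mean 50 ms (assumed)
mu, var, cv, d = isi_stats(isi)
print(mu, cv, d)
```

For exponential intervals $D_{ISI}$ reduces to $1/(2\mu_{ISI})$, i.e. 10 for a 50 ms mean interval.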
\subsection{Interval return maps}
-Scatter plot between succeeding intervals separated by lag $k$.
+Scatter plot von aufeinander folgenden Intervallen $(T_{i+k}, T_i)$ getrennt durch das ``lag'' $k$.

\begin{figure}[t]
-\begin{minipage}[t]{0.49\textwidth}
-LIF $I=10$, $\tau_{adapt}=100$\,ms:\\
-\includegraphics[width=1\textwidth]{lifadaptreturnmap10-100ms}
-\end{minipage}
-\hfill
-\begin{minipage}[t]{0.49\textwidth}
-LIF $I=15.7$, $\tau_{OU}=100$\,ms:\\
-\includegraphics[width=1\textwidth]{lifoureturnmap16-100ms}
-\end{minipage}
-\caption{\label{returnmapfig}Interspike-Intervall return maps.}
+\includegraphics[width=1\textwidth]{returnmapexamples}
+\includegraphics[width=1\textwidth]{serialcorrexamples}
+\caption{\label{returnmapfig}Interspike-Intervall return maps and serial correlations.}
\end{figure}

-\subsection{Serial correlations of the intervals}
-Correlation coefficients between succeeding intervals separated by lag $k$:
+\subsection{Serielle Korrelationen der Intervalle}
+Korrelationskoeffizient zwischen aufeinander folgenden Intervallen getrennt durch ``lag'' $k$:
\[ \rho_k = \frac{\langle (T_{i+k} - \langle T \rangle)(T_i - \langle T \rangle) \rangle}{\langle (T_i - \langle T \rangle)^2\rangle} = \frac{{\rm cov}(T_{i+k}, T_i)}{{\rm var}(T_i)} \]
-$\rho_0=1$ (correlation of each interval with itself).
-
-\begin{figure}[t]
-\begin{minipage}[t]{0.49\textwidth}
-LIF $I=10$, $\tau_{adapt}=100$\,ms:\\
-\includegraphics[width=1\textwidth]{lifadaptserial10-100ms}
-\end{minipage}
-\hfill
-\begin{minipage}[t]{0.49\textwidth}
-LIF $I=15.7$, $\tau_{OU}=100$\,ms:\\
-\includegraphics[width=1\textwidth]{lifouserial16-100ms}
-\end{minipage}
-\caption{\label{serialcorrfig}Serial correlations.}
-\end{figure}
+$\rho_0=1$ (Korrelation jedes Intervalls mit sich selber).
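$\rho_k$ can be estimated directly with `np.corrcoef`; for a renewal process all lags $k>0$ come out near zero (a sketch mirroring the `plotserialcorr` helper in the diffed scripts):

```python
import numpy as np

def serial_corr(isi, maxlag=5):
    # rho_k = cov(T_{i+k}, T_i)/var(T_i); rho_0 = 1 by definition
    return [1.0] + [np.corrcoef(isi[:-k], isi[k:])[0, 1]
                    for k in range(1, maxlag+1)]

rng = np.random.RandomState(7)
isi = rng.exponential(0.05, 100000)   # renewal process: independent intervals
rho = serial_corr(isi)
print(rho)
```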

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-\section{Count statistics}
+\section{Z\"ahlstatistik}

\begin{figure}[t]
\includegraphics[width=0.48\textwidth]{poissoncounthist100hz10ms}\hfill
@@ -100,9 +80,16 @@ $\rho_0=1$ (correlation of each interval with itself).
\caption{\label{countstatsfig}Count Statistik.}
\end{figure}

-Histogram of number of events $N$ (counts) within observation window of duration $W$.
+Statistik der Anzahl der Ereignisse $N_i$ innerhalb von Beobachtungsfenstern $i$ der Breite $W$.
\begin{itemize}
\item Histogramm der counts $N_i$.
\item Mittlere Anzahl von Ereignissen: $\mu_N = \langle N \rangle$.
\item Varianz der Anzahl: $\sigma_N^2 = \langle (N - \langle N \rangle)^2 \rangle$.
\item Fano Faktor (Varianz geteilt durch Mittelwert): $F = \frac{\sigma_N^2}{\mu_N}$.
\end{itemize}
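These count statistics and the Fano factor are quickly checked for Poisson counts (a sketch with assumed parameters; counts are drawn directly rather than from spike times):

```python
import numpy as np

rng = np.random.RandomState(8)
rate = 100.0                            # assumed rate in Hz
W = 0.1                                 # window width in seconds
counts = rng.poisson(rate*W, 10000)     # counts N_i in windows i

mu_n = np.mean(counts)                  # mean count
var_n = np.var(counts)                  # count variance
fano = var_n/mu_n                       # Fano factor, 1 for a Poisson process

print(mu_n, var_n, fano)
```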

\subsection{Fano factor}
Insbesondere ist die mittlere Rate der Ereignisse $r$ (``Spikes pro Zeit'', Feuerrate) gemessen in Hertz
\[ r = \frac{\langle N \rangle}{W} \; . \]

\begin{figure}[t]
\begin{minipage}[t]{0.49\textwidth}
@@ -117,55 +104,4 @@ Histogram of number of events $N$ (counts) within observation window of duration
\caption{\label{fanofig}Fano factor.}
\end{figure}

-Statistics of number of events $N$ within observation window of duration $W$.
-\begin{itemize}
-\item Mean count: $\mu_N = \langle N \rangle$
-\item Count variance: $\sigma_N^2 = \langle (N - \langle N \rangle)^2 \rangle$
-\item Fano factor (variance divided by mean): $F = \frac{\sigma_N^2}{\mu_N}$
-\end{itemize}
-
-
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-\section{\tr{Homogeneous Poisson process}{Homogener Poisson Prozess}}
-
-\begin{figure}[t]
-\includegraphics[width=1\textwidth]{poissonraster100hz}
-\caption{\label{hompoissonfig}Rasterplot von Poisson-Spikes.}
-\end{figure}
-
-The probability $p(t)\delta t$ of an event occuring at time $t$
-is independent of $t$ and independent of any previous event
-(independent of event history).
-
-The probability $P$ for an event occuring within a time bin of width $\Delta t$
-is
-\[ P=\lambda \cdot \Delta t \]
-for a Poisson process with rate $\lambda$.
-
-\subsection{Statistics of homogeneous Poisson process}
-
-\begin{figure}[t]
-\includegraphics[width=0.45\textwidth]{poissonisihexp20hz}\hfill
-\includegraphics[width=0.45\textwidth]{poissonisihexp100hz}
-\caption{\label{hompoissonisihfig}Interspike interval histograms of poisson spike train.}
-\end{figure}
-
-\begin{itemize}
-\item Exponential distribution of intervals $T$: $p(T) = \lambda e^{-\lambda T}$
-\item Mean interval $\mu_{ISI} = \frac{1}{\lambda}$
-\item Variance of intervals $\sigma_{ISI}^2 = \frac{1}{\lambda^2}$
-\item Coefficient of variation $CV_{ISI} = 1$
-\item Serial correlation $\rho_k =0$ for $k>0$ (renewal process!)
-\item Fano factor $F=1$
-\end{itemize}
-
-\subsection{Count statistics of Poisson process}
-
-\begin{figure}[t]
-\includegraphics[width=0.48\textwidth]{poissoncounthistdist100hz10ms}\hfill
-\includegraphics[width=0.48\textwidth]{poissoncounthistdist100hz100ms}
-\caption{\label{hompoissoncountfig}Count statistics of poisson spike train.}
-\end{figure}
-
-Poisson distribution:
-\[ P(k) = \frac{(\lambda W)^ke^{\lambda W}}{k!} \]

86 pointprocesses/lecture/rasterexamples.py Normal file
@@ -0,0 +1,86 @@
import numpy as np
import matplotlib.pyplot as plt


def hompoisson(rate, trials, duration) :
    # homogeneous Poisson process: draw exponential intervals until duration is filled
    spikes = []
    for k in range(trials) :
        times = []
        t = 0.0
        while t < duration :
            t += np.random.exponential(1/rate)
            times.append( t )
        spikes.append( times )
    return spikes


def inhompoisson(rate, trials, dt) :
    # inhomogeneous Poisson process: spike in each bin with probability rate*dt
    spikes = []
    p = rate*dt
    for k in range(trials) :
        x = np.random.rand(len(rate))
        times = dt*np.nonzero(x<p)[0]
        spikes.append( times )
    return spikes


def pifspikes(input, trials, dt, D=0.1) :
    # perfect integrate-and-fire neuron driven by input plus white noise
    vreset = 0.0
    vthresh = 1.0
    tau = 1.0
    spikes = []
    for k in range(trials) :
        times = []
        v = vreset
        noise = np.sqrt(2.0*D)*np.random.randn(len(input))/np.sqrt(dt)
        for j in range(len(noise)) :
            v += (input[j]+noise[j])*dt/tau
            if v >= vthresh :
                v = vreset
                times.append(j*dt)
        spikes.append( times )
    return spikes


# parameter:
rate = 20.0
drate = 50.0
trials = 10
duration = 2.0
dt = 0.001
tau = 0.1

# homogeneous spike trains:
homspikes = hompoisson(rate, trials, duration)

# OU noise:
rng = np.random.RandomState(54637281)
time = np.arange(0.0, duration, dt)
x = np.zeros(time.shape)+rate
n = rng.randn(len(time))*drate*tau/np.sqrt(dt)+rate
for k in range(1,len(x)) :
    x[k] = x[k-1] + (n[k]-x[k-1])*dt/tau
x[x<0.0] = 0.0

# inhomogeneous spike trains:
#inhspikes = inhompoisson(x, trials, dt)
# pif spike trains:
inhspikes = pifspikes(x, trials, dt, D=0.3)

fig = plt.figure( figsize=(9,4) )
ax = fig.add_subplot(1, 2, 1)
ax.set_title('stationary')
ax.set_xlim(0.0, duration)
ax.set_ylim(-0.5, trials-0.5)
ax.set_xlabel('Time [s]')
ax.set_ylabel('Trials')
ax.eventplot(homspikes, colors=[[0, 0, 0]], linelength=0.8)

ax = fig.add_subplot(1, 2, 2)
ax.set_title('non-stationary')
ax.set_xlim(0.0, duration)
ax.set_ylim(-0.5, trials-0.5)
ax.set_xlabel('Time [s]')
ax.set_ylabel('Trials')
ax.eventplot(inhspikes, colors=[[0, 0, 0]], linelength=0.8)

plt.tight_layout()
plt.savefig('rasterexamples.pdf')
plt.show()

105 pointprocesses/lecture/returnmapexamples.py Normal file
@@ -0,0 +1,105 @@
import numpy as np
import matplotlib.pyplot as plt


def hompoisson(rate, trials, duration) :
    # homogeneous Poisson process: draw exponential intervals until duration is filled
    spikes = []
    for k in range(trials) :
        times = []
        t = 0.0
        while t < duration :
            t += np.random.exponential(1/rate)
            times.append( t )
        spikes.append( times )
    return spikes


def inhompoisson(rate, trials, dt) :
    # inhomogeneous Poisson process: spike in each bin with probability rate*dt
    spikes = []
    p = rate*dt
    for k in range(trials) :
        x = np.random.rand(len(rate))
        times = dt*np.nonzero(x<p)[0]
        spikes.append( times )
    return spikes


def pifspikes(input, trials, dt, D=0.1) :
    # perfect integrate-and-fire neuron driven by input plus white noise
    vreset = 0.0
    vthresh = 1.0
    tau = 1.0
    spikes = []
    for k in range(trials) :
        times = []
        v = vreset
        noise = np.sqrt(2.0*D)*np.random.randn(len(input))/np.sqrt(dt)
        for j in range(len(noise)) :
            v += (input[j]+noise[j])*dt/tau
            if v >= vthresh :
                v = vreset
                times.append(j*dt)
        spikes.append( times )
    return spikes


def isis( spikes ) :
    # collect interspike intervals from all trials
    isi = []
    for k in range(len(spikes)) :
        isi.extend(np.diff(spikes[k]))
    return np.array( isi )


def plotisih( ax, isis, binwidth=None ) :
    if binwidth is None :
        nperbin = 200.0 # average number of isis per bin
        bins = len(isis)/nperbin # number of bins
        binwidth = np.max(isis)/bins
        if binwidth < 5e-4 : # half a millisecond
            binwidth = 5e-4
    h, b = np.histogram(isis, np.arange(0.0, np.max(isis)+binwidth, binwidth), density=True)
    ax.text(0.9, 0.85, 'rate={:.0f}Hz'.format(1.0/np.mean(isis)), ha='right', transform=ax.transAxes)
    ax.text(0.9, 0.75, 'mean={:.0f}ms'.format(1000.0*np.mean(isis)), ha='right', transform=ax.transAxes)
    ax.text(0.9, 0.65, 'CV={:.2f}'.format(np.std(isis)/np.mean(isis)), ha='right', transform=ax.transAxes)
    ax.set_xlabel('ISI [ms]')
    ax.set_ylabel('p(ISI) [1/s]')
    ax.bar( 1000.0*b[:-1], h, 1000.0*np.diff(b) )


def plotreturnmap(ax, isis, lag=1, maxisi=None) :
    # scatter plot of interval i against interval i+lag
    ax.set_xlabel(r'ISI$_i$ [ms]')
    ax.set_ylabel(r'ISI$_{i+1}$ [ms]')
    if maxisi is not None :
        ax.set_xlim(0.0, 1000.0*maxisi)
        ax.set_ylim(0.0, 1000.0*maxisi)
    ax.scatter( 1000.0*isis[:-lag], 1000.0*isis[lag:] )


# parameter:
rate = 20.0
drate = 50.0
trials = 10
duration = 10.0
dt = 0.001
tau = 0.1

# homogeneous spike trains:
homspikes = hompoisson(rate, trials, duration)

# OU noise:
rng = np.random.RandomState(54637281)
time = np.arange(0.0, duration, dt)
x = np.zeros(time.shape)+rate
n = rng.randn(len(time))*drate*tau/np.sqrt(dt)+rate
for k in range(1,len(x)) :
    x[k] = x[k-1] + (n[k]-x[k-1])*dt/tau
x[x<0.0] = 0.0

# pif spike trains:
inhspikes = pifspikes(x, trials, dt, D=0.3)

fig = plt.figure( figsize=(9,4) )
ax = fig.add_subplot(1, 2, 1)
ax.set_title('stationary')
plotreturnmap(ax, isis(homspikes), 1, 0.3)

ax = fig.add_subplot(1, 2, 2)
ax.set_title('non-stationary')
plotreturnmap(ax, isis(inhspikes), 1, 0.3)

plt.tight_layout()
plt.savefig('returnmapexamples.pdf')
#plt.show()

117 pointprocesses/lecture/serialcorrexamples.py Normal file
@@ -0,0 +1,117 @@
import numpy as np
import matplotlib.pyplot as plt


def hompoisson(rate, trials, duration) :
    # homogeneous Poisson process: draw exponential intervals until duration is filled
    spikes = []
    for k in range(trials) :
        times = []
        t = 0.0
        while t < duration :
            t += np.random.exponential(1/rate)
            times.append( t )
        spikes.append( times )
    return spikes


def inhompoisson(rate, trials, dt) :
    # inhomogeneous Poisson process: spike in each bin with probability rate*dt
    spikes = []
    p = rate*dt
    for k in range(trials) :
        x = np.random.rand(len(rate))
        times = dt*np.nonzero(x<p)[0]
        spikes.append( times )
    return spikes


def pifspikes(input, trials, dt, D=0.1) :
    # perfect integrate-and-fire neuron driven by input plus white noise
    vreset = 0.0
    vthresh = 1.0
    tau = 1.0
    spikes = []
    for k in range(trials) :
        times = []
        v = vreset
        noise = np.sqrt(2.0*D)*np.random.randn(len(input))/np.sqrt(dt)
        for j in range(len(noise)) :
            v += (input[j]+noise[j])*dt/tau
            if v >= vthresh :
                v = vreset
                times.append(j*dt)
        spikes.append( times )
    return spikes


def isis( spikes ) :
    # collect interspike intervals from all trials
    isi = []
    for k in range(len(spikes)) :
        isi.extend(np.diff(spikes[k]))
    return np.array( isi )


def plotisih( ax, isis, binwidth=None ) :
    if binwidth is None :
        nperbin = 200.0 # average number of isis per bin
        bins = len(isis)/nperbin # number of bins
        binwidth = np.max(isis)/bins
        if binwidth < 5e-4 : # half a millisecond
            binwidth = 5e-4
    h, b = np.histogram(isis, np.arange(0.0, np.max(isis)+binwidth, binwidth), density=True)
    ax.text(0.9, 0.85, 'rate={:.0f}Hz'.format(1.0/np.mean(isis)), ha='right', transform=ax.transAxes)
    ax.text(0.9, 0.75, 'mean={:.0f}ms'.format(1000.0*np.mean(isis)), ha='right', transform=ax.transAxes)
    ax.text(0.9, 0.65, 'CV={:.2f}'.format(np.std(isis)/np.mean(isis)), ha='right', transform=ax.transAxes)
    ax.set_xlabel('ISI [ms]')
    ax.set_ylabel('p(ISI) [1/s]')
    ax.bar( 1000.0*b[:-1], h, 1000.0*np.diff(b) )


def plotreturnmap(ax, isis, lag=1, maxisi=None) :
    # scatter plot of interval i against interval i+lag
    ax.set_xlabel(r'ISI$_i$ [ms]')
    ax.set_ylabel(r'ISI$_{i+1}$ [ms]')
    if maxisi is not None :
        ax.set_xlim(0.0, 1000.0*maxisi)
        ax.set_ylim(0.0, 1000.0*maxisi)
    ax.scatter( 1000.0*isis[:-lag], 1000.0*isis[lag:] )


def plotserialcorr(ax, isis, maxlag=10) :
    # serial correlation coefficients rho_k up to maxlag
    lags = np.arange(maxlag+1)
    corr = [1.0]
    for lag in lags[1:] :
        corr.append(np.corrcoef(isis[:-lag], isis[lag:])[0,1])
    ax.set_xlabel(r'lag $k$')
    ax.set_ylabel(r'ISI correlation $\rho_k$')
    ax.set_xlim(0.0, maxlag)
    ax.set_ylim(-1.0, 1.0)
    ax.plot(lags, corr, '.-', markersize=20)


# parameter:
rate = 20.0
drate = 50.0
trials = 10
duration = 500.0
dt = 0.001
tau = 0.1

# homogeneous spike trains:
homspikes = hompoisson(rate, trials, duration)

# OU noise:
rng = np.random.RandomState(54637281)
time = np.arange(0.0, duration, dt)
x = np.zeros(time.shape)+rate
n = rng.randn(len(time))*drate*tau/np.sqrt(dt)+rate
for k in range(1,len(x)) :
    x[k] = x[k-1] + (n[k]-x[k-1])*dt/tau
x[x<0.0] = 0.0

# pif spike trains:
inhspikes = pifspikes(x, trials, dt, D=0.3)

fig = plt.figure( figsize=(9,3) )

ax = fig.add_subplot(1, 2, 1)
plotserialcorr(ax, isis(homspikes))
ax.set_ylim(-0.2, 1.0)

ax = fig.add_subplot(1, 2, 2)
plotserialcorr(ax, isis(inhspikes))
ax.set_ylim(-0.2, 1.0)

plt.tight_layout()
plt.savefig('serialcorrexamples.pdf')
#plt.show()
@@ -205,8 +205,9 @@
\newenvironment{definition}[1][]{\medskip\noindent\textbf{Definition}\ifthenelse{\equal{#1}{}}{}{ #1}:\newline}%
{\medskip}

%%%%% exercises: %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\newcounter{maxexercise}
-\setcounter{maxexercise}{9} % show listings up to exercise maxexercise
+\setcounter{maxexercise}{10} % show listings up to exercise maxexercise
\newcounter{theexercise}
\setcounter{theexercise}{1}
\newcommand{\codepath}{}