[simulations] added random walk figures

This commit is contained in:
Jan Benda 2020-12-29 19:01:23 +01:00
parent 7b61bd1e85
commit 2bac236768
4 changed files with 155 additions and 64 deletions

@ -22,7 +22,7 @@ colors['orange'] = '#FF9900'
colors['lightorange'] = '#FFCC00'
colors['yellow'] = '#FFF720'
colors['green'] = '#99FF00'
-colors['blue'] = '#0010CC'
+colors['blue'] = '#0020BB'
colors['gray'] = '#A7A7A7'
colors['black'] = '#000000'
colors['white'] = '#FFFFFF'

@ -0,0 +1,34 @@
import numpy as np
import matplotlib.pyplot as plt
from plotstyle import *


def random_walk(n, p, rng):
    # draw n steps: -1 with probability p, +1 with probability 1-p:
    steps = rng.rand(n)
    steps[steps >= p] = 1
    steps[steps < p] = -1
    # positions start at x_0 = 0 and accumulate the steps:
    x = np.hstack(((0.0,), np.cumsum(steps)))
    return x


if __name__ == "__main__":
    fig, ax = plt.subplots()
    fig.subplots_adjust(**adjust_fs(fig, right=1.0))
    rng = np.random.RandomState(52281)
    nmax = 80
    nn = np.linspace(0.0, nmax, 200)
    # shaded area: plus/minus one standard deviation, sqrt(n):
    ax.fill_between(nn, np.sqrt(nn), -np.sqrt(nn), **fsAa)
    ax.axhline(0.0, **lsAm)
    lcs = [colors['red'], colors['orange'],
           colors['yellow'], colors['green']]
    n = np.arange(0.0, nmax+1, 1.0)
    for k in range(12):
        x = random_walk(nmax, 0.5, rng)
        ls = dict(**lsAm)
        ls['color'] = lcs[k % len(lcs)]
        ax.plot(n, x, **ls)
    ax.set_xlabel('Iteration $n$')
    ax.set_ylabel('Position $x_n$')
    ax.set_xlim(0, nmax)
    ax.set_ylim(-20, 20)
    ax.set_yticks(np.arange(-20, 21, 10))
    fig.savefig("randomwalkone.pdf")
    plt.close()

@ -0,0 +1,37 @@
import numpy as np
import matplotlib.pyplot as plt
from plotstyle import *


def random_walk(n, rng):
    # in each iteration step right, left, up, or down,
    # each with probability 1/4:
    p = 0.25
    steps = rng.rand(n)
    xsteps = np.zeros(len(steps))
    ysteps = np.zeros(len(steps))
    xsteps[(steps >= 0*p) & (steps < 1*p)] = +1
    xsteps[(steps >= 1*p) & (steps < 2*p)] = -1
    ysteps[(steps >= 2*p) & (steps < 3*p)] = +1
    ysteps[(steps >= 3*p) & (steps < 4*p)] = -1
    x = np.hstack(((0.0,), np.cumsum(xsteps)))
    y = np.hstack(((0.0,), np.cumsum(ysteps)))
    return x, y


if __name__ == "__main__":
    fig, ax = plt.subplots(figsize=cm_size(figure_width, 0.65*figure_width))
    fig.subplots_adjust(**adjust_fs(fig, right=1.0))
    rng = np.random.RandomState(42281)
    nmax = 500
    lcs = [colors['blue'], colors['red'], colors['orange'],
           colors['yellow'], colors['green']]
    for k in range(len(lcs)):
        x, y = random_walk(nmax, rng)
        ls = dict(**lsAm)
        ls['color'] = lcs[k % len(lcs)]
        ax.plot(x, y, **ls)
    ax.set_xlabel('Position $x_n$')
    ax.set_ylabel('Position $y_n$')
    ax.set_xlim(-30, 30)
    ax.set_ylim(-20, 20)
    ax.set_xticks(np.arange(-30, 31, 10))
    ax.set_yticks(np.arange(-20, 21, 10))
    fig.savefig("randomwalktwo.pdf")
    plt.close()

@ -1,8 +1,8 @@
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-\chapter{Simulations}
-\label{simulationschapter}
-\exercisechapter{Simulations}
+\chapter{Stochastic simulations}
+\label{stochasticsimulationschapter}
+\exercisechapter{Stochastic simulations}
The real power of computers for data analysis is the possibility to
run simulations. Experimental data of almost unlimited sample sizes
@ -23,23 +23,37 @@ lines of code.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Random numbers}
+\begin{figure}[t]
+\includegraphics{randomnumbers}
+\titlecaption{\label{randomnumbersfig} Random numbers.} {Numerical
+random number generators return sequences of numbers that are as
+random as possible. The returned values approximate a given
+probability distribution. Here, for example, a uniform
+distribution between zero and one (top left). The serial
+correlation (auto-correlation) of the returned sequences is zero
+at any lag except for zero (bottom left). However, by setting a
+seed, defining the state, or spawning the random number generator,
+the exact same sequence can be generated (right).}
+\end{figure}
At the heart of many simulations are random numbers that we get from
\enterm[random number generator]{random number generators}. These are
numerical algorithms that return sequences of numbers that appear to
-be as random as possible. If we draw random numbers using, for
-example, the \code{rand()} function, then these numbers are indeed
-uniformly distributed and have a mean of one half. Subsequent numbers
-are independent of each other, i.e. the autocorrelation function is
-zero everywhere except at lag zero. However, numerical random number
-generators have a period, after which they repeat the exact same
-sequence of numbers. This differentiates them from truely random
-numbers and hence they are called \enterm[random number
-generator!pseudo]{pseudo random number generators}. In rare cases this
-periodicity can induce problems in simulations whenever more random
-numbers than the period of the random number generator are
-used. Luckily, nowadays the periods of random nunmber generators are
-very large. Periods are at least in the range of $2^{64}$, that is
-about $10^{18}$, or even larger.
+be as random as possible (\figref{randomnumbersfig}). If we draw
+random numbers using, for example, the \code{rand()} function, then
+these numbers are indeed uniformly distributed and have a mean of one
+half. Subsequent numbers are independent of each other, i.e. the
+autocorrelation function is zero everywhere except at lag
+zero. However, numerical random number generators have a period, after
+which they repeat the exact same sequence of numbers. This
+differentiates them from truly random numbers and hence they are
+called \enterm[random number generator!pseudo]{pseudo random number
+generators}. In rare cases this periodicity can induce problems in
+simulations whenever more random numbers than the period of the random
+number generator are used. Luckily, nowadays the periods of random
+number generators are very large, at least in the range of $2^{64}$,
+that is about $10^{18}$, or even larger.
The pseudo randomness of numerical random number generators also has
an advantage. They allow us to repeat exactly the same sequence of
@ -64,19 +78,6 @@ script is run.
the sequence. This is the serial correlation at lag one.
\end{exercise}
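The serial correlation asked for in the exercise can be sketched by correlating the sequence with a copy of itself shifted by one element (seed and sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.RandomState(19)
x = rng.rand(10000)
# serial correlation at lag one: correlate x_i with x_{i+1}
r = np.corrcoef(x[:-1], x[1:])[0, 1]
# for a good random number generator r is close to zero
```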
-\begin{figure}[t]
-\includegraphics[width=1\textwidth]{randomnumbers}
-\titlecaption{\label{randomnumbersfig} Random numbers.} {Numerical
-random number generators return sequences of numbers that are as
-random as possible. The returned values approximate a given
-probability distribution. Here, for example, a uniform
-distribution between zero and one (top left). The serial
-correlation (auto-correlation) of the returned sequences is zero
-at any lag except for zero (bottom left). However, by setting a
-seed, defining the state, or spawning the random number generator,
-the exact same sequence can be generated (right).}
-\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Univariate data}
A basic type of stochastic simulations is to draw random numbers from
@ -91,6 +92,18 @@ simulated as described in the following, is described in
chapter~\ref{descriptivestatisticschapter}.
\subsection{Normally distributed data}
+\begin{figure}[t]
+\includegraphics{normaldata}
+\titlecaption{\label{normaldatafig} Generating normally distributed
+data.}{With the help of a computer the weight of 300 tigers can be
+measured in no time using the \code{randn()} function (left). By
+construction we then even know the population distribution (red
+line, right), its mean (here 220\,kg) and standard deviation
+(40\,kg) from which the simulated data values were drawn (yellow
+histogram).}
+\end{figure}
For drawing numbers from a normal distribution
\begin{equation}
p_{norm}(x) = \frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}x^2}
@ -100,22 +113,12 @@ function. This function returns normally distributed numbers $\xi_i$
with zero mean and unit standard deviation. For changing the standard
deviation we need to multiply the returned data values by the
required standard deviation $\sigma$. For changing the mean we just
-add the desired mean $\mu$ to the random numbers:
+add the desired mean $\mu$ to the random numbers
+(\figref{normaldatafig}):
\begin{equation}
x_i = \sigma \xi_i + \mu
\end{equation}
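This scaling can be sketched in two lines of NumPy (seed, sample size, and the tiger-weight parameters from the following exercise are arbitrary choices):

```python
import numpy as np

rng = np.random.RandomState(32)
mu, sigma = 220.0, 40.0   # desired mean and standard deviation
xi = rng.randn(100000)    # standard normal numbers: zero mean, unit std
x = sigma * xi + mu       # x_i = sigma * xi_i + mu
```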
-\begin{figure}[t]
-\includegraphics[width=1\textwidth]{normaldata}
-\titlecaption{\label{normaldatafig} Generating normally distributed
-data.}{With the help of a computer the weight of 300 tigers can be
-measured in no time using the \code{randn()} function (left). By
-construction we then even know the population distribution (red
-line, right), its mean (here 220\,kg) and standard deviation
-(40\,kg) from which the simulated data values were drawn (yellow
-histogram).}
-\end{figure}
\begin{exercise}{normaldata.m}{normaldata.out}
Write a little script that generates $n=100$ normally distributed
data values simulating the weight of Bengal tiger males with mean 220\,kg
@ -211,37 +214,54 @@ the event does not occur.
\subsection{Random walks}
A basic concept for stochastic models is the random walk. A walker
-starts at some initial position $x_0$. It then takes steps to the
-right, $x_n = x_{n-1} + 1$, with some probability $P_r$ or to the
-left, $x_n = x_{n-1} - 1$, with probability $P_l = 1-P_r$.
-For a symmetric random walk, the probabilities to step to the left or
-to the right are the same, $P_r = P_l = \frac{1}{2}$. The average
-position of many walkers is then independent of the iteration step and
-equals the initial position $x_0$. The standard deviation of the
-walker positions, however, grows with the square root of the number of
-iteration steps $n$: $\sigma_{x_n} = \sqrt{n}$.
+starts at some initial position $x_0$. At every iteration $n$ it takes
+a step $\delta_n$: $x_n = x_{n-1} + \delta_n$. The walker either steps
+to the right ($\delta_n = +1$) with some probability $P_r$ or to the
+left ($\delta_n = -1$) with probability $P_l = 1-P_r$.
+For a symmetric random walk (\figref{randomwalkonefig}), the
+probabilities to step to the left or to the right are the same, $P_r =
+P_l = \frac{1}{2}$. The average position of many walkers is then
+independent of the iteration step and equals the initial position
+$x_0$. The variance of each step is $\sigma_{\delta_n}^2 = 1$. Since
+each step is independent of the preceding ones, the variances of the
+individual steps add up such that the total variance of the walker
+position at step $n$ equals $\sigma_{x_n}^2 = n$. The standard
+deviation of the walker positions thus grows with the square root of
+the number of iteration steps $n$: $\sigma_{x_n} = \sqrt{n}$.
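The $\sigma_{x_n} = \sqrt{n}$ growth can be verified over an ensemble of simulated walkers (a sketch; ensemble size, number of steps, and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.RandomState(11)
nwalkers, nmax = 10000, 100
# steps of +1 or -1 with equal probability for each walker and iteration:
steps = np.where(rng.rand(nwalkers, nmax) < 0.5, -1, +1)
x = np.cumsum(steps, axis=1)   # positions x_n for n = 1..nmax
std_n = np.std(x, axis=0)      # ensemble standard deviation at each n
# std_n[-1] is close to sqrt(nmax) = 10
```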
\begin{figure}[tb]
-Have a figure with a few 1D random walks and the increasing standard
-deviation as a shaded area behind. And with absorbing boundary.
+\includegraphics{randomwalkone}
+\titlecaption{\label{randomwalkonefig} Random walk in one
+dimension.}{Twelve symmetric random walks starting at $x_0=0$. The
+ensemble average of the position of the random walkers is zero
+(blue line) and the standard deviation increases with the square
+root of the number of steps taken (shaded area).}
\end{figure}
The sub-threshold membrane potential of a neuron can be modeled as a
one-dimensional random walk. The thousands of excitatory and
inhibitory synaptic inputs of a principal neuron in the brain make the
membrane potential randomly step upwards or downwards. Once the
-membrane potential hits a firing threshold an action potential is
-generated and the membrane potential is reset to the resting
-potential. The firing threshold in this context is often called an
-``absorbing boundary''. The time it takes from the initial position to
-cross the threshold is the ``first-passage time''. For neurons this is
-the interspike interval. For symmetric random walks the first-passage
+membrane potential hits a threshold an action potential is generated
+and the membrane potential is reset to the resting potential. The
+threshold in this context is often called an ``absorbing
+boundary''. The time it takes from the initial position to cross the
+threshold is the ``first-passage time''. For neurons this is the
+interspike interval. For symmetric random walks the first-passage
times are not exponentially distributed. Rather, their distribution
has a heavy power-law tail proportional to $n^{-3/2}$, so that the
mean first-passage time diverges.
% Gerstein and Mandelbrot 1964
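First-passage times are easy to estimate by simulation (a sketch; threshold, maximum step count, ensemble size, and seed are arbitrary choices):

```python
import numpy as np

def first_passage_times(threshold, nmax, nwalkers, rng):
    # symmetric random walks: steps of -1 or +1 with equal probability
    steps = np.where(rng.rand(nwalkers, nmax) < 0.5, -1, +1)
    x = np.cumsum(steps, axis=1)
    crossed = x >= threshold
    # step index of the first threshold crossing (nmax if never crossed):
    fpt = np.where(crossed.any(axis=1), crossed.argmax(axis=1) + 1, nmax)
    return fpt

rng = np.random.RandomState(64)
fpt = first_passage_times(10, 10000, 500, rng)
```

The resulting distribution is strongly skewed: many walkers cross the threshold quickly, but some take very long, reflecting the heavy tail.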
-Higher dimensions, Brownian motion.
+Random walks generalize to higher dimensions
+(\figref{randomwalktwofig}), where they approximate Brownian motion
+and underlie, for example, models of crystal growth.
+\begin{figure}[tb]
+\includegraphics{randomwalktwo}
+\titlecaption{\label{randomwalktwofig} Random walk in two
+dimensions.}{Five hundred steps of five random walkers that in each
+iteration take a step either up, down, to the left, or to the
+right.}
+\end{figure}
The gamma distribution (\code{gamrnd()}) phenomenologically describes
@ -258,7 +278,7 @@ various types of interspike interval distributions
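MATLAB's \code{gamrnd()} corresponds to the gamma method of NumPy's random number generators; a minimal sketch with arbitrarily chosen shape and scale parameters:

```python
import numpy as np

rng = np.random.RandomState(47)
# gamma-distributed interspike intervals; shape and scale are arbitrary:
shape, scale = 2.0, 0.05            # scale in seconds
isis = rng.gamma(shape, scale, 1000)
# the mean interval is shape * scale = 0.1 s
```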
\subsection{Static nonlinearities}
\begin{figure}[t]
-\includegraphics[width=1\textwidth]{staticnonlinearity}
+\includegraphics{staticnonlinearity}
\titlecaption{\label{staticnonlinearityfig} Generating data
fluctuating around a function.}{The conductance of the
mechanotransducer channels in hair cells of the inner ear is a
@ -268,7 +288,7 @@ various types of interspike interval dsitributions
distributed (right, red line).}
\end{figure}
-Example: mechanotransduciton!
+Example: mechanotransduction! (\figref{staticnonlinearityfig})
draw (and plot) random functions.
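Data fluctuating around a static nonlinearity can be simulated by adding Gaussian noise to the function values. A sketch with an assumed Boltzmann-type sigmoid; slope, noise level, and seed are arbitrary choices:

```python
import numpy as np

rng = np.random.RandomState(92)
x = np.linspace(-1.0, 1.0, 200)       # stimulus, e.g. hair bundle deflection
f = 1.0 / (1.0 + np.exp(-10.0 * x))   # Boltzmann-type sigmoid
y = f + 0.05 * rng.randn(len(x))      # measurements scatter around the function
```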