[simulations] some improvements

Jan Benda 2019-12-23 19:10:36 +01:00
parent 42da5587f5
commit b5607deecb
2 changed files with 39 additions and 28 deletions

View File

@@ -7,12 +7,12 @@
The real power of computers for data analysis is the possibility to
run simulations. Experimental data of almost unlimited sample sizes
can be simulated in no time. This allows us to explore basic concepts,
the ones we introduce in the following chapters and many more, with
well-controlled data sets that are free of the confounding
peculiarities of real data sets. With simulated data we can also test
our own analysis
functions. More importantly, by means of simulations we can explore
possible outcomes of our experiments before we even start the
experiment, or we can explore possible results for regimes that we
cannot test experimentally. How dynamical systems, such as
predator-prey interactions or the activity of neurons, evolve in time
is a central application for simulations. Computers becoming available
@@ -23,31 +23,41 @@ code.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Random numbers}
At the heart of many simulations are random numbers that we get from
\enterm[random number generator]{random number generators}. These are
numerical algorithms that return sequences of numbers that appear to
be as random as possible. If we draw random numbers using, for
example, the \code{rand()} function, then these numbers are indeed
uniformly distributed and have a mean of one half. Subsequent numbers
are independent of each other, i.e. the autocorrelation function is
zero everywhere except at lag zero. However, numerical random number
generators have a period, after which they repeat the exact same
sequence of numbers. This differentiates them from truly random
numbers and hence they are called \enterm[random number
generator!pseudo]{pseudo random number generators}. In rare cases this
periodicity can induce problems in simulations, namely whenever more
random numbers than the period of the random number generator are
used. Luckily, nowadays the periods of random number generators are
very large, $2^{64}$, $2^{128}$, or even larger.
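
These properties are easy to check with a few lines of code. The
following is a minimal sketch, assuming MATLAB; \code{mean()} and
\code{hist()} are standard MATLAB functions not introduced in the
text above:

\begin{lstlisting}
x = rand(100000, 1);   % 100000 uniformly distributed random numbers
mean(x)                % should be close to one half
hist(x, 20)            % histogram should be approximately flat
\end{lstlisting}
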
The pseudo randomness of numerical random number generators also has
an advantage. They allow us to repeat exactly the same sequence of
random numbers. After defining the state of the generator or setting a
\enterm{seed} with the \code{rng()} function, a particular sequence of
random numbers is generated by subsequent calls of the random number
generator. This way we can not only precisely define the statistics of
the noise in our simulated data, but we can also repeat an experiment
with exactly the same sequence of noise values. This is useful for
plots that involve random numbers but should look the same whenever
the script is run.
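
As a minimal sketch (again assuming MATLAB; the seed value is
arbitrary), resetting the generator to the same seed reproduces the
same sequence:

\begin{lstlisting}
rng(13);           % set the seed of the random number generator
x1 = rand(1, 5);   % a particular sequence of five random numbers
rng(13);           % reset the generator to the same seed
x2 = rand(1, 5);   % ... and we get exactly the same sequence
max(abs(x1 - x2))  % this is zero
\end{lstlisting}
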
\begin{exercise}{}{}
Generate three times the same sequence of 20 uniformly distributed
numbers using the \code{rand()} and \code{rng()} functions.

Generate 10\,000 uniformly distributed random numbers and compute
the correlation coefficient between each number and the next one in
the sequence. This is the serial correlation at lag one.
\end{exercise}
\begin{figure}[t]
@@ -72,8 +82,8 @@ tigers or firing rate of neurons). Doing so we must specify from which
probability distribution the data should originate and what the
parameters (mean, standard deviation, shape parameters, etc.) of that
distribution are. How to illustrate and quantify univariate data, no
matter whether they have been actually measured or whether they have
been simulated as described in the following, is covered in
chapter~\ref{descriptivestatisticschapter}.
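
For example, data that scatter around a mean $\mu$ with standard
deviation $\sigma$ can be simulated by shifting and scaling standard
normally distributed random numbers. A minimal sketch, assuming
MATLAB, where \code{randn()} draws standard normally distributed
numbers:

\begin{lstlisting}
mu = 2.0;                        % mean of the simulated data
sigma = 0.5;                     % their standard deviation
x = mu + sigma * randn(100, 1);  % 100 simulated data values
\end{lstlisting}
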
\subsection{Normally distributed data}
@@ -142,6 +152,7 @@ draw (and plot) random functions (in statistics chapter?)
\section{Dynamical systems}
\begin{itemize}
\item iterated maps
\item Euler forward, odeint
\item introduce derivatives which are also needed for fitting (move box from there here)
\item Passive membrane

View File

@@ -22,7 +22,7 @@ if __name__ == "__main__":
fig = plt.figure(figsize=cm_size(16.0, 6.0))
spec = gridspec.GridSpec(nrows=1, ncols=2,
left=0.12, bottom=0.23, right=0.97, top=0.96, wspace=0.4)
ax1 = fig.add_subplot(spec[0, 0])
show_spines(ax1, 'lb')
ax1.plot(xx, yy, colors['red'], lw=2)