%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\chapter{Simulations}
\label{simulationschapter}
\exercisechapter{Simulations}
The real power of computers for data analysis is the possibility to
run simulations. Experimental data of almost unlimited sample sizes
can be simulated in no time. This allows us to explore basic concepts,
the ones we introduce in the following chapters and many more, with
well controlled data sets that are free of the confounding
peculiarities of real data sets. With simulated data we can also test
our own analysis functions. More importantly, by means of simulations
we can explore possible outcomes of our experiments before we even
start the experiment. We can even explore possible results for regimes
that we cannot test experimentally. How dynamical systems, like, for
example, predator-prey interactions or the activity of neurons or
whole brains, evolve in time is a central application for
simulations. The advent of computers in the second half of the
twentieth century pushed the exciting field of nonlinear dynamical
systems forward. Conceptually, many kinds of simulations are very
simple and can be implemented in a few lines of code.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Random numbers}
At the heart of many simulations are random numbers that we get from
\enterm[random number generator]{random number generators}. These are
numerical algorithms that return sequences of numbers that appear to
be as random as possible. If we draw random numbers using, for
example, the \code{rand()} function, then these numbers are indeed
uniformly distributed and have a mean of one half. Subsequent numbers
are independent of each other, i.e. the autocorrelation function is
zero everywhere except at lag zero. However, numerical random number
generators have a period, after which they repeat the exact same
sequence of numbers. This differentiates them from truly random
numbers and hence they are called \enterm[random number
generator!pseudo]{pseudo random number generators}. In rare cases this
periodicity can induce problems in simulations, whenever more random
numbers than the period of the random number generator are
used. Luckily, nowadays the periods of random number generators are
very large, $2^{64}$, $2^{128}$, or even larger.

The pseudo randomness of numerical random number generators also has
an advantage. They allow us to repeat exactly the same sequence of
random numbers. After defining the state of the generator or setting a
\enterm{seed} with the \code{rng()} function, a particular sequence of
random numbers is generated by subsequent calls of the random number
generator. This way we can not only precisely define the statistics of
the noise in our simulated data, but we can also repeat an experiment
with exactly the same sequence of noise values. This is useful for
plots that involve some random numbers but should look the same
whenever the script is run.
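For example, resetting the seed to the same value reproduces exactly
the same sequence of random numbers. A minimal sketch (the seed value
is arbitrary):
\begin{lstlisting}
rng(42);          % set the seed of the random number generator
x1 = rand(1, 5);  % draw five uniformly distributed numbers
rng(42);          % reset the seed to the same value ...
x2 = rand(1, 5);  % ... and get exactly the same five numbers
\end{lstlisting}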
\begin{exercise}{randomnumbers.m}{randomnumbers.out}
  First, read the documentation of the \varcode{rand()} function and
  check its output for some (small) input arguments.

  Generate the same sequence of 10 uniformly distributed numbers three
  times using the \code{rand()} and \code{rng()} functions.

  Generate 10\,000 uniformly distributed random numbers and compute
  the correlation coefficient between each number and the next one in
  the sequence. This is the serial correlation at lag one.
\end{exercise}
\begin{figure}[t]
  \includegraphics[width=1\textwidth]{randomnumbers}
  \titlecaption{\label{randomnumbersfig} Random numbers.}{Numerical
    random number generators return sequences of numbers that are as
    random as possible. The returned values approximate a given
    probability distribution, here, for example, a uniform
    distribution between zero and one (top left). The serial
    correlation (auto-correlation) of the returned sequences is zero
    at any lag except for zero (bottom left). However, by setting a
    seed, defining the state, or spawning the random number generator,
    the exact same sequence can be generated (right).}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Univariate data}
The most basic type of simulation is to draw random numbers from a
given distribution like, for example, the normal distribution. This
simulates repeated measurements of some quantity (e.g., the weight of
tigers or the firing rate of neurons). Doing so we must specify from
which probability distribution the data should originate and what the
parameters of that distribution are (mean, standard deviation, shape
parameters, ...). How to illustrate and quantify univariate data, no
matter whether they have actually been measured or simulated as
described in the following, is covered in
chapter~\ref{descriptivestatisticschapter}.
\subsection{Normally distributed data}
For drawing numbers $x_i$ from a normal distribution we use the
\code{randn()} function. This function returns normally distributed
numbers $\xi_i$ with zero mean and unit standard deviation. For
changing the standard deviation we need to multiply the returned data
values by the required standard deviation $\sigma$. For changing the
mean we just add the desired mean $\mu$ to the random numbers:
\begin{equation}
  x_i = \sigma \xi_i + \mu
\end{equation}
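In MATLAB this translates directly into code. A minimal sketch, using
the mean and standard deviation of the simulated tiger weights of
figure~\ref{normaldatafig}:
\begin{lstlisting}
n = 300;                     % number of simulated measurements
mu = 220.0;                  % mean weight in kilograms
sigma = 40.0;                % standard deviation in kilograms
x = sigma*randn(n, 1) + mu;  % n normally distributed weights
\end{lstlisting}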
\begin{figure}[t]
  \includegraphics[width=1\textwidth]{normaldata}
  \titlecaption{\label{normaldatafig} Generating normally distributed
    data.}{With the help of a computer the weight of 300 tigers can be
    measured in no time using the \code{randn()} function (left). By
    construction we then even know the population distribution (red
    line, right), its mean (here 220\,kg), and standard deviation
    (40\,kg) from which the simulated data values were drawn (yellow
    histogram).}
\end{figure}
\begin{exercise}{normaldata.m}{normaldata.out}
  Write a little script that generates $n=100$ normally distributed
  data values simulating the weights of male Bengal tigers with mean
  220\,kg and standard deviation 40\,kg. Check the actual mean and
  standard deviation of the generated data. Do this, let's say, five
  times using a for-loop. Then increase $n$ to 10\,000 and run the
  code again. It is so simple to measure the weights of 10\,000 tigers
  to get a really good estimate of their mean weight, isn't it?
  Finally, plot the first 1000 values of the last generated data set
  of tiger weights as a function of their index.
\end{exercise}
\subsection{Other probability densities}
The \code{rand()} function returns uniformly distributed random
numbers between zero and one. Numbers that are uniformly distributed
on some other interval, or that follow other probability densities,
like, for example, the gamma distribution, can be obtained by
transforming these numbers or by calling dedicated functions.
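A minimal sketch (assuming the \code{gamrnd()} function of the
Statistics Toolbox is available; the interval limits and the shape and
scale parameters are arbitrary example values):
\begin{lstlisting}
% 1000 uniformly distributed numbers between a and b:
a = 2.0;
b = 5.0;
xu = (b - a)*rand(1000, 1) + a;
% 1000 gamma distributed numbers with shape 2 and scale 1:
xg = gamrnd(2.0, 1.0, 1000, 1);
\end{lstlisting}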
\subsection{Random integers}
Random integers, for example for simulating the rolling of a die or
for resampling a data set, can be drawn with the \code{randi()}
function.
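A minimal sketch, simulating 100 rolls of a fair six-sided die (the
number of rolls is arbitrary):
\begin{lstlisting}
rolls = randi(6, 100, 1);  % 100 random integers between 1 and 6
\end{lstlisting}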
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Bivariate data and static nonlinearities}
\begin{figure}[t]
  \includegraphics[width=1\textwidth]{staticnonlinearity}
  \titlecaption{\label{staticnonlinearityfig} Generating data
    fluctuating around a function.}{The conductance of the
    mechanotransducer channels in hair cells of the inner ear is a
    saturating function of the deflection of their hairs (left, red
    line). Measured data will fluctuate around this function (blue
    dots). Ideally the residuals (yellow histogram) are normally
    distributed (right, red line).}
\end{figure}
Example: mechanotransduction!
draw (and plot) random functions (in statistics chapter?)
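Such data could, for example, be simulated by evaluating a saturating
function at random positions and adding normally distributed noise to
the function values. A minimal sketch using a Boltzmann function (all
parameter values are arbitrary example choices, not necessarily the
ones of figure~\ref{staticnonlinearityfig}):
\begin{lstlisting}
n = 50;                               % number of data points
x = 2.0*rand(n, 1) - 1.0;             % random deflections between -1 and 1
gmax = 1.0;                           % maximum conductance
x0 = 0.0;                             % deflection of half-maximum activation
dx = 0.2;                             % width of the activation curve
g = gmax./(1.0 + exp(-(x - x0)/dx));  % saturating Boltzmann function
sigma = 0.05;                         % standard deviation of the noise
y = g + sigma*randn(n, 1);            % data fluctuating around the function
\end{lstlisting}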
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Dynamical systems}
\begin{itemize}
\item iterated maps
\item Euler forward, odeint (see the sketch after this list)
\item introduce derivatives which are also needed for fitting (move box from there here)
\item Passive membrane
\item Add passive membrane to mechanotransduction!
\item Integrate and fire
\item FitzHugh-Nagumo
\item Two coupled neurons? Predator-prey?
\end{itemize}
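As a first sketch of the Euler forward method, the following lines
integrate a passive membrane, $\tau \frac{dV}{dt} = -V + E$, driven by
a constant input $E$ (all parameter values are arbitrary example
choices):
\begin{lstlisting}
tau = 0.01;          % membrane time constant in seconds
E = 10.0;            % constant input in millivolts
dt = 0.0001;         % integration time step in seconds
tmax = 0.1;          % duration of the simulation in seconds
t = 0:dt:tmax;       % time axis
V = zeros(size(t));  % membrane potential
V(1) = 0.0;          % initial condition
for k = 2:length(t)
    % Euler forward step: V(t+dt) = V(t) + dt*dV/dt
    V(k) = V(k-1) + dt*(-V(k-1) + E)/tau;
end
plot(t, V);
\end{lstlisting}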
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Summary}
with outlook to other simulations (cellular automata, Monte Carlo, etc.)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\printsolutions