some fixes

Jan Benda 2020-12-21 22:07:36 +01:00
parent f6b52d32cb
commit 60a94c9ce6
3 changed files with 21 additions and 9 deletions

View File

@@ -5,7 +5,7 @@
\exercisechapter{Resampling methods}
-\entermde{Resampling methoden}{Resampling methods} are applied to
+\entermde{Resampling-Methoden}{Resampling methods} are applied to
generate distributions of statistical measures via resampling of
existing samples. Resampling offers several advantages:
\begin{itemize}
@@ -80,10 +80,10 @@ distribution of average values around the true mean of the population
Alternatively, we can use \enterm{bootstrapping}
(\determ[Bootstrap!Verfahren]{Bootstrapverfahren}) to generate new
-samples from one set of measurements
-(\entermde{Resampling}{resampling}). From these bootstrapped samples
-we compute the desired statistical measure and estimate their
-distribution (\entermde{Bootstrap!Verteilung}{bootstrap distribution},
+samples from one set of measurements by means of resampling. From
+these bootstrapped samples we compute the desired statistical measure
+and estimate their distribution
+(\entermde{Bootstrap!Verteilung}{bootstrap distribution},
\subfigref{bootstrapsamplingdistributionfig}{c}). Interestingly, this
distribution is very similar to the sampling distribution regarding
its width. The only difference is that the bootstrapped values are
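
As a minimal illustration of the resampling idea described above, the
following Python sketch bootstraps the mean of a single sample.
Variable names and sample sizes are illustrative assumptions, not the
chapter's exercise code:

import numpy as np

rng = np.random.default_rng()
data = rng.normal(0.0, 1.0, 100)    # one set of measurements
nresamples = 1000
bmeans = []
for i in range(nresamples):
    # draw as many values as measured, with replacement:
    bsample = rng.choice(data, size=len(data), replace=True)
    bmeans.append(np.mean(bsample))
# the spread of the bootstrapped means estimates the standard error:
print(np.mean(bmeans), np.std(bmeans))

Drawing with replacement is what distinguishes bootstrapping from
simply reusing the data; each bootstrapped sample has the same size
as the original one.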

View File

@@ -24,7 +24,7 @@ for i in range(nresamples) :
    musrs.append(np.mean(rng.randn(nsamples)))
hmusrs, _ = np.histogram(musrs, bins, density=True)
-fig, ax = plt.subplots(figsize=cm_size(figure_width, 1.05*figure_height))
+fig, ax = plt.subplots(figsize=cm_size(figure_width, 1.1*figure_height))
fig.subplots_adjust(**adjust_fs(left=4.0, bottom=2.7, right=1.5))
ax.set_xlabel('Mean')
ax.set_xlim(-0.4, 0.4)

View File

@@ -362,7 +362,8 @@ too large, the algorithm does not converge to the minimum of the cost
function (try it!). At medium values it oscillates around the minimum
but might nevertheless converge. Only for sufficiently small values
(here $\epsilon = 0.00001$) does the algorithm follow the slope
-downwards towards the minimum.
+downwards towards the minimum. Change $\epsilon$ by factors of ten to
+adapt it to a specific problem.
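
To illustrate the interplay of $\epsilon$ and the termination
condition on the gradient discussed next, here is a minimal Python
sketch of such a plain gradient descent loop. The function
\varcode{descend()} and all default values are illustrative
assumptions, not the chapter's \varcode{gradientDescent()}
implementation:

import numpy as np

def descend(grad, p0, eps=1e-5, mingradient=1e-4, maxiter=100000):
    # eps: learning rate, to be adapted by factors of ten
    # mingradient: termination condition on the gradient magnitude
    p = np.asarray(p0, dtype=float)
    for _ in range(maxiter):
        g = np.asarray(grad(p))
        if np.linalg.norm(g) < mingradient:
            break
        p -= eps*g     # take a step down the slope
    return p

# example: f(p) = (p - 2)^2 has gradient 2*(p - 2); this smooth
# parabola tolerates a much larger learning rate than the default:
print(descend(lambda p: 2.0*(p - 2.0), [0.0], eps=0.1))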
The termination condition on the absolute value of the gradient
influences how often the cost function is evaluated. The smaller the
@@ -560,7 +561,7 @@ For testing our new function we need to implement the power law
\end{exercise}
Now let's use the new gradient descent function to fit a power law to
-our tiger data-set (\figref{powergradientdescentfig}):
+our tiger data-set:
\begin{exercise}{plotgradientdescentpower.m}{}
Use the function \varcode{gradientDescent()} to fit the
@@ -573,6 +574,15 @@ our tiger data-set (\figref{powergradientdescentfig}):
data together with the best fitting power-law \eqref{powerfunc}.
\end{exercise}
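
For readers following along in Python, a hedged sketch of the same
approach: descend a numerically estimated gradient of the mean
squared error to fit the power law. Data, initial values, and the
learning rate are made up for illustration and do not reproduce the
exercise's \varcode{gradientDescent()} solution:

import numpy as np

def powerlaw(x, p):
    # power law y = c * x^a with parameters p = [c, a]
    return p[0]*x**p[1]

def mse(p, x, y):
    # mean squared error between data and power law
    return np.mean((y - powerlaw(x, p))**2)

def msegradient(p, x, y, h=1e-6):
    # approximate the cost gradient by central differences:
    g = np.zeros(len(p))
    for k in range(len(p)):
        dp = np.zeros(len(p))
        dp[k] = h
        g[k] = (mse(p + dp, x, y) - mse(p - dp, x, y))/(2.0*h)
    return g

# idealized, noise-free data with a cubic relation on a unit scale:
x = np.linspace(1.0, 2.0, 20)
y = 3.0*x**3.0
p = np.array([2.5, 2.8])              # initial guess for [c, a]
for _ in range(10000):
    p -= 1e-3*msegradient(p, x, y)    # plain gradient descent steps
# p slowly approaches [3.0, 3.0] along the narrow valley of the cost:
print(p, mse(p, x, y))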
+Note that in our specific example on tiger sizes and weights the
+simulated data look at first glance linearly related
+(\figref{cubicdatafig}). The true cubic relation between weights and
+sizes is not that obvious, because of the limited range of tiger
+sizes. Nevertheless, the cost function has a minimum at the bottom of
+a valley that is very narrow in the direction of the exponent
+(\figref{powergradientdescentfig}). The exponent of about three is
+thus clearly defined by the data.
\section{Fitting non-linear functions to data}
@@ -650,6 +660,8 @@ however, is not a fixed function. It may change over time with changing
abiotic and biotic environmental conditions, making this a very
complex but also interesting optimization problem.
+\subsection{Optimal design of neural systems}
How should a neuron or neural network be designed? As a particular
aspect of the general evolution of a species, this is a fundamental
question in the neurosciences. Maintaining a neural system is