some fixes
commit 60a94c9ce6 (parent f6b52d32cb)
@@ -5,7 +5,7 @@
 \exercisechapter{Resampling methods}
 
 
-\entermde{Resampling methoden}{Resampling methods} are applied to
+\entermde{Resampling-Methoden}{Resampling methods} are applied to
 generate distributions of statistical measures via resampling of
 existing samples. Resampling offers several advantages:
 \begin{itemize}
@@ -80,10 +80,10 @@ distribution of average values around the true mean of the population
 
 Alternatively, we can use \enterm{bootstrapping}
 (\determ[Bootstrap!Verfahren]{Bootstrapverfahren}) to generate new
-samples from one set of measurements
-(\entermde{Resampling}{resampling}). From these bootstrapped samples
-we compute the desired statistical measure and estimate their
-distribution (\entermde{Bootstrap!Verteilung}{bootstrap distribution},
+samples from one set of measurements by means of resampling. From
+these bootstrapped samples we compute the desired statistical measure
+and estimate their distribution
+(\entermde{Bootstrap!Verteilung}{bootstrap distribution},
 \subfigref{bootstrapsamplingdistributionfig}{c}). Interestingly, this
 distribution is very similar to the sampling distribution regarding
 its width. The only difference is that the bootstrapped values are
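The resampling step itself is simple to implement. A minimal sketch in Python with NumPy, where the toy data set, its size, and the number of resamplings are assumptions rather than values from the original scripts:

    import numpy as np

    rng = np.random.default_rng(42)
    data = rng.normal(0.0, 1.0, 100)        # one measured sample (toy data)

    nresamples = 1000
    bmeans = np.empty(nresamples)
    for i in range(nresamples):
        # draw a bootstrap sample: same size as the data, with replacement
        bsample = rng.choice(data, size=len(data), replace=True)
        bmeans[i] = np.mean(bsample)

    # the spread of the bootstrapped means estimates the standard error
    print(f'standard error of the mean: {np.std(bmeans):.3f}')

For this toy data the printed value should come out close to $1/\sqrt{100} = 0.1$, the standard error of the mean predicted by theory.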
@@ -24,7 +24,7 @@ for i in range(nresamples) :
     musrs.append(np.mean(rng.randn(nsamples)))
 hmusrs, _ = np.histogram(musrs, bins, density=True)
 
-fig, ax = plt.subplots(figsize=cm_size(figure_width, 1.05*figure_height))
+fig, ax = plt.subplots(figsize=cm_size(figure_width, 1.1*figure_height))
 fig.subplots_adjust(**adjust_fs(left=4.0, bottom=2.7, right=1.5))
 ax.set_xlabel('Mean')
 ax.set_xlim(-0.4, 0.4)
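This plotting fragment depends on the repo's helper functions (cm_size(), adjust_fs()) and on earlier lines of the script. A self-contained sketch of the same simulation of the sampling distribution of the mean, with plain matplotlib defaults substituted for the helpers and assumed values for the sample counts, seed, and bin edges:

    import numpy as np
    import matplotlib.pyplot as plt

    nsamples = 100
    nresamples = 1000
    bins = np.linspace(-0.4, 0.4, 41)

    # the script's rng.randn() is the legacy RandomState API;
    # standard_normal() on a default_rng generator is equivalent
    rng = np.random.default_rng(1)
    musrs = []
    for i in range(nresamples):
        # mean of a fresh sample of 100 standard-normal values
        musrs.append(np.mean(rng.standard_normal(nsamples)))
    hmusrs, _ = np.histogram(musrs, bins, density=True)

    fig, ax = plt.subplots()
    ax.bar(bins[:-1], hmusrs, width=np.diff(bins), align='edge')
    ax.set_xlabel('Mean')
    ax.set_xlim(-0.4, 0.4)
    plt.show()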
@@ -362,7 +362,8 @@ too large, the algorithm does not converge to the minimum of the cost
 function (try it!). At medium values it oscillates around the minimum
 but might nevertheless converge. Only for sufficiently small values
 (here $\epsilon = 0.00001$) does the algorithm follow the slope
-downwards towards the minimum.
+downwards towards the minimum. Change $\epsilon$ by factors of ten to
+adapt it to a specific problem.
 
 The terminating condition on the absolute value of the gradient
 influences how often the cost function is evaluated. The smaller the
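For concreteness, a Python sketch of such a gradient descent with both the learning rate $\epsilon$ and the terminating condition on the gradient magnitude as parameters. The numerical gradient, the quadratic toy cost, and all names are assumptions; the book's gradientDescent() is a MATLAB function that this does not reproduce:

    import numpy as np

    def num_gradient(cost, p, h=1e-6):
        # central-difference estimate of the gradient at p
        g = np.zeros(len(p))
        for k in range(len(p)):
            dp = np.zeros(len(p))
            dp[k] = h
            g[k] = (cost(p + dp) - cost(p - dp)) / (2.0 * h)
        return g

    def gradient_descent(cost, p0, eps, gradtol=1e-4, maxiter=100000):
        p = np.asarray(p0, dtype=float)
        for _ in range(maxiter):
            g = num_gradient(cost, p)
            if np.linalg.norm(g) < gradtol:   # terminating condition
                break
            p -= eps * g                      # step downhill
        return p

    # toy cost function with its minimum at (2, -1)
    cost = lambda p: (p[0] - 2.0)**2 + (p[1] + 1.0)**2
    print(gradient_descent(cost, [0.0, 0.0], eps=0.01))

On this well-conditioned toy cost a much larger $\epsilon$ than 0.00001 still converges; stepping $\epsilon$ up or down by factors of ten, as suggested above, quickly reveals the usable range.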
@@ -560,7 +561,7 @@ For testing our new function we need to implement the power law
 \end{exercise}
 
 Now let's use the new gradient descent function to fit a power law to
-our tiger data-set (\figref{powergradientdescentfig}):
+our tiger data-set:
 
 \begin{exercise}{plotgradientdescentpower.m}{}
   Use the function \varcode{gradientDescent()} to fit the
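What the core of such a power-law fit might look like in Python, with synthetic size/weight data standing in for the book's simulated tiger data and the analytic gradient of the mean squared error written out by hand. The data range, noise level, start values, and $\epsilon$ are all assumptions tuned to this toy problem:

    import numpy as np

    rng = np.random.default_rng(3)
    x = rng.uniform(2.2, 3.9, 40)              # sizes (assumed range)
    y = 0.9 * x**3 + rng.normal(0.0, 2.0, 40)  # weights, roughly cubic in size

    def mse_gradient(p):
        # gradient of the mean squared error of the power law y = c * x**a
        c, a = p
        r = y - c * x**a                       # residuals
        dc = np.mean(-2.0 * r * x**a)
        da = np.mean(-2.0 * r * c * x**a * np.log(x))
        return np.array([dc, da])

    p = np.array([1.0, 2.0])                   # start values for c and a
    eps = 0.0001                               # learning rate for this data
    for _ in range(200000):
        g = mse_gradient(p)
        if np.linalg.norm(g) < 0.1:            # terminating condition
            break
        p -= eps * g
    print(f'factor c = {p[0]:.2f}, exponent a = {p[1]:.2f}')

The recovered exponent should land close to three, the value used to generate the data.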
@@ -573,12 +574,21 @@ our tiger data-set (\figref{powergradientdescentfig}):
   data together with the best fitting power-law \eqref{powerfunc}.
 \end{exercise}
 
+Note that in our specific example of tiger sizes and weights, the
+simulated data look at first glance as if they were linearly related
+(\figref{cubicdatafig}). The true cubic relation between weights and
+sizes is not that obvious, because of the limited range of tiger
+sizes. Nevertheless, the cost function has a minimum at the bottom of
+a valley that is very narrow in the direction of the exponent
+(\figref{powergradientdescentfig}). The exponent of about three is
+thus clearly defined by the data.
+
 
 \section{Fitting non-linear functions to data}
 
 The gradient descent is a basic numerical method for solving
 optimization problems. It is used to find the global minimum of an
 objective function.
 
 Curve fitting is a specific optimization problem and a common
 application for the gradient descent method. For the case of fitting
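The narrow valley described in the added paragraph can be made visible by evaluating the mean squared error on a parameter grid; a sketch reusing the assumed synthetic data from above:

    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(3)
    x = rng.uniform(2.2, 3.9, 40)              # sizes (assumed range)
    y = 0.9 * x**3 + rng.normal(0.0, 2.0, 40)  # weights, roughly cubic in size

    # mean squared error of y = c * x**a on a grid around the minimum
    cs = np.linspace(0.5, 1.5, 101)
    aa = np.linspace(2.7, 3.3, 101)
    mse = np.array([[np.mean((y - c * x**a)**2) for c in cs] for a in aa])

    i, j = np.unravel_index(np.argmin(mse), mse.shape)
    print(f'grid minimum at c = {cs[j]:.2f}, a = {aa[i]:.2f}')

    # a log-scaled contour plot makes the narrow diagonal valley visible
    plt.contourf(cs, aa, np.log10(mse), 20)
    plt.xlabel('factor c')
    plt.ylabel('exponent a')
    plt.show()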
@@ -650,6 +660,8 @@ however, is not a fixed function. It may change in time by changing
 abiotic and biotic environmental conditions, making this a very
 complex but also interesting optimization problem.
 
+\subsection{Optimal design of neural systems}
+
 How should a neuron or neural network be designed? As a particular
 aspect of the general evolution of a species, this is a fundamental
 question in the neurosciences. Maintaining a neural system is