fixes in a few chapters
This commit is contained in:
parent 5cf23aba85
commit 79f282f7b3
plotting/lecture
programming/lecture
programmingstyle/lecture
regression/lecture
@@ -146,7 +146,7 @@ additional options consult the help.
 The following listing shows a simple line plot with axis labeling and a title
 \lstinputlisting[caption={A simple plot showing a sinewave.},
-label=niceplotlisting]{simple_plot.m}
+label=simpleplotlisting]{simple_plot.m}


 \subsection{Changing properties of a line plot}

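The listing referenced in this hunk (simple_plot.m) is MATLAB and is not shown here. As a rough sketch of what such a labeled sine plot looks like, here is an equivalent in Python/matplotlib; the axis labels and output filename are illustrative assumptions, not taken from the listing itself:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so the script runs headless
import matplotlib.pyplot as plt

# sample one period of a sine wave
x = np.linspace(0.0, 2.0 * np.pi, 200)
y = np.sin(x)

fig, ax = plt.subplots()
ax.plot(x, y)
ax.set_xlabel("x")       # axis labeling (labels are assumed, not from the listing)
ax.set_ylabel("sin(x)")
ax.set_title("A simple plot showing a sinewave")
fig.savefig("simple_plot.png")  # hypothetical output filename
```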
@@ -40,14 +40,12 @@ variable.
 \begin{figure}
 \centering
 \begin{subfigure}{.5\textwidth}
-\includegraphics[width=0.8\textwidth]{variable}
-\label{variable:a}
+\includegraphics[width=0.8\textwidth]{variable}\label{variable:a}
 \end{subfigure}%
 \begin{subfigure}{.5\textwidth}
-\includegraphics[width=.8\textwidth]{variableB}
-\label{variable:b}
+\includegraphics[width=.8\textwidth]{variableB}\label{variable:b}
 \end{subfigure}
-\titlecaption{Variables} point to a memory
+\titlecaption{Variables}{ point to a memory
 address. They further are described by their name and
 data type. The variable's value is stored as a pattern of binary
 values (0 or 1). When reading the variable this pattern is
@@ -630,7 +628,7 @@ matrix). The function \code{cat()} allows to concatenate n-dimensional
 matrices.

 To request the length of a vector we used the function
-\code{length()}. This function is \tetbf{not} suited to request
+\code{length()}. This function is \textbf{not} suited to request
 information about the size of a matrix. As mentioned above,
 \code{length()} would return the length of the largest dimension. The
 function \code{size()} however, returns the length in each dimension
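The length()-vs-size() distinction in the paragraph above has a direct analogue in Python/NumPy (a sketch for illustration only; the lecture's own code is MATLAB): the largest dimension plays the role of MATLAB's length(), while the shape tuple plays the role of size():

```python
import numpy as np

m = np.zeros((3, 5))   # a 3x5 matrix

# MATLAB's length(m) returns the length of the largest dimension:
print(max(m.shape))    # 5

# MATLAB's size(m) returns the length in each dimension:
print(m.shape)         # (3, 5)
```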
@@ -345,7 +345,7 @@ access (read or write) variables of the calling function. Interaction
 with the local function requires to pass all required arguments and to
 take care of the return values of the function.

-\emp{Nested functions} are different in this respect. They are
+\emph{Nested functions} are different in this respect. They are
 defined within the body of the parent function (between the keywords
 \code{function} and \code{end}) and have full access to all variables
 defined in the parent function. Working (in particular changing) the
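The paragraph in this hunk describes MATLAB nested functions, which have full access to the parent function's variables. Python closures behave similarly (a sketch in Python, since the lecture's MATLAB sources are not reproduced here): an inner function can read the enclosing function's variables, and with `nonlocal` it can also change them:

```python
def parent():
    counter = 0  # variable of the parent function

    def nested():
        # full access to the parent's variables, including changing them
        nonlocal counter
        counter += 1
        return counter

    nested()
    nested()
    return counter

print(parent())  # 2
```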
@@ -86,7 +86,7 @@ large deviations.
 $f_{cost}(\{(x_i, y_i)\}|\{y^{est}_i\})$ is a so called
 \enterm{objective function} or \enterm{cost function}. We aim to adapt
 the model parameters to minimize the error (mean square error) and
-thus the \emph{objective function}. In Chapter~\ref{maximumlikelihood}
+thus the \emph{objective function}. In Chapter~\ref{maximumlikelihoodchapter}
 we will show that the minimization of the mean square error is
 equivalent to maximizing the likelihood that the observations
 originate from the model (assuming a normal distribution of the data
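The cost function named in this hunk is the mean square error between observations $y_i$ and model predictions $y^{est}_i$. A minimal sketch in Python (the function name and sample values are illustrative, not the lecture's code):

```python
import numpy as np

def mean_square_error(y, y_est):
    """Mean square error between observations y and model predictions y_est."""
    y, y_est = np.asarray(y), np.asarray(y_est)
    return np.mean((y - y_est) ** 2)

# a model that is off by exactly 1 everywhere has MSE 1:
print(mean_square_error([1.0, 2.0, 3.0], [2.0, 3.0, 4.0]))  # 1.0
```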
@@ -270,7 +270,7 @@ The gradient is given by partial derivatives
 (Box~\ref{partialderivativebox}) with respect to the parameters $m$
 and $b$ of the linear equation. There is no need to calculate it
 analytically but it can be estimated from the partial derivatives
-using the difference quotient (Box~\ref{differentialquotient}) for
+using the difference quotient (Box~\ref{differentialquotientbox}) for
 small steps $\Delta m$ und $\Delta b$. For example the partial
 derivative with respect to $m$:
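The hunk above describes estimating the gradient of the cost function with the difference quotient for small steps $\Delta m$ and $\Delta b$. A minimal sketch in Python, assuming the straight-line model $y = m \cdot x + b$ and the mean square error cost from the surrounding text (step size and sample data are assumptions):

```python
import numpy as np

def cost(m, b, x, y):
    """Mean square error of the line m*x + b against the data (x, y)."""
    return np.mean((y - (m * x + b)) ** 2)

def gradient(m, b, x, y, h=1e-5):
    """Estimate (dC/dm, dC/db) with the difference quotient, step size h."""
    dm = (cost(m + h, b, x, y) - cost(m, b, x, y)) / h
    db = (cost(m, b + h, x, y) - cost(m, b, x, y)) / h
    return dm, db

# noise-free data on the line y = 2x + 1; at the true parameters the
# cost is minimal, so the estimated gradient should be close to zero:
x = np.linspace(0.0, 1.0, 20)
y = 2.0 * x + 1.0
dm, db = gradient(2.0, 1.0, x, y)
print(abs(dm) < 1e-3, abs(db) < 1e-3)  # True True
```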