fixes in a few chapters
@@ -86,7 +86,7 @@ large deviations.
$f_{cost}(\{(x_i, y_i)\}|\{y^{est}_i\})$ is a so-called
\enterm{objective function} or \enterm{cost function}. We aim to adapt
the model parameters to minimize the error (mean square error) and
thus the \emph{objective function}. In Chapter~\ref{maximumlikelihoodchapter}
we will show that minimizing the mean square error is
equivalent to maximizing the likelihood that the observations
originate from the model (assuming a normal distribution of the data
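As a minimal sketch of the objective function described above, the mean square error can be computed from data and model predictions as follows (the data values, slope $m$, and intercept $b$ below are illustrative, not from the text):

```python
import numpy as np

# illustrative data points (x_i, y_i):
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 3.9, 6.2, 8.1])

# straight-line model with assumed parameters m (slope) and b (intercept):
m, b = 2.0, 0.0
y_est = m * x + b  # model predictions y^est_i

# mean square error: the objective (cost) function to be minimized
f_cost = np.mean((y - y_est) ** 2)
print(f_cost)
```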
@@ -270,7 +270,7 @@ The gradient is given by partial derivatives
(Box~\ref{partialderivativebox}) with respect to the parameters $m$
and $b$ of the linear equation. There is no need to calculate it
analytically; it can be estimated numerically using the difference
quotient (Box~\ref{differentialquotientbox}) for small steps
$\Delta m$ and $\Delta b$. For example, the partial derivative
with respect to $m$:
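The numerical estimate of a partial derivative via the difference quotient can be sketched as follows (the data and parameter values are illustrative assumptions, not from the text):

```python
import numpy as np

def f_cost(m, b, x, y):
    """Mean square error of the straight line y = m*x + b."""
    return np.mean((y - (m * x + b)) ** 2)

# illustrative data and parameter values:
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 3.9, 6.2, 8.1])
m, b = 2.0, 0.0

# difference quotient for a small step Delta m approximates
# the partial derivative of the cost function with respect to m:
dm = 1e-6
dfdm = (f_cost(m + dm, b, x, y) - f_cost(m, b, x, y)) / dm
print(dfdm)
```

The same construction with a small step $\Delta b$ yields the partial derivative with respect to $b$; together the two estimates form the gradient used for the descent.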