[regression] objective function section done
@@ -41,7 +41,7 @@ def plot_data_fac(ax, x, y, c):
 if __name__ == "__main__":
     x, y, c = create_data()
-    print(len(x))
+    print('n=%d' % len(x))
     fig, (ax1, ax2) = plt.subplots(1, 2)
     fig.subplots_adjust(wspace=0.5, **adjust_fs(fig, left=6.0, right=1.5))
     plot_data(ax1, x, y)
@@ -42,17 +42,6 @@
 
 \subsection{2D fit}
 
-\begin{exercise}{meanSquaredError.m}{}
-Implement the objective function \eqref{mseline} as a function
-\varcode{meanSquaredError()}. The function takes three
-arguments. The first is a vector of $x$-values and the second
-contains the measurements $y$ for each value of $x$. The third
-argument is a 2-element vector that contains the values of
-parameters \varcode{m} and \varcode{b}. The function returns the
-mean square error.
-\end{exercise}
-
-
 \begin{exercise}{errorSurface.m}{}\label{errorsurfaceexercise}
 Generate 20 data pairs $(x_i|y_i)$ that are linearly related with
 slope $m=0.75$ and intercept $b=-40$, using \varcode{rand()} for
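The removed exercise above asks for a function that maps a data set and the two line parameters to the mean squared error. The book's exercises target MATLAB files, but the idea can be sketched in Python/NumPy as well; the function and variable names below are illustrative, not the book's code:

```python
import numpy as np

def mean_squared_error(x, y, parameters):
    """Mean squared error between measurements y and the line m*x + b.

    parameters is a 2-element vector holding the slope m and the
    intercept b, mirroring the third argument described in the exercise.
    """
    m, b = parameters
    y_est = m * np.asarray(x) + b
    return np.mean((np.asarray(y) - y_est) ** 2)

# Noise-free data generated with slope m = 0.75 and intercept b = -40,
# the values used in the errorSurface exercise: the cost vanishes at
# the true parameters and is positive everywhere else.
x = np.linspace(0.0, 10.0, 20)
y = 0.75 * x - 40.0
print(mean_squared_error(x, y, [0.75, -40.0]))  # -> 0.0
```

With noisy data the minimum no longer reaches zero, but it still sits near the true parameter pair, which is what the error-surface exercise visualizes.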
@@ -109,10 +109,10 @@ The mean squared error is a so called \enterm{objective function} or
 \enterm{cost function} (\determ{Kostenfunktion}). A cost function
 assigns to a model prediction $\{y^{est}(x_i)\}$ for a given data set
 $\{(x_i, y_i)\}$ a single scalar value that we want to minimize. Here
-we aim to adapt the model parameters to minimize the mean squared
-error \eqref{meansquarederror}. In general, the \enterm{cost function}
-can be any function that describes the quality of the fit by mapping
-the data and the predictions to a single scalar value.
+we aim to adapt the model parameter to minimize the mean squared error
+\eqref{meansquarederror}. In general, the \enterm{cost function} can
+be any function that describes the quality of a fit by mapping the
+data and the predictions to a single scalar value.
 
 \begin{figure}[t]
   \includegraphics{cubicerrors}
@@ -125,25 +125,23 @@ the data and the predictions to a single scalar value.
 \end{figure}
 
 Replacing $y^{est}$ in the mean squared error \eqref{meansquarederror}
-with our model, the straight line \eqref{straightline}, the cost
-function reads
+with our cubic model \eqref{cubicfunc}, the cost function reads
 \begin{eqnarray}
-  f_{cost}(m,b|\{(x_i, y_i)\}) & = & \frac{1}{N} \sum_{i=1}^N (y_i - f(x_i;m,b))^2 \label{msefunc} \\
-  & = & \frac{1}{N} \sum_{i=1}^N (y_i - m x_i - b)^2 \label{mseline}
+  f_{cost}(c|\{(x_i, y_i)\}) & = & \frac{1}{N} \sum_{i=1}^N (y_i - f(x_i;c))^2 \label{msefunc} \\
+  & = & \frac{1}{N} \sum_{i=1}^N (y_i - c x_i^3)^2 \label{msecube}
 \end{eqnarray}
-The optimization process tries to find the slope $m$ and the intercept
-$b$ such that the cost function is minimized. With the mean squared
-error as the cost function this optimization process is also called
-method of the \enterm{least square error} (\determ[quadratischer
+The optimization process tries to find a value for the factor $c$ such
+that the cost function is minimized. With the mean squared error as
+the cost function this optimization process is also called method of
+\enterm{least squares} (\determ[quadratischer
 Fehler!kleinster]{Methode der kleinsten Quadrate}).
 
-\begin{exercise}{meanSquaredError.m}{}
-Implement the objective function \eqref{mseline} as a function
-\varcode{meanSquaredError()}. The function takes three
+\begin{exercise}{meanSquaredErrorCubic.m}{}
+Implement the objective function \eqref{msecube} as a function
+\varcode{meanSquaredErrorCubic()}. The function takes three
 arguments. The first is a vector of $x$-values and the second
 contains the measurements $y$ for each value of $x$. The third
-argument is a 2-element vector that contains the values of
-parameters \varcode{m} and \varcode{b}. The function returns the
+argument is the value of the factor $c$. The function returns the
 mean squared error.
 \end{exercise}
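The least-squares idea in the new text can be sketched in a few lines: evaluate the cost \eqref{msecube} for a range of candidate factors and keep the one that minimizes it. A hedged Python/NumPy illustration (the exercise itself asks for a MATLAB file; the names and the choice of a simple grid search here are illustrative):

```python
import numpy as np

def mean_squared_error_cubic(x, y, c):
    # mean squared error for the cubic model y_est = c * x**3
    return np.mean((y - c * x**3) ** 2)

rng = np.random.default_rng(0)
x = 5.0 * rng.random(40)                    # random x positions
y = 2.0 * x**3 + rng.normal(0.0, 1.0, 40)   # cubic data, true factor c = 2

# Least squares by brute force: evaluate the cost on a grid of
# candidate factors and take the one with the smallest cost.
cs = np.linspace(0.5, 4.0, 351)
errors = [mean_squared_error_cubic(x, y, c) for c in cs]
c_best = cs[np.argmin(errors)]
print(c_best)   # close to the true factor 2.0
```

The grid search stands in for the gradient-based minimization developed later in the chapter; both rely on the same scalar cost function.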