[regression] all equations are numbered
parent 14160e2f8d · commit a7e64a5c6d
@@ -16,6 +16,11 @@
 \include{regression}
+
+\subsection{Notes}
+\begin{itemize}
+\item Fig 8.2 right: this should be a chi-squared distribution with one degree of freedom!
+\end{itemize}
 
 \subsection{Start with one-dimensional problem!}
 \begin{itemize}
 \item Just the root mean square as a function of the slope
@@ -283,10 +283,11 @@ the partial derivatives using the difference quotient
 (Box~\ref{differentialquotientbox}) for small steps $\Delta m$ and
 $\Delta b$. For example, the partial derivative with respect to $m$
 can be computed as
-\[\frac{\partial f_{cost}(m,b)}{\partial m} = \lim\limits_{\Delta m \to
+\begin{equation}
+\frac{\partial f_{cost}(m,b)}{\partial m} = \lim\limits_{\Delta m \to
 0} \frac{f_{cost}(m + \Delta m, b) - f_{cost}(m,b)}{\Delta m}
-\approx \frac{f_{cost}(m + \Delta m, b) - f_{cost}(m,b)}{\Delta m} \;
-. \]
+\approx \frac{f_{cost}(m + \Delta m, b) - f_{cost}(m,b)}{\Delta m} \; .
+\end{equation}
 The length of the gradient indicates the steepness of the slope
 (\figref{gradientquiverfig}). Since we want to go down the hill, we
 choose the opposite direction.
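For illustration, a minimal sketch of this finite-difference approximation in Python with NumPy (the data x, y and the mean-squared-error cost f_cost are assumptions for this example; the source defines its own cost function):

    import numpy as np

    # Hypothetical example data scattered around the line y = 2x + 1.
    x = np.arange(0.0, 10.0, 0.1)
    y = 2.0 * x + 1.0 + np.random.randn(len(x))

    def f_cost(m, b):
        # Mean squared error between the data and the line m*x + b.
        return np.mean((y - (m * x + b)) ** 2)

    def gradient(m, b, dm=1e-4, db=1e-4):
        # Difference quotient for small steps dm and db approximates
        # the partial derivatives of f_cost with respect to m and b.
        dfdm = (f_cost(m + dm, b) - f_cost(m, b)) / dm
        dfdb = (f_cost(m, b + db) - f_cost(m, b)) / db
        return np.array([dfdm, dfdb])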
@@ -341,7 +342,9 @@ descent works as follows:
 sufficiently close to zero (e.g. \varcode{norm(gradient) < 0.1}).
 \item \label{gradientstep} If the length of the gradient exceeds the
 threshold, we take a small step in the opposite direction:
-\[p_{i+1} = p_i - \epsilon \cdot \nabla f_{cost}(m_i, b_i)\]
+\begin{equation}
+p_{i+1} = p_i - \epsilon \cdot \nabla f_{cost}(m_i, b_i)
+\end{equation}
 where $\epsilon = 0.01$ is a factor linking the gradient to
 appropriate steps in the parameter space.
 \item Repeat steps \ref{computegradient} -- \ref{gradientstep}.
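Putting these steps together, the descent loop might look like the following sketch (assuming the gradient helper from the previous sketch; the factor ε = 0.01 and the threshold 0.1 are taken from the text):

    def gradient_descent(m0, b0, eps=0.01, threshold=0.1):
        # Start at some initial position p = (m, b) in parameter space.
        p = np.array([m0, b0])
        g = gradient(p[0], p[1])
        # Stop once the length of the gradient is sufficiently close to zero.
        while np.linalg.norm(g) >= threshold:
            # Take a small step in the direction opposite to the gradient.
            p = p - eps * g
            g = gradient(p[0], p[1])
        return p

    m_fit, b_fit = gradient_descent(1.0, 0.0)

Because the step is proportional to the gradient's length, the updates automatically shrink as the minimum is approached.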