From 49f5687bfe1cab561039c6e443be4c1a02a389b6 Mon Sep 17 00:00:00 2001
From: Jan Benda
Date: Mon, 17 Dec 2018 12:08:25 +0100
Subject: [PATCH] [regression] improved exercise

---
 regression/exercises/exercises01.tex | 28 ++++++++++++++++------------
 1 file changed, 16 insertions(+), 12 deletions(-)

diff --git a/regression/exercises/exercises01.tex b/regression/exercises/exercises01.tex
index 043ee93..a58e3da 100644
--- a/regression/exercises/exercises01.tex
+++ b/regression/exercises/exercises01.tex
@@ -58,18 +58,20 @@
 
 \begin{questions}
 
-  \question Implement the gradient descent for finding the parameters
-  of a straigth line \[ y = mx+b \] that we want to fit to the data in
-  the file \emph{lin\_regression.mat}.
-
-  In the lecture we already prepared most of the necessary functions:
-  1. the cost function (\code{lsqError()}), and 2. the gradient
-  (\code{lsqGradient()}). Read chapter 8 ``Optimization and gradient
-  descent'' in the script, in particular section 8.4 and exercise 8.4!
+  \question We want to fit the straight line \[ y = mx+b \] to the
+  data in the file \emph{lin\_regression.mat}.
+
+  In the lecture we already prepared the cost function
+  (\code{lsqError()}) and the gradient (\code{lsqGradient()}) (read
+  chapter 8 ``Optimization and gradient descent'' in the script, in
+  particular section 8.4 and exercise 8.4!). With these functions in
+  place we now want to implement a gradient descent algorithm that
+  finds the minimum of the cost function and thus the slope and
+  intercept of the straight line that minimizes the squared distance
+  to the data values.
 
   The algorithm for the descent towards the minimum of the cost
   function is as follows:
-
   \begin{enumerate}
   \item Start with some arbitrary parameter values (intercept $b_0$
     and slope $m_0$, $\vec p_0 = (b_0, m_0)$ for the slope and the
@@ -106,9 +108,11 @@
     \lstinputlisting{../code/descentfit.m}
   \end{solution}
 
-  \part Find the position of the minimum of the cost function by
-  means of the \code{min()} function. Compare with the result of the
-  gradient descent method. Vary the value of $\epsilon$ and the
+  \part To check the gradient descent method from (a), compare its
+  result for slope and intercept with the position of the minimum of
+  the cost function that you obtain by computing the cost function
+  for many values of slope and intercept and then applying the
+  \code{min()} function. Vary the value of $\epsilon$ and the
   minimum gradient. What are good values such that the gradient
   descent gets closest to the true minimum of the cost function?
   \begin{solution}
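As a rough illustration of the gradient descent loop that the revised exercise asks for, here is a minimal MATLAB sketch; it is not the reference solution in ../code/descentfit.m. It assumes the lecture functions lsqError(p, x, y) and lsqGradient(p, x, y) with a parameter vector p = (b, m) as in the exercise text; the exact signatures, the start values, the stopping criteria, and the variable names stored in lin_regression.mat are assumptions.

% Minimal sketch of the gradient descent for the straight line y = m*x + b.
% Assumptions: lsqError(p, x, y) returns the squared-error cost and
% lsqGradient(p, x, y) its gradient for p = [b, m] (intercept, slope);
% lin_regression.mat is assumed to provide vectors x and y.
load('lin_regression.mat');

p = [10.0, -2.0];                    % arbitrary start values [b0, m0]
eps = 0.01;                          % learning rate epsilon
mingradient = 0.1;                   % stop once the gradient is this small

gradient = lsqGradient(p, x, y);
count = 1;
while norm(gradient) > mingradient && count < 10000
    p = p - eps * gradient;          % step against the gradient
    gradient = lsqGradient(p, x, y);
    count = count + 1;
end

fprintf('intercept b = %.3f, slope m = %.3f\n', p(1), p(2));
fprintf('final cost  = %.3f\n', lsqError(p, x, y));

Varying eps and mingradient in such a loop is exactly what part (b) asks the students to explore.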
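A similarly hedged sketch of the brute-force check described in part (b): evaluate the cost function on a grid of intercepts and slopes and locate the smallest value with min(). The grid ranges and step sizes are arbitrary choices, and the same assumptions about lsqError() apply.

% Brute-force check: evaluate the cost on a grid of intercepts and
% slopes and find the minimum with min().
load('lin_regression.mat');

intercepts = -30:1:30;
slopes = -5:0.25:5;
costs = zeros(length(intercepts), length(slopes));
for i = 1:length(intercepts)
    for j = 1:length(slopes)
        costs(i, j) = lsqError([intercepts(i), slopes(j)], x, y);
    end
end

[mincost, idx] = min(costs(:));      % minimum over the whole matrix
[i, j] = ind2sub(size(costs), idx);  % back to row/column indices
fprintf('grid minimum: b = %.3f, m = %.3f, cost = %.3f\n', ...
        intercepts(i), slopes(j), mincost);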