diff --git a/regression/exercises/exercises01.tex b/regression/exercises/exercises01.tex
index 6d9e335..9e7c5b3 100644
--- a/regression/exercises/exercises01.tex
+++ b/regression/exercises/exercises01.tex
@@ -62,13 +62,17 @@
   data in the file \emph{lin\_regression.mat}.
 
   In the lecture we already prepared the cost function
-  (\code{lsqError()}), and the gradient (\code{lsqGradient()}) (read
-  chapter 8 ``Optimization and gradient descent'' in the script, in
-  particular section 8.4 and exercise 8.4!). With these functions in
-  place we here want to implement a gradient descend algorithm that
-  finds the minimum of the cost function and thus the slope and
-  intercept of the straigth line that minimizes the squared distance
-  to the data values.
+  (\code{meanSquaredError()}), and the gradient
+  (\code{meanSquaredGradient()}) (read chapter 8 ``Optimization and
+  gradient descent'' in the script, in particular section 8.4 and
+  exercise 8.4!). With these functions in place we now want to
+  implement a gradient descent algorithm that finds the minimum of the
+  cost function and thus the slope and intercept of the straight line
+  that minimizes the squared distance to the data values.
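+  The cost function here is the mean squared distance between the data
+  values $y_i$ and the corresponding points on the line, that is
+  \[ f_{cost}(m, b) = \frac{1}{N} \sum_{i=1}^{N} \left( y_i - (m \cdot x_i + b) \right)^2 \]
+  for the $N$ data points $(x_i, y_i)$.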
 
   The algorithm for the descent towards the minimum of the cost
   function is as follows:
@@ -86,7 +86,21 @@
     why we just require the gradient to be sufficiently small
     (e.g. \code{norm(gradient) < 0.1}).
   \item \label{gradientstep} Move against the gradient by a small step
-    ($\epsilon = 0.01$):
+    $\epsilon = 0.01$:
     \[\vec p_{i+1} = \vec p_i - \epsilon \cdot \nabla f_{cost}(m_i, b_i)\]
   \item Repeat steps \ref{computegradient} -- \ref{gradientstep}.
   \end{enumerate}
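+
+  As an orientation, such a loop could look like the following sketch.
+  It assumes that \code{meanSquaredGradient()} takes the data vectors
+  \code{x} and \code{y} and a parameter vector \code{p = [m, b]};
+  adapt it to the actual signature of your function:
+\begin{verbatim}
+p = [-2.0, 10.0];   % some starting values for slope m and intercept b
+epsilon = 0.01;     % step size
+gradient = meanSquaredGradient(x, y, p);
+while norm(gradient) > 0.1    % stop when the gradient is sufficiently small
+    p = p - epsilon * gradient;               % step against the gradient
+    gradient = meanSquaredGradient(x, y, p);  % recompute the gradient
+end
+\end{verbatim}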
diff --git a/regression/exercises/lin_regression.mat b/regression/exercises/lin_regression.mat
new file mode 100644
index 0000000..6a21622
Binary files /dev/null and b/regression/exercises/lin_regression.mat differ