%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\chapter{Descriptive statistics}
\label{descriptivestatisticschapter}
\exercisechapter{Descriptive statistics}

Descriptive statistics characterizes data sets by means of a few
measures. In addition to histograms that estimate the full
distribution of the data, the following measures are used for
characterizing univariate data:
\begin{description}
\item[Location, central tendency] (\determ{Lagema{\ss}e}):
  \entermde[mean!arithmetic]{Mittel!arithmetisches}{arithmetic mean},
  \entermde{Median}{median}, \enterm{mode}.
\item[Spread, dispersion] (\determ{Streuungsma{\ss}e}):
  \entermde{Varianz}{variance},
  \entermde{Standardabweichung}{standard deviation}, inter-quartile
  range,\linebreak \enterm{coefficient of variation}
  (\determ{Variationskoeffizient}).
\item[Shape]: \enterm{skewness} (\determ{Schiefe}), \enterm{kurtosis}
  (\determ{W\"olbung}).
\end{description}
For bivariate and multivariate data sets we can also analyse their
\begin{description}
\item[Dependence, association] (\determ{Zusammenhangsma{\ss}e}):
  \entermde[correlation!coefficient!Pearson's]{Korrelation!Pearson}{Pearson's correlation coefficient},
  \entermde[correlation!coefficient!Spearman's rank]{{Rangkorrelationskoeffizient!Spearman'scher}}{Spearman's rank correlation coefficient}.
\end{description}

The following is by no means a complete introduction to descriptive
statistics, but summarizes a few concepts that are most important in
daily data-analysis problems.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Mean, variance, and standard deviation}

The \entermde[mean!arithmetic]{Mittel!arithmetisches}{arithmetic mean}
is a measure of location. For $n$ data values $x_i$ the arithmetic
mean is computed by
\[ \bar x = \langle x \rangle = \frac{1}{n}\sum_{i=1}^n x_i \; . \]
This computation (summing up all elements of a vector and dividing by
the length of the vector) is provided by the function
\mcode{mean()}. The mean has the same unit as the data values.

The dispersion of the data values around the mean is quantified by
their \entermde{Varianz}{variance}
\[ \sigma^2_x = \langle (x-\langle x \rangle)^2 \rangle = \frac{1}{n}\sum_{i=1}^n (x_i - \bar x)^2 \; . \]
The variance is computed by the function \mcode{var()}. The unit of
the variance is the unit of the data values squared. Therefore,
variances cannot be compared to the mean or the data values
themselves. In particular, variances cannot be used for plotting
error bars along with the mean.

In contrast to the variance, the
\entermde{Standardabweichung}{standard deviation}
\[ \sigma_x = \sqrt{\sigma^2_x} \; , \]
as computed by the function \mcode{std()}, has the same unit as the
data values and can (and should) be used to display the dispersion of
the data together with their mean.

The mean of a data set can be displayed by a bar plot
\matlabfun{bar()}. Additional error bars \matlabfun{errorbar()} can be
used to illustrate the standard deviation of the data
(\figref{displayunivariatedatafig} (2)).
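As a minimal sketch, these definitions translate directly into code.
The following lines compute the three measures both directly from the
formulas and with the built-in functions. Note that by default
\mcode{var()} and \mcode{std()} normalize by $n-1$ instead of $n$;
pass \varcode{1} as a second argument to normalize by $n$ as in the
formulas above:
\begin{lstlisting}
x = randn(100, 1) + 2.0;                 % 100 values with mean 2 and standard deviation 1
xmean = sum(x) / length(x);              % arithmetic mean, same as mean(x)
xvar = sum((x - xmean).^2) / length(x);  % variance as defined above, same as var(x, 1)
xstd = sqrt(xvar);                       % standard deviation, same as std(x, 1)
\end{lstlisting}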
\begin{figure}[t]
  \includegraphics[width=1\textwidth]{displayunivariatedata}
  \titlecaption{\label{displayunivariatedatafig} Displaying statistics
    of univariate data.}{(1) In particular for small data sets it is
    most informative to plot the data themselves. The value of each
    data point is plotted on the y-axis. To make the data points
    overlap less, they are jittered along the x-axis by means of
    uniformly distributed random numbers \matlabfun{rand()}. (2) With
    a bar plot \matlabfun{bar()} one usually shows the mean of the
    data. The additional error bar illustrates the deviation of the
    data from the mean by $\pm$ one standard deviation. In case of
    non-normal data, mean and standard deviation only poorly
    characterize the distribution of the data values. (3) A
    box-whisker plot \matlabfun{boxplot()} shows more details of the
    distribution of the data values. The box extends from the
    1$^{\rm st}$ to the 3$^{\rm rd}$ quartile, a horizontal line
    within the box marks the median value, and the whiskers extend to
    the minimum and the maximum data values. (4) The probability
    density $p(x)$ estimated from a normalized histogram shows the
    entire distribution of the data. Estimating the probability
    distribution is only meaningful for sufficiently large data sets.}
\end{figure}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Mode, median, quartile, etc.}

\begin{figure}[t]
  \includegraphics[width=1\textwidth]{median}
  \titlecaption{\label{medianfig} Median, mean and mode of a
    probability distribution.}{Left: Median, mean and mode coincide
    for the symmetric and unimodal normal distribution. Right: For
    asymmetric distributions these three measures differ. A heavy
    tail of a distribution pulls the mean most strongly in its
    direction. The median is more robust against heavy tails, but is
    not necessarily identical with the mode.}
\end{figure}

The \enterm{mode} (\determ{Modus}) is the most frequent value,
i.e. the position of the maximum of the probability distribution.

The \entermde{Median}{median} separates a list of data values into two
halves such that one half of the data is not greater and the other
half is not smaller than the median (\figref{medianfig}). The function
\mcode{median()} computes the median.

\begin{exercise}{mymedian.m}{}
  Write a function \varcode{mymedian()} that computes the median of a
  vector.
\end{exercise}

\begin{exercise}{checkmymedian.m}{}
  Write a script that tests whether your median function really
  returns a median above which lie as many data values as below. In
  particular, the script should test data vectors of different
  lengths. You should not use the \mcode{median()} function for
  testing your function.

  Writing tests for your own functions is a very important strategy
  for writing reliable code!
\end{exercise}

The distribution of data can be further characterized by the position
of its \entermde[quartile]{Quartil}{quartiles}. Neighboring quartiles
are separated by 25\,\% of the data.% (\figref{quartilefig}).
\entermde[percentile]{Perzentil}{Percentiles} allow us to characterize
the distribution of the data in more detail. The 3$^{\rm rd}$ quartile
corresponds to the 75$^{\rm th}$ percentile, because 75\,\% of the
data are smaller than the 3$^{\rm rd}$ quartile.
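Quartiles and percentiles can be computed with the \code{prctile()}
function from the Statistics Toolbox. A minimal sketch (note that
\code{prctile()} interpolates between data values, so other
definitions of quartiles may yield slightly different values):
\begin{lstlisting}
x = randn(200, 1);              % some normally distributed data
q = prctile(x, [25, 50, 75]);   % 1st quartile, median, and 3rd quartile
iqrange = q(3) - q(1);          % inter-quartile range
m = median(x);                  % the 50th percentile is the median, same as q(2)
\end{lstlisting}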
% \begin{definition}[quartile]
%   The quartiles Q1, Q2, and Q3 divide the data into four equally
%   sized groups, each containing a quarter of the data. The middle
%   quartile corresponds to the median.
% \end{definition}

% \begin{exercise}{quartiles.m}{}
%   Write a function that computes the first, second, and third
%   quartile of a vector.
% \end{exercise}

% \begin{figure}[t]
%   \includegraphics[width=1\textwidth]{boxwhisker}
%   \titlecaption{\label{boxwhiskerfig} Box-whisker plot.}{Box-whisker
%     plots are well suited for comparing unimodal distributions. Each
%     box-whisker characterizes 40 random numbers that have been drawn
%     from a normal distribution.}
% \end{figure}

\entermde[box-whisker plots]{Box-Whisker-Plot}{Box-whisker plots}, or
\entermde[box plot]{Box-Plot}{box plots}, are commonly used to
visualize and compare the distribution of unimodal data. A box is
drawn around the median that extends from the 1$^{\rm st}$ to the
3$^{\rm rd}$ quartile. The whiskers mark the minimum and maximum
values of the data set (\figref{displayunivariatedatafig} (3)).

% \begin{figure}[t]
%   \includegraphics[width=1\textwidth]{quartile}
%   \titlecaption{\label{quartilefig} Median and quartiles of a normal
%     distribution.}{The interquartile range between the first and the
%     third quartile contains 50\,\% of the data and contains the
%     median.}
% \end{figure}

% \begin{exercise}{boxwhisker.m}{}
%   Generate a $40 \times 10$ matrix of random numbers and illustrate
%   their distribution in a box-whisker plot (\code{boxplot()}
%   function). How do you interpret the plot?
% \end{exercise}

\section{Distributions}

The \enterm{distribution} (\determ{Verteilung}) of values in a data
set is estimated by histograms (\figref{displayunivariatedatafig}
(4)).

\subsection{Histograms}

\entermde[histogram]{Histogramm}{Histograms} count the frequency $n_i$
of $N=\sum_{i=1}^M n_i$ measurements in each of $M$ bins $i$
(\figref{diehistogramsfig} left). The bins tile the data range,
usually into intervals of equal size. The width of the bins is called
the bin width. The frequencies $n_i$ plotted against the categories
$i$ form the \enterm{histogram}, or \enterm{frequency histogram}.

\begin{figure}[t]
  \includegraphics[width=1\textwidth]{diehistograms}
  \titlecaption{\label{diehistogramsfig} Histograms resulting from 100
    or 500 times rolling a die.}{Left: The absolute frequency
    histogram counts the frequency of each number the die shows.
    Right: When normalized by the sum of the frequency histogram, the
    two data sets become comparable with each other and with the
    expected theoretical distribution of $P=1/6$.}
\end{figure}

Histograms are often used to estimate the
\enterm[probability!distribution]{probability distribution}
(\determ[Wahrscheinlichkeits!-verteilung]{Wahrscheinlichkeitsverteilung})
of the data values.

\begin{exercise}{univariatedata.m}{}
  Generate 40 normally distributed random numbers with a mean of 2
  and illustrate their distribution in a box-whisker plot
  (\code{boxplot()} function), with a bar and error bar illustrating
  the mean and standard deviation (\code{bar()}, \code{errorbar()}),
  and the data themselves jittered randomly (as in
  \figref{displayunivariatedatafig}). How do you interpret the
  different plots?
\end{exercise}

\subsection{Probabilities}

In the frequentist interpretation of probability, the
\enterm{probability} (\determ{Wahrscheinlichkeit}) of an event
(e.g. getting a six when rolling a die) is the relative occurrence of
this event in the limit of a large number of trials.

For a finite number of trials $N$ in which the event $i$ occurred
$n_i$ times, the probability $P_i$ of this event is estimated by
\[ P_i = \frac{n_i}{N} = \frac{n_i}{\sum_{j=1}^M n_j} \; . \]
From this definition it follows that a probability is a unitless
quantity that takes on values between zero and one. Most importantly,
the sum of the probabilities of all $M$ possible events is one:
\[ \sum_{i=1}^M P_i = \sum_{i=1}^M \frac{n_i}{N} = \frac{1}{N} \sum_{i=1}^M n_i = \frac{N}{N} = 1 \; , \]
i.e. the probability of getting any of the possible events is one.
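For illustration, the probability of getting heads when flipping a
fair coin can be estimated from the relative frequency of heads in a
simulation (a minimal sketch; for many trials the estimate approaches
the true value $P=1/2$):
\begin{lstlisting}
n = 10000;                  % number of trials
flips = rand(n, 1) < 0.5;   % n fair coin flips: 1 is heads, 0 is tails
pheads = sum(flips) / n;    % relative frequency of heads, estimates P = 1/2
\end{lstlisting}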
\subsection{Probability distributions of categorical data}

For \entermde[data!categorical]{Daten!kategorische}{categorical} data
values (e.g. the faces of a die, represented as integer numbers or as
colors) a bin can be defined for each category $i$. The histogram is
normalized by the total number of measurements to make it independent
of the size of the data set (\figref{diehistogramsfig}). After this
normalization, the height of each histogram bar is an estimate of the
probability $P_i$ of the category $i$, i.e. of getting a data value
in the $i$-th bin.

\begin{exercise}{rollthedie.m}{}
  Write a function that simulates rolling a die $n$ times.
\end{exercise}

\begin{exercise}{diehistograms.m}{}
  Plot histograms of rolling a die 20, 100, and 1000 times. Use
  \code[hist()]{hist(x)}, enforce six bins with
  \code[hist()]{hist(x,6)}, or set useful bins yourself. Normalize
  the histograms appropriately.
\end{exercise}

\subsection{Probability density functions}

When we deal with
\entermde[data!continuous]{Daten!kontinuierliche}{continuous data}
(measurements of real-valued quantities, e.g. lengths of snakes,
weights of elephants, or times between successive spikes), there is
no natural bin width for computing a histogram. In addition, the
probability of measuring a data value that equals exactly a specific
real number like, e.g., 0.123456789 is zero, because there are
uncountably many real numbers. We can only ask for the probability to
get a measurement value within some range. For example, we can ask
for the probability $P(0 < x < 1)$ to get a measurement value $x$
between zero and one.
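A normalized histogram provides an estimate of such probabilities.
The following sketch (the bin width of $0.5$ and the bin positions
are an arbitrary choice) normalizes the histogram counts by the total
number of measurements times the bin width, resulting in an estimate
of the probability density $p(x)$, and then sums the density over the
bins between zero and one:
\begin{lstlisting}
x = randn(1000, 1);             % 1000 measurements of a real-valued quantity
binwidth = 0.5;
centers = -3.75:binwidth:3.75;  % bin centers; the bins tile the range -4 to 4
counts = hist(x, centers);      % hist() takes bin centers, not edges
p = counts / (sum(counts) * binwidth);  % estimate of the probability density
% probability of a measurement value between 0 and 1:
P01 = sum(p((centers > 0) & (centers < 1))) * binwidth;
\end{lstlisting}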