%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Maximum Likelihood}

Let $p(x|\theta)$ (to be read as ``probability (density) of $x$ given
$\theta$'') be the probability (density) distribution of $x$ given
the parameters $\theta$. This could be the normal distribution
\begin{equation}
  \label{normpdfmean}
  p(x|\theta) = \frac{1}{\sqrt{2\pi \sigma^2}}e^{-\frac{(x-\theta)^2}{2\sigma^2}}
\end{equation}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Example: the arithmetic mean}

Suppose that the measurements $x_1, x_2, \ldots x_n$ originate from a
normal distribution \eqnref{normpdfmean} and we consider the mean
$\mu=\theta$ as the only parameter of the distribution. Which value
of $\theta$ maximizes the likelihood of the data?

\begin{figure}[t]
  \includegraphics[width=1\textwidth]{mlemean}
  \titlecaption{\label{mlemeanfig} Maximum likelihood estimation of
    the mean.}{Top: The measured data (blue dots) together with three
    possible normal distributions with different means (arrows) from
    which the data could originate.  Bottom left: the likelihood as a
    function of the mean $\theta$. It is maximal at $\theta = 2$.
    Bottom right: the corresponding log-likelihood. Taking the
    logarithm does not change the position of the maximum (arrow).}
\end{figure}

The log-likelihood \eqnref{loglikelihood} is
\begin{eqnarray*}
  \log {\cal L}(\theta|x_1,x_2, \ldots x_n)
  & = & \sum_{i=1}^n \log \left( \frac{1}{\sqrt{2\pi \sigma^2}}e^{-\frac{(x_i-\theta)^2}{2\sigma^2}} \right) \\
  & = & \sum_{i=1}^n - \log \sqrt{2\pi \sigma^2} -\frac{(x_i-\theta)^2}{2\sigma^2} \; .
\end{eqnarray*}
Since the logarithm is the inverse function of the exponential
($\log(e^x)=x$), taking the logarithm removes the exponential from
the normal distribution.  To find the maximum of the log-likelihood,
we take its derivative with respect to $\theta$ and set it to zero:
\begin{eqnarray*}
  \frac{\text{d}}{\text{d}\theta} \log {\cal L}(\theta|x_1,x_2, \ldots x_n) & = & \sum_{i=1}^n \frac{2(x_i-\theta)}{2\sigma^2} \;\; = \;\; 0 \\
  \Leftrightarrow \quad \sum_{i=1}^n x_i - \sum_{i=1}^n \theta & = & 0 \\
  \Leftrightarrow \quad n \theta & = & \sum_{i=1}^n x_i \\
  \Leftrightarrow \quad \theta & = & \frac{1}{n} \sum_{i=1}^n x_i \;\; = \;\; \bar x
\end{eqnarray*}
The maximum likelihood estimator is the arithmetic mean $\bar x$ of
the data. That is, taking the mean of the data as $\theta$ maximizes
the likelihood that the data originate from a normal distribution
with that mean (\figref{mlemeanfig}).

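This result is quickly verified numerically. A minimal sketch (the
sample size, the true mean and standard deviation, and all variable
names are our choices; \texttt{normpdf} is from the Statistics
Toolbox):
\begin{verbatim}
% Sketch: the sample mean maximizes the log-likelihood of a normal
% distribution, here with mu = 2 and sigma = 1 (our choices).
x = randn(50, 1) + 2.0;          % 50 samples drawn around mu = 2
thetas = 0:0.001:4;              % candidate values for the mean
loglik = zeros(size(thetas));
for k = 1:length(thetas)
    % sum of the log probability densities of all samples:
    loglik(k) = sum(log(normpdf(x, thetas(k), 1.0)));
end
[~, imax] = max(loglik);
fprintf('argmax: %.3f, mean: %.3f\n', thetas(imax), mean(x))
\end{verbatim}
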
\begin{exercise}{mlemean.m}{mlemean.out}
  Draw $n=50$ random numbers from a normal distribution with a mean
  $\ne 0$ and a standard deviation $\ne 1$.

  Plot the likelihood (the product of the probabilities) and the
  log-likelihood (the sum of the logarithms of the probabilities) as
  functions of the mean as the parameter. Compare the positions of
  the maxima with the mean calculated from the data.
  \pagebreak[4]
\end{exercise}


\pagebreak[4]
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Curve fitting as maximum-likelihood estimation}

In curve fitting, a function $f(x;\theta)$ with parameters $\theta$
is fitted to the data pairs $(x_i|y_i)$ by adapting $\theta$. If we
assume that the $y_i$ are normally distributed around the function
values $f(x_i;\theta)$ with standard deviations $\sigma_i$, the
log-likelihood is
\begin{eqnarray*}
  \log {\cal L}(\theta|(x_1,y_1,\sigma_1), \ldots, (x_n,y_n,\sigma_n))
  & = & \sum_{i=1}^n \log \left( \frac{1}{\sqrt{2\pi \sigma_i^2}}e^{-\frac{(y_i-f(x_i;\theta))^2}{2\sigma_i^2}} \right) \\
  & = & \sum_{i=1}^n - \log \sqrt{2\pi \sigma_i^2} -\frac{(y_i-f(x_i;\theta))^2}{2\sigma_i^2}
\end{eqnarray*}
The only difference to the previous example is that the means of the
normal distributions are now given by the function values
$f(x_i;\theta)$.

The parameter $\theta$ should be chosen such that the log-likelihood
becomes maximal. The first term of the sum is independent of $\theta$
and can thus be omitted when searching for the maximum:
\begin{eqnarray*}
  & = & - \frac{1}{2} \sum_{i=1}^n \left( \frac{y_i-f(x_i;\theta)}{\sigma_i} \right)^2
\end{eqnarray*}
Instead of searching for the maximum, we can invert the sign of the
log-likelihood and search for the minimum. The factor $1/2$ in front
of the sum can also be dropped, since it does not change the position
of the minimum:
\begin{equation}
  \label{chisqmin}
  \theta_{mle} = \text{argmin}_{\theta} \; \sum_{i=1}^n \left( \frac{y_i-f(x_i;\theta)}{\sigma_i} \right)^2 \;\; = \;\; \text{argmin}_{\theta} \; \chi^2
\end{equation}
The sum of the squared differences, normalized by the respective
standard deviations, is also called $\chi^2$. The value of the
parameter $\theta$ that minimizes the squared differences is thus
identical to the one that maximizes the probability that the data
actually originate from the function. Minimizing $\chi^2$ is
therefore a maximum likelihood estimation.

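The $\chi^2$ cost function is a one-liner in MATLAB (a sketch; the
model is passed as a function handle \texttt{f}, and all names are
our choices):
\begin{verbatim}
% Sketch: chi-squared cost of a model f with parameter theta,
% given data x, y and standard deviations sig:
chisq = @(theta, f, x, y, sig) sum(((y - f(x, theta)) ./ sig).^2);
\end{verbatim}
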
From this derivation we also see that minimizing the squared
difference is a maximum-likelihood estimation only if the data are
normally distributed around the function. For other distributions,
the log-likelihood \eqnref{loglikelihood} needs to be computed
accordingly and maximized.

\begin{figure}[t]
  \includegraphics[width=1\textwidth]{mlepropline}
  \titlecaption{\label{mleproplinefig} Maximum likelihood estimation
    of the slope of a line through the origin.}{}
\end{figure}


\subsection{Example: simple proportionality}
As the function we take a line through the origin
\[ f(x) = \theta x  \]
with slope $\theta$. The $\chi^2$ sum then reads
\[ \chi^2 = \sum_{i=1}^n \left( \frac{y_i-\theta x_i}{\sigma_i} \right)^2 \; . \]
To find the minimum we again take the first derivative with respect
to $\theta$ and set it to zero:
\begin{eqnarray}
  \frac{\text{d}}{\text{d}\theta}\chi^2 & = & \frac{\text{d}}{\text{d}\theta} \sum_{i=1}^n \left( \frac{y_i-\theta x_i}{\sigma_i} \right)^2 \nonumber \\
  & = & \sum_{i=1}^n \frac{\text{d}}{\text{d}\theta} \left( \frac{y_i-\theta x_i}{\sigma_i} \right)^2 \nonumber \\
  & = & -2 \sum_{i=1}^n \frac{x_i(y_i-\theta x_i)}{\sigma_i^2} \;\; = \;\; 0 \nonumber \\
  \Leftrightarrow \quad  \theta \sum_{i=1}^n \frac{x_i^2}{\sigma_i^2} & = & \sum_{i=1}^n \frac{x_iy_i}{\sigma_i^2} \nonumber \\
  \Leftrightarrow \quad  \theta & = & \frac{\sum_{i=1}^n \frac{x_iy_i}{\sigma_i^2}}{ \sum_{i=1}^n \frac{x_i^2}{\sigma_i^2}} \label{mleslope}
\end{eqnarray}
With this we have obtained an analytical expression for the
maximum-likelihood estimate of the slope $\theta$ of the regression
line (\figref{mleproplinefig}).

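Equation \eqnref{mleslope} translates directly into code. A minimal
sketch (the simulated data are our choice):
\begin{verbatim}
% Sketch: analytical maximum-likelihood estimate of the slope of a
% line through the origin.
x = linspace(0.0, 10.0, 40)';
sig = 0.5 * ones(size(x));            % error of each y measurement
y = 2.0 * x + sig .* randn(size(x));  % data scattered around y = 2 x
theta = sum(x .* y ./ sig.^2) / sum(x.^2 ./ sig.^2);
fprintf('estimated slope: %.3f\n', theta)
\end{verbatim}
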
A gradient descent, as used in the previous chapter, is thus not
necessary for fitting the slope of a straight line. This holds more
generally for fitting the coefficients of linearly combined basis
functions, as for example the slope $m$ and the y-intercept $b$ of
the straight line
\[ y = m \cdot x +b \]
or, more generally, the coefficients $a_k$ of a polynomial
\[ y = \sum_{k=0}^N a_k x^k = a_0 + a_1x + a_2x^2 + a_3x^3 + \ldots \]
\matlabfun{polyfit()}.

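For example, slope and intercept of a straight line can be obtained
with \matlabfun{polyfit()} (a sketch; note that plain
\texttt{polyfit} does not take the individual $\sigma_i$ into
account):
\begin{verbatim}
% Sketch: fitting slope and intercept of a straight line (unweighted).
x = linspace(0.0, 10.0, 40)';
y = 2.0 * x + 1.0 + 0.5 * randn(size(x));  % data around y = 2 x + 1
p = polyfit(x, y, 1);              % p(1): slope m, p(2): intercept b
fprintf('m = %.3f, b = %.3f\n', p(1), p(2))
\end{verbatim}
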
In contrast, parameters that enter a function non-linearly cannot be
computed analytically from the data. Consider, for example, the rate
$\lambda$ of an exponential decay
\[ y = c \cdot e^{\lambda x} \quad , \quad c, \lambda \in \reZ \; . \]
Such cases require numerical methods for optimizing the cost
function, e.g. gradient descent \matlabfun{lsqcurvefit()}.

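A minimal sketch of such a numerical fit with
\matlabfun{lsqcurvefit()} from the Optimization Toolbox (the
simulated data and the initial guess are our choices):
\begin{verbatim}
% Sketch: fitting an exponential decay numerically.
x = linspace(0.0, 10.0, 50)';
y = 10.0 * exp(-0.5 * x) + 0.2 * randn(size(x)); % c = 10, lambda = -0.5
f = @(p, x) p(1) * exp(p(2) * x);   % model: p(1) = c, p(2) = lambda
p0 = [5.0, -1.0];                   % initial parameter values
p = lsqcurvefit(f, p0, x, y);       % minimizes the sum of squares
fprintf('c = %.2f, lambda = %.2f\n', p(1), p(2))
\end{verbatim}
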
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Fitting probability distributions}
Finally, let's consider the case in which we want to fit the
parameters of a probability density function (e.g. the shape
parameter of a \enterm{Gamma-distribution}) to a dataset.

A first guess could be to fit the probability density to a histogram
of the measured data by minimizing the squared difference. For
several reasons this is, however, not the method of choice: (i)
Probability densities can only be positive. Therefore, in particular
for small values, the data cannot scatter symmetrically around the
density, as normally distributed data would. (ii) The values of the
histogram are not independent, because the normalized histogram
integrates to unity. The two assumptions of normally distributed and
independent samples, which make the minimization of the squared
difference \eqnref{chisqmin} a maximum likelihood estimation, are
thus violated. (iii) The histogram strongly depends on the chosen bin
size (\figref{mlepdffig}).

\begin{figure}[t]
  \includegraphics[width=1\textwidth]{mlepdf}
  \titlecaption{\label{mlepdffig} Maximum likelihood estimation of a
    probability density.}{Left: the 100 data points drawn from a
    2nd-order gamma distribution. The maximum likelihood fit of the
    probability density function is shown in orange, the true pdf in
    red. Right: the normalized histogram of the data together with
    the true (red) and the fitted (orange) probability density
    functions, where the fit was obtained by minimizing the squared
    difference to the histogram.}
\end{figure}

We have already seen the direct way of fitting a probability density
to a dataset in the example above on estimating the mean of a normal
distribution: maximum likelihood! We simply search for the parameters
$\theta$ of the desired probability density function that maximize
the log-likelihood \eqnref{loglikelihood}. In general this is a
non-linear optimization problem that is solved with numerical methods
such as gradient descent \matlabfun{mle()}.

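This is essentially what the following exercise asks for. A minimal
sketch of the call to \matlabfun{mle()} from the Statistics Toolbox
(the simulated data are our choice):
\begin{verbatim}
% Sketch: maximum-likelihood fit of a gamma distribution.
data = gamrnd(2.0, 1.0, 100, 1);    % 100 gamma distributed samples
phat = mle(data, 'distribution', 'gamma'); % phat: [shape, scale]
fprintf('shape = %.2f, scale = %.2f\n', phat(1), phat(2))
\end{verbatim}
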
\begin{exercise}{mlegammafit.m}{mlegammafit.out}
  Create a sample of gamma-distributed random numbers and apply the
  maximum likelihood method to estimate the parameters of the gamma
  distribution from the data.
  \pagebreak
\end{exercise}


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Neural coding}
In sensory systems, certain aspects of the environment are encoded in
the activity of populations of neurons. One example of such
population coding is the tuning of neurons in the primary visual
cortex (V1) to the orientation of a visual stimulus. Different
neurons respond best to different stimulus orientations.
Traditionally, such a tuning is measured by analyzing the neuronal
response strength (e.g. the firing rate) as a function of the
orientation of the visual stimulus and is summarized by the so-called
\enterm{tuning curve} (German \determ{Abstimmkurve},
\figref{mlecodingfig}, top).

\begin{figure}[tp]
  \includegraphics[width=1\textwidth]{mlecoding}
  \titlecaption{\label{mlecodingfig} Maximum likelihood estimation of
    a stimulus parameter from neuronal activity.}{Top: Tuning curve
    of an individual neuron as a function of the stimulus orientation
    (a dark bar in front of a white background). The stimulus that
    evokes the strongest activity in this neuron is a bar with
    vertical orientation (arrow, $\phi_i=90$\,\degree). The red area
    indicates the variability $p(r)$ of the neuronal activity $r$
    around the tuning curve. Center: In a population of neurons, each
    neuron may have a different preferred orientation (colors). A
    specific stimulus (the vertical bar) activates the individual
    neurons of the population in a specific way (dots). Bottom: The
    log-likelihood of these activities is maximized close to the true
    stimulus orientation.}
\end{figure}

The brain, however, is confronted with the inverse problem: given a
certain activity pattern of the neurons in the population, what was
the stimulus (here, the orientation of the bar)? In the sense of
maximum likelihood, a possible answer is: it was the stimulus for
which this activity pattern is most likely.

Let's stay with the example of the orientation-sensitive neurons in
V1. The tuning $\Omega_i(\phi)$ of neuron $i$ to its preferred
orientation $\phi_i$ can be well described by a van Mises function
(the analogue of the Gaussian on a cyclic x-axis,
\figref{mlecodingfig}):
\[ \Omega_i(\phi) = c \cdot e^{\cos(2(\phi-\phi_i))} \quad , \quad c
\in \reZ \; . \]
Here we approximate the neuronal activity by a normal distribution
around the tuning curve with a standard deviation $\sigma=\Omega/4$
that is proportional to $\Omega$, such that the probability
$p_i(r|\phi)$ of the $i$-th neuron having the activity $r$, given a
stimulus with orientation $\phi$, is
\[ p_i(r|\phi) = \frac{1}{\sqrt{2\pi}\Omega_i(\phi)/4} e^{-\frac{1}{2}\left(\frac{r-\Omega_i(\phi)}{\Omega_i(\phi)/4}\right)^2} \; . \]
The log-likelihood of the stimulus orientation $\phi$, given the
activities $r_1$, $r_2$, ... $r_n$ of the population, is thus
\[ \log {\cal L}(\phi|r_1, r_2, \ldots r_n) = \sum_{i=1}^n \log p_i(r_i|\phi) \; . \]
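
A minimal sketch of such a maximum-likelihood decoder (population
size, tuning parameters, and the true orientation are our choices;
\texttt{normpdf} is from the Statistics Toolbox):
\begin{verbatim}
% Sketch: decode the stimulus orientation from population activity.
c = 1.0;
phipref = (0:20:160) / 180 * pi;    % preferred orientations
omega = @(phi, phii) c * exp(cos(2 * (phi - phii))); % tuning curves
phitrue = 90 / 180 * pi;            % true stimulus orientation
rates = omega(phitrue, phipref);    % mean response of each neuron
r = rates + rates / 4 .* randn(size(rates)); % noisy activity pattern
phis = (0:180) / 180 * pi;          % candidate orientations
loglik = zeros(size(phis));
for k = 1:length(phis)
    om = omega(phis(k), phipref);   % expected activity of each neuron
    loglik(k) = sum(log(normpdf(r, om, om / 4)));
end
[~, imax] = max(loglik);
fprintf('true: %.0f deg, estimate: %.0f deg\n', ...
        phitrue * 180 / pi, phis(imax) * 180 / pi)
\end{verbatim}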
| 
\selectlanguage{english}