\documentclass[a4paper,10pt]{scrartcl}

%opening
\title{On the role of noise in signal detection}
\author{Dennis Huben}

\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage[english]{babel}
\usepackage{graphicx}
\usepackage{multicol}
\usepackage{scalefnt}
\usepackage{bm}
\usepackage{palatino}
\usepackage{url}
\usepackage{enumitem}
\usepackage{amsmath}
\usepackage{xcolor}
\usepackage{ifthen}
\usepackage[normalem]{ulem}
\usepackage[round]{natbib}
\usepackage[thinqspace]{SIunits}

\bibliographystyle{plainnat}
\newcommand{\lepto}{\textit{A. leptorhynchus}}

\DeclareMathOperator\erfc{erfc}

\newcommand{\eq}[1]{\begin{align}#1\end{align}}

\newcommand{\note}[2][]{\textcolor{red!80!black}{[\textbf{\ifthenelse{\equal{#1}{}}{}{#1: }}#2]}}
\newcommand{\notejb}[1]{\note[JB]{#1}}
\newcommand{\notedh}[1]{\note[DH]{#1}}
\newcommand{\newdh}[1]{\textcolor{green}{#1}}

\begin{document}

\maketitle

\begin{abstract}
\end{abstract}

\tableofcontents

\newpage
\section{Suprathreshold stochastic resonance}

\subsection{Introduction}

In any biological system, there is a limit to the precision of the components making up that system. Even without external input, the spike times of each individual neuron therefore show some variation and are not perfectly regular. Increasing the precision comes at a cost in energy \citep{schreiber2002energy} and may not even be desirable.

In populations of neurons, the representation of a common stimulus can be improved by population heterogeneity \citep{ahn2014heterogeneity}, for example in the form of a different firing threshold for each neuron. Alternatively, the improvement can be achieved by adding noise to the input of each neuron \citep{shapira2016sound}. The effect of adding noise to a sub-threshold signal, a phenomenon known as ``stochastic resonance'' (SR), has been investigated thoroughly over the last decades \citep{benzi1981mechanism,gammaitoni1998resonance,shimokawa1999stochastic}. The noise added to the signal makes it more likely that the signal crosses the detection threshold and triggers a spike in a neuron.
Often in nature, however, the goal is not simply to detect a signal but to discriminate between different signals as well as possible. In auditory communication, for example, it is not sufficient to detect the presence of sound; instead, the goal is to encode the auditory stimulus so that an optimal amount of information is gained from it.
Another example is electrosensory communication between conspecifics in weakly electric fish, which need to differentiate, for example, aggressive from courtship behaviors.
More recently it has been shown that for populations of neurons noise can also be beneficial for signals which are already above the threshold \citep{stocks2000suprathreshold, Stocks2000, stocks2001information, stocks2001generic, beiran2018coding}, a phenomenon termed ``suprathreshold stochastic resonance'' (SSR). Despite the similarity in name, SR and SSR work in very different ways. The idea behind SSR is that with no or very weak individual noise, the different neurons in the population all react to the same features of the common input. Additional noise that affects each cell differently desynchronizes the responses, so that the spiking of the neurons becomes more probabilistic than deterministic. If the noise is too strong, however, it masks the signal and less information can be encoded than would ideally be possible; in the limit of infinite noise strength, no information about the signal can be reconstructed from the responses. Because some noise is beneficial and too much noise is not, there is an intermediate noise strength at which performance is best.

This thesis investigates populations of neurons reacting to input signals with cutoff frequencies over a large range. Population sizes range from a single neuron to many thousands of neurons.
%plot script: lif_summation_sketch.py on denkdirnix
\begin{figure}
\centering
\includegraphics[width=0.5\linewidth]{img/stocks}
\includegraphics[width=1.\linewidth]{img/plotted/LIF_example_sketch.pdf}
\caption{Array of threshold systems as described by Stocks \citep{stocks2000suprathreshold}.}
\label{stocks}
\end{figure}
Here we use the leaky integrate-and-fire model to simulate neuronal populations receiving a common dynamic input. We look at linear coding of signals by differently sized populations of neurons of a single type, similar to the situation in weakly electric fish. We show that the optimal noise strength grows with population size and depends on properties of the input. We use input signals with varying frequency bandwidths and cutoffs, and also vary the strength of the signal.

We also present results of electrophysiological recordings in the weakly electric fish \textit{Apteronotus leptorhynchus}. Because it is not obvious how to quantify noisiness in the receptor cells of these fish, we compare different methods and find that the activation curve of the individual neurons allows for the best estimate of the noise strength in these cells. We then show that the effects of SSR can be observed in the real-world example of \textit{A. leptorhynchus}.
\subsection{Methods}

We simulate populations of leaky integrate-and-fire (LIF) neurons, described by the equation

\begin{equation}V_{t}^j = V_{t-1}^j + \frac{\Delta t}{\tau_v} \left((\mu-V_{t-1}^j) + \sigma I_{t} + \sqrt{2D/\Delta t}\,\xi_{t}^j\right),\quad j \in [1,N]\end{equation}

with membrane time constant $\tau_v = 10\,ms$ and offset $\mu = 15.0\,mV$ or $\mu = 10.5\,mV$. $\sigma$ is a factor which scales the standard deviation of the input, ranging from 0.1 to 1, and $I_t$ is the previously generated stimulus. The $\xi_{t}^j$ are independent Gaussian random variables with mean 0 and variance 1. The noise intensity $D$ was varied between $10^{-12}\,mV^2/Hz$ and $3\,mV^2/Hz$. Whenever $V_{t}$ exceeded the voltage threshold ($10\,mV$), a ``spike'' was recorded and the voltage was reset to $0\,mV$. $V_{0}$ was initialized to a random value uniformly distributed between $0\,mV$ and $10\,mV$. For the first sets of simulations there was no absolute refractory period\footnote{An absolute refractory period is a time interval after a spike during which the cell ignores any input and cannot spike.}. In a later chapter I show that the results do not change qualitatively when a refractory period is added.
Simulations of up to 8192 neurons were performed using the Euler method with a step size of $\Delta t = 0.01\,ms$. Typical firing rates were around $90\,Hz$ for an offset of $15.0\,mV$ and $35\,Hz$ for an offset of $10.5\,mV$. Firing rates were larger for high noise levels than for low noise levels.
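The Euler scheme above can be sketched in a few lines of Python. This is a simplified sketch of the simulation, not the original code; all function and parameter names are ours:

```python
import numpy as np

def simulate_lif_population(stimulus, n_neurons=32, dt=1e-5, tau_v=0.01,
                            mu=15.0, sigma=1.0, noise_d=1e-3,
                            v_threshold=10.0, v_reset=0.0, rng=None):
    """Euler integration of N independent LIF neurons driven by a common
    stimulus plus individual Gaussian noise (sketch of Eq. (1)).
    Voltages in mV, times in seconds (dt = 1e-5 s = 0.01 ms)."""
    rng = np.random.default_rng() if rng is None else rng
    n_steps = len(stimulus)
    # random initial voltages, uniform between reset and threshold
    v = rng.uniform(0.0, v_threshold, n_neurons)
    spikes = np.zeros((n_steps, n_neurons), dtype=bool)
    noise_scale = np.sqrt(2.0 * noise_d / dt)
    for t in range(n_steps):
        xi = rng.standard_normal(n_neurons)   # independent noise per neuron
        v += dt / tau_v * ((mu - v) + sigma * stimulus[t] + noise_scale * xi)
        fired = v > v_threshold
        spikes[t] = fired
        v[fired] = v_reset                    # reset, no refractory period
    return spikes
```

With $\mu = 15\,mV$ and zero stimulus, the deterministic inter-spike interval is $\tau_v \ln 3 \approx 11\,ms$, consistent with the firing rates of roughly $90\,Hz$ quoted above.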
As stimulus we used Gaussian white noise signals with different cutoff frequencies at both ends of the spectrum. By construction, the input power spectrum is flat between 0 and $\pm f_{c}$:

\begin{equation}
S_{ss}(f) = \frac{\sigma^2}{2 \left| f_{c} \right|} \Theta\left(f_{c} - |f|\right).\label{S_ss}
\end{equation}

A Fast Fourier Transform (FFT) was applied to the signal so that it could serve as input stimulus to the simulated cells. The signal was normalized to unit variance; its length was $500\,s$ with a resolution of $0.01\,ms$.
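Such a band-limited Gaussian stimulus can be generated by drawing random Fourier coefficients, zeroing them outside the pass band, and transforming back. This is a sketch of our own; the helper name and interface are illustrative:

```python
import numpy as np

def bandlimited_gaussian(f_cutoff, duration, dt, rng=None):
    """Gaussian noise stimulus with a flat spectrum between 0 and f_cutoff,
    normalized to unit variance."""
    rng = np.random.default_rng() if rng is None else rng
    n = int(duration / dt)
    freqs = np.fft.rfftfreq(n, dt)
    # random complex Fourier coefficients ...
    coeffs = rng.standard_normal(len(freqs)) + 1j * rng.standard_normal(len(freqs))
    coeffs[0] = 0.0                 # zero DC component -> zero-mean signal
    coeffs[freqs > f_cutoff] = 0.0  # ... zeroed outside the pass band
    signal = np.fft.irfft(coeffs, n)
    return signal / signal.std()    # normalize to unit variance
```

Because the coefficients above the cutoff are exactly zero, the resulting signal has no power beyond $f_c$ up to floating-point error.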
\begin{figure}
\includegraphics[scale=0.5]{img/intro_raster/example_noise_resonance.pdf}
\caption{Snapshots of $200\,ms$ length from three example simulations with different noise strengths, all other parameters held constant. Black: spikes of 32 simulated neurons. The green curve beneath the spikes is the signal that was fed into the network; the blue curve is the best linear reconstruction possible from the spikes. The input signal has a cutoff frequency of $50\,Hz$.
If the noise is weak, the neurons behave regularly and similarly to each other (A). For optimal noise strength, the neuronal population follows the signal best (B). If the noise is too strong, the information about the signal is drowned out (C). D: example coding fraction curve as a function of noise strength. Marked in red are the noise strengths from which the examples were taken.}
\label{example_spiketrains}
\end{figure}
\subsection{Analysis} \label{Analysis}

For each combination of parameters, a histogram of the output spikes from all neurons or a subset of the neurons was created.
The coherence $C(f)$ was calculated \citep{lindner2016mechanisms} in frequency space as the squared cross-spectral density $|S_{sx}(f)|^2$ of input signal $s(t) = \sigma I_{t}$ and output spikes $x(t)$, with $S_{sx}(f) = \mathcal{F}\{ s(t)*x(t) \}(f)$, divided by the product of the power spectral densities of input ($S_{ss}(f) = |\mathcal{F}\{s(t)\}(f)|^2$) and output ($S_{xx}(f) = |\mathcal{F}\{x(t)\}(f)|^2$), where $\mathcal{F}\{ g(t) \}(f)$ denotes the Fourier transform of $g(t)$:

\begin{equation}C(f) = \frac{|S_{sx}(f)|^2}{S_{ss}(f) S_{xx}(f)}.\label{coherence}\end{equation}
The coding fraction $\gamma$ \citep{gabbiani1996codingLIF, krahe2002stimulus} quantifies how much of the input signal can be reconstructed by an optimal linear decoder. It is 0 if the input cannot be reconstructed at all and 1 if the signal can be reconstructed perfectly \citep{gabbiani1996stimulus}.
It is defined in terms of the reconstruction error $\epsilon^2$ and the variance of the input $\sigma^2$:

\begin{equation}\gamma = 1-\sqrt{\frac{\epsilon^2}{\sigma^2}}.\label{coding_fraction}\end{equation}

The variance is
\begin{equation}\sigma^2 = \langle \left(s(t)-\langle s(t)\rangle\right)^2\rangle = \int_{f_{low}}^{f_{high}} S_{ss}(f)\, df.\end{equation}

The reconstruction error is defined as

\begin{equation}\epsilon^2 = \langle \left(s(t) - s_{est}(t)\right)^2\rangle = \int_{f_{low}}^{f_{high}} \left( S_{ss}(f) - \frac{|S_{sx}(f)|^2}{S_{xx}(f)} \right) df = \int_{f_{low}}^{f_{high}} S_{ss}(f) \left(1-C(f)\right) df,\end{equation}

with the estimate $s_{est}(t) = (h*x)(t)$, where $h$ is the optimal linear filter whose Fourier transform is $H(f) = \frac{S_{sx}(f)}{S_{xx}(f)}$ \citep{gabbiani1996coding}.
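The whole estimator pipeline, from averaged spectra through the coherence to $\gamma$, can be sketched as follows. This is a segment-averaged estimator of our own design (all names are illustrative); note that spectral normalization constants cancel in the coherence and in the ratio $\epsilon^2/\sigma^2$:

```python
import numpy as np

def coding_fraction(s, x, dt, f_low=0.0, f_high=np.inf, nperseg=1024):
    """Estimate gamma = 1 - sqrt(eps^2 / sigma^2) from segment-averaged
    spectra of input s and response x (sampled at interval dt)."""
    n_seg = len(s) // nperseg
    S_ss = S_xx = 0.0
    S_sx = 0.0 + 0.0j
    for k in range(n_seg):
        fs = np.fft.rfft(s[k * nperseg:(k + 1) * nperseg])
        fx = np.fft.rfft(x[k * nperseg:(k + 1) * nperseg])
        S_ss = S_ss + np.abs(fs) ** 2   # input power spectrum
        S_xx = S_xx + np.abs(fx) ** 2   # output power spectrum
        S_sx = S_sx + np.conj(fs) * fx  # cross-spectrum
    f = np.fft.rfftfreq(nperseg, dt)
    band = (f >= f_low) & (f <= f_high) & (S_ss > 0) & (S_xx > 0)
    C = np.abs(S_sx[band]) ** 2 / (S_ss[band] * S_xx[band])  # coherence
    sigma2 = S_ss[band].sum()                        # ~ signal variance in band
    eps2 = max((S_ss[band] * (1.0 - C)).sum(), 0.0)  # ~ reconstruction error
    return 1.0 - np.sqrt(eps2 / sigma2)
```

For a perfectly linear, noise-free response such as $x(t) = 2 s(t)$, the coherence is 1 at all frequencies and the coding fraction is 1; for an uncorrelated response it drops towards 0 (up to estimator bias).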
We then analyzed the coding fraction as a function of these cutoff frequencies for different parameters (noise strength, signal amplitude, signal mean/firing rate) in the limit of large populations.
The limit was considered reached if the increase in coding fraction gained by doubling the population size was small (less than 4\%).
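Read as a stopping rule, the criterion above amounts to the following sketch (our own helper; we read the 4\% as a relative increase, which is an assumption):

```python
def limit_reached(cf_by_size, tol=0.04):
    """Return True when doubling the population size increased the coding
    fraction by less than `tol`, read here as a relative increase.
    `cf_by_size` maps population size -> coding fraction."""
    sizes = sorted(cf_by_size)
    n, n2 = sizes[-2], sizes[-1]
    if n2 != 2 * n:
        raise ValueError("expected the two largest sizes to differ by a factor of 2")
    return (cf_by_size[n2] - cf_by_size[n]) < tol * cf_by_size[n]
```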
For the weak signals ($\sigma = 0.1\,mV$) combined with the strongest noise ($D = 10^{-3}\,\frac{mV^2}{Hz}$), convergence was not reached at a population size of 2048 neurons for either of the two offsets. The same is true for the combination of the weak signal, a mean close to the threshold ($\mu = 10.5\,mV$), and high frequencies ($200\,Hz$).
\begin{figure}
\centering
\includegraphics[width=0.69\linewidth]{{img/broad_coherence_15.0_1.0_paired}.pdf}
\caption{Coherence for a signal with $f_{cutoff} = 200\,Hz$, shown for a small and a large population, each at a weak and a strong noise level. For weak noise, the curves are indistinguishable from one another. For strong noise, an increase in population size allows a much better reconstruction of the input. For the small population size, weak noise in the simulated neurons allows for better signal reconstruction. The vertical line marks the average firing rate (about \(91\,Hz\)) of the neurons in the population.}
\label{CodingFrac}
\end{figure}
\subsection{Simulations with more neurons}
\subsection{Noise makes neurons' responses different from each other}

If noise levels are low (fig.~\ref{example_spiketrains} A), neurons within a population behave very similarly to each other. There is little variation in the spike responses of the neurons to a signal, and recreating the signal is difficult. As the strength of the noise increases, at some point the coding fraction also begins to increase. Signal reconstruction becomes better as the responses of the different neurons begin to deviate from each other. When the noise strength is increased even further, a peak coding fraction is reached at some point. This is the optimal noise strength for the given parameters (fig.~\ref{example_spiketrains} B). If the noise strength is increased beyond this point, the responses of the neurons are determined more by random fluctuations and less by the actual signal, making reconstruction more difficult (fig.~\ref{example_spiketrains} C). Eventually, signal encoding breaks down completely and the coding fraction goes to 0.
\subsection{Large population size is only useful if noise is strong}

We see that an increase in population size leads to a larger coding fraction until the coding fraction hits a limit which depends on the noise. For weak noise the increase in coding fraction with population size is small or non-existent. This can be seen in figure \ref{cf_limit} C, where the red ($10^{-5}\frac{mV^2}{Hz}$) and orange ($10^{-4}\frac{mV^2}{Hz}$) curves (relatively weak noise) saturate at relatively small population sizes (about 8 and 32 neurons, respectively).
An increase in population size also moves the optimal noise level towards stronger noise (green dots in figure \ref{cf_limit} A): a larger population can exploit higher noise levels better, because within a larger population the precision of the individual neurons becomes less important. Beyond the optimal noise strength, where peak coding fraction is reached, a further increase in noise strength leads to a reduction in coding fraction. If the noise is very strong, the coding fraction approaches 0; this happens earlier (at weaker noise) in smaller populations than in larger populations. Together, these facts mean that at a given noise level the coding fraction of a small population might already be declining while that of a larger population is still increasing. A given amount of noise can thus lead to a very low coding fraction in a small population, but to a greater coding fraction in a larger population (figure \ref{cf_limit} C, blue and purple curves). The noise levels that work best for large populations generally perform very badly in small populations. If the coding fraction is to reach its highest values, which requires large populations, the necessary noise strength will be at a level where essentially no encoding happens in a single neuron or a small population.
\begin{figure}
\centering
\includegraphics[width=0.4\linewidth]{{img/temp/best_approximation_spikes_50hz_0.01noi500s_10.5_1_1.dat_4_with_input}.pdf}
\includegraphics[width=0.4\linewidth]{{img/temp/best_approximation_spikes_50hz_0.01noi500s_10.5_1_1.dat_16_with_input}.pdf}
\includegraphics[width=0.4\linewidth]{{img/temp/best_approximation_spikes_50hz_0.01noi500s_10.5_1_1.dat_64_with_input}.pdf}
\includegraphics[width=0.4\linewidth]{{img/temp/best_approximation_spikes_50hz_0.01noi500s_10.5_1_1.dat_256_with_input}.pdf}
\caption{Raster plots and reconstructed signals for different population sizes; insets show the signal spectrum. The raster plots show the responses of the neurons in the different populations. Blue lines show reconstructions of the original signal by different sets of neurons of the given population size. A: each blue line is the signal reconstructed from the responses of a population of 4 neurons. B: the same for 16 neurons. C: the same for 64 neurons. D: the same for 256 neurons. Larger population sizes lead to reconstructions which depend less on random fluctuations and are therefore closer to each other.
\notedh{use a slow signal here(!?)}}
\label{harmonizing}
\end{figure}
\subsection{Influence of the input is complex}

Two very important variables are the mean strength of the signal, which sets the baseline firing rate of the neurons, and the amplitude of the signal. A higher baseline firing rate leads to a larger coding fraction. In our terms this means that a signal mean $\mu$ well above the threshold leads to higher coding fractions than a signal mean close to the threshold (see figure \ref{cf_limit} B, orange curves above the green curves). The influence of the signal amplitude $\sigma$ is more complex. In general, at small population sizes larger amplitudes appear to work better, but at large population sizes weaker signals might perform as well as or even better than stronger signals (figure \ref{cf_limit} C, dashed vs.\ solid curves).
\begin{figure}
\includegraphics[width=0.45\linewidth]{{img/basic/basic_15.0_1.0_200_detail_with_max}.pdf}
\includegraphics[width=0.45\linewidth]{{img/basic/n_basic_weak_15.0_1.0_200_detail}.pdf}
\includegraphics[width=0.45\linewidth]{img/basic/n_basic_compare_50_detail.pdf}
\caption{A: Coding fraction as a function of noise strength for different population sizes. Green dots mark the peak of each coding fraction curve. Increasing the population size leads to a higher peak and moves the peak towards stronger noise.
B: Coding fraction as a function of population size. Each curve shows the coding fraction for a different noise strength.
C: Peak coding fraction as a function of population size for different input parameters. \notedh{needs information about noise}}
\label{cf_limit}
\end{figure}
\subsection{Slow signals are more easily encoded}

To encode a signal well, neurons in a population need to keep up with the rising and falling of the signal.
Signals that change fast are harder to encode than signals which change more slowly, because when a signal changes gradually, the neurons can slowly adapt their firing rate. A visual example can be seen in figure \ref{freq_raster}: with all other parameters equal, a signal with a lower cutoff frequency is easier to recreate from the firing of the neurons.
In the raster plots one can see, especially for the $50\,Hz$ signal (bottom left), that the firing probability of each neuron follows the input signal. When the input is low, almost none of the neurons fire; the result is the ``stripes'' visible in the raster plot. The stripes have a certain width, determined by the signal frequency and the noise level. When the signal frequency is low, the width of the stripes cannot be seen in a short snapshot. For the $50\,Hz$ signal in this example we can clearly see a break in the firing activity of the neurons at around $25\,ms$. The slower changes in the signal allow the reconstruction to follow the original signal more closely.
For the $200\,Hz$ signal there is little structure to be seen in the firing behaviour of the population; instead the behaviour looks chaotic.
Something similar holds for the $1\,Hz$ signal: because the peaks are about $1\,s$ apart, a snapshot of $400\,ms$ cannot capture the structure of the neuronal response. Instead we see a very gradual change of the firing rate following the signal. Because the change is so gradual, the reconstructed signal follows the input signal very closely.
\begin{figure}
\centering
\includegraphics[width=0.4\linewidth]{{img/rasterplots/best_approximation_spikes_1hz_0.001noi500s_10.5_0.5_1.dat}.pdf}
\includegraphics[width=0.4\linewidth]{{img/rasterplots/best_approximation_spikes_10hz_0.001noi500s_10.5_0.5_1.dat}.pdf}
\includegraphics[width=0.4\linewidth]{{img/rasterplots/best_approximation_spikes_50hz_0.001noi500s_10.5_0.5_1.dat}.pdf}
\includegraphics[width=0.4\linewidth]{{img/rasterplots/best_approximation_spikes_200hz_0.001noi500s_10.5_0.5_1.dat}.pdf}
\caption{Raster plots, input signals and reconstructed signals for different cutoff frequencies; insets show each signal's spectrum.
Shown here are examples taken from $500\,s$ long simulations. The raster plots show the firing of 64 LIF neurons; each row corresponds to one neuron.
The blue line below each raster is the input signal, the orange line the reconstruction, calculated by convolving the spikes with the optimal linear filter. The reconstruction is closer to the original signal for slower signals than for higher-frequency signals.
The different time scales lead to spike patterns which appear very different from each other.}
\label{freq_raster}
\end{figure}
\subsection{Fast signals are harder to encode - noise can help with that}

For low-frequency signals, the coding fraction is almost always at least as large as it is for signals with higher frequency. For the parameters we used there is very little difference in coding fraction between random noise signals with cutoff frequencies of $1\,Hz$ and $10\,Hz$ (figure \ref{cf_for_frequencies}, bottom row).
For all signal frequencies and amplitudes, a signal mean much larger than the threshold ($\mu = 15.0\,mV$, with the threshold at $10.0\,mV$) results in a higher coding fraction than a signal mean closer to the threshold ($\mu = 10.5\,mV$). The firing rate of the neurons is much higher for the larger input: about $90\,Hz$ vs.\ $30\,Hz$ for the lower signal mean.
We also find that for the signal mean further away from the threshold, the loss of coding fraction from the $10\,Hz$ signal to the $50\,Hz$ signal is smaller than for the lower signal mean. This is partially explained by the firing rate of the neurons: around the firing rate, signal encoding is weaker (see figure \ref{CodingFrac}). In general, an increase in signal frequency and bandwidth leads to a decrease in the maximum achievable coding fraction. This decrease is smaller if the noise is stronger. Under some conditions, a $50\,Hz$ signal can be encoded as well as a $10\,Hz$ signal (fig.~\ref{cf_for_frequencies} D).
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{img/coding_fraction_vs_frequency.pdf}
\includegraphics[width=0.7\linewidth]{img/1Hz_vs_10Hz_alternativ.pdf}
\caption{\textbf{A--D}: Coding fraction in the large-population limit as a function of input signal frequency for different parameters. Each curve represents a different noise strength. Points are only shown when the coding fraction increased by less than 2\% when the population size was increased from 1024 to 2048 neurons. For small amplitudes ($\sigma = 0.1\,mV$, A \& B) there was no convergence for a noise of $10^{-3}\,mV^2/Hz$. Coding fraction decreases for faster signals ($50\,Hz$ and $200\,Hz$). In the large-population limit, stronger noise results in a coding fraction at least as large as for weaker noise.
\textbf{E, F}: Comparison of the coding fraction in the large-population limit for a $1\,Hz$ signal and a $10\,Hz$ signal. Shapes indicate noise strength, color indicates mean signal input (i.e. distance from threshold). The left plot shows an amplitude of $\sigma=0.1\,mV$, the right plot $\sigma=1.0\,mV$. The diagonal black line indicates where the coding fractions are equal.}
\label{cf_for_frequencies}
\end{figure}
\notedh{
TODO: frequency vs optimum noise;
For slower signals, coding fraction converges faster in terms of population size (figure \ref{cf_for_frequencies}).
This (convergence speed) is also true for stronger signals as opposed to weaker signals.
For slower signals the maximum value is reached for weaker noise.}
\subsection{A tuning curve allows calculation of coding fraction for arbitrarily large populations}

To understand information encoding by populations of neurons, it is common practice to use simulations. However, the size of the simulated population is limited by computational power. We demonstrate a way to circumvent this limitation, allowing predictions in the limit case of large population size. We use the interpretation of the tuning curve as a kind of averaged population response. To calculate this average, relatively few neurons are needed to reproduce the response of an arbitrarily large population, so the necessary computational power is greatly reduced.

At least for slow signals, the spiking probability at a given point in time is determined by the signal power at that moment, and the population response should simply be proportional to the response of a single neuron.
This average firing rate is reflected in the tuning curve.
We can therefore read off the average firing rate for a given input from the tuning curve to find the spiking probability, and see how this probability changes with noise.

For faster signals, the past of the signal plays a role: after a spike there is a short period during which the simulated neuron is unlikely to fire again, even without an explicit refractory period. If the next spike would fall into that period, fewer neurons spike than would have without the first spike. We have also seen before that faster signals are not encoded as well as slower signals, but the results we obtain from using the tuning curve this way are frequency-independent.
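The tuning-curve shortcut can be sketched as follows. In the actual analysis, $g(V)$ would be measured from simulations; here a smoothed threshold-linear (softplus) function merely stands in for it, and all names and parameter values are illustrative assumptions:

```python
import numpy as np

def soft_tuning_curve(v, v_thresh=10.0, gain=9.0, width=0.5):
    """Stand-in for a measured LIF tuning curve g(V): a smoothed
    threshold-linear function mapping voltage (mV) to firing rate (Hz).
    A larger `width` mimics the linearization caused by stronger noise."""
    return gain * width * np.log1p(np.exp((v - v_thresh) / width))

def population_rate_prediction(signal, mu=10.5, sigma=1.0, curve=soft_tuning_curve):
    """Samplewise transform of the input through the tuning curve: the
    predicted firing rate of an arbitrarily large population."""
    return curve(mu + sigma * np.asarray(signal))
```

The resulting rate signal then takes the place of the binned population spike count when computing coherence and coding fraction; since the transform is applied sample by sample, the prediction carries no frequency dependence.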
\begin{figure}
\centering
\includegraphics[width=1.0\linewidth]{img/non_lin_example_undetail.pdf}
\includegraphics[width=0.5\linewidth]{{img/tuningcurves/6.00_to_15.00mV,1.0E-07_to_1.0E-02}.pdf}
\includegraphics[width=0.4\linewidth]{{img/temp/best_approximation_spikes_50hz_0.01noi500s_10.5_1_1.dat_256_with_input}.pdf}
\caption{Two ways to arrive at coherence and coding fraction. Left: the input signal (top, center) is received by LIF neurons. The spiking of the neurons is then binned, and coherence and coding fraction are calculated between the result and the input signal.
Right: the input signal (top, center) is transformed by the tuning curve (top right). The tuning curve corresponds to a function $g(V)$ which takes a voltage as input and yields a firing rate; the output is a modulated signal. We calculate coherence and coding fraction between input voltage and output firing rate. If the mean of the input is close to the threshold, as is the case here, inputs below the threshold all get projected to 0. This can be seen at the beginning of the transformed curve.
Bottom left: tuning curves for different noise levels. The x-axis shows the stimulus strength in mV, the y-axis the corresponding firing rate. For low noise levels there is a strong non-linearity at the threshold. With increasing noise, the firing rate becomes larger than 0 for progressively weaker signals. For strong stimuli (roughly $13\,mV$ and more) there is little difference in firing rate between the noise levels.}
\label{non-lin}
\end{figure}
The noise influences the shape of the tuning curve, with stronger noise linearizing the curve. The linearity of the curve is important, because the coding fraction is a linear measure. For strong inputs (around $13\,mV$) the curve is almost linear, resulting in coding fractions close to 1. For signal amplitudes in this range the firing rate is almost independent of the noise strength. This tells us that the increase in coding fraction with noise strength that we saw in the previous chapters is not simply due to the neurons spiking more frequently.
For slow signals (cutoff frequencies of $1\,Hz$ up to $10\,Hz$) the results from the tuning curve and from the simulation of large populations of neurons match very well (figure \ref{accuracy}) over a range of signal strengths, base inputs and noise strengths.
This means that the LIF tuning curve gives us a very good approximation for the limit of encoded information that can be achieved by summing over independent, identical LIF neurons with intrinsic noise.
For faster signals, the coding fraction calculated through the tuning curve stays constant, as the tuning curve only deforms the signal. As shown in figure \ref{cf_for_frequencies} E and F, the coding fraction of the LIF-neuron ensemble drops with increasing frequency. Hence for high-frequency signals the tuning curve ceases to be a good predictor of the encoding quality of the ensemble.
\begin{figure}
\centering
\includegraphics[width=0.48\linewidth]{img/tuningcurves/tuningcurve_vs_simulation_10Hz.pdf}
\includegraphics[width=0.48\linewidth]{img/tuningcurves/tuningcurve_vs_simulation_200Hz.pdf}
\caption{The tuning-curve prediction works for the $10\,Hz$ signal but not for the $200\,Hz$ signal.}
\label{accuracy}
\end{figure}
For high-frequency signals, the method does not work: the effective, implicit refractory period prevents the instantaneous firing rate from being useful, because the neurons spike only in very short intervals around a signal peak. They are very unlikely to spike again immediately, so signal peaks that follow too closely after the preceding one are not resolved properly.\notedh{Add a figure.}
We use the tuning curve to analyse how the signal mean and the signal amplitude change the coding fraction we would obtain from an infinitely large population of neurons (fig.~\ref{non-lin}, bottom two rows). We can see that in this case stronger noise always yields a larger coding fraction. This is expected, because the tuning curve is more linear for stronger noise and the coding fraction is a linear measure. It also matches the fact that we are observing the limit of an infinitely large population, which can ``average out'' any amount of noise.
Considering the coding fraction as a function of the signal mean, we see zero or near-zero coding fraction if the mean is far below the threshold: if the signal is too weak it does not trigger any spikes, and no information can be encoded. If we increase the mean, at some point the coding fraction starts to jump up. This happens earlier for stronger noise, as spiking can then be triggered by weaker signals. The increase in coding fraction is much smoother for a larger amplitude (right figure). We also notice a sort of plateau, where increasing the mean does not lead to a larger coding fraction, before the coding fraction begins rising towards 1. The plateau begins earlier for stronger noise.
For the coding fraction as a function of the signal amplitude we see very different results depending on the parameters. Again, stronger noise leads to a higher coding fraction. If the mean is just above or at the threshold (center and right), an increase in signal amplitude leads to a lower coding fraction. This makes sense, as more of the signal moves into the very non-linear region around the threshold.
A very interesting effect occurs for a mean slightly below the threshold (left): while for strong noise we see the same effect as at or above the threshold, for weaker noise we see the opposite.
The increase can be explained as the reverse of the effect that leads to a decreasing coding fraction. Here, a larger amplitude means that the signal reaches the more linear part of the tuning curve more often. On the other hand, an increase in amplitude does not worsen the encoding by moving the signal into the non-linear, low-firing-rate part of the tuning curve, because the signal is already there, so it cannot get worse. This also helps explain why the coding fraction seems to saturate near 0.5: in the extreme case, the negative parts of the signal are not encoded at all, while the positive parts are encoded linearly.
\begin{figure}
\centering
\includegraphics[width=0.4\linewidth]{{img/tuningcurves/codingfraction_from_curves_amplitude_0.1mV}.pdf}
\includegraphics[width=0.4\linewidth]{{img/tuningcurves/codingfraction_from_curves_amplitude_0.5mV}.pdf}
\includegraphics[width=0.3\linewidth]{{img/tuningcurves/codingfraction_from_curves_mean_9.5mV}.pdf}
\includegraphics[width=0.3\linewidth]{{img/tuningcurves/codingfraction_from_curves_mean_10.0mV}.pdf}
\includegraphics[width=0.3\linewidth]{{img/tuningcurves/codingfraction_from_curves_mean_10.5mV}.pdf}
% \includegraphics[width=0.45\linewidth]{{img/rasterplots/best_approximation_spikes_50hz_1e-07noi500s_15_0.5_1.dat}.pdf}
% \includegraphics[width=0.45\linewidth]{{img/rasterplots/best_approximation_spikes_200hz_1e-07noi500s_15_0.5_1.dat}.pdf}
\caption{
\textbf{A,B}: Coding fraction as a function of the signal mean.
Each curve shows the coding fraction for a different noise level. The vertical line indicates the threshold. A: $\sigma = 0.1\,mV$. For the two weak noise strengths the coding fraction is 0 if the signal mean is too far below the threshold. Just below the threshold there is a sharp increase, and a bit above the threshold the curves flatten out at a coding fraction close to 1.
B: $\sigma = 0.5\,mV$. The increase in coding fraction with increasing signal mean is much smoother than at the lower amplitude. Surprisingly, there is a small drop in coding fraction as the signal mean increases roughly between $10\,mV$ and $11\,mV$; the drop sets in slightly earlier for stronger noise. At about $12\,mV$ the coding fraction reaches a plateau close to 1.
\textbf{C--E}: Coding fraction as a function of signal amplitude for different tuning curves (noise levels), for three different means: one below the threshold ($9.5\,mV$), one at the threshold ($10.0\,mV$), and one above the threshold ($10.5\,mV$). Except for one combination of parameters, increasing the amplitude always leads to a decreasing coding fraction. Because the signal means are all very close to the threshold, an increase in amplitude means that the signal reaches the highly non-linear part of the tuning curve. The only exception is the signal mean below the threshold ($9.5\,mV$) combined with relatively weak noise. For those parameters encoding is very weak: most of the time the signal is not strong enough to make the neurons spike at all. By increasing the signal amplitude, the non-zero part of the tuning curve is reached more often.}
\label{codingfraction_means_amplitudes}
|
||
\textbf{C-E}: Coding fraction as a function of signal amplitude for different tuningcurves (noise levels). Three different means, one below the threshold (9.5mV), one at the threshold (10.0mV), and one above the threshold (10.5mV). Except for one combination of parameters, increasing amplitude always leads to decreasing coding fraction. Because the signal means are all very close to the threshold, an increase in amplitude means that the signal reaches the highly non-linear part of the tunigncurve. The only exception is for the signal mean below the threshold (9.5mV) and relatively weak noise. For those parameters, encoding is very weak. The signal most of the time is not strong enough to create any spikes in the neurons. By increasing the signal amplitude, the non-zero part of the tuning curve is reached more often.}
|
||
\end{figure}
|
||
|
||
\section*{Discussion}

In this paper we have shown the effect of suprathreshold stochastic resonance (SSR) in ensembles of neurons. We detailed how noise levels affect the impact of population size on the coding fraction. We looked at different frequency ranges and showed that the encoding of high-frequency signals profits particularly well from SSR. Using the tuning curve we were able to provide a way to extrapolate the effects of SSR to very large populations. Because a general analysis of the impact of changing parameters is complex, we investigated limit cases, in particular the slow stimulus limit and the weak stimulus limit. For low-frequency signals, i.e. the slow stimulus limit, the tuning curve also allows analyzing the impact of changing signal strength; in addition we were able to show the difference between sub-threshold SR and SSR for different noise levels. For the weak stimulus limit, where the noise is relatively strong compared to the signal, we were able to provide an analytical explanation for our observations.

Hoch et al. \citep{hoch2003optimal} also showed that SSR effects hold for both LIF and Hodgkin-Huxley neurons. However, they found that the optimal noise level depends "close to logarithmically" on the number of neurons in the population. They used a cutoff frequency of only 20Hz for their simulations. \notedh{A plot relating population size and optimum noise is still missing here.}

We investigated the impact of noise on homogeneous populations of neurons. That neurons are intrinsically noisy is a well-investigated phenomenon (Grewe et al. 2017, Padmanabhan and Urban 2010). In natural systems, however, neuronal populations are rarely homogeneous. Padmanabhan and Urban (2010) showed that heterogeneous populations of neurons carry more information than homogeneous populations.
%\notedh{Aber noisy! Zitieren: Neurone haben intrinsisches Rauschen (Einleitung?)} (Grewe, Lindner, Benda 2017 PNAS Synchronoy code) (Padmanabhan, Urban 2010 Nature Neurosci).
Beiran et al. (2017) investigated SSR in heterogeneous populations of neurons. They argued that heterogeneous populations are comparable to homogeneous populations in which the neurons receive independent noise in addition to a deterministic signal. They also make the point that in the case of weak signals heterogeneous populations can encode information better, as strong noise would overwhelm the signal.
\notedh{Highlight the differences!} Similarly, Hunsberger et al. (2014) showed that both noise and heterogeneity linearize the tuning curve of LIF neurons. In summary, while noise and heterogeneity are not completely interchangeable, in the limit cases we see similar behaviour.

Sharafi et al. \citep{Sharafi2013} had already investigated SSR in a similar way. However, they only observed populations of up to three neurons and focused on the synchronous output of the cells. They took spike trains, convolved them with a Gaussian and then multiplied the responses of the different neurons. In our simulations we instead used the sum of the spike trains to calculate the coherence between input and output. Instead of changing the noise parameter to find the optimum noise level, they changed the input signal frequency to find a resonating frequency, which was possible for suprathreshold stochastic resonance, but not for subthreshold stochastic resonance. For some combinations of parameters we also found that coding fraction does not decrease monotonically with increasing signal frequency (fig. \ref{cf_for_frequencies}). This is especially notable for signals that are far from the threshold (fig. \ref{cf_for_frequencies} E,F, red markers). That we do not see the effect as clearly matches Sharafi et al.'s observation that in the case of subthreshold stochastic resonance coherence decreases monotonically with increasing frequency. Pakdaman et al. (2001)
\notedh{Connect this better than the following (compare across orders of magnitude; compare with figure 5\ref{}; cite more than Sharafi, keyword ``Coherence Resonance'')}

Similar research to Sharafi et al. was done by de la Rocha et al. (2007). They investigated the output correlation of populations of two neurons and found that it increases with the firing rate. We found something similar in this paper, where an increase in $\mu$ increases both the firing rate of the neurons and generally also the coding fraction \notedh{Connect with output correlation} (fig. \ref{codingfraction_means_amplitudes}). Our explanation is that coding fraction and firing rate are linked via the tuning curve. In addition to simulations of LIF neurons, de la Rocha et al. also carried out \textit{in vitro} experiments in which they confirmed their simulations.

\notedh{Make this more concrete: what do the others do that relates to our work, and what exactly does it have to do with us?}
\notedh{Maybe mention Stocks again, even though he already appears in the introduction? Heterogeneous/homogeneous}
\notedh{Dynamic stimuli! Not in Stocks, for example, but e.g. in Beiran. We cover the transition.}

Examples of neuronal systems that feature noise are the P-unit receptor cells of weakly electric fish (which paper?) and ...

In the case of a low cutoff frequency and strong noise we were able to derive a formula that explains why coding fraction then simply depends on the ratio between noise strength and population size, whereas in general the two variables have very different effects on the coding fraction.

\subsection{Different frequency ranges}

\subsection{Narrow-/wideband}

\subsection{Narrowband stimuli}

Using the \(f_{cutoff} = 200 \hertz\usk\) signal, we repeated the analysis (fig. \ref{cf_limit}), considering only selected parts of the spectrum. We did so for two "low frequency" (0--8Hz, 0--50Hz) and two "high frequency" (192--200Hz, 150--200Hz) intervals.\notedh{8Hz is not in yet.} We then compared the results to the results we get from narrowband stimuli with power only in those frequency bands.
To keep the power of the signal inside the two intervals the same as in the broadband stimulus, the amplitude of the narrowband signals was smaller than that of the broadband signal. For the 8Hz intervals, the amplitude (i.e. standard deviation) of the signal was 0.2mV, a fifth of the amplitude of the broadband signal. Because signal power is proportional to the square of the amplitude, this is appropriate for a stimulus whose spectrum is 25 times narrower. Similarly, for the 50Hz intervals we used a 0.5mV amplitude, half of that of the broadband stimulus: as the square of the amplitude is equal to the integral over the frequency spectrum, for a signal with a quarter of the width we need to halve the amplitude to have the same power in the interval defined by the narrowband signals.

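This bookkeeping can be checked numerically. The following is a toy sketch (random-phase cosines with a flat spectrum, not the actual stimulus generator): a narrowband signal covering a quarter of the bandwidth with half the amplitude carries the same power as the broadband signal does inside that band.

```python
import math
import random

random.seed(1)

def flat_signal(freqs, sigma, t):
    # each cosine carries sigma**2 / len(freqs) of the total power sigma**2
    amp = sigma * math.sqrt(2.0 / len(freqs))
    phases = [random.uniform(0, 2 * math.pi) for _ in freqs]
    return [sum(amp * math.cos(2 * math.pi * f * ti + p)
                for f, p in zip(freqs, phases)) for ti in t]

t = [i * 0.001 for i in range(4000)]          # 4 s sampled at 1 kHz
broad_freqs = [f + 0.5 for f in range(200)]   # flat spectrum, 0-200 Hz
narrow_freqs = broad_freqs[:50]               # 0-50 Hz, a quarter of the width

broad = flat_signal(broad_freqs, 1.0, t)      # amplitude (std) 1.0
narrow = flat_signal(narrow_freqs, 0.5, t)    # half the amplitude

def power(x):
    m = sum(x) / len(x)
    return sum((xi - m) ** 2 for xi in x) / len(x)

# narrowband power (~0.25) equals the broadband power in 0-50 Hz:
# total broadband power (~1.0) times 50/200
print(round(power(broad), 2), round(power(narrow), 2))
```

The total power of the narrowband signal comes out as a quarter of the broadband signal's total power, i.e. exactly the share the broadband signal carries in the 0--50Hz band.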
\subsection{Smaller frequency intervals in broadband signals}

\begin{figure}
\includegraphics[width=0.45\linewidth]{img/small_in_broad_spectrum}
\includegraphics[width=0.45\linewidth]{img/power_spectrum_0_50}
\includegraphics[width=0.49\linewidth]{{img/broad_coherence_15.0_1.0}.pdf}
\includegraphics[width=0.49\linewidth]{{img/coherence_15.0_0.5_narrow_both}.pdf}
\includegraphics[width=0.49\linewidth]{{img/broad_coherence_10.5_1.0_200}.pdf}
\includegraphics[width=0.49\linewidth]{{img/coherence_10.5_0.5_narrow_both}.pdf}
\caption{Coherence for broadband and narrowband inputs. a) Broad spectrum. At the frequency of the firing rate (91Hz, marked by the black bar) and its first harmonic (182Hz) the coherence breaks down. For the weak noise level (blue), population sizes n=4 and n=4096 show indistinguishable coherence. In case of a small population size, coherence is higher for weak noise (blue) than for strong noise (green) in the frequency range up to about 50\hertz. For higher frequencies coherence is unchanged. For the combination of the larger population size and the greater noise strength there is a huge increase in coherence at all frequencies.
b) Coherence for two narrowband inputs with different frequency ranges. Low frequency range: coherence for the slow parts of the signal is close to 1 for weak noise. SSR works mostly on the higher frequencies (here >40\hertz). High frequency range: at 182Hz (twice the firing frequency) there is a very sharp decrease in coherence, especially for the weak noise condition (blue). Increasing the noise makes the drop less pronounced. For weak noise (blue) there is another breakdown at $182-(200-182) = 164$Hz. Stronger noise seems to make this sharp drop disappear. Again, the effect of SSR is most noticeable for the higher frequencies in the interval.\notedh{Add description for 10.5mV}}
\label{fig:coherence_narrow}
\end{figure}

We want to know how well encoding works for different frequency intervals in the signal. In many cases we only care about a certain frequency band in a signal of much wider bandwidth. When we take a narrower frequency interval out of a broadband signal, the other frequencies in the signal act as common noise to the neurons encoding the signal.

In figure \ref{fig:coherence_narrow} C we can see that SSR has very different effects on some frequencies inside the signal than on others. In blue we see the case of very weak noise (\(10^{-6} \milli\volt\squared\per\hertz\)). Coherence starts somewhat close to 1, but falls off quickly: it reaches about 0.5 by 50Hz and drops to almost zero around the 91Hz firing rate of the neurons. Following that there is a small increase up to about 0.1 at around 130Hz, after which coherence decreases to almost 0 again. Increasing the population size from 4 neurons to 2048 neurons has practically no effect. When we keep the population size at 4 neurons but add more noise to the neurons (green, \(2\cdot10^{-3} \milli\volt\squared\per\hertz\)), encoding of the low frequencies (up to about 50\hertz) becomes worse, while encoding of the higher frequencies stays unchanged. When we increase the population size to 2048 neurons we have almost perfect encoding for frequencies up to 50\hertz. Coherence is still reduced around the average firing rate of the neurons, but at a much higher level than before. For higher frequencies coherence becomes higher again.

For the weaker mean input (figure \ref{fig:coherence_narrow} E) results look similar. For weak noise (blue) there is no difference for the increased population. Coherence starts relatively high again (around 0.7). There is a decrease in coherence with increasing frequency which is steep at first, until about the firing rate of the neuron, after which the decrease flattens off. For stronger noise, encoding at low frequencies is worse for small populations; for large populations the coherence is greatly increased at all frequencies. Coherence is very close to 1 at first, decreases slightly as the frequency increases up to the firing rate, and stays about constant afterwards.

In summary, the high frequency bands inside the broadband stimulus experience a much greater increase in encoding quality than the low frequency bands, which were already encoded quite well.

\begin{figure}
\centering
\includegraphics[width=0.45\linewidth]{img/broadband_optimum_newcolor.pdf}
\includegraphics[width=0.45\linewidth]{img/smallband_optimum_newcolor.pdf}
\includegraphics[width=0.9\linewidth]{img/max_cf_smallbroad.pdf}
\caption{
C and D \notedh{B and C right now because the order in the right column was mixed up}: Optimal amount of noise for different numbers of neurons. The dashed lines show where the coding fraction is still at least 95\% of the maximum. The peaks are much wider for the narrowband signals; the peak for the low-frequency interval encompasses the entire width of the high-frequency interval peak.
Optimal noise values for a fixed number of neurons are always higher for the broadband signal than for the narrowband signals.
In the broadband case, the optimal amount of noise is larger for the high-frequency interval than for the low-frequency interval, and vice versa in the narrowband case. %The optimal noise values have been fitted with a function of square root of the population size N, $f(N)=a+b\sqrt{N}$. We observe that the optimal noise value grows with the square root of population size.
E and F: Coding fraction as a function of noise for a fixed population size (N=512). Red dots show the maximum, the red line where coding fraction is at least 95\% of the maximum value.
G: An increase in population size leads to a higher coding fraction, especially for broader bands and higher frequency intervals. Coding fraction is larger for the narrowband signal than in the equivalent broadband interval for all population sizes considered here. The coding fraction for the low frequency interval is always larger than for the high frequency interval.
Signal mean $\mu=15.0\milli\volt$, signal amplitude $\sigma=1.0\milli\volt$ and $\sigma=0.5\milli\volt$, respectively.}
\label{smallbroad}
\end{figure}

\subsection{Narrowband Signals vs Broadband Signals}

In nature, an external stimulus often covers a narrow frequency range that starts at high frequencies, so using only broadband white noise signals as input appears insufficient to describe realistic scenarios.\notedh{Add examples.}
%, with bird songs\citep{nottebohm1972neural} and ???\footnote{chirps, in a way?}.
%We see that in many animals receptor neurons have adapted to these signals. For example, it was found that electroreceptors in weakly electric fish have band-pass properties\citep{bastian1976frequency}.
Therefore, we investigate the coding of narrowband signals in the ranges described earlier (0--50Hz, 150--200Hz). Comparing the results from the coding of broadband and narrowband signals, we see several differences.

For both low and high frequency signals, the narrowband signal can be resolved better than the broadband signal for any amount of noise and at all population sizes (figure \ref{smallbroad}, bottom left). That coding fractions are higher for narrowband signals can be explained by the absence of the additional frequencies present in the broadband signal. In the broadband signal these act as a form of "noise" that is common to all input neurons. Similar to what we saw for the broadband signal, the peak of the low frequency input is still much broader than the peak of the high frequency input. To encode low frequency signals the exact strength of the noise is not as important as it is for high frequency signals, as can be seen from the wider peaks.

\subsection{Discussion}

The usefulness of noise for the encoding of subthreshold signals by single neurons has been well investigated. However, the encoding of suprathreshold signals by populations of neurons has received comparatively little attention, and different effects play a role for suprathreshold signals than for subthreshold signals \citep{Stocks2001}. This paper delivers an important contribution to the understanding of suprathreshold stochastic resonance (SSR). We simulate populations of leaky integrate-and-fire neurons to answer the question of how population size influences the optimal noise strength for linear encoding of suprathreshold signals. We are able to show that this optimal noise is well described as a function of the square root of the population size.\notedh{Currently missing, but it is somewhere in my notes ...} This relationship is independent of the frequency properties of the input signal and holds true for narrowband and broadband signals.

In this paper, we show that SSR works in LIF neurons for a variety of signals of different bandwidths and frequency intervals. We show that, for signals above a certain strength, the signal-to-noise ratio is sufficient to describe the optimal noise strength in the population, but that the actual coding fraction depends on the absolute value of the signal strength. %We furthermore show that increasing signal strength does not always increase the coding fraction.

We contrast how well the low and high frequency parts of a broadband signal can be encoded. We take an input signal with $f_{cutoff} = \unit{200}\hertz$ and analyse the coding fraction for the frequency ranges 0 to \unit{50}\hertz\usk and 150 to \unit{200}\hertz\usk separately. The maximum value of the coding fraction is lower for the high frequency interval than for the low frequency interval. This means that inside broadband signals higher frequency intervals are more difficult to encode at every level of noise and population size. The low frequency interval has a wider peak (defined as the range over which coding fraction is at least 95\% of its maximum value), which means that around the optimal amount of noise there is a large range where coding fraction is still good. The noise optimum for the low frequency part of the input is lower than the optimum for the high frequency interval (Fig. \ref{highlowcoherence}). In both cases, the optimal noise value appears to grow with the square root of the population size.\notedh{See note above}

In general, narrowband signals can be encoded better than broadband signals.

narrowband vs broadband

Another main finding of this paper is the discovery of the frequency dependence of SSR. As we can see from the shape of the coherence between the signal and the output of the simulated neurons, SSR works mostly on the higher frequencies in the signal. As the lower frequency components are in many cases already encoded very well, the addition of noise helps to flatten the shape of the coherence curve. In the case of weak noise there are often border effects, which disappear with increasing noise strength. In addition, for weak noise there are often visible effects of the firing rate of the neurons, in the sense that the encoding around those frequencies is worse than for the surrounding frequencies. Generally this effect becomes less pronounced when we add more noise to the simulation, but we found a very striking exception in the case of narrowband signals. For a firing rate of about 91\hertz\usk the coding fraction for a signal in the 0--50\hertz\usk band is better than for a signal in the 150--200\hertz\usk band; however, this is not the case if the neurons have a firing rate of about 34\hertz. We were thus able to show that the firing rate of the neurons in the simulation is of critical importance for the encoding of the signal.

\section{Theory}

\subsection{For large population sizes and strong noise, coding fraction becomes a function of their quotient}

For the linear response regime of large noise, we can estimate the coding fraction. From Beiran et al. 2018 we know that the coherence in linear response is given as

\eq{
C_N(\omega) = \frac{N|\chi(\omega)|^2 S_{ss}}{S_{x_ix_i}(\omega)+(N-1)|\chi(\omega)|^2S_{ss}}
\label{eq:linear_response}
}

where \(\chi(\omega)\) is the susceptibility of a single neuron and \(S_{x_ix_i}(\omega)\) the power spectrum of a single neuron's output. In terms of the coherence function of a single LIF neuron, \(C_1(\omega) = |\chi(\omega)|^2 S_{ss}(\omega)/S_{x_ix_i}(\omega)\), this reads \(C_N = NC_1/(1+(N-1)C_1)\). Generally, the single-neuron coherence is given by \citep{??}

\eq{
C_1(\omega)=\frac{r_0}{D} \frac{\omega^2S_{ss}(\omega)}{1+\omega^2}\frac{\left|\mathcal{D}_{i\omega-1}\big(\frac{\mu-v_T}{\sqrt{D}}\big)-e^{\Delta}\mathcal{D}_{i\omega-1}\big(\frac{\mu-v_R}{\sqrt{D}}\big)\right|^2}{\left|{\cal D}_{i\omega}(\frac{\mu-v_T}{\sqrt{D}})\right|^2-e^{2\Delta}\left|{\cal D}_{i\omega}(\frac{\mu-v_R}{\sqrt{D}})\right|^2}
\label{eq:single_coherence}
}

where \(r_0\) is the firing rate of the neuron,
\[r_0 = \left(\tau_{ref} + \sqrt{\pi}\int_\frac{\mu-v_T}{\sqrt{2D}}^\frac{\mu-v_R}{\sqrt{2D}} dz\, e^{z^2} \erfc(z) \right)^{-1}.\]
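The rate formula can be evaluated directly with the trapezoidal rule. The following stdlib-only sketch uses the standard form of the integral, with lower limit \((\mu-v_T)/\sqrt{2D}\) and upper limit \((\mu-v_R)/\sqrt{2D}\); the parameter values are illustrative and not the ones used in our simulations.

```python
import math

def lif_firing_rate(mu, D, v_reset=0.0, v_thresh=1.0, tau_ref=0.0, steps=20_000):
    """r0 = 1 / (tau_ref + sqrt(pi) * integral of exp(z^2)*erfc(z) dz),
    integrated from (mu - v_thresh)/sqrt(2D) to (mu - v_reset)/sqrt(2D).
    Dimensionless LIF (membrane time constant 1). Note: math.exp(z*z)
    overflows for very weak noise, i.e. very large integration limits."""
    a = (mu - v_thresh) / math.sqrt(2.0 * D)
    b = (mu - v_reset) / math.sqrt(2.0 * D)
    dz = (b - a) / steps
    # exp(z^2)*erfc(z) is smooth and decays like 1/(z*sqrt(pi)) for large z
    f = lambda z: math.exp(z * z) * math.erfc(z)
    integral = (0.5 * (f(a) + f(b)) +
                sum(f(a + i * dz) for i in range(1, steps))) * dz
    return 1.0 / (tau_ref + math.sqrt(math.pi) * integral)

# suprathreshold mean: the neuron fires even for moderate noise;
# subthreshold mean with weak noise: the rate is much lower
print(lif_firing_rate(mu=1.5, D=0.1))
print(lif_firing_rate(mu=0.8, D=0.01))
```

For a suprathreshold mean and weak noise the result approaches the deterministic rate \(1/\ln((\mu-v_R)/(\mu-v_T))\), which is a quick sanity check on the integration.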
In the limit of large noise (calculation in the appendix) this equation evaluates to:

\eq{
C_1(\omega) = \sqrt{\pi}D^{-1}
\frac{S_{ss}(\omega)\omega^2/(1+\omega^2)}{2 \sinh\left(\frac{\omega\pi}{2}\right)\Im\left( \Gamma\left(1+\frac{i\omega}{2}\right)\Gamma\left(\frac12-\frac{i\omega}{2}\right)\right)}
\label{eq:simplified_single_coherence}
}

From eqs. \ref{eq:linear_response} and \ref{eq:simplified_single_coherence} it follows that in the case \(D \rightarrow \infty\) the coherence, and therefore the coding fraction, of the population of LIF neurons is a function of \(D^{-1}N\). We plot the approximation as a function of \(\omega\) (fig. \ref{d_n_ratio}). In the limit of small frequencies the approximation matches the exact equation very well, though not for higher frequencies. We can verify this in our simulations by plotting coding fraction as a function of \(\frac{D}{N}\). We see (fig. \ref{d_n_ratio}) that in the limit of large \(D\) the curves indeed lie on top of each other. This is, however, not the case (fig. \ref{d_n_ratio}) for stimuli with a large cutoff frequency \(f_c\), as expected from our evaluation of the approximation as a function of frequency.

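The scaling argument can be made concrete with a toy single-neuron coherence \(C_1 \propto 1/D\) at a fixed frequency; the constant \texttt{k} below is a hypothetical stand-in for the frequency-dependent prefactor of the large-noise limit.

```python
# Population coherence in linear response: C_N = N*C1 / (1 + (N-1)*C1).
# With C1 = k/D this becomes N*k / (D + (N-1)*k), which for large N and D
# depends on D and N only through the ratio D/N.
def population_coherence(N, D, k=2.0):
    c1 = k / D                      # toy single-neuron coherence, large-D limit
    return N * c1 / (1.0 + (N - 1) * c1)

# three (N, D) pairs with the same ratio D/N = 15.625
pairs = [(64, 1000.0), (256, 4000.0), (1024, 16000.0)]
values = [population_coherence(N, D) for N, D in pairs]
print(values)
```

The three values nearly coincide, illustrating the collapse of the curves in the bottom row of fig. \ref{d_n_ratio}.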

\begin{figure}
\centering
\includegraphics[width=0.32\linewidth]{{img/d_over_n/d_10.5_0.5_10_detail}.pdf}
\includegraphics[width=0.32\linewidth]{{img/d_over_n/d_15.0_0.5_50_detail}.pdf}
\includegraphics[width=0.32\linewidth]{{img/d_over_n/d_15.0_1.0_200_detail}.pdf}
\includegraphics[width=0.32\linewidth]{{img/d_over_n/d_over_n_10.5_0.5_10_detail}.pdf}
\includegraphics[width=0.32\linewidth]{{img/d_over_n/d_over_n_15.0_0.5_50_detail}.pdf}
\includegraphics[width=0.32\linewidth]{{img/d_over_n/d_over_n_15.0_1.0_200_detail}.pdf}
\caption{Top row: Coding fraction as a function of noise.
Bottom row: Coding fraction as a function of the ratio between noise strength and population size. For strong noise, coding fraction is a function of this ratio.
Left: signal mean 10.5mV, signal amplitude 0.5mV, $f_{c} = 10$Hz.
Middle: signal mean 15.0mV, signal amplitude 0.5mV, $f_{c} = 50$Hz.
Right: signal mean 15.0mV, signal amplitude 1.0mV, $f_{c} = 200$Hz.}
\label{d_n_ratio}
\end{figure}


\subsection{Refractory period}

We analyzed the effect of non-zero refractory periods on the previous results. We repeated the same simulations as before, but added a 1ms or a 5ms refractory period to each of the LIF neurons. Results are summarized in figure \ref{refractory_periods}. Results change very little for a refractory period of 1ms, especially for large noise values. For a refractory period of 5ms the resulting coding fraction is lower for almost all noise values. Paradoxically, for high frequencies in narrowband signals and very weak noise, coding fraction is actually larger for the 5ms refractory period than for the 1ms period. \notedh{Needs plots!} In spite of this, coding fraction is still largest for the LIF ensembles without a refractory period.

We also find all other results replicated with refractory periods of 1ms or 5ms: figure \ref{refractory_periods} shows that the optimal noise still grows with \(\sqrt{N}\) for both the 1ms and the 5ms refractory period. The value of the optimal noise increases with the refractory period. The achievable coding fraction is lower for the neurons with refractory periods, especially at the maximum. In the limit of large noise, the neurons with a 1ms refractory period and the ones without a refractory period also yield similar coding fractions over a wide range of population sizes. However, this is not true for the neurons with a 5ms refractory period.

\begin{figure}
\includegraphics[width=0.8\linewidth]{img/ordnung/refractory_periods_coding_fraction.pdf}
\caption{Repeating the simulations with a refractory period added to the LIF neurons shows no qualitative changes in the SSR behaviour of the neurons. Coding fraction is lower the longer the refractory period. The SSR peak moves towards stronger noise; cells with longer refractory periods need stronger noise to work optimally.}
\label{refractory_periods}
\end{figure}

\section{Electric fish}

\subsection{Introduction}

\subsection{Methods}

\subsection*{Electrophysiology}

We recorded electrophysiological data from X cells from Y different fish.

\textit{Surgery}. Twenty-two \textit{E. virescens} (10 to 21 cm) were used for single-unit recordings. Recordings of electroreceptors were made from the anterior part of the lateral line nerve. Fish were initially anesthetized with 150 mg/l MS-222 (PharmaQ, Fordingbridge, UK) until gill movements ceased and were then respirated with a constant flow of water through a mouth tube, containing 120 mg/l MS-222 during the surgery to sustain anesthesia. The lateral line nerve was exposed dorsal to the operculum. Fish were fixed in the setup with a plastic rod glued to the exposed skull bone. The wounds were locally anesthetized with Lidocainehydrochloride 2\% (bela-pharm, Vechta, Germany) before the nerve was exposed. Local anesthesia was renewed every 2 h by careful application of Lidocaine to the skin surrounding the wound. After surgery, fish were immobilized with 0.05 ml of 5 mg/ml tubocurarine (Sigma-Aldrich, Steinheim, Germany) injected into the trunk muscles. \sout{Since tubocurarine suppresses all muscular activity, it also suppresses the activity of the electrocytes of the electric organ and thus strongly reduces the EOD of the fish. We therefore mimicked the EOD by a sinusoidal signal provided by a sine-wave generator (Hameg HMF 2525; Hameg Instruments, Mainhausen, Germany) via silver electrodes in the mouth tube and at the tail. The amplitude and frequency of the artificial field were adjusted to the fish’s own field as measured before surgery.} After surgery, fish were transferred into the recording tank of the setup filled with water from the fish’s housing tank not containing MS-222. Respiration was continued without anesthesia. The animals were submerged into the water so that the exposed nerve was just above the water surface. Electroreceptors located on the parts above the water surface did not respond to the stimulus and were excluded from the analysis. Water temperature was kept at 26°C.\footnote{From St\"ockl et al. 2014}

\textit{Recording.} Action potentials from electroreceptor afferents were recorded intracellularly with sharp borosilicate microelectrodes (GB150F-8P; Science Products, Hofheim, Germany), pulled to a resistance between 20 and 100 M\ohm\usk and filled with a 1 M KCl solution. Electrodes were positioned by microdrives (Luigs-Neumann, Ratingen, Germany). As a reference, glass microelectrodes were used. They were placed in the tissue surrounding the nerve, adjusted to the isopotential line of the recording electrode. The potential between the micropipette and the reference electrode was amplified (SEC-05X; npi electronic) and low-pass filtered at 10 kHz. Signals were digitized by a data acquisition board (PCI-6229; National Instruments) at a sampling rate of 20 kHz. Spikes were detected and identified online based on the peak-detection algorithm proposed by Todd and Andrews (1999).
The EOD of the fish was measured between the head and tail via two carbon rod electrodes (11 cm long, 8-mm diameter). The potential at the skin of the fish was recorded by a pair of silver wires, spaced 1 cm apart, which were placed orthogonal to the side of the fish at two-thirds body length. The residual EOD potentials were recorded and monitored with a pair of silver wire electrodes placed in a piece of tube that was put over the tip of the tail. These EOD voltages were amplified by a factor of 1,000 and band-pass filtered between 3 Hz and 1.5 kHz (DPA-2FXM; npi electronics).
Stimuli were attenuated (ATN-01M; npi electronics), isolated from ground (ISO-02V; npi electronics), and delivered by two carbon rod electrodes (30-cm length, 8-mm diameter) placed on either side of the fish parallel to its longitudinal axis. Stimuli were calibrated to evoke defined AMs measured close to the fish. Spike and EOD detection, stimulus generation and attenuation, as well as preanalysis of the data were performed online during the experiment within the RELACS software version 0.9.7 using the efish plugin-set (by J. Benda: \url{http://www.relacs.net}).\footnote{From St\"ockl et al. 2014}
Recordings were carried out by \notedh{insert names here}.

\textit{Stimulation.} White noise stimuli with a cutoff frequency of 300{\hertz} defined an AM of the fish's signal. The stimulus was combined with the fish's own EOD in a way that the desired AM could be measured near the fish. Amplitude of the AM was 10\% (?) of the amplitude of the EOD. Stimulus duration was between 2s and 10s, with a time resolution of X.
|
||
We also used 5 narrowband stimuli in the frequency ranges 0-50Hz, 50-100Hz, 150-200Hz, 250-300Hz and 350-400Hz. Parameters for these were ... \notedh{fill information}

\subsection{Analysis}

To analyse the data we proceeded in the same way as for the simulations; for more information see section \ref{Analysis}.
We created populations from each cell: for each p-unit, we took the different trials and added up the spikes in each time bin, the same way we did for the simulated neurons. \notedh{Did I do something to build averages for smaller population sizes?} For most of the analysed cells there were between X and Y \notedh{fill information in} trials.
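The pooling of trials into simulated populations can be sketched as follows (a minimal illustration with mock spike times; function and variable names are ours, not taken from the analysis code):

```python
import numpy as np

def population_response(spike_trains, duration, dt):
    """Sum the spikes of several trials of one cell in common time bins,
    treating the pooled trials as one homogeneous population response."""
    nbins = int(round(duration / dt))
    pooled = np.zeros(nbins)
    for spikes in spike_trains:
        counts, _ = np.histogram(spikes, bins=nbins, range=(0.0, duration))
        pooled += counts
    return pooled

# two mock trials of the same cell, 1 s long, 1 ms bins
trials = [np.array([0.10, 0.25, 0.70]), np.array([0.10, 0.50])]
pooled = population_response(trials, duration=1.0, dt=0.001)
```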

\subsection{Frequency dependence of sensory cells in \lepto}

\begin{figure}
\centering
\includegraphics[width=0.49\linewidth]{img/fish/coherence_example.pdf}
\includegraphics[width=0.49\linewidth]{img/fish/coherence_example_narrow.pdf}
\caption{Examples of coherence in the p-units of \lepto. Each plot shows the coherence of the response of a single cell to a stimulus for different numbers of trials. As in the simulations, increased population sizes lead to a higher coherence. Left: one signal with a maximum frequency of 300Hz. Right: three different signals (0--50Hz, 150--200Hz and 350--400Hz). With increasing population size the increase in coherence was especially noticeable for the higher frequency ranges. See also figure \ref{fish_result_summary_yue} b). \notedh{Show a different cell with all five narrowband signals?}}
\label{fig:ex_data}
\end{figure}

\begin{figure}
\includegraphics[width=0.65\linewidth]{img/sigma/cf_N_sigma.pdf}
\caption{Coding fraction as a function of population size. Each line represents one cell. Populations were created by taking separate trials from each cell. Line color indicates the firing rate of the cell. \notedh{What are they sorted/divided by? $\sigma$?}}
\label{Curve_examples}
\end{figure}

\subsection{How to determine noisiness}

\subsection*{Determining noise in the real world}

While in simulations we can control the noise parameter directly, we cannot do so in electrophysiological experiments. Therefore, we need a way to quantify ``noisiness''. One such way is to use the activation curve of a neuron, fitting a function to it and extracting the parameters of that function. Stocks (2000) uses one such function to simulate groups of noisy spiking neurons:

\begin{equation}
\label{errorfct}\frac{1}{2}\erfc\left(\frac{\theta-x}{\sqrt{2\sigma^2}}\right)
\end{equation}

where $x$ is the input, $\theta$ is the firing threshold and $\sigma$ is the parameter quantifying the noise (figure \ref{Curve_examples}); $\sigma$ determines the steepness of the curve. A neuron with a $\sigma$ of 0 would be a perfect thresholding mechanism: the firing probability for all inputs below the threshold is 0, and the firing probability for all inputs above it is 1. Larger values of $\sigma$ mean a flatter activation curve. A neuron with such an activation curve can sometimes fire even for signals below the firing threshold, while it will sometimes not fire for inputs above the firing threshold; its firing behaviour is influenced less by the signal and more by the noise.
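As an illustration of how $\sigma$ shapes equation \eqref{errorfct}, the activation curve can be evaluated directly (a sketch; the particular values of $\theta$ and $\sigma$ are arbitrary):

```python
import math

def firing_probability(x, theta, sigma):
    """Probability of firing for input x, from the error-function
    activation curve; sigma sets how flat the curve is."""
    if sigma == 0.0:  # limit case: a perfect thresholding mechanism
        return 1.0 if x >= theta else 0.0
    return 0.5 * math.erfc((theta - x) / math.sqrt(2.0 * sigma**2))

# the same subthreshold input, for a sharp and for a noisy neuron
theta = 1.0
p_sharp = firing_probability(0.9, theta, sigma=0.05)  # close to 0
p_noisy = firing_probability(0.9, theta, sigma=0.5)   # close to 0.5
```

For the sharp neuron the subthreshold input almost never triggers a spike; for the noisy neuron the firing probability approaches chance level, as described above.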
We also tried several other commonly used methods of quantifying noise (citations), but none of them worked as well as the error function fit (figs. \ref{noiseparameters} and \ref{noiseparameters2}).

\subsection*{Methodology}

We calculate the cross-correlation between the signal and the discrete output spikes. The signal values were binned into 50 bins; the result is a discrete Gaussian distribution around 0mV, the mean of the signal, as expected from the way the signal was created.
We have to account for the delay between the moment we play the signal and the moment it is processed in the cell, which can for example depend on the position of the cell on the skin. We can easily reconstruct this delay from the measurements: the position of the peak of the cross-correlation is the time shift for which the signal influences the output the most. For an explanation, see figure \ref{timeshift}.
Then, for every spike, we assign the value of the signal at the time of the spike minus the time shift. The result is a histogram in which each signal-value bin holds a number of spikes. This histogram is then normalized by the distribution of the signal, yielding another histogram whose values are firing frequencies for each signal value. Because those frequencies are just firing probabilities multiplied by time, we can fit a Gaussian error function to them.
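The steps above (estimating the delay from the cross-correlation peak, binning the delayed signal values at spike times, normalizing by the signal distribution, and fitting the error function) can be sketched as follows. This is a simplified sketch on synthetic data, not the actual analysis code; all names and parameter values are illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

def activation(x, theta, sigma):
    # firing probability as in the error-function equation above
    return 0.5 * erfc((theta - x) / np.sqrt(2.0 * sigma**2))

rng = np.random.default_rng(1)
n, dt = 200_000, 0.001
signal = rng.normal(0.0, 1.0, n)  # Gaussian signal, mean 0

# synthetic cell: spikes drawn from the activation curve, response
# delayed by 5 bins to mimic the recording delay
true_delay = 5
spikes = rng.random(n) < activation(signal, theta=1.0, sigma=0.4)
spikes = np.roll(spikes, true_delay)

# 1) delay = lag of the peak of the signal-spike cross-correlation
lags = np.arange(50)
xcorr = [np.dot(signal[:n - lag], spikes[lag:]) for lag in lags]
shift = int(lags[np.argmax(xcorr)])

# 2) signal value at each spike time, corrected by the shift
spike_idx = np.where(spikes)[0]
spike_idx = spike_idx[spike_idx >= shift]
edges = np.linspace(-3.0, 3.0, 51)  # 50 signal-value bins
centers = 0.5 * (edges[:-1] + edges[1:])
spk_counts, _ = np.histogram(signal[spike_idx - shift], bins=edges)

# 3) normalize by the signal distribution -> firing probability per bin
sig_counts, _ = np.histogram(signal, bins=edges)
valid = sig_counts > 0
prob = spk_counts[valid] / sig_counts[valid]

# 4) fit the error function to recover theta and sigma
popt, _ = curve_fit(activation, centers[valid], prob, p0=(0.0, 1.0))
theta_fit, sigma_fit = popt
```

On this synthetic cell the procedure recovers the delay as well as the threshold and noise parameters that generated the spikes.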

\subsection*{Simulation}

To confirm that the $\sigma$ parameter estimated from the fit is indeed a good measure of noisiness, we validated it against $D$, the noise parameter from the simulations. We find that there is a strictly monotonic relationship between the two for different sets of simulation parameters. Other parameters often used to determine noisiness (citations), such as the variance of the spike PSTH or the coefficient of variation (CV) of the interspike intervals, are not as useful. In figure \ref{noiseparameters} we see why: the variance of the PSTH is not always monotonic in $D$ and is very flat for low values of $D$.
%describe what happens to the others
%check Fano-factor maybe?

\begin{figure}
\includegraphics[width=0.45\linewidth]{img/ordnung/base_D_sigma}
\includegraphics[width=0.45\linewidth]{img/dataframe_scatter_D_normalized_psth_1ms_test_tau}
\includegraphics[width=0.45\linewidth]{img/dataframe_scatter_D_psth_5ms_test}
% \includegraphics[width=0.45\linewidth]{img/dataframe_scatter_D_cv_test}
\caption{a) The parameter \(\sigma\) as a function of the noise parameter $D$ in LIF simulations. There is a strictly monotonic relationship between the two, which allows us to use \(\sigma\) as a substitute for $D$ in the analysis of electrophysiological experiments. b--d) Other parameters commonly used to quantify noise; none of them is strictly monotonic in $D$ and therefore none is a useful substitute for $D$. b) Peri-stimulus time histogram (PSTH) of the spikes with a bin width of 1ms, normalized by c) PSTH of the spikes with a bin width of 5ms. d) Coefficient of variation (CV) of the interspike intervals.}
\label{noiseparameters}
\end{figure}

\begin{figure}
\centering
%\includegraphics[width=0.45\linewidth]{img/ordnung/base_D_sigma}\\
\includegraphics[width=0.23\linewidth]{img/cv_psth_sigma_compare/dataframe_scatter_labels_D_sigma_membrane_50.pdf}
\includegraphics[width=0.23\linewidth]{img/cv_psth_sigma_compare/dataframe_scatter_labels_D_cv_membrane_50.pdf}
\includegraphics[width=0.23\linewidth]{img/cv_psth_sigma_compare/dataframe_scatter_labels_D_psth_1ms_membrane_50.pdf}
\includegraphics[width=0.23\linewidth]{img/cv_psth_sigma_compare/dataframe_scatter_labels_D_psth_5ms_membrane_50.pdf}
\includegraphics[width=0.23\linewidth]{img/cv_psth_sigma_compare/dataframe_scatter_labels_D_sigma_refractory_50.pdf}
\includegraphics[width=0.23\linewidth]{img/cv_psth_sigma_compare/dataframe_scatter_labels_D_cv_refractory_50.pdf}
\includegraphics[width=0.23\linewidth]{img/cv_psth_sigma_compare/dataframe_scatter_labels_D_psth_1ms_refractory_50.pdf}
\includegraphics[width=0.23\linewidth]{img/cv_psth_sigma_compare/dataframe_scatter_labels_D_psth_5ms_refractory_50.pdf}
\caption{a) The parameter \(\sigma\) as a function of the noise parameter $D$ in LIF simulations. There is a strictly monotonic relationship between the two, which allows us to use \(\sigma\) as a substitute for $D$ in the analysis of electrophysiological experiments.
b--e) Left to right: $\sigma$, CV and standard deviation of the PSTH with two different kernel widths, as functions of $D$ for different membrane time constants (4ms, 10ms and 16ms). The membrane time constant $\tau$ determines how quickly the voltage of a LIF neuron changes, with lower constants meaning faster changes. Only $\sigma$ does not change its values with different $\tau$. The CV (c)) is not even monotonic in the case of a time constant of 4ms, ruling out any potential usefulness.
f--i) Left to right: $\sigma$, CV and standard deviation of the PSTH with two different kernel widths, as functions of $D$ for different refractory periods (0ms, 1ms and 5ms). Only $\sigma$ does not change with different refractory periods.}
\label{noiseparameters2}
\end{figure}

We tried several different bin sizes (30 to 300 bins) and spike widths. There was little difference between the resulting parameters (see figure \ref{sigma_bins} in the appendix).

\section*{Electric fish as a real-world model system}

To put the results from our simulations into a real-world context, we chose the weakly electric fish \textit{Apteronotus leptorhynchus} as a model system. \lepto\ uses an electric organ to produce electric fields which it uses for orientation, prey detection and communication. Distributed over the skin of \lepto\ are electroreceptors which produce action potentials in response to electric signals.

These receptor cells (``p-units'') are analogous to the simulated neurons we used in our simulations because they do not receive any input other than the signal they are encoding. Individual cells fire independently of each other and there is no feedback.

\begin{figure}
\includegraphics[width=0.7\linewidth]{img/explain_analysis/after_timeshift_11.pdf}
\caption{Top: spike train of a p-unit.
Middle: the situation as recorded; the signal is shown in blue, the response reconstructed from the spike train in orange.
Bottom: the signal was shifted forward in time so that the response fits the signal better. This corrects for any delays in the recording process.}
\label{timeshift}
\end{figure}

\subsection*{Electrophysiology}

Figure \ref{sigmafits_example} shows that the fits are very close to the data. Due to the Gaussian signal distribution there are fewer samples for very weak and very strong inputs, so in these regions the firing rates become somewhat noisy. This is especially noticeable for strong inputs, where there are more spikes and therefore larger fluctuations; fluctuations are less visible for weak inputs, where there is very little spiking anyway.

\begin{figure}
\includegraphics[width=0.4\linewidth]{img/ordnung/cropped_fitcurve_0_2010-08-31-ad-invivo-1_0.pdf}
\includegraphics[width=0.4\linewidth]{img/ordnung/cropped_fitcurve_0_2010-08-11-aq-invivo-1_0.pdf}
\caption{Histogram of the spike count distribution (firing rate) and error function fits. 50 bins represent different values of the amplitude of the Gaussian distributed input signal \notedh{[maybe histogram in background again] - or better: one plot where I show the raw data - histogram in background, number of spikes as dots.}. The value of each bin is the number of spikes recorded while the signal was in that bin, normalized by the signal distribution. To account for delay, we first calculated the cross-correlation of signal and spike train and took its peak as the delay; this is necessary because there are delays between the signal being emitted and being registered by the cell (for an explanation, see figure \ref{timeshift}). The lines show fits according to equation \eqref{errorfct}. The two panels show cells with differently wide distributions: one relatively narrow and one broader, as indicated by the parameter \(\sigma\). Different numbers of bins (30 and 100) made no difference to the resulting parameters. \notedh{Show a plot.} \notedh{Show more than two plots?}}
\label{sigmafits_example}
\end{figure}

Figure \ref{fr_sigma} shows that there is only a very weak correlation between the firing rate of a cell and its noisiness; the two appear mostly independent of each other.

\begin{figure}
\includegraphics[width=0.45\linewidth]{img/sigma/0_300/scatter_and_fits_sigma_firing_rate_contrast.pdf}
\includegraphics[width=0.45\linewidth]{img/sigma/0_300/scatter_and_fits_contrast_firing_rate_sigma.pdf}
\caption{Relationship between the firing rate and $\sigma$ and CV, respectively. Noisier cells tend to fire more slowly, but the relationship is very weak.}
\label{fr_sigma}
\end{figure}

%TODO insert plot with sigma x-axis and delta_cf on y-axis here; also, plot with sigma as function of firing rate, also absolute cf for different population size as function of sigma.

\begin{figure}
\includegraphics[width=0.8\linewidth]{img/sigma/0_300/averaged_4parts.pdf}
\caption{Coding fraction as a function of population size for the recorded trials; some neurons provided multiple trials. The trials have been grouped in ascending order with regard to $\sigma$. Plotted are the means and (shaded) the standard deviations of the quartiles. The curves look similar to those in figure \ref{cf_limit}.}
\label{ephys_sigma}
\end{figure}

When we group neurons by their noise and plot coding fraction as a function of population size for the averages of those groups, we see results similar to the simulations (figure \ref{ephys_sigma}, compare to figure \ref{cf_limit}):
Cells which are not very noisy (small $\sigma$, blue) on average start with a larger coding fraction than noisier cells (for this, compare also fig. \ref{c1_by_sigma}). While they show an increasing coding fraction for larger populations, the increase is small, less than a factor of 2.
The cells in the second quartile interval show on average only a slightly lower coding fraction for a population size of 1 than the cells in the first quartile interval. Their rise is much faster, however, and somewhere between 4 and 8 cells they begin showing a higher coding fraction than the cells in the first quartile.
Cells in the third quartile interval have, for a single neuron, on average a coding fraction half as large as the cells in the first quartile. With increasing population size the growth in coding fraction becomes steeper, and between 32 and 64 cells their average coding fraction passes that of the cells in the first quartile interval.
The noisiest cells, which make up the fourth quartile interval (largest $\sigma$, red), start with the lowest coding fraction on average for small populations. For a single cell, their coding fraction is on average only about a quarter of that of the cells in the first quartile interval. The increase of coding fraction with population size is quite slow at first, an effect we have also seen with very noisy cells in our simulations (see figure \ref{cf_limit} b)). For larger population sizes the coding fraction increase becomes larger.
These results do not depend qualitatively on the choice of separating the cells into quartiles; see the appendix (figures \ref{3_groups_cf_vs_pop} and \ref{5_groups_cf_vs_pop}) for similar results when splitting the cells into tertiles and quintiles.
The curves from which the averages were created can be seen in figure \ref{2_by_2_overview}. The curves in the top left, which make up the blue curve in figure \ref{ephys_sigma}, are mostly very flat. On the other hand, most of the curves in the bottom right, which make up the red curve in figure \ref{ephys_sigma}, start very low and bend to the left, showing that the coding fraction increase is larger for the larger population sizes observed here.

\begin{figure}
\includegraphics[width=0.8\linewidth]{img/sigma/0_300/2_by_2_overview.pdf}
\caption{Individual plots of the cells used in figure \ref{ephys_sigma}. Top left: cells in the first quartile (low $\sigma$, i.e.\ little noise). Top right: second quartile. Bottom left: third quartile. Bottom right: the noisiest cells, with the largest $\sigma$.}
\label{2_by_2_overview}
\end{figure}

%
% \begin{figure}
% \centering
% \includegraphics[width=0.4\linewidth]{img/sigma/example_spikes_sigma_with_input.pdf}
% \includegraphics[width=0.28\linewidth]{img/ordnung/cropped_fitcurve_0_2010-08-11-aq-invivo-1_0.pdf}
% \includegraphics[width=0.28\linewidth]{img/sigma/cropped_fitcurve_0_2010-08-31-aj-invivo-1_0.pdf}
% \includegraphics[width=0.28\linewidth]{img/ordnung/cropped_fitcurve_0_2010-08-31-ad-invivo-1_0.pdf}
% \includegraphics[width=0.4\linewidth]{img/fish/dataframe_scatter_sigma_cv.pdf}
% \includegraphics[width=0.4\linewidth]{img/fish/dataframe_scatter_sigma_firing_rate.pdf}
% \includegraphics[width=0.32\linewidth]{img/fish/sigma_distribution.pdf}
% \includegraphics[width=0.32\linewidth]{img/fish/cv_distribution.pdf}
% \includegraphics[width=0.32\linewidth]{img/fish/fr_distribution.pdf}
% \caption{Histogram of spike count distribution (firing rate) and errorfunction fits. 50 bins represent different values of the Gaussian distributed input signal [maybe histogram in background again]. The value of each of those bins is the number of spikes during the times the signal was in that bin. Each of the values was normalized by the signal distribution. For very weak and very strong inputs, the firing rates themselves become noisy, because the signal only assumes those values rarely. To account for delay, we first calculated the cross-correlation of signal and spike train and took its peak as the delay. The lines show fits according to equation \eqref{errorfct}. Left and right plots show two different cells, one with a relatively narrow distribution (left) and one with a distribution that is more broad (right), as indicated by the parameter \(\sigma\). An increase of $\sigma$ is equivalent to a broader distribution. Cells with broader distributions are assumed to be noisier, as their thresholding is less sharp than those with narrow distributions. Different amounts of bins (30 and 100) showed no difference in resulting parameters.}
% \label{sigmafits}
% \end{figure}

Figure \ref{coding_fraction_n_1} makes the link between noisiness and coding fraction very apparent: there is a strong correlation between the coding fraction calculated from the response of a single neuron and the neuron's noisiness. This makes sense intuitively, because the SSR advantage of noisiness that we discussed earlier only appears for populations. There is a smaller, but still obvious, correlation between the coding fraction and the cell's firing rate: an increase in firing rate increases the coding fraction.

\begin{figure}
\includegraphics[width=0.45\linewidth]{img/sigma/0_300/scatter_and_fits_sigma_coding_fractions_firing_rate.pdf}
\includegraphics[width=0.45\linewidth]{img/sigma/0_300/scatter_and_fits_firing_rate_coding_fractions_sigma.pdf}
\caption{Low firing rate and strong noise both lead to a small coding fraction for single neurons.}
\label{coding_fraction_n_1}
\end{figure}

We can further quantify the effect of SSR on the encoding by studying the difference in coding fraction for populations of different sizes. There are two ways to do this. The first is to take the coding fraction at a large population size (here: 64 neurons) and divide it by the coding fraction for a single neuron. It is important to note that a large gain does not necessarily mean a good performance: a neuron that starts with a coding fraction of 0.01 for a population size of 1 can have a gain of 10 and would still perform worse for a population of 64 neurons than a cell that starts with a coding fraction of 0.11, even though that cell will certainly have a gain lower than 10, as coding fraction is limited to 1.
The alternative is taking the difference between the two coding fraction values for a large population and a single neuron. However, this might not be ideal for cells which need a population size larger than the 64 neurons observed here; the coding fraction increase from 1 to 64 neurons might then look small, even though the cells actually fit our model very well. Examples of such neurons are in figure \ref{2_by_2_overview}: in the bottom right panel the bottom two lines show only a small increase in coding fraction, but both appear to become steeper with rising population size, so it is not unthinkable that they would rise much further for very large populations. It is a limitation of the current experiments that we can only record a finite number of trials from each neuron. \notedh{Discussion??}
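The two measures can be illustrated with hypothetical coding-fraction values (the numbers are those from the example above):

```python
def gain(cf_large, cf_single):
    # relative increase: coding fraction at the large population size
    # divided by the single-neuron coding fraction
    return cf_large / cf_single

def difference(cf_large, cf_single):
    # absolute increase in coding fraction
    return cf_large - cf_single

# hypothetical cells: a tenfold gain from a very low starting point
# still ends below a smaller gain from a better starting point
noisy_cell = (0.10, 0.01)  # (cf at 64 neurons, cf at 1 neuron)
other_cell = (0.55, 0.11)
```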

The result shown in figure \ref{increases_broad} is that $\sigma$ is a good predictor of the gain (the quotient between the coding fraction at 64 cells and the coding fraction at 1 cell). Additionally, the firing rate correlates negatively with the gain, but more weakly.
For the difference between the coding fraction of a single neuron and that of a population we see no correlation with either $\sigma$ or the firing rate.

%figures created with result_fits.py
\begin{figure}
%\includegraphics[width=0.45\linewidth]{img/ordnung/sigma_popsize_curves_0to300}
\centering
%\includegraphics[width=0.45\linewidth]{img/sigma/cf_N_ex_lines}
\includegraphics[width=0.45\linewidth]{img/sigma/0_300/scatter_and_fits_sigma_quot_firing_rate}%
\includegraphics[width=0.45\linewidth]{img/sigma/0_300/scatter_and_fits_firing_rate_quot_contrast}%

\includegraphics[width=0.45\linewidth]{img/sigma/0_300/scatter_and_fits_sigma_diff_firing_rate}%
\includegraphics[width=0.45\linewidth]{img/sigma/0_300/scatter_and_fits_firing_rate_diff_contrast}%
\caption{Top: the relative increase in coding fraction between population sizes 64 and 1. Note that the y-axis scales logarithmically. Left: as a function of $\sigma$. The red curve shows a regression between $\sigma$ and $\log(c_{64}/c_{1})$. There is a strong relationship between the noisiness and the increase: noisier cells (larger $\sigma$) generally have lower coding fractions for a single neuron, so they have a bigger potential for gain. Right: as a function of the cell's firing rate. The relationship is much weaker, but still present.
Bottom: using the difference in coding fraction instead of the quotient makes the relationship between the increase in coding fraction and the two parameters $\sigma$ and firing rate disappear. This might be different for larger population sizes.}
\label{increases_broad}
\end{figure}

\subsubsection{Narrowband}

Qualitatively, we see very similar results when instead of the broadband signal we use the narrowband signal with a frequency cutoff of 50Hz (figure \ref{overview_experiment_results_narrow}). Again the cells in the first quartile interval show on average only a very slightly increasing coding fraction with increasing population size. The coding fraction for a population size of one on average decreases over the higher quartile intervals. The separate coding fraction curves also show the typical flatness for the first quartile interval. The fourth quartile interval in particular contains several curves that are only just beginning to increase in coding fraction at a population size of 64 neurons.

\begin{figure}
\includegraphics[width=0.8\linewidth]{img/sigma/narrow_0_50/averaged_4parts.pdf}
\includegraphics[width=0.8\linewidth]{img/sigma/narrow_0_50/2_by_2_overview.pdf}
\caption{Plots equivalent to figures \ref{ephys_sigma} and \ref{2_by_2_overview}, but for the narrowband signal with a cutoff frequency of 50Hz. The general trend is the same.}
\label{overview_experiment_results_narrow}
\end{figure}

Figures \ref{increases_narow} and \ref{increases_narow_high} show that the results regarding the increase of coding fraction with population size seen for the broadband signal also appear when we use narrowband signals.

%figures created with result_fits.py
\begin{figure}
\centering
\includegraphics[width=0.45\linewidth]{img/sigma/narrow_0_50/scatter_and_fits_sigma_quot_firing_rate}%
\includegraphics[width=0.45\linewidth]{img/sigma/narrow_0_50/scatter_and_fits_firing_rate_quot_contrast}%

\includegraphics[width=0.45\linewidth]{img/sigma/narrow_0_50/scatter_and_fits_sigma_diff_firing_rate}%
\includegraphics[width=0.45\linewidth]{img/sigma/narrow_0_50/scatter_and_fits_firing_rate_diff_contrast}%
\caption{Top: the relative increase in coding fraction between population sizes 64 and 1 for the 0--50Hz narrowband signal. Note that the y-axis scales logarithmically. Left: as a function of $\sigma$. The red curve shows a regression between $\sigma$ and $\log(c_{64}/c_{1})$. Right: as a function of the cell's firing rate.
Bottom: using the difference in coding fraction instead of the quotient makes the relationship between the increase in coding fraction and the two parameters $\sigma$ and firing rate disappear. This might be different for larger population sizes.}
\label{increases_narow}
\end{figure}

%figures created with result_fits.py
\begin{figure}
\centering
\includegraphics[width=0.45\linewidth]{img/sigma/narrow_250_300/scatter_and_fits_sigma_quot_firing_rate}%
\includegraphics[width=0.45\linewidth]{img/sigma/narrow_250_300/scatter_and_fits_firing_rate_quot_contrast}%

\includegraphics[width=0.45\linewidth]{img/sigma/narrow_250_300/scatter_and_fits_sigma_diff_firing_rate}%
\includegraphics[width=0.45\linewidth]{img/sigma/narrow_250_300/scatter_and_fits_firing_rate_diff_contrast}%
\caption{Top: the relative increase in coding fraction between population sizes 64 and 1 for the 250--300Hz narrowband signal. Note that the y-axis scales logarithmically. Left: as a function of $\sigma$. The red curve shows a regression between $\sigma$ and $\log(c_{64}/c_{1})$. Right: as a function of the cell's firing rate.
Bottom: using the difference in coding fraction instead of the quotient makes the relationship between the increase in coding fraction and the two parameters $\sigma$ and firing rate disappear. This might be different for larger population sizes.}
\label{increases_narow_high}
\end{figure}

\notedh{link to the appropriate chapter from theory results}
In addition to the ``pure'' narrowband signals, I also analysed the coding fraction change for a smaller part of the spectrum in the experiments using the broadband signal. Figure \ref{increases_narow_in_broad} shows part of the results: again we see the strong correlation between $\sigma$ and the gain and a weaker correlation between the firing rate and the gain. In this case we see the same correlation also for the coding fraction difference.
Similar results can be observed for the other frequency bands. \notedh{Images to the appendix? The sigma/gain of all in one plot?}

%figures created with result_fits.py
\begin{figure}
\centering
\includegraphics[width=0.45\linewidth]{img/sigma/0_50/scatter_and_fits_sigma_quot_firing_rate}%
\includegraphics[width=0.45\linewidth]{img/sigma/0_50/scatter_and_fits_firing_rate_quot_contrast}%

\includegraphics[width=0.45\linewidth]{img/sigma/0_50/scatter_and_fits_sigma_diff_firing_rate}%
\includegraphics[width=0.45\linewidth]{img/sigma/0_50/scatter_and_fits_firing_rate_diff_contrast}%
\caption{Top: the relative increase in coding fraction between population sizes 64 and 1, evaluated in the 0--50Hz band of the broadband signal. Note that the y-axis scales logarithmically. Left: as a function of $\sigma$. The red curve shows a regression between $\sigma$ and $\log(c_{64}/c_{1})$. Right: as a function of the cell's firing rate.
Bottom: using the difference in coding fraction instead of the quotient makes the relationship between the increase in coding fraction and the two parameters $\sigma$ and firing rate disappear.}
\label{increases_narow_in_broad}
\end{figure}

\subsection*{Results}

Figure \ref{fig:ex_data} A, B and C shows three examples of coherence from intracellular measurements in \lepto. Each cell was exposed to up to 128 repetitions of the same signal. The response was then averaged over different numbers of trials to simulate different population sizes of homogeneous cells. We can see that an increase in population size leads to higher coherence. Similar to what we saw in the simulations, coherence decreases sharply around the average firing rate of the cell (marked by the red vertical lines). We then aggregated the results for 31 different cells (50 experiments in total, as some cells were presented with the stimulus more than once).
Figure \ref{fig:ex_data} D shows that the increase is largest inside the high-frequency intervals. As we saw in our simulations (figures \ref{fig:popsizenarrow15} C and \ref{fig:popsizenarrow10} C), the ratio of the coding fraction in a large population to the coding fraction in a single cell is larger for higher frequencies.
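The trial-averaging procedure can be sketched as follows (a minimal sketch with mock responses; scipy's Welch-based coherence estimate stands in for the coherence computation described in the analysis section, and sampling rate and noise level are arbitrary):

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(0)
fs, n = 2000.0, 2**14  # assumed sampling rate (Hz) and length
signal = rng.normal(size=n)

# mock single-trial responses: signal plus independent noise per trial
trials = np.array([signal + rng.normal(scale=2.0, size=n)
                   for _ in range(64)])

def coherence_of_average(trials, signal, n_avg, fs):
    """Coherence between the stimulus and the response averaged over
    the first n_avg trials (a homogeneous "population" of n_avg cells)."""
    response = trials[:n_avg].mean(axis=0)
    f, coh = coherence(signal, response, fs=fs, nperseg=1024)
    return f, coh

f, coh_1 = coherence_of_average(trials, signal, 1, fs)
f, coh_64 = coherence_of_average(trials, signal, 64, fs)
```

Averaging over trials suppresses the trial-independent noise, so the coherence of the 64-trial average lies above that of a single trial, mirroring the population-size effect described above.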

%simulation plots are from 200hz/nice coherence curves.ipynb
\begin{figure}
\centering
broad

\includegraphics[width=0.48\linewidth]{img/fish/cf_curves/cfN_broad_0.pdf}
\includegraphics[width=0.48\linewidth]{img/fish/cf_curves/cfN_broad_1.pdf}
\includegraphics[width=0.48\linewidth]{img/fish/cf_curves/cfN_broad_2.pdf}
\includegraphics[width=0.48\linewidth]{img/fish/cf_curves/cfN_broad_3.pdf}
\end{figure}

%compare_params_300.py on oilbird
\begin{figure}
\includegraphics[width=0.30\linewidth]{img/sigma/parameter_assessment/bins100v300.pdf}
\includegraphics[width=0.30\linewidth]{img/sigma/parameter_assessment/bins50v300.pdf}
\includegraphics[width=0.30\linewidth]{img/sigma/parameter_assessment/bins30v300.pdf}

\hspace{0.30\linewidth}
\includegraphics[width=0.30\linewidth]{img/sigma/parameter_assessment/bins50v100.pdf}
\includegraphics[width=0.30\linewidth]{img/sigma/parameter_assessment/bins30v100.pdf}

\hspace{0.60\linewidth}
\includegraphics[width=0.30\linewidth]{img/sigma/parameter_assessment/bins30v50.pdf}
\caption{Comparison of different bin numbers for the calculation of $\sigma$. Values were in good agreement between 50 bins and 100 bins. For 300 bins $\sigma$ was estimated to be smaller than for the other bin numbers, especially for $\sigma > 0.8$. For 30 bins a few estimates stuck close to $\sigma = 0$ when they did not for the other bin numbers. We chose to proceed with 50 bins.}
\label{sigma_bins}
\end{figure}
|
||
|
||
%compare_params.py auf oilbird
|
||
\begin{figure}
\includegraphics[width=0.45\linewidth]{img/sigma/parameter_assessment/gauss1v5_30.pdf}
\includegraphics[width=0.45\linewidth]{img/sigma/parameter_assessment/gauss1v5_50.pdf}
\includegraphics[width=0.45\linewidth]{img/sigma/parameter_assessment/gauss1v5_100.pdf}
\includegraphics[width=0.45\linewidth]{img/sigma/parameter_assessment/gauss1v5_300.pdf}
\caption{The width of the Gaussian we convolve with the experimentally recorded spike trains does not change the resulting $\sigma$. We use this convolution to determine the delay between the signal being emitted and the signal being acted upon by the cell. We repeated the calculation for all bin numbers under consideration; in each case the resulting $\sigma$ stayed essentially the same, with no systematic differences.}
\label{sigma_gauss}
\end{figure}
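The delay estimation mentioned in the caption can be sketched as follows. This is a minimal illustration assuming the spike train is convolved with a Gaussian kernel and the delay is read off as the lag that maximizes the cross-correlation with the stimulus; the kernel width, lag window, and function names are illustrative and not taken from the analysis scripts.

```python
import numpy as np

def gaussian_kernel(width_bins, n_sigma=4):
    # Discrete Gaussian kernel, normalized to unit area
    t = np.arange(-n_sigma * width_bins, n_sigma * width_bins + 1)
    k = np.exp(-0.5 * (t / width_bins) ** 2)
    return k / k.sum()

def estimate_delay(stimulus, spike_train, width_bins=5, max_lag=50):
    """Convolve the binary spike train with a Gaussian and return the lag
    (in bins) maximizing the cross-correlation with the stimulus;
    positive lags mean the response follows the stimulus."""
    rate = np.convolve(spike_train, gaussian_kernel(width_bins), mode="same")
    s = stimulus - stimulus.mean()
    r = rate - rate.mean()
    n = len(s)
    lags = np.arange(-max_lag, max_lag + 1)
    xcorr = [np.dot(r[max_lag + k : n - max_lag + k], s[max_lag : n - max_lag])
             for k in lags]
    return int(lags[int(np.argmax(xcorr))])

# Synthetic check: spike probability follows the stimulus with a 10-bin delay
rng = np.random.default_rng(1)
stim = rng.normal(0.0, 1.0, 20_000)
delay = 10
p = np.where(np.roll(stim, delay) > 0, 0.9, 0.1)
spikes = (rng.random(stim.size) < p).astype(float)
print(estimate_delay(stim, spikes))
```

Because the Gaussian kernel is symmetric, the smoothing itself does not shift the cross-correlation peak, which is consistent with the observation that the kernel width does not affect the recovered $\sigma$.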
%box_script.py, quot_sigma() and quot_sigma_narrow()
\begin{figure}
\centering
broad

\includegraphics[width=0.16\linewidth]{img/fish/scatter/sigma_cf_quot_broad_0_50.pdf}
\includegraphics[width=0.16\linewidth]{img/fish/scatter/sigma_cf_quot_broad_50_100.pdf}
\includegraphics[width=0.16\linewidth]{img/fish/scatter/sigma_cf_quot_broad_100_150.pdf}
\includegraphics[width=0.16\linewidth]{img/fish/scatter/sigma_cf_quot_broad_150_200.pdf}
\includegraphics[width=0.16\linewidth]{img/fish/scatter/sigma_cf_quot_broad_200_250.pdf}
\includegraphics[width=0.16\linewidth]{img/fish/scatter/sigma_cf_quot_broad_250_300.pdf}

\includegraphics[width=0.16\linewidth]{img/fish/scatter/check_fr_quot_broad_0_50.pdf}
\includegraphics[width=0.16\linewidth]{img/fish/scatter/check_fr_quot_broad_50_100.pdf}
\includegraphics[width=0.16\linewidth]{img/fish/scatter/check_fr_quot_broad_100_150.pdf}
\includegraphics[width=0.16\linewidth]{img/fish/scatter/check_fr_quot_broad_150_200.pdf}
\includegraphics[width=0.16\linewidth]{img/fish/scatter/check_fr_quot_broad_200_250.pdf}
\includegraphics[width=0.16\linewidth]{img/fish/scatter/check_fr_quot_broad_250_300.pdf}

narrow

\includegraphics[width=0.16\linewidth]{img/fish/scatter/sigma_cf_quot_narrow_0_50.pdf}
\includegraphics[width=0.16\linewidth]{img/fish/scatter/sigma_cf_quot_narrow_50_100.pdf}
\includegraphics[width=0.16\linewidth]{img/fish/scatter/sigma_cf_quot_narrow_150_200.pdf}
\includegraphics[width=0.16\linewidth]{img/fish/scatter/sigma_cf_quot_narrow_250_300.pdf}
\includegraphics[width=0.16\linewidth]{img/fish/scatter/sigma_cf_quot_narrow_350_400.pdf}

\includegraphics[width=0.16\linewidth]{img/fish/scatter/check_fr_quot_narrow_0_50.pdf}
\includegraphics[width=0.16\linewidth]{img/fish/scatter/check_fr_quot_narrow_50_100.pdf}
\includegraphics[width=0.16\linewidth]{img/fish/scatter/check_fr_quot_narrow_150_200.pdf}
\includegraphics[width=0.16\linewidth]{img/fish/scatter/check_fr_quot_narrow_250_300.pdf}
\includegraphics[width=0.16\linewidth]{img/fish/scatter/check_fr_quot_narrow_350_400.pdf}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.4\linewidth]{img/fish/diff_box.pdf}
\includegraphics[width=0.4\linewidth]{img/fish/diff_box_narrow.pdf}
\includegraphics[width=0.4\linewidth]{img/relative_coding_fractions_box.pdf}
\notedh{needs figure 3.6 from yue and equivalent}
\label{fish_result_summary_yue}
\end{figure}
\begin{figure}
\includegraphics[width=0.49\linewidth]{img/fish/ratio_narrow.pdf}
\includegraphics[width=0.49\linewidth]{img/fish/broad_ratio.pdf}
\caption{Frequency dependence of the coding-fraction difference $\Delta_{\mathrm{cf}}$. In the other paper we used the quotient $\mathrm{quot}_{\mathrm{cf}}$ instead. \notedh{The x-axis labels don't make sense to me. Left is broad and right is narrow?}}
\label{freq_delta_cf}
\end{figure}
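The distinction between the difference $\Delta_{\mathrm{cf}}$ and the quotient $\mathrm{quot}_{\mathrm{cf}}$ can be made concrete with a small sketch; the coding-fraction values below are purely illustrative, not data from the experiments.

```python
import numpy as np

# Hypothetical coding fractions for small and large populations
# at four frequency bands (illustrative values only)
cf_small = np.array([0.42, 0.35, 0.20, 0.10])
cf_large = np.array([0.55, 0.52, 0.41, 0.28])

delta_cf = cf_large - cf_small  # absolute improvement
quot_cf = cf_large / cf_small   # relative improvement

print(delta_cf)
print(quot_cf)
```

Note that the band with the largest absolute gain need not be the band with the largest relative gain, which is why the two measures can lead to different conclusions about where population size helps most.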
\subsection{Discussion}

We also confirmed that the results from the theoretical part of this work matter in a real-world example. In the brain of the weakly electric fish \textit{Apteronotus leptorhynchus}, pyramidal cells in different areas are responsible for encoding different frequency ranges. In each of those areas, cells integrate over different numbers of the same receptor cells. Artificial populations consisting of different trials of the same receptor cell show what we have seen in our simulations: larger populations help especially with the encoding of high-frequency signals. These results are in line with what is known about the pyramidal cells of \lepto: the cells that encode high-frequency signals best are the ones that integrate over the largest number of receptor neurons.
\section{Discussion: Combining experiment and simulation}

\section{Literature}

\clearpage
\bibliography{citations.bib}

\end{document}