\documentclass[a4paper,10pt]{scrartcl}
\usepackage[utf8]{inputenc}
%opening
\title{On the role of noise in signal detection}
\author{Dennis Huben}
\usepackage[T1]{fontenc}
\usepackage[english]{babel}
\usepackage{graphicx}
\usepackage{multicol}
\usepackage{scalefnt}
\usepackage{bm}
\usepackage{palatino}
\usepackage{url}
\usepackage{enumitem}
\usepackage{amsmath}
\usepackage{xcolor}
\usepackage{ifthen}
\usepackage[normalem]{ulem}
\usepackage[round]{natbib}
\usepackage[thinqspace]{SIunits}
\bibliographystyle{plainnat}
\newcommand{\lepto}{\textit{A. leptorhynchus}}
\DeclareMathOperator\erfc{erfc}
\newcommand{\eq}[1]{\begin{align}#1\end{align}}
\newcommand{\note}[2][]{\textcolor{red!80!black}{[\textbf{\ifthenelse{\equal{#1}{}}{}{#1: }}#2]}}
\newcommand{\notejb}[1]{\note[JB]{#1}}
\newcommand{\notedh}[1]{\note[DH]{#1}}
\newcommand{\newdh}[1]{\textcolor{green}{#1}}
\newcommand{\todo}[1]{\textcolor{green}{TODO: {#1}}}
\newcommand{\sig}{$\sigma$ }

\begin{document}

\maketitle

\begin{abstract}
\end{abstract}

\tableofcontents
\newpage

\section{Suprathreshold stochastic resonance}
\subsection{Introduction}
In any biological system, there is a limit to the precision of the components making up that system. This means that even without external input, the spike times of each individual neuron will vary and will not be perfectly regular. Increasing the precision comes at an energetic cost \citep{schreiber2002energy} and may not even be desirable. In populations of neurons, the representation of a common stimulus can be improved by population heterogeneity \citep{ahn2014heterogeneity}. The source of this heterogeneity could, for example, be a different firing threshold for each neuron. Alternatively, the improvement can be achieved by adding noise to the input of the neurons \citep{shapira2016sound}.
The effect of adding noise to a sub-threshold signal, a phenomenon known as ``stochastic resonance'' (SR), has been investigated thoroughly over the last decades \citep{benzi1981mechanism,gammaitoni1998resonance,shimokawa1999stochastic}. The noise added to a signal makes it more likely that the signal reaches the detection threshold, so that it triggers a spike in a neuron. Often, however, the goal in nature is not simply to detect a signal, but to discriminate between two different signals as well as possible. In auditory communication, for example, it is not sufficient to detect the presence of sound; instead, the goal is to encode the auditory stimulus so that an optimal amount of information is gained from it. Another example is the electrosensory communication between conspecifics in weakly electric fish, which need to distinguish, for example, between aggressive and courtship behaviors. More recently it has been shown that for populations of neurons, noise can also be beneficial for signals which are already above the threshold \citep{stocks2000suprathreshold, Stocks2000,stocks2001information,stocks2001generic,beiran2018coding}, a phenomenon termed ``suprathreshold stochastic resonance'' (SSR). Despite the similarity in name, SR and SSR work in very different ways. The idea behind SSR is that in the case of no or very weak individual noise, the different neurons in the population react to the same features of a common input. Additional noise that affects each cell differently desynchronizes the responses of the neurons: their spiking becomes probabilistic rather than deterministic. If the noise is too strong, however, it masks the signal and less information can be encoded than would ideally be possible. In the limit of infinite noise strength, no information about the signal can be reconstructed from the responses.
Because some noise is beneficial and too much noise is not, there is a noise strength at which performance is best. This thesis investigates populations of neurons reacting to input signals with cutoff frequencies over a large range. Population sizes range from a single neuron to many thousands of neurons.
%plot script: lif_summation_sketch.py on denkdirnix
\begin{figure}
\centering
\includegraphics[width=0.5\linewidth]{img/stocks}
\includegraphics[width=1.\linewidth]{img/plotted/LIF_example_sketch.pdf}
\caption{Array of threshold systems as described by Stocks.}
\label{stocks}
\end{figure}
Here we use the leaky integrate-and-fire model to simulate neuronal populations receiving a common dynamic input. We look at linear coding of signals by populations of different sizes, each consisting of neurons of a single type, similar to the situation in weakly electric fish. We show that the optimal noise strength grows with population size and depends on properties of the input. We use input signals of varying bandwidths and cutoff frequencies, and also vary the strength of the signal. We also present electrophysiological results from the weakly electric fish \textit{Apteronotus leptorhynchus}. Because it is not obvious how to quantify noisiness in the receptor cells of these fish, we compare different methods and find that the activation curve of the individual neurons allows for the best estimate of the strength of noise in these cells. We then show that the effects of SSR can be observed in the real-world example of \lepto.
\subsection{Methods}
We use a population model of leaky integrate-and-fire (LIF) neurons, described by the equation
\begin{equation}V_{t}^j = V_{t-1}^j + \frac{\Delta t}{\tau_v} \left((\mu-V_{t-1}^j) + \sigma I_{t} + \sqrt{2D/\Delta t}\,\xi_{t}^j\right),\quad j \in [1,N]\end{equation}
with membrane time constant $\tau_v = 10\,$ms and offset $\mu = 15.0\,$mV or $\mu = 10.5\,$mV.
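For illustration, the update rule above can be sketched in Python as follows (a minimal sketch with my own variable names, not the simulation code itself; parameter values follow the description in the text, with times in seconds and voltages in mV):

```python
import numpy as np

def simulate_lif_population(stimulus, n_neurons, dt=0.01e-3, tau_v=10e-3,
                            mu=15.0, sigma=1.0, D=1e-4, v_thresh=10.0):
    """Euler integration of N LIF neurons driven by a common stimulus
    plus independent Gaussian noise; returns a boolean spike raster."""
    n_steps = len(stimulus)
    v = np.random.uniform(0.0, v_thresh, n_neurons)  # random initial voltages
    spikes = np.zeros((n_steps, n_neurons), dtype=bool)
    noise_scale = np.sqrt(2.0 * D / dt)
    for t in range(n_steps):
        xi = np.random.randn(n_neurons)  # independent noise for each neuron
        v += dt / tau_v * ((mu - v) + sigma * stimulus[t] + noise_scale * xi)
        fired = v > v_thresh
        v[fired] = 0.0                   # reset to 0 mV after a spike
        spikes[t] = fired
    return spikes
```

The population response analysed below is then simply the histogram of spikes summed across neurons.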
$\sigma$ is a factor which scales the standard deviation of the input, ranging from 0.1 to 1, and $I_t$ is the previously generated stimulus. The $\xi_{t}^j$ are independent Gaussian random variables with mean 0 and variance 1. The noise strength $D$ was varied between $10^{-12}\,$mV$^2$/Hz and $3\,$mV$^2$/Hz. Whenever $V_{t}^j$ exceeded the voltage threshold (10\,mV), a ``spike'' was recorded and the voltage was reset to 0\,mV. $V_{0}^j$ was initialized to a random value uniformly distributed between 0\,mV and 10\,mV. For the first sets of simulations there was no absolute refractory period\footnote{The absolute refractory period is a time during which the cell ignores any input and cannot spike.}. In a later chapter we show that the results do not change qualitatively when a refractory period is added. Simulations of up to 8192 neurons were performed using the Euler method with a step size of $\Delta t = 0.01\,$ms. Typical firing rates were around 90\,Hz for an offset of 15.0\,mV and 35\,Hz for an offset of 10.5\,mV. Firing rates were larger for high noise levels than for low noise levels.
As stimulus we used Gaussian white noise signals with different frequency cutoffs at both ends of the spectrum. By construction, the input power spectrum is flat between 0 and $\pm f_{c}$:
\begin{equation} S_{ss}(f) = \frac{\sigma^2}{2 \left| f_{c} \right|} \Theta\left(f_{c} - |f|\right).\label{S_ss} \end{equation}
A Fast Fourier Transform (FFT) was applied to the signal so that it could serve as input stimulus to the simulated cells. The signal was normalized to a variance of $1\,$mV$^2$; its length was 500\,s with a resolution of 0.01\,ms.
\begin{figure}
\includegraphics[scale=0.5]{img/intro_raster/example_noise_resonance.pdf}
\caption{Snapshots of 200\,ms length from three example simulations with different noise strengths, but all other parameters held constant. Black: spikes of 32 simulated neurons. The green curve beneath the spikes is the signal that was fed into the network.
The blue curve is the best linear reconstruction possible from the spikes. The input signal has a cutoff frequency of 50\,Hz. If noise is weak, the neurons behave regularly and similarly to each other (A). For optimal noise strength, the neuronal population follows the signal best (B). If the noise is too strong, the information about the signal gets drowned out (C). D: Example coding fraction curve as a function of noise strength. Marked in red are the noise strengths from which the examples were taken.}
\label{example_spiketrains}
\end{figure}
\subsection{Analysis}
\label{Analysis}
For each combination of parameters, a histogram of the output spikes from all neurons or a subset of the neurons was created. The coherence $C(f)$ was calculated \citep{lindner2016mechanisms} in frequency space as the squared cross-spectral density $|S_{sx}(f)|^2$ of input signal $s(t) = \sigma I_{t}$ and output spikes $x(t)$, with $S_{sx}(f) = \mathcal{F}\{ s(t)*x(t) \}(f)$, divided by the product of the power spectral densities of input ($S_{ss}(f) = |\mathcal{F}\{s(t)\}(f)|^2$) and output ($S_{xx}(f) = |\mathcal{F}\{x(t)\}(f)|^2$), where $\mathcal{F}\{ g(t) \}(f)$ denotes the Fourier transform of $g(t)$:
\begin{equation}C(f) = \frac{|S_{sx}(f)|^2}{S_{ss}(f) S_{xx}(f)}\label{coherence}\end{equation}
The coding fraction $\gamma$ \citep{gabbiani1996codingLIF, krahe2002stimulus} quantifies how much of the input signal can be reconstructed by an optimal linear decoder. It is 0 if the input cannot be reconstructed at all and 1 if the signal can be perfectly reconstructed \citep{gabbiani1996stimulus}.
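For illustration, the coherence of Eq.~\eqref{coherence} can be estimated from a binned population response with standard spectral estimators, e.g.\ in Python with scipy (a sketch under my own names, not necessarily the code used for these simulations):

```python
import numpy as np
from scipy.signal import csd, welch

def spectral_coherence(signal, spike_histogram, dt, nperseg=4096):
    """Estimate C(f) = |S_sx(f)|^2 / (S_ss(f) * S_xx(f)) between the
    input signal s(t) and the summed population response x(t)."""
    fs = 1.0 / dt  # common sampling rate of signal and spike histogram
    f, s_sx = csd(signal, spike_histogram, fs=fs, nperseg=nperseg)
    _, s_ss = welch(signal, fs=fs, nperseg=nperseg)
    _, s_xx = welch(spike_histogram, fs=fs, nperseg=nperseg)
    return f, np.abs(s_sx) ** 2 / (s_ss * s_xx)
```

The coding fraction is then obtained from the coherence and the signal power spectrum as defined in the following equations.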
It is defined by the reconstruction error $\epsilon^2$ and the variance of the input $\sigma^2$:
\begin{equation}\gamma = 1-\sqrt{\frac{\epsilon^2}{\sigma^2}}.\label{coding_fraction}\end{equation}
The variance is
\begin{equation}\sigma^2 = \langle \left(s(t)-\langle s(t)\rangle\right)^2\rangle = \int_{f_{low}}^{f_{high}} S_{ss}(f)\, df .\end{equation}
The reconstruction error is defined as
\begin{equation}\epsilon^2 = \langle \left(s(t) - s_{est}(t)\right)^2\rangle = \int_{f_{low}}^{f_{high}} \left(S_{ss}(f) - \frac{|S_{sx}(f)|^2}{S_{xx}(f)}\right) df = \int_{f_{low}}^{f_{high}} S_{ss}(f) (1-C(f))\, df\end{equation}
with the estimate $s_{est}(t) = h*x(t)$, where $h$ is the optimal linear filter, whose Fourier transform is $H(f) = \frac{S_{sx}(f)}{S_{xx}(f)}$ \citep{gabbiani1996coding}.
We then analyzed the coding fraction as a function of the cutoff frequency for different parameters (noise strength, signal amplitude, signal mean/firing rate) in the limit of large populations. The limit was considered reached if the increase in coding fraction gained by doubling the population size was small (4\%). For the weak signal ($\sigma = 0.1\,$mV) combined with the strongest noise ($D = 10^{-3} \frac{mV^2}{Hz}$), convergence was not reached at a population size of 2048 neurons for both values of the offset $\mu$. The same is true for the combination of the weak signal, a mean close to the threshold ($\mu = 10.5\,$mV), and high frequencies (200\,Hz).
\begin{figure}
\centering
\includegraphics[width=0.69\linewidth]{{img/broad_coherence_15.0_1.0_paired}.pdf}
\caption{Coherence for a signal with $f_{cutoff} = 200\,Hz$. Coherence for a small and a large population, each at weak and strong noise values. For weak noise, the curves are indistinguishable from one another. For strong noise, an increase in population size allows much better reconstruction of the input. For the small population size, weak noise in the simulated neurons allows for better signal reconstruction.
The line marks the average firing rate (about \(91\,Hz\)) of the neurons in the population.}
\label{CodingFrac}
\end{figure}
\subsection{Simulations with more neurons}
\subsection{Noise makes neurons' responses different from each other}
If noise levels are low (fig. \ref{example_spiketrains} a)), neurons within a population will behave very similarly to each other. There is little variation in the spike responses of the neurons to a signal, and recreating the signal is difficult. If the strength of the noise is increased, at some point the coding fraction will also begin to increase. The signal reconstruction becomes better as the responses of the different neurons begin to deviate from each other. When the noise strength is increased even further, at some point a peak coding fraction is reached. This point is the optimal noise strength for the given parameters (fig. \ref{example_spiketrains} b)). If the strength of the noise is increased beyond this point, the responses of the neurons are determined more by random fluctuations and less by the actual signal, making reconstruction more difficult (fig. \ref{example_spiketrains} c)). At some point, signal encoding breaks down completely and the coding fraction goes to 0.
\subsection{Large population size is only useful if noise is strong}
We see that an increase in population size leads to a larger coding fraction until it hits a limit which depends on the noise. For weak noise the increase in coding fraction with population size is low or non-existent. This can be seen in figure \ref{cf_limit} c), where the red ($10^{-5}\frac{mV^2}{Hz}$) and orange ($10^{-4}\frac{mV^2}{Hz}$) curves (relatively weak noise) saturate at relatively small population sizes (about 8 and 32 neurons, respectively). An increase in population size also moves the optimal noise level towards stronger noise (green dots in figure \ref{cf_limit} a)). A larger population can exploit higher noise levels better.
Within the larger population, the precision of the individual neurons becomes less important. Beyond the optimal noise strength, where peak coding fraction is reached, a further increase in noise leads to a reduction in coding fraction. If the noise is very strong, the coding fraction can approach 0. This happens earlier (for weaker noise) in smaller populations than in larger populations. Together, these facts mean that at a given noise level the coding fraction of a small population might already be declining, whereas for larger populations it can still be increasing. A given amount of noise can lead to a very low coding fraction in a small population, but to a greater coding fraction in a larger population (figure \ref{cf_limit} c), blue and purple curves). The noise levels that work best for large populations generally perform very badly in small populations. If the coding fraction is to reach its highest values and needs large populations to do so, the necessary noise strength will be at a level where essentially no encoding happens in a single neuron or in small populations.
\begin{figure}
\centering
\includegraphics[width=0.4\linewidth]{{img/intro_raster/best_approximation_spikes_50hz_0.01noi_10.5_1.0_4_with_input}.pdf}
\includegraphics[width=0.4\linewidth]{{img/intro_raster/best_approximation_spikes_50hz_0.01noi_10.5_1.0_16_with_input}.pdf}
\includegraphics[width=0.4\linewidth]{{img/intro_raster/best_approximation_spikes_50hz_0.01noi_10.5_1.0_64_with_input}.pdf}
\includegraphics[width=0.4\linewidth]{{img/intro_raster/best_approximation_spikes_50hz_0.01noi_10.5_1.0_256_with_input}.pdf}
\label{harmonizing}
\caption{Rasterplots and reconstructed signals for different population sizes; insets show the signal spectrum. Rasterplots show the responses of neurons in the different populations. Each row is one trial and each black bar is one spike. Below the rasterplots, the orange line shows the original signal.
Each plot contains four blue lines showing the reconstruction of the original signal by different sets of neurons of that population size. A: Each blue line is the reconstructed signal from the responses of a population of 4 neurons. B: Each blue line is the reconstructed signal from the responses of a population of 16 neurons. C: The same for 64 neurons. D: The same for 256 neurons. Larger population sizes lead to observations which are less dependent on random fluctuations and are therefore closer to each other and to the original signal.
}
\end{figure}
\subsection{Influence of the input is complex}
Two very important variables are the mean strength of the signal, which determines the baseline firing rate of the neurons, and the amplitude of the signal. A higher baseline firing rate leads to a larger coding fraction. In our terms that means that a signal mean $\mu$ far above the threshold will lead to higher coding fractions than a signal mean close to the threshold (see figure \ref{cf_limit} b); orange curves are above the green curves). The influence of the signal amplitude $\sigma$ is more complex. In general, at small population sizes larger amplitudes appear to work better, but in large populations weaker signals can perform as well as or even better than stronger signals (figure \ref{cf_limit} c), dashed curves vs.\ solid curves).
\begin{figure}
\includegraphics[width=0.45\linewidth]{{img/basic/basic_15.0_1.0_200_detail_with_max}.pdf}
\includegraphics[width=0.45\linewidth]{{img/basic/n_basic_weak_15.0_1.0_200_detail}.pdf}
\includegraphics[width=0.45\linewidth]{img/basic/n_basic_compare_50_detail.pdf}
\caption{A: Coding fraction as a function of noise for different population sizes. Green dots mark the peak of each coding fraction curve. Increasing population size leads to a higher peak and moves the peak to stronger noise. B: Coding fraction as a function of population size. Each curve shows the coding fraction for a different noise strength.
C: Peak coding fraction as a function of population size for different input parameters. \notedh{needs information about noise}}
\label{cf_limit}
\end{figure}
\subsection{Slow signals are more easily encoded}
To encode a signal well, neurons in a population need to keep up with the rising and falling of the signal. Signals that change quickly are harder to encode than signals which change more slowly. When a signal changes more gradually, the neurons can slowly adapt their firing rate. A visual example can be seen in figure \ref{freq_raster}. When all other parameters are equal, a signal with a lower frequency is easier to recreate from the firing of the neurons. In the rasterplots one can see, especially for the 50\,Hz signal (bottom left), that the firing probability of each neuron follows the input signal. When the input is low, almost none of the neurons fire. The result is the ``stripes'' we can see in the rasterplot. The stripes have a certain width, which is determined by the signal frequency and the noise level. When the signal frequency is low, the width of the stripes cannot be seen in a short snapshot. For the 50\,Hz signal in this example we can clearly see a break in the firing activity of the neurons at around 25\,ms. The slower changes in the signal allow the reconstruction to follow the original signal more closely. For the 200\,Hz signal there is little structure to be seen in the firing behaviour of the population; instead, the activity looks chaotic. Something similar can be said for the 1\,Hz signal: because the peaks are about 1\,s apart, a snapshot of 400\,ms cannot capture the structure of the neuronal response. Instead, what we see is a very gradual change of the firing rate following the signal. Because the change is so gradual, the reconstructed signal follows the input signal very closely.
\begin{figure}
\centering
\includegraphics[width=0.4\linewidth]{{img/rasterplots/best_approximation_spikes_1hz_0.001noi500s_10.5_0.5_1.dat}.pdf}
\includegraphics[width=0.4\linewidth]{{img/rasterplots/best_approximation_spikes_10hz_0.001noi500s_10.5_0.5_1.dat}.pdf}
\includegraphics[width=0.4\linewidth]{{img/rasterplots/best_approximation_spikes_50hz_0.001noi500s_10.5_0.5_1.dat}.pdf}
\includegraphics[width=0.4\linewidth]{{img/rasterplots/best_approximation_spikes_200hz_0.001noi500s_10.5_0.5_1.dat}.pdf}
\caption{Rasterplots, input signal and reconstructed signals for different cutoff frequencies; insets show each signal spectrum. Shown here are examples taken from 500\,s long simulations. Rasterplots show the firing of 64 LIF-neurons. Each row corresponds to one neuron. Blue lines below the rasters show the input signal, the orange line the reconstruction, calculated by convolving the spikes with the optimal linear filter. The reconstruction is closer to the original signal for slower signals than for higher-frequency signals. The different time scales lead to spike patterns which appear very different from each other.}
\label{freq_raster}
\end{figure}
\subsection{Fast signals are harder to encode -- noise can help with that}
For low-frequency signals, the coding fraction is almost always at least as large as it is for signals with higher frequency. For the parameters we have used, there is very little difference in coding fraction between random noise signals with cutoff frequencies of 1\,Hz and 10\,Hz, respectively (figure \ref{cf_for_frequencies}, bottom row). For all signal frequencies and amplitudes, a signal mean far above the threshold ($\mu = 15.0\,$mV, with the threshold at $10.0\,$mV) results in a higher coding fraction than a signal mean closer to the threshold ($\mu = 10.5\,$mV). The firing rates of the neurons are much higher for the larger input: about 90\,Hz vs.\ 30\,Hz for the lower signal mean.
We also find that for the signal mean which is further away from the threshold, the loss of coding fraction from the 10\,Hz signal to the 50\,Hz signal is smaller than for the lower signal mean. This is partially explained by the firing rate of the neurons: around the firing rate, the signal encoding is weaker (see figure \ref{CodingFrac}). In general, an increase in signal frequency and bandwidth leads to a decrease in the maximum achievable coding fraction. This decrease is smaller if the noise is stronger. In some conditions, a 50\,Hz signal can be encoded as well as a 10\,Hz signal (fig. \ref{cf_for_frequencies} d)).
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{img/coding_fraction_vs_frequency.pdf}
\includegraphics[width=0.7\linewidth]{img/1Hz_vs_10Hz_alternativ.pdf}
\caption{\textbf{A-D}: Coding fraction in the large-population limit as a function of input signal frequency for different parameters. Each curve represents a different noise strength. Points are only shown when the coding fraction increased by less than 2\% when the population size was increased from 1024 to 2048 neurons. For small amplitudes ($\sigma = 0.1\,$mV, A \& B) there was no convergence for a noise of $10^{-3}\,mV^2/Hz$. Coding fraction decreases for faster signals (50\,Hz and 200\,Hz). In the large-population limit, stronger noise results in a coding fraction at least as large as for weaker noise. \textbf{E, F}: Comparison of the coding fraction in the large-population limit for a 1\,Hz signal and a 10\,Hz signal. Shapes indicate noise strength, color indicates mean signal input (i.e.\ distance from threshold). The left plot shows an amplitude of $\sigma=0.1\,$mV, the right plot $\sigma=1.0\,$mV. The diagonal black line indicates where the coding fractions are equal.}
\label{cf_for_frequencies}
\end{figure}
\notedh{TODO: frequency vs optimum noise; For slower signals, coding fraction converges faster in terms of population size (figure \ref{cf_for_frequencies}).
This (convergence speed) is also true for stronger signals as opposed to weaker signals. For slower signals the maximum value is reached at weaker noise.}
\subsection{A tuning curve allows calculation of coding fraction for arbitrarily large populations}
To understand information encoding by populations of neurons, it is common practice to use simulations. However, the size of the simulated population is limited by the available computational power. We demonstrate a way to circumvent these limitations, allowing predictions to be made for the limit case of large population sizes. We use the interpretation of the tuning curve as a kind of averaged population response. To calculate this average, we need relatively few neurons to reproduce the response of an arbitrarily large population of neurons. This greatly reduces the necessary computational power. At least for slow signals, the spiking probability at a given point in time is determined by the signal strength at that moment. The population response should then simply be proportional to the response of a single neuron. This average firing rate is reflected in the tuning curve. We can look at the average firing rate for a given input to find the spiking probability and how this probability changes with noise. For faster signals, the past of the signal plays a role: after a spike there is a short period during which the simulated neuron is unlikely to fire again, even if there is no explicit refractory period. If the next signal peak falls into that period, fewer neurons will spike than would have without the first spike. We have also seen before that faster signals are not encoded as well as slower signals; but the results we obtain from using the tuning curve in this way are frequency-independent.
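The procedure can be sketched as follows (a hypothetical helper of my own; it assumes the tuning curve $g(V)$ has been tabulated from simulations at the noise level of interest):

```python
import numpy as np

def rate_from_tuning_curve(signal, curve_voltages, curve_rates):
    """Predict the (slow-signal) population firing rate by passing the
    input through a tabulated tuning curve g(V).  `curve_voltages` and
    `curve_rates` stand in for a curve measured at one noise level."""
    # Linear interpolation of g at each signal value; inputs below the
    # lowest tabulated voltage are mapped to a firing rate of 0.
    return np.interp(signal, curve_voltages, curve_rates, left=0.0)
```

Coherence and coding fraction are then computed between the input signal and this predicted firing rate, instead of between the input and simulated spike trains.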
\begin{figure}
\centering
\includegraphics[width=1.0\linewidth]{img/non_lin_example_undetail.pdf}
\includegraphics[width=0.5\linewidth]{{img/tuningcurves/6.00_to_15.00mV,1.0E-07_to_1.0E-02}.pdf}
\includegraphics[width=0.4\linewidth]{{img/temp/best_approximation_spikes_50hz_0.01noi500s_10.5_1_1.dat_256_with_input}.pdf}
\caption{Two ways to arrive at coherence and coding fraction. Left: The input signal (top, center) is received by LIF-neurons. The spiking of the neurons is then binned, and coherence and coding fraction are calculated between the result and the input signal. Right: The input signal (top, center) is transformed by the tuning curve (top right). The tuning curve corresponds to a function $g(V)$, which takes a voltage as input and yields a firing rate. The output is a modulated signal. We calculate coherence and coding fraction between the input voltage and the output firing rate. If the mean of the input is close to the threshold, as is the case here, all inputs below the threshold get projected to 0. This can be seen here at the beginning of the transformed curve. Bottom left: Tuning curves for different noise levels. The x-axis shows the stimulus strength in mV, the y-axis the corresponding firing rate. For low noise levels there is a strong non-linearity at the threshold. With increasing noise, the firing rate becomes larger than 0 for progressively weaker signals. For strong stimuli (roughly 13\,mV and more) there is little difference in the firing rate between the noise levels.}
\label{non-lin}
\end{figure}
The noise influences the shape of the tuning curve, with stronger noise linearizing the curve. The linearity of the curve is important, because the coding fraction is a linear measure. For strong input signals (around 13\,mV) the curve is almost linear, resulting in coding fractions close to 1. For signal amplitudes in this range, the firing rate is almost independent of the noise strength.
This tells us that the increase in coding fraction following a change in noise strength, which we saw in previous chapters, is not simply due to the neurons spiking more frequently. For slow signals (1\,Hz cutoff frequency, up to 10\,Hz) the results from the tuning curve and the simulation of large populations of neurons match very well (figure \ref{accuracy}) over a range of signal strengths, base inputs to the neurons, and noise strengths. This means that the LIF-neuron tuning curve gives us a very good approximation for the limit of encoded information that can be achieved by summing over independent, identical LIF-neurons with intrinsic noise. For faster signals, the coding fraction calculated through the tuning curve stays constant, as the tuning curve only deforms the signal. As shown in figure \ref{cf_for_frequencies} e) and f), the coding fraction of the LIF-neuron ensemble drops with increasing frequency. Hence, for high-frequency signals the tuning curve ceases to be a good predictor of the encoding quality of the ensemble.
\begin{figure}
\centering
\includegraphics[width=0.48\linewidth]{img/tuningcurves/tuningcurve_vs_simulation_10Hz.pdf}
\includegraphics[width=0.48\linewidth]{img/tuningcurves/tuningcurve_vs_simulation_200Hz.pdf}
\label{accuracy}
\caption{Coding fraction obtained from the simulations and coding fraction calculated from the tuning curve. For the low-frequency (10\,Hz) signal (left) the results are quite close to each other. Calculating the coding fraction from the tuning curve appears to consistently yield slightly larger values than we get from the simulations. For the faster signal (200\,Hz, right) the results of the two methods are quite far apart from each other; note the different scales on the axes. \notedh{The labels are off; also use different symbols!
Make the axes scale the same, but have an inset on the left?} Using the tuning curve to predict the coding fraction works for 10\,Hz but not for 200\,Hz.}
\end{figure}
For high-frequency signals, the method does not work. The effective, implicit refractory period prevents the instantaneous firing rate from being useful, because the neurons spike only in very short intervals around a signal peak. They are very unlikely to spike again immediately, so signal peaks that follow too closely after the preceding one will not be resolved properly.\notedh{Add a figure.}
We use the tuning curve to analyse how the signal mean and the signal amplitude change the coding fraction we would get from an infinitely large population of neurons (fig. \ref{non-lin}, bottom two rows). We can see that in this case stronger noise always yields a larger coding fraction. This is expected, because the tuning curve is more linear for stronger noise and the coding fraction is a linear measure. It also matches the fact that we are observing the limit of an infinitely large population, which would be able to ``average out'' any noise. For the coding fraction as a function of the mean, we see zero or near-zero coding fraction if the mean is far below the threshold. If the signal is too weak, it does not trigger any spiking in the neurons and no information can be encoded. As we increase the mean, at some point the coding fraction jumps up. This happens earlier for stronger noise, as spiking can then be triggered by weaker signals. The increase in coding fraction is much smoother for a larger amplitude (right figure). We also notice a sort of plateau, where increasing the mean does not lead to a larger coding fraction, before the coding fraction begins rising close to 1. The plateau begins earlier for stronger noise. For the coding fraction as a function of the signal amplitude, we see very different results depending on the parameters. Again, we see that stronger noise leads to a higher coding fraction.
If we are just above or at the threshold (center and right), an increase in signal amplitude leads to a lower coding fraction. This makes sense, as more of the signal moves into the very non-linear area around the threshold. A very interesting effect occurs if the mean is slightly below the threshold (left): while for strong noise we see the same effect as at or above the threshold, for weaker noise we see the opposite. The increase can be explained as the reverse of the effect that leads to decreasing coding fraction. Here, a larger amplitude means that the signal moves more often into the more linear part of the tuning curve. On the other hand, an increase in amplitude does not lead to worse encoding through movement of the signal into the low-firing-rate, non-linear part of the tuning curve -- because the signal is already there, so it cannot get worse. This can help explain why the coding fraction seems to saturate near 0.5: in an extreme case, the negative parts of a signal would not get encoded at all, while the positive parts would be encoded linearly.
\begin{figure}
\centering
\includegraphics[width=0.4\linewidth]{{img/tuningcurves/codingfraction_from_curves_amplitude_0.1mV}.pdf}
\includegraphics[width=0.4\linewidth]{{img/tuningcurves/codingfraction_from_curves_amplitude_0.5mV}.pdf}
\includegraphics[width=0.3\linewidth]{{img/tuningcurves/codingfraction_from_curves_mean_9.5mV}.pdf}
\includegraphics[width=0.3\linewidth]{{img/tuningcurves/codingfraction_from_curves_mean_10.0mV}.pdf}
\includegraphics[width=0.3\linewidth]{{img/tuningcurves/codingfraction_from_curves_mean_10.5mV}.pdf}
% \includegraphics[width=0.45\linewidth]{{img/rasterplots/best_approximation_spikes_50hz_1e-07noi500s_15_0.5_1.dat}.pdf}
% \includegraphics[width=0.45\linewidth]{{img/rasterplots/best_approximation_spikes_200hz_1e-07noi500s_15_0.5_1.dat}.pdf}
\label{codingfraction_means_amplitudes}
\caption{\textbf{A,B}: Coding fraction as a function of signal mean.
Each curve shows coding fraction as a function of the signal mean for a different noise level. The vertical line indicates the threshold. A: $\sigma = 0.1mV$. For the two weak noise strengths coding fraction is 0 if the signal mean is too far below the threshold. Just below the threshold there is a sharp increase, and a bit above the threshold the curves flatten out with a coding fraction close to 1. B: $\sigma = 0.5mV$. The increase in coding fraction with increasing signal mean is much smoother than at the lower amplitude. Surprisingly, there is a small drop in coding fraction as the signal mean increases roughly between 10mV and 11mV. The drop sets in slightly earlier for stronger noise. At about 12mV coding fraction reaches a plateau close to 1.
\textbf{C-E}: Coding fraction as a function of signal amplitude for different tuning curves (noise levels). Three different means are shown: one below the threshold (9.5mV), one at the threshold (10.0mV), and one above the threshold (10.5mV). Except for one combination of parameters, increasing amplitude always leads to decreasing coding fraction. Because the signal means are all very close to the threshold, an increase in amplitude means that the signal reaches the highly non-linear part of the tuning curve. The only exception is for the signal mean below the threshold (9.5mV) and relatively weak noise. For those parameters, encoding is very weak. Most of the time the signal is not strong enough to elicit any spikes in the neurons. By increasing the signal amplitude, the non-zero part of the tuning curve is reached more often.} \end{figure}
\section*{Discussion}
In this paper we have shown the effect of Suprathreshold Stochastic Resonance (SSR) in ensembles of neurons. We detailed how noise levels affect the impact of population size on the coding fraction. We looked at different frequency ranges and were able to show that the encoding of high-frequency signals profits particularly well from SSR.
Using the tuning curve we were able to provide a way to extrapolate the effects of SSR to very large populations. Because a general analysis of the impact of changing parameters is complex, we investigated limit cases, in particular the slow stimulus limit and the weak stimulus limit. For low-frequency signals, i.e. the slow stimulus limit, the tuning curve also allows analyzing the impact of changing signal strength; in addition we were able to show the difference between subthreshold SR and SSR for different noise levels. For the weak stimulus limit, where noise is relatively strong compared to the signal, we were able to provide an analytical solution for our observations.
Hoch et al. \citep{hoch2003optimal} also showed that SSR effects hold for both LIF and HH neurons. However, they found that the optimal noise level depends ``close to logarithmically'' on the number of neurons in the population. They used a cutoff frequency of only 20Hz for their simulations. \notedh{A plot relating population size and optimum noise is missing here}
We investigated the impact of noise on homogeneous populations of neurons. That neurons are intrinsically noisy is a well-investigated phenomenon (Grewe et al. 2017; Padmanabhan and Urban 2010). In natural systems, however, neuronal populations are rarely homogeneous. Padmanabhan and Urban (2010) showed that heterogeneous populations of neurons carry more information than homogeneous populations.
%\notedh{But noisy! Cite: neurons have intrinsic noise (introduction?)} (Grewe, Lindner, Benda 2017 PNAS synchrony code) (Padmanabhan, Urban 2010 Nature Neurosci).
Beiran et al. (2017) investigated SSR in heterogeneous populations of neurons. They argue that heterogeneous populations are comparable to homogeneous populations in which the neurons receive independent noise in addition to a deterministic signal.
They make the point that in the case of weak signals, heterogeneous populations can encode information better, as strong noise would overwhelm the signal. \notedh{Highlight the differences!} Similarly, Hunsberger et al. (2014) showed that both noise and heterogeneity linearize the tuning curve of LIF neurons. In summary, while noise and heterogeneity are not completely interchangeable, in the limit cases we see similar behaviour.
Sharafi et al. \citep{Sharafi2013} had already investigated SSR in a similar way. However, they only observed populations of up to three neurons and focused on the synchronous output of the cells. They took spike trains, convolved them with a Gaussian and then multiplied the responses of the different neurons. In our simulations we instead used the sum of the spike trains to calculate the coherence between input and output. Instead of changing the noise parameter to find the optimum noise level, they changed the input signal frequency to find a resonating frequency, which was possible for suprathreshold stochastic resonance, but not for subthreshold stochastic resonance. For some combinations of parameters we also found that coding fraction does not decrease monotonically with increasing signal frequency (fig. \ref{cf_for_frequencies}). This is especially notable for signals that are far from the threshold (fig. \ref{cf_for_frequencies} E,F, red markers). That we do not see this effect as clearly matches Sharafi et al.'s observation that in the case of subthreshold stochastic resonance, coherence monotonically decreases with increasing frequency. Pakdaman et al. (2001) \notedh{Connect this better than what follows (compare across orders of magnitude; compare with figure 5\ref{}; cite more than Sharafi, keyword ``Coherence Resonance'')}
Similar research to that of Sharafi et al. was done by de la Rocha et al. (2007). They investigated the output correlation in populations of two neurons and found that it increases with the firing rate.
We found something similar in this paper, where an increase in $\mu$ increases both the firing rate of the neurons and generally also the coding fraction \notedh{Connect with output correlation}(fig. \ref{codingfraction_means_amplitudes}). Our explanation is that coding fraction and firing rate are linked via the tuning curve. In addition to simulations of LIF neurons, de la Rocha et al. also carried out \textit{in vitro} experiments which confirmed their simulation results. \notedh{Be more concrete: what do the others do that relates to us, and how exactly does it relate to us?} \notedh{Maybe Stocks again, even though he already appears in the introduction? Heterogeneous/homogeneous} \notedh{Dynamic stimuli! Not in Stocks, for example, only e.g. in Beiran. We have the transition.}
Examples for neuronal systems that feature noise are P-unit receptor cells of weakly electric fish (which paper?) and ...
In the case of low cutoff frequency and strong noise we were able to derive a formula that explains why coding fraction then depends only on the ratio of noise strength to population size, whereas in general the two variables have very different effects on the coding fraction.
\subsection{Different frequency ranges}
\subsection{Narrow-/wideband}
\subsection{Narrowband stimuli}
Using the \(f_{cutoff} = 200 \hertz\usk\) signal, we repeated the analysis (fig. \ref{cf_limit}) considering only selected parts of the spectrum. We did so for two ``low frequency'' (0--8Hz, 0--50Hz) and two ``high frequency'' (192--200Hz, 150--200Hz) intervals.\notedh{8Hz is not in yet.} We then compared the results to the results we get from narrowband stimuli, with power only in those frequency bands. To keep the power of the signal inside the two intervals the same as in the broadband stimulus, the amplitude of the narrowband signals was smaller than that of the broadband signal. For the 8Hz intervals, the amplitude (i.e.
standard deviation) of the signal was 0.2mV, or a fifth of the amplitude of the broadband signal. Because signal power is proportional to the square of the amplitude, this is appropriate for a stimulus whose bandwidth is 25 times smaller. Similarly, for the 50Hz intervals we used a 0.5mV amplitude, half that of the broadband stimulus. As the square of the amplitude is equal to the integral over the frequency spectrum, for a signal with a quarter of the bandwidth we need to halve the amplitude to obtain the same power in the interval covered by the narrowband signals.
\subsection{Smaller frequency intervals in broadband signals }
\begin{figure}
\includegraphics[width=0.45\linewidth]{img/small_in_broad_spectrum}
\includegraphics[width=0.45\linewidth]{img/power_spectrum_0_50}
\includegraphics[width=0.49\linewidth]{{img/broad_coherence_15.0_1.0}.pdf}
\includegraphics[width=0.49\linewidth]{{img/coherence_15.0_0.5_narrow_both}.pdf}
\includegraphics[width=0.49\linewidth]{{img/broad_coherence_10.5_1.0_200}.pdf}
\includegraphics[width=0.49\linewidth]{{img/coherence_10.5_0.5_narrow_both}.pdf}
\caption{Coherence for broad and narrow frequency range inputs. a) Broad spectrum. At the frequency of the firing rate (91Hz, marked by the black bar) and its first harmonic (182Hz) the coherence breaks down. For the weak noise level (blue), population sizes n=4 and n=4096 show indistinguishable coherence. In case of a small population size, coherence is higher for weak noise (blue) than for strong noise (green) in the frequency range up to about 50\hertz. For higher frequencies coherence is unchanged. For the larger population size and the greater noise strength there is a large increase in coherence at all frequencies. b) Coherence for two narrowband inputs with different frequency ranges. Low frequency range: coherence for the slow parts of the signal is close to 1 for weak noise. SSR works mostly on the higher frequencies (here >40\hertz).
High frequency range: At 182Hz (twice the firing frequency) there is a very sharp decrease in coherence, especially for the weak noise condition (blue). Increasing the noise makes the drop less pronounced. For weak noise (blue) there is another breakdown at 182-(200-182)Hz = 164Hz. Stronger noise seems to make this sharp drop disappear. Again, the effect of SSR is most noticeable for the higher frequencies in the interval.\notedh{Add description for 10.5mV}}
\label{fig:coherence_narrow}
\end{figure}
We want to know how well encoding works for different frequency intervals in the signal. When we take a narrower frequency interval out of a broadband signal, the other frequencies in the signal serve as common noise to the neurons encoding the signal. In many cases we only care about a certain frequency band in a signal of much wider bandwidth. In figure \ref{fig:coherence_narrow} C we can see that SSR affects some frequencies inside the signal very differently from others. In blue we see the case of very weak noise (\(10^{-6} \milli\volt\squared\per\hertz\)). Coherence starts somewhat close to 1 but falls off quickly, reaching about 0.5 by 50Hz and dropping to almost zero around the 91Hz firing rate of the neurons. Following that there is a small increase up to about 0.1 at around 130Hz, after which coherence decreases to almost 0. Increasing the population size from 4 neurons to 2048 neurons has practically no effect. When we keep the population size at 4 neurons but add more noise to the neurons (green, \(2\cdot10^{-3} \milli\volt\squared\per\hertz\)), encoding of the low frequencies (up to about 50\hertz) becomes worse, while encoding of the higher frequencies stays unchanged. When we increase the population size to 2048 neurons we have almost perfect encoding for frequencies up to 50\hertz. Coherence is still reduced around the average firing rate of the neurons, but at a much higher level than before. For frequencies above the firing rate, coherence increases again.
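The coherence and coding fraction analyses discussed here can be sketched in a few lines of Python. The following is an illustrative toy version, not the simulation code used for the figures: the dimensionless units, Euler time step, Welch segment length and all parameter names are our own choices.

```python
import numpy as np
from scipy.signal import csd, welch

def bandlimited_signal(rng, n, dt, fc, sigma):
    """Gaussian signal with a flat spectrum up to the cutoff fc and std sigma."""
    freqs = np.fft.rfftfreq(n, dt)
    spec = rng.normal(size=freqs.size) + 1j * rng.normal(size=freqs.size)
    spec[freqs > fc] = 0.0
    s = np.fft.irfft(spec, n)
    return sigma * s / s.std()

def lif_population(signal, dt, n_neurons, mu, noise_d, rng,
                   v_thresh=1.0, v_reset=0.0):
    """Euler-Maruyama simulation of LIF neurons (time in units of the
    membrane constant) sharing a common signal; each neuron receives
    independent white noise of intensity noise_d.
    Returns the summed spike count per time bin."""
    v = np.zeros(n_neurons)
    out = np.zeros(signal.size)
    for i, s in enumerate(signal):
        v += dt * (mu + s - v) + np.sqrt(2.0 * noise_d * dt) * rng.normal(size=n_neurons)
        fired = v >= v_thresh
        out[i] = fired.sum()
        v[fired] = v_reset
    return out

def coding_fraction(signal, response, dt, fc, nperseg=4096):
    """Coding fraction 1 - (rms error of the optimal linear reconstruction
    relative to the signal std), computed from the stimulus-response
    coherence inside the signal band."""
    fs = 1.0 / dt
    f, s_sx = csd(signal, response, fs=fs, nperseg=nperseg)
    _, s_ss = welch(signal, fs=fs, nperseg=nperseg)
    _, s_xx = welch(response, fs=fs, nperseg=nperseg)
    coh = np.abs(s_sx) ** 2 / (s_ss * s_xx)
    band = (f > 0) & (f <= fc)
    eps2 = np.sum(s_ss[band] * (1.0 - coh[band])) / np.sum(s_ss[band])
    return 1.0 - np.sqrt(eps2), f[band], coh[band]
```

Varying `n_neurons` and `noise_d` in such a sketch allows exploring the qualitative behaviour described above without the full simulation machinery.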
For the weaker mean input (figure \ref{fig:coherence_narrow} E) results look similar. For weak noise (blue) increasing the population size makes no difference. Coherence starts relatively high again (around 0.7). There is a decrease in coherence with increasing frequency which is steep at first, until about the firing rate of the neurons, after which the decrease flattens off. For stronger noise, encoding at low frequencies is worse for small populations; for large populations the coherence is greatly increased at all frequencies. Coherence is very close to 1 at first, decreases slightly as the frequency is increased up to the firing rate, and stays roughly constant thereafter. In summary, the high frequency bands inside the broadband stimulus experience a much greater increase in encoding quality than the low frequency bands, which were already encoded quite well.
\begin{figure}
\includegraphics[width=0.45\linewidth]{img/broadband_optimum_newcolor.pdf}
\includegraphics[width=0.45\linewidth]{img/smallband_optimum_newcolor.pdf}
\centering
\includegraphics[width=0.9\linewidth]{img/max_cf_smallbroad.pdf}
\caption{ C and D \notedh{B and C right now because the order in the right column was mixed up}: Optimal amount of noise for different numbers of neurons. The dashed lines show where coding fraction is still at least 95\% of the maximum. The peaks are much wider for the narrowband signals, encompassing the entire width of the high-frequency interval peak. Optimum noise values for a fixed number of neurons are always higher for the broadband signal than for the narrowband signals. In the broadband case, the optimum amount of noise is larger for the high-frequency interval than for the low-frequency interval, and vice versa in the narrowband case.
%The optimal noise values have been fitted with a function of the square root of the population size N, $f(N)=a+b\sqrt{N}$.
We observe that the optimal noise value grows with the square root of the population size.
E and F: Coding fraction as a function of noise for a fixed population size (N=512). Red dots show the maximum, the red line marks where coding fraction is at least 95\% of the maximum value. G: An increase in population size leads to a higher coding fraction, especially for broader bands and higher frequency intervals. Coding fraction is larger for the narrowband signal than in the equivalent broadband interval for all neural population sizes considered here. The coding fraction for the low frequency intervals is always larger than for the high frequency interval. Signal mean $\mu=15.0\milli\volt$, signal amplitude $\sigma=1.0\milli\volt$ and $\sigma=0.5\milli\volt$ respectively.}
\label{smallbroad}
\end{figure}
\subsection{Narrowband Signals vs Broadband Signals}
In nature, external stimuli often cover a narrow frequency range that starts at high frequencies, so that using only broadband white noise signals as input is insufficient to describe realistic scenarios.\notedh{Add examples.}
%, with bird songs\citep{nottebohm1972neural} and ???\footnote{chirps, in a way?}.
%We see that in many animals receptor neurons have adapted to these signals.
For example, it was found that electroreceptors in weakly electric fish have band-pass properties\citep{bastian1976frequency}. Therefore, we investigate the coding of narrowband signals in the ranges described earlier (0--50Hz, 150--200Hz). Comparing the results from coding of broadband and coding of narrowband signals, we see several differences. For both low and high frequency signals, the narrowband signal can be resolved better than the broadband signal for any amount of noise and at all population sizes (figure \ref{smallbroad}, bottom left). That coding fractions are higher when we use narrowband signals can be explained by the absence of the additional frequencies present in the broadband signal. In the broadband signal these act as a form of ``noise'' that is common to all the input neurons.
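The amplitude scaling used for the narrowband stimuli (power matched to the corresponding interval of the broadband stimulus) can be checked numerically. The sketch below is illustrative only; the sampling parameters and the helper names `band_noise` and `band_power` are our own choices.

```python
import numpy as np

def band_noise(rng, n, dt, f_lo, f_hi, sigma):
    """Gaussian noise with a flat spectrum in [f_lo, f_hi] and std sigma."""
    freqs = np.fft.rfftfreq(n, dt)
    spec = rng.normal(size=freqs.size) + 1j * rng.normal(size=freqs.size)
    spec[(freqs < f_lo) | (freqs > f_hi)] = 0.0
    s = np.fft.irfft(spec, n)
    return sigma * s / s.std()

def band_power(s, dt, f_lo, f_hi):
    """Variance of the signal contained in the interval [f_lo, f_hi]."""
    freqs = np.fft.rfftfreq(s.size, dt)
    spec = np.fft.rfft(s)
    spec[(freqs < f_lo) | (freqs > f_hi)] = 0.0
    return np.fft.irfft(spec, s.size).var()

# A broadband 0-200 Hz signal with sigma = 1.0 and a narrowband 0-50 Hz
# signal with sigma = 0.5 carry the same power inside 0-50 Hz (about 0.25),
# because power scales with the square of the amplitude and linearly with
# the bandwidth.
```

The same check with sigma = 0.2 for an 8Hz band reproduces the factor-of-five amplitude scaling described in the text.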
Similar to what we saw for the broadband signal, the peak of the low frequency input is still much broader than the peak of the high frequency input. To encode low frequency signals, the exact strength of the noise is not as important as it is for high frequency signals, as can be seen from the wider peaks.
\subsection{Discussion}
The beneficial role of noise in the encoding of subthreshold signals by single neurons has been well investigated. However, the encoding of suprathreshold signals by populations of neurons has received comparatively little attention, and different effects play a role for suprathreshold signals than for subthreshold signals \citep{Stocks2001}. This paper delivers an important contribution to the understanding of suprathreshold stochastic resonance (SSR). We simulate populations of leaky integrate-and-fire neurons to answer the question of how population size influences the optimal noise strength for linear encoding of suprathreshold signals. We are able to show that this optimal noise is well described as a function of the square root of the population size.\notedh{Currently missing, but it is somewhere in my notes ...} This relationship is independent of the frequency properties of the input signal and holds true for narrowband and broadband signals.
In this paper, we show that SSR works in LIF neurons for a variety of signals of different bandwidths and frequency intervals. We show that for signals above a certain strength the signal-to-noise ratio is sufficient to describe the optimal noise strength in the population, but that the actual coding fraction depends on the absolute signal strength.
%We furthermore show that increasing signal strength does not always increase the coding fraction.
We contrast how well the low and high frequency parts of a broadband signal can be encoded.
We take an input signal with $f_{cutoff} = \unit{200}\hertz$ and analyse the coding fraction for the frequency ranges 0 to \unit{50}\hertz\usk and 150 to \unit{200}\hertz\usk separately. The maximum value of the coding fraction is lower for the high frequency interval than for the low frequency interval. This means that inside broadband signals higher frequency intervals are more difficult to encode at every level of noise and population size. The low frequency interval has a wider peak (defined as the range over which coding fraction is at least 95\% of its maximum value), which means that around the optimal amount of noise there is a large region where coding fraction is still good. The noise optimum for the low frequency parts of the input is lower than the optimum for the high frequency interval (Fig. \ref{highlowcoherence}). In both cases, the optimal noise value appears to grow with the square root of the population size.\notedh{See note above}
In general, narrowband signals can be encoded better than broadband signals.
Another main finding of this paper is the discovery of the frequency dependence of SSR. As we can see from the shape of the coherence between the signal and the output of the simulated neurons, SSR works mostly on the higher frequencies in the signal. As the lower frequency components are in many cases already encoded very well, the addition of noise helps to flatten the shape of the coherence curve. In the case of weak noise there are often border effects, which disappear with increasing noise strength. In addition, for weak noise there are often visible effects of the firing rate of the neurons, in that the encoding around those frequencies is worse than for the surrounding frequencies. Generally this effect becomes less pronounced when we add more noise to the simulation, but we found a very striking exception in the case of narrowband signals.
For a firing rate of about 91\hertz\usk the coding fraction for a signal in the 0--50\hertz\usk band is higher than for a signal in the 150--200\hertz\usk band. However, this is not the case if the neurons have a firing rate of about 34\hertz. We were thus able to show that the firing rate of the neurons in the simulation is of critical importance for the encoding of the signal.
\section{Theory}
\subsection{For large population sizes and strong noise, coding fraction becomes a function of their quotient}
For the linear response regime of large noise, we can estimate the coding fraction. From Beiran et al. (2018) we know that the coherence in linear response is given by
\eq{	C_N(\omega) = \frac{N|\chi(\omega)|^2 S_{ss}}{S_{x_ix_i}(\omega)+(N-1)|\chi(\omega)|^2S_{ss}}
\label{eq:linear_response}
}
where \(S_{ss}\) is the power spectrum of the signal, \(\chi(\omega)\) the susceptibility and \(S_{x_ix_i}(\omega)\) the spike-train power spectrum of a single neuron; setting \(N=1\) yields the coherence function \(C_1(\omega)\) of a single LIF neuron. Generally, the single-neuron coherence is given by \citep{??}
\eq{
C_1(\omega)=\frac{r_0}{D} \frac{\omega^2S_{ss}(\omega)}{1+\omega^2}\frac{\left|\mathcal{D}_{i\omega-1}\big(\frac{\mu-v_T}{\sqrt{D}}\big)-e^{\Delta}\mathcal{D}_{i\omega-1}\big(\frac{\mu-v_R}{\sqrt{D}}\big)\right|^2}{\left|\mathcal{D}_{i\omega}\big(\frac{\mu-v_T}{\sqrt{D}}\big)\right|^2-e^{2\Delta}\left|\mathcal{D}_{i\omega}\big(\frac{\mu-v_R}{\sqrt{D}}\big)\right|^2}
\label{eq:single_coherence}
}
where \(r_0\) is the firing rate of the neuron,
\[r_0 = \left(\tau_{ref} + \sqrt{\pi}\int_{\frac{\mu-v_T}{\sqrt{2D}}}^{\frac{\mu-v_R}{\sqrt{2D}}} dz\, e^{z^2} \erfc(z) \right)^{-1}.\]
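The firing rate formula can be evaluated numerically without overflow by using the scaled complementary error function \(\mathrm{erfcx}(z)=e^{z^2}\erfc(z)\). The short sketch below assumes dimensionless units (time in units of the membrane constant, \(v_T = 1\), \(v_R = 0\)); these conventions and the function name are our own choices.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erfcx  # erfcx(z) = exp(z**2) * erfc(z), overflow-safe

def lif_rate(mu, noise_d, v_reset=0.0, v_thresh=1.0, tau_ref=0.0):
    """Stationary firing rate r_0 of a white-noise-driven LIF neuron,
    with time measured in units of the membrane constant."""
    a = (mu - v_thresh) / np.sqrt(2.0 * noise_d)
    b = (mu - v_reset) / np.sqrt(2.0 * noise_d)
    integral, _ = quad(erfcx, a, b)  # integrand exp(z^2) erfc(z) without overflow
    return 1.0 / (tau_ref + np.sqrt(np.pi) * integral)
```

For weak noise and a suprathreshold mean this approaches the deterministic rate \(1/\ln\frac{\mu-v_R}{\mu-v_T}\); for a subthreshold mean the rate grows with the noise intensity.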
In the limit of large noise (calculation in the appendix) this equation evaluates to:
\eq{ C_1(\omega) = \sqrt{\pi}D^{-1} \frac{S_{ss}(\omega)\omega^2/(1+\omega^2)}{2 \sinh\left(\frac{\omega\pi}{2}\right)\Im\left( \Gamma\left(1+\frac{i\omega}{2}\right)\Gamma\left(\frac12-\frac{i\omega}{2}\right)\right)}
\label{eq:simplified_single_coherence}
}
From eqs. \eqref{eq:linear_response} and \eqref{eq:simplified_single_coherence} it follows that in the case \(D \rightarrow \infty\) the coherence, and therefore the coding fraction, of the population of LIF neurons is a function of \(D^{-1}N\). We plot the approximation as a function of \(\omega\) (fig. \ref{d_n_ratio}). In the limit of small frequencies the approximation matches the exact equation very well, though not for higher frequencies. We can verify this in our simulations by plotting coding fraction as a function of \(\frac{D}{N}\). We see (fig. \ref{d_n_ratio}) that in the limit of large D the curves indeed lie on top of each other. This is, however, not the case (fig. \ref{d_n_ratio}) for stimuli with a large cutoff frequency \(f_c\), as expected from our evaluation of the approximation as a function of frequency.
\begin{figure}
\centering
\includegraphics[width=0.32\linewidth]{{img/d_over_n/d_10.5_0.5_10_detail}.pdf}
\includegraphics[width=0.32\linewidth]{{img/d_over_n/d_15.0_0.5_50_detail}.pdf}
\includegraphics[width=0.32\linewidth]{{img/d_over_n/d_15.0_1.0_200_detail}.pdf}
\includegraphics[width=0.32\linewidth]{{img/d_over_n/d_over_n_10.5_0.5_10_detail}.pdf}
\includegraphics[width=0.32\linewidth]{{img/d_over_n/d_over_n_15.0_0.5_50_detail}.pdf}
\includegraphics[width=0.32\linewidth]{{img/d_over_n/d_over_n_15.0_1.0_200_detail}.pdf}
\label{d_n_ratio}
\caption{Top row: Coding fraction as a function of noise. Bottom row: Coding fraction as a function of the ratio between noise strength and population size. For strong noise, coding fraction is a function of this ratio.
Left: signal mean 10.5mV, signal amplitude 0.5mV, $f_{c}$ 10Hz. Middle: signal mean 15.0mV, signal amplitude 0.5mV, $f_{c}$ 50Hz. Right: signal mean 15.0mV, signal amplitude 1.0mV, $f_{c}$ 200Hz.}
\end{figure}
\subsection{Refractory period}
We analyzed the effect of non-zero refractory periods on the previous results. We repeated the same simulations as before but added a 1ms or a 5ms refractory period to each of the LIF neurons. Results are summarized in figure \ref{refractory_periods}. Results change very little for a refractory period of 1ms, especially for large noise values. For a refractory period of 5ms the resulting coding fraction is lower for almost all noise values. Paradoxically, for high frequencies in narrowband signals and very small noise, coding fraction is actually larger for a 5ms refractory period than for a 1ms one. \notedh{Needs plots!} In spite of this, coding fraction is still largest for the LIF ensembles without a refractory period.
We also find all other results replicated even with refractory periods of 1ms or 5ms: Figure \ref{refractory_periods} shows that the optimal noise still grows with \(\sqrt{N}\) for both the 1ms and the 5ms refractory period. The optimum noise value increases with the refractory period. The achievable coding fraction is lower for the neurons with refractory periods, especially at the maximum. In the limit of large noise, the neurons with a 1ms refractory period and the ones with no refractory period also reach similar coding fractions over a wide range of population sizes. However, this is not true for the neurons with a 5ms refractory period.
\begin{figure}
\includegraphics[width=0.8\linewidth]{img/ordnung/refractory_periods_coding_fraction.pdf}
\caption{Repeating the simulations with a refractory period added to the LIF neurons shows no qualitative changes in the SSR behaviour of the neurons. Coding fraction is lower the longer the refractory period.
The SSR peak moves to stronger noise; cells with longer refractory periods need stronger noise to work optimally.}
\label{refractory_periods}
\end{figure}
\section{Electric fish}
\subsection{Introduction}
\subsection{Methods}
\subsection*{Electrophysiology}
We recorded electrophysiological data from X cells from Y different fish.
\subsection{Analysis}
To analyse the data we proceeded in the same way as for the simulations. For more information see section \ref{Analysis}. We created populations out of each cell. For each p-unit we took the different trials and summed the spikes in each time bin, in the same way as for the simulated neurons. \notedh{Did I do something to build averages for smaller population sizes?} For most of the analysed cells there were between X and Y \notedh{fill information in} trials.
\subsection{Frequency dependence of sensory cells in \lepto}
\begin{figure}
\centering
\includegraphics[width=0.49\linewidth]{img/fish/coherence_example.pdf}
\includegraphics[width=0.49\linewidth]{img/fish/coherence_example_narrow.pdf}
\label{fig:ex_data}
\caption{Examples of coherence in the p-units of \lepto. Each plot shows the coherence of the response of a single cell to a stimulus for different numbers of trials. Left: One signal with a maximum frequency of 300Hz. As in the simulations, increased population sizes lead to a higher coherence. Again this is true especially for the higher frequencies, where coherence is small for small population sizes. With increased population size the coherence curve becomes flatter in the low frequencies. Compare figure \ref{CodingFrac}. \todo{Firing rate would be interesting to know to explain some of the dips} Right: Three different signals (0-50Hz, 150-200Hz and 350-400Hz). With increasing population size the increase in coherence is especially noticeable for the higher frequency ranges. See also figure \ref{fish_result_summary_yue} b).
Interestingly, for this cell coherence is higher for higher frequency signals, given a large enough population. \notedh{Show a different cell with all five narrowband signals?} }
\end{figure}
\subsection{How to determine noisiness}
\subsection*{Determining noise in real world}
While in simulations we can control the noise parameter directly, we cannot do so in electrophysiological experiments. Therefore, we need a way to quantify ``noisiness''. One way is to use the activation curve of the neuron, fit a function to it, and extract the noise parameter from that fit. Stocks (2000) uses one such function to simulate groups of noisy spiking neurons:
\begin{equation}
\label{errorfct}\frac{1}{2}\erfc\left(\frac{\theta-x}{\sqrt{2\sigma^2}}\right)
\end{equation}
where $\sigma$ is the parameter quantifying the noise (figure \ref{Curve_examples}). $\sigma$ determines the steepness of the curve. A neuron with a $\sigma$ of 0 would be a perfect thresholding mechanism: firing probability is 0 for all inputs below the threshold and 1 for all inputs above it. Larger values of $\sigma$ mean a flatter activation curve. A neuron with such an activation curve can sometimes fire even for inputs below the firing threshold, while sometimes not firing for inputs above it. Its firing behaviour is determined less by the signal and more by the noise. We also tried several other commonly used methods of quantifying noise (citations), but none of them worked as well as the error function fit (fig. \ref{noiseparameters}). For example, the coefficient of variation (CV), which is commonly \notedh{add citations} used as a measure of noisiness, does not only depend on the noisiness of the cell, but also on other cell parameters like the membrane constant or the refractory period.
\subsection*{Methodology}
We calculate the cross-correlation between the signal and the discrete output spikes. The signal values were binned into 50 bins.
The result is a discrete Gaussian distribution around 0mV, the mean of the signal, as is expected from the way the signal was created. We have to account for the delay between the moment we play the signal and the moment it is processed in the cell, which can for example depend on the position of the cell on the skin. We can easily reconstruct the delay from the measurements: the position of the peak of the cross-correlation is the time shift at which the signal influences the output the most. For an explanation, see figure \ref{timeshift}. Then for every spike we assign the value of the signal at the time of the spike minus the time shift. The result is a histogram, in which each signal value bin has a number of spikes. This histogram is then normalized by the distribution of the signal. The result is another histogram, whose values are firing frequencies for each signal value. Because those frequencies are just firing probabilities multiplied by time, we can fit a Gaussian error function to those probabilities.
\subsection*{Simulation}
To confirm that the $\sigma$ parameter estimated from the fit is indeed a good measure of noisiness, we validated it against D, the noise parameter from the simulations. We find that there is a strictly monotonic relationship between the two for different sets of simulation parameters. Other parameters often used to determine noisiness (citations), such as the variance of the spike PSTH or the coefficient of variation (CV) of the interspike intervals, are not as useful (see figure \ref{noiseparameters}). The variance of the PSTH is not always monotonic in D and is very flat for low values of D. The membrane constant $\tau$ determines how quickly the voltage of an LIF neuron changes, with smaller constants meaning faster changes. Only $\sigma$ does not change its value with different $\tau$.
%describe what happens to the others
%check Fano-factor maybe?
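The procedure just described (delay from the cross-correlation peak, spike-triggered signal histogram normalized by the signal distribution, error-function fit) can be sketched as follows. This is an illustrative reimplementation, not the analysis code used for the recordings; the maximum lag, bin count and minimum bin occupancy are invented choices.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

def erfc_curve(x, a, theta, sigma):
    """Scaled activation curve: a/2 * erfc((theta - x) / sqrt(2 sigma^2))."""
    return 0.5 * a * erfc((theta - x) / (np.sqrt(2.0) * sigma))

def estimate_sigma(signal, spikes, n_bins=50, max_lag=50, min_count=10):
    """Estimate the noisiness parameter sigma from a stimulus trace and a
    binary spike train sampled on the same time grid."""
    # 1. delay: lag at which sum_t spikes[t] * signal[t - lag] is maximal
    corr = [np.dot(spikes[lag:], signal[:signal.size - lag])
            for lag in range(max_lag)]
    delay = int(np.argmax(corr))
    # 2. signal value preceding each spike by that delay
    idx = np.nonzero(spikes)[0]
    vals = signal[idx[idx >= delay] - delay]
    # 3. spike-triggered histogram, normalized by the signal distribution
    edges = np.linspace(signal.min(), signal.max(), n_bins + 1)
    spike_counts, _ = np.histogram(vals, bins=edges)
    sig_counts, _ = np.histogram(signal, bins=edges)
    centers = 0.5 * (edges[:-1] + edges[1:])
    keep = sig_counts > min_count          # drop sparsely sampled bins
    rate = spike_counts[keep] / sig_counts[keep]
    # 4. fit the error-function activation curve to the firing probabilities
    p0 = [rate.max(), centers[keep].mean(), signal.std()]
    popt, _ = curve_fit(erfc_curve, centers[keep], rate, p0=p0, maxfev=10000)
    return delay, abs(popt[2])
```

On synthetic spike trains generated from a known activation curve, this procedure recovers both the delay and a \(\sigma\) close to the true value.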
\begin{figure}
\centering
%\includegraphics[width=0.45\linewidth]{img/ordnung/base_D_sigma}\\
\includegraphics[width=0.23\linewidth]{img/cv_psth_sigma_compare/dataframe_scatter_labels_D_sigma_membrane_50.pdf}
\includegraphics[width=0.23\linewidth]{img/cv_psth_sigma_compare/dataframe_scatter_labels_D_cv_membrane_50.pdf}
\includegraphics[width=0.23\linewidth]{img/cv_psth_sigma_compare/dataframe_scatter_labels_D_psth_1ms_membrane_50.pdf}
\includegraphics[width=0.23\linewidth]{img/cv_psth_sigma_compare/dataframe_scatter_labels_D_psth_5ms_membrane_50.pdf}
\includegraphics[width=0.23\linewidth]{img/cv_psth_sigma_compare/dataframe_scatter_labels_D_sigma_refractory_50.pdf}
\includegraphics[width=0.23\linewidth]{img/cv_psth_sigma_compare/dataframe_scatter_labels_D_cv_refractory_50.pdf}
\includegraphics[width=0.23\linewidth]{img/cv_psth_sigma_compare/dataframe_scatter_labels_D_psth_1ms_refractory_50.pdf}
\includegraphics[width=0.23\linewidth]{img/cv_psth_sigma_compare/dataframe_scatter_labels_D_psth_5ms_refractory_50.pdf}
\caption{a) The parameter \(\sigma\) as a function of the noise parameter D in LIF simulations. There is an almost strictly monotonic relationship between the two, which allows us to use \(\sigma\) as a substitute for D in the analysis of electrophysiological experiments. Furthermore, changing the membrane constant of the simulated neurons has no influence on \sig, indicating that it really is a function of the noise and not additionally influenced by the firing rate. b-d) Left to right: $\sigma$, CV and standard deviation of the PSTH with two different kernel widths as a function of D for different membrane constants (4ms, 10ms and 16ms). The CV (c)) is not even monotonic in the case of a time constant of 4ms, ruling out any potential usefulness. e-h) Left to right: $\sigma$, CV and standard deviation of the PSTH with two different kernel widths as a function of D for different refractory periods (0ms, 1ms and 5ms).
Only $\sigma$ does not change with different refractory periods. } \label{noiseparameters} \end{figure} We tried several different bin sizes (30 to 300 bins) and spike widths. There was little difference between the different parameters (see figure \ref{sigma_bins} in the appendix). \section*{Electric fish as a real-world model system} To put the results from our simulations into a real-world context, we chose the weakly electric fish \textit{Apteronotus leptorhynchus} as a model system. \lepto\ uses an electric organ to produce electric fields, which it uses for orientation, prey detection and communication. Distributed over the skin of \lepto\ are electroreceptors which produce action potentials in response to electric signals. These receptor cells (``p-units'') are analogous to the simulated neurons because they do not receive any input other than the signal they are encoding: individual cells fire independently of each other and there is no feedback. \begin{figure} \includegraphics[width=0.7\linewidth]{img/explain_analysis/after_timeshift_11.pdf} \caption{Top: Spike train of a p-unit. Middle: The original signal is represented by the blue line. The orange line is the response of the cell, re-created by convolving the spike train with a Gaussian kernel. Bottom: The signal was shifted forward in time so that the response fits the signal better. This corrects for any delays in the recording process, whether technical or biological. } \label{timeshift} \end{figure} \subsection*{Electrophysiology} To correct for the time delay between the emission of the signal and the recording of the electrophysiological data, we apply a time shift to the signal so that it matches up with the recorded response (figure \ref{timeshift}). Next we fit the curve from equation \ref{errorfct} to the recorded data. We find that the fits are very close to the data (figure \ref{sigmafits_example}).
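The peak-of-the-cross-correlation delay estimate used for this time shift can be sketched as follows. \texttt{estimate\_delay} is a hypothetical helper, and the ``response'' here is simply a delayed, noisy copy of the signal rather than a convolved spike train:

```python
import numpy as np

def estimate_delay(signal, response):
    """Lag (in samples) at which the cross-correlation of signal and
    response peaks; positive means the response lags behind the signal."""
    s = signal - signal.mean()
    r = response - response.mean()
    cc = np.correlate(r, s, mode="full")
    return int(np.argmax(cc)) - (len(s) - 1)

# Synthetic check: the "response" is the signal delayed by 7 samples plus noise.
rng = np.random.default_rng(1)
signal = rng.normal(size=5000)
response = np.roll(signal, 7) + 0.2 * rng.normal(size=5000)
delay = estimate_delay(signal, response)
aligned = np.roll(signal, delay)  # shift the signal so it lines up with the response
```

Subtracting the means before correlating avoids a spurious peak from a nonzero baseline; the returned lag is then applied as the time shift.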
Due to the Gaussian signal distribution there are fewer samples for very weak and very strong inputs. In these regions the firing rates become somewhat noisy. This is especially noticeable for strong inputs, where there are more spikes than for weaker inputs. Fluctuations are less visible for weak inputs, where there is very little spiking anyway. \begin{figure} \includegraphics[width=0.4\linewidth]{img/ordnung/cropped_fitcurve_0_2010-08-31-ad-invivo-1_0.pdf} \includegraphics[width=0.4\linewidth]{img/ordnung/cropped_fitcurve_0_2010-08-11-aq-invivo-1_0.pdf}
% cropped_fitcurve_0_2010-08-31-ad-invivo-1_0.pdf: 0x0 px, 300dpi, 0.00x0.00 cm, bb=
\caption{Histogram of spike count distribution (firing rate) and error function fits. 50 bins represent different values of the amplitude of the Gaussian distributed input signal \notedh{[maybe histogram in background again] - or better: One plot where I show the raw data - histogram in background, number of spikes as dots.}. The value of each of those bins is the number of spikes during the times the signal was in that bin. Each of the values was normalized by the signal distribution. To account for delay, we first calculated the cross-correlation of signal and spike train and took its peak as the delay. This is necessary because there are delays between the signal being emitted and being registered by the cell (see figure \ref{timeshift}). The lines show fits according to equation \eqref{errorfct}. The panels show examples of different cells: one with a relatively narrow distribution and one with a broader distribution, as indicated by the parameter \(\sigma\). Different numbers of bins (30 and 100) showed no difference in resulting parameters (figure \ref{sigma_bins}).
\notedh{Show more than two plots?}} \label{sigmafits_example} \end{figure} Figure \ref{fr_sigma} shows that there is only a very weak correlation between the firing rate of a cell and its noisiness; the two appear mostly independent of each other. This matches the observations from the analysis of the simulated data. \begin{figure} \includegraphics[width=0.45\linewidth]{img/sigma/0_300/scatter_and_fits_sigma_firing_rate_contrast.pdf} \includegraphics[width=0.45\linewidth]{img/sigma/0_300/scatter_and_fits_cv_firing_rate_contrast.pdf} \caption{Relationship between firing rate and $\sigma$ and between firing rate and CV, respectively. Noisier cells (as measured by \sig) might overall be cells that fire less frequently, but the relationship is very weak, if present at all. The color of the markers indicates the contrast, a measure for the strength of the signal. For the values used in the experiments, there doesn't appear to be much of a correlation between firing rate and contrast (figure \ref{contrast_firing_rate}).} \label{fr_sigma} \end{figure}
%TODO insert plot with sigma x-axis and delta_cf on y-axis here; also, plot with sigma as function of firing rate, also absolute cf for different population size as function of sigma.
\begin{figure} \includegraphics[width=0.8\linewidth]{img/sigma/0_300/averaged_4parts.pdf} \caption{Coding fraction as a function of population size for the recorded trials; some neurons provided multiple trials. The trials have been grouped in ascending order of $\sigma$. Plotted are the means and (shaded) the standard deviations of the quartiles. The curves look similar to those seen previously in the simulations (figure \ref{cf_limit}): the cells which are less noisy (orange and blue) start off with a larger coding fraction at a population size of 1 than the noisier cells (green and red).
The least noisy cells (blue) show only a very slow increase in coding fraction with increasing population size, and the noisier cells show a higher coding fraction than the less noisy cells for larger populations.} \label{ephys_sigma} \end{figure}
% TODO insert plot with sigma x-axis and delta_cf on y-axis here; also, plot with sigma as function of firing rate, also absolute cf for different population size as function of sigma.
When we group neurons by their noise and plot the coding fraction as a function of population size, averaged over the groups, we see results similar to those for the simulations (figure \ref{ephys_sigma}, compare to figure \ref{cf_limit}): cells which are not very noisy (small $\sigma$, blue) on average start with a larger coding fraction than noisier cells (compare also figure \ref{c1_by_sigma}). While they show an increasing coding fraction for larger populations, the increase is small, less than a factor of 2. The cells in the second quartile interval show on average only a slightly lower coding fraction for a population size of 1 than the cells in the first quartile interval. However, their rise is much faster, and somewhere between 4 and 8 cells they begin to show a higher coding fraction than the cells in the first quartile. Cells in the third quartile interval have, for a single neuron, on average a coding fraction half as large as that of the cells in the first quartile. With increasing population size the growth in coding fraction becomes steeper, and between 32 and 64 cells their average coding fraction passes that of the cells in the first quartile interval. The noisiest cells, which make up the fourth quartile interval (largest $\sigma$, red), start with the lowest average coding fraction for small populations. For a single cell, the coding fraction is on average only about a quarter of that of the cells in the first quartile interval.
The increase of coding fraction with increasing population size is quite slow at first, an effect we have also seen for very noisy cells in our simulations (see figure \ref{cf_limit} b)). For larger population sizes the coding fraction increase becomes larger. These results do not depend qualitatively on the choice of splitting the cells into quartiles; see the appendix (figures \ref{3_groups_cf_vs_pop} and \ref{5_groups_cf_vs_pop}) for similar results with the cells split into tertiles and quintiles. The curves from which the averages were created can be seen in figure \ref{2_by_2_overview}. The curves in the top left, which make up the blue curve in figure \ref{ephys_sigma}, are mostly very flat. On the other hand, most of the curves in the bottom right that make up the red curve in figure \ref{ephys_sigma} start very low and bend to the left, showing that the coding fraction increase is larger for the larger population sizes observed here. \begin{figure} \includegraphics[width=0.8\linewidth]{img/sigma/0_300/2_by_2_overview.pdf} \caption{Individual plots of the cells used in figure \ref{ephys_sigma}. Shown is the coding fraction as a function of population size in the range of 1 to 64 cells. The top left represents the cells in the first quartile (low $\sigma$, corresponding to little noise). The curves start relatively high, but flatten out soon. Note that there is one outlier curve at the bottom. The top right represents the second quartile: curves start a bit lower, but increase more. Some curves can be seen that begin to flatten. The bottom left shows the curves in the third quartile: they start lower than the curves in the previous quartiles. Very few of them show signs of flattening, and several seem to be increasing super-linearly. Also note the darker color of the lines, indicating that there are no cells here with high firing rates. The bottom right shows the noisiest cells with the largest $\sigma$.
They start closest to 0 and all of them are still increasing by the time the population reaches 64 neurons.} \label{2_by_2_overview} \end{figure} Figure \ref{coding_fraction_n_1} shows the link between noisiness and coding fraction very clearly. There is a strong correlation between the coding fraction calculated from the response of a single neuron and the neuron's noisiness. This intuitively makes sense and matches what we observed in the simulations, because a single cell simply becomes less reliable through additional noise. The advantage of SSR only comes into play at larger population sizes. There is a weaker correlation between the coding fraction and the cell's firing rate: an increase in firing rate increases the single-cell coding fraction. \begin{figure} \includegraphics[width=0.45\linewidth]{img/sigma/0_300/scatter_and_fits_sigma_coding_fractions_firing_rate.pdf} \includegraphics[width=0.45\linewidth]{img/sigma/0_300/scatter_and_fits_firing_rate_coding_fractions_sigma.pdf} \caption{Coding fraction for a single cell as a function of $\sigma$ (left) and firing rate (right). Cells for which $\sigma$ is large, i.e. noisier cells, have a lower single-cell coding fraction than cells with a smaller $\sigma$. The relationship with firing rate is much weaker; cells with a higher firing rate tend to have a larger single-cell coding fraction, but the spread is much larger. \notedh{Do it also without the outlier?}} \label{coding_fraction_n_1} \end{figure} We can further quantify the effect of SSR on the encoding by studying the difference in coding fraction between populations of different sizes. There are two ways to do this. The first is to take the coding fraction at a large population size (here: 64 neurons) and divide it by the coding fraction for a single neuron.
It is important to note that a large gain does not necessarily mean good performance: a neuron that starts with a coding fraction of 0.01 for a population size of 1 could have a gain of 10. It would still perform worse for a population of 64 neurons than a cell that shows a coding fraction of 0.11 at N=1. The alternative to taking the ratio is to take the difference between the two coding fraction values for a large population and a single neuron. However, this might not be ideal for cells which need large population sizes to perform well. The coding fraction increase from 1 to 64 neurons might then look small, even though the cells actually fit our model very well. Examples of neurons like this can be seen in figure \ref{2_by_2_overview}: in the bottom right panel the bottom two lines show only a small increase in coding fraction, but both lines appear to become steeper with rising population size, so it is not unthinkable that they would rise much further for very large populations. It is a limitation of the current experiments that we can only record a finite number of trials from each neuron.\notedh{Discussion??} The results we show include both the coding fraction ratio and the coding fraction difference at the different population sizes. The result (figure \ref{increases_broad}) is that $\sigma$ is a good predictor of the ratio between the coding fraction at 64 cells and the coding fraction of a single cell: with increased noisiness (an increase in \sig) comes a larger ratio. This is exactly what we expected, due to the smaller coding fraction at small populations and the effects of SSR. The firing rate of the cells has only a small effect on the ratio, if any. As for the difference between the coding fraction of a single neuron and that of a population, we see no correlation with either $\sigma$ or the firing rate.
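The contrast between the two measures can be made concrete with the numbers from the example above; the value of 0.15 for the less noisy cell at N=64 is a hypothetical illustration, not a measured value:

```python
# Hypothetical coding fractions at population sizes N=1 and N=64.
# The 0.01 / 0.10 / 0.11 values are the ones from the example in the text;
# 0.15 for the quiet cell at N=64 is an assumed placeholder.
c1_noisy, c64_noisy = 0.01, 0.10   # noisy cell: tenfold gain, tiny difference
c1_quiet, c64_quiet = 0.11, 0.15   # less noisy cell: small gain

ratio_noisy, ratio_quiet = c64_noisy / c1_noisy, c64_quiet / c1_quiet
diff_noisy, diff_quiet = c64_noisy - c1_noisy, c64_quiet - c1_quiet

# The noisy cell shows by far the larger relative gain ...
assert ratio_noisy > ratio_quiet
# ... yet still performs worse in absolute terms at N = 64:
assert c64_noisy < c64_quiet
```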
As seen in figure \ref{2_by_2_overview}, some of the cells only begin to take off in coding fraction near that population size, so the absolute difference is still quite small at this point. If we had population sizes larger than 64, the regression would be more meaningful: the less noisy cells will have similar values of $c_{64}$ and e.g. $c_{512}$, but for the noisier cells there can be a huge difference.
%figures created with result_fits.py
\begin{figure} %\includegraphics[width=0.45\linewidth]{img/ordnung/sigma_popsize_curves_0to300}
\centering %\includegraphics[width=0.45\linewidth]{img/sigma/cf_N_ex_lines}
\includegraphics[width=0.45\linewidth]{img/sigma/0_300/scatter_and_fits_sigma_quot_firing_rate}%
\includegraphics[width=0.45\linewidth]{img/sigma/0_300/scatter_and_fits_firing_rate_quot_sigma}%
\includegraphics[width=0.45\linewidth]{img/sigma/0_300/scatter_and_fits_sigma_diff_firing_rate}%
\includegraphics[width=0.45\linewidth]{img/sigma/0_300/scatter_and_fits_firing_rate_diff_sigma}%
\caption{Top: the relative increase in coding fraction between population sizes 64 and 1. Note that the y-axis scales logarithmically. Left: as a function of $\sigma$. The red curve shows a regression between $\sigma$ and $\log(c_{64}/c_{1})$. There is a strong relationship between the noisiness and the increase. Noisier cells (larger $\sigma$) generally have lower coding fractions for a single neuron, so they have a bigger potential for gain. Right: as a function of cell firing rate. There is very little correlation between the firing rate and the increase in coding fraction. Bottom: using the difference in coding fraction instead of the quotient makes the relationship between the increase in coding fraction and the two parameters $\sigma$ and firing rate disappear.
This might be different for larger population sizes, because the coding fraction might have saturated for the less noisy cells, but might still increase with population size for the noisier cells.} \label{increases_broad} \end{figure} \subsubsection{Narrowband} Qualitatively we see very similar results when instead of the broadband signal we use the narrowband signal with a frequency cutoff of 50Hz (figure \ref{overview_experiment_results_narrow}). Again the cells in the first quartile interval show on average only a very slightly increasing coding fraction with increasing population size. The coding fraction for a population size of one on average decreases for the higher quartile intervals. The separate coding fraction curves also show the typical flatness for the first quartile interval. The fourth quartile interval in particular contains several curves that are only just beginning to increase in coding fraction at a population size of 64 neurons. There are some differences to the results of the broadband signal experiments. The first is that coding fractions in general are larger: for a single-cell population, the coding fraction was never larger than 0.25 for the observed cells with the broadband signals, whereas with the narrowband signal most cells show a coding fraction larger than 0.25 even for a single-cell population. We also find a correlation between the firing rate and \sig when we use the narrowband signal as input (figure \ref{sigma_vs_firing_rate_for_narrow} a). Conversely, the curves are in general flatter for the signal with narrowed bandwidth. This can be seen in the figures showing coding fraction as a function of population size (\ref{2_by_2_overview} and \ref{overview_experiment_results_narrow} B-E). The numbers support the visual impression: previously (broadband signal) most cells showed a difference of between 0.1 and about 0.4 for the increase in coding fraction from a single cell to a population of 64 cells.
With the narrowband signal, most cells show an increase of less than 0.2, most even less than 0.1. Only a few cells show the same increase we commonly saw for the broadband signal. Because for the narrowband signal many cells already show a large coding fraction at the N=1 population size, the ratio between $c_{64}$ and $c_1$ is mostly between 1 and 2. Only the cells with a lower firing rate show a substantially larger ratio than that (figure \ref{increases_narrow} b). Using a linear regression over the whole range, as we have done before, might not be the ideal way to handle the distribution of points. Analysing the data as we did shows a correlation between the firing rate and the ratio $\frac{c_{64}}{c_1}$, even though the linear regression is clearly not the best model to capture it.\notedh{Do the analysis again only for < 200Hz (or other number?) where it looks like it could be linear?} However, the regression works much better again if the ratio $\frac{c_{64}}{c_1}$ is compared against the noise strength \sig (figure \ref{increases_narrow} a). The correlation is lower than it was for the broadband signal (0.60 there vs 0.49 for the narrowband signal). With the narrowband signal we now also see a correlation with the difference of $c_{64}$ and $c_1$, which we did not for the broadband signal. \begin{figure} \includegraphics[width=0.8\linewidth]{img/sigma/narrow_0_50/averaged_4parts.pdf} \includegraphics[width=0.8\linewidth]{img/sigma/narrow_0_50/2_by_2_overview.pdf} \caption{Equivalent plots to figures \ref{ephys_sigma} and \ref{2_by_2_overview}, but for a narrowband signal with a cutoff frequency of 50Hz. Top: curves for the least noisy cells (blue and orange) are very flat, meaning the coding fraction does not increase much with increasing population size. Cells which are noisier (green) start with a lower coding fraction for a single cell, but with increasing population size catch up to the less noisy cells.
Near the maximum population size considered here, the curve flattens. The noisiest cells (red) start off with the smallest coding fraction for a single cell. Their coding fraction keeps increasing throughout the entire range of population sizes considered here. Note the large spread (shaded area) around the mean, indicating that the cells in this group behave very differently from each other. This is confirmed if we look at the curves of the individual cells (bottom). The bottom right again shows the curves for the noisiest cells; we find cells that have relatively flat curves throughout, cells where the coding fraction increases and then flattens off, and cells where the coding fraction is only beginning to increase around the largest population sizes considered here. Even though the general trend is the same, there are some differences compared to the broadband signal: even the noisier cells appear not to profit as much from an increase in population size as before. \todo{Relate in running text to CMS (low frequency -> small populations!!!)}} \label{overview_experiment_results_narrow} \end{figure} \begin{figure} \includegraphics[width=0.45\linewidth]{img/sigma/narrow_0_50/scatter_and_fits_sigma_coding_fractions_firing_rate.pdf} \includegraphics[width=0.45\linewidth]{img/sigma/narrow_0_50/scatter_and_fits_firing_rate_coding_fractions_sigma.pdf} \caption{Coding fraction for a single cell as a function of $\sigma$ (left) and firing rate (right). The signal used was a 0-50Hz narrowband signal. Similar to what we have seen for the broadband signal (figure \ref{coding_fraction_n_1}), cells for which $\sigma$ is large, i.e. noisier cells, have a lower single-cell coding fraction than cells with a smaller $\sigma$. The correlation appears to be a bit weaker, though. The reverse is true for the single-cell coding fraction as a function of firing rate: here, the correlation is stronger than it was for the broadband signal; it is still weaker than the correlation with the noise.
Notably, there are a few cells with rather low firing rates for which the single-cell coding fraction is very close to 0. This was not the case for any of the other input signals we used, neither broadband nor higher-frequency narrowband.} \label{coding_fraction_n_1_narrow_0_50} \end{figure} \begin{figure} \includegraphics[width=0.45\linewidth]{img/sigma/narrow_0_50/scatter_and_fits_sigma_firing_rate_contrast} \includegraphics[width=0.45\linewidth]{img/sigma/narrow_250_300/scatter_and_fits_sigma_firing_rate_contrast} \caption{The correlation between noise and firing rate is stronger for narrowband signals than it was for the broadband signal (compare to figure \ref{fr_sigma}). Left: cell firing rate in the presence of a 0-50Hz input signal and \sig for the cells which have at least 64 trials recorded with that signal. The red line indicates a fitted linear regression. The color of the markers indicates the contrast with which the signal was applied. The results indicate that cells that fire slowly in response to narrowband inputs tend to be noisier cells. Right: same, but for an input signal with a frequency range of 250-300Hz.} \label{sigma_vs_firing_rate_for_narrow} \end{figure}
%figures created with result_fits.py
\begin{figure} \centering \includegraphics[width=0.45\linewidth]{img/sigma/narrow_0_50/scatter_and_fits_sigma_quot_firing_rate}%
\includegraphics[width=0.45\linewidth]{img/sigma/narrow_0_50/scatter_and_fits_firing_rate_quot_contrast}%
\includegraphics[width=0.45\linewidth]{img/sigma/narrow_0_50/scatter_and_fits_sigma_diff_firing_rate}%
\includegraphics[width=0.45\linewidth]{img/sigma/narrow_0_50/scatter_and_fits_firing_rate_diff_contrast}%
\caption{Narrowband signal with 0Hz-50Hz. Top: the relative increase in coding fraction between population sizes 64 and 1. Note that the y-axis scales logarithmically. Left: as a function of $\sigma$. The red curve shows a regression between $\sigma$ and $\log(c_{64}/c_{1})$. Right: as a function of cell firing rate.
The linear fit over the whole range is probably not very good here. It appears that increasing the population size has less of an impact in the case of this relatively slow signal. For firing rates over 150Hz the increase is very small, almost exclusively below 2. The coding fraction $c_1$ is close to 0.3 or above for most of the cells with those firing rates \notedh{Plot?%scatter_and_fits_firing_rate_coding_fractions_sigma
}. In contrast, $c_1$ for the broadband signal was always lower than 0.25. If the base coding fraction is already large, there is less potential for an increase in the quotient. Bottom: using the difference in coding fraction instead of the quotient makes the relationship between the increase in coding fraction and the two parameters $\sigma$ and firing rate disappear. This might be different for larger population sizes.} \label{increases_narrow} \end{figure} We used other narrowband signals with a frequency width of 50Hz as well. For those, the power of the signal was not in the 0-50Hz spectrum but in higher frequencies: 50-100Hz, 150-200Hz, 250-300Hz and 350-400Hz. In general, those results are comparable to the results we get with the 0-50Hz input signal. For example, we again see a correlation between the firing rate of the cells and their noisiness \sig (figure \ref{sigma_vs_firing_rate_for_narrow} b for the 250-300Hz signal). Taking the 250Hz-300Hz input signal as an example for the higher-frequency signals, the positive correlations between \sig and the increase in coding fraction for increasing population size are quite clear, for both the ratio and the difference (figure \ref{increases_narrow_high}). We also now see a sharp decline in the improvement with increasing firing rate, i.e. cells that fire more slowly have a much larger coding fraction increase than cells that fire more quickly. The correlation between the firing rate and the increase appears to be stronger than the correlation between \sig and the increase.
Detailed results for all frequency ranges are shown in the appendix (figures \ref{sigma_narrow_50_100} to \ref{sigma_narrow_350_400}).
%figures created with result_fits.py
\begin{figure} \centering \includegraphics[width=0.45\linewidth]{img/sigma/narrow_250_300/scatter_and_fits_sigma_coding_fractions_firing_rate.pdf} \includegraphics[width=0.45\linewidth]{img/sigma/narrow_250_300/scatter_and_fits_firing_rate_coding_fractions_sigma.pdf} \centering \includegraphics[width=0.45\linewidth]{img/sigma/narrow_250_300/scatter_and_fits_sigma_quot_firing_rate}%
\includegraphics[width=0.45\linewidth]{img/sigma/narrow_250_300/scatter_and_fits_firing_rate_quot_contrast}%
\centering \includegraphics[width=0.45\linewidth]{img/sigma/narrow_250_300/scatter_and_fits_sigma_diff_firing_rate}%
\includegraphics[width=0.45\linewidth]{img/sigma/narrow_250_300/scatter_and_fits_firing_rate_diff_contrast}%
\caption{Same graphs as in figure \ref{increases_narrow}, but with a narrowband signal with 250Hz-300Hz. Top row: coding fraction for a single cell as a function of $\sigma$ (left) and firing rate (right). Points show individual trials and the red line shows a linear regression between the points. Middle row: the relative increase in coding fraction between population sizes 64 and 1. Note that the y-axis scales logarithmically. Left: as a function of $\sigma$. The red curve shows a regression between $\sigma$ and $\log(c_{64}/c_{1})$. Right: as a function of cell firing rate. Both the cell firing rate and $\sigma$ determine the increase, even though firing rate and $\sigma$ themselves are almost independent of each other (figure \ref{fr_sigma}). Bottom: using the difference in coding fraction instead of the quotient makes the relationship between the increase in coding fraction and the two parameters $\sigma$ and firing rate disappear.
This might be different for larger population sizes.} \label{increases_narrow_high} \end{figure} \begin{figure} \centering \includegraphics[width=0.45\linewidth]{img/sigma/fit_results_overviews/fit_results_sigma_narrow_single_cell_coding_fraction.pdf} \includegraphics[width=0.45\linewidth]{img/sigma/fit_results_overviews/fit_results_firing_rate_narrow_single_cell_coding_fraction.pdf} \centering \includegraphics[width=0.45\linewidth]{img/sigma/fit_results_overviews/fit_results_sigma_narrow_quot.pdf} \includegraphics[width=0.45\linewidth]{img/sigma/fit_results_overviews/fit_results_firing_rate_narrow_quot.pdf} \includegraphics[width=0.45\linewidth]{img/sigma/fit_results_overviews/fit_results_sigma_narrow_diff.pdf} \includegraphics[width=0.45\linewidth]{img/sigma/fit_results_overviews/fit_results_firing_rate_narrow_diff.pdf} \caption{Overview of the $r^2$-values and p-values of the linear regressions for the increase in coding fraction from a population size of 1 to a population size of 64. Results are shown for each of the narrowband input signals used in the experiments. The left column shows results using \sig as the parameter, the right column shows results using the cell firing rate. We show the $r^2$-value and p-value for linear regressions between the parameter and one measurement. Those measurements are, from top to bottom: coding fraction for a population of a single cell ($c_1$); logarithm of the ratio between the coding fraction for a population of 64 cells ($c_{64}$) and $c_1$; and the difference between $c_{64}$ and $c_1$.
For the frequency range of 150-200Hz only four data points are present (compare figure \ref{sigma_narrow_150_200}); for the range 350-400Hz only five trials were available (figure \ref{sigma_narrow_350_400}) and for 50-100Hz only six trials were available (compare figure \ref{sigma_narrow_50_100}).} \label{overview_fits_narrow} \end{figure} Figure \ref{overview_fits_narrow} compiles the results for the narrowband signals in all the frequency ranges. For three of the ranges (50-100Hz, 150-200Hz and 350-400Hz) there were few trials available for analysis (six, four and five, respectively). Therefore, those results are less reliable, which is also reflected in the respective p-values in the figure. For the cases for which we have good data, the same trend we saw for the broadband signal, the relationship between the coding fraction increase through increasing population size and \sig, still exists. Unlike for the broadband signal, we can also observe a correlation between the coding fraction increase and the firing rate of the cell. \todo{Bring this up in the discussion!} \notedh{link to the appropriate chapter from theory results} In addition to the ``pure'' narrowband signals, we also analysed the coding fraction change for smaller parts of the spectrum in the experiments using the 0-300Hz broadband signal. To relate the results to the results from the narrowband experiments, we split the frequency range into six distinct 50Hz ranges (0-50Hz, 50-100Hz, ..., 250-300Hz). The results we see are very similar to what we saw for the broadband signal (figure \ref{overview_fits_broad}, some details in figure \ref{increases_narow_in_broad}). Considering the ratio between the coding fraction for a population size of 64 neurons ($c_{64}$) and the coding fraction for a single cell ($c_1$), we find that the correlation between the ratio and the noisiness \sig is best in the analysis using the entire signal range.
But we also find correlations for all of the 50Hz ranges. In general, these correlations are slightly weaker than the result for the entire range (0.60 for the whole range, 0.42-0.58 for the smaller ranges). With regard to the difference between the coding fractions for the different population sizes, for the whole range we did not see any correlation. Analysing the smaller frequency intervals, we now find a correlation for the two lowest intervals (0-50Hz, $r^2=0.37$ and 50-100Hz, $r^2=0.44$), but not for any of the other ranges. Previously, when we investigated the correlation between the two measures of population size effects and the firing rate, we did not see any correlation for the broadband signal. This was very different for the narrowband signals, where in many cases we could show some correlation. When we repeat the analysis for the frequency ranges inside the broadband signal, we find the same as for the entire broadband signal: no correlations that are distinguishable from noise. In almost every analysis here, our results for the broadband signal are the same or at least very similar, independent of whether we apply the analysis to the whole range or just a part of it. In particular, there was a good correlation between the noisiness of the cells measured in \sig and the improvement. However, for the broadband signal the neural firing rate allows no prediction of the improvement through increased population size. This is in contrast to the narrowband signals, for which firing rate and noisiness (\sig) were very similar in their correlations to the increase in coding fraction from increased population size. For the narrowband signals we also saw a stronger influence of the firing rate on the coding fraction of a single neuron. 
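The regression analysis described above can be sketched as follows. This is a minimal illustration with synthetic data, not the actual analysis script; all variable names and the generated values are hypothetical, and only the two improvement measures (log-ratio and difference of $c_{64}$ and $c_1$) and the regression against \sig follow the text.

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)
n_cells = 30

# Hypothetical per-cell values: noisiness sigma, coding fraction of a
# single neuron (c1) and of a population of 64 neurons (c64).
sigma = rng.uniform(0.1, 1.0, n_cells)
c1 = 0.6 - 0.4 * sigma + rng.normal(0.0, 0.05, n_cells)
c64 = 0.7 - 0.1 * sigma + rng.normal(0.0, 0.05, n_cells)

# The two measures of the population-size effect used in the text:
# the logarithm of the ratio, and the difference, between c64 and c1.
log_ratio = np.log(c64 / c1)
diff = c64 - c1

# Linear regression of each measure against sigma; the r^2- and
# p-values are what the overview figures report.
for name, y in [("log ratio", log_ratio), ("difference", diff)]:
    fit = linregress(sigma, y)
    print(f"{name}: r^2 = {fit.rvalue**2:.2f}, p = {fit.pvalue:.3g}")
```

The same loop, run once per frequency slice of the coding-fraction spectrum, would reproduce the per-range entries in the overview figures.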
\todo{Add reason for low $r^2$ for N=1 cf: It's all very flat and close to 0.} \notedh{what to conclude here?} \begin{figure} \centering \includegraphics[width=0.45\linewidth]{img/sigma/fit_results_overviews/fit_results_sigma_broad_single_cell_coding_fraction.pdf} \includegraphics[width=0.45\linewidth]{img/sigma/fit_results_overviews/fit_results_firing_rate_broad_single_cell_coding_fraction.pdf} \centering \includegraphics[width=0.45\linewidth]{img/sigma/fit_results_overviews/fit_results_sigma_broad_quot.pdf} \includegraphics[width=0.45\linewidth]{img/sigma/fit_results_overviews/fit_results_firing_rate_broad_quot.pdf} \includegraphics[width=0.45\linewidth]{img/sigma/fit_results_overviews/fit_results_sigma_broad_diff.pdf} \includegraphics[width=0.45\linewidth]{img/sigma/fit_results_overviews/fit_results_firing_rate_broad_diff.pdf} \caption{As figure \ref{overview_fits_narrow}, but for the broadband signal. Overview of the $r^2$- and p-values of the linear regressions for the increase in coding fraction from a population size of 1 to a population size of 64. The left column shows results using \sig as the parameter, the right column shows results using the cell firing rate. We show the $r^2$-value and p-value for linear regressions between the parameter and one measurement. Those measurements are, from top to bottom: coding fraction of a single cell ($c_1$); the logarithm of the ratio between the coding fraction of a population of 64 cells ($c_{64}$) and $c_1$; and the difference between $c_{64}$ and $c_1$. Results are shown first for the data analysed over the entire 0-300Hz range, matching the signal's cutoff frequency. We then also show results where we analysed the data in 50Hz slices. 
} \label{overview_fits_broad} \end{figure} %figures created with result_fits.py \begin{figure} \centering \includegraphics[width=0.45\linewidth]{img/sigma/0_50/scatter_and_fits_sigma_coding_fractions_firing_rate.pdf} \includegraphics[width=0.45\linewidth]{img/sigma/0_50/scatter_and_fits_firing_rate_coding_fractions_sigma.pdf} \includegraphics[width=0.45\linewidth]{img/sigma/0_50/scatter_and_fits_sigma_quot_firing_rate}% \includegraphics[width=0.45\linewidth]{img/sigma/0_50/scatter_and_fits_firing_rate_quot_contrast}% \includegraphics[width=0.45\linewidth]{img/sigma/0_50/scatter_and_fits_sigma_diff_firing_rate}% \includegraphics[width=0.45\linewidth]{img/sigma/0_50/scatter_and_fits_firing_rate_diff_contrast}% \caption{Broadband signal with a cutoff frequency of 300Hz, but the coding fraction was calculated in the range 0-50Hz. Top: The relative increase in coding fraction for population sizes 64 and 1. Note that the y-axis scales logarithmically. Left: As a function of $\sigma$. Red curve shows a regression between $\sigma$ and $\log(c_{64}/c_{1})$. Right: As a function of cell firing rate. No correlation is observed. Bottom: Same as above, but using the difference in coding fraction instead of the logarithm of the ratio. The result shows a smaller correlation between the increase in coding fraction and $\sigma$. The firing rate still doesn't show a correlation to the increase. 
} \label{increases_narow_in_broad} \end{figure} \section*{Appendix} %compare_params_300.py auf oilbird \begin{figure} \includegraphics[width=0.30\linewidth]{img/sigma/parameter_assessment/bins100v300.pdf} \includegraphics[width=0.30\linewidth]{img/sigma/parameter_assessment/bins50v300.pdf} \includegraphics[width=0.30\linewidth]{img/sigma/parameter_assessment/bins30v300.pdf} \hspace{0.30\linewidth} \includegraphics[width=0.30\linewidth]{img/sigma/parameter_assessment/bins50v100.pdf} \includegraphics[width=0.30\linewidth]{img/sigma/parameter_assessment/bins30v100.pdf} \hspace{0.60\linewidth} \includegraphics[width=0.30\linewidth]{img/sigma/parameter_assessment/bins30v50.pdf} \caption{Comparison of different bin numbers for the calculation of $\sigma$. Values were in good agreement when comparing 50 bins and 100 bins. For 300 bins $\sigma$ was estimated to be smaller than for the other bin numbers, especially for $\sigma > 0.8$. For 30 bins a few estimates stuck close to $\sigma = 0$ when they did not for the other bin numbers. We chose to proceed with 50 bins.} \label{sigma_bins} \end{figure} %compare_params.py auf oilbird \begin{figure} \centering \includegraphics[width=0.45\linewidth]{img/sigma/parameter_assessment/gauss1v5_30.pdf} \includegraphics[width=0.45\linewidth]{img/sigma/parameter_assessment/gauss1v5_50.pdf} \includegraphics[width=0.45\linewidth]{img/sigma/parameter_assessment/gauss1v5_100.pdf} \includegraphics[width=0.45\linewidth]{img/sigma/parameter_assessment/gauss1v5_300.pdf} \caption{The width of the Gaussian we convolve with the spike trains from the experiments does not change the $\sigma$ we obtain from the calculation. We use this Gaussian convolution to determine the delay between the signal being emitted and the signal being acted upon by the cell. We repeated the analysis for all bin numbers under consideration and in each case the resulting $\sigma$ stays essentially the same. 
There are no systematic differences.} \label{sigma_gauss} \end{figure} \begin{figure} \centering \includegraphics[width=0.4\linewidth]{img/fish/diff_box.pdf} \includegraphics[width=0.4\linewidth]{img/fish/diff_box_narrow.pdf} \includegraphics[width=0.4\linewidth]{img/relative_coding_fractions_box.pdf} \notedh{needs figure 3.6 from yue and equivalent} \label{fish_result_summary_yue} \end{figure} \begin{figure} \includegraphics[width=0.49\linewidth]{img/fish/ratio_narrow.pdf} \includegraphics[width=0.49\linewidth]{img/fish/broad_ratio.pdf} \caption{Frequency dependence of the difference in coding fraction, $c_{64}-c_1$. Elsewhere we have used the ratio $c_{64}/c_1$ instead. \notedh{The x-axis labels don't make sense to me. Left is broad and right is narrow?}} \label{freq_delta_cf} \end{figure} \begin{figure} \centering \includegraphics[width=0.49\linewidth]{img/sigma/0_300/scatter_and_fits_contrast_firing_rate_contrast} \caption{Firing rate of the recorded cells as a function of signal contrast. The signal used was the broadband signal with 300Hz cutoff frequency. The red line shows a linear regression through the points. 
No correlation between firing rate and contrast is observed.} \label{contrast_firing_rate} \end{figure} %figures created with result_fits.py \begin{figure} \centering \includegraphics[width=0.49\linewidth]{img/sigma/50_100/2_by_2_overview.pdf} \includegraphics[width=0.49\linewidth]{img/sigma/50_100/averaged_4parts.pdf} \includegraphics[width=0.45\linewidth]{img/sigma/50_100/scatter_and_fits_sigma_coding_fractions_firing_rate.pdf} \includegraphics[width=0.45\linewidth]{img/sigma/50_100/scatter_and_fits_firing_rate_coding_fractions_sigma.pdf} \includegraphics[width=0.45\linewidth]{img/sigma/50_100/scatter_and_fits_sigma_quot_firing_rate}% \includegraphics[width=0.45\linewidth]{img/sigma/50_100/scatter_and_fits_firing_rate_quot_contrast}% \includegraphics[width=0.45\linewidth]{img/sigma/50_100/scatter_and_fits_sigma_diff_firing_rate}% \includegraphics[width=0.45\linewidth]{img/sigma/50_100/scatter_and_fits_firing_rate_diff_contrast}% \caption{Broadband signal with a cutoff frequency of 300Hz, but the coding fraction was calculated in the range 50-100Hz. Top: The relative increase in coding fraction for population sizes 64 and 1. Note that the y-axis scales logarithmically. Left: As a function of $\sigma$. Red curve shows a regression between $\sigma$ and $\log(c_{64}/c_{1})$. Right: As a function of cell firing rate. No correlation is observed. Bottom: Same as above, but using the difference in coding fraction instead of the logarithm of the ratio. The result shows a smaller correlation between the increase in coding fraction and $\sigma$. The firing rate still doesn't show a correlation to the increase. 
} \label{increases_narow_in_broad_50_100} \end{figure} \begin{figure} \centering \includegraphics[width=0.49\linewidth]{img/sigma/100_150/2_by_2_overview.pdf} \includegraphics[width=0.49\linewidth]{img/sigma/100_150/averaged_4parts.pdf} \includegraphics[width=0.45\linewidth]{img/sigma/100_150/scatter_and_fits_sigma_coding_fractions_firing_rate.pdf} \includegraphics[width=0.45\linewidth]{img/sigma/100_150/scatter_and_fits_firing_rate_coding_fractions_sigma.pdf} \includegraphics[width=0.45\linewidth]{img/sigma/100_150/scatter_and_fits_sigma_quot_firing_rate}% \includegraphics[width=0.45\linewidth]{img/sigma/100_150/scatter_and_fits_firing_rate_quot_contrast}% \includegraphics[width=0.45\linewidth]{img/sigma/100_150/scatter_and_fits_sigma_diff_firing_rate}% \includegraphics[width=0.45\linewidth]{img/sigma/100_150/scatter_and_fits_firing_rate_diff_contrast}% \caption{Broadband signal with a cutoff frequency of 300Hz, but the coding fraction was calculated in the range 100-150Hz. Top: The relative increase in coding fraction for population sizes 64 and 1. Note that the y-axis scales logarithmically. Left: As a function of $\sigma$. Red curve shows a regression between $\sigma$ and $\log(c_{64}/c_{1})$. Right: As a function of cell firing rate. No correlation is observed. Bottom: Same as above, but using the difference in coding fraction instead of the logarithm of the ratio. The result shows a similar correlation between the increase in coding fraction and $\sigma$. The firing rate still doesn't show a correlation to the increase. 
} \label{increases_narow_in_broad_100_150} \end{figure} \begin{figure} \centering \includegraphics[width=0.49\linewidth]{img/sigma/150_200/2_by_2_overview.pdf} \includegraphics[width=0.49\linewidth]{img/sigma/150_200/averaged_4parts.pdf} \includegraphics[width=0.45\linewidth]{img/sigma/150_200/scatter_and_fits_sigma_coding_fractions_firing_rate.pdf} \includegraphics[width=0.45\linewidth]{img/sigma/150_200/scatter_and_fits_firing_rate_coding_fractions_sigma.pdf} \includegraphics[width=0.45\linewidth]{img/sigma/150_200/scatter_and_fits_sigma_quot_firing_rate}% \includegraphics[width=0.45\linewidth]{img/sigma/150_200/scatter_and_fits_firing_rate_quot_contrast}% \includegraphics[width=0.45\linewidth]{img/sigma/150_200/scatter_and_fits_sigma_diff_firing_rate}% \includegraphics[width=0.45\linewidth]{img/sigma/150_200/scatter_and_fits_firing_rate_diff_contrast}% \caption{Broadband signal with a cutoff frequency of 300Hz, but the coding fraction was calculated in the range 150-200Hz. Top: The relative increase in coding fraction for population sizes 64 and 1. Note that the y-axis scales logarithmically. Left: As a function of $\sigma$. Red curve shows a regression between $\sigma$ and $\log(c_{64}/c_{1})$. Right: As a function of cell firing rate. No correlation is observed. Bottom: Same as above, but using the difference in coding fraction instead of the logarithm of the ratio. The result shows no correlation between the increase in coding fraction and $\sigma$. The firing rate still doesn't show a correlation to the increase. 
} \label{increases_narow_in_broad_150_200} \end{figure} \begin{figure} \centering \includegraphics[width=0.49\linewidth]{img/sigma/200_250/2_by_2_overview.pdf} \includegraphics[width=0.49\linewidth]{img/sigma/200_250/averaged_4parts.pdf} \includegraphics[width=0.45\linewidth]{img/sigma/200_250/scatter_and_fits_sigma_coding_fractions_firing_rate.pdf} \includegraphics[width=0.45\linewidth]{img/sigma/200_250/scatter_and_fits_firing_rate_coding_fractions_sigma.pdf} \includegraphics[width=0.45\linewidth]{img/sigma/200_250/scatter_and_fits_sigma_quot_firing_rate}% \includegraphics[width=0.45\linewidth]{img/sigma/200_250/scatter_and_fits_firing_rate_quot_contrast}% \includegraphics[width=0.45\linewidth]{img/sigma/200_250/scatter_and_fits_sigma_diff_firing_rate}% \includegraphics[width=0.45\linewidth]{img/sigma/200_250/scatter_and_fits_firing_rate_diff_contrast}% \caption{Broadband signal with a cutoff frequency of 300Hz, but the coding fraction was calculated in the range 200-250Hz. Top: The relative increase in coding fraction for population sizes 64 and 1. Note that the y-axis scales logarithmically. Left: As a function of $\sigma$. Red curve shows a regression between $\sigma$ and $\log(c_{64}/c_{1})$. Right: As a function of cell firing rate. No correlation is observed. Bottom: Same as above, but using the difference in coding fraction instead of the logarithm of the ratio. The result shows no correlation between the increase in coding fraction and $\sigma$. The firing rate still doesn't show a correlation to the increase. 
} \label{increases_narow_in_broad_200_250} \end{figure} \begin{figure} \centering \includegraphics[width=0.49\linewidth]{img/sigma/250_300/2_by_2_overview.pdf} \includegraphics[width=0.49\linewidth]{img/sigma/250_300/averaged_4parts.pdf} \includegraphics[width=0.45\linewidth]{img/sigma/250_300/scatter_and_fits_sigma_coding_fractions_firing_rate.pdf} \includegraphics[width=0.45\linewidth]{img/sigma/250_300/scatter_and_fits_firing_rate_coding_fractions_sigma.pdf} \includegraphics[width=0.45\linewidth]{img/sigma/250_300/scatter_and_fits_sigma_quot_firing_rate}% \includegraphics[width=0.45\linewidth]{img/sigma/250_300/scatter_and_fits_firing_rate_quot_contrast}% \includegraphics[width=0.45\linewidth]{img/sigma/250_300/scatter_and_fits_sigma_diff_firing_rate}% \includegraphics[width=0.45\linewidth]{img/sigma/250_300/scatter_and_fits_firing_rate_diff_contrast}% \caption{Broadband signal with a cutoff frequency of 300Hz, but the coding fraction was calculated in the range 250-300Hz. Top: The relative increase in coding fraction for population sizes 64 and 1. Note that the y-axis scales logarithmically. Left: As a function of $\sigma$. Red curve shows a regression between $\sigma$ and $\log(c_{64}/c_{1})$. Right: As a function of cell firing rate. No correlation is observed. Bottom: Same as above, but using the difference in coding fraction instead of the logarithm of the ratio. The result shows no correlation between the increase in coding fraction and $\sigma$. The firing rate still doesn't show a correlation to the increase. 
} \label{increases_narow_in_broad_250_300} \end{figure} \begin{figure} \centering \includegraphics[width=0.49\linewidth]{img/sigma/narrow_50_100/2_by_2_overview.pdf} \includegraphics[width=0.49\linewidth]{img/sigma/narrow_50_100/averaged_4parts.pdf} \includegraphics[width=0.45\linewidth]{img/sigma/narrow_50_100/scatter_and_fits_sigma_coding_fractions_firing_rate.pdf} \includegraphics[width=0.45\linewidth]{img/sigma/narrow_50_100/scatter_and_fits_firing_rate_coding_fractions_sigma.pdf} \includegraphics[width=0.49\linewidth]{img/sigma/narrow_50_100/scatter_and_fits_sigma_quot_firing_rate.pdf} \includegraphics[width=0.49\linewidth]{img/sigma/narrow_50_100/scatter_and_fits_sigma_diff_firing_rate.pdf} \includegraphics[width=0.49\linewidth]{img/sigma/narrow_50_100/scatter_and_fits_firing_rate_quot_sigma.pdf} \includegraphics[width=0.49\linewidth]{img/sigma/narrow_50_100/scatter_and_fits_firing_rate_diff_sigma.pdf} \caption{Experimental data for a signal with a lower cutoff frequency of 50Hz and an upper cutoff of 100Hz. A: Coding fraction as a function of population size. Cells are grouped in quartiles according to $\sigma$. B: Coding fraction as a function of population size. Each curve shows an average over the cells in one panel of A. Shaded area shows the standard deviation. C: Increase in coding fraction for N=1 to N=64 as a function of $\sigma$. The y-axis shows the quotient of coding fraction at N=64 divided by coding fraction at N=1. D: Same as C, only with the difference instead of the quotient. 
} \label{sigma_narrow_50_100} \end{figure} \begin{figure} \centering \includegraphics[width=0.49\linewidth]{img/sigma/narrow_150_200/2_by_2_overview.pdf} \includegraphics[width=0.49\linewidth]{img/sigma/narrow_150_200/averaged_4parts.pdf} \includegraphics[width=0.45\linewidth]{img/sigma/narrow_150_200/scatter_and_fits_sigma_coding_fractions_firing_rate.pdf} \includegraphics[width=0.45\linewidth]{img/sigma/narrow_150_200/scatter_and_fits_firing_rate_coding_fractions_sigma.pdf} \includegraphics[width=0.49\linewidth]{img/sigma/narrow_150_200/scatter_and_fits_sigma_quot_firing_rate.pdf} \includegraphics[width=0.49\linewidth]{img/sigma/narrow_150_200/scatter_and_fits_sigma_diff_firing_rate.pdf} \includegraphics[width=0.49\linewidth]{img/sigma/narrow_150_200/scatter_and_fits_firing_rate_quot_sigma.pdf} \includegraphics[width=0.49\linewidth]{img/sigma/narrow_150_200/scatter_and_fits_firing_rate_diff_sigma.pdf} \caption{Experimental data for a signal with a lower cutoff frequency of 150Hz and an upper cutoff of 200Hz. A: Coding fraction as a function of population size. Cells are grouped in quartiles according to $\sigma$. B: Coding fraction as a function of population size. Each curve shows an average over the cells in one panel of A. Shaded area shows the standard deviation. C: Increase in coding fraction for N=1 to N=64 as a function of $\sigma$. The y-axis shows the quotient of coding fraction at N=64 divided by coding fraction at N=1. D: Same as C, only with the difference instead of the quotient. 
} \label{sigma_narrow_150_200} \end{figure} \begin{figure} \centering \includegraphics[width=0.49\linewidth]{img/sigma/narrow_250_300/2_by_2_overview.pdf} \includegraphics[width=0.49\linewidth]{img/sigma/narrow_250_300/averaged_4parts.pdf} \includegraphics[width=0.45\linewidth]{img/sigma/narrow_250_300/scatter_and_fits_sigma_coding_fractions_firing_rate.pdf} \includegraphics[width=0.45\linewidth]{img/sigma/narrow_250_300/scatter_and_fits_firing_rate_coding_fractions_sigma.pdf} \includegraphics[width=0.49\linewidth]{img/sigma/narrow_250_300/scatter_and_fits_sigma_quot_firing_rate.pdf} \includegraphics[width=0.49\linewidth]{img/sigma/narrow_250_300/scatter_and_fits_sigma_diff_firing_rate.pdf} \includegraphics[width=0.49\linewidth]{img/sigma/narrow_250_300/scatter_and_fits_firing_rate_quot_sigma.pdf} \includegraphics[width=0.49\linewidth]{img/sigma/narrow_250_300/scatter_and_fits_firing_rate_diff_sigma.pdf} \caption{Experimental data for a signal with a lower cutoff frequency of 250Hz and an upper cutoff of 300Hz. A: Coding fraction as a function of population size. Cells are grouped in quartiles according to $\sigma$. B: Coding fraction as a function of population size. Each curve shows an average over the cells in one panel of A. Shaded area shows the standard deviation. C: Increase in coding fraction for N=1 to N=64 as a function of $\sigma$. The y-axis shows the quotient of coding fraction at N=64 divided by coding fraction at N=1. D: Same as C, only with the difference instead of the quotient. 
} \label{sigma_narrow_250_300} \end{figure} \begin{figure} \centering \includegraphics[width=0.49\linewidth]{img/sigma/narrow_350_400/2_by_2_overview.pdf} \includegraphics[width=0.49\linewidth]{img/sigma/narrow_350_400/averaged_4parts.pdf} \includegraphics[width=0.45\linewidth]{img/sigma/narrow_350_400/scatter_and_fits_sigma_coding_fractions_firing_rate.pdf} \includegraphics[width=0.45\linewidth]{img/sigma/narrow_350_400/scatter_and_fits_firing_rate_coding_fractions_sigma.pdf} \includegraphics[width=0.49\linewidth]{img/sigma/narrow_350_400/scatter_and_fits_sigma_quot_firing_rate.pdf} \includegraphics[width=0.49\linewidth]{img/sigma/narrow_350_400/scatter_and_fits_sigma_diff_firing_rate.pdf} \includegraphics[width=0.49\linewidth]{img/sigma/narrow_350_400/scatter_and_fits_firing_rate_quot_sigma.pdf} \includegraphics[width=0.49\linewidth]{img/sigma/narrow_350_400/scatter_and_fits_firing_rate_diff_sigma.pdf} \caption{Experimental data for a signal with a lower cutoff frequency of 350Hz and an upper cutoff of 400Hz. A: Coding fraction as a function of population size. Cells are grouped in quartiles according to $\sigma$. B: Coding fraction as a function of population size. Each curve shows an average over the cells in one panel of A. Shaded area shows the standard deviation. C: Increase in coding fraction for N=1 to N=64 as a function of $\sigma$. The y-axis shows the quotient of coding fraction at N=64 divided by coding fraction at N=1. D: Same as C, only with the difference instead of the quotient. } \label{sigma_narrow_350_400} \end{figure} %\section{Literature} \clearpage \bibliography{citations.bib} \end{document}