commit 47fe25c02d18397274195dcaf6a834f03f2a3479 Author: Dennis Huben Date: Thu Aug 1 11:56:40 2024 +0200 Initial commit/First pass over introduction diff --git a/.gitignore b/.gitignore new file mode 100644 index 0000000..7876e9a --- /dev/null +++ b/.gitignore @@ -0,0 +1,7 @@ +*.log +*.bbl +*.blg +*.aux +*.dvi +*.toc +*swp diff --git a/bands.tex b/bands.tex new file mode 100644 index 0000000..78bb4a4 --- /dev/null +++ b/bands.tex @@ -0,0 +1,126 @@ +
+\subsection*{Narrowband stimuli}
+Using the \(f_{cutoff} = 200 \hertz\usk\) signal, we repeated the analysis for only a part of the spectrum. We did so for two ``low frequency'' (0--8Hz, 0--50Hz) and two ``high frequency'' (192--200Hz, 150--200Hz) intervals. We then compared the results to the results we get from narrowband stimuli with power only in those frequency bands.
+To keep the power of the signal inside the two intervals the same as in the broadband stimulus, the amplitude of the narrowband signals had to be smaller than that of the broadband signal. For the 8Hz intervals, the amplitude (i.e.\ standard deviation) of the signal was 0.2mV, a fifth of the amplitude of the broadband signal. Because signal power is proportional to the square of the amplitude, this is appropriate for a stimulus whose bandwidth is 25 times smaller. Similarly, for the 50Hz intervals we used a 0.5mV amplitude, half that of the broadband stimulus.
+As the square of the amplitude is equal to the integral over the frequency spectrum, for a signal with a quarter of the bandwidth we need to halve the amplitude to obtain the same power in the interval defined by the narrowband signals.
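The amplitude scaling used above follows directly from the square law; a minimal numeric sketch (function and variable names are hypothetical, assuming flat power spectra up to the cutoff):

```python
import math

def narrowband_amplitude(broad_amplitude, broad_bandwidth, narrow_bandwidth):
    """Amplitude (standard deviation) a narrowband signal needs so that its
    total power matches the power a flat broadband signal of the given
    amplitude carries inside the narrow interval: power is proportional to
    the squared amplitude and spreads evenly over the bandwidth."""
    return broad_amplitude * math.sqrt(narrow_bandwidth / broad_bandwidth)

# 8 Hz interval cut from the 200 Hz, 1.0 mV broadband signal:
print(narrowband_amplitude(1.0, 200.0, 8.0))   # a fifth: ~0.2 mV
# 50 Hz interval, a quarter of the bandwidth, hence half the amplitude:
print(narrowband_amplitude(1.0, 200.0, 50.0))  # ~0.5 mV
```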
+
+\subsection*{Smaller frequency intervals in broadband signals}
+
+\begin{figure}
+\includegraphics[width=0.45\linewidth]{img/small_in_broad_spectrum}
+\includegraphics[width=0.45\linewidth]{img/power_spectrum_0_50}
+ \includegraphics[width=0.49\linewidth]{{img/broad_coherence_15.0_1.0}.pdf}
+ \includegraphics[width=0.49\linewidth]{{img/coherence_15.0_0.5_narrow_both}.pdf}
+ \includegraphics[width=0.49\linewidth]{{img/broad_coherence_10.5_1.0_200}.pdf}
+ \includegraphics[width=0.49\linewidth]{{img/coherence_10.5_0.5_narrow_both}.pdf}
+ \caption{Coherence for broad and narrow frequency range inputs. a) Broad spectrum.
+ At the frequency of the firing rate (91Hz, marked by the black bar) and its first
+ harmonic (182Hz) the coherence breaks down. For the weak noise level (blue),
+ population sizes n=4 and n=4096 show indistinguishable coherence.
+ For a small population size, coherence is higher for weak noise (blue) than
+ for strong noise (green) in the frequency range up to about 50\hertz; for higher
+ frequencies coherence is unchanged. For the larger population size together with the
+ stronger noise there is a large increase in coherence at all frequencies.
+ b) Coherence for two narrowband inputs with different frequency ranges.
+ Low frequency range: coherence for the
+ slow parts of the signal is close to 1 for weak noise. SSR works mostly on the
+ higher frequencies (here above 40\hertz). High frequency range: at 182Hz (the first harmonic
+ of the firing frequency) there is a very sharp decrease in coherence,
+ especially for the weak noise condition (blue). Increasing the noise makes the drop
+ less clear. For weak noise (blue) there is another breakdown at 182-(200-182)Hz.
+ Stronger noise seems to make this sharp drop disappear.
Again, the effect of SSR
+ is most noticeable for the higher frequencies in the interval.}
+ \label{fig:coherence_narrow_15.0}
+\end{figure}
+
+We want to know how well encoding works for different frequency intervals in the signal.
+When we take a narrower frequency interval out of a broadband signal, the other
+frequencies in the signal serve as common noise to the neurons encoding the signal.
+In many cases we only care about a certain frequency band in a signal of much wider bandwidth.
+In figure \ref{fig:coherence_narrow_15.0} A we can see that SSR affects some
+frequencies inside the signal very differently from others. In blue we see the
+case of very weak noise (\(10^{-6} \milli\volt\squared\per\hertz\)). Increasing the
+population size from 4 neurons to 2048 neurons has practically no effect. Around
+the average firing rate of the neurons, coherence becomes almost zero. When
+we keep the population size at 4 neurons but
+add more noise to the neurons (green, \(2\cdot10^{-3} \milli\volt\squared\per\hertz\)),
+encoding of the low frequencies (up to about 50\hertz) becomes worse, while
+encoding of the higher frequencies stays unchanged. When we increase the population
+size to 2048 neurons we get almost perfect encoding for frequencies up to 50\hertz.
+Coherence is still reduced around the average firing rate of the neurons, but at
+a much higher level than before. For higher frequencies, coherence increases again.
+In summary, the high frequency bands inside the broadband stimulus experience a
+much greater increase in encoding quality than the low frequency bands,
+which were already encoded quite well.
+
+
+\begin{figure}
+\includegraphics[width=0.45\linewidth]{img/broadband_optimum_newcolor.pdf}
+\includegraphics[width=0.45\linewidth]{img/smallband_optimum_newcolor.pdf}
+\centering
+\includegraphics[width=0.9\linewidth]{img/max_cf_smallbroad.pdf}
+\caption{
+ A: Input signal spectrum of a broadband signal.
The colored areas mark the frequency ranges considered here.
+ B: Two narrowband signals (red and blue). The broadband signal from A (grey) is shown again for comparison.
+ C and D: Optimal amount of noise for different numbers of neurons. The dashed lines show where coding fraction is still at least 95\% of the maximum. The peaks are much wider for the narrowband signals and encompass the entire width of the high-frequency interval peak.
+Optimal noise values for a fixed number of neurons are always higher for the broadband signal than for the narrowband signals.
+In the broadband case, the optimal amount of noise is larger for the high-frequency interval than for the low-frequency interval, and vice versa for the narrowband case. %The optimal noise values have been fitted with a function of square root of the population size N, $f(N)=a+b\sqrt{N}$. We observe that the optimal noise value grows with the square root of population size.
+E and F: Coding fraction as a function of noise for a fixed population size (N=512). Red dots show the maximum, the red line where coding fraction is at least 95\% of the maximum value.
+G: An increase in population size leads to a higher coding fraction, especially for broader bands and higher frequency intervals. Coding fraction is
+larger for the narrowband signals than for the equivalent broadband intervals for all neural population sizes considered here. The coding fraction for the low frequency interval is always larger than for the high frequency interval.
+Signal mean $\mu=15.0\milli\volt$, signal amplitude $\sigma=1.0\milli\volt$ and $\sigma=0.5\milli\volt$ respectively.}
+\label{smallbroad}
+\end{figure}
+
+\subsection*{Narrowband Signals vs Broadband Signals}
+
+
+
+
+In nature, an external stimulus often covers a frequency range
+that starts at high frequencies, so using only broadband white noise signals
+as input is insufficient to describe realistic scenarios.
+%, with bird songs\citep{nottebohm1972neural} and ???\footnote{chirps, in a way?}.
+%We see that in many animals receptor neurons have adapted to these signals. For example, it was found that electroreceptors in weakly electric fish have band-pass properties\citep{bastian1976frequency}.
+Therefore, we investigate the coding of narrowband signals in the ranges
+described earlier (0--50Hz, 150--200Hz). Comparing the results from coding of
+broadband and narrowband signals, we see several differences.
+
+For both low and high frequency signals, the narrowband signal
+can be resolved better than the broadband signal for any amount of noise (figure \ref{smallbroad}, bottom left).
+That coding fractions are higher when we use narrowband signals can be
+explained by the fact that the additional
+frequencies in the broadband signal are now absent. In the broadband signal
+they are a form of ``noise'' that is common to all the input neurons.
+Similar to what we saw for the broadband signal,
+the peak of the low frequency input is still much broader than the peak of the high frequency input.
+To encode low frequency signals, the exact strength of the noise is not as
+important as it is for the high frequency signals, as can be seen from the wider peaks.
+
+\subsection*{Discussion}
+The beneficial effect of noise on the encoding of subthreshold signals by single neurons has been well investigated. However, the encoding of suprathreshold signals by populations of neurons has received comparatively little attention, and different effects play a role for suprathreshold signals than for subthreshold signals \citep{stocks2001information}. This paper delivers an important contribution to the understanding of suprathreshold stochastic resonance (SSR). We simulate populations of leaky integrate-and-fire neurons to answer the question of how population size influences the optimal noise strength for linear encoding of suprathreshold signals.
We are able to show that this optimal noise is well described as a function of the square root of the population size. This relationship is independent of the frequency properties of the input signal and holds true for narrowband and broadband signals.
+
+In this paper, we show that SSR works in LIF neurons for a variety of signals of different bandwidths and frequency intervals. We show that, for signals above a certain strength, the signal-to-noise ratio is sufficient to describe the optimal noise strength in the population, but that the actual coding fraction depends on the absolute value of the signal strength. %We furthermore show that increasing signal strength does not always increase the coding fraction.
+
+We contrast how well the low and high frequency parts of a broadband signal can be encoded. We take an input signal with $f_{cutoff} = \unit{200}\hertz$ and analyse the coding fraction for the frequency ranges 0 to \unit{50}\hertz\usk and 150 to \unit{200}\hertz\usk separately. The maximum value of the coding fraction is lower for the high frequency interval than for the low frequency interval. This means that inside broadband signals, higher frequency intervals appear more difficult to encode at every level of noise and population size. The low frequency interval has a wider peak (defined as the range where coding fraction is at least 95\% of its maximum value), which means that around the optimal amount of noise there is a large region where coding fraction is still good. The noise optimum for the low frequency parts of the input is lower than the optimum for the high frequency interval (Fig. \ref{highlowcoherence}). In both cases, the optimal noise value appears to grow with the square root of the population size.
+
+In general, narrowband signals can be encoded better than broadband signals.
+narrowband vs broadband
+
+Another main finding of this paper is the frequency dependence of SSR.
+We can see from the shape of the coherence between the signal and the output of the simulated
+neurons that SSR works mostly on the higher frequencies in the signal. As the lower frequency
+components are in many cases already encoded very well, the addition of noise
+helps to flatten the shape of the coherence curve. In the case of weak noise there
+are often border effects which disappear with increasing noise strength.
+In addition, for weak noise there are often visible effects of the firing rate of the neurons, in that the encoding
+around those frequencies is worse than for the surrounding frequencies. Generally
+this effect becomes less pronounced when we add more noise to the simulation, but
+we found a very striking exception in the case of narrowband signals:
+for a firing rate of about
+91\hertz\usk the coding fraction for a signal in the 0-50\hertz\usk band is
+better than for a signal in the 150-200\hertz\usk band, but this is
+not the case if the neurons have a firing rate of about 34\hertz.
+We were thus able to show that the firing rate of the neurons in the simulation is of
+critical importance to the encoding of the signal.
+
diff --git a/calculation.tex b/calculation.tex new file mode 100644 index 0000000..3ec3cad --- /dev/null +++ b/calculation.tex @@ -0,0 +1,46 @@ +\section*{Limit case of large populations}
+
+\subsection*{For large population sizes and strong noise, coding fraction becomes a function of their quotient}
+
+For the linear response regime of large noise, we can estimate the coding fraction. From Beiran et al.\ \citep{beiran2018coding} we know the coherence in linear response is given as
+
+\eq{
+C_N(\omega) = \frac{N|\chi(\omega)|^2 S_{ss}}{S_{x_ix_i}(\omega)+(N-1)|\chi(\omega)|^2S_{ss}}
+\label{eq:linear_response}
+}
+
+where \(S_{ss}\) is the power spectrum of the signal, \(S_{x_ix_i}(\omega)\) the power spectrum of a single neuron's output and \(\chi(\omega)\) the susceptibility of a single neuron; the coherence function of a single LIF neuron is then \(C_1(\omega) = |\chi(\omega)|^2 S_{ss}(\omega)/S_{x_ix_i}(\omega)\).
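Dividing numerator and denominator of eq.~\ref{eq:linear_response} by \(S_{x_ix_i}(\omega)\) expresses the population coherence through the single-neuron coherence alone. A minimal numeric sketch of this rearrangement, one frequency at a time (function name hypothetical):

```python
def population_coherence(c1, n):
    """Linear-response population coherence at one frequency, written in
    terms of the single-neuron coherence c1 = |chi|^2 S_ss / S_xx:
    C_N = N c1 / (1 + (N - 1) c1)."""
    return n * c1 / (1.0 + (n - 1.0) * c1)

# A population of one recovers the single-neuron coherence:
print(population_coherence(0.3, 1))
# Growing the population pushes the coherence toward 1:
for n in (4, 64, 4096):
    print(n, population_coherence(0.3, n))
```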
Generally, the single-neuron coherence is given by \citep{??}
+
+\eq{
+C_1(\omega)=\frac{r_0}{D} \frac{\omega^2S_{ss}(\omega)}{1+\omega^2}\frac{\left|\mathcal{D}_{i\omega-1}\big(\frac{\mu-v_T}{\sqrt{D}}\big)-e^{\Delta}\mathcal{D}_{i\omega-1}\big(\frac{\mu-v_R}{\sqrt{D}}\big)\right|^2}{\left|\mathcal{D}_{i\omega}\big(\frac{\mu-v_T}{\sqrt{D}}\big)\right|^2-e^{2\Delta}\left|\mathcal{D}_{i\omega}\big(\frac{\mu-v_R}{\sqrt{D}}\big)\right|^2}
+\label{eq:single_coherence}
+}
+
+where \(r_0\) is the firing rate of the neuron,
+\[r_0 = \left(\tau_{ref} + \sqrt{\pi}\int_\frac{\mu-v_T}{\sqrt{2D}}^\frac{\mu-v_R}{\sqrt{2D}} dz\, e^{z^2} \erfc(z) \right)^{-1}.\]
+In the limit of large noise (calculation in the appendix) this equation evaluates to
+
+\eq{
+C_1(\omega) = \sqrt{\pi}D^{-1}
+\frac{S_{ss}(\omega)\omega^2/(1+\omega^2)}{2 \sinh\left(\frac{\omega\pi}{2}\right)\Im\left( \Gamma\left(1+\frac{i\omega}{2}\right)\Gamma\left(\frac12-\frac{i\omega}{2}\right)\right)}
+\label{eq:simplified_single_coherence}
+}
+
+From eqs.~\ref{eq:linear_response} and \ref{eq:simplified_single_coherence} it follows that in the case \(D \rightarrow \infty\) the coherence, and therefore the coding fraction, of the population of LIF neurons is a function of \(D^{-1}N\). We plot the approximation as a function of \(\omega\) (fig. \ref{d_n_ratio}). In the limit of small frequencies the approximation matches the exact equation very well, though not for higher frequencies. We can verify this in our simulations by plotting coding fraction as a function of \(\frac{D}{N}\). We see (fig. \ref{d_n_ratio}) that in the limit of large D the curves actually lie on top of each other. This is, however, not the case for stimuli with a large cutoff frequency \(f_c\) (fig. \ref{d_n_ratio}), as expected from our evaluation of the approximation as a function of frequency.
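The collapse onto a single curve can be illustrated by combining eq.~\ref{eq:linear_response} with the large-noise scaling \(C_1 \propto 1/D\) of eq.~\ref{eq:simplified_single_coherence}. A toy sketch (names and numbers are hypothetical; \(a\) stands for the frequency-dependent prefactor at one fixed frequency):

```python
def population_coherence(c1, n):
    """Linear-response population coherence C_N = N c1 / (1 + (N - 1) c1)."""
    return n * c1 / (1.0 + (n - 1.0) * c1)

def coherence_strong_noise(a, noise_d, n):
    """In the large-noise limit the single-neuron coherence scales like
    C_1 = a / D, so C_N = (N a / D) / (1 + (N - 1) a / D): for N >> 1 the
    result depends on N and D essentially only through their ratio."""
    return population_coherence(a / noise_d, n)

# Scaling D and N by the same factor leaves the coherence nearly unchanged:
print(coherence_strong_noise(0.5, noise_d=100.0, n=512))
print(coherence_strong_noise(0.5, noise_d=400.0, n=2048))
```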
+
+
+\begin{figure}
+\centering
+ \includegraphics[width=0.32\linewidth]{{img/d_over_n/d_10.5_0.5_10_detail}.pdf}
+ \includegraphics[width=0.32\linewidth]{{img/d_over_n/d_15.0_0.5_50_detail}.pdf}
+ \includegraphics[width=0.32\linewidth]{{img/d_over_n/d_15.0_1.0_200_detail}.pdf}
+ \includegraphics[width=0.32\linewidth]{{img/d_over_n/d_over_n_10.5_0.5_10_detail}.pdf}
+ \includegraphics[width=0.32\linewidth]{{img/d_over_n/d_over_n_15.0_0.5_50_detail}.pdf}
+ \includegraphics[width=0.32\linewidth]{{img/d_over_n/d_over_n_15.0_1.0_200_detail}.pdf}
+ \caption{Top row: Coding fraction as a function of noise.
+ Bottom row: Coding fraction as a function of the ratio between noise strength and population size. For strong noise, coding fraction is a function of this ratio.
+ Left: signal mean 10.5mV, signal amplitude 0.5mV, $f_{c}$ 10Hz.
+ Middle: signal mean 15.0mV, signal amplitude 0.5mV, $f_{c}$ 50Hz.
+ Right: signal mean 15.0mV, signal amplitude 1.0mV, $f_{c}$ 200Hz.}
+ \label{d_n_ratio}
+\end{figure} diff --git a/citations.bib b/citations.bib new file mode 100644 index 0000000..a8c5a2e --- /dev/null +++ b/citations.bib @@ -0,0 +1,724 @@ +@article{goldberg1984relation,
+ title={Relation between discharge regularity and responses to externally applied galvanic currents in vestibular nerve afferents of the squirrel monkey},
+ author={Goldberg, JM and Smith, Charles E and Fernandez, C},
+ journal={Journal of Neurophysiology},
+ volume={51},
+ number={6},
+ pages={1236--1256},
+ year={1984},
+ publisher={American Physiological Society Bethesda, MD}
+}
+@article{benzi1981mechanism,
+ title={The mechanism of stochastic resonance},
+ author={Benzi, Roberto and Sutera, Alfonso and Vulpiani, Angelo},
+ journal={Journal of Physics A: Mathematical and General},
+ volume={14},
+ number={11},
+ pages={L453},
+ year={1981},
+ publisher={IOP Publishing}
+}
+@article{palanca2015vivo,
+author = {Palanca-Castan, Nicolas and K{\"{o}}ppl, Christine},
+journal = {Brain, Behavior and Evolution},
+number = {4},
+pages = {271--286},
+publisher = {Karger Publishers},
+title = {{In vivo recordings from low-frequency nucleus laminaris in the barn owl}},
+volume = {85},
+year = {2015}
+}
+@article{Chapeau-blondeau1996,
+author = {Chapeau-Blondeau, Fran\c{c}ois and Godivier, Xavier and Chambet, Nicolas},
+journal = {Physical Review E},
+number = {1},
+pages = {1273},
+title = {{Stochastic resonance in a neuron model that transmits spike trains}},
+volume = {53},
+year = {1996}
+}
+@article{walz2014static,
+author = {Walz, Henriette and Grewe, Jan and Benda, Jan},
+journal = {Journal of Neurophysiology},
+number = {4},
+pages = {752--765},
+publisher = {Am Physiological Soc},
+title = {{Static frequency tuning accounts for changes in neural synchrony evoked by transient communication signals}},
+volume = {112},
+year = {2014}
+}
+@article{lindner2016mechanisms,
+author = {Lindner, Benjamin},
+journal = {IEEE Transactions on Molecular, Biological and Multi-Scale Communications},
+publisher = {IEEE},
+title = {{Mechanisms of information filtering in neural systems}},
+year = {2016}
+}
+@article{krahe2002stimulus,
+author = {Krahe, R{\"{u}}diger and Kreiman, Gabriel and Gabbiani, Fabrizio and Koch, Christof and Metzner, Walter},
+journal = {Journal of Neuroscience},
+number = {6},
+pages = {2374--2382},
+publisher = {Soc Neuroscience},
+title = {{Stimulus encoding and feature extraction by multiple sensory neurons}},
+volume = {22},
+year = {2002}
+}
+@article{ahn2014heterogeneity,
+author = {Ahn, Jheeyae and Kreeger, Lauren J and Lubejko, Susan T and Butts, Daniel A and MacLeod, Katrina M},
+journal = {Journal of neurophysiology},
+number = {11},
+pages = {2320--2331},
+publisher = {Am Physiological Soc},
+title = {{Heterogeneity of intrinsic
biophysical properties among cochlear nucleus neurons improves the population coding of temporal information}}, +volume = {111}, +year = {2014} +} +@article{Krahe2008, +abstract = {Multiple topographic representations of sensory space are common in the nervous system and presumably allow organisms to separately process particular features of incoming sensory stimuli that vary widely in their attributes. We compared the response properties of sensory neurons within three maps of the body surface that are arranged strictly in parallel to two classes of stimuli that mimic prey and conspecifics, respectively. We used information-theoretic approaches and measures of phase locking to quantify neuronal responses. Our results show that frequency tuning in one of the three maps does not depend on stimulus class. This map acts as a low-pass filter under both conditions. A previously described stimulus-class-dependent switch in frequency tuning is shown to occur in the other two maps. Only a fraction of the information encoded by all neurons could be recovered through a linear decoder. Particularly striking were low-pass neurons the information of which in the high-frequency range could not be decoded linearly. We then explored whether intrinsic cellular mechanisms could partially account for the differences in frequency tuning across maps. Injection of a Ca2+ chelator had no effect in the map with low-pass characteristics. However, injection of the same Ca2+ chelator in the other two maps switched the tuning of neurons from band-pass/high-pass to low-pass. 
These results show that Ca2+-dependent processes play an important part in determining the functional roles of different sensory maps and thus shed light on the evolution of this important feature of the vertebrate brain.},
+author = {Krahe, R{\"{u}}diger and Bastian, Joseph and Chacron, Maurice J},
+doi = {10.1152/jn.90300.2008},
+issn = {0022-3077},
+journal = {Journal of Neurophysiology},
+number = {2},
+pages = {852--867},
+publisher = {American Physiological Society},
+title = {{Temporal Processing Across Multiple Topographic Maps in the Electrosensory System}},
+url = {http://jn.physiology.org/content/100/2/852},
+volume = {100},
+year = {2008}
+}
+@article{white2000channel,
+author = {White, John A and Rubinstein, Jay T and Kay, Alan R},
+journal = {Trends in Neurosciences},
+number = {3},
+pages = {131--137},
+publisher = {Elsevier},
+title = {{Channel noise in neurons}},
+volume = {23},
+year = {2000}
+}
+@article{faisal2008noise,
+author = {Faisal, A Aldo and Selen, Luc P J and Wolpert, Daniel M},
+file = {:home/huben/Documents/Paper/Faisal2008.pdf:pdf},
+journal = {Nature Reviews Neuroscience},
+number = {4},
+pages = {292--303},
+publisher = {Nature Publishing Group},
+title = {{Noise in the nervous
system}},
+volume = {9},
+year = {2008}
+}
+@article{fourcaud2003spike,
+author = {Fourcaud-Trocm{\'{e}}, Nicolas and Hansel, David and {Van Vreeswijk}, Carl and Brunel, Nicolas},
+journal = {The Journal of Neuroscience},
+number = {37},
+pages = {11628--11640},
+publisher = {Soc Neuroscience},
+title = {{How spike generation mechanisms determine the neuronal response to fluctuating inputs}},
+volume = {23},
+year = {2003}
+}
+@article{krahe2004burst,
+author = {Krahe, R{\"{u}}diger and Gabbiani, Fabrizio},
+journal = {Nature Reviews Neuroscience},
+number = {1},
+pages = {13--23},
+publisher = {Nature Publishing Group},
+title = {{Burst firing in sensory systems}},
+volume = {5},
+year = {2004}
+}
+@article{grewe2017synchronous,
+ title={Synchronous spikes are necessary but not sufficient for a synchrony code in populations of spiking neurons},
+ author={Grewe, Jan and Kruscha, Alexandra and Lindner, Benjamin and Benda, Jan},
+ journal={Proceedings of the National Academy of Sciences},
+ volume={114},
+ number={10},
+ pages={E1977--E1985},
+ year={2017},
+ publisher={National Acad Sciences}
+}
+@article{schreiber2002energy,
+author = {Schreiber, Susanne and Machens, Christian K and Herz, Andreas V M and Laughlin, Simon B},
+journal = {Neural Computation},
+number = {6},
+pages = {1323--1346},
+publisher = {MIT Press},
+title = {{Energy-efficient coding with discrete stochastic events}},
+volume = {14},
+year = {2002}
+}
+@article{neiman2011temporal,
+abstract = {The manner in which information is encoded in neural signals is a major issue in Neuroscience. A common distinction is between rate codes, where information in neural responses is encoded as the number of spikes within a specified time frame (encoding window), and temporal codes, where the position of spikes within the encoding window carries some or all of the information about the stimulus.
One test for the existence of a temporal code in neural responses is to add artificial time jitter to each spike in the response, and then assess whether or not information in the response has been degraded. If so, temporal encoding might be inferred, on the assumption that the jitter is small enough to alter the position, but not the number, of spikes within the encoding window. Here, the effects of artificial jitter on various spike train and information metrics were derived analytically, and this theory was validated using data from afferent neurons of the turtle vestibular and paddlefish electrosensory systems, and from model neurons. We demonstrate that the jitter procedure will degrade information content even when coding is known to be entirely by rate. For this and additional reasons, we conclude that the jitter procedure by itself is not sufficient to establish the presence of a temporal code.}, +author = {Neiman, Alexander B and Russell, David F and Rowe, Michael H}, +doi = {10.1371/journal.pone.0027380}, +journal = {PLOS ONE}, +number = {11}, +pages = {1--13}, +publisher = {Public Library of Science}, +title = {{Identifying Temporal Codes in Spontaneously Active Sensory Neurons}}, +url = {http://dx.doi.org/10.1371{\%}2Fjournal.pone.0027380}, +volume = {6}, +year = {2011} +} +@article{gabbiani1996coding, +author = {Gabbiani, Fabrizio}, +journal = {Network: Computation in Neural Systems}, +number = {1}, +pages = {61--85}, +publisher = {Citeseer}, +title = {{Coding of time-varying signals in spike trains of linear and half-wave rectifying neurons}}, +volume = {7}, +year = {1996} +} +@article{gammaitoni1998resonance, +author = {Gammaitoni, Luca and H{\"{a}}nggi, Peter and Jung, Peter and Marchesoni, Fabio}, +doi = {10.1103/RevModPhys.70.223}, +journal = {Rev. Mod. 
Phys.}, +number = {1}, +pages = {223--287}, +publisher = {American Physical Society}, +title = {{Stochastic resonance}}, +url = {http://link.aps.org/doi/10.1103/RevModPhys.70.223}, +volume = {70}, +year = {1998} +} +@article{davtyan2016protein, +author = {Davtyan, Aram and Platkov, Max and Gruebele, Martin and Papoian, Garegin A}, +doi = {10.1002/cphc.201501125}, +issn = {1439-7641}, +journal = {ChemPhysChem}, +keywords = {F{\"{o}}rster resonance energy transfer,brownian dynamics,molecular dynamics,protein folding,stochastic resonance}, +number = {9}, +pages = {1305--1313}, +title = {{Stochastic Resonance in Protein Folding Dynamics}}, +url = {http://dx.doi.org/10.1002/cphc.201501125}, +volume = {17}, +year = {2016} +} +@article{VanderGroen2016, +abstract = {Random noise enhances the detectability of weak signals in nonlinear systems, a phenomenon known as stochastic resonance (SR). Though counterintuitive at first, SR has been demonstrated in a variety of naturally occurring processes, including human perception, where it has been shown that adding noise directly to weak visual, tactile, or auditory stimuli enhances detection performance. These results indicate that random noise can push subthreshold receptor potentials across the transfer threshold, causing action potentials in an otherwise silent afference. Despite the wealth of evidence demonstrating SR for noise added to a stimulus, relatively few studies have explored whether or not noise added directly to cortical networks enhances sensory detection. Here we administered transcranial random noise stimulation (tRNS; 100–640 Hz zero-mean Gaussian white noise) to the occipital region of human participants. For increasing tRNS intensities (ranging from 0 to 1.5 mA), the detection accuracy of a visual stimuli changed according to an inverted-U-shaped function, typical of the SR phenomenon. 
When the optimal level of noise was added to visual cortex, detection performance improved significantly relative to a zero noise condition (9.7 ± 4.6{\%}) and to a similar extent as optimal noise added to the visual stimuli (11.2 ± 4.7{\%}). Our results demonstrate that adding noise to cortical networks can improve human behavior and that tRNS is an appropriate tool to exploit this mechanism.SIGNIFICANCE STATEMENT Our findings suggest that neural processing at the network level exhibits nonlinear system properties that are sensitive to the stochastic resonance phenomenon and highlight the usefulness of tRNS as a tool to modulate human behavior. Since tRNS can be applied to all cortical areas, exploiting the SR phenomenon is not restricted to the perceptual domain, but can be used for other functions that depend on nonlinear neural dynamics (e.g., decision making, task switching, response inhibition, and many other processes). This will open new avenues for using tRNS to investigate brain function and enhance the behavior of healthy individuals or patients.}, +author = {van der Groen, Onno and Wenderoth, Nicole}, +journal = {The Journal of Neuroscience}, +number = {19}, +pages = {5289 LP -- 5298}, +title = {{Transcranial Random Noise Stimulation of Visual Cortex: Stochastic Resonance Enhances Central Mechanisms of Perception}}, +url = {http://www.jneurosci.org/content/36/19/5289.abstract}, +volume = {36}, +year = {2016} +} +@article{shapira2016sound, +author = {Shapira, Einat and Pujol, R{\'{e}}my and Plaksin, Michael and Kimmel, Eitan}, +journal = {Physics in Medicine}, +publisher = {Elsevier}, +title = {{Sound-induced motility of outer hair cells explained by stochastic resonance in nanometric sensors in the lateral wall}}, +year = {2016} +} +@article{mileva2016short, +author = {Mileva, G R and Kozak, I J and Lewis, J E}, +file = {:home/huben/Desktop/1-s2.0-S0306452216000336-main.pdf:pdf}, +journal = {Neuroscience}, +pages = {1--11}, +publisher = {Elsevier}, 
+title = {{Short-term synaptic plasticity across topographic maps in the electrosensory system}}, +volume = {318}, +year = {2016} +} +@article{shimokawa1999stochastic, +author = {Shimokawa, T and Rogel, A and Pakdaman, K and Sato, S}, +journal = {Physical Review E}, +number = {3}, +pages = {3461}, +publisher = {APS}, +title = {{Stochastic resonance and spike-timing precision in an ensemble of leaky integrate and fire neuron models}}, +volume = {59}, +year = {1999} +} +@incollection{benda2013neural, +author = {Benda, Jan and Grewe, Jan and Krahe, R{\"{u}}diger}, +booktitle = {Animal Communication and Noise}, +pages = {331--372}, +publisher = {Springer}, +title = {{Neural noise in electrocommunication: from burden to benefits}}, +year = {2013} +} +@article{hupe2008effect, +author = {Hup{\'{e}}, Ginette J and Lewis, John E and Benda, Jan}, +journal = {Journal of Physiology-Paris}, +number = {4}, +pages = {164--172}, +publisher = {Elsevier}, +title = {{The effect of difference frequency on electrocommunication: chirp production and encoding in a species of weakly electric fish, Apteronotus leptorhynchus}}, +volume = {102}, +year = {2008} +} +@article{collins1995stochastic, + title={Stochastic resonance without tuning}, + author={Collins, JJ and Chow, Carson C and Imhoff, Thomas T}, + journal={Nature}, + volume={376}, + number={6537}, + pages={236}, + year={1995}, + publisher={Nature Publishing Group} +} +@article{collins1995aperiodic, + title={Aperiodic stochastic resonance in excitable systems}, + author={Collins, JJ and Chow, Carson C and Imhoff, Thomas T}, + journal={Physical Review E}, + volume={52}, + number={4}, + pages={R3321}, + year={1995}, + publisher={APS} +} + +@article{lindner2002maximizing, +author = {Lindner, Benjamin and Schimansky-Geier, Lutz and Longtin, Andr{\'{e}}}, +file = {:home/huben/Desktop/PhysRevE.66.031916.pdf:pdf}, +journal = {Physical Review E}, +number = {3}, +pages = {31916}, +publisher = {APS}, +title = {{Maximizing spike train coherence 
or incoherence in the leaky integrate-and-fire model}},
+volume = {66},
+year = {2002}
+}
+@article{Chapeau-blondeau1996,
+author = {Chapeau-Blondeau, Fran\c{c}ois and Godivier, Xavier and Chambet, Nicolas},
+file = {:home/huben/Desktop/PhysRevE.53.1273.pdf:pdf},
+journal = {Physical Review E},
+number = {1},
+pages = {1273},
+title = {{Stochastic resonance in a neuron model that transmits spike trains}},
+volume = {53},
+year = {1996}
+}
+@article{borst1999information,
+author = {Borst, Alexander and Theunissen, Fr{\'{e}}d{\'{e}}ric E},
+journal = {Nature Neuroscience},
+number = {11},
+pages = {947--957},
+publisher = {Nature Publishing Group},
+title = {{Information theory and neural coding}},
+volume = {2},
+year = {1999}
+}
+@article{krahe2014neural,
+author = {Krahe, R{\"{u}}diger and Maler, Leonard},
+file = {:home/huben/Downloads/1-s2.0-S0959438813001724-main.pdf:pdf},
+journal = {Current Opinion in Neurobiology},
+pages = {13--21},
+publisher = {Elsevier},
+title = {{Neural maps in the electrosensory system of weakly electric fish}},
+volume = {24},
+year = {2014}
+}
+@article{stocks2000suprathreshold,
+author = {Stocks, N G},
+file = {:home/huben/Desktop/PhysRevLett.84.2310.pdf:pdf},
+journal = {Physical Review Letters},
+number = {11},
+pages = {2310},
+publisher = {APS},
+title = {{Suprathreshold stochastic resonance in multilevel threshold systems}},
+volume = {84},
+year = {2000}
+}
+@article{stocks2001information,
+ title={Information transmission in parallel threshold arrays: Suprathreshold stochastic resonance},
+ author={Stocks, NG},
+ journal={Physical Review E},
+ volume={63},
+ number={4},
+ pages={041114},
+ year={2001},
+ publisher={APS}
+}
+@article{stocks2002application,
+ title={The application of suprathreshold stochastic resonance to cochlear implant coding},
+ author={Stocks, NG and Allingham, D and Morse, RP},
+ journal={Fluctuation and Noise Letters},
+ volume={2},
+ number={03},
+ pages={L169--L181},
+ year={2002},
+ publisher={World Scientific}
+}
+
+@article{beiran2018coding,
+ title={Coding of time-dependent stimuli in
homogeneous and heterogeneous neural populations}, + author={Beiran, Manuel and Kruscha, Alexandra and Benda, Jan and Lindner, Benjamin}, + journal={Journal of computational neuroscience}, + volume={44}, + number={2}, + pages={189--202}, + year={2018}, + publisher={Springer} +} + +@article{stocks2001generic, + title={Generic noise-enhanced coding in neuronal arrays}, + author={Stocks, NG and Mannella, Riccardo}, + journal={Physical Review E}, + volume={64}, + number={3}, + pages={030902}, + year={2001}, + publisher={APS} +} +@article{nottebohm1972neural, + title={Neural lateralization of vocal control in a passerine bird. II. Subsong, calls, and a theory of vocal learning}, + author={Nottebohm, Fernando}, + journal={Journal of Experimental Zoology}, + volume={179}, + number={1}, + pages={35--49}, + year={1972}, + publisher={Wiley Online Library} +} +@article{bastian1976frequency, + title={Frequency response characteristics of electroreceptors in weakly electric fish (Gymnotoidei) with a pulse discharge}, + author={Bastian, Joseph}, + journal={Journal of Comparative Physiology}, + volume={112}, + number={2}, + pages={165--180}, + year={1976}, + publisher={Springer} +} + +@article{gabbiani1996stimulus, +author = {Gabbiani, Fabrizio and Metzner, Walter and Wessel, Ralf and Koch, Christof and Others}, +journal = {Nature}, +number = {6609}, +pages = {564--567}, +title = {{From stimulus encoding to feature extraction in weakly electric fish}}, +volume = {384}, +year = {1996} +} +@article{douglass1993noise, + title={Noise enhancement of information transfer in crayfish mechanoreceptors by stochastic resonance}, + author={Douglass, John K and Wilkens, Lon and Pantazelou, Eleni and Moss, Frank}, + journal={Nature}, + volume={365}, + number={6444}, + pages={337}, + year={1993}, + publisher={Nature Publishing Group} +} +@article{levin1996broadband, + title={Broadband neural encoding in the cricket cereal sensory system enhanced by stochastic resonance}, + author={Levin, Jacob 
E and Miller, John P},
+ journal={Nature},
+ volume={380},
+ number={6570},
+ pages={165},
+ year={1996},
+ publisher={Nature Publishing Group}
+}
+
+@article{gabbiani1996codingLIF,
+ title={Coding of time-varying signals in spike trains of integrate-and-fire neurons with random threshold},
+ author={Gabbiani, Fabrizio and Koch, Christof},
+ journal={Neural Computation},
+ volume={8},
+ number={1},
+ pages={44--66},
+ year={1996},
+ publisher={MIT Press}
+}
+@article{Gjorgjieva2014,
+author = {Gjorgjieva, Julijana and Sompolinsky, Haim and Meister, Markus},
+doi = {10.1523/JNEUROSCI.1032-14.2014},
+file = {:home/huben/Downloads/12127.full.pdf:pdf},
+journal = {Journal of Neuroscience},
+keywords = {efficient coding,off,on,optimality,parallel pathways,retina,sensory processing},
+number = {36},
+pages = {12127--12144},
+title = {{Benefits of Pathway Splitting in Sensory Coding}},
+volume = {34},
+year = {2014}
+}
+@article{Maler,
+author = {Maler, Leonard},
+doi = {10.1016/j.conb.2013.08.013},
+file = {:home/huben/Downloads/1-s2.0-S0959438813001724-main.pdf:pdf},
+title = {{Neural maps in the electrosensory system of weakly electric fish}},
+year = {2014}
+}
+@article{Huang2016,
+author = {Huang, Chengjie G.
and Chacron, Maurice J.}, +doi = {10.1523/JNEUROSCI.1433-16.2016}, +file = {:home/huben/Downloads/parallelcodingchacron2016.pdf:pdf}, +keywords = {adaptation,electrosensory,envelope,features,have no or significant,in the same sensory,influence on their,neural coding,neuron type can either,off-type response to first-order,responses to second-order stimulus,significance statement,sk channels,stimulus attributes has no,we demonstrate that heterogeneities,weakly electric fish,while an on- or}, +number = {38}, +pages = {9859--9872}, +title = {{Optimized Parallel Coding of Second-Order Stimulus Features by Heterogeneous Neural Populations}}, +volume = {36}, +year = {2016} +} +@article{stocks2001suprathreshold, + title={Suprathreshold stochastic resonance: an exact result for uniformly distributed signal and noise}, + author={Stocks, NG}, + journal={Physics Letters A}, + volume={279}, + number={5-6}, + pages={308--312}, + year={2001}, + publisher={Elsevier} +} +@article{nowak1997influence, + title={Influence of low and high frequency inputs on spike timing in visual cortical neurons.}, + author={Nowak, Lionel G and Sanchez-Vives, Maria V and McCormick, David A}, + journal={Cerebral cortex (New York, NY: 1991)}, + volume={7}, + number={6}, + pages={487--501}, + year={1997} +} +@article{mainen1995reliability, + title={Reliability of spike timing in neocortical neurons}, + author={Mainen, Zachary F and Sejnowski, Terrence J}, + journal={Science}, + volume={268}, + number={5216}, + pages={1503--1506}, + year={1995}, + publisher={American Association for the Advancement of Science} +} +@article{mcdonnell2002characterization, + title={A characterization of suprathreshold stochastic resonance in an array of comparators by correlation coefficient}, + author={Mcdonnell, Mark D and Abbott, Derek and Pearce, Charles EM}, + journal={Fluctuation and Noise Letters}, + volume={2}, + number={03}, + pages={L205--L220}, + year={2002}, + publisher={World Scientific} +} 
+@article{bulsara1996threshold, + title={Threshold detection of wideband signals: A noise-induced maximum in the mutual information}, + author={Bulsara, Adi R and Zador, Anthony}, + journal={Physical Review E}, + volume={54}, + number={3}, + pages={R2185}, + year={1996}, + publisher={APS} +} + +@article{Sadeghi2007, +author = {Sadeghi, Soroush G and Chacron, Maurice J and Taylor, Michael C and Cullen, Kathleen E}, +doi = {10.1523/JNEUROSCI.4690-06.2007}, +file = {:home/huben/Downloads/31{\_}sadeghi{\_}chacron{\_}taylor{\_}cullen{\_}2007.pdf:pdf}, +keywords = {detection threshold,heterogeneity,information theory,regular afferents,spike timing,vestibular afferents}, +number = {4}, +pages = {771--781}, +title = {{Neural Variability, Detection Thresholds, and Information Transmission in the Vestibular System}}, +journal = {Journal of Neuroscience,}, +volume = {27}, +year = {2007} +} +@article{hoch2003optimal, + title={Optimal noise-aided signal transmission through populations of neurons}, + author={Hoch, Thomas and Wenning, Gregor and Obermayer, Klaus}, + journal={Physical Review E}, + volume={68}, + number={1}, + pages={011911}, + year={2003}, + publisher={APS} +} + +@article{mcdonnell2006optimal, + title={Optimal information transmission in nonlinear arrays through suprathreshold stochastic resonance}, + author={McDonnell, Mark D and Stocks, Nigel G and Pearce, Charles EM and Abbott, Derek}, + journal={Physics Letters A}, + volume={352}, + number={3}, + pages={183--189}, + year={2006}, + publisher={Elsevier} +} + +@article{mcdonnell2007optimal, + title={Optimal stimulus and noise distributions for information transmission via suprathreshold stochastic resonance}, + author={McDonnell, Mark D and Stocks, Nigel G and Abbott, Derek}, + journal={Physical Review E}, + volume={75}, + number={6}, + pages={061105}, + year={2007}, + publisher={APS} +} + +@article{Borst1999, +author = {Borst, Alexander}, +file = {:home/huben/Downloads/nn1199{\_}947.pdf:pdf}, +pages = {13--16}, 
+title = {{Information theory and neural coding}},
+year = {1999}
+}
+@book{Walz2016,
+author = {Walz, Henriette and Maler, Leonard and Longtin, Andre and Benda, Jan},
+file = {:home/huben/Downloads/punitModel.pdf:pdf},
+isbn = {2154253113},
+title = {{A simple neuron model of spike generation accurately describes frequency tuning of an electroreceptor population.}},
+year = {2016}
+}
+@article{Stocks2000,
+ title={Suprathreshold stochastic resonance in multilevel threshold systems},
+ author={Stocks, NG},
+ journal={Physical Review Letters},
+ volume={84},
+ number={11},
+ pages={2310},
+ year={2000},
+ publisher={APS}
+}
+@article{Strong1998,
+author = {Strong, S P and Koberle, Roland and Bialek, William},
+file = {:home/huben/Desktop/PhysRevLett.80.197.pdf:pdf},
+pages = {197--200},
+title = {{Entropy and Information in Neural Spike Trains}},
+year = {1998}
+}
+@article{deweese1995information,
+ title={Information flow in sensory neurons},
+ author={DeWeese, M and Bialek, W},
+ journal={Il Nuovo Cimento D},
+ volume={17},
+ number={7-8},
+ pages={733--741},
+ year={1995},
+ publisher={Springer}
+}
+@article{bialek1993bits,
+ title={Bits and brains: Information flow in the nervous system},
+ author={Bialek, William and DeWeese, Michael and Rieke, Fred and Warland, David},
+ journal={Physica A: Statistical Mechanics and its Applications},
+ volume={200},
+ number={1-4},
+ pages={581--593},
+ year={1993},
+ publisher={Elsevier}
+}
+
+@article{wiesenfeld1995stochastic,
+ title={Stochastic resonance and the benefits of noise: from ice ages to crayfish and SQUIDs},
+ author={Wiesenfeld, Kurt and Moss, Frank},
+ journal={Nature},
+ volume={373},
+ number={6509},
+ pages={33},
+ year={1995},
+ publisher={Nature Publishing Group}
+}
+
+@article{Lindner2002,
+author = {Lindner, Benjamin and Schimansky-Geier, Lutz},
+doi = {10.1103/PhysRevE.66.031916},
+file = {:home/huben/Desktop/PhysRevE.66.031916.pdf:pdf},
+pages = {1--6},
+title = {{Maximizing spike train coherence or incoherence in the leaky integrate-and-fire model}},
+year = {2002}
+}
+
+@article{Farkhooi2009,
+author = {Farkhooi, Farzad and Strube-Bloss, Martin F and Nawrot, Martin P},
+journal = {Physical Review E},
+month = {feb},
+number = {2},
+pages = {21905},
+publisher = {American Physical Society},
+title = {{Serial correlation in neural spike trains: Experimental evidence, stochastic modeling, and single neuron variability}},
+url = {http://link.aps.org/doi/10.1103/PhysRevE.79.021905},
+volume = {79},
+year = {2009}
+}
+@article{Mileva2016,
+author = {Mileva, G R and Kozak, I J and Lewis, J E},
+doi = {10.1016/j.neuroscience.2016.01.014},
+file = {:home/huben/Desktop/1-s2.0-S0306452216000336-main.pdf:pdf},
+issn = {0306-4522},
+journal = {Neuroscience},
+keywords = {facilitation,synaptic depression,weakly electric},
+pages = {1--11},
+publisher = {IBRO},
+title = {{Short-term synaptic plasticity across topographic maps in the electrosensory system}},
+url = {http://dx.doi.org/10.1016/j.neuroscience.2016.01.014},
+volume = {318},
+year = {2016}
+}
+@article{Maler2009,
+abstract = {The electric fish Apteronotus leptorhynchus emits a high-frequency electric organ discharge (EOD) sensed by specialized electroreceptors (P-units) distributed across the fish's skin. Objects such as prey increase the amplitude of the EOD over the underlying skin and thus cause an increase in P-unit discharge. The resulting localized intensity increase is called the electric image and is detected by its effect on the P-unit population; the electric image peak value and the extent to which it spreads are cues utilized by these fish to estimate the location and size of its prey.
P-units project topographically to three topographic maps in the electrosensory lateral line lobe (ELL): centromedial (CMS), centrolateral (CLS), and lateral (LS) segments. In a companion paper I have calculated the receptive fields (RFs) in these maps: RFs were small in CMS and very large in LS, with intermediate values in CLS. Here I use physiological data to create a simple model of the RF structure within the three ELL maps and to compute the response of these model maps to simulated prey. The Fisher information (FI) method was used to compute the optimal estimates possible for prey localization across the three maps. The FI predictions were compared with behavioral studies on prey detection. These comparisons were used to frame alternative hypotheses on the functions of the three maps and on the constraints that RF size and synaptic strength impose on weak signal detection and estimation.}, +author = {Maler, Leonard}, +doi = {10.1002/cne.22120}, +file = {:home/huben/Desktop/Maler2009a-2.pdf:pdf}, +issn = {00219967}, +journal = {Journal of Comparative Neurology}, +keywords = {Electric fish,Electrosensory lateral line lobe,Fisher information,Receptive field,Stimulus estimation,Topographic maps}, +number = {5}, +pages = {394--422}, +pmid = {19655388}, +title = {{Receptive field organization across multiple electrosensory maps. II. Computational analysis of the effects of receptive field size on prey localization}}, +volume = {516}, +year = {2009} +} +@article{Cumming2007, +abstract = {Error bars commonly appear in figures in publications, but experimental biologists are often unsure how they should be used and interpreted. In this article we illustrate some basic features of error bars and explain how they can help communicate data and assist correct interpretation. Error bars may show confidence intervals, standard errors, standard deviations, or other quantities. 
Different types of error bars give quite different information, and so figure legends must make clear what error bars represent. We suggest eight simple rules to assist with effective use and interpretation of error bars.}, +archivePrefix = {arXiv}, +arxivId = {Error bars in experimental biology}, +author = {Cumming, G. and Fidler, F. and Vaux, D.L.}, +doi = {10.1083/jcb.200611141}, +eprint = {Error bars in experimental biology}, +isbn = {0021-9525 (Print)}, +issn = {0021-9525}, +journal = {The Journal of Cell Biology}, +number = {1}, +pages = {7--11}, +pmid = {17420288}, +title = {{Error bars in experimental biology}}, +url = {http://www.jcb.org/cgi/doi/10.1083/jcb.200611141}, +volume = {177}, +year = {2007} +} diff --git a/firing_rate.tex b/firing_rate.tex new file mode 100644 index 0000000..395539f --- /dev/null +++ b/firing_rate.tex @@ -0,0 +1,184 @@ +\section*{High and low firing rates} + +One key factor that determines how well a neuron can encode a given signal is +the firing rate of the neuron. Though it has been shown to be possible for +neurons to encode signals with frequencies above their firing rate \citep{knight1972}, +in general higher firing rates lead to a better encoding of the signal. +In our simulations the firing rate of the neurons depends on the noise added +to the neurons. The effect can be seen in figure \ref{firingrates}, which shows +average firing rate as a function of the mean input for different noise strengths. +The largest differences can be seen for inputs with a mean around the firing threshold +(10\milli\volt) and below. While for weak noise (gray) there is an obvious non-linearity +at the firing threshold, increasing noise strength linearizes the average firing rate. +This illustrates subthreshold SR quite well, as the noise induces firing of the neurons +for signals which would otherwise be too weak to elicit a response. 
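The linearizing effect of noise on the firing rate can be reproduced with a minimal leaky integrate-and-fire simulation. The following Python sketch is for illustration only; the membrane time constant of 10\,ms, the threshold of 10\,mV and the reset to 0\,mV are assumed values, not necessarily the parameters of our model:

```python
import numpy as np

def lif_firing_rate(mu, sigma, tau=0.01, theta=10.0, v_reset=0.0,
                    dt=1e-4, t_max=5.0, seed=0):
    """Firing rate (Hz) of a leaky integrate-and-fire neuron driven by a
    constant mean input mu (mV) plus Gaussian white noise of strength sigma."""
    rng = np.random.default_rng(seed)
    v, n_spikes = v_reset, 0
    for _ in range(int(t_max / dt)):
        # Euler-Maruyama step of tau*dV/dt = mu - V + noise
        v += dt * (mu - v) / tau + sigma * np.sqrt(dt) * rng.standard_normal()
        if v >= theta:          # threshold crossing: spike and reset
            v = v_reset
            n_spikes += 1
    return n_spikes / t_max

# Subthreshold mean input: only noise can push the neuron across threshold.
print(lif_firing_rate(8.0, 5.0))    # weak noise: (almost) silent
print(lif_firing_rate(8.0, 40.0))   # strong noise: noise-induced firing
# Suprathreshold mean input: the rate is set mainly by the mean input.
print(lif_firing_rate(15.0, 0.0))
```

With these (assumed) parameters the noiseless suprathreshold rate comes out near 91\,Hz, while the subthreshold input fires only when the noise is strong, which is the subthreshold stochastic resonance effect described above.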
+For stronger mean inputs, such as the 15\milli\volt\usk we use in our simulations
+(second vertical black line in figure \ref{firingrates}), the firing rate is roughly
+linear in the input strength and
+is not sensitive to changes in the noise strength. Therefore, the effect of noise
+on the average firing rate of the neurons plays at most a weak role in the
+explanation of SSR. However, the mean input strength is very important because of
+its effects on the average firing rate, as we will show below.\footnote{could use a plot comparing high/low directly}
+We found that in our simulations, the amplitude of the signal has a negligible
+influence on the firing rate.
+
+Previously we looked at the optimum noise value for a given population size. Now we look
+at the optimal population size for a given noise value: we define a
+"maximum" coding fraction as the coding fraction at a population size of 4096 neurons,
+provided the coding fraction at this population size is no more than 2\% greater than the coding
+fraction for a population of 2048 neurons. This ensures that the coding
+fraction has reasonably converged at this point. To see why this is important,
+compare the yellow line (noise strength \(10^{-3}\milli\volt\squared\per\hertz\))
+in figure \ref{CodingFrac} B to the other lines in that figure:
+there the coding fraction is still rising with increasing population size and there is no reliable
+way to estimate for which population size it will ultimately converge and what the maximum coding
+fraction will be. In the following analysis we only use the noise strengths for which the coding fraction has
+converged.
+Then, we define the "optimum" population size as the size where the coding fraction reaches 95\% of the maximum
+coding fraction, using linear interpolation between the different population sizes. Exponential interpolation
+yielded essentially the same results.
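The convergence criterion and the 95\% rule can be written down compactly. The following Python sketch (function and array names are hypothetical, not taken from our analysis code) returns the interpolated optimal population size, or nothing if the coding fraction has not converged at the largest simulated population:

```python
import numpy as np

def optimal_popsize(sizes, cf, tol=0.02, frac=0.95):
    """Smallest (interpolated) population size whose coding fraction reaches
    `frac` of the converged maximum, or None if the coding fraction at the
    largest size still exceeds the second-largest by more than `tol`.
    Assumes `cf` increases monotonically with `sizes`."""
    sizes, cf = np.asarray(sizes, float), np.asarray(cf, float)
    if cf[-1] > (1.0 + tol) * cf[-2]:   # not converged: cf still rising > 2%
        return None
    cf_max = cf[-1]
    # linear interpolation between the simulated population sizes
    return float(np.interp(frac * cf_max, cf, sizes))

sizes = 2 ** np.arange(13)          # 1, 2, 4, ..., 4096
cf = sizes / (sizes + 10.0)         # toy saturating coding-fraction curve
print(optimal_popsize(sizes, cf))   # about 202 for this toy curve
```

For a curve that is still rising steeply at n=4096 the function returns `None`, mirroring the exclusion of non-converged noise strengths described above.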
We call this population size "optimal" because we assume that, for efficiency reasons,
+population size should be as small as possible while having very good encoding capabilities.
+
+\subsection*{Strong average input (high firing rates)}
+
+\begin{figure}
+ \centering
+ \includegraphics[width=0.32\linewidth]{{img/popsize_15.0_1.0}.pdf}
+ \includegraphics[width=0.32\linewidth]{{img/max_cf_15.0_1.0}.pdf}
+ \includegraphics[width=0.32\linewidth]{{img/improvement_15.0_1.0}.pdf}
+
+ \includegraphics[width=0.32\linewidth]{{img/popsize_10.5_1.0}.pdf}
+ \includegraphics[width=0.32\linewidth]{{img/improvement_10.5_1.0}.pdf}
+ \includegraphics[width=0.32\linewidth]{{img/max_cf_10.5_1.0}.pdf}
+ \caption{An overview of the effect of different noise strengths on maximum coding fraction and optimal population size. We only considered noise values for which the difference in coding fraction between population sizes n=4096 and n=2048 is less than 2\%. The average firing rate of the neurons was about 91\hertz. Input strengths were chosen so that the power of the signal in the corresponding bands is the same (1.0\milli\volt\ for the broadband and 0.5\milli\volt\ for the narrowband signals). a) Minimum population size needed for the coding fraction to reach at least 95\% of its maximum, as a function of noise strength. Optimal population size grows with increasing noise. Optimal population size is larger for the narrowband signals (dots) than for the broadband signal (crosses). For weak noise and narrowband signals, there is little difference in the optimal population size for the high frequency interval
+ (brown) and the low frequency interval (blue). As noise becomes stronger, optimal population size
+ is larger for the higher interval. The optimal population size is always largest for the high frequency interval, followed by the entire broadband signal and finally the low frequency narrowband signal.
That the minimum population size is larger for the narrowband signals than
+ for the broadband signal can be explained by the fact that the maximum coding fraction (b) is higher for the narrowband signals. b) Maximum coding fraction is higher for lower frequency intervals and narrower bands. In the case of the narrowband signal and the lower interval, maximum coding fraction is close to 1 for all noise strengths. For weak noise, the low interval in the
+ broadband signal and the higher narrowband signal have very similar maximum coding fractions. With increasing noise, the maximum coding fraction rises much
+ faster for the high frequency narrowband signal. The broadband signal and the high frequency interval inside that signal have a very low maximum coding fraction for weak noise. As the noise becomes stronger, at some point the coding fraction begins to increase rapidly. c) Frequency band, not signal bandwidth, appears to be the main factor in the relative improvement of coding fraction with increasing population size. For the slow narrowband signal the improvement is mostly not because the maximum becomes higher, as it changes very little. Instead, the reason is that a single neuron's ability to encode the signal diminishes with increasing noise levels (see fig. \ref{CodingFrac} B and C). For the whole broadband signal and the low frequency interval within it, there is almost no improvement in coding fraction through increasing population size for weak noise. Interestingly, with stronger noise the relative improvement for the low
+ frequency interval is the same for both the narrowband and the broadband signal, even though the maximum coding fraction is very different, at least for intermediate noise strengths (\(10^{-4}\) to \(10^{-3}\)\milli\volt\squared\per\hertz).
+ }
+ \label{fig:popsizenarrow15}
+\end{figure}
+
+First, we consider the case of strong input (15 \milli\volt) which leads to an average firing
+rate of about 91\hertz.
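This firing rate is consistent with the deterministic limit of the leaky integrate-and-fire model. Assuming, for illustration, a membrane time constant \(\tau = 10\,\milli\second\), a threshold \(\theta = 10\,\milli\volt\) and a reset potential \(V_r = 0\,\milli\volt\), a suprathreshold mean input \(\mu\) yields the interspike interval
\[
T = \tau \ln\frac{\mu - V_r}{\mu - \theta},
\]
which for \(\mu = 15\,\milli\volt\) gives \(T \approx 11\,\milli\second\), i.e. a firing rate of about 91\hertz\ (and about 33\hertz\ for \(\mu = 10.5\,\milli\volt\), close to the rate reported below for the weak mean input).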
As before, we see great differences for the different frequency intervals. In figure \ref{fig:popsizenarrow15} A +we see population sizes necessary to reach an encoding quality close to the maximum. +To encode the high frequency parts of the spectrum much larger populations are required +than for the low frequency parts. As expected, the +size of the optimal population increases with increasing noise strength. To encode the broadband +signal over its entire spectrum, a population size between the population sizes +for the narrowband intervals is optimal. This can +be understood because the broadband signal contains both the "easy" and the "difficult" intervals +and should therefore fall in between. The same is true for the maximum coding fraction (figure \ref{fig:popsizenarrow15} C): +Again, the broadband signal falls in between the intervals. As we could see before (figure \ref{smallbroad}), +the lower frequency interval is easier to encode than the higher frequency interval +and the whole broadband signal. +We also considered the relative effect of +increasing population size. To quantify this, we divided the maximum coding fraction by +the coding fraction of a single neuron. Figure \ref{fig:popsizenarrow15} E shows that +the increase in coding fraction gained by increasing population size +is larger for the higher frequency interval. In contrast, a single neuron can +encode the broadband signal or the low frequency interval about as well as a +larger population can. +That relative improvement increases for stronger noise +is a consequence of the reduction in the encoding +capability of a single neuron (compare figure \ref{CodingFrac} C). + +For the narrowband signals we see a similar picture: Except for very weak noise, +optimal population size to encode the high frequency signal is larger than that +for the low frequency signal (figure \ref{fig:popsizenarrow15} B). 
Optimal
+population size for encoding of the low frequency signal now starts at a much
+higher level than before.
+Maximum coding fraction
+is again larger for the low frequency signal (figure \ref{fig:popsizenarrow15} D).
+For the high frequency signal, coding fraction
+is at a much higher level than it is for the high frequency interval in the broadband signal
+for all noise strengths.
+With increasing noise strength the coding fraction increases rapidly and almost reaches the level achieved
+for the low frequency signal for the population sizes considered here.
+The relative increase in coding fraction
+(figure \ref{fig:popsizenarrow15} F) is similar to what
+we saw for the broadband signal. For high frequencies the increase in
+coding fraction from a single neuron to a population of neurons becomes apparent
+even at relatively low noise strengths, whereas for the low frequency
+signal the relative improvement starts at much higher noise levels and is
+mostly explained by the decrease in encoding capabilities of a single neuron.
+
+For both narrowband and broadband signals, the results here qualitatively do not depend
+on the input amplitude. See \textit{appendix} for more information.
+
+\subsection*{Weak average input (low firing rates)}
+
+\begin{figure}
+
+ \includegraphics[width=0.49\linewidth]{{img/0to50_broad_small_coherence_10.5_0.5}.pdf}
+
+ \caption{Coherence curves for broad and narrow frequency range inputs. The average firing rate of the cells is marked with a black vertical line. a) Broad spectrum. Coherence for low frequencies is much lower than for the case of higher mean input (fig. \ref{fig:coherence_narrow_15.0} A). For the weak noise level (blue), population sizes n=4 and n=4096 show indistinguishable coding fraction. In case of a small population size, coherence is higher for stronger noise (green), contrary to what we have seen before.
This cannot be explained by an increase in firing rate, as the difference in firing rate is only about 1\%. b) Narrow band inputs for two frequency ranges. Low frequency range: coherence for slow parts of the signal is similar to that in the broadband signal. High frequency range: In contrast to the case of higher mean input (fig. \ref{fig:coherence_narrow_15.0}) the high-frequency signal is encoded better than the low-frequency signal. Even for comparatively weak noise (blue), an increase in population size offers better encoding.
+}
+ \label{fig:coherence_narrow_10.5}
+\end{figure}
+
+We also consider the case of a weaker mean input (10.5\milli\volt) which leads
+to lower average firing rates (about 34\hertz). Results can be seen in figure
+\ref{fig:popsizenarrow10}. For the broadband signal, the results
+are in general similar to those in the strong average input case.
+We again see (fig. \ref{fig:popsizenarrow10} A) that optimum population size is
+larger for the high frequency interval than for the low frequency interval, with
+the entire broadband signal somewhere in between. Now the optimal population sizes
+are much closer than in the case of high average firing rates. The same holds
+for the maximum coding fraction
+(fig. \ref{fig:popsizenarrow10} C). The value for the low frequency interval
+ is now much lower for very weak noise than for the high mean input.
+The curves for both intervals and the broadband signal now look very similar
+to each other.
+The maximum coding fractions for the broadband signal and the high
+frequency interval are almost equal for noise strengths greater
+than \(10^{-3}\milli\volt\squared\per\hertz\).
+Relative improvement (fig. \ref{fig:popsizenarrow10} E) is very similar for all
+intervals.
With increasing noise strength the relative improvement starts to show
+the pattern we have seen before, with improvement being greatest for
+the high frequency interval, followed by the broadband signal and then the
+low frequency interval.
+
+For the narrowband signals we see some striking differences from what we saw
+for the high mean input. Optimal population sizes are now very different for the low frequency
+signal and the high frequency signal for all noise strengths (fig. \ref{fig:popsizenarrow10} B).
+The high frequency signal again needs a larger population for
+near-optimal encoding. The optimal population size for
+the low frequency signal is now very similar to the optimal population
+size for the low frequency interval in the broadband signal.
+This contrasts with the high input case (fig. \ref{fig:popsizenarrow15} A \& B), where for
+weak noise the optimal population size was much larger for the narrowband signal
+than for the interval in the broadband signal.
+
+A striking change happens for the maximum coding fraction (fig. \ref{fig:popsizenarrow10} D).
+As opposed to every case we have looked at before, now the higher frequency
+signal appears to be more easily encoded than the low frequency signal.
+This can be explained by looking at the coherence curves of the two signals
+(fig. \ref{fig:coherence_narrow_10.5} B). As the firing rate of the neurons
+is inside the frequency range of the low frequency signal, encoding
+of those frequencies is suppressed. This is not the case for the
+high frequency signal, where the coherence behaves similarly to
+what we have seen for the strong input case (fig. \ref{fig:coherence_narrow_15.0} B).
+Maximum coding fraction for the low frequency signal is very similar to
+the maximum coding fraction for the low frequency interval of the broadband signal.
+This also differs from what we have seen before, as in every other case
+the coding fraction reaches much higher values for the narrowband signals.
+
+Relative improvement is higher for the high frequency signal
+than for the low frequency signal (fig. \ref{fig:popsizenarrow10} F).
+For the high frequency signal, the improvement is greater
+than for the equivalent interval in the broadband signal for weak noise.
+Relative improvement is also greater
+than it was for high average input (fig. \ref{fig:popsizenarrow15} E \& F).
+Here, even for weak noise, increasing the population size has a large effect.
+In other words, to encode the high frequency signal in the case of a
+low average firing rate, population size is critical to the quality
+of the encoding. For the low frequency signal there is no large difference
+in relative improvement compared to the other cases.
+
diff --git a/fish_bands.tex b/fish_bands.tex new file mode 100644 index 0000000..8be6e47 --- /dev/null +++ b/fish_bands.tex @@ -0,0 +1,130 @@
+\section*{Electric fish as a real world model system}
+
+To put the results from our simulations into a real world context, we chose the
+weakly electric fish \textit{Apteronotus leptorhynchus} as a model system.
+\lepto\ uses an electric organ to produce an electric field, which it
+uses for orientation, prey detection and communication. Distributed over the skin
+of \lepto\ are electroreceptors which produce action potentials
+in response to electric signals.
+
+These receptor cells ("p-units") are analogous to the
+neurons in our simulations because they do not receive any
+input other than the signal they are encoding. Individual cells fire independently
+of each other and there is no feedback.
+
+\subsection*{Results}
+
+Figure \ref{fig:ex_data} A, B and C show three examples of coherence from intracellular
+measurements in \lepto. Each cell was exposed to up to 128 repetitions of the
+same signal. The response was then averaged over different numbers of trials to
+simulate different population sizes of homogeneous cells.
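The effect of averaging over trials can be illustrated with a small numpy sketch (a toy model, not the analysis code used here): each "trial" is the stimulus plus independent noise, and the magnitude-squared coherence between the stimulus and the trial-averaged response, estimated from segment-averaged spectra, increases with the number of averaged trials:

```python
import numpy as np

def coherence(x, y, nperseg=256):
    """Magnitude-squared coherence from segment-averaged spectra
    (no windowing or overlap, for brevity)."""
    nseg = len(x) // nperseg
    X = np.fft.rfft(np.reshape(x[:nseg * nperseg], (nseg, nperseg)), axis=1)
    Y = np.fft.rfft(np.reshape(y[:nseg * nperseg], (nseg, nperseg)), axis=1)
    sxx = np.mean(np.abs(X) ** 2, axis=0)
    syy = np.mean(np.abs(Y) ** 2, axis=0)
    sxy = np.mean(np.conj(X) * Y, axis=0)
    return np.abs(sxy) ** 2 / (sxx * syy)

rng = np.random.default_rng(1)
stimulus = rng.standard_normal(2 ** 16)
# 128 noisy "trials": stimulus plus independent noise of equal variance
trials = stimulus + rng.standard_normal((128, 2 ** 16))

for n in (1, 16, 128):
    avg = trials[:n].mean(axis=0)          # trial-averaged response
    print(n, coherence(stimulus, avg).mean())
```

Because averaging n independent trials reduces the noise variance by a factor of n, the expected coherence of the average is roughly n/(n+1) at every frequency in this toy model, so it rises from about 0.5 for a single trial toward 1 for large n.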
We can see that an increase
+in population size leads to higher coherence. Similar to what we saw in the simulations,
+coherence decreases sharply around the average firing rate of the cell (marked by the red vertical lines).
+We then aggregated the results for 31 different cells (50 experiments in total,
+as some cells were presented with the stimulus more than once).
+Figure \ref{fig:ex_data} D shows that the increase is largest inside the
+high frequency intervals. As we saw in our simulations (figures \ref{fig:popsizenarrow15} C
+and \ref{fig:popsizenarrow10} C), the ratio of the coding fraction in a large population
+to the coding fraction in a single cell is larger for higher frequencies.
+
+%simulation plots are from 200hz/nice coherence curves.ipynb
+\begin{figure}
+	\centering
+	\includegraphics[width=0.49\linewidth]{img/fish/coherence_example.pdf}
+	\includegraphics[width=0.49\linewidth]{img/fish/coherence_example_narrow.pdf}
+	\includegraphics[width=0.49\linewidth]{{img/coherence/broad_coherence_15.0_1.0_different_popsizes_0.001}.pdf}
+	\includegraphics[width=0.49\linewidth]{{img/coherence/coherence_15.0_0.5_narrow_both_different_popsizes_0.001}.pdf}
+	\label{fig:ex_data}
+	\caption{A, B, C: examples of coherence in the p-units of \lepto. Each plot shows
+	the coherence of the response of a single cell to a stimulus for different numbers of trials.
+	As in the simulations, increased population sizes lead to higher coherence.
+	D: Encoding of higher frequency intervals profits more from an increase in
+	population size than encoding of lower frequency intervals. Shown is
+	the ratio of the coding fraction for the largest number of trials to
+	the coding fraction for a single trial, for each of six different frequency
+	intervals and all 50 experiments (31 different cells).
+	The orange line marks the median value for all cells. The box
+	extends over the second and third quartiles.
}
+\end{figure}
+
+
+\begin{figure}
+	\centering
+	broad
+
+	\includegraphics[width=0.48\linewidth]{img/fish/cf_curves/cfN_broad_0.pdf}
+	\includegraphics[width=0.48\linewidth]{img/fish/cf_curves/cfN_broad_1.pdf}
+	\includegraphics[width=0.48\linewidth]{img/fish/cf_curves/cfN_broad_2.pdf}
+	\includegraphics[width=0.48\linewidth]{img/fish/cf_curves/cfN_broad_3.pdf}
+\end{figure}
+
+%box_script.py, quot_sigma() and quot_sigma_narrow()
+\begin{figure}
+	\centering
+	broad
+
+	\includegraphics[width=0.16\linewidth]{img/fish/scatter/sigma_cf_quot_broad_0_50.pdf}
+	\includegraphics[width=0.16\linewidth]{img/fish/scatter/sigma_cf_quot_broad_50_100.pdf}
+	\includegraphics[width=0.16\linewidth]{img/fish/scatter/sigma_cf_quot_broad_100_150.pdf}
+	\includegraphics[width=0.16\linewidth]{img/fish/scatter/sigma_cf_quot_broad_150_200.pdf}
+	\includegraphics[width=0.16\linewidth]{img/fish/scatter/sigma_cf_quot_broad_200_250.pdf}
+	\includegraphics[width=0.16\linewidth]{img/fish/scatter/sigma_cf_quot_broad_250_300.pdf}
+
+	\includegraphics[width=0.16\linewidth]{img/fish/scatter/check_fr_quot_broad_0_50.pdf}
+	\includegraphics[width=0.16\linewidth]{img/fish/scatter/check_fr_quot_broad_50_100.pdf}
+	\includegraphics[width=0.16\linewidth]{img/fish/scatter/check_fr_quot_broad_100_150.pdf}
+	\includegraphics[width=0.16\linewidth]{img/fish/scatter/check_fr_quot_broad_150_200.pdf}
+	\includegraphics[width=0.16\linewidth]{img/fish/scatter/check_fr_quot_broad_200_250.pdf}
+	\includegraphics[width=0.16\linewidth]{img/fish/scatter/check_fr_quot_broad_250_300.pdf}
+
+	narrow
+
+	\includegraphics[width=0.16\linewidth]{img/fish/scatter/sigma_cf_quot_narrow_0_50.pdf}
+	\includegraphics[width=0.16\linewidth]{img/fish/scatter/sigma_cf_quot_narrow_50_100.pdf}
+	\includegraphics[width=0.16\linewidth]{img/fish/scatter/sigma_cf_quot_narrow_150_200.pdf}
+	\includegraphics[width=0.16\linewidth]{img/fish/scatter/sigma_cf_quot_narrow_250_300.pdf}
+	
\includegraphics[width=0.16\linewidth]{img/fish/scatter/sigma_cf_quot_narrow_350_400.pdf}
+
+	\includegraphics[width=0.16\linewidth]{img/fish/scatter/check_fr_quot_narrow_0_50.pdf}
+	\includegraphics[width=0.16\linewidth]{img/fish/scatter/check_fr_quot_narrow_50_100.pdf}
+	\includegraphics[width=0.16\linewidth]{img/fish/scatter/check_fr_quot_narrow_150_200.pdf}
+	\includegraphics[width=0.16\linewidth]{img/fish/scatter/check_fr_quot_narrow_250_300.pdf}
+	\includegraphics[width=0.16\linewidth]{img/fish/scatter/check_fr_quot_narrow_350_400.pdf}
+
+\end{figure}
+
+
+
+\begin{figure}
+\centering
+\includegraphics[width=0.4\linewidth]{img/fish/diff_box.pdf}
+\includegraphics[width=0.4\linewidth]{img/fish/diff_box_narrow.pdf}
+	\includegraphics[width=0.4\linewidth]{img/relative_coding_fractions_box.pdf}
+	\notedh{needs figure 3.6 from yue and equivalent}
+\end{figure}
+
+\begin{figure}
+	\includegraphics[width=0.49\linewidth]{img/fish/ratio_narrow.pdf}
+	\includegraphics[width=0.49\linewidth]{img/fish/broad_ratio.pdf}
+	\label{freq_delta_cf}
+	\caption{Frequency and how it determines $\Delta_{cf}$. In the other paper I used $\mathit{quot}_{cf}$.}
+\end{figure}
+
+\subsection*{Discussion}
+
+We confirmed that the results from the theory part of the paper also hold in a
+real-world example. Inside the brain of the weakly electric fish
+\textit{Apteronotus leptorhynchus}, pyramidal cells in different areas
+are responsible for encoding different frequencies. In each of those areas,
+cells integrate over different numbers of receptor cells.
+Artificial populations consisting of different trials of the same receptor cell
+reproduce what we have seen in our simulations: larger populations help
+especially with the encoding of high frequency signals. These results
+are in line with what is known about the pyramidal cells of \lepto:
+the cells that encode high frequency signals best are those that
+integrate over the largest number of neurons.
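A back-of-the-envelope model shows why high frequency intervals profit more from larger populations. If each neuron transmits the signal over an additive-noise channel with per-neuron signal-to-noise ratio $s(f)$, averaging $n$ neurons with independent noise yields coherence $C_n(f) = n s(f)/(n s(f) + 1)$, so the improvement $C_n/C_1$ is largest where $s(f)$ is small, i.e. at high frequencies for a low-pass neuron. This linear channel is an illustration only, not the spiking model used in the simulations:

```python
def coherence_ratio(s, n):
    """C_n / C_1 for an additive-noise channel with per-neuron SNR s."""
    c_1 = s / (s + 1.0)
    c_n = n * s / (n * s + 1.0)
    return c_n / c_1

# low-pass neuron: high SNR in the low frequency band, low SNR in the high one
ratio_low = coherence_ratio(s=10.0, n=256)   # already well encoded by one cell
ratio_high = coherence_ratio(s=0.1, n=256)   # poorly encoded by one cell

assert ratio_high > ratio_low > 1.0  # high frequencies profit most
```

In the well-encoded band the single-cell coherence is already near its ceiling of one, so averaging can add little; in the poorly encoded band the same averaging multiplies the effective SNR by $n$ and the coherence has room to grow.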
+
diff --git a/fish_methods.tex b/fish_methods.tex
new file mode 100644
index 0000000..7a6c5b6
--- /dev/null
+++ b/fish_methods.tex
@@ -0,0 +1,65 @@
+\subsection*{Electrophysiology}
+
+We recorded electrophysiological data from X cells from Y different fish.
+
+\textit{Surgery.} Twenty-two \textit{E. virescens} (10 to 21 cm) were used for
+single-unit recordings. Recordings of electroreceptors were made
+from the anterior part of the lateral line nerve.
+Fish were initially anesthetized with 150 mg/l MS-222 (PharmaQ,
+Fordingbridge, UK) until gill movements ceased and were then
+respirated with a constant flow of water through a mouth tube
+containing 120 mg/l MS-222 during the surgery to sustain anesthesia.
+The lateral line nerve was exposed dorsal to the operculum. Fish were
+fixed in the setup with a plastic rod glued to the exposed skull bone.
+The wounds were locally anesthetized with 2\% lidocaine hydrochloride
+(bela-pharm, Vechta, Germany) before the nerve was exposed.
+Local anesthesia was renewed every 2 h by careful application of
+lidocaine to the skin surrounding the wound.
+After surgery, fish were immobilized with 0.05 ml of 5 mg/ml tubocurarine (Sigma-Aldrich, Steinheim, Germany) injected into the trunk
+muscles.
+\sout{Since tubocurarine suppresses all muscular activity, it also
+suppresses the activity of the electrocytes of the electric organ and thus
+strongly reduces the EOD of the fish. We therefore mimicked the EOD
+by a sinusoidal signal provided by a sine-wave generator (Hameg HMF
+2525; Hameg Instruments, Mainhausen, Germany) via silver electrodes
+in the mouth tube and at the tail. The amplitude and frequency of the
+artificial field were adjusted to the fish’s own field as measured before
+surgery.} After surgery, fish were transferred into the recording tank of the
+setup filled with water from the fish’s housing tank not containing
+MS-222. Respiration was continued without anesthesia.
The animals
+were submerged into the water so that the exposed nerve was just above
+the water surface. Electroreceptors located on the parts above the water
+surface did not respond to the stimulus and were excluded from the analysis.
+Water temperature was kept at 26°C.\footnote{From St\"ockl et al. 2014}
+
+\textit{Recording.} Action potentials from electroreceptor afferents were
+recorded intracellularly with sharp borosilicate microelectrodes
+(GB150F-8P; Science Products, Hofheim, Germany), pulled to a resistance between 20 and 100 M$\Omega$ and filled with a 1 M KCl solution.
+Electrodes were positioned by microdrives (Luigs-Neumann, Ratingen,
+Germany). As a reference, glass microelectrodes were used. They were
+placed in the tissue surrounding the nerve, adjusted to the isopotential line
+of the recording electrode. The potential between the micropipette and the
+reference electrode was amplified (SEC-05X; npi electronics) and low-pass filtered at 10 kHz. Signals were digitized by a data acquisition board
+(PCI-6229; National Instruments) at a sampling rate of 20 kHz. Spikes
+were detected and identified online based on the peak-detection algorithm
+proposed by Todd and Andrews (1999).
+The EOD of the fish was measured between the head and tail via
+two carbon rod electrodes (11 cm long, 8-mm diameter). The potential
+at the skin of the fish was recorded by a pair of silver wires, spaced
+1 cm apart, which were placed orthogonal to the side of the fish at
+two-thirds body length. The residual EOD potentials were recorded
+and monitored with a pair of silver wire electrodes placed in a piece
+of tube that was put over the tip of the tail. These EOD voltages were
+amplified by a factor of 1,000 and band-pass filtered between 3 Hz and
+1.5 kHz (DPA-2FXM; npi electronics).
+Stimuli were attenuated (ATN-01M; npi electronics), isolated from +ground (ISO-02V; npi electronics), and delivered by two carbon rod +electrodes (30-cm length, 8-mm diameter) placed on either side of the +fish parallel to its longitudinal axis. Stimuli were calibrated to evoke +defined AM measured close to the fish. Spike and EOD detection, +stimulus generation and attenuation, as well as preanalysis of the +data were performed online during the experiment within the +RELACS software version 0.9.7 using the efish plugin-set (by J. +Benda: http://www.relacs.net).\footnote{From St\"ockl et al. 2014} + +\textit{Stimulation.} White noise stimuli with a cutoff frequency of 300{\hertz} defined an AM of the fish's signal. The stimulus was combined with the fish's own EOD in a way that the desired AM could be measured near the fish. Amplitude of the AM was 10\% (?) of the amplitude of the EOD. Stimulus duration was between 2s and 10s, with a time resolution of X. diff --git a/img/0to50_broad_small_coherence_10.5_0.5.pdf b/img/0to50_broad_small_coherence_10.5_0.5.pdf new file mode 100644 index 0000000..2675e51 Binary files /dev/null and b/img/0to50_broad_small_coherence_10.5_0.5.pdf differ diff --git a/img/1Hz_vs_10Hz_alternativ.pdf b/img/1Hz_vs_10Hz_alternativ.pdf new file mode 100644 index 0000000..0d8b8d8 Binary files /dev/null and b/img/1Hz_vs_10Hz_alternativ.pdf differ diff --git a/img/ISI_explanation.pdf b/img/ISI_explanation.pdf new file mode 100644 index 0000000..7f37f46 Binary files /dev/null and b/img/ISI_explanation.pdf differ diff --git a/img/basic/basic_15.0_1.0_200_detail_with_max.pdf b/img/basic/basic_15.0_1.0_200_detail_with_max.pdf new file mode 100644 index 0000000..c4287ef Binary files /dev/null and b/img/basic/basic_15.0_1.0_200_detail_with_max.pdf differ diff --git a/img/basic/n_basic_compare_50_detail.pdf b/img/basic/n_basic_compare_50_detail.pdf new file mode 100644 index 0000000..7b5fb68 Binary files /dev/null and 
b/img/basic/n_basic_compare_50_detail.pdf differ diff --git a/img/basic/n_basic_weak_15.0_1.0_200_detail.pdf b/img/basic/n_basic_weak_15.0_1.0_200_detail.pdf new file mode 100644 index 0000000..9f62f06 Binary files /dev/null and b/img/basic/n_basic_weak_15.0_1.0_200_detail.pdf differ diff --git a/img/broad_coherence_10.5_1.0_200.pdf b/img/broad_coherence_10.5_1.0_200.pdf new file mode 100644 index 0000000..03530d7 Binary files /dev/null and b/img/broad_coherence_10.5_1.0_200.pdf differ diff --git a/img/broad_coherence_15.0_1.0.pdf b/img/broad_coherence_15.0_1.0.pdf new file mode 100644 index 0000000..211a8fe Binary files /dev/null and b/img/broad_coherence_15.0_1.0.pdf differ diff --git a/img/broadband_optimum_newcolor.pdf b/img/broadband_optimum_newcolor.pdf new file mode 100644 index 0000000..ad86178 Binary files /dev/null and b/img/broadband_optimum_newcolor.pdf differ diff --git a/img/coding_fraction_vs_frequency.pdf b/img/coding_fraction_vs_frequency.pdf new file mode 100644 index 0000000..2e4d2ea Binary files /dev/null and b/img/coding_fraction_vs_frequency.pdf differ diff --git a/img/coherence/broad_coherence_15.0_1.0_different_popsizes_0.001.pdf b/img/coherence/broad_coherence_15.0_1.0_different_popsizes_0.001.pdf new file mode 100644 index 0000000..8b21ae6 Binary files /dev/null and b/img/coherence/broad_coherence_15.0_1.0_different_popsizes_0.001.pdf differ diff --git a/img/coherence/coherence_15.0_0.5_narrow_both_different_popsizes_0.001.pdf b/img/coherence/coherence_15.0_0.5_narrow_both_different_popsizes_0.001.pdf new file mode 100644 index 0000000..fe1960f Binary files /dev/null and b/img/coherence/coherence_15.0_0.5_narrow_both_different_popsizes_0.001.pdf differ diff --git a/img/coherence_10.5_0.5_narrow_both.pdf b/img/coherence_10.5_0.5_narrow_both.pdf new file mode 100644 index 0000000..1815c17 Binary files /dev/null and b/img/coherence_10.5_0.5_narrow_both.pdf differ diff --git a/img/coherence_15.0_0.5_narrow_both.pdf 
b/img/coherence_15.0_0.5_narrow_both.pdf new file mode 100644 index 0000000..d2dde3b Binary files /dev/null and b/img/coherence_15.0_0.5_narrow_both.pdf differ diff --git a/img/cv_psth_sigma_compare/dataframe_scatter_labels_D_cv_membrane_50.pdf b/img/cv_psth_sigma_compare/dataframe_scatter_labels_D_cv_membrane_50.pdf new file mode 100644 index 0000000..f7f952a Binary files /dev/null and b/img/cv_psth_sigma_compare/dataframe_scatter_labels_D_cv_membrane_50.pdf differ diff --git a/img/cv_psth_sigma_compare/dataframe_scatter_labels_D_cv_refractory_50.pdf b/img/cv_psth_sigma_compare/dataframe_scatter_labels_D_cv_refractory_50.pdf new file mode 100644 index 0000000..748b09d Binary files /dev/null and b/img/cv_psth_sigma_compare/dataframe_scatter_labels_D_cv_refractory_50.pdf differ diff --git a/img/cv_psth_sigma_compare/dataframe_scatter_labels_D_psth_1ms_membrane_50.pdf b/img/cv_psth_sigma_compare/dataframe_scatter_labels_D_psth_1ms_membrane_50.pdf new file mode 100644 index 0000000..984eb04 Binary files /dev/null and b/img/cv_psth_sigma_compare/dataframe_scatter_labels_D_psth_1ms_membrane_50.pdf differ diff --git a/img/cv_psth_sigma_compare/dataframe_scatter_labels_D_psth_1ms_refractory_50.pdf b/img/cv_psth_sigma_compare/dataframe_scatter_labels_D_psth_1ms_refractory_50.pdf new file mode 100644 index 0000000..2bfc0d4 Binary files /dev/null and b/img/cv_psth_sigma_compare/dataframe_scatter_labels_D_psth_1ms_refractory_50.pdf differ diff --git a/img/cv_psth_sigma_compare/dataframe_scatter_labels_D_psth_5ms_membrane_50.pdf b/img/cv_psth_sigma_compare/dataframe_scatter_labels_D_psth_5ms_membrane_50.pdf new file mode 100644 index 0000000..e976ab2 Binary files /dev/null and b/img/cv_psth_sigma_compare/dataframe_scatter_labels_D_psth_5ms_membrane_50.pdf differ diff --git a/img/cv_psth_sigma_compare/dataframe_scatter_labels_D_psth_5ms_refractory_50.pdf b/img/cv_psth_sigma_compare/dataframe_scatter_labels_D_psth_5ms_refractory_50.pdf new file mode 100644 index 0000000..11df35c 
Binary files /dev/null and b/img/cv_psth_sigma_compare/dataframe_scatter_labels_D_psth_5ms_refractory_50.pdf differ diff --git a/img/cv_psth_sigma_compare/dataframe_scatter_labels_D_sigma_membrane_50.pdf b/img/cv_psth_sigma_compare/dataframe_scatter_labels_D_sigma_membrane_50.pdf new file mode 100644 index 0000000..ee8a57a Binary files /dev/null and b/img/cv_psth_sigma_compare/dataframe_scatter_labels_D_sigma_membrane_50.pdf differ diff --git a/img/cv_psth_sigma_compare/dataframe_scatter_labels_D_sigma_refractory_50.pdf b/img/cv_psth_sigma_compare/dataframe_scatter_labels_D_sigma_refractory_50.pdf new file mode 100644 index 0000000..01a8409 Binary files /dev/null and b/img/cv_psth_sigma_compare/dataframe_scatter_labels_D_sigma_refractory_50.pdf differ diff --git a/img/d_over_n/d_10.5_0.5_10_detail.pdf b/img/d_over_n/d_10.5_0.5_10_detail.pdf new file mode 100644 index 0000000..2a1191d Binary files /dev/null and b/img/d_over_n/d_10.5_0.5_10_detail.pdf differ diff --git a/img/d_over_n/d_15.0_0.5_50_detail.pdf b/img/d_over_n/d_15.0_0.5_50_detail.pdf new file mode 100644 index 0000000..15f921b Binary files /dev/null and b/img/d_over_n/d_15.0_0.5_50_detail.pdf differ diff --git a/img/d_over_n/d_15.0_1.0_200_detail.pdf b/img/d_over_n/d_15.0_1.0_200_detail.pdf new file mode 100644 index 0000000..2a0974e Binary files /dev/null and b/img/d_over_n/d_15.0_1.0_200_detail.pdf differ diff --git a/img/d_over_n/d_over_n_10.5_0.5_10_detail.pdf b/img/d_over_n/d_over_n_10.5_0.5_10_detail.pdf new file mode 100644 index 0000000..2fa7570 Binary files /dev/null and b/img/d_over_n/d_over_n_10.5_0.5_10_detail.pdf differ diff --git a/img/d_over_n/d_over_n_15.0_0.5_50_detail.pdf b/img/d_over_n/d_over_n_15.0_0.5_50_detail.pdf new file mode 100644 index 0000000..5419e77 Binary files /dev/null and b/img/d_over_n/d_over_n_15.0_0.5_50_detail.pdf differ diff --git a/img/d_over_n/d_over_n_15.0_1.0_200_detail.pdf b/img/d_over_n/d_over_n_15.0_1.0_200_detail.pdf new file mode 100644 index 
0000000..5312a4d Binary files /dev/null and b/img/d_over_n/d_over_n_15.0_1.0_200_detail.pdf differ diff --git a/img/dataframe_scatter_D_normalized_psth_1ms_test_tau.pdf b/img/dataframe_scatter_D_normalized_psth_1ms_test_tau.pdf new file mode 100644 index 0000000..1b882bf Binary files /dev/null and b/img/dataframe_scatter_D_normalized_psth_1ms_test_tau.pdf differ diff --git a/img/dataframe_scatter_D_normalized_psth_5ms_test.pdf b/img/dataframe_scatter_D_normalized_psth_5ms_test.pdf new file mode 100644 index 0000000..11ce630 Binary files /dev/null and b/img/dataframe_scatter_D_normalized_psth_5ms_test.pdf differ diff --git a/img/dataframe_scatter_D_psth_5ms_test.pdf b/img/dataframe_scatter_D_psth_5ms_test.pdf new file mode 100644 index 0000000..c8a14cc Binary files /dev/null and b/img/dataframe_scatter_D_psth_5ms_test.pdf differ diff --git a/img/fish/broad_ratio.pdf b/img/fish/broad_ratio.pdf new file mode 100644 index 0000000..31f3c4f Binary files /dev/null and b/img/fish/broad_ratio.pdf differ diff --git a/img/fish/cf_curves/cfN_broad_0.pdf b/img/fish/cf_curves/cfN_broad_0.pdf new file mode 100644 index 0000000..6b25f66 Binary files /dev/null and b/img/fish/cf_curves/cfN_broad_0.pdf differ diff --git a/img/fish/cf_curves/cfN_broad_1.pdf b/img/fish/cf_curves/cfN_broad_1.pdf new file mode 100644 index 0000000..c529b72 Binary files /dev/null and b/img/fish/cf_curves/cfN_broad_1.pdf differ diff --git a/img/fish/cf_curves/cfN_broad_2.pdf b/img/fish/cf_curves/cfN_broad_2.pdf new file mode 100644 index 0000000..eb61764 Binary files /dev/null and b/img/fish/cf_curves/cfN_broad_2.pdf differ diff --git a/img/fish/cf_curves/cfN_broad_3.pdf b/img/fish/cf_curves/cfN_broad_3.pdf new file mode 100644 index 0000000..4bdc23b Binary files /dev/null and b/img/fish/cf_curves/cfN_broad_3.pdf differ diff --git a/img/fish/coherence_example.pdf b/img/fish/coherence_example.pdf new file mode 100644 index 0000000..e924142 Binary files /dev/null and b/img/fish/coherence_example.pdf differ 
diff --git a/img/fish/coherence_example_narrow.pdf b/img/fish/coherence_example_narrow.pdf new file mode 100644 index 0000000..84ff075 Binary files /dev/null and b/img/fish/coherence_example_narrow.pdf differ diff --git a/img/fish/cv_distribution.pdf b/img/fish/cv_distribution.pdf new file mode 100644 index 0000000..205b883 Binary files /dev/null and b/img/fish/cv_distribution.pdf differ diff --git a/img/fish/dataframe_scatter_sigma_cv.pdf b/img/fish/dataframe_scatter_sigma_cv.pdf new file mode 100644 index 0000000..683a8ee Binary files /dev/null and b/img/fish/dataframe_scatter_sigma_cv.pdf differ diff --git a/img/fish/dataframe_scatter_sigma_firing_rate.pdf b/img/fish/dataframe_scatter_sigma_firing_rate.pdf new file mode 100644 index 0000000..429c751 Binary files /dev/null and b/img/fish/dataframe_scatter_sigma_firing_rate.pdf differ diff --git a/img/fish/diff_box.pdf b/img/fish/diff_box.pdf new file mode 100644 index 0000000..397aed0 Binary files /dev/null and b/img/fish/diff_box.pdf differ diff --git a/img/fish/diff_box_narrow.pdf b/img/fish/diff_box_narrow.pdf new file mode 100644 index 0000000..7a82ade Binary files /dev/null and b/img/fish/diff_box_narrow.pdf differ diff --git a/img/fish/fr_distribution.pdf b/img/fish/fr_distribution.pdf new file mode 100644 index 0000000..a64a5db Binary files /dev/null and b/img/fish/fr_distribution.pdf differ diff --git a/img/fish/ratio_narrow.pdf b/img/fish/ratio_narrow.pdf new file mode 100644 index 0000000..aeb54c1 Binary files /dev/null and b/img/fish/ratio_narrow.pdf differ diff --git a/img/fish/scatter/check_fr_quot_broad_0_50.pdf b/img/fish/scatter/check_fr_quot_broad_0_50.pdf new file mode 100644 index 0000000..2d575ce Binary files /dev/null and b/img/fish/scatter/check_fr_quot_broad_0_50.pdf differ diff --git a/img/fish/scatter/check_fr_quot_broad_100_150.pdf b/img/fish/scatter/check_fr_quot_broad_100_150.pdf new file mode 100644 index 0000000..a97adb8 Binary files /dev/null and 
b/img/fish/scatter/check_fr_quot_broad_100_150.pdf differ diff --git a/img/fish/scatter/check_fr_quot_broad_150_200.pdf b/img/fish/scatter/check_fr_quot_broad_150_200.pdf new file mode 100644 index 0000000..86ca159 Binary files /dev/null and b/img/fish/scatter/check_fr_quot_broad_150_200.pdf differ diff --git a/img/fish/scatter/check_fr_quot_broad_200_250.pdf b/img/fish/scatter/check_fr_quot_broad_200_250.pdf new file mode 100644 index 0000000..655dfb0 Binary files /dev/null and b/img/fish/scatter/check_fr_quot_broad_200_250.pdf differ diff --git a/img/fish/scatter/check_fr_quot_broad_250_300.pdf b/img/fish/scatter/check_fr_quot_broad_250_300.pdf new file mode 100644 index 0000000..5da364c Binary files /dev/null and b/img/fish/scatter/check_fr_quot_broad_250_300.pdf differ diff --git a/img/fish/scatter/check_fr_quot_broad_50_100.pdf b/img/fish/scatter/check_fr_quot_broad_50_100.pdf new file mode 100644 index 0000000..20773b6 Binary files /dev/null and b/img/fish/scatter/check_fr_quot_broad_50_100.pdf differ diff --git a/img/fish/scatter/check_fr_quot_narrow_0_50.pdf b/img/fish/scatter/check_fr_quot_narrow_0_50.pdf new file mode 100644 index 0000000..ed53f8e Binary files /dev/null and b/img/fish/scatter/check_fr_quot_narrow_0_50.pdf differ diff --git a/img/fish/scatter/check_fr_quot_narrow_150_200.pdf b/img/fish/scatter/check_fr_quot_narrow_150_200.pdf new file mode 100644 index 0000000..973042b Binary files /dev/null and b/img/fish/scatter/check_fr_quot_narrow_150_200.pdf differ diff --git a/img/fish/scatter/check_fr_quot_narrow_250_300.pdf b/img/fish/scatter/check_fr_quot_narrow_250_300.pdf new file mode 100644 index 0000000..25bf5ec Binary files /dev/null and b/img/fish/scatter/check_fr_quot_narrow_250_300.pdf differ diff --git a/img/fish/scatter/check_fr_quot_narrow_350_400.pdf b/img/fish/scatter/check_fr_quot_narrow_350_400.pdf new file mode 100644 index 0000000..4bec4cc Binary files /dev/null and b/img/fish/scatter/check_fr_quot_narrow_350_400.pdf differ diff 
--git a/img/fish/scatter/check_fr_quot_narrow_50_100.pdf b/img/fish/scatter/check_fr_quot_narrow_50_100.pdf new file mode 100644 index 0000000..670152d Binary files /dev/null and b/img/fish/scatter/check_fr_quot_narrow_50_100.pdf differ diff --git a/img/fish/scatter/sigma_cf_quot_broad_0_50.pdf b/img/fish/scatter/sigma_cf_quot_broad_0_50.pdf new file mode 100644 index 0000000..3fe3ba5 Binary files /dev/null and b/img/fish/scatter/sigma_cf_quot_broad_0_50.pdf differ diff --git a/img/fish/scatter/sigma_cf_quot_broad_100_150.pdf b/img/fish/scatter/sigma_cf_quot_broad_100_150.pdf new file mode 100644 index 0000000..f3b566b Binary files /dev/null and b/img/fish/scatter/sigma_cf_quot_broad_100_150.pdf differ diff --git a/img/fish/scatter/sigma_cf_quot_broad_150_200.pdf b/img/fish/scatter/sigma_cf_quot_broad_150_200.pdf new file mode 100644 index 0000000..d592bed Binary files /dev/null and b/img/fish/scatter/sigma_cf_quot_broad_150_200.pdf differ diff --git a/img/fish/scatter/sigma_cf_quot_broad_200_250.pdf b/img/fish/scatter/sigma_cf_quot_broad_200_250.pdf new file mode 100644 index 0000000..58d7e88 Binary files /dev/null and b/img/fish/scatter/sigma_cf_quot_broad_200_250.pdf differ diff --git a/img/fish/scatter/sigma_cf_quot_broad_250_300.pdf b/img/fish/scatter/sigma_cf_quot_broad_250_300.pdf new file mode 100644 index 0000000..b46ce01 Binary files /dev/null and b/img/fish/scatter/sigma_cf_quot_broad_250_300.pdf differ diff --git a/img/fish/scatter/sigma_cf_quot_broad_50_100.pdf b/img/fish/scatter/sigma_cf_quot_broad_50_100.pdf new file mode 100644 index 0000000..4180cd9 Binary files /dev/null and b/img/fish/scatter/sigma_cf_quot_broad_50_100.pdf differ diff --git a/img/fish/scatter/sigma_cf_quot_narrow_0_50.pdf b/img/fish/scatter/sigma_cf_quot_narrow_0_50.pdf new file mode 100644 index 0000000..09eb06d Binary files /dev/null and b/img/fish/scatter/sigma_cf_quot_narrow_0_50.pdf differ diff --git a/img/fish/scatter/sigma_cf_quot_narrow_150_200.pdf 
b/img/fish/scatter/sigma_cf_quot_narrow_150_200.pdf new file mode 100644 index 0000000..62908ec Binary files /dev/null and b/img/fish/scatter/sigma_cf_quot_narrow_150_200.pdf differ diff --git a/img/fish/scatter/sigma_cf_quot_narrow_250_300.pdf b/img/fish/scatter/sigma_cf_quot_narrow_250_300.pdf new file mode 100644 index 0000000..41b74f8 Binary files /dev/null and b/img/fish/scatter/sigma_cf_quot_narrow_250_300.pdf differ diff --git a/img/fish/scatter/sigma_cf_quot_narrow_350_400.pdf b/img/fish/scatter/sigma_cf_quot_narrow_350_400.pdf new file mode 100644 index 0000000..6f29beb Binary files /dev/null and b/img/fish/scatter/sigma_cf_quot_narrow_350_400.pdf differ diff --git a/img/fish/scatter/sigma_cf_quot_narrow_50_100.pdf b/img/fish/scatter/sigma_cf_quot_narrow_50_100.pdf new file mode 100644 index 0000000..bdb9053 Binary files /dev/null and b/img/fish/scatter/sigma_cf_quot_narrow_50_100.pdf differ diff --git a/img/fish/sigma_distribution.pdf b/img/fish/sigma_distribution.pdf new file mode 100644 index 0000000..c4e7b2c Binary files /dev/null and b/img/fish/sigma_distribution.pdf differ diff --git a/img/improvement_10.5_1.0.pdf b/img/improvement_10.5_1.0.pdf new file mode 100644 index 0000000..97c678d Binary files /dev/null and b/img/improvement_10.5_1.0.pdf differ diff --git a/img/improvement_15.0_1.0.pdf b/img/improvement_15.0_1.0.pdf new file mode 100644 index 0000000..ace0346 Binary files /dev/null and b/img/improvement_15.0_1.0.pdf differ diff --git a/img/intro_raster/example_noise_resonance.pdf b/img/intro_raster/example_noise_resonance.pdf new file mode 100644 index 0000000..1e58d54 Binary files /dev/null and b/img/intro_raster/example_noise_resonance.pdf differ diff --git a/img/max_cf_10.5_1.0.pdf b/img/max_cf_10.5_1.0.pdf new file mode 100644 index 0000000..834edbf Binary files /dev/null and b/img/max_cf_10.5_1.0.pdf differ diff --git a/img/max_cf_15.0_1.0.pdf b/img/max_cf_15.0_1.0.pdf new file mode 100644 index 0000000..d876586 Binary files /dev/null and 
b/img/max_cf_15.0_1.0.pdf differ diff --git a/img/max_cf_smallbroad.pdf b/img/max_cf_smallbroad.pdf new file mode 100644 index 0000000..4505b80 Binary files /dev/null and b/img/max_cf_smallbroad.pdf differ diff --git a/img/non_lin_example_undetail.pdf b/img/non_lin_example_undetail.pdf new file mode 100644 index 0000000..258c53d Binary files /dev/null and b/img/non_lin_example_undetail.pdf differ diff --git a/img/ordnung/base_D_sigma.pdf b/img/ordnung/base_D_sigma.pdf new file mode 100644 index 0000000..b4148cb Binary files /dev/null and b/img/ordnung/base_D_sigma.pdf differ diff --git a/img/ordnung/cropped_fitcurve_0_2010-08-11-aq-invivo-1_0.pdf b/img/ordnung/cropped_fitcurve_0_2010-08-11-aq-invivo-1_0.pdf new file mode 100644 index 0000000..5e71cdb Binary files /dev/null and b/img/ordnung/cropped_fitcurve_0_2010-08-11-aq-invivo-1_0.pdf differ diff --git a/img/ordnung/cropped_fitcurve_0_2010-08-31-ad-invivo-1_0.pdf b/img/ordnung/cropped_fitcurve_0_2010-08-31-ad-invivo-1_0.pdf new file mode 100644 index 0000000..5bb3e85 Binary files /dev/null and b/img/ordnung/cropped_fitcurve_0_2010-08-31-ad-invivo-1_0.pdf differ diff --git a/img/ordnung/refractory_periods_coding_fraction.pdf b/img/ordnung/refractory_periods_coding_fraction.pdf new file mode 100644 index 0000000..579fbe4 Binary files /dev/null and b/img/ordnung/refractory_periods_coding_fraction.pdf differ diff --git a/img/ordnung/sigma_popsize_curves_0to300.pdf b/img/ordnung/sigma_popsize_curves_0to300.pdf new file mode 100644 index 0000000..68cb2fa Binary files /dev/null and b/img/ordnung/sigma_popsize_curves_0to300.pdf differ diff --git a/img/plotted/LIF_example_sketch.pdf b/img/plotted/LIF_example_sketch.pdf new file mode 100644 index 0000000..510c993 Binary files /dev/null and b/img/plotted/LIF_example_sketch.pdf differ diff --git a/img/popsize_10.5_1.0.pdf b/img/popsize_10.5_1.0.pdf new file mode 100644 index 0000000..2f35ca8 Binary files /dev/null and b/img/popsize_10.5_1.0.pdf differ diff --git 
a/img/popsize_15.0_1.0.pdf b/img/popsize_15.0_1.0.pdf new file mode 100644 index 0000000..e106904 Binary files /dev/null and b/img/popsize_15.0_1.0.pdf differ diff --git a/img/power_spectrum_0_50.pdf b/img/power_spectrum_0_50.pdf new file mode 100644 index 0000000..42b33b5 Binary files /dev/null and b/img/power_spectrum_0_50.pdf differ diff --git a/img/rasterplots/best_approximation_spikes_10hz_0.001noi500s_10.5_0.5_1.dat.pdf b/img/rasterplots/best_approximation_spikes_10hz_0.001noi500s_10.5_0.5_1.dat.pdf new file mode 100644 index 0000000..8659433 Binary files /dev/null and b/img/rasterplots/best_approximation_spikes_10hz_0.001noi500s_10.5_0.5_1.dat.pdf differ diff --git a/img/rasterplots/best_approximation_spikes_1hz_0.001noi500s_10.5_0.5_1.dat.pdf b/img/rasterplots/best_approximation_spikes_1hz_0.001noi500s_10.5_0.5_1.dat.pdf new file mode 100644 index 0000000..7ba39e8 Binary files /dev/null and b/img/rasterplots/best_approximation_spikes_1hz_0.001noi500s_10.5_0.5_1.dat.pdf differ diff --git a/img/rasterplots/best_approximation_spikes_200hz_0.001noi500s_10.5_0.5_1.dat.pdf b/img/rasterplots/best_approximation_spikes_200hz_0.001noi500s_10.5_0.5_1.dat.pdf new file mode 100644 index 0000000..625ce9f Binary files /dev/null and b/img/rasterplots/best_approximation_spikes_200hz_0.001noi500s_10.5_0.5_1.dat.pdf differ diff --git a/img/rasterplots/best_approximation_spikes_200hz_1e-07noi500s_15_0.5_1.dat.pdf b/img/rasterplots/best_approximation_spikes_200hz_1e-07noi500s_15_0.5_1.dat.pdf new file mode 100644 index 0000000..3ceaa8c Binary files /dev/null and b/img/rasterplots/best_approximation_spikes_200hz_1e-07noi500s_15_0.5_1.dat.pdf differ diff --git a/img/rasterplots/best_approximation_spikes_50hz_0.001noi500s_10.5_0.5_1.dat.pdf b/img/rasterplots/best_approximation_spikes_50hz_0.001noi500s_10.5_0.5_1.dat.pdf new file mode 100644 index 0000000..f332b67 Binary files /dev/null and b/img/rasterplots/best_approximation_spikes_50hz_0.001noi500s_10.5_0.5_1.dat.pdf differ diff 
--git a/img/rasterplots/best_approximation_spikes_50hz_1e-07noi500s_15_0.5_1.dat.pdf b/img/rasterplots/best_approximation_spikes_50hz_1e-07noi500s_15_0.5_1.dat.pdf new file mode 100644 index 0000000..60be715 Binary files /dev/null and b/img/rasterplots/best_approximation_spikes_50hz_1e-07noi500s_15_0.5_1.dat.pdf differ diff --git a/img/relative_coding_fractions_box.pdf b/img/relative_coding_fractions_box.pdf new file mode 100644 index 0000000..5d98284 Binary files /dev/null and b/img/relative_coding_fractions_box.pdf differ diff --git a/img/samesignal_bestnoise_200Hz_shading_rootfit_new.pdf b/img/samesignal_bestnoise_200Hz_shading_rootfit_new.pdf new file mode 100644 index 0000000..b4222cf Binary files /dev/null and b/img/samesignal_bestnoise_200Hz_shading_rootfit_new.pdf differ diff --git a/img/samesignal_bestnoise_narrowband_shading_rootfit_new.pdf b/img/samesignal_bestnoise_narrowband_shading_rootfit_new.pdf new file mode 100644 index 0000000..d8c393f Binary files /dev/null and b/img/samesignal_bestnoise_narrowband_shading_rootfit_new.pdf differ diff --git a/img/sigma/cf_N_sigma.pdf b/img/sigma/cf_N_sigma.pdf new file mode 100644 index 0000000..fc3a952 Binary files /dev/null and b/img/sigma/cf_N_sigma.pdf differ diff --git a/img/sigma/check_fr_quot.pdf b/img/sigma/check_fr_quot.pdf new file mode 100644 index 0000000..4c514e8 Binary files /dev/null and b/img/sigma/check_fr_quot.pdf differ diff --git a/img/sigma/cropped_fitcurve_0_2010-08-31-aj-invivo-1_0.pdf b/img/sigma/cropped_fitcurve_0_2010-08-31-aj-invivo-1_0.pdf new file mode 100644 index 0000000..e0f8915 Binary files /dev/null and b/img/sigma/cropped_fitcurve_0_2010-08-31-aj-invivo-1_0.pdf differ diff --git a/img/sigma/example_spikes_sigma.pdf b/img/sigma/example_spikes_sigma.pdf new file mode 100644 index 0000000..a10cdf3 Binary files /dev/null and b/img/sigma/example_spikes_sigma.pdf differ diff --git a/img/sigma/sigma_cf_quot.pdf b/img/sigma/sigma_cf_quot.pdf new file mode 100644 index 0000000..8805860 
Binary files /dev/null and b/img/sigma/sigma_cf_quot.pdf differ diff --git a/img/simulation_sigma_examples/fitcurve_50hz_0.0002noi500s_0_capped.pdf b/img/simulation_sigma_examples/fitcurve_50hz_0.0002noi500s_0_capped.pdf new file mode 100644 index 0000000..186c5e7 Binary files /dev/null and b/img/simulation_sigma_examples/fitcurve_50hz_0.0002noi500s_0_capped.pdf differ diff --git a/img/simulation_sigma_examples/fitcurve_50hz_0.001noi500s_0_capped.pdf b/img/simulation_sigma_examples/fitcurve_50hz_0.001noi500s_0_capped.pdf new file mode 100644 index 0000000..0d54239 Binary files /dev/null and b/img/simulation_sigma_examples/fitcurve_50hz_0.001noi500s_0_capped.pdf differ diff --git a/img/simulation_sigma_examples/fitcurve_50hz_0.1noi500s_0_capped.pdf b/img/simulation_sigma_examples/fitcurve_50hz_0.1noi500s_0_capped.pdf new file mode 100644 index 0000000..658bfc2 Binary files /dev/null and b/img/simulation_sigma_examples/fitcurve_50hz_0.1noi500s_0_capped.pdf differ diff --git a/img/small_in_broad_spectrum.pdf b/img/small_in_broad_spectrum.pdf new file mode 100644 index 0000000..261537c Binary files /dev/null and b/img/small_in_broad_spectrum.pdf differ diff --git a/img/smallband_optimum_newcolor.pdf b/img/smallband_optimum_newcolor.pdf new file mode 100644 index 0000000..c974650 Binary files /dev/null and b/img/smallband_optimum_newcolor.pdf differ diff --git a/img/stocks.png b/img/stocks.png new file mode 100644 index 0000000..6f40642 Binary files /dev/null and b/img/stocks.png differ diff --git a/img/temp/best_approximation_spikes_50hz_0.01noi500s_10.5_1_1.dat_16_with_input.pdf b/img/temp/best_approximation_spikes_50hz_0.01noi500s_10.5_1_1.dat_16_with_input.pdf new file mode 100644 index 0000000..b2d74d8 Binary files /dev/null and b/img/temp/best_approximation_spikes_50hz_0.01noi500s_10.5_1_1.dat_16_with_input.pdf differ diff --git a/img/temp/best_approximation_spikes_50hz_0.01noi500s_10.5_1_1.dat_256_with_input.pdf 
b/img/temp/best_approximation_spikes_50hz_0.01noi500s_10.5_1_1.dat_256_with_input.pdf new file mode 100644 index 0000000..7c01acd Binary files /dev/null and b/img/temp/best_approximation_spikes_50hz_0.01noi500s_10.5_1_1.dat_256_with_input.pdf differ diff --git a/img/temp/best_approximation_spikes_50hz_0.01noi500s_10.5_1_1.dat_4_with_input.pdf b/img/temp/best_approximation_spikes_50hz_0.01noi500s_10.5_1_1.dat_4_with_input.pdf new file mode 100644 index 0000000..6dbd7c4 Binary files /dev/null and b/img/temp/best_approximation_spikes_50hz_0.01noi500s_10.5_1_1.dat_4_with_input.pdf differ diff --git a/img/temp/best_approximation_spikes_50hz_0.01noi500s_10.5_1_1.dat_64_with_input.pdf b/img/temp/best_approximation_spikes_50hz_0.01noi500s_10.5_1_1.dat_64_with_input.pdf new file mode 100644 index 0000000..1b51b7f Binary files /dev/null and b/img/temp/best_approximation_spikes_50hz_0.01noi500s_10.5_1_1.dat_64_with_input.pdf differ diff --git a/img/tuningcurves/6.00_to_15.00mV,1.0E-07_to_1.0E-02.pdf b/img/tuningcurves/6.00_to_15.00mV,1.0E-07_to_1.0E-02.pdf new file mode 100644 index 0000000..9801688 Binary files /dev/null and b/img/tuningcurves/6.00_to_15.00mV,1.0E-07_to_1.0E-02.pdf differ diff --git a/img/tuningcurves/codingfraction_from_curves_amplitude_0.1mV.pdf b/img/tuningcurves/codingfraction_from_curves_amplitude_0.1mV.pdf new file mode 100644 index 0000000..6df481f Binary files /dev/null and b/img/tuningcurves/codingfraction_from_curves_amplitude_0.1mV.pdf differ diff --git a/img/tuningcurves/codingfraction_from_curves_amplitude_0.5mV.pdf b/img/tuningcurves/codingfraction_from_curves_amplitude_0.5mV.pdf new file mode 100644 index 0000000..9b66c24 Binary files /dev/null and b/img/tuningcurves/codingfraction_from_curves_amplitude_0.5mV.pdf differ diff --git a/img/tuningcurves/codingfraction_from_curves_mean_10.0mV.pdf b/img/tuningcurves/codingfraction_from_curves_mean_10.0mV.pdf new file mode 100644 index 0000000..57cd2f4 Binary files /dev/null and 
b/img/tuningcurves/codingfraction_from_curves_mean_10.0mV.pdf differ diff --git a/img/tuningcurves/codingfraction_from_curves_mean_10.5mV.pdf b/img/tuningcurves/codingfraction_from_curves_mean_10.5mV.pdf new file mode 100644 index 0000000..57c9dc9 Binary files /dev/null and b/img/tuningcurves/codingfraction_from_curves_mean_10.5mV.pdf differ diff --git a/img/tuningcurves/codingfraction_from_curves_mean_9.5mV.pdf b/img/tuningcurves/codingfraction_from_curves_mean_9.5mV.pdf new file mode 100644 index 0000000..6da7662 Binary files /dev/null and b/img/tuningcurves/codingfraction_from_curves_mean_9.5mV.pdf differ diff --git a/img/tuningcurves/tuningcurve_vs_simulation_10Hz.pdf b/img/tuningcurves/tuningcurve_vs_simulation_10Hz.pdf new file mode 100644 index 0000000..6eb958e Binary files /dev/null and b/img/tuningcurves/tuningcurve_vs_simulation_10Hz.pdf differ diff --git a/img/tuningcurves/tuningcurve_vs_simulation_200Hz.pdf b/img/tuningcurves/tuningcurve_vs_simulation_200Hz.pdf new file mode 100644 index 0000000..75a48bc Binary files /dev/null and b/img/tuningcurves/tuningcurve_vs_simulation_200Hz.pdf differ diff --git a/introduction.tex b/introduction.tex new file mode 100644 index 0000000..a3dfb70 --- /dev/null +++ b/introduction.tex @@ -0,0 +1,72 @@
+In populations of neurons, representation of a common stimulus can be improved by population heterogeneity \citep{ahn2014heterogeneity}. This heterogeneity could, for example, be a different firing threshold for each neuron. Alternatively, the improvement can be achieved by adding dynamic noise to the input of neurons \citep{shapira2016sound}. The effect of adding noise to a sub-threshold signal, a phenomenon known as ``stochastic resonance'' (SR), has been very well investigated during the last decades \citep{benzi1981mechanism,gammaitoni1998resonance, shimokawa1999stochastic}. The noise added to a signal makes it more likely that the signal reaches the detection threshold and thereby triggers an action potential in a neuron. +However, often the goal is not to simply detect a signal but to discriminate between two different signals as well as possible. For example, in auditory communication it is not sufficient to merely detect the presence of sound. Rather, an auditory stimulus should be encoded such that an optimal amount of information is gained from the stimulus. +Another example is the electrosensory communication in weakly electric fish. Those fish need to differentiate between aggressive and courtship behaviors. + +In any biological system, there is a limit to the precision of the components making up that system. Even without external input, the spike times of each neuron have some variation and are not perfectly regular.
Increasing spike-timing precision requires more ion channels and thus has a cost in energy requirement \citep{schreiber2002energy} but may only be desirable up to a certain coding quality. + +For signals which are already above the firing threshold, noise can also be beneficial in populations of neurons \citep{stocks2000suprathreshold, Stocks2000,stocks2001information,stocks2001generic,beiran2018coding}, a phenomenon termed ``Suprathreshold Stochastic Resonance'' (SSR). Despite the similarity in name, SR and SSR work in very different ways. In the case of no or very weak noise, the neurons in the population respond to the same features of a common input. Additional noise desynchronizes the response of the neurons and coding quality improves. However, if the noise is too strong, the noise masks the signal and less information is encoded. In the case of infinite noise strength, no information about the signal can be reconstructed from the responses. Therefore, there is an optimal noise strength where coding performance is best. + +In this paper we look at input signals with cutoff frequencies over a large range and populations ranging from a single neuron to many thousands of neurons. +%Research on SSR has mostly focused on low frequency signals and small neuronal populations \citep{stocks2002application,beiran2018coding}. + +%TODO +% However, little attention has been paid to the frequency dependence of the coded signal [citation needed]. Of special interest is how well neurons can code specific frequency intervals inside a larger broadband signal. + +\begin{figure} + \includegraphics[width=0.5\linewidth]{img/stocks} + \\\notejb{You have LIF neurons! Make a figure with a sketch of LIF neurons with dynamic noise and common input, all summed up by a target neuron.
This figure can go on top of Fig 2.} + \caption{Array of threshold systems as described by Stocks.} + \label{stocks} +\end{figure} + + +%Previously it has been shown that there might be an advantage in having both regular and irregular afferents; regular afferents carry information of the time course of stimulus, while irregular afferents code high frequencies better \citep{Sadeghi2007}. How and where in the brain low- and high frequency signals are processed can also be different. This is true for example in the acoustic system of avians \citep{palanca2015vivo} and for the electric sense of weakly electric fish \citep{Krahe2008}. For example, in the brain of the brown ghost knifefish (\textit{A. leptorhynchus}) three different segments receive direct input from receptor cells on the skin of the fish. Between those segments, cells have different frequency filtering properties. Additionally, cells in the different segments have different receptive field sizes, so they receive input from populations which vary in size by orders of magnitude. + +Here we use the Integrate-and-Fire model to simulate neuronal populations receiving a common dynamic input. We look at linear coding of signals by differently sized populations of neurons of a single type, similar to the situation in weakly electric fish. We show that the optimal noise grows with population size and depends on properties of the input. We use input signals of varying frequency widths and cutoffs, and we also vary the strength of the signal. + +We also present results of electrophysiological experiments in the weakly electric fish \textit{Apteronotus leptorhynchus}. Because it is not obvious how to quantify noisiness in the receptor cells of these fish, we compare different methods and find that using the activation curve of the individual neurons allows for the best estimate of the strength of noise in these cells. Then we show that we can see the effects of SSR in the real world example of \textit{A.
leptorhynchus}. + + + +%Using an LIF-neurons's tuning curve, we show that we can model the limit of information transmitted for a given noise strength even in the case of infinitely large populations. In addition, we show that the maximum of information encoded linearly is limited even for infinitely sized populations. + +% We describe behavior of the neural population as a function of noise in the limit of strong noise. We find empirically and analytically that the amount of information about the input signal encoded by the population is a function of noise proportional to population size. + + %Furthermore we show the influence the baseline firing rate of the neurons.% that We show that optimizing for high-frequencies still yields a good result for low frequency parts of the signal, while the reverse is not true, regardless of the size of the neural population. It appears to be a good strategy to optimize the amount of noise for coding high frequencies, as frequency intervals at lower frequencies tolerate non-optimal noise strength better. + +%Figure \ref{example_spiketrains} confirms the result from (Beiran 2017) that suprathreshold stochastic resonance works in the case of dynamic stimuli. An increase in noise allows for an increased capability to reconstruct the original input up to a maximum after which an increased amount of noise masks the signal until there is no information about the input left.
diff --git a/main.pdf b/main.pdf new file mode 100644 index 0000000..7032b16 Binary files /dev/null and b/main.pdf differ diff --git a/main.synctex.gz b/main.synctex.gz new file mode 100644 index 0000000..fa450eb Binary files /dev/null and b/main.synctex.gz differ diff --git a/main.tex b/main.tex new file mode 100644 index 0000000..994f01f --- /dev/null +++ b/main.tex @@ -0,0 +1,131 @@ +\documentclass[a4paper,10pt]{scrartcl} + +%opening +\title{On the role of noise in signal detection} +\author{Dennis Huben} + +\usepackage[T1]{fontenc} +\usepackage[utf8]{inputenc} +\usepackage[english]{babel} +\usepackage{graphicx} +\usepackage{multicol} +\usepackage{scalefnt} +\usepackage{bm} +\usepackage{palatino} +\usepackage{url} +\usepackage{enumitem} +\usepackage{amsmath} +\usepackage{xcolor} +\usepackage{ifthen} + +\usepackage[normalem]{ulem} +\usepackage[round]{natbib} +\usepackage[thinqspace]{SIunits} + +\bibliographystyle{plainnat} +\newcommand{\lepto}{\textit{A. leptorhynchus}} + +\DeclareMathOperator\erfc{erfc} + +\newcommand{\eq}[1]{\begin{align}#1\end{align}} + +\newcommand{\note}[2][]{\textcolor{red!80!black}{[\textbf{\ifthenelse{\equal{#1}{}}{}{#1: }}#2]}} + +\newcommand{\notejb}[1]{\note[JB]{#1}} +\newcommand{\notedh}[1]{\note[DH]{#1}} +\newcommand{\newdh}[1]{\textcolor{green}{#1}} + +\begin{document} + +\maketitle + +\begin{abstract} + +\end{abstract} + +\tableofcontents + +\newpage + + +\section{Suprathreshold stochastic resonance} + +\subsection{Introduction} + +In any biological system, there is a limit to the precision of the components making up that system. This means that even without external input the spike times of each individual neuron will have some variation and will not be perfectly regular. Increasing the precision has a cost in energy requirement \citep{schreiber2002energy} but may not even be desirable.
+ +In populations of neurons, representation of a common stimulus can be improved by population heterogeneity \citep{ahn2014heterogeneity}. The source of this heterogeneity could, for example, be a different firing threshold for each neuron. Alternatively, the improvement can be achieved by adding noise to the input of neurons \citep{shapira2016sound}. The effect of adding noise to a sub-threshold signal, a phenomenon known as ``stochastic resonance'' (SR), has been very well investigated during the last decades \citep{benzi1981mechanism,gammaitoni1998resonance, shimokawa1999stochastic}. The noise added to a signal makes it more likely that the signal reaches the detection threshold so that it triggers a spike in a neuron. +But often in nature the goal is not to simply detect a signal but to discriminate between two different signals as well as possible. For example, in auditory communication it is not sufficient to detect the presence of sound; instead the goal is to encode an auditory stimulus so that an optimal amount of information is gained from the stimulus. +Another example is the electrosensory communication between conspecifics in weakly electric fish. Those fish need, for example, to differentiate between aggressive and courtship behaviors. + + +More recently it has been shown that for populations of neurons the beneficial role of noise can also be true for signals which are already above the threshold \citep{stocks2000suprathreshold, Stocks2000,stocks2001information,stocks2001generic,beiran2018coding}, a phenomenon termed ``Suprathreshold Stochastic Resonance'' (SSR). Despite the similarity in name, SR and SSR work in very different ways. The idea behind SSR is that in the case of no or very weak individual noise, the different neurons in the population react to the same features of a common input. Additional noise that affects each cell differently desynchronizes the response of the neurons.
The spiking behavior of the neurons becomes more probabilistic than deterministic in nature. However, if the noise is too strong, the noise masks the signal and less information can be coded than would ideally be possible. In the case of infinite noise strength, no information about the signal can be reconstructed from the responses. Because some noise is beneficial and too much noise is not, there is a noise strength where performance is best. +This thesis investigates populations of neurons reacting to input signals with cutoff frequencies over a large range. Population sizes range from a single neuron to many thousands of neurons. + + +%plot script: lif_summation_sketch.py on denkdirnix +\begin{figure} +\centering +\includegraphics[width=0.5\linewidth]{img/stocks} + +\includegraphics[width=1.\linewidth]{img/plotted/LIF_example_sketch.pdf} +\caption{Array of threshold systems as described by Stocks.} +\label{stocks} +\end{figure} + +Here we use the Integrate-and-Fire model to simulate neuronal populations receiving a common dynamic input. We look at linear coding of signals by differently sized populations of neurons of a single type, similar to the situation in weakly electric fish. We show that the optimal noise grows with population size and depends on properties of the input. We use input signals of varying frequency widths and cutoffs, and we also vary the strength of the signal. + +We also present results of electrophysiological experiments in the weakly electric fish \textit{Apteronotus leptorhynchus}. Because it is not obvious how to quantify noisiness in the receptor cells of these fish, we compare different methods and find that using the activation curve of the individual neurons allows for the best estimate of the strength of noise in these cells. Then we show that we can see the effects of SSR in the real world example of \textit{A.
+ +\subsection{Methods} + +\input{methods_analysis} + +\subsection{Simulations with more neurons} + +\input{simulation_results} + +\input{simulation_further_considerations} + +\subsection{Different frequency ranges} + +\subsection{Narrow-/wideband} + +\input{bands} + +\section{Theory} + +\subsection{Firing rates} + +\input{calculation} + +\input{firing_rate} + +\subsection{Refractory period} + +\input{refractory_periods} + +\section{Electric fish} + +\subsection{Introduction} + +\subsection{Methods} + +\input{fish_methods} + +\subsection{How to determine noisiness} + +\input{sigma} + +\subsection{Results} + +\input{fish_bands} + +\section{Discussion: Combining experiment and simulation} + +\section{Literature} + +\clearpage +\bibliography{citations.bib} + +\end{document} diff --git a/methods_analysis.tex b/methods_analysis.tex new file mode 100644 index 0000000..a260154 --- /dev/null +++ b/methods_analysis.tex @@ -0,0 +1,47 @@ +We use a population neuron model using the Leaky-Integrate-And-Fire (LIF) neuron, described by the equation + +\begin{equation}V_{t}^j = V_{t-1}^j + \frac{\Delta t}{\tau_v} ((\mu-V_{t-1}^j) + \sigma I_{t} + \sqrt{2D/\Delta t}\xi_{t}^j),\quad j \in [1,N]\end{equation} +with $\tau_v = 10 ms$ the membrane time constant, $\mu = 15.0 mV$ or $\mu = 10.5 mV$ as offset. $\sigma$ is a factor which scales the standard deviation of the input, ranging from 0.1 to 1 and I the previously generated stimulus. $\xi_{t}$ are independent Gaussian distributed random variables with mean 0 and variance 1. The Noise D was varied between $1*10^{-7} mV^2/Hz$ and $3 mV^2/Hz$. Whenever $V_{t}$ was greater than the voltage threshold (10mV) a "spike" was recorded and the voltage has been reset to 0mV. $V_{0}$ was initialized to a random value uniformly distributed between 0mV and 10mV. +Simulations of up to 8192 neurons were done using an Euler method with a step size of $\Delta\, t = 0.01$ms. 
Typical firing rates were around 90Hz for an offset of 15.0mV and 35Hz for an offset of 10.5mV. Firing rates were larger for high noise levels than for low noise levels. +We simulated large populations (up to 2048) of LIF-neurons. + +As stimulus we used Gaussian white noise signals with different frequency cutoffs on both ends of the spectrum. By construction, the input power spectrum is flat between 0 and $\pm f_{c}$: + +\begin{equation} + S_{ss}(f) = \frac{\sigma^2}{2 \left| f_{c} \right|} \Theta\left(f_{c} - |f|\right).\label{S_ss} +\end{equation} +A Fast Fourier Transform (FFT) was applied to the signal so that it could serve as input stimulus to the simulated cells. The signal was normalized so that the standard deviation of the signal was 1mV; the length of the signal was 500s with a resolution of 0.01ms. + + +\begin{figure} +\includegraphics[scale=0.5]{img/intro_raster/example_noise_resonance.pdf} +\caption{Snapshots of 200ms length from three example simulations with different noise, but all other parameters held constant. Black: Spikes of 32 simulated neurons. The green curve beneath the spikes is the signal that was fed into the network. The blue curve is the best linear reconstruction possible from the spikes. The input signal has a cutoff frequency of 50Hz. +If noise is weak, the neurons behave regularly and similarly to each other (A). For optimal noise strength, the neuronal population follows the signal best (B). If the noise is too strong, the information about the signal gets drowned out (C). D: Example coding fraction curve over the strength of the noise. Marked in red are the noise strengths from which the examples were taken.} +\label{example_spiketrains} +\end{figure} + + + + +\subsection*{Analysis} +For each combination of parameters, a histogram of the output spikes from all neurons or a subset of the neurons was created.
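A stimulus with the flat band-limited spectrum of equation \ref{S_ss} can be generated by drawing random Fourier coefficients inside the pass band and inverse-transforming. A sketch of one common construction (an assumption about the procedure, not the authors' code):

```python
import numpy as np

def bandlimited_gaussian(n_steps, dt, f_low, f_high, std, rng=None):
    """Gaussian stimulus with a flat power spectrum between f_low and
    f_high: draw random Fourier coefficients inside the pass band, zero
    everything else, inverse-transform, and rescale to the desired
    standard deviation."""
    rng = np.random.default_rng() if rng is None else rng
    freqs = np.fft.rfftfreq(n_steps, d=dt)
    spectrum = np.zeros(len(freqs), dtype=complex)
    band = (freqs >= f_low) & (freqs <= f_high)
    spectrum[band] = (rng.standard_normal(band.sum())
                      + 1j * rng.standard_normal(band.sum()))
    spectrum[0] = 0.0  # no DC component, so the signal has zero mean
    signal = np.fft.irfft(spectrum, n=n_steps)
    return std * signal / signal.std()
```

Rescaling to a fixed standard deviation mirrors the normalization described above: the variance (square of the 1mV standard deviation) equals the integral over the power spectrum, regardless of the cutoff.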
+The coherence $C(f)$ was calculated \citep{lindner2016mechanisms} in frequency space as the squared cross-spectral density $|S_{sx}|^2$ of input signal $s(t) = \sigma I_{t}$ and output spikes $x(t)$, $S_{sx}(f) = \mathcal{F}\{ s(t)*x(t) \}(f)$, divided by the product of the power spectral densities of input ($S_{ss}(f) = |\mathcal{F}\{s(t)\}(f)|^2$) and output ($S_{xx}(f) = |\mathcal{F}\{x(t)\}(f)|^2$), where $\mathcal{F}\{ g(t) \}(f)$ is the Fourier transform of $g(t)$: +\begin{equation}C(f) = \frac{|S_{sx}(f)|^2}{S_{ss}(f) S_{xx}(f)}\label{coherence}\end{equation} + + The coding fraction $\gamma$ \citep{gabbiani1996codingLIF, krahe2002stimulus} quantifies how much of the input signal can be reconstructed by an optimal linear decoder. It is 0 if the input cannot be reconstructed at all and 1 if the signal can be perfectly reconstructed \citep{gabbiani1996stimulus}. It is defined by the reconstruction error $\epsilon^2$ and the variance of the input $\sigma^2$: + +\begin{equation}\gamma = 1-\sqrt{\frac{\epsilon^2}{\sigma^2}}.\label{coding_fraction}\end{equation} + + +The variance is +\begin{equation}\sigma^2 = \langle \left(s(t)-\langle s(t)\rangle\right)^2\rangle = \int_{f_{low}}^{f_{high}} S_{ss}(f) df .\end{equation} + +The reconstruction error is defined as + +\begin{equation}\epsilon^2 = \langle \left(s(t) - s_{est}(t)\right)^2\rangle = \int_{f_{low}}^{f_{high}} \left( S_{ss}(f) - \frac{|S_{sx}(f)|^2}{S_{xx}(f)} \right) df = \int_{f_{low}}^{f_{high}} S_{ss}(f) \left(1-C(f)\right) df\end{equation} +with the estimate $s_{est}(t) = h*x(t)$, where $h$ is the optimal linear filter with Fourier transform $H(f) = \frac{S_{sx}(f)}{S_{xx}(f)}$ \citep{gabbiani1996coding}. + +We then analyzed coding fraction as a function of these cutoff frequencies for different parameters (noise strength, signal amplitude, signal mean/firing rate) in the limit of large populations. +The limit was considered reached if the increase in coding fraction gained by doubling the population size was small (4\%)(??).
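The coherence and coding-fraction definitions above can be estimated from sampled data with standard spectral estimators. A sketch using scipy's Welch/CSD routines (assumed tooling and parameter choices, not the analysis code used for the paper):

```python
import numpy as np
from scipy.signal import csd, welch

def coding_fraction(signal, response, fs, f_low=0.0, f_high=None,
                    nperseg=1024):
    """Estimate the coherence C(f) = |S_sx|^2 / (S_ss S_xx) and the
    coding fraction gamma = 1 - sqrt(eps^2 / sigma^2), where
    eps^2 = integral of S_ss (1 - C) over the analysis band."""
    f, s_sx = csd(signal, response, fs=fs, nperseg=nperseg)
    _, s_ss = welch(signal, fs=fs, nperseg=nperseg)
    _, s_xx = welch(response, fs=fs, nperseg=nperseg)
    coherence = np.abs(s_sx) ** 2 / (s_ss * s_xx)
    coherence = np.clip(coherence, 0.0, 1.0)  # guard against round-off
    f_high = f[-1] if f_high is None else f_high
    band = (f >= f_low) & (f <= f_high)
    df = f[1] - f[0]
    sigma2 = np.sum(s_ss[band]) * df  # signal variance in the band
    eps2 = np.sum(s_ss[band] * (1.0 - coherence[band])) * df  # recon. error
    return f, coherence, 1.0 - np.sqrt(eps2 / sigma2)
```

In practice the `response` would be the population spike histogram described above; here it is any sampled time series of the same length as the signal.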
+For the weak signals ($\sigma = 0.1mV$) combined with the strongest noise ($D = 10^{-3} \frac{mV^2}{Hz}$), convergence was not reached for a population size of 2048 neurons. The same is true for the combination of the weak signal, close to the threshold ($\mu = 10.5mV$) and high frequencies (200Hz). diff --git a/refractory_periods.tex b/refractory_periods.tex new file mode 100644 index 0000000..8531a76 --- /dev/null +++ b/refractory_periods.tex @@ -0,0 +1,13 @@ +\subsection*{Refractory Periods} + +We analyzed the effect of non-zero refractory periods on the previous results. We added a 1ms or a 5ms refractory period to each of the LIF-neurons. Then, we repeated the same simulations as before. Results are summarized in figure \ref{refractory_periods}. +Results change very little for a refractory period of 1ms, especially for large noise values. For a refractory period of 5ms the resulting coding fraction is lower for almost all noise values. Paradoxically, for high frequencies in smallband signals and very small noise, coding fraction actually is larger for a 5ms refractory period than for 1ms. In spite of this, coding fraction is still largest for the LIF-ensembles without a refractory period. + +We also find all other results replicated even with refractory periods of 1ms or 5ms: Figure (??) shows that the optimal noise still grows with \(\sqrt{N}\) for both the 1ms and the 5ms refractory period. We see an increase in the value of the optimum noise with an increase of the refractory period. The achievable coding fraction is lower for the neurons with refractory periods, especially at the maximum. In the limit of large noise, the neurons with a 1ms refractory period and the ones with no refractory period also result in similar coding fractions, over a wide range of population sizes. However, this is not true for the neurons with a 5ms refractory period.
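In the Euler scheme, an absolute refractory period is a small modification: after a spike the membrane is clamped to the reset voltage until the period has elapsed. A sketch of one update step (hypothetical names, assumed implementation; `drive` stands for the summed input $\mu + \sigma I_t + \sqrt{2D/\Delta t}\,\xi_t$):

```python
import numpy as np

def lif_step_refractory(v, refrac_left, drive, dt=1e-5, tau_v=0.01,
                        v_thresh=10.0, v_reset=0.0, t_ref=0.001):
    """One Euler step of a LIF population with an absolute refractory
    period: cells with time left in the refractory period are clamped
    to the reset voltage and do not integrate their input."""
    active = refrac_left <= 0.0
    v = np.where(active, v + dt / tau_v * (drive - v), v_reset)
    fired = active & (v >= v_thresh)
    v[fired] = v_reset
    # start the refractory clock for cells that just fired, tick it down otherwise
    refrac_left = np.where(fired, t_ref, np.maximum(refrac_left - dt, 0.0))
    return v, fired, refrac_left
```

Clamping during the refractory period lengthens every interspike interval by at least `t_ref`, which is why the optimal noise shifts to larger values in figure \ref{refractory_periods}.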
+ + +\begin{figure} + \includegraphics[width=0.8\linewidth]{img/ordnung/refractory_periods_coding_fraction.pdf} + \caption{Repeating the simulations adding a refractory period to the LIF-neurons shows no qualitative changes in the SSR behaviour of the neurons. Coding fraction is lower the longer the refractory period. The SSR peak moves to stronger noise; cells with larger refractory periods need stronger noise to work optimally.} + \label{refractory_periods} +\end{figure} diff --git a/sigma.tex b/sigma.tex new file mode 100644 index 0000000..a327112 --- /dev/null +++ b/sigma.tex @@ -0,0 +1,180 @@ +\subsection*{Determining noise in the real world} + +While in simulations we can control the noise parameter directly, we cannot do so in electrophysiological experiments. +Therefore, we need a way to quantify ``noisiness''. +One such way is to use the activation curve of the neuron, fitting a function and extracting the parameters from this function. +\citet{stocks2000suprathreshold} uses one such function to simulate groups of noisy spiking neurons: + +\begin{equation} +\label{errorfct}\frac{1}{2}\erfc\left(\frac{\theta-x}{\sqrt{2\sigma^2}}\right) +\end{equation} + + +where $\sigma$ is the parameter quantifying the noise (figure \ref{idealizedactivation}). A neuron with a $\sigma$ of 0 would be a perfect thresholding mechanism: firing probability for all inputs below the threshold is 0, and firing probability for all inputs above is 1. Large values of $\sigma$ mean a flat activation curve. Neurons with such an activation curve will likely fire even for some signals below the firing threshold, while they will sometimes not fire for inputs above the firing threshold. Their firing behaviour is influenced less by the signal, which indicates noisiness. +We also tried several other commonly used methods of quantifying noise (citations), but none of them worked as well as the error-function fit (fig. \ref{sigmasimulation} b)-d)). + + + +\subsection*{Methodology} + +The signal values were binned in 50 bins.
The result is a discrete Gaussian distribution around 0mV, the mean of the signal, as is expected from the way the signal was created. +We have to account for the delay between the moment we play the signal and when it gets processed in the cell, which can for example depend on the position of the cell on the skin. We calculate the cross-correlation between the signal and the discrete output spikes. The position of the peak of the cross-correlation is the time shift at which the signal has the strongest influence on the output. +Then, for every spike, we note the value of the signal at the time of the spike minus the time shift. The result is a histogram, where each signal value bin has an associated number of spikes. This histogram is then normalized by the distribution of the signal. The result is another histogram, whose values are firing frequencies for each signal value. Because those frequencies are just firing probabilities multiplied with equal time steps, we can fit a Gaussian error function to those probabilities. + +\subsection*{Simulation} + +To confirm that the $\sigma$ parameter estimated from the fit is indeed a good measure for the noisiness, we validated it against D, the noise parameter from the simulations. We find that there is a strictly monotonic relationship between the two for different sets of simulation parameters. Other parameters often used to determine noisiness (citations), such as the variance of the spike PSTH and the coefficient of variation (CV) of the interspike interval, are not as useful. In figure \ref{noiseparameters} we see why. The variance of the PSTH is not always monotonic in D and is very flat for low values of D. +%describe what happens to the others +%check Fano-factor maybe?
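The delay-correction step described above can be sketched as follows (a hypothetical helper, assuming the stimulus and the binned spike train share the same sampling grid):

```python
import numpy as np

def estimate_delay(signal, spike_train, dt):
    """Return the delay (in seconds) at which the cross-correlation
    between stimulus and binned spike train peaks."""
    s = signal - signal.mean()
    x = spike_train - spike_train.mean()
    xcorr = np.correlate(x, s, mode="full")   # all lags from -(N-1) to N-1
    lags = np.arange(-len(s) + 1, len(s))
    causal = lags >= 0                        # response cannot precede stimulus
    return lags[causal][np.argmax(xcorr[causal])] * dt
```

The returned delay is then subtracted from each spike time before the signal value is looked up.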
+ +\begin{figure} + \includegraphics[width=0.45\linewidth]{img/ordnung/base_D_sigma} + \includegraphics[width=0.45\linewidth]{img/dataframe_scatter_D_normalized_psth_1ms_test_tau} + \includegraphics[width=0.45\linewidth]{img/dataframe_scatter_D_psth_5ms_test} +% \includegraphics[width=0.45\linewidth]{img/dataframe_scatter_D_cv_test} + \caption{a) The parameter \(\sigma\) as a function of the noise parameter D in LIF-simulations. There is a strictly monotonic relationship between the two, which allows us to use \(\sigma\) as a substitute for D in the analysis of electrophysiological experiments. b-d) Other parameters commonly used to quantify noise. None of these functions is strictly monotonic and therefore none is useful as a substitute for D. b) Peri-stimulus time histogram (PSTH) of the spikes with a bin width of 1ms, normalized. c) PSTH of the spikes with a bin width of 5ms. d) Coefficient of variation (CV) of the interspike intervals.} + \label{noiseparameters} +\end{figure} + + +We tried several different bin sizes (30 to 300 bins) and spike widths. There was little difference between the different parameters (see appendix). + + +\subsection*{Electrophysiology} + +We can see from figure \ref{sigmafits} that the fits match the data very closely. For very weak and very strong inputs, the firing rates themselves become noisy, because the signal only assumes those values rarely. This is especially noticeable for strong inputs, as there are more spikes there, and therefore large fluctuations, while there is very little spiking anyway for weak inputs. + +\begin{figure} + \includegraphics[width=0.4\linewidth]{img/ordnung/cropped_fitcurve_0_2010-08-31-ad-invivo-1_0.pdf} + \includegraphics[width=0.4\linewidth]{img/ordnung/cropped_fitcurve_0_2010-08-11-aq-invivo-1_0.pdf} + % cropped_fitcurve_0_2010-08-31-ad-invivo-1_0.pdf: 0x0 px, 300dpi, 0.00x0.00 cm, bb= + \caption{Histogram of spike count distribution (firing rate) and error function fits.
50 bins represent different values of the Gaussian distributed input signal [maybe histogram in background again]. The value of each of those bins is the number of spikes during the times the signal was in that bin. Each of the values was normalized by the signal distribution. To account for delay, we first calculated the cross-correlation of signal and spike train and took its peak as the delay. The lines show fits according to equation \eqref{errorfct}. Left and right plots show two different cells, one with a relatively narrow distribution and one with a broader distribution, as indicated by the parameter \(\sigma\). Different numbers of bins (30 and 100) showed no difference in resulting parameters.} + \label{sigmafits} +\end{figure} + +%TODO insert plot with sigma x-axis and delta_cf on y-axis here; also, plot with sigma as function of firing rate, also absolute cf for different population size as function of sigma. + + +When we group neurons by their noise and plot coding fraction as a function of population size for averages of the groups, we see results similar to what we see for simulations. +Noisier cells have a lower coding fraction for small populations. For increasing population size, coding fraction increases for all groups, but the increase is much larger for noisy cells. For large population sizes the noisy cells show a better linear encoding of the signal than the more regular cells. + +\begin{figure} +\includegraphics[width=0.45\linewidth]{img/ordnung/sigma_popsize_curves_0to300} + \caption{Left: Coding fraction as a function of population size for all recorded neurons. Colors indicate \(\sigma\) from the fit of the function in equation \ref{errorfct}; boundaries were chosen so that there are roughly equal numbers of neurons in each category. Red: \(\sigma = \) 0 to 0.5, pink: 0.5 to 1.0, purple: 1.0 to 1.5, blue: 1.5 and above. Thick colored lines are averages of the neurons in each group.
For a population size of 1, coding fraction decreases on average with increasing \(\sigma\). As population sizes increase, coding fraction for weak noise neurons quickly stops increasing. Strong noise neurons show better coding performance for larger population sizes (about 8 to 32 neurons). Right [missing]: Increase in coding as a function of sigma. The y-axis shows the difference in coding fraction between N=1 and N=32.} + \label{ephys_sigma} +\end{figure} + +\subsection*{Determining the strength of noise in a real world example} + +While in simulations we can control the noise parameter directly, we cannot do so in electrophysiological experiments. +Therefore, we need a way to quantify the intrinsic noise of the cell from the output of the measured cells. Common measures to quantify noisiness of neuronal spike trains are not directly correlated with intrinsic noise strength (figure \ref{noiseparameters}). An example for such a measure is the coefficient of variation (CV) of the interspike interval (ISI)\citep{white2000channel, goldberg1984relation,nowak1997influence}. The ISI is the time between each consecutive pair of spikes. The coefficient of variation is then defined as the standard deviation of the ISI divided by the mean ISI. Even though it is frequently used, we find that the CV is not necessarily a monotonic function of the intrinsic noise in our LIF-simulations. In addition, for different membrane constants, which determine how quickly a neuron reacts to inputs, the same intrinsic noise results in widely different CV values. Refractory periods also have an influence on the CV. + \notedh{their/holt1996 cv2 looks interesting.} +Another measure which has been used before is the standard deviation of the peri-stimulus spike histogram +\citep{mainen1995reliability} \notedh{can't find any paper which did something like we did here, even though Schreiber et al.
2003 A new correlation-based measure of spike timing reliability - claim it's frequently done with PSTH}. This approach also does not work well, as it also depends on the membrane constant and, to a lesser extent, the refractory period. + +Our approach uses the activation curve of the neuron, fitting a function to it and extracting the parameters from the fitted function. It is assumed that the neurons show Gaussian noise. The mean of the distribution is the activation threshold and the width of the Gaussian is a measure for the noise. +The probability of spiking as a function of the input is then the integral over the Gaussian, i.e. an error function. +Stocks (2000) uses one such function to simulate groups of noisy spiking neurons: +\begin{equation} +\label{errorfct}\frac{1}{2}\erfc\left(\frac{\theta-x}{\sqrt{2\sigma^2}}\right) +\end{equation} +where $\sigma$ is the parameter quantifying the noise (figure ?) %\ref{idealizedactivation}). \notejb{$\sigma$ quantifies the noise in units of the stimulus!!! This is why this approach might work!} +A neuron with a $\sigma$ of 0 would be a perfect thresholding mechanism. Firing probability for all inputs below the threshold is 0, and firing probability for all inputs above is 1. If $\sigma$ is greater than 0, a neuron with such an activation curve will fire even for some signals below the firing threshold, while it will sometimes not fire for inputs above the firing threshold. For large values of $\sigma$ the activation curve becomes flatter, meaning the probability that an input below the threshold elicits a spike is large and the probability that an input above the threshold does not lead to firing is also large. The firing behaviour of such a cell is influenced less by the signal, which indicates noisiness. +However, for strong noise $(>10^{-2} \frac{mV^2}{Hz})$, results are no longer monotonic. This happens at a point where $\sigma$ becomes large.
Therefore, we excluded all values of the unitless \(\sigma\) larger than two from the following analyses. + + +\subsection*{Methodology} + +The signal was binned according to its amplitude. The result is a discrete Gaussian distribution around 0mV, the mean of the signal, as is expected from the way the signal was created. +After accounting for time delays in signal processing, we make a histogram which contains the distribution of spikes according to signal amplitude. + This histogram is then normalized by the distribution of the signal. +The result is another histogram, whose values are firing frequencies for each signal value. Because those frequencies are just firing probabilities multiplied with equal time steps, we can fit a Gaussian error function to those probabilities. + +\subsection*{Simulation} + +To confirm that the $\sigma$ parameter estimated from the fit is indeed a good measure for the noisiness, we validated it against D, the noise parameter from the simulations. We find that there is a strictly monotonic relationship between the two for different sets of simulation parameters. +%Other parameters often used to determine noisiness (citations) such as the variance of the spike PSTH, the coefficient of variation (CV) of the interspike interval are not as useful. In figure \ref{noiseparameters} we see why. The variance of the PSTH is not always monotonic in D and is very flat for low values of D. +%describe what happens to the others +%check Fano-factor maybe?
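A minimal sketch of this fitting step, assuming SciPy is available (function and variable names are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

def activation(x, theta, sigma):
    """Firing probability for input x: the error function form of the
    activation curve, 0.5 * erfc((theta - x) / sqrt(2 sigma^2))."""
    return 0.5 * erfc((theta - x) / np.sqrt(2.0 * sigma**2))

def fit_sigma(bin_centers, firing_prob):
    """Fit threshold theta and noise measure sigma to the normalized
    spike histogram (firing probability per signal-value bin)."""
    popt, _ = curve_fit(activation, bin_centers, firing_prob, p0=(0.0, 1.0))
    theta, sigma = popt
    return theta, abs(sigma)
```

`bin_centers` are the centers of the signal-amplitude bins and `firing_prob` the delay-corrected, signal-normalized spike counts described above.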
+ + +\begin{figure} +\centering + %\includegraphics[width=0.45\linewidth]{img/ordnung/base_D_sigma}\\ + \includegraphics[width=0.23\linewidth]{img/simulation_sigma_examples/fitcurve_50hz_0.0002noi500s_0_capped.pdf} + \includegraphics[width=0.23\linewidth]{img/simulation_sigma_examples/fitcurve_50hz_0.001noi500s_0_capped.pdf} + \includegraphics[width=0.23\linewidth]{img/simulation_sigma_examples/fitcurve_50hz_0.1noi500s_0_capped.pdf} + \includegraphics[width=0.23\linewidth]{img/ISI_explanation.pdf} + \includegraphics[width=0.23\linewidth]{img/cv_psth_sigma_compare/dataframe_scatter_labels_D_sigma_membrane_50.pdf} + \includegraphics[width=0.23\linewidth]{img/cv_psth_sigma_compare/dataframe_scatter_labels_D_cv_membrane_50.pdf} + \includegraphics[width=0.23\linewidth]{img/cv_psth_sigma_compare/dataframe_scatter_labels_D_psth_1ms_membrane_50.pdf} + \includegraphics[width=0.23\linewidth]{img/cv_psth_sigma_compare/dataframe_scatter_labels_D_psth_5ms_membrane_50.pdf} + \includegraphics[width=0.23\linewidth]{img/cv_psth_sigma_compare/dataframe_scatter_labels_D_sigma_refractory_50.pdf} + \includegraphics[width=0.23\linewidth]{img/cv_psth_sigma_compare/dataframe_scatter_labels_D_cv_refractory_50.pdf} + \includegraphics[width=0.23\linewidth]{img/cv_psth_sigma_compare/dataframe_scatter_labels_D_psth_1ms_refractory_50.pdf} + \includegraphics[width=0.23\linewidth]{img/cv_psth_sigma_compare/dataframe_scatter_labels_D_psth_5ms_refractory_50.pdf} + \caption{a) The parameter \(\sigma\) as a function of the noise parameter D in LIF-simulations. There is a strictly monotonic relationship between the two, which allows us to use \(\sigma\) as a substitute for D in the analysis of electrophysiological experiments. + b-e) Left to right: $\sigma$, CV and standard deviation of the PSTH with two different kernel widths, each as a function of D for different membrane constants (4ms, 10ms and 16ms).
The membrane constant $\tau$ determines how quickly the voltage of a LIF-neuron changes, with lower constants meaning faster changes. Only $\sigma$ does not change its values with different $\tau$. The CV (c)) is not even monotonic in the case of a time constant of 4ms, ruling out any potential usefulness. + f-i) Left to right: $\sigma$, CV and standard deviation of the PSTH with two different kernel widths as a function of D for different refractory periods (0ms, 1ms and 5ms). Only $\sigma$ does not change with different refractory periods. + } + \label{noiseparameters} +\end{figure} + +We tried several different bin sizes (30 to 300 bins) and spike widths. There was little difference between the different parameters (see appendix). + + +\section*{-----------------------} +%We can use $\sigma$ instead of D*firing_rate: $\sigma$ makes it ind. of fr!! + + +\subsection*{Electrophysiology} + +We find that the fits match the experimental data very well (figure \ref{sigmafits}). For very weak and very strong inputs, the firing rates themselves become noisy, because the signal only assumes those values rarely. This is especially noticeable for strong inputs, as there are more spikes there, and therefore large fluctuations, while there is very little spiking anyway for weak inputs. + +% fish_raster.py on oilbird for the eventplot +% instructions.txt contains python commands to reconstruct the distributions and scatter plots + +\begin{figure} +\centering + \includegraphics[width=0.4\linewidth]{img/sigma/example_spikes_sigma.pdf} + \includegraphics[width=0.28\linewidth]{img/ordnung/cropped_fitcurve_0_2010-08-11-aq-invivo-1_0.pdf} + \includegraphics[width=0.28\linewidth]{img/sigma/cropped_fitcurve_0_2010-08-31-aj-invivo-1_0.pdf} + \notedh{This does not directly yield an intuition for why this is noisy.
It depends strongly on the input signal; if it were (ir)regular in the eventplot, we could just as well use the CV...}\\ +% \includegraphics[width=0.28\linewidth]{img/ordnung/cropped_fitcurve_0_2010-08-31-ad-invivo-1_0.pdf} + \includegraphics[width=0.4\linewidth]{img/fish/dataframe_scatter_sigma_cv.pdf} + \includegraphics[width=0.4\linewidth]{img/fish/dataframe_scatter_sigma_firing_rate.pdf} + \includegraphics[width=0.32\linewidth]{img/fish/sigma_distribution.pdf} + \includegraphics[width=0.32\linewidth]{img/fish/cv_distribution.pdf} + \includegraphics[width=0.32\linewidth]{img/fish/fr_distribution.pdf} + % cropped_fitcurve_0_2010-08-31-ad-invivo-1_0.pdf: 0x0 px, 300dpi, 0.00x0.00 cm, bb= + \caption{Histogram of spike count distribution (firing rate) and error function fits. 50 bins represent different values of the Gaussian distributed input signal [maybe histogram in background again]. The value of each of those bins is the number of spikes during the times the signal was in that bin. Each of the values was normalized by the signal distribution. For very weak and very strong inputs, the firing rates themselves become noisy, because the signal only assumes those values rarely. To account for delay, we first calculated the cross-correlation of signal and spike train and took its peak as the delay. The lines show fits according to equation \eqref{errorfct}. Left and right plots show two different cells, one with a relatively narrow distribution (left) and one with a broader distribution (right), as indicated by the parameter \(\sigma\). A larger $\sigma$ corresponds to a broader distribution. Cells with broader distributions are assumed to be noisier, as their thresholding is less sharp than that of cells with narrow distributions.
Different numbers of bins (30 and 100) showed no difference in resulting parameters.} + \label{sigmafits} +\end{figure} + + +% TODO insert plot with sigma x-axis and delta_cf on y-axis here; also, plot with sigma as function of firing rate, also absolute cf for different population size as function of sigma. + + +When we group neurons by their noise and plot coding fraction as a function of population size for averages of the groups, we see results similar to what we see for simulations (figure \ref{ephys_sigma} a)): +Noisier cells (larger $\sigma$, purple) have a lower coding fraction for small populations. For the less noisy cells, however, coding fraction mostly stops increasing once a population size of about 16 is reached, whereas the increase is much larger for noisy cells (orange). The averaged coding fraction of the noisy cells does not rise above that of the less noisy cells for the population sizes investigated here (N=128). In contrast to the more regular cells, though, coding fraction is still improving for the noisy cells, so it is plausible that at some population size the noisy cells can outperform the less noisy cells. +Indeed, if results are not averaged and single cells are considered, we find that for large population sizes the noisy cells show a better linear encoding of the signal than the more regular cells (figure \ref{ephys_sigma} b), red).
Cells are grouped by \(\sigma\) from the fit of the function in equation \ref{errorfct}. Lines are averages over three cells each, with the shading showing the standard deviation. For stronger noise, coding fraction is far smaller for a single neuron. With increasing population size, coding fraction increases much faster for the noisy cells than for the less noisy cells. + Right: Examples of two cells each from the groups with the lowest, intermediate and highest $\sigma$. For a population size of N=1, the cell with the largest $\sigma$ (brown) has the lowest coding fraction out of all the cells here. The coding fraction of that cell increases greatly with population size; at a population of N=128 it is the second highest among the pictured cells.} +\label{ephys_sigma} +\end{figure} + + + + +%The value of $\sigma$ is not signal independent. The same cell can have different values for $\sigma$ for different input signals. + diff --git a/simulation_further_considerations.tex b/simulation_further_considerations.tex new file mode 100644 index 0000000..f7fc8d9 --- /dev/null +++ b/simulation_further_considerations.tex @@ -0,0 +1,31 @@ +\section*{Discussion} + +In this paper we have shown the effect of Suprathreshold Stochastic Resonance (SSR) in ensembles of neurons. We detailed how noise levels affect the impact of population size on the coding fraction. We looked at different frequency ranges and showed that the encoding of high-frequency signals profits particularly well from SSR. Using the tuning curve we were able to provide a way to extrapolate the effects of SSR to very large populations. Because the general analysis of the impact of changing parameters is complex, we investigated limit cases, in particular the slow stimulus limit and the weak stimulus limit. For low-frequency signals, i.e.
the slow stimulus limit, the tuning curve also allows analyzing the impact of changing signal strength; in addition we were able to show the difference between sub-threshold SR and SSR for different noise levels. For the weak stimulus limit, where noise is relatively strong compared to the signal, we were able to provide an analytical solution for our observations. + +Hoch et al. \citep{hoch2003optimal} also showed that SSR effects hold for both LIF and HH neurons. However, they found that the optimal noise level depends ``close to logarithmically'' on the number of neurons in the population. They used a cutoff frequency of only 20Hz for their simulations. \notedh{A plot relating population size and optimum noise is missing here} + + +We investigated the impact of noise on homogeneous populations of neurons. Neurons being intrinsically noisy is a phenomenon that is well investigated (Grewe et al. 2017, Padmanabhan and Urban 2010). +In natural systems, however, neuronal populations are rarely homogeneous. Padmanabhan and Urban (2010) showed that heterogeneous populations of neurons carry more information than homogeneous populations. +%\notedh{But noisy! Cite: neurons have intrinsic noise (introduction?)} (Grewe, Lindner, Benda 2017 PNAS Synchrony code) (Padmanabhan, Urban 2010 Nature Neurosci). +Beiran et al. (2017) investigated SSR in heterogeneous populations of neurons. They argued that heterogeneous populations are comparable to homogeneous populations in which the neurons receive independent noise in addition to a deterministic signal. They also point out that in the case of weak signals, heterogeneous populations can encode information better, as strong noise would overwhelm the signal. +\notedh{Highlight the differences!} Similarly, Hunsberger et al. (2014) showed that both noise and heterogeneity linearize the tuning curve of LIF neurons. +In summary, noise and heterogeneity are not completely interchangeable.
In the limit cases we see similar behaviour. + +Sharafi et al. \citep{Sharafi2013} had already investigated SSR in a similar way. However, they only observed populations of up to three neurons and focused on the synchronous output of cells. They took spike trains, convolved them with a Gaussian and then multiplied the responses of the different neurons. In our simulations we instead used the addition of spike trains to calculate the coherence between input and output. Instead of changing the noise parameter to find the optimum noise level, they changed the input signal frequency to find a resonating frequency, which was possible for suprathreshold stochastic resonance, but not for subthreshold stochastic resonance. For some combinations of parameters we also found that coding fraction does not decrease monotonically with increasing signal frequency (fig. \ref{cf_for_frequencies}). +This is especially notable for signals that are far from the threshold (fig. \ref{cf_for_frequencies} E,F (red markers)). +That we do not see this effect as clearly in the other cases matches Sharafi et al.'s observation that in the case of subthreshold stochastic resonance, coherence monotonically decreases with increasing frequency. Pakdaman et al. (2001) +\notedh{Link this better than the following (compare across orders of magnitude; compare with figure 5\ref{}; cite more than Sharafi, keyword ``Coherence Resonance''} + +Similar research to Sharafi et al. was done by de la Rocha et al. (2007). They investigated the output correlation of populations of two neurons and found that it increases with firing rate. We found something similar in this paper, where an increase in $\mu$ increases both the firing rate of the neurons and generally also the coding fraction \notedh{Connect with output correlation}(fig. \ref{codingfraction_means_amplitudes}). Our explanation is that coding fraction and firing rate are linked via the tuning curve.
In addition to simulations of LIF neurons, de la Rocha et al. also carried out \textit{in vitro} experiments in which they confirmed their simulations. + +\notedh{Make this more concrete: what do the others do that relates to our work, and how exactly does it relate to it?} +\notedh{Maybe mention Stocks again, even though he already appears in the introduction? Heterogeneous/homogeneous} +\notedh{Dynamic stimuli! Not in Stocks, for example, only e.g. in Beiran. We cover the transition.} + +Examples for neuronal systems that feature noise are P-unit receptor cells of weakly electric fish (which paper?) and ... + + +In the case of low cutoff frequency and strong noise we were able to derive a formula that explains why in those cases coding fraction simply depends on the ratio between noise and population size, whereas generally the two variables have very different effects on the coding fraction. +%SNR has proven to be unsuitable as a measure of encoding an aperiodic signal \citep{bulsara1996threshold,Collins1995aperiodic, collins1995stochastic}. Bulsara and Zador tackled this question using only a single LIF neuron. One of their conclusions was that for suprathreshold signals, information rate increases monotonically with the SNR (same SNR as defined here), which does not hold for populations. +%Collins et al. investigated SR (subthreshold) with Gaussian correlated noise (20s correlation time) as input using the FHN model. They used a normalized power norm, similar to the coherence C we use in this paper to assess the coding fraction. Even though they used a large population of up to 1000 neurons, discussion in their paper has focused only on the sub-threshold properties of SR and considered noise as something which inhibits the ability of the network to display supra-threshold signals. They were unable to detect an increase of input-output coherence, because they used a very slow signal and we could show that wideband signals are necessary for SSR to manifest in large populations of neurons.
\notedh{Should a plot with different cutoff frequencies be included?} diff --git a/simulation_methods.tex b/simulation_methods.tex new file mode 100644 index 0000000..acb114a --- /dev/null +++ b/simulation_methods.tex @@ -0,0 +1,67 @@ +\section*{Materials and Methods} +\subsection*{Simulations} +We use a population model of Leaky Integrate-and-Fire (LIF) neurons, described by the equation + +\begin{equation}V_{t}^j = V_{t-1}^j + \frac{\Delta t}{\tau_v} ((\mu-V_{t-1}^j) + \sigma I_{t} + \sqrt{2D/\Delta t}\xi_{t}^j),\quad j \in [1,N]\end{equation} +with $\tau_v = 10 ms$ the membrane time constant and $\mu = 15.0 mV$ or $\mu = 10.5 mV$ as offset. $\sigma$ is a factor which scales the standard deviation of the input, ranging from 0.1 to 1, and $I$ is the previously generated stimulus. $\xi_{t}$ are independent Gaussian distributed random variables with mean 0 and variance 1. The noise $D$ was varied between $1\cdot10^{-7} mV^2/Hz$ and $3 mV^2/Hz$. Whenever $V_{t}$ was greater than the voltage threshold (10mV) a ``spike'' was recorded and the voltage was reset to 0mV. $V_{0}$ was initialized to a random value uniformly distributed between 0mV and 10mV. +Simulations of up to 8192 neurons were done using an Euler method with a step size of $\Delta\, t = 0.01$ms. Typical firing rates were around 90Hz for an offset of 15.0mV and 35Hz for an offset of 10.5mV. Firing rates were larger for high noise levels than for low noise levels. +We simulated large populations (up to 2048) of LIF-neurons. + +As stimulus we used Gaussian white noise signals with different cutoff frequencies at both ends of the spectrum. By construction, the input power spectrum is flat between $-f_{c}$ and $f_{c}$: + +\begin{equation} + S_{ss}(f) = \frac{\sigma^2}{2 \left| f_{c} \right|} \Theta\left(f_{c} - |f|\right).\label{S_ss} +\end{equation} +A Fast Fourier Transform (FFT) was applied to the signal so that it could serve as input stimulus to the simulated cells.
The signal was normalized so that the standard deviation of the signal was 1mV; the length of the signal was 500s with a resolution of 0.01ms. + + +\begin{figure} +\includegraphics[scale=0.5]{img/intro_raster/example_noise_resonance.pdf} +\caption{Snapshots of 200ms length from three example simulations with different noise, but all other parameters held constant. Black: Spikes of 32 simulated neurons. The green curve beneath the spikes is the signal that was fed into the network. The blue curve is the best linear reconstruction possible from the spikes. The input signal has a cutoff frequency of 50Hz. +If noise is weak, the neurons behave regularly and similarly to each other (A). For optimal noise strength, the neuronal population follows the signal best (B). If the noise is too strong, the information about the signal gets drowned out (C). D: Example coding fraction curve over the strength of the noise. Marked in red are the noise strengths from which the examples were taken.} +\label{example_spiketrains} +\end{figure} + + + + +\subsection*{Analysis} +For each combination of parameters, a histogram of the output spikes from all neurons or a subset of the neurons was created. +The coherence $C(f)$ was calculated \citep{lindner2016mechanisms} in frequency space as the squared cross-spectral density $|S_{sx}|^2$ of input signal $s(t) = \sigma I_{t}$ and output spikes $x(t)$, $S_{sx}(f) = \mathcal{F}\{ s(t)*x(t) \}(f) $, divided by the product of the power spectral densities of input ($S_{ss}(f) = |\mathcal{F}\{s(t)\}(f)|^2 $) and output ($S_{xx}(f) = |\mathcal{F}\{x(t)\}(f)|^2$), where $\mathcal{F}\{ g(t) \}(f)$ is the Fourier transform of $g(t)$. +\begin{equation}C(f) = \frac{|S_{sx}(f)|^2}{S_{ss}(f) S_{xx}(f)}\label{coherence}\end{equation} + + The coding fraction $\gamma$ \citep{gabbiani1996codingLIF, krahe2002stimulus} quantifies how much of the input signal can be reconstructed by an optimal linear decoder.
It is 0 in case the input cannot be reconstructed at all and 1 if the signal can be perfectly reconstructed \citep{gabbiani1996stimulus}. + It is defined by the reconstruction error $\epsilon^2$ and the variance of the input $\sigma^2$: + +\begin{equation}\gamma = 1-\sqrt{\frac{\epsilon^2}{\sigma^2}}.\label{coding_fraction}\end{equation} + + +The variance is +\begin{equation}\sigma^2 = \langle \left(s(t)-\langle s(t)\rangle\right)^2\rangle = \int_{f_{low}}^{f_{high}} S_{ss}(f) df .\end{equation} + +The reconstruction error is defined as + +\begin{equation}\epsilon^2 = \langle \left(s(t) - s_{est}(t)\right)^2\rangle = \int_{f_{low}}^{f_{high}} \left(S_{ss}(f) - \frac{|S_{sx}(f)|^2}{S_{xx}(f)}\right) df = \int_{f_{low}}^{f_{high}} S_{ss}(f) (1-C(f)) df\end{equation} +with the estimate $s_{est}(t) = h*x(t)$. $h$ is the optimal linear filter whose Fourier transform is $H = \frac{S_{sx}}{S_{xx}}$ \citep{gabbiani1996coding}. + +We then analyzed coding fraction as a function of these cutoff frequencies for different parameters (noise strength, signal amplitude, signal mean/firing rate) in the limit of large populations. +The limit was considered reached if the increase in coding fraction gained by doubling the population size is small (4\%)(??). +For the weak signals ($\sigma = 0.1mV$) combined with the strongest noise ($D = 10^{-3} \frac{mV^2}{Hz}$), convergence was not reached for a population size of 2048 neurons. The same is true for the combination of the weak signal, close to the threshold ($\mu = 10.5mV$), and high frequencies (200Hz). + +\subsection*{Tuning curve} +To create a tuning curve we also used the LIF model (Figure \ref{example_spiketrains}) with no dynamic input signal, only white noise and a constant offset $\bar\mu$. The mean input $\bar\mu$ was varied between 6mV and 21mV in steps of 0.01mV. These values were selected to cover the range of inputs we get with a $\sigma = 1.0mV$ signal for both $\mu=10.5mV$ and $\mu=15.0mV$.
+For each point of the tuning curve, we simulated 32 LIF-neurons with no input signal for 500s. The firing of these neurons is determined only by the cell parameters $V_0$ and $V_{reset}$, the average input $\mu$, and the noise. Between simulations we changed only $\mu$ and the noise; the cell-intrinsic parameters were the same as in the previous simulations. +Repeating each simulation five times allowed us to determine whether the population size was sufficient. +This was the case, as the resulting average firing rates were practically identical for every combination of parameters. + +For each timestep the amplitude of the signal is transformed into a rate using the tuning curve. The result is the output, similar to the output we get from the PSTH of the LIF simulations. As before, we calculate the coherence between this output and the original input. From the coherence, the coding fraction is calculated using equation \ref{coding_fraction}. + +The tuning curve has a resolution of 0.1mV. For some parameter combinations we also tried a resolution of 0.01mV, but results were similar (see appendix?). +To calculate the coding fraction from the tuning curve, the signal is transformed by replacing input values with average firing rates. Then, the coherence of the transformed signal and the original signal is used to calculate the coding fraction as before. +We investigated whether the neuronal tuning curve allows us to predict the coding fraction of the neuronal population. + +We compare the results from the tuning curves to populations of up to 4096 Leaky Integrate-and-Fire (LIF) neurons for simulations as described above, with signal amplitudes $\sigma=0.1mV$ or $\sigma=1.0mV$ and constant inputs of $\mu=10.5mV$ or $\mu=15.0mV$. For the input $I_t$ we used Gaussian white noise stimuli with a cutoff frequency of 1Hz, so that the signal approximates an almost constant input.
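The construction of one point of the tuning curve can be sketched as follows. This is a simplified illustration, not the simulation code itself: a single LIF neuron driven by a constant input $\mu$ and white noise of intensity $D$, integrated with the Euler--Maruyama scheme. Threshold, reset, and time constant are placeholder values, not necessarily those used in our simulations:

```python
import numpy as np

def lif_firing_rate(mu, D, T=100.0, dt=0.01, tau=1.0,
                    v_thresh=10.0, v_reset=0.0, rng=None):
    """Mean firing rate of one LIF neuron for constant input mu (in mV)
    and noise intensity D, via Euler-Maruyama integration.
    Time is measured in units of the membrane time constant tau."""
    rng = np.random.default_rng() if rng is None else rng
    v = v_reset
    n_spikes = 0
    for _ in range(int(T / dt)):
        v += dt * (mu - v) / tau + np.sqrt(2.0 * D * dt) * rng.normal()
        if v >= v_thresh:          # threshold crossing: spike and reset
            n_spikes += 1
            v = v_reset
    return n_spikes / T

# One point of the tuning curve per value of mu, e.g.:
# rates = [lif_firing_rate(m, 1e-3) for m in np.arange(6.0, 21.0, 0.1)]
```

Averaging such rates over the grid of mean inputs (6mV to 21mV in the text) yields the tuning curve.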
+ +We also use the tuning curve to estimate the effects of signal amplitude and signal mean on the quality of the encoding. For three different signal means ($\mu=9.5mV$, $\mu=10.0mV$ and $\mu=10.5mV$) and two different noise strengths ($10^{-3} \frac{mV^2}{Hz}$ and $10^{-5}\frac{mV^2}{Hz}$) we simulated seven different signals each, with amplitudes from $0.1mV$ to $0.7mV$. +In addition, for two signal amplitudes ($0.1mV$ and $0.5mV$) and three different noise strengths ($10^{-3}\frac{mV^2}{Hz}$, $10^{-5}\frac{mV^2}{Hz}$ and $10^{-7}\frac{mV^2}{Hz}$) we simulated signals with different means from $8mV$ to $14mV$. +For each of the described simulations we used a signal with a cutoff frequency of 200Hz. We repeated some simulations with a 10Hz signal and found no differences. diff --git a/simulation_results.tex new file mode 100644 index 0000000..3309842 --- /dev/null +++ b/simulation_results.tex @@ -0,0 +1,137 @@ +\section*{Results} +\subsection*{Noise makes neurons' responses different from each other} +If noise levels are low (fig. \ref{example_spiketrains} a)), neurons within a population will behave very similarly to each other. There is little variation in the spike responses of the neurons to a signal, and recreating the signal is difficult. As the strength of the noise increases, at some point the coding fraction also begins to increase. The signal reconstruction becomes better as the responses of the different neurons begin to deviate from each other. When the noise strength is increased even further, at some point a peak coding fraction is reached. This point is the optimal noise strength for the given parameters (fig. \ref{example_spiketrains} b)). If the strength of the noise is increased beyond this point, the responses of the neurons will be determined more by random fluctuations and less by the actual signal, making reconstruction more difficult (fig. \ref{example_spiketrains} c)).
At some point, signal encoding breaks down completely and the coding fraction goes to 0. + + +\subsection*{Large population size is only useful if noise is strong} +We see that an increase in population size leads to a larger coding fraction until it hits a limit which depends on the noise. For weak noise the increase in coding fraction with an increase in population size is low or non-existent. This can be seen in figure \ref{cf_limit} c), where the red ($10^{-5}\frac{mV^2}{Hz}$) and orange ($10^{-4}\frac{mV^2}{Hz}$) curves (relatively weak noise) saturate at relatively small population sizes (about 8 and 32 neurons, respectively). +An increase in population size also moves the optimal noise level towards stronger noise (green dots in figure \ref{cf_limit} a)). \newdh{A larger population can exploit the higher noise levels better. Within the larger population the precision of the individual neurons becomes less important.}\notedh{Is this discussion?} Beyond the optimal noise strength, where peak coding fraction is reached, an increase in noise strength leads to a reduction in coding fraction. If the noise is very strong, the coding fraction can reach approximately 0. This happens earlier (for weaker noise) in smaller populations than in larger populations. Together, these facts mean that at a given noise level the coding fraction of a small population might already be declining while that of a larger population is still increasing. A given amount of noise can lead to a very low coding fraction in a small population, but to a greater coding fraction in a larger population (figure \ref{cf_limit} c), blue and purple curves). The noise levels that work best for large populations generally perform very poorly in small populations.
\newdh{If the coding fraction is supposed to reach its highest values and needs large populations to do so, the necessary noise strength will be at a level where basically no encoding happens in single neurons or small populations.}\notedh{Discussion?} + +\begin{figure} +\centering +\includegraphics[width=0.4\linewidth]{{img/temp/best_approximation_spikes_50hz_0.01noi500s_10.5_1_1.dat_4_with_input}.pdf} +\includegraphics[width=0.4\linewidth]{{img/temp/best_approximation_spikes_50hz_0.01noi500s_10.5_1_1.dat_16_with_input}.pdf} +\includegraphics[width=0.4\linewidth]{{img/temp/best_approximation_spikes_50hz_0.01noi500s_10.5_1_1.dat_64_with_input}.pdf} +\includegraphics[width=0.4\linewidth]{{img/temp/best_approximation_spikes_50hz_0.01noi500s_10.5_1_1.dat_256_with_input}.pdf} +\caption{Rasterplots and reconstructed signals for different population sizes; insets show the signal spectrum. Rasterplots show the responses of neurons in the different populations. Blue lines show the reconstruction of the original signal by different sets of neurons of that population size. A: Each blue line is the reconstructed signal from the responses of a population of 4 neurons. B: The same for populations of 16 neurons. C: The same for 64 neurons. D: The same for 256 neurons. Larger population sizes lead to observations which are less dependent on random fluctuations and are therefore closer to each other. +\notedh{use a slow signal here(!?)}} +\label{harmonizing} +\end{figure} + +\subsection*{Influence of the input is complex} +Two very important variables are the mean of the signal, which is equivalent to the baseline firing rate of the neurons, and the amplitude of the signal. A higher baseline firing rate leads to a larger coding fraction.
In our terms that means that a mean signal strength $\mu$ far above the threshold will lead to higher coding fractions than a mean close to the threshold (see figure \ref{cf_limit} b), where the orange curves are above the green curves). The influence of the signal amplitude $\sigma$ is more complex. In general, at small population sizes larger amplitudes appear to work better, but in large populations weaker signals might perform as well as or even better than stronger signals (figure \ref{cf_limit} c), dashed curves vs. solid curves). + +\begin{figure} + \includegraphics[width=0.45\linewidth]{{img/basic/basic_15.0_1.0_200_detail_with_max}.pdf} + \includegraphics[width=0.45\linewidth]{{img/basic/n_basic_weak_15.0_1.0_200_detail}.pdf} +\includegraphics[width=0.45\linewidth]{img/basic/n_basic_compare_50_detail.pdf} +\caption{A: Coding fraction as a function of noise for different population sizes. Green dots mark the peak of the coding fraction curve. Increasing the population size leads to a higher peak and moves the peak to stronger noise. +B: Coding fraction as a function of population size. Each curve shows the coding fraction for a different noise strength. +C: Peak coding fraction as a function of population size for different input parameters. \notedh{ needs information about noise}} +\label{cf_limit} +\end{figure} + + + + + +\subsection*{Slow signals are more easily encoded} +To encode a signal well, neurons in a population need to keep up with the rising and falling of the signal. +Signals that change fast are harder to encode than signals which change more slowly. When a signal changes more gradually, the neurons can slowly adapt their firing rate. A visual example can be seen in figure \ref{freq_raster}. When all other parameters are equal, a signal with a lower frequency is easier to recreate from the firing of the neurons.
+In the rasterplots one can see, especially for the 50Hz signal (bottom left), that the firing probability of each neuron follows the input signal. When the input is low, almost none of the neurons fire. The result is the ``stripes'' we can see in the rasterplot. The stripes have a certain width which is determined by the signal frequency and the noise level. When the signal frequency is low, the width of the stripes can't be seen in a short snapshot. For the 10Hz signal in this example we can clearly see a break in the firing activity of the neurons at around 50ms. \notedh{There is another break at about 350ms, but the inset overlays...} The slower changes in the signal allow the reconstruction to follow the original signal more closely. +For the 200Hz signal there is little structure to be seen in the firing behaviour of the population; instead, the behaviour looks chaotic. +Something similar can be said for the 1Hz signal. Because the peaks are about 1s apart, a snapshot of 400ms cannot capture the structure of the neuronal response. Instead, what we see is a very gradual change of the firing rate following the signal. Because the change is so gradual, the reconstructed signal follows the input signal very closely.
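The reconstructions shown in these rasterplot figures are obtained with the optimal linear filter $H(f) = S_{sx}(f)/S_{xx}(f)$ introduced in the methods. A minimal sketch of such a reconstruction, assuming Welch estimates of the spectra (all names are illustrative, and the filter is applied in frequency space rather than by explicit convolution):

```python
import numpy as np
from scipy.signal import csd, welch

def linear_reconstruction(s, x, fs, nperseg=2**12):
    """Estimate of (the zero-mean part of) s from the response x via the
    optimal linear filter H(f) = S_sx(f) / S_xx(f)."""
    f, S_xs = csd(x, s, fs=fs, nperseg=nperseg)   # cross spectrum, x -> s
    _, S_xx = welch(x, fs=fs, nperseg=nperseg)    # response power spectrum
    with np.errstate(divide="ignore", invalid="ignore"):
        H = np.nan_to_num(S_xs / S_xx)            # optimal filter estimate
    X = np.fft.rfft(x - np.mean(x))
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    # interpolate the coarse filter estimate onto the full frequency grid
    H_full = np.interp(freqs, f, H.real) + 1j * np.interp(freqs, f, H.imag)
    return np.fft.irfft(H_full * X, n=x.size)
```

The reconstruction error $\epsilon^2$ is then the mean squared difference between this estimate and the zero-mean input signal.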
+\newdh{it is possible for neurons to encode signals + which have a higher frequency than their own firing rate (Knight 73)} + +\begin{figure} + \centering + \includegraphics[width=0.4\linewidth]{{img/rasterplots/best_approximation_spikes_1hz_0.001noi500s_10.5_0.5_1.dat}.pdf} + \includegraphics[width=0.4\linewidth]{{img/rasterplots/best_approximation_spikes_10hz_0.001noi500s_10.5_0.5_1.dat}.pdf} + \includegraphics[width=0.4\linewidth]{{img/rasterplots/best_approximation_spikes_50hz_0.001noi500s_10.5_0.5_1.dat}.pdf} + \includegraphics[width=0.4\linewidth]{{img/rasterplots/best_approximation_spikes_200hz_0.001noi500s_10.5_0.5_1.dat}.pdf} + \caption{Rasterplots, input signal and reconstructed signals for different cutoff frequencies; insets show each signal's spectrum. + Shown here are examples taken from 500s long simulations. Rasterplots show the firing of 64 LIF-neurons. Each row corresponds to one neuron. + The blue lines below the rasters are the input signal, the orange lines the reconstruction, calculated by convolving the spikes with the optimal linear filter. The reconstruction is closer to the original signal for slower signals than for higher frequency signals. + The different time scales lead to spike patterns which appear very different from each other.} + \label{freq_raster} +\end{figure} + + + + +\subsection*{Fast signals are harder to encode -- noise can help with that} +For low frequency signals, the coding fraction is always at least as large as it is for signals with higher frequency. For the parameters we have used there is very little difference between random noise signals with cutoff frequencies of 1Hz and 10Hz, respectively (figure \ref{cf_for_frequencies}, bottom row). +For all signal frequencies and amplitudes, a signal mean much larger than the threshold ($\mu = 15.0mV$, with the threshold at $10.0mV$) results in a higher coding fraction than a signal mean closer to the threshold ($\mu = 10.5 mV$).
+We also find that for the signal mean which is further away from the threshold, the loss of coding fraction from the 10Hz signal to the 50Hz signal is smaller than for the lower signal mean. For the fast signal (200Hz) we always find a large drop in coding fraction. The drop is less pronounced for stronger noise. +Coding fractions for the 1Hz and the 10Hz signal are almost identical (fig. \ref{cf_for_frequencies}, bottom row). +\newdh{It seems a reasonable assumption that for the parameters in our simulations the coding fraction can be considered converged at a frequency of 10Hz. Analysis?} + +\begin{figure} + \centering +\includegraphics[width=0.7\linewidth]{img/coding_fraction_vs_frequency.pdf} +\includegraphics[width=0.7\linewidth]{img/1Hz_vs_10Hz_alternativ.pdf} +\caption{\textbf{A--D}: Coding fraction in the large population limit as a function of input signal frequency for different parameters. Each curve represents a different noise strength. Points are only shown when the coding fraction increased by less than 2\% when the population size was increased from 1024 to 2048 neurons. For small amplitudes ($\sigma = 0.1mV$, A \& B) there was no convergence for a noise of $10^{-3} mV^2/Hz$. Coding fraction decreases for faster signals (50Hz and 200Hz). In the large population limit, stronger noise results in a coding fraction at least as large as for weaker noise. +\textbf{E, F}: Comparison of the coding fraction in the large population limit for a 1Hz signal and a 10Hz signal. Shapes indicate noise strength, color indicates mean signal input (i.e. distance from threshold). The left plot shows an amplitude of $\sigma=0.1mV$, the right plot shows $\sigma=1.0mV$. The diagonal black line indicates where coding fractions are equal.} +\label{cf_for_frequencies} +\end{figure} + +\notedh{ +Is there frequency vs. optimum noise missing? +For slower signals, coding fraction converges faster in terms of population size (figure \ref{cf_for_frequencies}).
+This (convergence speed) is also true for stronger signals as opposed to weaker signals. +For slower signals the maximum value is reached for weaker noise.} + +\subsection*{A tuning curve allows calculation of coding fraction for arbitrarily large populations} + +To understand information encoding by populations of neurons it is common practice to use simulations. However, the size of the simulated population is limited by computational power. We demonstrate a way to circumvent these limitations, allowing us to make predictions in the limit case of large population size. We use the interpretation of the tuning curve as a kind of averaged population response. To calculate this average, we need relatively few neurons to reproduce the response of an arbitrarily large population of neurons. This greatly reduces the necessary computational power. +Is it possible to approximate an arbitrarily large population size in some way? +Most importantly, we would like to know towards which value the coding fraction converges for given parameters in the limit $N \rightarrow \infty$. + +At least for slow signals, the response at a given point in time is determined by the signal value at that moment (for faster signals, the past plays a role; we have also seen before that faster signals are not encoded as well as slower signals, whereas the tuning curve is frequency-independent). Then the population response should simply be proportional to the response of a single neuron. +We can look at the average firing rate for a given input and how it changes with noise. This average firing rate is reflected in the tuning curve.
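The transformation this prediction rests on, replacing each input value by the average firing rate the tuning curve assigns to it, is a simple pointwise lookup. A minimal sketch, assuming the tuning curve is given on a grid of mean inputs (all names are illustrative):

```python
import numpy as np

def transform_by_tuning_curve(signal, curve_mu, curve_rate):
    """Replace each signal value (in mV) by the tuning-curve firing rate,
    linearly interpolating between the grid points of the curve."""
    return np.interp(signal, curve_mu, curve_rate)
```

The coherence between the transformed signal and the original signal then yields the tuning-curve prediction of the coding fraction via equation \ref{coding_fraction}.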
+ +\begin{figure} +\centering +\includegraphics[width=1.0\linewidth]{img/non_lin_example_undetail.pdf} + \includegraphics[width=0.5\linewidth]{{img/tuningcurves/6.00_to_15.00mV,1.0E-07_to_1.0E-02}.pdf} + \includegraphics[width=0.4\linewidth]{{img/temp/best_approximation_spikes_50hz_0.01noi500s_10.5_1_1.dat_256_with_input}.pdf} +\caption{Two ways to arrive at coherence and coding fraction. Left: The input signal (top, center) is received by LIF-neurons. The spiking of the neurons is then binned and coherence and coding fraction are calculated between the result and the input signal. +Right: The input signal (top, center) is transformed by the tuning curve (top right). The tuning curve corresponds to a function $g(V)$, which takes a voltage as input and yields a firing rate. The output is a modulated signal. We calculate coherence and coding fraction between the input voltage and the output firing rate. If the mean of the input is close to the threshold, as is the case here, all inputs below the threshold get projected to 0. This can be seen here at the beginning of the transformed curve. +Bottom left: Tuning curves for different noise levels. The x-axis shows the stimulus in mV, the y-axis shows the corresponding firing rate. For low noise levels there is a strong non-linearity at the threshold. For increasing noise, the firing rate increases, particularly around the threshold, and the curve becomes more linear.} +\label{non-lin} +\end{figure} + +The noise influences the shape of the tuning curve, with stronger noise linearizing the curve. The linearity of the curve is important, because the coding fraction is a linear measure. For strong inputs (around 15mV) the curve is almost linear, resulting in coding fractions close to 1. +For slow signals (1Hz cutoff frequency, up to 10Hz) the results from the tuning curve and the simulation of large populations of neurons match very well (figure \ref{accuracy}) over a range of signal amplitudes, base inputs to the neurons and noise strengths.
+This means that the LIF-neuron tuning curve gives us a very good approximation for the limit of encoded information that can be achieved by summing over independent, identical LIF-neurons with intrinsic noise. +For faster signals, the coding fraction calculated through the tuning curve stays constant, as the tuning curve only deforms the signal. As shown in figure \ref{cf_for_frequencies} e) and f), the coding fraction of the LIF-neuron ensemble drops with increasing frequency. Hence, for high frequency signals the tuning curve ceases to be a good predictor of the encoding quality of the ensemble. + +\begin{figure} +\centering + \includegraphics[width=0.48\linewidth]{img/tuningcurves/tuningcurve_vs_simulation_10Hz.pdf} + \includegraphics[width=0.48\linewidth]{img/tuningcurves/tuningcurve_vs_simulation_200Hz.pdf} + \caption{The tuning curve prediction works for 10Hz, but not for 200Hz signals.} + \label{accuracy} +\end{figure} + +For high-frequency signals, the method does not work. The effective refractory period prevents the instantaneous firing rate from being useful, because the neurons spike only in very short intervals around a signal peak. They are very unlikely to spike again immediately, so that the PSTH is concentrated around the input peaks, with little nuance. + +We use the tuning curve to analyse how the signal mean and the signal amplitude change the coding fraction we would get from an infinitely large population of neurons (fig. \ref{non-lin}, bottom two rows). We can see that stronger noise always yields a larger coding fraction. This is expected, because the tuning curve is more linear for stronger noise. It also matches the fact that we are observing the limit of an infinitely large population, which would be able to ``average out'' any noise. For the coding fraction as a function of the mean, we see zero or near-zero coding fraction if we are far below the threshold. If we increase the mean, at some point the coding fraction starts to jump up.
This happens earlier for stronger noise (i.e. a more linear tuning curve). The increase in coding fraction is much smoother if we use a larger amplitude (right figure). We also notice some sort of plateau, where increasing the mean does not lead to a larger coding fraction, before the coding fraction begins rising towards values close to 1. The plateau begins earlier for straighter tuning curves. +For the coding fraction as a function of signal amplitude we see very different results depending on the parameters. Again, we see that stronger noise leads to a higher coding fraction. If we are just above or at the threshold (center and right), an increase in signal amplitude leads to a lower coding fraction. This makes sense, as more of the signal moves into the very non-linear area around the threshold. For increasing amplitude, an increasing fraction of the signal gets into the range of the tuning curve with a firing rate of 0Hz, i.e. where there is no signal encoding. A very interesting effect happens if we have a mean slightly below the threshold (left): while for strong noise we see the same effect as at or above the threshold, for weaker noise we see the opposite. This can be explained as the reverse of the effect that leads to decreasing coding fraction. Here, a larger amplitude means that the signal moves into the more linear part of the tuning curve more often. On the other hand, an increase in amplitude does not lead to worse encoding through movement of the signal into the 0Hz part of the tuning curve, because the signal is already there, so it can't get worse. This can help explain why the coding fraction seems to saturate near 0.5: in an extreme case, the negative parts of a signal would not get encoded at all, while the positive parts would be encoded linearly.
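The extreme case sketched in the last sentence can be checked numerically. For a white Gaussian signal passed through a memoryless encoder, the coherence is flat and equals the squared correlation coefficient, so the coding fraction of a half-wave rectifier (negative parts dropped, positive parts passed linearly) follows directly. This is a toy illustration, not one of our simulations:

```python
import numpy as np

rng = np.random.default_rng(0)
s = rng.normal(size=1_000_000)   # white Gaussian signal, sigma = 1
x = np.maximum(s, 0.0)           # half-wave rectified "encoding"

# For a memoryless encoder of a white signal the coherence is flat and
# equals the squared correlation coefficient rho^2, so the coding
# fraction reduces to gamma = 1 - sqrt(1 - rho^2).
rho = np.corrcoef(s, x)[0, 1]
gamma = 1.0 - np.sqrt(1.0 - rho ** 2)
print(gamma)                     # close to 0.5 (analytically about 0.48)
```

This matches the saturation near 0.5 seen for sub-threshold means and large amplitudes.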
+ +\begin{figure} +\centering + \includegraphics[width=0.4\linewidth]{{img/tuningcurves/codingfraction_from_curves_amplitude_0.1mV}.pdf} + \includegraphics[width=0.4\linewidth]{{img/tuningcurves/codingfraction_from_curves_amplitude_0.5mV}.pdf} + \includegraphics[width=0.3\linewidth]{{img/tuningcurves/codingfraction_from_curves_mean_9.5mV}.pdf} + \includegraphics[width=0.3\linewidth]{{img/tuningcurves/codingfraction_from_curves_mean_10.0mV}.pdf} + \includegraphics[width=0.3\linewidth]{{img/tuningcurves/codingfraction_from_curves_mean_10.5mV}.pdf} +% \includegraphics[width=0.45\linewidth]{{img/rasterplots/best_approximation_spikes_50hz_1e-07noi500s_15_0.5_1.dat}.pdf} +% \includegraphics[width=0.45\linewidth]{{img/rasterplots/best_approximation_spikes_200hz_1e-07noi500s_15_0.5_1.dat}.pdf} + \caption{ + \textbf{A,B}: Coding fraction as a function of signal mean for two different frequencies. There is little to no difference in the coding fraction. + A: $\sigma = 0.1mV$. Each curve shows the coding fraction as a function of the signal mean for a different noise level. The vertical line indicates the threshold. + \textbf{C--E}: Coding fraction as a function of signal amplitude for different tuning curves (noise levels). Three different means: one below the threshold (9.5mV), one at the threshold (10.0mV), and one above the threshold (10.5mV).} + \label{codingfraction_means_amplitudes} +\end{figure}