\section*{Results}
\subsection*{Noise makes neurons' responses different from each other}
If noise levels are low (fig. \ref{example_spiketrains} a)), the neurons within a population behave very similarly to each other. There is little variation in the spike responses of the neurons to a signal, and recreating the signal is difficult. As the strength of the noise increases, at some point the coding fraction begins to increase as well: the signal reconstruction becomes better as the responses of the different neurons begin to deviate from each other. When the noise strength is increased even further, a peak coding fraction is eventually reached. This is the optimal noise strength for the given parameters (fig. \ref{example_spiketrains} b)). If the noise strength is increased beyond this point, the responses of the neurons are determined more by random fluctuations and less by the actual signal, making reconstruction more difficult (fig. \ref{example_spiketrains} c)). At some point, signal encoding breaks down completely and the coding fraction goes to 0.
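
\newdh{The interplay of noise and population response described here can be illustrated with a minimal simulation. The sketch below (Python) integrates a population of LIF-neurons that receive a common signal plus independent intrinsic noise; the function name, the integration convention and all parameter values are illustrative assumptions, not the code used for our simulations.}

\begin{verbatim}
import numpy as np

def simulate_lif_population(signal, n_neurons, dt=1e-4, mu=10.5,
                            threshold=10.0, v_reset=0.0, tau_m=0.01,
                            noise_strength=1e-3, rng=None):
    """Euler-Maruyama integration of N leaky integrate-and-fire
    neurons. All neurons receive the same signal (mV) on top of the
    mean input mu; each neuron has its own white noise of strength
    D = noise_strength (one common convention:
    tau_m dV = (mu - V + s(t)) dt + sqrt(2 D) dW).
    Returns a boolean (time x neuron) spike matrix."""
    rng = rng or np.random.default_rng()
    v = np.zeros(n_neurons)
    spikes = np.zeros((len(signal), n_neurons), dtype=bool)
    for t in range(len(signal)):
        xi = rng.standard_normal(n_neurons)   # independent per neuron
        v += (dt * (mu - v + signal[t])
              + np.sqrt(2 * noise_strength * dt) * xi) / tau_m
        fired = v >= threshold
        spikes[t, fired] = True
        v[fired] = v_reset                    # reset after each spike
    return spikes
\end{verbatim}

\newdh{Sweeping \texttt{noise\_strength} across runs and reconstructing the signal from the summed spike trains reproduces the rise and fall of the coding fraction described above.}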
\subsection*{Large population size is only useful if noise is strong}
We see that an increase in population size leads to a larger coding fraction, until the coding fraction saturates at a limit that depends on the noise. For weak noise, the increase in coding fraction with population size is small or non-existent. This can be seen in figure \ref{cf_limit} c), where the red ($10^{-5}\frac{mV^2}{Hz}$) and orange ($10^{-4}\frac{mV^2}{Hz}$) curves (relatively weak noise) saturate at relatively small population sizes (about 8 and 32 neurons, respectively).

An increase in population size also moves the optimal noise level towards stronger noise (green dots in figure \ref{cf_limit} a)). \newdh{A larger population can exploit higher noise levels better, because within a larger population the precision of the individual neuron becomes less important.}\notedh{Is this discussion?} Beyond the optimal noise strength, where peak coding fraction is reached, a further increase in noise leads to a reduction in coding fraction. If the noise is very strong, the coding fraction approaches 0. This happens earlier (for weaker noise) in smaller populations than in larger ones. Together, these facts mean that at a given noise level the coding fraction of a small population might already be declining while that of a larger population is still increasing. A given amount of noise can therefore lead to a very low coding fraction in a small population, but to a much greater coding fraction in a larger population (figure \ref{cf_limit} c), blue and purple curves). In general, the noise levels that work best for large populations perform very badly in small populations. \newdh{If the coding fraction is to reach its highest values, which requires large populations, the necessary noise strength will be at a level at which essentially no encoding happens in a single neuron or a small population.}\notedh{Discussion?}
\begin{figure}
\centering
\includegraphics[width=0.4\linewidth]{{img/temp/best_approximation_spikes_50hz_0.01noi500s_10.5_1_1.dat_4_with_input}.pdf}
\includegraphics[width=0.4\linewidth]{{img/temp/best_approximation_spikes_50hz_0.01noi500s_10.5_1_1.dat_16_with_input}.pdf}
\includegraphics[width=0.4\linewidth]{{img/temp/best_approximation_spikes_50hz_0.01noi500s_10.5_1_1.dat_64_with_input}.pdf}
\includegraphics[width=0.4\linewidth]{{img/temp/best_approximation_spikes_50hz_0.01noi500s_10.5_1_1.dat_256_with_input}.pdf}
\caption{Rasterplots and reconstructed signals for different population sizes; insets show the signal spectrum. Rasterplots show the responses of the neurons in the different populations. Blue lines show the reconstruction of the original signal by different sets of neurons of that population size. A: Each blue line is the signal reconstructed from the responses of a population of 4 neurons. B: The same for 16 neurons. C: The same for 64 neurons. D: The same for 256 neurons. Larger populations yield reconstructions that depend less on random fluctuations and are therefore closer to each other.
\notedh{take a slow signal here(!?)}}
\label{harmonizing}
\end{figure}
\subsection*{Influence of the input is complex}
Two very important variables are the mean strength of the signal $\mu$, which sets the baseline firing rate of the neurons, and the amplitude of the signal $\sigma$. A higher baseline firing rate leads to a larger coding fraction. In our terms this means that a signal mean $\mu$ far above the threshold leads to higher coding fractions than a signal mean close to the threshold (see figure \ref{cf_limit} b), orange curves are above the green curves). The influence of the signal amplitude $\sigma$ is more complex. In general, larger amplitudes appear to work better at small population sizes, but for large populations weaker signals might perform as well as or even better than stronger signals (figure \ref{cf_limit} c), dashed vs. solid curves).
\begin{figure}
\includegraphics[width=0.45\linewidth]{{img/basic/basic_15.0_1.0_200_detail_with_max}.pdf}
\includegraphics[width=0.45\linewidth]{{img/basic/n_basic_weak_15.0_1.0_200_detail}.pdf}
\includegraphics[width=0.45\linewidth]{img/basic/n_basic_compare_50_detail.pdf}
\caption{A: Coding fraction as a function of noise for different population sizes. Green dots mark the peak of each coding fraction curve. Increasing population size leads to a higher peak and moves the peak towards stronger noise.
B: Coding fraction as a function of population size. Each curve shows the coding fraction for a different noise strength.
C: Peak coding fraction as a function of population size for different input parameters. \notedh{needs information about noise}}
\label{cf_limit}
\end{figure}
\subsection*{Slow signals are more easily encoded}
To encode a signal well, the neurons in a population need to keep up with the rising and falling of the signal.

Signals that change fast are harder to encode than signals which change more slowly. When a signal changes more gradually, the neurons can adjust their firing rate accordingly. A visual example can be seen in figure \ref{freq_raster}. With all other parameters equal, a signal with a lower frequency is easier to recreate from the firing of the neurons.

In the rasterplots one can see, especially for the 50Hz signal (bottom left), that the firing probability of each neuron follows the input signal. When the input is low, almost none of the neurons fire. The result is the ``stripes'' we can see in the rasterplot. The stripes have a certain width, which is determined by the signal frequency and the noise level. When the signal frequency is low, the width of the stripes cannot be seen in a short snapshot. For the 10Hz signal in this example we can clearly see a break in the firing activity of the neurons at around 50ms. \notedh{There is another break at about 350ms, but the inset overlays it...} The slower changes in the signal allow the reconstruction to follow the original signal more closely.

For the 200Hz signal there is little structure to be seen in the firing behaviour of the population; instead, the behaviour looks chaotic.

Something similar can be said for the 1Hz signal. Because the peaks are about 1s apart, a snapshot of 400ms cannot capture the structure of the neuronal response. Instead, what we see is a very gradual change of the firing rate following the signal. Because the change is so gradual, the reconstructed signal follows the input signal very closely.
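
\newdh{The reconstructions shown in figure \ref{freq_raster} are obtained by convolving the spike response with the optimal linear filter. The sketch below (Python) shows a standard frequency-domain version of this procedure, together with the usual definition of the coding fraction as one minus the root-mean-square reconstruction error divided by the standard deviation of the signal; the segment length and the regularization constant are illustrative choices.}

\begin{verbatim}
import numpy as np

def coding_fraction(spikes, signal, dt, nperseg=2**14):
    """Coding fraction of the optimal linear (Wiener) reconstruction.
    spikes: (time x neuron) boolean matrix; signal: input of the same
    length. Cross- and auto-spectra are averaged over segments, since
    estimating the filter from a single segment is degenerate."""
    x = spikes.sum(axis=1) / dt              # summed population rate
    x = x - x.mean()
    s = np.asarray(signal, dtype=float)
    s = s - s.mean()
    n_seg = len(s) // nperseg
    sxx = np.zeros(nperseg // 2 + 1)
    sxs = np.zeros(nperseg // 2 + 1, dtype=complex)
    for i in range(n_seg):                   # Welch-style averaging
        xf = np.fft.rfft(x[i * nperseg:(i + 1) * nperseg])
        sf = np.fft.rfft(s[i * nperseg:(i + 1) * nperseg])
        sxx += (xf * xf.conj()).real
        sxs += xf.conj() * sf
    h = sxs / (sxx + 1e-12)                  # optimal linear filter
    s_est = np.concatenate([np.fft.irfft(
        h * np.fft.rfft(x[i * nperseg:(i + 1) * nperseg]), n=nperseg)
        for i in range(n_seg)])
    err = s[:n_seg * nperseg] - s_est
    return 1.0 - np.sqrt(np.mean(err ** 2)) / s.std()
\end{verbatim}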
\newdh{It is possible for neurons to encode signals which have a higher frequency than their own firing rate (Knight 73).}
\begin{figure}
\centering
\includegraphics[width=0.4\linewidth]{{img/rasterplots/best_approximation_spikes_1hz_0.001noi500s_10.5_0.5_1.dat}.pdf}
\includegraphics[width=0.4\linewidth]{{img/rasterplots/best_approximation_spikes_10hz_0.001noi500s_10.5_0.5_1.dat}.pdf}
\includegraphics[width=0.4\linewidth]{{img/rasterplots/best_approximation_spikes_50hz_0.001noi500s_10.5_0.5_1.dat}.pdf}
\includegraphics[width=0.4\linewidth]{{img/rasterplots/best_approximation_spikes_200hz_0.001noi500s_10.5_0.5_1.dat}.pdf}
\caption{Rasterplots, input signals and reconstructed signals for different cutoff frequencies; insets show the spectrum of each signal.
Shown are examples taken from 500s-long simulations. Rasterplots show the firing of 64 LIF-neurons; each row corresponds to one neuron.
Blue lines below the rasters show the input signal; the orange lines show the reconstruction, calculated by convolving the spikes with the optimal linear filter. The reconstruction is closer to the original signal for slower signals than for higher-frequency signals.
The different time scales lead to spike patterns which appear very different from each other.}
\label{freq_raster}
\end{figure}
\subsection*{Fast signals are harder to encode -- noise can help with that}
For low-frequency signals, the coding fraction is always at least as large as it is for signals with higher frequency. For the parameters we used, the coding fractions for random signals with cutoff frequencies of 1Hz and 10Hz are almost identical (figure \ref{cf_for_frequencies}, bottom row). \newdh{It seems a reasonable assumption that for the parameters in our simulations the coding fraction can be considered converged at a frequency of 10Hz. Analysis?}

For all signal frequencies and amplitudes, a signal mean much larger than the threshold ($\mu = 15.0mV$, with the threshold at $10.0mV$) results in a higher coding fraction than a signal mean closer to the threshold ($\mu = 10.5mV$).

We also find that for the signal mean further away from the threshold, the loss of coding fraction from the 10Hz signal to the 50Hz signal is smaller than for the lower signal mean. For the fast signal (200Hz) we always find a large drop in coding fraction; the drop is less pronounced for stronger noise.
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{img/coding_fraction_vs_frequency.pdf}
\includegraphics[width=0.7\linewidth]{img/1Hz_vs_10Hz_alternativ.pdf}
\caption{\textbf{A-D}: Coding fraction in the large-population limit as a function of input signal frequency for different parameters. Each curve represents a different noise strength. Points are only shown where the coding fraction increased by less than 2\% when the population size was increased from 1024 to 2048 neurons. For small amplitudes ($\sigma = 0.1mV$, A \& B) there was no convergence for a noise of $10^{-3} mV^2/Hz$. Coding fraction decreases for faster signals (50Hz and 200Hz). In the large-population limit, stronger noise results in a coding fraction at least as large as that for weaker noise.
\textbf{E, F}: Comparison of the coding fraction in the large-population limit for a 1Hz signal and a 10Hz signal. Shapes indicate noise strength, color indicates mean signal input (i.e. distance from threshold). The left plot shows an amplitude of $\sigma=0.1mV$, the right plot $\sigma=1.0mV$. The diagonal black line indicates where the coding fractions are equal.}
\label{cf_for_frequencies}
\end{figure}
\notedh{Is there frequency vs. optimum noise missing?
For slower signals, coding fraction converges faster in terms of population size (figure \ref{cf_for_frequencies}).
This (convergence speed) is also true for stronger signals as opposed to weaker signals.
For slower signals the maximum value is reached for weaker noise.}
\subsection*{A tuning curve allows calculation of coding fraction for arbitrarily large populations}
To understand information encoding by populations of neurons, it is common practice to use simulations. However, the size of the simulated population is limited by the available computational power. Is it possible to approximate an arbitrarily large population in some other way? Most importantly, we would like to know towards which value the coding fraction converges, for given parameters, in the limit $N \rightarrow \infty$.

We demonstrate a way to circumvent these limitations and make predictions for the limit case of large population size. We interpret the tuning curve as a kind of averaged population response; relatively few neurons are needed to estimate this average, yet it reproduces the response of an arbitrarily large population of neurons. This greatly reduces the necessary computational power.
At least for slow signals, the firing-rate response at a given point in time is determined by the signal value at that moment (for faster signals the past also plays a role; we have also seen above that faster signals are not encoded as well as slower signals, whereas the tuning curve is frequency-independent). The response of the population should then simply be proportional to the response of a single neuron.

We can look at the average firing rate for a given input and at how it changes with noise; this average firing rate is exactly what the tuning curve describes.
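
\newdh{A minimal sketch of this approach (Python, reusing the illustrative \texttt{simulate\_lif\_population()} from above): estimate the tuning curve $g(V)$ from short simulations with constant input, then map the signal through the curve. The coding fraction between the input and the transformed signal can then be computed as before, with the rate taking the place of the binned spikes.}

\begin{verbatim}
import numpy as np

def estimate_tuning_curve(mean_inputs, noise_strength,
                          t_sim=10.0, dt=1e-4, n_neurons=16):
    """Mean firing rate g(V) for a range of constant inputs at one
    noise level; reuses simulate_lif_population() from above with a
    zero signal. All values are illustrative."""
    zero_signal = np.zeros(int(t_sim / dt))
    rates = []
    for v in mean_inputs:
        spikes = simulate_lif_population(zero_signal, n_neurons,
                                         dt=dt, mu=v,
                                         noise_strength=noise_strength)
        rates.append(spikes.sum() / (n_neurons * t_sim))  # Hz
    return np.array(rates)

# Predicted infinite-population response: r(t) = g(mu + s(t)).
mean_inputs = np.linspace(6.0, 15.0, 40)  # mV, tuning-curve range
g = estimate_tuning_curve(mean_inputs, noise_strength=1e-3)
# For a signal s (mV) around a mean mu, interpolate along the curve:
# rate = np.interp(mu + s, mean_inputs, g)
\end{verbatim}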
\begin{figure}
\centering
\includegraphics[width=1.0\linewidth]{img/non_lin_example_undetail.pdf}
\includegraphics[width=0.5\linewidth]{{img/tuningcurves/6.00_to_15.00mV,1.0E-07_to_1.0E-02}.pdf}
\includegraphics[width=0.4\linewidth]{{img/temp/best_approximation_spikes_50hz_0.01noi500s_10.5_1_1.dat_256_with_input}.pdf}
\caption{Two ways to arrive at coherence and coding fraction. Left: The input signal (top, center) is received by LIF-neurons. The spiking of the neurons is binned, and coherence and coding fraction are calculated between the result and the input signal.
Right: The input signal (top, center) is transformed by the tuning curve (top right). The tuning curve corresponds to a function $g(V)$ which takes a voltage as input and yields a firing rate. The output is a modulated firing rate. We calculate coherence and coding fraction between the input voltage and the output firing rate. If the mean of the input is close to the threshold, as is the case here, all inputs below the threshold get projected to 0. This can be seen at the beginning of the transformed curve.
Bottom left: Tuning curves for different noise levels. The x-axis shows the stimulus in mV, the y-axis the corresponding firing rate. For low noise levels there is a strong non-linearity at the threshold. For increasing noise, the firing rate increases in particular for stimuli around and below the threshold, linearizing the tuning curve.}
\label{non-lin}
\end{figure}
The noise influences the shape of the tuning curve, with stronger noise linearizing the curve. The linearity of the curve is important, because the coding fraction is a linear measure: a perfectly linear transformation leaves the signal fully recoverable, so coherence and coding fraction stay at 1. For strong mean inputs (around 15mV) the curve is almost linear, resulting in coding fractions close to 1.

For slow signals (1Hz cutoff frequency, up to 10Hz) the results from the tuning curve and from simulations of large populations of neurons match very well (figure \ref{accuracy}) over a range of signal amplitudes, base inputs to the neurons, and noise strengths.

This means that the tuning curve of the LIF-neuron gives us a very good approximation for the limit of encoded information that can be achieved by summing over independent, identical LIF-neurons with intrinsic noise.

For faster signals, the coding fraction calculated from the tuning curve stays constant, as the tuning curve only deforms the signal in a frequency-independent way. As shown in figure \ref{cf_for_frequencies} a) to d), the coding fraction of the LIF-neuron ensemble drops with increasing frequency. Hence, for high-frequency signals the tuning curve ceases to be a good predictor of the encoding quality of the ensemble.
\begin{figure}
\centering
\includegraphics[width=0.48\linewidth]{img/tuningcurves/tuningcurve_vs_simulation_10Hz.pdf}
\includegraphics[width=0.48\linewidth]{img/tuningcurves/tuningcurve_vs_simulation_200Hz.pdf}
\caption{Coding fraction predicted from the tuning curve compared to simulation results. The tuning curve prediction works for the 10Hz signal but not for the 200Hz signal.}
\label{accuracy}
\end{figure}
For high-frequency signals, the method does not work. The effective refractory period keeps the instantaneous firing rate from being informative: the neurons spike only in very short intervals around a signal peak, and they are very unlikely to spike again immediately, so the PSTH is concentrated around the input peaks and carries little nuance.

We use the tuning curve to analyse how the signal mean and the signal amplitude change the coding fraction we would get from an infinitely large population of neurons (fig. \ref{codingfraction_means_amplitudes}). We can see that stronger noise always yields a larger coding fraction. This is expected, because the tuning curve is more linear for stronger noise. It is also consistent with the fact that we are observing the limit of an infinitely large population, which can ``average out'' any noise. For the coding fraction as a function of the mean, we see zero or near-zero coding fraction if the mean is far below the threshold. If we increase the mean, at some point the coding fraction jumps up. This happens earlier for stronger noise (i.e. for a more linear tuning curve). The increase in coding fraction is much smoother for a larger amplitude (right panel). We also notice a sort of plateau, where increasing the mean does not lead to a larger coding fraction, before the coding fraction rises again towards 1. The plateau begins earlier for straighter tuning curves.

For the coding fraction as a function of the signal amplitude we see very different results depending on the parameters. Again, stronger noise leads to a higher coding fraction. If the mean is at or just above the threshold (center and right), an increase in signal amplitude leads to a lower coding fraction. This makes sense, as more of the signal moves into the very non-linear area around the threshold: with increasing amplitude, an increasing fraction of the signal falls into the range of the tuning curve with a firing rate of 0Hz, i.e. where there is no signal encoding. A very interesting effect happens if the mean is slightly below the threshold (left): while for strong noise we see the same effect as at or above the threshold, for weaker noise we see the opposite. This can be explained as the reverse of the effect that leads to the decreasing coding fraction. Here, a larger amplitude means that the signal moves into the more linear part of the tuning curve more often. On the other hand, an increase in amplitude does not lead to worse encoding through movement of the signal into the 0Hz part of the tuning curve, because the signal is already there, so the encoding cannot get worse. This also helps explain why the coding fraction appears to saturate near 0.5: in the extreme case, the negative parts of the signal are not encoded at all, while the positive parts are encoded linearly.
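
\newdh{This intuition can be checked with a small, idealized numerical example (a sketch, not our simulation): encode a Gaussian signal by perfect half-wave rectification, i.e. the negative half is lost and the positive half is passed linearly, and reconstruct it with the best linear estimator. The resulting coding fraction indeed lands close to 0.5.}

\begin{verbatim}
import numpy as np

# Idealized check of the ~0.5 saturation: half-wave rectify a
# Gaussian signal, then apply the best linear (affine) estimator.
rng = np.random.default_rng(0)
s = rng.standard_normal(1_000_000)   # zero-mean, unit-variance signal
r = np.maximum(s, 0.0)               # response: negative half lost

# s_est = a*r + b with a, b chosen to minimize the squared error
a = np.cov(s, r)[0, 1] / np.var(r)
b = s.mean() - a * r.mean()
err = s - (a * r + b)

cf = 1.0 - np.sqrt(np.mean(err ** 2)) / s.std()
print(f"coding fraction ~ {cf:.3f}")  # approx. 0.48, close to 0.5
\end{verbatim}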
\begin{figure}
\centering
\includegraphics[width=0.4\linewidth]{{img/tuningcurves/codingfraction_from_curves_amplitude_0.1mV}.pdf}
\includegraphics[width=0.4\linewidth]{{img/tuningcurves/codingfraction_from_curves_amplitude_0.5mV}.pdf}
\includegraphics[width=0.3\linewidth]{{img/tuningcurves/codingfraction_from_curves_mean_9.5mV}.pdf}
\includegraphics[width=0.3\linewidth]{{img/tuningcurves/codingfraction_from_curves_mean_10.0mV}.pdf}
\includegraphics[width=0.3\linewidth]{{img/tuningcurves/codingfraction_from_curves_mean_10.5mV}.pdf}
% \includegraphics[width=0.45\linewidth]{{img/rasterplots/best_approximation_spikes_50hz_1e-07noi500s_15_0.5_1.dat}.pdf}
% \includegraphics[width=0.45\linewidth]{{img/rasterplots/best_approximation_spikes_200hz_1e-07noi500s_15_0.5_1.dat}.pdf}
\caption{\textbf{A, B}: Coding fraction as a function of the signal mean for two different frequencies; there is little to no difference in the coding fraction between the frequencies. Each curve shows the coding fraction for a different noise level; the vertical line indicates the threshold. A: $\sigma = 0.1mV$. B: $\sigma = 0.5mV$.
\textbf{C-E}: Coding fraction as a function of the signal amplitude for different tuning curves (noise levels). Three different means: one below the threshold (9.5mV), one at the threshold (10.0mV), and one above the threshold (10.5mV).}
\label{codingfraction_means_amplitudes}
\end{figure}