Added section
parent d9992fe304
commit 824a06aeff
main.tex
@ -245,10 +245,14 @@ For faster signals, the coding fraction calculated through the tuning curve stay
\caption{The tuning curve method works for 10Hz but not for 200Hz.}
\end{figure}
For high-frequency signals, the method does not work. The effective, implicit refractory period keeps the instantaneous firing rate from being useful: the neurons spike only in very short intervals around a signal peak and are very unlikely to spike again immediately, so signal peaks that follow too closely after the preceding one are not resolved properly.\notedh{Add a figure.}
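A minimal sketch of this bottleneck, with hypothetical parameters (the actual model parameters are defined elsewhere in the thesis): a leaky integrate-and-fire neuron with an absolute refractory period cannot fire faster than $1/t_{\mathrm{ref}}$, so a 200Hz signal, whose period of 5ms is on the order of the refractory period, cannot be followed cycle by cycle.

```python
import numpy as np

# Sketch with hypothetical parameters: a leaky integrate-and-fire neuron
# with an absolute refractory period, driven by a sinusoid plus white noise.
def lif_spikes(freq_hz, t_ref=0.003, dt=1e-4, t_max=2.0,
               tau=0.01, mu=10.5, amp=1.0, noise=1.0, v_th=10.0, seed=0):
    rng = np.random.default_rng(seed)
    t = np.arange(0.0, t_max, dt)
    signal = mu + amp * np.sin(2.0 * np.pi * freq_hz * t)
    v, last_spike, spikes = 0.0, -np.inf, []
    for i, ti in enumerate(t):
        if ti - last_spike < t_ref:      # absolute refractory period
            continue
        # Euler-Maruyama step of the membrane equation
        v += (signal[i] - v) / tau * dt + noise * np.sqrt(dt) * rng.standard_normal()
        if v >= v_th:                    # threshold crossing: spike and reset
            spikes.append(ti)
            v, last_spike = 0.0, ti
    return np.asarray(spikes)

slow = lif_spikes(10.0)    # signal period (100ms) far exceeds t_ref
fast = lif_spikes(200.0)   # signal period (5ms) is comparable to t_ref
print(len(slow), len(fast))
```

Because integration is suspended for `t_ref` after each spike, all inter-spike intervals are at least `t_ref`, which is exactly the constraint that prevents the instantaneous firing rate from resolving closely spaced peaks.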
We use the tuning curve to analyse how the signal mean and the signal amplitude change the coding fraction we would obtain from an infinitely large population of neurons (fig. \ref{non-lin}, bottom two rows). In this case, stronger noise always yields a larger coding fraction. This is expected, because the tuning curve is more linear for stronger noise and the coding fraction is a linear measure. It also matches the fact that we are observing the limit of an infinitely large population, which can ``average out'' any amount of noise.
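One way to make this linearity argument explicit is the normalized mean-squared-error form of the coding fraction (stated here as an assumption about the definition; $s_{\mathrm{est}}$ denotes the optimal linear reconstruction of the signal $s$):
\[
\gamma \;=\; 1 \;-\; \frac{\left\langle \left(s(t) - s_{\mathrm{est}}(t)\right)^{2} \right\rangle}{\left\langle \left(s(t) - \langle s \rangle\right)^{2} \right\rangle}.
\]
With this normalization, $\gamma = 1$ for a perfect reconstruction and $\gamma = 0$ if the reconstruction is no better than the signal mean; any static nonlinearity of the tuning curve that the linear reconstruction cannot invert reduces $\gamma$.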
For coding fraction as a function of the mean, we see zero or near-zero coding fraction far below the threshold: if the signal is too weak, it does not trigger any spikes and no information can be encoded. As we increase the mean, at some point the coding fraction jumps up. This happens earlier for stronger noise, because noise allows spiking to be triggered by weaker signals. The increase in coding fraction is much smoother for a larger signal amplitude (right figure). We also notice a plateau, where increasing the mean does not lead to a larger coding fraction, before the coding fraction rises close to 1. The plateau begins earlier for stronger noise.
For coding fraction as a function of signal amplitude, the results depend strongly on the parameters. Again, stronger noise leads to a higher coding fraction. If the mean is at or just above the threshold (center and right), an increase in signal amplitude leads to a lower coding fraction. This makes sense, as a larger part of the signal then falls into the strongly non-linear region around the threshold.
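A toy stand-in (not the simulated neurons; a softplus function replaces the real noise-smoothed tuning curve) makes the amplitude effect at the threshold concrete: for a zero-mean Gaussian signal centered on the threshold, the squared correlation between signal and rate (the coding fraction of an optimal linear readout in the infinite-population limit) drops as the amplitude pushes the signal into the nonlinearity.

```python
import numpy as np

# Toy model: smoothed threshold-linear tuning curve (softplus), signal mean
# exactly at the threshold. Small amplitudes stay in the locally linear part
# of the curve; large amplitudes reach into the nonlinearity.
rng = np.random.default_rng(2)
z = rng.standard_normal(200_000)

def coding_fraction(amp, beta=1.0):
    s = amp * z                              # zero-mean signal around threshold
    r = beta * np.log1p(np.exp(s / beta))    # softplus stand-in for the tuning curve
    return np.corrcoef(s, r)[0, 1] ** 2      # optimal linear decode: squared correlation

print(coding_fraction(0.1), coding_fraction(10.0))
```

The small-amplitude value is close to 1, while the large-amplitude value is noticeably smaller, because for large amplitudes the softplus acts like a half-wave rectifier.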
A very interesting effect appears for a mean slightly below the threshold (left): for strong noise we see the same effect as at or above the threshold, but for weaker noise we see the opposite.
The increase can be explained as the reverse of the effect that causes the decrease: a larger amplitude means that the signal reaches the more linear part of the tuning curve more often. At the same time, encoding cannot deteriorate through movement of the signal into the low-firing-rate, non-linear part of the tuning curve, because the signal already spends time there, so it cannot get worse. This also helps explain why the coding fraction appears to saturate near 0.5: in the extreme case, the negative parts of the signal are not encoded at all, while the positive parts are encoded linearly.
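The saturation value can be checked numerically in this caricature (a sketch assuming the normalized mean-squared-error definition of coding fraction): reconstructing a zero-mean Gaussian signal by its positive half alone yields a coding fraction of one half, since exactly half of the signal variance sits in the unencoded negative half.

```python
import numpy as np

# Caricature of the extreme case: the positive parts of the signal are
# encoded linearly (perfectly), the negative parts not at all.
rng = np.random.default_rng(1)
s = rng.standard_normal(1_000_000)   # zero-mean Gaussian signal
s_est = np.clip(s, 0.0, None)        # reconstruction: positive half only
cf = 1.0 - np.mean((s - s_est) ** 2) / np.var(s)
print(cf)                            # approximately 0.5
```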
\begin{figure}
\centering
@ -266,7 +270,6 @@ For coding fraction as a function of signal amplitude we see very different resu
\textbf{C-E}: Coding fraction as a function of signal amplitude for different tuning curves (noise levels), for three different means: one below the threshold (9.5mV), one at the threshold (10.0mV), and one above the threshold (10.5mV).}
\end{figure}
\input{simulation_results}
\input{simulation_further_considerations}