In populations of neurons, the representation of a common stimulus can be improved by population heterogeneity \citep{ahn2014heterogeneity}. This heterogeneity could, for example, be a different firing threshold for each neuron. Alternatively, the improvement can be achieved by adding dynamic noise to the input of the neurons \citep{shapira2016sound}. The effect of adding noise to a sub-threshold signal, a phenomenon known as ``stochastic resonance'' (SR), has been investigated extensively during the last decades \citep{benzi1981mechanism,gammaitoni1998resonance,shimokawa1999stochastic}. The noise added to the signal makes it more likely that the signal reaches the detection threshold and thereby triggers an action potential in a neuron. However, often the goal in nature is not to simply detect a signal but to discriminate between two different signals as well as possible. For example, in auditory communication it is not sufficient to merely detect the presence of sound. Rather, an auditory stimulus should be encoded such that an optimal amount of information is gained from it. Another example is the electrosensory communication between conspecifics in weakly electric fish, which need to differentiate, for example, between aggressive and courtship behaviors. In any biological system, there is a limit to the precision of the components making up that system. Even without external input, the spike times of each individual neuron show some variation and are not perfectly regular. Increasing spike-timing precision requires more ion channels and thus has a cost in energy requirement \citep{schreiber2002energy}, and it may only be desirable up to a certain coding quality. More recently it has been shown that in populations of neurons noise can also be beneficial for signals that are already above the firing threshold \citep{stocks2000suprathreshold,Stocks2000,stocks2001information,stocks2001generic,beiran2018coding}, a phenomenon termed ``suprathreshold stochastic resonance'' (SSR).
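The basic SSR effect can be illustrated with a toy simulation in the spirit of the threshold array of \citep{stocks2000suprathreshold}. This is a stand-alone sketch with ad-hoc parameter values, not the model used in this paper: each unit emits a 1 whenever the common Gaussian signal plus its private noise exceeds a threshold placed at the signal mean (so the signal is suprathreshold), and the unit outputs are summed.

```python
import numpy as np

rng = np.random.default_rng(1)

# Common Gaussian signal with unit variance; the threshold sits at the
# signal mean, i.e. the signal is suprathreshold in the sense of Stocks.
T = 20000
signal = rng.standard_normal(T)

N = 32                                     # number of threshold units (ad hoc)
sigmas = [0.01, 0.1, 0.3, 1.0, 3.0, 10.0]  # private-noise standard deviations

def population_corr(sigma):
    # Each unit outputs 1 if signal + private noise crosses the threshold (0);
    # the population response is the summed output of all units.
    noise = sigma * rng.standard_normal((N, T))
    pop = (signal[None, :] + noise > 0.0).sum(axis=0)
    return np.corrcoef(pop, signal)[0, 1]

corrs = [population_corr(s) for s in sigmas]
for s, c in zip(sigmas, corrs):
    # correlation rises with moderate noise, then falls for strong noise
    print(f"sigma={s:5.2f}  corr={c:.3f}")
```

The correlation between population response and signal is not maximal at zero noise: a moderate amount of private noise desynchronizes the units and improves the linear readout, while very strong noise destroys it.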
Despite the similarity in name, SR and SSR work in very different ways. In the case of no or very weak noise, the individual neurons in the population respond to the same features of a common input. Additional noise desynchronizes the responses of the individual neurons and coding quality improves. However, if the noise is too strong, it masks the signal and less information is encoded. In the limit of infinite noise strength, no information about the signal can be reconstructed from the responses. Therefore, there is an optimal noise strength at which coding performance is best. In this paper we consider input signals with cutoff frequencies over a large range and populations ranging from a single neuron to many thousands of neurons. Of special interest is how well neurons can encode specific frequency intervals within a larger broadband signal.
%Research on SSR has mostly focused on low frequency signals and small neuronal populations \citep{stocks2002application,beiran2018coding}.
\begin{figure}
\includegraphics[width=0.5\linewidth]{img/stocks}
\caption{Array of threshold systems as described by Stocks.}
\label{stocks}
\end{figure}
%Previously it has been shown that there might be an advantage in having both regular and irregular afferents; regular afferents carry information about the time course of the stimulus, while irregular afferents code high frequencies better \citep{Sadeghi2007}.
How and where in the brain low- and high-frequency signals are processed can also differ. This is the case, for example, in the acoustic system of birds \citep{palanca2015vivo} and in the electric sense of weakly electric fish \citep{Krahe2008}. In the brain of the brown ghost knifefish (\textit{Apteronotus leptorhynchus}), for example, three different segments receive direct input from the receptor cells on the skin of the fish. Cells in these segments differ in their frequency-filtering properties.
Additionally, cells in the different segments have receptive fields of different sizes, so they receive input from populations that vary in size by orders of magnitude. Here we use the leaky integrate-and-fire (LIF) model to simulate neuronal populations receiving a common dynamic input. We look at the linear coding of signals by populations of different sizes consisting of neurons of a single type, similar to the situation in the weakly electric fish. We show that the optimal noise strength grows with population size and depends on properties of the input. We use input signals with different bandwidths and cutoff frequencies, and we also vary the strength of the signal. We also present electrophysiological results from the weakly electric fish \textit{A. leptorhynchus}. Because it is not obvious how to quantify the noisiness of the receptor cells of these fish, we compare different methods and find that the activation curve of the individual neurons provides the best estimate of the noise strength in these cells. We then show that the effects of SSR can be observed in the real-world example of \textit{A. leptorhynchus}.

%Using an LIF neuron's tuning curve, we show that we can model the limit of information transmitted for a given noise strength even in the case of infinitely large populations.
In addition, we show that the amount of linearly encoded information is limited even for infinitely large populations.
%We describe the behavior of the neural population as a function of noise in the limit of strong noise.
We find empirically and analytically that the amount of information about the input signal encoded by the population is a function of the noise strength relative to the population size.
%Furthermore we show the influence of the baseline firing rate of the neurons.
We show that optimizing the noise for high-frequency components of the signal still yields good coding of the low-frequency components, while the reverse is not true, regardless of the size of the neural population. It therefore appears to be a good strategy to optimize the amount of noise for coding high frequencies, as lower-frequency intervals tolerate non-optimal noise strengths better.
%Figure \ref{example_spiketrains} confirms the result from (Beiran 2017) that suprathreshold stochastic resonance works in the case of dynamic stimuli. An increase in noise allows for an increased capability to reconstruct the original input up to a maximum after which an increased amount of noise masks the signal until there is no information about the input left.
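The simulation setup described above can be sketched in a few lines of code. The following is a simplified stand-alone illustration with ad-hoc parameter values, not our actual implementation: a population of LIF neurons receives a common slow signal plus independent Gaussian noise, and the binned population spike count is compared with the signal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ad-hoc parameters (dimensionless voltage, threshold = 1, reset = 0)
dt, t_max = 1e-4, 2.0   # time step and duration in seconds
tau_m = 0.01            # membrane time constant (10 ms)
mu = 1.5                # suprathreshold baseline drive
sigma = 0.3             # strength of the independent ("intrinsic") noise
n_neurons = 50

steps = int(t_max / dt)

# Common dynamic input: slow Ornstein-Uhlenbeck process (tau_s = 50 ms)
tau_s, amp = 0.05, 0.3
s = np.zeros(steps)
for t in range(1, steps):
    s[t] = s[t-1] - dt / tau_s * s[t-1] \
        + np.sqrt(2 * dt / tau_s) * rng.standard_normal()
s *= amp

v = np.zeros(n_neurons)
spikes = np.zeros(steps)   # population spike count per time step
for t in range(steps):
    # Euler step of the LIF dynamics with common signal and private noise
    v += dt / tau_m * (mu + s[t] - v) \
        + sigma * np.sqrt(dt / tau_m) * rng.standard_normal(n_neurons)
    fired = v >= 1.0
    spikes[t] = fired.sum()
    v[fired] = 0.0         # reset neurons that crossed the threshold

# Compare the population rate (5 ms bins) with the binned signal
bin_steps = int(0.005 / dt)
n_full = steps // bin_steps * bin_steps
rate = spikes[:n_full].reshape(-1, bin_steps).sum(axis=1)
sig_binned = s[:n_full].reshape(-1, bin_steps).mean(axis=1)
corr = np.corrcoef(rate, sig_binned)[0, 1]
print(f"population rate vs. signal correlation: {corr:.2f}")
```

With the baseline drive above threshold, all neurons fire tonically, and the summed population activity tracks the slow common signal.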
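The growth of the optimal noise level with population size can already be seen in a toy model of summed threshold units (an ad-hoc illustration, not our LIF simulations): for a single unit responding to a suprathreshold signal, any noise degrades coding, whereas larger populations reach their best performance at progressively larger noise levels.

```python
import numpy as np

rng = np.random.default_rng(2)

T = 20000
signal = rng.standard_normal(T)   # common Gaussian signal, threshold at its mean
sigmas = np.array([0.05, 0.2, 0.4, 0.7, 1.0, 1.5, 2.0])

def best_sigma(n_units):
    # Return the noise level that maximizes the correlation between the
    # summed threshold responses and the signal, and that correlation.
    corrs = []
    for s in sigmas:
        noise = s * rng.standard_normal((n_units, T))
        pop = (signal[None, :] + noise > 0.0).sum(axis=0)
        corrs.append(np.corrcoef(pop, signal)[0, 1])
    return sigmas[int(np.argmax(corrs))], max(corrs)

for n in (1, 8, 64):
    s_opt, c = best_sigma(n)
    # the optimal noise level shifts to larger values as n grows
    print(f"N={n:3d}  optimal sigma={s_opt:.2f}  corr={c:.3f}")
```

For a single unit the best noise level is the smallest one tested, while for 8 and 64 units the optimum moves to clearly larger noise strengths, in line with the population-size dependence reported in the main text.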