improved results
parent fcf9cfa431
commit 742fa901c7
@@ -141,8 +141,6 @@ def si_stats(title, data, sicol, si_thresh, nsegscol):
print(f' high SI cells: n={len(hcells):3d}, {100*len(hcells)/ncells:4.1f}%')
print(f' high SI recordings: n={np.sum(sidata > si_thresh):3d}, '
f'{100*np.sum(sidata > si_thresh)/nrecs:4.1f}%')
nsegs = data[nsegscol]
print(f' number of segments: {np.min(nsegs):4.0f} - {np.max(nsegs):4.0f}, median={np.median(nsegs):4.0f}, mean={np.mean(nsegs):4.0f}, std={np.std(nsegs):4.0f}')
nrecs = []
for cell in cells:
nrecs.append(len(data[data["cell"] == cell, :]))
@@ -154,6 +152,16 @@ def si_stats(title, data, sicol, si_thresh, nsegscol):
contrasts = 100*data['contrast']
print(' contrasts: ', ' '.join([f'{c:.2g}%' for c in np.unique(contrasts)]))
print(f' contrasts: {np.min(contrasts):.2g}% - {np.max(contrasts):.2g}%, median={np.median(contrasts):.2g}%, mean={np.mean(contrasts):.2g}%, std={np.std(contrasts):.2g}%')
nsegs = data[nsegscol]
print(f' number of segments: {np.min(nsegs):4.0f} - {np.max(nsegs):4.0f}, median={np.median(nsegs):4.0f}, mean={np.mean(nsegs):4.0f}, std={np.std(nsegs):4.0f}')
nsegs = data['nsegs']
print(f' available segments: {np.min(nsegs):4.0f} - {np.max(nsegs):4.0f}, median={np.median(nsegs):4.0f}, mean={np.mean(nsegs):4.0f}, std={np.std(nsegs):4.0f}')
trials = data['trials']
print(f' trials: {np.min(trials):.0f} - {np.max(trials):.0f}, median={np.median(trials):.0f}, mean={np.mean(trials):.0f}, std={np.std(trials):.0f}')
duration = data['duration']
print(f' duration: {np.min(duration):.1f}s - {np.max(duration):.1f}s, median={np.median(duration):.1f}s, mean={np.mean(duration):.1f}s, std={np.std(duration):.1f}s')
duration *= trials
print(f' total duration: {np.min(duration):.1f}s - {np.max(duration):.1f}s, median={np.median(duration):.1f}s, mean={np.mean(duration):.1f}s, std={np.std(duration):.1f}s')
cols = ['cvbase', 'respmod2', 'ratebase', 'vsbase', 'serialcorr1', 'burstfrac', 'ratestim', 'cvstim']
for i in range(len(cols)):
for j in range(i + 1, len(cols)):
@@ -465,7 +465,7 @@ For this example, we chose very specific stimulus (beat) frequencies. %One of th
In the following, however, we are interested in how the nonlinear responses depend on different combinations of stimulus frequencies in the weakly nonlinear regime. For the sake of simplicity we will drop the $\Delta$ notation even though P-unit stimuli are beats.
\subsection{Nonlinear signal transmission in example P-units}
\subsection{Nonlinear signal transmission in P-units}
P-units fire action potentials probabilistically phase-locked to the self-generated EOD \citep{Bastian1981a}. Skipping of EOD cycles leads to the characteristic multimodal ISI distribution with maxima at integer multiples of the EOD period (\subfigrefb{fig:punit}{A}). In this example, the baseline ISI distribution has a CV$_{\text{base}}$ of 0.49, which is at the center of the P-unit population \citep{Hladnik2023}. Spectral analysis of the baseline activity shows two major peaks: the first is located at the baseline firing rate $r$, the second is located at the discharge frequency \feod{} of the electric organ (\subfigref{fig:punit}{B}).
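(As an aside for the reader, a minimal sketch of how such baseline statistics can be obtained from recorded spike times; this is illustrative Python, not the analysis code behind the figure, and the function name, sampling step and FFT size are placeholders.)
import numpy as np
from scipy.signal import welch

def baseline_stats(spike_times, dt=0.0005, nfft=4096):
    # interspike intervals and their coefficient of variation (CV_base)
    isis = np.diff(spike_times)
    cv_base = np.std(isis) / np.mean(isis)
    # binary spike train sampled at dt, for the power spectral density
    binary = np.zeros(int(np.ceil(spike_times[-1] / dt)) + 1)
    binary[(spike_times / dt).astype(int)] = 1.0 / dt
    freqs, psd = welch(binary - np.mean(binary), fs=1.0 / dt, nperseg=nfft)
    return isis, cv_base, freqs, psd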
@@ -474,13 +474,13 @@ P-units fire action potentials probabilistically phase-locked to the self-genera
\caption{\label{fig:punit} Linear and nonlinear stimulus encoding in example P-units. \figitem{A} Interspike interval (ISI) distribution of a cell's baseline activity, i.e. the cell is driven only by its own unperturbed electric field (cell identifier ``2020-10-27-ag''). This cell has a rather high baseline firing rate $r=405$\,Hz and an intermediate CV$_{\text{base}}=0.49$ of its interspike intervals. \figitem{B} Power spectral density of the cell's baseline response with marked peaks at the cell's baseline firing rate $r$ and the fish's EOD frequency $f_{\text{EOD}}$. \figitem{C} Random amplitude modulation (RAM) stimulus (top, red, with cutoff frequency of 300\,Hz) and evoked responses (spike raster, bottom) of the same P-unit for two different stimulus contrasts (right). The stimulus contrast quantifies the standard deviation of the RAM relative to the fish's EOD amplitude. \figitem{D} Gain of the transfer function (first-order susceptibility), \eqnref{linearencoding_methods}, computed from the responses to 10\,\% (light blue) and 20\,\% contrast (dark blue) RAM stimulation of 5\,s duration. \figitem{E} Absolute value of the second-order susceptibility, \eqnref{eq:susceptibility}, for both the low and high stimulus contrast. At the lower stimulus contrast an anti-diagonal where the sum of the two stimulus frequencies equals the neuron's baseline frequency clearly sticks out of the noise floor. \figitem{F} At the higher contrast, the anti-diagonal is much weaker. \figitem{G} Second-order susceptibilities projected onto the diagonal (averages over all anti-diagonals of the matrices shown in \panel{E, F}). The anti-diagonals from \panel{E} and \panel{F} show up as a peak close to the cell's baseline firing rate $r$. The susceptibility index, SI($r$), quantifies the height of this peak relative to the values in the vicinity \notejb{See equation XXX}. \figitem{H} ISI distributions (top) and second-order susceptibilities (bottom) of two more example P-units (``2021-06-18-ae'', ``2017-07-18-ai'') showing an anti-diagonal, but not the full expected triangular structure. \figitem{I} Most P-units, however, have a flat second-order susceptibility and consequently their SI($r$) values are close to one (cell identifiers ``2018-08-24-ak'', ``2018-08-14-ac'').}
\end{figure*}
Noise stimuli, here random amplitude modulations (RAM) of the EOD (\subfigref{fig:punit}{C}, top trace, red line), are commonly used to characterize stimulus-driven responses of sensory neurons using transfer functions (first-order susceptibility), spike-triggered averages, or stimulus-response coherences. Here, we additionally estimate the second-order susceptibility to quantify nonlinear encoding. P-unit spikes align more or less clearly with fluctuations in the RAM stimulus. A higher stimulus intensity, here a higher contrast of the RAM relative to the EOD amplitude (see methods), entrains the P-unit response more clearly (light and dark blue for low and high contrast stimuli, respectively, \subfigrefb{fig:punit}{C}). Linear encoding, quantified by the first-order susceptibility or transfer function, \eqnref{linearencoding_methods}, is similar for the two RAM contrasts in this low-CV P-unit (\subfigrefb{fig:punit}{D}), as expected for a linear system. The first-order susceptibility is low for low frequencies, peaks in the range below 100\,Hz and then falls off again \citep{Benda2005}.
Noise stimuli, here random amplitude modulations (RAM) of the EOD (\subfigref{fig:punit}{C}, top trace, red line), have been commonly used to characterize stimulus-driven responses of sensory neurons using transfer functions (first-order susceptibility), spike-triggered averages, or stimulus-response coherences. Here, we additionally estimate the second-order susceptibility from existing recordings to quantify nonlinear encoding. P-unit spikes align more or less clearly with fluctuations in the RAM stimulus. A higher stimulus intensity, here a higher contrast of the RAM relative to the EOD amplitude (see methods), entrains the P-unit response more clearly (light and dark blue for low and high contrast stimuli, respectively, \subfigrefb{fig:punit}{C}). Linear encoding, quantified by the first-order susceptibility or transfer function, \eqnref{linearencoding_methods}, is similar for the two RAM contrasts in this low-CV P-unit (\subfigrefb{fig:punit}{D}), as expected for a linear system. The first-order susceptibility is low for low frequencies, peaks in the range below 100\,Hz and then falls off again \citep{Benda2005}.
The second-order susceptibility, \eqnref{eq:susceptibility}, quantifies for each combination of two stimulus frequencies \fone{} and \ftwo{} the amplitude and phase of the stimulus-evoked response at the sum \fsum{} (and also the difference, \subfigrefb{fig:model_full}{A}). Large values of the second-order susceptibility indicate stimulus-evoked peaks in the response spectrum at the summed frequency that cannot be explained by linear response theory. Similar to the first-order susceptibility, the second-order susceptibility can be estimated directly from the response evoked by a RAM stimulus that stimulates the neuron with a whole range of frequencies simultaneously (\subfigsref{fig:punit}{E, F}). For LIF and theta neuron models driven in the supra-threshold regime, theory predicts nonlinear interactions between the two stimulus frequencies when the two frequencies \fone{} and \ftwo{} or their sum \fsum{} exactly match the neuron's baseline firing rate $r$ \citep{Voronenko2017,Franzen2023}. Only then do additional stimulus-evoked peaks appear in the spectrum of the spiking response that would show up in the second-order susceptibility as a horizontal, a vertical, and an anti-diagonal line (\subfigrefb{fig:lifresponse}{B}).
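(A hedged sketch of such a segment-averaged estimator; the exact normalization is the one defined by \eqnref{eq:crosshigh} and \eqnref{eq:susceptibility} in the methods and may differ in detail from the convention assumed here. x is the response, s the RAM stimulus, both sampled at dt.)
import numpy as np

def suscept2(x, s, nfft=512, dt=0.0005):
    # segment response x (e.g. binary spike train) and stimulus s, Fourier transform each segment
    nseg = len(x) // nfft
    X = np.array([np.fft.rfft(x[k * nfft:(k + 1) * nfft]) * dt for k in range(nseg)])
    S = np.array([np.fft.rfft(s[k * nfft:(k + 1) * nfft]) * dt for k in range(nseg)])
    css = np.mean(np.abs(S) ** 2, axis=0)        # segment-averaged stimulus spectrum (unnormalized)
    n = nfft // 4                                # keep f1, f2 low enough that f1 + f2 is still resolved
    T = nfft * dt                                # segment duration
    chi2 = np.zeros((n, n), dtype=complex)
    for i in range(n):                           # index of f1
        for j in range(n):                       # index of f2
            cross = np.mean(X[:, i + j] * np.conj(S[:, i]) * np.conj(S[:, j]))
            chi2[i, j] = T * cross / (2.0 * css[i] * css[j])
    freqs = np.fft.rfftfreq(nfft, dt)[:n]
    return freqs, chi2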
For the example P-unit, we observe a ridge of elevated second-order susceptibility for the low RAM contrast at \fsumb{} (yellowish anti-diagonal, \subfigrefb{fig:punit}{E}). This structure is less prominent for the stronger stimulus (\subfigref{fig:punit}{F}). Further, the overall level of the second-order susceptibility is reduced with increasing stimulus strength. To quantify the structural changes in the susceptibility matrices we projected the susceptibility values onto the diagonal (white dashed line) by averaging over the anti-diagonals (\subfigrefb{fig:punit}{G}). At low RAM contrast this projected second-order susceptibility indeed has a distinct peak close to the neuron's baseline firing rate (\subfigrefb{fig:punit}{G}, dot on top line). For the higher RAM contrast this peak is much smaller and the overall level of the second-order susceptibility is reduced (\subfigrefb{fig:punit}{G}). The reason behind this reduction is that a RAM with a higher contrast is not only a stimulus with an increased amplitude, but also increases the total noise in the system. Increased noise is known to linearize signal transmission \citep{Longtin1993, Chialvo1997, Roddey2000, Voronenko2017} and thus the second-order susceptibility is expected to decrease.
For the example P-unit, we observe a ridge of elevated second-order susceptibility for the low RAM contrast at \fsumb{} (yellowish anti-diagonal, \subfigrefb{fig:punit}{E}). This structure is less prominent for the stronger stimulus (\subfigref{fig:punit}{F}). Further, the overall level of the second-order susceptibility is reduced with increasing stimulus strength. To quantify the structural changes in the susceptibility matrices we projected the susceptibility values onto the diagonal (white dashed line) by averaging over the anti-diagonals (\subfigrefb{fig:punit}{G}). At low RAM contrast this projection indeed has a distinct peak close to the neuron's baseline firing rate (\subfigrefb{fig:punit}{G}, dot on top line). For the higher RAM contrast this peak is much smaller and the overall level of the second-order susceptibility is reduced (\subfigrefb{fig:punit}{G}). The reason behind this reduction is that a RAM with a higher contrast is not only a stimulus with an increased amplitude, but also increases the total noise in the system. Increased noise is known to linearize signal transmission \citep{Longtin1993, Chialvo1997, Roddey2000, Voronenko2017} and thus the second-order susceptibility is expected to decrease.
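(Sketch of the projection and of an SI-like index for a square susceptibility matrix chi2 on the frequency grid freqs; the published definition of SI($r$) is given in the methods, here the peak is simply compared with the median of its neighbourhood, which is an assumption.)
import numpy as np

def project_onto_diagonal(chi2, freqs):
    # average |chi2(f1, f2)| over the anti-diagonals, i.e. over all pairs with fixed fsum = f1 + f2
    absval = np.abs(chi2)
    n = len(freqs)
    df = freqs[1] - freqs[0]
    fsum = 2.0 * freqs[0] + np.arange(2 * n - 1) * df
    proj = np.array([np.mean(np.diagonal(np.fliplr(absval), offset=n - 1 - k))
                     for k in range(2 * n - 1)])
    return fsum, proj

def si_index(fsum, proj, rbase, vicinity=50.0):
    # peak of the projection at the baseline rate relative to the median of its vicinity (in Hz)
    k = np.argmin(np.abs(fsum - rbase))
    neighbors = proj[(np.abs(fsum - rbase) <= vicinity) & (np.arange(len(fsum)) != k)]
    return proj[k] / np.median(neighbors)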
In other P-units we also observe ridges where the stimulus frequencies add up to the unit's baseline firing rate (\subfigrefb{fig:punit}{H}), but we never observed the expected triangular structure. In most P-units, however, we did not observe any structure in the second-order susceptibility (\subfigrefb{fig:punit}{I}).
Overall, we observed ridges where the stimulus frequencies add up to the unit's baseline firing rate in 17\,\% of the 159 P-units. Two more examples are shown in \subfigref{fig:punit}{H}. However, we never observed the full triangular structure expected from theory (\subfigrefb{fig:lifresponse}{B}). In all other P-units, we did not observe any structure in the second-order susceptibility (\subfigrefb{fig:punit}{I}).
\subsection{Ampullary afferents exhibit strong nonlinear interactions}
@@ -490,7 +490,7 @@ In other P-units we also observe ridges where the stimulus frequencies add up to
\caption{\label{fig:ampullary} Linear and nonlinear stimulus encoding in example ampullary afferents. \figitem{A} Interspike interval (ISI) distribution of the cell's baseline activity (cell identifier ``2012-05-15-ac''). The very low CV of the ISIs indicates almost perfect periodic spiking. \figitem{B} Power spectral density of baseline activity with peaks at the cell's baseline firing rate and its harmonics. Ampullary afferents do not respond to the fish's EOD frequency, $f_{\text{EOD}}$ --- a sharp peak at $f_{\text{EOD}}$ is missing. \figitem{C} Band-limited white noise stimulus (top, red, with a cutoff frequency of 150\,Hz) added to the fish's self-generated electric field (no amplitude modulation!) and spike raster of the evoked responses (bottom) for two stimulus contrasts as indicated (right). \figitem{D} Gain of the transfer function, \eqnref{linearencoding_methods}, of the responses to stimulation with 5\,\% (light green) and 10\,\% contrast (dark green) of 10\,s duration. \figitem{E, F} Absolute value of the second-order susceptibility, \eqnref{eq:susceptibility}, for both stimulus contrasts as indicated. Both show a clear anti-diagonal where the two stimulus frequencies add up to the afferent's baseline firing rate. \figitem{G} Projections of the second-order susceptibilities in \panel{E, F} onto the diagonal. \figitem{H} ISI distributions (top) and second-order susceptibilities (bottom) of three more example afferents with clear anti-diagonals (``2010-11-26-an'', ``2010-11-08-aa'', ``2011-02-18-ab''). \figitem{I} Some ampullary afferents do not show any structure in their second-order susceptibility (``2014-01-16-aj'').}
\end{figure*}
Electric fish possess an additional electrosensory system, the passive or ampullary electrosensory system, that responds to low-frequency exogenous electric stimuli. The population of ampullary afferents is much less heterogeneous, and known for the much lower CVs of their baseline ISIs (CV$_{\text{base}}=0.06$ to $0.22$, \citealp{Grewe2017}). Ampullary cells do not phase-lock to the high-frequency EOD and the ISIs are unimodally distributed (\subfigrefb{fig:ampullary}{A}). As a consequence of the high regularity of their baseline spiking activity, the corresponding power spectrum shows distinct peaks at the baseline firing rate $r$ and its harmonics. Since the cells do not respond to the self-generated EOD, there is no sharp peak at \feod{} (\subfigrefb{fig:ampullary}{B}). When driven by a band-limited white noise stimulus (note: this is no longer an AM stimulus, \subfigref{fig:ampullary}{C}), ampullary cells exhibit very pronounced bands in the second-order susceptibility, where \fsum{} is equal to \fbase{} or its harmonic (yellow anti-diagonals in \subfigrefb{fig:ampullary}{E--H}), implying strong nonlinear response components at these frequency combinations (\subfigrefb{fig:ampullary}{G}, top). With higher stimulus contrasts these bands get weaker (\subfigrefb{fig:ampullary}{F}), the projection onto the diagonal loses its distinct peak at \fsum{} and its overall level is reduced (\subfigrefb{fig:ampullary}{G}, bottom). Some ampullary afferents, however, do not show any such structure in their second-order susceptibility (\subfigrefb{fig:ampullary}{I}).
Electric fish possess an additional electrosensory system, the passive or ampullary electrosensory system, that responds to low-frequency exogenous electric stimuli. The population of ampullary afferents is much less heterogeneous, and known for the much lower CVs of their baseline ISIs ($0.06 < \text{CV}_{\text{base}} < 0.22$, \citealp{Grewe2017}). Ampullary cells do not phase-lock to the high-frequency EOD and the ISIs are unimodally distributed (\subfigrefb{fig:ampullary}{A}). As a consequence of the high regularity of their baseline spiking activity, the corresponding power spectrum shows distinct peaks at the baseline firing rate $r$ and its harmonics. Since the cells do not respond to the self-generated EOD, there is no sharp peak at \feod{} (\subfigrefb{fig:ampullary}{B}). When driven by a band-limited white noise stimulus (note: for ampullary afferents this is not an AM stimulus, \subfigref{fig:ampullary}{C}), ampullary afferents exhibit very pronounced ridges in the second-order susceptibility, where $f_1 + f_2$ is equal to $r$ or its harmonics (yellow anti-diagonals in \subfigrefb{fig:ampullary}{E--H}), implying strong nonlinear response components at these frequency combinations (\subfigrefb{fig:ampullary}{G}, top). With higher stimulus contrasts these bands get weaker (\subfigrefb{fig:ampullary}{F}), the projection onto the diagonal loses its distinct peak at $r$, and its overall level is reduced (\subfigrefb{fig:ampullary}{G}, bottom). Some ampullary afferents (27\,\% of 30 afferents), however, do not show any such structure in their second-order susceptibility (\subfigrefb{fig:ampullary}{I}).
\subsection{Model-based estimation of the second-order susceptibility}
@@ -501,13 +501,13 @@ In the example recordings shown above (\figsrefb{fig:punit} and \fref{fig:ampull
\caption{\label{fig:noisesplit} Estimation of second-order susceptibilities. \figitem{A} \suscept{} (right) estimated from $N=198$ 256\,ms long FFT segments of an electrophysiological recording of another P-unit (cell ``2017-07-18-ai'', $r=78$\,Hz, CV$_{\text{base}}=0.22$) driven with a RAM stimulus with contrast 5\,\% (left). \figitem[i]{B} \textit{Standard condition} of model simulations with intrinsic noise (bottom) and a RAM stimulus (top). \figitem[ii]{B} \suscept{} estimated from simulations of the cell's LIF model counterpart (cell ``2017-07-18-ai'', table~\ref{modelparams}) based on the same RAM contrast and number of $N=100$ FFT segments. As in the electrophysiological recording, only a weak anti-diagonal is visible. \figitem[iii]{B} Same as \panel[ii]{B} but using $10^6$ FFT segments. Now, the expected triangular structure is revealed. \figitem[iv]{B} Convergence of the \suscept{} estimate as a function of FFT segments. \figitem{C} At a lower stimulus contrast of 1\,\% the estimate did not converge even for $10^6$ FFT segments and the triangular structure is not revealed. \figitem[i]{D} Same as in \panel[i]{B} but in the \textit{noise split} condition: there is no external RAM signal (red) driving the model. Instead, a large part (90\,\%) of the total intrinsic noise is treated as a signal and is presented as an equivalent amplitude modulation ($s_{\xi}(t)$, orange, 10.6\,\% contrast), while the intrinsic noise is reduced to 10\,\% of its original strength (bottom, see methods for details). \figitem[ii]{D} 100 FFT segments are still not sufficient for estimating \suscept{}. \figitem[iii]{D} Simulating one million segments reveals the full expected triangular structure of the second-order susceptibility. \figitem[iv]{D} In the noise-split condition, the \suscept{} estimate converges already at about $10^{4}$ FFT segments.}
\end{figure*}
One simple reason could be the lack of data, i.e. the estimation of the second-order susceptibility is not good enough. Electrophysiological recordings are limited in time, and therefore only a limited number of trials, here repetitions of the same frozen RAM stimulus, are available. As a consequence, the cross-spectra, \eqnref{eq:crosshigh}, are insufficiently averaged and the full structure of the second-order susceptibility might be hidden in finite-data noise. This experimental limitation can be overcome by using a computational model for the P-unit, a stochastic leaky integrate-and-fire model with adaptation current, dendritic preprocessing, and parameters fitted to the experimentally recorded P-unit (\figrefb{flowchart}) \citep{Barayeu2023}. The model faithfully reproduces the second-order susceptibility of another example cell estimated from the same low number of FFT (fast fourier transform) segments as in the experiment ($N=100$, compare faint anti-diagonal in the bottom left corner of the second-order susceptibility in \panel[ii]{A} and \panel[ii]{B} in \figrefb{fig:noisesplit}).
One simple reason could be the lack of data, i.e. the estimation of the second-order susceptibility is not good enough. Electrophysiological recordings are limited in time, and therefore only a limited number of trials, here repetitions of the same frozen RAM stimulus, are available. In our data set we have 1 to 199 trials (median: 10) of RAM stimuli with a duration ranging from 2 to 5\,s (median: 8\,s), resulting in a total duration of 30 to 400\,s. Using a resolution of 0.5\,ms and FFT segments of 512 samples, this yields 105 to 1520 available FFT segments for a specific RAM stimulus. As a consequence, the cross-spectra, \eqnref{eq:crosshigh}, are insufficiently averaged and the full structure of the second-order susceptibility might be hidden in finite-data noise. This experimental limitation can be overcome by using a computational model for the P-unit, a stochastic leaky integrate-and-fire model with adaptation current, dendritic preprocessing, and parameters fitted to the experimentally recorded P-unit (\figrefb{flowchart}) \citep{Barayeu2023}. The model faithfully reproduces the second-order susceptibility of the P-unit estimated from the same low number of FFT (fast Fourier transform) segments as in the experiment ($N=100$, compare faint anti-diagonal in the bottom left corner of the second-order susceptibility in \panel[ii]{A} and \panel[ii]{B} in \figrefb{fig:noisesplit}).
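(A quick sanity check of these segment counts, assuming non-overlapping 512-sample segments:)
dt = 0.0005                  # 0.5 ms sampling resolution
nfft = 512                   # samples per FFT segment, i.e. 256 ms per segment
for total_duration in (30.0, 400.0):
    nsegs = int(total_duration / (nfft * dt))
    print(f'{total_duration:5.0f} s of data -> {nsegs:4d} non-overlapping FFT segments')
# prints 117 and 1562 segments, the same order of magnitude as the 105 to 1520 quoted above;
# the exact counts depend on how segments are aligned to trial boundaries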
In model simulations we can increase the number of FFT segments beyond what would be experimentally possible, here to one million (\figrefb{fig:noisesplit}\,\panel[iii]{B}). Then the estimate of the second-order susceptibility indeed improves. It gets less noisy, the diagonal at $f_ + f_2 = r$ is emphasized, and the vertical and horizontal ridges at $f_1 = r$ and $f_2 = r$ are revealed. Increasing the number of FFT segments reduces the order of magnitude of the susceptibility estimate until close to one million the estimate levels out at a low levels (\subfigrefb{fig:noisesplit}\,\panel[iv]{B}).
In model simulations we can increase the number of FFT segments beyond what would be experimentally possible, here to one million (\figrefb{fig:noisesplit}\,\panel[iii]{B}). Then the estimate of the second-order susceptibility indeed improves. It gets less noisy, the diagonal at $f_1 + f_2 = r$ is emphasized, and the vertical and horizontal ridges at $f_1 = r$ and $f_2 = r$ are revealed. Increasing the number of FFT segments also reduces the order of magnitude of the susceptibility estimate until, close to one million segments, the estimate levels out at low values (\subfigrefb{fig:noisesplit}\,\panel[iv]{B}).
At a lower stimulus contrast of 1\,\% (\subfigrefb{fig:noisesplit}{C}), however, one million FFT segements are still not sufficient for the estimate to converge (\figrefb{fig:noisesplit}\,\panel[iv]{C}). Still only a faint anti-diagonal is visible (\figrefb{fig:noisesplit}\,\panel[iii]{C}).
At a lower stimulus contrast of 1\,\% (\subfigrefb{fig:noisesplit}{C}), however, one million FFT segments are still not sufficient for the estimate to converge (\figrefb{fig:noisesplit}\,\panel[iv]{C}). Still only a faint anti-diagonal is visible (\figrefb{fig:noisesplit}\,\panel[iii]{C}).
Using a broadband stimulus increases the effective input-noise level. This may linearize signal transmission and suppress potential nonlinear responses \citep{Longtin1993, Chialvo1997, Roddey2000, Voronenko2017}. Assuming that the intrinsic noise level in this P-unit is small enough, the full expected structure of the second-order susceptibility should appear in the limit of weak AMs. As we just have seen, this cannot be done experimentally, because the problem of insufficient averaging becomes even more severe for weak AMs (low contrast). In the model, however, we know the time course of the intrinsic noise and can use this knowledge to determine the susceptibilities by input-output correlations via the Furutsu-Novikov theorem \citep{Furutsu1963, Novikov1965}. This theorem, in its simplest form, states that the cross-spectrum $S_{x\eta}(\omega)$ of a Gaussian noise $\eta(t)$ driving a nonlinear system and the system's output $x(t)$ is proportional to the linear susceptibility according to $S_{x\eta}(\omega)=\chi(\omega)S_{\eta\eta}(\omega)$. Here $\chi(\omega)$ characterizes the linear response to an infinitely weak signal $s(t)$ in the presence of the background noise $\eta(t)$. Likewise, the nonlinear susceptibility can be determined in an analogous fashion from higher-order input-output cross-spectra (see methods, equations \eqref{eq:crosshigh} and \eqref{eq:susceptibility}) \citep{Egerland2020}. In line with an alternative derivation of the Furutsu-Novikov theorem \citep{Lindner2022}, we can split the total noise and consider a fraction of it as a stimulus. This allows us to calculate the susceptibility from the cross-spectrum between the output and this stimulus fraction of the noise. Adapting this approach to our P-unit model (see methods), we replace the intrinsic noise by an approximately equivalent RAM stimulus $s_{\xi}(t)$ and a weak remaining intrinsic noise $\sqrt{2D \, c_{\rm{noise}}}\;\xi(t)$ with $c_\text{noise} = 0.1$ (see methods, equations \eqref{eq:ram_split}, \eqref{eq:Noise_split_intrinsic}, \eqref{eq:Noise_split_intrinsic_dendrite}, \figrefb{fig:noisesplit}\,\panel[i]{D}). We tuned the amplitude of the RAM stimulus $s_{\xi}(t)$ such that the output firing rate and variability (CV) are the same as in the baseline activity (i.e. full intrinsic noise $\sqrt{2D}\;\xi(t)$ in the voltage equation but no RAM) and compute the cross-spectra between the RAM part of the noise $s_{\xi}(t)$ and the output spike train. This procedure has two consequences: (i) by means of the cross-spectrum between the output and \signalnoise, which is a large fraction of the noise, the signal-to-noise ratio of the measured susceptibilities is drastically improved and thus the estimate converges already at about ten thousand FFT segments (\figrefb{fig:noisesplit}\,\panel[iv]{D}); (ii) the total noise in the system has been reduced (by what was before the external RAM stimulus $s(t)$), which makes the system more nonlinear. For both reasons we now see the expected nonlinear features in the second-order susceptibility for a sufficient number of FFT segments (\figrefb{fig:noisesplit}\,\panel[iii]{D}), but not for a number of segments comparable to the experiment (\figrefb{fig:noisesplit}\,\panel[ii]{D}). In addition to the strong response for \fsumb{}, we now also observe pronounced nonlinear responses at \foneb{} and \ftwob{} (vertical and horizontal lines, \figrefb{fig:noisesplit}\,\panel[iii]{D}).
Using a broadband stimulus increases the effective input-noise level. This may linearize signal transmission and suppress potential nonlinear responses \citep{Longtin1993, Chialvo1997, Roddey2000, Voronenko2017}. Assuming that the intrinsic noise level in this P-unit is small enough, the full expected structure of the second-order susceptibility should appear in the limit of weak AMs. As we just have seen, this cannot be done experimentally, because the problem of insufficient averaging becomes even more severe for weak AMs (low contrast). In the model, however, we know the time course of the intrinsic noise and can use this knowledge to determine the susceptibilities by input-output correlations via the Furutsu-Novikov theorem \citep{Furutsu1963, Novikov1965}. This theorem, in its simplest form, states that the cross-spectrum $S_{x\eta}(\omega)$ of a Gaussian noise $\eta(t)$ driving a nonlinear system and the system's output $x(t)$ is proportional to the linear susceptibility according to $S_{x\eta}(\omega)=\chi(\omega)S_{\eta\eta}(\omega)$. Here $\chi(\omega)$ characterizes the linear response to an infinitely weak signal $s(t)$ in the presence of the background noise $\eta(t)$. Likewise, the nonlinear susceptibility can be determined in an analogous fashion from higher-order input-output cross-spectra (see methods, equations \eqref{eq:crosshigh} and \eqref{eq:susceptibility}) \citep{Egerland2020}. In line with an alternative derivation of the Furutsu-Novikov theorem \citep{Lindner2022}, we can split the total noise and consider a fraction of it as a stimulus. This allows us to calculate the susceptibility from the cross-spectrum between the output and this stimulus fraction of the noise. Adapting this approach to our P-unit model (see methods), we replace the intrinsic noise by an approximately equivalent RAM stimulus $s_{\xi}(t)$ and a weak remaining intrinsic noise $\sqrt{2D \, c_{\rm{noise}}}\;\xi(t)$ with $c_\text{noise} = 0.1$ \notejb{$c$ is already used for contrast} (see methods, equations \eqref{eq:ram_split}, \eqref{eq:Noise_split_intrinsic}, \eqref{eq:Noise_split_intrinsic_dendrite}, \figrefb{fig:noisesplit}\,\panel[i]{D}). We tuned the amplitude of the RAM stimulus $s_{\xi}(t)$ such that the output firing rate and variability (CV of interspike intervals) are the same as in the baseline activity (i.e. full intrinsic noise $\sqrt{2D}\;\xi(t)$ in the voltage equation but no RAM) and compute the cross-spectra between the RAM part of the noise $s_{\xi}(t)$ and the output spike train. This procedure has two consequences: (i) by means of the cross-spectrum between the output and \signalnoise, which is a large fraction of the noise, the signal-to-noise ratio of the measured susceptibilities is drastically improved and thus the estimate converges already at about ten thousand FFT segments (\figrefb{fig:noisesplit}\,\panel[iv]{D}); (ii) the total noise in the system has been reduced (by what was before the external RAM stimulus $s(t)$), which makes the system more nonlinear. For both reasons we now see the expected nonlinear features in the second-order susceptibility for a sufficient number of FFT segments (\figrefb{fig:noisesplit}\,\panel[iii]{D}), but not for a number of segments comparable to the experiment (\figrefb{fig:noisesplit}\,\panel[ii]{D}). In addition to the strong response for \fsumb{}, we now also observe pronounced nonlinear responses at \foneb{} and \ftwob{} (vertical and horizontal lines, \figrefb{fig:noisesplit}\,\panel[iii]{D}).
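(A minimal sketch of the noise-split idea, not of the fitted P-unit model: two independent Gaussian white noises whose intensities add up to the original intensity $D$ replace the single intrinsic noise term, and the larger part is treated as the signal $s_{\xi}(t)$ for the cross-spectra. In the actual model this part is additionally converted into a band-limited RAM whose amplitude is tuned to preserve the baseline firing rate and CV, as described in the methods.)
import numpy as np

def split_intrinsic_noise(n, dt, D, c_noise=0.1, rng=np.random.default_rng(0)):
    # discretized white noise of intensity D has standard deviation sqrt(2*D/dt) per time step;
    # splitting D into c_noise*D (kept as intrinsic noise) and (1 - c_noise)*D (treated as signal)
    # leaves the total noise statistics unchanged, since intensities of independent Gaussians add up
    scale = np.sqrt(2.0 * D / dt)
    noise = np.sqrt(c_noise) * scale * rng.standard_normal(n)       # remaining intrinsic noise
    s_xi = np.sqrt(1.0 - c_noise) * scale * rng.standard_normal(n)  # "signal" part for the cross-spectra
    return s_xi, noise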
\begin{figure}[p]
@@ -516,7 +516,7 @@ Using a broadband stimulus increases the effective input-noise level. This may l
\end{figure}
\subsection{Weakly nonlinear interactions in many model cells}
In the previous section we have shown one example cell for which we find in the corresponding model strong ridges in the second-order susceptibility where one \notejg{same here...} or the sum of two stimulus frequencies match the neuron's baseline firing rate (\figrefb{fig:noisesplit}\,\panel[iii]{C}). Using our 39 P-unit models, we now can explore how many P-unit model neurons show such a triangular structure.
In the previous section we have shown one example cell for which the corresponding model shows the expected strong ridges in the second-order susceptibility (\figrefb{fig:noisesplit}\,\panel[iii]{B},\panel[iii]{D}). Using our 39 P-unit models, we can now explore how many P-unit model neurons show such a triangular structure.
By just looking at the second-order susceptibilities estimated using the noise-split method (first column of \figrefb{fig:modelsusceptcontrasts}) we can readily identify strong triangular patterns in 11 of the 39 model cells (28\,\%, see \figrefb{fig:modelsusceptcontrasts}\,\panel[i]{A}\&\panel[i]{B} for two examples). In another 5 cells (13\,\%) the triangle is much weaker and sits on top of a smooth bump of elevated second-order susceptibility (\figrefb{fig:modelsusceptcontrasts}\,\panel[i]{C} shows an example). The remaining 23 model cells (59\,\%) show no triangle (see \figrefb{fig:modelsusceptcontrasts}\,\panel[i]{D} for an example).
@@ -526,7 +526,7 @@ The SI($r$) correlates with the cell's CV of its baseline interspike intervals (
\subsection{Weakly nonlinear interactions vanish for higher stimulus contrasts}
As pointed out above, the weakly nonlinear regime can only be observed for sufficiently weak stimulus amplitudes. In the model cells we estimated second-order susceptibilities for RAM stimuli with a contrast of 1, 3, and 10\,\%. The estimates for 1\,\% contrast (\figrefb{fig:modelsusceptcontrasts}\,\panel[ii]{E}) were quite similar to the estimates from the noise-split method, corresponding to a stimulus contrast of 0\,\% ($r=0.97$, $p\ll 0.001$). Thus, RAM stimuli with 1\,\% contrast are sufficiently small to not destroy weakly nonlinear interactions by their linearizing effect. 51\,\% of the model cells have an SI($r$) value greater than 1.2.
As pointed out above, the weakly nonlinear regime can only be observed for sufficiently weak stimulus amplitudes. In the model cells we estimated second-order susceptibilities for RAM stimuli with a contrast of 1, 3, and 10\,\%. The estimates for 1\,\% contrast (\figrefb{fig:modelsusceptcontrasts}\,\panel[ii]{E}) were quite similar to the estimates from the noise-split method, corresponding to a stimulus contrast of 0\,\% ($r=0.97$, $p\ll 0.001$). Thus, RAM stimuli with 1\,\% contrast are sufficiently small to not destroy weakly nonlinear interactions by their linearizing effect. At this low contrast, 51\,\% of the model cells have an SI($r$) value greater than 1.2.
At a RAM contrast of 3\,\% the SI($r$) values become smaller (\figrefb{fig:modelsusceptcontrasts}\,\panel[iii]{E}). Only 7 cells (18\,\%) have SI($r$) values exceeding 1.2. Finally, at 10\,\% the SI($r$) values of all cells drop below 1.2, except for three cells (8\,\%, \figrefb{fig:modelsusceptcontrasts}\,\panel[iv]{E}). The cell shown in \subfigrefb{fig:modelsusceptcontrasts}{A} is one of them. At 10\,\% contrast the SI($r$) values are no longer correlated with the ones in the noise-split configuration ($r=0.32$, $p=0.05$). To summarize, the regime of distinct nonlinear interactions at frequencies matching the baseline firing rate extends in this set of P-unit model cells to stimulus contrasts ranging from a few percent to about 10\,\%.
@@ -538,7 +538,7 @@ At a RAM contrast of 3\,\% the SI($r$) values become smaller (\figrefb{fig:model
\subsection{Weakly nonlinear interactions can be deduced from limited data}
Estimating second-order susceptibilities reliably requires large numbers (millions) of FFT segments (\figrefb{fig:noisesplit}). Electrophysiological measurements, however, suffer from limited recording durations and hence limited numbers of available FFT segments, and estimating weakly nonlinear interactions from just a few hundred segments appears futile. The question arises to what extent such limited-data estimates are still informative.
The second-order susceptibility matrices that are based on only 100 sgements look flat and noisy, lacking the triangular structure (\subfigref{fig:modelsusceptlown}{B}). The anti-diagonal ridge, however, where the sum of the stimulus frequencies matches the neuron's baseline firing rate, seems to be present whenever the converged estimate shows a clear triangular structure (compare \subfigref{fig:modelsusceptlown}{B} and \subfigref{fig:modelsusceptlown}{A}). The SI($r$) characterizes the height of the ridge in the second-oder susceptibility plane at the neuron's baseline firing rate $r$. Comparing it when based on 100 versus one or ten million segments for all 39 model cells (\subfigrefb{fig:modelsusceptlown}{C}) supports this impression. As we have seen in the context of \subfigref{fig:modelsusceptcontrasts}{E} for converged estimates, values greater than one indicate triangular structures in the second-order susceptibility. The SI($r$) values based on just one hundred segments correlate quite well with the ones from the converged estimates for contrasts 1\,\% and 3\,\% ($r=0.9$, $p<0.001$). At a contrast of 10\,\% this correlation is weaker ($r=0.38$, $p<0.05$), because there are only three cells left with SI($r$) values greater than 1.2. Despite the good correlations, care has to be taken to set a threshold on the SI($r$) values for deciding whether a triangular structure would emerge for a much higher number of segments. Because at low number of segments the estimates are noisier, there could be false positives for a too low threshold. Setting the threshold to 1.8 avoids false positives for the price of a few false negatives.
The second-order susceptibility matrices that are based on only 100 segments look flat and noisy, lacking the triangular structure (\subfigref{fig:modelsusceptlown}{B}). The anti-diagonal ridge, however, where the sum of the stimulus frequencies matches the neuron's baseline firing rate, seems to be present whenever the converged estimate shows a clear triangular structure (compare \subfigref{fig:modelsusceptlown}{B} and \subfigref{fig:modelsusceptlown}{A}). The SI($r$) characterizes the height of the ridge in the second-order susceptibility plane at the neuron's baseline firing rate $r$. Comparing SI($r$) values based on 100 FFT segments to the ones based on one or ten million segments for all 39 model cells (\subfigrefb{fig:modelsusceptlown}{C}) supports this impression. They correlate quite well at contrasts of 1\,\% and 3\,\% ($r=0.9$, $p\ll 0.001$). At a contrast of 10\,\% this correlation is weaker ($r=0.38$, $p<0.05$), because there are only three cells left with SI($r$) values greater than 1.2. Despite the good correlations, care has to be taken to set a threshold on the SI($r$) values for deciding whether a triangular structure would emerge for a much higher number of segments. Because the estimates are noisier at low numbers of segments, a threshold that is too low could produce false positives. Setting the threshold to 1.8 avoids false positives at the price of a few false negatives.
Overall, observing SI($r$) values greater than about 1.8, even for a number of FFT segments as low as one hundred, seems to be a reliable indication of a triangular structure in the second-order susceptibility at the corresponding stimulus contrast. Small stimulus contrasts of 1\,\% are less informative, because of their poor signal-to-noise ratio. Intermediate stimulus contrasts around 3\,\% seem to be optimal, because at this contrast most cells still have a triangular structure in their susceptibility and the signal-to-noise ratio is better. At RAM stimulus contrasts of 10\,\% or higher the signal-to-noise ratio is even better, but only a few cells remain with weak triangularly shaped susceptibilities that might be missed as false negatives.
@@ -14,8 +14,10 @@ def significance_str(p):
return '$p<0.05$'
elif p > 0.001:
return '$p<0.01$'
else:
elif p > 0.0001:
return '$p<0.001$'
else:
return '$p\\ll 0.001$'
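(For reference, after this change the function presumably reads as follows; the leading branches above the visible context are not part of the diff and are reconstructed here as an assumption.)
def significance_str(p):
    # assumed leading branches (not shown in the hunk above):
    if p >= 0.05:
        return f'$p={p:.2f}$'
    elif p > 0.01:
        return '$p<0.05$'
    elif p > 0.001:
        return '$p<0.01$'
    elif p > 0.0001:
        return '$p<0.001$'
    else:
        return '$p\\ll 0.001$'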
def noise_files(data_path, cell_name, alpha=None):
@@ -187,7 +187,7 @@ def plot_psd(ax, s, path, contrast, spikes, nfft, dt, beatf1, beatf2):
label=r'$r$', clip_on=False, **s.psF0)
ax.plot(beatf1, decibel(peak_ampl(freqs, psd, beatf1)) + offs,
label=r'$\Delta f_1$', clip_on=False, **s.psF01)
ax.plot(beatf2, decibel(peak_ampl(freqs, psd, beatf2)) + offs + 5.5,
ax.plot(beatf2, decibel(peak_ampl(freqs, psd, beatf2)) + 2*offs + 3,
label=r'$\Delta f_2$', clip_on=False, **s.psF02)
ax.plot(beatf2 - beatf1, decibel(peak_ampl(freqs, psd, beatf2 - beatf1)) + offs,
label=r'$\Delta f_2 - \Delta f_1$', clip_on=False, **s.psF01_2)