Advances in statistical learning theory have resulted in a multitude of different designs of learning machines. But which ones are implemented by brains and other biological information processors? We analyze how various abstract Bayesian learners perform on different data and argue that it is difficult to determine which learning-theoretic computation is performed by a particular organism using just its performance in learning a stationary target (learning curve). Based on the fluctuation-dissipation relation in statistical physics, we then discuss a different experimental setup that might be able to solve the problem.
Escherichia coli lac repressor (LacI) is a paradigmatic transcription factor that controls the expression of lacZYA in the lac operon. This tetrameric protein binds specifically to the O1, O2 and O3 operators of the lac operon and forms a DNA loop to repress transcription from the adjacent lac promoter. In this article, we demonstrate that upon binding to the O1 and O2 operators at their native positions, LacI constrains three (−) supercoils within the 401-bp DNA loop of the lac promoter and forms a topological barrier. The stability of LacI-mediated DNA topological barriers is directly proportional to its DNA-binding affinity. We also find that DNA supercoiling modulates the basal expression from the lac operon in E. coli. Our results are consistent with the hypothesis that LacI functions as a topological barrier that constrains free, unconstrained (−) supercoils within the 401-bp DNA loop of the lac promoter. These constrained (−) supercoils enhance LacI's DNA-binding affinity and thereby the repression of the promoter. Thus, LacI binding is superhelically modulated to control the expression of lacZYA in the lac operon under varying growth conditions.
Large-amplitude magnetization dynamics is substantially more complex than the low-amplitude linear regime, owing to the inevitable emergence of nonlinearities. One of the fundamental nonlinear phenomena is nonlinear damping enhancement, which imposes strict limitations on the operation and efficiency of magnetic nanodevices. In particular, nonlinear damping prevents excitation of coherent magnetization auto-oscillations driven by the injection of spin current into spatially extended magnetic regions. Here, we propose and experimentally demonstrate that nonlinear damping can be controlled by the ellipticity of magnetization precession. By balancing different contributions to anisotropy, we minimize the ellipticity and achieve coherent magnetization oscillations driven by spatially extended spin current injection into a microscopic magnetic disk. Our results provide a route for the implementation of efficient active spintronic and magnonic devices driven by spin current.
The major problem in information theoretic analysis of neural responses and other biological data is the reliable estimation of entropy-like quantities from small samples. We apply a recently introduced Bayesian entropy estimator to synthetic data inspired by experiments, and to real experimental spike trains. The estimator performs admirably even very deep in the undersampled regime, where other techniques fail. This opens new possibilities for the information theoretic analysis of experiments, and may be of general interest as an example of learning from limited data.
by Vadas Gintautas, Michael I. Ham, Benjamin Kunsberg, Shawn Barr, Steven P. Brumby, Craig Rasmussen, John S. George, Ilya Nemenman, Luis M. A. Bettencourt, and Garret T. Kenyon
Can lateral connectivity in the primary visual cortex account for the time dependence and intrinsic task difficulty of human contour detection? To answer this question, we created a synthetic image set that prevents sole reliance on either low-level visual features or high-level context for the detection of target objects. Rendered images consist of smoothly varying, globally aligned contour fragments (amoebas) distributed among groups of randomly rotated fragments (clutter). The time course and accuracy of amoeba detection by humans were measured using a two-alternative forced choice protocol with self-reported confidence and variable image presentation time (20-200 ms), followed by an image mask optimized to interrupt visual processing. Measured psychometric functions were well fit by sigmoidal functions with exponential time constants of 30-91 ms, depending on amoeba complexity. Key aspects of the psychophysical experiments were accounted for by a computational network model, in which simulated responses across retinotopic arrays of orientation-selective elements were modulated by cortical association fields, represented as multiplicative kernels computed from the differences in pairwise edge statistics between target and distractor images. Comparing the experimental and the computational results suggests that each iteration of the lateral interactions takes at least ms of cortical processing time. Our results provide evidence that cortical association fields between orientation-selective elements in early visual areas can account for important temporal and task-dependent aspects of the psychometric curves characterizing human contour perception, with the remaining discrepancies postulated to arise from the influence of higher cortical areas.
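The saturating, sigmoidal time course described above can be illustrated with a simple saturating-exponential form; the exact parameterization used in the fits is not given here, so the function shape, parameter names, and default values below are illustrative assumptions, not the paper's fitted model:

```python
import math

def psychometric(t_ms, tau_ms, p_max=0.95, t0_ms=20.0):
    """Hypothetical psychometric function for a 2AFC task:
    accuracy rises from chance (0.5) toward p_max with exponential
    time constant tau_ms after an onset delay t0_ms.
    All parameter values are illustrative assumptions."""
    if t_ms <= t0_ms:
        return 0.5  # at or before onset, performance is at chance
    return p_max - (p_max - 0.5) * math.exp(-(t_ms - t0_ms) / tau_ms)
```

With tau_ms in the reported 30-91 ms range, accuracy at a 200 ms presentation time is close to asymptote for simple targets but still climbing for complex ones.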
We perform an asymptotic analysis of the NSB estimator of the entropy of a discrete random variable. The analysis illuminates the dependence of the estimates on the number of coincidences in the sample and shows that the estimator has a well-defined limit for a large cardinality of the studied variable. This allows estimation of entropy with no a priori assumptions about the cardinality. A software implementation of the algorithm is available.
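To see why the undersampled regime is hard, compare the naive plug-in entropy estimate with a first-order bias correction; this minimal sketch illustrates the bias problem that Bayesian estimators such as NSB address, and is not an implementation of NSB itself:

```python
import math
from collections import Counter

def plugin_entropy(samples):
    """Naive (maximum-likelihood) entropy estimate in bits.
    Systematically biased downward when the sample size is small
    relative to the alphabet size."""
    n = len(samples)
    counts = Counter(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def miller_madow_entropy(samples):
    """Miller-Madow correction: add (K_observed - 1) / (2 N ln 2) bits,
    where K_observed is the number of distinct symbols seen."""
    n = len(samples)
    k = len(set(samples))
    return plugin_entropy(samples) + (k - 1) / (2 * n * math.log(2))

# A fair 8-sided die has entropy log2(8) = 3 bits; with one
# observation per face the plug-in estimate happens to be exact,
# but with fewer samples it typically undershoots.
data = [0, 1, 2, 3, 4, 5, 6, 7]
print(plugin_entropy(data))        # → 3.0
print(miller_madow_entropy(data))  # plug-in plus the bias correction
```

The correction helps only mildly; deep in the undersampled regime (many unseen symbols), both estimates fail, which is where coincidence-based Bayesian estimators become essential.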
We construct a unifying theory of geometric effects in mesoscopic stochastic kinetics. We demonstrate that the adiabatic pump and the reversible ratchet effects, as well as similar new phenomena in other domains, such as in epidemiology, all follow from very similar geometric phase contributions to the effective action in the stochastic path integral representation of the moment generating function. The theory provides the universal technique for identification, prediction, and calculation of pumplike phenomena in an arbitrary mesoscopic stochastic framework.
We quantify the influence of the topology of a transcriptional regulatory network on its ability to process environmental signals. By posing the problem in terms of information theory, we do this without specifying the function performed by the network. Specifically, we study the maximum mutual information between the input (chemical) signal and the output (genetic) response attainable by the network in the context of an analytic model of particle number fluctuations. We perform this analysis for all biochemical circuits, including various feedback loops, that can be built out of three chemical species, each under the control of one regulator. We find that a generic network, constrained to low molecule numbers and reasonable response times, can transduce more information than a simple binary switch and, in fact, manages to achieve close to the optimal information transmission fidelity. These high-information solutions are robust to tenfold changes in most of the networks' biochemical parameters; moreover, they are easier to achieve in networks containing cycles with an odd number of negative regulators (overall negative feedback) due to their decreased molecular noise (a result which we derive analytically). Finally, we demonstrate that a single circuit can support multiple high-information solutions. These findings suggest a potential resolution of the “cross-talk” phenomenon as well as the previously unexplained observation that transcription factors that undergo proteolysis are more likely to be auto-repressive.
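The quantity being maximized above is the mutual information between a discrete input and output. A minimal sketch of this generic quantity (not the paper's particle-number model) makes the "binary switch" baseline concrete:

```python
import math

def mutual_information(joint):
    """Mutual information I(X;Y) in bits from a joint distribution
    given as a nested list p[x][y] (entries must sum to 1)."""
    px = [sum(row) for row in joint]            # marginal over x
    py = [sum(col) for col in zip(*joint)]      # marginal over y
    mi = 0.0
    for i, row in enumerate(joint):
        for j, p in enumerate(row):
            if p > 0:
                mi += p * math.log2(p / (px[i] * py[j]))
    return mi

# A noiseless binary switch transmits exactly 1 bit:
switch = [[0.5, 0.0], [0.0, 0.5]]
print(mutual_information(switch))  # → 1.0

# An output independent of the input transmits nothing:
independent = [[0.25, 0.25], [0.25, 0.25]]
print(mutual_information(independent))  # → 0.0
```

A network that "transduces more information than a simple binary switch" thus achieves more than 1 bit between input and output despite molecular noise.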
Sensory information about the outside world is encoded by neurons in sequences of discrete, identical pulses termed action potentials or spikes. There is persistent controversy about the extent to which the precise timing of these spikes is relevant to the function of the brain. We revisit this issue, using the motion-sensitive neurons of the fly visual system as a test case. Our experimental methods allow us to deliver more nearly natural visual stimuli, comparable to those which flies encounter in free, acrobatic flight. New mathematical methods allow us to draw more reliable conclusions about the information content of neural responses even when the set of possible responses is very large. We find that significant amounts of visual information are represented by details of the spike train at millisecond and sub-millisecond precision, even though the sensory input has a correlation time of ~55 ms; different patterns of spike timing represent distinct motion trajectories, and the absolute timing of spikes points to particular features of these trajectories with high precision. Finally, the efficiency of our entropy estimator makes it possible to uncover features of neural coding relevant for natural visual stimuli: first, the system's information transmission rate varies with natural fluctuations in light intensity resulting from varying cloud cover, such that marginal increases in information rate occur even when the individual photoreceptors are counting on the order of one million photons per second. Second, we see that the system exploits the relatively slow dynamics of the stimulus to remove coding redundancy and so generate a more efficient neural code.
Learning of a smooth but nonparametric probability density can be regularized using methods of quantum field theory. We implement a field theoretic prior numerically, test its efficacy, and show that the data and the phase space factors arising from the integration over the model space determine the free parameter of the theory (the "smoothness scale") self-consistently. This persists even for distributions that are atypical in the prior and is a step towards a model independent theory for learning continuous distributions. Finally, we point out that a wrong parametrization of a model family may sometimes be advantageous for small data sets.