Observations of power laws in neural activity data have raised the intriguing notion that brains may operate in a critical state. One example of this critical state is “avalanche criticality,” which has been observed in a range of systems, including cultured neurons, zebrafish, and human EEG. More recently, power laws have also been observed in neural populations in the mouse under a coarse-graining procedure, and they were explained as a consequence of the neural activity being coupled to multiple latent dynamical variables. An intriguing possibility is that avalanche criticality emerges due to a similar mechanism. Here, we determine the conditions under which dynamical latent variables give rise to avalanche criticality. We find that a single, quasi-static latent variable can generate critical avalanches, but that multiple latent variables lead to critical behavior in a broader parameter range. We identify two regimes of avalanches, both of which are critical, but differ in the amount of information carried about the latent variable. Our results suggest that avalanche criticality arises in neural systems in which there is an emergent dynamical variable or shared inputs creating an effective latent dynamical variable, and when this variable can be inferred from the population activity.
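The following is a minimal Python sketch of the class of mechanisms considered here: a single slow (quasi-static) latent variable modulates Poisson spiking of a population, and avalanches are read off as maximal runs of non-silent time bins. The Ornstein-Uhlenbeck latent process, the exponential rate nonlinearity, and all parameter values are illustrative assumptions, not the specific model analyzed in the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    N = 100           # neurons
    T = 200_000       # time bins
    dt = 1.0          # bin duration (arbitrary units)
    tau = 5_000.0     # latent timescale >> bin duration ("quasi-static")

    # Slow Ornstein-Uhlenbeck latent variable h(t) with unit stationary variance
    h = np.zeros(T)
    for t in range(1, T):
        h[t] = h[t - 1] - (h[t - 1] / tau) * dt + np.sqrt(2.0 * dt / tau) * rng.normal()

    # The latent variable sets a common firing rate; the summed activity of N
    # independent Poisson neurons is itself Poisson with N times the single-neuron rate.
    rate = 0.02 * np.exp(h)               # assumed per-neuron rate per bin
    counts = rng.poisson(N * rate)        # population spike count in each bin

    # An avalanche is a maximal run of consecutive bins with nonzero activity;
    # its size is the total number of spikes in the run.
    sizes, s = [], 0
    for c in counts:
        if c > 0:
            s += c
        elif s > 0:
            sizes.append(s)
            s = 0
    sizes = np.asarray(sizes)

    # Logarithmically binned size distribution; an approximately straight line on
    # log-log axes over a broad range is the usual signature of critical avalanches.
    edges = np.logspace(0, np.log10(sizes.max() + 1), 20)
    density, edges = np.histogram(sizes, bins=edges, density=True)
    for lo, p in zip(edges[:-1], density):
        print(f"size >= {lo:9.1f}: density {p:.3e}")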
The technological revolution in biological research, and in particular the use of molecular fluorescent labels, has allowed investigation of the heterogeneity of cellular responses to stimuli at the single-cell level. Advances in computational, theoretical, and synthetic biology have made it possible to predict and manipulate this heterogeneity with a precision previously reserved for the physical sciences. Functionally, this cell-to-cell variability can compromise cellular responses to environmental signals, but it can also enlarge the repertoire of possible cellular responses and hence increase the adaptive nature of cellular behaviors. And yet quantifying the functional importance of this response heterogeneity has remained elusive. Recently, the mathematical language of information theory has been proposed to address this problem. This opinion reviews these recent advances and discusses the broader implications of using information-theoretic tools to characterize the heterogeneity of cellular behaviors.
The roundworm Caenorhabditis elegans exhibits robust escape behavior in response to rapidly rising temperature. The behavior lasts for a few seconds, shows history dependence, involves both sensory and motor systems, and is too complicated to model mechanistically using currently available knowledge. Instead we model the process phenomenologically, and we use the Sir Isaac dynamical inference platform to infer the model in a fully automated fashion directly from experimental data. The inferred model requires incorporation of an unobserved dynamical variable and is biologically interpretable. The model makes accurate predictions about the dynamics of the worm behavior, and it can be used to characterize the functional logic of the dynamical system underlying the escape response. This work illustrates the power of modern artificial intelligence to aid in discovery of accurate and interpretable models of complex natural systems.
The dynamics of growth of bacterial populations has been extensively studied for planktonic cells in well-agitated liquid culture, in which all cells have equal access to nutrients. In the real world, bacteria are more likely to live in physically structured habitats as colonies, within which individual cells vary in their access to nutrients. The dynamics of bacterial growth in such conditions is poorly understood, and, unlike for liquid culture, there is no standard, broadly used mathematical model for bacterial populations growing as colonies in three dimensions (3-d). By extending the classic Monod model of resource-limited population growth to allow for spatial heterogeneity in bacterial access to nutrients, we develop a 3-d model of colonies in which bacteria consume diffusing nutrients in their vicinity. By following the changes in density of E. coli growing in liquid and embedded in glucose-limited soft agar, we evaluate the fit of this model to experimental data. The model accounts for the experimentally observed presence of a sub-exponential, diffusion-limited growth regime in colonies, which is absent in liquid cultures. The model predicts, and our experiments confirm, that, as a consequence of inter-colony competition for the diffusing nutrients and of cell death, there is a non-monotonic relationship between the total number of colonies within the habitat and the total number of individual cells in all of these colonies. This combined theoretical-experimental study reveals that, within 3-d colonies, E. coli cells are loosely packed, and colonies produce about 2.5 times as many cells as the liquid culture from the same amount of nutrients. We verify that this is because cells in liquid culture are larger than those in colonies. Our model provides a baseline description of bacterial growth in 3-d, deviations from which can be used to identify phenotypic heterogeneities and inter-cellular interactions that further contribute to the structure of bacterial communities.
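As a rough sketch of the type of extension described above (the symbols and functional forms here are generic assumptions, not necessarily those used in this study), the classic Monod model couples bacterial density $b$ and nutrient concentration $n$ through a saturating growth rate, and the 3-d version adds nutrient diffusion:

\[
\frac{\partial b(\mathbf{r},t)}{\partial t} = g_{\max}\,\frac{n}{K+n}\,b,
\qquad
\frac{\partial n(\mathbf{r},t)}{\partial t} = D\,\nabla^2 n \;-\; \frac{1}{Y}\,g_{\max}\,\frac{n}{K+n}\,b,
\]

where $g_{\max}$ is the maximal growth rate, $K$ the half-saturation (Monod) constant, $Y$ the yield (cells produced per unit nutrient), and $D$ the nutrient diffusion coefficient. In well-mixed liquid the diffusion term drops out and the classic Monod equations are recovered; cell death and colony geometry, which the study also considers, are omitted from this sketch.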
The statistical properties of the environments experienced by biological signaling systems in the real world change, which necessitates adaptive responses to achieve high-fidelity information transmission. One form of such adaptive response is gain control. Here we argue that a simple mechanism of gain control, well understood in the context of systems neuroscience, also works for molecular signaling. The mechanism allows transmission of more than one bit (on or off) of information about the signal, independently of the signal variance. It does not require additional molecular circuitry beyond that already present in many molecular systems, and, in particular, it does not depend on the existence of feedback loops. The mechanism provides a potential explanation for the abundance of ultrasensitive response curves in biological regulatory networks.
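The following minimal Python sketch illustrates the general gain-control idea stated above, namely that an ultrasensitive response whose operating range is matched to the signal statistics can keep transmitting more than one bit regardless of the signal variance. The sigmoidal curve, the adaptation rule, and all parameter values are illustrative assumptions and are not the specific molecular mechanism analyzed in this work.

    import numpy as np

    rng = np.random.default_rng(1)

    def mutual_information(x, y, bins=30):
        """Crude histogram-based estimate of I(x; y) in bits (illustration only)."""
        pxy, _, _ = np.histogram2d(x, y, bins=bins)
        pxy /= pxy.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0
        return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

    def ultrasensitive(s, midpoint, width, steepness=8.0):
        """Steep sigmoidal (ultrasensitive) response curve."""
        return 1.0 / (1.0 + np.exp(-steepness * (s - midpoint) / width))

    n_samples = 200_000
    noise = 0.02                      # output noise keeps the information finite

    for sigma in (0.3, 1.0, 3.0):     # signal standard deviations
        s = rng.normal(5.0, sigma, n_samples)
        # Fixed response curve vs. one whose width tracks the signal variance
        r_fixed = ultrasensitive(s, 5.0, 1.0) + noise * rng.normal(size=n_samples)
        r_gain = ultrasensitive(s, 5.0, sigma) + noise * rng.normal(size=n_samples)
        print(f"sigma = {sigma:3.1f}:  fixed curve {mutual_information(s, r_fixed):.2f} bits,"
              f"  gain-controlled {mutual_information(s, r_gain):.2f} bits")

In this toy setting the fixed curve loses information as the signal variance grows, while the gain-controlled curve maintains a roughly constant, above-one-bit transmission.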
In order to produce specific complex structures from a large set of similar biochemical building blocks, many biochemical systems require high sensitivity to small molecular differences. The first and most common model used to explain this high specificity is kinetic proofreading, which has been extended to a variety of systems, from DNA mismatch detection to cell signaling processes. While the specificity properties of kinetic proofreading models are well known and have been studied in various contexts, very little is known about their temporal behavior. In this work, we study the dynamical properties of discrete stochastic two-branch kinetic proofreading schemes. Using the Laplace transform of the corresponding chemical master equation, we obtain an analytical solution for the completion time distribution. In particular, we provide expressions for the specificity as well as the mean and variance of the process completion times. We also show that, for a wide range of parameters, a process distinguishing between two different products can be reduced to a much simpler three-point process. Our results allow for the systematic study of the interplay between specificity and completion times, as well as testing the validity of the kinetic proofreading model in biological systems.
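For reference, and with generic notation that is an assumption here rather than the paper's own, the Laplace transform of the completion-time density $f(t)$ generates the moments used in this kind of analysis (assuming the process completes with probability one; with two competing products, splitting probabilities enter analogously):

\[
\tilde f(s) = \int_0^\infty e^{-st} f(t)\,dt,
\qquad
\langle t\rangle = -\left.\frac{d\tilde f}{ds}\right|_{s=0},
\qquad
\operatorname{Var}(t) = \left.\frac{d^2\tilde f}{ds^2}\right|_{s=0} - \langle t\rangle^2,
\]

so that solving the chemical master equation in the Laplace domain yields the mean and variance of completion times directly, without inverting the transform.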
Magnetic flux trapped on the surface of superconducting rotors of the Gravity Probe B (GP-B) experiment produces some signal in the superconducting quantum interference device readout. For the needs of GP-B error analysis and simulation of data reduction, this signal is calculated and analyzed in this article. We first solve a magnetostatic problem for a point source on the surface of a sphere, finding the closed-form elementary expression for the corresponding Green's function. Second, we calculate the flux through the pick-up loop as a function of the source position. Next, the time dependence of a source position, caused by rotor motion according to a symmetric top model, and thus the time signature of its flux are determined, and the spectrum of the trapped flux signal is analyzed. Finally, a multipurpose program of trapped flux signal generation based on the above results is described, various examples of the signal obtained by means of this program are given, and their features are discussed. Signals of up to 100 fluxons, i.e., 100 pairs of positive and negative point sources, are examined.
The number of neurons in mammalian cortex varies by multiple orders of magnitude across different species. In contrast, the ratio of excitatory to inhibitory neurons (E:I ratio) varies over a much smaller range, from 3:1 to 9:1, and remains roughly constant across different sensory areas within a species. Although this structure is important for understanding the function of neural circuits, the reason for this consistency is not yet understood. While recent models of vision based on the efficient coding hypothesis show that increasing the number of both excitatory and inhibitory cells improves stimulus representation, the two cannot increase simultaneously due to constraints on brain volume. In this work, we implement an efficient coding model of vision under a constraint on the volume (using the number of neurons as a surrogate) while varying the E:I ratio. We show that the performance of the model is optimal at biologically observed E:I ratios under several metrics. We argue that this happens due to trade-offs between the computational accuracy and the representation capacity for natural stimuli. Further, we make the experimentally testable predictions that 1) the optimal E:I ratio should be higher for species with a higher sparsity of neural activity and 2) the character of inhibitory synaptic distributions and firing rates should change depending on the E:I ratio. Our findings, which are supported by our new preliminary analyses of publicly available data, provide the first quantitative and testable hypothesis, based on optimal coding models, for the distribution of excitatory and inhibitory neural types in mammalian sensory cortices.
The problem of deciphering how low-level patterns (action potentials in the brain, amino acids in a protein, etc.) drive high-level biological features (sensorimotor behavior, enzymatic function) represents the central challenge of quantitative biology. The lack of general methods for doing so with datasets of the size that can be collected experimentally severely limits our understanding of the biological world. For example, in neuroscience, some sensory and motor codes have been shown to consist of precisely timed multi-spike patterns. However, the combinatorial complexity of such pattern codes has precluded the development of methods for their comprehensive analysis. Thus, just as it is hard to predict a protein's function based on its sequence, we still do not understand how to accurately predict an organism's behavior based on neural activity. Here we introduce the unsupervised Bayesian Ising Approximation (uBIA) for solving this class of problems. We demonstrate its utility in an application to neural data, detecting precisely timed spike patterns that code for specific motor behaviors in a songbird vocal system. In data recorded during singing from neurons in a vocal control region, our method detects such codewords with an arbitrary number of spikes, does so from small datasets, and accounts for dependencies in the occurrences of codewords. Detecting such comprehensive motor control dictionaries can improve our understanding of skilled motor control and the neural bases of sensorimotor learning in animals. To further illustrate the utility of uBIA, we used it to identify the distinct sets of activity patterns that encode vocal motor exploration versus typical song production. Crucially, our method can be used not only for analysis of neural systems, but also for understanding the structure of correlations in other biological and nonbiological datasets.
A fundamental problem in neuroscience is understanding how sequences of action potentials ("spikes") encode information about sensory signals and motor outputs. Although traditional theories assume that this information is conveyed by the total number of spikes fired within a specified time interval (spike rate), recent studies have shown that additional information is carried by the millisecond-scale timing patterns of action potentials (spike timing). However, it is unknown whether or how subtle differences in spike timing drive differences in perception or behavior, leaving it unclear whether the information in spike timing actually plays a role in brain function. By examining the activity of individual motor units (the muscle fibers innervated by a single motor neuron) and manipulating patterns of activation of these neurons, we provide both correlative and causal evidence that the nervous system uses millisecond-scale variations in the timing of spikes within multispike patterns to control a vertebrate behavior, namely, respiration in the Bengalese finch, a songbird. These findings suggest that a fundamental assumption of current theories of motor coding requires revision.