In some brain areas we can exploit useful side information to provide a sanity check on the sorting results: for example, the mosaic tiling of receptive fields in the primate retina provides partial validation. In parallel, as in the imaging context, the iterative improvement of simulators of electrical activity (Hagen et al.) remains an important direction.
Neural data science: accelerating the experiment-analysis-theory cycle in large-scale neuroscience
The time seems ripe for a community-based collaborative approach to develop a battery of gold-standard datasets and quality metrics, and then iteratively improve each module of these pipelines towards more scalable and accurate solutions. A separate track of work has taken as its starting point the realization that spike sorting selects the most easily discriminable units from the observed voltage traces, but leaves behind a large amount of information in the lower-SNR units that cannot be separated cleanly from the noise floor.
A combined strategy, in which we sort the sortable units but also exploit information from the non-separable units and local field potential signals, works best in practice (Bansal et al.; Todorova et al.). Finally, as interest grows in bidirectional electrical neural interfaces that stimulate and record simultaneously, the problem of stimulation-artifact cancellation becomes critical; see Mena et al. for a recent step forward. As emphasized above, acquiring and processing large-scale neural data with single-neuron and high temporal resolution has represented a critical bottleneck that has attracted significant research effort over the last couple of years.
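A minimal sketch of this combined strategy, on simulated data: spike counts from sorted units are concatenated with LFP-derived features, and both feed a single ridge-regression decoder. All names, dimensions, and signal models here are illustrative assumptions, not taken from the cited work.

```python
# Hedged sketch: combine sorted-unit spike counts with LFP-derived features
# in one linear decoder. All signals are simulated; variable names
# (sorted_counts, lfp_feats, stimulus) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
T, n_units, n_lfp = 500, 20, 8            # time bins, sorted units, LFP channels

stimulus = rng.standard_normal(T)          # 1-D variable to decode

# Sorted units: Poisson counts whose rates depend on the stimulus.
unit_gains = rng.standard_normal(n_units)
sorted_counts = rng.poisson(np.exp(0.5 * np.outer(stimulus, unit_gains)))

# LFP-derived features that also carry (noisier) stimulus information.
lfp_feats = np.outer(stimulus, rng.standard_normal(n_lfp)) \
            + 2.0 * rng.standard_normal((T, n_lfp))

def ridge_fit_predict(X, y, lam=1.0):
    """Closed-form ridge regression with an intercept column."""
    X = np.column_stack([np.ones(len(X)), X])
    w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
    return X @ w

mse_spikes = np.mean((ridge_fit_predict(sorted_counts, stimulus) - stimulus) ** 2)
combined = np.column_stack([sorted_counts, lfp_feats])
mse_combined = np.mean((ridge_fit_predict(combined, stimulus) - stimulus) ** 2)
# In this simulation, adding the LFP features reduces decoding error,
# illustrating why discarding sub-threshold signals is wasteful.
```

The point of the sketch is only that informative but unsortable signals need not be thrown away; real pipelines use far richer decoders.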
While significant challenges remain, these efforts have established a clear path forward towards eliminating this bottleneck. The next frontier, then, is to extract understanding from the resulting high-dimensional neural activity data. Historically, the analysis of spike train data has focused significant effort on three broad questions: how neural activity encodes and decodes external variables, how network connectivity can be inferred from correlated activity, and how latent low-dimensional structure can be extracted from population recordings. We will review recent progress on each of these three themes in turn below, but it is worth emphasizing up front that models developed to address any of these questions can be profitably combined.
How the brain encodes external variables into spike trains, and the converse problem of decoding external variables from spike trains, are classic problems in statistical neuroscience. Some recent work has targeted the computational efficiency of GLM estimation methods (Mena and Paninski; Ramirez and Paninski; Wu et al.); these advances in turn enable significantly richer and more powerful models. See also Rahnama Rad et al. for a different method for sharing statistical strength across cells. Alternatively, we can repurpose ANNs trained to perform computer vision tasks (e.g. object recognition) and use the resulting feature sets to predict responses; see Kriegeskorte and Diedrichsen.
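The GLM encoding model mentioned above can be sketched very simply: spike counts are modeled as Poisson with rate exp(x·w), and the weights are fit by gradient ascent on the log-likelihood. The data are simulated and the dimensions are illustrative assumptions; real GLM toolboxes use spike-history terms and faster solvers.

```python
# Minimal sketch of a Poisson GLM encoding model on simulated data.
# y_t ~ Poisson(exp(x_t . w)); fit w by gradient ascent on the
# (concave) Poisson log-likelihood. Illustrative only.
import numpy as np

rng = np.random.default_rng(1)
T, d = 2000, 5
X = rng.standard_normal((T, d))                 # stimulus covariates per bin
w_true = np.array([0.8, -0.5, 0.3, 0.0, 0.2])   # hypothetical ground truth
y = rng.poisson(np.exp(X @ w_true))             # observed spike counts

w = np.zeros(d)
lr = 1e-3
for _ in range(5000):
    rate = np.exp(X @ w)
    grad = X.T @ (y - rate)                     # dLL/dw for exp-link Poisson
    w += lr * grad / T                          # small-step gradient ascent

# Because the log-likelihood is concave, w converges near w_true
# up to sampling noise.
```

In practice one would use Newton or quasi-Newton iterations (IRLS) rather than plain gradient ascent, which is shown here only for transparency.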
Regarding the converse problem of decoding, there is a large ongoing engineering and clinical literature on brain-machine interfaces, including not only systems to extract motor-control information from the brain but also sensory devices such as cochlear and retinal prosthetics, which we will not attempt to review systematically here.
Another thread of work has shown that stronger models of joint neural variability can be used to construct better decoders (Lawhern et al.; Kao et al.); see also Burkhart et al. for a promising converse approach. We expect to see more applications of similar ideas in the near future.

Another classic problem in statistical neuroscience is to infer neuronal network connectivity from correlated activity in the network, and then to use the inferred connectivity to understand network function and predict its dynamics.
Note that this approach is enabled by optical methods for neural recording; it would not be feasible with current multi-electrode arrays. In simulations, this approach enables accurate estimation of networks an order of magnitude larger than was previously possible. See Turaga et al. for a simplified implementation of this idea. Experimental methods are now becoming sufficiently fast and scalable to put this method into practice. Once we have estimated the network connectivity, we need a high-throughput method for verifying our estimates.
Optogenetic approaches are well-suited to this task; Shababo et al. and Chen et al. propose a scalable, adaptive, closed-loop optimal experimental design approach towards mapping and verifying the connectivity onto single postsynaptic cells. Finally, once these networks are inferred, a major goal is to study their dynamical properties.
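A toy version of connectivity inference and dynamical analysis, under strong simplifying assumptions (linear-Gaussian dynamics rather than spiking GLMs, fully observed network): estimate the coupling matrix from one-step transitions by least squares, then examine the spectral radius of the estimate, which governs stability of the inferred dynamics.

```python
# Hedged sketch: infer a linear coupling matrix from simulated network
# activity, then check dynamical stability via its spectral radius.
# Linear-Gaussian dynamics are an illustrative stand-in for spiking GLMs.
import numpy as np

rng = np.random.default_rng(2)
n, T = 15, 3000
W_true = rng.standard_normal((n, n)) / np.sqrt(n)
W_true *= 0.8 / max(abs(np.linalg.eigvals(W_true)))   # make true net stable

# Simulate x_{t+1} = W x_t + noise.
X = np.zeros((T, n))
for t in range(T - 1):
    X[t + 1] = W_true @ X[t] + rng.standard_normal(n)

# Least-squares estimate of W from one-step transitions:
# solve X[:-1] @ B = X[1:], so W_hat = B.T.
W_hat = np.linalg.lstsq(X[:-1], X[1:], rcond=None)[0].T

spectral_radius = max(abs(np.linalg.eigvals(W_hat)))
# spectral_radius < 1 indicates the estimated dynamics are stable;
# with short recordings or partial observation this check can fail,
# motivating the stability-constrained estimators discussed below.
```

With ample data the estimate inherits the stability of the true network; the interesting failure modes arise with limited or partially observed data.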
Gerhard et al. and Hocker and Park point out that standard GLM estimation approaches can lead to dynamically unstable estimated networks, and propose approaches to correct this deficit; some relevant asymptotic theory is developed in Hall et al. and Chen et al.

In the language of machine learning, the encoding and decoding problems are supervised, in the sense that one seeks a mapping between two known signals: measurable behavioral variables and populations of spike trains. The unsupervised analog is often approached using factor models: high-dimensional neural population activity is assumed to be a noisy, redundant observation of some hidden latent, often low-dimensional, signal of interest.
This signal can then be interrogated with respect to a scientific hypothesis, used as a denoised and simpler representation of the neural activity, or visualized for compact exploratory analysis of the data. Following the pioneering work of Smith and Brown, much of this literature has followed the Bayesian paradigm, where a generative probabilistic model is stipulated to link low-dimensional latent signals to high-dimensional neural spike trains, and then a computational inference procedure recovers the posterior distribution of the latent variable from the observed data.
Examples of this paradigm include systems with simple latent temporal structure.
Dimensionality reduction approaches have been widely used in neuroscience (Cunningham and Yu). Several exemplars of the scientific potential of these approaches are worth noting. First, some of the earliest applications of factor models were to understand mixed selectivity in prefrontal cortex (Machens et al.); this work showed that despite the apparently complex responses displayed by single neurons, simple behavioral correlates can be effectively read out from the brain at the population level.
Second, Sadtler et al. used this notion of subspaces of activity, along with a brain-machine interface, to discover constraints (in terms of dimensions in neural population space) on learning. Third, population activity together with factor models has been used to understand the dynamical structure of motor and prefrontal cortices (Churchland et al.; Mante et al.). As more connectomic and cell-type constraints become available for population activity recordings, we expect this literature to continue to mature and deepen methodologically, and to elucidate interactions between cell type-specific subpopulations in multiple brain areas (Semedo et al.).
Of course, the impetus behind the analysis of large-scale neural data is the belief that these efforts will lead to deeper insights into principles of neural computation.
One important line of research, still in its earliest chapter, asks: to what extent is that belief well founded? There are three categories of approach to this critical question. First, there is the concern that novel analyses of large-scale neural data may not be discovering new phenomena, but rather rediscovering simpler, previously known features of the data that merely appear new given the novel classes of data and algorithms used to investigate them.
Another key point of skepticism is whether recording larger and larger datasets will produce fundamentally new findings. The answer may depend on the complexity of the experimental paradigm: if the number of recorded neurons grows while the task the animal needs to solve is kept relatively simple, will new scientific insights follow, or must the complexity of the task grow in concordance with that of the data? One group has discussed a notion of required task complexity (Gao and Ganguli), and two others have attempted to measure the complexity of neural population activity in the face of larger and larger datasets, finding in some cases that complexity, as measured by the apparent dimensionality of the data, grows seemingly without bound (Pachitariu et al., unpublished), and in others that it does not (Williamson et al.). Significant additional theoretical and experimental work is required to provide clearer conclusions here.
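One common way to operationalize "apparent dimensionality" is the participation ratio of the covariance eigenvalues. The sketch below, on simulated data with a fixed low-dimensional latent, shows the saturating behavior: measured dimensionality stops growing as more neurons are sampled. The generative model and numbers are illustrative assumptions, not a claim about any particular dataset.

```python
# Hedged sketch: apparent dimensionality of population activity, measured
# by the participation ratio, as a function of how many neurons are
# sampled. Simulated data with a fixed 5-D latent; numbers illustrative.
import numpy as np

rng = np.random.default_rng(4)
T, n_total, k = 2000, 200, 5
latent = rng.standard_normal((T, k))
loading = rng.standard_normal((n_total, k))
activity = latent @ loading.T + rng.standard_normal((T, n_total))

def participation_ratio(data):
    """(sum of eigenvalues)^2 / (sum of squared eigenvalues) of the covariance."""
    ev = np.linalg.eigvalsh(np.cov(data.T))
    return ev.sum() ** 2 / (ev ** 2).sum()

dims = {n: participation_ratio(activity[:, :n]) for n in (20, 100, 200)}
# Because the activity truly lives near a 5-D subspace, the measured
# dimensionality saturates rather than growing with neuron count.
```

When real data instead show dimensionality growing without apparent bound, that is evidence that the simple fixed-latent picture above is incomplete, which is precisely the empirical disagreement described in the text.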
Third, at the broadest level, we might ask whether our current approaches will ever produce a coherent mechanistic understanding of the neural system. Moreover, many of our theories of the brain have been allowed to flourish largely untethered from data that could constrain and winnow them. But now we have to grapple seriously with the question of what we will do when we have in hand, for example, a matrix of the spatially and temporally resolved activity of all neurons in an animal performing an interesting behavior.
This remains a yet-distant dream in mammals but is close to reality in several invertebrate species, and our field needs to think critically and deeply about what to do now that this century-long goal is almost within our grasp. We believe the way forward is an acceleration of the experiment-analysis-theory cycle: there is a rapidly growing need for new theories to guide our exquisite new experimental tools, and as these theories develop we will continue to need well-matched scalable analysis methods that can connect experiment and theory in a tightly closed loop.
We close by summarizing several trends that will guide development in this field over the next several years. Datasets will continue to grow in size as recording modalities are optimized and new approaches are introduced; the scalability of processing pipelines will remain a critical design constraint.