2023
Eisenmann, Lukas; Monfared, Zahra; Göring, Niclas Alexander; Durstewitz, Daniel
Bifurcations and loss jumps in RNN training Inproceedings
NeurIPS 2023, 2023.
@inproceedings{Eisenmann2023,
title = {Bifurcations and loss jumps in RNN training},
author = {Lukas Eisenmann and Zahra Monfared and Niclas Alexander Göring and Daniel Durstewitz},
year = {2023},
date = {2023-11-06},
booktitle = {NeurIPS 2023},
abstract = {Recurrent neural networks (RNNs) are popular machine learning tools for modeling and forecasting sequential data and for inferring dynamical systems (DS) from observed time series. Concepts from DS theory (DST) have variously been used to further our understanding of both how trained RNNs solve complex tasks, and the training process itself. Bifurcations are particularly important phenomena in DS, including RNNs, that refer to topological (qualitative) changes in a system’s dynamical behavior as one or more of its parameters are varied. Knowing the bifurcation structure of an RNN will thus allow one to deduce many of its computational and dynamical properties, like its sensitivity to parameter variations or its behavior during training. In particular, bifurcations may account for sudden loss jumps observed in RNN training that could severely impede the training process. Here we first mathematically prove for a particular class of ReLU-based RNNs that certain bifurcations are indeed associated with loss gradients tending toward infinity or zero. We then introduce a novel heuristic algorithm for detecting all fixed points and k-cycles in ReLU-based RNNs and their existence and stability regions, hence bifurcation manifolds in parameter space. In contrast to previous numerical algorithms for finding fixed points and common continuation methods, our algorithm provides exact results and returns fixed points and cycles up to high orders with surprisingly good scaling behavior. We exemplify the algorithm on the analysis of the training process of RNNs, and find that the recently introduced technique of generalized teacher forcing completely avoids certain types of bifurcations in training. Thus, besides facilitating the DST analysis of trained RNNs, our algorithm provides a powerful instrument for analyzing the training process itself.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
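For readers who want to experiment with the paper's central object, fixed points of ReLU-based (piecewise-linear) RNNs can be computed exactly in small models by enumerating ReLU activation regions and solving one linear system per region. The sketch below assumes the common PLRNN form z ← A z + W max(0, z) + h; it is a brute-force illustration of the idea, not the paper's heuristic algorithm, which is precisely designed to avoid this exponential enumeration.

```python
import itertools
import numpy as np

def plrnn_fixed_points(A, W, h):
    """Enumerate all fixed points of z <- A z + W max(0, z) + h exactly.

    In each of the 2^M ReLU regions (fixed sign pattern d of z) the map is
    linear: z = (A + W D) z + h with D = diag(d), so the candidate fixed
    point solves (I - A - W D) z = h; it is accepted only if its signs
    actually match d (region consistency)."""
    M = A.shape[0]
    I, fps = np.eye(M), []
    for d in itertools.product([0, 1], repeat=M):
        D = np.diag(d)
        J = A + W @ D                      # Jacobian of the map in this region
        try:
            z = np.linalg.solve(I - J, h)
        except np.linalg.LinAlgError:      # degenerate region, skip
            continue
        if np.all((z > 0) == np.array(d, bool)):   # consistency check
            stable = np.max(np.abs(np.linalg.eigvals(J))) < 1
            fps.append((z, stable))
    return fps

rng = np.random.default_rng(0)
M = 4
A = np.diag(rng.uniform(0.2, 0.9, M))      # illustrative random PLRNN parameters
W = 0.5 * rng.standard_normal((M, M))
np.fill_diagonal(W, 0.0)
h = rng.standard_normal(M)
for z, stable in plrnn_fixed_points(A, W, h):
    print(np.round(z, 3), "stable" if stable else "unstable")
```

k-cycles can be treated the same way by composing k region-restricted linear maps, at the cost of enumerating sequences of regions, which is what makes the paper's pruning heuristic necessary at scale.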
Durstewitz, Daniel; Koppe, Georgia; Thurm, Max Ingo
Reconstructing Computational Dynamics from Neural Measurements with Recurrent Neural Networks Journal Article
Nature Reviews Neuroscience, 2023.
@article{Durstewitz2023,
title = {Reconstructing Computational Dynamics from Neural Measurements with Recurrent Neural Networks},
author = {Daniel Durstewitz and Georgia Koppe and Max Ingo Thurm},
url = {https://www.nature.com/articles/s41583-023-00740-7},
doi = {10.1038/s41583-023-00740-7},
year = {2023},
date = {2023-10-04},
journal = {Nature Reviews Neuroscience},
abstract = {Computational models in neuroscience usually take the form of systems of differential equations. The behaviour of such systems is the subject of dynamical systems theory. Dynamical systems theory provides a powerful mathematical toolbox for analysing neurobiological processes and has been a mainstay of computational neuroscience for decades. Recently, recurrent neural networks (RNNs) have become a popular machine learning tool for studying the non-linear dynamics of neural and behavioural processes by emulating an underlying system of differential equations. RNNs have been routinely trained on similar behavioural tasks to those used for animal subjects to generate hypotheses about the underlying computational mechanisms. By contrast, RNNs can also be trained on the measured physiological and behavioural data, thereby directly inheriting their temporal and geometrical properties. In this way they become a formal surrogate for the experimentally probed system that can be further analysed, perturbed and simulated. This powerful approach is called dynamical system reconstruction. In this Perspective, we focus on recent trends in artificial intelligence and machine learning in this exciting and rapidly expanding field, which may be less well known in neuroscience. We discuss formal prerequisites, different model architectures and training approaches for RNN-based dynamical system reconstructions, ways to evaluate and validate model performance, how to interpret trained models in a neuroscience context, and current challenges.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
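A recurring point in this Perspective is that dynamical system reconstructions should be validated on invariant, geometrical properties of the dynamics rather than on short-horizon prediction error. Below is a minimal sketch of one such measure, a binned state-space occupancy comparison loosely in the spirit of the divergence measures used in this literature; the binning scheme and constants are illustrative, not the review's prescription.

```python
import numpy as np

def state_space_divergence(X_true, X_gen, bins=20, eps=1e-9):
    """Compare the invariant geometry of two trajectories (T x d arrays) via
    the KL divergence between their binned state-space occupancy histograms.
    Low values mean the generated orbit visits the same regions with the
    same frequency as the data - a property that short-horizon MSE misses.
    Only feasible for low-dimensional spaces (bins**d cells)."""
    lo = np.minimum(X_true.min(0), X_gen.min(0))
    hi = np.maximum(X_true.max(0), X_gen.max(0))
    edges = [np.linspace(lo[i], hi[i], bins + 1) for i in range(X_true.shape[1])]
    p, _ = np.histogramdd(X_true, bins=edges)
    q, _ = np.histogramdd(X_gen, bins=edges)
    p = p.ravel() / p.sum() + eps
    q = q.ravel() / q.sum() + eps
    return float(np.sum(p * np.log(p / q)))
```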
Miftari, Egzon; Durstewitz, Daniel; Sadlo, Filip
Visualization of Discontinuous Vector Field Topology Journal Article
IEEE Transactions on Visualization & Computer Graphics, 2023.
@article{Miftari2023,
title = {Visualization of Discontinuous Vector Field Topology},
author = {Egzon Miftari and Daniel Durstewitz and Filip Sadlo},
url = {https://www.computer.org/csdl/journal/tg/5555/01/10296524/1RwXG8nn7d6},
year = {2023},
date = {2023-10-01},
journal = {IEEE Transactions on Visualization & Computer Graphics},
abstract = {This paper extends the concept and the visualization of vector field topology to vector fields with discontinuities. We address the non-uniqueness of flow in such fields by introduction of a time-reversible concept of equivalence. This concept generalizes streamlines to streamsets and thus vector field topology to discontinuous vector fields in terms of invariant streamsets. We identify respective novel critical structures as well as their manifolds, investigate their interplay with traditional vector field topology, and detail the application and interpretation of our approach using specifically designed synthetic cases and a simulated case from physics.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
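The paper generalizes classical vector field topology, whose basic ingredient is the classification of critical points by the eigenvalues of the local Jacobian. The toy sketch below shows only that classical 2D machinery; the paper's actual contribution (streamsets and invariant structures for discontinuous fields) goes well beyond it.

```python
import numpy as np

def classify_critical_point(jac):
    """Classify a critical point of a 2D vector field from its Jacobian:
    eigenvalue signs give the standard taxonomy of vector field topology
    (degenerate borderline cases are ignored in this sketch)."""
    ev = np.linalg.eigvals(jac)
    if np.all(np.abs(ev.imag) > 1e-12):          # complex pair: rotation
        return "center" if np.allclose(ev.real, 0) else (
            "spiral sink" if np.all(ev.real < 0) else "spiral source")
    re = np.sort(ev.real)
    if re[0] < 0 < re[1]:
        return "saddle"
    return "sink" if re[1] < 0 else "source"

# Example: v(x, y) = (y, -x - 0.1*y) has a spiral sink at the origin.
print(classify_critical_point(np.array([[0.0, 1.0], [-1.0, -0.1]])))
```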
Hess, Florian; Monfared, Zahra; Brenner, Manuel; Durstewitz, Daniel
Generalized Teacher Forcing for Learning Chaotic Dynamics Inproceedings
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:13017-13049, 2023.
@inproceedings{Hess2023,
title = {Generalized Teacher Forcing for Learning Chaotic Dynamics},
author = {Florian Hess and Zahra Monfared and Manuel Brenner and Daniel Durstewitz},
url = {https://proceedings.mlr.press/v202/hess23a.html},
year = {2023},
date = {2023-05-31},
booktitle = {Proceedings of the 40th International Conference on Machine Learning, PMLR 202:13017-13049},
journal = {Proceedings of Machine Learning Research, ICML 2023},
abstract = {Chaotic dynamical systems (DS) are ubiquitous in nature and society. Often we are interested in reconstructing such systems from observed time series for prediction or mechanistic insight, where by reconstruction we mean learning geometrical and invariant temporal properties of the system in question. However, training reconstruction algorithms like recurrent neural networks (RNNs) on such systems by gradient-descent based techniques faces severe challenges. This is mainly due to the exploding gradients caused by the exponential divergence of trajectories in chaotic systems. Moreover, for (scientific) interpretability we wish to have as low-dimensional reconstructions as possible, preferably in a model which is mathematically tractable. Here we report that a surprisingly simple modification of teacher forcing leads to provably strictly all-time bounded gradients in training on chaotic systems, while still learning to faithfully represent their dynamics. Furthermore, we observed that a simple architectural rearrangement of a tractable RNN design, piecewise-linear RNNs (PLRNNs), makes it possible to reduce the reconstruction dimension to at most that of the observed system (or less). We show on several DS that with these amendments we can reconstruct DS better than current SOTA algorithms, in much lower dimensions. Performance differences were particularly compelling on real-world data with which most other methods severely struggled. This work thus led to a simple yet powerful DS reconstruction algorithm which is highly interpretable at the same time.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
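The core of generalized teacher forcing is a convex interpolation between the data-inferred and the freely generated latent state at every time step. Below is a minimal rollout sketch, where `f` (the RNN step) and `inv_obs` (mapping an observation to a latent state) are hypothetical stand-ins for model components the paper specifies in detail.

```python
import numpy as np

def gtf_rollout(f, z0, X, inv_obs, alpha):
    """One generalized-teacher-forcing pass over observations X. At every
    step the latent state is pulled toward a data-inferred state:
        z_t <- alpha * z_data + (1 - alpha) * z_model,
    which keeps loss gradients bounded for suitable alpha even on chaotic
    data (see the paper for the precise statement)."""
    Z, z = [], z0
    for x in X:
        z_data = inv_obs(x)                     # latent state inferred from the observation
        z = alpha * z_data + (1.0 - alpha) * z  # GTF interpolation
        z = f(z)                                # free RNN step from the mixed state
        Z.append(z)
    return np.stack(Z)
```

With alpha = 0 this reduces to free-running training and with alpha = 1 to classical teacher forcing; the paper derives how alpha should be chosen (and annealed) so that gradients remain bounded for chaotic dynamics.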
Fechtelpeter, Janik; Rauschenberg, Christian; Jamalabadi, Hamidreza; Boecking, Benjamin; van Amelsvoort, Therese; Reininghaus, Ulrich; Durstewitz, Daniel; Koppe, Georgia
A control theoretic approach to evaluate and inform ecological momentary interventions Unpublished
2023.
@unpublished{Fechtelpeter2023,
title = {A control theoretic approach to evaluate and inform ecological momentary interventions},
author = {Janik Fechtelpeter and Christian Rauschenberg and Hamidreza Jamalabadi and Benjamin Boecking and Therese van Amelsvoort and Ulrich Reininghaus and Daniel Durstewitz and Georgia Koppe},
url = {https://psyarxiv.com/97teh/download?format=pdf},
year = {2023},
date = {2023-05-17},
journal = {PsyArXiv},
abstract = {Ecological momentary interventions (EMI) are digital mobile health (mHealth) interventions that are administered in an individual's daily life with the intent to improve mental health outcomes by tailoring intervention components to person, moment, and context. Questions regarding which intervention is most effective in a given individual, when it is best delivered, and what mechanisms of change underlie observed effects therefore naturally arise in this setting. To achieve this, EMI are typically informed by the collection of multivariate, intensive longitudinal data of various target constructs, designed to assess an individual’s psychological state, using ecological momentary assessments (EMA). However, the dynamic and interconnected nature of such multivariate time series data poses several challenges when analyzing and interpreting findings. This may be illustrated when understanding psychological variables as part of an interconnected network of dynamic variables, and the delivery of EMI as time-specific perturbations to these variables. Network control theory (NCT) is a branch of dynamical systems theory that precisely deals with the formal analysis of such network perturbations and provides solutions of how to perturb a network to reach a desired state in an optimal manner. In doing so, NCT may help to formally quantify and evaluate proximal intervention effects, as well as to identify optimal intervention approaches given a set of reasonable (temporal or energetic) constraints. In this proof-of-concept study, we leverage concepts from NCT to analyze the data of 10 individuals undergoing joint EMA and EMI for several weeks. We show how simple metrics derived from NCT can provide insightful information on putative mechanisms of change in the inferred EMA networks and contribute to identifying optimal leveraging points. We also outline what additional considerations might play a role in the design of effective intervention strategies in the future from the perspective of NCT.},
keywords = {},
pubstate = {published},
tppubtype = {unpublished}
}
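The NCT machinery referenced here answers, for a linear network model x_{t+1} = A x_t + B u_t fitted to EMA data, how to reach a desired state with minimum input energy. A self-contained sketch of that classical computation follows; the study's actual metrics and constraints are richer than this.

```python
import numpy as np

def min_energy_control(A, B, x0, xT, T):
    """Minimum-energy input steering x_{t+1} = A x_t + B u_t from x0 to xT
    in T steps - the kind of NCT quantity used to ask how an EMA network
    could be pushed toward a desired state optimally.

    Uses the finite-horizon reachability Gramian
        W_T = sum_{t=0}^{T-1} A^t B B^T (A^t)^T
    and the classical closed form
        u_t = B^T (A^{T-1-t})^T W_T^{-1} (xT - A^T x0)."""
    powers = [np.linalg.matrix_power(A, t) for t in range(T + 1)]
    W = sum(powers[t] @ B @ B.T @ powers[t].T for t in range(T))
    d = xT - powers[T] @ x0                    # gap the input must close
    Winv_d = np.linalg.solve(W, d)
    U = np.stack([B.T @ powers[T - 1 - t].T @ Winv_d for t in range(T)])
    return U   # T x m matrix of inputs; control energy = (U**2).sum()
```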
Domanski, Aleksander PF; Kucewicz, Michal T; Russo, Eleonora; Tricklebank, Mark D; Robinson, Emma SJ; Durstewitz, Daniel; Jones, Matt W
Distinct hippocampal-prefrontal neural assemblies coordinate memory encoding, maintenance, and recall Journal Article
Current Biology, 33 (7), 2023.
@article{Domanski2023,
title = {Distinct hippocampal-prefrontal neural assemblies coordinate memory encoding, maintenance, and recall},
author = {Aleksander PF Domanski and Michal T Kucewicz and Eleonora Russo and Mark D Tricklebank and Emma SJ Robinson and Daniel Durstewitz and Matt W Jones},
url = {https://www.cell.com/current-biology/pdf/S0960-9822(23)00169-0.pdf},
year = {2023},
date = {2023-04-10},
journal = {Current Biology},
volume = {33},
number = {7},
abstract = {Short-term memory enables incorporation of recent experience into subsequent decision-making. This processing recruits both the prefrontal cortex and hippocampus, where neurons encode task cues, rules, and outcomes. However, precisely which information is carried when, and by which neurons, remains unclear. Using population decoding of activity in rat medial prefrontal cortex (mPFC) and dorsal hippocampal CA1, we confirm that mPFC populations lead in maintaining sample information across delays of an operant non-match to sample task, despite individual neurons firing only transiently. During sample encoding, distinct mPFC subpopulations joined distributed CA1-mPFC cell assemblies hallmarked by 4–5 Hz rhythmic modulation; CA1-mPFC assemblies re-emerged during choice episodes but were not 4–5 Hz modulated. Delay-dependent errors arose when attenuated rhythmic assembly activity heralded collapse of sustained mPFC encoding. Our results map component processes of memory-guided decisions onto heterogeneous CA1-mPFC subpopulations and the dynamics of physiologically distinct, distributed cell assemblies.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
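Population decoding of the kind used here asks at which time points a task variable can be read out from ensemble activity. An illustrative cross-validated sketch is below; the paper's decoding pipeline differs in detail, and the function and argument names are ours.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def decode_timecourse(rates, labels, cv=5):
    """Population decoding over time: rates is a trials x neurons x timebins
    array of firing rates, labels the sample identity per trial. Returns
    cross-validated decoding accuracy per time bin, i.e. *when* the
    population carries the sample information."""
    accs = []
    for t in range(rates.shape[2]):
        clf = LogisticRegression(max_iter=1000)
        accs.append(cross_val_score(clf, rates[:, :, t], labels, cv=cv).mean())
    return np.array(accs)
```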
Hanganu-Opatz, Ileana L; Klausberger, Thomas; Sigurdsson, Torfi; Nieder, Andreas; Jacob, Simon N; Bartos, Marlene; Sauer, Jonas-Frederic; Durstewitz, Daniel; Leibold, Christian; Diester, Ilka
Resolving the prefrontal mechanisms of adaptive cognitive behaviors: A cross-species perspective Journal Article
Neuron, 111 (7), 2023.
@article{Hanganu-Opatz2023,
title = {Resolving the prefrontal mechanisms of adaptive cognitive behaviors: A cross-species perspective},
author = {Ileana L Hanganu-Opatz and Thomas Klausberger and Torfi Sigurdsson and Andreas Nieder and Simon N Jacob and Marlene Bartos and Jonas-Frederic Sauer and Daniel Durstewitz and Christian Leibold and Ilka Diester},
url = {https://neurocluster-db.meduniwien.ac.at/db_files/pub_art_431.pdf},
year = {2023},
date = {2023-04-10},
journal = {Neuron},
volume = {111},
number = {7},
abstract = {The prefrontal cortex (PFC) enables a staggering variety of complex behaviors, such as planning actions, solving problems, and adapting to new situations according to external information and internal states. These higher-order abilities, collectively defined as adaptive cognitive behavior, require cellular ensembles that coordinate the tradeoff between the stability and flexibility of neural representations. While the mechanisms underlying the function of cellular ensembles are still unclear, recent experimental and theoretical studies suggest that temporal coordination dynamically binds prefrontal neurons into functional ensembles. A so far largely separate stream of research has investigated the prefrontal efferent and afferent connectivity. These two research streams have recently converged on the hypothesis that prefrontal connectivity patterns influence ensemble formation and the function of neurons within ensembles. Here, we propose a unitary concept that, leveraging a cross-species definition of prefrontal regions, explains how prefrontal ensembles adaptively regulate and efficiently coordinate multiple processes in distinct cognitive behaviors.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Thome, Janine; Pinger, Mathieu; Durstewitz, Daniel; Sommer, Wolfgang H; Kirsch, Peter; Koppe, Georgia
Model-based experimental manipulation of probabilistic behavior in interpretable behavioral latent variable models Journal Article
Frontiers in Neuroscience, 16, pp. 2270, 2023.
@article{Thome2023,
title = {Model-based experimental manipulation of probabilistic behavior in interpretable behavioral latent variable models},
author = {Janine Thome and Mathieu Pinger and Daniel Durstewitz and Wolfgang H Sommer and Peter Kirsch and Georgia Koppe},
url = {https://www.frontiersin.org/articles/10.3389/fnins.2022.1077735/full},
year = {2023},
date = {2023-01-09},
journal = {Frontiers in Neuroscience},
volume = {16},
pages = {2270},
abstract = {In studying mental processes, we often rely on quantifying not directly observable latent processes. Interpretable latent variable models that probabilistically link observations to the underlying process have increasingly been used to draw inferences from observed behavior. However, these models are far more powerful than that. By formally embedding experimentally manipulable variables within the latent process, they can be used to make precise and falsifiable hypotheses or predictions. In doing so, they pinpoint how experimental conditions must be designed to test these hypotheses and, by that, generate adaptive experiments. By comparing predictions to observed behavior, we may then assess and evaluate the predictive validity of an adaptive experiment and model directly and objectively. These ideas are exemplified here on the experimentally not directly observable process of delay discounting. We propose a generic approach to systematically generate and validate experimental conditions based on the aforementioned models. The conditions are explicitly generated so as to predict 9 graded behavioral discounting probabilities across participants. Meeting this prediction, the framework induces discounting probabilities on 9 levels. In contrast to several alternative models, the applied model exhibits high validity as indicated by a comparably low out-of-sample prediction error. We also report evidence for inter-individual differences with respect to the most suitable models underlying behavior. Finally, we outline how to adapt the proposed method to the investigation of other cognitive processes including reinforcement learning.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
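The key move of the paper is inverting a behavioral latent variable model to construct experimental conditions that should elicit prespecified choice probabilities. Here is a minimal sketch with a hyperbolic discounting model and a sigmoid (softmax) choice rule; the functional forms and parameter values are illustrative, not the paper's fitted model.

```python
import numpy as np

def design_condition(A_delay, D, p_target, k, beta):
    """Invert a hyperbolic-discounting + softmax choice model to construct a
    trial that should elicit a target probability of choosing the delayed
    reward:
        V_delay    = A_delay / (1 + k * D)
        P(delayed) = sigmoid(beta * (V_delay - A_now))
    Solving for the immediate amount A_now yields the condition."""
    V_delay = A_delay / (1.0 + k * D)
    A_now = V_delay - np.log(p_target / (1.0 - p_target)) / beta
    return A_now

# e.g. 9 graded target probabilities, as in the paper's design
for p in np.linspace(0.1, 0.9, 9):
    offer = design_condition(20.0, 30, p, k=0.02, beta=1.5)
    print(f"P(delayed)={p:.1f} -> offer now: {offer:.2f}")
```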
2022
Brenner, Manuel; Koppe, Georgia; Durstewitz, Daniel
Multimodal Teacher Forcing for Reconstructing Nonlinear Dynamical Systems Workshop
2022.
@workshop{Brenner2022b,
title = {Multimodal Teacher Forcing for Reconstructing Nonlinear Dynamical Systems},
author = {Manuel Brenner and Georgia Koppe and Daniel Durstewitz},
url = {https://arxiv.org/pdf/2212.07892.pdf},
year = {2022},
date = {2022-12-15},
journal = {AAAI 2023 (MLmDS Workshop)},
abstract = {Many, if not most, systems of interest in science are naturally described as nonlinear dynamical systems (DS). Empirically, we commonly access these systems through time series measurements, where often we have time series from different types of data modalities simultaneously. For instance, we may have event counts in addition to some continuous signal. While by now there are many powerful machine learning (ML) tools for integrating different data modalities into predictive models, this has rarely been approached so far from the perspective of uncovering the underlying, data-generating DS (aka DS reconstruction). Recently, sparse teacher forcing (TF) has been suggested as an efficient control-theoretic method for dealing with exploding loss gradients when training ML models on chaotic DS. Here we incorporate this idea into a novel recurrent neural network (RNN) training framework for DS reconstruction based on multimodal variational autoencoders (MVAE). The forcing signal for the RNN is generated by the MVAE which integrates different types of simultaneously given time series data into a joint latent code optimal for DS reconstruction. We show that this training method achieves significantly better reconstructions on multimodal datasets generated from chaotic DS benchmarks than various alternative methods.},
keywords = {},
pubstate = {published},
tppubtype = {workshop}
}
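Schematically, the method combines sparse teacher forcing with a multimodal encoder: at every n-th step the RNN latent state is replaced by a joint latent code inferred from all simultaneously observed modalities. In the sketch below, `f` and `encode` are hypothetical placeholders for the RNN step and the paper's MVAE encoder.

```python
import numpy as np

def sparse_multimodal_tf(f, encode, modalities, z0, n_force):
    """Sparsely teacher-forced rollout: every n_force steps the RNN latent
    state is replaced by the joint latent code an encoder infers from all
    simultaneously observed modalities (continuous signals, counts, ...).
    `modalities` is a list of per-modality time series of equal length;
    `encode` stands in for the MVAE - purely schematic here."""
    T = len(modalities[0])
    Z, z = [], z0
    for t in range(T):
        if t % n_force == 0:
            z = encode([m[t] for m in modalities])  # forcing signal from the encoder
        z = f(z)                                    # free RNN step
        Z.append(z)
    return np.stack(Z)
```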
Götzl, Christian; Hiller, Selina; Rauschenberg, Christian; Schick, Anita; Fechtelpeter, Janik; Abaigar, Unai Fischer; Koppe, Georgia; Durstewitz, Daniel; Reininghaus, Ulrich; Krumm, Silvia
Artificial intelligence-informed mobile mental health apps for young people: a mixed-methods approach on users’ and stakeholders’ perspectives Journal Article
Child and Adolescent Psychiatry and Mental Health, 16 (86), 2022.
@article{Götzl2022,
title = {Artificial intelligence-informed mobile mental health apps for young people: a mixed-methods approach on users’ and stakeholders’ perspectives},
author = {Christian Götzl and Selina Hiller and Christian Rauschenberg and Anita Schick and Janik Fechtelpeter and Unai Fischer Abaigar and Georgia Koppe and Daniel Durstewitz and Ulrich Reininghaus and Silvia Krumm},
year = {2022},
date = {2022-12-01},
journal = {Child and Adolescent Psychiatry and Mental Health},
volume = {16},
number = {86},
abstract = {Novel approaches in mobile mental health (mHealth) apps that make use of Artificial Intelligence (AI), Ecological Momentary Assessments, and Ecological Momentary Interventions have the potential to support young people in the achievement of mental health and wellbeing goals. However, little is known about the perspectives of young people and mental health experts on this rapidly advancing technology. This study aims to investigate the subjective needs, attitudes, and preferences of key stakeholders towards an AI-informed mHealth app, including young people and experts on mHealth promotion and prevention in youth.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Bähner, Florian; Popov, Tzvetan; Hermann, Selina; Boehme, Nico; Merten, Tom; Zingone, Hélène; Koppe, Georgia; Meyer-Lindenberg, Andreas; Toutounji, Hazem; Durstewitz, Daniel
Species-conserved mechanisms of cognitive flexibility in complex environments Journal Article
bioRxiv, 2022.
@article{Bähner2022,
title = {Species-conserved mechanisms of cognitive flexibility in complex environments},
author = {Florian Bähner and Tzvetan Popov and Selina Hermann and Nico Boehme and Tom Merten and Hélène Zingone and Georgia Koppe and Andreas Meyer-Lindenberg and Hazem Toutounji and Daniel Durstewitz},
year = {2022},
date = {2022-11-14},
journal = {bioRxiv},
abstract = {Flexible decision making in complex environments is a hallmark of intelligent behavior but the underlying learning mechanisms and neural computations remain elusive. Through a combination of behavioral, computational and electrophysiological analysis of a novel multidimensional rule-learning paradigm, we show that both rats and humans sequentially probe different behavioral strategies to infer the task rule, rather than learning all possible mappings between environmental cues and actions as current theoretical formulations suppose. This species-conserved process reduces task dimensionality and explains both observed sudden behavioral transitions and positive transfer effects. Behavioral strategies are represented by rat prefrontal activity and strategy-related variables can be decoded from magnetoencephalography signals in human prefrontal cortex. These mechanistic findings provide a foundation for the translational investigation of impaired cognitive flexibility.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Stocker, Julia Elina; Koppe, Georgia; de Paredes, Hanna Reich; Heshmati, Saeideh; Hofmann, Stefan G; Hahn, Tim; van der Maas, Han; Waldorp, Lourens; Jamalabadi, Hamidreza
Towards a formal model of psychological intervention: Applying a dynamic network and control approach to attitude modification Journal Article
PsyArXiv, 2022.
@article{Stocker2022,
title = {Towards a formal model of psychological intervention: Applying a dynamic network and control approach to attitude modification},
author = {Julia Elina Stocker and Georgia Koppe and Hanna Reich de Paredes and Saeideh Heshmati and Stefan G Hofmann and Tim Hahn and Han van der Maas and Lourens Waldorp and Hamidreza Jamalabadi},
year = {2022},
date = {2022-11-09},
journal = {PsyArXiv},
abstract = {Despite the growing deployment of network representations throughout the psychological sciences, the question of whether and how networks can systematically describe the effects of psychological interventions remains elusive. Towards this end, we capitalize on recent breakthroughs in network control theory, the engineering study of networked interventions, to investigate a representative psychological attitude modification experiment. This study examined 30 healthy participants who answered 11 questions about their attitude toward eating meat. They then received 11 arguments to challenge their attitude on the questions, after which they were asked again the same set of questions. Using these data, we constructed networks that quantify the connections between the responses and tested: 1) if the observed psychological effect, in terms of sensitivity and specificity, relates to the regional network topology as described by control theory, 2) if the size of change in responses relates to whole-network topology that quantifies the “ease” of change as described by control theory, and 3) if responses after intervention could be predicted based on formal results from control theory. We found that 1) the interventions that had higher regional topological relevance (the so-called controllability scores) had a stronger effect (r > 0.5), 2) the intervention sensitivities were systematically lower for the interventions that were “easier to control” (r = -0.49), and 3) the model offered substantial prediction accuracy (r = 0.36).},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
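The "controllability scores" referred to in the findings are standard node-level NCT metrics. Below is a sketch of per-node average controllability for a stable linear network, one common formalization; the study's exact definition may differ.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def average_controllability(A):
    """Per-node average controllability of a stable linear network
    x_{t+1} = A x_t + e_i u_t (requires spectral radius of A < 1): for each
    node i, the trace of the infinite-horizon controllability Gramian
        W_i = sum_t A^t e_i e_i^T (A^T)^t,
    obtained from the discrete Lyapunov equation W = A W A^T + e_i e_i^T.
    Larger scores mean input at that node moves the network more easily."""
    n = A.shape[0]
    scores = np.empty(n)
    for i in range(n):
        B = np.zeros((n, 1)); B[i, 0] = 1.0
        W = solve_discrete_lyapunov(A, B @ B.T)
        scores[i] = np.trace(W)
    return scores
```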
Kurth-Nelson, Zeb; O'Doherty, John P; Barch, Deanna M; Denève, Sophie; Durstewitz, Daniel; Frank, Michael J; Gordon, Joshua A; Mathew, Sanjay J; Niv, Yael; Ressler, Kerry; Tost, Heike
Computational Approaches Journal Article
Computational Psychiatry: New Perspectives on Mental Illness, 2022.
@article{Kurth-Nelson2022,
title = {Computational Approaches},
author = {Zeb Kurth-Nelson and John P O'Doherty and Deanna M Barch and Sophie Denève and Daniel Durstewitz and Michael J Frank and Joshua A Gordon and Sanjay J Mathew and Yael Niv and Kerry Ressler and Heike Tost},
url = {https://books.google.de/books?hl=en&lr=&id=746JEAAAQBAJ&oi=fnd&pg=PA77&dq=info:okpKmHWClm8J:scholar.google.com&ots=oqTdTTaF-h&sig=dPeS3sfDXW64H2ytq_NFvQbYWXI&redir_esc=y#v=onepage&q&f=false},
year = {2022},
date = {2022-11-01},
journal = {Computational Psychiatry: New Perspectives on Mental Illness},
abstract = {Vast spectra of biological and psychological processes are potentially involved in the mechanisms of psychiatric illness. Computational neuroscience brings a diverse toolkit to bear on understanding these processes. This chapter begins by organizing the many ways in which computational neuroscience may provide insight to the mechanisms of psychiatric illness. It then contextualizes the quest for deep mechanistic understanding through the perspective that even partial or nonmechanistic understanding can be applied productively. Finally, it questions the standards by which these approaches should be evaluated. If computational psychiatry hopes to go beyond traditional psychiatry, it cannot be judged solely on the basis of how closely it reproduces the diagnoses and prognoses of traditional psychiatry, but must also be judged against more fundamental measures such as patient outcomes.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Monfared, Zahra; Patra, Mahashweta; Durstewitz, Daniel
Robust chaos and multi-stability in piecewise linear recurrent neural networks Journal Article
Preprint, 2022.
@article{Monfared2022,
title = {Robust chaos and multi-stability in piecewise linear recurrent neural networks},
author = {Zahra Monfared and Mahashweta Patra and Daniel Durstewitz},
url = {https://www.researchsquare.com/article/rs-2147683/v1},
year = {2022},
date = {2022-10-27},
journal = {Preprint},
abstract = {Recurrent neural networks (RNNs) are major machine learning tools for the processing of sequential data. Piecewise-linear RNNs (PLRNNs) in particular, which are formally piecewise linear (PWL) maps, have become popular recently as data-driven techniques for dynamical systems reconstructions from time-series observations. For a better understanding of the training process, performance, and behavior of trained PLRNNs, more thorough theoretical analysis is highly needed. Especially the presence of chaos strongly affects RNN training and expressivity. Here we show the existence of robust chaos in 2d PLRNNs. To this end, necessary and sufficient conditions for the occurrence of homoclinic intersections are derived by analyzing the interplay between stable and unstable manifolds of 2d PWL maps. Our analysis focuses on general PWL maps, like PLRNNs, since normal form PWL maps lack important characteristics that can occur in PLRNNs. We also explore some bifurcations and multi-stability involving chaos, since the co-existence of chaotic attractors with other attractor objects poses particular challenges for PLRNN training on the one hand, yet may endow trained PLRNNs with important computational properties on the other. Numerical simulations are performed to verify our results and are demonstrated to be in good agreement with the theoretical derivations. We discuss the implications of our results for PLRNN training, performance on machine learning tasks, and scientific applications.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
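Chaos in such PWL maps can be probed numerically by tracking how tangent vectors grow under the region-wise Jacobians. Here is a minimal largest-Lyapunov-exponent estimator for a 2d PLRNN-type map z ← A z + W max(0, z) + h; this is a numerical check, not the paper's analytical criteria for robust chaos.

```python
import numpy as np

def largest_lyapunov_pwl(A, W, h, z0, T=100_000, burn=1_000):
    """Estimate the largest Lyapunov exponent of the PWL map
    z <- A z + W max(0, z) + h by averaging the log growth of a tangent
    vector under the region-wise Jacobians A + W D(z). A positive value
    indicates chaos; 'robust' chaos means it persists over open regions
    of parameter space rather than at isolated parameter values."""
    z, v, s = z0.astype(float), np.array([1.0, 0.0]), 0.0
    for t in range(T):
        J = A + W @ np.diag((z > 0).astype(float))   # Jacobian in current region
        z = A @ z + W @ np.maximum(z, 0.0) + h       # iterate the map
        v = J @ v                                    # propagate tangent vector
        nv = np.linalg.norm(v)
        v /= nv                                      # renormalize to avoid overflow
        if t >= burn:                                # discard transient
            s += np.log(nv)
    return s / (T - burn)
```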
Mikhaeil, Jonas M; Monfared, Zahra; Durstewitz, Daniel
On the difficulty of learning chaotic dynamics with RNNs Inproceedings
2022.
@inproceedings{Monfared2021b,
title = {On the difficulty of learning chaotic dynamics with RNNs},
author = {Jonas M. Mikhaeil and Zahra Monfared and Daniel Durstewitz},
url = {https://openreview.net/pdf?id=-_AMpmyV0Ll},
year = {2022},
date = {2022-09-14},
journal = {36th Conference on Neural Information Processing Systems (NeurIPS 2022).},
abstract = {Recurrent neural networks (RNNs) are widespread machine learning tools for modeling sequential and time series data. They are notoriously hard to train because their loss gradients backpropagated in time tend to saturate or diverge during training. This is known as the exploding and vanishing gradient problem. Previous solutions to this issue either built on rather complicated, purpose-engineered architectures with gated memory buffers, or - more recently - imposed constraints that ensure convergence to a fixed point or restrict (the eigenspectrum of) the recurrence matrix. Such constraints, however, convey severe limitations on the expressivity of the RNN. Essential intrinsic dynamics such as multistability or chaos are disabled. This is inherently at disaccord with the chaotic nature of many, if not most, time series encountered in nature and society. Here we offer a comprehensive theoretical treatment of this problem by relating the loss gradients during RNN training to the Lyapunov spectrum of RNN-generated orbits. We mathematically prove that RNNs producing stable equilibrium or cyclic behavior have bounded gradients, whereas the gradients of RNNs with chaotic dynamics always diverge. Based on these analyses and insights, we offer an effective yet simple training technique for chaotic data and guidance on how to choose relevant hyperparameters according to the Lyapunov spectrum.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
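The paper's central identity links BPTT gradients to products of step Jacobians along an orbit, whose growth rate is governed by the largest Lyapunov exponent. A small sketch making that connection inspectable, given the per-step Jacobians of any trained RNN:

```python
import numpy as np

def bptt_jacobian_norms(jacobians):
    """Norms of the products J_T ... J_{t+1} that propagate loss gradients
    back through time (jacobians[t] = dz_{t+1}/dz_t along an orbit). These
    grow or shrink at a rate set by the largest Lyapunov exponent, so
    gradients diverge for chaotic (lambda_max > 0) and stay bounded for
    stable dynamics - the paper's core result."""
    P = np.eye(jacobians[0].shape[0])
    norms = []
    for J in reversed(jacobians):
        P = P @ J                        # accumulate J_T ... J_{t+1}
        norms.append(np.linalg.norm(P, 2))
    return norms[::-1]   # norms[t]: gradient amplification from step t to T
```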
Pinger, Mathieu; Thome, Janine; Halli, Patrick; Sommer, Wolfgang H; Koppe, Georgia; Kirsch, Peter
Comparing Discounting of Potentially Real Rewards and Losses by Means of Functional Magnetic Resonance Imaging Journal Article
Frontiers in System Neuroscience, 2022.
@article{Pinger2022,
title = {Comparing Discounting of Potentially Real Rewards and Losses by Means of Functional Magnetic Resonance Imaging},
author = {Mathieu Pinger and Janine Thome and Patrick Halli and Wolfgang H. Sommer and Georgia Koppe and Peter Kirsch},
url = {https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9365957/},
doi = {10.3389/fnsys.2022.867202},
year = {2022},
date = {2022-07-22},
journal = {Frontiers in System Neuroscience},
abstract = {Delay discounting (DD) has often been investigated in the context of decision making whereby individuals attribute decreasing value to rewards in the distant future. Less is known about DD in the context of negative consequences. The aim of this pilot study was to identify commonalities and differences between reward and loss discounting on the behavioral as well as the neural level by means of computational modeling and functional Magnetic Resonance Imaging (fMRI). We furthermore compared the neural activation between anticipation of rewards and losses.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Thome, Janine; Pinger, Mathieu; Durstewitz, Daniel; Sommer, Wolfgang; Kirsch, Peter; Koppe, Georgia
Model-based experimental manipulation of probabilistic behavior in interpretable behavioral latent variable models Journal Article
PsyArXiv Preprints, 2022.
@article{Thome2022,
title = {Model-based experimental manipulation of probabilistic behavior in interpretable behavioral latent variable models},
author = {Janine Thome and Mathieu Pinger and Daniel Durstewitz and Wolfgang Sommer and Peter Kirsch and Georgia Koppe },
url = {https://psyarxiv.com/s7wda/},
doi = {10.31234/osf.io/s7wda},
year = {2022},
date = {2022-07-14},
journal = {PsyArXiv Preprints },
abstract = {In studying mental processes, we often rely on quantifying not directly observable latent constructs. Interpretable latent variable models that probabilistically link observations to the underlying construct have increasingly been used to draw inferences from observed behavior. However, these models are far more powerful than that. By formally embedding experimentally manipulable variables within the latent construct, they can be used to make precise and falsifiable hypotheses or predictions. At the same time, they pinpoint how experimental conditions must be designed to test these hypotheses. By comparing predictions to observed behavior, we may then assess and evaluate the validity of a measurement instrument directly and objectively, without resorting to comparisons with other latent constructs, as traditionally done in psychology.
These ideas are exemplified here on the experimentally not directly observable construct of delay discounting. We propose a generic approach to systematically generate experimental conditions based on the aforementioned models. The conditions are explicitly generated so as to predict 9 graded behavioral discounting probabilities across participants. Meeting this prediction, the framework induces discounting probabilities on 9 levels. In contrast to several alternative models, the applied model exhibits high validity as indicated by a comparably low out-of-sample prediction error. We also report evidence for inter-individual differences w.r.t. the most suitable models underlying behavior.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Brenner, Manuel; Hess, Florian; Mikhaeil, Jonas; Bereska, Leonard; Monfared, Zahra; Kuo, Po-Chen; Durstewitz, Daniel
Tractable Dendritic RNNs for Reconstructing Nonlinear Dynamical Systems Inproceedings
2022.
@inproceedings{Brenner2022,
title = {Tractable Dendritic RNNs for Reconstructing Nonlinear Dynamical Systems},
author = {Manuel Brenner and Florian Hess and Jonas Mikhaeil and Leonard Bereska and Zahra Monfared and Po-Chen Kuo and Daniel Durstewitz},
url = {https://proceedings.mlr.press/v162/brenner22a.html},
year = {2022},
date = {2022-07-01},
journal = {Proceedings of Machine Learning Research, ICML 2022},
abstract = {In many scientific disciplines, we are interested in inferring the nonlinear dynamical system underlying a set of observed time series, a challenging task in the face of chaotic behavior and noise. Previous deep learning approaches toward this goal often suffered from a lack of interpretability and tractability. In particular, the high-dimensional latent spaces often required for a faithful embedding, even when the underlying dynamics lives on a lower-dimensional manifold, can hamper theoretical analysis. Motivated by the emerging principles of dendritic computation, we augment a dynamically interpretable and mathematically tractable piecewise-linear (PL) recurrent neural network (RNN) by a linear spline basis expansion. We show that this approach retains all the theoretically appealing properties of the simple PLRNN, yet boosts its capacity for approximating arbitrary nonlinear dynamical systems in comparatively low dimensions. We employ two frameworks for training the system, one combining BPTT with teacher forcing, and another based on fast and scalable variational inference. We show that the dendritically expanded PLRNN achieves better reconstructions with fewer parameters and dimensions on various dynamical systems benchmarks and compares favorably to other methods, while retaining a tractable and interpretable structure.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
In many scientific disciplines, we are interested in inferring the nonlinear dynamical system underlying a set of observed time series, a challenging task in the face of chaotic behavior and noise. Previous deep learning approaches toward this goal often suffered from a lack of interpretability and tractability. In particular, the high-dimensional latent spaces often required for a faithful embedding, even when the underlying dynamics lives on a lower-dimensional manifold, can hamper theoretical analysis. Motivated by the emerging principles of dendritic computation, we augment a dynamically interpretable and mathematically tractable piecewise-linear (PL) recurrent neural network (RNN) by a linear spline basis expansion. We show that this approach retains all the theoretically appealing properties of the simple PLRNN, yet boosts its capacity for approximating arbitrary nonlinear dynamical systems in comparatively low dimensions. We employ two frameworks for training the system, one combining BPTT with teacher forcing, and another based on fast and scalable variational inference. We show that the dendritically expanded PLRNN achieves better reconstructions with fewer parameters and dimensions on various dynamical systems benchmarks and compares favorably to other methods, while retaining a tractable and interpretable structure.
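To make the model class concrete, here is a minimal sketch in Python/NumPy of one latent-state update of a dendritically expanded PLRNN, based on our reading of the abstract; the parameter names (A, W, alpha, thresholds, h) and the shared spline weighting are illustrative assumptions, not the authors' reference implementation.

import numpy as np

def dend_plrnn_step(z, A, W, alpha, thresholds, h):
    # Standard PLRNN latent step: z' = A z + W relu(z) + h.
    # Dendritic expansion: relu is replaced by a linear spline basis,
    # phi(z) = sum_b alpha[b] * relu(z - thresholds[b]), which keeps the
    # map piecewise linear while adding expressive power per unit.
    phi = np.zeros_like(z)
    for a_b, h_b in zip(alpha, thresholds):
        phi += a_b * np.maximum(0.0, z - h_b)
    return A @ z + W @ phi + h

# toy usage: M latent units, B basis functions (dimensions are illustrative)
rng = np.random.default_rng(0)
M, B = 5, 4
A = np.diag(rng.uniform(0.5, 0.9, M))      # diagonal linear part, as in PLRNNs
W = 0.1 * rng.standard_normal((M, M))      # coupling through the basis expansion
alpha, thresholds = rng.standard_normal(B), rng.standard_normal(B)
h, z = rng.standard_normal(M), rng.standard_normal(M)
for _ in range(10):
    z = dend_plrnn_step(z, A, W, alpha, thresholds, h)
print(z)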
Kramer, Daniel; Bommer, Philine Lou; Tombolini, Carlo; Koppe, Georgia; Durstewitz, Daniel
Identifying nonlinear dynamical systems from multi-modal time series data Inproceedings
2022.
@inproceedings{Kramer2022,
title = {Identifying nonlinear dynamical systems from multi-modal time series data},
author = {Daniel Kramer and Philine Lou Bommer and Carlo Tombolini and Georgia Koppe and Daniel Durstewitz},
url = {https://proceedings.mlr.press/v162/kramer22a.html},
year = {2022},
date = {2022-06-21},
journal = {Proceedings of Machine Learning Research},
volume = {162},
abstract = {Empirically observed time series in physics, biology, or medicine, are commonly generated by some underlying dynamical system (DS) which is the target of scientific interest. There is an increasing interest to harvest machine learning methods to reconstruct this latent DS in a completely data-driven, unsupervised way. In many areas of science it is common to sample time series observations from many data modalities simultaneously, e.g. electrophysiological and behavioral time series in a typical neuroscience experiment. However, current machine learning tools for reconstructing DSs usually focus on just one data modality. Here we propose a general framework for multi-modal data integration for the purpose of nonlinear DS identification and cross-modal prediction. This framework is based on dynamically interpretable recurrent neural networks as general approximators of nonlinear DSs, coupled to sets of modality-specific decoder models from the class of generalized linear models. Both an expectation-maximization and a variational inference algorithm for model training are advanced and compared. We show on nonlinear DS benchmarks that our algorithms can efficiently compensate for too noisy or missing information in one data channel by exploiting other channels, and demonstrate on experimental neuroscience data how the algorithm learns to link different data domains to the underlying dynamics.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Empirically observed time series in physics, biology, or medicine, are commonly generated by some underlying dynamical system (DS) which is the target of scientific interest. There is an increasing interest to harvest machine learning methods to reconstruct this latent DS in a completely data-driven, unsupervised way. In many areas of science it is common to sample time series observations from many data modalities simultaneously, e.g. electrophysiological and behavioral time series in a typical neuroscience experiment. However, current machine learning tools for reconstructing DSs usually focus on just one data modality. Here we propose a general framework for multi-modal data integration for the purpose of nonlinear DS identification and cross-modal prediction. This framework is based on dynamically interpretable recurrent neural networks as general approximators of nonlinear DSs, coupled to sets of modality-specific decoder models from the class of generalized linear models. Both an expectation-maximization and a variational inference algorithm for model training are advanced and compared. We show on nonlinear DS benchmarks that our algorithms can efficiently compensate for too noisy or missing information in one data channel by exploiting other channels, and demonstrate on experimental neuroscience data how the algorithm learns to link different data domains to the underlying dynamics.
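As a concrete illustration of the modality-specific decoder idea, here is a minimal Python sketch of a joint log-likelihood combining a Gaussian and a Poisson observation channel on top of a shared latent state; the names (B_g, B_p, sigma) and the exact GLM links are assumptions for illustration, not the paper's specification.

import numpy as np
from scipy.special import gammaln

def decoder_loglik(z, x_gauss, x_count, B_g, B_p, sigma):
    # Gaussian channel (e.g., a continuous signal): x_gauss ~ N(B_g z, sigma^2 I)
    mu = B_g @ z
    ll_gauss = -0.5 * np.sum(((x_gauss - mu) / sigma) ** 2)
    # Count channel (e.g., spike counts): x_count ~ Poisson(exp(B_p z))
    log_rate = B_p @ z
    ll_pois = np.sum(x_count * log_rate - np.exp(log_rate) - gammaln(x_count + 1))
    return ll_gauss + ll_pois

# toy usage with a 3-dimensional latent state
rng = np.random.default_rng(0)
z = rng.standard_normal(3)
B_g, B_p = rng.standard_normal((4, 3)), 0.3 * rng.standard_normal((5, 3))
x_gauss = B_g @ z + 0.1 * rng.standard_normal(4)
x_count = rng.poisson(np.exp(B_p @ z))
print(decoder_loglik(z, x_gauss, x_count, B_g, B_p, sigma=0.1))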
Thome, Janine; Pinger, Mathieu; Halli, Patrick; Durstewitz, Daniel; Sommer, Wolfgang H; Kirsch, Peter; Koppe, Georgia
A Model Guided Approach to Evoke Homogeneous Behavior During Temporal Reward and Loss Discounting Journal Article
Frontiers in Psychiatry, 2022.
@article{Thome2022b,
title = {A Model Guided Approach to Evoke Homogeneous Behavior During Temporal Reward and Loss Discounting},
author = {Janine Thome and Mathieu Pinger and Patrick Halli and Daniel Durstewitz and Wolfgang H. Sommer and Peter Kirsch and Georgia Koppe},
url = {https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9253427/},
doi = {10.3389/fpsyt.2022.846119},
year = {2022},
date = {2022-06-21},
journal = {Frontiers in Psychiatry},
abstract = {The tendency to devaluate future options as a function of time, known as delay discounting, is associated with various factors such as psychiatric illness and personality. Under identical experimental conditions, individuals may therefore strongly differ in the degree to which they discount future options. In delay discounting tasks, this inter-individual variability inevitably results in an unequal number of discounted trials per subject, generating difficulties in linking delay discounting to psychophysiological and neural correlates. Many studies have therefore focused on assessing delay discounting adaptively. Here, we extend these approaches by developing an adaptive paradigm which aims at inducing more comparable and homogeneous discounting frequencies across participants on a dimensional scale.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
The tendency to devaluate future options as a function of time, known as delay discounting, is associated with various factors such as psychiatric illness and personality. Under identical experimental conditions, individuals may therefore strongly differ in the degree to which they discount future options. In delay discounting tasks, this inter-individual variability inevitably results in an unequal number of discounted trials per subject, generating difficulties in linking delay discounting to psychophysiological and neural correlates. Many studies have therefore focused on assessing delay discounting adaptively. Here, we extend these approaches by developing an adaptive paradigm which aims at inducing more comparable and homogeneous discounting frequencies across participants on a dimensional scale.
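The model-guided logic lends itself to a small worked example: under a standard hyperbolic-discounting-plus-softmax model (an assumed, common parameterization, not necessarily the paper's exact one), the choice model can be inverted to compute the delayed amount that should evoke a target discounting probability for a subject with given parameters.

import numpy as np

def choice_prob_delayed(k, beta, immediate, delayed, delay):
    # P(choose delayed) with hyperbolic value V = delayed / (1 + k * delay)
    # and a softmax (logistic) choice rule with inverse temperature beta.
    v_del = delayed / (1.0 + k * delay)
    return 1.0 / (1.0 + np.exp(-beta * (v_del - immediate)))

def delayed_amount_for_target(k, beta, immediate, delay, p_target):
    # Invert the model: logit(p) = beta * (V_del - V_imm)
    # => V_del = V_imm + logit(p) / beta, then undo the discounting.
    v_del = immediate + np.log(p_target / (1 - p_target)) / beta
    return v_del * (1.0 + k * delay)

# e.g. target a 0.75 probability of choosing the delayed option at 30 days
amount = delayed_amount_for_target(k=0.02, beta=1.5, immediate=10.0, delay=30, p_target=0.75)
print(amount, choice_prob_delayed(0.02, 1.5, 10.0, amount, 30))  # recovers 0.75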
Melbaum, Svenja; Russo, Eleonora; Eriksson, David; Schneider, Artur; Durstewitz, Daniel; Brox, Thomas; Diester, Ilka
Conserved structures of neural activity in sensorimotor cortex of freely moving rats allow cross-subject decoding Journal Article
bioRxiv, 2022.
@article{Melbaum2022,
title = {Conserved structures of neural activity in sensorimotor cortex of freely moving rats allow cross-subject decoding},
author = {Svenja Melbaum and Eleonora Russo and David Eriksson and Artur Schneider and Daniel Durstewitz and Thomas Brox and Ilka Diester},
url = {https://www.biorxiv.org/content/10.1101/2021.03.04.433869v2},
doi = {10.1101/2021.03.04.433869},
year = {2022},
date = {2022-02-18},
journal = {bioRxiv},
abstract = {Our knowledge about neuronal activity in the sensorimotor cortex relies primarily on stereotyped movements that are strictly controlled in experimental settings. It remains unclear how results can be carried over to less constrained behavior like that of freely moving subjects. Toward this goal, we developed a self-paced behavioral paradigm that encouraged rats to engage in different movement types. We employed bilateral electrophysiological recordings across the entire sensorimotor cortex and simultaneous paw tracking. These techniques revealed behavioral coupling of neurons with lateralization and an anterior–posterior gradient from the premotor to the primary sensory cortex. The structure of population activity patterns was conserved across animals despite the severe under-sampling of the total number of neurons and variations in electrode positions across individuals. We demonstrated cross-subject and cross-session generalization in a decoding task through alignments of low-dimensional neural manifolds, providing evidence of a conserved neuronal code.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Our knowledge about neuronal activity in the sensorimotor cortex relies primarily on stereotyped movements that are strictly controlled in experimental settings. It remains unclear how results can be carried over to less constrained behavior like that of freely moving subjects. Toward this goal, we developed a self-paced behavioral paradigm that encouraged rats to engage in different movement types. We employed bilateral electrophysiological recordings across the entire sensorimotor cortex and simultaneous paw tracking. These techniques revealed behavioral coupling of neurons with lateralization and an anterior–posterior gradient from the premotor to the primary sensory cortex. The structure of population activity patterns was conserved across animals despite the severe under-sampling of the total number of neurons and variations in electrode positions across individuals. We demonstrated cross-subject and cross-session generalization in a decoding task through alignments of low-dimensional neural manifolds, providing evidence of a conserved neuronal code.
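The cross-subject decoding via alignment of low-dimensional neural manifolds can be illustrated with a minimal orthogonal-Procrustes sketch in Python; whether the study uses exactly this solver (rather than, e.g., CCA-based alignment) is an assumption made here for illustration.

import numpy as np

def procrustes_align(X_source, X_target):
    # Find the rotation R minimizing ||X_source R - X_target||_F via SVD,
    # then map the source subject's latents into the target's frame.
    # Rows are time points, columns are latent (manifold) dimensions.
    U, _, Vt = np.linalg.svd(X_source.T @ X_target)
    return X_source @ (U @ Vt)

# sanity check: a rotated copy of a latent trajectory aligns back exactly
rng = np.random.default_rng(1)
X_t = rng.standard_normal((200, 3))            # "target" subject latents
R_true, _ = np.linalg.qr(rng.standard_normal((3, 3)))
X_s = X_t @ R_true.T                           # "source" subject: rotated copy
print(np.allclose(procrustes_align(X_s, X_t), X_t, atol=1e-8))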
2021
Owusu, Priscilla N; Reininghaus, Ulrich; Koppe, Georgia; Dankwa-Mullan, Irene; Bärnighausen, Till
Artificial intelligence applications in social media for depression screening: A systematic review protocol for content validity processes Journal Article
PLoS ONE, 2021.
@article{Owusu2021,
title = {Artificial intelligence applications in social media for depression screening: A systematic review protocol for content validity processes},
author = {Priscilla N. Owusu and Ulrich Reininghaus and Georgia Koppe and Irene Dankwa-Mullan and Till Bärnighausen},
url = {https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0259499},
doi = {10.1371/journal.pone.0259499},
year = {2021},
date = {2021-11-08},
journal = {PLoS ONE},
abstract = {The popularization of social media has led to the coalescing of user groups around mental health conditions; in particular, depression. Social media offers a rich environment for contextualizing and predicting users’ self-reported burden of depression. Modern artificial intelligence (AI) methods are commonly employed in analyzing user-generated sentiment on social media. In the forthcoming systematic review, we will examine the content validity of these computer-based health surveillance models with respect to standard diagnostic frameworks. Drawing from a clinical perspective, we will attempt to establish a normative judgment about the strengths of these modern AI applications in the detection of depression.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
The popularization of social media has led to the coalescing of user groups around mental health conditions; in particular, depression. Social media offers a rich environment for contextualizing and predicting users’ self-reported burden of depression. Modern artificial intelligence (AI) methods are commonly employed in analyzing user-generated sentiment on social media. In the forthcoming systematic review, we will examine the content validity of these computer-based health surveillance models with respect to standard diagnostic frameworks. Drawing from a clinical perspective, we will attempt to establish a normative judgment about the strengths of these modern AI applications in the detection of depression.
Thome, Janine; Steinbach, Robert; Grosskreutz, Julian; Durstewitz, Daniel; Koppe, Georgia
Classification of amyotrophic lateral sclerosis by brain volume, connectivity, and network dynamics Journal Article
Human Brain Mapping, 2021.
@article{Thome2021,
title = {Classification of amyotrophic lateral sclerosis by brain volume, connectivity, and network dynamics},
author = {Janine Thome and Robert Steinbach and Julian Grosskreutz and Daniel Durstewitz and Georgia Koppe},
url = {https://doi.org/10.1002/hbm.25679},
year = {2021},
date = {2021-10-16},
journal = {Human Brain Mapping},
abstract = {Emerging studies corroborate the importance of neuroimaging biomarkers and machine learning to improve diagnostic classification of amyotrophic lateral sclerosis (ALS). While most studies focus on structural data, recent studies assessing functional connectivity between brain regions by linear methods highlight the role of brain function. These studies have yet to be combined with brain structure and nonlinear functional features. We investigate the role of linear and nonlinear functional brain features, and the benefit of combining brain structure and function for ALS classification. ALS patients (N = 97) and healthy controls (N = 59) underwent structural and functional resting state magnetic resonance imaging. Based on key hubs of resting state networks, we defined three feature sets comprising brain volume, resting state functional connectivity (rsFC), as well as (nonlinear) resting state dynamics assessed via recurrent neural networks. Unimodal and multimodal random forest classifiers were built to classify ALS. Out-of-sample prediction errors were assessed via five-fold cross-validation. Unimodal classifiers achieved a classification accuracy of 56.35–61.66%. Multimodal classifiers outperformed unimodal classifiers achieving accuracies of 62.85–66.82%. Evaluating the ranking of individual features' importance scores across all classifiers revealed that rsFC features were most dominant in classification. While univariate analyses revealed reduced rsFC in ALS patients, functional features more generally indicated deficits in information integration across resting state brain networks in ALS. The present work underscores that combining brain structure and function provides an additional benefit to diagnostic classification, as indicated by multimodal classifiers, while emphasizing the importance of capturing both linear and nonlinear functional brain properties to identify discriminative biomarkers of ALS.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Emerging studies corroborate the importance of neuroimaging biomarkers and machine learning to improve diagnostic classification of amyotrophic lateral sclerosis (ALS). While most studies focus on structural data, recent studies assessing functional connectivity between brain regions by linear methods highlight the role of brain function. These studies have yet to be combined with brain structure and nonlinear functional features. We investigate the role of linear and nonlinear functional brain features, and the benefit of combining brain structure and function for ALS classification. ALS patients (N = 97) and healthy controls (N = 59) underwent structural and functional resting state magnetic resonance imaging. Based on key hubs of resting state networks, we defined three feature sets comprising brain volume, resting state functional connectivity (rsFC), as well as (nonlinear) resting state dynamics assessed via recurrent neural networks. Unimodal and multimodal random forest classifiers were built to classify ALS. Out-of-sample prediction errors were assessed via five-fold cross-validation. Unimodal classifiers achieved a classification accuracy of 56.35–61.66%. Multimodal classifiers outperformed unimodal classifiers achieving accuracies of 62.85–66.82%. Evaluating the ranking of individual features' importance scores across all classifiers revealed that rsFC features were most dominant in classification. While univariate analyses revealed reduced rsFC in ALS patients, functional features more generally indicated deficits in information integration across resting state brain networks in ALS. The present work underscores that combining brain structure and function provides an additional benefit to diagnostic classification, as indicated by multimodal classifiers, while emphasizing the importance of capturing both linear and nonlinear functional brain properties to identify discriminative biomarkers of ALS.
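For orientation, a minimal Python sketch of the unimodal-versus-multimodal classification setup with 5-fold cross-validation, on synthetic stand-in features; feature dimensions and hyperparameters are illustrative assumptions, not the study's settings.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 156                                   # 97 ALS patients + 59 controls
vol = rng.standard_normal((n, 20))        # brain volume features (synthetic)
rsfc = rng.standard_normal((n, 45))       # resting state functional connectivity
dyn = rng.standard_normal((n, 30))        # nonlinear resting state dynamics
y = np.array([1] * 97 + [0] * 59)

clf = RandomForestClassifier(n_estimators=500, random_state=0)
for name, X in [("volume", vol), ("rsFC", rsfc), ("dynamics", dyn),
                ("multimodal", np.hstack([vol, rsfc, dyn]))]:
    acc = cross_val_score(clf, X, y, cv=5).mean()   # out-of-sample accuracy
    print(name, round(acc, 3))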
Braun, Urs; Harneit, Anais; Pergola, Giulio; Menara, Tommaso; Schäfer, Axel; Betzel, Richard F; Zang, Zhenxiang; Schweiger, Janina I; Zhang, Xiaolong; Schwarz, Kristina; Chen, Junfang; Blasi, Giuseppe; Bertolino, Alessandro; Durstewitz, Daniel; Pasqualetti, Fabio; Schwarz, Emanuel; Meyer-Lindenberg, Andreas; Bassett, Danielle S; Tost, Heike
Brain network dynamics during working memory are modulated by dopamine and diminished in schizophrenia Journal Article
Nature Communications, 2021.
@article{Braun2021,
title = {Brain network dynamics during working memory are modulated by dopamine and diminished in schizophrenia},
author = {Urs Braun and Anais Harneit and Giulio Pergola and Tommaso Menara and Axel Schäfer and Richard F. Betzel and Zhenxiang Zang and Janina I. Schweiger and Xiaolong Zhang and Kristina Schwarz and Junfang Chen and Giuseppe Blasi and Alessandro Bertolino and Daniel Durstewitz and Fabio Pasqualetti and Emanuel Schwarz and Andreas Meyer-Lindenberg and Danielle S. Bassett and Heike Tost},
url = {https://www.nature.com/articles/s41467-021-23694-9},
doi = {10.1038/s41467-021-23694-9},
year = {2021},
date = {2021-06-09},
journal = {Nature Communications},
abstract = {Dynamical brain state transitions are critical for flexible working memory but the network mechanisms are incompletely understood. Here, we show that working memory performance entails brain-wide switching between activity states using a combination of functional magnetic resonance imaging in healthy controls and individuals with schizophrenia, pharmacological fMRI, genetic analyses and network control theory. The stability of states relates to dopamine D1 receptor gene expression while state transitions are influenced by D2 receptor expression and pharmacological modulation. Individuals with schizophrenia show altered network control properties, including a more diverse energy landscape and decreased stability of working memory representations. Our results demonstrate the relevance of dopamine signaling for the steering of whole-brain network dynamics during working memory and link these processes to schizophrenia pathophysiology.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Dynamical brain state transitions are critical for flexible working memory but the network mechanisms are incompletely understood. Here, we show that working memory performance entails brain-wide switching between activity states using a combination of functional magnetic resonance imaging in healthy controls and individuals with schizophrenia, pharmacological fMRI, genetic analyses and network control theory. The stability of states relates to dopamine D1 receptor gene expression while state transitions are influenced by D2 receptor expression and pharmacological modulation. Individuals with schizophrenia show altered network control properties, including a more diverse energy landscape and decreased stability of working memory representations. Our results demonstrate the relevance of dopamine signaling for the steering of whole-brain network dynamics during working memory and link these processes to schizophrenia pathophysiology.
Russo, Eleonora; Ma, Tianyang; Spanagel, Rainer; Durstewitz, Daniel; Toutounji, Hazem; Köhr, Georg
Coordinated prefrontal state transition leads extinction of reward-seeking behaviors Journal Article
Journal of Neuroscience, 41 (11), 2021.
@article{Russo2021,
title = {Coordinated prefrontal state transition leads extinction of reward-seeking behaviors},
author = {Eleonora Russo and Tianyang Ma and Rainer Spanagel and Daniel Durstewitz and Hazem Toutounji and Georg Köhr},
url = {https://www.jneurosci.org/content/jneuro/41/11/2406.full.pdf},
year = {2021},
date = {2021-02-02},
journal = {Journal of Neuroscience},
volume = {41},
number = {11},
abstract = {Extinction learning suppresses conditioned reward responses and is thus fundamental to adapt to changing environmental demands and to control excessive reward seeking. The medial prefrontal cortex (mPFC) monitors and controls conditioned reward responses. Abrupt transitions in mPFC activity anticipate changes in conditioned responses to altered contingencies. It remains, however, unknown whether such transitions are driven by the extinction of old behavioral strategies or by the acquisition of new competing ones. Using in vivo multiple single-unit recordings of mPFC in male rats, we studied the relationship between single-unit and population dynamics during extinction learning, using alcohol as a positive reinforcer in an operant conditioning paradigm. To examine the fine temporal relation between neural activity and behavior, we developed a novel behavioral model that allowed us to identify the number, onset, and duration of extinction-learning episodes in the behavior of each animal. We found that single-unit responses to conditioned stimuli changed even under stable experimental conditions and behavior. However, when behavioral responses to task contingencies had to be updated, unit-specific modulations became coordinated across the whole population, pushing the network into a new stable attractor state. Thus, extinction learning is not associated with suppressed mPFC responses to conditioned stimuli, but is anticipated by single-unit coordination into population-wide transitions of the internal state of the animal.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Extinction learning suppresses conditioned reward responses and is thus fundamental to adapt to changing environmental demands and to control excessive reward seeking. The medial prefrontal cortex (mPFC) monitors and controls conditioned reward responses. Abrupt transitions in mPFC activity anticipate changes in conditioned responses to altered contingencies. It remains, however, unknown whether such transitions are driven by the extinction of old behavioral strategies or by the acquisition of new competing ones. Using in vivo multiple single-unit recordings of mPFC in male rats, we studied the relationship between single-unit and population dynamics during extinction learning, using alcohol as a positive reinforcer in an operant conditioning paradigm. To examine the fine temporal relation between neural activity and behavior, we developed a novel behavioral model that allowed us to identify the number, onset, and duration of extinction-learning episodes in the behavior of each animal. We found that single-unit responses to conditioned stimuli changed even under stable experimental conditions and behavior. However, when behavioral responses to task contingencies had to be updated, unit-specific modulations became coordinated across the whole population, pushing the network into a new stable attractor state. Thus, extinction learning is not associated with suppressed mPFC responses to conditioned stimuli, but is anticipated by single-unit coordination into population-wide transitions of the internal state of the animal.
2020
Koppe, Georgia; Meyer-Lindenberg, Andreas; Durstewitz, Daniel
Deep learning for small and big data in psychiatry Journal Article
Neuropsychopharmacology, 2020.
@article{Koppe2020b,
title = {Deep learning for small and big data in psychiatry},
author = {Georgia Koppe and Andreas Meyer-Lindenberg and Daniel Durstewitz},
url = {https://www.nature.com/articles/s41386-020-0767-z},
doi = {10.1038/s41386-020-0767-z},
year = {2020},
date = {2020-07-15},
journal = {Neuropsychopharmacology},
abstract = {Psychiatry today must gain a better understanding of the common and distinct pathophysiological mechanisms underlying psychiatric disorders in order to deliver more effective, person-tailored treatments. To this end, it appears that the analysis of ‘small’ experimental samples using conventional statistical approaches has largely failed to capture the heterogeneity underlying psychiatric phenotypes. Modern algorithms and approaches from machine learning, particularly deep learning, provide new hope to address these issues given their outstanding prediction performance in other disciplines. The strength of deep learning algorithms is that they can implement very complicated, and in principle arbitrary predictor-response mappings efficiently. This power comes at a cost, the need for large training (and test) samples to infer the (sometimes over millions of) model parameters. This appears to be at odds with the as yet rather ‘small’ samples available in psychiatric human research to date (n < 10,000), and the ambition of predicting treatment at the single subject level (n = 1). Here, we aim at giving a comprehensive overview on how we can yet use such models for prediction in psychiatry. We review how machine learning approaches compare to more traditional statistical hypothesis-driven approaches, how their complexity relates to the need of large sample sizes, and what we can do to optimally use these powerful techniques in psychiatric neuroscience.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Psychiatry today must gain a better understanding of the common and distinct pathophysiological mechanisms underlying psychiatric disorders in order to deliver more effective, person-tailored treatments. To this end, it appears that the analysis of ‘small’ experimental samples using conventional statistical approaches has largely failed to capture the heterogeneity underlying psychiatric phenotypes. Modern algorithms and approaches from machine learning, particularly deep learning, provide new hope to address these issues given their outstanding prediction performance in other disciplines. The strength of deep learning algorithms is that they can implement very complicated, and in principle arbitrary predictor-response mappings efficiently. This power comes at a cost, the need for large training (and test) samples to infer the (sometimes over millions of) model parameters. This appears to be at odds with the as yet rather ‘small’ samples available in psychiatric human research to date (n < 10,000), and the ambition of predicting treatment at the single subject level (n = 1). Here, we aim at giving a comprehensive overview on how we can yet use such models for prediction in psychiatry. We review how machine learning approaches compare to more traditional statistical hypothesis-driven approaches, how their complexity relates to the need of large sample sizes, and what we can do to optimally use these powerful techniques in psychiatric neuroscience.
Oettl, Lars-Lennart; Scheller, Max; Filosa, Carla; Wieland, Sebastian; Haag, Franziska; Loeb, Cathrin; Durstewitz, Daniel; Shusterman, Roman; Russo, Eleonora; Kelsch, Wolfgang
Phasic dopamine reinforces distinct striatal stimulus encoding in the olfactory tubercle driving dopaminergic reward prediction Journal Article
Nature Communications, 2020.
@article{Oettl2020,
title = {Phasic dopamine reinforces distinct striatal stimulus encoding in the olfactory tubercle driving dopaminergic reward prediction},
author = {Lars-Lennart Oettl and Max Scheller and Carla Filosa and Sebastian Wieland and Franziska Haag and Cathrin Loeb and Daniel Durstewitz and Roman Shusterman and Eleonora Russo and Wolfgang Kelsch},
url = {https://www.nature.com/articles/s41467-020-17257-7#disqus_thread},
doi = {10.1038/s41467-020-17257-7},
year = {2020},
date = {2020-07-10},
journal = {Nature Communications},
abstract = {The learning of stimulus-outcome associations allows for predictions about the environment. Ventral striatum and dopaminergic midbrain neurons form a larger network for generating reward prediction signals from sensory cues. Yet, the network plasticity mechanisms to generate predictive signals in these distributed circuits have not been entirely clarified. Also, direct evidence of the underlying interregional assembly formation and information transfer is still missing. Here we show that phasic dopamine is sufficient to reinforce the distinctness of stimulus representations in the ventral striatum even in the absence of reward. Upon such reinforcement, striatal stimulus encoding gives rise to interregional assemblies that drive dopaminergic neurons during stimulus-outcome learning. These assemblies dynamically encode the predicted reward value of conditioned stimuli. Together, our data reveal that ventral striatal and midbrain reward networks form a reinforcing loop to generate reward prediction coding.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
The learning of stimulus-outcome associations allows for predictions about the environment. Ventral striatum and dopaminergic midbrain neurons form a larger network for generating reward prediction signals from sensory cues. Yet, the network plasticity mechanisms to generate predictive signals in these distributed circuits have not been entirely clarified. Also, direct evidence of the underlying interregional assembly formation and information transfer is still missing. Here we show that phasic dopamine is sufficient to reinforce the distinctness of stimulus representations in the ventral striatum even in the absence of reward. Upon such reinforcement, striatal stimulus encoding gives rise to interregional assemblies that drive dopaminergic neurons during stimulus-outcome learning. These assemblies dynamically encode the predicted reward value of conditioned stimuli. Together, our data reveal that ventral striatal and midbrain reward networks form a reinforcing loop to generate reward prediction coding.
Monfared, Zahra; Durstewitz, Daniel
Existence of n-cycles and border-collision bifurcations in piecewise-linear continuous maps with applications to recurrent neural networks Journal Article
Nonlinear Dynamics, 2020.
@article{Monfared2020,
title = {Existence of n-cycles and border-collision bifurcations in piecewise-linear continuous maps with applications to recurrent neural networks},
author = {Zahra Monfared and Daniel Durstewitz},
url = {https://arxiv.org/abs/1911.04304},
doi = {10.1007/s11071-020-05777-2},
year = {2020},
date = {2020-07-01},
journal = {Nonlinear Dynamics},
abstract = {Piecewise linear recurrent neural networks (PLRNNs) form the basis of many successful machine learning applications for time series prediction and dynamical systems identification, but rigorous mathematical analysis of their dynamics and properties is lagging behind. Here we contribute to this topic by investigating the existence of n-cycles (n≥3) and border-collision bifurcations in a class of n-dimensional piecewise linear continuous maps which have the general form of a PLRNN. This is particularly important as for one-dimensional maps the existence of 3-cycles implies chaos. It is shown that these n-cycles collide with the switching boundary in a border-collision bifurcation, and parametric regions for the existence of both stable and unstable n-cycles and border-collision bifurcations will be derived theoretically. We then discuss how our results can be extended and applied to PLRNNs. Finally, numerical simulations demonstrate the implementation of our results and are found to be in good agreement with the theoretical derivations. Our findings thus provide a basis for understanding periodic behavior in PLRNNs, how it emerges in bifurcations, and how it may lead into chaos. },
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Piecewise linear recurrent neural networks (PLRNNs) form the basis of many successful machine learning applications for time series prediction and dynamical systems identification, but rigorous mathematical analysis of their dynamics and properties is lagging behind. Here we contribute to this topic by investigating the existence of n-cycles (n≥3) and border-collision bifurcations in a class of n-dimensional piecewise linear continuous maps which have the general form of a PLRNN. This is particularly important as for one-dimensional maps the existence of 3-cycles implies chaos. It is shown that these n-cycles collide with the switching boundary in a border-collision bifurcation, and parametric regions for the existence of both stable and unstable n-cycles and border-collision bifurcations will be derived theoretically. We then discuss how our results can be extended and applied to PLRNNs. Finally, numerical simulations demonstrate the implementation of our results and are found to be in good agreement with the theoretical derivations. Our findings thus provide a basis for understanding periodic behavior in PLRNNs, how it emerges in bifurcations, and how it may lead into chaos.
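To connect the statement to something executable, here is a crude numerical Python sketch that iterates a PLRNN-form piecewise-linear map and checks for a k-cycle; the paper derives existence and stability regions analytically, so this brute-force check is illustrative only. The 1D parameters below are chosen so that a stable 2-cycle at {-2/3, 5/6} exists and can be verified by hand.

import numpy as np

def plrnn_map(z, A, W, h):
    # PLRNN-form piecewise-linear continuous map: z -> A z + W relu(z) + h
    return A @ z + W @ np.maximum(0.0, z) + h

def find_k_cycle(z0, A, W, h, k, iters=2000, tol=1e-9):
    # Burn in toward an attractor, then test whether the orbit closes
    # after k steps. Brute force only; no existence guarantee.
    z = z0
    for _ in range(iters):
        z = plrnn_map(z, A, W, h)
    orbit = [z]
    for _ in range(k - 1):
        orbit.append(plrnn_map(orbit[-1], A, W, h))
    z_ret = plrnn_map(orbit[-1], A, W, h)
    return orbit if np.linalg.norm(z_ret - orbit[0]) < tol else None

# 1D example: slope 0.25 for z < 0 and 0.25 - 2.25 = -2 for z >= 0, offset 1.
# Cycle check: -2/3 -> 0.25*(-2/3) + 1 = 5/6 -> -2*(5/6) + 1 = -2/3; the
# slope product 0.25 * (-2) = -0.5 has magnitude < 1, so the cycle is stable.
A, W, h = np.array([[0.25]]), np.array([[-2.25]]), np.array([1.0])
print(find_k_cycle(np.array([0.1]), A, W, h, k=2))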
Monfared, Zahra; Durstewitz, Daniel
Transformation of ReLU-based recurrent neural networks from discrete-time to continuous-time Inproceedings
2020.
@inproceedings{Monfared2020b,
title = {Transformation of ReLU-based recurrent neural networks from discrete-time to continuous-time},
author = {Zahra Monfared and Daniel Durstewitz},
url = {https://arxiv.org/abs/2007.00321},
year = {2020},
date = {2020-07-01},
journal = {Proceedings of the International Conference on Machine Learning},
abstract = {Recurrent neural networks (RNN) as used in machine learning are commonly formulated in discrete time, i.e. as recursive maps. This brings a lot of advantages for training models on data, e.g. for the purpose of time series prediction or dynamical systems identification, as powerful and efficient inference algorithms exist for discrete time systems and numerical integration of differential equations is not necessary. On the other hand, mathematical analysis of dynamical systems inferred from data is often more convenient and enables additional insights if these are formulated in continuous time, i.e. as systems of ordinary (or partial) differential equations (ODE). Here we show how to perform such a translation from discrete to continuous time for a particular class of ReLU-based RNN. We prove three theorems on the mathematical equivalence between the discrete and continuous time formulations under a variety of conditions, and illustrate how to use our mathematical results on different machine learning and nonlinear dynamical systems examples. },
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Recurrent neural networks (RNN) as used in machine learning are commonly formulated in discrete time, i.e. as recursive maps. This brings a lot of advantages for training models on data, e.g. for the purpose of time series prediction or dynamical systems identification, as powerful and efficient inference algorithms exist for discrete time systems and numerical integration of differential equations is not necessary. On the other hand, mathematical analysis of dynamical systems inferred from data is often more convenient and enables additional insights if these are formulated in continuous time, i.e. as systems of ordinary (or partial) differential equations (ODE). Here we show how to perform such a translation from discrete to continuous time for a particular class of ReLU-based RNN. We prove three theorems on the mathematical equivalence between the discrete and continuous time formulations under a variety of conditions, and illustrate how to use our mathematical results on different machine learning and nonlinear dynamical systems examples.
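The core idea is easiest to see in the purely linear regime, where the translation is exact: a discrete map z_{t+1} = A z_t agrees at the sampling times with the ODE dz/dt = L z whenever expm(L*dt) = A, i.e. L = logm(A)/dt. The Python sketch below shows only this linear core; extending it region by region to ReLU-based RNNs is the paper's actual contribution.

import numpy as np
from scipy.linalg import expm, logm

dt = 0.1
A = np.array([[0.9, -0.2],
              [0.1,  0.8]])
L = logm(A) / dt                 # matrix logarithm: expm(L * dt) equals A
z0 = np.array([1.0, 0.5])
print(A @ z0)                    # one step of the discrete-time map
print((expm(L * dt) @ z0).real)  # flow of dz/dt = L z over dt: same point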
Linke, Julia; Koppe, Georgia; Scholz, Vanessa; Kanske, Philipp; Durstewitz, Daniel; Wessa, Michèle
Aberrant probabilistic reinforcement learning in first-degree relatives of individuals with bipolar disorder Journal Article
Journal of Affective Disorders, 2020.
@article{Linke2020,
title = {Aberrant probabilistic reinforcement learning in first-degree relatives of individuals with bipolar disorder},
author = {Julia Linke and Georgia Koppe and Vanessa Scholz and Philipp Kanske and Daniel Durstewitz and Michèle Wessa},
url = {https://doi.org/10.1016/j.jad.2019.11.063},
doi = {10.1016/j.jad.2019.11.063},
year = {2020},
date = {2020-03-01},
journal = {Journal of Affective Disorders},
abstract = {Motivational dysregulation represents a core vulnerability factor for bipolar disorder. Whether this also comprises aberrant learning of stimulus-reinforcer contingencies is less clear.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Motivational dysregulation represents a core vulnerability factor for bipolar disorder. Whether this also comprises aberrant learning of stimulus-reinforcer contingencies is less clear.
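As background for readers unfamiliar with probabilistic reinforcement-learning paradigms, here is a minimal Rescorla-Wagner/softmax agent on a two-armed task in Python; this is the standard model family for such tasks, not necessarily the exact model fit in the study.

import numpy as np

def simulate_prl(alpha, beta, p_reward=(0.8, 0.2), n_trials=200, seed=0):
    # alpha: learning rate; beta: softmax inverse temperature.
    rng = np.random.default_rng(seed)
    q = np.zeros(2)                         # learned stimulus values
    choices, rewards = [], []
    for _ in range(n_trials):
        p0 = 1.0 / (1.0 + np.exp(-beta * (q[0] - q[1])))
        c = 0 if rng.random() < p0 else 1   # softmax choice
        r = float(rng.random() < p_reward[c])
        q[c] += alpha * (r - q[c])          # delta-rule (prediction error) update
        choices.append(c); rewards.append(r)
    return np.array(choices), np.array(rewards)

choices, rewards = simulate_prl(alpha=0.3, beta=5.0)
print(choices.mean(), rewards.mean())       # fraction arm 1 chosen, reward rate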
Russo, Eleonora; Ma, Tianyang; Spanagel, Rainer; Durstewitz, Daniel; Toutounji, Hazem; Köhr, Georg
Coordinated prefrontal state transition leads extinction of reward-seeking behaviors Journal Article
bioRxiv, 2020.
@article{Russo2020,
title = {Coordinated prefrontal state transition leads extinction of reward-seeking behaviors},
author = {Eleonora Russo and Tianyang Ma and Rainer Spanagel and Daniel Durstewitz and Hazem Toutounji and Georg Köhr},
url = {https://www.biorxiv.org/content/10.1101/2020.02.26.964510v1.full},
doi = {10.1101/2020.02.26.964510},
year = {2020},
date = {2020-02-27},
journal = {bioRxiv},
abstract = {Extinction learning suppresses conditioned reward responses and is thus fundamental to adapt to changing environmental demands and to control excessive reward seeking. The medial prefrontal cortex (mPFC) monitors and controls conditioned reward responses. Using in vivo multiple single-unit recordings of mPFC we studied the relationship between single-unit and population dynamics during different phases of an operant conditioning task. To examine the fine temporal relation between neural activity and behavior, we developed a model-based statistical analysis that captured behavioral idiosyncrasies. We found that single-unit responses to conditioned stimuli changed throughout the course of a session even under stable experimental conditions and consistent behavior. However, when behavioral responses to task contingencies had to be updated during the extinction phase, unit-specific modulations became coordinated across the whole population, pushing the network into a new stable attractor state. These results show that extinction learning is not associated with suppressed mPFC responses to conditioned stimuli, but is driven by single-unit coordination into population-wide transitions of the animal’s internal state.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Extinction learning suppresses conditioned reward responses and is thus fundamental to adapt to changing environmental demands and to control excessive reward seeking. The medial prefrontal cortex (mPFC) monitors and controls conditioned reward responses. Using in vivo multiple single-unit recordings of mPFC we studied the relationship between single-unit and population dynamics during different phases of an operant conditioning task. To examine the fine temporal relation between neural activity and behavior, we developed a model-based statistical analysis that captured behavioral idiosyncrasies. We found that single-unit responses to conditioned stimuli changed throughout the course of a session even under stable experimental conditions and consistent behavior. However, when behavioral responses to task contingencies had to be updated during the extinction phase, unit-specific modulations became coordinated across the whole population, pushing the network into a new stable attractor state. These results show that extinction learning is not associated with suppressed mPFC responses to conditioned stimuli, but is driven by single-unit coordination into population-wide transitions of the animal’s internal state.
Koppe, Georgia; Huys, Quentin; Durstewitz, Daniel
Psychiatric Illnesses as Disorders of Network Dynamics Journal Article
Biological Psychiatry, 2020.
@article{Koppe2020,
title = {Psychiatric Illnesses as Disorders of Network Dynamics},
author = {Georgia Koppe and Quentin Huys and Daniel Durstewitz},
url = {https://www.biologicalpsychiatrycnni.org/article/S2451902220300197/abstract},
year = {2020},
date = {2020-01-16},
journal = {Biological Psychiatry},
abstract = {This review provides a dynamical systems perspective on mental illness. After a brief introduction to the theory of dynamical systems, we focus on the common assumption in theoretical and computational neuroscience that phenomena at subcellular, cellular, network, cognitive, and even societal levels could be described and explained in terms of dynamical systems theory. As such, dynamical systems theory may also provide a framework for understanding mental illnesses. The review examines a number of core dynamical systems phenomena and relates each of these to aspects of mental illnesses. This provides an outline of how a broad set of phenomena in serious and common mental illnesses and neurological conditions can be understood in dynamical systems terms. It suggests that the dynamical systems level may provide a central, hublike level of convergence that unifies and links multiple biophysical and behavioral phenomena in the sense that diverse biophysical changes can give rise to the same dynamical phenomena and, vice versa, similar changes in dynamics may yield different behavioral symptoms depending on the brain area where these changes manifest. We also briefly outline current methodological approaches for inferring dynamical systems from data such as electroencephalography, functional magnetic resonance imaging, or self-reports, and we discuss the implications of a dynamical view for the diagnosis, prognosis, and treatment of psychiatric conditions. We argue that a consideration of dynamics could play a potentially transformative role in the choice and target of interventions. },
keywords = {},
pubstate = {published},
tppubtype = {article}
}
This review provides a dynamical systems perspective on mental illness. After a brief introduction to the theory of dynamical systems, we focus on the common assumption in theoretical and computational neuroscience that phenomena at subcellular, cellular, network, cognitive, and even societal levels could be described and explained in terms of dynamical systems theory. As such, dynamical systems theory may also provide a framework for understanding mental illnesses. The review examines a number of core dynamical systems phenomena and relates each of these to aspects of mental illnesses. This provides an outline of how a broad set of phenomena in serious and common mental illnesses and neurological conditions can be understood in dynamical systems terms. It suggests that the dynamical systems level may provide a central, hublike level of convergence that unifies and links multiple biophysical and behavioral phenomena in the sense that diverse biophysical changes can give rise to the same dynamical phenomena and, vice versa, similar changes in dynamics may yield different behavioral symptoms depending on the brain area where these changes manifest. We also briefly outline current methodological approaches for inferring dynamical systems from data such as electroencephalography, functional magnetic resonance imaging, or self-reports, and we discuss the implications of a dynamical view for the diagnosis, prognosis, and treatment of psychiatric conditions. We argue that a consideration of dynamics could play a potentially transformative role in the choice and target of interventions.
2019
Schmidt, Dominik; Koppe, Georgia; Beutelspacher, Max; Durstewitz, Daniel
Inferring Dynamical Systems with Long-Range Dependencies through Line Attractor Regularization Inproceedings
2019.
@inproceedings{Schmidt2019,
title = {Inferring Dynamical Systems with Long-Range Dependencies through Line Attractor Regularization},
author = {Dominik Schmidt and Georgia Koppe and Max Beutelspacher and Daniel Durstewitz},
url = {http://arxiv.org/abs/1910.03471},
year = {2019},
date = {2019-10-01},
abstract = {Vanilla RNNs with ReLU activation have a simple structure that is amenable to systematic dynamical systems analysis and interpretation, but they suffer from the exploding vs. vanishing gradients problem. Recent attempts to retain this simplicity while alleviating the gradient problem are based on proper initialization schemes or orthogonality/unitary constraints on the RNN's recurrence matrix, which, however, comes with limitations to its expressive power with regards to dynamical systems phenomena like chaos or multi-stability. Here, we instead suggest a regularization scheme that pushes part of the RNN's latent subspace toward a line attractor configuration that enables long short-term memory and arbitrarily slow time scales. We show that our approach excels on a number of benchmarks like the sequential MNIST or multiplication problems, and enables reconstruction of dynamical systems which harbor widely different time scales.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Vanilla RNNs with ReLU activation have a simple structure that is amenable to systematic dynamical systems analysis and interpretation, but they suffer from the exploding vs. vanishing gradients problem. Recent attempts to retain this simplicity while alleviating the gradient problem are based on proper initialization schemes or orthogonality/unitary constraints on the RNN's recurrence matrix, which, however, comes with limitations to its expressive power with regards to dynamical systems phenomena like chaos or multi-stability. Here, we instead suggest a regularization scheme that pushes part of the RNN's latent subspace toward a line attractor configuration that enables long short-term memory and arbitrarily slow time scales. We show that our approach excels on a number of benchmarks like the sequential MNIST or multiplication problems, and enables reconstruction of dynamical systems which harbor widely different time scales.
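A minimal sketch of what such a regularization term can look like for a piecewise-linear RNN z' = A z + W relu(z) + h, pulling a subset of units toward the line-attractor configuration A = I, W = 0, h = 0 on that subspace; the precise weighting and parameter partition are assumptions based on the abstract.

import numpy as np

def line_attractor_penalty(A, W, h, n_reg, lam):
    # With diagonal A (as in PLRNNs), the first n_reg units form a line
    # attractor when their diagonal entries equal 1 and their rows of W
    # and entries of h vanish; the penalty pulls parameters toward that.
    return lam * (np.sum((np.diag(A)[:n_reg] - 1.0) ** 2)
                  + np.sum(W[:n_reg, :] ** 2)
                  + np.sum(h[:n_reg] ** 2))

# toy usage: regularize 2 of 6 latent units
rng = np.random.default_rng(0)
A = np.diag(rng.uniform(0.0, 1.0, 6))
W, h = rng.standard_normal((6, 6)), rng.standard_normal(6)
print(line_attractor_penalty(A, W, h, n_reg=2, lam=1.0))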
Oettl, Lars-Lennart; Scheller, Max; Wieland, Sebastian; Haag, Franziska; Wolf, David; Loeb, Cathrin; Ravi, Namasivayam; Durstewitz, Daniel; Shusterman, Roman; Russo, Eleonora; Kelsch, Wolfgang
Phasic dopamine enhances the distinct decoding and perceived salience of stimuli Journal Article
bioRxiv, 2019.
@article{Oettl2019,
title = {Phasic dopamine enhances the distinct decoding and perceived salience of stimuli},
author = {Lars-Lennart Oettl and Max Scheller and Sebastian Wieland and Franziska Haag and David Wolf and Cathrin Loeb and Namasivayam Ravi and Daniel Durstewitz and Roman Shusterman and Eleonora Russo and Wolfgang Kelsch},
url = {https://www.biorxiv.org/content/10.1101/771162v1},
doi = {10.1101/771162},
year = {2019},
date = {2019-09-18},
journal = {bioRxiv},
abstract = {Subjects learn to assign value to stimuli that predict outcomes. Novelty, rewards or punishment evoke reinforcing phasic dopamine release from midbrain neurons to ventral striatum that mediates expected value and salience of stimuli in humans and animals. It is however not clear whether phasic dopamine release is sufficient to form distinct engrams that encode salient stimuli within these circuits. We addressed this question in awake mice. Evoked phasic dopamine induced plasticity selectively to the population encoding of coincidently presented stimuli and increased their distinctness from other stimuli. Phasic dopamine thereby enhanced the decoding of previously paired stimuli and increased their perceived salience. This dopamine-induced plasticity mimicked population coding dynamics of conditioned stimuli during reinforcement learning. These findings provide a network coding mechanism of how dopaminergic learning signals promote value assignment to stimulus representations.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Subjects learn to assign value to stimuli that predict outcomes. Novelty, rewards or punishment evoke reinforcing phasic dopamine release from midbrain neurons to ventral striatum that mediates expected value and salience of stimuli in humans and animals. It is however not clear whether phasic dopamine release is sufficient to form distinct engrams that encode salient stimuli within these circuits. We addressed this question in awake mice. Evoked phasic dopamine induced plasticity selectively to the population encoding of coincidently presented stimuli and increased their distinctness from other stimuli. Phasic dopamine thereby enhanced the decoding of previously paired stimuli and increased their perceived salience. This dopamine-induced plasticity mimicked population coding dynamics of conditioned stimuli during reinforcement learning. These findings provide a network coding mechanism of how dopaminergic learning signals promote value assignment to stimulus representations.
Koppe, Georgia; Toutounji, Hazem; Kirsch, Peter; Lis, Stefanie; Durstewitz, Daniel
Identifying nonlinear dynamical systems via generative recurrent neural networks with applications to fMRI Journal Article
PLOS Computational Biology, 15 (8), pp. e1007263, 2019, ISSN: 1553-7358.
@article{Koppe2019,
title = {Identifying nonlinear dynamical systems via generative recurrent neural networks with applications to fMRI},
author = {Georgia Koppe and Hazem Toutounji and Peter Kirsch and Stefanie Lis and Daniel Durstewitz},
editor = {Leyla Isik},
url = {http://dx.plos.org/10.1371/journal.pcbi.1007263},
doi = {10.1371/journal.pcbi.1007263},
issn = {1553-7358},
year = {2019},
date = {2019-08-01},
journal = {PLOS Computational Biology},
volume = {15},
number = {8},
pages = {e1007263},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Braun, Urs; Harneit, Anais; Pergola, Giulio; Menara, Tommaso; Schaefer, Axel; Betzel, Richard F; Zang, Zhenxiang; Schweiger, Janina I; Schwarz, Kristina; Chen, Junfang; Blasi, Giuseppe; Bertolino, Alessandro; Durstewitz, Daniel; Pasqualetti, Fabio; Schwarz, Emanuel; Meyer-Lindenberg, Andreas; Bassett, Danielle S; Tost, Heike
Brain state stability during working memory is explained by network control theory, modulated by dopamine D1/D2 receptor function, and diminished in schizophrenia Journal Article
Arxiv Preprint, 2019.
@article{Braun2019,
title = {Brain state stability during working memory is explained by network control theory, modulated by dopamine D1/D2 receptor function, and diminished in schizophrenia},
author = {Urs Braun and Anais Harneit and Giulio Pergola and Tommaso Menara and Axel Schaefer and Richard F. Betzel and Zhenxiang Zang and Janina I. Schweiger and Kristina Schwarz and Junfang Chen and Giuseppe Blasi and Alessandro Bertolino and Daniel Durstewitz and Fabio Pasqualetti and Emanuel Schwarz and Andreas Meyer-Lindenberg and Danielle S. Bassett and Heike Tost},
url = {https://arxiv.org/ftp/arxiv/papers/1906/1906.09290.pdf},
doi = {arXiv:1906.09290},
year = {2019},
date = {2019-06-21},
journal = {Arxiv Preprint},
abstract = {Dynamical brain state transitions are critical for flexible working memory but the network mechanisms are incompletely understood. Here, we show that working memory entails brainwide switching between activity states. The stability of states relates to dopamine D1 receptor gene expression while state transitions are influenced by D2 receptor expression and pharmacological modulation. Schizophrenia patients show altered network control properties, including a more diverse energy landscape and decreased stability of working memory representations.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Dynamical brain state transitions are critical for flexible working memory but the network mechanisms are incompletely understood. Here, we show that working memory entails brainwide switching between activity states. The stability of states relates to dopamine D1 receptor gene expression while state transitions are influenced by D2 receptor expression and pharmacological modulation. Schizophrenia patients show altered network control properties, including a more diverse energy landscape and decreased stability of working memory representations.
Kirschbaum, Elke; Haußmann, Manuel; Wolf, Steffen; Sonntag, Hannah; Schneider, Justus; Elzoheiry, Shehabeldin; Kann, Oliver; Durstewitz, Daniel; Hamprecht, Fred A
LeMoNADe: Learned Motif and Neuronal Assembly Detection in calcium imaging videos Conference
ICLR. Proceedings, 2019.
@conference{Kirschbaum2019,
title = {LeMoNADe: Learned Motif and Neuronal Assembly Detection in calcium imaging videos},
author = {Elke Kirschbaum and Manuel Haußmann and Steffen Wolf and Hannah Sonntag and Justus Schneider and Shehabeldin Elzoheiry and Oliver Kann and Daniel Durstewitz and Fred A. Hamprecht},
url = {https://arxiv.org/abs/1806.09963},
year = {2019},
date = {2019-02-22},
publisher = {ICLR. Proceedings},
abstract = {Neuronal assemblies, loosely defined as subsets of neurons with reoccurring spatio-temporally coordinated activation patterns, or "motifs", are thought to be building blocks of neural representations and information processing. We here propose LeMoNADe, a new exploratory data analysis method that facilitates hunting for motifs in calcium imaging videos, the dominant microscopic functional imaging modality in neurophysiology. Our nonparametric method extracts motifs directly from videos, bypassing the difficult intermediate step of spike extraction. Our technique augments variational autoencoders with a discrete stochastic node, and we show in detail how a differentiable reparametrization and relaxation can be used. An evaluation on simulated data, with available ground truth, reveals excellent quantitative performance. In real video data acquired from brain slices, with no ground truth available, LeMoNADe uncovers nontrivial candidate motifs that can help generate hypotheses for more focused biological investigations.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
Neuronal assemblies, loosely defined as subsets of neurons with reoccurring spatio-temporally coordinated activation patterns, or "motifs", are thought to be building blocks of neural representations and information processing. We here propose LeMoNADe, a new exploratory data analysis method that facilitates hunting for motifs in calcium imaging videos, the dominant microscopic functional imaging modality in neurophysiology. Our nonparametric method extracts motifs directly from videos, bypassing the difficult intermediate step of spike extraction. Our technique augments variational autoencoders with a discrete stochastic node, and we show in detail how a differentiable reparametrization and relaxation can be used. An evaluation on simulated data, with available ground truth, reveals excellent quantitative performance. In real video data acquired from brain slices, with no ground truth available, LeMoNADe uncovers nontrivial candidate motifs that can help generate hypotheses for more focused biological investigations.
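The "discrete stochastic node" made trainable via a differentiable reparametrization is the methodological core here. The snippet below sketches the widely used Gumbel-softmax relaxation as one such construction; the paper's exact relaxation may differ in detail, so treat this as a generic illustration rather than LeMoNADe's implementation.

```python
import numpy as np

def gumbel_softmax(logits, tau=0.5, rng=np.random.default_rng()):
    """Relaxed, differentiable surrogate for sampling a categorical
    variable: perturb the logits with Gumbel(0, 1) noise and apply a
    temperature-controlled softmax. As tau -> 0 samples approach
    one-hot vectors; for tau > 0 gradients can flow through."""
    g = -np.log(-np.log(rng.uniform(size=np.shape(logits))))  # Gumbel(0,1)
    y = (np.asarray(logits) + g) / tau
    y = np.exp(y - y.max())  # numerically stable softmax
    return y / y.sum()

print(gumbel_softmax(np.log([0.7, 0.2, 0.1])))
```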
Koppe, Georgia; Guloksuz, Sinan; Reininghaus, Ulrich; Durstewitz, Daniel
Recurrent Neural Networks in Mobile Sampling and Intervention Journal Article
Schizophrenia Bulletin, 45 (2), pp. 272–276, 2019, ISSN: 1745-1701.
@article{Koppe2019b,
title = {Recurrent Neural Networks in Mobile Sampling and Intervention},
author = {Georgia Koppe and Sinan Guloksuz and Ulrich Reininghaus and Daniel Durstewitz},
doi = {10.1093/schbul/sby171},
issn = {1745-1701},
year = {2019},
date = {2019-01-01},
journal = {Schizophrenia Bulletin},
volume = {45},
number = {2},
pages = {272--276},
abstract = {The rapid rise and now widespread distribution of handheld and wearable devices, such as smartphones, fitness trackers, or smartwatches, has opened a new universe of possibilities for monitoring emotion and cognition in everyday-life context, and for applying experience- and context-specific interventions in psychosis. These devices are equipped with multiple sensors, recording channels, and app-based opportunities for assessment using experience sampling methodology (ESM), which enables to collect vast amounts of temporally highly resolved and ecologically valid personal data from various domains in daily life. In psychosis, this allows to elucidate intermediate and clinical phenotypes, psychological processes and mechanisms, and their interplay with socioenvironmental factors, as well as to evaluate the effects of treatments for psychosis on important clinical and social outcomes. Although these data offer immense opportunities, they also pose tremendous challenges for data analysis. These challenges include the sheer amount of time series data generated and the many different data modalities and their specific properties and sampling rates. After a brief review of studies and approaches to ESM and ecological momentary interventions in psychosis, we will discuss recurrent neural networks (RNNs) as a powerful statistical machine learning approach for time series analysis and prediction in this context. RNNs can be trained on multiple data modalities simultaneously to learn a dynamical model that could be used to forecast individual trajectories and schedule online feedback and intervention accordingly. Future research using this approach is likely going to offer new avenues to further our understanding and treatments of psychosis.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
The rapid rise and now widespread distribution of handheld and wearable devices, such as smartphones, fitness trackers, or smartwatches, has opened a new universe of possibilities for monitoring emotion and cognition in everyday-life context, and for applying experience- and context-specific interventions in psychosis. These devices are equipped with multiple sensors, recording channels, and app-based opportunities for assessment using experience sampling methodology (ESM), which enables to collect vast amounts of temporally highly resolved and ecologically valid personal data from various domains in daily life. In psychosis, this allows to elucidate intermediate and clinical phenotypes, psychological processes and mechanisms, and their interplay with socioenvironmental factors, as well as to evaluate the effects of treatments for psychosis on important clinical and social outcomes. Although these data offer immense opportunities, they also pose tremendous challenges for data analysis. These challenges include the sheer amount of time series data generated and the many different data modalities and their specific properties and sampling rates. After a brief review of studies and approaches to ESM and ecological momentary interventions in psychosis, we will discuss recurrent neural networks (RNNs) as a powerful statistical machine learning approach for time series analysis and prediction in this context. RNNs can be trained on multiple data modalities simultaneously to learn a dynamical model that could be used to forecast individual trajectories and schedule online feedback and intervention accordingly. Future research using this approach is likely going to offer new avenues to further our understanding and treatments of psychosis.
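To make the proposed use of RNNs concrete, here is a minimal, hypothetical many-to-one forecaster in PyTorch: a window of multimodal ESM features goes in, a prediction of the next symptom score comes out. The class name, feature layout, and dimensions are invented for illustration and are not from the article.

```python
import torch
import torch.nn as nn

class ESMForecaster(nn.Module):
    """Toy many-to-one RNN: a window of multimodal ESM features
    (self-ratings, sensor channels, ...) in, next symptom score out."""
    def __init__(self, n_features, n_hidden=16):
        super().__init__()
        self.rnn = nn.GRU(n_features, n_hidden, batch_first=True)
        self.readout = nn.Linear(n_hidden, 1)

    def forward(self, x):              # x: (batch, time, features)
        _, h = self.rnn(x)             # h: (layers, batch, hidden)
        return self.readout(h[-1])     # (batch, 1)

model = ESMForecaster(n_features=5)
x = torch.randn(8, 20, 5)              # 8 subjects, 20 time points, 5 channels
y = torch.zeros(8, 1)                  # placeholder targets
loss = nn.functional.mse_loss(model(x), y)
loss.backward()
```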
Durstewitz, Daniel; Koppe, Georgia; Meyer-Lindenberg, Andreas
Deep neural networks in psychiatry Journal Article
Molecular Psychiatry, 2019, ISSN: 1476-5578.
@article{Durstewitz2019,
title = {Deep neural networks in psychiatry},
author = {Daniel Durstewitz and Georgia Koppe and Andreas Meyer-Lindenberg},
url = {http://dx.doi.org/10.1038/s41380-019-0365-9},
doi = {10.1038/s41380-019-0365-9},
issn = {1476-5578},
year = {2019},
date = {2019-01-01},
journal = {Molecular Psychiatry},
publisher = {Springer US},
abstract = {Machine and deep learning methods, today's core of artificial intelligence, have been applied with increasing success and impact in many commercial and research settings. They are powerful tools for large scale data analysis, prediction and classification, especially in very data-rich environments (“big data”), and have started to find their way into medical applications. Here we will first give an overview of machine learning methods, with a focus on deep and recurrent neural networks, their relation to statistics, and the core principles behind them. We will then discuss and review directions along which (deep) neural networks can be, or already have been, applied in the context of psychiatry, and will try to delineate their future potential in this area. We will also comment on an emerging area that so far has been much less well explored: by embedding semantically interpretable computational models of brain dynamics or behavior into a statistical machine learning context, insights into dysfunction beyond mere prediction and classification may be gained. Especially this marriage of computational models with statistical inference may offer insights into neural and behavioral mechanisms that could open completely novel avenues for psychiatric treatment.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Machine and deep learning methods, today's core of artificial intelligence, have been applied with increasing success and impact in many commercial and research settings. They are powerful tools for large scale data analysis, prediction and classification, especially in very data-rich environments (“big data”), and have started to find their way into medical applications. Here we will first give an overview of machine learning methods, with a focus on deep and recurrent neural networks, their relation to statistics, and the core principles behind them. We will then discuss and review directions along which (deep) neural networks can be, or already have been, applied in the context of psychiatry, and will try to delineate their future potential in this area. We will also comment on an emerging area that so far has been much less well explored: by embedding semantically interpretable computational models of brain dynamics or behavior into a statistical machine learning context, insights into dysfunction beyond mere prediction and classification may be gained. Especially this marriage of computational models with statistical inference may offer insights into neural and behavioral mechanisms that could open completely novel avenues for psychiatric treatment.
2018
Toutounji, Hazem; Durstewitz, Daniel
Detecting Multiple Change Points Using Adaptive Regression Splines With Application to Neural Recordings Journal Article
Frontiers in Neuroinformatics, 12 (67), 2018.
@article{Toutounji2018,
title = {Detecting Multiple Change Points Using Adaptive Regression Splines With Application to Neural Recordings},
author = {Hazem Toutounji and Daniel Durstewitz},
url = {https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6187984/},
doi = {10.3389/fninf.2018.00067},
year = {2018},
date = {2018-10-04},
journal = {Frontiers in Neuroinformatics},
volume = {12},
number = {67},
abstract = {Time series, as frequently the case in neuroscience, are rarely stationary, but often exhibit abrupt changes due to attractor transitions or bifurcations in the dynamical systems producing them. A plethora of methods for detecting such change points in time series statistics have been developed over the years, in addition to test criteria to evaluate their significance. Issues to consider when developing change point analysis methods include computational demands, difficulties arising from either limited amount of data or a large number of covariates, and arriving at statistical tests with sufficient power to detect as many changes as contained in potentially high-dimensional time series. Here, a general method called Paired Adaptive Regressors for Cumulative Sum is developed for detecting multiple change points in the mean of multivariate time series. The method's advantages over alternative approaches are demonstrated through a series of simulation experiments. This is followed by a real data application to neural recordings from rat medial prefrontal cortex during learning. Finally, the method's flexibility to incorporate useful features from state-of-the-art change point detection techniques is discussed, along with potential drawbacks and suggestions to remedy them.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Time series, as frequently the case in neuroscience, are rarely stationary, but often exhibit abrupt changes due to attractor transitions or bifurcations in the dynamical systems producing them. A plethora of methods for detecting such change points in time series statistics have been developed over the years, in addition to test criteria to evaluate their significance. Issues to consider when developing change point analysis methods include computational demands, difficulties arising from either limited amount of data or a large number of covariates, and arriving at statistical tests with sufficient power to detect as many changes as contained in potentially high-dimensional time series. Here, a general method called Paired Adaptive Regressors for Cumulative Sum is developed for detecting multiple change points in the mean of multivariate time series. The method's advantages over alternative approaches are demonstrated through a series of simulation experiments. This is followed by a real data application to neural recordings from rat medial prefrontal cortex during learning. Finally, the method's flexibility to incorporate useful features from state-of-the-art change point detection techniques is discussed, along with potential drawbacks and suggestions to remedy them.
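The method itself (PARCS) fits paired adaptive regression splines to a CUSUM-transformed series to handle multiple change points; the sketch below shows only the elementary building block, a centered cumulative-sum statistic for a single mean change, with significance testing (e.g. by bootstrap) omitted.

```python
import numpy as np

def cusum_change_point(x):
    """Single change point in the mean, estimated as the arg-max of
    the centered cumulative sum - the statistic that CUSUM-based
    methods build their regressors and tests on."""
    s = np.cumsum(x - x.mean())
    return int(np.argmax(np.abs(s)))

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(1.5, 1.0, 100)])
print(cusum_change_point(x))  # should land near index 100
```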
Durstewitz, Daniel; Huys, Quentin J M; Koppe, Georgia
Psychiatric Illnesses as Disorders of Network Dynamics Journal Article
pp. 1–24, 2018.
@article{Durstewitza,
title = {Psychiatric Illnesses as Disorders of Network Dynamics},
author = {Daniel Durstewitz and Quentin J M Huys and Georgia Koppe},
url = {https://arxiv.org/pdf/1809.06303.pdf},
year = {2018},
date = {2018-09-18},
pages = {1--24},
abstract = {This review provides a dynamical systems perspective on psychiatric symptoms and disease, and discusses its potential implications for diagnosis, prognosis, and treatment. After a brief introduction into the theory of dynamical systems, we will focus on the idea that cognitive and emotional functions are implemented in terms of dynamical systems phenomena in the brain, a common assumption in theoretical and computational neuroscience. Specific computational models, anchored in biophysics, for generating different types of network dynamics, and with a relation to psychiatric symptoms, will be briefly reviewed, as well as methodological approaches for reconstructing the system dynamics from observed time series (like fMRI or EEG recordings). We then attempt to outline how psychiatric phenomena, associated with schizophrenia, depression, PTSD, ADHD, phantom pain, and others, could be understood in dynamical systems terms. Most importantly, we will try to convey that the dynamical systems level may provide a central, hub-like level of convergence which unifies and links multiple biophysical and behavioral phenomena, in the sense that diverse biophysical changes can give rise to the same dynamical phenomena and, vice versa, similar changes in dynamics may yield different behavioral symptoms depending on the brain area where these changes manifest. If this assessment is correct, it may have profound implications for the diagnosis, prognosis, and treatment of psychiatric conditions, as it puts the focus on dynamics. We therefore argue that consideration of dynamics should play an important role in the choice and target of interventions.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
This review provides a dynamical systems perspective on psychiatric symptoms and disease, and discusses its potential implications for diagnosis, prognosis, and treatment. After a brief introduction into the theory of dynamical systems, we will focus on the idea that cognitive and emotional functions are implemented in terms of dynamical systems phenomena in the brain, a common assumption in theoretical and computational neuroscience. Specific computational models, anchored in biophysics, for generating different types of network dynamics, and with a relation to psychiatric symptoms, will be briefly reviewed, as well as methodological approaches for reconstructing the system dynamics from observed time series (like fMRI or EEG recordings). We then attempt to outline how psychiatric phenomena, associated with schizophrenia, depression, PTSD, ADHD, phantom pain, and others, could be understood in dynamical systems terms. Most importantly, we will try to convey that the dynamical systems level may provide a central, hub-like level of convergence which unifies and links multiple biophysical and behavioral phenomena, in the sense that diverse biophysical changes can give rise to the same dynamical phenomena and, vice versa, similar changes in dynamics may yield different behavioral symptoms depending on the brain area where these changes manifest. If this assessment is correct, it may have profound implications for the diagnosis, prognosis, and treatment of psychiatric conditions, as it puts the focus on dynamics. We therefore argue that consideration of dynamics should play an important role in the choice and target of interventions.
Oboti, Livio; Russo, Eleonora; Tran, Tuyen; Durstewitz, Daniel; Corbin, Joshua G
Amygdala Corticofugal Input Shapes Mitral Cell Responses in the Accessory Olfactory Bulb Journal Article
eNeuro, 2018.
@article{Oboti2018,
title = {Amygdala Corticofugal Input Shapes Mitral Cell Responses in the Accessory Olfactory Bulb},
author = {Livio Oboti and Eleonora Russo and Tuyen Tran and Daniel Durstewitz and Joshua G. Corbin},
doi = {10.1523/ENEURO.0175-18.2018},
year = {2018},
date = {2018-05-18},
journal = {eNeuro},
abstract = {Interconnections between the olfactory bulb and the amygdala are a major pathway for triggering strong behavioral responses to a variety of odorants. However, while this broad mapping has been established, the patterns of amygdala feedback connectivity and the influence on olfactory circuitry remain unknown. Here, using a combination of neuronal tracing approaches, we dissect the connectivity of a cortical amygdala [posteromedial cortical nucleus (PmCo)] feedback circuit innervating the mouse accessory olfactory bulb. Optogenetic activation of PmCo feedback mainly results in feedforward mitral cell (MC) inhibition through direct excitation of GABAergic granule cells. In addition, LED-driven activity of corticofugal afferents increases the gain of MC responses to olfactory nerve stimulation. Thus, through corticofugal pathways, the PmCo likely regulates primary olfactory and social odor processing.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Interconnections between the olfactory bulb and the amygdala are a major pathway for triggering strong behavioral responses to a variety of odorants. However, while this broad mapping has been established, the patterns of amygdala feedback connectivity and the influence on olfactory circuitry remain unknown. Here, using a combination of neuronal tracing approaches, we dissect the connectivity of a cortical amygdala [posteromedial cortical nucleus (PmCo)] feedback circuit innervating the mouse accessory olfactory bulb. Optogenetic activation of PmCo feedback mainly results in feedforward mitral cell (MC) inhibition through direct excitation of GABAergic granule cells. In addition, LED-driven activity of corticofugal afferents increases the gain of MC responses to olfactory nerve stimulation. Thus, through corticofugal pathways, the PmCo likely regulates primary olfactory and social odor processing.
Koppe, Georgia; Toutounji, Hazem; Kirsch, Peter; Lis, Stefanie; Durstewitz, Daniel
Identifying nonlinear dynamical systems via generative recurrent neural networks with applications to fMRI Journal Article
Arxiv Preprint, 2018.
@article{Koppe2018,
title = {Identifying nonlinear dynamical systems via generative recurrent neural networks with applications to fMRI},
author = {Georgia Koppe and Hazem Toutounji and Peter Kirsch and Stefanie Lis and Daniel Durstewitz},
url = {https://arxiv.org/ftp/arxiv/papers/1902/1902.07186.pdf},
year = {2018},
date = {2018-01-01},
journal = {Arxiv Preprint},
abstract = {A major tenet in theoretical neuroscience is that cognitive and behavioral processes are ultimately implemented in terms of the neural system dynamics. Accordingly, a major aim for the analysis of neurophysiological measurements should lie in the identification of the computational dynamics underlying task processing. Here we advance a state space model (SSM) based on generative piecewise-linear recurrent neural networks (PLRNN) to assess dynamics from neuroimaging data. In contrast to many other nonlinear time series models which have been proposed for reconstructing latent dynamics, our model is easily interpretable in neural terms, amenable to systematic dynamical systems analysis of the resulting set of equations, and can straightforwardly be transformed into an equivalent continuous-time dynamical system. The major contributions of this paper are the introduction of a new observation model suitable for functional magnetic resonance imaging (fMRI)},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
A major tenet in theoretical neuroscience is that cognitive and behavioral processes are ultimately implemented in terms of the neural system dynamics. Accordingly, a major aim for the analysis of neurophysiological measurements should lie in the identification of the computational dynamics underlying task processing. Here we advance a state space model (SSM) based on generative piecewise-linear recurrent neural networks (PLRNN) to assess dynamics from neuroimaging data. In contrast to many other nonlinear time series models which have been proposed for reconstructing latent dynamics, our model is easily interpretable in neural terms, amenable to systematic dynamical systems analysis of the resulting set of equations, and can straightforwardly be transformed into an equivalent continuous-time dynamical system. The major contributions of this paper are the introduction of a new observation model suitable for functional magnetic resonance imaging (fMRI)
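For orientation, here is a minimal simulation of the latent dynamics described above, assuming the PLRNN form z_t = A z_{t-1} + W max(z_{t-1}, 0) + h + ε_t with A diagonal and W off-diagonal, and a plain linear-Gaussian readout; the noise scales are arbitrary, and the paper's fMRI observation model would additionally pass the latent states through a hemodynamic response function.

```python
import numpy as np

def simulate_plrnn(A, W, h, B, T, rng=np.random.default_rng()):
    """Generative pass of a piecewise-linear RNN:
        z_t = A z_{t-1} + W max(z_{t-1}, 0) + h + eps_t,
        x_t = B z_t + eta_t."""
    M, N = A.shape[0], B.shape[0]
    z = np.zeros((T, M))
    x = np.zeros((T, N))
    for t in range(1, T):
        z[t] = (A @ z[t - 1] + W @ np.maximum(z[t - 1], 0.0) + h
                + 0.01 * rng.standard_normal(M))
        x[t] = B @ z[t] + 0.05 * rng.standard_normal(N)
    return z, x

M = 3
z, x = simulate_plrnn(np.diag([0.9, 0.8, 0.7]),
                      0.1 * (1 - np.eye(M)), np.zeros(M),
                      np.eye(M), T=200)
```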
2017
Durstewitz, Daniel
Advanced Data Analysis in Neuroscience Book
2017, ISBN: 9783319599748.
@book{Durstewitzb,
title = {Advanced Data Analysis in Neuroscience},
author = {Daniel Durstewitz},
url = {https://link.springer.com/content/pdf/10.1007%2F978-3-319-59976-2.pdf},
isbn = {9783319599748},
year = {2017},
date = {2017-11-01},
keywords = {},
pubstate = {published},
tppubtype = {book}
}
Koppe, Georgia; Mallien, Anne Stephanie; Berger, Stefan; Bartsch, Dusan; Gass, Peter; Vollmayr, Barbara; Durstewitz, Daniel
CACNA1C gene regulates behavioral strategies in operant rule learning Journal Article
PLOS Biology, 15 (6), 2017.
@article{Koppe2017,
title = {CACNA1C gene regulates behavioral strategies in operant rule learning},
author = {Georgia Koppe and Anne Stephanie Mallien and Stefan Berger and Dusan Bartsch and Peter Gass and Barbara Vollmayr and Daniel Durstewitz},
url = {https://doi.org/10.1371/journal.pbio.2000936},
doi = {10.1371/journal.pbio.2000936},
year = {2017},
date = {2017-06-12},
journal = {PLOS Biology},
volume = {15},
number = {6},
abstract = {Behavioral experiments are usually designed to tap into a specific cognitive function, but animals may solve a given task through a variety of different and individual behavioral strategies, some of them not foreseen by the experimenter. Animal learning may therefore be seen more as the process of selecting among, and adapting, potential behavioral policies, rather than mere strengthening of associative links. Calcium influx through high-voltage-gated Ca2+ channels is central to synaptic plasticity, and altered expression of Cav1.2 channels and the CACNA1C gene have been associated with severe learning deficits and psychiatric disorders. Given this, we were interested in how specifically a selective functional ablation of the Cacna1c gene would modulate the learning process. Using a detailed, individual-level analysis of learning on an operant cue discrimination task in terms of behavioral strategies, combined with Bayesian selection among computational models estimated from the empirical data, we show that a Cacna1c knockout does not impair learning in general but has a much more specific effect: the majority of Cacna1c knockout mice still managed to increase reward feedback across trials but did so by adapting an outcome-based strategy, while the majority of matched controls adopted the experimentally intended cue-association rule. Our results thus point to a quite specific role of a single gene in learning and highlight that much more mechanistic insight could be gained by examining response patterns in terms of a larger repertoire of potential behavioral strategies. The results may also have clinical implications for treating psychiatric disorders.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Behavioral experiments are usually designed to tap into a specific cognitive function, but animals may solve a given task through a variety of different and individual behavioral strategies, some of them not foreseen by the experimenter. Animal learning may therefore be seen more as the process of selecting among, and adapting, potential behavioral policies, rather than mere strengthening of associative links. Calcium influx through high-voltage-gated Ca2+ channels is central to synaptic plasticity, and altered expression of Cav1.2 channels and the CACNA1C gene have been associated with severe learning deficits and psychiatric disorders. Given this, we were interested in how specifically a selective functional ablation of the Cacna1c gene would modulate the learning process. Using a detailed, individual-level analysis of learning on an operant cue discrimination task in terms of behavioral strategies, combined with Bayesian selection among computational models estimated from the empirical data, we show that a Cacna1c knockout does not impair learning in general but has a much more specific effect: the majority of Cacna1c knockout mice still managed to increase reward feedback across trials but did so by adapting an outcome-based strategy, while the majority of matched controls adopted the experimentally intended cue-association rule. Our results thus point to a quite specific role of a single gene in learning and highlight that much more mechanistic insight could be gained by examining response patterns in terms of a larger repertoire of potential behavioral strategies. The results may also have clinical implications for treating psychiatric disorders.
Russo, Eleonora; Durstewitz, Daniel
Cell assemblies at multiple time scales with arbitrary lag constellations Journal Article
eLife, 2017.
@article{Russo2017,
title = {Cell assemblies at multiple time scales with arbitrary lag constellations},
author = {Eleonora Russo and Daniel Durstewitz},
url = {https://elifesciences.org/articles/19428},
doi = {10.7554/eLife.19428},
year = {2017},
date = {2017-01-11},
journal = {eLife},
abstract = {Hebb's idea of a cell assembly as the fundamental unit of neural information processing has dominated neuroscience like no other theoretical concept within the past 60 years. A range of different physiological phenomena, from precisely synchronized spiking to broadly simultaneous rate increases, has been subsumed under this term. Yet progress in this area is hampered by the lack of statistical tools that would enable to extract assemblies with arbitrary constellations of time lags, and at multiple temporal scales, partly due to the severe computational burden. Here we present such a unifying methodological and conceptual framework which detects assembly structure at many different time scales, levels of precision, and with arbitrary internal organization. Applying this methodology to multiple single unit recordings from various cortical areas, we find that there is no universal cortical coding scheme, but that assembly structure and precision significantly depends on the brain area recorded and ongoing task demands.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Hebb's idea of a cell assembly as the fundamental unit of neural information processing has dominated neuroscience like no other theoretical concept within the past 60 years. A range of different physiological phenomena, from precisely synchronized spiking to broadly simultaneous rate increases, has been subsumed under this term. Yet progress in this area is hampered by the lack of statistical tools that would enable to extract assemblies with arbitrary constellations of time lags, and at multiple temporal scales, partly due to the severe computational burden. Here we present such a unifying methodological and conceptual framework which detects assembly structure at many different time scales, levels of precision, and with arbitrary internal organization. Applying this methodology to multiple single unit recordings from various cortical areas, we find that there is no universal cortical coding scheme, but that assembly structure and precision significantly depends on the brain area recorded and ongoing task demands.
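The statistical machinery in the paper is considerably more refined, but its elementary ingredient is testing pairwise spike coincidences at a given lag against a null model. Below is a crude sketch under a binomial independence null (an assumption made here for brevity, not the paper's actual test or its agglomerative assembly construction).

```python
import numpy as np
from scipy.stats import binom

def lagged_coincidences(a, b, lag):
    """Count how often unit b spikes exactly `lag` bins after unit a,
    and compare against a crude binomial null that assumes both
    units are independent and stationary."""
    n = len(a) - lag
    count = int(np.sum(a[:n] * b[lag:lag + n]))
    p_null = a[:n].mean() * b.mean()
    p_val = binom.sf(count - 1, n, p_null)
    return count, p_val

rng = np.random.default_rng(2)
a = (rng.random(5000) < 0.05).astype(int)
b = np.maximum(np.roll(a, 3), (rng.random(5000) < 0.02).astype(int))
print(lagged_coincidences(a, b, lag=3))  # strong lag-3 coupling
```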
Durstewitz, Daniel
A State Space Approach for Piecewise-Linear Recurrent Neural Networks for Reconstructing Nonlinear Dynamics from Neural Measurements Journal Article
PLoS Computational Biology, 13 (6), pp. e1005542, 2017, ISSN: 1553-7358.
@article{Durstewitz2017,
title = {A State Space Approach for Piecewise‐Linear Recurrent Neural Networks for Reconstructing Nonlinear Dynamics from Neural Measurements},
author = {Daniel Durstewitz},
doi = {10.1371/journal.pcbi.1005542},
issn = {1553-7358},
year = {2017},
date = {2017-01-01},
journal = {PLoS Computational Biology},
volume = {13},
number = {6},
pages = {e1005542},
abstract = {The computational and cognitive properties of neural systems are often thought to be implemented in terms of their (stochastic) network dynamics. Hence, recovering the system dynamics from experimentally observed neuronal time series, like multiple single-unit recordings or neuroimaging data, is an important step toward understanding its computations. Ideally, one would not only seek a (lower-dimensional) state space representation of the dynamics, but would wish to have access to its statistical properties and their generative equations for in-depth analysis. Recurrent neural networks (RNNs) are a computationally powerful and dynamically universal formal framework which has been extensively studied from both the computational and the dynamical systems perspective. Here we develop a semi-analytical maximum-likelihood estimation scheme for piecewise-linear RNNs (PLRNNs) within the statistical framework of state space models, which accounts for noise in both the underlying latent dynamics and the observation process. The Expectation-Maximization algorithm is used to infer the latent state distribution, through a global Laplace approximation, and the PLRNN parameters iteratively. After validating the procedure on toy examples, and using inference through particle filters for comparison, the approach is applied to multiple single-unit recordings from the rodent anterior cingulate cortex (ACC) obtained during performance of a classical working memory task, delayed alternation. Models estimated from kernel-smoothed spike time data were able to capture the essential computational dynamics underlying task performance, including stimulus-selective delay activity. The estimated models were rarely multi-stable, however, but rather were tuned to exhibit slow dynamics in the vicinity of a bifurcation point. In summary, the present work advances a semi-analytical (thus reasonably fast) maximum-likelihood estimation framework for PLRNNs that may enable to recover relevant aspects of the nonlinear dynamics underlying observed neuronal time series, and directly link these to computational properties. Neuronal dynamics mediate between the physiological and anatomical properties of a neural system and the computations it performs, in fact may be seen as the 'computational language' of the brain. It is therefore of great interest to recover from experimentally recorded time series, like multiple single-unit or neuroimaging data, the underlying stochastic network dynamics and, ideally, even equations governing their statistical evolution. This is not at all a trivial enterprise, however, since neural systems are very high-dimensional, come with considerable levels of intrinsic (process) noise, are usually only partially observable, and these observations may be further corrupted by noise from measurement and preprocessing steps. The present article embeds piecewise-linear recurrent neural networks (PLRNNs) within a state space approach, a statistical estimation framework that deals with both process and observation noise. PLRNNs are computationally and dynamically powerful nonlinear systems. Their statistically principled estimation from multivariate neuronal time series thus may provide access to some essential features of the neuronal dynamics, like attractor states, generative equations, and their computational implications. The approach is exemplified on multiple single-unit recordings from the rat prefrontal cortex during working memory.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
The computational and cognitive properties of neural systems are often thought to be implemented in terms of their (stochastic) network dynamics. Hence, recovering the system dynamics from experimentally observed neuronal time series, like multiple single-unit recordings or neuroimaging data, is an important step toward understanding its computations. Ideally, one would not only seek a (lower-dimensional) state space representation of the dynamics, but would wish to have access to its statistical properties and their generative equations for in-depth analysis. Recurrent neural networks (RNNs) are a computationally powerful and dynamically universal formal framework which has been extensively studied from both the computational and the dynamical systems perspective. Here we develop a semi-analytical maximum-likelihood estimation scheme for piecewise-linear RNNs (PLRNNs) within the statistical framework of state space models, which accounts for noise in both the underlying latent dynamics and the observation process. The Expectation-Maximization algorithm is used to infer the latent state distribution, through a global Laplace approximation, and the PLRNN parameters iteratively. After validating the procedure on toy examples, and using inference through particle filters for comparison, the approach is applied to multiple single-unit recordings from the rodent anterior cingulate cortex (ACC) obtained during performance of a classical working memory task, delayed alternation. Models estimated from kernel-smoothed spike time data were able to capture the essential computational dynamics underlying task performance, including stimulus-selective delay activity. The estimated models were rarely multi-stable, however, but rather were tuned to exhibit slow dynamics in the vicinity of a bifurcation point. In summary, the present work advances a semi-analytical (thus reasonably fast) maximum-likelihood estimation framework for PLRNNs that may enable to recover relevant aspects of the nonlinear dynamics underlying observed neuronal time series, and directly link these to computational properties. Neuronal dynamics mediate between the physiological and anatomical properties of a neural system and the computations it performs, in fact may be seen as the 'computational language' of the brain. It is therefore of great interest to recover from experimentally recorded time series, like multiple single-unit or neuroimaging data, the underlying stochastic network dynamics and, ideally, even equations governing their statistical evolution. This is not at all a trivial enterprise, however, since neural systems are very high-dimensional, come with considerable levels of intrinsic (process) noise, are usually only partially observable, and these observations may be further corrupted by noise from measurement and preprocessing steps. The present article embeds piecewise-linear recurrent neural networks (PLRNNs) within a state space approach, a statistical estimation framework that deals with both process and observation noise. PLRNNs are computationally and dynamically powerful nonlinear systems. Their statistically principled estimation from multivariate neuronal time series thus may provide access to some essential features of the neuronal dynamics, like attractor states, generative equations, and their computational implications. The approach is exemplified on multiple single-unit recordings from the rat prefrontal cortex during working memory.
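A property that makes PLRNNs amenable to such in-depth analysis is that fixed points can be obtained region by region: within one linear region the map is affine, so the candidate fixed point solves a linear system. The sketch below illustrates just this step; searching over the exponentially many regions (the hard part, addressed heuristically in the NeurIPS 2023 entry above) is omitted.

```python
import numpy as np

def region_fixed_point(A, W, h, D):
    """Candidate fixed point of z = A z + W max(z, 0) + h inside the
    linear region where exactly the units flagged in the boolean
    mask D are active: there max(z, 0) = D z, so the candidate
    solves (I - A - W D) z* = h. It is a genuine fixed point only
    if it actually lies in that region (sign-consistency check)."""
    M = len(h)
    Dm = np.diag(D.astype(float))
    z = np.linalg.solve(np.eye(M) - A - W @ Dm, h)
    return z, bool(np.all((z > 0) == D))

A = np.diag([0.5, 0.5])
W = np.array([[0.0, 0.3], [-0.2, 0.0]])
h = np.array([0.1, 0.2])
for mask in ([False, False], [True, False], [False, True], [True, True]):
    print(mask, *region_fixed_point(A, W, h, np.array(mask)))
```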
Peter, Sven; Kirschbaum, Elke; Both, Martin; Campbell, Lee; Harvey, Brandon; Heins, Conor; Durstewitz, Daniel; Diego, Ferran; Hamprecht, Fred A
Sparse convolutional coding for neuronal assembly detection Journal Article
Advances in Neural Information Processing Systems, 30, pp. 3675–3685, 2017.
@article{Peter2017,
title = {Sparse convolutional coding for neuronal assembly detection},
author = {Peter, Sven and Kirschbaum, Elke and Both, Martin and Campbell, Lee and Harvey, Brandon and Heins, Conor and Durstewitz, Daniel and Diego, Ferran and Hamprecht, Fred A},
editor = {I. Guyon and U. V. Luxburg and S. Bengio and H. Wallach and R. Fergus and S. Vishwanathan and R. Garnett},
url = {http://papers.nips.cc/paper/6958-sparse-convolutional-coding-for-neuronal-assembly-detection.pdf},
year = {2017},
date = {2017-01-01},
journal = {Advances in Neural Information Processing Systems},
volume = {30},
pages = {3675--3685},
abstract = {Cell assemblies, originally proposed by Donald Hebb (1949), are subsets of neurons firing in a temporally coordinated way that gives rise to repeated motifs supposed to underly neural representations and information processing. Although Hebb's original proposal dates back many decades, the detection of assemblies and their role in coding is still an open and current research topic, partly because simultaneous recordings from large populations of neurons became feasible only relatively recently. Most current and easy-to-apply computational techniques focus on the identification of strictly synchronously spiking neurons. In this paper we propose a new algorithm, based on sparse convolutional coding, for detecting recurrent motifs of arbitrary structure up to a given length. Testing of our algorithm on synthetically generated datasets shows that it outperforms established methods and accurately identifies the temporal structure of embedded assemblies, even when these contain overlapping neurons or when strong background noise is present. Moreover, exploratory analysis of experimental datasets from hippocampal slices and cortical neuron cultures have provided promising results.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Cell assemblies, originally proposed by Donald Hebb (1949), are subsets of neurons firing in a temporally coordinated way that gives rise to repeated motifs supposed to underly neural representations and information processing. Although Hebb's original proposal dates back many decades, the detection of assemblies and their role in coding is still an open and current research topic, partly because simultaneous recordings from large populations of neurons became feasible only relatively recently. Most current and easy-to-apply computational techniques focus on the identification of strictly synchronously spiking neurons. In this paper we propose a new algorithm, based on sparse convolutional coding, for detecting recurrent motifs of arbitrary structure up to a given length. Testing of our algorithm on synthetically generated datasets shows that it outperforms established methods and accurately identifies the temporal structure of embedded assemblies, even when these contain overlapping neurons or when strong background noise is present. Moreover, exploratory analysis of experimental datasets from hippocampal slices and cortical neuron cultures have provided promising results.
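The generative model behind sparse convolutional coding is easy to state: the raster is a sum of motifs convolved in time with sparse activation signals. The sketch below implements only this forward model; the alternating sparse-coding updates used for learning are omitted, and all sizes are toy assumptions.

```python
import numpy as np

def reconstruct(motifs, activations):
    """Reconstruct a neurons x time raster as the sum over assemblies
    of each motif (neurons x lags) convolved in time with its sparse
    activation signal."""
    n_neurons, n_lags = motifs[0].shape
    T = len(activations[0])
    X = np.zeros((n_neurons, T + n_lags - 1))
    for motif, s in zip(motifs, activations):
        for t in np.flatnonzero(s):
            X[:, t:t + n_lags] += s[t] * motif
    return X

rng = np.random.default_rng(4)
motif = (rng.random((5, 4)) < 0.4).astype(float)   # one 5-neuron, 4-bin motif
s = np.zeros(100)
s[[10, 40, 77]] = 1.0                              # sparse occurrences
print(reconstruct([motif], [s]).shape)             # (5, 103)
```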
2016
Ma, Liya; Hyman, James M; Durstewitz, Daniel; Phillips, Anthony G; Seamans, Jeremy K
A Quantitative Analysis of Context-Dependent Remapping of Medial Frontal Cortex Neurons and Ensembles Journal Article
Journal of Neuroscience, 36 (31), 2016.
@article{Ma2016,
title = {A Quantitative Analysis of Context-Dependent Remapping of Medial Frontal Cortex Neurons and Ensembles},
author = {Liya Ma and James M. Hyman and Daniel Durstewitz and Anthony G. Phillips and Jeremy K. Seamans},
doi = {10.1523/JNEUROSCI.3176-15.2016},
year = {2016},
date = {2016-08-03},
journal = {Journal of Neuroscience},
volume = {36},
number = {31},
abstract = {The frontal cortex has been implicated in a number of cognitive and motivational processes, but understanding how individual neurons contribute to these processes is particularly challenging as they respond to a broad array of events (multiplexing) in a manner that can be dynamically modulated by the task context, i.e., adaptive coding (Duncan, 2001). Fundamental questions remain, such as how the flexibility gained through these mechanisms is balanced by the need for consistency and how the ensembles of neurons are coherently shaped by task demands. In the present study, ensembles of medial frontal cortex neurons were recorded from rats trained to perform three different operant actions either in two different sequences or two different physical environments. Single neurons exhibited diverse mixtures of responsivity to each of the three actions and these mixtures were abruptly altered by context/sequence switches. Remarkably, the overall responsivity of the population remained highly consistent both within and between context/sequences because the gains versus losses were tightly balanced across neurons and across the three actions. These data are consistent with a reallocation mixture model in which individual neurons express unique mixtures of selectivity for different actions that become reallocated as task conditions change. However, because the allocations and reallocations are so well balanced across neurons, the population maintains a low but highly consistent response to all actions. The frontal cortex may therefore balance consistency with flexibility by having ensembles respond in a fixed way to task-relevant actions while abruptly reconfiguring single neurons to encode “actions in context.”},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
The frontal cortex has been implicated in a number of cognitive and motivational processes, but understanding how individual neurons contribute to these processes is particularly challenging as they respond to a broad array of events (multiplexing) in a manner that can be dynamically modulated by the task context, i.e., adaptive coding (Duncan, 2001). Fundamental questions remain, such as how the flexibility gained through these mechanisms is balanced by the need for consistency and how the ensembles of neurons are coherently shaped by task demands. In the present study, ensembles of medial frontal cortex neurons were recorded from rats trained to perform three different operant actions either in two different sequences or two different physical environments. Single neurons exhibited diverse mixtures of responsivity to each of the three actions and these mixtures were abruptly altered by context/sequence switches. Remarkably, the overall responsivity of the population remained highly consistent both within and between context/sequences because the gains versus losses were tightly balanced across neurons and across the three actions. These data are consistent with a reallocation mixture model in which individual neurons express unique mixtures of selectivity for different actions that become reallocated as task conditions change. However, because the allocations and reallocations are so well balanced across neurons, the population maintains a low but highly consistent response to all actions. The frontal cortex may therefore balance consistency with flexibility by having ensembles respond in a fixed way to task-relevant actions while abruptly reconfiguring single neurons to encode “actions in context.”
Hass, Joachim; Durstewitz, Daniel
Time at the center, or time at the side? Assessing current models of time perception. Journal Article
Current Opinion in Behavioral Sciences, 8, pp. 238–244, 2016.
@article{Hass2016b,
title = {Time at the center, or time at the side? Assessing current models of time perception.},
author = {Joachim Hass and Daniel Durstewitz},
url = {https://www.sciencedirect.com/science/article/pii/S2352154616300535},
doi = {https://doi.org/10.1016/j.cobeha.2016.02.030},
year = {2016},
date = {2016-04-01},
journal = {Current Opinion in Behavioral Sciences},
volume = {8},
pages = {238--244},
abstract = {The ability to tell time is a crucial requirement for almost everything we do, but the neural mechanisms of time perception are still largely unknown. One way to approach these mechanisms is through computational modeling. This review provides an overview of the most prominent timing models, experimental evidence in their support, and formal ways for understanding the relationship between mechanisms of time perception and the scaling behavior of time estimation errors. Theories that interpret timing as a byproduct of other computational processes are also discussed. We suggest that there may be in fact a multitude of timing mechanisms in operation, anchored within area-specific computations, and tailored to different sensory-behavioral requirements. These ultimately have to be integrated into a common frame (a ‘temporal hub’) for the purpose of decision making. This common frame may support Bayesian integration and generalization across sensory modalities.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
The ability to tell time is a crucial requirement for almost everything we do, but the neural mechanisms of time perception are still largely unknown. One way to approach these mechanisms is through computational modeling. This review provides an overview of the most prominent timing models, experimental evidence in their support, and formal ways for understanding the relationship between mechanisms of time perception and the scaling behavior of time estimation errors. Theories that interpret timing as a byproduct of other computational processes are also discussed. We suggest that there may be in fact a multitude of timing mechanisms in operation, anchored within area-specific computations, and tailored to different sensory-behavioral requirements. These ultimately have to be integrated into a common frame (a ‘temporal hub’) for the purpose of decision making. This common frame may support Bayesian integration and generalization across sensory modalities.
Durstewitz, Daniel; Koppe, Georgia; Toutounji, Hazem
Computational models as statistical tools Journal Article
Current Opinion in Behavioral Sciences, 11, pp. 93–99, 2016, ISSN: 2352-1546.
@article{Durstewitz2016,
title = {Computational models as statistical tools},
author = {Daniel Durstewitz and Georgia Koppe and Hazem Toutounji},
url = {http://dx.doi.org/10.1016/j.cobeha.2016.07.004},
doi = {10.1016/j.cobeha.2016.07.004},
issn = {2352-1546},
year = {2016},
date = {2016-01-01},
journal = {Current Opinion in Behavioral Sciences},
volume = {11},
pages = {93--99},
publisher = {Elsevier Ltd},
abstract = {Traditionally, models in statistics are relatively simple “general purpose” quantitative inference tools, while models in computational neuroscience aim more at mechanistically explaining specific observations. Research on methods for inferring behavioral and neural models from data, however, has shown that a lot could be gained by merging these approaches, augmenting computational models with distributional assumptions. This enables estimation of parameters of such models in a principled way, comes with confidence regions that quantify uncertainty in estimates, and allows for quantitative assessment of prediction quality of computational models and tests of specific hypotheses about underlying mechanisms. Thus, unlike in conventional statistics, inferences about the latent dynamical mechanisms that generated the observed data can be drawn. Future directions and challenges of this approach are discussed.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Traditionally, models in statistics are relatively simple “general purpose” quantitative inference tools, while models in computational neuroscience aim more at mechanistically explaining specific observations. Research on methods for inferring behavioral and neural models from data, however, has shown that a lot could be gained by merging these approaches, augmenting computational models with distributional assumptions. This enables estimation of parameters of such models in a principled way, comes with confidence regions that quantify uncertainty in estimates, and allows for quantitative assessment of prediction quality of computational models and tests of specific hypotheses about underlying mechanisms. Thus, unlike in conventional statistics, inferences about the latent dynamical mechanisms that generated the observed data can be drawn. Future directions and challenges of this approach are discussed.
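As a toy illustration of the article's argument (not a model from the paper): wrapping a simple two-armed Q-learning rule in a softmax choice likelihood turns it into a statistical model whose parameters can be estimated by maximum likelihood, with confidence regions available from the likelihood curvature.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(params, choices, rewards):
    """Negative log-likelihood of a two-armed Q-learning model with
    learning rate alpha and softmax inverse temperature beta - a
    mechanistic model wrapped in distributional assumptions so that
    it can be estimated like a statistical model."""
    alpha, beta = params
    q = np.zeros(2)
    ll = 0.0
    for c, r in zip(choices, rewards):
        p = np.exp(beta * q - np.max(beta * q))  # stable softmax
        p /= p.sum()
        ll += np.log(p[c] + 1e-12)
        q[c] += alpha * (r - q[c])               # delta-rule update
    return -ll

rng = np.random.default_rng(3)
choices = rng.integers(0, 2, 200)                # toy choice data
rewards = (rng.random(200) < np.where(choices == 1, 0.7, 0.3)).astype(float)
fit = minimize(neg_log_lik, x0=[0.3, 2.0], args=(choices, rewards),
               bounds=[(0.01, 1.0), (0.1, 20.0)])
print(fit.x)  # point estimates for (alpha, beta)
```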
A Detailed Data-Driven Network Model of Prefrontal Cortex Reproduces Key Features of In Vivo Activity Journal Article
PLoS Computational Biology, 12 (5), pp. 1–29, 2016, ISSN: 15537358.
@article{Hass2016,
title = {A Detailed Data-Driven Network Model of Prefrontal Cortex Reproduces Key Features of In Vivo Activity},
author = {Joachim Hass and Loreen Hertäg and Daniel Durstewitz},
doi = {10.1371/journal.pcbi.1004930},
issn = {15537358},
year = {2016},
date = {2016-01-01},
journal = {PLoS Computational Biology},
volume = {12},
number = {5},
pages = {1--29},
abstract = {© 2016 Hass et al. The prefrontal cortex is centrally involved in a wide range of cognitive functions and their impairment in psychiatric disorders. Yet, the computational principles that govern the dynamics of prefrontal neural networks, and link their physiological, biochemical and anatomical properties to cognitive functions, are not well understood. Computational models can help to bridge the gap between these different levels of description, provided they are sufficiently constrained by experimental data and capable of predicting key properties of the intact cortex. Here, we present a detailed network model of the prefrontal cortex, based on a simple computationally efficient single neuron model (simpAdEx), with all parameters derived from in vitro electrophysiological and anatomical data. Without additional tuning, this model could be shown to quantitatively reproduce a wide range of measures from in vivo electrophysiological recordings, to a degree where simulated and experimentally observed activities were statistically indistinguishable. These measures include spike train statistics, membrane potential fluctuations, local field potentials, and the transmission of transient stimulus information across layers. We further demonstrate that model predictions are robust against moderate changes in key parameters, and that synaptic heterogeneity is a crucial ingredient to the quantitative reproduction of in vivo-like electrophysiological behavior. Thus, we have produced a physiologically highly valid, in a quantitative sense, yet computationally efficient PFC network model, which helped to identify key properties underlying spike time dynamics as observed in vivo, and can be harvested for in-depth investigation of the links between physiology and cognition.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
© 2016 Hass et al. The prefrontal cortex is centrally involved in a wide range of cognitive functions and their impairment in psychiatric disorders. Yet, the computational principles that govern the dynamics of prefrontal neural networks, and link their physiological, biochemical and anatomical properties to cognitive functions, are not well understood. Computational models can help to bridge the gap between these different levels of description, provided they are sufficiently constrained by experimental data and capable of predicting key properties of the intact cortex. Here, we present a detailed network model of the prefrontal cortex, based on a simple computationally efficient single neuron model (simpAdEx), with all parameters derived from in vitro electrophysiological and anatomical data. Without additional tuning, this model could be shown to quantitatively reproduce a wide range of measures from in vivo electrophysiological recordings, to a degree where simulated and experimentally observed activities were statistically indistinguishable. These measures include spike train statistics, membrane potential fluctuations, local field potentials, and the transmission of transient stimulus information across layers. We further demonstrate that model predictions are robust against moderate changes in key parameters, and that synaptic heterogeneity is a crucial ingredient to the quantitative reproduction of in vivo-like electrophysiological behavior. Thus, we have produced a physiologically highly valid, in a quantitative sense, yet computationally efficient PFC network model, which helped to identify key properties underlying spike time dynamics as observed in vivo, and can be harvested for in-depth investigation of the links between physiology and cognition.
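One of the spike-train statistics mentioned above, the inter-spike-interval coefficient of variation, takes only a few lines to compute; a self-contained sketch on synthetic data (not code from the paper):

    import numpy as np

    def isi_cv(spike_times):
        # CV of inter-spike intervals: ~1 for Poisson-like, <1 for regular firing.
        isi = np.diff(np.sort(spike_times))
        return isi.std() / isi.mean()

    rng = np.random.default_rng(2)
    poisson_train = np.cumsum(rng.exponential(0.1, 500))  # ~10 Hz Poisson-like train
    print(isi_cv(poisson_train))  # close to 1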
2015
Demanuele, Charmaine; Bähner, Florian; Plichta, Michael; Kirsch, Peter; Tost, Heike; Meyer-Lindenberg, Andreas; Durstewitz, Daniel
A statistical approach for segregating cognitive task stages from multivariate fMRI BOLD time series Journal Article
Frontiers in Human Neuroscience, 9 , 2015.
@article{Demanuele2015,
title = {A statistical approach for segregating cognitive task stages from multivariate fMRI BOLD time series},
author = {Demanuele, Charmaine and Bähner, Florian and Plichta, Michael and Kirsch, Peter and Tost, Heike and Meyer-Lindenberg, Andreas and Durstewitz, Daniel},
url = {https://www.researchgate.net/publication/282647853_A_statistical_approach_for_segregating_cognitive_task_stages_from_multivariate_fMRI_BOLD_time_series},
doi = {10.3389/fnhum.2015.00537},
year = {2015},
date = {2015-09-01},
journal = {Frontiers in Human Neuroscience},
volume = {9},
abstract = {Multivariate pattern analysis can reveal new information from neuroimaging data to illuminate human cognition and its disturbances. Here, we develop a methodological approach, based on multivariate statistical/machine learning and time series analysis, to discern cognitive processing stages from fMRI blood oxygenation level dependent (BOLD) time series. We apply this method to data recorded from a group of healthy adults whilst performing a virtual reality version of the delayed win-shift radial arm maze task. This task has been frequently used to study working memory and decision making in rodents. Using linear classifiers and multivariate test statistics in conjunction with time series bootstraps, we show that different cognitive stages of the task, as defined by the experimenter, namely, the encoding/retrieval, choice, reward and delay stages, can be statistically discriminated from the BOLD time series in brain areas relevant for decision making and working memory. Discrimination of these task stages was significantly reduced during poor behavioral performance in dorsolateral prefrontal cortex (DLPFC), but not in the primary visual cortex (V1). Experimenter-defined dissection of time series into class labels based on task structure was confirmed by an unsupervised, bottom-up approach based on Hidden Markov Models. Furthermore, we show that different groupings of recorded time points into cognitive event classes can be used to test hypotheses about the specific cognitive role of a given brain region during task execution. We found that whilst the DLPFC strongly differentiated between task stages associated with different memory loads, but not between different visual-spatial aspects, the reverse was true for V1. Our methodology illustrates how different aspects of cognitive information processing during one and the same task can be separated and attributed to specific brain regions based on information contained in multivariate patterns of voxel activity.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Multivariate pattern analysis can reveal new information from neuroimaging data to illuminate human cognition and its disturbances. Here, we develop a methodological approach, based on multivariate statistical/machine learning and time series analysis, to discern cognitive processing stages from fMRI blood oxygenation level dependent (BOLD) time series. We apply this method to data recorded from a group of healthy adults whilst performing a virtual reality version of the delayed win-shift radial arm maze task. This task has been frequently used to study working memory and decision making in rodents. Using linear classifiers and multivariate test statistics in conjunction with time series bootstraps, we show that different cognitive stages of the task, as defined by the experimenter, namely, the encoding/retrieval, choice, reward and delay stages, can be statistically discriminated from the BOLD time series in brain areas relevant for decision making and working memory. Discrimination of these task stages was significantly reduced during poor behavioral performance in dorsolateral prefrontal cortex (DLPFC), but not in the primary visual cortex (V1). Experimenter-defined dissection of time series into class labels based on task structure was confirmed by an unsupervised, bottom-up approach based on Hidden Markov Models. Furthermore, we show that different groupings of recorded time points into cognitive event classes can be used to test hypotheses about the specific cognitive role of a given brain region during task execution. We found that whilst the DLPFC strongly differentiated between task stages associated with different memory loads, but not between different visual-spatial aspects, the reverse was true for V1. Our methodology illustrates how different aspects of cognitive information processing during one and the same task can be separated and attributed to specific brain regions based on information contained in multivariate patterns of voxel activity.
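The core analysis pattern (linear decoding of task stages with a significance test) can be sketched generically; this uses synthetic data and a label-permutation null as a stand-in for the paper's time-series bootstraps:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(3)
    X = rng.normal(size=(200, 50))   # 200 scans x 50 voxels (synthetic)
    y = np.repeat([0, 1], 100)       # two task-stage labels
    X[y == 1, :5] += 0.8             # inject stage-specific signal in a few voxels

    clf = LogisticRegression(max_iter=1000)
    acc = cross_val_score(clf, X, y, cv=5).mean()

    # Permutation null: how often does shuffled-label decoding match the real accuracy?
    null = np.array([cross_val_score(clf, X, rng.permutation(y), cv=5).mean()
                     for _ in range(100)])
    p = (np.sum(null >= acc) + 1) / (len(null) + 1)
    print(f"accuracy = {acc:.2f}, permutation p = {p:.3f}")

Demanuele, Charmaine; Kirsch, Peter; Esslinger, Christine; Zink, Mathias; Meyer-Lindenberg, Andreas; Durstewitz, Daniel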
Area-Specific Information Processing in Prefrontal Cortex during a Probabilistic Inference Task: A Multivariate fMRI BOLD Time Series Analysis Journal Article
PLOS ONE, 2015.
@article{Demanuele2015b,
title = {Area-Specific Information Processing in Prefrontal Cortex during a Probabilistic Inference Task: A Multivariate fMRI BOLD Time Series Analysis},
author = {Charmaine Demanuele and Peter Kirsch and Christine Esslinger and Mathias Zink and Andreas Meyer-Lindenberg and Daniel Durstewitz},
url = {https://doi.org/10.1371/journal.pone.0135424},
doi = {10.1371/journal.pone.0135424},
year = {2015},
date = {2015-08-10},
journal = {PLOS ONE},
abstract = {Discriminating spatiotemporal stages of information processing involved in complex cognitive processes remains a challenge for neuroscience. This is especially so in prefrontal cortex whose subregions, such as the dorsolateral prefrontal (DLPFC), anterior cingulate (ACC) and orbitofrontal (OFC) cortices are known to have differentiable roles in cognition. Yet it is much less clear how these subregions contribute to different cognitive processes required by a given task. To investigate this, we use functional MRI data recorded from a group of healthy adults during a “Jumping to Conclusions” probabilistic reasoning task.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Discriminating spatiotemporal stages of information processing involved in complex cognitive processes remains a challenge for neuroscience. This is especially so in prefrontal cortex whose subregions, such as the dorsolateral prefrontal (DLPFC), anterior cingulate (ACC) and orbitofrontal (OFC) cortices are known to have differentiable roles in cognition. Yet it is much less clear how these subregions contribute to different cognitive processes required by a given task. To investigate this, we use functional MRI data recorded from a group of healthy adults during a “Jumping to Conclusions” probabilistic reasoning task.
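The kind of probabilistic inference probed by a beads-style “Jumping to Conclusions” task reduces to sequential Bayesian updating; a toy sketch with illustrative urn ratios (not the study's analysis code):

    # Two urns with complementary color ratios; update P(urn A) after each bead.
    p_a, post = 0.85, 0.5
    for bead in "AAABA":  # observed bead colors; 'A' favors urn A
        like_a = p_a if bead == "A" else 1 - p_a
        like_b = (1 - p_a) if bead == "A" else p_a
        post = like_a * post / (like_a * post + like_b * (1 - post))
        print(f"{bead}: P(urn A) = {post:.3f}")

Bähner, Florian; Demanuele, Charmaine; Schweiger, Janina; Gerchen, Martin F; Zamoscik, Vera; Ueltzhöffer, Kai; Hahn, Tim; Meyer, Patric; Flor, Herta; Durstewitz, Daniel; Tost, Heike; Kirsch, Peter; Plichta, Michael M; Meyer-Lindenberg, Andreas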
Hippocampal-dorsolateral prefrontal coupling as a species-conserved cognitive mechanism: a human translational imaging study Journal Article
Neuropsychopharmacology, 40 (7), pp. 1674–1681, 2015.
@article{Bähner2015,
title = {Hippocampal-dorsolateral prefrontal coupling as a species-conserved cognitive mechanism: a human translational imaging study},
author = {Florian Bähner and Charmaine Demanuele and Janina Schweiger and Martin F Gerchen and Vera Zamoscik and Kai Ueltzhöffer and Tim Hahn and Patric Meyer and Herta Flor and Daniel Durstewitz and Heike Tost and Peter Kirsch and Michael M Plichta and Andreas Meyer-Lindenberg},
url = {https://www.nature.com/articles/npp201513},
year = {2015},
date = {2015-06-01},
journal = {Neuropsychopharmacology},
volume = {40},
number = {7},
pages = {1674-81},
abstract = {Hippocampal–prefrontal cortex (HC–PFC) interactions are implicated in working memory (WM) and altered in psychiatric conditions with cognitive impairment such as schizophrenia. While coupling between both structures is crucial for WM performance in rodents, evidence from human studies is conflicting and translation of findings is complicated by the use of differing paradigms across species. We therefore used functional magnetic resonance imaging together with a spatial WM paradigm adapted from rodent research to examine HC–PFC coupling in humans. A PFC–parietal network was functionally connected to hippocampus (HC) during task stages requiring high levels of executive control but not during a matched control condition. The magnitude of coupling in a network comprising HC, bilateral dorsolateral PFC (DLPFC), and right supramarginal gyrus explained one-fourth of the variability in an independent spatial WM task but was unrelated to visual WM performance. HC–DLPFC coupling may thus represent a systems-level mechanism specific to spatial WM that is conserved across species, suggesting its utility for modeling cognitive dysfunction in translational neuroscience.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Hippocampal–prefrontal cortex (HC–PFC) interactions are implicated in working memory (WM) and altered in psychiatric conditions with cognitive impairment such as schizophrenia. While coupling between both structures is crucial for WM performance in rodents, evidence from human studies is conflicting and translation of findings is complicated by the use of differing paradigms across species. We therefore used functional magnetic resonance imaging together with a spatial WM paradigm adapted from rodent research to examine HC–PFC coupling in humans. A PFC–parietal network was functionally connected to hippocampus (HC) during task stages requiring high levels of executive control but not during a matched control condition. The magnitude of coupling in a network comprising HC, bilateral dorsolateral PFC (DLPFC), and right supramarginal gyrus explained one-fourth of the variability in an independent spatial WM task but was unrelated to visual WM performance. HC–DLPFC coupling may thus represent a systems-level mechanism specific to spatial WM that is conserved across species, suggesting its utility for modeling cognitive dysfunction in translational neuroscience.
Kucewicz, Michal T; Durstewitz, Daniel; Tricklebank, Mark D; Jones, Matt; Laubach, Mark; Fujisawa, Shigeyoshi; Pennartz, Cyriel; Shapiro, Matthew; Hampson, Robert; Deadwyler, Samuel
Decoding the sequential contributions of hippocampal-prefrontal neuronal assemblies to spatial working memory Unpublished
2015.
@unpublished{Kucewicz2015,
title = {Decoding the sequential contributions of hippocampal-prefrontal neuronal assemblies to spatial working memory},
author = {Michal T Kucewicz and Daniel Durstewitz and Mark D Tricklebank and Matt Jones and Mark Laubach and Shigeyoshi Fujisawa and Cyriel Pennartz and Matthew Shapiro and Robert Hampson and Samuel Deadwyler},
year = {2015},
date = {2015-01-01},
keywords = {},
pubstate = {published},
tppubtype = {unpublished}
}
Lapish, Christopher C; Balaguer-Ballester, Emili; Seamans, Jeremy K; Phillips, Anthony G; Durstewitz, Daniel
Amphetamine Exerts Dose-Dependent Changes in Prefrontal Cortex Attractor Dynamics during Working Memory Journal Article
Journal of Neuroscience, 35 (28), pp. 10172–10187, 2015.
@article{Lapish2015,
title = {Amphetamine Exerts Dose-Dependent Changes in Prefrontal Cortex Attractor Dynamics during Working Memory},
author = {Christopher C Lapish and Emili Balaguer-Ballester and Jeremy K Seamans and Anthony G Phillips and Daniel Durstewitz},
doi = {10.1523/JNEUROSCI.2421-14.2015},
year = {2015},
date = {2015-01-01},
journal = {Journal of Neuroscience},
volume = {35},
number = {28},
pages = {10172--10187},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
2014
Hass, Joachim; Durstewitz, Daniel
Neurocomputational models of time perception Journal Article
Adv Exp Med Biol, (829), pp. 49–71, 2014.
@article{Hass2014,
title = {Neurocomputational models of time perception},
author = {Joachim Hass and Daniel Durstewitz},
url = {https://pubmed.ncbi.nlm.nih.gov/25358705/},
doi = {10.1007/978-1-4939-1782-2_4},
year = {2014},
date = {2014-10-10},
journal = {Adv Exp Med Biol},
number = {829},
pages = {49-71},
abstract = {Mathematical modeling is a useful tool for understanding the neurodynamical and computational mechanisms of cognitive abilities like time perception, and for linking neurophysiology to psychology. In this chapter, we discuss several biophysical models of time perception and how they can be tested against experimental evidence. After a brief overview on the history of computational timing models, we list a number of central psychological and physiological findings that such a model should be able to account for, with a focus on the scaling of the variability of duration estimates with the length of the interval that needs to be estimated. The functional form of this scaling turns out to be predictive of the underlying computational mechanism for time perception. We then present four basic classes of timing models (ramping activity, sequential activation of neuron populations, state space trajectories and neural oscillators) and discuss two specific examples in more detail. Finally, we review to what extent existing theories of time perception adhere to the experimental constraints. },
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Mathematical modeling is a useful tool for understanding the neurodynamical and computational mechanisms of cognitive abilities like time perception, and for linking neurophysiology to psychology. In this chapter, we discuss several biophysical models of time perception and how they can be tested against experimental evidence. After a brief overview on the history of computational timing models, we list a number of central psychological and physiological findings that such a model should be able to account for, with a focus on the scaling of the variability of duration estimates with the length of the interval that needs to be estimated. The functional form of this scaling turns out to be predictive of the underlying computational mechanism for time perception. We then present four basic classes of timing models (ramping activity, sequential activation of neuron populations, state space trajectories and neural oscillators) and discuss two specific examples in more detail. Finally, we review to what extent existing theories of time perception adhere to the experimental constraints.
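The first of the four model classes, ramping activity, is easy to simulate: a noisy ramp signals elapsed time when it crosses a threshold, and the spread of crossing times gives the predicted estimation error. A toy sketch with made-up parameters (not from the chapter):

    import numpy as np

    rng = np.random.default_rng(7)

    def ramp_crossing_time(slope=1.0, noise=0.3, thresh=10.0, dt=0.01):
        # Drift-diffusion ramp to threshold; first-passage time is the duration estimate.
        x, t = 0.0, 0.0
        while x < thresh:
            x += slope * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        return t

    times = np.array([ramp_crossing_time() for _ in range(2000)])
    print(times.mean(), times.std() / times.mean())  # mean ~ thresh/slope, plus the CV

Hertäg, Loreen; Durstewitz, Daniel; Brunel, Nicolas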
Analytical approximations of the firing rate of an adaptive exponential integrate-and-fire neuron in the presence of synaptic noise Journal Article
Frontiers in Computational Neuroscience, 8 (116), 2014.
@article{Hertäg2014,
title = {Analytical approximations of the firing rate of an adaptive exponential integrate-and-fire neuron in the presence of synaptic noise},
author = {Loreen Hertäg and Daniel Durstewitz and Nicolas Brunel},
url = {https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4167001/},
doi = {10.3389/fncom.2014.00116},
year = {2014},
date = {2014-09-18},
journal = {Frontiers in Computational Neuroscience},
volume = {8},
number = {116},
abstract = {Computational models offer a unique tool for understanding the network-dynamical mechanisms which mediate between physiological and biophysical properties, and behavioral function. A traditional challenge in computational neuroscience is, however, that simple neuronal models which can be studied analytically fail to reproduce the diversity of electrophysiological behaviors seen in real neurons, while detailed neuronal models which do reproduce such diversity are intractable analytically and computationally expensive. A number of intermediate models have been proposed whose aim is to capture the diversity of firing behaviors and spike times of real neurons while entailing the simplest possible mathematical description. One such model is the exponential integrate-and-fire neuron with spike rate adaptation (aEIF) which consists of two differential equations for the membrane potential (V) and an adaptation current (w). Despite its simplicity, it can reproduce a wide variety of physiologically observed spiking patterns, can be fit to physiological recordings quantitatively, and, once done so, is able to predict spike times on traces not used for model fitting. Here we compute the steady-state firing rate of aEIF in the presence of Gaussian synaptic noise, using two approaches. The first approach is based on the 2-dimensional Fokker-Planck equation that describes the (V,w)-probability distribution, which is solved using an expansion in the ratio between the time constants of the two variables. The second is based on the firing rate of the EIF model, which is averaged over the distribution of the w variable. These analytically derived closed-form expressions were tested on simulations from a large variety of model cells quantitatively fitted to in vitro electrophysiological recordings from pyramidal cells and interneurons. Theoretical predictions closely agreed with the firing rate of the simulated cells fed with in-vivo-like synaptic noise.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Computational models offer a unique tool for understanding the network-dynamical mechanisms which mediate between physiological and biophysical properties, and behavioral function. A traditional challenge in computational neuroscience is, however, that simple neuronal models which can be studied analytically fail to reproduce the diversity of electrophysiological behaviors seen in real neurons, while detailed neuronal models which do reproduce such diversity are intractable analytically and computationally expensive. A number of intermediate models have been proposed whose aim is to capture the diversity of firing behaviors and spike times of real neurons while entailing the simplest possible mathematical description. One such model is the exponential integrate-and-fire neuron with spike rate adaptation (aEIF) which consists of two differential equations for the membrane potential (V) and an adaptation current (w). Despite its simplicity, it can reproduce a wide variety of physiologically observed spiking patterns, can be fit to physiological recordings quantitatively, and, once done so, is able to predict spike times on traces not used for model fitting. Here we compute the steady-state firing rate of aEIF in the presence of Gaussian synaptic noise, using two approaches. The first approach is based on the 2-dimensional Fokker-Planck equation that describes the (V,w)-probability distribution, which is solved using an expansion in the ratio between the time constants of the two variables. The second is based on the firing rate of the EIF model, which is averaged over the distribution of the w variable. These analytically derived closed-form expressions were tested on simulations from a large variety of model cells quantitatively fitted to in vitro electrophysiological recordings from pyramidal cells and interneurons. Theoretical predictions closely agreed with the firing rate of the simulated cells fed with in-vivo-like synaptic noise.
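For orientation, the quantity being approximated can also be estimated by brute force; below is a minimal Euler-Maruyama simulation of the aEIF equations under Gaussian current noise, with illustrative parameter values (the paper instead derives closed-form expressions):

    import numpy as np

    def aeif_rate(mu, sigma, T_ms=5000.0, dt=0.05, seed=0):
        # aEIF: C dV/dt = -gL(V-EL) + gL*DT*exp((V-VT)/DT) - w + I(t),
        #       tau_w dw/dt = a(V-EL) - w; on spike: V -> Vr, w -> w + b.
        C, gL, EL, VT, DT = 200.0, 10.0, -70.0, -50.0, 2.0     # pF, nS, mV, mV, mV
        a, b, tau_w, Vr, Vcut = 2.0, 60.0, 300.0, -58.0, 0.0   # nS, pA, ms, mV, mV
        rng = np.random.default_rng(seed)
        V, w, n_spikes = EL, 0.0, 0
        for _ in range(int(T_ms / dt)):
            dV = (-gL * (V - EL) + gL * DT * np.exp((V - VT) / DT) - w + mu) / C
            V += dt * dV + (sigma / C) * np.sqrt(dt) * rng.standard_normal()
            w += dt * (a * (V - EL) - w) / tau_w
            if V >= Vcut:                       # spike: reset and increment adaptation
                V, w, n_spikes = Vr, w + b, n_spikes + 1
        return n_spikes / (T_ms / 1000.0)       # steady-state rate in Hz

    print(aeif_rate(mu=500.0, sigma=100.0))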
2013
Spanagel, Rainer; Durstewitz, Daniel; Hansson, Anita; Heinz, Andreas; Kiefer, Falk; Köhr, Georg; Matthäus, Franziska; Nöthen, Markus M; Noori, Hamid R; Obermayer, Klaus; Rietschel, Marcella; Schloss, Patrick; Scholz, Henrike; Schumann, Gunter; Smolka, Michael; Sommer, Wolfgang; Vengeliene, Valentina; Walter, Henrik; Wurst, Wolfgang; Zimmermann, Uli S; Addiction GWAS Resource Group; Stringer, Sven; Smits, Yannick; Derks, Eske M
A systems medicine research approach for studying alcohol addiction Journal Article
Addiction Biology, 18 (6), 2013.
@article{Spanagel2013,
title = {A systems medicine research approach for studying alcohol addiction},
author = {Rainer Spanagel and Daniel Durstewitz and Anita Hansson and Andreas Heinz and Falk Kiefer and Georg Köhr and Franziska Matthäus and Markus M Nöthen and Hamid R Noori and Klaus Obermayer and Marcella Rietschel and Patrick Schloss and Henrike Scholz and Gunter Schumann and Michael Smolka and Wolfgang Sommer and Valentina Vengeliene and Henrik Walter and Wolfgang Wurst and Uli S Zimmermann and {Addiction GWAS Resource Group} and Sven Stringer and Yannick Smits and Eske M Derks},
url = {https://pubmed.ncbi.nlm.nih.gov/24283978/},
doi = {10.1111/adb.12109},
year = {2013},
date = {2013-11-01},
journal = {Addiction Biology},
volume = {18},
number = {6},
abstract = {According to the World Health Organization, about 2 billion people drink alcohol. Excessive alcohol consumption can result in alcohol addiction, which is one of the most prevalent neuropsychiatric diseases afflicting our society today. Prevention and intervention of alcohol binging in adolescents and treatment of alcoholism are major unmet challenges affecting our health-care system and society alike. Our newly formed German SysMedAlcoholism consortium is using a new systems medicine approach and intends (1) to define individual neurobehavioral risk profiles in adolescents that are predictive of alcohol use disorders later in life and (2) to identify new pharmacological targets and molecules for the treatment of alcoholism. To achieve these goals, we will use omics-information from epigenomics, genetics, transcriptomics, neurodynamics, global neurochemical connectomes and neuroimaging (IMAGEN; Schumann et al.) to feed mathematical prediction modules provided by two Bernstein Centers for Computational Neurosciences (Berlin and Heidelberg/Mannheim), the results of which will subsequently be functionally validated in independent clinical samples and appropriate animal models. This approach will lead to new early intervention strategies and identify innovative molecules for relapse prevention that will be tested in experimental human studies. This research program will ultimately help in consolidating addiction research clusters in Germany that can effectively conduct large clinical trials, implement early intervention strategies and impact political and healthcare decision makers.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
According to the World Health Organization, about 2 billion people drink alcohol. Excessive alcohol consumption can result in alcohol addiction, which is one of the most prevalent neuropsychiatric diseases afflicting our society today. Prevention and intervention of alcohol binging in adolescents and treatment of alcoholism are major unmet challenges affecting our health-care system and society alike. Our newly formed German SysMedAlcoholism consortium is using a new systems medicine approach and intends (1) to define individual neurobehavioral risk profiles in adolescents that are predictive of alcohol use disorders later in life and (2) to identify new pharmacological targets and molecules for the treatment of alcoholism. To achieve these goals, we will use omics-information from epigenomics, genetics, transcriptomics, neurodynamics, global neurochemical connectomes and neuroimaging (IMAGEN; Schumann et al.) to feed mathematical prediction modules provided by two Bernstein Centers for Computational Neurosciences (Berlin and Heidelberg/Mannheim), the results of which will subsequently be functionally validated in independent clinical samples and appropriate animal models. This approach will lead to new early intervention strategies and identify innovative molecules for relapse prevention that will be tested in experimental human studies. This research program will ultimately help in consolidating addiction research clusters in Germany that can effectively conduct large clinical trials, implement early intervention strategies and impact political and healthcare decision makers.
Quiroga-Lombard, Claudio S; Hass, Joachim; Durstewitz, Daniel
Method for stationarity-segmentation of spike train data with application to the Pearson cross-correlation Journal Article
Journal of Neurophysiology, 2013.
@article{Quiroga-Lombard2013,
title = {Method for stationarity-segmentation of spike train data with application to the Pearson cross-correlation},
author = {Claudio S. Quiroga-Lombard and Joachim Hass and Daniel Durstewitz},
url = {https://doi.org/10.1152/jn.00186.2013},
doi = {10.1152/jn.00186.2013},
year = {2013},
date = {2013-07-15},
journal = {Journal of Neurophysiology},
abstract = {Correlations among neurons are supposed to play an important role in computation and information coding in the nervous system. Empirically, functional interactions between neurons are most commonly assessed by cross-correlation functions. Recent studies have suggested that pairwise correlations may indeed be sufficient to capture most of the information present in neural interactions. Many applications of correlation functions, however, implicitly tend to assume that the underlying processes are stationary. This assumption will usually fail for real neurons recorded in vivo since their activity during behavioral tasks is heavily influenced by stimulus-, movement-, or cognition-related processes as well as by more general processes like slow oscillations or changes in state of alertness. To address the problem of nonstationarity, we introduce a method for assessing stationarity empirically and then “slicing” spike trains into stationary segments according to the statistical definition of weak-sense stationarity. We examine pairwise Pearson cross-correlations (PCCs) under both stationary and nonstationary conditions and identify another source of covariance that can be differentiated from the covariance of the spike times and emerges as a consequence of residual nonstationarities after the slicing process: the covariance of the firing rates defined on each segment. Based on this, a correction of the PCC is introduced that accounts for the effect of segmentation. We probe these methods both on simulated data sets and on in vivo recordings from the prefrontal cortex of behaving rats. Rather than for removing nonstationarities, the present method may also be used for detecting significant events in spike trains.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Correlations among neurons are supposed to play an important role in computation and information coding in the nervous system. Empirically, functional interactions between neurons are most commonly assessed by cross-correlation functions. Recent studies have suggested that pairwise correlations may indeed be sufficient to capture most of the information present in neural interactions. Many applications of correlation functions, however, implicitly tend to assume that the underlying processes are stationary. This assumption will usually fail for real neurons recorded in vivo since their activity during behavioral tasks is heavily influenced by stimulus-, movement-, or cognition-related processes as well as by more general processes like slow oscillations or changes in state of alertness. To address the problem of nonstationarity, we introduce a method for assessing stationarity empirically and then “slicing” spike trains into stationary segments according to the statistical definition of weak-sense stationarity. We examine pairwise Pearson cross-correlations (PCCs) under both stationary and nonstationary conditions and identify another source of covariance that can be differentiated from the covariance of the spike times and emerges as a consequence of residual nonstationarities after the slicing process: the covariance of the firing rates defined on each segment. Based on this, a correction of the PCC is introduced that accounts for the effect of segmentation. We probe these methods both on simulated data sets and on in vivo recordings from the prefrontal cortex of behaving rats. Rather than for removing nonstationarities, the present method may also be used for detecting significant events in spike trains.
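The effect that the slicing procedure counteracts is easy to reproduce: a slow rate drift shared by two otherwise independent neurons inflates their zero-lag Pearson correlation, and correlating within short, near-stationary segments reduces the bias. A toy sketch (not the paper's stationarity test):

    import numpy as np

    def pearson_cc(x, y):
        # Zero-lag Pearson correlation of two binned spike-count series.
        x, y = x - x.mean(), y - y.mean()
        return (x @ y) / np.sqrt((x @ x) * (y @ y))

    rng = np.random.default_rng(4)
    drift = np.linspace(5.0, 15.0, 2000)   # common slow rate drift in Hz
    a = rng.poisson(drift * 0.05)          # two independent neurons, 50 ms bins
    b = rng.poisson(drift * 0.05)
    print(pearson_cc(a, b))                # spuriously positive despite independence

    segments = np.array_split(np.arange(2000), 20)  # short, near-stationary slices
    print(np.mean([pearson_cc(a[s], b[s]) for s in segments]))  # close to 0

Richter, Sophie Helene; Zeuch, Benjamin; Lankisch, Katja; Gass, Peter; Durstewitz, Daniel; Vollmayr, Barbara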
Where Have I Been? Where Should I Go? Spatial Working Memory on a Radial Arm Maze in a Rat Model of Depression Journal Article
PLOS ONE, 2013.
@article{Richter2013,
title = {Where Have I Been? Where Should I Go? Spatial Working Memory on a Radial Arm Maze in a Rat Model of Depression},
author = {Sophie Helene Richter and Benjamin Zeuch and Katja Lankisch and Peter Gass and Daniel Durstewitz and Barbara Vollmayr},
url = { https://doi.org/10.1371/journal.pone.0062458},
doi = {10.1371/journal.pone.0062458},
year = {2013},
date = {2013-04-13},
journal = {PLOS ONE},
abstract = {Disturbances in cognitive functioning are among the most debilitating problems experienced by patients with major depression. Investigations of these deficits in animals help to extend and refine our understanding of human emotional disorder, while at the same time providing valid tools to study higher executive functions in animals. We employ the “learned helplessness” genetic rat model of depression in studying working memory using an eight arm radial maze procedure with temporal delay. This so-called delayed spatial win-shift task consists of three phases, training, delay and test, requiring rats to hold information on-line across a retention interval and making choices based on this information in the test phase. According to a 2×2 factorial design, working memory performance of thirty-one congenitally helpless (cLH) and non-helpless (cNLH) rats was tested on eighteen trials, additionally imposing two different delay durations, 30 s and 15 min, respectively. While not observing a general cognitive deficit in cLH rats, the delay length greatly influenced maze performance. Notably, performance was most impaired in cLH rats tested with the shorter 30 s delay, suggesting a stress-related disruption of attentional processes in rats that are more sensitive to stress. Our study provides direct animal homologues of clinically important measures in human research, and contributes to the non-invasive assessment of cognitive deficits associated with depression.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Disturbances in cognitive functioning are among the most debilitating problems experienced by patients with major depression. Investigations of these deficits in animals help to extend and refine our understanding of human emotional disorder, while at the same time providing valid tools to study higher executive functions in animals. We employ the “learned helplessness” genetic rat model of depression in studying working memory using an eight arm radial maze procedure with temporal delay. This so-called delayed spatial win-shift task consists of three phases, training, delay and test, requiring rats to hold information on-line across a retention interval and making choices based on this information in the test phase. According to a 2×2 factorial design, working memory performance of thirty-one congenitally helpless (cLH) and non-helpless (cNLH) rats was tested on eighteen trials, additionally imposing two different delay durations, 30 s and 15 min, respectively. While not observing a general cognitive deficit in cLH rats, the delay length greatly influenced maze performance. Notably, performance was most impaired in cLH rats tested with the shorter 30 s delay, suggesting a stress-related disruption of attentional processes in rats that are more sensitive to stress. Our study provides direct animal homologues of clinically important measures in human research, and contributes to the non-invasive assessment of cognitive deficits associated with depression.
2012
Hyman, James M; Ma, Liya; Balaguer-Ballester, Emili; Durstewitz, Daniel; Seamans, Jeremy K
Contextual encoding by ensembles of medial prefrontal cortex neurons Journal Article
Proceedings of the National Academy of Sciences, 2012.
@article{Hyman2012,
title = {Contextual encoding by ensembles of medial prefrontal cortex neurons},
author = {James M Hyman and Liya Ma and Emili Balaguer-Ballester and Daniel Durstewitz and Jeremy K Seamans},
url = {https://pubmed.ncbi.nlm.nih.gov/22421138/},
doi = {10.1073/pnas.1114415109},
year = {2012},
date = {2012-03-27},
journal = {Proceedings of the National Academy of Sciences},
abstract = {Contextual representations serve to guide many aspects of behavior and influence the way stimuli or actions are encoded and interpreted. The medial prefrontal cortex (mPFC), including the anterior cingulate subregion, has been implicated in contextual encoding, yet the nature of contextual representations formed by the mPFC is unclear. Using multiple single-unit tetrode recordings in rats, we found that different activity patterns emerged in mPFC ensembles when animals moved between different environmental contexts. These differences in activity patterns were significantly larger than those observed for hippocampal ensembles. Whereas ≈11% of mPFC cells consistently preferred one environment over the other across multiple exposures to the same environments, optimal decoding (prediction) of the environmental setting occurred when the activity of up to ≈50% of all mPFC neurons was taken into account. On the other hand, population activity patterns were not identical upon repeated exposures to the very same environment. This was partly because the state of mPFC ensembles seemed to systematically shift with time, such that we could sometimes predict the change in ensemble state upon later reentry into one environment according to linear extrapolation from the time-dependent shifts observed during the first exposure. We also observed that many strongly action-selective mPFC neurons exhibited a significant degree of context-dependent modulation. These results highlight potential differences in contextual encoding schemes by the mPFC and hippocampus and suggest that the mPFC forms rich contextual representations that take into account not only sensory cues but also actions and time. },
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Contextual representations serve to guide many aspects of behavior and influence the way stimuli or actions are encoded and interpreted. The medial prefrontal cortex (mPFC), including the anterior cingulate subregion, has been implicated in contextual encoding, yet the nature of contextual representations formed by the mPFC is unclear. Using multiple single-unit tetrode recordings in rats, we found that different activity patterns emerged in mPFC ensembles when animals moved between different environmental contexts. These differences in activity patterns were significantly larger than those observed for hippocampal ensembles. Whereas ≈11% of mPFC cells consistently preferred one environment over the other across multiple exposures to the same environments, optimal decoding (prediction) of the environmental setting occurred when the activity of up to ≈50% of all mPFC neurons was taken into account. On the other hand, population activity patterns were not identical upon repeated exposures to the very same environment. This was partly because the state of mPFC ensembles seemed to systematically shift with time, such that we could sometimes predict the change in ensemble state upon later reentry into one environment according to linear extrapolation from the time-dependent shifts observed during the first exposure. We also observed that many strongly action-selective mPFC neurons exhibited a significant degree of context-dependent modulation. These results highlight potential differences in contextual encoding schemes by the mPFC and hippocampus and suggest that the mPFC forms rich contextual representations that take into account not only sensory cues but also actions and time.
Hertäg, Loreen; Hass, Joachim; Golovko, Tatiana; Durstewitz, Daniel
An Approximation to the Adaptive Exponential Integrate-and-Fire Neuron Model Allows Fast and Predictive Fitting to Physiological Data Journal Article
Frontiers in Computational Neuroscience, 6 (September), pp. 1–22, 2012.
@article{Hertag2012,
title = {An Approximation to the Adaptive Exponential Integrate-and-Fire Neuron Model Allows Fast and Predictive Fitting to Physiological Data},
author = {Loreen Hertäg and Joachim Hass and Tatiana Golovko and Daniel Durstewitz},
doi = {10.3389/fncom.2012.00062},
year = {2012},
date = {2012-01-01},
journal = {Frontiers in Computational Neuroscience},
volume = {6},
number = {September},
pages = {1--22},
abstract = {For large-scale network simulations, it is often desirable to have computationally tractable, yet in a defined sense still physiologically valid neuron models. In particular, these models should be able to reproduce physiological measurements, ideally in a predictive sense, and under different input regimes in which neurons may operate in vivo. Here we present an approach to parameter estimation for a simple spiking neuron model mainly based on standard f-I curves obtained from in vitro recordings. Such recordings are routinely obtained in standard protocols and assess a neuron's response under a wide range of mean input currents. Our fitting procedure makes use of closed-form expressions for the firing rate derived from an approximation to the adaptive exponential integrate-and-fire (AdEx) model. The resulting fitting process is simple and about two orders of magnitude faster compared to methods based on numerical integration of the differential equations. We probe this method on different cell types recorded from rodent prefrontal cortex. After fitting to the f-I current-clamp data, the model cells are tested on completely different sets of recordings obtained by fluctuating ('in vivo-like') input currents. For a wide range of different input regimes, cell types, and cortical layers, the model could predict spike times on these test traces quite accurately within the bounds of physiological reliability, although no information from these distinct test sets was used for model fitting. Further analyses delineated some of the empirical factors constraining model fitting and the model's generalization performance. An even simpler adaptive LIF neuron was also examined in this context. Hence, we have developed a 'high-throughput' model fitting procedure which is simple and fast, with good prediction performance, and which relies only on firing rate information and standard physiological data widely and easily available. © 2012 Hertäg, Hass, Golovko and Durstewitz.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
For large-scale network simulations, it is often desirable to have computationally tractable, yet in a defined sense still physiologically valid neuron models. In particular, these models should be able to reproduce physiological measurements, ideally in a predictive sense, and under different input regimes in which neurons may operate in vivo. Here we present an approach to parameter estimation for a simple spiking neuron model mainly based on standard f-I curves obtained from in vitro recordings. Such recordings are routinely obtained in standard protocols and assess a neuron's response under a wide range of mean input currents. Our fitting procedure makes use of closed-form expressions for the firing rate derived from an approximation to the adaptive exponential integrate-and-fire (AdEx) model. The resulting fitting process is simple and about two orders of magnitude faster compared to methods based on numerical integration of the differential equations. We probe this method on different cell types recorded from rodent prefrontal cortex. After fitting to the f-I current-clamp data, the model cells are tested on completely different sets of recordings obtained by fluctuating ('in vivo-like') input currents. For a wide range of different input regimes, cell types, and cortical layers, the model could predict spike times on these test traces quite accurately within the bounds of physiological reliability, although no information from these distinct test sets was used for model fitting. Further analyses delineated some of the empirical factors constraining model fitting and the model's generalization performance. An even simpler adaptive LIF neuron was also examined in this context. Hence, we have developed a 'high-throughput' model fitting procedure which is simple and fast, with good prediction performance, and which relies only on firing rate information and standard physiological data widely and easily available. © 2012 Hertäg, Hass, Golovko and Durstewitz.
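The shape of the fitting problem (least squares of a closed-form f-I curve against current-clamp data) can be sketched with a generic softplus curve standing in for the paper's AdEx-derived expression; the data and starting values below are made up:

    import numpy as np
    from scipy.optimize import curve_fit

    # Hypothetical f-I measurements: injected current (pA) vs. firing rate (Hz).
    I = np.array([100.0, 150.0, 200.0, 250.0, 300.0, 350.0, 400.0])
    f = np.array([0.0, 2.0, 9.0, 17.0, 24.0, 30.0, 35.0])

    def fI(I, I_rh, gain, s):
        # Smooth rectified-linear curve: threshold I_rh, slope gain, softness s.
        return gain * s * np.log1p(np.exp((I - I_rh) / s))

    popt, pcov = curve_fit(fI, I, f, p0=[120.0, 0.15, 20.0])
    print(popt, np.sqrt(np.diag(pcov)))  # fitted parameters with standard errors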
2011
Hass, Joachim; Durstewitz, Daniel
Models of dopaminergic modulation Journal Article
Scholarpedia, 6 (6), 2011.
@article{Hass2011,
title = {Models of dopaminergic modulation},
author = {Joachim Hass and Daniel Durstewitz},
url = {http://www.scholarpedia.org/article/Models_of_dopaminergic_modulation},
doi = {10.4249/scholarpedia.4215},
year = {2011},
date = {2011-08-01},
journal = {Scholarpedia},
volume = {6},
number = {6},
abstract = {In computational neuroscience, models of dopaminergic modulation address the physiological and computational functions of the neuromodulator dopamine (DA) by implementing it into models of biological neurons and networks.
DA plays a highly important role in higher order motor control, goal-directed behavior, motivation, reinforcement learning, and a number of cognitive and executive functions such as working memory, planning, attention, behavioral and cognitive flexibility, inhibition of impulsive responses, and time perception (Schultz, 1998, Nieoullon, 2003, Goldman-Rakic, 2008, Dalley and Everitt, 2009). DA's fundamental part in learning, cognitive, and motor control is also reflected in the various serious nervous system diseases associated with impaired DA regulation, such as Parkinson’s disease, Schizophrenia, bipolar disorder, Huntington’s disease, attention-deficit hyperactivity disorder (ADHD), autism, restless legs syndrome (RLS), and addictions (Meyer-Lindenberg, 2010, Egan and Weinberger, 1997, Dalley and Everitt, 2009).
From electrophysiological experiments, DA is known to affect a number of neuronal and synaptic properties in various target areas such as the striatum, the hippocampus, and motor and frontal cortical regions, via different types of receptors often combined within the D1- and D2-receptor class (D1R and D2R) (see Dopamine modulation). In single neurons, DA changes neuronal excitability and signal integration by virtue of its effects on a variety of voltage-dependent currents. DA also enhances or suppresses various synaptic currents such as AMPA-, GABA- and NMDA-type currents. With regards to both intrinsic and synaptic currents, the D1 and D2 receptor classes may function largely antagonistically (Trantham-Davidson et al. 2004, West and Grace 2002, Gulledge and Jaffe, 1998): D2 receptors decrease neuronal excitability with relatively short latency (in vitro), while there is a delayed and prolonged increase mediated by D1R. Similarly, D1R enhance NMDA- and GABA-type currents, while D2R decrease them. These antagonistic physiological effects may be rooted in the differential regulation of intracellular proteins like adenylyl cyclase, cAMP and DARPP-32 through D1R and D2R (Greengard, 2001).},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
In computational neuroscience, models of dopaminergic modulation address the physiological and computational functions of the neuromodulator dopamine (DA) by implementing it into models of biological neurons and networks.
DA plays a highly important role in higher order motor control, goal-directed behavior, motivation, reinforcement learning, and a number of cognitive and executive functions such as working memory, planning, attention, behavioral and cognitive flexibility, inhibition of impulsive responses, and time perception (Schultz, 1998, Nieoullon, 2003, Goldman-Rakic, 2008, Dalley and Everitt, 2009). DA's fundamental part in learning, cognitive, and motor control is also reflected in the various serious nervous system diseases associated with impaired DA regulation, such as Parkinson’s disease, Schizophrenia, bipolar disorder, Huntington’s disease, attention-deficit hyperactivity disorder (ADHD), autism, restless legs syndrome (RLS), and addictions (Meyer-Lindenberg, 2010, Egan and Weinberger, 1997, Dalley and Everitt, 2009).
From electrophysiological experiments, DA is known to affect a number of neuronal and synaptic properties in various target areas such as the striatum, the hippocampus, and motor and frontal cortical regions, via different types of receptors often combined within the D1- and D2-receptor class (D1R and D2R) (see Dopamine modulation). In single neurons, DA changes neuronal excitability and signal integration by virtue of its effects on a variety of voltage-dependent currents. DA also enhances or suppresses various synaptic currents such as AMPA-, GABA- and NMDA-type currents. With regards to both intrinsic and synaptic currents, the D1 and D2 receptor classes may function largely antagonistically (Trantham-Davidson et al. 2004, West and Grace 2002, Gulledge and Jaffe, 1998): D2 receptors decrease neuronal excitability with relatively short latency (in vitro), while there is a delayed and prolonged increase mediated by D1R. Similarly, D1R enhance NMDA- and GABA-type currents, while D2R decrease them. These antagonistic physiological effects may be rooted in the differential regulation of intracellular proteins like adenylyl cyclase, cAMP and DARPP-32 through D1R and D2R (Greengard, 2001).
Balaguer-Ballester, Emili; Lapish, Christopher C; Seamans, Jeremy K; Durstewitz, Daniel
Attracting dynamics of frontal cortex ensembles during memory-guided decision-making Journal Article
PLoS Computational Biology, 7 (5), 2011, ISSN: 1553734X.
@article{Balaguer-Ballester2011,
title = {Attracting dynamics of frontal cortex ensembles during memory-guided decision-making},
author = {Emili Balaguer-Ballester and Christopher C Lapish and Jeremy K Seamans and Daniel Durstewitz},
doi = {10.1371/journal.pcbi.1002057},
issn = {1553734X},
year = {2011},
date = {2011-01-01},
journal = {PLoS Computational Biology},
volume = {7},
number = {5},
abstract = {A common theoretical view is that attractor-like properties of neuronal dynamics underlie cognitive processing. However, although often proposed theoretically, direct experimental support for the convergence of neural activity to stable population patterns as a signature of attracting states has been sparse so far, especially in higher cortical areas. Combining state space reconstruction theorems and statistical learning techniques, we were able to resolve details of anterior cingulate cortex (ACC) multiple single-unit activity (MSUA) ensemble dynamics during a higher cognitive task which were not accessible previously. The approach worked by constructing high-dimensional state spaces from delays of the original single-unit firing rate variables and the interactions among them, which were then statistically analyzed using kernel methods. We observed cognitive-epoch-specific neural ensemble states in ACC which were stable across many trials (in the sense of being predictive) and depended on behavioral performance. More interestingly, attracting properties of these cognitively defined ensemble states became apparent in high-dimensional expansions of the MSUA spaces due to a proper unfolding of the neural activity flow, with properties common across different animals. These results therefore suggest that ACC networks may process different subcomponents of higher cognitive tasks by transiting among different attracting states.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
A common theoretical view is that attractor-like properties of neuronal dynamics underlie cognitive processing. However, although often proposed theoretically, direct experimental support for the convergence of neural activity to stable population patterns as a signature of attracting states has been sparse so far, especially in higher cortical areas. Combining state space reconstruction theorems and statistical learning techniques, we were able to resolve details of anterior cingulate cortex (ACC) multiple single-unit activity (MSUA) ensemble dynamics during a higher cognitive task which were not accessible previously. The approach worked by constructing high-dimensional state spaces from delays of the original single-unit firing rate variables and the interactions among them, which were then statistically analyzed using kernel methods. We observed cognitive-epoch-specific neural ensemble states in ACC which were stable across many trials (in the sense of being predictive) and depended on behavioral performance. More interestingly, attracting properties of these cognitively defined ensemble states became apparent in high-dimensional expansions of the MSUA spaces due to a proper unfolding of the neural activity flow, with properties common across different animals. These results therefore suggest that ACC networks may process different subcomponents of higher cognitive tasks by transiting among different attracting states.
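The state-space construction used here starts from a standard delay embedding of the recorded rate variables; a generic utility on made-up data (the kernel expansion and test statistics are beyond this sketch):

    import numpy as np

    def delay_embed(x, m, tau):
        # Stack m delayed copies (tau samples apart) of a (T x N) series,
        # giving (T - (m-1)*tau) state vectors of dimension N*m.
        T = x.shape[0] - (m - 1) * tau
        return np.hstack([x[i * tau : i * tau + T] for i in range(m)])

    rates = np.random.default_rng(5).normal(size=(1000, 8))  # 8 units' firing rates
    states = delay_embed(rates, m=3, tau=2)
    print(states.shape)  # (996, 24)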
2010
Durstewitz, Daniel; Vittoz, Nicole M; Floresco, Stan B; Seamans, Jeremy K
Abrupt transitions between prefrontal neural ensemble states accompany behavioral transitions during rule learning Journal Article
Neuron, 66 (3), pp. 438–448, 2010, ISSN: 08966273.
@article{Durstewitz2010,
title = {Abrupt transitions between prefrontal neural ensemble states accompany behavioral transitions during rule learning},
author = {Daniel Durstewitz and Nicole M Vittoz and Stan B Floresco and Jeremy K Seamans},
url = {http://dx.doi.org/10.1016/j.neuron.2010.03.029},
doi = {10.1016/j.neuron.2010.03.029},
issn = {08966273},
year = {2010},
date = {2010-01-01},
journal = {Neuron},
volume = {66},
number = {3},
pages = {438--448},
publisher = {Elsevier Ltd},
abstract = {One of the most intriguing aspects of adaptive behavior involves the inference of regularities and rules in ever-changing environments. Rules are often deduced through evidence-based learning which relies on the prefrontal cortex (PFC). This is a highly dynamic process, evolving trial by trial and therefore may not be adequately captured by averaging single-unit responses over numerous repetitions. Here, we employed advanced statistical techniques to visualize the trajectories of ensembles of simultaneously recorded medial PFC neurons on a trial-by-trial basis as rats deduced a novel rule in a set-shifting task. Neural populations formed clearly distinct and lasting representations of familiar and novel rules by entering unique network states. During rule acquisition, the recorded ensembles often exhibited abrupt transitions, rather than evolving continuously, in tight temporal relation to behavioral performance shifts. These results support the idea that rule learning is an evidence-based decision process, perhaps accompanied by moments of sudden insight. © 2010 Elsevier Inc.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
One of the most intriguing aspects of adaptive behavior involves the inference of regularities and rules in ever-changing environments. Rules are often deduced through evidence-based learning which relies on the prefrontal cortex (PFC). This is a highly dynamic process, evolving trial by trial and therefore may not be adequately captured by averaging single-unit responses over numerous repetitions. Here, we employed advanced statistical techniques to visualize the trajectories of ensembles of simultaneously recorded medial PFC neurons on a trial-by-trial basis as rats deduced a novel rule in a set-shifting task. Neural populations formed clearly distinct and lasting representations of familiar and novel rules by entering unique network states. During rule acquisition, the recorded ensembles often exhibited abrupt transitions, rather than evolving continuously, in tight temporal relation to behavioral performance shifts. These results support the idea that rule learning is an evidence-based decision process, perhaps accompanied by moments of sudden insight. © 2010 Elsevier Inc.
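A toy version of spotting such abrupt ensemble transitions is a single change-point scan over population activity (this simple mean-shift criterion is only a stand-in for the paper's statistical machinery):

    import numpy as np

    def best_changepoint(X):
        # Split a (T x N) series where summed within-segment variance is minimal.
        T = X.shape[0]
        costs = [((X[:t] - X[:t].mean(0)) ** 2).sum()
                 + ((X[t:] - X[t:].mean(0)) ** 2).sum()
                 for t in range(2, T - 2)]
        return int(np.argmin(costs)) + 2

    rng = np.random.default_rng(6)
    X = rng.normal(size=(100, 12))   # 100 trials x 12 units (synthetic)
    X[60:] += 1.0                    # abrupt ensemble-state shift at trial 60
    print(best_changepoint(X))       # ~60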
2009
Durstewitz, Daniel
Implications of synaptic biophysics for recurrent network dynamics and active memory Journal Article
Neural Networks, 22 (8), pp. 1189–1200, 2009, ISSN: 08936080.
@article{Durstewitz2009,
title = {Implications of synaptic biophysics for recurrent network dynamics and active memory},
author = {Daniel Durstewitz},
doi = {10.1016/j.neunet.2009.07.016},
issn = {08936080},
year = {2009},
date = {2009-10-01},
journal = {Neural Networks},
volume = {22},
number = {8},
pages = {1189--1200},
abstract = {In cortical networks, synaptic excitation is mediated by AMPA- and NMDA-type receptors. NMDA synaptic potentials differ from AMPA potentials with regard to peak current, time course, and a strong voltage-dependent nonlinearity. Here we illustrate, based on empirical and computational findings, that these specific biophysical properties may have profound implications for the dynamics of cortical networks and, via these dynamics, for cognitive functions like active memory. The discussion will be led along a minimal set of neural equations introduced to capture the essential dynamics of the various phenomena described. NMDA currents could establish cortical bistability and may provide the relatively constant synaptic drive needed to robustly maintain enhanced levels of activity during working memory epochs, freeing fast AMPA currents for other computational purposes. Perhaps more importantly, variations in NMDA synaptic input (due to their biophysical particularities) control the dynamical regime within which single neurons and networks reside. By provoking bursting, chaotic irregularity, and coherent oscillations, their major effect may be on the temporal pattern of spiking activity, rather than on average firing rate. During active memory, neurons may thus be pushed into a spiking regime that harbors complex temporal structure, potentially optimal for the encoding and processing of temporal sequence information. These observations provide a qualitatively different view on the role of synaptic excitation in neocortical dynamics than entailed by many more abstract models. In this sense, this article is a plea for taking the specific biophysics of real neurons and synapses seriously when trying to account for the neurobiology of cognition. © 2009 Elsevier Ltd. All rights reserved.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
In cortical networks, synaptic excitation is mediated by AMPA- and NMDA-type receptors. NMDA synaptic potentials differ from AMPA potentials with regard to peak current, time course, and a strong voltage-dependent nonlinearity. Here we illustrate, based on empirical and computational findings, that these specific biophysical properties may have profound implications for the dynamics of cortical networks and, via these dynamics, for cognitive functions like active memory. The discussion will be led along a minimal set of neural equations introduced to capture the essential dynamics of the various phenomena described. NMDA currents could establish cortical bistability and may provide the relatively constant synaptic drive needed to robustly maintain enhanced levels of activity during working memory epochs, freeing fast AMPA currents for other computational purposes. Perhaps more importantly, variations in NMDA synaptic input (due to their biophysical particularities) control the dynamical regime within which single neurons and networks reside. By provoking bursting, chaotic irregularity, and coherent oscillations, their major effect may be on the temporal pattern of spiking activity, rather than on average firing rate. During active memory, neurons may thus be pushed into a spiking regime that harbors complex temporal structure, potentially optimal for the encoding and processing of temporal sequence information. These observations provide a qualitatively different view on the role of synaptic excitation in neocortical dynamics than entailed by many more abstract models. In this sense, this article is a plea for taking the specific biophysics of real neurons and synapses seriously when trying to account for the neurobiology of cognition. © 2009 Elsevier Ltd. All rights reserved.
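The voltage-dependent nonlinearity mentioned above is commonly modeled with the standard Jahr-Stevens magnesium-block factor; the sketch below uses that textbook form with a nominal 1 mM extracellular Mg2+ (illustrative, not code from the article):

```python
import numpy as np

def nmda_unblocked(v, mg=1.0):
    """Fraction of NMDA conductance unblocked at membrane potential v (mV),
    Jahr-Stevens form: 1 / (1 + [Mg2+]/3.57 * exp(-0.062 * v)).
    This steep voltage dependence is the nonlinearity that lets NMDA input
    support bistability, in contrast to the roughly linear AMPA current."""
    return 1.0 / (1.0 + (mg / 3.57) * np.exp(-0.062 * v))

for v in (-80.0, -60.0, -40.0, -20.0, 0.0):
    print(f"V = {v:6.1f} mV  unblocked fraction = {nmda_unblocked(v):.3f}")
```

Near rest almost all of the NMDA conductance is blocked; with depolarization the unblocked fraction rises steeply, which is what allows NMDA input to gate the dynamical regime of the network.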
2008
Seamans, Jeremy K; Lapish, Christopher C; Durstewitz, Daniel
Comparing the prefrontal cortex of rats and primates: Insights from electrophysiology Journal Article
Neurotoxicity Research, 14, pp. 249-262, 2008.
@article{Seamans2008,
title = {Comparing the prefrontal cortex of rats and primates: Insights from electrophysiology},
author = {Seamans, J. K. and Lapish, C. C. and Durstewitz, D.},
url = {https://www.ncbi.nlm.nih.gov/pubmed/19073430},
year = {2008},
date = {2008-10-14},
journal = {Neurotoxicity Research},
volume = {14},
pages = {249-262},
abstract = {There is a long-standing debate about whether rats have what could be considered a prefrontal cortex (PFC) and, if they do, what its primate homologue is. Anatomical evidence supports the view that the rat medial PFC is related to both the primate anterior cingulate cortex (ACC) and the dorsolateral PFC. Functionally the primate and human ACC are believed to be involved in the monitoring of actions and outcomes to guide decisions especially in challenging situations where cognitive conflict and errors arise. In contrast, the dorsolateral PFC is responsible for the maintenance and manipulation of goal-related items in memory in the service of planning, problem solving, and predicting forthcoming events. Recent multiple single-unit recording studies in rats have reported strong correlates of motor planning, movement and reward anticipation analogous to what has been observed in the primate ACC. There is also emerging evidence that rats may partly encode information over delays using body posture or variations in running path as embodied strategies, and that these are the aspects tracked by medial PFC neurons. The primate PFC may have elaborated on these rudimentary functions by carrying them over to more abstract levels of mental representation, more independent from somatic or other external mnemonic cues, and allowing manipulation of mental contents outside specific task contexts. Therefore, from an electrophysiological and computational perspective, the rat medial PFC seems to combine elements of the primate ACC and dorsolateral PFC at a rudimentary level. In primates, these functions may have formed the building blocks required for abstract rule encoding during the expansion of the cortex dorsolaterally.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
There is a long-standing debate about whether rats have what could be considered a prefrontal cortex (PFC) and, if they do, what its primate homologue is. Anatomical evidence supports the view that the rat medial PFC is related to both the primate anterior cingulate cortex (ACC) and the dorsolateral PFC. Functionally the primate and human ACC are believed to be involved in the monitoring of actions and outcomes to guide decisions especially in challenging situations where cognitive conflict and errors arise. In contrast, the dorsolateral PFC is responsible for the maintenance and manipulation of goal-related items in memory in the service of planning, problem solving, and predicting forthcoming events. Recent multiple single-unit recording studies in rats have reported strong correlates of motor planning, movement and reward anticipation analogous to what has been observed in the primate ACC. There is also emerging evidence that rats may partly encode information over delays using body posture or variations in running path as embodied strategies, and that these are the aspects tracked by medial PFC neurons. The primate PFC may have elaborated on these rudimentary functions by carrying them over to more abstract levels of mental representation, more independent from somatic or other external mnemonic cues, and allowing manipulation of mental contents outside specific task contexts. Therefore, from an electrophysiological and computational perspective, the rat medial PFC seems to combine elements of the primate ACC and dorsolateral PFC at a rudimentary level. In primates, these functions may have formed the building blocks required for abstract rule encoding during the expansion of the cortex dorsolaterally.
Lapish, Christopher C; Durstewitz, Daniel; Chandler, L Judson; Seamans, Jeremy K
Successful choice behavior is associated with distinct and coherent network states in anterior cingulate cortex Journal Article
Proceedings of the National Academy of Sciences, 2008.
@article{Lapish2008,
title = {Successful choice behavior is associated with distinct and coherent network states in anterior cingulate cortex},
author = {Christopher C. Lapish and Daniel Durstewitz and L. Judson Chandler and Jeremy K. Seamans},
url = {https://doi.org/10.1073/pnas.0804045105},
doi = {10.1073/pnas.0804045105},
year = {2008},
date = {2008-08-19},
journal = {Proceedings of the National Academy of Sciences},
abstract = {Successful decision making requires an ability to monitor contexts, actions, and outcomes. The anterior cingulate cortex (ACC) is thought to be critical for these functions, monitoring and guiding decisions especially in challenging situations involving conflict and errors. A number of different single-unit correlates have been observed in the ACC that reflect the diverse cognitive components involved. Yet how ACC neurons function as an integrated network is poorly understood. Here we show, using advanced population analysis of multiple single-unit recordings from the rat ACC during performance of an ecologically valid decision-making task, that ensembles of neurons move through different coherent and dissociable states as the cognitive requirements of the task change. This organization into distinct network patterns with respect to both firing-rate changes and correlations among units broke down during trials with numerous behavioral errors, especially at choice points of the task. These results point to an underlying functional organization into cell assemblies in the ACC that may monitor choices, outcomes, and task contexts, thus tracking the animal's progression through “task space.”},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Successful decision making requires an ability to monitor contexts, actions, and outcomes. The anterior cingulate cortex (ACC) is thought to be critical for these functions, monitoring and guiding decisions especially in challenging situations involving conflict and errors. A number of different single-unit correlates have been observed in the ACC that reflect the diverse cognitive components involved. Yet how ACC neurons function as an integrated network is poorly understood. Here we show, using advanced population analysis of multiple single-unit recordings from the rat ACC during performance of an ecologically valid decision-making task, that ensembles of neurons move through different coherent and dissociable states as the cognitive requirements of the task change. This organization into distinct network patterns with respect to both firing-rate changes and correlations among units broke down during trials with numerous behavioral errors, especially at choice points of the task. These results point to an underlying functional organization into cell assemblies in the ACC that may monitor choices, outcomes, and task contexts, thus tracking the animal's progression through “task space.”
Durstewitz, Daniel; Seamans, Jeremy K
The dual-state theory of prefrontal cortex dopamine function with relevance to COMT genotypes and schizophrenia Journal Article
Biological Psychiatry, 2008.
@article{Durstewitz2008b,
title = {The dual-state theory of prefrontal cortex dopamine function with relevance to COMT genotypes and schizophrenia},
author = {Daniel Durstewitz and Jeremy K Seamans},
url = {https://pubmed.ncbi.nlm.nih.gov/18620336/},
doi = {10.1016/j.biopsych.2008.05.015},
year = {2008},
date = {2008-07-11},
journal = {Biological Psychiatry},
abstract = {There is now general consensus that at least some of the cognitive deficits in schizophrenia are related to dysfunctions in the prefrontal cortex (PFC) dopamine (DA) system. At the cellular and synaptic level, the effects of DA in PFC via D1- and D2-class receptors are highly complex, often apparently opposing, and hence difficult to understand with regard to their functional implications. Biophysically realistic computational models have provided valuable insights into how the effects of DA on PFC neurons and synaptic currents as measured in vitro link up to the neural network and cognitive levels. They suggest the existence of two discrete dynamical regimes, a D1-dominated state characterized by a high energy barrier among different network patterns that favors robust online maintenance of information and a D2-dominated state characterized by a low energy barrier that is beneficial for flexible and fast switching among representational states. These predictions are consistent with a variety of electrophysiological, neuroimaging, and behavioral results in humans and nonhuman species. Moreover, these biophysically based models predict that imbalanced D1:D2 receptor activation causing extremely low or extremely high energy barriers among activity states could lead to the emergence of cognitive, positive, and negative symptoms observed in schizophrenia. Thus, combined experimental and computational approaches hold the promise of allowing a detailed mechanistic understanding of how DA alters information processing in normal and pathological conditions, thereby potentially providing new routes for the development of pharmacological treatments for schizophrenia. },
keywords = {},
pubstate = {published},
tppubtype = {article}
}
There is now general consensus that at least some of the cognitive deficits in schizophrenia are related to dysfunctions in the prefrontal cortex (PFC) dopamine (DA) system. At the cellular and synaptic level, the effects of DA in PFC via D1- and D2-class receptors are highly complex, often apparently opposing, and hence difficult to understand with regard to their functional implications. Biophysically realistic computational models have provided valuable insights into how the effects of DA on PFC neurons and synaptic currents as measured in vitro link up to the neural network and cognitive levels. They suggest the existence of two discrete dynamical regimes, a D1-dominated state characterized by a high energy barrier among different network patterns that favors robust online maintenance of information and a D2-dominated state characterized by a low energy barrier that is beneficial for flexible and fast switching among representational states. These predictions are consistent with a variety of electrophysiological, neuroimaging, and behavioral results in humans and nonhuman species. Moreover, these biophysically based models predict that imbalanced D1:D2 receptor activation causing extremely low or extremely high energy barriers among activity states could lead to the emergence of cognitive, positive, and negative symptoms observed in schizophrenia. Thus, combined experimental and computational approaches hold the promise of allowing a detailed mechanistic understanding of how DA alters information processing in normal and pathological conditions, thereby potentially providing new routes for the development of pharmacological treatments for schizophrenia.
Durstewitz, D; Deco, G
Computational significance of transient dynamics in cortical networks Journal Article
European Journal of Neuroscience, 27, pp. 217-227, 2008.
@article{Durstewitz2008,
title = {Computational significance of transient dynamics in cortical networks},
author = {D. Durstewitz and G. Deco},
url = {https://www.ncbi.nlm.nih.gov/pubmed/18093174},
year = {2008},
date = {2008-02-27},
journal = {European Journal of Neuroscience},
volume = {27},
pages = {217-27},
abstract = {Neural responses are most often characterized in terms of the sets of environmental or internal conditions or stimuli with which their firing rate increases or decreases are correlated. Their transient (nonstationary) temporal profiles of activity have received comparatively less attention. Similarly, the computational framework of attractor neural networks puts most emphasis on the representational or computational properties of the stable states of a neural system. Here we review a couple of neurophysiological observations and computational ideas that shift the focus to the transient dynamics of neural systems. We argue that there are many situations in which the transient neural behaviour, while hopping between different attractor states or moving along 'attractor ruins', carries most of the computational and/or behavioural significance, rather than the attractor states eventually reached. Such transients may be related to the computation of temporally precise predictions or the probabilistic transitions among choice options, accounting for Weber's law in decision-making tasks. Finally, we conclude with a more general perspective on the role of transient dynamics in the brain, promoting the view that brain activity is characterized by a high-dimensional chaotic ground state from which transient spatiotemporal patterns (metastable states) briefly emerge. Neural computation has to exploit the itinerant dynamics between these states.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Neural responses are most often characterized in terms of the sets of environmental or internal conditions or stimuli with which their firing rate increases or decreases are correlated. Their transient (nonstationary) temporal profiles of activity have received comparatively less attention. Similarly, the computational framework of attractor neural networks puts most emphasis on the representational or computational properties of the stable states of a neural system. Here we review a couple of neurophysiological observations and computational ideas that shift the focus to the transient dynamics of neural systems. We argue that there are many situations in which the transient neural behaviour, while hopping between different attractor states or moving along 'attractor ruins', carries most of the computational and/or behavioural significance, rather than the attractor states eventually reached. Such transients may be related to the computation of temporally precise predictions or the probabilistic transitions among choice options, accounting for Weber's law in decision-making tasks. Finally, we conclude with a more general perspective on the role of transient dynamics in the brain, promoting the view that brain activity is characterized by a high-dimensional chaotic ground state from which transient spatiotemporal patterns (metastable states) briefly emerge. Neural computation has to exploit the itinerant dynamics between these states.
2007
Durstewitz, D; Gabriel, T
Dynamical basis of irregular spiking in NMDA-driven prefrontal cortex neurons Journal Article
Cerebral Cortex, 17, pp. 894-908, 2007.
@article{Durstewitz2007,
title = {Dynamical basis of irregular spiking in NMDA-driven prefrontal cortex neurons},
author = {Durstewitz, D. and Gabriel, T.},
url = {https://www.ncbi.nlm.nih.gov/pubmed/16740581},
year = {2007},
date = {2007-04-17},
journal = {Cerebral Cortex},
volume = {17},
pages = {894-908},
abstract = {Slow N-Methyl-D-aspartic acid (NMDA) synaptic currents are assumed to strongly contribute to the persistently elevated firing rates observed in prefrontal cortex (PFC) during working memory. During persistent activity, spiking of many neurons is highly irregular. Here we report that highly irregular firing can be induced through a combination of NMDA- and dopamine D1 receptor agonists applied to adult PFC neurons in vitro. The highest interspike-interval (ISI) variability occurred in a transition regime where the subthreshold membrane potential distribution shifts from mono- to bimodality, while neurons with clearly mono- or bimodal distributions fired much more regularly. Predictability within irregular ISI series was significantly higher than expected from a noise-driven linear process, indicating that it might best be described through complex (potentially chaotic) nonlinear deterministic processes. Accordingly, the phenomena observed in vitro could be reproduced in purely deterministic biophysical model neurons. High spiking irregularity in these models emerged within a chaotic, close-to-bifurcation regime characterized by a shift of the membrane potential distribution from mono- to bimodality and by similar ISI return maps as observed in vitro. The nonlinearity of NMDA conductances was crucial for inducing this regime. NMDA-induced irregular dynamics may have important implications for computational processes during working memory and neural coding.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Slow N-Methyl-D-aspartic acid (NMDA) synaptic currents are assumed to strongly contribute to the persistently elevated firing rates observed in prefrontal cortex (PFC) during working memory. During persistent activity, spiking of many neurons is highly irregular. Here we report that highly irregular firing can be induced through a combination of NMDA- and dopamine D1 receptor agonists applied to adult PFC neurons in vitro. The highest interspike-interval (ISI) variability occurred in a transition regime where the subthreshold membrane potential distribution shifts from mono- to bimodality, while neurons with clearly mono- or bimodal distributions fired much more regularly. Predictability within irregular ISI series was significantly higher than expected from a noise-driven linear process, indicating that it might best be described through complex (potentially chaotic) nonlinear deterministic processes. Accordingly, the phenomena observed in vitro could be reproduced in purely deterministic biophysical model neurons. High spiking irregularity in these models emerged within a chaotic, close-to-bifurcation regime characterized by a shift of the membrane potential distribution from mono- to bimodality and by similar ISI return maps as observed in vitro. The nonlinearity of NMDA conductances was crucial for inducing this regime. NMDA-induced irregular dynamics may have important implications for computational processes during working memory and neural coding.
Lapish, Christopher C; Kroener, Sven; Durstewitz, Daniel; Lavin, Antonieta; Seamans, Jeremy K
The ability of the mesocortical dopamine system to operate in distinct temporal modes Journal Article
Psychopharmacology, 2007.
@article{Lapish2007,
title = {The ability of the mesocortical dopamine system to operate in distinct temporal modes},
author = {Christopher C. Lapish and Sven Kroener and Daniel Durstewitz and Antonieta Lavin and Jeremy K. Seamans},
url = {https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5509053/},
doi = {10.1007/s00213-006-0527-8},
year = {2007},
date = {2007-04-01},
journal = {Psychopharmacology},
abstract = {Phasic bursting of midbrain DA neurons may provide temporally precise information about the mismatch between expected and actual rewards (prediction errors) that has been hypothesized to serve as a learning signal in efferent regions. However, because DA acts as a relatively slow modulator of cortical neurotransmission, it is unclear whether DA can indeed act to precisely transmit prediction errors to prefrontal cortex (PFC). In light of recent physiological and anatomical evidence, we propose that corelease of glutamate from DA and/or non-DA neurons in the VTA could serve to transmit this temporally precise signal. In contrast, DA acts in a protracted manner to provide spatially and temporally diffuse modulation of PFC pyramidal neurons and interneurons. This modulation occurs first via a relatively rapid depolarization of fast-spiking interneurons that acts on the order of seconds. This is followed by a more protracted modulation of a variety of other ionic currents on timescales of minutes to hours, which may bias the manner in which cortical networks process information. However, the prolonged actions of DA may be curtailed by counteracting influences, which likely include opposing actions at D1 and D2-like receptors that have been shown to be time-and concentration-dependent. In this way, the mesocortical DA system optimizes the characteristics of glutamate, GABA, and DA neurotransmission both within the midbrain and cortex to communicate temporally precise information and to modulate network activity patterns on prolonged timescales.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Phasic bursting of midbrain DA neurons may provide temporally precise information about the mismatch between expected and actual rewards (prediction errors) that has been hypothesized to serve as a learning signal in efferent regions. However, because DA acts as a relatively slow modulator of cortical neurotransmission, it is unclear whether DA can indeed act to precisely transmit prediction errors to prefrontal cortex (PFC). In light of recent physiological and anatomical evidence, we propose that corelease of glutamate from DA and/or non-DA neurons in the VTA could serve to transmit this temporally precise signal. In contrast, DA acts in a protracted manner to provide spatially and temporally diffuse modulation of PFC pyramidal neurons and interneurons. This modulation occurs first via a relatively rapid depolarization of fast-spiking interneurons that acts on the order of seconds. This is followed by a more protracted modulation of a variety of other ionic currents on timescales of minutes to hours, which may bias the manner in which cortical networks process information. However, the prolonged actions of DA may be curtailed by counteracting influences, which likely include opposing actions at D1 and D2-like receptors that have been shown to be time-and concentration-dependent. In this way, the mesocortical DA system optimizes the characteristics of glutamate, GABA, and DA neurotransmission both within the midbrain and cortex to communicate temporally precise information and to modulate network activity patterns on prolonged timescales.
2006
Durstewitz, Daniel; Seamans, Jeremy K
Beyond bistability: Biophysics and temporal dynamics of working memory Journal Article
Neuroscience, 2006.
@article{Durstewitz2006,
title = {Beyond bistability: Biophysics and temporal dynamics of working memory},
author = {Daniel Durstewitz and Jeremy K Seamans},
url = {https://doi.org/10.1016/j.neuroscience.2005.06.094},
doi = {10.1016/j.neuroscience.2005.06.094},
year = {2006},
date = {2006-04-28},
journal = {Neuroscience},
abstract = {Working memory has often been modeled and conceptualized as a kind of binary (bistable) memory switch, where stimuli turn on plateau-like persistent activity in subsets of cells, in line with many in vivo electrophysiological reports. A potentially related form of bistability, termed up- and down-states, has been studied with regard to its synaptic and ionic basis in vivo and in reduced cortical preparations. Also single cell mechanisms for producing bistability have been proposed and investigated in brain slices and computationally. Recently, however, it has been emphasized that clear plateau-like bistable activity is rather rare during working memory tasks, and that neurons exhibit a multitude of different temporally unfolding activity profiles and temporal structure within their spiking dynamics. Hence, working memory seems to be a highly dynamical neural process with yet unknown mappings from dynamical to computational properties. Empirical findings on ramping activity profiles and temporal structure will be reviewed, as well as neural models that attempt to account for it and its computational significance. Furthermore, recent in vivo, neural culture, and in vitro preparations will be discussed that offer new possibilities for studying the biophysical mechanisms underlying computational processes during working memory. These preparations have revealed additional evidence for temporal structure and spatio-temporally organized attractor states in cortical networks, as well as for specific computational properties that may characterize synaptic processing during high-activity states as during working memory. Together such findings may lay the foundations for highly dynamical theories of working memory based on biophysical principles.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Working memory has often been modeled and conceptualized as a kind of binary (bistable) memory switch, where stimuli turn on plateau-like persistent activity in subsets of cells, in line with many in vivo electrophysiological reports. A potentially related form of bistability, termed up- and down-states, has been studied with regard to its synaptic and ionic basis in vivo and in reduced cortical preparations. Also single cell mechanisms for producing bistability have been proposed and investigated in brain slices and computationally. Recently, however, it has been emphasized that clear plateau-like bistable activity is rather rare during working memory tasks, and that neurons exhibit a multitude of different temporally unfolding activity profiles and temporal structure within their spiking dynamics. Hence, working memory seems to be a highly dynamical neural process with yet unknown mappings from dynamical to computational properties. Empirical findings on ramping activity profiles and temporal structure will be reviewed, as well as neural models that attempt to account for it and its computational significance. Furthermore, recent in vivo, neural culture, and in vitro preparations will be discussed that offer new possibilities for studying the biophysical mechanisms underlying computational processes during working memory. These preparations have revealed additional evidence for temporal structure and spatio-temporally organized attractor states in cortical networks, as well as for specific computational properties that may characterize synaptic processing during high-activity states as during working memory. Together such findings may lay the foundations for highly dynamical theories of working memory based on biophysical principles.
2004
Durstewitz, D
Neural representation of interval time Journal Article
Neuroreport, 15, pp. 745-749, 2004.
@article{Durstewitz2004,
title = {Neural representation of interval time},
author = {D. Durstewitz},
url = {https://www.ncbi.nlm.nih.gov/pubmed/15073507},
year = {2004},
date = {2004-04-09},
journal = {Neuroreport},
volume = {15},
pages = {745-749},
abstract = {Animals can predict the time of occurrence of a forthcoming event relative to a preceding stimulus, i.e. the interval time between those two, given previous learning experience with the temporal contingency between them. Accumulating evidence suggests that a particular pattern of neural activity observed during tasks involving fixed temporal intervals might carry interval time information: the activity of some cortical and subcortical neurons ramps up slowly and linearly during the interval, like a temporal integrator, and peaks around the time at which the event is due to occur. The slope of this climbing activity, and hence the peak time, adjusts to the length of a temporal interval during repetitive experience with it. Various neural mechanisms for producing climbing activity with variable slopes, representing the length of learned intervals, are discussed.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Animals can predict the time of occurrence of a forthcoming event relative to a preceding stimulus, i.e. the interval time between those two, given previous learning experience with the temporal contingency between them. Accumulating evidence suggests that a particular pattern of neural activity observed during tasks involving fixed temporal intervals might carry interval time information: the activity of some cortical and subcortical neurons ramps up slowly and linearly during the interval, like a temporal integrator, and peaks around the time at which the event is due to occur. The slope of this climbing activity, and hence the peak time, adjusts to the length of a temporal interval during repetitive experience with it. Various neural mechanisms for producing climbing activity with variable slopes, representing the length of learned intervals, are discussed.
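The ramp-to-threshold reading of climbing activity can be stated in a few lines (rates, threshold, and intervals below are arbitrary illustrative values, not fitted to data): the learned interval fixes the slope, and the threshold crossing then reproduces the interval.

```python
import numpy as np

def climbing_rate(t, interval, r0=2.0, r_peak=40.0):
    """Linear climbing activity: the rate ramps from r0 at the cue to
    r_peak at the learned interval, so the slope encodes its length."""
    slope = (r_peak - r0) / interval
    return r0 + slope * np.minimum(t, interval)

t = np.arange(0.0, 3.0, 0.05)                # time since cue (s)
for interval in (1.0, 2.0):                  # two learned intervals
    r = climbing_rate(t, interval)
    t_cross = t[np.argmax(r >= 39.9)]        # first threshold crossing
    print(f"learned interval {interval:.1f} s -> threshold at {t_cross:.2f} s")
```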
2003
Durstewitz, D
Self-organizing neural integrator predicts interval times through climbing activity Journal Article
Journal of Neuroscience, 23, pp. 5342-5353, 2003.
@article{Durstewitz2003b,
title = {Self-organizing neural integrator predicts interval times through climbing activity},
author = {D. Durstewitz},
url = {https://www.jneurosci.org/content/23/12/5342},
year = {2003},
date = {2003-06-15},
journal = {Journal of Neuroscience},
volume = {23},
pages = {5342-5353},
abstract = {Mammals can reliably predict the time of occurrence of an expected event after a predictive stimulus. Climbing activity is a prominent profile of neural activity observed in prefrontal cortex and other brain areas that is related to the anticipation of forthcoming events. Climbing activity might span intervals from hundreds of milliseconds to tens of seconds and has a number of properties that make it a plausible candidate for representing interval time. A biophysical model is presented that produces climbing, temporal integrator-like activity with variable slopes as observed empirically, through a single-cell positive feedback loop between firing rate, spike-driven Ca2+ influx, and Ca2+-activated inward currents. It is shown that the fine adjustment of this feedback loop might emerge in a self-organizing manner if the cell can use the variance in intracellular Ca2+ fluctuations as a learning signal. This self-organizing process is based on the present observation that the variance of the intracellular Ca2+ concentration and the variance of the neural firing rate and of activity-dependent conductances reach a maximum as the biophysical parameters of a cell approach a configuration required for temporal integration. Thus, specific mechanisms are proposed for (1) how neurons might represent interval times of variable length and (2) how neurons could acquire the biophysical properties that enable them to work as timers.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Mammals can reliably predict the time of occurrence of an expected event after a predictive stimulus. Climbing activity is a prominent profile of neural activity observed in prefrontal cortex and other brain areas that is related to the anticipation of forthcoming events. Climbing activity might span intervals from hundreds of milliseconds to tens of seconds and has a number of properties that make it a plausible candidate for representing interval time. A biophysical model is presented that produces climbing, temporal integrator-like activity with variable slopes as observed empirically, through a single-cell positive feedback loop between firing rate, spike-driven Ca2+ influx, and Ca2+-activated inward currents. It is shown that the fine adjustment of this feedback loop might emerge in a self-organizing manner if the cell can use the variance in intracellular Ca2+ fluctuations as a learning signal. This self-organizing process is based on the present observation that the variance of the intracellular Ca2+ concentration and the variance of the neural firing rate and of activity-dependent conductances reach a maximum as the biophysical parameters of a cell approach a configuration required for temporal integration. Thus, specific mechanisms are proposed for (1) how neurons might represent interval times of variable length and (2) how neurons could acquire the biophysical properties that enable them to work as timers.
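A minimal caricature of the feedback loop described above, with invented gains and time constants: firing drives Ca2+ influx, and a Ca2+-activated inward current feeds back onto the rate. At a loop gain of exactly one the cell integrates its input, producing linear climbing activity; the proposed self-organizing rule amounts to tuning the gain toward this point using the variance of the Ca2+ fluctuations.

```python
# Rate <-> Ca2+ positive feedback: spikes drive Ca2+ influx, and Ca2+
# gates an inward current that raises the rate again. g < 1 leaks,
# g = 1 integrates (linear climbing activity), g > 1 runs away.
dt, tau_ca, r_in = 0.01, 1.0, 5.0   # step (s), Ca2+ time constant (s), drive (Hz)

for g in (0.8, 1.0, 1.2):           # feedback gain around the integration point
    r, ca = 0.0, 0.0
    for _ in range(int(3.0 / dt)):  # simulate 3 s
        ca += dt * (r - ca) / tau_ca   # Ca2+ tracks the firing rate
        r = r_in + g * ca              # Ca2+-gated inward current feeds back
    print(f"gain {g:.1f}: rate after 3 s = {r:6.2f} Hz")
```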
Self-Organizing Neural Integrator Predicts Interval Times Journal Article
Journal of Neuroscience, 23 (12), pp. 5342–5353, 2003.
@article{Durstewitz2003,
title = {Self-Organizing Neural Integrator Predicts Interval Times},
author = {Daniel Durstewitz},
year = {2003},
date = {2003-01-01},
journal = {Journal of Neuroscience},
volume = {23},
number = {12},
pages = {5342--5353},
abstract = {Mammals can reliably predict the time of occurrence of an expected event after a predictive stimulus. Climbing activity is a prominent profile of neural activity observed in prefrontal cortex and other brain areas that is related to the anticipation of forthcoming events. Climbing activity might span intervals from hundreds of milliseconds to tens of seconds and has a number of properties that make it a plausible candidate for representing interval time. A biophysical model is presented that produces climbing, temporal integrator-like activity with variable slopes as observed empirically, through a single-cell positive feedback loop between firing rate, spike-driven Ca2+ influx, and Ca2+-activated inward currents. It is shown that the fine adjustment of this feedback loop might emerge in a self-organizing manner if the cell can use the variance in intraceflular Ca2+ fluctuations as a learning signal. This self-organizing process is based on the present observation that the variance of the intraceffular Ca2+ concentration and the variance of the neural firing rate and of activity-dependent conductances reach a maximum as the biophysical parameters of a cell approach a configuration required for temporal integration. Thus, specific mechanisms are proposed for (1) how neurons might represent interval times of variable length and (2) how neurons could acquire the biophysical properties that enable them to work as timers.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Mammals can reliably predict the time of occurrence of an expected event after a predictive stimulus. Climbing activity is a prominent profile of neural activity observed in prefrontal cortex and other brain areas that is related to the anticipation of forthcoming events. Climbing activity might span intervals from hundreds of milliseconds to tens of seconds and has a number of properties that make it a plausible candidate for representing interval time. A biophysical model is presented that produces climbing, temporal integrator-like activity with variable slopes as observed empirically, through a single-cell positive feedback loop between firing rate, spike-driven Ca2+ influx, and Ca2+-activated inward currents. It is shown that the fine adjustment of this feedback loop might emerge in a self-organizing manner if the cell can use the variance in intraceflular Ca2+ fluctuations as a learning signal. This self-organizing process is based on the present observation that the variance of the intraceffular Ca2+ concentration and the variance of the neural firing rate and of activity-dependent conductances reach a maximum as the biophysical parameters of a cell approach a configuration required for temporal integration. Thus, specific mechanisms are proposed for (1) how neurons might represent interval times of variable length and (2) how neurons could acquire the biophysical properties that enable them to work as timers.
2002
Durstewitz, D; Seamans, J K
The computational role of dopamine D1 receptors in working memory Journal Article
Neural Networks, 15, pp. 561-572, 2002.
@article{Durstewitz2002,
title = {The computational role of dopamine D1 receptors in working memory},
author = {D. Durstewitz and J.K. Seamans},
url = {https://www.ncbi.nlm.nih.gov/pubmed/12371512},
year = {2002},
date = {2002-06-01},
journal = {Neural Networks},
volume = {15},
pages = {561-572},
abstract = {The prefrontal cortex (PFC) is essential for working memory, which is the ability to transiently hold and manipulate information necessary for generating forthcoming action. PFC neurons actively encode working memory information via sustained firing patterns. Dopamine via D1 receptors potently modulates sustained activity of PFC neurons and performance in working memory tasks. In vitro patch-clamp data have revealed many different cellular actions of dopamine on PFC neurons and synapses. These effects were simulated using realistic networks of recurrently connected assemblies of PFC neurons. Simulated D1-mediated modulation led to a deepening and widening of the basins of attraction of high (working memory) activity states of the network, while at the same time background activity was depressed. As a result, self-sustained activity was more robust to distracting stimuli and noise. In this manner, D1 receptor stimulation might regulate the extent to which PFC network activity is focused on a particular goal state versus being open to new goals or information unrelated to the current goal.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
The prefrontal cortex (PFC) is essential for working memory, which is the ability to transiently hold and manipulate information necessary for generating forthcoming action. PFC neurons actively encode working memory information via sustained firing patterns. Dopamine via D1 receptors potently modulates sustained activity of PFC neurons and performance in working memory tasks. In vitro patch-clamp data have revealed many different cellular actions of dopamine on PFC neurons and synapses. These effects were simulated using realistic networks of recurrently connected assemblies of PFC neurons. Simulated D1-mediated modulation led to a deepening and widening of the basins of attraction of high (working memory) activity states of the network, while at the same time background activity was depressed. As a result, self-sustained activity was more robust to distracting stimuli and noise. In this manner, D1 receptor stimulation might regulate the extent to which PFC network activity is focused on a particular goal state versus being open to new goals or information unrelated to the current goal.
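The phrase "deepening and widening of the basins of attraction" has a compact one-dimensional illustration (a toy rate model with a logistic gain standing in for the level of D1 stimulation; all parameters are invented, not the paper's biophysical network):

```python
import numpy as np

def escape_barrier(gain, w=2.0, theta=1.0):
    """Barrier around the low-activity attractor of the toy rate model
    dr/dt = -r + f(w*r), where f is a logistic whose slope ('gain')
    stands in for the level of D1 stimulation."""
    r = np.linspace(0.0, 1.0, 4001)
    f = 1.0 / (1.0 + np.exp(-gain * (w * r - theta)))
    U = -np.cumsum(-r + f) * (r[1] - r[0])  # energy-like potential
    mid = len(r) // 2                       # unstable fixed point at r = 0.5
    return U[mid] - U[:mid].min()           # depth of the low-state basin

for gain in (3.0, 6.0):                     # weak vs strong D1-like modulation
    print(f"gain {gain:.0f}: escape barrier = {escape_barrier(gain):.3f}")
```

With the higher gain the barrier grows, so a distractor needs a proportionally larger kick to push the network out of its current state, which is the increased robustness the abstract describes.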
2001
Seamans, J K; Gorelova, N; Durstewitz, D; Yang, C R
Bidirectional dopamine modulation of GABAergic inhibition in prefrontal cortical pyramidal neurons Journal Article
Journal of Neuroscience, 2001.
@article{Seamans2001,
title = {Bidirectional dopamine modulation of GABAergic inhibition in prefrontal cortical pyramidal neurons},
author = {J. K. Seamans and N. Gorelova and D. Durstewitz and C. R. Yang},
url = {https://pubmed.ncbi.nlm.nih.gov/11331392/},
doi = {10.1523/JNEUROSCI.21-10-03628.2001},
year = {2001},
date = {2001-05-15},
journal = {Journal of Neuroscience},
abstract = {Dopamine regulates the activity of neural networks in the prefrontal cortex that process working memory information, but its precise biophysical actions are poorly understood. The present study characterized the effects of dopamine on GABAergic inputs to prefrontal pyramidal neurons using whole-cell patch-clamp recordings in vitro. In most pyramidal cells, dopamine had a temporally biphasic effect on evoked IPSCs, producing an initial abrupt decrease in amplitude followed by a delayed increase in IPSC amplitude. Using receptor subtype-specific agonists and antagonists, we found that the initial abrupt reduction was D2 receptor-mediated, whereas the late, slower developing enhancement was D1 receptor-mediated. Linearly combining the effects of the two agonists could reproduce the biphasic dopamine effect. Because D1 agonists enhanced spontaneous (sIPSCs) but did not affect miniature (mIPSCs) IPSCs, it appears that D1 agonists caused larger evoked IPSCs by increasing the intrinsic excitability of interneurons and their axons. In contrast, D2 agonists had no effects on sIPSCs but did produce a significant reduction in mIPSCs, suggestive of a decrease in GABA release probability. In addition, D2 agonists reduced the postsynaptic response to a GABA(A) agonist. D1 and D2 receptors therefore regulated GABAergic activity in opposite manners and through different mechanisms in prefrontal cortex (PFC) pyramidal cells. This bidirectional modulation could have important implications for the computational properties of active PFC networks. },
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Dopamine regulates the activity of neural networks in the prefrontal cortex that process working memory information, but its precise biophysical actions are poorly understood. The present study characterized the effects of dopamine on GABAergic inputs to prefrontal pyramidal neurons using whole-cell patch-clamp recordings in vitro. In most pyramidal cells, dopamine had a temporally biphasic effect on evoked IPSCs, producing an initial abrupt decrease in amplitude followed by a delayed increase in IPSC amplitude. Using receptor subtype-specific agonists and antagonists, we found that the initial abrupt reduction was D2 receptor-mediated, whereas the late, slower developing enhancement was D1 receptor-mediated. Linearly combining the effects of the two agonists could reproduce the biphasic dopamine effect. Because D1 agonists enhanced spontaneous (sIPSCs) but did not affect miniature (mIPSCs) IPSCs, it appears that D1 agonists caused larger evoked IPSCs by increasing the intrinsic excitability of interneurons and their axons. In contrast, D2 agonists had no effects on sIPSCs but did produce a significant reduction in mIPSCs, suggestive of a decrease in GABA release probability. In addition, D2 agonists reduced the postsynaptic response to a GABA(A) agonist. D1 and D2 receptors therefore regulated GABAergic activity in opposite manners and through different mechanisms in prefrontal cortex (PFC) pyramidal cells. This bidirectional modulation could have important implications for the computational properties of active PFC networks.
Seamans, Jeremy K; Durstewitz, Daniel; Christie, Brian R; Stevens, Charles F; Sejnowski, Terrence J
Dopamine D1/D5 receptor modulation of excitatory synaptic inputs to layer V prefrontal cortex neurons Journal Article
Proceedings of the National Academy of Sciences, 2001.
@article{Seamans2001b,
title = {Dopamine D1/D5 receptor modulation of excitatory synaptic inputs to layer V prefrontal cortex neurons},
author = {Jeremy K. Seamans and Daniel Durstewitz and Brian R. Christie and Charles F. Stevens and Terrence J. Sejnowski},
url = {https://doi.org/10.1073/pnas.98.1.301},
doi = {10.1073/pnas.98.1.301},
year = {2001},
date = {2001-01-02},
journal = {Proceedings of the National Academy of Sciences},
abstract = {Dopamine acts mainly through the D1/D5 receptor in the prefrontal cortex (PFC) to modulate neural activity and behaviors associated with working memory. To understand the mechanism of this effect, we examined the modulation of excitatory synaptic inputs onto layer V PFC pyramidal neurons by D1/D5 receptor stimulation. D1/D5 agonists increased the size of the N-methyl-d-aspartate (NMDA) component of excitatory postsynaptic currents (EPSCs) through a postsynaptic mechanism. In contrast, D1/D5 agonists caused a slight reduction in the size of the non-NMDA component of EPSCs through a small decrease in release probability. With 20 Hz synaptic trains, we found that the D1/D5 agonists increased the depolarization produced by summation of the NMDA component of the excitatory postsynaptic potential (EPSP). By increasing the NMDA component of EPSCs, yet slightly reducing release, D1/D5 receptor activation selectively enhanced sustained synaptic inputs and equalized the sizes of EPSPs in a 20-Hz train.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Dopamine acts mainly through the D1/D5 receptor in the prefrontal cortex (PFC) to modulate neural activity and behaviors associated with working memory. To understand the mechanism of this effect, we examined the modulation of excitatory synaptic inputs onto layer V PFC pyramidal neurons by D1/D5 receptor stimulation. D1/D5 agonists increased the size of the N-methyl-d-aspartate (NMDA) component of excitatory postsynaptic currents (EPSCs) through a postsynaptic mechanism. In contrast, D1/D5 agonists caused a slight reduction in the size of the non-NMDA component of EPSCs through a small decrease in release probability. With 20 Hz synaptic trains, we found that the D1/D5 agonists increased the depolarization produced by summation of the NMDA component of the excitatory postsynaptic potential (EPSP). By increasing the NMDA component of EPSCs, yet slightly reducing release, D1/D5 receptor activation selectively enhanced sustained synaptic inputs and equalized the sizes of EPSPs in a 20-Hz train.
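Why enhancing the slow NMDA component selectively boosts sustained 20 Hz input comes down to temporal summation, sketched here with generic exponential kernels (the time constants are textbook-style placeholders, not the measured synaptic parameters):

```python
import numpy as np

dt = 1.0                                    # ms
t = np.arange(0.0, 500.0, dt)
spikes = np.zeros_like(t)
spikes[::50] = 1.0                          # 20 Hz train: one input per 50 ms

def psp_train(tau):
    """Response of a unit-amplitude exponential synapse (decay tau, ms)."""
    return np.convolve(spikes, np.exp(-t / tau))[:len(t)]

ampa = psp_train(5.0)      # fast component: decays fully between inputs
nmda = psp_train(100.0)    # slow component: rides on residual depolarization
print(f"AMPA-like peak (a.u.): {ampa.max():.2f}")  # ~1.0, no summation
print(f"NMDA-like peak (a.u.): {nmda.max():.2f}")  # ~2.5, strong summation
```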
2000
Durstewitz, Daniel; Seamans, Jeremy K; Sejnowski, Terrence J
Dopamine-Mediated Stabilization of Delay-Period Activity in a Network Model of Prefrontal Cortex Journal Article
Journal of Neurophysiology, 2000.
@article{Durstewitz2000b,
title = {Dopamine-Mediated Stabilization of Delay-Period Activity in a Network Model of Prefrontal Cortex},
author = {Daniel Durstewitz and Jeremy K. Seamans and Terrence J. Sejnowski},
url = {https://doi.org/10.1152/jn.2000.83.3.1733},
doi = {10.1152/jn.2000.83.3.1733},
year = {2000},
date = {2000-03-01},
journal = {Journal of Neurophysiology},
abstract = {The prefrontal cortex (PFC) is critically involved in working memory, which underlies memory-guided, goal-directed behavior. During working-memory tasks, PFC neurons exhibit sustained elevated activity, which may reflect the active holding of goal-related information or the preparation of forthcoming actions. Dopamine via the D1 receptor strongly modulates both this sustained (delay-period) activity and behavioral performance in working-memory tasks. However, the function of dopamine during delay-period activity and the underlying neural mechanisms are only poorly understood. Recently we proposed that dopamine might stabilize active neural representations in PFC circuits during tasks involving working memory and render them robust against interfering stimuli and noise. To further test this idea and to examine the dopamine-modulated ionic currents that could give rise to increased stability of neural representations, we developed a network model of the PFC consisting of multicompartment neurons equipped with Hodgkin-Huxley-like channel kinetics that could reproduce in vitro whole cell and in vivo recordings from PFC neurons. Dopaminergic effects on intrinsic ionic and synaptic conductances were implemented in the model based on in vitro data. Simulated dopamine strongly enhanced high, delay-type activity but not low, spontaneous activity in the model network. Furthermore, the strength of an afferent stimulation needed to disrupt delay-type activity increased with the magnitude of the dopamine-induced shifts in network parameters, making the currently active representation much more stable. Stability could be increased by dopamine-induced enhancements of the persistent Na+ and N-methyl-d-aspartate (NMDA) conductances. Stability also was enhanced by a reduction in AMPA conductances. The increase in GABAA conductances that occurs after stimulation of dopaminergic D1 receptors was necessary in this context to prevent uncontrolled, spontaneous switches into high-activity states (i.e., spontaneous activation of task-irrelevant representations). In conclusion, the dopamine-induced changes in the biophysical properties of intrinsic ionic and synaptic conductances conjointly acted to highly increase stability of activated representations in PFC networks and at the same time retain control over network behavior and thus preserve its ability to adequately respond to task-related stimuli. Predictions of the model can be tested in vivo by locally applying specific D1 receptor, NMDA, or GABAA antagonists while recording from PFC neurons in delayed reaction-type tasks with interfering stimuli.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
The prefrontal cortex (PFC) is critically involved in working memory, which underlies memory-guided, goal-directed behavior. During working-memory tasks, PFC neurons exhibit sustained elevated activity, which may reflect the active holding of goal-related information or the preparation of forthcoming actions. Dopamine via the D1 receptor strongly modulates both this sustained (delay-period) activity and behavioral performance in working-memory tasks. However, the function of dopamine during delay-period activity and the underlying neural mechanisms are only poorly understood. Recently we proposed that dopamine might stabilize active neural representations in PFC circuits during tasks involving working memory and render them robust against interfering stimuli and noise. To further test this idea and to examine the dopamine-modulated ionic currents that could give rise to increased stability of neural representations, we developed a network model of the PFC consisting of multicompartment neurons equipped with Hodgkin-Huxley-like channel kinetics that could reproduce in vitro whole cell and in vivo recordings from PFC neurons. Dopaminergic effects on intrinsic ionic and synaptic conductances were implemented in the model based on in vitro data. Simulated dopamine strongly enhanced high, delay-type activity but not low, spontaneous activity in the model network. Furthermore, the strength of an afferent stimulation needed to disrupt delay-type activity increased with the magnitude of the dopamine-induced shifts in network parameters, making the currently active representation much more stable. Stability could be increased by dopamine-induced enhancements of the persistent Na+ and N-methyl-d-aspartate (NMDA) conductances. Stability also was enhanced by a reduction in AMPA conductances. The increase in GABAA conductances that occurs after stimulation of dopaminergic D1 receptors was necessary in this context to prevent uncontrolled, spontaneous switches into high-activity states (i.e., spontaneous activation of task-irrelevant representations). In conclusion, the dopamine-induced changes in the biophysical properties of intrinsic ionic and synaptic conductances conjointly acted to highly increase stability of activated representations in PFC networks and at the same time retain control over network behavior and thus preserve its ability to adequately respond to task-related stimuli. Predictions of the model can be tested in vivo by locally applying specific D1 receptor, NMDA, or GABAA antagonists while recording from PFC neurons in delayed reaction-type tasks with interfering stimuli.
Durstewitz, D; Seamans, J K; Sejnowski, T J
Neurocomputational models of working memory Journal Article
Nature Neuroscience, 3 Suppl (November), pp. 1184–1191, 2000, ISSN: 1097-6256.
@article{Durstewitz2000,
title = {Neurocomputational models of working memory},
author = {D Durstewitz and J K Seamans and T J Sejnowski},
doi = {10.1038/81460},
issn = {1097-6256},
year = {2000},
date = {2000-01-01},
journal = {Nature Neuroscience},
volume = {3 Suppl},
number = {November},
pages = {1184--1191},
abstract = {During working memory tasks, the firing rates of single neurons recorded in behaving monkeys remain elevated without external cues. Modeling studies have explored different mechanisms that could underlie this selective persistent activity, including recurrent excitation within cell assemblies, synfire chains and single-cell bistability. The models show how sustained activity can be stable in the presence of noise and distractors, how different synaptic and voltage-gated conductances contribute to persistent activity, how neuromodulation could influence its robustness, how completely novel items could be maintained, and how continuous attractor states might be achieved. More work is needed to address the full repertoire of neural dynamics observed during working memory tasks.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
During working memory tasks, the firing rates of single neurons recorded in behaving monkeys remain elevated without external cues. Modeling studies have explored different mechanisms that could underlie this selective persistent activity, including recurrent excitation within cell assemblies, synfire chains and single-cell bistability. The models show how sustained activity can be stable in the presence of noise and distractors, how different synaptic and voltage-gated conductances contribute to persistent activity, how neuromodulation could influence its robustness, how completely novel items could be maintained, and how continuous attractor states might be achieved. More work is needed to address the full repertoire of neural dynamics observed during working memory tasks.
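The shared core of these models, recurrent excitation maintaining activity after the stimulus is gone, fits in a dozen lines (a single self-exciting rate unit with invented parameters; the reviewed models use full spiking networks):

```python
import numpy as np

# Single self-exciting population: tau * dr/dt = -r + f(w*r + cue(t)).
# A brief cue switches the unit from the spontaneous to the elevated
# state; recurrent drive w*r alone then sustains the elevated rate.
dt, tau, w = 1.0, 10.0, 2.2                            # ms, ms, weight
f = lambda x: 1.0 / (1.0 + np.exp(-(x - 1.0) / 0.2))   # population gain

r = 0.0
for step in range(1001):
    t_ms = step * dt
    cue = 1.0 if 100 <= t_ms < 150 else 0.0            # transient input only
    r += dt / tau * (-r + f(w * r + cue))
    if t_ms in (50.0, 200.0, 1000.0):
        print(f"t = {t_ms:6.1f} ms  cue = {cue:.0f}  rate = {r:.3f}")
```

The cue drives the unit across the basin boundary; after the cue is withdrawn the recurrent drive alone holds it in the elevated state, the signature of attractor-based working memory.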
1999
Durstewitz, D; Kröner, S; Güntürkün, O
The dopaminergic innervation of the avian telencephalon Journal Article
Progress in Neurobiology, 1999.
@article{Durstewitz1999b,
title = {The dopaminergic innervation of the avian telencephalon},
author = {D Durstewitz and S Kröner and O Güntürkün},
url = {https://pubmed.ncbi.nlm.nih.gov/10463794/},
doi = {10.1016/s0301-0082(98)00100-2},
year = {1999},
date = {1999-10-01},
journal = {Progress in Neurobiology},
abstract = {The present review provides an overview of the distribution of dopaminergic fibers and dopaminoceptive elements within the avian telencephalon, the possible interactions of dopamine (DA) with other biochemically identified systems as revealed by immunocytochemistry, and the involvement of DA in behavioral processes in birds. Primary sensory structures are largely devoid of dopaminergic fibers, DA receptors and the D1-related phosphoprotein DARPP-32, while all these dopaminergic markers gradually increase in density from the secondary sensory to the multimodal association and the limbic and motor output areas. Structures of the avian basal ganglia are most densely innervated but, in contrast to mammals, show a higher D2 than D1 receptor density. In most of the remaining telencephalon D1 receptors clearly outnumber D2 receptors. Dopaminergic fibers in the avian telencephalon often show a peculiar arrangement where fibers coil around the somata and proximal dendrites of neurons like baskets, probably providing them with a massive dopaminergic input. Basket-like innervation of DARPP-32-positive neurons seems to be most prominent in the multimodal association areas. Taken together, these anatomical findings indicate a specific role of DA in higher order learning and sensory-motor processes, while primary sensory processes are less affected. This conclusion is supported by behavioral findings which show that in birds, as in mammals, DA is specifically involved in sensory-motor integration, attention and arousal, learning and working memory. Thus, despite considerable differences in the anatomical organization of the avian and mammalian forebrain, the organization of the dopaminergic system and its behavioral functions are very similar in birds and mammals. },
keywords = {},
pubstate = {published},
tppubtype = {article}
}
The present review provides an overview of the distribution of dopaminergic fibers and dopaminoceptive elements within the avian telencephalon, the possible interactions of dopamine (DA) with other biochemically identified systems as revealed by immunocytochemistry, and the involvement of DA in behavioral processes in birds. Primary sensory structures are largely devoid of dopaminergic fibers, DA receptors and the D1-related phosphoprotein DARPP-32, while all these dopaminergic markers gradually increase in density from the secondary sensory to the multimodal association and the limbic and motor output areas. Structures of the avian basal ganglia are most densely innervated but, in contrast to mammals, show a higher D2 than D1 receptor density. In most of the remaining telencephalon D1 receptors clearly outnumber D2 receptors. Dopaminergic fibers in the avian telencephalon often show a peculiar arrangement where fibers coil around the somata and proximal dendrites of neurons like baskets, probably providing them with a massive dopaminergic input. Basket-like innervation of DARPP-32-positive neurons seems to be most prominent in the multimodal association areas. Taken together, these anatomical findings indicate a specific role of DA in higher order learning and sensory-motor processes, while primary sensory processes are less affected. This conclusion is supported by behavioral findings which show that in birds, as in mammals, DA is specifically involved in sensory-motor integration, attention and arousal, learning and working memory. Thus, despite considerable differences in the anatomical organization of the avian and mammalian forebrain, the organization of the dopaminergic system and its behavioral functions are very similar in birds and mammals.
Durstewitz, Daniel; Kelc, Marian; Güntürkün, Onur
A neurocomputational theory of the dopaminergic modulation of working memory functions Journal Article
Journal of Neuroscience, 1999.
@article{Durstewitz1999,
title = {A neurocomputational theory of the dopaminergic modulation of working memory functions},
author = {Daniel Durstewitz and Marian Kelc and Onur Güntürkün},
url = {https://doi.org/10.1523/JNEUROSCI.19-07-02807.1999},
doi = {10.1523/JNEUROSCI.19-07-02807.1999},
year = {1999},
date = {1999-04-01},
journal = {Journal of Neuroscience},
abstract = {The dopaminergic modulation of neural activity in the prefrontal cortex (PFC) is essential for working memory. Delay-activity in the PFC in working memory tasks persists even if interfering stimuli intervene between the presentation of the sample and the target stimulus. Here, the hypothesis is put forward that the functional role of dopamine in working memory processing is to stabilize active neural representations in the PFC network and thereby to protect goal-related delay-activity against interfering stimuli. To test this hypothesis, we examined the reported dopamine-induced changes in several biophysical properties of PFC neurons to determine whether they could fulfill this function. An attractor network model consisting of model neurons was devised in which the empirically observed effects of dopamine on synaptic and voltage-gated membrane conductances could be represented in a biophysically realistic manner. In the model, the dopamine-induced enhancement of the persistent Na+ and reduction of the slowly inactivating K+ current increased firing of the delay-active neurons, thereby increasing inhibitory feedback and thus reducing activity of the “background” neurons. Furthermore, the dopamine-induced reduction of EPSP sizes and a dendritic Ca2+ current diminished the impact of intervening stimuli on current network activity. In this manner, dopaminergic effects indeed acted to stabilize current delay-activity. Working memory deficits observed after supranormal D1-receptor stimulation could also be explained within this framework. Thus, the model offers a mechanistic explanation for the behavioral deficits observed after blockade or after supranormal stimulation of dopamine receptors in the PFC and, in addition, makes some specific empirical predictions.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
The dopaminergic modulation of neural activity in the prefrontal cortex (PFC) is essential for working memory. Delay-activity in the PFC in working memory tasks persists even if interfering stimuli intervene between the presentation of the sample and the target stimulus. Here, the hypothesis is put forward that the functional role of dopamine in working memory processing is to stabilize active neural representations in the PFC network and thereby to protect goal-related delay-activity against interfering stimuli. To test this hypothesis, we examined the reported dopamine-induced changes in several biophysical properties of PFC neurons to determine whether they could fulfill this function. An attractor network model consisting of model neurons was devised in which the empirically observed effects of dopamine on synaptic and voltage-gated membrane conductances could be represented in a biophysically realistic manner. In the model, the dopamine-induced enhancement of the persistent Na+ and reduction of the slowly inactivating K+ current increased firing of the delay-active neurons, thereby increasing inhibitory feedback and thus reducing activity of the “background” neurons. Furthermore, the dopamine-induced reduction of EPSP sizes and a dendritic Ca2+ current diminished the impact of intervening stimuli on current network activity. In this manner, dopaminergic effects indeed acted to stabilize current delay-activity. Working memory deficits observed after supranormal D1-receptor stimulation could also be explained within this framework. Thus, the model offers a mechanistic explanation for the behavioral deficits observed after blockade or after supranormal stimulation of dopamine receptors in the PFC and, in addition, makes some specific empirical predictions.
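The stabilization hypothesis can be caricatured in simple rate-model terms. In the following hedged sketch (illustrative assumptions only, not the paper's biophysical PFC network), dopamine is reduced to two effective parameters: g_rec, a recurrent/intrinsic gain standing in for the enhanced persistent Na+ and reduced K+ currents, and g_ext, an afferent gain standing in for the reduced impact of intervening stimuli.

```python
# Toy caricature of dopaminergic stabilization of delay-activity.
# g_rec, g_ext and all numbers are illustrative assumptions, not fitted values.
import math

def f(x):
    """Sigmoidal rate function with an arbitrary threshold of 3."""
    return 1.0 / (1.0 + math.exp(-(x - 3.0)))

def delay_trial(g_rec, g_ext, dt=0.1, T=600.0, tau=10.0):
    r = 0.93                                               # start in the active (memory) state
    for step in range(int(T / dt)):
        ms = step * dt
        distractor = -3.0 if 200.0 <= ms < 250.0 else 0.0  # interfering input (net inhibitory
                                                           # effect of a competing stimulus)
        r += dt / tau * (-r + f(g_rec * r + g_ext * distractor))
    return r                                               # rate at the end of the delay

low_da = delay_trial(g_rec=6.0, g_ext=1.0)   # baseline: distractor erases the memory
high_da = delay_trial(g_rec=7.0, g_ext=0.4)  # "DA on": deeper attractor, damped distractor
print(f"end-of-delay rate, low DA: {low_da:.2f}, high DA: {high_da:.2f}")
```

With the baseline settings the distractor pushes the unit out of the active state's basin of attraction (end rate near 0.07), whereas the "DA on" settings deepen the attractor and attenuate the distractor so the memory state survives (end rate near 0.97), which is the qualitative point of the stabilization account.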
Seamans, J. K.; Durstewitz, D.; Sejnowski, T.
State-dependence of dopamine D1 receptor modulation in prefrontal cortex neurons Journal Article
Proceedings of the 6th Joint Symposium on Neural Computation, 9, pp. 128-135, 1999.
@article{Seamans1999,
title = {State-dependence of dopamine D1 receptor modulation in prefrontal cortex neurons},
author = {Seamans, J. K. and Durstewitz, D. and Sejnowski, T.},
url = {https://papers.cnl.salk.edu/PDFs/State-Dependence%20of%20Dopamine%20D1%20Receptor%20Modulation%20in%20Prefrontal%20Cortex%20Neurons%201999-3575.pdf},
year = {1999},
date = {1999-01-01},
journal = {Proceedings of the 6th Joint Symposium on Neural Computation},
volume = {9},
pages = {128-135},
abstract = {Dopamine makes an important yet poorly understood contribution to normal and pathological processes mediated by the prefrontal cortex. The present study proposes a hypothesis for the cellular actions of dopamine D1 receptors on prefrontal cortex neurons based on in vitro recordings and computational models. In deep layer V prefrontal cortex neurons, we show that D1 receptor stimulation: 1) increased evoked firing from rest, 2) shifted the activation of a persistent Na+ current and slowed its inactivation, 3) enhanced NMDA-mediated EPSCs, and 4) enhanced GABAA IPSPs over many minutes. These changes had state-dependent effects on networks of realistically modeled prefrontal cortex neurons: spontaneous firing driven by low frequency inputs was decreased, while firing evoked by progressively stronger excitatory drive was enhanced and sustained following offset of an input. These findings provide insights into the paradoxical nature of dopamine's actions in the prefrontal cortex, and suggest how dopamine may modulate working memory mechanisms in networks of prefrontal neurons.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Dopamine makes an important yet poorly understood contribution to normal and pathological processes mediated by the prefrontal cortex. The present study proposes a hypothesis for the cellular actions of dopamine D1 receptors on prefrontal cortex neurons based on in vitro recordings and computational models. In deep layer V prefrontal cortex neurons, we show that D1 receptor stimulation: 1) increased evoked firing from rest, 2) shifted the activation of a persistent Na+ current and slowed its inactivation, 3) enhanced NMDA-mediated EPSCs, and 4) enhanced GABAA IPSPs over many minutes. These changes had state-dependent effects on networks of realistically modeled prefrontal cortex neurons: spontaneous firing driven by low frequency inputs was decreased, while firing evoked by progressively stronger excitatory drive was enhanced and sustained following offset of an input. These findings provide insights into the paradoxical nature of dopamine's actions in the prefrontal cortex, and suggest how dopamine may modulate working memory mechanisms in networks of prefrontal neurons.
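The reported state-dependence can likewise be illustrated with a toy rate model (all functional forms and numbers below are assumptions for illustration, not the study's conductance-based simulations): D1 stimulation is caricatured as a higher effective firing threshold, standing in for the enhanced GABAA inhibition, together with stronger recurrent feedback, standing in for the enhanced NMDA-mediated EPSCs.

```python
# Toy illustration of state-dependent D1 modulation; parameters are assumptions.
import math

def f(x, theta):
    """Sigmoidal rate function with effective threshold theta."""
    return 1.0 / (1.0 + math.exp(-(x - theta)))

def run(I_amp, w, theta, dt=0.1, T=400.0, tau=10.0, stim=(50.0, 200.0)):
    r, r_stim_end = 0.0, 0.0
    for step in range(int(T / dt)):
        ms = step * dt
        I = I_amp if stim[0] <= ms < stim[1] else 0.0  # external drive
        r += dt / tau * (-r + f(w * r + I, theta))
        if ms < stim[1]:
            r_stim_end = r                             # rate at the end of stimulation
    return r_stim_end, r                               # (evoked, post-offset) rates

# control vs "D1": higher threshold (GABAA-like) and stronger recurrence (NMDA-like)
for label, (w, theta) in {"control": (5.0, 3.0), "D1": (7.0, 4.0)}.items():
    for drive, I_amp in {"weak": 1.0, "strong": 4.0}.items():
        evoked, post = run(I_amp, w, theta)
        print(f"{label:>7}, {drive:>6} drive: evoked {evoked:.2f}, after offset {post:.2f}")
```

Under these settings the "D1" unit fires far less to weak drive (suppressed spontaneous activity), but, unlike the control unit, it keeps firing after a strong input is switched off, reproducing the enhanced-and-sustained pattern of the abstract qualitatively.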
1998
Durstewitz, D; Kröner, S; Hemmings Jr, H C; Güntürkün, O
The dopaminergic innervation of the pigeon telencephalon: distribution of DARPP-32 and co-occurrence with glutamate decarboxylase and tyrosine hydroxylase Journal Article
Neuroscience, 1998.
@article{Durstewitz1998,
title = {The dopaminergic innervation of the pigeon telencephalon: distribution of DARPP-32 and co-occurrence with glutamate decarboxylase and tyrosine hydroxylase},
author = {D Durstewitz and S Kröner and H C Hemmings Jr and O Güntürkün},
url = {https://pubmed.ncbi.nlm.nih.gov/9483560/},
doi = {10.1016/s0306-4522(97)00450-8},
year = {1998},
date = {1998-04-01},
journal = {Neuroscience},
abstract = {Dopaminergic axons arising from midbrain nuclei innervate the mammalian and avian telencephalon with heterogeneous regional and laminar distributions. In primate, rodent, and avian species, the neuromodulator dopamine is low or almost absent in most primary sensory areas and is most abundant in the striatal parts of the basal ganglia. Furthermore, dopaminergic fibres are present in most limbic and associative structures. Herein, the distribution of DARPP-32, a phosphoprotein related to the dopamine D1-receptor, was investigated in the pigeon telencephalon by immunocytochemical techniques. Furthermore, co-occurrence of DARPP-32-positive perikarya with tyrosine hydroxylase-positive pericellular axonal "baskets" or glutamate decarboxylase-positive neurons, as well as co-occurrence of tyrosine hydroxylase and glutamate decarboxylase were examined. Specificity of the anti-DARPP-32 monoclonal antibody in pigeon brain was determined by immunoblotting. The distribution of DARPP-32 shared important features with the distribution of D1-receptors and dopaminergic fibres in the pigeon telencephalon as described previously. In particular, DARPP-32 was highly abundant in the avian basal ganglia, where a high percentage of neurons were labelled in the "striatal" parts (paleostriatum augmentatum, lobus parolfactorius), while only neuropil staining was observed in the "pallidal" portions (paleostriatum primitivum). In contrast, DARPP-32 was almost absent or present in comparatively lower concentrations in most primary sensory areas. Secondary sensory and tertiary areas of the neostriatum contained numbers of labelled neurons comparable to that of the basal ganglia and intermediate levels of neuropil staining. Approximately up to one-third of DARPP-32-positive neurons received a basket-type innervation from tyrosine hydroxylase-positive fibres in the lateral and caudal neostriatum, but only about half as many did in the medial and frontal neostriatum, and even less so in the hyperstriatum. No case of colocalization of glutamate decarboxylase and DARPP-32 and no co-occurrence of glutamate decarboxylase-positive neurons and tyrosine hydroxylase-basket-like structures could be detected out of more than 2000 glutamate decarboxylase-positive neurons examined, although the high DARPP-32 and high tyrosine hydroxylase staining density hampered this analysis in the basal ganglia. In conclusion, the pigeon dopaminergic system seems to be organized similar to that of mammals. Apparently, in the telencephalon, dopamine has its primary function in higher level sensory, associative and motor processes, since primary areas showed only weak or no anatomical cues of dopaminergic modulation. Dopamine might exert its effects primarily by modulating the physiological properties of non-GABAergic and therefore presumably excitatory units.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Dopaminergic axons arising from midbrain nuclei innervate the mammalian and avian telencephalon with heterogeneous regional and laminar distributions. In primate, rodent, and avian species, the neuromodulator dopamine is low or almost absent in most primary sensory areas and is most abundant in the striatal parts of the basal ganglia. Furthermore, dopaminergic fibres are present in most limbic and associative structures. Herein, the distribution of DARPP-32, a phosphoprotein related to the dopamine D1-receptor, was investigated in the pigeon telencephalon by immunocytochemical techniques. Furthermore, co-occurrence of DARPP-32-positive perikarya with tyrosine hydroxylase-positive pericellular axonal "baskets" or glutamate decarboxylase-positive neurons, as well as co-occurrence of tyrosine hydroxylase and glutamate decarboxylase were examined. Specificity of the anti-DARPP-32 monoclonal antibody in pigeon brain was determined by immunoblotting. The distribution of DARPP-32 shared important features with the distribution of D1-receptors and dopaminergic fibres in the pigeon telencephalon as described previously. In particular, DARPP-32 was highly abundant in the avian basal ganglia, where a high percentage of neurons were labelled in the "striatal" parts (paleostriatum augmentatum, lobus parolfactorius), while only neuropil staining was observed in the "pallidal" portions (paleostriatum primitivum). In contrast, DARPP-32 was almost absent or present in comparatively lower concentrations in most primary sensory areas. Secondary sensory and tertiary areas of the neostriatum contained numbers of labelled neurons comparable to that of the basal ganglia and intermediate levels of neuropil staining. Approximately up to one-third of DARPP-32-positive neurons received a basket-type innervation from tyrosine hydroxylase-positive fibres in the lateral and caudal neostriatum, but only about half as many did in the medial and frontal neostriatum, and even less so in the hyperstriatum. No case of colocalization of glutamate decarboxylase and DARPP-32 and no co-occurrence of glutamate decarboxylase-positive neurons and tyrosine hydroxylase-basket-like structures could be detected out of more than 2000 glutamate decarboxylase-positive neurons examined, although the high DARPP-32 and high tyrosine hydroxylase staining density hampered this analysis in the basal ganglia. In conclusion, the pigeon dopaminergic system seems to be organized similar to that of mammals. Apparently, in the telencephalon, dopamine has its primary function in higher level sensory, associative and motor processes, since primary areas showed only weak or no anatomical cues of dopaminergic modulation. Dopamine might exert its effects primarily by modulating the physiological properties of non-GABAergic and therefore presumably excitatory units.
Durstewitz, Daniel; Koppe, Georgia; Thurm, Max Ingo Reconstructing Computational Dynamics from Neural Measurements with Recurrent Neural Networks Journal Article Nature Reviews Neuroscience, 2023. @article{Durstewitz2023, title = {Reconstructing Computational Dynamics from Neural Measurements with Recurrent Neural Networks}, author = {Daniel Durstewitz and Georgia Koppe and Max Ingo Thurm}, url = {https://www.nature.com/articles/s41583-023-00740-7}, doi = {https://doi.org/10.1038/s41583-023-00740-7}, year = {2023}, date = {2023-10-04}, journal = {Nature Reviews Neuroscience}, abstract = {Computational models in neuroscience usually take the form of systems of differential equations. The behaviour of such systems is the subject of dynamical systems theory. Dynamical systems theory provides a powerful mathematical toolbox for analysing neurobiological processes and has been a mainstay of computational neuroscience for decades. Recently, recurrent neural networks (RNNs) have become a popular machine learning tool for studying the non-linear dynamics of neural and behavioural processes by emulating an underlying system of differential equations. RNNs have been routinely trained on similar behavioural tasks to those used for animal subjects to generate hypotheses about the underlying computational mechanisms. By contrast, RNNs can also be trained on the measured physiological and behavioural data, thereby directly inheriting their temporal and geometrical properties. In this way they become a formal surrogate for the experimentally probed system that can be further analysed, perturbed and simulated. This powerful approach is called dynamical system reconstruction. In this Perspective, we focus on recent trends in artificial intelligence and machine learning in this exciting and rapidly expanding field, which may be less well known in neuroscience. We discuss formal prerequisites, different model architectures and training approaches for RNN-based dynamical system reconstructions, ways to evaluate and validate model performance, how to interpret trained models in a neuroscience context, and current challenges.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Computational models in neuroscience usually take the form of systems of differential equations. The behaviour of such systems is the subject of dynamical systems theory. Dynamical systems theory provides a powerful mathematical toolbox for analysing neurobiological processes and has been a mainstay of computational neuroscience for decades. Recently, recurrent neural networks (RNNs) have become a popular machine learning tool for studying the non-linear dynamics of neural and behavioural processes by emulating an underlying system of differential equations. RNNs have been routinely trained on similar behavioural tasks to those used for animal subjects to generate hypotheses about the underlying computational mechanisms. By contrast, RNNs can also be trained on the measured physiological and behavioural data, thereby directly inheriting their temporal and geometrical properties. In this way they become a formal surrogate for the experimentally probed system that can be further analysed, perturbed and simulated. This powerful approach is called dynamical system reconstruction. In this Perspective, we focus on recent trends in artificial intelligence and machine learning in this exciting and rapidly expanding field, which may be less well known in neuroscience. 
We discuss formal prerequisites, different model architectures and training approaches for RNN-based dynamical system reconstructions, ways to evaluate and validate model performance, how to interpret trained models in a neuroscience context, and current challenges. |
Miftari, Egzon; Durstewitz, Daniel; Sadlo, Filip Visualization of Discontinuous Vector Field Topology Journal Article IEEE Transactions on Visualization & Computer Graphics, 2023. @article{Miftari2023, title = {Visualization of Discontinuous Vector Field Topology}, author = {Egzon Miftari and Daniel Durstewitz and Filip Sadlo}, url = {https://www.computer.org/csdl/journal/tg/5555/01/10296524/1RwXG8nn7d6}, year = {2023}, date = {2023-10-01}, journal = {IEEE Transactions on Visualization & Computer Graphics}, abstract = {This paper extends the concept and the visualization of vector field topology to vector fields with discontinuities. We address the non-uniqueness of flow in such fields by introduction of a time-reversible concept of equivalence. This concept generalizes streamlines to streamsets and thus vector field topology to discontinuous vector fields in terms of invariant streamsets. We identify respective novel critical structures as well as their manifolds, investigate their interplay with traditional vector field topology, and detail the application and interpretation of our approach using specifically designed synthetic cases and a simulated case from physics.}, keywords = {}, pubstate = {published}, tppubtype = {article} } This paper extends the concept and the visualization of vector field topology to vector fields with discontinuities. We address the non-uniqueness of flow in such fields by introduction of a time-reversible concept of equivalence. This concept generalizes streamlines to streamsets and thus vector field topology to discontinuous vector fields in terms of invariant streamsets. We identify respective novel critical structures as well as their manifolds, investigate their interplay with traditional vector field topology, and detail the application and interpretation of our approach using specifically designed synthetic cases and a simulated case from physics. |
Hess, Florian; Monfared, Zahra; Brenner, Manuel; Durstewitz, Daniel Generalized Teacher Forcing for Learning Chaotic Dynamics Inproceedings Proceedings of the 40th International Conference on Machine Learning, PMLR 202:13017-13049, 2023., 2023. @inproceedings{Hess2023, title = {Generalized Teacher Forcing for Learning Chaotic Dynamics}, author = {Florian Hess and Zahra Monfared and Manuel Brenner and Daniel Durstewitz}, url = {https://proceedings.mlr.press/v202/hess23a.html}, year = {2023}, date = {2023-05-31}, booktitle = {Proceedings of the 40th International Conference on Machine Learning, PMLR 202:13017-13049, 2023.}, journal = {Proceedings of Machine Learning Research, ICML 2023}, abstract = {Chaotic dynamical systems (DS) are ubiquitous in nature and society. Often we are interested in reconstructing such systems from observed time series for prediction or mechanistic insight, where by reconstruction we mean learning geometrical and invariant temporal properties of the system in question. However, training reconstruction algorithms like recurrent neural networks (RNNs) on such systems by gradient-descent based techniques faces severe challenges. This is mainly due to the exploding gradients caused by the exponential divergence of trajectories in chaotic systems. Moreover, for (scientific) interpretability we wish to have as low dimensional reconstructions as possible, preferably in a model which is mathematically tractable. Here we report that a surprisingly simple modification of teacher forcing leads to provably strictly all-time bounded gradients in training on chaotic systems, while still learning to faithfully represent their dynamics. Furthermore, we observed that a simple architectural rearrangement of a tractable RNN design, piecewise-linear RNNs (PLRNNs), enables to reduce the reconstruction dimension to at most that of the observed system (or less). We show on several DS that with these amendments we can reconstruct DS better than current SOTA algorithms, in much lower dimensions. Performance differences were particularly compelling on real world data with which most other methods severely struggled. This work thus led to a simple yet powerful DS reconstruction algorithm which is highly interpretable at the same time.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } Chaotic dynamical systems (DS) are ubiquitous in nature and society. Often we are interested in reconstructing such systems from observed time series for prediction or mechanistic insight, where by reconstruction we mean learning geometrical and invariant temporal properties of the system in question. However, training reconstruction algorithms like recurrent neural networks (RNNs) on such systems by gradient-descent based techniques faces severe challenges. This is mainly due to the exploding gradients caused by the exponential divergence of trajectories in chaotic systems. Moreover, for (scientific) interpretability we wish to have as low dimensional reconstructions as possible, preferably in a model which is mathematically tractable. Here we report that a surprisingly simple modification of teacher forcing leads to provably strictly all-time bounded gradients in training on chaotic systems, while still learning to faithfully represent their dynamics. Furthermore, we observed that a simple architectural rearrangement of a tractable RNN design, piecewise-linear RNNs (PLRNNs), enables to reduce the reconstruction dimension to at most that of the observed system (or less). 
We show on several DS that with these amendments we can reconstruct DS better than current SOTA algorithms, in much lower dimensions. Performance differences were particularly compelling on real world data with which most other methods severely struggled. This work thus led to a simple yet powerful DS reconstruction algorithm which is highly interpretable at the same time. |
Fechtelpeter, Janik; Rauschenberg, Christian; Jamalabadi, Hamidreza; Boecking, Benjamin; van Amelsvoort, Therese; Reininghaus, Ulrich; Durstewitz, Daniel; Koppe, Georgia A control theoretic approach to evaluate and inform ecological momentary interventions Unpublished 2023. @unpublished{Fechtelpeter2023, title = {A control theoretic approach to evaluate and inform ecological momentary interventions}, author = {Janik Fechtelpeter and Christian Rauschenberg and Hamidreza Jamalabadi and Benjamin Boecking and Therese van Amelsvoort and Ulrich Reininghaus and Daniel Durstewitz and Georgia Koppe}, url = {https://psyarxiv.com/97teh/download?format=pdf}, year = {2023}, date = {2023-05-17}, journal = {PsyArXiv}, abstract = {Ecological momentary interventions (EMI) are digital mobile health (mHealth) interventions that are administered in an individual's daily life with the intent to improve mental health outcomes by tailoring intervention components to person, moment, and context. Questions regarding which intervention is most effective in a given individual, when it is best delivered, and what mechanisms of change underlie observed effects therefore naturally arise in this setting. To achieve this, EMI are typically informed by the collection of multivariate, intensive longitudinal data of various target constructs-designed to assess an individual’s psychological state-using ecological momentary assessments (EMA). However, the dynamic and interconnected nature of such multivariate time series data poses several challenges when analyzing and interpreting findings. This may be illustrated when understanding psychological variables as part of an interconnected network of dynamic variables, and the delivery of EMI as time-specific perturbations to these variables. Network control theory (NCT) is a branch of dynamical systems theory that precisely deals with the formal analysis of such network perturbations and provides solutions of how to perturb a network to reach a desired state in an optimal manner. In doing so, NCT may help to formally quantify and evaluate proximal intervention effects, as well as to identify optimal intervention approaches given a set of reasonable (temporal or energetic) constraints. In this proof-of-concept study, we leverage concepts from NCT to analyze the data of 10 individuals undergoing joint EMA and EMI for several weeks. We show how simple metrics derived from NCT can provide insightful information on putative mechanisms of change in the inferred EMA networks and contribute to identifying optimal leveraging points. We also outline what additional considerations might play a role in the design of effective intervention strategies in the future from the perspective of NCT.}, keywords = {}, pubstate = {published}, tppubtype = {unpublished} } Ecological momentary interventions (EMI) are digital mobile health (mHealth) interventions that are administered in an individual's daily life with the intent to improve mental health outcomes by tailoring intervention components to person, moment, and context. Questions regarding which intervention is most effective in a given individual, when it is best delivered, and what mechanisms of change underlie observed effects therefore naturally arise in this setting. To achieve this, EMI are typically informed by the collection of multivariate, intensive longitudinal data of various target constructs-designed to assess an individual’s psychological state-using ecological momentary assessments (EMA). 
However, the dynamic and interconnected nature of such multivariate time series data poses several challenges when analyzing and interpreting findings. This may be illustrated when understanding psychological variables as part of an interconnected network of dynamic variables, and the delivery of EMI as time-specific perturbations to these variables. Network control theory (NCT) is a branch of dynamical systems theory that precisely deals with the formal analysis of such network perturbations and provides solutions of how to perturb a network to reach a desired state in an optimal manner. In doing so, NCT may help to formally quantify and evaluate proximal intervention effects, as well as to identify optimal intervention approaches given a set of reasonable (temporal or energetic) constraints. In this proof-of-concept study, we leverage concepts from NCT to analyze the data of 10 individuals undergoing joint EMA and EMI for several weeks. We show how simple metrics derived from NCT can provide insightful information on putative mechanisms of change in the inferred EMA networks and contribute to identifying optimal leveraging points. We also outline what additional considerations might play a role in the design of effective intervention strategies in the future from the perspective of NCT. |
Domanski, Aleksander PF; Kucewicz, Michal T; Russo, Eleonora; Tricklebank, Mark D; Robinson, Emma SJ; Durstewitz, Daniel; Jones, Matt W Distinct hippocampal-prefrontal neural assemblies coordinate memory encoding, maintenance, and recall Journal Article Current Biology, 33 (7), 2023. @article{Domanski2023, title = {Distinct hippocampal-prefrontal neural assemblies coordinate memory encoding, maintenance, and recall}, author = {Aleksander PF Domanski and Michal T Kucewicz and Eleonora Russo and Mark D Tricklebank and Emma SJ Robinson and Daniel Durstewitz and Matt W Jones}, url = {https://www.cell.com/current-biology/pdf/S0960-9822(23)00169-0.pdf}, year = {2023}, date = {2023-04-10}, journal = {Current Biology}, volume = {33}, number = {7}, abstract = {Short-term memory enables incorporation of recent experience into subsequent decision-making. This pro cessing recruits both the prefrontal cortex and hippocampus, where neurons encode task cues, rules, and outcomes. However, precisely which information is carried when, and by which neurons, remains unclear. Using population decoding of activity in rat medial prefrontal cortex (mPFC) and dorsal hippocampal CA1, we confirm that mPFC populations lead in maintaining sample information across delays of an operant non-match to sample task, despite individual neurons firing only transiently. During sample encoding, distinct mPFC subpopulations joined distributed CA1-mPFC cell assemblies hallmarked by 4–5 Hz rhythmic modulation; CA1-mPFC assemblies re-emerged during choice episodes but were not 4–5 Hz modulated. Delay-dependent errors arose when attenuated rhythmic assembly activity heralded collapse of sustained mPFC encoding. Our results map component processes of memory-guided decisions onto heterogeneous CA1-mPFC subpopulations and the dynamics of physiologically distinct, distributed cell assemblies.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Short-term memory enables incorporation of recent experience into subsequent decision-making. This pro cessing recruits both the prefrontal cortex and hippocampus, where neurons encode task cues, rules, and outcomes. However, precisely which information is carried when, and by which neurons, remains unclear. Using population decoding of activity in rat medial prefrontal cortex (mPFC) and dorsal hippocampal CA1, we confirm that mPFC populations lead in maintaining sample information across delays of an operant non-match to sample task, despite individual neurons firing only transiently. During sample encoding, distinct mPFC subpopulations joined distributed CA1-mPFC cell assemblies hallmarked by 4–5 Hz rhythmic modulation; CA1-mPFC assemblies re-emerged during choice episodes but were not 4–5 Hz modulated. Delay-dependent errors arose when attenuated rhythmic assembly activity heralded collapse of sustained mPFC encoding. Our results map component processes of memory-guided decisions onto heterogeneous CA1-mPFC subpopulations and the dynamics of physiologically distinct, distributed cell assemblies. |
Hanganu-Opatz, Ileana L; Klausberger, Thomas; Sigurdsson, Torfi; Nieder, Andreas; Jacob, Simon N; Bartos, Marlene; Sauer, Jonas-Frederic; Durstewitz, Daniel; Leibold, Christian; Diester, Ilka Resolving the prefrontal mechanisms of adaptive cognitive behaviors: A cross-species perspective Journal Article Neuron, 111 (7), 2023. @article{Hanganu-Opatz2023, title = {Resolving the prefrontal mechanisms of adaptive cognitive behaviors: A cross-species perspective}, author = {Ileana L Hanganu-Opatz and Thomas Klausberger and Torfi Sigurdsson and Andreas Nieder and Simon N Jacob and Marlene Bartos and Jonas-Frederic Sauer and Daniel Durstewitz and Christian Leibold and Ilka Diester}, url = {https://neurocluster-db.meduniwien.ac.at/db_files/pub_art_431.pdf}, year = {2023}, date = {2023-04-10}, journal = {Neuron}, volume = {111}, number = {7}, abstract = {The prefrontal cortex (PFC) enables a staggering variety of complex behaviors, such as planning actions, solving problems, and adapting to new situations according to external information and internal states. These higher-order abilities, collectively defined as adaptive cognitive behavior, require cellular ensembles that coordinate the tradeoff between the stability and flexibility of neural representations. While the mechanisms underlying the function of cellular ensembles are still unclear, recent experimental and theoretical studies suggest that temporal coordination dynamically binds prefrontal neurons into functional ensembles. A so far largely separate stream of research has investigated the prefrontal efferent and afferent connectivity. These two research streams have recently converged on the hypothesis that prefrontal connectivity patterns influence ensemble formation and the function of neurons within ensembles. Here, we propose a unitary concept that, leveraging a cross-species definition of prefrontal regions, explains how prefrontal ensembles adaptively regulate and efficiently coordinate multiple processes in distinct cognitive behaviors.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The prefrontal cortex (PFC) enables a staggering variety of complex behaviors, such as planning actions, solving problems, and adapting to new situations according to external information and internal states. These higher-order abilities, collectively defined as adaptive cognitive behavior, require cellular ensembles that coordinate the tradeoff between the stability and flexibility of neural representations. While the mechanisms underlying the function of cellular ensembles are still unclear, recent experimental and theoretical studies suggest that temporal coordination dynamically binds prefrontal neurons into functional ensembles. A so far largely separate stream of research has investigated the prefrontal efferent and afferent connectivity. These two research streams have recently converged on the hypothesis that prefrontal connectivity patterns influence ensemble formation and the function of neurons within ensembles. Here, we propose a unitary concept that, leveraging a cross-species definition of prefrontal regions, explains how prefrontal ensembles adaptively regulate and efficiently coordinate multiple processes in distinct cognitive behaviors. |
Thome, Janine; Pinger, Mathieu; Durstewitz, Daniel; Sommer, Wolfgang H; Kirsch, Peter; Koppe, Georgia Model-based experimental manipulation of probabilistic behavior in interpretable behavioral latent variable models Journal Article Frontiers in Neuroscience, 16 , pp. 2270, 2023. @article{Thome2023, title = {Model-based experimental manipulation of probabilistic behavior in interpretable behavioral latent variable models}, author = {Janine Thome and Mathieu Pinger and Daniel Durstewitz and Wolfgang H Sommer and Peter Kirsch and Georgia Koppe}, url = {https://www.frontiersin.org/articles/10.3389/fnins.2022.1077735/full}, year = {2023}, date = {2023-01-09}, journal = {Frontiers in Neuroscience}, volume = {16}, pages = {2270}, abstract = {In studying mental processes, we often rely on quantifying not directly observable latent processes. Interpretable latent variable models that probabilistically link observations to the underlying process have increasingly been used to draw inferences from observed behavior. However, these models are far more powerful than that. By formally embedding experimentally manipulable variables within the latent process, they can be used to make precise and falsifiable hypotheses or predictions. In doing so, they pinpoint how experimental conditions must be designed to test these hypotheses and, by that, generate adaptive experiments. By comparing predictions to observed behavior, we may then assess and evaluate the predictive validity of an adaptive experiment and model directly and objectively. These ideas are exemplified here on the experimentally not directly observable process of delay discounting. We propose a generic approach to systematically generate and validate experimental conditions based on the aforementioned models. The conditions are explicitly generated so as to predict 9 graded behavioral discounting probabilities across participants. Meeting this prediction, the framework induces discounting probabilities on 9 levels. In contrast to several alternative models, the applied model exhibits high validity as indicated by a comparably low out-of-sample prediction error. We also report evidence for inter-individual differences with respect to the most suitable models underlying behavior. Finally, we outline how to adapt the proposed method to the investigation of other cognitive processes including reinforcement learning.}, keywords = {}, pubstate = {published}, tppubtype = {article} } In studying mental processes, we often rely on quantifying not directly observable latent processes. Interpretable latent variable models that probabilistically link observations to the underlying process have increasingly been used to draw inferences from observed behavior. However, these models are far more powerful than that. By formally embedding experimentally manipulable variables within the latent process, they can be used to make precise and falsifiable hypotheses or predictions. In doing so, they pinpoint how experimental conditions must be designed to test these hypotheses and, by that, generate adaptive experiments. By comparing predictions to observed behavior, we may then assess and evaluate the predictive validity of an adaptive experiment and model directly and objectively. These ideas are exemplified here on the experimentally not directly observable process of delay discounting. We propose a generic approach to systematically generate and validate experimental conditions based on the aforementioned models. 
The conditions are explicitly generated so as to predict 9 graded behavioral discounting probabilities across participants. Meeting this prediction, the framework induces discounting probabilities on 9 levels. In contrast to several alternative models, the applied model exhibits high validity as indicated by a comparably low out-of-sample prediction error. We also report evidence for inter-individual differences with respect to the most suitable models underlying behavior. Finally, we outline how to adapt the proposed method to the investigation of other cognitive processes including reinforcement learning. |
2022 |
Brenner, Manuel; Koppe, Georgia; Durstewitz, Daniel Multimodal Teacher Forcing for Reconstructing Nonlinear Dynamical Systems Workshop 2022. @workshop{Brenner2022b, title = {Multimodal Teacher Forcing for Reconstructing Nonlinear Dynamical Systems}, author = {Manuel Brenner and Georgia Koppe and Daniel Durstewitz}, url = {https://arxiv.org/pdf/2212.07892.pdf}, year = {2022}, date = {2022-12-15}, journal = {AAAI 2023 (MLmDS Workshop)}, abstract = {Many, if not most, systems of interest in science are naturally described as nonlinear dynamical systems (DS). Empirically, we commonly access these systems through time series mea- surements, where often we have time series from different types of data modalities simultaneously. For instance, we may have event counts in addition to some continuous signal. While by now there are many powerful machine learning (ML) tools for integrating different data modalities into predictive models, this has rarely been approached so far from the perspective of uncovering the underlying, data-generating DS (aka DS recon- struction). Recently, sparse teacher forcing (TF) has been sug- gested as an efficient control-theoretic method for dealing with exploding loss gradients when training ML models on chaotic DS. Here we incorporate this idea into a novel recurrent neu- ral network (RNN) training framework for DS reconstruction based on multimodal variational autoencoders (MVAE). The forcing signal for the RNN is generated by the MVAE which integrates different types of simultaneously given time series data into a joint latent code optimal for DS reconstruction. We show that this training method achieves significantly better reconstructions on multimodal datasets generated from chaotic DS benchmarks than various alternative methods.}, keywords = {}, pubstate = {published}, tppubtype = {workshop} } Many, if not most, systems of interest in science are naturally described as nonlinear dynamical systems (DS). Empirically, we commonly access these systems through time series mea- surements, where often we have time series from different types of data modalities simultaneously. For instance, we may have event counts in addition to some continuous signal. While by now there are many powerful machine learning (ML) tools for integrating different data modalities into predictive models, this has rarely been approached so far from the perspective of uncovering the underlying, data-generating DS (aka DS recon- struction). Recently, sparse teacher forcing (TF) has been sug- gested as an efficient control-theoretic method for dealing with exploding loss gradients when training ML models on chaotic DS. Here we incorporate this idea into a novel recurrent neu- ral network (RNN) training framework for DS reconstruction based on multimodal variational autoencoders (MVAE). The forcing signal for the RNN is generated by the MVAE which integrates different types of simultaneously given time series data into a joint latent code optimal for DS reconstruction. We show that this training method achieves significantly better reconstructions on multimodal datasets generated from chaotic DS benchmarks than various alternative methods. |
Götzl, Christian; Hiller, Selina; Rauschenberg, Christian; Schick, Anita; Fechtelpeter, Janik; Abaigar, Unai Fischer; Koppe, Georgia; Durstewitz, Daniel; Reininghaus, Ulrich; Krumm, Silvia Artificial intelligence-informed mobile mental health apps for young people: a mixed-methods approach on users’ and stakeholders’ perspectives Journal Article Child and Adolescent Psychiatry and Mental Health, 16 (86), 2022. @article{Götzl2022, title = {Artificial intelligence-informed mobile mental health apps for young people: a mixed-methods approach on users’ and stakeholders’ perspectives}, author = {Christian Götzl and Selina Hiller and Christian Rauschenberg and Anita Schick and Janik Fechtelpeter and Unai Fischer Abaigar and Georgia Koppe and Daniel Durstewitz and Ulrich Reininghaus and Silvia Krumm}, year = {2022}, date = {2022-12-01}, journal = {Child and Adolescent Psychiatry and Mental Health}, volume = {16}, number = {86}, abstract = {Novel approaches in mobile mental health (mHealth) apps that make use of Artificial Intelligence (AI), Ecological Momentary Assessments, and Ecological Momentary Interventions have the potential to support young people in the achievement of mental health and wellbeing goals. However, little is known on the perspectives of young people and mental health experts on this rapidly advancing technology. This study aims to investigate the subjective needs, attitudes, and preferences of key stakeholders towards an AI–informed mHealth app, including young people and experts on mHealth promotion and prevention in youth.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Novel approaches in mobile mental health (mHealth) apps that make use of Artificial Intelligence (AI), Ecological Momentary Assessments, and Ecological Momentary Interventions have the potential to support young people in the achievement of mental health and wellbeing goals. However, little is known on the perspectives of young people and mental health experts on this rapidly advancing technology. This study aims to investigate the subjective needs, attitudes, and preferences of key stakeholders towards an AI–informed mHealth app, including young people and experts on mHealth promotion and prevention in youth. |
Bähner, Florian; Popov, Tzvetan; Hermann, Selina; Boehme, Nico; Merten, Tom; Zingone, Hélène; Koppe, Georgia; Meyer-Lindenberg, Andreas; Toutounji, Hazem; Durstewitz, Daniel Species-conserved mechanisms of cognitive flexibility in complex environments Journal Article bioRxiv, 2022. @article{Bähner2022, title = {Species-conserved mechanisms of cognitive flexibility in complex environments}, author = {Florian Bähner and Tzvetan Popov and Selina Hermann and Nico Boehme and Tom Merten and Hélène Zingone and Georgia Koppe and Andreas Meyer-Lindenberg and Hazem Toutounji and Daniel Durstewitz}, year = {2022}, date = {2022-11-14}, journal = {bioRxiv}, abstract = {Flexible decision making in complex environments is a hallmark of intelligent behavior but the underlying learning mechanisms and neural computations remain elusive. Through a combination of behavioral, computational and electrophysiological analysis of a novel multidimensional rule-learning paradigm, we show that both rats and humans sequentially probe different behavioral strategies to infer the task rule, rather than learning all possible mappings between environmental cues and actions as current theoretical formulations suppose. This species-conserved process reduces task dimensionality and explains both observed sudden behavioral transitions and positive transfer effects. Behavioral strategies are represented by rat prefrontal activity and strategy-related variables can be decoded from magnetoencephalography signals in human prefrontal cortex. These mechanistic findings provide a foundation for the translational investigation of impaired cognitive flexibility.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Flexible decision making in complex environments is a hallmark of intelligent behavior but the underlying learning mechanisms and neural computations remain elusive. Through a combination of behavioral, computational and electrophysiological analysis of a novel multidimensional rule-learning paradigm, we show that both rats and humans sequentially probe different behavioral strategies to infer the task rule, rather than learning all possible mappings between environmental cues and actions as current theoretical formulations suppose. This species-conserved process reduces task dimensionality and explains both observed sudden behavioral transitions and positive transfer effects. Behavioral strategies are represented by rat prefrontal activity and strategy-related variables can be decoded from magnetoencephalography signals in human prefrontal cortex. These mechanistic findings provide a foundation for the translational investigation of impaired cognitive flexibility. |
Stocker, Julia Elina; Koppe, Georgia; de Paredes, Hanna Reich; Heshmati, Saeideh; Hofmann, Stefan G; Hahn, Tim; van der Maas, Han; Waldorp, Lourens; Jamalabadi, Hamidreza Towards a formal model of psychological intervention: Applying a dynamic network and control approach to attitude modification Journal Article PsyArXiv, 2022. @article{Stocker2022, title = {Towards a formal model of psychological intervention: Applying a dynamic network and control approach to attitude modification}, author = {Julia Elina Stocker and Georgia Koppe and Hanna Reich de Paredes and Saeideh Heshmati and Stefan G Hofmann and Tim Hahn and Han van der Maas and Lourens Waldorp and Hamidreza Jamalabadi}, year = {2022}, date = {2022-11-09}, journal = {PsyArXiv}, abstract = {Despite the growing deployment of network representation throughout psychological sciences, the question of whether and how networks can systematically describe the effects of psychological interventions remains elusive. Towards this end, we capitalize on recent breakthrough in network control theory, the engineering study of networked interventions, to investigate a representative psychological attitude modification experiment. This study examined 30 healthy participants who answered 11 questions about their attitude toward eating meat. They then received 11 arguments to challenge their attitude on the questions, after which they were asked again the same set of questions. Using this data, we constructed networks that quantify the connections between the responses and tested: 1) if the observed psychological effect, in terms of sensitivity and specificity, relates to the regional network topology as described by control theory, 2) if the size of change in responses relates to whole-network topology that quantifies the “ease” of change as described by control theory, and 3) if responses after intervention could be predicted based on formal results from control theory. We found that 1) the interventions that had higher regional topological relevance (the so-called controllability scores) had stronger effect (r> 0.5), the intervention sensitivities were systematically lower for the interventions that were “easier to control”(r=-0.49), and that the model offered substantial prediction accuracy (r= 0.36). }, keywords = {}, pubstate = {published}, tppubtype = {article} } Despite the growing deployment of network representation throughout psychological sciences, the question of whether and how networks can systematically describe the effects of psychological interventions remains elusive. Towards this end, we capitalize on recent breakthrough in network control theory, the engineering study of networked interventions, to investigate a representative psychological attitude modification experiment. This study examined 30 healthy participants who answered 11 questions about their attitude toward eating meat. They then received 11 arguments to challenge their attitude on the questions, after which they were asked again the same set of questions. Using this data, we constructed networks that quantify the connections between the responses and tested: 1) if the observed psychological effect, in terms of sensitivity and specificity, relates to the regional network topology as described by control theory, 2) if the size of change in responses relates to whole-network topology that quantifies the “ease” of change as described by control theory, and 3) if responses after intervention could be predicted based on formal results from control theory. 
We found that 1) the interventions that had higher regional topological relevance (the so-called controllability scores) had stronger effect (r> 0.5), the intervention sensitivities were systematically lower for the interventions that were “easier to control”(r=-0.49), and that the model offered substantial prediction accuracy (r= 0.36). |
Zeb Kurth-Nelson John P O'Doherty, Deanna Barch Sophie Denève Daniel Durstewitz Michael Frank Joshua Gordon Sanjay Mathew Yael Niv Kerry Ressler Heike Tost M J A J Computational Approaches Journal Article Computational Psychiatry: New Perspectives on Mental Illness, 2022. @article{Kurth-Nelson2022, title = {Computational Approaches}, author = {Zeb Kurth-Nelson, John P O'Doherty, Deanna M Barch, Sophie Denève, Daniel Durstewitz, Michael J Frank, Joshua A Gordon, Sanjay J Mathew, Yael Niv, Kerry Ressler, Heike Tost}, url = {https://books.google.de/books?hl=en&lr=&id=746JEAAAQBAJ&oi=fnd&pg=PA77&dq=info:okpKmHWClm8J:scholar.google.com&ots=oqTdTTaF-h&sig=dPeS3sfDXW64H2ytq_NFvQbYWXI&redir_esc=y#v=onepage&q&f=false}, year = {2022}, date = {2022-11-01}, journal = {Computational Psychiatry: New Perspectives on Mental Illness}, abstract = {Vast spectra of biological and psychological processes are potentially involved in the mechanisms of psychiatric illness. Computational neuroscience brings a diverse toolkit to bear on understanding these processes. This chapter begins by organizing the many ways in which computational neuroscience may provide insight to the mechanisms of psychiatric illness. It then contextualizes the quest for deep mechanistic understanding through the perspective that even partial or nonmechanistic understanding can be applied productively. Finally, it questions the standards by which these approaches should be evaluated. If computational psychiatry hopes to go beyond traditional psychiatry, it cannot be judged solely on the basis of how closely it reproduces the diagnoses and prognoses of traditional psychiatry, but must also be judged against more fundamental measures such as patient outcomes.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Vast spectra of biological and psychological processes are potentially involved in the mechanisms of psychiatric illness. Computational neuroscience brings a diverse toolkit to bear on understanding these processes. This chapter begins by organizing the many ways in which computational neuroscience may provide insight to the mechanisms of psychiatric illness. It then contextualizes the quest for deep mechanistic understanding through the perspective that even partial or nonmechanistic understanding can be applied productively. Finally, it questions the standards by which these approaches should be evaluated. If computational psychiatry hopes to go beyond traditional psychiatry, it cannot be judged solely on the basis of how closely it reproduces the diagnoses and prognoses of traditional psychiatry, but must also be judged against more fundamental measures such as patient outcomes. |
Monfared, Zahra; Patra, Mahashweta; Durstewitz, Daniel Robust chaos and multi-stability in piecewise linear recurrent neural networks Journal Article Preprint, 2022. @article{Monfared2022, title = {Robust chaos and multi-stability in piecewise linear recurrent neural networks}, author = {Zahra Monfared and Mahashweta Patra and Daniel Durstewitz}, url = {https://www.researchsquare.com/article/rs-2147683/v1}, year = {2022}, date = {2022-10-27}, journal = {Preprint}, abstract = {Recurrent neural networks (RNNs) are major machine learning tools for the processing of sequential data. Piecewise-linear RNNs (PLRNNs) in particular, which are formally piecewise linear (PWL) maps, have become popular recently as data-driven techniques for dynamical systems reconstructions from time-series observations. For a better understanding of the training process, performance, and behavior of trained PLRNNs, more thorough theoretical analysis is highly needed. Especially the presence of chaos strongly affects RNN training and expressivity. Here we show the existence of robust chaos in 2d PLRNNs. To this end, necessary and sufficient conditions for the occurrence of homoclinic intersections are derived by analyzing the interplay between stable and unstable manifolds of 2d PWL maps. Our analysis focuses on general PWL maps, like PLRNNs, since normal form PWL maps lack important characteristics that can occur in PLRNNs. We also explore some bifurcations and multi-stability involving chaos, since the co-existence of chaotic attractors with other attractor objects poses particular challenges for PLRNN training on the one hand, yet may endow trained PLRNNs with important computational properties on the other. Numerical simulations are performed to verify our results and are demonstrated to be in good agreement with the theoretical derivations. We discuss the implications of our results for PLRNN training, performance on machine learning tasks, and scientific applications.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Recurrent neural networks (RNNs) are major machine learning tools for the processing of sequential data. Piecewise-linear RNNs (PLRNNs) in particular, which are formally piecewise linear (PWL) maps, have become popular recently as data-driven techniques for dynamical systems reconstructions from time-series observations. For a better understanding of the training process, performance, and behavior of trained PLRNNs, more thorough theoretical analysis is highly needed. Especially the presence of chaos strongly affects RNN training and expressivity. Here we show the existence of robust chaos in 2d PLRNNs. To this end, necessary and sufficient conditions for the occurrence of homoclinic intersections are derived by analyzing the interplay between stable and unstable manifolds of 2d PWL maps. Our analysis focuses on general PWL maps, like PLRNNs, since normal form PWL maps lack important characteristics that can occur in PLRNNs. We also explore some bifurcations and multi-stability involving chaos, since the co-existence of chaotic attractors with other attractor objects poses particular challenges for PLRNN training on the one hand, yet may endow trained PLRNNs with important computational properties on the other. Numerical simulations are performed to verify our results and are demonstrated to be in good agreement with the theoretical derivations. We discuss the implications of our results for PLRNN training, performance on machine learning tasks, and scientific applications. |
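To make the object of analysis concrete: a PLRNN iterates an affine map whose linear part switches with the sign pattern of the state, so its Jacobian is constant within each region, and chaos can be probed numerically via the largest Lyapunov exponent of an orbit. A toy sketch under assumed parameter values (other choices may yield fixed points, cycles, or divergence rather than chaos):

```python
# Toy 2d PLRNN  z_{t+1} = A z_t + W relu(z_t) + h ; all parameter values here
# are illustrative assumptions, not the parameter sets analyzed in the paper.
import numpy as np

A = np.array([[0.5, 0.0], [0.0, 0.9]])
W = np.array([[-1.8, -1.2], [1.1, 0.2]])
h = np.array([1.0, -0.5])

z = np.array([0.1, 0.1])
for _ in range(1000):                          # burn-in: settle onto the attractor
    z = A @ z + W @ np.maximum(z, 0.0) + h

v, lyap, T = np.array([1.0, 0.0]), 0.0, 50_000
for _ in range(T):
    D = np.diag((z > 0).astype(float))         # active ReLU pattern of this region
    J = A + W @ D                              # Jacobian is constant within a region
    v = J @ v
    lyap += np.log(np.linalg.norm(v))
    v /= np.linalg.norm(v)
    z = A @ z + W @ np.maximum(z, 0.0) + h

print("largest Lyapunov exponent estimate:", lyap / T)   # > 0 signals chaos
```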
Mikhaeil, Jonas M; Monfared, Zahra; Durstewitz, Daniel On the difficulty of learning chaotic dynamics with RNNs Inproceedings 2022. @inproceedings{Monfared2021b, title = {On the difficulty of learning chaotic dynamics with RNNs}, author = {Jonas M. Mikhaeil and Zahra Monfared and Daniel Durstewitz}, url = {https://openreview.net/pdf?id=-_AMpmyV0Ll}, year = {2022}, date = {2022-09-14}, journal = {36th Conference on Neural Information Processing Systems (NeurIPS 2022).}, abstract = {Recurrent neural networks (RNNs) are wide-spread machine learning tools for modeling sequential and time series data. They are notoriously hard to train because their loss gradients backpropagated in time tend to saturate or diverge during training. This is known as the exploding and vanishing gradient problem. Previous solutions to this issue either built on rather complicated, purpose-engineered architectures with gated memory buffers, or - more recently - imposed constraints that ensure convergence to a fixed point or restrict (the eigenspectrum of) the recurrence matrix. Such constraints, however, convey severe limitations on the expressivity of the RNN. Essential intrinsic dynamics such as multistability or chaos are disabled. This is inherently at disaccord with the chaotic nature of many, if not most, time series encountered in nature and society. Here we offer a comprehensive theoretical treatment of this problem by relating the loss gradients during RNN training to the Lyapunov spectrum of RNN-generated orbits. We mathematically prove that RNNs producing stable equilibrium or cyclic behavior have bounded gradients, whereas the gradients of RNNs with chaotic dynamics always diverge. Based on these analyses and insights, we offer an effective yet simple training technique for chaotic data and guidance on how to choose relevant hyperparameters according to the Lyapunov spectrum. }, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } Recurrent neural networks (RNNs) are wide-spread machine learning tools for modeling sequential and time series data. They are notoriously hard to train because their loss gradients backpropagated in time tend to saturate or diverge during training. This is known as the exploding and vanishing gradient problem. Previous solutions to this issue either built on rather complicated, purpose-engineered architectures with gated memory buffers, or - more recently - imposed constraints that ensure convergence to a fixed point or restrict (the eigenspectrum of) the recurrence matrix. Such constraints, however, convey severe limitations on the expressivity of the RNN. Essential intrinsic dynamics such as multistability or chaos are disabled. This is inherently at disaccord with the chaotic nature of many, if not most, time series encountered in nature and society. Here we offer a comprehensive theoretical treatment of this problem by relating the loss gradients during RNN training to the Lyapunov spectrum of RNN-generated orbits. We mathematically prove that RNNs producing stable equilibrium or cyclic behavior have bounded gradients, whereas the gradients of RNNs with chaotic dynamics always diverge. Based on these analyses and insights, we offer an effective yet simple training technique for chaotic data and guidance on how to choose relevant hyperparameters according to the Lyapunov spectrum. |
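The gradient-to-Lyapunov link at the heart of this paper can be checked in a few lines: the BPTT factor dz_T/dz_0 is a product of per-step Jacobians, so on a chaotic orbit its norm grows roughly like exp(lambda_max * T). A minimal demonstration on the logistic map, our choice of example system, whose largest Lyapunov exponent at r = 4 is known to be ln 2:

```python
# The BPTT chain rule multiplies one Jacobian per time step; on a chaotic
# orbit the product's log-norm grows at the rate of the largest Lyapunov
# exponent. Logistic map x_{t+1} = r x (1 - x) with r = 4 as a stand-in.
import numpy as np

r, T = 4.0, 60
x, prod = 0.123, 1.0
for _ in range(T):
    prod *= abs(r * (1.0 - 2.0 * x))   # |f'(x_t)|, one factor of the BPTT chain
    x = r * x * (1.0 - x)

print(np.log(prod) / T, "vs ln 2 =", np.log(2.0))   # empirical growth rate ~ ln 2
```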
Pinger, Mathieu; Thome, Janine; Halli, Patrick; Sommer, Wolfgang H; Koppe, Georgia; Kirsch, Peter Comparing Discounting of Potentially Real Rewards and Losses by Means of Functional Magnetic Resonance Imaging Journal Article Frontiers in System Neuroscience, 2022. @article{Pinger2022, title = {Comparing Discounting of Potentially Real Rewards and Losses by Means of Functional Magnetic Resonance Imaging}, author = {Mathieu Pinger and Janine Thome and Patrick Halli and Wolfgang H. Sommer and Georgia Koppe and Peter Kirsch}, url = {https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9365957/}, doi = {10.3389/fnsys.2022.867202}, year = {2022}, date = {2022-07-22}, journal = {Frontiers in System Neuroscience}, abstract = {Delay discounting (DD) has often been investigated in the context of decision making whereby individuals attribute decreasing value to rewards in the distant future. Less is known about DD in the context of negative consequences. The aim of this pilot study was to identify commonalities and differences between reward and loss discounting on the behavioral as well as the neural level by means of computational modeling and functional Magnetic Resonance Imaging (fMRI). We furthermore compared the neural activation between anticipation of rewards and losses.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Delay discounting (DD) has often been investigated in the context of decision making whereby individuals attribute decreasing value to rewards in the distant future. Less is known about DD in the context of negative consequences. The aim of this pilot study was to identify commonalities and differences between reward and loss discounting on the behavioral as well as the neural level by means of computational modeling and functional Magnetic Resonance Imaging (fMRI). We furthermore compared the neural activation between anticipation of rewards and losses. |
Thome, Janine; Pinger, Mathieu; Durstewitz, Daniel; Sommer, Wolfgang; Kirsch, Peter; Koppe, Georgia Model-based experimental manipulation of probabilistic behavior in interpretable behavioral latent variable models Journal Article PsyArXiv Preprints , 2022. @article{Thome2022, title = {Model-based experimental manipulation of probabilistic behavior in interpretable behavioral latent variable models}, author = {Janine Thome and Mathieu Pinger and Daniel Durstewitz and Wolfgang Sommer and Peter Kirsch and Georgia Koppe }, url = {https://psyarxiv.com/s7wda/}, doi = {10.31234/osf.io/s7wda}, year = {2022}, date = {2022-07-14}, journal = {PsyArXiv Preprints }, abstract = {In studying mental processes, we often rely on quantifying not directly observable latent constructs. Interpretable latent variable models that probabilistically link observations to the underlying construct have increasingly been used to draw inferences from observed behavior. However, these models are far more powerful than that. By formally embedding experimentally manipulable variables within the latent construct, they can be used to make precise and falsifiable hypotheses or predictions. At the same time, they pinpoint how experimental conditions must be designed to test these hypotheses. By comparing predictions to observed behavior, we may then assess and evaluate the validity of a measurement instrument directly and objectively, without resorting to comparisons with other latent constructs, as traditionally done in psychology. These ideas are exemplified here on the experimentally not directly observable construct of delay discounting. We propose a generic approach to systematically generate experimental conditions based on the aforementioned models. The conditions are explicitly generated so as to predict 9 graded behavioral discounting probabilities across participants. Meeting this prediction, the framework induces discounting probabilities on 9 levels. In contrast to several alternative models, the applied model exhibits high validity as indicated by a comparably low out-of-sample prediction error. We also report evidence for inter-individual differences w.r.t. the most suitable models underlying behavior.}, keywords = {}, pubstate = {published}, tppubtype = {article} } In studying mental processes, we often rely on quantifying not directly observable latent constructs. Interpretable latent variable models that probabilistically link observations to the underlying construct have increasingly been used to draw inferences from observed behavior. However, these models are far more powerful than that. By formally embedding experimentally manipulable variables within the latent construct, they can be used to make precise and falsifiable hypotheses or predictions. At the same time, they pinpoint how experimental conditions must be designed to test these hypotheses. By comparing predictions to observed behavior, we may then assess and evaluate the validity of a measurement instrument directly and objectively, without resorting to comparisons with other latent constructs, as traditionally done in psychology. These ideas are exemplified here on the experimentally not directly observable construct of delay discounting. We propose a generic approach to systematically generate experimental conditions based on the aforementioned models. The conditions are explicitly generated so as to predict 9 graded behavioral discounting probabilities across participants. 
Meeting this prediction, the framework induces discounting probabilities on 9 levels. In contrast to several alternative models, the applied model exhibits high validity as indicated by a comparably low out-of-sample prediction error. We also report evidence for inter-individual differences w.r.t. the most suitable models underlying behavior. |
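As a concrete cartoon of this model-guided design (our own notation and parameter values, not the study's fitted model): with hyperbolic discounting V = A/(1 + kD) and a logistic choice rule, the model can be inverted to construct trials whose predicted probability of choosing the delayed option equals any target level, for instance nine graded levels:

```python
# Invert a hyperbolic discounting model with a logistic choice rule to find
# the immediate amount that yields a target choice probability. k, beta and
# the delayed offer are illustrative assumptions.
import numpy as np

def immediate_amount(k, beta, delayed_amount, delay, target_p):
    v_delayed = delayed_amount / (1.0 + k * delay)   # hyperbolic subjective value
    logit = np.log(target_p / (1.0 - target_p))
    return v_delayed - logit / beta                  # logistic choice rule, inverted

for p in np.linspace(0.1, 0.9, 9):                   # nine graded probability levels
    a = immediate_amount(k=0.02, beta=1.5, delayed_amount=40.0,
                         delay=30.0, target_p=p)
    print(f"target P(delayed) = {p:.1f} -> offer {a:.2f} now vs 40.00 in 30 days")
```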
Brenner, Manuel; Hess, Florian; Mikhaeil, Jonas; Bereska, Leonard; Monfared, Zahra; Kuo, Po-Chen; Durstewitz, Daniel Tractable Dendritic RNNs for Reconstructing Nonlinear Dynamical Systems Inproceedings 2022. @inproceedings{Brenner2022, title = {Tractable Dendritic RNNs for Reconstructing Nonlinear Dynamical Systems}, author = {Manuel Brenner and Florian Hess and Jonas Mikhaeil and Leonard Bereska and Zahra Monfared and Po-Chen Kuo and Daniel Durstewitz}, url = {https://proceedings.mlr.press/v162/brenner22a.html}, year = {2022}, date = {2022-07-01}, journal = {Proceedings of Machine Learning Research, ICML 2022}, abstract = {In many scientific disciplines, we are interested in inferring the nonlinear dynamical system underlying a set of observed time series, a challenging task in the face of chaotic behavior and noise. Previous deep learning approaches toward this goal often suffered from a lack of interpretability and tractability. In particular, the high-dimensional latent spaces often required for a faithful embedding, even when the underlying dynamics lives on a lower-dimensional manifold, can hamper theoretical analysis. Motivated by the emerging principles of dendritic computation, we augment a dynamically interpretable and mathematically tractable piecewise-linear (PL) recurrent neural network (RNN) by a linear spline basis expansion. We show that this approach retains all the theoretically appealing properties of the simple PLRNN, yet boosts its capacity for approximating arbitrary nonlinear dynamical systems in comparatively low dimensions. We employ two frameworks for training the system, one combining BPTT with teacher forcing, and another based on fast and scalable variational inference. We show that the dendritically expanded PLRNN achieves better reconstructions with fewer parameters and dimensions on various dynamical systems benchmarks and compares favorably to other methods, while retaining a tractable and interpretable structure.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } In many scientific disciplines, we are interested in inferring the nonlinear dynamical system underlying a set of observed time series, a challenging task in the face of chaotic behavior and noise. Previous deep learning approaches toward this goal often suffered from a lack of interpretability and tractability. In particular, the high-dimensional latent spaces often required for a faithful embedding, even when the underlying dynamics lives on a lower-dimensional manifold, can hamper theoretical analysis. Motivated by the emerging principles of dendritic computation, we augment a dynamically interpretable and mathematically tractable piecewise-linear (PL) recurrent neural network (RNN) by a linear spline basis expansion. We show that this approach retains all the theoretically appealing properties of the simple PLRNN, yet boosts its capacity for approximating arbitrary nonlinear dynamical systems in comparatively low dimensions. We employ two frameworks for training the system, one combining BPTT with teacher forcing, and another based on fast and scalable variational inference. We show that the dendritically expanded PLRNN achieves better reconstructions with fewer parameters and dimensions on various dynamical systems benchmarks and compares favorably to other methods, while retaining a tractable and interpretable structure. |
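The basis expansion at the core of this architecture replaces each unit's single ReLU with a weighted sum of B shifted ReLUs, so each unit gains a flexible one-dimensional nonlinearity while the latent map stays piecewise linear. A minimal forward-pass sketch with assumed shapes and random values (in the actual model these parameters are learned from data):

```python
# Sketch of a dendritic basis expansion: phi(z) = sum_b alpha_b * relu(z - theta_b),
# applied unit-wise inside a PLRNN-style latent step. All values are placeholders.
import numpy as np

M, B = 5, 4                                   # latent units, basis functions per unit
rng = np.random.default_rng(1)
A = np.diag(rng.uniform(0.3, 0.9, M))         # diagonal linear part, as in PLRNNs
W = rng.normal(scale=0.3, size=(M, M))
h = rng.normal(scale=0.1, size=M)
alpha = rng.normal(size=(M, B))               # branch weights (learned in practice)
theta = rng.normal(size=(M, B))               # branch thresholds (learned in practice)

def phi(z):
    # weighted sum of shifted ReLUs per unit: shape (M, B) -> (M,)
    return np.sum(alpha * np.maximum(z[:, None] - theta, 0.0), axis=1)

def step(z):
    return A @ z + W @ phi(z) + h

z = np.zeros(M)
for _ in range(10):
    z = step(z)
print(z)
```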
Kramer, Daniel; Bommer, Philine Lou; Tombolini, Carlo; Koppe, Georgia; Durstewitz, Daniel Identifying nonlinear dynamical systems from multi-modal time series data Inproceedings 2022. @inproceedings{Kramer2022, title = {Identifying nonlinear dynamical systems from multi-modal time series data}, author = {Daniel Kramer and Philine Lou Bommer and Carlo Tombolini and Georgia Koppe and Daniel Durstewitz}, url = {https://proceedings.mlr.press/v162/kramer22a.html}, year = {2022}, date = {2022-06-21}, journal = {Proceedings of Machine Learning Research}, volume = {162}, abstract = {Empirically observed time series in physics, biology, or medicine, are commonly generated by some underlying dynamical system (DS) which is the target of scientific interest. There is an increasing interest to harvest machine learning methods to reconstruct this latent DS in a completely data-driven, unsupervised way. In many areas of science it is common to sample time series observations from many data modalities simultaneously, e.g. electrophysiological and behavioral time series in a typical neuroscience experiment. However, current machine learning tools for reconstructing DSs usually focus on just one data modality. Here we propose a general framework for multi-modal data integration for the purpose of nonlinear DS identification and cross-modal prediction. This framework is based on dynamically interpretable recurrent neural networks as general approximators of nonlinear DSs, coupled to sets of modality-specific decoder models from the class of generalized linear models. Both an expectation-maximization and a variational inference algorithm for model training are advanced and compared. We show on nonlinear DS benchmarks that our algorithms can efficiently compensate for too noisy or missing information in one data channel by exploiting other channels, and demonstrate on experimental neuroscience data how the algorithm learns to link different data domains to the underlying dynamics }, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } Empirically observed time series in physics, biology, or medicine, are commonly generated by some underlying dynamical system (DS) which is the target of scientific interest. There is an increasing interest to harvest machine learning methods to reconstruct this latent DS in a completely data-driven, unsupervised way. In many areas of science it is common to sample time series observations from many data modalities simultaneously, e.g. electrophysiological and behavioral time series in a typical neuroscience experiment. However, current machine learning tools for reconstructing DSs usually focus on just one data modality. Here we propose a general framework for multi-modal data integration for the purpose of nonlinear DS identification and cross-modal prediction. This framework is based on dynamically interpretable recurrent neural networks as general approximators of nonlinear DSs, coupled to sets of modality-specific decoder models from the class of generalized linear models. Both an expectation-maximization and a variational inference algorithm for model training are advanced and compared. We show on nonlinear DS benchmarks that our algorithms can efficiently compensate for too noisy or missing information in one data channel by exploiting other channels, and demonstrate on experimental neuroscience data how the algorithm learns to link different data domains to the underlying dynamics |
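The decoding layer described here can be summarized compactly: one shared latent trajectory feeds several modality-specific generalized linear observation models, and training maximizes their summed log-likelihoods. A schematic with one Gaussian and one Poisson channel on synthetic stand-in data (in the real framework the latent trajectory would come from the RNN):

```python
# One shared latent trajectory, two modality-specific GLM decoders, and the
# joint log-likelihood that a trainer would maximize. Synthetic data only.
import numpy as np
from scipy.stats import norm, poisson

T, M = 200, 3
rng = np.random.default_rng(2)
Z = rng.normal(size=(T, M))              # stand-in for an RNN latent trajectory

B = rng.normal(size=(4, M))              # Gaussian decoder (e.g. behavioral channel)
C = rng.normal(scale=0.3, size=(6, M))   # Poisson decoder (e.g. spike counts)

X = Z @ B.T + 0.1 * rng.normal(size=(T, 4))
rates = np.exp(Z @ C.T)
N = rng.poisson(rates)

ll = (norm.logpdf(X, loc=Z @ B.T, scale=0.1).sum()
      + poisson.logpmf(N, mu=rates).sum())
print("joint log-likelihood across modalities:", ll)
```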
Thome, Janine; Pinger, Mathieu; Halli, Patrick; Durstewitz, Daniel; Sommer, Wolfgang H; Kirsch, Peter; Koppe, Georgia A Model Guided Approach to Evoke Homogeneous Behavior During Temporal Reward and Loss Discounting Journal Article Frontiers in Psychiatry, 2022. @article{Thome2022b, title = {A Model Guided Approach to Evoke Homogeneous Behavior During Temporal Reward and Loss Discounting}, author = {Janine Thome and Mathieu Pinger and Patrick Halli and Daniel Durstewitz and Wolfgang H. Sommer and Peter Kirsch and Georgia Koppe}, url = {https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9253427/}, doi = {10.3389/fpsyt.2022.846119}, year = {2022}, date = {2022-06-21}, journal = {Frontiers in Psychiatry}, abstract = {The tendency to devaluate future options as a function of time, known as delay discounting, is associated with various factors such as psychiatric illness and personality. Under identical experimental conditions, individuals may therefore strongly differ in the degree to which they discount future options. In delay discounting tasks, this inter-individual variability inevitably results in an unequal number of discounted trials per subject, generating difficulties in linking delay discounting to psychophysiological and neural correlates. Many studies have therefore focused on assessing delay discounting adaptively. Here, we extend these approaches by developing an adaptive paradigm which aims at inducing more comparable and homogeneous discounting frequencies across participants on a dimensional scale.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The tendency to devaluate future options as a function of time, known as delay discounting, is associated with various factors such as psychiatric illness and personality. Under identical experimental conditions, individuals may therefore strongly differ in the degree to which they discount future options. In delay discounting tasks, this inter-individual variability inevitably results in an unequal number of discounted trials per subject, generating difficulties in linking delay discounting to psychophysiological and neural correlates. Many studies have therefore focused on assessing delay discounting adaptively. Here, we extend these approaches by developing an adaptive paradigm which aims at inducing more comparable and homogeneous discounting frequencies across participants on a dimensional scale. |
Melbaum, Svenja; Russo, Eleonora; Eriksson, David; Schneider, Artur; Durstewitz, Daniel; Brox, Thomas; Diester, Ilka Conserved structures of neural activity in sensorimotor cortex of freely moving rats allow cross-subject decoding Journal Article bioRxiv, 2022. @article{Melbaum2022, title = {Conserved structures of neural activity in sensorimotor cortex of freely moving rats allow cross-subject decoding}, author = {Svenja Melbaum and Eleonora Russo and David Eriksson and Artur Schneider and Daniel Durstewitz and Thomas Brox and Ilka Diester}, url = {https://www.biorxiv.org/content/10.1101/2021.03.04.433869v2}, doi = {https://doi.org/10.1101/2021.03.04.433869 }, year = {2022}, date = {2022-02-18}, journal = {bioRxiv}, abstract = {Our knowledge about neuronal activity in the sensorimotor cortex relies primarily on stereotyped movements that are strictly controlled in experimental settings. It remains unclear how results can be carried over to less constrained behavior like that of freely moving subjects. Toward this goal, we developed a self-paced behavioral paradigm that encouraged rats to engage in different movement types. We employed bilateral electrophysiological recordings across the entire sensorimotor cortex and simultaneous paw tracking. These techniques revealed behavioral coupling of neurons with lateralization and an anterior–posterior gradient from the premotor to the primary sensory cortex. The structure of population activity patterns was conserved across animals despite the severe under-sampling of the total number of neurons and variations in electrode positions across individuals. We demonstrated cross-subject and cross-session generalization in a decoding task through alignments of low-dimensional neural manifolds, providing evidence of a conserved neuronal code.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Our knowledge about neuronal activity in the sensorimotor cortex relies primarily on stereotyped movements that are strictly controlled in experimental settings. It remains unclear how results can be carried over to less constrained behavior like that of freely moving subjects. Toward this goal, we developed a self-paced behavioral paradigm that encouraged rats to engage in different movement types. We employed bilateral electrophysiological recordings across the entire sensorimotor cortex and simultaneous paw tracking. These techniques revealed behavioral coupling of neurons with lateralization and an anterior–posterior gradient from the premotor to the primary sensory cortex. The structure of population activity patterns was conserved across animals despite the severe under-sampling of the total number of neurons and variations in electrode positions across individuals. We demonstrated cross-subject and cross-session generalization in a decoding task through alignments of low-dimensional neural manifolds, providing evidence of a conserved neuronal code. |
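One standard way to implement the manifold alignment described here, sketched purely for illustration (the paper's exact procedure may differ), is an orthogonal Procrustes rotation between subjects' low-dimensional projections, after which a decoder fit on one subject can be applied to the other:

```python
# Align subject B's low-d neural manifold onto subject A's with an orthogonal
# Procrustes rotation, then transfer A's decoder to B. Synthetic toy data.
import numpy as np
from scipy.linalg import orthogonal_procrustes

rng = np.random.default_rng(4)
T, d = 500, 3
latents = rng.normal(size=(T, d))             # shared low-d trajectory (toy)
Q = np.linalg.qr(rng.normal(size=(d, d)))[0]  # subject B's manifold is rotated
A_proj = latents + 0.05 * rng.normal(size=(T, d))
B_proj = latents @ Q + 0.05 * rng.normal(size=(T, d))

R, _ = orthogonal_procrustes(B_proj, A_proj)  # rotation mapping B onto A
aligned = B_proj @ R

w = np.linalg.lstsq(A_proj, latents[:, 0], rcond=None)[0]  # decoder fit on A
print(np.corrcoef(aligned @ w, latents[:, 0])[0, 1])       # transfers to B
```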
2021 |
Owusu, Priscilla N; Reininghaus, Ulrich; Koppe, Georgia; Dankwa-Mullan, Irene; Bärnighausen, Till Artificial intelligence applications in social media for depression screening: A systematic review protocol for content validity processes Journal Article PLoS ONE, 2021. @article{Owusu2021, title = {Artificial intelligence applications in social media for depression screening: A systematic review protocol for content validity processes}, author = { Priscilla N. Owusu and Ulrich Reininghaus and Georgia Koppe and Irene Dankwa-Mullan and Till Bärnighausen}, url = {https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0259499}, doi = {https://doi.org/10.1371/journal.pone.0259499}, year = {2021}, date = {2021-11-08}, journal = {PLoS ONE}, abstract = {The popularization of social media has led to the coalescing of user groups around mental health conditions; in particular, depression. Social media offers a rich environment for contextualizing and predicting users’ self-reported burden of depression. Modern artificial intelligence (AI) methods are commonly employed in analyzing user-generated sentiment on social media. In the forthcoming systematic review, we will examine the content validity of these computer-based health surveillance models with respect to standard diagnostic frameworks. Drawing from a clinical perspective, we will attempt to establish a normative judgment about the strengths of these modern AI applications in the detection of depression.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The popularization of social media has led to the coalescing of user groups around mental health conditions; in particular, depression. Social media offers a rich environment for contextualizing and predicting users’ self-reported burden of depression. Modern artificial intelligence (AI) methods are commonly employed in analyzing user-generated sentiment on social media. In the forthcoming systematic review, we will examine the content validity of these computer-based health surveillance models with respect to standard diagnostic frameworks. Drawing from a clinical perspective, we will attempt to establish a normative judgment about the strengths of these modern AI applications in the detection of depression. |
Thome, Janine; Steinbach, Robert; Grosskreutz, Julian; Durstewitz, Daniel; Koppe, Georgia Classification of amyotrophic lateral sclerosis by brain volume, connectivity, and network dynamics Journal Article Human Brain Mapping, 2021. @article{Thome2021, title = {Classification of amyotrophic lateral sclerosis by brain volume, connectivity, and network dynamics}, author = {Janine Thome and Robert Steinbach and Julian Grosskreutz and Daniel Durstewitz and Georgia Koppe}, url = {https://doi.org/10.1002/hbm.25679}, year = {2021}, date = {2021-10-16}, journal = {Human Brain Mapping}, abstract = {Emerging studies corroborate the importance of neuroimaging biomarkers and machine learning to improve diagnostic classification of amyotrophic lateral sclerosis (ALS). While most studies focus on structural data, recent studies assessing functional connectivity between brain regions by linear methods highlight the role of brain function. These studies have yet to be combined with brain structure and nonlinear functional features. We investigate the role of linear and nonlinear functional brain features, and the benefit of combining brain structure and function for ALS classification. ALS patients (N = 97) and healthy controls (N = 59) underwent structural and functional resting state magnetic resonance imaging. Based on key hubs of resting state networks, we defined three feature sets comprising brain volume, resting state functional connectivity (rsFC), as well as (nonlinear) resting state dynamics assessed via recurrent neural networks. Unimodal and multimodal random forest classifiers were built to classify ALS. Out-of-sample prediction errors were assessed via five-fold cross-validation. Unimodal classifiers achieved a classification accuracy of 56.35–61.66%. Multimodal classifiers outperformed unimodal classifiers achieving accuracies of 62.85–66.82%. Evaluating the ranking of individual features' importance scores across all classifiers revealed that rsFC features were most dominant in classification. While univariate analyses revealed reduced rsFC in ALS patients, functional features more generally indicated deficits in information integration across resting state brain networks in ALS. The present work underlines that combining brain structure and function provides an additional benefit to diagnostic classification, as indicated by multimodal classifiers, while emphasizing the importance of capturing both linear and nonlinear functional brain properties to identify discriminative biomarkers of ALS.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Emerging studies corroborate the importance of neuroimaging biomarkers and machine learning to improve diagnostic classification of amyotrophic lateral sclerosis (ALS). While most studies focus on structural data, recent studies assessing functional connectivity between brain regions by linear methods highlight the role of brain function. These studies have yet to be combined with brain structure and nonlinear functional features. We investigate the role of linear and nonlinear functional brain features, and the benefit of combining brain structure and function for ALS classification. ALS patients (N = 97) and healthy controls (N = 59) underwent structural and functional resting state magnetic resonance imaging.
Based on key hubs of resting state networks, we defined three feature sets comprising brain volume, resting state functional connectivity (rsFC), as well as (nonlinear) resting state dynamics assessed via recurrent neural networks. Unimodal and multimodal random forest classifiers were built to classify ALS. Out-of-sample prediction errors were assessed via five-fold cross-validation. Unimodal classifiers achieved a classification accuracy of 56.35–61.66%. Multimodal classifiers outperformed unimodal classifiers achieving accuracies of 62.85–66.82%. Evaluating the ranking of individual features' importance scores across all classifiers revealed that rsFC features were most dominant in classification. While univariate analyses revealed reduced rsFC in ALS patients, functional features more generally indicated deficits in information integration across resting state brain networks in ALS. The present work underlines that combining brain structure and function provides an additional benefit to diagnostic classification, as indicated by multimodal classifiers, while emphasizing the importance of capturing both linear and nonlinear functional brain properties to identify discriminative biomarkers of ALS. |
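The classification setup described above maps directly onto standard tooling. A schematic of unimodal versus multimodal random forests with five-fold cross-validation (random placeholder features, so accuracies will hover near the majority-class level rather than the reported 56-67%):

```python
# Unimodal vs. multimodal random forests with 5-fold CV (scikit-learn).
# Feature matrices are random placeholders, not the study's data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 156                                    # 97 patients + 59 controls
y = np.array([1] * 97 + [0] * 59)
volume = rng.normal(size=(n, 20))          # brain-volume features
rsfc = rng.normal(size=(n, 30))            # functional-connectivity features
dynamics = rng.normal(size=(n, 15))        # RNN-derived dynamical features

for name, X in [("volume", volume), ("rsFC", rsfc), ("dynamics", dynamics),
                ("multimodal", np.hstack([volume, rsfc, dynamics]))]:
    acc = cross_val_score(RandomForestClassifier(n_estimators=500, random_state=0),
                          X, y, cv=5, scoring="accuracy").mean()
    print(f"{name:11s} 5-fold accuracy ~ {acc:.2f}")
```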
Braun, Urs; Harneit, Anais; Pergola, Giulio; Menara, Tommaso; Schäfer, Axel; Betzel, Richard F; Zang, Zhenxiang; Schweiger, Janina I; Zhang, Xiaolong; Schwarz, Kristina; Chen, Junfang; Blasi, Giuseppe; Bertolino, Alessandro; Durstewitz, Daniel; Pasqualetti, Fabio; Schwarz, Emanuel; Meyer-Lindenberg, Andreas; Bassett, Danielle S; Tost, Heike Brain network dynamics during working memory are modulated by dopamine and diminished in schizophrenia Journal Article Nature Communications, 2021. @article{Braun2021, title = {Brain network dynamics during working memory are modulated by dopamine and diminished in schizophrenia}, author = {Urs Braun and Anais Harneit and Giulio Pergola and Tommaso Menara and Axel Schäfer and Richard F. Betzel and Zhenxiang Zang and Janina I. Schweiger and Xiaolong Zhang and Kristina Schwarz and Junfang Chen and Giuseppe Blasi and Alessandro Bertolino and Daniel Durstewitz and Fabio Pasqualetti and Emanuel Schwarz and Andreas Meyer-Lindenberg and Danielle S. Bassett and Heike Tost}, url = {https://www.nature.com/articles/s41467-021-23694-9}, doi = {10.1038/s41467-021-23694-9}, year = {2021}, date = {2021-06-09}, journal = {Nature Communications}, abstract = {Dynamical brain state transitions are critical for flexible working memory but the network mechanisms are incompletely understood. Here, we show that working memory performance entails brain-wide switching between activity states using a combination of functional magnetic resonance imaging in healthy controls and individuals with schizophrenia, pharmacological fMRI, genetic analyses and network control theory. The stability of states relates to dopamine D1 receptor gene expression while state transitions are influenced by D2 receptor expression and pharmacological modulation. Individuals with schizophrenia show altered network control properties, including a more diverse energy landscape and decreased stability of working memory representations. Our results demonstrate the relevance of dopamine signaling for the steering of whole-brain network dynamics during working memory and link these processes to schizophrenia pathophysiology.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Dynamical brain state transitions are critical for flexible working memory but the network mechanisms are incompletely understood. Here, we show that working memory performance entails brain-wide switching between activity states using a combination of functional magnetic resonance imaging in healthy controls and individuals with schizophrenia, pharmacological fMRI, genetic analyses and network control theory. The stability of states relates to dopamine D1 receptor gene expression while state transitions are influenced by D2 receptor expression and pharmacological modulation. Individuals with schizophrenia show altered network control properties, including a more diverse energy landscape and decreased stability of working memory representations. Our results demonstrate the relevance of dopamine signaling for the steering of whole-brain network dynamics during working memory and link these processes to schizophrenia pathophysiology. |
Russo, Eleonora; Ma, Tianyang; Spanagel, Rainer; Durstewitz, Daniel; Toutounji, Hazem; Köhr, Georg Coordinated prefrontal state transition leads extinction of reward-seeking behaviors Journal Article Journal of Neuroscience, 41 (11), 2021. @article{Russo2021, title = {Coordinated prefrontal state transition leads extinction of reward-seeking behaviors}, author = {Eleonora Russo and Tianyang Ma and Rainer Spanagel and Daniel Durstewitz and Hazem Toutounji and Georg Köhr}, url = {https://www.jneurosci.org/content/jneuro/41/11/2406.full.pdf}, year = {2021}, date = {2021-02-02}, journal = {Journal of Neuroscience}, volume = {41}, number = {11}, abstract = {Extinction learning suppresses conditioned reward responses and is thus fundamental to adapt to changing environmental demands and to control excessive reward seeking. The medial prefrontal cortex (mPFC) monitors and controls conditioned reward responses. Abrupt transitions in mPFC activity anticipate changes in conditioned responses to altered contingencies. It remains, however, unknown whether such transitions are driven by the extinction of old behavioral strategies or by the acquisition of new competing ones. Using in vivo multiple single-unit recordings of mPFC in male rats, we studied the relationship between single-unit and population dynamics during extinction learning, using alcohol as a positive reinforcer in an operant conditioning paradigm. To examine the fine temporal relation between neural activity and behavior, we developed a novel behavioral model that allowed us to identify the number, onset, and duration of extinction-learning episodes in the behavior of each animal. We found that single-unit responses to conditioned stimuli changed even under stable experimental conditions and behavior. However, when behavioral responses to task contingencies had to be updated, unit-specific modulations became coordinated across the whole population, pushing the network into a new stable attractor state. Thus, extinction learning is not associated with suppressed mPFC responses to conditioned stimuli, but is anticipated by single-unit coordination into population-wide transitions of the internal state of the animal.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Extinction learning suppresses conditioned reward responses and is thus fundamental to adapt to changing environmental demands and to control excessive reward seeking. The medial prefrontal cortex (mPFC) monitors and controls conditioned reward responses. Abrupt transitions in mPFC activity anticipate changes in conditioned responses to altered contingencies. It remains, however, unknown whether such transitions are driven by the extinction of old behavioral strategies or by the acquisition of new competing ones. Using in vivo multiple single-unit recordings of mPFC in male rats, we studied the relationship between single-unit and population dynamics during extinction learning, using alcohol as a positive reinforcer in an operant conditioning paradigm. To examine the fine temporal relation between neural activity and behavior, we developed a novel behavioral model that allowed us to identify the number, onset, and duration of extinction-learning episodes in the behavior of each animal. We found that single-unit responses to conditioned stimuli changed even under stable experimental conditions and behavior.
However, when behavioral responses to task contingencies had to be updated, unit-specific modulations became coordinated across the whole population, pushing the network into a new stable attractor state. Thus, extinction learning is not associated with suppressed mPFC responses to conditioned stimuli, but is anticipated by single-unit coordination into population-wide transitions of the internal state of the animal. |
2020 |
Koppe, Georgia; Meyer-Lindenberg, Andreas; Durstewitz, Daniel Deep learning for small and big data in psychiatry Journal Article Neuropsychopharmacology, 2020. @article{Koppe2020b, title = {Deep learning for small and big data in psychiatry}, author = {Georgia Koppe and Andreas Meyer-Lindenberg and Daniel Durstewitz}, url = {https://www.nature.com/articles/s41386-020-0767-z}, doi = {10.1038/s41386-020-0767-z}, year = {2020}, date = {2020-07-15}, journal = {Neuropsychopharmacology}, abstract = {Psychiatry today must gain a better understanding of the common and distinct pathophysiological mechanisms underlying psychiatric disorders in order to deliver more effective, person-tailored treatments. To this end, it appears that the analysis of ‘small’ experimental samples using conventional statistical approaches has largely failed to capture the heterogeneity underlying psychiatric phenotypes. Modern algorithms and approaches from machine learning, particularly deep learning, provide new hope to address these issues given their outstanding prediction performance in other disciplines. The strength of deep learning algorithms is that they can implement very complicated, and in principle arbitrary predictor-response mappings efficiently. This power comes at a cost, the need for large training (and test) samples to infer the (sometimes over millions of) model parameters. This appears to be at odds with the as yet rather ‘small’ samples available in psychiatric human research to date (n < 10,000), and the ambition of predicting treatment at the single subject level (n = 1). Here we aim at giving a comprehensive overview on how we can yet use such models for prediction in psychiatry. We review how machine learning approaches compare to more traditional statistical hypothesis-driven approaches, how their complexity relates to the need of large sample sizes, and what we can do to optimally use these powerful techniques in psychiatric neuroscience.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Psychiatry today must gain a better understanding of the common and distinct pathophysiological mechanisms underlying psychiatric disorders in order to deliver more effective, person-tailored treatments. To this end, it appears that the analysis of ‘small’ experimental samples using conventional statistical approaches has largely failed to capture the heterogeneity underlying psychiatric phenotypes. Modern algorithms and approaches from machine learning, particularly deep learning, provide new hope to address these issues given their outstanding prediction performance in other disciplines. The strength of deep learning algorithms is that they can implement very complicated, and in principle arbitrary predictor-response mappings efficiently. This power comes at a cost, the need for large training (and test) samples to infer the (sometimes over millions of) model parameters. This appears to be at odds with the as yet rather ‘small’ samples available in psychiatric human research to date (n < 10,000), and the ambition of predicting treatment at the single subject level (n = 1). Here we aim at giving a comprehensive overview on how we can yet use such models for prediction in psychiatry. We review how machine learning approaches compare to more traditional statistical hypothesis-driven approaches, how their complexity relates to the need of large sample sizes, and what we can do to optimally use these powerful techniques in psychiatric neuroscience. |
Oettl, Lars-Lennart; Scheller, Max; Filosa, Carla; Wieland, Sebastian; Haag, Franziska; Loeb, Cathrin; Durstewitz, Daniel; Shusterman, Roman; Russo, Eleonora; Kelsch, Wolfgang Phasic dopamine reinforces distinct striatal stimulus encoding in the olfactory tubercle driving dopaminergic reward prediction Journal Article Nature Communications, 2020. @article{Oettl2020, title = {Phasic dopamine reinforces distinct striatal stimulus encoding in the olfactory tubercle driving dopaminergic reward prediction}, author = {Lars-Lennart Oettl and Max Scheller and Carla Filosa and Sebastian Wieland and Franziska Haag and Cathrin Loeb and Daniel Durstewitz and Roman Shusterman and Eleonora Russo and Wolfgang Kelsch}, url = {https://www.nature.com/articles/s41467-020-17257-7#disqus_thread}, doi = {https://doi.org/10.1038/s41467-020-17257-7}, year = {2020}, date = {2020-07-10}, journal = {Nature Communications}, abstract = {The learning of stimulus-outcome associations allows for predictions about the environment. Ventral striatum and dopaminergic midbrain neurons form a larger network for generating reward prediction signals from sensory cues. Yet, the network plasticity mechanisms to generate predictive signals in these distributed circuits have not been entirely clarified. Also, direct evidence of the underlying interregional assembly formation and information transfer is still missing. Here we show that phasic dopamine is sufficient to reinforce the distinctness of stimulus representations in the ventral striatum even in the absence of reward. Upon such reinforcement, striatal stimulus encoding gives rise to interregional assemblies that drive dopaminergic neurons during stimulus-outcome learning. These assemblies dynamically encode the predicted reward value of conditioned stimuli. Together, our data reveal that ventral striatal and midbrain reward networks form a reinforcing loop to generate reward prediction coding.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The learning of stimulus-outcome associations allows for predictions about the environment. Ventral striatum and dopaminergic midbrain neurons form a larger network for generating reward prediction signals from sensory cues. Yet, the network plasticity mechanisms to generate predictive signals in these distributed circuits have not been entirely clarified. Also, direct evidence of the underlying interregional assembly formation and information transfer is still missing. Here we show that phasic dopamine is sufficient to reinforce the distinctness of stimulus representations in the ventral striatum even in the absence of reward. Upon such reinforcement, striatal stimulus encoding gives rise to interregional assemblies that drive dopaminergic neurons during stimulus-outcome learning. These assemblies dynamically encode the predicted reward value of conditioned stimuli. Together, our data reveal that ventral striatal and midbrain reward networks form a reinforcing loop to generate reward prediction coding. |
Zahra Monfared, Daniel Durstewitz Existence of n-cycles and border-collision bifurcations in piecewise-linear continuous maps with applications to recurrent neural networks Journal Article Nonlinear Dynamics, 2020. @article{Monfared2020, title = {Existence of n-cycles and border-collision bifurcations in piecewise-linear continuous maps with applications to recurrent neural networks}, author = {Zahra Monfared, Daniel Durstewitz}, url = {https://arxiv.org/abs/1911.04304}, doi = {10.1007/s11071-020-05777-2}, year = {2020}, date = {2020-07-01}, journal = {Nonlinear Dynamics}, abstract = {Piecewise linear recurrent neural networks (PLRNNs) form the basis of many successful machine learning applications for time series prediction and dynamical systems identification, but rigorous mathematical analysis of their dynamics and properties is lagging behind. Here we contribute to this topic by investigating the existence of n-cycles (n≥3) and border-collision bifurcations in a class of n-dimensional piecewise linear continuous maps which have the general form of a PLRNN. This is particularly important as for one-dimensional maps the existence of 3-cycles implies chaos. It is shown that these n-cycles collide with the switching boundary in a border-collision bifurcation, and parametric regions for the existence of both stable and unstable n-cycles and border-collision bifurcations will be derived theoretically. We then discuss how our results can be extended and applied to PLRNNs. Finally, numerical simulations demonstrate the implementation of our results and are found to be in good agreement with the theoretical derivations. Our findings thus provide a basis for understanding periodic behavior in PLRNNs, how it emerges in bifurcations, and how it may lead into chaos. }, keywords = {}, pubstate = {published}, tppubtype = {article} } Piecewise linear recurrent neural networks (PLRNNs) form the basis of many successful machine learning applications for time series prediction and dynamical systems identification, but rigorous mathematical analysis of their dynamics and properties is lagging behind. Here we contribute to this topic by investigating the existence of n-cycles (n≥3) and border-collision bifurcations in a class of n-dimensional piecewise linear continuous maps which have the general form of a PLRNN. This is particularly important as for one-dimensional maps the existence of 3-cycles implies chaos. It is shown that these n-cycles collide with the switching boundary in a border-collision bifurcation, and parametric regions for the existence of both stable and unstable n-cycles and border-collision bifurcations will be derived theoretically. We then discuss how our results can be extended and applied to PLRNNs. Finally, numerical simulations demonstrate the implementation of our results and are found to be in good agreement with the theoretical derivations. Our findings thus provide a basis for understanding periodic behavior in PLRNNs, how it emerges in bifurcations, and how it may lead into chaos. |
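For intuition about border-collision bifurcations, consider the simplest continuous piecewise-linear map with one switching boundary: its fixed point hits the boundary as the offset parameter crosses zero, and the admissible branch and its stability change there. A toy scan, with slopes chosen by us for illustration:

```python
# 1d piecewise-linear continuous map f(x) = a*x + mu (x < 0), b*x + mu (x >= 0).
# The fixed point collides with the switching boundary x = 0 at mu = 0.
import numpy as np

a, b = 0.5, -1.5              # slopes of the two linear pieces (assumed)

def fixed_points(mu):
    fps = []
    xl = mu / (1.0 - a)
    if xl < 0:                # admissible only inside its own region
        fps.append((round(xl, 3), "stable" if abs(a) < 1 else "unstable"))
    xr = mu / (1.0 - b)
    if xr >= 0:
        fps.append((round(xr, 3), "stable" if abs(b) < 1 else "unstable"))
    return fps

for mu in (-0.2, -0.01, 0.0, 0.01, 0.2):
    print(mu, fixed_points(mu))   # stable branch disappears as mu crosses 0
```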
Zahra Monfared, Daniel Durstewitz Transformation of ReLU-based recurrent neural networks from discrete-time to continuous-time Inproceedings 2020. @inproceedings{Monfared2020b, title = {Transformation of ReLU-based recurrent neural networks from discrete-time to continuous-time}, author = {Zahra Monfared, Daniel Durstewitz}, url = {https://arxiv.org/abs/2007.00321}, year = {2020}, date = {2020-07-01}, journal = {Proceedings of the International Conference on Machine Learning}, abstract = {Recurrent neural networks (RNN) as used in machine learning are commonly formulated in discrete time, i.e. as recursive maps. This brings a lot of advantages for training models on data, e.g. for the purpose of time series prediction or dynamical systems identification, as powerful and efficient inference algorithms exist for discrete time systems and numerical integration of differential equations is not necessary. On the other hand, mathematical analysis of dynamical systems inferred from data is often more convenient and enables additional insights if these are formulated in continuous time, i.e. as systems of ordinary (or partial) differential equations (ODE). Here we show how to perform such a translation from discrete to continuous time for a particular class of ReLU-based RNN. We prove three theorems on the mathematical equivalence between the discrete and continuous time formulations under a variety of conditions, and illustrate how to use our mathematical results on different machine learning and nonlinear dynamical systems examples. }, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } Recurrent neural networks (RNN) as used in machine learning are commonly formulated in discrete time, i.e. as recursive maps. This brings a lot of advantages for training models on data, e.g. for the purpose of time series prediction or dynamical systems identification, as powerful and efficient inference algorithms exist for discrete time systems and numerical integration of differential equations is not necessary. On the other hand, mathematical analysis of dynamical systems inferred from data is often more convenient and enables additional insights if these are formulated in continuous time, i.e. as systems of ordinary (or partial) differential equations (ODE). Here we show how to perform such a translation from discrete to continuous time for a particular class of ReLU-based RNN. We prove three theorems on the mathematical equivalence between the discrete and continuous time formulations under a variety of conditions, and illustrate how to use our mathematical results on different machine learning and nonlinear dynamical systems examples. |
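Within a single linear region the translation is explicit: a discrete-time affine update z -> Wz + h is the time-dt flow of an ODE dz/dt = Kz + c with expm(K*dt) = W. The following numerical check of that identity is our own sketch, valid only when the principal matrix logarithm of W exists (no eigenvalues on the closed negative real axis); the paper's theorems treat the full region-switching ReLU case:

```python
# Check: the discrete update W z0 + h equals the time-dt flow of
# dz/dt = K z + c, with K = logm(W)/dt and c = K (W - I)^{-1} h
# (K and W commute, since K is a function of W).
import numpy as np
from scipy.linalg import expm, logm, solve

dt = 1.0
W = np.array([[0.9, 0.2], [-0.1, 0.7]])   # example region matrix (assumed)
h = np.array([0.5, -0.3])

K = logm(W) / dt
c = K @ solve(W - np.eye(2), h)

z0 = np.array([1.0, 2.0])
discrete = W @ z0 + h
# Flow of the affine ODE over dt: z(dt) = expm(K dt) z0 + K^{-1}(expm(K dt) - I) c
continuous = expm(K * dt) @ z0 + np.linalg.inv(K) @ (expm(K * dt) - np.eye(2)) @ c
print(np.allclose(discrete, continuous))  # True
```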
Linke, Julia; Koppe, Georgia; Scholz, Vanessa; Kanske, Philipp; Durstewitz, Daniel; Wessa, Michèle Aberrant probabilistic reinforcement learning in first-degree relatives of individuals with bipolar disorder Journal Article Journal of Affective Disorders, 2020. @article{Linke2020, title = {Aberrant probabilistic reinforcement learning in first-degree relatives of individuals with bipolar disorder}, author = {Julia Linke and Georgia Koppe and Vanessa Scholz and Philipp Kanske and Daniel Durstewitz and Michèle Wessa}, url = {https://doi.org/10.1016/j.jad.2019.11.063}, doi = {10.1016/j.jad.2019.11.063}, year = {2020}, date = {2020-03-01}, journal = {Journal of Affective Disorders}, abstract = {Motivational dysregulation represents a core vulnerability factor for bipolar disorder. Whether this also comprises aberrant learning of stimulus-reinforcer contingencies is less clear.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Motivational dysregulation represents a core vulnerability factor for bipolar disorder. Whether this also comprises aberrant learning of stimulus-reinforcer contingencies is less clear. |
Russo, Eleonora; Ma, Tianyang; Spanagel, Rainer; Durstewitz, Daniel; Toutounji, Hazem; Köhr, Georg Coordinated prefrontal state transition leads extinction of reward-seeking behaviors Journal Article bioRxiv, 2020. @article{Russo2020, title = {Coordinated prefrontal state transition leads extinction of reward-seeking behaviors}, author = {Eleonora Russo and Tianyang Ma and Rainer Spanagel and Daniel Durstewitz and Hazem Toutounji and Georg Köhr}, url = {https://www.biorxiv.org/content/10.1101/2020.02.26.964510v1.full}, doi = {https://doi.org/10.1101/2020.02.26.964510}, year = {2020}, date = {2020-02-27}, journal = {bioRxiv}, abstract = {Extinction learning suppresses conditioned reward responses and is thus fundamental to adapt to changing environmental demands and to control excessive reward seeking. The medial prefrontal cortex (mPFC) monitors and controls conditioned reward responses. Using in vivo multiple single-unit recordings of mPFC we studied the relationship between single-unit and population dynamics during different phases of an operant conditioning task. To examine the fine temporal relation between neural activity and behavior, we developed a model-based statistical analysis that captured behavioral idiosyncrasies. We found that single-unit responses to conditioned stimuli changed throughout the course of a session even under stable experimental conditions and consistent behavior. However, when behavioral responses to task contingencies had to be updated during the extinction phase, unit-specific modulations became coordinated across the whole population, pushing the network into a new stable attractor state. These results show that extinction learning is not associated with suppressed mPFC responses to conditioned stimuli, but is driven by single-unit coordination into population-wide transitions of the animal’s internal state.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Extinction learning suppresses conditioned reward responses and is thus fundamental to adapt to changing environmental demands and to control excessive reward seeking. The medial prefrontal cortex (mPFC) monitors and controls conditioned reward responses. Using in vivo multiple single-unit recordings of mPFC we studied the relationship between single-unit and population dynamics during different phases of an operant conditioning task. To examine the fine temporal relation between neural activity and behavior, we developed a model-based statistical analysis that captured behavioral idiosyncrasies. We found that single-unit responses to conditioned stimuli changed throughout the course of a session even under stable experimental conditions and consistent behavior. However, when behavioral responses to task contingencies had to be updated during the extinction phase, unit-specific modulations became coordinated across the whole population, pushing the network into a new stable attractor state. These results show that extinction learning is not associated with suppressed mPFC responses to conditioned stimuli, but is driven by single-unit coordination into population-wide transitions of the animal’s internal state. |
Koppe, Georgia; Huys, Quentin; Durstewitz, Daniel Psychiatric Illnesses as Disorders of Network Dynamics Journal Article Biological Psychiatry, 2020. @article{Koppe2020, title = {Psychiatric Illnesses as Disorders of Network Dynamics}, author = {Georgia Koppe and Quentin Huys and Daniel Durstewitz}, url = {https://www.biologicalpsychiatrycnni.org/article/S2451902220300197/abstract}, year = {2020}, date = {2020-01-16}, journal = {Biological Psychiatry}, abstract = {This review provides a dynamical systems perspective on mental illness. After a brief introduction to the theory of dynamical systems, we focus on the common assumption in theoretical and computational neuroscience that phenomena at subcellular, cellular, network, cognitive, and even societal levels could be described and explained in terms of dynamical systems theory. As such, dynamical systems theory may also provide a framework for understanding mental illnesses. The review examines a number of core dynamical systems phenomena and relates each of these to aspects of mental illnesses. This provides an outline of how a broad set of phenomena in serious and common mental illnesses and neurological conditions can be understood in dynamical systems terms. It suggests that the dynamical systems level may provide a central, hublike level of convergence that unifies and links multiple biophysical and behavioral phenomena in the sense that diverse biophysical changes can give rise to the same dynamical phenomena and, vice versa, similar changes in dynamics may yield different behavioral symptoms depending on the brain area where these changes manifest. We also briefly outline current methodological approaches for inferring dynamical systems from data such as electroencephalography, functional magnetic resonance imaging, or self-reports, and we discuss the implications of a dynamical view for the diagnosis, prognosis, and treatment of psychiatric conditions. We argue that a consideration of dynamics could play a potentially transformative role in the choice and target of interventions.}, keywords = {}, pubstate = {published}, tppubtype = {article} } This review provides a dynamical systems perspective on mental illness. After a brief introduction to the theory of dynamical systems, we focus on the common assumption in theoretical and computational neuroscience that phenomena at subcellular, cellular, network, cognitive, and even societal levels could be described and explained in terms of dynamical systems theory. As such, dynamical systems theory may also provide a framework for understanding mental illnesses. The review examines a number of core dynamical systems phenomena and relates each of these to aspects of mental illnesses. This provides an outline of how a broad set of phenomena in serious and common mental illnesses and neurological conditions can be understood in dynamical systems terms. It suggests that the dynamical systems level may provide a central, hublike level of convergence that unifies and links multiple biophysical and behavioral phenomena in the sense that diverse biophysical changes can give rise to the same dynamical phenomena and, vice versa, similar changes in dynamics may yield different behavioral symptoms depending on the brain area where these changes manifest.
We also briefly outline current methodological approaches for inferring dynamical systems from data such as electroencephalography, functional magnetic resonance imaging, or self-reports, and we discuss the implications of a dynamical view for the diagnosis, prognosis, and treatment of psychiatric conditions. We argue that a consideration of dynamics could play a potentially transformative role in the choice and target of interventions. |
2019 |
Schmidt, Dominik; Koppe, Georgia; Beutelspacher, Max; Durstewitz, Daniel Inferring Dynamical Systems with Long-Range Dependencies through Line Attractor Regularization Inproceedings 2019. @inproceedings{Schmidt2019, title = {Inferring Dynamical Systems with Long-Range Dependencies through Line Attractor Regularization}, author = {Dominik Schmidt and Georgia Koppe and Max Beutelspacher and Daniel Durstewitz}, url = {http://arxiv.org/abs/1910.03471}, year = {2019}, date = {2019-10-01}, abstract = {Vanilla RNN with ReLU activation have a simple structure that is amenable to systematic dynamical systems analysis and interpretation, but they suffer from the exploding vs. vanishing gradients problem. Recent attempts to retain this simplicity while alleviating the gradient problem are based on proper initialization schemes or orthogonality/unitary constraints on the RNN's recurrence matrix, which, however, comes with limitations to its expressive power with regards to dynamical systems phenomena like chaos or multi-stability. Here, we instead suggest a regularization scheme that pushes part of the RNN's latent subspace toward a line attractor configuration that enables long short-term memory and arbitrarily slow time scales. We show that our approach excels on a number of benchmarks like the sequential MNIST or multiplication problems, and enables reconstruction of dynamical systems which harbor widely different time scales.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } Vanilla RNN with ReLU activation have a simple structure that is amenable to systematic dynamical systems analysis and interpretation, but they suffer from the exploding vs. vanishing gradients problem. Recent attempts to retain this simplicity while alleviating the gradient problem are based on proper initialization schemes or orthogonality/unitary constraints on the RNN's recurrence matrix, which, however, comes with limitations to its expressive power with regards to dynamical systems phenomena like chaos or multi-stability. Here, we instead suggest a regularization scheme that pushes part of the RNN's latent subspace toward a line attractor configuration that enables long short-term memory and arbitrarily slow time scales. We show that our approach excels on a number of benchmarks like the sequential MNIST or multiplication problems, and enables reconstruction of dynamical systems which harbor widely different time scales. |
Oettl, Lars-Lennart; Scheller, Max; Wieland, Sebastian; Haag, Franziska; Wolf, David; Loeb, Cathrin; Ravi, Namasivayam; Durstewitz, Daniel; Shusterman, Roman; Russo, Eleonora; Kelsch, Wolfgang Phasic dopamine enhances the distinct decoding and perceived salience of stimuli Journal Article bioRxiv, 2019. @article{Oettl2019, title = {Phasic dopamine enhances the distinct decoding and perceived salience of stimuli}, author = {Lars-Lennart Oettl and Max Scheller and Sebastian Wieland and Franziska Haag and David Wolf and Cathrin Loeb and Namasivayam Ravi and Daniel Durstewitz and Roman Shusterman and Eleonora Russo and Wolfgang Kelsch}, url = {https://www.biorxiv.org/content/10.1101/771162v1}, doi = {10.1101/771162}, year = {2019}, date = {2019-09-18}, journal = {bioRxiv}, abstract = {Subjects learn to assign value to stimuli that predict outcomes. Novelty, rewards or punishment evoke reinforcing phasic dopamine release from midbrain neurons to ventral striatum that mediates expected value and salience of stimuli in humans and animals. It is however not clear whether phasic dopamine release is sufficient to form distinct engrams that encode salient stimuli within these circuits. We addressed this question in awake mice. Evoked phasic dopamine induced plasticity selectively to the population encoding of coincidently presented stimuli and increased their distinctness from other stimuli. Phasic dopamine thereby enhanced the decoding of previously paired stimuli and increased their perceived salience. This dopamine-induced plasticity mimicked population coding dynamics of conditioned stimuli during reinforcement learning. These findings provide a network coding mechanism of how dopaminergic learning signals promote value assignment to stimulus representations.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Subjects learn to assign value to stimuli that predict outcomes. Novelty, rewards or punishment evoke reinforcing phasic dopamine release from midbrain neurons to ventral striatum that mediates expected value and salience of stimuli in humans and animals. It is however not clear whether phasic dopamine release is sufficient to form distinct engrams that encode salient stimuli within these circuits. We addressed this question in awake mice. Evoked phasic dopamine induced plasticity selectively to the population encoding of coincidently presented stimuli and increased their distinctness from other stimuli. Phasic dopamine thereby enhanced the decoding of previously paired stimuli and increased their perceived salience. This dopamine-induced plasticity mimicked population coding dynamics of conditioned stimuli during reinforcement learning. These findings provide a network coding mechanism of how dopaminergic learning signals promote value assignment to stimulus representations. |
Koppe, Georgia; Toutounji, Hazem; Kirsch, Peter; Lis, Stefanie; Durstewitz, Daniel Identifying nonlinear dynamical systems via generative recurrent neural networks with applications to fMRI Journal Article PLOS Computational Biology, 15 (8), pp. e1007263, 2019, ISSN: 1553-7358. @article{Koppe2019, title = {Identifying nonlinear dynamical systems via generative recurrent neural networks with applications to fMRI}, author = {Georgia Koppe and Hazem Toutounji and Peter Kirsch and Stefanie Lis and Daniel Durstewitz}, editor = {Leyla Isik}, url = {http://dx.plos.org/10.1371/journal.pcbi.1007263}, doi = {10.1371/journal.pcbi.1007263}, issn = {1553-7358}, year = {2019}, date = {2019-08-01}, journal = {PLOS Computational Biology}, volume = {15}, number = {8}, pages = {e1007263}, keywords = {}, pubstate = {published}, tppubtype = {article} } |
Braun, Urs; Harneit, Anais; Pergola, Giulio; Menara, Tommaso; Schaefer, Axel; Betzel, Richard F; Zang, Zhenxiang; Schweiger, Janina I; Schwarz, Kristina; Chen, Junfang; Blasi, Giuseppe; Bertolino, Alessandro; Durstewitz, Daniel; Pasqualetti, Fabio; Schwarz, Emanuel; Meyer-Lindenberg, Andreas; Bassett, Danielle S; Tost, Heike Brain state stability during working memory is explained by network control theory, modulated by dopamine D1/D2 receptor function, and diminished in schizophrenia Journal Article arXiv Preprint, 2019. @article{Braun2019, title = {Brain state stability during working memory is explained by network control theory, modulated by dopamine D1/D2 receptor function, and diminished in schizophrenia}, author = {Urs Braun and Anais Harneit and Giulio Pergola and Tommaso Menara and Axel Schaefer and Richard F Betzel and Zhenxiang Zang and Janina I Schweiger and Kristina Schwarz and Junfang Chen and Giuseppe Blasi and Alessandro Bertolino and Daniel Durstewitz and Fabio Pasqualetti and Emanuel Schwarz and Andreas Meyer-Lindenberg and Danielle S Bassett and Heike Tost}, url = {https://arxiv.org/ftp/arxiv/papers/1906/1906.09290.pdf}, doi = {arXiv:1906.09290}, year = {2019}, date = {2019-06-21}, journal = {arXiv Preprint}, abstract = {Dynamical brain state transitions are critical for flexible working memory but the network mechanisms are incompletely understood. Here, we show that working memory entails brainwide switching between activity states. The stability of states relates to dopamine D1 receptor gene expression while state transitions are influenced by D2 receptor expression and pharmacological modulation. Schizophrenia patients show altered network control properties, including a more diverse energy landscape and decreased stability of working memory representations.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Dynamical brain state transitions are critical for flexible working memory but the network mechanisms are incompletely understood. Here, we show that working memory entails brainwide switching between activity states. The stability of states relates to dopamine D1 receptor gene expression while state transitions are influenced by D2 receptor expression and pharmacological modulation. Schizophrenia patients show altered network control properties, including a more diverse energy landscape and decreased stability of working memory representations. |
Kirschbaum, Elke; Haußmann, Manuel; Wolf, Steffen; Sonntag, Hannah; Schneider, Justus; Elzoheiry, Shehabeldin; Kann, Oliver; Durstewitz, Daniel; Hamprecht, Fred A. LeMoNADe: Learned Motif and Neuronal Assembly Detection in calcium imaging videos Conference ICLR. Proceedings, 2019. @conference{Kirschbaum2019, title = {LeMoNADe: Learned Motif and Neuronal Assembly Detection in calcium imaging videos}, author = {Elke Kirschbaum and Manuel Haußmann and Steffen Wolf and Hannah Sonntag and Justus Schneider and Shehabeldin Elzoheiry and Oliver Kann and Daniel Durstewitz and Fred A. Hamprecht}, url = {https://arxiv.org/abs/1806.09963}, year = {2019}, date = {2019-02-22}, publisher = {ICLR. Proceedings}, abstract = {Neuronal assemblies, loosely defined as subsets of neurons with reoccurring spatio-temporally coordinated activation patterns, or "motifs", are thought to be building blocks of neural representations and information processing. We here propose LeMoNADe, a new exploratory data analysis method that facilitates hunting for motifs in calcium imaging videos, the dominant microscopic functional imaging modality in neurophysiology. Our nonparametric method extracts motifs directly from videos, bypassing the difficult intermediate step of spike extraction. Our technique augments variational autoencoders with a discrete stochastic node, and we show in detail how a differentiable reparametrization and relaxation can be used. An evaluation on simulated data, with available ground truth, reveals excellent quantitative performance. In real video data acquired from brain slices, with no ground truth available, LeMoNADe uncovers nontrivial candidate motifs that can help generate hypotheses for more focused biological investigations.}, keywords = {}, pubstate = {published}, tppubtype = {conference} } Neuronal assemblies, loosely defined as subsets of neurons with reoccurring spatio-temporally coordinated activation patterns, or "motifs", are thought to be building blocks of neural representations and information processing. We here propose LeMoNADe, a new exploratory data analysis method that facilitates hunting for motifs in calcium imaging videos, the dominant microscopic functional imaging modality in neurophysiology. Our nonparametric method extracts motifs directly from videos, bypassing the difficult intermediate step of spike extraction. Our technique augments variational autoencoders with a discrete stochastic node, and we show in detail how a differentiable reparametrization and relaxation can be used. An evaluation on simulated data, with available ground truth, reveals excellent quantitative performance. In real video data acquired from brain slices, with no ground truth available, LeMoNADe uncovers nontrivial candidate motifs that can help generate hypotheses for more focused biological investigations. |
Koppe, Georgia; Guloksuz, Sinan; Reininghaus, Ulrich; Durstewitz, Daniel Recurrent Neural Networks in Mobile Sampling and Intervention Journal Article Schizophrenia Bulletin, 45 (2), pp. 272–276, 2019, ISSN: 17451701. @article{Koppe2019b, title = {Recurrent Neural Networks in Mobile Sampling and Intervention}, author = {Georgia Koppe and Sinan Guloksuz and Ulrich Reininghaus and Daniel Durstewitz}, doi = {10.1093/schbul/sby171}, issn = {17451701}, year = {2019}, date = {2019-01-01}, journal = {Schizophrenia Bulletin}, volume = {45}, number = {2}, pages = {272--276}, abstract = {The rapid rise and now widespread distribution of handheld and wearable devices, such as smartphones, fitness trackers, or smartwatches, has opened a new universe of possibilities for monitoring emotion and cognition in everyday-life context, and for applying experience- and context-specific interventions in psychosis. These devices are equipped with multiple sensors, recording channels, and app-based opportunities for assessment using experience sampling methodology (ESM), which enables to collect vast amounts of temporally highly resolved and ecologically valid personal data from various domains in daily life. In psychosis, this allows to elucidate intermediate and clinical phenotypes, psychological processes and mechanisms, and their interplay with socioenvironmental factors, as well as to evaluate the effects of treatments for psychosis on important clinical and social outcomes. Although these data offer immense opportunities, they also pose tremendous challenges for data analysis. These challenges include the sheer amount of time series data generated and the many different data modalities and their specific properties and sampling rates. After a brief review of studies and approaches to ESM and ecological momentary interventions in psychosis, we will discuss recurrent neural networks (RNNs) as a powerful statistical machine learning approach for time series analysis and prediction in this context. RNNs can be trained on multiple data modalities simultaneously to learn a dynamical model that could be used to forecast individual trajectories and schedule online feedback and intervention accordingly. Future research using this approach is likely going to offer new avenues to further our understanding and treatments of psychosis.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The rapid rise and now widespread distribution of handheld and wearable devices, such as smartphones, fitness trackers, or smartwatches, has opened a new universe of possibilities for monitoring emotion and cognition in everyday-life context, and for applying experience- and context-specific interventions in psychosis. These devices are equipped with multiple sensors, recording channels, and app-based opportunities for assessment using experience sampling methodology (ESM), which enables to collect vast amounts of temporally highly resolved and ecologically valid personal data from various domains in daily life. In psychosis, this allows to elucidate intermediate and clinical phenotypes, psychological processes and mechanisms, and their interplay with socioenvironmental factors, as well as to evaluate the effects of treatments for psychosis on important clinical and social outcomes. Although these data offer immense opportunities, they also pose tremendous challenges for data analysis. 
These challenges include the sheer amount of time series data generated and the many different data modalities and their specific properties and sampling rates. After a brief review of studies and approaches to ESM and ecological momentary interventions in psychosis, we will discuss recurrent neural networks (RNNs) as a powerful statistical machine learning approach for time series analysis and prediction in this context. RNNs can be trained on multiple data modalities simultaneously to learn a dynamical model that could be used to forecast individual trajectories and schedule online feedback and intervention accordingly. Future research using this approach is likely going to offer new avenues to further our understanding and treatments of psychosis. |
Durstewitz, Daniel; Koppe, Georgia; Meyer-Lindenberg, Andreas Deep neural networks in psychiatry Journal Article Molecular Psychiatry, 2019, ISSN: 14765578. @article{Durstewitz2019, title = {Deep neural networks in psychiatry}, author = {Daniel Durstewitz and Georgia Koppe and Andreas Meyer-Lindenberg}, url = {http://dx.doi.org/10.1038/s41380-019-0365-9}, doi = {10.1038/s41380-019-0365-9}, issn = {14765578}, year = {2019}, date = {2019-01-01}, journal = {Molecular Psychiatry}, publisher = {Springer US}, abstract = {Machine and deep learning methods, today's core of artificial intelligence, have been applied with increasing success and impact in many commercial and research settings. They are powerful tools for large scale data analysis, prediction and classification, especially in very data-rich environments (“big data”), and have started to find their way into medical applications. Here we will first give an overview of machine learning methods, with a focus on deep and recurrent neural networks, their relation to statistics, and the core principles behind them. We will then discuss and review directions along which (deep) neural networks can be, or already have been, applied in the context of psychiatry, and will try to delineate their future potential in this area. We will also comment on an emerging area that so far has been much less well explored: by embedding semantically interpretable computational models of brain dynamics or behavior into a statistical machine learning context, insights into dysfunction beyond mere prediction and classification may be gained. Especially this marriage of computational models with statistical inference may offer insights into neural and behavioral mechanisms that could open completely novel avenues for psychiatric treatment.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Machine and deep learning methods, today's core of artificial intelligence, have been applied with increasing success and impact in many commercial and research settings. They are powerful tools for large scale data analysis, prediction and classification, especially in very data-rich environments (“big data”), and have started to find their way into medical applications. Here we will first give an overview of machine learning methods, with a focus on deep and recurrent neural networks, their relation to statistics, and the core principles behind them. We will then discuss and review directions along which (deep) neural networks can be, or already have been, applied in the context of psychiatry, and will try to delineate their future potential in this area. We will also comment on an emerging area that so far has been much less well explored: by embedding semantically interpretable computational models of brain dynamics or behavior into a statistical machine learning context, insights into dysfunction beyond mere prediction and classification may be gained. Especially this marriage of computational models with statistical inference may offer insights into neural and behavioral mechanisms that could open completely novel avenues for psychiatric treatment. |
2018 |
Toutounji, Hazem; Durstewitz, Daniel Detecting Multiple Change Points Using Adaptive Regression Splines With Application to Neural Recordings Journal Article Frontiers in Neuroinformatics, 12 (67), 2018. @article{Toutounji2018, title = {Detecting Multiple Change Points Using Adaptive Regression Splines With Application to Neural Recordings}, author = {Hazem Toutounji and Daniel Durstewitz}, url = {https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6187984/}, doi = {10.3389/fninf.2018.00067}, year = {2018}, date = {2018-10-04}, journal = {Frontiers in Neuroinformatics}, volume = {12}, number = {67}, abstract = {Time series, as frequently the case in neuroscience, are rarely stationary, but often exhibit abrupt changes due to attractor transitions or bifurcations in the dynamical systems producing them. A plethora of methods for detecting such change points in time series statistics have been developed over the years, in addition to test criteria to evaluate their significance. Issues to consider when developing change point analysis methods include computational demands, difficulties arising from either limited amount of data or a large number of covariates, and arriving at statistical tests with sufficient power to detect as many changes as contained in potentially high-dimensional time series. Here, a general method called Paired Adaptive Regressors for Cumulative Sum is developed for detecting multiple change points in the mean of multivariate time series. The method's advantages over alternative approaches are demonstrated through a series of simulation experiments. This is followed by a real data application to neural recordings from rat medial prefrontal cortex during learning. Finally, the method's flexibility to incorporate useful features from state-of-the-art change point detection techniques is discussed, along with potential drawbacks and suggestions to remedy them.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Time series, as frequently the case in neuroscience, are rarely stationary, but often exhibit abrupt changes due to attractor transitions or bifurcations in the dynamical systems producing them. A plethora of methods for detecting such change points in time series statistics have been developed over the years, in addition to test criteria to evaluate their significance. Issues to consider when developing change point analysis methods include computational demands, difficulties arising from either limited amount of data or a large number of covariates, and arriving at statistical tests with sufficient power to detect as many changes as contained in potentially high-dimensional time series. Here, a general method called Paired Adaptive Regressors for Cumulative Sum is developed for detecting multiple change points in the mean of multivariate time series. The method's advantages over alternative approaches are demonstrated through a series of simulation experiments. This is followed by a real data application to neural recordings from rat medial prefrontal cortex during learning. Finally, the method's flexibility to incorporate useful features from state-of-the-art change point detection techniques is discussed, along with potential drawbacks and suggestions to remedy them. |
Durstewitz, Daniel; Huys, Quentin J M; Koppe, Georgia Psychiatric Illnesses as Disorders of Network Dynamics Journal Article arXiv preprint, pp. 1–24, 2018. @article{Durstewitza, title = {Psychiatric Illnesses as Disorders of Network Dynamics}, author = {Daniel Durstewitz and Quentin J M Huys and Georgia Koppe}, url = {https://arxiv.org/pdf/1809.06303.pdf}, year = {2018}, date = {2018-09-18}, pages = {1--24}, abstract = {This review provides a dynamical systems perspective on psychiatric symptoms and disease, and discusses its potential implications for diagnosis, prognosis, and treatment. After a brief introduction into the theory of dynamical systems, we will focus on the idea that cognitive and emotional functions are implemented in terms of dynamical systems phenomena in the brain, a common assumption in theoretical and computational neuroscience. Specific computational models, anchored in biophysics, for generating different types of network dynamics, and with a relation to psychiatric symptoms, will be briefly reviewed, as well as methodological approaches for reconstructing the system dynamics from observed time series (like fMRI or EEG recordings). We then attempt to outline how psychiatric phenomena, associated with schizophrenia, depression, PTSD, ADHD, phantom pain, and others, could be understood in dynamical systems terms. Most importantly, we will try to convey that the dynamical systems level may provide a central, hub-like level of convergence which unifies and links multiple biophysical and behavioral phenomena, in the sense that diverse biophysical changes can give rise to the same dynamical phenomena and, vice versa, similar changes in dynamics may yield different behavioral symptoms depending on the brain area where these changes manifest. If this assessment is correct, it may have profound implications for the diagnosis, prognosis, and treatment of psychiatric conditions, as it puts the focus on dynamics. We therefore argue that consideration of dynamics should play an important role in the choice and target of interventions.}, keywords = {}, pubstate = {published}, tppubtype = {article} } This review provides a dynamical systems perspective on psychiatric symptoms and disease, and discusses its potential implications for diagnosis, prognosis, and treatment. After a brief introduction into the theory of dynamical systems, we will focus on the idea that cognitive and emotional functions are implemented in terms of dynamical systems phenomena in the brain, a common assumption in theoretical and computational neuroscience. Specific computational models, anchored in biophysics, for generating different types of network dynamics, and with a relation to psychiatric symptoms, will be briefly reviewed, as well as methodological approaches for reconstructing the system dynamics from observed time series (like fMRI or EEG recordings). We then attempt to outline how psychiatric phenomena, associated with schizophrenia, depression, PTSD, ADHD, phantom pain, and others, could be understood in dynamical systems terms. Most importantly, we will try to convey that the dynamical systems level may provide a central, hub-like level of convergence which unifies and links multiple biophysical and behavioral phenomena, in the sense that diverse biophysical changes can give rise to the same dynamical phenomena and, vice versa, similar changes in dynamics may yield different behavioral symptoms depending on the brain area where these changes manifest.
If this assessment is correct, it may have profound implications for the diagnosis, prognosis, and treatment of psychiatric conditions, as it puts the focus on dynamics. We therefore argue that consideration of dynamics should play an important role in the choice and target of interventions. |
Oboti, Livio; Russo, Eleonora; Tran, Tuyen; Durstewitz, Daniel; Corbin, Joshua G Amygdala Corticofugal Input Shapes Mitral Cell Responses in the Accessory Olfactory Bulb Journal Article eNeuro, 2018. @article{Oboti2018, title = {Amygdala Corticofugal Input Shapes Mitral Cell Responses in the Accessory Olfactory Bulb}, author = {Livio Oboti and Eleonora Russo and Tuyen Tran and Daniel Durstewitz and Joshua G. Corbin}, doi = {10.1523/ENEURO.0175-18.2018}, year = {2018}, date = {2018-05-18}, journal = {eNeuro}, abstract = {Interconnections between the olfactory bulb and the amygdala are a major pathway for triggering strong behavioral responses to a variety of odorants. However, while this broad mapping has been established, the patterns of amygdala feedback connectivity and the influence on olfactory circuitry remain unknown. Here, using a combination of neuronal tracing approaches, we dissect the connectivity of a cortical amygdala [posteromedial cortical nucleus (PmCo)] feedback circuit innervating the mouse accessory olfactory bulb. Optogenetic activation of PmCo feedback mainly results in feedforward mitral cell (MC) inhibition through direct excitation of GABAergic granule cells. In addition, LED-driven activity of corticofugal afferents increases the gain of MC responses to olfactory nerve stimulation. Thus, through corticofugal pathways, the PmCo likely regulates primary olfactory and social odor processing.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Interconnections between the olfactory bulb and the amygdala are a major pathway for triggering strong behavioral responses to a variety of odorants. However, while this broad mapping has been established, the patterns of amygdala feedback connectivity and the influence on olfactory circuitry remain unknown. Here, using a combination of neuronal tracing approaches, we dissect the connectivity of a cortical amygdala [posteromedial cortical nucleus (PmCo)] feedback circuit innervating the mouse accessory olfactory bulb. Optogenetic activation of PmCo feedback mainly results in feedforward mitral cell (MC) inhibition through direct excitation of GABAergic granule cells. In addition, LED-driven activity of corticofugal afferents increases the gain of MC responses to olfactory nerve stimulation. Thus, through corticofugal pathways, the PmCo likely regulates primary olfactory and social odor processing. |
Koppe, Georgia; Toutounji, Hazem; Kirsch, Peter; Lis, Stefanie; Durstewitz, Daniel Identifying nonlinear dynamical systems via generative recurrent neural networks with applications to fMRI Journal Article arXiv, 2018. @article{Koppe2018, title = {Identifying nonlinear dynamical systems via generative recurrent neural networks with applications to fMRI}, author = {Georgia Koppe and Hazem Toutounji and Peter Kirsch and Stefanie Lis and Daniel Durstewitz}, url = {https://arxiv.org/ftp/arxiv/papers/1902/1902.07186.pdf}, year = {2018}, date = {2018-01-01}, journal = {arXiv}, abstract = {A major tenet in theoretical neuroscience is that cognitive and behavioral processes are ultimately implemented in terms of the neural system dynamics. Accordingly, a major aim for the analysis of neurophysiological measurements should lie in the identification of the computational dynamics underlying task processing. Here we advance a state space model (SSM) based on generative piecewise-linear recurrent neural networks (PLRNN) to assess dynamics from neuroimaging data. In contrast to many other nonlinear time series models which have been proposed for reconstructing latent dynamics, our model is easily interpretable in neural terms, amenable to systematic dynamical systems analysis of the resulting set of equations, and can straightforwardly be transformed into an equivalent continuous-time dynamical system. The major contributions of this paper are the introduction of a new observation model suitable for functional magnetic resonance imaging (fMRI)}, keywords = {}, pubstate = {published}, tppubtype = {article} } A major tenet in theoretical neuroscience is that cognitive and behavioral processes are ultimately implemented in terms of the neural system dynamics. Accordingly, a major aim for the analysis of neurophysiological measurements should lie in the identification of the computational dynamics underlying task processing. Here we advance a state space model (SSM) based on generative piecewise-linear recurrent neural networks (PLRNN) to assess dynamics from neuroimaging data. In contrast to many other nonlinear time series models which have been proposed for reconstructing latent dynamics, our model is easily interpretable in neural terms, amenable to systematic dynamical systems analysis of the resulting set of equations, and can straightforwardly be transformed into an equivalent continuous-time dynamical system. The major contributions of this paper are the introduction of a new observation model suitable for functional magnetic resonance imaging (fMRI) |
2017 |
Durstewitz, Daniel Advanced Data Analysis in Neuroscience Book 2017, ISBN: 9783319599748. @book{Durstewitzb, title = {Advanced Data Analysis in Neuroscience}, author = {Daniel Durstewitz}, url = {https://link.springer.com/content/pdf/10.1007%2F978-3-319-59976-2.pdf}, isbn = {9783319599748}, year = {2017}, date = {2017-11-01}, keywords = {}, pubstate = {published}, tppubtype = {book} } |
Koppe, Georgia; Mallien, Anne Stephanie; Berger, Stefan; Bartsch, Dusan; Gass, Peter; Vollmayr, Barbara; Durstewitz, Daniel CACNA1C gene regulates behavioral strategies in operant rule learning Journal Article PLOS Biology, 15 (6), 2017. @article{Koppe2017, title = {CACNA1C gene regulates behavioral strategies in operant rule learning}, author = {Georgia Koppe and Anne Stephanie Mallien and Stefan Berger and Dusan Bartsch and Peter Gass and Barbara Vollmayr and Daniel Durstewitz }, url = { https://doi.org/10.1371/journal.pbio.2000936}, doi = {10.1371/journal.pbio.2000936}, year = {2017}, date = {2017-06-12}, journal = {PLOS Biology}, volume = {15}, number = {6}, abstract = {Behavioral experiments are usually designed to tap into a specific cognitive function, but animals may solve a given task through a variety of different and individual behavioral strategies, some of them not foreseen by the experimenter. Animal learning may therefore be seen more as the process of selecting among, and adapting, potential behavioral policies, rather than mere strengthening of associative links. Calcium influx through high-voltage-gated Ca2+ channels is central to synaptic plasticity, and altered expression of Cav1.2 channels and the CACNA1C gene have been associated with severe learning deficits and psychiatric disorders. Given this, we were interested in how specifically a selective functional ablation of the Cacna1c gene would modulate the learning process. Using a detailed, individual-level analysis of learning on an operant cue discrimination task in terms of behavioral strategies, combined with Bayesian selection among computational models estimated from the empirical data, we show that a Cacna1c knockout does not impair learning in general but has a much more specific effect: the majority of Cacna1c knockout mice still managed to increase reward feedback across trials but did so by adapting an outcome-based strategy, while the majority of matched controls adopted the experimentally intended cue-association rule. Our results thus point to a quite specific role of a single gene in learning and highlight that much more mechanistic insight could be gained by examining response patterns in terms of a larger repertoire of potential behavioral strategies. The results may also have clinical implications for treating psychiatric disorders.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Behavioral experiments are usually designed to tap into a specific cognitive function, but animals may solve a given task through a variety of different and individual behavioral strategies, some of them not foreseen by the experimenter. Animal learning may therefore be seen more as the process of selecting among, and adapting, potential behavioral policies, rather than mere strengthening of associative links. Calcium influx through high-voltage-gated Ca2+ channels is central to synaptic plasticity, and altered expression of Cav1.2 channels and the CACNA1C gene have been associated with severe learning deficits and psychiatric disorders. Given this, we were interested in how specifically a selective functional ablation of the Cacna1c gene would modulate the learning process. 
Using a detailed, individual-level analysis of learning on an operant cue discrimination task in terms of behavioral strategies, combined with Bayesian selection among computational models estimated from the empirical data, we show that a Cacna1c knockout does not impair learning in general but has a much more specific effect: the majority of Cacna1c knockout mice still managed to increase reward feedback across trials but did so by adapting an outcome-based strategy, while the majority of matched controls adopted the experimentally intended cue-association rule. Our results thus point to a quite specific role of a single gene in learning and highlight that much more mechanistic insight could be gained by examining response patterns in terms of a larger repertoire of potential behavioral strategies. The results may also have clinical implications for treating psychiatric disorders. |
Russo, Eleonora; Durstewitz, Daniel Cell assemblies at multiple time scales with arbitrary lag constellations Journal Article eLife, 2017. @article{Russo2017, title = {Cell assemblies at multiple time scales with arbitrary lag constellations}, author = {Eleonora Russo and Daniel Durstewitz}, url = {https://elifesciences.org/articles/19428}, doi = {10.7554/eLife.19428}, year = {2017}, date = {2017-01-11}, journal = {eLife}, abstract = {Hebb's idea of a cell assembly as the fundamental unit of neural information processing has dominated neuroscience like no other theoretical concept within the past 60 years. A range of different physiological phenomena, from precisely synchronized spiking to broadly simultaneous rate increases, has been subsumed under this term. Yet progress in this area is hampered by the lack of statistical tools that would enable to extract assemblies with arbitrary constellations of time lags, and at multiple temporal scales, partly due to the severe computational burden. Here we present such a unifying methodological and conceptual framework which detects assembly structure at many different time scales, levels of precision, and with arbitrary internal organization. Applying this methodology to multiple single unit recordings from various cortical areas, we find that there is no universal cortical coding scheme, but that assembly structure and precision significantly depends on the brain area recorded and ongoing task demands.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Hebb's idea of a cell assembly as the fundamental unit of neural information processing has dominated neuroscience like no other theoretical concept within the past 60 years. A range of different physiological phenomena, from precisely synchronized spiking to broadly simultaneous rate increases, has been subsumed under this term. Yet progress in this area is hampered by the lack of statistical tools that would enable to extract assemblies with arbitrary constellations of time lags, and at multiple temporal scales, partly due to the severe computational burden. Here we present such a unifying methodological and conceptual framework which detects assembly structure at many different time scales, levels of precision, and with arbitrary internal organization. Applying this methodology to multiple single unit recordings from various cortical areas, we find that there is no universal cortical coding scheme, but that assembly structure and precision significantly depends on the brain area recorded and ongoing task demands. |
Durstewitz, Daniel A State Space Approach for Piecewise-Linear Recurrent Neural Networks for Reconstructing Nonlinear Dynamics from Neural Measurements Journal Article PLoS Computational Biology, 13 (6), pp. e1005542, 2017, ISSN: 1553-7358. @article{Durstewitz2017, title = {A State Space Approach for Piecewise-Linear Recurrent Neural Networks for Reconstructing Nonlinear Dynamics from Neural Measurements}, author = {Daniel Durstewitz}, doi = {10.1371/journal.pcbi.1005542}, issn = {1553-7358}, year = {2017}, date = {2017-01-01}, journal = {PLoS Computational Biology}, volume = {13}, number = {6}, pages = {e1005542}, abstract = {The computational and cognitive properties of neural systems are often thought to be implemented in terms of their (stochastic) network dynamics. Hence, recovering the system dynamics from experimentally observed neuronal time series, like multiple single-unit recordings or neuroimaging data, is an important step toward understanding its computations. Ideally, one would not only seek a (lower-dimensional) state space representation of the dynamics, but would wish to have access to its statistical properties and their generative equations for in-depth analysis. Recurrent neural networks (RNNs) are a computationally powerful and dynamically universal formal framework which has been extensively studied from both the computational and the dynamical systems perspective. Here we develop a semi-analytical maximum-likelihood estimation scheme for piecewise-linear RNNs (PLRNNs) within the statistical framework of state space models, which accounts for noise in both the underlying latent dynamics and the observation process. The Expectation-Maximization algorithm is used to infer the latent state distribution, through a global Laplace approximation, and the PLRNN parameters iteratively. After validating the procedure on toy examples, and using inference through particle filters for comparison, the approach is applied to multiple single-unit recordings from the rodent anterior cingulate cortex (ACC) obtained during performance of a classical working memory task, delayed alternation. Models estimated from kernel-smoothed spike time data were able to capture the essential computational dynamics underlying task performance, including stimulus-selective delay activity. The estimated models were rarely multi-stable, however, but rather were tuned to exhibit slow dynamics in the vicinity of a bifurcation point. In summary, the present work advances a semi-analytical (thus reasonably fast) maximum-likelihood estimation framework for PLRNNs that may enable to recover relevant aspects of the nonlinear dynamics underlying observed neuronal time series, and directly link these to computational properties. Neuronal dynamics mediate between the physiological and anatomical properties of a neural system and the computations it performs, in fact may be seen as the 'computational language' of the brain. It is therefore of great interest to recover from experimentally recorded time series, like multiple single-unit or neuroimaging data, the underlying stochastic network dynamics and, ideally, even equations governing their statistical evolution. This is not at all a trivial enterprise, however, since neural systems are very high-dimensional, come with considerable levels of intrinsic (process) noise, are usually only partially observable, and these observations may be further corrupted by noise from measurement and preprocessing steps. The present article embeds piecewise-linear recurrent neural networks (PLRNNs) within a state space approach, a statistical estimation framework that deals with both process and observation noise. PLRNNs are computationally and dynamically powerful nonlinear systems. Their statistically principled estimation from multivariate neuronal time series thus may provide access to some essential features of the neuronal dynamics, like attractor states, generative equations, and their computational implications. The approach is exemplified on multiple single-unit recordings from the rat prefrontal cortex during working memory.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The computational and cognitive properties of neural systems are often thought to be implemented in terms of their (stochastic) network dynamics. Hence, recovering the system dynamics from experimentally observed neuronal time series, like multiple single-unit recordings or neuroimaging data, is an important step toward understanding its computations. Ideally, one would not only seek a (lower-dimensional) state space representation of the dynamics, but would wish to have access to its statistical properties and their generative equations for in-depth analysis. Recurrent neural networks (RNNs) are a computationally powerful and dynamically universal formal framework which has been extensively studied from both the computational and the dynamical systems perspective. Here we develop a semi-analytical maximum-likelihood estimation scheme for piecewise-linear RNNs (PLRNNs) within the statistical framework of state space models, which accounts for noise in both the underlying latent dynamics and the observation process. The Expectation-Maximization algorithm is used to infer the latent state distribution, through a global Laplace approximation, and the PLRNN parameters iteratively. After validating the procedure on toy examples, and using inference through particle filters for comparison, the approach is applied to multiple single-unit recordings from the rodent anterior cingulate cortex (ACC) obtained during performance of a classical working memory task, delayed alternation. Models estimated from kernel-smoothed spike time data were able to capture the essential computational dynamics underlying task performance, including stimulus-selective delay activity. The estimated models were rarely multi-stable, however, but rather were tuned to exhibit slow dynamics in the vicinity of a bifurcation point. In summary, the present work advances a semi-analytical (thus reasonably fast) maximum-likelihood estimation framework for PLRNNs that may enable to recover relevant aspects of the nonlinear dynamics underlying observed neuronal time series, and directly link these to computational properties. Neuronal dynamics mediate between the physiological and anatomical properties of a neural system and the computations it performs, in fact may be seen as the 'computational language' of the brain. It is therefore of great interest to recover from experimentally recorded time series, like multiple single-unit or neuroimaging data, the underlying stochastic network dynamics and, ideally, even equations governing their statistical evolution.
This is not at all a trivial enterprise, however, since neural systems are very high-dimensional, come with considerable levels of intrinsic (process) noise, are usually only partially observable, and these observations may be further corrupted by noise from measurement and preprocessing steps. The present article embeds piecewise-linear recurrent neural networks (PLRNNs) within a state space approach, a statistical estimation framework that deals with both process and observation noise. PLRNNs are computationally and dynamically powerful nonlinear systems. Their statistically principled estimation from multivariate neuronal time series thus may provide access to some essential features of the neuronal dynamics, like attractor states, generative equations, and their computational implications. The approach is exemplified on multiple single-unit recordings from the rat prefrontal cortex during working memory. |
Peter, Sven; Kirschbaum, Elke; Both, Martin; Campbell, Lee; Harvey, Brandon; Heins, Conor; Durstewitz, Daniel; Diego, Ferran; Hamprecht, Fred A Sparse convolutional coding for neuronal assembly detection Journal Article Advances in Neural Information Processing Systems, 30, pp. 3675–3685, 2017. @article{Peter2017, title = {Sparse convolutional coding for neuronal assembly detection}, author = {Peter, Sven and Kirschbaum, Elke and Both, Martin and Campbell, Lee and Harvey, Brandon and Heins, Conor and Durstewitz, Daniel and Diego, Ferran and Hamprecht, Fred A}, editor = {I. Guyon and U. V. Luxburg and S. Bengio and H. Wallach and R. Fergus and S. Vishwanathan and R. Garnett}, url = {http://papers.nips.cc/paper/6958-sparse-convolutional-coding-for-neuronal-assembly-detection.pdf}, year = {2017}, date = {2017-01-01}, journal = {Advances in Neural Information Processing Systems}, volume = {30}, pages = {3675--3685}, abstract = {Cell assemblies, originally proposed by Donald Hebb (1949), are subsets of neurons firing in a temporally coordinated way that gives rise to repeated motifs supposed to underlie neural representations and information processing. Although Hebb's original proposal dates back many decades, the detection of assemblies and their role in coding is still an open and current research topic, partly because simultaneous recordings from large populations of neurons became feasible only relatively recently. Most current and easy-to-apply computational techniques focus on the identification of strictly synchronously spiking neurons. In this paper we propose a new algorithm, based on sparse convolutional coding, for detecting recurrent motifs of arbitrary structure up to a given length. Testing of our algorithm on synthetically generated datasets shows that it outperforms established methods and accurately identifies the temporal structure of embedded assemblies, even when these contain overlapping neurons or when strong background noise is present. Moreover, exploratory analysis of experimental datasets from hippocampal slices and cortical neuron cultures has provided promising results.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Cell assemblies, originally proposed by Donald Hebb (1949), are subsets of neurons firing in a temporally coordinated way that gives rise to repeated motifs supposed to underlie neural representations and information processing. Although Hebb's original proposal dates back many decades, the detection of assemblies and their role in coding is still an open and current research topic, partly because simultaneous recordings from large populations of neurons became feasible only relatively recently. Most current and easy-to-apply computational techniques focus on the identification of strictly synchronously spiking neurons. In this paper we propose a new algorithm, based on sparse convolutional coding, for detecting recurrent motifs of arbitrary structure up to a given length. Testing of our algorithm on synthetically generated datasets shows that it outperforms established methods and accurately identifies the temporal structure of embedded assemblies, even when these contain overlapping neurons or when strong background noise is present. Moreover, exploratory analysis of experimental datasets from hippocampal slices and cortical neuron cultures has provided promising results. |
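The core idea of the sparse convolutional coding model in the entry above is to approximate the neuron-by-time spike matrix as a sum of temporal motifs convolved with sparse activation signals. The sketch below shows only this reconstruction step on synthetic data; the alternating sparse optimization that actually learns motifs and activations in the paper is not shown, and all names and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, T, L, K = 20, 500, 10, 3   # neurons, time bins, motif length, motif count

# Illustrative motifs and sparse activation signals (in the paper both are learned)
motifs = rng.random((K, n_neurons, L)) * (rng.random((K, n_neurons, L)) > 0.9)
activations = (rng.random((K, T)) > 0.99).astype(float)

# Reconstruction: sum over motifs of the temporal convolution of each motif
# row with its activation signal, truncated back to T time bins
recon = np.zeros((n_neurons, T))
for k in range(K):
    for i in range(n_neurons):
        recon[i] += np.convolve(activations[k], motifs[k, i], mode="full")[:T]
```

In the actual algorithm, the data matrix minus this reconstruction defines the loss that is minimized alternately over motifs and (sparsity-penalized) activations.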
2016 |
Ma, Liya; Hyman, James M; Durstewitz, Daniel; Phillips, Anthony G; Seamans, Jeremy K A Quantitative Analysis of Context-Dependent Remapping of Medial Frontal Cortex Neurons and Ensembles Journal Article Journal of Neuroscience, 36 (31), 2016. @article{Ma2016, title = {A Quantitative Analysis of Context-Dependent Remapping of Medial Frontal Cortex Neurons and Ensembles}, author = {Liya Ma and James M. Hyman and Daniel Durstewitz and Anthony G. Phillips and Jeremy K. Seamans}, doi = {10.1523/JNEUROSCI.3176-15.2016}, year = {2016}, date = {2016-08-03}, journal = {Journal of Neuroscience}, volume = {36}, number = {31}, abstract = {The frontal cortex has been implicated in a number of cognitive and motivational processes, but understanding how individual neurons contribute to these processes is particularly challenging as they respond to a broad array of events (multiplexing) in a manner that can be dynamically modulated by the task context, i.e., adaptive coding (Duncan, 2001). Fundamental questions remain, such as how the flexibility gained through these mechanisms is balanced by the need for consistency and how the ensembles of neurons are coherently shaped by task demands. In the present study, ensembles of medial frontal cortex neurons were recorded from rats trained to perform three different operant actions either in two different sequences or two different physical environments. Single neurons exhibited diverse mixtures of responsivity to each of the three actions and these mixtures were abruptly altered by context/sequence switches. Remarkably, the overall responsivity of the population remained highly consistent both within and between context/sequences because the gains versus losses were tightly balanced across neurons and across the three actions. These data are consistent with a reallocation mixture model in which individual neurons express unique mixtures of selectivity for different actions that become reallocated as task conditions change. However, because the allocations and reallocations are so well balanced across neurons, the population maintains a low but highly consistent response to all actions. The frontal cortex may therefore balance consistency with flexibility by having ensembles respond in a fixed way to task-relevant actions while abruptly reconfiguring single neurons to encode “actions in context.”}, keywords = {}, pubstate = {published}, tppubtype = {article} } The frontal cortex has been implicated in a number of cognitive and motivational processes, but understanding how individual neurons contribute to these processes is particularly challenging as they respond to a broad array of events (multiplexing) in a manner that can be dynamically modulated by the task context, i.e., adaptive coding (Duncan, 2001). Fundamental questions remain, such as how the flexibility gained through these mechanisms is balanced by the need for consistency and how the ensembles of neurons are coherently shaped by task demands. In the present study, ensembles of medial frontal cortex neurons were recorded from rats trained to perform three different operant actions either in two different sequences or two different physical environments. Single neurons exhibited diverse mixtures of responsivity to each of the three actions and these mixtures were abruptly altered by context/sequence switches. 
Remarkably, the overall responsivity of the population remained highly consistent both within and between context/sequences because the gains versus losses were tightly balanced across neurons and across the three actions. These data are consistent with a reallocation mixture model in which individual neurons express unique mixtures of selectivity for different actions that become reallocated as task conditions change. However, because the allocations and reallocations are so well balanced across neurons, the population maintains a low but highly consistent response to all actions. The frontal cortex may therefore balance consistency with flexibility by having ensembles respond in a fixed way to task-relevant actions while abruptly reconfiguring single neurons to encode “actions in context.” |
Hass, Joachim; Durstewitz, Daniel Time at the center, or time at the side? Assessing current models of time perception. Journal Article Current Opinion in Behavioral Sciences, 8, pp. 238-244, 2016. @article{Hass2016b, title = {Time at the center, or time at the side? Assessing current models of time perception.}, author = {Joachim Hass and Daniel Durstewitz}, url = {https://www.sciencedirect.com/science/article/pii/S2352154616300535}, doi = {10.1016/j.cobeha.2016.02.030}, year = {2016}, date = {2016-04-01}, journal = {Current Opinion in Behavioral Sciences}, volume = {8}, pages = {238-244}, abstract = {The ability to tell time is a crucial requirement for almost everything we do, but the neural mechanisms of time perception are still largely unknown. One way to approach these mechanisms is through computational modeling. This review provides an overview of the most prominent timing models, experimental evidence in their support, and formal ways for understanding the relationship between mechanisms of time perception and the scaling behavior of time estimation errors. Theories that interpret timing as a byproduct of other computational processes are also discussed. We suggest that there may be in fact a multitude of timing mechanisms in operation, anchored within area-specific computations, and tailored to different sensory-behavioral requirements. These ultimately have to be integrated into a common frame (a ‘temporal hub’) for the purpose of decision making. This common frame may support Bayesian integration and generalization across sensory modalities.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The ability to tell time is a crucial requirement for almost everything we do, but the neural mechanisms of time perception are still largely unknown. One way to approach these mechanisms is through computational modeling. This review provides an overview of the most prominent timing models, experimental evidence in their support, and formal ways for understanding the relationship between mechanisms of time perception and the scaling behavior of time estimation errors. Theories that interpret timing as a byproduct of other computational processes are also discussed. We suggest that there may be in fact a multitude of timing mechanisms in operation, anchored within area-specific computations, and tailored to different sensory-behavioral requirements. These ultimately have to be integrated into a common frame (a ‘temporal hub’) for the purpose of decision making. This common frame may support Bayesian integration and generalization across sensory modalities. |
Durstewitz, Daniel; Koppe, Georgia; Toutounji, Hazem Computational models as statistical tools Journal Article Current Opinion in Behavioral Sciences, 11, pp. 93–99, 2016, ISSN: 23521546. @article{Durstewitz2016, title = {Computational models as statistical tools}, author = {Daniel Durstewitz and Georgia Koppe and Hazem Toutounji}, url = {http://dx.doi.org/10.1016/j.cobeha.2016.07.004}, doi = {10.1016/j.cobeha.2016.07.004}, issn = {23521546}, year = {2016}, date = {2016-01-01}, journal = {Current Opinion in Behavioral Sciences}, volume = {11}, pages = {93--99}, publisher = {Elsevier Ltd}, abstract = {Traditionally, models in statistics are relatively simple 'general purpose' quantitative inference tools, while models in computational neuroscience aim more at mechanistically explaining specific observations. Research on methods for inferring behavioral and neural models from data, however, has shown that a lot could be gained by merging these approaches, augmenting computational models with distributional assumptions. This enables estimation of parameters of such models in a principled way, comes with confidence regions that quantify uncertainty in estimates, and allows for quantitative assessment of prediction quality of computational models and tests of specific hypotheses about underlying mechanisms. Thus, unlike in conventional statistics, inferences about the latent dynamical mechanisms that generated the observed data can be drawn. Future directions and challenges of this approach are discussed.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Traditionally, models in statistics are relatively simple 'general purpose' quantitative inference tools, while models in computational neuroscience aim more at mechanistically explaining specific observations. Research on methods for inferring behavioral and neural models from data, however, has shown that a lot could be gained by merging these approaches, augmenting computational models with distributional assumptions. This enables estimation of parameters of such models in a principled way, comes with confidence regions that quantify uncertainty in estimates, and allows for quantitative assessment of prediction quality of computational models and tests of specific hypotheses about underlying mechanisms. Thus, unlike in conventional statistics, inferences about the latent dynamical mechanisms that generated the observed data can be drawn. Future directions and challenges of this approach are discussed. |
Hass, Joachim; Hertäg, Loreen; Durstewitz, Daniel A Detailed Data-Driven Network Model of Prefrontal Cortex Reproduces Key Features of In Vivo Activity Journal Article PLoS Computational Biology, 12 (5), pp. 1–29, 2016, ISSN: 15537358. @article{Hass2016, title = {A Detailed Data-Driven Network Model of Prefrontal Cortex Reproduces Key Features of In Vivo Activity}, author = {Joachim Hass and Loreen Hertäg and Daniel Durstewitz}, doi = {10.1371/journal.pcbi.1004930}, issn = {15537358}, year = {2016}, date = {2016-01-01}, journal = {PLoS Computational Biology}, volume = {12}, number = {5}, pages = {1--29}, abstract = {© 2016 Hass et al. The prefrontal cortex is centrally involved in a wide range of cognitive functions and their impairment in psychiatric disorders. Yet, the computational principles that govern the dynamics of prefrontal neural networks, and link their physiological, biochemical and anatomical properties to cognitive functions, are not well understood. Computational models can help to bridge the gap between these different levels of description, provided they are sufficiently constrained by experimental data and capable of predicting key properties of the intact cortex. Here, we present a detailed network model of the prefrontal cortex, based on a simple computationally efficient single neuron model (simpAdEx), with all parameters derived from in vitro electrophysiological and anatomical data. Without additional tuning, this model could be shown to quantitatively reproduce a wide range of measures from in vivo electrophysiological recordings, to a degree where simulated and experimentally observed activities were statistically indistinguishable. These measures include spike train statistics, membrane potential fluctuations, local field potentials, and the transmission of transient stimulus information across layers. We further demonstrate that model predictions are robust against moderate changes in key parameters, and that synaptic heterogeneity is a crucial ingredient to the quantitative reproduction of in vivo-like electrophysiological behavior. Thus, we have produced a physiologically highly valid, in a quantitative sense, yet computationally efficient PFC network model, which helped to identify key properties underlying spike time dynamics as observed in vivo, and can be harvested for in-depth investigation of the links between physiology and cognition.}, keywords = {}, pubstate = {published}, tppubtype = {article} } © 2016 Hass et al. The prefrontal cortex is centrally involved in a wide range of cognitive functions and their impairment in psychiatric disorders. Yet, the computational principles that govern the dynamics of prefrontal neural networks, and link their physiological, biochemical and anatomical properties to cognitive functions, are not well understood. Computational models can help to bridge the gap between these different levels of description, provided they are sufficiently constrained by experimental data and capable of predicting key properties of the intact cortex. Here, we present a detailed network model of the prefrontal cortex, based on a simple computationally efficient single neuron model (simpAdEx), with all parameters derived from in vitro electrophysiological and anatomical data. 
Without additional tuning, this model could be shown to quantitatively reproduce a wide range of measures from in vivo electrophysiological recordings, to a degree where simulated and experimentally observed activities were statistically indistinguishable. These measures include spike train statistics, membrane potential fluctuations, local field potentials, and the transmission of transient stimulus information across layers. We further demonstrate that model predictions are robust against moderate changes in key parameters, and that synaptic heterogeneity is a crucial ingredient to the quantitative reproduction of in vivo-like electrophysiological behavior. Thus, we have produced a physiologically highly valid, in a quantitative sense, yet computationally efficient PFC network model, which helped to identify key properties underlying spike time dynamics as observed in vivo, and can be harvested for in-depth investigation of the links between physiology and cognition. |
2015 |
Demanuele, Charmaine; Bähner, Florian; Plichta, Michael; Kirsch, Peter; Tost, Heike; Meyer-Lindenberg, Andreas; Durstewitz, Daniel A statistical approach for segregating cognitive task stages from multivariate fMRI BOLD time series Journal Article Frontiers in Human Neuroscience, 9, 2015. @article{Demanuele2015, title = {A statistical approach for segregating cognitive task stages from multivariate fMRI BOLD time series}, author = {Demanuele, Charmaine and Bähner, Florian and Plichta, Michael and Kirsch, Peter and Tost, Heike and Meyer-Lindenberg, Andreas and Durstewitz, Daniel}, url = {https://www.researchgate.net/publication/282647853_A_statistical_approach_for_segregating_cognitive_task_stages_from_multivariate_fMRI_BOLD_time_series}, doi = {10.3389/fnhum.2015.00537}, year = {2015}, date = {2015-09-01}, journal = {Frontiers in Human Neuroscience}, volume = {9}, abstract = {Multivariate pattern analysis can reveal new information from neuroimaging data to illuminate human cognition and its disturbances. Here, we develop a methodological approach, based on multivariate statistical/machine learning and time series analysis, to discern cognitive processing stages from fMRI blood oxygenation level dependent (BOLD) time series. We apply this method to data recorded from a group of healthy adults whilst performing a virtual reality version of the delayed win-shift radial arm maze task. This task has been frequently used to study working memory and decision making in rodents. Using linear classifiers and multivariate test statistics in conjunction with time series bootstraps, we show that different cognitive stages of the task, as defined by the experimenter, namely, the encoding/retrieval, choice, reward and delay stages, can be statistically discriminated from the BOLD time series in brain areas relevant for decision making and working memory. Discrimination of these task stages was significantly reduced during poor behavioral performance in dorsolateral prefrontal cortex (DLPFC), but not in the primary visual cortex (V1). Experimenter-defined dissection of time series into class labels based on task structure was confirmed by an unsupervised, bottom-up approach based on Hidden Markov Models. Furthermore, we show that different groupings of recorded time points into cognitive event classes can be used to test hypotheses about the specific cognitive role of a given brain region during task execution. We found that whilst the DLPFC strongly differentiated between task stages associated with different memory loads, but not between different visual-spatial aspects, the reverse was true for V1. Our methodology illustrates how different aspects of cognitive information processing during one and the same task can be separated and attributed to specific brain regions based on information contained in multivariate patterns of voxel activity.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Multivariate pattern analysis can reveal new information from neuroimaging data to illuminate human cognition and its disturbances. Here, we develop a methodological approach, based on multivariate statistical/machine learning and time series analysis, to discern cognitive processing stages from fMRI blood oxygenation level dependent (BOLD) time series. We apply this method to data recorded from a group of healthy adults whilst performing a virtual reality version of the delayed win-shift radial arm maze task. 
This task has been frequently used to study working memory and decision making in rodents. Using linear classifiers and multivariate test statistics in conjunction with time series bootstraps, we show that different cognitive stages of the task, as defined by the experimenter, namely, the encoding/retrieval, choice, reward and delay stages, can be statistically discriminated from the BOLD time series in brain areas relevant for decision making and working memory. Discrimination of these task stages was significantly reduced during poor behavioral performance in dorsolateral prefrontal cortex (DLPFC), but not in the primary visual cortex (V1). Experimenter-defined dissection of time series into class labels based on task structure was confirmed by an unsupervised, bottom-up approach based on Hidden Markov Models. Furthermore, we show that different groupings of recorded time points into cognitive event classes can be used to test hypotheses about the specific cognitive role of a given brain region during task execution. We found that whilst the DLPFC strongly differentiated between task stages associated with different memory loads, but not between different visual-spatial aspects, the reverse was true for V1. Our methodology illustrates how different aspects of cognitive information processing during one and the same task can be separated and attributed to specific brain regions based on information contained in multivariate patterns of voxel activity. |
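As a rough illustration of the classification step in the pipeline above, the following sketch decodes experimenter-defined task stages from synthetic multivariate "BOLD" time points with a linear classifier. It uses plain cross-validation for brevity; the paper's multivariate test statistics and time series bootstraps, which respect temporal dependence, are not reproduced, and all data and names here are made up.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_timepoints, n_voxels = 400, 50

# Synthetic stand-in for voxel time series with four task-stage labels
# (think encoding/retrieval, choice, reward, delay)
stages = rng.integers(0, 4, n_timepoints)
X = rng.standard_normal((n_timepoints, n_voxels))
X += 0.5 * np.eye(4)[stages] @ rng.standard_normal((4, n_voxels))  # stage-specific signal

# Linear decoding of task stage from multivariate voxel patterns
clf = LogisticRegression(max_iter=1000)
print("mean CV accuracy:", cross_val_score(clf, X, stages, cv=5).mean())
```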
Demanuele, Charmaine; Kirsch, Peter; Esslinger, Christine; Zink, Mathias; Meyer-Lindenberg, Andreas; Durstewitz, Daniel Area-Specific Information Processing in Prefrontal Cortex during a Probabilistic Inference Task: A Multivariate fMRI BOLD Time Series Analysis Journal Article PLOS ONE, 2015. @article{Demanuele2015b, title = {Area-Specific Information Processing in Prefrontal Cortex during a Probabilistic Inference Task: A Multivariate fMRI BOLD Time Series Analysis}, author = {Charmaine Demanuele and Peter Kirsch and Christine Esslinger and Mathias Zink and Andreas Meyer-Lindenberg and Daniel Durstewitz}, url = { https://doi.org/10.1371/journal.pone.0135424 }, doi = {10.1371/journal.pone.0135424}, year = {2015}, date = {2015-08-10}, journal = {PLOS ONE}, abstract = {Discriminating spatiotemporal stages of information processing involved in complex cognitive processes remains a challenge for neuroscience. This is especially so in prefrontal cortex whose subregions, such as the dorsolateral prefrontal (DLPFC), anterior cingulate (ACC) and orbitofrontal (OFC) cortices are known to have differentiable roles in cognition. Yet it is much less clear how these subregions contribute to different cognitive processes required by a given task. To investigate this, we use functional MRI data recorded from a group of healthy adults during a “Jumping to Conclusions” probabilistic reasoning task.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Discriminating spatiotemporal stages of information processing involved in complex cognitive processes remains a challenge for neuroscience. This is especially so in prefrontal cortex whose subregions, such as the dorsolateral prefrontal (DLPFC), anterior cingulate (ACC) and orbitofrontal (OFC) cortices are known to have differentiable roles in cognition. Yet it is much less clear how these subregions contribute to different cognitive processes required by a given task. To investigate this, we use functional MRI data recorded from a group of healthy adults during a “Jumping to Conclusions” probabilistic reasoning task. |
Bähner, Florian; Demanuele, Charmaine; Schweiger, Janina; Gerchen, Martin F; Zamoscik, Vera; Ueltzhöffer, Kai; Hahn, Tim; Meyer, Patric; Flor, Herta; Durstewitz, Daniel; Tost, Heike; Kirsch, Peter; Plichta, Michael M; Meyer-Lindenberg, Andreas Hippocampal-dorsolateral prefrontal coupling as a species-conserved cognitive mechanism: a human translational imaging study Journal Article Neuropsychopharmacology, 40 (7), pp. 1674-81, 2015. @article{Bähner2015, title = {Hippocampal-dorsolateral prefrontal coupling as a species-conserved cognitive mechanism: a human translational imaging study}, author = {Florian Bähner and Charmaine Demanuele and Janina Schweiger and Martin F Gerchen and Vera Zamoscik and Kai Ueltzhöffer and Tim Hahn and Patric Meyer and Herta Flor and Daniel Durstewitz and Heike Tost and Peter Kirsch and Michael M Plichta and Andreas Meyer-Lindenberg}, url = {https://www.nature.com/articles/npp201513}, year = {2015}, date = {2015-06-01}, journal = {Neuropsychopharmacology}, volume = {40}, number = {7}, pages = {1674-81}, abstract = {Hippocampal–prefrontal cortex (HC–PFC) interactions are implicated in working memory (WM) and altered in psychiatric conditions with cognitive impairment such as schizophrenia. While coupling between both structures is crucial for WM performance in rodents, evidence from human studies is conflicting and translation of findings is complicated by the use of differing paradigms across species. We therefore used functional magnetic resonance imaging together with a spatial WM paradigm adapted from rodent research to examine HC–PFC coupling in humans. A PFC–parietal network was functionally connected to hippocampus (HC) during task stages requiring high levels of executive control but not during a matched control condition. The magnitude of coupling in a network comprising HC, bilateral dorsolateral PFC (DLPFC), and right supramarginal gyrus explained one-fourth of the variability in an independent spatial WM task but was unrelated to visual WM performance. HC–DLPFC coupling may thus represent a systems-level mechanism specific to spatial WM that is conserved across species, suggesting its utility for modeling cognitive dysfunction in translational neuroscience.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Hippocampal–prefrontal cortex (HC–PFC) interactions are implicated in working memory (WM) and altered in psychiatric conditions with cognitive impairment such as schizophrenia. While coupling between both structures is crucial for WM performance in rodents, evidence from human studies is conflicting and translation of findings is complicated by the use of differing paradigms across species. We therefore used functional magnetic resonance imaging together with a spatial WM paradigm adapted from rodent research to examine HC–PFC coupling in humans. A PFC–parietal network was functionally connected to hippocampus (HC) during task stages requiring high levels of executive control but not during a matched control condition. The magnitude of coupling in a network comprising HC, bilateral dorsolateral PFC (DLPFC), and right supramarginal gyrus explained one-fourth of the variability in an independent spatial WM task but was unrelated to visual WM performance. HC–DLPFC coupling may thus represent a systems-level mechanism specific to spatial WM that is conserved across species, suggesting its utility for modeling cognitive dysfunction in translational neuroscience. |
Kucewicz, Michal T; Durstewitz, Daniel; Tricklebank, Mark D; Jones, Matt; Laubach, Mark; Fujisawa, Shigeyoshi; Pennartz, Cyriel; Shapiro, Matthew; Hampson, Robert; Deadwyler, Samuel Decoding the sequential contributions of hippocampal-prefrontal neuronal assemblies to spatial working memory Unpublished 2015. @unpublished{Kucewicz2015, title = {Decoding the sequential contributions of hippocampal-prefrontal neuronal assemblies to spatial working memory}, author = {Michal T Kucewicz and Daniel Durstewitz and Mark D Tricklebank and Matt Jones and Mark Laubach and Shigeyoshi Fujisawa and Cyriel Pennartz and Matthew Shapiro and Robert Hampson and Samuel Deadwyler}, year = {2015}, date = {2015-01-01}, keywords = {}, pubstate = {published}, tppubtype = {unpublished} } |
Lapish, Christopher C; Balaguer-ballester, Emili; Seamans, Jeremy K; Phillips, Anthony G; Durstewitz, Daniel Amphetamine Exerts Dose-Dependent Changes in Prefrontal Cortex Attractor Dynamics during Working Memory Journal Article Journal of Neuroscience, 35 (28), pp. 10172–10187, 2015. @article{Lapish2015, title = {Amphetamine Exerts Dose-Dependent Changes in Prefrontal Cortex Attractor Dynamics during Working Memory}, author = {Christopher C Lapish and Emili Balaguer-ballester and Jeremy K Seamans and Anthony G Phillips and Daniel Durstewitz}, doi = {10.1523/JNEUROSCI.2421-14.2015}, year = {2015}, date = {2015-01-01}, journal = {Journal of Neuroscience}, volume = {35}, number = {28}, pages = {10172--10187}, keywords = {}, pubstate = {published}, tppubtype = {article} } |
2014 |
Hass, Joachim; Durstewitz, Daniel Neurocomputational models of time perception Journal Article Adv Exp Med Biol, 829, pp. 49-71, 2014. @article{Hass2014, title = {Neurocomputational models of time perception}, author = {Joachim Hass and Daniel Durstewitz}, url = {https://pubmed.ncbi.nlm.nih.gov/25358705/}, doi = {10.1007/978-1-4939-1782-2_4}, year = {2014}, date = {2014-10-10}, journal = {Adv Exp Med Biol}, volume = {829}, pages = {49-71}, abstract = {Mathematical modeling is a useful tool for understanding the neurodynamical and computational mechanisms of cognitive abilities like time perception, and for linking neurophysiology to psychology. In this chapter, we discuss several biophysical models of time perception and how they can be tested against experimental evidence. After a brief overview on the history of computational timing models, we list a number of central psychological and physiological findings that such a model should be able to account for, with a focus on the scaling of the variability of duration estimates with the length of the interval that needs to be estimated. The functional form of this scaling turns out to be predictive of the underlying computational mechanism for time perception. We then present four basic classes of timing models (ramping activity, sequential activation of neuron populations, state space trajectories and neural oscillators) and discuss two specific examples in more detail. Finally, we review to what extent existing theories of time perception adhere to the experimental constraints.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Mathematical modeling is a useful tool for understanding the neurodynamical and computational mechanisms of cognitive abilities like time perception, and for linking neurophysiology to psychology. In this chapter, we discuss several biophysical models of time perception and how they can be tested against experimental evidence. After a brief overview on the history of computational timing models, we list a number of central psychological and physiological findings that such a model should be able to account for, with a focus on the scaling of the variability of duration estimates with the length of the interval that needs to be estimated. The functional form of this scaling turns out to be predictive of the underlying computational mechanism for time perception. We then present four basic classes of timing models (ramping activity, sequential activation of neuron populations, state space trajectories and neural oscillators) and discuss two specific examples in more detail. Finally, we review to what extent existing theories of time perception adhere to the experimental constraints. |
Hertäg, Loreen; Durstewitz, Daniel; Brunel, Nicolas Analytical approximations of the firing rate of an adaptive exponential integrate-and-fire neuron in the presence of synaptic noise Journal Article Frontiers in Computational Neuroscience, 8 (116), 2014. @article{Hertäg2014, title = {Analytical approximations of the firing rate of an adaptive exponential integrate-and-fire neuron in the presence of synaptic noise}, author = {Loreen Hertäg and Daniel Durstewitz and Nicolas Brunel}, url = {https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4167001/}, doi = {10.3389/fncom.2014.00116}, year = {2014}, date = {2014-09-18}, journal = {Frontiers in Computational Neuroscience}, volume = {8}, number = {116}, abstract = {Computational models offer a unique tool for understanding the network-dynamical mechanisms which mediate between physiological and biophysical properties, and behavioral function. A traditional challenge in computational neuroscience is, however, that simple neuronal models which can be studied analytically fail to reproduce the diversity of electrophysiological behaviors seen in real neurons, while detailed neuronal models which do reproduce such diversity are intractable analytically and computationally expensive. A number of intermediate models have been proposed whose aim is to capture the diversity of firing behaviors and spike times of real neurons while entailing the simplest possible mathematical description. One such model is the exponential integrate-and-fire neuron with spike rate adaptation (aEIF) which consists of two differential equations for the membrane potential (V) and an adaptation current (w). Despite its simplicity, it can reproduce a wide variety of physiologically observed spiking patterns, can be fit to physiological recordings quantitatively, and, once done so, is able to predict spike times on traces not used for model fitting. Here we compute the steady-state firing rate of aEIF in the presence of Gaussian synaptic noise, using two approaches. The first approach is based on the 2-dimensional Fokker-Planck equation that describes the (V,w)-probability distribution, which is solved using an expansion in the ratio between the time constants of the two variables. The second is based on the firing rate of the EIF model, which is averaged over the distribution of the w variable. These analytically derived closed-form expressions were tested on simulations from a large variety of model cells quantitatively fitted to in vitro electrophysiological recordings from pyramidal cells and interneurons. Theoretical predictions closely agreed with the firing rate of the simulated cells fed with in-vivo-like synaptic noise.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Computational models offer a unique tool for understanding the network-dynamical mechanisms which mediate between physiological and biophysical properties, and behavioral function. A traditional challenge in computational neuroscience is, however, that simple neuronal models which can be studied analytically fail to reproduce the diversity of electrophysiological behaviors seen in real neurons, while detailed neuronal models which do reproduce such diversity are intractable analytically and computationally expensive. A number of intermediate models have been proposed whose aim is to capture the diversity of firing behaviors and spike times of real neurons while entailing the simplest possible mathematical description. 
One such model is the exponential integrate-and-fire neuron with spike rate adaptation (aEIF) which consists of two differential equations for the membrane potential (V) and an adaptation current (w). Despite its simplicity, it can reproduce a wide variety of physiologically observed spiking patterns, can be fit to physiological recordings quantitatively, and, once done so, is able to predict spike times on traces not used for model fitting. Here we compute the steady-state firing rate of aEIF in the presence of Gaussian synaptic noise, using two approaches. The first approach is based on the 2-dimensional Fokker-Planck equation that describes the (V,w)-probability distribution, which is solved using an expansion in the ratio between the time constants of the two variables. The second is based on the firing rate of the EIF model, which is averaged over the distribution of the w variable. These analytically derived closed-form expressions were tested on simulations from a large variety of model cells quantitatively fitted to in vitro electrophysiological recordings from pyramidal cells and interneurons. Theoretical predictions closely agreed with the firing rate of the simulated cells fed with in-vivo-like synaptic noise. |
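For reference, the aEIF (AdEx) model discussed in this entry is, in its standard formulation (Brette & Gerstner, 2005), given by the following pair of equations together with a reset rule:

```latex
% Standard aEIF / AdEx equations (Brette & Gerstner, 2005)
C \frac{dV}{dt} = -g_L (V - E_L)
                + g_L \Delta_T \exp\!\left(\frac{V - V_T}{\Delta_T}\right)
                - w + I(t),
\qquad
\tau_w \frac{dw}{dt} = a (V - E_L) - w,
% at a spike (V >= V_peak):  V -> V_r,   w -> w + b
```

Here V is the membrane potential, w the adaptation current, and the paper's firing-rate approximations are derived for this system driven by Gaussian synaptic noise.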
2013 |
Spanagel, Rainer; Durstewitz, Daniel; Hansson, Anita; Heinz, Andreas; Kiefer, Falk; Köhr, Georg; Matthäus, Franziska; Nöthen, Markus M; Noori, Hamid R; Obermayer, Klaus; Rietschel, Marcella; Schloss, Patrick; Scholz, Henrike; Schumann, Gunter; Smolka, Michael; Sommer, Wolfgang; Vengeliene, Valentina; Walter, Henrik; Wurst, Wolfgang; Zimmermann, Uli S; Addiction GWAS Resource Group; Stringer, Sven; Smits, Yannick; Derks, Eske M A systems medicine research approach for studying alcohol addiction Journal Article Addiction Biology, 18 (6), 2013. @article{Spanagel2013, title = {A systems medicine research approach for studying alcohol addiction}, author = {Rainer Spanagel, Daniel Durstewitz, Anita Hansson, Andreas Heinz, Falk Kiefer, Georg Köhr, Franziska Matthäus, Markus M Nöthen, Hamid R Noori, Klaus Obermayer, Marcella Rietschel, Patrick Schloss, Henrike Scholz, Gunter Schumann, Michael Smolka, Wolfgang Sommer, Valentina Vengeliene, Henrik Walter, Wolfgang Wurst, Uli S Zimmermann, Addiction GWAS Resource Group; Sven Stringer, Yannick Smits, Eske M Derks}, url = {https://pubmed.ncbi.nlm.nih.gov/24283978/}, doi = {10.1111/adb.12109}, year = {2013}, date = {2013-11-01}, journal = {Addiction Biology}, volume = {18}, number = {6}, abstract = {According to the World Health Organization, about 2 billion people drink alcohol. Excessive alcohol consumption can result in alcohol addiction, which is one of the most prevalent neuropsychiatric diseases afflicting our society today. Prevention and intervention of alcohol binging in adolescents and treatment of alcoholism are major unmet challenges affecting our health-care system and society alike. Our newly formed German SysMedAlcoholism consortium is using a new systems medicine approach and intends (1) to define individual neurobehavioral risk profiles in adolescents that are predictive of alcohol use disorders later in life and (2) to identify new pharmacological targets and molecules for the treatment of alcoholism. To achieve these goals, we will use omics-information from epigenomics, genetics, transcriptomics, neurodynamics, global neurochemical connectomes and neuroimaging (IMAGEN; Schumann et al.) to feed mathematical prediction modules provided by two Bernstein Centers for Computational Neurosciences (Berlin and Heidelberg/Mannheim), the results of which will subsequently be functionally validated in independent clinical samples and appropriate animal models. This approach will lead to new early intervention strategies and identify innovative molecules for relapse prevention that will be tested in experimental human studies. This research program will ultimately help in consolidating addiction research clusters in Germany that can effectively conduct large clinical trials, implement early intervention strategies and impact political and healthcare decision makers.}, keywords = {}, pubstate = {published}, tppubtype = {article} } According to the World Health Organization, about 2 billion people drink alcohol. Excessive alcohol consumption can result in alcohol addiction, which is one of the most prevalent neuropsychiatric diseases afflicting our society today. Prevention and intervention of alcohol binging in adolescents and treatment of alcoholism are major unmet challenges affecting our health-care system and society alike. 
Our newly formed German SysMedAlcoholism consortium is using a new systems medicine approach and intends (1) to define individual neurobehavioral risk profiles in adolescents that are predictive of alcohol use disorders later in life and (2) to identify new pharmacological targets and molecules for the treatment of alcoholism. To achieve these goals, we will use omics-information from epigenomics, genetics, transcriptomics, neurodynamics, global neurochemical connectomes and neuroimaging (IMAGEN; Schumann et al.) to feed mathematical prediction modules provided by two Bernstein Centers for Computational Neurosciences (Berlin and Heidelberg/Mannheim), the results of which will subsequently be functionally validated in independent clinical samples and appropriate animal models. This approach will lead to new early intervention strategies and identify innovative molecules for relapse prevention that will be tested in experimental human studies. This research program will ultimately help in consolidating addiction research clusters in Germany that can effectively conduct large clinical trials, implement early intervention strategies and impact political and healthcare decision makers. |
Quiroga-Lombard, Claudio S; Hass, Joachim; Durstewitz, Daniel Method for stationarity-segmentation of spike train data with application to the Pearson cross-correlation Journal Article Journal of Neurophysiology, 2013. @article{Quiroga-Lombard2013, title = {Method for stationarity-segmentation of spike train data with application to the Pearson cross-correlation}, author = {Claudio S. Quiroga-Lombard and Joachim Hass and Daniel Durstewitz}, url = {https://doi.org/10.1152/jn.00186.2013}, doi = {10.1152/jn.00186.2013}, year = {2013}, date = {2013-07-15}, journal = {Journal of Neurophysiology}, abstract = {Correlations among neurons are supposed to play an important role in computation and information coding in the nervous system. Empirically, functional interactions between neurons are most commonly assessed by cross-correlation functions. Recent studies have suggested that pairwise correlations may indeed be sufficient to capture most of the information present in neural interactions. Many applications of correlation functions, however, implicitly tend to assume that the underlying processes are stationary. This assumption will usually fail for real neurons recorded in vivo since their activity during behavioral tasks is heavily influenced by stimulus-, movement-, or cognition-related processes as well as by more general processes like slow oscillations or changes in state of alertness. To address the problem of nonstationarity, we introduce a method for assessing stationarity empirically and then “slicing” spike trains into stationary segments according to the statistical definition of weak-sense stationarity. We examine pairwise Pearson cross-correlations (PCCs) under both stationary and nonstationary conditions and identify another source of covariance that can be differentiated from the covariance of the spike times and emerges as a consequence of residual nonstationarities after the slicing process: the covariance of the firing rates defined on each segment. Based on this, a correction of the PCC is introduced that accounts for the effect of segmentation. We probe these methods both on simulated data sets and on in vivo recordings from the prefrontal cortex of behaving rats. Rather than for removing nonstationarities, the present method may also be used for detecting significant events in spike trains.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Correlations among neurons are supposed to play an important role in computation and information coding in the nervous system. Empirically, functional interactions between neurons are most commonly assessed by cross-correlation functions. Recent studies have suggested that pairwise correlations may indeed be sufficient to capture most of the information present in neural interactions. Many applications of correlation functions, however, implicitly tend to assume that the underlying processes are stationary. This assumption will usually fail for real neurons recorded in vivo since their activity during behavioral tasks is heavily influenced by stimulus-, movement-, or cognition-related processes as well as by more general processes like slow oscillations or changes in state of alertness. To address the problem of nonstationarity, we introduce a method for assessing stationarity empirically and then “slicing” spike trains into stationary segments according to the statistical definition of weak-sense stationarity. 
We examine pairwise Pearson cross-correlations (PCCs) under both stationary and nonstationary conditions and identify another source of covariance that can be differentiated from the covariance of the spike times and emerges as a consequence of residual nonstationarities after the slicing process: the covariance of the firing rates defined on each segment. Based on this, a correction of the PCC is introduced that accounts for the effect of segmentation. We probe these methods both on simulated data sets and on in vivo recordings from the prefrontal cortex of behaving rats. Rather than for removing nonstationarities, the present method may also be used for detecting significant events in spike trains. |
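A minimal sketch of the slicing idea in the entry above: two binned spike-count series are cut into segments and the Pearson correlation is averaged across them. The fixed-length cut used here is a crude stand-in for the paper's empirical weak-sense-stationarity test, and the segmentation-correction term the paper introduces is not included; all names and data are illustrative.

```python
import numpy as np

def sliced_pcc(counts_a, counts_b, seg_len):
    """Average Pearson correlation of two binned spike-count series,
    computed separately on consecutive fixed-length segments (a crude
    stand-in for the weak-sense-stationary slices of the paper)."""
    rs = []
    for s in range(0, len(counts_a) - seg_len + 1, seg_len):
        a = counts_a[s:s + seg_len]
        b = counts_b[s:s + seg_len]
        if a.std() > 0 and b.std() > 0:   # correlation undefined otherwise
            rs.append(np.corrcoef(a, b)[0, 1])
    return np.mean(rs)

rng = np.random.default_rng(3)
shared = rng.poisson(2.0, 1000)           # shared drive induces correlation
x = shared + rng.poisson(1.0, 1000)
y = shared + rng.poisson(1.0, 1000)
print(sliced_pcc(x, y, seg_len=100))
```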
Richter, Sophie Helene; Zeuch, Benjamin; Lankisch, Katja; Gass, Peter; Durstewitz, Daniel; Vollmayr, Barbara Where Have I Been? Where Should I Go? Spatial Working Memory on a Radial Arm Maze in a Rat Model of Depression Journal Article PLOS ONE, 2013. @article{Richter2013, title = {Where Have I Been? Where Should I Go? Spatial Working Memory on a Radial Arm Maze in a Rat Model of Depression}, author = {Sophie Helene Richter and Benjamin Zeuch and Katja Lankisch and Peter Gass and Daniel Durstewitz and Barbara Vollmayr}, url = {https://doi.org/10.1371/journal.pone.0062458}, doi = {10.1371/journal.pone.0062458}, year = {2013}, date = {2013-04-13}, journal = {PLOS ONE}, abstract = {Disturbances in cognitive functioning are among the most debilitating problems experienced by patients with major depression. Investigations of these deficits in animals help to extend and refine our understanding of human emotional disorder, while at the same time providing valid tools to study higher executive functions in animals. We employ the “learned helplessness” genetic rat model of depression in studying working memory using an eight arm radial maze procedure with temporal delay. This so-called delayed spatial win-shift task consists of three phases, training, delay and test, requiring rats to hold information on-line across a retention interval and making choices based on this information in the test phase. According to a 2×2 factorial design, working memory performance of thirty-one congenitally helpless (cLH) and non-helpless (cNLH) rats was tested on eighteen trials, additionally imposing two different delay durations, 30 s and 15 min, respectively. While not observing a general cognitive deficit in cLH rats, the delay length greatly influenced maze performance. Notably, performance was most impaired in cLH rats tested with the shorter 30 s delay, suggesting a stress-related disruption of attentional processes in rats that are more sensitive to stress. Our study provides direct animal homologues of clinically important measures in human research, and contributes to the non-invasive assessment of cognitive deficits associated with depression.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Disturbances in cognitive functioning are among the most debilitating problems experienced by patients with major depression. Investigations of these deficits in animals help to extend and refine our understanding of human emotional disorder, while at the same time providing valid tools to study higher executive functions in animals. We employ the “learned helplessness” genetic rat model of depression in studying working memory using an eight arm radial maze procedure with temporal delay. This so-called delayed spatial win-shift task consists of three phases, training, delay and test, requiring rats to hold information on-line across a retention interval and making choices based on this information in the test phase. According to a 2×2 factorial design, working memory performance of thirty-one congenitally helpless (cLH) and non-helpless (cNLH) rats was tested on eighteen trials, additionally imposing two different delay durations, 30 s and 15 min, respectively. While not observing a general cognitive deficit in cLH rats, the delay length greatly influenced maze performance. Notably, performance was most impaired in cLH rats tested with the shorter 30 s delay, suggesting a stress-related disruption of attentional processes in rats that are more sensitive to stress. 
Our study provides direct animal homologues of clinically important measures in human research, and contributes to the non-invasive assessment of cognitive deficits associated with depression. |
2012 |
Hyman, James M; Ma, Liya; Balaguer-Ballester, Emili; Durstewitz, Daniel; Seamans, Jeremy K Contextual encoding by ensembles of medial prefrontal cortex neurons Journal Article Proceedings of the National Academy of Sciences, 2012. @article{Hyman2012, title = {Contextual encoding by ensembles of medial prefrontal cortex neurons}, author = {James M Hyman and Liya Ma and Emili Balaguer-Ballester and Daniel Durstewitz and Jeremy K Seamans}, url = {https://pubmed.ncbi.nlm.nih.gov/22421138/}, doi = {10.1073/pnas.1114415109}, year = {2012}, date = {2012-03-27}, journal = {Proceedings of the National Academy of Sciences}, abstract = {Contextual representations serve to guide many aspects of behavior and influence the way stimuli or actions are encoded and interpreted. The medial prefrontal cortex (mPFC), including the anterior cingulate subregion, has been implicated in contextual encoding, yet the nature of contextual representations formed by the mPFC is unclear. Using multiple single-unit tetrode recordings in rats, we found that different activity patterns emerged in mPFC ensembles when animals moved between different environmental contexts. These differences in activity patterns were significantly larger than those observed for hippocampal ensembles. Whereas ≈11% of mPFC cells consistently preferred one environment over the other across multiple exposures to the same environments, optimal decoding (prediction) of the environmental setting occurred when the activity of up to ≈50% of all mPFC neurons was taken into account. On the other hand, population activity patterns were not identical upon repeated exposures to the very same environment. This was partly because the state of mPFC ensembles seemed to systematically shift with time, such that we could sometimes predict the change in ensemble state upon later reentry into one environment according to linear extrapolation from the time-dependent shifts observed during the first exposure. We also observed that many strongly action-selective mPFC neurons exhibited a significant degree of context-dependent modulation. These results highlight potential differences in contextual encoding schemes by the mPFC and hippocampus and suggest that the mPFC forms rich contextual representations that take into account not only sensory cues but also actions and time.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Contextual representations serve to guide many aspects of behavior and influence the way stimuli or actions are encoded and interpreted. The medial prefrontal cortex (mPFC), including the anterior cingulate subregion, has been implicated in contextual encoding, yet the nature of contextual representations formed by the mPFC is unclear. Using multiple single-unit tetrode recordings in rats, we found that different activity patterns emerged in mPFC ensembles when animals moved between different environmental contexts. These differences in activity patterns were significantly larger than those observed for hippocampal ensembles. Whereas ≈11% of mPFC cells consistently preferred one environment over the other across multiple exposures to the same environments, optimal decoding (prediction) of the environmental setting occurred when the activity of up to ≈50% of all mPFC neurons was taken into account. On the other hand, population activity patterns were not identical upon repeated exposures to the very same environment. 
This was partly because the state of mPFC ensembles seemed to systematically shift with time, such that we could sometimes predict the change in ensemble state upon later reentry into one environment according to linear extrapolation from the time-dependent shifts observed during the first exposure. We also observed that many strongly action-selective mPFC neurons exhibited a significant degree of context-dependent modulation. These results highlight potential differences in contextual encoding schemes by the mPFC and hippocampus and suggest that the mPFC forms rich contextual representations that take into account not only sensory cues but also actions and time. |
Hertäg, Loreen; Hass, Joachim; Golovko, Tatiana; Durstewitz, Daniel An Approximation to the Adaptive Exponential Integrate-and-Fire Neuron Model Allows Fast and Predictive Fitting to Physiological Data Journal Article Frontiers in Computational Neuroscience, 6 (September), pp. 1–22, 2012. @article{Hertag2012, title = {An Approximation to the Adaptive Exponential Integrate-and-Fire Neuron Model Allows Fast and Predictive Fitting to Physiological Data}, author = {Loreen Hertäg and Joachim Hass and Tatiana Golovko and Daniel Durstewitz}, doi = {10.3389/fncom.2012.00062}, year = {2012}, date = {2012-01-01}, journal = {Frontiers in Computational Neuroscience}, volume = {6}, number = {September}, pages = {1--22}, abstract = {For large-scale network simulations, it is often desirable to have computationally tractable, yet in a defined sense still physiologically valid neuron models. In particular, these models should be able to reproduce physiological measurements, ideally in a predictive sense, and under different input regimes in which neurons may operate in vivo. Here we present an approach to parameter estimation for a simple spiking neuron model mainly based on standard f-I curves obtained from in vitro recordings. Such recordings are routinely obtained in standard protocols and assess a neuron's response under a wide range of mean input currents. Our fitting procedure makes use of closed-form expressions for the firing rate derived from an approximation to the adaptive exponential integrate-and-fire (AdEx) model. The resulting fitting process is simple and about two orders of magnitude faster compared to methods based on numerical integration of the differential equations. We probe this method on different cell types recorded from rodent prefrontal cortex. After fitting to the f-I current-clamp data, the model cells are tested on completely different sets of recordings obtained by fluctuating ('in-vivo-like') input currents. For a wide range of different input regimes, cell types, and cortical layers, the model could predict spike times on these test traces quite accurately within the bounds of physiological reliability, although no information from these distinct test sets was used for model fitting. Further analyses delineated some of the empirical factors constraining model fitting and the model's generalization performance. An even simpler adaptive LIF neuron was also examined in this context. Hence, we have developed a 'high-throughput' model fitting procedure which is simple and fast, with good prediction performance, and which relies only on firing rate information and standard physiological data widely and easily available. © 2012 Hertäg, Hass, Golovko and Durstewitz.}, keywords = {}, pubstate = {published}, tppubtype = {article} } For large-scale network simulations, it is often desirable to have computationally tractable, yet in a defined sense still physiologically valid neuron models. In particular, these models should be able to reproduce physiological measurements, ideally in a predictive sense, and under different input regimes in which neurons may operate in vivo. Here we present an approach to parameter estimation for a simple spiking neuron model mainly based on standard f-I curves obtained from in vitro recordings. Such recordings are routinely obtained in standard protocols and assess a neuron's response under a wide range of mean input currents. 
Our fitting procedure makes use of closed-form expressions for the firing rate derived from an approximation to the adaptive exponential integrate-and-fire (AdEx) model. The resulting fitting process is simple and about two orders of magnitude faster compared to methods based on numerical integration of the differential equations. We probe this method on different cell types recorded from rodent prefrontal cortex. After fitting to the f-I current-clamp data, the model cells are tested on completely different sets of recordings obtained by fluctuating ('in-vivo-like') input currents. For a wide range of different input regimes, cell types, and cortical layers, the model could predict spike times on these test traces quite accurately within the bounds of physiological reliability, although no information from these distinct test sets was used for model fitting. Further analyses delineated some of the empirical factors constraining model fitting and the model's generalization performance. An even simpler adaptive LIF neuron was also examined in this context. Hence, we have developed a 'high-throughput' model fitting procedure which is simple and fast, with good prediction performance, and which relies only on firing rate information and standard physiological data widely and easily available. © 2012 Hertäg, Hass, Golovko and Durstewitz. |
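To illustrate the flavor of fitting a closed-form firing-rate expression to f-I data as described above, the sketch below fits the textbook leaky integrate-and-fire f-I curve to synthetic measurements by least squares. This is only a stand-in: the paper uses closed-form expressions derived from its simpAdEx approximation, not the LIF formula used here, and all parameter values and data are made up.

```python
import numpy as np
from scipy.optimize import curve_fit

def lif_rate(I, tau, R, V_th, t_ref):
    """Closed-form f-I curve of a leaky integrate-and-fire neuron
    (a simpler stand-in for the simpAdEx rate expressions)."""
    drive = I * R
    rate = np.zeros_like(I, dtype=float)
    supra = drive > V_th                          # below threshold: zero rate
    rate[supra] = 1.0 / (t_ref + tau * np.log(drive[supra] / (drive[supra] - V_th)))
    return rate

# Synthetic "recorded" f-I data points (illustrative, not real measurements)
I = np.linspace(0.0, 1.0, 40)
f_obs = lif_rate(I, tau=0.02, R=100.0, V_th=20.0, t_ref=0.002)
f_obs += np.random.default_rng(4).normal(0.0, 0.5, I.size)

# Least-squares fit of the closed-form expression to the noisy f-I data
popt, _ = curve_fit(lif_rate, I, f_obs, p0=[0.01, 80.0, 15.0, 0.001])
print("fitted (tau, R, V_th, t_ref):", popt)
```

Because the rate expression is evaluated in closed form, each fit avoids numerically integrating the neuron's differential equations, which is where the speed advantage reported in the paper comes from.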
2011
Joachim Hass, Daniel Durstewitz Models of dopaminergic modulation Journal Article Scholarpedia, 6 (6), 2011. @article{Hass2011, title = {Models of dopaminergic modulation}, author = {Joachim Hass, Daniel Durstewitz}, url = {http://www.scholarpedia.org/article/Models_of_dopaminergic_modulation}, doi = {doi:10.4249/scholarpedia.4215}, year = {2011}, date = {2011-08-01}, journal = {Scholarpedia}, volume = {6}, number = {6}, abstract = {In computational neuroscience, models of dopaminergic modulation address the physiological and computational functions of the neuromodulator dopamine (DA) by implementing it into models of biological neurons and networks. DA plays a highly important role in higher order motor control, goal-directed behavior, motivation, reinforcement learning, and a number of cognitive and executive functions such as working memory, planning, attention, behavioral and cognitive flexibility, inhibition of impulsive responses, and time perception (Schultz, 1998, Nieoullon, 2003, Goldman-Rakic, 2008, Dalley and Everitt, 2009). DA's fundamental part in learning, cognitive, and motor control is also reflected in the various serious nervous system diseases associated with impaired DA regulation, such as Parkinson’s disease, Schizophrenia, bipolar disorder, Huntington’s disease, attention-deficit hyperactivity disorder (ADHD), autism, restless legs syndrome (RLS), and addictions (Meyer-Lindenberg, 2010, Egan and Weinberger, 1997, Dalley and Everitt, 2009). From electrophysiological experiments, DA is known to affect a number of neuronal and synaptic properties in various target areas such as the striatum, the hippocampus, and motor and frontal cortical regions, via different types of receptors often combined within the D1- and D2-receptor class (D1R and D2R) (see Dopamine modulation). In single neurons, DA changes neuronal excitability and signal integration by virtue of its effects on a variety of voltage-dependent currents. DA also enhances or suppresses various synaptic currents such as AMPA-, GABA- and NMDA-type currents. With regards to both intrinsic and synaptic currents, the D1 and D2 receptor classes may function largely antagonistically (Trantham-Davidson et al. 2004, West and Grace 2002, Gulledge and Jaffe, 1998): D2 receptors decrease neuronal excitability with relatively short latency (in vitro), while there is a delayed and prolonged increase mediated by D1R. Similarly, D1R enhance NMDA- and GABA-type currents, while D2R decrease them. These antagonistic physiological effects may be rooted in the differential regulation of intracellular proteins like adenylyl cyclase, cAMP and DARPP-32 through D1R and D2R (Greengard, 2001).}, keywords = {}, pubstate = {published}, tppubtype = {article} } In computational neuroscience, models of dopaminergic modulation address the physiological and computational functions of the neuromodulator dopamine (DA) by implementing it into models of biological neurons and networks. DA plays a highly important role in higher order motor control, goal-directed behavior, motivation, reinforcement learning, and a number of cognitive and executive functions such as working memory, planning, attention, behavioral and cognitive flexibility, inhibition of impulsive responses, and time perception (Schultz, 1998, Nieoullon, 2003, Goldman-Rakic, 2008, Dalley and Everitt, 2009). 
DA's fundamental role in learning, cognitive, and motor control is also reflected in the various serious nervous system diseases associated with impaired DA regulation, such as Parkinson’s disease, schizophrenia, bipolar disorder, Huntington’s disease, attention-deficit hyperactivity disorder (ADHD), autism, restless legs syndrome (RLS), and addictions (Meyer-Lindenberg, 2010, Egan and Weinberger, 1997, Dalley and Everitt, 2009). From electrophysiological experiments, DA is known to affect a number of neuronal and synaptic properties in various target areas such as the striatum, the hippocampus, and motor and frontal cortical regions, via different types of receptors often combined within the D1- and D2-receptor class (D1R and D2R) (see Dopamine modulation). In single neurons, DA changes neuronal excitability and signal integration by virtue of its effects on a variety of voltage-dependent currents. DA also enhances or suppresses various synaptic currents such as AMPA-, GABA- and NMDA-type currents. With regard to both intrinsic and synaptic currents, the D1 and D2 receptor classes may function largely antagonistically (Trantham-Davidson et al. 2004, West and Grace 2002, Gulledge and Jaffe, 1998): D2 receptors decrease neuronal excitability with relatively short latency (in vitro), while there is a delayed and prolonged increase mediated by D1R. Similarly, D1R enhance NMDA- and GABA-type currents, while D2R decrease them. These antagonistic physiological effects may be rooted in the differential regulation of intracellular proteins like adenylyl cyclase, cAMP and DARPP-32 through D1R and D2R (Greengard, 2001).
Balaguer-Ballester, Emili; Lapish, Christopher C; Seamans, Jeremy K; Durstewitz, Daniel Attracting dynamics of frontal cortex ensembles during memory-guided decision-making Journal Article PLoS Computational Biology, 7 (5), 2011, ISSN: 1553734X. @article{Balaguer-Ballester2011, title = {Attracting dynamics of frontal cortex ensembles during memory-guided decision-making}, author = {Emili Balaguer-Ballester and Christopher C Lapish and Jeremy K Seamans and Daniel Durstewitz}, doi = {10.1371/journal.pcbi.1002057}, issn = {1553734X}, year = {2011}, date = {2011-01-01}, journal = {PLoS Computational Biology}, volume = {7}, number = {5}, abstract = {A common theoretical view is that attractor-like properties of neuronal dynamics underlie cognitive processing. However, although often proposed theoretically, direct experimental support for the convergence of neural activity to stable population patterns as a signature of attracting states has been sparse so far, especially in higher cortical areas. Combining state space reconstruction theorems and statistical learning techniques, we were able to resolve details of anterior cingulate cortex (ACC) multiple single-unit activity (MSUA) ensemble dynamics during a higher cognitive task which were not accessible previously. The approach worked by constructing high-dimensional state spaces from delays of the original single-unit firing rate variables and the interactions among them, which were then statistically analyzed using kernel methods. We observed cognitive-epoch-specific neural ensemble states in ACC which were stable across many trials (in the sense of being predictive) and depended on behavioral performance. More interestingly, attracting properties of these cognitively defined ensemble states became apparent in high-dimensional expansions of the MSUA spaces due to a proper unfolding of the neural activity flow, with properties common across different animals. These results therefore suggest that ACC networks may process different subcomponents of higher cognitive tasks by transiting among different attracting states.}, keywords = {}, pubstate = {published}, tppubtype = {article} } A common theoretical view is that attractor-like properties of neuronal dynamics underlie cognitive processing. However, although often proposed theoretically, direct experimental support for the convergence of neural activity to stable population patterns as a signature of attracting states has been sparse so far, especially in higher cortical areas. Combining state space reconstruction theorems and statistical learning techniques, we were able to resolve details of anterior cingulate cortex (ACC) multiple single-unit activity (MSUA) ensemble dynamics during a higher cognitive task which were not accessible previously. The approach worked by constructing high-dimensional state spaces from delays of the original single-unit firing rate variables and the interactions among them, which were then statistically analyzed using kernel methods. We observed cognitive-epoch-specific neural ensemble states in ACC which were stable across many trials (in the sense of being predictive) and depended on behavioral performance. More interestingly, attracting properties of these cognitively defined ensemble states became apparent in high-dimensional expansions of the MSUA spaces due to a proper unfolding of the neural activity flow, with properties common across different animals. 
These results therefore suggest that ACC networks may process different subcomponents of higher cognitive tasks by transiting among different attracting states.
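The core construction here, delay-embedding the unit firing rates and expanding the space with multiplicative interaction terms before statistical analysis, can be sketched in a few lines; the embedding dimension, lag, and polynomial order below are placeholder choices, not those of the study.

# Minimal sketch of the state-space expansion idea: delay-embed multi-unit
# firing rates, then add pairwise interaction terms. Parameters are arbitrary.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

def delay_embed(rates, m, lag):
    """Stack m lagged copies of a (time, units) rate matrix column-wise."""
    T = rates.shape[0] - (m - 1) * lag
    return np.hstack([rates[i * lag : i * lag + T] for i in range(m)])

rng = np.random.default_rng(1)
rates = rng.poisson(5.0, size=(200, 8)).astype(float)  # toy MSUA rate matrix

X = delay_embed(rates, m=3, lag=2)                     # (196, 24) delay space
X = PolynomialFeatures(degree=2, include_bias=False).fit_transform(X)
print(X.shape)                                         # expanded feature space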
2010
Durstewitz, Daniel; Vittoz, Nicole M; Floresco, Stan B; Seamans, Jeremy K Abrupt transitions between prefrontal neural ensemble states accompany behavioral transitions during rule learning Journal Article Neuron, 66 (3), pp. 438–448, 2010, ISSN: 08966273. @article{Durstewitz2010, title = {Abrupt transitions between prefrontal neural ensemble states accompany behavioral transitions during rule learning}, author = {Daniel Durstewitz and Nicole M Vittoz and Stan B Floresco and Jeremy K Seamans}, url = {http://dx.doi.org/10.1016/j.neuron.2010.03.029}, doi = {10.1016/j.neuron.2010.03.029}, issn = {08966273}, year = {2010}, date = {2010-01-01}, journal = {Neuron}, volume = {66}, number = {3}, pages = {438--448}, publisher = {Elsevier Ltd}, abstract = {One of the most intriguing aspects of adaptive behavior involves the inference of regularities and rules in ever-changing environments. Rules are often deduced through evidence-based learning which relies on the prefrontal cortex (PFC). This is a highly dynamic process, evolving trial by trial and therefore may not be adequately captured by averaging single-unit responses over numerous repetitions. Here, we employed advanced statistical techniques to visualize the trajectories of ensembles of simultaneously recorded medial PFC neurons on a trial-by-trial basis as rats deduced a novel rule in a set-shifting task. Neural populations formed clearly distinct and lasting representations of familiar and novel rules by entering unique network states. During rule acquisition, the recorded ensembles often exhibited abrupt transitions, rather than evolving continuously, in tight temporal relation to behavioral performance shifts. These results support the idea that rule learning is an evidence-based decision process, perhaps accompanied by moments of sudden insight. © 2010 Elsevier Inc.}, keywords = {}, pubstate = {published}, tppubtype = {article} } One of the most intriguing aspects of adaptive behavior involves the inference of regularities and rules in ever-changing environments. Rules are often deduced through evidence-based learning which relies on the prefrontal cortex (PFC). This is a highly dynamic process, evolving trial by trial and therefore may not be adequately captured by averaging single-unit responses over numerous repetitions. Here, we employed advanced statistical techniques to visualize the trajectories of ensembles of simultaneously recorded medial PFC neurons on a trial-by-trial basis as rats deduced a novel rule in a set-shifting task. Neural populations formed clearly distinct and lasting representations of familiar and novel rules by entering unique network states. During rule acquisition, the recorded ensembles often exhibited abrupt transitions, rather than evolving continuously, in tight temporal relation to behavioral performance shifts. These results support the idea that rule learning is an evidence-based decision process, perhaps accompanied by moments of sudden insight.
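A toy version of detecting such abrupt ensemble transitions (not the authors' statistical pipeline): scan candidate switch trials and score how cleanly each split separates the trial-by-trial population vectors; everything below is synthetic and illustrative.

# Sketch: locate an abrupt shift in trial-by-trial ensemble activity by
# finding the split maximizing between- vs. within-segment separation.
import numpy as np

def change_point(trials):
    """trials: (n_trials, n_units) array of per-trial firing rates."""
    n = trials.shape[0]
    best_k, best_score = None, -np.inf
    for k in range(5, n - 5):                  # require a few trials per side
        a, b = trials[:k], trials[k:]
        between = np.linalg.norm(a.mean(0) - b.mean(0))
        within = a.std(0).mean() + b.std(0).mean() + 1e-12
        if between / within > best_score:
            best_k, best_score = k, between / within
    return best_k

rng = np.random.default_rng(2)
pre = rng.normal(5.0, 1.0, size=(30, 12))      # "old rule" ensemble state
post = rng.normal(7.0, 1.0, size=(30, 12))     # "new rule" ensemble state
print(change_point(np.vstack([pre, post])))    # ~30 for an abrupt switch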
2009
Durstewitz, Daniel Implications of synaptic biophysics for recurrent network dynamics and active memory Journal Article Neural Networks, 22 (8), pp. 1189–1200, 2009, ISSN: 08936080. @article{Durstewitz2009, title = {Implications of synaptic biophysics for recurrent network dynamics and active memory}, author = {Daniel Durstewitz}, doi = {10.1016/j.neunet.2009.07.016}, issn = {08936080}, year = {2009}, date = {2009-10-01}, journal = {Neural Networks}, volume = {22}, number = {8}, pages = {1189--1200}, abstract = {In cortical networks, synaptic excitation is mediated by AMPA- and NMDA-type receptors. NMDA differ from AMPA synaptic potentials with regard to peak current, time course, and a strong voltage-dependent nonlinearity. Here we illustrate based on empirical and computational findings that these specific biophysical properties may have profound implications for the dynamics of cortical networks, and via dynamics on cognitive functions like active memory. The discussion will be led along a minimal set of neural equations introduced to capture the essential dynamics of the various phenomena described. NMDA currents could establish cortical bistability and may provide the relatively constant synaptic drive needed to robustly maintain enhanced levels of activity during working memory epochs, freeing fast AMPA currents for other computational purposes. Perhaps more importantly, variations in NMDA synaptic input-due to their biophysical particularities-control the dynamical regime within which single neurons and networks reside. By provoking bursting, chaotic irregularity, and coherent oscillations their major effect may be on the temporal pattern of spiking activity, rather than on average firing rate. During active memory, neurons may thus be pushed into a spiking regime that harbors complex temporal structure, potentially optimal for the encoding and processing of temporal sequence information. These observations provide a qualitatively different view on the role of synaptic excitation in neocortical dynamics than entailed by many more abstract models. In this sense, this article is a plead for taking the specific biophysics of real neurons and synapses seriously when trying to account for the neurobiology of cognition. textcopyright 2009 Elsevier Ltd. All rights reserved.}, keywords = {}, pubstate = {published}, tppubtype = {article} } In cortical networks, synaptic excitation is mediated by AMPA- and NMDA-type receptors. NMDA differ from AMPA synaptic potentials with regard to peak current, time course, and a strong voltage-dependent nonlinearity. Here we illustrate based on empirical and computational findings that these specific biophysical properties may have profound implications for the dynamics of cortical networks, and via dynamics on cognitive functions like active memory. The discussion will be led along a minimal set of neural equations introduced to capture the essential dynamics of the various phenomena described. NMDA currents could establish cortical bistability and may provide the relatively constant synaptic drive needed to robustly maintain enhanced levels of activity during working memory epochs, freeing fast AMPA currents for other computational purposes. Perhaps more importantly, variations in NMDA synaptic input-due to their biophysical particularities-control the dynamical regime within which single neurons and networks reside. 
By provoking bursting, chaotic irregularity, and coherent oscillations, their major effect may be on the temporal pattern of spiking activity, rather than on average firing rate. During active memory, neurons may thus be pushed into a spiking regime that harbors complex temporal structure, potentially optimal for the encoding and processing of temporal sequence information. These observations provide a qualitatively different view on the role of synaptic excitation in neocortical dynamics than entailed by many more abstract models. In this sense, this article is a plea for taking the specific biophysics of real neurons and synapses seriously when trying to account for the neurobiology of cognition.
2008
Seamans, Jeremy K; Lapish, Christopher C; Durstewitz, Daniel Comparing the prefrontal cortex of rats and primates: Insights from electrophysiology Journal Article Neurotoxicity Research, 14, pp. 249-262, 2008. @article{Seamans2008, title = {Comparing the prefrontal cortex of rats and primates: Insights from electrophysiology}, author = {Seamans, J.K., Lapish, C.C., & Durstewitz, D.}, url = {https://www.ncbi.nlm.nih.gov/pubmed/19073430}, year = {2008}, date = {2008-10-14}, journal = {Neurotoxicity Research}, volume = {14}, pages = {249-262}, abstract = {There is a long-standing debate about whether rats have what could be considered a prefrontal cortex (PFC) and, if they do, what its primate homologue is. Anatomical evidence supports the view that the rat medial PFC is related to both the primate anterior cingulate cortex (ACC) and the dorsolateral PFC. Functionally the primate and human ACC are believed to be involved in the monitoring of actions and outcomes to guide decisions especially in challenging situations where cognitive conflict and errors arise. In contrast, the dorsolateral PFC is responsible for the maintenance and manipulation of goal-related items in memory in the service of planning, problem solving, and predicting forthcoming events. Recent multiple single-unit recording studies in rats have reported strong correlates of motor planning, movement and reward anticipation analogous to what has been observed in the primate ACC. There is also emerging evidence that rats may partly encode information over delays using body posture or variations in running path as embodied strategies, and that these are the aspects tracked by medial PFC neurons. The primate PFC may have elaborated on these rudimentary functions by carrying them over to more abstract levels of mental representation, more independent from somatic or other external mnemonic cues, and allowing manipulation of mental contents outside specific task contexts. Therefore, from an electrophysiological and computational perspective, the rat medial PFC seems to combine elements of the primate ACC and dorsolateral PFC at a rudimentary level. In primates, these functions may have formed the building blocks required for abstract rule encoding during the expansion of the cortex dorsolaterally.}, keywords = {}, pubstate = {published}, tppubtype = {article} } There is a long-standing debate about whether rats have what could be considered a prefrontal cortex (PFC) and, if they do, what its primate homologue is. Anatomical evidence supports the view that the rat medial PFC is related to both the primate anterior cingulate cortex (ACC) and the dorsolateral PFC. Functionally, the primate and human ACC are believed to be involved in the monitoring of actions and outcomes to guide decisions especially in challenging situations where cognitive conflict and errors arise. In contrast, the dorsolateral PFC is responsible for the maintenance and manipulation of goal-related items in memory in the service of planning, problem solving, and predicting forthcoming events. Recent multiple single-unit recording studies in rats have reported strong correlates of motor planning, movement and reward anticipation analogous to what has been observed in the primate ACC. There is also emerging evidence that rats may partly encode information over delays using body posture or variations in running path as embodied strategies, and that these are the aspects tracked by medial PFC neurons.
The primate PFC may have elaborated on these rudimentary functions by carrying them over to more abstract levels of mental representation, more independent from somatic or other external mnemonic cues, and allowing manipulation of mental contents outside specific task contexts. Therefore, from an electrophysiological and computational perspective, the rat medial PFC seems to combine elements of the primate ACC and dorsolateral PFC at a rudimentary level. In primates, these functions may have formed the building blocks required for abstract rule encoding during the expansion of the cortex dorsolaterally.
Lapish, Christopher C; Durstewitz, Daniel; Chandler, L Judson; Seamans, Jeremy K Successful choice behavior is associated with distinct and coherent network states in anterior cingulate cortex Journal Article Proceedings of the National Academy of Sciences, 2008. @article{Lapish2008, title = {Successful choice behavior is associated with distinct and coherent network states in anterior cingulate cortex}, author = {Christopher C. Lapish, Daniel Durstewitz, L. Judson Chandler, and Jeremy K. Seamans}, url = {https://doi.org/10.1073/pnas.0804045105 }, doi = {10.1073/pnas.0804045105 }, year = {2008}, date = {2008-08-19}, journal = {Proceedings of the National Academy of Sciences}, abstract = {Successful decision making requires an ability to monitor contexts, actions, and outcomes. The anterior cingulate cortex (ACC) is thought to be critical for these functions, monitoring and guiding decisions especially in challenging situations involving conflict and errors. A number of different single-unit correlates have been observed in the ACC that reflect the diverse cognitive components involved. Yet how ACC neurons function as an integrated network is poorly understood. Here we show, using advanced population analysis of multiple single-unit recordings from the rat ACC during performance of an ecologically valid decision-making task, that ensembles of neurons move through different coherent and dissociable states as the cognitive requirements of the task change. This organization into distinct network patterns with respect to both firing-rate changes and correlations among units broke down during trials with numerous behavioral errors, especially at choice points of the task. These results point to an underlying functional organization into cell assemblies in the ACC that may monitor choices, outcomes, and task contexts, thus tracking the animal's progression through “task space.”}, keywords = {}, pubstate = {published}, tppubtype = {article} } Successful decision making requires an ability to monitor contexts, actions, and outcomes. The anterior cingulate cortex (ACC) is thought to be critical for these functions, monitoring and guiding decisions especially in challenging situations involving conflict and errors. A number of different single-unit correlates have been observed in the ACC that reflect the diverse cognitive components involved. Yet how ACC neurons function as an integrated network is poorly understood. Here we show, using advanced population analysis of multiple single-unit recordings from the rat ACC during performance of an ecologically valid decision-making task, that ensembles of neurons move through different coherent and dissociable states as the cognitive requirements of the task change. This organization into distinct network patterns with respect to both firing-rate changes and correlations among units broke down during trials with numerous behavioral errors, especially at choice points of the task. These results point to an underlying functional organization into cell assemblies in the ACC that may monitor choices, outcomes, and task contexts, thus tracking the animal's progression through “task space.”
Durstewitz, Daniel; Seamans, Jeremy K The dual-state theory of prefrontal cortex dopamine function with relevance to COMT genotypes and schizophrenia Journal Article Biol Psychiatry, 2008. @article{Durstewitz2008b, title = {The dual-state theory of prefrontal cortex dopamine function with relevance to COMT genotypes and schizophrenia}, author = {Daniel Durstewitz, Jeremy K Seamans }, url = {https://pubmed.ncbi.nlm.nih.gov/18620336/}, doi = {10.1016/j.biopsych.2008.05.015}, year = {2008}, date = {2008-07-11}, journal = { Biol Psychiatry}, abstract = {There is now general consensus that at least some of the cognitive deficits in schizophrenia are related to dysfunctions in the prefrontal cortex (PFC) dopamine (DA) system. At the cellular and synaptic level, the effects of DA in PFC via D1- and D2-class receptors are highly complex, often apparently opposing, and hence difficult to understand with regard to their functional implications. Biophysically realistic computational models have provided valuable insights into how the effects of DA on PFC neurons and synaptic currents as measured in vitro link up to the neural network and cognitive levels. They suggest the existence of two discrete dynamical regimes, a D1-dominated state characterized by a high energy barrier among different network patterns that favors robust online maintenance of information and a D2-dominated state characterized by a low energy barrier that is beneficial for flexible and fast switching among representational states. These predictions are consistent with a variety of electrophysiological, neuroimaging, and behavioral results in humans and nonhuman species. Moreover, these biophysically based models predict that imbalanced D1:D2 receptor activation causing extremely low or extremely high energy barriers among activity states could lead to the emergence of cognitive, positive, and negative symptoms observed in schizophrenia. Thus, combined experimental and computational approaches hold the promise of allowing a detailed mechanistic understanding of how DA alters information processing in normal and pathological conditions, thereby potentially providing new routes for the development of pharmacological treatments for schizophrenia. }, keywords = {}, pubstate = {published}, tppubtype = {article} } There is now general consensus that at least some of the cognitive deficits in schizophrenia are related to dysfunctions in the prefrontal cortex (PFC) dopamine (DA) system. At the cellular and synaptic level, the effects of DA in PFC via D1- and D2-class receptors are highly complex, often apparently opposing, and hence difficult to understand with regard to their functional implications. Biophysically realistic computational models have provided valuable insights into how the effects of DA on PFC neurons and synaptic currents as measured in vitro link up to the neural network and cognitive levels. They suggest the existence of two discrete dynamical regimes, a D1-dominated state characterized by a high energy barrier among different network patterns that favors robust online maintenance of information and a D2-dominated state characterized by a low energy barrier that is beneficial for flexible and fast switching among representational states. These predictions are consistent with a variety of electrophysiological, neuroimaging, and behavioral results in humans and nonhuman species.
Moreover, these biophysically based models predict that imbalanced D1:D2 receptor activation causing extremely low or extremely high energy barriers among activity states could lead to the emergence of cognitive, positive, and negative symptoms observed in schizophrenia. Thus, combined experimental and computational approaches hold the promise of allowing a detailed mechanistic understanding of how DA alters information processing in normal and pathological conditions, thereby potentially providing new routes for the development of pharmacological treatments for schizophrenia.
Durstewitz, D; Deco, G Computational significance of transient dynamics in cortical networks Journal Article European Journal of Neuroscience, 27, pp. 217-27, 2008. @article{Durstewitz2008, title = {Computational significance of transient dynamics in cortical networks}, author = {D. Durstewitz and G. Deco}, url = {https://www.ncbi.nlm.nih.gov/pubmed/18093174}, year = {2008}, date = {2008-02-27}, journal = {European Journal of Neuroscience}, volume = {27}, pages = {217-27}, abstract = {Neural responses are most often characterized in terms of the sets of environmental or internal conditions or stimuli with which their firing rate increases or decreases are correlated. Their transient (nonstationary) temporal profiles of activity have received comparatively less attention. Similarly, the computational framework of attractor neural networks puts most emphasis on the representational or computational properties of the stable states of a neural system. Here we review a couple of neurophysiological observations and computational ideas that shift the focus to the transient dynamics of neural systems. We argue that there are many situations in which the transient neural behaviour, while hopping between different attractor states or moving along 'attractor ruins', carries most of the computational and/or behavioural significance, rather than the attractor states eventually reached. Such transients may be related to the computation of temporally precise predictions or the probabilistic transitions among choice options, accounting for Weber's law in decision-making tasks. Finally, we conclude with a more general perspective on the role of transient dynamics in the brain, promoting the view that brain activity is characterized by a high-dimensional chaotic ground state from which transient spatiotemporal patterns (metastable states) briefly emerge. Neural computation has to exploit the itinerant dynamics between these states.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Neural responses are most often characterized in terms of the sets of environmental or internal conditions or stimuli with which their firing rate increases or decreases are correlated. Their transient (nonstationary) temporal profiles of activity have received comparatively less attention. Similarly, the computational framework of attractor neural networks puts most emphasis on the representational or computational properties of the stable states of a neural system. Here we review a couple of neurophysiological observations and computational ideas that shift the focus to the transient dynamics of neural systems. We argue that there are many situations in which the transient neural behaviour, while hopping between different attractor states or moving along 'attractor ruins', carries most of the computational and/or behavioural significance, rather than the attractor states eventually reached. Such transients may be related to the computation of temporally precise predictions or the probabilistic transitions among choice options, accounting for Weber's law in decision-making tasks. Finally, we conclude with a more general perspective on the role of transient dynamics in the brain, promoting the view that brain activity is characterized by a high-dimensional chaotic ground state from which transient spatiotemporal patterns (metastable states) briefly emerge. Neural computation has to exploit the itinerant dynamics between these states.
2007
Durstewitz, D; Gabriel, T Dynamical basis of irregular spiking in NMDA-driven prefrontal cortex neurons Journal Article Cerebral Cortex, 17, pp. 894-908, 2007. @article{Durstewitz2007, title = {Dynamical basis of irregular spiking in NMDA-driven prefrontal cortex neurons}, author = {Durstewitz, D., & Gabriel, T.}, url = {https://www.ncbi.nlm.nih.gov/pubmed/16740581}, year = {2007}, date = {2007-04-17}, journal = {Cerebral Cortex}, volume = {17}, pages = {894-908}, abstract = {Slow N-Methyl-D-aspartic acid (NMDA) synaptic currents are assumed to strongly contribute to the persistently elevated firing rates observed in prefrontal cortex (PFC) during working memory. During persistent activity, spiking of many neurons is highly irregular. Here we report that highly irregular firing can be induced through a combination of NMDA- and dopamine D1 receptor agonists applied to adult PFC neurons in vitro. The highest interspike-interval (ISI) variability occurred in a transition regime where the subthreshold membrane potential distribution shifts from mono- to bimodality, while neurons with clearly mono- or bimodal distributions fired much more regularly. Predictability within irregular ISI series was significantly higher than expected from a noise-driven linear process, indicating that it might best be described through complex (potentially chaotic) nonlinear deterministic processes. Accordingly, the phenomena observed in vitro could be reproduced in purely deterministic biophysical model neurons. High spiking irregularity in these models emerged within a chaotic, close-to-bifurcation regime characterized by a shift of the membrane potential distribution from mono- to bimodality and by similar ISI return maps as observed in vitro. The nonlinearity of NMDA conductances was crucial for inducing this regime. NMDA-induced irregular dynamics may have important implications for computational processes during working memory and neural coding.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Slow N-Methyl-D-aspartic acid (NMDA) synaptic currents are assumed to strongly contribute to the persistently elevated firing rates observed in prefrontal cortex (PFC) during working memory. During persistent activity, spiking of many neurons is highly irregular. Here we report that highly irregular firing can be induced through a combination of NMDA- and dopamine D1 receptor agonists applied to adult PFC neurons in vitro. The highest interspike-interval (ISI) variability occurred in a transition regime where the subthreshold membrane potential distribution shifts from mono- to bimodality, while neurons with clearly mono- or bimodal distributions fired much more regularly. Predictability within irregular ISI series was significantly higher than expected from a noise-driven linear process, indicating that it might best be described through complex (potentially chaotic) nonlinear deterministic processes. Accordingly, the phenomena observed in vitro could be reproduced in purely deterministic biophysical model neurons. High spiking irregularity in these models emerged within a chaotic, close-to-bifurcation regime characterized by a shift of the membrane potential distribution from mono- to bimodality and by similar ISI return maps as observed in vitro. The nonlinearity of NMDA conductances was crucial for inducing this regime. NMDA-induced irregular dynamics may have important implications for computational processes during working memory and neural coding.
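The basic irregularity measures in this line of work, the ISI coefficient of variation and the lag-1 ISI return map, are easy to compute; the spike train below is a synthetic stand-in, not data from the study.

# Sketch: quantify spiking irregularity (ISI CV) and build the lag-1 ISI
# return map used to eyeball deterministic structure in irregular spiking.
import numpy as np

def isi_stats(spike_times):
    isis = np.diff(np.sort(spike_times))
    cv = isis.std() / isis.mean()              # ~1 for Poisson-like firing
    return cv, np.column_stack([isis[:-1], isis[1:]])  # (ISI_n, ISI_n+1)

rng = np.random.default_rng(3)
spikes = np.cumsum(rng.exponential(0.1, 500))  # toy Poisson-like train (s)
cv, return_map = isi_stats(spikes)
print(f"CV = {cv:.2f}; return-map points: {return_map.shape[0]}")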
Lapish, Christopher C; Kroener, Sven; Durstewitz, Daniel; Lavin, Antonieta; Seamans, Jeremy K The ability of the mesocortical dopamine system to operate in distinct temporal modes Journal Article Psychopharmacology, 2007. @article{Lapish2007, title = {The ability of the mesocortical dopamine system to operate in distinct temporal modes}, author = {Christopher C. Lapish, Sven Kroener, Daniel Durstewitz, Antonieta Lavin, and Jeremy K. Seamans}, url = {https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5509053/}, doi = {10.1007/s00213-006-0527-8}, year = {2007}, date = {2007-04-01}, journal = {Psychopharmacology}, abstract = {Phasic bursting of midbrain DA neurons may provide temporally precise information about the mismatch between expected and actual rewards (prediction errors) that has been hypothesized to serve as a learning signal in efferent regions. However, because DA acts as a relatively slow modulator of cortical neurotransmission, it is unclear whether DA can indeed act to precisely transmit prediction errors to prefrontal cortex (PFC). In light of recent physiological and anatomical evidence, we propose that corelease of glutamate from DA and/or non-DA neurons in the VTA could serve to transmit this temporally precise signal. In contrast, DA acts in a protracted manner to provide spatially and temporally diffuse modulation of PFC pyramidal neurons and interneurons. This modulation occurs first via a relatively rapid depolarization of fast-spiking interneurons that acts on the order of seconds. This is followed by a more protracted modulation of a variety of other ionic currents on timescales of minutes to hours, which may bias the manner in which cortical networks process information. However, the prolonged actions of DA may be curtailed by counteracting influences, which likely include opposing actions at D1 and D2-like receptors that have been shown to be time- and concentration-dependent. In this way, the mesocortical DA system optimizes the characteristics of glutamate, GABA, and DA neurotransmission both within the midbrain and cortex to communicate temporally precise information and to modulate network activity patterns on prolonged timescales.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Phasic bursting of midbrain DA neurons may provide temporally precise information about the mismatch between expected and actual rewards (prediction errors) that has been hypothesized to serve as a learning signal in efferent regions. However, because DA acts as a relatively slow modulator of cortical neurotransmission, it is unclear whether DA can indeed act to precisely transmit prediction errors to prefrontal cortex (PFC). In light of recent physiological and anatomical evidence, we propose that corelease of glutamate from DA and/or non-DA neurons in the VTA could serve to transmit this temporally precise signal. In contrast, DA acts in a protracted manner to provide spatially and temporally diffuse modulation of PFC pyramidal neurons and interneurons. This modulation occurs first via a relatively rapid depolarization of fast-spiking interneurons that acts on the order of seconds. This is followed by a more protracted modulation of a variety of other ionic currents on timescales of minutes to hours, which may bias the manner in which cortical networks process information. However, the prolonged actions of DA may be curtailed by counteracting influences, which likely include opposing actions at D1 and D2-like receptors that have been shown to be time- and concentration-dependent.
In this way, the mesocortical DA system optimizes the characteristics of glutamate, GABA, and DA neurotransmission both within the midbrain and cortex to communicate temporally precise information and to modulate network activity patterns on prolonged timescales.
2006
Durstewitz, Daniel; Seamans, Jeremy K Beyond bistability: Biophysics and temporal dynamics of working memory Journal Article Neuroscience, 2006. @article{Durstewitz2006, title = {Beyond bistability: Biophysics and temporal dynamics of working memory}, author = {Daniel Durstewitz, Jeremy K Seamans}, url = {https://doi.org/10.1016/j.neuroscience.2005.06.094}, doi = {10.1016/j.neuroscience.2005.06.094}, year = {2006}, date = {2006-04-28}, journal = {Neuroscience}, abstract = {Working memory has often been modeled and conceptualized as a kind of binary (bistable) memory switch, where stimuli turn on plateau-like persistent activity in subsets of cells, in line with many in vivo electrophysiological reports. A potentially related form of bistability, termed up- and down-states, has been studied with regard to its synaptic and ionic basis in vivo and in reduced cortical preparations. Also single cell mechanisms for producing bistability have been proposed and investigated in brain slices and computationally. Recently, however, it has been emphasized that clear plateau-like bistable activity is rather rare during working memory tasks, and that neurons exhibit a multitude of different temporally unfolding activity profiles and temporal structure within their spiking dynamics. Hence, working memory seems to be a highly dynamical neural process with yet unknown mappings from dynamical to computational properties. Empirical findings on ramping activity profiles and temporal structure will be reviewed, as well as neural models that attempt to account for it and its computational significance. Furthermore, recent in vivo, neural culture, and in vitro preparations will be discussed that offer new possibilities for studying the biophysical mechanisms underlying computational processes during working memory. These preparations have revealed additional evidence for temporal structure and spatio-temporally organized attractor states in cortical networks, as well as for specific computational properties that may characterize synaptic processing during high-activity states as during working memory. Together such findings may lay the foundations for highly dynamical theories of working memory based on biophysical principles.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Working memory has often been modeled and conceptualized as a kind of binary (bistable) memory switch, where stimuli turn on plateau-like persistent activity in subsets of cells, in line with many in vivo electrophysiological reports. A potentially related form of bistability, termed up- and down-states, has been studied with regard to its synaptic and ionic basis in vivo and in reduced cortical preparations. Also single cell mechanisms for producing bistability have been proposed and investigated in brain slices and computationally. Recently, however, it has been emphasized that clear plateau-like bistable activity is rather rare during working memory tasks, and that neurons exhibit a multitude of different temporally unfolding activity profiles and temporal structure within their spiking dynamics. Hence, working memory seems to be a highly dynamical neural process with yet unknown mappings from dynamical to computational properties. Empirical findings on ramping activity profiles and temporal structure will be reviewed, as well as neural models that attempt to account for it and its computational significance.
Furthermore, recent in vivo, neural culture, and in vitro preparations will be discussed that offer new possibilities for studying the biophysical mechanisms underlying computational processes during working memory. These preparations have revealed additional evidence for temporal structure and spatio-temporally organized attractor states in cortical networks, as well as for specific computational properties that may characterize synaptic processing during high-activity states as during working memory. Together such findings may lay the foundations for highly dynamical theories of working memory based on biophysical principles.
2004
Durstewitz, D Neural representation of interval time Journal Article Neuroreport, 15, pp. 745-749, 2004. @article{Durstewitz2004, title = {Neural representation of interval time}, author = {D. Durstewitz}, url = {https://www.ncbi.nlm.nih.gov/pubmed/15073507}, year = {2004}, date = {2004-04-09}, journal = {Neuroreport}, volume = {15}, pages = {745-749}, abstract = {Animals can predict the time of occurrence of a forthcoming event relative to a preceding stimulus, i.e. the interval time between those two, given previous learning experience with the temporal contingency between them. Accumulating evidence suggests that a particular pattern of neural activity observed during tasks involving fixed temporal intervals might carry interval time information: the activity of some cortical and subcortical neurons ramps up slowly and linearly during the interval, like a temporal integrator, and peaks around the time at which the event is due to occur. The slope of this climbing activity, and hence the peak time, adjusts to the length of a temporal interval during repetitive experience with it. Various neural mechanisms for producing climbing activity with variable slopes, representing the length of learned intervals, are discussed.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Animals can predict the time of occurrence of a forthcoming event relative to a preceding stimulus, i.e. the interval time between those two, given previous learning experience with the temporal contingency between them. Accumulating evidence suggests that a particular pattern of neural activity observed during tasks involving fixed temporal intervals might carry interval time information: the activity of some cortical and subcortical neurons ramps up slowly and linearly during the interval, like a temporal integrator, and peaks around the time at which the event is due to occur. The slope of this climbing activity, and hence the peak time, adjusts to the length of a temporal interval during repetitive experience with it. Various neural mechanisms for producing climbing activity with variable slopes, representing the length of learned intervals, are discussed.
2003
Durstewitz, D Self-organizing neural integrator predicts interval times through climbing activity Journal Article Journal of Neuroscience, 23, pp. 5342-5353, 2003. @article{Durstewitz2003b, title = {Self-organizing neural integrator predicts interval times through climbing activity}, author = {D. Durstewitz}, url = {https://www.jneurosci.org/content/23/12/5342}, year = {2003}, date = {2003-06-15}, journal = { Journal of Neuroscience}, volume = {23}, pages = {5342-5353}, abstract = {Mammals can reliably predict the time of occurrence of an expected event after a predictive stimulus. Climbing activity is a prominent profile of neural activity observed in prefrontal cortex and other brain areas that is related to the anticipation of forthcoming events. Climbing activity might span intervals from hundreds of milliseconds to tens of seconds and has a number of properties that make it a plausible candidate for representing interval time. A biophysical model is presented that produces climbing, temporal integrator-like activity with variable slopes as observed empirically, through a single-cell positive feedback loop between firing rate, spike-driven Ca2+ influx, and Ca2+-activated inward currents. It is shown that the fine adjustment of this feedback loop might emerge in a self-organizing manner if the cell can use the variance in intracellular Ca2+ fluctuations as a learning signal. This self-organizing process is based on the present observation that the variance of the intracellular Ca2+ concentration and the variance of the neural firing rate and of activity-dependent conductances reach a maximum as the biophysical parameters of a cell approach a configuration required for temporal integration. Thus, specific mechanisms are proposed for (1) how neurons might represent interval times of variable length and (2) how neurons could acquire the biophysical properties that enable them to work as timers.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Mammals can reliably predict the time of occurrence of an expected event after a predictive stimulus. Climbing activity is a prominent profile of neural activity observed in prefrontal cortex and other brain areas that is related to the anticipation of forthcoming events. Climbing activity might span intervals from hundreds of milliseconds to tens of seconds and has a number of properties that make it a plausible candidate for representing interval time. A biophysical model is presented that produces climbing, temporal integrator-like activity with variable slopes as observed empirically, through a single-cell positive feedback loop between firing rate, spike-driven Ca2+ influx, and Ca2+-activated inward currents. It is shown that the fine adjustment of this feedback loop might emerge in a self-organizing manner if the cell can use the variance in intracellular Ca2+ fluctuations as a learning signal. This self-organizing process is based on the present observation that the variance of the intracellular Ca2+ concentration and the variance of the neural firing rate and of activity-dependent conductances reach a maximum as the biophysical parameters of a cell approach a configuration required for temporal integration. Thus, specific mechanisms are proposed for (1) how neurons might represent interval times of variable length and (2) how neurons could acquire the biophysical properties that enable them to work as timers.
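A rate-level caricature of the integrator mechanism (not the conductance-based model of the paper): when positive feedback almost exactly cancels the leak, activity ramps linearly, and the drive sets the slope and hence the represented interval. All parameters below are illustrative.

# Sketch: a near-perfect integrator ramps linearly; the drive sets the ramp
# slope and hence the "peak time". A caricature of the firing-rate <-> Ca2+
# <-> inward-current feedback loop, not the paper's biophysical model.
import numpy as np

def climbing_activity(drive, w=0.999, tau=0.05, T=5.0, dt=0.001):
    r, trace = 0.0, []
    for _ in range(int(T / dt)):
        r += dt / tau * (-r + w * r + drive)   # w -> 1: leak cancels, r ramps
        trace.append(r)
    return np.array(trace)

slow, fast = climbing_activity(0.5), climbing_activity(1.0)
print(slow[-1], fast[-1])   # doubling the drive roughly doubles the slope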
2002
Durstewitz, D; Seamans, J K The computational role of dopamine D1 receptors in working memory Journal Article Neural Networks, 15, pp. 561-572, 2002. @article{Durstewitz2002, title = {The computational role of dopamine D1 receptors in working memory}, author = {D. Durstewitz and J.K. Seamans}, url = {https://www.ncbi.nlm.nih.gov/pubmed/12371512}, year = {2002}, date = {2002-06-01}, journal = {Neural Networks}, volume = {15}, pages = {561-572}, abstract = {The prefrontal cortex (PFC) is essential for working memory, which is the ability to transiently hold and manipulate information necessary for generating forthcoming action. PFC neurons actively encode working memory information via sustained firing patterns. Dopamine via D1 receptors potently modulates sustained activity of PFC neurons and performance in working memory tasks. In vitro patch-clamp data have revealed many different cellular actions of dopamine on PFC neurons and synapses. These effects were simulated using realistic networks of recurrently connected assemblies of PFC neurons. Simulated D1-mediated modulation led to a deepening and widening of the basins of attraction of high (working memory) activity states of the network, while at the same time background activity was depressed. As a result, self-sustained activity was more robust to distracting stimuli and noise. In this manner, D1 receptor stimulation might regulate the extent to which PFC network activity is focused on a particular goal state versus being open to new goals or information unrelated to the current goal.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The prefrontal cortex (PFC) is essential for working memory, which is the ability to transiently hold and manipulate information necessary for generating forthcoming action. PFC neurons actively encode working memory information via sustained firing patterns. Dopamine via D1 receptors potently modulates sustained activity of PFC neurons and performance in working memory tasks. In vitro patch-clamp data have revealed many different cellular actions of dopamine on PFC neurons and synapses. These effects were simulated using realistic networks of recurrently connected assemblies of PFC neurons. Simulated D1-mediated modulation led to a deepening and widening of the basins of attraction of high (working memory) activity states of the network, while at the same time background activity was depressed. As a result, self-sustained activity was more robust to distracting stimuli and noise. In this manner, D1 receptor stimulation might regulate the extent to which PFC network activity is focused on a particular goal state versus being open to new goals or information unrelated to the current goal.
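The basin-deepening effect can be caricatured with a single bistable rate unit: raising the recurrent gain, a stand-in for D1-enhanced NMDA drive, lets the high-activity memory state survive stronger distractor pulses. This is a one-dimensional illustration with arbitrary parameters, not the paper's network model.

# Sketch: 1-unit bistable rate model. Raising recurrent gain g (a stand-in
# for D1-enhanced recurrent drive) makes the high-activity "memory" fixed
# point more robust to a transient distractor. A caricature only.
import numpy as np

def simulate(g, distractor, T=2000, dt=0.01, tau=1.0):
    r = 1.0                                          # start in the high state
    for t in range(T):
        inp = g / (1.0 + np.exp(-8.0 * (r - 0.5)))   # sigmoidal recurrence
        if 500 <= t < 600:
            inp -= distractor                        # transient distractor
        r += dt / tau * (-r + inp)
    return r                                         # ~g if memory survived

for g in (1.0, 1.4):
    print(g, [round(simulate(g, d), 2) for d in (0.4, 0.8, 1.2)])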
2001
Seamans, J K; Gorelova, N; Durstewitz, D; Yang, C R Bidirectional dopamine modulation of GABAergic inhibition in prefrontal cortical pyramidal neurons Journal Article Journal of Neuroscience, 21 (10), pp. 3628–3638, 2001. @article{Seamans2001, title = {Bidirectional dopamine modulation of GABAergic inhibition in prefrontal cortical pyramidal neurons}, author = {J. K. Seamans, N. Gorelova, D. Durstewitz, C. R. Yang }, url = {https://pubmed.ncbi.nlm.nih.gov/11331392/}, doi = {10.1523/JNEUROSCI.21-10-03628.2001}, year = {2001}, date = {2001-05-15}, abstract = {Dopamine regulates the activity of neural networks in the prefrontal cortex that process working memory information, but its precise biophysical actions are poorly understood. The present study characterized the effects of dopamine on GABAergic inputs to prefrontal pyramidal neurons using whole-cell patch-clamp recordings in vitro. In most pyramidal cells, dopamine had a temporally biphasic effect on evoked IPSCs, producing an initial abrupt decrease in amplitude followed by a delayed increase in IPSC amplitude. Using receptor subtype-specific agonists and antagonists, we found that the initial abrupt reduction was D2 receptor-mediated, whereas the late, slower developing enhancement was D1 receptor-mediated. Linearly combining the effects of the two agonists could reproduce the biphasic dopamine effect. Because D1 agonists enhanced spontaneous (sIPSCs) but did not affect miniature (mIPSCs) IPSCs, it appears that D1 agonists caused larger evoked IPSCs by increasing the intrinsic excitability of interneurons and their axons. In contrast, D2 agonists had no effects on sIPSCs but did produce a significant reduction in mIPSCs, suggestive of a decrease in GABA release probability. In addition, D2 agonists reduced the postsynaptic response to a GABA(A) agonist. D1 and D2 receptors therefore regulated GABAergic activity in opposite manners and through different mechanisms in prefrontal cortex (PFC) pyramidal cells. This bidirectional modulation could have important implications for the computational properties of active PFC networks. }, keywords = {}, pubstate = {published}, tppubtype = {article} } Dopamine regulates the activity of neural networks in the prefrontal cortex that process working memory information, but its precise biophysical actions are poorly understood. The present study characterized the effects of dopamine on GABAergic inputs to prefrontal pyramidal neurons using whole-cell patch-clamp recordings in vitro. In most pyramidal cells, dopamine had a temporally biphasic effect on evoked IPSCs, producing an initial abrupt decrease in amplitude followed by a delayed increase in IPSC amplitude. Using receptor subtype-specific agonists and antagonists, we found that the initial abrupt reduction was D2 receptor-mediated, whereas the late, slower developing enhancement was D1 receptor-mediated. Linearly combining the effects of the two agonists could reproduce the biphasic dopamine effect. Because D1 agonists enhanced spontaneous (sIPSCs) but did not affect miniature (mIPSCs) IPSCs, it appears that D1 agonists caused larger evoked IPSCs by increasing the intrinsic excitability of interneurons and their axons. In contrast, D2 agonists had no effects on sIPSCs but did produce a significant reduction in mIPSCs, suggestive of a decrease in GABA release probability. In addition, D2 agonists reduced the postsynaptic response to a GABA(A) agonist.
D1 and D2 receptors therefore regulated GABAergic activity in opposite manners and through different mechanisms in prefrontal cortex (PFC) pyramidal cells. This bidirectional modulation could have important implications for the computational properties of active PFC networks. |
Seamans, Jeremy K; Durstewitz, Daniel; Christie, Brian R; Stevens, Charles F; Sejnowski, Terrence J Dopamine D1/D5 receptor modulation of excitatory synaptic inputs to layer V prefrontal cortex neurons Journal Article Proceedings of the National Academy of Sciences, 98 (1), 2001. @article{Seamans2001b, title = {Dopamine D1/D5 receptor modulation of excitatory synaptic inputs to layer V prefrontal cortex neurons}, author = {Jeremy K. Seamans and Daniel Durstewitz and Brian R. Christie and Charles F. Stevens and Terrence J. Sejnowski}, url = {https://doi.org/10.1073/pnas.98.1.301}, doi = {10.1073/pnas.98.1.301}, year = {2001}, date = {2001-01-02}, journal = {Proceedings of the National Academy of Sciences}, volume = {98}, number = {1}, abstract = {Dopamine acts mainly through the D1/D5 receptor in the prefrontal cortex (PFC) to modulate neural activity and behaviors associated with working memory. To understand the mechanism of this effect, we examined the modulation of excitatory synaptic inputs onto layer V PFC pyramidal neurons by D1/D5 receptor stimulation. D1/D5 agonists increased the size of the N-methyl-d-aspartate (NMDA) component of excitatory postsynaptic currents (EPSCs) through a postsynaptic mechanism. In contrast, D1/D5 agonists caused a slight reduction in the size of the non-NMDA component of EPSCs through a small decrease in release probability. With 20 Hz synaptic trains, we found that the D1/D5 agonists increased the depolarization produced by the summating NMDA component of the excitatory postsynaptic potentials (EPSPs). By increasing the NMDA component of EPSCs, yet slightly reducing release, D1/D5 receptor activation selectively enhanced sustained synaptic inputs and equalized the sizes of EPSPs in a 20-Hz train.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Dopamine acts mainly through the D1/D5 receptor in the prefrontal cortex (PFC) to modulate neural activity and behaviors associated with working memory. To understand the mechanism of this effect, we examined the modulation of excitatory synaptic inputs onto layer V PFC pyramidal neurons by D1/D5 receptor stimulation. D1/D5 agonists increased the size of the N-methyl-d-aspartate (NMDA) component of excitatory postsynaptic currents (EPSCs) through a postsynaptic mechanism. In contrast, D1/D5 agonists caused a slight reduction in the size of the non-NMDA component of EPSCs through a small decrease in release probability. With 20 Hz synaptic trains, we found that the D1/D5 agonists increased the depolarization produced by the summating NMDA component of the excitatory postsynaptic potentials (EPSPs). By increasing the NMDA component of EPSCs, yet slightly reducing release, D1/D5 receptor activation selectively enhanced sustained synaptic inputs and equalized the sizes of EPSPs in a 20-Hz train.
2000
Durstewitz, Daniel; Seamans, Jeremy K; Sejnowski, Terrence J Dopamine-Mediated Stabilization of Delay-Period Activity in a Network Model of Prefrontal Cortex Journal Article Journal of Neurophysiology, 83 (3), 2000. @article{Durstewitz2000b, title = {Dopamine-Mediated Stabilization of Delay-Period Activity in a Network Model of Prefrontal Cortex}, author = {Daniel Durstewitz and Jeremy K. Seamans and Terrence J. Sejnowski}, url = {https://doi.org/10.1152/jn.2000.83.3.1733}, doi = {10.1152/jn.2000.83.3.1733}, year = {2000}, date = {2000-03-01}, journal = {Journal of Neurophysiology}, volume = {83}, number = {3}, abstract = {The prefrontal cortex (PFC) is critically involved in working memory, which underlies memory-guided, goal-directed behavior. During working-memory tasks, PFC neurons exhibit sustained elevated activity, which may reflect the active holding of goal-related information or the preparation of forthcoming actions. Dopamine via the D1 receptor strongly modulates both this sustained (delay-period) activity and behavioral performance in working-memory tasks. However, the function of dopamine during delay-period activity and the underlying neural mechanisms are only poorly understood. Recently we proposed that dopamine might stabilize active neural representations in PFC circuits during tasks involving working memory and render them robust against interfering stimuli and noise. To further test this idea and to examine the dopamine-modulated ionic currents that could give rise to increased stability of neural representations, we developed a network model of the PFC consisting of multicompartment neurons equipped with Hodgkin-Huxley-like channel kinetics that could reproduce in vitro whole cell and in vivo recordings from PFC neurons. Dopaminergic effects on intrinsic ionic and synaptic conductances were implemented in the model based on in vitro data. Simulated dopamine strongly enhanced high, delay-type activity but not low, spontaneous activity in the model network. Furthermore, the strength of an afferent stimulation needed to disrupt delay-type activity increased with the magnitude of the dopamine-induced shifts in network parameters, making the currently active representation much more stable. Stability could be increased by dopamine-induced enhancements of the persistent Na+ and N-methyl-d-aspartate (NMDA) conductances. Stability also was enhanced by a reduction in AMPA conductances. The increase in GABAA conductances that occurs after stimulation of dopaminergic D1 receptors was necessary in this context to prevent uncontrolled, spontaneous switches into high-activity states (i.e., spontaneous activation of task-irrelevant representations). In conclusion, the dopamine-induced changes in the biophysical properties of intrinsic ionic and synaptic conductances conjointly acted to highly increase stability of activated representations in PFC networks and at the same time retain control over network behavior and thus preserve its ability to adequately respond to task-related stimuli. Predictions of the model can be tested in vivo by locally applying specific D1 receptor, NMDA, or GABAA antagonists while recording from PFC neurons in delayed reaction-type tasks with interfering stimuli.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The prefrontal cortex (PFC) is critically involved in working memory, which underlies memory-guided, goal-directed behavior. During working-memory tasks, PFC neurons exhibit sustained elevated activity, which may reflect the active holding of goal-related information or the preparation of forthcoming actions. Dopamine via the D1 receptor strongly modulates both this sustained (delay-period) activity and behavioral performance in working-memory tasks. However, the function of dopamine during delay-period activity and the underlying neural mechanisms are only poorly understood. Recently we proposed that dopamine might stabilize active neural representations in PFC circuits during tasks involving working memory and render them robust against interfering stimuli and noise. To further test this idea and to examine the dopamine-modulated ionic currents that could give rise to increased stability of neural representations, we developed a network model of the PFC consisting of multicompartment neurons equipped with Hodgkin-Huxley-like channel kinetics that could reproduce in vitro whole cell and in vivo recordings from PFC neurons. Dopaminergic effects on intrinsic ionic and synaptic conductances were implemented in the model based on in vitro data. Simulated dopamine strongly enhanced high, delay-type activity but not low, spontaneous activity in the model network. Furthermore, the strength of an afferent stimulation needed to disrupt delay-type activity increased with the magnitude of the dopamine-induced shifts in network parameters, making the currently active representation much more stable. Stability could be increased by dopamine-induced enhancements of the persistent Na+ and N-methyl-d-aspartate (NMDA) conductances. Stability also was enhanced by a reduction in AMPA conductances. The increase in GABAA conductances that occurs after stimulation of dopaminergic D1 receptors was necessary in this context to prevent uncontrolled, spontaneous switches into high-activity states (i.e., spontaneous activation of task-irrelevant representations). In conclusion, the dopamine-induced changes in the biophysical properties of intrinsic ionic and synaptic conductances conjointly acted to highly increase stability of activated representations in PFC networks and at the same time retain control over network behavior and thus preserve its ability to adequately respond to task-related stimuli. Predictions of the model can be tested in vivo by locally applying specific D1 receptor, NMDA, or GABAA antagonists while recording from PFC neurons in delayed reaction-type tasks with interfering stimuli.
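The robustness experiment in this abstract suggests a compact numerical caricature: place a bistable rate unit (the same toy model as in the sketch further above; all parameters are assumptions, not taken from the paper's multicompartment model) in its memory state, deliver a brief inhibitory distractor pulse, and search for the smallest amplitude that erases the memory. A D1-like gain increase should raise that threshold.

```python
# Toy version of the distractor test: start a bistable rate unit in its
# high (memory) state, apply a brief inhibitory pulse, and scan pulse
# amplitudes for the smallest one that knocks the unit into the low state.
import numpy as np

W, THETA = 3.0, 1.5  # same hypothetical weight and threshold as above

def f(x, gain):
    """Sigmoidal activation; 'gain' stands in for D1-receptor modulation."""
    return 1.0 / (1.0 + np.exp(-gain * (x - THETA)))

def run(gain, pulse, dt=0.01, t_end=30.0, t_on=10.0, t_off=11.0):
    r = 0.95  # start near the high (memory) attractor
    for step in range(int(t_end / dt)):
        t = step * dt
        inp = -pulse if t_on <= t < t_off else 0.0  # brief inhibitory distractor
        r += dt * (-r + f(W * r + inp, gain))
    return r  # final rate; < 0.5 means the memory was destroyed

for gain in (2.0, 4.0):  # low vs. high D1-like gain
    thresh = next(a for a in np.arange(0.0, 10.0, 0.05)
                  if run(gain, a) < 0.5)
    print(f"gain={gain}: minimal disruptive distractor amplitude ≈ {thresh:.2f}")
```

Under these assumptions the higher-gain unit tolerates a roughly threefold stronger distractor, a one-unit analogue of the network finding that dopamine-shifted parameters make the active representation harder to disrupt.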
Durstewitz, D; Seamans, J K; Sejnowski, T J Neurocomputational models of working memory. Journal Article Nature Neuroscience, 3 Suppl (November), pp. 1184–1191, 2000, ISSN: 1097-6256. @article{Durstewitz2000, title = {Neurocomputational models of working memory.}, author = {D Durstewitz and J K Seamans and T J Sejnowski}, doi = {10.1038/81460}, issn = {1097-6256}, year = {2000}, date = {2000-01-01}, journal = {Nature Neuroscience}, volume = {3 Suppl}, number = {November}, pages = {1184--1191}, abstract = {During working memory tasks, the firing rates of single neurons recorded in behaving monkeys remain elevated without external cues. Modeling studies have explored different mechanisms that could underlie this selective persistent activity, including recurrent excitation within cell assemblies, synfire chains and single-cell bistability. The models show how sustained activity can be stable in the presence of noise and distractors, how different synaptic and voltage-gated conductances contribute to persistent activity, how neuromodulation could influence its robustness, how completely novel items could be maintained, and how continuous attractor states might be achieved. More work is needed to address the full repertoire of neural dynamics observed during working memory tasks.}, keywords = {}, pubstate = {published}, tppubtype = {article} } During working memory tasks, the firing rates of single neurons recorded in behaving monkeys remain elevated without external cues. Modeling studies have explored different mechanisms that could underlie this selective persistent activity, including recurrent excitation within cell assemblies, synfire chains and single-cell bistability. The models show how sustained activity can be stable in the presence of noise and distractors, how different synaptic and voltage-gated conductances contribute to persistent activity, how neuromodulation could influence its robustness, how completely novel items could be maintained, and how continuous attractor states might be achieved. More work is needed to address the full repertoire of neural dynamics observed during working memory tasks.
1999
Durstewitz, D; Kröner, S; Güntürkün, O The dopaminergic innervation of the avian telencephalon Journal Article Progress in Neurobiology, 1999. @article{Durstewitz1999b, title = {The dopaminergic innervation of the avian telencephalon}, author = {D. Durstewitz and S. Kröner and O. Güntürkün}, url = {https://pubmed.ncbi.nlm.nih.gov/10463794/}, doi = {10.1016/s0301-0082(98)00100-2}, year = {1999}, date = {1999-10-01}, journal = {Progress in Neurobiology}, abstract = {The present review provides an overview of the distribution of dopaminergic fibers and dopaminoceptive elements within the avian telencephalon, the possible interactions of dopamine (DA) with other biochemically identified systems as revealed by immunocytochemistry, and the involvement of DA in behavioral processes in birds. Primary sensory structures are largely devoid of dopaminergic fibers, DA receptors and the D1-related phosphoprotein DARPP-32, while all these dopaminergic markers gradually increase in density from the secondary sensory to the multimodal association and the limbic and motor output areas. Structures of the avian basal ganglia are most densely innervated but, in contrast to mammals, show a higher D2 than D1 receptor density. In most of the remaining telencephalon D1 receptors clearly outnumber D2 receptors. Dopaminergic fibers in the avian telencephalon often show a peculiar arrangement where fibers coil around the somata and proximal dendrites of neurons like baskets, probably providing them with a massive dopaminergic input. Basket-like innervation of DARPP-32-positive neurons seems to be most prominent in the multimodal association areas. Taken together, these anatomical findings indicate a specific role of DA in higher order learning and sensory-motor processes, while primary sensory processes are less affected. This conclusion is supported by behavioral findings which show that in birds, as in mammals, DA is specifically involved in sensory-motor integration, attention and arousal, learning and working memory. Thus, despite considerable differences in the anatomical organization of the avian and mammalian forebrain, the organization of the dopaminergic system and its behavioral functions are very similar in birds and mammals.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The present review provides an overview of the distribution of dopaminergic fibers and dopaminoceptive elements within the avian telencephalon, the possible interactions of dopamine (DA) with other biochemically identified systems as revealed by immunocytochemistry, and the involvement of DA in behavioral processes in birds. Primary sensory structures are largely devoid of dopaminergic fibers, DA receptors and the D1-related phosphoprotein DARPP-32, while all these dopaminergic markers gradually increase in density from the secondary sensory to the multimodal association and the limbic and motor output areas. Structures of the avian basal ganglia are most densely innervated but, in contrast to mammals, show a higher D2 than D1 receptor density. In most of the remaining telencephalon D1 receptors clearly outnumber D2 receptors. Dopaminergic fibers in the avian telencephalon often show a peculiar arrangement where fibers coil around the somata and proximal dendrites of neurons like baskets, probably providing them with a massive dopaminergic input. Basket-like innervation of DARPP-32-positive neurons seems to be most prominent in the multimodal association areas. Taken together, these anatomical findings indicate a specific role of DA in higher order learning and sensory-motor processes, while primary sensory processes are less affected. This conclusion is supported by behavioral findings which show that in birds, as in mammals, DA is specifically involved in sensory-motor integration, attention and arousal, learning and working memory. Thus, despite considerable differences in the anatomical organization of the avian and mammalian forebrain, the organization of the dopaminergic system and its behavioral functions are very similar in birds and mammals.
Durstewitz, Daniel; Kelc, Marian; Güntürkün, Onur A neurocomputational theory of the dopaminergic modulation of working memory functions Journal Article Journal of Neuroscience, 19 (7), 1999. @article{Durstewitz1999, title = {A neurocomputational theory of the dopaminergic modulation of working memory functions}, author = {Daniel Durstewitz and Marian Kelc and Onur Güntürkün}, url = {https://doi.org/10.1523/JNEUROSCI.19-07-02807.1999}, doi = {10.1523/JNEUROSCI.19-07-02807.1999}, year = {1999}, date = {1999-04-01}, journal = {Journal of Neuroscience}, volume = {19}, number = {7}, abstract = {The dopaminergic modulation of neural activity in the prefrontal cortex (PFC) is essential for working memory. Delay-activity in the PFC in working memory tasks persists even if interfering stimuli intervene between the presentation of the sample and the target stimulus. Here, the hypothesis is put forward that the functional role of dopamine in working memory processing is to stabilize active neural representations in the PFC network and thereby to protect goal-related delay-activity against interfering stimuli. To test this hypothesis, we examined the reported dopamine-induced changes in several biophysical properties of PFC neurons to determine whether they could fulfill this function. An attractor network model consisting of model neurons was devised in which the empirically observed effects of dopamine on synaptic and voltage-gated membrane conductances could be represented in a biophysically realistic manner. In the model, the dopamine-induced enhancement of the persistent Na+ and reduction of the slowly inactivating K+ current increased firing of the delay-active neurons, thereby increasing inhibitory feedback and thus reducing activity of the “background” neurons. Furthermore, the dopamine-induced reduction of EPSP sizes and a dendritic Ca2+ current diminished the impact of intervening stimuli on current network activity. In this manner, dopaminergic effects indeed acted to stabilize current delay-activity. Working memory deficits observed after supranormal D1-receptor stimulation could also be explained within this framework. Thus, the model offers a mechanistic explanation for the behavioral deficits observed after blockade or after supranormal stimulation of dopamine receptors in the PFC and, in addition, makes some specific empirical predictions.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The dopaminergic modulation of neural activity in the prefrontal cortex (PFC) is essential for working memory. Delay-activity in the PFC in working memory tasks persists even if interfering stimuli intervene between the presentation of the sample and the target stimulus. Here, the hypothesis is put forward that the functional role of dopamine in working memory processing is to stabilize active neural representations in the PFC network and thereby to protect goal-related delay-activity against interfering stimuli. To test this hypothesis, we examined the reported dopamine-induced changes in several biophysical properties of PFC neurons to determine whether they could fulfill this function. An attractor network model consisting of model neurons was devised in which the empirically observed effects of dopamine on synaptic and voltage-gated membrane conductances could be represented in a biophysically realistic manner. In the model, the dopamine-induced enhancement of the persistent Na+ and reduction of the slowly inactivating K+ current increased firing of the delay-active neurons, thereby increasing inhibitory feedback and thus reducing activity of the “background” neurons. Furthermore, the dopamine-induced reduction of EPSP sizes and a dendritic Ca2+ current diminished the impact of intervening stimuli on current network activity. In this manner, dopaminergic effects indeed acted to stabilize current delay-activity. Working memory deficits observed after supranormal D1-receptor stimulation could also be explained within this framework. Thus, the model offers a mechanistic explanation for the behavioral deficits observed after blockade or after supranormal stimulation of dopamine receptors in the PFC and, in addition, makes some specific empirical predictions.
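The circuit mechanism summarized in this abstract (stronger delay-activity recruiting feedback inhibition that suppresses background neurons) can be caricatured with three rate units. The following is an illustration under assumed parameters, not the paper's biophysically detailed model; the variable `da_drive` lumps the persistent-Na+ enhancement and K+-current reduction into extra effective drive to the cue-selected assembly.

```python
# Schematic two-population sketch: a delay-active assembly and a "background"
# assembly share a feedback inhibitory pool. A D1-like excitability boost to
# the delay-active assembly raises its rate and, via the inhibitory pool,
# suppresses background activity. All weights and inputs are illustrative.

W_REC, W_EI, W_IE = 0.9, 0.6, 0.5  # recurrent, E->I, and I->E couplings
I_ACT, I_BG = 0.4, 0.35            # external drive to the two assemblies

def relu(x):
    return max(x, 0.0)

def settle(da_drive, steps=5000, dt=0.01):
    r_act, r_bg, r_inh = 1.0, 0.5, 0.0  # delay-active, background, inhibitory
    for _ in range(steps):
        inh = W_IE * r_inh  # common inhibitory feedback to both assemblies
        d_act = -r_act + relu(W_REC * r_act - inh + I_ACT + da_drive)
        d_bg = -r_bg + relu(W_REC * r_bg - inh + I_BG)
        d_inh = -r_inh + W_EI * (r_act + r_bg)
        r_act += dt * d_act
        r_bg += dt * d_bg
        r_inh += dt * d_inh
    return r_act, r_bg

for da in (0.0, 0.3):  # without vs. with the D1-like excitability boost
    act, bg = settle(da)
    print(f"D1-like drive {da}: delay-active rate {act:.2f}, background rate {bg:.2f}")
```

With these toy weights, turning on the D1-like drive roughly doubles the delay-active rate (from about 0.8 to 1.75 in arbitrary units) and silences the background assembly through the shared inhibitory pool, mirroring the qualitative mechanism described above.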
Seamans, J K; Durstewitz, D; Sejnowski, T State-dependence of dopamine D1 receptor modulation in prefrontal cortex neurons Journal Article Proceedings of the 6th Joint Symposium on Neural Computation, 9, pp. 128-135, 1999. @article{Seamans1999, title = {State-dependence of dopamine D1 receptor modulation in prefrontal cortex neurons}, author = {J. K. Seamans and D. Durstewitz and T. Sejnowski}, url = {https://papers.cnl.salk.edu/PDFs/State-Dependence%20of%20Dopamine%20D1%20Receptor%20Modulation%20in%20Prefrontal%20Cortex%20Neurons%201999-3575.pdf}, year = {1999}, date = {1999-01-01}, journal = {Proceedings of the 6th Joint Symposium on Neural Computation}, volume = {9}, pages = {128-135}, abstract = {Dopamine makes an important yet poorly understood contribution to normal and pathological processes mediated by the prefrontal cortex. The present study proposes a hypothesis for the cellular actions of dopamine D1 receptors on prefrontal cortex neurons based on in vitro recordings and computational models. In deep layer V prefrontal cortex neurons, we show that D1 receptor stimulation: 1) increased evoked firing from rest, 2) shifted the activation of a persistent Na+ current and slowed its inactivation, 3) enhanced NMDA-mediated EPSCs, and 4) enhanced GABAA IPSPs over many minutes. These changes had state-dependent effects on networks of realistically modeled prefrontal cortex neurons: spontaneous firing driven by low frequency inputs was decreased, while firing evoked by progressively stronger excitatory drive was enhanced and sustained following offset of an input. These findings provide insights into the paradoxical nature of dopamine's actions in the prefrontal cortex, and suggest how dopamine may modulate working memory mechanisms in networks of prefrontal neurons.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Dopamine makes an important yet poorly understood contribution to normal and pathological processes mediated by the prefrontal cortex. The present study proposes a hypothesis for the cellular actions of dopamine D1 receptors on prefrontal cortex neurons based on in vitro recordings and computational models. In deep layer V prefrontal cortex neurons, we show that D1 receptor stimulation: 1) increased evoked firing from rest, 2) shifted the activation of a persistent Na+ current and slowed its inactivation, 3) enhanced NMDA-mediated EPSCs, and 4) enhanced GABAA IPSPs over many minutes. These changes had state-dependent effects on networks of realistically modeled prefrontal cortex neurons: spontaneous firing driven by low frequency inputs was decreased, while firing evoked by progressively stronger excitatory drive was enhanced and sustained following offset of an input. These findings provide insights into the paradoxical nature of dopamine's actions in the prefrontal cortex, and suggest how dopamine may modulate working memory mechanisms in networks of prefrontal neurons.
1998
Durstewitz, D; Kröner, S; Hemmings Jr, H C; Güntürkün, O The dopaminergic innervation of the pigeon telencephalon: distribution of DARPP-32 and co-occurrence with glutamate decarboxylase and tyrosine hydroxylase Journal Article Neuroscience, 1998. @article{Durstewitz1998, title = {The dopaminergic innervation of the pigeon telencephalon: distribution of DARPP-32 and co-occurrence with glutamate decarboxylase and tyrosine hydroxylase}, author = {D. Durstewitz and S. Kröner and H. C. Hemmings Jr and O. Güntürkün}, url = {https://pubmed.ncbi.nlm.nih.gov/9483560/}, doi = {10.1016/s0306-4522(97)00450-8}, year = {1998}, date = {1998-04-01}, journal = {Neuroscience}, abstract = {Dopaminergic axons arising from midbrain nuclei innervate the mammalian and avian telencephalon with heterogeneous regional and laminar distributions. In primate, rodent, and avian species, the neuromodulator dopamine is low or almost absent in most primary sensory areas and is most abundant in the striatal parts of the basal ganglia. Furthermore, dopaminergic fibres are present in most limbic and associative structures. Herein, the distribution of DARPP-32, a phosphoprotein related to the dopamine D1-receptor, was investigated in the pigeon telencephalon by immunocytochemical techniques. Furthermore, co-occurrence of DARPP-32-positive perikarya with tyrosine hydroxylase-positive pericellular axonal "baskets" or glutamate decarboxylase-positive neurons, as well as co-occurrence of tyrosine hydroxylase and glutamate decarboxylase were examined. Specificity of the anti-DARPP-32 monoclonal antibody in pigeon brain was determined by immunoblotting. The distribution of DARPP-32 shared important features with the distribution of D1-receptors and dopaminergic fibres in the pigeon telencephalon as described previously. In particular, DARPP-32 was highly abundant in the avian basal ganglia, where a high percentage of neurons were labelled in the "striatal" parts (paleostriatum augmentatum, lobus parolfactorius), while only neuropil staining was observed in the "pallidal" portions (paleostriatum primitivum). In contrast, DARPP-32 was almost absent or present in comparatively lower concentrations in most primary sensory areas. Secondary sensory and tertiary areas of the neostriatum contained numbers of labelled neurons comparable to that of the basal ganglia and intermediate levels of neuropil staining. Approximately up to one-third of DARPP-32-positive neurons received a basket-type innervation from tyrosine hydroxylase-positive fibres in the lateral and caudal neostriatum, but only about half as many did in the medial and frontal neostriatum, and even less so in the hyperstriatum. No case of colocalization of glutamate decarboxylase and DARPP-32 and no co-occurrence of glutamate decarboxylase-positive neurons and tyrosine hydroxylase-basket-like structures could be detected out of more than 2000 glutamate decarboxylase-positive neurons examined, although the high DARPP-32 and high tyrosine hydroxylase staining density hampered this analysis in the basal ganglia. In conclusion, the pigeon dopaminergic system seems to be organized similarly to that of mammals. Apparently, in the telencephalon, dopamine has its primary function in higher level sensory, associative and motor processes, since primary areas showed only weak or no anatomical cues of dopaminergic modulation. Dopamine might exert its effects primarily by modulating the physiological properties of non-GABAergic and therefore presumably excitatory units.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Dopaminergic axons arising from midbrain nuclei innervate the mammalian and avian telencephalon with heterogeneous regional and laminar distributions. In primate, rodent, and avian species, the neuromodulator dopamine is low or almost absent in most primary sensory areas and is most abundant in the striatal parts of the basal ganglia. Furthermore, dopaminergic fibres are present in most limbic and associative structures. Herein, the distribution of DARPP-32, a phosphoprotein related to the dopamine D1-receptor, was investigated in the pigeon telencephalon by immunocytochemical techniques. Furthermore, co-occurrence of DARPP-32-positive perikarya with tyrosine hydroxylase-positive pericellular axonal "baskets" or glutamate decarboxylase-positive neurons, as well as co-occurrence of tyrosine hydroxylase and glutamate decarboxylase were examined. Specificity of the anti-DARPP-32 monoclonal antibody in pigeon brain was determined by immunoblotting. The distribution of DARPP-32 shared important features with the distribution of D1-receptors and dopaminergic fibres in the pigeon telencephalon as described previously. In particular, DARPP-32 was highly abundant in the avian basal ganglia, where a high percentage of neurons were labelled in the "striatal" parts (paleostriatum augmentatum, lobus parolfactorius), while only neuropil staining was observed in the "pallidal" portions (paleostriatum primitivum). In contrast, DARPP-32 was almost absent or present in comparatively lower concentrations in most primary sensory areas. Secondary sensory and tertiary areas of the neostriatum contained numbers of labelled neurons comparable to that of the basal ganglia and intermediate levels of neuropil staining. Approximately up to one-third of DARPP-32-positive neurons received a basket-type innervation from tyrosine hydroxylase-positive fibres in the lateral and caudal neostriatum, but only about half as many did in the medial and frontal neostriatum, and even less so in the hyperstriatum. No case of colocalization of glutamate decarboxylase and DARPP-32 and no co-occurrence of glutamate decarboxylase-positive neurons and tyrosine hydroxylase-basket-like structures could be detected out of more than 2000 glutamate decarboxylase-positive neurons examined, although the high DARPP-32 and high tyrosine hydroxylase staining density hampered this analysis in the basal ganglia. In conclusion, the pigeon dopaminergic system seems to be organized similarly to that of mammals. Apparently, in the telencephalon, dopamine has its primary function in higher level sensory, associative and motor processes, since primary areas showed only weak or no anatomical cues of dopaminergic modulation. Dopamine might exert its effects primarily by modulating the physiological properties of non-GABAergic and therefore presumably excitatory units.