Brain Network Dynamics using Deep Learning
DEEP LEARNING MODELS
We develop models using tools from deep learning, such as recurrent neural networks (RNNs). These models can overcome some of the limitations of Hidden Markov Models (HMMs), such as short memory (the Markovian constraint: the next state depends only on the current state) and the mutual exclusivity of states (only one state can be active at each time point).
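The contrast between the two descriptions can be sketched in a toy example. This is a minimal NumPy illustration, not either model's actual implementation: the transition probabilities, weight matrices, and dimensions are all arbitrary choices for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_samples = 3, 5

# An HMM assigns each time point to exactly one state (mutual exclusivity),
# and the next state depends only on the current one (Markovian constraint).
transition = np.full((n_states, n_states), 0.1)
np.fill_diagonal(transition, 0.8)
hmm_states = [0]
for _ in range(n_samples - 1):
    hmm_states.append(rng.choice(n_states, p=transition[hmm_states[-1]]))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# An RNN-based model can instead output a soft mixture over states at each
# time point, conditioned on the full history carried in its hidden state.
hidden = np.zeros(4)
W_h = rng.normal(size=(4, 4)) * 0.5   # illustrative recurrent weights
W_o = rng.normal(size=(n_states, 4))  # illustrative output weights
mixtures = []
for _ in range(n_samples):
    hidden = np.tanh(W_h @ hidden + rng.normal(size=4))
    mixtures.append(softmax(W_o @ hidden))

print(hmm_states)       # one state index per time point
print(mixtures[0])      # partial membership of all states at once
```

The HMM trajectory is a sequence of single state indices, whereas each RNN output is a probability vector in which several states can be simultaneously active.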
A model we are developing is DYnamic NEtwork MOdelling (DyNeMo). This model is inspired by a popular deep learning framework known as the variational autoencoder. DyNeMo uses amortised inference to learn a hidden state description of neuroimaging data. Each state represents a large-scale brain network. The temporal dynamics of state switching are captured by an RNN [2].
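The key idea behind amortised inference is that a single shared network maps the data to approximate posterior parameters, rather than optimising separate variational parameters for every time point. The sketch below shows the general pattern under strong simplifications; the network architecture, weight scales, and the softmax mixing step are illustrative assumptions, not DyNeMo's actual design.

```python
import numpy as np

rng = np.random.default_rng(1)
n_channels, n_states, hidden_dim = 8, 3, 16

# One shared set of inference-network weights is reused for all time points
# (this reuse is what makes the inference "amortised").
W_in = rng.normal(size=(hidden_dim, n_channels)) * 0.1
W_rec = rng.normal(size=(hidden_dim, hidden_dim)) * 0.1
W_mu = rng.normal(size=(n_states, hidden_dim)) * 0.1
W_logvar = rng.normal(size=(n_states, hidden_dim)) * 0.1

def infer(data):
    """Map data to posterior means/log-variances of per-time-point state logits."""
    h = np.zeros(hidden_dim)
    mus, logvars = [], []
    for x_t in data:
        h = np.tanh(W_in @ x_t + W_rec @ h)  # RNN hidden state carries history
        mus.append(W_mu @ h)
        logvars.append(W_logvar @ h)
    return np.array(mus), np.array(logvars)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

data = rng.normal(size=(100, n_channels))  # stand-in for neuroimaging data
mu, logvar = infer(data)

# Reparameterisation trick (as in a variational autoencoder): sample logits,
# then turn them into state mixing coefficients.
logits = mu + np.exp(0.5 * logvar) * rng.normal(size=mu.shape)
alpha = np.apply_along_axis(softmax, 1, logits)
```

Because the same weights serve every time point (and every subject), inference on new data is a single forward pass rather than a fresh optimisation.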
Another model we are developing is the Multi-dynamic Adversarial Generator-Encoder (MAGE). This model uses generative adversarial networks to study functional neuroimaging data. MAGE also learns a hidden state description of the data. However, it can also capture different types of state dynamics simultaneously [1].
One puzzling aspect of research into time-varying functional connectivity (FC) has been that FC appears remarkably stable over time when estimated with techniques such as sliding window correlations or, to a lesser extent, the HMM. Using MAGE, we show that this apparent stability arises because dynamics in the FC are confounded by dynamics in the mean activity levels. MAGE's multi-dynamic design allows changes in the FC and changes in the mean activity to occur at different times from each other, revealing much stronger changes in FC over time [1].
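The confound can be demonstrated with a toy simulation (this is not MAGE itself; the two channels, switching timescales, and mean shift size are all made-up numbers). Two FC states with different true correlations are generated, while the mean switches on its own, faster, timescale; a sliding window that straddles a mean switch reports strong correlation even where the true FC is zero.

```python
import numpy as np

rng = np.random.default_rng(2)
n_samples, win = 2000, 100

# Two covariance (FC) states: strongly correlated vs uncorrelated channels.
cov_a = np.array([[1.0, 0.8], [0.8, 1.0]])
cov_b = np.array([[1.0, 0.0], [0.0, 1.0]])

# FC switches halfway through; the mean switches independently, every 250 samples.
fc_state = np.repeat([0, 1], n_samples // 2)
mean_state = (np.arange(n_samples) // 250) % 2

data = np.empty((n_samples, 2))
for t in range(n_samples):
    cov = cov_a if fc_state[t] == 0 else cov_b
    mean = np.array([3.0, 3.0]) * mean_state[t]  # shared mean shift on both channels
    data[t] = rng.multivariate_normal(mean, cov)

def window_corr(x, start):
    """Correlation between the two channels within one sliding window."""
    w = x[start:start + win]
    return np.corrcoef(w.T)[0, 1]

corr_fc_a = window_corr(data, 50)     # inside FC state A (true r = 0.8)
corr_fc_b = window_corr(data, 1500)   # inside FC state B (true r = 0.0)
corr_mixed = window_corr(data, 1200)  # FC state B, but window straddles a mean switch
```

Here `corr_mixed` is substantially larger than `corr_fc_b` even though both windows come from the zero-correlation FC state: the shared jump in the mean masquerades as connectivity. Modelling the mean and FC dynamics separately removes this confound.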
References
- [1] Pervaiz U, Vidaurre D, Gohil C, Smith S, Woolrich M. Multi-dynamic modelling reveals strongly time-varying resting fMRI correlations. Medical Image Analysis 2022. https://doi.org/10.1016/j.media.2022.102366
- [2] Gohil C, Roberts E, Timms R, Skates A, Higgins C, Quinn A, Pervaiz U, van Amersfoort J, Notin P, Gal Y, Adaszewski S, Woolrich M. Mixtures of large-scale dynamic functional brain network modes. bioRxiv 2022. https://doi.org/10.1101/2022.05.03.490453