Overview of the Variational RNN Auto-Decoder (VRAD).

DEEP LEARNING MODELS

We develop models using tools from deep learning, such as recurrent neural networks (RNNs). These models can overcome some of the limitations of Hidden Markov Models (HMMs), such as short memory (the Markovian constraint) and the mutual exclusivity of states.
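The contrast between the two model classes can be illustrated with a minimal numpy sketch (all weights and the transition matrix below are hypothetical, chosen only for illustration): an HMM's next state depends solely on the current state and is one-hot, while an RNN's hidden vector mixes the whole input history and can assign soft weights to all states at once.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_steps = 3, 5

# HMM: the next state depends only on the current one (Markovian),
# and exactly one state is active at a time (mutual exclusivity).
P = np.array([[0.90, 0.05, 0.05],
              [0.05, 0.90, 0.05],
              [0.05, 0.05, 0.90]])   # hypothetical transition matrix
state = 0
hmm_path = [state]
for _ in range(n_steps - 1):
    state = rng.choice(n_states, p=P[state])
    hmm_path.append(state)

# RNN: the hidden vector h summarises the entire input history, and a
# softmax readout gives soft, non-exclusive weights over all states.
W_h = rng.standard_normal((4, 4)) * 0.5
W_x = rng.standard_normal((4, n_states)) * 0.5
W_out = rng.standard_normal((n_states, 4))
h = np.zeros(4)
x = np.eye(n_states)[hmm_path]        # one-hot inputs, for illustration
for t in range(n_steps):
    h = np.tanh(W_h @ h + W_x @ x[t]) # h mixes all past inputs, not just the last
logits = W_out @ h
weights = np.exp(logits) / np.exp(logits).sum()  # soft state weights
```

Here `weights` is a full probability vector over states at the final time point, rather than a single active state.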

A model we are developing is the Variational RNN Auto-Decoder (VRAD). This model is inspired by a popular deep learning framework known as the variational autoencoder. VRAD uses amortised inference to learn a hidden state description of neuroimaging data. Each state represents a large-scale brain network. The temporal dynamics of state switches are captured by an RNN.
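A single forward pass of this kind of model can be sketched in numpy. This is not VRAD itself, only a toy illustration of the ingredients the paragraph names: an amortised inference network shared across time points, a reparameterised latent sample, soft state probabilities over brain networks, and an ELBO objective (a standard-normal prior stands in for the RNN temporal prior here; all shapes and weights are made up).

```python
import numpy as np

rng = np.random.default_rng(1)
n_channels, n_states, T = 8, 3, 20

X = rng.standard_normal((T, n_channels))   # toy stand-in for neuroimaging data

# Amortised inference: one shared network maps every time point to
# the parameters of its approximate posterior.
W_mu = rng.standard_normal((n_states, n_channels)) * 0.1
W_lv = rng.standard_normal((n_states, n_channels)) * 0.1
mu, log_var = X @ W_mu.T, X @ W_lv.T
z = mu + np.exp(0.5 * log_var) * rng.standard_normal(mu.shape)  # reparameterisation

# Soft state probabilities: each time point is a mixture over states,
# where each state represents a large-scale brain network.
alpha = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)

# Decoder: each state contributes its own (hypothetical) mean map.
state_means = rng.standard_normal((n_states, n_channels))
X_hat = alpha @ state_means                # reconstruction

# ELBO = reconstruction term minus KL to the prior (standard normal here,
# where VRAD would use an RNN-based temporal prior instead).
recon = -0.5 * ((X - X_hat) ** 2).sum()
kl = 0.5 * (np.exp(log_var) + mu ** 2 - 1.0 - log_var).sum()
elbo = recon - kl
```

Training would ascend `elbo` with respect to all weights; the amortisation is that `W_mu` and `W_lv` are shared across every time point rather than fitting a separate posterior per time point.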

Another model we are developing is the Multi-dynamic Adversarial Generator-Encoder (MAGE). This model uses generative adversarial networks to study functional neuroimaging data. MAGE also learns a hidden state description of the data. However, it can also capture different types of state dynamics simultaneously [1].

One puzzling aspect of research into time-varying functional connectivity (FC) has been that FC appears remarkably stable over time when estimated with techniques like sliding-window correlations or (to a lesser extent) the HMM. Using MAGE, we show that this apparent stability arises because dynamics in the FC are confounded with dynamics in the mean activity levels. MAGE's multi-dynamic ability allows changes in the FC and in the mean activity to occur at different times from each other, revealing much stronger changes in FC over time [1].
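The multi-dynamic idea can be sketched generatively (a toy illustration, not MAGE itself; the state time courses, means, and covariances below are all made up): two independent state sequences drive the data, one switching the mean activity and the other switching the FC, so the two never have to change at the same time.

```python
import numpy as np

rng = np.random.default_rng(2)
n_channels, n_states, T = 4, 2, 200

# Two independent hypothetical state time courses: one for the mean
# activity, one for the FC (covariance). A single-dynamic model would
# force both to switch together.
mean_state = rng.integers(n_states, size=T)
fc_state = rng.integers(n_states, size=T)

# Hypothetical per-state parameters.
state_means = np.stack([np.zeros(n_channels), np.full(n_channels, 2.0)])
A = rng.standard_normal((n_states, n_channels, n_channels))
state_covs = np.einsum('kij,klj->kil', A, A) + np.eye(n_channels)  # A A^T + I, SPD

# Generate data in which the mean and the covariance switch at
# different times from each other.
X = np.stack([
    rng.multivariate_normal(state_means[mean_state[t]], state_covs[fc_state[t]])
    for t in range(T)
])
```

A single-dynamic model fit to `X` would have to explain mean and FC switches with one state sequence, blurring the FC dynamics; allowing the two sequences to differ is what reveals the stronger FC changes described above.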

 

References

  1. Pervaiz U, Vidaurre D, Gohil C, Smith S, Woolrich M. Multi-dynamic Modelling Reveals Strongly Time-varying Resting fMRI Correlations. In submission.