Modelling and decoding noninvasive brain signals with deep learning
I am a third-year DPhil student supervised by Mark Woolrich at the Oxford Centre for Human Brain Activity. My research sits at the intersection of machine learning and neuroscience. I analyse, model, and decode EEG and MEG data using standard signal processing and machine learning techniques, and I also develop novel deep learning approaches for such data. I am especially interested in decoding language (e.g. reading / inner speech) and in potential applications in brain-computer interfaces (BCIs). I am also increasingly interested in novel invasive and noninvasive BCI technologies.
I completed an M.S. in Computer Science and a B.S. in Mechatronics at the Budapest University of Technology. I was involved in dialogue modelling research for three years under the supervision of Gabor Recski, and I also did computer vision research at Robert Bosch.
My research spans several topics:
1. Improving supervised/unsupervised group-level models of M/EEG data by dealing with between-subject variability.
2. Analysing how deep learning can be leveraged for decoding M/EEG data primarily from visual tasks.
3. Developing deep (transfer) learning models for simultaneous forecasting and stimulus decoding from M/EEG data.
4. Collecting a large number of trials of both EEG and MEG data from a few participants performing a reading and inner speech task, and analysing these data with the decoding methods mentioned above.
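To give a flavour of what stimulus decoding from M/EEG data looks like in practice, here is a minimal, self-contained sketch: synthetic trials (not real recordings) with a small simulated evoked response on a few channels, decoded with a plain logistic-regression classifier trained by gradient descent. All shapes, effect sizes, and hyperparameters here are illustrative assumptions, not the pipeline used in my research.

```python
import numpy as np

# Illustrative synthetic "M/EEG" data: n_trials epochs of shape
# (n_channels, n_times). Class 1 trials get a small evoked response
# added to a subset of channels (all values are assumptions).
rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 200, 16, 50

X = rng.standard_normal((n_trials, n_channels, n_times))
y = rng.integers(0, 2, n_trials)
evoked = np.zeros((n_channels, n_times))
evoked[:4, 20:35] = 1.0              # hypothetical evoked response
X[y == 1] += 0.5 * evoked

# Flatten each trial into a feature vector and z-score each feature.
F = X.reshape(n_trials, -1)
F = (F - F.mean(axis=0)) / (F.std(axis=0) + 1e-8)

# Logistic regression trained with full-batch gradient descent.
w = np.zeros(F.shape[1])
b = 0.0
lr = 0.5
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))   # predicted P(class 1)
    w -= lr * (F.T @ (p - y)) / n_trials
    b -= lr * (p - y).mean()

pred = (1.0 / (1.0 + np.exp(-(F @ w + b)))) > 0.5
acc = (pred == y).mean()
print(f"training accuracy: {acc:.2f}")
```

In real experiments one would of course evaluate on held-out trials (and typically held-out subjects, which is where between-subject variability becomes the central difficulty), and a deep network would replace the linear decoder.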