Machine Learning for Neurophysiological and Medical Imaging
In this thesis, I present investigations in two domains: (1) machine learning models that provide insights and aid research in Brain-Computer Interface (BCI) systems, and (2) vulnerabilities of deep learning models in medical computer vision, namely adversarial attacks and adversarial training as a defense technique. The goal of human-machine interface systems is to provide a novel communication channel with external devices, either to assist disabled individuals or to enhance human experiences.

In the first study, a BCI is designed to accurately decode mental states to control external devices. Specifically, I provide an inference-based solution by proposing a likelihood-ratio test to asynchronously detect 'idle' versus 'control' brain states. The second study investigates predictors, derived from electroencephalography (EEG) signals, of human cognitive performance under varying indoor room temperatures. The prediction model used in this study provides additional insight into the brain activation patterns that contribute to predicting low and high cognitive performance.

Finally, I investigate adversarial attacks on medical deep learning models. Recent studies have exposed the vulnerability of deep learning models to adversarial attacks that cause trained models to misclassify. In this study, I present the impact of first-order adversarial attacks on deep learning models of varying capacities, which is especially relevant because larger deep learning models over-parameterize medical classification problems. To alleviate the adverse effects of adversarial attacks, we train adversarially robust models and examine how these models learn to classify correctly in the presence of such attacks.
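The likelihood-ratio test for idle-versus-control detection can be sketched as follows. This is a minimal illustration, not the thesis model: it assumes a hypothetical one-dimensional EEG feature (e.g., a band-power value per window) with Gaussian class-conditional densities, and a decision threshold of zero.

```python
import numpy as np

# Hypothetical 1-D EEG feature per window; the Gaussian class-conditional
# models and the synthetic data below are illustrative assumptions only.
rng = np.random.default_rng(0)
idle_train = rng.normal(loc=0.0, scale=1.0, size=500)      # 'idle' training features
control_train = rng.normal(loc=1.5, scale=1.0, size=500)   # 'control' training features

# Fit a Gaussian density to each brain state.
mu0, sd0 = idle_train.mean(), idle_train.std()
mu1, sd1 = control_train.mean(), control_train.std()

def gauss_logpdf(x, mu, sd):
    """Log-density of a univariate Gaussian."""
    return -0.5 * ((x - mu) / sd) ** 2 - np.log(sd * np.sqrt(2.0 * np.pi))

def log_likelihood_ratio(x):
    """log p(x | control) - log p(x | idle)."""
    return gauss_logpdf(x, mu1, sd1) - gauss_logpdf(x, mu0, sd0)

def detect(x, threshold=0.0):
    """Declare 'control' only when the evidence exceeds the threshold;
    otherwise stay in 'idle', enabling asynchronous (self-paced) operation."""
    return "control" if log_likelihood_ratio(x) > threshold else "idle"
```

In an asynchronous BCI, `detect` would be applied to each incoming feature window, with the threshold tuned to trade false activations against missed commands.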
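The mechanism of a first-order attack and of adversarial training can be sketched with a toy model. This is only an illustration of the technique under stated assumptions: an FGSM-style signed-gradient attack on a NumPy logistic-regression classifier with synthetic two-dimensional data, whereas the thesis experiments concern deep networks on medical images.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linearly separable data standing in for a classification task
# (purely illustrative; not medical imaging data).
X = np.vstack([rng.normal(-1.0, 1.0, size=(100, 2)),
               rng.normal(+1.0, 1.0, size=(100, 2))])
y = np.concatenate([np.zeros(100), np.ones(100)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def input_grad(X, y, w, b):
    """Gradient of the logistic loss with respect to the *inputs*."""
    return (sigmoid(X @ w + b) - y)[:, None] * w[None, :]

def fgsm(X, y, w, b, eps):
    """First-order attack: one signed input-gradient step of size eps."""
    return X + eps * np.sign(input_grad(X, y, w, b))

def train(X, y, eps=0.0, epochs=300, lr=0.1):
    """eps > 0 gives adversarial training: fit on attacked inputs each step."""
    w, b = np.zeros(2), 0.0
    for _ in range(epochs):
        Xt = fgsm(X, y, w, b, eps) if eps > 0 else X
        p = sigmoid(Xt @ w + b)
        w -= lr * Xt.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def accuracy(X, y, w, b):
    return np.mean((sigmoid(X @ w + b) > 0.5) == y)

w_std, b_std = train(X, y, eps=0.0)        # standard training
w_rob, b_rob = train(X, y, eps=0.3)        # adversarial training
X_adv = fgsm(X, y, w_std, b_std, eps=0.3)  # attack the standard model
```

Comparing `accuracy` on `X` versus `X_adv` exposes the drop a first-order attack induces in the standardly trained model; the adversarially trained model is fit against the attack it will face at test time, which is the defense examined in the thesis.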