Interpretable Deep Learning Algorithms for Cognitive Neuroscience and Human Behavior Research
Abstract
Cognitive neuroscientists and psychologists have long relied on neuropsychological measurements to study cognitive states and diagnose neurological diseases. However, the involuntary movements that accompany individual behaviors are often overlooked. Human speech involves complex cognitive processes that support verbal expression and comprehension, along with motor actions realized as muscle movements. Artificial intelligence (AI) research over the past decade shows that behavior, in the form of facial muscle activity, can reveal information about fleeting voluntary and involuntary motor system activity related to emotion, pain, and deception. Electroencephalogram (EEG) signals have likewise been paired with AI algorithms to predict and diagnose a range of neurological conditions, including speech disfluencies. However, studying either modality alone limits our understanding of the dynamics between cognitive states and the motor system. Moreover, AI algorithms are often treated as black boxes that offer no explanation of why a particular decision was made, and deep networks typically require large amounts of data to tune their parameters. Inspired by the interplay between brain and facial activity in adults who stutter (AWS), the need for more explainable algorithms in neuroscience research, and the need to learn from small amounts of data, we propose interpretable deep learning algorithms that bridge the gap between cognitive neuroscience and AI in the study of human behavior.