Developing Interpretable Deep Learning Models for Identifying Biological Functions From Biomedical Data
Abstract
This project builds interpretable deep learning models for biomedical research: models that biomedical researchers can readily interpret and understand. First, we characterize CHD2 patient-derived iPSC organoids from local field potential (LFP) signals by interpreting deep learning models. We show that deep learning models substantially outperform traditional machine learning approaches at distinguishing CHD2 patient-derived organoids from their control counterparts, and we propose two interpretation techniques that characterize the organoids in terms of frequency band-power features. Next, we develop a supervised, interpretation-centric deep learning model, the Geneset Neural Network (GS-NN), for phenotype prediction with functional interpretation of gene expression data. We also propose a novel technique, the adjusted ablation score, to quantitatively assess the interpretability of a deep learning model. We use GS-NN to predict cancer types from bulk RNA-seq data and surface protein levels from single-cell RNA-seq data. Lastly, we develop a transferable deep learning model that augments single-cell RNA-seq data with surface protein levels and interprets cellular functions; this model learns more robust gene expression features while predicting cell surface protein levels.
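The core idea behind a gene-set-constrained network can be sketched as follows. This is a minimal illustration, not the thesis's implementation: the gene sets, gene indices, weights, and the `ablation_importance` helper are all hypothetical, and the score shown is a simple stand-in for the adjusted ablation score described above. The key mechanism is a binary mask that restricts each first-layer hidden node to the genes in one gene set, so every hidden activation maps to a named biological function.

```python
import numpy as np

# Hypothetical gene sets (names and gene indices are illustrative only).
gene_sets = {
    "pathway_A": [0, 1, 2],
    "pathway_B": [2, 3],
    "pathway_C": [4, 5],
}
n_genes, n_sets = 6, len(gene_sets)

# Binary connectivity mask: gene i feeds gene-set node j
# only if gene i is a member of gene set j.
mask = np.zeros((n_genes, n_sets))
for j, members in enumerate(gene_sets.values()):
    mask[members, j] = 1.0

rng = np.random.default_rng(0)
W = rng.normal(size=(n_genes, n_sets))

def gene_set_layer(x):
    """First layer of a GS-NN-style model: weights outside
    gene-set membership are zeroed, so each hidden node
    corresponds to exactly one gene set."""
    return np.maximum(x @ (W * mask), 0.0)  # ReLU over gene-set nodes

def ablation_importance(x, j):
    """Illustrative ablation score: drop in output magnitude when
    gene-set node j is zeroed out (a stand-in for the thesis's
    adjusted ablation score, which is not reproduced here)."""
    full = gene_set_layer(x)
    ablated = full.copy()
    ablated[:, j] = 0.0
    return np.linalg.norm(full) - np.linalg.norm(ablated)

x = rng.normal(size=(1, n_genes))      # one expression profile
h = gene_set_layer(x)                  # shape (1, n_sets)
scores = [ablation_importance(x, j) for j in range(n_sets)]
```

Because each hidden node is tied to a gene set, ranking nodes by their ablation score yields a ranking of biological functions rather than of anonymous neurons, which is what makes this style of model interpretable to biomedical researchers.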