Title: Transformer for Gene Expression Modeling (T-GEM): An Interpretable Deep Learning Model for Gene Expression-Based Phenotype Predictions
Authors: Zhang, Ting-He; Hasib, Md Musaddaqul; Chiu, Yu-Chiao; Han, Zhi-Feng; Jin, Yu-Fang; Flores, Mario; Chen, Yidong; Huang, Yufei
Type: Article
Date Issued: 2022-09-29
Date Added to Repository: 2022-10-13
Citation: Cancers 14 (19): 4763 (2022)
URI: https://hdl.handle.net/20.500.12588/1136
License: Attribution 4.0 United States (https://creativecommons.org/licenses/by/4.0/)
Keywords: phenotypes prediction; interpretable deep learning; Transformer; cancer type prediction; immune cell type prediction

Abstract: Deep learning has been applied in precision oncology to address a variety of gene expression-based phenotype predictions. However, the unique characteristics of gene expression data challenge the computer vision-inspired design of popular deep learning (DL) models such as the Convolutional Neural Network (CNN) and call for interpretable DL models tailored to transcriptomics studies. To address the current challenges in developing an interpretable DL model for gene expression data, we propose a novel interpretable deep learning architecture called T-GEM, or Transformer for Gene Expression Modeling. We provided the detailed T-GEM model for modeling gene–gene interactions and demonstrated its utility for gene expression-based predictions of cancer-related phenotypes, including cancer type prediction and immune cell type classification. We carefully analyzed the learning mechanism of T-GEM and showed that the first layer has broader attention while higher layers focus more on phenotype-related genes. We also showed that T-GEM's self-attention could capture important biological functions associated with the predicted phenotypes. We further devised a method to extract the regulatory network that T-GEM learns by exploiting the attributions of self-attention weights for classification, and showed that the network hub genes were likely markers for the predicted phenotypes.
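The abstract describes a Transformer whose self-attention models gene–gene interactions and whose output drives phenotype classification. The following is a minimal, hypothetical sketch of that idea in PyTorch, not the authors' T-GEM implementation: genes are treated as tokens, scalar expression values are embedded, and the attention weights expose a gene-by-gene interaction map. All class and parameter names (GeneAttentionLayer, GenePhenotypeClassifier, embed_dim, etc.) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class GeneAttentionLayer(nn.Module):
    """One self-attention layer in which each gene is a token."""
    def __init__(self, embed_dim: int = 32, n_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(embed_dim)

    def forward(self, x):
        # x: (batch, n_genes, embed_dim); the returned weights form an
        # (n_genes x n_genes) gene–gene attention map per sample.
        out, weights = self.attn(x, x, x, need_weights=True)
        return self.norm(x + out), weights

class GenePhenotypeClassifier(nn.Module):
    """Embed per-gene expression, apply gene-level self-attention,
    pool over genes, and predict a phenotype class."""
    def __init__(self, n_classes: int, embed_dim: int = 32):
        super().__init__()
        self.embed = nn.Linear(1, embed_dim)   # scalar expression -> embedding
        self.layer = GeneAttentionLayer(embed_dim)
        self.head = nn.Linear(embed_dim, n_classes)

    def forward(self, expr):
        # expr: (batch, n_genes) expression matrix
        x = self.embed(expr.unsqueeze(-1))
        x, attn = self.layer(x)
        logits = self.head(x.mean(dim=1))      # mean-pool over genes
        return logits, attn

# Toy usage: 8 samples, 100 genes, 3 phenotype classes
model = GenePhenotypeClassifier(n_classes=3)
logits, attn = model(torch.randn(8, 100))
print(logits.shape, attn.shape)  # torch.Size([8, 3]) torch.Size([8, 100, 100])
```

In a sketch like this, the attention matrix `attn` is the quantity one would attribute to each predicted class (e.g., via gradient-based attribution) to recover a gene–gene network whose hub genes can be inspected as candidate phenotype markers, in the spirit of the analysis described in the abstract.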