Toward Security and Privacy Enhanced Deep Neural Networks
Deep learning is transforming businesses with innovative technology in crucial industries such as manufacturing, transportation, and healthcare. One example is medical imaging algorithms capable of diagnosing disease at the level of a human expert. However, training state-of-the-art deep neural networks (DNNs) for medical imaging typically requires large-scale image datasets and architectures, and many raw image datasets contain sensitive identity feature information that prohibits entities from disclosing the data under privacy regulations. Additionally, large state-of-the-art DNNs are highly over-parameterized for medical image analysis. Consequently, medical image deep learning models are extremely vulnerable to adversarial attacks: imperceptibly perturbed inputs that cause incorrect model predictions. Many security and privacy challenges arise when developing DNNs for highly regulated industries such as healthcare. This work focuses on two major concerns that hinder the advancement of deep learning in these industries: data privacy during model development, and model robustness against adversarial attacks during model deployment. To address the first concern, this research develops learnable image transformation schemes, investigating two schemes that use convolutional autoencoder (CAE) latent representations and vision transformer (ViT) embeddings for privacy-enhanced image classification. This work also includes an autoencoder-based image anonymization scheme that obfuscates visual image features while retaining the attribute information required for model utility. The proposed anonymization method further enhances privacy by generating encoded images that exclude sensitive identity feature information.
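The idea of classifying encoded images rather than raw pixels can be illustrated with a minimal sketch. This is an assumption-laden toy, not the dissertation's actual CAE or ViT scheme: a single fixed convolutional filter with stride maps a toy image to a smaller latent map, which is what a downstream classifier would consume instead of the identifiable raw image.

```python
import numpy as np

# Hypothetical sketch (not the dissertation's scheme): encode an image into a
# lower-dimensional latent map with one fixed convolutional filter, so a
# classifier operates on the encoding rather than on raw, identifiable pixels.

rng = np.random.default_rng(0)

def conv_encode(img, kernel, stride=2):
    """Valid 2D convolution with stride, followed by a ReLU nonlinearity."""
    kh, kw = kernel.shape
    h = (img.shape[0] - kh) // stride + 1
    w = (img.shape[1] - kw) // stride + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            patch = img[i * stride : i * stride + kh,
                        j * stride : j * stride + kw]
            out[i, j] = np.sum(patch * kernel)
    return np.maximum(out, 0.0)  # ReLU

img = rng.normal(size=(28, 28))   # toy grayscale "image"
kernel = rng.normal(size=(4, 4))  # fixed random encoder filter (illustrative)
latent = conv_encode(img, kernel)
print(latent.shape)               # latent map is smaller than the input
```

In a trained CAE the encoder weights would be learned so that the latent representation retains task-relevant attributes while discarding visual detail; here the filter is random purely to show the shape of the pipeline.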
Finally, this work develops an approach to adversarially robust deep learning model selection, including an analysis of the role of model complexity in adversarial robustness for medical images.
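The adversarial attacks referred to above can be sketched with the fast gradient sign method (FGSM), a standard gradient-based attack; the linear model, weights, and epsilon here are illustrative assumptions, not artifacts of this work.

```python
import numpy as np

# Hypothetical FGSM sketch on a toy linear classifier: perturb the input by at
# most eps per feature in the direction that increases the loss.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """Return x shifted by eps * sign of the loss gradient w.r.t. x."""
    p = sigmoid(w @ x + b)   # model's predicted probability of class 1
    grad_x = (p - y) * w     # d(binary cross-entropy)/dx for this linear model
    return x + eps * np.sign(grad_x)

x = rng.normal(size=16)      # toy "image" flattened to a feature vector
w = rng.normal(size=16)      # illustrative model weights
x_adv = fgsm_perturb(x, y=1.0, w=w, b=0.0, eps=0.05)
print(np.max(np.abs(x_adv - x)))  # perturbation magnitude is bounded by eps
```

The perturbation is bounded by eps in every coordinate, which is why such attacks can remain imperceptible while still degrading a vulnerable model's predictions.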