Robustness and Dependability of Deep Learning Models for Real-World Applications




Wang, Zhiwei




Robustness and dependability of Deep Learning (DL) models are critical for real-world DL-based applications. In this dissertation, we explore real-world threats affecting three DL-based applications: a smartphone-powered, computer-vision-based system for Power Wheelchair Intelligent Assistive Driving (PWC IA-Driving); a gene expression-based Deep Neural Network (DNN) for cancer-type prediction; and an efficient, intelligent attack-detection system for Software-Defined IoT networks. We then propose methodologies to enhance the robustness and dependability of each application.

Firstly, we develop the PWC IA-Driving system, which aims to enable safe, hands-free operation of a Power Wheelchair (PWC) in indoor environments with reduced user attention, alleviating the burden on disabled individuals and reducing their stress. Our goal is an affordable, practical solution that integrates seamlessly into existing PWCs. The system runs a customized, pre-trained ResNet-based model on a smartphone to interpret driving commands from real-time imagery captured by the phone's camera; the commands are then transmitted to the PWC through a control interface connected to the smartphone. We have developed a prototype of this assistive driving system on a Pixel 6 Android phone and tested its feasibility on a mobile robot as a proof of concept.
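The frame-to-command pipeline described above can be sketched as a simple control loop. This is a hypothetical illustration, not the dissertation's actual implementation: the names `drive_step`, `classify`, and `send_command` are placeholders for the on-device model inference and the PWC control interface, and the low-confidence fallback anticipates the tolerance mechanism discussed next.

```python
# Hypothetical sketch of the per-frame control loop: classify a camera
# frame, then forward the predicted driving command to the wheelchair.
COMMANDS = ["forward", "left", "right", "stop"]

def drive_step(frame, classify, send_command, confidence_threshold=0.8):
    """Classify one camera frame and forward the resulting command to the
    PWC control interface; fall back to 'stop' on low-confidence output."""
    probs = classify(frame)  # e.g. softmax output of the on-device model
    best = max(range(len(probs)), key=probs.__getitem__)
    cmd = COMMANDS[best] if probs[best] >= confidence_threshold else "stop"
    send_command(cmd)
    return cmd
```

Stopping on low confidence is a conservative default for a safety-critical actuator; a real system might instead hold the last confident command briefly.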

To ensure the mobile robot can navigate safely at reasonable speeds with minimal user intervention, we employ several techniques to enhance robustness and dependability: model explanation to increase confidence in predictions, data augmentation to improve accuracy in unseen scenarios, model distillation and quantization to harden the model against possible adversarial attacks, and on-device accelerators to improve response time and tolerance to low-confidence predictions, among others.
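To illustrate the quantization idea in isolation, the sketch below shows minimal symmetric 8-bit post-training quantization of a weight vector. This is a conceptual example only, assuming a simple per-tensor symmetric scheme; it is not the toolchain (e.g., a mobile inference framework's quantizer) used in the actual system.

```python
def quantize_int8(weights):
    """Symmetric per-tensor post-training quantization to int8.
    Returns the quantized integer values and the scale factor."""
    scale = max(abs(w) for w in weights) / 127 or 1.0  # avoid a zero scale
    return [round(w / scale) for w in weights], scale

def dequantize(quantized, scale):
    """Map int8 values back to approximate float weights."""
    return [q * scale for q in quantized]
```

With this scheme, each dequantized weight differs from the original by at most half the scale, which bounds the accuracy loss while shrinking storage and enabling faster integer arithmetic on device.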

Secondly, we delve into the threats posed by adversarial attacks on Deep Neural Networks (DNNs), where adversaries can manipulate DNN outputs by crafting small, carefully designed perturbations to the inputs. These attacks present significant challenges to the practical deployment of DNNs. In this dissertation, we investigate a variety of defense methods against adversarial attacks on gene expression-based DNNs for cancer-type prediction. We propose a novel method called "segment patching" to mitigate the effects of adversarial perturbations: it replaces perturbed segments of the input with clean segments from the training dataset, selected by Euclidean distance. Our experiments demonstrate that this method maintains model prediction accuracy under adversarial attacks, particularly strong ones. More importantly, segment patching significantly raises the difficulty for adversaries attempting to generate adversarial examples. Additionally, we apply the Fast Fourier Transform (FFT) to move input data into the frequency domain before feeding it into the DNN model; this further obscures model gradients, making gradient-based attacks more difficult. Our findings suggest promising strategies for enhancing DNN robustness against adversarial attacks.
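The core of segment patching, as described above, can be sketched as follows. This is a simplified, hedged reconstruction from the abstract's description (fixed-length segments, nearest clean segment by Euclidean distance), not the dissertation's exact algorithm; segment boundaries and candidate selection in the actual method may differ.

```python
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length segments."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def segment_patch(sample, clean_samples, seg_len):
    """Replace each fixed-length segment of a (possibly perturbed) input
    with the nearest clean segment, taken from the same positions in the
    clean training samples."""
    patched = []
    for start in range(0, len(sample), seg_len):
        seg = sample[start:start + seg_len]
        candidates = [c[start:start + seg_len] for c in clean_samples]
        patched.extend(min(candidates, key=lambda c: euclidean(seg, c)))
    return patched
```

Because the patched input is built entirely from clean training segments, small adversarial perturbations are discarded rather than propagated into the model, and the substitution step is non-differentiable, which complicates gradient-based attack generation. An FFT preprocessing step, as explored in the dissertation, would be applied similarly before inference.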

Thirdly, we explore dependability issues in deep-learning-based anomaly detection for IoT coupled with Software-Defined Networking (SDN) technologies. As IoT devices are increasingly integrated into domains such as smart buildings and critical-infrastructure protection, their limited capabilities pose significant security vulnerabilities, especially when coupled with SDN technologies that provide flexible services. In this study, we concentrate on Random Forest (RF) machine learning models and scrutinize the impact of different feature sets (e.g., IPs and ports) on detection accuracy for various attacks. We enhance dependability through two main approaches: first, evaluating the effects of RF configurations (specifically forest size and tree depth) on detection accuracy and runtime overhead, improving response time by reducing forest size; second, generating a substantial amount of trusted labeled data by simulating attacks within our SDN environment. Our findings indicate that RF achieves high detection accuracy with the selected feature sets across different attacks. Furthermore, even with reduced forest sizes (e.g., fewer trees or shallower depth), RF's detection accuracy decreases only slightly, allowing significant reductions in runtime overhead and thereby improving response time.
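The forest-size/depth trade-off study described above can be reproduced in miniature with scikit-learn. This sketch uses synthetic data (`make_classification`) as a stand-in for the labeled SDN traffic, and the chosen `(n_estimators, max_depth)` settings are illustrative assumptions, not the configurations evaluated in the dissertation.

```python
import time

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the labeled attack/benign traffic data.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Sweep forest size and tree depth, recording accuracy and prediction time.
results = {}
for n_trees, depth in [(100, None), (10, 8), (5, 4)]:
    clf = RandomForestClassifier(n_estimators=n_trees, max_depth=depth,
                                 random_state=0).fit(X_tr, y_tr)
    t0 = time.perf_counter()
    acc = clf.score(X_te, y_te)
    ms = (time.perf_counter() - t0) * 1e3
    results[(n_trees, depth)] = acc
    print(f"trees={n_trees:3d} depth={depth} acc={acc:.3f} predict={ms:.2f} ms")
```

On data like this, the smaller forests typically lose only a little accuracy while predicting substantially faster, mirroring the trade-off the study reports for real SDN traffic.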





Electrical and Computer Engineering