Countermeasures Against Backdoor, Data Poisoning, and Adversarial Attacks

Date
2021
Authors
Chacon, Henry Daniel
Abstract

Backdoor and adversarial attacks on machine learning and deep learning models have become one of the biggest concerns in the artificial intelligence industry, a sector that has experienced accelerated growth in recent years. Attacks can be induced in different ways: through the training set, as backdoor attributes, or by violating the model's assumptions, as in adversarial attacks. The biggest challenges faced by defenders are the limited evidence observable in a backdoored model, the lack of uncertainty metrics in the output, and the absence of methods to defend the model against unknown attacks. In this dissertation, we present a set of algorithms against backdoor and adversarial attacks, grouped into three categories: the model's perspective, the data, and the defense. In the model's case, we provide a methodology to analyze the effect of backdoor attributes on an image classifier using the D-Vine copula autoencoder, and an algorithm to detect whether a model is backdoored using the maximum entropy principle and the variational inference approach. For the data section, we present a methodology to reduce the effect of noise and quasi-periodic manipulations in a time-series domain, improving the accuracy and reducing the forecast variance of a recurrent neural network model. In the final section, we present an algorithm to distinguish audio spoofing attacks from bona fide utterances. It is intended to serve as a countermeasure protecting an automatic speaker verification system, commonly used these days for biometric identification. Our approach combines the Mel-spectrogram, which extracts relevant features from the input audio, with a temporal convolutional neural network and a one-class loss function. This allows the model to handle audio of different lengths and to remain robust to unknown attacks.
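The audio anti-spoofing pipeline summarized above (Mel-spectrogram features, a temporal convolutional network, and a one-class decision rule) can be illustrated with a minimal NumPy sketch. This is not the dissertation's implementation: the mel pooling is a crude stand-in for a true mel filterbank, the TCN weights are random and untrained, and the names `log_mel_spectrogram`, `tcn_embed`, and `one_class_score` are illustrative assumptions. It shows only the structural point made in the abstract: dilated causal convolutions plus mean pooling yield a fixed-size embedding for any audio length, which a one-class distance score can then threshold.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_mel_spectrogram(audio, n_fft=512, hop=256, n_mels=40):
    """Frame the signal, take the magnitude FFT, and pool bins into
    mel-like bands (a simplified stand-in for a real mel filterbank)."""
    frames = []
    for start in range(0, len(audio) - n_fft + 1, hop):
        frame = audio[start:start + n_fft] * np.hanning(n_fft)
        frames.append(np.abs(np.fft.rfft(frame)))
    spec = np.array(frames).T                       # (n_fft//2+1, n_frames)
    edges = np.linspace(0, spec.shape[0], n_mels + 1, dtype=int)
    mel = np.array([spec[a:b].mean(axis=0)
                    for a, b in zip(edges[:-1], edges[1:])])
    return np.log(mel + 1e-8)                       # (n_mels, n_frames)

def dilated_causal_conv(x, w, dilation):
    """One TCN building block: causal 1-D convolution with dilation,
    so output[t] depends only on x[t], x[t-d], ..., x[t-(k-1)d]."""
    k = w.shape[0]
    pad = (k - 1) * dilation
    xp = np.pad(x, (pad, 0))
    return np.array([np.dot(w, xp[t:t + pad + 1:dilation])
                     for t in range(x.shape[0])])

def tcn_embed(mel, weights):
    """Stack of dilated causal convs (dilation doubling per layer) applied
    per mel channel, then mean-pooled over time: the embedding size is
    fixed no matter how many frames the utterance has."""
    h = mel
    for layer, w in enumerate(weights):
        d = 2 ** layer
        h = np.maximum(0.0, np.stack(
            [dilated_causal_conv(row, w, d) for row in h]))
    return h.mean(axis=1)                           # (n_mels,)

def one_class_score(emb, centroid):
    """One-class decision: distance to the bona-fide centroid;
    large distances would flag spoofed audio."""
    return np.linalg.norm(emb - centroid)

# Usage: two clips of different lengths yield same-size embeddings.
t_long = np.linspace(0.0, 1.0, 16000)
t_short = np.linspace(0.0, 0.5, 8000)
weights = [rng.standard_normal(3) * 0.1 for _ in range(2)]
emb_long = tcn_embed(log_mel_spectrogram(np.sin(2 * np.pi * 440 * t_long)), weights)
emb_short = tcn_embed(log_mel_spectrogram(np.sin(2 * np.pi * 440 * t_short)), weights)
centroid = emb_long   # pretend the long clip defines the bona-fide centroid
score = one_class_score(emb_short, centroid)
```

In the dissertation's actual system, the centroid and network weights would be learned by minimizing a one-class loss over bona fide training utterances; here the random weights serve only to demonstrate the length-invariant embedding.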

Description
This item is available only to currently enrolled UTSA students, faculty or staff.
Keywords
Deep learning, Datasets, Experiments, Graph representations, Neural networks, Classification, Decomposition, Taxonomy, Methods, Energy, Algorithms, Entropy, Adversarial attacks, Audio spoofing detection, Backdoor attacks, Data poisoning, Uncertainty reduction
Department
Management Science and Statistics