Countermeasures Against Backdoor, Data Poisoning, and Adversarial Attacks

dc.contributor.advisor: Rad, Paul
dc.contributor.advisor: De Oliveira, Victor
dc.contributor.author: Chacon, Henry Daniel
dc.contributor.committeeMember: Wu, Wenbo
dc.contributor.committeeMember: Wang, Min
dc.contributor.committeeMember: Bou-Harb, Elias
dc.creator.orcid: https://orcid.org/0000-0003-4472-6738
dc.date.accessioned: 2024-02-09T20:19:28Z
dc.date.available: 2024-02-09T20:19:28Z
dc.date.issued: 2021
dc.description: This item is available only to currently enrolled UTSA students, faculty or staff. To download, navigate to Log In in the top right-hand corner of this screen, then select Log in with my UTSA ID.
dc.description.abstract: Backdoor and adversarial attacks on machine and deep learning models have become one of the biggest concerns in the artificial intelligence industry, a sector that has experienced accelerated growth in recent years. Attacks can be induced in different ways: in the training set as backdoor attributes, or through a violation of the model's assumptions in the case of adversarial attacks. The biggest challenges faced by defenders are the limited evidence observable in a backdoored model, the lack of uncertainty metrics in the output, and the shortage of methods to defend the model against unknown attacks. In this dissertation, we present a set of algorithms against backdoor and adversarial attacks grouped into three categories: the model's perspective, data, and defense. In the model's case, we provide a methodology to analyze the effect of backdoor attributes on an image classifier using the D-Vine copula autoencoder, and an algorithm to detect whether a model is backdoored using the maximum entropy principle and the variational inference approach. For the data section, we present a methodology to reduce the effect of noise and quasi-periodic manipulations in the time-series domain, improving the accuracy and reducing the forecast variance of a recurrent neural network model. In the final section, we present an algorithm to classify audio spoofing attacks against bona fide utterances. It is intended as a countermeasure to protect an automatic speaker verification system, commonly used nowadays for biometric identification. Our approach combines the Mel-spectrogram, used to extract relevant features from input audio, with a temporal convolutional neural network and a one-class loss function. This allows the model to handle different audio lengths and to be robust to unknown attacks.
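
The abstract names the building blocks of the anti-spoofing countermeasure (Mel-spectrogram features, a temporal convolutional neural network, and a one-class loss) without fixing an implementation. The sketch below is only a minimal illustration of how such a pipeline is commonly assembled; the choice of librosa and PyTorch, the 16 kHz sample rate, 80 Mel bands, channel sizes, and the file name are assumptions made for the example, not the dissertation's actual configuration, and the one-class scoring step is only indicated in a comment.

import librosa
import torch
import torch.nn as nn

def mel_features(path, sr=16000, n_mels=80):
    """Load an utterance and return a log-Mel spectrogram of shape (n_mels, frames)."""
    y, _ = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    return torch.tensor(librosa.power_to_db(mel), dtype=torch.float32)

class TemporalConvEmbedder(nn.Module):
    """Dilated 1-D convolutions over the time axis; average pooling over time
    yields a fixed-size embedding, so utterances of any length are accepted."""
    def __init__(self, n_mels=80, channels=64, embed_dim=128):
        super().__init__()
        self.tcn = nn.Sequential(
            nn.Conv1d(n_mels, channels, kernel_size=3, padding=1, dilation=1),
            nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=3, padding=2, dilation=2),
            nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=3, padding=4, dilation=4),
            nn.ReLU(),
        )
        self.embed = nn.Linear(channels, embed_dim)

    def forward(self, x):            # x: (batch, n_mels, frames)
        h = self.tcn(x)              # (batch, channels, frames)
        h = h.mean(dim=-1)           # global average pooling over time
        return self.embed(h)         # embedding to be scored by a one-class loss

# Hypothetical usage: "utterance.wav" is a placeholder file name.
# feats = mel_features("utterance.wav").unsqueeze(0)   # (1, n_mels, frames)
# emb = TemporalConvEmbedder()(feats)
# A one-class objective would then score emb against a learned bona fide
# reference, flagging distant embeddings as spoofed.
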
dc.description.department: Management Science and Statistics
dc.format.extent: 199 pages
dc.format.mimetype: application/pdf
dc.identifier.isbn: 9798538139873
dc.identifier.uri: https://hdl.handle.net/20.500.12588/3158
dc.language: en
dc.subject: Deep learning
dc.subject: Datasets
dc.subject: Experiments
dc.subject: Graph representations
dc.subject: Neural networks
dc.subject: Classification
dc.subject: Decomposition
dc.subject: Taxonomy
dc.subject: Methods
dc.subject: Energy
dc.subject: Algorithms
dc.subject: Entropy
dc.subject: Adversarial attacks
dc.subject: Audio spoofing detection
dc.subject: Backdoor attacks
dc.subject: Data poisoning
dc.subject: Uncertainty reduction
dc.subject.classification: Statistics
dc.subject.classification: Artificial intelligence
dc.title: Countermeasures Against Backdoor, Data Poisoning, and Adversarial Attacks
dc.type: Thesis
dc.type.dcmi: Text
dcterms.accessRights: pq_closed
thesis.degree.department: Management Science and Statistics
thesis.degree.grantor: University of Texas at San Antonio
thesis.degree.level: Doctoral
thesis.degree.name: Doctor of Philosophy

Files

Original bundle

Name: Chacon_utsa_1283D_13402.pdf
Size: 2.8 MB
Format: Adobe Portable Document Format