Analyzing the Geometric Structure of Deep Learning Decision Boundaries

dc.contributor.advisor: Fernandez, Amanda
dc.contributor.author: Geyer, Michael
dc.contributor.committeeMember: Ruan, Jianhua
dc.contributor.committeeMember: Desai, Kevin
dc.contributor.committeeMember: Rad, Paul
dc.contributor.committeeMember: Walton, Clair
dc.creator.orcid: https://orcid.org/0000-0001-8490-1575
dc.date.accessioned: 2024-03-26T22:49:49Z
dc.date.available: 2024-03-26T22:49:49Z
dc.date.issued: 2023
dc.description.abstract: Training deep learning models is a remarkably effective method for finding function approximators. However, understanding the behavior of these trained models from a first-principles description remains an open problem, because model training is a complex dynamical system, parameterized by a training step, initial condition, dataset, and learning algorithm, that is computationally intractable to study directly. Adding to this, trained models often exhibit counterintuitive properties, such as the existence of adversarial examples. Current deep learning theory lacks the tools needed to answer many questions posed by empirical results, such as why adversarial attacks transfer between models and why improving robustness decreases performance. This dissertation provides the tools to formally answer some of these questions. First, this work defines a differentiable algorithm on a model's inputs, weights, and training set that exactly replicates model behavior across training, enabling exact measurement of each training example's contribution and of the trained model's signal manifold. Second, this work provides a loss function that aligns a model's gradients to a fixed low-dimensional manifold. These tools are then applied to computer vision datasets to formally study adversarial robustness, explainability, and out-of-distribution detection. The goal of this dissertation is to provide techniques that will advance the theoretical understanding of deep learning.
dc.description.department: Computer Science
dc.format.extent: 1 electronic resource (88 pages)
dc.format.mimetype: application/pdf
dc.identifier.isbn: 9798381179682
dc.identifier.uri: https://hdl.handle.net/20.500.12588/6290
dc.language: eng
dc.subject: Adversarial
dc.subject: Deep Learning
dc.subject: Neural Tangent Kernel
dc.subject.classification: Artificial intelligence
dc.subject.classification: Mathematics
dc.subject.classification: Computer science
dc.title: Analyzing the Geometric Structure of Deep Learning Decision Boundaries
dc.type: Thesis
dc.type.dcmi: Text
thesis.degree.department: Computer Science
thesis.degree.grantor: University of Texas at San Antonio
thesis.degree.level: Doctoral
thesis.degree.name: Doctor of Philosophy

Files

Original bundle

Name: Michael_Geyer_Dissertation.pdf
Size: 10.67 MB
Format: Adobe Portable Document Format