Leveraging Explainability to Increase Efficiency and Transparency in Computer Vision

Date

2023

Authors

Patrick, David

Abstract

Although deep neural networks have had a huge impact on computer vision tasks, many of the training strategies for these networks are naive and lack inherent explainability. Standard training involves continuously evaluating on classes the network already understands well, which is inefficient. Additionally, many of these networks lack explainable methods to overcome their inherent black-box nature, preventing their adoption in real-world scenarios where explainability is highly valued. This dissertation proposes novel methods and strategies for improving the efficiency and explainability of a network. First, this dissertation presents a new training strategy, Adaptive Data Dropout, applied to the tasks of single-label classification and semantic segmentation. Adaptive Data Dropout removes redundant observations during training, significantly reducing overall training time while maintaining performance. Second, this dissertation develops improved explainability techniques that can be incorporated into a network's training and evaluation process. By utilizing explainability techniques during training and evaluation, this work identifies critical parameters within a network and critical features in an input. These methods and strategies are applied to a variety of computer vision benchmarks and applications, demonstrating increased efficiency, transparency, and explainability.
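The abstract does not specify how Adaptive Data Dropout decides which observations are redundant. A minimal sketch of one plausible scheme, in which samples the model already handles well (low recent loss) are dropped from the next epoch, with a small revisit probability so dropped samples are not lost forever. All names here (`adaptive_data_dropout`, `loss_threshold`, `keep_floor`) are hypothetical illustrations, not the dissertation's actual method.

```python
import random

def adaptive_data_dropout(samples, losses, keep_floor=0.1, loss_threshold=0.05):
    """Hypothetical sketch of a data-dropout step (not the dissertation's algorithm).

    samples:        list of training examples
    losses:         per-sample loss from the most recent epoch
    keep_floor:     probability of keeping a well-understood sample anyway,
                    so dropped samples are occasionally revisited
    loss_threshold: samples at or below this loss are considered well-understood
    """
    kept = []
    for sample, loss in zip(samples, losses):
        # Keep samples the model still struggles with; keep the rest
        # only with probability keep_floor.
        if loss > loss_threshold or random.random() < keep_floor:
            kept.append(sample)
    return kept

# With keep_floor=0.0 the behavior is deterministic: only samples whose
# loss exceeds the threshold survive into the next epoch.
hard = adaptive_data_dropout(["a", "b", "c"], [0.01, 0.30, 0.02], keep_floor=0.0)
# hard == ["b"]
```

In a full training loop this filter would run between epochs, shrinking the effective dataset as more classes become well-understood, which is one way the claimed reduction in training time could arise.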

Keywords

Computer Vision, Deep Learning, Explainability, Neural Network, Optimization

Department

Computer Science