Breaking the Privacy Paradox: Pushing AI to the Edge with Provable Guarantees

dc.contributor.advisor: Gong, Yanmin
dc.contributor.author: Hu, Rui
dc.contributor.committeeMember: Sandhu, Ravi
dc.contributor.committeeMember: Lin, Wei-Ming
dc.contributor.committeeMember: Choo, Kim-Kwang Raymond
dc.creator.orcid: https://orcid.org/0000-0003-3317-1765
dc.date.accessioned: 2024-02-09T22:22:56Z
dc.date.available: 2024-02-09T22:22:56Z
dc.date.issued: 2022
dc.description: This item is available only to currently enrolled UTSA students, faculty, or staff.
dc.description.abstract: As connected edge devices such as mobile phones, wearables, and autonomous vehicles generate massive amounts of data each day for machine-learning-based intelligent services, they are transforming many spheres of human life, including healthcare, entertainment, and industry. The traditional process for developing machine learning applications is to gather a large dataset, train a model on the data, and run the trained model on a cloud server. Owing to the growing tension between the need for big data and the need for privacy protection, it is increasingly attractive to push artificial intelligence (AI) to the edge, e.g., to let edge devices train machine learning models collaboratively while keeping their data local. Deploying this distributed learning architecture, however, raises a set of challenges: new privacy risks, limited computation and communication resources, heterogeneous data and devices, and security vulnerabilities. This work aims to improve privacy, efficiency, and security when pushing AI to the edge. Specifically, we propose privacy-preserving distributed learning schemes that provide rigorous privacy guarantees for each device in the learning system, together with improved methods that simultaneously reduce the devices' privacy loss and communication cost. We also design an incentive mechanism to encourage users to contribute their raw data to these private distributed learning systems. In addition, we account for the heterogeneous energy budgets of edge devices by developing a data offloading and queueing mechanism that improves energy efficiency, and we explore the vulnerability of machine learning systems by attacking a recommendation system with malicious devices. Our proposed methods are validated with rigorous theoretical analysis and extensive experiments on real-world datasets.
dc.description.department: Electrical and Computer Engineering
dc.format.extent: 146 pages
dc.format.mimetype: application/pdf
dc.identifier.isbn: 9798438750536
dc.identifier.uri: https://hdl.handle.net/20.500.12588/3846
dc.language: en
dc.subject: Artificial Intelligence
dc.subject: Distributed Learning
dc.subject: Internet of Things
dc.subject: Security & Privacy
dc.subject.classification: Electrical engineering
dc.subject.classification: Artificial intelligence
dc.title: Breaking the Privacy Paradox: Pushing AI to the Edge with Provable Guarantees
dc.type: Thesis
dc.type.dcmi: Text
dcterms.accessRights: pq_closed
thesis.degree.department: Electrical and Computer Engineering
thesis.degree.grantor: University of Texas at San Antonio
thesis.degree.level: Doctoral
thesis.degree.name: Doctor of Philosophy
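The abstract describes privacy-preserving distributed learning with rigorous per-device privacy guarantees, an approach commonly realized via differentially private federated averaging. The sketch below is only an illustration of that general technique, not the dissertation's actual algorithms; the function names, clipping bound, and noise scale are assumptions chosen for clarity.

```python
import numpy as np

def clip_update(update, clip_norm):
    # Bound each device's contribution: scale the update so its
    # L2 norm is at most clip_norm (the sensitivity of the average).
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / norm) if norm > 0 else update

def private_round(local_updates, clip_norm=1.0, noise_mult=1.0, rng=None):
    # One round of differentially private federated averaging:
    # clip every device's update, average the clipped updates, then
    # add Gaussian noise calibrated to the clipping bound.
    rng = np.random.default_rng(0) if rng is None else rng
    clipped = [clip_update(u, clip_norm) for u in local_updates]
    avg = np.mean(clipped, axis=0)
    sigma = noise_mult * clip_norm / len(local_updates)
    return avg + rng.normal(0.0, sigma, size=avg.shape)
```

Clipping caps how much any single device can shift the aggregate, and the Gaussian noise masks the remaining per-device influence; together they yield a differential privacy guarantee whose strength is tuned through `noise_mult`.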

Files

Original bundle

Name: Hu_utsa_1283D_13620.pdf
Size: 2.34 MB
Format: Adobe Portable Document Format