Distributed AI-defense for Cyber Threats on Edge Computing Systems




De La Torre Parra, Gonzalo



As cyber threat actors develop increasingly sophisticated strategies, cutting-edge cyber security has become a necessity for industry organizations and government agencies. A deluge of novel threat strategies has overwhelmed many state-of-the-art cyber security models. Mutating hashes, complex obfuscation mechanisms, self-propagating malware, and intelligent malware components can easily evade current signature-based and behavior-based cyber security approaches. Moreover, as advancements in computation, storage, and energy efficiency spawn new technological breakthroughs, such as the Internet of Things (IoT), security development faces the challenge of keeping pace with both novel threats and novel technology.

IoT technologies have enabled the collection, processing, and communication of data in Autonomous Vehicles, Smart Cities, Smart Grids, and eHealth applications. Given their many features and low cost, IoT devices are often deployed en masse and are rarely operated in controlled, security-hardened environments. These uncontrolled conditions expose physical, network, and application attack surfaces, often through newer, more exploitable protocols with limited protections. To tackle this issue, edge-level threat and attack detection systems require specialized solutions that leverage the local computing power of edge systems, take advantage of their proximity to edge devices (e.g., IoT devices and mobile devices), and make use of infrastructure-level aggregation capabilities that benefit from distributed learning. We propose that deep learning is essential to these specialized solutions.

Deep learning models have demonstrated great success on big data and drawn immense attention from cyber security experts. A clear advantage of such models is their ability to discover hidden patterns that distinguish benign from malicious activity. Still, the major challenges in developing deep learning models for threat detection include:

  1. The need for big data to explore multiple cyber security scenarios and the lack of available representative datasets for threat detection.

  2. State-of-the-art works presented in the literature focus on developing models via a centralized-learning approach. While most works show great effectiveness in detecting anomalies in network traffic, system logs, and other types of data, centralized-learning approaches require the collection of data from multiple edge computing systems, violating data privacy principles.

  3. Cyber forensic teams require visibility into the model's decision-making process during incident detection. This essential functionality for threat forensics has yet to be offered in state-of-the-art models. The security model proposed in this thesis provides this visibility.

First, we present a case study of cyber security in autonomous vehicles' edge infrastructures. Second, we present a case study exploring the effectiveness of deep packet inspection for attack detection in smart grid edge infrastructures, based on recommendations provided by government agencies. Third, we present a cloud-based cyber security framework for synthetic data generation. Our model can emulate benign and malicious activity to facilitate the generation of representative datasets, and it offers opportunities for other researchers to collaborate. Fourth, we present a framework for a novel distributed long short-term memory (LSTM) neural network that detects IoT attacks using a joint training method between edge devices and cloud systems. Lastly, we propose the first interpretable federated transformer log learning model for threat forensics. Unlike other works that focus exclusively on anomaly detection in syslogs, our proposed model incorporates federated learning (FL), a machine learning setting in which multiple entities collaborate in solving a machine learning problem under the coordination of a federated server. We leverage this distributed problem-solving to build a robust, privacy-preserving security model, and we integrate novel interpretability and backtracking methods for threat forensics.
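The federated learning setting described above can be illustrated with a minimal federated averaging sketch: clients train on private data, and a server aggregates only their model parameters. This is a generic illustration of the FL coordination pattern, not the thesis's transformer model; the function names and the toy one-parameter regression task are assumptions made for the example.

```python
# Minimal federated averaging (FedAvg) sketch. Raw data never leaves
# a client; only trained weights are sent to the coordinating server.
# Illustrative only -- a one-parameter model, not the thesis's model.
import random

def local_update(weights, data, lr=0.05):
    """One gradient-descent step on a client's private data
    (least-squares fit of y = w * x, kept deliberately simple)."""
    grad = sum(2 * (weights * x - y) * x for x, y in data) / len(data)
    return weights - lr * grad

def federated_round(global_w, clients):
    """Each client trains locally; the server averages the resulting
    weights, weighted by each client's dataset size."""
    updates = [(local_update(global_w, data), len(data)) for data in clients]
    total = sum(n for _, n in updates)
    return sum(w * n for w, n in updates) / total

# Three clients, each holding private samples of y = 3 * x plus noise.
random.seed(0)
clients = [[(x, 3 * x + random.gauss(0, 0.01)) for x in range(1, 5)]
           for _ in range(3)]

w = 0.0
for _ in range(50):
    w = federated_round(w, clients)
print(round(w, 2))  # converges near the true slope 3.0
```

The weighted average is the core privacy-preserving step: the server learns a shared model without ever observing any client's data, which is the property the centralized-learning approaches in item 2 above lack.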


The author has granted permission for their work to be available to the general public.


AI Interpretability, Anomaly Detection, Cyber Security, Federated Learning, Network Anomaly Detection, Threat Detection



Electrical and Computer Engineering