SLO-Aware Resource Management for Edge Computing Applications

Date

2024

Authors

Kang, Peng

Abstract

The advent of the Internet of Things (IoT) promises significant data growth as devices continue to proliferate, requiring robust solutions for data management and analysis. Edge Computing and the Cloud play vital roles in processing and analyzing the data generated by IoT devices: Edge Computing strategically deploys computing resources near data sources for rapid processing, while the Cloud provides centralized storage and computational power for complex analytics. Effective resource management across IoT, Edge, and Cloud environments is crucial to optimize system performance, ensure efficient utilization, and enhance reliability. It involves dynamically allocating resources to meet diverse demands, minimizing tail latency, and deploying computing resources effectively. Poor resource management, however, can lead to service-level objective (SLO) violations and latency issues, underscoring the importance of a cohesive strategy that orchestrates these interactions and maximizes the potential of distributed computing architectures.

To address this challenge, we explore three research directions. First, we apply machine learning and statistical inference to optimize resource allocation and reallocation in the Edge and Cloud environments, aiming to mitigate SLO violations in applications that span both domains. This approach encompasses horizontal and vertical scaling of resources, using predictive models to forecast workload demands and adapt resource allocation accordingly, thereby improving resource utilization and overall performance. Second, we introduce an algorithm that combines spectral partitioning of application topologies with a topology-aware resource-matching technique for efficient stream operator placement across distributed edge nodes. This algorithm addresses the challenges of minimizing network bottlenecks and of computational resource constraints at the edge; we also explore priority scheduling of streaming data to further mitigate SLO violations. Third, our collaborative research focuses on designing lightweight deep neural network models tailored for edge devices. Together, these efforts optimize resource management and improve model efficiency in the Edge and Cloud environments, facilitating the advancement of IoT applications.
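As a rough illustration of the first direction, the sketch below shows one way predictive, SLO-aware horizontal scaling could look. The linear-trend forecast, the per-replica capacity, and the headroom factor are illustrative assumptions, not the predictive models developed in this dissertation.

```python
import numpy as np

def forecast_demand(history, horizon=1):
    """Forecast the next-interval request rate with a simple linear trend.
    (Illustrative only; a real deployment would use a richer model.)"""
    t = np.arange(len(history))
    slope, intercept = np.polyfit(t, history, deg=1)
    return max(0.0, slope * (len(history) - 1 + horizon) + intercept)

def replicas_needed(predicted_rps, capacity_rps, headroom=0.7):
    """Horizontal-scaling decision: keep each replica below a fraction of
    its capacity so that tail latency stays within the SLO."""
    return int(np.ceil(predicted_rps / (capacity_rps * headroom)))

history = np.array([120, 150, 180, 210, 260], dtype=float)  # requests/sec
predicted = forecast_demand(history)
print(f"forecast: {predicted:.0f} rps -> {replicas_needed(predicted, 100)} replicas")
```

Similarly, for the second direction, the following sketch illustrates the general idea of spectral bisection of an operator topology: the sign of the Fiedler vector of the graph Laplacian splits operators into two groups while cutting few high-rate edges. The toy adjacency matrix and the two-node placement are assumptions for illustration only; the dissertation's algorithm additionally performs topology-aware resource matching across many edge nodes.

```python
import numpy as np

# Toy undirected view of a streaming topology; edge weights approximate
# inter-operator data rates (values are made up for illustration).
A = np.array([
    [0, 3, 0, 0, 1],
    [3, 0, 2, 0, 0],
    [0, 2, 0, 4, 0],
    [0, 0, 4, 0, 2],
    [1, 0, 0, 2, 0],
], dtype=float)

L = np.diag(A.sum(axis=1)) - A        # unnormalized graph Laplacian
eigvals, eigvecs = np.linalg.eigh(L)  # eigenvalues in ascending order
fiedler = eigvecs[:, 1]               # eigenvector of 2nd-smallest eigenvalue

# The sign of the Fiedler vector yields a two-way cut that tends to keep
# heavily communicating operators on the same edge node.
placement = {op: ("edge-node-A" if v < 0 else "edge-node-B")
             for op, v in enumerate(fiedler)}
print(placement)
```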

Keywords

Edge/Cloud Computing, Resource Management, Scheduling

Department

Computer Science