Resource-aware Tiny Machine Learning for Battery-less System

Date

2024

Authors

Islam, Sahidul

Abstract

Powerful machine learning algorithms are increasingly designed to achieve better accuracy, but they require large amounts of data and computing power and rely on centralized cloud services. This creates a series of problems, including high cost, privacy issues, carbon emissions, and low quality of service due to large data-transmission overhead. Recently, a counter-trend of tiny machine learning, which shrinks these large models to fit IoT devices, has shown promise in addressing these issues. Billions of IoT devices around the world serve common daily-life applications such as human activity recognition, voice command recognition, and face recognition. Implementing tiny machine learning algorithms on such IoT devices is necessary to enable these applications.

However, there are fundamental challenges in designing tiny machine learning models due to resource constraints such as the limited computing capability, memory, and energy of IoT devices, which are mostly based on microcontrollers. Typical microcontrollers have low clock speeds (e.g., 1-16 MHz) and small memories (e.g., hundreds of KBs). Moreover, battery-powered devices naturally have limited standby time. Although battery-less IoT devices that harvest energy from the ambient environment can operate sustainably, the power provided by the energy harvester is low and intrinsically unstable because it varies with the ambient environment. Computations on such devices are therefore interrupted frequently and become intermittent. To address these challenges, we propose several software/hardware co-design methodologies to efficiently implement tiny machine learning on battery-less devices.
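A common way to make inference survive such frequent power interruptions is to checkpoint progress into non-volatile memory (e.g., FRAM) so that work completed before a failure is not lost. The following is a minimal illustrative sketch, not the thesis's actual implementation; `nvm`, `checkpoint`, and `power_fails_after` are hypothetical names introduced here for illustration.

```python
nvm = {}  # stands in for non-volatile memory (e.g., FRAM) that survives power loss

def checkpoint(layer_idx, activation):
    # Persist which layer to resume from, plus the intermediate activation.
    nvm["layer"] = layer_idx
    nvm["act"] = list(activation)

def restore():
    # Return saved progress, or the starting state if nothing was saved.
    return nvm.get("layer", 0), nvm.get("act")

def run_inference(layers, x, power_fails_after=None):
    """Run layer by layer, checkpointing after each layer so a power
    failure only loses the work of the layer currently executing."""
    start, saved = restore()
    if saved is not None:
        x = saved  # resume from the last checkpointed activation
    for i in range(start, len(layers)):
        if power_fails_after is not None and i == power_fails_after:
            return None  # simulated power loss; progress up to layer i is in NVM
        x = [layers[i](v) for v in x]
        checkpoint(i + 1, x)
    return x
```

A second call to `run_inference` after a simulated failure resumes from the checkpointed layer rather than restarting from scratch, which is the essential property intermittent inference needs.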

First, we introduce an end-to-end framework that accelerates machine learning models while achieving efficient intermittent computation. The framework comprises three modules: resource-aware DNN pruning, accelerator-enabled embedded software, and an intermittent inference module. We demonstrate how these modules interact to improve performance.
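Resource-aware pruning can be sketched, under simplifying assumptions, as magnitude pruning driven by a memory budget: the smallest-magnitude weights are zeroed until the model fits the device's parameter budget. The function name and budget parameter below are illustrative, not from the thesis.

```python
def prune_to_budget(weights, budget_params):
    """Zero out the smallest-magnitude weights so that at most
    `budget_params` non-zero parameters remain (a resource budget
    such as a microcontroller's memory would determine this number)."""
    # Rank weight positions from smallest to largest magnitude.
    ranked = sorted((abs(w), i) for i, w in enumerate(weights))
    pruned = list(weights)
    # Zero the lowest-magnitude entries beyond the budget.
    for _, i in ranked[:max(0, len(weights) - budget_params)]:
        pruned[i] = 0.0
    return pruned
```

For example, pruning `[0.5, -0.1, 0.9, 0.05]` to a budget of 2 keeps only the two largest-magnitude weights. In practice, pruning is applied per layer or per channel and followed by fine-tuning to recover accuracy.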

Second, to adapt to the varying environment and harvesting power, we propose environment-adaptive machine learning models for low-power, energy-harvesting, battery-less devices. A co-exploration framework searches for multiple machine learning models with shared weights, and we further propose an on-device implementation architecture to execute such shared-weight models efficiently. A run-time model extraction algorithm retrieves an individual model from the shared source.
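One way weight sharing is commonly realized, sketched here purely for illustration (the thesis's actual extraction algorithm may differ), is to store only the widest layer configuration and let narrower sub-models reuse its leading rows and columns, so switching models at run time needs a slice rather than extra weight storage.

```python
# Shared weights for the widest layer configuration (4 output x 4 input channels).
shared_layer = [
    [0.1, 0.2, 0.3, 0.4],
    [0.5, 0.6, 0.7, 0.8],
    [0.9, 1.0, 1.1, 1.2],
    [1.3, 1.4, 1.5, 1.6],
]

def extract_submodel(shared, out_ch, in_ch):
    """Retrieve a smaller model's layer from the shared weights by
    slicing the leading out_ch x in_ch block; no duplicate storage."""
    return [row[:in_ch] for row in shared[:out_ch]]
```

When harvested power is low, the device would extract and run a narrow sub-model; when power is plentiful, it can use the full-width weights, all from the same shared source.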

Third, to achieve multi-tasking machine learning in varying environments, we present a scalable multi-tasking framework that generates a single unified model flexible enough to adapt to the varying environment while performing multiple machine learning tasks.
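A unified multi-task model is often structured as a shared feature extractor with lightweight task-specific heads, so most parameters are stored once regardless of the number of tasks. The sketch below assumes that structure; the backbone, head names, and toy computations are hypothetical stand-ins, not the thesis's architecture.

```python
def backbone(x):
    # Shared feature extractor reused by every task (stored once in memory).
    return [v * 0.5 for v in x]

# Small task-specific heads; adding a task adds only a head, not a full model.
heads = {
    "activity": lambda feats: sum(feats),
    "keyword":  lambda feats: max(feats),
}

def predict(task, x):
    """Run the shared backbone, then dispatch to the requested task's head."""
    return heads[task](backbone(x))
```

Because the backbone dominates the parameter count, this layout scales to more tasks within a fixed memory budget, which is what makes a single unified model attractive on a battery-less device.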

Keywords

Convolutional Neural Network, Deep Neural Network Pruning, Energy Harvesting, On-device Machine Learning

Department

Computer Science