MLRelated.com

A review on TinyML: State-of-the-art and prospects

Partha Pratim Ray

Machine learning has become an indispensable part of the existing technological domain. Edge computing and the Internet of Things (IoT) together present a new opportunity to apply machine learning techniques on resource-constrained embedded devices at the edge of the network. Conventional machine learning requires an enormous amount of power to predict a scenario. The embedded machine learning paradigm, TinyML, aims to shift such workloads from traditional high-end systems to low-end clients. Several challenges arise in this transition, such as maintaining the accuracy of learning models, providing a train-to-deploy facility on resource-frugal tiny edge devices, optimizing processing capacity, and improving reliability. In this paper, we present an intuitive review of such possibilities for TinyML. First, we present the background of TinyML. Second, we list the tool sets supporting TinyML. Third, we present key enablers for the improvement of TinyML systems. Fourth, we survey the state of the art in TinyML frameworks. Finally, we identify key challenges and prescribe a future roadmap for mitigating several research issues of TinyML.


Deep Learning on Microcontrollers: A Study on Deployment Costs and Challenges

Filip Svoboda, Edgar Liberis

Microcontrollers are an attractive deployment target due to their low cost, modest power usage and abundance in the wild. However, deploying models to such hardware is nontrivial due to the small amount of on-chip RAM (often under 512 KB) and limited compute capabilities. In this work, we delve into the requirements and challenges of fast DNN inference on MCUs: we describe how the memory hierarchy influences the architecture of the model, expose often under-reported costs of compression and quantization techniques, and highlight issues that become critical when deploying to MCUs compared to mobile devices. Our findings and experiences are also distilled into a set of guidelines that should ease the future deployment of DNN-based applications on microcontrollers.
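The quantization costs the abstract alludes to can be illustrated with a minimal, self-contained sketch of 8-bit affine post-training quantization, the scheme commonly used to shrink DNN weights fourfold (float32 to int8) for MCU deployment. All names and numeric values below are illustrative, not drawn from the paper.

```python
# Minimal sketch of 8-bit affine (asymmetric) quantization: floats are
# mapped to integers in [0, 255] via a per-tensor scale and zero point.
# Storage drops 4x versus float32, at the cost of a bounded rounding error.

def quantize(weights, num_bits=8):
    """Map float weights to integers in [0, 2**num_bits - 1]."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / (2**num_bits - 1) or 1.0  # avoid zero scale
    q = [round((w - lo) / scale) for w in weights]
    return q, scale, lo

def dequantize(q, scale, lo):
    """Recover approximate float weights from the integer codes."""
    return [v * scale + lo for v in q]

weights = [-0.51, 0.02, 0.37, 1.20, -1.00]  # illustrative tensor
q, scale, lo = quantize(weights)
restored = dequantize(q, scale, lo)

# Rounding error per weight is bounded by half the quantization step.
print(q)
print(max(abs(a - b) for a, b in zip(weights, restored)))
```

The under-reported cost the authors highlight shows up here as the reconstruction error, which grows with the dynamic range of the tensor; real toolchains mitigate it with per-channel scales and quantization-aware training.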


Machine Learning for Microcontroller-Class Hardware - A Review

Swapnil Sayan Saha, Sandeep Singh Sandha

Advancements in machine learning have opened a new opportunity to bring intelligence to low-end Internet-of-Things nodes such as microcontrollers. Conventional machine learning deployment has a high memory and compute footprint, hindering its direct deployment on ultra-resource-constrained microcontrollers. This paper highlights the unique requirements of enabling onboard machine learning for microcontroller-class devices. Researchers use a specialized model development workflow for resource-limited applications to ensure the compute and latency budget is within the device limits while still maintaining the desired performance. We characterize a closed-loop, widely applicable workflow of machine learning model development for microcontroller-class devices and show that several classes of applications adopt a specific instance of it. We present both qualitative and numerical insights into different stages of model development by showcasing several use cases. Finally, we identify the open research challenges and unsolved questions demanding careful consideration moving forward.
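The closed-loop workflow the paper characterizes can be sketched as an iteration that shrinks a model until its estimated footprint fits the device budget while accuracy stays acceptable. The budget figures, the width-scaling shrink step, and both estimator functions below are hypothetical stand-ins for real training and profiling.

```python
# Hypothetical sketch of a closed-loop model-development workflow for
# microcontroller-class devices: design -> estimate footprint -> shrink ->
# re-check, looping until the model fits the on-device RAM budget.

RAM_BUDGET_KB = 256   # e.g. a Cortex-M class MCU (illustrative)
MIN_ACCURACY = 0.90   # acceptable task performance (illustrative)

def estimate_footprint_kb(width):
    # Stand-in for a real memory profiler: footprint grows with width.
    return 2.0 * width

def estimate_accuracy(width):
    # Stand-in for training + validation: accuracy saturates with width.
    return min(0.99, 0.75 + 0.002 * width)

width = 512  # start from an over-provisioned design
while estimate_footprint_kb(width) > RAM_BUDGET_KB:
    width = int(width * 0.8)  # shrink step (e.g. pruning / width scaling)

print(width, estimate_footprint_kb(width), estimate_accuracy(width))
```

In a real instance of this workflow the shrink step would be pruning, quantization, or neural architecture search, and the estimators would be replaced by actual on-device profiling and validation runs.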


Machine Learning: A Review of Learning Types

Shagan Sah

In this paper, various machine learning techniques are discussed. These algorithms are used for many applications, including data classification, prediction, and pattern recognition. The primary goal of machine learning is to automate tasks that would otherwise require human assistance by training an algorithm on relevant data. This paper should also serve as a collection of machine learning terminology for easy reference.


A Primer for tinyML Predictive Maintenance: Input and Model Optimisation

Emil Jorgensen Njor, Jan Madsen

In this paper, we investigate techniques used to optimise tinyML-based Predictive Maintenance (PdM). We first describe PdM and tinyML and how they can provide an alternative to cloud-based PdM. We present the background behind deploying PdM using tinyML, including commonly used libraries, hardware, datasets and models. Furthermore, we show known techniques for optimising tinyML models. We argue that optimising the entire tinyML pipeline, not just the models themselves, is required to deploy tinyML-based PdM in an industrial setting. To provide an example, we create a tinyML model and present early results of optimising the input given to the model.
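One common form of the input optimisation mentioned above is decimating a raw sensor trace before it reaches the model, trading input resolution for memory and compute. The sketch below block-averages a vibration-like signal; the decimation factor and the synthetic trace are hypothetical choices for illustration, not values from the paper.

```python
# Illustrative input optimisation for tinyML PdM: decimate a sensor signal
# by averaging non-overlapping blocks, so the model sees 10x fewer samples.

def decimate(signal, factor):
    """Average non-overlapping blocks of `factor` samples."""
    return [
        sum(signal[i:i + factor]) / factor
        for i in range(0, len(signal) - factor + 1, factor)
    ]

raw = [float(i % 10) for i in range(1000)]  # stand-in vibration trace
small = decimate(raw, factor=10)

print(len(raw), len(small))  # the model input shrinks by the factor
```

Shrinking the input this way reduces both the RAM needed to buffer samples and the size of the model's first layer, which is why the authors argue the whole pipeline, not just the model, must be optimised.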