Deploying TinyML for energy-efficient object detection and communication in low-power edge AI systems

Year : 2025

Publisher : Nature Research

Source Title : Scientific Reports

Abstract

Edge Artificial Intelligence (Edge AI) is driving the widespread deployment of neural network models on resource-constrained microcontroller units (MCUs), enabling real-time, on-device data processing. This approach significantly reduces cloud dependency, making it ideal for applications in industrial automation and IoT. However, deploying deep learning models on such constrained devices poses significant challenges due to limitations in memory, computational power, and energy capacity. This paper presents a real-time object detection system optimized for energy efficiency and scalability, which integrates well-established model compression techniques, such as quantization, with a low-cost MCU-based platform. The system leverages MobileNetV2, a lightweight neural network, quantized to achieve a favorable trade-off between accuracy and resource consumption. The proposed solution integrates a camera and Wi-Fi module for capturing and transmitting image data, utilizing dual-mode TCP/UDP communication to balance reliability and low-latency transmission for IoT applications. We present a comprehensive system-level analysis, exploring the trade-offs between latency, memory, energy consumption, and model size. The Visual Wake Words (VWW) dataset is used to demonstrate the practical performance and scalability of the system for real-time applications in smart devices, industrial monitoring, and environmental sensing. This work emphasizes the integration of TinyML models with constrained hardware and offers a foundation for scalable, autonomous, energy-efficient Edge AI solutions. Quantitatively, 8-bit post-training quantization achieved a 3– storage reduction, yielding deployable flash footprints of 286–536 KB within a 1 MB flash / 256 KB SRAM budget; on-device inference latency ranged from 3.47 to 14.98 ms per frame, with energy per inference of 10.6–22.1 J, while the quantized MobileNet variants maintained accuracy.
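The 8-bit post-training quantization the abstract refers to can be illustrated with a minimal sketch of the underlying affine quantization arithmetic. This is not the paper's pipeline (which would typically go through a converter toolchain such as TFLite); it is a standalone NumPy demonstration of how a float32 tensor maps to int8 plus a scale and zero point, and where the 4x storage saving per tensor comes from. All names (`quantize_affine`, `dequantize`) are illustrative.

```python
import numpy as np

def quantize_affine(x, num_bits=8):
    """Affine (asymmetric) quantization of a float tensor to signed
    8-bit integers, as used in standard 8-bit PTQ pipelines."""
    qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1  # -128 .. 127
    scale = (x.max() - x.min()) / (qmax - qmin)
    zero_point = int(round(qmin - x.min() / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover an approximate float tensor from the int8 representation."""
    return scale * (q.astype(np.float32) - zero_point)

np.random.seed(0)
weights = np.random.randn(64, 64).astype(np.float32)  # toy layer weights
q, s, zp = quantize_affine(weights)
recovered = dequantize(q, s, zp)

# int8 storage is 4x smaller than float32 for the same tensor shape
print(weights.nbytes // q.nbytes)  # 4
```

The per-element reconstruction error is bounded by roughly one quantization step `s`, which is why 8-bit quantized MobileNet variants can retain accuracy while shrinking the flash footprint.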
In wireless reporting, UDP reduced one-way latency relative to TCP, whereas TCP provided higher delivery reliability, underscoring application-dependent protocol trade-offs for real-time embedded deployments.
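The dual-mode TCP/UDP trade-off can be sketched with a small loopback example: a sender that picks a connectionless UDP datagram when one-way latency matters most, or a length-prefixed TCP stream when delivery must be confirmed. This is an illustrative sketch, not the paper's firmware; the host, port, and framing convention are assumptions for the demo.

```python
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 9099  # hypothetical local endpoint for the demo

def send_frame(payload: bytes, reliable: bool) -> None:
    """Dual-mode sender: TCP when delivery must be confirmed,
    UDP when minimal one-way latency outweighs reliability."""
    if reliable:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.connect((HOST, PORT))
            # length-prefixed so the receiver knows when the frame ends
            s.sendall(len(payload).to_bytes(4, "big") + payload)
    else:
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.sendto(payload, (HOST, PORT))  # fire-and-forget datagram

# Minimal loopback receivers so the sketch runs end to end.
received = {}

def udp_server():
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.bind((HOST, PORT))
        received["udp"], _ = s.recvfrom(65535)

def tcp_server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind((HOST, PORT))
        s.listen(1)
        conn, _ = s.accept()
        with conn:
            size = int.from_bytes(conn.recv(4), "big")
            received["tcp"] = conn.recv(size)

for server, reliable in ((udp_server, False), (tcp_server, True)):
    t = threading.Thread(target=server)
    t.start()
    time.sleep(0.1)  # give the server a moment to bind before sending
    send_frame(b"frame-bytes", reliable)
    t.join()

print(received)  # {'udp': b'frame-bytes', 'tcp': b'frame-bytes'}
```

On a real Wi-Fi link, the UDP path avoids the connection handshake and retransmission overhead (lower one-way latency, no delivery guarantee), while the TCP path adds both (higher latency, confirmed delivery), matching the application-dependent trade-off described above.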