Optimizing Deep Learning Models for On-Orbit Deployment Through Neural Architecture Search


Abstract

Advancements in spaceborne edge computing have facilitated the incorporation of Artificial Intelligence (AI)-powered chips into CubeSats, allowing for intelligent data handling and enhanced analytical capabilities with greater operational autonomy. This class of satellites faces stringent energy and memory constraints, necessitating lightweight models that are often obtained through compression techniques. This paper addresses model compression via Neural Architecture Search (NAS) to achieve computational efficiency and a balance between accuracy, size, and latency. More specifically, we design an evolutionary NAS framework for onboard processing and test its capabilities on a burned-area segmentation test case. The proposed solution jointly optimizes network architecture and deployment for hardware-specific, resource-constrained platforms. Additionally, hardware awareness is introduced into the optimization loop to tailor the network topology to the specific target edge computing chip. The resulting models, which were designed for CubeSat-class hardware, i.e., an NVIDIA Jetson AGX Orin and the Intel Movidius Myriad X, exhibit a memory footprint below 1 MB, outperform handcrafted baselines in terms of latency (3× faster), and maintain a competitive mean Intersection over Union (mIoU), additionally enabling real-time, high-resolution inference in orbit.
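The evolutionary, hardware-aware search described above can be illustrated with a minimal sketch. All names, search-space choices, and cost estimators here are illustrative assumptions, not the paper's actual framework: in the real system, the accuracy proxy would come from training/evaluating candidates, and the latency and size estimates from on-device measurements or lookup tables for the target chip.

```python
import random

# Hypothetical search space for a lightweight segmentation backbone
# (names and ranges are illustrative, not taken from the paper).
SEARCH_SPACE = {
    "depth": [2, 3, 4],    # number of encoder stages
    "width": [8, 16, 32],  # base channel count
    "kernel": [3, 5],      # convolution kernel size
}

def random_arch():
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def mutate(arch):
    # Re-sample one randomly chosen dimension of the architecture.
    child = dict(arch)
    key = random.choice(list(SEARCH_SPACE))
    child[key] = random.choice(SEARCH_SPACE[key])
    return child

def estimate_size_kb(arch):
    # Crude parameter-count proxy: grows with depth, width, kernel area.
    return arch["depth"] * (arch["width"] ** 2) * (arch["kernel"] ** 2) / 64

def estimate_latency_ms(arch):
    # Stand-in for an on-device latency measurement on the target chip.
    return 0.5 * arch["depth"] * arch["width"] * arch["kernel"] / 8

def estimate_accuracy(arch):
    # Stand-in for a trained-and-evaluated mIoU score.
    return min(1.0, 0.4 + 0.05 * arch["depth"] + 0.005 * arch["width"])

def fitness(arch, size_budget_kb=1024.0):
    # Hardware awareness enters here: candidates over the 1 MB memory
    # budget are rejected; the rest trade accuracy against latency.
    size = estimate_size_kb(arch)
    if size > size_budget_kb:
        return -size
    return estimate_accuracy(arch) - 0.01 * estimate_latency_ms(arch)

def evolve(generations=20, population=8, seed=0):
    # Simple (mu + lambda)-style loop: keep the best half, refill by mutation.
    random.seed(seed)
    pop = [random_arch() for _ in range(population)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: population // 2]
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(population - len(survivors))]
    return max(pop, key=fitness)

best = evolve()
print(best, round(estimate_size_kb(best), 1), "kB")
```

Swapping `estimate_latency_ms` for a per-chip measurement (e.g., profiled on the Jetson or the Myriad X) is what makes the loop hardware-aware: the same search then yields different topologies for different target accelerators.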
