{"title":"基于超网络加速评估的快速数据感知神经结构搜索","authors":"Emil Njor , Colby Banbury , Xenofon Fafoutis","doi":"10.1016/j.iot.2025.101688","DOIUrl":null,"url":null,"abstract":"<div><div>Tiny machine learning (TinyML) promises to revolutionize fields such as healthcare, environmental monitoring, and industrial maintenance by running machine learning models on low-power embedded systems. However, the complex optimizations required for successful TinyML deployment continue to impede its widespread adoption.</div><div>A promising route to simplifying TinyML is through automatic machine learning (AutoML), which can distill elaborate optimization workflows into accessible key decisions. Notably, Hardware Aware Neural Architecture Searches — where a computer searches for an optimal TinyML model based on predictive performance and hardware metrics — have gained significant traction, producing some of today’s most widely used TinyML models.</div><div>TinyML systems operate under extremely tight resource constraints, such as a few kB of memory and an energy consumption in the mW range. In this tight design space, the choice of input data configuration offers an attractive accuracy-latency tradeoff. Achieving truly optimal TinyML systems thus requires jointly tuning both input data and model architecture.</div><div>Despite its importance, this “Data Aware Neural Architecture Search” remains underexplored. To address this gap, we propose a new state-of-the-art Data Aware Neural Architecture Search technique and demonstrate its effectiveness on the novel TinyML “Wake Vision” dataset. Our experiments show that across varying time and hardware constraints, Data Aware Neural Architecture Search consistently discovers superior TinyML systems compared to purely architecture-focused methods, underscoring the critical role of data-aware optimization in advancing TinyML.</div></div>","PeriodicalId":29968,"journal":{"name":"Internet of Things","volume":"33 ","pages":"Article 101688"},"PeriodicalIF":7.6000,"publicationDate":"2025-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Fast data aware neural architecture search via supernet accelerated evaluation\",\"authors\":\"Emil Njor , Colby Banbury , Xenofon Fafoutis\",\"doi\":\"10.1016/j.iot.2025.101688\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Tiny machine learning (TinyML) promises to revolutionize fields such as healthcare, environmental monitoring, and industrial maintenance by running machine learning models on low-power embedded systems. However, the complex optimizations required for successful TinyML deployment continue to impede its widespread adoption.</div><div>A promising route to simplifying TinyML is through automatic machine learning (AutoML), which can distill elaborate optimization workflows into accessible key decisions. Notably, Hardware Aware Neural Architecture Searches — where a computer searches for an optimal TinyML model based on predictive performance and hardware metrics — have gained significant traction, producing some of today’s most widely used TinyML models.</div><div>TinyML systems operate under extremely tight resource constraints, such as a few kB of memory and an energy consumption in the mW range. In this tight design space, the choice of input data configuration offers an attractive accuracy-latency tradeoff. 
Achieving truly optimal TinyML systems thus requires jointly tuning both input data and model architecture.</div><div>Despite its importance, this “Data Aware Neural Architecture Search” remains underexplored. To address this gap, we propose a new state-of-the-art Data Aware Neural Architecture Search technique and demonstrate its effectiveness on the novel TinyML “Wake Vision” dataset. Our experiments show that across varying time and hardware constraints, Data Aware Neural Architecture Search consistently discovers superior TinyML systems compared to purely architecture-focused methods, underscoring the critical role of data-aware optimization in advancing TinyML.</div></div>\",\"PeriodicalId\":29968,\"journal\":{\"name\":\"Internet of Things\",\"volume\":\"33 \",\"pages\":\"Article 101688\"},\"PeriodicalIF\":7.6000,\"publicationDate\":\"2025-07-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Internet of Things\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2542660525002021\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Internet of Things","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2542660525002021","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Fast data aware neural architecture search via supernet accelerated evaluation
Tiny machine learning (TinyML) promises to revolutionize fields such as healthcare, environmental monitoring, and industrial maintenance by running machine learning models on low-power embedded systems. However, the complex optimizations required for successful TinyML deployment continue to impede its widespread adoption.
A promising route to simplifying TinyML is automated machine learning (AutoML), which can distill elaborate optimization workflows into a few accessible key decisions. Notably, Hardware-Aware Neural Architecture Search, in which a computer searches for an optimal TinyML model based on predictive performance and hardware metrics, has gained significant traction, producing some of today's most widely used TinyML models.
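To make the hardware-aware search objective concrete, here is a minimal Python sketch that scores candidate models by accuracy while enforcing flash and latency budgets. The Candidate fields, budget defaults, and penalty weight are illustrative assumptions for exposition, not the formulation used in the paper.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    """A candidate TinyML model with measured or estimated metrics."""
    accuracy: float    # validation accuracy in [0, 1]
    flash_kb: float    # model size in kB of flash
    latency_ms: float  # inference latency on the target MCU

def hardware_aware_score(c: Candidate,
                         flash_budget_kb: float = 256.0,
                         latency_budget_ms: float = 100.0) -> float:
    """Reject candidates that violate hard constraints; otherwise score
    accuracy minus a normalized hardware-cost penalty (weight 0.1 is an
    illustrative choice)."""
    if c.flash_kb > flash_budget_kb or c.latency_ms > latency_budget_ms:
        return float("-inf")  # infeasible on the target hardware
    hw_cost = 0.5 * (c.flash_kb / flash_budget_kb) \
            + 0.5 * (c.latency_ms / latency_budget_ms)
    return c.accuracy - 0.1 * hw_cost

# Pick the best of a small illustrative candidate pool.
pool = [Candidate(0.85, 200, 80), Candidate(0.88, 300, 60), Candidate(0.83, 120, 40)]
print(max(pool, key=hardware_aware_score))
```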
TinyML systems operate under extremely tight resource constraints, such as a few kB of memory and a power consumption in the mW range. In this tight design space, the choice of input data configuration offers an attractive accuracy-latency tradeoff. Achieving truly optimal TinyML systems thus requires jointly tuning both the input data configuration and the model architecture.
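As an illustration of what joint tuning means in practice, the sketch below enumerates a toy search space that couples a data axis (input resolution, color channels) with an architecture axis (width multiplier, depth). The specific dimensions and values are assumptions for illustration, not the search space used in the paper.

```python
import itertools

# Illustrative joint search space: the data axis is searched alongside
# the architecture axis, so the search can trade input fidelity against
# model capacity. All values are assumptions for illustration.
data_space = {
    "resolution": [32, 64, 96],  # input image side length in pixels
    "channels":   [1, 3],        # grayscale or RGB
}
arch_space = {
    "width_mult": [0.25, 0.5, 1.0],
    "num_blocks": [2, 4, 6],
}

def joint_configurations():
    """Yield every (data config, architecture config) pair.
    A real search would sample this space rather than enumerate it."""
    keys = list(data_space) + list(arch_space)
    values = list(data_space.values()) + list(arch_space.values())
    for combo in itertools.product(*values):
        yield dict(zip(keys, combo))

print(sum(1 for _ in joint_configurations()))  # 3 * 2 * 3 * 3 = 54 configurations
```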
Despite its importance, this "Data-Aware Neural Architecture Search" remains underexplored. To address this gap, we propose a new state-of-the-art Data-Aware Neural Architecture Search technique and demonstrate its effectiveness on the novel TinyML "Wake Vision" dataset. Our experiments show that, across varying time and hardware constraints, Data-Aware Neural Architecture Search consistently discovers superior TinyML systems compared to purely architecture-focused methods, underscoring the critical role of data-aware optimization in advancing TinyML.
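The "supernet accelerated evaluation" of the title commonly refers to weight sharing: rather than training every candidate from scratch, subnetworks are evaluated using slices of one jointly trained supernet. The NumPy sketch below shows only this slicing idea for a single dense layer, under assumed shapes; it is a minimal illustration of the general technique, not the authors' implementation.

```python
import numpy as np

class SharedDense:
    """One dense layer of a toy supernet: subnets of different widths
    reuse slices of the same weight matrix (weight sharing)."""
    def __init__(self, in_dim: int, max_width: int, rng: np.random.Generator):
        self.W = rng.normal(scale=0.1, size=(in_dim, max_width))
        self.b = np.zeros(max_width)

    def forward(self, x: np.ndarray, width: int) -> np.ndarray:
        # Evaluate a candidate width by slicing the shared weights,
        # avoiding a from-scratch training run per candidate.
        return np.maximum(x @ self.W[:, :width] + self.b[:width], 0.0)

rng = np.random.default_rng(0)
layer = SharedDense(in_dim=16, max_width=64, rng=rng)
x = rng.normal(size=(1, 16))
for width in (16, 32, 64):  # candidate architectures share one set of weights
    print(width, layer.forward(x, width).shape)
```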
Journal overview:
Internet of Things: Engineering Cyber Physical Human Systems is a comprehensive journal encouraging cross-collaboration between researchers, engineers, and practitioners in the field of IoT and Cyber Physical Human Systems. The journal offers a unique platform to exchange scientific information on the entire breadth of technology, science, and societal applications of the IoT.
The journal will place a high priority on timely publication, and provide a home for high-quality research.
Furthermore, the journal is interested in publishing topical Special Issues on any aspect of the IoT.