Complexity bounded classification of fish-eye distorted objects with micro-controllers

D. Pau, Alessandro Carra, Marco Garzola, L. Falaschetti, C. Turchetti
DOI: 10.1109/MELECON53508.2022.9842897
Published in: 2022 IEEE 21st Mediterranean Electrotechnical Conference (MELECON)
Publication date: 2022-06-14
Citations: 1

Abstract

Since 2019, tiny machine learning has established itself as an innovative technology trend deployed at the edge and has become pervasive in many IoT applications. One interesting application, addressed by this work, concerns the welfare of laboratory animals, which can be preserved by acquiring and classifying image data to monitor the presence of objects in their cages. For example, activities of mice and rats, such as drinking and eating, can be indicators of their state of health. In that context, $\mu$BottleNet has been developed to classify the presence or absence of the water bottle, while $\mu$FoodNet classifies the level of the food in the feeder. These neural networks (NNs) achieved 99.8% accuracy. To the best of our knowledge, no datasets were available to support this study, so we built three image datasets to train the neural networks on both Bottle and Food objects. From image capture to inference execution, the tasks were carried out by STM32L4 (an ultra-low-power device consuming 120 $\mu$A/MHz) and STM32H7 micro-controller units (MCUs). Both NNs were designed to fit into constrained MCU resources. Special attention was given to on-chip memory occupation to keep the RAM footprint at 39.44 KBytes. To evaluate and field-test the performance of these models (and to compare them against MobileNetV2), a graphical user interface (GUI) was developed, capable of rendering the validation and test results of inference runs on the MCUs. K-fold cross-validation was run and the resulting performance of the two NNs was compared to MobileNetV2, confirming that $\mu$BottleNet and $\mu$FoodNet reached and exceeded the accuracy achieved by a more complex NN architecture.
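The K-fold cross-validation protocol mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the authors' code: the fold-splitting logic and the stubbed per-fold evaluator (standing in for training and validating $\mu$BottleNet or $\mu$FoodNet on one fold) are assumptions for demonstration.

```python
def kfold_indices(n_samples, k):
    """Yield (train_idx, val_idx) pairs splitting n_samples into k roughly equal folds."""
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    indices = list(range(n_samples))
    start = 0
    for size in fold_sizes:
        val = indices[start:start + size]          # held-out fold
        train = indices[:start] + indices[start + size:]  # remaining k-1 folds
        yield train, val
        start += size

def cross_validate(evaluate_fold, n_samples, k=5):
    """Average the accuracy returned by evaluate_fold(train_idx, val_idx) over k folds."""
    scores = [evaluate_fold(tr, va) for tr, va in kfold_indices(n_samples, k)]
    return sum(scores) / len(scores)

if __name__ == "__main__":
    # Dummy evaluator: a real one would train the small NN on `tr`
    # and report validation accuracy on `va`.
    mean_acc = cross_validate(lambda tr, va: 1.0, n_samples=100, k=5)
    print(mean_acc)
```

The same loop would be run once per candidate architecture (e.g. the two tiny NNs and the MobileNetV2 baseline), and the mean per-fold accuracies compared.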