Implementation and Optimization of Neural Networks for Tiny Hardware Devices

Hadi Al Zein, Mohamad Aoude, Youssef Harkous
DOI: 10.1109/IC2SPM56638.2022.9988992
Published in: 2022 International Conference on Smart Systems and Power Management (IC2SPM)
Publication date: 2022-11-10
Citations: 1

Abstract

Traditionally, neural network inferencing for tiny hardware devices took place in a centralized, server-based manner. With more real-time applications coming into play, where security and latency are concerns, there is a growing need to move inferencing to the edge. This paper describes a machine learning pipeline that carries neural networks from their initial forms to compressed forms deployable on tiny hardware devices, while maintaining acceptable accuracy in the optimized models. We review the different software optimization techniques used to compress neural networks into their deployable forms. The prototype is a proof of concept showing that applying knowledge distillation from a highly accurate ResNet20 teacher model to a simple CNN student model, followed by post-training quantization, achieves good multi-class accuracy on a constrained Arduino Nano 33 BLE Sense with low power consumption and low inferencing latency.
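The first stage of the pipeline the abstract describes is knowledge distillation: a small student network is trained to match the temperature-softened output distribution of a larger teacher (here, a ResNet20). The paper does not publish its loss formulation or hyperparameters, so the following is a minimal NumPy sketch of the standard Hinton-style distillation loss; the temperature and alpha values are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax; a higher T softens the distribution,
    # exposing the teacher's "dark knowledge" about non-target classes.
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.7):
    # Weighted sum of a soft term (cross-entropy against the teacher's
    # temperature-softened targets) and a hard term (cross-entropy
    # against the ground-truth labels). Hyperparameters are illustrative.
    p_teacher = softmax(teacher_logits, temperature)
    log_p_student_t = np.log(softmax(student_logits, temperature) + 1e-12)
    # The T^2 factor keeps the soft-term gradient magnitude comparable
    # across temperatures (Hinton et al.'s convention).
    soft_loss = -(p_teacher * log_p_student_t).sum(axis=-1).mean() * temperature ** 2
    log_p_student = np.log(softmax(student_logits) + 1e-12)
    hard_loss = -log_p_student[np.arange(len(labels)), labels].mean()
    return alpha * soft_loss + (1 - alpha) * hard_loss
```

In a real training loop this loss would be expressed in the framework's tensor ops so it can be backpropagated through the student; the teacher's logits are computed once per batch with gradients disabled.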
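The second stage is post-training quantization, which maps the distilled student's float32 weights and activations to 8-bit integers so the model fits the Arduino Nano 33 BLE Sense's memory and runs on integer arithmetic. In practice this is typically done with a converter toolchain (e.g. TensorFlow Lite's post-training quantization); the sketch below only illustrates the underlying asymmetric affine mapping, real = scale * (q - zero_point), and is not the paper's implementation.

```python
import numpy as np

def quantize_affine(x, num_bits=8):
    # Asymmetric affine quantization to unsigned integers:
    #   real_value ≈ scale * (q - zero_point)
    qmin, qmax = 0, 2 ** num_bits - 1
    x_min = min(float(x.min()), 0.0)  # keep 0.0 exactly representable,
    x_max = max(float(x.max()), 0.0)  # as integer kernels require
    scale = (x_max - x_min) / (qmax - qmin) or 1.0  # guard all-zero input
    zero_point = int(round(qmin - x_min / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    # Recover approximate real values; error is bounded by the step size.
    return scale * (q.astype(np.float32) - zero_point)
```

Per-tensor scales like this one are the simplest scheme; production converters usually calibrate ranges on a representative dataset and may use per-channel scales for weights to limit accuracy loss.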