{"title":"神经网络在微型硬件设备中的实现与优化","authors":"Hadi Al Zein, Mohamad Aoude, Youssef Harkous","doi":"10.1109/IC2SPM56638.2022.9988992","DOIUrl":null,"url":null,"abstract":"Traditionally, neural network inferencing on tiny hardware devices took place in a centralized server-based manner. With more real-time applications coming into play, where security and latency are a concern, there has become a need to move inferencing to the edge. This paper describes a machine learning pipeline to carry neural networks from their initial forms to compressed forms deployable on tiny hardware devices, while maintaining acceptable accuracies of the optimized models. We will review the different software optimization techniques used to compress neural networks to their deployable forms. The prototype is a proof of concept showing that applying knowledge distillation from a highly accurate ResNet20 model to a simple CNN student model, followed by post-training quantization, achieves good multi-class accuracy on a constrained Arduino Nano 33 BLE Sense at low power consumption and with low inferencing latency.","PeriodicalId":179072,"journal":{"name":"2022 International Conference on Smart Systems and Power Management (IC2SPM)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Implementation and Optimization of Neural Networks for Tiny Hardware Devices\",\"authors\":\"Hadi Al Zein, Mohamad Aoude, Youssef Harkous\",\"doi\":\"10.1109/IC2SPM56638.2022.9988992\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Traditionally, neural network inferencing on tiny hardware devices took place in a centralized server-based manner. With more real-time applications coming into play, where security and latency are a concern, there has become a need to move inferencing to the edge. This paper describes a machine learning pipeline to carry neural networks from their initial forms to compressed forms deployable on tiny hardware devices, while maintaining acceptable accuracies of the optimized models. We will review the different software optimization techniques used to compress neural networks to their deployable forms. 
The prototype is a proof of concept showing that applying knowledge distillation from a highly accurate ResNet20 model to a simple CNN student model, followed by post-training quantization, achieves good multi-class accuracy on a constrained Arduino Nano 33 BLE Sense at low power consumption and with low inferencing latency.\",\"PeriodicalId\":179072,\"journal\":{\"name\":\"2022 International Conference on Smart Systems and Power Management (IC2SPM)\",\"volume\":\"25 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-11-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 International Conference on Smart Systems and Power Management (IC2SPM)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IC2SPM56638.2022.9988992\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 International Conference on Smart Systems and Power Management (IC2SPM)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IC2SPM56638.2022.9988992","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
Traditionally, neural network inferencing for tiny hardware devices took place in a centralized, server-based manner. With more real-time applications coming into play, where security and latency are concerns, the need has arisen to move inferencing to the edge. This paper describes a machine learning pipeline that takes neural networks from their initial forms to compressed forms deployable on tiny hardware devices, while maintaining acceptable accuracy in the optimized models. We review the different software optimization techniques used to compress neural networks into their deployable forms. The prototype is a proof of concept showing that applying knowledge distillation from a highly accurate ResNet20 teacher model to a simple CNN student model, followed by post-training quantization, achieves good multi-class accuracy on a constrained Arduino Nano 33 BLE Sense with low power consumption and low inferencing latency.
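
The paper does not publish its code, but the pipeline the abstract names (distill a ResNet20 teacher into a small CNN student, then apply post-training quantization) is concrete enough to sketch. The following is a minimal sketch under assumptions: TensorFlow/Keras with the TFLite converter as the toolchain (TFLite Micro being the usual runtime on the Arduino Nano 33 BLE Sense), a placeholder student architecture, and illustrative names (`resnet20`, `train_ds`) and hyperparameters (temperature, alpha) that are not taken from the paper.

```python
import tensorflow as tf

# Hypothetical student: a small CNN sized for microcontroller flash/RAM.
# Architecture, input shape, and class count are placeholders.
def build_student(num_classes=10, input_shape=(32, 32, 3)):
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(num_classes),  # raw logits
    ])

class Distiller(tf.keras.Model):
    """Hinton-style distillation: the student fits the hard labels and
    mimics the teacher's temperature-softened output distribution."""
    def __init__(self, teacher, student, temperature=4.0, alpha=0.1):
        super().__init__()
        teacher.trainable = False  # the teacher stays frozen
        self.teacher, self.student = teacher, student
        self.temperature, self.alpha = temperature, alpha
        self.ce = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
        self.kl = tf.keras.losses.KLDivergence()

    def train_step(self, data):
        x, y = data
        teacher_probs = tf.nn.softmax(self.teacher(x, training=False) / self.temperature)
        with tf.GradientTape() as tape:
            student_logits = self.student(x, training=True)
            hard_loss = self.ce(y, student_logits)
            # T^2 scaling keeps the soft-label gradient magnitude comparable
            # to the hard-label term as the temperature changes.
            soft_loss = self.kl(
                teacher_probs,
                tf.nn.softmax(student_logits / self.temperature),
            ) * self.temperature ** 2
            loss = self.alpha * hard_loss + (1.0 - self.alpha) * soft_loss
        grads = tape.gradient(loss, self.student.trainable_variables)
        self.optimizer.apply_gradients(zip(grads, self.student.trainable_variables))
        return {"loss": loss}

# Distillation, assuming a pretrained teacher and a tf.data pipeline.
student = build_student()
distiller = Distiller(teacher=resnet20, student=student)  # resnet20: assumed pretrained
distiller.compile(optimizer=tf.keras.optimizers.Adam())
distiller.fit(train_ds, epochs=30)                        # train_ds: assumed dataset

# Post-training full-integer quantization for TensorFlow Lite Micro.
def representative_data():
    for x, _ in train_ds.take(100):  # calibration batches for activation ranges
        yield [tf.cast(x, tf.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(student)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

with open("student_int8.tflite", "wb") as f:
    f.write(converter.convert())
```

The resulting .tflite file would then be embedded as a C array (e.g. via `xxd -i`) and executed with the TensorFlow Lite Micro interpreter in the Arduino sketch. Full-integer int8 quantization is what makes the model fit the board's constraints: it roughly quarters the model size relative to float32 and lets inferencing run on integer arithmetic, which is what drives the low power consumption and latency the abstract reports.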