TinyWolf — Efficient on-device TinyML training for IoT using enhanced Grey Wolf Optimization

Impact Factor: 6.0 | JCR Q1, Computer Science, Information Systems | CAS Tier 3 (Computer Science)
Subhrangshu Adhikary, Subhayu Dutta, Ashutosh Dhar Dwivedi
{"title":"TinyWolf — Efficient on-device TinyML training for IoT using enhanced Grey Wolf Optimization","authors":"Subhrangshu Adhikary ,&nbsp;Subhayu Dutta ,&nbsp;Ashutosh Dhar Dwivedi","doi":"10.1016/j.iot.2024.101365","DOIUrl":null,"url":null,"abstract":"<div><p>Training a deep learning model generally requires a huge amount of memory and processing power. Once trained, the learned model can make predictions very fast with very little resource consumption. The learned weights can be fitted into a microcontroller to build affordable embedded intelligence systems which is also known as TinyML. Although few attempts have been made, the limits of the state-of-the-art training of a deep learning model within a microcontroller can be pushed further. Generally deep learning models are trained with gradient optimizers which predict with high accuracy but require a very high amount of resources. On the other hand, nature-inspired meta-heuristic optimizers can be used to build a fast approximation of the model’s optimal solution with low resources. After a rigorous test, we have found that Grey Wolf Optimizer can be modified for enhanced uses of main memory, paging and swap space among <span><math><mrow><mi>α</mi><mo>,</mo><mspace></mspace><mi>β</mi><mo>,</mo><mspace></mspace><mi>δ</mi></mrow></math></span> and <span><math><mi>ω</mi></math></span> wolves. This modification saved up to 71% memory requirements compared to gradient optimizers. We have used this modification to train the TinyML model within a microcontroller of 256KB RAM. The performances of the proposed framework have been meticulously benchmarked on 13 open-sourced datasets.</p></div>","PeriodicalId":29968,"journal":{"name":"Internet of Things","volume":"28 ","pages":"Article 101365"},"PeriodicalIF":6.0000,"publicationDate":"2024-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2542660524003068/pdfft?md5=ab42e32e095597b7bee6c567498b913a&pid=1-s2.0-S2542660524003068-main.pdf","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Internet of Things","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2542660524003068","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

Training a deep learning model generally requires a large amount of memory and processing power. Once trained, however, the model can make predictions very quickly with very little resource consumption. The learned weights can be fitted into a microcontroller to build affordable embedded intelligence systems, an approach also known as TinyML. Although a few attempts have been made, the limits of state-of-the-art training of deep learning models within a microcontroller can be pushed further. Deep learning models are generally trained with gradient-based optimizers, which achieve high accuracy but require a very large amount of resources. Nature-inspired meta-heuristic optimizers, on the other hand, can build a fast approximation of the model's optimal solution with few resources. After rigorous testing, we found that the Grey Wolf Optimizer can be modified for improved use of main memory, paging, and swap space among the α, β, δ, and ω wolves. This modification reduced memory requirements by up to 71% compared to gradient optimizers. We used it to train a TinyML model within a microcontroller with 256 KB of RAM. The performance of the proposed framework has been meticulously benchmarked on 13 open-source datasets.
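As a rough illustration of how such a gradient-free optimizer searches for model weights, the sketch below implements the standard Grey Wolf Optimizer position update, in which the three best wolves (α, β, δ) steer the remaining ω wolves. This is a minimal sketch of textbook GWO, not the paper's memory-enhanced variant; the objective function, population size, bounds, and iteration count are illustrative assumptions.

```python
# Minimal sketch of the standard Grey Wolf Optimizer (GWO).
# Not the paper's memory-enhanced variant; parameters are illustrative.
import numpy as np

def grey_wolf_optimize(objective, dim, n_wolves=10, n_iters=50,
                       lower=-1.0, upper=1.0, seed=0):
    rng = np.random.default_rng(seed)
    # The population is the omega wolves; alpha/beta/delta are the 3 best.
    wolves = rng.uniform(lower, upper, size=(n_wolves, dim))

    for t in range(n_iters):
        fitness = np.array([objective(w) for w in wolves])
        order = np.argsort(fitness)
        alpha, beta, delta = wolves[order[:3]]

        a = 2.0 - 2.0 * t / n_iters  # exploration coefficient decays from 2 to 0
        for i in range(n_wolves):
            new_pos = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A = 2.0 * a * r1 - a
                C = 2.0 * r2
                D = np.abs(C * leader - wolves[i])  # distance to this leader
                new_pos += leader - A * D           # step guided by this leader
            # Average the three leader-guided steps and keep within bounds.
            wolves[i] = np.clip(new_pos / 3.0, lower, upper)

    fitness = np.array([objective(w) for w in wolves])
    best = wolves[np.argmin(fitness)]
    return best, float(objective(best))

# Example usage: fit the weights of a tiny linear model by minimising squared loss,
# standing in for the loss of a small neural network trained on-device.
if __name__ == "__main__":
    X = np.random.default_rng(1).normal(size=(32, 4))
    y = X @ np.array([0.5, -1.0, 2.0, 0.1])
    loss = lambda w: float(np.mean((X @ w - y) ** 2))
    w_best, loss_best = grey_wolf_optimize(loss, dim=4, n_iters=200)
    print(w_best, loss_best)
```

Because the update only needs the current positions and the three leaders, the working set stays small and fixed in size, which is what makes this family of optimizers attractive for memory-constrained training; the paper's contribution is in how that working set is split across main memory, paging, and swap space.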


Source Journal

Internet of Things
CiteScore: 3.60
Self-citation rate: 5.10%
Articles published: 115
Review time: 37 days
Journal description: Internet of Things: Engineering Cyber Physical Human Systems is a comprehensive journal encouraging cross-collaboration between researchers, engineers, and practitioners in the field of IoT and Cyber Physical Human Systems. The journal offers a unique platform to exchange scientific information on the entire breadth of technology, science, and societal applications of the IoT. The journal places a high priority on timely publication and provides a home for high-quality work. Furthermore, the journal is interested in publishing topical Special Issues on any aspect of IoT.