Model Compression for Resource-Constrained Mobile Robots

Timotheos Souroulla, Alberto Hata, Ahmad Terra, Özer Özkahraman, R. Inam
{"title":"Model Compression for Resource-Constrained Mobile Robots","authors":"Timotheos Souroulla, Alberto Hata, Ahmad Terra, Özer Özkahraman, R. Inam","doi":"10.4204/EPTCS.362.7","DOIUrl":null,"url":null,"abstract":"The number of mobile robots with constrained computing resources that need to execute complex machine learning models has been increasing during the past decade. Commonly, these robots rely on edge infrastructure accessible over wireless communication to execute heavy computational complex tasks. However, the edge might become unavailable and, consequently, oblige the execution of the tasks on the robot. This work focuses on making it possible to execute the tasks on the robots by reducing the complexity and the total number of parameters of pre-trained computer vision models. This is achieved by using model compression techniques such as Pruning and Knowledge Distillation. These compression techniques have strong theoretical and practical foundations, but their combined usage has not been widely explored in the literature. Therefore, this work especially focuses on investigating the effects of combining these two compression techniques. The results of this work reveal that up to 90% of the total number of parameters of a computer vision model can be removed without any considerable reduction in the model’s accuracy.","PeriodicalId":313985,"journal":{"name":"AREA@IJCAI-ECAI","volume":"7 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"AREA@IJCAI-ECAI","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.4204/EPTCS.362.7","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

The number of mobile robots with constrained computing resources that need to execute complex machine learning models has been increasing during the past decade. Commonly, these robots rely on edge infrastructure, accessed over wireless communication, to execute computationally heavy tasks. However, the edge might become unavailable, which forces the tasks to be executed on the robot itself. This work focuses on making such on-robot execution possible by reducing the complexity and the total number of parameters of pre-trained computer vision models. This is achieved with model compression techniques such as Pruning and Knowledge Distillation. These compression techniques have strong theoretical and practical foundations, but their combined usage has not been widely explored in the literature. Therefore, this work focuses in particular on investigating the effects of combining these two compression techniques. The results reveal that up to 90% of the total number of parameters of a computer vision model can be removed without any considerable reduction in the model's accuracy.
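The following is a minimal sketch, not the authors' implementation, of how the two compression techniques named in the abstract are commonly combined in PyTorch: L1 magnitude pruning of convolutional layers via torch.nn.utils.prune, followed by training the pruned student against a teacher with a temperature-softened knowledge-distillation loss. The 90% sparsity target, the temperature, and the loss weighting are illustrative assumptions, not values reported by the paper.

```python
# Sketch: magnitude pruning + knowledge distillation (assumed hyperparameters).
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.nn.utils.prune as prune


def prune_conv_layers(model: nn.Module, amount: float = 0.9) -> nn.Module:
    """Zero out `amount` of the smallest-magnitude weights in every Conv2d layer."""
    for module in model.modules():
        if isinstance(module, nn.Conv2d):
            prune.l1_unstructured(module, name="weight", amount=amount)
            prune.remove(module, "weight")  # make the pruning permanent
    return model


def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      labels: torch.Tensor,
                      temperature: float = 4.0,
                      alpha: float = 0.7) -> torch.Tensor:
    """Blend the soft-target KL term (teacher -> student) with the hard-label loss."""
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```

In a setup of this kind, the pruned network would typically be fine-tuned as the student against the original, uncompressed model acting as the teacher, which is one way the two techniques can be combined as the abstract describes.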