Robot Service for Elderly to Find Misplaced Items: A Resource Efficient Implementation on Low-Computational Device

Muhtadin, Billy, E. M. Yuniarno, Junaidillah Fadlil, Muchlisin Adi Saputra, I. Purnama, M. Purnomo
{"title":"Robot Service for Elderly to Find Misplaced Items: A Resource Efficient Implementation on Low-Computational Device","authors":"Muhtadin, Billy, E. M. Yuniarno, Junaidillah Fadlil, Muchlisin Adi Saputra, I. Purnama, M. Purnomo","doi":"10.1109/IAICT50021.2020.9172030","DOIUrl":null,"url":null,"abstract":"Elderly people often forget to put the items they need due to decreased memory. In this study, we developed an Integrated platform assistance robot providing support to elderly people. We developed a robot assistant platform that was equipped with an indoor positioning system that can help the elderly find misplaced items. Deep learning already has good accuracy in detecting the object but requires great computation resources. When applied to devices that have limited computing and memory capabilities such as robots, the computation time becomes slow or not applicable. We built a lightweight CNN that could run on a single board computer. To improve the accuracy of the network, we apply knowledge distillation by using an extensive network (YOLOv3) as a teacher. To increase computational speed, we do it by reducing the number of layers by implementing batch normalization fission. After being tested on the YOLO, knowledge distillation method can be used to increase accuracy, batch normalization fission will increase computation speed. From the experiment results using the VOC dataset on YOLO architecture with MobileNet feature extractor, the knowledge distillation method can increase accuracy by 9.4% from 0.3850 mAP to 0.4215 mAP and batch normalization fission can speeds up the computation time to 100.7% from 8.3 FPS to 16.66 FPS on CPU i7. 
The Knowledge Distillation successfully increase the model’s accuracy, reducing the model’s size, and batch normalization fusion method can speed up the detection process.","PeriodicalId":433718,"journal":{"name":"2020 IEEE International Conference on Industry 4.0, Artificial Intelligence, and Communications Technology (IAICT)","volume":"106 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 IEEE International Conference on Industry 4.0, Artificial Intelligence, and Communications Technology (IAICT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IAICT50021.2020.9172030","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Elderly people often forget where they have put the items they need because of declining memory. In this study, we developed an integrated assistance-robot platform that supports elderly people: the robot is equipped with an indoor positioning system that helps the elderly find misplaced items. Deep learning already achieves good accuracy in object detection, but it demands substantial computational resources; on devices with limited computing power and memory, such as robots, inference becomes too slow to be practical. We therefore built a lightweight CNN that can run on a single-board computer. To improve the network's accuracy, we apply knowledge distillation, using a larger network (YOLOv3) as the teacher. To increase computational speed, we reduce the number of layers through batch normalization fusion. Tested on the YOLO architecture, knowledge distillation increases accuracy, and batch normalization fusion increases computation speed. In experiments on the VOC dataset with a YOLO architecture using a MobileNet feature extractor, knowledge distillation raises accuracy by 9.4%, from 0.3850 mAP to 0.4215 mAP, and batch normalization fusion speeds up inference by 100.7%, from 8.3 FPS to 16.66 FPS on an i7 CPU. Knowledge distillation successfully increases the model's accuracy and reduces its size, while batch normalization fusion speeds up the detection process.
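The abstract does not spell out the distillation objective, so the sketch below shows the standard knowledge-distillation loss form (temperature-softened cross-entropy against the teacher, blended with the hard-label loss). The function names, the temperature `T=4.0`, and the weight `alpha=0.7` are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax; higher T produces softer distributions.
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Blend of soft-target loss (match the teacher at temperature T)
    and ordinary hard-label cross-entropy. T and alpha are assumed values."""
    p_teacher = softmax(teacher_logits, T)
    log_p_student_T = np.log(softmax(student_logits, T) + 1e-12)
    # Soft loss is scaled by T^2 so its gradient magnitude stays comparable.
    soft_loss = -(p_teacher * log_p_student_T).sum(axis=-1).mean() * (T * T)
    log_p_student = np.log(softmax(student_logits) + 1e-12)
    hard_loss = -log_p_student[np.arange(len(labels)), labels].mean()
    return alpha * soft_loss + (1 - alpha) * hard_loss
```

A student whose logits match the teacher's incurs a lower loss than one that contradicts it, which is the signal that lets the lightweight MobileNet-YOLO student absorb the YOLOv3 teacher's behavior.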
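The layer-reduction step the abstract describes is the standard batch-normalization fusion identity: a BN layer following a linear/convolutional layer can be folded into that layer's weights and bias, removing it at inference time. The sketch below assumes a per-output-channel weight matrix (the 1x1-convolution / fully-connected case) purely for illustration; the paper's actual layer shapes are not given here.

```python
import numpy as np

def fuse_conv_bn(W, b, gamma, beta, mean, var, eps=1e-5):
    """Fold BatchNorm(gamma, beta, mean, var) into the preceding layer (W, b),
    so that W_fused @ x + b_fused == BN(W @ x + b) for every input x."""
    scale = gamma / np.sqrt(var + eps)   # one multiplier per output channel
    W_fused = W * scale[:, None]         # scale each output channel's weights
    b_fused = (b - mean) * scale + beta  # absorb the BN shift into the bias
    return W_fused, b_fused
```

Because the fused layer is mathematically identical to conv-then-BN, accuracy is unchanged while one layer's worth of computation disappears, which is consistent with the roughly doubled FPS the abstract reports.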