Research on Object Classification Based on Visual-Tactile Fusion

Peng Zhang, Lu Bai, Dongri Shan
{"title":"基于视触觉融合的目标分类研究","authors":"Peng Zhang, Lu Bai, Dongri Shan","doi":"10.1117/12.2682381","DOIUrl":null,"url":null,"abstract":"As two modes of direct contact between robots and external environment, visual and tactile play a critical role in improving robot perception ability. In the real environment, it is difficult for the robot to achieve high accuracy when classifying objects only by a single mode (visual or tactile). In order to improve the classification accuracy of robots, a novel visual-tactile fusion method is proposed in this paper. Firstly, the ResNet18 is selected as the backbone network to extract visual features. To improve the accuracy of object localization and recognition in the visual network, the Position-Channel Attention Mechanism (PCAM) block is added after conv3 and conv4 of ResNet18. Then, the four-layer one-dimensional convolutional neural network is used to extract tactile features, and the extracted tactile features are fused with visual features at the feature layer. Finally, the experimental results demonstrate that compared with the existing methods, on the self-made dataset VHAC-52, the proposed method has improved the AUC and ACC by 1.60% and 1.47%, respectively.","PeriodicalId":440430,"journal":{"name":"International Conference on Electronic Technology and Information Science","volume":"128 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Research on object classification based on visual-tactile fusion\",\"authors\":\"Peng Zhang, Lu Bai, Dongri Shan\",\"doi\":\"10.1117/12.2682381\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"As two modes of direct contact between robots and external environment, visual and tactile play a critical role in improving robot perception ability. In the real environment, it is difficult for the robot to achieve high accuracy when classifying objects only by a single mode (visual or tactile). In order to improve the classification accuracy of robots, a novel visual-tactile fusion method is proposed in this paper. Firstly, the ResNet18 is selected as the backbone network to extract visual features. To improve the accuracy of object localization and recognition in the visual network, the Position-Channel Attention Mechanism (PCAM) block is added after conv3 and conv4 of ResNet18. Then, the four-layer one-dimensional convolutional neural network is used to extract tactile features, and the extracted tactile features are fused with visual features at the feature layer. 
Finally, the experimental results demonstrate that compared with the existing methods, on the self-made dataset VHAC-52, the proposed method has improved the AUC and ACC by 1.60% and 1.47%, respectively.\",\"PeriodicalId\":440430,\"journal\":{\"name\":\"International Conference on Electronic Technology and Information Science\",\"volume\":\"128 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-06-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Conference on Electronic Technology and Information Science\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1117/12.2682381\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Conference on Electronic Technology and Information Science","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1117/12.2682381","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Vision and touch are the two modalities through which a robot makes direct contact with its external environment, and they play a critical role in improving robot perception. In real environments, a robot struggles to classify objects accurately from a single modality (visual or tactile) alone. To improve classification accuracy, this paper proposes a novel visual-tactile fusion method. First, ResNet18 is selected as the backbone network for extracting visual features; to improve object localization and recognition in the visual network, a Position-Channel Attention Mechanism (PCAM) block is added after the conv3 and conv4 stages of ResNet18. Then, a four-layer one-dimensional convolutional neural network extracts tactile features, which are fused with the visual features at the feature layer. Finally, experiments on the self-made VHAC-52 dataset demonstrate that the proposed method improves AUC and ACC over existing methods by 1.60% and 1.47%, respectively.
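The abstract names the PCAM block but does not describe its internals, so the PyTorch sketch below is only one plausible reading: a channel-attention branch followed by a position (spatial) attention branch, in the style of CBAM. The class name, the reduction ratio, and the 7x7 spatial kernel are all assumptions, not details from the paper.

```python
import torch
import torch.nn as nn


class PCAM(nn.Module):
    """Hypothetical Position-Channel Attention Mechanism block.

    The paper names PCAM but its abstract gives no internals; this
    sketch assumes a CBAM-style design in which a channel-attention
    branch is followed by a position (spatial) attention branch.
    The reduction ratio and kernel size are guesses.
    """

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Channel attention: squeeze spatial dimensions to 1x1, then
        # excite each channel through a small bottleneck MLP.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Position attention: compress the channel axis to average and
        # max maps, then learn a single spatial attention map.
        self.position_gate = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_gate(x)              # reweight channels
        avg_map = x.mean(dim=1, keepdim=True)     # (N, 1, H, W)
        max_map = x.amax(dim=1, keepdim=True)     # (N, 1, H, W)
        attn = self.position_gate(torch.cat([avg_map, max_map], dim=1))
        return x * attn                           # reweight positions
```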
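Under the same caveats, here is a minimal sketch of how the two branches described in the abstract could be wired together: a ResNet18 visual branch with the (assumed) PCAM block from the sketch above inserted after the conv3 and conv4 stages (layer2 and layer3 in torchvision's naming), a four-layer one-dimensional CNN tactile branch, and concatenation at the feature layer. The tactile input shape, the layer widths, and the class count of 52 (inferred from the dataset name VHAC-52) are assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18


class VisualTactileFusion(nn.Module):
    """Sketch of the two-branch classifier described in the abstract.

    Visual branch: ResNet18 with PCAM (sketched above) inserted after
    the conv3 and conv4 stages (torchvision's layer2 and layer3).
    Tactile branch: a four-layer 1-D CNN. The branch outputs are
    concatenated at the feature layer and classified by one linear
    head. All widths and input shapes are assumptions.
    """

    def __init__(self, num_classes: int = 52, tactile_channels: int = 1):
        super().__init__()
        backbone = resnet18(weights=None)
        self.visual = nn.Sequential(
            backbone.conv1, backbone.bn1, backbone.relu, backbone.maxpool,
            backbone.layer1,
            backbone.layer2, PCAM(128),   # after conv3_x
            backbone.layer3, PCAM(256),   # after conv4_x
            backbone.layer4,
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # -> (N, 512)
        )

        def conv1d_block(c_in: int, c_out: int) -> nn.Sequential:
            return nn.Sequential(
                nn.Conv1d(c_in, c_out, kernel_size=3, padding=1),
                nn.BatchNorm1d(c_out),
                nn.ReLU(inplace=True),
                nn.MaxPool1d(2),
            )

        # Four-layer 1-D CNN over tactile sequences shaped (N, C, T).
        self.tactile = nn.Sequential(
            conv1d_block(tactile_channels, 32),
            conv1d_block(32, 64),
            conv1d_block(64, 128),
            conv1d_block(128, 256),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),   # -> (N, 256)
        )
        self.classifier = nn.Linear(512 + 256, num_classes)

    def forward(self, image: torch.Tensor, touch: torch.Tensor) -> torch.Tensor:
        v = self.visual(image)            # visual feature vector
        t = self.tactile(touch)           # tactile feature vector
        fused = torch.cat([v, t], dim=1)  # feature-layer fusion
        return self.classifier(fused)
```

A dummy forward pass, with both input shapes assumed: `VisualTactileFusion()(torch.randn(2, 3, 224, 224), torch.randn(2, 1, 64))` returns logits of shape (2, 52).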