Multimodal fusion recognition for digital twin

IF 7.5 · CAS Tier 2 (Computer Science) · JCR Q1 (Telecommunications)
Tianzhe Zhou, Xuguang Zhang, Bing Kang, Mingkai Chen
{"title":"数字孪生的多模式融合识别","authors":"Tianzhe Zhou,&nbsp;Xuguang Zhang,&nbsp;Bing Kang,&nbsp;Mingkai Chen","doi":"10.1016/j.dcan.2022.10.009","DOIUrl":null,"url":null,"abstract":"<div><p>The digital twin is the concept of transcending reality, which is the reverse feedback from the real physical space to the virtual digital space. People hold great prospects for this emerging technology. In order to realize the upgrading of the digital twin industrial chain, it is urgent to introduce more modalities, such as vision, haptics, hearing and smell, into the virtual digital space, which assists physical entities and virtual objects in creating a closer connection. Therefore, perceptual understanding and object recognition have become an urgent hot topic in the digital twin. Existing surface material classification schemes often achieve recognition through machine learning or deep learning in a single modality, ignoring the complementarity between multiple modalities. In order to overcome this dilemma, we propose a multimodal fusion network in our article that combines two modalities, visual and haptic, for surface material recognition. On the one hand, the network makes full use of the potential correlations between multiple modalities to deeply mine the modal semantics and complete the data mapping. On the other hand, the network is extensible and can be used as a universal architecture to include more modalities. Experiments show that the constructed multimodal fusion network can achieve 99.42% classification accuracy while reducing complexity.</p></div>","PeriodicalId":48631,"journal":{"name":"Digital Communications and Networks","volume":null,"pages":null},"PeriodicalIF":7.5000,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2352864822002176/pdfft?md5=5b53302ba67c5d8270cd69b448630eaf&pid=1-s2.0-S2352864822002176-main.pdf","citationCount":"0","resultStr":"{\"title\":\"Multimodal fusion recognition for digital twin\",\"authors\":\"Tianzhe Zhou,&nbsp;Xuguang Zhang,&nbsp;Bing Kang,&nbsp;Mingkai Chen\",\"doi\":\"10.1016/j.dcan.2022.10.009\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>The digital twin is the concept of transcending reality, which is the reverse feedback from the real physical space to the virtual digital space. People hold great prospects for this emerging technology. In order to realize the upgrading of the digital twin industrial chain, it is urgent to introduce more modalities, such as vision, haptics, hearing and smell, into the virtual digital space, which assists physical entities and virtual objects in creating a closer connection. Therefore, perceptual understanding and object recognition have become an urgent hot topic in the digital twin. Existing surface material classification schemes often achieve recognition through machine learning or deep learning in a single modality, ignoring the complementarity between multiple modalities. In order to overcome this dilemma, we propose a multimodal fusion network in our article that combines two modalities, visual and haptic, for surface material recognition. On the one hand, the network makes full use of the potential correlations between multiple modalities to deeply mine the modal semantics and complete the data mapping. On the other hand, the network is extensible and can be used as a universal architecture to include more modalities. 
Experiments show that the constructed multimodal fusion network can achieve 99.42% classification accuracy while reducing complexity.</p></div>\",\"PeriodicalId\":48631,\"journal\":{\"name\":\"Digital Communications and Networks\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":7.5000,\"publicationDate\":\"2024-04-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.sciencedirect.com/science/article/pii/S2352864822002176/pdfft?md5=5b53302ba67c5d8270cd69b448630eaf&pid=1-s2.0-S2352864822002176-main.pdf\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Digital Communications and Networks\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2352864822002176\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"TELECOMMUNICATIONS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Digital Communications and Networks","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2352864822002176","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"TELECOMMUNICATIONS","Score":null,"Total":0}
Citations: 0

Abstract


The digital twin is a concept that transcends reality: it provides reverse feedback from the real physical space to the virtual digital space, and this emerging technology holds great promise. To upgrade the digital twin industrial chain, more modalities, such as vision, haptics, hearing, and smell, must be introduced into the virtual digital space, helping physical entities and virtual objects form a closer connection. Perceptual understanding and object recognition have therefore become pressing topics in digital twin research. Existing surface material classification schemes typically perform recognition with machine learning or deep learning on a single modality, ignoring the complementarity between modalities. To overcome this limitation, we propose a multimodal fusion network that combines two modalities, visual and haptic, for surface material recognition. On the one hand, the network exploits the latent correlations between modalities to mine modal semantics deeply and complete the data mapping. On the other hand, the network is extensible and can serve as a universal architecture that accommodates more modalities. Experiments show that the proposed multimodal fusion network achieves 99.42% classification accuracy while reducing complexity.
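The paper itself is not reproduced on this page, so as a reading aid here is a minimal sketch, assuming PyTorch, of the kind of two-branch visual-haptic fusion classifier the abstract describes: each modality gets its own encoder, the per-modality features are concatenated, and a shared head predicts the surface material class. All module names, layer sizes, and input shapes (an RGB image for the visual branch, a 1-D haptic trace for the haptic branch) are illustrative assumptions, not the authors' architecture; the actual network may fuse features differently.

```python
# Minimal sketch (not the authors' code) of a two-branch visual-haptic
# fusion classifier. All layer sizes and input shapes are assumptions.
import torch
import torch.nn as nn

class VisualBranch(nn.Module):
    """Encodes a surface image into a fixed-length feature vector."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, feat_dim)

    def forward(self, x):                      # x: (B, 3, H, W)
        return self.fc(self.conv(x).flatten(1))

class HapticBranch(nn.Module):
    """Encodes a 1-D haptic signal (e.g., an acceleration trace)."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.fc = nn.Linear(64, feat_dim)

    def forward(self, x):                      # x: (B, 1, T)
        return self.fc(self.conv(x).flatten(1))

class FusionNet(nn.Module):
    """Concatenates per-modality features and classifies the material.
    New modalities can be added as extra encoder branches before the
    concatenation, matching the 'extensible universal architecture'
    idea in the abstract."""
    def __init__(self, num_classes, feat_dim=128):
        super().__init__()
        self.visual = VisualBranch(feat_dim)
        self.haptic = HapticBranch(feat_dim)
        self.head = nn.Sequential(
            nn.Linear(2 * feat_dim, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, img, sig):
        fused = torch.cat([self.visual(img), self.haptic(sig)], dim=1)
        return self.head(fused)

# Example forward pass with dummy data.
net = FusionNet(num_classes=10)
logits = net(torch.randn(4, 3, 64, 64), torch.randn(4, 1, 256))
print(logits.shape)  # torch.Size([4, 10])
```

Simple feature concatenation is only one fusion strategy; the point of the sketch is the structural one the abstract makes: because fusion happens after per-modality encoding, adding a third modality (hearing, smell) only requires another encoder branch, not a redesign of the whole network.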

Source journal
Digital Communications and Networks (Computer Science: Hardware and Architecture)
CiteScore: 12.80
Self-citation rate: 5.10%
Articles published: 915
Review time: 30 weeks
Journal description: Digital Communications and Networks is a journal focused on communication systems and networks. It publishes original articles and authoritative reviews that undergo rigorous peer review, and all articles are fully Open Access on ScienceDirect. The journal is indexed by the Science Citation Index Expanded (SCIE) and Scopus. In addition to regular articles, it may consider exceptional conference papers that have been significantly expanded, and it periodically releases special issues on specific aspects of the field.