Reconstructing Soft Robotic Touch via In-Finger Vision

IF 6.8 | Q1 | Automation & Control Systems
Ning Guo, Xudong Han, Shuqiao Zhong, Zhiyuan Zhou, Jian Lin, Fang Wan, Chaoyang Song
{"title":"Reconstructing Soft Robotic Touch via In-Finger Vision","authors":"Ning Guo,&nbsp;Xudong Han,&nbsp;Shuqiao Zhong,&nbsp;Zhiyuan Zhou,&nbsp;Jian Lin,&nbsp;Fang Wan,&nbsp;Chaoyang Song","doi":"10.1002/aisy.202400022","DOIUrl":null,"url":null,"abstract":"<p>Incorporating authentic tactile interactions into virtual environments presents a notable challenge for the emerging development of soft robotic metamaterials. In this study, a vision-based approach is introduced to learning proprioceptive interactions by simultaneously reconstructing the shape and touch of a soft robotic metamaterial (SRM) during physical engagements. The SRM design is optimized to the size of a finger with enhanced adaptability in 3D interactions while incorporating a see-through viewing field inside, which can be visually captured by a miniature camera underneath to provide a rich set of image features for touch digitization. Employing constrained geometric optimization, the proprioceptive process with aggregated multi-handles is modeled. This approach facilitates real-time, precise, and realistic estimations of the finger's mesh deformation within a virtual environment. Herein, a data-driven learning model is also proposed to estimate touch positions, achieving reliable results with impressive <i>R</i><sup>2</sup> scores of 0.9681, 0.9415, and 0.9541 along the <i>x</i>, <i>y</i>, and <i>z</i> axes. Furthermore, the robust performance of the proposed methods in touch-based human–cybernetic interfaces and human–robot collaborative grasping is demonstrated. In this study, the door is opened to future applications in touch-based digital twin interactions through vision-based soft proprioception.</p>","PeriodicalId":93858,"journal":{"name":"Advanced intelligent systems (Weinheim an der Bergstrasse, Germany)","volume":"6 10","pages":""},"PeriodicalIF":6.8000,"publicationDate":"2024-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/aisy.202400022","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Advanced intelligent systems (Weinheim an der Bergstrasse, Germany)","FirstCategoryId":"1085","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/aisy.202400022","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

Incorporating authentic tactile interactions into virtual environments presents a notable challenge for the emerging development of soft robotic metamaterials. In this study, a vision-based approach is introduced for learning proprioceptive interactions by simultaneously reconstructing the shape and touch of a soft robotic metamaterial (SRM) during physical engagements. The SRM design is optimized to the size of a finger, with enhanced adaptability in 3D interactions, while incorporating a see-through viewing field inside, which can be visually captured by a miniature camera underneath to provide a rich set of image features for touch digitization. The proprioceptive process is modeled with aggregated multi-handles using constrained geometric optimization, which facilitates real-time, precise, and realistic estimation of the finger's mesh deformation within a virtual environment. A data-driven learning model is also proposed to estimate touch positions, achieving R² scores of 0.9681, 0.9415, and 0.9541 along the x, y, and z axes. Furthermore, the robust performance of the proposed methods is demonstrated in touch-based human–cybernetic interfaces and human–robot collaborative grasping. This study opens the door to future applications in touch-based digital twin interactions through vision-based soft proprioception.
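
One common way to realize constrained geometric optimization with multiple handles is handle-constrained Laplacian mesh deformation, where a few tracked constraint points drive the reconstruction of the full mesh. The sketch below illustrates that generic technique only and is not the paper's actual model: the toy tetrahedron, uniform Laplacian, handle choice, and constraint weight are all illustrative assumptions.

```python
import numpy as np

def deform(V, edges, handles, targets, w=100.0):
    """Solve min ||L V' - L V||^2 + w^2 ||V'[handles] - targets||^2.

    V       : (n, 3) rest-pose vertex positions
    edges   : list of (i, j) mesh edges
    handles : indices of constrained vertices
    targets : (k, 3) desired handle positions
    """
    n = V.shape[0]
    # Uniform graph Laplacian: L = D - A.
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, j] -= 1.0
        L[j, i] -= 1.0
        L[i, i] += 1.0
        L[j, j] += 1.0
    delta = L @ V  # rest-pose differential coordinates to preserve
    # Stack soft handle constraints beneath the Laplacian system.
    C = np.zeros((len(handles), n))
    C[np.arange(len(handles)), handles] = w
    A = np.vstack([L, C])
    b = np.vstack([delta, w * targets])
    # One least-squares solve, with one right-hand side per coordinate axis.
    V_new, *_ = np.linalg.lstsq(A, b, rcond=None)
    return V_new

# Toy example (illustrative only): a tetrahedron, dragging vertex 3 upward
# while pinning vertex 0 in place.
V = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
V_def = deform(V, edges, handles=np.array([0, 3]),
               targets=np.array([[0, 0, 0], [0, 0, 1.5]], float))
print(V_def.round(3))
```

Stacking the weighted handle rows beneath the Laplacian turns hard constraints into soft ones, so each frame's deformed mesh comes out of a single least-squares solve; that per-frame cost is what makes real-time deformation estimation plausible.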

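The per-axis R² scores quoted in the abstract are straightforward to compute for any touch-position regressor. A minimal sketch follows, assuming scikit-learn is available and using synthetic stand-in data in place of the paper's model and dataset:

```python
import numpy as np
from sklearn.metrics import r2_score

# Synthetic stand-ins: ground-truth touch positions and hypothetical
# model predictions; the paper's data and model are not reproduced here.
rng = np.random.default_rng(0)
y_true = rng.uniform(-10.0, 10.0, size=(500, 3))          # (x, y, z) positions
y_pred = y_true + rng.normal(0.0, 0.8, size=y_true.shape)  # noisy predictions

# multioutput="raw_values" yields one R^2 per output column,
# i.e. one score per coordinate axis.
r2_x, r2_y, r2_z = r2_score(y_true, y_pred, multioutput="raw_values")
print(f"R^2  x: {r2_x:.4f}  y: {r2_y:.4f}  z: {r2_z:.4f}")
```

With multioutput="raw_values", r2_score returns the coefficient of determination separately for each output column, matching the x, y, z breakdown reported above.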

Source journal
CiteScore: 1.30
Self-citation rate: 0.00%
Articles published: 0
Review time: 4 weeks