TacSL: A Library for Visuotactile Sensor Simulation and Learning

Impact factor: 9.4 · CAS Tier 1 (Computer Science) · JCR Q1 (Robotics)
Iretiayo Akinola, Jie Xu, Jan Carius, Dieter Fox, Yashraj Narang
{"title":"TacSL: A Library for Visuotactile Sensor Simulation and Learning","authors":"Iretiayo Akinola;Jie Xu;Jan Carius;Dieter Fox;Yashraj Narang","doi":"10.1109/TRO.2025.3547267","DOIUrl":null,"url":null,"abstract":"For both humans and robots, the sense of touch, known as tactile sensing, is critical for performing contact-rich manipulation tasks. Three key challenges in robotic tactile sensing are interpreting sensor signals, generating sensor signals in novel scenarios, and learning sensor-based policies. For visuotactile sensors, interpretation has been facilitated by their close relationship with vision sensors (e.g., RGB cameras). However, generation is still difficult, as visuotactile sensors typically involve contact, deformation, illumination, and imaging, all of which are expensive to simulate; in turn, policy learning has been challenging, as simulation cannot be leveraged for large-scale data collection. We present <italic>TacSL</i> (<italic>taxel</i>), a library for GPU-based visuotactile sensor simulation and learning. <italic>TacSL</i> can be used to simulate visuotactile images and extract contact-force distributions over <inline-formula><tex-math>$200\\times$</tex-math></inline-formula> faster than the prior state-of-the-art, all within the widely used Isaac simulator. Furthermore, <italic>TacSL</i> provides a learning toolkit containing multiple sensor models, contact-intensive training environments, and online/offline algorithms that can facilitate policy learning for sim-to-real applications. On the algorithmic side, we introduce a novel online reinforcement-learning algorithm called asymmetric actor-critic distillation, designed to effectively and efficiently learn tactile-based policies in simulation that can transfer to the real world. Finally, we demonstrate the utility of our library and algorithms by evaluating the benefits of distillation and multimodal sensing for contact-rich manipulation tasks, and most critically, performing sim-to-real transfer.","PeriodicalId":50388,"journal":{"name":"IEEE Transactions on Robotics","volume":"41 ","pages":"2645-2661"},"PeriodicalIF":9.4000,"publicationDate":"2025-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Robotics","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10912733/","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ROBOTICS","Score":null,"Total":0}
引用次数: 0

Abstract

For both humans and robots, the sense of touch, known as tactile sensing, is critical for performing contact-rich manipulation tasks. Three key challenges in robotic tactile sensing are interpreting sensor signals, generating sensor signals in novel scenarios, and learning sensor-based policies. For visuotactile sensors, interpretation has been facilitated by their close relationship with vision sensors (e.g., RGB cameras). However, generation is still difficult, as visuotactile sensors typically involve contact, deformation, illumination, and imaging, all of which are expensive to simulate; in turn, policy learning has been challenging, as simulation cannot be leveraged for large-scale data collection. We present TacSL (taxel), a library for GPU-based visuotactile sensor simulation and learning. TacSL can be used to simulate visuotactile images and extract contact-force distributions over 200× faster than the prior state of the art, all within the widely used Isaac simulator. Furthermore, TacSL provides a learning toolkit containing multiple sensor models, contact-intensive training environments, and online/offline algorithms that can facilitate policy learning for sim-to-real applications. On the algorithmic side, we introduce a novel online reinforcement-learning algorithm called asymmetric actor-critic distillation, designed to effectively and efficiently learn tactile-based policies in simulation that can transfer to the real world. Finally, we demonstrate the utility of our library and algorithms by evaluating the benefits of distillation and multimodal sensing for contact-rich manipulation tasks, and most critically, performing sim-to-real transfer.
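The abstract names asymmetric actor-critic distillation but does not spell out the update. As a rough illustration only (not TacSL's actual implementation or API), the PyTorch sketch below combines a policy-gradient term, whose advantage is computed by a critic that keeps access to privileged simulator state, with an online distillation term that pulls a tactile-image student policy toward a privileged teacher. All module names, batch keys, and hyperparameters (GaussianActor, aacd_step, priv_state, distill_weight, etc.) are hypothetical.

```python
import torch
import torch.nn as nn


class GaussianActor(nn.Module):
    """Hypothetical tactile-conditioned student policy with a Gaussian action head."""

    def __init__(self, proprio_dim=16, action_dim=7):
        super().__init__()
        self.encoder = nn.Sequential(  # small CNN over 3-channel tactile images
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.mean = nn.Sequential(
            nn.Linear(32 + proprio_dim, 128), nn.ReLU(),
            nn.Linear(128, action_dim),
        )
        self.log_std = nn.Parameter(torch.zeros(action_dim))

    def forward(self, tactile, proprio):
        feat = torch.cat([self.encoder(tactile), proprio], dim=-1)
        return torch.distributions.Normal(self.mean(feat), self.log_std.exp())


def aacd_step(student, teacher, critic, batch, optimizer, distill_weight=1.0):
    """One combined update: a policy-gradient term using an asymmetric
    (privileged-state) critic, plus online distillation toward a privileged teacher."""
    dist = student(batch["tactile"], batch["proprio"])

    with torch.no_grad():
        teacher_action = teacher(batch["priv_state"])  # teacher sees full simulator state
        advantage = batch["returns"] - critic(batch["priv_state"]).squeeze(-1)

    log_prob = dist.log_prob(batch["actions"]).sum(dim=-1)
    pg_loss = -(advantage * log_prob).mean()                           # simple policy gradient
    distill_loss = -dist.log_prob(teacher_action).sum(dim=-1).mean()   # imitate the teacher online

    loss = pg_loss + distill_weight * distill_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In a full pipeline, the teacher would itself be trained with standard asymmetric actor-critic RL on privileged state before (or alongside) the student, so the distillation term supplies dense supervision while the policy-gradient term lets the tactile student improve beyond pure imitation.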
Source journal: IEEE Transactions on Robotics (Engineering & Technology - Robotics)
CiteScore: 14.90
Self-citation rate: 5.10%
Articles per year: 259
Review time: 6.0 months
Journal description: The IEEE Transactions on Robotics (T-RO) is dedicated to publishing fundamental papers covering all facets of robotics, drawing on interdisciplinary approaches from computer science, control systems, electrical engineering, mathematics, mechanical engineering, and beyond. From industrial applications to service and personal assistants, surgical operations to space, underwater, and remote exploration, robots and intelligent machines play pivotal roles across various domains, including entertainment, safety, search and rescue, military applications, agriculture, and intelligent vehicles. Special emphasis is placed on intelligent machines and systems designed for unstructured environments, where a significant portion of the environment remains unknown and beyond direct sensing or control.