Action-conditional implicit visual dynamics for deformable object manipulation

Impact Factor: 7.5 · CAS Zone 1 (Computer Science) · JCR Q1 (Robotics)
Bokui Shen, Zhenyu Jiang, Christopher Choy, Silvio Savarese, Leonidas J. Guibas, Anima Anandkumar, Yuke Zhu
{"title":"Action-conditional implicit visual dynamics for deformable object manipulation","authors":"Bokui Shen, Zhenyu Jiang, Christopher Choy, Silvio Savarese, Leonidas J. Guibas, Anima Anandkumar, Yuke Zhu","doi":"10.1177/02783649231191222","DOIUrl":null,"url":null,"abstract":"Manipulating volumetric deformable objects in the real world, like plush toys and pizza dough, brings substantial challenges due to infinite shape variations, non-rigid motions, and partial observability. We introduce ACID, an action-conditional visual dynamics model for volumetric deformable objects based on structured implicit neural representations. ACID integrates two new techniques: implicit representations for action-conditional dynamics and geodesics-based contrastive learning. To represent deformable dynamics from partial RGB-D observations, we learn implicit representations of occupancy and flow-based forward dynamics. To accurately identify state change under large non-rigid deformations, we learn a correspondence embedding field through a novel geodesics-based contrastive loss. To evaluate our approach, we develop a simulation framework for manipulating complex deformable shapes in realistic scenes and a benchmark containing over 17,000 action trajectories with six types of plush toys and 78 variants. Our model achieves the best performance in geometry, correspondence, and dynamics predictions over existing approaches. The ACID dynamics models are successfully employed for goal-conditioned deformable manipulation tasks, resulting in a 30% increase in task success rate over the strongest baseline. Furthermore, we apply the simulation-trained ACID model directly to real-world objects and show success in manipulating them into target configurations. https://b0ku1.github.io/acid/","PeriodicalId":54942,"journal":{"name":"International Journal of Robotics Research","volume":null,"pages":null},"PeriodicalIF":7.5000,"publicationDate":"2023-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Robotics Research","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1177/02783649231191222","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ROBOTICS","Score":null,"Total":0}
Citations: 0

Abstract

Manipulating volumetric deformable objects in the real world, like plush toys and pizza dough, brings substantial challenges due to infinite shape variations, non-rigid motions, and partial observability. We introduce ACID, an action-conditional visual dynamics model for volumetric deformable objects based on structured implicit neural representations. ACID integrates two new techniques: implicit representations for action-conditional dynamics and geodesics-based contrastive learning. To represent deformable dynamics from partial RGB-D observations, we learn implicit representations of occupancy and flow-based forward dynamics. To accurately identify state change under large non-rigid deformations, we learn a correspondence embedding field through a novel geodesics-based contrastive loss. To evaluate our approach, we develop a simulation framework for manipulating complex deformable shapes in realistic scenes and a benchmark containing over 17,000 action trajectories with six types of plush toys and 78 variants. Our model achieves the best performance in geometry, correspondence, and dynamics predictions over existing approaches. The ACID dynamics models are successfully employed for goal-conditioned deformable manipulation tasks, resulting in a 30% increase in task success rate over the strongest baseline. Furthermore, we apply the simulation-trained ACID model directly to real-world objects and show success in manipulating them into target configurations. https://b0ku1.github.io/acid/
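
To make the two techniques named in the abstract concrete, here is a minimal, hypothetical PyTorch sketch (not the authors' released code): an implicit decoder that maps a query point, a latent scene code, and an action to an occupancy logit, a 3D forward-flow vector, and a correspondence embedding, plus a contrastive loss that selects positive and negative pairs by surface geodesic distance. All module, function, and parameter names are assumptions for illustration; in practice the latent code would come from an encoder over the partial RGB-D observation, which the sketch leaves abstract.

```python
# Hypothetical sketch of the two ingredients described above; architecture,
# names, and thresholds are illustrative assumptions, not ACID's actual code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImplicitDynamicsDecoder(nn.Module):
    """Decode (query point, latent scene code, action) into an occupancy
    logit, a 3D forward-flow vector, and a correspondence embedding."""
    def __init__(self, latent_dim=128, action_dim=6, embed_dim=32, hidden=256):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(3 + latent_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.occ_head = nn.Linear(hidden, 1)            # occupancy logit
        self.flow_head = nn.Linear(hidden, 3)           # forward flow
        self.embed_head = nn.Linear(hidden, embed_dim)  # correspondence code

    def forward(self, points, latent, action):
        # points: (B, N, 3); latent: (B, latent_dim); action: (B, action_dim)
        B, N, _ = points.shape
        cond = torch.cat([latent, action], dim=-1)      # (B, C)
        cond = cond.unsqueeze(1).expand(B, N, cond.shape[-1])
        h = self.trunk(torch.cat([points, cond], dim=-1))
        return self.occ_head(h), self.flow_head(h), self.embed_head(h)

def geodesic_contrastive_loss(emb_a, emb_b, geo_dist,
                              pos_thresh=0.05, neg_thresh=0.20, margin=1.0):
    """Pull together embeddings of point pairs that are geodesically close
    on the object surface; push apart pairs that are geodesically far.
    emb_a, emb_b: (N, D) embeddings of paired points; geo_dist: (N,)."""
    d = (emb_a - emb_b).norm(dim=-1)                # embedding-space distance
    pos = geo_dist < pos_thresh                     # geodesic neighbors
    neg = geo_dist > neg_thresh                     # geodesically distant pairs
    zero = d.new_zeros(())
    loss_pos = (d[pos] ** 2).mean() if pos.any() else zero
    loss_neg = (F.relu(margin - d[neg]) ** 2).mean() if neg.any() else zero
    return loss_pos + loss_neg
```

The geodesic thresholds are what distinguish this from ordinary Euclidean contrastive learning: two points on opposite flaps of a folded plush toy can be close in 3D space yet far apart along the surface, and a loss keyed on geodesic distance treats them as negatives accordingly.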
Source journal
International Journal of Robotics Research (Engineering & Technology - Robotics)
CiteScore: 22.20
Self-citation rate: 0.00%
Articles per year: 34
Review time: 6-12 weeks
Journal description: The International Journal of Robotics Research (IJRR) has been a leading peer-reviewed publication in the field for over two decades and was the first scholarly journal dedicated to robotics research. IJRR publishes cutting-edge original research papers, articles, and reviews on trends, technical advancements, and theoretical developments in robotics, contributed by renowned scholars and practitioners. Its coverage extends beyond narrow technical advances to encompass many aspects of robotics. The journal's primary aim is to publish work of lasting value to the scientific and technological advancement of the field; only original, robust, and practical research that can serve as a foundation for further progress is considered for publication.