Modal-graph 3D shape servoing of deformable objects with raw point clouds
Bohan Yang, Congying Sui, Fangxun Zhong, Yun-Hui Liu
International Journal of Robotics Research (published 2023-09-04)
DOI: 10.1177/02783649231198900
Citations: 0
Abstract
Deformable object manipulation (DOM) with point clouds has great potential, as nonrigid 3D shapes can be measured without detecting and tracking image features. However, robotic shape control of deformable objects with point clouds is challenging due to the unknown point correspondences and noisy partial observability of raw point clouds, and the difficulty of modeling the relationship between point clouds and robot motions. To tackle these challenges, this paper introduces a novel modal-graph framework for model-free shape servoing of deformable objects with raw point clouds. Unlike existing works that study the object's geometric structure, we propose a modal graph to describe the low-frequency deformation structure of the DOM system, which is robust to measurement irregularities. The modal graph enables us to extract low-dimensional deformation features directly from raw point clouds without extra processing such as registration, refinement, or occlusion removal. It also preserves the spatial structure of the DOM system, allowing feature changes to be inverted into robot motions. Moreover, as the framework is built without known physical or geometric object models, we design an adaptive robust controller that deforms the object toward the desired shape while handling modeling uncertainties, noise, and disturbances online. The system is proved to be input-to-state stable (ISS) using Lyapunov-based methods. Extensive experiments validate our method using linear, planar, tubular, and volumetric objects under different settings.
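The abstract's core idea of extracting low-dimensional deformation features directly from a raw, unordered point cloud can be illustrated with a toy sketch. Note this is not the paper's actual modal-graph construction: the radial-basis weighting, node sampling, and least-squares projection below are simplifying assumptions chosen only to show how a few graph nodes can summarize many noisy points without correspondences.

```python
import numpy as np

def low_dim_features(points, nodes, sigma=0.2):
    """Project a raw point cloud (N, 3) onto K graph nodes via
    normalized Gaussian weights, returning a 3K-dim feature vector.
    A stand-in for low-frequency deformation features, NOT the
    paper's modal-graph method."""
    # Squared distances between every point and every node: (N, K)
    d2 = ((points[:, None, :] - nodes[None, :, :]) ** 2).sum(axis=-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    W /= W.sum(axis=1, keepdims=True)           # rows sum to 1
    # Least-squares node coordinates X solving W @ X ≈ points: (K, 3)
    X, *_ = np.linalg.lstsq(W, points, rcond=None)
    return X.ravel()                            # low-dim feature vector

rng = np.random.default_rng(0)
cloud = rng.normal(size=(500, 3))               # stand-in raw point cloud
nodes = cloud[rng.choice(500, size=8, replace=False)]
feat = low_dim_features(cloud, nodes)
print(feat.shape)                               # 8 nodes -> 24-dim features
```

Because the weights depend only on distances to the fixed nodes, the feature is insensitive to point ordering and partial dropout, which loosely mirrors why a low-frequency description is robust to measurement irregularities.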
About the Journal
The International Journal of Robotics Research (IJRR) has been a leading peer-reviewed publication in the field for over two decades. It holds the distinction of being the first scholarly journal dedicated to robotics research.
IJRR presents cutting-edge original research papers, articles, and reviews covering significant trends, technical advances, and theoretical developments in robotics, contributed by leading scholars and practitioners. Its scope is broad, extending beyond narrow technical advances to encompass many aspects of robotics.
The primary aim of IJRR is to publish work that has lasting value for the scientific and technological advancement of the field. Only original, robust, and practical research that can serve as a foundation for further progress is considered for publication. The focus is on producing content that will remain valuable and relevant over time.
In summary, IJRR stands as a prestigious publication that drives innovation and knowledge in robotics research.