{"title":"Robotic In-Hand Manipulation for Large-Range Precise Object Movement: The RGMC Champion Solution","authors":"Mingrui Yu;Yongpeng Jiang;Chen Chen;Yongyi Jia;Xiang Li","doi":"10.1109/LRA.2025.3555138","DOIUrl":"https://doi.org/10.1109/LRA.2025.3555138","url":null,"abstract":"In-hand manipulation using multiple dexterous fingers is a critical robotic skill that can reduce the reliance on large arm motions, thereby saving space and energy. This letter focuses on in-grasp object movement, which refers to manipulating an object to a desired pose through only finger motions within a stable grasp. The key challenge lies in simultaneously achieving high precision and large-range movements while maintaining a constant stable grasp. To address this problem, we propose a simple and practical approach based on kinematic trajectory optimization with no need for pretraining or object geometries, which can be easily applied to novel objects in real-world scenarios. Adopting this approach, we won the championship for the in-hand manipulation track at the 9th Robotic Grasping and Manipulation Competition (RGMC) held at ICRA 2024. Implementation details, discussion, and further quantitative experimental results are presented in this letter, which aims to comprehensively evaluate our approach and share our key takeaways from the competition.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 5","pages":"4738-4745"},"PeriodicalIF":4.6,"publicationDate":"2025-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143777750","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Robust Lifelong Multi-Agent Path Finding With Active Conflict Resolution and Decentralized Execution","authors":"Feng Zhuang;Ting Huang;Quan Xu;Yue-Jiao Gong;Jing Liu","doi":"10.1109/LRA.2025.3554099","DOIUrl":"https://doi.org/10.1109/LRA.2025.3554099","url":null,"abstract":"Multi-Agent Path Finding (MAPF) focuses on navigating agents along cost-efficient and conflict-free paths. This letter investigates a challenging and practical MAPF variant, namely Robust Lifelong MAPF (RLMAPF), where agents sequentially receive tasks and effectively deal with uncertainties. In this letter, we first establish a comprehensive RLMAPF problem model with a novel conflict category methodology: active and passive conflicts. Based on this model, we introduce a decentralized robust path finding algorithm that comprises two fundamental components: the robust path finding and decentralized path execution. The first component focuses on robust MAPF by integrating a conflict prediction oracle, a rolling window for conflict detection, and active conflict resolution. Based on the robust path without active conflicts provided by the planning phase, the path executor aims at passive conflict avoidance in a decentralized method. The empirical evaluation of the proposed algorithm against the state-of-the-art MAPF methods reveals its superiority. Through extensive simulations, we demonstrate that the proposed algorithm has a low replanning frequency and high robustness, maintaining a robustness index of 0.95 in most uncertain environments—at least 20% higher than the state-of-the-art comparison MAPF algorithms.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 5","pages":"4652-4659"},"PeriodicalIF":4.6,"publicationDate":"2025-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143777947","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"IEEE Robotics and Automation Letters Information for Authors","authors":"","doi":"10.1109/LRA.2025.3553273","DOIUrl":"https://doi.org/10.1109/LRA.2025.3553273","url":null,"abstract":"","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 4","pages":"C4-C4"},"PeriodicalIF":4.6,"publicationDate":"2025-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10938736","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143698289","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"IEEE Robotics and Automation Society Information","authors":"","doi":"10.1109/LRA.2025.3553271","DOIUrl":"https://doi.org/10.1109/LRA.2025.3553271","url":null,"abstract":"","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 4","pages":"C3-C3"},"PeriodicalIF":4.6,"publicationDate":"2025-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10938733","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143698317","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Spatiotemporal Multi-Camera Calibration Using Freely Moving People","authors":"Sang-Eun Lee;Ko Nishino;Shohei Nobuhara","doi":"10.1109/LRA.2025.3554102","DOIUrl":"https://doi.org/10.1109/LRA.2025.3554102","url":null,"abstract":"We propose a novel method for spatiotemporal multi-camera calibration using freely moving people in multi-view videos. Since calibrating multiple cameras and finding matches across their views are inherently interdependent, performing both in a unified framework poses a significant challenge. We address these issues as a single registration problem of matching two sets of 3D points, leveraging human motion in dynamic multi-person scenes. To this end, we utilize 3D human poses obtained from an off-the-shelf monocular 3D human pose estimator and transform them into 3D points on a unit sphere, to solve the rotation, time offset, and the association alternatingly. We employ a probabilistic approach that can jointly solve both problems of aligning spatiotemporal data and establishing correspondences through soft assignment between two views. The translation is determined by applying coplanarity constraints. The pairwise registration results are integrated into a multiview setup, and then a nonlinear optimization method is used to improve the accuracy of the camera poses, temporal offsets, and multi-person associations. Extensive experiments on synthetic and real data demonstrate the effectiveness and flexibility of the proposed method as a practical marker-free calibration tool.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 5","pages":"4818-4825"},"PeriodicalIF":4.6,"publicationDate":"2025-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143792803","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"IEEE Robotics and Automation Society Publication Information","authors":"","doi":"10.1109/LRA.2025.3553269","DOIUrl":"https://doi.org/10.1109/LRA.2025.3553269","url":null,"abstract":"","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 4","pages":"C2-C2"},"PeriodicalIF":4.6,"publicationDate":"2025-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10938732","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143698196","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"UGNA-VPR: A Novel Training Paradigm for Visual Place Recognition Based on Uncertainty-Guided NeRF Augmentation","authors":"Yehui Shen;Lei Zhang;Qingqiu Li;Xiongwei Zhao;Yue Wang;Huimin Lu;Xieyuanli Chen","doi":"10.1109/LRA.2025.3554105","DOIUrl":"https://doi.org/10.1109/LRA.2025.3554105","url":null,"abstract":"Visual place recognition (VPR) is crucial for robots to identify previously visited locations, playing an important role in autonomous navigation in both indoor and outdoor environments. However, most existing VPR datasets are limited to single-viewpoint scenarios, leading to reduced recognition accuracy, particularly in multi-directional driving or feature-sparse scenes. Moreover, obtaining additional data to mitigate these limitations is often expensive. This letter introduces a novel training paradigm to improve the performance of existing VPR networks by enhancing multi-view diversity within current datasets through uncertainty estimation and NeRF-based data augmentation. Specifically, we initially train NeRF using the existing VPR dataset. Then, our devised self-supervised uncertainty estimation network identifies places with high uncertainty. The poses of these uncertain places are input into NeRF to generate new synthetic observations for further training of VPR networks. Additionally, we propose an improved storage method for efficient organization of augmented and original training data. We conducted extensive experiments on three datasets and tested three different VPR backbone networks. The results demonstrate that our proposed training paradigm significantly improves VPR performance by fully utilizing existing data, outperforming other training approaches. We further validated the effectiveness of our approach on self-recorded indoor and outdoor datasets, consistently demonstrating superior results.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 5","pages":"4682-4689"},"PeriodicalIF":4.6,"publicationDate":"2025-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143777852","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Real-Time, Semi-Autonomous Navigation Platform for Soft Robotic Bronchoscopy","authors":"Daniel Van Lewen;Yitong Lu;Frank Juliá-Wise;Armaan Vasowalla;Christopher Wu;Jennifer Yeo;Ehab Billatos;Sheila Russo","doi":"10.1109/LRA.2025.3554104","DOIUrl":"https://doi.org/10.1109/LRA.2025.3554104","url":null,"abstract":"Navigating through the peripheral lung branches poses a significant challenge in diagnosing lesions during bronchoscopy. Soft robots are well-suited to address current limitations in bronchoscopy due to their scale, dexterity, and adaptability. In this letter, we propose a real-time, semi-autonomous navigation platform that leverages a soft continuum robot with an outer diameter of 2.5 mm for tip steering and a UR5e robot arm for insertion, translation, and rotation. Closed-loop feedback is provided via on-board visualization and electromagnetic tracking. Steering capability and workspace are characterized to demonstrate sufficient robot tip dexterity. A driving algorithm combined with a YOLO-based computer vision algorithm is developed to enable the robot to steer toward the target branch along preplanned paths. Multiple successful navigational experiments were performed within an in-vitro lung phantom to validate the proposed platform. The scale of the robot allows for successful navigation deep into the smaller, peripheral branches of the lung (6th generation) and exits the lung phantom, demonstrating the ability to reach the lung periphery with an average error at the target location of 1.1 mm.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 5","pages":"4722-4729"},"PeriodicalIF":4.6,"publicationDate":"2025-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143777924","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Pointillism Wall Painting Drone Using Bouncing Frequency Control","authors":"Pierre Susbielle;Jonathan Dumon;Ahmad Hably","doi":"10.1109/LRA.2025.3554378","DOIUrl":"https://doi.org/10.1109/LRA.2025.3554378","url":null,"abstract":"This study presents an new robotic airborne solution for autonomous wall painting using a pointillism technique. The proposed dot-painting drone is a quadcopter equipped with an additional forward propulsion unit and a spring-mounted painting pad. It is designed to bounce on a vertical wall in order to print dots at a controlled frequency along a predefined trajectory. A dynamic model of the system is derived and used to control accurately the bouncing frequency as well as the position of the robot. The performance of the system is validated experimentally, demonstrating successful indoor painting capability of pointillism drawing on vertical walls. This work represents a first step toward fully autonomous, large-scale mural reproduction using aerial robotics.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 5","pages":"4802-4809"},"PeriodicalIF":4.6,"publicationDate":"2025-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143777754","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Trajectory Planning and Control for Robotic Manipulation of Magnetic Capsules","authors":"Ogulcan Isitman;Gokhan Alcan;Ville Kyrki","doi":"10.1109/LRA.2025.3554432","DOIUrl":"https://doi.org/10.1109/LRA.2025.3554432","url":null,"abstract":"Robotic magnetic manipulation offers a minimally invasive approach to gastrointestinal examinations through capsule endoscopy. However, controlling such systems using external permanent magnets (EPM) is challenging due to nonlinear magnetic interactions, especially when there are complex navigation requirements such as avoidance of sensitive tissues. In this work, we present a novel trajectory planning and control method incorporating dynamics and navigation requirements, using a single EPM fixed to a robotic arm to manipulate an internal permanent magnet (IPM). Our approach employs a constrained iterative linear quadratic regulator that considers the dynamics of the IPM to generate optimal trajectories for both the EPM and IPM. Extensive simulations and real-world experiments, motivated by capsule endoscopy operations, demonstrate the robustness of the method, showcasing resilience to external disturbances and precise control under varying conditions. The experimental results show that the IPM reaches the goal position with a maximum mean error of 0.18 cm and a standard deviation of 0.21 cm. This work introduces a unified framework for constrained trajectory optimization in magnetic manipulation, directly incorporating both the IPM's dynamics and the EPM's manipulability.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 5","pages":"4666-4673"},"PeriodicalIF":4.6,"publicationDate":"2025-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10938197","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143777787","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}