{"title":"Smooth Path Planning and Dynamic Contact Force Regulation for Robotic Ultrasound Scanning","authors":"Le Zhang;Dapeng Yang;Baoshan Niu;Haonan Yang;Qi Huang;Li Jiang;Hong Liu","doi":"10.1109/LRA.2025.3604746","DOIUrl":"https://doi.org/10.1109/LRA.2025.3604746","url":null,"abstract":"The robotic breast ultrasound scanning (RBUS) system can help sonographer in the early screening of breast cancer. However, it still faces rigorous safety concerns, including non-smooth scanning paths and improper contact force regulation. In this letter, we propose a new path optimization method to improve the scanning smoothness, as well as, a dynamic contact regulation strategy considering both the breast deformation and ultrasound images. For the path planning, the anti-radial scanning approach is first adopted to generate some initial path points together with their normal vectors; then, a dual-objective optimization function is formulated, which considers both the acceleration continuity and deviation of the path, to filter out local path anomalies. For the force control, a contact force-strain regression model is first proposed and used to predict the desired force between the breast and probe at each path point. While within the adjacent path points, an updating algorithm is introduced to dynamically adjust the contact force in real-time by surveying the confidence of the ultrasound image. Our experimental results show that the proposed methods can make the scanning path smoother, and obtain intimate probe-tissue contact with less desired force.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 10","pages":"10570-10577"},"PeriodicalIF":5.3,"publicationDate":"2025-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144998146","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"StereoMamba: Real-Time and Robust Intraoperative Stereo Disparity Estimation via Long-Range Spatial Dependencies","authors":"Xu Wang;Jialang Xu;Shuai Zhang;Baoru Huang;Danail Stoyanov;Evangelos B. Mazomenos","doi":"10.1109/LRA.2025.3604749","DOIUrl":"https://doi.org/10.1109/LRA.2025.3604749","url":null,"abstract":"Stereo disparity estimation is crucial for obtaining depth information in robot-assisted minimally invasive surgery (RAMIS). While current deep learning methods have made significant advancements, challenges remain in achieving an optimal balance between accuracy, robustness, and inference speed. To address these challenges, we propose the StereoMamba architecture, which is specifically designed for stereo disparity estimation in RAMIS. Our approach is based on a novel Feature Extraction Mamba (FE-Mamba) module, which enhances long-range spatial dependencies both within and across stereo images. To effectively integrate multi-scale features from FE-Mamba, we then introduce a novel Multidimensional Feature Fusion (MFF) module. Experiments against the state-of-the-art on the ex-vivo SCARED benchmark demonstrate that StereoMamba achieves superior performance on EPE of 2.64 px and depth MAE of 2.55 mm, the second-best performance on Bad2 of 41.49% and Bad3 of 26.99%, while maintaining an inference speed of 21.28 FPS for a pair of high-resolution images (1280 × 1024), striking the optimum balance between accuracy, robustness, and efficiency. Furthermore, by comparing synthesized right images, generated from warping left images using the generated disparity maps, with the actual right image, StereoMamba achieves the best average SSIM (0.8970) and PSNR (16.0761), exhibiting strong zero-shot generalization on the in-vivo RIS2017 and StereoMIS datasets.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 10","pages":"10682-10689"},"PeriodicalIF":5.3,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145049815","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Endangered Alert: A Field-Validated Self-Training Scheme for Detecting and Protecting Threatened Wildlife on Roads and Roadsides","authors":"Kunming Li;Mao Shan;Stephany Berrio Perez;Katie Luo;Stewart Worrall","doi":"10.1109/LRA.2025.3604697","DOIUrl":"https://doi.org/10.1109/LRA.2025.3604697","url":null,"abstract":"Traffic accidents, including animal-vehicle collisions (AVCs), endanger both humans and wildlife. This letter presents an innovative self-training methodology aimed at detecting rare animals, such as cassowaries in Australia, whose survival is threatened by road accidents. The proposed method addresses critical real-world challenges, including the acquisition and labelling of sensor data for rare animal species in resource-limited environments. It achieves this by leveraging cloud and edge computing, and automatic data labelling to improve the detection performance of the field-deployed model iteratively. Our approach introduces Label-Augmentation Non-Maximum Suppression (LA-NMS), which incorporates a vision-language model (VLM) to enable automated data labelling. During a five-month deployment, we confirmed the robustness and effectiveness of the method, achieving improved object detection accuracy and increased prediction confidence.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 10","pages":"10706-10713"},"PeriodicalIF":5.3,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145050803","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning-Based Slip Detection and Fine Control Using the Tactile Sensor for Robot Stable Grasping","authors":"Zhangyi Chen;Long Wang;Yao Luo;Xiaoling Li;Shuai Li","doi":"10.1109/LRA.2025.3604723","DOIUrl":"https://doi.org/10.1109/LRA.2025.3604723","url":null,"abstract":"Slip detection and control is critical to achieving stable grasping in robotics. However, accurate and robust slip detection and control remains a challenging task. This letter proposes a learning framework with contrastive learning and feature alignment to improve the accuracy of end-to-end slip detection under small sample conditions. In addition, a fuzzy logic control system is designed based on the stiffness perception of the grasped object for estimating the increment of reflective force to suppress the slip. To validate the effectiveness of the proposed method, we conduct online tests on various objects in two scenarios prone to slip, based on a developed hardware platform. Experimental results show that the proposed slip detection method demonstrates high accuracy and good generalization capability, while the slip control method incorporating the object stiffness property can achieve safe and fine control after slip occurs.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 11","pages":"11156-11163"},"PeriodicalIF":5.3,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145078575","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"FDSPC: Fast and Direct Smooth Motion Planning via Continuous Curvature Integration","authors":"Zong Chen;Haoluo Shao;Ben Liu;Siyuan Qiao;Yu Zhou;Yiqun Li","doi":"10.1109/LRA.2025.3604729","DOIUrl":"https://doi.org/10.1109/LRA.2025.3604729","url":null,"abstract":"In recent decades, mobile robot motion planning has seen significant advancements. Both search-based and sampling-based methods have demonstrated capabilities to find feasible solutions in complex scenarios. Mainstream path planning algorithms divide the map into occupied and free spaces, considering only planar movement and ignoring the ability of mobile robots to traverse obstacles in the <inline-formula><tex-math>$z$</tex-math></inline-formula>-direction. Additionally, paths generated often have numerous bends, requiring additional smoothing post-processing. In this work, a fast, and direct motion planning method based on continuous curvature integration that takes into account the robot's obstacle-crossing ability under different parameter settings is proposed. This method generates smooth paths directly with pseudo-constant velocity and limited curvature, and performs curvature-based speed planning in complex 2.5-D terrain-based environment (take into account the ups and downs of the terrain), eliminating the subsequent path smoothing process and enabling the robot to track the path generated directly. The proposed method is also compared with some existing approaches in terms of solution time, path length, memory usage and smoothness under multiple scenarios. The proposed method is vastly superior to the average performance of state-of-the-art (SOTA) methods, especially in terms of the self-defined <inline-formula><tex-math>$mathcal {S}_{2}$</tex-math></inline-formula> smoothness (mean angle of steering). Furthermore, simulations and experiments are conducted on our self-designed wheel-legged robot with 2.5-D traversability. These results demonstrate the effectiveness and superiority of the proposed approach in several representative environments.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 10","pages":"10878-10885"},"PeriodicalIF":5.3,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145036926","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Human-in-The-Loop Approach to Robot Action Replanning Through LLM Common-Sense Reasoning","authors":"Elena Merlo;Marta Lagomarsino;Arash Ajoudani","doi":"10.1109/LRA.2025.3604702","DOIUrl":"https://doi.org/10.1109/LRA.2025.3604702","url":null,"abstract":"To facilitate the wider adoption of robotics, accessible programming tools are required for non-experts. Observational learning enables intuitive human skills transfer through hands-on demonstrations, but relying solely on visual input can be inefficient in terms of scalability and failure mitigation, especially when based on a single demonstration. This letter presents a human-in-the-loop method for enhancing the robot execution plan, automatically generated based on a single RGB video, with natural language input to a Large Language Model (LLM). By including user-specified goals or critical task aspects and exploiting the LLM common-sense reasoning, the system adjusts the vision-based plan to prevent potential failures and adapts it based on the received instructions. Experiments demonstrated the framework intuitiveness and effectiveness in correcting vision-derived errors and adapting plans without requiring additional demonstrations. Moreover, interactive plan refinement and hallucination corrections promoted system robustness.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 10","pages":"10767-10774"},"PeriodicalIF":5.3,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145051040","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"BOW: Bayesian Optimization Over Windows for Motion Planning in Complex Environments","authors":"Sourav Raxit;Abdullah Al Redwan Newaz;Paulo Padrao;Jose Fuentes;Leonardo Bobadilla","doi":"10.1109/LRA.2025.3604738","DOIUrl":"https://doi.org/10.1109/LRA.2025.3604738","url":null,"abstract":"This letter introduces the BOW Planner, a scalable motion planning algorithm designed to navigate robots through complex environments using constrained Bayesian optimization (CBO). Unlike traditional methods, which often struggle with kinodynamic constraints such as velocity and acceleration limits, the BOW Planner excels by concentrating on a planning window of reachable velocities and employing CBO to sample control inputs efficiently. This approach enables the planner to manage high-dimensional objective functions and stringent safety constraints with minimal sampling, ensuring rapid and secure trajectory generation. Theoretical analysis confirms the algorithm's asymptotic convergence to near-optimal solutions, while extensive evaluations in cluttered and constrained settings reveal substantial improvements in computation times, trajectory lengths, and solution times compared to existing techniques. Successfully deployed across various real-world robotic systems, the BOW Planner demonstrates its practical significance through exceptional sample efficiency, safety-aware optimization, and rapid planning capabilities, making it a valuable tool for advancing robotic applications.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 10","pages":"10714-10721"},"PeriodicalIF":5.3,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145051017","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Coordinated and Resilient Formation Strategy Based on Hierarchical Reorganization","authors":"Yuzhu Li;Wei Dong","doi":"10.1109/LRA.2025.3604698","DOIUrl":"https://doi.org/10.1109/LRA.2025.3604698","url":null,"abstract":"Multi-leader formations offer superior flexibility and adaptability compared to single-leader configurations. However, the failure of even a single leader can pose significant risks to the overall success of hierarchical formations. Although existing strategies to address leader failures often rely on dynamic re-election mechanisms, these approaches are primarily tailored to single-leader configurations. To overcome these limitations, this paper presents a resilient formation strategy based on hierarchical reorganization. The central concept is to endow the formation with fail-tolerance through seamless leadership transitions while preserving overall agility. Specifically, we propose a comprehensive fail-tolerant leadership evaluation algorithm capable of selecting the most agile leadership configuration while maintaining formation safety. Recognizing that distributed evaluations may yield inconsistent leader selections, we integrate a Raft-based configuration consensus mechanism to achieve distributed agreement during hierarchical reorganization. Additionally, to guarantee the smooth execution of the reorganization process, a synchronous state updating strategy is adopted to mitigate communication delays, thereby facilitating seamless reconfiguration. We conducted extensive simulations and real-world experiments. Experiments results across multiple scenarios demonstrate that the proposed strategy swiftly identifies malfunctioning leaders, mitigates their adverse effects through hierarchical reorganization, and improves the mission success rate of a 7-UAV formation from 28.6% to 85.7%. Overall, our findings show that the proposed approach not only addresses individual agent failures but also significantly enhances the formation's stability and robustness.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 10","pages":"10650-10657"},"PeriodicalIF":5.3,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145050815","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Active-Perceptive Language-Oriented Grasp Policy for Heavily Cluttered Scenes","authors":"Yixiang Dai;Siang Chen;Kaiqin Yang;Dingchang Hu;Pengwei Xie;Guosheng Li;Yuan Shen;Guijin Wang","doi":"10.1109/LRA.2025.3604750","DOIUrl":"https://doi.org/10.1109/LRA.2025.3604750","url":null,"abstract":"Language-guided robotic grasping in cluttered environments presents significant challenges due to severe occlusions and complex scene structures, which often hinder accurate target localization. Existing approaches typically suffer from limited observational capabilities, resulting in suboptimal exploration of the target object. In this letter, we propose a novel Active-Perceptive Language-Oriented Grasp Policy (APeG) for heavily cluttered scenes. APeG develops an active perception scheme in the grasp pipeline via an occlusion-aware, semantic-guided viewpoint optimization strategy, enabling efficient exploration of cluttered scenes. In addition, a grasp-wise Reinforcement Learning (RL) policy is proposed to select robust grasp poses. Extensive real-world experiments validate the effectiveness of APeG, demonstrating significant improvements in both task success rate and operational efficiency over existing baselines, highlighting its potential for practical deployment in language-conditioned robotic manipulation.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 11","pages":"11094-11101"},"PeriodicalIF":5.3,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145073411","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enhanced Probabilistic Collision Detection for Motion Planning Under Sensing Uncertainty","authors":"Xiaoli Wang;Sipu Ruan;Xin Meng;Gregory S. Chirikjian","doi":"10.1109/LRA.2025.3604700","DOIUrl":"https://doi.org/10.1109/LRA.2025.3604700","url":null,"abstract":"Probabilistic collision detection (PCD) is essential in motion planning for robots operating in unstructured environments, where considering sensing uncertainty helps prevent damage. Existing PCD methods mainly use simplified geometric models and address only position estimation errors. This paper presents an enhanced PCD method with two key advancements: (a) using superquadrics for more accurate shape approximation and (b) accounting for both position and orientation estimation errors to improve robustness under sensing uncertainty. Our method first computes an enlarged surface for each object that encapsulates its observed rotated copies, thereby addressing the orientation estimation errors. Then, the collision probability is formulated as a chance-constraint problem that is solved with a tight upper bound. Both steps leverage the recently developed closed-form normal parameterized surface expression of superquadrics. Results show that our PCD method is twice as close to the Monte Carlo sampled baseline as the best existing PCD method and reduces path length by 30% and planning time by 37%, respectively. A Real2Sim2Real pipeline further validates the importance of considering orientation estimation errors, showing that the collision probability of executing the planned path is only 2%, compared to 9% and 29% when considering only position estimation errors or no errors at all.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 10","pages":"10910-10917"},"PeriodicalIF":5.3,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11145942","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145060961","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}