{"title":"MapEval: Towards Unified, Robust and Efficient SLAM Map Evaluation Framework","authors":"Xiangcheng Hu;Jin Wu;Mingkai Jia;Hongyu Yan;Yi Jiang;Binqian Jiang;Wei Zhang;Wei He;Ping Tan","doi":"10.1109/LRA.2025.3548441","DOIUrl":"https://doi.org/10.1109/LRA.2025.3548441","url":null,"abstract":"Evaluating massive-scale point cloud maps in Simultaneous Localization and Mapping (SLAM) remains challenging due to three limitations: lack of unified standards, poor robustness to noise, and computational inefficiency. We propose MapEval, a novel framework for point cloud map assessment. Our key innovation is a voxelized Gaussian approximation method that enables efficient Wasserstein distance computation while maintaining physical meaning. This leads to two complementary metrics: Voxelized Average Wasserstein Distance (<monospace>AWD</monospace>) for global geometry and Spatial Consistency Score (<monospace>SCS</monospace>) for local consistency. Extensive experiments demonstrate that MapEval achieves a <inline-formula> <tex-math>$100$</tex-math></inline-formula>-<inline-formula> <tex-math>$500$</tex-math></inline-formula> times speedup while maintaining evaluation performance comparable to traditional metrics such as Chamfer Distance (<monospace>CD</monospace>) and Mean Map Entropy (<monospace>MME</monospace>). 
Our framework shows robust performance across both simulated and real-world datasets with million-scale point clouds.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 5","pages":"4228-4235"},"PeriodicalIF":4.6,"publicationDate":"2025-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143667512","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"FLoRA: A Framework for Learning Scoring Rules in Autonomous Driving Planning Systems","authors":"Zikang Xiong;Joe Eappen;Suresh Jagannathan","doi":"10.1109/LRA.2025.3548502","DOIUrl":"https://doi.org/10.1109/LRA.2025.3548502","url":null,"abstract":"In autonomous driving systems, motion planning is commonly implemented as a two-stage process: first, a trajectory proposer generates multiple candidate trajectories, then a scoring mechanism selects the most suitable trajectory for execution. For this critical selection stage, rule-based scoring mechanisms are particularly appealing as they can explicitly encode driving preferences, safety constraints, and traffic regulations in a formalized, human-understandable format. However, manually crafting these scoring rules presents significant challenges: the rules often contain complex interdependencies, require careful parameter tuning, and may not fully capture the nuances present in real-world driving data. This work introduces FLoRA, a novel framework that addresses these challenges by learning interpretable scoring rules represented in temporal logic. Our method features a learnable logic structure that captures nuanced relationships across diverse driving scenarios, optimizing both rules and parameters directly from real-world driving demonstrations collected in NuPlan. Our approach effectively learns to evaluate driving behavior even though the training data contains only positive examples (successful driving demonstrations). Evaluations in closed-loop planning simulations demonstrate that our learned scoring rules outperform existing techniques, including expert-designed rules and neural network scoring models, while maintaining interpretability. In summary, this work offers a data-driven approach to enhance the scoring mechanism in autonomous driving systems, designed as a plug-in module to seamlessly integrate with various trajectory proposers. 
Our video and code are available at <uri>xiong.zikang.me/FLoRA/</uri>.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 4","pages":"4101-4108"},"PeriodicalIF":4.6,"publicationDate":"2025-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143676005","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Lights as Points: Learning to Look at Vehicle Substructures With Anchor-Free Object Detection","authors":"Maitrayee Keskar;Ross Greer;Akshay Gopalkrishnan;Nachiket Deo;Mohan Trivedi","doi":"10.1109/LRA.2025.3548397","DOIUrl":"https://doi.org/10.1109/LRA.2025.3548397","url":null,"abstract":"Vehicle detection is a paramount task for safe autonomous driving, as the ego-vehicle must localize other surrounding vehicles for safe navigation. Unlike other traffic agents, vehicles have essential substructural components, such as headlights and tail lights, which can provide important cues about a vehicle's future trajectory. However, previous object detection methods treat vehicles as a single entity, ignoring these safety-critical substructures. Our research addresses the detection of vehicle substructures in conjunction with the detection of the vehicles themselves, establishing a coherent representation of each vehicle as an entity. Inspired by the CenterNet approach for human pose estimation, our model predicts object centers and subsequently regresses to bounding boxes and key points for the object. 
We evaluate multiple model configurations to regress to vehicle substructures on the ApolloCar3D dataset and achieve an average precision of 0.782 at a threshold of 0.5 using the direct regression approach.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 5","pages":"4236-4243"},"PeriodicalIF":4.6,"publicationDate":"2025-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143688167","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Nonlinear Optimization for Personalized Path Planning for a Hybrid FES-Exoskeleton System","authors":"Erin E. Mahan;Shane T. King;Elyse D. Z. Chase;Eric M. Schearer;Marcia K. O'Malley","doi":"10.1109/LRA.2025.3548535","DOIUrl":"https://doi.org/10.1109/LRA.2025.3548535","url":null,"abstract":"Hybrid Functional Electrical Stimulation (FES)-exoskeleton systems are emerging as promising assistive technologies because they can drive users through trajectories that mimic functional movements with higher precision than FES alone and lower torque consumption than an exoskeleton alone. However, limited FES movement accuracy, together with the exoskeleton power and size requirements imposed by high torque commands, has prevented hybrid systems from becoming prevalent assistive devices in real-world settings. Additionally, difficulties in effectively coordinating the two subsystems and the heterogeneity of motor capabilities across neurologically impaired individuals limit their widespread adoption. This letter presents a nonlinear trajectory optimization methodology for creating feasible, personalized trajectories between desired set-points, evaluated in simulation with seven neurologically intact user models. The personalized trajectories ensure dynamic feasibility while maintaining significant exoskeleton torque reduction in the hybrid FES-exoskeleton system compared to the exoskeleton alone. 
The personalized trajectories also maintain tracking accuracy similar to minimum jerk trajectories.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 5","pages":"4220-4227"},"PeriodicalIF":4.6,"publicationDate":"2025-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143667544","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Tightly Coupled and Invariant Filter for Visual-Inertial-GNSS-Barometer Odometry","authors":"Pengfei Zhang;Chen Jiang;Jiyuan Qiu","doi":"10.1109/LRA.2025.3548500","DOIUrl":"https://doi.org/10.1109/LRA.2025.3548500","url":null,"abstract":"A positioning system that relies solely on observations in a local frame, such as visual-inertial odometry (VIO), suffers from unobservable directions, leading to cumulative estimation errors over time. To address this issue, we propose a tightly coupled Visual-Inertial-GNSS-Barometer odometry (GBVIO) based on an invariant filter. The integration of Global Navigation Satellite System (GNSS) and barometric data enables global convergence. Our system supports both tightly coupled updates (using pseudorange and Doppler shift measurements) and loosely coupled updates (using global position data). We prove that the invariant filter using barometric observations remains invariant under stochastic unobservable transformations, thus exhibiting improved consistency. Validation through Monte Carlo simulations and real-world dataset experiments demonstrates the superiority of GBVIO over standalone VIO. All source code and datasets are publicly available.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 4","pages":"3964-3971"},"PeriodicalIF":4.6,"publicationDate":"2025-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143645322","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Tensegrity Robot Proprioceptive State Estimation With Geometric Constraints","authors":"Wenzhe Tong;Tzu-Yuan Lin;Jonathan Mi;Yicheng Jiang;Maani Ghaffari;Xiaonan Huang","doi":"10.1109/LRA.2025.3548398","DOIUrl":"https://doi.org/10.1109/LRA.2025.3548398","url":null,"abstract":"Tensegrity robots, characterized by a synergistic assembly of rigid rods and elastic cables, form robust structures that are resistant to impacts. However, this design introduces complexities in kinematics and dynamics, complicating control and state estimation. This work presents a novel proprioceptive state estimator for tensegrity robots. The estimator first uses the geometric constraints of 3-bar prism tensegrity structures, combined with IMU and motor encoder measurements, to reconstruct the robot's shape and orientation. It then employs a contact-aided invariant extended Kalman filter with forward kinematics to estimate the global position and orientation of the tensegrity robot. The state estimator's accuracy is assessed against ground truth data in both simulated environments and real-world tensegrity robot applications. It achieves an average drift percentage of 4.2%, comparable to the state estimation performance of traditional rigid robots. 
This state estimator advances the state of the art in tensegrity robot state estimation and has the potential to run in real time using onboard sensors, paving the way for full autonomy of tensegrity robots in unstructured environments.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 4","pages":"4069-4076"},"PeriodicalIF":4.6,"publicationDate":"2025-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143645324","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"HCR: Haptic Continuum Robot for Multi-Modal Cutaneous Feedback","authors":"Jui-Te Lin;Aedan Mangan;Tania K. Morimoto","doi":"10.1109/LRA.2025.3548406","DOIUrl":"https://doi.org/10.1109/LRA.2025.3548406","url":null,"abstract":"Cutaneous haptic feedback provides a sense of touch by displaying sensations, such as vibration, skin stretch, or normal force, directly to the skin. While the majority of these devices have been designed to display one main haptic sensation, recent work has begun to explore multi-modal haptic devices capable of rendering a variety of cutaneous cues. In this work, we investigate using the tip and body of a continuum robot to directly render multiple cutaneous cues to the fingerpad. We present the design of a device that consists of two Haptic Continuum Robots (HCRs) and is capable of rendering four distinct haptic cues: skin stretch, skin slip, normal indentation, and vibration. We present and validate a model of the proposed HCR and characterize the device performance. Finally, we conduct a preliminary haptic sensation identification study, which showed that users were able to correctly identify the displayed haptic sensation with 90% accuracy.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 4","pages":"4077-4084"},"PeriodicalIF":4.6,"publicationDate":"2025-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143645300","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Adaptive Fault-Tolerant Control of Wheeled Mobile Robots With Multiple Actuator Faults and Saturation","authors":"Hao Wu;Shuting Wang;Hu Li;Yuanlong Xie;Shiqi Zheng;Sheng Quan Xie","doi":"10.1109/LRA.2025.3548505","DOIUrl":"https://doi.org/10.1109/LRA.2025.3548505","url":null,"abstract":"Actuator faults under saturation pose significant challenges to the stable and accurate tracking of wheeled mobile robots (WMRs) in industrial applications. This letter proposes a novel adaptive fault-tolerant control (FTC) method for WMR systems that simultaneously considers multiple uncertain actuator faults, namely lock-in-place (LIP) and partial loss-of-effectiveness (LOE) faults, and actuator saturation. First, a novel barrier function-based nonsingular terminal sliding mode controller is designed to address actuator LIP failures and unknown dead zones. Then, an adaptive law based on two auxiliary variables is derived to estimate the bounds of the actuation effectiveness and saturation coefficients, uniformly handling the actuator LOE faults and saturation. The adaptive fault-tolerant controller is constructed from these adaptive laws, guaranteeing finite-time convergence of the error and sliding variables to a predetermined neighborhood of the origin. 
Finally, practical experiments demonstrate the effectiveness and advantages of the designed FTC scheme.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 4","pages":"4156-4163"},"PeriodicalIF":4.6,"publicationDate":"2025-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143676058","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Aerial Gym Simulator: A Framework for Highly Parallelized Simulation of Aerial Robots","authors":"Mihir Kulkarni;Welf Rehberg;Kostas Alexis","doi":"10.1109/LRA.2025.3548507","DOIUrl":"https://doi.org/10.1109/LRA.2025.3548507","url":null,"abstract":"This paper contributes the Aerial Gym Simulator, a highly parallelized, modular framework for simulation and rendering of arbitrary multirotor platforms based on NVIDIA Isaac Gym. Aerial Gym supports the simulation of under-, fully- and over-actuated multirotors, offering parallelized geometric controllers alongside a custom GPU-accelerated rendering framework for ray-casting capable of capturing depth, segmentation and vertex-level annotations from the environment. Multiple examples for key tasks, such as depth-based navigation through reinforcement learning, are provided. The comprehensive set of tools developed within the framework makes it a powerful resource for research on learning for control, planning, and navigation using state information as well as exteroceptive sensor observations. Extensive simulation studies are conducted and successful sim2real transfer of trained policies is demonstrated.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 4","pages":"4093-4100"},"PeriodicalIF":4.6,"publicationDate":"2025-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143676004","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improving RGB-Thermal Semantic Scene Understanding With Synthetic Data Augmentation for Autonomous Driving","authors":"Haotian Li;Henry K. Chu;Yuxiang Sun","doi":"10.1109/LRA.2025.3548399","DOIUrl":"https://doi.org/10.1109/LRA.2025.3548399","url":null,"abstract":"Semantic scene understanding is an important capability for autonomous vehicles. Despite recent advances in RGB-Thermal (RGB-T) semantic segmentation, existing methods often rely on parameter-heavy models, which are particularly constrained by the lack of precisely-labeled training data. To alleviate this limitation, we propose a data-driven method, SyntheticSeg, to enhance RGB-T semantic segmentation. Specifically, we utilize generative models to generate synthetic RGB-T images from the semantic layouts in real datasets and construct a large-scale, high-fidelity synthetic dataset to provide the segmentation models with sufficient training data. We also introduce a novel metric that measures both the scarcity and segmentation difficulty of semantic layouts, guiding sampling from the synthetic dataset to alleviate class imbalance and improve the overall segmentation performance. Experimental results on a public dataset demonstrate our superior performance over state-of-the-art methods.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 5","pages":"4452-4459"},"PeriodicalIF":4.6,"publicationDate":"2025-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143706822","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}