{"title":"Waliner: Lightweight and Resilient Plugin Mapping Method With Wall Features for Visually Challenging Indoor Environments","authors":"DongKi Noh;Byunguk Lee;Hanngyoo Kim;SeungHwan Lee;HyunSung Kim;JuWon Kim;Jeongsik Choi;SeungMin Baek","doi":"10.1109/LRA.2025.3562370","DOIUrl":"https://doi.org/10.1109/LRA.2025.3562370","url":null,"abstract":"Vision-based indoor navigation systems have been proposed previously for service robots. However, in real-world scenarios, many of these approaches remain vulnerable to visually challenging environments such as white walls. In-home service robots, which are mass-produced, require affordable sensors and processors. Therefore, this letter presents a lightweight and resilient plugin mapping method called <italic>Waliner</italic>, using an RGB-D sensor and an embedded processor equipped with a neural processing unit (NPU). <italic>Waliner</italic> can be easily implemented in existing algorithms and enhances the accuracy and robustness of 2D/3D mapping in visually challenging environments with minimal computational overhead by leveraging <bold>a)</bold> structural building components, such as walls; <bold>b)</bold> the Manhattan world assumption; and <bold>c)</bold> an extended Kalman filter-based pose estimation and map management technique to maintain reliable mapping performance under varying lighting and featureless conditions. 
As verified in various real-world in-home scenes, the proposed method yields over a 5% improvement in mapping consistency as measured by the map similarity index (MSI) while using minimal resources.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 6","pages":"5799-5806"},"PeriodicalIF":4.6,"publicationDate":"2025-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143900581","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Highly Maneuverable Flying Squirrel Drone With Agility-Improving Foldable Wings","authors":"Dohyeon Lee;Jun-Gill Kang;Soohee Han","doi":"10.1109/LRA.2025.3562372","DOIUrl":"https://doi.org/10.1109/LRA.2025.3562372","url":null,"abstract":"Drones, like most airborne aerial vehicles, face inherent disadvantages in achieving agile flight due to their limited thrust capabilities. These physical constraints cannot be fully addressed through advancements in control algorithms alone. Drawing inspiration from the winged flying squirrel, this letter proposes a highly maneuverable drone with agility-enhancing foldable wings. The additional air resistance generated by appropriately deploying these wings significantly improves the tracking performance of the proposed “flying squirrel” drone. By leveraging collaborative control between the conventional propeller system and the foldable wings—coordinated through the Thrust-Wing Coordination Control (TWCC) framework—the controllable acceleration set is expanded, allowing for the production of abrupt vertical forces unachievable with traditional wingless drones. The complex aerodynamics of the foldable wings are captured using a physics-assisted recurrent neural network (paRNN), which calibrates the angle of attack (AOA) to align with the real-world aerodynamic behavior of the wings. The model is trained on real-world flight data and incorporates flat-plate aerodynamic principles. 
Experimental results demonstrate that the proposed flying squirrel drone achieves a 13.1<inline-formula><tex-math>$\\%$</tex-math></inline-formula> improvement in tracking performance, as measured by root mean square error (RMSE), compared to a conventional wingless drone.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 6","pages":"5783-5790"},"PeriodicalIF":4.6,"publicationDate":"2025-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143900417","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Pushing the Limits of Reactive Navigation: Learning to Escape Local Minima","authors":"Isar Meijer;Michael Pantic;Helen Oleynikova;Roland Siegwart","doi":"10.1109/LRA.2025.3562004","DOIUrl":"https://doi.org/10.1109/LRA.2025.3562004","url":null,"abstract":"Can a robot navigate a cluttered environment without an explicit map? Reactive methods that use only the robot's current sensor data and local information are fast and flexible, but prone to getting stuck in local minima. Is there a middle ground between reactive methods and map-based path planners? In this paper, we investigate feedforward and recurrent networks to augment a purely reactive sensor-based navigation algorithm, which should give the robot “geometric intuition” about how to escape local minima. We train on a large number of extremely cluttered simulated worlds, auto-generated from primitive shapes, and show that our system zero-shot transfers to worlds based on real 3D data of man-made environments, and can handle up to 30% sensor noise without degradation of performance. We also offer a discussion of what role network memory plays in our final system, and what insights can be drawn about the nature of reactive vs. map-based navigation.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 7","pages":"6792-6799"},"PeriodicalIF":4.6,"publicationDate":"2025-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144179041","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"STAMP: Differentiable Task and Motion Planning via Stein Variational Gradient Descent","authors":"Yewon Lee;Andrew Z. Li;Philip Huang;Eric Heiden;Krishna Murthy Jatavallabhula;Fabian Damken;Kevin Smith;Derek Nowrouzezahrai;Fabio Ramos;Florian Shkurti","doi":"10.1109/LRA.2025.3561575","DOIUrl":"https://doi.org/10.1109/LRA.2025.3561575","url":null,"abstract":"Planning for sequential robotics tasks often requires integrated symbolic and geometric reasoning. Task and motion planning (TAMP) algorithms typically solve these problems by performing a tree search over high-level task sequences while checking for kinematic and dynamic feasibility. This can be inefficient because, typically, candidate task plans resulting from the tree search ignore geometric information. This often leads to motion planning failures that require expensive backtracking steps to find alternative task plans. We propose a novel approach to TAMP called Stein Task and Motion Planning (STAMP) that relaxes the hybrid optimization problem into a continuous domain. This allows us to leverage gradients from differentiable physics simulation to fully optimize discrete and continuous plan parameters for TAMP. In particular, we solve the optimization problem using a gradient-based variational inference algorithm called Stein Variational Gradient Descent. This allows us to find a distribution of solutions within a single optimization run. Furthermore, we use an off-the-shelf differentiable physics simulator that is parallelized on the GPU to run parallelized inference over diverse plan parameters. 
We demonstrate our method on a variety of problems and show that it can find multiple diverse plans in a single optimization run while also being significantly faster than existing approaches.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 6","pages":"6007-6014"},"PeriodicalIF":4.6,"publicationDate":"2025-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143908366","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"FunGrasp: Functional Grasping for Diverse Dexterous Hands","authors":"Linyi Huang;Hui Zhang;Zijian Wu;Sammy Christen;Jie Song","doi":"10.1109/LRA.2025.3561573","DOIUrl":"https://doi.org/10.1109/LRA.2025.3561573","url":null,"abstract":"Functional grasping is essential for humans to perform specific tasks, such as grasping scissors by the finger holes to cut materials or by the blade to safely hand them over. Enabling dexterous robot hands with functional grasping capabilities is crucial for their deployment to accomplish diverse real-world tasks. Recent research in dexterous grasping, however, often focuses on power grasps while overlooking task- and object-specific functional grasping poses. In this letter, we introduce <italic>FunGrasp</italic>, a system that enables functional dexterous grasping across various robot hands and performs one-shot transfer to unseen objects. Given a single RGBD image of functional human grasping, our system estimates the hand pose and transfers it to different robotic hands via a human-to-robot (H2R) grasp retargeting module. Guided by the retargeted grasping poses, a policy is trained through reinforcement learning in simulation for dynamic grasping control. To achieve robust sim-to-real transfer, we employ several techniques including privileged learning, system identification, domain randomization, and gravity compensation. In our experiments, we demonstrate that our system enables diverse functional grasping of unseen objects using single RGBD images, and can be successfully deployed across various dexterous robot hands. 
The significance of each component is validated through comprehensive ablation studies.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 6","pages":"6175-6182"},"PeriodicalIF":4.6,"publicationDate":"2025-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143925203","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"II-NVM: Enhancing Map Accuracy and Consistency With Normal Vector-Assisted Mapping","authors":"Chengwei Zhao;Yixuan Li;Yina Jian;Jie Xu;Linji Wang;Yongxin Ma;Xinglai Jin","doi":"10.1109/LRA.2025.3561568","DOIUrl":"https://doi.org/10.1109/LRA.2025.3561568","url":null,"abstract":"SLAM technology plays a crucial role in indoor mapping and localization. A common challenge in indoor environments is the “double-sided mapping issue”, where closely positioned walls, doors, and other surfaces are mistakenly identified as a single plane, significantly hindering map accuracy and consistency. To address this issue, this letter introduces a SLAM approach that ensures accurate mapping using normal vector consistency. We enhance the voxel map structure to store both point cloud data and normal vector information, enabling the system to evaluate consistency during nearest neighbor searches and map updates. This process distinguishes between the front and back sides of surfaces, preventing incorrect point-to-plane constraints. Moreover, we implement an adaptive radius KD-tree search method that dynamically adjusts the search radius based on the local density of the point cloud, thereby enhancing the accuracy of normal vector calculations. To further improve real-time performance and storage efficiency, we incorporate a Least Recently Used (LRU) cache strategy, which facilitates efficient incremental updates of the voxel map. The <uri>code</uri> is released as open-source and validated in both simulated environments and real indoor scenarios. Experimental results demonstrate that this approach effectively resolves the “double-sided mapping issue” and significantly improves mapping precision. 
Additionally, we have developed and open-sourced the first simulation and real-world dataset specifically tailored for the “double-sided mapping issue”.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 6","pages":"5465-5472"},"PeriodicalIF":4.6,"publicationDate":"2025-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143888448","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Efficient Training Data Collection for Distance Sensor Arrays Through Data Correction and Augmentation Approaches","authors":"Sogo Amagai;Shin'ichi Warisawa;Rui Fukui","doi":"10.1109/LRA.2025.3561567","DOIUrl":"https://doi.org/10.1109/LRA.2025.3561567","url":null,"abstract":"Several machine learning (<bold>ML</bold>)-based measurement systems have been proposed to estimate difficult-to-measure quantities from the values of distance sensor arrays. However, variations in sensor output characteristics (<bold>OCs</bold>) can lead to degradation in the estimation accuracy when transferring training data acquired from the original acquisition sensors to new target sensors. Moreover, acquiring training data from target sensors is time and labor intensive. We propose two methods to convert previously collected training data to reflect different OCs, enabling their repeated use. For evaluation, we use a device that estimates the relative position and orientation of vehicles based on the values of distance sensor arrays. The correction approach for the training data based on the OC data reduces the root-mean-square error (RMSE) by up to 23% compared with transferring training data. The augmentation approach transforms the training data into data that include different OCs using a mapping function constructed from a small batch of training data. Furthermore, a method for collecting a small batch of training data to achieve a higher OC conversion accuracy is demonstrated. The RMSE is reduced by up to 58% by the proposed method compared with transferring training data. 
The results of this study demonstrate the feasibility of the practical applications of ML-based measurement systems using distance sensor arrays, which may facilitate the development of simple and fast calibration methods.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 6","pages":"6392-6399"},"PeriodicalIF":4.6,"publicationDate":"2025-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10966032","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144090738","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MR-COGraphs: Communication-Efficient Multi-Robot Open-Vocabulary Mapping System via 3D Scene Graphs","authors":"Qiuyi Gu;Zhaocheng Ye;Jincheng Yu;Jiahao Tang;Tinghao Yi;Yuhan Dong;Jian Wang;Jinqiang Cui;Xinlei Chen;Yu Wang","doi":"10.1109/LRA.2025.3561569","DOIUrl":"https://doi.org/10.1109/LRA.2025.3561569","url":null,"abstract":"Collaborative perception in unknown environments is crucial for multi-robot systems. With the emergence of foundation models, robots can now not only perceive geometric information but also achieve open-vocabulary scene understanding. However, existing map representations that support open-vocabulary queries often involve large data volumes, which becomes a bottleneck for multi-robot transmission in communication-limited environments. To address this challenge, we develop a method to construct a graph-structured 3D representation called COGraph, where nodes represent objects with semantic features and edges capture their spatial adjacency relationships. Before transmission, a data-driven feature encoder is applied to compress the feature dimensions of the COGraph. Upon receiving COGraphs from other robots, the semantic features of each node are recovered using a decoder. We also propose a feature-based approach for place recognition and translation estimation, enabling the merging of local COGraphs into a unified global map. We validate our framework on two realistic datasets and the real-world environment. 
The results demonstrate that, compared to existing baselines for open-vocabulary map construction, our framework reduces the data volume by over 80% while maintaining mapping and query performance without compromise.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 6","pages":"5713-5720"},"PeriodicalIF":4.6,"publicationDate":"2025-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143883373","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Soft Growing Robot Explore Unknown Environments Through Obstacle Interaction","authors":"Haoran Wu;Fuchun Sun;Canwei Huang;Haiming Huang;Zhongyi Chu","doi":"10.1109/LRA.2025.3561572","DOIUrl":"https://doi.org/10.1109/LRA.2025.3561572","url":null,"abstract":"In low-light, unstructured, and confined environments, performing Simultaneous Localization and Mapping (SLAM) with conventional methods presents significant challenges. Soft growing robots, characterized by their compliance and extensibility, interact safely with the environment, making them well-suited for navigation in such environments. Through collision-based guidance, the robot can gather environmental data via morphological adaptations. Based on this, we developed the sensing capabilities of the soft growing robot, retaining its flexibility while enabling effective environmental interaction and perception. The robot employs a gyroscope combined with an encoder to track the end-effector trajectory and uses flexible proximity sensing to detect obstacles. By fusing the information from these sensors, we propose a multi-sensor fusion strategy for environmental exploration of the soft growing robot. The robot navigates unknown environments by employing pre-bending based on prior environmental data and utilizing pneumatic artificial muscles. 
In multi-obstacle environmental exploration, the path prediction error is less than 3.5% of the robot's total length, enabling greater environmental coverage with fewer exploration attempts.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 6","pages":"6015-6022"},"PeriodicalIF":4.6,"publicationDate":"2025-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143908443","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Effect of Reduced-Order Modelling on Passivity and Rendering Performance Analyses of Series Elastic Actuation","authors":"Celal Umut Kenanoglu;Volkan Patoglu","doi":"10.1109/LRA.2025.3561564","DOIUrl":"https://doi.org/10.1109/LRA.2025.3561564","url":null,"abstract":"We study reduced-order models of series elastic actuation under velocity-sourced impedance control, where the inner motion controller is assumed to render the system into an ideal motion source within a control bandwidth and replaced by a low-pass filter. We present necessary and sufficient conditions for the passivity of this system and prove that the passivity results obtained through the reduced-order model may violate the passivity of the full-order model. To enable safe use of the reduced-order model, we derive conditions under which the passivity bounds of the reduced-order model guarantee the passivity of the full-order system. Moreover, we synthesize passive physical equivalents of closed-loop systems while rendering Kelvin-Voigt, linear spring, and null impedance models to provide rigorous comparisons of the passivity bounds and rendering performance among the full- and reduced-order models. We verify our results through a comprehensive set of simulations and experiments.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 6","pages":"5745-5752"},"PeriodicalIF":4.6,"publicationDate":"2025-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143900548","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}