{"title":"FAST-LIVO2 on Resource-Constrained Platforms: LiDAR-Inertial-Visual Odometry With Efficient Memory and Computation","authors":"Bingyang Zhou;Chunran Zheng;Ziming Wang;Fangcheng Zhu;Yixi Cai;Fu Zhang","doi":"10.1109/LRA.2025.3581125","DOIUrl":"https://doi.org/10.1109/LRA.2025.3581125","url":null,"abstract":"This paper presents a lightweight LiDAR-inertial-visual odometry system optimized for resource-constrained platforms. It integrates a degeneration-aware adaptive visual frame selector into error-state iterated Kalman filter (ESIKF) with sequential updates, improving computation efficiency markedly while maintaining a similar level of robustness. Additionally, a memory-efficient mapping structure combining a locally unified visual-LiDAR map and a long-term visual map achieves a good trade-off between performance and memory usage. Extensive experiments on x86 and ARM platforms demonstrate the system's robustness and efficiency. On the Hilti dataset, our system achieves a <bold>33% reduction in per-frame runtime</b> and <bold>47% lower memory usage</b> compared to FAST-LIVO2, with only a <bold>3 cm increase in RMSE</b>. Despite this slight accuracy trade-off, our system remains competitive, outperforming state-of-the-art (SOTA) LIO methods such as FAST-LIO2 and most existing LIVO systems. These results validate the system's capability for scalable deployment on resource-constrained edge computing platforms.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 8","pages":"7931-7938"},"PeriodicalIF":4.6,"publicationDate":"2025-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144492314","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Estimation of Slip Ratio and Side Slip Angle of Wheeled Planetary Rovers Based on Trace Imprint","authors":"Nan Li;Junlong Guo;Liang Ding;Chenghua Tian;Chuan Zhou;Haibo Gao","doi":"10.1109/LRA.2025.3581084","DOIUrl":"https://doi.org/10.1109/LRA.2025.3581084","url":null,"abstract":"This letter proposes a method to estimate the wheel slip ratio and side slip angle of wheeled rovers by processing images of wheel trace imprints. The proposed method extracts structural features from trace imprint images, such as the trace unit, trace contour, and angle between the centerline of the trace unit and contour. The relationships between the structural trace imprint features and the wheel slip ratio and side slip angle have been revealed after a study of the underlying mechanism of trace imprint formation, with consideration of the kinematics of the wheel lug and lug-soil interaction. These relationships are then used to estimate wheel slip ratio and side slip angle. Compared with the existing estimation methods, the proposed method can estimate longitudinal slippage and lateral drift simultaneously that typically occur in planetary rovers during traverse of cross slopes. The effectiveness of the proposed method has been demonstrated by experiments using a rover wheel test-bed under various conditions.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 8","pages":"7979-7986"},"PeriodicalIF":4.6,"publicationDate":"2025-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144492315","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Degradation-Aware LiDAR-Thermal-Inertial SLAM","authors":"Yu Wang;Yufeng Liu;Lingxu Chen;Haoyao Chen;Shiwu Zhang","doi":"10.1109/LRA.2025.3581127","DOIUrl":"https://doi.org/10.1109/LRA.2025.3581127","url":null,"abstract":"During robotic disaster relief missions, state estimation still faces significant challenges, especially when GNSS is denied or sensor perception undergoes degradation. In this letter, we introduce a degradation-aware LiDAR-Thermal-Inertial SLAM, DaLiTI, that leverages the complementary nature of multi-modal information to achieve robust and precise state estimation in perceptually challenging environments. The system utilizes an iterated error state Kalman filter (IESKF) to loosely integrate LiDAR, thermal infrared camera, and IMU measurements. We propose an adaptive fusion mechanism that dynamically weights and fuses LiDAR and thermal measurements based on real-time modal quality to prevent failure information from propagating throughout the system. Experimental results demonstrate that, compared with state-of-the-art methods, DaLiTI maintains competitive performance in conventional environments and exhibits superior robustness and accuracy in degraded scenarios such as fire scenes or chemical plants with gas leaks.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 8","pages":"8035-8042"},"PeriodicalIF":4.6,"publicationDate":"2025-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144519464","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Harnessing the Power of Vibration Motors to Develop Miniature Untethered Robotic Fishes","authors":"Chongjie Jiang;Yingying Dai;Jinyang Le;Xiaomeng Chen;Yu Xie;Wei Zhou;Fuzhou Niu;Ying Li;Tao Luo","doi":"10.1109/LRA.2025.3581129","DOIUrl":"https://doi.org/10.1109/LRA.2025.3581129","url":null,"abstract":"Miniature underwater robots play a crucial role in the exploration and development of marine resources, particularly in confined spaces and high-pressure deep-sea environments. This study presents the design, optimization, and performance of a miniature robotic fish, powered by the oscillation of bio-inspired fins. These fins feature a rigid-flexible hybrid structure and use an eccentric rotating mass (ERM) vibration motor as the excitation source to generate high-frequency unidirectional oscillations that induce acoustic streaming for propulsion. The drive mechanism, powered by miniature ERM vibration motors, eliminates the need for complex mechanical drive systems, enabling complete isolation of the entire drive system from the external environment and facilitating the miniaturization of the robotic fish. A compact, untethered robotic fish, measuring 85 × 60 × 45 mm<sup>3</sup>, is equipped with three bio-inspired fins located at the pectoral and caudal positions. Experimental results demonstrate that the robotic fish achieves a maximum forward swimming speed of 1.36 body lengths (BL) per second powered by all fins and minimum turning radius of 0.6 BL when powered by a single fin. In addition, the robotic fish is able to swim upstream in turbulent flow, and its autonomous version can navigate complex, obstacle-filled environments. These results underscore the significance of employing the ERM vibration motor in advancing the development of highly maneuverable, miniature untethered underwater robots for various marine exploration tasks.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 8","pages":"7963-7970"},"PeriodicalIF":4.6,"publicationDate":"2025-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144492483","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Semantic Hierarchy-Guided Adversarial Attack for Autonomous Driving","authors":"Gwangbin Kim;SeungJun Kim","doi":"10.1109/LRA.2025.3580923","DOIUrl":"https://doi.org/10.1109/LRA.2025.3580923","url":null,"abstract":"Autonomous vehicles employ semantic segmentation as a foundational component for perception and scene understanding, upon which driving decisions can be informed. Despite their performance, these deep learning models remain susceptible to subtle input perturbations that can cause severe deviation in model output. To enhance algorithmic robustness by examining such vulnerabilities, researchers have investigated adversarial examples, which are visually imperceptible yet can severely degrade model performance. However, traditional attacks produce arbitrary misclassifications that ignore semantic relationships, making the attack less effective. This letter introduces a semantic hierarchy-guided adversarial attack (SHAA), a white-box adversarial attack against semantic segmentation for autonomous driving. By combining semantic hierarchy and adaptive momentum-based updates across the image, SHAA produces semantically nontrivial yet highly effective perturbations. The SHAA method exposes deeper vulnerabilities with a higher attack success rate in semantic segmentation than existing methods, aiding the design of a more resilient perception system for autonomous vehicles.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 8","pages":"7907-7914"},"PeriodicalIF":4.6,"publicationDate":"2025-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144481881","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Zero-Shot Denoiser for Enhanced Acoustic Inspection: Mix Signal Separation and Text-Guided Audio Reconstruction","authors":"Koki Shoda;Jun Younes Louhi Kasahara;Qi An;Atsushi Yamashita","doi":"10.1109/LRA.2025.3580317","DOIUrl":"https://doi.org/10.1109/LRA.2025.3580317","url":null,"abstract":"Acoustic inspection is crucial for infrastructure maintenance, but its effectiveness is often hampered by environmental noise. Conventional denoising methods rely on prior knowledge or training data, limiting their practicability. This letter presents Zero-Shot Denoiser, a novel approach achieving noise reduction without pre-collected target sound samples or noise knowledge. Our method synergistically combines Mix Signal Separation (MSS) for unsupervised audio decomposition and Artifact-Resilient Attention (AR-Attention) for text-guided audio reconstruction. AR-Attention leverages pre-trained audio-language models and dual normalization to mitigate BSS artifacts and identify target sounds semantically. We introduce pseudo Signal-to-Noise Ratio, derived from the audio-language model, for automatic BSS hyperparameter optimization. In experiments using public datasets, our method, operating in a true zero-shot setting, achieved performance comparable to that of state-of-the-art supervised denoising methods, and experiments targeting hammering tests confirmed the effectiveness of our approach for real-world acoustic inspections. Our approach overcomes the limitations of data-dependent techniques and offers a versatile noise reduction solution for acoustic inspection and broader acoustic tasks.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 8","pages":"7867-7874"},"PeriodicalIF":4.6,"publicationDate":"2025-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144481795","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Tailless Flapping-Wing Robot With Bio-Inspired Elastic Passive Legs for Multi-Modal Locomotion","authors":"Zhi Zheng;Xiangyu Xu;Jin Wang;Yikai Chen;Jingyang Huang;Ruixin Wu;Huan Yu;Guodong Lu","doi":"10.1109/LRA.2025.3580324","DOIUrl":"https://doi.org/10.1109/LRA.2025.3580324","url":null,"abstract":"Flapping-wing robots offer significant versatility; however, achieving efficient multi-modal locomotion remains challenging. This letter presents the design, modeling, and experimentation of a novel tailless flapping-wing robot with three independently actuated pairs of wings. Inspired by the leg morphology of juvenile water striders, the robot incorporates bio-inspired elastic passive legs that convert flapping-induced vibrations into directional ground movement, enabling locomotion without additional actuators. This vibration-driven mechanism facilitates lightweight, mechanically simplified multi-modal mobility. An SE(3)-based controller coordinates flight and mode transitions with minimal actuation. To validate the robot's feasibility, a functional prototype was developed, and experiments were conducted to evaluate its flight, ground locomotion, and mode-switching capabilities. Results show satisfactory performance under constrained actuation, highlighting the potential of multi-modal flapping-wing designs for future aerial-ground robotic applications. These findings provide a foundation for future studies on frequency-based terrestrial control and passive yaw stabilization in hybrid locomotion systems.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 8","pages":"7971-7978"},"PeriodicalIF":4.6,"publicationDate":"2025-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144492368","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"TACT: Humanoid Whole-Body Contact Manipulation Through Deep Imitation Learning With Tactile Modality","authors":"Masaki Murooka;Takahiro Hoshi;Kensuke Fukumitsu;Shimpei Masuda;Marwan Hamze;Tomoya Sasaki;Mitsuharu Morisawa;Eiichi Yoshida","doi":"10.1109/LRA.2025.3580329","DOIUrl":"https://doi.org/10.1109/LRA.2025.3580329","url":null,"abstract":"Manipulation with whole-body contact by humanoid robots offers distinct advantages, including enhanced stability and reduced load. On the other hand, we need to address challenges such as the increased computational cost of motion generation and the difficulty of measuring broad-area contact. We therefore have developed a humanoid control system that allows a humanoid robot equipped with tactile sensors on its upper body to learn a policy for whole-body manipulation through imitation learning based on human teleoperation data. This policy, named tactile-modality extended ACT (TACT), has a feature to take multiple sensor modalities as input, including joint position, vision, and tactile measurements. Furthermore, by integrating this policy with retargeting and locomotion control based on a biped model, we demonstrate that the life-size humanoid robot RHP7 Kaleido is capable of achieving whole-body contact manipulation while maintaining balance and walking. Through detailed experimental verification, we show that inputting both vision and tactile modalities into the policy contributes to improving the robustness of manipulation involving broad and delicate contact.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 8","pages":"7819-7826"},"PeriodicalIF":4.6,"publicationDate":"2025-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144367058","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"RoboMT: Human-Like Compliance Control for Assembly via a Bilateral Robotic Teleoperation and Hybrid Mamba-Transformer Framework","authors":"Wang Rundong;Cheng Yanchun;Yuan Qilong;Prakash Alok;Francis EH Tay;Marcelo H. Ang","doi":"10.1109/LRA.2025.3579238","DOIUrl":"https://doi.org/10.1109/LRA.2025.3579238","url":null,"abstract":"Robotic compliance control is critical for delicate tasks such as electronic connector assembly, where precise force regulation and adaptability are paramount. However, traditional methods often struggle with modeling inaccuracies and sensor noise. Inspired by human adaptability in complex assembly operations, we present RoboMT, a novel framework that integrates a Mamba algorithm with a Transformer architecture to achieve human-like compliance control. By leveraging a bilateral teleoperation platform, we collect extensive real-time force/torque and motion data to form a comprehensive dataset for training. Furthermore, RoboMT incorporates an Adaptive Action Chunk module and a Temporal Fusion module to ensure smooth and robust action prediction. Experimental results across four electronic assembly tasks show that RoboMT achieves superior success rates (62–98%) over baselines (29–98%), while maintaining stable force regulation around 2.5 N, closely resembling human performance. During task transitions, RoboMT quickly stabilizes at 5 N with minimal overshoot, avoiding the large force spikes (over 24 N) seen in baselines. Additionally, RoboMT maintains an average inference speed of 55 ms per batch, balancing real-time responsiveness and control robustness. Overall, RoboMT presents a compelling pathway toward error-minimized, human-level compliance control, and generalization for real-world robotic assembly, setting a new benchmark for precision, adaptability, and robustness in robotic assembly.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 8","pages":"7771-7778"},"PeriodicalIF":4.6,"publicationDate":"2025-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144367078","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Tightly-Coupled LiDAR-IMU-Leg Odometry With Online Learned Leg Kinematics Incorporating Foot Tactile Information","authors":"Taku Okawara;Kenji Koide;Aoki Takanose;Shuji Oishi;Masashi Yokozuka;Kentaro Uno;Kazuya Yoshida","doi":"10.1109/LRA.2025.3580332","DOIUrl":"https://doi.org/10.1109/LRA.2025.3580332","url":null,"abstract":"In this letter, we present tightly coupled LiDAR-IMU-leg odometry, which is robust to challenging conditions such as featureless environments and deformable terrains. We developed an online learning-based leg kinematics model named the <italic>neural leg kinematics model</i>, which incorporates tactile information (foot reaction force) to implicitly express the nonlinear dynamics between robot feet and the ground. Online training of this model enhances its adaptability to weight load changes of a robot (e.g., assuming delivery or transportation tasks) and terrain conditions. According to the <italic>neural adaptive leg odometry factor</i> and online uncertainty estimation of the leg kinematics model-based motion predictions, we jointly solve online training of this kinematics model and odometry estimation on a unified factor graph to retain the consistency of both. The proposed method was verified through real experiments using a quadruped robot in two challenging situations: 1) a sandy beach, representing an extremely featureless area with a deformable terrain, and 2) a campus, including multiple featureless areas and terrain types of asphalt, gravel (deformable terrain), and grass. Experimental results showed that our odometry estimation incorporating the <italic>neural leg kinematics model</i> outperforms state-of-the-art works.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 8","pages":"7947-7954"},"PeriodicalIF":4.6,"publicationDate":"2025-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144492487","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}