{"title":"Robust Gait Phase Estimation With Discrete Wavelet Transform for Walking Assistance on Multiple Terrains","authors":"Libo Zhou;Feiyu Jiang;Shaoping Bai;Yuanjing Feng;Linlin Ou;Xinyi Yu","doi":"10.1109/LRA.2025.3564093","DOIUrl":"https://doi.org/10.1109/LRA.2025.3564093","url":null,"abstract":"Gait phase detection is crucial to realize personalized assistive functions of lower limb exoskeletons. A common method of gait phase estimation is the adaptive oscillator, which performs well in periodic gaits. However, these types of methods fail in gait phase estimation under aperiodic gait cycles. Although some modified methods have been proposed for gait phase estimation under multiple terrains, they usually require dataset training, and the estimation accuracy is highly dependent on the collected dataset. To realize accurate and stable recognition of gait phase, a novel gait phase estimation method is proposed to estimate the gait phase without dataset training. This method, by incorporating discrete wavelet transform (DWT) with adaptive oscillators (AOs), can identify non-periodic mutations online and reset the oscillator at an appropriate time to avoid the divergence phenomenon when the adaptive oscillator is subjected to mutation signals. In this proposed method, the hip angle is measured by an inertial measurement unit (IMU) sensor and the measured data is then processed using discrete wavelet transform to detect the maximum hip flexion angle (MFA) and non-periodic mutations. The gait phase is finally estimated by a modified adaptive oscillator. Ground walking tests with a variety of speeds by six subjects were conducted under different walking conditions and the results show that the new method works robustly for gait phase estimation under multiple terrains. The method is demonstrated on a hip exoskeleton in walking assistance.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 6","pages":"6031-6038"},"PeriodicalIF":4.6,"publicationDate":"2025-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143908445","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"IEEE Robotics and Automation Letters Information for Authors","authors":"","doi":"10.1109/LRA.2025.3561983","DOIUrl":"https://doi.org/10.1109/LRA.2025.3561983","url":null,"abstract":"","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 5","pages":"C4-C4"},"PeriodicalIF":4.6,"publicationDate":"2025-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10974751","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143875111","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Do You Know the Way? Human-in-The-Loop Understanding for Fast Traversability Estimation in Mobile Robotics","authors":"Andre Schreiber;Katherine Driggs-Campbell","doi":"10.1109/LRA.2025.3563819","DOIUrl":"https://doi.org/10.1109/LRA.2025.3563819","url":null,"abstract":"The increasing use of robots in unstructured environments necessitates the development of effective perception and navigation strategies to enable field robots to successfully perform their tasks. In particular, it is key for such robots to understand where in their environment they can and cannot travel—a task known as traversability estimation. However, existing geometric approaches to traversability estimation may fail to capture nuanced representations of traversability, whereas vision-based approaches typically either involve manually annotating a large number of images or require robot experience. In addition, existing methods can struggle to address domain shifts as they typically do not learn during deployment. To this end, we propose a human-in-the-loop (HiL) method for traversability estimation that prompts a human for annotations as-needed. Our method uses a foundation model to enable rapid learning on new annotations and to provide accurate predictions even when trained on a small number of quickly-provided HiL annotations. We extensively validate our method in simulation and on real-world data, and demonstrate that it can provide state-of-the-art traversability prediction performance.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 6","pages":"5863-5870"},"PeriodicalIF":4.6,"publicationDate":"2025-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10974681","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143902735","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"IEEE Robotics and Automation Society Information","authors":"","doi":"10.1109/LRA.2025.3561981","DOIUrl":"https://doi.org/10.1109/LRA.2025.3561981","url":null,"abstract":"","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 5","pages":"C3-C3"},"PeriodicalIF":4.6,"publicationDate":"2025-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10974744","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143875108","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"IEEE Robotics and Automation Society Publication Information","authors":"","doi":"10.1109/LRA.2025.3561979","DOIUrl":"https://doi.org/10.1109/LRA.2025.3561979","url":null,"abstract":"","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 5","pages":"C2-C2"},"PeriodicalIF":4.6,"publicationDate":"2025-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10974750","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143871126","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Self-Supervised Diffusion-Based Scene Flow Estimation and Motion Segmentation With 4D Radar","authors":"Yufei Liu;Xieyuanli Chen;Neng Wang;Stepan Andreev;Alexander Dvorkovich;Rui Fan;Huimin Lu","doi":"10.1109/LRA.2025.3563829","DOIUrl":"https://doi.org/10.1109/LRA.2025.3563829","url":null,"abstract":"Scene flow estimation (SFE) and motion segmentation (MOS) using 4D radar are emerging yet challenging tasks in robotics and autonomous driving applications. Existing LiDAR- or RGB-D-based point cloud processing methods often deliver suboptimal performance on radar data due to radar signals' highly sparse, noisy, and artifact-prone nature. Furthermore, for radar-based SFE and MOS, the lack of annotated datasets further aggravates these challenges. To address these issues, we propose a novel self-supervised framework that exploits denoising diffusion models to effectively handle radar noise inputs and predict point-wise scene flow and motion status simultaneously. To extract key features from the raw input, we design a transformer-based feature encoder tailored to address the sparsity of 4D radar data. Additionally, we generate self-supervised segmentation signals by exploiting the discrepancy between robust rigid ego-motion estimates and scene flow predictions, thereby eliminating the need for manual annotations. Experimental evaluations on the View-of-Delft (VoD) dataset and TJ4DRadSet demonstrate that our method achieves state-of-the-art performance for both radar-based SFE and MOS.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 6","pages":"5895-5902"},"PeriodicalIF":4.6,"publicationDate":"2025-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143901866","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Design and Control of a Tilt-Rotor Tailsitter Aircraft With Pivoting VTOL Capability","authors":"Ziqing Ma;Ewoud J.J. Smeur;Guido C.H.E. de Croon","doi":"10.1109/LRA.2025.3563821","DOIUrl":"https://doi.org/10.1109/LRA.2025.3563821","url":null,"abstract":"Tailsitter aircraft attract considerable interest due to their capabilities of both agile hover and high speed forward flight. However, traditional tailsitters that use aerodynamic control surfaces face the challenge of limited control effectiveness and associated actuator saturation during vertical flight and transitions. Conversely, tailsitters relying solely on tilting rotors have the drawback of insufficient roll control authority in forward flight. This letter proposes a tilt-rotor tailsitter aircraft with both elevons and tilting rotors as a promising solution. By implementing a cascaded weighted least squares (WLS) based incremental nonlinear dynamic inversion (INDI) controller, the drone successfully achieved autonomous waypoint tracking in outdoor experiments at a cruise airspeed of 16 m/s, including transitions between forward flight and hover without actuator saturation. Wind tunnel experiments confirm improved roll control compared to tilt-rotor-only configurations, while comparative outdoor flight tests highlight the vehicle's superior control over elevon-only designs during critical phases such as vertical descent and transitions. Finally, we also show that the tilt-rotors allow for an autonomous takeoff and landing with a unique pivoting capability that demonstrates stability and robustness under wind disturbances.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 6","pages":"5911-5918"},"PeriodicalIF":4.6,"publicationDate":"2025-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143902726","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"CUBE360: Learning Cubic Field Representation for Monocular Panoramic Depth Estimation","authors":"Wenjie Chang;Hao Ai;Tianzhu Zhang;Lin Wang","doi":"10.1109/LRA.2025.3563827","DOIUrl":"https://doi.org/10.1109/LRA.2025.3563827","url":null,"abstract":"Panoramic depth estimation presents significant challenges due to the severe distortion caused by equirectangular projection (ERP) and the limited availability of panoramic RGB-D datasets. Inspired by the recentsuccess of neural rendering, we propose a self-supervised method, named <bold>CUBE360</b>, that learns a cubic field composed of multiple Multi-Plane Images (MPIs) from a single panoramic image for <bold>continuous</b> depth estimation at any view direction. Our CUBE360 employs cubemap projection to transform an ERP image into six faces and extract the MPIs for each, thereby reducing the memory consumption required for MPIs processing of high-resolution data. An attention-based blending module is then employed to learn correlations among the MPIs of cubic faces, constructing a cubic field representation with color and density information at various depth levels. Furthermore, a dual-sampling strategy is introduced to render novel views from the cubic field at both cubic and planar scales. The entire pipeline is trained using photometric loss calculated from rendered views within a self-supervised learning (SSL) approach, enabling training without depth annotations. Experiments on synthetic and real-world datasets demonstrate the superior performance of CUBE360 compared to previous SSL methods.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 6","pages":"6264-6271"},"PeriodicalIF":4.6,"publicationDate":"2025-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143937959","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Far-Field Image-Based Traversability Mapping for a Priori Unknown Natural Environments","authors":"Ethan Fahnestock;Erick Fuentes;Samuel Prentice;Vasileios Vasilopoulos;Philip R Osteen;Thomas Howard;Nicholas Roy","doi":"10.1109/LRA.2025.3563808","DOIUrl":"https://doi.org/10.1109/LRA.2025.3563808","url":null,"abstract":"While navigating unknown environments, robots rely primarily on proximate features for guidance in decision making, such as depth information from lidar to build a costmap, or local semantic information from images. The limited range over which these features can be used may result in poor robot behavior when assumptions about the cost of the map beyond the range of proximate features misguide the robot. Integrating “far-field” image features that originate beyond these proximate features into the mapping pipeline has the promise of enabling more intelligent navigation through unknown terrain. To navigate with far-field features, key challenges must be overcome. As far-field features are typically too distant to localize precisely, they are difficult to place in a map. Additionally, the large distance between the robot and these features makes connecting these features to their navigation implications difficult. We propose <italic>FITAM</i>, an approach that learns to use far-field features to predict costs to guide navigation through unknown environments in a self-supervised manner. Unlike previous work, our approach does not rely on flat ground plane assumptions or range sensors to localize observations. We demonstrate the benefits of our approach through simulated trials and real-world deployment on a Clearpath Robotics Warthog navigating through a forest environment.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 6","pages":"6039-6046"},"PeriodicalIF":4.6,"publicationDate":"2025-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143908409","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Double-Feedback: Enhancing Large Language Models Reasoning in Robotic Tasks by Knowledge Graphs","authors":"Haitao Wang;Shaolin Zhang;Shuo Wang;Tianyu Jiang;Yueguang Ge","doi":"10.1109/LRA.2025.3562776","DOIUrl":"https://doi.org/10.1109/LRA.2025.3562776","url":null,"abstract":"Large language models (LLMs) have demonstrated remarkable reasoning capabilities. However, in real-world robotic tasks, LLMs face grounding issues and lack precise feedback, resulting in the generated solutions deviating from the actual situation. In this letter, we propose Double-Feedback, a method that enhances LLMs reasoning by Knowledge graphs (KGs). The KGs play three key roles in Double-Feedback: prompting the LLMs to generate solutions, representing the task scenes, and verifying the solutions to provide feedback. We design structured knowledge prompts that convey the task knowledge background, example solutions, revision principles, and robotic tasks to the LLMs. We also introduce the distributed representation to quantify the task scene with interpretability. Based on the structured knowledge prompts and the distributed representation, we employ the KGs to evaluate the feasibility of each step before execution and verify the effects of the solutions after completing the tasks. The LLMs can adjust and replan the solutions based on the feedback from the KGs. Extensive experiments demonstrate that Double-Feedback outperforms prior works in the ALFRED benchmark. In addition, ablation studies show that Double-Feedback guides LLMs in generating solutions aligned with robotic tasks in the real world.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 6","pages":"5951-5958"},"PeriodicalIF":4.6,"publicationDate":"2025-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143901867","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}