{"title":"IEEE Robotics and Automation Letters Information for Authors","authors":"","doi":"10.1109/LRA.2025.3553273","DOIUrl":"https://doi.org/10.1109/LRA.2025.3553273","url":null,"abstract":"","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 4","pages":"C4-C4"},"PeriodicalIF":4.6,"publicationDate":"2025-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10938736","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143698289","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"IEEE Robotics and Automation Society Information","authors":"","doi":"10.1109/LRA.2025.3553271","DOIUrl":"https://doi.org/10.1109/LRA.2025.3553271","url":null,"abstract":"","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 4","pages":"C3-C3"},"PeriodicalIF":4.6,"publicationDate":"2025-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10938733","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143698317","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"IEEE Robotics and Automation Society Publication Information","authors":"","doi":"10.1109/LRA.2025.3553269","DOIUrl":"https://doi.org/10.1109/LRA.2025.3553269","url":null,"abstract":"","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 4","pages":"C2-C2"},"PeriodicalIF":4.6,"publicationDate":"2025-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10938732","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143698196","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"First-Person View Interfaces for Teleoperation of Aerial Swarms","authors":"Benjamin Jarvis;Charbel Toumieh;Dario Floreano","doi":"10.1109/LRA.2025.3553062","DOIUrl":"https://doi.org/10.1109/LRA.2025.3553062","url":null,"abstract":"Aerial swarms can substantially improve the effectiveness of drones in applications such as inspection, monitoring, and search for rescue. This is especially true when those swarms are made of several individual drones that use local sensing and coordination rules to achieve collective motion. Despite recent progress in swarm autonomy, human control and decision-making are still critical for missions where lives are at risk or human cognitive skills are required. However, first-person-view (FPV) teleoperation systems require one or more human operators per drone, limiting the scalability of these systems to swarms. This work investigates the performance, preference, and behaviour of pilots using different FPV interfaces for teleoperation of aerial swarms. Interfaces with single and multiple perspectives were experimentally studied with humans piloting a simulated aerial swarm through an obstacle course. Participants were found to prefer and perform better with views from the back of the swarm, while views from the front caused users to fly faster but resulted in more crashes. 
Presenting users with multiple views at once resulted in a slower completion time, and users were found to focus on the largest view, regardless of its perspective within the swarm.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 5","pages":"4476-4483"},"PeriodicalIF":4.6,"publicationDate":"2025-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143716546","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"HeR-DRL:Heterogeneous Relational Deep Reinforcement Learning for Single-Robot and Multi-Robot Crowd Navigation","authors":"Xinyu Zhou;Songhao Piao;Wenzheng Chi;Liguo Chen;Wei Li","doi":"10.1109/LRA.2025.3553050","DOIUrl":"https://doi.org/10.1109/LRA.2025.3553050","url":null,"abstract":"Crowd navigation has garnered significant research attention in recent years, particularly with the advent of DRL-based methods. Current DRL-based methods have extensively explored interaction relationships in single-robot scenarios. However, the heterogeneity of multiple interaction relationships is often disregarded. This “interaction blind spot” hinders progress towards more complex scenarios, such as multi-robot crowd navigation. In this letter, we propose a heterogeneous relational deep reinforcement learning method, named HeR-DRL, which utilizes a customized heterogeneous Graph Neural Network (GNN) to enhance overall performance in crowd navigation. Firstly, we devised a method for constructing robot-crowd heterogenous relation graph that effectively simulates the heterogeneous pair-wise interaction relationships. Based on this graph, we proposed a novel heterogeneous GNN to encode interaction relationship information. Finally, we incorporate the encoded information into deep reinforcement learning to explore the optimal policy. HeR-DRL is rigorously evaluated by comparing it to state-of-the-art algorithms in both single-robot and multi-robot circle crossing scenarios. The experimental results demonstrate that HeR-DRL surpasses the state-of-the-art approaches in overall performance, particularly excelling in terms of efficiency and comfort. 
This underscores the significance of heterogeneous interactions in crowd navigation.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 5","pages":"4524-4531"},"PeriodicalIF":4.6,"publicationDate":"2025-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143726443","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Information-Theoretic Detection of Bimanual Interactions for Dual-Arm Robot Plan Generation","authors":"Elena Merlo;Marta Lagomarsino;Arash Ajoudani","doi":"10.1109/LRA.2025.3552216","DOIUrl":"https://doi.org/10.1109/LRA.2025.3552216","url":null,"abstract":"Programming by demonstration is a strategy to simplify the robot programming process for non-experts via human demonstrations. However, its adoption for bimanual tasks is an underexplored problem due to the complexity of hand coordination, which also hinders data recording. This letter presents a novel one-shot method for processing a single RGB video of a bimanual task demonstration to generate an execution plan for a dual-arm robotic system. To detect hand coordination policies, we apply Shannon's information theory to analyze the information flow between scene elements and leverage scene graph properties. The generated plan is a modular behavior tree that assumes different structures based on the desired arms coordination. We validated the effectiveness of this framework through multiple subject video demonstrations, which we collected and made open-source, and exploiting data from an external, publicly available dataset. Comparisons with existing methods revealed significant improvements in generating a centralized execution plan for coordinating two-arm systems.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 5","pages":"4532-4539"},"PeriodicalIF":4.6,"publicationDate":"2025-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143726442","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Correction to “A Magnetic Capsule Robot With an Exoskeleton to Withstand Esophageal Pressure and Delivery Drug in Stomach”","authors":"Ruomao Liu;Yujun Chen;Zhen Yin;Jiachen Zhang","doi":"10.1109/LRA.2025.3546692","DOIUrl":"https://doi.org/10.1109/LRA.2025.3546692","url":null,"abstract":"The name of the third author in [1] is incorrect. The correct author is shown in the red text highlighted.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 4","pages":"4004-4004"},"PeriodicalIF":4.6,"publicationDate":"2025-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10930428","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143645328","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visibility-Aware RRT* for Safety-Critical Navigation of Perception-Limited Robots in Unknown Environments","authors":"Taekyung Kim;Dimitra Panagou","doi":"10.1109/LRA.2025.3552295","DOIUrl":"https://doi.org/10.1109/LRA.2025.3552295","url":null,"abstract":"Safe autonomous navigation in unknown environments remains a critical challenge for robots with limited sensing capabilities. While safety-critical control techniques, such as Control Barrier Functions (CBFs), have been proposed to ensure safety, their effectiveness relies on the assumption that the robot has complete knowledge of its surroundings. In reality, robots often operate with restricted field-of-view and finite sensing range, which can lead to collisions with unknown obstacles if the planner is agnostic to these limitations. To address this issue, we introduce the Visibility-Aware RRT* algorithm that combines sampling-based planning with CBFs to generate safe and efficient global reference paths in partially unknown environments. The algorithm incorporates a collision avoidance CBF and a novel visibility CBF, which guarantees that the robot remains within locally collision-free regions, enabling timely detection and avoidance of unknown obstacles. 
We conduct extensive experiments interfacing the path planners with two different safety-critical controllers, wherein our method outperforms all other compared baselines across both safety and efficiency aspects.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 5","pages":"4508-4515"},"PeriodicalIF":4.6,"publicationDate":"2025-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143725105","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Pushing Everything Everywhere All at Once: Probabilistic Prehensile Pushing","authors":"Patrizio Perugini;Jens Lundell;Katharina Friedl;Danica Kragic","doi":"10.1109/LRA.2025.3552267","DOIUrl":"https://doi.org/10.1109/LRA.2025.3552267","url":null,"abstract":"We address prehensile pushing, the problem of manipulating a grasped object by pushing against the environment. Our solution is an efficient nonlinear trajectory optimization problem relaxed from an exact mixed integer non-linear trajectory optimization formulation. The critical insight is recasting the external pushers (environment) as a discrete probability distribution instead of binary variables and minimizing the entropy of the distribution. The probabilistic reformulation allows all pushers to be used simultaneously, but at the optimum, the probability mass concentrates onto one due to the entropy minimization. We numerically compare our method against a state-of-the-art sampling-based baseline on a prehensile pushing task. The results demonstrate that our method finds trajectories 8 times faster and at a 20 times lower cost than the baseline. Finally, we demonstrate that a simulated and real Frank Panda robot can successfully manipulate different objects following the trajectories proposed by our method.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 5","pages":"4540-4547"},"PeriodicalIF":4.6,"publicationDate":"2025-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10930575","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143726394","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Uncertainty-Aware Real-Time Visual Anomaly Detection With Conformal Prediction in Dynamic Indoor Environments","authors":"Arya Saboury;Mustafa Kemal Uyguroglu","doi":"10.1109/LRA.2025.3552318","DOIUrl":"https://doi.org/10.1109/LRA.2025.3552318","url":null,"abstract":"This letter presents an efficient visual anomaly detection framework designed for safe autonomous navigation in dynamic indoor environments, such as university hallways. The approach employs an unsupervised autoencoder method within deep learning to model regular environmental patterns and detect anomalies as deviations in the embedding space. To enhance reliability and safety, the system integrates a statistical framework, conformal prediction, that provides uncertainty quantification with probabilistic guarantees. The proposed solution has been deployed on a real-time robotic platform, demonstrating efficient performance under resource-constrained conditions. Extensive hyperparameter optimization ensures the model remains dynamic and adaptable to changes, while rigorous evaluations confirm its effectiveness in anomaly detection. By addressing challenges related to real-time processing and hardware limitations, this work advances the state-of-the-art in autonomous anomaly detection. The probabilistic insights offered by this framework strengthen operational safety and pave the way for future developments, such as richer sensor fusion and advanced learning paradigms. 
This research highlights the potential of uncertainty-aware deep learning to enhance safety monitoring frameworks, thereby enabling the development of more reliable and intelligent autonomous systems for real-world applications.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 5","pages":"4468-4475"},"PeriodicalIF":4.6,"publicationDate":"2025-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143716547","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}