Autonomous Robots 49(2) | Pub Date: 2025-04-20 | DOI: 10.1007/s10514-025-10194-8
Title: Deadlock-free, safe, and decentralized multi-robot navigation in social mini-games via discrete-time control barrier functions
Authors: Rohan Chandra, Vrushabh Zinage, Efstathios Bakolas, Peter Stone, Joydeep Biswas
Abstract: We present an approach to ensure safe and deadlock-free navigation for decentralized multi-robot systems operating in constrained environments, including doorways and intersections. Although many solutions have been proposed that ensure safety and resolve deadlocks, optimally preventing deadlocks in a minimally invasive and decentralized fashion remains an open problem. We first formalize the objective as a non-cooperative, non-communicative, partially observable multi-robot navigation problem in constrained spaces with multiple conflicting agents, which we term social mini-games. Formally, we solve a discrete-time optimal receding horizon control problem leveraging control barrier functions for safe long-horizon planning. Our approach to ensuring liveness rests on the insight that there exist barrier certificates that allow each robot to preemptively perturb its state, in a minimally invasive fashion, onto liveness sets, i.e., states where robots are deadlock-free. We evaluate our approach in simulation as well as on physical robots, using F1/10 robots, a Clearpath Jackal, and a Boston Dynamics Spot in doorway, hallway, and corridor-intersection scenarios. Compared to both fully decentralized and centralized approaches, with and without deadlock-resolution capabilities, we demonstrate that our approach results in safer, more efficient, and smoother navigation, based on a comprehensive set of metrics including success rate, collision rate, stop time, change in velocity, path deviation, time-to-goal, and flow rate.
Open-access PDF: https://link.springer.com/content/pdf/10.1007/s10514-025-10194-8.pdf
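The discrete-time CBF machinery the abstract relies on can be illustrated with a minimal sketch. This is not the paper's controller: it assumes toy single-integrator dynamics, a disc obstacle, and a sampling-based filter; all names and constants are illustrative. The key line is the DCBF condition h(x_{k+1}) >= (1 - gamma) h(x_k), which admits the control closest to the nominal one while keeping the state in the safe set.

```python
import numpy as np

# Toy discrete-time control barrier function (DCBF) filter for a single
# integrator x_{k+1} = x_k + u_k * dt. The safe set is the exterior of a disc
# around an obstacle; the DCBF condition h(x_{k+1}) >= (1 - gamma) h(x_k)
# keeps the state inside that set. Dynamics and constants are illustrative.

OBSTACLE = np.array([1.0, 0.0])
RADIUS = 0.5
DT = 0.1
GAMMA = 0.2

def h(x):
    """Barrier value: positive iff x lies outside the obstacle disc."""
    return float(np.dot(x - OBSTACLE, x - OBSTACLE) - RADIUS**2)

def dcbf_filter(x, u_nominal, n_samples=500, u_max=1.0, seed=0):
    """Return the sampled control closest to u_nominal that satisfies the
    DCBF condition h(x + u*dt) >= (1 - gamma) h(x)."""
    rng = np.random.default_rng(seed)
    candidates = rng.uniform(-u_max, u_max, size=(n_samples, 2))
    candidates = np.vstack([u_nominal[None, :], candidates])
    feasible = [u for u in candidates if h(x + u * DT) >= (1 - GAMMA) * h(x)]
    assert feasible, "no safe control found in the sample set"
    return min(feasible, key=lambda u: np.linalg.norm(u - u_nominal))

x = np.array([0.3, 0.0])        # safe state approaching the obstacle
u_nom = np.array([1.0, 0.0])    # nominal control drives straight at it
u_safe = dcbf_filter(x, u_nom)  # minimally perturbed, safety-preserving control
```

In the paper this filtering is done by a receding-horizon optimal control problem rather than sampling, and additional liveness sets handle deadlocks; the sketch only shows the safety constraint.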
Autonomous Robots 49(2) | Pub Date: 2025-04-17 | DOI: 10.1007/s10514-025-10196-6
Title: Fault-tolerant multi-robot localization: diagnostic decision-making with information theory and learning models
Authors: Zaynab El Mawas, Cindy Cappelle, Maan El Badaoui El Najjar
Abstract: In the domain of multi-robot systems, cooperative systems that are highly attuned and connected to their surroundings are becoming increasingly significant. This surge in interest highlights various challenges, especially regarding system integration and safety constraints. Our research contributes to the assurance of fault tolerance to avert abnormal behaviors and sustain reliable robot localization. In this paper, a mixed data-driven and model-based approach to fault detection is introduced within a decentralized architecture, strengthening the system's capacity to handle simultaneous sensor faults. Information-theoretic fault indicators are developed by computing the Jensen-Shannon divergence (D_JS) between state predictions and sensor-obtained corrections. This feeds a two-tiered data-driven mechanism: one layer employing machine learning for fault detection, and a distinct second layer for fault isolation. The methodology's efficacy is assessed using real data from the Turtlebot3 platform.
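The D_JS fault indicator described above can be sketched directly. This is a generic implementation of the Jensen-Shannon divergence for discrete distributions, not the paper's exact estimator pipeline; the threshold value is an illustrative assumption.

```python
import numpy as np

def jensen_shannon(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete distributions.
    Uses the natural log, so the value is bounded by ln(2)."""
    p = np.asarray(p, float) / np.sum(p)
    q = np.asarray(q, float) / np.sum(q)
    m = 0.5 * (p + q)
    kl = lambda a, b: float(np.sum(a * np.log((a + eps) / (b + eps))))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def fault_indicator(predicted, corrected, threshold=0.1):
    """Flag a sensor when the divergence between the predicted state
    distribution and the sensor-corrected one exceeds a threshold
    (threshold is an illustrative choice, not from the paper)."""
    return jensen_shannon(predicted, corrected) > threshold
```

D_JS is symmetric and bounded, which makes it a convenient residual: identical prediction and correction give 0, while completely disagreeing distributions approach ln(2).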
Autonomous Robots 49(2) | Pub Date: 2025-04-15 | DOI: 10.1007/s10514-025-10193-9
Title: Human2bot: learning zero-shot reward functions for robotic manipulation from human demonstrations
Authors: Yasir Salam, Yinbei Li, Jonas Herzog, Jiaqiang Yang
Abstract: Developing effective reward functions is crucial for robot learning, as they guide behavior and facilitate adaptation to human-like tasks. We present Human2Bot (H2B), which advances the learning of a generalized multi-task reward function that can be used zero-shot to execute unknown tasks in unseen environments. H2B is a newly designed task-similarity estimation model trained on a large dataset of human videos: it determines whether two videos from different environments represent the same task. At test time, the model serves as a reward function, evaluating how closely a robot's execution matches the human demonstration. While previous approaches require robot-specific data to learn reward functions or policies, our method learns without any robot datasets. To achieve generalization in robotic environments, we incorporate a domain-augmentation process that generates synthetic videos with varied visual appearances resembling simulation environments, alongside a multi-scale inter-frame attention mechanism that aligns human and robot task understanding. Finally, H2B is integrated with Visual Model Predictive Control (VMPC) to perform manipulation tasks in simulation and on the xARM6 robot in real-world settings. Trained solely on human data, our approach outperforms previous methods in simulated and real-world environments, eliminating the need for privileged robot datasets.
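The "similarity model as reward" idea can be illustrated with a toy stand-in. H2B's actual comparator is a learned video model with multi-scale inter-frame attention; the sketch below merely mean-pools per-frame feature vectors and scores a robot rollout by cosine similarity to the human demonstration. All names are illustrative.

```python
import numpy as np

# Toy stand-in for a task-similarity reward. The real H2B model is a trained
# video comparator; here we mean-pool per-frame features and use cosine
# similarity as the reward signal for a planner such as VMPC.

def pool(frames):
    """Mean-pool a (T, d) sequence of per-frame feature vectors."""
    return np.mean(np.asarray(frames, float), axis=0)

def similarity_reward(human_frames, robot_frames):
    """Cosine similarity in [-1, 1]; higher means the robot rollout
    looks more like the demonstrated task."""
    a, b = pool(human_frames), pool(robot_frames)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

A planner would roll out candidate action sequences, render or embed the resulting frames, and pick the sequence maximizing this reward.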
Autonomous Robots 49(1) | Pub Date: 2025-03-10 | DOI: 10.1007/s10514-025-10190-y
Title: Between reality and delusion: challenges of applying large language models to companion robots for open-domain dialogues with older adults
Authors: Bahar Irfan, Sanna Kuoppamäki, Aida Hosseini, Gabriel Skantze
Abstract: Throughout our lives, we interact daily in conversations with our friends and family, covering a wide range of topics, known as open-domain dialogue. As we age, these interactions may diminish due to changes in social and personal relationships, leading to loneliness in older adults. Conversational companion robots can alleviate this issue by providing daily social support. Large language models (LLMs) offer flexibility for enabling open-domain dialogue in these robots. However, LLMs are typically trained and evaluated on textual data, while robots introduce additional complexity through multi-modal interactions, which has not been explored in prior studies. Moreover, it is crucial to involve older adults in the development of robots to ensure alignment with their needs and expectations. Accordingly, using iterative participatory design approaches, this paper exposes the challenges of integrating LLMs into conversational robots, drawing on 34 Swedish-speaking older adults' one-to-one interactions with a personalized companion robot, built on the Furhat robot with GPT-3.5. These challenges encompass disruptions in conversation, including frequent interruptions and slow, repetitive, superficial, incoherent, and disengaging responses, as well as language barriers, hallucinations, and outdated information, leading to frustration, confusion, and worry among older adults. Drawing on insights from these challenges, we offer recommendations to enhance the integration of LLMs into conversational robots, encompassing both general suggestions and ones tailored to companion robots for older adults.
Open-access PDF: https://link.springer.com/content/pdf/10.1007/s10514-025-10190-y.pdf
Autonomous Robots 49(1) | Pub Date: 2025-03-07 | DOI: 10.1007/s10514-025-10192-w
Title: ASAP-MPC: an asynchronous update scheme for online motion planning with nonlinear model predictive control
Authors: Dries Dirckx, Mathias Bos, Bastiaan Vandewal, Lander Vanroye, Jan Swevers, Wilm Decré
Abstract: This paper presents a Nonlinear Model Predictive Control (NMPC) update scheme targeted at motion planning for mechatronic motion systems, such as drones and mobile platforms. NMPC-based motion planning typically requires low computation times in order to provide control inputs at the rate required for system stability, disturbance rejection, and overall performance. To achieve online NMPC updates in complex situations, works in the literature typically rely on one of two approaches: reducing NMPC solution times at the cost of feasibility guarantees, or allowing the motion planner more time, which requires additional strategies, such as state feedback, to ensure robust tracking of the planned motion. Following the second paradigm, this paper presents As-Soon-As-Possible MPC (ASAP-MPC), an asynchronous update scheme for online motion planning with optimal control that abandons restrictive real-time update rates and solves the optimal control problem to full convergence. ASAP-MPC combines trajectory generation through optimal control with additional tracking control for improved robustness against disturbances and plant-model mismatch. The scheme seamlessly connects trajectories resulting from subsequent NMPC solutions, providing a smooth and continuous overall trajectory for the motion system. The framework's applicability to embedded applications is shown on two experimental setups where a state-of-the-art method fails to navigate a given environment: a quadcopter flying through a cluttered environment with hardware-in-the-loop simulation, and a scale-model truck-trailer manoeuvring in a structured physical lab environment.
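The asynchronous stitching idea above can be sketched schematically: the solve is started from the state the system is predicted to reach when the solve finishes, so the new trajectory can be spliced in at that point without a jump. The "planner" below is a dummy stand-in for the full-convergence OCP solve; indices and names are illustrative.

```python
# Schematic of ASAP-MPC-style trajectory hand-over. While a solve runs, the
# system keeps tracking the previous trajectory; the new solve was initialized
# from the predicted hand-over state, so the spliced result is continuous.

def plan_from(state, horizon, step=1.0):
    """Dummy planner: straight-line trajectory starting at `state`."""
    return [state + step * k for k in range(horizon + 1)]

def stitch(current_traj, now_idx, solve_delay, horizon):
    """Splice a new plan in at the index the system is predicted to reach
    when the solve completes, keeping the overall trajectory continuous."""
    handover = now_idx + solve_delay            # index where the new plan takes over
    new_traj = plan_from(current_traj[handover], horizon)
    return current_traj[:handover] + new_traj

traj = plan_from(0.0, 10)                       # initial plan
stitched = stitch(traj, now_idx=2, solve_delay=3, horizon=10)
assert stitched[5] == traj[5]                   # hand-over state matches: no jump
```

In the paper the lower-rate planner output is additionally tracked by a faster feedback controller, which this sketch omits.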
Autonomous Robots 49(1) | Pub Date: 2025-02-13 | DOI: 10.1007/s10514-025-10191-x
Title: Isolated Kalman filtering: theory and decoupled estimator design
Authors: Roland Jung, Lukas Luft, Stephan Weiss
Abstract: In this paper, we propose a state-decoupling strategy for Kalman filtering problems in which the dynamics of individual estimates are decoupled and their outputs are sparsely coupled. The algorithm, termed Isolated Kalman Filtering (IsoKF), exploits the sparsity in the output coupling by applying approximations that mitigate the need for non-involved estimates. We prove that the approximations made during the isolated coupling of estimates are based on an implicit maximum-determinant completion of the incomplete a priori covariance matrix. The steady-state behavior is studied on eleven different observation graphs, and a buffering scheme to support delayed (i.e., out-of-order) measurements is proposed. We discuss the handling of delayed measurements in both an optimal and a suboptimal way. The credibility of the isolated estimates is evaluated on linear and nonlinear toy examples in Monte Carlo simulations. The presented paradigm is made available online to the community within a generic C++ estimation framework supporting both modular sensor fusion and collaborative state estimation.
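The maximum-determinant completion mentioned above has a simple special case worth seeing numerically: for a joint covariance [[A, X], [X.T, B]] with the cross-covariance block X unspecified and unconstrained, det = det(A) * det(B - X.T inv(A) X) <= det(A) * det(B), so the completion that maximizes the determinant is X = 0. The check below is a toy illustration with random SPD blocks, not the paper's derivation.

```python
import numpy as np

# Numerical check that X = 0 maximizes det([[A, X], [X.T, B]]) among small
# cross-covariance completions. Matrices are illustrative random SPD blocks.

rng = np.random.default_rng(1)

def random_spd(n):
    m = rng.normal(size=(n, n))
    return m @ m.T + n * np.eye(n)   # eigenvalues >= n, safely positive definite

A, B = random_spd(2), random_spd(2)

def joint_det(X):
    """Determinant of the completed joint covariance [[A, X], [X.T, B]]."""
    return float(np.linalg.det(np.block([[A, X], [X.T, B]])))

best = joint_det(np.zeros((2, 2)))   # the max-determinant completion
for _ in range(200):
    # small X keeps the joint matrix a valid (positive definite) covariance
    X = 0.3 * np.clip(rng.normal(size=(2, 2)), -1.0, 1.0)
    assert joint_det(X) <= best + 1e-9
```

Intuitively, leaving unspecified cross-correlations at zero is the least-committal completion, which is the sense in which IsoKF's approximations are characterized.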
Autonomous Robots 49(1) | Pub Date: 2025-02-01 | DOI: 10.1007/s10514-025-10189-5
Title: Eigen-factors: a bilevel optimization for plane SLAM of 3D point clouds
Authors: Gonzalo Ferrer, Dmitrii Iarosh, Anastasiia Kornilova
Abstract: Modern depth sensors can generate a huge number of 3D points in a few seconds, to be processed later by localization and mapping algorithms. Ideally, these algorithms should handle large point clouds (PCs) efficiently, under the assumption that using more points implies more available information. Eigen Factors (EF) is a new algorithm that solves PC SLAM using planes as the main geometric primitive. To do so, EF exhaustively calculates the error of all points at O(1) complexity, thanks to the summation matrix S of homogeneous points. The solution of EF is a bilevel optimization in which the lower-level problem estimates the plane variables in closed form, and the upper-level nonlinear problem uses second-order optimization to estimate the sensor poses (trajectory). We provide a direct analytical solution for the gradient and Hessian based on the homogeneous point-plane constraint. In addition, two variants of EF are proposed: a pure analytical derivation, and an approximation of the problem as an alternating optimization, which shows better convergence properties. We evaluate the optimization back-ends of EF and other state-of-the-art plane SLAM algorithms in a synthetic environment, and extend the evaluation to the ICL (RGB-D) and KITTI (LiDAR) datasets. EF demonstrates superior robustness and accuracy of the estimated trajectory and improved map metrics. Code is publicly available at https://github.com/prime-slam/EF-plane-SLAM, with Python bindings and a pip package.
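The summation-matrix trick behind the O(1) error evaluation is compact enough to sketch. Accumulating homogeneous points q = [x, y, z, 1] into S = sum_i q_i q_i^T once makes the total squared point-to-plane error for any plane pi = [n; d] equal to pi^T S pi, independent of the number of points. The code below shows only this mechanism, not the full EF bilevel optimization.

```python
import numpy as np

# Eigen Factors' O(1) plane-error evaluation via the summation matrix S.
# S is built once from all points; evaluating any candidate plane afterwards
# costs a single 4x4 quadratic form.

def summation_matrix(points):
    """S = sum_i q_i q_i^T over homogeneous points q_i = [x, y, z, 1]."""
    q = np.hstack([points, np.ones((len(points), 1))])
    return q.T @ q                                   # 4x4, independent of N thereafter

def plane_error(S, plane):
    """Total squared point-to-plane error pi^T S pi for plane pi = [n; d],
    assuming the normal n is unit length."""
    plane = np.asarray(plane, float)
    return float(plane @ S @ plane)

# Points lying exactly on the plane z = 2, i.e. pi = [0, 0, 1, -2]:
pts = np.array([[0.0, 0.0, 2.0], [1.0, 3.0, 2.0], [-2.0, 0.5, 2.0]])
S = summation_matrix(pts)
assert abs(plane_error(S, [0, 0, 1, -2])) < 1e-12    # zero error on the true plane
```

Because S is a fixed 4x4 matrix, the lower-level closed-form plane estimate reduces to an eigenvalue problem on (a centered version of) S, which is where the method's name comes from.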
Autonomous Robots 49(1) | Pub Date: 2025-01-18 | DOI: 10.1007/s10514-024-10188-y
Title: VIEW: visual imitation learning with waypoints
Authors: Ananth Jonnavittula, Sagar Parekh, Dylan P. Losey
Abstract: Robots can use visual imitation learning (VIL) to learn manipulation tasks from video demonstrations. However, translating visual observations into actionable robot policies is challenging due to the high-dimensional nature of video data. This challenge is further exacerbated by the morphological differences between humans and robots, especially when the video demonstrations feature humans performing tasks. To address these problems, we introduce Visual Imitation lEarning with Waypoints (VIEW), an algorithm that significantly enhances the sample efficiency of human-to-robot VIL. VIEW achieves this efficiency using a multi-pronged approach: extracting a condensed prior trajectory that captures the demonstrator's intent, employing an agent-agnostic reward function to provide feedback on the robot's actions, and utilizing an exploration algorithm that efficiently samples around waypoints in the extracted trajectory. VIEW also segments the human trajectory into grasp and task phases to further accelerate learning. Through comprehensive simulations and real-world experiments, VIEW demonstrates improved performance over current state-of-the-art VIL methods. VIEW enables robots to learn manipulation tasks involving multiple objects from arbitrarily long video demonstrations. Additionally, it can learn standard manipulation tasks, such as pushing or moving objects, from a single video demonstration in under 30 minutes, with fewer than 20 real-world rollouts. Code and videos: https://collab.me.vt.edu/view/
Open-access PDF: https://link.springer.com/content/pdf/10.1007/s10514-024-10188-y.pdf
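Condensing a demonstration into a short waypoint trajectory, as VIEW does, can be illustrated with the classic Ramer-Douglas-Peucker simplification; this is one plausible stand-in, not the paper's actual extraction method. RDP keeps only points that deviate from the chord between the segment endpoints by more than a tolerance.

```python
import numpy as np

# Ramer-Douglas-Peucker trajectory simplification as an illustration of
# waypoint extraction: recursively keep the point farthest from the chord
# between the endpoints if it deviates by more than `tol`.

def rdp(points, tol):
    points = np.asarray(points, float)
    if len(points) < 3:
        return points.tolist()
    start, end = points[0], points[-1]
    dx, dy = end - start
    chord = np.hypot(dx, dy)
    rel = points[1:-1] - start
    d = np.abs(dx * rel[:, 1] - dy * rel[:, 0]) / chord   # perpendicular distances
    i = int(np.argmax(d)) + 1
    if d[i - 1] <= tol:
        return [points[0].tolist(), points[-1].tolist()]  # chord fits: drop interior
    return rdp(points[: i + 1], tol)[:-1] + rdp(points[i:], tol)

traj = [[0, 0], [1, 0.01], [2, 0], [2, 1], [2, 2]]        # a noisy L-shaped path
waypoints = rdp(traj, tol=0.1)                             # corner survives, noise goes
```

An exploration algorithm like VIEW's would then sample robot actions in the neighbourhood of each surviving waypoint rather than along the full dense trajectory.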
Autonomous Robots 49(1) | Pub Date: 2025-01-17 | DOI: 10.1007/s10514-024-10186-0
Title: Safe and stable teleoperation of quadrotor UAVs under haptic shared autonomy
Authors: Dawei Zhang, Roberto Tron
Abstract: We present a novel approach that addresses both the safety and the stability of a haptic teleoperation system within a framework of Haptic Shared Autonomy (HSA). We use Control Barrier Functions (CBFs) to generate a control input that follows the user's input as closely as possible while guaranteeing safety. For stability of the human-in-the-loop system, we limit the force feedback perceived by the user via a small L2-gain, achieved by limiting the control and the force feedback through a differential constraint. Specifically, exploiting the structure of HSA, we propose two pathways to design the control and the force feedback: Sequential Control Force (SCF) and Joint Control Force (JCF). Both designs achieve safety and stability but respond differently to the user's commands. We conducted simulation experiments to evaluate and investigate the properties of the designed methods, and also tested the proposed method on a physical quadrotor UAV with a haptic interface.
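The "follow the user's input as closely as possible while guaranteeing safety" objective has a clean closed form in the simplest setting. Assuming single-integrator dynamics and a single affine CBF constraint a^T u >= b (both illustrative, far simpler than the quadrotor case in the paper), the CBF-QP min ||u - u_user||^2 reduces to a projection onto a half-space:

```python
import numpy as np

# Minimal CBF-QP sketch: project the operator's command onto the half-space
# a^T u >= b. If the command is already safe it passes through unchanged;
# otherwise it receives the smallest correction that reaches the boundary.

def safe_control(u_user, a, b):
    u_user, a = np.asarray(u_user, float), np.asarray(a, float)
    slack = a @ u_user - b
    if slack >= 0:                               # user's command already safe
        return u_user
    return u_user + a * (-slack) / (a @ a)       # minimal-norm correction

a, b = np.array([0.0, 1.0]), 0.0                 # toy constraint: vertical rate >= 0
assert np.allclose(safe_control([0.3, 0.5], a, b), [0.3, 0.5])    # unchanged
assert np.allclose(safe_control([0.3, -0.5], a, b), [0.3, 0.0])   # clipped
```

In the HSA designs above, the gap between u_user and the filtered control is also what drives the force feedback rendered to the operator, subject to the L2-gain bound.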
Autonomous Robots 49(1) | Pub Date: 2025-01-14 | DOI: 10.1007/s10514-024-10187-z
Title: Synthesizing compact behavior trees for probabilistic robotics domains
Authors: Emily Scheide, Graeme Best, Geoffrey A. Hollinger
Abstract: Complex robotics domains (e.g., remote exploration applications and scenarios involving interactions with humans) require encoding high-level mission specifications that account for uncertainty. Most currently fielded systems require humans to encode mission specifications manually, demanding amounts of time and expertise that can become infeasible and limit mission scope. We therefore propose a method for automating the encoding of mission specifications as behavior trees. In particular, we present an algorithm that synthesizes behavior trees representing the optimal policy for a user-defined specification of a domain and problem in the Probabilistic Planning Domain Definition Language (PPDDL). Our algorithm retains the advantages of behavior trees, including compactness and modularity, while alleviating the time-intensive manual design of behavior trees, which requires substantial expert knowledge. Our method converts the PPDDL specification into solvable MDP matrices, simplifies the solution (i.e., the policy) using Boolean algebra simplification, and converts the simplified policy into a compact behavior tree that can be executed by a robot. We present simulated experiments for a marine target search-and-response scenario and an infant-robot interaction for mobility domain. Our results demonstrate that the synthesized, simplified behavior trees have roughly 15x to 26x fewer nodes and on average 8x to 13x fewer active conditions for selecting the active action than they would without simplification. These compactness and activity results suggest increased interpretability and execution efficiency for the behavior trees synthesized by the proposed method. Additionally, our results demonstrate that the synthesis method is robust to a variety of user input mistakes, and we empirically confirm that the synthesized behavior trees perform equivalently to the optimal policy that they are constructed to logically represent.
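The first stage of the pipeline above, solving the MDP obtained from the PPDDL specification, can be sketched with a minimal value-iteration solver. The transition matrices and rewards below are an illustrative toy domain, not from the paper; the resulting greedy policy is what the later Boolean-simplification stage would compress into a behavior tree.

```python
import numpy as np

# Minimal value iteration on a toy 3-state, 2-action MDP, standing in for the
# "convert PPDDL to solvable MDP matrices, then solve" step of the pipeline.

def value_iteration(P, R, gamma=0.95, tol=1e-8):
    """P: (A, S, S) transition tensor, R: (A, S) expected rewards.
    Returns the optimal value function and the greedy policy."""
    A, S, _ = P.shape
    V = np.zeros(S)
    while True:
        Q = R + gamma * np.einsum("ast,t->as", P, V)   # (A, S) action values
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)
        V = V_new

# Toy domain: action 1 advances toward the rewarding absorbing state 2,
# action 0 stays put. Rewards are collected in state 2 under either action.
P = np.array([
    [[1, 0, 0], [0, 1, 0], [0, 0, 1]],    # action 0: stay
    [[0, 1, 0], [0, 0, 1], [0, 0, 1]],    # action 1: advance
], dtype=float)
R = np.array([[0.0, 0.0, 1.0],
              [0.0, 0.0, 1.0]])
V, policy = value_iteration(P, R)
assert list(policy[:2]) == [1, 1]         # optimal policy: advance until the goal
```

The policy is a state-to-action table; grouping its states by action and simplifying the resulting Boolean conditions is what yields the compact condition-action structure of the synthesized behavior tree.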