{"title":"Adaptive game-theoretic decision-making with driving style recognition for autonomous vehicles in uninterrupted traffic flows at intersections","authors":"Yuxiao Cao, Yinuo Jiang, Xiangrui Zeng","doi":"10.1016/j.robot.2025.105180","DOIUrl":"10.1016/j.robot.2025.105180","url":null,"abstract":"<div><div>The absence of standardized conflict resolution mechanisms presents critical challenges for autonomous vehicles operating in uninterrupted traffic flows, particularly when managing time-sensitive interactions with heterogeneous road users. Existing approaches either adopt overly conservative policies by oversimplifying multi-agent interactions or neglect the critical influence of heterogeneous driving styles. This paper proposes a game-theoretic decision-making framework for autonomous vehicles in uninterrupted traffic flow scenarios, specifically designed to address the intertwined challenges of multi-objective optimization and driving style adaptation. A hierarchical game-theoretic architecture integrates kinematic state evolution, feasibility constraints, and interactive behavior modeling to rigorously model multi-vehicle interactions under dynamic mixed traffic conditions. A novel online identification mechanism estimates driving styles through real-time interaction pattern analysis, while a machine learning-driven adaptive framework generates parametric policies through offline random forest training coupled with context-aware online policy adjustments. Comprehensive simulations validate the framework’s effectiveness in both single and multiple intersection scenarios, demonstrating enhanced interaction adaptability (more than 10% efficiency improvements) compared to conventional non-adaptive methods. Experimental results demonstrate the model’s capability to efficiently handle heterogeneous driving behaviors and dynamically refine negotiation strategies, providing a systematic, human-like vehicle decision-making solution for mixed traffic environments.</div></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":"194 ","pages":"Article 105180"},"PeriodicalIF":5.2,"publicationDate":"2025-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144922641","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Geometric methods for aircraft planning and control","authors":"Francesco Trotti, Damiano Rigo, Riccardo Muradore","doi":"10.1016/j.robot.2025.105181","DOIUrl":"10.1016/j.robot.2025.105181","url":null,"abstract":"<div><div>Path planning and control of autonomous aircraft is a critical problem, particularly under conditions of model and sensor uncertainty. This paper presents a hierarchical control architecture that integrates geometric and probabilistic methods to address these challenges. The proposed framework combines a high-level controller, a low-level controller, and an observer, leveraging Lie group theory for geometric modeling. The high-level controller formulates the planning problem as a Markov Decision Process (MDP), solved using Monte Carlo Tree Search (MCTS) to generate reference trajectories while avoiding no-fly zones. The low-level controller exploits the relationship between tangent space velocities and left-trivialized velocities in the Lie algebra to produce control commands. State estimation is achieved using a second-order optimal minimum-energy filter formulated on Lie groups, ensuring robust performance under noisy measurements. Simulation results show the efficacy of the proposed architecture in guiding an aircraft from a start point to a target while satisfying operational constraints.</div></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":"194 ","pages":"Article 105181"},"PeriodicalIF":5.2,"publicationDate":"2025-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144988599","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An empirical evaluation of learning-based multi-agent path finding algorithms in warehouse environments","authors":"Andrea Giuffrida , Nicola Basilico , Francesco Amigoni","doi":"10.1016/j.robot.2025.105149","DOIUrl":"10.1016/j.robot.2025.105149","url":null,"abstract":"<div><div>In recent years, Multi-Agent Path Finding (MAPF) has become one of the most challenging and interesting fields in autonomous robotics and artificial intelligence. MAPF consists in computing collision-free paths for a group of agents that move from their initial locations to their goal locations in a shared environment. Many algorithms have been proposed to solve this problem using traditional search and planning approaches. The scarce scalability to hundreds or thousands of agents of some of these algorithms has recently pushed the community to investigate the use of Multi-Agent Reinforcement Learning (MARL) techniques for MAPF. Despite requiring extensive training, these learning-based approaches promise to scale better than traditional search and planning algorithms in complex environments, thanks to their decentralized execution. In this paper, we empirically evaluate and compare a representative sample of learning-based algorithms for MAPF, highlighting their strengths and weaknesses, also comparing them with traditional search and planning algorithms. Interestingly, while learning-based algorithms are usually trained and tested in randomly-generated environments, we test them in warehouse environments, to evaluate their practical applicability in realistic MAPF settings. Our results show that some learning-based algorithms nearly match the performance of search and planning algorithms in terms of path quality and show limited computing effort, proving their potential as a viable option for practical applications.</div></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":"194 ","pages":"Article 105149"},"PeriodicalIF":5.2,"publicationDate":"2025-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144932273","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"STSLAM: Robust visual SLAM in dynamic scenes via image segmentation and instance tracking","authors":"Yiwei Xiu , Xiao Liang , Guodong Chen","doi":"10.1016/j.robot.2025.105150","DOIUrl":"10.1016/j.robot.2025.105150","url":null,"abstract":"<div><div>Although visual simultaneous localization and mapping (SLAM) has made significant progress in localization accuracy, its robustness can be further improved. The primary reason for this is the insufficient modeling of dynamic instances, which leads to tracking failures for current SLAM methods in dynamic scenes. Furthermore, the lack of semantic information is also a problem in the traditional visual SLAM field. To solve these problems, this paper proposes a visual SLAM algorithm called <em>segmentation and tracking SLAM</em> (STSLAM). We apply image segmentation and instance tracking to visual SLAM. The image segmentation and instance tracking task is achieved through a video panoptic segmentation algorithm. By integrating the learning-based algorithm into the SLAM system, STSLAM not only achieves motion estimation for each dynamic instance but also introduces novel factors for factor graph construction to constrain these dynamic instances. Meanwhile, we use the learning-based algorithm to assign semantics to the map and build a panoptic point cloud map. Finally, ablation studies and comparative experiments are conducted on the KITTI, TUM RGB-D and Bonn RGB-D Dynamic dataset, which verify the effectiveness of the STSLAM method and achieve state-of-the-art performance.</div></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":"194 ","pages":"Article 105150"},"PeriodicalIF":5.2,"publicationDate":"2025-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145003663","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Vision-based approaches for cutting food products with robots: A review","authors":"Abhaya Pal Singh , Dmytro Romanov , Ekrem Misimi , Alex Mason","doi":"10.1016/j.robot.2025.105145","DOIUrl":"10.1016/j.robot.2025.105145","url":null,"abstract":"<div><div>The use of robotic vision to cut deformable food objects is a challenge in robotics that has the potential to improve autonomy and create new opportunities in industries such as medicine, the food industry, and services. While cutting rigid objects is relatively simple, cutting deformable objects like food items, which change shape during the cutting process, is a significant challenge that requires advances in various aspects of robotics, including vision, modeling, hardware design, and control. This paper discusses recent developments in vision-based approaches for robots cutting food items and highlights the main challenges that must be overcome to succeed in this task and outline some potential future research directions.</div></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":"194 ","pages":"Article 105145"},"PeriodicalIF":5.2,"publicationDate":"2025-08-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144932274","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MOMAV: A highly symmetrical fully-actuated multirotor drone using optimizing control allocation","authors":"Marco Ruggia","doi":"10.1016/j.robot.2025.105176","DOIUrl":"10.1016/j.robot.2025.105176","url":null,"abstract":"<div><div>MOMAV (Marco’s Omnidirectional Micro Aerial Vehicle) is a multirotor drone that is fully actuated, meaning it can control its orientation independently of its position. MOMAV is also highly symmetrical, making its flight efficiency largely unaffected by its current orientation. These characteristics are achieved by a novel drone design where six rotor arms align with the vertices of an octahedron, and where each arm can actively rotate along its long axis. Various standout features of MOMAV are presented: The high flight efficiency compared to arm configuration of other fully-actuated drones, the design of an original rotating arm assembly featuring slip-rings used to enable continuous arm rotation, and a novel control allocation algorithm based on sequential quadratic programming (SQP) used to calculate throttle and arm-angle setpoints in flight. Flight tests have shown that MOMAV is able to achieve remarkably low mean position/orientation errors of 6.6 mm, 2.1°(<span><math><mi>σ</mi></math></span>: 3.0 mm, 1.0°) when sweeping position setpoints, and 11.8 mm, 3.3°(<span><math><mi>σ</mi></math></span>: 8.6 mm, 2.0°) when sweeping orientation setpoints.</div></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":"194 ","pages":"Article 105176"},"PeriodicalIF":5.2,"publicationDate":"2025-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144925055","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Maritime scene matching for inter-robot localization across surface and underwater domains","authors":"John McConnell , Ivana Collado-Gonzalez , Paul Szenher , Armon Shariati","doi":"10.1016/j.robot.2025.105166","DOIUrl":"10.1016/j.robot.2025.105166","url":null,"abstract":"<div><div>Autonomous underwater vehicles (AUVs) play a crucial role across various sectors, including oil and gas production, civil engineering, and defense. However, underwater localization remains a significant challenge, limiting the widespread adoption of AUVs in these fields. A common strategy to address this challenge is to deploy a fleet of Uncrewed Surface Vessels (USVs), which are easier to localize on the surface, alongside AUVs to help anchor their position estimates. These methods typically rely on acoustic pinging among the robots to relay position-related data. Unfortunately, acoustic pinging requires synchronized clocks and a clear line of sight, making it difficult to deploy large-scale, decentralized teams, particularly in littoral (i.e. near-shore) environments.</div><div>To bridge this gap, we propose an alternative approach that is resolvable over asynchronous, intermittent communications. By leveraging the fact that many human-made structures in littoral environments are visible both above and below the waterline, our method automatically detects correspondences between above-water LiDAR scenes and underwater sonar scenes. First, we convert underwater and above-water data into a common representation, generate descriptors to find commonalities, and then apply point cloud registration tools to find rigid body transformations between them. Lastly, we apply pairwise consistent measurement set maximization (PCM) as a robust outlier rejection system. Our results demonstrate that our solution to this novel <em>Maritime Scene Matching (MSM) problem</em> is both robust to outliers and effective in localizing sonar scenes with an accuracy of less than two meters. Datasets are collected using a single robot equipped with underwater imaging sonar and above-water LiDAR. We have made our real-world datasets, hardware designs, and open-source code available to promote reproducibility and to encourage broader community engagement with the MSM problem. Opensource code: <span><span>https://github.com/jake3991/maritime-scene-matching</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":"194 ","pages":"Article 105166"},"PeriodicalIF":5.2,"publicationDate":"2025-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144913121","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robotic manipulation framework based on semantic keypoints for packing shoes of different sizes, shapes, and softness","authors":"Yi Dong , Yangjun Liu , Jinjun Duan , Yang Li , Zhendong Dai","doi":"10.1016/j.robot.2025.105174","DOIUrl":"10.1016/j.robot.2025.105174","url":null,"abstract":"<div><div>With the rapid development of the warehousing and logistics industries, the packing of goods has gradually attracted the attention of academia and industry. The packing of footwear products is a typical representative paired-item packing task involving irregular shapes and deformable objects. Although studies on shoe packing have been conducted, different initial states due to the irregular shapes of shoes and standard packing placement poses have not been considered. This study proposes a robotic manipulation framework, including a perception module, reorientation planners, and a packing planner, that can complete the packing of pairs of shoes in any initial state. First, to adapt to the large intraclass variations due to the states, shapes, and deformation of shoes, we propose a vision module based on semantic keypoints, which can also infer additional information such as sizes, states, poses, and manipulation points by combining geometric features. Subsequently, we not only propose primitive-based reorientation methods for different states of a single deformable shoe but also propose a fast reorientation method for the top state using box edge contact and gravity, which further improve the efficiency of reorientation. Finally, based on the perception module and reorientation methods, we propose a task planner for packing paired shoes in any initial state to provide an optimal packing strategy. Real-world experiments were conducted to verify the robustness of the reorientation methods and the effectiveness of the packing strategy for various types of shoes. In this study, we highlight the potential of semantic keypoint representation, introduce new perspectives on the reorientation of 3D deformable objects and multi-object manipulation, and provide a reference for paired object packing.</div></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":"194 ","pages":"Article 105174"},"PeriodicalIF":5.2,"publicationDate":"2025-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144907987","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Estimation of lower limb rehabilitation exoskeleton torque by combining dual-joint parameter identification and neural network","authors":"Yumeng Zhang, Chen Lv, Yaning Li, Longhan Xie","doi":"10.1016/j.robot.2025.105178","DOIUrl":"10.1016/j.robot.2025.105178","url":null,"abstract":"<div><div>Lower limb rehabilitation exoskeleton is widely used for rehabilitative training. Estimating the torque of the lower limb exoskeleton can help identify the patient's intent, thereby enhancing engagement in rehabilitative training. Parameter identification (PI) is used to estimate torque. However, the presence of unmodeled dynamics and external disturbances poses challenges for achieving reliable torque estimation. Consequently, achieving accurate torque estimation is a primary research focus in this field. This study combines dual-joint parameter identification and neural network, for estimating joint torque in lower limb rehabilitation exoskeletons. This method enhances the performance of parameter identification optimization algorithms by employing Markov-based Particle Swarm Optimization and Gradient Descent Algorithm (MPG). Additionally, it independently identifies the parameters of the hip and knee joints, thereby enhancing the accuracy of torque estimation for each joint. The estimated physical parameters of the model and joint state variables are then utilized as inputs to the neural network for estimating the torques during the lower limb exoskeleton training process. MATLAB simulation demonstrates that employing MPG for parameter identification enhances fitness by 37.59 % and 15.24 % when compared to Particle Swarm Optimization(PSO) and Gradient descent (GD), respectively. Through experimental verification conducted under controlled disturbances, method for combining dual-joint parameter identification and neural networks (DPI-BP) demonstrates its effectiveness in accurately estimating torque in lower limb rehabilitation exoskeletons. Angle, velocity, acceleration, inertia matrix, Coriolis matrix, gravity matrix and friction matrix of hip and knee joints are taken as inputs for DPI-BP. The application of DPI-BP results in a reduction of torque estimation errors, specifically by 0.12 Nm and 1.40 Nm(P<0.001), corresponding to a decrease of 66.57 % and 14.35 % when compared to the PI and Backpropagation (BP) methods, respectively. The torque estimation error of hip and knee joints are 0.86 Nm and 0.54 Nm.</div></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":"194 ","pages":"Article 105178"},"PeriodicalIF":5.2,"publicationDate":"2025-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144893571","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Combined model-based and data-driven approach for the control of a soft robotic neck","authors":"Nicole A. Continelli , Luis F. Nagua , Pablo M. Olmos , Concepción A. Monje","doi":"10.1016/j.robot.2025.105155","DOIUrl":"10.1016/j.robot.2025.105155","url":null,"abstract":"<div><div>This paper delves into the potential of integrating model-based and data-driven techniques for controlling the performance of a soft robotic neck. Artificial intelligence (AI) methods, such as machine learning and deep learning, have shown their applicability in modelling and controlling robotic systems with complex nonlinear behaviours. However, model-based approaches have also proven to be effective analytical alternatives, even if they rely on simplified approximations of the robot model. The control system proposed in this work combines the closed loop analytical model of the soft robotic neck with a Multi-Layer Perceptron (MLP) network trained to minimise the neck pose error. The MLP undergoes training with three different data treatments, and the results are compared to determine the most effective one. The experimental results obtained demonstrate the robustness of the proposed technique and its potential as an alternative to classical solutions, whether purely based on analytical models or data-driven models.</div></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":"194 ","pages":"Article 105155"},"PeriodicalIF":5.2,"publicationDate":"2025-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144893570","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}