{"title":"Adaptive Integral Sliding Mode Control for Attitude Tracking of Underwater Robots With Large Range Pitch Variations in Confined Spaces","authors":"Xiaorui Wang;Zeyu Sha;Feitian Zhang","doi":"10.1109/LRA.2024.3515733","DOIUrl":"https://doi.org/10.1109/LRA.2024.3515733","url":null,"abstract":"Underwater robots play a crucial role in exploring aquatic environments. The ability to flexibly adjust their attitudes, especially the pitch, is essential for underwater robots to effectively accomplish tasks in confined spaces. However, the highly coupled six-degrees-of-freedom dynamics resulting from attitude changes and the complex turbulence within limited spatial areas present significant challenges. To address the problem of attitude control of underwater robots, this letter investigates large-range pitch angle tracking during station holding as well as simultaneous roll and yaw angle control to enable versatile attitude adjustments. Based on dynamic modeling, this letter proposes an adaptive integral sliding mode controller (AISMC) that integrates an integral module into traditional sliding mode control (SMC) and adaptively adjusts the switching gain for improved tracking accuracy, reduced chattering, and enhanced robustness. The stability of the closed-loop control system is established through Lyapunov analysis. Extensive experiments and comparison studies are conducted using a commercial remotely operated vehicle (ROV), the results of which demonstrate that AISMC achieves satisfactory performance in attitude tracking control in confined spaces with unknown disturbances, significantly outperforming PID, ASMC, and ISMC.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 2","pages":"979-986"},"PeriodicalIF":4.6,"publicationDate":"2024-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142875015","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Comparison of Solo and Collaborative Trimanual Operation of a Supernumerary Limb in Tasks With Varying Physical Coupling","authors":"Jonathan Eden;Mahdi Khoramshahi;Yanpei Huang;Alexis Poignant;Etienne Burdet;Nathanaël Jarrassé","doi":"10.1109/LRA.2024.3515734","DOIUrl":"https://doi.org/10.1109/LRA.2024.3515734","url":null,"abstract":"Through the use of robotic supernumerary limbs, it has been proposed that a single user could perform tasks like surgery or industrial assembly that currently require a team. Although validation studies, often conducted in virtual reality, have demonstrated that individuals can learn to command supernumerary limbs, comparisons typically suggest that a team initially outperforms an individual operating a supernumerary limb. In this study, we examined (i) the impact of using a commercially available physical robot setup instead of a virtual reality system and (ii) the effect of limb couplings on user performance during a series of trimanual operations. Contrary to previous findings, our results indicate no clear difference in user performance in the pick and place of three objects when working as a trimanual user compared to when working as a team. Additionally, for this task we observe that users prefer working with a partner when they control most limbs, but find no clear difference in their preference between solo trimanual operation and working with a partner while controlling the third limb. These findings indicate that factors typically not present in virtual reality, such as visual occlusion and haptic feedback, may be vital to consider for the effective operation of supernumerary limbs, and provide initial evidence to support the viability of supernumerary limbs for a range of physical tasks.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 2","pages":"860-867"},"PeriodicalIF":4.6,"publicationDate":"2024-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142844590","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Combined Intrusion Strategy Based on Apollonius Circle for Multiple Mobile Robots in Attack-Defense Scenario","authors":"Xiaowei Fu;Yiming Sun","doi":"10.1109/LRA.2024.3512361","DOIUrl":"https://doi.org/10.1109/LRA.2024.3512361","url":null,"abstract":"The multi-agent attack-defense game has become a hot issue in recent years. However, it is still a challenge to design an efficient intrusion strategy when the intruder has a limited detection range. In this letter, a combined intrusion strategy based on the Apollonius circle is proposed, which decomposes the complex intrusion task into several basic actions, including target attack action, defense breakthrough action, and pursuit-escape action. One of these three actions is adopted by the intruder according to the hazard of the surrounding area. The application of the Apollonius circle theorem enhances the intruder's ability to effectively utilize available information and accurately assess the situation. By applying the theorem, the intruder can determine the potential risks associated with each direction of movement and subsequently partition the surrounding area into safe and hazardous zones. The intruder advances directly toward the target if no hazardous zone obstructs its path. Conversely, if a hazardous zone exists yet safe pathways are available, the intruder navigates around the defenders. In the absence of safe zones, the intruder retreats to preserve its survival. This combined intrusion strategy simplifies the complex decision-making process and enables the intruder to respond rapidly to the environment. Extensive simulations validate the combined intrusion strategy's feasibility and effectiveness, and further physical experimental results confirm that the strategy has application potential.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 1","pages":"676-683"},"PeriodicalIF":4.6,"publicationDate":"2024-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142810579","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SGDet3D: Semantics and Geometry Fusion for 3D Object Detection Using 4D Radar and Camera","authors":"Xiaokai Bai;Zhu Yu;Lianqing Zheng;Xiaohan Zhang;Zili Zhou;Xue Zhang;Fang Wang;Jie Bai;Hui-Liang Shen","doi":"10.1109/LRA.2024.3513041","DOIUrl":"https://doi.org/10.1109/LRA.2024.3513041","url":null,"abstract":"4D millimeter-wave radar has gained attention as an emerging sensor for autonomous driving in recent years. However, existing 4D radar and camera fusion models often fail to fully exploit complementary information within each modality and lack deep cross-modal interactions. To address these issues, we propose a novel 4D radar and camera fusion method, named SGDet3D, for 3D object detection. Specifically, we first introduce a dual-branch fusion module that employs geometric depth completion and semantic radar PillarNet to comprehensively leverage geometric and semantic information within each modality. Then we introduce an object-oriented attention module that employs localization-aware cross-attention to facilitate deep interactions across modalities by allowing queries in bird's-eye view (BEV) to attend to interested image tokens. We validate our SGDet3D on the TJ4DRadSet and View-of-Delft (VoD) datasets. Experimental results demonstrate that SGDet3D effectively fuses 4D radar data and camera images and achieves state-of-the-art performance.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 1","pages":"828-835"},"PeriodicalIF":4.6,"publicationDate":"2024-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142825977","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"H-Net: A Multitask Architecture for Simultaneous 3D Force Estimation and Stereo Semantic Segmentation in Intracardiac Catheters","authors":"Pedram Fekri;Mehrdad Zadeh;Javad Dargahi","doi":"10.1109/LRA.2024.3514513","DOIUrl":"https://doi.org/10.1109/LRA.2024.3514513","url":null,"abstract":"The success rate of catheterization procedures is closely linked to the sensory data provided to the surgeon. Vision-based deep learning models can deliver both tactile and visual information in a sensor-free manner, while also being cost-effective to produce. Given the complexity of these models for devices with limited computational resources, research has focused on force estimation and catheter segmentation separately. However, there is a lack of a comprehensive architecture capable of simultaneously segmenting the catheter from two different angles and estimating the applied forces in 3D. To bridge this gap, this work proposes a novel, lightweight, multi-input, multi-output encoder-decoder-based architecture. It is designed to segment the catheter from two points of view and concurrently measure the applied forces in the \u0000<inline-formula><tex-math>$x$</tex-math></inline-formula>\u0000, \u0000<inline-formula><tex-math>$y$</tex-math></inline-formula>\u0000, and \u0000<inline-formula><tex-math>$z$</tex-math></inline-formula>\u0000 directions. This network processes two simultaneous X-ray images, intended to be fed by a biplane fluoroscopy system, showing a catheter's deflection from different angles. It uses two parallel sub-networks with shared parameters to output two segmentation maps corresponding to the inputs. Additionally, it leverages stereo vision to estimate the applied forces at the catheter's tip in 3D. The architecture features two input channels, two classification heads for segmentation, and a regression head for force estimation through a single end-to-end architecture. The outputs of all heads were assessed and compared with the literature, demonstrating state-of-the-art performance in both segmentation and force estimation. To the best of the authors' knowledge, this is the first time such a model has been proposed.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 1","pages":"844-851"},"PeriodicalIF":4.6,"publicationDate":"2024-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142825969","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Adaptive Noise Rejection Strategy for Cooperative Motion Control of Dual-Arm Robots","authors":"Xiyuan Zhang;Yilin Yu;Naimeng Cang;Dongsheng Guo;Shuai Li;Weidong Zhang;Jinrong Zheng","doi":"10.1109/LRA.2024.3512992","DOIUrl":"https://doi.org/10.1109/LRA.2024.3512992","url":null,"abstract":"Dual-arm robots possess exceptional collaborative capabilities and versatility, demonstrating broad application prospects across various fields. As a significant research area for dual-arm robots, the requirements for coordinated motion control are gradually increasing. In practical applications, robots inevitably encounter noise interference, which can lead to suboptimal performance in coordinated motion control. In this letter, cooperative motion control of dual-arm robots in the presence of harmonic noise is investigated. On the basis of the relative Jacobian method, an adaptive noise rejection strategy is proposed for cooperative motion control of dual-arm robots perturbed by harmonic noise. The strategy incorporates a compensator that can simulate and suppress interference from harmonic noise. Theoretical analysis indicates that the Cartesian error generated by the proposed strategy converges. Simulation and experimental results on a dual-arm system consisting of two Panda robot manipulators further verify the noise resistance and applicability of the proposed strategy in the presence of harmonic noise.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 2","pages":"868-874"},"PeriodicalIF":4.6,"publicationDate":"2024-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142844334","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Grasping Unknown Objects With Only One Demonstration","authors":"Yanghong Li;Haiyang He;Jin Chai;Guangrui Bai;Erbao Dong","doi":"10.1109/LRA.2024.3513037","DOIUrl":"https://doi.org/10.1109/LRA.2024.3513037","url":null,"abstract":"The combination of imitation learning and reinforcement learning is expected to solve the challenge of grasping unknown objects with anthropomorphic hand-arm systems. However, this method requires a large number of perfect demonstrations, and its behavior on real robots often differs greatly from the simulation results. In this work, we introduce a curriculum learning mechanism and propose a multifinger grasping learning method that requires only one demonstration. First, a human remotely manipulates the robot via a wearable device to perform a successful grasping demonstration. The state of the object and the robot is recorded as the initial reference trajectory for reinforcement learning training. Then, by combining robot proprioception and the point cloud features of the target object, a multimodal deep reinforcement learning agent generates corrective actions for the reference demonstration in the synergy subspace of grasping and trains in simulation environments. Meanwhile, considering the topological and geometric variations of different objects, we establish a learning curriculum for objects, progressing from similar to unknown objects, to gradually improve the generalization ability of the agent. Finally, only successfully trained models are deployed on real robots. Compared to the baseline method, our method reduces dependence on the grasping dataset while improving learning efficiency, and achieves a higher success rate for grasping novel objects.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 2","pages":"987-994"},"PeriodicalIF":4.6,"publicationDate":"2024-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142875142","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mitigating Over-Assistance in Teleoperated Mobile Robots via Human-Centered Shared Autonomy: Leveraging Suboptimal Rationality Insights","authors":"Yinglin Li;Rongxin Cui;Weisheng Yan;Chong Feng;Shi Zhang","doi":"10.1109/LRA.2024.3511385","DOIUrl":"https://doi.org/10.1109/LRA.2024.3511385","url":null,"abstract":"In this letter, we introduce a human-centered shared autonomy approach to address over-assistance in remote robot operation, aimed at reducing control conflicts and enhancing user experience. We model the human-robot team as a partially observable Markov decision process (POMDP) that incorporates uncertainties in intended goals and human rationality. By employing the Boltzmann noise-rationality model for predicting operator behavior and a regret theory-based mechanism for detecting model misalignment, we dynamically adjust assistance strategies to accommodate the operator's suboptimal rationality. Our experiments in three scenarios validate the proposed method, demonstrating that it maintains benchmark performance in well-specified scenarios. Furthermore, it significantly reduces mean control conflicts by 35.0% in scenarios with unmodeled goals and by 19.1% in those with unmodeled obstacles, while improving the system's usability by 11.2% and 39.5%, respectively. Detailed analysis of human-robot interactions highlights our approach's robustness in tolerating human input noise and adaptability to changes in operator intent.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 1","pages":"460-467"},"PeriodicalIF":4.6,"publicationDate":"2024-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142798026","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Human-Robot Collaborative Cable-Suspended Manipulation With Contact Distinction","authors":"Giovanni Cortigiani;Monica Malvezzi;Domenico Prattichizzo;Maria Pozzi","doi":"10.1109/LRA.2024.3511396","DOIUrl":"https://doi.org/10.1109/LRA.2024.3511396","url":null,"abstract":"The collaborative transportation of objects between humans and robots is a fundamental task in physical human-robot interaction. Most of the literature considers the rigid co-grasping of non-deformable items, in which both the human and the robot directly hold the transported object with their hands. In this letter, we implement a control strategy for the collaborative manipulation of a cable-suspended platform. The latter is an articulated and partially deformable object that can serve as a base on which to place the transported object. In this way, the human and the robot are not rigidly coupled, ensuring greater flexibility in the partners' motions and a safer interaction. However, the uncertain dynamics of the platform introduces a greater possibility of unintended collisions with external objects, which must be distinguished from contacts arising when a load is placed on or removed from the platform. This letter proposes a contact detection and distinction strategy to address this challenge. The proposed cable-suspended manipulation framework is based only on force sensing at the robot end-effector, and was tested with ten users.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 1","pages":"740-747"},"PeriodicalIF":4.6,"publicationDate":"2024-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10780984","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142821137","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Novel Path Following Method Based on Whole-Body Deviation Evaluation for Hyper-Redundant Robots","authors":"Nailong Bu;Ningyuan Luo;Chao Liu;Yuxin Sun;Zhenhua Xiong","doi":"10.1109/LRA.2024.3512373","DOIUrl":"https://doi.org/10.1109/LRA.2024.3512373","url":null,"abstract":"The accuracy of path following is crucial for collision-free navigation of hyper-redundant robots, especially in narrow environments. However, existing path following methods only consider the deviations of the joints and the end effector, while ignoring the deviations of the robot body. In this letter, a novel path following method based on whole-body deviation evaluation is proposed to achieve high-accuracy path following motion of hyper-redundant robots. Firstly, we introduce a whole-body deviation evaluation algorithm that can precisely quantify the accuracy of path following by comprehensively considering the deviations of the joints, the end effector, and the linkages along the path. Subsequently, we formulate path following motion planning as an optimization problem and develop a two-level optimization framework, which reduces the dimensionality of each sub-optimization problem to two. In addition, a refined objective function is proposed to ensure the continuity of the optimized joint angles. Simulations show that the proposed path following method significantly reduces the path following error, by 41.3% and 47.1% for S-shaped and C-shaped paths, respectively.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 1","pages":"604-611"},"PeriodicalIF":4.6,"publicationDate":"2024-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142810583","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}