Yuting Feng, Tao Yang, Kaidi Wang, Jiali Sun, Yushu Yu
Title: Variable admittance control via Reinforcement Learning: Enhancing UAV interactions across diverse platforms
DOI: 10.1016/j.neucom.2025.130667
Journal: Neurocomputing, Volume 648, Article 130667 (JCR Q1, Computer Science, Artificial Intelligence; impact factor 5.5)
Published: 2025-06-18
URL: https://www.sciencedirect.com/science/article/pii/S0925231225013396
Citations: 0
Abstract
A compliant control model based on Reinforcement Learning (RL) is proposed to allow Unmanned Aerial Vehicles (UAVs) to interact with the environment more effectively and to execute force-control tasks autonomously. The model learns an optimal admittance-adjustment policy for interaction while simultaneously optimizing the UAV's energy consumption and trajectory tracking. This enables stable manipulation in unknown environments involving interaction forces, and ensures safe, compliant, and flexible interaction while protecting the UAV's external structure from damage. To assess the model's performance, we validated the approach in a simulation environment and tested it across different UAV types and a range of low-level control parameters, where it demonstrated superior performance in all scenarios. We then applied the methodology to two distinct UAV types in real-world applications; empirical evidence shows that our proposed methods consistently achieve superior results. Finally, we applied a similar methodology to verify 6D interaction in a simulation of a fully actuated platform composed of three UAVs: using a high-level training strategy, we evaluated the platform's ability to slide along a bevel, achieving the best results in our comparative experiments.
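The abstract centers on an RL policy that adjusts the UAV's admittance parameters online. The paper's own formulation is not reproduced here, but the admittance relation it builds on is standard in the interaction-control literature and can be sketched as a minimal one-dimensional filter. All names, gains, and the fixed-parameter setup below are illustrative assumptions; in the variable-admittance scheme the damping and stiffness would instead be produced by the learned policy at each step.

```python
def admittance_step(x, v, f_ext, x_ref, m, d, k, dt):
    """One explicit-Euler step of a 1-D admittance filter.

    Maps the measured interaction force f_ext to a compliant
    position command x around the reference x_ref, via
        m * a + d * v + k * (x - x_ref) = f_ext.
    In a variable-admittance scheme, an RL policy would output
    d and k (and possibly m) each step; here they are constants.
    """
    a = (f_ext - d * v - k * (x - x_ref)) / m
    v = v + a * dt
    x = x + v * dt
    return x, v

# A constant 1 N push against stiffness k = 2 N/m should make the
# commanded position settle near x_ref + f_ext / k = 0.5 m.
x, v = 0.0, 0.0
for _ in range(20_000):  # simulate 20 s at dt = 1 ms
    x, v = admittance_step(x, v, f_ext=1.0, x_ref=0.0,
                           m=1.0, d=5.0, k=2.0, dt=0.001)
print(round(x, 3))  # → 0.5
```

The filter yields compliance without force feedback inside the position loop: the low-level controller still tracks a position command, while the admittance dynamics shape how that command yields to external forces, which is why the parameters lend themselves to online adjustment by a policy.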
About the journal:
Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. Neurocomputing theory, practice, and applications are the essential topics covered.