Authors: Qinrui Liu; Biao Luo; Dongbo Zhang; Renjie Chen
DOI: 10.1109/LRA.2024.3522769
Journal: IEEE Robotics and Automation Letters, vol. 10, no. 2, pp. 1688-1695
Publication date: 2024-12-26 (Journal Article)
URL: https://ieeexplore.ieee.org/document/10816123/
Thinking Before Decision: Efficient Interactive Visual Navigation Based on Local Accessibility Prediction
Embodied AI has made prominent advances in interactive visual navigation tasks based on deep reinforcement learning. In pursuit of higher navigation success rates, previous work has typically focused on training embodied agents to push away interactable objects on the ground. However, such interactive visual navigation largely ignores the cost of interacting with the environment, and interactions are sometimes counterproductive (e.g., pushing an obstacle may block an existing path). Considering these scenarios, we develop an efficient interactive visual navigation method. We propose a Local Accessibility Prediction (LAP) module that enables the agent to learn to reason about how an upcoming action will affect the environment and the navigation task before making a decision. In addition, we introduce an interaction penalty term to represent the cost of interacting with the environment, with different penalties imposed depending on the size of the obstacle pushed away. We introduce the average number of interactions as a new evaluation metric, and a two-stage training pipeline is employed to improve learning performance. Our experiments in the AI2-THOR environment show that our method outperforms the baseline on all evaluation metrics, achieving significant improvements in navigation performance.
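The abstract describes a size-dependent interaction penalty and an average-interactions metric without giving formulas. The sketch below illustrates one plausible reading; the reward values, the linear size scaling, and all function and field names are assumptions for illustration, not details taken from the paper.

```python
# Hypothetical sketch of the reward shaping and evaluation metric described
# in the abstract. All coefficients are assumed, not taken from the paper.

def step_reward(reached_goal: bool, step_cost: float,
                pushed_obstacle: bool, obstacle_size: float) -> float:
    """Navigation reward with a size-dependent interaction penalty."""
    # Assumed base reward: success bonus, otherwise a small per-step cost.
    reward = 10.0 if reached_goal else -step_cost
    if pushed_obstacle:
        # Larger pushed obstacles incur a larger penalty (assumed linear).
        reward -= 0.5 * obstacle_size
    return reward

def average_interactions(episodes: list[dict]) -> float:
    """Mean number of push interactions per episode (the proposed metric)."""
    return sum(e["num_interactions"] for e in episodes) / len(episodes)
```

For example, an episode that pushes one large obstacle would accumulate a larger penalty than one that pushes a small obstacle or none, so a policy maximizing this reward is steered toward fewer and cheaper interactions.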
About the journal:
The scope of this journal is to publish peer-reviewed articles that provide a timely and concise account of innovative research ideas and application results, reporting significant theoretical findings and application case studies in areas of robotics and automation.