Double-Feedback: Enhancing Large Language Models Reasoning in Robotic Tasks by Knowledge Graphs
Haitao Wang; Shaolin Zhang; Shuo Wang; Tianyu Jiang; Yueguang Ge
IEEE Robotics and Automation Letters, vol. 10, no. 6, pp. 5951-5958, published 2025-04-21
DOI: 10.1109/LRA.2025.3562776
URL: https://ieeexplore.ieee.org/document/10971231/
JCR: Q2 (Robotics); Impact Factor: 4.6
Citations: 0
Abstract
Large language models (LLMs) have demonstrated remarkable reasoning capabilities. However, in real-world robotic tasks, LLMs face grounding issues and lack precise feedback, causing the generated solutions to deviate from the actual situation. In this letter, we propose Double-Feedback, a method that enhances LLM reasoning with knowledge graphs (KGs). The KGs play three key roles in Double-Feedback: prompting the LLMs to generate solutions, representing the task scenes, and verifying the solutions to provide feedback. We design structured knowledge prompts that convey the task knowledge background, example solutions, revision principles, and robotic tasks to the LLMs. We also introduce a distributed representation to quantify the task scene interpretably. Based on the structured knowledge prompts and the distributed representation, we employ the KGs to evaluate the feasibility of each step before execution and to verify the effects of the solutions after the tasks are completed. The LLMs can adjust and replan the solutions based on the feedback from the KGs. Extensive experiments demonstrate that Double-Feedback outperforms prior work on the ALFRED benchmark. In addition, ablation studies show that Double-Feedback guides LLMs in generating solutions aligned with real-world robotic tasks.
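The control flow the abstract describes — generate a plan with the LLM, gate each step on a KG feasibility check before execution, verify the outcome against the KG afterward, and replan on failure — can be sketched schematically. This is only an illustrative outline under assumed interfaces: the function and callback names (`llm_generate`, `kg_check_step`, `kg_verify_outcome`, `execute_step`) are hypothetical stand-ins, not the paper's actual prompts, KG schema, or API.

```python
# Schematic sketch of a Double-Feedback-style loop. All callback names are
# hypothetical; the paper's concrete prompts and KG representation differ.

def plan_with_kg_feedback(task, llm_generate, kg_check_step,
                          kg_verify_outcome, execute_step, max_replans=3):
    """Generate a plan with an LLM, gate each step on a KG feasibility
    check before execution, and verify the final outcome via the KG."""
    plan = llm_generate(task, feedback=None)
    for _ in range(max_replans):
        failed = None
        for step in plan:
            ok, reason = kg_check_step(step)   # pre-execution feedback
            if not ok:
                failed = (step, reason)        # infeasible step: stop and replan
                break
            execute_step(step)
        if failed is None:
            ok, reason = kg_verify_outcome(task)  # post-execution feedback
            if ok:
                return plan                    # KG confirms the task effects
            failed = (None, reason)
        # Feed the KG's diagnosis back to the LLM and replan.
        plan = llm_generate(task, feedback=failed)
    return plan
```

The two feedback channels correspond to the "double" feedback in the method's name: a per-step feasibility gate before acting, and an outcome verification after acting, both driven by the knowledge graph rather than by the LLM itself.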
Journal description:
The scope of this journal is to publish peer-reviewed articles that provide a timely and concise account of innovative research ideas and application results, reporting significant theoretical findings and application case studies in areas of robotics and automation.