Jianwei Zhu, Xueying Sun, Qiang Zhang, Mingmin Liu
{"title":"面向任务的视觉-语言-动作建模与跨模态融合","authors":"Jianwei Zhu, Xueying Sun, Qiang Zhang, Mingmin Liu","doi":"10.1007/s40747-025-01893-x","DOIUrl":null,"url":null,"abstract":"<p>Task-oriented grasping (TOG) aims to predict the appropriate pose for grasping based on a specific task. While recent approaches have incorporated semantic knowledge into TOG models to enable robots to understand linguistic commands, they lack the ability to leverage relevant information from vision, language, and action. To address this problem, we propose a novel multimodal fusion grasping framework called VLA-Grasp. VLA-Grasp utilizes prompted large language model for task inference, and multi-channel multimodal encoders and cross-attention modules are proposed to capture the intrinsic links between vision-language-action, thus improving the generalization ability of the model. In addition, we introduce a multiple grasping decision method that can provide multiple feasible grasping actions. We experimentally evaluate our approach on a publicly available dataset and compare it to state-of-the-art methods. In addition, we experimentally validate our model in a real-world scenario to evaluate its performance. The results show that our method provides a reliable and efficient solution for the TOG task. The code is available at https://github.com/Jianwei915/VLA-Grasp.</p>","PeriodicalId":10524,"journal":{"name":"Complex & Intelligent Systems","volume":"9 1","pages":""},"PeriodicalIF":5.0000,"publicationDate":"2025-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"VLA-Grasp: a vision-language-action modeling with cross-modality fusion for task-oriented grasping\",\"authors\":\"Jianwei Zhu, Xueying Sun, Qiang Zhang, Mingmin Liu\",\"doi\":\"10.1007/s40747-025-01893-x\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Task-oriented grasping (TOG) aims to predict the appropriate pose for grasping based on a specific task. While recent approaches have incorporated semantic knowledge into TOG models to enable robots to understand linguistic commands, they lack the ability to leverage relevant information from vision, language, and action. To address this problem, we propose a novel multimodal fusion grasping framework called VLA-Grasp. VLA-Grasp utilizes prompted large language model for task inference, and multi-channel multimodal encoders and cross-attention modules are proposed to capture the intrinsic links between vision-language-action, thus improving the generalization ability of the model. In addition, we introduce a multiple grasping decision method that can provide multiple feasible grasping actions. We experimentally evaluate our approach on a publicly available dataset and compare it to state-of-the-art methods. In addition, we experimentally validate our model in a real-world scenario to evaluate its performance. The results show that our method provides a reliable and efficient solution for the TOG task. 
The code is available at https://github.com/Jianwei915/VLA-Grasp.</p>\",\"PeriodicalId\":10524,\"journal\":{\"name\":\"Complex & Intelligent Systems\",\"volume\":\"9 1\",\"pages\":\"\"},\"PeriodicalIF\":5.0000,\"publicationDate\":\"2025-05-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Complex & Intelligent Systems\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1007/s40747-025-01893-x\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Complex & Intelligent Systems","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s40747-025-01893-x","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
VLA-Grasp: a vision-language-action modeling with cross-modality fusion for task-oriented grasping
Task-oriented grasping (TOG) aims to predict an appropriate grasp pose for a specific task. While recent approaches have incorporated semantic knowledge into TOG models so that robots can understand linguistic commands, they lack the ability to jointly leverage relevant information from vision, language, and action. To address this problem, we propose a novel multimodal fusion grasping framework called VLA-Grasp. VLA-Grasp uses a prompted large language model for task inference, and it introduces multi-channel multimodal encoders and cross-attention modules to capture the intrinsic links between vision, language, and action, thereby improving the generalization ability of the model. We also introduce a multiple-grasp decision method that can provide several feasible grasping actions. We evaluate our approach experimentally on a publicly available dataset and compare it with state-of-the-art methods, and we further validate the model in a real-world scenario to assess its performance. The results show that our method provides a reliable and efficient solution for the TOG task. The code is available at https://github.com/Jianwei915/VLA-Grasp.
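To make the cross-modality fusion idea concrete, the sketch below shows how visual features might attend to the language/task tokens produced by a prompted large language model via cross-attention. This is a minimal illustrative sketch, not the authors' released implementation (see the linked repository for that): the class name `CrossModalFusion`, the dimensions, and the choice of `torch.nn.MultiheadAttention` are assumptions made here for illustration.

```python
# Illustrative sketch of cross-modal fusion via cross-attention.
# Module names and dimensions are assumptions, not the VLA-Grasp release.
import torch
import torch.nn as nn


class CrossModalFusion(nn.Module):
    """Fuse visual tokens with language/task tokens using cross-attention."""

    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        # Query = vision, Key/Value = language: image features attend to the
        # task description inferred by the prompted large language model.
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, vis_tokens: torch.Tensor, lang_tokens: torch.Tensor) -> torch.Tensor:
        # vis_tokens:  (B, N_v, dim) visual patch features
        # lang_tokens: (B, N_l, dim) encoded task/command tokens
        fused, _ = self.cross_attn(vis_tokens, lang_tokens, lang_tokens)
        x = self.norm1(vis_tokens + fused)      # residual + norm
        return self.norm2(x + self.ffn(x))      # language-conditioned visual features


if __name__ == "__main__":
    vis = torch.randn(2, 196, 256)   # e.g. a 14x14 patch grid of image features
    lang = torch.randn(2, 20, 256)   # tokenized task description
    out = CrossModalFusion()(vis, lang)
    print(out.shape)                 # torch.Size([2, 196, 256])
```

In a pipeline of this kind, the fused, language-conditioned visual features would then feed a grasp-decision head that scores candidate grasp poses and retains every feasible one, matching the multiple-grasp decision idea described in the abstract.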
About the journal:
Complex & Intelligent Systems aims to provide a forum for presenting and discussing novel approaches, tools and techniques meant for attaining a cross-fertilization between the broad fields of complex systems, computational simulation, and intelligent analytics and visualization. The transdisciplinary research that the journal focuses on will expand the boundaries of our understanding by investigating the principles and processes that underlie many of the most profound problems facing society today.