{"title":"PDCNet: A lightweight and efficient robotic grasp detection framework via Partial Convolution and knowledge distillation","authors":"Yanshu Jiang, Yanze Fang, Liwei Deng","doi":"10.1016/j.cviu.2025.104441","DOIUrl":null,"url":null,"abstract":"<div><div>Improving detection accuracy complicates robotic grasp models, which makes deploying them on resource-constrained edge AI devices more challenging. Although various lightweight strategies have been proposed, directly designing compact networks may not be optimal, as balancing accuracy and model size is challenging. This paper proposes a lightweight grasp detection framework, PDCNet. In response to this problem, we optimize the interplay between computational demands and detection performance. The method integrates Partial Convolution (PConv) for efficient feature extraction, Discrete Wavelet Transform (DWT) for enhancing frequency-domain feature representation, and a Cross-Stage Fusion (CSF) strategy for optimizing the utilization of multi-scale features. A Quality-Enhanced Huber Loss Function (Q-Huber) is also introduced to improve the network’s sensitivity to vital grasp localities. Finally, the teacher–student framework distills expertise into a compact student model. Comprehensive evaluations were conducted using the public datasets to demonstrate that PDCNet achieves detection accuracies of 98.7%, 95.8%, and 97.1% on Cornell, Jacquard and Jacquard_V2 datasets respectively, while maintaining minimal parameters and high computational efficiency. Real-world experiments on an embedded edge AI device further validate the capability of PDCNet to perform accurate grasp detection under limited computational resources.</div></div>","PeriodicalId":50633,"journal":{"name":"Computer Vision and Image Understanding","volume":"259 ","pages":"Article 104441"},"PeriodicalIF":3.5000,"publicationDate":"2025-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computer Vision and Image Understanding","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S107731422500164X","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
Improving detection accuracy tends to complicate robotic grasp models, making them harder to deploy on resource-constrained edge AI devices. Although various lightweight strategies have been proposed, directly designing compact networks may not be optimal, as balancing accuracy against model size is difficult. To address this problem, this paper proposes PDCNet, a lightweight grasp detection framework that optimizes the trade-off between computational demands and detection performance. The method integrates Partial Convolution (PConv) for efficient feature extraction, the Discrete Wavelet Transform (DWT) for enhanced frequency-domain feature representation, and a Cross-Stage Fusion (CSF) strategy for better utilization of multi-scale features. A Quality-Enhanced Huber loss function (Q-Huber) is also introduced to improve the network's sensitivity to vital grasp regions. Finally, a teacher-student framework distills expertise into a compact student model. Comprehensive evaluations on public datasets show that PDCNet achieves detection accuracies of 98.7%, 95.8%, and 97.1% on the Cornell, Jacquard, and Jacquard_V2 datasets, respectively, while keeping the parameter count small and computational efficiency high. Real-world experiments on an embedded edge AI device further validate that PDCNet performs accurate grasp detection under limited computational resources.
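The abstract's first efficiency component, Partial Convolution (PConv), applies a spatial convolution to only a fraction of the input channels and passes the remainder through untouched, cutting FLOPs and memory access relative to a full convolution. The sketch below is a minimal PyTorch illustration of the idea as popularized by FasterNet; the class name, the split ratio `n_div`, and the tensor shapes are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class PartialConv(nn.Module):
    """Partial Convolution (PConv): convolve only a fraction of the
    channels and forward the rest unchanged, reducing FLOPs and
    memory access versus a full convolution."""

    def __init__(self, channels: int, n_div: int = 4, kernel_size: int = 3):
        super().__init__()
        self.conv_channels = channels // n_div           # channels that get convolved
        self.pass_channels = channels - self.conv_channels
        self.conv = nn.Conv2d(
            self.conv_channels, self.conv_channels,
            kernel_size, padding=kernel_size // 2, bias=False,
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Split along the channel axis and convolve the first chunk only.
        x1, x2 = torch.split(x, [self.conv_channels, self.pass_channels], dim=1)
        return torch.cat([self.conv(x1), x2], dim=1)

# Example: a 64-channel feature map; only 16 channels are convolved.
feat = torch.randn(1, 64, 56, 56)
out = PartialConv(64)(feat)
print(out.shape)  # torch.Size([1, 64, 56, 56])
```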
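The DWT component decomposes a feature map into a low-frequency approximation and high-frequency detail sub-bands, so coarse structure and edge information can be handled separately in the frequency domain. Below is a minimal single-level Haar decomposition of a feature tensor; the abstract does not name the wavelet basis, so Haar is our assumption.

```python
import torch

def haar_dwt2d(x: torch.Tensor):
    """Single-level 2D Haar DWT of a feature map (B, C, H, W), yielding
    four half-resolution sub-bands: approximation (LL) plus horizontal,
    vertical, and diagonal detail (LH, HL, HH)."""
    a = x[:, :, 0::2, 0::2]  # even rows, even cols
    b = x[:, :, 0::2, 1::2]  # even rows, odd cols
    c = x[:, :, 1::2, 0::2]  # odd rows, even cols
    d = x[:, :, 1::2, 1::2]  # odd rows, odd cols
    ll = (a + b + c + d) / 2  # low-frequency approximation
    lh = (a - b + c - d) / 2  # horizontal detail
    hl = (a + b - c - d) / 2  # vertical detail
    hh = (a - b - c + d) / 2  # diagonal detail
    return ll, lh, hl, hh

feat = torch.randn(1, 64, 56, 56)
ll, lh, hl, hh = haar_dwt2d(feat)
print(ll.shape)  # torch.Size([1, 64, 28, 28])
```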
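The exact formulation of the Q-Huber loss is not given in the abstract. One plausible reading, shown here purely as a hypothetical sketch, is a standard Huber (smooth L1) error re-weighted by the ground-truth grasp-quality map, so that errors in high-quality grasp regions dominate the gradient; the function name, the weighting scheme, and `delta` are all our assumptions.

```python
import torch

def q_huber_loss(pred, target, quality, delta: float = 1.0, eps: float = 1e-6):
    """Hypothetical quality-weighted Huber loss: per-pixel Huber error,
    re-weighted by the ground-truth grasp-quality map to emphasize
    vital grasp localities."""
    err = torch.abs(pred - target)
    huber = torch.where(
        err <= delta,
        0.5 * err ** 2,                  # quadratic near zero
        delta * (err - 0.5 * delta),     # linear in the tails
    )
    w = quality / (quality.mean() + eps)  # emphasis on high-quality regions
    return (w * huber).mean()

# Usage on dense grasp maps (shapes are illustrative).
pred = torch.rand(1, 1, 56, 56)
target = torch.rand(1, 1, 56, 56)
quality = torch.rand(1, 1, 56, 56)  # ground-truth grasp quality in [0, 1]
loss = q_huber_loss(pred, target, quality)
```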
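Finally, the teacher-student distillation step can be summarized generically: the compact student is trained against both the ground truth and the frozen teacher's predictions. The sketch below assumes dense grasp maps as outputs; the weighting `alpha` and the choice of MSE as the mimicry term are illustrative, not the paper's recipe.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_maps, teacher_maps, targets, alpha: float = 0.5):
    """Generic teacher-student distillation for dense grasp maps:
    the student fits the ground truth while also mimicking the
    (frozen) teacher. alpha balances the two terms."""
    supervised = sum(F.smooth_l1_loss(s, t) for s, t in zip(student_maps, targets))
    mimic = sum(F.mse_loss(s, t.detach()) for s, t in zip(student_maps, teacher_maps))
    return (1 - alpha) * supervised + alpha * mimic
```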
Journal Introduction:
The central focus of this journal is the computer analysis of pictorial information. Computer Vision and Image Understanding publishes papers covering all aspects of image analysis from the low-level, iconic processes of early vision to the high-level, symbolic processes of recognition and interpretation. A wide range of topics in the image understanding area is covered, including papers offering insights that differ from predominant views.
Research Areas Include:
• Theory
• Early vision
• Data structures and representations
• Shape
• Range
• Motion
• Matching and recognition
• Architecture and languages
• Vision systems