PR-DETR: Extracting and utilizing prior knowledge for improved end-to-end object detection
Yukang Huo, Mingyuan Yao, Tonghao Wang, Qingbin Tian, Jiayin Zhao, Xiao Liu, Haihua Wang
Image and Vision Computing, Volume 163, Article 105745, published 2025-09-26
DOI: 10.1016/j.imavis.2025.105745
Citations: 0
Abstract
Query initialization in Transformer-based object detectors is typically static, which limits the model's ability to flexibly adjust how much attention it pays to different image features during learning. Moreover, without the guidance of global spatial semantic information, a model that relies only on local features disregards the relationship between a target and its surrounding environment, leading to false or missed detections. To address these problems, this paper proposes PR-DETR, a query-optimized object detection model guided by feature maps. PR-DETR introduces an Aggregating Global Spatial Semantic Information (AGSSI) module to extract and enhance global spatial semantic information. The queries then interact with local and global spatial semantic information already in the encoding stage, acquiring sufficient prior knowledge and providing more accurate and efficient queries for decoding the subsequent feature maps. Experiments show that PR-DETR significantly improves detection accuracy on the MS COCO dataset compared with existing related work: its mAP is 3.5, 2.3, and 2.0 points higher than Conditional-DETR, Anchor-DETR, and DAB-DETR, respectively.
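The abstract does not give implementation details of the AGSSI module, but the core idea it describes (making decoder queries image-dependent by letting them attend to encoder features before decoding, instead of using purely static learned embeddings) can be sketched minimally. The function names and the dot-product attention form below are illustrative assumptions, not the paper's actual design:

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def init_queries_from_features(features, seed_queries):
    """Hypothetical content-based query initialization.

    features:     list of d-dim encoder feature vectors (flattened feature map)
    seed_queries: list of d-dim learned (static) query embeddings

    Each seed query is replaced by an attention-weighted sum of encoder
    features, so the resulting query carries an image-dependent prior
    rather than remaining static across inputs.
    """
    d = len(seed_queries[0])
    queries = []
    for q in seed_queries:
        # Scaled dot-product scores between the query and every feature.
        scores = [sum(qi * fi for qi, fi in zip(q, f)) / math.sqrt(d)
                  for f in features]
        weights = softmax(scores)
        # Convex combination of encoder features.
        queries.append([sum(w * f[i] for w, f in zip(weights, features))
                        for i in range(d)])
    return queries
```

Because each output query is a convex combination of encoder features, it stays inside the feature distribution of the current image, which is one plausible way to give the decoder the kind of prior knowledge the abstract describes.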
Journal Introduction:
Image and Vision Computing has as a primary aim the provision of an effective medium of interchange for the results of high quality theoretical and applied research fundamental to all aspects of image interpretation and computer vision. The journal publishes work that proposes new image interpretation and computer vision methodology or addresses the application of such methods to real world scenes. It seeks to strengthen a deeper understanding in the discipline by encouraging the quantitative comparison and performance evaluation of the proposed methodology. The coverage includes: image interpretation, scene modelling, object recognition and tracking, shape analysis, monitoring and surveillance, active vision and robotic systems, SLAM, biologically-inspired computer vision, motion analysis, stereo vision, document image understanding, character and handwritten text recognition, face and gesture recognition, biometrics, vision-based human-computer interaction, human activity and behavior understanding, data fusion from multiple sensor inputs, image databases.