{"title":"用于物体检测的自适应图推理网络","authors":"Xinfang Zhong , Wenlan Kuang , Zhixin Li","doi":"10.1016/j.imavis.2024.105248","DOIUrl":null,"url":null,"abstract":"<div><p>In recent years, Transformer-based object detection has achieved leaps and bounds in performance. Nevertheless, these methods still face some problems such as difficulty in detecting heavy occluded objects and tiny objects. Besides, the mainstream object detection paradigms usually deal with region proposals alone, without considering contextual information and the relationships between objects, which results in limited improvement. In this paper, we propose an Adaptive Graph Reasoning Network (AGRN) that explores the relationships between specific objects in an image and mines high-level semantic information via GCN to enrich visual features. Firstly, to enhance the semantic correlation between objects, a cross-scale semantic-aware module is proposed to realize the semantic interaction between feature maps of different scales so as to obtain a cross-scale semantic feature. Secondly, we activate the instance features in the image and combine the cross-scale semantic feature to create a dynamic graph. Finally, guided by the specific semantics, the attention mechanism is introduced to focus on the corresponding critical regions. On the MS-COCO 2017 dataset, our method improves the performance by 3.9% box AP and 3.6% mask AP in object detection and instance segmentation respectively relative to baseline. Additionally, our model has demonstrated exceptional performance on the PASCAL VOC dataset.</p></div>","PeriodicalId":50374,"journal":{"name":"Image and Vision Computing","volume":"151 ","pages":"Article 105248"},"PeriodicalIF":4.2000,"publicationDate":"2024-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0262885624003536/pdfft?md5=c327d5634e930b5455fb578d65af5bcf&pid=1-s2.0-S0262885624003536-main.pdf","citationCount":"0","resultStr":"{\"title\":\"Adaptive graph reasoning network for object detection\",\"authors\":\"Xinfang Zhong , Wenlan Kuang , Zhixin Li\",\"doi\":\"10.1016/j.imavis.2024.105248\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>In recent years, Transformer-based object detection has achieved leaps and bounds in performance. Nevertheless, these methods still face some problems such as difficulty in detecting heavy occluded objects and tiny objects. Besides, the mainstream object detection paradigms usually deal with region proposals alone, without considering contextual information and the relationships between objects, which results in limited improvement. In this paper, we propose an Adaptive Graph Reasoning Network (AGRN) that explores the relationships between specific objects in an image and mines high-level semantic information via GCN to enrich visual features. Firstly, to enhance the semantic correlation between objects, a cross-scale semantic-aware module is proposed to realize the semantic interaction between feature maps of different scales so as to obtain a cross-scale semantic feature. Secondly, we activate the instance features in the image and combine the cross-scale semantic feature to create a dynamic graph. Finally, guided by the specific semantics, the attention mechanism is introduced to focus on the corresponding critical regions. 
On the MS-COCO 2017 dataset, our method improves the performance by 3.9% box AP and 3.6% mask AP in object detection and instance segmentation respectively relative to baseline. Additionally, our model has demonstrated exceptional performance on the PASCAL VOC dataset.</p></div>\",\"PeriodicalId\":50374,\"journal\":{\"name\":\"Image and Vision Computing\",\"volume\":\"151 \",\"pages\":\"Article 105248\"},\"PeriodicalIF\":4.2000,\"publicationDate\":\"2024-09-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.sciencedirect.com/science/article/pii/S0262885624003536/pdfft?md5=c327d5634e930b5455fb578d65af5bcf&pid=1-s2.0-S0262885624003536-main.pdf\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Image and Vision Computing\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0262885624003536\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Image and Vision Computing","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0262885624003536","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Adaptive graph reasoning network for object detection
In recent years, Transformer-based object detection has achieved substantial gains in performance. Nevertheless, these methods still struggle to detect heavily occluded objects and tiny objects. Moreover, mainstream object detection paradigms usually process region proposals in isolation, without considering contextual information or the relationships between objects, which limits further improvement. In this paper, we propose an Adaptive Graph Reasoning Network (AGRN) that explores the relationships between specific objects in an image and mines high-level semantic information via a graph convolutional network (GCN) to enrich visual features. First, to enhance the semantic correlation between objects, a cross-scale semantic-aware module is proposed to realize semantic interaction between feature maps of different scales and thereby obtain a cross-scale semantic feature. Second, we activate the instance features in the image and combine them with the cross-scale semantic feature to create a dynamic graph. Finally, guided by the specific semantics, an attention mechanism is introduced to focus on the corresponding critical regions. On the MS-COCO 2017 dataset, our method improves box AP by 3.9% for object detection and mask AP by 3.6% for instance segmentation relative to the baseline. Additionally, our model demonstrates excellent performance on the PASCAL VOC dataset.
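The abstract describes the graph-reasoning step only at a high level. As a rough illustration of that idea, building a dynamic graph over region (instance) features and propagating semantics with a graph convolution, the following PyTorch sketch shows one plausible formulation. The class and parameter names (GraphReasoning, hidden_dim, the single GCN step, the softmax affinity as adjacency) are hypothetical and are not taken from the paper's actual AGRN implementation.

    # Illustrative sketch of GCN-style graph reasoning over region features.
    # Hypothetical module; the paper's AGRN may be structured differently.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class GraphReasoning(nn.Module):
        """Builds a dynamic graph over N region features and propagates
        semantics with one round of graph convolution."""

        def __init__(self, in_dim: int, hidden_dim: int):
            super().__init__()
            # Projections used to compute pairwise affinities between regions.
            self.query = nn.Linear(in_dim, hidden_dim)
            self.key = nn.Linear(in_dim, hidden_dim)
            # GCN weight that transforms the aggregated neighbour features.
            self.gcn_weight = nn.Linear(in_dim, in_dim)
            self.norm = nn.LayerNorm(in_dim)

        def forward(self, regions: torch.Tensor) -> torch.Tensor:
            # regions: (N, in_dim) pooled features of N region proposals.
            q, k = self.query(regions), self.key(regions)
            # Dynamic adjacency: row-normalised pairwise affinities, so the
            # graph adapts to the objects present in each image.
            adj = F.softmax(q @ k.t() / q.size(-1) ** 0.5, dim=-1)  # (N, N)
            # One GCN step: aggregate neighbours, transform, residual + norm.
            return self.norm(regions + F.relu(self.gcn_weight(adj @ regions)))

    if __name__ == "__main__":
        regions = torch.randn(8, 256)           # 8 proposals, 256-d features
        enriched = GraphReasoning(256, 64)(regions)
        print(enriched.shape)                   # torch.Size([8, 256])

Using a softmax over projected affinities as a soft adjacency matrix is one common way to make the graph structure image-dependent; the enriched region features could then feed the detection and segmentation heads in place of the original pooled features.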
Journal introduction:
Image and Vision Computing has as a primary aim the provision of an effective medium of interchange for the results of high quality theoretical and applied research fundamental to all aspects of image interpretation and computer vision. The journal publishes work that proposes new image interpretation and computer vision methodology or addresses the application of such methods to real world scenes. It seeks to strengthen a deeper understanding in the discipline by encouraging the quantitative comparison and performance evaluation of the proposed methodology. The coverage includes: image interpretation, scene modelling, object recognition and tracking, shape analysis, monitoring and surveillance, active vision and robotic systems, SLAM, biologically-inspired computer vision, motion analysis, stereo vision, document image understanding, character and handwritten text recognition, face and gesture recognition, biometrics, vision-based human-computer interaction, human activity and behavior understanding, data fusion from multiple sensor inputs, image databases.