{"title":"YGNet: A Lightweight Object Detection Model for Remote Sensing","authors":"Xin Song;Erhao Gao","doi":"10.1109/LGRS.2024.3497575","DOIUrl":null,"url":null,"abstract":"In the dynamic field of remote sensing images (RSIs), the challenge of object scale variability and sensor resolution disparities is formidable. Addressing these complexities, we have designed a lightweight remote sensing model named YGNet, tailored for multiscale object detection. It demonstrates excellent performance in detecting both multiscale and small objects within RSIs. The E-RMSK module within YGNet employs a gradient-based architecture with multiple parallel reparameterized convolutions in its internal branches, facilitating the extraction of multiscale features while maintaining parameter and computational efficiency. The HLS-PAN structure integrates feature maps extracted through feature selection, enabling the top layers to relay image information downward to lower levels and the lowest layers to transmit data upward for localization, achieving feature fusion. This synergistic effect of the module design enhances the accuracy of object detection in complex remote sensing scenarios and ensures the model’s feasibility on platforms with limited resources. Rigorous testing on the RSOD and NWPU VHR-10 datasets has proven YGNet’s exceptional capabilities, achieving the mean average precision (mAP) scores of 96.2% and 88.9%, respectively. The model meets the demands for real-time, lightweight, multiscale object detection in remote sensing imagery, making it highly suitable for deployment in resource-constrained environments.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":0.0000,"publicationDate":"2024-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10752592/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
In remote sensing images (RSIs), object scale variability and sensor resolution disparities pose a formidable challenge. To address these complexities, we design a lightweight model named YGNet, tailored for multiscale object detection, which performs well on both multiscale and small objects in RSIs. The E-RMSK module within YGNet employs a gradient-based architecture with multiple parallel reparameterized convolutions in its internal branches, facilitating the extraction of multiscale features while maintaining parameter and computational efficiency. The HLS-PAN structure integrates feature maps obtained through feature selection, enabling top layers to relay image information downward to lower levels and the lowest layers to transmit localization cues upward, thereby achieving feature fusion. The synergy of these module designs improves detection accuracy in complex remote sensing scenes while keeping the model feasible on resource-limited platforms. Testing on the RSOD and NWPU VHR-10 datasets demonstrates YGNet's strong performance, with mean average precision (mAP) scores of 96.2% and 88.9%, respectively. The model meets the demands of real-time, lightweight, multiscale object detection in remote sensing imagery, making it well suited for deployment in resource-constrained environments.
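To make the two module ideas more concrete, the sketches below are minimal illustrations only, not the authors' E-RMSK or HLS-PAN implementations; all class names, branch choices, kernel sizes, and channel counts are assumptions introduced here for illustration. The first sketch shows the general pattern of parallel convolution branches with different kernel sizes whose outputs are summed, the kind of multi-branch structure that RepVGG-style structural reparameterization can later fold into a single convolution for inference (the fusion step and the gradient-path design of E-RMSK are omitted).

```python
# Minimal sketch (assumption, not the authors' E-RMSK code) of a block with
# parallel convolution branches at several kernel sizes, summed and activated.
import torch
import torch.nn as nn


class MultiBranchConvBlock(nn.Module):
    """Parallel conv branches whose outputs are summed.

    During training the branches see different receptive fields; RepVGG-style
    structural reparameterization can merge such linear branches into one
    convolution for inference (that fusion step is omitted here).
    """

    def __init__(self, channels: int):
        super().__init__()
        # Hypothetical branch set: 3x3, 5x5, and pointwise 1x1, all padded so
        # the spatial size is preserved.
        self.branch3 = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.branch5 = nn.Sequential(
            nn.Conv2d(channels, channels, 5, padding=2, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.branch1 = nn.Sequential(
            nn.Conv2d(channels, channels, 1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.act = nn.SiLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.branch3(x) + self.branch5(x) + self.branch1(x))


if __name__ == "__main__":
    block = MultiBranchConvBlock(channels=64)
    print(block(torch.randn(1, 64, 80, 80)).shape)  # torch.Size([1, 64, 80, 80])
```

The second sketch illustrates generic PAN-style bidirectional fusion in the spirit of the HLS-PAN description: a top-down pass relays information from deep to shallow levels, and a bottom-up pass sends localization cues back to the deeper levels. Real detectors interleave 1x1 convolutions and feature-selection or weighting modules between these additions; those are omitted here, and equal channel counts across levels are assumed.

```python
# Minimal sketch (assumption, not HLS-PAN) of bidirectional pyramid fusion:
# a top-down pass followed by a bottom-up pass, fusing levels by addition.
import torch
import torch.nn.functional as F


def pan_style_fusion(p3, p4, p5):
    """Fuse three pyramid levels (strides 8/16/32, equal channel counts)."""
    # Top-down: upsample deeper maps and add them to shallower ones.
    p4_td = p4 + F.interpolate(p5, scale_factor=2, mode="nearest")
    p3_td = p3 + F.interpolate(p4_td, scale_factor=2, mode="nearest")
    # Bottom-up: downsample shallower maps and add them to deeper ones,
    # passing localization cues back up the pyramid.
    p4_out = p4_td + F.max_pool2d(p3_td, kernel_size=2)
    p5_out = p5 + F.max_pool2d(p4_out, kernel_size=2)
    return p3_td, p4_out, p5_out


if __name__ == "__main__":
    p3, p4, p5 = (torch.randn(1, 64, s, s) for s in (80, 40, 20))
    print([t.shape for t in pan_style_fusion(p3, p4, p5)])
```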