Baoling Gui , Lydia Sam , Anshuman Bhardwaj , Diego Soto Gómez , Félix González Peñaloza , Manfred F. Buchroithner , David R. Green
{"title":"SAGRNet:一种新的基于目标的图像卷积神经网络,用于遥感影像中不同植被覆盖的分类","authors":"Baoling Gui , Lydia Sam , Anshuman Bhardwaj , Diego Soto Gómez , Félix González Peñaloza , Manfred F. Buchroithner , David R. Green","doi":"10.1016/j.isprsjprs.2025.06.004","DOIUrl":null,"url":null,"abstract":"<div><div>Growing global population, changing climate, and shrinking land resources demand for quicker, efficient, and more accurate methods of mapping and monitoring vegetation cover in remote sensing datasets. Many deep learning-based methods have been widely applied for semantic segmentation tasks in remote sensing images of vegetated environments. However, most existing models are pixel-based, which introduces challenges such as high time consumption, cumbersome implementation, and limited scalability. This paper presents the SAGRNet model, a Graph Convolutional Neural Network (GCN) that incorporates sampling aggregation and self-attention mechanisms, while leveraging the ResNet residual network structure. A key innovation of SAGRNet is its ability to fuse features extracted through diverse algorithms, enabling comprehensive representation and enhanced classification performance. The SAGRNet model demonstrates superior performance over leading pixel-based neural networks, such as U-Net++ and DeepLabV3, in terms of both time efficiency and accuracy in vegetation image classification tasks. We achieved an overall mapping accuracy of ∼90 % using SAGRNet, compared to ∼87% and ∼85% from U-Net++ and DeepLabV3, respectively. Additionally, it offers more convenience in data processing. Furthermore, the model significantly outperforms cutting-edge graph-based convolutional networks, including Graph U-Net (achieved overall accuracy ∼65%) and TGNN (achieved overall accuracy ∼75%), showcasing exceptional generalization capability and classification accuracy. This paper provides a comprehensive analysis of the various processing aspects of this object-based GCN for vegetation mapping and emphasizes its significant potential for practical use. The model’s versatility can also be expanded to other image processing domains, offering unprecedented possibilities of information extraction from satellite imagery. The code for practical application experiment is available at <span><span>https://github.com/baoling123/GCN-remote-sensing-classification.git</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50269,"journal":{"name":"ISPRS Journal of Photogrammetry and Remote Sensing","volume":"227 ","pages":"Pages 99-124"},"PeriodicalIF":10.6000,"publicationDate":"2025-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"SAGRNet: A novel object-based graph convolutional neural network for diverse vegetation cover classification in remotely-sensed imagery\",\"authors\":\"Baoling Gui , Lydia Sam , Anshuman Bhardwaj , Diego Soto Gómez , Félix González Peñaloza , Manfred F. Buchroithner , David R. Green\",\"doi\":\"10.1016/j.isprsjprs.2025.06.004\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Growing global population, changing climate, and shrinking land resources demand for quicker, efficient, and more accurate methods of mapping and monitoring vegetation cover in remote sensing datasets. Many deep learning-based methods have been widely applied for semantic segmentation tasks in remote sensing images of vegetated environments. 
However, most existing models are pixel-based, which introduces challenges such as high time consumption, cumbersome implementation, and limited scalability. This paper presents the SAGRNet model, a Graph Convolutional Neural Network (GCN) that incorporates sampling aggregation and self-attention mechanisms, while leveraging the ResNet residual network structure. A key innovation of SAGRNet is its ability to fuse features extracted through diverse algorithms, enabling comprehensive representation and enhanced classification performance. The SAGRNet model demonstrates superior performance over leading pixel-based neural networks, such as U-Net++ and DeepLabV3, in terms of both time efficiency and accuracy in vegetation image classification tasks. We achieved an overall mapping accuracy of ∼90 % using SAGRNet, compared to ∼87% and ∼85% from U-Net++ and DeepLabV3, respectively. Additionally, it offers more convenience in data processing. Furthermore, the model significantly outperforms cutting-edge graph-based convolutional networks, including Graph U-Net (achieved overall accuracy ∼65%) and TGNN (achieved overall accuracy ∼75%), showcasing exceptional generalization capability and classification accuracy. This paper provides a comprehensive analysis of the various processing aspects of this object-based GCN for vegetation mapping and emphasizes its significant potential for practical use. The model’s versatility can also be expanded to other image processing domains, offering unprecedented possibilities of information extraction from satellite imagery. The code for practical application experiment is available at <span><span>https://github.com/baoling123/GCN-remote-sensing-classification.git</span><svg><path></path></svg></span>.</div></div>\",\"PeriodicalId\":50269,\"journal\":{\"name\":\"ISPRS Journal of Photogrammetry and Remote Sensing\",\"volume\":\"227 \",\"pages\":\"Pages 99-124\"},\"PeriodicalIF\":10.6000,\"publicationDate\":\"2025-06-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ISPRS Journal of Photogrammetry and Remote Sensing\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0924271625002308\",\"RegionNum\":1,\"RegionCategory\":\"地球科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"GEOGRAPHY, PHYSICAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ISPRS Journal of Photogrammetry and Remote Sensing","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0924271625002308","RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"GEOGRAPHY, PHYSICAL","Score":null,"Total":0}
SAGRNet: A novel object-based graph convolutional neural network for diverse vegetation cover classification in remotely-sensed imagery
Growing global population, changing climate, and shrinking land resources demand quicker, more efficient, and more accurate methods of mapping and monitoring vegetation cover in remote sensing datasets. Many deep learning-based methods have been widely applied to semantic segmentation tasks in remote sensing images of vegetated environments. However, most existing models are pixel-based, which introduces challenges such as high time consumption, cumbersome implementation, and limited scalability. This paper presents the SAGRNet model, a Graph Convolutional Neural Network (GCN) that incorporates sampling aggregation and self-attention mechanisms while leveraging the ResNet residual network structure. A key innovation of SAGRNet is its ability to fuse features extracted through diverse algorithms, enabling comprehensive representation and enhanced classification performance. The SAGRNet model demonstrates superior performance over leading pixel-based neural networks, such as U-Net++ and DeepLabV3, in terms of both time efficiency and accuracy in vegetation image classification tasks. We achieved an overall mapping accuracy of ∼90% using SAGRNet, compared to ∼87% and ∼85% from U-Net++ and DeepLabV3, respectively. It also simplifies data processing. Furthermore, the model significantly outperforms cutting-edge graph-based convolutional networks, including Graph U-Net (overall accuracy ∼65%) and TGNN (overall accuracy ∼75%), showcasing exceptional generalization capability and classification accuracy. This paper provides a comprehensive analysis of the various processing aspects of this object-based GCN for vegetation mapping and emphasizes its significant potential for practical use. The model’s versatility can also be extended to other image processing domains, offering unprecedented possibilities for information extraction from satellite imagery. The code for the practical application experiments is available at https://github.com/baoling123/GCN-remote-sensing-classification.git.
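To make the architectural ingredients named in the abstract concrete, the sketch below illustrates, in plain PyTorch, how SAGE-style neighbour aggregation over image objects, self-attention across objects, and a ResNet-style residual connection can be combined in a single graph layer. This is a minimal, hypothetical illustration, not the authors' implementation: the class name, parameters, and toy adjacency are assumptions made for clarity, and the actual SAGRNet code lives in the linked GitHub repository.

```python
# Minimal sketch (assumptions, not the authors' code) of one object-based graph layer
# combining: mean neighbour aggregation (SAGE-style), self-attention over objects,
# and a ResNet-style residual connection.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SAGEResidualAttentionLayer(nn.Module):
    """One object-graph layer: aggregate neighbours, attend over objects, add residual."""

    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.neigh_proj = nn.Linear(dim, dim)  # transforms aggregated neighbour features
        self.self_proj = nn.Linear(dim, dim)   # transforms the node's own features
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x:   (N, dim) features of N image objects (e.g., segments / super-pixels)
        # adj: (N, N) binary adjacency between spatially neighbouring objects
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        neigh_mean = (adj @ x) / deg                          # SAGE-style mean aggregation
        h = F.relu(self.self_proj(x) + self.neigh_proj(neigh_mean))
        # Self-attention treats all objects as one sequence and reweights them globally.
        attn_out, _ = self.attn(h.unsqueeze(0), h.unsqueeze(0), h.unsqueeze(0))
        # ResNet-style residual connection back to the layer input.
        return self.norm(x + attn_out.squeeze(0))


if __name__ == "__main__":
    # Toy example: 6 image objects with 32-dimensional features and a random symmetric adjacency.
    x = torch.randn(6, 32)
    adj = (torch.rand(6, 6) > 0.5).float()
    adj = ((adj + adj.T) > 0).float()
    adj.fill_diagonal_(0)
    out = SAGEResidualAttentionLayer(32)(x, adj)
    print(out.shape)  # torch.Size([6, 32])
```

In an object-based pipeline of this kind, the node features would typically come from per-segment statistics or CNN embeddings of each segmented region, and several such layers would be stacked before a final per-node classifier assigns a vegetation class to each object.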
Journal introduction:
The ISPRS Journal of Photogrammetry and Remote Sensing (P&RS) serves as the official journal of the International Society for Photogrammetry and Remote Sensing (ISPRS). It acts as a platform for scientists and professionals worldwide who work in the various disciplines that use photogrammetry, remote sensing, spatial information systems, computer vision, and related fields. The journal aims to facilitate communication and dissemination of advancements in these disciplines, while also serving as a comprehensive reference source and archive.
P&RS endeavors to publish high-quality, peer-reviewed research papers that are preferably original and have not been published before. These papers can cover scientific/research, technological development, or application/practical aspects. Additionally, the journal welcomes papers that are based on presentations from ISPRS meetings, as long as they are considered significant contributions to the aforementioned fields.
In particular, P&RS encourages the submission of papers that are of broad scientific interest, showcase innovative applications (especially in emerging fields), have an interdisciplinary focus, discuss topics that have received limited attention in P&RS or related journals, or explore new directions in scientific or professional realms. It is preferred that theoretical papers include practical applications, while papers focusing on systems and applications should include a theoretical background.