SAGRNet: A novel object-based graph convolutional neural network for diverse vegetation cover classification in remotely-sensed imagery

IF 10.6 | CAS Tier 1 (Earth Science) | JCR Q1 (GEOGRAPHY, PHYSICAL)
Baoling Gui , Lydia Sam , Anshuman Bhardwaj , Diego Soto Gómez , Félix González Peñaloza , Manfred F. Buchroithner , David R. Green
{"title":"SAGRNet:一种新的基于目标的图像卷积神经网络,用于遥感影像中不同植被覆盖的分类","authors":"Baoling Gui ,&nbsp;Lydia Sam ,&nbsp;Anshuman Bhardwaj ,&nbsp;Diego Soto Gómez ,&nbsp;Félix González Peñaloza ,&nbsp;Manfred F. Buchroithner ,&nbsp;David R. Green","doi":"10.1016/j.isprsjprs.2025.06.004","DOIUrl":null,"url":null,"abstract":"<div><div>Growing global population, changing climate, and shrinking land resources demand for quicker, efficient, and more accurate methods of mapping and monitoring vegetation cover in remote sensing datasets. Many deep learning-based methods have been widely applied for semantic segmentation tasks in remote sensing images of vegetated environments. However, most existing models are pixel-based, which introduces challenges such as high time consumption, cumbersome implementation, and limited scalability. This paper presents the SAGRNet model, a Graph Convolutional Neural Network (GCN) that incorporates sampling aggregation and self-attention mechanisms, while leveraging the ResNet residual network structure. A key innovation of SAGRNet is its ability to fuse features extracted through diverse algorithms, enabling comprehensive representation and enhanced classification performance. The SAGRNet model demonstrates superior performance over leading pixel-based neural networks, such as U-Net++ and DeepLabV3, in terms of both time efficiency and accuracy in vegetation image classification tasks. We achieved an overall mapping accuracy of ∼90 % using SAGRNet, compared to ∼87% and ∼85% from U-Net++ and DeepLabV3, respectively. Additionally, it offers more convenience in data processing. Furthermore, the model significantly outperforms cutting-edge graph-based convolutional networks, including Graph U-Net (achieved overall accuracy ∼65%) and TGNN (achieved overall accuracy ∼75%), showcasing exceptional generalization capability and classification accuracy. This paper provides a comprehensive analysis of the various processing aspects of this object-based GCN for vegetation mapping and emphasizes its significant potential for practical use. The model’s versatility can also be expanded to other image processing domains, offering unprecedented possibilities of information extraction from satellite imagery. The code for practical application experiment is available at <span><span>https://github.com/baoling123/GCN-remote-sensing-classification.git</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50269,"journal":{"name":"ISPRS Journal of Photogrammetry and Remote Sensing","volume":"227 ","pages":"Pages 99-124"},"PeriodicalIF":10.6000,"publicationDate":"2025-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"SAGRNet: A novel object-based graph convolutional neural network for diverse vegetation cover classification in remotely-sensed imagery\",\"authors\":\"Baoling Gui ,&nbsp;Lydia Sam ,&nbsp;Anshuman Bhardwaj ,&nbsp;Diego Soto Gómez ,&nbsp;Félix González Peñaloza ,&nbsp;Manfred F. Buchroithner ,&nbsp;David R. Green\",\"doi\":\"10.1016/j.isprsjprs.2025.06.004\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Growing global population, changing climate, and shrinking land resources demand for quicker, efficient, and more accurate methods of mapping and monitoring vegetation cover in remote sensing datasets. Many deep learning-based methods have been widely applied for semantic segmentation tasks in remote sensing images of vegetated environments. 
However, most existing models are pixel-based, which introduces challenges such as high time consumption, cumbersome implementation, and limited scalability. This paper presents the SAGRNet model, a Graph Convolutional Neural Network (GCN) that incorporates sampling aggregation and self-attention mechanisms, while leveraging the ResNet residual network structure. A key innovation of SAGRNet is its ability to fuse features extracted through diverse algorithms, enabling comprehensive representation and enhanced classification performance. The SAGRNet model demonstrates superior performance over leading pixel-based neural networks, such as U-Net++ and DeepLabV3, in terms of both time efficiency and accuracy in vegetation image classification tasks. We achieved an overall mapping accuracy of ∼90 % using SAGRNet, compared to ∼87% and ∼85% from U-Net++ and DeepLabV3, respectively. Additionally, it offers more convenience in data processing. Furthermore, the model significantly outperforms cutting-edge graph-based convolutional networks, including Graph U-Net (achieved overall accuracy ∼65%) and TGNN (achieved overall accuracy ∼75%), showcasing exceptional generalization capability and classification accuracy. This paper provides a comprehensive analysis of the various processing aspects of this object-based GCN for vegetation mapping and emphasizes its significant potential for practical use. The model’s versatility can also be expanded to other image processing domains, offering unprecedented possibilities of information extraction from satellite imagery. The code for practical application experiment is available at <span><span>https://github.com/baoling123/GCN-remote-sensing-classification.git</span><svg><path></path></svg></span>.</div></div>\",\"PeriodicalId\":50269,\"journal\":{\"name\":\"ISPRS Journal of Photogrammetry and Remote Sensing\",\"volume\":\"227 \",\"pages\":\"Pages 99-124\"},\"PeriodicalIF\":10.6000,\"publicationDate\":\"2025-06-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ISPRS Journal of Photogrammetry and Remote Sensing\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0924271625002308\",\"RegionNum\":1,\"RegionCategory\":\"地球科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"GEOGRAPHY, PHYSICAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ISPRS Journal of Photogrammetry and Remote Sensing","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0924271625002308","RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"GEOGRAPHY, PHYSICAL","Score":null,"Total":0}
Citations: 0

Abstract

Growing global population, changing climate, and shrinking land resources demand quicker, more efficient, and more accurate methods of mapping and monitoring vegetation cover in remote sensing datasets. Many deep learning-based methods have been widely applied to semantic segmentation tasks in remote sensing images of vegetated environments. However, most existing models are pixel-based, which introduces challenges such as high time consumption, cumbersome implementation, and limited scalability. This paper presents the SAGRNet model, a Graph Convolutional Neural Network (GCN) that incorporates sampling aggregation and self-attention mechanisms while leveraging the ResNet residual network structure. A key innovation of SAGRNet is its ability to fuse features extracted through diverse algorithms, enabling comprehensive representation and enhanced classification performance. The SAGRNet model demonstrates superior performance over leading pixel-based neural networks, such as U-Net++ and DeepLabV3, in terms of both time efficiency and accuracy in vegetation image classification tasks. We achieved an overall mapping accuracy of ∼90% using SAGRNet, compared to ∼87% and ∼85% from U-Net++ and DeepLabV3, respectively. It also offers more convenient data processing. Furthermore, the model significantly outperforms cutting-edge graph-based convolutional networks, including Graph U-Net (overall accuracy ∼65%) and TGNN (overall accuracy ∼75%), showcasing exceptional generalization capability and classification accuracy. This paper provides a comprehensive analysis of the various processing aspects of this object-based GCN for vegetation mapping and emphasizes its significant potential for practical use. The model's versatility can also be extended to other image processing domains, offering unprecedented possibilities for information extraction from satellite imagery. The code for the practical application experiments is available at https://github.com/baoling123/GCN-remote-sensing-classification.git.
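As a rough illustration of the architecture the abstract describes, below is a minimal sketch (in plain PyTorch) of one block that combines GraphSAGE-style mean aggregation over neighbouring image objects, self-attention across node features, and a ResNet-style residual connection. The class name SAGResidualBlock, the shapes, and the design choices are assumptions made for illustration only; they are not taken from the paper or the linked repository.

# Hypothetical sketch of one SAGRNet-style block: SAGE-style neighbour
# aggregation + self-attention over node features + a ResNet-style residual.
# Names and shapes are illustrative assumptions, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SAGResidualBlock(nn.Module):
    """Mean-aggregate neighbours (SAGE-style), re-weight nodes with
    scaled dot-product self-attention, then add a residual connection."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.neigh_lin = nn.Linear(dim, dim)  # transforms aggregated neighbours
        self.self_lin = nn.Linear(dim, dim)   # transforms the node itself
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x:   (N, dim) node features, one node per image object/segment
        # adj: (N, N) dense adjacency, 1 where two objects are neighbours
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        neigh = adj @ x / deg                  # mean of neighbour features
        h = F.relu(self.self_lin(x) + self.neigh_lin(neigh))
        # Self-attention across all nodes, treated as one "sequence"
        a, _ = self.attn(h.unsqueeze(0), h.unsqueeze(0), h.unsqueeze(0))
        return self.norm(x + a.squeeze(0))     # ResNet-style residual


if __name__ == "__main__":
    N, dim = 6, 32                     # e.g. 6 segments, 32-D fused features
    x = torch.randn(N, dim)
    adj = (torch.rand(N, N) > 0.5).float()
    adj = ((adj + adj.T) > 0).float()  # symmetric neighbourhood graph
    out = SAGResidualBlock(dim)(x, adj)
    print(out.shape)                   # torch.Size([6, 32])

Operating on a graph of image objects rather than on every pixel is what gives this family of models its speed advantage: N is the number of segments, typically orders of magnitude smaller than the pixel count.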
Source journal
ISPRS Journal of Photogrammetry and Remote Sensing
Category: Engineering & Technology – Imaging Science & Photographic Technology
CiteScore: 21.00
Self-citation rate: 6.30%
Articles per year: 273
Review time: 40 days
Journal introduction: The ISPRS Journal of Photogrammetry and Remote Sensing (P&RS) serves as the official journal of the International Society for Photogrammetry and Remote Sensing (ISPRS). It acts as a platform for scientists and professionals worldwide who are involved in various disciplines that utilize photogrammetry, remote sensing, spatial information systems, computer vision, and related fields. The journal aims to facilitate communication and dissemination of advancements in these disciplines, while also acting as a comprehensive source of reference and archive. P&RS endeavors to publish high-quality, peer-reviewed research papers that are preferably original and have not been published before. These papers can cover scientific/research, technological development, or application/practical aspects. Additionally, the journal welcomes papers based on presentations from ISPRS meetings, as long as they are considered significant contributions to the aforementioned fields. In particular, P&RS encourages the submission of papers that are of broad scientific interest, showcase innovative applications (especially in emerging fields), have an interdisciplinary focus, discuss topics that have received limited attention in P&RS or related journals, or explore new directions in scientific or professional realms. It is preferred that theoretical papers include practical applications, while papers focusing on systems and applications should include a theoretical background.