Latest Articles from ISPRS Journal of Photogrammetry and Remote Sensing

Word2Scene: Efficient remote sensing image scene generation with only one word via hybrid intelligence and low-rank representation
IF 10.6 · CAS Q1 (Earth Science)
ISPRS Journal of Photogrammetry and Remote Sensing. Pub Date: 2024-11-06. DOI: 10.1016/j.isprsjprs.2024.11.002
Jiaxin Ren, Wanzeng Liu, Jun Chen, Shunxi Yin, Yuan Tao
Abstract: Current remote sensing scene generation methods face numerous challenges, such as capturing the complex interrelations among geographical features and integrating implicit expert knowledge into generative models. To address these challenges, this paper proposes Word2Scene, an efficient method for generating remote sensing scenes via hybrid intelligence and low-rank representation that can generate complex scenes from just one word. The approach incorporates geographic expert knowledge to optimize the remote sensing scene description, enhancing the accuracy and interpretability of the input descriptions. By employing a diffusion model based on hybrid intelligence and low-rank representation techniques, the method endows the diffusion model with the capability to understand remote sensing scene concepts and significantly improves its training efficiency. The study also introduces the geographic scene holistic perceptual similarity (GSHPS), a novel evaluation metric that holistically assesses the performance of generative models from a global perspective. Experimental results demonstrate that the proposed method outperforms existing state-of-the-art models in remote sensing scene generation quality, efficiency, and realism. Compared with the original diffusion models, LPIPS decreased by 18.52% (from 0.81 to 0.66) and GSHPS increased by 28.57% (from 0.70 to 0.90), validating the effectiveness and advancement of the method. Moreover, Word2Scene can generate remote sensing scenes not present in the training set, showcasing strong zero-shot capabilities. This provides a new perspective and solution for remote sensing image scene generation, with the potential to advance remote sensing, geographic information systems, and related fields. Our code will be released at https://github.com/jaycecd/Word2Scene.
Volume 218, Pages 231-257.
Citations: 0
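The relative improvements quoted above follow directly from the reported metric values. A quick arithmetic check, assuming the percentages are relative changes with respect to the original diffusion model baseline:

```python
def relative_change(baseline: float, new: float) -> float:
    """Relative change with respect to the baseline value, in percent."""
    return (new - baseline) / baseline * 100.0

# LPIPS (lower is better): 0.81 -> 0.66
print(f"LPIPS: {relative_change(0.81, 0.66):+.2f}%")  # -18.52%
# GSHPS (higher is better): 0.70 -> 0.90
print(f"GSHPS: {relative_change(0.70, 0.90):+.2f}%")  # +28.57%
```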
A_OPTRAM-ET: An automatic optical trapezoid model for evapotranspiration estimation and its global-scale assessments
IF 10.6 · CAS Q1 (Earth Science)
ISPRS Journal of Photogrammetry and Remote Sensing. Pub Date: 2024-11-02. DOI: 10.1016/j.isprsjprs.2024.10.019
Zhaoyuan Yao, Wangyipu Li, Yaokui Cui
Abstract: Remotely sensed evapotranspiration (ET) at a high spatial resolution (30 m) has wide-ranging applications in agriculture, hydrology, and meteorology. The original optical trapezoid model for ET (O_OPTRAM-ET), which does not require thermal remote sensing, shows potential for high-resolution ET estimation. However, the non-automated O_OPTRAM-ET depends heavily on visual interpretation or optimization against in situ measurements, limiting its practical utility. In this study, a SpatioTemporal Aggregated Regression algorithm (STAR) is proposed to develop an automatic optical trapezoid model for ET (A_OPTRAM-ET), implemented within the Google Earth Engine environment and evaluated globally at both moderate and high resolutions (500 m and 30 m, respectively). By integrating an aggregation algorithm across multiple dimensions to determine its parameters automatically, A_OPTRAM-ET operates efficiently without requiring ground-based measurements as input. Evaluation against in situ ET demonstrates that the proposed model estimates ET effectively across various land cover types and satellite platforms. The overall root mean square error (RMSE), mean absolute error (MAE), and correlation coefficient (CC) against in situ latent heat flux (LE) measurements are 35.5 W·m⁻², 26.3 W·m⁻², and 0.78 for Sentinel-2; 41.3 W·m⁻², 28.9 W·m⁻², and 0.73 for Landsat-8; 40.0 W·m⁻², 28.7 W·m⁻², and 0.70 for Landsat-5; and 36.1 W·m⁻², 25.8 W·m⁻², and 0.72 for MOD09GA. The A_OPTRAM-ET model exhibits stable accuracy over long time periods (approximately 10 years). Compared with other published ET datasets, ET estimated by A_OPTRAM-ET performs better over cropland and shrubland. Additionally, global ET derived from the model shows trends consistent with other published ET datasets over 2001-2020 while offering enhanced spatial detail. The proposed A_OPTRAM-ET model therefore provides an efficient, high-resolution, and globally applicable method for ET estimation, with significant practical value for agriculture, hydrology, and related fields.
Volume 218, Pages 181-197.
Citations: 0
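RMSE, MAE, and CC are standard agreement statistics; a minimal sketch of how such scores could be computed from paired predicted and in situ latent heat flux samples (generic definitions, not code from the paper):

```python
import numpy as np

def evaluate_et(pred, obs):
    """RMSE, MAE, and Pearson correlation between predicted and in situ LE (W·m⁻²)."""
    pred, obs = np.asarray(pred, dtype=float), np.asarray(obs, dtype=float)
    rmse = np.sqrt(np.mean((pred - obs) ** 2))
    mae = np.mean(np.abs(pred - obs))
    cc = np.corrcoef(pred, obs)[0, 1]
    return rmse, mae, cc
```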
Atmospheric correction of geostationary ocean color imager data over turbid coastal waters under high solar zenith angles
IF 10.6 · CAS Q1 (Earth Science)
ISPRS Journal of Photogrammetry and Remote Sensing. Pub Date: 2024-10-31. DOI: 10.1016/j.isprsjprs.2024.10.018
Hao Li, Xianqiang He, Palanisamy Shanmugam, Yan Bai, Xuchen Jin, Zhihong Wang, Yifan Zhang, Difeng wang, Fang Gong, Min Zhao
Abstract: Traditional atmospheric correction models employing near-infrared iterative schemes inaccurately estimate aerosol radiance at high solar zenith angles (SZAs), leading to a substantial loss of valid products from dawn or dusk observations by geostationary satellite ocean color sensors. To overcome this issue, we previously developed an atmospheric correction model for open ocean waters observed by the first geostationary ocean color imager (GOCI) under high SZAs. That model was constructed from a dataset of stable open ocean waters, which makes it less suitable for coastal waters. In this study, we developed a specialized atmospheric correction model (GOCI-II-NN) capable of accurately retrieving the water-leaving radiance from GOCI-II observations over coastal oceans under high SZAs. We used multiple GOCI-II observations throughout the day to develop selection criteria for extracting stable coastal water pixels and created a new training dataset for the proposed model. The performance of GOCI-II-NN was validated against in-situ data collected from coastal/shelf waters. The results showed an average percentage difference (APD) of less than 23% across the entire visible spectrum. In terms of both valid data and retrieval accuracy, GOCI-II-NN was superior to the traditional near-infrared and ultraviolet atmospheric correction models for retrieving ocean color products used in applications such as tracking and monitoring algal blooms, sediment dynamics, and water quality.
Volume 218, Pages 166-180.
Citations: 0
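The abstract does not spell out the APD formula. A common definition in ocean color validation is the mean absolute relative difference between retrieved and in-situ values; the sketch below assumes that form, and the authors' exact formulation may differ:

```python
import numpy as np

def average_percentage_difference(retrieved, in_situ):
    """Assumed APD: mean absolute relative difference, in percent."""
    retrieved = np.asarray(retrieved, dtype=float)
    in_situ = np.asarray(in_situ, dtype=float)
    return 100.0 * np.mean(np.abs(retrieved - in_situ) / np.abs(in_situ))
```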
Cascaded recurrent networks with masked representation learning for stereo matching of high-resolution satellite images
IF 10.6 · CAS Q1 (Earth Science)
ISPRS Journal of Photogrammetry and Remote Sensing. Pub Date: 2024-10-30. DOI: 10.1016/j.isprsjprs.2024.10.017
Zhibo Rao, Xing Li, Bangshu Xiong, Yuchao Dai, Zhelun Shen, Hangbiao Li, Yue Lou
Abstract: Stereo matching of satellite images is challenging due to missing data, domain differences, and imperfect rectification. To address these issues, we propose cascaded recurrent networks with masked representation learning for high-resolution satellite stereo images, consisting of a feature extraction module and a cascaded recurrent module. First, we develop the correlation computation in the cascaded recurrent module to search for results on the epipolar line and in adjacent areas, mitigating the impact of erroneous rectification. Second, we use a training strategy based on masked representation learning to handle missing data and differing domain attributes, enhancing data utilization and feature representation. The training strategy comprises two stages: (1) an image reconstruction stage, in which masked left or right images are fed to the feature extraction module and a reconstruction decoder reconstructs the original images as a pre-training process, yielding a pre-trained feature extraction module; and (2) a stereo matching stage, in which the parameters of the feature extraction module are frozen and stereo image pairs are used to train the cascaded recurrent module to obtain the final model. We implement the cascaded recurrent networks with two well-known feature extraction modules (CNN-based Restormer or Transformer-based ViT) to demonstrate the effectiveness of our approach. Experimental results on the US3D and WHU-Stereo datasets show that: (1) our training strategy can be applied to CNN-based and Transformer-based methods on remote sensing datasets with limited data to improve performance, outperforming the second-best network, HMSM-Net, by approximately 0.54% and 1.95% in the percentage of the 3-px error on the WHU-Stereo and US3D datasets, respectively; (2) our correlation scheme can handle imperfect rectification, reducing the error rate by 8.9% in the random shift test; and (3) our method predicts high-quality disparity maps and achieves state-of-the-art performance, reducing the percentage of the 3-px error to 12.87% and 7.01% on the WHU-Stereo and US3D datasets, respectively. The source code is released at https://github.com/Archaic-Atom/MaskCRNet.
Volume 218, Pages 151-165.
Citations: 0
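The 3-px error used above counts the share of pixels whose predicted disparity deviates from the ground truth by more than three pixels. A minimal sketch under that assumption (some benchmarks additionally apply a relative-error criterion, omitted here):

```python
import numpy as np

def three_px_error(disp_pred, disp_gt, valid_mask=None, thresh=3.0):
    """Percentage of valid pixels with |predicted - ground truth| disparity > thresh."""
    err = np.abs(disp_pred - disp_gt)
    if valid_mask is None:
        valid_mask = np.isfinite(disp_gt)  # assume NaN marks invalid ground truth
    bad = (err > thresh) & valid_mask
    return 100.0 * bad.sum() / valid_mask.sum()
```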
Bridging real and simulated data for cross-spatial-resolution vegetation segmentation with application to rice crops
IF 10.6 · CAS Q1 (Earth Science)
ISPRS Journal of Photogrammetry and Remote Sensing. Pub Date: 2024-10-28. DOI: 10.1016/j.isprsjprs.2024.10.007
Yangmingrui Gao, Linyuan Li, Marie Weiss, Wei Guo, Ming Shi, Hao Lu, Ruibo Jiang, Yanfeng Ding, Tejasri Nampally, P. Rajalakshmi, Frédéric Baret, Shouyang Liu
Abstract: Accurate image segmentation is essential for image-based estimation of vegetation canopy traits, as it minimizes background interference. However, existing segmentation models often lack the generalization ability to handle both ground-based and aerial images across a wide range of spatial resolutions. To address this limitation, a cross-spatial-resolution image segmentation model for rice crops was trained by integrating in-situ and in silico multi-resolution images. We collected more than 3,000 RGB images (the real set) covering 17 different resolutions and reflecting diverse canopy structures, illumination conditions, and backgrounds in rice fields, with vegetation pixels annotated manually. Using the previously developed Digital Plant Phenotyping Platform, we created a simulated dataset (the sim set) of 10,000 RGB images with resolutions ranging from 0.5 to 3.5 mm/pixel, accompanied by corresponding mask labels. By employing a domain adaptation technique, the simulated images were transformed into visually realistic images while preserving the original labels, creating a simulated-to-realistic dataset (the sim2real set). Building upon a SegFormer deep learning model, we demonstrated that training with multi-resolution samples leads to more generalized segmentation results than single-resolution training on the real dataset. Our exploration of integration strategies revealed that a training set of 9,600 sim2real images combined with only 60 real images achieved the same segmentation accuracy as 2,400 real images (IoU = 0.819, F1 = 0.901). Moreover, combining 2,400 real images with 1,200 sim2real images produced the best-performing model, effective against six challenging situations such as specular reflections and shadows. Compared with models trained on single-resolution samples and with an established model (VegANN), our model effectively improved the estimation of both green fraction and green area index across spatial resolutions. The strategy of bridging real and simulated data for cross-resolution deep learning is expected to be applicable to other crops. The best trained model is available at https://github.com/PheniX-Lab/crossGSD-seg.
Volume 218, Pages 133-150.
Citations: 0
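The reported IoU and F1 follow from pixel-wise confusion counts on binary vegetation masks; a minimal sketch using the standard definitions:

```python
import numpy as np

def binary_iou_f1(pred, gt):
    """IoU and F1 for binary masks (True/1 = vegetation pixel)."""
    pred, gt = np.asarray(pred, dtype=bool), np.asarray(gt, dtype=bool)
    tp = np.sum(pred & gt)    # vegetation correctly labeled
    fp = np.sum(pred & ~gt)   # background labeled as vegetation
    fn = np.sum(~pred & gt)   # vegetation missed
    iou = tp / (tp + fp + fn)
    f1 = 2 * tp / (2 * tp + fp + fn)
    return iou, f1
```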
Cross-modal change detection using historical land use maps and current remote sensing images
IF 10.6 · CAS Q1 (Earth Science)
ISPRS Journal of Photogrammetry and Remote Sensing. Pub Date: 2024-10-24. DOI: 10.1016/j.isprsjprs.2024.10.010
Kai Deng, Xiangyun Hu, Zhili Zhang, Bo Su, Cunjun Feng, Yuanzeng Zhan, Xingkun Wang, Yansong Duan
Abstract: Using bi-temporal remote sensing imagery to detect land change during urban expansion has become common practice. However, when updating land resource surveys, directly detecting changes between historical land use maps (referred to as "maps" in this paper) and current remote sensing images (referred to as "images" in this paper) is more direct and efficient than relying on bi-temporal image comparisons. The difficulty stems from the substantial modality differences between maps and images, presenting a complex challenge for effective change detection. To address this issue, we propose a novel deep learning model named the cross-modal patch alignment network (CMPANet), which bridges the gap between modalities for cross-modal change detection (CMCD) between maps and images. The proposed model uses a vision transformer (ViT-B/16) fine-tuned on 1.8 million remote sensing images as the encoder for images and trainable ViTs as the encoder for maps. To bridge the distribution differences between these encoders, we introduce a feature domain adaptation image-map alignment module (IMAM) to transfer and share pretrained model knowledge rapidly. Additionally, we incorporate a cross-modal and cross-channel attention (CCMAT) module and a transformer block attention module to facilitate the interaction and fusion of features across modalities. The fused features are then processed through a UperNet-based feature pyramid to generate pixel-level change maps. On the newly created EVLab-CMCD dataset and the publicly available HRSCD dataset, CMPANet achieves state-of-the-art results and offers a novel technical approach for CMCD between maps and images.
Volume 218, Pages 114-132.
Citations: 0
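As a structural illustration of the two-branch idea described above, the toy sketch below encodes a map and an image separately and fuses them with cross-attention before predicting a pixel-level change map. It is a schematic stand-in under stated assumptions, not CMPANet itself: the class and layer choices are hypothetical, and the actual model uses pretrained ViT encoders, the IMAM alignment module, CCMAT attention, and a UperNet feature pyramid.

```python
import torch
import torch.nn as nn

class ToyCrossModalChangeNet(nn.Module):
    """Toy two-branch encoder with cross-attention fusion for map/image inputs."""
    def __init__(self, dim=64, patch=16):
        super().__init__()
        self.map_enc = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)  # patchify map
        self.img_enc = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)  # patchify image
        self.cross_attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.head = nn.Sequential(
            nn.Conv2d(dim, 1, kernel_size=1),
            nn.Upsample(scale_factor=patch, mode="bilinear", align_corners=False),
        )

    def forward(self, map_t1, img_t2):
        fm = self.map_enc(map_t1)                # (B, C, h, w) map tokens
        fi = self.img_enc(img_t2)                # (B, C, h, w) image tokens
        b, c, h, w = fm.shape
        q = fi.flatten(2).transpose(1, 2)        # image tokens attend to ...
        kv = fm.flatten(2).transpose(1, 2)       # ... map tokens
        fused, _ = self.cross_attn(q, kv, kv)
        fused = fused.transpose(1, 2).reshape(b, c, h, w)
        return self.head(fused)                  # pixel-level change logits

# net = ToyCrossModalChangeNet()
# logits = net(torch.rand(1, 3, 256, 256), torch.rand(1, 3, 256, 256))  # (1, 1, 256, 256)
```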
Nighttime fog and low stratus detection under multi-scene and all lunar phase conditions using S-NPP/VIIRS visible and infrared channels
IF 10.6 · CAS Q1 (Earth Science)
ISPRS Journal of Photogrammetry and Remote Sensing. Pub Date: 2024-10-21. DOI: 10.1016/j.isprsjprs.2024.10.014
Jun Jiang, Zhigang Yao, Yang Liu
Abstract: A satellite remote sensing scheme is proposed to detect nighttime fog and low stratus (FLS) by combining visible, mid-infrared, and far-infrared channels. The S-NPP/VIIRS dataset and ERA5 reanalysis data are primarily used, and a comprehensive threshold system is established through statistical analysis, simulation calculations, and sensitivity experiments. In total, 98 cases of nighttime FLS occurring from 2012 to 2020 over China, the United States, and surrounding areas were selected for algorithm validation, using global surface meteorological observations as comparison data. Preliminary results from four typical cases indicate that the algorithm is temporally applicable to all lunar phases from new moon to full moon at night and spatially applicable to various types of underlying surfaces. The accuracy evaluation of 14,378 satellite-ground matched samples further shows that the algorithm has high overall accuracy, with a POD of 0.86, CSI of 0.81, and FAR of 0.06. Accuracy is highest in winter, lowest in summer, and intermediate in spring and autumn. Missed detections and false alarms occur predominantly at cloud edges, which may be caused by parallax and the time difference between satellite and ground observations.
Volume 218, Pages 102-113.
Citations: 0
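POD, CSI, and FAR are standard contingency-table scores for dichotomous detection. Given counts of hits, misses, and false alarms from the satellite-ground matching, they can be computed as follows (standard verification definitions, consistent with the numbers quoted above):

```python
def detection_scores(hits: int, misses: int, false_alarms: int):
    """Contingency-table verification scores for a yes/no detection algorithm."""
    pod = hits / (hits + misses)                 # probability of detection
    far = false_alarms / (hits + false_alarms)   # false alarm ratio
    csi = hits / (hits + misses + false_alarms)  # critical success index
    return pod, far, csi
```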
PRISMethaNet: A novel deep learning model for landfill methane detection using PRISMA satellite data
IF 10.6 · CAS Q1 (Earth Science)
ISPRS Journal of Photogrammetry and Remote Sensing. Pub Date: 2024-10-20. DOI: 10.1016/j.isprsjprs.2024.10.003
Mohammad Marjani, Fariba Mohammadimanesh, Daniel J. Varon, Ali Radman, Masoud Mahdianpari
Abstract: Methane (CH₄) is one of the most significant greenhouse gases, responsible for about one-third of climate warming since preindustrial times and originating from various sources. Landfills account for a large share of CH₄ emissions, and population growth can further boost them. It is therefore vital to automate CH₄ monitoring over landfills. This study proposes a convolutional neural network (CNN) with an Atrous Spatial Pyramid Pooling (ASPP) mechanism, called PRISMethaNet, to automate CH₄ detection using PRISMA satellite data in the 400-2500 nm spectral range. A total of 41 PRISMA images from 17 landfill sites in India, Nigeria, Mexico, Pakistan, Iran, and other regions were used as study areas. The PRISMethaNet model was trained on augmented data, with plume masks obtained from the matched filter (MF) algorithm. The proposed model successfully detected plumes with an overall accuracy (OA), F1-score (F1), precision, and recall of 0.99, 0.96, 0.93, and 0.99, respectively, and quantification uncertainties ranging from 11% to 58%. Unboxing the ASPP module with the Gradient-weighted Class Activation Mapping (Grad-CAM) algorithm demonstrated a strong relationship between larger dilation rates (DRs) and CH₄ plume detectability. Importantly, the results highlight that plume masks obtained by PRISMethaNet yield more accurate CH₄ quantification rates than the statistical methods used in previous studies: the mean square error (MSE) for PRISMethaNet was approximately 1,102 kg/h, versus around 1,974 kg/h for the commonly used statistical method.
Volume 218, Pages 802-818.
Citations: 0
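Plume masks for training come from the matched filter (MF) algorithm. Below is a generic sketch of the classical per-pixel matched filter often used for CH₄ enhancement mapping; the background statistics and target signature used by the authors may differ:

```python
import numpy as np

def matched_filter(cube, target):
    """Classical matched filter score per pixel.

    cube:   (n_pixels, n_bands) radiance array (scene flattened to rows)
    target: (n_bands,) CH₄ absorption signature
    Returns a per-pixel enrichment score; thresholding yields a plume mask.
    """
    mu = cube.mean(axis=0)  # background mean spectrum
    cov_inv = np.linalg.pinv(np.cov(cube, rowvar=False))
    d = cube - mu
    return (d @ cov_inv @ target) / (target @ cov_inv @ target)
```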
Improving crop type mapping by integrating LSTM with temporal random masking and pixel-set spatial information
IF 10.6 · CAS Q1 (Earth Science)
ISPRS Journal of Photogrammetry and Remote Sensing. Pub Date: 2024-10-19. DOI: 10.1016/j.isprsjprs.2024.10.013
Xinyu Zhang, Zhiwen Cai, Qiong Hu, Jingya Yang, Haodong Wei, Liangzhi You, Baodong Xu
Abstract: Accurate and timely crop type classification is essential for effective agricultural monitoring, cropland management, and yield estimation. Unfortunately, the complicated temporal patterns of different crops, combined with gaps and noise in satellite observations caused by clouds and rain, restrict crop classification accuracy, particularly in early seasons with limited temporal information. Although deep learning-based methods have shown great potential for improving crop type mapping, insufficient and noisy training data may lead them to overlook generalizable features and deliver inferior classification performance. To address these challenges, we developed a Mask Pixel-set SpatioTemporal Integration Network (Mask-PSTIN), which integrates a temporal random masking technique and a novel PSTIN model. Temporal random masking augments the training data by selectively removing certain temporal information to improve data variability, forcing the model to learn more generalized features. The PSTIN, comprising a pixel-set aggregation encoder (PSAE) and a long short-term memory (LSTM) module, effectively captures comprehensive spatiotemporal features from time-series satellite images. The effectiveness of Mask-PSTIN was evaluated across three regions with different landscapes and cropping systems. Results demonstrated that adding the PSAE to the PSTIN significantly improved crop classification accuracy compared to a basic LSTM, with average overall accuracy (OA) increasing from 80.9% to 83.9% and mean F1-score (mF1) rising from 0.781 to 0.818. Incorporating temporal random masking in training led to further improvements, increasing average OA and mF1 to 87.4% and 0.865, respectively. Mask-PSTIN significantly outperformed traditional machine learning and deep learning methods (RF, SVM, Transformer, and CNN-LSTM) in crop type mapping across all three regions. Furthermore, Mask-PSTIN enabled earlier and more accurate crop type identification before or during the developing stages compared with machine learning models. Feature importance analysis based on the gradient backpropagation algorithm revealed that Mask-PSTIN effectively leverages multi-temporal features, exhibiting broader attention across time steps and capturing critical crop phenological characteristics. These results suggest that Mask-PSTIN is a promising approach for improving both post-harvest and early-season crop type classification, with potential applications in agricultural management and monitoring.
Volume 218, Pages 87-101.
Citations: 0
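Temporal random masking, as described above, amounts to zeroing out a random subset of time steps in each training series; a minimal sketch (the mask ratio is a hypothetical hyperparameter, not the paper's setting):

```python
import numpy as np

def temporal_random_mask(series, mask_ratio=0.3, rng=None):
    """Drop a random fraction of time steps from a (T, n_features) time series."""
    rng = np.random.default_rng() if rng is None else rng
    masked = series.copy()
    t = masked.shape[0]
    drop = rng.choice(t, size=int(t * mask_ratio), replace=False)
    masked[drop] = 0.0  # removed observations, as if obscured by clouds
    return masked
```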
Generalized spatio-temporal-spectral integrated fusion for soil moisture downscaling
IF 10.6 · CAS Q1 (Earth Science)
ISPRS Journal of Photogrammetry and Remote Sensing. Pub Date: 2024-10-19. DOI: 10.1016/j.isprsjprs.2024.10.012
Menghui Jiang, Huanfeng Shen, Jie Li, Liangpei Zhang
Abstract: Soil moisture (SM) is one of the key land surface parameters, but the coarse spatial resolution of passive microwave SM products constrains precise monitoring of surface changes. Existing SM downscaling methods typically either utilize spatio-temporal information or leverage auxiliary parameters, without fully mining the complementary information between the two. In this paper, a generalized spatio-temporal-spectral integrated fusion-based downscaling method is proposed to fully exploit the complementary features between multi-source auxiliary parameters and multi-temporal SM data. Specifically, we define the spectral characteristic of geographic objects as an assemblage of diverse attribute characteristics at specific spatio-temporal locations and scales. On this basis, SM-related auxiliary parameter data can be treated as generalized spectral characteristics of SM, and a generalized spatio-temporal-spectral integrated fusion framework is proposed to integrate the spatio-temporal features of the SM products with the generalized spectral features of the auxiliary parameters, generating fine-spatial-resolution SM data of high quality. In addition, considering the high heterogeneity of multi-source data, the framework is based on a spatio-temporal constrained cycle generative adversarial network (STC-CycleGAN). The proposed STC-CycleGAN comprises a forward integrated fusion stage and a backward spatio-temporal constraint stage, between which spatio-temporal cycle-consistent constraints are formed. Extensive experiments were conducted on Soil Moisture Active Passive (SMAP) SM products. Qualitative, quantitative, and in-situ site verification results demonstrate the capability of the proposed method to mine the complementary information of multi-source data and achieve high-accuracy downscaling of global daily SM data from 36 km to 9 km.
Volume 218, Pages 70-86.
Citations: 0
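The forward fusion stage and backward constraint stage form a cycle, so training can penalize the mismatch after mapping coarse SM to fine resolution and back. A toy sketch of such a cycle-consistency objective follows; the function and network names are illustrative, not the authors' API, and the adversarial terms of the full STC-CycleGAN are omitted:

```python
import torch.nn.functional as F

def cycle_consistency_loss(coarse_sm, downscale_net, upscale_net):
    """Toy forward/backward cycle term: coarse -> fine -> coarse should be stable."""
    fake_fine = downscale_net(coarse_sm)       # forward integrated fusion stage
    recon_coarse = upscale_net(fake_fine)      # backward constraint stage
    return F.l1_loss(recon_coarse, coarse_sm)  # spatio-temporal cycle consistency
```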