ISPRS Journal of Photogrammetry and Remote Sensing: Latest Articles

PSO-based fine polarimetric decomposition for ship scattering characterization
IF 10.6 | Q1 | Earth Science
ISPRS Journal of Photogrammetry and Remote Sensing Pub Date: 2025-02-01 DOI: 10.1016/j.isprsjprs.2024.11.015
Junpeng Wang, Sinong Quan, Shiqi Xing, Yongzhen Li, Hao Wu, Weize Meng

Due to inappropriate estimation and inadequate awareness of scattering from complex substructures within ships, a reasonable, reliable, and complete interpretation tool for characterizing ship scattering in polarimetric synthetic aperture radar (PolSAR) is still lacking. In this paper, a fine polarimetric decomposition with explicit physical meaning is proposed to reveal and characterize local-structure-related scattering behaviors on ships. To this end, a nine-component decomposition scheme is first established by incorporating rotated dihedral and planar resonator scattering models, which makes full use of polarimetric information and comprehensively considers the complex structural scattering of ships. To estimate the scattering components reasonably, three practical scattering dominance principles and an explicit objective function are proposed, and a particle swarm optimization (PSO)-based model inversion strategy is subsequently presented. This not only overcomes the underdetermined problem but also mitigates scattering-mechanism ambiguity by circumventing a constrained estimation order. Finally, a ship indicator is derived by linearly combining the output scattering contributions, which, together with the proposed decomposition, constitutes a complete ship scattering interpretation approach. Experiments on real PolSAR datasets demonstrate that the proposed method adequately and objectively describes the scatterers on ships, providing an effective approach to ship scattering characterization. Moreover, a quantitative analysis of the scattering components verifies the feasibility of the fine polarimetric decomposition in further applications.
Volume 220, Pages 18-31.
Citations: 0
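The PSO-based inversion step described above amounts to minimizing an objective function over the parameters of the scattering models. Below is a minimal, generic PSO minimizer in Python, an illustrative sketch of the optimizer family only, not the paper's nine-component objective or its scattering-dominance constraints; all hyperparameters are conventional defaults.

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer: minimize f over box bounds."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T               # bounds: list of (low, high) per dimension
    dim = len(bounds)
    x = rng.uniform(lo, hi, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                          # particle velocities
    pbest = x.copy()                              # personal bests
    pbest_val = np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_val)].copy()        # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(f, 1, x)
        better = val < pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, pbest_val.min()

# Toy objective: a quadratic bowl whose minimum the swarm should recover.
best, val = pso_minimize(lambda p: ((p - np.array([1.0, -2.0]))**2).sum(),
                         [(-5, 5), (-5, 5)])
```

In the paper's setting, `f` would be the explicit objective over the nine scattering-component contributions, and the box bounds would encode the physical ranges of the model parameters.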
FO-Net: An advanced deep learning network for individual tree identification using UAV high-resolution images
IF 10.6 | Q1 | Earth Science
ISPRS Journal of Photogrammetry and Remote Sensing Pub Date: 2025-02-01 DOI: 10.1016/j.isprsjprs.2024.12.020
Jian Zeng, Xin Shen, Kai Zhou, Lin Cao

The identification of individual trees can reveal competitive and symbiotic relationships among trees within forest stands, which is fundamental to understanding biodiversity and forest ecosystems. Highly precise identification of individual trees can significantly improve the efficiency of forest resource inventory and is valuable for biomass measurement and forest carbon storage assessment. In previous deep learning approaches to individual tree identification, feature extraction usually struggles to adapt to variation in tree crown architecture, and the loss of feature information during multi-scale fusion is also a marked challenge when extracting trees from remote sensing images. Based on a one-stage deep learning network structure, this study improves and optimizes the three stages of feature extraction, feature fusion, and feature identification, and constructs a novel feature-oriented individual tree identification network (FO-Net) suitable for UAV high-resolution images. First, an adaptive feature extraction algorithm based on variable-position drift convolution is proposed, which improves feature extraction for individual trees with various crown sizes and shapes in UAV images. Second, to enhance the network's ability to fuse multi-scale forest features, a feature fusion algorithm based on the "gather-and-distribute" mechanism is proposed for the feature pyramid network, realizing lossless cross-layer transmission of feature map information. Finally, in the identification stage, a unified self-attention identification head is introduced to enhance FO-Net's ability to perceive trees with small crown diameters. FO-Net achieved the best performance in quantitative experiments on self-constructed datasets, with mAP50, F1-score, Precision, and Recall of 90.7%, 0.85, 85.8%, and 82.8%, respectively, a relatively high accuracy for individual tree identification compared to traditional deep learning methods. The proposed feature extraction and fusion algorithms improved the accuracy of individual tree identification by 1.1% and 2.7%, respectively. Qualitative experiments based on Grad-CAM heat maps also demonstrate that FO-Net focuses more on the contours of individual trees in high-resolution images and reduces the influence of background factors during feature extraction and identification. FO-Net improves the accuracy of individual tree identification in UAV high-resolution images without significantly increasing network parameters, providing a reliable method to support various tasks in fine-scale precision forestry.
Volume 220, Pages 323-338.
Citations: 0
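Detection metrics of the kind reported above (Precision, Recall, F1) come from matching predicted tree crowns to reference crowns. A small sketch of the common recipe, greedy one-to-one matching at an IoU threshold, follows; the boxes are hypothetical and this is not FO-Net's evaluation code.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def detection_scores(preds, gts, iou_thr=0.5):
    """Greedy one-to-one matching; returns (precision, recall, F1)."""
    matched, tp = set(), 0
    for p in preds:
        best_j, best_iou = -1, iou_thr
        for j, g in enumerate(gts):
            if j in matched:
                continue
            v = iou(p, g)
            if v >= best_iou:
                best_j, best_iou = j, v
        if best_j >= 0:
            matched.add(best_j)
            tp += 1
    fp, fn = len(preds) - tp, len(gts) - tp
    prec = tp / (tp + fp) if preds else 0.0
    rec = tp / (tp + fn) if gts else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1

# One true positive (IoU ~0.68), one false positive, one missed crown.
p, r, f = detection_scores(preds=[(0, 0, 10, 10), (20, 20, 30, 30)],
                           gts=[(1, 1, 11, 11), (50, 50, 60, 60)])
```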
A deep data fusion-based reconstruction of water index time series for intermittent rivers and ephemeral streams monitoring
IF 10.6 | Q1 | Earth Science
ISPRS Journal of Photogrammetry and Remote Sensing Pub Date: 2025-02-01 DOI: 10.1016/j.isprsjprs.2024.12.015
Junyuan Fei, Xuan Zhang, Chong Li, Fanghua Hao, Yahui Guo, Yongshuo Fu

Intermittent Rivers and Ephemeral Streams (IRES) are the major sources of flowing water on Earth, yet their dynamics are challenging for optical and radar satellites to monitor due to heavy cloud cover and narrow water surfaces. Significant backscattering-mechanism changes and image mismatch further hinder the joint use of optical and SAR images in IRES monitoring. Here, a Deep data fusion-based Reconstruction of the widely accepted Modified Normalized Difference Water Index (MNDWI) time series is conducted for IRES Monitoring (DRIM). The study utilizes three categories of explanatory variables: cross-orbit Sentinel-1 SAR for continuous IRES observation, anchor data for implicit co-registration, and auxiliary data reflecting IRES dynamics. A tightly coupled CNN-RNN architecture is designed to achieve pixel-level SAR-to-optical reconstruction under significant backscattering-mechanism changes. The 10 m MNDWI time series with a 12-day interval is effectively regressed (R² > 0.80) on the experimental catchment. Comparison with the RF, RNN, and CNN methods affirms the advantage of the tightly coupled CNN-RNN system in SAR-to-optical regression, with R² increasing by at least 0.68. The ablation test highlights the contributions of Sentinel-1 to precise MNDWI time-series reconstruction, and of the anchor and auxiliary data to effective multi-source data fusion. The reconstructions closely match observations of IRES with river widths ranging from 2 m to 300 m. Furthermore, the DRIM method shows excellent applicability (average R² of 0.77) to IRES under polar, temperate, tropical, and arid climates. In conclusion, the proposed method is powerful for reconstructing MNDWI time series of sub-pixel- to multi-pixel-scale IRES under backscattering-mechanism change and image mismatch. The reconstructed MNDWI time series are essential for exploring the hydrological processes of IRES dynamics and optimizing water resource management at the basin scale.
Volume 220, Pages 339-353.
Citations: 0
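The MNDWI that DRIM reconstructs is a standard water index (Xu, 2006): MNDWI = (Green − SWIR1) / (Green + SWIR1), which for Sentinel-2 uses bands B3 (green) and B11 (SWIR1). A minimal implementation on reflectance arrays; the sample reflectance values are invented for illustration.

```python
import numpy as np

def mndwi(green, swir1, eps=1e-12):
    """Modified Normalized Difference Water Index (Xu, 2006):
    MNDWI = (Green - SWIR1) / (Green + SWIR1).
    Water is bright in green and dark in SWIR, so MNDWI > 0 over water."""
    green = np.asarray(green, dtype=float)
    swir1 = np.asarray(swir1, dtype=float)
    return (green - swir1) / (green + swir1 + eps)

# First pixel water-like, second vegetation-like (toy reflectances).
idx = mndwi([0.10, 0.06], [0.02, 0.25])
```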
Corrigendum to "Comparison of detectability of ship wake components between C-Band and X-Band synthetic aperture radar sensors operating under different slant ranges" [ISPRS J. Photogramm. Remote Sens. 196 (2023) 306-324]
IF 10.6 | Q1 | Earth Science
ISPRS Journal of Photogrammetry and Remote Sensing Pub Date: 2025-02-01 DOI: 10.1016/j.isprsjprs.2025.01.026
Björn Tings, Andrey Pleskachevsky, Stefan Wiehle
Volume 220, Page 740.
Citations: 0
Target-aware attentional network for rare class segmentation in large-scale LiDAR point clouds
IF 10.6 | Q1 | Earth Science
ISPRS Journal of Photogrammetry and Remote Sensing Pub Date: 2025-02-01 DOI: 10.1016/j.isprsjprs.2024.11.012
Xinlong Zhang, Dong Lin, Uwe Soergel

Semantic interpretation of 3D scenes poses a formidable challenge in point cloud processing and is a requisite undertaking across various fields of application involving point clouds. Although a number of point cloud segmentation methods have achieved leading performance, 3D rare-class segmentation remains a challenge owing to the imbalanced distribution of fine-grained classes and the complexity of large scenes. In this paper, we present the target-aware attentional network (TaaNet), a novel mask-constrained attention framework addressing 3D semantic segmentation of imbalanced classes in large-scale point clouds. Adapting the self-attention mechanism, a hierarchical aggregation strategy is first applied to enhance the learning of point-wise features across scales, leveraging both global and local perspectives to guarantee the presence of fine-grained patterns in highly complex scenes. Subsequently, rare-target masks are imposed on the hierarchical features by a contextual module. Specifically, a target-aware aggregator is proposed to boost discriminative features of rare classes; it constrains hierarchical features with learnable adaptive weights and simultaneously embeds confidence constraints of rare classes. Furthermore, a target pseudo-labeling strategy based on strong contour cues of rare classes is designed, which effectively delivers instance-level supervisory signals restricted to rare targets only. We conducted thorough experiments on four multi-platform LiDAR benchmarks (airborne, mobile, and terrestrial platforms) to assess the performance of our framework. Results demonstrate that, compared to other commonly used advanced segmentation methods, our method obtains not only high segmentation accuracy but also remarkable F1-scores on rare classes. In a submission to the official ranking page of the Hessigheim 3D benchmark, our approach achieves a state-of-the-art mean F1-score of 83.84% and an outstanding overall accuracy (OA) of 90.45%. In particular, the F1-scores of the rare classes vehicle and chimney notably exceed the average of other published methods by wide margins of 32.00% and 32.46%, respectively. Additionally, extensive experimental analysis on benchmarks collected from multiple platforms (Paris-Lille-3D, Semantic3D, and WHU-Urban3D) validates the robustness and effectiveness of the proposed method.
Volume 220, Pages 32-50.
Citations: 0
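Rare-class performance of the kind reported above is typically summarized by per-class F1-scores computed from a multi-class confusion matrix. A short sketch of that computation; the toy matrix below is invented (class 0 common, class 2 rare) and is unrelated to the benchmark results.

```python
import numpy as np

def per_class_f1(conf):
    """Per-class F1 from a confusion matrix conf[true, pred]:
    F1_c = 2*TP_c / (2*TP_c + FP_c + FN_c)."""
    conf = np.asarray(conf, dtype=float)
    tp = np.diag(conf)
    fp = conf.sum(axis=0) - tp   # predicted as c but actually another class
    fn = conf.sum(axis=1) - tp   # actually c but predicted as another class
    denom = 2 * tp + fp + fn
    return np.where(denom > 0, 2 * tp / denom, 0.0)

# Imbalanced toy case: 100 samples of class 0, 50 of class 1, 10 of class 2.
conf = np.array([[90,  5, 5],
                 [ 4, 45, 1],
                 [ 3,  1, 6]])
f1 = per_class_f1(conf)
```

Per-class F1 is the right lens here because overall accuracy is dominated by the common classes and can look high even when the rare class is mostly missed.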
National scale sub-meter mangrove mapping using an augmented border training sample method
IF 10.6 | Q1 | Earth Science
ISPRS Journal of Photogrammetry and Remote Sensing Pub Date: 2025-02-01 DOI: 10.1016/j.isprsjprs.2024.12.009
Jinyan Tian, Le Wang, Chunyuan Diao, Yameng Zhang, Mingming Jia, Lin Zhu, Meng Xu, Xiaojuan Li, Huili Gong

This study presents China's first national-scale sub-meter mangrove map, addressing the need for high-resolution mapping to accurately delineate mangrove boundaries and identify fragmented patches. To overcome the limitations of current 10 m resolution products, we developed a novel Semi-automatic Sub-meter Mapping Method (SSMM). The SSMM enhances the spectral separability of mangroves from other land covers by selecting nine critical features from both Sentinel-2 and Google Earth imagery. We also developed an innovative automated sample collection method to ensure ample and precise training samples, increasing sample density in areas susceptible to misclassification and reducing it in uniform regions. This method surpasses traditional uniform sampling in representing the national-scale study area. The classification is performed using a random forest classifier and is manually refined, culminating in the pioneering Large-scale Sub-meter Mangrove Map (LSMM).

Our study showcases the LSMM's superior performance over the established High-resolution Global Mangrove Forest (HGMF) map. The LSMM demonstrates enhanced classification accuracy, improved spatial delineation, and more precise area calculations, along with a robust framework for spatial analysis. Notably, compared to the HGMF, the LSMM achieves a 22.0% increase in overall accuracy and a 0.27 improvement in F1-score. In terms of mangrove coverage within China, the LSMM estimates a reduction of 4,345 ha (15.4%), from 32,598 ha in the HGMF to 28,253 ha. This reduction is further underscored by a 61.7% discrepancy in spatial distribution relative to the HGMF, indicative of both commission and omission errors in the 10 m HGMF. Additionally, the LSMM identifies a fivefold increase in the number of mangrove patches, totaling 40,035 compared to the HGMF's 7,784. These findings underscore the substantial improvements offered by sub-meter-resolution products over 10 m products. The LSMM and its automated mapping methodology establish new benchmarks for comprehensive, long-term mangrove mapping at sub-meter scales, as well as for detailed mapping of extensive land cover types. Our study is expected to catalyze a shift toward large-scale, high-resolution mangrove mapping.
Volume 220, Pages 156-171.
Citations: 0
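The border-weighted sampling idea, drawing a larger share of training pixels near class boundaries where misclassification is most likely and fewer from uniform regions, can be caricatured as follows. This is a loose analogy, not the SSMM's actual sample collection method; the border-band definition, the 50/50 split, and the toy mask are all invented for illustration.

```python
import numpy as np

def sample_with_border_boost(mask, n_total=200, border_frac=0.5, radius=1, seed=0):
    """Draw border_frac of the training pixels from a thin border band of a
    binary class mask (pixels whose neighborhood mixes both classes), and
    the rest from everywhere else."""
    rng = np.random.default_rng(seed)
    m = mask.astype(bool)
    # Border band: pixels whose (2r+1)x(2r+1) neighborhood contains both classes.
    pad = np.pad(m, radius, mode="edge")
    win_min = np.ones_like(m)
    win_max = np.zeros_like(m)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            sl = pad[radius + dy: radius + dy + m.shape[0],
                     radius + dx: radius + dx + m.shape[1]]
            win_min &= sl
            win_max |= sl
    border = win_max & ~win_min

    def draw(region, k):
        ys, xs = np.nonzero(region)
        pick = rng.choice(len(ys), size=min(k, len(ys)), replace=False)
        return list(zip(ys[pick], xs[pick]))

    n_border = int(n_total * border_frac)
    return draw(border, n_border) + draw(~border, n_total - n_border)

# Toy scene: a 10x10 mangrove patch inside a 20x20 tile.
mask = np.zeros((20, 20), dtype=bool)
mask[5:15, 5:15] = True
samples = sample_with_border_boost(mask, n_total=50)
```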
LU5M812TGT: An AI-powered global database of impact craters ≥0.4 km on the Moon
IF 10.6 | Q1 | Earth Science
ISPRS Journal of Photogrammetry and Remote Sensing Pub Date: 2025-02-01 DOI: 10.1016/j.isprsjprs.2024.11.010
Riccardo La Grassa, Elena Martellato, Gabriele Cremonese, Cristina Re, Adriano Tullo, Silvia Bertoli

We release a new global catalog of impact craters on the Moon containing about 5 million craters. The catalog was derived using a deep learning model based on increasing spatial image resolution, allowing crater detection down to sizes as small as 0.4 km. This database therefore includes ~69.3% craters with diameters below 1 km; ~28.7% of the catalog contains craters mainly in the 1-5 km diameter range, and the remaining ≲1.9% has been identified between 5 km and 100 km in diameter. The accuracy of this new crater database was tested against previous well-known global crater catalogs. We found a similar crater size-frequency distribution for craters ≥1 km, validating the crater identification method applied in this work. The addition of craters as small as half a kilometer is new with respect to other published global catalogs, allowing a finer exploitation of the lunar surface at a global scale. The LU5M812TGT catalog is available at https://zenodo.org/records/13990480.
Volume 220, Pages 75-84.
Citations: 0
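The size-frequency comparison mentioned above is conventionally done on cumulative crater counts per unit area, N(≥D), over logarithmic diameter bins. A sketch of that bookkeeping with a toy power-law population; the Pareto parameters and survey area are invented and unrelated to the LU5M812TGT data.

```python
import numpy as np

def size_frequency(diam_km, area_km2, dmin=0.4, dmax=100.0, bins_per_decade=4):
    """Cumulative size-frequency distribution: N(>= D) per km^2 at
    logarithmically spaced diameter thresholds."""
    d = np.asarray(diam_km, dtype=float)
    edges = 10 ** np.arange(np.log10(dmin), np.log10(dmax) + 1e-9,
                            1.0 / bins_per_decade)
    n_cum = np.array([(d >= e).sum() for e in edges]) / area_km2
    return edges, n_cum

# Toy population: many small craters, few large ones (Pareto tail).
rng = np.random.default_rng(1)
diams = 0.4 * rng.pareto(2.0, size=5000) + 0.4
edges, ncum = size_frequency(diams, area_km2=1e6)
```

Plotting `ncum` against `edges` on log-log axes gives the familiar crater SFD curve against which catalogs are compared.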
A full time series imagery and full cycle monitoring (FTSI-FCM) algorithm for tracking rubber plantation dynamics in the Vietnam from 1986 to 2022
IF 10.6 | Q1 | Earth Science
ISPRS Journal of Photogrammetry and Remote Sensing Pub Date: 2025-02-01 DOI: 10.1016/j.isprsjprs.2024.12.018
Bangqian Chen, Jinwei Dong, Tran Thi Thu Hien, Tin Yun, Weili Kou, Zhixiang Wu, Chuan Yang, Guizhen Wang, Hongyan Lai, Ruijin Liu, Feng An

Accurate mapping of rubber plantations in Southeast Asia is critical for sustainable plantation management and for assessing ecological and environmental impacts. Despite extensive research on rubber plantation mapping, studies have largely been confined to provincial scales, and the few country-scale assessments show significant disagreement in both spatial distribution and area estimates. These discrepancies stem primarily from persistent cloud cover in tropical regions and from the limited temporal resolution of datasets that inadequately capture the full phenological cycles of rubber trees. To address these issues, we propose the Full Time Series Imagery and Full-Cycle Monitoring (FTSI-FCM) algorithm for mapping the spatial distribution and establishment year of rubber plantations in Vietnam, a country that has experienced significant rubber expansion over recent decades. The FTSI-FCM algorithm initially employs the LandTrendr approach, an established forest disturbance detection algorithm, to identify land use changes during the plantation establishment phase; a spatiotemporal correction scheme then refines the establishment years and maturity phases of the plantations. Subsequently, the algorithm identifies rubber plantations with a random forest classifier by integrating features from three temporal phases: canopy transitions from rubber seedlings to mature plantations, phenological changes during the mature stage, and phenological-spectral characteristics during the mapping year. This approach leverages an extensive time series of Landsat images dating back to the late 1980s, complemented by Sentinel-2 images since 2015; for the mapping year, these data are further enhanced by PALSAR-2 L-band synthetic aperture radar (SAR) and very high-resolution Planet optical imagery. Applied in Vietnam, a leading rubber producer with complex cultivation conditions, the FTSI-FCM algorithm yielded highly reliable maps of rubber distribution (overall accuracy, OA = 93.75%, F1-score = 0.93) and establishment years (R² = 0.99, RMSE = 0.25 years) for 2022 (referred to as FTSI-FCM_2022). These results outperform previous mappings such as WangR_2021 (OA = 75.00%, F1-score = 0.71) in both spatial distribution and area estimates. The FTSI-FCM_2022 map revealed a total rubber plantation area of 754,482 ha, closely matching the reported statistic of 727,900 ha and showing strong correlation with provincial statistics (R² = 0.99). Spatial analysis indicated that over 90% of rubber plantations are located within 15°N latitude, below 600 m elevation, on slopes under 15°, and were established after 2000. Notably, there has been no significant expansion of rubber plantations into higher elevations or steeper slopes since the 1990s, suggesting the effectiveness of sustainable rubber cultivation management practices in Vietnam. The FTSI-FCM algorithm demonstrates substantial potential for m…
Volume 220, Pages 377-394.
Citations: 0
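Estimating an establishment year from a disturbance in an index time series can be caricatured as locating the largest year-over-year drop of a vegetation index; LandTrendr's full trajectory segmentation is far more elaborate than this. A deliberately simplified sketch, with an invented NDVI-like trajectory in which forest is cleared and rubber is planted in 2009.

```python
import numpy as np

def establishment_year(years, index, min_drop=0.1):
    """Crude stand-replacement detector: return the year following the
    single largest year-over-year drop of the index, if it exceeds
    min_drop; otherwise None (no disturbance found)."""
    idx = np.asarray(index, dtype=float)
    drops = idx[:-1] - idx[1:]          # positive where the index falls
    k = int(np.argmax(drops))
    return years[k + 1] if drops[k] >= min_drop else None

# Toy trajectory: stable forest, clearing in 2009, then regrowth.
years = list(range(2005, 2015))
ndvi  = [0.80, 0.80, 0.79, 0.81, 0.25, 0.30, 0.40, 0.55, 0.70, 0.78]
est = establishment_year(years, ndvi)
```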
CARE-SST: context-aware reconstruction diffusion model for sea surface temperature
IF 10.6 | Q1 | Earth Science
ISPRS Journal of Photogrammetry and Remote Sensing Pub Date: 2025-02-01 DOI: 10.1016/j.isprsjprs.2025.01.001
Minki Choo, Sihun Jung, Jungho Im, Daehyeon Han

Weather and climate forecasts use the distribution of sea surface temperature (SST) as a critical factor in atmosphere-ocean interactions. High-spatial-resolution SST data are typically produced using infrared sensors, which use channels with wavelengths from approximately 3.7 to 12 µm. However, SST retrieved from infrared satellite sensors often contains noise and missing areas due to cloud contamination, so reconstructing SST under clouds requires accounting for observational noise. In this study, we present the context-aware reconstruction diffusion model for SST (CARE-SST), a denoising diffusion probabilistic model designed to reconstruct SST in cloud-covered regions and reduce observational noise. By conditioning the reverse diffusion process, CARE-SST can integrate historical satellite data and reduce observational noise. The methodology uses Visible Infrared Imaging Radiometer Suite (VIIRS) data with the optimum interpolation SST product as background. To evaluate the method, reconstruction with a fixed mask was performed on 10,578 VIIRS SST scenes from 2022. The mean absolute error and root mean squared error (RMSE) were 0.23 °C and 0.31 °C, respectively, while small-scale features were preserved. In real cloud-reconstruction scenarios, the proposed model incorporated historical VIIRS SST data and buoy observations, enhancing the quality of the reconstructed SST, particularly in regions with large cloud cover. Relative to other analysis products, such as the operational SST and sea ice analysis and the multi-scale ultra-high-resolution SST, our model showed a more refined gradient field without blurring. In a power spectral density comparison for the Agulhas Current (35-45°S, 10-40°E), only CARE-SST resolved features within 10 km, highlighting superior feature resolution compared to other SST analysis products. Validation against buoy data indicated high performance, with RMSEs (and MAEs) of 0.22 °C (0.16 °C) for the Gulf Stream, 0.27 °C (0.20 °C) for the Kuroshio Current, 0.34 °C (0.25 °C) for the Agulhas Current, and 0.25 °C (0.10 °C) for the Mediterranean Sea. The model also maintained robust spatial patterns in global mapping results for selected dates. This study highlights the potential of deep learning models for generating high-resolution, gap-filled SST data at global scale, offering a foundation for improving deep learning-based data assimilation.
Volume 220, Pages 454-472.
Citations: 0
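CARE-SST builds on the denoising diffusion probabilistic model (DDPM), whose forward (noising) process has a closed form: x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps with eps ~ N(0, I). A minimal NumPy sketch of that forward step follows; the linear beta schedule is a common DDPM default, the toy SST field is invented, and the paper's context conditioning applies to the reverse process, which is not shown here.

```python
import numpy as np

def forward_diffuse(x0, t, betas, rng):
    """DDPM forward process in closed form:
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * noise,
    where alpha_bar_t = prod_{s<=t} (1 - beta_s)."""
    alpha_bar = np.cumprod(1.0 - betas)[t]
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * noise

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 1000)        # common linear schedule
sst = 20.0 + rng.standard_normal((8, 8))     # toy SST field in deg C
x_late = forward_diffuse(sst, t=999, betas=betas, rng=rng)
```

At the final step the field is essentially pure Gaussian noise; reconstruction then runs the learned reverse chain from such noise, conditioned (in CARE-SST) on historical observations and the background analysis.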
Intelligent segmentation of wildfire region and interpretation of fire front in visible light images from the viewpoint of an unmanned aerial vehicle (UAV)
IF 10.6 | Q1 | Earth Science
ISPRS Journal of Photogrammetry and Remote Sensing Pub Date: 2025-02-01 DOI: 10.1016/j.isprsjprs.2024.12.025
Jianwei Li, Jiali Wan, Long Sun, Tongxin Hu, Xingdong Li, Huiru Zheng

Accelerating global warming and intensifying climate anomalies have led to a rise in the frequency of wildfires. However, most existing wildfire research focuses on wildfire identification and prediction, with limited attention to the intelligent interpretation of detailed information, such as the fire front within a fire region. To address this gap, advance the analysis of fire fronts in UAV-captured visible images, and facilitate future calculation of fire behavior parameters, a new method is proposed for intelligent segmentation of wildfire regions and interpretation of the fire front. The method comprises three key steps: deep learning-based fire segmentation, boundary tracking of wildfire regions, and fire front interpretation. Specifically, the YOLOv7-tiny model is enhanced with a Convolutional Block Attention Module (CBAM), which integrates channel and spatial attention mechanisms to sharpen the model's focus on wildfire regions and boost segmentation precision. Experimental results show that the proposed method improved detection and segmentation precision by 3.8% and 3.6%, respectively, compared to existing approaches, and achieved an average segmentation frame rate of 64.72 Hz, well above the 30 Hz threshold required for real-time fire segmentation. The method's effectiveness in boundary tracking and fire front interpretation was validated using real fire imagery from an outdoor grassland fire experiment. Additional tests conducted in southern New South Wales, Australia, confirmed the robustness of the method in accurately interpreting the fire front. The findings have potential applications in dynamic data-driven forest fire spread modeling and fire digital twinning. The code and dataset are publicly available at https://github.com/makemoneyokk/fire-segmentation-interpretation.git.
Volume 220, Pages 473-489.
Citations: 0
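The boundary-tracking step that follows segmentation can be illustrated by extracting the fire pixels that touch non-fire pixels. This is a minimal stand-in for the paper's boundary tracking, not its algorithm; the 4-connectivity rule and the toy mask are assumptions made for the sketch.

```python
import numpy as np

def fire_boundary(mask):
    """Boundary of a binary fire mask: fire pixels with at least one
    4-connected non-fire neighbor."""
    m = np.asarray(mask, dtype=bool)
    pad = np.pad(m, 1, constant_values=False)
    # A pixel is interior if all four 4-neighbors are also fire.
    interior = (pad[:-2, 1:-1] & pad[2:, 1:-1] &
                pad[1:-1, :-2] & pad[1:-1, 2:])
    return m & ~interior

# Toy 3x3 fire blob: its boundary is the 8-pixel ring around the center.
mask = np.zeros((7, 7), dtype=bool)
mask[2:5, 2:5] = True
edge = fire_boundary(mask)
```

Interpreting the fire front would then amount to selecting the advancing portion of this boundary, e.g. from the displacement of the boundary between consecutive frames.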