Title: LU5M812TGT: An AI-Powered global database of impact craters ≥0.4 km on the Moon
Authors: Riccardo La Grassa, Elena Martellato, Gabriele Cremonese, Cristina Re, Adriano Tullo, Silvia Bertoli
DOI: 10.1016/j.isprsjprs.2024.11.010
ISPRS Journal of Photogrammetry and Remote Sensing, Volume 220, February 2025, Pages 75-84

Abstract: We release a new global catalog of impact craters on the Moon containing about 5 million craters. The catalog was derived using a deep learning model that increases the spatial resolution of the input imagery, allowing crater detection down to diameters as small as 0.4 km. As a result, roughly 69.3% of the catalog consists of craters with diameters below 1 km, about 28.7% consists mainly of craters in the 1-5 km diameter range, and the remaining ≲1.9% lies between 5 km and 100 km in diameter. The accuracy of this new crater database was tested against previous well-known global crater catalogs. We found a similar crater size-frequency distribution for craters ≥1 km, providing a validation of the crater identification method applied in this work. The addition of craters as small as half a kilometer is new with respect to other published global catalogs, allowing a finer exploitation of the lunar surface at a global scale. The LU5M812TGT catalog is available at https://zenodo.org/records/13990480.
Title: A full time series imagery and full cycle monitoring (FTSI-FCM) algorithm for tracking rubber plantation dynamics in Vietnam from 1986 to 2022
Authors: Bangqian Chen, Jinwei Dong, Tran Thi Thu Hien, Tin Yun, Weili Kou, Zhixiang Wu, Chuan Yang, Guizhen Wang, Hongyan Lai, Ruijin Liu, Feng An
DOI: 10.1016/j.isprsjprs.2024.12.018
ISPRS Journal of Photogrammetry and Remote Sensing, Volume 220, February 2025, Pages 377-394

Abstract: Accurate mapping of rubber plantations in Southeast Asia is critical for sustainable plantation management and for assessing ecological and environmental impacts. Despite extensive research on rubber plantation mapping, studies have largely been confined to provincial scales, and the few country-scale assessments disagree significantly in both spatial distribution and area estimates. These discrepancies stem primarily from persistent cloud cover in tropical regions and from the limited temporal resolution of datasets that inadequately capture the full phenological cycle of rubber trees. To address these issues, we propose the Full Time Series Satellite Imagery and Full-Cycle Monitoring (FTSI-FCM) algorithm for mapping the spatial distribution and establishment year of rubber plantations in Vietnam, a country that has experienced significant rubber expansion over the past decades. The FTSI-FCM algorithm first employs the LandTrendr approach, an established forest disturbance detection algorithm, to identify land use changes during the plantation establishment phase; a spatiotemporal correction scheme then refines the establishment years and maturity phases of the plantations. Subsequently, the algorithm identifies rubber plantations with a random forest classifier that integrates features from three temporal phases: canopy transitions from rubber seedlings to mature plantations, phenological changes during the mature stage, and phenological-spectral characteristics during the mapping year. This approach leverages an extensive time series of Landsat images dating back to the late 1980s, complemented by Sentinel-2 images since 2015. For the mapping year, these data are further enhanced with PALSAR-2 L-band synthetic aperture radar (SAR) and very high-resolution Planet optical imagery. When applied in Vietnam, a leading rubber producer with complex cultivation conditions, the FTSI-FCM algorithm yielded highly reliable maps of rubber distribution (overall accuracy, OA = 93.75%, F1-score = 0.93) and establishment years (R² = 0.99, RMSE = 0.25 years) for 2022 (referred to as FTSI-FCM_2022). These results outperformed previous mappings, such as WangR_2021 (OA = 75.00%, F1-score = 0.71), in both spatial distribution and area estimates. The FTSI-FCM_2022 map gave a total rubber plantation area of 754,482 ha, closely matching the reported statistic of 727,900 ha and correlating strongly with provincial statistics (R² = 0.99). Spatial analysis indicated that over 90% of rubber plantations are located within 15°N latitude, below 600 m in elevation, on slopes under 15°, and were established after 2000. Notably, there has been no significant expansion of rubber plantations into higher elevations or steeper slopes since the 1990s, suggesting the effectiveness of sustainable rubber cultivation management practices in Vietnam. The FTSI-FCM algorithm demonstrates substantial potential for m…
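The final classification stage described above, a random forest over stacked features from the three temporal phases, can be sketched as follows. The feature blocks, labels, and dimensions are placeholders; this is a minimal scikit-learn illustration of the design, not the FTSI-FCM implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score

rng = np.random.default_rng(42)

# Placeholder feature blocks for the three temporal phases named in the abstract:
# (1) seedling-to-mature canopy transition, (2) mature-stage phenology, (3) mapping-year spectra/SAR.
n = 2000
transition   = rng.normal(size=(n, 6))
phenology    = rng.normal(size=(n, 8))
mapping_year = rng.normal(size=(n, 10))
X = np.hstack([transition, phenology, mapping_year])
y = rng.integers(0, 2, size=n)            # 1 = rubber plantation, 0 = other land cover

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=300, random_state=0)
rf.fit(X_tr, y_tr)
pred = rf.predict(X_te)
print("OA:", accuracy_score(y_te, pred), "F1:", f1_score(y_te, pred))
```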
Title: CARE-SST: context-aware reconstruction diffusion model for sea surface temperature
Authors: Minki Choo, Sihun Jung, Jungho Im, Daehyeon Han
DOI: 10.1016/j.isprsjprs.2025.01.001
ISPRS Journal of Photogrammetry and Remote Sensing, Volume 220, February 2025, Pages 454-472

Abstract: Weather and climate forecasts use the distribution of sea surface temperature (SST) as a critical factor in atmosphere–ocean interactions. High spatial resolution SST data are typically produced using infrared sensors, which use channels with wavelengths ranging from approximately 3.7 to 12 µm. However, SST data retrieved from infrared sensor-based satellites often contain noise and missing areas due to cloud contamination. Therefore, when reconstructing SST under clouds, it is necessary to account for observational noise. In this study, we present the context-aware reconstruction diffusion model for SST (CARE-SST), a denoising diffusion probabilistic model designed to reconstruct SST in cloud-covered regions and reduce observational noise. By conditioning the reverse diffusion process, CARE-SST can integrate historical satellite data and reduce observational noise. The methodology uses Visible Infrared Imaging Radiometer Suite (VIIRS) data and the optimum interpolation SST product as a background. To evaluate the effectiveness of our method, a reconstruction using a fixed mask was performed with 10,578 VIIRS SST scenes from 2022. The results showed that the mean absolute error (MAE) and the root mean squared error (RMSE) were 0.23 °C and 0.31 °C, respectively, while small-scale features were preserved. In real cloud reconstruction scenarios, the proposed model incorporated historical VIIRS SST data and buoy observations, enhancing the quality of reconstructed SST data, particularly in regions with large cloud cover. Relative to other analysis products, such as the operational SST and sea ice analysis and the multi-scale ultra-high-resolution SST, our model showed a more refined gradient field without blurring effects. In the power spectral density comparison for the Agulhas Current (35–45° S and 10–40° E), only CARE-SST resolved features within 10 km, highlighting superior feature resolution compared to other SST analysis products. Validation against buoy data indicated high performance, with RMSEs (and MAEs) of 0.22 °C (0.16 °C) for the Gulf Stream, 0.27 °C (0.20 °C) for the Kuroshio Current, 0.34 °C (0.25 °C) for the Agulhas Current, and 0.25 °C (0.10 °C) for the Mediterranean Sea. Furthermore, the model maintained robust spatial patterns in global mapping results for selected dates. This study highlights the potential of deep learning models for generating high-resolution, gap-filled SST data on a global scale, offering a foundation for improving deep learning-based data assimilation.
Title: Intelligent segmentation of wildfire region and interpretation of fire front in visible light images from the viewpoint of an unmanned aerial vehicle (UAV)
Authors: Jianwei Li, Jiali Wan, Long Sun, Tongxin Hu, Xingdong Li, Huiru Zheng
DOI: 10.1016/j.isprsjprs.2024.12.025
ISPRS Journal of Photogrammetry and Remote Sensing, Volume 220, February 2025, Pages 473-489

Abstract: The acceleration of global warming and intensifying climate anomalies have led to a rise in the frequency of wildfires. However, most existing research on wildfires focuses on fire identification and prediction, with limited attention given to the intelligent interpretation of detailed information such as the fire front within the fire region. To address this gap, to advance the analysis of fire fronts in UAV-captured visible images, and to facilitate future calculation of fire behavior parameters, a new method is proposed for the intelligent segmentation of wildfire regions and interpretation of the fire front. The method comprises three key steps: deep learning-based fire segmentation, boundary tracking of wildfire regions, and fire front interpretation. Specifically, the YOLOv7-tiny model is enhanced with a Convolutional Block Attention Module (CBAM), which integrates channel and spatial attention mechanisms to improve the model's focus on wildfire regions and boost segmentation precision. Experimental results show that the proposed method improved detection and segmentation precision by 3.8% and 3.6%, respectively, compared to existing approaches, and achieved an average segmentation frame rate of 64.72 Hz, well above the 30 Hz threshold required for real-time fire segmentation. Furthermore, the method's effectiveness in boundary tracking and fire front interpretation was validated with real fire imagery from an outdoor grassland fire experiment. Additional tests conducted in southern New South Wales, Australia, confirmed the robustness of the method in accurately interpreting the fire front. The findings have potential applications in dynamic data-driven forest fire spread modeling and fire digital twinning. The code and dataset are publicly available at https://github.com/makemoneyokk/fire-segmentation-interpretation.git.
{"title":"Contribution of ECOSTRESS thermal imagery to wetland mapping: Application to heathland ecosystems","authors":"Liam Loizeau-Woollgar , Sébastien Rapinel , Julien Pellen , Bernard Clément , Laurence Hubert-Moy","doi":"10.1016/j.isprsjprs.2025.01.014","DOIUrl":"10.1016/j.isprsjprs.2025.01.014","url":null,"abstract":"<div><div>While wetlands have been extensively studied using optical and radar satellite imagery, thermal imagery has been used less often due its low spatial – temporal resolutions and challenges for emissivity estimation. Since 2018, spaceborne thermal imagery has gained interest due to the availability of ECOSTRESS data, which are acquired at 70 m spatial resolution and a 3–5 revisit time. This study aimed at comparing the contribution of ECOSTRESS time-series to wetland mapping to that of other thermal time-series (i.e., Landsat-TIRS, ASTER-TIR), Sentinel-1 SAR and Sentinel-2 optical satellite time-series, and topographical variables derived from satellite data. The study was applied to a 209 km<sup>2</sup> heathland site in north-western France that includes riverine, slope, and flat wetlands. The method used consisted of four steps: (i) four-year time-series (2019–2022) were aggregated into dense annual time-series; (ii) the temporal dimension was reduced using functional principal component analysis (FPCA); (iii) the most discriminating components of the FPCA were selected based on recursive feature elimination; and (iv) the contribution of each sensor time-series to wetland mapping was assessed based on the accuracy of a random forest model trained and tested using reference field data. The results indicated that an ECOSTRESS time-series that combined day and night acquisitions was more accurate (overall F1-score: 0.71) than Landsat-TIRS and ASTER-TIR time-series (overall F1-score: 0.40–0.62). A combination of ECOSTRESS thermal images, Sentinel-2 optical images, Sentinel-1 SAR images, and topographical variables outperformed the sensor-specific accuracies (overall F1-score: 0.87), highlighting the synergy of thermal, optical, and topographical data for wetland mapping.</div></div>","PeriodicalId":50269,"journal":{"name":"ISPRS Journal of Photogrammetry and Remote Sensing","volume":"220 ","pages":"Pages 649-660"},"PeriodicalIF":10.6,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143035305","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Learning transferable land cover semantics for open vocabulary interactions with remote sensing images
Authors: Valérie Zermatten, Javiera Castillo-Navarro, Diego Marcos, Devis Tuia
DOI: 10.1016/j.isprsjprs.2025.01.006
ISPRS Journal of Photogrammetry and Remote Sensing, Volume 220, February 2025, Pages 621-636

Abstract: Why should we confine land cover classes to rigid and arbitrary definitions? Land cover mapping is a central task in remote sensing image processing, but the rigorous class definitions can sometimes restrict the transferability of annotations between datasets. Open vocabulary recognition, i.e. using natural language to define a specific object or pattern in an image, breaks free from predefined nomenclature and offers flexible recognition of diverse categories with a more general image understanding across datasets and labels. The open vocabulary framework opens doors to search for concepts of interest, beyond individual class boundaries. In this work, we propose to use Text As supervision for COntrastive Semantic Segmentation (TACOSS), and we design an open vocabulary semantic segmentation model that extends its capacities beyond that of a traditional model for land cover mapping: in addition to visual pattern recognition, TACOSS leverages the common sense knowledge captured by language models and is capable of interpreting the image at the pixel level, attributing semantics to each pixel and removing the constraints of a fixed set of land cover labels. By learning to match visual representations with text embeddings, TACOSS can transition smoothly from one set of labels to another and enables the interaction with remote sensing images in natural language. Our approach combines a pretrained text encoder with a visual encoder and adopts supervised contrastive learning to align the visual and textual modalities. We explore several text encoders and label representation methods and compare their abilities to encode transferable land cover semantics. The model's capacity to predict a set of different land cover labels on an unseen dataset is also explored to illustrate the generalization capacities across domains of our approach. Overall, TACOSS is a general method and permits adapting between different sets of land cover labels with minimal computational overhead. Code is publicly available online.
Title: A coupled optical–radiometric modeling approach to removing reflection noise in TLS data of urban areas
Authors: Li Fang, Tianyu Li, Yanghong Lin, Shudong Zhou, Wei Yao
DOI: 10.1016/j.isprsjprs.2024.12.005
ISPRS Journal of Photogrammetry and Remote Sensing, Volume 220, February 2025, Pages 217-231

Abstract: Point clouds, a fundamental type of 3D data, play an essential role in applications such as 3D reconstruction, autonomous driving, and robotics. However, point clouds generated by measuring the time-of-flight of emitted and backscattered laser pulses in terrestrial laser scanning (TLS) frequently include false points caused by mirror-like reflective surfaces, degrading data quality and fidelity. This study introduces an algorithm to eliminate reflection noise from TLS scan data. The algorithm detects reflection planes by utilizing both geometric and physical characteristics to recognize reflection points according to optical reflection theory. Radiometric correction is applied to the raw laser intensity, after which reflective planes are extracted using a threshold. In the virtual point identification phase, these points are detected along the light propagation path, grounded in the specular reflection principle. Moreover, an improved feature descriptor, RE-LFSH, is employed to assess the similarity between two points in terms of reflection symmetry. We adapt the LFSH feature descriptor to retain reflection features, mitigating interference from symmetrical architectural structures. Incorporating the Hausdorff feature distance into the algorithm strengthens its resistance to ghosting and deformation, boosting the accuracy of virtual point detection. Additionally, to overcome the shortage of annotated datasets, we introduce 3DRN, a new benchmark dataset specifically designed for this task. Extensive experiments on the 3DRN benchmark, featuring diverse urban environments with virtual TLS reflection noise, show that our algorithm improves precision and recall for 3D points in reflective areas by 57.03% and 31.80%, respectively. Our approach improves outlier detection by 9.17% and enhances accuracy by 5.65% compared to leading methods. The 3DRN dataset is available at https://github.com/Tsuiky/3DRN.
Title: Data fidelity-oriented spatial-spectral fusion of CRISM and CTX images
Authors: Qunming Wang, Wenjing Ma, Sicong Liu, Xiaohua Tong, Peter M. Atkinson
DOI: 10.1016/j.isprsjprs.2024.12.004
ISPRS Journal of Photogrammetry and Remote Sensing, Volume 220, February 2025, Pages 172-191

Abstract: The Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) is a Mars-dedicated imaging spectrometer that captures remote sensing data with very fine spectral resolution. However, the spatial resolution of CRISM data is relatively coarse (18 m), limiting its application at regional scales. The Context Camera (CTX) is a digital camera equipped with a wide-angle lens that provides finer spatial resolution (6 m) and a larger field of view, but only a single panchromatic band. To produce CRISM hyperspectral data with finer spatial resolution (e.g., the 6 m of CTX images), this research investigated spatial-spectral fusion of 18 m CRISM images with 6 m CTX panchromatic images. To address the long-standing issue of incomplete data fidelity to the original hyperspectral data in existing spatial-spectral fusion methods, a new paradigm called Data Fidelity-oriented Spatial-Spectral Fusion (DF-SSF) was proposed. The effectiveness of DF-SSF was validated through experiments on data from six areas on Mars. The results indicate that the fusion of CRISM and CTX can effectively increase the spatial resolution of CRISM hyperspectral data. Moreover, DF-SSF increases fusion accuracy noticeably while maintaining perfect data fidelity to the original hyperspectral data. In addition, DF-SSF is theoretically applicable to any existing spatial-spectral fusion method. The 6 m CRISM hyperspectral data inherit the spectral resolution of the original 18 m data and provide richer spatial texture information on the Martian surface, with broad application potential.
{"title":"Remote sensing scene graph generation for improved retrieval based on spatial relationships","authors":"Jiayi Tang, Xiaochong Tong, Chunping Qiu, Yuekun Sun, Haoshuai Song, Yaxian Lei, Yi Lei, Congzhou Guo","doi":"10.1016/j.isprsjprs.2025.01.012","DOIUrl":"10.1016/j.isprsjprs.2025.01.012","url":null,"abstract":"<div><div>RS scene graphs represent RS scenes as graphs with objects as nodes and their spatial relationships as edges, playing a crucial role in understanding and interpreting RS scenes at a higher level. However, existing RS scene graph generation methods, relying on deep learning models, face limitations due to their dependence on extensive relationship labels, restricted generation accuracy, and limited generalizability. To address these challenges, we proposed a spatial relationship computing model based on prior geographic information knowledge for RS scene graph generation. We refer to the RS scene graph generated using our method as SG-SSR for short. Furthermore, we investigated the application of SG-SSR in RS scene retrieval, demonstrating improved retrieval accuracy for spatial relationships between entities. The experiments show that our scene graph generation method does not rely on relationship labels, and has higher generation accuracy and greater universality. Moreover, the retrieval method based on SG-SSR outperformed other retrieval methods based on image feature vectors, with a retrieval accuracy index 0.098 higher than the alternatives(RemoteCLIP(mask)). The dataset and code are available at <span><span>https://gitee.com/tangjiayitangjiayi/sg-ssr</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50269,"journal":{"name":"ISPRS Journal of Photogrammetry and Remote Sensing","volume":"220 ","pages":"Pages 741-752"},"PeriodicalIF":10.6,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143072525","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Synthesis of complex-valued InSAR data with a multi-task convolutional neural network
Authors: Philipp Sibler, Francescopaolo Sica, Michael Schmitt
DOI: 10.1016/j.isprsjprs.2024.12.007
ISPRS Journal of Photogrammetry and Remote Sensing, Volume 220, February 2025, Pages 192-206

Abstract: Simulated remote sensing images bear great potential for many applications in Earth observation. They can serve as a controlled testbed for the development of signal and image processing algorithms, or provide an impression of the potential of new sensor concepts. With the rise of deep learning, the synthesis of artificial remote sensing images by deep neural networks has become a hot research topic. While the generation of optical data is relatively straightforward, as it can rely on established models from the computer vision community, the generation of synthetic aperture radar (SAR) data has so far remained largely restricted to intensity images, since the processing of complex-valued numbers by conventional neural networks poses significant challenges. In this work, we propose to circumvent these challenges by decomposing SAR interferograms into real-valued components. These components are then synthesized simultaneously by different branches of a multi-branch encoder-decoder network architecture. Finally, the real-valued components are recombined into the complex-valued interferogram. Moreover, the effect of speckle and interferometric phase noise is replicated and applied to the synthesized interferometric data. Experimental results on both medium-resolution C-band repeat-pass SAR data and high-resolution X-band single-pass SAR data demonstrate the general feasibility of the approach.