{"title":"Two-Dimensional Phase Unwrapping for Topography Reconstruction: A Refined Two-Stage Programming Approach","authors":"Yan Yan;Hanwen Yu;Taoli Yang","doi":"10.1109/JSTARS.2024.3487920","DOIUrl":"https://doi.org/10.1109/JSTARS.2024.3487920","url":null,"abstract":"The interferometric synthetic aperture radar (InSAR) is able to reconstruct the Earth's surface topography with a meter-level accuracy when two-dimensional phase unwrapping (PU) is properly implemented. The two-stage programming approach (TSPA) can convert the ill-posed PU problem into a well-posed problem by integrating perpendicular baseline diversity in multiple (\u0000<inline-formula><tex-math>$ge$</tex-math></inline-formula>\u00002) interferograms, and is currently among the most commonly used multibaseline (MB) PU algorithms. Nevertheless, TSPA still faces two challenges in real-world applications: first, TSPA cannot ensure exceptional performance for any complex topographic scenarios, and second, the PU error of short-baseline interferometric pair tends to propagate into the PU solution of long-baseline interferometric pair, degrading height accuracy. To overcome these issues, a refined TSPA (R-TSPA) MB PU algorithm is proposed in this article. R-TSPA contains two PU procedures under the framework of TSPA, where procedure 1 unwraps the flattened interferograms with TSPA, and procedure 2 re-estimates and reunwraps the erroneous ambiguity number gradients with TSPA. It is demonstrated that R-TSPA outperforms the conventional single-baseline PU algorithms and TSPA with actual InSAR datasets in western Sichuan Province and Tibet Autonomous Region of China, revealing its potentials in accurately mapping topography and broadening application scopes of InSAR.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"17 ","pages":"20304-20314"},"PeriodicalIF":4.7,"publicationDate":"2024-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10737667","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142672014","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Effects of Climate Change and Urbanization on Spatiotemporal Variations of Lake Surface Water Temperature","authors":"Dingpu Li;Yi Luo;Kun Yang;Chunxue Shang;Senlin Zhu;Shuangyun Peng;Anlin Li;Rixiang Chen;Zongqi Peng;Xingfang Pei;Yuanyuan Yin;Qingqing Wang;Changqing Peng;Hong Wei","doi":"10.1109/JSTARS.2024.3487623","DOIUrl":"https://doi.org/10.1109/JSTARS.2024.3487623","url":null,"abstract":"Lake surface water temperature (LSWT) is a crucial ecological indicator, impacting water quality, and aquatic life. Understanding its spatiotemporal trends and driving mechanisms is fundamental for lake water environment protection and management. Previous research has been limited by low-resolution satellite data and numerical simulations, hindering in-depth understanding of LSWT. This article fills the research gap by reconstructing a high-resolution LSWT dataset spanning 2000 to 2020. Employing data fusion techniques, we combined moderate resolution imaging spectroradiometer (MODIS) and Landsat observations, achieving a spatial resolution of 30 m and a revisit cycle of eight days. Seven major lakes in Yunnan Province, China, varying in urbanization intensity, were selected to investigate the impacts and mechanisms of urbanization and climate change on LSWT. The results showed that: First, the high spatiotemporal LSWT dataset reconstructed on the ubESTARFM data fusion model outperformed the existing product datasets in terms of accuracy evaluation and spatial details. Over the past 20 years, all LSWT in the study area exhibited a warming trend in both temporal and spatial dimensions; lakes in basins with higher urbanization intensity had significantly higher warming rates than the warming rates of near-surface air temperature, and the lakes showed a global warming trend. Second, the warming trend of LSWT is not only related to lake morphology and climate change, but also closely associated with urbanization; higher spatiotemporal resolution LSWT data revealed better spatiotemporal correlations between urbanization and LSWT. Third, active ecological management and enhanced watershed vegetation coverage could effectively mitigate the rate of lake warming.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"17 ","pages":"19955-19971"},"PeriodicalIF":4.7,"publicationDate":"2024-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10737462","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142645557","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visual Rotated Position Encoding Transformer for Remote Sensing Image Captioning","authors":"Anli Liu;Lingwu Meng;Liang Xiao","doi":"10.1109/JSTARS.2024.3487846","DOIUrl":"https://doi.org/10.1109/JSTARS.2024.3487846","url":null,"abstract":"Remote sensing image captioning (RSIC) is a crucial task in interpreting remote sensing images (RSIs), as it involves describing their content using clear and precise natural language. However, the RSIC encounters difficulties due to the intricate structure and distinctive features of the images, such as the issue of rotational ambiguity. The existence of visually alike objects or areas can result in misidentification. In addition, prioritizing groups of objects with strong relational ties during the captioning process poses a significant challenge. To address these challenges, we propose the visual rotated position encoding transformer for RSIC. First of all, rotation-invariant features and global features are extracted using a multilevel feature extraction (MFE) module. To focus on closely related rotated objects, we design a visual rotated position encoding module, which is incorporated into the transformer encoder to model directional relationships between objects. To distinguish similar features and guide caption generation, we propose a feature enhancement fusion module consisting of feature enhancement and feature fusion. The feature enhancement component adopts a self-attention mechanism to construct fully connected graphs for object features. The feature fusion component integrates global features and word vectors to guide the caption generation process. In addition, we construct an RSI rotated object detection dataset RSIC-ROD and pretrain a rotated object detector. The proposed method demonstrates significant performance improvements on four datasets, showcasing enhanced capabilities in preserving descriptive details, distinguishing similar objects, and accurately capturing object relationships.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"17 ","pages":"20026-20040"},"PeriodicalIF":4.7,"publicationDate":"2024-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10737430","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142672122","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Modeling Scattering Power of Soil Particle Based on K-M Theory","authors":"Yiting Fan;Mingchang Wang;Liheng Liang;Ziwei Liu;Xue Ji;Zhiguo Meng;Yilin Bao","doi":"10.1109/JSTARS.2024.3487645","DOIUrl":"https://doi.org/10.1109/JSTARS.2024.3487645","url":null,"abstract":"Soil particle size is an important indicator in soil systems, it can provide important assistance for the agricultural work. In order to address the weakness of traditional soil particle size measuring work, which are time-consuming, labor-intensive, and have limited applicability. This study utilizes the Mie theory and the Kubelka–Munk theory as the precondition, establish an empirical formula between the scattering power and the soil particle size. The study collected surface soil samples from Nong'an, Changchun City, Jilin Province, including black soil, brown soil, sandy soil, and each saline sample, based on visible and near-infrared spectroscopy. Prepare soil samples with a particle size range of 2.5–0.15 mm through drying, grinding, and sieving operations, combining scattering power parameters in the K-M theory to construct an empirical formula for it and soil particle. After verified by comparing different empirical formulas are suitable for the measured data, assume the inverse proportion formula added correction term is the most appropriate. The conclusion is there is a strong linear relationship between the scattering power and the reciprocal of particle size. The average fitting accuracy of the 400–2400 nm wavelength band reaches 94.45%, root mean square error (\u0000<inline-formula><tex-math>$text{RMSE}$</tex-math></inline-formula>\u0000) reaches 0.0354 mm. After removing outliers, the fitting accuracy can reach up to 95.77%, \u0000<inline-formula><tex-math>$text{RMSE}$</tex-math></inline-formula>\u0000up to 0.0337 mm. Proved there is a very high analytical relationship between soil particle size and scattering power parameters in K-M theory. The empirical formula also can find supported by Mie theory and S-shape \u0000<italic>R</i>\u0000(\u0000<italic>D</i>\u0000) function, and has a high transferability from the laboratory to Landsat8 satellite board, the accuracy can reach to about 90% on SWIR band, showed good generalization ability.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"17 ","pages":"19923-19934"},"PeriodicalIF":4.7,"publicationDate":"2024-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10737659","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142645476","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Adjacent-Scale Multimodal Fusion Networks for Semantic Segmentation of Remote Sensing Data","authors":"Xianping Ma;Xichen Xu;Xiaokang Zhang;Man-On Pun","doi":"10.1109/JSTARS.2024.3486906","DOIUrl":"https://doi.org/10.1109/JSTARS.2024.3486906","url":null,"abstract":"Semantic segmentation is a fundamental task in remote sensing image analysis. The accurate delineation of objects within such imagery serves as the cornerstone for a wide range of applications. To address this issue, edge detection, cross-modal data, large intraclass variability, and limited interclass variance must be considered. Traditional convolutional-neural-network-based models are notably constrained by their local receptive fields, Nowadays, transformer-based methods show great potential to learn features globally, while they ignore positional cues easily and are still unable to cope with multimodal data. Therefore, this work proposes an adjacent-scale multimodal fusion network (ASMFNet) for semantic segmentation of remote sensing data. ASMFNet stands out not only for its innovative interaction mechanism across adjacent-scale features, effectively capturing contextual cues while maintaining low computational complexity but also for its remarkable cross-modal capability. It seamlessly integrates different modalities, enriching feature representation. Its hierarchical scale attention (HSA) module bolsters the association between ground objects and their surrounding scenes through learning discriminative features at higher level abstractions, thereby linking the broad structural information. Adaptive modality fusion module is equipped by HSA with valuable insights into the interrelationships between cross-model data, and it assigns spatial weights at the pixel level and seamlessly integrates them into channel features to enhance fusion representation through an evaluation of modality importance via feature concatenation and filtering. Extensive experiments on representative remote sensing semantic segmentation datasets, including the ISPRS Vaihingen and Potsdam datasets, confirm the impressive performance of the proposed ASMFNet.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"17 ","pages":"20116-20128"},"PeriodicalIF":4.7,"publicationDate":"2024-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10736654","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142645536","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ViT-UNet: A Vision Transformer Based UNet Model for Coastal Wetland Classification Based on High Spatial Resolution Imagery","authors":"Nan Zhou;Mingming Xu;Biaoqun Shen;Ke Hou;Shanwei Liu;Hui Sheng;Yanfen Liu;Jianhua Wan","doi":"10.1109/JSTARS.2024.3487250","DOIUrl":"https://doi.org/10.1109/JSTARS.2024.3487250","url":null,"abstract":"High resolution remote sensing imagery plays a crucial role in monitoring coastal wetlands. Coastal wetland landscapes exhibit diverse features, ranging from fragmented patches to expansive areas. Mainstream convolutional neural networks cannot effectively analyze spatial relationships among consecutive image elements. This limitation impedes their performance in accurately classifying coastal wetlands. In order to tackle the above issues, we propose a Vision Transformer based UNet (ViT-UNet) model. This model extracts wetland features from high resolution remote sensing images by sensing and optimizing multiscale features. To establish global dependencies, the Vision Transformer (ViT) is introduced to replace the convolutional layer in the UNet encoder. Simultaneously, the model incorporates a convolutional block attention module and a multiple hierarchies attention module to restore attentional features and reduce feature loss. In addition, a skip connection is added to the single-skip structure of the original UNet model. This connection simultaneously links the output of the entire transformer and internal attention features to the corresponding decoder level. This enhancement aims to furnish the decoder with comprehensive global information guidance. Finally, all the extracted feature information is fused using Bilinear Polymerization Pooling (BPP). The BPP assists the network in obtaining a more comprehensive and detailed feature representation. Experimental results on the Gaofen-1 dataset demonstrate that the proposed ViT-UNet method achieves a Precision score of 93.50\u0000<inline-formula><tex-math>$%$</tex-math></inline-formula>\u0000, outperforming the original UNet model by 4.10\u0000<inline-formula><tex-math>$%$</tex-math></inline-formula>\u0000. Compared with other state-of-the-art networks, ViT-UNet performs more accurately and finer in the extraction of wetland information in the Yellow River Delta.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"17 ","pages":"19575-19587"},"PeriodicalIF":4.7,"publicationDate":"2024-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10737119","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142636566","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Label Refining Framework Based on Road Matching and Integration Algorithm for Road Extraction","authors":"Guodong Ma;Meng Zhang;Jian Yang;Zekai Shi;Haoyuan Ren;Yaowei Zhang","doi":"10.1109/JSTARS.2024.3486744","DOIUrl":"https://doi.org/10.1109/JSTARS.2024.3486744","url":null,"abstract":"Road network plays an important role in the fields of navigation, urban planning, and transportation. Extracting road network data from imagery based on machine learning models is an efficient and economical method for obtaining road network data. In order to save labor costs, crowdsourced data can be employed to automatically acquire the labels for model training. In response to the current challenges in road extraction, such as the limited number of labeled samples, low precision of sample labels generated from crowdsourced data, and difficulty in obtaining accurate road label data, which lead to low-quality, incomplete, and inaccurate road extraction, this study proposes a label refining framework based on a road matching and integrate algorithm. Labels are generated from OpenStreetMap (OSM) vector data, and roads are extracted from very high resolution orthoimage using the U-net model. The extracted roads are then matched and integrated with the original data to generate refined labels, which are employed for further model training and road extraction. Experimental results demonstrate that this process can overcome the poor quality of samples directly generated from the OSM data, i.e., the label refining framework led to significant improvements with respect to the completeness, accuracy, and quality of the road network extraction results.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"17 ","pages":"19548-19564"},"PeriodicalIF":4.7,"publicationDate":"2024-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10736971","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142636299","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"CCTNet: CNN and Cross-Shaped Transformer Hybrid Network for Remote Sensing Image Semantic Segmentation","authors":"Honglin Wu;Zhaobin Zeng;Peng Huang;Xinyu Yu;Min Zhang","doi":"10.1109/JSTARS.2024.3487003","DOIUrl":"https://doi.org/10.1109/JSTARS.2024.3487003","url":null,"abstract":"Deep learning methods have achieved great success in the field of remote sensing image segmentation in recent years, but building a lightweight segmentation model with comprehensive local and global feature extraction capabilities remains a challenging task. In this article, we propose a convolutional neural network (CNN) and cross-shaped transformer hybrid network (CCTNet) for semantic segmentation of high-resolution remote sensing images. This model follows an encoder–decoder structure. It employs ResNet18 as an encoder to extract hierarchical feature information, and constructs a transformer decoder based on efficient cross-shaped self-attention to fully model local and global feature information and achieve lightweighting of the network. Moreover, the transformer block introduces a mixed-scale convolutional feedforward network to further enhance multiscale information extraction. Furthermore, a simplified and efficient feature aggregation module is leveraged to gradually aggregate local and global information at different stages. Extensive comparison experiments on the ISPRS Vaihingen and Potsdam datasets reveal that our method obtains superior performance compared with state-of-the-art lightweight methods.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"17 ","pages":"19986-19997"},"PeriodicalIF":4.7,"publicationDate":"2024-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10736947","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142645477","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multilevel Feature Interaction Network for Remote Sensing Images Semantic Segmentation","authors":"Hongkun Chen;Huilan Luo","doi":"10.1109/JSTARS.2024.3486724","DOIUrl":"https://doi.org/10.1109/JSTARS.2024.3486724","url":null,"abstract":"High-spatial resolution (HSR) remote sensing images present significant challenges due to their highly complex backgrounds, a large number of densely distributed small targets, and the potential for confusion with land targets. These characteristics render existing methods ineffective in accurately segmenting small targets and prone to boundary blurring. In response to these challenges, we introduce a novel multilevel feature interaction network (MFIN). The MFIN model was designed as a dual-branch U-shaped interactive decoding structure that effectively achieves semantic segmentation and edge detection. Notably, this study is the first to address ways to enhance the performance for HSR remote sensing image analysis by iteratively refining features at multilevels for different tasks. We designed the feature interaction module (FIM), which refines semantic features through multiscale attention and interacts with edge features of the same scale for optimization, then serving as input for iterative optimization in the next scale's FIM. In addition, a lightweight global feature module is designed to adaptively extract global contextual information from different scales features, thereby enhancing the semantic accuracy of the features. Furthermore, to mitigate the semantic dilution issues caused by upsampling, a semantic-guided fusion module is introduced to enhance the propagation of rich semantic information among features. The proposed methods achieve state-of-the-art segmentation performance across four publicly available remote sensing datasets: Potsdam, Vaihingen, LoveDA, and UAVid. Notably, our MFIN has only 15.4 MB parameters and 34.2 GB GFLOPs, achieving an optimal balance between accuracy and efficiency.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"17 ","pages":"19831-19852"},"PeriodicalIF":4.7,"publicationDate":"2024-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10736554","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142636300","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning From Clutter: An Unsupervised Learning-Based Clutter Removal Scheme for GPR B-Scans","authors":"Qiqi Dai;Yee Hui Lee;Hai-Han Sun;Jiwei Qian;Mohamed Lokman Mohd Yusof;Daryl Lee;Abdulkadir C. Yucel","doi":"10.1109/JSTARS.2024.3486535","DOIUrl":"https://doi.org/10.1109/JSTARS.2024.3486535","url":null,"abstract":"Ground-penetrating radar (GPR) data are often contaminated by hardware and environmental clutter, which significantly affects the accuracy and reliability of target response identification. Existing supervised deep learning techniques for removing clutter in GPR data require generating a large set of clutter-free B-scans as labels for training, which are computationally expensive in simulation and unfeasible in real-world experiments. To tackle this issue, we propose a two-stage unsupervised learning-based clutter removal scheme, called ULCR-Net, to obtain clutter-free GPR B-scans. In the first stage of the proposed scheme, a diffusion model tailored for GPR data augmentation is employed to generate a diverse set of raw B-scans from the input random noise. With the augmented dataset, the second stage of the proposed scheme uses a contrastive learning-based generative adversarial network to learn and estimate clutter patterns in the raw B-scan. The clutter-free B-scan is then obtained by subtracting the clutter pattern from the raw B-scan. The training of the two-stage network only requires a small set of raw B-scans and clutter-only B-scans that are readily available in real-world applications. Extensive experiments have been conducted to validate the effectiveness of the proposed method. Results on simulation and measurement data demonstrate that the proposed method has superior clutter removal accuracy and generalizability and outperforms existing algebraic techniques and supervised learning-based methods with limited training data by a large margin. With its high clutter suppression capability and low training data requirements, the proposed method is well-suited to remove clutter and restore target responses in real-world GPR applications.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"17 ","pages":"19668-19681"},"PeriodicalIF":4.7,"publicationDate":"2024-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10735359","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142636470","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}