{"title":"A Synergistic CNN-DF Method for Landslide Susceptibility Assessment","authors":"Jiangang Lu;Yi He;Lifeng Zhang;Qing Zhang;Jiapeng Tang;Tianbao Huo;Yunhao Zhang","doi":"10.1109/JSTARS.2025.3541638","DOIUrl":"https://doi.org/10.1109/JSTARS.2025.3541638","url":null,"abstract":"The complex structures and intricate hyperparameters of existing deep learning (DL) models make achieving higher accuracy in landslide susceptibility assessment (LSA) time-consuming and labor-intensive. Deep forest (DF) is a decision tree-based DL framework that uses a cascade structure to process features, with model depth adapting to the input data. To explore an improved landslide susceptibility model, this study designed a landslide susceptibility model combining convolutional neural networks (CNNs) and DF, referred to as CNN-DF. The Bailong River Basin, a region severely affected by landslides, was chosen as the study area. First, the landslide inventory and influencing factors of the study area were obtained. Second, an equal number of landslide and nonlandslide samples were selected under similar environmental constraints to establish the dataset. Third, CNN was used to extract high-level features from the raw data, which were then input into the DF model for training and testing. Finally, the trained model was used to predict landslide susceptibility. The results showed that the CNN-DF model achieved high prediction accuracy, with an AUC of 0.9061 on the testing set, outperforming DF, CNN, and other commonly used machine learning models. In landslide susceptibility maps (LSMs), the proportion of historical landslides in the very high susceptibility category of CNN-DF was also higher than that of other models. CNN-DF is feasible for LSA, offering higher efficiency and more accurate results. In addition, the SHAP algorithm was used to quantify the contribution of features to the prediction results both globally and locally, further explaining the model. The LSM based on CNN-DF can provide a scientific basis for landslide prevention and disaster management in the target area.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"18 ","pages":"6584-6599"},"PeriodicalIF":4.7,"publicationDate":"2025-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10884718","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143553114","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Earth Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Estimating Vegetation Optical Depth With Mobile GNSS Transmissometry in Temperate Forests During SMAPVEX22","authors":"Abesh Ghosh;Md Mehedi Farhad;Mohammad Ehsanul Hoque;Dylan Ray Boyd;Laura Bourgeau-Chavez;Michael H. Cosh;Andreas Colliander;Mehmet Kurum","doi":"10.1109/JSTARS.2025.3541182","DOIUrl":"https://doi.org/10.1109/JSTARS.2025.3541182","url":null,"abstract":"This study investigates the potential of mobile global navigation satellite system (GNSS) transmissometry (GNSS-T) measurements for estimating vegetation optical depth (VOD) in temperate forests, focusing on the Soil Moisture Active Passive (SMAP) validation experiment in 2022 (SMAPVEX22). Our methodology employed a dual-GNSS receiver setup, with one receiver positioned in open terrain to serve as a reference for direct signals, and another deployed on a mobile unit (helmet-based or robotic system) to spatially sample vegetation across expansive forested regions during SMAPVEX22. We assessed the stability of direct signal measurements over multiple days, demonstrating the reliability of the GNSS-T measurements. We reported the VOD measurement results for various sites across different forest regions during intensive observation periods and evaluated their correlation with respect to in situ vegetation parameters such as basal area, biomass, canopy height, and diameter at breast height, finding a strong correlation with the basal area (<inline-formula><tex-math>$R^{2}=0.73$</tex-math></inline-formula>). In addition, with a predictive regression model, we demonstrated a strong dependence of the measured VOD on a combination of such forest parameters. An evaluation of the VOD values at different satellite elevation angles highlighted an increasing trend in VOD with the incidence angle. The results showed the potential utility of mobile GNSS-T for generating large-scale VOD observations. Although spatially averaged VOD maps might not be directly comparable to spaceborne observations, combining mobile GNSS-T data with other sensors such as LiDAR can provide a reliable reference for airborne or spaceborne VOD estimates.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"18 ","pages":"6451-6463"},"PeriodicalIF":4.7,"publicationDate":"2025-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10884011","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143553359","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Earth Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SGANet: A Siamese Geometry-Aware Network for Remote Sensing Change Detection","authors":"Jiangwei Chen;Sijun Dong;Xiaoliang Meng","doi":"10.1109/JSTARS.2025.3539733","DOIUrl":"https://doi.org/10.1109/JSTARS.2025.3539733","url":null,"abstract":"The significant progress in the fields of deep learning and computer vision has propelled the development of remote sensing change detection. However, previous methods still rely on a single visual modality and cannot effectively utilize other prior information, such as elevation or depth maps. Therefore, this article presents a novel Siamese geometry-aware network (SGANet) intended for RGB-D remote sensing change detection. By incorporating both RGB data and geometry priors, such as relative depth estimations derived from a monocular depth estimation model such as DepthAnythingV2, SGANet surpasses the limitations of traditional methods that primarily depend on visual data. The proposed network employs a shared Siamese encoder architecture with a lightweight decoder head for efficient change map prediction. Within the encoder blocks, we integrated a local feature extraction block that excels at capturing fine-grained features and a global cross-attention block that focuses on contextual features between different modalities. Furthermore, we engineered a dual-path fusion structure that facilitates a seamless integration of vision and geometry features. Extensive experiments on the LEVIR-CD, WHU-CD, SYSU-CD, and S2Looking-CD datasets demonstrated that SGANet achieved substantial enhancements in F1-Score and intersection over union compared to widely used benchmark methods. By integrating geometry priors and effective multimodal fusion mechanisms, SGANet advances geometry-aware change detection and further improves state-of-the-art performance.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"18 ","pages":"6232-6248"},"PeriodicalIF":4.7,"publicationDate":"2025-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10884698","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143521472","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Earth Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SR-DNnet: A Deep Network for Super-Resolution and De-Noising of ISAR Images","authors":"Fengkai Liu;Darong Huang;Xinrong Guo;Cunqian Feng","doi":"10.1109/JSTARS.2025.3540782","DOIUrl":"https://doi.org/10.1109/JSTARS.2025.3540782","url":null,"abstract":"Inverse synthetic aperture radar (ISAR) images have become one of the most important sources of information for airborne and maritime target identification. In general, ISAR images with higher resolution and lower background noise provide more precise target information, thus improving target identification accuracy. However, upgrading the resolution of the ISAR system is costly. Super-resolution algorithms that can utilize low-resolution echoes to obtain high-resolution imaging results have become an important means of improving ISAR imaging resolution. The traditional ISAR super-resolution imaging technique suffers from high side lobes and wide main lobes. In addition, denoising algorithms based on filtering operators tend to lead to image blurring. This work proposes a deep network for super-resolution and de-noising of ISAR images called SR-DNnet. Specifically, we view super-resolution and de-noising as a series of up-sampling, two-dimensional filtering, and threshold shrinkage. These operations are exactly what deep networks are good at. SR-DNnet has 15 layers, enabling 4x super-resolution and de-noising of ISAR images. The parameter scale of SR-DNnet is much smaller than that of most deep networks, which makes it efficient to train. The SR-DNnet we built features complex-valued inputs, residual learning, multipath learning, and progressive up-sampling. A series of experiments on simulated and measured datasets demonstrates that SR-DNnet is efficient and performs well on super-resolution and de-noising.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"18 ","pages":"6567-6583"},"PeriodicalIF":4.7,"publicationDate":"2025-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10882895","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143553507","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Earth Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"C2F-Net: Coarse-to-Fine Multidrone Collaborative Perception Network for Object Trajectory Prediction","authors":"Mingxin Chen;Zhirui Wang;Zhechao Wang;Liangjin Zhao;Peirui Cheng;Hongqi Wang","doi":"10.1109/JSTARS.2025.3541249","DOIUrl":"https://doi.org/10.1109/JSTARS.2025.3541249","url":null,"abstract":"A multidrone collaborative perception network can forecast the motion trajectories of ground objects by aggregating intragroup communication and interaction, exhibiting significant potential across various applications. Existing collaborative perception methods struggle to address the nonuniform spatial distribution of targets and the spatial heterogeneity of multisource perception information typical in remote sensing scenarios. To tackle these challenges, we propose a coarse-to-fine feature fusion network, C2F-Net, utilizing coarse-grained information interaction to guide the fusion of fine-grained features. Our approach includes a selective coarse-to-fine feature collaboration module that estimates perception levels of specific areas based on bird's-eye-view features, selectively collaborates on sparse features according to complementary information principles, and achieves efficient spatial feature interaction and fusion. In addition, we employ a region-aware effectiveness enhancement module, leveraging the differences between swarm and individual perception as prior knowledge to guide regional perception level estimation, improving comprehensive environmental understanding. We also introduce a simulation dataset named CoD-Pred for multidrone collaborative trajectory prediction. Extensive experiments demonstrate that C2F-Net significantly improves the accuracy of multidrone collaborative trajectory prediction, increasing mIoU by 2.7% to 3.3% and VPQ by 1.0% to 9.1% under comparable information transmission conditions, offering an effective and efficient solution for multidrone collaborative perception.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"18 ","pages":"6314-6328"},"PeriodicalIF":4.7,"publicationDate":"2025-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10883025","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143564158","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Earth Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Identification and Spectral Characteristic Analysis of the Tianwen-1 Exhaust Disturbed Area Using HiRIC and HiRISE Imagery","authors":"Chao Wang;Lu Han;Siji Sanlang;Xiong Xu;Huan Xie;Yongjiu Feng;Sicong Liu;Xiaohua Tong","doi":"10.1109/JSTARS.2025.3540431","DOIUrl":"https://doi.org/10.1109/JSTARS.2025.3540431","url":null,"abstract":"The Tianwen-1 lander exhaust induced obvious disturbances to the Martian soil. This study examines the extent and nature of the disturbed area caused by the Tianwen-1 landing rocket exhaust by utilizing high-resolution Mars Reconnaissance Orbiter high-resolution imaging science experiment (HiRISE) and Tianwen-1 high-resolution imaging camera (HiRIC) images. The boundary delineation results of the disturbed area show that the entire disturbed region has a maximum east-west distance of ∼118 m and a maximum north-south distance of ∼152 m. The northern part of the disturbed area is smaller than the southern part, which can be attributed to the terrain in the northeast being higher than that in the southwest. Based on the magnitude of the disturbance effect on the reflectance, the disturbed area, whose reflectance is decreased, is divided into a diffuse blast zone and a focus blast zone (FBZ), with the FBZ experiencing a higher degree of disturbance. Additionally, the passivation impact zone (PIZ), the area disturbed by both the exhaust plume and rocket fuel passivation, has a higher reflectance than the undisturbed background. This is probably due to the accumulation of the fine soil particles injected by the fuel passivation process. The study also used HiRISE images to analyze the spectral characteristics of the FBZ, the passivation zone (PZ), the PIZ, and a zone of interest (ZOI). The results show that the pixels in different zones are clustered in the blue-green and near infrared-red reflectance space, particularly indicating the distinguishability of the PZ and ZOI, which are similar in surface reflectance. This result indicates that in the fuel-passivated direction, the Martian surface impacted by both exhaust and excess fuel may undergo a more complex disturbance than that in the nonpassivated direction.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"18 ","pages":"6182-6191"},"PeriodicalIF":4.7,"publicationDate":"2025-02-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10879380","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143563916","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Earth Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Semantic Co-Occurrence and Relationship Modeling for Remote Sensing Image Segmentation","authors":"Yinxing Zhang;Haochen Song;Qingwang Wang;Pengcheng Jin;Tao Shen","doi":"10.1109/JSTARS.2025.3540789","DOIUrl":"https://doi.org/10.1109/JSTARS.2025.3540789","url":null,"abstract":"Semantic segmentation is an important but challenging task in pixel-level remote sensing (RS) data analysis. Accurate segmentation is essential for applications such as land use classification, infrastructure monitoring, and environmental conservation. However, RS semantic segmentation is hindered by issues such as class imbalance, occlusion, blurring, and small target sizes. Existing models lack the capability to capture and utilize contextual and semantic relationships between different object classes. To overcome these challenges, we propose an enhanced semantic segmentation framework that integrates domain-specific knowledge through our Semantic Co-occurrence and Relationship Module (SCRM). The SCRM comprises two key components: a Probabilistic Co-occurrence Knowledge Module that incorporates statistical class correlations into the training process, and an Inter-Class Feature Relationship Module that models feature-level interactions between classes. By embedding SCRM into both classic and state-of-the-art segmentation models, our method leverages contextual relationships to improve segmentation performance. We evaluate our method on four RS datasets: two RGB-T (KUST4K and MFNet) and two RGB (Aeroscapes and DLRSD). Experimental results demonstrate that our enhanced models achieve significant improvements in mAcc and mIoU across all datasets and baseline methods.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"18 ","pages":"6630-6640"},"PeriodicalIF":4.7,"publicationDate":"2025-02-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10882876","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143553456","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Earth Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Advancing RGB-IR Image Fusion: Exploiting Hyperbolic Geometry","authors":"Chenyu Peng;Tao Shen;Qingwang Wang","doi":"10.1109/JSTARS.2025.3540304","DOIUrl":"https://doi.org/10.1109/JSTARS.2025.3540304","url":null,"abstract":"Infrared and visible image fusion is essential for remote sensing applications, especially for obtaining high-quality imagery of terrestrial environments. Hierarchical feature information is crucial for image fusion as it captures the intricate relationships between different modalities, which are vital for producing detailed and accurate composite images. However, most existing methods operate within the confines of Euclidean space, which, due to its inherently “flat” geometric nature, often struggles to effectively measure the similarities and differences between modalities, thus failing to maintain their distinctiveness. Hyperbolic space, with its constant negative curvature, excels at leveraging these hierarchical structures. It can more effectively gauge the similarities and differences between modalities, preserving their distinctiveness. In this study, we propose a novel method for infrared and visible image fusion in hyperbolic space, named HbFNet. We have developed innovative hyperbolic feature extraction modules, including Hyperbolic Invertible Neural Networks and Hyperbolic Lite Transformer blocks, specifically designed to capitalize on the hierarchical nature of features. Our method emerges as a promising solution for enhancing hierarchical information and elevating the quality of fusion. Extensive experiments across three public datasets have demonstrated that our method outperforms most state-of-the-art image fusion techniques.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"18 ","pages":"6007-6016"},"PeriodicalIF":4.7,"publicationDate":"2025-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10878430","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143496487","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Earth Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Large-Scene Polar Format Algorithm for Circular SAR Subaperture Imaging","authors":"Qiming Zhang;Jinping Sun;Yun Lin;Shengqian Han;Yanping Wang;Wen Hong","doi":"10.1109/JSTARS.2025.3540108","DOIUrl":"https://doi.org/10.1109/JSTARS.2025.3540108","url":null,"abstract":"Circular synthetic aperture radar (CSAR) is an advanced mechanism with the capability of three-dimensional imaging, which can continuously observe the omnidirectional scattering characteristic of the ground scene. Time-domain imaging algorithms with accurate focusing ability, such as the back-projection algorithm, are often used in large-scene CSAR imaging. Due to the large amount of echo data, time-domain imaging algorithms are typically time-consuming. The polar format algorithm (PFA) is a convenient and efficient frequency-domain imaging algorithm. However, the azimuth defocus caused by the phase error of wavefront curvature limits the depth of focus in CSAR images formed by PFA. In this article, we propose a large-scene PFA (LS-PFA) for CSAR subaperture imaging with space-variant post-filtering. By leveraging the phase error of wavefront curvature for arbitrary curved flight paths, a space-variant filter suitable for the large-scene subaperture CSAR image formed by PFA is designed to compensate for the azimuth defocus. LS-PFA provides an efficient and general solution to acquire well-focused large-scene subaperture CSAR images under arbitrary curved flight paths. The focusing performance of LS-PFA is verified with simulation and experimental results.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"18 ","pages":"6336-6349"},"PeriodicalIF":4.7,"publicationDate":"2025-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10878490","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143564102","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Earth Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exploring the Assimilation of All-Sky FY-4A GIIRS Radiances and Its Forecasts for Binary Typhoons","authors":"Qian Xie;Deqin Li;Yi Yang;Yonghong Zhao;Hong Li;Shengjie Zhu;Xiao Pan","doi":"10.1109/JSTARS.2025.3540209","DOIUrl":"https://doi.org/10.1109/JSTARS.2025.3540209","url":null,"abstract":"Cloud interference significantly affects infrared (IR) satellite observations, posing substantial challenges in data assimilation. The geostationary interferometric infrared sounder (GIIRS) onboard Fengyun-4A (FY-4A), the first IR hyperspectral instrument carried on a geostationary satellite, has been extensively evaluated through direct clear-sky assimilation. However, its utility in cloud-affected areas has yet to be thoroughly evaluated. This study investigates the impact of all-sky assimilation of long-wave temperature channels from GIIRS on forecasts of the binary typhoons Maysak and Haishen (2020) using the Weather Research and Forecasting model. Quality control procedures, observation error settings, and variational bias corrections are incorporated into the three-dimensional variational data assimilation system for both clear-sky and all-sky scenarios. These approaches mitigate negative observation-minus-background statistics, producing a more symmetric distribution, along with a better brightness temperature simulation under all-sky conditions. Furthermore, assimilating all-sky GIIRS observations enhances the depiction of detailed typhoon structures, such as the warmer upper-level distribution in Maysak, accompanied by larger analysis variations. This process also improves subsequent typhoon track and landfall precipitation forecasts. This research highlights the significance of employing high-temporal-resolution IR data in cloudy regions for the all-sky assimilation of FY-4A GIIRS data.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"18 ","pages":"5949-5959"},"PeriodicalIF":4.7,"publicationDate":"2025-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10878481","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143512977","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Earth Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}