IEEE Geoscience and Remote Sensing Letters (a publication of the IEEE Geoscience and Remote Sensing Society): Latest Publications

HSN-Net: A Hybrid Segmentation Neural Network for High-Resolution Road Extraction
Bo Huang;Yiwei Lu;Ruopeng Yang;Yu Tao;Shijie Wang;Yongqi Shi
{"title":"HSN-Net: A Hybrid Segmentation Neural Network for High-Resolution Road Extraction","authors":"Bo Huang;Yiwei Lu;Ruopeng Yang;Yu Tao;Shijie Wang;Yongqi Shi","doi":"10.1109/LGRS.2025.3558511","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3558511","url":null,"abstract":"Road network information is a core component of online maps and plays a crucial role in navigation, urban planning, and traffic management. Convolutional neural networks (CNNs) have demonstrated remarkable performance in road extraction tasks. However, their limited ability to capture global information often leads to fragmented road segments when roads are occluded by other terrains in satellite images, ultimately undermining the accuracy and continuity of the segmentation results. Given the strengths of transformers in capturing global contextual information and CNNs in extracting local detailed features, this letter introduces a novel deep network called hybrid segmentation neural network (HSN-Net), which seamlessly integrates transformers with CNNs to leverage the advantages of both the architectures. To further enhance road continuity, we propose the road continuity perception module (RCPM). Experiments on the DeepGlobe and CHN6-CUG datasets demonstrate that our HSN-Net achieves state-of-the-art segmentation performance in road extraction tasks, validating the effectiveness of our design choices. The source code is available at <uri>https://github.com/hb281/HSN-Net</uri>","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2025-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143865281","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
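To make the CNN-transformer hybrid idea concrete, here is a minimal PyTorch sketch that fuses a local convolutional branch with a global self-attention branch to produce per-pixel road logits. It is an illustration only, not the authors' HSN-Net or RCPM; the module and parameter names (HybridSegBlock, channels, num_heads) are hypothetical.

```python
# Minimal sketch (not the authors' HSN-Net): fusing local CNN features with
# global transformer context for binary road segmentation.
import torch
import torch.nn as nn

class HybridSegBlock(nn.Module):
    def __init__(self, channels=64, num_heads=4):
        super().__init__()
        # Local branch: plain convolutions capture fine road edges.
        self.local = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        # Global branch: self-attention over flattened pixels captures
        # long-range context that helps bridge occluded road segments.
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.fuse = nn.Conv2d(2 * channels, channels, 1)
        self.head = nn.Conv2d(channels, 1, 1)  # per-pixel road logit

    def forward(self, x):
        b, c, h, w = x.shape
        local_feat = self.local(x)
        tokens = x.flatten(2).transpose(1, 2)                 # (B, H*W, C)
        global_feat, _ = self.attn(tokens, tokens, tokens)
        global_feat = global_feat.transpose(1, 2).reshape(b, c, h, w)
        fused = self.fuse(torch.cat([local_feat, global_feat], dim=1))
        return self.head(fused)

x = torch.randn(1, 64, 32, 32)
print(HybridSegBlock()(x).shape)  # torch.Size([1, 1, 32, 32])
```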
Impacts of Topography on Daily Mean Albedo Estimation Over Snow-Free Rugged Terrain
Yuan Han;Jianguang Wen;Dongqin You;Qing Xiao;Guokai Liu;Yong Tang;Sen Piao;Na Zhao;Qinhuo Liu
{"title":"Impacts of Topography on Daily Mean Albedo Estimation Over Snow-Free Rugged Terrain","authors":"Yuan Han;Jianguang Wen;Dongqin You;Qing Xiao;Guokai Liu;Yong Tang;Sen Piao;Na Zhao;Qinhuo Liu","doi":"10.1109/LGRS.2025.3555608","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3555608","url":null,"abstract":"Daily mean albedo is a critical variable in surface energy budget and climate change studies. Currently, satellite-based daily mean albedo is typically estimated from the diurnal variation of albedo, derived from multiangle reflectance observations using a bidirectional reflectance distribution function (BRDF) kernel-driven model. However, this model assumes flat terrain and neglects topographic effects. This study evaluates the estimation errors of daily mean albedo derived from the BRDF kernel-driven model over rugged terrain. Experiments were conducted for rugged terrains with different mean slopes (10°, 20°, and 30°) and aspects (north and west) at spatial scales of 500 m and 1 km, using the large-scale remote sensing data and the image simulation framework (LESS) model. The results demonstrate that topography significantly influences the daily mean albedo derived from the BRDF kernel-driven model, with the largest relative error exceeding 50%. The estimation error increases as the slope of the terrain becomes steeper and is also strongly influenced by the aspect of the terrain. When the solar azimuth angle aligns with the aspect of the rugged terrain, the estimation error becomes particularly pronounced. These findings highlight the necessity of accounting for topographic effects when estimating daily mean albedo.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2025-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143830466","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
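For reference, the kernel-driven BRDF model mentioned in this abstract is commonly written in the Ross-Li form below, and daily mean albedo is typically an irradiance-weighted average of the instantaneous albedo over the daylight period. These are the standard forms from the literature, not the letter's own equations; the symbols (solar/view zenith angles, relative azimuth, band Λ, irradiance F(t)) follow common usage.

```latex
% Kernel-driven BRDF model (common Ross-Li form); the letter's exact kernels may differ.
\[
R(\theta_s,\theta_v,\phi;\Lambda) = f_{\mathrm{iso}}(\Lambda)
  + f_{\mathrm{vol}}(\Lambda)\,K_{\mathrm{vol}}(\theta_s,\theta_v,\phi)
  + f_{\mathrm{geo}}(\Lambda)\,K_{\mathrm{geo}}(\theta_s,\theta_v,\phi)
\]
% Daily mean albedo: instantaneous albedo weighted by downwelling irradiance F(t)
% over the daylight period [t_rise, t_set].
\[
\bar{\alpha} =
  \frac{\displaystyle\int_{t_{\mathrm{rise}}}^{t_{\mathrm{set}}}
        \alpha\bigl(\theta_s(t)\bigr)\,F(t)\,dt}
       {\displaystyle\int_{t_{\mathrm{rise}}}^{t_{\mathrm{set}}} F(t)\,dt}
\]
```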
Source-Free Domain Adaptation for Remote Sensing Object Detection Using Low-Confidence Pseudolabels
Jin Kim;Junyoung Park;Hyunsung Jang;Namkoo Ha;Kwanghoon Sohn
{"title":"Source-Free Domain Adaptation for Remote Sensing Object Detection Using Low-Confidence Pseudolabels","authors":"Jin Kim;Junyoung Park;Hyunsung Jang;Namkoo Ha;Kwanghoon Sohn","doi":"10.1109/LGRS.2025.3557816","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3557816","url":null,"abstract":"Source-free domain adaptive object detection (SFOD) enables detectors trained on a source domain to be deployed to unlabeled target domains without access to the source data, thus addressing concerns about data privacy and efficiency. Existing SFOD methods typically use a mean-teacher (MT) self-training paradigm with high-confidence pseudolabels (HPLs). However, HPLs often overlook small objects in novel domain conditions, leading to biased adaptation of the student detector. This issue is particularly problematic in remote sensing (RS) datasets dominated by small vehicles. To overcome this limitation, we introduce the low-confidence pseudolabel distillation for aerial (LPLDA) scenes framework, which leverages low-confidence proposals to improve the adaptation of small objects in the target domain. Moreover, we enhance the low-confidence pseudolabel (LPL) mining process with an instance consistency (IC) loss that reinforces teacher-student consistency, making small-object features more robust to domain shifts. Extensive experiments across four practical domain shift scenarios show that our method reduces false negatives for small objects and outperforms previous SFOD approaches by effectively using domain-invariant knowledge from the source.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2025-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143871076","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
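As a rough illustration of the pseudolabel side of such mean-teacher pipelines, the sketch below splits a teacher detector's proposals into high- and low-confidence sets so that low-confidence ones can still supervise small objects instead of being discarded. It is not the authors' LPLDA framework or IC loss; the thresholds and dictionary layout are hypothetical.

```python
# Minimal sketch (not the authors' LPLDA): splitting teacher proposals into
# high- and low-confidence pseudolabel sets. Thresholds are hypothetical.
import torch

def split_pseudolabels(teacher_out, high_thr=0.8, low_thr=0.3):
    """teacher_out: dict with 'boxes' (N, 4) and 'scores' (N,)."""
    scores = teacher_out["scores"]
    high = scores >= high_thr                        # standard HPLs
    low = (scores >= low_thr) & (scores < high_thr)  # kept to cover small objects
    return (
        {"boxes": teacher_out["boxes"][high], "scores": scores[high]},
        {"boxes": teacher_out["boxes"][low], "scores": scores[low]},
    )

teacher_out = {
    "boxes": torch.rand(6, 4),
    "scores": torch.tensor([0.9, 0.85, 0.5, 0.35, 0.2, 0.1]),
}
hpl, lpl = split_pseudolabels(teacher_out)
print(len(hpl["scores"]), len(lpl["scores"]))  # 2 2
```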
Combining Neighborhood Difference and Gaussian-Gamma-Shaped Feature Map for SAR Image Registration
Wenlong Hu;Junyi Liu;Junjie Huang;Qingsong Wang
{"title":"Combining Neighborhood Difference and Gaussian-Gamma-Shaped Feature Map for SAR Image Registration","authors":"Wenlong Hu;Junyi Liu;Junjie Huang;Qingsong Wang","doi":"10.1109/LGRS.2025.3557900","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3557900","url":null,"abstract":"Due to the influence of speckle noise and geometric distortion between images, synthetic aperture radar (SAR) image registration under different imaging conditions is a challenging task in remote sensing. To address the issues of significant differences in scattering and geometric characteristics of SAR images under different viewing angles, this letter proposes a novel SAR image registration method. The existing methods mainly rely on gradient information in the feature point selection process, which leads to uneven distribution of feature points and poor global matching. We design a Harris-based neighborhood difference map (HNDM) detector. This detector uses the degree of difference between neighbor regions and the central region to obtain feature points that are homogeneous and significant. Then, a Gaussian-Gamma-shaped (GGS) feature map is used to construct the feature point characterization, which is more robust to dark region noise. Experimental results of SAR image registration under different conditions show that our method achieves better performance in matching accuracy and the number of correct correspondences, outperforming three existing advanced algorithms.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2025-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143877658","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
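The sketch below illustrates one plausible reading of a neighborhood-difference response: each pixel is scored by how much the mean of its central window differs from the means of its eight shifted neighbor windows. This is a hypothetical toy version for intuition only, not the authors' HNDM detector or GGS feature map.

```python
# Hypothetical toy neighborhood-difference response (not the authors' HNDM).
import numpy as np

def neighborhood_difference_map(img, win=3):
    h, w = img.shape
    pad = 2 * win  # pad generously so all shifted windows stay in bounds
    padded = np.pad(img, pad, mode="reflect")
    out = np.zeros((h, w), dtype=np.float64)
    offsets = [(dy, dx) for dy in (-win, 0, win) for dx in (-win, 0, win)
               if (dy, dx) != (0, 0)]
    for y in range(h):
        for x in range(w):
            cy, cx = y + pad, x + pad
            center = padded[cy:cy + win, cx:cx + win].mean()
            diffs = [abs(center - padded[cy + dy:cy + dy + win,
                                         cx + dx:cx + dx + win].mean())
                     for dy, dx in offsets]
            out[y, x] = np.mean(diffs)  # high where the center stands out
    return out

img = np.random.rand(32, 32)
print(neighborhood_difference_map(img).shape)  # (32, 32)
```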
Geographic Prior Guided Subpixel Mapping for Fine-Grained Urban Tree Cover Reconstruction
Jingqian Xue;Ziheng Zhang;Yan Zhou;Lina Yuan;Da He;Xiaoping Liu
{"title":"Geographic Prior Guided Subpixel Mapping for Fine-Grained Urban Tree Cover Reconstruction","authors":"Jingqian Xue;Ziheng Zhang;Yan Zhou;Lina Yuan;Da He;Xiaoping Liu","doi":"10.1109/LGRS.2025.3557845","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3557845","url":null,"abstract":"Benefiting from long-term time series and large spatial coverage, Sentinel-2 has been widely used in urban tree cover retrieval. However, the mixed pixel effects in Sentinel-2 imagery make it challenging to accurately identify urban tree covers. To address this problem, subpixel mapping (SPM) is developed to reconstruct a high-resolution urban tree cover from medium-resolution imagery. While deep-learning-based SPM seeks fine-grained patterns solely within medium-resolution feature spaces and spatiotemporal fusion-based SPM leverages additional high-resolution imagery from different times at the same location, both face limitations: the former lacks detailed spatial constraints, and the latter struggles with acquiring geographically aligned imagery. To address these challenges, this study proposes a geographic prior guided SPM (GPSPM) approach for urban tree cover reconstruction. The geographic prior is grounded in the scaling law of geography, a fundamental principle of spatial heterogeneity stating that high-resolution imagery contains far more detailed features (e.g., small tree parcels) than lower resolution imagery. These fine-grained features enhance SPM by providing robust cross-scale spatial prior based on a “teacher-student” domain adaptation training framework. Besides, considering the geometric feature discrepancy and long-tail distribution exists across different geographic scales, cross-scale image mosaicking and resampling strategy are further developed. Experiments on public urban tree cover dataset demonstrate that the proposed method improves the intersection over union (IoU) of urban tree cover by approximately 5% compared to traditional unsupervised SPM and shows significant improvements in spatial detail quality.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2025-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143900526","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
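For reference, the IoU metric behind the reported ~5% improvement can be computed for binary tree-cover masks as in the minimal sketch below; the example masks are illustrative only.

```python
# Minimal sketch: intersection over union (IoU) for binary tree-cover masks.
import numpy as np

def binary_iou(pred, ref):
    pred, ref = pred.astype(bool), ref.astype(bool)
    inter = np.logical_and(pred, ref).sum()
    union = np.logical_or(pred, ref).sum()
    return inter / union if union > 0 else 1.0  # two empty masks count as a perfect match

pred = np.array([[1, 1, 0], [0, 1, 0]])
ref  = np.array([[1, 0, 0], [0, 1, 1]])
print(round(binary_iou(pred, ref), 3))  # 2 / 4 = 0.5
```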
Heterogeneous Mixture of Experts for Remote Sensing Image Super-Resolution
Bowen Chen;Keyan Chen;Mohan Yang;Zhengxia Zou;Zhenwei Shi
{"title":"Heterogeneous Mixture of Experts for Remote Sensing Image Super-Resolution","authors":"Bowen Chen;Keyan Chen;Mohan Yang;Zhengxia Zou;Zhenwei Shi","doi":"10.1109/LGRS.2025.3557928","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3557928","url":null,"abstract":"Remote sensing image super-resolution (SR) aims to reconstruct high-resolution (HR) remote sensing images from low-resolution (LR) inputs, thereby addressing limitations imposed by sensors and imaging conditions. However, the inherent characteristics of remote sensing images, including diverse ground object types and complex details, pose significant challenges to achieving high-quality reconstruction. Existing methods typically use a uniform structure to process various types of ground objects without distinction, making it difficult to adapt to the complex characteristics of remote sensing images. To address this issue, we introduce a mixture-of-experts (MoE) model and design a set of heterogeneous experts. These experts are organized into multiple expert groups, where experts within each group are homogeneous while being heterogeneous across groups. This design ensures that specialized activation parameters can be used to handle the diverse and intricate details of ground objects effectively. To better accommodate the heterogeneous experts, we propose a multilevel feature aggregation (MFA) strategy to guide the routing process. In addition, we develop a dual-routing mechanism to adaptively select the optimal expert for each pixel. Experiments conducted on the UCMerced and AID datasets demonstrate that our proposed method achieves superior SR reconstruction accuracy compared with state-of-the-art methods. The code will be available at <uri>https://github.com/Mr-Bamboo/MFG-HMoE</uri>","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2025-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143839870","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
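A minimal per-pixel mixture-of-experts routing sketch is shown below to illustrate the general mechanism. It is far simpler than the authors' MFG-HMoE (no heterogeneous expert groups, MFA guidance, or dual routing); all module names and sizes are hypothetical.

```python
# Minimal sketch: per-pixel soft routing over a small set of convolutional experts.
import torch
import torch.nn as nn

class PixelMoE(nn.Module):
    def __init__(self, channels=32, num_experts=4):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=1) for _ in range(num_experts)
        )
        self.router = nn.Conv2d(channels, num_experts, 1)  # per-pixel gate logits

    def forward(self, x):
        gates = torch.softmax(self.router(x), dim=1)                    # (B, E, H, W)
        expert_out = torch.stack([e(x) for e in self.experts], dim=1)   # (B, E, C, H, W)
        return (gates.unsqueeze(2) * expert_out).sum(dim=1)             # (B, C, H, W)

x = torch.randn(1, 32, 16, 16)
print(PixelMoE()(x).shape)  # torch.Size([1, 32, 16, 16])
```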
Overcoming Data Scarcity in Maritime Radar Target Detection via a Complex-Valued Hybrid Spatiotemporal Network
Ju Wang;Chongyue Wang;Zhaojie Li;Wenjing He;Yi Zhong;Yan Huang
{"title":"Overcoming Data Scarcity in Maritime Radar Target Detection via a Complex-Valued Hybrid Spatiotemporal Network","authors":"Ju Wang;Chongyue Wang;Zhaojie Li;Wenjing He;Yi Zhong;Yan Huang","doi":"10.1109/LGRS.2025.3557817","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3557817","url":null,"abstract":"Detecting small floating targets on the sea surface has long been a major challenge in radar signal processing. Recently, deep learning (DL) has attracted considerable attention for its potential to improve detection probability. However, its performance heavily relies on the availability of sufficiently labeled datasets, which are often difficult to acquire in complex sea clutter environments. Therefore, this letter introduces the complex-valued hybrid spatiotemporal network (CVHSTNet), a novel maritime radar target detection method designed for low-data scenarios that uses time-frequency (TF) representations of radar echoes as inputs. To mitigate the overfitting issue, CVHSTNet is intentionally designed with a shallow architecture, integrating a three-layer complex-valued convolutional neural network (CV-CNN) with a one-layer CV bidirectional long short-term memory (CV-BiLSTM) network. Unlike existing real-valued models that overlook phase information, our method operates directly on CV data to capture the complete signal representation. More importantly, this hybrid architecture enables the network to effectively exploit both spatial and temporal characteristics, thereby further enhancing feature representations. Comprehensive experiments on 40 datasets from the IPIX database demonstrate that with only 50 samples per range cell for training, the proposed method achieves a detection probability exceeding 90% in 37 of 40 datasets, with a false alarm rate (FAR) of <inline-formula> <tex-math>$10^{-3}$ </tex-math></inline-formula>. To the best of our knowledge, this is the first time a DL-based approach has demonstrated the ability to distinguish between small floating targets and sea clutter under limited labeled radar data conditions.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2025-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143839907","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
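A common way to realize a complex-valued convolution with real-valued layers, sketched below, applies the complex product channelwise so that phase information in the TF input is preserved: (a + ib)(w_r + i w_i) = (a w_r − b w_i) + i(a w_i + b w_r). This is a generic building block, not the authors' CVHSTNet; layer sizes are hypothetical.

```python
# Minimal sketch: complex-valued convolution built from two real convolutions.
import torch
import torch.nn as nn

class ComplexConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.conv_r = nn.Conv2d(in_ch, out_ch, k, padding=k // 2)  # real weights
        self.conv_i = nn.Conv2d(in_ch, out_ch, k, padding=k // 2)  # imaginary weights

    def forward(self, real, imag):
        # (a + ib) * (w_r + i w_i) = (a w_r - b w_i) + i (a w_i + b w_r)
        out_real = self.conv_r(real) - self.conv_i(imag)
        out_imag = self.conv_i(real) + self.conv_r(imag)
        return out_real, out_imag

real = torch.randn(1, 1, 64, 64)  # real part of a TF map
imag = torch.randn(1, 1, 64, 64)  # imaginary part of a TF map
r, i = ComplexConv2d(1, 8)(real, imag)
print(r.shape, i.shape)  # torch.Size([1, 8, 64, 64]) twice
```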
Quad-Pol ISAR Data Reconstruction From Compact-Pol Mode Based on Polarimetric and Spatial Feature Aggregation Network
Zi-Jian Pei;Ming-Dian Li;Si-Wei Chen
{"title":"Quad-Pol ISAR Data Reconstruction From Compact-Pol Mode Based on Polarimetric and Spatial Feature Aggregation Network","authors":"Zi-Jian Pei;Ming-Dian Li;Si-Wei Chen","doi":"10.1109/LGRS.2025.3557943","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3557943","url":null,"abstract":"The quad polarimetric (Quad-Pol) and compact polarimetric (Compact-Pol) inverse synthetic aperture radar (ISAR) are two main configuration modes for space targets imaging. Compared with Quad-Pol ISAR mode, the Compact-Pol ISAR mode can reduce radar system complexity at the price of polarimetric information loss. In order to fulfill this gap, this work dedicates to reconstruct the Quad-Pol information of space targets from the Compact-Pol mode, thereby reconciling the need for system simplicity with the retention of abundant Quad-Pol data. The main idea is to design a Quad-Pol reconstruction network (QPRNet) based on the Compact-Pol ISAR data characteristics. First, a group feature fusion (GFF) module is designed to collect the coupling polarimetric features between the channels of Compact-Pol ISAR data, making the network better learn the implicit mapping relationships between polarimetric channels. Then, the receptive field expansion (RFE) module is used to obtain large-scale spatial features through the network, which is beneficial to extract polarimetric modulation mechanism between adjacent components of spatial targets. Experimental studies have been carried out in Quad-Pol ISAR data reconstruction. Comparison results show that the Quad-Pol ISAR data reconstructed by the proposed method are more similar to the truth. Moreover, compared with the state of the arts, the mean absolute error (MAE), coherence index (COI), and peak signal-to-noise ratio (PSNR) have improved by 4.22%, 4.64%, and 2.01%, respectively.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2025-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143871075","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
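For context, generic MAE and PSNR computations between a reconstructed and a reference channel magnitude are sketched below; the letter's exact definitions, in particular of the coherence index (COI), may differ from these common forms.

```python
# Minimal sketch: MAE and PSNR between a reconstructed and a reference channel.
import numpy as np

def mae(rec, ref):
    return np.mean(np.abs(rec - ref))

def psnr(rec, ref):
    mse = np.mean((rec - ref) ** 2)
    peak = ref.max()
    return 10.0 * np.log10(peak ** 2 / mse) if mse > 0 else np.inf

ref = np.abs(np.random.randn(64, 64))        # reference channel magnitude (illustrative)
rec = ref + 0.05 * np.random.randn(64, 64)   # imperfect reconstruction
print(f"MAE={mae(rec, ref):.4f}, PSNR={psnr(rec, ref):.2f} dB")
```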
Federated Learning for Remote Sensing Image Classification Using Sparse Image Representations
Christina Kopidaki;Grigorios Tsagkatakis;Panagiotis Tsakalides
{"title":"Federated Learning for Remote Sensing Image Classification Using Sparse Image Representations","authors":"Christina Kopidaki;Grigorios Tsagkatakis;Panagiotis Tsakalides","doi":"10.1109/LGRS.2025.3557579","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3557579","url":null,"abstract":"The increasing scale and complexity of remote sensing (RS) observations demand distributed processing to effectively manage the vast volumes of data generated. However, distributed processing presents significant challenges, including bandwidth limitations, high latency, and privacy concerns, especially when transmitting high-resolution images. To address these issues, we propose a novel scheme leveraging the encoder of a masked autoencoder (MAE) to generate associated embedding (CLS tokens) from masked images, which enables training deep learning models under federated learning (FL) scenarios. This approach enables the transmission of compact image patches instead of full images to processing nodes, drastically reducing bandwidth usage. On the processing nodes, classifiers are trained with the CLS tokens, and model weights are aggregated using FedAvg and FedProx FL algorithms. Experimental results on benchmark datasets demonstrate that the proposed approach significantly reduces data transmission requirements while maintaining and even surpassing the accuracy of systems with access to full data.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2025-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10948516","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143879499","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
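Since the abstract names FedAvg explicitly, the sketch below shows the standard FedAvg weighted averaging of client classifier weights; the client heads, CLS-token dimension (768), and sample counts are illustrative assumptions, not the paper's setup.

```python
# Minimal sketch: FedAvg aggregation of client classifier weights.
import torch
import torch.nn as nn

def fedavg(state_dicts, num_samples):
    """Weighted average of client state dicts, weighted by local sample count."""
    total = float(sum(num_samples))
    avg = {}
    for key in state_dicts[0]:
        avg[key] = sum(sd[key] * (n / total) for sd, n in zip(state_dicts, num_samples))
    return avg

# Hypothetical client heads trained locally on 768-dim CLS tokens.
clients = [nn.Linear(768, 10) for _ in range(3)]
global_head = nn.Linear(768, 10)
global_head.load_state_dict(fedavg([c.state_dict() for c in clients], [120, 300, 80]))
print(global_head.weight.shape)  # torch.Size([10, 768])
```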
Small-Sample SAR Target Recognition Using a Multimodal Views Contrastive Learning Method
Yilin Li;Chengyu Wan;Xiaoyan Zhou;Tao Tang
{"title":"Small-Sample SAR Target Recognition Using a Multimodal Views Contrastive Learning Method","authors":"Yilin Li;Chengyu Wan;Xiaoyan Zhou;Tao Tang","doi":"10.1109/LGRS.2025.3557534","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3557534","url":null,"abstract":"Self-supervised contrastive learning methods offer a promising approach to the small-sample synthetic aperture radar (SAR) automatic target recognition (ATR) problem by autonomously acquiring valuable visual representation from unlabeled data. However, current self-supervised contrastive learning methods primarily generate supervisory signals through augmented views of the original images, thereby underusing the rich information inherent in SAR images. To overcome this limitation, we integrate SAR targets’ geometric and physical properties, as captured in SAR target segmentation semantic maps and attribute scattering center reconstruction maps into the contrastive learning stage. Moreover, we propose a novel multimodal views’ contrastive learning method which contains two stages. In the contrastive learning stage, we leverage a large amount of unlabeled data for both intramodal and cross-modal contrastive learning, thereby transferring discriminative information from these two views to the original image features to learn the feature representation. In the supervised training stage, the linear classifier is trained using a small number of labeled samples to partition the feature representation space and migrate to the downstream recognition task. The experimental results demonstrate that the proposed method achieves superior recognition performance in SAR small-sample ATR tasks and exhibits robust generalization capabilities, thereby providing additional discriminative information that augments target representation.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2025-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143845422","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
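A generic InfoNCE-style cross-modal contrastive loss between SAR image embeddings and embeddings of an auxiliary view (e.g., a segmentation map or scattering-center reconstruction) is sketched below to illustrate the mechanism; it is not the authors' exact intramodal/cross-modal formulation, and the embedding dimension and temperature are hypothetical.

```python
# Minimal sketch: InfoNCE cross-modal contrastive loss between two views.
import torch
import torch.nn.functional as F

def info_nce(z_img, z_aux, temperature=0.1):
    z_img = F.normalize(z_img, dim=1)
    z_aux = F.normalize(z_aux, dim=1)
    logits = z_img @ z_aux.t() / temperature   # (B, B) similarity matrix
    targets = torch.arange(z_img.size(0))      # matched pairs lie on the diagonal
    return F.cross_entropy(logits, targets)

z_img = torch.randn(8, 128)  # embeddings of original SAR chips (illustrative)
z_aux = torch.randn(8, 128)  # embeddings of the auxiliary modality view
print(info_nce(z_img, z_aux).item())
```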