IEEE Geoscience and Remote Sensing Letters (a publication of the IEEE Geoscience and Remote Sensing Society): Latest Articles

Test-Time Forward Model Adaptation for Seismic Deconvolution
IF 4.4
Peimeng Guan;Naveed Iqbal;Mark A. Davenport;Mudassir Masood
DOI: 10.1109/LGRS.2025.3598143 | Published 2025-08-12 | vol. 22, pp. 1-5

Abstract: Seismic deconvolution is essential for extracting layer information from noisy seismic data, but it is an ill-posed problem with nonunique solutions. Inspired by classical optimization approaches, model-based deep learning architectures, such as loop unrolling (LU) methods, unfold the optimization process into iterative steps and learn gradient updates from data. These architectures rely on well-defined forward models, but in real seismic deconvolution scenarios, these models are often inaccurate or unknown. Previous approaches have addressed model uncertainty by training robust networks, either passively or actively. However, these methods require a large number of adversarial examples and diverse data structures, often necessitating retraining for unseen forward model structures, which is resource-intensive. In contrast, we propose a more efficient test-time adaptation (TTA) method for the LU architecture, which refines the forward model during inference. This approach incorporates physical principles into the reconstruction process, enabling higher quality results without the need for costly retraining. The code is available at: https://github.com/InvProbs/A-adaptive-seis-deconv
Citations: 0
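The paper's LU architecture is not reproduced here, but its core idea — refining an uncertain convolutional forward model (the seismic wavelet) at inference time by descending the data misfit while the reconstruction is held fixed — can be sketched in plain NumPy. Everything below (function names, step size, iteration count) is an illustrative assumption, not the authors' code:

```python
import numpy as np

def conv_matrix(x, k):
    """Toeplitz-style matrix X such that X @ w == np.convolve(x, w) for len(w) == k."""
    n = len(x)
    X = np.zeros((n + k - 1, k))
    for j in range(k):
        X[j:j + n, j] = x  # column j is x shifted down by j samples
    return X

def adapt_forward_model(y, x_hat, w0, lr=1e-3, n_iters=200):
    """Test-time refinement of the wavelet w (illustrative sketch):
    gradient descent on 0.5 * ||conv(x_hat, w) - y||^2 with the
    reflectivity estimate x_hat held fixed."""
    X = conv_matrix(x_hat, len(w0))
    w = np.asarray(w0, dtype=float).copy()
    for _ in range(n_iters):
        w -= lr * X.T @ (X @ w - y)  # gradient step on the wavelet only
    return w
```

In the actual LU pipeline the reconstruction and the forward model would be updated in alternation; this sketch isolates only the forward-model update.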
Toward Robust Cross-View Vehicle Localization in Complex Urban Environments
Shaojie Wang;Weichao Wu;Zhiyuan Guo;Chen Bai
DOI: 10.1109/LGRS.2025.3597961 | Published 2025-08-12 | vol. 22, pp. 1-5

Abstract: In urban environments, global navigation satellite system (GNSS)-based localization can be unreliable due to signal occlusion and multipath interference. As a result, cross-view image-based geo-localization has attracted increasing interest for its ability to operate without GNSS. However, dense buildings, occlusions, and dynamic objects make it difficult to reliably match ground-view and aerial images in urban areas. To address this, we propose a cross-view vehicle localization approach that uses road structures as stable and informative anchors. First, we transform ground-level images into the bird's-eye view (BEV) domain to reduce viewpoint disparities and achieve consistent spatial alignment with satellite imagery. Next, we introduce a multidimensional feature matching module that captures deep structural information of road elements, including their continuity and geometric characteristics, by jointly integrating semantic, topological, and morphological cues. Moreover, a center-focused attention mechanism is employed to prioritize the central region of the image, improving alignment accuracy and suppressing background noise. Experiments on the nuScenes and Argoverse datasets demonstrate that our method consistently outperforms existing approaches across diverse urban scenes and spatial sampling conditions, highlighting its effectiveness and robustness in real-world geo-localization scenarios.
Citations: 0
DCONet: A Dual-Task Collaborative Optimization Network for Infrared Small Target Detection
Yu Zhang;Yifan Xu;Juan Lyu;Guoliang Gong;Gang Chen;Sai Ho Ling
DOI: 10.1109/LGRS.2025.3597969 | Published 2025-08-12 | vol. 22, pp. 1-5

Abstract: Infrared small target detection is crucial in applications such as military reconnaissance and remote sensing. However, due to the targets' small size and high coupling with complex backgrounds, existing methods still face challenges in precise detection: they predominantly focus on target feature learning while neglecting the critical role of background modeling for small target decoupling. To this end, we propose a dual-task collaborative optimization network (DCONet), which decouples the task into background estimation and target segmentation using a multistage iterative optimization strategy. First, considering the significant directional distribution characteristics of infrared backgrounds, we propose a direction-aware background estimation module (DBEM) to capture directional features, such as clouds and trees, thereby generating an initial background estimation. Second, we propose a background suppression gating unit (BSGU), which employs a gating mechanism and a channel-level adjustment factor to dynamically suppress background noise based on the preliminary background estimation, thereby generating the target segmentation result. Finally, the estimated background, the target segmentation, and the original image reconstructed from them are propagated to the next stage for further iterative optimization. Experimental results show that DCONet outperforms existing methods across three public datasets. The source code is available at https://github.com/tustAilab/DCONet
Citations: 0
SFBDA: A Semantic-Decoupled Data Augmentation Framework for Infrared Few-Shot Object Detection on UAVs
Zhenhai Weng;Weijie He;Jianfeng Lv;Dong Zhou;Zhongliang Yu
DOI: 10.1109/LGRS.2025.3597530 | Published 2025-08-11 | vol. 22, pp. 1-5

Abstract: Few-shot object detection (FSOD) is a critical frontier in computer vision research. However, infrared (IR) FSOD presents significant technical challenges, primarily due to: 1) few annotated training samples and 2) the low-texture nature of thermal imaging. To address these issues, we propose a semantic-guided foreground–background decoupling augmentation (SFBDA) framework. This method includes an instance-level foreground separation (ILFS) module that utilizes the segment anything model (SAM) to separate objects, as well as a semantic-constrained background generation network that employs adversarial learning to synthesize contextually compatible backgrounds. To address the insufficient scenario diversity of existing uncrewed aerial vehicle (UAV)-based IR object detection datasets, we introduce multiscene IR UAV object detection (MSIR-UAVDET), a novel multiscene IR UAV detection benchmark. The dataset encompasses 16 object categories across diverse environments (terrestrial, maritime, and aerial). To validate the efficacy of the proposed data augmentation methodology, we integrated our approach with existing FSOD frameworks and conducted comparative experiments benchmarking it against existing data augmentation methods. The code and dataset are publicly available at: https://github.com/Sea814/SFBDA.git
Citations: 0
TSFD-Net: Two-Stage Feature Decoupling Network for Task and Parameter Discrepancies in RSOD
Xinghui Song;Chunyi Chen;Gen Li;Yanan Liu;Donglin Jing;Jun Peng
DOI: 10.1109/LGRS.2025.3597597 | Published 2025-08-11 | vol. 22, pp. 1-5

Abstract: Deep learning excels at object detection in natural images, but remote sensing images pose challenges such as multidirectional objects and neighborhood interference. Existing methods use shared features for classification and regression, causing task interference: classification needs translation/rotation-invariant features, while regression requires translation/rotation-equivariant features. Additionally, regression parameters (e.g., center, shape, and angle) demand distinct feature properties. To address this, we propose TSFD-Net, featuring: 1) a task differential decoupling module (TDDM), which decouples task-specific features via parallel CNN-Transformer branches, and 2) a parameter differential decoupling module (PDDM), which designs specialized regressors for distinct parameters (e.g., angle versus center/shape). Together, TDDM and PDDM form the two-stage feature decoupling (TSFD) structure. We further introduce dynamic cascade activation masks (DCAMs), leveraging bounding box feedback to enhance target focus and suppress neighborhood noise. TSFD-Net achieves state-of-the-art results on DOTA-v1.0 (81.37% mAP), validating its efficacy.
Citations: 0
KDA: Knowledge Distillation Adversarial Framework With Vision Foundation Models for Landslide Segmentation
Shijie Wang;Lulin Li;Xuan Dong;Lei Shi;Pin Tao
DOI: 10.1109/LGRS.2025.3597685 | Published 2025-08-11 | vol. 22, pp. 1-5

Abstract: Landslides pose severe threats to infrastructure and safety, and their segmentation in remote sensing imagery remains challenging due to irregular boundaries, scale variation, and complex terrain. Traditional lightweight models often struggle to capture rich semantic features under these conditions. To address this, we leverage vision foundation models (VFMs) as teachers and propose a knowledge distillation adversarial (KDA) framework to transfer high-capacity knowledge into compact student models. Additionally, we introduce a dynamic cross-layer fusion (DCF) decoder to enhance global–local feature interaction. The experimental results demonstrate that, compared to the previous best-performing model SegNeXt [89.92% precision and 84.78% mean intersection over union (mIoU)], our method achieves a precision of 91.93% and an mIoU of 86.53%, yielding improvements of 2.01% and 1.75%, respectively. Source code is available at https://github.com/PreWisdom/KDA
Citations: 0
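The distillation component above builds on the standard soft-target loss; a minimal sketch of temperature-scaled knowledge distillation (the classic Hinton-style KL loss, not the paper's adversarial variant, with all names illustrative) is:

```python
import numpy as np

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Soft-target KD loss: KL(teacher_T || student_T) scaled by T^2,
    where both logit sets are softened by temperature T."""
    def softmax(z):
        e = np.exp(z - z.max(axis=-1, keepdims=True))  # numerically stable
        return e / e.sum(axis=-1, keepdims=True)
    ps = softmax(np.asarray(student_logits) / T)
    pt = softmax(np.asarray(teacher_logits) / T)
    # KL divergence summed over classes and batch; T^2 keeps gradient
    # magnitudes comparable across temperatures
    return float(np.sum(pt * (np.log(pt + 1e-12) - np.log(ps + 1e-12))) * T * T)
```

A higher T exposes more of the teacher's "dark knowledge" (relative probabilities of wrong classes), which is what the compact student learns from.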
Shipborne HFSWR Virtual Aperture Extension Method Based on RD-Domain Time–Frequency Information Fusion
Youmin Qu;Xingpeng Mao;Heyue Huang;Yiming Wang
DOI: 10.1109/LGRS.2025.3597207 | Published 2025-08-11 | vol. 22, pp. 1-5

Abstract: Due to the limited platform space of shipborne high-frequency surface wave radar (HFSWR), the aperture of the antenna array is reduced, thereby degrading the radar's direction of arrival (DOA) estimation performance. Traditional aperture extension methods, based on single-domain information, limit the aperture extension ability and are not applicable to scenarios where nontarget signals dominate. To address the above issues, this letter proposes an aperture extension method based on range-Doppler domain time–frequency information fusion (RDTFF), which utilizes multiple carrier frequencies and time division techniques. Compared with traditional methods, the proposed method achieves target echo separation and extraction through RD-domain processing, thereby extending the aperture and making the method suitable for shipborne HFSWR scenarios. In addition, by fusing the time–frequency information of the target echo, a larger virtual aperture can be obtained, which further improves the DOA estimation performance of the array.
Citations: 0
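The RDTFF processing itself is radar-specific, but the payoff of a larger (virtual) aperture can be illustrated with a conventional Bartlett DOA spectrum on a uniform linear array. This generic sketch assumes half-wavelength element spacing and has no connection to the proposed method beyond illustrating the aperture–resolution tradeoff:

```python
import numpy as np

def steering(n_elems, theta_deg, d=0.5):
    """Uniform-linear-array steering vector; d is spacing in wavelengths."""
    return np.exp(2j * np.pi * d * np.arange(n_elems) * np.sin(np.radians(theta_deg)))

def bartlett_spectrum(snapshots, angles_deg, d=0.5):
    """Conventional (Bartlett) DOA spectrum from snapshots of shape
    (n_elems, n_snapshots): scan a steering vector over candidate angles."""
    n_elems = snapshots.shape[0]
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]  # sample covariance
    spec = np.empty(len(angles_deg))
    for i, a in enumerate(angles_deg):
        v = steering(n_elems, a, d)
        spec[i] = np.real(v.conj() @ R @ v) / n_elems
    return spec
```

Doubling the number of elements (i.e., the aperture, physical or virtual) roughly halves the mainlobe width around the true direction, which is the resolution gain a virtual aperture extension targets.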
Semantic Change Detection of Carbon Sources and Sinks via Spatiotemporal Attention and Multiscale Fusion
Yang Liu;Haige Xu;Wenqian Cao;Cheng Liu
DOI: 10.1109/LGRS.2025.3597281 | Published 2025-08-11 | vol. 22, pp. 1-5

Abstract: High-resolution remote sensing image semantic change detection (SCD) helps to accurately capture the spatial distribution and dynamic evolution of carbon sources and sinks by identifying changes in land cover types. However, existing methods suffer from the loss of spatial details and insufficient ability to model global features. Therefore, this letter proposes an SCD model based on spatiotemporal attention perception and multiscale fusion (SC-SCDNet). The model introduces a multiscale efficient cross-attention (MCA) block in the encoder to bridge the semantic gap, and integrates a feature enhancement module (FEM) to enhance the semantic expression of small targets using multibranch dilated convolution. In addition, a spatiotemporal channel window interaction module (TBCM) is designed to capture global information from both spatial and channel dimensions, enhancing spatial detail expression. Experimental results show that SC-SCDNet achieves state-of-the-art performance on the SECOND and Landsat-SCD datasets, providing an improved technical approach for change detection of carbon sources and sinks.
Citations: 0
Frequency-Adaptive Boundary-Guided Network for Multiclass Raft Aquaculture Segmentation in Remote Sensing Images
Yan Lu;Xuhui Yi;Binge Cui
DOI: 10.1109/LGRS.2025.3596932 | Published 2025-08-08 | vol. 22, pp. 1-5

Abstract: Accurate segmentation of multiclass raft aquaculture areas (RAAs) from high-resolution remote sensing images is challenging due to spectral similarity across classes, boundary ambiguity caused by complex marine conditions, and intraregion inconsistency. To address these challenges, this letter proposes PBFANet, a deep segmentation network that integrates boundary guidance and frequency-adaptive filtering mechanisms. A gated boundary–semantic fusion module (GBSFM) dynamically combines pseudo-boundary cues with semantic features to enhance edge localization, while the consistency-aware fusion module (CAFM) employs an adaptive low-pass filter (LPF) and a high-pass filter (HPF) to suppress intraregion noise and restore boundary details. Notably, CAFM leverages the distinct frequency-domain characteristics of different aquaculture classes, such as dense high-frequency textures in laver areas and low-frequency dominance in fish cage regions, to improve class separability. Experiments on GF-1 satellite imagery covering laver, hijiki, and fish cages demonstrate that PBFANet achieves a mean F1-score of 0.914 and a mean intersection over union (mIoU) of 82.78%, outperforming state-of-the-art methods in classification accuracy, boundary precision, and segmentation consistency.
Citations: 0
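CAFM's filters are learned and adaptive; a fixed-cutoff, FFT-domain version of the low-/high-pass decomposition such a module builds on can be sketched as follows (the ideal radial mask and the cutoff value are illustrative assumptions, not the paper's design):

```python
import numpy as np

def frequency_split(feat, cutoff=0.25):
    """Split a 2-D feature map into low- and high-frequency parts using an
    ideal radial mask in the FFT domain. cutoff is in cycles/pixel (Nyquist = 0.5)."""
    H, W = feat.shape
    F = np.fft.fftshift(np.fft.fft2(feat))
    fy = np.fft.fftshift(np.fft.fftfreq(H))
    fx = np.fft.fftshift(np.fft.fftfreq(W))
    r = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)  # radial frequency grid
    low = np.real(np.fft.ifft2(np.fft.ifftshift(F * (r <= cutoff))))
    high = feat - low  # exact complement, so low + high reconstructs feat
    return low, high
```

Smooth regions (e.g., fish cage interiors in the paper's setting) survive in the low band, while dense textures and boundaries land in the high band, which is the separability cue the module exploits.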
Robust ISAR Autofocus via Newton-Based Tsallis Entropy Minimization
Min-Seok Kang
DOI: 10.1109/LGRS.2025.3596922 | Published 2025-08-08 | vol. 22, pp. 1-5

Abstract: The autofocus technology constitutes a critical component in the process of inverse synthetic aperture radar (ISAR) imaging, as its performance significantly impacts the quality of the resulting radar imagery. Among existing autofocus techniques, those based on the minimum entropy criterion have demonstrated strong robustness and are widely applied in ISAR imaging applications. Nevertheless, the minimum Tsallis entropy-based autofocus (MTEA) method is often burdened with substantial computational demands, primarily due to the complex formulation of image entropy and the iterative search required for optimizing phase error correction. To address this limitation, this study presents a fast MTEA algorithm that incorporates the Newton method for efficient optimization. Additionally, the Levenberg–Marquardt (LM) modification is integrated into the MTEA framework to further enhance computational efficiency. Both the numerical analysis of computational complexity and the experimental results indicate that the proposed method achieves a notable improvement in computational efficiency over the MTEA, while maintaining the focusing quality of the reconstructed images.
Citations: 0
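The objective minimized in Tsallis-entropy autofocus is the Tsallis entropy of the image's normalized intensity distribution; a minimal sketch of the objective alone (not the Newton/LM optimizer or the phase-error model) is:

```python
import numpy as np

def tsallis_entropy(img, q=2.0):
    """Tsallis entropy S_q = (1 - sum_i p_i^q) / (q - 1) of an image's
    normalized intensity distribution p_i = |I_i|^2 / sum_j |I_j|^2.
    Sharper (sparser) images concentrate energy and score lower."""
    p = np.abs(np.asarray(img, dtype=complex)) ** 2
    p = p / p.sum()
    return float((1.0 - np.sum(p ** q)) / (q - 1.0))
```

An autofocus loop would parameterize a phase-error correction, reimage, and pick the correction that drives this value down; the paper's contribution is doing that search efficiently with Newton steps plus an LM modification.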