IEEE Geoscience and Remote Sensing Letters (A Publication of the IEEE Geoscience and Remote Sensing Society): Latest Articles

Semi-Supervised Triple-GAN With Similarity Constraint for Automatic Underground Object Classification Using Ground Penetrating Radar Data
IF 4.4
Li Liu;Yongcheng Zhou;Hang Xu;Jingxia Li;Jianguo Zhang;Lijun Zhou;Bingjie Wang
DOI: 10.1109/LGRS.2025.3609444 | IEEE Geoscience and Remote Sensing Letters, vol. 22, pp. 1-5 | Published: 2025-09-12

Abstract: Automatic underground object classification based on deep learning (DL) has been widely used in ground penetrating radar (GPR) applications, but its performance depends heavily on sufficient labeled training data. In GPR work, large amounts of labeled data are difficult to obtain because manual annotation is time-consuming and experience-dependent. To address the issue of limited labeled data, we propose a novel semi-supervised learning (SSL) method for urban-road underground multiclass object classification that exploits abundant unlabeled data alongside limited labeled data to enhance classification performance. We adapt the triple-GAN (TGAN) model by introducing a similarity constraint, which is tied to the geometric features of GPR data and helps produce high-quality generated images. Experimental results on laboratory and field data show higher accuracy than representative baseline methods under limited labeled data.
Citations: 0
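The similarity constraint above is tied to the geometric features of GPR data. The abstract does not spell these out, but the classic geometry behind a buried point target in a GPR B-scan is a hyperbolic two-way travel-time curve. A minimal sketch of that geometry (function name and parameter values are illustrative, not from the paper):

```python
import numpy as np

def gpr_travel_time(x, x0, depth, v):
    """Two-way travel time of a point target buried at (x0, depth),
    observed by a zero-offset antenna at surface position x.
    v is the wave velocity in the ground (e.g., m/ns)."""
    return 2.0 * np.sqrt(depth**2 + (x - x0) ** 2) / v

# The apex of the hyperbola sits directly above the target.
x = np.linspace(-1.0, 1.0, 201)
t = gpr_travel_time(x, x0=0.0, depth=0.5, v=0.1)  # v ~ 0.1 m/ns in dry soil
```

The apex position and curvature of this hyperbola encode the target's location and the medium velocity, which is why geometric constraints can steer a generator toward realistic GPR images.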
Large-Scale Traveling Ionospheric Disturbances Over North America and Europe During the May 2024 Extreme Geomagnetic Storm
IF 4.4
Long Tang;Hong Zhang;Yumei Li;Fan Xu;Fang Zou
DOI: 10.1109/LGRS.2025.3608704 | IEEE Geoscience and Remote Sensing Letters, vol. 22, pp. 1-5 | Published: 2025-09-11

Abstract: This study investigates large-scale traveling ionospheric disturbances (LSTIDs) over North America and Europe associated with the intense geomagnetic storm of May 2024, using total electron content (TEC) data derived from ground-based Global Navigation Satellite System (GNSS) stations. The observed LSTIDs in both regions exhibited an unusually prolonged duration, lasting over 10 h from 17:00 UT on May 10 to 03:30 UT on May 11, 2024. This extended duration may be attributed to continuous triggering of LSTIDs by auroral energy input during the storm. Additionally, significant differences in propagation characteristics, including velocities, azimuths, wavelengths, and traveling distances, were observed between the two regions. These disparities are likely due to variations in the magnitude of energy input in the polar regions and to the local time difference between North America (14:00 LT) and Europe (19:00 LT), which produces a diurnal electron-density contrast that influences LSTID propagation.
Citations: 0
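The study above derives LSTID signatures from GNSS TEC series; its exact processing chain is not given in the abstract, but a common first step in such work is isolating perturbation TEC by removing a slowly varying background. A minimal sketch under that assumption (the window length is illustrative):

```python
import numpy as np

def perturbation_tec(tec, window=121):
    """Detrend a TEC time series by subtracting a centered running mean,
    leaving the wave-like perturbations (a crude band-pass).
    Edge samples are biased because the mean kernel is truncated there."""
    kernel = np.ones(window) / window
    trend = np.convolve(tec, kernel, mode="same")
    return tec - trend

# 24 h of 30 s samples: diurnal background ramp + a 1 h wave of 0.5 TECU.
t = np.arange(0, 86400, 30.0)
tec = 20 + 5e-5 * t + 0.5 * np.sin(2 * np.pi * t / 3600.0)
dtec = perturbation_tec(tec)
```

With a window close to the wave period, the running mean tracks the background while mostly averaging out the oscillation, so the residual retains the disturbance amplitude.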
KD-RSCC: A Karras Diffusion Framework for Efficient Remote Sensing Change Captioning
IF 4.4
Xiaofei Yu;Jie Ma;Liqiang Qiao
DOI: 10.1109/LGRS.2025.3608489 | IEEE Geoscience and Remote Sensing Letters, vol. 22, pp. 1-5 | Published: 2025-09-10

Abstract: Remote sensing image change captioning (RSICC) is a challenging task that involves describing surface changes between bitemporal or multitemporal satellite images in natural language, requiring both fine-grained visual understanding and expressive language generation. Transformer-based and long short-term memory (LSTM)-based models have shown promising results in this domain, but they can struggle to generate flexible and diverse captions, particularly when training data are limited or imbalanced. Diffusion models provide richer textual outputs but are often constrained by long inference times. To address these issues, we propose KD-RSCC, a novel diffusion-based framework for efficient and expressive remote sensing change captioning. It uses the Karras sampling method to significantly reduce the number of steps required during inference while preserving the quality and diversity of the generated captions. In addition, we introduce a large language model (LLM)-based evaluation strategy, G-Eval_RSCC, for a more comprehensive assessment of the semantic accuracy, fluency, and linguistic diversity of the generated descriptions. Experimental results demonstrate that KD-RSCC achieves a strong balance between generation quality and inference speed, enhancing the flexibility and readability of its outputs. The code and supplementary materials are available at https://github.com/Fay-Y/KD_RSCC
Citations: 0
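KD-RSCC's speedup rests on Karras-style sampling. The abstract gives no implementation detail, but the noise schedule from Karras et al.'s EDM work is standard: a small number of noise levels spaced uniformly in sigma^(1/rho), which concentrates steps where they matter most. A sketch with typical (assumed) parameter values:

```python
import numpy as np

def karras_sigmas(n, sigma_min=0.002, sigma_max=80.0, rho=7.0):
    """EDM noise schedule: n levels from sigma_max down to sigma_min,
    uniformly spaced in sigma**(1/rho) space, so steps cluster at low
    noise where fine detail is resolved."""
    ramp = np.linspace(0.0, 1.0, n)
    lo, hi = sigma_min ** (1 / rho), sigma_max ** (1 / rho)
    return (hi + ramp * (lo - hi)) ** rho

sigmas = karras_sigmas(18)  # far fewer steps than the ~1000 of vanilla DDPM
```

Sampling then walks the denoiser down this short, monotonically decreasing ladder of noise levels instead of the long uniform schedule of earlier diffusion samplers.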
RoGLSNet: An Efficient Global-Local Scene Awareness Network With Rotary Position Embedding for Remote Image Segmentation
IF 4.4
Xiaosheng Yu;Weiqi Bai;Jubo Chen;Jiawei Huang;Zhuoqun Fang;Zhaokui Li
DOI: 10.1109/LGRS.2025.3607840 | IEEE Geoscience and Remote Sensing Letters, vol. 22, pp. 1-5 | Published: 2025-09-09

Abstract: Accurate segmentation of very high-resolution remote sensing images is vital for downstream tasks. Most semantic segmentation methods fail to fully account for the inherent characteristics of these images, such as intricate backgrounds, significant intraclass variance, and the spatial interdependence of geographic object distributions. To address these challenges, we propose RoGLSNet, an efficient global-local scene awareness network with rotary position embedding. Specifically, we introduce a dynamic global filter (DGF) module that adaptively selects frequency components, mitigating interference from background noise. To handle high intraclass variance, the class center aware block (CCAB) performs class-level contextual modeling with spatial information integration. Additionally, rotary position embedding (RoPE) is incorporated into vanilla attention to indirectly model the positional and distance relationships of geographic target objects. Extensive experiments on two widely used datasets demonstrate that RoGLSNet outperforms state-of-the-art (SOTA) segmentation methods. The code is available at https://github.com/bai101315/RoGLSNet
Citations: 0
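RoGLSNet folds rotary position embedding into vanilla attention. The abstract does not give the formulation, but RoPE itself is standard: channel pairs are rotated by an angle proportional to token position, so query-key dot products depend only on relative position. A minimal NumPy sketch of that mechanism (independent of the paper's code):

```python
import numpy as np

def rope(x, pos, base=10000.0):
    """Rotary position embedding for a (seq, dim) array, dim even.
    Channel i is paired with channel i + dim//2, and each pair is
    rotated by angle pos * base**(-i / (dim // 2))."""
    half = x.shape[-1] // 2
    freqs = base ** (-np.arange(half) / half)
    ang = pos[:, None] * freqs[None, :]
    cos, sin = np.cos(ang), np.sin(ang)
    x1, x2 = x[:, :half], x[:, half:]
    return np.concatenate([x1 * cos - x2 * sin, x1 * sin + x2 * cos], axis=-1)
```

Because rotations preserve length, RoPE leaves feature norms unchanged, and the dot product between a rotated query and key depends only on their positional offset, which is what lets attention encode distance relationships implicitly.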
Three-Dimensional Controlled-Source Electromagnetic Modeling Using Octree-Based Spectral Element Method
IF 4.4
Jintong Xu;Xiao Xiao;Jingtian Tang
DOI: 10.1109/LGRS.2025.3606934 | IEEE Geoscience and Remote Sensing Letters, vol. 22, pp. 1-5 | Published: 2025-09-08

Abstract: The controlled-source electromagnetic (CSEM) method is an important geophysical tool for sensing and studying subsurface conductivity structures, and advanced forward modeling techniques are crucial for the inversion and imaging of CSEM data. In this letter, we develop an accurate and efficient 3-D forward modeling algorithm for CSEM problems that combines the spectral element method (SEM) with octree meshes. The SEM's high-order basis functions provide accurate CSEM responses, while the octree meshes enable local refinement, discretizing models with fewer elements than the structured hexahedral meshes used in conventional SEM and handling complex geometries. Two synthetic examples verify the accuracy and efficiency of the algorithm, and its utility is demonstrated on a realistic model with complex geometry.
Citations: 0
Fluid Mobility Attribute Extraction Based on Optimized Second-Order Synchroextracting Wavelet Transform
IF 4.4
Yu Wang;Xiao Pan;Kang Shao;Ning Wang;Yuqiang Zhang;Xinyu Zhang;Chaoyang Lei;Xiaotao Wen
DOI: 10.1109/LGRS.2025.3607097 | IEEE Geoscience and Remote Sensing Letters, vol. 22, pp. 1-5 | Published: 2025-09-08

Abstract: The resolution of time-frequency-based seismic attributes depends mainly on the time-frequency analysis tool. This study proposes an improved second-order synchroextracting wavelet transform (SSEWT) that optimizes the scale parameters and the extraction scheme. Time-frequency computation on synthetic data shows a 5% improvement in efficiency. Applying the proposed transform to fluid mobility calculation on field data yields a 5.6% increase in computational efficiency and an 11.26% improvement in resolution, demonstrating its superior performance. Field data tests confirm that the proposed transform and the resulting fluid mobility attribute outperform conventional methods. Despite remaining computational challenges, the method offers significant advances in reservoir characterization and fluid detection.
Citations: 0
AFIMNet: An Adaptive Feature Interaction Network for Remote Sensing Scene Classification
IF 4.4
Xiao Wang;Yisha Sun;Pan He
DOI: 10.1109/LGRS.2025.3607205 | IEEE Geoscience and Remote Sensing Letters, vol. 22, pp. 1-5 | Published: 2025-09-08

Abstract: Convolutional neural network (CNN)-based methods have been widely applied in remote sensing scene classification (RSSC) and achieve remarkable results. However, traditional CNN methods are limited in extracting global features and capturing image semantics, especially in complex remote sensing (RS) scenes. The Transformer captures global features directly through self-attention but is weaker at handling local details, and methods that directly combine CNN and Transformer features suffer from feature imbalance and redundant information. To address these issues, we propose AFIMNet, an adaptive feature interaction network for RSSC. First, a dual-branch structure (based on ResNet34 and Swin-S) extracts local and global features from RS scene images. Second, an adaptive feature interaction module (AFIM) effectively enhances the interaction and correlation between local and global features. Third, a spatial-channel fusion module (SCFM) aggregates the interacted features, further strengthening feature representation. The proposed method is validated on three public RS datasets, and experimental results show that AFIMNet has stronger feature representation than current popular RS image classification methods, significantly improving classification accuracy. The source code will be publicly accessible at https://github.com/xavi276310/AFIMNet
Citations: 0
SADFF-Net: Scale-Aware Detection and Feature Fusion for Multiscale Remote Sensing Object Detection
IF 4.4
Runbo Yang;Huiyan Han;Shanyuan Bai;Yaming Cao
DOI: 10.1109/LGRS.2025.3606521 | IEEE Geoscience and Remote Sensing Letters, vol. 22, pp. 1-5 | Published: 2025-09-05

Abstract: Multiscale object detection in remote sensing imagery poses significant challenges, including substantial variations in object size, diverse orientations, and interference from complex backgrounds. To address these issues, we propose the scale-aware detection and feature fusion network (SADFF-Net), a novel detection framework that incorporates a multiscale contextual attention fusion (MCAF) module to enhance information exchange between feature layers and suppress irrelevant feature interference. In addition, SADFF-Net employs an adaptive spatial feature fusion (ASFF) module that improves semantic consistency across feature layers by assigning spatial weights at multiple scales. To enhance adaptability to scale variations, the regression head integrates deformable convolution, while the classification head uses depthwise separable convolutions to significantly reduce computational complexity without compromising detection accuracy. Extensive experiments on the DOTAv1 and DIOR_R datasets demonstrate that SADFF-Net outperforms current state-of-the-art methods in multiscale object detection.
Citations: 0
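The classification head above trades a dense k x k convolution for a per-channel k x k filter followed by a 1 x 1 pointwise mix. The parameter saving is easy to quantify (bias terms omitted; the channel counts here are illustrative, not taken from the letter):

```python
def conv_params(c_in, c_out, k):
    """Weights in a standard k x k convolution."""
    return c_in * c_out * k * k

def ds_conv_params(c_in, c_out, k):
    """Depthwise separable: one k x k filter per input channel,
    then a 1 x 1 pointwise convolution to mix channels."""
    return c_in * k * k + c_in * c_out

dense = conv_params(256, 256, 3)         # 589,824 weights
separable = ds_conv_params(256, 256, 3)  # 2,304 + 65,536 = 67,840 weights
```

For a 3 x 3 layer at 256 channels the separable form uses roughly 8.7 times fewer weights, which is where lightweight heads like this one recover their efficiency.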
Semantic Change Detection of Bitemporal Remote Sensing Images Using Frequency Feature Enhancement
IF 4.4
Renfang Wang;Kun Yang;Feng Wang;Hong Qiu;Yingying Huang;Xiufeng Liu
DOI: 10.1109/LGRS.2025.3605910 | IEEE Geoscience and Remote Sensing Letters, vol. 22, pp. 1-5 | Published: 2025-09-04

Abstract: Deep learning is a powerful technique for semantic change detection (SCD) of bitemporal remote sensing images. In this work, we improve SCD accuracy using deep learning with frequency feature enhancement (FFE). Specifically, we develop an FFE module that integrates the Fourier transform with attention mechanisms to enhance both binary change detection (BCD) and semantic segmentation, the two key components for high SCD accuracy. Experimental results on the SECOND and LandSat-SCD datasets demonstrate the effectiveness of the proposed method, which achieves high resolution for change boundaries.
Citations: 0
LSAR-Det: A Lightweight YOLOv11-Based Model for Ship Detection in SAR Images
IF 4.4
Pengxiong Zhang;Yi Jiang;Xinguo Zhu
DOI: 10.1109/LGRS.2025.3605993 | IEEE Geoscience and Remote Sensing Letters, vol. 22, pp. 1-5 | Published: 2025-09-04

Abstract: Owing to its superior recognition accuracy, deep learning has been widely adopted for synthetic aperture radar (SAR) ship detection. Nevertheless, significant variations in ship target scale challenge existing detection architectures, frequently leading to missed detections or false positives. Moreover, high-precision detection models are typically structurally complex and computationally intensive, consuming substantial hardware resources. In this letter, we introduce LSAR-Det, a novel SAR ship detection network designed to address these challenges. We propose a lightweight residual feature extraction (LRFE) module to construct the backbone network, enhancing feature extraction while reducing the number of parameters and floating-point operations (FLOPs). Furthermore, we design a lightweight cross-space convolution (LCSConv) module to replace the traditional convolution in the neck network, and incorporate a multiscale bidirectional feature pyramid network (M-BiFPN) for multiscale feature fusion with fewer parameters. The resulting model contains merely 0.985M parameters and requires only 3.3G FLOPs. Experimental results on the SAR ship detection dataset (SSDD) and the high-resolution SAR images dataset (HRSID) demonstrate that LSAR-Det outperforms other models, achieving detection accuracies of 98.2% and 91.8%, respectively, effectively balancing detection performance and model efficiency.
Citations: 0