{"title":"Seismic Statistical Prediction for Fracture Azimuth Based on Fourier Series","authors":"Zhan Wang;Xingyao Yin;Zhengqian Ma;Yaming Yang;Wei Xiang","doi":"10.1109/LGRS.2025.3561743","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3561743","url":null,"abstract":"The azimuth of fractures has long been a subject of interest for geophysicists, and it holds paramount importance in the exploration and development of oil and gas resources. However, traditional fracture azimuth prediction methods heavily rely on seismic data quality and well-logging data, often encountering severe noise interference and 90° ambiguity. This makes fracture azimuth prediction challenging in areas with complex geological structures. A method for seismic statistical prediction of fracture azimuth based on the Fourier series has been proposed to address these issues. First, the Rüger approximation is rewritten into Fourier series form, combining parameters with high linear correlation to mitigate the ill-conditioning of the coefficient matrix. Second, construct a complex representation of fracture azimuth and initially adjust the sign based on the characteristic that the azimuthal period of the fourth-order Fourier coefficient is <inline-formula> <tex-math>$pi $ </tex-math></inline-formula>/2. Third, considering that the fourth-order Fourier coefficients are susceptible to noise, a directional statistical method is introduced to enhance the stability of fracture azimuth prediction. Then, by analyzing the relationship between second- and fourth-order Fourier coefficients under saturated fluid and gas-filled conditions, the Welch t-test, suitable for data with nonhomogeneous variance, is introduced to eliminate the influence of fluid type on fracture azimuth prediction. Numerical experiments and field data demonstrate that the proposed method overcomes the 90° ambiguity inherent in conventional fracture azimuth prediction, proving its stability and effectiveness in areas with severe structural variations.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2025-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143938016","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Semantic Segmentation of Multimodal Optical and SAR Images With Multiscale Attention Network","authors":"Dongdong Xu;Jin Qian;Hao Feng;Zheng Li;Yongcheng Wang","doi":"10.1109/LGRS.2025.3561747","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3561747","url":null,"abstract":"The joint semantic segmentation of multimodal remote sensing (RS) images can make up for the problem of insufficient features of single-modal images and effectively improve classification accuracy. Some deep learning methods have achieved good performance, but they face problems such as complex network structure, large number of parameters, and deployment difficulty. In this letter, more attention is paid to front-end and branch-level feature transformation to obtain multiscale semantic information. The multiscale dilated extraction module (MDEM) is constructed to mine the specific features of different modalities. The multimodal complementary attention module (MCAM) is designed for further acquiring prominent complementary content. The concatenated features are transmitted and reused by the dense convolution to complete the encoding. Ultimately, a general and concise end-to-end model is proposed. Comparative experiments are carried out on three heterogeneous datasets, and the model put forward performs well in qualitative analysis, quantitative comparison, and visual effect. Meanwhile, the dexterity and practicability of the model are more prominent, which can provide support for lightweight design and hardware deployment.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2025-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143918628","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"IRSTD-YOLO: An Improved YOLO Framework for Infrared Small Target Detection","authors":"Yuan Tang;Tingfa Xu;Haolin Qin;Jianan Li","doi":"10.1109/LGRS.2025.3562096","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3562096","url":null,"abstract":"Detecting small targets in infrared images, especially in low-contrast and complex backgrounds, remains challenging. To tackle this, we propose infrared small target detection YOLO (IRSTD-YOLO), a novel detection network. The edge and feature extraction (EFE) module enhances feature representation by integrating a SobelConv branch and a 2DConv branch. The SobelConv branch applies Sobel operators to extract gradient information, enhancing edge contrast and making small targets more distinguishable from the background. Unlike standard convolutions, which process all features uniformly, this edge-aware operation emphasizes structural information crucial for detecting small infrared targets. The 2DConv branch captures spatial context, complementing the edge features to create a more comprehensive representation. To further refine detection, we introduce the infrared small target enhancement (IRSTE) module, addressing the limitations of conventional feature pyramid networks. Instead of merely adding a shallow detection head, IRSTE processes and enhances shallow-layer features, which are rich in small target information, and fuses them with deeper features. By leveraging a multibranch strategy that integrates local, global, and large-scale contexts, IRSTE enhances small target representation and detection robustness, particularly in low-contrast environments where traditional networks often fail. Experimental results show that IRSTD-YOLO achieves an mAP@0.5:0.95 of 36.7% on the InfraredUAV dataset and 51.6% on the AntiUAV310 dataset, outperforming YOLOv11-s by 4.4% and 4.2%, respectively. Code is released at <uri>https://github.com/vectorbullet/IRSTD-YOLO</uri>","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2025-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144073096","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning Instructive Frequency Spectral and Curvature Features for Cloud Detection","authors":"Wanjuan Hu;Guanyi Li;Guoguo Zhang;Liang Chang;Dan Zeng","doi":"10.1109/LGRS.2025.3561935","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3561935","url":null,"abstract":"Current cloud detection methods often treat all spectral bands equally, which limits their ability to capture instructive clues necessary for accurate detection. As a result, distinguishing clouds from snow in coexisting environments remains challenging. Moreover, most approaches struggle to adaptively model the boundaries of clouds, which is crucial for detecting thin clouds with ambiguous edges. To address these challenges, we propose a novel approach for cloud detection called FSCFNet, which captures guiding visual features from frequency and curvature computations. FSCFNet comprises two key modules: the frequency spectral feature enhancement module (FSFEM) and the curvature-based edge-awareness module (CEAM). The FSFEM leverages the distinct characteristics of spectral bands to extract instructive visual cues, enabling the network to learn robust discriminative features for ice, snow, and clouds. In contrast, the CEAM adaptively identifies texture-rich regions using curvature, enhancing the ability to delineate thin cloud boundaries. Comprehensive quantitative and qualitative experiments on the Landsat 8 and MODIS datasets demonstrate that FSCFNet consistently outperforms state-of-the-art methods. Our code is publicly available at <uri>https://github.com/wanjuanhu/FSCFNet/tree/main</uri>","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2025-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143943848","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Semi-Supervised Graph Constraint Dual Classifier Network With Unknown Class Feature Learning for Hyperspectral Image Open-Set Classification","authors":"Na Li;Xiaopeng Song;Yongxu Liu;Wenxiang Zhu;Chuang Li;Weitao Zhang;Yinghui Quan","doi":"10.1109/LGRS.2025.3561306","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3561306","url":null,"abstract":"In view of the practical value of open datasets of hyperspectral images (HSIs), HSI open-set classification (OSC) has attracted more and more attention. Existing HSI OSC methods are usually based on learning labeled samples to identify unknown classes. However, due to the complex high-dimensional characteristics of HSIs and the limited number of labeled samples, the recognition of unknown classes based only on limited labeled samples often has low and unstable accuracy. To address this problem, we propose a semi-supervised graph constraint dual classifier network (SSGCDCN) that can achieve efficient and stable OSC by learning unknown class features and relationships among samples. First, a dual classifier consisting of a multiclassifier and multiple binary classifiers is constructed, which has the ability to discover the unknown class samples by assigning and enabling pseudo-labels to participate in model training to achieve unknown class feature learning. Then, to improve the classification accuracy of both known and unknown classes, a homogeneous graph constraint is imposed on SSGCDCN to learn the relationship information among samples (including labeled and unlabeled samples). This constraint can bring the features of similar samples closer while pushing apart features of dissimilar samples. Experiments evaluated on three datasets demonstrate that the proposed method can obtain superior OSC performance than other state-of-the-art classification methods.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2025-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144073034","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Uneven Illumination and Radiometric Difference Removing Method for Multicamera Satellite Images","authors":"Tao Peng;Ru Chen;Mi Wang","doi":"10.1109/LGRS.2025.3561699","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3561699","url":null,"abstract":"Relative radiometric calibration (RRC) mainly focuses on color consistency and streak levels between multicamera or multiple charge-coupled devices (CCDs), that is to say, full field-of-view (FOV), but RRC may not be conducted completely or that useful due to some factors, such as data quality or quantity in lifetime image statistics and ineffective side-slither RRC. Aimed to this, this letter proposes a novel approach to solve inner uneven illumination of each camera image and relative radiometric difference of multicamera images. The highest layer of unidirectional pyramid (UDP) is decomposed into illumination and reflectance components. Uneven phenomenon in this scale is eliminated in illumination component with column-by-column compensation processing strategy, and different scales of nonuniformity are removed together with UDP reconstruction. Radiometric variation of multicamera images is solved with iterative radiometric adjustment. Some typical data of HISEA-2 Multi-Spectral Scanner 1 (MSS-1) are used to validate the effectiveness of our method both in visual and quantitative terms.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2025-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143918627","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Travel Time Computation in Snow and Ice Volumes for Radar Remote Sensing Applications","authors":"Andreas Benedikter;Christian Huber;Letizia Gambacorta;Marc Rodriguez-Cassola;Gerhard Krieger","doi":"10.1109/LGRS.2025.3561654","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3561654","url":null,"abstract":"When radar signals penetrate snow and ice, they experience additional delays and directional changes due to the higher refractive index compared to that of air. These propagation effects should be taken into account accurately when processing, simulating, or geocoding radar data. Travel time computation is straightforward when the refractive index is constant, but it becomes challenging in heterogeneous media. This letter introduces novel methods based on the Eikonal equation and Fermat’s principle for efficiently computing radar signal travel times in heterogeneous snow and ice volumes. These approaches can accommodate nearly arbitrary refractive index distributions, ensuring precise handling of propagation effects in radar remote sensing applications.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2025-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10966893","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143888360","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"DUSTNet: An Unsupervised and Noise-Resistant Network for Martian Dust Storm Change Detection","authors":"Miyu Li;Junjie Li;Yumei Wang;Yu Liu;Haitao Xu","doi":"10.1109/LGRS.2025.3561365","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3561365","url":null,"abstract":"Mars exploration highlights the demand for identifying Martian surface changes, which has sparked research interests in planetary surface changes detection (PSCD). However, the prevailing PSCD algorithms face significant challenges due to the sparse features, low resolution, and high noise levels of captured images data. In this letter, we propose an unsupervised model, the dust unsupervised surface tracking network (DUSTNet), designed to track the surface changes caused by Martian dust storms. Our DUSTNet employs a network architecture with dual input branches to learn the cross-temporal complementary information from pretime and posttime image pairs. A multilevel feature complementary fusion (MFCF) module is utilized to enhance the ability to detect subtle changes. Considering the difficulties in image registration caused by illumination variations, noise, and other factors, we design a noise-resistant module (NRM) that mitigates pseudo-changes and improves the robustness of PSCD. In addition, we construct a dataset of Martian dust storms change detection (CD) based on the images captured by moderate resolution imaging camera (MoRIC) of China’s First Mars Mission TianWen-1 (the dataset is available at <uri>https://github.com/Limiyu1123/SDS</uri>). The detection performance of DUSTNet performs well on multiple Mars surface datasets, including our Martian dust storm test set. Our model achieves improvements of 2.5% in precision, 7.55% in <inline-formula> <tex-math>$F1$ </tex-math></inline-formula>-score, 6.54% in overall accuracy (OA), and 4.57% in Kappa over the state-of-the-art model.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2025-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143888441","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SCTNet: A Shallow CNN–Transformer Network With Statistics-Driven Modules for Cloud Detection","authors":"Weixing Liu;Bin Luo;Jun Liu;Han Nie;Xin Su","doi":"10.1109/LGRS.2025.3561004","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3561004","url":null,"abstract":"Existing cloud detection methods often rely on deep neural networks, leading to excessive computational overhead. To address this, we propose a shallow convolutional neural network (CNN)–Transformer hybrid architecture that limits the maximum downsampling rate to <inline-formula> <tex-math>$8times $ </tex-math></inline-formula>. This design preserves local details while effectively capturing global context through a lightweight Transformer branch. To enhance adaptability across diverse cloud scenes, we introduce two novel statistics-driven modules: statistics-adaptive convolution (SAC) and statistical mixing augmentation (SMA). SAC dynamically generates convolutional kernels based on input feature statistics, enabling adaptive feature extraction for varying cloud patterns. SMA improves model generalization by interpolating channel-wise statistics across training samples, increasing feature diversity. Experiments on four datasets show that the proposed method achieves state-of-the-art performance with 732 K parameters and 1G multiply-accumulate operations (MACs). Our code will be available at <uri>https://weix-liu.github.io/</uri> for further research.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2025-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143943920","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cloud Removal Using Patch-Based Improved Denoising Diffusion Models and High Gray-Value Attention Mechanism","authors":"Yingjie Huang;Famao Ye;Zewen Wang;Shufang Qiu;Leyang Wang","doi":"10.1109/LGRS.2025.3560799","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3560799","url":null,"abstract":"In recent years, diffusion-based methods have outperformed traditional models in many cloud removal tasks due to their strong generative capabilities. However, these methods face the challenges of long inference time and poor recovery effect in cloud regions. To address this issue, this letter proposes a patch-based improved denoising diffusion model with a high gray-value attention for cloud removal in optical remote sensing images. We introduce an overlapping fixed-sized patch method in the improved denoising diffusion model. The patch-based diffusion modeling approach enables size-agnostic image restoration by employing a guided denoising process with smoothed noise estimates across overlapping patches during inference. Additionally, we introduce a high gray-value attention module, specifically designed to focus on thick cloud regions, enhancing attention on areas with relatively high gray values within the image. When compared with other existing cloud removal models on the RICE dataset, our model outperformed them in terms of both peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) index. Qualitative results demonstrate that the proposed method effectively removes clouds from images while preserving texture details. Ablation studies further confirm the effectiveness of the high gray-value attention module. Overall, the proposed model delivers superior cloud removal performance compared to existing state of the arts (SOTA) methods.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2025-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143918629","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}