{"title":"Attention-Driven Object Encoding and Multiscale Contextual Perception for Improved Cross-View Object Geo-Localization","authors":"Haoshuai Song;Xiaochong Tong;Xiaoyu Zhang;Yaxian Lei;He Li;Congzhou Guo","doi":"10.1109/LGRS.2025.3560258","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3560258","url":null,"abstract":"Cross-view object geo-localization (CVOGL) is essential for applications like navigation and intelligent city management. By identifying objects in street-view/drone-view and precisely locating them in satellite imagery, more accurate geo-localization can be achieved compared to retrieval-based methods. However, existing approaches fail to account for query object shape/size and significant scale variations in remote sensing images. To address these limitations, we propose an attention-driven multiscale perception network (AMPNet) for cross-view geo-localization. AMPNet employs an attention-driven object encoding (ADOE) based on segmentation, which provides prior information to enable learning more discriminative representations of the query object. In addition, AMPNet introduces a cross-view multiscale perception (CVMSP) module that captures multiscale contextual information using varying convolution kernels, and applies an MLP to enhance channel-wise feature interactions. 
Experimental results demonstrate that AMPNet outperforms state-of-the-art methods in both ground-to-satellite and drone-to-satellite object localization tasks on a challenging dataset.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2025-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143883518","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Coastal Performance of Sentinel-6MF New High-Resolution Wet Tropospheric Correction","authors":"Telmo Vieira;Pedro Aguiar;Clara Lázaro;M. Joana Fernandes","doi":"10.1109/LGRS.2025.3560196","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3560196","url":null,"abstract":"Sentinel-6 Michael Freilich (S6MF) satellite carries the Advanced Microwave Radiometer for Climate (AMR-C), which, in addition to the standard low frequency channels, includes a High-Resolution Microwave Radiometer (HRMR) with channels at 90, 130, and 166 GHz. This subsystem allows higher spatial resolution for enhanced Wet Tropospheric Correction (WTC) measurements in coastal zones. The current S6MF products provide two different WTC fields: AMR WTC, computed from AMR measurements alone, and RAD WTC, computed from the combination of AMR and HRMR. The aim of this study is to evaluate this new high-resolution WTC from S6MF, over the global coastal regions, during the first three years of the mission (2021–2023), in particular to quantify the performance of the RAD WTC when compared with the AMR WTC. Results show that, on average, for distances to coast in the range of 0–5 km, RAD WTC is only available in 13% of S6MF points and an inter-comparison between these two corrections reveals the largest differences for the range of distances to land between 5 and 10 km. Comparisons with ERA5 and global navigation satellite systems (GNSS) reveal that the new RAD WTC is better than the AMR WTC for distances to coast in the range of 5–20 km and, over open-ocean, the current algorithms do not take advantage of the high frequency channels. 
This evaluation shows how radiometers with high-resolution channels such as the one deployed in S6MF improve the WTC retrieval for 5–20 km from the coast, allowing a higher recovery of accurate sea level measurements in these regions.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2025-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143865330","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Layered Media Parameter Estimation Based on Hyperbolic Fitting in GPR B-Scan","authors":"Tian Lan;Xitao Sun;Xiaopeng Yang;Junbo Gong;Xueyao Hu","doi":"10.1109/LGRS.2025.3560177","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3560177","url":null,"abstract":"Layered media parameter estimation is widely used in ground-penetrating radar (GPR) for layered scenarios. However, the current estimating methods face many challenges, including the limited applicability in scenarios without targets for A-scan-based and CMP-based methods, and the large estimation error in layered media with targets for B-scan-based method. To obtain accurate parameters for layered media in the presence of targets, a method for estimating layered media parameters based on hyperbolic fitting in GPR B-scan is proposed. The method uses geometric relationships and refractive point approximation formulas to efficiently determine the refractive point of media and can achieve an accurate estimation of the thickness and permittivity by nonlinear least-squares optimization method for hyperbolic constraint equations. The accuracy and effectiveness of the proposed method are verified through simulation and real experiments in layered structures.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2025-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143865331","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Radar Waveform Sequence Design for PSL Optimization via Iterative Neural Network","authors":"Yuxin Yan;Yifeng Wu;Lei Zhang","doi":"10.1109/LGRS.2025.3560073","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3560073","url":null,"abstract":"In radar systems, high-resolution waveforms with favorable correlation properties are preferred. This letter addresses the challenge of designing unimodular radar waveform sets with low peak sidelobe level (PSL) in autocorrelation function (ACF). In contrast to conventional methods, this approach does not attempt to transform a nonconvex problem into a convex one through relaxation. Inspired by neural network (NN) optimization techniques, an iterative NN structure for minimizing PSL is proposed in this letter. Using the Mellowmax operation and incorporating an additional penalty term into the loss function, the optimized ACF with low PSL is obtained. Corresponding simulation experiments demonstrate that our method achieves a superior PSL value of 2–3 dB lower than the state-of-the-art method.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2025-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143938015","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Collaborative Spectral–Spatial Representation Learning for Hyperspectral and LiDAR Classification Under Limited Samples","authors":"Jia Li;Lin Zhao;Yuanjie Dai;Minhui Zhao;Minghao Li;Jianhui Wu","doi":"10.1109/LGRS.2025.3559913","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3559913","url":null,"abstract":"Hyperspectral images (HSIs) offer exceptional precision in distinguishing features due to their broad spectral dimensions. However, their high dimensionality gives rise to a phenomenon known as the “dimensional curse,” characterized by data sparsity in high-dimensional feature spaces. This issue is further exacerbated by the limited number of labeled samples, rendering it challenging to effectively define the decision boundary and increasing risk of overfitting. To address the challenges, we propose a spectral-spatial representation learning (SSRL) framework based on HSI and light detection and ranging (LiDAR) data, which enhances the generalization of spectral features while reducing dimensionality through the optimization of spectral-wise information. Meanwhile, a local-global spatial feature fusion mechanism is designed for LiDAR spatial features to further alleviating the sparsity of spectral features and to effectively recognize complex land cover. The method fully leverages the complementary strengths of HSI and LiDAR data through self-supervised contrastive learning, effectively mitigates the challenge posed by data properties. 
Extensive experiments were conducted on three widely used HSI-LiDAR datasets, and the results demonstrate that the proposed algorithm outperforms state-of-art methods in classification accuracy.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2025-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143865266","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Shipborne HFSWR Sea Clutter Suppression Method Based on MultiDomain Information Synergy","authors":"Jiangnan Zhong;Haibo Yu;Ling Zhang;Gangsheng Li;Q. M. Jonathan Wu","doi":"10.1109/LGRS.2025.3559903","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3559903","url":null,"abstract":"Due to the integrated effect of many factors including nonuniform wave motion and shipboard platform motion, the echo signals received by shipborne high-frequency surface wave radar (HFSWR) often suffer from issues such as sea clutter spreading. A large number of targets are submerged by sea clutter, creating a non-detectable zone. To address this problem, a novel sea clutter suppression method based on multidomain information synergy is proposed. The proposed method first identifies the broadening region of sea clutter by its characteristics. The multidomain spectrum is then constructed using a narrow beam forming method. Afterward, the Laplace kernel function is employed to screen the sea clutter regions to obtain the plausible region of interest (PROI). Ultimately, we integrate all PROIs and obtain sea clutter suppression results. 
Field data from shipborne HFSWR and validation results from the automatic identification system (AIS) demonstrate that the proposed method can effectively suppress sea clutter, increase the signal-to-clutter ratio (SCR), and achieve better target detection performance.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2025-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143870917","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"NSC-SSNet: A Self-Supervised Network With Neighborhood Subsampling and Calibration Constraints for Sonar Image Denoising","authors":"Yapei Zhang;Yancheng Liu;Yanhao Wang;Fei Yu","doi":"10.1109/LGRS.2025.3560072","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3560072","url":null,"abstract":"Sonar imaging systems play a crucial role in several marine applications. However, complex underwater environment introduces scattering noise, significantly degrading sonar image quality and hindering performance for downstream tasks. Although several self-supervised denoising methods have emerged to address the lack of clean reference images, they often fail to effectively capture both local and global structural information, thus showing suboptimal performance on sonar images. To address these challenges, we propose NSC-SSNet, a self-supervised network with neighborhood subsampling and calibration constraints for sonar image denoising. In particular, NSC-SSNet adopts an end-to-end self-supervised framework that operates in the denoising and calibration stages. By leveraging neighborhood subsampling and calibration constraints, it effectively extracts latent features of clean images from noisy input. Moreover, it simultaneously captures local and global associations between pixels by incorporating additional terms in the loss function to improve image quality while denoising. 
Extensive experiments on real-world sonar image datasets demonstrate that NSC-SSNet outperforms existing self-supervised denoising methods in terms of both noise removal and quality enhancement.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2025-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143871078","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Sidelobe Suppression of Squinted SAR Complex Data Based on Minimum Image Sharpness","authors":"Yanfang Liu;Wei Yang;Hongcheng Zeng;Haijun Shen;Yamin Wang;Xiaojie Zhou;Chunsheng Li","doi":"10.1109/LGRS.2025.3560202","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3560202","url":null,"abstract":"Sidelobe suppression is of particular importance in the synthetic aperture radar (SAR) image quality improvement. However, the range and azimuth sidelobes are coupled and non-orthogonal in squinted SAR images, which makes traditional methods ineffective. This letter presents a sidelobe suppression method for squinted SAR complex data based on the SAR convolution model and minimum image sharpness. First, the convolution model of SAR images is revised with the subpixel offset. Then, the sidelobe suppression is achieved by deconvolution pixel by pixel. Innovatively, a convex optimization based on minimum image sharpness is built and solved to estimate the unknown and variant subpixel offset of each target. In addition, a new factor based on integrated sidelobe ratio (ISLR) is applied for efficiency improvement. Finally, results on the squinted spaceborne SAR real data verify the effectiveness of the proposed method both in sidelobe suppression and the maintenance of amplitude-phase characteristics. 
The codes are available in <uri>https://github.com/Keyserliu/Sidelobe-Suppression</uri>","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2025-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144336052","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exploring Influencing Factors for Differences in Integral and Complete Urban Surface Temperatures","authors":"Jiashuo Li;Xiujuan Dai;Dandan Wang;Yunhao Chen;Zhenyuan Zhu","doi":"10.1109/LGRS.2025.3559323","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3559323","url":null,"abstract":"Complete urban surface temperature (UST) (<inline-formula> <tex-math>$T_{textrm {c}}$ </tex-math></inline-formula>) takes into account the total active surface areas and is used to estimate the surface temperature over a 3-D rough surface such as cities. Direct calculations of <inline-formula> <tex-math>$T_{textrm {c}}$ </tex-math></inline-formula> require temperatures of each surface of the urban canopy, which are hard to obtain in actual remote sensing observations. Moreover, solid-angle integral temperature (<inline-formula> <tex-math>$T_{text {SI}}$ </tex-math></inline-formula>) calculated using multiangle remote sensing observations has great potential for approaching <inline-formula> <tex-math>$T_{textrm {c}}$ </tex-math></inline-formula>. However, due to varying mechanisms, some differences remain between them. This study uses temperatures of urban facets in 3-D (TUF-3D) and surface-sensor-sun urban model (SUM) models to compute integral temperatures for multiple view angles over various urban forms and investigates the differences between <inline-formula> <tex-math>$T_{textrm {c}}$ </tex-math></inline-formula> and <inline-formula> <tex-math>$T_{text {SI}}$ </tex-math></inline-formula> and the influencing factors. The difference is minimized at VZA <inline-formula> <tex-math>$=48^{circ }$ </tex-math></inline-formula>–70° and VAA <inline-formula> <tex-math>$=0^{circ }$ </tex-math></inline-formula>–360°, and the mean absolute error (MAE) is 0.67 K. Urban canopy geometry (UCG) and solar zenith angles (SZAs) are the important influencing factors. 
Compared with <inline-formula> <tex-math>$T_{textrm {c}}$ </tex-math></inline-formula>, <inline-formula> <tex-math>$T_{text {SI}}$ </tex-math></inline-formula> underestimates the proportion of the wall. The MAE between <inline-formula> <tex-math>$T_{text {SI}}$ </tex-math></inline-formula> and <inline-formula> <tex-math>$T_{textrm {c}}$ </tex-math></inline-formula> decreases as the wall fraction in the integral domain increases but increases when the wall fraction exceeds a threshold. The upper limit of the optimal integral domain (OID) is basically 70° and the lower limit hovers around 48°, moving away from and then approaching the zenith as the SZA increases. This study evaluates the influencing factors for differences in <inline-formula> <tex-math>$T_{text {SI}}$ </tex-math></inline-formula> and <inline-formula> <tex-math>$T_{textrm {c}}$ </tex-math></inline-formula>. It offers a simple and high-accuracy method for approaching <inline-formula> <tex-math>$T_{textrm {c}}$ </tex-math></inline-formula> which can be used to facilitate research in urban energy balance and urban climate.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2025-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143879514","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"VDXNet: A Novel Lightweight Deep Learning Model for Vehicle Detection With Aerial Images","authors":"Ali Khan;Somaiya Khan;Mohammed A. M. Elhassan;Izhar Ahmed Khan;Hai Deng;Mohammed Alsuhaibani","doi":"10.1109/LGRS.2025.3558423","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3558423","url":null,"abstract":"In intelligent transportation systems (ITSs), real-time vehicle detection based on aerial images is crucial for effective traffic monitoring and decision-making. However, detecting small vehicles with varying orientations in complex backgrounds remains technically challenging, as existing models often struggle to balance the requirements of detection accuracy and computational efficiency. In this letter, we introduce the vehicle detection eXtended network (VDXNet), a lightweight model that is capable of achieving high detection performance while minimizing computational complexity. VDXNet incorporates the novel residual cross depth fusion (RxDF) module to enhance feature extraction in the backbone. Furthermore, it uses newly proposed lightweight feature pyramid pooling (LiteFPP) and channel reduction downsampling (CRDown) modules to support multiscale detection and spatial dimensionality reduction. These innovations streamline the model’s neck, reducing complexity while ensuring accurate detection of vehicles across diverse scales, angles, and backgrounds. Evaluations on the UCAS-AOD, VEDAI, UAV-ROD, and UAVDT datasets demonstrate that VDXNet achieves substantial reductions in model complexity, with 1.608M parameters (a decrease of 37.72%) and 5.9 GFLOPs (a decrease of 6.35%) compared with the YOLO11n model. 
Despite these efficiency gains, VDXNet also improves mAP by 0.52%, achieving 96.3% mAP on the UCAS_AOD dataset.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2025-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143845537","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}