Latest Articles: ISPRS Journal of Photogrammetry and Remote Sensing

A sensitive geometric self-calibration method and stability analysis for multiview spaceborne SAR images based on the range-Doppler model
IF 10.6 · CAS Tier 1 · Earth Science
ISPRS Journal of Photogrammetry and Remote Sensing · Pub Date: 2025-02-01 · DOI: 10.1016/j.isprsjprs.2025.01.009
Lina Yin , Mingjun Deng , Yin Yang , Yunqing Huang , Qili Tang
Abstract: Synthetic aperture radar (SAR) image positioning technology is extensively used in many scientific fields, including land surveying and mapping. Geometric self-calibration can be performed if images are captured in three directions. However, when the number of images is too small, self-calibration of SAR images based on the range-Doppler (RD) model becomes inaccurate. A robust geometric calibration method therefore has an important impact on calibration results. The effectiveness of such a method depends on the validity of the SAR images: the calibration can algorithmically optimize the images involved in self-calibration so that the calibration results are close to the true unknown parameters. To overcome these inaccuracies, this study proposes a flexible calibration approach. The determinant and an accuracy stabilization factor (ASF) are used to filter the images, allowing the evaluation of singular solutions and determining the validity of the SAR images. Experimental results demonstrate the robustness of the proposed approach. In addition, the slant range equation is suggested as the dominant equation for analyzing image calibration error sources and image capture. Satellite position is found to be the main source of image calibration errors; therefore, the impact of the satellite position and the associated incidence angle on the calibration is analyzed. The analysis reveals that it is desirable for satellites to capture ipsilateral images with incidence angles greater than 8°. This finding justifies the acquisition of SAR images.
Volume 220, Pages 550-562
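The determinant-based screening can be illustrated with a minimal sketch. The paper's exact determinant and ASF formulas are not reproduced here; the function name, thresholds, and the use of the normal matrix N = AᵀA are illustrative assumptions. The idea is simply that a (near-)singular normal matrix flags an image combination that cannot reliably constrain the calibration parameters:

```python
import numpy as np

def geometry_is_usable(design_matrix, det_threshold=1e-12, cond_threshold=1e8):
    """Screen an image combination's observation geometry before calibration.

    A vanishing determinant (or an exploding condition number) of the normal
    matrix N = A^T A signals a (near-)singular solution, i.e. the images
    cannot reliably constrain the calibration parameters.
    """
    A = np.asarray(design_matrix, dtype=float)
    N = A.T @ A
    return bool(abs(np.linalg.det(N)) > det_threshold
                and np.linalg.cond(N) < cond_threshold)

# Well-spread observation directions pass the screen ...
good = np.array([[1.0, 0.1], [0.1, 1.0], [0.5, -0.8]])
# ... while exactly parallel observations yield a singular normal matrix.
bad = np.array([[1.0, 1.0], [1.0, 1.0], [1.0, 1.0]])
```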
Citations: 0
Global high categorical resolution land cover mapping via weak supervision
IF 10.6 · CAS Tier 1 · Earth Science
ISPRS Journal of Photogrammetry and Remote Sensing · Pub Date: 2025-02-01 · DOI: 10.1016/j.isprsjprs.2024.12.017
Xin-Yi Tong , Runmin Dong , Xiao Xiang Zhu
Abstract: Land cover information is indispensable for advancing the United Nations' sustainable development goals, and land cover mapping under a more detailed category system would significantly contribute to economic livelihood tracking and environmental degradation measurement. However, the substantial difficulty of acquiring fine-grained training data makes this task particularly challenging. Here, we propose to combine a fully labeled source domain and a weakly labeled target domain for weakly supervised domain adaptation (WSDA). This is beneficial because sparse and coarse weak labels considerably reduce the labor required for precise and detailed land cover annotation. Specifically, we introduce the Prototype-based pseudo-label Rectification and Expansion (PRE) approach, which leverages prototypes (i.e., class-wise feature centroids) as the bridge between sparse labels and global feature distributions. Based on the feature distances to the prototypes, the confidence of pseudo-labels predicted in the unlabeled regions of the target domain is assessed; this confidence then guides the dynamic expansion and rectification of the pseudo-labels. Based on PRE, we carry out high categorical resolution land cover mapping for 10 cities in different regions around the world, using PlanetScope, Gaofen-1, and Sentinel-2 satellite images, respectively. In the study areas, we achieve cross-sensor, cross-category, and cross-continent WSDA with an overall accuracy exceeding 80%. These promising results indicate that PRE reduces the dependency of land cover classification on high-quality annotations, thereby improving label efficiency. We expect our work to enable global fine-grained land cover mapping, which in turn promotes Earth observation that provides more precise and thorough information for environmental monitoring. Our data and code will be publicly available at https://zhu-xlab.github.io/PRE-land-cover.html.
Volume 220, Pages 535-549
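The prototype-based confidence at the heart of PRE can be sketched in a few lines. This is a simplified, hypothetical rendition: the distance metric, the exponential confidence mapping, and all names are assumptions, not the authors' implementation:

```python
import numpy as np

def prototype_confidence(features, labels, n_classes):
    """Pseudo-label every pixel by its nearest class prototype.

    features: (N, D) per-pixel feature vectors.
    labels:   (N,) sparse annotations; -1 marks unlabeled pixels.
    Prototypes are the class-wise feature centroids of the labeled pixels;
    confidence decays with the feature distance to the chosen prototype.
    """
    feats = np.asarray(features, dtype=float)
    prototypes = np.stack([feats[labels == c].mean(axis=0) for c in range(n_classes)])
    # Distance of every pixel to every prototype: shape (N, n_classes).
    dists = np.linalg.norm(feats[:, None, :] - prototypes[None, :, :], axis=2)
    pseudo = dists.argmin(axis=1)            # nearest prototype's class
    confidence = np.exp(-dists.min(axis=1))  # closer to a centroid -> more trusted
    return pseudo, confidence

# Two sparse labels, four unlabeled pixels, in a toy 2-D feature space.
features = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0],
                     [5.0, 5.1], [0.2, 0.1], [4.9, 5.0]])
labels = np.array([0, -1, 1, -1, -1, -1])
pseudo, conf = prototype_confidence(features, labels, 2)
```

Pseudo-labels with low confidence (far from every centroid) would then be withheld from the expansion step rather than trusted outright.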
Citations: 0
GN-GCN: Grid neighborhood-based graph convolutional network for spatio-temporal knowledge graph reasoning
IF 10.6 · CAS Tier 1 · Earth Science
ISPRS Journal of Photogrammetry and Remote Sensing · Pub Date: 2025-02-01 · DOI: 10.1016/j.isprsjprs.2025.01.023
Bing Han , Tengteng Qu , Jie Jiang
Abstract: Owing to the difficulty of utilizing hidden spatio-temporal information, spatio-temporal knowledge graph (KG) reasoning tasks in real geographic environments suffer from low accuracy and poor interpretability. This paper proposes a grid neighborhood-based graph convolutional network (GN-GCN) for spatio-temporal KG reasoning. Based on the discretized encoding of spatio-temporal data through the GeoSOT global grid model, the GN-GCN consists of three parts: a static graph neural network, a neighborhood grid calculation, and a time evolution unit, which learn semantic, spatial, and temporal knowledge, respectively. The GN-GCN also improves training accuracy and efficiency through the multiscale aggregation characteristic of GeoSOT and can visualize different probabilities in a spatio-temporal intentional probabilistic grid map. Compared with existing models (RE-GCN, CyGNet, RE-NET, etc.), the mean reciprocal rank (MRR) of GN-GCN reaches 48.33 and 54.06 on spatio-temporal entity and relation prediction tasks, an increase of 6.32 (18.16%) and 6.64 (15.67%), respectively, achieving state-of-the-art (SOTA) results in spatio-temporal reasoning. The source code of the project is available at https://doi.org/10.18170/DVN/UIS4VC.
Volume 220, Pages 728-739
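MRR is the standard ranking metric; as a reminder of what the 48.33/54.06 figures measure (they appear to be MRR scaled by 100, an assumption on our part), a minimal computation:

```python
def mean_reciprocal_rank(ranks):
    """MRR: average of 1/rank of the ground-truth answer.

    ranks: the 1-based rank the model assigned to the correct entity
    or relation for each query.
    """
    return sum(1.0 / r for r in ranks) / len(ranks)

# Ground truth ranked 1st, 2nd, and 4th over three queries:
score = mean_reciprocal_rank([1, 2, 4])  # (1 + 0.5 + 0.25) / 3
```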
Citations: 0
Accurate semantic segmentation of very high-resolution remote sensing images considering feature state sequences: From benchmark datasets to urban applications
IF 10.6 · CAS Tier 1 · Earth Science
ISPRS Journal of Photogrammetry and Remote Sensing · Pub Date: 2025-02-01 · DOI: 10.1016/j.isprsjprs.2025.01.017
Zijie Wang , Jizheng Yi , Aibin Chen , Lijiang Chen , Hui Lin , Kai Xu
Abstract: Segmentation of very high-resolution (VHR) urban remote sensing images is widely used in ecological environmental protection, urban dynamic monitoring, fine urban management, and related fields. However, the large-scale variation and discrete distribution of objects in VHR images present a significant challenge to accurate segmentation. Existing studies have primarily concentrated on the internal correlations within a single feature while overlooking the inherent sequential relationships across different feature states. In this paper, a novel Urban Spatial Segmentation Framework (UrbanSSF) is proposed that fully considers the connections between feature states at different phases. Specifically, the Feature State Interaction (FSI) Mamba, with powerful sequence modeling capabilities, is designed based on state space modules. It effectively facilitates interactions between the information across different features. Given the disparate semantic information and spatial details of features at different scales, a Global Semantic Enhancer (GSE) module and a Spatial Interactive Attention (SIA) mechanism are designed: the GSE module operates on high-level features, while the SIA mechanism processes middle- and low-level features. To address the computational challenges of large-scale dense feature fusion, a Channel Space Reconstruction (CSR) algorithm is proposed that reduces the computational burden while ensuring efficient processing and maintaining accuracy. In addition, the lightweight UrbanSSF-T, the efficient UrbanSSF-S, and the accurate UrbanSSF-L are designed to meet different application requirements in urban scenarios. Comprehensive experiments on the UAVid, ISPRS Vaihingen, and Potsdam datasets validate the superior performance of the UrbanSSF series. In particular, UrbanSSF-L achieves a mean intersection over union of 71.0% on the UAVid dataset. Code is available at https://github.com/KotlinWang/UrbanSSF.
Volume 220, Pages 824-840
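The 71.0% figure is a mean intersection over union. For reference, a minimal mIoU computation over integer label maps (a generic sketch, not the authors' evaluation code):

```python
import numpy as np

def mean_iou(pred, target, n_classes):
    """Mean intersection over union across classes.

    pred, target: integer label arrays of the same shape.
    Classes absent from both prediction and ground truth are skipped.
    """
    pred, target = np.asarray(pred), np.asarray(target)
    ious = []
    for c in range(n_classes):
        inter = np.sum((pred == c) & (target == c))
        union = np.sum((pred == c) | (target == c))
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))

# Class 0: IoU 1/2; class 1: IoU 2/3; mIoU = 7/12.
miou = mean_iou([0, 0, 1, 1], [0, 1, 1, 1], 2)
```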
Citations: 0
Semantic guided large scale factor remote sensing image super-resolution with generative diffusion prior
IF 10.6 · CAS Tier 1 · Earth Science
ISPRS Journal of Photogrammetry and Remote Sensing · Pub Date: 2025-02-01 · DOI: 10.1016/j.isprsjprs.2024.12.001
Ce Wang, Wanjie Sun
Abstract: In remote sensing, images captured by different platforms exhibit significant disparities in spatial resolution. Consequently, effective large scale factor super-resolution (SR) algorithms are vital for maximizing the utilization of low-resolution (LR) satellite data captured from orbit. However, existing methods confront challenges such as semantic inaccuracies and blurry textures in the reconstructed images. To tackle these issues, we introduce a novel framework, the Semantic Guided Diffusion Model (SGDM), designed for large scale factor remote sensing image super-resolution. The framework exploits a pre-trained generative model as a prior to generate perceptually plausible high-resolution (HR) images, thereby constraining the solution space and mitigating texture blurriness. We further enhance the reconstruction by incorporating vector maps, which carry structural and semantic cues that improve the reconstruction fidelity of ground objects. Moreover, pixel-level inconsistencies in paired remote sensing images, stemming from sensor-specific imaging characteristics, may hinder model convergence and the diversity of generated results. To address this problem, we develop a method to extract sensor-specific imaging characteristics and model their distribution. The proposed model can decouple imaging characteristics from image content, allowing it to generate diverse super-resolution images based on imaging characteristics provided by reference satellite images or sampled from the imaging characteristic probability distributions. To validate and evaluate our approach, we create the Cross-Modal Super-Resolution Dataset (CMSRD). Qualitative and quantitative experiments on CMSRD showcase the superiority and broad applicability of our method. Experimental results on downstream vision tasks also demonstrate the utility of the generated SR images. The dataset and code will be publicly available at https://github.com/wwangcece/SGDM.
Volume 220, Pages 125-138
Citations: 0
MLC-net: A sparse reconstruction network for TomoSAR imaging based on multi-label classification neural network
IF 10.6 · CAS Tier 1 · Earth Science
ISPRS Journal of Photogrammetry and Remote Sensing · Pub Date: 2025-02-01 · DOI: 10.1016/j.isprsjprs.2024.11.018
Depeng Ouyang , Yueting Zhang , Jiayi Guo , Guangyao Zhou
Abstract: Synthetic aperture radar tomography (TomoSAR) has garnered significant interest for its capability to achieve three-dimensional resolution along the elevation angle by collecting a stack of SAR images from different cross-track angles. Compressed sensing (CS) algorithms have been widely introduced into SAR tomography. However, traditional CS-based TomoSAR methods suffer from weak noise resistance, high computational complexity, and insufficient super-resolution capability. To address the efficient TomoSAR imaging problem, this paper proposes an end-to-end neural network-based TomoSAR inversion method, named the Multi-Label Classification-based Sparse Imaging Network (MLC-net). MLC-net focuses on the l0-norm optimization problem, completely departing from the iterative framework of traditional compressed sensing methods and overcoming the limitations imposed by the l1-norm optimization problem on signal coherence. At the same time, the concept of multi-label classification is introduced for the first time in TomoSAR inversion, enabling MLC-net to accurately invert scenarios with multiple scatterers within the same range-azimuth cell. Additionally, a novel evaluation system for TomoSAR inversion results is introduced that transforms inversion results into a 3D point cloud and applies mature 3D point cloud evaluation methods. Under this evaluation system, the proposed method outperforms existing methods by more than 30%. Finally, training solely on simulated data, we conducted extensive experiments on both simulated and real data, achieving excellent results that validate the effectiveness, efficiency, and robustness of the proposed method. Specifically, the VQA_PC score improved from 91.085 to 92.713. The code of our network is available at https://github.com/OscarYoungDepend/MLC-net.
Volume 220, Pages 85-99
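The multi-label reformulation can be illustrated with a toy target encoding. The bin edges, names, and values below are hypothetical (the paper's actual elevation discretization is not specified here); the point is only that each elevation bin becomes one label, so a range-azimuth cell containing several scatterers simply has several positive labels:

```python
import numpy as np

def elevation_multilabel(scatterer_elevations, grid):
    """Encode the scatterers of one range-azimuth cell as a multi-label target.

    grid: monotonically increasing bin edges along the elevation axis.
    Each bin is one label, set to 1 if any scatterer falls into it.
    """
    labels = np.zeros(len(grid) - 1, dtype=int)
    for s in scatterer_elevations:
        idx = np.searchsorted(grid, s, side="right") - 1
        if 0 <= idx < len(labels):
            labels[idx] = 1
    return labels

# Elevation axis cut into five 10 m bins; two scatterers in one cell
# (e.g. ground at 3 m and a facade point at 27 m) give two positive labels.
grid = np.arange(0, 60, 10)  # bin edges: 0, 10, 20, 30, 40, 50
target = elevation_multilabel([3.0, 27.0], grid)
```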
Citations: 0
A real time LiDAR-Visual-Inertial object level semantic SLAM for forest environments
IF 10.6 · CAS Tier 1 · Earth Science
ISPRS Journal of Photogrammetry and Remote Sensing · Pub Date: 2024-11-30 · DOI: 10.1016/j.isprsjprs.2024.11.013
Hongwei Liu, Guoqi Xu, Bo Liu, Yuanxin Li, Shuhang Yang, Jie Tang, Kai Pan, Yanqiu Xing
Abstract: The accurate positioning of individual trees, three-dimensional reconstruction of the forest environment, and identification of tree species distribution are crucial aspects of forestry remote sensing. Simultaneous localization and mapping (SLAM) algorithms, primarily based on LiDAR or visual technologies, serve as essential tools for outdoor spatial positioning and mapping, overcoming the signal loss caused by tree canopy obstruction of the Global Navigation Satellite System (GNSS). To address these challenges, a semantic SLAM algorithm called LVI-ObjSemantic is proposed that integrates visual, LiDAR, IMU, and deep learning at the object level. LVI-ObjSemantic can perform individual tree segmentation, localization, and tree species discrimination in forest environments. The proposed Cluster-Block-single and Cluster-Block-global data structures, combined with the deep learning model, effectively reduce cases of missed and false detection. Owing to the lack of publicly available forest datasets, we validated the proposed algorithm on eight experimental plots. The experimental results indicate that the average root mean square error (RMSE) of the trajectories across the eight plots is 2.7, 2.8, 1.9, and 2.2 times lower than that of LIO-SAM, FAST-LIO2, LVI-SAM, and FAST-LIVO, respectively. Additionally, the mean absolute error in tree localization is 0.12 m. Moreover, the mapping drift of the proposed algorithm is consistently lower than that of the aforementioned comparison algorithms.
Volume 219, Pages 71-90
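The trajectory RMSE used for the comparison above is the usual root mean square of per-pose position errors. A minimal sketch, assuming the trajectories are already time-associated and expressed in a common frame (alignment and association are omitted):

```python
import numpy as np

def trajectory_rmse(estimated, reference):
    """RMSE between corresponding positions of two trajectories.

    estimated, reference: (N, 3) arrays of matched positions, assumed
    already associated in time and aligned in a common frame.
    """
    err = np.linalg.norm(np.asarray(estimated, float) - np.asarray(reference, float), axis=1)
    return float(np.sqrt(np.mean(err ** 2)))

# One pose off by 1 m, one exact: RMSE = sqrt((1^2 + 0^2) / 2).
rmse = trajectory_rmse([[0, 0, 0], [1, 0, 0]], [[0, 0, 1], [1, 0, 0]])
```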
Citations: 0
Location and orientation united graph comparison for topographic point cloud change estimation
IF 10.6 · CAS Tier 1 · Earth Science
ISPRS Journal of Photogrammetry and Remote Sensing · Pub Date: 2024-11-29 · DOI: 10.1016/j.isprsjprs.2024.11.016
Shoujun Jia , Lotte de Vugt , Andreas Mayr , Chun Liu , Martin Rutzinger
Abstract: 3D topographic point cloud change estimation produces fundamental inputs for understanding Earth surface process dynamics. In general, change estimation aims to detect the largest possible number of points with significance (i.e., difference > uncertainty) and to quantify multiple types of topographic change. However, several complex factors, including the inhomogeneous nature of point cloud data, high uncertainty in positional changes, and the different types of quantified difference, pose challenges for the reliable detection and quantification of 3D topographic changes. To address these limitations, this paper proposes a graph comparison-based method to estimate 3D topographic change from point clouds. First, a graph with both location and orientation representation is designed to aggregate the local neighbors of topographic point clouds, countering the disordered and unstructured nature of the data. Second, corresponding graphs between two topographic point clouds are identified and compared to quantify the differences and associated uncertainties in both location and orientation features. In particular, the proposed method unites the significant changes derived from both features (i.e., location and orientation) and captures the location difference (i.e., distance) and the orientation difference (i.e., rotation) for each point with significant change. We tested the proposed method in a mountain region (Sellrain, Tyrol, Austria) covered by three airborne laser scanning point cloud pairs with different point densities and complex topographic changes at intervals of four, six, and ten years. Our method detected significant changes in 91.39%–93.03% of the study area, while a state-of-the-art method (Multiscale Model-to-Model Cloud Comparison, M3C2) identified 36.81%–47.41% significant changes for the same area. Especially for unchanged building roofs, our method measured lower change magnitudes than M3C2. In the case of shallow landslides, our method identified 84 out of 88 reference landslides by analysing change in distance or rotation. Therefore, our method not only detects a large number of significant changes but also quantifies two types of topographic change (distance and rotation), and it is more robust against registration errors. It shows large potential for the estimation and interpretation of topographic changes in natural environments.
Volume 219, Pages 52-70
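The united significance rule (difference > uncertainty, applied to both features) can be sketched per point. The paper's uncertainty propagation is not reproduced; the inputs here are assumed to be precomputed per-point differences and their uncertainties:

```python
import numpy as np

def significant_change(distance, rotation, d_uncert, r_uncert):
    """Flag points whose change is significant in location OR orientation.

    distance, rotation: per-point location difference (m) and orientation
    difference (e.g. degrees); d_uncert, r_uncert: associated uncertainties.
    A point is significant when either difference exceeds its uncertainty;
    both magnitudes are kept for interpretation of the change type.
    """
    d_sig = np.abs(distance) > d_uncert
    r_sig = np.abs(rotation) > r_uncert
    return d_sig | r_sig

# Point 1: location change only; point 2: orientation change only;
# point 3: both differences below their uncertainties (unchanged).
sig = significant_change(np.array([0.5, 0.01, 0.05]),
                         np.array([0.0, 0.3, 0.02]), 0.1, 0.1)
```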
Citations: 0
MGCNet: Multi-granularity consensus network for remote sensing image correspondence pruning
IF 10.6 · CAS Tier 1 · Earth Science
ISPRS Journal of Photogrammetry and Remote Sensing · Pub Date: 2024-11-28 · DOI: 10.1016/j.isprsjprs.2024.11.011
Fengyuan Zhuang , Yizhang Liu , Xiaojie Li , Ji Zhou , Riqing Chen , Lifang Wei , Changcai Yang , Jiayi Ma
Abstract: Correspondence pruning aims to remove false correspondences (outliers) from an initial putative correspondence set. This process holds significant importance and serves as a fundamental step in various applications within remote sensing and photogrammetry. Noise, illumination changes, and small overlaps in remote sensing images frequently produce a substantial number of outliers in the initial set, rendering correspondence pruning notably challenging. Although the spatial consensus of correspondences has been widely used to determine the correctness of each correspondence, achieving uniform consensus can be challenging due to the uneven distribution of correspondences. Existing works have mainly focused on either local or global consensus, with a very small or very large perspective, respectively. They often ignore the moderate perspective between local and global consensus, called group consensus, which serves as a buffering organization from local to global consensus, leading to insufficient aggregation of correspondence consensus. To address this issue, we propose a multi-granularity consensus network (MGCNet) that achieves consensus across regions of different scales, leveraging local, group, and global consensus to accomplish robust and accurate correspondence pruning. Specifically, we introduce a GroupGCN module that randomly divides the initial correspondences into several groups, focuses on group consensus, and acts as a buffer organization from local to global consensus. Additionally, we propose a Multi-level Local Feature Aggregation module that adapts to the size of the local neighborhood to capture local consensus, and a Multi-order Global Feature module to enhance the richness of the global consensus. Experimental results demonstrate that MGCNet outperforms state-of-the-art methods on various tasks, highlighting the superiority and strong generalization of our method. In particular, we achieve 3.95% and 8.5% mAP5° improvement without RANSAC on the YFCC100M dataset in known and unknown scenes for pose estimation, compared to the second-best models (MSA-LFC and CLNet). Source code: https://github.com/1211193023/MGCNet.
Volume 219, Pages 38-51
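The GroupGCN's first step, randomly dividing the initial correspondences into groups, can be sketched as an index partition. Everything beyond the random split (group sizes, names, the downstream aggregation) is an assumption here, not the paper's implementation:

```python
import numpy as np

def random_groups(n_correspondences, n_groups, seed=0):
    """Randomly partition correspondence indices into near-equal groups.

    Returns a list of index arrays, one per group; per-group (group-consensus)
    features can then be aggregated within each group before global pooling.
    """
    rng = np.random.default_rng(seed)
    perm = rng.permutation(n_correspondences)
    return np.array_split(perm, n_groups)

groups = random_groups(10, 3)
```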
Citations: 0
Pansharpening via predictive filtering with element-wise feature mixing
IF 10.6 · CAS Tier 1 · Earth Science
ISPRS Journal of Photogrammetry and Remote Sensing · Pub Date: 2024-11-26 · DOI: 10.1016/j.isprsjprs.2024.10.029
Yongchuan Cui , Peng Liu , Yan Ma , Lajiao Chen , Mengzhen Xu , Xingyan Guo
Abstract: Pansharpening is a crucial technique in remote sensing that enhances spatial resolution by fusing low spatial resolution multispectral (LRMS) images with high spatial resolution panchromatic (PAN) images. Existing deep convolutional networks often struggle to capture fine details due to the homogeneous operation of convolutional kernels. In this paper, we propose a novel predictive filtering approach for pansharpening that mitigates spectral distortions and spatial degradations. By obtaining predictive filters through the fusion of LRMS and PAN and conducting filtering operations with a unique kernel assigned to each pixel, our method significantly reduces information loss. To learn more effective kernels, we propose an effective fine-grained fusion method for LRMS and PAN features, namely element-wise feature mixing: features of LRMS and PAN are exchanged under the guidance of a learned mask, whose value signifies the extent to which each element is mixed. Extensive experimental results demonstrate that the proposed method achieves better performance than state-of-the-art models with fewer parameters and lower computation. Visual comparisons indicate that our model pays more attention to details, further confirming the effectiveness of the proposed fine-grained fusion method. Codes are available at https://github.com/yc-cui/PreMix.
Volume 219, Pages 22-37
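The element-wise mixing itself can be written directly from the description above. This is a sketch: in the paper the mask is learned by the network, whereas here it is a given array, and the function and variable names are illustrative:

```python
import numpy as np

def elementwise_mix(feat_ms, feat_pan, mask):
    """Exchange LRMS and PAN features under an element-wise mask in [0, 1].

    Each mask value sets the extent to which that element is swapped between
    the two feature streams: mask == 0 leaves both streams unchanged,
    mask == 1 fully exchanges the corresponding elements.
    """
    mixed_ms = mask * feat_pan + (1.0 - mask) * feat_ms
    mixed_pan = mask * feat_ms + (1.0 - mask) * feat_pan
    return mixed_ms, mixed_pan

# Toy 2x2 feature maps: all-zero "MS" stream, all-one "PAN" stream.
feat_ms = np.zeros((2, 2))
feat_pan = np.ones((2, 2))
mask = np.array([[0.0, 1.0], [0.5, 0.25]])
mixed_ms, mixed_pan = elementwise_mix(feat_ms, feat_pan, mask)
```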
Citations: 0