ISPRS Journal of Photogrammetry and Remote Sensing: Latest Articles

An automated method for estimating fractional vegetation cover from camera-based field measurements: Saturation-adaptive threshold for ExG (SATE)
IF 12.2 · Tier 1, Earth Science
ISPRS Journal of Photogrammetry and Remote Sensing Pub Date : 2025-08-27 DOI: 10.1016/j.isprsjprs.2025.08.017
Xuemiao Ye , Wenquan Zhu , Ruoyang Liu , Bangke He , Xinyi Yang , Cenliang Zhao
Abstract: Fractional vegetation cover (FVC) is a crucial metric for assessing vegetation cover on the Earth's surface. The excess green index (ExG), derived from visible true-color RGB images, is widely recognized as a reliable metric for identifying green vegetation. However, the threshold used to distinguish vegetation from background via ExG is highly sensitive to variations in illumination, limiting its robustness in real-world applications. Traditional thresholding methods, such as the bimodal thresholding method, the maximum entropy thresholding method, and Otsu's method, perform well under uniform illumination but often fail to achieve high vegetation identification accuracy under uneven illumination. Previous studies have shown that saturation (S) is strongly correlated with illumination intensity and can serve as an effective indicator of illumination variations. Under strong illumination, both vegetation and non-vegetation appear more vivid, resulting in higher S values. For vegetation, the green-band digital number (DN) increases more sharply than the red and blue bands, producing a notable rise in ExG, whereas non-vegetation surfaces such as soil show only a slight green-band increase and a smaller ExG gain. The ExG contrast between the two surface types therefore becomes more distinct, and a higher segmentation threshold is required. Conversely, weak illumination leads to lower S values and more uniform DN reductions across surface types, which diminishes ExG contrast and calls for a lower threshold.

Building on this insight, this study introduces the saturation-adaptive threshold for ExG (SATE), a novel method for automated vegetation cover extraction. SATE dynamically determines the optimal ExG segmentation threshold on a pixel-by-pixel basis from the S value, identifies vegetation pixels by comparing each pixel's ExG value with its corresponding threshold, and finally calculates the FVC, thereby improving adaptability to diverse illumination conditions. To validate its effectiveness, SATE was tested on 100 high-resolution unmanned aerial vehicle (UAV) RGB images collected from five diverse regions across China, covering a range of illumination conditions, vegetation types, and complex land cover scenarios. The experiments show that SATE effectively addresses the challenges posed by uneven illumination, achieving an average vegetation recognition accuracy of 91–94%. For vegetation identification, SATE combined with ExG surpassed traditional thresholding methods, including the bimodal thresholding method (86.4%), the maximum entropy thresholding method (67.0%), and Otsu's method (66.5%). Moreover, SATE combined with ExG achieved accuracy comparable to the manual thresholding method (95%) while eliminating the need for subjective intervention, thus enhancing the automation and …

Volume 229, Pages 170-187.
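The ExG/saturation relationship described in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's calibrated method: the linear mapping from saturation to threshold (the `t_low`/`t_high` bounds) and the function names are assumptions standing in for SATE's actual S-to-threshold relation.

```python
import numpy as np

def exg_and_saturation(rgb):
    """Compute the excess green index (ExG) and HSV-style saturation
    for an RGB image with channel values in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    exg = 2 * g - r - b                    # classic ExG definition
    cmax = rgb.max(axis=-1)
    cmin = rgb.min(axis=-1)
    den = np.where(cmax > 0, cmax, 1.0)    # avoid division by zero on black pixels
    sat = (cmax - cmin) / den
    return exg, sat

def sate_fvc(rgb, t_low=0.05, t_high=0.25):
    """Sketch of saturation-adaptive ExG segmentation: the per-pixel ExG
    threshold rises linearly with saturation (more vivid scene -> higher
    threshold), then FVC is the fraction of pixels classified as vegetation."""
    exg, sat = exg_and_saturation(rgb)
    thresh = t_low + (t_high - t_low) * sat
    veg = exg > thresh
    return veg.mean()
```

For a toy image that is half bright green and half neutral gray, `sate_fvc` returns 0.5, since only the green pixels exceed their saturation-adjusted threshold.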
Citations: 0

Scale-aware co-visible region detection for image matching
IF 12.2 · Tier 1, Earth Science
ISPRS Journal of Photogrammetry and Remote Sensing Pub Date : 2025-08-26 DOI: 10.1016/j.isprsjprs.2025.08.015
Xu Pan , Zimin Xia , Xianwei Zheng
Abstract: Matching images with significant scale differences remains a persistent challenge in photogrammetry and remote sensing. Scale discrepancy often degrades appearance consistency and introduces uncertainty in keypoint localization. While existing methods address scale variation through scale pyramids or scale-aware training, matching under significant scale differences remains an open challenge. To overcome this, we address the scale-difference issue by detecting co-visible regions between image pairs and propose SCoDe (Scale-aware Co-visible region Detector), which both identifies co-visible regions and aligns their scales for highly robust, hierarchical point correspondence matching. Specifically, SCoDe employs a novel Scale Head Attention mechanism to map and correlate features across multiple scale subspaces, and uses a learnable query to aggregate scale-aware information from both images for co-visible region detection. Correspondences can thus be established in a coarse-to-fine hierarchy, mitigating semantic and localization uncertainties. Extensive experiments on three challenging datasets demonstrate that SCoDe outperforms state-of-the-art methods, improving the precision of a modern local feature matcher by 8.41%. Notably, SCoDe shows a clear advantage when handling images with drastic scale variations. Code is publicly available at github.com/Geo-Tell/SCoDe.

Volume 229, Pages 122-137.
Citations: 0

CGSL: Commonality graph structure learning for unsupervised multimodal change detection
IF 12.2 · Tier 1, Earth Science
ISPRS Journal of Photogrammetry and Remote Sensing Pub Date : 2025-08-26 DOI: 10.1016/j.isprsjprs.2025.08.010
Jianjian Xu , Tongfei Liu , Tao Lei , Hongruixuan Chen , Naoto Yokoya , Zhiyong Lv , Maoguo Gong
Abstract: Multimodal change detection (MCD) has attracted a great deal of attention due to its significant advantages in processing heterogeneous remote sensing images (RSIs) from different sensors (e.g., optical and synthetic aperture radar). The major challenge of MCD is that changed areas are difficult to acquire by directly comparing heterogeneous RSIs. Although many MCD methods have made important progress, they remain insufficient at capturing the modality-independent complex structural relationships in the feature space of heterogeneous RSIs. To this end, we propose commonality graph structure learning (CGSL) for unsupervised MCD, which extracts potential commonality graph structural features between heterogeneous RSIs and compares them directly to detect changes. Heterogeneous RSIs are first segmented and constructed as superpixel-based heterogeneous graph data consisting of nodes and edges. The heterogeneous graphs are then input into CGSL to capture modality-independent commonalities in graph structural features. CGSL consists of a Siamese graph encoder and two graph decoders: the encoder maps heterogeneous graphs into a shared space and extracts potential commonality in their structural features, while the decoders reconstruct the mapped node features as the original node features to maintain consistency with the original graphs. Finally, changes between heterogeneous RSIs are detected by measuring differences in the commonality graph structural features with the mean squared error. In addition, we design a composite loss with regularization to guide CGSL in excavating the potential commonality graph structural features between heterogeneous graphs in an unsupervised manner. Extensive experiments on seven MCD datasets show that CGSL outperforms existing state-of-the-art methods, demonstrating its superior performance in MCD. The code will be available at https://github.com/TongfeiLiu/CGSL-for-MCD.

Volume 229, Pages 92-106.
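The final comparison step, per-superpixel mean squared error over commonality features, is simple enough to sketch. The graph encoder itself is omitted here; the feature arrays and the mean-plus-one-sigma decision rule below are hypothetical stand-ins for the paper's learned features and thresholding.

```python
import numpy as np

def superpixel_change_map(feat_t1, feat_t2, threshold=None):
    """Score each of N superpixels by the MSE between its commonality
    features at the two dates (N x D arrays, e.g. outputs of a shared
    Siamese graph encoder), then binarize into changed/unchanged.
    If no threshold is given, use mean + one standard deviation."""
    diff = np.mean((feat_t1 - feat_t2) ** 2, axis=1)  # per-node MSE
    if threshold is None:
        threshold = diff.mean() + diff.std()
    return diff, diff > threshold
```

With identical features everywhere except one node, only that node is flagged as changed.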
Citations: 0

Causal learning-driven semantic segmentation for robust coral health status identification
IF 12.2 · Tier 1, Earth Science
ISPRS Journal of Photogrammetry and Remote Sensing Pub Date : 2025-08-26 DOI: 10.1016/j.isprsjprs.2025.08.009
Jiangying Qin , Ming Li , Deren Li , Armin Gruen , Jianya Gong , Xuan Liao
Abstract: Global warming is accelerating the degradation of coral reef ecosystems, making accurate monitoring of coral reef health crucial for their protection and restoration. Traditional coral reef remote sensing relies primarily on satellite or aerial observation, which provides broad spatial coverage but lacks the fine-grained capability to capture the detailed structure and health status of individual coral colonies. In contrast, underwater photography offers close-range, high-resolution image-based observation (a non-traditional form of remote sensing) that enables pixel-level assessment of corals with varying health status. Underwater image semantic segmentation plays a vital role here: it extracts discriminative visual features from complex underwater scenes and enables automated classification of coral health status based on expert-annotated labels, from which ecological indicators can be derived. While deep learning-based coral image segmentation has proven effective for underwater coral monitoring, generalization across diverse monitoring scenarios remains challenging, owing to shifts in coral image data distributions and the inherently data-driven nature of deep learning models.

In this study, we introduce causal learning into coral image segmentation for the first time and propose CDNet, a causal-driven semantic segmentation framework designed to robustly identify multiple coral health states (live, dead, and bleached) from imagery of complex, dynamic underwater environments. A Causal Decorrelation Module reduces spurious correlations within irrelevant features, ensuring the network focuses on the intrinsic causal features of each health status, while an Enhanced Feature Aggregation Module improves the model's ability to capture multi-scale details and complex boundaries. Extensive experiments demonstrate that CDNet achieves consistently high segmentation performance, with an average mF1 score exceeding 60% across datasets from diverse temporal and spatial domains; compared with state-of-the-art methods, its mIoU improves by 4.3% to 40%. CDNet also maintains accurate, consistent segmentation under simulated scenarios reflecting practical underwater monitoring challenges (internal geometric transformations, variations in external environments, and different contextual dependencies), as well as on diverse real-world underwater coral datasets. The proposed method provides a reliable and scalable solution for accurate, rapid spatiotemporal monitoring of coral reefs, offering practical value for their long-term conservation and climate resilience.

Volume 229, Pages 78-91.
Citations: 0

Generalization of point-to-point matching for rigorous optimization in kinematic laser scanning
IF 12.2 · Tier 1, Earth Science
ISPRS Journal of Photogrammetry and Remote Sensing Pub Date : 2025-08-26 DOI: 10.1016/j.isprsjprs.2025.08.011
Aurélien Brun , Jakub Kolecki , Muyan Xiao , Luca Insolia , Elmar V. van der Zwan , Stéphane Guerrier , Jan Skaloud
Abstract: In the scope of rigorous sensor fusion in kinematic laser scanning, we present a qualitative improvement, in accuracy and speed, of an automated method for retrieving lidar-to-lidar 3D correspondences, where correspondences are locally refined shifts derived from learning-based descriptor matching. These improvements are shared through an open implementation. We evaluate their impact, without adaptation, in three fundamentally different laser scanning scenarios (sensors and platforms): airborne (helicopter), mobile (car), and handheld (without GNSS). Precise correspondences improve point cloud georeferencing/registration by a factor of 2 to 10 with respect to previously described and/or industrial standards, depending on the setup. This represents a potential to enhance the accuracy and reliability of kinematic laser scanning in different environments, whether or not satellite positioning is available, and irrespective of the nature of the lidars (including single-beam linear or oscillating sensors).

Volume 229, Pages 107-121.
Citations: 0

City-level aerial geo-localization based on map matching network
IF 12.2 · Tier 1, Earth Science
ISPRS Journal of Photogrammetry and Remote Sensing Pub Date : 2025-08-25 DOI: 10.1016/j.isprsjprs.2025.08.002
Yong Tang , Jingyi Zhang , Jianhua Gong , Yi Li , Banghui Yang
Abstract: Autonomous localization of aircraft relies on precise geo-localization, and under Global Navigation Satellite System (GNSS)-denied conditions, visual localization is among the most important techniques for it. Global visual localization typically relies on pre-established 3D maps, whose significant storage and computational overhead limits the applicability of aerial visual localization. We therefore propose a visual localization method based on OpenStreetMap, an openly accessible 2D map. The method not only enables localization in the absence of GNSS but also has lower storage and computational requirements than 3D map-based methods, making it feasible for visual geo-localization at the urban scale. We design a neural network model based on the Vision Transformer (ViT) to extract features from aerial images and 2D maps for fast matching and retrieval, thereby estimating the global geo-location of the aerial images. Additionally, we employ particle filtering to fuse location estimates from map retrieval, visual odometry, and GNSS, achieving higher-precision real-time localization. We also collected aerial images and map tiles covering over 1000 square kilometers of urban and suburban areas in four large cities, creating a novel aerial image-to-map matching dataset. Experiments show that our map retrieval network achieves a higher average recall rate on this dataset than current state-of-the-art methods. In GNSS-denied conditions, our multi-source fusion method achieves real-time global geo-localization at the urban scale, and under weak GNSS signals it provides significantly higher localization accuracy than GNSS alone.

Volume 229, Pages 65-77.
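The fusion step mentioned in the abstract, a particle filter combining visual-odometry increments with absolute fixes from map retrieval or GNSS, can be sketched in 2D. This is a generic textbook filter, not the paper's implementation; the noise parameters and the multinomial resampling are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, odom_delta, meas,
                         meas_std=5.0, motion_std=1.0):
    """One predict-update-resample cycle of a minimal 2D particle filter.
    `particles` is (N, 2); `odom_delta` is the visual-odometry displacement;
    `meas` is an absolute position fix (e.g. from map retrieval or GNSS)."""
    # Predict: propagate every particle by the odometry, plus motion noise.
    particles = particles + odom_delta + rng.normal(0, motion_std, particles.shape)
    # Update: reweight by a Gaussian likelihood of the absolute fix.
    d2 = np.sum((particles - meas) ** 2, axis=1)
    weights = weights * np.exp(-0.5 * d2 / meas_std**2)
    weights = weights / weights.sum()
    # Resample: draw particles proportionally to weight, then reset weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    particles = particles[idx]
    weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights
```

The position estimate at any step is the (weighted) mean of the particle cloud; with repeated absolute fixes the cloud contracts around the true trajectory.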
Citations: 0

Edge-constrained temporal superpixel segmentation and graph-structured energy optimization for PolSAR change detection
IF 12.2 · Tier 1, Earth Science
ISPRS Journal of Photogrammetry and Remote Sensing Pub Date : 2025-08-23 DOI: 10.1016/j.isprsjprs.2025.08.006
Nengcai Li , Deliang Xiang , Huaiyue Ding , Yuzhen Xie , Yi Su
Abstract: Polarimetric Synthetic Aperture Radar (PolSAR) has emerged as a vital tool for dynamic surface monitoring, owing to its ability to precisely characterize land cover scattering properties. However, conventional PolSAR change detection methods rely predominantly on pixel- or region-level direct comparisons, rendering them sensitive to speckle noise and multi-temporal radiometric inconsistencies. In addition, existing superpixel generation algorithms typically neglect temporal information and edge strength, resulting in suboptimal segmentation accuracy. To overcome these limitations, this paper introduces a novel edge-constrained temporal superpixel generation method. A new temporal polarimetric similarity metric emphasizes significant temporal variations, while an edge constraint mechanism prevents superpixels from crossing semantic boundaries, improving segmentation fidelity. Building on the generated superpixels, we develop a graph-structured energy optimization framework for PolSAR change detection, in which superpixels serve as the fundamental processing units of a topological representation that integrates both temporal feature similarity and spatial adjacency. A cross-node similarity metric enhances the detection of weak scattering changes, and a global energy function suppresses noise while preserving the structural integrity of changed regions. Extensive experiments on five PolSAR datasets validate the approach, demonstrating significant improvements in noise suppression, temporal feature representation, and change detection accuracy over state-of-the-art methods. Specifically, the proposed superpixel segmentation method achieves an average improvement of 6.62% in boundary recall and 1.46% in achievable segmentation accuracy over the TSPol-ASLIC algorithm. For change detection, the framework achieves a peak overall accuracy of 0.9802, an F1-score of 0.9431, and a kappa coefficient of 0.9311, significantly outperforming conventional pixel-level approaches. The code will be available at https://github.com/linengcai/Pol_ECTSP_GSEO.

Volume 229, Pages 49-64.
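The reported scores (overall accuracy 0.9802, F1 0.9431, kappa 0.9311) are standard confusion-matrix quantities. A minimal sketch of how they are computed from binary change maps:

```python
import numpy as np

def change_detection_scores(pred, ref):
    """Overall accuracy, F1 for the 'changed' class, and Cohen's kappa
    from binary predicted/reference change maps (any matching shapes)."""
    pred, ref = np.asarray(pred).ravel(), np.asarray(ref).ravel()
    tp = np.sum((pred == 1) & (ref == 1))
    tn = np.sum((pred == 0) & (ref == 0))
    fp = np.sum((pred == 1) & (ref == 0))
    fn = np.sum((pred == 0) & (ref == 1))
    n = tp + tn + fp + fn
    oa = (tp + tn) / n
    f1 = 2 * tp / (2 * tp + fp + fn)
    # Chance agreement: product of marginal rates for each class, summed.
    pe = ((tp + fp) * (tp + fn) + (tn + fn) * (tn + fp)) / n**2
    kappa = (oa - pe) / (1 - pe)
    return oa, f1, kappa
```

Kappa discounts the agreement expected by chance, which is why it is lower than overall accuracy on imbalanced change maps.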
Citations: 0

nUGV-1UAV robot swarms: low-altitude remote sensing-based decentralized planning framework in-field environments
IF 12.2 · Tier 1, Earth Science
ISPRS Journal of Photogrammetry and Remote Sensing Pub Date : 2025-08-22 DOI: 10.1016/j.isprsjprs.2025.08.003
Huaiqu Feng , Yudi Ruan , Dongfang Li , Te Xi , Yulei Pan , Yongwei Wang , Jun Wang
Abstract: Hybrid-rice seed production demands rapid removal of heterologous plants. We present a decentralized nUGV-1UAV framework that couples low-altitude remote sensing with on-board swarm planning to accomplish this task in large paddy fields. A single UAV performs one-off high-resolution mapping; thereafter, multiple UGVs rely solely on the downloaded map and peer-to-peer communication to execute impurity removal. A topology-guided hybrid A* planner generates homotopy-consistent routes, while a decoupled space-time optimizer refines trajectories for curvature and collision constraints. Field experiments covering 12.7 acres with 73 impurity targets show that a fleet of six UGVs finishes the task in 1.21 h, attaining an individual UGV efficiency of 6,989 m²/h (≈10.5 acres/h). The optimal ratio is 0.47:5.75:1 (UGVs : impurities : acres). Simulations up to 200 acres demonstrate linear scalability with <5% deviation from the analytical model. Even when the UAV is disabled, the UGVs maintain 92% task completion using offline maps, confirming robust decentralization.

Volume 229, Pages 32-48.
Citations: 0

Polarization-Guided unsupervised convection networks for marine velocity field recovery
IF 12.2 · Tier 1, Earth Science
ISPRS Journal of Photogrammetry and Remote Sensing Pub Date : 2025-08-20 DOI: 10.1016/j.isprsjprs.2025.08.012
Yang Guo , Naifu Yao , Xi Lin , Ning Li , Yongqiang Zhao , Seong G. Kong
Abstract: Accurate flow field measurement in the marine environment is crucial for the innovative development of ocean engineering. However, the limited concentration of deployable tracer particles and the complexity of marine environments often lead to unreliable flow field measurements. To address these challenges, we propose a marine flow field measurement system under a polarization optical framework. The system exploits the locally smooth character of flow fields through an unsupervised convection network architecture that optimizes the velocity field from sparse point clouds, and introduces a tracer particle polarization feature discriminator to mitigate interference from ghost particles. To support the system, a polarized light field sensor is developed to simultaneously capture three-dimensional and polarization information. The system is validated on both simulated and real-world datasets. Compared with existing studies confined to controlled laboratory conditions, it significantly extends the applicability of particle tracking velocimetry to uncontrolled, complex marine environments. Quantitative evaluations show that our system achieves an EPE3D of 0.027 m, outperforming the state-of-the-art GotFlow3D method (0.067 m). Paper resources are available at https://github.com/polwork.

Volume 229, Pages 17-31.
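EPE3D, the metric quoted above, is the mean Euclidean end-point error between estimated and reference 3D velocity vectors; a one-liner makes the definition concrete:

```python
import numpy as np

def epe3d(flow_pred, flow_gt):
    """Mean 3D end-point error in metres: average Euclidean distance
    between predicted and ground-truth per-point flow vectors (N x 3)."""
    return float(np.mean(np.linalg.norm(flow_pred - flow_gt, axis=1)))
```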
Citations: 0

PAMSNet: A point annotation-driven multi-source network for remote sensing semantic segmentation
IF 12.2 · Tier 1, Earth Science
ISPRS Journal of Photogrammetry and Remote Sensing Pub Date : 2025-08-18 DOI: 10.1016/j.isprsjprs.2025.07.035
Yuanhao Zhao , Mingming Jia , Genyun Sun , Aizhu Zhang
Abstract: Multi-source data semantic segmentation has proven effective for improving classification accuracy in remote sensing. With the rapid development of deep learning, the demand for large amounts of high-quality labeled samples has become a major bottleneck, limiting the broader application of these techniques. Weakly supervised learning has attracted increasing attention by reducing annotation costs, but existing weakly supervised methods often suffer from limited accuracy, and effectively exploiting complementary information from multi-source remote sensing data using only a small number of labeled points remains a significant challenge. In this paper, we propose the Point Annotation-Driven Multi-source Segmentation Network (PAMSNet), which leverages point annotations to capture and integrate complementary features from multi-source remote sensing data. PAMSNet comprises a Multi-source Feature Encoder and a Cross-Resolution Feature Integration (CRFI) module. The encoder captures complementary global and local features using lightweight convolutional Global-Local Multi-source (GLMS) modules, while Spectral-Edge Enhancement (SEE) modules improve boundary and spectral detail robustness, effectively mitigating the impact of noise on segmentation accuracy. The CRFI module replaces conventional decoding structures by combining convolutional and Transformer mechanisms, enabling efficient cross-scale feature integration and improving the identification of multi-scale objects with reduced computational demands. Extensive experiments on the Vaihingen, WHU-IS, and WHU-OPT-SAR datasets validate the effectiveness of PAMSNet for weakly supervised multi-source segmentation, as well as the validity of the proposed modules. PAMSNet achieves state-of-the-art performance, with MIoU improvements of 2.4%, 2.1%, and 3.16% on the three datasets, using only 0.01% point annotations. PAMSNet also balances performance and operational efficiency better than existing methods, further promoting the application of deep learning to remote sensing image mapping.

Volume 229, Pages 1-16.
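The MIoU figures quoted above follow the standard per-class intersection-over-union definition, averaged over classes. A minimal sketch:

```python
import numpy as np

def mean_iou(pred, ref, num_classes):
    """Mean intersection-over-union across classes, computed from flat
    label maps; classes with an empty union are skipped."""
    pred, ref = np.asarray(pred).ravel(), np.asarray(ref).ravel()
    ious = []
    for c in range(num_classes):
        inter = np.sum((pred == c) & (ref == c))
        union = np.sum((pred == c) | (ref == c))
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))
```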
Citations: 0