ISPRS Journal of Photogrammetry and Remote Sensing — Latest Articles
Aboveground biomass mapping of Canada with SAR and optical satellite observations aided by active learning
IF 10.6 · Q1 (Earth Science)
ISPRS Journal of Photogrammetry and Remote Sensing Pub Date : 2025-05-24 DOI: 10.1016/j.isprsjprs.2025.05.022
Shuhong Qin , Hong Wang , Cheryl Rogers , José Bermúdez , Ricardo Barros Lourenço , Jingru Zhang , Xiuneng Li , Jenny Chau , Piotr Tompalski , Alemu Gonsamo
Abstract: National forest inventory (NFI) data have become an indispensable reference for model training and validation when estimating forest aboveground biomass (AGB) from satellite observations. However, obtaining statistically sufficient NFI data for model training is challenging for countries with vast land areas and extensive forest cover, such as Canada. This study aims to upscale all available NFI data directly into high-resolution (30 m), spatially continuous AGB and explicit-uncertainty maps across Canada's treed land, using seasonal Sentinel-1/2 and yearly mosaics of L-band Synthetic Aperture Radar (SAR) observations. The commonly used Random Forest (RF) model performs poorly with limited training data, fails to extrapolate predictions beyond the bounds of the training dataset, and cannot provide spatially explicit uncertainties; to address these limitations, a Gaussian Process Regression (GPR) model with active learning optimization was introduced. The models were trained using stratified 10-fold cross-validation (ST10CV) and optimized by Euclidean distance-based diversity with bidirectional active learning (EBD-BDAL) before being extrapolated on the Google Earth Engine (GEE) platform. The GPR model optimized with EBD-BDAL estimated Canada's 2020 treed-land AGB at 40.68 ± 6.8 Pg, with managed and unmanaged forests accounting for 82% and 18%, respectively. Trees outside forest ecosystems account for 2% (0.8 Pg) of total AGB in Canada's treed land, and urban treed lands hold 0.1134 Pg. The uncertainty analysis showed that the GPR model demonstrated superior extrapolation capability for high-AGB forests while maintaining lower relative uncertainty. The ST10CV results showed that the GPR model outperformed RF with or without EBD-BDAL optimization. The proposed NFI upscaling framework based on the GPR model and EBD-BDAL optimization shows great potential for national AGB mapping from limited NFI data and seasonal satellite observations. (Volume 226, Pages 204–220)
Citations: 0
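Unlike RF, GPR yields a predictive variance for every pixel alongside the mean, which is what makes the spatially explicit uncertainty maps above possible. A minimal NumPy sketch of the standard GP posterior (a toy illustration with a fixed RBF kernel and noise level — assumed values, not the authors' trained model):

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0, variance=1.0):
    # Squared-exponential (RBF) kernel between the row vectors of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / length_scale ** 2)

def gpr_predict(X_train, y_train, X_test, noise=1e-2):
    # Textbook GP regression: posterior mean and per-point variance.
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = rbf_kernel(X_test, X_train)
    Kss = rbf_kernel(X_test, X_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = np.diag(Kss) - (v ** 2).sum(0)  # uncertainty grows away from training data
    return mean, var
```

A test point far outside the training range comes back with variance near the prior, which is exactly the behaviour the uncertainty maps exploit when flagging poorly sampled forest strata.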
Deep learning and machine learning enable broad-scale woodland height, cover, and biomass mapping
IF 10.6 · Q1 (Earth Science)
ISPRS Journal of Photogrammetry and Remote Sensing Pub Date : 2025-05-23 DOI: 10.1016/j.isprsjprs.2025.05.016
Michael J. Campbell , Jessie F. Eastburn , Simon C. Brewer , Philip E. Dennison
Abstract: Accurate, spatially explicit quantification of vegetation structure in drylands can improve our understanding of the important role these critical ecosystems play in the Earth system. In semiarid woodland settings, remote sensing of vegetation structure is challenging due to low tree height, cover, and greenness, as well as the limited spatial and temporal availability of airborne lidar data. These limitations have hindered the development of remote sensing applications in globally widespread and ecologically important dryland systems. In this study, we implement a U-Net convolutional neural network capable of predicting per-pixel, lidar-derived vegetation height in piñon-juniper woodlands using widely available, high-resolution aerial imagery. We used this imagery and the modeled canopy height data to construct random forest models for predicting tree density, canopy cover, and live aboveground biomass. Trained and validated on a field dataset spanning diverse portions of the vast range of piñon-juniper woodlands in the southwestern US, our models demonstrated high performance in both variance explained (R² = 0.45 for density, 0.80 for cover, 0.61 for biomass) and predictive error (%RMSE = 57 for density, 19 for cover, 42 for biomass). A comparative analysis revealed that, while their performance was somewhat lower than that of models driven solely by airborne lidar, it vastly exceeded that of models driven by aerial imagery alone or by a combination of Landsat, topography, and climate data. Although the structural prediction maps contain some artifacts from the illumination and perspective differences inherent to aerial imagery, this workflow represents a viable pathway for spatially exhaustive and temporally consistent vegetation structure mapping in piñon-juniper and other dry woodland ecosystems. (Volume 226, Pages 187–203)
Citations: 0
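The two validation metrics reported above — variance explained (R²) and RMSE as a percentage of the observed mean (%RMSE) — can be computed with a few lines of NumPy (the helper name is ours, written for illustration):

```python
import numpy as np

def r2_and_pct_rmse(y_true, y_pred):
    # R^2 (variance explained) and RMSE normalised by the observed mean,
    # the two scores quoted for the woodland structure models.
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    resid = y_true - y_pred
    r2 = 1.0 - (resid ** 2).sum() / ((y_true - y_true.mean()) ** 2).sum()
    pct_rmse = 100.0 * np.sqrt((resid ** 2).mean()) / y_true.mean()
    return r2, pct_rmse
```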
ProFiT: A prompt-guided frequency-aware filtering and template-enhanced interaction framework for hyperspectral video tracking
IF 10.6 · Q1 (Earth Science)
ISPRS Journal of Photogrammetry and Remote Sensing Pub Date : 2025-05-22 DOI: 10.1016/j.isprsjprs.2025.05.008
Yuzeng Chen , Qiangqiang Yuan , Yuqi Tang , Xin Wang , Yi Xiao , Jiang He , Ziyang Lihe , Xianyu Jin
Abstract: Hyperspectral (HSP) video data offer rich spectral–spatial–temporal information crucial for capturing object dynamics, attenuating the drawbacks of classical unimodal and multimodal tracking. Current HSP trackers, however, often fall short in feature refinement and information interaction, which caps their capability. This study presents ProFiT, a prompt-guided frequency-aware filtering and template-enhanced interaction framework for HSP video tracking that mitigates these issues. First, ProFiT introduces a frequency-aware filtering module with adaptive filter generators to refine spectral–spatial consistency within HSP and false-color features. Then, a template-enhanced interaction module is introduced to extract complementary information for efficient cross-modal interactions. Furthermore, a token fusion module is devised to capture contextual dependencies with minimal parameters, while a temporal decoder embeds historical states to ensure temporal coherence. Comprehensive experiments across nine HSP benchmarks demonstrate that ProFiT achieves competitive accuracy, with overall precision and success-rate scores of 0.870 and 0.678, respectively, at 39.5 frames per second. These results outperform 59 state-of-the-art trackers, establishing ProFiT as a robust solution for HSP video tracking. The code and results will be accessible at: https://github.com/YZCU/ProFiT. (Volume 226, Pages 164–186)
Citations: 0
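ProFiT's filtering module *learns* adaptive filters per input; the abstract does not give their form. As a rough illustration of what frequency-domain filtering of a 2-D feature map means, here is a fixed low-pass variant (an assumed toy, not the paper's module):

```python
import numpy as np

def frequency_filter(feature_map, cutoff=0.25):
    # Transform a 2-D feature map with the FFT, keep frequencies within
    # `cutoff` of the spectrum radius (a hard low-pass mask), and invert.
    F = np.fft.fftshift(np.fft.fft2(feature_map))
    h, w = feature_map.shape
    yy, xx = np.ogrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2)   # distance from the DC component
    mask = r <= cutoff * min(h, w)
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))
```

A learned version would replace the hard mask with per-input weights; the fixed mask here only shows the round trip through the frequency domain.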
RSTeller: Scaling up visual language modeling in remote sensing with rich linguistic semantics from openly available data and large language models
IF 10.6 · Q1 (Earth Science)
ISPRS Journal of Photogrammetry and Remote Sensing Pub Date : 2025-05-21 DOI: 10.1016/j.isprsjprs.2025.05.002
Junyao Ge, Xu Zhang, Yang Zheng, Kaitai Guo, Jimin Liang
Abstract: Abundant, well-annotated multimodal data in remote sensing are pivotal for aligning complex visual remote sensing (RS) scenes with human language, enabling the development of specialized vision-language models across diverse RS interpretation tasks. However, annotating RS images with rich linguistic semantics at scale demands RS expertise and substantial human labor, making it costly and often impractical. In this study, we propose a workflow that leverages large language models (LLMs) to generate multimodal datasets with semantically rich captions at scale from plain OpenStreetMap (OSM) data for images sourced from the Google Earth Engine (GEE) platform. This approach facilitates the generation of paired remote sensing data and can be readily scaled up using openly available data. Within this framework, we present RSTeller, a multimodal dataset comprising over 1.3 million RS images, each accompanied by two descriptive captions. Extensive experiments demonstrate that RSTeller enhances the performance of multiple existing vision-language models for RS scene understanding through continual pre-training. Our methodology significantly reduces the manual effort and expertise needed to annotate remote sensing imagery while democratizing access to high-quality annotated data. This advancement fosters progress in visual language modeling and encourages broader participation in remote sensing research and applications. The RSTeller dataset is available at https://github.com/SlytherinGe/RSTeller. (Volume 226, Pages 146–163)
Citations: 0
Identify and track white flower and leaf phenology of deciduous broadleaf trees in spring with time series PlanetScope images
IF 10.6 · Q1 (Earth Science)
ISPRS Journal of Photogrammetry and Remote Sensing Pub Date : 2025-05-19 DOI: 10.1016/j.isprsjprs.2025.05.013
Baihong Pan , Xiangming Xiao , Shanshan Luo , Li Pan , Yuan Yao , Chenchen Zhang , Cheng Meng , Yuanwei Qin
Abstract: In spring, many deciduous broadleaf trees begin with flower emergence followed by leaf emergence, two key phenological events that signal the onset of reproduction and vegetative growth in a year. These trees provide essential resources for early pollinators searching for flowers, contribute to biodiversity, and create socio-economic benefits through tourism, so accurate detection and monitoring of their flower and leaf phenology are important. In this study, we combine in-situ photo observations with time series satellite data from spring 2024 to develop new methods for identifying and tracking the white flower and leaf phenology of Callery Pear trees, deciduous broadleaf trees distributed worldwide. We analyzed in-situ photos along with surface reflectance and flower- and leaf-related vegetation indices from three optical satellite datasets: PlanetScope (3 m, daily), Sentinel-2 A/B (10 m, 5-day), and Harmonized Landsat and Sentinel-2 (HLS, 30 m, 2–3-day; HLSL30 and HLSS30). Time series of the White Flower Index (WFI), a combination of the blue, green, and red bands, delineated the flowering period (start, peak, and end dates) of white (light-colored) flowers. Time series of the chlorophyll and green leaf indicator (CGLI; Blue < Green > Red) delineated the green leaf emergence dates of the trees (start of season, SOS). In comparison, the flower and leaf phenology of these trees cannot be accurately identified and tracked by Sentinel-2 data, due to an insufficient number of good-quality observations, or by HLS data, due to mixed land cover types in 30-m pixels. This study enhances our understanding of the surface reflectance dynamics of flowers and green leaves of these trees in spring and demonstrates the critical role of satellite data with high spatio-temporal resolutions, and of the WFI and CGLI algorithms, in tracking floral and leaf phenology. (Volume 226, Pages 127–145)
Citations: 0
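The CGLI condition is spelled out in the abstract (Blue < Green > Red), but the exact WFI formula is not, so the WFI below is a hypothetical stand-in that simply rewards bright, spectrally flat (white) pixels — our assumption, not the paper's definition:

```python
import numpy as np

def green_leaf_flag(blue, green, red):
    # CGLI logic as stated in the abstract: green reflectance exceeds both
    # blue and red once chlorophyll-bearing leaves emerge.
    return (green > blue) & (green > red)

def white_flower_index(blue, green, red):
    # Placeholder WFI: white petals reflect all visible bands strongly and
    # near-equally, so reward high brightness and low band-to-band spread.
    stack = np.stack([blue, green, red])
    return stack.mean(0) - stack.std(0)
```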
Cloud removal with optical and SAR imagery via multimodal similarity attention
IF 10.6 · Q1 (Earth Science)
ISPRS Journal of Photogrammetry and Remote Sensing Pub Date : 2025-05-16 DOI: 10.1016/j.isprsjprs.2025.05.004
Kelong Tu , Chao Yang , Yaxian Qing , Kunlun Qi , Nengcheng Chen , Jianya Gong
Abstract: Optical remote sensing images are crucial data sources for various applications, including agricultural monitoring, land cover classification, and urban planning. However, cloud cover often hinders their effectiveness, posing a significant challenge to downstream tasks. To address this issue, we introduce the Similarity-based Multimodal De-Clouding Network (SMDCNet), a framework that enhances the quality of optical remote sensing images by using multimodal similarity attention to integrate complementary information from synthetic aperture radar (SAR) imagery. First, we introduce a similarity feature attention (SFA) module that explores the similarity between optical and SAR features, aligning these cross-domain features to guide the optical encoder's focus toward cloud-free regions for more accurate feature alignment. Building on this, we propose a differential feature extraction (DFE) module that selectively uses SAR features to compensate for cloud-covered regions in the optical images. To mitigate blurriness in the de-clouded images, we incorporate differential characteristics injection (DCI) and multi-scale feature fusion (MSFF) modules, which collaboratively enhance the reconstruction of detailed information. Our experiments on the SEN12MS-CR dataset demonstrate that SMDCNet effectively restores high-quality cloud-free images, achieving a PSNR of 30.2759 dB and outperforming state-of-the-art cloud removal techniques. (Volume 226, Pages 116–126)
Citations: 0
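The PSNR figure quoted above measures how closely the de-clouded image matches the cloud-free reference. For reflectance images scaled to [0, 1] it is the standard definition:

```python
import numpy as np

def psnr(reference, restored, max_val=1.0):
    # Peak signal-to-noise ratio in dB: higher means the restored image is
    # closer to the cloud-free reference; identical images give infinity.
    diff = np.asarray(reference, float) - np.asarray(restored, float)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)
```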
Mapping seamless surface water dynamics over East Africa semimonthly at a 10-meter resolution in 2017–2023 by integrating Sentinel-1/2 data
IF 10.6 · Q1 (Earth Science)
ISPRS Journal of Photogrammetry and Remote Sensing Pub Date : 2025-05-15 DOI: 10.1016/j.isprsjprs.2025.04.032
Zirui Wang , Zhen Hao , Qichi Yang , Paul Mapfumo , Elijah Nyakudya , Yun Du , Xue Yan , Feng Ling
Abstract: Surface water resources are widely distributed and fluctuate rapidly, necessitating large-scale, high-frequency monitoring. Remote sensing technologies provide critical data for this purpose, but challenges such as data gaps and contamination hinder effective observation of surface water dynamics at fine temporal scales. This limitation can obscure short-term water variations and ultimately lead to misclassified inundation extents. This study develops a framework for large-scale, short-interval surface water monitoring using Sentinel-1/2 data, and generates a surface water dynamics product for detailed analysis of water distribution and change in East Africa (EA). Specifically, we proposed a novel water mapping algorithm comprising water extraction, integration, and filtering techniques for Sentinel-1/2 data to map semimonthly surface water dynamics across EA. We then used a simple similarity-based gap-filling method to fill the data gaps in these water maps. Using this framework, we generated a semimonthly, seamless surface water dynamics product covering EA from 2017 to 2023 and conducted a comprehensive spatiotemporal analysis of surface water distribution and dynamics with it. The results showed that the water mapping algorithm achieved an overall accuracy of 0.9746, with precision (0.9815) higher than recall (0.9706). The gap-filling algorithm proved highly robust, with overall accuracy exceeding 0.98 under different scenarios. The spatial distribution of surface water in EA is heterogeneous, dominated by permanent water area (66.57%), followed by temporary (22.04%) and seasonal (11.39%) water areas. The overall surface water area in EA fluctuated, increasing from 2017 to 2021 and then decreasing from 2021 to 2023. By incorporating SAR data and increasing observation frequency, our product revealed finer-scale surface water dynamics than previous products. This large-scale, short-interval mapping framework provides new insights for regional and global water resource monitoring, and the EA dataset serves as a key reference for water management in the region. (Volume 225, Pages 440–460)
Citations: 0
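The overall accuracy, precision, and recall reported for the water maps come from the standard binary confusion matrix; a small sketch for a boolean water mask:

```python
import numpy as np

def water_map_scores(truth, pred):
    # Overall accuracy, precision, and recall for a binary water mask —
    # the three scores quoted for the semimonthly water product.
    truth = np.asarray(truth, bool)
    pred = np.asarray(pred, bool)
    tp = (truth & pred).sum()     # water correctly mapped
    fp = (~truth & pred).sum()    # land mapped as water
    fn = (truth & ~pred).sum()    # water missed
    tn = (~truth & ~pred).sum()   # land correctly mapped
    oa = (tp + tn) / truth.size
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return oa, precision, recall
```

Precision above recall, as in the paper's result, means the product maps water conservatively: few false water pixels, at the cost of missing some true ones.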
Combining optical and SAR satellite data to monitor coastline changes in the Black Sea
IF 10.6 · Q1 (Earth Science)
ISPRS Journal of Photogrammetry and Remote Sensing Pub Date : 2025-05-15 DOI: 10.1016/j.isprsjprs.2025.05.003
Dalin Jiang , Armando Marino , Maria Ionescu , Mamuka Gvilava , Zura Savaneli , Carlos Loureiro , Evangelos Spyrakos , Andrew Tyler , Adrian Stanica
Abstract: The coastal environments of the Black Sea are of high ecological and socio-economic importance. Understanding changes along this extensive and complex coastline can help us comprehend pressures from nature, society, and extreme events, providing valuable insights for more effective management and the prevention of future adverse changes. Current methods for monitoring coastal dynamics rely on accurate extraction of coastlines from optical and/or Synthetic Aperture Radar (SAR) images, providing information only on the rate of change. This study developed a simple yet novel approach that combines Sentinel-1 SAR imagery for surface change detection with Sentinel-2 Multispectral Instrument (MSI) optical imagery for coastline detection, providing data on both the rate and the area of change. Coastlines were extracted from the Modified Normalised Difference Water Index (MNDWI) calculated from MSI images, and rates of change were calculated from the extracted coastlines. SAR images of the same areas were stacked and differenced over the analysis period, allowing the area of change to be determined. Another new method was developed to combine the changes detected from optical and SAR images, retaining only results at locations that showed a consistent change direction (erosion or accretion). The extracted coastlines were validated against in situ-measured coastlines along the Romanian and Georgian coasts; the average difference between satellite-derived and in situ coastlines was 11.8 m. The method was then applied to the entire Black Sea coast, revealing 35.1 km² of changes between 2016 and 2023: 23.9 km² (68%) of coastal advance and 11.3 km² (32%) of retreat. An estimated 54% of the changes result from natural coastline erosion or accretion, whilst 35% can be attributed to artificial changes related to construction activity; around 11% are attributed to random occurrences due to boat/ship movement or land cover changes on adjacent land. Natural coastline changes were mainly observed near deltaic and estuarine systems and along sandy shorelines, including the Danube Delta, the Kızılırmak-Yeşilırmak deltas, the Chorokhi-Rioni-Kodori River mouths, and the coast from the Dnieper-Bug Estuary to Karkinit Bay. Artificial changes were mainly found along the southern Black Sea coast, where airports, ports, harbours, and jetties have been constructed in recent years. The proposed method provides a simple, efficient, and accurate way to monitor coastline change, and the findings can support sustainable coastal zone management in the Black Sea. (Volume 226, Pages 102–115)
Citations: 0
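MNDWI, the index used here for coastline extraction, is the standard Xu (2006) formulation from the green and shortwave-infrared (SWIR) bands; water pixels come out positive because water is bright in green and dark in SWIR:

```python
import numpy as np

def mndwi(green, swir):
    # Modified Normalised Difference Water Index (Xu, 2006).
    # Water: high green, low SWIR reflectance -> MNDWI > 0.
    green = np.asarray(green, float)
    swir = np.asarray(swir, float)
    return (green - swir) / (green + swir)
```

Thresholding MNDWI at zero (or an Otsu-derived value) then yields the water mask from which the coastline is traced.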
Pose-graph optimization for efficient tie-point matching and 3D scene reconstruction from oblique UAV images
IF 10.6 · Q1 (Earth Science)
ISPRS Journal of Photogrammetry and Remote Sensing Pub Date : 2025-05-15 DOI: 10.1016/j.isprsjprs.2025.04.013
Zhihua Xu , Yongze Niu , Jincheng Jiang , Rongjun Qin , Ximin Cui
Abstract: Oblique photogrammetry using unmanned aerial vehicles (UAVs) is crucial to 3D scene reconstruction. Oblique images, however, are generally acquired with large overlaps, sometimes from multiple views. Although this increases data completeness, it introduces additional, often redundant computations in tie-point matching, creating overly dense camera connections in the pose graph used for bundle adjustment (BA). This study optimizes the pose graph of oblique UAV images by removing redundant image connections to guide tie-point matching. Assuming a five-camera system for oblique image collection, a pose graph called a topologically connected camera network (TCN) was first constructed from position and orientation system (POS) data to determine the spatial connectivity among oblique images. Second, five geometric meta-parameters of overlapping images were constructed, and their influence on tie-point matching was analyzed using a data-driven approach to generate a weighted pose graph. Third, the weighted pose graph was simplified to a degree-bounded skeletal camera network (D-SCN) using the proposed two-stage multi-objective graph optimization approach. Finally, the D-SCN was embedded into a structure-from-motion (SfM) pipeline to produce a novel D-SCN-SfM method that reduces the computations required for tie-point matching. The proposed D-SCN-SfM method was tested on data from three large sites, each containing over 5,000 images, and compared with three state-of-the-art methods. The experimental results indicate that our method reduces the computations required for tie-point matching to 1/14–1/20 of those of a method using only topological constraints (i.e., TCN), saving 89–92% of the time expenditure. Furthermore, the accuracy and completeness of the 3D geometry produced by the proposed method were comparable to those of standard SfM methods. The source code is publicly available at https://github.com/qiuda16/D-SCN. (Volume 225, Pages 461–491)
Citations: 0
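The paper's two-stage multi-objective optimization is not detailed in the abstract; the greedy pass below only illustrates the degree-bounded idea — keep the strongest-weighted camera connections while capping each camera's degree (our simplification, not the D-SCN algorithm):

```python
def degree_bounded_prune(edges, max_degree):
    # edges: (camera_u, camera_v, weight) triples from a weighted pose graph.
    # Greedily keep edges in descending weight order, skipping any edge that
    # would push either endpoint past the degree bound.
    degree = {}
    kept = []
    for u, v, w in sorted(edges, key=lambda e: -e[2]):
        if degree.get(u, 0) < max_degree and degree.get(v, 0) < max_degree:
            kept.append((u, v, w))
            degree[u] = degree.get(u, 0) + 1
            degree[v] = degree.get(v, 0) + 1
    return kept
```

Every surviving edge then triggers one pairwise tie-point matching job, so bounding the degree directly bounds the matching workload per image.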
Multimodal large language model for wheat breeding: A new exploration of smart breeding
IF 10.6 · Q1 (Earth Science)
ISPRS Journal of Photogrammetry and Remote Sensing Pub Date : 2025-05-15 DOI: 10.1016/j.isprsjprs.2025.03.027
Guofeng Yang , Yu Li , Yong He , Zhenjiang Zhou , Lingzhen Ye , Hui Fang , Yiqi Luo , Xuping Feng
Abstract: Unmanned aerial vehicle remote sensing has become a key technology in crop breeding, enabling high-throughput, non-destructive collection of crop phenotyping data. However, the multidisciplinary nature of breeding raises technical barriers and efficiency challenges for knowledge mining, so a smart breeding tool for mining cross-domain multimodal data is needed. Based on different pre-trained open-source multimodal large language models (MLLMs) (e.g., Qwen-VL, InternVL, Deepseek-VL), this study used supervised fine-tuning (SFT), retrieval-augmented generation (RAG), and reinforcement learning from human feedback (RLHF) to inject cross-domain knowledge into MLLMs, thereby constructing multiple multimodal large language models for wheat breeding (WBLMs). The WBLMs were evaluated using an evaluation benchmark newly created in this study. The results showed that the WBLM built with SFT, RAG, and RLHF on InternVL2-8B achieved the leading performance, and subsequent experiments used this WBLM. Ablation experiments indicated that combining SFT, RAG, and RLHF improves overall generation performance, enhances generation quality, balances the timeliness and adaptability of answers, and reduces hallucinations and biases. The WBLM performed best in wheat yield prediction when using cross-domain data (remote sensing, phenotyping, weather, and germplasm) simultaneously, with an R² of 0.821 and an RMSE of 489.254 kg/ha. Furthermore, the WBLM can generate professional decision-support answers for phenotyping estimation, environmental stress assessment, target germplasm screening, cultivation technique recommendation, and seed price query tasks. This study aims to improve the application of remote sensing in crop breeding by enabling precise assessment and prediction of wheat germplasm breeding materials in alignment with breeding goals, thereby accelerating the selection of superior varieties and better supporting breeding decisions. (Volume 225, Pages 492–513)
Citations: 0