{"title":"Simultaneous detection and restoration of building rooftop tree occlusion with a self-supervised diffusion process","authors":"Liu Jianhua , Xinyu Wang , Kaiqi Wang","doi":"10.1016/j.isprsjprs.2025.08.014","DOIUrl":"10.1016/j.isprsjprs.2025.08.014","url":null,"abstract":"<div><div>Building rooftops in high resolution remote sensing images often suffer from various occlusion that destroy the original features. However, there is a lack of a comprehensive method for the simultaneous detection and restoration of such occlusions. This paper focuses on tree occlusion and proposes a diffusion-based model, named Rooftop Tree Detection and Restoration (RTDR). The method defines tree occlusion restoration as a T-step denoising process. We innovatively perform occlusion location extraction and original pixel prediction simultaneously. Based on the prediction results of the tree occlusion decomposition model, the gradient of pixel changes within the occluded areas is obtained. This gradient is incorporated into the backward denoising process of the conditional diffusion model to guide the self-supervised pre-trained diffusion model in restoring the complete building rooftop from the occluded image. Meanwhile, this paper proposes a tree occlusion simulation process based on the spatial combination of randomness between rooftops and trees for generating realistic rooftop occlusion data. The experimental results demonstrate that RTDR achieves satisfactory restoration performance on both simulated and real rooftop tree occlusion datasets. On the simulated tree occlusion dataset, the accuracy evaluation metrics PSNR/SSIM/NIQE are 21.736/0.8177/9.1711, respectively; on the real tree occlusion dataset, the quantitative evaluation metrics Precision/Recall/IoU/F1-Score are improved from 0.8568/0.5789/0.5565/0.6656 to 0.8261/0.7863/0.6818/0.7871. 
In addition, module and sample ablation experiments validate the effectiveness of the spectral rooftop dataset BUCEA4.0 and the robustness of RTDR. Codes and datasets are openly available at <span><span>https://github.com/GHLJH/RTDR</span></span> and <span><span>https://www.dxkjs.com/tw/Public/about/html/rs_yangben.html</span></span>.</div></div>","PeriodicalId":50269,"journal":{"name":"ISPRS Journal of Photogrammetry and Remote Sensing","volume":"229 ","pages":"Pages 366-381"},"PeriodicalIF":12.2,"publicationDate":"2025-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145019449","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Optimal approach to utilize multiple-pass ICESat-2 ATL03 data for satellite-derived empirical bathymetry","authors":"Bin Cao , Longhai Xiong , Hui Liu , Jinlin Chen , Hui Zhang , Shiwen Wu , Dehe Xu , Bincai Cao","doi":"10.1016/j.isprsjprs.2025.08.024","DOIUrl":"10.1016/j.isprsjprs.2025.08.024","url":null,"abstract":"<div><div>The Ice, Cloud, and Land Elevation Satellite-2 (ICESat-2) that carries an Advanced Topographic Laser Altimeter System (ATLAS) is a highly successful earth observing system. Utilizing ICESat-2′s data product for satellite-derived empirical bathymetry helps the latter to be totally independent of ground data and really based on satellites. Normally, ICESat-2′s ATLAS instrument can provide multiple-pass global geolocated data (i.e., ATL03), which are typically collected from the satellite’s multiple passes along the orbit, for a bathymetry area. How this type of data is efficiently used for empirical bathymetry is a tricky and unsolved problem. This article aims to find a solution to an optimal or near-optimal use of such multiple-pass ICESat-2 ATL03 data for satellite-derived empirical bathymetry, by observing and analyzing the bathymetric performance of their various possible combinations. The focus is to solve a problem of how model calibration data, whose depths come from multiple-pass ICESat-2 ATL03 data and whose logarithmic blue/green band ratios come from satellite multispectral images, are refined to remove their irrational components and retain their useful information. Related experiments were conducted in Saipan and Qilianyu study areas, with WorldView-2, Sentinel-2 and Landsat 8 multispectral images and multiple-pass ICESat-2 ATL03 data. The experiments showed that using any individual pass of multiple-pass ICESat-2 ATL03 data can hardly achieve a desired accuracy in the entire bathymetry area, and that using entire multiple-pass ICESat-2 ATL03 data cannot either necessarily provide an optimal bathymetric result. 
An optimal way of utilizing multiple-pass ICESat-2 ATL03 data for empirical bathymetry is to first form model calibration data from the entire multiple-pass dataset, then clean the resulting calibration data with the Isolation Forest-based refinement method proposed in this article, and finally use the cleaned calibration data for bathymetric model training. The proposed refinement method is highly effective for cleaning model calibration data, especially for removing illogical data points close to the main skeleton of the data. Applying this refinement method to empirical bathymetry enables more robust bathymetry estimation from satellite images, even in turbid shallow-water areas.</div></div>","PeriodicalId":50269,"journal":{"name":"ISPRS Journal of Photogrammetry and Remote Sensing","volume":"229 ","pages":"Pages 303-322"},"PeriodicalIF":12.2,"publicationDate":"2025-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144988930","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
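The refine-then-train workflow described in this abstract can be sketched with scikit-learn. Everything below is an illustrative assumption on our part, not the authors' implementation: the calibration set is synthetic, depth is made roughly linear in a log blue/green band ratio, and the outlier fraction is a guess.

```python
# Sketch: clean pooled multiple-pass calibration data with an Isolation
# Forest, then train a simple band-ratio bathymetric model on the survivors.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Synthetic calibration set: depth roughly linear in the log band ratio,
# plus a handful of "illogical" points near the main skeleton of the data.
ratio = rng.uniform(0.9, 1.3, 500)                    # log blue/green ratio
depth = 40.0 * (ratio - 0.9) + rng.normal(0, 0.5, 500)
outliers = np.column_stack([rng.uniform(0.9, 1.3, 25),
                            rng.uniform(0.0, 16.0, 25)])
X = np.vstack([np.column_stack([ratio, depth]), outliers])

# Step 1: Isolation Forest refinement (fit_predict returns 1 for inliers).
mask = IsolationForest(contamination=0.05, random_state=0).fit_predict(X) == 1
clean = X[mask]

# Step 2: train the empirical bathymetric model on the cleaned data only.
model = LinearRegression().fit(clean[:, :1], clean[:, 1])
print(f"kept {mask.sum()} of {len(X)} points; slope={model.coef_[0]:.2f}")
```

The point of the sketch is the ordering: pooling all passes first, refining second, and training last, rather than refining each pass independently.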
{"title":"Integration of ground-based markerless photogrammetry for the geocoding of a ground-based SAR observation","authors":"Fathin Nurzaman , Yuta Izumi , Motoyuki Sato , Koki Urano , Shima Kawamura , Josaphat Tetuko Sri Sumantyo , Kaoru Ota , Mianxiong Dong , Wedyanto Kuntjoro","doi":"10.1016/j.isprsjprs.2025.08.007","DOIUrl":"10.1016/j.isprsjprs.2025.08.007","url":null,"abstract":"<div><div>A novel radar geocoding technique for ground-based Synthetic Aperture Radar (SAR) data has been introduced utilizing ground-based markerless photogrammetry. The technique offers a resource advantage, requiring only a consumer-grade camera. Combined with its markerless approach, it is fully independent of external data and remains non-invasive, which complements the strengths of ground-based radar. The ground-based photogrammetry involves taking photos from the radar’s position on the ground. This shared perspective between the radar measurements and photogrammetry ensures that the resulting 3D model aligns well with the radar data, as only the radar-illuminated surface is reconstructed, which facilitates the subsequent geocoding operation. However, the typically limited possibilities on the ground to acquire photos often result in a poor photogrammetric network. This suboptimal photogrammetric network causes distortion in the obtained 3D model, which is also caused by the nonexistence of ground control points associated with the markerless approach. A geometric correction method is proposed here, which removes this distortion by relying on the already available SAR image obtained from the ground-based radar measurement, owing to its absolute range measurement that is free from distortion. The distortion parameters were thoroughly examined, which includes the radial distortion typical of a suboptimal photogrammetric network, from which the transformation model is formulated. 
The 3D model is then restituted with the SAR image as reference using the transformation model, which effectively removes the distortion and at the same time achieves the alignment quality needed in the subsequent reprojection within the geocoding process.</div><div>The technique is being implemented in an ongoing ground-based SAR campaign monitoring a residential area. A significant improvement in tie-point alignment between the 3D model and the SAR image was demonstrated after applying the geometric correction process. The final geocoding result was compared with one obtained using a freely available global Digital Elevation Model (DEM), showing that the proposed technique yields satisfactory positional accuracy for the SAR signal: the mean positional error improved from 4.30 m to 1.69 m.</div></div>","PeriodicalId":50269,"journal":{"name":"ISPRS Journal of Photogrammetry and Remote Sensing","volume":"229 ","pages":"Pages 323-335"},"PeriodicalIF":12.2,"publicationDate":"2025-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144988931","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A real-time image retrieval and localization method based on 360-degree panoramic visual feature maps","authors":"Wenwu Ou , Qingwu Hu , Mingyao Ai , Pengcheng Zhao , Shunli Wang , Xujie Zhang , Shuowen Huang","doi":"10.1016/j.isprsjprs.2025.08.018","DOIUrl":"10.1016/j.isprsjprs.2025.08.018","url":null,"abstract":"<div><div>Accurate and reliable localization in large indoor environments without satellite signals remains a significant challenge. In recent years, visual localization has emerged as a popular indoor localization method. Its core idea is to pre-built a 3D sparse feature map database and estimate the 6-DoF pose of query images for precise localization. This technology holds great potential for applications such as augmented reality (AR) and AR navigation in large indoor scenes. However, the presence of weak textures and repetitive textures poses substantial challenges to the pre-built feature map database and image retrieval, severely affecting the accuracy and robustness of localization. In this paper, we propose a real-time image retrieval and localization method based on a 360-degree panoramic visual feature global map. The proposed method consists of three main components: 360° panoramic sparse feature map construction (PGFC); an image retrieval strategy based on point cloud overlap (PCO-IR); visual localization method enhanced by PCO-IR. Extensive experiments demonstrate that our approach surpasses both state-of-the-art research methods and commercial software (e.g., COLMAP, Metashape) in weak-texture and repetitive-texture regions. Across three distinct indoor scenarios, the PCO-IR enhancement yields significant accuracy gains: after optimization, PixLoc and HLOC achieve localization success rates of 95% and 97%, respectively, with mean pose errors reduced to 72% and 37% of their original values. 
The code for our proposed method can be found at <span><span>https://github.com/ouwenwu/pco_ir</span></span>.</div></div>","PeriodicalId":50269,"journal":{"name":"ISPRS Journal of Photogrammetry and Remote Sensing","volume":"229 ","pages":"Pages 351-365"},"PeriodicalIF":12.2,"publicationDate":"2025-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144988867","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Analysis of seasonal distribution of chromophoric dissolved organic matter in turbid estuaries applying a novel semi-analytical algorithm","authors":"Partha A. Patil , Arjun Adhikari , Harilal B. Menon","doi":"10.1016/j.isprsjprs.2025.08.033","DOIUrl":"10.1016/j.isprsjprs.2025.08.033","url":null,"abstract":"<div><div>The study presents a novel semi-analytical algorithm to improve accuracy of chromophoric dissolved organic matter (CDOM) retrieval from remote sensing reflectance (<span><math><msub><mi>R</mi><mrow><mi>rs</mi></mrow></msub></math></span>) in optically complex waters. The bio-optical data used in the study were collected from monsoonal estuaries and coastal waters along the eastern Arabian Sea, and supplemented with repositories of Global Ocean Carbon Algorithm Database and NASA bio-Optical Marine Algorithm Dataset. To retrieve CDOM absorption at 440 nm (<span><math><msubsup><mi>a</mi><mrow><mi>cdom</mi></mrow><mn>440</mn></msubsup></math></span>), a three-wavelength index of the form, <span><math><mrow><mi>x</mi><mo>=</mo><mfenced><mrow><mfrac><mn>1</mn><msubsup><mi>R</mi><mrow><mi>rs</mi></mrow><msub><mi>λ</mi><mn>1</mn></msub></msubsup></mfrac><mo>-</mo><mfrac><mn>1</mn><msubsup><mi>R</mi><mrow><mi>rs</mi></mrow><msub><mi>λ</mi><mn>2</mn></msub></msubsup></mfrac></mrow></mfenced><mo>×</mo><msubsup><mi>R</mi><mrow><mi>rs</mi></mrow><msub><mi>λ</mi><mn>3</mn></msub></msubsup></mrow></math></span> was developed based on the fundamental relation between <span><math><msub><mi>R</mi><mrow><mi>rs</mi></mrow></msub></math></span> and inherent optical properties (absorption and backscattering). This index was regressed and fine-tuned with randomly chosen in-situ data representing different optically complex regions. The wavelengths <span><math><msub><mi>λ</mi><mn>1</mn></msub></math></span>, <span><math><msub><mi>λ</mi><mn>2</mn></msub></math></span>, and <span><math><msub><mi>λ</mi><mn>3</mn></msub></math></span>, are 412 nm, 490 nm and 560 nm. 
The resultant algorithm is <span><math><mrow><msubsup><mi>a</mi><mrow><mi>cdom</mi></mrow><mn>440</mn></msubsup><mo>=</mo><mo>-</mo><mn>0.01368</mn><msup><mrow><mi>x</mi></mrow><mn>2</mn></msup><mo>+</mo><mn>0.102</mn><mi>x</mi><mo>+</mo><mn>0.02295</mn></mrow></math></span>. The agreement between in-situ measured and satellite-derived (Ocean and Land Colour Instrument and Moderate Resolution Imaging Spectroradiometer sensors) <span><math><msubsup><mi>a</mi><mrow><mi>cdom</mi></mrow><mn>440</mn></msubsup></math></span> values was statistically assessed for the global dataset as well as for optically diverse regional subsets of the Gulf of Mexico, Chesapeake-Delaware Bay (CDB) and Mandovi-Zuari estuaries (MZE). Their performance over the in-situ (<span><math><msup><mrow><mi>r</mi></mrow><mn>2</mn></msup></math></span> > 0.6; <span><math><mrow><mi>mape</mi></mrow></math></span> < 45 %) as well as satellite-retrieved reflectance (<span><math><msup><mrow><mi>r</mi></mrow><mn>2</mn></msup></math></span> = 0.72–0.89, and <span><math><mrow><mi>rmse</mi></mrow></math></span> = 0.124–0.2686 m<sup>−1</sup>) surpassed retrieval by widely used empirical, machine learning, and semi-analytical models (In-situ:- <span><math><msup><mrow><mi>r</mi></mrow><mn>2</mn></msup></math></span> = 0.05–0.52; <span><math><mrow><mi>ma","PeriodicalId":50269,"journal":{"name":"ISPRS Journal of Photogrammetry and Remote Sensing","volume":"229 ","pages":"Pages 336-350"},"PeriodicalIF":12.2,"publicationDate":"2025-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144988932","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
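The abstract states the three-wavelength index and the quadratic retrieval in full, so the algorithm can be transcribed directly. The function name and the sample reflectance values below are ours, chosen only for illustration.

```python
# Direct transcription of the semi-analytical CDOM retrieval stated above:
# x = (1/Rrs(412) - 1/Rrs(490)) * Rrs(560), then a quadratic in x.
def acdom440(rrs412, rrs490, rrs560):
    """CDOM absorption at 440 nm (m^-1) from Rrs (sr^-1) at 412/490/560 nm."""
    x = (1.0 / rrs412 - 1.0 / rrs490) * rrs560
    return -0.01368 * x ** 2 + 0.102 * x + 0.02295

# Illustrative Rrs values (not from the paper):
print(acdom440(0.004, 0.006, 0.008))  # roughly 0.085 m^-1
```

Note that when Rrs(412) equals Rrs(490) the index x vanishes and the retrieval reduces to the constant term 0.02295 m⁻¹.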
{"title":"Feature-based multimodal remote sensing image matching: Benchmark and state-of-the-art","authors":"Zhiling Geng , Haibo Liu , Puhong Duan , Xiaohui Wei , Shutao Li","doi":"10.1016/j.isprsjprs.2025.08.028","DOIUrl":"10.1016/j.isprsjprs.2025.08.028","url":null,"abstract":"<div><div>Multimodal remote sensing image matching (MRSIM) is a crucial prerequisite in the remote sensing field, aiming to align images captured by different sensors to facilitate subsequent interpretation and analysis. In recent years, numerous efforts have been made to achieve feature-based MRSIM. However, there is a lack of a comprehensive review of advanced feature-based MRSIM methods and a comparison of their performance on diverse datasets. Additionally, existing datasets often have some limitations in terms of modality diversity and ground truth completeness, which prevent the validation of the performance of algorithms. This paper first provides an extensive overview of latest advances based on the general framework of feature-based MRSIM methods. Then, we summarize existing MRSIM datasets, and construct the HNU-DATASET, including four types of common MRSIM pairs and ground-truth annotations of each image pair. Finally, to ensure a comprehensive evaluation, several representative open-source methods, such as radiation-variation insensitive feature transform (RIFT) and histogram of absolute phase consistency gradients (HAPCG), are employed to benchmark performance on both the proposed HNU-DATASET and multiple publicly available datasets. 
The experimental results can serve as a valuable reference for future research and promote the development of advanced multimodal remote sensing.</div></div>","PeriodicalId":50269,"journal":{"name":"ISPRS Journal of Photogrammetry and Remote Sensing","volume":"229 ","pages":"Pages 285-302"},"PeriodicalIF":12.2,"publicationDate":"2025-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144932766","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reconstructing systematically missing NDVI time series in cropland: A GAN-based approach using optical and SAR data","authors":"Hangyu Dai , Miao Fang , Jinglu Tan , Zhenyu Xu , Ya Guo","doi":"10.1016/j.isprsjprs.2025.08.025","DOIUrl":"10.1016/j.isprsjprs.2025.08.025","url":null,"abstract":"<div><div>Normalized Difference Vegetation Index (NDVI) time series data is essential for monitoring cropland dynamics and assessing crop conditions. However, these data often suffer from large-scale systematic missing patterns due to atmospheric variations and satellite revisit cycles, significantly compromising monitoring accuracy, particularly for capturing rapid surface changes. Existing methods primarily concentrate on recovering cloud-covered data, often overlooking systematic data gaps. To address this limitation, we propose a Periodic Imputation Generative Adversarial Networks (PIGAN) model to reconstruct large-scale systematic missing NDVI remote sensing data. The model integrates optical and synthetic aperture radar (SAR) data as inputs and employs a Generative Adversarial Networks (GAN) to impute NDVI missing values. Specifically, Pearson correlation coefficients and Random Forest (RF) algorithms are utilized to select vegetation-sensitive indices as inputs for the generator. The generator employs a dual-stream architecture and ConvLinBlock to accommodate dual-source data inputs and effectively handle extensive missing patterns. The discriminator transforms the traditional task of distinguishing real from fake data by evaluating the proportion of the real data within fixed intervals using dilated convolutions, thereby addressing the systematic missing patterns in time series. Experimental results demonstrate that the proposed method outperforms existing models across different crop types and varying environmental conditions, achieving over 10% improvement in widely used metrics such as RMSE and MAE. 
Furthermore, the model exhibits superior performance in NDVI spatial–temporal recovery, highlighting its potential for practical applications. The PIGAN code is publicly available at <span><span>https://github.com/hydai-00/PIGAN</span></span>.</div></div>","PeriodicalId":50269,"journal":{"name":"ISPRS Journal of Photogrammetry and Remote Sensing","volume":"229 ","pages":"Pages 270-284"},"PeriodicalIF":12.2,"publicationDate":"2025-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144932765","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
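For context, the NDVI series being reconstructed above is computed from red and near-infrared reflectance by the standard definition; this is general remote sensing background, not code from the PIGAN paper, and the masked series below is only a toy illustration of a systematically missing acquisition.

```python
# Standard NDVI from near-infrared and red reflectance, plus a toy mask
# marking one acquisition as missing (the kind of gap PIGAN would impute).
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """NDVI = (NIR - Red) / (NIR + Red); eps guards against division by zero."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red + eps)

series = ndvi([0.42, 0.45, 0.50], [0.08, 0.07, 0.06])
mask = np.array([True, False, True])          # False = missing acquisition
filled = np.where(mask, series, np.nan)       # NaN slots await imputation
print(filled)
```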
{"title":"A novel deep learning model for landslide mapping using cross-resolution change technique","authors":"Charles W.W. Ng , Tianli Pan , Peifeng Ma","doi":"10.1016/j.isprsjprs.2025.08.004","DOIUrl":"10.1016/j.isprsjprs.2025.08.004","url":null,"abstract":"<div><div>Change detection serves as a prevalent approach for updating landslide inventories. Due to the challenges of continuously acquiring high-resolution images, practical applications often rely on bi-temporal images of varying resolutions for landslide mapping. This study introduces the Segment Anything Model (SAM) that leverages cross-resolution images for landslide mapping. Therefore, expanding the utilization of diverse data sources to enhance the temporal frequency of landslide inventory updates. Three unique modules are developed to enable SAM for landslide mapping with automatic prompt generation capability based on cross-resolution images. The cross-scale feature fusion module is designed to align features from low-resolution images with those from high-resolution images through cross-correlation. The multi-scale feature extraction module enhances the model’s capacity to identify landslides of all sizes, especially smaller ones. An Auto-Prompt module is introduced to transform the model into an end-to-end system that autonomously generates prompts for change detection with high generalization capability. Three experiments were carried out to evaluate the model’s performance across three datasets. The first experiment involved testing on a dataset within the same domain as the training dataset, while the second explored change detection on datasets from regions not included in the training dataset. These two experiments were conducted at resolution ratios of 1:2, 1:4, and 1:8. The third experiment assessed the model’s performance by substituting pre-event images with different image sources. 
Results demonstrate that the proposed model outperforms existing state-of-the-art methods in all three experiments. The average F1 scores of the proposed model across all three resolution ratios in the first and second experiments are 83.8 and 78.9, surpassing the worst-performing model by 24.0 and 48.4, respectively. In the third experiment, the F1 score of the proposed model is 86.1, significantly higher than that of the worst-performing model (52.4). These findings highlight the model’s capability for cross-resolution landslide change detection with high generalization across diverse regions and data sources.</div></div>","PeriodicalId":50269,"journal":{"name":"ISPRS Journal of Photogrammetry and Remote Sensing","volume":"229 ","pages":"Pages 254-269"},"PeriodicalIF":12.2,"publicationDate":"2025-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144917572","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enhanced agricultural land use/land cover classification in the Nile Delta using Sentinel-1 and Sentinel-2 data and machine learning","authors":"Mona Maze , Samar Attaher , Mohamed O. Taqi , Rania Elsawy , Manal M.H. Gad El-Moula , Fadl A. Hashem , Ahmed S. Moussa","doi":"10.1016/j.isprsjprs.2025.08.019","DOIUrl":"10.1016/j.isprsjprs.2025.08.019","url":null,"abstract":"<div><div>Accurate and timely Land Use and Land Cover (LULC) classification is crucial for effective agricultural planning and decision-making, particularly in regions like the Nile Delta, Egypt, where LULC is rapidly changing. This study addresses the challenge of classifying small, fragmented agricultural fields and road networks by leveraging the synergistic potential of Sentinel-1 and Sentinel-2 data, combined with Machine Learning (ML) and Deep Learning (DL) techniques. Unlike previous studies that often rely on Sentinel-2 or image-based DL, this research introduces a novel approach: a pixel-based ML classification using both Sentinel-1 and Sentinel-2 data. This strategy allowed to effectively capture the spectral and textural information crucial for distinguishing small features, which are often missed by traditional methods. Using distinct temporal datasets and validated ground truth annotations, we trained and tested several ML and DL models, including XGB, Support Vector Classifier, K-Nearest Neighbor, Decision Tree, Random Forest, and LSTM. XGB achieved the highest overall accuracy (94.4 %), whereas Random Forest produced the most accurate map with independent data (91.4 % Overall Accuracy). Integrating Sentinel-1 with Sentinel-2 data improved classification accuracy by 1–7 % compared to using Sentinel-2 alone. Notably, the pixel-based ML approach yielded reliable predictions for small road areas and agricultural fields, which are often challenging to map accurately. 
This research demonstrates the effectiveness of integrating multi-sensor data with advanced ML/DL for improved LULC classification, particularly for small feature mapping, thus providing critical information for enhanced agricultural planning and decision-making in the Nile Delta.</div></div>","PeriodicalId":50269,"journal":{"name":"ISPRS Journal of Photogrammetry and Remote Sensing","volume":"229 ","pages":"Pages 239-253"},"PeriodicalIF":12.2,"publicationDate":"2025-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144913208","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
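The pixel-based multi-sensor idea described in this abstract can be sketched by stacking per-pixel Sentinel-2 band values with Sentinel-1 backscatter and training one of the classifiers the study lists (Random Forest here). The data, band counts, and the toy label rule below are entirely synthetic stand-ins, not the study's dataset or feature set.

```python
# Sketch: per-pixel feature vector = Sentinel-2 bands + Sentinel-1 VV/VH
# backscatter; a Random Forest learns a toy LULC label from the stack.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 1000
s2 = rng.normal(0.2, 0.05, (n, 10))   # 10 synthetic Sentinel-2 bands
s1 = rng.normal(-12.0, 3.0, (n, 2))   # synthetic VV/VH backscatter (dB)
X = np.hstack([s2, s1])               # multi-sensor per-pixel feature vector

# Toy binary label depending on one optical band and VV backscatter,
# so the multi-sensor stack is genuinely informative.
y = ((s2[:, 7] - 0.2) + 0.01 * (s1[:, 0] + 12.0) > 0).astype(int)

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(Xtr, ytr)
print(f"overall accuracy: {clf.score(Xte, yte):.3f}")
```

Working per pixel rather than per image patch is what lets such a classifier resolve features smaller than a typical DL receptive field, which is the study's motivation for this design.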
{"title":"EEDNet: Edge and Edge Direction Network for simple and regular land parcel vectorization","authors":"Wei Wu , Shiyu Li , Haiping Yang , Yingpin Yang , Kun Li , Liao Yang , Zuohui Chen","doi":"10.1016/j.isprsjprs.2025.08.008","DOIUrl":"10.1016/j.isprsjprs.2025.08.008","url":null,"abstract":"<div><div>Land parcel extraction from remote sensing images plays a crucial role in applications such as agricultural management, yield estimation, and land resource monitoring. These applications depend on vectorized parcels with regular shapes depicted by a limited number of points, making accurate vector-based land extraction results highly important. However, existing methods for land parcel extraction primarily rely on raster-to-vector conversion, transforming pixel segmentation or edge results into vectors. This approach often results in distorted shapes and redundant points. We notice that when humans delineate land parcels, they intuitively identify key points at locations where edge directions change and connect these points sequentially to form vectors. Inspired by this process, we propose Edge and Edge Direction Net (EEDNet) and a novel post-possessing method, which generates parcel polygons as the final output. EEDNet employs a dual-decoder structure for simultaneous learning of parcel edges and their directions. By detecting edges, identifying key nodes through changes in edge directions, and sequentially connecting these nodes under the guidance of edges, EEDNet constructs well-structured parcel polygons, ensuring smooth parcel boundaries and simplified key points. Experimental results show that our method demonstrates the best overall performance across multiple datasets. Specifically, it achieves the highest complete-intersection over union scores of 0.614 on the iFLYTEK dataset, reflecting its ability to balance geometric accuracy and pixel segmentation. 
Additionally, it records the lowest GTC errors of 0.158 and 0.180 and the lowest GUC errors of 0.077 and 0.101 on the GFDataset and Netherlands datasets, respectively, showcasing its robustness in capturing object-level and geometric features. We release our code at <span><span>https://github.com/lixianshen20/EEDNet.git</span></span>.</div></div>","PeriodicalId":50269,"journal":{"name":"ISPRS Journal of Photogrammetry and Remote Sensing","volume":"229 ","pages":"Pages 223-238"},"PeriodicalIF":12.2,"publicationDate":"2025-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144913207","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}