Science of Remote Sensing: Latest Publications

Urban informal settlements interpretation via a novel multi-modal Kolmogorov–Arnold fusion network by exploring hierarchical features from remote sensing and street view images
IF 5.7
Science of Remote Sensing Pub Date: 2025-02-22 DOI: 10.1016/j.srs.2025.100208
Hongyang Niu, Runyu Fan, Jiajun Chen, Zijian Xu, Ruyi Feng
{"title":"Urban informal settlements interpretation via a novel multi-modal Kolmogorov–Arnold fusion network by exploring hierarchical features from remote sensing and street view images","authors":"Hongyang Niu,&nbsp;Runyu Fan,&nbsp;Jiajun Chen,&nbsp;Zijian Xu,&nbsp;Ruyi Feng","doi":"10.1016/j.srs.2025.100208","DOIUrl":"10.1016/j.srs.2025.100208","url":null,"abstract":"<div><div>Urban informal settlements (UIS) interpretation has important scientific value for achieving urban sustainable development. Recent research on UIS interpretation tasks mainly includes the single-modality method, which uses remote sensing images, and the multi-modality method which uses remote sensing and geospatial data. However, from a single remote sensing perspective, the inter-class similarities, and a regional mixture of complex geo-objects from a bird-eye perspective of UIS areas make UIS interpretation extremely challenging. The current multi-modal methods cannot fully explore the modality-specific features within the modality or ignore the modality-correlation features between different modalities. To address these issues, this study proposed a novel multi-modal Kolmogorov–Arnold fusion network, namely KANFusion, to explore the modality-specific features within the modality and fuse the modality-correlation features between different modalities to boost UIS interpretation using remote sensing and street view images. The proposed KANFusion model employs the Kolmogorov–Arnold Network (KAN) instead of the conventional MLP structure to enhance the model-fitting capability of heterogeneous modality-specific features and uses a novel Multi-level Feature Fusion Module with KAN block (MFF) to fuse the hierarchical modality-specific and modality-fusion features from remote sensing and street view images for better UIS interpretation performance. We conducted extensive experiments on the manually annotated ChinaUIS dataset of eight megacities in China and a public <span><math><mrow><msup><mrow><mi>S</mi></mrow><mrow><mn>2</mn></mrow></msup><mi>U</mi><mi>V</mi></mrow></math></span> dataset and compared the proposed KANFusion with other state-of-the-art methods. The experimental results confirmed the superiority of the proposed KANFusion. This work is available in <span><span>https://github.com/cyg-nhyang/KANFusion</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":101147,"journal":{"name":"Science of Remote Sensing","volume":"11 ","pages":"Article 100208"},"PeriodicalIF":5.7,"publicationDate":"2025-02-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143474698","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
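As an illustration of the core building block named in the abstract, the following is a minimal, hypothetical sketch of a Kolmogorov–Arnold-style layer (each edge learns a univariate function, parameterised here with Gaussian radial basis functions) wired into a toy two-branch fusion head for remote sensing and street view feature vectors. It is not the authors' KANFusion implementation; the class names, dimensions, and basis-function choice are illustrative assumptions.

```python
# Hypothetical sketch (not the authors' code): a toy Kolmogorov–Arnold (KAN) layer in
# which every edge applies a learnable univariate function, parameterised here as a
# linear combination of Gaussian radial basis functions, plus a two-branch late-fusion
# head for remote sensing (RS) and street view (SV) feature vectors.
import torch
import torch.nn as nn

class ToyKANLayer(nn.Module):
    def __init__(self, in_dim, out_dim, n_basis=8, x_min=-2.0, x_max=2.0):
        super().__init__()
        self.register_buffer("centers", torch.linspace(x_min, x_max, n_basis))  # RBF grid
        self.width = (x_max - x_min) / n_basis
        # one set of basis coefficients per (output, input) edge
        self.coef = nn.Parameter(torch.randn(out_dim, in_dim, n_basis) * 0.1)

    def forward(self, x):                                  # x: (batch, in_dim)
        # phi(x_j) evaluated on every basis function: (batch, in_dim, n_basis)
        basis = torch.exp(-((x.unsqueeze(-1) - self.centers) / self.width) ** 2)
        # y_i = sum_j sum_k coef[i, j, k] * basis[b, j, k]
        return torch.einsum("bjk,ojk->bo", basis, self.coef)

class ToyKANFusion(nn.Module):
    """Late fusion of two modality embeddings through KAN-style layers."""
    def __init__(self, rs_dim=128, sv_dim=128, hidden=64, n_classes=2):
        super().__init__()
        self.rs_branch = ToyKANLayer(rs_dim, hidden)
        self.sv_branch = ToyKANLayer(sv_dim, hidden)
        self.head = ToyKANLayer(2 * hidden, n_classes)

    def forward(self, rs_feat, sv_feat):
        fused = torch.cat([self.rs_branch(rs_feat), self.sv_branch(sv_feat)], dim=-1)
        return self.head(fused)

model = ToyKANFusion()
logits = model(torch.randn(4, 128), torch.randn(4, 128))   # -> shape (4, 2)
```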
Exploring machine learning trends in poverty mapping: A review and meta-analysis
IF 5.7
Science of Remote Sensing Pub Date: 2025-02-21 DOI: 10.1016/j.srs.2025.100200
Badri Raj Lamichhane, Mahmud Isnan, Teerayut Horanont
{"title":"Exploring machine learning trends in poverty mapping: A review and meta-analysis","authors":"Badri Raj Lamichhane ,&nbsp;Mahmud Isnan ,&nbsp;Teerayut Horanont","doi":"10.1016/j.srs.2025.100200","DOIUrl":"10.1016/j.srs.2025.100200","url":null,"abstract":"<div><div>Machine Learning (ML) has rapidly advanced as a transformative tool across numerous fields, offering new avenues for addressing poverty-related challenges. This study provides a comprehensive review and meta-analysis of 215 peer-reviewed articles published on Scopus from 2014 to 2023, underscoring the capacity of ML methods to enhance poverty mapping through satellite data analysis. Our findings highlight the significant role of ML in revealing micro-geographical poverty patterns, enabling more granular and accurate poverty assessments. By aggregating and systematically evaluating findings from the past decade, this meta-analysis uniquely identifies overarching trends and methodological insights in ML-driven poverty mapping, distinguishing itself from previous reviews that primarily synthesize existing literature. The nighttime light index emerged as a robust indicator for poverty estimation, though its predictive power improves significantly when combined with daytime features like land cover and building data. Random Forest consistently demonstrated high interpretability and predictive accuracy as the most widely adopted ML model. Key contributions from regions such as the United States, China, and India illustrate the substantial progress and applicability of ML techniques in poverty mapping. This research seeks to provide policymakers with enhanced analytical tools for nuanced poverty assessment, guiding more effective policy decisions aimed at fostering equitable development on a global scale.</div></div>","PeriodicalId":101147,"journal":{"name":"Science of Remote Sensing","volume":"11 ","pages":"Article 100200"},"PeriodicalIF":5.7,"publicationDate":"2025-02-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143471656","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
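To make the reported finding concrete (nighttime lights alone versus nighttime lights plus daytime features, with a Random Forest), here is a minimal, hypothetical sketch on synthetic data; the variables and effect sizes are invented for illustration and do not reproduce any study in the review.

```python
# Hypothetical sketch (synthetic data, not a study's pipeline): comparing a
# nighttime-light-only poverty predictor against one that also uses daytime
# features (land cover fraction, building counts), with a Random Forest regressor.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500
ntl = rng.gamma(2.0, 2.0, n)                 # nighttime light intensity per district
builtup = rng.uniform(0, 1, n)               # built-up land cover fraction
buildings = rng.poisson(200 * builtup)       # building count proxy
# synthetic "wealth index" driven by all three features plus noise
wealth = 0.5 * np.log1p(ntl) + 0.3 * builtup + 0.002 * buildings + rng.normal(0, 0.1, n)

X_ntl = ntl.reshape(-1, 1)
X_all = np.column_stack([ntl, builtup, buildings])

rf = RandomForestRegressor(n_estimators=300, random_state=0)
r2_ntl = cross_val_score(rf, X_ntl, wealth, cv=5, scoring="r2").mean()
r2_all = cross_val_score(rf, X_all, wealth, cv=5, scoring="r2").mean()
print(f"R2 nighttime lights only: {r2_ntl:.2f}, with daytime features: {r2_all:.2f}")
```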
High spatiotemporal resolution vegetation FAPAR estimation from Sentinel-2 based on the spectral invariant theory
IF 5.7
Science of Remote Sensing Pub Date: 2025-02-14 DOI: 10.1016/j.srs.2025.100207
Yunzhu Tao, Naijie Peng, Wenjie Fan, Xihan Mu, Husi Letu, Run Ma, Siqi Yang, Qunchao He, Dechao Zhai, Huangzhong Ren
{"title":"High spatiotemporal resolution vegetation FAPAR estimation from Sentinel-2 based on the spectral invariant theory","authors":"Yunzhu Tao ,&nbsp;Naijie Peng ,&nbsp;Wenjie Fan ,&nbsp;Xihan Mu ,&nbsp;Husi Letu ,&nbsp;Run Ma ,&nbsp;Siqi Yang ,&nbsp;Qunchao He ,&nbsp;Dechao Zhai ,&nbsp;Huangzhong Ren","doi":"10.1016/j.srs.2025.100207","DOIUrl":"10.1016/j.srs.2025.100207","url":null,"abstract":"<div><div>The fraction of absorbed photosynthetically active radiation (FAPAR) is a key input parameter that drives photosynthesis in terrestrial ecosystem models. It plays an important role in estimating canopy gross primary production and, consequently, the regional terrestrial carbon sink. The growing focus on regional responses to global climate change has increased the demand for FAPAR with high spatiotemporal resolution across spatial heterogeneous landscapes. However, instantaneous FAPAR values from satellites are insufficient for monitoring FAPAR throughout the day under varying sky conditions, given that cloud disturbances pose a significant challenge to the generation of high spatiotemporal resolution FAPAR. We proposed a FAPAR-Pro model based on spectral invariant theory to address this challenge. This model distinguishes simulations under direct and diffuse radiation to suit clear and cloudy conditions. The FAPAR-Pro model was validated across various vegetation types and sky conditions. The model was also compared with the FAPAR-P model and the SAIL model, where it exhibited robust performance (R<sup>2</sup> = 0.875, RMSE = 0.065, and bias = −0.004). Consequently, an hourly FAPAR estimation algorithm based on the FAPAR-Pro model (HFP) was developed to derive hourly FAPAR at high spatial resolution. It incorporates daily leaf area index retrieved and reconstructed from Sentinel-2 data, the hourly ratio of diffuse radiation retrieved from Himawari-8, and the leaf single scattering albedo and the soil reflectance derived from Sentinel-2 data using the general spectral vector-leaf (GSV-L) model and the general spectral vector (GSV) model, respectively. The resulting estimations closely matched the hourly ground measurements at Huailai station under diverse sky conditions (R<sup>2</sup> = 0.828, RMSE = 0.070, and bias = −0.011). Furthermore, a set of spatially continuous FAPAR data at the 20 m resolution was generated at the Saihanba area in China in 2022. By contrast, FAPAR estimations from the Sentinel-2 Toolbox and MODIS were significantly affected by cloudy conditions or coarse resolution. Overall, the proposed HFP algorithm can provide blue-sky FAPAR values at high spatiotemporal resolution regardless of various sky conditions. This advancement offers great potential for ecological models and numerous other applications.</div></div>","PeriodicalId":101147,"journal":{"name":"Science of Remote Sensing","volume":"11 ","pages":"Article 100207"},"PeriodicalIF":5.7,"publicationDate":"2025-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143437557","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
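The key output the abstract describes is a "blue-sky" FAPAR obtained by weighting separate direct (clear-sky) and diffuse simulations by the hourly diffuse-radiation ratio. The sketch below illustrates only that blending step with a toy Beer-Lambert canopy; it is not the FAPAR-Pro or HFP algorithm, and the extinction coefficients are assumed values.

```python
# Hypothetical sketch, not the FAPAR-Pro model itself: it only illustrates the blending
# step implied by the abstract, i.e. weighting separate direct and diffuse FAPAR
# simulations by the hourly diffuse-radiation ratio to obtain a "blue-sky" FAPAR.
# The Beer-Lambert canopy absorption used here is a toy stand-in.
import numpy as np

def toy_fapar_direct(lai, sza_deg, k=0.5):
    """Toy black-sky FAPAR: Beer-Lambert extinction along the solar direction."""
    mu = np.cos(np.radians(sza_deg))
    return 1.0 - np.exp(-k * lai / np.maximum(mu, 0.05))

def toy_fapar_diffuse(lai, k=0.7):
    """Toy white-sky FAPAR: stronger effective extinction for isotropic sky light."""
    return 1.0 - np.exp(-k * lai)

def blue_sky_fapar(lai, sza_deg, diffuse_ratio):
    """Blend direct and diffuse components by the hourly diffuse-radiation ratio."""
    f_dir = toy_fapar_direct(lai, sza_deg)
    f_dif = toy_fapar_diffuse(lai)
    return diffuse_ratio * f_dif + (1.0 - diffuse_ratio) * f_dir

# Example: one pixel through a partly cloudy morning (hourly values).
lai = 3.2
sza = np.array([75.0, 60.0, 45.0, 35.0])          # solar zenith angle, degrees
diffuse_ratio = np.array([0.9, 0.6, 0.3, 0.8])    # e.g. retrieved from geostationary data
print(blue_sky_fapar(lai, sza, diffuse_ratio))
```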
Global air quality index prediction using integrated spatial observation data and geographics machine learning
IF 5.7
Science of Remote Sensing Pub Date: 2025-02-12 DOI: 10.1016/j.srs.2025.100197
Tania Septi Anggraini, Hitoshi Irie, Anjar Dimara Sakti, Ketut Wikantika
{"title":"Global air quality index prediction using integrated spatial observation data and geographics machine learning","authors":"Tania Septi Anggraini ,&nbsp;Hitoshi Irie ,&nbsp;Anjar Dimara Sakti ,&nbsp;Ketut Wikantika","doi":"10.1016/j.srs.2025.100197","DOIUrl":"10.1016/j.srs.2025.100197","url":null,"abstract":"<div><div>Air pollution can occur in the whole world, with each region having its unique driving factors that contribute to human's health. However, effective mitigation of air pollution is often hindered by the uneven distribution of air quality monitoring stations, which tend to be concentrated in potential hotspots like major cities. This study aims to detect and improve the accuracy of the Global Air Quality Index from Remote Sensing (AQI-RS) by integrating AQI from ground-based stations with driving factors such as meteorological, environmental, sources of air pollution, and air pollution magnitude from satellite observation parameters as independent variables using Geographics Machine Learning (GML). This study utilizes 425 air pollution stations and the driving factors data globally from 2013 to 2024. The GML considers geographical characteristics in the analysis by calculating the optimal bandwidth area in its algorithm. The study employs nine scenarios to identify which parameters significantly contribute to the model and determine the best parameter combinations. In determining the best scenario, this study considers the R<sup>2</sup> value, Root Mean Square Error (RMSE), and uncertainty in each of the scenarios. This study produced an AQI-RS model with an average R<sup>2</sup>, RMSE, and uncertainty in the best scenario of 0.89, 5.58, and 5.69 (AQI unit), respectively. The results indicate that GML significantly improves the accuracy of global AQI-RS over previous studies. By considering geographical characteristics using GML, this research is expected to gain an accurate prediction of AQI globally especially in regions without ground-based air pollution stations for the worldwide mitigation.</div></div>","PeriodicalId":101147,"journal":{"name":"Science of Remote Sensing","volume":"11 ","pages":"Article 100197"},"PeriodicalIF":5.7,"publicationDate":"2025-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143421044","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
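The geographic component the abstract describes (a bandwidth that localizes the model around each prediction site) can be illustrated with a simple geographically weighted regression. The sketch below uses synthetic data and a Gaussian distance kernel; it is an assumption-laden stand-in for the general idea, not the paper's GML algorithm.

```python
# Hypothetical sketch of geographically weighted learning (not the paper's GML
# implementation): for each prediction site, nearby stations are weighted by a
# Gaussian distance kernel with bandwidth h before fitting a local model, here a
# weighted linear regression on synthetic data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 300
coords = rng.uniform(0, 100, (n, 2))                 # station locations (km)
X = rng.normal(size=(n, 3))                          # satellite-derived predictors
# AQI response whose sensitivity to X[:, 0] drifts across space (west to east)
beta0 = 1.0 + coords[:, 0] / 100.0
y = beta0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.2, n)

def gw_predict(x_new, coord_new, bandwidth=20.0):
    """Fit a locally weighted regression around coord_new and predict for x_new."""
    d = np.linalg.norm(coords - coord_new, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)           # Gaussian kernel weights
    model = LinearRegression().fit(X, y, sample_weight=w)
    return model.predict(x_new.reshape(1, -1))[0]

print(gw_predict(np.array([1.0, 0.0, 0.0]), np.array([10.0, 50.0])))   # western site
print(gw_predict(np.array([1.0, 0.0, 0.0]), np.array([90.0, 50.0])))   # eastern site
```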
FsDAOD: Few-shot domain adaptation object detection for heterogeneous SAR image
IF 5.7
Science of Remote Sensing Pub Date: 2025-02-10 DOI: 10.1016/j.srs.2025.100202
Siyuan Zhao, Yong Kang, Hang Yuan, Guan Wang, Hui Wang, Shichao Xiong, Ying Luo
{"title":"FsDAOD: Few-shot domain adaptation object detection for heterogeneous SAR image","authors":"Siyuan Zhao ,&nbsp;Yong Kang ,&nbsp;Hang Yuan ,&nbsp;Guan Wang ,&nbsp;Hui Wang ,&nbsp;Shichao Xiong ,&nbsp;Ying Luo","doi":"10.1016/j.srs.2025.100202","DOIUrl":"10.1016/j.srs.2025.100202","url":null,"abstract":"<div><div>Heterogeneous Synthetic Aperture Radar (SAR) image object detection task with inconsistent joint probability distributions is occurring more and more frequently in practical applications. In which the small sample of data scarcity is becoming an urgent problem for researchers. Therefore, this paper proposes a novel few-shot domain adaptation object detection (FsDAOD) method based on Faster Region Convolutional Neural Network baseline to cope with the above problem. Firstly, employing the foundational structure of the existing baseline method, a novel mutual information loss function is introduced that prompts the neural network to extract domain-specific knowledge. This strategic approach encourages distinctive levels of confidence in individual predictions while fostering overall diversity. Given that performance can be easily over-fitted with a restricted number of observed objects if feature alignment strictly adheres to conventional methods, the set of source instances are initially categorized into two groups: target domain-easy set and target domain-hard set. Subsequently, asynchronous alignment is performed between the target-hard domain set of the source instances and the extended dataset of the target instances to achieve effective supervised learning. It is then asserted that confidence-based sample separation methods can improve detection efficiency by adjusting the model to prioritize the identification of more easily detected objects, but this may lead to incorrect decisions for more challenging instances. Extensive experiments on FsDAOD on heterogeneous satellite-borne SAR image datasets have been conducted, and the experimental results have demonstrated that the detection rate of the proposed method exceeds the existing state-of-the-art methods by 5%.</div></div>","PeriodicalId":101147,"journal":{"name":"Science of Remote Sensing","volume":"11 ","pages":"Article 100202"},"PeriodicalIF":5.7,"publicationDate":"2025-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143394340","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
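The mutual-information idea sketched in the abstract (confident per-sample predictions, diverse batch-level predictions) is commonly written as maximizing the entropy of the mean prediction minus the mean per-sample entropy. The snippet below is a hypothetical, generic version of such a loss, not the paper's exact formulation.

```python
# Hypothetical sketch (not the paper's exact loss): a mutual-information-style
# objective over softmax predictions that rewards confident individual predictions
# (low per-sample entropy) while keeping the batch-level class distribution diverse
# (high entropy of the mean prediction), i.e. L = -[H(mean p) - mean H(p)].
import torch
import torch.nn.functional as F

def entropy(p, eps=1e-8):
    return -(p * (p + eps).log()).sum(dim=-1)

def mutual_information_loss(logits):
    """Return a loss to *minimize*: negative of the mutual-information surrogate."""
    p = F.softmax(logits, dim=-1)                     # (batch, n_classes)
    marginal = p.mean(dim=0)                          # batch-level class distribution
    mi = entropy(marginal.unsqueeze(0)).squeeze(0) - entropy(p).mean()
    return -mi

logits = torch.randn(16, 4, requires_grad=True)       # e.g. 16 proposals, 4 classes
loss = mutual_information_loss(logits)
loss.backward()
print(float(loss))
```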
A novel lightweight 3D CNN for accurate deformation time series retrieval in MT-InSAR
IF 5.7
Science of Remote Sensing Pub Date: 2025-02-10 DOI: 10.1016/j.srs.2025.100206
Mahmoud Abdallah, Xiaoli Ding, Samaa Younis, Songbo Wu
{"title":"A novel lightweight 3D CNN for accurate deformation time series retrieval in MT-InSAR","authors":"Mahmoud Abdallah ,&nbsp;Xiaoli Ding ,&nbsp;Samaa Younis ,&nbsp;Songbo Wu","doi":"10.1016/j.srs.2025.100206","DOIUrl":"10.1016/j.srs.2025.100206","url":null,"abstract":"<div><div>Multi-temporal interferometric synthetic aperture radar (MT-InSAR) is a powerful geodetic technique for detecting and monitoring ground deformation over extensive areas. The accuracy of these measurements is critically dependent on effectively separating unwanted phase signals, such as atmospheric delay effects (APS) and decorrelation noise. Recent advancements in data-driven deep learning (DL) methods have shown promise in phase separation by utilizing inherent phase relationships. However, the complex spatiotemporal relationship of InSAR phase components presents challenges that traditional 1D or 2D DL models cannot effectively address, leading to potential biases in deformation measurements. To address this limitation, we propose UNet-3D, a novel three-dimensional encoder-decoder architecture that captures the spatiotemporal features of phase components through an enhanced 3D convolutional neural network (CNN) ensemble, enabling accurate separation of deformation time series. In addition, a spatiotemporal mask is designed to reconstruct missing time series data caused by decorrelation effects. We also developed a separable convolution operator to reduce the computational costs without compromising performance. The proposed model is trained on simulated datasets and benchmarked against existing DL models, achieving an improvement of 25.0% in MSE, 1.8% in SSIM, and 0.2% in SNR. Notably, the computation cost is reduced by up to 80% through separable convolution, establishing the proposed model as both lightweight and efficient. Furthermore, a comprehensive analysis of performance factors was conducted to assess the robustness of UNet-3D, facilitating its open-source usability. To validate our approach in real-world scenarios, we conducted a comparative ground deformation monitoring study over Fernandina Volcano in the Galapagos Islands using Sentinel-1 SAR data and the Small Baseline Subset (SBAS) technique in MintPy software. The results show that the correlation between the deformation time series of UNet-3D and the SBAS method is as high as 0.91 and shows the advantages in mitigating the topography-related APS effects. Overall, the UNet-3D model represents a significant advancement in automating InSAR data processing and enhancing the accuracy of deformation time series retrieval.</div></div>","PeriodicalId":101147,"journal":{"name":"Science of Remote Sensing","volume":"11 ","pages":"Article 100206"},"PeriodicalIF":5.7,"publicationDate":"2025-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143403678","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
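The separable 3D convolution the abstract credits for the up-to-80% cost reduction can be illustrated by factorising a dense 3D convolution into a depthwise convolution followed by a 1x1x1 pointwise convolution. The block below is a generic, hypothetical sketch in PyTorch rather than the authors' UNet-3D code; the channel counts and patch sizes are assumptions.

```python
# Hypothetical sketch (not the authors' UNet-3D): a separable 3D convolution block that
# factorises a dense 3D convolution into a depthwise (per-channel) 3D convolution
# followed by a pointwise 1x1x1 convolution, the standard way to cut the parameter and
# FLOP count of 3D CNNs on (time, height, width) stacks.
import torch
import torch.nn as nn

class SeparableConv3d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3, padding=1):
        super().__init__()
        self.depthwise = nn.Conv3d(in_ch, in_ch, kernel_size,
                                   padding=padding, groups=in_ch)   # one filter per channel
        self.pointwise = nn.Conv3d(in_ch, out_ch, kernel_size=1)    # mix channels
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):                      # x: (batch, channels, time, H, W)
        return self.act(self.pointwise(self.depthwise(x)))

dense = nn.Conv3d(32, 64, 3, padding=1)
sep = SeparableConv3d(32, 64)
n_params = lambda m: sum(p.numel() for p in m.parameters())
print(n_params(dense), n_params(sep))          # the separable block is far smaller

x = torch.randn(1, 32, 8, 64, 64)              # e.g. 8 interferogram epochs, 64x64 patch
print(sep(x).shape)                            # torch.Size([1, 64, 8, 64, 64])
```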
DeepSARFlood: Rapid and automated SAR-based flood inundation mapping using vision transformer-based deep ensembles with uncertainty estimates
IF 5.7
Science of Remote Sensing Pub Date: 2025-02-07 DOI: 10.1016/j.srs.2025.100203
Nirdesh Kumar Sharma, Manabendra Saharia
{"title":"DeepSARFlood: Rapid and automated SAR-based flood inundation mapping using vision transformer-based deep ensembles with uncertainty estimates","authors":"Nirdesh Kumar Sharma ,&nbsp;Manabendra Saharia","doi":"10.1016/j.srs.2025.100203","DOIUrl":"10.1016/j.srs.2025.100203","url":null,"abstract":"<div><div>Rapid and automated flood inundation mapping is critical for disaster management. While optical satellites provide valuable data on flood extent and impact, their real-time usage is limited by challenges such as cloud cover, limited vegetation penetration, and the inability to operate at night, making real-time flood assessments difficult. Synthetic Aperture Radar (SAR) satellites can overcome these limitations, allowing for high-resolution flood mapping. However, SAR data remains underutilized due to less availability of training data, and reliance on labor-intensive manual or semi-automated change detection methods. This study introduces a novel end-to-end methodology for generating SAR-based flood inundation maps, by training deep learning models on weak flood labels generated from concurrent optical imagery. These labels are used to train deep learning models based on Convolutional Neural Networks (CNN) and Vision Transformer (ViT) architectures, optimized through multitask learning and model soups. Additionally, we develop a novel gain algorithm to identify diverse ensemble members and estimate uncertainty through deep ensembles. Our results show that ViT-based and CNN-ViT hybrid architectures significantly outperform traditional CNN models, achieving a state-of-the-art Intersection over Union (IoU) score of 0.72 on the Sen1Floods11 test dataset, while also providing uncertainty quantification. These models have been integrated into an open-source and fully automated, Python-based tool called DeepSARFlood, and demonstrated for the Pakistan floods of 2022 and Assam (India) floods of 2020. With its high accuracy, processing speed, and ability to estimate uncertainty, DeepSARFlood is optimized for real-time deployment, processing a 1° × 1° (12,100 km<sup>2</sup>) area in under 40 s, and will complement upcoming SAR missions like NISAR and Sentinel 1-C for flood mapping.</div></div>","PeriodicalId":101147,"journal":{"name":"Science of Remote Sensing","volume":"11 ","pages":"Article 100203"},"PeriodicalIF":5.7,"publicationDate":"2025-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143394429","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
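Two ingredients named above, model soups (weight averaging of fine-tuned checkpoints) and deep-ensemble uncertainty, are illustrated below on a toy segmentation model. This is a hypothetical sketch under assumed shapes (two SAR bands in, one flood logit out), not the DeepSARFlood code itself.

```python
# Hypothetical sketch of two ideas from the abstract on a toy model: (1) a "model soup"
# that averages the weights of several same-architecture checkpoints into one model, and
# (2) a deep ensemble whose per-pixel predictive entropy serves as an uncertainty map.
import copy
import torch
import torch.nn as nn

def make_model():
    return nn.Sequential(nn.Conv2d(2, 8, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(8, 1, 1))              # 2 SAR bands -> flood logit

def model_soup(models):
    """Average parameters of same-architecture checkpoints into a single model."""
    avg = {name: torch.stack([m.state_dict()[name].float() for m in models]).mean(dim=0)
           for name in models[0].state_dict()}
    soup = copy.deepcopy(models[0])
    soup.load_state_dict(avg)
    return soup

def ensemble_predict(models, x):
    """Mean flood probability and predictive-entropy uncertainty over the ensemble."""
    with torch.no_grad():
        probs = torch.stack([torch.sigmoid(m(x)) for m in models])   # (K, B, 1, H, W)
    mean = probs.mean(dim=0)
    entropy = -(mean * (mean + 1e-8).log() + (1 - mean) * (1 - mean + 1e-8).log())
    return mean, entropy

members = [make_model() for _ in range(3)]        # stand-ins for fine-tuned checkpoints
x = torch.randn(1, 2, 64, 64)                     # toy VV/VH SAR patch
soup = model_soup(members)
mean_prob, uncertainty = ensemble_predict(members, x)
print(soup(x).shape, mean_prob.shape, uncertainty.shape)
```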
Improving aboveground biomass density mapping of arid and semi-arid vegetation by combining GEDI LiDAR, Sentinel-1/2 imagery and field data
IF 5.7
Science of Remote Sensing Pub Date: 2025-02-06 DOI: 10.1016/j.srs.2025.100204
Luis A. Hernández-Martínez, Juan Manuel Dupuy-Rada, Alfonso Medel-Narváez, Carlos Portillo-Quintero, José Luis Hernández-Stefanoni
{"title":"Improving aboveground biomass density mapping of arid and semi-arid vegetation by combining GEDI LiDAR, Sentinel-1/2 imagery and field data","authors":"Luis A. Hernández-Martínez ,&nbsp;Juan Manuel Dupuy-Rada ,&nbsp;Alfonso Medel-Narváez ,&nbsp;Carlos Portillo-Quintero ,&nbsp;José Luis Hernández-Stefanoni","doi":"10.1016/j.srs.2025.100204","DOIUrl":"10.1016/j.srs.2025.100204","url":null,"abstract":"<div><div>Accurate estimates of forest aboveground biomass density (AGBD) are essential to guide mitigation strategies for climate change. NASA's Global Ecosystem Dynamics Investigation (GEDI) project delivers full-waveform LiDAR data and provides a unique opportunity to improve AGBD estimates. However, global GEDI estimates (GEDI-L4A) have some constraints, such as lack of full coverage of AGBD maps and scarcity of training data for some biomes, particularly in arid areas. Moreover, uncertainties remain about the type of GEDI footprint that best penetrates the canopy and yields accurate vegetation structure metrics. This study estimates forest biomass of arid and semi-arid zones in two stages. First, a model was fitted to predict AGBD by relating GEDI and field data from different vegetation types, including xeric shrubland. Second, different footprint qualities were evaluated, and their AGBD was related to images from Sentinel-1 and -2 satellites to produce a wall-to-wall map of AGBD. The model fitted with field data and GEDI showed adequate performance (%RMSE = 45.0) and produced more accurate estimates than GEDI-L4A (%RMSE = 84.6). The wall-to-wall mapping model also performed well (%RMSE = 37.0) and substantially reduced the underestimation of AGBD for arid zones. This study highlights the advantages of fitting new models for AGBD estimation from GEDI and local field data, whose combination with satellite imagery yielded accurate wall-to-wall AGBD estimates with a 10 m resolution. The results of this study contribute new perspectives to improve the accuracy of AGBD estimates in arid zones, whose role in climate change mitigation may be markedly underestimated.</div></div>","PeriodicalId":101147,"journal":{"name":"Science of Remote Sensing","volume":"11 ","pages":"Article 100204"},"PeriodicalIF":5.7,"publicationDate":"2025-02-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143388168","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
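The two-stage workflow described above (field plots related to GEDI metrics, then GEDI-footprint AGBD related to Sentinel features for wall-to-wall prediction) is sketched below on synthetic data. The predictor names and relationships are invented for illustration; this is not the study's model.

```python
# Hypothetical two-stage sketch of the general workflow in the abstract (synthetic data,
# not the study's models): stage 1 relates GEDI waveform metrics to field-measured AGBD;
# stage 2 relates the stage-1 AGBD predictions at GEDI footprints to Sentinel-1/2
# features so the model can be applied wall-to-wall.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)

# Stage 1: field plots with co-located GEDI metrics (e.g. relative height, canopy cover).
n_plots = 200
gedi_plot = rng.uniform(0, 1, (n_plots, 2))
agbd_field = 60 * gedi_plot[:, 0] + 25 * gedi_plot[:, 1] + rng.normal(0, 5, n_plots)
stage1 = RandomForestRegressor(n_estimators=300, random_state=0).fit(gedi_plot, agbd_field)

# Stage 2: many GEDI footprints with Sentinel features (e.g. backscatter, a vegetation index).
n_footprints = 2000
gedi_fp = rng.uniform(0, 1, (n_footprints, 2))
agbd_fp = stage1.predict(gedi_fp)                         # stage-1 AGBD at the footprints
sentinel_fp = np.column_stack([0.8 * gedi_fp[:, 0] + rng.normal(0, 0.1, n_footprints),
                               0.6 * gedi_fp[:, 1] + rng.normal(0, 0.1, n_footprints)])
stage2 = RandomForestRegressor(n_estimators=300, random_state=0).fit(sentinel_fp, agbd_fp)

# Wall-to-wall prediction for every pixel (here a toy 10x10 grid of Sentinel features).
pixels = rng.uniform(0, 1, (100, 2))
agbd_map = stage2.predict(pixels).reshape(10, 10)
print(agbd_map.mean())
```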
Evaluating war-induced damage to agricultural land in the Gaza Strip since October 2023 using PlanetScope and SkySat imagery
IF 5.7
Science of Remote Sensing Pub Date: 2025-02-01 DOI: 10.1016/j.srs.2025.100199
He Yin, Lina Eklund, Dimah Habash, Mazin B. Qumsiyeh, Jamon Van Den Hoek
{"title":"Evaluating war-induced damage to agricultural land in the Gaza Strip since October 2023 using PlanetScope and SkySat imagery","authors":"He Yin ,&nbsp;Lina Eklund ,&nbsp;Dimah Habash ,&nbsp;Mazin B. Qumsiyeh ,&nbsp;Jamon Van Den Hoek","doi":"10.1016/j.srs.2025.100199","DOIUrl":"10.1016/j.srs.2025.100199","url":null,"abstract":"<div><div>The ongoing 2023 Israel-Hamas War has severe and far-reaching consequences for the people, economy, food security, and environment. The immediate impacts of damage and destruction to cities and farms are apparent in widespread reporting and first-hand accounts from within the Gaza Strip. However, there is a lack of comprehensive assessment of the war's impacts on key Gazan agricultural land that are vital for immediate humanitarian concerns during the ongoing war and for long-term recovery. In the Gaza Strip, agriculture is arguably one of the most important land use systems. However, remote detection of damage to Gazan agriculture is challenged by the diverse agronomic landscapes and small farm sizes. This study uses multi-resolution satellite imagery to monitor damage to tree crops and greenhouses, the most important agricultural land in the Gaza Strip. Our methodology involved several key steps: First, we generated a pre-war cropland map, distinguishing between tree crops (e.g., olives) and greenhouses, using a random forest (RF) model and the Segment Anything Model (SAM) on nominally 3-m PlanetScope and 50-cm Planet SkySat imagery, obtained from 2022 to 2023. Second, we assessed damage to tree crop fields due to the war, employing a harmonic model-based time series analysis using PlanetScope imagery. Third, we assessed the damage to greenhouses by classifying PlanetScope imagery using a random forest model. We performed accuracy assessments on a generated tree crop fields damage map using 1,200 randomly sampled 3 × 3-m areas, and we generated error-adjusted area estimates with a 95% confidence interval. To validate the generated greenhouse damage map, we used a random sampling-based analysis. We found that 64–70% of tree crop fields and 58% of greenhouses had been damaged by 27 September 2024, after almost one year of war in the Gaza Strip. Agricultural land in Gaza City and North Gaza were the most heavily damaged with 90% and 73% of tree crop fields damaged in each governorate, respectively. By the end of 2023, all greenhouses in North Gaza and Gaza City had been damaged. Our damage estimate overall agrees with that from UNOSAT but provides more detailed and accurate information, such as the timing of the damage as well as fine-scale changes. Our results attest to the severe impacts of the Israel-Hamas War on Gaza's agricultural sector with direct relevance for food security and economic recovery needs. 
Due to the rapid progression of the war, we have made the latest damage maps and area estimates available on GitHub (<span><span>https://github.com/hyinhe/Gaza</span><svg><path></path></svg></span>).</div></div>","PeriodicalId":101147,"journal":{"name":"Science of Remote Sensing","volume":"11 ","pages":"Article 100199"},"PeriodicalIF":5.7,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143421043","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
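The harmonic model-based time series step mentioned above can be illustrated on a single pixel: fit a first-order harmonic (seasonal) regression on the pre-war period and flag later observations that fall far below the fitted curve. The sketch uses synthetic NDVI values and an arbitrary 3-sigma threshold; it is not the paper's implementation.

```python
# Hypothetical sketch of harmonic-model-based change detection on one pixel's NDVI
# time series (synthetic data, not the paper's code): a first-order harmonic regression
# is fitted on the pre-war period, and later observations that drop well below the
# model prediction are flagged as damage.
import numpy as np

rng = np.random.default_rng(7)
t = np.arange(0, 730) / 365.25                      # two years, in fractional years
ndvi = 0.45 + 0.2 * np.sin(2 * np.pi * t) + rng.normal(0, 0.03, t.size)
ndvi[550:] -= 0.3                                   # simulated damage late in year two

pre = t < 1.0                                       # training window: first year only
X = np.column_stack([np.ones_like(t), t,
                     np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
coef, *_ = np.linalg.lstsq(X[pre], ndvi[pre], rcond=None)

pred = X @ coef
resid_sd = np.std(ndvi[pre] - pred[pre])
damaged = ndvi < pred - 3 * resid_sd                # far below the harmonic fit
print("first flagged day index:", int(np.argmax(damaged)))
```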
Volatility characteristics and hyperspectral-based detection models of diesel in soils
IF 5.7
Science of Remote Sensing Pub Date: 2025-02-01 DOI: 10.1016/j.srs.2025.100201
Jihye Shin, Jaehyung Yu, Jihee Seo, Lei Wang, Hyun-Cheol Kim
{"title":"Volatility characteristics and hyperspectral-based detection models of diesel in soils","authors":"Jihye Shin ,&nbsp;Jaehyung Yu ,&nbsp;Jihee Seo ,&nbsp;Lei Wang ,&nbsp;Hyun-Cheol Kim","doi":"10.1016/j.srs.2025.100201","DOIUrl":"10.1016/j.srs.2025.100201","url":null,"abstract":"<div><div>This study developed an efficient method using hyperspectral camera for detecting diesel content in soils with spectral indices. Over 70 days of the experiment, clean soils were saturated with diesel, and 186 measurements were taken to monitor the evaporation rate and spectral variation. The diesel evaporation followed a logarithmic pattern, where the diesel volatility decreased from 1.57% per day during the initial period to 0.06% per day during the late period. Using the hull-quotient reflectance at 2236 nm, the diesel content prediction model derived from a stepwise multiple linear regression (SMLR) achieved satisfactory accuracy with sufficient statistical significance (R<sup>2</sup> = 0.89, RPD = 2.52). This spectral band was well visualized for diesel presence in hyperspectral images as the band infers variations in two absorptions (CH/AlOH and CH) concurrently. Additionally, this study presented an age estimation model based on the diesel evaporation rate using the same spectral band. Given the fact that this study is based on the largest number of samples with the longest observation period and models were developed excluding atmospheric absorption bands, the simple form of the spectral index makes it applicable to large-scale diesel pollution detection with hyperspectral scanners or narrow-band multispectral cameras in real-world cases.</div></div>","PeriodicalId":101147,"journal":{"name":"Science of Remote Sensing","volume":"11 ","pages":"Article 100201"},"PeriodicalIF":5.7,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143327505","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
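The hull-quotient (continuum-removal) reflectance named above divides a spectrum by its upper convex hull so that absorption depths near, for example, 2236 nm can be read off directly. The sketch below applies that step to a synthetic SWIR spectrum; the spectrum, wavelength grid, and absorption shape are invented for illustration, and the regression against diesel content is not reproduced.

```python
# Hypothetical sketch of the hull-quotient (continuum-removal) step named in the
# abstract, on a synthetic spectrum (not the study's data or model): the upper convex
# hull of the spectrum is computed, reflectance is divided by the hull, and the
# resulting band value near 2236 nm would then feed a simple regression.
import numpy as np

def hull_quotient(wavelengths, reflectance):
    """Divide a spectrum by its upper convex hull (continuum removal)."""
    pts = list(zip(wavelengths, reflectance))
    hull = []
    for p in pts:                                   # upper hull, Andrew's monotone chain
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            if (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) >= 0:
                hull.pop()                          # drop points below the upper hull
            else:
                break
        hull.append(p)
    hx, hy = zip(*hull)
    continuum = np.interp(wavelengths, hx, hy)
    return reflectance / continuum

wl = np.arange(2100, 2351, 4.0)                     # SWIR wavelengths in nm
spectrum = 0.35 + 0.0001 * (wl - 2100)              # sloping background
spectrum -= 0.06 * np.exp(-0.5 * ((wl - 2236) / 15) ** 2)   # absorption near 2236 nm
hq = hull_quotient(wl, spectrum)
print("hull-quotient reflectance near 2236 nm:", float(hq[np.argmin(np.abs(wl - 2236))]))
```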