{"title":"UAV-StrawFire: A visible and infrared dataset for real-time straw-fire monitoring with deep learning and image fusion","authors":"Xikun Hu, Ya Jiang, Xiaoyan Xia, Chen Chen, Wenlin Liu, Pengcheng Wan, Kangcheng Bin, Ping Zhong","doi":"10.1016/j.jag.2025.104586","DOIUrl":"10.1016/j.jag.2025.104586","url":null,"abstract":"<div><div>Straw burning poses significant threats to local air quality and nearby public health by emitting harmful pollutants during specific seasons. Traditional satellite-based remote sensing techniques encounter difficulties in monitoring small-scale straw-burning events due to long revisit intervals and low spatial resolution. To address this challenge, unmanned aerial vehicles (UAVs) equipped with imaging sensors have emerged as a rapid and cost-effective solution for monitoring and detecting straw fires. This paper presents the UAV-StrawFire dataset, which comprises RGB images, thermal infrared images, and videos captured during controlled straw residue burning experiments in southern China using drones. The dataset is annotated and labeled to support the application of detection, segmentation, and tracking algorithms. This study addresses three key machine learning tasks using the dataset: (1) flame detection, achieved through a feature-based multi-modal image fusion model (FF-YOLOv5n) reaching a mAP50-95 of 0.5764; (2) flame segmentation, which delineates fire boundaries using the real-time lightweight BiSeNetV2 model, achieving a high mean Intersection over Union (mIoU) score exceeding 0.88; and (3) flame tracking, which monitors the real-time progression of straw burning with a precision of 0.9065 and a success rate of 0.6593 using the Aba-ViTrack algorithm, suitable for on-board processing on UAVs at 50 frames per second (FPS). These experiments provide efficient baseline models for UAV-based straw-burning monitoring with edge computing capabilities. 
The UAV-StrawFire dataset enables the detection and monitoring of flame regions with varying sizes, textures, and opacities, thereby supporting potential straw-burning control efforts. The dataset is publicly available on IEEE Dataport, offering a valuable resource for researchers in the remote sensing and machine learning communities to advance the development of effective straw-burning monitoring systems.</div></div>","PeriodicalId":73423,"journal":{"name":"International journal of applied earth observation and geoinformation : ITC journal","volume":"141 ","pages":"Article 104586"},"PeriodicalIF":7.6,"publicationDate":"2025-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144168952","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Stephen B. Stewart , Melissa Fedrigo , Shaun R. Levick , Anthony P. O’Grady , Daniel S. Mendham
{"title":"Multi-sensor modelling of woody vegetation and canopy cover across natural and modified ecosystems","authors":"Stephen B. Stewart , Melissa Fedrigo , Shaun R. Levick , Anthony P. O’Grady , Daniel S. Mendham","doi":"10.1016/j.jag.2025.104635","DOIUrl":"10.1016/j.jag.2025.104635","url":null,"abstract":"<div><div>Remote sensing is an essential tool for monitoring the extent and biophysical attributes of vegetation. Multi-sensor approaches, which can reduce the costs of developing high-quality datasets and improve predictive performance, are increasingly common. Despite this trend, the advantages of these data-fusion techniques are rarely reported beyond statistical performance. We use airborne lidar-derived metrics to develop models of canopy cover (CC, %) and woody vegetation (WV, presence/absence) using dry-season imagery from the Sentinel-1 (S1 C-band, 5.5 cm wavelength, Synthetic Aperture Radar) and Sentinel-2 (S2, multispectral optical) satellite constellations across natural and modified agricultural ecosystems in Tasmania, southeast Australia. Validation statistics at 18,876 sample locations demonstrated strong performance for both CC (R<sup>2</sup> = 0.83, RMSE = 0.13) and WV (OA = 0.94, Kappa = 0.87) when using both S1 and S2 variables for prediction. The small improvement in statistical performance provided by SAR variables (typically 1–2 % for CC and WV) understated the benefits of S1 for discriminating woody vegetation and quantifying canopy cover in non-woody ecosystems (e.g., alpine vegetation, heathlands, wetlands, coastal scrub), demonstrating the complementary benefits of multi-sensor prediction. The emergence and growth of natural capital accounting and frameworks such as the Nature Positive Initiative mean that high-quality, cost-effective spatial datasets will continue to be in demand.
Our study demonstrates the potential of non-commercial, publicly accessible remote sensing imagery to improve fine-scale analyses that may otherwise be cost-prohibitive to apply at scale.</div></div>","PeriodicalId":73423,"journal":{"name":"International journal of applied earth observation and geoinformation : ITC journal","volume":"141 ","pages":"Article 104635"},"PeriodicalIF":7.6,"publicationDate":"2025-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144168953","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hao Wang , Yongchao Shan , Liping Chen , Mengnan Liu , Lin Wang , Zhijun Meng
{"title":"Multi-scale feature learning for 3D semantic mapping of agricultural fields using UAV point clouds","authors":"Hao Wang , Yongchao Shan , Liping Chen , Mengnan Liu , Lin Wang , Zhijun Meng","doi":"10.1016/j.jag.2025.104626","DOIUrl":"10.1016/j.jag.2025.104626","url":null,"abstract":"<div><div>Accurate spatial distribution information of field features is critical for enabling autonomous agricultural machinery navigation. However, current perception systems exhibit limited segmentation performance in complex farm environments due to illumination variations and mutual occlusion among various regions. This paper proposes a low-cost UAV photogrammetry framework for centimeter-level 3D semantic maps of agricultural fields to support autonomous agricultural machinery path planning. The methodology combines UAV-captured images with RTK positioning to reconstruct high-precision 3D point clouds, followed by a novel Local-Global Feature Aggregation Network (LoGA-Net) integrating multi-scale attention mechanisms and geometric constraints. The framework achieves 78.6% mIoU in classifying eight critical agricultural categories: paddy field, dry field, building, vegetation, farm track, paved ground, infrastructure and other static obstacles. Experimental validation demonstrates a 5.9% accuracy improvement over RandLA-Net on the Semantic3D benchmark. This advancement significantly enhances perception accuracy in complex agricultural environments, particularly for field boundary delineation and occluded feature recognition, which directly facilitates robust path planning for unmanned agricultural machinery. 
The framework provides a scalable technical and data-driven foundation for achieving fully autonomous farm operations, ensuring both operational efficiency and environmental sustainability.</div></div>","PeriodicalId":73423,"journal":{"name":"International journal of applied earth observation and geoinformation : ITC journal","volume":"141 ","pages":"Article 104626"},"PeriodicalIF":7.6,"publicationDate":"2025-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144168948","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Yanjiao Song , Linyi Li , Yun Chen , Junjie Li , Zhe Wang , Zhen Zhang , Xi Wang , Wen Zhang , Lingkui Meng
{"title":"GCT-GF: A generative CNN-transformer for multi-modal multi-temporal gap-filling of surface water probability","authors":"Yanjiao Song , Linyi Li , Yun Chen , Junjie Li , Zhe Wang , Zhen Zhang , Xi Wang , Wen Zhang , Lingkui Meng","doi":"10.1016/j.jag.2025.104596","DOIUrl":"10.1016/j.jag.2025.104596","url":null,"abstract":"<div><div>Spatial and temporal data gaps present a significant challenge to high-frequency surface water mapping using satellite imagery. Utilizing observations from temporally close periods and multi-modal sensors for gap-filling is of critical importance. However, discontinuous pixel values inherent to conventional water maps hinder the application of deep learning methods, which are effective and popular for relevant studies. In this study, a novel approach, termed “gap-filling of surface water probability”, is introduced to achieve seamless surface water mapping. A new fused dataset tailored for this purpose was constructed, consisting of paired synthetic aperture radar (SAR) and surface water probability data with a 10-meter spatial resolution at a 10-day interval. A Generative CNN-Transformer (GCT) for Gap-Filling (GF) of surface water probability, GCT-GF, was then proposed to integrate the strengths of convolutional neural networks (CNNs) and transformers to reconstruct gapless water probability images from multi-modal and multi-temporal data. The GCT-GF employs a coarse-to-fine structure: information from different time points is initially aggregated using a branched gated inpainting module, followed by refinement and alignment of the coarse output under target SAR guidance. For adversarial learning, a branched SN-PatchGAN discriminator is introduced to adapt to the multi-temporal input. The results show that the GCT-GF surpasses the state-of-the-art relevant methods in quantitative metrics and visual perception. 
The fusion of multi-modal, multi-temporal inputs markedly enhances the gap-filling performance across varying gap ratios. Applied to Baiyangdian, the Poyang Lake Basin, and Qinghai Lake, GCT-GF demonstrates high reliability in large-scale scenes.</div></div>","PeriodicalId":73423,"journal":{"name":"International journal of applied earth observation and geoinformation : ITC journal","volume":"141 ","pages":"Article 104596"},"PeriodicalIF":7.6,"publicationDate":"2025-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144168951","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Henri Debray , Matthias Gassilloud , Richard Lemoine-Rodríguez , Michael Wurm , Xiaoxiang Zhu , Hannes Taubenböck
{"title":"Universal patterns of intra-urban morphology: Defining a global typology of the urban fabric using unsupervised clustering","authors":"Henri Debray , Matthias Gassilloud , Richard Lemoine-Rodríguez , Michael Wurm , Xiaoxiang Zhu , Hannes Taubenböck","doi":"10.1016/j.jag.2025.104610","DOIUrl":"10.1016/j.jag.2025.104610","url":null,"abstract":"<div><div>The physical dimension of cities and its spatial patterns play a crucial role in shaping society and urban dynamics. Understanding the complexity of urban systems requires a detailed assessment of their physical structure. Urban geography has long focused on framing typologies to represent common patterns in the urban fabric using various methodologies. However, only recent advancements in computational methods and global land cover data have enabled comprehensive identification of typologies of urban patterns at the city scale through new unsupervised approaches. Nevertheless, typologies of finer-grained patterns at the intra-urban scale have not yet been explored comprehensively at a global level. In this paper, building upon these advances, we explore the intra-urban patterns of more than 1500 cities across the globe. We rely on a Local Climate Zone land cover classification to represent the multidimensional variabilities of intra-urban morphology. Adapting a deep-learning-based unsupervised clustering approach, we find a typology of 138 intra-urban patterns. Analyzing the results of this data-driven approach, we prove that each pattern identified is unique, i.e. statistically different, in its composition and configuration. With this study summarizing the global diversity of the urban fabric, we reveal that any city of the world can be described as a specific assemblage of a fraction of these 138 universal patterns.
These universal patterns reveal a predominance at a global scale of built-up forms of low density in the intra-urban fabric.</div></div>","PeriodicalId":73423,"journal":{"name":"International journal of applied earth observation and geoinformation : ITC journal","volume":"141 ","pages":"Article 104610"},"PeriodicalIF":7.6,"publicationDate":"2025-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144168947","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Xiaorou Zheng , Jianxin Jia , Shoubin Dong , Yawei Wang , Runuo Lu , Yuwei Chen , Yueming Wang
{"title":"Training and inference Time Efficiency Assessment Framework for machine learning algorithms: A case study for hyperspectral image classification","authors":"Xiaorou Zheng , Jianxin Jia , Shoubin Dong , Yawei Wang , Runuo Lu , Yuwei Chen , Yueming Wang","doi":"10.1016/j.jag.2025.104591","DOIUrl":"10.1016/j.jag.2025.104591","url":null,"abstract":"<div><div>The increasing complexity and scale of remote sensing datasets, coupled with the challenges of accurately estimating algorithmic time efficiency, often lead to significant resource waste or even failure when using machine learning algorithms in urgent or resource-constrained scenarios. Accurate time efficiency estimation is critical for deploying effective algorithms, yet it remains challenging due to the many factors influencing computational performance. Traditional methods of evaluating time efficiency often neglect the effects of core model parameters and complex data scales in spectral and temporal dimensions. In addition, inference time, an essential factor in real-world applications, is often overlooked. To address these limitations, we propose the Time Efficiency Assessment Framework (TEAF), a novel method for evaluating the time efficiency of machine learning algorithms. Through mathematical reasoning, TEAF models the training and inference time as functions (<span><math><mi>ψ</mi></math></span>) of complex data scales and core model parameters. The strong linear correlation between <span><math><mi>ψ</mi></math></span> and the actual runtime allows TEAF to accurately predict the time and cost of machine learning tasks with a low computational overhead before algorithm execution. To validate this framework, we derived TEAF formulations for five classical machine learning algorithms and tested them on state-of-the-art hyperspectral image datasets and Sentinel-2 multispectral datasets. 
The results demonstrated that TEAF could accurately predict both training and inference time for various algorithms, with a strong linear correlation between <span><math><mi>ψ</mi></math></span> and actual runtime (<span><math><mrow><msup><mrow><mi>R</mi></mrow><mrow><mn>2</mn></mrow></msup><mo>></mo><mn>0</mn><mo>.</mo><mn>942</mn></mrow></math></span>). This study offers a practical solution to the challenges posed by the increasing volume and complexity of data in remote sensing image processing. The code is available at <span><span>https://github.com/SCUT-CCNL/TEAF</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":73423,"journal":{"name":"International journal of applied earth observation and geoinformation : ITC journal","volume":"141 ","pages":"Article 104591"},"PeriodicalIF":7.6,"publicationDate":"2025-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144168950","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A meta-review of remote sensing for rubber plantations","authors":"Zilong Yue , Chiwei Xiao","doi":"10.1016/j.jag.2025.104625","DOIUrl":"10.1016/j.jag.2025.104625","url":null,"abstract":"<div><div>The rapid expansion of rubber plantations has led to significant ecological impacts, including deforestation and reduced carbon storage capacity, such as 4.1 million ha of rubber-induced forest loss in Southeast Asia (SEA). Remote sensing is essential for monitoring rubber expansion and supporting initiatives such as REDD+ and EU deforestation regulations. However, due to the heterogeneity, dynamism, and complexity of rubber plantations, developing universal mapping algorithms remains challenging, and comprehensive reviews of remote sensing research specific to rubber plantations are lacking. Our <em>meta</em>-review systematically analyzes 305 peer-reviewed papers (2000–2024), synthesizing advancements in remote sensing techniques. The findings show that research is primarily focused on SEA (82 %), while regions like Africa and South America are underexplored. Optical data remains dominant (68 %), but the use of SAR has tripled, achieving up to 89 % accuracy when combined with phenological features. Additionally, deep learning improved classification accuracy by 15–20 %, especially in detecting young plantations under six years old. However, discrepancies and gaps in global plantation maps persist due to inconsistent validation methods and resolution limitations (>30 m).
Our review highlights the need for standardized global datasets and provides insights into future research directions, including improved feature selection, algorithm transferability, and better integration of multi-source data to support sustainable plantation management and accurate carbon accounting.</div></div>","PeriodicalId":73423,"journal":{"name":"International journal of applied earth observation and geoinformation : ITC journal","volume":"141 ","pages":"Article 104625"},"PeriodicalIF":7.6,"publicationDate":"2025-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144168949","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Extraction of the upright maize straw by integrating UAV multispectral and DSM data","authors":"Aosheng Chao , Enguang Xing , Yunbing Gao , Cunjun Li , Yuan Qin , Chengyang Zhu , Yu Liu , Qingwei Zhu","doi":"10.1016/j.jag.2025.104622","DOIUrl":"10.1016/j.jag.2025.104622","url":null,"abstract":"<div><div>Upright maize straw left in the field during autumn and winter significantly contributes to severe air pollution in agricultural ecosystems when burned. It is essential to obtain the spatial distribution of upright maize straw quickly and accurately for effective management and environmental protection. However, identifying upright maize straw using remote sensing is difficult because its spectral properties resemble those of other land covers, such as straw residue, bare soil, and sparse wheat, during the same period. This study proposes a novel index for extracting upright maize straw by integrating low-cost unmanned aerial vehicle (UAV) visible to near-infrared spectral bands with digital surface model (DSM) data. First, we analyzed the spectral characteristics of four land cover types: upright maize straw, straw residue, bare soil, and sparse wheat, and proposed the adjusted straw index (ASI) that leverages green, red, and red-edge bands. Next, we combined DSM data with the ASI to develop the adjusted height straw index (AHSI), considering the height of the upright maize straw. Finally, the combination of index-plus-Otsu threshold segmentation and random forest (RF) methods was applied to identify and extract the spatial distribution of upright maize straw. The results showed that our method effectively detected the main regions of upright maize straw. The two proposed straw indices achieved extraction accuracies of over 87 % (ASI) and 96 % (AHSI) across three different study regions.
The two new indices not only significantly improve the accuracy of upright maize straw identification but also provide a new approach for low-cost UAV-based identification of non-photosynthetic vegetation (NPV).</div></div>","PeriodicalId":73423,"journal":{"name":"International journal of applied earth observation and geoinformation : ITC journal","volume":"141 ","pages":"Article 104622"},"PeriodicalIF":7.6,"publicationDate":"2025-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144147312","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bo Yu , Yao Sun , Jiansong Hu , Fang Chen , Lei Wang
{"title":"Post-disaster building damage assessment based on gated adaptive multi-scale spatial-frequency fusion network","authors":"Bo Yu , Yao Sun , Jiansong Hu , Fang Chen , Lei Wang","doi":"10.1016/j.jag.2025.104629","DOIUrl":"10.1016/j.jag.2025.104629","url":null,"abstract":"<div><div>Accurate building damage assessment is crucial for post-disaster response, yet existing methods struggle to capture the complex spatial relationships and contextual features needed to distinguish damage levels. To address this, we propose the Gated Adaptive Multi-scale Spatial-frequency Fusion Network (GAMSF), a two-phase framework for building localization and damage classification. GAMSF integrates three key innovations: (1) Adaptive Attention (AA) to dynamically prioritize critical regions, (2) a Gated Multi-scale Feed-Forward Network (GMFFN) to enhance robustness by emphasizing prominent damage features, and (3) Multi-Scale Wavelet Fusion (MWF) to extract fine-grained structural details using wavelet transforms. Rigorous evaluations on datasets including xBD and xFBD demonstrate that GAMSF achieves state-of-the-art performance, with a 1.7% improvement in F1-score, a 2.1% gain in Kappa, and a 3.7% increase in minor damage identification accuracy compared to existing approaches. Furthermore, transferability experiments on the high-resolution Ida-BD dataset validate GAMSF’s superior generalization capabilities, outperforming four advanced models.
These results highlight the practical value of GAMSF in enhancing disaster management, emergency response, and resource allocation strategies.</div></div>","PeriodicalId":73423,"journal":{"name":"International journal of applied earth observation and geoinformation : ITC journal","volume":"141 ","pages":"Article 104629"},"PeriodicalIF":7.6,"publicationDate":"2025-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144147309","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Yang Liu , Xiaoyang Ma , Lulu An , Hong Sun , Fangkui Zhao , Xiaojing Yan , Yuntao Ma , Minzan Li
{"title":"Exploring UAV narrow-band hyperspectral indices and crop functional traits derived from radiative transfer models to detect wheat powdery mildew","authors":"Yang Liu , Xiaoyang Ma , Lulu An , Hong Sun , Fangkui Zhao , Xiaojing Yan , Yuntao Ma , Minzan Li","doi":"10.1016/j.jag.2025.104627","DOIUrl":"10.1016/j.jag.2025.104627","url":null,"abstract":"<div><div>Wheat powdery mildew (WPM) is one of the most severe crop diseases worldwide, affecting wheat growth and causing yield losses. WPM is a bottom-up disease that causes loss of cell integrity, leaf wilting, and canopy structure damage, symptoms that alter the crop’s functional traits (CFT) and canopy spectra. Unmanned aerial vehicle (UAV)-based hyperspectral analysis has become a mainstream method for WPM detection. However, the CFT changes experienced by infected wheat, the relationship between CFT and canopy spectra, and their role in WPM detection remain unclear, limiting understanding of WPM infection. Therefore, this study aimed to propose a new method that considers the role of CFT in detecting WPM and estimating disease severity. The UAV hyperspectral data used in this study were collected from the Plant Protection Institute’s research demonstration base, Xinxiang city, China, covering a broad range of WPM severity (0–85 %) from 2022 to 2024. The potential of eight CFT [leaf structure parameter (N), leaf area index (LAI), chlorophyll <em>a</em> + b content (Cab), carotenoids (Car), Car/Cab, anthocyanins (Ant), canopy chlorophyll content (CCC) and average leaf angle (Deg)] obtained from a hybrid method combining a radiative transfer model and random forest (RF) and fifty-five narrow-band hyperspectral indices (NHI) was explored in WPM detection. Results indicated that N, Cab, Ant, Car, LAI, and CCC showed a decreasing trend with increasing disease severity, while Deg and Car/Cab exhibited the opposite pattern.
There were marked differences between healthy samples and the two higher infection levels (moderate and severe infection) for Cab, Car, LAI, Deg, CCC, and Car/Cab. N and Ant only showed significant differences between the healthy samples and the highest infection level (severe infection). As Cab, Car, and Ant decreased, the spectral reflectance in the visible light region increased. The decrease in N and LAI was accompanied by a reduction in reflectance across the entire spectral range and the near-infrared area, which was exactly the opposite of Deg. The introduction of CFT greatly improved the accuracy of the WPM severity estimation model, with an R<sup>2</sup> of 0.92. Features related to photosynthesis, pigment content, and canopy structure played a decisive role in estimating WPM severity. The results also showed that feature importance interchanged markedly as disease levels increased. Using features that described canopy structure changes, such as the optimized soil adjusted vegetation index, LAI, visible atmospherically resistant indices, and CCC, the mild infection stage of this disease was most easily distinguished from healthy samples. In contrast, the most severe impacts of WPM were best characterized by features related to photosynthesis (e.g., photochemical reflectance index 515) and pigment content (e.g., normalized phaeophytinization index).
This study helps deepen the un","PeriodicalId":73423,"journal":{"name":"International journal of applied earth observation and geoinformation : ITC journal","volume":"141 ","pages":"Article 104627"},"PeriodicalIF":7.6,"publicationDate":"2025-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144147311","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}