{"title":"MSFDmap: A novel scheme to map monthly soil freeze depth in the pan-Arctic considering spatiotemporal heterogeneity in heat transfer capability","authors":"Liyuan Chen , Wenquan Zhu , Cunde Xiao , Cenliang Zhao , Hongxiang Guo","doi":"10.1016/j.jag.2025.104820","DOIUrl":"10.1016/j.jag.2025.104820","url":null,"abstract":"<div><div>Accurately characterizing the spatiotemporal dynamics of soil freeze depth (SFD) is critical for understanding the response of frozen soils to climate change. Existing SFD mapping schemes mainly focus on annual maximum values, rarely address monthly variations, and fail to capture both spatiotemporal heterogeneity and physical constraints. We developed a monthly SFD mapping scheme (MSFDmap) that considers spatiotemporal heterogeneity in heat transfer capability. Based on the simplified Stefan equation, which is physically constrained by energy conservation, MSFDmap first predicts the spatial distribution of monthly heat transfer factor (HTF) using a random forest regression model driven by soil clay content, precipitation, soil bulk density, soil organic carbon content, soil water content, and leaf area index, and then maps monthly SFD. MSFDmap was implemented using 2123 site-month observations from 60 pan-Arctic sites over 20 years. Results show that MSFDmap achieves a root mean square error (RMSE) of 19.21 cm and an R<sup>2</sup> of 0.91 for monthly SFD estimates, reducing RMSE by 24–55 % and improving R<sup>2</sup> by 8–65 % over existing schemes. For monthly SFD averaged across sites, estimates exhibit strong temporal agreement with quasi-true SFD series (Pearson correlation coefficient <em>r</em> = 0.99, RMSE = 9.13 cm). The MSFDmap-derived SFD distribution exhibits expected latitudinal and altitudinal gradients, with <em>r</em> = 0.60 relative to an ERA5-Land-based reference distribution. These results demonstrate that MSFDmap effectively characterizes the spatiotemporal dynamics of monthly SFD and outperforms existing schemes. It is attributed to the capture of heterogeneous HTF, which enables the representation of SFD heterogeneity under physical constraints.</div></div>","PeriodicalId":73423,"journal":{"name":"International journal of applied earth observation and geoinformation : ITC journal","volume":"143 ","pages":"Article 104820"},"PeriodicalIF":8.6,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144920027","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"In the search for optimal multi-view learning models for crop classification with global remote sensing data","authors":"Francisco Mena , Diego Arenas , Andreas Dengel","doi":"10.1016/j.jag.2025.104823","DOIUrl":"10.1016/j.jag.2025.104823","url":null,"abstract":"<div><div>Studying and analyzing cropland is a difficult task due to its dynamic and heterogeneous growth behavior. Usually, diverse data sources can be collected for its estimation. Although deep learning models have proven to excel in the crop classification task, they face substantial challenges when dealing with multiple inputs, named multi-modal or Multi-View Learning (MVL). The methods used in the MVL scenario can be structured based on the encoder architecture, the fusion strategy, and the optimization technique. Here, the literature has primarily focused on using specific encoder architectures for local regions, lacking a deeper exploration of other components in the MVL methodology. In contrast, we investigate the simultaneous selection of the fusion strategy and encoder architecture, assessing global-scale cropland and crop-type classifications. We use a range of five fusion strategies (Input, Feature, Decision, Ensemble, Hybrid) and five temporal encoders (LSTM, GRU, TempCNN, TAE, L-TAE) as possible configurations in the MVL method. We use the CropHarvest dataset for validation, which provides optical, radar, weather time series, and topographic information as input data. We found that in scenarios with a limited number of labeled samples, a unique configuration is insufficient for all cases. Instead, a specific combination should be meticulously sought, including an encoder and fusion strategy. To streamline this search process, we suggest identifying the optimal encoder architecture tailored for a particular fusion strategy and then determining the most suitable fusion strategy for the classification task. We provide a standardized model schema for the exploration of crop classification through an MVL methodology. jn</div></div>","PeriodicalId":73423,"journal":{"name":"International journal of applied earth observation and geoinformation : ITC journal","volume":"143 ","pages":"Article 104823"},"PeriodicalIF":8.6,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145019253","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Corrigendum to “Coupled InVEST-GTWR modeling reveals scale-dependent drivers of N and P export in a Chinese mountainous region” [Int. J. Appl. Earth Obs. Geoinf. 142 (2025) 104705]","authors":"Zhiqiang Lin , Shuangyun Peng , Yuanyuan Yin , Dongling Ma , Rong Jin , Jiaying Zhu , Ziyi Zhu , Shuangfu Shi , Yilin Zhu","doi":"10.1016/j.jag.2025.104833","DOIUrl":"10.1016/j.jag.2025.104833","url":null,"abstract":"","PeriodicalId":73423,"journal":{"name":"International journal of applied earth observation and geoinformation : ITC journal","volume":"143 ","pages":"Article 104833"},"PeriodicalIF":8.6,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145093970","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ViCxLSTM: An extended Long Short-term Memory vision transformer for complex remote sensing scene classification","authors":"Swalpa Kumar Roy , Ali Jamali , Koushik Biswas , Danfeng Hong , Pedram Ghamisi","doi":"10.1016/j.jag.2025.104801","DOIUrl":"10.1016/j.jag.2025.104801","url":null,"abstract":"<div><div>Scene classification plays a critical role in remote sensing image analysis, with numerous methods based on Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs) developed to improve performance on high-resolution remote sensing (HRRS) imagery. However, the existing models struggle with several key challenges, including effectively capturing fine-grained local features and modeling long-range spatial dependencies in complex scenes. These limitations reduce the discriminative power of extracted features, which is critical for HRRS image classification. To overcome these issues, our study aims to design a unified model that jointly leverages local information extraction, global context modeling, and long-range dependency learning. We propose a novel architecture, ViCxLSTM, designed to enhance feature discriminability for HRRS scene classification. ViCxLSTM is a hybrid model that integrates a Local Pattern Unit (comprising convolutional layers and Fourier Transforms), an extended Long Short-Term Memory module (xLSTM), and a Vision Transformer. This integrated architecture enables the model to capture a wide range of spatial patterns, from local textures to long-range dependencies and global contextual relationships. Experimental evaluations show that ViCxLSTM achieves superior classification performance across diverse land use datasets, outperforming several state-of-the-art models, including ResNet-50, ResNet-101, ResNet-152, ViT, LeViT, CrossViT, DeepViT, and CaiT. The code will be provided freely accessible at <span><span>https://github.com/aj1365/ViCxLSTM</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":73423,"journal":{"name":"International journal of applied earth observation and geoinformation : ITC journal","volume":"143 ","pages":"Article 104801"},"PeriodicalIF":8.6,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144925319","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Malaria risk assessment in Indonesia: a machine and deep learning framework","authors":"Anjar Dimara Sakti , Jasmine Nur Mahdani , Hubbi Nashrullah Muhammad , Elstri Sihotang , Cokro Santoso , Khairunnisah , Afina Nur Fauziyyah , Fedri Ruluwedrata Rinawan , Khairunnisa Supardi , Rezzy Eko Caraka , Ketut Wikantika","doi":"10.1016/j.jag.2025.104793","DOIUrl":"10.1016/j.jag.2025.104793","url":null,"abstract":"<div><div>This study focuses on developing comprehensive malaria risk model for Indonesia, integrating susceptibility, vulnerability, and capacity to better understand and manage malaria risks across the country. The primary objective was to identify high-risk areas and prioritize malaria management efforts by combining machine-deep learning techniques and socioeconomic data. Using Gradient Tree Boosting, Classification and Regression Tree, Random Forest algorithms and Deep Learning Multilayer Perceptron, the study analyzed malaria susceptibility, revealing that 38% of Indonesia’s territory was categorized as highly susceptible, with the provinces of Central Kalimantan, West Kalimantan, East Kalimantan, South Sumatra, and Papua identified as the most affected regions. Novel aspects of this study include integrating age and sex ratios to model vulnerability and calculating healthcare access to assess capacity, which showed that 65% of the territory exhibited high vulnerability and 34% had low healthcare capacity, with Kalimantan and Papua consistently ranking highest in risk factors. By combining these factors, the final malaria risk model identified 88 cities with high malaria risk, of which 60 cities with low Gross Regional Domestic Product were prioritized for intervention. This research contributes to malaria control by offering a detailed and data-driven framework to guide policy and resource allocation, enhancing efforts to achieve sustainable health outcomes in malaria-endemic regions.</div></div>","PeriodicalId":73423,"journal":{"name":"International journal of applied earth observation and geoinformation : ITC journal","volume":"143 ","pages":"Article 104793"},"PeriodicalIF":8.6,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144932446","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Impacts of coastal aquaculture pond distribution and drainage variations on offshore water quality in China","authors":"Yingcong Wang , Duanrui Wang , Xinying Shi , Zongming Wang , Kaishan Song , Dehua Mao","doi":"10.1016/j.jag.2025.104858","DOIUrl":"10.1016/j.jag.2025.104858","url":null,"abstract":"<div><div>Coastal aquaculture ponds (CAPs) play a key role in ensuring global food security. However, their rapid expansion and seasonal drainage have triggered severe offshore water quality deterioration, posing substantial challenges to coastal sustainability in China. Therefore, accurately characterizing CAPs drainage dynamics and analyzing their impacts on offshore water quality are practically significant. In this study, we conducted the first national-scale assessment of CAPs drainage dynamics from 2017 to 2022 by utilizing time-series Harmonized Landsat and Sentinel-2 imagery combined with Sentinel-1 SAR data. We further quantitatively assessed the impacts of CAPs distribution and drainage periods on offshore water quality characterized by six parameters. CAPs drainage period exhibited temporal variations across China’s coastline. Interannual analysis of drainage patterns (2017–2022) exhibited significant delays, with an average of 12 and 29 days at the start and end of drainage periods, respectively. These temporal shifts were accompanied by a 19-day mean increase in drainage duration. Moreover, regions with degraded offshore water quality exhibited a high density of CAPs distribution. CAPs area and Chlorophyll-a concentration exhibited a significant positive correlation, which gradually increased as offshore distance decreased. Moreover, offshore water quality parameters exhibited notable peaks during the drainage periods, with the peak timing synchronized with interannual drainage dynamics, while water quality showed minimal fluctuations near the non-drained aquaculture ponds. The research findings could benefit coastal ecosystem governance and the sustainable development of marine fisheries, thereby promoting marine fishery enhancement and safeguarding marine ecological security.</div></div>","PeriodicalId":73423,"journal":{"name":"International journal of applied earth observation and geoinformation : ITC journal","volume":"143 ","pages":"Article 104858"},"PeriodicalIF":8.6,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145094042","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SMAF-net: semantics-guided modality transfer and hierarchical feature fusion for optical-SAR image registration","authors":"Yumeng Hong , Jun Pan , Jiangong Xu , Shuying Jin , Junli Li","doi":"10.1016/j.jag.2025.104827","DOIUrl":"10.1016/j.jag.2025.104827","url":null,"abstract":"<div><div>Accurate registration of optical and synthetic aperture radar (SAR) images is critical for effective fusion in remote sensing applications. To address the significant radiometric and geometric differences between these modalities, SMAF-Net, a novel network that integrates semantics-guided modality transfer and hierarchical feature fusion for optical-SAR image registration, is proposed. For modality transfer, a feature-constrained generative adversarial module (SGMT) is used to translate SAR to pseudo-optical images. By incorporating deep features from a multiscale feature learning module (MFLM) as semantic constraints, the translated images preserve structural details and reduce modality discrepancies. For feature matching, a channel attention-based hierarchical aggregation module (CA-HAM) is designed to effectively fuse multi-level features. Combined with a joint detection-description strategy, the network enables accurate keypoint detection and descriptor extraction.<!--> <!-->Experiments on optical-SAR datasets show that the proposed method achieves an average registration error of 2.26 pixels, outperforming state-of-the-art (SOTA) methods and enabling accurate registration between optical and SAR images.</div></div>","PeriodicalId":73423,"journal":{"name":"International journal of applied earth observation and geoinformation : ITC journal","volume":"143 ","pages":"Article 104827"},"PeriodicalIF":8.6,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145026874","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multimodal deep learning enables forest height mapping from patchy spaceborne LiDAR using SAR and passive optical satellite data","authors":"Man Chen , Wenquan Dong , Hao Yu , Iain H. Woodhouse , Casey M. Ryan , Haoyu Liu , Selena Georgiou , Edward T.A. Mitchard","doi":"10.1016/j.jag.2025.104814","DOIUrl":"10.1016/j.jag.2025.104814","url":null,"abstract":"<div><div>Accurate estimation of forest height plays a pivotal role in mapping carbon stocks from space. Spaceborne LiDARs give accurate spot estimates of forest canopy height, but sample only a tiny fraction of the landscape. The gaps must therefore be filled using other satellite remote sensing data. Although several studies have employed machine learning methods to produce wall-to-wall forest height maps, they have generally overlooked the distinct characteristics of various remote sensing data sources and have not fully exploited the potential benefits of multisource remote sensing integration. In this study, we propose a novel deep learning framework termed the multimodal attention remote sensing network (MARSNet) to extrapolate dominant heights derived from Global Ecosystem Dynamics Investigation (GEDI), using Sentinel-1 C-band Synthetic Aperture Radar (SAR) data, Advanced Land Observing Satellite-2 (ALOS-2) Phased Array type L-band Synthetic Aperture Radar-2 (PALSAR-2) data, and Sentinel-2 passive optical data. MARSNet comprises separate encoders for each remote sensing data modality to extract multi-scale features, and a shared decoder to fuse the features and estimate height. Using individual encoders for each remote sensing data source avoids interference across modalities and extracts distinct representations. To focus on the useful information from each dataset, we reduce the prevalent spatial and layer redundancies in each remote sensing data by incorporating the extended spatial and layer reconstruction convolution (ESLConv) modules in the encoders. MARSNet achieves good performance in estimating dominant height, with a R<sup>2</sup> of 0.62 and RMSE of 2.82 m on test data, outperforming the widely used random forest (RF) approach which attained an R<sup>2</sup> of 0.55 and RMSE of 3.05 m using the same layers. We demonstrate the efficacy of the MARSNet modules and the expansion of data sources for improving dominant height estimation through network ablation studies and data ablation studies. Finally, we apply the trained MARSNet model to generate wall-to-wall maps at 10 m resolution for Jilin province, China. Through independent validation using field measurements, MARSNet demonstrates an R<sup>2</sup> of 0.54 and RMSE of 3.76 m, compared to 0.39 and 4.37 m for the RF baseline model. Additionally, MARSNet effectively mitigates the common tendency of RF models to overestimate in low height areas and underestimate in high canopy areas (low sensitivity). Our research demonstrates the effectiveness of a multimodal deep learning approach fusing GEDI with SAR and passive optical imagery for enhancing the accuracy of high-resolution dominant height estimation. 
This method shows promise for enabling accurate large-scale forest height mapping in areas where high-quality ground data are available, potentially revolutionizing our understanding of global forest structure and carbon stocks.</div></div>","PeriodicalId":73423,"journal":{"name":"International journal of applied earth observation and geoinformation : ITC journal","volume":"143 ","pages":"Article 104814"},"PeriodicalIF":8.6,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144925316","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
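The core architectural pattern — one encoder per modality, a shared decoder that fuses and regresses height — is simple to sketch. The block below is a minimal illustration of that pattern only; the layer choices, channel counts, and band assignments are assumptions, and the paper's attention and ESLConv modules are omitted.

```python
# Minimal sketch of the "separate encoder per modality, shared decoder"
# pattern; layer choices and sizes are illustrative only.
import torch
import torch.nn as nn

def encoder(in_ch, width=32):
    return nn.Sequential(
        nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU(),
        nn.Conv2d(width, width, 3, padding=1), nn.ReLU())

class MultiModalHeightNet(nn.Module):
    def __init__(self, in_chs=(2, 2, 10), width=32):
        # e.g. Sentinel-1 (VV, VH), PALSAR-2 (HH, HV), Sentinel-2 (10 bands)
        super().__init__()
        self.encoders = nn.ModuleList(encoder(c, width) for c in in_chs)
        self.decoder = nn.Sequential(
            nn.Conv2d(width * len(in_chs), width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, 1, 1))        # per-pixel dominant height (m)
    def forward(self, modalities):
        feats = [enc(x) for enc, x in zip(self.encoders, modalities)]
        return self.decoder(torch.cat(feats, dim=1))

# Training would mask the loss to the sparse GEDI footprints.
pred = MultiModalHeightNet()([torch.randn(1, c, 64, 64) for c in (2, 2, 10)])
```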
{"title":"MHFu-former: A multispectral and hyperspectral image fusion transformer","authors":"Xue Wang , Songling Yin , Xiaojun Xu , Yong Mei , Yan Huang , Kun Tan","doi":"10.1016/j.jag.2025.104843","DOIUrl":"10.1016/j.jag.2025.104843","url":null,"abstract":"<div><div>Hyperspectral images (HSIs) can capture detailed spectral features for object recognition, while multispectral images (MSIs) can provide a high spatial resolution for accurate object location. Deep learning methods have been widely applied in the fusion of hyperspectral and multispectral images, but still face challenges, including the limited capacity to enhance spatial details and preserve spectral information, as well as issues related to spatial scale dependency. In this paper, to solve the above problems and achieve more effective information integration between HSIs and MSIs, we propose a novel multispectral and hyperspectral image fusion transformer (MHFu-former). The proposed MHFu-former consists of two main components: (1) a feature extraction and fusion module, which first extracts deep multi-scale features from the hyperspectral and multispectral imagery and fuses them to form a joint feature map, which is then processed by a dual-branch structure consisting of a Swin transformer module and convolutional module to capture the global context and fine-grained spatial features, respectively; and (2) a spatial-spectral fusion attention mechanism, which adaptively enhances the important spectral information and fuses it with the spatial detail information, significantly boosting the model’s sensitivity to the key spectral features while preserving rich spatial details. We conducted comparative experiments on the indoor Cave dataset and the Shanghai and Ganzhou datasets from the ZY1-02D satellite to validate the effectiveness and superiority of the proposed method. Compared to the state-of-the-art methods, the proposed method significantly enhances the fusion performance across multiple key metrics, demonstrating its outstanding ability to process spatial and spectral details.</div></div>","PeriodicalId":73423,"journal":{"name":"International journal of applied earth observation and geoinformation : ITC journal","volume":"143 ","pages":"Article 104843"},"PeriodicalIF":8.6,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145094009","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"AeroReformer: Aerial Referring Transformer for UAV-based Referring Image Segmentation","authors":"Rui Li, Xiaowei Zhao","doi":"10.1016/j.jag.2025.104817","DOIUrl":"10.1016/j.jag.2025.104817","url":null,"abstract":"<div><div>As a novel and challenging task, referring segmentation combines computer vision and natural language processing to localise and segment objects based on textual descriptions. While Referring Image Segmentation (RIS) has been extensively studied in natural images, little attention has been given to aerial imagery, particularly from Unmanned Aerial Vehicles (UAVs). The unique challenges of UAV imagery, including complex spatial scales, occlusions, and varying object orientations, render existing RIS approaches ineffective. A key limitation has been the lack of UAV-specific datasets, as manually annotating pixel-level masks and generating textual descriptions is labour-intensive and time-consuming. To address this gap, we design an automatic labelling pipeline that leverages pre-existing UAV segmentation datasets and the Multimodal Large Language Models (MLLM) for generating textual descriptions. Furthermore, we propose Aerial Referring Transformer (AeroReformer), a novel framework for UAV Referring Image Segmentation (UAV-RIS), featuring a Vision-Language Cross-Attention Module (VLCAM) for effective cross-modal understanding and a Rotation-Aware Multi-Scale Fusion (RAMSF) decoder to enhance segmentation accuracy in aerial scenes. Extensive experiments on two newly developed datasets demonstrate the superiority of AeroReformer over existing methods, establishing a new benchmark for UAV-RIS. The datasets and code are publicly available at <span><span>https://github.com/lironui/AeroReformer</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":73423,"journal":{"name":"International journal of applied earth observation and geoinformation : ITC journal","volume":"143 ","pages":"Article 104817"},"PeriodicalIF":8.6,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144922914","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}