Yu Liang , Shilei Cao , Juepeng Zheng , Xiucheng Zhang , Jianxi Huang , Haohuan Fu
{"title":"Low Saturation Confidence Distribution-based Test-Time Adaptation for cross-domain remote sensing image classification","authors":"Yu Liang , Shilei Cao , Juepeng Zheng , Xiucheng Zhang , Jianxi Huang , Haohuan Fu","doi":"10.1016/j.jag.2025.104463","DOIUrl":"10.1016/j.jag.2025.104463","url":null,"abstract":"<div><div>Unsupervised Domain Adaptation (UDA) has emerged as a powerful technique for addressing the distribution shift across various Remote Sensing (RS) applications. However, most UDA approaches require access to source data, which may be infeasible due to data privacy or transmission constraints. Source-free Domain Adaptation addresses the absence of source data but usually demands a large amount of target domain data beforehand, hindering rapid adaptation and restricting their applicability in broader scenarios. In practical cross-domain RS image classification, achieving a balance between adaptation speed and accuracy is crucial. Therefore, we propose Low Saturation Confidence Distribution Test-Time Adaptation (LSCD-TTA), marketing the first attempt to explore Test-Time Adaptation for cross-domain RS image classification without requiring source or target training data. LSCD-TTA adapts a source-trained model on the fly using only the target test data encountered during inference, enabling immediate and efficient adaptation while maintaining high accuracy. Specifically, LSCD-TTA incorporates three optimization strategies tailored to the distribution characteristics of RS images. Firstly, weak-confidence softmax-entropy loss emphasizes categories that are more difficult to classify to address unbalanced class distribution. Secondly, balanced-categories softmax-entropy loss softens and balances the predicted probabilities to tackle the category diversity. Finally, low saturation distribution loss utilizes soft log-likelihood ratios to reduce the impact of low-confidence samples in the later stages of adaptation. 
By effectively combining these losses, LSCD-TTA enables rapid and accurate adaptation to the target domain for RS image classification. We evaluate LSCD-TTA on six domain adaptation tasks across three RS datasets, where LSCD-TTA outperforms existing DA and TTA methods with average accuracy gains of 4.99% on Resnet-50, 5.22% on Resnet-101, and 2.37% on ViT-B/16.</div></div>","PeriodicalId":73423,"journal":{"name":"International journal of applied earth observation and geoinformation : ITC journal","volume":"139 ","pages":"Article 104463"},"PeriodicalIF":7.6,"publicationDate":"2025-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143678272","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
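The three LSCD-TTA losses are defined in the paper itself; as a hedged illustration of the generic entropy objectives this family of test-time adaptation methods builds on, a minimal NumPy sketch (function names and the exact formulations here are illustrative, not the paper's):

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax over the class axis.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def entropy_tta_losses(logits):
    """Generic entropy-based TTA objectives on a batch of logits.

    Returns (sample_entropy, balance_entropy):
    - sample_entropy: mean per-sample prediction entropy, typically
      minimised so the model commits to confident test predictions;
    - balance_entropy: entropy of the batch-averaged prediction,
      typically maximised to keep predictions from collapsing onto
      a few dominant classes.
    """
    p = softmax(logits)
    sample_entropy = -np.mean(np.sum(p * np.log(p + 1e-12), axis=1))
    p_bar = p.mean(axis=0)
    balance_entropy = -np.sum(p_bar * np.log(p_bar + 1e-12))
    return sample_entropy, balance_entropy
```

Uniform logits give maximal sample entropy (log of the class count), while confident logits drive it toward zero; a weighted combination of such terms is the usual shape of a TTA objective.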
Zhili Zhang , Xiangyun Hu , Yue Yang , Bingnan Yang , Kai Deng , Hengming Dai , Mi Zhang
{"title":"High-quality one-shot interactive segmentation for remote sensing images via hybrid adapter-enhanced foundation models","authors":"Zhili Zhang , Xiangyun Hu , Yue Yang , Bingnan Yang , Kai Deng , Hengming Dai , Mi Zhang","doi":"10.1016/j.jag.2025.104466","DOIUrl":"10.1016/j.jag.2025.104466","url":null,"abstract":"<div><div>Interactive segmentation of remote sensing images enables the rapid generation of annotated samples, providing training samples for deep learning algorithms and facilitating high-quality extraction and classification for remote sensing objects. However, existing interactive segmentation methods, such as SAM, are primarily designed for natural images and show inefficiencies when applied to remote sensing images. These methods often require multiple interactions to achieve satisfactory labeling results and frequently struggle to obtain precise target boundaries. To address these limitations, we propose a high-quality one-shot interactive segmentation method (OSISeg) based on the fine-tuning of foundation models, tailored for the efficient annotation of typical objects in remote sensing imagery. OSISeg utilizes robust visual priors from foundation models and implements a hybrid adapter-based strategy for fine-tuning these models. Specifically, It employs a parallel structure with hybrid adapter designs to adjust multi-head self-attention and feed-forward neural networks within foundation models, effectively aligning remote sensing image features for interactive segmentation tasks. Furthermore, the proposed OSISeg integrates point, box, and scribble prompts, facilitating high-quality segmentation only using one prompt through a lightweight decoder. Experimental results on multiple datasets—including buildings, water bodies, and woodlands—demonstrate that our method outperforms existing fine-tuning methods and significantly enhances the quality of one-shot interactive segmentation for typical remote sensing objects. 
This study highlights the potential of the proposed OSISeg to significantly accelerate sample annotation in remote sensing image labeling tasks, establishing it as a valuable tool for sample labeling in the field of remote sensing. Code is available at <span><span>https://github.com/zhilyzhang/OSISeg</span></span>.</div></div>","PeriodicalId":73423,"journal":{"name":"International journal of applied earth observation and geoinformation : ITC journal","volume":"139 ","pages":"Article 104466"},"PeriodicalIF":7.6,"publicationDate":"2025-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143678271","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
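OSISeg's hybrid adapter design is specific to that work; as a generic sketch of the parallel bottleneck-adapter idea it builds on (a small trainable branch added alongside a frozen block's output), assuming illustrative names and shapes:

```python
import numpy as np

def parallel_adapter(h, w_down, w_up, scale=0.1):
    """Bottleneck adapter applied in parallel to a frozen block.

    h       : (batch, dim) frozen feature from the foundation model
    w_down  : (dim, bottleneck) trainable down-projection
    w_up    : (bottleneck, dim) trainable up-projection
    The frozen feature is kept, and the small branch
    (down-project -> ReLU -> up-project) adds a scaled,
    task-specific correction on top of it.
    """
    delta = np.maximum(h @ w_down, 0.0) @ w_up
    return h + scale * delta
```

With zero-initialised adapter weights the output equals the frozen feature, which is why adapter tuning can start from the pretrained model's behaviour and drift only as far as the target task requires.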
Junsheng Ding , Wu Chen , Junping Chen , Jungang Wang , Yize Zhang , Lei Bai , Yuyan Wang , Xiaolong Mi , Tong Liu , Duojie Weng
{"title":"Spatiotemporal inhomogeneity of accuracy degradation in AI weather forecast foundation models: A GNSS perspective","authors":"Junsheng Ding , Wu Chen , Junping Chen , Jungang Wang , Yize Zhang , Lei Bai , Yuyan Wang , Xiaolong Mi , Tong Liu , Duojie Weng","doi":"10.1016/j.jag.2025.104473","DOIUrl":"10.1016/j.jag.2025.104473","url":null,"abstract":"<div><div>The artificial intelligence (AI) weather forecast foundation models can infer and generate precise global atmospheric state forecasts on the user’s device and with speed over 10,000 times faster than the operational Integrated Forecasting System (IFS), and it is making increasingly significant contributions to geodetic applications represented by the Global Navigation Satellite System (GNSS). However, existing studies on the investigation of these AI models are typically carried out by concentrating on specific one or several meteorological events in certain regions or by comparison with physical models, and the evaluation results obtained in this manner are not comprehensive and universal. Additionally, we find that the results obtained by the foundation models through the “rollout” method for forecasting are not uniform in terms of time and space. This temporal and spatial inhomogeneity of accuracy and accuracy degradation are related to AI algorithms and attributes of training data, etc., but these characteristics have not been thoroughly explored and analyzed. In this study, we obtained the global forecast results of foundation models for 2022 and subsequently derived the GNSS tropospheric delay through numerical integration. We calculated the mean deviation, mean absolute error, and root mean square error of these data. Using these metrics, we analyzed the spatiotemporal inhomogeneity in the accuracy degradation of foundation models, represented by Huawei Cloud Pangu-Weather, Google DeepMind GraphCast, and Shanghai AI Lab FengWu. 
We evaluated how this inhomogeneity changes with forecast time and identified the best-performing models across different regions and forecast durations. From the results, we find that taking topography into account when training a model enhances its accuracy at high altitudes, and that closely related atmospheric variables, such as precipitation and water vapor, exert a mutually facilitating influence on forecast accuracy. The contributions of this study are twofold: it serves as a valuable reference for geodetic and remote sensing users employing foundation models, and it offers insights and case support for AI practitioners aiming to develop more accurate models for weather forecasting.</div></div>","PeriodicalId":73423,"journal":{"name":"International journal of applied earth observation and geoinformation : ITC journal","volume":"139 ","pages":"Article 104473"},"PeriodicalIF":7.6,"publicationDate":"2025-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143678273","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
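The paper derives the GNSS tropospheric delay by numerically integrating the forecast fields; a much simpler, standard point of reference is the Saastamoinen model, which estimates the zenith hydrostatic part of that delay directly from surface pressure, latitude, and height:

```python
import math

def zenith_hydrostatic_delay(pressure_hpa, lat_deg, height_m):
    """Saastamoinen zenith hydrostatic delay in metres.

    pressure_hpa : surface pressure in hPa
    lat_deg      : geodetic latitude in degrees
    height_m     : station height in metres
    The denominator corrects gravity for latitude and height.
    """
    f = 1.0 - 0.00266 * math.cos(2.0 * math.radians(lat_deg)) \
            - 0.28e-6 * height_m
    return 0.0022768 * pressure_hpa / f
```

At sea-level standard pressure this gives roughly 2.3 m of zenith delay, which is the quantity whose forecast-driven error growth the study characterises.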
{"title":"Enhanced unsupervised domain adaptation with iterative pseudo-label refinement for inter-event oil spill segmentation in SAR images","authors":"Guangyan Cui , Jianchao Fan , Yarong Zou","doi":"10.1016/j.jag.2025.104479","DOIUrl":"10.1016/j.jag.2025.104479","url":null,"abstract":"<div><div>The imaging features of oil spills in synthetic aperture radar (SAR) images have significant differences due to factors such as marine environment, SAR sensors, oil film thickness and types, which makes it difficult to obtain a generalized model, and the limited number of SAR images obtained from new oil spill events hampers the effective training of deep learning networks. To solve these issues, an enhanced unsupervised domain adaptation with iterative pseudo-label refinement (EUDA-PLR) approach is proposed for inter-event oil spill SAR image segmentation. Specifically, the freely downloaded and collected Sentinel-1 and ERS-1/2 oil spill historical data are used as source domains, respectively, which are migrated to each new oil spill event with limited training samples based on adversarial training to improve the timeliness and accuracy of processing new oil spill events. Subsequently, the target domain is divided more rationally by combining the knowledge of the resolution and polarization mode of the satellite images, the wind speed information of the marine environment, and the computable oil-seawater power ratio. Finally, high-probability oil spill features are stored and updated based on top-<span><math><mi>K</mi></math></span> marginal probabilities to improve the completeness of oil spill features in the pseudo-labeling of strong samples in the target domain, and adversarial training is utilized to enhance the ability to extract oil spill features from weak samples in the target domain. EUDA-PLR is applicable to a variety of current mainstream SAR satellites. 
In the experiments, the source domain data contains the Sentinel-1 dataset from 2014 to 2021 and the ERS-1/2 dataset from 1991 to 2002, and the target domain data contains the GF-3 dataset of an oil spill event in the Bohai Bay region in 2018, the GF-3 and COSMO-SkyMed dataset of an oil spill event in the South China Sea region in 2019, the GF-3 dataset for an oil spill event in the East China Sea region in 2020, and the Radarsat-2 dataset for an oil spill event in the Bohai Bay region in 2021. The proposed method has been shown to outperform existing algorithms in eight comparison experiments for four real oil spill events.</div></div>","PeriodicalId":73423,"journal":{"name":"International journal of applied earth observation and geoinformation : ITC journal","volume":"139 ","pages":"Article 104479"},"PeriodicalIF":7.6,"publicationDate":"2025-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143678294","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
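The top-K marginal-probability feature store can be illustrated generically; the sketch below is an assumption about the mechanism (a per-class bank keeping only the K most confident features), not the paper's implementation:

```python
import heapq

class TopKFeatureBank:
    """Per-class store of the K highest-confidence feature vectors,
    sketching the idea of retaining only high-probability oil spill
    features for pseudo-label refinement."""

    def __init__(self, k):
        self.k = k
        self.bank = {}   # class_id -> min-heap of (confidence, counter, feature)
        self._n = 0      # tie-breaker so features are never compared directly

    def update(self, class_id, confidence, feature):
        heap = self.bank.setdefault(class_id, [])
        item = (confidence, self._n, feature)
        self._n += 1
        if len(heap) < self.k:
            heapq.heappush(heap, item)
        elif confidence > heap[0][0]:
            # Evict the least confident stored feature.
            heapq.heapreplace(heap, item)

    def features(self, class_id):
        """Stored features for a class, most confident first."""
        return [f for _, _, f in sorted(self.bank.get(class_id, []), reverse=True)]
```

Low-confidence candidates never displace stored high-confidence ones, so the bank's contents improve monotonically as adaptation iterates.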
Cuilin Yu , Qingsong Wang , Zibo Zhang , Zixuan Zhong , Yusheng Ding , Tao Lai , Haifeng Huang , Peng Shen
{"title":"Multi-source data joint processing framework for DEM calibration and fusion","authors":"Cuilin Yu , Qingsong Wang , Zibo Zhang , Zixuan Zhong , Yusheng Ding , Tao Lai , Haifeng Huang , Peng Shen","doi":"10.1016/j.jag.2025.104484","DOIUrl":"10.1016/j.jag.2025.104484","url":null,"abstract":"<div><div>High-accuracy digital elevation models (DEMs) are essential for remote sensing and geospatial analysis, yet integrating multi-source data over large and complex terrains remains challenging. To address these challenges, this study presents the Multi-source Data Joint Processing (MDJP) framework. This framework establishes a systematic way for correcting DEM errors of varying quality and integrating multi-source data, leveraging deep learning-based calibration and spatially adaptive fusion techniques to enhance DEM accuracy and consistency in large and complex regions. For calibration, our proposed DEM calibration model (DemFormer) combines a lightweight Transformer module with a bagging decision-tree network in a stacking framework, specifically designed to enhance the stability and accuracy of DEM elevation error predictions. For fusion, our DEM fusion model (DemFusion) employs spatial autocorrelation analysis and KD-Tree clustering to compute optimal fusion weights, effectively integrating complementary elevation information from multiple DEM sources. We evaluate the MDJP framework using four widely used global DEMs—Shuttle Radar Topography Mission (SRTM), Advanced Spaceborne Thermal Emission and Reflection Radiometer Global Digital Elevation Model (ASTER GDEM), TerraSAR-X add-on for Digital Elevation Measurements (TanDEM-X), Advanced Land Observing Satellite World 3D-30 m (AW3D30)—each at 1-arc second (∼30 m) resolution. The Ice, Cloud, and Land Elevation Satellite-2 (ICESat-2) elevation data serves as the independent reference dataset for assessment. 
Our experiments, conducted in Guangdong Province, China, and the Northern Territory of Australia, demonstrate that the DemFormer model reduces the root mean square error (RMSE) by 18.38 %, 17.28 %, 54.53 %, and 65.24 % for TanDEM-X, AW3D30, SRTM, and ASTER, respectively. The DemFusion model then further refines the results, and the fused DEM achieves better accuracy than the individual input DEMs in both Guangdong and the Northern Territory. These findings underscore the robustness of our approach and establish a new benchmark for DEM calibration and fusion, with significant implications for geospatial analysis and environmental monitoring.</div></div>","PeriodicalId":73423,"journal":{"name":"International journal of applied earth observation and geoinformation : ITC journal","volume":"139 ","pages":"Article 104484"},"PeriodicalIF":7.6,"publicationDate":"2025-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143678274","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
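DemFusion's weights come from spatial autocorrelation analysis and KD-Tree clustering; a far simpler inverse-variance weighting conveys the basic fusion step it refines (the function and its error model are illustrative only):

```python
import numpy as np

def fuse_dems(dems, rmses):
    """Pixelwise weighted fusion of co-registered DEM arrays.

    dems  : list of equally shaped 2-D elevation arrays
    rmses : per-source RMSE estimates; each source is weighted
            inversely to its (assumed) error variance, so more
            accurate DEMs dominate the fused elevation.
    """
    w = 1.0 / np.asarray(rmses, dtype=float) ** 2
    w = w / w.sum()
    stack = np.stack(dems, axis=0)          # (n_sources, rows, cols)
    return np.tensordot(w, stack, axes=1)   # weighted sum over sources
```

Equal RMSEs reduce this to a plain average; a source with a much smaller RMSE effectively determines the output, which is the behaviour any spatially adaptive scheme must reproduce locally.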
{"title":"Multitemporal Sentinel and GEDI data integration for overstory and understory fuel type classification","authors":"Pegah Mohammadpour , Domingos Xavier Viegas , Alcides Pereira , Emilio Chuvieco","doi":"10.1016/j.jag.2025.104455","DOIUrl":"10.1016/j.jag.2025.104455","url":null,"abstract":"<div><div>Wildfires significantly reshape the landscape of the Mediterranean basin, altering forest composition, structure, and diversity. Consequently, detailed fuel mapping is crucial for improving fire risk assessment and enhancing fire behavior modeling, as wildfires typically ignite from surface fuels and may spread vertically to canopy fuels due to canopy fuel continuity. This study generates a fuel type map of the overstory and understory based on the FirEUrisk hierarchical fuel classification system (FHFCS) in three steps, including overstory mapping using multispectral and radar data (Sentinel-1 and Sentinel-2), and topographic variables; shrubland and grassland height estimation using biophysical models based on precipitation and Normalized Difference Vegetation Index (NDVI); and understory mapping using spaceborne LiDAR data from the Global Ecosystem Dynamic Investigation (GEDI) and decision rules. An Overall Accuracy (OA) of 84.53% was achieved for overstory mapping for the composition of Vegetation Indices (VIs), Gray-Level Co-Occurrence Matrix (GLCM) textures, vertical transmit–horizontal receive (VH) of Sentinel-1, and elevation, integrated with biophysical models. Cropland, urban, non-fuel, and various forest classes, particularly evergreen needle-leaved forests, demonstrated outstanding performance, achieving F1 scores ranging from 83% to 100%. Finally, a rule-based model was established using Relative Height (RH) metrics from GEDI L2A data to estimate the height and type of understory defined in FHFCS with an OA of 69.2%. 
The RH metrics and decision rules proved to be an effective and easy-to-interpret approach for estimating understory type and height in the absence of airborne LiDAR data. This three-step methodology provides a simple and efficient approach to large-scale overstory and understory mapping using multispectral, radar, and LiDAR data, which may facilitate both surface and crown fire simulation.</div></div>","PeriodicalId":73423,"journal":{"name":"International journal of applied earth observation and geoinformation : ITC journal","volume":"139 ","pages":"Article 104455"},"PeriodicalIF":7.6,"publicationDate":"2025-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143678295","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
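The FHFCS decision rules are not reproduced in the abstract; the sketch below only shows the general shape of rule-based classification on GEDI relative-height metrics, with entirely made-up thresholds and class names:

```python
def classify_understory(rh50, rh98):
    """Toy decision rules on GEDI relative-height metrics (metres).

    rh50 : height below which 50 % of returned energy falls
    rh98 : height below which 98 % of returned energy falls
    Thresholds and labels are illustrative, not the FHFCS definitions.
    """
    if rh98 < 0.5:
        return "bare/low fuel"
    if rh98 < 4.0:
        # Energy concentrated high in a short canopy suggests shrubs;
        # energy concentrated near the ground suggests grass.
        return "shrub understory" if rh50 > 0.5 * rh98 else "grass understory"
    return "tree overstory present"
```

The appeal of this approach, as the abstract notes, is exactly this transparency: each assignment can be traced to a height threshold rather than to an opaque model.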
Benqing Chen , Yanming Yang , Mingsen Lin , Bin Zou , Shuhan Chen , Erhui Huang , Wenfeng Xu , Yongqiang Tian
{"title":"Satellite retrieval of bottom reflectance from high-spatial-resolution multispectral imagery in shallow coral reef waters","authors":"Benqing Chen , Yanming Yang , Mingsen Lin , Bin Zou , Shuhan Chen , Erhui Huang , Wenfeng Xu , Yongqiang Tian","doi":"10.1016/j.jag.2025.104483","DOIUrl":"10.1016/j.jag.2025.104483","url":null,"abstract":"<div><div>Under anthropogenic disturbances and global warming, coral reef ecosystems are degrading, and there is growing concern about the changes in benthic habitats in shallow coral reef waters. As an essential parameter, bottom reflectance can be used to indicate the health of benthic habitats in coral reefs. However, accurately determining bottom reflectance from satellite data remains challenging. This study presents an equation-based analytical method to estimate the bottom reflectance from high-spatial-resolution multispectral images in shallow coral reef waters by establishing two equations independent of bottom type and water depth. With the required parameters estimated from the sampling pixels of the multi-spectral image, the bottom reflectance data for the blue and green bands were derived by solving the two equations without a prior knowledge of bottom types, water properties, and water depths. To evaluate the method, simulated remote-sensing reflectance datasets from various combinations of the water properties, depths, and bottom types were used to derive the bottom reflectance. The root mean square errors (RMSEs) of the derived bottom reflectance in the blue band were generally <0.02 for most cases, except when the colored dissolved organic matter spectral absorption coefficient at the 440 nm wavelength [a<sub>CDOM</sub> (440)] was 0.1 m<sup>−1</sup> and concentration of chlorophyll (C<sub>CHL</sub>) was ≥0.5 μg/L. 
Comparatively, similarly low RMSEs in the green band were observed only when a<sub>CDOM</sub>(440) < 0.05 m<sup>−1</sup>, the concentration of non-algal particles (C<sub>NAP</sub>) < 0.25 mg/L, and C<sub>CHL</sub> < 0.5 μg/L. Furthermore, the proposed method was applied to two real satellite multispectral images to derive the bottom reflectance. By visually comparing the results with the subsurface reflectance images and validating them against field-measured reflectance data, we demonstrated that the satellite-derived bottom reflectance in the blue and green bands was accurate in both magnitude and shape. Finally, the impacts on the bottom reflectance retrieval of the spatial inhomogeneity of the water properties, the purity of the sampling pixels used to estimate the band ratio of the total diffuse attenuation coefficients, and errors in the radiometric correction were discussed and analyzed.</div></div>","PeriodicalId":73423,"journal":{"name":"International journal of applied earth observation and geoinformation : ITC journal","volume":"139 ","pages":"Article 104483"},"PeriodicalIF":7.6,"publicationDate":"2025-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143644950","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hengming Dai , Jiabo Xu , Xiangyun Hu , Zhen Shu , Wei Ma , Zhifang Zhao
{"title":"Deep projective prediction of building facade footprints from ALS point cloud","authors":"Hengming Dai , Jiabo Xu , Xiangyun Hu , Zhen Shu , Wei Ma , Zhifang Zhao","doi":"10.1016/j.jag.2025.104448","DOIUrl":"10.1016/j.jag.2025.104448","url":null,"abstract":"<div><div>The automated extraction of building facade footprints (BFFs) is a critical task in surveying and remote sensing. Existing methods primarily use mobile laser scanning point cloud as the data source, with limited methods utilizing airborne laser scanning (ALS) data. This is mainly because current methods require explicit building extraction and wall detection, and the facade points in ALS point clouds are naturally sparse and prone to incompleteness, leading to insufficient robustness in rule-based wall extraction. To address this challenge, this paper presents an end-to-end method named deep projective prediction (DPP), which directly predicts BFF masks from ALS point clouds, avoiding explicit extraction of buildings and facades, thereby simplifying the BFF extraction process. Meanwhile, we introduce a back-projective attention (BPAtt) module that guides the decoding process while performing differentiable projections, enhancing the model’s sensitivity to projected feature locations. Additionally, a sparse feature completion (SFC) strategy is proposed to alleviate the impact of point cloud sparsity on footprint mask prediction. To validate the effectiveness of the DPP and facilitate relevant future research, an ALS-based BFF dataset is established, which provides more than 3k BFF annotations. Extensive experiments demonstrate that the proposed DPP achieves promising results on the BFF extraction task. 
The BPAtt module and SFC strategy also promote the BFF extraction performance, particularly at the boundaries of footprints.</div></div>","PeriodicalId":73423,"journal":{"name":"International journal of applied earth observation and geoinformation : ITC journal","volume":"139 ","pages":"Article 104448"},"PeriodicalIF":7.6,"publicationDate":"2025-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143678296","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
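DPP's projection is differentiable and internal to the network; the non-differentiable core idea of projecting ALS points onto a ground-plane grid to obtain a footprint-style mask can be sketched as follows (cell size, grid size, and the occupancy encoding are all arbitrary choices here):

```python
import numpy as np

def project_to_bev(points, cell=0.5, grid=64):
    """Project Nx3 ALS points onto a top-down occupancy grid.

    points : (N, 3) array of x, y, z coordinates in metres
    cell   : ground-plane cell size in metres
    grid   : side length of the square output grid in cells
    Each cell records whether any point falls inside it, giving
    the kind of ground-plane mask a footprint predictor works on.
    """
    ij = np.floor(points[:, :2] / cell).astype(int)   # x -> col, y -> row
    mask = np.zeros((grid, grid), dtype=bool)
    keep = (ij >= 0).all(axis=1) & (ij < grid).all(axis=1)
    ij = ij[keep]
    mask[ij[:, 1], ij[:, 0]] = True
    return mask
```

The sparsity problem the SFC strategy targets is visible directly in such a grid: sparse facade points leave gaps in cells the true footprint should cover.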
Xijie Xu , Jie Wang , Stefan Poslad , Xiaoping Rui , Guangyuan Zhang , Yonglei Fan , Guangxia Yu
{"title":"Assessing urban residents’ exposure to greenspace in daily travel from a dockless bike-sharing lens","authors":"Xijie Xu , Jie Wang , Stefan Poslad , Xiaoping Rui , Guangyuan Zhang , Yonglei Fan , Guangxia Yu","doi":"10.1016/j.jag.2025.104487","DOIUrl":"10.1016/j.jag.2025.104487","url":null,"abstract":"<div><div>Considering the importance of greenspace for the health and life of urban citizens, different levels of greenspace exposure (GE) have received increasing attention. However, the understanding of human travel-related greenspace exposure is still limited, especially the lack of quantitative description of the fine-grained dynamics of greenspace exposure for active travel. Therefore, this study aims to quantify and analyse the spatio-temporal dynamics and equality of greenspace exposure during daily travel using dockless bike-sharing data in Beijing. Firstly, this study analysed the spatio-temporal patterns and community structure of bike-sharing travel using graph networks. Second, the daily travel-related greenspace exposure dynamics were estimated using a population-weighted exposure model. Finally, the spatial heterogeneity and equality of greenspace exposure during daily travel were assessed. The results show that greenspace exposure is shaped by both human mobility and greenspace distribution. Greenspace exposure is higher during the daytime than the early morning, and there are no significant changes of the average greenspace exposure across weekdays and weekends. In addition, there is an imbalance between greenspace coverage and exposure, with high greenspace coverage not implying high greenspace exposure and vice versa. Areas with lower greenspace coverage (less than 30 %) occurred for more than 80 % of the travels. We also found significant inequality of greenspace exposure during daily travel, with an average Gini index above 0.50. 
Driven by human mobility, inequality varied over time, with the highest inequality occurring between midnight and early morning, when the Gini index is higher than 0.65. This study provides a detailed understanding of greenspace exposure in active travel modes and may offer valuable insights for urban greenspace planning and health-oriented mobility strategies.</div></div>","PeriodicalId":73423,"journal":{"name":"International journal of applied earth observation and geoinformation : ITC journal","volume":"139 ","pages":"Article 104487"},"PeriodicalIF":7.6,"publicationDate":"2025-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143644949","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
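The Gini index used here to quantify exposure inequality is the standard one from economics (0 for perfect equality, approaching 1 for maximal inequality) and can be computed directly from a sample of per-person exposure values:

```python
import numpy as np

def gini(x):
    """Gini index of a non-negative distribution (e.g. per-person
    greenspace exposure): 0 = perfect equality, -> 1 = one person
    holds all of the total."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    total = x.sum()
    # G = 2 * sum(i * x_i) / (n * total) - (n + 1) / n, with x sorted
    # ascending and i = 1..n (rank-based form of the Lorenz-curve area).
    return 2.0 * np.sum(np.arange(1, n + 1) * x) / (n * total) - (n + 1) / n
```

A reported average above 0.50 therefore indicates that a small share of travellers receives most of the travel-related greenspace exposure.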
Ruijie Li , Hequn Yang , Xu Zhang , Xin Xu , Liuqing Shao , Kaixu Bai
{"title":"Near real-time land surface temperature reconstruction from FY-4A satellite using spatio-temporal attention network","authors":"Ruijie Li , Hequn Yang , Xu Zhang , Xin Xu , Liuqing Shao , Kaixu Bai","doi":"10.1016/j.jag.2025.104480","DOIUrl":"10.1016/j.jag.2025.104480","url":null,"abstract":"<div><div>Land Surface Temperature (LST) is a critical parameter for climate studies and land surface process models as it indicates ground surface temperature variations across landscapes and timescales. However, satellite-based LST products derived from infrared sensors suffer from substantial missing values due to extensive cloud covers on the Earth’s surface. Traditional methods rely heavily on numerical LST simulations for gap-filling, but the latency significantly limits the timeliness of gapless LST products. In this study, a novel deep learning method called the Spatio-Temporal Attention Network (STAN) was proposed, which was based on a U-Net architecture but enhanced with two unique feature extraction modules for capturing spatially and temporally dependent LST variations. Unlike many previous methods depending highly on numerical simulations, STAN reconstructs LST relying on spatiotemporal context information learned from historical memories, enabling more efficient LST reconstruction in a more timely manner. Ground validation results demonstrate better performance of STAN over other companion methods, with root-mean-square errors of 1.99 K and 2.89 K under clear and cloudy sky respectively, when reconstructing LST data collected from the Chinese Fengyun-4A geostationary satellite in the Yangtze River Delta. Intercomparison studies and error analysis also confirm the superiority of STAN, showing high LST reconstruction accuracy across different land covers and seasons. 
Overall, the proposed STAN method offers a much more efficient solution to facilitate timely LST reconstruction, and the method can also be easily transferred to other parameters with significant spatio-temporal variation context.</div></div>","PeriodicalId":73423,"journal":{"name":"International journal of applied earth observation and geoinformation : ITC journal","volume":"139 ","pages":"Article 104480"},"PeriodicalIF":7.6,"publicationDate":"2025-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143644948","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}