{"title":"Downscaling NPP–VIIRS Nighttime Light Data Using Vegetation Nighttime Condition Index","authors":"Bin Wu;Yu Wang;Hailan Huang","doi":"10.1109/JSTARS.2024.3476191","DOIUrl":"https://doi.org/10.1109/JSTARS.2024.3476191","url":null,"abstract":"Nighttime light (NTL) data, a cornerstone in the scientific community, are widely used across various disciplines. However, the spatial resolution of the commonly used NTL datasets often falls coarse for detailed urban-scale analyses. Current downscaling approaches for NTL data typically rely on extensive auxiliary datasets, limiting their applicability to large geographical regions. In response, we have developed a novel NTL downscaling method that directly uses the vegetation nighttime condition index (VNCI) as input to downscale the national polar-orbiting partnership–visible infrared imaging radiometer suite NTL product. To showcase the potential of this innovative approach, we downscaled the NTL data for mainland China from 2013 to 2021 using only normalized difference vegetation index (NDVI) data as input. Our results demonstrate that the downscaled NTL data not only preserve the accuracy of the original NTL data but also reveal more spatial details and is consistent with the Luojia 1-01 NTL data. Our experiments underscore the significant advantages of the proposed VNCI-based NTL downscaling approach, including its simplicity and minimal data entry requirements, as it only necessitates NDVI as input. This practical and straightforward approach holds great promise for NTL-based urban studies.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"17 ","pages":"18291-18302"},"PeriodicalIF":4.7,"publicationDate":"2024-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10707291","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142517967","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"DCDGAN-STF: A Multiscale Deformable Convolution Distillation GAN for Remote Sensing Image Spatiotemporal Fusion","authors":"Yan Zhang;Rongbo Fan;PeiPei Duan;Jinfang Dong;Zhiyong Lei","doi":"10.1109/JSTARS.2024.3476153","DOIUrl":"https://doi.org/10.1109/JSTARS.2024.3476153","url":null,"abstract":"Remote sensing image spatiotemporal fusion (STF) aims to generate composite images with high-temporal and spatial resolutions by combining remote sensing images captured at different times and with different spatial resolutions (DTDS). Among the existing fusion algorithms, deep learning-based fusion models have demonstrated outstanding performance. These models treat STF as an image super-resolution problem based on multiple reference images. However, compared to traditional image super-resolution tasks, remote sensing image STF involves merging a larger amount of multitemporal data with greater resolution difference. To enhance the robust matching performance of spatiotemporal transformations between multiple sets of remote sensing images captured at DTDS and to generate super-resolution composite images, we propose a feature fusion network called the multiscale deformable convolution distillation generative adversarial network (DCDGAN-STF). Specifically, to address the differences in multitemporal data, we introduce a pyramid cascading deformable encoder to identify disparities in multitemporal images. In addition, to address the differences in spatial resolution, we propose a teacher–student correlation distillation method. This method uses the texture details' disparities between high-resolution multitemporal images to guide the extraction of disparities in blurred low-resolution multitemporal images. We comprehensively compared the proposed DCDGAN-STF with some state-of-the-art algorithms on two landsat and moderate-resolution imaging spectroradiometer datasets. Ablation experiments were also conducted to test the effectiveness of different submodules within DCDGAN-STF. The experimental results and ablation analysis demonstrate that our algorithm achieves superior performance compared to other algorithms.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"17 ","pages":"19436-19450"},"PeriodicalIF":4.7,"publicationDate":"2024-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10707182","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142587651","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"FMANet: Super-Resolution Inverted Bottleneck-Fused Self-Attention Architecture for Remote Sensing Satellite Image Recognition","authors":"Fatima Rauf;Muhammad Attique Khan;Muhammad Kashif Bhatti;Ameer Hamza;Aliya Aleryani;M. Turki-Hadj Alouane;Dina Abdulaziz AlHammadi;Yunyoung Nam","doi":"10.1109/JSTARS.2024.3475580","DOIUrl":"https://doi.org/10.1109/JSTARS.2024.3475580","url":null,"abstract":"The remote sensing (RS) image classification task has been studied widely in the RS and geoscience community. The important applications of RS are landslides, earthquakes, land-use, and land cover classification. Landslides and earthquakes are some of the most dangerous natural disasters that frequently occur. High-resolution RS images can be useful for accurately classifying landslide and earthquake regions. The deep learning technique has improved performance compared with the traditional methods; however, these techniques are reliable on large-scale datasets. In this work, we proposed a novel architecture based on super-resolution and fused bottleneck self-attention called (FMANet) convolutional neural network. A new custom deep super-resolution network is designed as the first step to improve the quality of RS images. In the next step, a new fused bottleneck self-attention architecture is proposed that learns the features in two distinct networks: residual and inverted. Both models are trained on the resultant super-resolution images, whereas the hyperparameters are initialized using Bayesian optimization. In the testing phase, features are extracted from the self-attention layer and passed to the shallow narrow neural network for classification. The experimental process of the proposed architecture is conducted on three datasets, MLRSNet, Bijie Landslide, and Turkey Earthquake, and improved the accuracy of 91.0%, 92.8%, and 99.4%, respectively. Results are also compared with state-of-the-art techniques and show significant improvement and the model is also evaluated using the lime for the interpretation of the outcomes proposed model.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"17 ","pages":"18622-18634"},"PeriodicalIF":4.7,"publicationDate":"2024-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10706896","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142517961","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Hyperspectral Image Super-Resolution: Task-Based Evaluation","authors":"Michal Kawulok;Pawel Kowaleczko;Maciej Ziaja;Jakub Nalepa;Daniel Kostrzewa;Daniele Latini;Davide De Santis;Giorgia Salvucci;Ilaria Petracca;Valeria La Pegna;Zoltan Bartalis;Fabio Del Frate","doi":"10.1109/JSTARS.2024.3475644","DOIUrl":"https://doi.org/10.1109/JSTARS.2024.3475644","url":null,"abstract":"The need for enhancing image spatial resolution has motivated the researchers to propose numerous super-resolution (SR) techniques, including those developed specifically for hyperspectral data. Despite significant advancements in this field attributed to deep learning, little attention has been given to evaluating the practical value of super-resolved images in specific applications. Most methods are validated in application-independent scenarios, often using simulated low-resolution images, resulting in overly optimistic conclusions. In this article, we propose task-based evaluation strategies for hyperspectral image SR and we present results obtained with various approaches that include pansharpening, multispectral–hyperspectral data fusion, and single-image SR. We demonstrate that the proposed framework allows us to highlight both benefits and limitations of each method and can, therefore, guide the development of SR techniques suitable for real-world applications.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"17 ","pages":"18949-18966"},"PeriodicalIF":4.7,"publicationDate":"2024-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10706841","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142524089","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Investigating Abiotic Sources of Spectral Variability From Multitemporal Hyperspectral Airborne Acquisitions Over the French Guyana Canopy","authors":"Colin Prieur;Antony Laybros;Giovanni Frati;Daniel Schläpfer;Jocelyn Chanussot;Grégoire Vincent","doi":"10.1109/JSTARS.2024.3475050","DOIUrl":"https://doi.org/10.1109/JSTARS.2024.3475050","url":null,"abstract":"Classifiers trained on airborne hyperspectral imagery are proficient in identifying tree species in hyperdiverse tropical rainforests. However, spectral fluctuations, influenced by intrinsic and environmental factors, such as the heterogeneity of individual crown properties and atmospheric conditions, pose challenges for large-scale mapping. This study proposes an approach to assess the instability of airborne imaging spectroscopy reflectance in response to environmental variability. Through repeated overflights of two tropical forest sites in French Guiana, we explore factors that affect the spectral similarity between dates and acquisitions. By decomposing acquisitions into subsets and analyzing different sources of variability, we analyze the stability of reflectance and various vegetation indices with respect to specific sources of variability. Factors such as the variability of the viewing and sun angles or the variability of the atmospheric state shed light on the impact of sources of spectral instability, informing processing strategies. Our experiments conclude that the environmental factors that affect the canopy reflectance the most vary according to the considered spectral domain. In the short wave infrared (SWIR) domain, solar angle variation is the main source of variability, followed by atmospheric and viewing angles. In the visible and near infrared (VNIR) domain, atmospheric variability dominates, followed by solar angle and viewing angle variabilities. Despite efforts to address these variabilities, significant spectral instability persists, highlighting the need for more robust representations and improved correction methods for reliable species-specific signatures.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"17 ","pages":"18751-18768"},"PeriodicalIF":4.7,"publicationDate":"2024-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10706241","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142518132","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Road Topology Extraction Based on Point of Interest Guidance and Graph Convolutional Neural Network From High-Resolution Remote Sensing Images","authors":"Lipeng Gao;Jiangtao Tian;Yiqing Zhou;Wenjing Cai;Xingke Hao","doi":"10.1109/JSTARS.2024.3474849","DOIUrl":"https://doi.org/10.1109/JSTARS.2024.3474849","url":null,"abstract":"Road topology networks play a crucial role in expressing road information, as they serve as the fundamental representation of road systems. Unfortunately, in high-resolution remote sensing images, roads are often obscured by buildings, tree trunks, and shadows, resulting in poor connectivity and extraction of topology. To address this challenge, this paper proposes a multilevel extraction method for road topology based on a graph structure. The main contributions of this work are as follows. First, a point of interest (POI) extraction model based on the improved D-LinkNet network is constructed. This model captures relevant information about POIs, such as road intersections and large curvature points. Second, the extracted POIs and the feature maps from the POI model are combined to form triplet information. This information is then fed into a binary classifier, which identifies reliable edges with high confidence levels. These edges contribute to the formation of a subgraph representing the topological structure. Third, a graph convolutional neural network model is employed to predict and supplement the aforementioned subgraphs, resulting in the final road topology. This approach effectively addresses the problem of road interruption caused by occlusion from other ground objects in deep learning-based road topology extraction. The proposed method is supported by both data and experimental results, demonstrating its effectiveness in road topology extraction.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"17 ","pages":"18852-18869"},"PeriodicalIF":4.7,"publicationDate":"2024-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10705997","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142524093","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Uncertainties of Urban Heat Island Estimation With Diverse Reference Delineation Methods Based on Urban–Rural Division and Local Climate Zone","authors":"Xuecheng Fu;Bao-Jie He;Huimin Liu","doi":"10.1109/JSTARS.2024.3472475","DOIUrl":"https://doi.org/10.1109/JSTARS.2024.3472475","url":null,"abstract":"The precise quantification of surface urban heat island intensity (SUHII) is fundamental for understanding the process, causes, and solutions to thermal environmental change. However, the existing methods for SUHII estimation are not uniform in nonurban reference selection, with inconsistent consideration of relevant influencing factors. The associated uncertainty can be further exacerbated under seasonal fluctuations of atmospheric and surface environments. This study concentrated on macrocity and intraurban local scales to examine the variations in SUHII assessment and its seasonal changes using different reference delineation methods. City-scale analysis included eight references based on the fixed areas or dynamic buffers, while local-scale analysis took six natural cover types as references under the local climate zone (LCZ) framework, respectively. Results revealed significant differences in SUHII using diverse references, and the inconsistency varied across seasons. On the city scale, the most pronounced intermethod difference occurred in winter, while stronger consistency of spatial patterns was observed in summer. Relatively, higher seasonal SUHIIs and stronger spatial variabilities were generated by methods using fixed areas. On the local scale, a strong consistency of spatial patterns was also observed in summer, while the most pronounced difference occurred in spring. Maximum local SUHIIs in all seasons were obtained using LCZ G as a reference. The study further summarized a list of criteria of reference selection for both scales. Overall, this study provides empirical evidence supporting the appropriate reference delineation for reliable SUHII estimate, especially for seasonal analysis. It can facilitate an improved understanding of urban thermal variations and benefit effective urban heat mitigation.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"17 ","pages":"18818-18833"},"PeriodicalIF":4.7,"publicationDate":"2024-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10705071","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142524092","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SEIS-Net: A 3-D SAR Enhanced Imaging Network Based on Swin Transformer","authors":"Yifei Hu;Mou Wang;Shunjun Wei;Jiahui Li;Rong Shen","doi":"10.1109/JSTARS.2024.3472845","DOIUrl":"https://doi.org/10.1109/JSTARS.2024.3472845","url":null,"abstract":"Conventional 3-D synthetic aperture radar (SAR) sparse imaging algorithms suffer from degradation in weakly sparse scenes due to their reliance on inherent sparsity. In addition, they are constrained by high computational complexity and parametric tuning. To address these problems, we propose a novel 3-D SAR enhanced imaging network based on swin transformer dubbed SEIS-Net. The proposed algorithm consists of two cascaded stages. The first one focuses on estimating the missing measurement elements by constructing a Unet based on the swin transformer. The second stage aims to recover a high-quality image from the estimated echo matrix. The proposed imaging network is theoretically derived from fast iterative shrinkage-thresholding algorithm optimization framework, where the network weights can be learned from an end-to-end training procedure. Finally, simulations and real-measured experiments are carried out. Both visual and quantitative results demonstrate the superiority of the proposed SEIS-Net over the current state-of-the-art algorithms in reconstructing 3-D images from sparsely sampled echoes.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"17 ","pages":"18967-18986"},"PeriodicalIF":4.7,"publicationDate":"2024-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10705080","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142524157","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Temporal and Spatial Analysis of Coastal Landscape Patterns Using the GEE Cloud Platform and Landsat Time Series","authors":"Chao Chen;Jintao Liang;Taohua Ren;Yi Wang;Zhisong Liu","doi":"10.1109/JSTARS.2024.3473937","DOIUrl":"https://doi.org/10.1109/JSTARS.2024.3473937","url":null,"abstract":"Owing to the rapid urbanization combined with global climate change, dramatic land-use change in coastal watersheds is occurred, which, in turn, cause the evolution of landscape patterns and threaten the valuable but fragile ecosystem. The coastal zone is characterized by severe cloud cover, frequent changes in land type, and fragmented landscape, so it is challenging to carry out the accurate landscape patterns analysis. To address this problem, this study employed the Google Earth engine cloud platform, Landsat time series, and landscape metrics in the Fragstats model to develop a comprehensive framework that integrates landscape pattern metrics and spatial analysis methods, considering both type level and landscape level. The Hangzhou Bay region was selected for conducting land-use classification and landscape patterns analysis. The results indicate that, during nearly four decades, with the continuous expansion of the urban, the urbanization process has accelerated, and the construction land has expanded by 6.93 times. By analyzing the evolution of landscape patterns, Hangzhou Bay heightened landscape fragmentation and patch shapes became more irregular caused by a trend toward intensified urbanization. The Shannon's diversity index continuously increased from 1.14 to 1.51, while the contagion index consistently decreased from 59.83% to 42.21%, suggesting an increase in land-use diversity, reduced aggregation, and extension tendencies between land patches, along with a decrease in the proportion of highly connected patches within the landscape. This study is anticipated to provide robust evidence for the rational planning of future development directions and the deployment of landscape ecological spatial services.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"17 ","pages":"18379-18398"},"PeriodicalIF":4.7,"publicationDate":"2024-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10704979","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142518133","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MFSA-Net: Semantic Segmentation With Camera-LiDAR Cross-Attention Fusion Based on Fast Neighbor Feature Aggregation","authors":"Yijian Duan;Liwen Meng;Yanmei Meng;Jihong Zhu;Jiacheng Zhang;Jinlai Zhang;Xin Liu","doi":"10.1109/JSTARS.2024.3472751","DOIUrl":"https://doi.org/10.1109/JSTARS.2024.3472751","url":null,"abstract":"Given the inherent limitations of camera-only and LiDAR-only methods in performing semantic segmentation tasks in large-scale complex environments, multimodal information fusion for semantic segmentation has become a focal point of contemporary research. However, significant modal disparities often result in existing fusion-based methods struggling with low segmentation accuracy and limited efficiency in large-scale complex environments. To address these challenges,we propose a semantic segmentation network with camera–LiDAR cross-attention fusion based on fast neighbor feature aggregation (MFSA-Net), which is better suited for large-scale semantic segmentation in complex environments. Initially, we propose a dual-distance attention feature aggregation module based on rapid 3-D nearest neighbor search. This module employs a sliding window method in point cloud perspective projections for swift proximity search, and efficiently combines feature distance and Euclidean distance information to learn more distinctive local features. This improves segmentation accuracy while ensuring computational efficiency. Furthermore, we propose a cross-attention fusion two-stream network based on residual, which allows for more effective integration of camera information into the LiDAR data stream, enhancing both accuracy and robustness. Extensive experimental results on the large-scale point cloud datasets SemanticKITTI and Nuscenes demonstrate that our proposed algorithm outperforms similar algorithms in semantic segmentation performance in large-scale complex environments.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"17 ","pages":"19627-19639"},"PeriodicalIF":4.7,"publicationDate":"2024-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10704067","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142595926","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}