{"title":"SSL-MBC: Self-Supervised Learning With Multibranch Consistency for Few-Shot PolSAR Image Classification","authors":"Wenmei Li;Hao Xia;Bin Xi;Yu Wang;Jing Lu;Yuhong He","doi":"10.1109/JSTARS.2025.3528529","DOIUrl":"https://doi.org/10.1109/JSTARS.2025.3528529","url":null,"abstract":"Deep learning methods have recently made substantial advances in polarimetric synthetic aperture radar (PolSAR) image classification. However, supervised training relying on massive labeled samples is one of its major limitations, especially for PolSAR images, which are hard to annotate manually. Self-supervised learning (SSL) is an effective remedy for insufficient labeled samples, mining supervisory information from the data itself. Nevertheless, fully utilizing SSL in PolSAR classification tasks remains a great challenge due to the data complexity. To address these issues, we propose an SSL model with multibranch consistency (SSL-MBC) for few-shot PolSAR image classification. Specifically, the data augmentation technique used in the pretext task combines various spatial transformations with channel transformations achieved through scattering feature extraction. In addition, the distinct scattering features of PolSAR data are treated as its unique multimodal representations. We observe that the different modal representations of the same instance exhibit similarity in the encoding space, with the shared hidden features becoming more prominent as more modalities are included. Therefore, a multibranch contrastive SSL framework without negative samples is employed to efficiently achieve representation learning. The resulting abstract features are then fine-tuned to ensure generalization in downstream tasks, thereby enabling few-shot classification. Experimental results on selected PolSAR datasets indicate that our method outperforms existing methodologies. An exhaustive ablation study shows that model performance degrades when either the data augmentation or any branch is masked, and that the classification result does not depend on the number of labels.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"18 ","pages":"4696-4710"},"PeriodicalIF":4.7,"publicationDate":"2025-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10839016","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143361107","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Earth Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Hyperspectral Image Classification Using Spectral-Spatial Dual Random Fields With Gaussian and Markov Processes","authors":"Yaqiu Zhang;Lizhi Liu;Xinnian Yang","doi":"10.1109/JSTARS.2025.3528115","DOIUrl":"https://doi.org/10.1109/JSTARS.2025.3528115","url":null,"abstract":"This article presents a novel hyperspectral image (HSI) classification approach that integrates the sparse inducing variational Gaussian process (SIVGP) with a spatially adaptive Markov random field (SAMRF), termed G-MDRF. Variational inference is employed to obtain a sparse approximation of the posterior distribution, modeling the spectral field within the latent function space. Subsequently, SAMRF is utilized to model the spatial prior within the function space, while the alternating direction method of multipliers (ADMM) is employed to enhance computational efficiency. Experimental results on three datasets of varying complexity show that the proposed algorithm improves computational efficiency by approximately 152 times and accuracy by about 7%-26% compared to currently popular Gaussian process methods. Compared to classical random field methods, G-MDRF rapidly reaches a convergent solution with only one ten-thousandth to one hundred-thousandth of the iterations, improving accuracy by about 5%-18%. In particular, as the number of classes in the dataset increases and the scene becomes more complex, the proposed method demonstrates a greater advantage in both computational efficiency and classification accuracy over existing methods.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"18 ","pages":"4199-4212"},"PeriodicalIF":4.7,"publicationDate":"2025-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10836880","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143105460","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Earth Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Land Subsidence in the Yangtze River Delta, China Explored Using InSAR Technique From 2019 to 2021","authors":"Hongbo Jiang;Guangcai Feng;Yuexin Wang;Zhiqiang Xiong;Hesheng Chen;Ning Li;Zeng Lin","doi":"10.1109/JSTARS.2025.3527748","DOIUrl":"https://doi.org/10.1109/JSTARS.2025.3527748","url":null,"abstract":"The combined effects of global warming and human activities have intensified land subsidence (LS), limiting sustainable economic development in delta regions. Despite the potential of interferometric synthetic aperture radar (InSAR) for monitoring LS, its application across vast delta regions may be hindered by complex data processing, high computational demands, and the need for standardized results. To overcome these challenges, we adopted the multitemporal InSAR technique, integrating a frame-data parallel processing strategy and an overall adjustment correction method, to obtain the temporal deformation sequences of the entire Yangtze River Delta (YRD) region in China from January 2019 to December 2021. We calculated the annual average deformation rate and identified deformation areas, with 73.5% concentrated along the Yangtze River, along the coastline, and within the northern Anhui mining area. A significant correlation was observed between LS and anthropogenic activities, such as economic development and land reclamation. Further analysis reveals that increases in the GDP growth rate may contribute to LS. Approximately 38% of the reclaimed area in the YRD is at risk of LS, and land reclamation activities present a dichotomy, with Hangzhou Bay as the dividing line. This study provides a new perspective and scientific basis for understanding and analyzing LS in deltaic environments, contributing to sustainable development and advancing wide-area InSAR deformation monitoring.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"18 ","pages":"4174-4187"},"PeriodicalIF":4.7,"publicationDate":"2025-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10836200","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143105978","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Earth Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ViT-ISRGAN: A High-Quality Super-Resolution Reconstruction Method for Multispectral Remote Sensing Images","authors":"Yifeng Yang;Hengqian Zhao;Xiadan Huangfu;Zihan Li;Pan Wang","doi":"10.1109/JSTARS.2025.3527226","DOIUrl":"https://doi.org/10.1109/JSTARS.2025.3527226","url":null,"abstract":"The level of detail captured in remote sensing imagery depends on the scale of the observed area, with high-resolution images providing more detailed feature information. Monitoring specialized industries and extracting regional information currently necessitate higher-resolution remote sensing images. Super-resolution reconstruction of multispectral remote sensing images not only enhances their spatial resolution but also preserves and improves the spectral information of the multispectral data, thereby providing richer ground-object information and more accurate environmental monitoring data. To improve the effectiveness of feature extraction in the generator network while maintaining model efficiency, this article proposes the vision transformer improved super-resolution generative adversarial network (ViT-ISRGAN) model. The model improves upon the original SRGAN super-resolution reconstruction method by incorporating lightweight network modules, channel attention modules, spatial-spectral residual attention, and the vision transformer structure. The ViT-ISRGAN model focuses on reconstructing four types of typical ground objects in Sentinel-2 images: urban, water, farmland, and forest. Results indicate that the ViT-ISRGAN model excels in capturing texture details and restoring color, effectively extracting spectral and texture information from multispectral remote sensing images across various scenes. Compared to other super-resolution (SR) models, this approach demonstrates superior effectiveness and performance in SR tasks for multispectral remote sensing images.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"18 ","pages":"3973-3988"},"PeriodicalIF":4.7,"publicationDate":"2025-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10836746","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143106074","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Earth Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"HFIFNet: Hierarchical Feature Interaction Network With Multiscale Fusion for Change Detection","authors":"Mingzhi Han;Tao Xu;Qingjie Liu;Xiaohui Yang;Jing Wang;Jiaqi Kong","doi":"10.1109/JSTARS.2025.3528053","DOIUrl":"https://doi.org/10.1109/JSTARS.2025.3528053","url":null,"abstract":"Change detection (CD) from remote sensing images has been widely used in land management and urban planning. Benefiting from deep learning, numerous methods have achieved significant results in the CD of clearly changed targets. However, significant challenges remain in the CD of weak targets, such as targets that are small, have blurred boundaries, or are poorly distinguishable from the background. Feature extraction from these targets can lose critical spatial features, potentially degrading CD performance. Motivated by the benefit of multiscale features for CD of weak targets, a hierarchical feature interaction network with multiscale fusion (HFIFNet) is proposed. First, a hierarchical feature interactive fusion module is proposed, which achieves optimized multichannel feature interaction and enhances the distinguishability between weak targets and the background. The module also achieves cross-scale feature fusion, which compensates for the loss of spatial features of changed targets at a single scale during feature extraction. Second, the VMamba block is utilized to obtain global features, and a spatial feature localization module is proposed to enhance the saliency of spatial features such as edges and textures, further improving the distinguishability between weak targets and irrelevant spatial features. Our method has been experimentally evaluated on three public datasets and outperformed state-of-the-art approaches by 1.06%, 1.41%, and 2.63% in F1 score on the LEVIR-CD, S2Looking, and NALand datasets, respectively. These results affirm the effectiveness of our method for weak targets in CD tasks.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"18 ","pages":"4318-4330"},"PeriodicalIF":4.7,"publicationDate":"2025-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10836868","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143106077","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Earth Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Classification of Buried Objects From Ground Penetrating Radar Images by Using Second-Order Deep Learning Models","authors":"Douba Jafuno;Ammar Mian;Guillaume Ginolhac;Nickolas Stelzenmuller","doi":"10.1109/JSTARS.2024.3524424","DOIUrl":"https://doi.org/10.1109/JSTARS.2024.3524424","url":null,"abstract":"In this article, a new classification model based on covariance matrices is built to classify buried objects. The inputs of the proposed models are the hyperbola thumbnails obtained with a classical ground penetrating radar (GPR) system. These thumbnails are fed into the first layers of a classical CNN, which produces a covariance matrix from the outputs of the convolutional filters. Next, the covariance matrix is given to a network composed of layers specifically designed to classify symmetric positive definite matrices. We show on a large database that our approach outperforms shallow networks designed for GPR data and conventional CNNs typically used in computer vision applications, particularly when the amount of training data decreases and in the presence of mislabeled data. We also illustrate the value of our models when the training and test sets are obtained from different weather modes or acquisition conditions.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"18 ","pages":"3185-3197"},"PeriodicalIF":4.7,"publicationDate":"2025-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10836936","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142993379","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Earth Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Tensor-Based Go Decomposition Method for Hyperspectral Anomaly Detection","authors":"Meiping Song;Xiao Zhang;Lan Li;Hongju Cao;Haimo Bao","doi":"10.1109/JSTARS.2025.3525743","DOIUrl":"https://doi.org/10.1109/JSTARS.2025.3525743","url":null,"abstract":"Hyperspectral anomaly detection (HAD) aims at effectively separating the anomaly target from the background. The low-rank and sparse matrix decomposition (LRaSMD) technique has shown great potential in HAD tasks. However, some LRaSMD models need to convert the hyperspectral data into a two-dimensional matrix, which cannot well preserve the characteristics of the hyperspectral image (HSI) in each dimension and thus degrades its representation capacity. In this context, this article proposes a tensor-based Go decomposition (GODEC) model, called TGODEC. The TGODEC model follows the idea of GODEC, representing the HSI data as a combination of a background tensor, an anomaly tensor, and a noise tensor. In detail, the background tensor is solved by tensor singular value hard-thresholding decomposition, while the anomaly tensor is solved by a mapping matrix using the corresponding sparse cardinality. The obtained background and anomaly tensors can also be exploited for HAD; thus, a TGODEC-based anomaly detector, called TGODEC-AD, is established. Specifically, the TGODEC-AD method combines the typical RX-AD and R-AD with the above decomposition results of the TGODEC model and constructs different modal operator detectors. Experimental results on multiple real hyperspectral datasets verify the effectiveness of the TGODEC and TGODEC-AD methods, indicating that the proposed TGODEC model can effectively characterize the spatial structural features of HSI. As a result, pure decomposed components can be obtained, contributing to better anomaly-target detection and background suppression in HAD tasks.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"18 ","pages":"4584-4600"},"PeriodicalIF":4.7,"publicationDate":"2025-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10836889","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143106039","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Earth Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Semantic Tokenization-Based Mamba for Hyperspectral Image Classification","authors":"Ri Ming;Na Chen;Jiangtao Peng;Weiwei Sun;Zhijing Ye","doi":"10.1109/JSTARS.2025.3528122","DOIUrl":"https://doi.org/10.1109/JSTARS.2025.3528122","url":null,"abstract":"Recently, transformer-based models have shown superior performance in hyperspectral image classification (HSIC) due to their excellent ability to model long-term dependencies in sequence data. An important component of the transformer is the tokenizer, which transforms features into semantic token sequences (STS). Nonetheless, the transformer's semantic tokenization strategy hardly captures locally important high-level semantics because of its global receptive field. Recently, Mamba-based methods have shown even stronger spatial context modeling ability than transformers for HSIC. However, these Mamba-based methods mainly focus on the spectral and spatial dimensions: they tend to extract semantic information from very long feature sequences or represent it in a few typical tokens, which may ignore some important semantics of the HSIs. To represent the semantic information of HSIs more holistically in Mamba, this article proposes a semantic tokenization-based Mamba (STMamba) model. In STMamba, a spectral-spatial feature extraction module is used to extract spectral-spatial joint features. Then, a semantic token sequence generation module is designed to transform the features into STS. Subsequently, the STS are fed into the semantic token state space model to capture relationships between different semantic tokens. Finally, the fused semantic token is passed into a classifier for classification. Experimental results on three HSI datasets demonstrate that the proposed STMamba outperforms existing state-of-the-art deep learning and transformer-based methods.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"18 ","pages":"4227-4241"},"PeriodicalIF":4.7,"publicationDate":"2025-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10838328","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143106099","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Earth Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Explainability of Subfield Level Crop Yield Prediction Using Remote Sensing","authors":"Hiba Najjar;Miro Miranda;Marlon Nuske;Ribana Roscher;Andreas Dengel","doi":"10.1109/JSTARS.2025.3528068","DOIUrl":"https://doi.org/10.1109/JSTARS.2025.3528068","url":null,"abstract":"Crop yield forecasting plays a significant role in addressing growing concerns about food security and guiding decision-making for policymakers and farmers. When deep learning is employed, understanding the learning and decision-making processes of the models, as well as their interaction with the input data, is crucial for establishing trust in the models and gaining insight into their reliability. In this study, we focus on the task of crop yield prediction, specifically for soybean, wheat, and rapeseed crops in Argentina, Uruguay, and Germany. Our goal is to develop and explain predictive models for these crops, using a large dataset of satellite images, additional data modalities, and crop yield maps. We employ a long short-term memory network and investigate the impact of using different temporal samplings of the satellite data and the benefit of adding more relevant modalities. For model explainability, we utilize feature attribution methods to quantify input feature contributions, identify critical growth stages, analyze yield variability at the field level, and explain less accurate predictions. The modeling results show an improvement when adding more modalities or using all available instances of satellite data. The explainability results reveal distinct feature importance patterns for each crop and region. We further find that the growth stages most influential on the prediction depend on the temporal sampling of the input data. We demonstrate how these critical growth stages, which hold significant agronomic value, closely align with the existing literature in agronomy and crop development biology.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"18 ","pages":"4141-4161"},"PeriodicalIF":4.7,"publicationDate":"2025-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10836770","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143106061","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Earth Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"TPDTNet: Two-Phase Distillation Training for Visible-to-Infrared Unsupervised Domain Adaptive Object Detection","authors":"Siyu Wang;Xiaogang Yang;Ruitao Lu;Shuang Su;Bin Tang;Tao Zhang;Zhengjie Zhu","doi":"10.1109/JSTARS.2025.3528057","DOIUrl":"https://doi.org/10.1109/JSTARS.2025.3528057","url":null,"abstract":"In remote sensing target detection, great challenges arise when migrating detection models from the visible domain to the infrared domain. Cross-domain migration suffers from problems such as a lack of data annotations in the infrared domain and interdomain feature differences. To improve the detection accuracy attained for infrared images, we propose a novel two-phase distillation training network (TPDTNet). Specifically, in the first phase, we incorporate a contrastive learning framework to maximize the mutual information between the source and target domains. In addition, we construct a generative model that learns only a unidirectional modality conversion mapping, thereby capturing the associations between their visual contents. The source-domain images are converted to images with the style of the target domain, achieving image-level domain alignment. The generated images are combined with the source-domain images to form an enhanced domain for cross-modal training. The enhanced domain data are fed into the teacher network to initialize the weights and produce pseudolabels. Next, to address small remote sensing target detection tasks, we construct a multidimensional progressive feature fusion detection framework, which initially fuses two adjacent low-level feature maps and then progressively incorporates high-level features to improve the fusion of nonadjacent layer features. Subsequently, a spatial-dimension convolution is integrated into the backbone network; this convolutional operation is embedded after standard convolution to mitigate the loss of detailed features. Finally, a distillation training strategy utilizes pseudodetection labels to compute target information. By minimizing the Kullback-Leibler divergence between the probability maps of the teacher and student networks, the channel activations are transformed into probability distributions, thereby achieving knowledge distillation. The training weights are transferred from the teacher network to the student network to maximize the detection accuracy. Extensive experiments are conducted on three optical-to-infrared datasets, and the results show that our TPDTNet method achieves state-of-the-art results relative to the baseline models.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"18 ","pages":"4255-4272"},"PeriodicalIF":4.7,"publicationDate":"2025-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10836742","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143106107","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Earth Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}