{"title":"Multiscale Spatial-Spectral CNN-Transformer Network for Hyperspectral Image Super-Resolution","authors":"Jiayang Zhang;Hongjia Qu;Junhao Jia;Yaowei Li;Bo Jiang;Xiaoxuan Chen;Jinye Peng","doi":"10.1109/JSTARS.2025.3565840","DOIUrl":"https://doi.org/10.1109/JSTARS.2025.3565840","url":null,"abstract":"Remarkable strides have been made in super-resolution methods based on deep learning for hyperspectral images (HSIs), which are capable of enhancing the spatial resolution. However, these methods predominantly focus on capturing local features using convolutional neural networks (CNNs), neglecting the comprehensive utilization of global spatial-spectral information. To address this limitation, we innovatively propose a multiscale spatial-spectral CNN-transformer network for hyperspectral image super resolution, namely, MSHSR. MSHSR not only applies the local spatial-spectral characteristics but also innovatively facilitates the collaborative exploration and application of spatial details and spectral data globally. Specifically, we first design a multiscale spatial-spectral fusion module, which integrates dilated-convolution parallel branches and a hybrid spectral attention mechanism to address the strong local correlations in HSIs, effectively capturing and fusing multiscale local spatial-spectral information. Furthermore, in order to fully exploit the global contextual consistency in HSIs, we introduce a sparse spectral transformer module. This module processes the previously obtained local spatial-spectral features, thoroughly exploring the elaborate global interrelationship and long-range dependencies among different spectral bands through a coarse-to-fine strategy. Extensive experimental results on three hyperspectral datasets demonstrate the superior performance of our method, outperforming comparison methods both in quantitative metrics and visual performance.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"18 ","pages":"12116-12132"},"PeriodicalIF":4.7,"publicationDate":"2025-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10980410","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144131614","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Magnifier: A Multigrained Neural Network-Based Architecture for Burned Area Delineation","authors":"Daniele Rege Cambrin;Luca Colomba;Paolo Garza","doi":"10.1109/JSTARS.2025.3565819","DOIUrl":"https://doi.org/10.1109/JSTARS.2025.3565819","url":null,"abstract":"In crisis management and remote sensing, image segmentation plays a crucial role, enabling tasks like disaster response and emergency planning by analyzing visual data. Neural networks are able to analyze satellite acquisitions and determine which areas were affected by a catastrophic event. The problem in their development in this context is the data scarcity and the lack of extensive benchmark datasets, limiting the capabilities of training large neural network models. In this article, we propose a novel methodology, namely Magnifier, to improve segmentation performance with limited data availability. The Magnifier methodology is applicable to any existing encoder–decoder architecture, as it extends a model by merging information at different contextual levels through a dual-encoder approach: a local and global encoder. Magnifier analyzes the input data twice using the dual-encoder approach. In particular, the local and global encoders extract information from the same input at different granularities. This allows Magnifier to extract more information than the other approaches given the same set of input images. Magnifier improves the quality of the results of +2.65% on average intersection over union while leading to a restrained increase in terms of the number of trainable parameters compared to the original model. We evaluated our proposed approach with state-of-the-art burned area segmentation models, demonstrating, on average, comparable or better performances in less than half of the giga floating point operations per second (GFLOPs).","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"18 ","pages":"12263-12277"},"PeriodicalIF":4.7,"publicationDate":"2025-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10980409","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144124041","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Regional Tropospheric Delay Prediction Model Based on LSTM-Enhanced Encoder Network","authors":"Yuanfang Peng;Chenglin Cai;Zexian Li;Kaihui Lv;Xue Zhang;Yihao Cai","doi":"10.1109/JSTARS.2025.3565569","DOIUrl":"https://doi.org/10.1109/JSTARS.2025.3565569","url":null,"abstract":"Precise modeling of zenith tropospheric delay (ZTD) is essential for real-time high-precision positioning in global navigation satellite systems. Due to the stochastic variability of atmospheric water vapor across different regions, tropospheric delay exhibits strong regional characteristics. Empirical tropospheric delay models built on the reanalysis of meteorological data often show significant accuracy discrepancies across regions, failing to meet the needs for precise regional ZTD forecasting. Deep learning methods excel in learning complex patterns and dependencies from time-series data. Our study utilized ZTD data from 178 Nevada Geodetic Laboratory stations in Australia during 2023 as ground truth values and modeled them using a long short-term memory (LSTM)-enhanced encoder network. This model incorporated both spatial and temporal information as well as correlations with GPT3 ZTD. Predictions were compared with those from GPT3 ZTD, ERA5 ZTD, artificial neural network (ANN) ZTD, general regression neural network (GRNN) ZTD, and LSTM ZTD. The results showed that the LSTM-enhanced encoder ZTD achieved a root-mean-square error (RMSE) of 14.43 mm and a mean bias close to zero, with mean absolute error and mean correlation coefficient of 12.42 mm and 0.95, respectively. The proposed model outperforms the GPT3, ERA5, ANN, GRNN, and LSTM models, with respective RMSE improvements of approximately 62.3%, 12.3%, 61%, 59.9%, and 60% . In addition, we compared the spatial and temporal properties of the proposed model with those of the GPT3 and ERA5 models. The discussion section further analyzed the prediction performance of different neural network approaches under different prediction periods.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"18 ","pages":"13348-13358"},"PeriodicalIF":4.7,"publicationDate":"2025-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10980316","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144206179","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Hyperspectral Image Few-Shot Classification Based on Spatial–Spectral Information Complementation and Multilatent Domain Generalization","authors":"Qianhao Yu;Yong Wang","doi":"10.1109/JSTARS.2025.3565894","DOIUrl":"https://doi.org/10.1109/JSTARS.2025.3565894","url":null,"abstract":"Hyperspectral image (HSI) few-shot classification aims to classify HSI samples of novel categories with limited training HSI samples of base categories. However, current methods suffer from two issues: first, ignoring the complementary relationship between spatial and spectral information; and second, performance degradation on base categories due to excessive focus on novel categories. This article proposes a spatial–spectral information complementation and multilatent domain generalization-based framework (SIM). Specifically, given samples of base (novel) categories, a spatial–spectral feature extraction network is designed to extract their spatial–spectral features, which includes two steps. First, multiple spatial–spectral information complementation modules (SSICs) are stacked to extract the complementary features with different scales. Note that each SSIC extracts features with spatial and spectral information, and adopts a spatial–spectral information transmission unit to cross-transmit spatial and spectral information between these two types of features, thus achieving information complementation. Second, a multiscale feature fusion module is utilized to calculate the classification influence scores of the multiscale complementary features to perform layer-by-layer feature fusion, thus obtaining spatial–spectral features. Afterward, the spatial–spectral features are fed into a classification head to obtain the classification results. During training, a multilatent domain generalization network (MLDGN) is designed, which iteratively assigns pseudodomain labels to all samples, and calculates the sample discrimination loss. SIM combines the sample discrimination loss with the classification losses for training. Thus, SIM can extract spatial–spectral features with domain invariance, alleviating the performance degradation on base categories. Extensive results on four HSI datasets demonstrate that SIM outperforms state-of-the-art methods.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"18 ","pages":"13212-13224"},"PeriodicalIF":4.7,"publicationDate":"2025-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10980625","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144178864","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"DE-Unet: Dual-Encoder U-Net for Ultra-High Resolution Remote Sensing Image Segmentation","authors":"Ye Liu;Shitao Song;Miaohui Wang;Hao Gao;Jun Liu","doi":"10.1109/JSTARS.2025.3565753","DOIUrl":"https://doi.org/10.1109/JSTARS.2025.3565753","url":null,"abstract":"In recent years, there has been a growing demand for remote sensing image semantic segmentation in various applications. The key to semantic segmentation lies in the ability to globally comprehend the input image. While recent transformer-based methods can effectively capture global contextual information, they suffer from high computational complexity, particularly when it comes to ultra-high resolution (UHR) remote sensing images, it is even more challenging for these methods to achieve a satisfactory balance between accuracy and computation speed. To address these issues, we propose in this article a CNN-based dual-encoder U-Net for effective and efficient UHR image segmentation. Our method incorporates dual encoders into the symmetrical framework of U-Net. The dual encoders endow the network with strong global and local perception capabilities simultaneously, while the U-Net's symmetrical structure guarantees the network's robust decoding ability. Additionally, multipath skip connections ensure ample information exchange between the dual encoders, as well as between the encoders and decoders. Furthermore, we proposes a context-aware modulation fusion module that guides the encoder–encoder and encoder–decoder data fusion through global receptive fields. Experiments conducted on public UHR remote sensing datasets such as the Inria Aerial and DeepGlobe have demonstrated the effectiveness of proposed method. Specifically on the Inria Aerial dataset, our method achieves a 77.42% mIoU which outperforms the baseline (Guo et al., 2022) by 3.14% while maintaining comparable inference speed as shown in Fig. <xref>1</xref>.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"18 ","pages":"12290-12302"},"PeriodicalIF":4.7,"publicationDate":"2025-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10980298","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144125603","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Spatial-Temporal Semantic Feature Interaction Network for Semantic Change Detection in Remote Sensing Images","authors":"Yuhang Zhang;Wuxia Zhang;Songtao Ding;Siyuan Wu;Xiaoqiang Lu","doi":"10.1109/JSTARS.2025.3565383","DOIUrl":"https://doi.org/10.1109/JSTARS.2025.3565383","url":null,"abstract":"Semantic Change Detection (SCD) in Remote Sensing Images (RSI) aims to identify changes in the type of Land Cover/Land Use (LCLU). The “from-to” information of the acquired image has more profound practical significance than Binary Change Detection (BCD). However, most deep learning-based SCD algorithms do not fully exploit the spatial-temporal information of multilevel features, leading to challenges in extracting LCLU features in complex scenes. To address these issues, we propose a Spatial-Temporal Semantic Feature Interaction Network (STS-FINet) to improve the performance of SCD in RSI. The proposed STS-FINet comprises a Multi-Scale Feature Extraction Encoder (MS-FEE), a Transformer-based Multilevel Feature Interaction module (TML-FI), and a Multilevel Feature Fusion Decoder (ML-FFD). The MS-FEE extracts deep semantic and differential information from the RSI. The TML-FI is designed to mine the spatial-temporal information by extracting long-range dependencies and spatial information from multilevel features to improve spatial perception. Moreover, Mixed Spatial Reasoning Convolution block (MixSrc) is presented to enrich the spatial information by extracting the multiscale features, thus improving the model's capability to interpret complex scenes. Finally, ML-FFD integrates the multilevel features, resulting in the generation of the semantic change map. The effectiveness of the proposed STS-FINet is verified on two high-resolution RSI datasets. Experimental results show that the proposed STS-FINet achieves better change detection performance than SOTA methods.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"18 ","pages":"12090-12102"},"PeriodicalIF":4.7,"publicationDate":"2025-04-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10979855","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144131691","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Validation of Sea Surface Winds From the Space-Borne Radiometer COWVR","authors":"Luo Zhou;Zhixiong Wang;Naiqiang Zhang;Jianhua Qu","doi":"10.1109/JSTARS.2025.3564966","DOIUrl":"https://doi.org/10.1109/JSTARS.2025.3564966","url":null,"abstract":"This study aims to validate sea surface wind data derived from the compact ocean wind vector radiometer (COWVR) onboard the International Space Station. The COWVR, a fully polarimetric and two-look microwave radiometer, provides wind direction and speed retrievals in rain-free conditions. Validation was performed by comparing COWVR data with traditional radiometer (AMSR-2, GMI), scatterometer (MetOp/ASCAT, HY-2/SCAT), numerical weather prediction (ERA5), and buoy data. The results show that COWVR wind speed retrievals are comparable to those of AMSR-2 and GMI, with an overall wind speed bias close to zero and a standard deviation of 1.18 m/s when compared to ERA5. The COWVR also demonstrates good accuracy in wind direction retrievals for wind speeds above 8 m/s, with root mean square errors of 12.5° and 15.1° for ERA5 and buoy comparisons, respectively. These findings suggest that COWVR can provide good sea surface wind products. However, the inconsistencies of radiometer and scatteromers sea surface wind speeds are still significant, especially for winds above 15 m/s.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"18 ","pages":"12241-12247"},"PeriodicalIF":4.7,"publicationDate":"2025-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10979206","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144124035","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robust Representation Learning Based on Deep Mutual Information for Scene Classification Against Adversarial Perturbations","authors":"Linjuan Li;Gang Xie;Haoxue Zhang;Xinlin Xie;Heng Li","doi":"10.1109/JSTARS.2025.3564376","DOIUrl":"https://doi.org/10.1109/JSTARS.2025.3564376","url":null,"abstract":"Remote sensing scene classification enables data-driven decisions for various applications, such as environmental monitoring, urban planning, and disaster management. However, deep learning models used for scene classification are highly vulnerable to adversarial samples, resulting in incorrect predictions and posing significant risks. While most current methods focus on improving adversarial robustness, they face a trade-off that compromises accuracy on clean, unperturbed images. To address this challenge, we utilized information theory by incorporating a mutual information (MI) representation module, which allows the model to capture high-quality, robust features. Furthermore, a domain adversarial training strategy is applied to promote the learning of domain-invariant features, reducing the effect of distribution differences between clean images and adversarial samples. We propose a novel algorithm that accurately differentiates between clean and adversarial scenes by introducing the MI and domain adaptation-guided network. Extensive experiments demonstrate the effectiveness of our approach against adversarial attacks, revealing a positive correlation between adversarial perturbations and image information entropy, and a negative correlation with robust accuracy.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"18 ","pages":"11963-11978"},"PeriodicalIF":4.7,"publicationDate":"2025-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10977989","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144084788","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"CASSNet: Cross-Attention Enhanced Spectral–Spatial Interaction Network for Hyperspectral Image Super-Resolution","authors":"Zhanxu Zhang;Linzi Yang;Guanglian Zhang;Jiangwei Deng;Lifeng Bian;Chen Yang","doi":"10.1109/JSTARS.2025.3564379","DOIUrl":"https://doi.org/10.1109/JSTARS.2025.3564379","url":null,"abstract":"Deep-learning-based super-resolution (SR) methods for a single hyperspectral image have made significant progress in recent years and become an important research direction in remote sensing. Existing methods perform well in extracting spatial features, but challenges remain in integrating spectral and spatial features when modeling global relationships. In order to take full advantage of the higher spectral resolution of hyperspectral images, this article proposes a novel hyperspectral image SR method (CASSNet), which integrates convolutional neural networks and cross-attention mechanisms into a unified framework. This approach achieves comprehensive integration of spectral and spatial information, with extensive exploration at both local and global levels. In the local feature extraction stage, parallel 3-D/2-D convolutions work in tandem to efficiently capture detail information from both spectral and spatial dimensions. In addition, a spectral–spatial dual-branch module employing the cross-attention mechanism is designed to capture the global dependencies within the features, where the reconstructed spectral–spatial module and the spectral–spatial interaction unit can effectively promote the interaction and complementarity of spectral–spatial features. The experiments on three publicly available datasets demonstrated that the proposed method obtained superior SR results, outperforming state-of-the-art SR algorithms.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"18 ","pages":"11716-11730"},"PeriodicalIF":4.7,"publicationDate":"2025-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10979241","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144073295","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"GF-2 Remote Sensing-Based Winter Wheat Extraction With Multitask Learning Vision Transformer","authors":"Zhihao Zhao;Zihan Liu;Heng Luo;Hui Yang;Biao Wang;Yixin Jiang;Yanqi Liu;Yanlan Wu","doi":"10.1109/JSTARS.2025.3564680","DOIUrl":"https://doi.org/10.1109/JSTARS.2025.3564680","url":null,"abstract":"Accurate mapping of winter wheat is essential for the advancement of precision agriculture and food security. However, classical semantic segmentation models frequently encounter difficulties in precise edge extraction, omission, and classification due to the presence of dense distributions and intraclass diversity. This study proposes a novel method for the extraction of winter wheat from remote sensing data using the GF-2 satellite. The method incorporates a multitask learning framework-Vision Transformer-based model (namely MCFormer) that combines semantic segmentation and boundary detection. Furthermore, the normalized difference vegetation index (NDVI) and land surface temperature (LST) derived from Landsat 8 images was included to enhance the representation of winter wheat's spectral characteristics. The method is evaluated in comparison to frequently used U-Net-, SegNet-, SegFormer-, and MANet-based winter wheat extraction methods in northern Anhui Province. The results indicate that the MCFormer-based method achieves the intersection over union (IoU), F1 score, recall, precision and overall accuracy (OA) of 0.9790, 0.9893, 0.9953, 0.9835, and 0.9900, respectively, outperforming the U-Net-, SegNet-, SegFormer-, and MANet-based methods. The incorporation of multitask learning with NDVI and LST data has been demonstrated to enhance several key performance metrics, including improvements in the IoU, F1 score, recall, precision, and OA by 5.95%, 3.65%, 3.75%, 2.79%, and 2.24%, respectively. Our proposed approach improves the accuracy of winter wheat extraction from remote sensing images, which has the potential to facilitate precision agriculture and enhance food security.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"18 ","pages":"12454-12469"},"PeriodicalIF":4.7,"publicationDate":"2025-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10979367","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144125360","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}