Journal of Applied Remote Sensing: Latest Articles

Multiscale graph convolution residual network for hyperspectral image classification
IF 1.7 · CAS Zone 4 · Earth Science
Journal of Applied Remote Sensing · Pub Date: 2024-01-01 · DOI: 10.1117/1.jrs.18.014504
Ao Li, Yuegong Sun, Cong Feng, Yuan Cheng, Liang Xi
In recent years, graph convolutional networks (GCNs) have attracted increased attention in hyperspectral image (HSI) classification through their use of the data and its connection graph. However, most existing GCN-based methods have two main drawbacks. First, a graph built with pixel-level nodes discards much useful spatial information while incurring high computational cost due to the large graph size. Second, the joint spatial–spectral structure hidden in the HSI is not fully explored for better neighbor-correlation preservation, which limits the ability of the GCN to extract discriminative features. To address these problems, we propose a multiscale graph convolutional residual network (MSGCRN) for HSI classification. First, to explore the local spatial–spectral structure, superpixel segmentation is performed on the spectral principal components of the HSI at different scales; the resulting multiscale superpixel areas capture rich spatial texture divisions. Second, multiple superpixel-level subgraphs are constructed with adaptive weighted node aggregation, which not only effectively reduces the graph size but also preserves local neighbor correlation at varying subgraph scales. Finally, a graph convolution residual network is designed for multiscale hierarchical feature extraction; the extracted features are integrated into the final discriminative features for HSI classification via a diffusion operation. Moreover, a mini-batch branch is added to the large-scale superpixel branch of MSGCRN to further reduce computational cost. Extensive experiments on three public HSI datasets demonstrate the advantages of our MSGCRN model over several cutting-edge approaches.
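The superpixel-level graph construction described in the abstract can be sketched as below. This is a minimal illustration: the shared-border adjacency rule and plain mean node aggregation are assumptions, not the paper's exact adaptive weighting scheme.

```python
import numpy as np

def superpixel_graph(labels, features):
    """Build a superpixel-level graph: one node per superpixel.

    labels:   (H, W) integer superpixel map from segmentation
    features: (H, W, B) per-pixel spectral features
    Returns node features (mean-aggregated over each superpixel) and a
    binary adjacency matrix linking superpixels that share a pixel border.
    """
    n = labels.max() + 1
    B = features.shape[-1]
    nodes = np.zeros((n, B))
    for s in range(n):
        nodes[s] = features[labels == s].mean(axis=0)  # mean aggregation per node

    adj = np.zeros((n, n), dtype=int)
    # 4-neighbour border check: horizontally/vertically adjacent pixels
    # carrying different labels imply an edge between their superpixels
    h_pairs = np.stack([labels[:, :-1].ravel(), labels[:, 1:].ravel()], axis=1)
    v_pairs = np.stack([labels[:-1, :].ravel(), labels[1:, :].ravel()], axis=1)
    for a, b in np.concatenate([h_pairs, v_pairs]):
        if a != b:
            adj[a, b] = adj[b, a] = 1
    return nodes, adj
```

Working at the superpixel level shrinks the graph from hundreds of thousands of pixel nodes to a few hundred region nodes, which is the source of the computational saving the abstract mentions.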
Citations: 0
Multi-scale contrastive learning method for PolSAR image classification
IF 1.7 · CAS Zone 4 · Earth Science
Journal of Applied Remote Sensing · Pub Date: 2024-01-01 · DOI: 10.1117/1.jrs.18.014502
Wenqiang Hua, Chen Wang, Nan Sun, Lin Liu
Although deep learning-based methods have achieved remarkable results in polarimetric synthetic aperture radar (PolSAR) image classification, they require large numbers of labeled samples, which for PolSAR imagery are difficult to obtain and demand extensive human labor and material resources. Therefore, a new PolSAR image classification method based on multi-scale contrastive learning is proposed that achieves good classification results with only a small number of labeled samples. During pre-training, we propose a multi-scale contrastive learning network model that exploits the characteristics of the data itself to train the network contrastively. In addition, to capture richer feature information, a multi-scale network structure is introduced. During training, considering the diversity and complexity of PolSAR images, we design a hybrid loss function combining supervised and unsupervised information to achieve better classification performance with limited labeled samples. Experimental results on three real PolSAR datasets demonstrate that the proposed method outperforms the comparison methods, even with limited labeled samples.
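A hybrid loss of the kind the abstract describes could combine supervised cross-entropy on the few labeled samples with an unsupervised contrastive term on unlabeled pairs. The sketch below uses the standard NT-Xent contrastive loss and a weighting factor `lam`; both the exact loss form and the weighting are assumptions, as the paper's formulation is not given here.

```python
import numpy as np

def nt_xent(z1, z2, tau=0.5):
    """Unsupervised contrastive (NT-Xent) loss over two augmented views.
    z1, z2: (N, D) L2-normalised embeddings of the same N samples."""
    z = np.concatenate([z1, z2])                 # (2N, D)
    sim = z @ z.T / tau                          # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)               # exclude self-pairs
    N = len(z1)
    pos = np.concatenate([np.arange(N, 2 * N), np.arange(N)])  # positive index
    logp = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -logp[np.arange(2 * N), pos].mean()

def hybrid_loss(logits, labels, z1, z2, lam=0.5):
    """Supervised cross-entropy on the labelled samples plus a weighted
    unsupervised contrastive term (the weighting lam is an assumption)."""
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    ce = -np.log(p[np.arange(len(labels)), labels]).mean()
    return ce + lam * nt_xent(z1, z2)
```

The unsupervised term lets every unlabeled PolSAR pixel contribute a training signal, which is why such methods degrade gracefully as the labeled set shrinks.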
Citations: 0
Monitoring of land subsidence by combining small baseline subset interferometric synthetic aperture radar and generic atmospheric correction online service in Qingdao City, China
IF 1.7 · CAS Zone 4 · Earth Science
Journal of Applied Remote Sensing · Pub Date: 2024-01-01 · DOI: 10.1117/1.jrs.18.014506
Xuepeng Li, Qiuxiang Tao, Yang Chen, Anye Hou, Ruixiang Liu, Yixin Xiao
Owing to accelerated urbanization, land subsidence has damaged urban infrastructure and impeded sustainable economic and social development in Qingdao City, China. Atmospheric correction that combines interferometric synthetic aperture radar (InSAR) with the generic atmospheric correction online service (GACOS) has not yet been investigated for land subsidence in Qingdao. We combined small baseline subset InSAR (SBAS InSAR), GACOS, and 28 Sentinel-1A images to produce a land subsidence time series from January 2019 to December 2020 for the urban areas of Qingdao, and the spatiotemporal evolution of land subsidence before and after GACOS atmospheric correction was compared, analyzed, and verified against leveling data. Our work demonstrates that the overall surface of the Qingdao urban area is stable; subsidence is mainly concentrated in the coastal area of Jiaozhou Bay, northwestern Jimo District, and northern Chengyang District. GACOS atmospheric correction reduced the root-mean-square error of the differential interferometric phase, and the corrected land subsidence time series agreed better with the leveling-monitored results. GACOS atmospheric correction is thus effective for improving the accuracy of SBAS InSAR-monitored land subsidence over large areas and long time series in coastal cities.
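In practice, the GACOS correction converts the differential zenith total delay between the two acquisition epochs into slant-range phase and subtracts it from the unwrapped interferogram. A sketch, assuming Sentinel-1 C-band and a simple cosine mapping from zenith to slant range (the paper's exact processing chain is not detailed here):

```python
import numpy as np

WAVELENGTH = 0.0556  # Sentinel-1 C-band radar wavelength in metres

def gacos_correct(ifg_phase, ztd_master, ztd_slave, incidence_deg):
    """Remove the GACOS-modelled atmospheric delay from an interferogram.

    ifg_phase: unwrapped interferometric phase (radians)
    ztd_*:     GACOS zenith total delay maps (metres) for the two epochs
    The zenith delay difference is mapped to slant range through the
    incidence angle and converted to phase before subtraction.
    """
    slant_delay = (ztd_slave - ztd_master) / np.cos(np.radians(incidence_deg))
    atmo_phase = (4 * np.pi / WAVELENGTH) * slant_delay
    return ifg_phase - atmo_phase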
Citations: 0
2023 List of Reviewers
IF 1.7 · CAS Zone 4 · Earth Science
Journal of Applied Remote Sensing · Pub Date: 2024-01-01 · DOI: 10.1117/1.jrs.18.010102
JARS thanks the reviewers who served the journal in 2023.
Citations: 0
Spatiotemporal fusion convolutional neural network: tropical cyclone intensity estimation from multisource remote sensing images
IF 1.7 · CAS Zone 4 · Earth Science
Journal of Applied Remote Sensing · Pub Date: 2024-01-01 · DOI: 10.1117/1.jrs.18.018501
Randi Fu, Haiyan Hu, Nan Wu, Zhening Liu, Wei Jin
Utilizing multisource remote sensing images to accurately estimate tropical cyclone (TC) intensity is crucial and challenging. Traditional approaches rely on a single image for intensity estimation and cannot perceive dynamic spatiotemporal information, while many existing deep learning methods sample from fixed-length time series and depend on computation-intensive 3D feature extraction modules, limiting flexibility and scalability. By organically linking the genesis and dissipation mechanisms of a TC with computer vision techniques, we introduce a spatiotemporal fusion convolutional neural network that integrates three improvements. First, an a priori-aware nonparametric fusion module effectively fuses key features from multisource remote sensing data. Second, we design a scale-aware contraction–expansion module that captures detailed TC features by connecting information from different scales through weighting and up-sampling. Finally, we propose a 1D–2D conditional sampling training method that balances single-step regression (for short sequences) with latent-variable-based temporal modeling (for long sequences) to achieve flexible spatiotemporal feature perception, avoiding the data-scale constraint imposed by fixed sequence lengths. In qualitative and quantitative comparisons, the proposed network achieved a root-mean-square error of 8.89 kt, a 29.7% improvement over the advanced Dvorak technique, and its efficacy in actual TC case analyses indicates practical viability and potential for broader application.
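The reported numbers can be checked with two short helpers: the 8.89 kt RMSE together with the 29.7% improvement implies an advanced Dvorak baseline near 12.6 kt (an inference, not a figure stated in the abstract).

```python
import numpy as np

def rmse(pred, truth):
    """Root-mean-square error, the intensity-estimation metric used above."""
    return float(np.sqrt(np.mean((np.asarray(pred) - np.asarray(truth)) ** 2)))

def relative_improvement(new_err, baseline_err):
    """Percentage reduction of an error metric relative to a baseline."""
    return 100.0 * (baseline_err - new_err) / baseline_err
```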
Citations: 0
EPAWFusion: multimodal fusion for 3D object detection based on enhanced points and adaptive weights
IF 1.7 · CAS Zone 4 · Earth Science
Journal of Applied Remote Sensing · Pub Date: 2024-01-01 · DOI: 10.1117/1.jrs.18.017501
Xiang Sun, Shaojing Song, Fan Wu, Tingting Lu, Bohao Li, Zhiqing Miao
Fusing LiDAR point clouds and camera images for 3D object detection in autonomous driving has emerged as a captivating research avenue. The core challenge of multimodal fusion is how to seamlessly fuse the 3D LiDAR point cloud with the 2D camera image. Although current approaches show promising results, they often rely on fusion at only the data, feature, or object level, leaving room for improvement in the utilization of multimodal information. We present EPAWFusion, an effective multimodal framework that fuses the 3D point cloud and 2D camera image at both the data level and the feature level. The model consists of three key modules: a point-enhancement module based on semantic segmentation for data-level fusion, an adaptive weight allocation module for feature-level fusion, and a detector based on 3D sparse convolution. Semantic information is extracted from the 2D image via semantic segmentation, and the calibration matrix establishes the point-pixel correspondence; semantic and distance information are then attached to the point cloud to achieve data-level fusion. Geometry features of the enhanced point cloud are extracted by voxel encoding, texture features of the image are obtained with a pretrained 2D CNN, and feature-level fusion is achieved via the adaptive weight allocation module. The fused features are fed into the 3D sparse convolution-based detector to obtain accurate 3D objects. Experiments show that EPAWFusion outperforms the baseline network MVXNet on the KITTI dataset for 3D detection of cars, pedestrians, and cyclists by 5.81%, 6.97%, and 3.88%, respectively. EPAWFusion also performs well for single-vehicle-side 3D object detection on the DAIR-V2X dataset, and the inference frame rate of the model reaches 11.1 FPS. The two-level fusion of EPAWFusion significantly enhances multimodal 3D object detection.
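The data-level fusion step, projecting each LiDAR point through the calibration matrix and attaching the semantic class and range of the pixel it lands on, can be sketched as follows. The array layout and function name are illustrative assumptions, not the paper's code.

```python
import numpy as np

def decorate_points(points, semantic_map, P):
    """Attach image semantics to LiDAR points (data-level fusion sketch).

    points:       (N, 3) LiDAR xyz already expressed in the camera frame
    semantic_map: (H, W) per-pixel class ids from a segmentation network
    P:            (3, 4) camera projection (calibration) matrix
    Returns an (M, 5) array of xyz, semantic class, and range for each
    point that projects inside the image with positive depth.
    """
    N = len(points)
    homog = np.hstack([points, np.ones((N, 1))])       # (N, 4) homogeneous
    uvw = homog @ P.T                                   # (N, 3) image-plane coords
    keep = uvw[:, 2] > 0                                # in front of the camera
    uv = (uvw[keep, :2] / uvw[keep, 2:3]).astype(int)   # pixel coordinates
    H, W = semantic_map.shape
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < W) & (uv[:, 1] >= 0) & (uv[:, 1] < H)
    pts = points[keep][inside]
    sem = semantic_map[uv[inside, 1], uv[inside, 0]]    # class at each pixel
    dist = np.linalg.norm(pts, axis=1)                  # distance information
    return np.column_stack([pts, sem, dist])
```

The enhanced 5D points then go through voxel encoding exactly as raw points would, which is what makes this kind of decoration cheap to bolt onto an existing detector.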
Citations: 0
Synthetic aperture radar image change detection using saliency detection and attention capsule network
IF 1.7 · CAS Zone 4 · Earth Science
Journal of Applied Remote Sensing · Pub Date: 2024-01-01 · DOI: 10.1117/1.jrs.18.016505
Shaona Wang, Di Wang, Jia Shi, Zhenghua Zhang, Xiang Li, Yanmiao Guo
Synthetic aperture radar (SAR) image change detection, a research hotspot in remote sensing image processing, has been widely applied in a variety of fields. To increase its accuracy, an algorithm based on saliency detection and an attention capsule network is proposed. First, the difference image (DI) is processed with a saliency detection method and the DI's most salient regions are extracted. Exploiting these saliency characteristics, training samples are selected only from the most salient regions of the DI and background regions are omitted, significantly reducing the number of training samples. Second, a capsule network with an attention mechanism is constructed: the spatial attention model extracts the salient characteristics, and the capsule network enables precise classification. Finally, the capsule network classifies the images to produce the final change map. Experiments comparing the proposed method with related methods on four real SAR datasets show that it effectively improves change detection accuracy.
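The sample-selection step, keeping training patches only where the saliency map fires, can be sketched as below. The threshold and patch size are illustrative assumptions.

```python
import numpy as np

def select_salient_samples(diff_image, saliency, thresh=0.5, patch=2):
    """Pick training patches only from salient regions of a difference image.

    diff_image: (H, W) difference image (DI)
    saliency:   (H, W) saliency map scaled to [0, 1]
    Returns patches centred on pixels whose saliency exceeds thresh;
    background pixels contribute no samples, shrinking the training set.
    """
    H, W = diff_image.shape
    ys, xs = np.where(saliency > thresh)
    patches = []
    for y, x in zip(ys, xs):
        if patch <= y < H - patch and patch <= x < W - patch:  # skip borders
            patches.append(diff_image[y - patch:y + patch + 1,
                                      x - patch:x + patch + 1])
    return np.array(patches)
```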
Citations: 0
Plume motion characterization in unmanned aerial vehicle aerial video and imagery
IF 1.7 · CAS Zone 4 · Earth Science
Journal of Applied Remote Sensing · Pub Date: 2024-01-01 · DOI: 10.1117/1.jrs.18.016501
Mehrube Mehrubeoglu, Kirk Cammarata, Hua Zhang, Lifford McLauchlan
Sediment plumes are generated by both natural and human activities in benthic environments, increasing the turbidity of the water and reducing the sunlight reaching benthic vegetation. Seagrasses, photosynthetic bioindicators of their environment, are threatened by chronic reductions in sunlight, impacting entire aquatic food chains. Our research uses unmanned aerial vehicle (UAV) video and imagery to investigate the characteristics of sediment plumes generated by a model of anthropogenic disturbance. The extent, speed, and motion of the plumes were assessed as these parameters pertain to the potential impacts of plume turbidity on seagrass communities. In a case study using UAV video, the turbidity plume spread more than 200 ft over 20 min of the UAV campaign, and its directional speed was estimated at between 10.4 and 10.6 ft/min. This was corroborated by the observation that plume turbidity and sediment load were greatest near the disturbance and diminished with distance. Further temporal studies are needed to determine any long-term impacts of human-generated sediment plumes on seagrass beds.
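Given leading-edge positions of the plume tracked across timestamped frames (the tracking/segmentation step is assumed here), a directional speed like the reported 10.4 to 10.6 ft/min falls out of a least-squares fit of position against time:

```python
import numpy as np

def plume_speed(front_positions_ft, times_min):
    """Directional plume speed from the leading-edge position over time.

    Fits a line (least squares) to position vs. time; the slope is the
    speed in ft/min. Positions would come from segmenting the plume in
    georeferenced UAV frames, an assumed preprocessing step.
    """
    slope, _ = np.polyfit(times_min, front_positions_ft, 1)
    return float(slope)
```

A constant ~10.5 ft/min front over the 20 min campaign covers roughly 210 ft, consistent with the "more than 200 ft" spread reported above.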
Citations: 0
LRSNet: a high-efficiency lightweight model for object detection in remote sensing
IF 1.7 · CAS Zone 4 · Earth Science
Journal of Applied Remote Sensing · Pub Date: 2024-01-01 · DOI: 10.1117/1.jrs.18.016502
Shiliang Zhu, Min Miao, Yutong Wang
Unmanned aerial vehicles (UAVs) can flexibly conduct aerial remote-sensing imaging and, using deep learning object-detection algorithms, efficiently perceive objects, finding widespread application in practical engineering tasks; UAV-based remote sensing object detection therefore holds considerable research value. However, the background of UAV remote sensing images is often complex, varying shooting angles and heights make it difficult to unify target scales and features, numerous small targets are densely distributed, and UAVs face significant hardware-resource limitations. Against this background, we propose a lightweight remote sensing object detection network (LRSNet) based on YOLOv5s. In the backbone, the lightweight network MobileNetV3 substantially reduces computational complexity and parameter count. In the neck, a multiscale feature pyramid network named CM-FPN enhances small-object detection. CM-FPN comprises two key components: C3EGhost, based on GhostNet and efficient channel attention modules, and the multiscale feature fusion channel attention mechanism (MFFC). C3EGhost, the primary feature extraction module of CM-FPN, has lower computational complexity and fewer parameters and effectively reduces background interference. MFFC, the feature-fusion node of CM-FPN, adaptively weights the fusion of shallow and deep features, acquiring more effective detail and semantic information for object detection. Evaluated on the NWPU VHR-10, DOTA V1.0, and VisDrone-2019 datasets, LRSNet achieves mean average precision of 94.0%, 71.9%, and 35.6%, respectively, with only 5.8 GFLOPs and 4.1 M parameters, confirming its efficiency in UAV-based remote-sensing object detection.
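An adaptively weighted fusion of shallow and deep features in the spirit of MFFC might look like the sketch below; the global-pooling-plus-softmax gating is an assumption for illustration, not the paper's exact design.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def adaptive_fusion(shallow, deep, w_proj):
    """Adaptively weighted fusion of shallow and deep feature maps.

    shallow, deep: (C, H, W) feature maps at the same resolution
    w_proj:        (2C, 2) learned projection producing the two branch weights
    Global average pooling summarises both branches; a softmax over the
    projected descriptor yields per-branch weights that sum to one.
    """
    desc = np.concatenate([shallow.mean(axis=(1, 2)), deep.mean(axis=(1, 2))])  # (2C,)
    w = softmax(desc @ w_proj)          # (2,) adaptive branch weights
    return w[0] * shallow + w[1] * deep
```

Because the weights are produced from the features themselves, the fusion node can lean on shallow detail for small targets and deep semantics for cluttered backgrounds, which is the behaviour the abstract attributes to MFFC.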
Citations: 0
Continual domain adaptation on aerial images under gradually degrading weather
IF 1.7 · CAS Zone 4 · Earth Science
Journal of Applied Remote Sensing · Pub Date: 2024-01-01 · DOI: 10.1117/1.jrs.18.016504
Chowdhury Sadman Jahan, Andreas Savakis
Domain adaptation (DA) aims to reduce the effects of the distribution gap between the source domain, where a model is trained, and the target domain, where it is deployed. A deep learning model deployed on an aerial platform may face gradually degrading weather during operation, leading to a gradually widening gap between the source training data and the encountered target data. Because no existing datasets feature gradually degrading weather, we generate four datasets by introducing progressively worsening clouds and snowflakes into aerial images. During deployment, unlabeled target domain samples are acquired in small batches, and adaptation is performed continually with each incoming batch rather than assuming the entire target dataset is available. We evaluate two continual DA models against a baseline standard DA model under these gradually degrading conditions. All models are source-free, i.e., they operate without access to the source training data during adaptation, and we compare both convolutional and transformer architectures. In our experiments, continual DA methods perform better but sometimes encounter stability issues during adaptation, and we propose gradient normalization as a simple but effective remedy for managing this instability.
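Gradient normalization of the kind proposed here can be as simple as rescaling the global gradient norm of each adaptation update; the fixed target norm below is an illustrative assumption.

```python
import numpy as np

def normalize_gradients(grads, target_norm=1.0):
    """Rescale a list of parameter gradients to a fixed global L2 norm.

    During continual adaptation, occasional large gradients from noisy
    unlabeled target batches can destabilise the model; renormalising
    the global gradient norm bounds the size of every update step while
    preserving its direction.
    """
    total = np.sqrt(sum(float((g ** 2).sum()) for g in grads))
    if total == 0:
        return grads
    scale = target_norm / total
    return [g * scale for g in grads]
```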
Citations: 0