{"title":"Prompt-Based Granularity-Unified Representation Network for Remote Sensing Image-Text Matching","authors":"Minhan Hu;Keke Yang;Jing Li","doi":"10.1109/JSTARS.2025.3555639","DOIUrl":"https://doi.org/10.1109/JSTARS.2025.3555639","url":null,"abstract":"Remote sensing (RS) image–text matching has gained significant attention for its promising potential. Despite great advancements, accurately matching RS images (RSIs) and captions remains challenging due to the significant multimodal gap and the inherent characteristics of RS data. Many approaches use complex models to extract global features that handle the semantic redundancy and varying scales in RSIs, but they lose important details in RSIs and captions. Other methods align fine-grained local features but overlook the semantic granularity differences between them: fine-grained features in RSIs typically capture only a small fraction of the overall semantics, whereas those in captions convey more comprehensive and abstract semantics. Therefore, we propose the prompt-based granularity-unified representation network, an end-to-end framework designed to mitigate the multimodal semantic granularity difference and achieve comprehensive alignment. Our approach includes two key modules: 1) the prompt-based feature aggregator, which dynamically aggregates fine-grained features into several granularity-unified tokens with full semantics, and 2) the text-guided vision modulation, which further enhances visual representations by modulating the visual features with RS captions, as language typically carries more precise semantics than visual data. Furthermore, to address the challenges posed by the high similarity within RS datasets, we introduce an effective hybrid cross-modal loss that facilitates comprehensive multimodal feature alignment within a unified structure. 
We conduct extensive experiments on three benchmark datasets, achieving state-of-the-art performance, which validates the effectiveness and superiority of our method.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"18 ","pages":"10172-10185"},"PeriodicalIF":4.7,"publicationDate":"2025-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10945411","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143860914","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Analysis of a Commercial GNSS-R Ocean Wind Speed Dataset","authors":"Mohammad M. Al-Khaldi;Joel T. Johnson;Darren S. McKague;Dorina Twigg;Anthony Russel;Frederick S. Policelli","doi":"10.1109/JSTARS.2025.3555820","DOIUrl":"https://doi.org/10.1109/JSTARS.2025.3555820","url":null,"abstract":"An analysis of Level-2 (L2) ocean wind speed retrievals from 1 May 2021 to 1 June 2024, derived from Spire, Inc.’s Global Navigation Satellite System Reflectometry (GNSS-R) observatories, is presented. Comparisons of retrieved ocean surface wind speeds with European Centre for Medium-Range Weather Forecasts Reanalysis v5 estimates show correlations on the order of 73% and an unbiased rms error (uRMSE) of approximately 2.4 m/s over all wind speeds using the latest v2.07 data version. Colocated observations with advanced scatterometer (ASCAT) B, ASCAT C, advanced microwave scanning radiometer 2, soil moisture active passive, and Cyclone Global Navigation Satellite System winds also show correlations up to 86% and overall uRMSE values ranging between 1.45 and 2.03 m/s, with “triple colocation” analyses yielding similar results. Errors are found to increase significantly for wind speeds exceeding 12–15 m/s, likely due to the relatively low signal-to-noise ratio of such measurements for Spire's receivers. 
Nevertheless, the demonstrated sensitivity of ocean wind speed retrievals to storm structure highlights an ability to capture large-scale features in a manner commensurate with reference model data.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"18 ","pages":"9798-9809"},"PeriodicalIF":4.7,"publicationDate":"2025-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10945351","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143850916","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"DeepU-Net: A Parallel Dual-Branch Model for Deeply Fusing Multiscale Features for Road Extraction From High-Resolution Remote Sensing Images","authors":"Guoqing Zhou;Haiyang Zhi;Ertao Gao;Yanling Lu;Jianjun Chen;Yuhang Bai;Xiao Zhou","doi":"10.1109/JSTARS.2025.3555636","DOIUrl":"https://doi.org/10.1109/JSTARS.2025.3555636","url":null,"abstract":"Existing encoder–decoder models, with or without atrous convolutions, have exposed their limitations in diverse environments involving varying road scales, shadows, building occlusions, and vegetation in high-resolution remote sensing images. Therefore, this article introduces a dual-branch deep fusion network, named “DeepU-Net,” for obtaining global and local information in parallel. Two novel modules are designed: 1) the spatial and coordinate squeeze-and-excitation fusion attention module that enhances the focus on spatial positions and target channel information; and 2) the efficient multiscale convolutional attention module that boosts the ability to handle multiscale road information. The validation of the proposed model is conducted using two datasets, CHN6-CUG and DeepGlobe, which are from urban and rural areas, respectively. A comparative analysis with six commonly used models, including U-Net, PSPNet, DeepLabv3+, HRNet, CoANet, and SegFormer, is conducted. The experimental results reveal that the introduced model achieves mean intersection over union scores of 83.18% and 81.43%, improvements of 1.93% and 1.02% on average for the two datasets, respectively, when compared with the six commonly used models. 
The outcomes suggest that the introduced model achieves greater accuracy than the six widely applied models.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"18 ","pages":"9448-9463"},"PeriodicalIF":4.7,"publicationDate":"2025-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10945378","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143845538","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Extraction of Raft Aquaculture in SDGSAT-1 Images via Shape Prior Segmentation Network","authors":"Lin Zhu;Chuanli Liu;Liwen Niu;Zhuo Hai;Xuan Dong","doi":"10.1109/JSTARS.2025.3555645","DOIUrl":"https://doi.org/10.1109/JSTARS.2025.3555645","url":null,"abstract":"Reliable extraction of raft aquaculture areas from high-resolution remote sensing data is vital for the sustainable development of coastal zones. Despite the success of semantic segmentation, challenges remain due to adhesion effects, weak and seasonal spectral signals against complex dynamic backgrounds, and limited labeled training data for robust and generalizable models. To overcome these challenges, this article proposes a shape prior segmentation network for the extraction of raft aquaculture areas from Sustainable Development Science Satellite-1 (SDGSAT-1) images. Based on the encoder-decoder framework of a U-shaped network, the method incorporates a shape prior module that flexibly integrates with the backbone network. This module combines global shape priors, offering coarse shape representations to model global contexts, and local shape priors, providing fine shape information to enhance segmentation accuracy while reducing dependency on learnable prototypes. By leveraging shape priors, the network can achieve satisfactory segmentation reliability, efficiency, and faster learning during training. 
Extensive experiments validate the proposed methodology, achieving an accuracy of 98.26%, a mean pixel accuracy of 88.26%, and a mean intersection over union of 85.16%.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"18 ","pages":"9810-9820"},"PeriodicalIF":4.7,"publicationDate":"2025-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10944568","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143850904","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automated Collection Planning for Civilian and Commercial Satellite Imagery, and Definition and Exploitation of the Collection Asset Specification Data Structure","authors":"Jeff Secker;Katerina Biron;Dany Dessureault;Pierre Lamontagne;Rodney Rear","doi":"10.1109/JSTARS.2025.3555925","DOIUrl":"https://doi.org/10.1109/JSTARS.2025.3555925","url":null,"abstract":"There are more than 8000 traditional and small/microsatellites in low Earth orbit (LEO), and many of these are civilian and commercial satellites for remote sensing and space-based intelligence, surveillance, and reconnaissance (ISR). Collection planning is the first step in the tasking, collection, processing, exploitation, and dissemination (TCPED) process and is required to choose the collection assets (satellites), instrument modes, and orbital passes that best match the collection task. Collection planning requires understanding of and experience with requirements; satellite and instrument phenomenologies and capabilities; collection strategies; and data processing and exploitation methodologies. Given this, it is challenging for collection managers to make the best use of available satellites in the time available, and they would benefit from automation in collection planning processes and systems. This article defines and describes collection planning terminology, notation, and processes. It defines new metrics for assessing temporal coverage (completeness and density of collection opportunities along the time axis), and it describes six semiautomated tools and their underlying algorithms. These can be used by a collection manager to automate elements of the collection planning process, and they can be used for machine-to-machine communication using web services, thereby decreasing the total time required. 
This machine-to-machine communication permits the collection planning process to be completed in seconds instead of minutes or hours, a time saving that can be critical for dynamic tasking such as tip-and-cue or last-minute retasking situations.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"18 ","pages":"9764-9797"},"PeriodicalIF":4.7,"publicationDate":"2025-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10945445","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143850903","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"HCAFNet: Hierarchical Cross-Modal Attention Fusion Network for HSI and LiDAR Joint Classification","authors":"Jiajia Bai;Na Chen;Jiangtao Peng;Lanxin Wu;Weiwei Sun;Zhijing Ye","doi":"10.1109/JSTARS.2025.3555950","DOIUrl":"https://doi.org/10.1109/JSTARS.2025.3555950","url":null,"abstract":"Hyperspectral image (HSI) and light detection and ranging (LiDAR) data can provide complementary features and have shown great potential for land cover classification. Recently, joint classification of HSI and LiDAR data using deep learning networks (e.g., convolutional neural networks and transformers) has made progress. However, these methods often use channel- or spatial-dimension attention to highlight features, which overlooks the interdependencies between these dimensions and faces challenges in effectively extracting and fusing diverse features from heterogeneous datasets. To address these challenges, a novel hierarchical cross-modal attention fusion network (HCAFNet) is proposed in this article. First, a hierarchical convolution module is designed to extract diverse features from multisource data and to achieve initial fusion using the octave convolution. Then, a bidirectional feature fusion module is constructed to integrate heterogeneous features within the network. To further enhance the network's feature representation capability, a triplet rotational multihead attention module is designed to capture cross-dimensional dependencies, enabling more effective representation of both channel and spatial information. 
Experimental results conducted on three public datasets demonstrate that the proposed HCAFNet outperforms other advanced methods.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"18 ","pages":"9522-9532"},"PeriodicalIF":4.7,"publicationDate":"2025-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10945607","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143850915","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Hybrid Model of State-Space Model and Attention for Hyperspectral Image Denoising","authors":"Mingwen Shao;Xiaodong Tan;Kai Shang;Tiyao Liu;Xiangyong Cao","doi":"10.1109/JSTARS.2025.3556024","DOIUrl":"https://doi.org/10.1109/JSTARS.2025.3556024","url":null,"abstract":"Hyperspectral images (HSIs) exhibit pronounced spatial similarity and spectral correlation. With these two physical properties taken into account, the underlying clean HSI is easier to derive from noisy images. However, existing denoising approaches struggle to model the spatial-spectral structure due to the following limitations: excessive memory consumption when performing global modeling, and insufficient effectiveness in local modeling. To address these issues, in this article, we propose HyMatt, a hybrid model of the state-space model (SSM) and attention mechanism for HSI denoising. Specifically, to fully exploit global similarity within an HSI cube, we devise a quad-directional vision Mamba based on a crafted cube selective scan (CSS) to capture long-range dependencies in a memory-efficient manner. Our CSS not only enhances global modeling capacity but also mitigates the negative impacts of causal modeling inherent in the SSM. Furthermore, in order to improve local similarity modeling, we integrate a local attention module, in which adjacent elements are refined by adaptively utilizing similar neighboring features as guidance. Compared to existing methods, our HyMatt excels in exploiting local features while leveraging the global similarity within the entire HSI cube. 
Extensive experiments on both simulated and real remote sensing noisy images demonstrate that our HyMatt consistently surpasses state-of-the-art HSI denoising methods.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"18 ","pages":"9904-9918"},"PeriodicalIF":4.7,"publicationDate":"2025-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10945605","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143850827","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"X-band Repeat-pass Coherence at Short Temporal Baselines for Crop Monitoring","authors":"Arturo Villarroya-Carpio;Juan M. Lopez-Sanchez","doi":"10.1109/JSTARS.2025.3555382","DOIUrl":"https://doi.org/10.1109/JSTARS.2025.3555382","url":null,"abstract":"A one-year time series of X-band images acquired by TerraSAR-X (TSX), TanDEM-X (TDX), and PAZ in the same orbit configuration over an agricultural area has been exploited to assess the potential of X-band repeat-pass coherence for crop monitoring. The combination of TSX, TDX, and PAZ allows working with temporal baselines of 4 and 7 days, i.e., shorter than the 11-day revisit time of the individual satellites. These short temporal baselines are less affected by temporal decorrelation and help increase the sensitivity of coherence to crop growth. The analysis is carried out for 30 different crop types by comparing series of coherence and backscatter at X-band with the normalized difference vegetation index (NDVI) extracted from Sentinel-2. C-band coherence from Sentinel-1 is also analyzed with the same dataset for comparison purposes. In addition, a new radar vegetation index for copolar data, called the copolar radar vegetation index (coRVI), is defined and evaluated. During the phenological cycle of the crops, coherence decreases at the early stages and increases again after harvest, remaining high outside the growing season. X-band coherences for baselines of 4 and 7 days show the strongest correlations with NDVI, especially for short crops. The coRVI outperforms the backscattering coefficient of the individual channels. Moreover, coRVI is better correlated with NDVI than coherence for several crop types, hence providing complementary information. 
The main conclusion is that X-band repeat-pass coherence with short temporal baselines (4 or 7 days) could be a valuable tool for crop monitoring if an adequate acquisition plan were implemented routinely.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"18 ","pages":"9054-9075"},"PeriodicalIF":4.7,"publicationDate":"2025-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10943160","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143835440","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Toward Better Accuracy-Efficiency Tradeoffs for Oriented SAR Ship Object Detection","authors":"Moran Ju;Buniu Niu;Mulin Li;Tengkai Mao;Si-nian Jin","doi":"10.1109/JSTARS.2025.3555330","DOIUrl":"https://doi.org/10.1109/JSTARS.2025.3555330","url":null,"abstract":"In the oriented synthetic aperture radar (SAR) ship detection task, convolutional neural network based detectors have dramatically improved detection performance, but their enormous parameter counts make model lightweighting difficult. Recently, DETR and its variants have demonstrated excellent performance in object detection, and their model construction through linear layers has great potential for lightweighting. However, DETR-based models are rarely applied to the oriented object detection task, and their network structure relies on manual experience and cannot be designed automatically. In this article, we propose a novel neural architecture search based lightweight detector in the polar coordinate system with DETR as the search space for oriented SAR ship detection, where oriented bounding boxes are encoded and decoded in the polar coordinate system to cope with boundary discontinuity problems, and the weight entanglement strategy is adopted to realize automatic and lightweight design of DETR. Meanwhile, we design an oriented multiscale attention to alleviate the problem of sampling a large amount of background due to offset learning. Furthermore, we introduce a downsampling feedforward network to significantly reduce network floating-point operations. Finally, we transplant the FPDDet head as an auxiliary head to improve the encoder's learning of potential ship features and the decoder's cross-attention learning. Experimental results show that our models not only achieve DETR lightweighting and real-time detection but also improve detection performance. 
Compared with previous best models, our base models achieve state-of-the-art performance on both the RSSDD and RSDD datasets, with improvements of 1.36% and 2.28% in mAP at 32.67 and 32.14 GFLOPs, respectively.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"18 ","pages":"9666-9681"},"PeriodicalIF":4.7,"publicationDate":"2025-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10944503","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143835365","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"TripletA-Net: A Deep Learning Model for Automatic Railway Track Extraction from Airborne LiDAR Point Clouds","authors":"Runyuan Zhang;Qiong Ding;Alex Hay-Man Ng;Dan Wang;Jiwei Deng;Mingwei Xu;Yuelin Hou","doi":"10.1109/JSTARS.2025.3555292","DOIUrl":"https://doi.org/10.1109/JSTARS.2025.3555292","url":null,"abstract":"With the rapid expansion of global railway networks, the demand for efficient railway operation and maintenance has grown significantly. This shift has underscored the need for automated and intelligent detection technologies to replace the traditional, labor-intensive methods in railway maintenance. To address the high cost, limited generalizability, and dependency on manual intervention that challenge conventional railway track extraction methods, this article proposes TripletA-Net, a novel railway track extraction model specifically designed for airborne light detection and ranging (LiDAR) data. TripletA-Net enables automatic and precise semantic segmentation of railway track point clouds. It incorporates a triplet attention mechanism to establish dependencies across different point cloud dimensions, adaptively assigning weights to capture both global and local features comprehensively. A weight-scaling strategy is introduced to further enhance the model's focus on track extraction. To reduce overfitting, the AdamW optimizer with decoupled weight decay is employed, addressing common issues encountered with small training datasets. Moreover, the intensity characteristics of the LiDAR point cloud are exploited in place of traditional color features, minimizing errors from multisource data matching. Ablation experiments validate the importance of the weight-scaling module and the AdamW optimizer in improving the model's accuracy. The triplet attention mechanism and intensity information contribute to enhanced precision and generalization. 
Together, these optimizations make TripletA-Net highly effective in track extraction, achieving a mean intersection over union of 94.36% on our airborne LiDAR track dataset (acquired from two geographically diverse regions, with a total track length of 2700 m), surpassing benchmark methods such as PointNet++ (87.87%), RandLA-Net (91.64%), and Stratified Transformer (89.49%).","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"18 ","pages":"9195-9210"},"PeriodicalIF":4.7,"publicationDate":"2025-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10943278","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143817856","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}