Latest Publications: IEEE Geoscience and Remote Sensing Letters (a publication of the IEEE Geoscience and Remote Sensing Society)

Bridging Temporal and Spatial–Spectral Features With Satellite Image Time Series: TAS2B-Net for Crop Semantic Segmentation
IF 4.4
Xiaohan Luo;Hangyu Dai;Vladimir Lysenko;Jinglu Tan;Ya Guo
{"title":"Bridging Temporal and Spatial–Spectral Features With Satellite Image Time Series: TAS2B-Net for Crop Semantic Segmentation","authors":"Xiaohan Luo;Hangyu Dai;Vladimir Lysenko;Jinglu Tan;Ya Guo","doi":"10.1109/LGRS.2025.3603294","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3603294","url":null,"abstract":"Semantic segmentation based on satellite image time series (SITS) is fundamental to a wide range of geospatial applications, including land cover mapping and urban development analysis. By integrating crop phenological dynamics over time, SITS provides richer spatiotemporal information than static satellite imagery. However, existing models fail to effectively process the temporal and spatial–spectral dimensions of SITS independently, leading to reduced segmentation accuracy. In this letter, we propose a temporal aggregation spatial–spectral bridge network (TAS2B-Net), a novel architecture designed to extract fine-grained crop features from SITS. The network consists of two key components: the pixel-aware grouping temporal integrator (PGTI), which captures temporal dependencies within pixel groups, and the edge-aware contextual fusion head (ECFH), which enhances spatial boundary and global structural representation. Additionally, we introduce a lightweight multiscale spectral decoder (LMSD) to aggregate contextual information across multiple spectral scales, further improving feature learning for semantic segmentation. Extensive experiments on the panoptic agricultural satellite time series (PASTIS) and MTLCC datasets show that the proposed network achieves mIoU scores of 68.91% and 84.59%, respectively, outperforming eight state-of-the-art (SOTA) methods and setting new benchmarks for SITS-based semantic segmentation.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":4.4,"publicationDate":"2025-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145007849","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
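The letter itself does not include code. For orientation only, the sketch below shows one generic way a SITS model can collapse the time axis with learned per-pixel weights; it illustrates the broad idea of temporal aggregation and is not the authors' PGTI module.

```python
# Hypothetical PyTorch sketch of per-pixel temporal aggregation for a SITS tensor.
# It illustrates the general idea of collapsing the time axis with learned weights;
# it is NOT the authors' PGTI module.
import torch
import torch.nn as nn

class TemporalAttentionPool(nn.Module):
    """Collapse a (B, T, C, H, W) image time series with learned per-pixel, per-date weights."""
    def __init__(self, channels: int):
        super().__init__()
        self.score = nn.Conv2d(channels, 1, kernel_size=1)  # one scalar score per pixel per date

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, c, h, w = x.shape
        scores = self.score(x.reshape(b * t, c, h, w)).reshape(b, t, 1, h, w)
        weights = torch.softmax(scores, dim=1)              # attention over the T acquisitions
        return (weights * x).sum(dim=1)                     # (B, C, H, W) aggregated feature map

# Example: 2 samples, 4 acquisition dates, 10 spectral bands, 64x64 patches
feats = torch.randn(2, 4, 10, 64, 64)
print(TemporalAttentionPool(10)(feats).shape)               # torch.Size([2, 10, 64, 64])
```

In the actual network, the pixel grouping and the edge-aware fusion head described in the abstract go well beyond this simple pooling.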
Dual Collaborative Sparse and Total Variation Regularization for Unmixing-Based Change Detection
IF 4.4
Shile Zhang;Yuxing Zhao;Zhihan Liu;Xiangming Jiang;Maoguo Gong
{"title":"Dual Collaborative Sparse and Total Variation Regularization for Unmixing-Based Change Detection","authors":"Shile Zhang;Yuxing Zhao;Zhihan Liu;Xiangming Jiang;Maoguo Gong","doi":"10.1109/LGRS.2025.3603339","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3603339","url":null,"abstract":"Hyperspectral change detection is critical for analyzing the temporal evolution of the feature components in multitemporal hyperspectral images. However, existing methods often fall short of fully exploiting the spatiotemporal–spectral correlations within these images, thereby limiting their accuracy and robustness. This letter introduces a novel hyperspectral change detection method, termed dual collaborative sparse unmixing via variable splitting augmented Lagrangian and total variation (DCLSUnSAL-TV). By integrating dual collaborative sparsity and total variation (TV) regularizers, this method capitalizes on the local similarity of changes in the feature components, leveraging the low-rank property of hyperspectral difference images (HSDIs) and their inherent spatial–spectral correlations. A customized abundancewise truncation and ensemble strategy is designed to obtain the change map by aggregating the subpixel-level changes with respect to each endmember. Comprehensive comparison and ablation experiments demonstrate the effectiveness of the proposed method in improving the accuracy of change detection. The source code is available at: <uri>https://github.com/2alsbz/DCLSUnSAL_TV</uri>","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":4.4,"publicationDate":"2025-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144998230","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
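The exact cost function is specified in the letter; purely for orientation, a generic collaborative-sparse-plus-TV unmixing objective of the family this method builds on can be written (under assumptions about the notation) as

\min_{\mathbf{X} \ge 0} \; \tfrac{1}{2}\lVert \mathbf{A}\mathbf{X} - \mathbf{Y} \rVert_F^2 \;+\; \lambda \,\lVert \mathbf{X} \rVert_{2,1} \;+\; \lambda_{\mathrm{TV}}\,\mathrm{TV}(\mathbf{X}),

where Y collects the pixels of the hyperspectral difference image, A is the endmember library, X is the abundance matrix, the l_{2,1} norm enforces collaborative (row-wise) sparsity, and TV(X) is the total variation penalty. The dual collaborative term and the abundance-wise truncation described in the abstract extend this basic form.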
PhaseMamba: A Mamba-Based Deep Learning Model for Seismic Phase Picking and Detection
IF 4.4
Yunfei Zhou;Haoran Ren;Haofeng Wu
{"title":"PhaseMamba: A Mamba-Based Deep Learning Model for Seismic Phase Picking and Detection","authors":"Yunfei Zhou;Haoran Ren;Haofeng Wu","doi":"10.1109/LGRS.2025.3603915","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3603915","url":null,"abstract":"Seismic phase picking is a critical task for earthquake detection and localization, where traditional methods rely on manual parameter tuning and have great difficulty to capture complex temporal features. In this letter, we propose PhaseMamba, an automated seismic phase picking and detection model that leverages deep learning through a U-shaped architecture with skip connections for effective time-domain seismic signal analysis, while incorporating a state-space Mamba model to enhance long-term contextual dependency extraction capabilities. For training, validation, and testing, we utilize the open-source global seismic dataset, Stanford Earthquake Dataset (STEAD), which provides a diverse range of high-quality seismic waveforms. Comprehensive experiments are conducted on this dataset to evaluate the model’s performance. The results demonstrate that PhaseMamba achieves superior performance in P-wave arrival picking compared with all state-of-the-art models (PhaseNet, EQTransformer, and SeisT), while showing comparable or slightly lower performance in S-wave arrival picking. These findings suggest that PhaseMamba is a promising tool for advancing seismic phase picking and contributing to broader seismic research applications.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":4.4,"publicationDate":"2025-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145073174","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
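Whatever the backbone, deep pickers of this kind typically output per-sample P/S probability traces that are converted into discrete picks in post-processing. The sketch below shows that last step generically; it is not PhaseMamba's inference code, and the 0.3 threshold and 1 s minimum separation are hypothetical choices.

```python
# Generic post-processing sketch: turning a per-sample phase-probability trace
# (the usual output of deep pickers) into discrete arrival picks. Illustration only,
# not PhaseMamba's inference code; threshold and separation are hypothetical.
import numpy as np
from scipy.signal import find_peaks

def pick_arrivals(prob: np.ndarray, fs: float = 100.0, threshold: float = 0.3):
    """prob: 1-D probability trace for one phase (P or S); returns pick times (s) and heights."""
    peaks, props = find_peaks(prob, height=threshold, distance=int(1.0 * fs))
    return peaks / fs, props["peak_heights"]

# Toy trace: one synthetic "arrival" at t = 12 s on 100 Hz data
fs = 100.0
t = np.arange(0.0, 60.0, 1.0 / fs)
prob = 0.9 * np.exp(-0.5 * ((t - 12.0) / 0.2) ** 2)
times, heights = pick_arrivals(prob, fs)
print(times, heights)   # ~[12.] [0.9]
```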
Super Equatorial Plasma Bubbles Observed Over South America During the October 10 and 11, 2024 Strong Geomagnetic Storm
IF 4.4
Yumei Li;Hong Zhang;Fan Xu;Qiong Ding;Long Tang
{"title":"Super Equatorial Plasma Bubbles Observed Over South America During the October 10 and 11, 2024 Strong Geomagnetic Storm","authors":"Yumei Li;Hong Zhang;Fan Xu;Qiong Ding;Long Tang","doi":"10.1109/LGRS.2025.3603418","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3603418","url":null,"abstract":"On October 10, 2024, the second most intense geomagnetic storm of solar cycle 25 to date took place. This storm was triggered by multiple coronal mass ejections (CMEs) that arrived at Earth from October 7 to 9, causing significant geomagnetic disturbances. The geomagnetic Kp index peaked at its highest level (Kp = 9), indicating a red alert status. This study investigated equatorial plasma bubbles (EPBs) over South America during this geomagnetic storm using ground-based Global Navigation Satellite System (GNSS) rate of total electron content index (ROTI) and Global-scale Observations of the Limb and Disk (GOLD) satellite oxygen atom (OI) 135.6-nm radiance wavelength data. The analysis revealed that the EPBs observed in South America lasted for an unusually long duration of approximately 14 h, from around 23:00 UT (18:00 LT) on October 10 to about 14:00 UT (9:00 LT) on October 11. In addition, these super EPBs extended over a wide latitude range, reaching approximately 35°N and down to 50°S, gradually forming an inverted C-shaped pattern. The observed characteristics of the EPBs are likely associated with changes in solar wind parameters and the effects of the prompt penetration electric field (PPEF).","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":4.4,"publicationDate":"2025-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144998003","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
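For context, ROTI is commonly computed as the standard deviation of the rate of TEC change over a short sliding window. The sketch below uses the conventional 30 s sampling and 5 min window; these parameter choices are typical, not necessarily those used in the study.

```python
# Generic ROTI computation from a slant TEC series. ROT is the rate of TEC change in
# TECU/min; ROTI is its standard deviation over a sliding window. The 30 s sampling and
# 5 min window are conventional choices, not necessarily those of the authors.
import numpy as np

def roti(tec: np.ndarray, dt_s: float = 30.0, window_min: float = 5.0) -> np.ndarray:
    rot = np.diff(tec) / (dt_s / 60.0)             # TECU per minute
    n = int(window_min * 60.0 / dt_s)              # samples per window (10 for 30 s data)
    out = np.full(rot.size, np.nan)
    for i in range(n - 1, rot.size):
        out[i] = np.std(rot[i - n + 1 : i + 1])
    return out

# Toy series: quiet TEC followed by a fluctuating "bubble-like" segment
rng = np.random.default_rng(0)
tec = np.concatenate([np.full(120, 20.0), 20.0 + rng.normal(0.0, 1.5, 120)])
print(np.nanmax(roti(tec)))                        # ROTI is clearly elevated in the noisy part
```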
Compensation Approach to Synchronization Errors in Distributed MIMO-SAR System
IF 4.4
Wanqing Ma;Zhong Xu;Jinshan Ding;Ljubisa Stankovic
{"title":"Compensation Approach to Synchronization Errors in Distributed MIMO-SAR System","authors":"Wanqing Ma;Zhong Xu;Jinshan Ding;Ljubisa Stankovic","doi":"10.1109/LGRS.2025.3603396","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3603396","url":null,"abstract":"Distributed multiple-input–multiple-output synthe- tic aperture radar (MIMO-SAR) provides a new paradigm for radar imaging, which utilizes multiple distributed sensors to improve imaging performance. However, synchronization errors have a significant impact on imaging quality in these systems. The transmitted and received echo signals exhibit reciprocity, which can be exploited to estimate synchronization errors. By comparing echoes between different sensors, the synchronization errors could be estimated and compensated. This work presents a synchronization error-resistant imaging algorithm for distributed MIMO-SAR systems. First, the synchronization errors are estimated in the range domain by comparing the reciprocal echo signal pairs. Then, the errors are compensated during a fast back-projection (BP)-based SAR imaging process. The effectiveness of the proposed algorithm has been verified by experiments.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":4.4,"publicationDate":"2025-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144998197","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
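As a much-simplified illustration of the reciprocity idea, the relative timing offset between a pair of reciprocal echoes can be estimated by cross-correlation. The sketch below is a generic sample-level delay estimator, not the authors' range-domain procedure.

```python
# Much-simplified sketch of estimating a timing offset between two reciprocal echoes
# by cross-correlation (sample-level resolution only). Generic delay estimator,
# not the authors' range-domain synchronization-error estimator.
import numpy as np

def estimate_delay(s1: np.ndarray, s2: np.ndarray, fs: float) -> float:
    """Return the delay of s2 relative to s1, in seconds."""
    corr = np.correlate(s2, s1, mode="full")
    lag = int(np.argmax(np.abs(corr))) - (len(s1) - 1)
    return lag / fs

fs = 1e6                                   # 1 MHz sampling (arbitrary choice)
t = np.arange(0.0, 1e-3, 1.0 / fs)
pulse = np.sinc(2e4 * (t - 5e-4))          # common reference waveform
delayed = np.roll(pulse, 7)                # simulate a 7-sample synchronization error
print(estimate_delay(pulse, delayed, fs))  # ~7e-06 s
```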
YOLO-ALS: Dynamic Convolution With Adaptive Local Context for Remote Sensing Target Detection
IF 4.4
Ruyi Feng;Zhixin Zhao;Tao Zhao;Lizhe Wang
{"title":"YOLO-ALS: Dynamic Convolution With Adaptive Local Context for Remote Sensing Target Detection","authors":"Ruyi Feng;Zhixin Zhao;Tao Zhao;Lizhe Wang","doi":"10.1109/LGRS.2025.3602896","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3602896","url":null,"abstract":"Remote sensing image target detection plays a pivotal role in Earth observation, offering substantial value for applications such as urban planning and environmental monitoring. Due to the significant scale variations among targets, complex backgrounds with dense small object distributions, and strong intertarget scene correlations, existing target detection methods usually fail to effectively model target relationships and contextual information for remote sensing imagery. To address these limitations, we proposed YOLO-ALS, a novel remote sensing target detection network that integrates adaptive local scene context. The proposed framework introduces three key points. First, a full-dimensional dynamic convolution reconstruction C2f module enhances target feature representation by overcoming local context extraction limitations and target co-occurrence prior deficiencies. Second, an adaptive local scene context module (ALSCM) dynamically integrates multiscale receptive field features through spatial attention, enabling background window adaptive selection and cross-scale feature alignment. Finally, a co-occurrence matrix-integrated classification auxiliary module mines target association rules through data-driven learning, correcting classification probabilities in low-confidence areas by combining high-confidence areas’ co-occurrence information with an optimal threshold, which can significantly reduce missed detection rates. Comprehensive experiments on multiple public remote sensing datasets demonstrate the superiority of the proposed method through extensive ablation studies and comparative analyses. The proposed method has achieved state-of-the-art performance while addressing the unique challenges of remote sensing target detection.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":4.4,"publicationDate":"2025-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144998079","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
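To make the co-occurrence idea concrete, the toy sketch below nudges low-confidence class scores toward classes that co-occur with the high-confidence detections in the same image. The threshold, blending weight, and function name are hypothetical and only loosely mirror the auxiliary module described in the abstract.

```python
# Toy sketch of co-occurrence-based correction of low-confidence class scores.
# Hypothetical blending rule; NOT the module described in the letter.
import numpy as np

def cooccurrence_correction(probs, confidences, cooc, conf_thresh=0.5, alpha=0.3):
    """probs: (N, K) class probabilities; cooc: (K, K) row-normalized co-occurrence matrix."""
    probs = probs.copy()
    trusted = confidences >= conf_thresh
    if not trusted.any():
        return probs
    context = cooc[probs[trusted].argmax(axis=1)].mean(axis=0)  # classes suggested by trusted boxes
    low = ~trusted
    probs[low] = (1.0 - alpha) * probs[low] + alpha * context   # soften uncertain predictions
    probs[low] = probs[low] / probs[low].sum(axis=1, keepdims=True)
    return probs

cooc = np.array([[0.6, 0.4], [0.4, 0.6]])          # e.g. class 0 often appears with class 1
probs = np.array([[0.9, 0.1], [0.45, 0.55]])
conf = np.array([0.9, 0.3])
print(cooccurrence_correction(probs, conf, cooc))
```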
CAFENet: Change-Aware and Fourier Feature Exchange Network for Cropland Change Detection in Remote Sensing Images
IF 4.4
Min Duan;Yuanxu Wang;Lu Bai;Yujiang He;Zhichao Zhao;Yurong Qian;Xuanchen Liu
{"title":"CAFENet: Change-Aware and Fourier Feature Exchange Network for Cropland Change Detection in Remote Sensing Images","authors":"Min Duan;Yuanxu Wang;Lu Bai;Yujiang He;Zhichao Zhao;Yurong Qian;Xuanchen Liu","doi":"10.1109/LGRS.2025.3602854","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3602854","url":null,"abstract":"The accelerated nonagriculturalization of cropland has increasingly highlighted the importance of remote sensing (RS) change detection (CD) for monitoring land-use transitions. However, variations in RS imaging conditions and irregular cropland changes often result in noisy or inaccurate change maps. To address these challenges, we propose a novel deep learning framework named change-aware and Fourier feature exchange network (CAFENet). The method introduces a dedicated change-aware (CA) branch to extract discriminative change cues from pseudo-video sequences and integrates them into the backbone network. A Fourier feature exchange module (FFEM) is designed to reduce brightness, color, and style discrepancies between bitemporal images, thereby enhancing robustness under varying acquisition conditions. Fused features are further refined using an efficient multiscale attention mechanism (EMSA) to capture rich spatial details. In the decoding stage, a dynamic content-aware upsampling module (DCAU), together with skip connections, progressively recovers spatial resolution while preserving structural information. The experimental results on three datasets—CLCD, SW-CLCD, and LuojiaSET-CLCD—demonstrate that CAFENet achieves superior performance over state-of-the-art methods in terms of both accuracy and robustness, particularly in complex agricultural landscapes.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":4.4,"publicationDate":"2025-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145061831","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
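The FFEM is not specified beyond the abstract; a well-known related mechanism is amplitude-spectrum mixing in the Fourier domain (as in Fourier domain adaptation), sketched below purely as an analogy for reducing brightness and style discrepancies between acquisitions.

```python
# Sketch of Fourier-domain style exchange between two co-registered patches, in the
# spirit of amplitude-spectrum mixing. General mechanism only; NOT CAFENet's FFEM.
import numpy as np

def exchange_amplitude(img_a: np.ndarray, img_b: np.ndarray, beta: float = 0.5) -> np.ndarray:
    """Blend img_b's FFT amplitude into img_a while keeping img_a's phase (content)."""
    fa = np.fft.fft2(img_a, axes=(0, 1))
    fb = np.fft.fft2(img_b, axes=(0, 1))
    amp_mix = (1.0 - beta) * np.abs(fa) + beta * np.abs(fb)
    out = np.fft.ifft2(amp_mix * np.exp(1j * np.angle(fa)), axes=(0, 1)).real
    return np.clip(out, 0.0, 1.0)

t1 = np.random.rand(64, 64, 3)             # two acquisitions with different "styles"
t2 = np.clip(0.6 * t1 + 0.3, 0.0, 1.0)     # brighter, lower-contrast version
print(exchange_amplitude(t1, t2).shape)    # (64, 64, 3)
```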
DL-DSFN: Dual-Layer Dynamic Scattering Filtering for Robust SAR Target Recognition
IF 4.4
Yuying Zhu;Qian Wang;Muyu Hou
{"title":"DL-DSFN: Dual-Layer Dynamic Scattering Filtering for Robust SAR Target Recognition","authors":"Yuying Zhu;Qian Wang;Muyu Hou","doi":"10.1109/LGRS.2025.3602769","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3602769","url":null,"abstract":"Despite the impressive performance of deep learning in synthetic aperture radar (SAR) automatic target recognition (ATR), its generalization capability remains a critical concern, particularly when facing domain shifts between training and testing environments. Considering the inherent robustness and interpretability of electromagnetic scattering characteristics, we explore leveraging these properties to guide deep learning training, thereby improving generalization. To this end, we propose a dual-layer dynamic scattering filtering network (DL-DSFN) that leverages external physical priors to guide the learning process. The first layer adaptively generates convolutional kernels conditioned on scattering cues, enabling localized modeling of target-specific scattering phenomena. The second layer establishes a cross-domain mapping from SAR imagery to scattering features, facilitating automatic extraction of salient scattering characteristics. Furthermore, an adaptive mechanism for determining the number of scattering centers is also incorporated. Experiments conducted under significant variations between training and testing sets demonstrate that our method achieves competitive recognition accuracy while maintaining low computational cost, with only approximately 0.16 M parameters and 0.002 G FLOPs.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":4.4,"publicationDate":"2025-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145073160","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
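As an illustration of kernels generated from an external conditioning vector (the general mechanism of dynamic, cue-conditioned convolution, not the authors' scattering-guided first layer), a PyTorch sketch using the standard grouped-convolution trick for per-sample kernels:

```python
# Hypothetical sketch of a convolution whose kernels are generated from an external
# conditioning vector (e.g., a scattering-center descriptor). General mechanism only;
# NOT the authors' first-layer design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionedConv(nn.Module):
    def __init__(self, cond_dim: int, in_ch: int, out_ch: int, k: int = 3):
        super().__init__()
        self.in_ch, self.out_ch, self.k = in_ch, out_ch, k
        self.gen = nn.Linear(cond_dim, out_ch * in_ch * k * k)   # kernel generator

    def forward(self, x: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        b, _, h, w = x.shape
        weight = self.gen(cond).view(b * self.out_ch, self.in_ch, self.k, self.k)
        out = F.conv2d(x.reshape(1, b * self.in_ch, h, w), weight,
                       padding=self.k // 2, groups=b)            # one kernel set per sample
        return out.view(b, self.out_ch, h, w)

x = torch.randn(4, 1, 88, 88)                     # single-channel SAR chips
cond = torch.randn(4, 16)                         # per-chip scattering descriptor (hypothetical)
print(ConditionedConv(16, 1, 8)(x, cond).shape)   # torch.Size([4, 8, 88, 88])
```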
Aerial Image Semantic Segmentation Method Based on Cross-Modal Hierarchical Feature Fusion
IF 4.4
Jinglei Bai;Jinfu Yang;Tao Xiang;Shu Cai
{"title":"Aerial Image Semantic Segmentation Method Based on Cross-Modal Hierarchical Feature Fusion","authors":"Jinglei Bai;Jinfu Yang;Tao Xiang;Shu Cai","doi":"10.1109/LGRS.2025.3602267","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3602267","url":null,"abstract":"Multimodal aerial image semantic segmentation enables fine-grained land cover classification by integrating data from different sensors, yet it remains challenged by information redundancy, intermodal feature discrepancies, and class confusion in complex scenes. To address these issues, we propose a cross-modal hierarchical feature fusion network (CMHFNet) based on an encoder–decoder architecture. The encoder incorporates a pixelwise attention-guided fusion module (PAFM) and a multistage progressive fusion transformer (MPFT) to suppress redundancy and model long-range intermodal dependencies and scale variations. The decoder introduces a residual information-guided feature compensation mechanism to recover spatial details and mitigate class ambiguity. The experiments on DDOS, Vaihingen, and Potsdam datasets demonstrate that the CMHFNet surpasses state-of-the-art methods, validating its effectiveness and practical value.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":4.4,"publicationDate":"2025-08-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144934365","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
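A minimal sketch of pixel-wise gated fusion of two modality feature maps (e.g., optical and elevation) follows; it is a generic formulation shown only to illustrate the class of mechanism, and the PAFM in the letter is more elaborate.

```python
# Minimal sketch of pixel-wise gated fusion of two modality feature maps.
# Generic formulation; NOT the PAFM described in the letter.
import torch
import torch.nn as nn

class PixelGateFusion(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),                              # per-pixel, per-channel weight in [0, 1]
        )

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
        g = self.gate(torch.cat([feat_a, feat_b], dim=1))
        return g * feat_a + (1.0 - g) * feat_b         # convex combination of the two modalities

rgb = torch.randn(2, 32, 128, 128)
dsm = torch.randn(2, 32, 128, 128)
print(PixelGateFusion(32)(rgb, dsm).shape)             # torch.Size([2, 32, 128, 128])
```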
SOD-Net: A Small Ship Object Detection Network for SAR Images
IF 4.4
Junpeng Ai;Liang Luo;Shijie Wang;Liandong Hao
{"title":"SOD-Net: A Small Ship Object Detection Network for SAR Images","authors":"Junpeng Ai;Liang Luo;Shijie Wang;Liandong Hao","doi":"10.1109/LGRS.2025.3602092","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3602092","url":null,"abstract":"In ship detection using synthetic aperture radar (SAR), small targets and complex background noise remain key challenges that restrict the detection performance. In this letter, we propose a small-target ship detection network based on a small object detection network (SOD-Net) using SAR images. First, we construct a U-shaped feature preextraction network and adopt a spatial pixel attention (SPA) mechanism to enhance the initial feature representation ability. Second, a pinwheel convolution (PC) convolutional neural network (CNN)-based cross-scale feature fusion (CCFF) module is designed. By expanding the receptive field through asymmetric convolution kernels and reducing the parameter scale, features of small targets are properly captured. Evaluation results show that the proposed SOD-Net achieves evaluation accuracies of 98.4% and 91.0% on the benchmark SSDD and HRSID datasets (mean average precision (mAP) at an intersection over union of 0.5), respectively, with only 28 million parameters, thus outperforming state-of-the-art models (e.g., YOLOv8 and D-FINE). Visual analysis confirmed that the SOD-Net is robust in scenarios, including complex sea conditions, dense port berthing, and noise interference, thereby providing an accurate and efficient solution for SAR maritime monitoring.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":4.4,"publicationDate":"2025-08-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144914215","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
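The pinwheel convolution is not detailed in the abstract; as a loose analogy only, asymmetric 1xk and kx1 kernels are a common way to enlarge the receptive field at low parameter cost, as sketched in the hypothetical block below (not the authors' PC module).

```python
# Loose analogy only: asymmetric 1xk and kx1 kernels widen the receptive field at low
# parameter cost. Hypothetical block; NOT the pinwheel convolution (PC) in the letter.
import torch
import torch.nn as nn

class AsymmetricConvBranch(nn.Module):
    def __init__(self, channels: int, k: int = 5):
        super().__init__()
        self.horizontal = nn.Conv2d(channels, channels, (1, k), padding=(0, k // 2))
        self.vertical = nn.Conv2d(channels, channels, (k, 1), padding=(k // 2, 0))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.horizontal(x) + self.vertical(x)   # fuse the two orientations

block = AsymmetricConvBranch(16)
full = nn.Conv2d(16, 16, 5, padding=2)
print(block(torch.randn(1, 16, 80, 80)).shape)         # torch.Size([1, 16, 80, 80])
print(sum(p.numel() for p in block.parameters()),      # 2592 parameters for the two asymmetric kernels...
      sum(p.numel() for p in full.parameters()))       # ...vs 6416 for a full 5x5 kernel
```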