IEEE Signal Processing Letters: Latest Articles

Robust Joint Design of MIMO Radar Waveform and Filter via Structured Covariance Matrix
IF 3.2 · CAS Tier 2 (Engineering & Technology)
IEEE Signal Processing Letters Pub Date : 2025-06-19 DOI: 10.1109/LSP.2025.3581145
Guohao Sun;Yingkui Zhang;Zhaoke Ning;Yuandong Ji;Zhiquan Ding
Abstract: Radar waveform design performance is compromised by insufficient prior information regarding radiators and clutter. This letter addresses this challenge by leveraging structured covariance matrices to enhance the robustness of multiple-input multiple-output (MIMO) radar waveform design. We explore the joint optimization of MIMO radar transmit waveforms and receive filters in environments with radiators and clutter, even in the absence of adequate prior information. To tackle this issue, an iterative method is employed under worst-case assumptions regarding the covariance matrices of the radiators and clutter. By applying the alternating direction method of multipliers (ADMM) algorithm, this letter introduces a novel approach for designing waveforms and structuring the covariance matrices' parameters in spectrally crowded and cluttered environments. Simulation results demonstrate that the proposed method significantly improves performance in mismatched environments.
(IEEE Signal Processing Letters, vol. 32, pp. 2539-2543)
Citations: 0
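The letter's worst-case joint design is beyond a short sketch, but the receive-filter half of such problems has a well-known closed form: for a fixed transmit waveform with target signature s and known interference-plus-noise covariance R, the SINR-optimal filter is proportional to R⁻¹s. A minimal NumPy illustration; every signature, covariance, and helper name below is a synthetic stand-in, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16  # receive filter length (illustrative)

# Target signature for a fixed transmit waveform (synthetic).
s = np.exp(1j * np.pi * 0.3 * np.arange(N))

# Interference-plus-noise covariance: a few strong clutter/radiator
# components plus white noise.
R = np.eye(N, dtype=complex)
for f in (0.1, -0.25, 0.4):
    v = np.exp(1j * np.pi * f * np.arange(N))
    R += 5.0 * np.outer(v, v.conj())

def sinr(w):
    # Output SINR of filter w against signature s and covariance R.
    return np.abs(w.conj() @ s) ** 2 / np.real(w.conj() @ R @ w)

w_opt = np.linalg.solve(R, s)  # w ∝ R^{-1} s maximizes the SINR
best = sinr(w_opt)

# Sanity check: no random filter should beat the closed-form optimum.
assert all(sinr(rng.standard_normal(N) + 1j * rng.standard_normal(N))
           <= best + 1e-9 for _ in range(200))
print(f"optimal SINR: {best:.2f}")
```

The joint design alternates between such a filter update and a waveform update under the worst-case covariances; only the filter step is shown here.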
Efficient Detection of QIM-Based VoIP Steganography Using Adjacent Frame Integration and Multi-Codeword Priority Attention
IEEE Signal Processing Letters Pub Date : 2025-06-17 DOI: 10.1109/LSP.2025.3580496
Cheng Zhang;Yue Yan;Shujuan Jiang;Zhong Chen
Abstract: With the growing volume of VoIP traffic, many steganography algorithms exploit VoIP speech as a carrier, posing a threat to cybersecurity. Among them, quantization index modulation (QIM)-based VoIP steganography has demonstrated excellent stealth, making detection difficult. In recent years, more studies have focused on developing feasible steganalysis methods for detecting QIM-based VoIP steganography. Previous work has mostly focused on improving detection performance while neglecting efficiency, leaving lightweight models under-explored. In online detection scenarios, detection efficiency is crucial: the long inference time of large models can delay warnings, and their high computational requirements make them difficult to deploy on remote devices, reducing their practical value. In this letter, we propose a simple yet efficient model named EQVS (efficient QIM-based VoIP steganalysis network). In EQVS, the fold and unfold operations are redesigned around the characteristics of VoIP speech samples and the requirements of the steganalysis task, to avoid disrupting correlation features. Then, a multi-codeword priority attention mechanism, inspired by the multi-query attention and retention mechanisms, redefines the calculation of the query, key, and value matrices, as well as the normalization and softmax operations, to further reduce computational resource consumption in a single attention head. Experimental results demonstrate that EQVS outperforms other state-of-the-art models in both detection performance and efficiency.
(IEEE Signal Processing Letters, vol. 32, pp. 2534-2538)
Citations: 0
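For readers unfamiliar with the carrier technique being detected: QIM hides a bit by quantizing a host sample onto one of two interleaved lattices, and the receiver recovers the bit from which lattice the sample sits on. A scalar sketch with an illustrative step size (real VoIP schemes embed in codec codewords, not raw samples, and `DELTA` here is a made-up parameter):

```python
import numpy as np

DELTA = 4.0  # quantization step: larger = more robust, more distortion

def qim_embed(x, bits):
    # Each bit selects one of two interleaved quantizer lattices,
    # offset from each other by DELTA / 2.
    d = bits * DELTA / 2.0
    return DELTA * np.round((x - d) / DELTA) + d

def qim_extract(y):
    # Decode by choosing the lattice whose nearest point is closer.
    d0 = np.abs(y - qim_embed(y, np.zeros_like(y)))
    d1 = np.abs(y - qim_embed(y, np.ones_like(y)))
    return (d1 < d0).astype(int)

rng = np.random.default_rng(1)
x = rng.normal(0, 10, 64)          # host samples (stand-in for codewords)
bits = rng.integers(0, 2, 64)      # hidden message
y = qim_embed(x, bits)
assert np.array_equal(qim_extract(y), bits)
```

Extraction survives any perturbation smaller than DELTA/4, which is exactly the stealth/robustness trade-off that makes QIM attractive to steganographers and hard for steganalyzers.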
Frequency Increment Optimization With FDA-MIMO Radar for Target Localization
IEEE Signal Processing Letters Pub Date : 2025-06-16 DOI: 10.1109/LSP.2025.3580319
Lan Lan;Kunkun Li;Jingwei Xu;Guisheng Liao;Hing Cheung So
Abstract: This letter presents an optimization approach for the frequency increments of Frequency Diverse Array (FDA)-Multiple-Input Multiple-Output (MIMO) radar for target localization. We first formulate the problem as minimizing the Cramér-Rao Bounds (CRBs) for both range and angle estimation, subject to practical constraints on the frequency increments. To facilitate optimization, the objective function is mathematically transformed into a maximization problem, leveraging the inherent non-negativity of both its numerator and denominator. To address the resulting non-convex and NP-hard optimization problem, a Minorization-Maximization (MM)-Maximum Block Improvement (MBI) algorithm is devised by partitioning the frequency increment vector into distinct blocks, allowing for alternating maximization. In particular, each frequency increment is refined with the MM algorithm while the others are held fixed, and only the block yielding the maximum objective increase is updated in each iteration. Simulation results demonstrate the excellent target localization performance of the proposed approach.
(IEEE Signal Processing Letters, vol. 32, pp. 2529-2533)
Citations: 0
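The objective being minimized, the CRB, is the inverse of the Fisher information. A one-parameter toy example shows the mechanics and the expected monotone behavior in sample count and SNR; this is frequency estimation of a unit-amplitude complex exponential with known amplitude and phase, not the paper's joint range-angle FDA-MIMO model, and `crb_freq` is an illustrative helper:

```python
import numpy as np

def crb_freq(N, snr_lin):
    # For s_n = exp(i 2 pi f n), n = 0..N-1, in white noise of variance
    # sigma^2 = 1 / snr_lin, the Fisher information for f is
    #   FIM = (2 / sigma^2) * sum_n |d s_n / d f|^2
    #       = 2 * snr_lin * (2 pi)^2 * sum_n n^2,
    # and the CRB is its inverse.
    n = np.arange(N)
    fim = 2 * snr_lin * (2 * np.pi) ** 2 * np.sum(n.astype(float) ** 2)
    return 1.0 / fim

assert crb_freq(64, 10.0) < crb_freq(16, 10.0)   # more samples: tighter bound
assert crb_freq(32, 20.0) < crb_freq(32, 10.0)   # higher SNR: tighter bound
```

In the letter the same machinery is applied to a vector of frequency increments, which couples the range and angle CRBs and makes the minimization non-convex.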
Optimal Analysis of Consensus Algorithms for $r$-Nearest Ring Networks
IEEE Signal Processing Letters Pub Date : 2025-06-12 DOI: 10.1109/LSP.2025.3578956
V Sateeshkrishna Dhuli;Said Kouachi;Stefan Werner
Abstract: Analyzing consensus algorithms on $r$-nearest ring networks is critical for understanding the efficiency and reliability of large-scale distributed networks. The special properties of the $r$-nearest neighbor ring offer multiple communication paths, accelerate convergence, and improve the robustness of consensus algorithms. However, this increased connectivity also complicates performance evaluation, since key metrics are typically defined in terms of Laplacian eigenvalues; in particular, estimating the largest eigenvalue of the Laplacian matrix remains a major challenge for $r$-nearest neighbor ring networks. We reformulate the maximization of the Laplacian eigenvalue as the minimization of a Dirichlet kernel. First, we prove via a shift argument that the first and last lobes of the Dirichlet kernel are the deepest. Next, we apply local smoothness analysis and integer rounding arguments to show that at least one discrete sample attains the global minimum in that lobe. This yields a rigorous analysis that precisely locates and computes the largest eigenvalue, giving exact expressions for key performance metrics (convergence time, first- and second-order network coherence, and maximum communication delay) with reduced computational complexity. In addition, our findings illustrate the effect of $r$ on the performance of consensus algorithms in large-scale networks.
(IEEE Signal Processing Letters, vol. 32, pp. 2494-2498)
Citations: 0
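The identity underlying the reformulation is that the $r$-nearest ring is a circulant graph, so its Laplacian eigenvalues are $\lambda_k = (2r+1) - D_r(2\pi k/N)$, where $D_r(x) = \sum_{j=-r}^{r} e^{ijx}$ is the Dirichlet kernel; maximizing $\lambda$ therefore reduces to minimizing $D_r$ over the discrete samples. A short NumPy check of that identity:

```python
import numpy as np

def ring_laplacian(N, r):
    # Laplacian of the ring where each node connects to its r nearest
    # neighbors on each side (so every node has degree 2r).
    L = 2 * r * np.eye(N)
    for j in range(1, r + 1):
        L -= np.roll(np.eye(N), j, axis=1) + np.roll(np.eye(N), -j, axis=1)
    return L

def dirichlet_eigs(N, r):
    # Circulant structure gives lambda_k = (2r + 1) - D_r(2 pi k / N),
    # with D_r(x) = 1 + 2 * sum_{j=1}^{r} cos(j x) the Dirichlet kernel.
    x = 2 * np.pi * np.arange(N) / N
    D = 1 + 2 * np.sum([np.cos(j * x) for j in range(1, r + 1)], axis=0)
    return (2 * r + 1) - D

N, r = 20, 3
lam_direct = np.sort(np.linalg.eigvalsh(ring_laplacian(N, r)))
lam_formula = np.sort(dirichlet_eigs(N, r))
assert np.allclose(lam_direct, lam_formula, atol=1e-9)
print(f"largest eigenvalue: {lam_formula[-1]:.4f}")
```

The letter's contribution is locating which discrete sample $k$ minimizes $D_r$ analytically, rather than computing all $N$ eigenvalues as this check does.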
Blind Image Super-Resolution With Efficient Network Design Using Frequency Domain Information
IEEE Signal Processing Letters Pub Date : 2025-06-11 DOI: 10.1109/LSP.2025.3578957
Sunwoo Cho;Nam Ik Cho
Abstract: Blind Image Super-Resolution (BSR) tackles the challenge of enhancing image resolution degraded by unknown kernels. Although existing BSR models have achieved remarkable results, they often demand significant computational resources to handle various degradation kernels. Recent studies have used large models with 4 M to 20 M parameters, at computational costs exceeding 200 G Multi-Adds. In this paper, we introduce a blind super-resolution method, the first lightweight non-iterative approach for BSR. By leveraging the connection between degradation kernel shapes and the frequency-domain characteristics of low-resolution images, we simplify the kernel estimation process, thereby reducing overall model complexity. Additionally, we employ a constrained least squares approach to refine the low-resolution image using the estimated kernel, which serves as the input to the hierarchical Transformer blocks. Our approach delivers competitive performance while requiring only 1.7 M parameters and 2 G Multi-Adds. Experimental results demonstrate that our method achieves comparable results with a significantly smaller network.
(IEEE Signal Processing Letters, vol. 32, pp. 2524-2528)
Citations: 0
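The refinement step named in the abstract, constrained least squares (CLS) restoration with an estimated kernel, has a classical frequency-domain form: X̂ = H*Y / (|H|² + γ|P|²), with P the transform of a Laplacian smoothness operator. A generic sketch under a known synthetic blur (`cls_restore` and its parameters are illustrative, not the paper's network front end):

```python
import numpy as np

def cls_restore(y, h, gamma=0.01):
    """Constrained least squares restoration of blurred image y given
    blur kernel h (both 2-D arrays), with a Laplacian smoothness
    constraint weighted by gamma."""
    H = np.fft.fft2(h, s=y.shape)
    # Discrete Laplacian as the smoothness (high-pass) operator.
    p = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]], dtype=float)
    P = np.fft.fft2(p, s=y.shape)
    Y = np.fft.fft2(y)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + gamma * np.abs(P) ** 2)
    return np.real(np.fft.ifft2(X))

rng = np.random.default_rng(2)
x = rng.random((32, 32))                             # stand-in "sharp" image
h = np.outer([0.25, 0.5, 0.25], [0.25, 0.5, 0.25])   # separable blur kernel
y = np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(h, s=x.shape)))
x_hat = cls_restore(y, h, gamma=1e-4)
# The restoration should be closer to x than the blurred input is.
assert np.linalg.norm(x_hat - x) < np.linalg.norm(y - x)
```

The γ term keeps the inversion stable where |H| is near zero, which is why CLS is preferred over naive inverse filtering when the kernel is only an estimate.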
SAFT: Learning Scale-Aware Inter-Series Correlations for Time Series Forecasting
IEEE Signal Processing Letters Pub Date : 2025-06-11 DOI: 10.1109/LSP.2025.3578914
Xin-Yi Li;Yu-Bin Yang
Abstract: Multivariate time series forecasting faces the fundamental challenge of modeling complex temporal dependencies while capturing cross-variable relationships, especially in real-world applications where data exhibits intricate patterns across multiple scales. Existing models often overlook the need to explicitly incorporate scale awareness when modeling cross-variable correlations, leading to potential overfitting and instability. To address these issues, we propose the Scale-Aware Forecasting Transformer (SAFT), which introduces a novel scale-aware multi-head attention mechanism to model cross-variable dependencies across different time scales. SAFT progressively integrates information from coarser to finer scales, enabling robust modeling of complex temporal dynamics. Extensive experiments demonstrate that SAFT achieves overall state-of-the-art performance in both long-term and short-term forecasting tasks.
(IEEE Signal Processing Letters, vol. 32, pp. 2519-2523)
Citations: 0
RPCC: Rectified Pearson Correlation Coefficient for Radiance Fields Optimization
IEEE Signal Processing Letters Pub Date : 2025-06-11 DOI: 10.1109/LSP.2025.3578913
Jun Peng;Chunyi Chen
Abstract: Neural radiance fields (NeRF) and their variants have achieved remarkable success in novel view synthesis. Most existing radiance field models use the mean squared error (MSE) as the photometric loss, which is prone to blurriness and geometric inaccuracy, especially for sparse views. Instead of a pixel-wise loss, we introduce the Pearson correlation coefficient (PCC) to construct a new photometric loss from the perspective of linear correlation. Because PCC is relative, we rectify it to absolutize it: specifically, we relax the denominator of PCC based on the inequality of arithmetic and geometric means to enforce unit scale, and add an extra modulation factor to further enforce zero location. Experimental results show the proposed loss is significantly better than the MSE loss, e.g., the peak signal-to-noise ratio (PSNR) increases by 186% for TensoRF on the Replica dataset, and by 3-5 dB for DVGO on scenes from the Tanks and Temples dataset under a sparse-views setting.
(IEEE Signal Processing Letters, vol. 32, pp. 2489-2493)
Citations: 0
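The rectification itself (the AM-GM relaxation plus the modulation factor) is the paper's contribution; the plain PCC loss it starts from is standard, and the sketch below (`pcc_loss` is an illustrative helper) shows the scale and shift invariance that makes rectification necessary: an affinely rescaled prediction has near-zero PCC loss despite a large MSE.

```python
import numpy as np

def pcc_loss(pred, target, eps=1e-8):
    # 1 - Pearson correlation between rendered and ground-truth pixels:
    # invariant to affine intensity changes, unlike MSE.
    p = pred - pred.mean()
    t = target - target.mean()
    return 1.0 - (p * t).sum() / (np.linalg.norm(p) * np.linalg.norm(t) + eps)

rng = np.random.default_rng(3)
target = rng.random(100)
# An affinely rescaled copy correlates perfectly (loss near 0) even
# though its MSE against the target is large.
rescaled = 2.0 * target + 1.0
noisy = target + rng.normal(0, 0.5, 100)
assert pcc_loss(rescaled, target) < 1e-6
assert pcc_loss(rescaled, target) < pcc_loss(noisy, target)
```

Left unrectified, this invariance would let the radiance field drift in brightness and offset, which is exactly the degree of freedom the paper's unit-scale and zero-location constraints remove.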
DCMNet: A Supervised Learning Framework for Radar Signal Modulation Recognition
IEEE Signal Processing Letters Pub Date : 2025-06-09 DOI: 10.1109/LSP.2025.3578289
Kaige Hou;Xiaolin Du;Guolong Cui;Xiaolong Chen;Jibin Zheng
Abstract: Traditional radar signal modulation recognition (RSMR) methods struggle to achieve the required accuracy under low signal-to-noise ratio (SNR) conditions. To address this issue, a hybrid network architecture integrating deformable convolution and Mamba (DCMNet) is proposed. Specifically, DCMNet employs a multi-view feature extraction structure that combines inverted deformable convolution (IDC) with a state space model (SSM), enabling dynamic adjustment of convolution kernel positions and capturing global information and dependencies in long sequences. A cross-gated feature fusion (CGFF) mechanism effectively modulates and dynamically aggregates features from different perspectives. The lightweight design provides significant advantages in network scale and deployment. Experimental results demonstrate that the proposed method achieves excellent performance on a dataset of ten different waveforms; notably, at an SNR of −8 dB, recognition accuracy exceeds 90%, significantly outperforming existing methods.
(IEEE Signal Processing Letters, vol. 32, pp. 2454-2458)
Citations: 0
Dynamic-Clustering-Based Color Quantization for Electrophoretic Display
IEEE Signal Processing Letters Pub Date : 2025-06-09 DOI: 10.1109/LSP.2025.3577928
Tingyu Cheng;Xiaoyan Zhao;Gongning Yang;Wei Yuan;Tiesong Zhao
Abstract: Electrophoretic Display (EPD) is a reflective technology that closely mimics traditional paper, making it a popular choice in e-readers, IoT devices, and wearables. However, color quantization, a critical step for displaying natural images on EPDs with reduced color scales, usually leads to grayscale distortion and edge loss. In this letter, we propose a Dynamic-Clustering-based E-paper Color Quantization (DCECQ) method to address this issue. First, it employs dynamically adjustable Particle Swarm Optimization (PSO) clustering, enabling adaptive threshold optimization for diverse image content. Second, it introduces a Human Visual System (HVS)-based model to quantify visual errors and compensate for grayscale ghosting, effectively reducing artifacts such as edge blurring and color distortion. Third, it implements a validation platform for EPD to assess performance under real-world conditions. Experimental results demonstrate that our approach outperforms existing methods across multiple metrics, attesting to its effectiveness and practical applicability.
(IEEE Signal Processing Letters, vol. 32, pp. 2449-2453)
Citations: 0
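The letter's quantizer is PSO clustering guided by an HVS error model; as a baseline for what clustering-based color quantization means, here is a generic k-means palette reduction in NumPy (`quantize_colors` is an illustrative baseline, not DCECQ itself):

```python
import numpy as np

def quantize_colors(img, k=8, iters=20, seed=0):
    """Reduce img (H x W x 3, float in [0, 1]) to a k-color palette
    with plain k-means; each pixel maps to its nearest centroid."""
    rng = np.random.default_rng(seed)
    pixels = img.reshape(-1, 3)
    # Initialize centroids from k distinct pixels.
    centers = pixels[rng.choice(len(pixels), k, replace=False)]
    for _ in range(iters):
        # Assign every pixel to its nearest centroid, then recenter.
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = pixels[labels == c].mean(axis=0)
    return centers[labels].reshape(img.shape), centers

rng = np.random.default_rng(4)
img = rng.random((16, 16, 3))
quantized, palette = quantize_colors(img, k=8)
assert len(np.unique(quantized.reshape(-1, 3), axis=0)) <= 8
```

DCECQ replaces the Euclidean pixel distance used here with a perceptual (HVS-weighted) error and searches the cluster thresholds with PSO, which is what lets it suppress the edge blurring and ghosting a plain k-means palette leaves behind.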
Vision-Language Model Priors-Driven State Space Model for Infrared-Visible Image Fusion
IEEE Signal Processing Letters Pub Date : 2025-06-09 DOI: 10.1109/LSP.2025.3578250
Rongjin Zhuang;Yingying Wang;Xiaotong Tu;Yue Huang;Xinghao Ding
Abstract: Infrared and visible image fusion (IVIF) aims to effectively integrate complementary information from the infrared and visible modalities, enabling a more comprehensive understanding of the scene and improving downstream semantic tasks. Recent advances in Mamba have shown remarkable performance in image fusion, owing to its linear complexity and global receptive fields. However, leveraging Vision-Language Model (VLM) priors to drive Mamba for modality-specific feature extraction, and using them as constraints to enhance fusion results, has not been fully explored. To address this gap, we introduce VLMPD-Mamba, a Vision-Language Model Priors-Driven Mamba framework for IVIF. Initially, we employ the VLM to adaptively generate modality-specific textual descriptions, which enhance image quality and highlight critical target information. Next, we present Text-Controlled Mamba (TCM), which integrates textual priors from the VLM to facilitate effective modality-specific feature extraction. Furthermore, we design the Cross-modality Fusion Mamba (CFM) to fuse features from different modalities, using VLM priors as constraints to enhance fusion outcomes while preserving salient targets with rich details. In addition, to promote effective cross-modality feature interactions, we introduce a novel bi-modal interaction scanning strategy within the CFM. Extensive experiments on various IVIF datasets, as well as downstream visual tasks, demonstrate the superiority of our approach over state-of-the-art (SOTA) image fusion algorithms.
(IEEE Signal Processing Letters, vol. 32, pp. 2514-2518)
Citations: 0