IEEE Transactions on Audio Speech and Language Processing: Latest Articles

Modeling Spectral Envelopes Using Restricted Boltzmann Machines and Deep Belief Networks for Statistical Parametric Speech Synthesis
IEEE Transactions on Audio Speech and Language Processing Pub Date: 2013-10-01 DOI: 10.1109/TASL.2013.2269291
Zhenhua Ling, L. Deng, Dong Yu
{"title":"Modeling Spectral Envelopes Using Restricted Boltzmann Machines and Deep Belief Networks for Statistical Parametric Speech Synthesis","authors":"Zhenhua Ling, L. Deng, Dong Yu","doi":"10.1109/TASL.2013.2269291","DOIUrl":"https://doi.org/10.1109/TASL.2013.2269291","url":null,"abstract":"This paper presents a new spectral modeling method for statistical parametric speech synthesis. In the conventional methods, high-level spectral parameters, such as mel-cepstra or line spectral pairs, are adopted as the features for hidden Markov model (HMM)-based parametric speech synthesis. Our proposed method described in this paper improves the conventional method in two ways. First, distributions of low-level, un-transformed spectral envelopes (extracted by the STRAIGHT vocoder) are used as the parameters for synthesis. Second, instead of using single Gaussian distribution, we adopt the graphical models with multiple hidden variables, including restricted Boltzmann machines (RBM) and deep belief networks (DBN), to represent the distribution of the low-level spectral envelopes at each HMM state. At the synthesis time, the spectral envelopes are predicted from the RBM-HMMs or the DBN-HMMs of the input sentence following the maximum output probability parameter generation criterion with the constraints of the dynamic features. A Gaussian approximation is applied to the marginal distribution of the visible stochastic variables in the RBM or DBN at each HMM state in order to achieve a closed-form solution to the parameter generation problem. Our experimental results show that both RBM-HMM and DBN-HMM are able to generate spectral envelope parameter sequences better than the conventional Gaussian-HMM with superior generalization capabilities and that DBN-HMM and RBM-HMM perform similarly due possibly to the use of Gaussian approximation. As a result, our proposed method can significantly alleviate the over-smoothing effect and improve the naturalness of the conventional HMM-based speech synthesis system using mel-cepstra.","PeriodicalId":55014,"journal":{"name":"IEEE Transactions on Audio Speech and Language Processing","volume":"21 1","pages":"2129-2139"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TASL.2013.2269291","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62890772","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 160
Noise Model Transfer: Novel Approach to Robustness Against Nonstationary Noise
IEEE Transactions on Audio Speech and Language Processing Pub Date: 2013-10-01 DOI: 10.1109/TASL.2013.2272513
Takuya Yoshioka, T. Nakatani
{"title":"Noise Model Transfer: Novel Approach to Robustness Against Nonstationary Noise","authors":"Takuya Yoshioka, T. Nakatani","doi":"10.1109/TASL.2013.2272513","DOIUrl":"https://doi.org/10.1109/TASL.2013.2272513","url":null,"abstract":"This paper proposes an approach, called noise model transfer (NMT), for estimating the rapidly changing parameter values of a feature-domain noise model, which can be used to enhance feature vectors corrupted by highly nonstationary noise. Unlike conventional methods, the proposed approach can exploit both observed feature vectors, representing spectral envelopes, and other signal properties that are usually discarded during feature extraction but that are useful for separating nonstationary noise from speech. Specifically, we assume the availability of a noise power spectrum estimator that can capture rapid changes in noise characteristics by leveraging such signal properties. NMT determines the optimal transformation from the estimated noise power spectra into the feature-domain noise model parameter values in the sense of maximum likelihood. NMT is successfully applied to meeting speech recognition, where the main noise sources are competing talkers; and reverberant speech recognition, where the late reverberation is regarded as highly nonstationary additive noise.","PeriodicalId":55014,"journal":{"name":"IEEE Transactions on Audio Speech and Language Processing","volume":"21 1","pages":"2182-2192"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TASL.2013.2272513","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62891398","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 11
An Experimental Analysis on Integrating Multi-Stream Spectro-Temporal, Cepstral and Pitch Information for Mandarin Speech Recognition
IEEE Transactions on Audio Speech and Language Processing Pub Date: 2013-10-01 DOI: 10.1109/TASL.2013.2263803
Yow-Bang Wang, Shang-Wen Li, Lin-Shan Lee
{"title":"An Experimental Analysis on Integrating Multi-Stream Spectro-Temporal, Cepstral and Pitch Information for Mandarin Speech Recognition","authors":"Yow-Bang Wang, Shang-Wen Li, Lin-Shan Lee","doi":"10.1109/TASL.2013.2263803","DOIUrl":"https://doi.org/10.1109/TASL.2013.2263803","url":null,"abstract":"Gabor features have been proposed for extracting spectro-temporal modulation information from speech signals, and have been shown to yield large improvements in recognition accuracy. We use a flexible Tandem system framework that integrates multi-stream information including Gabor, MFCC, and pitch features in various ways, by modeling either or both of the tone and phoneme variations in Mandarin speech recognition. We use either phonemes or tonal phonemes (tonemes) as either the target classes of MLP posterior estimation and/or the acoustic units of HMM recognition. The experiments yield a comprehensive analysis on the contributions to recognition accuracy made by either of the feature sets. We discuss their complementarities in tone, phoneme, and toneme classification. We show that Gabor features are better for recognition of vowels and unvoiced consonants, while MFCCs are better for voiced consonants. Also, Gabor features are capable of capturing changes in signals across time and frequency bands caused by Mandarin tone patterns, while pitch features further offer extra tonal information. This explains why the integration of Gabor, MFCC, and pitch features offers such significant improvements.","PeriodicalId":55014,"journal":{"name":"IEEE Transactions on Audio Speech and Language Processing","volume":"21 1","pages":"2006-2014"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TASL.2013.2263803","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62889905","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
Reliable Accent-Specific Unit Generation With Discriminative Dynamic Gaussian Mixture Selection for Multi-Accent Chinese Speech Recognition
IEEE Transactions on Audio Speech and Language Processing Pub Date: 2013-10-01 DOI: 10.1109/TASL.2013.2265087
Chao Zhang, Yi Liu, Yunqing Xia, Xuan Wang, Chin-Hui Lee
{"title":"Reliable Accent-Specific Unit Generation With Discriminative Dynamic Gaussian Mixture Selection for Multi-Accent Chinese Speech Recognition","authors":"Chao Zhang, Yi Liu, Yunqing Xia, Xuan Wang, Chin-Hui Lee","doi":"10.1109/TASL.2013.2265087","DOIUrl":"https://doi.org/10.1109/TASL.2013.2265087","url":null,"abstract":"In this paper, we propose a discriminative dynamic Gaussian mixture selection (DGMS) strategy to generate reliable accent-specific units (ASUs) for multi-accent speech recognition. Time-aligned phone recognition is used to generate the ASUs that model accent variations explicitly and accurately. DGMS reconstructs and adjusts a pre-trained set of hidden Markov model (HMM) state densities to build dynamic observation densities for each input speech frame. A discriminative minimum classification error criterion is adopted to optimize the sizes of the HMM state observation densities with a genetic algorithm (GA). To the author's knowledge, the discriminative optimization for DGMS accomplishes discriminative training of discrete variables that is first proposed. We found the proposed framework is able to cover more multi-accent changes, thus reduce some performance loss in pruned beam search, without increasing the model size of the original acoustic model set. Evaluation on three typical Chinese accents, Chuan, Yue and Wu, shows that our approach outperforms traditional acoustic model reconstruction techniques with a syllable error rate reduction of 8.0%, 5.5% and 5.0%, respectively, while maintaining a good performance on standard Putonghua speech.","PeriodicalId":55014,"journal":{"name":"IEEE Transactions on Audio Speech and Language Processing","volume":"73 1","pages":"2073-2084"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TASL.2013.2265087","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62890303","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 11
Hermitian Polynomial for Speaker Adaptation of Connectionist Speech Recognition Systems
IEEE Transactions on Audio Speech and Language Processing Pub Date: 2013-10-01 DOI: 10.1109/TASL.2013.2270370
S. Siniscalchi, Jinyu Li, Chin-Hui Lee
{"title":"Hermitian Polynomial for Speaker Adaptation of Connectionist Speech Recognition Systems","authors":"S. Siniscalchi, Jinyu Li, Chin-Hui Lee","doi":"10.1109/TASL.2013.2270370","DOIUrl":"https://doi.org/10.1109/TASL.2013.2270370","url":null,"abstract":"Model adaptation techniques are an efficient way to reduce the mismatch that typically occurs between the training and test condition of any automatic speech recognition (ASR) system. This work addresses the problem of increased degradation in performance when moving from speaker-dependent (SD) to speaker-independent (SI) conditions for connectionist (or hybrid) hidden Markov model/artificial neural network (HMM/ANN) systems in the context of large vocabulary continuous speech recognition (LVCSR). Adapting hybrid HMM/ANN systems on a small amount of adaptation data has been proven to be a difficult task, and has been a limiting factor in the widespread deployment of hybrid techniques in operational ASR systems. Addressing the crucial issue of speaker adaptation (SA) for hybrid HMM/ANN system can thereby have a great impact on the connectionist paradigm, which will play a major role in the design of next-generation LVCSR considering the great success reported by deep neural networks - ANNs with many hidden layers that adopts the pre-training technique - on many speech tasks. Current adaptation techniques for ANNs based on injecting an adaptable linear transformation network connected to either the input, or the output layer are not effective especially with a small amount of adaptation data, e.g., a single adaptation utterance. In this paper, a novel solution is proposed to overcome those limits and make it robust to scarce adaptation resources. The key idea is to adapt the hidden activation functions rather than the network weights. The adoption of Hermitian activation functions makes this possible. Experimental results on an LVCSR task demonstrate the effectiveness of the proposed approach.","PeriodicalId":55014,"journal":{"name":"IEEE Transactions on Audio Speech and Language Processing","volume":"21 1","pages":"2152-2161"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TASL.2013.2270370","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62891042","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 79
Blind Channel Magnitude Response Estimation in Speech Using Spectrum Classification
IEEE Transactions on Audio Speech and Language Processing Pub Date: 2013-10-01 DOI: 10.1109/TASL.2013.2270406
N. Gaubitch, M. Brookes, P. Naylor
{"title":"Blind Channel Magnitude Response Estimation in Speech Using Spectrum Classification","authors":"N. Gaubitch, M. Brookes, P. Naylor","doi":"10.1109/TASL.2013.2270406","DOIUrl":"https://doi.org/10.1109/TASL.2013.2270406","url":null,"abstract":"We present an algorithm for blind estimation of the magnitude response of an acoustic channel from single microphone observations of a speech signal. The algorithm employs channel robust RASTA filtered Mel-frequency cepstral coefficients as features to train a Gaussian mixture model based classifier and average clean speech spectra are associated with each mixture; these are then used to blindly estimate the acoustic channel magnitude response from speech that has undergone spectral modification due to the channel. Experimental results using a variety of simulated and measured acoustic channels and additive babble noise, car noise and white Gaussian noise are presented. The results demonstrate that the proposed method is able to estimate a variety of channel magnitude responses to within an Itakura distance of dI ≤0.5 for SNR ≥10 dB.","PeriodicalId":55014,"journal":{"name":"IEEE Transactions on Audio Speech and Language Processing","volume":"21 1","pages":"2162-2171"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TASL.2013.2270406","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62891111","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 20
Video-Aided Model-Based Source Separation in Real Reverberant Rooms
IEEE Transactions on Audio Speech and Language Processing Pub Date: 2013-09-01 DOI: 10.1109/TASL.2013.2261814
Muhammad Salman Khan, S. M. Naqvi, Ata ur-Rehman, Wenwu Wang, J. Chambers
{"title":"Video-Aided Model-Based Source Separation in Real Reverberant Rooms","authors":"Muhammad Salman Khan, S. M. Naqvi, Ata ur-Rehman, Wenwu Wang, J. Chambers","doi":"10.1109/TASL.2013.2261814","DOIUrl":"https://doi.org/10.1109/TASL.2013.2261814","url":null,"abstract":"Source separation algorithms that utilize only audio data can perform poorly if multiple sources or reverberation are present. In this paper we therefore propose a video-aided model-based source separation algorithm for a two-channel reverberant recording in which the sources are assumed static. By exploiting cues from video, we first localize individual speech sources in the enclosure and then estimate their directions. The interaural spatial cues, the interaural phase difference and the interaural level difference, as well as the mixing vectors are probabilistically modeled. The models make use of the source direction information and are evaluated at discrete time-frequency points. The model parameters are refined with the well-known expectation-maximization (EM) algorithm. The algorithm outputs time-frequency masks that are used to reconstruct the individual sources. Simulation results show that by utilizing the visual modality the proposed algorithm can produce better time-frequency masks thereby giving improved source estimates. We provide experimental results to test the proposed algorithm in different scenarios and provide comparisons with both other audio-only and audio-visual algorithms and achieve improved performance both on synthetic and real data. We also include dereverberation based pre-processing in our algorithm in order to suppress the late reverberant components from the observed stereo mixture and further enhance the overall output of the algorithm. This advantage makes our algorithm a suitable candidate for use in under-determined highly reverberant settings where the performance of other audio-only and audio-visual methods is limited.","PeriodicalId":55014,"journal":{"name":"IEEE Transactions on Audio Speech and Language Processing","volume":"21 1","pages":"1900-1912"},"PeriodicalIF":0.0,"publicationDate":"2013-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TASL.2013.2261814","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62889329","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 30
Sound Field Classification in Small Microphone Arrays Using Spatial Coherences
IEEE Transactions on Audio Speech and Language Processing Pub Date: 2013-09-01 DOI: 10.1109/TASL.2013.2261813
R. Scharrer, M. Vorländer
{"title":"Sound Field Classification in Small Microphone Arrays Using Spatial Coherences","authors":"R. Scharrer, M. Vorländer","doi":"10.1109/TASL.2013.2261813","DOIUrl":"https://doi.org/10.1109/TASL.2013.2261813","url":null,"abstract":"The quality and performance of many multi-channel signal processing strategies in microphone arrays as well as mobile devices for the enhancement of speech intelligibility and audio quality depends to a large extent on the acoustic sound field that they are exposed to. As long as the assumption on the sound field is not met, the performance decreases significantly and may even yield worse results for the user than an unprocessed signal. Current hearing aids provide the user for instance with different programs to adapt the signal processing to the acoustic situation. Signal classification describes the signal content and not the type of sound field. Therefore, a further classification of the sound field, in addition to the signal classification, would increase the possibilities for an optimal adaption of the automatic program selection and the signal processing methods in mobile devices. To this end a sound field classification method is proposed that is based on the complex coherences between the input signals of distributed acoustic sensors. In addition to the general approach an exemplary setup of a hearing aid equipped with two microphone sensors is discussed. As only coherences are used, the method classifies the sound field regardless of the signal carried by it. This approach complements and extends the current signal classification approach used in common mobile devices. The method was successfully verified with simulated audio input signals and with real life examples.","PeriodicalId":55014,"journal":{"name":"IEEE Transactions on Audio Speech and Language Processing","volume":"21 1","pages":"1891-1899"},"PeriodicalIF":0.0,"publicationDate":"2013-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TASL.2013.2261813","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62889264","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
Musical Instrument Recognition in Polyphonic Audio Using Missing Feature Approach
IEEE Transactions on Audio Speech and Language Processing Pub Date: 2013-09-01 DOI: 10.1109/TASL.2013.2248720
D. Giannoulis, Anssi Klapuri
{"title":"Musical Instrument Recognition in Polyphonic Audio Using Missing Feature Approach","authors":"D. Giannoulis, Anssi Klapuri","doi":"10.1109/TASL.2013.2248720","DOIUrl":"https://doi.org/10.1109/TASL.2013.2248720","url":null,"abstract":"A method is described for musical instrument recognition in polyphonic audio signals where several sound sources are active at the same time. The proposed method is based on local spectral features and missing-feature techniques. A novel mask estimation algorithm is described that identifies spectral regions that contain reliable information for each sound source, and bounded marginalization is then used to treat the feature vector elements that are determined to be unreliable. The mask estimation technique is based on the assumption that the spectral envelopes of musical sounds tend to be slowly-varying as a function of log-frequency and unreliable spectral components can therefore be detected as positive deviations from an estimated smooth spectral envelope. A computationally efficient algorithm is proposed for marginalizing the mask in the classification process. In simulations, the proposed method clearly outperforms reference methods for mixture signals. The proposed mask estimation technique leads to a recognition accuracy that is approximately half-way between a trivial all-one mask (all features are assumed reliable) and an ideal “oracle” mask.","PeriodicalId":55014,"journal":{"name":"IEEE Transactions on Audio Speech and Language Processing","volume":"21 1","pages":"1805-1817"},"PeriodicalIF":0.0,"publicationDate":"2013-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TASL.2013.2248720","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62888540","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 32
Characterization of Multiple Transient Acoustical Sources From Time-Transform Representations
IEEE Transactions on Audio Speech and Language Processing Pub Date: 2013-09-01 DOI: 10.1109/TASL.2013.2263141
N. Wachowski, M. Azimi-Sadjadi
{"title":"Characterization of Multiple Transient Acoustical Sources From Time-Transform Representations","authors":"N. Wachowski, M. Azimi-Sadjadi","doi":"10.1109/TASL.2013.2263141","DOIUrl":"https://doi.org/10.1109/TASL.2013.2263141","url":null,"abstract":"This paper introduces a new framework for detecting, classifying, and estimating the signatures of multiple transient acoustical sources from a time-transform representation (TTR) of an audio waveform. A TTR is a vector observation sequence containing the coefficients of consecutive windows of data with respect to known sampled basis waveforms. A set of likelihood ratio tests is hierarchically applied to each time slice of a TTR to detect and classify signals in the presence of interference. Since the signatures of each acoustical event typically span several adjacent dependent observations, a Kalman filter is used to generate the parameters necessary for computing the likelihood values. The experimental results of applying the proposed method to a problem of detecting and classifying man-made and natural transient acoustical events in national park soundscape recordings attest to its effectiveness at performing the aforementioned tasks.","PeriodicalId":55014,"journal":{"name":"IEEE Transactions on Audio Speech and Language Processing","volume":"21 1","pages":"1966-1978"},"PeriodicalIF":0.0,"publicationDate":"2013-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TASL.2013.2263141","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62890070","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3