Latest Publications in IEEE Transactions on Audio Speech and Language Processing

Room Impulse Response Synthesis and Validation Using a Hybrid Acoustic Model
IEEE Transactions on Audio Speech and Language Processing Pub Date: 2013-09-01 DOI: 10.1109/TASL.2013.2263139
A. Southern, S. Siltanen, D. Murphy, L. Savioja
{"title":"Room Impulse Response Synthesis and Validation Using a Hybrid Acoustic Model","authors":"A. Southern, S. Siltanen, D. Murphy, L. Savioja","doi":"10.1109/TASL.2013.2263139","DOIUrl":"https://doi.org/10.1109/TASL.2013.2263139","url":null,"abstract":"Synthesizing the room impulse response (RIR) of an arbitrary enclosure may be performed using a number of alternative acoustic modeling methods, each with their own particular advantages and limitations. This article is concerned with obtaining a hybrid RIR derived from both wave and geometric-acoustics based methods, optimized for use across different regions of time or frequency. Consideration is given to how such RIRs can be matched across modeling domains in terms of both amplitude and boundary behavior and the approach is verified using a number of standardised case studies.","PeriodicalId":55014,"journal":{"name":"IEEE Transactions on Audio Speech and Language Processing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2013-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TASL.2013.2263139","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62889620","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 43
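The hybrid idea lends itself to a compact illustration: combine a wave-based RIR (reliable at low frequencies) with a geometric-acoustics RIR (reliable at high frequencies) through a complementary crossover. The sketch below assumes a Butterworth crossover with an arbitrary cutoff and random stand-in RIRs; it omits the amplitude and boundary matching that the paper actually addresses.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def hybrid_rir(rir_wave, rir_geom, fs, fc=800.0, order=4):
    """Splice a wave-based RIR (low frequencies) with a geometric-acoustics
    RIR (high frequencies) using a complementary Butterworth crossover at
    fc Hz. Illustrative only: fc and order are arbitrary assumptions."""
    n = max(len(rir_wave), len(rir_geom))
    lo = np.pad(rir_wave, (0, n - len(rir_wave)))
    hi = np.pad(rir_geom, (0, n - len(rir_geom)))
    sos_lp = butter(order, fc, btype="low", fs=fs, output="sos")
    sos_hp = butter(order, fc, btype="high", fs=fs, output="sos")
    return sosfilt(sos_lp, lo) + sosfilt(sos_hp, hi)

# Toy usage with random stand-ins for the two modeled RIRs.
fs = 16000
rng = np.random.default_rng(0)
h = hybrid_rir(rng.standard_normal(4000), rng.standard_normal(4000), fs)
```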
Subglottal Impedance-Based Inverse Filtering of Voiced Sounds Using Neck Surface Acceleration
IEEE Transactions on Audio Speech and Language Processing Pub Date: 2013-09-01 DOI: 10.1109/TASL.2013.2263138
Matías Zañartu, Julio C Ho, Daryush D Mehta, Robert E Hillman, George R Wodicka
{"title":"Subglottal Impedance-Based Inverse Filtering of Voiced Sounds Using Neck Surface Acceleration.","authors":"Matías Zañartu,&nbsp;Julio C Ho,&nbsp;Daryush D Mehta,&nbsp;Robert E Hillman,&nbsp;George R Wodicka","doi":"10.1109/TASL.2013.2263138","DOIUrl":"https://doi.org/10.1109/TASL.2013.2263138","url":null,"abstract":"<p><p>A model-based inverse filtering scheme is proposed for an accurate, non-invasive estimation of the aerodynamic source of voiced sounds at the glottis. The approach, referred to as subglottal impedance-based inverse filtering (IBIF), takes as input the signal from a lightweight accelerometer placed on the skin over the extrathoracic trachea and yields estimates of glottal airflow and its time derivative, offering important advantages over traditional methods that deal with the supraglottal vocal tract. The proposed scheme is based on mechano-acoustic impedance representations from a physiologically-based transmission line model and a lumped skin surface representation. A subject-specific calibration protocol is used to account for individual adjustments of subglottal impedance parameters and mechanical properties of the skin. Preliminary results for sustained vowels with various voice qualities show that the subglottal IBIF scheme yields comparable estimates with respect to current aerodynamics-based methods of clinical vocal assessment. A mean absolute error of less than 10% was observed for two glottal airflow measures -maximum flow declination rate and amplitude of the modulation component- that have been associated with the pathophysiology of some common voice disorders caused by faulty and/or abusive patterns of vocal behavior (i.e., vocal hyperfunction). The proposed method further advances the ambulatory assessment of vocal function based on the neck acceleration signal, that previously have been limited to the estimation of phonation duration, loudness, and pitch. Subglottal IBIF is also suitable for other ambulatory applications in speech communication, in which further evaluation is underway.</p>","PeriodicalId":55014,"journal":{"name":"IEEE Transactions on Audio Speech and Language Processing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2013-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TASL.2013.2263138","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"32816672","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 58
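The inverse-filtering step itself is simple once a subglottal/skin transfer function is in hand. The sketch below uses a toy second-order stand-in for that transfer function (the paper derives it from a physiological transmission-line impedance model plus subject-specific calibration) and inverts it to estimate glottal airflow and its derivative.

```python
import numpy as np
from scipy.signal import lfilter

# Toy stand-in for the skin + subglottal transfer function H(z) = B(z)/A(z).
b_sub = np.array([1.0, -1.6, 0.8])   # hypothetical numerator (minimum phase)
a_sub = np.array([1.0, -0.5])        # hypothetical denominator

def ibif_estimate(accel):
    """Invert H(z) by filtering with A(z)/B(z), then differentiate."""
    flow = lfilter(a_sub, b_sub, accel)      # estimated glottal airflow
    dflow = np.diff(flow, prepend=flow[0])   # its time derivative
    return flow, dflow

# Toy usage on a synthetic neck-acceleration signal.
fs = 11025
t = np.arange(fs) / fs
accel = lfilter(b_sub, a_sub, np.sin(2 * np.pi * 120 * t))  # "measured" signal
flow, dflow = ibif_estimate(accel)
```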
Joint Uncertainty Decoding for Noise Robust Subspace Gaussian Mixture Models
IEEE Transactions on Audio Speech and Language Processing Pub Date: 2013-09-01 DOI: 10.1109/TASL.2013.2248718
Liang Lu, K. K. Chin, Arnab Ghoshal, S. Renals
{"title":"Joint Uncertainty Decoding for Noise Robust Subspace Gaussian Mixture Models","authors":"Liang Lu, K. K. Chin, Arnab Ghoshal, S. Renals","doi":"10.1109/TASL.2013.2248718","DOIUrl":"https://doi.org/10.1109/TASL.2013.2248718","url":null,"abstract":"Joint uncertainty decoding (JUD) is a model-based noise compensation technique for conventional Gaussian Mixture Model (GMM) based speech recognition systems. Unlike vector Taylor series (VTS) compensation which operates on the individual Gaussian components in an acoustic model, JUD clusters the Gaussian components into a smaller number of classes, sharing the compensation parameters for the set of Gaussians in a given class. This significantly reduces the computational cost. In this paper, we investigate noise compensation for subspace Gaussian mixture model (SGMM) based speech recognition systems using JUD. The total number of Gaussian components in an SGMM is typically very large. Therefore direct compensation of the individual Gaussian components, as performed by VTS, is computationally expensive. In this paper we show that JUD-based noise compensation can be successfully applied to SGMMs in a computationally efficient way. We evaluate the JUD/SGMM technique on the standard Aurora 4 corpus. Our experimental results indicate that the JUD/SGMM system results in lower word error rates compared with a conventional GMM system with either VTS-based or JUD-based noise compensation.","PeriodicalId":55014,"journal":{"name":"IEEE Transactions on Audio Speech and Language Processing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2013-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TASL.2013.2248718","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62888425","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 15
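The computational argument is easy to see in code: with class-shared compensation, only R transforms are needed rather than one per Gaussian. A minimal sketch, with random stand-ins for the model means and for the per-class transforms that JUD would actually derive from a noise model:

```python
import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(1)
D, K, R = 13, 2000, 64                       # feature dim, Gaussians, classes
means = rng.standard_normal((K, D))          # all component means (stand-in)
_, cls = kmeans2(means, R, minit="++", seed=1)  # regression classes

A = np.eye(D) + 0.01 * rng.standard_normal((R, D, D))  # per-class A_r (stand-in)
b = 0.1 * rng.standard_normal((R, D))                  # per-class bias b_r (stand-in)

# Every Gaussian mean is compensated with its class's shared affine transform,
# instead of an individual VTS expansion per component:
comp_means = np.einsum("kij,kj->ki", A[cls], means) + b[cls]
```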
Spoken Language Recognition With Prosodic Features
IEEE Transactions on Audio Speech and Language Processing Pub Date: 2013-09-01 DOI: 10.1109/TASL.2013.2260157
Raymond W. M. Ng, Tan Lee, C. Leung, B. Ma, Haizhou Li
{"title":"Spoken Language Recognition With Prosodic Features","authors":"Raymond W. M. Ng, Tan Lee, C. Leung, B. Ma, Haizhou Li","doi":"10.1109/TASL.2013.2260157","DOIUrl":"https://doi.org/10.1109/TASL.2013.2260157","url":null,"abstract":"Speech prosody is believed to carry much language-specific information that can be used for spoken language recognition (SLR). In the past, the use of prosodic features for SLR has been studied sporadically and the reported performances were considered unsatisfactory. In this paper, we exploit a wide range of prosodic attributes for large-scale SLR tasks. These attributes describe the multifaceted variations of F0, intensity and duration in different spoken languages. Prosodic attributes are modeled by the bag of n-grams approach with support vector machine (SVM) as in the conventional phonotactic SLR systems. Experimental results on OGI and NIST-LRE tasks showed that the use of proposed attributes gives significantly better SLR performance than those previously reported. The full feature set includes 87 prosodic attributes and redundancy among attributes may exist. Attributes are broken down into particular bigrams called bins. Four entropy-based feature selection metrics with different selection criteria are derived. Attributes can be selected by individual bins, or by attributes as batches of bins. It can also be done in a language-dependent or language-independent manner. By comparing different selection sizes and criteria, an optimal attribute subset comprising 5,000 bins is found by using a bin-level language-independent criterion. Feature selection reduces model size by 2.5 times and shortens the runtime by 6 times. The optimal subset of bins gives the lowest EER of 20.18% on NIST-LRE 2007 SLR task in a prosodic attribute model (PAM) system which exclusively modeled prosodic attributes. In a phonotactic-prosodic fusion SLR system, the detection cost, Cavg is 2.09%. The relative detection cost reduction is 23%.","PeriodicalId":55014,"journal":{"name":"IEEE Transactions on Audio Speech and Language Processing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2013-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TASL.2013.2260157","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62889452","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 23
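A minimal sketch of the bag-of-bigrams front end: each utterance's discretized prosodic attribute sequence is mapped to normalized bigram counts ("bins"), which feed a linear SVM. The alphabet size, data, and language labels below are toy stand-ins, not the paper's 87-attribute feature set.

```python
import numpy as np
from sklearn.svm import LinearSVC

SYMBOLS = 5  # toy prosodic-attribute alphabet size (assumption)

def bigram_bins(seq, n_sym=SYMBOLS):
    """Normalized bigram counts of a discrete attribute sequence."""
    v = np.zeros(n_sym * n_sym)
    for a, b in zip(seq[:-1], seq[1:]):
        v[a * n_sym + b] += 1
    return v / max(len(seq) - 1, 1)

rng = np.random.default_rng(2)
X = np.array([bigram_bins(rng.integers(0, SYMBOLS, 200)) for _ in range(100)])
y = rng.integers(0, 2, 100)          # toy language labels
clf = LinearSVC(C=1.0).fit(X, y)     # one-vs-rest SVM over bin vectors
```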
Sound Source Localization Using Joint Bayesian Estimation With a Hierarchical Noise Model
IEEE Transactions on Audio Speech and Language Processing Pub Date: 2013-09-01 DOI: 10.1109/TASL.2013.2263140
F. Asano, H. Asoh, K. Nakadai
{"title":"Sound Source Localization Using Joint Bayesian Estimation With a Hierarchical Noise Model","authors":"F. Asano, H. Asoh, K. Nakadai","doi":"10.1109/TASL.2013.2263140","DOIUrl":"https://doi.org/10.1109/TASL.2013.2263140","url":null,"abstract":"The performance of sound source localization is often reduced by the presence of colored noise in the environment, such as room reverberation. In this study, a method for estimating the noise spatial covariance using a hierarchical model is proposed and its performance is evaluated. By employing the hierarchical model in joint Bayesian estimation, robust estimation of the covariance is expected with a relatively small amount of data. Moreover, a method of jointly estimating the number of sources is introduced so that it can be used for cases in which the number of active sources dynamically changes, for example, speech signals. The results of the experiments performed using actual room reverberation show the effectiveness of the proposed method.","PeriodicalId":55014,"journal":{"name":"IEEE Transactions on Audio Speech and Language Processing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2013-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TASL.2013.2263140","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62889691","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 24
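To see why the noise spatial covariance matters, consider an MVDR-style localization spectrum that whitens the observation with that covariance before steering. This is only a stand-in for the paper's hierarchical Bayesian estimator, which is what actually produces the covariance (and the source count); all arrays below are synthetic.

```python
import numpy as np

def mvdr_spectrum(X, steering, K):
    """X: (mics, frames) STFT data for one bin; steering: (angles, mics)
    candidate steering vectors; K: (mics, mics) noise spatial covariance.
    Returns one power value per candidate angle; peaks suggest sources."""
    Kinv = np.linalg.inv(K + 1e-6 * np.eye(K.shape[0]))
    R = X @ X.conj().T / X.shape[1]               # observed spatial covariance
    P = []
    for a in steering:
        w = Kinv @ a / (a.conj() @ Kinv @ a)      # MVDR weights for this angle
        P.append(np.real(w.conj() @ R @ w))
    return np.array(P)

# Toy usage with random data in place of real microphone-array STFT frames.
rng = np.random.default_rng(3)
X = rng.standard_normal((8, 100)) + 1j * rng.standard_normal((8, 100))
A = rng.standard_normal((60, 8)) + 1j * rng.standard_normal((60, 8))
spec = mvdr_spectrum(X, A, np.eye(8))
```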
Complete Parallel Narrowband Active Noise Control Systems
IEEE Transactions on Audio Speech and Language Processing Pub Date: 2013-09-01 DOI: 10.1109/TASL.2013.2263143
Cheng-Yuan Chang, S. Kuo
{"title":"Complete Parallel Narrowband Active Noise Control Systems","authors":"Cheng-Yuan Chang, S. Kuo","doi":"10.1109/TASL.2013.2263143","DOIUrl":"https://doi.org/10.1109/TASL.2013.2263143","url":null,"abstract":"Conventional parallel-form narrowband active noise control (ANC) systems use multiple adaptive filters that are updated by the same error signal. This paper performs theoretical analysis to show that the convergence rate of every adaptive filter will be degraded by the residual error components from other adaptive filters. We develop and analyze a complete parallel narrowband ANC system that uses delayless bandpass filterbank to split the measured error signal, and uses individual error signals to update the corresponding adaptive filters. The foreground bandpass filters are updated by the background adaptive algorithm to tracks frequency change of the primary noise without introducing extra delay in the secondary path. A modified cost function is used to derive algorithm for the complete parallel narrowband ANC. The theoretical analysis and improved performance are verified by computer simulations using measured transfer functions from an experimental setup.","PeriodicalId":55014,"journal":{"name":"IEEE Transactions on Audio Speech and Language Processing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2013-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TASL.2013.2263143","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62890018","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 33
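A single narrowband unit of such a system reduces to a two-weight filtered-x LMS update on sine/cosine references at the target frequency. The sketch below runs one unit against a toy secondary path modeled as a pure delay; in the paper's complete parallel system, each unit would be driven by its own bandpass-filtered error rather than the shared broadband error.

```python
import numpy as np

fs, f0, mu, N = 8000, 120.0, 0.01, 4000   # toy sample rate, tone, step, length
n = np.arange(N)
d = np.sin(2 * np.pi * f0 * n / fs + 0.3)  # primary noise at the error mic
w = np.zeros(2)                            # adaptive sine/cosine weights
delay = 5                                  # toy secondary-path delay (samples)
y_hist = np.zeros(delay + 1)
e = np.zeros(N)
for k in range(N):
    x = np.array([np.cos(2 * np.pi * f0 * k / fs),
                  np.sin(2 * np.pi * f0 * k / fs)])        # reference pair
    y_hist = np.roll(y_hist, 1)
    y_hist[0] = w @ x                                       # anti-noise output
    e[k] = d[k] - y_hist[-1]                                # residual after path
    xf = np.array([np.cos(2 * np.pi * f0 * (k - delay) / fs),
                   np.sin(2 * np.pi * f0 * (k - delay) / fs)])  # filtered-x
    w += mu * e[k] * xf                                     # FXLMS update
# np.abs(e[-200:]).max() should be far below np.abs(e[:200]).max()
```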
Vector Quantization of LSF Parameters With a Mixture of Dirichlet Distributions
IEEE Transactions on Audio Speech and Language Processing Pub Date: 2013-09-01 DOI: 10.1109/TASL.2013.2238732
Zhanyu Ma, A. Leijon, W. Kleijn
{"title":"Vector quantization of LSF parameters with a mixture of dirichlet distributions","authors":"Zhanyu Ma, A. Leijon, W. Kleijn","doi":"10.1109/TASL.2013.2238732","DOIUrl":"https://doi.org/10.1109/TASL.2013.2238732","url":null,"abstract":"Quantization of the linear predictive coding parameters is an important part in speech coding. Probability density function (PDF)-optimized vector quantization (VQ) has been previously shown to be more efficient than VQ based only on training data. For data with bounded support, some well-defined bounded-support distributions (e.g., the Dirichlet distribution) have been proven to outperform the conventional Gaussian mixture model (GMM), with the same number of free parameters required to describe the model. When exploiting both the boundary and the order properties of the line spectral frequency (LSF) parameters, the distribution of LSF differences LSF can be modelled with a Dirichlet mixture model (DMM). We propose a corresponding DMM based VQ. The elements in a Dirichlet vector variable are highly mutually correlated. Motivated by the Dirichlet vector variable's neutrality property, a practical non-linear transformation scheme for the Dirichlet vector variable can be obtained. Similar to the Karhunen-Loève transform for Gaussian variables, this non-linear transformation decomposes the Dirichlet vector variable into a set of independent beta-distributed variables. Using high rate quantization theory and by the entropy constraint, the optimal inter- and intra-component bit allocation strategies are proposed. In the implementation of scalar quantizers, we use the constrained-resolution coding to approximate the derived constrained-entropy coding. A practical coding scheme for DVQ is designed for the purpose of reducing the quantization error accumulation. The theoretical and practical quantization performance of DVQ is evaluated. Compared to the state-of-the-art GMM-based VQ and recently proposed beta mixture model (BMM) based VQ, DVQ performs better, with even fewer free parameters and lower computational cost","PeriodicalId":55014,"journal":{"name":"IEEE Transactions on Audio Speech and Language Processing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2013-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TASL.2013.2238732","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62885836","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 55
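The neutrality-based decomposition is concrete enough to verify numerically: the stick-breaking ratios of a Dirichlet vector are independent Beta variables, with u_k = x_k / (1 - x_1 - ... - x_{k-1}) distributed as Beta(a_k, a_{k+1} + ... + a_K). A small sampling check, with an arbitrary parameter vector:

```python
import numpy as np

rng = np.random.default_rng(4)
alpha = np.array([2.0, 3.0, 1.5, 4.0])   # arbitrary Dirichlet parameters
x = rng.dirichlet(alpha, size=100_000)

# Stick-breaking transform: each u[:, k] should be an independent Beta variable.
u = np.empty((x.shape[0], len(alpha) - 1))
rem = np.ones(x.shape[0])
for k in range(len(alpha) - 1):
    u[:, k] = x[:, k] / rem
    rem = rem - x[:, k]

# Empirical check against the Beta(alpha_k, sum(alpha[k+1:])) mean a/(a+b).
for k in range(u.shape[1]):
    a, b = alpha[k], alpha[k + 1:].sum()
    print(k, round(u[:, k].mean(), 4), round(a / (a + b), 4))
```

Each transformed coordinate can then be scalar-quantized independently, which is the analogue of applying the Karhunen-Loève transform before quantizing Gaussian sources.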
Harmonic Adaptive Latent Component Analysis of Audio and Application to Music Transcription
IEEE Transactions on Audio Speech and Language Processing Pub Date: 2013-09-01 DOI: 10.1109/TASL.2013.2260741
Benoit Fuentes, R. Badeau, G. Richard
{"title":"Harmonic Adaptive Latent Component Analysis of Audio and Application to Music Transcription","authors":"Benoit Fuentes, R. Badeau, G. Richard","doi":"10.1109/TASL.2013.2260741","DOIUrl":"https://doi.org/10.1109/TASL.2013.2260741","url":null,"abstract":"Recently, new methods for smart decomposition of time-frequency representations of audio have been proposed in order to address the problem of automatic music transcription. However those techniques are not necessarily suitable for notes having variations of both pitch and spectral envelope over time. The HALCA (Harmonic Adaptive Latent Component Analysis) model presented in this article allows considering those two kinds of variations simultaneously. Each note in a constant-Q transform is locally modeled as a weighted sum of fixed narrowband harmonic spectra, spectrally convolved with some impulse that defines the pitch. All parameters are estimated by means of the expectation-maximization (EM) algorithm, in the framework of Probabilistic Latent Component Analysis. Interesting priors over the parameters are also introduced in order to help the EM algorithm converging towards a meaningful solution. We applied this model for automatic music transcription: the onset time, duration and pitch of each note in an audio file are inferred from the estimated parameters. The system has been evaluated on two different databases and obtains very promising results.","PeriodicalId":55014,"journal":{"name":"IEEE Transactions on Audio Speech and Language Processing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2013-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TASL.2013.2260741","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62889526","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 41
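HALCA builds on Probabilistic Latent Component Analysis, whose EM updates are compact. The sketch below implements plain PLCA on a stand-in spectrogram, factoring the normalized magnitudes as sum_z P(z) P(f|z) P(t|z); HALCA additionally uses fixed narrowband harmonic kernels, a convolutive pitch-impulse model, and priors, none of which are shown here.

```python
import numpy as np

rng = np.random.default_rng(5)
V = rng.random((257, 100))           # stand-in magnitude spectrogram (F x T)
P = V / V.sum()                      # normalized to a joint distribution P(f,t)
Z = 8                                # number of latent components (assumption)
Pz = np.full(Z, 1.0 / Z)
Pf = rng.random((257, Z)); Pf /= Pf.sum(0)   # spectral bases P(f|z)
Pt = rng.random((100, Z)); Pt /= Pt.sum(0)   # activations P(t|z)

for _ in range(50):
    # E-step: posterior P(z|f,t), proportional to P(z) P(f|z) P(t|z).
    joint = Pz[None, None, :] * Pf[:, None, :] * Pt[None, :, :]   # (F,T,Z)
    post = joint / joint.sum(2, keepdims=True)
    # M-step: reweight the posterior by the observed distribution P(f,t).
    W = P[:, :, None] * post
    Pz = W.sum((0, 1))
    Pf = W.sum(1) / Pz
    Pt = W.sum(0) / Pz
```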
Regularization for Partial Multichannel Equalization for Speech Dereverberation
IEEE Transactions on Audio Speech and Language Processing Pub Date: 2013-09-01 DOI: 10.1109/TASL.2013.2260743
I. Kodrasi, Stefan Goetze, S. Doclo
{"title":"Regularization for Partial Multichannel Equalization for Speech Dereverberation","authors":"I. Kodrasi, Stefan Goetze, S. Doclo","doi":"10.1109/TASL.2013.2260743","DOIUrl":"https://doi.org/10.1109/TASL.2013.2260743","url":null,"abstract":"Acoustic multichannel equalization techniques such as the multiple-input/output inverse theorem (MINT), which aim to equalize the room impulse responses (RIRs) between the source and the microphone array, are known to be highly sensitive to RIR estimation errors. To increase robustness, it has been proposed to incorporate regularization in order to decrease the energy of the equalization filters. In addition, more robust partial multichannel equalization techniques such as relaxed multichannel least-squares (RMCLS) and channel shortening (CS) have recently been proposed. In this paper, we propose a partial multichannel equalization technique based on MINT (P-MINT) which aims to shorten the RIR. Furthermore, we investigate the effectiveness of incorporating regularization to further increase the robustness of P-MINT and the aforementioned partial multichannel equalization techniques, i.e., RMCLS and CS. In addition, we introduce an automatic non-intrusive procedure for determining the regularization parameter based on the L-curve. Simulation results using measured RIRs show that incorporating regularization in P-MINT yields a significant performance improvement in the presence of RIR estimation errors, whereas a smaller performance improvement is observed when incorporating regularization in RMCLS and CS. Furthermore, it is shown that the intrusively regularized P-MINT technique outperforms all other investigated intrusively regularized multichannel equalization techniques in terms of perceptual speech quality (PESQ). Finally, it is shown that the automatic non-intrusive regularization parameter in regularized P-MINT leads to a very similar performance as the intrusively determined optimal regularization parameter, making regularized P-MINT a robust, perceptually advantageous, and practically applicable multichannel equalization technique for speech dereverberation.","PeriodicalId":55014,"journal":{"name":"IEEE Transactions on Audio Speech and Language Processing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2013-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TASL.2013.2260743","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62889610","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 67
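All the MINT-style designs discussed here share a regularized least-squares core: stack per-channel convolution matrices into C, choose a target equalized impulse response d, and solve min_g ||C g - d||^2 + delta ||g||^2, giving g = (C^T C + delta I)^(-1) C^T d. The sketch below uses toy RIRs, a P-MINT-like target (the early part of one RIR), and a fixed delta in place of the paper's L-curve selection.

```python
import numpy as np
from scipy.linalg import toeplitz

def conv_matrix(h, Lg):
    """Convolution (Toeplitz) matrix so that conv_matrix(h, Lg) @ g = h * g."""
    col = np.concatenate([h, np.zeros(Lg - 1)])
    row = np.zeros(Lg); row[0] = h[0]
    return toeplitz(col, row)

rng = np.random.default_rng(6)
M, Lh, Lg = 3, 64, 63                        # mics, RIR length, filter length
H = rng.standard_normal((M, Lh)) * np.exp(-np.arange(Lh) / 20.0)  # toy RIRs
C = np.hstack([conv_matrix(H[m], Lg) for m in range(M)])          # stacked system

d = np.zeros(Lh + Lg - 1)
d[:16] = H[0][:16]                           # target: keep early part of RIR 1
delta = 1e-3                                 # fixed here; L-curve in the paper
g = np.linalg.solve(C.T @ C + delta * np.eye(C.shape[1]), C.T @ d)
eq = C @ g                                   # equalized (shortened) response
```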
Automatic Accent Assessment Using Phonetic Mismatch and Human Perception
IEEE Transactions on Audio Speech and Language Processing Pub Date: 2013-09-01 DOI: 10.1109/TASL.2013.2258011
F. William, A. Sangwan, J. Hansen
{"title":"Automatic Accent Assessment Using Phonetic Mismatch and Human Perception","authors":"F. William, A. Sangwan, J. Hansen","doi":"10.1109/TASL.2013.2258011","DOIUrl":"https://doi.org/10.1109/TASL.2013.2258011","url":null,"abstract":"In this study, a new algorithm for automatic accent evaluation of native and non-native speakers is presented. The proposed system consists of two main steps: alignment and scoring. In the alignment step, the speech utterance is processed using a Weighted Finite State Transducer (WFST) based technique to automatically estimate the pronunciation mismatches (substitutions, deletions, and insertions). Subsequently, in the scoring step, two scoring systems which utilize the pronunciation mismatches from the alignment phase are proposed: (i) a WFST-scoring system to measure the degree of accentedness on a scale from -1 (non-native like) to +1 (native like), and a (ii) Maximum Entropy (ME) based technique to assign perceptually motivated scores to pronunciation mismatches. The accent scores provided from the WFST-scoring system as well as the ME scoring system are termed as the WFST and P-WFST (perceptual WFST) accent scores, respectively. The proposed systems are evaluated on American English (AE) spoken by native and non-native (native speakers of Mandarin-Chinese) speakers from the CU-Accent corpus. A listener evaluation of 50 Native American English (N-AE) was employed to assist in validating the performance of the proposed accent assessment systems. The proposed P-WFST algorithm shows higher and more consistent correlation with human evaluated accent scores, when compared to the Goodness Of Pronunciation (GOP) measure. The proposed solution for accent classification and assessment based on WFST and P-WFST scores show that an effective advancement is possible which correlates well with human perception.","PeriodicalId":55014,"journal":{"name":"IEEE Transactions on Audio Speech and Language Processing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2013-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TASL.2013.2258011","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62889088","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 12
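The alignment-and-scoring pipeline can be caricatured with a plain dynamic-programming edit alignment in place of the WFST: count substitutions, deletions, and insertions between the canonical and recognized phone strings, then map the mismatch rate to a [-1, +1] scale. The linear mapping below is a hypothetical stand-in for the WFST/ME scoring systems.

```python
def edit_counts(ref, hyp):
    """Total substitutions + deletions + insertions (Levenshtein distance)."""
    D = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        for j in range(len(hyp) + 1):
            if i == 0:
                D[i][j] = j                                # j insertions
            elif j == 0:
                D[i][j] = i                                # i deletions
            else:
                D[i][j] = min(D[i - 1][j] + 1,             # deletion
                              D[i][j - 1] + 1,             # insertion
                              D[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1]))
    return D[len(ref)][len(hyp)]

def accent_score(ref, hyp):
    """Map mismatch rate to [-1, +1]; +1 is native-like (toy mapping)."""
    return 1.0 - 2.0 * edit_counts(ref, hyp) / max(len(ref), 1)

print(accent_score("k ae t".split(), "k ae t".split()))   # 1.0 (native-like)
print(accent_score("k ae t".split(), "k aa d".split()))   # ~ -0.33 (accented)
```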