2019 22nd Conference of the Oriental COCOSDA International Committee for the Co-ordination and Standardisation of Speech Databases and Assessment Techniques (O-COCOSDA): Latest Publications

Automatic Pronunciation Generator for Indonesian Speech Recognition System Based on Sequence-to-Sequence Model
Devin Hoesen, Fanda Yuliana Putri, D. Lestari
DOI: 10.1109/O-COCOSDA46868.2019.9041182
Abstract: A pronunciation dictionary plays an important role in a speech recognition system, and expert knowledge is required to build an accurate one by manually specifying the pronunciation of each word. Because the vocabulary keeps growing, especially for Indonesian, manual entry is impractical. Indonesian spelling-to-pronunciation rules are relatively regular, so pronunciations can plausibly be produced from predefined rules; nevertheless, the rules still contain a few irregularities for some spellings, and they cannot handle code-mixed words and abbreviations. In this paper, we employ a sequence-to-sequence (seq2seq) approach to generate a pronunciation for each word in an Indonesian dictionary. We demonstrate that this approach obtains a similar speech-recognition error rate while requiring only a fraction of the resources. Our cross-validation experiment on the resulting phonetic sequences achieves a 4.15-6.24% phone error rate (PER). When the automatically produced dictionary is used in a speech recognition system, word accuracy degrades by only 2.22 percentage points compared to the manually produced one. Creating a new large pronunciation dictionary with the proposed model is therefore more efficient and does not significantly degrade recognition accuracy.
Citations: 3
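The phone error rate reported above is conventionally computed as the Levenshtein alignment cost between predicted and reference phone sequences, normalized by reference length. A minimal sketch (not the authors' code; the example word is illustrative only):

```python
def phone_error_rate(ref, hyp):
    """PER: (substitutions + deletions + insertions) / len(ref),
    where the edit counts come from Levenshtein alignment."""
    # DP table: d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

# one substitution in a five-phone reference -> PER 0.2 (20%)
print(phone_error_rate(list("makan"), list("makam")))  # → 0.2
```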
RSL2019: A Realistic Speech Localization Corpus
R. Sheelvant, Bidisha Sharma, Maulik C. Madhavi, Rohan Kumar Das, S. Prasanna, Haizhou Li
DOI: 10.1109/O-COCOSDA46868.2019.9060842
Abstract: In this work, we present a new database for speech localization, the Realistic Speech Localization 2019 (RSL2019) corpus, designed for the study of sound source localization in real-world applications. The corpus is a continuing effort and presently contains 22.60 hours of speech data, recorded with a four-channel microphone array and played over a loudspeaker from different directions of arrival (DOA). We consider 180 speech utterances spoken by 6 speakers, selected from the RSR2015 database, played over a loudspeaker positioned at different angles and distances from the microphone array. The DOA varies from 0 to 360 degrees in 5-degree steps, at distances of 1 metre and 1.5 metres. From each position and DOA we also record white noise, to study robustness, and a time-stretched pulse, to derive the transfer function for speech localization algorithms. Furthermore, we present experimental results and analysis of a state-of-the-art sound source localization algorithm on RSL2019, using the open-source HARK toolkit. The database will be provided for research purposes upon request to the authors.
Citations: 4
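A corpus like this supports delay-based localization: the DOA follows from the inter-microphone time delay, the array geometry, and the speed of sound. HARK provides its own localization algorithms; as an assumed, simpler illustration of the front end involved, a GCC-PHAT delay estimator in NumPy:

```python
import numpy as np

def gcc_phat_delay(sig, ref, fs):
    """Estimate the time delay of `sig` relative to `ref` using the
    GCC-PHAT cross-correlation (phase-transform weighting keeps only
    spectral phase, which sharpens the correlation peak)."""
    n = len(sig) + len(ref)               # zero-pad to avoid circular wrap
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    R = SIG * np.conj(REF)
    R /= np.abs(R) + 1e-12                # PHAT weighting
    cc = np.fft.irfft(R, n=n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = int(np.argmax(np.abs(cc))) - max_shift
    return shift / fs

# simulate a 40-sample (2.5 ms at 16 kHz) inter-microphone delay
fs = 16000
rng = np.random.default_rng(0)
x = rng.standard_normal(2048)
y = np.roll(x, 40)
print(gcc_phat_delay(y, x, fs))  # → 0.0025
```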
An analysis of voice quality of Chinese patients with depression
Yuan Jia, Yuzhu Liang, T. Zhu
DOI: 10.1109/O-COCOSDA46868.2019.9060848
Abstract: In this study, we empirically explore how the voice quality of patients with depression (the experimental group) differs from that of healthy people (the control group) in terms of jitter, shimmer, harmonics-to-noise ratio (HNR), and pitch. Our analysis reveals that the patients' shimmer, maximum HNR, and minimum HNR differ significantly from the control group's: patients tend to have higher shimmer and lower maximum and mean HNR. To determine how far emotion influenced these results, we further investigate whether voice quality differs significantly across the emotions (positive, neutral, and negative) embedded in the read texts. No significant differences in hoarseness are found, indicating that voice quality is immune to emotion. We therefore conclude that, in general, the voices of patients with depression are hoarser than those of non-depressed people.
Citations: 2
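Jitter and shimmer, two of the measures compared above, have simple standard definitions over extracted pitch periods (the Praat-style "local" variants); the study's exact extraction settings are not given, so this is only a sketch of the measures themselves:

```python
import numpy as np

def jitter_local(periods):
    """Local jitter: mean absolute difference between consecutive
    glottal periods, divided by the mean period."""
    p = np.asarray(periods, dtype=float)
    return np.mean(np.abs(np.diff(p))) / np.mean(p)

def shimmer_local(amplitudes):
    """Local shimmer: the same ratio, computed on cycle peak amplitudes."""
    a = np.asarray(amplitudes, dtype=float)
    return np.mean(np.abs(np.diff(a))) / np.mean(a)

# perfectly regular voicing -> zero jitter;
# alternating cycle amplitudes -> high shimmer
print(jitter_local([0.008, 0.008, 0.008]))   # → 0.0
print(shimmer_local([1.0, 0.8, 1.0, 0.8]))   # ≈ 0.222
```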
XDF-REPA: A Densely Labeled Dataset toward Refined Pronunciation Assessment for English Learning
Yun Gao, Zhigang Ou, Jianfeng Cheng, Yong Ruan, Xiangdong Wang, Yueliang Qian
DOI: 10.1109/O-COCOSDA46868.2019.9041154
Abstract: Currently, most computer-assisted pronunciation training (CAPT) systems focus on overall scoring or mispronunciation detection. In this paper, we address refined pronunciation assessment (RPA), which aims to provide more detailed feedback to L2 learners. To meet the major challenge of the lack of densely labeled data, we present the XDF-REPA dataset, which is freely available to the public. The dataset contains 19,213 English word utterances by 18 Chinese adults; 4,200 audio clips from 9 speakers are densely labeled by 3 linguists with the intended phoneme, the actually uttered phoneme, a score for each phoneme, and an overall score for the word. To reduce differences between annotators, scoring rules combining subjective and objective criteria are defined. To demonstrate the use of the dataset and provide a baseline for other researchers, we develop and describe a prototype RPA system that adopts a DNN-HMM acoustic model and a variant of Goodness of Pronunciation (GOP) to produce all the corrective feedback RPA requires. Experimental results show error-detection accuracy of 80.1% to 85.1% across subsets and linguists, and actually-uttered-phoneme recognition accuracy of 70.9% to 80.8%.
Citations: 1
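The paper does not specify which GOP variant the prototype uses; a common frame-level approximation with DNN acoustic models, shown here as an assumed illustration, scores the intended phone's log posterior against the best competing phone:

```python
import numpy as np

def goodness_of_pronunciation(frame_logposts, intended, phone_index):
    """Frame-level GOP approximation: average, over the segment's frames,
    of log P(intended phone | frame) minus the best phone log posterior.
    `frame_logposts` is a (frames x phones) array from the acoustic model;
    the score is 0 when the intended phone wins every frame, and grows
    more negative as pronunciation degrades."""
    lp = np.asarray(frame_logposts, dtype=float)
    i = phone_index[intended]
    return float(np.mean(lp[:, i] - lp.max(axis=1)))

# toy two-phone example: the model favours "a" in both frames,
# so "a" scores 0 and "b" scores negative (flagged as an error)
phone_index = {"a": 0, "b": 1}
lp = np.log([[0.9, 0.1], [0.8, 0.2]])
print(goodness_of_pronunciation(lp, "a", phone_index))  # → 0.0
print(goodness_of_pronunciation(lp, "b", phone_index) < 0)  # → True
```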
voisTUTOR corpus: A speech corpus of Indian L2 English learners for pronunciation assessment
Chiranjeevi Yarra, Aparna Srinivasan, Chandana Srinivasa, Ritu Aggarwal, P. Ghosh
DOI: 10.1109/O-COCOSDA46868.2019.9041162
Abstract: This paper describes the voisTUTOR corpus, a pronunciation-assessment corpus of Indian second-language (L2) learners of English. It consists of 26,529 utterances totalling approximately 14 hours, recorded from 16 Indian L2 learners with six native languages: Kannada, Telugu, Tamil, Malayalam, Hindi, and Gujarati. A total of 1,676 unique stimuli were used, ranging from single words to multi-word stimuli containing simple, complex, and compound sentences. Every utterance carries an overall quality rating on a scale of 0 to 10. In addition, unlike existing corpora, a binary decision (0 or 1) indicates the quality of each of seven factors on which overall pronunciation typically depends: 1) intelligibility, 2) phoneme quality, 3) phoneme mispronunciation, 4) syllable-stress quality, 5) intonation quality, 6) correctness of pauses, and 7) mother-tongue influence. A spoken-English expert provided the ratings and binary decisions for all utterances. The corpus also contains recordings of all stimuli by a male and a female spoken-English expert. With its factor-wise binary decisions and expert recordings, voisTUTOR is unique among existing corpora. To the best of our knowledge, no such corpus exists for pronunciation assessment in Indian nativities.
Citations: 4
The effect of focus on trisyllabic syllable duration in Mandarin
Ziyu Xiong, Q. Lin, Maolin Wang, Zhouyu Chen
DOI: 10.1109/O-COCOSDA46868.2019.9041173
Abstract: In this study, the temporal pattern of trisyllabic sequences in Mandarin is investigated, in particular its interaction with focus. Mandarin has neutral-tone syllables, which are metrically weak (W), while non-neutral-tone syllables are metrically strong (S). Four types of trisyllabic sequences are examined: no neutral-tone syllable (SSS), a neutral tone in final position (SSW), a neutral tone on the second syllable (SWS), and neutral tones on both the second and final syllables (SWW). We find that when a sequence contains a neutral-tone syllable, the last non-neutral-tone syllable is the longest. Under focus, all syllables in the sequence lengthen; strong syllables lengthen more than weak ones, and among strong syllables, later syllables lengthen more than earlier ones.
Citations: 0
An acoustic-articulatory database of VCV sequences and words in Toda at different speaking rates
Shankar Narayanan, Aravind Illa, Nayan Anand, Ganesh Sinisetty, Karthick Narayanan, P. Ghosh
DOI: 10.1109/O-COCOSDA46868.2019.9041190
Abstract: We present a database of simultaneous acoustic and articulatory recordings of thirty V1CV2 nonsense words and forty-two Toda words, recorded with an electromagnetic articulograph and spoken by six Toda speakers (two male, four female) at four speaking rates: slow, normal, fast, and very fast. The vowels in V1CV2 (V1 ≠ V2) come from a set of six, /a/, /e/, /i/, /o/, /u/, and /y/, the last a front rounded vowel in Toda; the consonant is /p/ in all recordings. The articulatory data comprise the movements of five articulatory points in the midsagittal plane: upper lip, lower lip, jaw, tongue tip, and tongue dorsum. The acoustic and articulatory recordings are provided at 16 kHz and 100 Hz, respectively, along with vowel and consonant boundaries for the V1CV2 stimuli. Basic acoustic and articulatory analyses of the V1CV2 recordings show how the acoustic and articulatory spaces, as well as coarticulation, change with speaking rate. The database is suited to a number of studies, including the effect of speaking rate on the acoustic and articulatory aspects of coarticulation in Toda, labial kinematics during consonant production at different speaking rates, and acoustic-articulatory analysis of the Toda front rounded vowel.
Citations: 0
An acoustic study of affricates produced by L2 English learners in Harbin
Chenyang Zhao, Ai-jun Li, Zhiqiang Li, Ying Tang
DOI: 10.1109/O-COCOSDA46868.2019.9060844
Abstract: The present study focuses on the acquisition of English affricates by L2 learners in Harbin, where a major variety of Mandarin is spoken, and explores possible L1 interference behind learners' pronunciation deviations. Features of learners' productions are compared with native speakers' using several acoustic parameters, both to identify differences between the two groups and to assess whether similarities or differences between L1 and L2 sounds contribute more to L2 speech acquisition. Independent-sample t-tests and affricate acoustic patterns are used in the comparisons. The results show that English affricates are not completely acquired by L2 learners from Harbin. Specifically, learners' /tʃ/ has a longer duration of frication (DOF) and weaker plosion than native speakers', and their /dʒ/ is longer and stronger in its frication part. The similar GAP durations between the two groups indicate that the articulatory precision of /tʃ/ and /dʒ/ is well acquired. /tr/ and /dr/ are produced by L2 learners in a longer and tenser manner. According to the analysis, the unsatisfactory acquisition is caused by both similarities and differences between L1 and L2 linguistic features. Transfer Theory and the Speech Learning Model (SLM) are adopted to explain the results.
Citations: 0
Statistical studies on Japanese sonority by using loudness calibration scores
Takayuki Kagomiya
DOI: 10.1109/O-COCOSDA46868.2019.9041186
Abstract: This study examines Japanese sonority with a quantitative method, contributing to Japanese phonetics and phonology. Loudness calibration scores from the NTT-Tohoku University Speech Dataset for a Word Intelligibility Test based on Word Familiarity (FW03) were analyzed. Because the intensity of each monosyllable stored in FW03 was equalized, perceptual sound levels vary with the original intensities of the syllables; to adjust for this, calibration scores were estimated in a series of psychometric experiments. These scores reflect the difference between sound intensity and perceptual level and can be regarded as subjective sonority scores for Japanese monosyllables. Statistical analysis reveals that the sonority of Japanese vowels is primarily accounted for by their openness, that the sonority of consonants is affected by their articulation and voicing, and that monosyllables can be clustered by the openness of their vowels.
Citations: 0
An Investigation of Prosodic Features Related to Next Speaker Selection in Spontaneous Japanese Conversation
Y. Ishimoto, Takehiro Teraoka, M. Enomoto
DOI: 10.1109/O-COCOSDA46868.2019.9041205
Abstract: This study aims to reveal prosodic features related to next-speaker selection in spontaneous Japanese conversation. The turn-taking system proposed in conversation analysis, a field of sociology, offers a systematic account of speaker change. In a previous study, we demonstrated that the prosody of Japanese utterances is relevant to one component of that system, the turn-constructional component; it remained unclear whether prosody is also relevant to the other, the turn-allocation component. Focusing on next-speaker selection as one turn-allocation technique, we investigated the relationship between prosodic features and types of next-speaker selection in utterances. The results show that the F0 difference between the penultimate and final accent phrases of an utterance differs depending on whether the current utterance selects the next speaker, and that the power and mora duration of the final accent phrase differ depending on whether the current speaker self-selects. These findings suggest that prosodic features serve as cues for next-speaker selection in the current utterance.
Citations: 1