Phonetics and Speech Sciences — Latest Articles

How does focus-induced prominence modulate phonetic realizations for Korean word-medial stops?
Phonetics and Speech Sciences, Pub Date: 2020-12-01, DOI: 10.13064/KSSS.2020.12.4.057
Jiyoun Choi
Abstract: Previous research has indicated that the patterns of phonetic modulation induced by prominence are not consistent across languages but are conditioned by the sound system of a given language. Most studies examining prominence effects in Korean have been restricted to segments in word-initial and phrase-initial positions. The present study therefore set out to explore prominence effects on Korean stop consonants in word-medial intervocalic position. A total of 16 speakers of Seoul Korean (8 male, 8 female) produced word-medial intervocalic lenis and aspirated stops with and without prominence. Prominence was induced by contrastive focus on the phonation-type contrast, that is, lenis vs. aspirated stops. Our results showed that the F0 of vowels following both lenis and aspirated stops was higher when the target stops received focus than when they did not, whereas voice onset time (VOT) and voicing during stop closure did not differ between the focus and no-focus conditions for either stop type. The findings add to our understanding of the diverse patterns of prominence-induced strengthening in the acoustic realization of segments.
Citations: 0
Classification of muscle tension dysphonia (MTD) female speech and normal speech using cepstrum variables and random forest algorithm*
Phonetics and Speech Sciences, Pub Date: 2020-12-01, DOI: 10.13064/KSSS.2020.12.4.091
Joowon Yun, Hee-Jeong Shim, Cheol-jae Seong
Abstract: This study investigated the acoustic characteristics of the sustained vowel /a/ and sentence utterances produced by patients with muscle tension dysphonia (MTD), using cepstrum-based acoustic variables. Thirty-six women diagnosed with MTD and the same number of women with normal voice participated in the study, and the data were recorded and measured with ADSV™. The results demonstrated that cepstral peak prominence (CPP) and CPP_F0 were statistically significantly lower in the MTD group than in the control group. On the GRBAS scale, overall severity (G) was most prominent in the voice quality of the MTD patients, followed in order by roughness (R), breathiness (B), and strain (S). As these characteristics increased, a statistically significant negative correlation with CPP was observed. We then attempted to classify the MTD and control groups using the CPP and CPP_F0 variables. Statistical modeling with a random forest machine learning algorithm yielded much higher classification accuracy in the sentence reading task (100% on training data and 83.3% on test data), with CPP proving to play the more crucial role in both the vowel and sentence reading tasks.
Citations: 1
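As a rough illustration of the classification step described in this abstract, the sketch below fits scikit-learn's RandomForestClassifier on two cepstral features (CPP and CPP_F0). The feature values, class sizes, and split are synthetic placeholders, not the study's ADSV measurements; only the general workflow is shown.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# Each row: [CPP, CPP_F0]; label 1 = MTD, 0 = normal voice (synthetic placeholder data).
X_mtd = rng.normal(loc=[4.0, 3.5], scale=0.8, size=(36, 2))
X_norm = rng.normal(loc=[6.5, 5.5], scale=0.8, size=(36, 2))
X = np.vstack([X_mtd, X_norm])
y = np.array([1] * 36 + [0] * 36)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=500, random_state=0)
clf.fit(X_tr, y_tr)
print("test accuracy:", accuracy_score(y_te, clf.predict(X_te)))
print("feature importances (CPP, CPP_F0):", clf.feature_importances_)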
Perceptual cues for /o/ and /u/ in Seoul Korean
Phonetics and Speech Sciences, Pub Date: 2020-09-01, DOI: 10.13064/ksss.2020.12.3.001
Hi-Gyung Byun
Abstract: Previous studies have confirmed that /o/ and /u/ in Seoul Korean are undergoing a merger in the F1/F2 space, especially for female speakers. It has been reported that, as a substitute parameter for the formants, female speakers use phonation (H1-H2) differences to distinguish /o/ from /u/. This study aimed to explore whether H1-H2 values are also used as a perceptual cue for /o/-/u/. A perception test was conducted with 35 college students using /o/ and /u/ tokens spoken by 41 females, which overlap considerably in the vowel space. An acoustic analysis of the 182 stimuli was also conducted to see whether there is any correspondence between production and perception. The identification rate was 89% on average: 86% for /o/ and 91% for /u/. The results confirmed that when /o/ and /u/ are too close to be distinguished in the F1/F2 space, H1-H2 differences contribute significantly to the separation of the two vowels in production. In perception, however, this was not the case: H1-H2 values were not significantly involved in the identification process, and the formants (especially F2) were still the dominant cues. The study also showed that even though H1-H2 differences are apparent in females' production, males do not use H1-H2 in their production, and neither females nor males use H1-H2 in perception. It is presumed that H1-H2 has not yet developed into a perceptual cue for /o/ and /u/.
Citations: 3
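For readers unfamiliar with the H1-H2 measure discussed above, the sketch below estimates it as the amplitude difference in dB between the first and second harmonics of a windowed voice segment. The F0 value is assumed to come from a separate pitch tracker, and the simple peak picking omits the formant corrections that production tools apply.

import numpy as np

def h1_minus_h2(signal: np.ndarray, sr: int, f0: float) -> float:
    """Approximate H1-H2 (dB) from one voiced frame, given its fundamental frequency."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / sr)

    def peak_db(target_hz: float, tol: float = f0 * 0.2) -> float:
        band = (freqs > target_hz - tol) & (freqs < target_hz + tol)
        return 20.0 * np.log10(spectrum[band].max() + 1e-12)

    return peak_db(f0) - peak_db(2.0 * f0)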
Compromised feature normalization method for deep neural network based speech recognition*
Phonetics and Speech Sciences, Pub Date: 2020-09-01, DOI: 10.13064/ksss.2020.12.3.065
M. Kim, H. S. Kim
Abstract: Feature normalization reduces the effect of environmental mismatch between training and test conditions by normalizing the statistical characteristics of acoustic feature parameters. It yields excellent performance improvements in traditional Gaussian mixture model-hidden Markov model (GMM-HMM)-based speech recognition systems. In a deep neural network (DNN)-based speech recognition system, however, minimizing the effects of environmental mismatch does not necessarily lead to the best performance improvement. In this paper, we attribute the cause of this phenomenon to information loss due to excessive feature normalization. We investigate whether there is a feature normalization method that maximizes speech recognition performance by properly reducing the impact of environmental mismatch while preserving information useful for training acoustic models. To this end, we introduce mean and exponentiated variance normalization (MEVN), a compromise between mean normalization (MN) and mean and variance normalization (MVN), and compare the performance of a DNN-based speech recognition system in noisy and reverberant environments according to the degree of variance normalization. Experimental results reveal that a slight performance improvement is obtained with MEVN over MN and MVN, depending on the degree of variance normalization.
Citations: 0
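A minimal sketch of the compromise idea named above (MEVN), assuming it can be written as mean subtraction followed by division by the standard deviation raised to an exponent between 0 and 1; the exact formulation in the paper may differ. With the exponent at 0 this reduces to mean normalization (MN), and at 1 it becomes full mean and variance normalization (MVN).

import numpy as np

def mevn(features: np.ndarray, gamma: float = 0.5) -> np.ndarray:
    """features: (num_frames, num_dims) acoustic feature matrix for one utterance."""
    mu = features.mean(axis=0)
    sigma = features.std(axis=0) + 1e-8   # avoid division by zero
    return (features - mu) / (sigma ** gamma)

# Example: partially variance-normalized filterbank-like features (placeholder data).
feats = np.random.randn(300, 40) * 3.0 + 1.5
normalized = mevn(feats, gamma=0.5)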
A prosodic cue representing scopes of wh-phrases in Korean: Focusing on North Gyeongsang Korean*
Phonetics and Speech Sciences, Pub Date: 2020-09-01, DOI: 10.13064/ksss.2020.12.3.041
Weonhee Yun, Ki-tae Kim, Sunwoo Park
Abstract: A wh-phrase in an embedded sentence may take either an embedded or a matrix scope. The matrix-scope interpretation tends to be judged syntactically unacceptable unless the sentence is read with a wh-intonation. Previous studies have reported two prosodic differences between sentences with matrix and embedded scope. First, peak F0s in wh-phrases produced with an F0-compression wh-intonation are higher than those in indirect questions, and peak F0s in matrix verbs are lower than those in sentences with embedded scope. Second, a substantial F0 drop is found at the end of the embedded sentence in indirect questions, whereas no F0 reduction at the same point is observed in sentences with matrix scope produced with a high-plateau wh-intonation. These characteristics, however, were not found in our experiment. Instead, a more compelling difference lies in the value obtained by subtracting the F0 at the end of each word (or a word plus an ending or case marker) from the peak F0 of that word. Specifically, the gap between the peak F0 in the word containing the embedded verb and the F0 at the end of that word, which is a complementizer in Korean, is large in embedded wh-scope sentences and small in matrix wh-scope sentences.
Citations: 1
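A small sketch of the cue the authors found compelling: for each word, the peak F0 within the word minus the F0 at the word's end. It assumes an F0 contour and word-end times are already available from a pitch tracker and forced alignment; unvoiced frames are coded as 0 and skipped. All names here are illustrative.

import numpy as np

def peak_minus_final_f0(f0: np.ndarray, times: np.ndarray, word_end_times: list[float]) -> list[float]:
    """Return, per word, the peak F0 minus the last voiced F0 value before the word boundary."""
    cues = []
    start = times[0]
    for end in word_end_times:
        in_word = (times >= start) & (times <= end)
        f0_word = f0[in_word]
        f0_word = f0_word[f0_word > 0]          # drop unvoiced frames
        cues.append(float(f0_word.max() - f0_word[-1]) if f0_word.size else float("nan"))
        start = end
    return cues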
Comparison of vowel lengths of articles and monosyllabic nouns in Korean EFL learners’ noun phrase production in relation to their English proficiency
Phonetics and Speech Sciences, Pub Date: 2020-09-01, DOI: 10.13064/ksss.2020.12.3.033
Wooji Park, Ran Mo, S. Rhee
Abstract: The purpose of this research was to find the relation between Korean learners’ English proficiency and the ratio of the length of the stressed vowel in a monosyllabic noun to that of the unstressed vowel in the article of a noun phrase (e.g., “a cup”, “the bus”, etc.). Generally, the vowels in monosyllabic content words are phonetically more prominent than those in monosyllabic function words, as the former carry phrasal stress, making the vowels in content words longer in duration, higher in pitch, and louder in amplitude. This study, based on speech samples from the Korean-Spoken English Corpus (K-SEC) and the Rated Korean-Spoken English Corpus (Rated K-SEC), examined 879 English noun phrases, each composed of an article and a monosyllabic noun, drawn from sentences rated at four levels of proficiency. The vowel lengths in these 879 target NPs were measured, and the ratio of the vowel length in the noun to that in the article was calculated. The higher the proficiency level, the greater the mean ratio of the noun vowel to the article vowel, confirming the research hypothesis. The research thus concluded that the higher Korean English learners’ proficiency, the better they produce stressed and unstressed vowels with more conspicuous length differences between them.
Citations: 0
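The measure at the core of this study is a simple duration ratio, sketched below under the assumption that vowel durations for the noun and the article have already been segmented; the numbers in the example are placeholders, not corpus values.

def stress_ratio(noun_vowel_s: float, article_vowel_s: float) -> float:
    """Ratio of the stressed noun vowel's duration to the unstressed article vowel's duration."""
    return noun_vowel_s / article_vowel_s

print(stress_ratio(0.180, 0.060))  # ~3.0: a clearer stressed/unstressed contrast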
An analysis of emotional English utterances using the prosodic distance between emotional and neutral utterances
Phonetics and Speech Sciences, Pub Date: 2020-09-01, DOI: 10.13064/ksss.2020.12.3.025
S. Yi
Abstract: An analysis of emotional English utterances covering 7 emotions (calm, happy, sad, angry, fearful, disgust, surprised) was conducted by measuring the prosodic distance between 672 emotional and 48 neutral utterances. Applying the technique proposed in an automatic evaluation model of English pronunciation to emotional utterances, Euclidean distances for 3 prosodic elements (F0, intensity, and duration) extracted from emotional and neutral utterances were used. The analytical methods were further extended to include Euclidean distance normalization, z-scores, and z-score normalization, resulting in 4 groups of measurement schemes (sqrF0, sqrINT, sqrDUR; norsqrF0, norsqrINT, norsqrDUR; sqrzF0, sqrzINT, sqrzDUR; norsqrzF0, norsqrzINT, norsqrzDUR). Both the perceptual and the acoustic analysis of the emotional utterances consistently indicated the greater effectiveness of norsqrF0, norsqrINT, and norsqrDUR, the group that normalized the Euclidean measurement. Based on effect sizes estimated from the distance between emotional utterances and their neutral counterparts, the greatest emotion-induced acoustic change in prosodic information was found in F0, followed by duration and then intensity. A Tukey post hoc test revealed 4 homogeneous subsets (calm<disgust, sad<happy, surprised<surprised, angry, fearful) for norsqrF0 and 3 homogeneous subsets (surprised, happy, fearful, sad, calm<calm, angry<angry, disgust) for norsqrDUR. Furthermore, the analysis of each of the 7 emotions showed that the present results are in the same vein as those of the previous study.
Citations: 1
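A sketch of the basic distance measure behind the sqr*/norsqr* schemes listed above: the Euclidean distance between an emotional utterance's prosodic values and those of its neutral counterpart, optionally normalized by the number of points compared. The alignment of the two utterances and the exact normalization used in the paper are not reproduced here; this is only an illustrative assumption.

import numpy as np

def prosodic_distance(emotional: np.ndarray, neutral: np.ndarray, normalize: bool = True) -> float:
    """emotional, neutral: equal-length vectors of F0, intensity, or duration values."""
    d = float(np.sqrt(np.sum((emotional - neutral) ** 2)))
    return d / len(emotional) if normalize else d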
A comparison of Korean vowel formants in conditions of chanting and reading utterances*
Phonetics and Speech Sciences, Pub Date: 2020-09-01, DOI: 10.13064/ksss.2020.12.3.085
Jihye Park, Cheol-jae Seong
Abstract: Vowel articulation tends to be difficult for people with speech disorders. A chant method that properly reflects the characteristics of language could be an effective way of addressing this difficulty. The purpose of this study was to find out whether the chant method is effective as a means of enhancing vowel articulation. The subjects were 60 normal adults (30 males and 30 females) in their 20s and 30s whose native language is Korean. Eight utterance conditions, including chanting and reading conditions, were recorded and their acoustic data were analyzed. Analysis of the formant-related acoustic variables confirmed that the F1 and F2 values of the vowels increase and that the center of gravity of the vowel triangle moves significantly forward and downward under the chant method, in both the word and the phrase context. The results also showed that accent is the most influential musical factor in chant. There was no significant difference across the four repeated tokens, which increases the reliability of the results. In other words, chanting is an effective way to shift the center of gravity of the vowel triangle, which suggests that it can help improve speech intelligibility by promoting a desirable place of articulation.
Citations: 0
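The "center of gravity of the vowel triangle" referred to above can be illustrated as the centroid of the corner vowels in F1-F2 space, as in the sketch below; the formant values shown are placeholders rather than data from the study.

def vowel_triangle_centroid(corners: dict[str, tuple[float, float]]) -> tuple[float, float]:
    """corners: vowel label -> (F1, F2) in Hz; returns the (F1, F2) centroid."""
    f1s = [f1 for f1, _ in corners.values()]
    f2s = [f2 for _, f2 in corners.values()]
    return sum(f1s) / len(f1s), sum(f2s) / len(f2s)

reading = {"a": (800.0, 1300.0), "i": (300.0, 2300.0), "u": (350.0, 900.0)}
print(vowel_triangle_centroid(reading))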
Comparisons of voice quality parameter values measured with MDVP, Praat, and TF32
Phonetics and Speech Sciences, Pub Date: 2020-09-01, DOI: 10.13064/ksss.2020.12.3.073
Hyeju Ko, M. Woo, Yaelin Choi
Abstract: Measured values may differ between the Multi-Dimensional Voice Program (MDVP), Praat, and Time-Frequency Analysis software (TF32), all of which are widely used in voice quality analysis, because of differences in the algorithms each analyzer uses. This study therefore compared the values of normal-voice parameters measured with each analyzer. Tokens of the vowel /a/ were collected from 35 normal adult subjects (19 male and 16 female) and analyzed with MDVP, Praat, and TF32. The mean values obtained from Praat for the jitter variables (J local, J abs, J rap, and J ppq), the shimmer variables (S local, S dB, and S apq), and the noise-to-harmonics ratio (NHR) were significantly lower than those from MDVP in both males and females (p<.01). The mean values of J local, J abs, and S local decreased significantly in the order MDVP, Praat, TF32 in both genders. In conclusion, measured values differ across voice analyzers because of the different algorithms they use, so it is important for clinicians to understand the normal criteria used by each analyzer before analyzing pathologic voice with it in clinical practice.
Citations: 4
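As context for the parameters compared above, the sketch below implements the textbook definition of jitter (local): the mean absolute difference between consecutive glottal periods divided by the mean period, in percent. MDVP, Praat, and TF32 each add their own period extraction and smoothing on top of this, which is precisely why their reported values differ.

import numpy as np

def jitter_local(periods_s: np.ndarray) -> float:
    """periods_s: consecutive glottal period durations in seconds; returns jitter (local) in %."""
    diffs = np.abs(np.diff(periods_s))
    return 100.0 * diffs.mean() / periods_s.mean()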
Voice-to-voice conversion using transformer network*
Phonetics and Speech Sciences, Pub Date: 2020-09-01, DOI: 10.13064/ksss.2020.12.3.055
June-Woo Kim, H. Jung
Abstract: Voice conversion can be applied to various voice-processing applications and can also play an important role in data augmentation for speech recognition. The conventional approach combines voice conversion with speech synthesis, with the Mel filter bank as the main parameter. The Mel filter bank is well suited for fast neural-network computation, but it cannot be converted into a high-quality waveform without the aid of a vocoder, and it is not effective for obtaining data for speech recognition. In this paper, we focus on performing voice-to-voice conversion using only the raw spectrum. We propose a deep learning model based on the transformer network, which quickly learns the voice conversion properties using an attention mechanism between source and target spectral components. The experiments were performed on the TIDIGITS data, a series of numbers spoken by an English speaker. The converted voices were evaluated for naturalness and similarity using the mean opinion score (MOS) obtained from 30 participants. Our final results were 3.52±0.22 for naturalness and 3.89±0.19 for similarity.
Citations: 0
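A minimal sketch of the scaled dot-product attention that underlies the transformer model described above, applied here to arbitrary source and target spectral frames; the shapes, and the idea of attending directly between raw spectra, are illustrative assumptions rather than the authors' architecture.

import numpy as np

def scaled_dot_product_attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray) -> np.ndarray:
    """Q: (T_q, d), K/V: (T_k, d); returns (T_q, d) attention output."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))   # stable softmax
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

# Example: target-side queries attending over source spectrum frames (placeholder shapes).
src_frames = np.random.randn(120, 80)
tgt_queries = np.random.randn(100, 80)
converted = scaled_dot_product_attention(tgt_queries, src_frames, src_frames)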