Speech Prosody 2022: Latest Publications

Perception of the strength of prosodic breaks in three conditions: Explicit pause, implicit pause, and no pause
Speech Prosody 2022 Pub Date: 2022-05-23 DOI: 10.21437/speechprosody.2022-97
V. Silber-Varod, Ella Alfon, N. Amir
Abstract: In this study we examine the perceptual strength of prosodic boundaries in Hebrew speech. The stimuli consisted of 28 sequences of two inter-pausal units (IPUs) taken from the Map Task recordings in Hebrew. Listeners were exposed only to the silent pause following the first IPU (hence, explicit pauses), while the second pause was omitted (hence, implicit pauses), thus creating a stimulus model of IPU-pause-IPU. Ten female listeners labeled the strength of each break between adjacent words on a scale from 1 (no break) to 5 (strong break). Higher average scores were assigned to the implicit pauses than to the explicit ones; however, the scores for explicit pauses showed higher agreement between raters. Moreover, we found only a borderline-significant influence of explicit pause duration on the raters' scores. Regarding gender differences, the results suggest that raters' scores were higher when the speakers were female. Further, an interaction was found between the gender of the speaker and the gender of the recipient (i.e., the interlocutor). In particular, female speakers received a higher score overall, and for male speakers the rating was higher when they spoke to males than to females.
Citations: 0
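The 1-to-5 break-strength labeling described above can be summarized with per-boundary means and a simple spread-based agreement index. The sketch below uses an invented rating matrix (10 raters, 4 boundaries), not the authors' data:

```python
import numpy as np

# Hypothetical ratings: 10 raters x 4 word boundaries, on the study's
# 1 (no break) to 5 (strong break) scale. Numbers are invented.
ratings = np.array([
    [5, 1, 4, 2],
    [5, 1, 5, 1],
    [4, 2, 4, 2],
    [5, 1, 4, 1],
    [4, 1, 5, 2],
    [5, 2, 4, 1],
    [5, 1, 4, 2],
    [4, 1, 5, 1],
    [5, 1, 4, 2],
    [5, 2, 5, 1],
])

# Mean perceived break strength per boundary (averaged over raters).
mean_strength = ratings.mean(axis=0)

# A crude agreement index: standard deviation across raters per
# boundary -- lower spread means higher agreement on that boundary.
spread = ratings.std(axis=0)

print(mean_strength)  # strong breaks near 5, weak ones near 1
print(spread)
```

A published analysis would normally use a formal inter-rater statistic (e.g., Krippendorff's alpha) rather than raw spread; the standard deviation here is only a quick illustration of the agreement idea.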
Using prosody to organize the signal: Sensitivities across species set the stage for prosodic bootstrapping
Speech Prosody 2022 Pub Date: 2022-05-23 DOI: 10.21437/speechprosody.2022-1
J. M. Toro
Abstract: Prosody is a major source of information that both adults and infants use to organize the speech signal, from segmenting words to inferring syntactic structures. Here, I will explore the extent to which the ability to take advantage of prosodic cues that we observe in humans might emerge from sensitivities already present in other species. I will review recent studies along two lines of research. The first covers how listeners follow the principles described by the Iambic-Trochaic Law to group sounds. The second explores how they take advantage of sonority differences and natural prosodic contours to better identify words. Together, the evidence gathered so far suggests that, similarly to humans, non-human animals use certain acoustic cues present in the signal to extract difficult-to-find regularities. More broadly, it supports the idea that the general perceptual biases that form the basis for prosodic bootstrapping are already present in other animals. Importantly, in humans but not in other animals, such biases are combined with domain-specific representations that guide the discovery of linguistic structures.
Citations: 0
Affect Expression: Global and Local Control of Voice Source Parameters
Speech Prosody 2022 Pub Date: 2022-05-23 DOI: 10.21437/speechprosody.2022-107
Andy Murphy, Irena Yanushevskaya, A. N. Chasaide, C. Gobl
Abstract: This paper explores how the acoustic characteristics of the voice cue affect. It considers the proposition that the cueing of affect relies on variations in voice source parameters (including f0) that involve both global, uniform shifts across an utterance and local, within-utterance changes at prosodically relevant points. To test this, a perception test was conducted with stimuli in which modifications were made to the voice source parameters of a synthesised baseline utterance to target angry and sad renditions. The baseline utterance was generated with the ABAIR Irish TTS system, for one male and one female voice. The voice parameter manipulations drew on earlier production and perception experiments and involved three stimulus series: those with global, local, and combined global and local adjustments. Sixty-five listeners judged each stimulus as one of the following: angry, interested, no emotion, relaxed, or sad, and indicated how strongly any affect was perceived. Results broadly support the initial proposition, in that the most effective signalling of both angry and sad affect tended to involve the stimuli that combined global and local adjustments. However, stimuli targeting angry were often judged as interested, indicating that negative valence is not consistently cued by the manipulations in these stimuli.
Citations: 2
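The global vs. local manipulation design described above can be illustrated on an f0 contour. The contour values and scaling factors below are invented for illustration and are not the paper's stimulus parameters:

```python
import numpy as np

# Illustrative f0 contour (Hz) sampled at 10 points across an utterance.
f0 = np.array([120, 130, 145, 140, 135, 150, 160, 150, 135, 110], dtype=float)

# Global adjustment: a uniform shift applied across the whole utterance,
# e.g. raising f0 overall for an "angry" target (factor is assumed).
f0_global = f0 * 1.15

# Local adjustment: an extra change only at a prosodically relevant
# point -- here, the utterance-level f0 peak stands in for the accented
# syllable (a simplification of the paper's design).
f0_local = f0.copy()
peak = int(np.argmax(f0))
f0_local[peak] *= 1.25

# Combined: global shift plus the local boost, mirroring the third
# stimulus series (global + local) in the study.
f0_combined = f0 * 1.15
f0_combined[peak] *= 1.25

print(f0_global[peak], f0_local[peak], f0_combined[peak])
```

The actual stimuli also manipulated other voice source parameters besides f0; this sketch shows only the global/local distinction itself.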
Gender effects on perception of emotional speech- and visual-prosody in a second language: Emotion recognition in English-speaking films
Speech Prosody 2022 Pub Date: 2022-05-23 DOI: 10.21437/speechprosody.2022-126
S. Verheul, Adriana Hartman, Roselinde Supheert, Aoju Chen
Abstract: Speakers use both speech prosody and visual prosody (facial expressions, gestures, body postures) to express emotion. Receivers register and recognise emotion via both types of prosodic cues. In this study, we examined gender differences in both recognition of the type of emotion (e.g., anger vs. joy) and perceived emotionality (e.g., the degree of anger) expressed via speech prosody and visual prosody in a second language (L2). In a perception experiment using film scenes, proficient Dutch learners of English rated the emotionality of each protagonist and identified the specific type of emotion expressed by each protagonist in each scene, in both the visual-only and audio-only modality. We found no evidence of gender-related differences in perceived emotionality, possibly because participants had difficulty identifying with protagonists portrayed in a different society. However, female Dutch learners of English were more accurate than male learners in recognising the type of emotion from both speech prosody and visual prosody. These findings suggest that learners' ability to recognise the type of emotion transfers from the native language to the L2, and that female L2 learners may be better at learning the cues to emotion in L2 speech prosody.
Citations: 1
Effects of delayed auditory feedback interacting with prosodic structure
Speech Prosody 2022 Pub Date: 2022-05-23 DOI: 10.21437/speechprosody.2022-65
Jinyu Li, L. Lancia
Abstract: Speakers usually respond to delayed auditory feedback (DAF) by decreasing their speech rate (i.e., lengthening syllables). However, a syllable's position in the prosodic structure may affect its prominence and duration. In the present study, we investigated whether the lengthening effect of DAF on syllables depends on their position in French utterances. We analyzed recordings of several repetitions of three five-syllable French sentences from 10 French speakers under three DAF conditions (0, 60, and 120 ms). The results suggest that syllable duration is generally longer when DAF is present and increases with the DAF delay. Accented vowels are lengthened more by DAF than non-accented vowels in the same accentual group. Sentence-final vowels, which bear the nuclear pitch accent and may additionally be affected by final lengthening, are lengthened even more by DAF. Given that the extent of the lengthening effect is not correlated with original syllable duration, the greater lengthening of accented vowels cannot simply be due to the generally longer duration of these vowels. Overall, our results suggest that speakers' responses to DAF depend on a syllable's status in the prosodic hierarchy.
Citations: 2
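The core comparison in such a DAF study, mean syllable duration per delay condition, can be sketched as follows. The duration values are invented for illustration, not the study's measurements:

```python
import numpy as np

# Hypothetical durations (ms) of one accented vowel across repetitions,
# under the three DAF delays used in the study (0, 60, 120 ms).
durations = {
    0:   np.array([180, 175, 185, 190, 178]),
    60:  np.array([205, 210, 198, 215, 207]),
    120: np.array([230, 225, 240, 235, 228]),
}

# Mean duration per condition: under the reported pattern, lengthening
# should grow with the feedback delay.
means = {delay: d.mean() for delay, d in durations.items()}
print(means)
```

The study's actual analysis additionally conditions on prosodic position (accented vs. non-accented, sentence-final); this sketch shows only the delay effect.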
Acoustic correlates of Dutch lexical stress re-examined: Spectral tilt is not always more reliable than intensity
Speech Prosody 2022 Pub Date: 2022-05-23 DOI: 10.21437/speechprosody.2022-57
G. Severijnen, H. Bosker, J. McQueen
Abstract: The present study examined two acoustic cues in the production of lexical stress in Dutch: spectral tilt and overall intensity. Sluijter and Van Heuven (1996) reported that spectral tilt is a more reliable cue to stress than intensity. However, that study included only a small number of talkers (10) and only syllables with the vowels /aː/ and /ɔ/. The present study re-examined this issue in a larger and more variable dataset. We recorded 38 native speakers of Dutch (20 female) producing 744 tokens of segmentally overlapping Dutch words (e.g., VOORnaam vs. voorNAAM, "first name" vs. "respectable"), targeting 10 different vowels, in variable sentence contexts. For each syllable, we measured overall intensity and spectral tilt following Sluijter and Van Heuven (1996). Results from Linear Discriminant Analyses showed that, for the vowel /aː/ alone, spectral tilt showed an advantage over intensity, as evidenced by higher stressed/unstressed syllable classification accuracy. However, when all vowels were included in the analysis, the advantage disappeared. These findings confirm that spectral tilt plays a larger role in signaling stress in Dutch /aː/, but show that, for a larger sample of Dutch vowels, overall intensity and spectral tilt are equally important.
Citations: 0
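The single-cue classification analysis described above (LDA accuracy for stressed vs. unstressed syllables, one acoustic cue at a time) can be sketched with simulated data. The cue distributions below are invented, not the corpus measurements:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Simulated cue values per syllable: [overall intensity (dB), spectral
# tilt (dB)]. Means and spreads are assumptions for illustration only.
stressed = rng.normal(loc=[70.0, -8.0], scale=2.0, size=(200, 2))
unstressed = rng.normal(loc=[66.0, -14.0], scale=2.0, size=(200, 2))

X = np.vstack([stressed, unstressed])
y = np.array([1] * 200 + [0] * 200)  # 1 = stressed, 0 = unstressed

# Fit a separate LDA per cue and compare stressed/unstressed
# classification accuracy, analogous to the study's per-cue comparison.
acc = {}
for name, col in [("intensity", 0), ("tilt", 1)]:
    lda = LinearDiscriminantAnalysis().fit(X[:, [col]], y)
    acc[name] = lda.score(X[:, [col]], y)

print(acc)
```

In practice one would report held-out (cross-validated) accuracy rather than training accuracy, and run the analysis per vowel as the authors did.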
Social and situational factors of speaker variability in collaborative dialogues
Speech Prosody 2022 Pub Date: 2022-05-23 DOI: 10.21437/speechprosody.2022-93
Tatiana V. Kachkovskaia, A. Menshikova, D. Kocharov, Pavel Kholiavin, Anna Mamushina
Abstract: The acoustic features of a speaker's voice in dialogue are liable to change due to various situational factors, such as the success of communication, the social distance between the interlocutors, conversational roles, etc. This paper presents an analysis of variation in the basic prosodic features (pitch, intensity, and speech tempo) across speakers' gender, conversational role (information leader vs. follower), and social distance. The research is based on the SibLing speech corpus, which represents five degrees of social distance between interlocutors: dialogues between same-gender siblings, same-gender friends, same-gender and opposite-gender strangers, and strangers of different age and social status. Each pair of interlocutors played a card-matching game and performed a classical map task. Conversational role had a significant influence on all the analysed speech features: pitch, intensity, and speech tempo. Gender was not found to influence speech tempo, unlike pitch and loudness. Social distance played a significant role for speech tempo (e.g., it tends to be lower in dialogues with strangers of different age and social status) and, in interaction with other factors, for pitch and loudness. There was also a significant influence of the type of task: card-matching game vs. map task.
Citations: 2
Prosody and cognitive accessibility in left-detached topics: lessons from Nigerian Pidgin
Speech Prosody 2022 Pub Date: 2022-05-23 DOI: 10.21437/speechprosody.2022-4
E. Strickland, Anne Lacheret-Dujour, C. Simard
(No abstract available.)
Citations: 0
Mandarin Disyllabic Word Imitation in Children with and without Autism Spectrum Disorder
Speech Prosody 2022 Pub Date: 2022-05-23 DOI: 10.21437/speechprosody.2022-22
Tingbo Wang, Heng Ding
Abstract: Atypical pitch production and perception in individuals with autism spectrum disorder (ASD) have been reported mainly from non-tonal language backgrounds. In tonal languages such as Mandarin, pitch changes not only signal prosody at the sentence level but also contrast word meanings, known as tones, at the lexical level. It remains unclear whether children with ASD from tonal language backgrounds show a deficit in the use of pitch at both levels. Therefore, the current study aims to explore whether Mandarin-speaking children with ASD exhibit atypical lexical pitch production and whether their performance is influenced by semantic information in a disyllabic true- and pseudo-word imitation task. Results from acoustic analysis demonstrated significant differences in pitch and duration measures between both subject groups and word types.
Citations: 0
Children’s Use of Uptalk in Narratives
Speech Prosody 2022 Pub Date: 2022-05-23 DOI: 10.21437/speechprosody.2022-7
Yujia Song, Cynthia G. Clopper, Laura Wagner
Abstract: Uptalk refers to the use of rising intonation on declarative utterances. Previous research has shown that, at age 6, children use rising contours with declaratives more frequently than adults do, and this pattern appears to persist until age 14. However, it is unclear why such a trend persists. To gain a clearer developmental picture of uptalk, the present study analyzed the form and function of uptalk produced by children aged 6 to 7 and 10 to 11 years from the American Midwest, using a storytelling task. Contrary to previous findings, the results indicate that children of both age groups use uptalk in an adult-like way: they overwhelmingly favor L-H% over H-H% boundary tones and most strongly associate the contour with continuation. The lack of age differences suggests that children's use of uptalk is comparable to that of adults by age 6, at least in certain narrative contexts. The use of a familiar storytelling task may explain the greater success observed for children here than in previous studies, suggesting the importance of the elicitation task in investigations of child speech.
Citations: 0