Latest Interspeech Publications

The Prosody of Cheering in Sport Events
Interspeech Pub Date: 2022-09-18 DOI: 10.21437/interspeech.2022-10982
Marzena Żygis, Sarah Wesołek, Nina Hosseini-Kivanani, M. Krifka
{"title":"The Prosody of Cheering in Sport Events","authors":"Marzena Żygis, Sarah Wesołek, Nina Hosseini-Kivanani, M. Krifka","doi":"10.21437/interspeech.2022-10982","DOIUrl":"https://doi.org/10.21437/interspeech.2022-10982","url":null,"abstract":"Motivational speaking usually conveys a highly emotional message and its purpose is to invite action. The goal of this paper is to investigate the prosodic realization of one particular type of cheering, namely inciting cheering for single addressees in sport events (here, long-distance running), using the name of that person. 31 native speakers of German took part in the experiment. They were asked to cheer up an individual marathon runner in a sporting event represented by video by producing his or her name (1-5 syllables long). For reasons of comparison, the participants also produced the same names in isolation and carrier sentences. Our results reveal that speakers use different strategies to meet their motivational communicative goals: while some speakers produced the runners’ names by dividing them into syllables, others pronounced the names as quickly as possible putting more emphasis on the first syllable. A few speakers followed a mixed strategy. Contrary to our expectations, it was not the intensity that mostly contributes to the differences between the different speaking styles (cheering vs. neutral), at least in the methods we were using. Rather, participants employed higher fundamental frequency and longer duration when cheering for marathon runners.","PeriodicalId":73500,"journal":{"name":"Interspeech","volume":"1 1","pages":"5283-5287"},"PeriodicalIF":0.0,"publicationDate":"2022-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43143290","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
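
The acoustic comparison above rests on three per-utterance measures: fundamental frequency, duration, and intensity. The snippet below is a minimal sketch of how such measures could be extracted with librosa; the file names and the F0 search range are illustrative placeholders, not taken from the paper.

```python
import numpy as np
import librosa

def prosodic_measures(wav_path, fmin=75.0, fmax=500.0):
    """Return mean F0 (Hz), duration (s), and mean RMS level (dB) for one utterance."""
    y, sr = librosa.load(wav_path, sr=None)
    duration = len(y) / sr
    # F0 contour via probabilistic YIN; unvoiced frames come back as NaN.
    f0, _, _ = librosa.pyin(y, fmin=fmin, fmax=fmax, sr=sr)
    mean_f0 = float(np.nanmean(f0)) if np.any(~np.isnan(f0)) else float("nan")
    # RMS energy as a rough intensity proxy, converted to dB.
    rms = librosa.feature.rms(y=y)[0]
    mean_db = float(np.mean(librosa.amplitude_to_db(rms, ref=1.0)))
    return mean_f0, duration, mean_db

# Example usage: compare a cheered name against the same name read neutrally.
# cheered = prosodic_measures("name_cheering.wav")
# neutral = prosodic_measures("name_neutral.wav")
```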

Acoustic Stress Detection in Isolated English Words for Computer-Assisted Pronunciation Training
Interspeech Pub Date: 2022-09-18 DOI: 10.21437/interspeech.2022-197
Vera Bernhard, Sandra Schwab, J. Goldman
{"title":"Acoustic Stress Detection in Isolated English Words for Computer-Assisted Pronunciation Training","authors":"Vera Bernhard, Sandra Schwab, J. Goldman","doi":"10.21437/interspeech.2022-197","DOIUrl":"https://doi.org/10.21437/interspeech.2022-197","url":null,"abstract":"We propose a system for automatic lexical stress detection in isolated English words. It is designed to be part of the computer-assisted pronunciation training application MIAPARLE (“https://miaparle.unige.ch”) that specifically focuses on stress contrasts acquisition. Training lexical stress cannot be disregarded in language education as the accuracy in production highly affects the intelligibility and perceived fluency of an L2 speaker. The pipeline automatically segments audio input into syllables over which duration, intensity, pitch, and spectral information is calculated. Since the stress of a syllable is defined relative to its neighboring syllables, the values obtained over the syllables are complemented with differential values to the preceding and following syllables. The resulting feature vectors, retrieved from 1011 recordings of single words spoken by English natives, are used to train a Voting Classifier composed of four supervised classifiers, namely a Support Vector Machine, a Neural Net, a K Nearest Neighbor, and a Random Forest classifier. The approach determines syllables of a single word as stressed or unstressed with an F1 score of 94% and an accuracy of 96%.","PeriodicalId":73500,"journal":{"name":"Interspeech","volume":"1 1","pages":"3143-3147"},"PeriodicalIF":0.0,"publicationDate":"2022-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49006649","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
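
The classification stage described above (an ensemble of an SVM, a neural net, a k-NN and a random forest voting over per-syllable feature vectors) maps directly onto scikit-learn's VotingClassifier. The sketch below assumes the syllable feature matrix X and the binary stress labels y have already been computed; the file paths and hyperparameters are illustrative, not the paper's.

```python
import numpy as np
from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# X: one row per syllable (duration, intensity, pitch and spectral features plus
# differential values to the neighbouring syllables); y: 1 = stressed, 0 = unstressed.
X = np.load("syllable_features.npy")  # placeholder path
y = np.load("stress_labels.npy")      # placeholder path

clf = VotingClassifier(
    estimators=[
        ("svm", make_pipeline(StandardScaler(), SVC(probability=True))),
        ("mlp", make_pipeline(StandardScaler(), MLPClassifier(max_iter=1000))),
        ("knn", make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))),
        ("rf", RandomForestClassifier(n_estimators=200)),
    ],
    voting="soft",  # average predicted probabilities across the four models
)

print(cross_val_score(clf, X, y, cv=5, scoring="f1").mean())
```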

Non-intrusive Speech Quality Assessment with a Multi-Task Learning based Subband Adaptive Attention Temporal Convolutional Neural Network
Interspeech Pub Date: 2022-09-18 DOI: 10.21437/interspeech.2022-10315
Xiaofeng Shu, Yanjie Chen, Chuxiang Shang, Yan Zhao, Chengshuai Zhao, Yehang Zhu, Chuanzeng Huang, Yuxuan Wang
{"title":"Non-intrusive Speech Quality Assessment with a Multi-Task Learning based Subband Adaptive Attention Temporal Convolutional Neural Network","authors":"Xiaofeng Shu, Yanjie Chen, Chuxiang Shang, Yan Zhao, Chengshuai Zhao, Yehang Zhu, Chuanzeng Huang, Yuxuan Wang","doi":"10.21437/interspeech.2022-10315","DOIUrl":"https://doi.org/10.21437/interspeech.2022-10315","url":null,"abstract":"In terms of subjective evaluations, speech quality has been gen-erally described by a mean opinion score (MOS). In recent years, non-intrusive speech quality assessment shows an active progress by leveraging deep learning techniques. In this paper, we propose a new multi-task learning based model, termed as subband adaptive attention temporal convolutional neural network (SAA-TCN), to perform non-intrusive speech quality assessment with the help of MOS value interval detector (VID) auxiliary task. Instead of using fullband magnitude spectrogram, the proposed model takes subband magnitude spectrogram as the input to reduce model parameters and prevent overfitting. To effectively utilize the energy distribution information along the subband frequency dimension, subband adaptive attention (SAA) is employed to enhance the TCN model. Experimental results reveal that the proposed method achieves a superior performance on predicting the MOS values. In ConferencingSpeech 2022 Challenge, our method achieves a mean Pearson’s correlation coefficient (PCC) score of 0.763 and outperforms the challenge baseline method by 0.233.","PeriodicalId":73500,"journal":{"name":"Interspeech","volume":"1 1","pages":"3298-3302"},"PeriodicalIF":0.0,"publicationDate":"2022-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49153770","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1
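
The model takes a subband magnitude spectrogram rather than a fullband one as input. Below is a minimal sketch of that preprocessing step, assuming an STFT front end and an even split of the frequency bins; the frame settings and number of subbands are placeholders, not the paper's configuration.

```python
import numpy as np
import librosa

def subband_magnitude_spectrogram(wav_path, n_fft=512, hop_length=256, n_subbands=4):
    """Split an STFT magnitude spectrogram into equal-width frequency subbands.

    Returns an array of shape (n_subbands, bins_per_subband, frames).
    """
    y, _ = librosa.load(wav_path, sr=16000)
    mag = np.abs(librosa.stft(y, n_fft=n_fft, hop_length=hop_length))  # (1 + n_fft/2, frames)
    usable = (mag.shape[0] // n_subbands) * n_subbands
    mag = mag[:usable]  # drop leftover bins so the split is even
    return np.stack(np.split(mag, n_subbands, axis=0))

# Each subband can then feed its own branch of a TCN-style model, with an
# attention module weighting the subbands by their energy distribution.
```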

On Breathing Pattern Information in Synthetic Speech
Interspeech Pub Date: 2022-09-18 DOI: 10.21437/interspeech.2022-10271
Z. Mostaani, M. Magimai.-Doss
{"title":"On Breathing Pattern Information in Synthetic Speech","authors":"Z. Mostaani, M. Magimai.-Doss","doi":"10.21437/interspeech.2022-10271","DOIUrl":"https://doi.org/10.21437/interspeech.2022-10271","url":null,"abstract":"The respiratory system is an integral part of human speech production. As a consequence, there is a close relation between respiration and speech signal, and the produced speech signal carries breathing pattern related information. Speech can also be generated using speech synthesis systems. In this paper, we investigate whether synthetic speech carries breathing pattern related information in the same way as natural human speech. We address this research question in the framework of logical-access presentation attack detection using embeddings extracted from neural networks pre-trained for speech breathing pattern estimation. Our studies on ASVSpoof 2019 challenge data show that there is a clear distinction between the extracted breathing pattern embedding of natural human speech and syn-thesized speech, indicating that speech synthesis systems tend to not carry breathing pattern related information in the same way as human speech. Whilst, this is not the case with voice conversion of natural human speech.","PeriodicalId":73500,"journal":{"name":"Interspeech","volume":"1 1","pages":"2768-2772"},"PeriodicalIF":0.0,"publicationDate":"2022-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48554971","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 2
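
The detection framework above operates on breathing-pattern embeddings rather than raw audio. As a hedged illustration only (not the authors' classifier), the sketch below trains a simple logistic-regression detector on precomputed embeddings and reports an equal error rate; the embedding files and the label convention are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve

# Precomputed breathing-pattern embeddings, one row per utterance (assumed files).
# y = 1 for bona fide human speech, 0 for synthesized/spoofed speech (assumed convention).
X_train, y_train = np.load("emb_train.npy"), np.load("lab_train.npy")
X_eval, y_eval = np.load("emb_eval.npy"), np.load("lab_eval.npy")

detector = LogisticRegression(max_iter=1000).fit(X_train, y_train)
scores = detector.predict_proba(X_eval)[:, 1]  # higher = more likely bona fide

# Equal error rate: the operating point where false-accept and false-reject rates meet.
fpr, tpr, _ = roc_curve(y_eval, scores)
fnr = 1.0 - tpr
eer = fpr[np.nanargmin(np.abs(fnr - fpr))]
print(f"EER: {eer:.3f}")
```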

Cross-modal Transfer Learning via Multi-grained Alignment for End-to-End Spoken Language Understanding
Interspeech Pub Date: 2022-09-18 DOI: 10.21437/interspeech.2022-11378
Yi Zhu, Zexun Wang, Hang Liu, Pei-Hsin Wang, Mingchao Feng, Meng Chen, Xiaodong He
{"title":"Cross-modal Transfer Learning via Multi-grained Alignment for End-to-End Spoken Language Understanding","authors":"Yi Zhu, Zexun Wang, Hang Liu, Pei-Hsin Wang, Mingchao Feng, Meng Chen, Xiaodong He","doi":"10.21437/interspeech.2022-11378","DOIUrl":"https://doi.org/10.21437/interspeech.2022-11378","url":null,"abstract":"End-to-end spoken language understanding (E2E-SLU) has witnessed impressive improvements through cross-modal (text-to-audio) transfer learning. However, current methods mostly focus on coarse-grained sequence-level text-to-audio knowledge transfer with simple loss, and neglecting the fine-grained temporal alignment between the two modalities. In this work, we propose a novel multi-grained cross-modal transfer learning framework for E2E-SLU. Specifically, we devise a cross attention module to align the tokens of text with the frame features of speech, encouraging the model to target at the salient acoustic features attended to each token during transferring the semantic information. We also leverage contrastive learning to facilitate cross-modal representation learning in sentence level. Finally, we explore various data augmentation methods to mitigate the deficiency of large amount of labelled data for the training of E2E-SLU. Extensive experiments are conducted on both English and Chinese SLU datasets to verify the effectiveness of our proposed approach. Experimental results and detailed analyses demonstrate the superiority and competitiveness of our model.","PeriodicalId":73500,"journal":{"name":"Interspeech","volume":"1 1","pages":"1131-1135"},"PeriodicalIF":0.0,"publicationDate":"2022-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44733411","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 5
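
The core alignment idea, text tokens attending over speech frame features, can be expressed with a standard multi-head cross-attention layer. The PyTorch sketch below is an illustration under assumed dimensions, not the authors' implementation; the sentence-level contrastive loss and data augmentation are omitted.

```python
import torch
import torch.nn as nn

class CrossModalAligner(nn.Module):
    """Text tokens (queries) attend over speech frames (keys/values)."""

    def __init__(self, d_model=256, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, text_tokens, speech_frames, speech_padding_mask=None):
        # text_tokens:   (batch, n_tokens, d_model)
        # speech_frames: (batch, n_frames, d_model)
        aligned, attn_weights = self.attn(
            query=text_tokens,
            key=speech_frames,
            value=speech_frames,
            key_padding_mask=speech_padding_mask,
        )
        # Residual connection keeps the original token semantics.
        return self.norm(text_tokens + aligned), attn_weights

# Example with random tensors standing in for encoder outputs.
aligner = CrossModalAligner()
text = torch.randn(2, 12, 256)     # 12 text tokens per utterance
speech = torch.randn(2, 200, 256)  # 200 acoustic frames per utterance
out, weights = aligner(text, speech)
print(out.shape, weights.shape)    # (2, 12, 256) (2, 12, 200)
```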

Neural correlates of acoustic and semantic cues during speech segmentation in French
Interspeech Pub Date: 2022-09-18 DOI: 10.21437/interspeech.2022-10986
Maria del Mar Cordero, Ambre Denis-Noël, E. Spinelli, F. Meunier
{"title":"Neural correlates of acoustic and semantic cues during speech segmentation in French","authors":"Maria del Mar Cordero, Ambre Denis-Noël, E. Spinelli, F. Meunier","doi":"10.21437/interspeech.2022-10986","DOIUrl":"https://doi.org/10.21437/interspeech.2022-10986","url":null,"abstract":"Natural speech is highly complex and variable. Particularly, spoken language, in contrast to written language, has no clear word boundaries. Adult listeners can exploit different types of information to segment the continuous stream such as acoustic and semantic information. However, the weight of these cues, when co-occurring, remains to be determined. Behavioural tasks are not conclusive on this point as they focus participants ’ attention on certain sources of information, thus biasing the results. Here, we looked at the processing of homophonic utterances such as l’amie vs la mie (both /lami/) which include fine acoustic differences and for which the meaning changes depending on segmentation. To examine the perceptual resolution of such ambiguities when semantic information is available, we measured the online processing of sentences containing such sequences in an ERP experiment involving no active task. In a congruent context, semantic information matched the acoustic signal of the word amie, while, in the incongruent condition, the semantic information carried by the sentence and the acoustic signal were leading to different lexical candidates. No clear neural markers for the use of acoustic cues were found. Our results suggest a preponderant weight of semantic information over acoustic information during natural spoken sentence processing.","PeriodicalId":73500,"journal":{"name":"Interspeech","volume":"1 1","pages":"4058-4062"},"PeriodicalIF":0.0,"publicationDate":"2022-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41513074","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0

Complex sounds and cross-language influence: The case of ejectives in Omani Mehri
Interspeech Pub Date: 2022-09-18 DOI: 10.21437/interspeech.2022-10199
Rachid Ridouane, Philipp Buech
{"title":"Complex sounds and cross-language influence: The case of ejectives in Omani Mehri","authors":"Rachid Ridouane, Philipp Buech","doi":"10.21437/interspeech.2022-10199","DOIUrl":"https://doi.org/10.21437/interspeech.2022-10199","url":null,"abstract":"Ejective consonants are known to considerably vary both cross-linguistically and within individual languages. This variability is often considered a consequence of the complex articulatory strategies involved in their production. Because they are complex, they might be particularly prone to sound change, especially under cross-language influence. In this study, we consider the production of ejectives in Mehri, a Semitic endangered language spoken in Oman where considerable influence from Arabic is expected. We provide acoustic data from seven speakers producing a list of items contrasting ejective and pulmonic alveolar and velar stops in word-initial (/#—/), word-medial (V—V), and word-final (V—#) positions. Different durational and non-durational correlates were examined. The relative importance of these correlates was quantified by the calculation of D-prime values for each. The key empirical finding is that the parameters used to signal ejectivity differ depending mainly on whether the stop is alveolar or velar. Specifically, ejective alveolar stops display characteristics of pharyngealization, similar to Arabic, but velars still maintain attributes of ejectivity in some word positions. We interpret these results as diagnostic of the sound change that is currently in progress, coupled with an ongoing context-dependent neutralization.","PeriodicalId":73500,"journal":{"name":"Interspeech","volume":"1 1","pages":"3433-3437"},"PeriodicalIF":0.0,"publicationDate":"2022-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41455931","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
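
The relative importance of each acoustic correlate is quantified with D-prime values. One common formulation for two categories with unequal variances is d' = |mu1 - mu2| / sqrt((s1^2 + s2^2) / 2); the sketch below applies it to a single correlate, with synthetic numbers standing in for the paper's measurements.

```python
import numpy as np

def d_prime(a, b):
    """Separation of two sample distributions in pooled-standard-deviation units."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2.0)
    return abs(a.mean() - b.mean()) / pooled_sd

# Illustrative values only, e.g. closure duration (ms) for ejective vs. pulmonic stops.
ejective = [95, 102, 88, 110, 97]
pulmonic = [70, 75, 68, 82, 73]
print(round(d_prime(ejective, pulmonic), 2))
```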

Syllable sequence of /a/+/ta/ can be heard as /atta/ in Japanese with visual or tactile cues
Interspeech Pub Date: 2022-09-18 DOI: 10.21437/interspeech.2022-10099
T. Arai, Miho Yamada, Megumi Okusawa
{"title":"Syllable sequence of /a/+/ta/ can be heard as /atta/ in Japanese with visual or tactile cues","authors":"T. Arai, Miho Yamada, Megumi Okusawa","doi":"10.21437/interspeech.2022-10099","DOIUrl":"https://doi.org/10.21437/interspeech.2022-10099","url":null,"abstract":"In our previous work, we reported that the word /atta/ with a geminate consonant differs from the syllable sequence /a/+pause+/ta/ in Japanese; specifically, there are formant transitions at the end of the first syllable in /atta/ but not in /a/+pause+/ta/. We also showed that native Japanese speakers perceived /atta/ when a facial video of /atta/ was synchronously played with an audio signal of /a/+pause+/ta/. In that study, we utilized two video clips for the two utterances in which the speaker was asked to control only the timing of the articulatory closing. In that case, there was no guarantee that the videos would be the exactly same except for the timing. Therefore, in the current study, we use a physical model of the human vocal tract with a miniature robot hand unit to achieve articulatory movements for visual cues. We also provide tactile cues to the listener’s finger because we want to test whether cues of another modality affect this perception in the same framework. Our findings showed that when either visual or tactile cues were presented with an audio stimulus, listeners more frequently responded that they heard /atta/ compared to audio-only presentations.","PeriodicalId":73500,"journal":{"name":"Interspeech","volume":"302 ","pages":"3083-3087"},"PeriodicalIF":0.0,"publicationDate":"2022-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41331666","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0

Cross-Lingual Transfer Learning Approach to Phoneme Error Detection via Latent Phonetic Representation
Interspeech Pub Date: 2022-09-18 DOI: 10.21437/interspeech.2022-10228
Jovan M. Dalhouse, K. Itou
{"title":"Cross-Lingual Transfer Learning Approach to Phoneme Error Detection via Latent Phonetic Representation","authors":"Jovan M. Dalhouse, K. Itou","doi":"10.21437/interspeech.2022-10228","DOIUrl":"https://doi.org/10.21437/interspeech.2022-10228","url":null,"abstract":"Extensive research has been conducted on CALL systems for Pronunciation Error detection to automate language improvement through self-evaluation. However, many of these previous approaches have relied on HMM or Neural Network Hybrid Models which, although have proven to be effective, often utilize phonetically labelled L2 speech data which is ex-pensive and often scarce. This paper discusses a ”zero-shot” transfer learning approach to detect phonetic errors in L2 English speech by Japanese native speakers using solely unaligned phonetically labelled native language speech. The proposed method introduces a simple base architecture which utilizes the XLSR-Wav2Vec2.0 model pre-trained on unlabelled multilingual speech. Phoneme mapping for each language is determined based on difference of articulation of similar phonemes. This method achieved a Phonetic Error Rate of 0.214 on erroneous L2 speech after fine-tuning on 70 hours of speech with low resource automated phonetic labelling, and proved to ad-ditionally model phonemes of the native language of the L2 speaker effectively without the need for L2 speech fine-tuning.","PeriodicalId":73500,"journal":{"name":"Interspeech","volume":"1 1","pages":"3133-3137"},"PeriodicalIF":0.0,"publicationDate":"2022-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41397632","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1
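
The base architecture builds on XLSR-Wav2Vec2.0 pre-trained on unlabelled multilingual speech. The sketch below shows one way to pull frame-level latent phonetic representations from the public facebook/wav2vec2-large-xlsr-53 checkpoint with Hugging Face Transformers and to attach a linear phoneme classifier on top; the audio file, the phoneme inventory size and the classifier head are assumptions, not the paper's model.

```python
import torch
import torch.nn as nn
import torchaudio
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

# Standard wav2vec 2.0 front end: raw 16 kHz waveform, zero-mean/unit-variance normalized.
extractor = Wav2Vec2FeatureExtractor(feature_size=1, sampling_rate=16000,
                                     padding_value=0.0, do_normalize=True)
backbone = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-large-xlsr-53")
phoneme_head = nn.Linear(backbone.config.hidden_size, 50)  # 50 = assumed phoneme inventory

waveform, sr = torchaudio.load("utterance.wav")  # placeholder file
waveform = torchaudio.functional.resample(waveform, sr, 16000).mean(dim=0)

inputs = extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    frames = backbone(inputs.input_values).last_hidden_state  # (1, n_frames, hidden)
    logits = phoneme_head(frames)                             # (1, n_frames, 50)
print(logits.shape)
```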

Biometric Russian Audio-Visual Extended MASKS (BRAVE-MASKS) Corpus: Multimodal Mask Type Recognition Task
Interspeech Pub Date: 2022-09-18 DOI: 10.21437/interspeech.2022-10240
M. Markitantov, E. Ryumina, D. Ryumin, A. Karpov
{"title":"Biometric Russian Audio-Visual Extended MASKS (BRAVE-MASKS) Corpus: Multimodal Mask Type Recognition Task","authors":"M. Markitantov, E. Ryumina, D. Ryumin, A. Karpov","doi":"10.21437/interspeech.2022-10240","DOIUrl":"https://doi.org/10.21437/interspeech.2022-10240","url":null,"abstract":"In this paper, we present a new multimodal corpus called Biometric Russian Audio-Visual Extended MASKS (BRAVE-MASKS), which is designed to analyze voice and facial characteristics of persons wearing various masks, as well as to develop automatic systems for bimodal verification and identification of speakers. In particular, we tackle the multimodal mask type recognition task (6 classes). As a result, audio, visual and multimodal systems were developed, which showed UAR of 54.83%, 72.02% and 82.01%, respectively, on the Test set. These performances are the baseline for the BRAVE-MASKS corpus to compare the follow-up approaches with the proposed systems.","PeriodicalId":73500,"journal":{"name":"Interspeech","volume":"1 1","pages":"1756-1760"},"PeriodicalIF":0.0,"publicationDate":"2022-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49580219","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 3
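
The systems above are compared by UAR (unweighted average recall), i.e. recall averaged over the six mask-type classes regardless of how frequent each class is. With scikit-learn this is simply a macro-averaged recall, as in the minimal sketch below (the labels are illustrative, not corpus data).

```python
from sklearn.metrics import recall_score

# Six mask-type classes; illustrative ground truth and predictions only.
y_true = [0, 0, 1, 2, 3, 4, 5, 5, 2, 1]
y_pred = [0, 1, 1, 2, 3, 4, 5, 4, 2, 1]

# UAR = mean of per-class recalls, so rare classes count as much as frequent ones.
uar = recall_score(y_true, y_pred, average="macro")
print(f"UAR: {uar:.2%}")
```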