Latest publications: 2007 IEEE Workshop on Automatic Speech Recognition & Understanding (ASRU)

The LIMSI QAst systems: Comparison between human and automatic rules generation for question-answering on speech transcriptions
2007 IEEE Workshop on Automatic Speech Recognition & Understanding (ASRU) Pub Date: 2007-12-01 DOI: 10.1109/ASRU.2007.4430188
S. Rosset, Olivier Galibert, G. Adda, Eric Bilinski
Abstract: In this paper, we present two different question-answering systems on speech transcripts. These two systems are based on a complete and multi-level analysis of both queries and documents. The first system uses handcrafted rules for small text fragment (snippet) selection and answer extraction. The second one replaces the handcrafting with an automatically generated research descriptor. A score based on those descriptors is used to select documents and snippets. The extraction and scoring of candidate answers is based on proximity measurements within the research descriptor elements and a number of secondary factors. The preliminary results obtained on QAst (QA on speech transcripts) development data are promising, ranging from 72% correct answers at first rank on manually transcribed meeting data to 94% on manually transcribed lecture data.
Citations: 8
Deriving salient learners’ mispronunciations from cross-language phonological comparisons
2007 IEEE Workshop on Automatic Speech Recognition & Understanding (ASRU) Pub Date: 2007-12-01 DOI: 10.1109/ASRU.2007.4430152
H. Meng, Y. Lo, Lan Wang, W. Lau
Abstract: This work aims to derive salient mispronunciations made by Chinese (L1 being Cantonese) learners of English (L2 being American English) in order to support the design of pedagogical and remedial instructions. Our approach is grounded in the theory of language transfer and involves systematic phonological comparison between the two languages to predict possible phonetic confusions that may lead to mispronunciations. We collect a corpus of speech recordings from 21 Cantonese learners of English. We develop an automatic speech recognizer by training cross-word triphone models on the TIMIT corpus. We also develop an "extended" pronunciation lexicon that incorporates the predicted phonetic confusions to generate additional, erroneous pronunciation variants for each word. The extended pronunciation lexicon is used to produce a confusion network in recognition of the English speech recordings of Cantonese learners. We refer to the statistics of the erroneous recognition outputs to derive salient mispronunciations that support the predictions based on phonological comparison.
Citations: 82
Discriminative training of multi-state barge-in models
2007 IEEE Workshop on Automatic Speech Recognition & Understanding (ASRU) Pub Date: 2007-12-01 DOI: 10.1109/ASRU.2007.4430137
A. Ljolje, Vincent Goffin
Abstract: A barge-in system designed to reflect the design of the acoustic model used in commercial applications has been built and evaluated. It uses standard hidden Markov model structures, cepstral features, and multiple hidden Markov models for both the speech and non-speech parts of the model. It is tested on a large number of real-world databases using noisy speech onset positions determined by forced alignment of lexical transcriptions with the recognition model. The ML-trained model achieves low false rejection rates at the expense of high false acceptance rates. Discriminative training, using a modified algorithm based on the maximum mutual information criterion, halves the false acceptance rates while preserving the low false rejection rates. Combining an energy-based voice activity detector with the hidden Markov model based barge-in models achieves the best performance.
Citations: 3
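The energy-based voice activity detector mentioned in the abstract can be sketched in a few lines. The frame length, threshold, and signal values below are illustrative assumptions, not the paper's settings:

```python
# Minimal energy-based voice activity detector of the kind the paper combines
# with HMM barge-in models. Frame length, threshold, and signals are
# illustrative assumptions, not the paper's settings.

def energy_vad(samples, frame_len=160, threshold=0.01):
    """Return one boolean per frame: True where average energy looks speech-like."""
    decisions = []
    for i in range(0, len(samples) - frame_len + 1, frame_len):
        frame = samples[i:i + frame_len]
        energy = sum(s * s for s in frame) / frame_len
        decisions.append(energy > threshold)
    return decisions

silence = [0.0] * 160            # one frame of silence
speech = [0.5, -0.5] * 80        # one frame with average energy 0.25
print(energy_vad(silence + speech))  # [False, True]
```

In a real barge-in system this per-frame decision would gate, or be fused with, the HMM-based speech/non-speech hypothesis rather than stand alone.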
Lattice-based Viterbi decoding techniques for speech translation
2007 IEEE Workshop on Automatic Speech Recognition & Understanding (ASRU) Pub Date: 2007-12-01 DOI: 10.1109/ASRU.2007.4430143
G. Saon, M. Picheny
Abstract: We describe a cardinal-synchronous Viterbi decoder for statistical phrase-based machine translation which can operate on general ASR lattices (as opposed to confusion networks). The decoder implements constrained source reordering on the input lattice and makes use of an outbound distortion model to score the possible reorderings. The phrase table, representing the decoding search space, is encoded as a weighted finite state acceptor which is determinized and minimized. At a high level, the search proceeds by performing simultaneous transitions in two pairs of automata: (input lattice, phrase table FSM) and (phrase table FSM, target language model). An alternative decoding strategy that we explore is to break the search into two independent subproblems: first, we perform monotone lattice decoding to find the best foreign path through the ASR lattice, and then we decode this path with reordering using standard sentence-based SMT. We report experimental results on several test sets of a large-scale Arabic-to-English speech translation task in the context of the Global Autonomous Language Exploitation (GALE) DARPA project. The results indicate that, for monotone search, lattice-based decoding outperforms 1-best decoding, whereas for search with reordering, only the second decoding strategy was found to be superior to 1-best decoding. In both cases, the improvements hold only for shallow lattices.
Citations: 15
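The core of any lattice-based Viterbi search is a best-path dynamic program over a directed acyclic word lattice. A minimal sketch follows; the lattice representation and scores are illustrative assumptions, not the paper's phrase-table machinery:

```python
# Minimal Viterbi best-path search over an ASR word lattice, represented as a
# DAG of (next_node, word, log_prob) edges. The lattice format and scores are
# illustrative assumptions, not the paper's phrase-table machinery.

def topo_sort(lattice, start):
    """Depth-first topological ordering of the lattice nodes."""
    seen, order = set(), []
    def visit(node):
        if node in seen:
            return
        seen.add(node)
        for nxt, _, _ in lattice.get(node, []):
            visit(nxt)
        order.append(node)
    visit(start)
    return list(reversed(order))

def viterbi_best_path(lattice, start, end):
    """Return (best cumulative log-prob, word sequence) from start to end."""
    best = {start: (0.0, [])}
    for node in topo_sort(lattice, start):
        if node not in best:
            continue
        score, words = best[node]
        for nxt, word, logp in lattice.get(node, []):
            cand = (score + logp, words + [word])
            if nxt not in best or cand[0] > best[nxt][0]:
                best[nxt] = cand
    return best[end]

# Toy lattice: two competing hypotheses from node 0 to node 3.
lattice = {
    0: [(1, "recognize", -1.0), (2, "wreck a nice", -1.5)],
    1: [(3, "speech", -0.5)],
    2: [(3, "beach", -0.2)],
}
print(viterbi_best_path(lattice, 0, 3))  # (-1.5, ['recognize', 'speech'])
```

The paper's decoder layers phrase-table and language-model automata on top of this same lattice traversal; the sketch shows only the underlying best-path recursion.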
Efficient use of overlap information in speaker diarization
2007 IEEE Workshop on Automatic Speech Recognition & Understanding (ASRU) Pub Date: 2007-12-01 DOI: 10.1109/ASRU.2007.4430194
Scott Otterson, Mari Ostendorf
Abstract: Speaker overlap in meetings is thought to be a significant contributor to error in speaker diarization, but it is not clear whether overlaps are problematic for speaker clustering and/or whether errors could be addressed by assigning multiple labels in overlap regions. In this paper, we look at these issues experimentally, assuming perfect detection of overlaps, to assess the relative importance of these problems and the potential impact of overlap detection. With our best features, we find that detecting overlaps could potentially improve diarization accuracy by 15% relative, using a simple strategy of assigning speaker labels in overlap regions according to the labels of the neighboring segments. In addition, the use of cross-correlation features with MFCCs reduces the performance gap due to overlaps, so that there is little gain from removing overlapped regions before clustering.
Citations: 43
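The simple strategy described in the abstract, assigning speaker labels in overlap regions according to the labels of the neighboring segments, can be sketched directly. The (start, end, speaker) segment format is an illustrative assumption:

```python
# Sketch of the simple strategy quoted above: a detected overlap region
# inherits the speaker labels of its neighboring segments. The (start, end,
# speaker) segment format is an illustrative assumption.

def label_overlaps(segments, overlaps):
    """segments: non-overlapping (start, end, speaker) tuples, time-sorted.
    overlaps: (start, end) regions flagged by an overlap detector.
    Returns (start, end, {speakers}) with multiple labels per region."""
    labeled = []
    for o_start, o_end in overlaps:
        speakers = set()
        for s_start, s_end, spk in segments:
            # A segment is "neighboring" if it touches the overlap region.
            if s_start <= o_end and s_end >= o_start:
                speakers.add(spk)
        labeled.append((o_start, o_end, speakers))
    return labeled

segs = [(0.0, 5.0, "A"), (5.0, 9.0, "B")]
print(label_overlaps(segs, [(4.5, 5.5)]))  # the boundary region gets both A and B
```

An overlap straddling a segment boundary thus receives both adjacent speaker labels, which is exactly the multi-label assignment the paper evaluates under perfect overlap detection.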
Comparing one and two-stage acoustic modeling in the recognition of emotion in speech
2007 IEEE Workshop on Automatic Speech Recognition & Understanding (ASRU) Pub Date: 2007-12-01 DOI: 10.1109/ASRU.2007.4430180
Björn Schuller, Bogdan Vlasenko, Ricardo Minguez, G. Rigoll, A. Wendemuth
Abstract: In the search for a standard unit for use in recognition of emotion in speech, a whole turn, that is, the full section of speech by one person in a conversation, is common. Within applications such turns often seem favorable. Yet, sub-turn entities are known to be highly effective. In this respect, a two-stage approach is investigated to provide higher temporal resolution: chunking of speech turns according to acoustic properties, and multi-instance learning for turn mapping after individual chunk analysis. For chunking, fast pre-segmentation into emotionally quasi-stationary segments is performed by one-pass Viterbi beam search with token passing based on MFCCs. Chunk analysis is realized by brute-force construction of a large feature space with subsequent subset selection, SVM classification, and speaker normalization. Extensive tests reveal differences compared to one-stage processing. Alternatively, syllables are used for chunking.
Citations: 44
Speech recognition with localized time-frequency pattern detectors
2007 IEEE Workshop on Automatic Speech Recognition & Understanding (ASRU) Pub Date: 2007-12-01 DOI: 10.1109/ASRU.2007.4430135
K. Schutte, James R. Glass
Abstract: A method for acoustic modeling of speech is presented which is based on learning and detecting the occurrence of localized time-frequency patterns in a spectrogram. A boosting algorithm is applied both to build classifiers and to perform feature selection from a large set of features derived by filtering spectrograms. Initial experiments are performed to discriminate digits in the Aurora database. The system succeeds in learning sequences of localized time-frequency patterns which are highly interpretable from an acoustic-phonetic viewpoint. While the work and the results are preliminary, they suggest that pursuing these techniques further could lead to new approaches to acoustic modeling for ASR which are more noise-robust and offer better encoding of temporal dynamics than typical features such as frame-based cepstra.
Citations: 14
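The boosting-plus-feature-selection idea, where each boosting round both strengthens the classifier and selects one feature, can be illustrated with AdaBoost over one-feature decision stumps. The toy data below stands in for the paper's spectrogram filter responses and is purely illustrative:

```python
import math

# Toy AdaBoost with one-feature decision stumps: each boosting round both
# builds a weak classifier and selects one feature, mirroring the
# boosting-plus-feature-selection idea above. The data stands in for
# spectrogram filter responses and is purely illustrative.

def train_stump(X, y, w):
    """Best (error, feature, threshold, sign) stump under example weights w."""
    best = None
    for f in range(len(X[0])):
        for thresh in sorted({x[f] for x in X}):
            for sign in (1, -1):
                preds = [sign if x[f] > thresh else -sign for x in X]
                err = sum(wi for wi, p, yi in zip(w, preds, y) if p != yi)
                if best is None or err < best[0]:
                    best = (err, f, thresh, sign)
    return best

def adaboost(X, y, rounds=3):
    n = len(X)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        err, f, t, sign = train_stump(X, y, w)
        alpha = 0.5 * math.log((1 - err + 1e-10) / (err + 1e-10))
        ensemble.append((alpha, f, t, sign))
        # Re-weight so the next round focuses on the examples this stump missed.
        for i, x in enumerate(X):
            p = sign if x[f] > t else -sign
            w[i] *= math.exp(-alpha * p * y[i])
        z = sum(w)
        w = [wi / z for wi in w]
    return ensemble

def predict(ensemble, x):
    score = sum(a * (s if x[f] > t else -s) for a, f, t, s in ensemble)
    return 1 if score > 0 else -1

X = [[0.0], [1.0], [2.0], [3.0]]
y = [-1, -1, 1, 1]
ens = adaboost(X, y)
print([predict(ens, x) for x in X])  # [-1, -1, 1, 1]
```

Reading off which feature index each round selected is the feature-selection side effect the paper exploits.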
The IBM 2007 speech transcription system for European parliamentary speeches
2007 IEEE Workshop on Automatic Speech Recognition & Understanding (ASRU) Pub Date: 2007-12-01 DOI: 10.1109/ASRU.2007.4430158
B. Ramabhadran, O. Siohan, A. Sethy
Abstract: TC-STAR is a European Union-funded speech-to-speech translation project to transcribe, translate, and synthesize European Parliamentary Plenary Speeches (EPPS). This paper describes IBM's English speech recognition system submitted to the TC-STAR 2007 evaluation. Language model adaptation based on clustering and data selection using relative entropy minimization provided significant gains in the 2007 evaluation. The additional advances over the 2006 system presented in this paper include unsupervised training of acoustic and language models; a system architecture based on cross-adaptation across complementary systems; and system combination through generation of an ensemble of systems using randomized decision-tree state-tying. These advances reduced the error rate by 30% relative over the best-performing system in the TC-STAR 2006 evaluation on the 2006 English development and evaluation test sets, and produced one of the best-performing systems in the 2007 English evaluation, with a word error rate of 7.1%.
Citations: 43
Design and implementation of a robot audition system for automatic speech recognition of simultaneous speech
2007 IEEE Workshop on Automatic Speech Recognition & Understanding (ASRU) Pub Date: 2007-12-01 DOI: 10.1109/ASRU.2007.4430093
S. Yamamoto, K. Nakadai, Mikio Nakano, H. Tsujino, J. Valin, Kazunori Komatani, T. Ogata, HIroshi G. Okuno
Abstract: This paper addresses robot audition that can cope in real time with speech that has a low signal-to-noise ratio (SNR), using robot-embedded microphones. To cope with such noise, we exploit two key ideas: preprocessing, consisting of sound source localization and separation with a microphone array, and system integration based on missing feature theory (MFT). Preprocessing improves the SNR of a target sound signal using geometric source separation with a multichannel post-filter. MFT uses only reliable acoustic features in speech recognition and masks unreliable parts caused by errors in preprocessing. MFT thus provides smooth integration between preprocessing and automatic speech recognition. A real-time robot audition system based on these two key ideas is constructed for Honda ASIMO and Humanoid SIG2 with 8-channel microphone arrays. The paper also reports the improvement of ASR performance when using two and three simultaneous speech signals.
Citations: 24
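The missing-feature-theory scoring idea, leaving unreliable feature dimensions out of a state likelihood so only reliable ones contribute, can be sketched for a diagonal Gaussian. The means, variances, and mask below are illustrative assumptions, not the system's actual models:

```python
import math

# Sketch of missing-feature-theory (MFT) scoring: feature dimensions flagged
# as unreliable are simply left out of a diagonal-Gaussian state likelihood,
# so only reliable dimensions contribute. Means, variances, and the mask are
# illustrative assumptions, not the system's actual models.

def masked_log_likelihood(x, mean, var, reliable):
    """Diagonal-Gaussian log-likelihood over the reliable dimensions only."""
    ll = 0.0
    for xi, mu, v, ok in zip(x, mean, var, reliable):
        if ok:  # skip dimensions corrupted by preprocessing errors
            ll += -0.5 * (math.log(2 * math.pi * v) + (xi - mu) ** 2 / v)
    return ll

x = [1.0, 9.0, 0.5]                 # the second dimension is corrupted
mean, var = [1.0, 0.0, 0.5], [1.0, 1.0, 1.0]
full = masked_log_likelihood(x, mean, var, [True, True, True])
masked = masked_log_likelihood(x, mean, var, [True, False, True])
print(masked > full)  # True: masking the corrupted dimension raises the score
```

In the actual system the reliability mask comes from the preprocessing stage (source separation and post-filtering), not from a hand-set flag as here.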
Robust speech recognition using noise suppression based on multiple composite models and multi-pass search
2007 IEEE Workshop on Automatic Speech Recognition & Understanding (ASRU) Pub Date: 2007-12-01 DOI: 10.1109/ASRU.2007.4430083
T. Jitsuhiro, T. Toriyama, K. Kogure
Abstract: This paper presents robust speech recognition using a noise suppression method based on multi-model compositions and multi-pass search. In real environments, many kinds of noise signals exist, and the input to speech recognition systems includes them. Our task in the E-Nightingale project is speech recognition of voice memoranda spoken by nurses during actual work at hospitals. To obtain good recognition candidates, it is important to suppress many kinds of noise signals at once in order to find the target speech. First, before noise suppression, to find speech and noise label sequences, we introduce multi-pass search with acoustic models that include many kinds of noise models and their compositions, their n-gram models, and their lexicon. Second, model-based noise suppression is performed using the multiple composite models selected by the recognized label sequences with time alignments. We evaluated this approach on the E-Nightingale task, and the proposed method outperformed the conventional method.
Citations: 6