Workshop on Spoken Language Technologies for Under-resourced Languages: Latest Publications

A Corpus of the Sorani Kurdish Folkloric Lyrics
Workshop on Spoken Language Technologies for Under-resourced Languages. Pub Date: 2020-05-16. DOI: 10.13025/1YDH-EW61
Sina Ahmadi, Hossein Hassani, K. Abedi
Kurdish poetry and prose narratives were historically transmitted orally, and to a lesser extent in written form. As an essential medium of oral narration and literature, Kurdish lyrics have become a vital resource for several fields of study, including Digital Humanities, Computational Folkloristics and Computational Linguistics. As an initial study of its kind for the Kurdish language, this paper presents our efforts in transcribing and collecting Kurdish folk lyrics as a corpus that covers various Kurdish musical genres, in particular Beyt, Gorani, Bend and Heyran. We believe that this corpus contributes to Kurdish language processing in several ways: it compensates for the lack of a long written tradition by incorporating oral literature, it opens an unexplored realm in Kurdish language processing, and it helps initiate Kurdish computational folkloristics. The corpus contains 49,582 tokens in the Sorani dialect of Kurdish and is publicly available in the Text Encoding Initiative (TEI) format for non-commercial use.
Citations: 8
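Since the corpus is distributed as TEI XML, a token count like the one quoted above can be reproduced with the standard library alone. The sketch below is illustrative only: the file name and the assumption that lyric lines sit in TEI `<l>` elements are hypothetical, not the authors' published schema.

```python
# Minimal sketch of reading a TEI-encoded lyrics file and counting tokens.
# The file name and the <l>-element layout are assumptions for illustration;
# the released corpus may use a different TEI structure.
import re
import xml.etree.ElementTree as ET

TEI_NS = {"tei": "http://www.tei-c.org/ns/1.0"}  # standard TEI namespace

def count_tokens(tei_path: str) -> int:
    root = ET.parse(tei_path).getroot()
    # Collect the text of every verse line (<l>) in the body.
    lines = [el.text or "" for el in root.iterfind(".//tei:body//tei:l", TEI_NS)]
    # Simple tokenization on runs of word characters.
    tokens = [tok for line in lines for tok in re.findall(r"\w+", line)]
    return len(tokens)

if __name__ == "__main__":
    print(count_tokens("sorani_lyrics.tei.xml"))  # hypothetical file name
```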
A Sentiment Analysis Dataset for Code-Mixed Malayalam-English
Workshop on Spoken Language Technologies for Under-resourced Languages. Pub Date: 2020-05-11. DOI: 10.5281/ZENODO.4015234
Bharathi Raja Chakravarthi, Navya Jose, Shardul Suryawanshi, E. Sherly, John P. McCrae
There is an increasing demand for sentiment analysis of social media text, which is mostly code-mixed. Systems trained on monolingual data fail on code-mixed data because of the complexity of mixing at different levels of the text, yet very few resources are available for building models specific to such data. Although much research in multilingual and cross-lingual sentiment analysis has used semi-supervised or unsupervised methods, supervised methods still perform better. Only a few datasets for popular language pairs such as English-Spanish, English-Hindi and English-Chinese are available, and no resources exist for Malayalam-English code-mixed data. This paper presents a new gold-standard corpus for sentiment analysis of code-mixed Malayalam-English text, annotated by voluntary annotators. The corpus obtained a Krippendorff's alpha above 0.8, and we use it to provide a benchmark for sentiment analysis of Malayalam-English code-mixed texts.
Citations: 191
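The agreement figure quoted above can be illustrated with a nominal-scale Krippendorff's alpha computed from an annotator-by-item label matrix. The sketch below is a self-contained implementation of the standard coincidence-matrix formula; the toy labels are made up, not the paper's data.

```python
# Sketch of nominal Krippendorff's alpha over an annotators x items matrix.
# None marks a missing judgement; the label matrix below is hypothetical.
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(data):
    coincidence = Counter()
    for item in zip(*data):                      # one column per item
        values = [v for v in item if v is not None]
        m = len(values)
        if m < 2:
            continue                             # items with <2 labels carry no information
        for a, b in permutations(values, 2):     # ordered pairs within the item
            coincidence[(a, b)] += 1.0 / (m - 1)
    n_c = Counter()
    for (a, _b), w in coincidence.items():
        n_c[a] += w
    n = sum(n_c.values())
    observed = sum(w for (a, b), w in coincidence.items() if a != b)
    expected = sum(n_c[a] * n_c[b] for a in n_c for b in n_c if a != b) / (n - 1)
    return 1.0 - observed / expected

# Three hypothetical annotators labelling five comments.
annotations = [
    ["pos", "neg", "neu", "pos", None],
    ["pos", "neg", "neu", "pos", "neg"],
    ["pos", "neg", "pos", "pos", "neg"],
]
print(round(krippendorff_alpha_nominal(annotations), 3))
```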
Corpus Creation for Sentiment Analysis in Code-Mixed Tamil-English Text
Workshop on Spoken Language Technologies for Under-resourced Languages. Pub Date: 2020-05-11. DOI: 10.5281/ZENODO.4015253
Bharathi Raja Chakravarthi, V. Muralidaran, R. Priyadharshini, John P. McCrae
Understanding the sentiment of a comment from a video or an image is an essential task in many applications, and sentiment analysis of text can be useful for various decision-making processes. One such application is to analyse the popular sentiment of videos on social media based on viewer comments. However, comments from social media do not follow strict rules of grammar, and they mix more than one language, often written in non-native scripts. The non-availability of annotated code-mixed data for a low-resourced language like Tamil adds further difficulty. To overcome this, we created a gold-standard Tamil-English code-switched, sentiment-annotated corpus containing 15,744 comment posts from YouTube. In this paper, we describe the process of creating the corpus and assigning polarities, present inter-annotator agreement, and report the results of sentiment analysis models trained on this corpus as a benchmark.
Citations: 242
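A simple baseline of the kind typically used to benchmark such corpora can be built with scikit-learn. The sketch below is not the paper's benchmark system; the TSV path and column names are assumptions for illustration, and character n-grams are chosen only because they cope reasonably with code-mixing and non-native scripts.

```python
# Sketch of a TF-IDF + logistic-regression sentiment baseline on a
# code-mixed corpus; file name and column names are hypothetical.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

df = pd.read_csv("tamil_english_sentiment.tsv", sep="\t")   # hypothetical file
X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["label"], test_size=0.2, random_state=42, stratify=df["label"])

# Character n-grams handle mixed scripts better than word tokens alone.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```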
Automatic Detection of Palatalized Consonants in Kashmiri
Workshop on Spoken Language Technologies for Under-resourced Languages. Pub Date: 2018-08-29. DOI: 10.21437/SLTU.2018-25
Ramakrishna Thirumuru, K. Gurugubelli, A. Vuppala
This study investigates the acoustic-phonetic attributes of palatalization in Kashmiri speech, a phonetic feature that is unique to Kashmiri in the Indian context, and proposes an automated approach to detect it in continuous speech. The i-matra vowel palatalizes the consonant attached to it, so candidate consonants are examined in synchrony with vowel regions, which are spotted using the instantaneous energy computed from the envelope-derivative of the speech signal. The resonance characteristics of the vocal tract, reflected in formant dynamics, are used to distinguish palatalized consonants from other consonants; formants are extracted from the Hilbert envelope of the numerator of the group-delay function, which provides good time-frequency resolution. Palatalization detection experiments in various vowel contexts using these acoustic cues produced promising results, with a detection accuracy of 92.46%.
Citations: 1
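As a rough sketch of only the first stage described above (locating high-energy, vowel-like regions from an amplitude envelope), the snippet below thresholds the smoothed Hilbert-envelope energy of a recording. It is not the authors' envelope-derivative method or the group-delay formant analysis; the file name, window length and threshold are illustrative assumptions.

```python
# Minimal sketch: flag vowel-like regions from smoothed Hilbert-envelope energy.
# This approximates vowel-region spotting only; it is not the paper's pipeline.
import numpy as np
from scipy.io import wavfile
from scipy.signal import hilbert

rate, speech = wavfile.read("utterance.wav")        # hypothetical mono recording
speech = speech.astype(np.float64)
speech /= np.max(np.abs(speech)) + 1e-12

envelope = np.abs(hilbert(speech))                  # analytic-signal (Hilbert) envelope

# Short-time energy of the envelope with a 25 ms moving average.
win = int(0.025 * rate)
energy = np.convolve(envelope ** 2, np.ones(win) / win, mode="same")

# Samples whose smoothed energy exceeds a fraction of the peak are
# treated as candidate vowel regions.
threshold = 0.2 * energy.max()
vowel_mask = energy > threshold
print(f"{vowel_mask.mean():.1%} of samples flagged as vowel-like")
```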
Text Normalization for Bangla, Khmer, Nepali, Javanese, Sinhala and Sundanese Text-to-Speech Systems
Workshop on Spoken Language Technologies for Under-resourced Languages. Pub Date: 2018-08-29. DOI: 10.21437/SLTU.2018-31
Keshan Sanjaya Sodimana, Pasindu De Silva, R. Sproat, T. Wattanavekin, Alexander Gutkin, Knot Pipatsrisawat
Text normalization is the process of converting non-standard words (NSWs) such as numbers and abbreviations into standard words so that their pronunciations can be derived by typical means (usually lexicon lookups). Text normalization is thus an important component of any text-to-speech (TTS) system; without it, the resulting voice may sound unintelligent. In this paper, we describe an approach to developing rule-based text normalization. We also describe our open-source repository containing text normalization grammars and tests for Bangla, Javanese, Khmer, Nepali, Sinhala and Sundanese. Finally, we present a recipe for utilizing the grammars in a TTS system.
Citations: 6
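The core of a rule-based normalizer is a cascade of pattern-to-verbalization rewrites applied before lexicon lookup. The sketch below shows that mechanism with toy English rules for small cardinals and one abbreviation; it is illustrative only and is not taken from the released grammars for the six languages above.

```python
# Toy sketch of rule-based text normalization: rewrite non-standard words
# (small cardinal numbers, one abbreviation) into spoken forms.
# The rules are illustrative English ones, not the paper's grammars.
import re

ONES = ["zero", "one", "two", "three", "four", "five", "six", "seven", "eight", "nine"]
TENS = {10: "ten", 20: "twenty", 30: "thirty", 40: "forty", 50: "fifty",
        60: "sixty", 70: "seventy", 80: "eighty", 90: "ninety"}
TEENS = {11: "eleven", 12: "twelve", 13: "thirteen", 14: "fourteen", 15: "fifteen",
         16: "sixteen", 17: "seventeen", 18: "eighteen", 19: "nineteen"}

def number_to_words(n: int) -> str:
    if n < 10:
        return ONES[n]
    if n in TEENS:
        return TEENS[n]
    if n < 100:
        tens, ones = divmod(n, 10)
        word = TENS[tens * 10]
        return word if ones == 0 else f"{word} {ONES[ones]}"
    raise ValueError("sketch only handles 0-99")

RULES = [
    (re.compile(r"\b(\d{1,2})\b"), lambda m: number_to_words(int(m.group(1)))),
    (re.compile(r"\bDr\."), lambda m: "doctor"),
]

def normalize(text: str) -> str:
    for pattern, verbalize in RULES:
        text = pattern.sub(verbalize, text)
    return text

print(normalize("Dr. Silva lives at 42 Lake Road"))
# -> "doctor Silva lives at forty two Lake Road"
```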
Investigating the Use of Mixed-Units Based Modeling for Improving Uyghur Speech Recognition
Workshop on Spoken Language Technologies for Under-resourced Languages. Pub Date: 2018-08-29. DOI: 10.21437/SLTU.2018-45
Pengfei Hu, Shen Huang, Zhiqiang Lv
Uyghur is a highly agglutinative language with a large number of words derived from the same root. For such languages, the use of subwords in speech recognition becomes a natural choice, as it addresses the OOV issue. However, short units in subword modeling weaken the constraint of linguistic context, and vowel weakening and reduction, which occur frequently in Uyghur, can lead to high deletion errors when recognizing short unit sequences. In this paper, we investigate using mixed units in Uyghur speech recognition: subwords and whole words are combined to build a hybrid lexicon and language model, and an interpolated LM is introduced to further improve performance. Experimental results show that mixed-unit based modeling outperforms word- or subword-based modeling, with about a 10% relative reduction in Word Error Rate and an 8% reduction in Character Error Rate on the test datasets compared with the baseline system.
Citations: 0
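The Word Error Rate and Character Error Rate quoted above are Levenshtein edit distances normalized by reference length, computed over words and characters respectively. A minimal sketch (with made-up reference and hypothesis strings) follows:

```python
# Sketch of WER/CER as edit distance over tokens; strings are illustrative.
def edit_distance(ref, hyp):
    # Two-row dynamic-programming Levenshtein distance over sequences.
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i] + [0] * len(hyp)
        for j, h in enumerate(hyp, 1):
            cur[j] = min(prev[j] + 1,              # deletion
                         cur[j - 1] + 1,           # insertion
                         prev[j - 1] + (r != h))   # substitution or match
        prev = cur
    return prev[len(hyp)]

def wer(reference: str, hypothesis: str) -> float:
    ref_words = reference.split()
    return edit_distance(ref_words, hypothesis.split()) / len(ref_words)

def cer(reference: str, hypothesis: str) -> float:
    ref_chars = list(reference.replace(" ", ""))
    return edit_distance(ref_chars, list(hypothesis.replace(" ", ""))) / len(ref_chars)

ref = "the cat sat on the mat"
hyp = "the cat sat on mat"
print(f"WER={wer(ref, hyp):.2%}  CER={cer(ref, hyp):.2%}")
```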
Language Identification of Assamese, Bengali and English Speech
Workshop on Spoken Language Technologies for Under-resourced Languages. Pub Date: 2018-08-29. DOI: 10.21437/SLTU.2018-37
Joyshree Chakraborty, Shikhamoni Nath, R. NirmalaS., K. Samudravijaya
Machine identification of the language of input speech is of practical interest in regions where people are bilingual or multilingual. Here, we present the development of an automatic language identification system that identifies the language of input speech as Assamese, Bengali or English. The speech databases comprise sentences read by multiple speakers using their mobile phones. The Kaldi toolkit was used to train acoustic models based on hidden Markov models in conjunction with Gaussian mixture models and deep neural networks. The accuracy of the implemented language identification system on test data is 99.3%.
Citations: 3
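For readers without a Kaldi setup, a much simpler GMM-over-MFCC baseline conveys the general idea of acoustic language identification. The sketch below is not the authors' Kaldi recipe: it fits one Gaussian mixture per language on pooled MFCC frames and picks the highest-likelihood model, and the file lists and mixture size are hypothetical.

```python
# Minimal sketch of a GMM-over-MFCC language-identification baseline.
# One GaussianMixture per language is fit on pooled MFCC frames; a test
# utterance is assigned to the model with the highest average log-likelihood.
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def mfcc_frames(path):
    y, sr = librosa.load(path, sr=16000)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T   # shape (frames, 13)

train_files = {                                  # hypothetical file lists
    "assamese": ["as_001.wav", "as_002.wav"],
    "bengali":  ["bn_001.wav", "bn_002.wav"],
    "english":  ["en_001.wav", "en_002.wav"],
}

models = {}
for lang, files in train_files.items():
    frames = np.vstack([mfcc_frames(f) for f in files])
    models[lang] = GaussianMixture(n_components=32, covariance_type="diag",
                                   max_iter=200, random_state=0).fit(frames)

def identify(path):
    frames = mfcc_frames(path)
    scores = {lang: gmm.score(frames) for lang, gmm in models.items()}
    return max(scores, key=scores.get)

print(identify("unknown_utterance.wav"))
```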
Dialect Identification Using Tonal and Spectral Features in Two Dialects of Ao
Workshop on Spoken Language Technologies for Under-resourced Languages. Pub Date: 2018-08-29. DOI: 10.21437/SLTU.2018-29
Moakala Tzudir, Priyankoo Sarmah, S. Prasanna
Ao is an under-resourced Tibeto-Burman tone language spoken in Nagaland, India, with three lexical tones: high, mid and low. The language has three dialects, namely Chungli, Mongsen and Changki, which differ in the tone assignment of lexical words. This work investigates whether this idiosyncratic tone assignment can be exploited to identify two Ao dialects, Changki and Mongsen. A perception test confirmed that Ao speakers identify the two dialects from their dialect-specific tone assignment. To confirm that tone is the primary cue, F0 was neutralized in the speech data before it was passed to a Gaussian Mixture Model (GMM) based dialect identification system; the resulting low recognition accuracy confirmed the significance of tones in Ao dialect identification. Finally, a GMM-based dialect identification system built with tonal and spectral features achieved better dialect recognition accuracy.
Citations: 7
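As a small illustration of what "tonal features" can mean in practice, the sketch below extracts coarse F0-contour statistics per utterance with librosa's pYIN pitch tracker; such vectors could feed a GMM classifier of the kind described above. The pYIN settings, the chosen statistics and the file name are assumptions, not the paper's feature set.

```python
# Sketch of extracting simple tonal (F0) statistics for one utterance.
# The settings and the choice of statistics are illustrative only.
import numpy as np
import librosa

def tonal_features(path):
    y, sr = librosa.load(path, sr=16000)
    f0, voiced_flag, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr)
    f0 = f0[voiced_flag & ~np.isnan(f0)]           # keep voiced, tracked frames
    log_f0 = np.log(f0)
    # Mean, spread and range of log-F0 summarize the pitch contour coarsely.
    return np.array([log_f0.mean(), log_f0.std(), log_f0.max() - log_f0.min()])

print(tonal_features("changki_utt_01.wav"))        # hypothetical recording
```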
Sinhala G2P Conversion for Speech Processing
Workshop on Spoken Language Technologies for Under-resourced Languages. Pub Date: 2018-08-29. DOI: 10.21437/SLTU.2018-24
Thilini Nadungodage, Chamila Liyanage, Amathri Prerera, Randil Pushpananda, R. Weerasinghe
Grapheme-to-phoneme (G2P) conversion plays an important role in speech processing applications and other fields of computational linguistics. Sinhala needs grapheme-to-phoneme conversion for speech processing because the Sinhala writing system does not always reflect actual pronunciation. This paper describes a rule-based G2P conversion method that converts Sinhala text strings into phonemic representations. We use a previously defined rule set and enhance it to obtain a more accurate G2P conversion. The performance of our system shows that rule-based sound patterns are effective for Sinhala G2P conversion.
Citations: 5
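The mechanism behind such rule-based converters is a left-to-right, longest-match rewrite of graphemes into phonemes. The sketch below demonstrates that mechanism with a toy romanized rule table; the rules are invented for illustration and are not the paper's Sinhala rule set.

```python
# Minimal sketch of longest-match rule-based G2P with a toy rule table.
RULES = {          # grapheme -> phoneme (illustrative, not Sinhala rules)
    "th": "t̪",
    "sh": "ʃ",
    "aa": "aː",
    "a": "ə",      # unmarked 'a' defaults to schwa
    "k": "k",
    "t": "ʈ",
    "s": "s",
}

def g2p(word: str) -> list[str]:
    phonemes, i = [], 0
    max_len = max(len(g) for g in RULES)
    while i < len(word):
        for length in range(max_len, 0, -1):       # try the longest grapheme first
            chunk = word[i:i + length]
            if chunk in RULES:
                phonemes.append(RULES[chunk])
                i += length
                break
        else:
            phonemes.append(word[i])               # pass unknown symbols through
            i += 1
    return phonemes

print(g2p("kathaa"))   # -> ['k', 'ə', 't̪', 'aː']
```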
Acoustic Characteristics of Schwa Vowel in Punjabi
Workshop on Spoken Language Technologies for Under-resourced Languages. Pub Date: 2018-08-29. DOI: 10.21437/SLTU.2018-18
Swaran Lata, Prashant Verma, S. Kaur
(No abstract available.)
Citations: 0