Punctuation and case restoration in code mixed Indian languages

S. Tripathy, A. Samal
{"title":"Punctuation and case restoration in code mixed Indian languages","authors":"S. Tripathy, A. Samal","doi":"10.18653/v1/2022.umios-1.9","DOIUrl":null,"url":null,"abstract":"Automatic Speech Recognition (ASR) systems are taking over in different industries starting from producing video subtitles to interactive digital assistants. ASR output can be used in automatic indexing, categorizing, searching along with normal human readability. Raw transcripts from ASR systems are difficult to interpret since it usually produces text without punctuation and case information (all lower, all upper, camel case etc.), thus limiting the performance of downstream NLP tasks. We proposed an approach to restore the punctuation and case for both English and Hinglish (i.e Hindi vocabulary in Latin script) languages. We have performed a classification task using encoder-based transformers which is a mini BERT consisting of 4 encoder layers for punctuation and case restoration instead of the traditional Seq2Seq model considering the latency constraint in real world use cases. It consists of a total number of 15 distinct classes for the model which includes 5 punctuations i.e Period(.), Comma(,), Single Quote(‘), Double Quote(”) & Question Mark(?) with different combinations of casing. The model is benchmarked on an internal dataset which was based on user conversation with the voice assistant and it achieves a F1(macro) score of 91.52% on the test set.","PeriodicalId":360854,"journal":{"name":"Proceedings of the Workshop on Unimodal and Multimodal Induction of Linguistic Structures (UM-IoS)","volume":"44 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the Workshop on Unimodal and Multimodal Induction of Linguistic Structures (UM-IoS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.18653/v1/2022.umios-1.9","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Automatic Speech Recognition (ASR) systems are being adopted across industries, from producing video subtitles to powering interactive digital assistants. ASR output can be used for automatic indexing, categorization, and search, in addition to normal human reading. Raw transcripts from ASR systems are difficult to interpret because they usually contain no punctuation or case information (all lower case, all upper case, camel case, etc.), which limits the performance of downstream NLP tasks. We propose an approach to restore punctuation and case for both English and Hinglish (i.e., Hindi vocabulary written in Latin script). Considering the latency constraints of real-world use cases, we frame punctuation and case restoration as a classification task using an encoder-only transformer, a mini BERT with 4 encoder layers, instead of the traditional Seq2Seq model. The model predicts 15 distinct classes, covering 5 punctuation marks, i.e., period (.), comma (,), single quote (‘), double quote (”), and question mark (?), in different combinations with casing. The model is benchmarked on an internal dataset built from user conversations with a voice assistant and achieves a macro F1 score of 91.52% on the test set.
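To make the setup concrete, the sketch below shows how punctuation and case restoration can be framed as per-token classification with a small 4-layer BERT-style encoder, roughly as the abstract describes. The label inventory, model dimensions, and the pre-trained tokenizer are illustrative assumptions, not the authors' released configuration.

```python
# Minimal sketch (not the authors' code): punctuation and case restoration
# as token classification with a small 4-layer BERT-style encoder.
# Label names and model dimensions below are illustrative assumptions.
import torch
from transformers import BertConfig, BertForTokenClassification, BertTokenizerFast

# Hypothetical label set: punctuation following a token x casing of the token.
# The paper reports 15 classes in total; the exact inventory is assumed here.
LABELS = ["O_lower", "O_upper_init", "PERIOD_lower", "PERIOD_upper_init",
          "COMMA_lower", "QUESTION_lower"]  # truncated for brevity

config = BertConfig(
    vocab_size=30522,
    hidden_size=256,          # assumed "mini BERT" width
    num_hidden_layers=4,      # 4 encoder layers, as described in the abstract
    num_attention_heads=4,
    num_labels=len(LABELS),
)
model = BertForTokenClassification(config)
tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")

# A lowercase, unpunctuated ASR-style input (mixed Hinglish and English).
text = "kya haal hai how are you doing today"
enc = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**enc).logits          # shape: (1, seq_len, num_labels)
pred_ids = logits.argmax(dim=-1)[0]

# Each sub-word token gets a punctuation+case class; a post-processing step
# would then re-case the word and append the predicted punctuation mark.
for tok, pid in zip(tokenizer.convert_ids_to_tokens(enc["input_ids"][0]), pred_ids):
    print(tok, LABELS[pid.item()])
```

In practice such a model would be fine-tuned on punctuated, correctly cased text whose labels are derived automatically, then applied to raw ASR output; the untrained weights above only illustrate the input/output shapes.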