Language Identification of Hindi-English tweets using code-mixed BERT

M. Z. Ansari, M. Beg, Tanvir Ahmad, Mohd Jazib Khan, Ghazali Wasim
{"title":"Language Identification of Hindi-English tweets using code-mixed BERT","authors":"M. Z. Ansari, M. Beg, Tanvir Ahmad, Mohd Jazib Khan, Ghazali Wasim","doi":"10.1109/ICCICC53683.2021.9811292","DOIUrl":null,"url":null,"abstract":"Language identification of social media text has been an interesting problem of study in recent years. Social media messages are predominantly in code mixed in non-English speaking states. Prior knowledge by pre-training contextual embeddings have shown state of the art results for a range of downstream tasks. Recently, models such as Bidirectional Encoder Representations from Transformers (BERT) have shown that using a large amount of unlabeled data, the pre-trained language models are even more beneficial for learning common language representations. Extensive experiments exploiting transfer learning and fine-tuning BERT models to identify language on Twitter are presented in this paper. The work utilizes a data collection of Hindi-English-Urdu code-mixed text for language pre-training and Hindi-English code-mixed for subsequent word-level language classification. The results show that the representations pre-trained over code-mixed data produce better results by their monolingual counterpart.","PeriodicalId":101653,"journal":{"name":"2021 IEEE 20th International Conference on Cognitive Informatics & Cognitive Computing (ICCI*CC)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"10","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE 20th International Conference on Cognitive Informatics & Cognitive Computing (ICCI*CC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCICC53683.2021.9811292","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 10

Abstract

Language identification of social media text has been an interesting problem of study in recent years. Social media messages are predominantly code-mixed in non-English-speaking regions. Prior knowledge acquired by pre-training contextual embeddings has produced state-of-the-art results for a range of downstream tasks. Recently, models such as Bidirectional Encoder Representations from Transformers (BERT) have shown that, given a large amount of unlabeled data, pre-trained language models are even more beneficial for learning common language representations. This paper presents extensive experiments exploiting transfer learning and fine-tuning of BERT models to identify language on Twitter. The work utilizes a collection of Hindi-English-Urdu code-mixed text for language pre-training and Hindi-English code-mixed text for subsequent word-level language classification. The results show that representations pre-trained over code-mixed data produce better results than their monolingual counterparts.
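To make the fine-tuning step concrete, below is a minimal sketch of word-level language identification with a BERT token-classification head via the HuggingFace Transformers API. The checkpoint name, label set, and example tweet are illustrative assumptions, not the authors' artifacts; the paper's pre-training on unlabeled Hindi-English-Urdu code-mixed text (a continued masked-language-modeling step) is not shown, and the classification head here is randomly initialized until fine-tuned on word-labeled data.

```python
# Sketch: word-level language ID over a code-mixed tweet with a BERT
# token-classification head. Checkpoint, labels, and the sample tweet
# are assumptions for illustration only.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Hypothetical label set for word-level language ID in Hindi-English tweets.
LABELS = ["hi", "en", "other"]

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=len(LABELS)
)

# A code-mixed tweet pre-split into words; is_split_into_words=True keeps
# word boundaries so each word can receive one language label.
words = ["yaar", "this", "movie", "was", "bahut", "achhi"]
enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**enc).logits  # shape: (1, seq_len, num_labels)

# Map subword predictions back to words by taking each word's first subword.
# Note: without fine-tuning, these predictions are effectively random.
pred_ids = logits.argmax(dim=-1)[0].tolist()
word_ids = enc.word_ids(0)  # token index -> word index (None for specials)
seen = set()
for idx, wid in enumerate(word_ids):
    if wid is not None and wid not in seen:
        seen.add(wid)
        print(words[wid], "->", LABELS[pred_ids[idx]])
```

In a full fine-tuning run, the same word-to-subword alignment would be used to assign each word's gold label to its first subword (masking the rest from the loss) before training with a standard cross-entropy objective.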