Identifying Text Classification Failures in Multilingual AI-Generated Content

Raghav Subramaniam
International Journal of Artificial Intelligence & Applications, published 2023-09-28. DOI: https://doi.org/10.5121/ijaia.2023.14505

Abstract

With the rising popularity of generative AI tools, the nature of apparent classification failures by AI content detection software, especially across different languages, must be further observed. This paper aims to do so by testing OpenAI's "AI Text Classifier" on a set of human- and AI-generated texts in English, German, Arabic, Hindi, Chinese, and Swahili. Given the unreliability of existing tools for detecting AI-generated text, it is notable that specific types of classification failures persist in slightly different ways across languages: misclassification of human-written content as "AI-generated," and vice versa, may occur more frequently for content in some languages than in others. Our findings indicate that false negative labelings are more likely to occur in English, whereas false positives are more likely to occur in Hindi and Arabic. Texts in the other languages tended not to be confidently labeled at all.
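The per-language comparison described above can be framed as a confusion-matrix tally over classifier verdicts. The sketch below is illustrative only: the `error_rates` helper and the sample records are assumptions for demonstration, not the paper's actual data or code. A false positive here means human text labeled "AI-generated"; a false negative means AI text labeled "human".

```python
# Hedged sketch: tallying per-language false-positive / false-negative
# rates from (language, true label, predicted label) records.
# The sample data below is purely illustrative, not the paper's dataset.
from collections import defaultdict

def error_rates(records):
    """Return {language: {"fpr": ..., "fnr": ...}} from an iterable of
    (language, truth, prediction) triples, labels 'human' or 'ai'."""
    counts = defaultdict(lambda: {"fp": 0, "fn": 0, "human": 0, "ai": 0})
    for lang, truth, pred in records:
        c = counts[lang]
        c[truth] += 1                       # denominator for that class
        if truth == "human" and pred == "ai":
            c["fp"] += 1                    # human text flagged as AI
        elif truth == "ai" and pred == "human":
            c["fn"] += 1                    # AI text missed
    return {
        lang: {
            "fpr": c["fp"] / c["human"] if c["human"] else 0.0,
            "fnr": c["fn"] / c["ai"] if c["ai"] else 0.0,
        }
        for lang, c in counts.items()
    }

# Illustrative records only (hypothetical, not the paper's results):
sample = [
    ("English", "ai", "human"),    # false negative
    ("English", "human", "human"),
    ("Hindi", "human", "ai"),      # false positive
    ("Hindi", "ai", "ai"),
]
rates = error_rates(sample)
print(rates["English"]["fnr"])  # 1.0 on this toy sample
print(rates["Hindi"]["fpr"])    # 1.0 on this toy sample
```

A "not confidently labeled" verdict (as observed for some languages) could be handled by adding an `unknown` prediction class that increments neither error counter.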