{"title":"Identifying Text Classification Failures in Multilingual AI-Generated Content","authors":"Raghav Subramaniam","doi":"10.5121/ijaia.2023.14505","DOIUrl":null,"url":null,"abstract":"With the rising popularity of generative AI tools, the nature of apparent classification failures by AI content detection softwares, especially between different languages, must be further observed. This paper aims to do this through testing OpenAI’s “AI Text Classifier” on a set of human and AI-generated texts inEnglish, German, Arabic, Hindi, Chinese, and Swahili. Given the unreliability of existing tools for detection of AIgenerated text, it is notable that specific types of classification failures often persist in slightly different ways when various languages are observed: misclassification of human-written content as “AI-generated” and vice versa may occur more frequently in specific language content than others. Our findings indicate that false negative labelings are more likely to occur in English, whereas false positives are more likely to occur in Hindi and Arabic. There was an observed tendency for other languages to not be confidently labeled at all.","PeriodicalId":93188,"journal":{"name":"International journal of artificial intelligence & applications","volume":"19 4 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International journal of artificial intelligence & applications","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.5121/ijaia.2023.14505","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
With the rising popularity of generative AI tools, the apparent classification failures of AI content detection software, especially across different languages, must be examined more closely. This paper does so by testing OpenAI's "AI Text Classifier" on a set of human- and AI-generated texts in English, German, Arabic, Hindi, Chinese, and Swahili. Given the unreliability of existing tools for detecting AI-generated text, it is notable that specific types of classification failures persist in slightly different ways across languages: misclassification of human-written content as "AI-generated", and vice versa, may occur more frequently in some languages than in others. Our findings indicate that false negatives are more likely to occur in English, whereas false positives are more likely in Hindi and Arabic. Texts in the other languages tended not to be confidently labeled at all.
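The paper does not include an evaluation script, but the per-language error analysis it describes can be illustrated with a minimal sketch. The sketch below assumes a simplified three-bucket output ("likely AI", "unlikely AI", "unclear") standing in for the classifier's confidence labels, and the `records` entries are invented placeholders for illustration only, not the paper's data or results.

```python
from collections import defaultdict

# Hypothetical evaluation records: (language, true_source, classifier_label).
# true_source is "human" or "ai"; classifier_label is a simplified stand-in
# for the confidence buckets the AI Text Classifier reported.
records = [
    ("English", "ai",    "unlikely AI"),   # false negative
    ("English", "human", "unlikely AI"),   # correct
    ("Hindi",   "human", "likely AI"),     # false positive
    ("Arabic",  "human", "likely AI"),     # false positive
    ("Swahili", "ai",    "unclear"),       # no confident label
    ("Chinese", "human", "unclear"),       # no confident label
]

def tally(records):
    """Count false positives, false negatives, and unconfident labels per language."""
    counts = defaultdict(lambda: {"fp": 0, "fn": 0, "unclear": 0, "total": 0})
    for language, source, label in records:
        c = counts[language]
        c["total"] += 1
        if label == "unclear":
            c["unclear"] += 1          # classifier declined to commit
        elif source == "human" and label == "likely AI":
            c["fp"] += 1               # human text misread as AI-generated
        elif source == "ai" and label == "unlikely AI":
            c["fn"] += 1               # AI text misread as human-written
    return counts

for language, c in sorted(tally(records).items()):
    print(f"{language}: FP={c['fp']} FN={c['fn']} unclear={c['unclear']} (n={c['total']})")
```

Tracking "unclear" outputs as a separate category, rather than folding them into errors, is what allows the third pattern the abstract reports (languages that are never confidently labeled) to be distinguished from outright misclassification.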