Multi-task CNN for Abusive Language Detection
Qingqing Zhao, Yue Xiao, Yunfei Long
2021 IEEE 2nd International Conference on Pattern Recognition and Machine Learning (PRML), published 2021-07-16
DOI: 10.1109/PRML52754.2021.9520387
Abstract
Abusive language detection helps ensure a positive user experience by maintaining high-quality content. The sub-categories of abusive language are closely related: most aggressive comments also contain personal attacks and toxic content, and vice versa. To exploit this relatedness, we propose a multi-task learning framework that detects different types of abusive content in a mental health forum. Each classification task is treated as a subclass in a multi-class classification problem, with knowledge shared across three related tasks: attack, aggression, and toxicity. Experimental results on three sub-types of the Wikipedia abusive language datasets show that our framework improves the net F1-score by 7.1%, 5.6%, and 2.7% on attack, aggression, and toxicity detection, respectively. Our experiments indicate that the multi-task framework is an effective method for abusive language detection.
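The shared-encoder, per-task-head design described in the abstract can be sketched as follows. This is a minimal illustrative sketch only: the vocabulary size, embedding and filter dimensions, kernel width, and the choice of mean/max pooling are all placeholder assumptions, not the paper's actual configuration, and no training loop is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# All dimensions below are illustrative placeholders, not the paper's values.
VOCAB, EMB, HIDDEN, KERNEL = 1000, 50, 32, 2
TASKS = ["attack", "aggression", "toxicity"]

W_emb = rng.normal(0.0, 0.1, (VOCAB, EMB))            # shared token embeddings
W_conv = rng.normal(0.0, 0.1, (KERNEL, EMB, HIDDEN))  # shared conv filters
W_heads = rng.normal(0.0, 0.1, (len(TASKS), HIDDEN))  # one binary head per task

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(token_ids):
    """Shared CNN encoder followed by one binary head per abuse sub-task."""
    x = W_emb[token_ids]                               # (seq_len, EMB)
    # 1-D convolution over the token dimension, then max-over-time pooling.
    conv = np.stack([
        np.tensordot(x[i:i + KERNEL], W_conv, axes=([0, 1], [0, 1]))
        for i in range(len(token_ids) - KERNEL + 1)
    ])                                                 # (positions, HIDDEN)
    shared = np.tanh(conv.max(axis=0))                 # shared representation
    return sigmoid(W_heads @ shared)                   # one probability per task

# A toy comment of four (hypothetical) token ids.
probs = forward(np.array([5, 42, 7, 300]))
print(dict(zip(TASKS, probs.round(3))))
```

Because the encoder parameters (`W_emb`, `W_conv`) are shared, a gradient step on any one task's loss would also update the representation used by the other two, which is the mechanism by which multi-task learning lets the related labels inform each other.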