Fine Tuning BERT for Unethical Behavior Classification
Syeda Faizan Fatima, Seemab Latif, R. Latif
2021 International Conference on Digital Futures and Transformative Technologies (ICoDT2), 20 May 2021
DOI: 10.1109/ICoDT252288.2021.9441540
Social media allows people to express themselves; however, it also exposes them to abuse and harassment. This threat harms society: people change their behaviour and stop expressing their ideas freely. Classifying unethical behaviour in comments is a multi-label classification task. Because labelled data for this task are scarce, training from scratch does not yield adequate accuracy, so a large training corpus is needed. This work therefore proposes to supplement the training data through transfer learning: a pre-trained Bidirectional Encoder Representations from Transformers (BERT) model is fine-tuned to detect unethical user behaviour. The approach achieved competitive accuracy on the multi-label classification task over the toxicity dataset of the Wikipedia Comments Corpus.
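The setup the abstract describes — fine-tuning pre-trained BERT as a multi-label classifier over toxicity categories — can be sketched as below. This is a minimal illustration, not the authors' code: the six label names follow the Wikipedia toxic-comments corpus convention, and a tiny randomly-initialised config is used here so the sketch runs without downloading weights (the paper fine-tunes the full pre-trained model, e.g. `bert-base-uncased`).

```python
import torch
from transformers import BertConfig, BertForSequenceClassification

# Assumed label set, following the Wikipedia/Jigsaw toxic-comments convention.
LABELS = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]

# Tiny config so this sketch runs offline; in practice you would load
# pre-trained weights with BertForSequenceClassification.from_pretrained(...).
config = BertConfig(
    hidden_size=64,
    num_hidden_layers=2,
    num_attention_heads=2,
    intermediate_size=128,
    num_labels=len(LABELS),
    problem_type="multi_label_classification",  # BCEWithLogitsLoss internally
)
model = BertForSequenceClassification(config)

# Dummy batch: in practice input_ids come from a BERT tokenizer over comments,
# and labels are multi-hot vectors (float, as required by BCE loss).
input_ids = torch.randint(0, config.vocab_size, (4, 32))
labels = torch.zeros(4, len(LABELS))
labels[0, 0] = 1.0  # e.g. first comment is "toxic" only

out = model(input_ids=input_ids, labels=labels)
out.loss.backward()  # one fine-tuning step would follow with an optimizer

# Multi-label prediction: an independent sigmoid per label, thresholded at 0.5,
# so a comment can belong to several toxicity categories at once.
preds = (torch.sigmoid(out.logits) > 0.5).int()  # shape (batch, num_labels)
```

The key difference from single-label fine-tuning is the `problem_type`: it swaps the usual softmax/cross-entropy head for per-label sigmoid outputs with binary cross-entropy, which is what makes the multi-label formulation work.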