Abusive Language Detection on Social Media using Bidirectional Long-Short Term Memory
Ali Salehgohari, M. Mirhosseini, Hamed Tabrizchi, A. V. Koczy
2022 IEEE 26th International Conference on Intelligent Engineering Systems (INES), published 2022-08-12. DOI: 10.1109/INES56734.2022.9922628
Social media allows anybody to share their opinions and engage with the general public, but it has also become a platform for harsh language, cruel conduct, personal attacks, and cyberbullying. Determining whether a comment or post is abusive remains difficult and time-consuming, and most social media companies are continually seeking better ways to do so. Automating this task can help detect nasty comments, promote user safety, protect websites, and improve online dialogue. In this research, the toxic comment dataset is used to train a deep learning model that categorizes comments into the following classes: severe toxic, toxic, threat, obscene, insult, and identity hate. A bidirectional long short-term memory (Bi-LSTM) network is used to categorize the comments.
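The abstract does not give implementation details, but a minimal sketch of a Bi-LSTM multi-label classifier of the kind described might look like the following (TensorFlow/Keras). The vocabulary size, sequence length, layer sizes, and training setup are illustrative assumptions, not values reported by the authors.

```python
# Sketch of a Bi-LSTM multi-label toxic comment classifier.
# Hyperparameters here are assumptions for illustration, not the paper's values.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

LABELS = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]
VOCAB_SIZE = 20_000   # assumed vocabulary size
MAX_LEN = 200         # assumed maximum comment length in tokens

def build_model() -> tf.keras.Model:
    model = models.Sequential([
        layers.Input(shape=(MAX_LEN,)),
        layers.Embedding(VOCAB_SIZE, 128),        # learn token embeddings
        layers.Bidirectional(layers.LSTM(64)),    # read the comment in both directions
        layers.Dropout(0.5),
        # Sigmoid outputs (not softmax): a comment may carry several labels at once.
        layers.Dense(len(LABELS), activation="sigmoid"),
    ])
    model.compile(
        optimizer="adam",
        loss="binary_crossentropy",               # one binary decision per label
        metrics=[tf.keras.metrics.AUC()],
    )
    return model

if __name__ == "__main__":
    # Dummy data standing in for tokenized comments and their label vectors.
    x = np.random.randint(0, VOCAB_SIZE, size=(32, MAX_LEN))
    y = np.random.randint(0, 2, size=(32, len(LABELS))).astype("float32")
    model = build_model()
    model.fit(x, y, epochs=1, batch_size=8, verbose=1)
```

Binary cross-entropy with sigmoid outputs treats the six categories as independent labels, which matches the multi-label nature of the toxic comment dataset; a softmax would incorrectly force each comment into exactly one category.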