Classification of Abusive Comments in Social Media using Deep Learning

Mukul Anand, R. Eswari
{"title":"基于深度学习的社交媒体滥用评论分类","authors":"Mukul Anand, R. Eswari","doi":"10.1109/ICCMC.2019.8819734","DOIUrl":null,"url":null,"abstract":"Social media has provided everyone to express views and communicate to masses, but it also becomes a place for hateful behavior, abusive language, cyber-bullying and personal attacks. However, determining comment or a post is abusive or not is still difficult and time consuming, most of the social media platforms still searching for more efficient ways for efficient moderate solution. Automating this will help in identifying abusive comments, and save the websites and increase user safety and improve discussions online. In this paper, Kaggle’s toxic comment dataset is used to train deep learning model and classifying the comments in following categories: toxic, severe toxic, obscene, threat, insult, and identity hate. The dataset is trained with various deep learning techniques and analyze which deep learning model is better in the comment classification. The deep learning techniques such as long short term memory cell (LSTM) with and without word GloVe embeddings, a Convolution neural network (CNN) with or without GloVe are used, and GloVe pretrained model is used for classification","PeriodicalId":232624,"journal":{"name":"2019 3rd International Conference on Computing Methodologies and Communication (ICCMC)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"44","resultStr":"{\"title\":\"Classification of Abusive Comments in Social Media using Deep Learning\",\"authors\":\"Mukul Anand, R. Eswari\",\"doi\":\"10.1109/ICCMC.2019.8819734\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Social media has provided everyone to express views and communicate to masses, but it also becomes a place for hateful behavior, abusive language, cyber-bullying and personal attacks. However, determining comment or a post is abusive or not is still difficult and time consuming, most of the social media platforms still searching for more efficient ways for efficient moderate solution. Automating this will help in identifying abusive comments, and save the websites and increase user safety and improve discussions online. In this paper, Kaggle’s toxic comment dataset is used to train deep learning model and classifying the comments in following categories: toxic, severe toxic, obscene, threat, insult, and identity hate. The dataset is trained with various deep learning techniques and analyze which deep learning model is better in the comment classification. 
The deep learning techniques such as long short term memory cell (LSTM) with and without word GloVe embeddings, a Convolution neural network (CNN) with or without GloVe are used, and GloVe pretrained model is used for classification\",\"PeriodicalId\":232624,\"journal\":{\"name\":\"2019 3rd International Conference on Computing Methodologies and Communication (ICCMC)\",\"volume\":\"26 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-03-27\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"44\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2019 3rd International Conference on Computing Methodologies and Communication (ICCMC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICCMC.2019.8819734\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 3rd International Conference on Computing Methodologies and Communication (ICCMC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCMC.2019.8819734","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 44

Abstract

Social media gives everyone a platform to express views and communicate with a mass audience, but it has also become a place for hateful behavior, abusive language, cyber-bullying, and personal attacks. Determining whether a comment or post is abusive remains difficult and time-consuming, and most social media platforms are still searching for more efficient moderation solutions. Automating this task helps identify abusive comments, increases user safety, and improves online discussions. In this paper, Kaggle's toxic comment dataset is used to train deep learning models that classify comments into the following categories: toxic, severe toxic, obscene, threat, insult, and identity hate. The dataset is trained with several deep learning techniques to analyze which model performs better on comment classification. The techniques compared are a long short-term memory (LSTM) network with and without pretrained GloVe word embeddings, and a convolutional neural network (CNN) with and without GloVe embeddings.
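
The abstract does not give implementation details, but one of the configurations it names, an LSTM over frozen pretrained GloVe word embeddings trained on the Kaggle toxic-comment data for the six labels, can be sketched as below. This is a minimal illustration, not the authors' code: the file paths (`train.csv`, `glove.6B.100d.txt`), vocabulary size, sequence length, LSTM width, and training settings are assumptions chosen for the sketch.

```python
import numpy as np
import pandas as pd
import tensorflow as tf

LABELS = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]
MAX_WORDS, MAX_LEN, EMB_DIM = 20000, 150, 100  # illustrative settings

# Kaggle "Toxic Comment Classification Challenge" training file (assumed local path).
train = pd.read_csv("train.csv")
texts = train["comment_text"].fillna("").astype(str).values
y = train[LABELS].values  # one binary column per label (multi-label targets)

# Map raw comments to fixed-length integer sequences.
vectorizer = tf.keras.layers.TextVectorization(
    max_tokens=MAX_WORDS, output_sequence_length=MAX_LEN)
vectorizer.adapt(texts)
X = vectorizer(texts).numpy()

# Build an embedding matrix from pretrained GloVe vectors (assumed local file).
glove = {}
with open("glove.6B.100d.txt", encoding="utf-8") as f:
    for line in f:
        parts = line.rstrip().split(" ")
        glove[parts[0]] = np.asarray(parts[1:], dtype="float32")

vocab = vectorizer.get_vocabulary()
emb_matrix = np.zeros((len(vocab), EMB_DIM), dtype="float32")
for i, word in enumerate(vocab):
    if word in glove:
        emb_matrix[i] = glove[word]

# LSTM over frozen GloVe embeddings; sigmoid outputs give independent
# per-label probabilities, since a comment can carry several labels at once.
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(
        len(vocab), EMB_DIM,
        embeddings_initializer=tf.keras.initializers.Constant(emb_matrix),
        trainable=False),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(len(LABELS), activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(multi_label=True)])
model.fit(X, y, batch_size=256, epochs=2, validation_split=0.1)
```

The variants compared in the paper would swap the LSTM layer for a 1-D convolutional stack and toggle the GloVe initialization (random, trainable embeddings versus the frozen pretrained matrix above).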