Incident Detection based on Multimodal data from Social Media using Deep Learning Methods

C. Fatichah, Petrus Damianus Sammy Wiyadi, Dini Adni Navastara, N. Suciati, A. Munif
DOI: 10.1109/ICISS50791.2020.9307555
Published in: 2020 International Conference on ICT for Smart Society (ICISS), 2020-11-19
Cited by: 1

Abstract

Social media is one use of crowdsourcing to gather vast amounts of information. Applications of incident detection using social media data commonly focus on text analysis. Because social media captures various data types, such as text, voice, images, and video, developing incident detection based on multimodal data is preferable. The use of multimodal data for incident detection is expected to improve prediction accuracy. This research aims to detect emergency incidents from multimodal data streams on social media using deep learning methods. We compare several deep learning architectures built on two neural network variants, the Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM). We crawled data from the Twitter API and labeled it into three incident categories: flood, traffic jam, and wildfire. CNN and C-LSTM models are used for text prediction; the best performance was obtained by C-LSTM, which achieved 99.09% accuracy. The CNN models compared for image prediction are AlexNet, VGG16, VGG19, and SqueezeNet; the best performance was obtained by VGG16 with data augmentation, which achieved 99.08% accuracy. The incident detection result for multimodal data is taken from whichever modality, text or image, yields the highest confidence level.
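The late-fusion rule stated in the abstract's last sentence, taking the label from whichever modality's classifier reports the higher confidence, can be sketched as follows. This is a minimal illustration of that decision rule, not the authors' implementation; the function and variable names are hypothetical.

```python
# Illustrative late-fusion rule: the modality (text or image) whose
# classifier reports the higher top confidence decides the final label.
# Category names follow the paper; everything else is an assumption.

CATEGORIES = ("flood", "traffic jam", "wildfire")

def fuse_predictions(text_probs, image_probs):
    """Return (label, confidence) from the more confident modality.

    text_probs / image_probs: dicts mapping category -> confidence score.
    Either may be None when a post lacks that modality.
    """
    candidates = [p for p in (text_probs, image_probs) if p is not None]
    if not candidates:
        raise ValueError("post has neither a text nor an image prediction")
    # Pick the modality with the highest single-class confidence...
    best = max(candidates, key=lambda probs: max(probs.values()))
    # ...then report that modality's top label.
    label = max(best, key=best.get)
    return label, best[label]

# Example: the text classifier is more confident, so its label wins.
text = {"flood": 0.91, "traffic jam": 0.05, "wildfire": 0.04}
image = {"flood": 0.40, "traffic jam": 0.35, "wildfire": 0.25}
print(fuse_predictions(text, image))  # ('flood', 0.91)
```

Because the rule compares only the top confidence per modality, a post with an ambiguous image but an unambiguous caption (or vice versa) is still classified decisively.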