Sentiment analysis from textual to multimodal features in digital environments
M. Caschera, F. Ferri, P. Grifoni
Proceedings of the 8th International Conference on Management of Digital EcoSystems, November 2016
DOI: 10.1145/3012071.3012089 (https://doi.org/10.1145/3012071.3012089)
Citations: 10
Abstract
When social network actors produce, consume, and exchange content and information through text, images, audio, and video, they act in a shared digital environment that can be considered a digital ecosystem. Given the increasing volume of produced data, an open issue is understanding the real sentiment and emotion expressed not only in text but also in images, audio, and video. This issue is particularly relevant for monitoring and identifying critical situations and suspicious behaviours. This paper reviews and evaluates the various techniques used for sentiment and emotion analysis of text, audio, and video, and discusses the main challenges in extracting sentiment from multimodal data. The paper concludes by proposing a method that combines a machine learning approach with a language-based formalization in order to extract sentiment from multimodal data formalized through a multimodal language.
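Among the text-analysis techniques surveyed in work of this kind, the simplest baseline is a lexicon-based polarity count. The sketch below is purely illustrative — the word lists are invented for the example, and this is not the machine-learning/multimodal-language method the paper proposes:

```python
# Minimal lexicon-based sentiment scorer -- an illustrative sketch only,
# not the hybrid method proposed in the paper. The word lists below are
# hypothetical examples, not drawn from any published lexicon.

POSITIVE = {"good", "great", "happy", "love", "excellent"}
NEGATIVE = {"bad", "sad", "hate", "terrible", "awful"}

def sentiment_score(text: str) -> int:
    """Return a crude polarity score: positive minus negative word counts."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(sentiment_score("I love this great movie"))   # -> 2
print(sentiment_score("terrible and sad ending"))   # -> -2
```

Such counting baselines ignore negation, sarcasm, and context — limitations that motivate the machine-learning and multimodal approaches the paper reviews.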