Proceedings of the 1st International Workshop on Affect & Sentiment in Multimedia: Latest Publications

Affect Recognition in a Realistic Movie Dataset Using a Hierarchical Approach
J. Dumoulin, Diana Affi, E. Mugellini, Omar Abou Khaled, M. Bertini, A. Bimbo
{"title":"Affect Recognition in a Realistic Movie Dataset Using a Hierarchical Approach","authors":"J. Dumoulin, Diana Affi, E. Mugellini, Omar Abou Khaled, M. Bertini, A. Bimbo","doi":"10.1145/2813524.2813526","DOIUrl":"https://doi.org/10.1145/2813524.2813526","url":null,"abstract":"Affective content analysis has gained great attention in recent years and is an important challenge of content-based multimedia information retrieval. In this paper, a hierarchical approach is proposed for affect recognition in movie datasets. This approach has been verified on the AFEW dataset, showing an improvement in classification results compared to the baseline. In order to use all the visual sentiment aspects contained in the movies excerpts of a realistic dataset such as FilmStim, deep learning features trained on a large set of emotional images are added to the standard audio and visual features. The proposed approach will be integrated in a system that communicates the emotions of a movie to impaired people and contribute to improve their television experience.","PeriodicalId":197562,"journal":{"name":"Proceedings of the 1st International Workshop on Affect & Sentiment in Multimedia","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129311121","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
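The paper itself ships no code; as a rough, hypothetical illustration of what a hierarchical recognition scheme can look like (not the authors' implementation), the Python sketch below first predicts a coarse valence group and then a fine-grained emotion within that group. The feature dimensions, emotion groups, and scikit-learn SVMs are all assumptions for illustration.

```python
# Hypothetical two-stage (hierarchical) emotion classifier: stage 1 picks
# a coarse valence group, stage 2 picks a fine-grained emotion within it.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
GROUPS = {"negative": ["angry", "sad", "fear"],
          "positive": ["happy", "surprise", "neutral"]}

# Fake training data: 300 clips with 64-dim fused audio/visual/deep features.
X = rng.normal(size=(300, 64))
fine = rng.choice(GROUPS["negative"] + GROUPS["positive"], size=300)
coarse = np.array(["negative" if f in GROUPS["negative"] else "positive"
                   for f in fine])

stage1 = SVC().fit(X, coarse)                               # coarse classifier
stage2 = {g: SVC().fit(X[coarse == g], fine[coarse == g])   # one fine-grained
          for g in GROUPS}                                  # model per group

def predict(x):
    g = stage1.predict(x.reshape(1, -1))[0]        # choose the branch first...
    return stage2[g].predict(x.reshape(1, -1))[0]  # ...then the emotion in it

print(predict(rng.normal(size=64)))
```

Splitting the decision this way lets each second-stage model specialize on a smaller, easier sub-problem, which is the usual motivation for hierarchical schemes.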
An Interactive System based on Yes-No Questions for Affective Image Retrieval
Saemi Choi, T. Yamasaki, K. Aizawa
{"title":"An Interactive System based on Yes-No Questions for Affective Image Retrieval","authors":"Saemi Choi, T. Yamasaki, K. Aizawa","doi":"10.1145/2813524.2813525","DOIUrl":"https://doi.org/10.1145/2813524.2813525","url":null,"abstract":"We propose an interactive system based on yes-no questions for affective image retrieval. We propose two querying methods, a question generation method, Affective Question and Answer (AQA), and a feedback method, Affective Feedback (AF). Conventional image search systems ask users to input queries by text. However, it is not always easy for users to convert their intention into verbal representations. Especially, the query generation becomes even more difficult when a user tries to find images with affective words due to its subjectivity. In addition, it is not guaranteed that the images are properly annotated with enough number and high quality of tags. To solve these problems, we propose a yes-no questions-based image retrieval system that can effectively narrow down the candidate images. We also provide an affective feedback interface in which users can do the fine tuning of weights of the affective words. We conducted experiments on image retrieval task with 117,866 images. The results showed that our system brings satisfactory results to users in case where the proper text querying is difficult.","PeriodicalId":197562,"journal":{"name":"Proceedings of the 1st International Workshop on Affect & Sentiment in Multimedia","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128503789","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
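As a hypothetical sketch (not the authors' AQA/AF system), the following shows how yes/no answers about affective attributes can filter a candidate pool, plus a greedy heuristic for choosing the next question; the attribute names and data are invented.

```python
# Hypothetical yes/no narrowing over a pool of images, each carrying
# invented binary affective attributes.
candidates = [
    {"id": 1, "calm": True,  "bright": True},
    {"id": 2, "calm": True,  "bright": False},
    {"id": 3, "calm": False, "bright": True},
    {"id": 4, "calm": False, "bright": False},
]

def best_question(pool, attributes):
    """Greedy 20-questions heuristic: ask about the attribute whose
    yes/no split of the current pool is most balanced."""
    return min(attributes,
               key=lambda a: abs(sum(img[a] for img in pool) - len(pool) / 2))

def answer(pool, attribute, says_yes):
    """Keep only the images consistent with the user's yes/no answer."""
    return [img for img in pool if img[attribute] == says_yes]

pool = candidates
q = best_question(pool, ["calm", "bright"])   # e.g. "Is the image calm?"
pool = answer(pool, q, True)                  # user says yes
pool = answer(pool, "bright", False)          # user says not bright
print([img["id"] for img in pool])            # -> [2]
```

A balanced split means either answer roughly halves the pool, which is why question selection matters as much as the filtering itself.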
Proceedings of the 1st International Workshop on Affect & Sentiment in Multimedia
M. Soleymani, Yi-Hsuan Yang, Yu-Gang Jiang, Shih-Fu Chang
{"title":"Proceedings of the 1st International Workshop on Affect & Sentiment in Multimedia","authors":"M. Soleymani, Yi-Hsuan Yang, Yu-Gang Jiang, Shih-Fu Chang","doi":"10.1145/2813524","DOIUrl":"https://doi.org/10.1145/2813524","url":null,"abstract":"It is our great pleasure to welcome you to the 2015 ACM Workshop on Affect and Sentiment in Multimedia -- ASM'15. This is the first workshop on affect and sentiment in multimedia with a focus on multimedia content analysis. The aim of the workshop is to provide a forum to present and discuss the recent advancement in affective analysis in multimedia. ASM'15 gives researchers a unique opportunity to share their ideas in an interdisciplinary workshop. For this reason, we invited two keynote speakers from related fields, namely, psychology of emotion and recommendation systems to bring their perspectives to this multimedia related venue. \u0000 \u0000The call for papers attracted 16 submissions from all over the world. The program committee reviewed and accepted 9 submissions. \u0000 \u0000We also encourage attendees to attend the following keynote talk: \u0000\"Blending Users, Content, and Emotions for Movie Recommendations,\" Shlomo Berkovsky, (CSIRO, Australia)","PeriodicalId":197562,"journal":{"name":"Proceedings of the 1st International Workshop on Affect & Sentiment in Multimedia","volume":"113 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126873286","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Twitter: A New Online Source of Automatically Tagged Data for Conversational Speech Emotion Recognition
Christopher Hines, V. Sethu, J. Epps
{"title":"Twitter: A New Online Source of Automatically Tagged Data for Conversational Speech Emotion Recognition","authors":"Christopher Hines, V. Sethu, J. Epps","doi":"10.1145/2813524.2813529","DOIUrl":"https://doi.org/10.1145/2813524.2813529","url":null,"abstract":"In the space of affect detection in multimedia, there is a strong demand for more tagged data in order to better understand human emotions, the way they are expressed, and approaches for detecting them automatically. Unfortunately, emotion datasets are typically small due to the manual process of annotating them with emotional labels. In response, we present for the first time the application of automatically tagged Twitter data to the problem of speech emotion recognition (SER). SER has been shown to benefit from the combination of acoustic and linguistic features, albeit when the linguistic training data is from the same database as the test data. Using the presence of emoticons for automatic tagging, we compile a corpus of over 800,000 tweets that is totally independent from our evaluation database. By supplementing an acoustic classifier with linguistic information, we classify the spontaneous content within the USC-IEMOCAP corpus on valence and activation descriptors. With comparison to prior literature, we demonstrate performance improvements for valence of 2% and 6% over an acoustic-only system, using linguistic training data from Twitter and IEMOCAP respectively.","PeriodicalId":197562,"journal":{"name":"Proceedings of the 1st International Workshop on Affect & Sentiment in Multimedia","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127549369","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
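In the spirit of the paper's distant-supervision idea, here is a minimal, hypothetical sketch of emoticon-based automatic valence tagging. The emoticon lists, the tie-breaking rule, and the step that strips the label-leaking emoticon are assumptions, not the authors' exact procedure.

```python
# Hypothetical emoticon-based auto-tagger: the emoticon acts as a noisy
# valence label, so no manual annotation is needed.
POSITIVE = [":)", ":-)", ":D", "=)"]
NEGATIVE = [":(", ":-(", ":'(", "=("]

def auto_tag(tweet):
    """Return (text, valence) for unambiguous tweets, else None."""
    has_pos = any(e in tweet for e in POSITIVE)
    has_neg = any(e in tweet for e in NEGATIVE)
    if has_pos == has_neg:            # no emoticon at all, or mixed signals
        return None
    label = "positive" if has_pos else "negative"
    for e in POSITIVE + NEGATIVE:     # strip the emoticon so the label
        tweet = tweet.replace(e, "")  # cannot leak into the text features
    return tweet.strip(), label

print(auto_tag("finally home :)"))      # ('finally home', 'positive')
print(auto_tag("missed the bus :("))    # ('missed the bus', 'negative')
print(auto_tag("no emoticons here"))    # None
```

Removing the emoticon after labelling is the usual precaution in such setups: otherwise the classifier can learn the tag itself rather than the surrounding language.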
Do Others Perceive You As You Want Them To?: Modeling Personality based on Selfies
Sharath Chandra Guntuku, Lin Qiu, S. Roy, Weisi Lin, V. Jakhetiya
{"title":"Do Others Perceive You As You Want Them To?: Modeling Personality based on Selfies","authors":"Sharath Chandra Guntuku, Lin Qiu, S. Roy, Weisi Lin, V. Jakhetiya","doi":"10.1145/2813524.2813528","DOIUrl":"https://doi.org/10.1145/2813524.2813528","url":null,"abstract":"In this work, selfies (self-portrait images) of users are used to computationally predict and understand their personality. For users to convey a certain impression with selfie, and for the observers to build a certain impression about the users, many visual cues play a significant role. It is interesting to analyse what these cues are and how they influence our understanding of personality profiles. Selfies of users (from a popular microblogging site, Sina Weibo) were annotated with mid-level cues (such as presence of duckface, if the user is alone, emotional positivity etc.) relevant to portraits (especially selfies). Low-level visual features were used to train models to detect these mid-level cues, which are then used to predict users' personality (based on Five Factor Model). The mid-level cue detectors are seen to outperform state-of-the-art features for most traits. Using the trained computational models, we then present several insights on how selfies reflect their owners' personality and how users' are judged by others based on their selfies.","PeriodicalId":197562,"journal":{"name":"Proceedings of the 1st International Workshop on Affect & Sentiment in Multimedia","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115923804","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 44
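A minimal sketch of a two-stage pipeline in the spirit of the abstract: low-level features feed binary mid-level cue detectors, whose scores feed per-trait regressors. The random features and targets, the 128-dimensional descriptor, and the scikit-learn models are stand-ins, not the authors' setup.

```python
# Hypothetical pipeline: low-level features -> mid-level cue detectors
# -> Big Five trait regressors. All data here are random placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression, Ridge

rng = np.random.default_rng(1)
CUES = ["duckface", "alone", "emotional_positivity"]    # from the abstract
TRAITS = ["openness", "conscientiousness", "extraversion",
          "agreeableness", "neuroticism"]

X = rng.normal(size=(200, 128))                         # low-level features
cue_labels = rng.integers(0, 2, size=(200, len(CUES)))  # annotated cues
trait_scores = rng.normal(size=(200, len(TRAITS)))      # Big Five targets

# Stage 1: one binary detector per mid-level cue.
detectors = [LogisticRegression(max_iter=1000).fit(X, cue_labels[:, i])
             for i in range(len(CUES))]

def cue_probs(features):
    """Stack each detector's probability of the cue being present."""
    return np.column_stack([d.predict_proba(features)[:, 1]
                            for d in detectors])

# Stage 2: regress each trait from the stacked cue probabilities.
regressors = [Ridge().fit(cue_probs(X), trait_scores[:, i])
              for i in range(len(TRAITS))]

x_new = rng.normal(size=(1, 128))                       # one unseen selfie
print({t: round(float(r.predict(cue_probs(x_new))[0]), 3)
       for t, r in zip(TRAITS, regressors)})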
Session details: Oral Session 2: Content analysis
Yu-Gang Jiang
{"title":"Session details: Oral Session 2: Content analysis","authors":"Yu-Gang Jiang","doi":"10.1145/3260946","DOIUrl":"https://doi.org/10.1145/3260946","url":null,"abstract":"","PeriodicalId":197562,"journal":{"name":"Proceedings of the 1st International Workshop on Affect & Sentiment in Multimedia","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131180688","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Diving Deep into Sentiment: Understanding Fine-tuned CNNs for Visual Sentiment Prediction
Víctor Campos, Amaia Salvador, Xavier Giró-i-Nieto, Brendan Jou
{"title":"Diving Deep into Sentiment: Understanding Fine-tuned CNNs for Visual Sentiment Prediction","authors":"Víctor Campos, Amaia Salvador, Xavier Giró-i-Nieto, Brendan Jou","doi":"10.1145/2813524.2813530","DOIUrl":"https://doi.org/10.1145/2813524.2813530","url":null,"abstract":"Visual media are powerful means of expressing emotions and sentiments. The constant generation of new content in social networks highlights the need of automated visual sentiment analysis tools. While Convolutional Neural Networks (CNNs) have established a new state-of-the-art in several vision problems, their application to the task of sentiment analysis is mostly unexplored and there are few studies regarding how to design CNNs for this purpose. In this work, we study the suitability of fine-tuning a CNN for visual sentiment prediction as well as explore performance boosting techniques within this deep learning setting. Finally, we provide a deep-dive analysis into a benchmark, state-of-the-art network architecture to gain insight about how to design patterns for CNNs on the task of visual sentiment prediction.","PeriodicalId":197562,"journal":{"name":"Proceedings of the 1st International Workshop on Affect & Sentiment in Multimedia","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124757006","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 86
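As a hedged illustration of the fine-tuning setup the abstract studies (the paper worked with earlier CaffeNet-era models, not this one), the sketch below freezes a pretrained torchvision ResNet-18 backbone and trains a new two-class sentiment head; the hyperparameters and the random batch are placeholders.

```python
# Hypothetical fine-tuning sketch: reuse a pretrained backbone, learn only
# a new positive/negative sentiment head.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():          # freeze the pretrained backbone
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)  # fresh two-class head

optimizer = torch.optim.SGD(model.fc.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a random batch standing in for images.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```

Unfreezing deeper layers with a smaller learning rate is the usual next step when the target dataset is large enough, which is one of the trade-offs this line of work examines.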