"Affect Recognition in a Realistic Movie Dataset Using a Hierarchical Approach"
J. Dumoulin, Diana Affi, E. Mugellini, Omar Abou Khaled, M. Bertini, A. Del Bimbo
DOI: 10.1145/2813524.2813526 (published 2015-10-30)
Abstract: Affective content analysis has attracted considerable attention in recent years and remains an important challenge in content-based multimedia information retrieval. In this paper, a hierarchical approach is proposed for affect recognition in movie datasets. The approach has been verified on the AFEW dataset, showing an improvement in classification results over the baseline. In order to exploit all the visual sentiment aspects contained in the movie excerpts of a realistic dataset such as FilmStim, deep learning features trained on a large set of emotional images are added to the standard audio and visual features. The proposed approach will be integrated into a system that communicates the emotions of a movie to impaired people, helping to improve their television experience.

{"title":"An Interactive System based on Yes-No Questions for Affective Image Retrieval","authors":"Saemi Choi, T. Yamasaki, K. Aizawa","doi":"10.1145/2813524.2813525","DOIUrl":"https://doi.org/10.1145/2813524.2813525","url":null,"abstract":"We propose an interactive system based on yes-no questions for affective image retrieval. We propose two querying methods, a question generation method, Affective Question and Answer (AQA), and a feedback method, Affective Feedback (AF). Conventional image search systems ask users to input queries by text. However, it is not always easy for users to convert their intention into verbal representations. Especially, the query generation becomes even more difficult when a user tries to find images with affective words due to its subjectivity. In addition, it is not guaranteed that the images are properly annotated with enough number and high quality of tags. To solve these problems, we propose a yes-no questions-based image retrieval system that can effectively narrow down the candidate images. We also provide an affective feedback interface in which users can do the fine tuning of weights of the affective words. We conducted experiments on image retrieval task with 117,866 images. The results showed that our system brings satisfactory results to users in case where the proper text querying is difficult.","PeriodicalId":197562,"journal":{"name":"Proceedings of the 1st International Workshop on Affect & Sentiment in Multimedia","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128503789","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Proceedings of the 1st International Workshop on Affect & Sentiment in Multimedia"
M. Soleymani, Yi-Hsuan Yang, Yu-Gang Jiang, Shih-Fu Chang
DOI: 10.1145/2813524 (published 2015-10-30)
Abstract: It is our great pleasure to welcome you to the 2015 ACM Workshop on Affect and Sentiment in Multimedia (ASM'15), the first workshop on affect and sentiment in multimedia with a focus on multimedia content analysis. The aim of the workshop is to provide a forum to present and discuss recent advances in affective analysis of multimedia. ASM'15 gives researchers a unique opportunity to share their ideas in an interdisciplinary setting; for this reason, we invited two keynote speakers from related fields, namely the psychology of emotion and recommender systems, to bring their perspectives to this multimedia venue. The call for papers attracted 16 submissions from all over the world, of which the program committee reviewed and accepted 9. We also encourage attendees to attend the keynote talk "Blending Users, Content, and Emotions for Movie Recommendations" by Shlomo Berkovsky (CSIRO, Australia).

{"title":"Twitter: A New Online Source of Automatically Tagged Data for Conversational Speech Emotion Recognition","authors":"Christopher Hines, V. Sethu, J. Epps","doi":"10.1145/2813524.2813529","DOIUrl":"https://doi.org/10.1145/2813524.2813529","url":null,"abstract":"In the space of affect detection in multimedia, there is a strong demand for more tagged data in order to better understand human emotions, the way they are expressed, and approaches for detecting them automatically. Unfortunately, emotion datasets are typically small due to the manual process of annotating them with emotional labels. In response, we present for the first time the application of automatically tagged Twitter data to the problem of speech emotion recognition (SER). SER has been shown to benefit from the combination of acoustic and linguistic features, albeit when the linguistic training data is from the same database as the test data. Using the presence of emoticons for automatic tagging, we compile a corpus of over 800,000 tweets that is totally independent from our evaluation database. By supplementing an acoustic classifier with linguistic information, we classify the spontaneous content within the USC-IEMOCAP corpus on valence and activation descriptors. With comparison to prior literature, we demonstrate performance improvements for valence of 2% and 6% over an acoustic-only system, using linguistic training data from Twitter and IEMOCAP respectively.","PeriodicalId":197562,"journal":{"name":"Proceedings of the 1st International Workshop on Affect & Sentiment in Multimedia","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127549369","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Do Others Perceive You As You Want Them To?: Modeling Personality based on Selfies"
Sharath Chandra Guntuku, Lin Qiu, S. Roy, Weisi Lin, V. Jakhetiya
DOI: 10.1145/2813524.2813528 (published 2015-10-30)
Abstract: In this work, users' selfies (self-portrait images) are used to computationally predict and understand their personality. Many visual cues play a significant role both in the impression a user intends to convey with a selfie and in the impression observers form of the user, so it is interesting to analyse what these cues are and how they influence our understanding of personality profiles. Selfies of users of a popular microblogging site, Sina Weibo, were annotated with mid-level cues relevant to portraits and especially selfies (such as the presence of a duckface, whether the user is alone, and emotional positivity). Low-level visual features were used to train models to detect these mid-level cues, which were then used to predict users' personality according to the Five Factor Model. The mid-level cue detectors outperform state-of-the-art features for most traits. Using the trained computational models, we then present several insights into how selfies reflect their owners' personality and how users are judged by others based on their selfies.

{"title":"Session details: Oral Session 2: Content analysis","authors":"Yu-Gang Jiang","doi":"10.1145/3260946","DOIUrl":"https://doi.org/10.1145/3260946","url":null,"abstract":"","PeriodicalId":197562,"journal":{"name":"Proceedings of the 1st International Workshop on Affect & Sentiment in Multimedia","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131180688","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Diving Deep into Sentiment: Understanding Fine-tuned CNNs for Visual Sentiment Prediction","authors":"Víctor Campos, Amaia Salvador, Xavier Giró-i-Nieto, Brendan Jou","doi":"10.1145/2813524.2813530","DOIUrl":"https://doi.org/10.1145/2813524.2813530","url":null,"abstract":"Visual media are powerful means of expressing emotions and sentiments. The constant generation of new content in social networks highlights the need of automated visual sentiment analysis tools. While Convolutional Neural Networks (CNNs) have established a new state-of-the-art in several vision problems, their application to the task of sentiment analysis is mostly unexplored and there are few studies regarding how to design CNNs for this purpose. In this work, we study the suitability of fine-tuning a CNN for visual sentiment prediction as well as explore performance boosting techniques within this deep learning setting. Finally, we provide a deep-dive analysis into a benchmark, state-of-the-art network architecture to gain insight about how to design patterns for CNNs on the task of visual sentiment prediction.","PeriodicalId":197562,"journal":{"name":"Proceedings of the 1st International Workshop on Affect & Sentiment in Multimedia","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124757006","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}