Proceedings of the 1st International Workshop on Affect & Sentiment in Multimedia: Latest Publications

Blending Users, Content, and Emotions for Movie Recommendations
S. Berkovsky
{"title":"Blending Users, Content, and Emotions for Movie Recommendations","authors":"S. Berkovsky","doi":"10.1145/2813524.2813535","DOIUrl":"https://doi.org/10.1145/2813524.2813535","url":null,"abstract":"Recommender systems were initially deployed in eCommerce applications, but they are used nowadays in a broad range of domains and services. They alleviate online information overload by highlighting items of potential interest and helping users make informed choices. Many prior works in recommender systems focussed on the movie recommendation task, primarily due to the availability of several movie rating datasets. However, all these works considered two main input signals: ratings assigned by users and movie content information (genres, actors, directors, etc). We argue that in order to generate high-quality recommendations, recommender systems should possess a much richer user information. For example, consider a 3-star rating assigned to a 2-hour movie. It is evidently a mediocre rating meaning that the user liked some features of the movie and disliked others. However, a single rating does not allow to identify the liked and disliked features. In this talk we discuss the use of emotions as an additional source of rich user modelling data. We argue that user emotions elicited over the course of watching a movie mirror user responses to the movie content and the emotional triggers planted in there. This implicit user modelling can be seen as a virtual annotation of the movie timeline with the emotional user feedback. If captured and mined properly, this emotion-annotated movie timeline can be superior to the one-off ratings and feature preference scores gathered by traditional user modelling methods. We will discuss several open challenges referring to the use of emotion-based user modelling in movie recommendations. How to capture the user emotions in an unobtrusive manner? How to accurately interpret the captured emotions in context of the movie content? How to integrate the derived user modelling data into the recommendation process? Finally, how can this data be leveraged for other types of content, domains, or personalisation tasks?","PeriodicalId":197562,"journal":{"name":"Proceedings of the 1st International Workshop on Affect & Sentiment in Multimedia","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123382820","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
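The talk's central idea, a "virtual annotation of the movie timeline with the emotional user feedback", can be pictured as a simple data structure. The sketch below is purely illustrative and not taken from the talk; the class and field names (EmotionSample, valence, arousal, liked_segments) are assumptions used only to make the idea concrete.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class EmotionSample:
    """One implicit emotional observation tied to a point in the movie timeline."""
    timestamp_s: float   # seconds from the start of the movie
    valence: float       # negative..positive reaction, e.g. in [-1, 1]
    arousal: float       # calm..excited, e.g. in [0, 1]

@dataclass
class EmotionAnnotatedTimeline:
    """A per-user 'virtual annotation' of a movie timeline with emotional feedback."""
    user_id: str
    movie_id: str
    samples: List[EmotionSample] = field(default_factory=list)

    def liked_segments(self, valence_threshold: float = 0.5) -> List[float]:
        """Timestamps where the inferred reaction was clearly positive."""
        return [s.timestamp_s for s in self.samples if s.valence >= valence_threshold]
```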
Learning Combinations of Multiple Feature Representations for Music Emotion Prediction
J. Madsen, B. S. Jensen, Jan Larsen
{"title":"Learning Combinations of Multiple Feature Representations for Music Emotion Prediction","authors":"J. Madsen, B. S. Jensen, Jan Larsen","doi":"10.1145/2813524.2813534","DOIUrl":"https://doi.org/10.1145/2813524.2813534","url":null,"abstract":"Music consists of several structures and patterns evolving through time which greatly influences the human decoding of higher-level cognitive aspects of music like the emotions expressed in music. For tasks, such as genre, tag and emotion recognition, these structures have often been identified and used as individual and non-temporal features and representations. In this work, we address the hypothesis whether using multiple temporal and non-temporal representations of different features is beneficial for modeling music structure with the aim to predict the emotions expressed in music. We test this hypothesis by representing temporal and non-temporal structures using generative models of multiple audio features. The representations are used in a discriminative setting via the Product Probability Kernel and the Gaussian Process model enabling Multiple Kernel Learning, finding optimized combinations of both features and temporal/ non-temporal representations. We show the increased predictive performance using the combination of different features and representations along with the great interpretive prospects of this approach.","PeriodicalId":197562,"journal":{"name":"Proceedings of the 1st International Workshop on Affect & Sentiment in Multimedia","volume":"80 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124608293","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 8
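The abstract describes learning optimized combinations of per-feature kernels via the Product Probability Kernel and a Gaussian Process model. The paper's own model is not reproduced here; the following is a much simpler, hypothetical sketch of the underlying idea of weighting precomputed Gram matrices, with kernel ridge regression standing in for the Gaussian Process and a coarse grid search standing in for Multiple Kernel Learning.

```python
import numpy as np
from itertools import product
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import cross_val_score

def combine_kernels(kernels, weights):
    """Weighted sum of precomputed Gram matrices, one per feature representation."""
    return sum(w * K for w, K in zip(weights, kernels))

def search_kernel_weights(kernels, y, grid=(0.0, 0.5, 1.0)):
    """Coarse grid search over kernel weights, scored by cross-validated R^2
    of kernel ridge regression on the combined (precomputed) kernel."""
    best_weights, best_score = None, -np.inf
    for weights in product(grid, repeat=len(kernels)):
        if not any(weights):
            continue  # skip the all-zero combination
        K = combine_kernels(kernels, weights)
        model = KernelRidge(kernel="precomputed", alpha=1.0)
        score = cross_val_score(model, K, y, cv=3).mean()
        if score > best_score:
            best_weights, best_score = weights, score
    return best_weights, best_score
```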
What Makes a Beautiful Landscape Beautiful: Adjective Noun Pairs Attention by Eye-Tracking and Gaze Analysis
Mohammad Al-Naser, Seyyed Saleh Mozaffari Chanijani, S. S. Bukhari, Damian Borth, A. Dengel
{"title":"What Makes a Beautiful Landscape Beautiful: Adjective Noun Pairs Attention by Eye-Tracking and Gaze Analysis","authors":"Mohammad Al-Naser, Seyyed Saleh Mozaffari Chanijani, S. S. Bukhari, Damian Borth, A. Dengel","doi":"10.1145/2813524.2813532","DOIUrl":"https://doi.org/10.1145/2813524.2813532","url":null,"abstract":"This paper asks the questions: what makes a beautiful landscape beautiful, what makes a damaged building look damaged? It tackles the challenge to understand which regions of Adjective Noun Pairs (ANP) images attract attention when observed by a human subject. We employ eye-tracking techniques to record the gaze over a set of multiple ANPs images and derive regions of interests for these ANPs. Our contribution is to study eye fixation pattern in the context of ANPs and their characteristics between being objective or subjective on the one hand and holistic vs. localizable on the other hand. Our finding indicate that subjects who differ in their assessment of ANP labels also have different eye fixation pattern. Further, we provide insights about ANP attention during implicit and explicit ANP assessment.","PeriodicalId":197562,"journal":{"name":"Proceedings of the 1st International Workshop on Affect & Sentiment in Multimedia","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128119311","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 8
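The abstract derives regions of interest from recorded gaze. As an illustration of that general pipeline (not the authors' exact procedure), the sketch below accumulates duration-weighted fixations into a smoothed attention map and thresholds it; the function names and the 90th-percentile cutoff are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fixation_heatmap(fixations, image_shape, sigma=30.0):
    """Accumulate (x, y, duration_ms) fixations into a duration-weighted map,
    then smooth it with a Gaussian to approximate an attention map."""
    h, w = image_shape
    heat = np.zeros((h, w), dtype=float)
    for x, y, duration_ms in fixations:
        col, row = int(round(x)), int(round(y))
        if 0 <= row < h and 0 <= col < w:
            heat[row, col] += duration_ms
    return gaussian_filter(heat, sigma=sigma)

def regions_of_interest(heatmap, quantile=0.90):
    """Binary mask of the most attended pixels (top 10 percent by default)."""
    positive = heatmap[heatmap > 0]
    if positive.size == 0:
        return np.zeros_like(heatmap, dtype=bool)
    return heatmap >= np.quantile(positive, quantile)
```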
Prediction of User Ratings of Oral Presentations using Label Relations
T. Yamasaki, Yusuke Fukushima, Ryosuke Furuta, Litian Sun, K. Aizawa, Danushka Bollegala
{"title":"Prediction of User Ratings of Oral Presentations using Label Relations","authors":"T. Yamasaki, Yusuke Fukushima, Ryosuke Furuta, Litian Sun, K. Aizawa, Danushka Bollegala","doi":"10.1145/2813524.2813533","DOIUrl":"https://doi.org/10.1145/2813524.2813533","url":null,"abstract":"Predicting the users' impressions on a video talk is an important step for recommendation tasks. We propose a method to accurately predict multiple impression-related user ratings for a given video talk. Our proposal considers (a) multimodal features including linguistic as well as acoustic features, (b) correlations between different user ratings (labels), and (c) correlations between different feature types. In particular, the proposed method models both label and feature correlations within a single Markov random field (MRF), and jointly optimizes the label assignment problem to obtain a consistent and multiple set of labels for a given video. We train and evaluate the proposed method using a collection of 1,646 TED talk videos for 14 different tags. Experimental results on this dataset show that the proposed method obtains a statistically significant macro-average accuracy of 93.3%, outperforming several competitive baseline methods.","PeriodicalId":197562,"journal":{"name":"Proceedings of the 1st International Workshop on Affect & Sentiment in Multimedia","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115715593","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 18
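The abstract models label and feature correlations within a single Markov random field and jointly optimizes the label assignment. The authors' exact inference procedure is not specified here; as a rough, hypothetical sketch of joint label assignment over a pairwise MRF, the code below runs iterated conditional modes over per-label unary scores and a label co-occurrence matrix.

```python
import numpy as np

def icm_joint_labels(unary, pairwise, n_iters=10):
    """Iterated conditional modes for a binary pairwise MRF.

    unary:    (L,) array, score for switching each label ON
              (e.g. log-odds from an independent per-label classifier).
    pairwise: (L, L) symmetric array, reward for turning labels i and j ON together
              (e.g. derived from label co-occurrence in the training data).
    Returns a 0/1 label vector that is a local maximum of the joint score.
    """
    L = len(unary)
    labels = (unary > 0).astype(int)  # start from the independent decisions
    for _ in range(n_iters):
        changed = False
        for i in range(L):
            # gain of setting label i to 1, given the current state of the other labels
            gain = unary[i] + pairwise[i] @ labels - pairwise[i, i] * labels[i]
            new_val = int(gain > 0)
            if new_val != labels[i]:
                labels[i] = new_val
                changed = True
        if not changed:
            break
    return labels
```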
Aesthetic Photo Enhancement using Machine Learning and Case-Based Reasoning
J. Folz, Christian Schulze, Damian Borth, A. Dengel
{"title":"Aesthetic Photo Enhancement using Machine Learning and Case-Based Reasoning","authors":"J. Folz, Christian Schulze, Damian Borth, A. Dengel","doi":"10.1145/2813524.2813531","DOIUrl":"https://doi.org/10.1145/2813524.2813531","url":null,"abstract":"Broad availability of camera devices allows users to easily create, upload, and share photos on the Internet. However, users not only want to share their photos in the very moment they acquire them, but also ask for tools to enhance the aesthetics of a photo before upload as seen by the popularity of services such as Instagram. This paper presents a semi-automatic assistant system for aesthetic photo enhancement. Our system employs a combination of machine learning and case-based reasoning techniques to provide a set of operations (contrast, brightness, color, and gamma) customized for each photo individually. The inference is based on scenery concept detection to identify enhancement potential in photos and a database of sample pictures edited by desktop publishing experts to achieve a certain look and feel. Capabilities of the presented system for instant photo enhancements were confirmed in a user study with twelve subjects indicating a clear preference over a traditional photo enhancement system, which required more time to handle and provided less satisfying results. Additionally, we demonstrate the benefit of our system in an online demo.","PeriodicalId":197562,"journal":{"name":"Proceedings of the 1st International Workshop on Affect & Sentiment in Multimedia","volume":"180 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122322686","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
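The abstract combines scene-concept detection with a case base of expert-edited photos to choose contrast, brightness, colour, and gamma operations. The sketch below is an assumed illustration of how such a retrieve-then-apply step could look with Pillow; the case structure, parameter names, and cosine retrieval are not taken from the paper.

```python
import numpy as np
from PIL import Image, ImageEnhance

def retrieve_case(query_concepts, case_base):
    """Return the expert-edited case whose scene-concept vector is most
    similar (cosine) to the query photo's concept scores."""
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
    return max(case_base, key=lambda case: cosine(query_concepts, case["concepts"]))

def apply_enhancement(image: Image.Image, params: dict) -> Image.Image:
    """Apply the contrast, brightness, colour, and gamma settings of a retrieved case."""
    out = ImageEnhance.Contrast(image).enhance(params["contrast"])
    out = ImageEnhance.Brightness(out).enhance(params["brightness"])
    out = ImageEnhance.Color(out).enhance(params["color"])
    gamma = params["gamma"]
    lut = [round(255 * (i / 255.0) ** (1.0 / gamma)) for i in range(256)]
    return out.point(lut * len(out.getbands()))
```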
Continuous Arousal Self-assessments Validation Using Real-time Physiological Responses
Ting Li, Yoann Baveye, Christel Chamaret, E. Dellandréa, Liming Chen
{"title":"Continuous Arousal Self-assessments Validation Using Real-time Physiological Responses","authors":"Ting Li, Yoann Baveye, Christel Chamaret, E. Dellandréa, Liming Chen","doi":"10.1145/2813524.2813527","DOIUrl":"https://doi.org/10.1145/2813524.2813527","url":null,"abstract":"On one hand, the fact that Galvanic Skin Response (GSR) is highly correlated with the user affective arousal provides the possibility to apply GSR in emotion detection. On the other hand, temporal correlation of real-time GSR and self-assessment of arousal has not been well studied. This paper confronts two modalities representing the induced emotion when watching 30 movies extracted from the LIRIS-ACCEDE database. While continuous arousal annotations have been self-assessed by 5 participants using a joystick, real-time GSR signal of 13 other subjects is supposed to catch user emotional response, objectively without user's interpretation. As a main contribution, this paper introduces a method to make possible the temporal comparison of both signals. Thus, temporal correlation between continuous arousal peaks and GSR were calculated for all 30 movies. A global Pearson's correlation of 0.264 and a Spearman's rank correlation coefficient of 0.336 were achieved. This result proves the validity of using both signals to measure arousal and draws a reliable framework for the analysis of such signals.","PeriodicalId":197562,"journal":{"name":"Proceedings of the 1st International Workshop on Affect & Sentiment in Multimedia","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127932982","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 39
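The reported global Pearson correlation of 0.264 and Spearman correlation of 0.336 come from the paper's peak-based temporal comparison method, which is not reproduced here. As a minimal sketch of how two such signals can be compared once they cover the same time span (an assumption), the code below resamples the GSR trace to the length of the arousal annotation and computes both coefficients.

```python
import numpy as np
from scipy.signal import resample
from scipy.stats import pearsonr, spearmanr

def arousal_gsr_correlation(arousal, gsr):
    """Resample the GSR trace to the length of the continuous arousal
    annotation and return (Pearson r, Spearman rho)."""
    gsr_aligned = resample(np.asarray(gsr, dtype=float), len(arousal))
    r, _ = pearsonr(arousal, gsr_aligned)
    rho, _ = spearmanr(arousal, gsr_aligned)
    return r, rho
```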
Session details: Keynote Address - 1
M. Soleymani
{"title":"Session details: Keynote Address - 1","authors":"M. Soleymani","doi":"10.1145/3260944","DOIUrl":"https://doi.org/10.1145/3260944","url":null,"abstract":"","PeriodicalId":197562,"journal":{"name":"Proceedings of the 1st International Workshop on Affect & Sentiment in Multimedia","volume":"484 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116168574","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Session details: Oral Session 1: Audio Affective Analysis
M. Soleymani
{"title":"Session details: Oral Session 1: Audio Affective Analysis","authors":"M. Soleymani","doi":"10.1145/3260945","DOIUrl":"https://doi.org/10.1145/3260945","url":null,"abstract":"","PeriodicalId":197562,"journal":{"name":"Proceedings of the 1st International Workshop on Affect & Sentiment in Multimedia","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129231940","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Session details: Oral Session 3: Applications
Yu-Gang Jiang
{"title":"Session details: Oral Session 3: Applications","authors":"Yu-Gang Jiang","doi":"10.1145/3260947","DOIUrl":"https://doi.org/10.1145/3260947","url":null,"abstract":"","PeriodicalId":197562,"journal":{"name":"Proceedings of the 1st International Workshop on Affect & Sentiment in Multimedia","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126520767","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Session details: Oral Session 4: Sentiment analysis
Shih-Fu Chang
{"title":"Session details: Oral Session 4: Sentiment analysis","authors":"Shih-Fu Chang","doi":"10.1145/3260948","DOIUrl":"https://doi.org/10.1145/3260948","url":null,"abstract":"","PeriodicalId":197562,"journal":{"name":"Proceedings of the 1st International Workshop on Affect & Sentiment in Multimedia","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130966160","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0