2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops — Latest Publications

Assessing the validity of appraisal-based models of emotion
J. Gratch, S. Marsella, Ning Wang, B. Stankovic
{"title":"Assessing the validity of appraisal-based models of emotion","authors":"J. Gratch, S. Marsella, Ning Wang, B. Stankovic","doi":"10.1109/ACII.2009.5349443","DOIUrl":"https://doi.org/10.1109/ACII.2009.5349443","url":null,"abstract":"We describe an empirical study comparing the accuracy of competing computational models of emotion in predicting human emotional responses in naturalistic emotion-eliciting situations. The results find clear differences in models' ability to forecast human emotional responses, and provide guidance on how to develop more accurate models of human emotion.","PeriodicalId":330737,"journal":{"name":"2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131898689","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 57
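The study above rests on comparing each model's predicted emotion intensities with human self-reports across the same situations. As a rough illustration only, the following Python sketch scores hypothetical models with mean absolute error and Pearson correlation; the data, model names, and metrics are assumptions, not the paper's actual evaluation.

```python
# Minimal sketch (not the authors' code): scoring competing emotion models
# against human self-reports. Data, model names, and the agreement metrics
# (mean absolute error and Pearson r) are illustrative assumptions.
import numpy as np

def score_model(predicted, reported):
    """Return (mean absolute error, Pearson correlation) between a model's
    predicted emotion intensities and human-reported intensities."""
    predicted = np.asarray(predicted, dtype=float)
    reported = np.asarray(reported, dtype=float)
    mae = np.mean(np.abs(predicted - reported))
    r = np.corrcoef(predicted, reported)[0, 1]
    return mae, r

# Hypothetical per-situation intensity ratings on a 0-1 scale.
human_reports = [0.8, 0.2, 0.6, 0.9, 0.1]
model_predictions = {
    "appraisal_model_A": [0.7, 0.3, 0.5, 0.8, 0.2],
    "appraisal_model_B": [0.4, 0.4, 0.4, 0.5, 0.4],
}

for name, preds in model_predictions.items():
    mae, r = score_model(preds, human_reports)
    print(f"{name}: MAE={mae:.2f}, r={r:.2f}")
```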
Exploring relations between cognitive style and multimodal expression of emotion in a TV series corpus
C. Clavel, Jean-Claude Martin
{"title":"Exploring relations between cognitive style and multimodal expression of emotion in a TV series corpus","authors":"C. Clavel, Jean-Claude Martin","doi":"10.1109/ACII.2009.5349540","DOIUrl":"https://doi.org/10.1109/ACII.2009.5349540","url":null,"abstract":"In order for virtual characters to be believable when expressing emotion, researchers are trying to endow them with a personality focusing mostly on lexical approaches from Psychology. Whereas multimodal corpora are developing to inform the definition of models relating emotion and their expression in different modalities, they seldom enable to study the impact of personality on the way individuals appraise various emotional situations. In this paper we explain how we collected a TV series corpus which is relevant for the study of cognitive styles. We describe how subjects perceive multimodal expressions of emotion and personality and if there are links between personality and emotional expressions.","PeriodicalId":330737,"journal":{"name":"2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops","volume":"30 20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133362142","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Automated classification of gaze direction using spectral regression and support vector machine
S. Cadavid, M. Mahoor, D. Messinger, J. Cohn
{"title":"Automated classification of gaze direction using spectral regression and support vector machine","authors":"S. Cadavid, M. Mahoor, D. Messinger, J. Cohn","doi":"10.1109/ACII.2009.5349517","DOIUrl":"https://doi.org/10.1109/ACII.2009.5349517","url":null,"abstract":"This paper presents a framework to automatically estimate the gaze direction of an infant in an infant-parent face-to-face interaction. Commercial devices are sometimes used to produce automated measurement of the subjects' gaze direction. This approach is intrusive, requiring cooperation from the participants, and cannot be employed in interactive face-to-face communication scenarios between a parent and their infant. Alternately, the infant gazes that are at and away from the parent's face may be manually coded from captured videos by a human expert. However, this approach is labor intensive. A preferred alternative would be to automatically estimate the gaze direction of participants from captured videos. The realization of a such a system will help psychological scientists to readily study and understand the early attention of infants. One of the problems in eye region image analysis is the large dimensionality of the visual data. We address this problem by employing the spectral regression technique to project high dimensionality eye region images into a low dimensional sub-space. Represented eye region images in the low dimensional sub-space are utilized to train a Support Vector Machine (SVM) classifier to predict the gaze direction (i.e., either looking at parent's face or looking away from parent's face). The analysis of more than 39,000 video frames of naturalistic gaze shifts of multiple infants demonstrates significant agreement between a human coder and our approach. These results indicate that the proposed system provides an efficient approach to automating the estimation of gaze direction of naturalistic gaze shifts.","PeriodicalId":330737,"journal":{"name":"2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129264764","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 12
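As a rough illustration of the two-stage pipeline above (subspace projection followed by SVM classification), here is a minimal Python sketch. PCA stands in for spectral regression, which scikit-learn does not provide, and the eye-region data are random placeholders; this is an assumed approximation rather than the authors' implementation.

```python
# Minimal sketch: project high-dimensional eye-region images into a low-
# dimensional subspace, then train an SVM to classify gaze as "at" vs "away"
# from the parent's face. PCA is used as a stand-in for spectral regression,
# and the data below are random placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 32 * 32))   # 200 flattened 32x32 eye-region crops
y = rng.integers(0, 2, size=200)      # 1 = looking at parent, 0 = looking away

clf = make_pipeline(PCA(n_components=20), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```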
Mixtract: A directable musical expression system
Mitsuyo Hashida, Shunji Tanaka, H. Katayose
{"title":"Mixtract: A directable musical expression system","authors":"Mitsuyo Hashida, Shunji Tanaka, H. Katayose","doi":"10.1109/ACII.2009.5349553","DOIUrl":"https://doi.org/10.1109/ACII.2009.5349553","url":null,"abstract":"This paper describes a music performance design system focusing on phrasing, the design and development of an intuitive interface to assist music performance design system. The proposed interface has an editor to control the parameter curves of “dynamics” and “tempos” of hierarchical phrase structures, and supports analysis mechanisms for hierarchical phrase structures that lighten the users' work for music interpretation. We are interested in how a system can assist the users in designing music performances, but not to develop a full automatic system. Unlike the most automatic performance rendering systems to date, assisting the process of music interpretation and to convey the musical interpretive intent to the system are focused in this paper. The advantage of the proposed system was verified from shortening time required for music performance design. The proposed system is more beneficial from the viewpoint that it can be a platform to test various possibilities of phrasing expression.","PeriodicalId":330737,"journal":{"name":"2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115263102","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
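To make the idea of hierarchical phrases carrying dynamics and tempo curves concrete, here is a small Python sketch under assumed names and semantics (it is not Mixtract's data model): each phrase holds per-beat multiplier curves, and a child's curves modulate its parent's when the hierarchy is flattened for rendering.

```python
# Minimal sketch of a hierarchical phrase structure with parameter curves.
# Class and field names, and the multiplicative combination rule, are
# illustrative assumptions.
from dataclasses import dataclass, field
from typing import List
import numpy as np

@dataclass
class Phrase:
    name: str
    dynamics: np.ndarray          # per-beat loudness multipliers
    tempo: np.ndarray             # per-beat tempo multipliers
    children: List["Phrase"] = field(default_factory=list)

    def rendered_dynamics(self) -> np.ndarray:
        """Flatten the hierarchy: child curves scale the parent's curve."""
        if not self.children:
            return self.dynamics
        flat = np.concatenate([c.rendered_dynamics() for c in self.children])
        # resample the parent curve to the flattened length and multiply
        parent = np.interp(np.linspace(0, 1, len(flat)),
                           np.linspace(0, 1, len(self.dynamics)), self.dynamics)
        return parent * flat

motif_a = Phrase("motif A", np.array([0.8, 1.0, 1.1, 0.9]), np.ones(4))
motif_b = Phrase("motif B", np.array([0.9, 1.2, 1.0, 0.7]), np.ones(4))
whole = Phrase("whole phrase", np.array([0.9, 1.1]), np.ones(2),
               children=[motif_a, motif_b])
print(whole.rendered_dynamics())
```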
Facial and vocal emotion expression of a personal computer assistant to engage, educate and motivate children
J. Kessens, Mark Antonius Neerincx, R. Looije, M. Kroes, G. Bloothooft
{"title":"Facial and vocal emotion expression of a personal computer assistant to engage, educate and motivate children","authors":"J. Kessens, Mark Antonius Neerincx, R. Looije, M. Kroes, G. Bloothooft","doi":"10.1109/ACII.2009.5349582","DOIUrl":"https://doi.org/10.1109/ACII.2009.5349582","url":null,"abstract":"The general goal of our research is to develop a personal computer assistant that persuades children to adhere to a healthy lifestyle during daily activities at home. The assistant will be used in three different roles: as companion, educator and motivator. This study investigates whether the effectiveness of the computer assistant with an iCat robot embodiment, can be improved when it expresses emotions (tested for each of the three roles). It shows that emotion expressions can improve the effectiveness of the robot to achieve its role objectives. The improvements that we found are small, however, probably due to a ceiling effect: All subjective measures are rated very positively in the neutral condition, thus leaving little room for improvement. It also showed that the emotional speech was less intelligible, which may limit the robots' effectiveness.","PeriodicalId":330737,"journal":{"name":"2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121453127","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 29
Perception of emotional expressions in different representations using facial feature points
S. Afzal, T. M. Sezgin, Yujian Gao, P. Robinson
{"title":"Perception of emotional expressions in different representations using facial feature points","authors":"S. Afzal, T. M. Sezgin, Yujian Gao, P. Robinson","doi":"10.1109/ACII.2009.5349549","DOIUrl":"https://doi.org/10.1109/ACII.2009.5349549","url":null,"abstract":"Facial expression recognition is an enabling technology for affective computing. Many existing facial expression analysis systems rely on automatically tracked facial feature points. Although psychologists have studied emotion perception from manually specified or marker-based point-light displays, no formal study exists on the amount of emotional information conveyed through automatically tracked feature points. We assess the utility of automatically extracted feature points in conveying emotions for posed and naturalistic data and present results from an experiment that compared human raters' judgements of emotional expressions between actual video clips and three automatically generated representations of them. The implications for optimal face representation and creation of realistic animations are discussed.","PeriodicalId":330737,"journal":{"name":"2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116294893","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 24
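One simple way to quantify how well a feature-point representation preserves perceived emotion, in the spirit of the comparison above, is to correlate mean ratings of the original clips with ratings of a generated representation of them. The sketch below uses invented ratings and Pearson correlation as an assumed agreement measure, not the study's actual analysis.

```python
# Minimal sketch (assumed data, not the study's): agreement between emotion
# ratings of original video clips and ratings of a feature-point rendering
# of the same clips, measured with Pearson correlation.
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-clip mean valence ratings from two groups of raters.
video_ratings = np.array([4.2, 2.1, 3.8, 1.5, 4.9, 2.7])
pointlight_ratings = np.array([3.9, 2.4, 3.5, 1.9, 4.4, 3.1])

r, p = pearsonr(video_ratings, pointlight_ratings)
print(f"agreement between representations: r={r:.2f} (p={p:.3f})")
```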
Social signal processing: What are the relevant variables? And in what ways do they relate?
Paul M. Brunet, H. Donnan, G. McKeown, E. Douglas-Cowie, R. Cowie
{"title":"Social signal processing: What are the relevant variables? And in what ways do they relate?","authors":"Paul M. Brunet, H. Donnan, G. McKeown, E. Douglas-Cowie, R. Cowie","doi":"10.1109/ACII.2009.5349505","DOIUrl":"https://doi.org/10.1109/ACII.2009.5349505","url":null,"abstract":"Studies of the processing of social signals and behaviour tend to focus intuitively on a few variables, without a framework to guide selection. Here, we attempt to provide a broad overview of the relevant variables, describing both signs and what they signify. Those are matched by systematic consideration of how the variables relate. Variables interact not only on an intrapersonal level but also on an interpersonal level. It is also recognised explicitly that a comprehensive framework needs to embrace the role of context and individual differences in personality and culture.","PeriodicalId":330737,"journal":{"name":"2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops","volume":"246 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124710816","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 16
Measuring entrainment in small groups of musicians
A. Camurri, G. Varni, G. Volpe
{"title":"Measuring entrainment in small groups of musicians","authors":"A. Camurri, G. Varni, G. Volpe","doi":"10.1109/ACII.2009.5349471","DOIUrl":"https://doi.org/10.1109/ACII.2009.5349471","url":null,"abstract":"This research is concerned with synchronization (or entrainment), and in particular on emotional synchronization, which is likely to be an important driver of collaborative processes. We focus on ensemble musical performance, an ideal test-bed for the development of models and techniques for measuring creative social interaction in an ecologically valid framework. Ongoing work and early results on the automated analysis in real-time of non-verbal cues related to expressive gesture in social interaction is briefly presented.","PeriodicalId":330737,"journal":{"name":"2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126355365","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
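A common way to quantify entrainment between two continuous movement signals, which may or may not match the measure used above, is the phase-locking value computed from Hilbert-transform instantaneous phases. The Python sketch below applies it to two synthetic stand-ins for musicians' expressive-gesture energy; the signals and the choice of measure are assumptions.

```python
# Minimal sketch of one common entrainment measure (not necessarily the
# authors'): the phase-locking value between two movement signals.
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    """PLV in [0, 1]; 1 means the signals keep a constant phase relation."""
    phase_x = np.angle(hilbert(x))
    phase_y = np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

# Synthetic stand-ins for two musicians' expressive-gesture energy.
t = np.linspace(0, 10, 1000)
musician_a = np.sin(2 * np.pi * 1.0 * t) + 0.1 * np.random.randn(len(t))
musician_b = np.sin(2 * np.pi * 1.0 * t + 0.3) + 0.1 * np.random.randn(len(t))

print(f"phase-locking value: {phase_locking_value(musician_a, musician_b):.2f}")
```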
Resolution of focus of attention using gaze direction estimation and saliency computation
Zeynep Yücel, A. A. Salah
{"title":"Resolution of focus of attention using gaze direction estimation and saliency computation","authors":"Zeynep Yücel, A. A. Salah","doi":"10.1109/ACII.2009.5349547","DOIUrl":"https://doi.org/10.1109/ACII.2009.5349547","url":null,"abstract":"Modeling the user's attention is useful for responsive and interactive systems. This paper proposes a method for establishing joint visual attention between an experimenter and an intelligent agent. A rapid procedure is described to track the 3D head pose of the experimenter, which is used to approximate the gaze direction. The head is modeled with a sparse grid of points sampled from the surface of a cylinder. We then propose to employ a bottom-up saliency model to single out interesting objects in the neighborhood of the estimated focus of attention. We report results on a series of experiments, where a human experimenter looks at objects placed at different locations of the visual field, and the proposed algorithm is used to locate target objects automatically. Our results indicate that the proposed approach achieves high localization accuracy and thus constitutes a useful tool for the construction of natural human-computer interfaces.","PeriodicalId":330737,"journal":{"name":"2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121676728","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 11
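The fusion step described above, narrowing a saliency map to the neighborhood of the estimated gaze, can be illustrated with a toy 2D version: weight saliency by distance from a gaze ray and take the peak. The function name, Gaussian weighting, and synthetic inputs below are assumptions, not the paper's method.

```python
# Toy sketch: combine an estimated 2D gaze direction with a bottom-up saliency
# map by down-weighting saliency with distance from the gaze ray, then pick
# the most likely attended location. Gaze ray and saliency map are synthetic.
import numpy as np

def attended_point(saliency, gaze_origin, gaze_dir, sigma=20.0):
    """Return the (row, col) of the saliency peak nearest the gaze ray."""
    h, w = saliency.shape
    ys, xs = np.mgrid[0:h, 0:w]
    d = np.stack([xs - gaze_origin[0], ys - gaze_origin[1]], axis=-1).astype(float)
    gaze_dir = np.asarray(gaze_dir, float) / np.linalg.norm(gaze_dir)
    # perpendicular distance of every pixel from the gaze ray
    perp = np.abs(d[..., 0] * gaze_dir[1] - d[..., 1] * gaze_dir[0])
    weight = np.exp(-(perp ** 2) / (2 * sigma ** 2))
    score = saliency * weight
    return np.unravel_index(np.argmax(score), score.shape)

rng = np.random.default_rng(1)
saliency_map = rng.random((240, 320))          # stand-in for a saliency model
print(attended_point(saliency_map, gaze_origin=(160, 120), gaze_dir=(1.0, -0.2)))
```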
Analyzing the impact of camera viewpoint on player psychophysiology
H. P. Martínez, A. Jhala, Georgios N. Yannakakis
{"title":"Analyzing the impact of camera viewpoint on player psychophysiology","authors":"H. P. Martínez, A. Jhala, Georgios N. Yannakakis","doi":"10.1109/ACII.2009.5349592","DOIUrl":"https://doi.org/10.1109/ACII.2009.5349592","url":null,"abstract":"Information about interactive virtual environments, such as games, is perceived by users through a virtual camera. While most interactive applications let the users control the camera, in complex navigation tasks within 3D environments users often get frustrated with the interaction. In this paper, we motivate for the inclusion of camera control as a vital component of affective adaptive interaction in games and investigate the impact of camera viewpoints on psy-chophysiology of players through an evaluation game survey experiment. The statistical analysis presented demonstrates that emotional responses and physiological indexes are affected by camera settings.","PeriodicalId":330737,"journal":{"name":"2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114153976","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 29
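As an illustration of the kind of statistical comparison the abstract mentions, the sketch below runs a paired non-parametric test on one physiological index recorded from the same players under two camera settings. The heart-rate values, the choice of index, and the Wilcoxon test are assumptions rather than the paper's reported analysis.

```python
# Minimal sketch (assumed data and test, not the paper's): compare one
# physiological index under two camera viewpoints with a paired test.
import numpy as np
from scipy.stats import wilcoxon

# Mean heart rate (bpm) per player under each camera setting (hypothetical).
hr_near_camera = np.array([78.2, 82.5, 75.1, 90.3, 84.0, 79.6, 88.1, 73.4])
hr_far_camera  = np.array([74.9, 80.1, 74.8, 86.2, 82.3, 77.0, 85.5, 72.9])

stat, p = wilcoxon(hr_near_camera, hr_far_camera)
print(f"Wilcoxon signed-rank: W={stat:.1f}, p={p:.3f}")
```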