2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops: Latest Publications

Simultaneous exploitation of explicit and implicit tags in affect-based multimedia retrieval
Joep J. M. Kierkels, T. Pun
DOI: 10.1109/ACII.2009.5349433
Abstract: Affect-based retrieval of multimedia items requires tags that describe the content of these items. These tags are added by users who interact with the items. This paper shows to what extent different ways of creating tags yield similar or dissimilar information about an item. Three types of affective tags are considered: explicit self-assessed tags, implicit multimedia-based tags, and implicit physiology-based tags. The novelty of this paper is to show that affect-based retrieval accuracy (as measured by precision and recall) benefits from a database that contains both explicit and implicit tags. A database containing a mixture of explicit and implicit tags has higher retrieval accuracy than a database containing only explicit or only implicit tags, which shows that the information in explicit and implicit tags is complementary rather than redundant. The improvement in retrieval accuracy is immediately evident when explicit tags are added. Results at low recall rates are of particular importance because they focus on the most relevant items in a database. When over 60% of the items in the database with implicit tags are also tagged with explicit tags, retrieval accuracy at a low recall rate (0.1) is higher than accuracy based only on explicit tags.
Citations: 6
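The retrieval-accuracy measure this abstract relies on, precision evaluated at a fixed recall level, can be sketched as follows. The ranked list and relevance judgments below are hypothetical stand-ins, not data from the paper:

```python
def precision_at_recall(ranked_items, relevant, target_recall):
    """Precision at the first rank where recall reaches target_recall."""
    relevant = set(relevant)
    hits = 0
    for rank, item in enumerate(ranked_items, start=1):
        if item in relevant:
            hits += 1
        if hits / len(relevant) >= target_recall:
            return hits / rank
    return 0.0

# Hypothetical ranked retrieval output and ground-truth relevant set.
ranked = ["v3", "v7", "v1", "v9", "v2", "v5", "v8", "v4", "v6", "v0"]
relevant = {"v3", "v1", "v2", "v5", "v6"}

low = precision_at_recall(ranked, relevant, 0.1)   # precision at recall 0.1
full = precision_at_recall(ranked, relevant, 1.0)  # precision at full recall
```

Measuring at a low recall rate such as 0.1, as the paper does, scores only the top of the ranking, which is why it emphasises the most relevant items.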
Emotion measurement platform for daily life situations
J. Westerink, M. Ouwerkerk, Gert-Jan de Vries, Stijn de Waele, Jack van den Eerenbeemd, M. van Boven
DOI: 10.1109/ACII.2009.5349574
Abstract: The growing interest in affective computing is expected to have a beneficial impact on consumer lifestyle products. Especially empathic applications (applications that make you feel they really understand you) will serve the current consumer interest in enhanced and meaningful experiences. To do so, such applications will have to measure the user's emotional experience. Well-established psychophysiological techniques hold promise, but so far they have mainly been validated in laboratory situations. To apply them in real-life situations as well, we built an emotion measurement platform. This platform shows that emotional experiences can be measured in a relatively unobtrusive way; at the same time, it enables us to gather knowledge on emotional experiences in everyday life and offers the opportunity to prototype empathic application concepts and test them in relevant situations.
Citations: 56
Empathizing with robots: Fellow feeling along the anthropomorphic spectrum
L. Riek, Tal-Chen Rabinowitch, B. Chakrabarti, P. Robinson
DOI: 10.1109/ACII.2009.5349423
Abstract: A long-standing question within the robotics community concerns the degree of human-likeness robots ought to have when interacting with humans. We explore an unexamined aspect of this problem: how people empathize with robots along the anthropomorphic spectrum. We conducted a web-based experiment (n = 120) that measured how people empathized with four different robots shown to be experiencing mistreatment by humans. Our results indicate that people empathize more strongly with more human-looking robots and less with mechanical-looking robots. We also found that a person's general ability to empathize has no predictive value for expressed empathy toward robots.
Citations: 78
It's all in the game: Towards an affect sensitive and context aware game companion
Ginevra Castellano, Iolanda Leite, André Pereira, C. Martinho, Ana Paiva, P. McOwan
DOI: 10.1109/ACII.2009.5349558
Abstract: Robot companions must be able to display social, affective behaviour. As a prerequisite for companionship, the ability to sustain long-term interactions with users requires companions to be endowed with affect recognition abilities. This paper explores application-dependent user states in a naturalistic scenario where an iCat robot plays chess with children. In this scenario, the role of context is investigated for modelling user states related both to the task and to the social interaction with the robot. Results show that contextual features related to the game and the iCat's behaviour help discriminate among the identified states. In particular, the state and evolution of the game and the display of facial expressions by the iCat proved most significant: when the user is winning and improving in the game, her feeling is more likely to be positive, and when the iCat displays a facial expression during the game, the user's level of engagement with the iCat is higher. These findings will provide the foundation for a rigorous design of an affect recognition system for a game companion.
Citations: 50
Requirements and software framework for adaptive multimodal affect recognition
Elena Vildjiounaite, Vesa Kyllönen, Olli Vuorinen, Satu-Marja Mäkelä, Tommi Keränen, Markus Niiranen, Jouni Knuutinen, Johannes Peltola
DOI: 10.1109/ACII.2009.5349393
Abstract: This work presents a software framework for real-time multimodal affect recognition. The framework supports categorical emotional models and simultaneous classification of emotional states along different dimensions. It also makes it possible to incorporate the diverse approaches to multimodal fusion proposed in the current state of the art, and to adapt to the context-dependency of emotional expression and to different application requirements. Results from using the framework in audio-video-based emotion recognition of audiences at different shows (useful information, because the emotions of co-located people affect each other) confirm that the framework provides the desired functionality conveniently and demonstrate that the use of contextual information increases recognition accuracy.
Citations: 2
Therapy progress indicator (TPI): Combining speech parameters and the subjective unit of distress
E. V. D. Broek, F. V. D. Sluis, T. Dijkstra
DOI: 10.1109/ACII.2009.5349554
Abstract: Posttraumatic stress disorder (PTSD) is a severe handicap in daily life, and its treatment is complex. To evaluate the success of treatments, an objective and unobtrusive expert system was envisioned: a therapy progress indicator (TPI). Speech was considered an excellent candidate for providing an objective, unobtrusive emotion measure. Speech of 26 PTSD patients was recorded while they participated in two reliving sessions: re-experiencing their last panic attack and their last joyful occasion. As a subjective measure, the subjective unit of distress was determined, which enabled validation of the derived speech features. A set of parameters derived from the speech features (signal, power, zero-crossing ratio, and pitch) was found to discriminate between the two sessions. A regression model involving these parameters was able to distinguish between positive and negative distress. This model lays the foundation for a TPI for patients with PTSD, enabling objective and unobtrusive evaluation of therapies.
Citations: 6
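Of the speech features this abstract lists, the zero-crossing ratio is simple enough to sketch directly. A minimal version (treating a zero-valued sample as positive, which real implementations may handle differently, and ignoring framing and windowing):

```python
def zero_crossing_ratio(signal):
    """Fraction of adjacent sample pairs whose signs differ."""
    crossings = sum(
        1 for a, b in zip(signal, signal[1:])
        if (a >= 0) != (b >= 0)  # sign change between consecutive samples
    )
    return crossings / (len(signal) - 1)

# A toy alternating signal crosses zero at every step.
rate = zero_crossing_ratio([1, -1, 1, -1, 1])
```

Higher-pitched or noisier speech tends to produce a higher ratio, which is why it is a common low-cost feature in vocal affect work.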
A collaborative personalized affective video retrieval system
M. Soleymani, J. Davis, T. Pun
DOI: 10.1109/ACII.2009.5349526
Abstract: In this demonstration, a collaborative personalized affective video retrieval system is introduced. A dataset of 155 video clips extracted from Hollywood movies was annotated with the emotions felt by participants. More than 1300 annotations from 40 participants were gathered in a database to be used by the affective retrieval system. The retrieval system is able to retrieve videos based on emotional keyword queries as well as arousal and valence queries. The user's personal profile (gender, age, cultural background) was employed to improve the collaborative filtering used in retrieval.
Citations: 24
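Retrieval by arousal and valence query, as described in this abstract, can be sketched as a nearest-neighbour lookup in valence-arousal space. The clip names, annotation values, and the choice of Euclidean distance below are illustrative assumptions, not the system's actual data or ranking function:

```python
import math

# Hypothetical clip annotations: mean felt (valence, arousal) per clip,
# each on a 1-9 scale as is common in affective self-assessment.
annotations = {
    "clip_01": (7.2, 6.8),
    "clip_02": (2.1, 7.5),
    "clip_03": (5.0, 2.4),
    "clip_04": (8.1, 3.3),
}

def retrieve_by_affect(query_valence, query_arousal, k=2):
    """Rank clips by Euclidean distance to the (valence, arousal) query."""
    def dist(clip):
        v, a = annotations[clip]
        return math.hypot(v - query_valence, a - query_arousal)
    return sorted(annotations, key=dist)[:k]

# Query for high-valence, high-arousal (roughly "excited/joyful") clips.
top = retrieve_by_affect(8.0, 7.0)
```

The personalization step in the paper (weighting by profile similarity in collaborative filtering) would sit on top of such a lookup; it is not sketched here.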
Dynamic emotion and personality synthesis
I. Wilson
DOI: 10.1109/ACII.2009.5349475
Abstract: Emotion AI's Emotion and Personality Synthesis technology addresses the problem of displaying expressive, lifelike digital characters without the time and cost involved in traditional hand animation or motion capture. In addition, these characters are fully interactive: they are driven procedurally from various forms of pre-processed input, modulated through a system that simulates various neural pathways, neurochemicals, and neurotransmitters, and the output is finally modelled through the nerve pathways involved in facial expression and body-posture muscle contractions.
Citations: 2
SentiFul: Generating a reliable lexicon for sentiment analysis
Alena Neviarouskaya, H. Prendinger, M. Ishizuka
DOI: 10.1109/ACII.2009.5349575
Abstract: The main drawback of any lexicon-based sentiment analysis system is the lack of scalability. In this paper, we describe methods to automatically generate and score a new sentiment lexicon, called SentiFul, and to expand it through direct synonymy relations and morphologic modifications of known lexical units. We propose to distinguish four types of affixes (used to derive new words) depending on the role they play with regard to sentiment features: propagating, reversing, intensifying, and weakening.
Citations: 85
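The four affix roles named in the abstract (propagating, reversing, intensifying, weakening) can be illustrated with a toy scoring rule. The seed words, affix table, factors, and naive string matching below are invented for illustration and do not reproduce SentiFul's actual lexicon, scores, or morphology handling:

```python
# Hypothetical seed polarity scores in [-1, 1].
seed_scores = {"happy": 0.8, "trust": 0.6}

# One example affix per role: (role, scaling factor).
AFFIXES = {
    "un":   ("reverse", None),   # unhappy: polarity flipped
    "over": ("intensify", 1.5),  # overhappy: strengthened
    "semi": ("weaken", 0.5),     # semitrust: weakened
    "ly":   ("propagate", 1.0),  # suffix: score carried over unchanged
}

def derive_score(word):
    """Score a derived word from a known stem plus one affix, if possible."""
    for affix, (role, factor) in AFFIXES.items():
        if word.startswith(affix) and word[len(affix):] in seed_scores:
            base = seed_scores[word[len(affix):]]
        elif word.endswith(affix) and word[:-len(affix)] in seed_scores:
            base = seed_scores[word[:-len(affix)]]
        else:
            continue
        return -base if role == "reverse" else base * factor
    return None  # no known stem-affix decomposition
```

Plain string matching like this misses spelling changes at morpheme boundaries (e.g. happy → happily); a real system needs morphological normalization before the stem lookup.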
Roll and rock: Exploring the affective loop in a pen
M. B. Alonso, D. Keyson, Caroline Hummels
DOI: 10.1109/ACII.2009.5349527
Abstract: This demonstrator presents an affective pen prototype that responds to bodily expressions of stress. The pen measures two stress indicators, roll and rock, and provides multimodal feedback to engage the user in an affective loop. Scenarios are explored that either support reducing stress or stimulate a specific behavior.
Citations: 1