2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops: Latest Publications

EmoText: Applying differentiated semantic analysis in lexical affect sensing
Alexander Osherenko
DOI: 10.1109/ACII.2009.5349523 (https://doi.org/10.1109/ACII.2009.5349523)
Abstract: Recently, there has been considerable interest in the recognition of affect from written and spoken language. We developed a computer system that implements a semantic approach to lexical affect sensing. This system analyses English sentences by utilizing grammatical interdependencies between emotion words and intensifiers of emotional meaning.
Citations: 2
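The abstract describes scoring emotion words while accounting for the intensifiers grammatically attached to them. The EmoText system itself is not reproduced here; the following is a minimal illustrative sketch of that general idea, in which a hypothetical lexicon assigns valence scores to emotion words and adverbial intensifiers scale the score of the word they modify.

```python
# Illustrative sketch only: a toy lexical affect scorer in the spirit of the
# abstract, using an invented lexicon and intensifier weights. This is not the
# authors' EmoText implementation or its resources.

EMOTION_LEXICON = {          # hypothetical valence scores in [-1, 1]
    "happy": 0.8, "sad": -0.7, "angry": -0.9, "pleased": 0.6,
}
INTENSIFIERS = {             # hypothetical scaling factors
    "very": 1.5, "extremely": 2.0, "slightly": 0.5, "not": -1.0,
}

def score_sentence(dependencies):
    """Score a sentence given its emotion words and their grammatical modifiers.

    `dependencies` maps each emotion-bearing word to the list of adverbial
    modifiers attached to it by the dependency parse.
    """
    total = 0.0
    for word, modifiers in dependencies.items():
        score = EMOTION_LEXICON.get(word.lower(), 0.0)
        for mod in modifiers:
            score *= INTENSIFIERS.get(mod.lower(), 1.0)
        total += score
    return total

# "I am not very happy" -> 'happy' modified by 'not' and 'very'
print(score_sentence({"happy": ["not", "very"]}))   # -1.2 (negated and intensified)
print(score_sentence({"sad": ["slightly"]}))        # -0.35
```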
Emotion attribution to basic parametric static and dynamic stimuli
V. Visch, M. Goudbeek
DOI: 10.1109/ACII.2009.5349548 (https://doi.org/10.1109/ACII.2009.5349548)
Abstract: This research investigates the effect of basic visual stimuli on the attribution of basic emotions by the viewer. In an empirical study (N = 33) we used two groups of visually minimal expressive stimuli: dynamic and static. The dynamic stimuli consisted of an animated circle moving according to a structured set of movement parameters derived from the emotion expression literature: direction, expansion, velocity variation, fluency, and corner bending. The static stimuli consisted of the minimal visual form of a smiley; the varied parameters were mouth openness, mouth curvature, and eye rotation. We present the effects of these parameters on the attributed emotions and show how viewer affect attribution can be incorporated into man-machine interaction using minimal visual material.
Citations: 7
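The stimuli are defined by small, fixed parameter sets. As a rough illustration of how such parametric stimuli could be encoded for an experiment generator, the sketch below uses dataclasses whose field names follow the parameters listed in the abstract; the value ranges and the factorial design are invented for the example, not taken from the study.

```python
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class DynamicStimulus:
    """Animated-circle stimulus; parameter names follow the abstract."""
    direction: str           # e.g. "up" / "down" (hypothetical values)
    expansion: str           # "expanding" / "contracting"
    velocity_variation: str  # "accelerating" / "constant" / "decelerating"
    fluency: str             # "smooth" / "jerky"
    corner_bending: str      # "rounded" / "sharp"

@dataclass(frozen=True)
class StaticStimulus:
    """Minimal smiley stimulus; parameter names follow the abstract."""
    mouth_openness: float    # hypothetical range 0..1
    mouth_curvature: float   # hypothetical range -1 (frown) .. 1 (smile)
    eye_rotation: float      # hypothetical degrees

# Enumerate a toy factorial design over two of the dynamic parameters.
designs = [DynamicStimulus(d, e, "constant", "smooth", "rounded")
           for d, e in product(["up", "down"], ["expanding", "contracting"])]
print(len(designs), "dynamic stimuli in this toy design")
```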
Towards user-independent classification of multimodal emotional signals
Jonghwa Kim, E. André, Thurid Vogt
DOI: 10.1109/ACII.2009.5349495 (https://doi.org/10.1109/ACII.2009.5349495)
Abstract: Coping with differences in the expression of emotions is a challenging task not only for a machine, but also for humans. Since individual variation in the expression of emotions may occur at various stages of the emotion generation process, human beings may react quite differently to the same stimulus. Consequently, it comes as no surprise that recognition rates reported for user-dependent systems are significantly higher than those for user-independent systems. Based on empirical data from our earlier work on recognizing emotions from biosignals, speech, and their combination, we discuss the consequences of individual user differences for automated recognition systems and outline how these systems could be adapted to particular user groups.
Citations: 11
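The user-dependent versus user-independent gap the abstract refers to comes down to how the data are split: testing on held-out samples from already-seen users versus on entirely unseen users. The sketch below, using synthetic data and scikit-learn (an assumption; the paper does not prescribe a toolkit), contrasts the two evaluation protocols.

```python
# Illustrative only: synthetic "biosignal" features with a per-user offset,
# comparing user-dependent (within-user splits) and user-independent
# (leave-one-user-out) evaluation. Not the authors' data or classifier setup.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, StratifiedKFold, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_users, n_per_user = 8, 40
users = np.repeat(np.arange(n_users), n_per_user)
labels = rng.integers(0, 2, size=users.size)            # two toy emotion classes
user_offset = rng.normal(0, 2.0, size=n_users)[users]   # individual differences
X = (labels[:, None] * 1.0                               # class-related signal
     + user_offset[:, None]                              # user-specific shift
     + rng.normal(0, 1.0, size=(users.size, 4)))         # noise, 4 features

clf = SVC(kernel="rbf", gamma="scale")
dep = cross_val_score(clf, X, labels,
                      cv=StratifiedKFold(5, shuffle=True, random_state=0))
indep = cross_val_score(clf, X, labels, groups=users, cv=LeaveOneGroupOut())
print(f"user-dependent accuracy:   {dep.mean():.2f}")
print(f"user-independent accuracy: {indep.mean():.2f}")
```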
Pitch envelope based frame level score reweighed algorithm for emotion robust speaker recognition
Dongdong Li, Yingchun Yang, Ting Huang
DOI: 10.1109/ACII.2009.5349589 (https://doi.org/10.1109/ACII.2009.5349589)
Abstract: Speech with various emotions degrades the performance of speaker recognition systems. In this paper, a novel score normalization approach, the pitch envelope based frame level score reweighted (PFLSR) algorithm, is introduced to compensate for the influence of affective speech on speaker recognition. The approach assumes that, for most frames, the maximum likelihood model is not easily changed by the expressive corruption. The test frames are therefore divided into two parts according to F0: heavily affected frames and slightly affected frames. The scores of the slightly affected frames are reweighted to strengthen their confidence and to optimize the final accumulated frame score over the whole test utterance. Experiments conducted on the Mandarin Affective Speech Corpus show an improvement of 15.1% in identification rate over traditional speaker recognition.
Citations: 4
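The description amounts to partitioning test frames by how far their F0 deviates from a neutral pitch envelope and then up-weighting the scores of the slightly affected frames before accumulating over the utterance. A rough sketch of that idea follows; the threshold, weights, and toy scores are placeholders, not the published PFLSR algorithm.

```python
# Illustrative sketch of the frame-reweighting idea (not the published PFLSR
# algorithm): frames whose F0 deviates strongly from a neutral reference are
# treated as emotionally "affected" and contribute with lower relative weight.
import numpy as np

def reweighted_utterance_score(frame_scores, f0, neutral_f0,
                               rel_threshold=0.3, boost=1.5):
    """Accumulate per-frame log-likelihood scores with pitch-based weights.

    frame_scores : per-frame log-likelihoods under a speaker model
    f0           : per-frame fundamental frequency (Hz)
    neutral_f0   : the speaker's neutral-speech F0 envelope (Hz), same length
    """
    frame_scores, f0, neutral_f0 = map(np.asarray, (frame_scores, f0, neutral_f0))
    deviation = np.abs(f0 - neutral_f0) / neutral_f0
    slightly_affected = deviation < rel_threshold       # split frames by F0
    weights = np.where(slightly_affected, boost, 1.0)   # strengthen reliable frames
    return float(np.sum(weights * frame_scores) / np.sum(weights))

scores = [-2.1, -1.8, -5.0, -1.9]     # toy log-likelihoods
f0 = [210, 205, 320, 200]             # one high-arousal frame at 320 Hz
neutral = [200, 200, 200, 200]
print(reweighted_utterance_score(scores, f0, neutral))
```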
Accounting for irony and emotional oscillation in computer architectures
A. Kotov
DOI: 10.1109/ACII.2009.5349583 (https://doi.org/10.1109/ACII.2009.5349583)
Abstract: We demonstrate a computer architecture that operates on semantic structures (sentence meanings or representations of events) and simulates several emotional phenomena: top-down emotional processing, hypocrisy, emotional oscillation, sarcasm, and irony. These phenomena can be simulated through the interaction between emotional processing and operations on semantics. We rely on a multimodal corpus of oral exams to observe the use of emotional expressive cues in situations of strong conflict between internal motivation and external social constraints, and we apply these observations to make the computer model simulate the observed cases of combined emotional expression.
Citations: 8
Evaluation of multimodal sequential expressions of emotions in ECA
Radoslaw Niewiadomski, S. Hyniewska, C. Pelachaud
DOI: 10.1109/ACII.2009.5349569 (https://doi.org/10.1109/ACII.2009.5349569)
Abstract: We developed a model of multimodal sequential expressions of emotion for an Embodied Conversational Agent. The model is based on video annotations and on descriptions found in the literature. A language has been derived to describe expressions of emotions as sequences of facial and body movement signals. This paper presents an evaluation study of the model. Animations of 8 sequential expressions corresponding to the emotions anger, anxiety, cheerfulness, embarrassment, panic fear, pride, relief, and tension were realized with our model. The recognition rate of these expressions is above chance level, suggesting that our model can generate recognizable expressions of emotion, even for expressions not considered to be universally recognized.
Citations: 13
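The paper's contribution includes a description language for emotion expressions as timed sequences of facial and body signals. The sketch below shows one plausible, purely hypothetical encoding of such a sequence as plain data; the signal names and timings are invented for illustration and are not taken from the authors' language.

```python
from dataclasses import dataclass

@dataclass
class Signal:
    """One expressive signal in a multimodal sequence (names are illustrative)."""
    modality: str      # "face", "head", "gesture", "torso"
    name: str          # e.g. an action-unit or gesture label
    start: float       # seconds from expression onset
    duration: float    # seconds

# Hypothetical sequential expression of "relief": a torso movement,
# then eyebrow relaxation, then a low-intensity smile.
relief = [
    Signal("torso", "lean_back",            0.0, 0.6),
    Signal("face",  "brow_relax",           0.4, 0.8),
    Signal("face",  "smile_low_intensity",  0.9, 1.2),
]

def active_signals(expression, t):
    """Return the signals the agent should be displaying at time t."""
    return [s for s in expression if s.start <= t < s.start + s.duration]

print([s.name for s in active_signals(relief, 1.0)])
```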
Affective haptic garment enhancing communication in Second Life
D. Tsetserukou, Alena Neviarouskaya
DOI: 10.1109/ACII.2009.5349525 (https://doi.org/10.1109/ACII.2009.5349525)
Abstract: Driven by the motivation to enhance the emotionally immersive experience of communication in Second Life, we propose a conceptually novel approach to reinforcing one's own feelings and reproducing a communication partner's emotions through an affective garment, iFeel_IM!. The emotions detected from text are stimulated by innovative haptic devices integrated into iFeel_IM!.
Citations: 3
Detecting depression from facial actions and vocal prosody
J. Cohn, T. S. Kruez, I. Matthews, Ying Yang, Minh Hoai Nguyen, M. T. Padilla, Feng Zhou, F. D. L. Torre
DOI: 10.1109/ACII.2009.5349358 (https://doi.org/10.1109/ACII.2009.5349358)
Abstract: Current methods of assessing psychopathology depend almost entirely on verbal report (clinical interview or questionnaire) by patients, their family, or caregivers. They lack systematic and efficient ways of incorporating behavioral observations that are strong indicators of psychological disorder, much of which may occur outside the awareness of either individual. We compared clinical diagnoses of major depression with automatically measured facial actions and vocal prosody in patients undergoing treatment for depression. Manual FACS coding, active appearance modeling (AAM), and pitch extraction were used to measure facial and vocal expression. Classifiers, evaluated with leave-one-out validation, were SVMs for the FACS and AAM features and logistic regression for voice. Both face and voice demonstrated moderate concurrent validity with depression. Accuracy in detecting depression was 88% for manual FACS, 79% for AAM, and 79% for vocal prosody. These findings suggest the feasibility of automatic detection of depression, raise new issues in automated facial image analysis and machine learning, and have exciting implications for clinical theory and practice.
Citations: 415
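The evaluation protocol named in the abstract (SVM classifiers over FACS/AAM measurements, validated leave-one-out) can be illustrated with a small scikit-learn sketch on synthetic data; the features, sample size, and resulting accuracy here are placeholders, not the study's data or results.

```python
# Illustrative only: leave-one-out SVM evaluation in the style described in the
# abstract, on synthetic stand-ins for per-session facial-action features.
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_sessions, n_features = 60, 12              # hypothetical sizes
y = rng.integers(0, 2, size=n_sessions)      # toy labels: 1 = depressed episode
X = rng.normal(size=(n_sessions, n_features)) + 0.8 * y[:, None]  # weak class signal

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
acc = cross_val_score(clf, X, y, cv=LeaveOneOut())
print(f"leave-one-out accuracy on toy data: {acc.mean():.2f}")
```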
Learning models of speaker head nods with affective information
Jina Lee, H. Prendinger, Alena Neviarouskaya, S. Marsella
DOI: 10.1109/ACII.2009.5349543 (https://doi.org/10.1109/ACII.2009.5349543)
Abstract: During face-to-face conversation, the speaker's head is continually in motion. These movements serve a variety of important communicative functions and may also be influenced by our emotions. The goal of this work is to build a domain-independent model of speakers' head movements and to investigate the effect of using affective information during the learning process. Once the model is learned, it can be used to generate head movements for virtual agents. In this paper, we describe our machine-learning approach to predicting speakers' head nods using an annotated corpus of face-to-face human interaction and emotion labels generated by an affect recognition model. We describe the feature selection process, the training process, and a comparison of the learned models under varying conditions. The results show that using affective information helps predict head nods better than when no affective information is used.
Citations: 16
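The comparison reported in the abstract (models learned with and without affective information) boils down to training the same classifier on two feature sets. A minimal sketch with synthetic data and a scikit-learn classifier (an assumption, not the authors' learning setup) is shown below.

```python
# Illustrative only: compare nod prediction with and without an "affect" feature
# on synthetic data in which affect genuinely modulates nodding behaviour.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n = 2000
linguistic = rng.normal(size=(n, 5))       # stand-in lexical/prosodic features
affect = rng.integers(0, 3, size=n)        # toy emotion label per utterance
logit = linguistic[:, 0] + 1.5 * (affect == 2) - 0.5
nod = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
without_affect = cross_val_score(clf, linguistic, nod, cv=5).mean()
with_affect = cross_val_score(clf, np.column_stack([linguistic, affect]),
                              nod, cv=5).mean()
print(f"without affect: {without_affect:.2f}   with affect: {with_affect:.2f}")
```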
An ambient agent model for group emotion support
R. Duell, Z. Memon, Jan Treur, C. N. V. D. Wal
DOI: 10.1109/ACII.2009.5349562 (https://doi.org/10.1109/ACII.2009.5349562)
Abstract: This paper introduces an agent-based support model for group emotion, to be used by ambient systems to support teams in their emotion dynamics. Using model-based reasoning, an ambient agent analyzes the team's emotion level at present and future time points. If the team's emotion level is found to become deficient, the ambient agent supports the team by proposing, for example, that the team leader give a pep talk to certain team members. The support model has been formally designed, and simulation experiments have been performed within a dedicated software environment.
Citations: 23
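The reasoning loop described in the abstract (estimate the current team emotion level, project it forward, and trigger support if it becomes deficient) can be sketched roughly as follows; the decay-toward-baseline dynamics, threshold, and intervention message are invented stand-ins for the paper's formal model.

```python
# Illustrative only: a toy ambient-agent loop that forecasts a team's emotion
# level with simple decay-toward-baseline dynamics and proposes support when
# the current or forecast level falls below a threshold.

def forecast(level, baseline=0.4, decay=0.25, steps=5):
    """Project the emotion level `steps` time points ahead."""
    trajectory = []
    for _ in range(steps):
        level = level + decay * (baseline - level)   # drift toward baseline
        trajectory.append(round(level, 3))
    return trajectory

def ambient_agent(current_level, threshold=0.5):
    future = forecast(current_level)
    if current_level < threshold or min(future) < threshold:
        return ("support", "propose that the team leader gives a pep talk", future)
    return ("monitor", "no intervention needed", future)

print(ambient_agent(0.7))   # healthy now, but the forecast drops below 0.5 -> support
print(ambient_agent(0.45))  # already deficient -> support proposed
```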