{"title":"EmoText: Applying differentiated semantic analysis in lexical affect sensing","authors":"Alexander Osherenko","doi":"10.1109/ACII.2009.5349523","DOIUrl":"https://doi.org/10.1109/ACII.2009.5349523","url":null,"abstract":"Recently, there has been considerable interest in the recognition of affect from written and spoken language. We developed a computer system that implements a semantic approach to lexical affect sensing. This system analyses English sentences utilizing grammatical interdependencies between emotion words and intensifiers of emotional meaning.","PeriodicalId":330737,"journal":{"name":"2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126834770","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Emotion attribution to basic parametric static and dynamic stimuli","authors":"V. Visch, M. Goudbeek","doi":"10.1109/ACII.2009.5349548","DOIUrl":"https://doi.org/10.1109/ACII.2009.5349548","url":null,"abstract":"The following research investigates the effect of basic visual stimuli on the attribution of basic emotions by the viewer. In an empirical study (N = 33) we used two groups of visually minimal expressive stimuli: dynamic and static. The dynamic stimuli consisted of an animated circle moving according to a structured set of movement parameters, derived from emotion expression literature. The parameters are direction, expansion, velocity variation, fluency, and corner bending. The static stimuli consisted of the minimal visual form of a smiley. The varied parameters were mouth openness, mouth curvature, and eye rotation. The findings describing the effect of the parameters on attributed emotions are presented. This paper shows how specific viewer affect attribution can be included in men machine interaction using minimal visual material.","PeriodicalId":330737,"journal":{"name":"2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125233262","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards user-independent classification of multimodal emotional signals","authors":"Jonghwa Kim, E. André, Thurid Vogt","doi":"10.1109/ACII.2009.5349495","DOIUrl":"https://doi.org/10.1109/ACII.2009.5349495","url":null,"abstract":"Coping with differences in the expression of emotions is a challenging task not only for a machine, but also for humans. Since individualism in the expression of emotions may occur at various stages of the emotion generation process, human beings may react quite differently to the same stimulus. Consequently, it comes as no surprise that recognition rates reported for a user-dependent system are significantly higher than recognition rates for a user-independent system. Based on empirical data we obtained in our earlier work on the recognition of emotions from biosignals, speech and their combination, we discuss which consequences arise from individual user differences for automated recognition systems and outline how these systems could be adapted to particular user groups.","PeriodicalId":330737,"journal":{"name":"2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops","volume":"121 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115075687","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Pitch envelope based frame level score reweighed algorithm for emotion robust speaker recognition","authors":"Dongdong Li, Yingchun Yang, Ting Huang","doi":"10.1109/ACII.2009.5349589","DOIUrl":"https://doi.org/10.1109/ACII.2009.5349589","url":null,"abstract":"Speech with various emotions aggravates the performance of speaker recognition systems. In this paper, a novel score normalization approach called pitch envelope based frame level score reweighted (PFLSR) algorithm is introduced to compensate the influence of the affective speech on speaker recognition. The approach assumes that the maximum likelihood model is not easily changed with the expressive corruption for most of the frames. Thus the test frames are divided into two parts according to F0, the heavily affected ones and the slightly affected ones. The confidences of the slightly affected frames are reweighted into new scores to strengthen their confidence, and to optimize the final accumulated frame scores over the whole test utterance. The experiments are conducted on the Mandarin Affective Speech Corpus. An improvement of 15.1% in identification rate over the traditional speaker recognition is achieved.","PeriodicalId":330737,"journal":{"name":"2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115733181","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Accounting for irony and emotional oscillation in computer architectures","authors":"A. Kotov","doi":"10.1109/ACII.2009.5349583","DOIUrl":"https://doi.org/10.1109/ACII.2009.5349583","url":null,"abstract":"We demonstrate computer architecture, operating on semantic structures (sentence meanings or representations of events) and simulating several emotional phenomena: top-down emotional processing, hypocrisy, emotional oscillation, sarcasm and irony. The phenomena can be simulated through the interaction between emotional processing and operations with semantics. We rely on a multimodal corpus of oral exams to observe the usage of emotional expressive cues in situations of strong conflict between internal motivation and external social limitations. We apply the observations to make the computer model simulate the observed cases of combined emotional expressions.","PeriodicalId":330737,"journal":{"name":"2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114771861","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluation of multimodal sequential expressions of emotions in ECA","authors":"Radoslaw Niewiadomski, S. Hyniewska, C. Pelachaud","doi":"10.1109/ACII.2009.5349569","DOIUrl":"https://doi.org/10.1109/ACII.2009.5349569","url":null,"abstract":"A model of multimodal sequential expressions of emotion for an Embodied Conversational Agent was developed. The model is based on video annotations and on descriptions found in the literature. A language has been derived to describe expressions of emotions as a sequence of facial and body movement signals. An evaluation study of our model is presented in this paper. Animations of 8 sequential expressions corresponding to the emotions — anger, anxiety, cheerfulness, embarrassment, panic fear, pride, relief, and tension — were realized with our model. The recognition rate of these expressions is higher than the chance level making us believe that our model is able to generate recognizable expressions of emotions, even for the emotional expressions not considered to be universally recognized.","PeriodicalId":330737,"journal":{"name":"2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121686608","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Affective haptic garment enhancing communication in second life","authors":"D. Tsetserukou, Alena Neviarouskaya","doi":"10.1109/ACII.2009.5349525","DOIUrl":"https://doi.org/10.1109/ACII.2009.5349525","url":null,"abstract":"Driven by the motivation to enhance emotionally immersive experience of communication in Second Life, we propose a conceptually novel approach to reinforcing own feelings and reproducing the communicating partner's emotions through affective garment, iFeel_IM!. The emotions detected from text are stimulated by innovative haptic devices integrated into iFeel_IM!.","PeriodicalId":330737,"journal":{"name":"2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121048927","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Detecting depression from facial actions and vocal prosody","authors":"J. Cohn, T. S. Kruez, I. Matthews, Ying Yang, Minh Hoai Nguyen, M. T. Padilla, Feng Zhou, F. D. L. Torre","doi":"10.1109/ACII.2009.5349358","DOIUrl":"https://doi.org/10.1109/ACII.2009.5349358","url":null,"abstract":"Current methods of assessing psychopathology depend almost entirely on verbal report (clinical interview or questionnaire) of patients, their family, or caregivers. They lack systematic and efficient ways of incorporating behavioral observations that are strong indicators of psychological disorder, much of which may occur outside the awareness of either individual. We compared clinical diagnosis of major depression with automatically measured facial actions and vocal prosody in patients undergoing treatment for depression. Manual FACS coding, active appearance modeling (AAM) and pitch extraction were used to measure facial and vocal expression. Classifiers using leave-one-out validation were SVM for FACS and for AAM and logistic regression for voice. Both face and voice demonstrated moderate concurrent validity with depression. Accuracy in detecting depression was 88% for manual FACS and 79% for AAM. Accuracy for vocal prosody was 79%. These findings suggest the feasibility of automatic detection of depression, raise new issues in automated facial image analysis and machine learning, and have exciting implications for clinical theory and practice.","PeriodicalId":330737,"journal":{"name":"2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122594788","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning models of speaker head nods with affective information","authors":"Jina Lee, H. Prendinger, Alena Neviarouskaya, S. Marsella","doi":"10.1109/ACII.2009.5349543","DOIUrl":"https://doi.org/10.1109/ACII.2009.5349543","url":null,"abstract":"During face-to-face conversation, the speaker's head is continually in motion. These movements serve a variety of important communicative functions, and may also be influenced by our emotions. The goal for this work is to build a domain-independent model of speaker's head movements and investigate the effect of using affective information during the learning process. Once the model is learned, it can later be used to generate head movements for virtual agents. In this paper, we describe our machine-learning approach to predict speaker's head nods using an annotated corpora of face-to-face human interaction and emotion labels generated by an affect recognition model. We describe the feature selection process, training process, and the comparison of results of the learned models under varying conditions. The results show that using affective information can help predict head nods better than when no affective information is used.","PeriodicalId":330737,"journal":{"name":"2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129669503","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An ambient agent model for group emotion support","authors":"R. Duell, Z. Memon, Jan Treur, C. N. V. D. Wal","doi":"10.1109/ACII.2009.5349562","DOIUrl":"https://doi.org/10.1109/ACII.2009.5349562","url":null,"abstract":"This paper introduces an agent-based support model for group emotion, to be used by ambient systems to support teams in their emotion dynamics. Using model-based reasoning, an ambient agent analyzes the team's emotion level for present and future time points. In case the team's emotion level is found to become deficient, the ambient agent provides support to the team by proposing the team leader, for example, to give a pep talk to certain team members. The support model has been formally designed and within a dedicated software environment, simulation experiments have been performed.","PeriodicalId":330737,"journal":{"name":"2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127686900","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}