{"title":"Affective brain-computer interfaces: Psychophysiological markers of emotion in healthy persons and in persons with amyotrophic lateral sclerosis","authors":"F. Nijboer, Stefan Carmien, E. Leon, F. O. Morin, R. Koene, U. Hoffmann","doi":"10.1109/ACII.2009.5349479","DOIUrl":"https://doi.org/10.1109/ACII.2009.5349479","url":null,"abstract":"Affective Brain-Computer Interfaces (BCI) are systems that measure signals from the peripheral and central nervous system, extract features related to affective states of the user, and use these features to adapt human-computer interaction (HCI). Affective BCIs provide new perspectives on the applicability of BCIs. Affective BCIs may serve as assessment tools and adaptive systems for HCI for the general population and may prove to be especially interesting for people with severe motor impairment. In this context, affective BCIs will enable simultaneous expression of affect and content, thus providing more quality of life for the patient and the caregiver. In the present paper, we will present psychophysiological markers for affective BCIs, and discuss their usability in the day to day life of patients with amyotrophic lateral sclerosis (ALS).","PeriodicalId":330737,"journal":{"name":"2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops","volume":"211 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114339161","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Annotating meaning of listener vocalizations for speech synthesis","authors":"Sathish Pammi, M. Schröder","doi":"10.1109/ACII.2009.5349568","DOIUrl":"https://doi.org/10.1109/ACII.2009.5349568","url":null,"abstract":"Generation of listener vocalizations is one of the major objectives of emotionally colored conversational speech synthesis. Success in this endeavor depends on the answers to three questions: What kinds of meaning are expressed through listener vocalizations? What form is suitable for a given meaning? And, in what context should which listener vocalizations be produced? In this paper, we address the first of these questions. We present a method to record natural and expressive listener vocalizations for synthesis, and describe our approach to identify a suitable categorical description of the meaning conveyed in the vocalizations. In our data, one actor produces a total of 967 listener vocalizations, in his natural speaking style and three acted emotion-specific personalities. In an open categorization scheme, we find that eleven categories occur on at least 5% of the vocalizations, and that most vocalizations are better described by two or three categories rather than a single one. Furthermore, an annotation of meaning reference, according to Buhler's Organon model, allows us to make interesting observations regarding the listener's own state, his stance towards the interlocutor, and his attitude towards the topic of the conversation.","PeriodicalId":330737,"journal":{"name":"2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114350632","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An automatic approach to virtual living based on environmental sound cues","authors":"M. Shaikh, A. Rebordão, Arturo Nakasone, Prendinger Helmut, K. Hirose","doi":"10.1109/ACII.2009.5349467","DOIUrl":"https://doi.org/10.1109/ACII.2009.5349467","url":null,"abstract":"This paper presents a novel indoor and outdoor monitoring system based on sound cues that can be used for the automatic creation of a Life-Log, health care monitoring and/or ambient communication with virtual worlds. Basically, the system detects daily life activities (e.g., laughing, talking, traveling, cooking, sleeping, etc.) and situational references (e.g., inside a train, at a park, at home, at school, etc.) by processing environmental sounds, creates a Life-Log and recreates those activities into a virtual-world. It is easily extensible, portable, feasible to implement and reveals advantages and originality compared with other life-sensing systems. The results of the perceptual tests are encouraging and the system performed satisfactorily in a noisy environment, attracting the attention and curiosity of the subjects.","PeriodicalId":330737,"journal":{"name":"2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131314821","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Pleasure-arousal-dominance driven facial expression simulation","authors":"Hana Boukricha, I. Wachsmuth, A. Hofstätter, K. Grammer","doi":"10.1109/ACII.2009.5349579","DOIUrl":"https://doi.org/10.1109/ACII.2009.5349579","url":null,"abstract":"Expressing and recognizing affective states with respect to facial expressions is an important aspect in perceiving virtual humans as more natural and believable. Based on the results of an empirical study a system for simulating emotional facial expressions for a virtual human has been evolved. This system consists of two parts: (1) a control architecture for simulating emotional facial expressions with respect to Pleasure, Arousal, and Dominance (PAD) values, (2) an expressive output component for animating the virtual human's facial muscle actions called Action Units (AUs), modeled following the Facial Action Coding System (FACS). A large face repertoire of about 6000 faces arranged in PAD-space with respect to two dominance values (dominant vs. submissive) is obtained as a result of the empirical study. Using the face repertoire an approach towards realizing facial mimicry for a virtual human based on backward mapping AUs displaying an emotional facial expression on PAD-values is outlined. A preliminary evaluation of this first approach is realized with AUs corresponding to the basic emotions Happy and Angry.","PeriodicalId":330737,"journal":{"name":"2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127829976","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Social interaction with robots and agents: Where do we stand, where do we go?","authors":"E. Hudlicka, C. Becker-Asano, Sabine Payr, K. Fischer, R. Ventura, Iolanda Leite, C. Scheve","doi":"10.1109/ACII.2009.5349472","DOIUrl":"https://doi.org/10.1109/ACII.2009.5349472","url":null,"abstract":"Robots and agents are becoming increasingly prominent in everyday life, taking on a variety of roles, including helpers, coaches, and even social companions. A core requirement for these social agents is the ability to establish and maintain long-term trusting and engaging relationship with their human users. Much research has already been done on the prerequisites for these types of social agents and robots, in affective computing, social computing and affective HCI. A number of disciplines within psychology and the social sciences are also relevant, contributing theories, data and methods relevant for the emerging areas of social robotics, and social computing in general. However, the complexity of the task of designing these social agents, and the diversity of the relevant disciplines, can be overwhelming. This paper presents a summary of a special session at ACII 2009 whose purpose was to provide an overview of the state-of-the-art in social agents and robots, and to explore some of the fundamental questions regarding their development, and the evaluation of their effectiveness.","PeriodicalId":330737,"journal":{"name":"2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130206161","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dynamic cascades with bidirectional bootstrapping for spontaneous facial action unit detection","authors":"Yunfeng Zhu, F. D. L. Torre, J. Cohn, Yujin Zhang","doi":"10.1109/ACII.2009.5349603","DOIUrl":"https://doi.org/10.1109/ACII.2009.5349603","url":null,"abstract":"A relatively unexplored problem in facial expression analysis is how to select the positive and negative samples with which to train classifiers for expression recognition. Typically, for each action unit (AU) or other expression, the peak frames are selected as positive class and the negative samples are selected from other AUs. This approach suffers from at least two drawbacks. One, because many state of the art classifiers, such as Support Vector Machines (SVMs), fail to scale well with increases in the number of training samples (e.g. for the worse case in SVM), it may be infeasible to use all potential training data. Two, it often is unclear how best to choose the positive and negative samples. If we only label the peaks as positive samples, a large imbalance will result between positive and negative samples, especially for infrequent AU. On the other hand, if all frames from onset to offset are labeled as positive, many may differ minimally or not at all from the negative class. Frames near onsets and offsets often differ little from those that precede them. In this paper, we propose Dynamic Cascades with Bidirectional Bootstrapping (DCBB) to address these issues. DCBB optimally selects positive and negative class samples in training sets. In experimental evaluations in non-posed video from the RU-FACS Database, DCBB yielded improved performance for action unit recognition relative to alternative approaches.","PeriodicalId":330737,"journal":{"name":"2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134584929","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluating the consequences of affective feedback in intelligent tutoring systems","authors":"J. Robison, Scott W. McQuiggan, James C. Lester","doi":"10.1109/ACII.2009.5349555","DOIUrl":"https://doi.org/10.1109/ACII.2009.5349555","url":null,"abstract":"The link between affect and student learning has been the subject of increasing attention in recent years. Because of the possible impacts of affective state on learning, it is a goal of many intelligent tutoring systems to attempt to control student emotional states through affective interventions. While much work has gone into improving the quality of these interventions, we are only beginning to understand the complexities of the relationships between affect, learning, and feedback. This paper investigates the consequences associated with providing affective feedback. It represents a first step toward the long-term objective of designing intelligent tutoring systems that can utilize this information for analysis of the risks and benefits of affective intervention. It reports on the results of two studies that were conducted with students interacting with affect-informed virtual agents. The studies reveal that emotion-specific risk/reward information is associated with particular affective states and suggests that future systems might leverage this information to make determinations about affective interventions.","PeriodicalId":330737,"journal":{"name":"2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130753218","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Detecting affective covert user states with passive brain-computer interfaces","authors":"T. Zander, Sabine Jatzev","doi":"10.1109/ACII.2009.5349456","DOIUrl":"https://doi.org/10.1109/ACII.2009.5349456","url":null,"abstract":"Brain-Computer Interfaces (BCIs) provide insight into ongoing cognitive and affective processes and are commonly used for direct control of human-machine systems [16]. Recently, a different type of BCI has emerged [4, 17], which instead focuses solely on the non-intrusive recognition of mental state elicited by a given primary human-machine interaction. These so-called passive BCIs (pBCIs) do, by their nature, not disturb the primary interaction, and thus allow for enhancement of human-machine systems with relatively low usage cost [12,18], especially in conjunction with gel-free sensors. Here, we apply pBCIs to detect cognitive processes containing covert user states, which are difficult to access with conventional exogenous measures. We present two variants of a task inspired by an erroneously adapting human-machine system, a scenario important in automated adaptation. In this context, we derive two related, yet complementary, applications of pBCIs. First, we show that pBCIs are capable of detecting a covert user state related to the perception of loss of control over a system. The detection is realized by exploiting non-stationarities induced by the loss of control. Second, we show that pBCIs can be used to detect a covert user state directly correlated to the user's interpretation of erroneous actions of the machine. We then demonstrate the use of this information to enhance the interaction between the user and the machine, in an experiment outside the laboratory.","PeriodicalId":330737,"journal":{"name":"2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129532283","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Integration of a semantic and affective model for realistic generation of emotional states in virtual characters","authors":"Diana Arellano, I. Lera, J. Varona, Francisco J. Perales López","doi":"10.1109/ACII.2009.5349538","DOIUrl":"https://doi.org/10.1109/ACII.2009.5349538","url":null,"abstract":"In this paper we proposed a computational model that automatically integrates a knowledge base with an affective model. The knowledge base presented as a semantic model, is used for an accurate definition of the emotional interaction of a virtual character and their environment. The affective model generates emotional states from the emotional output of the knowledge base. Visualization of emotional states is done through facial expressions automatically created using the MPEG-4 standard. In order to test the model, we designed a story that provides the events, preferences, goals, and agent's interaction, used as input for the model. As a result the emotional states obtained as output were totally coherent with the input of the model. Then, the facial expressions representing these states were evaluated by a group of persons from different academic backgrounds, proving that emotional states can be recognized in the face of the virtual character.","PeriodicalId":330737,"journal":{"name":"2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops","volume":"95 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127439680","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Effects of emotional agents on human players in the public goods game","authors":"K. Göttlicher, Sabine Stein, D. Reichardt","doi":"10.1109/ACII.2009.5349446","DOIUrl":"https://doi.org/10.1109/ACII.2009.5349446","url":null,"abstract":"In previous work we chose the public goods game as a small and adequate example scenario for emotional agents. In our experiment we focus on the influence of emotional reactions of the emotional agents on a human player. The artificial emotions of the agents are generated by a model of emotion which is adapted to the scenario. In our experiment we compare the reactions of the human player in a scenario with and without visual feedback of the other agents. As a visual feedback we use the ECA Greta to embody the emotions generated by our model. Adequate emotion recognition could be proven by an online survey validating the correct valence and intensity perception of the relevant emotions. As one of the first results, a significantly higher amount of investment can be stated for the human player in case he or she sees the facial representation of the emotional state of the other (virtual) players during the public goods game. After splitting up the game in two sections, it could be shown that this significant difference is only true for the second part of the game.","PeriodicalId":330737,"journal":{"name":"2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops","volume":"103 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126056615","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}