{"title":"An ontology-based affective tutoring system on digital arts","authors":"H. Lin, I-Hen Tsai, Rui-Ting Sun","doi":"10.1109/WACI.2011.5952985","DOIUrl":"https://doi.org/10.1109/WACI.2011.5952985","url":null,"abstract":"The aim of this paper is to introduce the design and evaluation of an ontology-based affective tutoring system on digital arts. The major clues for emotion recognition are the text pieces inputted by the learners. The semantic inference of the emotions is done by use of an ontology called OMCSNet. The system also incorporates an agent that provides feedback based on the inferred emotions. The SUS (System Usability Scale) evaluation results show that this system achieves positive usability so that the learners enjoy the interaction with the system.","PeriodicalId":319764,"journal":{"name":"2011 IEEE Workshop on Affective Computational Intelligence (WACI)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117225355","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Emotional correlates of information retrieval behaviors","authors":"Irene Lopatovska","doi":"10.1109/WACI.2011.5953145","DOIUrl":"https://doi.org/10.1109/WACI.2011.5953145","url":null,"abstract":"There is an emergent interest in the use of emotion data to improve information retrieval processes. Our study examined whether the knowledge of searchers' emotions can be used to predict their actions (and vise versa). We investigated associations between information retrieval behaviors (e.g., examination of search results) and patterns of emotional expressions around those behaviors, and found that individual search behaviors were associated with the certain types of emotional expressions. The findings can inform classification of emotions and search behaviors, and in turn lead to the development of affect-sensitive retrieval systems.","PeriodicalId":319764,"journal":{"name":"2011 IEEE Workshop on Affective Computational Intelligence (WACI)","volume":"132 ","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120865910","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Facial electromyography (fEMG) activities in response to affective visual stimulation","authors":"Jun-Wen Tan, Steffen Walter, Andreas Scheck, David Hrabal, H. Hoffmann, H. Kessler, H. Traue","doi":"10.1109/WACI.2011.5953144","DOIUrl":"https://doi.org/10.1109/WACI.2011.5953144","url":null,"abstract":"Recently, affective computing findings demonstrated that emotion processing and recognition is important in improving the quality of human computer interaction (HCI). In the present study, new data for a robust discrimination of three emotional states (negative, neutral and positive) employing two-channel facial electromyography (EMG) over zygomaticus major and corrugator supercilii will be presented. The facial EMG activities evoked upon viewing a standard set of pictures selected from the International Affective Picture System (IAPS) and additional self selected pictures revealed that positive pictures led to increased facial EMG activities over zygomaticus major (F (2, 471) = 4.23, p < 0.05), whereas negative pictures elicited greater facial EMG activities over corrugator supercilii (F (2, 476) = 3.06, p < 0.05). In addition, the correlation between facial EMG activities over these two sites and participants' ratings of stimuli pictures in dimension of valence measured by Self-Assessment Manikin (SAM) was significant (r = −0.63, p < 0.001, corrugator supercilii, r = 0.51, p < 0.05, zygomaticus major, respectively). Our results suggest that emotion inducing pictures elicit the intended emotions and that corrugator and zygomaticus EMG can effectively and reliably differentiate negative and positive emotions, respectively.","PeriodicalId":319764,"journal":{"name":"2011 IEEE Workshop on Affective Computational Intelligence (WACI)","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122984966","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mechanism, modulation, and expression of empathy in a virtual human","authors":"Hana Boukricha, I. Wachsmuth","doi":"10.1109/WACI.2011.5953146","DOIUrl":"https://doi.org/10.1109/WACI.2011.5953146","url":null,"abstract":"Empathy is believed to play a prominent role in contributing to an efficient and satisfying cooperative social interaction by adjusting one's own behavior to that of others. Thus, endowing virtual humans with the ability to empathize not only enhances their cooperative social skills, but also makes them more likeable, trustworthy, and caring. Supported by psychological models of empathy, we propose an approach to model empathy for EMMA — an Empathic MultiModal Agent — based on three processing steps: First, the Empathy Mechanism consists of an internal simulation of perceived emotional facial expressions and results in an internal emotional feedback that represents the empathic emotion. Second, the Empathy Modulation consists of modulating the empathic emotion through different predefined modulation factors. Third, the Expression of Empathy consists of triggering EMMA's multiple modalities like facial and verbal behaviors. In a conversational agent scenario involving the virtual humans MAX and EMMA, we illustrate our proposed model of empathy and we introduce a planned empirical evaluation of EMMA's empathic behavior.","PeriodicalId":319764,"journal":{"name":"2011 IEEE Workshop on Affective Computational Intelligence (WACI)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124565476","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Affect Bartender — Affective cues and their application in a conversational agent","authors":"M. Skowron, G. Paltoglou","doi":"10.1109/WACI.2011.5953152","DOIUrl":"https://doi.org/10.1109/WACI.2011.5953152","url":null,"abstract":"This paper presents methods for the detection of textual expressions of users' affective states and explores an application of these affective cues in a conversational system — Affect Bartender. We also describe the architecture of the system, core system components and a range of developed communication interfaces. The application of the described methods is illustrated with examples of dialogs conducted with experiment participants in a Virtual Reality setting.","PeriodicalId":319764,"journal":{"name":"2011 IEEE Workshop on Affective Computational Intelligence (WACI)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129268336","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automatic detection of “enthusiasm” in non-task-oriented dialogues using word co-occurrence","authors":"Michimasa Inaba, F. Toriumi, K. Ishii","doi":"10.1109/WACI.2011.5953085","DOIUrl":"https://doi.org/10.1109/WACI.2011.5953085","url":null,"abstract":"A method is proposed for automatically detecting “enthusiastic” utterances in text-based dialogues. Using conditional random fields, our proposed method distinguishes between the enthusiastic and non-enthusiastic parts of a dialogue. Testing demonstrated that it performs as well as human detection. Being able to distinguish between the enthusiastic and non-enthusiastic parts makes it possible to quantitatively analyze the phenomenon of enthusiasm, which should lead to a practical approach to the creation of non-task-oriented agents that can help generate enthusiastic dialogues.","PeriodicalId":319764,"journal":{"name":"2011 IEEE Workshop on Affective Computational Intelligence (WACI)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133327098","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Alida, a cognitive approach of text categorization","authors":"Yann Vigile Hoareau, A. E. Ghali","doi":"10.1109/WACI.2011.5953148","DOIUrl":"https://doi.org/10.1109/WACI.2011.5953148","url":null,"abstract":"This paper proposes a model of text categorization named Alida, which combines a model of categorization inspired of the classical cognitive models of categorization of Nosofsky, with a semantic space model as system of semantic knowledge representation. The model addresses large-scale text categorization applications in opinion mining in different domains and different languages. The performance in the text-mining campaign DEFT'09 shows that the model can compete with existing Natural Language Processing and Information Retrieval models.","PeriodicalId":319764,"journal":{"name":"2011 IEEE Workshop on Affective Computational Intelligence (WACI)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128393650","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluating facial displays of emotion for the android robot Geminoid F","authors":"C. Becker-Asano, H. Ishiguro","doi":"10.1109/WACI.2011.5953147","DOIUrl":"https://doi.org/10.1109/WACI.2011.5953147","url":null,"abstract":"With android robots becoming increasingly sophisticated in their technical as well as artistic design, their non-verbal expressiveness is getting closer to that of real humans. Accordingly, this paper presents results of two online surveys designed to evaluate a female android's facial display of five basic emotions. We prepared both surveys in English, German, and Japanese language allowing us to analyze for inter-cultural differences. Accordingly, we not only found that our design of the emotional expressions “fearful” and “surprised” were often confused, but also that many Japanese participants seemed to confuse “angry” with “sad” in contrast to the German and English participants. Although similar facial displays portrayed by the model person of Geminoid F achieved higher recognition rates overall, portraying fearful has been similarly difficult for the model person. We conclude that improving the android's expressiveness especially around the eyes would be a useful next step in android design. In general, these results could be complemented by an evaluation of dynamic facial expressions of Geminoid F in future research.","PeriodicalId":319764,"journal":{"name":"2011 IEEE Workshop on Affective Computational Intelligence (WACI)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122672190","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An ontology for predicting students' emotions during a quiz. Comparison with self-reported emotions","authors":"Victoria Eyharabide, A. Amandi, M. Courgeon, C. Clavel, Chahnez Zakaria, Jean-Claude Martin","doi":"10.1109/WACI.2011.5953153","DOIUrl":"https://doi.org/10.1109/WACI.2011.5953153","url":null,"abstract":"Recent research suggests that predicting students' emotions during e-learning is quite relevant but should be situated in the learning context and consider the individual profile of users. More knowledge is required for assessing the possible contributions of multiple sources of information for predicting students' emotions. In this paper we describe an ontology that we have implemented for predicting students' emotions when interacting with a quiz about Java programming. An experimental study with 17 computer science students compares the automatic predictions made by the ontology with the emotions self-reported by students.","PeriodicalId":319764,"journal":{"name":"2011 IEEE Workshop on Affective Computational Intelligence (WACI)","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126819412","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Scalable multimodal fusion for continuous affect sensing","authors":"I. Hupont, S. Ballano, S. Baldassarri, E. Cerezo","doi":"10.1109/WACI.2011.5953150","DOIUrl":"https://doi.org/10.1109/WACI.2011.5953150","url":null,"abstract":"The success of affective interfaces lies in the fusion of emotional information coming from different modalities. This paper proposes a scalable methodology for fusing multiple affect sensing modules, allowing the subsequent addition of new modules without having to retrain the existing ones. It relies on a 2-dimensional affective model and is able to output a continuous emotional path characterizing the user's affective progress over time.","PeriodicalId":319764,"journal":{"name":"2011 IEEE Workshop on Affective Computational Intelligence (WACI)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132929187","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}