Diana Arellano, I. Lera, J. Varona, Francisco J. Perales López
Title: Integration of a semantic and affective model for realistic generation of emotional states in virtual characters
DOI: 10.1109/ACII.2009.5349538
Published in: 2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops
Publication date: 2009-12-08
Citations: 5
Abstract
In this paper we propose a computational model that automatically integrates a knowledge base with an affective model. The knowledge base, represented as a semantic model, provides an accurate definition of the emotional interaction between a virtual character and its environment. The affective model generates emotional states from the emotional output of the knowledge base. Emotional states are visualized through facial expressions created automatically using the MPEG-4 standard. To test the model, we designed a story that supplies the events, preferences, goals, and agent interactions used as input. The emotional states obtained as output were fully consistent with this input. The facial expressions representing these states were then evaluated by a group of people from different academic backgrounds, showing that the emotional states can be recognized in the virtual character's face.
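The abstract describes a three-stage pipeline: a semantic knowledge base appraises events against the character's preferences and goals, an affective model converts that appraisal into an emotional state, and the state is rendered as a facial expression (MPEG-4 in the paper). The sketch below illustrates that flow under stated assumptions; all class names, appraisal rules, and emotion/expression mappings are hypothetical stand-ins, not the paper's actual model.

```python
from dataclasses import dataclass

@dataclass
class Event:
    """An event from the story, annotated by the (hypothetical)
    semantic knowledge base with its relevance to the character."""
    name: str
    desirable: bool      # does the event match the character's preferences?
    goal_relevant: bool  # does it affect the character's goals?

def appraise(event: Event) -> str:
    """Semantic-model stage: classify the event's emotional significance
    (illustrative rules, not the paper's ontology)."""
    if event.goal_relevant and event.desirable:
        return "positive"
    if event.goal_relevant and not event.desirable:
        return "negative"
    return "neutral"

def affective_state(appraisal: str) -> str:
    """Affective-model stage: derive an emotional state from the appraisal."""
    return {"positive": "joy", "negative": "distress", "neutral": "calm"}[appraisal]

def facial_expression(state: str) -> str:
    """Visualization stage: map the emotional state to an expression label,
    standing in for MPEG-4 facial animation parameters."""
    return {"joy": "smile", "distress": "frown", "calm": "relaxed"}[state]

event = Event("character receives a gift", desirable=True, goal_relevant=True)
state = affective_state(appraise(event))
print(state, facial_expression(state))  # joy smile
```

In the actual system the last stage would emit MPEG-4 facial animation parameters rather than a text label; the point of the sketch is only the event → appraisal → emotion → expression data flow.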