Javier Gonzalez-Sanchez, Maria Elena Chavez Echeagaray, R. Atkinson, W. Burleson
{"title":"基于agent的多模态情感识别软件体系结构","authors":"Javier Gonzalez-Sanchez, Maria Elena Chavez Echeagaray, R. Atkinson, W. Burleson","doi":"10.1109/WICSA.2011.32","DOIUrl":null,"url":null,"abstract":"The computer's ability to recognize human emotional states given physiological signals is gaining in popularity to create empathetic systems such as learning environments, health care systems and videogames. Despite that, there are few frameworks, libraries, architectures, or software tools, which allow systems developers to easily integrate emotion recognition into their software projects. The work reported here offers a first step to fill this gap in the lack of frameworks and models, addressing: (a) the modeling of an agent-driven component-based architecture for multimodal emotion recognition, called ABE, and (b) the use of ABE to implement a multimodal emotion recognition framework to support third-party systems becoming empathetic systems.","PeriodicalId":234615,"journal":{"name":"2011 Ninth Working IEEE/IFIP Conference on Software Architecture","volume":"21 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2011-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"47","resultStr":"{\"title\":\"ABE: An Agent-Based Software Architecture for a Multimodal Emotion Recognition Framework\",\"authors\":\"Javier Gonzalez-Sanchez, Maria Elena Chavez Echeagaray, R. Atkinson, W. Burleson\",\"doi\":\"10.1109/WICSA.2011.32\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The computer's ability to recognize human emotional states given physiological signals is gaining in popularity to create empathetic systems such as learning environments, health care systems and videogames. Despite that, there are few frameworks, libraries, architectures, or software tools, which allow systems developers to easily integrate emotion recognition into their software projects. The work reported here offers a first step to fill this gap in the lack of frameworks and models, addressing: (a) the modeling of an agent-driven component-based architecture for multimodal emotion recognition, called ABE, and (b) the use of ABE to implement a multimodal emotion recognition framework to support third-party systems becoming empathetic systems.\",\"PeriodicalId\":234615,\"journal\":{\"name\":\"2011 Ninth Working IEEE/IFIP Conference on Software Architecture\",\"volume\":\"21 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2011-06-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"47\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2011 Ninth Working IEEE/IFIP Conference on Software Architecture\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/WICSA.2011.32\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2011 Ninth Working IEEE/IFIP Conference on Software Architecture","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/WICSA.2011.32","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
ABE: An Agent-Based Software Architecture for a Multimodal Emotion Recognition Framework
Computers' ability to recognize human emotional states from physiological signals is increasingly being used to build empathetic systems such as learning environments, health-care systems, and video games. Despite this, few frameworks, libraries, architectures, or software tools allow system developers to easily integrate emotion recognition into their software projects. The work reported here offers a first step toward filling this gap, addressing: (a) the modeling of an agent-driven, component-based architecture for multimodal emotion recognition, called ABE, and (b) the use of ABE to implement a multimodal emotion recognition framework that supports third-party systems in becoming empathetic systems.
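The paper does not include code in this record, so the following is only a minimal sketch of how an agent-driven, component-based design in the spirit of ABE might be organized. All names here (SensorAgent, FusionAgent, EmotionListener, AbeSketch) are assumptions for illustration, not the paper's actual API: sensor agents each wrap one physiological-signal source, a fusion agent combines their readings into an inferred emotional state, and a third-party system becomes "empathetic" by subscribing to state updates.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of an agent-driven, component-based emotion-recognition
// pipeline in the spirit of ABE; names and structure are illustrative only.

/** A single physiological-signal source wrapped as an agent (e.g., skin conductance). */
interface SensorAgent {
    String modality();
    double read();          // latest normalized reading in [0, 1]
}

/** Callback through which a third-party system receives inferred emotional states. */
interface EmotionListener {
    void onEmotion(String label, double confidence);
}

/** Fuses readings from all registered sensor agents into one emotional-state estimate. */
class FusionAgent {
    private final List<SensorAgent> sensors = new ArrayList<>();
    private final List<EmotionListener> listeners = new ArrayList<>();

    void register(SensorAgent sensor) { sensors.add(sensor); }
    void subscribe(EmotionListener listener) { listeners.add(listener); }

    /** Poll every sensor, fuse by simple averaging, and notify all subscribers. */
    void step() {
        double sum = 0;
        for (SensorAgent s : sensors) sum += s.read();
        double arousal = sensors.isEmpty() ? 0 : sum / sensors.size();
        String label = arousal > 0.5 ? "high-arousal" : "low-arousal";
        for (EmotionListener l : listeners) l.onEmotion(label, arousal);
    }
}

public class AbeSketch {
    public static void main(String[] args) {
        FusionAgent fusion = new FusionAgent();

        // Stub sensors standing in for real physiological channels.
        fusion.register(new SensorAgent() {
            public String modality() { return "skin-conductance"; }
            public double read() { return 0.7; }
        });
        fusion.register(new SensorAgent() {
            public String modality() { return "heart-rate"; }
            public double read() { return 0.4; }
        });

        // A third-party system subscribes to emotional-state updates.
        fusion.subscribe((label, confidence) ->
            System.out.printf("emotion=%s confidence=%.2f%n", label, confidence));

        fusion.step();
    }
}
```

The key design point the sketch tries to mirror is the separation of concerns described in the abstract: each modality is encapsulated in its own agent, fusion is a distinct component, and third-party systems integrate through a narrow subscription interface rather than by handling raw signals themselves.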