{"title":"自动识别面部和身体动作单元","authors":"H. Gunes, M. Piccardi","doi":"10.1109/ICITA.2005.83","DOIUrl":null,"url":null,"abstract":"Expressive face and body gestures are among the main non-verbal communication channels in human-human interaction. Understanding human emotions through these nonverbal means is one of the necessary skills both for humans and also for the computers to interact intelligently and effectively with their human counterparts. Much progress has been achieved in affect assessment using a single measure type; however, reliable assessment typically requires the concurrent use of multiple modalities. Accordingly in this paper, we present preliminary results of automatic visual recognition of expressive face and upper-body action units (FAUs and BAUs) suitable for use in a vision-based affective multimodal framework.","PeriodicalId":371528,"journal":{"name":"Third International Conference on Information Technology and Applications (ICITA'05)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2005-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"9","resultStr":"{\"title\":\"Automatic visual recognition of face and body action units\",\"authors\":\"H. Gunes, M. Piccardi\",\"doi\":\"10.1109/ICITA.2005.83\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Expressive face and body gestures are among the main non-verbal communication channels in human-human interaction. Understanding human emotions through these nonverbal means is one of the necessary skills both for humans and also for the computers to interact intelligently and effectively with their human counterparts. Much progress has been achieved in affect assessment using a single measure type; however, reliable assessment typically requires the concurrent use of multiple modalities. Accordingly in this paper, we present preliminary results of automatic visual recognition of expressive face and upper-body action units (FAUs and BAUs) suitable for use in a vision-based affective multimodal framework.\",\"PeriodicalId\":371528,\"journal\":{\"name\":\"Third International Conference on Information Technology and Applications (ICITA'05)\",\"volume\":\"33 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2005-07-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"9\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Third International Conference on Information Technology and Applications (ICITA'05)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICITA.2005.83\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Third International Conference on Information Technology and Applications (ICITA'05)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICITA.2005.83","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Automatic visual recognition of face and body action units
Expressive face and body gestures are among the main non-verbal communication channels in human-human interaction. Understanding human emotions through these non-verbal means is a necessary skill both for humans and for computers that aim to interact intelligently and effectively with their human counterparts. Much progress has been achieved in affect assessment using a single measure type; however, reliable assessment typically requires the concurrent use of multiple modalities. Accordingly, in this paper we present preliminary results on automatic visual recognition of expressive face and upper-body action units (FAUs and BAUs) suitable for use in a vision-based affective multimodal framework.
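The abstract argues that reliable affect assessment needs concurrent modalities rather than a single measure. The sketch below is a hypothetical illustration of that idea, not the authors' implementation: it simply pools independently detected face action units (FAUs) and upper-body action units (BAUs) into one multimodal observation, with all class names and AU labels assumed for illustration.

```python
# Hypothetical sketch (not the paper's method): merging per-modality
# action-unit detections into a single multimodal observation that a
# downstream affect classifier could consume.

from dataclasses import dataclass, field
from typing import Set


@dataclass
class MultimodalObservation:
    """Detected action units from the face and upper-body channels for one frame."""
    faus: Set[str] = field(default_factory=set)  # e.g. {"AU6", "AU12"} (FACS-style labels)
    baus: Set[str] = field(default_factory=set)  # e.g. {"arms_open"} (illustrative body label)

    def combined(self) -> Set[str]:
        # Simple feature-level union with a modality prefix; an actual
        # framework would also weight and temporally align the channels.
        return {f"face:{au}" for au in self.faus} | {f"body:{au}" for au in self.baus}


if __name__ == "__main__":
    obs = MultimodalObservation(faus={"AU6", "AU12"}, baus={"arms_open"})
    print(sorted(obs.combined()))
    # ['body:arms_open', 'face:AU12', 'face:AU6']
```

A union of labels is only the simplest possible fusion; the point it illustrates is that the two channels are recognised separately and then considered together for affect assessment.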