{"title":"Emotion Recognition via Face Tracking with RealSense(TM) 3D Camera for Children with Autism","authors":"T. Tang, Pinata Winoto, Guanxing Chen","doi":"10.1145/3078072.3084321","DOIUrl":null,"url":null,"abstract":"Although there is a growing recognition of the differences, not diminished abilities, of facial affective expressivity between Typically Developing (TD) and Autism Spectrum Disorder (ASD) individuals, which might lead to the varied recognizability of conveyed emotion by both TD and ASD individuals, little is explored on the ecological validity of these findings; that is, whether spontaneous affective facial expressions can better be produced and recognized by both populations. We aimed to address these issues in the present study, using children's cartoon clips to assess two aspects of spontaneous emotion production and recognition in a context closer to real-life children's cartoon movie watching (at home or a classroom). Based on the facial landmark data and a teacher/parent's manual emotion tags (happy), we performed a computational analysis to compare the happy emotion labels generated by the automated algorithm and the human TD rater. Two pilot studies of six ASD children revealed the potential as well as challenges of such an approach.","PeriodicalId":377409,"journal":{"name":"Proceedings of the 2017 Conference on Interaction Design and Children","volume":"7 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2017 Conference on Interaction Design and Children","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3078072.3084321","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 8
Abstract
Although there is growing recognition that facial affective expressivity differs, rather than being diminished, between Typically Developing (TD) and Autism Spectrum Disorder (ASD) individuals, which may lead to varied recognizability of conveyed emotion by both TD and ASD individuals, the ecological validity of these findings has been little explored; that is, whether spontaneous affective facial expressions can be better produced and recognized by both populations. The present study aimed to address these issues, using children's cartoon clips to assess two aspects of spontaneous emotion production and recognition in a context closer to real-life cartoon viewing by children (at home or in a classroom). Based on facial landmark data and a teacher's or parent's manual emotion tags (happy), we performed a computational analysis comparing the happy-emotion labels generated by the automated algorithm with those of the human TD rater. Two pilot studies with six ASD children revealed both the potential and the challenges of such an approach.
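The abstract only briefly describes the computational analysis. The sketch below illustrates one way frame-level facial-landmark data could be turned into binary "happy" labels and compared against a human rater's tags; the landmark indices, the smile heuristic, and the threshold are illustrative assumptions, not the authors' method or the RealSense SDK's actual API.

```python
# Minimal sketch (assumed, not the paper's implementation) of comparing
# automated "happy" labels from facial-landmark data against human tags.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Frame:
    timestamp: float
    landmarks: List[Tuple[float, float]]  # (x, y) points from the face tracker


def smile_score(landmarks: List[Tuple[float, float]]) -> float:
    """Crude 'happy' cue: mouth-corner raise relative to the upper-lip center,
    normalized by face width. Indices 0/1 = mouth corners, 2 = upper-lip
    center, 3/4 = outer eye corners -- hypothetical positions for illustration."""
    (lx, ly), (rx, ry) = landmarks[0], landmarks[1]
    _, cy = landmarks[2]
    face_width = abs(landmarks[4][0] - landmarks[3][0]) or 1.0
    corner_raise = cy - (ly + ry) / 2.0  # positive when corners sit above the lip center
    return corner_raise / face_width


def auto_happy_labels(frames: List[Frame], threshold: float = 0.05) -> List[int]:
    """Label a frame 1 (happy) when the smile score exceeds an assumed threshold."""
    return [1 if smile_score(f.landmarks) > threshold else 0 for f in frames]


def agreement(auto: List[int], human: List[int]) -> float:
    """Fraction of frames on which the algorithm and the human rater agree."""
    assert len(auto) == len(human) and auto
    return sum(a == h for a, h in zip(auto, human)) / len(auto)
```

In practice the human rater's tags would be aligned to the same frame timestamps before computing agreement, and a chance-corrected statistic (e.g., Cohen's kappa) would be a more informative comparison than raw agreement.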