Authors: M. Ogino, A. Watanabe, M. Asada
Published in: 2008 7th IEEE International Conference on Development and Learning, 2008-10-10
DOI: 10.1109/DEVLRN.2008.4640837
Detection and categorization of facial image through the interaction with caregiver
This paper models the process of applied behavior analysis (ABA) therapy for eye contact in autistic children as the learning of categorization and preference through interaction with a caregiver. The proposed model consists of a learning module and a visual attention module. The learning module learns which higher-order local autocorrelation (HLAC) visual features best discriminate the visual images seen before and after a reward is given. The visual attention module determines the attention point by combining a bottom-up process based on a saliency map with a top-down process based on the learned visual features. An experiment with a virtual robot shows that, through interaction with a caregiver, the robot learns visual features corresponding first to the face and then to the eyes. After learning, the robot attends to the caregiver's face and eyes, as autistic children do in actual ABA therapy.
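The abstract's visual attention module combines a bottom-up saliency map with a top-down map driven by learned feature responses. A minimal sketch of that combination is below; the toy 3x3 maps, the linear mixing weight `alpha`, and the `attention_point` function are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch: pick an attention point by mixing a bottom-up saliency
# map with a top-down map of learned-feature responses (e.g. where
# face-like HLAC features responded). All values here are invented.

def attention_point(saliency, top_down, alpha=0.5):
    """Return the (row, col) maximizing alpha*top_down + (1-alpha)*saliency."""
    best, best_pos = float("-inf"), None
    for r, (s_row, t_row) in enumerate(zip(saliency, top_down)):
        for c, (s, t) in enumerate(zip(s_row, t_row)):
            score = alpha * t + (1 - alpha) * s
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos

# Toy maps: bottom-up saliency peaks at (0, 0); the top-down map,
# standing in for learned face/eye feature responses, peaks at (1, 2).
saliency = [[0.9, 0.1, 0.1],
            [0.1, 0.2, 0.4],
            [0.1, 0.1, 0.3]]
top_down = [[0.0, 0.1, 0.1],
            [0.1, 0.2, 0.9],
            [0.1, 0.1, 0.2]]

print(attention_point(saliency, top_down, alpha=0.0))  # bottom-up only: (0, 0)
print(attention_point(saliency, top_down, alpha=1.0))  # top-down only: (1, 2)
```

As `alpha` grows during learning, attention shifts from purely stimulus-driven saliency toward the regions the learned features prefer, mirroring the reported progression from face to eyes.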