{"title":"Toward Smart Internet of Things (IoT) Devices: Exploring the Regions of Interest for Recognition of Facial Expressions using Eye-gaze Tracking","authors":"Abdallah S. Abdallah, L. Elliott, Daniel Donley","doi":"10.1109/CCECE47787.2020.9255696","DOIUrl":null,"url":null,"abstract":"A significant portion of the internet of things (IoT) devices will become reliable products in our daily life if and only if they are equipped with strong human computer interaction (HCI) technologies, specifically visual interaction with users through affective computing. One of the major challenges faced in affective computing is recognizing facial expressions and the true emotions behind them. Despite numerous studies performed, current detection systems are ineffective at correctly identifying facial expressions with reliable accuracy, especially in case of negative expressions. Several research projects attempted to extract the recognition process that humans follow to identify facial expressions in order to replicate in smart machines without a significant success. This paper describes our interdisciplinary project whose goal is to extract and define the recognition process that humans follow when identifying the facial expressions of others. We monitor this process by identifying and analyzing the regions of interest participants look at when they are shown static emotions samples under a specific experimental setup. This paper reports the current status of data collection, experimental setup, and initial data visualization.","PeriodicalId":296506,"journal":{"name":"2020 IEEE Canadian Conference on Electrical and Computer Engineering (CCECE)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 IEEE Canadian Conference on Electrical and Computer Engineering (CCECE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CCECE47787.2020.9255696","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3
Abstract
A significant portion of Internet of Things (IoT) devices will become reliable products in our daily lives only if they are equipped with strong human-computer interaction (HCI) technologies, specifically visual interaction with users through affective computing. One of the major challenges in affective computing is recognizing facial expressions and the true emotions behind them. Despite numerous studies, current detection systems cannot identify facial expressions with reliable accuracy, especially in the case of negative expressions. Several research projects have attempted to extract the recognition process that humans follow to identify facial expressions, in order to replicate it in smart machines, without significant success. This paper describes our interdisciplinary project, whose goal is to extract and define the recognition process that humans follow when identifying the facial expressions of others. We monitor this process by identifying and analyzing the regions of interest that participants look at when they are shown static emotion samples under a specific experimental setup. This paper reports the current status of data collection, the experimental setup, and initial data visualization.
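The abstract describes mapping participants' gaze onto facial regions of interest while they view static emotion samples. The paper itself includes no code, so the following is only a minimal illustrative sketch of what one such analysis step could look like: tallying gaze fixation points against hypothetical facial ROI bounding boxes. The ROI names, coordinates, and sample fixations below are assumptions for illustration, not the authors' implementation; a real study would derive ROIs from facial landmarks on each stimulus image.

```python
# Minimal sketch (not from the paper): count eye-gaze fixations per
# facial region of interest (ROI). All ROI boxes and sample points
# below are hypothetical placeholders.
from collections import Counter

# Hypothetical ROI bounding boxes in image pixels: (x_min, y_min, x_max, y_max)
ROIS = {
    "left_eye":  (120, 140, 220, 200),
    "right_eye": (260, 140, 360, 200),
    "nose":      (200, 200, 280, 280),
    "mouth":     (180, 300, 300, 370),
}

def roi_of(x: float, y: float) -> str:
    """Return the name of the ROI containing the fixation point, or 'other'."""
    for name, (x0, y0, x1, y1) in ROIS.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return "other"

def fixation_histogram(fixations):
    """Count fixations per ROI from an iterable of (x, y) gaze points."""
    return Counter(roi_of(x, y) for x, y in fixations)

# Synthetic fixation sequence standing in for one participant's gaze trace
sample = [(150, 170), (300, 160), (240, 250), (230, 330), (50, 50)]
print(fixation_histogram(sample))
# Counter({'left_eye': 1, 'right_eye': 1, 'nose': 1, 'mouth': 1, 'other': 1})
```

Per-ROI fixation counts (or dwell times, if timestamps are available) give a simple summary of which facial regions participants attend to when judging an expression, which is the kind of initial visualization the abstract alludes to.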