{"title":"Dynamic gaze analysis: An application environment for face-to-face communication","authors":"Ülkü Arslan Aydin, Sinan Kalkan, Cengiz Acartürk","doi":"10.1109/IDAP.2017.8090249","DOIUrl":null,"url":null,"abstract":"Gaze analysis in dynamic environments has remained an unresolved problem due to the complexities that pertain to the detection and tracking of objects in the visual environment. This study provides a solution to the problem for face-to-face communication, in which the visual objects in the environment are faces. The application that has been developed for this purpose is able to detect and track faces in a video stream, and it maps gaze locations to the images, thus allowing the user to detect gaze behavior, such as gaze aversion. The application is also capable of segmentation and diarization of speech synchronously with the video stream and eye movement overlay. It allows the user to annotate speech by speech act labels. The pilot studies reveal that the application provides acceptable accuracy values in the analysis, as well as significantly reducing the time for the analyses.","PeriodicalId":111721,"journal":{"name":"2017 International Artificial Intelligence and Data Processing Symposium (IDAP)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 International Artificial Intelligence and Data Processing Symposium (IDAP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IDAP.2017.8090249","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Gaze analysis in dynamic environments has remained an unresolved problem due to the complexity of detecting and tracking objects in the visual environment. This study provides a solution to the problem for face-to-face communication, in which the visual objects in the environment are faces. The application developed for this purpose detects and tracks faces in a video stream and maps gaze locations onto the images, allowing the user to detect gaze behavior such as gaze aversion. The application is also capable of segmenting and diarizing speech synchronously with the video stream and the eye movement overlay, and it allows the user to annotate speech with speech act labels. Pilot studies show that the application achieves acceptable accuracy in the analyses while significantly reducing the time they require.
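The core idea of mapping gaze locations to detected faces can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the face boxes and gaze samples below are made-up data, and `gaze_on_face` is a hypothetical helper standing in for whatever detector and tracker the application uses. A gaze sample counts as "on-face" when it falls inside a face bounding box for that frame; otherwise it is labeled as averted.

```python
# Hypothetical sketch: classify gaze samples as "on-face" or "averted"
# given per-frame face bounding boxes (x, y, w, h) from a face detector.
# All names and data here are illustrative, not from the paper.

def gaze_on_face(gaze_point, face_boxes):
    """Return True if the gaze point falls inside any detected face box."""
    gx, gy = gaze_point
    return any(x <= gx <= x + w and y <= gy <= y + h
               for (x, y, w, h) in face_boxes)

# One tracked face box per video frame, plus gaze samples as (frame, x, y).
faces_by_frame = {0: [(100, 80, 60, 60)], 1: [(102, 82, 60, 60)]}
gaze_samples = [(0, 130, 110), (1, 20, 15)]

labels = ["on-face" if gaze_on_face((gx, gy), faces_by_frame[f]) else "averted"
          for (f, gx, gy) in gaze_samples]
print(labels)  # ['on-face', 'averted']
```

Runs of consecutive "averted" samples in such a label sequence are what an analyst would inspect as candidate gaze-aversion episodes.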