{"title":"Exploiting gaze movements for automatic video annotation","authors":"S. Vrochidis, I. Patras, Y. Kompatsiaris","doi":"10.1109/WIAMIS.2012.6226766","DOIUrl":null,"url":null,"abstract":"This paper proposes a framework for automatic video annotation by exploiting gaze movements during interactive video retrieval. In this context, we use a content-based video search engine to perform video retrieval, during which, we capture the user eye movements with an eye-tracker. We exploit these data by generating feature vectors, which are used to train a classifier that could identify shots of interest for new users. The queries submitted by new users are clustered in search topics and the viewed shots are annotated as relevant or non-relevant to the topics by the classifier. The evaluation shows that the use of aggregated gaze data can be utilized effectively for video annotation purposes.","PeriodicalId":346777,"journal":{"name":"2012 13th International Workshop on Image Analysis for Multimedia Interactive Services","volume":"18 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2012-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2012 13th International Workshop on Image Analysis for Multimedia Interactive Services","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/WIAMIS.2012.6226766","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 4
Abstract
This paper proposes a framework for automatic video annotation that exploits gaze movements during interactive video retrieval. In this context, we use a content-based video search engine to perform video retrieval, during which we capture the users' eye movements with an eye-tracker. We exploit these data by generating feature vectors, which are used to train a classifier that can identify shots of interest for new users. The queries submitted by new users are clustered into search topics, and the viewed shots are annotated by the classifier as relevant or non-relevant to those topics. The evaluation shows that aggregated gaze data can be exploited effectively for video annotation.
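A minimal sketch of the pipeline the abstract describes: per-shot gaze feature vectors are used to train a classifier that then annotates shots viewed by new users as relevant or non-relevant. The paper does not specify the gaze features or the classifier, so the feature names and the SVM choice below are assumptions for illustration only.

```python
# Sketch only: feature names and classifier are assumptions, not the
# paper's actual method.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical per-shot gaze feature vectors captured with an eye-tracker:
# [fixation_count, mean_fixation_duration_ms, total_dwell_time_ms]
X = rng.random((200, 3)) * [20, 400, 5000]
# Ground-truth relevance labels for training (1 = relevant to the topic).
y = rng.integers(0, 2, 200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train on gaze data from known users, then annotate shots viewed by
# new users as relevant (1) or non-relevant (0) to the search topic.
clf = SVC(kernel="rbf").fit(X_train, y_train)
annotations = clf.predict(X_test)
print(annotations[:10])
```

In practice the training labels would come from explicit relevance judgments gathered during the eye-tracking sessions, and the test vectors from the gaze data of new users whose queries have been clustered into the same search topics.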