Stéphanie Lopez, A. Revel, D. Lingrand, F. Precioso, V. Dusaucy, A. Giboin
Title: Catching Relevance in One Glimpse: Food or Not Food?
DOI: 10.1145/2909132.2926078
Published in: Proceedings of the International Working Conference on Advanced Visual Interfaces
Publication date: 2016-06-07
Citations: 0
Abstract
Retrieving specific categories of images from among billions usually requires an annotation step. Unfortunately, keyword-based techniques suffer from the semantic gap between a semantic concept and its digital representation. Content-Based Image Retrieval (CBIR) systems tackle this issue by assuming that semantic proximities can be mapped to similarities in the image space. Introducing relevance feedback involves the user in the task but extends the annotation step. To reduce annotation time, we want to prove that implicit relevance feedback can replace explicit feedback. In this study, we evaluate the robustness of an implicit relevance feedback system based solely on eye-tracking features (a gaze-based interest estimator, GBIE). In [5], we showed that our GBIE was representative for any set of users on "neutral images". Here, we want to show that it remains valid for more "subjective" categories such as food recipes.
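To illustrate the idea of implicit relevance feedback from gaze data, here is a minimal sketch of a gaze-based interest estimator. The abstract does not specify the actual features or formula used by the paper's GBIE, so the fixation-duration weighting, the `min_duration_ms` threshold, and all names below are assumptions for illustration only:

```python
# Hypothetical sketch: estimate per-image interest from eye-tracking
# fixations, treating total dwell time as an implicit relevance signal.
# This is NOT the paper's actual GBIE, whose features are not given here.
from dataclasses import dataclass


@dataclass
class Fixation:
    image_id: str
    duration_ms: float


def gaze_interest_scores(fixations, min_duration_ms=100.0):
    """Aggregate fixation durations per image and normalize to [0, 1].

    Fixations shorter than min_duration_ms are discarded as noise
    (micro-fixations); longer total dwell time yields a higher score.
    """
    totals = {}
    for f in fixations:
        if f.duration_ms >= min_duration_ms:
            totals[f.image_id] = totals.get(f.image_id, 0.0) + f.duration_ms
    if not totals:
        return {}
    max_total = max(totals.values())
    return {img: t / max_total for img, t in totals.items()}


# Toy session: "tart" accumulates 750 ms of dwell, "soup" 120 ms,
# and the 60 ms glance at "logo" is filtered out as a micro-fixation.
fixes = [Fixation("tart", 450), Fixation("soup", 120),
         Fixation("tart", 300), Fixation("logo", 60)]
scores = gaze_interest_scores(fixes)
```

Such scores could then stand in for explicit "relevant / not relevant" clicks when reweighting a CBIR query, which is the substitution the study sets out to validate.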