Perceptually-guided Understanding of Egocentric Video Content: Recognition of Objects to Grasp

I. González-Díaz, J. Benois-Pineau, J. Domenger, A. Rugy

Proceedings of the 2018 ACM on International Conference on Multimedia Retrieval · 2018-06-05 · DOI: 10.1145/3206025.3206073
Incorporating user perception into visual content search and understanding tasks has become one of the major trends in multimedia retrieval. We tackle the problem of object recognition guided by user perception, as indicated by the user's gaze during visual exploration, in the application domain of assistance to upper-limb amputees. Although selecting the object to be grasped constitutes a task-driven visual search, human gaze recordings are noisy due to several physiological factors. Hence, since gaze does not always point at the object of interest, we rely on video-level weak annotations indicating the object to be grasped, and propose a video-level weak loss for classification with deep CNNs. Our results show that the method achieves notably better performance than other approaches on a complex real-life dataset recorded specifically for this task, with optimal performance for fixation times of around 400-800 ms, which have a minimal impact on subjects' behavior.
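To make the idea of a video-level weak loss concrete, here is a minimal PyTorch sketch. It is not the authors' exact formulation: the function name `video_level_weak_loss`, the log-sum-exp pooling of per-frame logits, and the temperature parameter are illustrative assumptions. The only elements taken from the abstract are that supervision is a single weak, video-level label (the object to be grasped) and that the loss drives classification with a deep CNN.

```python
import torch
import torch.nn.functional as F

def video_level_weak_loss(frame_logits: torch.Tensor,
                          video_label: torch.Tensor,
                          temperature: float = 1.0) -> torch.Tensor:
    """Illustrative video-level weak loss (hypothetical formulation).

    frame_logits: (T, C) per-frame class logits from a deep CNN,
                  one row per gaze-centered frame crop.
    video_label:  scalar tensor with the single weak, video-level
                  label (the object to be grasped).
    """
    # Log-sum-exp pooling over the time axis -> (C,) video-level logits.
    # Acts as a smooth maximum, so the most confident frames dominate.
    video_logits = temperature * torch.logsumexp(frame_logits / temperature, dim=0)
    # Standard cross-entropy against the single weak video label;
    # individual (possibly noisy) frames are never supervised directly.
    return F.cross_entropy(video_logits.unsqueeze(0), video_label.unsqueeze(0))

# Toy usage: 30 frames, 10 object classes, weak label = class 3.
frame_logits = torch.randn(30, 10, requires_grad=True)
video_label = torch.tensor(3)
loss = video_level_weak_loss(frame_logits, video_label)
loss.backward()
```

The smooth-maximum pooling is one plausible way to realize the abstract's premise: frames where the gaze actually lands on the target object dominate the video-level prediction, while noisy fixations elsewhere contribute little gradient.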