Top-down visual attention control based on a particle filter for human-interactive robots
Motoyuki Ozeki, Yasuhiro Kashiwagi, Mariko Inoue, N. Oka
2011 4th International Conference on Human System Interactions (HSI 2011), 19 May 2011
DOI: 10.1109/HSI.2011.5937365
Citations: 7
Abstract
A novel visual attention model based on a particle filter is proposed: (1) a model that also has a filter-type feature, (2) a compact model independent of high-level processes, and (3) a unitary model that naturally integrates top-down modulation and bottom-up processes. These features allow the model to be applied to robots in a straightforward way and to be easily understood by developers. In this paper, we first briefly discuss human visual attention, computational models of bottom-up attention, and attentional metaphors. We then describe the proposed model and its top-down control interface. Finally, three experiments demonstrate the potential of the proposed model as an attentional metaphor and as a top-down attention control interface.
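The abstract does not spell out the algorithmic details, but the core idea of a unitary particle-filter attention model can be illustrated with a minimal sketch: particles over image coordinates are weighted by the product of a bottom-up saliency value and a top-down bias, then resampled so they concentrate on the attended region. The code below is an illustrative assumption, not the authors' implementation; the saliency map, the Gaussian top-down bias, and all names and parameters (e.g. `bottom_up_saliency`, `top_down_bias`, the particle count) are hypothetical.

```python
# Minimal sketch (assumed, not the paper's code) of a particle-filter
# attention loop that fuses a bottom-up saliency map with a top-down bias.
import numpy as np

rng = np.random.default_rng(0)
H, W, N = 60, 80, 300          # saliency-map size and particle count

def bottom_up_saliency(t):
    """Stand-in for a bottom-up saliency map: a bright blob drifting rightward."""
    ys, xs = np.mgrid[0:H, 0:W]
    cy, cx = H // 2, (10 + 2 * t) % W
    return np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * 8.0 ** 2))

def top_down_bias(target_xy, sigma=12.0):
    """Gaussian preference around a location the high-level task cares about."""
    ys, xs = np.mgrid[0:H, 0:W]
    ty, tx = target_xy
    return np.exp(-((ys - ty) ** 2 + (xs - tx) ** 2) / (2 * sigma ** 2))

# Particles: (y, x) positions spread uniformly over the image.
particles = np.column_stack([rng.uniform(0, H, N), rng.uniform(0, W, N)])

for t in range(30):
    # 1. Diffuse particles (simple random-walk motion model).
    particles += rng.normal(0.0, 2.0, particles.shape)
    particles[:, 0] = np.clip(particles[:, 0], 0, H - 1)
    particles[:, 1] = np.clip(particles[:, 1], 0, W - 1)

    # 2. Weight each particle by bottom-up saliency times the top-down bias.
    sal = bottom_up_saliency(t)
    bias = top_down_bias(target_xy=(H // 2, W // 2))
    iy, ix = particles[:, 0].astype(int), particles[:, 1].astype(int)
    w = sal[iy, ix] * bias[iy, ix] + 1e-12
    w /= w.sum()

    # 3. Resample so particles concentrate on the attended region.
    particles = particles[rng.choice(N, size=N, p=w)]

    # The attention focus can be read out as the particle mean.
    focus = particles.mean(axis=0)
    if t % 10 == 0:
        print(f"t={t:2d}  focus=({focus[0]:.1f}, {focus[1]:.1f})")
```

Under these assumptions, the filter-type property the abstract highlights falls out naturally: top-down control is just another multiplicative weight on the particles, so bottom-up and top-down influences are integrated in one resampling loop rather than in separate modules.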