{"title":"Deep-BCN: Deep Networks Meet Biased Competition to Create a Brain-Inspired Model of Attention Control","authors":"Hossein Adeli, G. Zelinsky","doi":"10.1109/CVPRW.2018.00259","DOIUrl":null,"url":null,"abstract":"The mechanism of attention control is best described by biased-competition theory (BCT), which suggests that a top-down goal state biases a competition among object representations for the selective routing of a visual input for classification. Our work advances this theory by making it computationally explicit as a deep neural network (DNN) model, thereby enabling predictions of goal-directed attention control using real-world stimuli. This model, which we call Deep-BCN, is built on top of an 8-layer DNN pre-trained for object classification, but has layers mapped to early visual (V1, V2/V3, V4), ventral (PIT, AIT), and frontal (PFC) brain areas that have their functional connectivity informed by BCT. Deep-BCN also has a superior colliculus and a frontal-eye field, and can therefore make eye movements. We compared Deep-BCN's eye movements to those made from 15 people performing a categorical search for one of 25 target object categories, and found that it predicted both the number of fixations during search and the saccade-distance travelled before search termination. With Deep-BCN a DNN implementation of BCT now exists, which can be used to predict the neural and behavioral responses of an attention control mechanism as it mediates a goal-directed behavior-in our study the eye movements made in search of a target goal.","PeriodicalId":150600,"journal":{"name":"2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"20","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CVPRW.2018.00259","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 20
Abstract
The mechanism of attention control is best described by biased-competition theory (BCT), which suggests that a top-down goal state biases a competition among object representations for the selective routing of a visual input for classification. Our work advances this theory by making it computationally explicit as a deep neural network (DNN) model, thereby enabling predictions of goal-directed attention control using real-world stimuli. This model, which we call Deep-BCN, is built on top of an 8-layer DNN pre-trained for object classification, but has layers mapped to early visual (V1, V2/V3, V4), ventral (PIT, AIT), and frontal (PFC) brain areas whose functional connectivity is informed by BCT. Deep-BCN also has a superior colliculus and a frontal eye field, and can therefore make eye movements. We compared Deep-BCN's eye movements to those made by 15 people performing a categorical search for one of 25 target object categories, and found that it predicted both the number of fixations during search and the saccade distance travelled before search termination. With Deep-BCN, a DNN implementation of BCT now exists that can be used to predict the neural and behavioral responses of an attention control mechanism as it mediates a goal-directed behavior; in our study, the eye movements made in search of a target goal.
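To make the architectural idea concrete, the sketch below illustrates one way a top-down goal state could bias competition among feature channels of a pretrained 8-layer CNN and drive fixation selection from a priority map. It is not the authors' implementation: the choice of AlexNet as the backbone, the multiplicative biasing rule, and the helper names (goal_template, priority_map, next_fixation) are all illustrative assumptions made for this example.

```python
# Minimal sketch (assumptions noted above, not the Deep-BCN code) of biased
# competition in a pretrained CNN: a top-down goal vector modulates feature
# maps, and the resulting 2-D priority map (a stand-in for the SC/FEF map)
# selects the next fixation.
import torch
import torch.nn.functional as F
from torchvision import models

# AlexNet used here only as a convenient 8-layer, ImageNet-pretrained backbone.
cnn = models.alexnet(weights=models.AlexNet_Weights.DEFAULT).eval()

def goal_template(category_exemplars):
    """Average conv features over exemplar images of the target category to
    obtain a top-down bias vector (a stand-in for the PFC goal state)."""
    with torch.no_grad():
        feats = cnn.features(category_exemplars)          # (N, C, H, W)
    return feats.mean(dim=(0, 2, 3))                      # (C,)

def priority_map(image, goal):
    """Bias competition among feature channels with the goal vector and
    collapse the biased features into a single 2-D priority map."""
    with torch.no_grad():
        feats = cnn.features(image.unsqueeze(0))          # (1, C, H, W)
    biased = feats * goal.view(1, -1, 1, 1)               # multiplicative top-down bias
    pmap = F.relu(biased).sum(dim=1, keepdim=True)        # channels compete, then pool
    return F.interpolate(pmap, size=image.shape[-2:],
                         mode="bilinear", align_corners=False)[0, 0]

def next_fixation(image, goal):
    """Pick the next fixation as the peak of the priority map."""
    pmap = priority_map(image, goal)
    idx = torch.argmax(pmap)
    return divmod(idx.item(), pmap.shape[1])              # (row, col) in image coordinates
```

In the paper's terms, goal_template plays the role of the PFC goal state and priority_map the role of the SC/FEF priority map; the paper's actual layer-to-brain-area mapping and biasing dynamics are specified by BCT and are richer than this single multiplicative step.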