J. Molin, A. Russell, Stefan Mihalas, E. Niebur, R. Etienne-Cummings
{"title":"基于原对象的运动敏感通道视觉显著性模型","authors":"J. Molin, A. Russell, Stefan Mihalas, E. Niebur, R. Etienne-Cummings","doi":"10.1109/BioCAS.2013.6679631","DOIUrl":null,"url":null,"abstract":"The human visual system has the inherent capability of using selective attention to rapidly process visual information across visual scenes. Early models of visual saliency are purely feature-based and compute visual attention for static scenes. However, to model the human visual system, it is important to also consider temporal change that may exist within the scene when computing visual saliency. We present a biologically-plausible model of dynamic visual attention that computes saliency as a function of proto-objects modulated by an independent motion-sensitive channel. This motion-sensitive channel extracts motion information via biologically plausible temporal filters modeling simple cell receptive fields. By using KL divergence measurements, we show that this model performs significantly better than chance in predicting eye fixations. Furthermore, in our experiments, this model outperforms the Itti, 2005 dynamic saliency model and insignificantly differs from the graph-based visual dynamic saliency model in performance.","PeriodicalId":344317,"journal":{"name":"2013 IEEE Biomedical Circuits and Systems Conference (BioCAS)","volume":"257 ","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2013-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"15","resultStr":"{\"title\":\"Proto-object based visual saliency model with a motion-sensitive channel\",\"authors\":\"J. Molin, A. Russell, Stefan Mihalas, E. Niebur, R. Etienne-Cummings\",\"doi\":\"10.1109/BioCAS.2013.6679631\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The human visual system has the inherent capability of using selective attention to rapidly process visual information across visual scenes. Early models of visual saliency are purely feature-based and compute visual attention for static scenes. However, to model the human visual system, it is important to also consider temporal change that may exist within the scene when computing visual saliency. We present a biologically-plausible model of dynamic visual attention that computes saliency as a function of proto-objects modulated by an independent motion-sensitive channel. This motion-sensitive channel extracts motion information via biologically plausible temporal filters modeling simple cell receptive fields. By using KL divergence measurements, we show that this model performs significantly better than chance in predicting eye fixations. 
Furthermore, in our experiments, this model outperforms the Itti, 2005 dynamic saliency model and insignificantly differs from the graph-based visual dynamic saliency model in performance.\",\"PeriodicalId\":344317,\"journal\":{\"name\":\"2013 IEEE Biomedical Circuits and Systems Conference (BioCAS)\",\"volume\":\"257 \",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2013-12-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"15\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2013 IEEE Biomedical Circuits and Systems Conference (BioCAS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/BioCAS.2013.6679631\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2013 IEEE Biomedical Circuits and Systems Conference (BioCAS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/BioCAS.2013.6679631","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Proto-object based visual saliency model with a motion-sensitive channel
The human visual system has the inherent capability of using selective attention to rapidly process visual information across visual scenes. Early models of visual saliency are purely feature-based and compute visual attention only for static scenes. However, to model the human visual system, it is important to also consider the temporal change that may exist within a scene when computing visual saliency. We present a biologically plausible model of dynamic visual attention that computes saliency as a function of proto-objects modulated by an independent motion-sensitive channel. This motion-sensitive channel extracts motion information via biologically plausible temporal filters that model simple-cell receptive fields. Using KL divergence measurements, we show that this model performs significantly better than chance in predicting eye fixations. Furthermore, in our experiments, this model outperforms the Itti (2005) dynamic saliency model and does not differ significantly in performance from the graph-based dynamic visual saliency model.
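As a rough illustration of the kind of "biologically plausible temporal filter" the abstract refers to, the sketch below applies a biphasic temporal impulse response (in the style of Adelson and Bergen's simple-cell model) along the time axis of a video. The exact filter form, parameters, and rectification used in the paper are assumptions here, not taken from the source.

```python
# Minimal sketch: biphasic temporal filtering of a grayscale video as a
# stand-in for a motion-sensitive channel. Filter form and parameters are
# assumptions (Adelson-Bergen-style kernel), not the paper's exact choices.
import math
import numpy as np

def biphasic_temporal_kernel(k=100.0, n=3, length=16, dt=1.0 / 30.0):
    """Biphasic impulse response f(t) = (kt)^n e^{-kt} [1/n! - (kt)^2/(n+2)!]."""
    t = np.arange(length) * dt
    kt = k * t
    f = (kt ** n) * np.exp(-kt) * (1.0 / math.factorial(n)
                                   - (kt ** 2) / math.factorial(n + 2))
    return f / (np.abs(f).sum() + 1e-12)  # unit L1 gain

def motion_channel(frames, kernel=None):
    """Convolve a (T, H, W) grayscale video along time and rectify.

    The rectified response serves as a per-frame temporal-change (motion) map
    that could modulate a proto-object saliency map.
    """
    if kernel is None:
        kernel = biphasic_temporal_kernel()
    frames = np.asarray(frames, dtype=np.float64)
    # Per-pixel 1-D convolution over time; slow but clear for illustration.
    response = np.apply_along_axis(
        lambda px: np.convolve(px, kernel, mode="same"), axis=0, arr=frames)
    return np.abs(response)
```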
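The KL-divergence evaluation mentioned in the abstract is commonly computed by comparing the distribution of saliency values sampled at human fixation locations against the distribution at randomly sampled locations, with chance performance giving a divergence near zero. The sketch below shows this style of scoring; the binning, sampling counts, and normalization are assumptions rather than the paper's exact protocol.

```python
# Minimal sketch of a KL-divergence fixation-prediction score: saliency values
# at fixated locations vs. at random locations. Bin counts and sampling are
# assumptions, not the paper's exact evaluation settings.
import numpy as np

def kl_fixation_score(saliency_map, fixations, n_random=1000, n_bins=20,
                      eps=1e-9, rng=None):
    """KL divergence between saliency histograms at fixated vs. random points.

    saliency_map : (H, W) array of saliency values.
    fixations    : iterable of (row, col) fixation coordinates.
    A score near 0 means the model is at chance; larger is better.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = saliency_map.shape

    fix_vals = np.array([saliency_map[r, c] for r, c in fixations])
    rand_vals = saliency_map[rng.integers(0, h, size=n_random),
                             rng.integers(0, w, size=n_random)]

    bins = np.linspace(saliency_map.min(), saliency_map.max(), n_bins + 1)
    p, _ = np.histogram(fix_vals, bins=bins)
    q, _ = np.histogram(rand_vals, bins=bins)
    p = p / p.sum() + eps
    q = q / q.sum() + eps

    return float(np.sum(p * np.log(p / q)))
```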