Dynamic Saliency Model Inspired by Middle Temporal Visual Area: A Spatio-Temporal Perspective
Hassan Mahmood, S. Islam, S. O. Gilani, Y. Ayaz
2018 Digital Image Computing: Techniques and Applications (DICTA), December 2018
DOI: 10.1109/DICTA.2018.8615806
Citations: 0
Abstract
As technology advances, the volume of digital visual data grows daily, and there is a pressing need for systems that can understand it. For computers this is a daunting task, yet the human brain performs it efficiently and apparently effortlessly. This paper aims to devise a dynamic saliency model inspired by the human visual system. Most existing models are based on low-level image features and focus on static or dynamic images, and they do not align well with human gaze movements in dynamic scenes. We demonstrate that a model combining bio-inspired spatio-temporal features with high-level and low-level features outperforms the listed models in predicting human fixations on dynamic visual input. Our comparison with other models is based on eye-movement recordings of human participants observing dynamic natural scenes.
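The fusion of spatio-temporal, high-level, and low-level feature maps into a single saliency map can be sketched as below. This is a minimal illustration of map normalization and weighted summation; the specific weights, the normalization scheme, and the function names are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def normalize_map(m):
    # Rescale a feature map to [0, 1]; a constant map becomes all zeros.
    rng = m.max() - m.min()
    return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)

def combine_saliency(spatio_temporal, high_level, low_level,
                     weights=(1.0, 1.0, 1.0)):
    # Weighted sum of normalized feature maps, renormalized to [0, 1].
    # The equal default weights are an assumption for illustration.
    maps = (spatio_temporal, high_level, low_level)
    fused = sum(w * normalize_map(m) for w, m in zip(weights, maps))
    return normalize_map(fused)
```

In practice each input map (e.g. motion energy for the spatio-temporal channel, object detections for the high-level channel, color/contrast for the low-level channel) would be computed per video frame and fused this way before comparison against recorded fixation maps.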