Dynamic Saliency Model Inspired by Middle Temporal Visual Area: A Spatio-Temporal Perspective

Hassan Mahmood, S. Islam, S. O. Gilani, Y. Ayaz
{"title":"Dynamic Saliency Model Inspired by Middle Temporal Visual Area: A Spatio-Temporal Perspective","authors":"Hassan Mahmood, S. Islam, S. O. Gilani, Y. Ayaz","doi":"10.1109/DICTA.2018.8615806","DOIUrl":null,"url":null,"abstract":"With the advancement in technology, digital visual data is also increasing day by day. And there is a great need to develop systems that can understand it. For computers, this is a daunting task to do but our brain efficiently and apparently effortlessly doing this task very well. This paper aims to devise a dynamic saliency model inspired by the human visual system. Most models are based on low-level image features and focus on static and dynamic images. And those models do not perform well in accordance with the human gaze movement for dynamic scenes. We here demonstrate that a combined model of bio-inspired spatio-temporal features, high-level and low-level features outperform listed models in predicting human fixation on dynamic visual input. Our comparison with other models is based on eye-movement recordings of human participants observing dynamic natural scenes.","PeriodicalId":130057,"journal":{"name":"2018 Digital Image Computing: Techniques and Applications (DICTA)","volume":"108 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 Digital Image Computing: Techniques and Applications (DICTA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/DICTA.2018.8615806","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

With the advancement of technology, digital visual data is increasing day by day, and there is a growing need for systems that can understand it. For computers this is a daunting task, yet the human brain performs it efficiently and apparently effortlessly. This paper devises a dynamic saliency model inspired by the human visual system. Most existing models are based on low-level image features of static and dynamic images, and they do not agree well with human gaze movements on dynamic scenes. We demonstrate that a model combining bio-inspired spatio-temporal features with high-level and low-level features outperforms the listed models in predicting human fixations on dynamic visual input. Our comparison with other models is based on eye-movement recordings of human participants observing dynamic natural scenes.
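For illustration, the sketch below shows one simple way a fusion of spatial (static) and temporal (motion) feature maps into a single saliency map could look in Python with NumPy. It is a minimal sketch under stated assumptions: the frame-difference motion feature, the contrast-like static feature, the function names, and the equal fusion weights are illustrative choices, not the MT-inspired features or weights used in the paper.

```python
import numpy as np

def normalize(m):
    """Rescale a 2-D map to [0, 1]; return zeros if the map is constant."""
    m = m.astype(np.float64)
    rng = m.max() - m.min()
    return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)

def combined_saliency(static_map, motion_map, w_static=0.5, w_motion=0.5):
    """Fuse a spatial saliency map with a temporal (motion) map.

    Illustrative only: the paper's bio-inspired spatio-temporal features,
    high-level features, and fusion scheme are not reproduced here.
    Both inputs are assumed to be 2-D arrays of the same shape.
    """
    fused = w_static * normalize(static_map) + w_motion * normalize(motion_map)
    return normalize(fused)

# Toy example: frame differencing as a crude temporal feature and deviation
# from mean intensity as a crude spatial feature.
prev_frame = np.random.rand(120, 160)
curr_frame = np.random.rand(120, 160)
motion = np.abs(curr_frame - prev_frame)
static = np.abs(curr_frame - curr_frame.mean())
saliency_map = combined_saliency(static, motion)
print(saliency_map.shape, float(saliency_map.max()))
```

The resulting map could then be compared against recorded human fixation locations, which is the kind of evaluation the paper reports against other models.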