Comparative study of visual saliency maps in the problem of classification of architectural images with Deep CNNs

A. M. Obeso, J. Benois-Pineau, Kamel Guissous, V. Gouet-Brunet, M. García-Vázquez, A. A. Ramírez-Acosta
{"title":"Comparative study of visual saliency maps in the problem of classification of architectural images with Deep CNNs","authors":"A. M. Obeso, J. Benois-Pineau, Kamel Guissous, V. Gouet-Brunet, M. García-Vázquez, A. A. Ramírez-Acosta","doi":"10.1109/IPTA.2018.8608125","DOIUrl":null,"url":null,"abstract":"Incorporating Human Visual System (HVS) models into building of classifiers has become an intensively researched field in visual content mining. In the variety of models of HVS we are interested in so-called visual saliency maps. Contrarily to scan-paths they model instantaneous attention assigning the degree of interestingness/saliency for humans to each pixel in the image plane. In various tasks of visual content understanding, these maps proved to be efficient stressing contribution of the areas of interest in image plane to classifiers models. In previous works saliency layers have been introduced in Deep CNNs, showing that they allow reducing training time getting similar accuracy and loss values in optimal models. In case of large image collections efficient building of saliency maps is based on predictive models of visual attention. They are generally bottom-up and are not adapted to specific visual tasks. Unless they are built for specific content, such as \"urban images\"-targeted saliency maps we also compare in this paper. In present research we propose a \"bootstrap\" strategy of building visual saliency maps for particular tasks of visual data mining. A small collection of images relevant to the visual understanding problem is annotated with gaze fixations. Then the propagation to a large training dataset is ensured and compared with the classical GBVS model and a recent method of saliency for urban image content. The classification results within Deep CNN framework are promising compared to the purely automatic visual saliency prediction.","PeriodicalId":272294,"journal":{"name":"2018 Eighth International Conference on Image Processing Theory, Tools and Applications (IPTA)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"10","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 Eighth International Conference on Image Processing Theory, Tools and Applications (IPTA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IPTA.2018.8608125","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 10

Abstract

Incorporating Human Visual System (HVS) models into the building of classifiers has become an intensively researched topic in visual content mining. Among the variety of HVS models, we are interested in so-called visual saliency maps. Unlike scan-paths, they model instantaneous attention, assigning to each pixel in the image plane a degree of interestingness/saliency for human observers. In various visual content understanding tasks, these maps have proved efficient at stressing the contribution of the areas of interest in the image plane to classifier models. In previous work, saliency layers have been introduced into Deep CNNs, showing that they reduce training time while yielding similar accuracy and loss values in the optimal models. For large image collections, efficient construction of saliency maps relies on predictive models of visual attention. Such models are generally bottom-up and are not adapted to specific visual tasks, unless they are built for specific content, such as the "urban image"-targeted saliency maps we also compare in this paper. In the present research, we propose a "bootstrap" strategy for building visual saliency maps for particular visual data mining tasks: a small collection of images relevant to the visual understanding problem is annotated with gaze fixations, and the resulting maps are then propagated to a large training dataset. This strategy is compared with the classical GBVS model and with a recent saliency method for urban image content. The classification results within the Deep CNN framework are promising compared to purely automatic visual saliency prediction.
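The abstract gives no implementation details for the gaze-fixation annotation step, but fixation-based saliency maps are commonly built by accumulating fixation points and smoothing with a Gaussian whose width approximates the foveal extent in pixels. The sketch below follows that standard recipe under stated assumptions; the function name fixation_density_map and the sigma default are hypothetical, and this is not necessarily the propagation procedure used in the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fixation_density_map(fixations, height, width, sigma=25.0):
    """Turn a list of (x, y) gaze fixations into a dense saliency map.

    Impulses are accumulated at fixation positions and blurred with a
    Gaussian; sigma (in pixels) approximates the foveal radius and is an
    assumed default, to be tuned to the viewing setup.
    """
    acc = np.zeros((height, width), dtype=np.float64)
    for x, y in fixations:
        xi, yi = int(round(x)), int(round(y))
        if 0 <= yi < height and 0 <= xi < width:
            acc[yi, xi] += 1.0  # one impulse per fixation
    smooth = gaussian_filter(acc, sigma=sigma)
    if smooth.max() > 0:
        smooth /= smooth.max()  # normalize to [0, 1]
    return smooth
```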
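Likewise, one way to picture how a saliency map can stress the contribution of areas of interest inside a Deep CNN is to gate the input pixels by the map before the backbone sees them. This is a minimal PyTorch sketch of that general idea, not the authors' saliency layer: the class name SaliencyGatedInput, the floor parameter, and the multiplicative gating are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class SaliencyGatedInput(nn.Module):
    """Multiplies each input image by its pre-computed saliency map
    before the image reaches the backbone CNN (illustrative sketch,
    not the paper's exact formulation)."""

    def __init__(self, backbone: nn.Module, floor: float = 0.1):
        super().__init__()
        self.backbone = backbone
        self.floor = floor  # retain a fraction of non-salient context

    def forward(self, images: torch.Tensor, saliency: torch.Tensor) -> torch.Tensor:
        # images: (B, 3, H, W); saliency: (B, 1, H, W) with values in [0, 1]
        gate = self.floor + (1.0 - self.floor) * saliency
        return self.backbone(images * gate)
```

Wrapping a standard backbone, e.g. SaliencyGatedInput(torchvision.models.resnet18(num_classes=10)) (a hypothetical usage), would let the same classifier be trained with and without saliency weighting for comparison.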