{"title":"A New Visual Attention Model Using Texture and Object Features","authors":"Hsuan-Ying Chen, Jin-Jang Leou","doi":"10.1109/CIT.2008.WORKSHOPS.8","DOIUrl":null,"url":null,"abstract":"Human perception tends to firstly pick attended regions which correspond to prominent objects in an image. Visual attention detection simulates the behavior of the human visual system (HVS) and detects the regions of interest (ROIs) in the image. In this study, a new visual attention model containing the texture and object models (parts) is proposed. As compared with existing texture models, the proposed texture model has better visual detection performance and low computational complexity, whereas the proposed object model can extract all the ROIs in an image. The proposed visual attention model can generate high-quality spatial saliency maps in an effective manner. Based on the experimental results obtained in this study, as compared with Hu's model, the proposed model has better performance and low computational complexity.","PeriodicalId":155998,"journal":{"name":"2008 IEEE 8th International Conference on Computer and Information Technology Workshops","volume":"31 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2008-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2008 IEEE 8th International Conference on Computer and Information Technology Workshops","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CIT.2008.WORKSHOPS.8","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 8
Abstract
Human perception tends first to attend to regions that correspond to prominent objects in an image. Visual attention detection simulates this behavior of the human visual system (HVS) and detects the regions of interest (ROIs) in an image. In this study, a new visual attention model comprising a texture model and an object model is proposed. Compared with existing texture models, the proposed texture model achieves better visual detection performance at lower computational complexity, while the proposed object model can extract all the ROIs in an image. Together, the two parts allow the proposed visual attention model to generate high-quality spatial saliency maps efficiently. Experimental results show that, compared with Hu's model, the proposed model achieves better detection performance at lower computational complexity.
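The abstract does not detail how the texture and object parts compute the spatial saliency map, so the following is only a generic illustration of the kind of center-surround contrast computation common in visual attention models, not the authors' method; the function names and window radii are illustrative assumptions:

```python
import numpy as np

def box_mean(img, r):
    """Mean over a (2r+1)x(2r+1) window, computed via cumulative sums."""
    p = np.pad(img, r, mode="edge")          # replicate borders
    c = p.cumsum(0).cumsum(1)                # 2-D integral image
    c = np.pad(c, ((1, 0), (1, 0)))          # zero row/col for inclusive sums
    k = 2 * r + 1
    s = c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]
    return s / (k * k)

def saliency_map(img, r_center=1, r_surround=7):
    """Center-surround contrast: |fine local mean - coarse local mean|,
    normalized to [0, 1]. Pixels that differ from their neighborhood
    (candidate ROIs) receive high saliency; uniform regions receive low."""
    diff = np.abs(box_mean(img, r_center) - box_mean(img, r_surround))
    m = diff.max()
    return diff / m if m > 0 else diff
```

On a flat image the map is zero everywhere, while an isolated bright pixel yields a saliency peak at its location, which matches the intuitive notion of an attended region.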