2011 IEEE 10th IVMSP Workshop: Perception and Visual Signal Analysis - Latest Publications

Towards a subject-independent adaptive pupil tracker for automatic eye tracking calibration using a mixture model
2011 IEEE 10th IVMSP Workshop: Perception and Visual Signal Analysis Pub Date: 2011-06-16 DOI: 10.1109/IVMSPW.2011.5970369
Thomas B. Kinsman, J. Pelz
{"title":"Towards a subject-independent adaptive pupil tracker for automatic eye tracking calibration using a mixture model","authors":"Thomas B. Kinsman, J. Pelz","doi":"10.1109/IVMSPW.2011.5970369","DOIUrl":"https://doi.org/10.1109/IVMSPW.2011.5970369","url":null,"abstract":"This paper describes the initial pre-processing steps used to follow the motions of the human eye in an eye tracking application. The central method models each pixel as a combination of either: a dark pupil pixel, bright highlight pixel, or a neutral pixel. Portable eye tracking involves tracking a subject's pupil over the course of a study. This paper describes very preliminary results from using a mixture model as a processing stage. Technical issues of using a mixture model are discussed. The pixel classifications from the mixture model were fed into a naïve Bayes pupil tracker. Only low-level information is used for pupil identification. No motion tracking is performed, no belief propagation is performed, and no convolutions are computed. The algorithm is well positioned for parallel implementations. The solution surmounts several technical challenges, and initial results are unexpectedly accurate. The technique shows good promise for incorporation into a system for automatic eye-to-scene calibration.","PeriodicalId":405588,"journal":{"name":"2011 IEEE 10th IVMSP Workshop: Perception and Visual Signal Analysis","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2011-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133021514","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
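The pixel-class mixture idea in the Kinsman and Pelz abstract above can be illustrated with a minimal sketch, not the authors' implementation: a three-component 1-D Gaussian mixture is fit to grayscale intensities with a few EM iterations, and each pixel is labeled dark pupil, neutral, or bright highlight. The initial means, variances, iteration count and the `fit_intensity_mixture` name are illustrative assumptions, and the naïve Bayes tracking stage that follows in the paper is not shown.

```python
# Minimal sketch (not the authors' implementation): classify eye-image pixels
# with a three-component 1-D Gaussian mixture fitted by a few EM steps.
import numpy as np

def fit_intensity_mixture(gray, n_iter=20):
    """gray: 2-D float array in [0, 1].  Returns per-pixel labels
    (0 = dark pupil, 1 = neutral, 2 = bright highlight) and the fitted means."""
    x = gray.ravel()
    mu = np.array([0.1, 0.5, 0.9])      # assumed initial means: dark / mid / bright
    var = np.full(3, 0.02)
    mix = np.full(3, 1.0 / 3.0)

    for _ in range(n_iter):
        # E-step: responsibility of each component for each pixel.
        diff = x[:, None] - mu[None, :]
        log_p = -0.5 * diff**2 / var - 0.5 * np.log(2.0 * np.pi * var) + np.log(mix)
        log_p -= log_p.max(axis=1, keepdims=True)
        resp = np.exp(log_p)
        resp /= resp.sum(axis=1, keepdims=True)

        # M-step: update mixing weights, means and variances.
        nk = resp.sum(axis=0) + 1e-12
        mix = nk / x.size
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu)**2).sum(axis=0) / nk + 1e-6

    return resp.argmax(axis=1).reshape(gray.shape), mu

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = rng.uniform(0.4, 0.6, (120, 160))   # neutral background
    frame[40:70, 60:90] = 0.05                  # dark pupil blob
    frame[50:54, 70:74] = 0.95                  # bright corneal highlight
    labels, means = fit_intensity_mixture(frame)
    print("component means:", np.round(means, 2))
    # Component 0 started darkest, so it tracks the pupil in this toy example.
    print("pixels labeled as pupil:", int((labels == 0).sum()))
```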
Despeckling trilateral filter
2011 IEEE 10th IVMSP Workshop: Perception and Visual Signal Analysis Pub Date: 2011-06-16 DOI: 10.1109/IVMSPW.2011.5970352
Yongjian Yu, Gang Dong, Jue Wang
{"title":"Despeckling trilateral filter","authors":"Yongjian Yu, Gang Dong, Jue Wang","doi":"10.1109/IVMSPW.2011.5970352","DOIUrl":"https://doi.org/10.1109/IVMSPW.2011.5970352","url":null,"abstract":"The bilateral filter smoothes noisy signals while preserving the semantic signal features. Its main advantage is being non-iterative. It is effective for a variety of applications in computer vision and computer graphics. However, little is known about the usefulness of bilateral filtering for speckle images. We propose a non-iterative, despeckling trilateral filter (DSTF) for smoothing ultrasound or synthetic aperture radar imagery. This filter combines the spatial closeness, intensity similarity and the coefficient of variation component. It generates outputs with speckle regions smoothed and structural features well preserved. The performance of the method is illustrated using synthetic, ultrasound and radar images. We show that the DSTF improves the bilateral filter with better speckle suppression, and is more computational efficient than the heavily iterative speckle reducing anisotropic diffusion.","PeriodicalId":405588,"journal":{"name":"2011 IEEE 10th IVMSP Workshop: Perception and Visual Signal Analysis","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2011-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117083546","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
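As a rough illustration of a filter that combines spatial closeness, intensity similarity and a coefficient-of-variation term, the sketch below modulates a bilateral-style kernel with a weight driven by the local coefficient of variation, so that homogeneous speckle is smoothed strongly while structured patches fall back to the edge-preserving range kernel. The published DSTF weighting is not reproduced; the window size, sigmas, blending rule and the `trilateral_despeckle` name are assumptions made for this example.

```python
# Minimal sketch (not the published DSTF): bilateral-style smoothing whose
# behavior is switched by the local coefficient of variation (CoV).
import numpy as np

def trilateral_despeckle(img, cu, radius=3, sigma_s=2.0, sigma_r=0.3, sigma_c=0.05):
    """img: speckled intensity image; cu: CoV of fully developed speckle
    (e.g. 1/sqrt(L) for an L-look image)."""
    h, w = img.shape
    pad = np.pad(img, radius, mode="reflect")
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))    # spatial closeness
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            center = img[i, j]
            # Intensity similarity, scaled by the center value so multiplicative
            # speckle fluctuations are treated consistently.
            rng_w = np.exp(-((patch - center) / (center + 1e-8))**2 / (2 * sigma_r**2))
            # CoV term: close to 1 where the patch looks like pure speckle
            # (local CoV near cu), small near genuine structure.
            cov = patch.std() / (patch.mean() + 1e-8)
            cov_w = np.exp(-max(cov - cu, 0.0)**2 / (2 * sigma_c**2))
            # In speckle-like patches lean on plain spatial smoothing; near
            # structure fall back to the edge-preserving range kernel.
            weights = spatial * (cov_w + (1.0 - cov_w) * rng_w)
            out[i, j] = (weights * patch).sum() / weights.sum()
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    clean = np.ones((64, 64))
    clean[:, 32:] = 2.0                                       # a step edge to preserve
    speckled = clean * rng.gamma(shape=4.0, scale=0.25, size=clean.shape)  # 4-look speckle
    filtered = trilateral_despeckle(speckled, cu=0.5)         # cu = 1/sqrt(4) for 4 looks
    print("noisy std (left half)   :", round(float(speckled[:, :32].std()), 3))
    print("filtered std (left half):", round(float(filtered[:, :32].std()), 3))
```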
Local masking in natural images measured via a new tree-structured forced-choice technique
2011 IEEE 10th IVMSP Workshop: Perception and Visual Signal Analysis Pub Date: 2011-06-16 DOI: 10.1109/IVMSPW.2011.5970348
Kedarnath P. Vilankar, D. Chandler
{"title":"Local masking in natural images measured via a new tree-structured forced-choice technique","authors":"Kedarnath P. Vilankar, D. Chandler","doi":"10.1109/IVMSPW.2011.5970348","DOIUrl":"https://doi.org/10.1109/IVMSPW.2011.5970348","url":null,"abstract":"It is widely known that natural images can hide or mask visual signals, and that this masking ability can vary across different regions of the image. Previous studies have quantified masking by measuring image-wide detection thresholds or local thresholds for select image regions; however, little effort has focused on measuring local thresholds across entire images so as to achieve ground-truth masking maps. Such maps could prove invaluable for testing and refining masking models; however, obtaining these maps requires a prohibitive number of trials using a traditional forced-choice procedure. Here, we present a tree-structured forced-choice procedure (TS-3AFC) designed to efficiently measure local thresholds across images. TS-3AFC requires fewer trials than normal forced-choice by employing recursive patch subdivision in which the child patches are not tested individually until the target is detectable in the parent patch. We show that TS-3AFC can yield masking maps which demonstrate both intrasubject and inter-subject repeatability, and we analyze the performance of a modern masking model and two quality estimators in predicting the obtained ground-truth maps for a small set of images.","PeriodicalId":405588,"journal":{"name":"2011 IEEE 10th IVMSP Workshop: Perception and Visual Signal Analysis","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2011-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123677195","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
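The recursive subdivision at the heart of the abstract above can be sketched as follows: a patch is split into four children only when the target was detectable in the parent; otherwise one ceiling value is recorded for the whole patch, saving the trials that exhaustive fine-scale testing would need. The `measure_threshold` oracle is a stand-in for the real forced-choice measurement, and the patch-size limit, ceiling value and masking rule are illustrative assumptions.

```python
# Minimal sketch of tree-structured subdivision for local threshold mapping.
import numpy as np

MIN_PATCH = 8          # assumed smallest patch size, in pixels
CEILING = 1.0          # assumed maximum testable target contrast

def measure_threshold(image, y, x, size):
    """Stand-in for one forced-choice threshold measurement on a patch.
    Here masking (and hence the threshold) simply grows with local variance."""
    patch = image[y:y + size, x:x + size]
    return min(CEILING, 0.05 + 6.0 * patch.var())

def ts_forced_choice(image, y, x, size, out, trials):
    thr = measure_threshold(image, y, x, size)
    trials[0] += 1
    detectable = thr < CEILING             # was the target visible in this patch?
    if not detectable or size <= MIN_PATCH:
        out[y:y + size, x:x + size] = thr  # record one value for the whole patch
        return
    half = size // 2                       # otherwise measure the four children
    for dy in (0, half):
        for dx in (0, half):
            ts_forced_choice(image, y + dy, x + dx, half, out, trials)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    img = rng.normal(0.5, 0.05, (64, 64))
    img[:, :32] += rng.normal(0.0, 0.5, (64, 32))   # heavily textured left half
    thresholds = np.zeros_like(img)
    trials = [0]
    ts_forced_choice(img, 0, 0, 64, thresholds, trials)
    # Exhaustive measurement at the finest 8-pixel grid would need 64 patches.
    print("patches actually measured:", trials[0])
```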
View-based modelling of human visual navigation errors
2011 IEEE 10th IVMSP Workshop: Perception and Visual Signal Analysis Pub Date: 2011-06-16 DOI: 10.1109/IVMSPW.2011.5970368
L. Pickup, A. Fitzgibbon, S. Gilson, A. Glennerster
{"title":"View-based modelling of human visual navigation errors","authors":"L. Pickup, A. Fitzgibbon, S. Gilson, A. Glennerster","doi":"10.1109/IVMSPW.2011.5970368","DOIUrl":"https://doi.org/10.1109/IVMSPW.2011.5970368","url":null,"abstract":"View-based and Cartesian representations provide rival accounts of visual navigation in humans, and here we explore possible models for the view-based case. A visual “homing” experiment was under-taken by human participants in immersive virtual reality. The distributions of end-point errors on the ground plane differed significantly in shape and extent depending on visual landmark configuration and relative goal location. A model based on simple visual cues captures important characteristics of these distributions. Augmenting visual features to include 3D elements such as stereo and motion parallax result in a set of models that describe the data accurately, demonstrating the effectiveness of a view-based approach.","PeriodicalId":405588,"journal":{"name":"2011 IEEE 10th IVMSP Workshop: Perception and Visual Signal Analysis","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2011-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123813981","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
Dichromatic color perception in a two stage model: Testing for cone replacement and cone loss models
2011 IEEE 10th IVMSP Workshop: Perception and Visual Signal Analysis Pub Date: 2011-06-16 DOI: 10.1109/IVMSPW.2011.5970347
C. E. Rodríguez-Pardo, Gaurav Sharma
{"title":"Dichromatic color perception in a two stage model: Testing for cone replacement and cone loss models","authors":"C. E. Rodríguez-Pardo, Gaurav Sharma","doi":"10.1109/IVMSPW.2011.5970347","DOIUrl":"https://doi.org/10.1109/IVMSPW.2011.5970347","url":null,"abstract":"We formulate a two stage model of dichromatic color perception that consists of a first sensor layer with gain control followed by an opponent encoding transformation. We propose a method for estimating the unknown parameters in the model by utilizing pre-existing data from psychophysical experiments on unilateral dichromats. The model is validated using this existing data and by using predictions on known test images for detecting dichromacy. Using the model and analysis we evaluate the feasibility of cone loss and cone replacement hypotheses that have previously been proposed for modeling dichromatic color vision. Results indicate that the two stage model offers good agreement with test data. The cone loss and cone replacement models are shown to have fundamental limitations in matching psychophysical observations.","PeriodicalId":405588,"journal":{"name":"2011 IEEE 10th IVMSP Workshop: Perception and Visual Signal Analysis","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2011-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125868103","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
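A minimal sketch of a two-stage pipeline of the kind described above: an LMS sensor stage with von-Kries-style gain control feeding a linear opponent transform, with "cone loss" and "cone replacement" variants for a protanope. The opponent matrix, adapting white and stimulus values are illustrative assumptions, not the parameters estimated in the paper.

```python
# Minimal two-stage sketch: sensor gain control followed by opponent encoding.
import numpy as np

# Assumed opponent encoding: luminance, red-green, blue-yellow.
OPPONENT = np.array([
    [0.6,  0.4,  0.0],    # L + M        -> luminance
    [1.0, -1.0,  0.0],    # L - M        -> red-green
    [0.5,  0.5, -1.0],    # (L+M)/2 - S  -> blue-yellow
])

def perceive(lms, white_lms, variant="trichromat"):
    """Stage 1: von-Kries-style gain control; stage 2: opponent encoding."""
    lms = np.asarray(lms, dtype=float).copy()
    white = np.asarray(white_lms, dtype=float).copy()
    if variant == "cone_loss":            # protanope: L cones simply absent
        lms[0] = 0.0
        white[0] = 1e-8                   # avoid division by zero in the gain stage
    elif variant == "cone_replacement":   # protanope: L sites carry the M signal
        lms[0] = lms[1]
        white[0] = white[1]
    adapted = lms / white                 # gain control by the adapting white
    return OPPONENT @ adapted

if __name__ == "__main__":
    white = np.array([1.0, 0.9, 0.6])         # assumed adapting white (LMS)
    stimulus = np.array([0.8, 0.4, 0.3])      # an arbitrary reddish stimulus (LMS)
    for v in ("trichromat", "cone_loss", "cone_replacement"):
        print(f"{v:16s} ->", np.round(perceive(stimulus, white, v), 3))
```

In this sketch the replacement variant collapses the red-green channel to zero for every stimulus while the loss variant does not, which is the kind of behavioral difference that a comparison against psychophysical data from unilateral dichromats can probe.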
A fully automatic digital camera image refocusing algorithm
2011 IEEE 10th IVMSP Workshop: Perception and Visual Signal Analysis Pub Date: 2011-06-16 DOI: 10.1109/IVMSPW.2011.5970359
J. E. Adams
{"title":"A fully automatic digital camera image refocusing algorithm","authors":"J. E. Adams","doi":"10.1109/IVMSPW.2011.5970359","DOIUrl":"https://doi.org/10.1109/IVMSPW.2011.5970359","url":null,"abstract":"One of the greatest dissatisfiers of consumer digital cameras is autofocus failure. One possible solution currently being investigated by many digital camera manufacturers involves capturing a sequence of through-focus images and postprocessing to produce a desired focused image. Most approaches require significant manual input to define a region of interest (ROI) to be optimized for focus. A new through-focus algorithm is proposed that automatically partitions scene content into regions based on range and then automatically determines the ROI for a given range. When this ROI is combined with standard image fusion operations, this algorithm generates an image with an aesthetically pleasing narrow depth-of-field effect, all with little or no user input, and all within the limited compute environment of a digital camera.","PeriodicalId":405588,"journal":{"name":"2011 IEEE 10th IVMSP Workshop: Perception and Visual Signal Analysis","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2011-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127885017","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
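The abstract above outlines a three-step structure: estimate range layers from a through-focus stack, choose an ROI layer automatically, and fuse. Below is a generic sketch of that structure, not the proposed algorithm; the Laplacian sharpness measure, the majority-vote ROI rule and the way the background is left defocused are all assumptions for illustration.

```python
# Generic focus-stack sketch: per-pixel "sharpest frame" index as a coarse
# range proxy, an automatically chosen ROI layer, and a simple fusion.
import numpy as np

def local_sharpness(img):
    """Crude sharpness proxy: squared response of a discrete Laplacian."""
    lap = (-4 * img
           + np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    return lap**2

def refocus(stack):
    """stack: (n, h, w) through-focus frames."""
    sharp = np.stack([local_sharpness(f) for f in stack])     # per-frame sharpness maps
    index_map = sharp.argmax(axis=0)                           # coarse range proxy per pixel
    roi_layer = int(np.bincount(index_map.ravel()).argmax())   # assumed ROI rule: largest layer
    roi = index_map == roi_layer
    other = (roi_layer + 1) % len(stack)                       # any other, out-of-focus frame
    fused = np.where(roi, stack[roi_layer], stack[other])      # narrow depth-of-field composite
    return fused, index_map, roi_layer

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    texture = rng.uniform(0.0, 1.0, (64, 64))
    blur = np.full((64, 64), 0.5)
    near = blur.copy()
    near[:, :32] = texture[:, :32]     # left object in focus in frame 0
    far = blur.copy()
    far[:, 32:] = texture[:, 32:]      # right object in focus in frame 1
    fused, index_map, roi_layer = refocus(np.stack([near, far]))
    print("automatically selected ROI layer:", roi_layer)
    print("ROI pixel count:", int((index_map == roi_layer).sum()))
```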
Visual attention model for target search in cluttered scene
2011 IEEE 10th IVMSP Workshop: Perception and Visual Signal Analysis Pub Date: 2011-06-16 DOI: 10.1109/IVMSPW.2011.5970370
Nevrez Imamoglu, Weisi Lin
{"title":"Visual attention model for target search in cluttered scene","authors":"Nevrez Imamoglu, Weisi Lin","doi":"10.1109/IVMSPW.2011.5970370","DOIUrl":"https://doi.org/10.1109/IVMSPW.2011.5970370","url":null,"abstract":"Visual attention models generate saliency maps in which attentive regions are more distinctive with respect to remaining parts of the scene. In this work, a new model of orientation conspicuity map (OCM) is presented for the computation of saliency. The proposed method is based on the difference of the Gabor filter outputs with orthogonal orientations because vehicles are the targets for the search tasks in this study. Moreover, as another contribution, selective resolution for the input image, according to the distance of the target in the scene, is also utilized with the proposed scheme for the benefit to target search. Experimental results demonstrate that both the OCM model and selective resolution for input images yield promising results for the target search in cluttered scenes.","PeriodicalId":405588,"journal":{"name":"2011 IEEE 10th IVMSP Workshop: Perception and Visual Signal Analysis","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2011-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127943896","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
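The core operation named above, differencing Gabor responses at orthogonal orientations, can be sketched with plain NumPy. The kernel size, wavelength, the absolute-difference combination rule and the toy "vehicle" scene are assumptions; the paper's exact formulation and its selective-resolution stage are not reproduced.

```python
# Minimal sketch of an orientation conspicuity map from two orthogonal Gabors.
import numpy as np

def gabor_kernel(size=21, wavelength=6.0, sigma=4.0, theta=0.0):
    """Real (cosine-phase) Gabor kernel; theta is the modulation direction."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)          # coordinate along the carrier
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)

def conv_same(img, kern):
    """FFT-based 'same'-size convolution (circular boundary; fine for a sketch)."""
    padded = np.zeros_like(img)
    kh, kw = kern.shape
    padded[:kh, :kw] = kern
    padded = np.roll(padded, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(padded)))

def orientation_conspicuity(img):
    """Absolute difference of rectified responses at two orthogonal orientations."""
    r0 = np.abs(conv_same(img, gabor_kernel(theta=0.0)))
    r90 = np.abs(conv_same(img, gabor_kernel(theta=np.pi / 2)))
    ocm = np.abs(r0 - r90)
    return ocm / (ocm.max() + 1e-12)                     # normalize to [0, 1]

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    scene = 0.2 * rng.uniform(0.0, 1.0, (128, 128))      # low-contrast clutter
    scene[60:68, 40:90] = 1.0                            # bright box-like "vehicle"
    ocm = orientation_conspicuity(scene)
    print("mean conspicuity on target :", round(float(ocm[60:68, 40:90].mean()), 3))
    print("mean conspicuity elsewhere :", round(float(ocm.mean()), 3))
```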
Identification and discussion of open issues in perceptual video coding based on image analysis and completion
2011 IEEE 10th IVMSP Workshop: Perception and Visual Signal Analysis Pub Date: 2011-06-16 DOI: 10.1109/IVMSPW.2011.5970350
D. Doshkov, H. Kaprykowsky, P. Ndjiki-Nya
{"title":"Identification and discussion of open issues in perceptual video coding based on image analysis and completion","authors":"D. Doshkov, H. Kaprykowsky, P. Ndjiki-Nya","doi":"10.1109/IVMSPW.2011.5970350","DOIUrl":"https://doi.org/10.1109/IVMSPW.2011.5970350","url":null,"abstract":"Perceptual video coding (VC) based on image analysis and completion (IAC) has enjoyed increasing awareness during the past few years. Many related approaches have been proposed that follow diverging strategies: from full compatibility to hybrid block transform coding to alternative codec design. Hence, in this paper, the most significant issues in IAC coding will be identified and their relevance for the IAC VC design highlighted. It will be analyzed where the most promising pathways lie and justified why others may be limited in their potentialities. Discussions will be substantiated using new methods developed by the authors for block-based and region-based IAC coding additionally to the state-of-the-art approaches.","PeriodicalId":405588,"journal":{"name":"2011 IEEE 10th IVMSP Workshop: Perception and Visual Signal Analysis","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2011-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115172536","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Effects of texture on color perception
2011 IEEE 10th IVMSP Workshop: Perception and Visual Signal Analysis Pub Date: 2011-06-16 DOI: 10.1109/IVMSPW.2011.5970346
H. Trussell, Juan Lin, R. Shamey
{"title":"Effects of texture on color perception","authors":"H. Trussell, Juan Lin, R. Shamey","doi":"10.1109/IVMSPW.2011.5970346","DOIUrl":"https://doi.org/10.1109/IVMSPW.2011.5970346","url":null,"abstract":"Textures are common distinguishing features used in segmentation and characterization of images. It is common to characterize textures in a statistical manner using various first and second order statistics. Previous work has shown that texture influences the observer's ability to perceive color differences. By considering the frequency content of the texture patterns in relationship to the color frequency response of the human eye, we hope to explain the results of some perceptual experiments in a more quantitative manner and lay a foundation for improved segmentation in computer vision applications.","PeriodicalId":405588,"journal":{"name":"2011 IEEE 10th IVMSP Workshop: Perception and Visual Signal Analysis","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2011-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128022832","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
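As a small illustration of the first- and second-order texture statistics mentioned above (not the measures used in the study), the sketch below computes the mean and variance plus a gray-level co-occurrence contrast at a one-pixel horizontal offset; the two synthetic textures have similar first-order statistics but very different second-order contrast.

```python
# Minimal sketch: first-order (mean, variance) and second-order (co-occurrence
# contrast) texture statistics of two synthetic textures.
import numpy as np

def texture_stats(gray, levels=8):
    """gray: 2-D array in [0, 1]."""
    mean, var = gray.mean(), gray.var()                        # first order
    q = np.minimum((gray * levels).astype(int), levels - 1)    # quantize gray levels
    left, right = q[:, :-1].ravel(), q[:, 1:].ravel()
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (left, right), 1)                          # co-occurrence counts
    glcm /= glcm.sum()
    i, j = np.indices(glcm.shape)
    contrast = ((i - j)**2 * glcm).sum()                       # second order
    return mean, var, contrast

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    fine = rng.uniform(0.0, 1.0, (64, 64))                     # high-frequency texture
    coarse = np.repeat(np.repeat(rng.uniform(0.0, 1.0, (8, 8)), 8, 0), 8, 1)
    for name, tex in (("fine", fine), ("coarse", coarse)):
        m, v, c = texture_stats(tex)
        print(f"{name:6s} mean={m:.2f} var={v:.2f} glcm-contrast={c:.2f}")
```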
Using human experts' gaze data to evaluate image processing algorithms
2011 IEEE 10th IVMSP Workshop: Perception and Visual Signal Analysis Pub Date: 2011-06-16 DOI: 10.1109/IVMSPW.2011.5970367
Preethi Vaidyanathan, J. Pelz, Rui Li, Sai Mulpuru, Dong Wang, P. Shi, C. Calvelli, Anne R. Haake
{"title":"Using human experts' gaze data to evaluate image processing algorithms","authors":"Preethi Vaidyanathan, J. Pelz, Rui Li, Sai Mulpuru, Dong Wang, P. Shi, C. Calvelli, Anne R. Haake","doi":"10.1109/IVMSPW.2011.5970367","DOIUrl":"https://doi.org/10.1109/IVMSPW.2011.5970367","url":null,"abstract":"Understanding the capabilities of the human visual system with respect to image understanding, in order to inform image processing, remains a challenge. Visual attention deployment strategies of experts can serve as an objective measure to help us understand their learned perceptual and conceptual processes. Understanding these processes will inform and direct image the selection and use of image processing algorithms, such as the dermatological images used in our study. The goal of our research is to extract and utilize the tacit knowledge of domain experts towards building a pipeline of image processing algorithms that could closely parallel the underlying cognitive processes. In this paper we use medical experts' eye movement data, primarily fixations, as a metric to evaluate the correlation of perceptually-relevant regions with individual clusters identified through k-means clustering. This test case demonstrates the potential of this approach to determine whether a particular image processing algorithm will be useful in identifying image regions with high visual interest and whether it could be a component of a processing pipeline.","PeriodicalId":405588,"journal":{"name":"2011 IEEE 10th IVMSP Workshop: Perception and Visual Signal Analysis","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2011-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133310523","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 14
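The comparison described above, relating expert fixations to k-means clusters, can be sketched with a toy example: segment an image by k-means on pixel values, then compare each cluster's share of fixations with its share of image area; a ratio well above one marks a cluster attracting disproportionate visual interest. The synthetic fixations, the lesion-like test image, the farthest-point initialization and the area-versus-fixation ratio are all assumptions for illustration, not the paper's data or evaluation metric.

```python
# Minimal sketch: k-means segmentation of pixel colors, then a fixation-share
# versus area-share comparison per cluster.
import numpy as np

def kmeans(pixels, k=2, n_iter=20, seed=0):
    """Plain Lloyd's algorithm with farthest-point initialization."""
    rng = np.random.default_rng(seed)
    centers = [pixels[rng.integers(len(pixels))]]
    for _ in range(k - 1):                     # farthest-point seeding
        d = np.min([((pixels - c)**2).sum(axis=1) for c in centers], axis=0)
        centers.append(pixels[d.argmax()])
    centers = np.array(centers)
    for _ in range(n_iter):
        d = ((pixels[:, None, :] - centers[None, :, :])**2).sum(axis=2)
        labels = d.argmin(axis=1)
        centers = np.array([pixels[labels == c].mean(axis=0) if np.any(labels == c)
                            else centers[c] for c in range(k)])
    return labels

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    img = np.full((80, 80, 3), 0.7) + rng.normal(0.0, 0.02, (80, 80, 3))  # skin-like field
    img[30:50, 30:50] = [0.5, 0.2, 0.2]                                    # darker "lesion"
    label_map = kmeans(img.reshape(-1, 3), k=2).reshape(80, 80)

    # Synthetic "fixations": most land on the lesion, a few on the background.
    fixations = np.vstack([rng.integers(30, 50, (15, 2)), rng.integers(0, 80, (5, 2))])
    for c in range(2):
        area_share = float((label_map == c).mean())
        fix_share = float(np.mean([label_map[y, x] == c for y, x in fixations]))
        print(f"cluster {c}: area {area_share:.2f}, fixations {fix_share:.2f}, "
              f"ratio {fix_share / max(area_share, 1e-9):.2f}")
```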