2013 IEEE Western New York Image Processing Workshop (WNYIPW): Latest Publications

Removing glint with video processing to enhance underwater target detection
2013 IEEE Western New York Image Processing Workshop (WNYIPW) Pub Date: 2013-11-01 DOI: 10.1109/WNYIPW.2013.6890982
Victoria M. Scholl, A. Gerace
{"title":"Removing glint with video processing to enhance underwater target detection","authors":"Victoria M. Scholl, A. Gerace","doi":"10.1109/WNYIPW.2013.6890982","DOIUrl":"https://doi.org/10.1109/WNYIPW.2013.6890982","url":null,"abstract":"Remotely sensed imagery of large bodies of water is often dappled with bright patches known as glint. Solar glint is light originating from the sun that reflects off the water surface directly into a camera's sensor. Glint reduces the ability to observe the water at depth, making complicated problems such as in-water parameter retrieval, benthic mapping, and submerged target detection especially difficult. The purpose of this research is two-fold. First, it is hypothesized that the latency between spectral bands on typical pushbroom imaging systems can be utilized to remove glint. The experimental concept of using video and basic image processing techniques is explored using a monochrome camera. Secondly, ongoing efforts are focused on characterizing the key features of glint (size, shape, intensity, and duration) to provide insight for improved glint removal algorithms.","PeriodicalId":408297,"journal":{"name":"2013 IEEE Western New York Image Processing Workshop (WNYIPW)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131962907","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 2
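The band-latency idea above suggests that glint, being transient, can be suppressed with a simple temporal statistic over co-registered frames. The sketch below is not the paper's algorithm: it assumes glint is bright and short-lived relative to the scene and takes a per-pixel low percentile over a monochrome frame stack. The function name suppress_glint and the 10th-percentile choice are illustrative assumptions.

```python
# Hypothetical sketch: suppress transient glint by taking a low per-pixel
# percentile over a short stack of co-registered monochrome video frames.
# Assumes glint is bright and short-lived relative to the underlying scene.
import numpy as np

def suppress_glint(frames: np.ndarray) -> np.ndarray:
    """frames: array of shape (T, H, W), a grayscale video stack."""
    if frames.ndim != 3:
        raise ValueError("expected a (T, H, W) stack of grayscale frames")
    # A low-order percentile is more robust than the strict minimum when
    # sensor noise produces occasional dark outliers.
    return np.percentile(frames, 10, axis=0).astype(frames.dtype)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    scene = rng.uniform(0.2, 0.4, size=(120, 160))      # static underwater scene
    stack = np.repeat(scene[None, :, :], 30, axis=0)
    glint = rng.random(stack.shape) > 0.97              # sparse, transient glint
    stack[glint] = 1.0
    clean = suppress_glint(stack)
    print("max residual glint:", float(np.abs(clean - scene).max()))
```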
Accessmath: Indexing and retrieving video segments containing math expressions based on visual similarity
2013 IEEE Western New York Image Processing Workshop (WNYIPW) Pub Date: 2013-11-01 DOI: 10.1109/WNYIPW.2013.6890981
Kenny Davila, Anurag Agarwal, R. Gaborski, R. Zanibbi, S. Ludi
{"title":"Accessmath: Indexing and retrieving video segments containing math expressions based on visual similarity","authors":"Kenny Davila, Anurag Agarwal, R. Gaborski, R. Zanibbi, S. Ludi","doi":"10.1109/WNYIPW.2013.6890981","DOIUrl":"https://doi.org/10.1109/WNYIPW.2013.6890981","url":null,"abstract":"Access Math project is a work in progress oriented toward helping visually impaired students in and out of the class-room. The system works with videos from math lectures. For each lecture, videos of the whiteboard content from two different sources are provided. An application for extraction and retrieval of that content is presented. After the content has been indexed, the user can select a portion of the whiteboard content found in a video frame and use it as a query to find segments of video with similar content. Graphs of neighboring connected components are used to describe both the query and the candidate regions, and the results of a query are ranked using the recall of matched graph edges between the graph of the query and the graph of each candidate. This is a recognition-free method and belongs to the field of sketch-based image retrieval.","PeriodicalId":408297,"journal":{"name":"2013 IEEE Western New York Image Processing Workshop (WNYIPW)","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122294627","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 5
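The edge-recall ranking described in the abstract reduces to counting query-graph edges that reappear in each candidate's graph. The sketch below assumes edges can be represented as pairs of quantized node descriptors; edge_recall and rank_candidates are hypothetical helper names, and the real system's connected-component graphs are not reproduced here.

```python
# Hypothetical sketch: rank candidate video segments by the recall of query
# graph edges that also appear in each candidate's graph. Edges are modeled
# as pairs of node descriptors, a simplification of the neighboring
# connected-component graphs described in the abstract.
from typing import Set, Tuple

Edge = Tuple[str, str]

def edge_recall(query_edges: Set[Edge], candidate_edges: Set[Edge]) -> float:
    if not query_edges:
        return 0.0
    matched = sum(1 for e in query_edges if e in candidate_edges)
    return matched / len(query_edges)

def rank_candidates(query_edges, candidates):
    """candidates: iterable of (segment_id, edge_set); returns best-first list."""
    scored = [(edge_recall(set(query_edges), set(edges)), seg) for seg, edges in candidates]
    return sorted(scored, reverse=True)

if __name__ == "__main__":
    q = {("x", "plus"), ("plus", "y")}
    cands = [("seg1", {("x", "plus"), ("plus", "y"), ("y", "eq")}),
             ("seg2", {("a", "b")})]
    print(rank_candidates(q, cands))  # seg1 ranks first with recall 1.0
```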
A note on the challenge of feature selection for image understanding
2013 IEEE Western New York Image Processing Workshop (WNYIPW) Pub Date: 2013-11-01 DOI: 10.1109/WNYIPW.2013.6890984
Thomas B. Kinsman, J. Pelz
{"title":"A note on the challenge of feature selection for image understanding","authors":"Thomas B. Kinsman, J. Pelz","doi":"10.1109/WNYIPW.2013.6890984","DOIUrl":"https://doi.org/10.1109/WNYIPW.2013.6890984","url":null,"abstract":"It is well known that using the correct features for pattern recognition is far more important than using a sophisticated classifier. A high order classifier, given inadequate features, will produce poor results. Low-level formed are combined to form mid-level features, which have much more discriminating power. Yet, the challenge of feature selection is often neglected in the literature. The literature often assumes that given N low-level features there are 2N-1 ways to use them, which significantly understates the challenge of finding the best features to use and the best ways to combine them. Basic low-level features (input measurements) must be combined in groups to construct features that are relevant for object recognition [1], yet the computational complexity of grouping measurements for input to a pattern recognition system makes the task very difficult. This paper discusses a method for quantifying the total number of ways to group a given number of low-level features for better understanding the feature selection problem.","PeriodicalId":408297,"journal":{"name":"2013 IEEE Western New York Image Processing Workshop (WNYIPW)","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125706386","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
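To make the gap concrete: if "grouping" is taken to mean partitioning the N measurements into disjoint, non-empty groups (an interpretation assumed here, not taken from the paper), the count is the Bell number B_N, which grows far faster than the 2^N - 1 non-empty subsets. The sketch below compares the two counts.

```python
# Illustrative comparison (not the paper's own derivation): if a "grouping" is
# a partition of N low-level features into disjoint non-empty groups, the
# count is the Bell number B_N, which dwarfs the 2**N - 1 non-empty subsets
# often assumed in the literature.
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def bell(n: int) -> int:
    """Bell number via the recurrence B(n) = sum_k C(n-1, k) * B(k)."""
    if n == 0:
        return 1
    return sum(comb(n - 1, k) * bell(k) for k in range(n))

if __name__ == "__main__":
    for n in (5, 10, 20):
        print(f"N={n:2d}  subsets={2**n - 1:>10d}  partitions={bell(n)}")
```

For N = 20 there are roughly 5 x 10^13 partitions versus about 10^6 non-empty subsets, which illustrates how badly the usual 2^N - 1 figure understates the grouping problem.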
Image sequence event detection via recurrence analysis
2013 IEEE Western New York Image Processing Workshop (WNYIPW) Pub Date: 2013-11-01 DOI: 10.1109/WNYIPW.2013.6890986
T. P. Keane, N. Cahill, J. Pelz
{"title":"Image sequence event detection VIA recurrence analysis","authors":"T. P. Keane, N. Cahill, J. Pelz","doi":"10.1109/WNYIPW.2013.6890986","DOIUrl":"https://doi.org/10.1109/WNYIPW.2013.6890986","url":null,"abstract":"Recurrence analysis methods have been used in a wide array of fields for the purposes of obtaining some grasp of the characteristics of a chaotically dynamical system. At the heart of the recurrence plotting and quantification analysis algorithm, though, is a means of visualizing and measuring repeating sequences of time-dependent data. It is in this sense we present a novel means of extracting an event from an image sequence by analyzing selected features as multi-dimensional samples of the time-series. The analysis and development of recurrence plots (RPs) naturally lend themselves to sequence detection, and we are presenting an attempt to target eye-movement events as such time-series sequences. We can then apply relatively simple quantification measures through Recurrence Quantification Analysis (RQA), thereby immediately detecting an event and capturing some characteristic statistics. To highlight an application of this methodology, we are presenting an approach towards detecting fixational eye-movement events from a video recording of natural eye motions.","PeriodicalId":408297,"journal":{"name":"2013 IEEE Western New York Image Processing Workshop (WNYIPW)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131670315","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1
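For readers unfamiliar with RPs: a recurrence plot marks the pairs of time indices whose feature vectors fall within a distance threshold, and RQA statistics summarize its structure. The sketch below implements only the standard thresholded Euclidean definition and the recurrence-rate statistic; the eye-movement features used in the paper are not modeled, and the function names are illustrative.

```python
# Minimal sketch of a recurrence plot and one RQA statistic (recurrence rate)
# for a multi-dimensional feature time series, under the usual thresholded
# Euclidean-distance definition.
import numpy as np

def recurrence_plot(x: np.ndarray, eps: float) -> np.ndarray:
    """x: (T, d) feature samples. Returns a boolean (T, T) recurrence matrix."""
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    return d < eps

def recurrence_rate(rp: np.ndarray) -> float:
    return float(rp.mean())

if __name__ == "__main__":
    t = np.linspace(0, 8 * np.pi, 400)
    x = np.stack([np.sin(t), np.cos(t)], axis=1)     # periodic toy signal
    rp = recurrence_plot(x, eps=0.1)
    print("recurrence rate:", recurrence_rate(rp))   # diagonal lines signal repetition
```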
Position encoding and localization with environmental patterns
2013 IEEE Western New York Image Processing Workshop (WNYIPW) Pub Date: 2013-11-01 DOI: 10.1109/WNYIPW.2013.6890985
R. Barneva, K. Kanev, Shota Mochiduki
{"title":"Position encoding and localization with environmental patterns","authors":"R. Barneva, K. Kanev, Shota Mochiduki","doi":"10.1109/WNYIPW.2013.6890985","DOIUrl":"https://doi.org/10.1109/WNYIPW.2013.6890985","url":null,"abstract":"In this paper we discuss the design and development of an improved CLUSPI method for augmented computer vision and positioning of autonomous agents in indoor settings. The method employs environmental patterns posted on walls, ceilings, floors, and other surrounding surfaces that are accessible for digital imaging. Such patterns are blended into the environment as decorative elements where the encoding and decoding is based on orientation and clustering of artistic figures. As part of this work a specialized client-server system for multi-platform experiments with various environmental codes and imaging devices have been implemented. Conducted experiments indicate robust and reliable code extraction with very high recognition rates in most practical setups.","PeriodicalId":408297,"journal":{"name":"2013 IEEE Western New York Image Processing Workshop (WNYIPW)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116107116","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1
Differentiation between malignant and normal human thyroid tissue using frequency analysis of multispectral Photoacoustic images
2013 IEEE Western New York Image Processing Workshop (WNYIPW) Pub Date: 2013-11-01 DOI: 10.1109/WNYIPW.2013.6890979
S. Sinha, Navalgund A. Rao, B. Chinni, Jacob Moalem, E. Giampolli, Vikram S. Dogra
{"title":"Differentiation between malignant and normal human thyroid tissue using frequency analysis of multispectral Photoacoustic images","authors":"S. Sinha, Navalgund A. Rao, B. Chinni, Jacob Moalem, E. Giampolli, Vikram S. Dogra","doi":"10.1109/WNYIPW.2013.6890979","DOIUrl":"https://doi.org/10.1109/WNYIPW.2013.6890979","url":null,"abstract":"This study investigates the feasibility of using frequency analysis of multispectral PA (Photoacoustics) signals generated by excised human thyroid tissue to differentiate between malignant and normal thyroid regions. Multispectral PA imaging was performed on freshly excised thyroid tissue from 6 patients undergoing thyroidectomy or thyroid lobectomy. The regions of interests in the PA images corresponding to malignant and normal tissue have been selected with the help of histopathology slides. The calibrated power spectrum of each PA signal from each region of interest was fit to a linear model for extracting the values of slope, midband fit and intercept parameters. The results show that mean values of intercept and midband fit parameters are significantly different between malignant and normal regions for all five wavelengths and mean values of slope are significantly different for two wavelengths.","PeriodicalId":408297,"journal":{"name":"2013 IEEE Western New York Image Processing Workshop (WNYIPW)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122747940","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 8
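The three spectral parameters named in the abstract come from a straight-line fit to the power spectrum expressed in dB. The sketch below assumes an uncalibrated spectrum, an illustrative 1-10 MHz analysis band, and a hypothetical helper spectral_parameters; the calibration against a reference spectrum used in the paper is omitted.

```python
# Hedged sketch of the spectral-parameter extraction: fit a straight line to
# the power spectrum (in dB) over an assumed usable bandwidth, then report
# slope, intercept, and the midband fit (the line's value at band center).
import numpy as np

def spectral_parameters(signal, fs, band=(1e6, 10e6)):
    """Return (slope_dB_per_Hz, intercept_dB, midband_fit_dB) for one PA signal."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power_db = 10 * np.log10(spectrum + 1e-30)       # avoid log of zero
    lo, hi = band
    sel = (freqs >= lo) & (freqs <= hi)
    slope, intercept = np.polyfit(freqs[sel], power_db[sel], deg=1)
    midband_fit = slope * (lo + hi) / 2 + intercept
    return slope, intercept, midband_fit

if __name__ == "__main__":
    fs = 50e6
    t = np.arange(2048) / fs
    rng = np.random.default_rng(1)
    sig = np.exp(-1e6 * t) * np.sin(2 * np.pi * 5e6 * t) + 0.01 * rng.standard_normal(t.size)
    print(spectral_parameters(sig, fs))
```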
LBP-inspired detection of color patterns: Multiplied local score patterns
2013 IEEE Western New York Image Processing Workshop (WNYIPW) Pub Date: 2013-11-01 DOI: 10.1109/WNYIPW.2013.6890983
Vladimir Pribula, R. Canosa
{"title":"LBP-inspired detection of color patterns: Multiplied local score patterns","authors":"Vladimir Pribula, R. Canosa","doi":"10.1109/WNYIPW.2013.6890983","DOIUrl":"https://doi.org/10.1109/WNYIPW.2013.6890983","url":null,"abstract":"Local binary patterns (LBP) were previously used to characterize gray-scale patterns in an image. They have also been applied to color pattern recognition, but maintained a simple binary vector for classification. We have applied the sampling strategy of LBPs to collect local colors around every pixel. These samples are then individually scored with all models to find the best match. This determines the order the remaining color models are used to score the samples, leading to rotation invariance in a manner similar to LBPs. Once the scores are retrieved for each sample, they are modulated by the samples' saturation values. All modulated scores are then multiplied to produce a multiplied local score pattern (mLSP) map. Peaks are filtered based on their breadth using simple thresholding and subsequent connected component analysis. Results were gathered from 1534 images in two environments under two camera exposures, using two consumer printer technologies to produce the color pattern. The overall recognition rate was 86%. Recognition was further broken down to show effects of lighting environment, printer technology, camera distance, and color pattern setup. Pitfalls and potential solutions are discussed for the algorithm's use in a wider variety of environments and with other color patterns.","PeriodicalId":408297,"journal":{"name":"2013 IEEE Western New York Image Processing Workshop (WNYIPW)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130788622","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1
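The core of the mLSP response is sample, score, saturate, multiply. The sketch below is a heavily simplified rendering: it uses a fixed four-neighbor ring, a circular hue distance as the color-model score, and skips the best-match ordering that gives the method its rotation invariance; mlsp_map and ref_hues are illustrative names only.

```python
# Simplified sketch of the multiplied local score pattern (mLSP) idea: sample
# neighbors around each pixel, score each sample against a reference color,
# weight the score by the sample's saturation, and multiply the weighted
# scores into one per-pixel response.
import numpy as np

def mlsp_map(hsv: np.ndarray, ref_hues: np.ndarray, radius: int = 2) -> np.ndarray:
    """hsv: (H, W, 3) image with channels in [0, 1]. Returns an (H, W) score map."""
    h, s = hsv[..., 0], hsv[..., 1]
    out = np.ones(h.shape)
    offsets = [(-radius, 0), (radius, 0), (0, -radius), (0, radius)]
    for k, (dy, dx) in enumerate(offsets):
        hue = np.roll(np.roll(h, dy, axis=0), dx, axis=1)
        sat = np.roll(np.roll(s, dy, axis=0), dx, axis=1)
        ref = ref_hues[k % len(ref_hues)]
        d = np.abs(hue - ref)
        score = 1.0 - 2.0 * np.minimum(d, 1.0 - d)   # circular hue similarity in [0, 1]
        out *= np.clip(score, 0.0, 1.0) * sat
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    img = rng.random((64, 64, 3))
    print(mlsp_map(img, ref_hues=np.array([0.0, 0.33, 0.66, 0.0])).shape)
```

A real detector would then threshold the peaks of this map and filter them with connected-component analysis, as the abstract describes.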
Mobile device to cloud co-processing of ASL finger spelling to text conversion
2013 IEEE Western New York Image Processing Workshop (WNYIPW) Pub Date: 2013-11-01 DOI: 10.1109/WNYIPW.2013.6890987
P. Hays, R. Ptucha, R. Melton
{"title":"Mobile device to cloud co-processing of ASL finger spelling to text conversion","authors":"P. Hays, R. Ptucha, R. Melton","doi":"10.1109/WNYIPW.2013.6890987","DOIUrl":"https://doi.org/10.1109/WNYIPW.2013.6890987","url":null,"abstract":"Computer recognition of American Sign Language (ASL) is a computationally intensive task. This research investigates transcription of static ASL signs on a consumer-level mobile device. The application provides real-time sign to text translation by processing a live video stream to detect the ASL alphabet as well as custom signs to perform tasks on the device. The chosen classification algorithm uses Locality Preserving Projections (LPP) as manifold learning along with Support Vector Machine (SVM) multi-class classification. The algorithm is contrasted with and without cloud assistance. In comparison to the local mobile application, the cloud-assisted application increased classification speed, reduced memory us-age, and kept the network usage low while barely increasing the power required.","PeriodicalId":408297,"journal":{"name":"2013 IEEE Western New York Image Processing Workshop (WNYIPW)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124846864","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 11
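The classification stage pairs a manifold-learning projection with a multi-class SVM. The sketch below mirrors that shape with scikit-learn, but substitutes PCA for LPP (LPP is not part of scikit-learn) and uses random placeholder features; build_classifier, the component count, and the SVM hyperparameters are assumptions, not the paper's settings.

```python
# Sketch of the classification stage only: dimensionality reduction followed
# by a multi-class SVM. PCA stands in for Locality Preserving Projections,
# and the features/labels below are placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def build_classifier(n_components: int = 30):
    return make_pipeline(
        StandardScaler(),
        PCA(n_components=n_components),   # stand-in for LPP
        SVC(kernel="rbf", C=10.0),        # multi-class SVM
    )

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    X = rng.standard_normal((260, 1024))   # placeholder hand-image features
    y = rng.integers(0, 26, size=260)      # 26 static letter classes
    clf = build_classifier().fit(X, y)
    print("training accuracy:", clf.score(X, y))
```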
Image analysis pipeline for characterizing photolytic degradation in daguerreotypes
2013 IEEE Western New York Image Processing Workshop (WNYIPW) Pub Date: 2013-11-01 DOI: 10.1109/WNYIPW.2013.6890978
Yuchuan Zhuang, Shuo Chen, Yuan Feng, R. Wiegandt, R. Buckley, Gaurav Sharma
{"title":"Image analysis pipeline for characterizing photolytic degradation in daguerreotypes","authors":"Yuchuan Zhuang, Shuo Chen, Yuan Feng, R. Wiegandt, R. Buckley, Gaurav Sharma","doi":"10.1109/WNYIPW.2013.6890978","DOIUrl":"https://doi.org/10.1109/WNYIPW.2013.6890978","url":null,"abstract":"We describe an image analysis pipeline for the minimally invasive analysis and characterization of light-induced degradation in daguerreotypes. To our knowledge, this is the first time that quantitative characterization and temporal analysis of the photolytic degradation has been described for daguerreotypes. We measure the impact of degradation using microscopic image capture before and after exposure of a small sacrificial region of the daguerreotype to light. The image analysis pipeline compensates for changes in capture position between the pre and post-exposure images and measures the effects of the degradation in the regions under test. Our results show that photolytic degradation follows a profile that is approximated as the sum of two exponentials with a time constants about 0.41 and 0.003 min-1, with variation across regions.","PeriodicalId":408297,"journal":{"name":"2013 IEEE Western New York Image Processing Workshop (WNYIPW)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114432715","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
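The two-exponential profile quoted above can be recovered from noisy measurements with a standard nonlinear least-squares fit. The sketch below uses scipy.optimize.curve_fit on synthetic data generated with the reported constants (about 0.41 and 0.003 min⁻¹); the saturating 1 - e^(-kt) form, the amplitudes, and the noise level are assumptions for illustration, not the paper's fitted model.

```python
# Worked sketch: fit a sum of two exponentials to synthetic degradation data.
# The rate constants used to generate the data match those quoted in the
# abstract; everything else here is illustrative.
import numpy as np
from scipy.optimize import curve_fit

def two_exponentials(t, a1, k1, a2, k2):
    return a1 * (1.0 - np.exp(-k1 * t)) + a2 * (1.0 - np.exp(-k2 * t))

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    t = np.linspace(0, 600, 200)                       # minutes of exposure
    y = two_exponentials(t, 0.6, 0.41, 0.4, 0.003)
    y += 0.01 * rng.standard_normal(t.size)
    p0 = (0.5, 0.1, 0.5, 0.01)                         # rough initial guess
    popt, _ = curve_fit(two_exponentials, t, y, p0=p0, maxfev=20000)
    print("fitted rate constants (min^-1):", popt[1], popt[3])
```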
Automatic fundus image field detection and quality assessment
2013 IEEE Western New York Image Processing Workshop (WNYIPW) DOI: 10.1109/wnyipw.2013.6890980
Gajendra J. Katuwal, J. Kerekes, R. Ramchandran, Christye Sisson, N. Rao
{"title":"Automatic fundus image field detection and quality assessment","authors":"Gajendra J. Katuwal, J. Kerekes, R. Ramchandran, Christye Sisson, N. Rao","doi":"10.1109/wnyipw.2013.6890980","DOIUrl":"https://doi.org/10.1109/wnyipw.2013.6890980","url":null,"abstract":"Fundus images are an important diagnostic tool for many retinal diseases. Sometimes the images captured are of low quality and cannot be used for diagnosis requiring repeat image acquisition. So, it is efficient to have an automatic system to assess the quality of the fundus image during the time of image capture. We have developed an automatic approach to assess the quality of the acquired fundus image based upon the inherent symmetry of retinal vessels. We approach the problem of quality assessment in two ways-individual quality assessment of a single fundus image and comprehensive quality assessment of a set of three fundus images of different fields of an eye. Our method also detects the field and side of the fundus image using the position of optic disc and the intensity information in two local windows.","PeriodicalId":408297,"journal":{"name":"2013 IEEE Western New York Image Processing Workshop (WNYIPW)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133714812","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 11
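As a loose illustration of a vessel-symmetry quality cue (not the paper's algorithm): given a binary vessel mask, one can compare vessel density on either side of the vertical midline and use the ratio as a crude score. The vessel segmentation step, the midline split, and the function vessel_symmetry_score below are all assumptions made for this sketch.

```python
# Very rough sketch of a symmetry-based quality cue: compare vessel density
# between the two halves of a binary vessel mask. Segmentation of the vessels
# themselves is not shown.
import numpy as np

def vessel_symmetry_score(vessel_mask: np.ndarray) -> float:
    """vessel_mask: boolean (H, W) array; returns a score in [0, 1]."""
    h, w = vessel_mask.shape
    left = vessel_mask[:, : w // 2].mean()
    right = vessel_mask[:, w - w // 2 :].mean()
    hi, lo = max(left, right), min(left, right)
    return lo / hi if hi > 0 else 0.0

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    mask = rng.random((256, 256)) > 0.9
    print("symmetry score:", round(vessel_symmetry_score(mask), 3))
```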