2009 IEEE Applied Imagery Pattern Recognition Workshop (AIPR 2009): Latest Articles

Objective performance evaluation of a moving object super-resolution system
2009 IEEE Applied Imagery Pattern Recognition Workshop (AIPR 2009) Pub Date : 2009-10-01 DOI: 10.1109/AIPR.2009.5466315
J. Laflen, C. Greco, G. Brooksby, E. Barrett
We present an evaluation of the performance of moving-object super-resolution (MOSR) through objective image quality metrics. MOSR systems require detection, tracking, and local sub-pixel registration of objects of interest prior to super-resolution. Nevertheless, MOSR can provide additional information otherwise undetected in raw video. We measure the extent of this benefit through the following objective image quality metrics: (1) Modulation Transfer Function (MTF), (2) Subjective Quality Factor (SQF), (3) Image Quality from the Natural Scene (MITRE IQM), and (4) minimum resolvable Rayleigh distance (RD). We also study the impact of non-ideal factors, such as image noise, frame-to-frame jitter, and object rotation, upon this performance. To study these factors, we generated controlled sequences of synthetic images of targets moving against a random field. The targets exemplified aspects of the objective metrics, containing either horizontal, vertical, or circular sinusoidal gratings, or a field of impulses separated by varying distances. High-resolution sequences were rendered and then appropriately filtered, assuming a circular aperture and a square, filled collector, prior to decimation. A fully implemented MOSR system was used to generate super-resolved images of the moving targets. The MTF, SQF, IQM, and RD measures were acquired from each of the high-, low-, and super-resolved image sequences, and indicate the objective benefit of super-resolution. To contrast with MOSR, the low-resolution sequences were also up-sampled in the Fourier domain, and the objective measures were collected for these Fourier up-sampled sequences as well. Our study consisted of over 800 different sequences, representing various combinations of non-ideal factors.
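The Fourier up-sampling used here as a comparison baseline amounts to zero-padding the centered spectrum of a low-resolution frame, which enlarges the image without adding any new information. A minimal NumPy sketch of that baseline (an illustration, not the authors' implementation):

```python
import numpy as np

def fourier_upsample(img, factor):
    """Up-sample an image by zero-padding its centered 2D spectrum.

    This adds no information beyond the low-resolution frame, which is
    exactly why it serves as a baseline against super-resolution.
    """
    h, w = img.shape
    H, W = h * factor, w * factor
    spec = np.fft.fftshift(np.fft.fft2(img))
    padded = np.zeros((H, W), dtype=complex)
    top, left = (H - h) // 2, (W - w) // 2
    padded[top:top + h, left:left + w] = spec
    # Scale by factor**2 so the mean intensity survives the larger inverse FFT.
    up = np.fft.ifft2(np.fft.ifftshift(padded)) * factor ** 2
    return up.real

lo = np.random.rand(16, 16)
hi = fourier_upsample(lo, 4)
print(hi.shape)  # (64, 64)
```

The result is band-limited interpolation: smooth, alias-free, but with no recovery of frequencies beyond the low-resolution cutoff.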
Citations: 4
Automated 3D object identification using Bayesian networks
2009 IEEE Applied Imagery Pattern Recognition Workshop (AIPR 2009) Pub Date : 2009-10-01 DOI: 10.1109/AIPR.2009.5466289
Prudhvi K. Gurram, E. Saber, F. Sahin, H. Rhody
3D object reconstruction from images involves two important parts: object identification and object modeling. Human beings are very adept at automatically identifying different objects in a scene due to the extensive training they receive over their lifetimes; machines, similarly, need to be trained to perform this task. At present, automated 3D object identification from aerial video imagery encounters various problems due to uncertainties in the data. The first problem is setting the input parameters of the segmentation algorithm for accurate identification of the homogeneous surfaces in the scene. The second is the deterministic inference applied to the features extracted from these homogeneous surfaces, or segments, to identify different objects such as buildings and trees. These problems can result in 3D models that are overfitted to a particular data set and consequently fail when applied to other data sets. In this paper, an algorithm that uses probabilistic inference to determine input segmentation parameters and to identify 3D objects from aerial video imagery is described. Bayesian networks are used to perform the probabilistic inference. To improve the accuracy of the identification process, information from Lidar data is fused with the visual imagery in a Bayesian network. The imagery is generated using the DIRSIG (Digital Imaging and Remote Sensing Image Generation) model at RIT. The parameters of the airborne sensor, such as focal length, detector size, and average flying height, as well as external parameters such as the solar zenith angle, can be simulated using this tool. The results show a significant improvement in the accuracy of object identification when Lidar data is fused with visual imagery compared to when visual imagery is used alone.
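The benefit of fusing a Lidar cue with a visual cue can be seen in a toy Bayesian update. The class labels, cue definitions, and all probability values below are illustrative assumptions, not numbers from the paper; the sketch only shows how conditionally independent evidence sharpens a posterior:

```python
import numpy as np

# Toy two-class problem: is a segment a building or a tree?
classes = ["building", "tree"]
prior = np.array([0.5, 0.5])

# Assumed likelihoods (illustrative, not from the paper):
# P(visual cue = "green texture" | class)
p_visual = np.array([0.2, 0.8])
# P(lidar cue = "flat elevated surface" | class)
p_lidar = np.array([0.9, 0.3])

# Posterior from visual evidence alone.
posterior_visual = prior * p_visual
posterior_visual /= posterior_visual.sum()

# Naive-Bayes fusion: cues treated as conditionally independent given the class.
posterior_fused = prior * p_visual * p_lidar
posterior_fused /= posterior_fused.sum()

print(dict(zip(classes, posterior_visual.round(3))))
print(dict(zip(classes, posterior_fused.round(3))))
```

With these numbers, the visual cue alone favors "tree", while adding the Lidar elevation cue shifts the posterior toward "building", showing how fusion can override a misleading single modality.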
Citations: 0
Fuzzy rule based unsupervised approach for salient gene extraction
2009 IEEE Applied Imagery Pattern Recognition Workshop (AIPR 2009) Pub Date : 2009-10-01 DOI: 10.1109/AIPR.2009.5466302
N. Verma, Payal Gupta, P. Agrawal, Yan Cui
This paper presents a novel fuzzy-rule-based gene ranking algorithm for extracting salient genes from a large set of microarray data, which helps reduce the computational effort of the model building process. The proposed algorithm is unsupervised and does not require class information for gene ranking. The microarray data are used to form a robust fuzzy rule base, and the salience of each gene is measured by its average relevance to the rules already in the rule base. Genes are ranked by their average firing strength, in order of relevance, and only the top-ranked genes are used to classify normal and cancerous tissues for a carcinoma dataset [1]. The results validate the effectiveness of our gene ranking method: for the same number of genes, our ranking scheme improves classifier performance by selecting better salient genes.
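Ranking genes by average firing strength against a fuzzy rule base can be sketched as follows. The Gaussian membership functions, their centers and width, and the toy expression matrix are all assumptions for illustration; the paper's rule base is learned from the data rather than fixed:

```python
import numpy as np

rng = np.random.default_rng(0)
expr = rng.random((20, 8))  # toy microarray: 20 genes x 8 samples

# Assumed rule base: Gaussian membership functions over expression level.
centers = np.array([0.2, 0.5, 0.8])
sigma = 0.15

def firing_strength(x, c, s):
    # Degree to which expression x fires a rule centered at c.
    return np.exp(-((x - c) ** 2) / (2 * s ** 2))

# Average firing strength of each gene across all rules and all samples.
strengths = np.stack([firing_strength(expr, c, sigma) for c in centers])
relevance = strengths.mean(axis=(0, 2))          # one score per gene

top = np.argsort(relevance)[::-1][:5]            # indices of top-ranked genes
print(top)
```

Only the top-ranked genes would then be passed to the downstream tissue classifier, which is the computational saving the abstract describes.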
Citations: 3
Kalman filter based video background estimation
2009 IEEE Applied Imagery Pattern Recognition Workshop (AIPR 2009) Pub Date : 2009-10-01 DOI: 10.1109/AIPR.2009.5466306
J. Scott, M. Pusateri, Duane C. Cornish
Transferring responsibility for object tracking in a video scene from human operators to computer vision has the appeal that the computer will remain vigilant under all circumstances, while operator attention can wane. However, when operating at peak performance, human operators often outperform computer vision because of their ability to adapt to changes in the scene. While many tracking algorithms are available, background subtraction, where a background image is subtracted from the current frame to isolate the foreground objects in a scene, remains a well-proven and popular technique. Under some circumstances, a background image can be obtained manually when no foreground objects are present. In the case of persistent surveillance outdoors, however, the background evolves over time due to diurnal, weather, and seasonal changes, rendering a fixed background scene inadequate. We present a method for estimating the background of a scene using a Kalman filter approach. Our method applies a one-dimensional Kalman filter to each pixel of the camera array to track that pixel's intensity. The algorithm is designed to track the background intensity of a scene assuming that the camera view is relatively stationary and that the background evolves much more slowly than relevant foreground events. This allows the background subtraction algorithm to adapt automatically to changes in the scene. The algorithm is a two-step process of mean intensity update and standard deviation update, derived from the standard Kalman filter equations. It also allows objects to transition between the background and foreground as appropriate by modeling the input standard deviation. For example, a car entering a parking lot surveillance camera's field of view would initially be included in the foreground; once parked, however, it will eventually transition to the background. We present results validating the algorithm's ability to estimate backgrounds in a variety of scenes, and demonstrate its application to tracking objects using simple frame detection with no temporal coherency.
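The per-pixel scalar Kalman filter described above can be sketched in a few lines of NumPy. The process and measurement noise values `q` and `r` are illustrative choices, not the authors' tuned parameters, and the foreground test is a simple deviation threshold rather than the paper's full standard-deviation model:

```python
import numpy as np

class PixelwiseKalmanBackground:
    """One scalar Kalman filter per pixel, tracking background intensity."""

    def __init__(self, first_frame, q=0.05, r=4.0):
        self.mean = first_frame.astype(float)       # per-pixel state estimate
        self.var = np.full(first_frame.shape, r)    # per-pixel estimate variance
        self.q = q   # process noise: how fast the background may drift
        self.r = r   # measurement noise of the camera

    def update(self, frame):
        # Predict: the background is modeled as static, so only variance grows.
        self.var = self.var + self.q
        # Correct: scalar Kalman gain, applied elementwise to the whole array.
        gain = self.var / (self.var + self.r)
        self.mean = self.mean + gain * (frame - self.mean)
        self.var = (1.0 - gain) * self.var
        return self.mean

    def foreground_mask(self, frame, k=3.0):
        # Pixels far from the estimated background are flagged as foreground.
        return np.abs(frame - self.mean) > k * np.sqrt(self.var + self.r)

# Feed a static scene for a while, then a frame with a bright object.
bg = np.full((4, 4), 100.0)
kf = PixelwiseKalmanBackground(bg)
for _ in range(50):
    kf.update(bg)
frame = bg.copy()
frame[0, 0] = 200.0
mask = kf.foreground_mask(frame)
```

Because the gain never reaches zero, a parked car (a persistent deviation) is gradually absorbed into `mean`, reproducing the foreground-to-background transition the abstract describes.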
Citations: 46
Detection and recognition of 3D targets in panchromatic gray scale imagery using a biologically-inspired algorithm
2009 IEEE Applied Imagery Pattern Recognition Workshop (AIPR 2009) Pub Date : 2009-10-01 DOI: 10.1109/AIPR.2009.5466310
Patricia Murphy, Pedro A. Rodriguez, Sean R. Martin
A three-dimensional (3D) target detection and recognition algorithm, using the biologically inspired MapSeeking Circuit (MSC), is implemented to efficiently solve the typical template matching problem in computer vision. Given a 3D template model of a vehicle, this prototype locates the vehicle in a two-dimensional (2D) panchromatic image and determines its pose (i.e., viewing azimuth, elevation, scale, and in-plane rotation). Our implementation introduces a detection stage followed by the spawning of multiple MSC processes in parallel to classify and determine the pose of the detection candidates. This increases the speed of detection and allows efficient classification when multiple targets are present in the same image. We present promising results after applying the algorithm to challenging real-world test imagery.
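For context, the "typical template matching problem" the MSC accelerates is the brute-force search below: normalized cross-correlation of a template at every image position. This is standard template matching, shown only to illustrate the combinatorial search over position (pose adds rotation and scale on top); it is not the MSC algorithm itself:

```python
import numpy as np

def ncc_match(image, template):
    """Brute-force normalized cross-correlation template matching.

    Returns the top-left position of the best match and its NCC score.
    """
    th, tw = template.shape
    t = template - template.mean()
    tnorm = np.sqrt((t ** 2).sum())
    best, best_pos = -np.inf, (0, 0)
    for i in range(image.shape[0] - th + 1):
        for j in range(image.shape[1] - tw + 1):
            w = image[i:i + th, j:j + tw]
            wc = w - w.mean()
            denom = np.sqrt((wc ** 2).sum()) * tnorm
            score = (wc * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (i, j)
    return best_pos, best

rng = np.random.default_rng(1)
scene = rng.random((32, 32))
target = scene[10:18, 5:13].copy()   # template cut directly from the scene
pos, score = ncc_match(scene, target)
print(pos, round(score, 3))
```

Every additional pose dimension multiplies this search space, which is why a converging search such as the MSC is attractive.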
Citations: 7
Technical maturity evaluations for sensor fusion technologies
2009 IEEE Applied Imagery Pattern Recognition Workshop (AIPR 2009) Pub Date : 2009-10-01 DOI: 10.1109/AIPR.2009.5466319
Mike Engle, S. Sarkani, T. Mazzuchi
The National Geospatial-Intelligence Agency (NGA) routinely works with commercial and academic partners to develop and refine technologies needed to meet the evolving imagery-based intelligence problems of the intelligence community (IC). An existing research and development entity within the NGA includes the systems engineering framework required to incorporate, develop, and transition applicable technologies for use by analysts. To better understand where work may fall within this framework, it is necessary to identify the inherent technical maturity of the research in question. Technology Readiness Levels (TRLs), originally developed by NASA and used by the DOD for most development and procurement programs, are used by NGA as a quick indication of both technical maturity and inherent risk (technical, schedule, cost, or transition). This paper discusses the different GEOINT-focused performance evaluations pertinent to the TRLs, then provides a brief introduction to an applicable multi-sensor data fusion framework.
Citations: 5
3D shape retrieval by visual parts similarity
2009 IEEE Applied Imagery Pattern Recognition Workshop (AIPR 2009) Pub Date : 2009-10-01 DOI: 10.1109/AIPR.2009.5466316
A. Godil, A. I. Wagan, S. Bres, Xiaolan Li
In this paper we propose a novel algorithm for 3D shape searching based on visual similarity, obtained by cutting the object into parts. This method rectifies some of the shortcomings of visual-similarity-based methods, so that it can better account for objects with deformation, articulation, or concave areas, and for parts of the object that are not visible because of self-occlusion. As a first step, the 3D objects are partitioned into a number of parts using cutting planes or mesh segmentation. A number of silhouettes of those parts are then rendered from different directions, and Zernike moments are applied to the silhouettes to generate shape descriptors. The distance measure is based on minimizing the distance among all combinations of shape descriptors, and these distances are then used for similarity-based searching.
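A part-set distance of this kind can be sketched as bidirectional closest-part matching. This is one plausible reading of "minimizing the distance among all the combinations of shape descriptors"; the paper's exact matching scheme may differ, and the descriptor arrays here are random stand-ins for real Zernike-moment vectors:

```python
import numpy as np

def parts_distance(desc_a, desc_b):
    """Distance between two objects given per-part descriptor sets.

    desc_a, desc_b: (num_parts, descriptor_dim) arrays.  Each part of one
    object is matched to the closest part of the other, in both directions,
    and the matched distances are averaged.
    """
    # Pairwise Euclidean distances between all part descriptors.
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    # Symmetric average of best matches in both directions.
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

rng = np.random.default_rng(0)
da = rng.random((4, 6))   # 4 parts, 6-dim Zernike-style descriptors
db = rng.random((5, 6))
print(parts_distance(da, db))
```

Matching at the part level is what lets an articulated or partially occluded object still score well: unmatched or moved parts degrade the distance gracefully instead of breaking a whole-object comparison.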
Citations: 0
Spatial-spectral cross correlation for reliable multispectral image registration
2009 IEEE Applied Imagery Pattern Recognition Workshop (AIPR 2009) Pub Date : 2009-10-01 DOI: 10.1109/AIPR.2009.5466291
Zhengwei Yang, Guangrong Shen, Wei Wang, Zhenhua Qian, Ying Ke
This paper presents a normalized spatial-spectral cross correlation (NSSCC) method for multispectral image registration. The method generalizes correlation coefficients defined in a spatial or spectral domain to the joint spatial-spectral domain. This novel spatial-spectral-signature-based method significantly increases the discrimination of the correlation coefficient for a given template window size, and increases registration reliability, robustness, and accuracy compared with the classic normalized spatial cross correlation method. It is invariant to dynamic range and robust to noise, yet straightforward, with minimal preprocessing required. The experimental results show that the NSSCC method is superior to the traditional normalized spatial cross correlation method in effectively registering multispectral images. However, the results also show that only statistically highly independent spectral bands help enhance the robustness and reliability of NSSCC multispectral image registration. Specifically, it is found that the near-infrared band together with the visual bands gives the best registration results.
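A joint spatial-spectral correlation can be sketched by stacking all bands of a template window into a single vector before the usual normalized correlation. This is one plausible reading of the paper's generalization, not its exact formula; the window shape and contents below are illustrative:

```python
import numpy as np

def spatial_spectral_ncc(window_a, window_b):
    """Normalized cross correlation over a joint spatial-spectral window.

    window_a, window_b: (rows, cols, bands) arrays.  All pixels of all
    bands are treated as one long vector, so both spatial structure and
    the spectral signature contribute to the score.
    """
    a = window_a.astype(float).ravel()
    b = window_b.astype(float).ravel()
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

rng = np.random.default_rng(0)
w = rng.random((5, 5, 4))   # a 5x5 template window with 4 spectral bands
print(spatial_spectral_ncc(w, w))           # perfect match
print(spatial_spectral_ncc(w, 2 * w + 5))   # gain/offset change, still a match
```

The second call illustrates the dynamic-range invariance the abstract claims: an affine intensity change (gain and offset) leaves the normalized score unchanged.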
Citations: 4
Large-scale functional models of visual cortex for remote sensing
2009 IEEE Applied Imagery Pattern Recognition Workshop (AIPR 2009) Pub Date : 2009-10-01 DOI: 10.1109/AIPR.2009.5466323
S. Brumby, Garrett T. Kenyon, Will Landecker, Craig Rasmussen, S. Swaminarayan, L. Bettencourt
Neuroscience has revealed many properties of neurons and of the functional organization of visual cortex that are believed to be essential to human vision but are missing in standard artificial neural networks. Equally important may be the sheer scale of visual cortex, requiring ~1 petaflop of computation, while the scale of human visual experience greatly exceeds standard computer vision datasets: the retina delivers ~1 petapixel/year to the brain, driving learning at many levels of the cortical system. We describe work at Los Alamos National Laboratory (LANL) to develop large-scale functional models of visual cortex on LANL's Roadrunner petaflop supercomputer. An initial run of a simple region-V1 code achieved 1.144 petaflops during trials at the IBM facility in Poughkeepsie, NY (June 2008). Here, we present criteria for assessing when a set of learned local representations is "complete", along with general criteria for assessing computer vision models based on their projected scaling behavior. Finally, we extend one class of biologically inspired learning models to problems of remote sensing imagery.
Citations: 18
Persistence and tracking: Putting vehicles and trajectories in context
2009 IEEE Applied Imagery Pattern Recognition Workshop (AIPR 2009) Pub Date : 2009-10-01 DOI: 10.1109/AIPR.2009.5466307
Robert Pless, M. Dixon, Nathan Jacobs, P. Baker, Nicholas L. Cassimatis, Derek P. Brock, R. Hartley, Dennis Perzanowski
City-scale tracking of all objects visible in a camera network or aerial video surveillance is an important tool in surveillance and traffic monitoring. We propose a framework for human-guided tracking based on explicitly considering the context surrounding the urban multi-vehicle tracking problem. The framework is built on a standard (but state-of-the-art) probabilistic tracking model. Our contribution is to detail explicitly where human annotation of the scene (e.g., "this is a lane"), a track (e.g., "this track is bad"), or a pair of tracks (e.g., "these two tracks are confused") can be naturally integrated within the probabilistic tracking framework. For an early prototype system, we offer results and examples from a dense urban traffic camera network, tracking and querying data with thousands of vehicles over 30 minutes.
Citations: 8