Latest publications from the 2013 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)

Generation of future image frames using optical flow
2013 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) Pub Date: 2013-10-01 DOI: 10.1109/AIPR.2013.6749339
N. Verma, Shikha Singh
{"title":"Generation of future image frames using optical flow","authors":"N. Verma, Shikha Singh","doi":"10.1109/AIPR.2013.6749339","DOIUrl":"https://doi.org/10.1109/AIPR.2013.6749339","url":null,"abstract":"This research work presents a novel approach for generation of future image frames using optical flow method that estimates velocity of all the pixels in both axes of images in an image sequence. Direction and magnitude of velocity vector field helps in finding the changes in term of pixel intensity from one image to another. The pixel intensity change from one image to another is modeled Takagi-Sugeno fuzzy model (TSFM) in both directions. This network predicts velocities of each pixel and then corresponding pixels intensities are mapped to their new position. The resulting scheme has been applied successfully on an image sequence of landing fighter plane. The proposed approach is able to generate upto ten future image frames successfully. For the quality assessment of future generated images Canny edge detection based Image Comparison Metric (CIM) and Mean Structural Similarity Index Measure (MSSIM) is used. All the ten future generated images have been compare qualitatively against the test images and the results found are encouraging.","PeriodicalId":435620,"journal":{"name":"2013 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127386124","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 11
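For orientation, here is a minimal Python/OpenCV sketch of the flow-then-remap idea the abstract describes: dense optical flow supplies per-pixel velocities, and the current frame is warped forward under a constant-velocity assumption. The Farneback estimator and the plain extrapolation stand in for the paper's Takagi-Sugeno fuzzy model; all parameter values are illustrative defaults.

```python
import cv2
import numpy as np

def predict_next_frame(prev_gray, curr_gray):
    """Extrapolate one future frame by warping curr_gray along the flow.

    Constant-velocity baseline: the paper instead predicts per-pixel
    velocities with a Takagi-Sugeno fuzzy model before the remap step.
    """
    # Dense per-pixel velocity field (u, v) from prev to curr.
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, curr_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    h, w = curr_gray.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    # Backward warp: each output pixel samples curr at p - flow(p),
    # the usual approximation of sampling the flow at the destination.
    map_x = (grid_x - flow[..., 0]).astype(np.float32)
    map_y = (grid_y - flow[..., 1]).astype(np.float32)
    return cv2.remap(curr_gray, map_x, map_y, cv2.INTER_LINEAR)
```

Feeding each prediction back in as the new current frame iterates this toward several future frames, as the paper does for up to ten.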
Dense 3D point-cloud model using optical flow for a monocular reconstruction system
2013 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) Pub Date: 2013-10-01 DOI: 10.1109/AIPR.2013.6749315
Yakov Diskin, V. Asari
{"title":"Dense 3D point-cloud model using optical flow for a monocular reconstruction system","authors":"Yakov Diskin, V. Asari","doi":"10.1109/AIPR.2013.6749315","DOIUrl":"https://doi.org/10.1109/AIPR.2013.6749315","url":null,"abstract":"In this paper, we present an enhanced 3D reconstruction algorithm designed to support an autonomously navigated unmanned ground vehicle. An unmanned system can use the technique to construct a point cloud model of its unknown surroundings. The algorithm presented focuses on the 3D reconstruction of a scene using image sequences captured by only a single moving camera. The original reconstruction process, resulting with a point cloud, was computed utilizing extracted and matched Speeded Up Robust Feature (SURF) points from subsequent video frames. Using depth triangulation analysis, we were able to compute the depth of each feature point within the scene. We concluded that although SURF points are accurate and extremely distinctive, the number of points extracted and matched was not sufficient for our applications. A sparse point cloud model hinders the ability to do further processing for the autonomous system such as object recognition or self-positioning. We present an enhanced version of the algorithm which increases the number of points within the model while maintaining the near real-time computational speeds and accuracy of the original sparse reconstruction. We do so by generating points using both global image characteristics and local SURF feature neighborhood information. Specifically, we generate optical flow disparities using the Horn-Schunck optical flow estimation technique and evaluate the quality of these features for disparity calculations using the SURF keypoint detection method. Areas of the image that locate within SURF feature neighborhoods are tracked using optical flow and used to compute an extremely dense model. The enhanced model contains the high frequency details of the scene that allow for 3D object recognition. The main contribution of the newly added preprocessing steps is measured by evaluating the density and accuracy of the reconstructed point cloud model in relation to real-world measurements.","PeriodicalId":435620,"journal":{"name":"2013 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"256 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114364797","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
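Since the abstract names the Horn-Schunck estimator explicitly, a compact NumPy version of its standard 1981 formulation is sketched below (the textbook scheme, not the authors' code; the smoothing weight alpha and iteration count are conventional defaults). For a translating camera, depth can then be triangulated from the resulting disparities, e.g. depth ≈ focal length × baseline / disparity.

```python
import numpy as np
from scipy.ndimage import convolve

def horn_schunck(im1, im2, alpha=15.0, n_iter=100):
    """Dense optical flow (u, v) via the classic Horn-Schunck iteration."""
    im1 = im1.astype(np.float64) / 255.0
    im2 = im2.astype(np.float64) / 255.0
    # Spatio-temporal gradients, averaged over both frames.
    kx = 0.25 * np.array([[-1.0, 1.0], [-1.0, 1.0]])
    ky = 0.25 * np.array([[-1.0, -1.0], [1.0, 1.0]])
    Ix = convolve(im1, kx) + convolve(im2, kx)
    Iy = convolve(im1, ky) + convolve(im2, ky)
    It = convolve(im2 - im1, 0.25 * np.ones((2, 2)))
    u = np.zeros_like(im1)
    v = np.zeros_like(im1)
    # Weighted neighborhood average used by the smoothness term.
    avg = np.array([[1, 2, 1], [2, 0, 2], [1, 2, 1]]) / 12.0
    for _ in range(n_iter):
        u_avg, v_avg = convolve(u, avg), convolve(v, avg)
        d = (Ix * u_avg + Iy * v_avg + It) / (alpha**2 + Ix**2 + Iy**2)
        u = u_avg - Ix * d
        v = v_avg - Iy * d
    return u, v
```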
Combining the advice of experts with randomized boosting for robust pattern recognition
2013 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) Pub Date: 2013-10-01 DOI: 10.1109/AIPR.2013.6749332
Jing Peng, G. Seetharaman
{"title":"Combining the advice of experts with randomized boosting for robust pattern recognition","authors":"Jing Peng, G. Seetharaman","doi":"10.1109/AIPR.2013.6749332","DOIUrl":"https://doi.org/10.1109/AIPR.2013.6749332","url":null,"abstract":"We have developed an algorithm, called ShareBoost, for combining mulitple classifiers from multiple information sources. The algorithm offer a number of advantages, such as increased confidence in decision-making, resulting from combined complementary data, good performance against noise, and the ability to exploit interplay between sensor subspaces.We have also developed a randomized version of ShareBoost, called rShare-Boost, by casting ShareBoost within an adversarial multi-armed bandit framework. This in turn allows us to show rShareBoost is efficient and convergent. Both algorithms have shown promise in a number of applications. The hallmark of these algorithms is a set of strategies for mining and exploiting the most informative sensor sources for a given situation. These strategies are computations performed by the algorithms. In this paper, we propose to consider strategies as advice given to an algorithm by “experts” or “Oracle.” In the context of pattern recognition, there can be several pattern recognition strategies. Each strategy makes different assumptions regarding the fidelity of each sensor source and uses different data to arrive at its estimates. Each strategy may place different trust in a sensor at different times, and each may be better in different situations. In this paper, we introduce a novel algorithm for combining the advice of the experts to achieve robust pattern recognition performance. We show that with high probability the algorithm seeks out the advice of the experts from decision relevant information sources for making optimal prediction. Finally, we provide experimental results using face and infrared image data that corroborate our theoretical analysis.","PeriodicalId":435620,"journal":{"name":"2013 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114817436","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
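The "advice of experts" setting the paper builds on is typically instantiated with an exponential-weights (Hedge) update, sketched below. This shows only the generic aggregation pattern, not the ShareBoost or rShareBoost update rules; the function name, interfaces, and learning rate are illustrative.

```python
import numpy as np

def hedge_combine(expert_preds, labels, eta=0.5):
    """Weighted-majority aggregation of expert advice (Hedge update).

    expert_preds[t, k] is expert k's class prediction at round t;
    labels[t] is the true class. Returns the combined predictions.
    """
    T, K = expert_preds.shape
    weights = np.ones(K)
    combined = np.empty(T, dtype=expert_preds.dtype)
    for t in range(T):
        p = weights / weights.sum()
        # Vote: sum the normalized weight behind each predicted class.
        votes = {}
        for k in range(K):
            votes[expert_preds[t, k]] = votes.get(expert_preds[t, k], 0.0) + p[k]
        combined[t] = max(votes, key=votes.get)
        # Multiplicative update: shrink the weight of every wrong expert.
        loss = (expert_preds[t] != labels[t]).astype(float)
        weights *= np.exp(-eta * loss)
    return combined
```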
Fast compensation of illumination changes for background subtraction
2013 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) Pub Date: 2013-10-01 DOI: 10.1109/AIPR.2013.6749323
Vishal Kumar, Neha Bhargava, S. Chaudhuri, G. Seetharaman
{"title":"Fast compensation of illumination changes for background subtraction","authors":"Vishal Kumar, Neha Bhargava, S. Chaudhuri, G. Seetharaman","doi":"10.1109/AIPR.2013.6749323","DOIUrl":"https://doi.org/10.1109/AIPR.2013.6749323","url":null,"abstract":"Background subtraction is a common technique used for motion tracking. It involves segmenting the foreground from the background in a given set of video frames. Background subtraction as proposed by Stauffer-Grimson [1] models each pixel using a mixture of Gaussians. The parameters of the Gaussian model are adaptive, and can adjust to gradual changes in image intensity over time. But in cases when the lighting change in the captured video sequence is sudden, like when the camera's automatic gain control (AGC) self adjusts the intensity of the overall image, such model based methods of background subtraction fail to adapt quickly to such a sudden change in image intensity. We propose a technique to automatically estimate the extent of camera AGC, and then use the information to add an additional block into any model based method such as the Stauffer-Grimson method to compensate for camera AGC while doing background subtraction.","PeriodicalId":435620,"journal":{"name":"2013 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116260533","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
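A minimal sketch of the proposed structure, assuming a simple median-ratio gain estimator (the paper derives its own AGC estimate): the frame's global gain is normalized before it reaches a mixture-of-Gaussians background subtractor (OpenCV's MOG2, from the Stauffer-Grimson family).

```python
import cv2
import numpy as np

backsub = cv2.createBackgroundSubtractorMOG2()  # mixture-of-Gaussians model
prev_gray = None

def agc_compensated_mask(frame):
    """Undo sudden global gain changes, then run MoG background subtraction.

    Gain is estimated as the ratio of median intensities between
    consecutive frames; a stand-in for the paper's AGC estimation block.
    """
    global prev_gray
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
    if prev_gray is not None:
        gain = float(np.median(prev_gray)) / max(float(np.median(gray)), 1.0)
        gray = np.clip(gray * gain, 0, 255)
    prev_gray = gray
    return backsub.apply(gray.astype(np.uint8))
```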
Video image quality analysis for enhancing tracker performance
2013 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) Pub Date: 2013-10-01 DOI: 10.1109/AIPR.2013.6749326
J. Irvine, Richard J. Wood, David Reed, J. Lepanto
{"title":"Video image quality analysis for enhancing tracker performance","authors":"J. Irvine, Richard J. Wood, David Reed, J. Lepanto","doi":"10.1109/AIPR.2013.6749326","DOIUrl":"https://doi.org/10.1109/AIPR.2013.6749326","url":null,"abstract":"Object tracking in video data is fundamental to many practical applications, including gesture recognition, activity analysis, physical security, and surveillance. A fundamental assumption is that the quality of the video stream is adequate to support the analysis. In practice, however, the video quality can vary widely due to lighting and weather, camera placement, and data compression. These factors affect the performance of object tracking algorithms. We present a method for automated analysis of the video quality which can be used to adjust the object tracker appropriately. This paper extends earlier research, presenting a model for quantifying the quality of motion imagery in the context of automated exploitation. We present a method for predicting the tracker performance and demonstrate the results on a range of video clips. The model rests on a suite of image metrics computed in real-time from the video. We will describe the metrics and the formulation of the quality estimation model. Results from a recent experiment will quantify the empirical performance of the model. We conclude with a discussion of methods for enhancing tracker performance based on the real-time video quality analysis.","PeriodicalId":435620,"journal":{"name":"2013 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"360 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122771150","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
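The feedback pattern the paper describes, cheap per-frame metrics driving tracker adjustment, can be sketched as follows. The two metrics (variance-of-Laplacian sharpness and intensity contrast) and the threshold rule are stand-ins; the paper's actual metric suite and quality model are not reproduced here.

```python
import cv2

def frame_quality(gray):
    """Two inexpensive real-time quality cues for a video frame."""
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()  # low value = blurry
    contrast = float(gray.std())
    return sharpness, contrast

def adjusted_match_threshold(sharpness, contrast, base=0.5):
    """Loosen a hypothetical tracker's match threshold as quality drops."""
    penalty = 0.0
    if sharpness < 100.0:  # heuristic cut-offs, not from the paper
        penalty += 0.1
    if contrast < 30.0:
        penalty += 0.1
    return max(base - penalty, 0.2)
```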
Physical modeling of nuclear detonations in DIRSIG
2013 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) Pub Date: 2013-10-01 DOI: 10.1109/AIPR.2013.6749331
T. Peery, K. Walli
{"title":"Physical modeling of nuclear detonations in dirsig","authors":"T. Peery, K. Walli","doi":"10.1109/AIPR.2013.6749331","DOIUrl":"https://doi.org/10.1109/AIPR.2013.6749331","url":null,"abstract":"A physical model was created for the resulting fireball of a nuclear detonation. The model focuses on the growth of the fireball as well as its temperature. The physical model was created for a 100 kt detonation, with 70% x-ray yield. A MATLAB code was created to generate these models, with yield and % x-ray yield as adjustable variables. Both temperature and radii were calculated as a function of time. The fireball was then input into DIRSIG for 3D modeling with an airfield scene, with the fireball modeled at key points in its development.","PeriodicalId":435620,"journal":{"name":"2013 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134105515","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
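For intuition on the radius-versus-time component, early fireball growth is often approximated by the Sedov-Taylor similarity solution R(t) = ξ0 (E t² / ρ)^(1/5); the sketch below applies that textbook relation with the abstract's 100 kt yield and 70% x-ray fraction, and may well differ from the authors' MATLAB formulation.

```python
import numpy as np

KT_TO_J = 4.184e12  # energy of 1 kiloton of TNT, in joules

def fireball_radius(t, yield_kt=100.0, xray_fraction=0.7, rho_air=1.225):
    """Sedov-Taylor blast-wave radius in metres at time t (seconds).

    R(t) = xi0 * (E * t**2 / rho)**(1/5), with xi0 ~ 1.03 for air
    (gamma = 1.4). Scaling the driving energy by the x-ray fraction
    is a simplifying assumption, not the paper's treatment.
    """
    E = yield_kt * KT_TO_J * xray_fraction
    return 1.03 * (E * t**2 / rho_air) ** 0.2

t = np.linspace(1e-4, 1.0, 500)   # times after detonation, seconds
r = fireball_radius(t)            # radii to hand off to a renderer
```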