2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR): Latest Publications

Accurate coverage summarization of UAV videos
2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) Pub Date : 2014-10-01 DOI: 10.1109/AIPR.2014.7041923
Chung-Ching Lin, Sharath Pankanti, John R. Smith
{"title":"Accurate coverage summarization of UAV videos","authors":"Chung-Ching Lin, Sharath Pankanti, John R. Smith","doi":"10.1109/AIPR.2014.7041923","DOIUrl":"https://doi.org/10.1109/AIPR.2014.7041923","url":null,"abstract":"A predominant fraction of UAV videos are never watched or analyzed and there is growing interest in having a summary view of the UAV videos for obtaining a better overall perspective of the visual content. Real time summarization of the UAV video events is also important from tactical perspective. Our research focuses on developing resilient algorithms for summarizing videos that can be efficiently processed either onboard or offline. Our previous work [2] on the video summarization has focused on the event summarization. More recently, we have investigated the challenges in providing the coverage summarization of the video content from UAV videos. Different from the traditional coverage summarization taking SfM approach (e.g., [7]) on SIFT-based [14] feature points, there are several additional challenges including jitter, low resolution, contrast, lack of salient features in UAV videos. We propose a novel correspondence algorithm that exploits the 3D context that can potentially alleviate the correspondence ambiguity. Our results on VIRAT dataset shows that our algorithm can find many correct correspondences in low resolution imagery while avoiding many false positives from the traditional algorithms.","PeriodicalId":210982,"journal":{"name":"2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127526508","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
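The abstract contrasts the proposed 3D-context correspondence algorithm with traditional SIFT-based matching but does not detail it. For reference, here is a minimal sketch of that SIFT baseline using OpenCV; the ratio-test threshold is illustrative, and it is exactly this kind of matching that tends to fail on jittery, low-contrast UAV frames:

```python
# Baseline SIFT correspondence between two UAV frames (OpenCV).
# The paper's 3D-context algorithm is not reproducible from the abstract;
# this is the traditional approach it improves on.
import cv2

def sift_correspondences(frame_a, frame_b, ratio=0.75):
    """Match SIFT keypoints between two grayscale frames with Lowe's ratio test."""
    sift = cv2.SIFT_create()
    kp_a, desc_a = sift.detectAndCompute(frame_a, None)
    kp_b, desc_b = sift.detectAndCompute(frame_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(desc_a, desc_b, k=2)  # two nearest neighbors per descriptor
    # Keep matches whose best distance clearly beats the runner-up; low-contrast
    # UAV imagery yields few matches that survive this test.
    good = [m for m, n in pairs if m.distance < ratio * n.distance]
    return [(kp_a[m.queryIdx].pt, kp_b[m.trainIdx].pt) for m in good]
```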
High dynamic range (HDR) video processing for the exploitation of high bit-depth sensors in human-monitored surveillance
2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) Pub Date : 2014-10-01 DOI: 10.1109/AIPR.2014.7041912
D. Natale, Matthew S. Baran, R. Tutwiler
{"title":"High dynamic range (HDR) video processing for the exploitation of high bit-depth sensors in human-monitored surveillance","authors":"D. Natale, Matthew S. Baran, R. Tutwiler","doi":"10.1109/AIPR.2014.7041912","DOIUrl":"https://doi.org/10.1109/AIPR.2014.7041912","url":null,"abstract":"High bit-depth video data is becoming more common in imaging and remote sensing because higher bit-depth cameras are becoming more affordable. Displays often represent images in lower bit-depths, and human vision is not able to completely exploit this additional information in its native form. These problems are addressed with High Dynamic Range (HDR) tone mapping, which nonlinearly maps lightness levels from a high bit-depth image into a lower bit-depth representation in a way that attempts to retain and accentuate the maximum amount of useful information therein. We have adapted the well-known Contrast Limited Adaptive Histogram Equalization (CLAHE) algorithm into the application of HDR video tone mapping by using time-adaptive local histogram transformations. In addition to lightness contrast, we use the transformations in the L*a*b* color space to amplify color contrast in the video stream. The transformed HDR video data maintains important details in local contrast while maintaining relative lightness levels locally through time. Our results show that time-adapted HDR tone mapping methods can be used in real-time video processing to store and display HDR data in low bit-depth formats with less loss of useful information compared to simple truncation.","PeriodicalId":210982,"journal":{"name":"2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114271401","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
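A per-frame sketch of the core operation the abstract describes, CLAHE applied to the lightness channel in L*a*b* space, is shown below (OpenCV). The paper's time-adaptive histogram transforms and a*/b* color-contrast amplification are not reproduced here, and the clip limit and tile size are illustrative rather than the authors' settings:

```python
# Minimal CLAHE-in-Lab tone mapping of a 16-bit frame to an 8-bit display frame.
import cv2
import numpy as np

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))  # illustrative settings

def tonemap_frame(frame_16u):
    """Map a 16-bit BGR frame to an 8-bit displayable BGR frame."""
    # Work in float so the Lab conversion keeps the full bit depth.
    bgr = frame_16u.astype(np.float32) / 65535.0
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)   # L channel lies in [0, 100]
    # CLAHE operates on integer images, so rescale L to 16-bit, equalize, restore.
    L = np.clip(lab[:, :, 0] / 100.0 * 65535.0, 0, 65535).astype(np.uint16)
    lab[:, :, 0] = clahe.apply(L).astype(np.float32) / 65535.0 * 100.0
    out = cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)
    return np.clip(out * 255.0, 0, 255).astype(np.uint8)
```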
Evaluating the Lidar/HSI direct method for physics-based scene modeling
2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) Pub Date : 2014-10-01 DOI: 10.1109/AIPR.2014.7041906
Ryan N. Givens, K. Walli, M. Eismann
{"title":"Evaluating the Lidar/HSI direct method for physics-based scene modeling","authors":"Ryan N. Givens, K. Walli, M. Eismann","doi":"10.1109/AIPR.2014.7041906","DOIUrl":"https://doi.org/10.1109/AIPR.2014.7041906","url":null,"abstract":"Recent work has been able to automate the process of generating three-dimensional, spectrally attributed scenes for use in physics-based modeling software using the Lidar/Hyperspectral Direct (LHD) method. The LHD method autonomously generates three-dimensional Digital Imaging and Remote Sensing Image Generation (DIRSIG) scenes from input high-resolution imagery, lidar data, and hyperspectral imagery and has been shown to do this successfully using both modeled and real datasets. While the output scenes look realistic and appear to match the input scenes under qualitative comparisons, a more quantitative approach is needed to evaluate the full utility of these autonomously generated scenes. This paper seeks to improve the evaluation of the spatial and spectral accuracy of autonomously generated three-dimensional scenes using the DIRSIG model. Two scenes are presented for this evaluation. The first is generated from a modeled dataset and the second is generated using data collected over a real-world site. DIRSIG-generated synthetic imagery over the recreated scenes are then compared to the original input imagery to evaluate how well the recreated scenes match the original scenes in spatial and spectral accuracy and to determine the ability of the recreated scenes to produce useful outputs for algorithm development.","PeriodicalId":210982,"journal":{"name":"2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116467454","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
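The abstract does not name the spectral-accuracy metrics used, but a standard choice for comparing a DIRSIG-rendered cube against the original hyperspectral input is the per-pixel spectral angle; a minimal sketch, assuming both cubes are co-registered (H, W, bands) arrays:

```python
# Per-pixel spectral angle (SAM) between a synthetic and a reference cube.
# An illustrative metric choice, not necessarily the authors' protocol.
import numpy as np

def spectral_angle_map(synthetic, reference, eps=1e-12):
    """Return per-pixel spectral angle in radians between two (H, W, B) cubes."""
    dot = np.sum(synthetic * reference, axis=-1)
    norms = (np.linalg.norm(synthetic, axis=-1) *
             np.linalg.norm(reference, axis=-1))
    cos = np.clip(dot / (norms + eps), -1.0, 1.0)
    return np.arccos(cos)

# Smaller mean angle -> closer spectral match between recreated and input scene:
# mean_sam = spectral_angle_map(dirsig_cube, input_cube).mean()
```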
Adaptive automatic object recognition in single and multi-modal sensor data
2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) Pub Date : 2014-10-01 DOI: 10.1109/AIPR.2014.7041915
T. Khuon, R. Rand
{"title":"Adaptive automatic object recognition in single and multi-modal sensor data","authors":"T. Khuon, R. Rand","doi":"10.1109/AIPR.2014.7041915","DOIUrl":"https://doi.org/10.1109/AIPR.2014.7041915","url":null,"abstract":"For single-modal data, object recognition and classification in a 3D point cloud is a non-trivial process due to the nature of the data collected from a sensor system where the signal can be corrupted by noise from the environment, electronic system, A/D converter, etc. Therefore, an adaptive system with a specific desired tolerance is required to perform classification and recognition optimally. The feature-based pattern recognition algorithm described below, is generalized for solving a particular global problem with minimal change. Since for the given class set, a feature set must be extracted accordingly. For instance, man-made urban object classification, rural and natural objects, and human organ classification would require different and distinct feature sets. This study is to compare the adaptive automatic object recognition in single sensor and the distributed adaptive pattern recognition in multi-sensor fusion. The similarity in automatic object recognition between single-sensor and multi-sensor fusion is the ability to learn from experiences and decide on a given pattern. Their main difference is that the sensor fusion makes a decision from the decisions of all sensors whereas the single sensor requires a feature extraction for a decision.","PeriodicalId":210982,"journal":{"name":"2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122238144","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
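A minimal sketch of the decision-level fusion the abstract describes: each sensor's classifier emits a decision, and the fused system decides from those decisions. The weighted vote below is illustrative; how the paper's system adaptively learns to combine sensor decisions is not specified in the abstract:

```python
# Decision-level fusion: combine per-sensor class decisions by weighted vote.
from collections import Counter

def fuse_decisions(sensor_decisions, weights=None):
    """Weighted vote over per-sensor class labels; returns the winning label."""
    weights = weights or [1.0] * len(sensor_decisions)
    tally = Counter()
    for label, w in zip(sensor_decisions, weights):
        tally[label] += w
    return tally.most_common(1)[0][0]

# Example: a lidar classifier and two image classifiers disagree; fusion breaks the tie.
print(fuse_decisions(["building", "building", "tree"]))  # -> "building"
```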
Change detection and classification of land cover in multispectral satellite imagery using clustering of sparse approximations (CoSA) over learned feature dictionaries
2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) Pub Date : 2014-10-01 DOI: 10.1109/AIPR.2014.7041921
D. Moody, S. Brumby, J. Rowland, G. Altmann, Amy E. Larson
{"title":"Change detection and classification of land cover in multispectral satellite imagery using clustering of sparse approximations (CoSA) over learned feature dictionaries","authors":"D. Moody, S. Brumby, J. Rowland, G. Altmann, Amy E. Larson","doi":"10.1109/AIPR.2014.7041921","DOIUrl":"https://doi.org/10.1109/AIPR.2014.7041921","url":null,"abstract":"Neuromimetic machine vision and pattern recognition algorithms are of great interest for landscape characterization and change detection in satellite imagery in support of global climate change science and modeling. We present results from an ongoing effort to extend machine vision methods to the environmental sciences, using adaptive sparse signal processing combined with machine learning. A Hebbian learning rule is used to build multispectral, multiresolution dictionaries from regional satellite normalized band difference index data. Land cover labels are automatically generated via our CoSA algorithm: Clustering of Sparse Approximations, using a clustering distance metric that combines spectral and spatial textural characteristics to help separate geologic, vegetative, and hydrologie features. We demonstrate our method on example Worldview-2 satellite images of an Arctic region, and use CoSA labels to detect seasonal surface changes. Our results suggest that neuroscience-based models are a promising approach to practical pattern recognition and change detection problems in remote sensing.","PeriodicalId":210982,"journal":{"name":"2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128407586","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
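The dictionaries are learned over normalized band difference index data; a minimal sketch of that input representation is below. The band pairing follows the common NDVI convention and is illustrative; the paper derives several such indices from WorldView-2 bands:

```python
# Normalized band difference index, the input representation for dictionary learning.
import numpy as np

def normalized_difference(band_a, band_b, eps=1e-12):
    """Classic normalized difference, e.g. NDVI = (NIR - Red) / (NIR + Red)."""
    a = band_a.astype(np.float64)
    b = band_b.astype(np.float64)
    return (a - b) / (a + b + eps)

# Hypothetical band indices for a WorldView-2 cube of shape (H, W, bands):
# ndvi = normalized_difference(cube[..., NIR], cube[..., RED])
# Patches of such index images are sparsely coded over the learned dictionary,
# and CoSA then clusters the sparse coefficients into land-cover labels.
```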
Against conventional wisdom: Longitudinal inference for pattern recognition in remote sensing
2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) Pub Date : 2014-10-01 DOI: 10.1109/AIPR.2014.7041932
D. Rosario, Christoph Borel-Donohue, J. Romano
{"title":"Against conventional wisdom: Longitudinal inference for pattern recognition in remote sensing","authors":"D. Rosario, Christoph Borel-Donohue, J. Romano","doi":"10.1109/AIPR.2014.7041932","DOIUrl":"https://doi.org/10.1109/AIPR.2014.7041932","url":null,"abstract":"In response to Democratization of Imagery, a recent leading theme in the scientific community, we discuss a persistent imaging experiment dataset, which is being considered for public release in a foreseeable future, and present our observations analyzing a subset of the dataset. The experiment is a long-term collaborative effort among the Army Research Laboratory, Army Armament RDEC, and Air Force Institute of Technology that focuses on the collection and exploitation of longwave infrared (LWIR) hyperspectral and polarimetric imagery. In this paper, we emphasize the inherent challenges associated with using remotely sensed LWIR hyperspectral imagery for material recognition, and argue that the idealized data assumptions often made by the state of the art methods are too restrictive for real operational scenarios. We treat LWIR hyperspectral imagery for the first time as Longitudinal Data and aim at proposing a more realistic framework for material recognition as a function of spectral evolution over time. The defining characteristic of a longitudinal study is that objects are measured repeatedly through time and, as a result, data are dependent. This is in contrast to cross-sectional studies in which the outcomes of a specific event are observed by randomly sampling from a large population of relevant objects, where data are assumed independent. The scientific community generally assumes the problem of object recognition to be cross-sectional. We argue that, as data evolve over a full diurnal cycle, pattern recognition problems are longitudinal in nature and that by applying this knowledge it may lead to better algorithms.","PeriodicalId":210982,"journal":{"name":"2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127878526","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
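In data terms, the cross-sectional versus longitudinal distinction the authors draw can be sketched as follows; the array shapes and the drift summary are illustrative, not the paper's model:

```python
# Cross-sectional vs. longitudinal views of repeated LWIR spectral measurements.
import numpy as np

n_materials, n_times, n_bands = 10, 24, 256      # hypothetical hourly diurnal cube
spectra = np.random.rand(n_materials, n_times, n_bands)

# Cross-sectional: flatten away time; every spectrum treated as an independent sample.
cross_sectional = spectra.reshape(-1, n_bands)

# Longitudinal: keep each material's trajectory through the diurnal cycle and
# model its evolution, e.g. summarize hour-to-hour spectral drift per material.
per_material_drift = spectra[:, 1:, :] - spectra[:, :-1, :]
mean_drift = per_material_drift.mean(axis=1)     # (n_materials, n_bands)
```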
Human activity detection using sparse representation
2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) Pub Date : 2014-10-01 DOI: 10.1109/AIPR.2014.7041933
D. Killedar, S. Sasi
{"title":"Human activity detection using sparse representation","authors":"D. Killedar, S. Sasi","doi":"10.1109/AIPR.2014.7041933","DOIUrl":"https://doi.org/10.1109/AIPR.2014.7041933","url":null,"abstract":"Human activity detection from videos is very challenging, and has got numerous applications in sports evalution, video surveillance, elder/child care, etc. In this research, a model using sparse representation is presented for the human activity detection from the video data. This is done using a linear combination of atoms from a dictionary and a sparse coefficient matrix. The dictionary is created using a Spatio Temporal Interest Points (STIP) algorithm. The Spatio temporal features are extracted for the training video data as well as the testing video data. The K-Singular Value Decomposition (KSVD) algorithm is used for learning dictionaries for the training video dataset. Finally, human action is classified using a minimum threshold residual value of the corresponding action class in the testing video dataset. Experiments are conducted on the KTH dataset which contains a number of actions. The current approach performed well in classifying activities with a success rate of 90%.","PeriodicalId":210982,"journal":{"name":"2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115232264","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
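A minimal sketch of the classification step described in the abstract: learn one dictionary per action class, then assign a test feature vector to the class whose dictionary reconstructs it with minimum residual. The K-SVD dictionary learning and STIP feature extraction are omitted; the OMP sparse coder and sparsity level are illustrative stand-ins:

```python
# Minimum-residual classification over per-class dictionaries.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def classify_by_residual(feature, class_dicts, n_nonzero=10):
    """feature: (d,) vector; class_dicts: {label: (d, n_atoms) dictionary}."""
    best_label, best_residual = None, np.inf
    for label, D in class_dicts.items():
        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero,
                                        fit_intercept=False)
        omp.fit(D, feature)                         # sparse code over this class's atoms
        residual = np.linalg.norm(feature - D @ omp.coef_)
        if residual < best_residual:
            best_label, best_residual = label, residual
    return best_label
```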
Performance benefits of sub-diffraction sized pixels in imaging sensors
2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) Pub Date : 2014-05-28 DOI: 10.1117/12.2053443
J. Caulfield, J. Curzan, N. Dhar
{"title":"Performance benefits of sub-diffraction sized pixels in imaging sensors","authors":"J. Caulfield, J. Curzan, N. Dhar","doi":"10.1117/12.2053443","DOIUrl":"https://doi.org/10.1117/12.2053443","url":null,"abstract":"Infrared Focal Plane Arrays have been developed with reductions in pixel size below the Nyquist limit imposed by the optical systems Point Spread Function (PSF). These smaller sub diffraction limited pixels allows spatial oversampling of the image. We show that oversampling the PSF allows improved fidelity in imaging, resulting in sensitivity improvements due to pixel correlation, reduced false alarm rates, improved detection ranges, and an improved ability to track closely spaced objects.","PeriodicalId":210982,"journal":{"name":"2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129734283","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
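The Nyquist limit the abstract refers to follows from the optics: a diffraction-limited system passes spatial frequencies up to 1/(λ·F#), so sampling at or above Nyquist requires a pixel pitch of at most λ·F#/2. A worked example (the wavelength and f-number are illustrative MWIR values, not the paper's sensor):

```python
# Nyquist pixel pitch for a diffraction-limited optic: pitch <= lambda * F# / 2.
def nyquist_pitch_um(wavelength_um, f_number):
    """Largest pixel pitch (microns) that still samples the PSF at Nyquist."""
    return wavelength_um * f_number / 2.0

print(nyquist_pitch_um(4.0, 2.5))  # 5.0 um: any smaller pixel oversamples the PSF
```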
Physical modeling of nuclear detonations in DIRSIG
2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) Pub Date : 2013-10-01 DOI: 10.1109/AIPR.2014.7041907
Ashley E. Green, T. Peery, Robert C. Slaughter, J. McClory
{"title":"Physical modeling of nuclear detonations in DIRSIG","authors":"Ashley E. Green, T. Peery, Robert C. Slaughter, J. McClory","doi":"10.1109/AIPR.2014.7041907","DOIUrl":"https://doi.org/10.1109/AIPR.2014.7041907","url":null,"abstract":"Digitized historic film data were used to model the fireball of a nuclear detonation and simulate the sensor response within the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model. Historic films were used to determine the temperature and dimensions of the nuclear fireball and create an input to DIRSIG. DIRSIG was used to analyze how environmental interactions change the optical signal received by a realistic sensor.","PeriodicalId":210982,"journal":{"name":"2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123059106","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
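The physical core of mapping a film-derived fireball temperature to an optical signal is blackbody radiation; a minimal sketch of Planck's law, with an illustrative temperature (the abstract does not give the fireball parameters used):

```python
# Blackbody spectral radiance via Planck's law, the source term a fireball
# temperature contributes before DIRSIG's atmospheric and sensor effects.
import numpy as np

H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck_radiance(wavelength_m, temperature_k):
    """Spectral radiance in W / (m^2 * sr * m) at the given wavelength(s)."""
    return (2.0 * H * C**2 / wavelength_m**5 /
            np.expm1(H * C / (wavelength_m * KB * temperature_k)))

# Illustrative: visible-to-SWIR radiance for an 8000 K fireball surface.
# radiance = planck_radiance(np.linspace(0.4e-6, 2.5e-6, 100), 8000.0)
```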