34th Applied Imagery and Pattern Recognition Workshop (AIPR'05): Latest Publications

Optimizing image segmentation using color model mixtures
34th Applied Imagery and Pattern Recognition Workshop (AIPR'05) | Pub Date: 2005-10-19 | DOI: 10.1109/AIPR.2005.38
Aristide C. Chikando, J. Kinser
Abstract: Several mathematical color models have been proposed to segment images based on their color information content. The most frequently used such models include RGB, HSV, and YCbCr. These models were designed to represent color and, in some cases, to emulate how reflected light is perceived by the human eye; they were not, however, designed specifically for image segmentation. In this study, the efficiency of several color models for image segmentation is assessed, and more efficient models, consisting of color model mixtures, are explored. Two of the studied models, YCbCr and linear, were found to be more efficient for segmentation. Additionally, multivariate analysis showed that the model mixtures were more efficient than the most commonly used models studied, and thus optimized the segmentation.
Citations: 19
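As a hedged illustration of the kind of comparison the abstract describes (not the authors' code), the Python sketch below converts an RGB image to YCbCr using the ITU-R BT.601 coefficients and segments it with a naive k-means on the chrominance channels; the channel choice, cluster count, and random test image are assumptions made only for this example.
```python
import numpy as np

def rgb_to_ycbcr(img):
    """Convert an RGB image (H x W x 3, values in [0, 255]) to YCbCr per ITU-R BT.601."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def kmeans_segment(features, k=3, iters=20, seed=0):
    """Naive k-means over per-pixel feature vectors; returns an H x W label map."""
    h, w, c = features.shape
    x = features.reshape(-1, c)
    rng = np.random.default_rng(seed)
    centers = x[rng.choice(len(x), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((x[:, None, :] - centers[None]) ** 2).sum(axis=-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean(axis=0)
    return labels.reshape(h, w)

# Segment a random test image on the chrominance (Cb, Cr) channels only.
rgb = np.random.default_rng(1).uniform(0.0, 255.0, size=(64, 64, 3))
labels = kmeans_segment(rgb_to_ycbcr(rgb)[..., 1:], k=3)
```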
3D scene modeling using sensor fusion with laser range finder and image sensor
34th Applied Imagery and Pattern Recognition Workshop (AIPR'05) | Pub Date: 2005-10-19 | DOI: 10.1109/AIPR.2005.5
Yunqian Ma, Z. Wang, Michael E. Bazakos, W. Au
Abstract: Activity detection (e.g., recognizing people's behavior and intent), when used over an extended range of applications, suffers from high false detection rates. Activity detection limited to the 2D image domain (symbolic space) is also confined to qualitative activities: symbolic features, represented by apparent dimensions (pixels), vary with distance and viewing angle. One way to enhance performance is to work in physical space, where object features are represented by their physical dimensions (e.g., inches or centimeters) and are invariant to distance and viewing angle. In this paper, we propose an approach that constructs a 3D site model and co-registers the video with it, providing a real-time physical reference at every pixel in the video. The 3D site model is created via fusion of a laser range finder and a single camera. We present experimental results to demonstrate the approach.
Citations: 7
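The co-registration step the abstract relies on amounts to projecting 3D site-model points into the camera image so each pixel gains a physical reference. A minimal pinhole-projection sketch follows; the intrinsic matrix, rotation, and translation values are hypothetical placeholders, not calibration data from the paper.
```python
import numpy as np

def project_to_pixel(point_world, K, R, t):
    """Project a 3D world point (meters) into pixel coordinates with a pinhole model.

    K: 3x3 camera intrinsics; R, t: world-to-camera rotation and translation.
    Returns (u, v) pixel coordinates and the depth along the optical axis.
    """
    p_cam = R @ point_world + t   # world frame -> camera frame
    u, v, w = K @ p_cam           # perspective projection onto the image plane
    return u / w, v / w, p_cam[2]

# Hypothetical calibration, for illustration only.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.zeros(3)
u, v, depth = project_to_pixel(np.array([1.0, 0.5, 10.0]), K, R, t)
```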
Medical image watermarking for multiple modalities
34th Applied Imagery and Pattern Recognition Workshop (AIPR'05) | Pub Date: 2005-10-19 | DOI: 10.1109/AIPR.2005.33
A. Maeder, B. Planitz
Abstract: Transfer of digital medical images between multiple parties requires assurance of image identity and integrity, which can be achieved through image watermarking. This raises concerns about loss in viewer performance due to degradation of image quality. Here we describe an approach that keeps the impact on image quality well below the threshold of visual perceptibility. The approach rests on the choice of a suitably light payload and the use of different watermarking methods and parameters for different medical image types. We provide examples of this approach applied to MR, CT, and CR images.
Citations: 11
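The paper varies the watermarking method per modality; as one common lightweight example of the "suitably light payload" idea (an assumption here, not necessarily the authors' embedding scheme), the sketch below hides a short binary tag in the least significant bits of a 16-bit image and reads it back.
```python
import numpy as np

def embed_lsb(image, payload_bits):
    """Embed a short binary payload in the least significant bits of the first
    len(payload_bits) pixels (row-major order). A deliberately light payload keeps
    the change far below typical thresholds of visual perceptibility."""
    flat = image.astype(np.uint16).flatten()
    n = len(payload_bits)
    if n > flat.size:
        raise ValueError("payload larger than image")
    flat[:n] = (flat[:n] & 0xFFFE) | np.asarray(payload_bits, dtype=np.uint16)
    return flat.reshape(image.shape)

def extract_lsb(image, n_bits):
    """Read the embedded payload back from the least significant bits."""
    return (image.flatten()[:n_bits] & 1).astype(np.uint8)

# A 64-bit identity/integrity tag embedded in a synthetic 16-bit "CT-like" slice.
rng = np.random.default_rng(0)
slice16 = rng.integers(0, 4096, size=(128, 128), dtype=np.uint16)
tag = rng.integers(0, 2, size=64, dtype=np.uint8)
marked = embed_lsb(slice16, tag)
assert np.array_equal(extract_lsb(marked, 64), tag)
```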
Multimodal biometric identification for large user population using fingerprint, face and iris recognition
34th Applied Imagery and Pattern Recognition Workshop (AIPR'05) | Pub Date: 2005-10-19 | DOI: 10.1109/AIPR.2005.35
Teddy Ko
Abstract: Biometric systems based on a single modality are often unable to meet the desired performance requirements of large-user-population applications, due to problems such as noisy data, intra-class variations, restricted degrees of freedom, non-universality, spoof attacks, and unacceptable error rates. Multimodal biometrics refers to the use of two or more biometric modalities in a single identification system. The most compelling reason to combine modalities is to improve recognition accuracy, which is possible when the features of the different biometrics are statistically independent. This paper overviews the scenarios possible in multimodal biometric systems using fingerprint, face, and iris recognition, the levels at which fusion can occur, and the integration strategies that can be adopted to fuse information and improve overall system accuracy. It also discusses how the image quality of the fingerprint, face, and iris samples affects overall identification accuracy and the staffing needed for secondary human validation. For a large-user-population identification system, which may hold tens or hundreds of millions of enrolled subject images and must process hundreds of thousands of identification requests, identification accuracy and the staffing required to operate the system properly are two of the most important factors in determining whether the system is properly designed and integrated.
Citations: 102
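Score-level fusion, one of the fusion levels the paper surveys, can be illustrated with a weighted sum of min-max-normalized matcher scores; the sketch below is a generic example, and the scores, weights, and modality names are invented for illustration.
```python
def normalize_scores(scores):
    """Min-max normalize a list of match scores to [0, 1]."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) if hi > lo else 0.0 for s in scores]

def fuse_scores(modality_scores, weights):
    """Weighted-sum score-level fusion across modalities.

    modality_scores: dict modality -> list of normalized scores, one per candidate.
    weights: dict modality -> weight (ideally summing to 1).
    Returns one fused score per candidate.
    """
    n = len(next(iter(modality_scores.values())))
    return [sum(weights[m] * modality_scores[m][i] for m in modality_scores)
            for i in range(n)]

# Hypothetical scores for three candidates from three matchers.
scores = {
    "fingerprint": normalize_scores([120.0, 35.0, 60.0]),
    "face":        normalize_scores([0.82, 0.41, 0.77]),
    "iris":        normalize_scores([0.30, 0.10, 0.95]),
}
weights = {"fingerprint": 0.5, "face": 0.2, "iris": 0.3}
fused = fuse_scores(scores, weights)
best_candidate = fused.index(max(fused))
```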
A control theoretic method for categorizing visual imagery as human motion behaviors
34th Applied Imagery and Pattern Recognition Workshop (AIPR'05) | Pub Date: 2005-10-19 | DOI: 10.1109/AIPR.2005.6
C. Cohen
Abstract: We propose a method that not only identifies humans in the environment and their locations, but also classifies and identifies their activities, providing a threat assessment. Such assessments would be useful for both human and vehicle activities in crowds, to distinguish aberrant behavior from previously identified truth data sets. Recognizing such aberrant behavior would support IED detection, RPG detection, and the recognition of suicide bombers before explosives are planted and activated. The heuristics involved require recognizing information-bearing features in the environment and determining how those features relate to each other over time (that is, gesture recognition). This paper addresses the mathematical development necessary to create a behavior and gait recognition sensor system founded on the recognition of combined individual gestures.
Citations: 3
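The abstract's idea of matching observed feature trajectories against candidate motion behaviors can be caricatured as picking the dynamic model with the smallest prediction residual; the sketch below is only that caricature, with made-up trajectories and models, and does not reproduce the paper's control-theoretic formulation.
```python
import numpy as np

def residual(trajectory, predictor):
    """Sum of squared errors between an observed trajectory and a candidate model."""
    preds = np.array([predictor(k) for k in range(len(trajectory))])
    return float(np.sum((np.asarray(trajectory) - preds) ** 2))

def classify_gesture(trajectory, models):
    """Return the candidate motion model with the smallest residual, plus all scores."""
    scores = {name: residual(trajectory, f) for name, f in models.items()}
    return min(scores, key=scores.get), scores

# Hypothetical 1-D hand-position track and two candidate motion behaviors.
k = np.arange(30)
observed = 0.5 * k + np.random.default_rng(2).normal(0.0, 0.2, size=30)
models = {
    "linear_sweep": lambda k: 0.5 * k,                # constant-velocity gesture
    "oscillation": lambda k: 5.0 * np.sin(0.4 * k),   # waving gesture
}
label, scores = classify_gesture(observed, models)
```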
A fast piecewise deformable method for multi-modality image registration
34th Applied Imagery and Pattern Recognition Workshop (AIPR'05) | Pub Date: 2005-10-19 | DOI: 10.1109/AIPR.2005.7
Girish Gopalakrishnan, S. Kumar, A. Narayanan, R. Mullick
Abstract: Medical image fusion is becoming increasingly popular for enhancing diagnostic accuracy by intelligently 'fusing' information obtained from two different images. These images may come from the same modality at different times or from multiple modalities recording complementary information. Because of the nature of the human body, as well as patient motion and breathing, deformable registration algorithms are needed in medical imaging. Typical nonparametric (deformable) registration algorithms, such as fluid-based, demons, and curvature-based techniques, are computationally intensive and have been demonstrated only for mono-modality registration. We propose a fast deformable algorithm using a two-tiered strategy in which a global MI-based affine registration is followed by local piecewise refinement. We have tested the method on CT and PET images and validated it with clinical experts.
Citations: 7
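The global stage of the proposed two-tiered strategy maximizes mutual information (MI). The sketch below shows just the MI similarity measure computed from a joint intensity histogram, with the affine search and piecewise refinement omitted; the synthetic images exist only to exercise the function.
```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Mutual information between two images from their joint intensity histogram."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of image A
    py = pxy.sum(axis=0, keepdims=True)   # marginal of image B
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# A perfectly aligned pair scores higher than a shifted (misaligned) pair.
rng = np.random.default_rng(0)
fixed = rng.random((64, 64))
moving = np.roll(fixed, 3, axis=1)   # crude stand-in for a misaligned scan
print(mutual_information(fixed, fixed), mutual_information(fixed, moving))
```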
See-through-wall imaging using ultra wideband pulse systems
34th Applied Imagery and Pattern Recognition Workshop (AIPR'05) | Pub Date: 2005-10-19 | DOI: 10.1109/AIPR.2005.40
M. Mahfouz, A. Fathy, Yunqiang Yang, Emam ElHak Ali, A. Badawi
Abstract: Surveillance and navigation systems in use today rely heavily on television, infrared, and other line-of-sight hardware. However, these systems cannot determine what is happening, or locate persons and assets, on the other side of a wall, behind bushes, in the dark, in a tunnel or cave, or through dense fog. Our objective is to develop a new sensor based on UWB technology: one or more small, lightweight, low-power transceivers that exploit the fact that microwave frequencies can be optimized to penetrate nonmetallic materials while providing very precise ranging information. This new surveillance and navigation capability can help reveal what is inside a wall or on the other side of a door, and can be extended to provide precise global positioning in areas where such services are denied, such as tunnels or caves. This paper presents our efforts along these lines, including image enhancements.
Citations: 37
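UWB ranging rests on pulse round-trip timing, with range resolution set by bandwidth. The short sketch below computes both under a free-space assumption and ignores wall propagation effects, so the numbers are illustrative only.
```python
C = 299_792_458.0  # speed of light in free space, m/s

def range_from_delay(round_trip_s):
    """Target range from the round-trip delay of a UWB pulse (free-space assumption)."""
    return C * round_trip_s / 2.0

def range_resolution(bandwidth_hz):
    """Approximate range resolution of a pulse with the given bandwidth: c / (2B)."""
    return C / (2.0 * bandwidth_hz)

# A 2 GHz-bandwidth pulse resolves about 7.5 cm; a 40 ns round trip puts a target about 6 m away.
print(range_resolution(2e9), range_from_delay(40e-9))
```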
An overview of through the wall surveillance for homeland security
34th Applied Imagery and Pattern Recognition Workshop (AIPR'05) | Pub Date: 2005-10-19 | DOI: 10.1109/AIPR.2005.18
S. Borek
Abstract: The Air Force Research Laboratory Information Directorate (AFRL/IF), under sponsorship of the Department of Justice's (DOJ) National Institute of Justice (NIJ) Office of Science and Technology (OS&T), is currently developing and evaluating advanced through-the-wall surveillance (TWS) technologies. These technologies fall into two categories: inexpensive handheld systems for locating an individual or individuals behind a wall or door, and portable, personal computer (PC) based standoff systems for determining events during critical incident situations. The technologies are primarily active radars operating in the UHF, L, S (ultra-wideband, UWB), X, and Ku bands. The data these systems display indicate range (one dimension), or range and azimuth (two dimensions), to the moving individual(s). This paper highlights the technologies employed in five prototype TWS systems delivered to NIJ and AFRL/IF for test and evaluation.
Citations: 74
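The one-dimensional range displays such systems produce typically depend on suppressing stationary clutter so that only moving individuals remain. The sketch below shows a generic exponential-average clutter canceller over successive range profiles; it is an assumption about this processing step, not a description of the delivered prototypes.
```python
import numpy as np

def moving_target_profiles(scans, alpha=0.05):
    """Simple exponential-average clutter canceller for a sequence of range profiles.

    scans: 2-D array, one row per slow-time pulse, one column per range bin.
    Stationary returns (walls, furniture) cancel out; changes caused by a person
    moving behind the wall remain in the output.
    """
    clutter = scans[0].astype(float)
    out = np.zeros_like(scans, dtype=float)
    for i, profile in enumerate(scans):
        out[i] = profile - clutter
        clutter = (1 - alpha) * clutter + alpha * profile
    return out

def detected_range_bin(mti_profile, threshold):
    """Index of the strongest clutter-suppressed return, if it exceeds a threshold."""
    idx = int(np.argmax(np.abs(mti_profile)))
    return idx if abs(mti_profile[idx]) > threshold else None

# Example: a static wall return plus a target that appears in range bin 40 halfway through.
scans = np.tile(np.exp(-0.1 * np.arange(100)), (64, 1))
scans[32:, 40] += 0.5
mti = moving_target_profiles(scans)
print(detected_range_bin(mti[40], threshold=0.1))
```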
Millimeter-wave weapons detection system
34th Applied Imagery and Pattern Recognition Workshop (AIPR'05) | Pub Date: 2005-10-19 | DOI: 10.1109/AIPR.2005.34
D. Novak, R. Waterhouse, A. Farnham
Abstract: We propose a new electromagnetic (EM) solution for concealed weapons detection at a distance. Our approach exploits the fact that the weapons of interest, whether a handgun, knife, box cutter, or other item, each have a unique set of EM characteristics. The particular novelty of our solution lies in the use of millimeter-wave (mm-wave) signals over a wide frequency band (26-40 GHz, the Ka-band) to excite natural resonances in the weapon and create a unique spectral signature that can be used to characterize the object. Using excitation signals in the mm-wave band brings benefits such as increased resolution and reduced component size. In addition, a wideband mm-wave excitation signal provides an enhanced EM signature with more features available for classifying the object.
Citations: 19
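Once a wideband Ka-band signature has been measured, classification can be as simple as nearest-neighbor matching against a signature library. The sketch below illustrates that step with fabricated reference curves, not measured weapon responses.
```python
import numpy as np

def classify_signature(measured, library):
    """Nearest-neighbor match of a measured spectral signature (e.g. a sampled
    26-40 GHz response) against a library of reference signatures."""
    distances = {name: float(np.linalg.norm(measured - ref)) for name, ref in library.items()}
    return min(distances, key=distances.get), distances

# Hypothetical library of normalized magnitude responses on a common frequency grid.
freqs = np.linspace(26e9, 40e9, 128)
library = {
    "handgun":    np.abs(np.sin(freqs / 2e9)),
    "box_cutter": np.abs(np.cos(freqs / 3e9)),
}
measured = library["handgun"] + np.random.default_rng(3).normal(0.0, 0.05, size=128)
label, distances = classify_signature(measured, library)
```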
Grouping sensory primitives for object recognition and tracking
34th Applied Imagery and Pattern Recognition Workshop (AIPR'05) | Pub Date: 2005-10-19 | DOI: 10.1109/AIPR.2005.29
R. Madhavan, Mike Foedisch, Tommy Chang, T. Hong
Abstract: In this paper, we describe our recent efforts in grouping sensory data into meaningful entities. Our grouping philosophy is based on perceptual organization principles using gestalt hypotheses, in which we impose structural regularity on sensory primitives stemming from a common underlying cause. We present results using field data from UGVs and outline the utility of this research for object recognition and tracking in autonomous vehicle navigation. In addition, we show how the grouping efforts can help construct symbolic topological maps when data from different sensing modalities are fused in a bottom-up and top-down fashion.
Citations: 2
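A minimal instance of the proximity gestalt the abstract invokes is grouping range returns whose neighbors lie within a distance threshold. The sketch below implements that flood-fill grouping with invented sample points; it is not the authors' perceptual-organization pipeline.
```python
import numpy as np

def group_by_proximity(points, max_gap):
    """Group 2-D points (e.g. laser range returns) so that points within max_gap of
    any member join the same group: a minimal 'proximity' gestalt grouping."""
    points = np.asarray(points, dtype=float)
    n = len(points)
    labels = [-1] * n
    current = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        stack, labels[i] = [i], current
        while stack:  # flood-fill over the proximity graph
            j = stack.pop()
            d = np.linalg.norm(points - points[j], axis=1)
            for k in np.where((d < max_gap) & (np.array(labels) == -1))[0]:
                labels[k] = current
                stack.append(int(k))
        current += 1
    return labels

# Two well-separated clusters of returns form two groups: [0, 0, 0, 1, 1].
pts = [(0, 0), (0.2, 0.1), (0.1, 0.3), (5, 5), (5.2, 4.9)]
print(group_by_proximity(pts, max_gap=1.0))
```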