Proceedings 30th Applied Imagery Pattern Recognition Workshop (AIPR 2001). Analysis and Understanding of Time Varying Imagery: Latest Articles

Model-based face tracking for dense motion field estimation
T. Gee, R. Mersereau
DOI: 10.1109/AIPR.2001.991218 | Published: 2001-10-10
Abstract: When estimating the dense motion field of a video sequence, if little is known or assumed about the content, a limited-constraint approach such as optical flow must be used. Since optical flow algorithms generally use a small spatial area to determine each motion vector, the resulting motion field can be noisy, particularly if the input video sequence is noisy. If the moving subject is known to be a face, that constraint can be used to improve the motion field results. This paper describes a method for deriving dense motion field data using a face tracking approach. A face model is manually initialized to fit a face at the beginning of the input sequence. A Kalman filtering approach is then used to track the face movements and successively fit the face model to the face in each frame. The 2D displacement vectors are calculated from the projection of the facial model, which is allowed to move in 3D space and may have a 3D shape. We have experimented with planar, cylindrical, and Candide face models. The resulting motion field is used in multiple-frame restoration of a face in noisy video.
Citations: 6
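The Kalman filtering step at the heart of such a tracker can be illustrated with a minimal constant-velocity filter for a single coordinate. This is a generic sketch with assumed noise parameters (F, H, Q, R below), not the authors' face-model implementation:

```python
import numpy as np

# Constant-velocity Kalman filter for one tracked coordinate.
# State x = [position, velocity]; all noise parameters are assumed.
F = np.array([[1.0, 1.0],   # state transition: pos += vel each frame
              [0.0, 1.0]])
H = np.array([[1.0, 0.0]])  # we observe position only
Q = np.eye(2) * 1e-3        # process noise covariance (assumed)
R = np.array([[0.5]])       # measurement noise covariance (assumed)

def kalman_step(x, P, z):
    """One predict/update cycle given measurement z."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    y = z - H @ x                   # innovation
    S = H @ P @ H.T + R             # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Track a point moving at constant velocity 1.0 from noisy measurements.
rng = np.random.default_rng(0)
x, P = np.array([0.0, 0.0]), np.eye(2)
for t in range(1, 50):
    z = np.array([t * 1.0 + rng.normal(0, 0.5)])
    x, P = kalman_step(x, P, z)
# x[1] (the estimated velocity) should approach the true value 1.0
```

The same predict/update structure carries over to the paper's setting, where the state holds the 3D pose parameters of the face model instead of a scalar position.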
The surgical CAD/CAM paradigm and an implementation for robotically-assisted percutaneous local therapy
G. Fichtinger, D. Stoianovici, R. Taylor
DOI: 10.1109/AIPR.2001.991195 | Published: 2001-10-10
Abstract: Computer-integrated surgery represents a growing segment of our national healthcare system. These systems transform preoperative images and other information into models of individual patients, assist clinicians in developing an optimized interventional plan, register this preoperative data to the actual patient in the operating room, and then use a variety of means, such as robots and image overlay displays, to assist in the accurate execution of the planned interventions. Finally, they perform complex postoperative analysis of the interventions. Borrowing analogies from industrial production systems, the process was named surgical CAD/CAM. Percutaneous ("through skin") local therapies represent a significant portion of minimally invasive procedures. They involve the insertion of tubular therapy devices (needles, catheters, bone drills, screws, tissue-ablating devices, etc.) into the body under the guidance of intraoperative imaging devices such as CT, MRI, ultrasound, or fluoroscopy. Percutaneous systems also depend on sophisticated image acquisition and analysis tools. This paper provides an introduction to the surgical CAD/CAM paradigm and presents an implementation of the paradigm for percutaneous local therapies.
Citations: 4
Graph-based matching of occluded hand gestures
Atid Shamaie, Alistair Sutherland
DOI: 10.1109/AIPR.2001.991205 | Published: 2001-10-10
Abstract: Occlusion is an unavoidable issue in most machine vision areas, and recognition of partially occluded hand gestures is an important problem. In this paper a new algorithm is proposed for the recognition of occluded and non-occluded hand gestures, based on matching the graphs of gestures in an eigenspace.
Citations: 12
Evaluation of ATR algorithms employing motion imagery
J. Irvine
DOI: 10.1109/AIPR.2001.991204 | Published: 2001-10-10
Abstract: Most automated target recognition (ATR) algorithms developed for intelligence, surveillance, and reconnaissance (ISR) missions operate on a single frame of still imagery to detect, recognize, and geolocate targets of interest. The introduction of digital motion imagery for ISR applications raises the need for automated tools to assist the image analyst (IA). Furthermore, the temporal information and frequent revisit available from motion imagery facilitate the extraction of information not previously available to the IA. Consequently, the evaluation methods needed for assessing the performance of ATR processing of motion imagery extend beyond the framework employed in traditional ATR evaluations. This paper presents the issues associated with evaluations of ATR algorithms for motion imagery and develops approaches for addressing them. The major issues fall into three broad categories. (1) Characterization of the testing problem: the concepts of standard operating conditions and extended operating conditions, used to distinguish "easy" ATR problems from "hard" ones, require some modification for motion imagery; for example, targets in the clear could prove challenging if target density is high or vehicle tracks cross frequently. (2) Developing image truth and scoring rules: the temporal dimension raises ambiguities about what constitutes successful target detection. Is it necessary to detect and track a vehicle through a full video clip, or is detection in a single frame sufficient? (3) Performance metrics: new metrics that go beyond simple detection, identification, and false alarm rates are needed to characterize performance in the context of motion imagery. We propose an approach to quantify battlefield awareness based on simple measures of performance.
Citations: 12
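The frame-level scoring the abstract alludes to can be made concrete with a small sketch: greedily match detections to ground-truth targets within a distance gate and count hits, misses, and false alarms. The matching rule and gate are assumptions for illustration, not the paper's scoring protocol:

```python
def score_frame(truth, detections, max_dist=5.0):
    """Greedy one-to-one matching of detections to ground-truth targets.

    truth, detections: lists of (x, y) positions in one frame.
    Returns (hits, misses, false_alarms). max_dist is an assumed gate.
    """
    unmatched = list(truth)
    hits = 0
    for dx, dy in detections:
        # Find the nearest still-unmatched truth target within the gate.
        best, best_d = None, max_dist
        for t in unmatched:
            d = ((dx - t[0]) ** 2 + (dy - t[1]) ** 2) ** 0.5
            if d <= best_d:
                best, best_d = t, d
        if best is not None:
            unmatched.remove(best)
            hits += 1
    return hits, len(unmatched), len(detections) - hits

truth = [(10, 10), (50, 50), (90, 20)]
dets = [(11, 9), (52, 51), (30, 30)]   # two near-hits, one spurious report
print(score_frame(truth, dets))        # -> (2, 1, 1)
```

The paper's ambiguity about what counts as a detection shows up here as a design choice: these per-frame counts must still be aggregated across a clip, e.g. requiring a target to be hit in some fraction of the frames in which it is visible.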
Evaluating the benefits of assisted target recognition
B. A. Eckstein, J. Irvine
DOI: 10.1109/AIPR.2001.991201 | Published: 2001-10-10
Abstract: Image exploitation systems that employ automatic target recognition (ATR) technology generally require a human in the loop to validate the ATR results, i.e. assisted target recognition. To evaluate the benefits of assisted target recognition, one must first understand the human's interaction with the ATR algorithms, plus the many factors that influence the performance of both the imagery analyst and the ATR. Thus, any assessment of assisted target recognition must be designed so that performance differences due to ATR assistance are isolated from other factors that affect image exploitation. An aided target acquisition perception testing (ATAPT) demonstration created procedures for assessing assisted and unassisted image exploitation, validating the methodology, metrics and software architecture. This paper describes the ATAPT demonstration and discusses the insights gained from this exercise, which will improve future evaluations of systems that use ATR-assisted image exploitation.
Citations: 11
An adaptive technique for the extraction of object region and boundary from images with complex environment
Deepthi Valaparla, V. Asari
DOI: 10.1109/AIPR.2001.991226 | Published: 2001-10-10
Abstract: A fast and accurate method for the extraction of object region and boundary from images with a complex background environment is presented in this paper. The segmentation procedure begins with the computation of an optimum threshold to distinguish the darker regions in the image. It is an automatic thresholding algorithm that works under all lighting conditions, where prefixing a threshold value is considered ineffective. The centre of mass of this thresholded region acts as a seed for further processing. The object region is then obtained using a region-growing technique called integrated neighbourhood search. A quad-structure-based technique is used to enhance the speed of the region search significantly, and a back-projection algorithm is used to optimise the search for pixels belonging to the object region. A boundary thinning and connecting algorithm, based on the application of a novel search window to the preliminary boundary, is used to obtain a connected single-pixel-width boundary. The new method does not need a priori knowledge of the image characteristics. The main advantage of the proposed technique is its high-speed response, which brings an average 36% decrease in processing time and facilitates real-time analysis of the images.
Citations: 8
Distributed multiuser visualization of time varying anatomical data
Arthur W. Wetzel, S. Pomerantz, Démian Nave, W. Meixner, G. Johnson
DOI: 10.1109/AIPR.2001.991211 | Published: 2001-10-10
Abstract: We describe a networked environment for navigating and visualizing 3-dimensional anatomical data, with extensions for time-varying volumes. The Duke Center for In Vivo Microscopy (CIVM) has been capturing volumetric data of mice using magnetic resonance microscopy. Current data sets are 512^3 voxels with 16-bit precision per voxel at an isotropic resolution of 50 microns; a new instrument will provide larger 512*512*2048 volumes. Because magnetic resonance imaging is nondestructive, both rapid time series and longer-interval developmental series can be taken from living specimens. Our work builds on techniques put in place at the Pittsburgh Supercomputing Center and the University of Michigan for navigating Visible Human data using a client-server implementation, but applied to CIVM mouse data. Extension of the system to 4-dimensional data sets involves changes to compressed data representations and client viewing mechanisms. An essential aspect of the mouse studies is to facilitate comparison between different specimens, or even the same specimen over time, for studies of morphologic phenotype expression in gene knockouts.
Citations: 0
A multiresolution approach for video texture registration
R. Bonneau, M. Novak, J. Perretta, S. Ertan
DOI: 10.1109/AIPR.2001.991214 | Published: 2001-10-10
Abstract: Electro-optical imagery can have uniform characteristics that prevent it from being registered by conventional edge-based methods. Such uniform characteristics, if they are periodic, can be exploited using multi-resolution texture extraction techniques. We first use a multi-resolution Markov model to represent electro-optical textures and apply an autoregressive statistical approach to find correspondence between two images. We then demonstrate how this approach reduces the computational complexity of registering two successive frames of video.
Citations: 0
A realtime object tracking system using a color camera
G. V. Paul, G. Beach, C. Cohen
DOI: 10.1109/AIPR.2001.991216 | Published: 2001-10-10
Abstract: We describe a real-time object tracking system based on a color camera and a personal computer. The system is capable of tracking colored objects in the camera view in real time. The algorithm uses the color, shape and motion of the object to achieve robust tracking, even in the presence of partial occlusion and shape change. A key component of the system is a computationally efficient method for tracking colored objects, which makes robust real-time tracking possible.
Citations: 8
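A common core of such color trackers is to threshold a color channel and follow the centroid of the resulting blob from frame to frame. The sketch below assumes a single hue-like channel and fixed band limits; it illustrates the general idea, not this system's specific algorithm:

```python
import numpy as np

def track_color_centroid(frame, lo, hi):
    """Locate a colored object as the centroid of pixels whose value in a
    hue-like channel falls in [lo, hi]. frame: HxW single-channel array
    (a stand-in for one HSV channel). Returns (row, col), or None when
    the object is not visible (e.g. fully occluded)."""
    mask = (frame >= lo) & (frame <= hi)
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return float(ys.mean()), float(xs.mean())

# A 100x100 "hue" image with a uniform patch of value 30 at rows/cols 40-59.
frame = np.zeros((100, 100))
frame[40:60, 40:60] = 30
print(track_color_centroid(frame, 25, 35))   # -> (49.5, 49.5)
```

Partial occlusion shrinks the mask but barely moves the centroid, which is one reason centroid-of-mask tracking degrades gracefully; shape and motion cues, as in the paper, are what disambiguate multiple similarly colored blobs.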
Towards robust face recognition from video
J. R. Price, T. Gee
DOI: 10.1109/AIPR.2001.991209 | Published: 2001-10-10
Abstract: A novel, template-based method for face recognition is presented. The goals of the proposed method are to integrate multiple observations for improved robustness and to provide auxiliary confidence data for subsequent use in an automated video surveillance system. The proposed framework consists of a parallel system of classifiers, referred to as observers, where each observer is trained on one face region. The observer outputs are combined to yield the final recognition result. Three of the four confounding factors (expression, illumination, and decoration) are specifically addressed in this paper. The extension of the proposed approach to address the fourth confounding factor (pose) is straightforward and well supported in previous work. A further contribution of the proposed approach is the computation of a revealing confidence measure. This confidence measure will aid the subsequent application of the proposed method to video surveillance scenarios. Results are reported for a database comprising 676 images of 160 subjects under a variety of challenging circumstances. These results indicate significant performance improvements over previous methods and demonstrate the usefulness of the confidence data.
Citations: 18
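The combination of parallel observers plus a confidence measure can be sketched with a simple fusion rule. Both the fusion (a weighted mean of per-region scores) and the confidence (the margin between the top two fused scores) are assumptions chosen for illustration, not the authors' exact formulation:

```python
import numpy as np

def combine_observers(scores, weights=None):
    """Fuse per-region classifier scores into one identity decision plus
    a confidence value. scores: (n_observers, n_subjects) array of
    similarity scores, one row per face-region observer."""
    scores = np.asarray(scores, dtype=float)
    if weights is None:
        weights = np.ones(scores.shape[0]) / scores.shape[0]
    fused = weights @ scores                       # weighted mean per subject
    order = np.argsort(fused)[::-1]                # subjects, best first
    margin = fused[order[0]] - fused[order[1]]     # top-two score margin
    return int(order[0]), float(margin)

# Three observers (e.g. eye, nose, mouth regions) scoring four subjects.
scores = [[0.9, 0.2, 0.4, 0.1],
          [0.8, 0.3, 0.5, 0.2],
          [0.7, 0.1, 0.6, 0.3]]
best, conf = combine_observers(scores)   # subject 0 wins; margin ~0.3
```

A small margin flags an unreliable match, which is exactly the kind of auxiliary confidence signal a downstream surveillance system can use to defer or request more frames.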