Proceedings 30th Applied Imagery Pattern Recognition Workshop (AIPR 2001). Analysis and Understanding of Time Varying Imagery: Latest Publications

Face detection and eye location using a modified ALISA texture module
Teddy Ko, P. Bock
DOI: 10.1109/AIPR.2001.991224 | Published: 2001-10-10
Abstract: This paper presents an automatic method for face detection and eye location using a modified version of the ALISA texture module. ALISA (Adaptive Learning Image and Signal Analysis) is an adaptive classification engine based on collective learning systems theory. Using supervised training, the ALISA engine builds a set of multi-dimensional feature histograms that estimate the joint PDF of the feature space for the trained class(es). In the current research, 4 to 6 general-purpose texture and color features are used, which require only a few thousand bins (unique feature vectors) to represent faces from several different ethnic groups because the feature space is allocated dynamically. The method first detects face regions using the ALISA texture module and then locates the eyes inside these regions. A preliminary comparison with a widely used parametric approach for modeling color information under changing illumination conditions demonstrates that the ALISA texture module offers significantly better accuracy for detecting regions of skin. The proposed method also offers competitive speed and is thus feasible for real-time application to both still images and video sequences.
Citations: 3
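
The abstract's core mechanism, a sparse joint histogram over quantized feature vectors that estimates a class-conditional PDF, can be illustrated with a short sketch. This is not the authors' ALISA implementation; the SparseJointHistogram class, the 16-level quantization, and the synthetic "skin" feature vectors below are hypothetical, and only show how dynamically allocated bins keep the histogram small while still scoring new pixels.

```python
# Minimal sketch of a sparse joint-histogram classifier (not the ALISA engine).
from collections import defaultdict
import numpy as np

class SparseJointHistogram:
    def __init__(self, n_levels=16):
        self.n_levels = n_levels          # quantization levels per feature
        self.counts = defaultdict(int)    # bins are allocated only when observed
        self.total = 0

    def _bin(self, feature_vector):
        # Quantize each feature (assumed scaled to [0, 1]) to an integer level.
        q = np.clip((np.asarray(feature_vector) * self.n_levels).astype(int),
                    0, self.n_levels - 1)
        return tuple(q)

    def train(self, feature_vectors):
        for fv in feature_vectors:
            self.counts[self._bin(fv)] += 1
            self.total += 1

    def probability(self, feature_vector):
        # Relative frequency of the bin; an estimate of the joint PDF value.
        if self.total == 0:
            return 0.0
        return self.counts.get(self._bin(feature_vector), 0) / self.total

# Example: train on hypothetical "skin" feature vectors (e.g., normalized
# chrominance plus two local texture measures), then threshold a test pixel.
rng = np.random.default_rng(0)
skin_model = SparseJointHistogram(n_levels=16)
skin_model.train(rng.normal(0.6, 0.05, size=(5000, 4)).clip(0, 1))
print(skin_model.probability([0.6, 0.6, 0.6, 0.6]) > 1e-4)  # True: dense bin near the mean
```
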
Experiments in estimation of independent 3D motion using EM
J. Kosecka
DOI: 10.1109/AIPR.2001.991217 | Published: 2001-10-10
Abstract: In this paper we address the problem of estimating multiple 3D rigid-body motions from optical flow. We use the differential epipolar constraint to measure the consistency of the local flow estimates with a 3D rigid-body motion and employ a probabilistic interpretation of the overall flow field in terms of mixture models. The estimation of the 3D motion parameters, as well as the refinement of the initial motion segmentation, is carried out using an Expectation-Maximization (EM) algorithm, which is guaranteed to improve the overall likelihood of the data. The proposed technique is a step towards estimating the 3D motion of independently moving objects in the presence of egomotion.
Citations: 0
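
A minimal sketch of the EM segmentation idea, assuming the per-motion residuals have already been computed: a two-component Gaussian mixture over residuals is fit by alternating soft assignment (E-step) with re-estimation of mixing weights and noise levels (M-step). In the paper the M-step would also refit the 3D motion parameters through the differential epipolar constraint; that step is omitted here, and the em_two_motions function and synthetic residuals are illustrative only.

```python
# Sketch of EM over per-motion residuals (illustrative, not the paper's code).
import numpy as np

def em_two_motions(residuals, n_iter=50):
    r = np.asarray(residuals, dtype=float)   # shape (N, 2): residual of each flow
                                             # vector under motion 0 and motion 1
    pi = np.array([0.5, 0.5])                # mixing weights
    sigma = np.array([1.0, 1.0])             # per-motion noise standard deviation
    for _ in range(n_iter):
        # E-step: posterior probability that each flow vector belongs to motion k.
        lik = (pi / (np.sqrt(2 * np.pi) * sigma)) * np.exp(-0.5 * (r / sigma) ** 2)
        gamma = lik / lik.sum(axis=1, keepdims=True)
        # M-step: update mixing weights and noise levels from the soft assignments.
        nk = gamma.sum(axis=0)
        pi = nk / len(r)
        sigma = np.sqrt((gamma * r ** 2).sum(axis=0) / nk)
    return gamma, pi, sigma

# Example with synthetic residuals: the first 100 flow vectors are consistent
# with motion 0, the remaining 100 with motion 1.
rng = np.random.default_rng(1)
res = np.vstack([
    np.column_stack([rng.normal(0, 0.1, 100), rng.normal(0, 2.0, 100)]),
    np.column_stack([rng.normal(0, 2.0, 100), rng.normal(0, 0.1, 100)]),
])
gamma, _, _ = em_two_motions(res)
print((gamma[:100, 0] > 0.5).mean(), (gamma[100:, 1] > 0.5).mean())  # both close to 1.0
```
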
Channel-optimized video coding for low-power wireless applications
G. Abousleman
DOI: 10.1109/AIPR.2001.991220 | Published: 2001-10-10
Abstract: This paper presents an error-robust, channel-optimized coding algorithm for the transmission of low-rate video over wireless channels. The proposed coder uses a robust channel-optimized trellis-coded quantization (COTCQ) stage that is designed to optimize the image coding based on the channel characteristics. Resilience to channel errors is obtained without the use of channel coding or error-concealment techniques. Additionally, a novel adaptive classification scheme is employed, which eliminates the need for motion compensation. The robust nature of the coder prevents impulsive channel-error-induced artifacts in the decoded video while increasing the security level of the encoded bit stream. Consequently, the proposed channel-optimized video coder is especially suitable for low-power wireless applications due to its reduced complexity, its robustness to nonstationary signals and channels, and its increased security level. Simulation results show that the coder provides outstanding quantitative and subjective coding performance over a wide variety of channel conditions.
Citations: 0
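
The coder itself is not reproducible from the abstract, but the channel-optimized quantization principle it builds on can be sketched. The example below is a plain channel-optimized scalar quantizer, not the paper's trellis-coded COTCQ stage: the encoder picks the index whose expected distortion over a binary symmetric channel is smallest, so likely bit errors land on nearby reconstruction levels. The bsc_transition_matrix and channel_optimized_encode functions are hypothetical names for this sketch.

```python
# Sketch of channel-optimized scalar quantization (illustrative only).
import itertools
import numpy as np

def bsc_transition_matrix(n_bits, ber):
    """P[i, j] = probability that transmitted index i is received as index j."""
    n = 1 << n_bits
    P = np.empty((n, n))
    for i, j in itertools.product(range(n), repeat=2):
        d = bin(i ^ j).count("1")                      # Hamming distance
        P[i, j] = (ber ** d) * ((1 - ber) ** (n_bits - d))
    return P

def channel_optimized_encode(x, codebook, P):
    # Expected squared error of sending index i: sum_j P[i, j] * (x - c_j)^2.
    expected_distortion = P @ (x - codebook) ** 2
    return int(np.argmin(expected_distortion))

# Example: a 3-bit quantizer over a 10% bit-error-rate channel. On a clean
# channel the nearest level always wins; with errors, an index whose likely
# corruptions decode to nearby levels can win instead.
codebook = np.linspace(-1.0, 1.0, 8)
P = bsc_transition_matrix(3, ber=0.10)
for x in (-0.9, 0.05, 0.7):
    i = channel_optimized_encode(x, codebook, P)
    print(x, "->", i, codebook[i])
```
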
Image fusion of 4D cardiac CTA and MR images
B. Sturm, K. Powell, S. Halliburton, Richard D. White
DOI: 10.1109/AIPR.2001.991198 | Published: 2001-10-10
Abstract: Various imaging techniques can be used to evaluate coronary artery disease and cardiac function, but no single imaging modality provides extensive information on both. The goal of this research is to develop a method for combining the coronary vasculature obtained from computed tomographic angiography (CTA) data with the myocardial functional information obtained from dynamic magnetic resonance (MR) data. Temporally matched cardiac CTA and trueFISP (true Fast Imaging with Steady-state Precession) MR images were obtained from human subjects. Each CTA volume was resampled according to the MR sampling scheme and spatially registered with the MR volume using an iterative closest point algorithm on the epicardial boundary points. The segmented coronary vasculature from CTA was surface rendered along with the original MR image planes, and an expert in cardiac imaging visually verified the results of the 4D fusion. Future work will investigate this technique using additional data sets and other types of functional MR images, i.e. perfusion and SPAMM (SPAtial Modulation of Magnetization).
Citations: 2
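
The spatial registration step named in the abstract, iterative closest point (ICP) alignment of epicardial boundary points, is standard enough to sketch. The code below is a generic point-to-point ICP with a Kabsch/SVD rigid fit, not the authors' implementation, and the synthetic boundary points are illustrative.

```python
# Sketch of point-to-point ICP with a closed-form rigid fit (illustrative).
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t

def icp(src, dst, n_iter=30):
    cur = src.copy()
    for _ in range(n_iter):
        # Nearest-neighbor correspondences (brute force; a k-d tree scales better).
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=2)
        matched = dst[d2.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    # Total transform mapping the original src points onto dst.
    return best_rigid_transform(src, cur)

# Example: recover a small known rotation and translation of synthetic points.
rng = np.random.default_rng(2)
src = rng.normal(size=(100, 3))
angle = np.deg2rad(5)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.1, -0.2, 0.05])
dst = src @ R_true.T + t_true
R, t = icp(src, dst)
print(np.abs(src @ R.T + t - dst).max())   # small residual if the alignment converged
```
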
A recursive Otsu-Iris filter technique for high-speed detection of lumen region from endoscopic images
H. Tian, T. Srikanthan, V. Asari
DOI: 10.1109/AIPR.2001.991223 | Published: 2001-10-10
Abstract: In this paper, a hardware-efficient technique to segment the lumen region from endoscopic images is presented. It is based on applying the combined Otsu-Iris filter operations recursively. The proposed technique applies Otsu's procedure recursively to obtain a coarse region of interest (ROI), which is then subjected to an Iris filter operation so that a smaller, enhanced region can be identified. This enhanced region is again subjected to Otsu's procedure recursively, and the Iris filter operation is repeated as before. It has been shown that repeating this Otsu-Iris filter combination in an iterative manner enables rapid and accurate identification of the lumen region. It has also been shown that the proposed method substantially reduces the number of computations of Otsu's procedure when compared with the APT-Iris filter method. Finally, unlike the APT-Iris filter method, it does not require precomputation of the cumulative limiting factor, which is highly dependent on the complex endoscopic images.
Citations: 5
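
The recursive use of Otsu's procedure can be sketched directly; the Iris filter enhancement step between passes is omitted, so this is only an illustration of the thresholding recursion, not the paper's full method. The lumen is assumed to be the darkest region of the frame, and recursive_dark_region is a hypothetical helper for this sketch.

```python
# Sketch of Otsu's threshold applied recursively to a shrinking dark region.
import numpy as np

def otsu_threshold(values, n_bins=256):
    """Classic Otsu threshold: maximize the between-class variance of a histogram."""
    hist, edges = np.histogram(values, bins=n_bins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                        # probability of the "dark" class
    mu = np.cumsum(p * centers)              # cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1.0 - w0))
    sigma_b[~np.isfinite(sigma_b)] = 0.0
    return centers[np.argmax(sigma_b)]

def recursive_dark_region(image, n_levels=3):
    """Shrink the region of interest by re-thresholding the dark side repeatedly."""
    mask = np.ones(image.shape, dtype=bool)
    for _ in range(n_levels):
        t = otsu_threshold(image[mask])
        mask &= image < t                    # keep only the darker pixels
    return mask

# Example on a synthetic frame: a dark circular "lumen" on a brighter background.
yy, xx = np.mgrid[0:200, 0:200]
rng = np.random.default_rng(3)
frame = 180 + 10 * rng.normal(size=(200, 200))
disc = (yy - 120) ** 2 + (xx - 80) ** 2 < 30 ** 2
frame[disc] = 40 + 10 * rng.normal(size=int(disc.sum()))
m1 = recursive_dark_region(frame, n_levels=1)
m2 = recursive_dark_region(frame, n_levels=2)
print(disc.sum(), m1.sum(), m2.sum())  # m1 covers the dark disc; m2 keeps its darker half
```
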
The evaluation of computer-aided diagnosis systems: an FDA perspective
David G. Brown
DOI: 10.1109/AIPR.2001.991197 | Published: 2001-10-10
Abstract: Computer-aided diagnosis (CADx) systems have begun a successful transition from academic research to commercial implementation. FDA (Food and Drug Administration) approval of CADx for Pap (Papanicolaou) smear reading in 1995 and for breast cancer detection in 1998 were major milestones in this process. As agency experience with these devices has increased, a consensus is emerging concerning the factors to be considered in the evaluation required during the approval process. Key elements determining the nature of the proof of safety and effectiveness required by the agency include the intrinsic level of risk associated with the device and the medical condition it is meant to address, the precise claims made for the device, and the degree of oversight exercised over its use. The agency expects that an increasing number of CADx devices will be submitted to it in the future and that guidelines will have to be formulated to assist manufacturers in navigating the approval process.
Citations: 10
Multi-modal fusion for video understanding
A. Hoogs, J. Mundy, G. Cross
DOI: 10.1109/AIPR.2001.991210 | Published: 2001-10-10
Abstract: The exploitation of semantic information in computer vision problems can be difficult because of the large difference in representations and levels of knowledge. Image analysis is formulated in terms of low-level features describing image structure and intensity, while high-level knowledge such as purpose and common sense is encoded in abstract, non-geometric representations. In this work we attempt to bridge this gap by integrating image analysis algorithms with WordNet, a large semantic network that explicitly links related words in a hierarchical structure. Our problem domain is the understanding of broadcast news, which provides both linguistic information in the transcript and video information. Visual detection algorithms such as face detection and object tracking are applied to the video to extract basic object information, which is indexed into WordNet. The transcript provides topic information in the form of detected keywords. Together, both types of information constrain a search within WordNet for a description of the video content in terms of the most likely WordNet concepts. This project is in its early stages; the general ideas and concepts are presented here.
Citations: 11
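
The kind of WordNet lookup the fusion relies on can be illustrated with NLTK's WordNet interface; the authors' system is not reproduced here. The concept_support function below is a hypothetical scoring rule: candidate senses of a detected object label are ranked by their similarity to transcript keywords. It assumes nltk is installed and the WordNet corpus has been downloaded.

```python
# Sketch of relating a detected object label to transcript keywords via WordNet.
# Requires: pip install nltk, then nltk.download("wordnet") once.
from nltk.corpus import wordnet as wn

def concept_support(object_label, transcript_keywords):
    """Score candidate senses of the detected object by similarity to keywords."""
    scores = {}
    for sense in wn.synsets(object_label, pos=wn.NOUN):
        total = 0.0
        for word in transcript_keywords:
            sims = [sense.path_similarity(s) or 0.0
                    for s in wn.synsets(word, pos=wn.NOUN)]
            total += max(sims, default=0.0)
        scores[sense] = total
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Example: a face detector fires while the transcript mentions election coverage.
ranking = concept_support("face", ["politician", "election", "speech"])
for sense, score in ranking[:3]:
    print(f"{sense.name():25s} {score:.2f}  {sense.definition()[:60]}")
```
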
Using video for recovering texture
A. Hoogs, R. Kaucic, Roderic Collins
DOI: 10.1109/AIPR.2001.991215 | Published: 2001-10-10
Abstract: Existing approaches to characterizing image texture usually rely on computing a local response to a bank of correlation filters, such as derivatives of a Gaussian, in a single image. Recently, significant progress has been made in characterizing a single texture under varying viewpoint and illumination conditions, leading to the bidirectional texture function that describes the smooth variation of filter responses as a function of viewpoint and illumination. However, this technique does not attempt to exploit the redundancy of multiple images; each image is treated independently. In video data, close correspondences between frames enable a new form of texture analysis that incorporates local 3D structure as well as intensity variation. We exploit this relationship to characterize texture with significant 3D structure, such as foliage, across a range of viewpoints. This paper presents a general overview of these ideas and preliminary results.
Citations: 0
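
The single-image filter-bank description the abstract starts from, local responses to Gaussian-derivative filters, can be sketched as follows. The multi-frame 3D-structure extension that is the paper's actual contribution is not reproduced; texture_features is a hypothetical helper and the chosen scales are arbitrary.

```python
# Sketch of per-pixel Gaussian-derivative filter-bank responses (illustrative).
import numpy as np
from scipy.ndimage import gaussian_filter

def texture_features(image, sigmas=(1.0, 2.0, 4.0)):
    """Stack smoothed, x-derivative, and y-derivative responses per pixel."""
    image = np.asarray(image, dtype=float)
    responses = []
    for s in sigmas:
        responses.append(gaussian_filter(image, s, order=(0, 0)))  # smoothed
        responses.append(gaussian_filter(image, s, order=(0, 1)))  # d/dx (columns)
        responses.append(gaussian_filter(image, s, order=(1, 0)))  # d/dy (rows)
    return np.stack(responses, axis=-1)     # shape (H, W, 3 * len(sigmas))

# Example: the responses separate a smooth region from a high-frequency one.
rng = np.random.default_rng(4)
img = np.zeros((64, 64))
img[:, 32:] = rng.normal(0, 1, size=(64, 32))         # "textured" right half
feats = texture_features(img)
energy = (feats ** 2).sum(axis=-1)
print(energy[:, :32].mean() < energy[:, 32:].mean())  # True
```
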
An ATR system using an integer based correlation algorithm in a time varying environment
Carlos Maraviglia, J. Price, T. Taczak
DOI: 10.1109/AIPR.2001.991202 | Published: 2001-10-10
Abstract: In an electro-optical or IR tracking system, a correlation algorithm offers a robust tracking technique in a time-varying scenario. This paper describes an automatic target recognition (ATR) algorithm that employs an operator in the loop as part of an embedded tracking system. The described system combines morphological algorithms with an efficient integer-based correlation tracking algorithm. The algorithm is explored in a time-varying search-and-track scenario in which morphological matched filtering automatically detects and selects objects, with a handover to a correlation tracker. The development of the algorithm using real and synthetic imagery is reviewed, along with some preliminary results.
Citations: 2
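
Integer-arithmetic template matching of the kind correlation trackers use can be sketched with a sum-of-absolute-differences (SAD) search. This is an illustration only; the paper's specific integer correlation measure and its morphological detection front end are not reproduced, and sad_track is a hypothetical helper.

```python
# Sketch of integer-arithmetic (SAD) template matching for a tracker handover.
import numpy as np

def sad_track(frame, template, search_center, search_radius=8):
    """Return the (row, col) of the best SAD match near search_center."""
    th, tw = template.shape
    r0, c0 = search_center
    best, best_score = None, None
    for dr in range(-search_radius, search_radius + 1):
        for dc in range(-search_radius, search_radius + 1):
            r, c = r0 + dr, c0 + dc
            if r < 0 or c < 0 or r + th > frame.shape[0] or c + tw > frame.shape[1]:
                continue                      # candidate window falls outside the frame
            patch = frame[r:r + th, c:c + tw]
            score = int(np.abs(patch.astype(np.int32) -
                               template.astype(np.int32)).sum())
            if best_score is None or score < best_score:
                best_score, best = score, (r, c)
    return best, best_score

# Example: the "target" template shifts by (3, -2) pixels between frames.
rng = np.random.default_rng(5)
frame0 = rng.integers(0, 256, size=(120, 160), dtype=np.uint8)
template = frame0[50:66, 70:86].copy()
frame1 = np.roll(frame0, shift=(3, -2), axis=(0, 1))
print(sad_track(frame1, template, search_center=(50, 70)))  # ((53, 68), 0)
```
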
Scene and content analysis from multiple video streams
S. Guler
DOI: 10.1109/AIPR.2001.991213 | Published: 2001-10-10
Abstract: In this paper, we describe a framework for video analysis and a method to detect and understand the class of events we refer to as "split and merge events" from single or multiple video streams. We start with automatic detection of scene changes, including camera operations such as zooms, pans, tilts, and scene cuts. For each new scene, camera calibration is performed and the scene geometry is estimated to determine the absolute position of each detected object. Objects in the video scenes are detected using an adaptive background subtraction method and tracked over consecutive frames. Objects are detected and tracked in a way that identifies the key split and merge behaviors, where one object splits into two or more objects or two or more objects merge into one. We have identified split and merge behaviors as the key components of several higher-level activities such as package drop-off, exchanges between people, people getting out of cars, or crowd formation. We embed data about scenes, camera parameters, object features, and positions into the video stream as metadata to correlate, compare, and associate the results from several related scenes and achieve better video event understanding. Placing this detailed syntactic information in the stream allows it to be physically associated with the video itself and guarantees that analysis results are preserved in archival storage or when sub-clips are created for distribution to other users. We present some preliminary results on single and multiple video streams.
Citations: 8
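
The adaptive background subtraction step named in the abstract can be sketched with a running-average background model; the calibration, tracking, and split/merge reasoning built on top of it are not reproduced here, and the RunningAverageBackground class and its parameters are illustrative.

```python
# Sketch of adaptive background subtraction with a running-average model.
import numpy as np

class RunningAverageBackground:
    def __init__(self, first_frame, alpha=0.05, threshold=25.0):
        self.bg = first_frame.astype(float)
        self.alpha = alpha            # adaptation rate of the background model
        self.threshold = threshold    # intensity difference flagged as foreground

    def apply(self, frame):
        frame = frame.astype(float)
        foreground = np.abs(frame - self.bg) > self.threshold
        # Adapt the model only where the scene appears static, so slow
        # illumination changes are absorbed but moving objects are not.
        self.bg = np.where(foreground, self.bg,
                           (1 - self.alpha) * self.bg + self.alpha * frame)
        return foreground

# Example: a bright square moving across a noisy but static background.
rng = np.random.default_rng(6)
frames = [np.clip(100 + 5 * rng.normal(size=(90, 120)), 0, 255) for _ in range(20)]
for t, f in enumerate(frames[1:], start=1):
    f[40:50, 5 * t:10 + 5 * t] = 220          # the moving object
model = RunningAverageBackground(frames[0])
masks = [model.apply(f) for f in frames[1:]]
print(masks[-1].sum())                         # about 100 pixels: the 10x10 square
```
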