2008 IEEE Workshop on Motion and video Computing: Latest Publications

Recognition of High-level Group Activities Based on Activities of Individual Members
2008 IEEE Workshop on Motion and video Computing | Pub Date: 2008-01-08 | DOI: 10.1109/WMVC.2008.4544065
M. Ryoo, J. Aggarwal
Abstract: The paper describes a methodology for the recognition of high-level group activities. Our system recognizes group activities including group actions, group-person interactions, group-group (i.e. inter-group) interactions, intra-group interactions, and their combinations, described using a common representation scheme. Our approach is to represent various types of complex group activities with a programming-language-like representation, and then to recognize the represented activities based on the recognition of activities of individual group members. A hierarchical recognition algorithm is designed for the recognition of high-level group activities. The system was tested on activities such as 'two groups fighting', 'a group of thieves stealing an object from another group', and 'a group of policemen arresting a group of criminals (or a criminal)'. Both videos downloaded from YouTube and videos that we recorded ourselves were tested. Experimental results show that our system recognizes complicated group activities, and does so more reliably and accurately than previous approaches.
Citations: 47
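The abstract does not reproduce the representation language itself. Purely as an illustration of the general idea (a composite group activity defined over time-ordered activities of individual members and recognized hierarchically), here is a minimal Python sketch; the 'steal' rule, the event labels, and all names are hypothetical, not the paper's scheme:

```python
from dataclasses import dataclass

# A detected atomic activity of one group member: (label, actor, start, end).
@dataclass
class Event:
    label: str
    actor: str
    start: float
    end: float

def before(a: Event, b: Event) -> bool:
    """Allen-style 'before' relation between two member activities."""
    return a.end <= b.start

def recognize_steal(events):
    """Hypothetical hierarchical rule: 'steal' = one member approaches,
    then takes an object, then flees, all performed by the same actor."""
    for a in events:
        if a.label != "approach":
            continue
        for t in events:
            if t.label == "take_object" and t.actor == a.actor and before(a, t):
                for f in events:
                    if f.label == "flee" and f.actor == a.actor and before(t, f):
                        return (a.actor, a.start, f.end)
    return None

events = [
    Event("approach", "p1", 0.0, 2.0),
    Event("take_object", "p1", 2.5, 3.0),
    Event("flee", "p1", 3.2, 6.0),
]
print(recognize_steal(events))  # ('p1', 0.0, 6.0)
```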
Model generation for robust object tracking based on temporally stable regions
2008 IEEE Workshop on Motion and video Computing | Pub Date: 2008-01-08 | DOI: 10.1109/WMVC.2008.4544045
P. Banerjee, A. Pinz, S. Sengupta
Abstract: Tracking and recognition of objects in video sequences suffer from difficulties in learning appropriate object models. Often a high degree of supervision is required, including manual annotation of many training images. We aim at unsupervised learning of object models and present a novel way to build models based on motion information extracted from video sequences. We require a coarse delineation of moving objects and a subsequent segmentation of these motion areas into regions as preprocessing steps, and analyze the resulting regions with respect to their stable detection over many frames. These 'temporally stable regions' are then used to build graphs of reliably detected object parts, which form our model. Our approach combines the feature-based analysis of feature vectors for each region with the structural analysis of the graphical object models. Our experiments demonstrate the capability of this novel method to build object models for people and to robustly track them, but the method is in general applicable to learning object models for any object category, provided that the object moves and is observed by a stationary camera.
Citations: 5
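As a rough sketch of what selecting 'temporally stable regions' could look like in code, the following Python fragment greedily matches per-frame region descriptors into running tracks and keeps only regions re-detected over many frames. The descriptor contents, thresholds, and greedy matching scheme are all assumptions, not the paper's method:

```python
import numpy as np

def stable_regions(frames, dist_thresh=0.5, min_frames=10):
    """Collect temporally stable regions: per-frame region descriptors that
    keep matching one running track over many frames. `frames` is a list of
    arrays, one row per region descriptor (hypothetical features, e.g.
    color moments plus normalized position)."""
    tracks = []  # each track: {"desc": running mean descriptor, "count": n}
    for regions in frames:
        for desc in regions:
            # Greedy nearest-track assignment under a distance threshold.
            if tracks:
                d = [np.linalg.norm(t["desc"] - desc) for t in tracks]
                j = int(np.argmin(d))
                if d[j] < dist_thresh:
                    n = tracks[j]["count"]
                    tracks[j]["desc"] = (tracks[j]["desc"] * n + desc) / (n + 1)
                    tracks[j]["count"] = n + 1
                    continue
            tracks.append({"desc": desc.astype(float), "count": 1})
    return [t for t in tracks if t["count"] >= min_frames]

# Toy demo: one region descriptor re-observed with noise over 20 frames.
rng = np.random.default_rng(0)
frames = [np.vstack([np.array([0.2, 0.8]) + 0.01 * rng.standard_normal(2)])
          for _ in range(20)]
print(len(stable_regions(frames)))  # 1 stable region survives
```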
Segmentation of Video Sequences using Spatial-temporal Conditional Random Fields
2008 IEEE Workshop on Motion and video Computing | Pub Date: 2008-01-08 | DOI: 10.1109/WMVC.2008.4544055
Lei Zhang, Q. Ji
Abstract: Segmentation of video sequences requires the segmentations of consecutive frames to be consistent with each other. We propose a three-dimensional Conditional Random Field (CRF) to address this problem. A triple of consecutive image frames is treated as a small 3D volume to be segmented. Our spatial-temporal CRF model combines local discriminative features with the conditional homogeneity of the labeling variables in both the spatial and the temporal domain. After training the model parameters with a small set of training data, the optimal labeling is obtained through probabilistic inference by sum-product loopy belief propagation. We achieve accurate segmentation results on standard video sequences, which demonstrates the promising capability of the proposed approach.
Citations: 11
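A minimal sketch of a spatio-temporal CRF over a three-frame volume is given below, with Potts-style spatial (4-neighbor) and temporal (same pixel, adjacent frame) pairwise terms. Note the hedges: it uses plain ICM as a crude stand-in for the sum-product loopy belief propagation the paper uses, and the weights are made up:

```python
import numpy as np

def st_crf_segment(unary, n_iters=5, w_s=1.0, w_t=1.0):
    """ICM labeling of a (T, H, W, L) volume of per-label unary costs
    (negative log-likelihoods), with Potts penalties on spatial and
    temporal edges. T = 3 matches the paper's three-frame volumes."""
    T, H, W, L = unary.shape
    labels = unary.argmin(axis=-1)  # initialize from the unaries alone
    for _ in range(n_iters):
        for t in range(T):
            for y in range(H):
                for x in range(W):
                    cost = unary[t, y, x].copy()
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        yy, xx = y + dy, x + dx
                        if 0 <= yy < H and 0 <= xx < W:
                            cost += w_s * (np.arange(L) != labels[t, yy, xx])
                    for dt in (-1, 1):
                        if 0 <= t + dt < T:
                            cost += w_t * (np.arange(L) != labels[t + dt, y, x])
                    labels[t, y, x] = int(cost.argmin())
    return labels

# Toy volume: left half prefers label 0, right half label 1, plus noise.
rng = np.random.default_rng(1)
unary = rng.random((3, 8, 8, 2))
unary[:, :, :4, 0] -= 1.0
unary[:, :, 4:, 1] -= 1.0
print(st_crf_segment(unary)[1])  # middle-frame labeling, cleaned up by ICM
```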
Pedestrian Tracking by Associating Tracklets using Detection Residuals
2008 IEEE Workshop on Motion and video Computing | Pub Date: 2008-01-08 | DOI: 10.1109/WMVC.2008.4544058
V.K. Singh, Bo Wu, R. Nevatia
Abstract: Due to increased interest in visual surveillance, various multiple-object tracking methods have recently been proposed and applied to pedestrian tracking. However, in the presence of intensive inter-object occlusion and sensor gaps, most of these methods result in tracking failures. We present a two-stage multi-object tracking approach to robustly track pedestrians in such complex scenarios. We first generate high-confidence partial track segments (tracklets) using a robust pedestrian detector and then associate the tracklets in a global optimization framework. Unlike existing two-stage tracking methods, our method uses the unassociated low-confidence detections (residuals) between the tracklets, which improves tracking performance. We evaluate our method on the CAVIAR dataset and show that it performs better than state-of-the-art methods.
Citations: 60
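The core tracklet-association step can be sketched as a global bipartite assignment, here solved with the Hungarian algorithm via SciPy. The cost terms and thresholds are hypothetical, and the paper's key refinement (scoring candidate links using detection residuals between tracklets) is deliberately omitted:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate_tracklets(tracklets, max_cost=50.0):
    """Link tracklet ends to tracklet starts with one global assignment.
    Each tracklet is (t_start, t_end, xy_start, xy_end). Cost mixes the
    spatial gap and the temporal gap between compatible tracklets."""
    n = len(tracklets)
    cost = np.full((n, n), 1e6)  # large cost = forbidden link
    for i, a in enumerate(tracklets):
        for j, b in enumerate(tracklets):
            if a[1] < b[0]:  # a must end before b starts
                gap = b[0] - a[1]
                dist = np.linalg.norm(np.array(a[3]) - np.array(b[2]))
                cost[i, j] = dist + 2.0 * gap
    rows, cols = linear_sum_assignment(cost)
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] < max_cost]

tracklets = [
    (0, 10, (0, 0), (20, 5)),    # tracklet 0
    (12, 30, (22, 6), (60, 8)),  # tracklet 1: plausible continuation of 0
]
print(associate_tracklets(tracklets))  # [(0, 1)]
```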
Fast construction of object correspondence in stereo camera system: an example to human face capturing system
2008 IEEE Workshop on Motion and video Computing | Pub Date: 2008-01-08 | DOI: 10.1109/WMVC.2008.4544047
Fai Chan, Jiansheng Chen, Y. Moon
Abstract: In the literature, stereo matching is used to build pixel correspondences for stereo image pairs. Such correspondences can serve as the foundation for applications such as 3D scene reconstruction. In some applications, however, stereo vision is adopted for object localization, so that only object correspondences are required. Existing pixel-based stereo matching approaches are computationally inefficient for these applications. In this paper, we address the problem of object correspondence construction in stereo camera systems with a fast and accurate algorithm based on reverse stereo triangulation. The algorithm rests on the observation that any incorrect object pair will show an inconsistency in the spatial location calculated from reverse stereo triangulation, so that correct object pairs can be identified accurately among all possible object pairs. We present experimental results from a dual-camera human face capturing system in which more than 99% of genuine object correspondences are accurately identified, while 100% of falsely detected objects are eliminated. Moreover, our method can handle no fewer than 100 object pairs within 1 ms on a P4 1.5 GHz desktop PC.
Citations: 8
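A hedged sketch of the triangulation-consistency idea: triangulate each candidate left/right pair and reject pairs whose 3D location is implausible. The DLT triangulation below is standard; the simple depth-range test and all numbers are assumptions, not the paper's exact criterion:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two camera matrices."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

def plausible_pair(P1, P2, x1, x2, z_range=(0.3, 5.0)):
    """Verify a candidate left/right object pair: a wrong pairing tends to
    triangulate to an implausible location, e.g. behind the cameras or far
    outside the expected working depth range (assumed 0.3 m to 5 m here)."""
    X = triangulate(P1, P2, x1, x2)
    return z_range[0] < X[2] < z_range[1]

P1 = np.hstack([np.eye(3), np.zeros((3, 1))])               # left camera
P2 = np.hstack([np.eye(3), np.array([[-0.1], [0], [0]])])   # 0.1 baseline
print(plausible_pair(P1, P2, (0.0, 0.0), (-0.1, 0.0)))  # True: consistent
print(plausible_pair(P1, P2, (0.0, 0.0), (0.3, 0.0)))   # False: behind camera
```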
Optimal shape from motion estimation with missing and degenerate data
2008 IEEE Workshop on Motion and video Computing | Pub Date: 2008-01-08 | DOI: 10.1109/WMVC.2008.4544046
Manuel Marques, J. Costeira
Abstract: Reconstructing a 3D scene from a moving camera is one of the most important problems in computer vision. In this scenario, not all points are known in all images (e.g. due to occlusion), thus generating missing data. The state of the art handles missing points in this context by enforcing rank constraints on the point track matrix. However, quite frequently, close-up views tend to capture planar surfaces, producing degenerate data. If one single frame is degenerate, the whole sequence will produce large errors in the shape reconstruction, even though the observation matrix satisfies the rank-4 constraint. In this paper, we propose to solve the structure-from-motion problem with degenerate data, introducing a new factorization algorithm that imposes the full scaled orthographic model in one single optimization procedure. By imposing all model constraints, a unique (correct) 3D shape is estimated regardless of the data degeneracies. Experiments show that remarkably good reconstructions are obtained with an approximate model such as orthography.
Citations: 22
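For context, the classical complete-data orthographic factorization that this work generalizes can be sketched in a few lines. This is Tomasi-Kanade-style rank-3 factorization, not the paper's missing-data algorithm, and it stops at the affine ambiguity (the metric-upgrade step is omitted):

```python
import numpy as np

def tomasi_kanade(W):
    """Classical orthographic factorization with complete data: W is a
    2F x P matrix of tracked image coordinates. After centering, W factors
    (up to an affine ambiguity) as M @ S with motion M (2F x 3) and shape
    S (3 x P). The paper extends this setting to missing and degenerate
    data by enforcing the full scaled-orthographic model in one
    optimization procedure."""
    Wc = W - W.mean(axis=1, keepdims=True)    # register to the centroid
    U, s, Vt = np.linalg.svd(Wc, full_matrices=False)
    M = U[:, :3] * np.sqrt(s[:3])             # rank-3 truncation
    S = np.sqrt(s[:3])[:, None] * Vt[:3]
    return M, S

# Toy check: a random rigid shape seen by random orthographic cameras.
rng = np.random.default_rng(2)
S_true = rng.standard_normal((3, 12))
M_true = rng.standard_normal((8, 3))          # 4 frames, 2 rows per frame
W = M_true @ S_true
M, S = tomasi_kanade(W)
print(np.allclose(M @ S, W - W.mean(axis=1, keepdims=True), atol=1e-8))
```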
Using Inactivity to Detect Unusual behavior
2008 IEEE Workshop on Motion and video Computing | Pub Date: 2008-01-08 | DOI: 10.1109/WMVC.2008.4544054
P. Dickinson, A. Hunter
Abstract: We present a novel method for detecting unusual modes of behavior in video surveillance data, suitable for supporting home-based care of elderly patients. Our approach is based on detecting unusual patterns of inactivity. We first learn a spatial map of normal inactivity for an observed scene, expressed as a two-dimensional mixture of Gaussians. The map components are used to construct a Hidden Markov Model representing normal patterns of behavior. A threshold model is also inferred, and unusual behavior is detected by comparing the model likelihoods. Our learning procedures are unsupervised and yield a highly transparent model of scene activity. We present an evaluation of our approach and show that it is effective in detecting unusual behavior across a range of parameter settings.
Citations: 12
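A toy sketch of the pipeline just described: fit a 2-D Gaussian mixture as the inactivity map, treat its components as HMM states, and flag sequences whose forward-algorithm likelihood is low. The transition matrix is assumed, and a simple likelihood comparison stands in for the paper's inferred threshold model:

```python
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
# Training data: inactivity observed around two furniture zones (toy data).
chair = rng.normal([2.0, 1.0], 0.2, size=(200, 2))
bed = rng.normal([5.0, 4.0], 0.3, size=(200, 2))
gmm = GaussianMixture(n_components=2, random_state=0).fit(np.vstack([chair, bed]))

# HMM whose states are the map components; normal behavior mostly dwells.
A = np.array([[0.95, 0.05], [0.05, 0.95]])   # assumed transition matrix
pi = np.array([0.5, 0.5])
emit = [multivariate_normal(gmm.means_[k], gmm.covariances_[k]) for k in range(2)]

def log_likelihood(seq):
    """Scaled forward algorithm for a Gaussian-emission HMM."""
    alpha = pi * np.array([e.pdf(seq[0]) for e in emit])
    ll = np.log(alpha.sum()); alpha /= alpha.sum()
    for x in seq[1:]:
        alpha = (alpha @ A) * np.array([e.pdf(x) for e in emit])
        ll += np.log(alpha.sum()); alpha /= alpha.sum()
    return ll

seq_normal = rng.normal([2.0, 1.0], 0.2, size=(20, 2))
seq_unusual = rng.normal([8.0, 8.0], 0.2, size=(20, 2))  # new inactivity zone
print(log_likelihood(seq_normal) > log_likelihood(seq_unusual))  # True
```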
Robust Object Tracking based on Detection with Soft Decision
2008 IEEE Workshop on Motion and video Computing | Pub Date: 2008-01-08 | DOI: 10.1109/WMVC.2008.4544052
Bo Wu, Li Zhang, V. Kumar Singh, R. Nevatia
Abstract: This paper presents a detection-based object tracking method that forms object trajectories by associating detection responses. Discriminative classifiers for objects of a known class are learned and applied to the video sequence frame by frame. The output of the detection module is a "soft decision", which consists of a set of detection responses at different confidence levels. Responses at different confidence levels are generated by classifiers of different complexities. The cheap classifiers are applied to the whole image first, while the expensive classifiers are applied only to the regions accepted as object by the cheap classifiers. Object trajectories are initialized from the responses of higher confidence; hypothesized objects are tracked by associating them with all the responses in the order of their confidence levels. The proposed approach is applied to the problem of human tracking in indoor meeting videos and outdoor surveillance videos. The system is evaluated on two public video corpora and compared with previous methods.
Citations: 7
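The cheap-then-expensive "soft decision" detection idea might be sketched as follows. The two scorers, thresholds, and window scheme are hypothetical placeholders, and the trajectory-association stage is omitted:

```python
import numpy as np

def detect_soft(image, cheap, expensive, t_cheap=0.5, t_exp=0.3, win=16):
    """Two-level soft-decision detection sketch: a cheap scorer scans every
    window; an expensive scorer runs only on windows the cheap one accepts.
    Returns (x, y, confidence_level) responses, where level 1 means cheap
    acceptance only and level 2 means confirmed by the expensive scorer."""
    H, W = image.shape
    responses = []
    for y in range(0, H - win + 1, win // 2):
        for x in range(0, W - win + 1, win // 2):
            patch = image[y:y + win, x:x + win]
            if cheap(patch) < t_cheap:
                continue  # rejected early; expensive scorer never runs
            level = 2 if expensive(patch) >= t_exp else 1
            responses.append((x, y, level))
    return responses

# Hypothetical scorers: mean intensity as the cheap cue, a (pretend)
# costlier normalized-contrast score as the expensive one.
cheap = lambda p: p.mean()
expensive = lambda p: p.std() / (p.mean() + 1e-6)

rng = np.random.default_rng(4)
img = rng.random((64, 64)) * 0.3
img[16:32, 16:32] += 0.7          # a bright "object" region
for r in detect_soft(img, cheap, expensive):
    print(r)
```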
Fast Body Posture Estimation using Volumetric Features
2008 IEEE Workshop on Motion and video Computing | Pub Date: 2008-01-08 | DOI: 10.1109/WMVC.2008.4544056
M. Van den Bergh, E. Koller-Meier, L. Van Gool
Abstract: This paper presents a novel approach to real-time pose recognition using Haar-like features. First, linear discriminant analysis (LDA) is introduced as a powerful new approach to training Haar-like features. The LDA-based method is compared to AdaBoost and shown to be more efficient, requiring fewer Haar-like features to successfully complete the pose classification task. The reduced memory requirements relative to AdaBoost allow for a straightforward extension to a 3D pose detector based on 3D Haar-like features, resulting in a rotation-invariant pose detection system.
Citations: 13
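A minimal sketch of LDA on Haar-like features, using 2-D features rather than the paper's 3-D volumetric ones: compute a few rectangle-difference features via an integral image and fit scikit-learn's LinearDiscriminantAnalysis on toy two-pose data. The feature set and toy task are illustrative assumptions:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def haar_features(img):
    """Three simple 2-rectangle/3-rectangle Haar-like features computed
    via an integral image."""
    ii = img.cumsum(axis=0).cumsum(axis=1)
    def rect(y0, x0, y1, x1):  # box sum over rows y0..y1-1, cols x0..x1-1
        s = ii[y1 - 1, x1 - 1]
        if y0 > 0: s -= ii[y0 - 1, x1 - 1]
        if x0 > 0: s -= ii[y1 - 1, x0 - 1]
        if y0 > 0 and x0 > 0: s += ii[y0 - 1, x0 - 1]
        return s
    h, w = img.shape
    return np.array([
        rect(0, 0, h, w // 2) - rect(0, w // 2, h, w),   # left vs right
        rect(0, 0, h // 2, w) - rect(h // 2, 0, h, w),   # top vs bottom
        rect(0, w // 4, h, 3 * w // 4) - rect(0, 0, h, w // 4)
            - rect(0, 3 * w // 4, h, w),                 # center vs sides
    ])

# Toy two-pose problem: 'left-bright' vs 'top-bright' 8x8 silhouettes.
rng = np.random.default_rng(5)
def sample(pose):
    img = rng.random((8, 8)) * 0.1
    if pose == 0: img[:, :4] += 1.0
    else: img[:4, :] += 1.0
    return haar_features(img)

X = np.array([sample(k % 2) for k in range(200)])
y = np.array([k % 2 for k in range(200)])
clf = LinearDiscriminantAnalysis().fit(X, y)
print(clf.score(X, y))  # ~1.0 on this separable toy data
```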
Space-Time Shapelets for Action Recognition
2008 IEEE Workshop on Motion and video Computing | Pub Date: 2008-01-08 | DOI: 10.1109/WMVC.2008.4544051
Dhruv Batra, Tsuhan Chen, R. Sukthankar
Abstract: Recent works in action recognition have begun to treat actions as space-time volumes. This allows actions to be converted into 3-D shapes, turning the problem into one of volumetric matching. However, the special nature of the temporal dimension and the lack of intuitive volumetric features make the problem both challenging and interesting. In a data-driven, bottom-up approach, we propose a dictionary of mid-level features called Space-Time Shapelets. This dictionary tries to characterize the space of local space-time shapes, or equivalently the local motion patterns formed by the actions. Representing an action as a bag of these space-time patterns allows us to reduce the combinatorial space of these volumes and makes the representation robust to partial occlusions and errors in extracting spatial support. The proposed method is computationally efficient and achieves competitive results on a standard dataset.
Citations: 49
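A bag-of-shapelets pipeline can be sketched by clustering local space-time patches of a binary action volume into a dictionary and histogramming dictionary words per action. K-means here is an assumed stand-in for however the paper actually builds its dictionary, and patch sizes are arbitrary:

```python
import numpy as np
from sklearn.cluster import KMeans

def st_patches(volume, size=4, step=2):
    """Extract local space-time patches from a binary action volume
    (T x H x W), keeping only patches containing some foreground."""
    T, H, W = volume.shape
    out = []
    for t in range(0, T - size + 1, step):
        for y in range(0, H - size + 1, step):
            for x in range(0, W - size + 1, step):
                p = volume[t:t + size, y:y + size, x:x + size]
                if p.any():
                    out.append(p.reshape(-1).astype(float))
    return np.array(out)

def bag_of_shapelets(volumes, k=8):
    """Learn a k-word dictionary of local space-time patterns by k-means,
    then describe each action volume as a normalized word histogram."""
    all_p = np.vstack([st_patches(v) for v in volumes])
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(all_p)
    hists = []
    for v in volumes:
        words = km.predict(st_patches(v))
        h = np.bincount(words, minlength=k).astype(float)
        hists.append(h / h.sum())
    return np.array(hists), km

# Toy volumes: a 'rising' blob vs a 'static' blob.
T, H, W = 8, 16, 16
rise = np.zeros((T, H, W)); static = np.zeros((T, H, W))
for t in range(T):
    rise[t, 12 - t:16 - t, 6:10] = 1
    static[t, 6:10, 6:10] = 1
hists, _ = bag_of_shapelets([rise, static])
print(np.round(hists, 2))  # two distinct word histograms
```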