2012 IEEE Ninth International Conference on Advanced Video and Signal-Based Surveillance: Latest Publications

String Features: Geodesic Sweeping Detection and Quasi-invariant Time-Series Description
Authors: Gutemberg Guerra-Filho
DOI: 10.1109/AVSS.2012.72 (https://doi.org/10.1109/AVSS.2012.72) | Published: 2012-09-18
Abstract: We propose novel features, denoted as string features, which are represented by curves in the image plane. String features exploit both the locality of individual points on the curve and the global aspect of the whole curve. The contributions of this paper are: (1) a feature detection procedure that produces a saliency measure by applying a novel technique, named geodesic sweeping, inspired by spatial attention and eye movement control; (2) the description of string features as a set of time-series based on quasi-invariant geometric measures; and (3) a matching algorithm for string features that allows partial matching independently for each time-series in the descriptor. The quantitative performance of the feature detection step is measured with regard to precision, compactness, and repeatability. The repeatability rate reaches 70% with only 3% of the pixels being detected. The string feature matching procedure is tested on a set of 80 synthetic 2D curves. The experimental results show an average correct-matching rate of 72.4%.
Citations: 0
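Illustrative sketch (not from the paper): one common way to obtain a quasi-invariant time-series description of a 2D image curve is to resample it by arc length and take its turning-angle sequence, which is invariant to translation and rotation and quasi-invariant to uniform scale. The geodesic-sweeping detector itself is specific to the paper and is not reproduced here; the function names below are made up for illustration.

```python
import numpy as np

def arc_length_resample(curve, n=64):
    """Resample a polyline (N x 2) to n points equally spaced by arc length."""
    seg = np.linalg.norm(np.diff(curve, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])
    t = np.linspace(0.0, s[-1], n)
    return np.stack([np.interp(t, s, curve[:, 0]),
                     np.interp(t, s, curve[:, 1])], axis=1)

def turning_angle_series(curve, n=64):
    """Turning angles along an arc-length-resampled curve: invariant to
    translation/rotation, quasi-invariant to uniform scale."""
    c = arc_length_resample(curve, n)
    d = np.diff(c, axis=0)                  # tangent vectors between samples
    ang = np.unwrap(np.arctan2(d[:, 1], d[:, 0]))
    return np.diff(ang)                     # turning angle per step

# toy check: the series barely changes under rotation, scaling, translation
t = np.linspace(0, 2 * np.pi, 200)
curve = np.stack([np.cos(t) + 0.3 * np.cos(3 * t), np.sin(t)], axis=1)
R = np.array([[np.cos(0.7), -np.sin(0.7)], [np.sin(0.7), np.cos(0.7)]])
moved = 2.5 * curve @ R.T + np.array([10.0, -4.0])
print(np.allclose(turning_angle_series(curve), turning_angle_series(moved), atol=1e-6))
```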
Role of Spatiotemporal Oriented Energy Features for Robust Visual Tracking in Video Surveillance
Authors: Ali Emami, F. Dadgostar, A. Bigdeli, B. Lovell
DOI: 10.1109/AVSS.2012.64 (https://doi.org/10.1109/AVSS.2012.64) | Published: 2012-09-18
Abstract: We propose an effective approach to exploit the rich description provided by Spatiotemporal Oriented Energy features for robust tracking. There are two core components in our system. The first is a compound measure of 'Coherent Motion' and 'Identity Motion Signature', introduced based on the motion dynamics of the targets. This measure is used for robust optimisation in occluded situations as well as for an adaptive template-updating scheme. The second component is a state machine that detects various states of the targets based on statistical analysis of their 'Motion Signature'. Empirical evaluations demonstrate improved tracking performance and quantify the contribution of each component.
Citations: 23
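Illustrative sketch (not the authors' feature set): spatiotemporal oriented energy is, in essence, the squared response of direction-selective filters applied to the space-time volume. The toy below uses plain Gaussian derivative filters along a handful of space-time directions with a contrast-normalizing division; the paper's features are built from steerable 3D filter banks and a richer orientation decomposition.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def oriented_energies(volume, sigma=2.0):
    """Crude spatiotemporal oriented energies for a (T, H, W) grayscale volume."""
    dt = gaussian_filter(volume, sigma, order=(1, 0, 0))   # d/dt
    dy = gaussian_filter(volume, sigma, order=(0, 1, 0))   # d/dy
    dx = gaussian_filter(volume, sigma, order=(0, 0, 1))   # d/dx
    # each tuple is the (t, y, x) gradient orientation consistent with that
    # motion at roughly one pixel per frame
    directions = {"rightward": (1, 0, -1), "leftward": (1, 0, 1),
                  "downward": (1, -1, 0), "upward": (1, 1, 0),
                  "flicker": (1, 0, 0)}
    energies = {}
    for name, v in directions.items():
        v = np.asarray(v, dtype=float)
        v /= np.linalg.norm(v)
        energies[name] = (v[0] * dt + v[1] * dy + v[2] * dx) ** 2
    total = sum(energies.values()) + 1e-8                  # contrast normalization
    return {k: e / total for k, e in energies.items()}

# toy check: a blob moving right yields stronger "rightward" than "leftward" energy
vol = np.zeros((16, 32, 32))
for t in range(16):
    vol[t, 14:18, 2 + t:6 + t] = 1.0
e = oriented_energies(vol)
print(e["rightward"].mean() > e["leftward"].mean())
```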
CLUMOC: Multiple Motion Estimation by Cluster Motion Consensus
Authors: Yinan Yu, Weiqiang Ren, Yongzhen Huang, Kaiqi Huang, T. Tan
DOI: 10.1109/AVSS.2012.19 (https://doi.org/10.1109/AVSS.2012.19) | Published: 2012-09-18
Abstract: In this paper, we present techniques for robust multiple-motion estimation based on dual consensus via clustering in both the image spatial space and the motion parameter space. Starting from the traditional Random Sample Consensus algorithm, we propose CLUster MOtion Consensus (CLUMOC) to extract robust motions. The proposed algorithm has two advantages: (1) instead of random sampling, CLUMOC employs clustering for initial sample selection, which removes outliers from correct motion correspondences; (2) CLUMOC automatically decides the number of motions by employing competition between motions and samples, in which each motion competes for matching pairs and each matching pair competes for motions. The experimental results show that the proposed method is effective and efficient under various situations.
Citations: 0
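Illustrative sketch (a simplified stand-in, not CLUMOC itself): the general idea of cluster-seeded multi-motion estimation can be approximated by clustering matched keypoint pairs by their displacement vectors and then fitting one motion model per cluster with RANSAC. CLUMOC additionally lets motions and matches compete to decide the number of motions, which this toy fixes in advance.

```python
import numpy as np
import cv2
from sklearn.cluster import KMeans

def clustered_motions(pts_src, pts_dst, n_motions=2, min_cluster=6):
    """Fit one similarity/affine motion per displacement cluster.
    pts_src, pts_dst: (N, 2) float32 arrays of matched keypoints."""
    disp = pts_dst - pts_src
    labels = KMeans(n_clusters=n_motions, n_init=10, random_state=0).fit_predict(disp)
    models = []
    for k in range(n_motions):
        idx = np.where(labels == k)[0]
        if len(idx) < min_cluster:
            continue
        # RANSAC absorbs matches that the clustering assigned to the wrong motion
        M, inliers = cv2.estimateAffinePartial2D(pts_src[idx], pts_dst[idx],
                                                 method=cv2.RANSAC,
                                                 ransacReprojThreshold=3.0)
        if M is not None:
            models.append((M, idx[inliers.ravel().astype(bool)]))
    return models

# toy check: two planar motions mixed in one set of matches
rng = np.random.default_rng(0)
src = rng.uniform(0, 200, (200, 2)).astype(np.float32)
dst = src.copy()
dst[:100] += np.array([15.0, 0.0], dtype=np.float32)         # motion 1: translation
dst[100:] = (dst[100:] - 100.0) * 1.1 + 100.0 + [0.0, -8.0]   # motion 2: scale + shift
for M, inl in clustered_motions(src, dst):
    print(np.round(M, 2), "inliers:", len(inl))
```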
Multimodal and Multi-task Audio-Visual Vehicle Detection and Classification
Authors: Tao Wang, Zhigang Zhu
DOI: 10.1109/AVSS.2012.47 (https://doi.org/10.1109/AVSS.2012.47) | Published: 2012-09-18
Abstract: Moving vehicle detection and classification using multimodal data is challenging in terms of data collection, audio-visual alignment, feature selection, and effective classification in uncontrolled environments. In this work, we first present a systematic way to align the multimodal data based on multimodal temporal panorama generation. Then various types of features are extracted to represent diverse, multimodal information, including global geometric features (aspect ratios, profiles), local structure features (HOGs), and various audio features in both spectral and perceptual representations. A flexible sequential forward selection algorithm with multi-branch searching is used to select a set of important features at different levels of feature combination. Finally, using the same datasets for two different classification tasks, we show that the roles of audio and visual features are task-specific. Furthermore, in both cases, combining features with multimodal and complementary information improves accuracy over using individual features alone. Finer and more accurate classification can therefore be achieved at two levels of integration: the feature level and the decision level.
Citations: 13
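Illustrative sketch (single-branch only): sequential forward selection greedily adds the feature group whose inclusion most improves cross-validated accuracy, stopping when nothing helps. The multi-branch search used in the paper keeps several candidate subsets per step; the group names and synthetic data below are hypothetical.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def forward_select(feature_groups, y, max_groups=4):
    """Greedy sequential forward selection over named feature groups."""
    selected, remaining, best_score = [], set(feature_groups), -np.inf
    while remaining and len(selected) < max_groups:
        scores = {}
        for name in remaining:
            X = np.hstack([feature_groups[g] for g in selected + [name]])
            scores[name] = cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean()
        name, score = max(scores.items(), key=lambda kv: kv[1])
        if score <= best_score:          # no remaining group improves accuracy
            break
        selected.append(name)
        remaining.remove(name)
        best_score = score
    return selected

# toy check with synthetic visual/audio feature groups (hypothetical names)
rng = np.random.default_rng(1)
y = rng.integers(0, 2, 120)
groups = {"hog":      rng.normal(size=(120, 20)) + y[:, None] * 0.8,  # informative
          "aspect":   rng.normal(size=(120, 2)),                      # noise
          "mfcc":     rng.normal(size=(120, 13)) + y[:, None] * 0.5,  # informative
          "spectral": rng.normal(size=(120, 6))}                      # noise
print(forward_select(groups, y))
```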
Human Action Recognition with Attribute Regularization
Authors: Zhong Zhang, Chunheng Wang, Baihua Xiao, Wen Zhou, Shuang Liu
DOI: 10.1109/AVSS.2012.41 (https://doi.org/10.1109/AVSS.2012.41) | Published: 2012-09-18
Abstract: Recently, attributes have been introduced to help object classification. Multi-task learning is an effective methodology to achieve this goal, sharing low-level features between attribute and object classifiers. Yet such a method neglects the constraints that attributes impose on classes and may fail to capture the semantic relationship between the attribute and object classifiers. In this paper, we explicitly consider this attribute-object relationship and, correspondingly, modify the multi-task learning model by adding attribute regularization. In this way, the learned model not only shares the low-level features but is also regularized according to the semantic constraints. Our method is verified on two challenging datasets (KTH and Olympic Sports), and the experimental results demonstrate that our method achieves better results than previous methods in human action recognition.
Citations: 0
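Illustrative sketch (one plausible reading of the idea, not the paper's exact formulation): attribute regularization can be written as an extra penalty tying each object-class weight vector to the combination of attribute weight vectors its class is annotated with, on top of the usual shared-feature multi-task losses. All symbols and values below are assumptions for illustration.

```python
import numpy as np

def multitask_objective(W_cls, W_att, X, y_cls, Y_att, A, lam_share=0.1, lam_attr=0.1):
    """Toy multi-task objective with an attribute-regularization term.
    W_cls: (d, C), W_att: (d, M), X: (n, d), y_cls: (n,) class labels,
    Y_att: (n, M) binary attributes, A: (C, M) class-attribute table."""
    n = X.shape[0]
    # class loss: softmax cross-entropy
    logits = X @ W_cls
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    loss_cls = -log_probs[np.arange(n), y_cls].mean()
    # attribute loss: independent logistic losses
    loss_att = np.mean(np.log1p(np.exp(-(2.0 * Y_att - 1.0) * (X @ W_att))))
    # standard shared-feature weight decay
    reg_share = lam_share * (np.sum(W_cls ** 2) + np.sum(W_att ** 2))
    # attribute regularization: class weights should agree with their attributes
    reg_attr = lam_attr * np.sum((W_cls - W_att @ A.T) ** 2)
    return loss_cls + loss_att + reg_share + reg_attr

# toy evaluation with random weights and a random class-attribute table
rng = np.random.default_rng(0)
d, C, M, n = 16, 5, 10, 40
A = (rng.uniform(size=(C, M)) > 0.5).astype(float)
X, y_cls = rng.normal(size=(n, d)), rng.integers(0, C, n)
print(multitask_objective(0.1 * rng.normal(size=(d, C)), 0.1 * rng.normal(size=(d, M)),
                          X, y_cls, A[y_cls], A))
```

Minimizing the last term drives each class classifier towards the span of the attribute classifiers it is annotated with, which is one way to read the "semantic constraints" mentioned in the abstract.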
Robust Traffic State Estimation on Smart Cameras
Authors: Felix Pletzer, R. Tusch, L. Böszörményi, B. Rinner
DOI: 10.1109/AVSS.2012.63 (https://doi.org/10.1109/AVSS.2012.63) | Published: 2012-09-18
Abstract: This paper presents a novel method for video-based traffic state detection on motorways performed on smart cameras. Camera calibration parameters are obtained from the known length of lane markings. Mean traffic speed is estimated with the Kanade-Lucas-Tomasi (KLT) optical flow method using robust outlier detection. Traffic density is estimated using a robust statistical counting method. Our method has been implemented on an embedded smart camera and evaluated under different road and illumination conditions. It achieves a detection rate of more than 95% for stationary traffic.
Citations: 12
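Illustrative sketch of the speed-estimation step only: track sparse corners with pyramidal KLT between consecutive frames, reject outlier tracks with a median/MAD rule, and convert the median pixel displacement to km/h using an assumed pixel-to-metre scale. The paper derives calibration from lane-marking length and adds a separate density estimator, neither of which is reproduced here.

```python
import numpy as np
import cv2

def mean_speed_kmh(prev_gray, curr_gray, metres_per_pixel, fps):
    """Robust mean speed between two grayscale frames via KLT tracking.
    metres_per_pixel and fps are assumed known from calibration."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=300,
                                  qualityLevel=0.01, minDistance=7)
    if pts is None:
        return None
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None,
                                              winSize=(21, 21), maxLevel=3)
    ok = status.ravel() == 1
    mag = np.linalg.norm((nxt - pts).reshape(-1, 2)[ok], axis=1)  # px per frame
    if mag.size == 0:
        return None
    med = np.median(mag)
    mad = np.median(np.abs(mag - med))
    keep = np.abs(mag - med) < max(3.0 * 1.4826 * mad, 0.5)       # outlier rejection
    return float(np.median(mag[keep]) * metres_per_pixel * fps * 3.6)

# toy check: a bright block shifted by 8 px at 25 fps and 0.05 m/px -> ~36 km/h
prev = np.zeros((240, 320), np.uint8); cv2.rectangle(prev, (50, 100), (110, 140), 255, -1)
curr = np.zeros((240, 320), np.uint8); cv2.rectangle(curr, (58, 100), (118, 140), 255, -1)
print(mean_speed_kmh(prev, curr, metres_per_pixel=0.05, fps=25))
```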
Analyzing the Subspaces Obtained by Dimensionality Reduction for Human Action Recognition from 3d Data
Authors: Marco Körner, Joachim Denzler
DOI: 10.1109/AVSS.2012.10 (https://doi.org/10.1109/AVSS.2012.10) | Published: 2012-09-01
Abstract: Since depth-measuring devices for real-world scenarios have become available in the recent past, the use of 3d data is coming more into focus in human action recognition. Due to the increased amount of data, it seems advisable to model the trajectory of every landmark in the context of all other landmarks, which is commonly done with dimensionality reduction techniques such as PCA. In this paper we present an approach that directly uses the subspaces (i.e. their basis vectors) for feature extraction and action classification, instead of projecting the landmark data themselves. This yields a fixed-length description of action sequences regardless of the number of provided frames. We compare various global techniques for dimensionality reduction and analyze their suitability for our proposed scheme. Experiments performed on the CMU Motion Capture dataset show promising recognition rates as well as robustness in the presence of noise and incorrect detection of landmarks.
Citations: 6
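Illustrative sketch (plain PCA only; the paper compares several reduction techniques): fit PCA to each sequence's stacked landmark coordinates and use the sign-normalized leading basis vectors, rather than the projected trajectories, as a fixed-length descriptor that does not depend on sequence length.

```python
import numpy as np
from sklearn.decomposition import PCA

def subspace_descriptor(sequence, n_components=4):
    """Fixed-length action descriptor from the PCA basis of one sequence.
    sequence: (n_frames, n_landmarks * 3) array of 3d landmark coordinates."""
    basis = PCA(n_components=n_components).fit(sequence).components_
    # resolve the sign ambiguity of eigenvectors so descriptors are comparable
    signs = np.sign(basis[np.arange(len(basis)), np.abs(basis).argmax(axis=1)])
    return (basis * signs[:, None]).ravel()

# toy check: sequences of different length give descriptors of the same size
rng = np.random.default_rng(0)
seq_a = rng.normal(size=(80, 15 * 3))    # 80 frames, 15 landmarks
seq_b = rng.normal(size=(200, 15 * 3))   # 200 frames, 15 landmarks
print(subspace_descriptor(seq_a).shape, subspace_descriptor(seq_b).shape)
```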
Improved Relational Feature Model for People Detection Using Histogram Similarity Functions
Authors: A. Zweng, M. Kampel
DOI: 10.1109/AVSS.2012.42 (https://doi.org/10.1109/AVSS.2012.42) | Published: 2012-09-01
Abstract: In this paper, we propose a new approach for people detection using a relational feature model (RFM) in combination with histogram similarity functions such as the Bhattacharyya distance, histogram intersection, histogram correlation, and the chi-square (χ2) histogram similarity function. The relational features are computed for all combinations of features extracted by a feature detection algorithm such as the Histograms of Oriented Gradients (HOG) descriptor. Our experiments show that the information from spatial histogram similarities reduces the number of false positives while preserving true positive detections. Detection is performed using a multi-scale overlapping sliding-window approach. In our experiments, we report results for different HOG cell sizes, owing to the large size of the resulting relational feature vector, as well as results for the histogram similarity functions mentioned above. Additionally, our results show that, besides fewer false positives, true positive responses in regions near people are much more accurate with relational features than with non-relational feature models.
Citations: 5
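Illustrative sketch (simplified HOG-like cells, not the paper's exact RFM): compute a gradient-orientation histogram per cell of a detection window, then form the relational feature vector from pairwise histogram similarities (Bhattacharyya, correlation, intersection, chi-square) over all cell pairs; this is why the vector grows quadratically with the number of cells.

```python
import numpy as np
import cv2
from itertools import combinations

def cell_orientation_histograms(gray, cell=16, bins=9):
    """Per-cell unsigned gradient-orientation histograms (HOG-like layout)."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    mag, ang = cv2.cartToPolar(gx, gy, angleInDegrees=True)
    ang = np.mod(ang, 180.0)
    hists = []
    for y in range(0, gray.shape[0] - cell + 1, cell):
        for x in range(0, gray.shape[1] - cell + 1, cell):
            h, _ = np.histogram(ang[y:y + cell, x:x + cell], bins=bins,
                                range=(0, 180),
                                weights=mag[y:y + cell, x:x + cell])
            hists.append(h.astype(np.float32))
    return hists

def relational_features(gray, cell=16, bins=9):
    """Pairwise histogram similarities between all cell histograms."""
    hists = cell_orientation_histograms(gray, cell, bins)
    methods = [cv2.HISTCMP_BHATTACHARYYA, cv2.HISTCMP_CORREL,
               cv2.HISTCMP_INTERSECT, cv2.HISTCMP_CHISQR]
    feats = [cv2.compareHist(hi, hj, m)
             for hi, hj in combinations(hists, 2) for m in methods]
    return np.asarray(feats, dtype=np.float32)

# toy check on a 128x64 window: C(32, 2) cell pairs x 4 similarities = 1984 values
win = np.random.default_rng(0).uniform(0, 255, (128, 64)).astype(np.uint8)
print(relational_features(win).shape)
```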
Moving Object Extraction Using Compressed Domain Features of H.264 INTRA Frames
Authors: Fu-Ping Wang, W. Chung, Guo-Kai Ni, Ing-Yi Chen, S. Kuo
DOI: 10.1109/AVSS.2012.46 (https://doi.org/10.1109/AVSS.2012.46) | Published: 2012-09-01
Abstract: A new, efficient algorithm using the compressed-domain features of H.264 INTRA frames is proposed for moving object extraction from huge video surveillance archives. To achieve search efficiency, we propose to locate moving objects by scrutinizing only the INTRA frames of video surveillance archives in the H.264 compressed domain with short GOP length. In the proposed structure, a modified codebook algorithm is designed to build block-based background models from the INTRA coding features. Through subtraction against the background codebook models, the foreground energy frame is filtered and normalized for detecting the existence of moving objects. To overcome the over-segmentation problem and enable unsupervised searching, a new hysteresis-thresholding structure, whose thresholds are obtained automatically by an efficient algorithm, is adopted to extract foreground blocks. In the final step, connected components labeling (CCL) and morphological filters are employed to obtain the list of moving objects. As shown in the experimental results, the proposed algorithm outperforms representative existing works.
Citations: 3
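Illustrative sketch of the back end only (the H.264 codebook background model is not reproduced): given a block-level foreground-energy map, apply dual thresholds with hysteresis, morphological cleanup, and connected-components labelling to obtain candidate moving objects. The thresholds here are simple quantiles; the paper derives them automatically by its own unsupervised procedure.

```python
import numpy as np
from scipy import ndimage as ndi

def extract_objects(energy, low_q=0.85, high_q=0.96, min_blocks=4):
    """Toy moving-object extraction from a block-level foreground-energy map."""
    low, high = np.quantile(energy, [low_q, high_q])
    strong = energy >= high                       # seed blocks
    weak = energy >= low                          # candidate blocks
    # hysteresis: keep weak blocks only if connected to a strong seed
    labels, _ = ndi.label(weak)
    seeds = np.unique(labels[strong])
    mask = np.isin(labels, seeds[seeds > 0])
    # morphological cleanup, then connected-components labelling
    mask = ndi.binary_closing(mask, structure=np.ones((3, 3)))
    labels, _ = ndi.label(mask)
    boxes = [sl for sl in ndi.find_objects(labels)
             if sl is not None and mask[sl].sum() >= min_blocks]
    return boxes                                  # (row slice, col slice) per object

# toy check on a synthetic 45x80 macroblock grid with two moving objects
rng = np.random.default_rng(0)
e = rng.normal(0.0, 1.0, (45, 80))
e[10:16, 20:30] += 8.0
e[30:36, 55:70] += 8.0
print(extract_objects(e))
```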
Activity Analysis in Complicated Scenes Using DFT Coefficients of Particle Trajectories
Authors: Jingxin Xu, S. Denman, S. Sridharan, C. Fookes
DOI: 10.1109/AVSS.2012.6 (https://doi.org/10.1109/AVSS.2012.6) | Published: 2012-06-25
Abstract: Modelling activities in crowded scenes is very challenging, as object tracking is not robust in complicated scenes and optical flow does not capture long-range motion. We propose a novel approach to analyse activities in crowded scenes using a "bag of particle trajectories". Particle trajectories are extracted from foreground regions within short video clips using particle video, which estimates long-range motion, in contrast to optical flow, which is only concerned with inter-frame motion. Our applications include temporal video segmentation and anomaly detection, and we perform our evaluation on several real-world datasets containing complicated scenes. We show that our approaches achieve state-of-the-art performance for both tasks.
Citations: 12
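Illustrative sketch of the descriptor and bag-of-trajectories steps (the particle-video extraction is not reproduced, and the paper's exact coefficient selection may differ): resample each trajectory to a fixed length, take the low-frequency DFT magnitudes of its x(t) and y(t) series as the descriptor, and histogram the quantized descriptors per clip.

```python
import numpy as np
from sklearn.cluster import KMeans

def trajectory_descriptor(traj, n_samples=32, n_coeffs=8):
    """DFT-magnitude descriptor of one (T, 2) particle trajectory."""
    t = np.linspace(0.0, 1.0, len(traj))
    ts = np.linspace(0.0, 1.0, n_samples)
    resampled = np.stack([np.interp(ts, t, traj[:, d]) for d in range(2)], axis=1)
    resampled -= resampled.mean(axis=0)              # translation invariance
    spec = np.abs(np.fft.rfft(resampled, axis=0))    # DFT magnitudes per coordinate
    return spec[:n_coeffs].ravel()

def bag_of_trajectories(clip_trajectories, codebook):
    """Normalized histogram of quantized trajectory descriptors for one clip."""
    D = np.array([trajectory_descriptor(t) for t in clip_trajectories])
    hist = np.bincount(codebook.predict(D), minlength=codebook.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

# toy check: codebook from random-walk trajectories, then one clip histogram
rng = np.random.default_rng(0)
train = [np.cumsum(rng.normal(size=(rng.integers(20, 60), 2)), axis=0) for _ in range(200)]
codebook = KMeans(n_clusters=16, n_init=10, random_state=0).fit(
    np.array([trajectory_descriptor(t) for t in train]))
print(bag_of_trajectories(train[:30], codebook))
```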