2016 13th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS): Latest Publications

Bidirectional sparse representations for multi-shot person re-identification
Solene Chan-Lang, Q. Pham, C. Achard
DOI: 10.1109/AVSS.2016.7738064
Abstract: With the development of surveillance cameras, person re-identification has gained much interest; however, re-identifying people across cameras remains a challenging problem that requires not only a good feature description but also a reliable matching scheme. Our method can be applied with any feature and focuses on the second requirement. We propose a robust bidirectional sparse coding method that improves on simple sparse coding performance. Some recent works have already explored sparse representation for the re-identification task, but none has considered the problem from both the probe and the gallery perspectives. We propose a bidirectional sparse representation method that searches for the most likely match for the test element in the gallery set and ensures that the selected gallery match is indeed closely related to the probe. Extensive experiments on two datasets, CUHK03 and iLIDS-VID, show the effectiveness of our approach.
Citations: 6
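The bidirectional matching idea in the abstract above can be sketched as follows, assuming a simple greedy (OMP-style) coder over unit-norm dictionary columns. The function names (`omp`, `bidirectional_score`), the sparsity level `k`, and the product combination of forward and backward evidence are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def omp(D, y, k=2):
    """Greedy orthogonal matching pursuit: k-sparse code of y over dictionary D."""
    residual, idx = y.copy(), []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))  # most correlated atom
        if j not in idx:
            idx.append(j)
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        residual = y - D[:, idx] @ coef
    x[idx] = coef
    return x

def bidirectional_score(probe, gallery):
    """Score each gallery column by combining probe->gallery and
    gallery->probe reconstruction evidence (toy scoring rule)."""
    fwd = np.abs(omp(gallery, probe))          # code probe over the gallery
    scores = np.zeros(gallery.shape[1])
    for g in range(gallery.shape[1]):
        # backward: does this gallery element select the probe when coded
        # over [probe | other gallery elements]?
        others = np.delete(gallery, g, axis=1)
        back_dict = np.column_stack([probe, others])
        bwd = np.abs(omp(back_dict, gallery[:, g]))[0]
        scores[g] = fwd[g] * bwd               # agreement in both directions
    return scores
```

A probe that both selects a gallery element and is selected back by it gets a high score; a one-directional match is suppressed.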
A hybrid framework for online recognition of activities of daily living in real-world settings
Farhood Negin, Michal Koperski, C. Crispim, F. Brémond, S. Coşar, Konstantinos Avgerinakis
DOI: 10.1109/AVSS.2016.7738021
Abstract: Many supervised approaches report state-of-the-art results for recognizing short-term actions in manually clipped videos by utilizing fine body motion information. The main downside of these approaches is that they are not applicable in real-world settings. The challenge is different when it comes to unstructured scenes and long-term videos. Unsupervised approaches have been used to model long-term activities, but their main pitfall is a limited ability to handle subtle differences between similar activities, since they mostly use global motion information. In this paper, we present a hybrid approach for long-term human activity recognition that recognizes activities more precisely than unsupervised approaches. It enables processing of long-term videos by automatically clipping them and performing online recognition. The performance of our approach has been tested on two Activities of Daily Living (ADL) datasets. Experimental results are promising compared to existing approaches.
Citations: 11
ViTBAT: Video tracking and behavior annotation tool
T. A. Biresaw, T. Nawaz, J. Ferryman, A. Dell
DOI: 10.1109/AVSS.2016.7738055
Abstract: Reliable and repeatable evaluation of low-level (tracking) and high-level (behavior analysis) vision tasks requires annotation of ground-truth information in videos. Depending on the scenario, ground-truth annotation may be required for individual targets and/or groups of targets. Unlike existing tools, which generally allow explicit annotation for individual targets only, we propose a tool that enables explicit annotation of both individual targets and groups of targets for tracking and behavior recognition tasks, together with effective visualization features. Whether for individuals or groups, the tool allows labeling of their states and behaviors manually or semi-automatically through a simple and friendly user interface in a time-efficient manner. Based on a subjective assessment, the proposed tool is found to be more effective than the well-known ViPER tool on a series of defined criteria. A dedicated website makes the tool publicly available to the community.
Citations: 47
Edge shape pattern for background modeling based on hybrid local codes
Seokjin Hong, Jaemyun Kim, Adín Ramírez Rivera, Gihun Song, O. Chae
DOI: 10.1109/AVSS.2016.7738015
Abstract: In this paper, we propose a novel edge descriptor method for background modeling. In comparison to previous edge-based local-pattern methods, it is more robust to noise and illumination variations due to the use of principal gradient information in a local neighborhood. For the background modeling problem, we combined the proposed method with the Local Hybrid Pattern and experimented with an adaptive-dictionary-model-based background modeling method. We show in quantitative evaluations that the proposed method outperforms other local edge descriptors when applied within the same framework. Furthermore, we show that our proposed method is more powerful than other state-of-the-art methods on standard datasets for the background modeling problem.
Citations: 3
Low-resolution Convolutional Neural Networks for video face recognition
C. Herrmann, D. Willersinn, J. Beyerer
DOI: 10.1109/AVSS.2016.7738017
Abstract: Security and safety applications such as surveillance or forensics demand face recognition in low-resolution video data. We propose a face recognition method based on a Convolutional Neural Network (CNN) with a manifold-based track comparison strategy for low-resolution video face recognition. The low-resolution domain is addressed by adjusting the network architecture to prevent bottlenecks or significant upscaling of face images. The CNN is trained on a combination of a large-scale self-collected video face dataset and large-scale public image face datasets, resulting in about 1.4M training images. To handle large amounts of video data and enable effective comparison, the CNN face descriptors are compared efficiently at track level by local patch means. Our setup achieves 80.3% accuracy on a 32×32-pixel low-resolution version of the YouTube Faces Database and outperforms local image descriptors as well as the state-of-the-art VGG-Face network [20] in this domain. The superior performance of the proposed method is confirmed on a self-collected in-the-wild surveillance dataset.
Citations: 35
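The track-level comparison by local patch means mentioned in the abstract above can be sketched as follows, assuming that a "local patch" is a temporally contiguous chunk of a track's per-frame descriptors. The chunking scheme, `n_patches`, and the minimum-pairwise-distance comparison are assumptions standing in for the paper's manifold-based strategy:

```python
import numpy as np

def patch_means(track, n_patches=3):
    """Split a track's per-frame descriptors (frames x dims) into
    temporally contiguous chunks and represent each chunk by its mean."""
    chunks = np.array_split(track, n_patches, axis=0)
    return np.stack([c.mean(axis=0) for c in chunks])

def track_distance(track_a, track_b, n_patches=3):
    """Compare two tracks via the minimum pairwise distance between
    their patch means, instead of all-pairs frame comparison."""
    A = patch_means(track_a, n_patches)
    B = patch_means(track_b, n_patches)
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return float(d.min())
```

Comparing a handful of patch means instead of every frame pair keeps the cost per track pair constant regardless of track length.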
Improving reliability of people tracking by adding semantic reasoning
L. Greco, Pierluigi Ritrovato, Alessia Saggese, M. Vento
DOI: 10.1109/AVSS.2016.7738025
Abstract: Even the best-performing object tracking algorithms on well-known datasets commit several errors that prevent concrete adoption in real-world scenarios unless some compromise on tracking quality and reliability is accepted. The aim of this paper is to demonstrate that, by adding a knowledge-based reasoner built on top of semantic web technologies to a traditional object tracking solution, it is possible to identify and properly manage common tracking problems. The proposed approach has been evaluated using View 001 and View 003 of the PETS2009 dataset, with interesting results.
Citations: 7
Video summarization of surveillance cameras
Po Kong Lai, M. Decombas, Kelvin Moutet, R. Laganière
DOI: 10.1109/AVSS.2016.7738018
Abstract: The number of video surveillance cameras has increased considerably in recent years. There is therefore a need to process the captured videos so that human operators can quickly review the activities recorded by a camera over a long period of time. In this paper, we propose an approach for producing video summaries: abbreviated videos preserving the important elements of interest. We introduce a dataset as well as three evaluation metrics for quantifying the performance of a video summary with respect to compression length, the amount of activity retained, and the amount of activity packed into each frame of the summary.
Citations: 22
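The three evaluation axes named in the abstract above (compression length, activity retained, activity packed per frame) can be sketched with toy definitions; these formulas and the per-frame activity-count representation are illustrative assumptions, not the paper's exact metrics:

```python
def summary_metrics(activity, kept):
    """activity: per-frame activity counts for the full video.
    kept: indices of the frames retained in the summary.
    Returns (compression, retained, packing) under toy definitions."""
    total = sum(activity)
    kept_act = sum(activity[i] for i in kept)
    compression = len(kept) / len(activity)            # summary length ratio
    retained = kept_act / total if total else 1.0      # fraction of activity kept
    packing = kept_act / len(kept) if kept else 0.0    # activity per summary frame
    return compression, retained, packing
```

A good summary drives `compression` down while keeping `retained` near 1 and `packing` high; the three pull against each other, which is why all three are reported.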
Autonomous altitude measurement and landing area detection for indoor UAV applications
Burak Kakillioglu, Senem Velipasalar
DOI: 10.1109/AVSS.2016.7738069
Abstract: Fully autonomous navigation of unmanned vehicles, without relying on pre-installed tags or markers, remains a challenge, especially in GPS-denied areas and complex indoor environments. Robust altitude control and safe landing zone detection are two important tasks for indoor unmanned aerial vehicle (UAV) applications. In this paper, a novel approach is proposed for indoor UAVs to control their altitude and autonomously detect safe landing zones without relying on any markers or special setups, and without assuming that the environment is known. The proposed method employs both depth data and RGB images to detect and track safe landing zones.
Citations: 3
An in-depth study of sparse codes on abnormality detection
Huamin Ren, Hong Pan, S. Olsen, M. B. Jensen, T. Moeslund
DOI: 10.1109/AVSS.2016.7738016
Abstract: Sparse representation has been applied successfully to abnormal event detection, in which the baseline is to learn a dictionary accompanied by sparse codes. While much emphasis is put on discriminative dictionary construction, there are no comparative studies of sparse codes regarding abnormality detection. We present an in-depth study of two types of sparse code solutions, greedy algorithms and convex L1-norm solutions, and their impact on abnormality detection performance. We also propose a framework for combining sparse codes with different detection methods. Our comparative experiments are carried out from various angles to better understand the applicability of sparse codes, including computation time, reconstruction error, sparsity, detection accuracy, and performance when combined with various detection methods. The experimental results show that combining OMP codes with maximum coordinate detection can achieve state-of-the-art performance on the UCSD dataset [14].
Citations: 2
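The "maximum coordinate detection" pairing mentioned in the abstract above can be sketched in its simplest form, using a 1-sparse code (a single best-matching atom, i.e. matching pursuit with one step) as a stand-in for full OMP. The scoring rule, the `threshold`, and the unit-norm dictionary assumption are illustrative, not the paper's exact pipeline:

```python
import numpy as np

def max_coordinate_score(D, y):
    """1-sparse approximation: correlate y with every unit-norm atom
    (column of D) and report the largest coordinate magnitude."""
    coords = D.T @ y            # coefficient of each atom for y
    return float(np.max(np.abs(coords)))

def is_abnormal(D, y, threshold=0.5):
    """If no atom of the 'normal event' dictionary D explains y strongly,
    the max coordinate stays low and the sample is flagged as abnormal."""
    return max_coordinate_score(D, y) < threshold
```

The intuition follows the abstract: a dictionary learned from normal events reconstructs normal samples with at least one large coefficient, while abnormal samples spread weak energy across atoms.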