Latest publications: 2016 13th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS)

Visual tracking based on object appearance and structure preserved local patches matching
Wei Wang, Kun Duan, Tai-Peng Tian, Ting Yu, Ser-Nam Lim, H. Qi
{"title":"Visual tracking based on object appearance and structure preserved local patches matching","authors":"Wei Wang, Kun Duan, Tai-Peng Tian, Ting Yu, Ser-Nam Lim, H. Qi","doi":"10.1109/AVSS.2016.7738065","DOIUrl":"https://doi.org/10.1109/AVSS.2016.7738065","url":null,"abstract":"Drift is the most difficult issue in object visual tracking based on framework of “tracking-by-detection”. Due to the self-taught learning, the mis-aligned samples are potentially to be incorporated in learning and degrade the discrimination of the tracker. This paper proposes a new tracking approach that resolves this problem by three multi-level collaborative components: a high-level global appearance tracker provides a basic prediction, upon which the structure preserved low-level local patches matching helps to guarantee precise tracking with minimized drift. Those local patches are deliberately deployed on the foreground object via foreground/background segmentation, which is realized by a simple and efficient classifier trained by super-pixel segments. Experimental results show that the three closely collaborated components enable our tracker runs in real time and performs favourably against state-of-the-art approaches on challenging benchmark sequences.","PeriodicalId":438290,"journal":{"name":"2016 13th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS)","volume":"150 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115598720","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
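A minimal sketch of the local patch matching idea described in the abstract above (not the authors' code; the search window size, penalty weight, and structure prior are assumptions): the global tracker supplies a coarse box, each local patch is matched by normalized cross-correlation in a small search window, and a penalty on deviation from the patch's stored offset preserves structure.

```python
import numpy as np
import cv2

def match_patches(frame_gray, patches, offsets, box_xy, search=20, lam=0.05):
    """patches: small grayscale templates; offsets: (x, y) positions of the
    patches relative to the box corner box_xy in the previous frame.
    Assumes every search window lies inside the frame."""
    bx, by = box_xy
    refined = []
    for patch, (ox, oy) in zip(patches, offsets):
        ph, pw = patch.shape
        x0, y0 = bx + ox - search, by + oy - search
        roi = frame_gray[y0:y0 + ph + 2 * search, x0:x0 + pw + 2 * search]
        ncc = cv2.matchTemplate(roi, patch, cv2.TM_CCOEFF_NORMED)
        # structure prior: penalize displacement from the expected offset,
        # which sits at (search, search) in the score map
        ys, xs = np.mgrid[0:ncc.shape[0], 0:ncc.shape[1]]
        penalty = ((xs - search) ** 2 + (ys - search) ** 2) / search ** 2
        dy, dx = np.unravel_index(np.argmax(ncc - lam * penalty), ncc.shape)
        refined.append((x0 + dx, y0 + dy))
    return refined  # refined absolute patch positions in this frame
```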
Privacy-preserving, indoor occupant localization using a network of single-pixel sensors
Douglas Roeper, Jiawei Chen, J. Konrad, P. Ishwar
{"title":"Privacy-preserving, indoor occupant localization using a network of single-pixel sensors","authors":"Douglas Roeper, Jiawei Chen, J. Konrad, P. Ishwar","doi":"10.1109/AVSS.2016.7738073","DOIUrl":"https://doi.org/10.1109/AVSS.2016.7738073","url":null,"abstract":"We propose an approach to indoor occupant localization using a network of single-pixel, visible-light sensors. In addition to preserving privacy, our approach vastly reduces data transmission rate and is agnostic to eavesdropping. We develop two purely data-driven localization algorithms and study their performance using a network of 6 such sensors. In one algorithm, we divide the monitored floor area (2.37m×2.72m) into a 3×3 grid of cells and classify location of a single person as belonging to one of the 9 cells using a support vector machine classifier. In the second algorithm, we estimate person's coordinates using support vector regression. In cross-validation tests in public (e.g., conference room) and private (e.g., home) scenarios, we obtain 67-72% correct classification rate for cells and 0.31-0.35m mean absolute distance error within the monitored space. Given the simplicity of sensors and processing, these are encouraging results and can lead to useful applications today.","PeriodicalId":438290,"journal":{"name":"2016 13th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128467719","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
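A hedged sketch of the two localization algorithms on synthetic data. The area size, sensor count, and grid come from the abstract; the sensor signal model and all numbers below are invented for illustration only.

```python
import numpy as np
from sklearn.svm import SVC, SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 900
xy = rng.uniform([0, 0], [2.37, 2.72], size=(n, 2))    # person positions (m)
sensors = rng.uniform([0, 0], [2.37, 2.72], size=(6, 2))  # assumed layout
# toy signal: each single-pixel reading decays with distance to the person
X = np.exp(-np.linalg.norm(xy[:, None] - sensors[None], axis=2)) \
    + 0.05 * rng.standard_normal((n, 6))
# algorithm 1: classify into one of 9 grid cells
cells = (xy[:, 0] // (2.37 / 3)).astype(int) * 3 \
      + (xy[:, 1] // (2.72 / 3)).astype(int)
print(cross_val_score(SVC(kernel="rbf"), X, cells, cv=5).mean())
# algorithm 2: regress coordinates, one SVR per axis
svr_x = SVR().fit(X, xy[:, 0])
svr_y = SVR().fit(X, xy[:, 1])
```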
Generalized activity recognition using accelerometer in wearable devices for IoT applications
E. A. Safadi, Fahim Mohammad, D. Iyer, Benjamin J. Smiley, Nilesh Jain
{"title":"Generalized activity recognition using accelerometer in wearable devices for IoT applications","authors":"E. A. Safadi, Fahim Mohammad, D. Iyer, Benjamin J. Smiley, Nilesh Jain","doi":"10.1109/AVSS.2016.7738020","DOIUrl":"https://doi.org/10.1109/AVSS.2016.7738020","url":null,"abstract":"The proliferation of low power and low cost continuous sensing has generated an immense interest in the area of activity recognition. However, the real time detection is still a challenge for several reasons: requirement from the user to specify the type of activity, complex algorithms, and collection of data from multiple devices. In this paper, we describe a generalized activity recognition system, its applications, and the challenges involved in implementing the algorithm in resource-constrained devices. The distinctive aspects of our study include: 1) automatic detection and recognition of different activities (running, walking, crawling, climbing, and pronating), 2) using just one axis from an accelerometer sensor, and 3) simple features and pattern matching algorithm leading to computationally inexpensive and memory efficient system suitable for resource-constrained wearable devices. The activity recognition model was trained using data collected from 52 unique subjects. The model was mapped onto Intel® Quark™ SE Pattern Matching Engine, and field-tested using eight additional subjects achieving performance up to 91%.","PeriodicalId":438290,"journal":{"name":"2016 13th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129308847","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
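An illustrative sketch of the single-axis feature-plus-template-matching idea. The feature set, window length, and nearest-neighbour matcher are assumptions; this is not the Intel pattern-matching engine's logic.

```python
import numpy as np

def features(win):
    """Simple features from one window of a single accelerometer axis."""
    centered = win - win.mean()
    zc = np.sum(np.diff(np.signbit(centered).astype(np.int8)) != 0)
    return np.array([win.mean(), win.std(), np.abs(np.diff(win)).mean(), zc])

def classify(win, templates):
    """templates: dict activity -> list of feature vectors from training;
    returns the activity whose closest template matches best (1-NN)."""
    f = features(win)
    return min(templates,
               key=lambda a: min(np.linalg.norm(f - t) for t in templates[a]))

# usage sketch: 2 s windows at an assumed 50 Hz sampling rate
# label = classify(axis_samples[0:100], templates)
```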
Ear in the sky: Ego-noise reduction for auditory micro aerial vehicles
Lin Wang, A. Cavallaro
{"title":"Ear in the sky: Ego-noise reduction for auditory micro aerial vehicles","authors":"Lin Wang, A. Cavallaro","doi":"10.1109/AVSS.2016.7738063","DOIUrl":"https://doi.org/10.1109/AVSS.2016.7738063","url":null,"abstract":"We investigate the spectral and spatial characteristics of the ego-noise of a multirotor micro aerial vehicle (MAV) using audio signals captured with multiple onboard microphones and derive a noise model that grounds the feasibility of microphone-array techniques for noise reduction. The spectral analysis suggests that the ego-noise consists of narrowband harmonic noise and broadband noise, whose spectra vary dynamically with the motor rotation speed. The spatial analysis suggests that the ego-noise of a P-rotor MAV can be modeled as P directional noises plus one diffuse noise. Moreover, because of the fixed positions of the microphones and motors, we can assume that the acoustic mixing network of the ego-noise is stationary. We validate the proposed noise model and the stationary mixing assumption by applying blind source separation to multi-channel recordings from both a static and a moving MAV and quantify the signal-to-noise ratio improvement. Moreover, we make all the audio recordings publicly available.","PeriodicalId":438290,"journal":{"name":"2016 13th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130323713","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 27
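A minimal sketch of how the stationary-mixing assumption enables blind source separation: with fixed microphone/motor geometry, a time-invariant unmixing matrix can be learned once. FastICA here is one standard BSS algorithm applied instantaneously for brevity; the paper's setting involves convolutive mixing (per-frequency-bin processing on STFT data), which this sketch does not replicate.

```python
import numpy as np
from sklearn.decomposition import FastICA

def separate_ego_noise(mics):
    """mics: (n_samples, n_channels) multi-channel MAV recording.
    Returns estimated source signals and the mixing matrix; rotor-noise
    components would then be identified and discarded downstream."""
    ica = FastICA(n_components=mics.shape[1], whiten="unit-variance",
                  random_state=0)
    sources = ica.fit_transform(mics)   # (n_samples, n_components)
    return sources, ica.mixing_
```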
Robust vehicle tracking for urban traffic videos at intersections
C. Li, A. Chiang, G. Dobler, Y. Wang, Kun Xie, K. Ozbay, M. Ghandehari, J. Zhou, D. Wang
{"title":"Robust vehicle tracking for urban traffic videos at intersections","authors":"C. Li, A. Chiang, G. Dobler, Y. Wang, Kun Xie, K. Ozbay, M. Ghandehari, J. Zhou, D. Wang","doi":"10.1109/AVSS.2016.7738075","DOIUrl":"https://doi.org/10.1109/AVSS.2016.7738075","url":null,"abstract":"We develop a robust, unsupervised vehicle tracking system for videos of very congested road intersections in urban environments. Raw tracklets from the standard Kanade-Lucas-Tomasi tracking algorithm are treated as sample points and grouped to form different vehicle candidates. Each tracklet is described by multiple features including position, velocity, and a foreground score derived from robust PCA background subtraction. By considering each tracklet as a node in a graph, we build the adjacency matrix for the graph based on the feature similarity between the tracklets and group these tracklets using spectral embedding and Dirichelet Process Gaussian Mixture Models. The proposed system yields excellent performance for traffic videos captured in urban environments and highways.","PeriodicalId":438290,"journal":{"name":"2016 13th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129549457","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 20
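A sketch of the tracklet-grouping stage under simplifying assumptions (Gaussian affinity, a fixed embedding dimension, and a flat feature vector per tracklet): build an affinity matrix from feature similarity, embed spectrally, then cluster with a Dirichlet process GMM so the number of vehicles need not be fixed in advance.

```python
import numpy as np
from sklearn.manifold import SpectralEmbedding
from sklearn.mixture import BayesianGaussianMixture

def group_tracklets(feats, sigma=1.0, dim=5, max_k=30):
    """feats: (n_tracklets, d) rows of e.g. [x, y, vx, vy, fg_score]."""
    d2 = ((feats[:, None] - feats[None]) ** 2).sum(-1)
    affinity = np.exp(-d2 / (2 * sigma ** 2))          # graph adjacency
    emb = SpectralEmbedding(n_components=dim,
                            affinity="precomputed").fit_transform(affinity)
    dpgmm = BayesianGaussianMixture(
        n_components=max_k,
        weight_concentration_prior_type="dirichlet_process").fit(emb)
    return dpgmm.predict(emb)   # vehicle candidate id per tracklet
```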
Unsupervised data association for metric learning in the context of multi-shot person re-identification
F. M. Khan, F. Brémond
{"title":"Unsupervised data association for metric learning in the context of multi-shot person re-identification","authors":"F. M. Khan, F. Brémond","doi":"10.1109/AVSS.2016.7738058","DOIUrl":"https://doi.org/10.1109/AVSS.2016.7738058","url":null,"abstract":"Appearance based person re-identification is a challenging task, specially due to difficulty in capturing high intra-person appearance variance across cameras when inter-person similarity is also high. Metric learning is often used to address deficiency of low-level features by learning view specific re-identification models. The models are often acquired using a supervised algorithm. This is not practical for real-world surveillance systems because annotation effort is view dependent. In this paper, we propose a strategy to automatically generate labels for person tracks to learn similarity metric for multi-shot person re-identification task. We demonstrate on multiple challenging datasets that the proposed labeling strategy significantly improves performance of two baseline methods and the extent of improvement is comparable to that of manual annotations in the context of KISSME algorithm [14].","PeriodicalId":438290,"journal":{"name":"2016 13th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS)","volume":"13 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113991956","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 25
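For context, the core of the KISSME metric that the automatically generated labels would feed into, in its standard formulation (this is not the authors' code): the Mahalanobis matrix is the difference of the inverse covariances of similar-pair and dissimilar-pair feature differences, projected onto the PSD cone.

```python
import numpy as np

def kissme(X_a, X_b, same):
    """X_a, X_b: (n_pairs, d) paired descriptors; same: boolean array
    marking pairs labeled as the same person (here, by the unsupervised
    association instead of manual annotation)."""
    d = X_a - X_b
    cov_s = d[same].T @ d[same] / max(same.sum(), 1)
    cov_d = d[~same].T @ d[~same] / max((~same).sum(), 1)
    M = np.linalg.inv(cov_s) - np.linalg.inv(cov_d)
    # clip negative eigenvalues so M defines a valid (pseudo-)metric
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.clip(w, 0, None)) @ V.T

# distance between descriptors x and y: (x - y) @ M @ (x - y)
```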
A two-stage foreground propagation for moving object detection in a non-stationary camera
WonTaek Chung, Y. Kim, Yong-Joong Kim, Daijin Kim
{"title":"A two-stage foreground propagation for moving object detection in a non-stationary","authors":"WonTaek Chung, Y. Kim, Yong-Joong Kim, Daijin Kim","doi":"10.1109/AVSS.2016.7738024","DOIUrl":"https://doi.org/10.1109/AVSS.2016.7738024","url":null,"abstract":"In this paper, we propose a two-stage foreground propagation that uses clues to adapt to the environment and detect moving objects in a non-stationary camera. The first stage creates a weight matrix to instantaneously regulate the background model by responding to clues from frame differencing and background subtraction. The regulated background model is less affected by inaccurate motion compensation. In the second stage, an iterative approach is taken to refine the threshold for each pixel location by initially using pixels with high foreground probability as clues. Foreground regions detected from the refined threshold are less likely to be false detections and capture true object regions with completeness. Experimental results showed that the two-stage foreground propagation had significantly higher recall with comparable precision and outperformed other methods.","PeriodicalId":438290,"journal":{"name":"2016 13th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116203151","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
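A toy sketch of the first stage only; the weight values, thresholds, and running-average model are assumed forms, not the paper's. Frame differencing and background subtraction vote on a per-pixel weight that throttles background updates where the clues indicate foreground, limiting damage from bad motion compensation.

```python
import numpy as np

def update_background(bg, frame, prev, alpha=0.05, tau_fd=12.0, tau_bs=25.0):
    """bg, frame, prev: float32 grayscale images (frame/prev already
    motion-compensated into the background's coordinate frame)."""
    fd = np.abs(frame - prev) > tau_fd      # frame-differencing clue
    bs = np.abs(frame - bg) > tau_bs        # background-subtraction clue
    w = np.where(fd & bs, 0.0,              # confident foreground: freeze
        np.where(fd | bs, 0.5, 1.0))        # ambiguous: damped update
    return bg + alpha * w * (frame - bg)    # regulated running average
```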
A flexible ensemble-SVM for computer vision tasks
Rémi Trichet, N. O’Connor
{"title":"A flexible ensemble-SVM for computer vision tasks","authors":"Rémi Trichet, N. O’Connor","doi":"10.1109/AVSS.2016.7738028","DOIUrl":"https://doi.org/10.1109/AVSS.2016.7738028","url":null,"abstract":"This paper presents an ensemble-SVM method that features a data selection mechanism with stochastic and deterministic properties, the use of extreme value theory for classifier calibration, and the introduction of random forest for classifier combination. We applied the proposed algorithm to 2 event recognition datasets and the PASCAL2007 object detection dataset and compared it to single SVM and common computer vision ensemble-SVM methods. Our algorithm outperforms its competitors and shows a considerable boost on datasets with a limited amount of outliers.","PeriodicalId":438290,"journal":{"name":"2016 13th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS)","volume":"83 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122612896","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
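A sketch of the combination scheme under simplifying assumptions: binary labels, uniform random subsets in place of the paper's stochastic/deterministic selection, and no extreme-value calibration. Several SVMs are trained on data subsets, and a random forest learns to fuse their decision scores.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

def fit_ensemble(X, y, n_svm=10, frac=0.6, seed=0):
    """X: (n, d) features; y: binary labels in {0, 1}."""
    rng = np.random.default_rng(seed)
    svms = []
    for _ in range(n_svm):
        idx = rng.choice(len(X), int(frac * len(X)), replace=False)
        svms.append(SVC(kernel="rbf").fit(X[idx], y[idx]))
    # each member's decision score becomes one input feature of the fuser
    scores = np.column_stack([m.decision_function(X) for m in svms])
    combiner = RandomForestClassifier(n_estimators=100).fit(scores, y)
    return svms, combiner

def predict_ensemble(svms, combiner, X):
    scores = np.column_stack([m.decision_function(X) for m in svms])
    return combiner.predict(scores)
```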
Towards semantic context-aware drones for aerial scenes understanding
Danilo Cavaliere, S. Senatore, M. Vento, V. Loia
{"title":"Towards semantic context-aware drones for aerial scenes understanding","authors":"Danilo Cavaliere, S. Senatore, M. Vento, V. Loia","doi":"10.1109/AVSS.2016.7738062","DOIUrl":"https://doi.org/10.1109/AVSS.2016.7738062","url":null,"abstract":"Visual object tracking with unmanned aerial vehicles (UAVs) plays a central role in the aerial surveillance. Reliable object detection depends on many factors such as large displacements, occlusions, image noise, illumination and pose changes or image blur that may compromise the object labeling. The paper presents a proposal for a hybrid solution that adds semantic information to the video tracking processing: along with the tracked objects, the scene is completely depicted by data from places, natural features, or in general Points of Interest (POIs). Each scene from a video sequence is semantically described by ontological statements which, by inference, support the object identification which often suffers from some weakness in the object tracking methods. The synergy between the tracking methods and semantic technologies seems to bridge the object labeling gap, enhance the understanding of the situation awareness, as well as critical alarming situations.","PeriodicalId":438290,"journal":{"name":"2016 13th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132677324","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 12
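A hedged sketch of the idea using rdflib; the vocabulary, POI relation, and inference rule below are invented for illustration and are not the paper's ontology. Tracked objects and POIs become RDF statements, and a simple spatial rule refines a weak tracker label using context.

```python
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF

EX = Namespace("http://example.org/scene#")  # hypothetical vocabulary
g = Graph()
g.add((EX.obj1, RDF.type, EX.MovingObject))
g.add((EX.obj1, EX.nearPOI, EX.marina))            # from geo-localized POIs
g.add((EX.obj1, EX.trackerLabel, Literal("car")))  # uncertain tracker output
g.add((EX.marina, RDF.type, EX.WaterArea))

# toy rule: a moving object near a water area is more plausibly a boat
for obj in g.subjects(RDF.type, EX.MovingObject):
    for poi in g.objects(obj, EX.nearPOI):
        if (poi, RDF.type, EX.WaterArea) in g:
            g.set((obj, EX.refinedLabel, Literal("boat")))
```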
Online multi-person tracking using Integral Channel Features
H. Kieritz, S. Becker, W. Hübner, Michael Arens
{"title":"Online multi-person tracking using Integral Channel Features","authors":"H. Kieritz, S. Becker, W. Hübner, Michael Arens","doi":"10.1109/AVSS.2016.7738059","DOIUrl":"https://doi.org/10.1109/AVSS.2016.7738059","url":null,"abstract":"Online multi-person tracking benefits from using an online learned appearance model to associate detections to tracks and further to close gaps in detections. Since Integral Channel Features (ICF) are popular for fast pedestrian detection, we propose an online appearance model that is using the same features without recalculation. The proposed method uses online Multiple-Instance Learning (MIL) to incrementally train an appearance model for each person discriminating against its surrounding. We show that a low number of discriminatingly selected Integral Channel Features are sufficient to achieve state-of-the-art results on the MOT2015 and MOT2016 benchmark.","PeriodicalId":438290,"journal":{"name":"2016 13th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS)","volume":"78 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131165033","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 95
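A sketch of why ICF can be shared between the detector and the appearance model: channels are computed once per frame, and any rectangular sum over a channel is four lookups in its integral image. The channel choice below (intensity plus gradient magnitude) is a simplified assumption; real ICF detectors typically also use LUV and oriented-gradient channels.

```python
import numpy as np

def integral_channels(gray):
    """Compute per-frame channels and their zero-padded integral images."""
    gy, gx = np.gradient(gray.astype(np.float32))
    channels = [gray.astype(np.float32), np.hypot(gx, gy)]
    return [np.pad(c.cumsum(0).cumsum(1), ((1, 0), (1, 0))) for c in channels]

def rect_sum(ii, x0, y0, x1, y1):
    """Sum of one channel over [x0, x1) x [y0, y1) via 4 lookups."""
    return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]

# An online MIL appearance model would then threshold a handful of such
# rect_sum features inside each person's box, reusing the detector's
# channels with no recalculation.
```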