Latest Publications: 2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance

Incremental Learning Approach for Events Detection from Large Video Dataset
A. Wali, A. Alimi
DOI: 10.1109/AVSS.2010.54 (published 2010-08-29)
Abstract: In this paper, we propose a multi-SVM incremental learning strategy based on the Learn++ classifier for detection of predefined events in video. The strategy is offline and fast in the sense that any new class of event can be learned by the system from very few examples. The extraction and synthesis of suitable video events are used for this purpose. The results show that the performance of our system improves gradually and progressively as the number of learning rounds per event increases. We then demonstrate the usefulness of the toolbox for feature extraction, concept/event learning, and detection in a large collection of video surveillance data.
Citations: 16
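The Learn++-style scheme described above trains a new base classifier on each batch of examples and combines all of them by voting. A minimal sketch of that idea, using nearest-centroid stand-ins for the paper's SVM base learners (the class names and the unweighted vote are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

class NearestCentroid:
    """Toy stand-in for the paper's SVM base learners."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None], axis=2)
        return self.classes_[d.argmin(axis=1)]

class IncrementalEnsemble:
    """Learn++-style: train one base learner per data batch, keep all of them,
    and classify new samples by (unweighted, for simplicity) majority vote."""
    def __init__(self):
        self.learners = []

    def partial_fit(self, X, y):
        self.learners.append(NearestCentroid().fit(X, y))

    def predict(self, X):
        votes = np.stack([m.predict(X) for m in self.learners])
        return np.array([np.bincount(col).argmax() for col in votes.T])

# Two batches of labeled events; the ensemble grows by one learner per batch.
ens = IncrementalEnsemble()
ens.partial_fit(np.array([[0., 0.], [0.1, 0.], [5., 5.], [5.1, 5.]]), np.array([0, 0, 1, 1]))
ens.partial_fit(np.array([[0., 0.2], [5., 4.9]]), np.array([0, 1]))
pred = ens.predict(np.array([[0.05, 0.05], [5.05, 5.0]]))
```

New event classes are accommodated simply by feeding a batch that contains them; the real Learn++ algorithm additionally weights each learner's vote by its training error.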
An Authentication Mechanism Using Chinese Remainder Theorem for Efficient Surveillance Video Transmission
Tony Thomas, S. Emmanuel, Peng Zhang, M. Kankanhalli
DOI: 10.1109/AVSS.2010.13 (published 2010-08-29)
Abstract: Nowadays, surveillance cameras are widely deployed in various security applications. In many surveillance applications, the background changes very slowly and the foreground objects occupy only a relatively small portion of a video frame. In these types of applications, an efficient solution for transmission over bandwidth-limited networks is to send only the foreground objects of every frame in real time, while the background is sent occasionally. At the receiving end, the objects and the most recent background can be fused together and the original frame reconstructed. However, protecting the authenticity of the video becomes more challenging in this case, as a malicious entity can modify, replace, or remove the individual foreground objects and background in the video. In this paper, we propose a Chinese remainder theorem based watermarking mechanism for protecting the authenticity of videos transmitted or stored as objects and background. Our mechanism ensures the authenticity between video objects and their associated background.
Citations: 4
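The watermarking scheme above builds on the Chinese Remainder Theorem, which reconstructs a value from its residues modulo pairwise-coprime numbers. A small sketch of the underlying reconstruction (not the paper's actual embedding scheme):

```python
from math import prod

def crt(residues, moduli):
    """Reconstruct x (mod prod(moduli)) from x mod m_i, for pairwise-coprime moduli."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        # Modular inverse of Mi mod m (three-argument pow, Python 3.8+).
        x += r * Mi * pow(Mi, -1, m)
    return x % M

# Example: recover 23 from its residues mod 3, 5, 7.
assert crt([23 % 3, 23 % 5, 23 % 7], [3, 5, 7]) == 23
```

In a CRT-based watermark, the shares distributed across objects and background play the role of the residues: all parts must be present and unmodified for the embedded value to reconstruct correctly.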
Example-Based Color Vehicle Retrieval for Surveillance
L. Brown
DOI: 10.1109/AVSS.2010.59 (published 2010-08-29)
Abstract: In this paper, we evaluate several low-dimensional color features for object retrieval in surveillance video. Previous work on object retrieval in surveillance has been hampered by low resolution, poor segmentation, pose and lighting variations, and the cost of retrieval. To overcome these difficulties, we restrict our analysis to alarm-based vehicle detection and, as a consequence, restrict both pose and lighting variations. In addition, we study the utility of example-based retrieval to avoid the limitations of strict color classification. Finally, since we perform our evaluation at run time for alarm-based detection, we do not need to index into a large database. We evaluate the efficiency and effectiveness of several color features, including standard color histograms, weighted color histograms, variable-bin-size color histograms, and color correlograms. Results show the color correlogram to have the best performance for our datasets.
Citations: 22
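Of the features compared above, the simplest is the standard color histogram. A minimal sketch of a quantized joint RGB histogram (the bin count and L1 normalization are illustrative choices, not the paper's settings); a correlogram would additionally record how often color pairs co-occur at given spatial distances:

```python
import numpy as np

def color_histogram(img, bins=8):
    """L1-normalized joint RGB histogram; img is an HxWx3 uint8 array."""
    q = (img.astype(np.int32) * bins) // 256            # quantize each channel to `bins` levels
    idx = (q[..., 0] * bins + q[..., 1]) * bins + q[..., 2]   # flat bin index in [0, bins^3)
    hist = np.bincount(idx.ravel(), minlength=bins ** 3).astype(float)
    return hist / hist.sum()

# Toy query-by-example: compare a query crop to candidates by histogram distance.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
h = color_histogram(img)
```

Example-based retrieval then ranks detections by a distance (e.g. L1) between the query's histogram and each candidate's, sidestepping hard color-class boundaries.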
Exploiting Geometric Restrictions in a PTZ Camera for Finding Point-Correspondences Between Configurations
Birgi Tamersoy, J. Aggarwal
DOI: 10.1109/AVSS.2010.53 (published 2010-08-29)
Abstract: A pan-tilt-zoom (PTZ) camera, fixed in location, may perform only rotational movements. There is a class of feature-based self-calibration approaches that exploit the restrictions on the camera motion in order to obtain accurate point-correspondences between two configurations of a PTZ camera. Most of these approaches require extensive computation and yet do not guarantee a satisfactory result. In this paper, we approach the problem from a different perspective: we exploit the geometric restrictions on the image planes that are imposed by the motion restrictions on the camera. We present a simple method for estimating the camera focal length and finding the point-correspondences between two camera configurations. We compute pan-only, tilt-only, and zoom-only correspondences and then combine the three to derive the geometric relationship between any two camera configurations. We perform radial lens distortion estimation in order to calibrate distorted image coordinates. Our purely geometric approach does not require any intensive computation, feature tracking, or training. Nevertheless, our point-correspondence experiments show that it still performs well enough for most computer vision applications of PTZ cameras.
Citations: 0
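For a camera that only rotates about its optical center, image points in two configurations are related by the homography H = K R K^-1; this is the kind of geometric restriction the paper exploits. A sketch of the pan-only case, assuming a simplified calibration matrix with focal length f and the principal point at the origin (the paper's full method also handles tilt, zoom, and radial distortion):

```python
import numpy as np

def rotation_homography(f, pan_deg):
    """Homography mapping image points between two configurations of a
    pan-only camera: pure rotation about the optical center gives H = K R K^-1."""
    K = np.array([[f, 0, 0],
                  [0, f, 0],
                  [0, 0, 1.0]])
    a = np.radians(pan_deg)
    R = np.array([[np.cos(a), 0, np.sin(a)],     # rotation about the vertical axis
                  [0,         1, 0        ],
                  [-np.sin(a), 0, np.cos(a)]])
    return K @ R @ np.linalg.inv(K)

# Map a point from one configuration to another, 5 degrees of pan apart.
H = rotation_homography(f=1000.0, pan_deg=5.0)
p = np.array([100.0, 50.0, 1.0])   # homogeneous image coordinates
q = H @ p
q = q / q[2]                       # back to inhomogeneous coordinates
```

Composing pan-only, tilt-only, and zoom-only homographies in this way links any two camera configurations without feature tracking.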
Dynamic Sensor Selection for Single Target Tracking in Large Video Surveillance Networks
Eduardo Monari, K. Kroschel
DOI: 10.1109/AVSS.2010.22 (published 2010-08-29)
Abstract: In this paper, an approach for dynamic camera selection in large video-based sensor networks for multi-camera object tracking is presented. The sensor selection approach is based on computational geometry algorithms and is able to determine task-relevant cameras (a camera cluster) by evaluating geometric attributes, given the last observed object position, the sensor configurations, and a building map. A particular goal of the algorithm is the efficient determination of the minimum number of sensors needed to relocate an object, even if the object is temporarily out of sight. The approach is applicable in camera networks with overlapping and non-overlapping fields of view, as well as with static and non-static sensors.
Citations: 6
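A much-simplified sketch of geometric camera selection: keep the cameras whose (assumed conical) field of view contains the last observed position, ordered by distance. The tuple layout, the range gate, and the conical FOV model are illustrative assumptions; the paper's algorithm additionally uses a building map and computes a minimal sensor cluster:

```python
import numpy as np

def select_cameras(target, cams, max_range):
    """Return indices of cameras that can plausibly see `target`, nearest first.
    cams: list of (position, viewing_direction, half_angle_rad) tuples."""
    hits = []
    for i, (pos, d, half) in enumerate(cams):
        v = np.asarray(target, float) - np.asarray(pos, float)
        dist = np.linalg.norm(v)
        if dist == 0 or dist > max_range:
            continue
        # Inside the FOV cone iff the angle to the viewing direction <= half-angle.
        cos_angle = np.dot(v / dist, np.asarray(d, float) / np.linalg.norm(d))
        if cos_angle >= np.cos(half):
            hits.append((dist, i))
    return [i for _, i in sorted(hits)]

# Two cameras facing each other along the x-axis, 45-degree half-angle FOV.
cams = [((0., 0.), (1., 0.), np.pi / 4),
        ((10., 0.), (-1., 0.), np.pi / 4)]
cluster = select_cameras((3., 0.), cams, max_range=20.0)
```

When the target is temporarily out of sight, the same test applied to a predicted position region yields the candidate cluster for re-acquisition.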
Action Recognition Using Sparse Representation on Covariance Manifolds of Optical Flow
Kai Guo, P. Ishwar, J. Konrad
DOI: 10.1109/AVSS.2010.71 (published 2010-08-29)
Abstract: A novel approach to action recognition in video based on the analysis of optical flow is presented. Properties of optical flow useful for action recognition are captured using only the empirical covariance matrix of a bag of features such as flow velocity, gradient, and divergence. The feature covariance matrix is a low-dimensional representation of video dynamics that belongs to a Riemannian manifold. The Riemannian manifold of covariance matrices is transformed into the vector space of symmetric matrices under the matrix logarithm mapping. The log-covariance matrix of a test action segment is approximated by a sparse linear combination of the log-covariance matrices of training action segments using a linear program, and the coefficients of the sparse linear representation are used to recognize actions. This approach, based on the unique blend of a log-covariance descriptor and a sparse linear representation, is tested on the Weizmann and KTH datasets. The proposed approach attains leave-one-out cross-validation scores of 94.4% correct classification rate on the Weizmann dataset and 98.5% on the KTH dataset. Furthermore, the method is computationally efficient and easy to implement.
Citations: 176
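The descriptor above maps an empirical covariance matrix, which lives on a Riemannian manifold, into a flat vector space via the matrix logarithm. A sketch of that mapping, computing log(C) through the eigendecomposition of the (symmetric positive-definite) covariance; the feature matrix here is random stand-in data, not actual optical-flow features:

```python
import numpy as np

def log_covariance(features, eps=1e-6):
    """Map a bag of per-pixel features (N x d) to the vector space of
    symmetric matrices via the matrix logarithm of their covariance."""
    C = np.cov(features, rowvar=False) + eps * np.eye(features.shape[1])
    w, V = np.linalg.eigh(C)          # SPD matrix: real eigendecomposition
    return (V * np.log(w)) @ V.T      # log(C) = V diag(log w) V^T

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5))         # stand-in for flow velocity/gradient/divergence features
L = log_covariance(X)
```

Once in this vector space, ordinary linear algebra applies, which is what lets the paper pose recognition as a sparse linear combination of training descriptors solved by a linear program.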
Fast People Counting Using Head Detection from Skeleton Graph
D. Merad, Kheir-Eddine Aziz, Nicolas Thome
DOI: 10.1109/AVSS.2010.91 (published 2010-08-29)
Abstract: In this paper, we present a new method for counting people. The method is based on head detection after segmentation of the human body by a skeleton graph process. The skeleton silhouette is computed and decomposed into a set of segments corresponding to the head, torso, and limbs. This structure captures the minimal information about the skeleton shape. No assumption is made about the viewpoint; this is handled by the head pose process. Several results demonstrate the efficiency of the labelling process, particularly its structural properties for detecting heads within a crowd. The proposed method has been tested in an experiment counting the number of pedestrians passing through a specific area.
Citations: 50
Video Activity Extraction and Reporting with Incremental Unsupervised Learning
J. L. Patino, F. Brémond, M. Evans, Ali Shahrokni, J. Ferryman
DOI: 10.1109/AVSS.2010.74 (published 2010-08-29)
Abstract: This work presents a new method for activity extraction and reporting from video based on the aggregation of fuzzy relations. Trajectory clustering is first employed, mainly to discover the points of entry and exit of mobile objects appearing in the scene. In a second step, proximity relations between the resulting clusters of detected mobile objects and contextual elements of the scene are modeled with fuzzy relations, which can then be aggregated using typical soft-computing algebra. A clustering algorithm based on the transitive closure of the fuzzy relations builds the structure of the scene and characterises its different ongoing activities. Discovered activity zones can be reported as activity maps at different granularities thanks to the analysis of the transitive closure matrix. Taking advantage of the soft-relation properties, activity zones and related activities can be labeled in a more human-like language. We present results obtained on real videos of apron monitoring at Toulouse airport in France.
Citations: 12
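The clustering step above relies on the transitive closure of a fuzzy relation. A sketch of the standard max-min transitive closure, iterating R ← max(R, R∘R) to a fixed point, where (R∘R)ij = max_k min(Rik, Rkj) (the iterative formulation is one common choice; the paper's exact algebra may differ):

```python
import numpy as np

def transitive_closure(R, tol=1e-9):
    """Max-min transitive closure of a fuzzy relation R (n x n, values in [0, 1])."""
    R = np.asarray(R, dtype=float)
    while True:
        # Max-min composition: comp[i, j] = max over k of min(R[i, k], R[k, j]).
        comp = np.max(np.minimum(R[:, :, None], R[None, :, :]), axis=1)
        new = np.maximum(R, comp)
        if np.all(np.abs(new - R) < tol):   # fixed point reached
            return new
        R = new

# Zones 0 and 2 are only indirectly related through zone 1; the closure
# propagates that link with strength min(0.8, 0.6) = 0.6.
R = np.array([[1.0, 0.8, 0.0],
              [0.8, 1.0, 0.6],
              [0.0, 0.6, 1.0]])
C = transitive_closure(R)
```

Thresholding the closure matrix at different levels then yields the scene's activity zones at different granularities.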
Tracking People with a 360-Degree Lidar
J. Shackleton, B. V. Voorst, Joel A. Hesch
DOI: 10.1109/AVSS.2010.52 (published 2010-08-29)
Abstract: Advances in lidar technology, in particular 360-degree lidar sensors, create new opportunities to augment and improve traditional surveillance systems. This paper describes an initial challenge: using a single stationary 360-degree lidar sensor to detect and track people moving throughout a scene in real time. The approach focuses on overcoming three primary challenges inherent in any lidar tracker: classification and matching errors between multiple human targets, segmentation errors between humans and fixed objects in the scene, and segmentation errors between targets that are very close together.
Citations: 70
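A recurring building block in any lidar (or video) tracker is associating new detections with existing tracks. A simplified greedy nearest-neighbour sketch (the abstract does not specify the paper's matching strategy, and the gate threshold here is an illustrative assumption):

```python
import numpy as np

def associate(tracks, detections, gate=1.0):
    """Greedily match detections to existing tracks by ascending distance.
    Pairs farther apart than `gate` stay unmatched (new tracks / lost tracks)."""
    pairs, used_t, used_d = [], set(), set()
    if len(tracks) and len(detections):
        T = np.asarray(tracks, dtype=float)
        D = np.asarray(detections, dtype=float)
        cost = np.linalg.norm(T[:, None, :] - D[None, :, :], axis=2)
        for t, d in sorted(((i, j) for i in range(len(T)) for j in range(len(D))),
                           key=lambda ij: cost[ij]):
            if t not in used_t and d not in used_d and cost[t, d] <= gate:
                pairs.append((t, d))
                used_t.add(t)
                used_d.add(d)
    return pairs

# Two tracked people, two new lidar detections slightly displaced from them.
pairs = associate([(0., 0.), (5., 5.)], [(5.1, 5.), (0.2, 0.)], gate=1.0)
```

The paper's hard cases (targets very close together) are exactly where such greedy gating fails, which motivates the more careful classification and segmentation it describes.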
Recognizing and Localizing Individual Activities through Graph Matching
Anh-Phuong Ta, Christian Wolf, G. Lavoué, A. Baskurt
DOI: 10.1109/AVSS.2010.81 (published 2010-08-29)
Abstract: In this paper, we tackle the problem of detecting individual human actions in video sequences. While the most successful methods are based on local features, which have proved able to cope with changes in background, scale, and illumination, most existing methods have two main shortcomings. First, they rely mainly on the individual power of spatio-temporal interest points (STIPs) and therefore ignore the spatio-temporal relationships between them. Second, they focus mainly on direct classification techniques for human activities, as opposed to detection and localization. To overcome these limitations, we propose a new approach based on a graph matching algorithm for activity recognition. In contrast to most previous methods, which classify entire video sequences, we design a video matching method operating on two sets of spatio-temporal points. First, points are extracted and hypergraphs are constructed from them, i.e. graphs with edges involving more than two nodes (three in our case). The activity recognition problem is then transformed into a problem of finding instances of model graphs in the scene graph. By matching local features instead of classifying entire sequences, our method is able to detect multiple different activities that occur simultaneously in a video sequence. Experiments on two standard datasets demonstrate that our method is comparable to existing classification techniques and that it can, additionally, detect and localize activities.
Citations: 32