Online Learning of Activities from Video

J. L. Patino, F. Brémond, M. Thonnat
DOI: 10.1109/AVSS.2012.50
2012 IEEE Ninth International Conference on Advanced Video and Signal-Based Surveillance, 18 September 2012
Citations: 7

Abstract

This work introduces a new method for extracting activities from video. We focus on modelling context with an algorithm that automatically learns the main activity zones of the observed scene, taking the trajectories of detected mobile objects as input. Automatically learning the scene context (its activity zones) first yields knowledge about the occupancy of the different areas of the scene. In a second step, the learned zones are used to extract people's activities by relating mobile trajectories to those zones; the activity of a person can then be summarised as the series of zones the person has visited. For trajectory analysis, a multiresolution scheme segments each trajectory into a series of tracklets at points where the speed changes, which makes it possible to distinguish when people stop to interact with elements of the scene or with other persons. Tracklets thus carry behavioural information. The start and end points of the tracklets are fed to a simple yet effective incremental clustering algorithm that creates an initial partition of the scene. Similarity relations between the resulting clusters are modelled as fuzzy relations, which can be aggregated using standard soft-computing algebra. A clustering algorithm based on computing the transitive closure of these fuzzy relations builds the final structure of the scene. To allow incremental learning and updating of the activity zones (and hence of people's activities), the fuzzy relations are defined with online learning terms. We present results obtained on real videos from different activity domains.
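The tracklet step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the point format `(x, y, t)`, and the relative speed-change threshold are all assumptions made for the example.

```python
# Hypothetical sketch: splitting a trajectory into tracklets at
# speed-change points, as the abstract describes. The threshold and
# data format are illustrative assumptions.

def segment_tracklets(points, speed_change_threshold=0.5):
    """Split a trajectory (list of (x, y, t) samples) into tracklets
    wherever the speed between consecutive samples changes by more
    than the given relative threshold."""
    if len(points) < 3:
        return [points]

    def speed(p, q):
        (x1, y1, t1), (x2, y2, t2) = p, q
        dt = t2 - t1
        return ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5 / dt if dt else 0.0

    tracklets, current = [], [points[0], points[1]]
    prev_speed = speed(points[0], points[1])
    for p, q in zip(points[1:], points[2:]):
        s = speed(p, q)
        # A large relative change in speed (e.g. a person stopping to
        # interact) starts a new tracklet at this point.
        base = max(prev_speed, s, 1e-9)
        if abs(s - prev_speed) / base > speed_change_threshold:
            tracklets.append(current)
            current = [p]
        current.append(q)
        prev_speed = s
    tracklets.append(current)
    return tracklets
```

A trajectory of a person walking and then standing still would be cut into two tracklets at the stopping point, which is exactly the behavioural cue the paper exploits.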
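The initial partition of the scene from tracklet endpoints could look like the following single-pass ("leader"-style) incremental clustering. This is a sketch under assumptions: the paper does not specify this exact algorithm, and the `radius` parameter and running-mean centre update are illustrative choices.

```python
# Hypothetical sketch of a simple incremental clustering of tracklet
# start/end points, in the spirit of the abstract's initial scene
# partition. The distance threshold is an illustrative assumption.

def incremental_cluster(points, radius=2.0):
    """Assign each 2-D point to the nearest existing cluster centre if
    it lies within `radius`; otherwise open a new cluster. Centres are
    updated as running means, so the partition is built in one pass
    and can absorb new points online."""
    centres, counts, labels = [], [], []
    for (x, y) in points:
        best, best_d = None, radius
        for i, (cx, cy) in enumerate(centres):
            d = ((x - cx) ** 2 + (y - cy) ** 2) ** 0.5
            if d <= best_d:
                best, best_d = i, d
        if best is None:
            centres.append((x, y))
            counts.append(1)
            labels.append(len(centres) - 1)
        else:
            n = counts[best] + 1
            cx, cy = centres[best]
            centres[best] = (cx + (x - cx) / n, cy + (y - cy) / n)
            counts[best] = n
            labels.append(best)
    return labels, centres
```

A one-pass scheme like this is attractive for surveillance video because it never needs to revisit old trajectories, which matches the online-learning emphasis of the paper.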
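The final grouping of clusters into activity zones via the transitive closure of a fuzzy relation can be illustrated with the standard max-min closure followed by an alpha-cut. The matrix values and the cut level `alpha` here are assumptions for the example; the paper's exact aggregation operators and online-learning terms may differ.

```python
# Hypothetical sketch of max-min transitive closure of a fuzzy
# similarity relation, plus an alpha-cut to read off zone clusters.

def transitive_closure(R):
    """Compute the max-min transitive closure of a fuzzy relation R
    (square list of lists in [0, 1]) by repeated composition until a
    fixpoint is reached."""
    n = len(R)
    R = [row[:] for row in R]
    while True:
        # max-min composition: (R o R)[i][j] = max_k min(R[i][k], R[k][j])
        comp = [[max(min(R[i][k], R[k][j]) for k in range(n))
                 for j in range(n)] for i in range(n)]
        # keep the stronger of the direct and the composed link
        nxt = [[max(R[i][j], comp[i][j]) for j in range(n)] for i in range(n)]
        if nxt == R:
            return nxt
        R = nxt

def alpha_cut_clusters(R, alpha):
    """Group indices whose closed similarity is at least alpha; the
    closure makes this an equivalence relation at any cut level."""
    n = len(R)
    labels = list(range(n))
    for i in range(n):
        for j in range(n):
            if R[i][j] >= alpha:
                a, b = labels[i], labels[j]
                for k in range(n):
                    if labels[k] == b:
                        labels[k] = a
    clusters = {}
    for i, lab in enumerate(labels):
        clusters.setdefault(lab, []).append(i)
    return list(clusters.values())
```

Because the closure is min-transitive, cutting it at any level `alpha` yields a proper partition of the initial clusters, which is what makes this construction convenient for defining zones.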