Rule-based real-time detection of context-independent events in video shots

Aishy Amer, Eric Dubois, Amar Mitiche
{"title":"Rule-based real-time detection of context-independent events in video shots","authors":"Aishy Amer ,&nbsp;Eric Dubois ,&nbsp;Amar Mitiche","doi":"10.1016/j.rti.2004.12.001","DOIUrl":null,"url":null,"abstract":"<div><p>The purpose of this paper is to investigate a real-time system to detect context-independent events in video shots. We test the system in video surveillance environments with a fixed camera. We assume that objects have been segmented (not necessarily perfectly) and reason with their low-level features, such as shape, and mid-level features, such as trajectory, to infer events related to moving objects.</p><p>Our goal is to detect generic events, i.e., events that are independent of the context of where or how they occur. Events are detected based on a formal definition of these and on approximate but efficient world models. This is done by continually monitoring changes and behavior of features of video objects. When certain conditions are met, events are detected. We classify events into four types: primitive, action, interaction, and composite.</p><p>Our system includes three interacting video processing layers: <em>enhancement</em><span> to estimate and reduce additive noise, </span><em>analysis</em> to segment and track video objects, and <em>interpretation</em> to detect context-independent events. The contributions in this paper are the interpretation of spatio-temporal object features to detect context-independent events in real time, the adaptation to noise, and the correction and compensation of low-level processing errors at higher layers where more information is available.</p><p>The effectiveness and real-time response of our system are demonstrated by extensive experimentation on indoor and outdoor video shots in the presence of multi-object occlusion, different noise levels, and coding artifacts.</p></div>","PeriodicalId":101062,"journal":{"name":"Real-Time Imaging","volume":"11 3","pages":"Pages 244-256"},"PeriodicalIF":0.0000,"publicationDate":"2005-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.rti.2004.12.001","citationCount":"13","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Real-Time Imaging","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1077201405000021","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 13

Abstract

The purpose of this paper is to investigate a real-time system to detect context-independent events in video shots. We test the system in video surveillance environments with a fixed camera. We assume that objects have been segmented (not necessarily perfectly) and reason with their low-level features, such as shape, and mid-level features, such as trajectory, to infer events related to moving objects.
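To make the feature assumption concrete, the sketch below shows one possible per-object record combining a low-level shape feature and a mid-level trajectory feature; the class and field names (ObjectState, centroid, area, speed) are illustrative assumptions, not the paper's actual representation.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class ObjectState:
    """Per-frame record of one tracked, possibly imperfectly segmented object."""
    obj_id: int                      # track identifier assigned by the analysis layer
    frame: int                       # frame index
    centroid: Tuple[float, float]    # (x, y) position in the image
    area: float                      # low-level shape feature: mask size in pixels
    speed: float                     # mid-level feature: displacement per frame along the trajectory
```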

Our goal is to detect generic events, i.e., events that are independent of the context of where or how they occur. Events are detected based on a formal definition of these events and on approximate but efficient world models. This is done by continually monitoring changes in the features and behavior of video objects; when certain conditions are met, an event is detected. We classify events into four types: primitive, action, interaction, and composite.
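The abstract does not list the rules themselves; as a minimal sketch of condition-based detection of primitive events from an object's feature history, assuming hypothetical rule names and thresholds (move_thresh, stop_frames are not from the paper):

```python
from typing import List

def detect_primitive_events(speeds: List[float],
                            move_thresh: float = 1.0,
                            stop_frames: int = 10) -> List[str]:
    """Fire primitive events for one object from its recent per-frame speeds
    (illustrative rules and thresholds only)."""
    events = []
    if not speeds:
        return events
    # Rule: "move" fires when the current speed exceeds a (noise-adapted) threshold.
    if speeds[-1] > move_thresh:
        events.append("move")
    # Rule: "stop" fires when the speed stayed below the threshold for several frames.
    elif len(speeds) >= stop_frames and all(s <= move_thresh for s in speeds[-stop_frames:]):
        events.append("stop")
    return events

# Interaction events would combine conditions across objects, e.g. two objects
# whose centroids remain within a distance threshold could yield a "meet" event;
# composite events would chain such detections over time.
```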

Our system includes three interacting video processing layers: enhancement to estimate and reduce additive noise, analysis to segment and track video objects, and interpretation to detect context-independent events. The contributions in this paper are the interpretation of spatio-temporal object features to detect context-independent events in real time, the adaptation to noise, and the correction and compensation of low-level processing errors at higher layers where more information is available.
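The three-layer structure can be read as a per-frame pipeline in which the noise estimate from enhancement informs analysis, and interpretation can reject low-level errors. The sketch below shows only this control flow under assumed function names; the actual filtering, segmentation, tracking, and event rules are omitted.

```python
import numpy as np

def enhancement(frame: np.ndarray):
    """Estimate additive noise and return a (notionally) denoised frame plus the estimate."""
    noise_sigma = float(np.std(frame)) * 0.05   # placeholder noise estimate
    return frame, noise_sigma                    # real filtering omitted in this sketch

def analysis(frame: np.ndarray, noise_sigma: float):
    """Segment and track video objects; thresholds would adapt to noise_sigma."""
    return []                                    # list of tracked object states (omitted)

def interpretation(objects, rules):
    """Apply event rules to object features; this layer can also correct or discard
    low-level errors, e.g. objects with physically implausible trajectories."""
    events = []
    for rule in rules:
        events.extend(rule(objects))
    return events

def process_frame(frame: np.ndarray, rules):
    """One pass of the three interacting layers for a single frame."""
    denoised, sigma = enhancement(frame)
    objects = analysis(denoised, sigma)
    return interpretation(objects, rules)
```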

The effectiveness and real-time response of our system are demonstrated by extensive experimentation on indoor and outdoor video shots in the presence of multi-object occlusion, different noise levels, and coding artifacts.
