State Summarization of Video Streams for Spatiotemporal Query Matching in Complex Event Processing

Piyush Yadav, D. Das, E. Curry
DOI: 10.1109/ICMLA.2019.00022
Venue: 2019 18th IEEE International Conference On Machine Learning And Applications (ICMLA)
Published: 2019-12-01
Citations: 5

Abstract

Modelling complex events in unstructured data like videos requires detecting not only objects but also the spatiotemporal relationships among them. Complex Event Processing (CEP) systems discretize continuous streams into fixed batches using windows and apply operators over these batches to detect patterns in real time. To this end, we apply CEP techniques over video streams to identify spatiotemporal patterns by capturing window state. This work introduces a novel problem where an input video stream is converted to a stream of graphs which are aggregated into a single graph over a given state. Incoming video frames are converted to a timestamped Video Event Knowledge Graph (VEKG) [1] that maps objects to nodes and captures spatiotemporal relationships among object nodes. Objects coexist across multiple frames, which leads to the creation of redundant nodes and edges at different time instances and results in high memory usage. There is a need for an expressive and storage-efficient graph model which can summarize graph streams in a single view. We propose the Event Aggregated Graph (EAG), a summarized graph representation of VEKG streams over a given state. EAG captures different spatiotemporal relationships among objects using an Event Adjacency Matrix without replicating the nodes and edges across time instances. This enables the CEP system to process multiple continuous queries and perform frequent spatiotemporal pattern-matching computations over a single summarized graph. Initial experiments show EAG takes 68.35% and 28.9% less space compared to the baseline and the state-of-the-art graph summarization method, respectively. EAG takes 5X less search time to detect patterns compared to the VEKG stream.
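The core idea of the abstract — one node per object and a per-pair record of timed relations instead of duplicated per-frame edges — can be illustrated with a minimal sketch. This is not the authors' implementation; the function name, the frame-graph tuple layout, and the relation labels below are hypothetical, and the Event Adjacency Matrix is approximated here by a dictionary keyed on object pairs:

```python
from collections import defaultdict

def aggregate_frames(frame_graphs):
    """Summarize a window of per-frame graphs (a VEKG-style stream) into one
    EAG-style view. Each frame is (timestamp, objects, relations), where
    relations is a list of (obj_a, relation, obj_b) triples."""
    nodes = set()
    # (obj_a, obj_b) -> [(relation, timestamp), ...]: the temporal history
    # lives on one aggregated edge instead of replicated per-frame edges.
    event_adj = defaultdict(list)
    for t, objects, relations in frame_graphs:
        nodes.update(objects)  # no duplicate nodes across frames
        for a, rel, b in relations:
            event_adj[(a, b)].append((rel, t))
    return nodes, dict(event_adj)

# Toy window of three frames in which a car and a person coexist.
window = [
    (0, {"car1", "person1"}, [("person1", "near", "car1")]),
    (1, {"car1", "person1"}, [("person1", "near", "car1")]),
    (2, {"car1", "person1"}, [("person1", "inside", "car1")]),
]
nodes, eag = aggregate_frames(window)
# Two unique nodes despite three frames; one edge entry holds the
# full (relation, timestamp) history for the pair.
print(nodes)
print(eag[("person1", "car1")])  # [('near', 0), ('near', 1), ('inside', 2)]
```

A pattern query over the window (e.g. "person near car, then inside it") can then scan the single aggregated edge history rather than re-walking every frame graph, which is the intuition behind the reported search-time reduction.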