VEKG: Video Event Knowledge Graph to Represent Video Streams for Complex Event Pattern Matching

Piyush Yadav, E. Curry
DOI: 10.1109/GC46384.2019.00011
Published in: 2019 First International Conference on Graph Computing (GC), September 2019
Citations: 15

Abstract

Complex Event Processing (CEP) is a paradigm for detecting event patterns over streaming data in a timely manner. Presently, CEP systems have inherent limitations in detecting event patterns over video streams due to their data complexity and the lack of a structured data model. Modelling complex events in unstructured data like video requires detecting not only objects but also the spatiotemporal relationships among them. This work introduces a novel video representation technique in which an input video stream is converted into a stream of graphs. We propose the Video Event Knowledge Graph (VEKG), a knowledge-graph-driven representation of video data. VEKG models video objects as nodes and their interactions as edges over time and space. It creates a semantic knowledge representation of video data derived from the detection of high-level semantic concepts in the video using an ensemble of deep learning models. To optimize run-time system performance, we introduce a graph aggregation method, VEKG-TAG, which provides an aggregated view of VEKG for a given time length. We define a set of operators using event rules, which can be used as queries over VEKG graphs to discover complex video patterns. The system achieves an F-Score between 0.75 and 0.86 for different patterns when queried over VEKG. In our experiments, pattern search over VEKG-TAG was 2.3X faster than the baseline.
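The core ideas in the abstract can be sketched in a few lines of Python: per-frame detections become graph nodes, pairwise spatial relations become edges, a VEKG-TAG-style aggregation unions the per-frame graphs over a time window, and an event rule is evaluated as a graph query. This is a minimal, hypothetical illustration of the concepts, not the paper's actual API: all names (`VEKGNode`, `frame_to_vekg`, the `"near"` relation, the distance threshold) are assumptions made for the sketch.

```python
from dataclasses import dataclass
import math

@dataclass(frozen=True)
class VEKGNode:
    obj_id: str   # tracked object identity across frames
    label: str    # detector class, e.g. "person", "car"
    x: float      # object centroid in frame coordinates
    y: float

def frame_to_vekg(detections, near_threshold=50.0):
    """One frame -> one graph: nodes are detected objects, and an
    edge (a, "near", b) links pairs whose centroids are close."""
    edges = set()
    for i, a in enumerate(detections):
        for b in detections[i + 1:]:
            if math.dist((a.x, a.y), (b.x, b.y)) <= near_threshold:
                edges.add((a.obj_id, "near", b.obj_id))
    return {"nodes": {d.obj_id: d for d in detections}, "edges": edges}

def aggregate_tag(frame_graphs):
    """VEKG-TAG-style aggregation: union the graphs of a time window,
    so a pattern query scans one graph instead of one per frame."""
    nodes, edges = {}, set()
    for g in frame_graphs:
        nodes.update(g["nodes"])
        edges |= g["edges"]
    return {"nodes": nodes, "edges": edges}

def match_pattern(graph, label_a, relation, label_b):
    """Toy event-rule query: object pairs with the given labels
    connected by the given (undirected) relation."""
    nodes, out = graph["nodes"], []
    for (u, rel, v) in graph["edges"]:
        if rel != relation:
            continue
        for s, t in ((u, v), (v, u)):
            if nodes[s].label == label_a and nodes[t].label == label_b:
                out.append((s, t))
    return out

# Two frames: the person is near the car in frame 1, far in frame 2.
f1 = frame_to_vekg([VEKGNode("p1", "person", 10, 10),
                    VEKGNode("c1", "car", 40, 10)])
f2 = frame_to_vekg([VEKGNode("p1", "person", 200, 10),
                    VEKGNode("c1", "car", 45, 10)])
tag = aggregate_tag([f1, f2])
print(match_pattern(tag, "person", "near", "car"))  # → [('p1', 'c1')]
```

Querying the aggregated graph finds the pattern once for the window, which is the intuition behind the reported speed-up: the matcher visits a single summarized graph rather than every per-frame graph.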