Knowledge Graph Driven Approach to Represent Video Streams for Spatiotemporal Event Pattern Matching in Complex Event Processing
arXiv - CS - Artificial Intelligence Pub Date : 2020-07-13 , DOI: arxiv-2007.06292
Piyush Yadav, Dhaval Salwala, Edward Curry

Complex Event Processing (CEP) is an event processing paradigm for performing real-time analytics over streaming data and matching high-level event patterns. At present, CEP is limited to processing structured data streams. Video streams, due to their unstructured data model, are difficult for CEP systems to match over. This work introduces a graph-based structure for continuously evolving video streams that enables a CEP system to query complex video event patterns. We propose the Video Event Knowledge Graph (VEKG), a graph-driven representation of video data. VEKG models video objects as nodes and their relational interactions over time and space as edges. It creates a semantic knowledge representation of video data, derived by detecting high-level semantic concepts in the video with an ensemble of deep learning models. A CEP-based state optimization, the VEKG-Time Aggregated Graph (VEKG-TAG), is proposed over the VEKG representation for faster event detection. VEKG-TAG is a spatiotemporal graph aggregation method that provides a summarized view of the VEKG graph over a given time length. We defined a set of nine event pattern rules for two domains (Activity Recognition and Traffic Management), which act as queries applied over VEKG graphs to discover complex event patterns. To show the efficacy of our approach, we performed extensive experiments over 801 video clips across 10 datasets. The proposed VEKG approach was compared with other state-of-the-art methods and was able to detect complex event patterns over videos with F-Scores ranging from 0.44 to 0.90. In the given experiments, the optimized VEKG-TAG reduced VEKG nodes and edges by 99% and 93%, respectively, with a 5.19X faster search time, achieving sub-second median latencies of 4-20 milliseconds.
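The core ideas above (objects as nodes, spatial relations as edges, and a time-aggregated summary graph queried by event pattern rules) can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the detections and relation labels below are hypothetical stand-ins for the output of the deep learning detector ensemble, and the "persistence" query is a toy example of an event pattern rule.

```python
from collections import defaultdict

# Hypothetical per-frame VEKG snapshots: detected objects (id -> class
# label) and pairwise spatial relations (src, dst, relation). In the
# paper these would come from deep learning object/relation detectors.
frames = [
    {"nodes": {1: "person", 2: "car"}, "edges": [(1, 2, "near")]},
    {"nodes": {1: "person", 2: "car"}, "edges": [(1, 2, "near")]},
    {"nodes": {1: "person", 2: "car", 3: "bicycle"},
     "edges": [(1, 2, "near"), (3, 2, "front")]},
]

def build_tag(frames):
    """Aggregate per-frame VEKG graphs over a time window into a single
    summarized graph: each distinct node/edge is stored once, annotated
    with the frame indices at which it appeared (the aggregation idea
    behind VEKG-TAG)."""
    node_times = defaultdict(list)   # (obj_id, label)       -> frames seen
    edge_times = defaultdict(list)   # (src, dst, relation)  -> frames seen
    for t, g in enumerate(frames):
        for obj_id, label in g["nodes"].items():
            node_times[(obj_id, label)].append(t)
        for src, dst, rel in g["edges"]:
            edge_times[(src, dst, rel)].append(t)
    return node_times, edge_times

def match_persistent(edge_times, relation, min_frames):
    """Toy event pattern rule: find relations that persist for at least
    `min_frames` frames within the aggregation window."""
    return [e for e, ts in edge_times.items()
            if e[2] == relation and len(ts) >= min_frames]

nodes, edges = build_tag(frames)
print(len(nodes))                          # 3 unique object nodes
print(match_persistent(edges, "near", 2))  # [(1, 2, 'near')]
```

Note how the aggregated graph holds 3 nodes and 2 edges instead of the 7 nodes and 4 edges of the per-frame graphs; scaled to long windows, this kind of deduplication is what yields the large node/edge reductions and faster pattern search reported for VEKG-TAG.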

Updated: 2020-07-14