VID-WIN: Fast Video Event Matching with Query-Aware Windowing at the Edge for the Internet of Multimedia Things
arXiv - CS - Distributed, Parallel, and Cluster Computing. Pub Date: 2021-04-27, DOI: arxiv-2105.02957
Piyush Yadav, Dhaval Salwala, Edward Curry

Efficient video processing is a critical component in many IoMT applications for detecting events of interest. To date, many window optimization techniques have been proposed for event processing, under the assumption that the incoming stream follows a structured data model. Videos are highly complex precisely because they lack any underlying structured data model. Video stream sources such as CCTV cameras and smartphones are resource-constrained edge nodes. At the same time, video content extraction is expensive and requires computationally intensive Deep Neural Network (DNN) models that are primarily deployed at high-end (or cloud) nodes. This paper presents VID-WIN, an adaptive two-stage allied windowing approach to accelerate video event analytics in an edge-cloud paradigm. VID-WIN runs in parallel across edge and cloud nodes and performs query- and resource-aware optimization for state-based complex event matching. VID-WIN exploits video content and DNN input knobs to accelerate the video inference process across nodes. The paper proposes a novel content-driven micro-batch resizing, query-aware caching, and micro-batch-based utility filtering strategy for video frames on resource-constrained edge nodes to improve overall system throughput, latency, and network usage. Extensive evaluations are performed over five real-world datasets. The experimental results show that VID-WIN video event matching achieves ~2.3X higher throughput with minimal latency and ~99% bandwidth reduction compared to other baselines, while maintaining query-level accuracy and resource bounds.
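The abstract describes content-driven micro-batch resizing and utility-based frame filtering at the edge, but does not give the concrete rules. The following is a minimal illustrative sketch of the general idea, not the paper's actual algorithm: all names (`Frame`, `resize_micro_batch`, `filter_by_utility`), the halving/doubling rule, and the 0.5 relevance cutoff are assumptions made here for illustration.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Frame:
    index: int
    content_score: float  # assumed query-relevance score, e.g. from a cheap edge detector

def resize_micro_batch(base_size: int, avg_score: float,
                       min_size: int = 2, max_size: int = 16) -> int:
    """Content-driven resizing (hypothetical rule): shrink the micro-batch
    when frames look query-relevant, so matches reach the cloud sooner;
    grow it when content is mostly irrelevant, to amortize transfer cost."""
    if avg_score >= 0.5:
        return max(min_size, base_size // 2)
    return min(max_size, base_size * 2)

def filter_by_utility(batch: List[Frame], threshold: float) -> List[Frame]:
    """Utility filtering (hypothetical rule): drop frames whose relevance
    score falls below a threshold before sending the batch uplink."""
    return [f for f in batch if f.content_score >= threshold]

if __name__ == "__main__":
    batch = [Frame(0, 0.9), Frame(1, 0.1), Frame(2, 0.6)]
    avg = sum(f.content_score for f in batch) / len(batch)
    next_size = resize_micro_batch(8, avg)
    kept = filter_by_utility(batch, threshold=0.5)
    print(next_size, [f.index for f in kept])
```

Filtering before transmission is one plausible way an edge node could trade a small amount of edge compute for large uplink bandwidth savings, consistent with the ~99% bandwidth reduction the abstract reports.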

Updated: 2021-05-10