VID-WIN: Fast Video Event Matching With Query-Aware Windowing at the Edge for the Internet of Multimedia Things
IEEE Internet of Things Journal ( IF 10.6 ) Pub Date : 2021-04-23 , DOI: 10.1109/jiot.2021.3075336
Piyush Yadav , Dhaval Salwala , Edward Curry

Efficient video processing is a critical component in many IoMT applications to detect events of interest. Many window optimization techniques have been proposed for event processing, but they rest on the assumption that the incoming stream follows a structured data model. Videos are highly complex precisely because they lack any underlying structured data model. Video stream sources, such as CCTV cameras and smartphones, are resource-constrained edge nodes. At the same time, video content extraction is expensive and requires computationally intensive deep neural network (DNN) models that are primarily deployed at high-end (or cloud) nodes. This article presents VID-WIN, an adaptive two-stage allied windowing approach to accelerate video event analytics in an edge-cloud paradigm. VID-WIN runs in parallel across edge and cloud nodes and performs query- and resource-aware optimization for state-based complex event matching. VID-WIN exploits the video content and DNN input knobs to accelerate the video inference process across nodes. This article proposes novel content-driven microbatch resizing, query-aware caching, and microbatch-based utility filtering strategies for video frames on resource-constrained edge nodes to improve the overall system throughput, latency, and network usage. Extensive evaluations are performed over five real-world data sets. The experimental results show that VID-WIN video event matching achieves ~2.3× higher throughput with minimal latency and ~99% bandwidth reduction compared to other baselines, while maintaining query-level accuracy and resource bounds.
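To make the microbatch-based utility filtering idea concrete, the sketch below is a minimal, hypothetical illustration (not the paper's implementation): incoming frames are grouped into fixed-size microbatches, each microbatch is assigned a simple utility score based on inter-frame change, and low-utility (near-static) microbatches are dropped at the edge before any expensive DNN inference or transfer to the cloud. The `Frame` type, the mean-absolute-difference utility measure, and the threshold value are all assumptions made for illustration.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Frame:
    index: int
    pixels: List[int]  # flattened grayscale values, stand-in for an image

def frame_diff(a: Frame, b: Frame) -> float:
    """Mean absolute pixel difference between two frames (toy utility signal)."""
    return sum(abs(x - y) for x, y in zip(a.pixels, b.pixels)) / len(a.pixels)

def utility_filter(frames: List[Frame], batch_size: int,
                   threshold: float) -> List[List[Frame]]:
    """Split the stream into microbatches and keep only those whose
    average inter-frame change exceeds the utility threshold."""
    kept = []
    for i in range(0, len(frames), batch_size):
        batch = frames[i:i + batch_size]
        diffs = [frame_diff(batch[j], batch[j + 1])
                 for j in range(len(batch) - 1)]
        utility = sum(diffs) / len(diffs) if diffs else 0.0
        if utility >= threshold:
            kept.append(batch)  # forward to the cloud-side event matcher
    return kept

# Static frames (low utility) followed by changing frames (high utility).
static = [Frame(i, [10] * 4) for i in range(4)]
moving = [Frame(i + 4, [10 + 20 * i] * 4) for i in range(4)]
batches = utility_filter(static + moving, batch_size=4, threshold=5.0)
print(len(batches))  # only the changing microbatch survives
```

In the actual system the utility signal would come from query-aware content analysis rather than raw pixel differences, but the control flow is the same: score per microbatch at the edge, transmit only what clears the bar, and thereby save both bandwidth and downstream DNN compute.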

Updated: 2021-06-25