Multi-view Video CODEC Using Compressive Sensing for Wireless Video Sensor Networks
International Journal of Mobile Communications (IF 0.7), Pub Date: 2019-01-01, DOI: 10.1504/ijmc.2019.10016171
V. Akshaya, S. Radha, Angayarkanni Veeraputhiran

In monitoring applications, different views need to be captured by multi-view video sensor nodes to understand the scene clearly. These multi-view sequences contain a large volume of redundant data, which affects the storage, transmission, bandwidth and lifetime of wireless video sensor nodes. A low-complexity coding technique is required to address these issues and to process multi-view sensor data. Hence, in this paper, a compressive sensing (CS)-based multi-view video codec framework using a frame approximation technique (CMVC-FAT) is proposed. Quantisation with entropy coding based on frame skipping is adopted to achieve efficient video compression. For better prediction of skipped frames at the receiver, a frame approximation technique (FAT) algorithm is proposed. Simulation results reveal that the CMVC-FAT framework outperforms the existing method, achieving an 86.5% reduction in time and bits. It also shows an 83.75% reduction in transmission energy compared with transmitting raw frames.
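Beyond naming the stages (CS acquisition, frame skipping, quantisation with entropy coding, and FAT-based prediction of skipped frames at the receiver), the abstract does not specify the encoder's internals. The sketch below only illustrates the general idea of CS-based frame acquisition with skipped-frame approximation, assuming block-based Gaussian measurements and a simple temporal-averaging stand-in for the FAT step; the block size, measurement rate, and all function names are hypothetical and are not taken from the paper.

```python
# Minimal sketch: block-based compressive sensing (CS) measurement of a video
# frame plus a naive approximation of a skipped frame from its neighbours.
# Assumptions (not from the paper): 16x16 blocks, 0.25 measurement rate,
# Gaussian measurement matrix, temporal averaging in place of the FAT step.
import numpy as np

BLOCK = 16    # block size in pixels (assumed)
RATE = 0.25   # measurement rate M/N (assumed)

rng = np.random.default_rng(0)
n = BLOCK * BLOCK
m = int(RATE * n)
phi = rng.standard_normal((m, n)) / np.sqrt(m)  # Gaussian measurement matrix

def cs_measure(frame):
    """Project each vectorized BLOCK x BLOCK block of a grayscale frame
    with the measurement matrix phi (y = phi @ x)."""
    h, w = frame.shape
    measurements = []
    for r in range(0, h - h % BLOCK, BLOCK):
        for c in range(0, w - w % BLOCK, BLOCK):
            x = frame[r:r + BLOCK, c:c + BLOCK].astype(np.float64).ravel()
            measurements.append(phi @ x)
    return np.asarray(measurements)

def approximate_skipped_frame(prev_frame, next_frame):
    """Hypothetical stand-in for the paper's FAT step: approximate a skipped
    frame by averaging its two temporal neighbours."""
    avg = (prev_frame.astype(np.float64) + next_frame.astype(np.float64)) / 2
    return avg.astype(np.uint8)

if __name__ == "__main__":
    # Two synthetic 64x64 frames stand in for consecutive captures of one view.
    f0 = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
    f2 = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
    y = cs_measure(f0)                       # measurements sent for frame 0
    f1_hat = approximate_skipped_frame(f0, f2)  # frame 1 is skipped, then approximated
    print("CS measurements per frame:", y.size, "vs raw pixels:", f0.size)
```

With these assumed parameters the encoder sends roughly a quarter of the raw pixel count per measured frame and nothing at all for skipped frames, which is the kind of saving the frame-skipping design targets; the paper's reported gains come from its own FAT prediction and entropy coding, not from this toy averaging.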
