Journal of the Franklin Institute (IF 3.7) · Pub Date: 2021-09-14 · DOI: 10.1016/j.jfranklin.2021.09.008 · Yang Zhang, Bin Gu
Because transmission errors are frequent in underwater acoustic channels, the real-time transmission of seafloor video poses severe challenges for underwater acoustic networks. In this work, we propose an error-resilient coding method based on convolutional neural networks and multiple descriptions to combat packet losses in underwater video transmission. By exploiting inter-frame motion information, our convolutional neural networks propagate the regions of interest, providing extra protection for multiple description coding. To achieve a good tradeoff between coding efficiency and error resiliency, video sequences are split into two kinds of descriptions that are encoded under a bit-rate constraint. Simulation experiments on underwater video datasets verify the effectiveness of our approach at different packet loss rates, compared with state-of-the-art video coding schemes.
Title (translated): Error-resilient coding with convolutional neural networks for underwater video transmission