Joint multi-domain feature learning for image steganalysis based on CNN
EURASIP Journal on Image and Video Processing (IF 2.4), Pub Date: 2020-07-08, DOI: 10.1186/s13640-020-00513-7
Ze Wang, Mingzhi Chen, Yu Yang, Min Lei, Zhexuan Dong

In recent years, researchers have made great progress in steganalysis based on convolutional neural networks (CNNs). However, prior work largely ignores the contributions of nonlinear residuals and joint-domain detection to steganalysis, and detecting adaptive steganographic algorithms at low embedding rates remains challenging. In this paper, we propose a CNN steganalysis model that uses a joint-domain detection mechanism and a nonlinear detection mechanism. For the nonlinear detection mechanism, building on the spatial rich model (SRM), we introduce a maximum/minimum nonlinear residual feature acquisition method into the model to adapt to the nonlinear distribution of steganographic information. For the joint-domain detection mechanism, we not only apply the high-pass filters from the SRM to obtain spatial residuals, but also apply the kernels from the discrete cosine transform residual (DCTR) feature set, so as to fully capture the traces that spatial-domain steganography leaves in the transform domain. We also apply a new transfer learning method to improve the model's performance: we initialize the model with low-embedding-rate steganographic samples, because we believe this makes the network more sensitive than initializing it with high-embedding-rate samples. The simulation results confirm this assumption. With these improvements combined, the model's detection accuracy on WOW and S-UNIWARD is higher than that of SRM+EC, Ye-Net, Xu-Net, Yedroudj-Net, and Zhu-Net, and about 4–6% higher than the best of these, Zhu-Net. These results can serve as a reference for steganalysis and image forensics tasks.
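To make the two mechanisms described above concrete, the following is a minimal, illustrative PyTorch sketch (not the authors' code): a fixed preprocessing front end that computes SRM-style linear directional residuals plus their element-wise minimum and maximum as nonlinear residuals (the spatial branch), and convolves the image with the 64 standard 8x8 DCT basis patterns used by DCTR features (the transform branch). The specific kernels, class name, and tensor shapes are assumptions chosen for illustration; the paper's actual filter bank and network architecture may differ.

```python
import math

import torch
import torch.nn as nn
import torch.nn.functional as F


def dct_basis_kernels():
    """64 fixed 8x8 DCT basis patterns, as used by DCTR-style features."""
    def w(k):
        return 1.0 / math.sqrt(2.0) if k == 0 else 1.0

    kernels = torch.zeros(64, 1, 8, 8)
    for k in range(8):
        for l in range(8):
            for m in range(8):
                for n in range(8):
                    kernels[8 * k + l, 0, m, n] = (
                        w(k) * w(l) / 4.0
                        * math.cos(math.pi * k * (2 * m + 1) / 16.0)
                        * math.cos(math.pi * l * (2 * n + 1) / 16.0)
                    )
    return kernels


class JointDomainResiduals(nn.Module):
    """Illustrative fixed front end: SRM-style linear + min/max residuals
    (spatial branch) and DCT-basis responses (transform branch)."""

    def __init__(self):
        super().__init__()
        # Four first-order directional high-pass kernels (->, <-, down, up);
        # a small illustrative subset of the SRM filter bank.
        srm = torch.tensor([
            [[0., 0., 0.], [0., -1., 1.], [0., 0., 0.]],
            [[0., 0., 0.], [1., -1., 0.], [0., 0., 0.]],
            [[0., 0., 0.], [0., -1., 0.], [0., 1., 0.]],
            [[0., 1., 0.], [0., -1., 0.], [0., 0., 0.]],
        ]).unsqueeze(1)                                     # (4, 1, 3, 3)
        # Registered as buffers: fixed, non-trainable preprocessing filters.
        self.register_buffer("srm", srm)
        self.register_buffer("dct", dct_basis_kernels())    # (64, 1, 8, 8)

    def forward(self, x):
        # x: (N, 1, H, W) grayscale image batch
        lin = F.conv2d(x, self.srm, padding=1)              # linear residuals (N, 4, H, W)
        r_min = lin.min(dim=1, keepdim=True).values         # nonlinear "min" residual
        r_max = lin.max(dim=1, keepdim=True).values         # nonlinear "max" residual
        spatial = torch.cat([lin, r_min, r_max], dim=1)     # (N, 6, H, W)
        transform = F.conv2d(x, self.dct, padding=4)        # (N, 64, H+1, W+1)
        return spatial, transform


if __name__ == "__main__":
    img = torch.rand(2, 1, 256, 256)
    spatial, transform = JointDomainResiduals()(img)
    print(spatial.shape, transform.shape)
```

In a CNN steganalysis pipeline of this kind, the two residual tensors would feed separate convolutional branches whose features are later fused; keeping the front-end filters fixed is a common way to make the network focus on the weak stego signal rather than image content.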
