Automatic semantic style transfer using deep convolutional neural networks and soft masks
The Visual Computer (IF 3.5), Pub Date: 2019-07-31, DOI: 10.1007/s00371-019-01726-2
Hui-Huang Zhao , Paul L. Rosin , Yu-Kun Lai , Yao-Nan Wang

This paper presents an automatic image synthesis method for transferring the style of an example image to a content image. When standard neural style transfer approaches are used, the textures and colours in different semantic regions of the style image are often applied inappropriately to the content image, ignoring its semantic layout and ruining the transfer result. To reduce or avoid such effects, we propose a novel method that automatically segments the objects and extracts their soft semantic masks from both the style and content images, so that the structure of the content image is preserved while the style is transferred. Each soft mask of the style image represents a specific part of the style image and corresponds to the soft mask of the content image with the same semantics. Both the soft masks and the source images are provided as multichannel input to an augmented deep CNN framework for style transfer, which incorporates a generative Markov random field model. Results on a variety of images show that our method outperforms the most recent techniques.
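To make the general idea concrete, the sketch below shows one way soft semantic masks could be appended as extra channels to CNN feature maps and then used in an MRF-style nearest-neighbour patch loss. This is a minimal illustrative PyTorch sketch, not the authors' implementation: the function names, the mask weighting factor, and the greedy cosine-similarity patch matching are assumptions introduced here for illustration.

```python
# Illustrative sketch (not the paper's code): augment feature maps with soft
# mask channels so style patches are matched within corresponding semantic
# regions, then score the match with a simplified MRF-style patch loss.

import torch
import torch.nn.functional as F

def augment_with_masks(features, soft_masks, mask_weight=10.0):
    """Concatenate resized soft masks to feature maps as extra channels.

    features:   (1, C, H, W) CNN activations of the content or style image.
    soft_masks: (1, K, Hm, Wm) per-region soft segmentation masks in [0, 1].
    """
    masks = F.interpolate(soft_masks, size=features.shape[-2:],
                          mode='bilinear', align_corners=False)
    return torch.cat([features, mask_weight * masks], dim=1)

def mrf_patch_loss(content_feat, style_feat, patch_size=3):
    """Greedy nearest-neighbour patch matching on mask-augmented features."""
    unfold = torch.nn.Unfold(kernel_size=patch_size, stride=1)
    cp = unfold(content_feat).squeeze(0).t()   # (N, D) content patches
    sp = unfold(style_feat).squeeze(0).t()     # (M, D) style patches
    # Cosine similarity between every content patch and every style patch.
    sim = F.normalize(cp, dim=1) @ F.normalize(sp, dim=1).t()
    nearest = sim.argmax(dim=1)                # best style patch per content patch
    # Penalise squared distance to the matched style patches.
    return ((cp - sp[nearest]) ** 2).mean()
```

In an optimisation-based transfer loop, such a loss would be evaluated on mask-augmented activations of the synthesised image and the style image and minimised together with a content term; the weighting of the mask channels controls how strongly matches are confined to regions of the same semantics.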
