EraseNet: End-to-End Text Removal in the Wild.
IEEE Transactions on Image Processing (IF 10.8), Pub Date: 2020-08-28, DOI: 10.1109/tip.2020.3018859
Chongyu Liu, Yuliang Liu, Lianwen Jin, Shuaitao Zhang, Canjie Luo, Yongpan Wang

Scene text removal has attracted increasing research interest owing to its valuable applications in privacy protection, camera-based virtual reality translation, and image editing. However, existing approaches fall short in real applications, mainly because they were evaluated on synthetic or unrepresentative datasets. To fill this gap and facilitate this research direction, this article proposes a real-world dataset called SCUT-EnsText that consists of 3,562 diverse images selected from public scene text reading benchmarks; each image is scrupulously annotated to provide visually plausible erasure targets. With SCUT-EnsText, we design a novel GAN-based model termed EraseNet that can automatically remove text from natural images. The model is a two-stage network consisting of a coarse-erasure sub-network and a refinement sub-network. The refinement sub-network improves the feature representation and refines the coarse outputs to enhance removal performance. Additionally, EraseNet contains a segmentation head for text perception and a local-global SN-Patch-GAN, with spectral normalization (SN) applied to both the generator and the discriminator, to maintain training stability and the congruity of the erased regions. Extensive experiments are conducted on both a previous public dataset and the brand-new SCUT-EnsText. EraseNet significantly outperforms existing state-of-the-art methods on all metrics, producing remarkably higher-quality results. The dataset and code will be made available at https://github.com/HCIILAB/SCUT-EnsText .
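To make the coarse-to-fine design described in the abstract concrete, below is a minimal PyTorch sketch of that kind of pipeline: a coarse-erasure stage with a segmentation head for text perception, a refinement stage, and a spectrally normalized patch discriminator. All module names, layer counts, and channel widths are illustrative assumptions rather than the authors' implementation, and for brevity SN is applied only to the discriminator here, whereas the paper applies it to both generator and discriminator; the released code at the repository above is authoritative.

```python
# Minimal sketch (assumptions, not the official EraseNet code): a two-stage
# coarse-to-fine erasure generator plus an SN-PatchGAN-style discriminator.
import torch
import torch.nn as nn

def sn_conv(in_ch, out_ch, k=4, s=2, p=1):
    """Convolution wrapped with spectral normalization (SN)."""
    return nn.utils.spectral_norm(nn.Conv2d(in_ch, out_ch, k, s, p))

class CoarseErasure(nn.Module):
    """Stage 1: predicts a coarse text-free image and a soft text-region mask."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),
        )
        # Segmentation head for text perception (hypothetical placement).
        self.seg_head = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 1, 4, 2, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        feat = self.encoder(x)
        return self.decoder(feat), self.seg_head(feat)

class Refinement(nn.Module):
    """Stage 2: refines the coarse output to improve the erased regions."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, 1, 1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 3, 3, 1, 1), nn.Tanh(),
        )

    def forward(self, coarse):
        return self.net(coarse)

class PatchDiscriminator(nn.Module):
    """SN-PatchGAN-style discriminator producing a map of per-patch scores.
    A 'local' copy can score erased regions and a 'global' copy the full image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            sn_conv(3, 64), nn.LeakyReLU(0.2, inplace=True),
            sn_conv(64, 128), nn.LeakyReLU(0.2, inplace=True),
            sn_conv(128, 1),
        )

    def forward(self, x):
        return self.net(x)

if __name__ == "__main__":
    img = torch.randn(1, 3, 256, 256)
    coarse_net, refine_net, disc = CoarseErasure(), Refinement(), PatchDiscriminator()
    coarse, mask = coarse_net(img)        # coarse result + text mask
    refined = refine_net(coarse)          # refined erasure result
    scores = disc(refined)                # per-patch real/fake scores
    print(coarse.shape, mask.shape, refined.shape, scores.shape)
```

In this sketch the adversarial loss would be computed on the patch-score map, and the segmentation head would be supervised with text-mask annotations so the generator explicitly perceives where text lies before erasing it.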

Updated: 2020-09-08