Online knowledge distillation network for single image dehazing
Scientific Reports (IF 3.8), Pub Date: 2022-09-02, DOI: 10.1038/s41598-022-19132-5
Yunwei Lan 1 , Zhigao Cui 1 , Yanzhao Su 1 , Nian Wang 1 , Aihua Li 1 , Wei Zhang 1 , Qinghui Li 1 , Xiao Zhong 1

Single image dehazing, a key prerequisite for high-level computer vision tasks, has attracted increasing attention. Traditional model-based methods recover haze-free images via the atmospheric scattering model; they achieve a favorable dehazing effect but suffer from artifacts, halos, and color distortion. By contrast, recent learning-based methods dehaze images in a model-free way and achieve better color fidelity, but they tend to produce under-dehazed results due to the lack of knowledge guidance. To combine these merits, we propose a novel online knowledge distillation network for single image dehazing, named OKDNet. Specifically, the proposed OKDNet first preprocesses hazy images and extracts abundant shared features with a multiscale network built from attention-guided residual dense blocks. These features are then sent to different branches to generate two preliminary dehazed images under supervised training: one branch recovers dehazed images via the atmospheric scattering model; the other directly establishes the mapping between hazy images and clear images, dehazing in a model-free way. To effectively fuse the useful information from these two branches and obtain a better dehazed result, we propose an efficient feature aggregation block consisting of multiple parallel convolutions with different receptive fields. Moreover, we adopt a one-stage knowledge distillation strategy, online knowledge distillation, to jointly optimize OKDNet. The proposed OKDNet achieves superior performance over state-of-the-art methods on both synthetic and real-world images, with fewer model parameters. Project website: https://github.com/lanyunwei/OKDNet.
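The model-based branch mentioned above relies on inverting the atmospheric scattering model, I(x) = J(x)·t(x) + A·(1 − t(x)), where I is the observed hazy image, J the haze-free scene radiance, t the transmission map, and A the global atmospheric light. The following NumPy sketch shows that inversion in isolation; the function name, the transmission floor, and the test values are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def invert_scattering_model(I, t, A, t_min=0.1):
    """Recover scene radiance J from hazy image I via
    J = (I - A) / t + A, the inverse of I = J*t + A*(1 - t).

    I : hazy image, float array in [0, 1], shape (H, W, 3)
    t : transmission map, shape (H, W, 1) or (H, W, 3)
    A : global atmospheric light, scalar or per-channel
    """
    # Floor the transmission so dense-haze regions (t -> 0)
    # do not blow up the division.
    t = np.maximum(t, t_min)
    return np.clip((I - A) / t + A, 0.0, 1.0)

# Round trip: synthesize a hazy image from known J, t, A,
# then recover J by inverting the model.
rng = np.random.default_rng(0)
J = rng.uniform(0.0, 1.0, size=(4, 4, 3))   # clean image
t = np.full((4, 4, 1), 0.5)                  # uniform transmission
A = 0.8                                      # atmospheric light
I = J * t + A * (1.0 - t)                    # forward scattering model
J_hat = invert_scattering_model(I, t, A)
```

Because t = 0.5 is above the floor, the inversion here is exact up to floating-point error; in a learned pipeline, t and A are instead estimated by the network, so the floor matters.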




Updated: 2022-09-02