Learning Deep Context-Sensitive Decomposition for Low-Light Image Enhancement
IEEE Transactions on Neural Networks and Learning Systems (IF 10.4). Pub Date: 2021-04-30, DOI: 10.1109/tnnls.2021.3071245
Long Ma, Risheng Liu, Jiaao Zhang, Xin Fan, Zhongxuan Luo

Enhancing the quality of low-light (LOL) images plays an important role in many image processing and multimedia applications. In recent years, a variety of deep learning techniques have been developed to address this challenging task. A typical framework estimates the illumination and reflectance simultaneously, but such methods disregard the scene-level contextual information encapsulated in feature spaces, causing many unfavorable outcomes, e.g., loss of detail, unsaturated colors, and artifacts. To address these issues, we develop a new context-sensitive decomposition network (CSDNet) architecture to exploit scene-level contextual dependencies across spatial scales. More concretely, we build a two-stream estimation mechanism comprising reflectance and illumination estimation networks. We design a novel context-sensitive decomposition connection that bridges the two streams by incorporating the physical principle. Spatially varying illumination guidance is further constructed to achieve the edge-aware smoothness property of the illumination component. According to different training patterns, we construct CSDNet (paired supervision) and a context-sensitive decomposition generative adversarial network (CSDGAN) (unpaired supervision) to fully evaluate our designed architecture. We test our method on seven benchmarks [including Massachusetts Institute of Technology (MIT)-Adobe FiveK, LOL, ExDark, and naturalness preserved enhancement (NPE)] through extensive analytical and evaluation experiments. Thanks to our context-sensitive decomposition connection, we achieve excellent enhancement results (with rich details, vivid colors, and little noise), demonstrating our superiority over existing state-of-the-art approaches. Finally, considering the practical need for high efficiency, we develop a lightweight CSDNet (named LiteCSDNet) by reducing the number of channels. Furthermore, by sharing an encoder between the two components, we obtain an even more lightweight version (SLiteCSDNet for short). SLiteCSDNet contains only 0.0301M parameters yet achieves almost the same performance as CSDNet. Code is available at https://github.com/KarelZhang/CSDNet-CSDGAN .
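For readers unfamiliar with the two-stream decomposition idea the abstract describes, the sketch below illustrates it under the standard Retinex assumption S = R ∘ L (an observed image S is the element-wise product of a reflectance component R and an illumination component L): two parallel estimation branches predict R and L, and their product is constrained to reconstruct the input. This is a minimal, hypothetical sketch; all class and parameter names here are assumptions of ours, not the authors' CSDNet. The official implementation is at the GitHub link above.

```python
# Minimal sketch of a two-stream Retinex-style decomposition, assuming S = R * L.
# Hypothetical names; NOT the authors' released CSDNet code.
import torch
import torch.nn as nn

class TwoStreamDecomposition(nn.Module):
    """Estimate reflectance R and illumination L from a low-light image S."""

    def __init__(self, channels=32):
        super().__init__()
        # Reflectance stream: 3-channel output in [0, 1] via sigmoid.
        self.reflectance = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 3, padding=1), nn.Sigmoid(),
        )
        # Illumination stream: single-channel, spatially varying map.
        self.illumination = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, s):
        r = self.reflectance(s)   # scene content, ideally lighting-invariant
        l = self.illumination(s)  # per-pixel illumination estimate
        # Retinex consistency: R * L should reconstruct the input S; enhancement
        # would then recombine R with an adjusted (brightened) L.
        recon = r * l             # (B, 1, H, W) broadcasts over the 3 channels
        return r, l, recon

if __name__ == "__main__":
    net = TwoStreamDecomposition()
    s = torch.rand(1, 3, 64, 64)          # dummy low-light input
    r, l, recon = net(s)
    print(r.shape, l.shape, recon.shape)  # sanity check of tensor shapes
```

In this toy version the two streams are fully independent; the paper's contribution is precisely the context-sensitive decomposition connection that couples them, and its lightweight variant shares a single encoder between the two branches to cut the parameter count.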

Updated: 2021-04-30