Memory-Efficient Hierarchical Neural Architecture Search for Image Restoration
International Journal of Computer Vision (IF 19.5), Pub Date: 2021-11-23, DOI: 10.1007/s11263-021-01537-w
Haokui Zhang 1, 2 , Ying Li 1 , Chengrong Gong 1 , Zongwen Bai 3 , Hao Chen 4 , Chunhua Shen 4

Recently, much attention has been paid to neural architecture search (NAS), which aims to outperform manually designed neural architectures on high-level vision recognition tasks. Inspired by this success, here we attempt to leverage NAS techniques to automatically design efficient network architectures for low-level image restoration tasks. In particular, we propose a memory-efficient hierarchical NAS (termed HiNAS) and apply it to two such tasks: image denoising and image super-resolution. HiNAS adopts gradient-based search strategies and builds a flexible hierarchical search space comprising an inner search space and an outer search space, which are responsible for designing cell architectures and deciding cell widths, respectively. For the inner search space, we propose a layer-wise architecture sharing strategy, yielding more flexible architectures and better performance. For the outer search space, we design a cell-sharing strategy that saves memory and considerably accelerates the search. The proposed HiNAS method is both memory- and computation-efficient: with a single GTX 1080 Ti GPU, it takes only about 1 hour to search for the denoising network on the BSD-500 dataset and 3.5 hours to search for the super-resolution structure on the DIV2K dataset. Experiments show that the architectures found by HiNAS have fewer parameters and enjoy faster inference, while achieving highly competitive performance compared with state-of-the-art methods. Code is available at: https://github.com/hkzhang91/HiNAS
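The "gradient-based search strategy" mentioned in the abstract is, in DARTS-style NAS, usually realized as a continuous relaxation: each edge of a cell computes a softmax-weighted mixture of all candidate operations, so the architecture weights can be trained by gradient descent, and the strongest operation is kept afterwards. The sketch below illustrates only this general mechanism, not HiNAS itself; the operation set and names (`CANDIDATE_OPS`, `mixed_op`, `derive_op`) are hypothetical stand-ins, with simple scalar functions replacing learned convolutions.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array of architecture weights."""
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical candidate operations for one edge of a searched cell.
# In a real search these would be conv / dilated-conv / skip ops, etc.
CANDIDATE_OPS = {
    "identity":      lambda x: x,
    "conv3x3_stub":  lambda x: 0.9 * x,  # stand-in for a learned 3x3 conv
    "dilated_stub":  lambda x: 0.8 * x,  # stand-in for a dilated conv
}

def mixed_op(x, alpha):
    """Continuous relaxation: softmax-weighted sum of all candidate ops.

    Because the output is differentiable in `alpha`, the architecture
    weights can be optimized jointly with the network weights by SGD.
    """
    w = softmax(alpha)
    return sum(wi * op(x) for wi, op in zip(w, CANDIDATE_OPS.values()))

def derive_op(alpha):
    """After the search, discretize: keep the op with the largest weight."""
    names = list(CANDIDATE_OPS)
    return names[int(np.argmax(alpha))]
```

For example, with architecture weights `alpha = [0.0, 5.0, 0.0]`, `derive_op` selects `"conv3x3_stub"`. HiNAS layers such a relaxation into a hierarchy (inner space for cell topology, outer space for cell widths), but that structure is beyond this minimal sketch.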




Updated: 2021-11-23