
Hardware-Centric AutoML for Mixed-Precision Quantization

International Journal of Computer Vision

Abstract

Model quantization is a widely used technique to compress and accelerate deep neural network (DNN) inference. Emerging DNN hardware accelerators have begun to support flexible bitwidths (1–8 bits) to further improve computation efficiency, which raises a great challenge: finding the optimal bitwidth for each layer requires domain experts to explore a vast design space that trades off accuracy, latency, energy, and model size, a process that is both time-consuming and usually sub-optimal. There are plenty of specialized hardware accelerators for neural networks, but little research has been done to design specialized neural networks optimized for a particular hardware accelerator. The latter is demanding, given that the design cycle of silicon is much longer than that of neural networks. Conventional quantization algorithms ignore the differences between hardware architectures and quantize all layers in a uniform way. In this paper, we introduce the Hardware-Aware Automated Quantization (HAQ) framework, which automatically determines the quantization policy and takes the hardware accelerator's feedback into the design loop. Rather than relying on proxy signals such as FLOPs and model size, we employ a hardware simulator to generate direct feedback signals for the RL agent. Compared with conventional methods, our framework is fully automated and can specialize the quantization policy for different neural network architectures and hardware architectures. Our framework reduces latency by 1.4–1.95 \(\times \) and energy consumption by 1.9 \(\times \) with negligible loss of accuracy compared with fixed-bitwidth (8-bit) quantization. It also reveals that the optimal policies on different hardware architectures (i.e., edge and cloud architectures) under different resource constraints (i.e., latency, energy, and model size) are drastically different. We interpret the implications of the different quantization policies, which offer insights for both neural network architecture design and hardware architecture design.
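To make the per-layer decision that HAQ searches over concrete, the following is a minimal sketch, not the authors' implementation, of k-bit symmetric linear quantization applied with a hypothetical per-layer bitwidth policy. The names quantize_linear and bit_policy are illustrative assumptions; the full HAQ pipeline additionally fine-tunes the quantized network and queries a hardware simulator for latency and energy feedback to the RL agent.

```python
# Minimal sketch (not the authors' implementation): k-bit symmetric linear
# quantization, the kind of per-layer operation whose bitwidth HAQ selects.
# quantize_linear and bit_policy are illustrative names, not from the paper.
import numpy as np

def quantize_linear(w: np.ndarray, k: int) -> np.ndarray:
    """Quantize a weight tensor to k bits and dequantize back to float."""
    if k >= 32:                        # treat >= 32 bits as full precision
        return w
    scale = np.abs(w).max() + 1e-12    # map the largest magnitude to the top level
    levels = max(2 ** (k - 1) - 1, 1)  # signed integer grid: [-levels, levels]
    q = np.round(w / scale * levels)   # snap to the integer grid
    return q / levels * scale          # simulated quantized weights in float

# A hypothetical mixed-precision policy for a 4-layer network.
bit_policy = [8, 4, 6, 8]
layers = [np.random.randn(64, 64) for _ in bit_policy]
quantized = [quantize_linear(w, k) for w, k in zip(layers, bit_policy)]
print([round(float(np.abs(w - q).mean()), 4) for w, q in zip(layers, quantized)])
```

As the printed mean absolute errors suggest, lower bitwidths introduce larger quantization error, which is why the per-layer bitwidth choice is a trade-off between accuracy and the hardware cost reported by the simulator.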



Acknowledgements

We thank NSF (Career Award #1943349), MIT-IBM Watson AI Lab, Samsung, SONY, Xilinx, TI, and AWS for supporting this research.

Author information


Corresponding author

Correspondence to Song Han.

Additional information

Communicated by Li Liu, Matti Pietikäinen, Jie Qin, Jie Chen, Wanli Ouyang, Luc Van Gool.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Wang, K., Liu, Z., Lin, Y. et al. Hardware-Centric AutoML for Mixed-Precision Quantization. Int J Comput Vis 128, 2035–2048 (2020). https://doi.org/10.1007/s11263-020-01339-6

