Fast exposure fusion of detail enhancement for brightest and darkest regions

Original article · The Visual Computer

Abstract

Multi-exposure fusion, which combines images of the same scene captured at different exposures, is the common approach to generating high dynamic range (HDR) images, but traditional multi-exposure fusion algorithms lose detail in the brightest and darkest regions of the scene. Many detail-enhancement-based exposure fusion algorithms have therefore been proposed to recover these details. However, such algorithms are inefficient because of the complexity of their detail-enhancement mechanisms, and most of them enhance all pixels rather than only the brightest and darkest pixels that actually need it, leading to over-enhancement. We propose a local detail-enhancement mechanism that enhances only the details of the brightest and darkest regions using fast local Laplacian filtering (FLLF). Extensive experiments show that the proposed algorithm is considerably more efficient than current detail-enhancement-based exposure fusion algorithms while preserving the brightest and darkest details of high dynamic range scenes well.
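As a rough illustration of the idea described in the abstract, below is a minimal Python sketch, not the authors' implementation: it uses OpenCV's Mertens fusion (Mertens et al. 2009) as the baseline fusion step and a Gaussian unsharp-mask detail boost as a crude stand-in for fast local Laplacian filtering; the luminance thresholds, mask feathering, gain, and file names are all illustrative assumptions.

```python
# Minimal sketch (NOT the paper's algorithm): fuse a multi-exposure stack with
# OpenCV's Mertens fusion, then boost detail only in the brightest and darkest
# regions. A Gaussian unsharp mask stands in for fast local Laplacian
# filtering (FLLF); thresholds and gain below are illustrative assumptions.
import cv2
import numpy as np

def fuse_with_local_detail_boost(images, dark_t=0.15, bright_t=0.85, gain=1.5):
    """images: list of same-size uint8 BGR exposures of the same scene."""
    # Baseline multi-exposure fusion (Mertens et al. 2009); output ~[0, 1].
    fused = cv2.createMergeMertens().process(images)
    fused = np.clip(fused, 0.0, 1.0).astype(np.float32)

    # Luminance mask selecting only the darkest and brightest pixels.
    lum = cv2.cvtColor(fused, cv2.COLOR_BGR2GRAY)
    mask = ((lum < dark_t) | (lum > bright_t)).astype(np.float32)
    # Feather the mask so boosted regions blend smoothly into the mid-tones.
    mask = cv2.GaussianBlur(mask, (0, 0), sigmaX=5)[..., None]

    # Detail layer = image minus its low-pass version (unsharp mask),
    # a crude substitute for the edge-aware FLLF detail layer.
    base = cv2.GaussianBlur(fused, (0, 0), sigmaX=3)
    detail = fused - base
    enhanced = np.clip(fused + gain * detail * mask, 0.0, 1.0)
    return (enhanced * 255).astype(np.uint8)

if __name__ == "__main__":
    # Hypothetical file names for an exposure stack of the same scene.
    stack = [cv2.imread(f) for f in ("under.jpg", "mid.jpg", "over.jpg")]
    cv2.imwrite("fused.jpg", fuse_with_local_detail_boost(stack))
```

Restricting the boost to a feathered luminance mask is what leaves mid-tone pixels untouched, which is the abstract's stated departure from schemes that enhance every pixel.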

Acknowledgements

This work is supported by the General Program of Natural Science Foundation of the Jiangsu Higher Education Institutions of China (No. 19KJB520007), the Project of High-level Talents Research Foundation of Jinling Institute of Technology (No. jit-b-201802), the Science and Education Integration Project of Jinling Institute of Technology (No. 2020KJRH28) and the Shandong Provincial Natural Science Foundation (No. ZR2019PF023).

Funding

This study was funded by the Project of High-level Talents Research Foundation of Jinling Institute of Technology (No. jit-b-201802) and the General Program of Natural Science Foundation of the Jiangsu Higher Education Institutions of China (No. 19KJB520007).

Author information

Correspondence to Chunmeng Wang.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Below is the link to the electronic supplementary material.

Supplementary material 1 (rar 4636 KB)

About this article

Cite this article

Wang, C., He, C. & Xu, M. Fast exposure fusion of detail enhancement for brightest and darkest regions. Vis Comput 37, 1233–1243 (2021). https://doi.org/10.1007/s00371-021-02079-5
