
Real-time and effective pan-sharpening for remote sensing using multi-scale fusion network

  • Special Issue Paper
  • Published:
Journal of Real-Time Image Processing

Abstract

Real-time monitoring and surveillance play an important role in the field of remote sensing, where multi-spectral (MS) images with high spatial resolution are widely desired for better analysis. However, high-resolution MS images cannot be obtained directly due to the limitations of sensors and bandwidth. As an essential way to alleviate this problem, pan-sharpening fuses the complementary information of a low-resolution MS image and a high-resolution panchromatic (PAN) image to reconstruct a high-resolution MS image. Most previous deep-learning-based methods can meet real-time requirements with the help of a graphics processing unit (GPU). However, they do not fully exploit the available hierarchical information, leaving substantial room for performance improvement. In this paper, to meet the requirement of real-time implementation and simultaneously achieve more effective performance, we propose a multi-scale fusion network (MSFN) that makes full use of the hierarchical complementary features of PAN and MS images. Specifically, we introduce an encoder–decoder structure with a coarse-to-fine strategy to extract multi-scale features from the PAN and MS images separately, while an information pool preserves the primitive full-resolution information. A multi-scale feature fusion module then fuses the multi-scale features from the decoder and the information pool, and the fused features are used to reconstruct the high-resolution MS image. Extensive experiments demonstrate that the proposed method achieves favorable performance against other methods in terms of quantitative metrics and visual quality, and the running-time results indicate that the method achieves real-time performance.
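To make the pipeline described above concrete, the following is a minimal PyTorch sketch of a multi-scale fusion design for pan-sharpening. It is not the authors' MSFN implementation: the names (MultiScaleFusionSketch, conv_block), the channel widths, and the two-level encoder–decoder are illustrative assumptions; only the overall flow (separate PAN/MS feature extraction, a full-resolution information pool, coarse-to-fine decoding, fusion of decoder features with the pool, and residual reconstruction of the MS image) follows the description in the abstract.

import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(in_ch, out_ch):
    # 3x3 convolution + ReLU, the basic unit used throughout this sketch.
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))

class MultiScaleFusionSketch(nn.Module):
    # Hypothetical configuration: ms_bands and feats are illustrative, not the paper's values.
    def __init__(self, ms_bands=4, feats=32):
        super().__init__()
        # Separate shallow feature extractors for the PAN and upsampled MS inputs.
        self.enc_pan = conv_block(1, feats)
        self.enc_ms = conv_block(ms_bands, feats)
        # Encoder: two downsampling stages produce coarse multi-scale features.
        self.down1 = conv_block(2 * feats, 2 * feats)
        self.down2 = conv_block(2 * feats, 4 * feats)
        # Decoder: coarse-to-fine upsampling back to full resolution.
        self.up1 = conv_block(4 * feats, 2 * feats)
        self.up2 = conv_block(2 * feats, 2 * feats)
        # Fusion of decoder output with the full-resolution "information pool".
        self.fuse = conv_block(4 * feats, 2 * feats)
        self.recon = nn.Conv2d(2 * feats, ms_bands, 3, padding=1)

    def forward(self, pan, ms_up):
        # pan: (B, 1, H, W); ms_up: (B, C, H, W), the MS image upsampled to PAN size.
        pool = torch.cat([self.enc_pan(pan), self.enc_ms(ms_up)], dim=1)  # information pool
        d1 = self.down1(F.avg_pool2d(pool, 2))
        d2 = self.down2(F.avg_pool2d(d1, 2))
        u1 = self.up1(F.interpolate(d2, scale_factor=2, mode="bilinear", align_corners=False))
        u2 = self.up2(F.interpolate(u1, scale_factor=2, mode="bilinear", align_corners=False))
        fused = self.fuse(torch.cat([u2, pool], dim=1))  # multi-scale feature fusion
        # Residual reconstruction on top of the upsampled MS image.
        return ms_up + self.recon(fused)

if __name__ == "__main__":
    net = MultiScaleFusionSketch()
    pan = torch.randn(1, 1, 256, 256)
    ms_up = torch.randn(1, 4, 256, 256)
    print(net(pan, ms_up).shape)  # torch.Size([1, 4, 256, 256])

Under these assumptions, pan-sharpening an image is a single feed-forward pass, which is why such networks can reach real-time throughput on a GPU.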





Acknowledgements

This work is sponsored by the National Key R&D Program of China under Grant No. 2020AAA0104500 and funded by Sichuan University under Grant No. 2020SCUNG205.

Author information


Corresponding authors

Correspondence to Gwanggil Jeon or Rui Zhong.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Lai, Z., Chen, L., Jeon, G. et al. Real-time and effective pan-sharpening for remote sensing using multi-scale fusion network. J Real-Time Image Proc 18, 1635–1651 (2021). https://doi.org/10.1007/s11554-021-01080-4


  • Received:

  • Accepted:

  • Published:

  • Issue Date:

  • DOI: https://doi.org/10.1007/s11554-021-01080-4

Keywords
