
A novel multisource pig-body multifeature fusion method based on Gabor features

Published in: Multidimensional Systems and Signal Processing

Abstract

Multi-source image fusion has attracted considerable attention in recent years because of the higher segmentation accuracy it enables. However, traditional multi-source fusion methods often fail to preserve sufficient contrast and detail in the fused image. To better detect pig-body features, a novel infrared and visible image fusion method for pig-body segmentation and temperature detection, named NSCT-GF, is proposed in the non-subsampled contourlet transform (NSCT) domain. First, the visible and infrared images are decomposed into a series of multi-scale, multi-directional sub-bands using NSCT. Then, to better represent fine-scale texture information, a Gabor energy map is extracted with an even-symmetric Gabor filter, and the low-frequency coefficients are fused by a maximum rule on the ordinal encoding. Next, to preserve coarse-scale and edge detail information, an odd-symmetric Gabor filter is employed to fuse the high-frequency NSCT sub-bands, and the fused coefficients are reconstructed into the final fused image by the inverse NSCT. The pig-body shape is then obtained by Otsu automatic threshold segmentation and refined by morphological processing. Finally, the pig-body temperature is extracted from the segmented region. Experimental results show that the proposed segmentation method achieves a 1.84–3.89% higher average segmentation accuracy than prevailing conventional methods while also reducing computation time, laying a foundation for accurately measuring pig-body temperature.
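
As a concrete illustration of two building blocks described in the abstract, the sketch below computes a Gabor energy map from even- and odd-symmetric Gabor kernels and then extracts a body mask with Otsu thresholding and morphological refinement, using OpenCV and NumPy. This is a minimal sketch, not the authors' NSCT-GF implementation: the NSCT decomposition and the sub-band fusion rules are omitted, and the file names, kernel size, and filter parameters are placeholder assumptions.

# Minimal sketch (assumes OpenCV + NumPy), not the authors' NSCT-GF code:
# the NSCT decomposition and sub-band fusion rules are omitted, and all
# file names and filter parameters here are placeholder assumptions.
import cv2
import numpy as np

def gabor_energy_map(gray, ksize=31, sigma=4.0, lambd=10.0, gamma=0.5, n_orient=8):
    """Sum of squared even- and odd-symmetric Gabor responses over orientations."""
    gray = gray.astype(np.float32)
    energy = np.zeros_like(gray)
    for k in range(n_orient):
        theta = k * np.pi / n_orient
        # psi = 0 gives the even-symmetric (cosine) kernel,
        # psi = pi/2 the odd-symmetric (sine) kernel.
        even = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, gamma, psi=0)
        odd = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, gamma, psi=np.pi / 2)
        r_even = cv2.filter2D(gray, cv2.CV_32F, even)
        r_odd = cv2.filter2D(gray, cv2.CV_32F, odd)
        energy += r_even ** 2 + r_odd ** 2
    return energy

def segment_body(fused_gray):
    """Otsu threshold followed by morphological opening/closing to clean the mask."""
    img8 = cv2.normalize(fused_gray, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, mask = cv2.threshold(img8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, se)   # remove small speckles
    return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, se)  # fill small holes

if __name__ == "__main__":
    fused = cv2.imread("fused.png", cv2.IMREAD_GRAYSCALE)     # placeholder fused image
    energy = gabor_energy_map(fused)                          # in the paper this drives the fusion rule
    mask = segment_body(fused)
    thermal = cv2.imread("thermal.png", cv2.IMREAD_GRAYSCALE) # placeholder registered thermal image
    mean_level = thermal.astype(np.float32)[mask > 0].mean()  # temperature proxy over the body region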



Acknowledgements

The authors would like to thank their colleagues for their support of this work. The detailed comments from the anonymous reviewers are gratefully acknowledged. This work was supported by the Key Research and Development Project of Shandong Province (Grant No. 2019GNC106091) and the National Key Research and Development Program (Grant No. 2016YFD0200600-2016YFD0200602).

Author information


Corresponding author

Correspondence to Minjuan Wang.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Zhong, Z., Wang, M., Gao, W. et al. A novel multisource pig-body multifeature fusion method based on Gabor features. Multidim Syst Sign Process 32, 381–404 (2021). https://doi.org/10.1007/s11045-020-00744-x
