A novel fusion method based on dynamic threshold neural P systems and nonsubsampled contourlet transform for multi-modality medical images
Introduction
Multi-modality image fusion has emerged as a hot research topic in recent years [1], [2], covering both multi-focus images and multi-modality medical images. Image fusion combines the complementary information from multi-modality sensors to enhance human visual perception or to compensate for the limitations of each individual image. This technology has been applied in various fields such as visual surveillance, medicine, remote sensing and computer vision.
Multi-modality medical images are an important type of multi-modality images, where each constituent imaging modality has limitations and shows specific information. Magnetic resonance imaging (MRI) shows internal body structures such as the abdomen, liver, pancreas and other smooth tissues, while computed tomography (CT) highlights the bony structures and other anatomical parts with high resolution. Positron emission tomography (PET) and single-photon emission computed tomography (SPECT) images provide functional information related to metabolism. However, these images are often displayed in pseudo colour and typically have low resolution [3]. These multi-modality images can provide complementary information. To achieve higher diagnostic accuracy, many studies have combined the analysis of images obtained from different modalities of the same patient, which has led to the development of multi-modality medical image fusion techniques.
To merge two or more images from different modalities into a single fused image, many fusion methods have been proposed in the past few years [4], [5], [6], [7], [8]. Multi-scale transform (MST)-based methods are a common class of fusion methods. However, compared with current state-of-the-art fusion methods, MST-based methods exhibit poor fusion performance. For this reason, several fusion techniques have been developed in recent years, such as sparse representation (SR) based fusion methods [9], image decomposition (ID) based fusion methods [10] and deep learning (DL) based fusion methods [11].
Dynamic threshold neural P (DTNP) systems are a recently developed distributed parallel neural-like computing model [12], incorporating the spiking mechanism and dynamic threshold mechanism. Our previous work demonstrated that DTNP systems are Turing-universal computing devices. This paper focuses on the application of DTNP systems to the fusion of multi-modality medical images and proposes a novel DTNP system based fusion method in the NSCT domain.
The motivation behind this work is described below.
- (1)
NSCT-based methods are early image fusion methods, and they exhibit poor fusion performance compared with current state-of-the-art fusion methods, such as the SR-based and DL-based methods. However, the NSCT has some good characteristics that are conducive to image fusion. For example, the NSCT can retrieve the complementary information from multi-modality medical images.
- (2)
DTNP systems are a newly developed model and have an interesting characteristic, namely, the cooperative spiking of neurons in a local region. We sought to evaluate whether this characteristic could be combined with the NSCT to develop a novel fusion method for multi-modality medical images. Moreover, if the proposed fusion method can equal or even partially exceed the state-of-the-art fusion methods in terms of the fusion performance, this would demonstrate that some advantages of the DTNP systems could greatly improve the fusion performance of the NSCT-based methods for multi-modality medical images.
For these reasons, two DTNP systems are designed to present a novel fusion framework for multi-modality medical images. The proposed fusion framework consists of three parts: (i) the NSCT transform; (ii) image fusion in the NSCT domain; (iii) the inverse NSCT transform. The features of the high-frequency NSCT coefficients of the multi-modality medical images are regarded as the external inputs of the two DTNP systems, and their outputs are used as the control condition of the fusion rules.
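The three-part framework can be sketched as follows. This is a minimal, runnable stand-in, not the paper's method: a single-level 3 × 3 box-filter split plays the role of the NSCT (which requires a dedicated implementation), and a simple absolute-max rule plays the role of the DTNP-controlled fusion rule.

```python
# Sketch of the three-part fusion framework: (i) forward transform,
# (ii) fusion in the transform domain, (iii) inverse transform.
# A low/high split (3x3 box-filter mean as "low", residual as "high")
# is a hypothetical stand-in for the NSCT so the structure is runnable.

def box_mean(img, i, j):
    h, w = len(img), len(img[0])
    vals = [img[y][x]
            for y in range(max(0, i - 1), min(h, i + 2))
            for x in range(max(0, j - 1), min(w, j + 2))]
    return sum(vals) / len(vals)

def split(img):
    h, w = len(img), len(img[0])
    low = [[box_mean(img, i, j) for j in range(w)] for i in range(h)]
    high = [[img[i][j] - low[i][j] for j in range(w)] for i in range(h)]
    return low, high

def fuse(img_a, img_b):
    low_a, high_a = split(img_a)            # (i) forward transform of each source
    low_b, high_b = split(img_b)
    h, w = len(img_a), len(img_a[0])
    # (ii) fusion in the transform domain: average the low band,
    # keep the larger-magnitude coefficient in the high band
    low_f = [[(low_a[i][j] + low_b[i][j]) / 2 for j in range(w)] for i in range(h)]
    high_f = [[high_a[i][j] if abs(high_a[i][j]) >= abs(high_b[i][j]) else high_b[i][j]
               for j in range(w)] for i in range(h)]
    # (iii) inverse transform: recombine the fused bands
    return [[low_f[i][j] + high_f[i][j] for j in range(w)] for i in range(h)]
```

In the paper, step (ii) is driven by the outputs of the two DTNP systems rather than by this simple absolute-max rule.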
The contributions of this paper can be summarized as follows:
- (1)
DTNP systems are used to design a novel fusion framework in the NSCT domain for multi-modality medical images, wherein the DTNP systems fully utilize the complementary information of the multi-modality medical images extracted by the NSCT.
- (2)
A fusion rule based on DTNP systems is developed, where the improved novel sum-modified Laplacian (INSML) features of the high-frequency NSCT coefficients of the multi-modality medical images are extracted and used as the external inputs of the DTNP systems, and the corresponding outputs are used to control the fusion rule. The INSML features can describe the detailed information, such as edges and contours, of multi-modality medical images better, and the INSML features in a local region can effectively trigger the cooperative spiking mechanism in DTNP systems.
- (3)
For the low-frequency NSCT coefficients, an INSML-WLE-based fusion rule is designed. It should be noted that the low-frequency NSCT coefficients not only express the main energy of an image, but also contain some detailed information. The WLE and INSML approaches are used for feature extraction to extract the main energy and detailed information of the multi-modality medical images, respectively. The two features are combined to control the fusion of the low-frequency NSCT coefficients.
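The feature extraction behind the two fusion rules can be illustrated as follows. The standard sum-modified Laplacian (SML) and a weighted local energy (WLE) are shown; the paper's INSML is an improved variant of SML whose exact definition is not reproduced in this snippet, so SML serves only as the illustrative baseline, and the distance-based weight in `wle` is an assumption rather than the paper's weighting.

```python
# Feature-extraction sketch: SML captures detail (edges, contours),
# WLE captures local energy. Both operate on a coefficient matrix c
# given as a list of lists, with replicate padding at the borders.

def modified_laplacian(c, i, j):
    # |2c(i,j) - c(i-1,j) - c(i+1,j)| + |2c(i,j) - c(i,j-1) - c(i,j+1)|
    h, w = len(c), len(c[0])
    up = c[i - 1][j] if i > 0 else c[i][j]
    down = c[i + 1][j] if i < h - 1 else c[i][j]
    left = c[i][j - 1] if j > 0 else c[i][j]
    right = c[i][j + 1] if j < w - 1 else c[i][j]
    return abs(2 * c[i][j] - up - down) + abs(2 * c[i][j] - left - right)

def sml(c, i, j, r=1):
    # sum of modified Laplacians over a (2r+1) x (2r+1) window
    h, w = len(c), len(c[0])
    return sum(modified_laplacian(c, y, x)
               for y in range(max(0, i - r), min(h, i + r + 1))
               for x in range(max(0, j - r), min(w, j + r + 1)))

def wle(c, i, j, r=1):
    # weighted local energy: weighted sum of squared coefficients in the
    # window, with a simple distance-based weight (an assumption)
    h, w = len(c), len(c[0])
    total = 0.0
    for y in range(max(0, i - r), min(h, i + r + 1)):
        for x in range(max(0, j - r), min(w, j + r + 1)):
            weight = 1.0 / (1 + abs(y - i) + abs(x - j))
            total += weight * c[y][x] ** 2
    return total
```

A flat region yields zero SML but non-zero WLE, which is why the low-frequency rule combines both: WLE tracks the main energy while the SML-type feature tracks the residual detail.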
In the study by Li et al. [13], DTNP systems were used to address the fusion of multi-focus images, where the combination of DTNP+ST+SF+SML was used to fuse the low- and high-frequency ST coefficients of the multi-focus images. Multi-focus images are captured by cameras using the same imaging principle but different focal lengths, whereas multi-modality medical images are captured by devices with different imaging principles. Consequently, the combination of DTNP+ST+SF+SML is not suitable for the fusion of multi-modality medical images. To achieve the ideal fusion effect, this work proposes the combination of DTNP+NSCT+WLE+INSML for the fusion of multi-modality medical images.
The rest of this paper is organized as follows. In Section 2, a review of the related literature is presented. Section 3 provides a detailed description of the proposed fusion framework in the NSCT domain for multi-modality medical images. In Section 4, the experimental results are detailed, while conclusions are drawn in Section 5.
Related work
In this section, we review various fusion methods for multi-modality images, including those proposed for multi-modality medical images.
DTNP systems
DTNP systems [12], as a variant of spiking neural P systems (SNP systems) [50], [51], [52], [53], [54], [55], are a type of distributed parallel computing model that combines the spiking and dynamic threshold mechanisms of neurons. To implement the fusion of multi-modality medical images, DTNP systems are considered as a matrix of neurons with local connections.
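The cooperative spiking behaviour that the fusion rules exploit can be conveyed by a highly simplified, hypothetical neuron grid. Each neuron holds a potential u and a dynamic threshold theta; a neuron fires when u >= theta, delivering a spike to its 4-neighbours and raising its own threshold. The exact rules of DTNP systems [12] differ, and all constants below are illustrative assumptions.

```python
# Simplified DTNP-like grid: potentials are seeded by a feature matrix
# (e.g. INSML values); firing excites neighbours, so regions with large
# features spike cooperatively and accumulate more firing counts.

def dtnp_spike_counts(feature, steps=3, theta0=1.0, theta_step=0.5, spike=0.25):
    h, w = len(feature), len(feature[0])
    u = [row[:] for row in feature]              # potentials seeded by features
    theta = [[theta0] * w for _ in range(h)]     # per-neuron dynamic thresholds
    fires = [[0] * w for _ in range(h)]
    for _ in range(steps):
        fired = [(i, j) for i in range(h) for j in range(w)
                 if u[i][j] >= theta[i][j]]
        for i, j in fired:
            fires[i][j] += 1
            u[i][j] -= theta[i][j]               # consume the potential
            theta[i][j] += theta_step            # threshold rises dynamically
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                y, x = i + di, j + dj
                if 0 <= y < h and 0 <= x < w:
                    u[y][x] += spike             # excite local neighbours
    return fires
```

In a fusion setting, the firing counts of the two systems could serve as the control condition: for each coefficient, keep the source whose system fired more often at that position.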
Let I be a medical image of size m × n, and let INSML (also of size m × n) be a feature matrix containing the (high-frequency) NSCT coefficients of the
Experimental results
To evaluate the effectiveness of the proposed fusion method, twelve pairs of multi-modality medical images were used in the experiments, as shown in Fig. 4, including MRI_T1/MRI_T2, CT/MRI, MRI/PET and MRI/SPECT pairs. These images, each of size 256 × 256, were downloaded from the open-source database “http://www.med.harvard.edu/aanlib/”.
In the experiments, the proposed fusion method was compared with nine methods, including the Wavelet [16], DWT [56], CVT [19], DTCWT [18],
Conclusions
DTNP systems are a type of Turing-universal, distributed parallel computing model. This paper investigated a DTNP system based fusion framework in the NSCT domain for multi-modality medical images. The proposed fusion framework combined the DTNP+NSCT+INSML+WLE features. For the low-frequency NSCT coefficients, the INSML and WLE features are combined to express the main energy and partial details of the multi-modality medical images and to develop a fusion rule. For the high-frequency NSCT
CRediT authorship contribution statement
Bo Li: Conceptualization, Software, Writing - original draft. Hong Peng: Conceptualization, Software, Writing - original draft. Jun Wang: Conceptualization, Writing - original draft.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Acknowledgment
The authors thank the anonymous reviewers for providing very insightful and constructive suggestions, which have greatly helped improve the presentation of this paper.
This work was partially supported by the Research Fund of Sichuan Science and Technology Project (No. 2018JY0083), Research Foundation of the Education Department of Sichuan province (No. 17TD0034), China.
References (66)
- et al., Pixel-level image fusion: a survey of the state of the art, Inf. Fusion (2017)
- et al., Infrared and visible image fusion methods and applications: a survey, Inf. Fusion (2019)
- et al., Medical image fusion: a survey of the state of the art, Inf. Fusion (2014)
- et al., A novel multi-focus image fusion method using multiscale shearing non-local guided averaging filter, Signal Process. (2020)
- et al., Infrared and visible image fusion and denoising via l2-lp norm minimization, Signal Process. (2020)
- et al., Single image dehazing via self-constructing image fusion, Signal Process. (2020)
- et al., Sparse representation based multi-sensor image fusion for multi-focus and multi-modality images: a review, Inf. Fusion (2018)
- et al., Dynamic threshold neural P systems, Knowl.-Based Syst. (2019)
- et al., Multi-focus image fusion based on dynamic threshold neural P systems and surfacelet transform, Knowl.-Based Syst. (2020)
- Image fusion by a ratio of low-pass pyramid, Pattern Recognit. Lett. (1989)
- Gradient-based multiresolution image fusion, IEEE Trans. Image Process.
- Wavelets and Image Fusion
- Multimodal sensor medical image fusion based on nonsubsampled shearlet transform and s-PCNNs in HSV space, Signal Process.
- Multifocus image fusion using the nonsubsampled contourlet transform, Signal Process.
- Fusion of multimodal medical images using Daubechies complex wavelet transform - a multiresolution approach, Inf. Fusion
- A novel method of multimodal medical image fusion using fuzzy transform, J. Vis. Commun. Image Represent.
- An improved multimodal medical image fusion algorithm based on fuzzy transform, J. Vis. Commun. Image Represent.
- Ripplet domain fusion approach for CT and MR medical image information, Biomed. Signal Process. Control
- A novel medical image fusion by combining TV-l1 decomposed textures based on adaptive weighting scheme, Eng. Sci. Technol.
- Multimodality medical image fusion based on multiscale geometric analysis of contourlet transform, Neurocomputing
- A novel multi-modality image method based on image decomposition and sparse representation, Inf. Sci.
- Robust sparse representation based multi-focus image fusion with dictionary construction and local spatial consistency, Pattern Recognit.
- Analysis-synthesis dictionary pair learning and patch saliency measure for image fusion, Signal Process.
- Multi-modality medical image fusion based on separable dictionary learning and Gabor filtering, Signal Process.: Image Commun.
- Discriminative dictionary learning-based multiple component decomposition for detail-preserving noisy image fusion, IEEE Trans. Instrum. Meas.
- A novel dictionary learning approach for multi-modality medical image fusion, Neurocomputing
- Joint medical image fusion, denoising and enhancement via discriminative low-rank sparse dictionaries learning, Pattern Recognit.
- Medical image fusion using multi-level local extrema, Inf. Fusion
- Fully convolutional network-based multi-focus image fusion, Neural Comput.
- Ensemble of CNN for multi-focus image fusion, Inf. Fusion
- IFCNN: a general image fusion framework based on convolutional neural network, Inf. Fusion
- MDLatLRR: a novel decomposition method for infrared and visible image fusion, IEEE Trans. Image Process.