Abstract

We propose a new method for fast organ classification and segmentation of abdominal magnetic resonance (MR) images. Magnetic resonance imaging (MRI) is an imaging examination modality that has developed rapidly in recent years. Recognition of specific target areas (organs) in MR images is one of the key issues in computer-aided diagnosis of medical images. Artificial neural network technology has made significant progress in image processing based on the multimodal MR attributes of each pixel in MR images. However, with the generation of large-scale data, there are few studies on the rapid processing of large-scale MRI data. To address this deficiency, we present a fast radial basis function artificial neural network (Fast-RBF) algorithm. The main contributions of our work are as follows: (1) The proposed algorithm achieves fast processing of large-scale image data by introducing the ε-insensitive loss function, the structural risk term, and the core-set principle. We apply this algorithm to the identification of specific target areas in MR images. (2) For each abdominal MRI case, we use four MR sequences (fat, water, in-phase (IP), and opposed-phase (OP)) and the position coordinates (x, y) of each pixel as the input of the algorithm. We use three classifiers to identify the liver and kidneys in the MR images. Experiments show that the proposed method achieves higher precision in the recognition of specific regions of medical images and has better adaptability to large-scale datasets than the traditional RBF algorithm.

1. Introduction

Magnetic resonance imaging (MRI) is an imaging examination modality that has developed rapidly in recent years. It has the advantages of no ionizing radiation, no bone artifacts, and multidirectional and multiparameter imaging [1]. Therefore, an end-to-end intelligent disease diagnosis system based on MRI is an inevitable direction for the development of intelligent medicine. To achieve the goal of effective intelligent medical treatment, this paper studies the classification of abdominal organs based on MRI.

There are many techniques for medical image processing [2-5]. Gordillo et al. [6] divided the existing MR image processing technologies into the following three categories: The first type is threshold-based methods, which classify the segmentation objects (such as pixels) of the MR image by comparing them with different thresholds [7-9]. The second type is region-based methods, which divide the image into several mutually exclusive regions according to preset rules and then assign pixels with the same attributes to the same region [10, 11]. The third type is pixel-based classification methods, which mainly classify the objects according to the MR multimodal attributes of each pixel. According to whether the training set is labeled, they can be subdivided into unsupervised, semisupervised, and supervised methods [12-14].

Among the third type of methods, the artificial neural network, as a supervised learning model, has been applied to the field of medical imaging [15]. It is suitable for image processing without prior distribution assumptions, and its applications can be divided into three categories: The first type applies artificial neural networks directly to MR image processing. Lucht et al. [16] applied a neural network to the dynamic segmentation of MR breast images. Egmont-Petersen et al. [17] used neural networks and multiscale pharmacokinetic features to segment bone tumors in MR perfusion images. Zhang et al. [18] proposed a visual encoding model based on deep neural networks and transfer learning for brain activity measured by functional magnetic resonance imaging. The second type uses a convolutional neural network or an improved variant to segment MR images [19-22]. Khalilia et al. [20] used convolutional neural networks to automatically perform brain tissue segmentation in fetal MRI. Wang et al. [21] used dynamic pixelwise weighting-based fully convolutional neural networks for left ventricle segmentation in short-axis MRI. The third type uses hybrid neural networks to segment MR images. Glass et al. [23] used a hybrid artificial neural network to segment the inversion recovery image of a normal human brain. Alejo et al. [24] used a hybrid artificial neural network to design an accurate computer-aided method capable of performing region segmentation. Reddick et al. [25] used a hybrid neural network to propose a fully automatic method for segmentation and classification of multispectral MR images.

Based on the review of the above literature, great progress has been made in the use of artificial neural networks for medical image segmentation. However, with the higher resolution requirements of MR images and the increasing size of datasets, research on fast artificial neural network training for large medical image datasets is still lacking. In response, this paper proposes the Fast-RBF algorithm, which can process large datasets quickly. We applied this method to MRI-based abdominal organ classification and segmentation, and it achieved significant results. The main contributions of this paper are as follows: (1) The Fast-RBF algorithm with a large-sample-processing capability is proposed by introducing the ε-insensitive loss function and the structural risk term and by using the core-set principle [26]. This method not only retains the strong nonlinear fitting ability and simple learning rules of RBF artificial neural networks but can also process large datasets quickly, which improves the processing speed and efficiency. (2) For each abdominal MRI case, we use four MR sequences (fat, water, IP, and OP) and the position coordinates (x, y) of each pixel as the input of the algorithm. We use three classifiers to identify the liver, kidneys, and other tissues. The proposed algorithm has better adaptability and runs faster on large datasets than the traditional RBF neural network algorithm.

The remainder of this paper is divided into four parts: Section 2 introduces RBF neural networks and the relationship between RBF neural networks and linear models; Section 3 introduces the Fast-RBF neural network with its large-sample-processing ability; Section 4 verifies the validity of the proposed algorithm on medical image processing; and Section 5 summarizes the full text.

2.1. RBF Neural Network

RBF neural networks consist of an input layer, a hidden layer, and an output layer, as shown in Figure 1. Here, the input is a sample $\mathbf{x} \in \mathbb{R}^{d}$, the number of hidden-layer nodes is $m$, and the RBF neural network performs a nonlinear mapping from the input space to the output.

In an RBF neural network, the input layer receives the training samples, and each hidden-layer node performs a nonlinear transformation through a radial basis function that maps the input space to a new space. If the radial basis function is defined as a Gaussian function, let $\mathbf{c}_j$ denote the center of the Gaussian function and let $\sigma_j$ represent its kernel width. This function can be expressed as
$$\phi_j(\mathbf{x}) = \exp\!\left(-\frac{\left\|\mathbf{x} - \mathbf{c}_j\right\|^{2}}{2\sigma_j^{2}}\right). \tag{1}$$

The nodes of the output layer implement a linear weighted combination in this new space. Let $w_j$ be the connection weight between the $j$th hidden node and the output layer and $\phi_j(\cdot)$ be the corresponding radial basis function; then, the mapping function of the network is
$$f(\mathbf{x}) = \sum_{j=1}^{m} w_j\,\phi_j(\mathbf{x}). \tag{2}$$
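
As a minimal illustration of equations (1) and (2), a Gaussian RBF forward pass can be sketched as follows (NumPy, with illustrative names; this is not the authors' implementation):

```python
import numpy as np

def rbf_forward(X, centers, widths, weights):
    """Forward pass of a Gaussian RBF network (illustrative sketch).

    X:       (n_samples, n_features) input matrix
    centers: (n_hidden, n_features) Gaussian centers c_j       -- equation (1)
    widths:  (n_hidden,) kernel widths sigma_j                  -- equation (1)
    weights: (n_hidden,) output-layer weights w_j               -- equation (2)
    """
    # Squared distance between every sample and every center
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    # Hidden-layer activations phi_j(x) = exp(-||x - c_j||^2 / (2 sigma_j^2))
    phi = np.exp(-d2 / (2.0 * widths[None, :] ** 2))
    # Output layer: linear combination of the hidden activations (equation (2))
    return phi @ weights
```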

2.2. RBF Neural Network and Linear Model

According to the introduction above, the RBF neural network has three kinds of parameters: the center vectors $\mathbf{c}_j$ of the radial basis functions, the kernel widths $\sigma_j$, and the weights $w_j$ of the output layer. Among them, $\mathbf{c}_j$ and $\sigma_j$ can be determined by the fuzzy $c$-means (FCM) clustering algorithm [27], and $w_j$ is obtained by a gradient-descent learning algorithm. Let $u_{ij}$, obtained by the FCM clustering algorithm, denote the fuzzy membership of sample $\mathbf{x}_i$ to the $j$th class, let $N$ represent the size of the training sample, and let $m$ indicate the number of hidden-layer nodes; then, the center and the kernel width of the radial basis function can be expressed by equations (3) and (4):
$$\mathbf{c}_j = \frac{\sum_{i=1}^{N} u_{ij}\,\mathbf{x}_i}{\sum_{i=1}^{N} u_{ij}}, \tag{3}$$
$$\sigma_j = \sqrt{\frac{\sum_{i=1}^{N} u_{ij}\left\|\mathbf{x}_i - \mathbf{c}_j\right\|^{2}}{\sum_{i=1}^{N} u_{ij}}}. \tag{4}$$

The center $\mathbf{c}_j$ and the kernel width $\sigma_j$ of each radial basis function are obtained from equations (3) and (4). Let
$$\boldsymbol{\varphi}(\mathbf{x}) = \left[\phi_1(\mathbf{x}),\ \phi_2(\mathbf{x}),\ \ldots,\ \phi_m(\mathbf{x})\right]^{T}; \tag{5}$$
then the input sample $\mathbf{x}$ is mapped to the new space $\boldsymbol{\varphi}(\mathbf{x})$, and the conversion from the input layer to the hidden layer is a nonlinear mapping.
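
A small sketch of this center and width estimation from FCM memberships, assuming the membership-weighted forms of equations (3) and (4) as reconstructed above (illustrative code, not the authors' implementation):

```python
import numpy as np

def fcm_centers_and_widths(X, U):
    """Estimate RBF centers and kernel widths from FCM memberships (sketch).

    X: (n_samples, n_features) training inputs
    U: (n_samples, n_hidden) fuzzy membership of each sample to each cluster
    """
    # Membership-weighted cluster centers (equation (3))
    centers = (U.T @ X) / U.sum(axis=0)[:, None]
    # Membership-weighted spread around each center as the kernel width (equation (4))
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)   # (n_samples, n_hidden)
    widths = np.sqrt((U * d2).sum(axis=0) / U.sum(axis=0))
    return centers, widths
```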

Let $\mathbf{w} = \left[w_1, w_2, \ldots, w_m\right]^{T}$; then, the neural network function can be expressed as
$$f(\mathbf{x}) = \mathbf{w}^{T}\boldsymbol{\varphi}(\mathbf{x}). \tag{6}$$

It can be seen from equation (6) that once the radial basis function hidden layer has been estimated, the output of the network can be written as a linear model.

3. Fast-RBF Algorithm

3.1. Fast-RBF Principle

First, the ε-insensitive loss function corresponding to the RBF linear model is introduced, and minimizing this loss is expressed through the constraint terms of an optimization problem. Then, the structural risk term and the Gaussian kernel are introduced to construct the RBF neural network optimization model with large-sample-processing ability. The specific steps are as follows.

Step 1. From equations (3) and (4), the values of $\mathbf{c}_j$ and $\sigma_j$ are obtained; then, from equation (5), the model input $\boldsymbol{\varphi}(\mathbf{x})$ is obtained.

Step 2. Introducing the ε-insensitive loss function.

First, the definition of the ε-insensitive loss function is given:

Definition 1. The ε-insensitive loss function is defined as [28]
$$e_{\varepsilon}(z) = \begin{cases} 0, & |z| \le \varepsilon, \\ |z| - \varepsilon, & |z| > \varepsilon, \end{cases} \tag{7}$$
where $z \in \mathbb{R}$ and $\varepsilon \ge 0$.

For the linear model of equation (6), the corresponding ε-insensitive loss function can be expressed as
$$e_{\varepsilon}\!\left(f(\mathbf{x}_i) - y_i\right) = \max\!\left(0,\ \left|f(\mathbf{x}_i) - y_i\right| - \varepsilon\right), \tag{8}$$
where $f(\mathbf{x}_i)$ represents the neural network output value and $y_i$ represents the real output value.
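
A tiny numeric illustration of this loss (illustrative code, not part of the paper): with ε = 0.1, an error of 0.05 costs nothing, while an error of 0.30 costs 0.2.

```python
import numpy as np

def eps_insensitive_loss(y_pred, y_true, eps):
    """epsilon-insensitive loss: zero inside the eps-tube, linear outside."""
    return np.maximum(0.0, np.abs(y_pred - y_true) - eps)

# Errors of 0.05 and 0.30 with eps = 0.1 give losses of 0.0 and 0.2.
print(eps_insensitive_loss(np.array([1.05, 1.30]), np.array([1.0, 1.0]), 0.1))
```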

For equation (8), the constraints $f(\mathbf{x}_i) - y_i \le \varepsilon$ and $y_i - f(\mathbf{x}_i) \le \varepsilon$ are not always satisfied, so the relaxation factors $\xi_i$ and $\xi_i^{*}$ are introduced, and the following constraints can be obtained:
$$f(\mathbf{x}_i) - y_i \le \varepsilon + \xi_i, \qquad y_i - f(\mathbf{x}_i) \le \varepsilon + \xi_i^{*}, \qquad \xi_i \ge 0,\ \xi_i^{*} \ge 0. \tag{9}$$

The purpose of this algorithm is to minimize the value of the ε-insensitive loss function represented by equation (8). The value of the insensitivity parameter ε directly affects the modeling accuracy; therefore, ε itself is treated as a variable of the optimization problem, and the loss of equation (8) is imposed through the constraint terms. Combined with equation (9), the optimization problem can be expressed equivalently as equation (10), where the balance factor weighs the error terms and the constraint ε ≥ 0 is automatically satisfied.

Step 3. Introducing the structural risk term and the kernel function.

A support vector machine is an implementation of the structural risk minimization principle [28]; the method proposed in this paper follows the support vector machine in this respect and introduces a regularization term to minimize the structural risk of the algorithm. The kernel method is another important component of a support vector machine [28] and is used to improve the computational ability of a linear learner; the proposed method therefore also introduces a kernel function. After introducing the regularization term and the kernel function, the optimization problem can be expressed by equation (11).
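
For orientation, a standard formulation that combines a structural risk term with the ε-insensitive constraints of equation (9) is the ε-SVR-style primal below; it is shown only as a generic reference form under common SVR assumptions, not as the exact equation (11) of this paper:
$$\min_{\mathbf{w},\,\varepsilon,\,\boldsymbol{\xi},\,\boldsymbol{\xi}^{*}}\ \frac{1}{2}\left\|\mathbf{w}\right\|^{2} + C\sum_{i=1}^{N}\left(\xi_i + \xi_i^{*}\right) + C\mu\varepsilon \qquad \text{subject to the constraints of equation (9),}$$
where $C$ and $\mu$ are generic trade-off parameters.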

Step 4. Formula derivation.

By introducing the Lagrange multipliers, the Lagrangian function of equation (11) can be expressed as equation (12).

The matrix form of the dual problem corresponding to equation (12) is given in equation (13), in which the Lagrange coefficients and the kernel matrix appear; they are defined in equation (14), where the Gaussian kernel function is used.

The values of the variables obtained from the solution are given in equation (15).

In addition, the remaining quantities follow directly from these values.

Step 5. Prediction.

The prediction function obtained from the dual solution gives the network output $f(\mathbf{x})$ for a test sample $\mathbf{x}$.

If it is used for classification, the decision is made by the sign of the output: if $f(\mathbf{x}) > 0$, the sample belongs to the positive class, and if $f(\mathbf{x}) < 0$, it belongs to the negative class.

It can be seen from this section that the optimization problem solved by the proposed algorithm is a quadratic programming problem.

3.2. The Center-Constrained MEB Problem

In 2002, Bădoiu and Clarkson proposed a core-set-based (1 + ε)-approximation algorithm for the minimum enclosing ball (MEB) problem [26]. The algorithm solves the optimization problem on a small subset of the input set, called the core set, and obtains an approximation nearly as good as that computed on the original input set, which improves efficiency. Tsang et al. [29] showed that the MEB problem is related to many kernel problems: quadratic programming (QP) problems of a suitable form can be solved quickly by the core-set algorithm. The following briefly introduces the center-constrained minimum enclosing ball (CC-MEB) problem. We then establish the relationship between the proposed algorithm and the CC-MEB and use it to implement the fast algorithm proposed in this paper.

Given the training samples $\{\mathbf{x}_1, \ldots, \mathbf{x}_N\}$, where $\varphi(\mathbf{x}_i)$ is the kernel mapping associated with a given kernel $k$, adding one dimension $\delta_i \ge 0$ to each $\varphi(\mathbf{x}_i)$ forms the set $\tilde{S} = \{\tilde{\varphi}(\mathbf{x}_i) = [\varphi(\mathbf{x}_i)^{T}, \delta_i]^{T}\}_{i=1}^{N}$. By setting the coordinate of the center point in this last dimension to 0, that is, the CC-MEB's center is $[\mathbf{c}^{T}, 0]^{T}$, the optimization problem of finding the smallest CC-MEB that contains all the samples in $\tilde{S}$ is
$$\min_{\mathbf{c},\, R}\ R^{2} \quad \text{s.t.}\quad \left\|\mathbf{c} - \varphi(\mathbf{x}_i)\right\|^{2} + \delta_i^{2} \le R^{2}, \quad i = 1, \ldots, N, \tag{18}$$
where $R$ is the radius of the ball.

Let $\boldsymbol{\alpha} = [\alpha_1, \ldots, \alpha_N]^{T}$ denote the Lagrange multipliers and $\boldsymbol{\Delta} = [\delta_1^{2}, \ldots, \delta_N^{2}]^{T}$; then, the matrix form of the corresponding dual problem of equation (18) is
$$\max_{\boldsymbol{\alpha}}\ \boldsymbol{\alpha}^{T}\!\left(\operatorname{diag}(\mathbf{K}) + \boldsymbol{\Delta}\right) - \boldsymbol{\alpha}^{T}\mathbf{K}\boldsymbol{\alpha} \quad \text{s.t.}\quad \boldsymbol{\alpha} \ge \mathbf{0},\ \boldsymbol{\alpha}^{T}\mathbf{1} = 1, \tag{19}$$
where the kernel matrix is
$$\mathbf{K} = \left[k_{ij}\right]_{N \times N}, \qquad k_{ij} = \varphi(\mathbf{x}_i)^{T}\varphi(\mathbf{x}_j) = k(\mathbf{x}_i, \mathbf{x}_j). \tag{20}$$

Using the optimal solution $\boldsymbol{\alpha}$, the values of the radius $R$ and the center point $\mathbf{c}$ are obtained as
$$R = \sqrt{\boldsymbol{\alpha}^{T}\!\left(\operatorname{diag}(\mathbf{K}) + \boldsymbol{\Delta}\right) - \boldsymbol{\alpha}^{T}\mathbf{K}\boldsymbol{\alpha}}, \qquad \mathbf{c} = \sum_{i=1}^{N}\alpha_i\,\varphi(\mathbf{x}_i). \tag{21}$$

The formula for the squared distance from any point $\tilde{\varphi}(\mathbf{x}_\ell)$ to the center point is
$$\left\|\mathbf{c} - \varphi(\mathbf{x}_\ell)\right\|^{2} + \delta_\ell^{2} = \sum_{i,j}\alpha_i\alpha_j k_{ij} - 2\sum_{i}\alpha_i k_{i\ell} + k_{\ell\ell} + \delta_\ell^{2}. \tag{22}$$

Because $\boldsymbol{\alpha}^{T}\mathbf{1} = 1$, adding any real number to every entry of the linear term in equation (19) only shifts the objective by a constant and does not affect the optimal solution $\boldsymbol{\alpha}$. When the linear term is constant, the original dual form is changed to
$$\max_{\boldsymbol{\alpha}}\ -\boldsymbol{\alpha}^{T}\mathbf{K}\boldsymbol{\alpha} \quad \text{s.t.}\quad \boldsymbol{\alpha} \ge \mathbf{0},\ \boldsymbol{\alpha}^{T}\mathbf{1} = 1. \tag{23}$$

Reference [29] pointed out that any QP problem of the form of equation (23) can be regarded as a CC-MEB problem, which can be solved by the fast core-set algorithm.
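
To make the core-set idea concrete, the following is a plain Euclidean sketch of a Bădoiu-Clarkson-style iteration (an illustrative stand-in; the algorithm in this paper solves the CC-MEB in kernel space, as described above):

```python
import numpy as np

def meb_core_set(points, eps=0.05, max_iter=200):
    """Euclidean sketch of the core-set idea behind the (1 + eps)-approximate MEB.

    Keep a small core set; whenever some point lies outside the (1 + eps)-expanded
    ball of the core set, add the farthest point and recompute the ball.  The ball
    of the core set is approximated here by its centroid and the maximum distance
    to it, a crude stand-in for the exact small ball solved in practice.
    """
    core = [0]                                   # start from an arbitrary point
    center, radius = points[0], 0.0
    for _ in range(max_iter):
        dists = np.linalg.norm(points - center, axis=1)
        far = int(dists.argmax())
        if dists[far] <= (1.0 + eps) * radius:   # all points are (1+eps)-covered
            break
        core.append(far)                         # grow the core set
        center = points[core].mean(axis=0)       # re-estimate the core-set ball
        radius = np.linalg.norm(points[core] - center, axis=1).max()
    return center, radius, core

# Example: 1,000 random 2-D points; the core set stays very small.
pts = np.random.default_rng(0).normal(size=(1000, 2))
c, r, cs = meb_core_set(pts)
print(r, len(cs))
```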

3.3. Relationship between Fast-RBF and CC-MEB

Equation (13) is the QP form of Fast-RBF. By absorbing its linear term into the diagonal of the kernel matrix, where the real number added to the diagonal should be large enough that the resulting entries are nonnegative, equation (13) can be written as follows:

This form is equivalent to the CC-MEB problem from equation (23), and the problem can be solved using the core-set fast algorithm [29].

According to formula (25), the center of the sphere can be calculated from the optimal solution. Distinguishing the two cases that appear in the formula, the following derivation is obtained:

Therefore, the value of p in equation (15) follows from this derivation.

3.4. The Implementation of Fast-RBF

Algorithm 1 describes the steps of the Fast-RBF algorithm, and the flow chart is shown in Figure 2.

Input: Dataset, the approximation parameter, the remaining algorithm parameters, and the kernel width
Output: Core set and Lagrange coefficients
Training steps:
Step 1: Randomly select 20 samples to form the initial core set ;
Generate the center and radius of the initial CC-MEB according to equation (21) and set the number of iterations t to be 0
Step 2: Randomly select 59 samples and calculate the distance from each of them to the center of the current CC-MEB according to equation (22). If none of them lies outside the ball expanded by the approximation parameter, proceed to step 6
Step 3: Find the farthest sample from the center among those checked in step 2 and add it to the core set
Step 4: Solve the new CC-MEB on the updated core set and record its new center and radius
Step 5: Set t = t + 1 and return to step 2
Step 6: Terminate the training and return the required output
Prediction step:
    Input the test sample into the prediction function of Step 5 in Section 3.1 to obtain its predicted class
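
A schematic outline of this training loop follows (an illustrative sketch only; the inner QP/CC-MEB solver is left as a placeholder callable, which is our assumption and not part of the paper's published code):

```python
import numpy as np

def fast_rbf_train(Phi, y, qp_solver, eps=0.05, n_probe=59, max_iter=500, seed=0):
    """Schematic outline of the training loop in Algorithm 1 (illustrative only).

    Phi: (N, m) hidden-layer features of the training samples (equation (5))
    y:   (N,) target values
    qp_solver: placeholder callable solving the small QP / CC-MEB restricted to
               the current core set; it must return (center, radius, alphas).
    """
    rng = np.random.default_rng(seed)
    N = Phi.shape[0]
    core = list(rng.choice(N, size=min(20, N), replace=False))      # Step 1
    center, radius, alphas = qp_solver(Phi[core], y[core])
    for _ in range(max_iter):
        probe = rng.choice(N, size=min(n_probe, N), replace=False)  # Step 2
        dists = np.linalg.norm(Phi[probe] - center, axis=1)         # cf. equation (22)
        if np.all(dists <= (1.0 + eps) * radius):                   # Step 6: stop
            break
        far = int(probe[dists.argmax()])                            # Step 3
        core.append(far)
        center, radius, alphas = qp_solver(Phi[core], y[core])      # Steps 4-5
    return core, alphas
```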

4. Experimental Results and Analysis

In this paper, the effectiveness of the proposed method is verified by comparing it with the traditional RBF algorithm on MR images. The experiment is divided into two stages: the parameter optimization stage and the modeling stage. In the parameter optimization stage, the grid search method is used to obtain the optimal parameters of each algorithm based on the training set. In the modeling stage, the training set is modeled using optimal parameters, and the test set is used to obtain the performance of each algorithm.
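
As a rough illustration of the parameter optimization stage, a generic grid search over candidate parameter combinations can be sketched as follows (parameter names and values are hypothetical, not the paper's actual grids):

```python
from itertools import product

def grid_search(train_fn, score_fn, param_grid):
    """Pick the best parameter combination on the training set (generic sketch).

    param_grid maps parameter names to candidate values, e.g.
    {"C": [1, 10, 100], "sigma": [0.1, 1.0]}; the names are illustrative only.
    """
    keys = list(param_grid)
    best_params, best_score = None, float("-inf")
    for values in product(*(param_grid[k] for k in keys)):
        params = dict(zip(keys, values))
        model = train_fn(**params)          # fit on the training set
        score = score_fn(model)             # e.g. accuracy on held-out training folds
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score
```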

The experiment is verified from the following three aspects: (1) Verify that the size of the core set of the Fast-RBF algorithm is much smaller than the training set's scale, which speeds up the modeling of the algorithm. (2) Verify that the prediction capability of the Fast-RBF algorithm is comparable to that of the RBF algorithm. (3) Verify that the modeling time of the Fast-RBF algorithm on large datasets is much smaller than that of the RBF algorithm.

For the experimental environment, the operating system is Windows 10; the processor is an Intel i5 2.71 GHz CPU; the memory is 8 GB; and the main application software is MATLAB R2015a.

4.1. Experimental Preparation

The data used in this section come from MRI scans of five subjects recruited under a University Hospitals Cleveland Medical Center Institutional Review Board protocol. Before the experiment, a bounding box is first used to frame the region to be identified, as shown in Figure 3. Next, we train and test on the data from the region of interest in the abdominal organ map; the experiment classifies the liver and kidneys within this region of interest.

For each case, we extract local textural features from four different types of abdominal 3D MR images, namely, fat, water, in-phase (IP), and opposed-phase (OP), as the input of the algorithm. Noise cannot be avoided in these real data and would affect the final image recognition results. Therefore, this paper adopts the method proposed in [30, 31] to design the convolution kernels shown in Table 1, preprocesses the experimental data, and performs feature extraction.

In addition, we also consider the pixel spacing of the MR images and adopt a grid-division strategy. Let (x, y) represent the position of a pixel. We combine these coordinates with the features extracted above and obtain a six-dimensional feature vector.
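
As an illustration of how the six-dimensional per-pixel input could be assembled (a sketch with illustrative array names; the actual intensities are first filtered with the convolution kernels of Table 1):

```python
import numpy as np

def pixel_features(fat, water, ip, op):
    """Assemble the six-dimensional per-pixel input (illustrative sketch).

    fat, water, ip, op: 2D arrays of equal shape holding the (already filtered)
    intensities of the four MR sequences for one slice; the grid coordinates
    (x, y) of each pixel provide the remaining two dimensions.
    """
    h, w = fat.shape
    ys, xs = np.mgrid[0:h, 0:w]                      # pixel positions
    feats = np.stack([fat, water, ip, op, xs, ys], axis=-1)
    return feats.reshape(-1, 6)                      # one 6-D row per pixel
```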

4.2. Experimental Method

We define the liver of the region of interest in abdominal MR images as class A, the kidneys as class B, and other tissues as class C. Therefore, this is a multiclassification problem. We train “liver (class A)-kidney (class B),” “liver (class A)-other tissue (class C),” and “kidney (class B)-other tissue (class C)” to obtain three classification results; the final result is then determined by voting. The voting method is as follows:

Let $v_A$, $v_B$, and $v_C$ denote the votes of classes A, B, and C, initialized to zero.

For the classifier "liver (A)-kidney (B)", if the sample is predicted as class A, then $v_A = v_A + 1$; otherwise, $v_B = v_B + 1$.

For the classifier "liver (A)-other tissue (C)", if the sample is predicted as class A, then $v_A = v_A + 1$; otherwise, $v_C = v_C + 1$.

For the classifier "kidney (B)-other tissue (C)", if the sample is predicted as class B, then $v_B = v_B + 1$; otherwise, $v_C = v_C + 1$.

The sample is finally assigned to the class with the largest of $v_A$, $v_B$, and $v_C$.
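
A minimal sketch of this one-vs-one voting, assuming each pairwise classifier returns a Boolean prediction per sample (function and array names are illustrative):

```python
import numpy as np

def vote_one_vs_one(pred_ab, pred_ac, pred_bc):
    """Combine the three pairwise classifiers by majority voting (sketch).

    pred_ab[i] is True when the "liver (A) vs. kidney (B)" classifier assigns
    sample i to class A, and analogously for the other two classifiers.
    Returns 0 for liver (A), 1 for kidney (B), and 2 for other tissue (C).
    """
    votes = np.zeros((pred_ab.shape[0], 3), dtype=int)
    votes[:, 0] += pred_ab      # A-B classifier: a vote for A, otherwise for B
    votes[:, 1] += ~pred_ab
    votes[:, 0] += pred_ac      # A-C classifier: a vote for A, otherwise for C
    votes[:, 2] += ~pred_ac
    votes[:, 1] += pred_bc      # B-C classifier: a vote for B, otherwise for C
    votes[:, 2] += ~pred_bc
    return votes.argmax(axis=1)
```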

The classification accuracy is used to measure the performance of the algorithm.

4.3. Experimental Results

Cases 1-4 contain a total of 59,904 data points. We randomly selected 10,000 data points, 20,000 data points, 30,000 data points, 40,000 data points, 50,000 data points, and 59,904 data points for training. Case 5, which contains 16,896 data points, was used as the test set. The experiment was repeated 10 times for each training set size to verify the advantages of the proposed method.

4.3.1. Core Set of the Fast-RBF Algorithm

Figure 4 shows the average total number of core-set samples for the three classifiers at different training set sizes. The total is between 240 and 300, which is much smaller than the training set size. Replacing all the samples with the core set in the model construction step therefore greatly improves the operational efficiency.

4.3.2. Prediction Ability of the Fast-RBF Algorithm

It can be seen from Table 2 that both algorithms achieve a good generalization performance. However, as the amount of training data increases, the RBF algorithm requires all samples to participate in the modeling step, so it is more resource-constrained; when the training set exceeds 30,000 data points, the model can no longer be solved in our experimental environment. The Fast-RBF algorithm uses the core-set technique to address this problem: key samples are added to the core set one by one, and the average size of the core set does not exceed 300, so it can process a larger dataset while achieving a generalization ability comparable to that of the RBF algorithm. The organ classification results are shown in Figure 5.

4.3.3. Time Performance of the Fast-RBF Method

Table 3 shows that the modeling time of both algorithms grows steadily with increasing sample size. When the dataset contains 30,000 data points, the average modeling time of the RBF algorithm is 7,580 seconds, while that of the Fast-RBF algorithm is 15.2609 seconds; the modeling time of the Fast-RBF algorithm is thus far smaller. In addition, when the dataset exceeds 30,000 data points, the RBF algorithm fails to run.

4.4. Experimental Conclusion

The experiments show that the Fast-RBF algorithm can be used for organ recognition in MR images. Its advantage is that, while maintaining the prediction accuracy, it requires far less modeling time than the RBF algorithm on large datasets, which makes it highly practical.

5. Conclusion

Our studies are based on MRI of the abdomen, a challenging body section. We proposed the Fast-RBF algorithm, which is suitable for rapid training on large datasets. By introducing the ε-insensitive loss function, borrowing the structural risk term and kernel method from the support vector machine, and using the core-set principle, the proposed algorithm can handle large sample sizes. This method can quickly process large datasets and is suitable for medical image processing.

The method proposed in this paper is a supervised learning method. The training samples need to be labeled, and the workload of data preparation is large. In the future, we will further study the semisupervised abdominal image recognition method in which only a small number of class labels are needed to achieve image processing.

Data Availability

Data sharing is not available for our study, as the experimental data were provided by our collaboration partners at the University Hospitals Cleveland Medical Center, OH, USA. Without their permission, we cannot share any of our data with others.

Additional Points

Human and Animal Rights. The experimental data are from five subjects recruited by the University Hospitals Cleveland Medical Center Institutional Review Board. We used these data in compliance with the board's requirements.

Informed consent was obtained from all the individual participants included in this study.

Disclosure

The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health, USA.

Conflicts of Interest

The authors declare that there are no conflicts of interest.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China under grants 61772241 and 61702225, by the Natural Science Foundation of the Jiangsu Higher Education Institutions of China under grant 18KJB520048, by the Science and Technology Demonstration Project of Social Development of Wuxi under grant WX18IVJN002, and by the Jiangsu 333 Expert Engineering Project. The research in this publication was also supported by the National Cancer Institute of the National Institutes of Health, USA, under award number R01CA196687.