Article

3D Liver and Tumor Segmentation with CNNs Based on Region and Distance Metrics

Yi Zhang, Xiwen Pan, Congsheng Li and Tongning Wu
1 China Academy of Information and Communications Technology, Beijing 100191, China
2 Beijing University of Posts and Telecommunications, Beijing 100876, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2020, 10(11), 3794; https://doi.org/10.3390/app10113794
Submission received: 30 April 2020 / Revised: 21 May 2020 / Accepted: 25 May 2020 / Published: 29 May 2020
(This article belongs to the Special Issue Medical Artificial Intelligence)

Abstract

Liver and liver tumor segmentation based on abdominal computed tomography (CT) images is an essential step in computer-assisted clinical interventions. However, liver and tumor segmentation remains a difficult problem in the medical image processing field, owing to the anatomical complexity of the liver and the poor demarcation between the liver and other nearby organs in the image. Existing 3D automatic liver and tumor segmentation algorithms based on fully convolutional networks, such as V-Net, have used loss functions based on integration (summation) over a segmented region (like Dice or cross-entropy). Unfortunately, the numbers of foreground and background voxels are usually highly imbalanced in liver and tumor segmentation tasks. This causes the value of regional losses to vary greatly between segmentation classes and affects training stability and performance. In the present study, an improved V-Net algorithm was applied for 3D liver and tumor segmentation based on region and distance metrics. The distance metric-based loss functions use a distance metric on the contour (or shape) space rather than on regions. The model was jointly trained with the original regional loss and three distance-based loss functions (Boundary (BD) loss, Hausdorff (HD) loss, and Signed Distance Map (SDM) loss) to address the problem of highly unbalanced liver and tumor segmentation. In addition, the algorithm was tested on two databases, LiTS 2017 (Technical University of Munich, Munich, Germany, 2017) and 3D-IRCADb (Research Institute against Digestive Cancer, Strasbourg Cedex, France, 2009), and the results demonstrated the effectiveness of the improvement.

1. Introduction

Automatic segmentation of the liver and related lesions is a vital step toward quantitative biomarkers for computer-aided diagnosis and clinical decision support systems [1]. Nonetheless, liver segmentation remains a challenge in the medical image processing field due to the anatomical complexity of the liver and the poor demarcation between the liver and other neighboring organs [2]. Accurate measurements from computed tomography (CT) images, such as the location, shape, and volume of the tumor, together with the functional liver volume, help physicians evaluate hepatocellular carcinoma (HCC) and plan treatment [3]. However, manually outlining the target organ on every slice is highly demanding and effort-consuming, and the obtained results are subjective [1].
Two grand challenge benchmarks for segmenting the liver and related lesions were organized in coordination with the MICCAI (Medical Image Computing and Computer-Assisted Intervention Society) conference in 2007 and 2008, respectively [4,5]. Several approaches based on hand-crafted features were proposed for liver and lesion segmentation based on CT images. Thresholding [6,7], graph cut and level set techniques [8,9,10], region growing, and deformable model-based methods [11,12] have all been applied to segment the liver and related lesions. However, those methods require substantial human intervention, which may introduce bias and mistakes. Therefore, it is necessary to develop automatic, end-to-end approaches to segment tumors in CT images [13].
The scientific community has paid great attention to deep Convolutional Neural Networks (CNN) for computer vision tasks, including object recognition, classification, and segmentation [14,15,16,17]. Similarly, novel deep learning-based segmentation approaches have been put forward for medical image analysis, achieving highly competitive results compared with state-of-the-art methods [18,19,20,21,22]. End-to-end CNN-based approaches have proven effective at analyzing image appearance, which has motivated researchers to employ them for fully automatic segmentation of the liver and related lesions in CT volumes.
Existing deep learning-based liver and tumor segmentation studies fall roughly into two classes: (1) 2D Fully Convolutional Networks (FCN), such as U-Net [23], multi-channel FCN [24], and VGG (Oxford Visual Geometry Group)-based FCN [25]; and (2) 3D FCN, in which 2D convolutions are replaced with 3D convolutions to handle volumetric input data [26,27].
The 2D FCN-based methods use 2D slices extracted from 3D volumes for the segmentation task. Specifically, single slices or triplets of neighboring slices cropped from the volumetric images are fed into the 2D FCNs [24,25], and the resulting 2D segmentation maps are stacked to produce a segmentation volume. Sun et al. [24] designed a multi-channel fully convolutional network (MC-FCN) to segment liver tumors from multi-phase contrast-enhanced CT images. Since each phase of the contrast-enhanced data provides information about pathological features, fusion feature maps can be generated by merging features from different channels. However, the spatial structural organization of organs is not considered, and the volumetric information is not fully exploited. Even when neighboring slices are used, spatial context remains insufficiently explored, which may degrade segmentation performance [3].
The 3D FCN-based methods can avoid discontinuities between adjacent slices. For instance, Çiçek et al. introduced a 3D U-Net for volumetric segmentation that learns from sparsely annotated volumetric images [28]. The network extended the earlier U-Net [18] architecture by replacing all 2D operations with their 3D counterparts, and the implementation performed on-the-fly elastic deformations for efficient data augmentation during training. Milletari et al. [29] put forward the V-Net architecture, a 3D variant of U-Net, to segment 3D images directly with 3D convolutional layers and an objective function based on the Dice coefficient. A suitable loss function is an important factor in how useful a segmentation is for its intended task. Nonetheless, the widely used loss functions for 3D FCNs, including cross-entropy and Dice, are based on integrals (summations) over segmentation regions. Under high class imbalance (which is common in liver and tumor segmentation), such regional losses take substantially different values for different segmentation classes, which can harm training stability and performance. Kervadec et al. [30] proposed the concept of boundary loss, in which the distance metric is formed on the contour (or shape) space rather than on regions. It mitigates the difficulties of regional losses under substantially imbalanced segmentation, since integrals are taken along the boundary (interface) between regions rather than as imbalanced integrals over the regions themselves.
Based on similar ideas, Karimi et al. [31] proposed a Hausdorff distance (HD) loss based on distance metrics. Although HD is commonly used to evaluate image segmentation algorithms, those algorithms rarely aim at minimizing HD directly. An "HD-inspired" loss function was proposed in [31] that enables stable training of segmentation models with the goal of directly reducing HD.
Moreover, Xue et al. [32] proposed a distance-based loss function named the Signed Distance Map (SDM) loss. Because the signed distance map computed from the object boundary contour corresponds one-to-one with a binary segmentation map, they learned the SDM directly from medical scans. Their method converts the segmentation task into SDM prediction, which retains excellent segmentation performance while achieving better smoothness and shape continuity.
In this paper, an improved V-Net based on distance metrics was utilized for 3D liver and tumor segmentation. Three distance-based loss functions, namely Boundary (BD) loss, HD loss, and SDM loss, were each used in combination with the original V-Net loss to address the problem of highly unbalanced liver and tumor segmentation. In addition, the algorithm was tested on two databases, and the results demonstrated the effectiveness of the improvement.

2. Materials and Methods

2.1. Overall Framework

In this paper, we trained the 3D V-Net using the above-mentioned three distance-based loss functions and the regional loss function jointly. Figure 1 shows the 3D liver and tumor segmentation framework utilized in this paper.
In the training stage, the 3D data was fed into the V-Net model for feature extraction; then the regional loss and the three distance-based loss functions (denoted as $Loss_{Reg}$, $Loss_{BD}$, $Loss_{HD}$, and $Loss_{SDM}$) were combined through variable weight values and used to jointly train the liver and tumor segmentation model. The loss functions in this paper were denoted as:
$$Loss_{BD} = \alpha \, Loss_{Reg} + (1 - \alpha) \, Loss_{Boundary} \tag{1}$$

$$Loss_{HD} = \alpha \, Loss_{Reg} + (1 - \alpha) \, Loss_{Hausdorff} \tag{2}$$

$$Loss_{SDM} = \alpha \, Loss_{Reg} + (1 - \alpha) \, Loss_{SignedDistanceMap} \tag{3}$$
where $Loss_{Reg}$ is the regional loss function utilized in the original V-Net architecture; these loss functions are described in detail in Section 2.2. The distance-based loss functions were used to assist the regional loss function in fine-tuning the trained models. Therefore, at the beginning of training, $\alpha$ was set to 1, meaning the models were trained using only the regional loss and the distance-based losses were not involved in the loss computation. When the training reached a plateau, $\alpha$ was gradually decreased until it reached a value of 0.01. As found in our experiments, this training strategy was more effective than joint training from scratch.
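To make the combination concrete, the following is a minimal PyTorch-style sketch of Equations (1)–(3); the names `loss_reg` and `loss_dist` are illustrative placeholders for the regional loss and any one of the three distance-based losses, not the authors' released code.

```python
import torch

def combined_loss(loss_reg: torch.Tensor,
                  loss_dist: torch.Tensor,
                  alpha: float) -> torch.Tensor:
    """Equations (1)-(3): a convex combination of the regional loss and
    one distance-based loss (boundary, Hausdorff, or SDM)."""
    return alpha * loss_reg + (1.0 - alpha) * loss_dist
```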

2.2. Related Work

● V-Net
The 3D V-Net architecture, a 3D variant of U-Net, was put forward to segment 3D images. The model was trained end-to-end on Magnetic Resonance Imaging (MRI) volumes depicting the prostate and learned to predict the segmentation of the whole volume at once [29]. Another important contribution of that work was the introduction of a new loss layer for segmentation tasks based on the Dice coefficient, a commonly used regional overlap measure in medical image analysis [33]. The schematic representation of the V-Net architecture is provided in Figure 2.
The input of the 3D V-Net is 3D data. The first half of the network forms the compression path, while the second half decompresses the signal until it reaches the size of the original input. The first half is divided into stages operating at different resolutions, each containing one to three convolutional layers. The Parametric Rectified Linear Unit (PReLU) is applied throughout the network. The second half of the network extracts features and expands the spatial support of the low-resolution feature maps; a deconvolution operation is performed after each stage to increase the size of the feature maps. In addition, the network connects features collected in the early stages of the compression path to the corresponding stages of the decompression path. Thus, fine-grained details are preserved, improving the quality of the final contour estimation. The regional loss functions, namely cross-entropy and Dice loss, are jointly utilized in the V-Net, denoted as:
$$Loss_{Reg} = Loss_{seg} + Loss_{Dice} \tag{4}$$
where $Loss_{seg}$ is the cross-entropy loss function and $Loss_{Dice}$ is the Dice loss function.
● Cross-entropy loss
The voxel-wise cross-entropy loss is one of the most commonly used loss functions for image segmentation tasks. This loss examines each voxel individually, comparing the class prediction with the ground truth. The cross-entropy loss function is denoted as:
$$Loss_{seg}(p, g) = -\frac{1}{N} \sum_{i=1}^{N} \left[ g_i \log p_i + (1 - g_i) \log (1 - p_i) \right] \tag{5}$$
where $p_i$ represents the probability that voxel $i$ belongs to the foreground, and $g_i$ represents the corresponding ground-truth label.
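For reference, a minimal sketch of the voxel-wise loss in Equation (5) for a binary foreground/background problem follows; tensor shapes and names are illustrative assumptions.

```python
import torch

def cross_entropy_loss(probs: torch.Tensor,
                       target: torch.Tensor,
                       eps: float = 1e-7) -> torch.Tensor:
    """Voxel-wise binary cross-entropy, Equation (5).
    probs: predicted foreground probabilities in (0, 1);
    target: ground-truth labels in {0, 1}; both of shape (D, H, W)."""
    probs = probs.clamp(eps, 1.0 - eps)  # guard against log(0)
    return -(target * probs.log()
             + (1.0 - target) * (1.0 - probs).log()).mean()
```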
● Dice loss
A novel objective function based on the Dice coefficient (range 0–1) was utilized in V-Net [29]. Liver and tumor segmentation is a binary segmentation task, in which the softmax layer outputs the probability of each voxel belonging to the foreground or background. The Dice coefficient $D$ between two binary volumes is calculated as:
$$D = \frac{2 \sum_{i}^{N} p_i g_i}{\sum_{i}^{N} p_i^2 + \sum_{i}^{N} g_i^2} \tag{6}$$
where the sums run over the $N$ voxels of the predicted binary segmentation volume $p_i \in P$ and the ground-truth binary volume $g_i \in G$. This Dice formulation can be differentiated with respect to the $j$-th voxel of the prediction, yielding the gradient:
$$\frac{\partial D}{\partial p_j} = 2 \left[ \frac{g_j \left( \sum_{i}^{N} p_i^2 + \sum_{i}^{N} g_i^2 \right) - 2 p_j \left( \sum_{i}^{N} p_i g_i \right)}{\left( \sum_{i}^{N} p_i^2 + \sum_{i}^{N} g_i^2 \right)^2} \right] \tag{7}$$
Using this loss layer, it is no longer necessary to assign loss weights to different classes of samples during the training phase. Furthermore, the Dice loss formula applies equally to 2D and 3D data.
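A minimal sketch of the soft Dice loss implied by Equation (6) follows; because the sums are global, the same code works for 2D and 3D tensors, as noted above.

```python
import torch

def dice_loss(probs: torch.Tensor,
              target: torch.Tensor,
              eps: float = 1e-7) -> torch.Tensor:
    """Soft Dice loss, 1 - D with D from Equation (6)."""
    intersection = (probs * target).sum()
    denominator = (probs ** 2).sum() + (target ** 2).sum()
    return 1.0 - 2.0 * intersection / (denominator + eps)
```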
● Boundary loss
In [30], Kervadec et al. proposed a boundary loss in which distance is measured on the space of contours (or shapes) rather than regions. The boundary loss helps alleviate the issues of regional losses in substantially imbalanced segmentation tasks, and it also provides information complementary to regional losses. An L2 distance (Euclidean distance) on the space of shapes (or contours) is expressed as a regional integral, which completely avoids local differential computations involving contour points. The non-symmetric L2 loss for regularizing the deviation of the boundary of a segmentation mask $S$ from the ground truth $G$ is written as follows:
$$Dist(p, g) = \int_{\partial G} \| p_i - g_i \|^2 \, dg \tag{8}$$
where each boundary point $g_i$ on the ground-truth boundary $\partial G$ is matched with its counterpart $p_i$ on the prediction boundary $\partial P$. The boundary loss was used to segment brain lesions in Magnetic Resonance (MR) images in [30], where the Dice and Hausdorff scores increased by 8% and 10%, respectively, relative to a baseline using the generalized Dice loss [30].
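The regional-integral form of the boundary loss can be implemented with a precomputed level-set (signed distance) map of the ground truth. The sketch below follows the formulation in [30] under the assumption that `probs` is the softmax foreground probability; it is an illustration, not the authors' code.

```python
import numpy as np
import torch
from scipy.ndimage import distance_transform_edt

def gt_level_set(gt: np.ndarray) -> np.ndarray:
    """Signed distance map of a ground-truth mask: negative inside the
    object, positive outside; precomputed once per training volume."""
    outside = distance_transform_edt(gt == 0)
    inside = distance_transform_edt(gt == 1)
    return outside - inside

def boundary_loss(probs: torch.Tensor, level_set: torch.Tensor) -> torch.Tensor:
    """Boundary loss as a regional integral: foreground probabilities
    weighted by the signed distance map, so probability mass is pushed
    into the (negative) interior of the ground truth.
    level_set = torch.from_numpy(gt_level_set(gt_mask)) moved to the
    same device as probs."""
    return (probs * level_set).mean()
```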
● Hausdorff Loss
In [31], Karimi et al. put forward a loss function based on direct HD reduction for training CNN-based segmentation algorithms. They proposed three approaches for estimating the HD from the segmentation probability map. The first used distance transforms of the segmentation boundaries. The second applied morphological erosion to the difference between the ground-truth and estimated segmentation maps. The third applied convolutions with spherical kernels of different radii to the segmentation probability map. From these three HD estimates, three corresponding loss functions were put forward for training with the aim of reducing HD. Karimi et al. optimized a Hausdorff distance-based objective comparing the estimated segmentation with the ground truth, shown below:
$$f_{HD}(p, g) = Loss(p, g) + \lambda \left( 1 - \frac{2 \sum_{\Omega} (p \circ g)}{\sum_{\Omega} (p^2 + g^2)} \right) \tag{9}$$
where the second term is the Dice loss and the first term, $Loss(p, g)$, is the HD-based term; the parameter $\lambda$ balances the HD-based loss term against the Dice loss term. Let $\Omega$ denote the grid on which the image is defined, and let $p$ and $g$ denote the predicted and ground-truth segmentations, respectively. The distance-transform-based HD term is:
$$Loss(p, g) = \frac{1}{|\Omega|} \sum_{\Omega} \left( (p - g)^2 \circ (d_p^{\alpha} + d_g^{\alpha}) \right) \tag{10}$$
The parameter $\alpha$ determines the penalty level for large errors. $d_g$ stands for the distance map of the ground-truth segmentation, i.e., the unsigned distance to the boundary $\partial g$; similarly, $d_p$ represents the distance to $\partial p$. The symbol $\circ$ indicates the Hadamard (element-wise) product. In this paper, the HD loss function was utilized to train the V-Net model jointly with the regional loss function.
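Of the three estimation approaches, the distance-transform variant is the simplest to sketch. Below is a hedged approximation of Equation (10), assuming `d_p` and `d_g` are unsigned distance transforms of the thresholded prediction and the ground truth (computable with `distance_transform_edt` as in the boundary-loss sketch above).

```python
import torch

def hausdorff_dt_loss(probs: torch.Tensor,
                      target: torch.Tensor,
                      d_p: torch.Tensor,
                      d_g: torch.Tensor,
                      alpha: float = 2.0) -> torch.Tensor:
    """Distance-transform HD loss term, Equation (10): squared errors
    weighted by distances to the predicted and ground-truth boundaries,
    so mistakes far from either boundary are penalized more heavily."""
    return ((probs - target) ** 2 * (d_p ** alpha + d_g ** alpha)).mean()
```

Note that `d_p` has to be recomputed from the binarized prediction as training progresses, which is the main computational overhead of this variant.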
● Signed Distance Map Loss
Xue et al. [32] put forward a novel algorithm to address problems in existing deep learning-based organ segmentation systems, which frequently produce results that fail to capture the target organ shape and lack smoothness. Their method converts the segmentation task into SDM prediction, exploiting the one-to-one mapping between the SDM and the binary segmentation map. For a target organ and a point $x$ in the 3D medical image, let $y$ be the closest point on the organ surface; the SDM, a mapping from $\mathbb{R}^3$ to $\mathbb{R}$, is defined as:
$$\Phi(x) = \begin{cases} 0, & x \in S \\ -\inf_{y \in S} \| x - y \|_2, & x \in \Omega_{in} \\ +\inf_{y \in S} \| x - y \|_2, & x \in \Omega_{out} \end{cases} \tag{11}$$
where $S$ represents the target organ surface, and $\Omega_{in}$ and $\Omega_{out}$ denote the interior and exterior of the target organ, respectively. That is, the absolute SDM value indicates the distance from a given point to the closest point on the organ surface, whereas the sign indicates whether the point lies inside or outside the organ; the zero level set corresponds to points on the organ surface. The SDM loss is defined as:
$$L_{SDM} = L_1 - \sum_{t=1}^{C} \frac{g_t \, p_t}{g_t \, p_t + p_t^2 + g_t^2} \tag{12}$$
where $L_1$ denotes the $L_1$ loss, i.e., the $L_1$ difference between the predicted and ground-truth SDM values; $g_t$ represents the ground-truth SDM, and $p_t$ denotes the predicted SDM.
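A minimal sketch of Equation (12) follows, assuming the network regresses the SDM directly and the ground-truth SDM has been precomputed from the mask (e.g., with a distance transform as in the boundary-loss sketch above); the reduction by mean rather than a per-class sum is an assumption for illustration.

```python
import torch

def sdm_loss(pred_sdm: torch.Tensor,
             gt_sdm: torch.Tensor,
             eps: float = 1e-7) -> torch.Tensor:
    """SDM loss, Equation (12): an L1 term on the regressed distance map
    plus a product term that heavily penalizes sign disagreements
    (predicting 'inside' where the ground truth is 'outside')."""
    l1 = (pred_sdm - gt_sdm).abs().mean()
    prod = pred_sdm * gt_sdm
    # p*g + p^2 + g^2 >= 0, so eps alone guards the division
    product = -(prod / (prod + pred_sdm ** 2 + gt_sdm ** 2 + eps)).mean()
    return l1 + product
```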

3. Experiment and Discussion

3.1. Experimental Preparation and Protocols

We evaluated the algorithm on two liver and tumor segmentation datasets; examples of the data are shown in Figure 3.
LiTS 2017: The Liver Tumor Segmentation Challenge (LiTS) dataset [34] provides 201 contrast-enhanced 3D abdominal CT scans, with liver and tumor segmentation labels at a resolution of 512 × 512 per axial slice; 131 scans include ground-truth labels, and 70 do not. The in-plane resolution ranges from 0.60 mm to 0.98 mm, and the slice spacing from 0.45 mm to 5.0 mm. We clipped the intensity values to the range [−300, 400] HU to ignore irrelevant details and normalized the images to [0, 1].
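For reproducibility, the intensity preprocessing just described amounts to the following sketch; the window boundaries are those stated in the text.

```python
import numpy as np

def preprocess_ct(volume_hu: np.ndarray) -> np.ndarray:
    """Clip CT intensities to [-300, 400] HU to suppress irrelevant
    structures, then rescale linearly to [0, 1]."""
    clipped = np.clip(volume_hu, -300.0, 400.0)
    return (clipped + 300.0) / 700.0
```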
3D-IRCADb: The 3D Image Reconstruction for Comparison of Algorithm Database (3D-IRCADb) contains anonymized medical images of several groups of patients, together with manual segmentations of various structures of interest by clinical experts. The database consists of 3D CT scans of 10 women and 10 men with liver tumors. The in-plane resolution ranges from 0.57 mm to 0.87 mm, and the slice spacing from 1.6 mm to 4.0 mm. All scans were acquired in the arterial phase with the patient in the inhaled (breath-hold) position.
We trained four 3D V-Net models, one with the regional loss alone and three with the added distance-based loss functions, on the two databases. The experiments were implemented in Torch, optimized with the Adam algorithm [35], and run on four NVIDIA Tesla V100 GPUs (Gigabyte Technology, Beijing, China). The deep models were trained with a batch size of 2 and a learning rate of 0.001; the learning rate was divided by five after 200 epochs, and training ended after 1400 epochs. To fairly compare the different loss functions, the models were tested on the test set every 40 epochs, and the best models were used for the comparison tests.
As mentioned above, the value of $\alpha$ in Equations (1)–(3) was set to 1 from the start of training until 400 epochs. Afterward, it was reduced by 0.01 every 10 epochs until reaching 0.01. Thus, only the regional loss was used at the beginning of training, after which the influence of the distance-based losses gradually increased. In our experiments, this simple scheduling strategy consistently gave better results than any constant value of $\alpha$.
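The schedule just described can be written as a small helper; this is a sketch of the stated rule, not the authors' released code.

```python
def alpha_schedule(epoch: int) -> float:
    """alpha = 1 for the first 400 epochs (regional loss only), then
    reduced by 0.01 every 10 epochs until it reaches 0.01."""
    if epoch < 400:
        return 1.0
    return max(0.01, 1.0 - 0.01 * ((epoch - 400) // 10))
```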

3.2. Experimental Results

3.2.1. Quantitative Evaluation

In the LiTS 2017 dataset, the 131 labeled scans were used as the experimental data: 102 for training, 20 for validation, and nine for testing. In the 3D-IRCADb dataset, 20 scans were selected: 10 for training, five for validation, and five for testing. Models with the HD loss, BD loss, and SDM loss functions were trained on these two datasets, respectively. We used the Dice Similarity Coefficient (DSC), the 95th percentile of the Hausdorff Distance (HD95), the average symmetric surface distance (ASD), the True Negative Rate (TNR, specificity), and the True Positive Rate (TPR, sensitivity) as evaluation indicators, defined as follows:
$$DSC = \frac{2 |TP|}{2 |TP| + |FN| + |FP|} \tag{13}$$
where TP, TN, FP, and FN denote the true positives, true negatives, false positives, and false negatives, respectively. The HD95 is defined as the 95th percentile of the Hausdorff distance between the predicted delineation and the ground-truth annotation, a common indicator in image segmentation tasks.
If $S(A)$ denotes the set of surface voxels of $A$, the shortest distance of an arbitrary voxel $v$ to $S(A)$ is defined as:
$$d(v, S(A)) = \min_{s_A \in S(A)} \| v - s_A \| \tag{14}$$
where $\| \cdot \|$ denotes the Euclidean distance. The other indicators were calculated according to the following equations:
$$ASD(A, B) = \frac{1}{|S(A)| + |S(B)|} \left( \sum_{s_A \in S(A)} d(s_A, S(B)) + \sum_{s_B \in S(B)} d(s_B, S(A)) \right) \tag{15}$$

$$TNR = \frac{|TN|}{|TN| + |FP|} \tag{16}$$

$$TPR = \frac{|TP|}{|TP| + |FN|} \tag{17}$$
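A minimal sketch of the overlap-based indicators in Equations (13), (16), and (17) computed from binary masks follows; the surface-distance metrics HD95 and ASD additionally require extracting surface voxels and computing distance transforms, as in Equations (14) and (15).

```python
import numpy as np

def overlap_metrics(pred: np.ndarray, gt: np.ndarray):
    """DSC, TNR, and TPR from boolean prediction and ground-truth masks."""
    tp = np.logical_and(pred, gt).sum()
    tn = np.logical_and(~pred, ~gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    dsc = 2.0 * tp / (2.0 * tp + fn + fp)
    tnr = tn / (tn + fp)
    tpr = tp / (tp + fn)
    return dsc, tnr, tpr
```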
Note that previous studies used data of different resolutions for training and testing, such as 512 × 512 [36], 256 × 256 [13], 224 × 224 [3], and 160 × 160 [26]. Therefore, we first trained the original V-Net models on data of different sizes to evaluate accuracy and computational efficiency. Table 1 summarizes the DSC values for the liver and tumor segmentation tasks of the original V-Net models trained on data at three resolutions (512 × 512, 256 × 256, and 128 × 128) on the two databases. The results show that the model trained at 512 × 512 improves segmentation by about 2% for the liver and 7% for the tumor compared with the models trained at lower resolutions. Downsampling has a particularly negative influence on the tumor segmentation task, where the targets are smaller. Therefore, we trained and tested the models on data at 512 × 512 resolution.
Table 2 summarizes the results obtained on the LiTS 2017 and 3D-IRCADb datasets. Compared with the models trained using only the region-based loss function ($L_{Reg}$), the segmentation results were improved by the joint loss functions ($L_{HD}$, $L_{BD}$, and $L_{SDM}$) across all indicators; for each indicator, the best result is achieved by one of the distance-based losses. For the liver and tumor segmentation tasks, the distance-based loss functions improved the DSC by about 1.2% and 6.5% on the LiTS 2017 dataset, and by 1.9% and 5.9% on the 3D-IRCADb dataset, respectively. On the LiTS 2017 test set, the HD95 was reduced by 40.6% and 28.2%, while on the 3D-IRCADb test set it decreased by 52.5% and 29.6%. As for the ASD, it decreased by 45.3% and 42.4% on the LiTS 2017 test set, and by 29.8% and 24.2% on the 3D-IRCADb test set, respectively.
The distance-based loss functions also improved the TNR of the liver and tumor segmentation tasks on the test sets: by 2.5% and 10.6% on the LiTS 2017 dataset, and by 3.7% and 11.0% on the 3D-IRCADb dataset. The TPR of all models reached more than 99.7%, showing that for the liver segmentation task, the existing models already achieve high sensitivity [37].
The models did not perform as well on the 3D-IRCADb database as on the LiTS 2017 database, mainly due to the small scale of the 3D-IRCADb database, which results in insufficient training. Some studies have shown that using additional data for training significantly improves performance on the 3D-IRCADb database [8,9,24,27].
Generally, traditional region-based segmentation losses measure the agreement between the region defined by the network's softmax probability output and the corresponding ground-truth region. They assume that all samples and classes are equally important and therefore require a training set with balanced classes to generalize well. For unbalanced data, regional loss-based approaches lead to training instability and decision boundaries biased toward the majority classes.
Adding a distance-based loss function to the regional loss function for joint training can mitigate these issues. Instead of imbalanced integrals over the regions, the distance-based loss uses integrals over the inter-regional boundaries. It can therefore be easily combined with a regional loss for joint training to address the problem of imbalanced data in the liver and tumor segmentation task.
In addition, cross-validation experiments were conducted between the LiTS 2017 and 3D-IRCADb datasets: the models trained on the LiTS 2017 training set were tested on the 3D-IRCADb testing set, and the models trained on the 3D-IRCADb training set were tested on the LiTS 2017 testing set. The experimental results in Table 3 show that the LiTS 2017 testing results of the models trained on the 3D-IRCADb training set decreased slightly, due to the relatively small number of training samples. The models trained on the LiTS 2017 training set achieved impressive results on the 3D-IRCADb testing set, with hardly any decline. These results demonstrate the generalization ability of our algorithm.

3.2.2. Qualitative Evaluation

The qualitative results are shown in Figure 4. Visual inspection shows clear improvements when employing the distance-based loss functions. Especially in cases with a high imbalance between foreground and background voxels (such as rows 2, 4, and 5), the results of the jointly trained models were greatly improved. The model trained using only the regional loss function left many small regions (liver or tumor) incorrectly segmented, whereas adding distance-based loss functions for joint training improved the segmentation results to varying degrees. Furthermore, the model with the added SDM loss function obtained better results in most cases, which is also reflected in Table 2.

4. Comparison and Discussion

As shown in Equations (1)–(3), $\alpha$ is a hyperparameter that adjusts the proportion of the regional loss and the distance-based loss. Following the rule described earlier, its value was maintained at 1 during the first 400 epochs and gradually dropped to 0.01 during the next 1000 epochs. Since the models were tested on the test set every 40 epochs, the setting of $\alpha$ can be discussed based on the results of each test; a suitable interval for $\alpha$ on similar problems can be inferred from the $\alpha$ values of the optimal models on the different databases. As observed in Figure 5, the DSC coefficients of the jointly trained models improved on both databases as $\alpha$ decreased. The best results for each model were obtained at $\alpha$ values of 0.4–0.6. However, as $\alpha$ continued to decrease, the DSC coefficients of the models did not continue to improve and even exhibited a downward trend in some cases.
When the value of $\alpha$ is between 0.7 and 1.0, the loss function is dominated by the regional loss; as mentioned above, this may affect training stability and performance on highly imbalanced data. When the value of $\alpha$ is less than 0.4, the loss function is dominated by the distance-based loss, which may cause the optimization to fall into a local minimum. Thus, the recommended interval for the distance-based loss weight ($1 - \alpha$) in the joint training strategy is 0.4–0.6.
Some studies point out that using a distance-based loss too early may lead to convergence to a local minimum or saddle point [30]. In this paper, the solution was to use only the regional loss function during the first period of training to avoid falling into local minima; then, after training entered the plateau period, the distance-based loss weight was gradually increased to fine-tune the results. This strategy is conceptually similar to classical contour-based energies for level set segmentation, such as geodesic active contours [38], which also require additional regional terms to avoid trivial solutions. Taking the model based on the SDM loss function as an example, the experimental results in Table 4 show that the models using the training strategy proposed in this paper achieved better results on both databases.
A comparison of our approach with similar approaches is given in Table 5. Some values are missing because they were not reported in the original articles. On the LiTS 2017 dataset, the algorithm in this paper obtained the best results in the liver segmentation task and surpassed most methods in the tumor segmentation task: it achieved the highest DSC score, while its ASD score was slightly higher than the ASD scores in [3,45].
On the 3D-IRCADb dataset, the algorithm in this paper obtained the highest DSC score in the liver segmentation task, and its ASD score was worse only than the results in [47,48]. In the tumor segmentation task, our method obtained the best results on both the DSC and ASD indicators. It can therefore be concluded that the overall performance of our algorithm surpasses that of the other algorithms in the table on the two databases.
Models based on regional metrics use no spatial information and treat all prediction errors equally: a voxel error within an object that has already been detected counts as much as an error within an object that has been missed entirely. By contrast, because the distance-based losses rely on a distance map relative to the true boundary, such cases are penalized, which helps recover distant and small regions. Our algorithm therefore has advantages in tasks that require segmenting a large number of small objects.
It is worth noting that for models jointly trained with two or more loss functions, the weight of each loss function generally needs to be examined. Our experimental results showed that as the weight of the distance-based loss function increased, the performance did not continue to improve and even showed a downward trend, which requires special attention during the model training stage. It is also important that the distance-based loss function be added gradually for joint training, and only after the performance of the model on the validation set has entered the plateau period.

5. Conclusions

The present study aims to solve the problem of degraded performance in liver and tumor segmentation caused by the highly imbalanced numbers of foreground and background voxels. In this paper, an improved V-Net algorithm based on region and distance metrics is applied to the 3D liver and tumor segmentation task. Three distance-based loss functions are introduced to jointly train the model with the original regional loss function, improving training effectiveness and stability. Comparative experiments on the LiTS 2017 and 3D-IRCADb databases demonstrate the effectiveness of the improvement. Additionally, the optimal weight coefficient for joint training is discussed, and a new training strategy is proposed. Our findings shed new light on the solution of specific liver and tumor segmentation tasks.

Author Contributions

Conceptualization, Y.Z., C.L. and T.W.; Data curation, Y.Z.; Formal analysis, Y.Z.; Funding acquisition, T.W.; Investigation, Y.Z. and X.P.; Methodology, C.L. and T.W.; Project administration, T.W.; Resources, T.W.; Software, X.P. and C.L.; Supervision, T.W.; Validation, Y.Z. and X.P.; Visualization, X.P. and C.L.; Writing—original draft, Y.Z.; Writing—review & editing, Y.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Science and Technology Major Project (grant number 2018ZX10301201) and the National Natural Science Foundation Project (grant number 61971445).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Christ, P.F.; Elshaer, M.E.A.; Ettlinger, F.; Tatavarty, S.; Bickel, M. Automatic liver and lesion segmentation in CT using cascaded fully convolutional neural networks and 3D conditional random fields. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Athens, Greece, 17–21 October 2016; Springer: Cham, Switzerland, 2016; pp. 415–423. [Google Scholar]
  2. Kainmüller, D.; Lange, T.; Lamecker, H. Shape constrained automatic segmentation of the liver based on a heuristic intensity model. In Proceedings of the MICCAI Workshop 3D Segmentation in the Clinic: A Grand Challenge, Brisbane, Australia, 29 October 2007; pp. 109–116. [Google Scholar]
  3. Li, X.; Chen, H.; Qi, X.; Dou, Q.; Fu, C.-W.; Heng, P.-A. H-DenseUNet: Hybrid densely connected UNet for liver and tumor segmentation from CT volumes. IEEE Trans. Med. Imaging 2018, 37, 2663–2674. [Google Scholar] [CrossRef] [Green Version]
  4. Heimann, T.; Van Ginneken, B.; Styner, M.A.; Arzhaeva, Y.; Aurich, V.; Bauer, C.; Beck, A.; Becker, C.; Beichel, R.; Bekes, G.; et al. Comparison and evaluation of methods for liver segmentation from CT datasets. IEEE Trans. Med. Imaging 2009, 28, 1251–1265. [Google Scholar] [CrossRef]
  5. Deng, X.; Du, G. 3D segmentation in the clinic: A grand challenge II-liver tumor segmentation. In Proceedings of the MICCAI Workshop, New York, NY, USA, 7–10 September 2008. [Google Scholar]
  6. Soler, L.; Delingette, H.; Malandain, G.; Montagnat, J.; Ayache, N.; Koehl, C.; Dourthe, O.; Malassagne, B.; Smith, M.; Mutter, D.; et al. Fully automatic anatomical, pathological, and functional segmentation from CT scans for hepatic surgery. Comput. Aided Surg. 2001, 6, 131–142. [Google Scholar] [CrossRef]
  7. Moltz, J.H.; Bornemann, L.; Dicken, V.; Peitgen, H.-O. Segmentation of liver metastases in CT scans by adaptive thresholding and morphological processing. MICCAI Workshop 2008, 41, 195. [Google Scholar]
  8. Li, G.; Chen, X.; Shi, F.; Tian, J.; Xiang, D. Automatic liver segmentation based on shape constraints and deformable graph cut in CT images. IEEE Trans. Image Process. 2015, 24, 5315–5329. [Google Scholar] [CrossRef]
  9. Li, C.; Wang, X.; Eberl, S.; Yin, Y.; Chen, J.; Feng, D.D. A likelihood and local constraint level set model for liver tumor segmentation from CT volumes. IEEE Trans. Biomed. Eng. 2013, 60, 2967–2977. [Google Scholar]
  10. Linguraru, M.G.; Richbourg, W.J.; Liu, J.; Watt, J.M.; Pamulapati, V.; Wang, S. Tumor burden analysis on computed tomography by automated liver and tumor segmentation. IEEE Trans. Med. Imaging 2012, 31, 1965–1976. [Google Scholar] [CrossRef] [Green Version]
  11. Wong, D.; Liu, J.; Fengshou, Y.; Tian, Q.; Xiong, W.; Zhou, J.; Qi, Y.; Han, T.; Venkatesh, S.K.; Wang, S.-C. A semi-automated method for liver tumor segmentation based on 2D region growing with knowledge-based constraints. MICCAI Workshop 2008, 41, 159. [Google Scholar]
  12. Jimenez-Carretero, D.; Fernandez-de-Manuel, L.; Pascau, J.; Tellado, J.M.; Ramon, E.; Desco, M.; Santos, A.; Ledesma-Carbayo, M.J. Optimal multiresolution 3D level-set method for liver segmentation incorporating local curvature constraints. In Proceedings of the 2011 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Boston, MA, USA, 30 August–3 September 2011; pp. 3419–3422. [Google Scholar]
  13. Jin, Q.; Meng, Z.; Sun, C.; Wei, L.; Su, R. RA-UNet: A Hybrid Deep Attention-Aware Network to Extract Liver and Tumor in CT Scans. arXiv 2018, arXiv:1811.01328. [Google Scholar]
  14. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. In Proceedings of the 26th Annual Conference on Neural Information Processing Systems 2012, Lake Tahoe, NV, USA, 3–6 December 2012. [Google Scholar]
  15. Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 3431–3440. [Google Scholar]
  16. Chen, L.C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; Yuille, A.L. Semantic Image Segmentation with Deep Convolutional Nets and Fully Connected CRFs. arXiv 2014, arXiv:1412.7062. [Google Scholar]
  17. Zheng, S.; Jayasumana, S.; Romera-Paredes, B.; Vineet, V.; Su, Z.; Du, D.; Huang, C.; Torr, P.H.S. Conditional random fields as recurrent neural networks. In Proceedings of the IEEE International Conference on Computer Vision, Las Condes, Chile, 11–18 December 2015; pp. 1529–1537. [Google Scholar]
  18. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; Springer: Cham, Switzerland, 2015; pp. 234–241. [Google Scholar]
  19. Havaei, M.; Davy, A.; Warde-Farley, D.; Biard, A.; Courville, A.; Bengio, Y.; Pal, C.; Jodoin, P.-M.; Larochelle, H. Brain tumor segmentation with deep neural networks. Med. Image Anal. 2017, 35, 18–31. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  20. Prasoon, A.; Petersen, K.; Igel, C.; Lauze, F.; Dam, E.; Nielsen, M. Deep feature learning for knee cartilage segmentation using a triplanar convolutional neural network. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Nagoya, Japan, 22–26 September 2013; Springer: Berlin/Heidelberg, Germany, 2013; pp. 246–253. [Google Scholar]
  21. Kamnitsas, K.; Ledig, C.; Newcombe, V.F.J.; Simpson, J.P.; Kane, A.D.; Menon, D.K.; Rueckert, D.; Glocker, B. Efficient multi-scale 3D CNN with fully connected CRF for accurate brain lesion segmentation. Med. Image Anal. 2017, 36, 61–78. [Google Scholar] [CrossRef] [PubMed]
  22. Stollenga, M.F.; Byeon, W.; Liwicki, M.; Schmidhuber, J. Parallel multi-dimensional LSTM, with application to fast biomedical volumetric image segmentation. Adv. Neural Inf. Process. Syst. 2015, 28, 2998–3006. [Google Scholar]
  23. Chlebus, G.; Schenk, A.; Moltz, J.H.; Van Ginneken, B.; Hahn, H.K.; Meine, H. Automatic liver tumor segmentation in CT with fully convolutional neural networks and object-based postprocessing. Sci. Rep. 2018, 8, 15497. [Google Scholar] [CrossRef]
  24. Sun, C.; Guo, S.; Zhang, H.; Li, J.; Chen, M.; Ma, S.; Jin, L.; Liu, X.; Li, X.; Qian, X.; et al. Automatic segmentation of liver tumors from multiphase contrast-enhanced CT images based on FCNs. Artif. Intell. Med. 2017, 83, 58–66. [Google Scholar] [CrossRef]
  25. Ben-Cohen, A.; Diamant, I.; Klang, E.; Amitai, M.; Greenspan, H. Fully convolutional network for liver segmentation and lesions detection. In Deep Learning and Data labeling for Medical Applications; Springer: Cham, Switzerland, 2016; pp. 77–85. [Google Scholar]
  26. Dou, Q.; Chen, H.; Jin, Y.; Yu, L.; Qin, J.; Heng, P.-A. 3D deeply supervised network for automatic liver segmentation from CT volumes. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Athens, Greece, 17–21 October 2016; Springer: Cham, Switzerland, 2016; pp. 149–157. [Google Scholar]
  27. Lu, F.; Wu, F.; Hu, P.; Kong, D. Automatic 3D liver location and segmentation via convolutional neural network and graph cut. Int. J. Comput. Assist. Radiol. Surg. 2017, 12, 171–182. [Google Scholar] [CrossRef]
  28. Özgün, Ç.; Abdulkadir, A.; Lienkamp, S.S.; Brox, T.; Ronneberger, O. 3D U-Net: Learning Dense Volumetric Segmentation from Sparse Annotation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Athens, Greece, 17–21 October 2016; Springer: Cham, Switzerland, 2016; pp. 424–432. [Google Scholar]
  29. Milletari, F.; Navab, N.; Ahmadi, S.A. V-net: Fully convolutional neural networks for volumetric medical image segmentation. In Proceedings of the 2016 Fourth International Conference on 3D Vision (3DV), Stanford, CA, USA, 25–28 October 2016; pp. 565–571. [Google Scholar]
  30. Kervadec, H.; Bouchtiba, J.; Desrosiers, C.; Granger, E.; Dolz, J.; Ayed, I.B. Boundary loss for highly unbalanced segmentation. arXiv 2018, arXiv:1812.07032. [Google Scholar]
  31. Karimi, D.; Salcudean, S.E. Reducing the Hausdorff Distance in Medical Image Segmentation with Convolutional Neural Networks. arXiv 2019, arXiv:1904.10030. [Google Scholar] [CrossRef] [Green Version]
  32. Xue, Y.; Tang, H.; Qiao, Z.; Gong, G.; Yin, Y.; Qian, Z.; Huang, C.; Fan, W.; Huang, X. Shape-Aware Organ Segmentation by Predicting Signed Distance Maps. arXiv 2019, arXiv:1912.03849. [Google Scholar]
  33. Crum, W.R.; Camara, O.; Hill, D.L.G. Generalized overlap measures for evaluation and validation in medical image analysis. IEEE Trans. Med. Imaging 2006, 25, 1451–1461. [Google Scholar] [CrossRef]
  34. Bilic, P.; Christ, P.F.; Vorontsov, E.; Chlebus, G.; Chen, H.; Dou, Q.; Fu, C.-W.; Han, X.; Heng, P.-A.; Hesser, J. The liver tumor segmentation benchmark (LiTS). arXiv 2019, arXiv:1901.04056. [Google Scholar]
  35. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
  36. Xuesong, L.; Qinlan, X.; Yunfei, Z.; Wang, D. Fully automatic liver segmentation combining multi-dimensional graph cut with shape information in 3D CT images. Sci. Rep. 2018, 8, 1–9. [Google Scholar]
  37. Abd-Elaziz, O.F.; Sayed, M.S.; Abdullah, M.I. Liver tumors segmentation from abdominal CT images using region growing and morphological processing. In Proceedings of the International Conference on Engineering & Technology, Cairo, Egypt, 19–20 April 2015. [Google Scholar]
  38. Caselles, V.; Kimmel, R.; Sapiro, G. Geodesic active contours. Int. J. Comput. Vis. 1997, 22, 61–79. [Google Scholar] [CrossRef]
  39. Chlebus, G.; Meine, H.; Moltz, J.H.; Schenk, A. Neural network-based automatic liver tumor segmentation with random forest-based candidate filtering. arXiv 2017, arXiv:1706.00842. [Google Scholar]
  40. Han, X. Automatic liver lesion segmentation using a deep convolutional neural network method. arXiv 2017, arXiv:1704.07239. [Google Scholar]
  41. Kaluva, K.C.; Khened, M.; Kori, A.; Krishnamurthi, G. 2D-Densely Connected Convolution Neural Networks for automatic Liver and Tumor Segmentation. arXiv 2018, arXiv:1802.02182. [Google Scholar]
  42. Guo, X.; Schwartz, L.H.; Zhao, B. Automatic liver segmentation by integrating fully convolutional networks into active contour models. Med. Phys. 2019, 46, 4455–4469. [Google Scholar] [CrossRef]
  43. Liu, Z.; Song, Y.-Q.; Sheng, V.S.; Wang, L.; Jiang, R.; Zhang, X.; Yuan, D. Liver CT sequence segmentation based with improved U-Net and graph cut. Expert Syst. Appl. 2019, 126, 54–63. [Google Scholar] [CrossRef]
  44. Bi, L.; Kim, J.; Kumar, A.; Feng, D. Automatic Liver Lesion Detection using Cascaded Deep Residual Networks. arXiv 2017, arXiv:1704.02703. [Google Scholar]
  45. Yuan, Y. Hierarchical Convolutional-Deconvolutional Neural Networks for Automatic Liver and Tumor Segmentation. arXiv 2017, arXiv:1710.04540. [Google Scholar]
  46. Chung, F.; Delingette, H. Regional appearance modeling based on the clustering of intensity profiles. Comput. Vis. Image Underst. 2013, 117, 705–717. [Google Scholar] [CrossRef] [Green Version]
  47. Esfandiarkhani, M.; Foruzan, A.H. A generalized active shape model for segmentation of liver in low-contrast CT volumes. Comput. Biol. Med. 2017, 82, 59–70. [Google Scholar] [CrossRef]
  48. Christ, P.F.; Ettlinger, F.; Grün, F.; Elshaera, M.E.A.; Lipkova, J.; Schlecht, S.; Ahmaddy, F.; Tatavarty, S.; Bickel, M.; Bilic, P.; et al. Automatic Liver and Tumor Segmentation of CT and MRI Volumes using Cascaded Fully Convolutional Neural Networks. arXiv 2017, arXiv:1702.05970. [Google Scholar]
Figure 1. The liver and tumor segmentation framework.
Figure 2. The schematic representation of the V-Net architecture.
Figure 3. The scans in the Liver Tumor Segmentation Challenge (LiTS) 2017 dataset and the 3D Image Reconstruction for Comparison of Algorithm Database (3D-IRCADb).
Figure 4. The visual comparison of segmentation results. The ground truths are denoted in red, and the results are in blue. The first three rows are the results of liver segmentation; the last three rows are the results of tumor segmentation. Each column represents the results obtained utilizing different loss functions.
Figure 5. The test results on two databases with different α values. (a) The test results of models with HD loss on LiTS 2017. (b) The test results of models with HD loss on 3D-IRCADb. (c) The test results of models with BD loss on LiTS 2017. (d) The test results of models with BD loss on 3D-IRCADb. (e) The test results of models with SDM loss on LiTS 2017. (f) The test results of models with SDM loss on 3D-IRCADb.
Table 1. Comparison of the results of models trained with different resolution data.

Category | Dataset   | Resolution | Dice Similarity Coefficient (DSC)
Liver    | LiTS 2017 | 512 × 512  | 0.953
Liver    | LiTS 2017 | 256 × 256  | 0.947
Liver    | LiTS 2017 | 128 × 128  | 0.936
Liver    | 3D-IRCADb | 512 × 512  | 0.929
Liver    | 3D-IRCADb | 256 × 256  | 0.924
Liver    | 3D-IRCADb | 128 × 128  | 0.910
Tumor    | LiTS 2017 | 512 × 512  | 0.699
Tumor    | LiTS 2017 | 256 × 256  | 0.655
Tumor    | LiTS 2017 | 128 × 128  | 0.615
Tumor    | 3D-IRCADb | 512 × 512  | 0.623
Tumor    | 3D-IRCADb | 256 × 256  | 0.597
Tumor    | 3D-IRCADb | 128 × 128  | 0.567
Table 2. A summary of the results on the LiTS 2017 and 3D-IRCADb datasets. DSC = Dice Similarity Coefficient; HD95 = 95th percentile of the Hausdorff Distance; ASD = average symmetric surface distance; TNR = True Negative Rate; TPR = True Positive Rate.

Category | Dataset   | Loss Function | DSC   | HD95 (mm) | ASD (mm) | TNR   | TPR
Liver    | LiTS 2017 | L_Reg         | 0.953 | 5.44      | 1.61     | 0.957 | 0.998
Liver    | LiTS 2017 | L_HD          | 0.962 | 3.60      | 1.05     | 0.971 | 0.998
Liver    | LiTS 2017 | L_BD          | 0.963 | 4.24      | 0.88     | 0.973 | 0.999
Liver    | LiTS 2017 | L_SDM         | 0.965 | 3.23      | 1.07     | 0.982 | 0.999
Liver    | 3D-IRCADb | L_Reg         | 0.929 | 8.74      | 2.58     | 0.921 | 0.997
Liver    | 3D-IRCADb | L_HD          | 0.942 | 6.97      | 2.17     | 0.949 | 0.998
Liver    | 3D-IRCADb | L_BD          | 0.947 | 4.15      | 1.87     | 0.955 | 0.998
Liver    | 3D-IRCADb | L_SDM         | 0.948 | 4.68      | 1.81     | 0.958 | 0.998
Tumor    | LiTS 2017 | L_Reg         | 0.699 | 9.36      | 2.17     | 0.655 | 0.998
Tumor    | LiTS 2017 | L_HD          | 0.731 | 8.77      | 1.82     | 0.682 | 0.999
Tumor    | LiTS 2017 | L_BD          | 0.745 | 8.14      | 1.68     | 0.708 | 0.999
Tumor    | LiTS 2017 | L_SDM         | 0.764 | 6.72      | 1.25     | 0.761 | 0.999
Tumor    | 3D-IRCADb | L_Reg         | 0.623 | 13.46     | 3.72     | 0.564 | 0.999
Tumor    | 3D-IRCADb | L_HD          | 0.648 | 11.25     | 3.08     | 0.587 | 0.999
Tumor    | 3D-IRCADb | L_BD          | 0.677 | 9.88      | 2.88     | 0.674 | 0.999
Tumor    | 3D-IRCADb | L_SDM         | 0.682 | 9.47      | 2.82     | 0.654 | 0.999
Table 3. The cross-validation experimental results of the LiTS 2017 and 3D-IRCADb datasets.

Category | Training Dataset | Testing Dataset | Loss Function | DSC   | HD95 (mm) | ASD (mm) | TNR   | TPR
Liver    | 3D-IRCADb        | LiTS 2017       | L_Reg         | 0.913 | 9.80      | 2.64     | 0.879 | 0.995
Liver    | 3D-IRCADb        | LiTS 2017       | L_HD          | 0.924 | 8.06      | 2.25     | 0.914 | 0.996
Liver    | 3D-IRCADb        | LiTS 2017       | L_BD          | 0.921 | 8.41      | 2.11     | 0.922 | 0.996
Liver    | 3D-IRCADb        | LiTS 2017       | L_SDM         | 0.923 | 7.82      | 2.03     | 0.941 | 0.997
Liver    | LiTS 2017        | 3D-IRCADb       | L_Reg         | 0.919 | 10.74     | 3.01     | 0.858 | 0.995
Liver    | LiTS 2017        | 3D-IRCADb       | L_HD          | 0.928 | 7.45      | 2.54     | 0.945 | 0.997
Liver    | LiTS 2017        | 3D-IRCADb       | L_BD          | 0.926 | 7.12      | 2.75     | 0.951 | 0.997
Liver    | LiTS 2017        | 3D-IRCADb       | L_SDM         | 0.934 | 6.88      | 2.31     | 0.960 | 0.998
Tumor    | 3D-IRCADb        | LiTS 2017       | L_Reg         | 0.598 | 15.74     | 4.11     | 0.564 | 0.998
Tumor    | 3D-IRCADb        | LiTS 2017       | L_HD          | 0.644 | 12.32     | 3.77     | 0.580 | 0.998
Tumor    | 3D-IRCADb        | LiTS 2017       | L_BD          | 0.653 | 11.72     | 3.14     | 0.644 | 0.998
Tumor    | 3D-IRCADb        | LiTS 2017       | L_SDM         | 0.651 | 10.41     | 3.18     | 0.682 | 0.998
Tumor    | LiTS 2017        | 3D-IRCADb       | L_Reg         | 0.587 | 18.22     | 4.62     | 0.526 | 0.999
Tumor    | LiTS 2017        | 3D-IRCADb       | L_HD          | 0.627 | 15.41     | 4.28     | 0.557 | 0.999
Tumor    | LiTS 2017        | 3D-IRCADb       | L_BD          | 0.631 | 13.64     | 3.62     | 0.682 | 0.999
Tumor    | LiTS 2017        | 3D-IRCADb       | L_SDM         | 0.634 | 13.25     | 3.24     | 0.674 | 0.999
Table 4. The results on the two datasets with different training strategies.

Training Strategy               | Category | Dataset   | DSC   | HD95 (mm) | ASD (mm) | TNR   | TPR
Joint training at the beginning | Liver    | LiTS 2017 | 0.951 | 4.17      | 2.10     | 0.967 | 0.999
Joint training at the beginning | Liver    | 3D-IRCADb | 0.937 | 7.28      | 2.49     | 0.951 | 0.998
Joint training at the beginning | Tumor    | LiTS 2017 | 0.742 | 8.03      | 2.38     | 0.623 | 0.999
Joint training at the beginning | Tumor    | 3D-IRCADb | 0.633 | 12.22     | 3.55     | 0.602 | 0.999
Our training strategy           | Liver    | LiTS 2017 | 0.965 | 3.23      | 0.88     | 0.982 | 0.999
Our training strategy           | Liver    | 3D-IRCADb | 0.948 | 4.68      | 1.81     | 0.958 | 0.998
Our training strategy           | Tumor    | LiTS 2017 | 0.764 | 6.72      | 1.25     | 0.761 | 0.999
Our training strategy           | Tumor    | 3D-IRCADb | 0.682 | 9.47      | 2.82     | 0.654 | 0.999
Table 5. The comparison of our approach with similar approaches.

Approach        | Dataset   | Liver DSC | Liver ASD (mm) | Tumor DSC | Tumor ASD (mm)
U-Net [39]      | LiTS 2017 | -         | -              | 0.650     | -
ResNet [40]     | LiTS 2017 | -         | -              | 0.670     | 6.66
DenseNet [41]   | LiTS 2017 | 0.912     | 6.49           | 0.492     | 1.44
FCN+ACM [42]    | LiTS 2017 | 0.943     | 2.30           | -         | -
GIU-Net [43]    | LiTS 2017 | 0.951     | 1.80           | -         | -
ResNet [44]     | LiTS 2017 | 0.959     | -              | 0.500     | -
RA-UNet [13]    | LiTS 2017 | 0.961     | 1.21           | 0.595     | 1.29
H-DenseUNet [3] | LiTS 2017 | 0.961     | 1.45           | 0.722     | 1.10
CDNN [45]       | LiTS 2017 | 0.963     | 1.10           | 0.657     | 1.15
Ours            | LiTS 2017 | 0.965     | 0.88           | 0.764     | 1.25
MPAM [46]       | 3D-IRCADb | -         | 2.24           | -         | -
ASM [47]        | 3D-IRCADb | -         | 1.66           | -         | -
U-Net [3]       | 3D-IRCADb | 0.923     | 4.33           | 0.510     | 11.11
ResNet [3]      | 3D-IRCADb | 0.938     | 3.91           | 0.600     | 6.36
CFCNs [48]      | 3D-IRCADb | 0.943     | 1.50           | 0.560     | -
H-DenseUNet [3] | 3D-IRCADb | 0.947     | 4.06           | 0.650     | 5.29
Ours            | 3D-IRCADb | 0.948     | 1.81           | 0.682     | 2.82

Note: "-" denotes that the result is not reported.
