Multi-feature representation for burn depth classification via burn images

https://doi.org/10.1016/j.artmed.2021.102128

Highlights

  • We proposed a multi-feature representation method for detecting burn depth.

  • Three different features: color, texture and latent are extracted from burn images.

  • The latent feature is extracted by a stacked sparse autoencoder for the first time.

  • The proposed method shows effectiveness on a burn image dataset.

  • The proposed approach obtains the best results compared to other popular methods.

Abstract

Burns are a common and severe public health problem. Early and timely classification of burn depth allows patients to receive targeted treatment, which can save their lives. However, identifying burn depth from burn images requires physicians to have extensive medical experience, and the speed and precision of diagnosis are not guaranteed given the high workload and cost for clinicians. Automated burn depth classification methods are therefore desirable. In this paper, we propose a computerized method to automatically evaluate burn depth using multiple features extracted from burn images. Specifically, color, texture and latent features are extracted from the burn images, concatenated, and fed to several classifiers, such as a random forest, to predict the burn level. Evaluated on a standard burn image dataset, the proposed method obtains accuracies of 85.86% and 76.87% when classifying the burn images into two classes and three classes, respectively, outperforming conventional methods in burn depth identification. These results indicate that our approach is effective and has the potential to aid medical experts in identifying different burn depths.

Introduction

A burn is trauma to the skin or other tissues caused by heat, chemicals, electricity, radiation or friction [1]. Most burns are caused by contact with hot solids, liquids or flames. Exposure to cooking flames, unsafe drinking/cooking utensils and smoking are also risk factors for burns [2]. According to some research works, more than 67 million burn cases have been caused by fire and high heat, leading to 2.9 million patients being sent to hospitals and more than 170,000 deaths [3,4]. Burns can be cured with targeted treatment based on the depth of the burn: a slight burn needs only painkillers and can be handled simply by medical experts, whereas severe burns must be treated with surgery, such as excision or skin grafting [5]. Thus, early and timely classification of a burn helps patients receive an effective cure and is more conducive to recovery.

Generally, the level of a burn can be classified into four categories [6]:

  • (1) The first-degree burn often shows redness and is painful and dry. This type of burn sloughs the next day and heals well.

  • (2) The superficial dermal burn presents redness, is moist and causes weeping of the skin. It can cause patients pain when exposed to air and temperature changes.

  • (3) The deep dermal burn often appears waxy, dry or wet, with variable colors that do not blanch under pressure.

  • (4) The full-thickness burn is the most severe burn; it shows a dry, waxy consistency and is insensate. Wounds of this level cannot heal except through epithelial migration and contraction.

In clinical practice, a first-degree burn is minor and does not require a hospital visit for targeted treatment. The other three levels, i.e., superficial dermal, deep dermal, and full-thickness burns, mainly require the attention of medical experts to identify the corresponding burn depth and provide reasonable treatment plans. Besides this, burn depth can also be divided into two classes: superficial dermal burns heal spontaneously, while deep dermal and full-thickness burns are cured with excision [7]. Thus, experts should treat deep dermal and full-thickness burns with surgery, whereas the remaining wounds can heal on their own within twenty-one days [8].

To identify burn depth from burn images, medical experts require significant experience. According to [9], classifying a burn and determining whether a wound requires surgery is difficult for inexperienced doctors. Even for professional surgeons, burn identification is often less than 80% accurate [10]. This means the speed and precision of burn diagnosis are not guaranteed, given the high workload and cost for experienced clinicians. Considering this problem, computerized methods should be implemented to aid clinicians in detecting burn depth efficiently.

At present, many works attempt to classify burn depth from burn images using artificial intelligence (AI) [11,12,13]. D. P. Yadav et al. [14] proposed a method based on the color features of burn images for classifying burn depth; the extracted color features were evaluated by an SVM classifier to distinguish the different burn degrees. The rationale is that images of different burn depths have distinct color appearances, which can be exploited for efficient diagnosis. Researchers have also attempted to use the effective wavelengths of burn images to detect burn depth automatically [15]; they implemented and tuned their device to perform Receiver Operating Characteristic inspections that help clinicians in burn depth detection and related medical decision making. In [16], the authors applied multispectral imaging to identify burn injuries; their method can help surgeons determine the burned tissue and obtained reasonable performance. B. Acha et al. [17] proposed a method named multidimensional scaling (MDS) to find physical features; the extracted features can be analyzed by a k-nearest neighbors (k-NN) classifier and a support vector machine (SVM) model to determine the depth of the provided burn images, aiding experienced surgeons in identifying burn depth efficiently. In [18], Rangaraju et al. used clinical methods to classify various types of burns: skin with clotting can be used to measure the degree of burn depth, where a collagen ratio of less than 0.35, between 0.35 and 0.65, and over 0.65 corresponds to a superficial dermal burn, deep dermal burn, and full-thickness burn, respectively. The authors in [19] presented a method based on the histogram and a color map of burn images to evaluate the degree of burn. In [20], researchers investigated a fuzzy-ARTMAP classifier; this neural-network-based method measures the burn depth and selects useful features from the burn images.

However, from a methodological view, the aforementioned AI-based methods investigate only a specific feature of the burn images, such as the color feature; they do not analyze multiple features simultaneously to classify the depth of burn. Thus, their classification performance is not ideal. Burn images not only have color characteristics but also contain various texture features, such as blisters. Moreover, some latent features that are not easily detected by the human eye can also help clinicians identify burn depth. To obtain better results in detecting the degree of burn from burn images, we propose a multi-feature representation method that utilizes various features from burn images, and we apply several classifiers to validate the effectiveness of these extracted features.

For the methodological novelty, our AI-based burn depth detection method uses three different features (color, texture, and latent) extracted from the burn images to predict the final result. By contrast, many conventional AI burn depth classification methods deploy only color or texture features to evaluate the degree of burn depth; the related works in [14,17,21,22,23] confirm this point. For instance, Tran et al. extracted color features from burn images before applying an SVM model for classification in a clinical study [22]. In [23], the authors used texture-based features and validated that texture is effective in burn wound evaluation. It is therefore reasonable to use multiple features to improve AI diagnostic performance. For the theoretical novelty, we use a stacked sparse autoencoder to extract the latent feature from the burn images. The stacked sparse autoencoder is a widely used deep neural network in the field of AI that can extract useful latent features, and many studies in biomedical applications support its effectiveness [24,25]. For example, Chen et al. applied a stacked sparse autoencoder to extract high-level features for the identification of term and preterm uterine recordings [25]. Here, we implement this model to extract latent features that complement the color and texture features for burn depth classification.
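The latent-feature idea can be sketched briefly. The following is an illustration only, not the authors' implementation: the paper uses a stacked sparse autoencoder, whereas this sketch trains a single plain (non-sparse, non-stacked) autoencoder via scikit-learn's MLPRegressor on synthetic feature vectors and reads the hidden-layer activations off as the latent representation.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.random((50, 20))  # synthetic stand-ins for image feature vectors

# An autoencoder is trained to reconstruct its own input; the hidden
# layer then acts as a compressed, latent representation of the data.
ae = MLPRegressor(hidden_layer_sizes=(8,), activation="relu",
                  solver="adam", max_iter=3000, random_state=0)
ae.fit(X, X)  # target equals input: reconstruction objective

def latent_features(model, data):
    """Read the hidden-layer (ReLU) activations off the fitted network."""
    return np.maximum(0.0, data @ model.coefs_[0] + model.intercepts_[0])

Z = latent_features(ae, X)  # latent feature matrix, one row per image
```

A stacked sparse version would repeat this layer by layer, adding a sparsity penalty on the hidden activations at each stage.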

To sum up, the main contributions of this work are as follows:

  • We propose a multi-feature representation method to extract multiple features, i.e., color, texture and latent features, from burn images, which are used for efficiently determining the degree of burn.

  • To the best of our knowledge in burn depth classification, the latent feature is extracted by a stacked sparse autoencoder for the first time.

  • Our proposed multi-feature representation method shows great novelty in AI-based burn depth detection.

  • Our proposed method is validated with a burn image dataset, obtaining the best performances compared to other conventional burn depth approaches.

Section snippets

Materials and methods

The whole pipeline of our proposed AI-based multi-feature representation method is shown in Fig. 1. In detail, we extract the color histogram and color moments [26,27] from burn images, which together form the color feature. The texture feature is obtained by applying LBP and Gabor filters [28,29] to the burn images, while the latent feature is acquired by applying a stacked sparse autoencoder to the burn images. The stacked sparse autoencoder is one popular deep learning model to
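The color and texture extraction described here can be sketched with scikit-image, as below. The image, histogram bin counts, Gabor frequency and LBP parameters are illustrative assumptions; the paper does not specify its exact settings in this snippet.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from skimage.filters import gabor

rng = np.random.default_rng(0)
img = rng.random((64, 64, 3))   # synthetic stand-in for an RGB burn image
gray = img.mean(axis=2)

# Color feature: per-channel histograms plus the first three color
# moments (mean, standard deviation, third central moment).
color_hist = np.concatenate(
    [np.histogram(img[..., c], bins=8, range=(0, 1))[0] for c in range(3)]
).astype(float)
color_moments = np.concatenate(
    [[img[..., c].mean(), img[..., c].std(),
      ((img[..., c] - img[..., c].mean()) ** 3).mean()] for c in range(3)]
)

# Texture feature: histogram of uniform LBP codes plus summary
# statistics of one Gabor filter response.
lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
lbp_hist = np.histogram(lbp, bins=10, range=(0, 10))[0].astype(float)
g_real, g_imag = gabor(gray, frequency=0.3)
gabor_stats = np.array([g_real.mean(), g_real.std(),
                        g_imag.mean(), g_imag.std()])

# Concatenate everything into one descriptor per image.
feature = np.concatenate([color_hist, color_moments, lbp_hist, gabor_stats])
```

In practice multiple Gabor orientations and frequencies would typically be pooled, and the latent feature from the autoencoder would be appended to this vector.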

Experimental dataset

To evaluate the three extracted features, i.e., color, texture and latent, a standard burn image dataset containing 94 burn images with three degrees of burn depth was used in this study. The distribution of this dataset is shown in Table I. The dataset comes from the Virgen del Rocío Hospital and was created by the Biomedical Image Processing Group. Here, we follow the same training settings as [17], using 20 burn images for training (superficial

Discussion

We have applied multi-feature representation, i.e., color, texture and latent features, to classify burn images into different levels. In the two-class experiments, our proposed method obtained an accuracy of 85.86% with the random forest classifier. The accuracy becomes 76.87% with the random forest when identifying three burn-depth classes simultaneously, which is much better than previous works. When we attempt to apply
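The final classification stage can be sketched as follows. Note this is a minimal illustration on synthetic data: the feature vectors, labels, split and hyperparameters are placeholders, not the paper's actual dataset or settings, so the resulting accuracy is meaningless beyond demonstrating the pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic stand-ins for the concatenated color/texture/latent vectors
# of 94 images with two burn-depth classes (sizes chosen for illustration).
X = rng.random((94, 47))
y = rng.integers(0, 2, size=94)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_tr, y_tr)
accuracy = (clf.predict(X_te) == y_te).mean()
```

Swapping `RandomForestClassifier` for an SVM or k-NN model reproduces the classifier comparison described in the experiments.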

Conclusion

In this work, we proposed a novel method that applies multi-feature representation to classify burn images into different depths. Our method can extract and combine three features, i.e., color, texture and latent, effectively. Extensive results under various experimental settings also proved the rationality and validity of our approach, which has the potential to aid clinicians in clinical practice. Further analysis and evaluation with more varied types of burn data samples are warranted

Declaration of interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

This work was supported in part by the University of Macau (File no. MYRG2019-00006-FST) and by the Open Research Fund of the Beijing Key Laboratory of Big Data Technology for Food Safety (Project No. BTBD-2021KF05).

References (55)

  • G. Chandrashekar et al., "A survey on feature selection methods," Comput Electr Eng (2014).
  • M. Chen et al., "AI-skin: skin disease recognition based on self-learning and wide data collection through a closed-loop framework," Information Fusion (2020).
  • D.N. Herndon, Total burn care: expert consult-online (2012).
  • W. H. Organization, "A WHO plan for burn prevention and care," ...
  • T. Vos, "Global, regional, and national incidence, prevalence, and years lived with disability for 301 acute and chronic diseases and injuries in 188 countries, 1990–2013: a systematic analysis for the Global Burden of Disease Study 2013," Lancet (2015).
  • J.A. Haagsma, "The global burden of injury: incidence, mortality, disability-adjusted life years and time trends from the Global Burden of Disease study 2013," Inj Prev (2016).
  • J. Tintinalli et al., Tintinalli's emergency medicine: a comprehensive study guide (2010).
  • J.A. Clarke, A color atlas of burn injuries (1992).
  • A.D. Jaskille et al., "Critical review of burn depth assessment techniques: part I. Historical review," J Burn Care Res (2009).
  • P. Hlava et al., "Validity of clinical assessment of the depth of a thermal injury," Acta Chir Plast (1983).
  • J. Ruminski et al., "Thermal parametric imaging in the evaluation of skin burn depth," IEEE Trans Biomed Eng (2007).
  • M.A. Afromowitz et al., "Multispectral imaging of burn wounds: a new clinical instrument for evaluating burn depth," IEEE Trans Biomed Eng (1988).
  • B. Acha et al., "Segmentation and classification of burn color images," 2001 Conference Proceedings of the 23rd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (2001).
  • D. Yadav, A. Sharma, M. Singh, A. Goyal, "Feature extraction based machine learning for human burn ..."
  • W. Li et al., "Burn injury diagnostic imaging device's accuracy improved by outlier detection and removal," ...
  • B. Acha et al., "Burn depth analysis using multidimensional scaling applied to psychophysical experiment data," IEEE Trans Med Imaging (2013).
  • B. Acha, C. Serrano, S. Palencia, and J. J. Murillo, "Classification of burn wounds using support vector machines," ...