
Human Visual System Consistent Model for Wireless Capsule Endoscopy Image Enhancement and Applications

  • SPECIAL ISSUE
  • Published in: Pattern Recognition and Image Analysis

Abstract

Visualization of the inner gastrointestinal (GI) tract is an important aspect of diagnosing diseases such as bleeding and colon cancer. Wireless capsule endoscopy (WCE) provides painless imaging of the GI tract with little discomfort to patients, using a near-light imaging model with burst light-emitting diodes (LEDs). The imaging system is designed to minimize battery power; the capsule moves through the GI tract by natural peristalsis, and the color video data are transmitted wirelessly from the capsule. Despite the advantages of WCE videos, the captured frames exhibit uneven illumination and sometimes contain darker regions, so enhancement may be required afterwards for better visualization of regions of interest. In this work, we extend a human visual system (HVS) consistent image enhancement model that uses a feature-linking neural network based on the precise timing of spiking neurons. Experimental results on various WCE frames show that our method yields better enhancement of regions of interest and, compared with other enhancement approaches in the literature, generally produces higher-quality restorations. Further, we show that our enhancement method improves automatic image segmentation and 3D shape-from-shading reconstruction for visualization, indicating that it is viable for use within computer-aided diagnosis systems for GI tract diseases.
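To make the enhancement idea more concrete, the sketch below illustrates a simplified feature-linking, spiking-neuron style enhancement in Python. It is a minimal illustration under our own assumptions, not the exact model from the paper: the linking kernel, the parameter values (beta, alpha_theta, v_theta), the single-firing restriction, and the final latency-to-intensity mapping are all illustrative choices. The core idea it demonstrates is that each pixel acts as a neuron whose firing time under a decaying threshold encodes intensity, and the resulting firing-time map is remapped into an enhanced image.

```python
# Minimal sketch (not the authors' exact implementation) of a feature-linking,
# spiking-neuron style enhancement: each pixel is a neuron whose firing latency
# encodes intensity, and the latency map is remapped to output intensities.
import numpy as np

def flm_like_enhance(img, n_iters=60, beta=0.2, alpha_theta=0.1, v_theta=20.0):
    """img: 2D float array in [0, 1]; returns an enhanced image in [0, 1]."""
    f = img.astype(np.float32)                     # feeding input = normalized intensity
    theta = np.full_like(f, 1.0 + v_theta)         # dynamic thresholds start high
    fired = np.zeros(f.shape, dtype=bool)          # neurons that have already fired
    t_fire = np.full(f.shape, n_iters, dtype=np.float32)  # firing-time (latency) map
    y = np.zeros_like(f)                           # spike outputs of the previous step

    # illustrative 8-neighbour linking weights (an assumption, not the paper's kernel)
    kernel = np.array([[0.5, 1.0, 0.5],
                       [1.0, 0.0, 1.0],
                       [0.5, 1.0, 0.5]], dtype=np.float32)
    kernel /= kernel.sum()
    h, w = f.shape

    for t in range(n_iters):
        # linking input: weighted spikes of the 8-neighbourhood
        padded = np.pad(y, 1, mode="edge")
        link = sum(kernel[i, j] * padded[i:i + h, j:j + w]
                   for i in range(3) for j in range(3))
        u = f * (1.0 + beta * link)                # modulated membrane potential
        spike = (u > theta) & ~fired               # neurons crossing their threshold fire once
        t_fire[spike] = t
        fired |= spike
        theta = theta * np.exp(-alpha_theta) + v_theta * spike  # threshold decay and reset
        y = spike.astype(np.float32)
        if fired.all():
            break

    # earlier firing corresponds to brighter pixels: invert and rescale the latency map
    out = t_fire.max() - t_fire
    return (out - out.min()) / (np.ptp(out) + 1e-8)
```

On a WCE frame loaded as a normalized grayscale (or per-channel) array, calling flm_like_enhance(frame) would return a contrast-redistributed version in which darker regions fire later and are spread across the intensity range; the HVS-consistent model studied in the paper builds on a more elaborate feature-linking formulation than this basic scheme.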



Author information


Corresponding author

Correspondence to V. B. Surya Prasath.

Ethics declarations

The authors declare that they have no conflicts of interest.

Additional information

V. B. Surya Prasath received his PhD in Mathematics from the Indian Institute of Technology Madras in 2010. Currently, he works as an assistant professor at the Division of Biomedical Informatics, Cincinnati Children’s Hospital Medical Center, and is also affiliated with the Departments of Biomedical Informatics and Electrical Engineering and Computer Science, University of Cincinnati, USA. He was a postdoctoral fellow at the Department of Mathematics, University of Coimbra, Portugal (2010–2012). From 2012 to 2017 he was an assistant professor at the Computational Imaging and VisAnalysis (CIVA) Lab of the University of Missouri, USA. He has held summer fellowships/visits at Kitware Inc., NY, USA, The Fields Institute, Canada, and the Institute for Pure and Applied Mathematics (IPAM), University of California Los Angeles, USA. He has over 180 publications in international peer-reviewed journals and conference proceedings. His research interests include nonlinear PDEs, regularization methods, inverse and ill-posed problems, variational and PDE-based image processing, and computer vision with applications in remote sensing, biometrics, and biomedical imaging.

Dang Ngoc Hoang Thanh graduated from Belarusian State University in 2008 and received his M.Sc. in 2009, majoring in Applied Mathematics; he received his PhD in Computer Science (2016) from Tula State University, Russia. Currently, he works as an assistant professor at the Department of Information Technology, School of Business Information Technology, University of Economics Ho Chi Minh City (UEH), Vietnam. Previously, he was a lecturer and researcher at the Department of Information Technology, Hue College of Industry, Vietnam. He is a member of the scientific organizations INSTICC (Portugal), ACM (USA), and IAENG (Taiwan), and he has served on the committees of international conferences such as IEEE ICCE 2018 (Vietnam), IWBBIO (Spain), IEEE ICIEV (USA), IEEE ICEEE (Turkey), ICIEE (Japan), ICoCTA (Australia), and ICMTEL (UK). He has over 50 publications in international peer-reviewed journals and international conference proceedings, 5 book chapters, one book, and one European patent. His research interests include image processing, computer vision, machine learning, data mining, computational mathematics, and optimization.

Le Thi Thanh graduated from Voronezh State University in 2009 and received her M.Sc. in 2011, majoring in Applied Mathematics; she received her PhD in Applied Mathematics (2018) from Tula State University, Russia. Currently, she works as an assistant professor at Ho Chi Minh City University of Transport, Vietnam. She has over 20 publications in international peer-reviewed journals and conference proceedings. Her research interests include mathematical models, nonlinear PDEs, image processing, computational mathematics, and dynamical systems.

Nguyen Quang San graduated from Belarusian State University in 2015 and received his M.Sc. in 2016, majoring in Theoretical Physics. He is now a PhD candidate in Theoretical Physics at Belarusian State University, Belarus. Currently, he works as a lecturer and researcher at Nha Trang University, Vietnam. He has over 15 publications in international peer-reviewed journals and conference proceedings, and one book. His research interests include image processing, quantum computing, quantum mechanics, nonlinear PDEs, and dynamical systems.

Sergey Dvoenko received the Dr. Sci. degree in 2002 at the Dorodnitsyn Computing Centre of the Russian Academy of Sciences (CC of RAS), in the field of Theoretical Foundations of Informatics (05.13.17 of RAS), with the thesis “Pattern Recognition Methods for Arrays of Inter-connected Data”. He received his PhD degree in 1992 after the postgraduate course at the Institute of Control Sciences of the Russian Academy of Sciences (ICS of RAS), in the field of Computer Sciences (05.13.16 of RAS), with the thesis “Learning Algorithms for Event Recognition in Experimental Waveforms”. Since 2003, he has been a professor at the Institute of Applied Mathematics and Computer Sciences of Tula State University (IAMCS of TSU) in Tula, Russia. Some recent courses: Data Analysis (Machine Learning and Clustering), Decision Theory, Operational Research, Functional and Logical Programming, System Analysis, and Algorithms and Calculus Theory. His scientific and research interests include image processing, hidden Markov models and fields in applied problems, machine learning and pattern recognition, cluster analysis, and data mining. He has over 60 scientific publications (papers in peer-reviewed journals and international conference proceedings) and one European patent. He is a member of the Russian Association for Pattern Recognition and Image Analysis (RAPRIA).


About this article

Cite this article

Prasath, V.B., Thanh, D.N., Thanh, L.T. et al. Human Visual System Consistent Model for Wireless Capsule Endoscopy Image Enhancement and Applications. Pattern Recognit. Image Anal. 30, 280–287 (2020). https://doi.org/10.1134/S1054661820030219
