RE-PUPIL: resource efficient pupil detection system using the technique of average black pixel density


Abstract

The pupil detection algorithm plays a key role in ophthalmic diagnostic equipment such as the non-contact tonometer, auto ref-keratometry and optical coherence tomography. A major challenge for existing pupil detection techniques is that algorithms based on convolutional neural networks, the integro-differential operator and the circular Hough transform use FPGA hardware resources inefficiently. To overcome this, the proposed human eye pupil detection system uses an average black pixel density technique to recognize and localize the pupil region. The proposed solution comprises double threshold, logical OR, morphological closing and average black pixel density modules. The method is evaluated on two near-infrared (NIR) iris databases, CASIA-IrisV4 and IIT Delhi, and achieves 98% accuracy, specificity and sensitivity. The proposed design was synthesized on a Zynq XC7Z020 FPGA and the results are compared with previous approaches.
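The abstract names four processing modules: double threshold, logical OR, morphological closing and average black pixel density. The sketch below illustrates that flow in software. The use of logarithmic and power-law transformations ahead of the double threshold (suggested by the LT and PLT abbreviations), the threshold values, the structuring element and the window size are illustrative assumptions, not the paper's exact parameters or hardware data path.

```python
# Minimal software sketch of the pipeline named in the abstract:
# double threshold -> logical OR -> morphological closing -> average black
# pixel density. Parameter values below are assumptions for illustration.
import numpy as np
from scipy.ndimage import binary_closing

def detect_pupil(gray, t1=0.15, t2=0.15, gamma=2.2, win=60, step=4):
    """Estimate the pupil centre (row, col) in a NIR grayscale image (uint8)."""
    img = gray.astype(np.float64) / 255.0

    # Double threshold on two enhanced copies of the image (logarithmic and
    # power-law transformations are assumed here based on the abbreviations).
    log_img = np.log1p(img) / np.log(2.0)    # logarithmic transformation
    pow_img = img ** gamma                   # power-law transformation
    mask = np.logical_or(log_img < t1, pow_img < t2)   # logical OR module

    # Morphological closing fills small holes such as specular reflections.
    mask = binary_closing(mask, structure=np.ones((5, 5), dtype=bool))

    # Average black pixel density: slide a win x win window over the binary
    # map and keep the window with the highest mean density of pupil pixels.
    best, centre = -1.0, (0, 0)
    for r in range(0, max(1, mask.shape[0] - win), step):
        for c in range(0, max(1, mask.shape[1] - win), step):
            d = mask[r:r + win, c:c + win].mean()
            if d > best:
                best, centre = d, (r + win // 2, c + win // 2)
    return centre
```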



Abbreviations

BRAM: Block RAM
CASIA: Chinese Academy of Sciences, Institute of Automation
CHT: Circular Hough transform
CNN: Convolutional neural network
DDR: Double data rate
FPGA: Field-programmable gate array
FCN: Fully connected neural network
f_logicalOR: Image after unwanted black pixels are removed (output of the logical OR module)
f_morpclose: Morphologically closed image
FP: False positive
FN: False negative
HT: Hough transform
IIT: Indian Institute of Technology
IDO: Integro-differential operator
LT: Logarithmic transformation
LUT: Lookup table
PLT: Power-law transformation
TP: True positive
TN: True negative
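The TP, TN, FP and FN counts above underlie the accuracy, sensitivity and specificity figures reported in the abstract. The functions below give the conventional confusion-matrix formulations; they are standard definitions, not code quoted from the paper.

```python
# Standard confusion-matrix metrics corresponding to the TP/TN/FP/FN
# abbreviations above (conventional definitions, not the paper's code).
def accuracy(tp, tn, fp, fn):
    return (tp + tn) / (tp + tn + fp + fn)

def sensitivity(tp, fn):
    # True positive rate: correctly detected pupil pixels / all pupil pixels.
    return tp / (tp + fn)

def specificity(tn, fp):
    # True negative rate: correctly rejected background / all background.
    return tn / (tn + fp)
```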



Acknowledgement

The authors thank the Chinese Academy of Sciences, Institute of Automation (CASIA) and IIT Delhi for providing access to their iris image datasets.

Author information


Corresponding author

Correspondence to S. Navaneethan.

Rights and permissions

Reprints and permissions

About this article


Cite this article

Navaneethan, S. and Nandhagopal, N. 2021 RE-PUPIL: resource efficient pupil detection system using the technique of average black pixel density. Sādhanā 46: 114. https://doi.org/10.1007/s12046-021-01644-x

