Kinect-based human finger tracking method for natural haptic rendering

https://doi.org/10.1016/j.entcom.2019.100335

Highlights

  • A real-time fingertip auto-positioning algorithm with high accuracy is proposed.

  • A collection of pixels is used to eliminate random errors in this method.

  • The proposed method adapts to fingers of different shapes and sizes.

Abstract

In multi-modal natural human-computer interaction (HCI), real-time finger position detection with high accuracy is an important basis for interactive modeling of the virtual environment. In this paper, a novel fingertip auto-positioning (FAP) detection method is proposed for tracking finger position with a Microsoft Kinect sensor. Based on the finger skeleton point obtained through the Kinect SDK (Software Development Kit), skeleton point correction and fingertip auto-positioning are realized with a circle of pixels that covers the fingertip area within a certain threshold. The experimental results show that the average AMPE (absolute mean percentage error) of the proposed finger position detection method is 1.47%, 1.62% and 0.80% along the X, Y and Z axes respectively, while its execution rate is 23 Hz. The proposed method can be applied to haptic-based virtual reality applications and provide the position information required for haptic modeling.

Introduction

With the help of human-computer interfaces, haptic rendering allows human operators to touch and manipulate virtual objects in a simulated environment, which enhances the sense of reality and immersion of virtual reality systems [1]. Traditional haptic devices based on force-feedback joysticks usually have high position detection accuracy and a high force update rate. For example, the refresh rate of the Force Dimension Omega 3 haptic device is up to 4 kHz and its position detection precision is 0.01 mm. However, these devices usually have a constrained workspace because of the mechanical linkages used to transmit forces and torques, which also restricts human-computer interaction to a stylus-like tool.

In recent years, with the improvement of computing capabilities and the popularization of consumer electronics, haptic rendering through natural interaction has become a major trend in the field of HCI. Natural haptic rendering aims to exert forces and torques on users in a more intuitive and unconstrained way, usually based on magnetic fields or ultrasonic principles. It removes the restrictions that traditional haptic devices place on the human body and lets operators perceive and recognize virtual objects naturally, just as with bare hands in the real world. In multi-modal HCI, operators can interact with virtual objects through several channels, such as visual, haptic and auditory rendering. As an important part of multi-modal HCI, haptic rendering requires (1) a high refresh rate of 300–1000 Hz [2], and (2) high accuracy of finger position tracking for force calculation [3]. In natural haptic rendering, the position of the human fingers must be tracked to provide the input for the haptic device. Moreover, the finger position is used directly to calculate the force feedback, which determines the realism of the simulation and the performance of the haptic rendering system. Therefore, finger position tracking with high accuracy and a sufficient refresh rate is of great importance in natural haptic rendering and remains a critical challenge.
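To make the role of the tracked finger position in force calculation concrete, the following is a minimal sketch of a generic penalty-based (spring) contact model, a common scheme in haptic rendering; it is not the force model of this paper. The Vector3 type from System.Numerics, the sphere primitive and the stiffness constant are illustrative assumptions.

    // Generic penalty-based force sketch: the deeper the tracked fingertip
    // penetrates a virtual sphere, the larger the repelling force.
    using System.Numerics;

    static class PenaltyForce
    {
        const float Stiffness = 500f; // spring stiffness in N/m (illustrative)

        // Returns zero force while the fingertip is outside the sphere.
        public static Vector3 Compute(Vector3 fingertip, Vector3 center, float radius)
        {
            Vector3 offset = fingertip - center;
            float distance = offset.Length();
            float penetration = radius - distance;      // > 0 means inside
            if (penetration <= 0f || distance == 0f)
                return Vector3.Zero;
            Vector3 normal = offset / distance;         // outward surface normal
            return Stiffness * penetration * normal;    // F = k * d * n
        }
    }

Because the force is proportional to a penetration depth computed from the tracked position, any tracking error translates directly into force error, which is why positioning accuracy matters as much as the refresh rate.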

Related work

At present, there are several methods for accurate position detection in HCI. The Leap Motion controller [4] can be used for position tracking of both hands with a precision of 0.01 mm. The Polhemus FASTRAK [5], a six-degree-of-freedom motion tracking system, has a positioning precision of 0.76 mm and an operating frequency of 120 Hz.

Microsoft’s Kinect sensor is also widely used in various fields owing to its good performance and low price [6], [7], [8].

The fingertip auto-positioning detection method

Utilizing the Kinect SDK [22], an initial skeleton tracking method (INM) can be realized that directly outputs a single skeleton point as the resulting finger position. Unfortunately, this INM method is prone to errors and has limited accuracy. However, using the initial output of the INM method as a starting point, we propose to place an imaginary circle with an appropriate diameter (ideally the width of a finger) exactly at the top center of the user's fingertip. All the pixels of this circle are then traversed to correct the skeleton point and locate the fingertip.
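As an illustration of the correction step, the sketch below averages the body-index pixels inside such a finger-width circle around the initial skeleton point, assuming a Kinect v2 body-index frame flattened into a byte array (values 0–5 mark a tracked body, 0xFF marks background). The function name, the radius parameter and the centroid rule are illustrative assumptions; the paper's exact traversal and threshold rules are not reproduced here.

    static class FingertipCorrection
    {
        const byte NoBody = 0xFF; // Kinect SDK 2.0 body-index value for background

        // Refines an initial skeleton point (px, py) by averaging all body
        // pixels inside a circle whose diameter approximates the finger width.
        // Averaging a collection of pixels suppresses the random error of a
        // single skeleton point.
        public static (float X, float Y) Refine(
            byte[] bodyIndex, int width, int height, int px, int py, int radius)
        {
            long sumX = 0, sumY = 0, count = 0;
            for (int dy = -radius; dy <= radius; dy++)
            for (int dx = -radius; dx <= radius; dx++)
            {
                int x = px + dx, y = py + dy;
                if (x < 0 || x >= width || y < 0 || y >= height) continue;
                if (dx * dx + dy * dy > radius * radius) continue; // outside circle
                if (bodyIndex[y * width + x] == NoBody) continue;  // background pixel
                sumX += x; sumY += y; count++;
            }
            // Fall back to the raw skeleton point if no body pixel was found.
            return count == 0
                ? ((float)px, (float)py)
                : ((float)sumX / count, (float)sumY / count);
        }
    }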

Experiments and results

Calibration experiments were carried out to verify the effectiveness of the proposed FAP method and of the INM method based on the Kinect SDK. A PC with an Intel i7-6770HQ processor, 8 GB of memory and Windows 10 was used as the computing platform. The proposed FAP method was implemented in C# with Visual Studio 2015 and the Microsoft Kinect for Windows SDK 2.0. The experimental environment is shown in Fig. 7.
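For context on this toolchain, a hedged sketch of the acquisition side with the Kinect for Windows SDK 2.0 in C# is given below: it reads the hand-tip joint of the first tracked body and maps it into depth-space pixel coordinates, the kind of input assumed by the correction sketch above. Event wiring, frame disposal and error handling are omitted, and the class and method names are our own, not the paper's.

    using Microsoft.Kinect;

    class FingerSource
    {
        readonly KinectSensor sensor = KinectSensor.GetDefault();
        Body[] bodies;

        public void Start()
        {
            bodies = new Body[sensor.BodyFrameSource.BodyCount];
            sensor.Open();
        }

        // Returns the right hand-tip position in depth-space pixels,
        // or null if no tracked body is present in the given frame.
        public DepthSpacePoint? ReadHandTip(BodyFrame frame)
        {
            frame.GetAndRefreshBodyData(bodies);
            foreach (Body body in bodies)
            {
                if (!body.IsTracked) continue;
                CameraSpacePoint tip = body.Joints[JointType.HandTipRight].Position;
                return sensor.CoordinateMapper.MapCameraPointToDepthSpace(tip);
            }
            return null;
        }
    }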

A relative distance method was utilized in the calibration experiments.
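The abstract reports per-axis AMPE values. This snippet does not reproduce the formula, but an absolute mean percentage error of this kind is normally computed as

    \mathrm{AMPE} = \frac{1}{n}\sum_{i=1}^{n}\left|\frac{p_i - \hat{p}_i}{p_i}\right| \times 100\%

where $p_i$ is the reference coordinate from the calibration setup and $\hat{p}_i$ is the coordinate detected by the tracking method on the axis under test.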

Conclusion

In this paper, a real-time fingertip position detection algorithm with high accuracy is proposed based on the primary output of the Kinect SDK. The original finger skeleton point is detected by the Kinect, and the peripheral pixels are traversed while the human body index pixels are utilized to realize the initial correction of the skeleton point. Then the finger skeleton point and the fingertip pixel are obtained through the fingertip auto-positioning method within a series of circles.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgment

This work was partly supported by the National Natural Science Foundation of China (Nos. 61773205 and 61773219), the Fundamental Research Funds for the Central Universities (NS2016032, NS2019018, Nanjing University of Aeronautics and Astronautics), and a scholarship from the China Scholarship Council (CSC). The authors gratefully acknowledge the reviewers' comments.

References (26)

  • S.T.L. Pohlmann et al., Evaluation of Kinect 3D Sensor for Healthcare Imaging, J. Med. Biol. Eng. (2016).

  • J. Bai et al., A novel human-robot cooperative method for upper extremity rehabilitation, Int. J. Soc. Robot. (2017).

  • L.P. Zhao, X. Lu, X.L. Tao, X.L. Chen, A Kinect-based virtual rehabilitation system through gesture recognition, in: J....