Robot teaching system based on hand-robot contact state detection and motion intention recognition

https://doi.org/10.1016/j.rcim.2022.102492

Highlights

  • A method for detecting hand-robot contact states based on the fusion of a virtual robot environment and the physical environment is proposed. It overcomes the difficulty of determining the hand-robot contact state under occlusion.

  • This study establishes a human motion intention recognition model based on sEMG signals. The strength of the sEMG signals is used to estimate the force exerted by the operator and to control the robot's motion velocity.

  • A robot motion mode selection module is designed. According to the detected contact information and motion intention, the robot is controlled to realize three motion modes: single-axis motion, linear motion, and repositioning motion.

Abstract

This paper presents a robot teaching system based on hand-robot contact state detection and human motion intention recognition. The system detects the contact state between the human hand and the robot joints and extracts motion intention information from the operator's surface electromyography (sEMG) signals to control the robot's motion. First, a hand-robot contact state detection method is proposed based on the fusion of the virtual robot environment with the physical environment. Using an object detection algorithm, the position of the human hand is identified in the color image of the physical environment and its pixel coordinates are calculated. Meanwhile, synthetic images of the virtual robot environment are combined with images of the physical robot scene to determine whether the human hand is in contact with the robot. In addition, a human motion intention recognition model based on deep learning is designed to recognize human motion intention from sEMG signal inputs. Finally, a robot motion mode selection module is built to control the robot in single-axis motion, linear motion, or repositioning motion by combining the hand-robot contact state with the human motion intention. The experimental results indicate that the proposed system can perform online robot teaching in all three motion modes.
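The architecture of the intention recognition model is not given in this excerpt. Purely as a hedged illustration of the kind of model the abstract describes, the sketch below shows a small 1D-CNN in PyTorch that maps a windowed 8-channel sEMG signal (as produced by a Myo Armband) to motion-intention classes; the window length, layer sizes, and class count are assumptions for illustration, not the authors' design.

```python
# Minimal sketch (not the paper's model): a small 1D-CNN that maps a window of
# 8-channel sEMG samples to one of several motion-intention classes.
# Window length, channel count, class count, and the architecture are assumed.
import torch
import torch.nn as nn

class EMGIntentNet(nn.Module):
    def __init__(self, n_channels=8, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # collapse the time axis
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):              # x: (batch, channels, samples)
        f = self.features(x).squeeze(-1)
        return self.classifier(f)      # class logits

# Example: classify a 200-sample window of 8-channel sEMG.
model = EMGIntentNet()
window = torch.randn(1, 8, 200)        # placeholder for a real Myo window
intent_class = model(window).argmax(dim=1)
```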

Introduction

Industrial robots are widely used in the manufacturing industry because of their safety, efficiency, precision, and repeatability. Compared with humans, they can perform repetitive and time-consuming tasks in harsh environments. In the era of globalization, the manufacturing industry's demand for efficient, flexibly customized, high-mix low-volume production is increasing [1]. Robot programming is crucial to manufacturing productivity, so it needs to be flexible and adaptable to different production scenarios [2]. However, conventional robot programming (mainly online and offline programming) is cumbersome and requires professional operation skills. It not only involves repetitive programming but also lacks the adaptability to meet ever-changing demands, which dampens the enthusiasm of small and medium-sized enterprises (SMEs) for deploying robots [3]. Currently, there are two main online programming methods. The more widely used one employs a teach pendant (i.e., a handheld device) to move the robot to the desired path points and record the relevant robot configurations. However, moving a robot with a teach pendant is not intuitive because the robot's rotation and translation are controlled separately by the pendant's keys. Also, once the robot has been programmed, it is difficult to change the program [4]. The other method requires the operator to manipulate a handle installed at the end of the robot to demonstrate the robot's trajectory. Meanwhile, the robot controller records the demonstrated trajectory and the corresponding joint coordinates so that the trajectory can be reproduced. This approach requires a force/torque sensor or sensorless force estimation techniques to determine the operator's motion intention [5]. The quality of both online programming methods depends on the operator's programming experience and knowledge of the robot [6].

Offline programming allows robot programs to be generated without a physical robot, but it requires the construction of a virtual environment for the robot and its workspace. In this virtual environment, complex robotic cell programs can be generated and pre-checked before they are downloaded to the robot. However, offline programming requires 3D models and a higher level of programming and robot knowledge than online programming to ensure the quality of the robot programs. Also, calibration must be performed within a feasible tolerance region when the program is deployed to the physical robot, owing to the mismatch between real and virtual coordinate systems [7].

As computer technology continues to evolve, new robot programming methods that are intuitive, simple, and suitable for unskilled workers are being developed, such as robot teleoperation [8,9], augmented reality interfaces, and teaching-by-demonstration techniques [10,11]. The common feature of these methods is that they are based on natural user interfaces (NUIs) or tangible user interfaces (TUIs). They aim to express mental concepts through natural and reality-based behaviors, enabling users to manipulate robots directly through actions drawn from everyday practice, such as voice, gestures, touch, and motion tracking, rather than issuing commands via traditional interaction devices such as keyboards, mice, and joysticks [4].

Based on the idea of NUIs and TUIs, this paper presents a robot programming method based on the hand-robot contact state and surface electromyography (sEMG) signals. sEMG signals are superimposed electrical signals formed on the surface of human skin by the motor unit action potential trains (MUAPTs) of motor-associated muscles. They contain rich information about gestures and human intentions and can be acquired with non-invasive techniques [12]. Zajac first introduced an EMG-based control model [13], and Fukuda later used EMG signals to control a human-assisting manipulator [14]. In the proposed system, the motion of the robot is jointly determined by the hand-robot contact state and the human motion intention. Therefore, it is necessary to detect the contact state between the human hand and the robot. Morato constructed a virtual environment for collision detection based on robot motion data and human joint data tracked by multiple Kinects, but the contact position was not determined [15]. To overcome the difficulty of judging the hand-robot contact state under occlusion with a single RGB-D camera, a hand-robot contact state detection method based on the fusion of the virtual robot environment and the physical environment is proposed to determine whether the hand and the robot are in contact and, if so, the contact position. The center of the hand is detected by YOLOv5, so the handling, size, weight, visibility, and sensitivity of the object need not be considered in this study. In addition, according to the hand's position and contact situation, a human motion intention recognition model is established to extract the user's interaction intention from the sEMG signals and to control the robot's motion speed and direction. Without expensive torque sensors or conventional teaching devices, the robot can be taught intuitively through the human-robot interaction information and the design of the robot motion mode selection module.
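To make the virtual-physical fusion idea concrete, the following minimal sketch shows one way such a contact check could be organized: the hand pixel from the YOLOv5 detector is compared against a depth and link-ID map rendered from the calibrated virtual robot, and contact is declared when the measured hand depth matches the rendered robot surface depth at that pixel. The helper functions, dummy values, and the 30 mm tolerance are hypothetical placeholders, not the paper's actual implementation.

```python
# Minimal sketch (assumptions labeled in comments): decide whether the detected
# hand is in contact with the robot by fusing the physical depth image with a
# depth / link-ID map rendered from the calibrated virtual robot model.
import numpy as np

CONTACT_THRESHOLD_MM = 30.0  # assumed tolerance between hand and robot surface depth

def detect_hand_center(color_image):
    """Placeholder for the YOLOv5 hand detector: in the real system this returns
    the pixel center (u, v) of the detected hand bounding box."""
    return 320, 240  # dummy pixel for illustration

def render_virtual_robot(joint_angles, shape=(480, 640)):
    """Placeholder for rendering the calibrated virtual robot: returns a depth map
    (mm) of the robot surface and a per-pixel link-ID map aligned with the color image."""
    return np.full(shape, 900.0), np.full(shape, 3, dtype=int)  # dummy maps

def check_hand_robot_contact(color_image, physical_depth_mm, joint_angles):
    """Fuse the physical depth image with the virtual robot rendering to decide
    whether the hand touches the robot and, if so, which link."""
    u, v = detect_hand_center(color_image)
    virtual_depth_mm, link_id_map = render_virtual_robot(joint_angles)
    hand_depth = physical_depth_mm[v, u]      # measured depth at the hand pixel
    robot_depth = virtual_depth_mm[v, u]      # rendered robot surface depth there
    if robot_depth > 0 and abs(hand_depth - robot_depth) < CONTACT_THRESHOLD_MM:
        return int(link_id_map[v, u])         # contact: return touched link index
    return None                                # no contact

# Example with synthetic data: a hand pixel whose depth matches the robot surface.
depth = np.full((480, 640), 905.0)
print(check_hand_robot_contact(color_image=None, physical_depth_mm=depth,
                               joint_angles=[0.0] * 6))   # -> 3
```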

Section snippets

Related works

This paper presents a robot teaching system based on hand-robot contact state detection and human motion intention recognition. By combining the contact state with the motion intention information extracted from the sEMG signals, the operator can control the robot motion and record the robot joint configuration to complete robot teaching. Therefore, this section analyzes the related literature on human-robot contact state detection and human-robot interaction intention recognition.

System architecture

As shown in Fig. 1, the robot teaching system based on hand-robot contact state detection and motion intention recognition mainly consists of three modules: hand-robot contact state detection module, sEMG-based human motion intention recognition module, and robot motion mode selection module.
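Before each module is detailed, the sketch below gives a rough, hedged illustration of how the three modules' outputs could be combined into a motion command: the contact module reports the touched link, the intention model reports a direction class, and the sEMG amplitude (RMS) scales the commanded speed. The mode-selection rules, class names, and scaling constants are illustrative assumptions rather than the paper's exact logic.

```python
# Minimal sketch (assumed mapping, not the paper's exact rules): combine the
# hand-robot contact state and the sEMG-based motion intention into a command.
import numpy as np

MAX_SPEED = 50.0         # assumed speed limit (mm/s for linear, deg/s for joint motion)
RMS_AT_MAX_SPEED = 60.0  # assumed sEMG RMS value that saturates the speed command

def emg_rms(emg_window):
    """Root-mean-square amplitude of a (channels x samples) sEMG window,
    used as a proxy for how hard the operator is pushing."""
    return float(np.sqrt(np.mean(np.square(emg_window))))

def select_motion_command(contact_link, intention_class, emg_window):
    """Map the contact state and the recognized intention to a motion command.
    contact_link: index of the touched robot link, or None if no contact.
    intention_class: label produced by the intention recognition model."""
    if contact_link is None:
        return {"mode": "idle"}
    # Assumed rule, for illustration only: touching one of the six joint links
    # selects single-axis motion; touching the end effector selects linear or
    # repositioning motion depending on the recognized intention.
    if contact_link <= 6:
        mode = "single_axis"
    elif intention_class == "reorient":
        mode = "repositioning"
    else:
        mode = "linear"
    speed = MAX_SPEED * min(emg_rms(emg_window) / RMS_AT_MAX_SPEED, 1.0)
    return {"mode": mode, "target": contact_link,
            "direction": intention_class, "speed_cmd": speed}

# Example: the operator touches the end effector (link 7) and signals a translation.
window = np.random.randn(8, 200) * 20.0   # synthetic 8-channel sEMG window
print(select_motion_command(contact_link=7, intention_class="translate_x",
                            emg_window=window))
```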

The hand-robot contact state detection module is composed of a virtual robot environment, a YOLOv5 [50]-based hand detection model, and a contact information calculation module. In this study, a virtual

System design

To verify the feasibility of the proposed method, this study designs a robot teaching prototype system based on hand-robot contact state detection and motion intention recognition. The experimental setup is shown in Fig. 9. The hardware of the system includes an ABB IRB120 robot, a Kinect v2 camera, a bracket, a computer (CPU: i7-9700, RAM: 16 GB, graphics card: RTX 2060), and a Myo Armband. The Kinect v2 camera is fixed to ensure that the working space of the robot is within the camera's field

Conclusions

This paper proposes a robot teaching system based on hand-robot contact state detection and motion intention recognition. The robot motion mode selection module of this system combines the hand-robot contact state and the human motion intention to realize three robot motion modes (single-axis motion, linear motion, and repositioning motion) for online teaching intuitively. To solve the occlusion problem in determining the contact state by a single depth camera, a method for detecting hand-robot

CRediT authorship contribution statement

Yong Pan: Methodology, Software, Writing – original draft. Chengjun Chen: Supervision, Conceptualization, Methodology, Writing – review & editing. Zhengxu Zhao: Supervision, Writing – review & editing. Tianliang Hu: Conceptualization. Jianhua Zhang: Conceptualization.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

This work was co-supported by the National Natural Science Foundation of China (Grant No. 52175471) and the Science & Technology Support Project for Young People in Colleges of Shandong Province (Grant No. 2019KJB020).

References (64)

  • S.K. Ong et al.

    Augmented reality-assisted robot programming system for industrial applications

    Robot. Comput. Integr. Manuf.

    (2020)
  • G. Du et al.

    A novel human–manipulators interface using hybrid sensors with Kalman filter and particle filter

    Robot. Comput. Integr. Manuf.

    (2016)
  • A. Buerkle et al.

    EEG based arm movement intention recognition towards enhanced safety in symbiotic human-robot collaboration

    Robot. Comput. Integr. Manuf.

    (2021)
  • C. Chen et al.

    A virtual-physical collision detection interface for AR-based interactive teaching of robot

    Robot. Comput. Integr. Manuf.

    (2020)
  • E. Magrini et al.

    Human-robot coexistence and interaction in open industrial cells

    Robot. Comput. Integr. Manuf.

    (2020)
  • Y. Pan et al.

    Augmented reality-based robot teleoperation system using RGB-D imaging and attitude teaching device

    Robot. Comput. Integr. Manuf.

    (2021)
  • D. Massa et al.

    Manual guidance for industrial robot programming

    Ind. Robot An Int. J.

    (2015)
  • J. Lambrecht et al.

    Spatial programming for industrial robots through task demonstration

    Int. J. Adv. Rob. Syst.

    (2013)
  • M. Capurso et al.

    Sensorless kinesthetic teaching of robotic manipulators assisted by observer-based force control

  • H. Hedayati et al.

    Improving collocated robot teleoperation with augmented reality

  • D. Kent et al.

    A comparison of remote robot teleoperation interfaces for general object manipulation

  • J. Aleotti et al.

    Position teaching of a robot arm by demonstration with a wearable input device

  • H. Xu et al.

    Advances and disturbances in sEMG-based intentions and movements recognition: a review

    IEEE Sens. J.

    (2021)
  • F.E. Zajac

    Muscle and tendon: properties, models, scaling, and application to biomechanics and motor control

    Crit. Rev. Biomed. Eng.

    (1989)
  • O. Fukuda et al.

    A human-assisting manipulator teleoperated by EMG signals and arm motions

    IEEE Trans. Robot. Autom.

    (2003)
  • C. Morato et al.

    Safe human robot interaction by using exteroceptive sensing based human modeling

  • M. Lippi et al.

    A data-driven approach for contact detection, classification and reaction in physical human-robot collaboration

  • S. Haddadin et al.

    Robot collisions: a survey on detection, isolation, and identification

    IEEE Trans. Robot.

    (2017)
  • S. Morinaga et al.

    Collision detection system for manipulator based on adaptive impedance control law

  • M. Geravand et al.

    Human-robot physical interaction and collaboration using an industrial robot with a closed control architecture

  • E. Magrini et al.

    Estimation of contact forces using a virtual force sensor

  • N. Briquet-Kerestedjian et al.

    Using neural networks for classifying human-robot contact situations
