Full length Article
A virtual-physical collision detection interface for AR-based interactive teaching of robot

https://doi.org/10.1016/j.rcim.2020.101948

Abstract

At present, online lead-through and offline programming methods are widely used to program industrial robots. However, both methods have drawbacks for unskilled shopworkers. This paper presents an Augmented Reality (AR)-based interactive robot teaching programming system, which virtually projects the robot onto the physical industrial environment. Unskilled shopworkers can use a Handheld Teaching Device (HTD) to make the end-effector of the virtual robot follow the endpoint of the HTD. In this way, the path of the virtual robot can be planned or tested interactively. In addition, detecting collisions between the virtual robot and the physical environment is key to testing the feasibility of a robot path. Therefore, this paper presents a method for detecting virtual-physical collisions by comparing the depth values of corresponding pixels in the depth image acquired by a Kinect sensor and in a computer-generated depth image, so that collision-free paths of the virtual robot can be obtained. A quadtree model is used to accelerate the collision detection process and to compute the distance between the virtual model and the physical environment. Using the AR-based interactive robot teaching programming system presented in this paper, all workers, even those unskilled in robot programming, can quickly and effectively obtain collision-free robot paths.

Introduction

In recent years, the manufacturing industry has witnessed a dramatic increase in the use of robots. A major reason is the superiority of robots over humans in completing tasks that are repetitive, difficult, time-consuming, or dangerous. Such tasks include welding, pick-and-place activities, and assembly [1]. A major inhibitor to the deployment of robots in Small and Medium Enterprises (SMEs) is the lack of workers skilled in programming [2]. Online lead-through and offline programming methods are widely used in industry. However, each method has drawbacks. Online lead-through programming may cause physical harm to programmers, while offline programming requires an extensive modeling process that captures the underlying physical environment in 3D. The modeling process demands considerable skill and finesse, and is therefore time-consuming and costly. At present, the manufacturing industry is undergoing a significant shift from mass production to mass customization. As the name suggests, the programming code for controlling robots undergoes frequent changes in mass customization production, to accommodate the differing requirements of different types of products. Inevitably, this leads to an increase in resources and workload, especially for SMEs. Consequently, a safe, efficient, intuitive, and low-cost method for programming industrial robots is needed in modern industry.

A variation of Virtual Reality (VR) is Augmented Reality (AR), which combines the digital and physical worlds of users by superimposing computer-generated information (for example, text, 3D graphics, or animations) onto the physical world [3]. With AR technology, users can experience the real and virtual worlds simultaneously, because real and virtual 3D objects are combined in the same environment. AR technology thus has great potential to accommodate the needs of modern applications for programming industrial robots [4], [5].

In AR-based programming of an industrial robot, the robot itself is virtually projected onto the physical industrial environment. The alignment of the virtual robot with the physical environment is achieved using different kinds of AR registration methods. Operators can test their programmed paths by controlling the movements of the virtual robot through the environment. Previous works [6], [7], [8] have extensively reported that AR technology provides significant benefits to users, which has even led to a new programming paradigm known as AR-based industrial robot Programming by Demonstration (PbD).

Several AR-based industrial robot PbD systems have been developed. Chong et al. [9] discussed the use of AR environments to make robot programming more intuitive. They showed how AR could be used to move an industrial robot through a 3D environment without colliding with other objects. Fang et al. [10], [11], [12], [13] presented an industrial robot PbD system that explored trajectory planning (considering the dynamic constraints of the robot), orientation planning of the robot end-effector, human-virtual robot interaction methods, and adaptive path planning and optimization methods. In [14], a visuo-haptic AR system was presented to manipulate objects and learn tasks from human demonstrations. The system enabled users to operate a haptic device to interact with objects in a virtual environment. Ni et al. [15] proposed an intuitive user interface for programming welding robots remotely, using AR with haptic feedback. The system employed a depth camera to reconstruct the surfaces of workpieces, and a haptic input device allowed users to define welding paths along these surfaces.

One of the key objectives of AR-based industrial robot PbD is the seamless mixing of the real world with the virtual world. AR registration can accurately align the virtual world with the physical world. In the past, research has focused on AR registration or tracking methods, including methods as diverse as magnetic tracking, vision-based tracking, inertial tracking, GPS tracking, and hybrid tracking. Vision-based tracking methods, including marker-based tracking [16] and marker-less tracking [17], have been studied widely.

AR registration or tracking only projects the virtual world onto the physical world to achieve visual consistency in AR applications. In AR-based programming of an industrial robot, however, the robot must not collide with other objects in the physical environment while moving along the programmed path. Hence, it becomes necessary to detect collisions between the virtual robot and the physical environment. The main aim of this paper is to present an AR-based interactive robot teaching programming system that facilitates the development of programming code guaranteeing collision-free paths of robots through their environment. The paper also proposes a depth-image-based method to detect virtual-physical collisions in AR-based industrial robot programming.

The main contributions of this work can be listed as follows: (1) We propose a virtual-physical collision detection method for AR applications that can check for collisions and calculate the minimum distance between the virtual model and the physical environment. With the presented virtual-physical collision detection method and AR technology, the virtual model and the real environment are combined and interact with each other both visually and in 3D space. (2) We design an AR-based interactive robot teaching programming system with which even shopworkers unskilled in robot programming can obtain collision-free robot paths quickly and effectively.

This paper is organized as follows: Section 2 summarizes related studies in this area. Section 3 describes the AR-based interactive robot teaching programming system. Section 4 outlines the general method to detect virtual-physical collisions. Acquiring depth images is explained in detail in Section 5. Section 6 elaborates on the specifics of the collision detection method. Experiments that demonstrate the effectiveness and efficiency of our method are provided in Section 7. Section 8 contains our conclusions and future work.

Section snippets

Related works

Collision detection among virtual models has been studied extensively. However, only a few studies address the issue of virtual-physical collision detection in AR applications. This is due to significant technical challenges in terms of fast and accurate 3D reconstruction of the physical world, as well as the efficiency of collision detection.

We will now review some of the methods that have been proposed to (1) evaluate the likelihood of collisions between virtual robots and physical obstacles and (2)

AR-based interactive robot teaching programming system

As shown in Fig. 1, an AR-based interactive robot teaching programming system for industrial robots is presented in this study. It has four components:

  • (1) A tracking system,
  • (2) A Handheld Teaching Device (HTD),
  • (3) A virtual robot model (including a 3D model, a forward kinematics model, and an inverse kinematics model), and
  • (4) An AR registration module.

In the tracking system, 5 infrared cameras are used to track the marked points that are attached to the HTD. The AR registration module is used to track the position and

Principles

This paper proposes a virtual-physical collision detection method based on depth images. As illustrated by the dashed red lines in Fig. 3, the movements of the virtual robot might easily cause its right side to collide with the left side of other physical entities, such as humans or workpieces. To avoid such collisions, a physical depth sensor (Kinect) is placed on the left side of the entities to obtain a depth image of the left side of the physical environment. Across the physical depth
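A minimal sketch of this per-pixel depth comparison follows. It is an illustration rather than the authors' code: it assumes that both depth images are expressed in millimetres, share the sensor's resolution and viewpoint, and that a collision is indicated when the virtual robot reaches or passes the physical surface along a pixel ray; DepthImage and minClearanceMm are illustrative names.

```cpp
#include <algorithm>
#include <cstdint>
#include <limits>
#include <vector>

// A depth image in millimetres, row-major; 0 marks an invalid/unmeasured pixel.
struct DepthImage {
    int width = 0, height = 0;
    std::vector<uint16_t> depth_mm;
    uint16_t at(int x, int y) const { return depth_mm[y * width + x]; }
};

// Returns the minimum clearance (mm) between the virtual robot and the physical
// scene over all valid pixels; a result <= 0 indicates a virtual-physical collision.
int minClearanceMm(const DepthImage& physical, const DepthImage& virtualRobot) {
    int minClearance = std::numeric_limits<int>::max();
    for (int y = 0; y < physical.height; ++y) {
        for (int x = 0; x < physical.width; ++x) {
            uint16_t dPhys = physical.at(x, y);
            uint16_t dVirt = virtualRobot.at(x, y);
            if (dPhys == 0 || dVirt == 0) continue;        // skip invalid pixels
            minClearance = std::min(minClearance, int(dPhys) - int(dVirt));
        }
    }
    return minClearance;
}
```

In this form, the sign of the returned clearance distinguishes free space (positive) from contact or penetration (non-positive), matching the twofold goal of detecting collisions and reporting the minimum virtual-physical distance.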

The computer-generated depth image of the virtual robot

As we have seen in the previous section, our model for detecting collisions in mixed virtual and physical scenes is based on comparing the depth values of the virtual and physical scenes. This makes it necessary to use a virtual depth sensor to capture the possible collision area in the virtual scene. Furthermore, the virtual depth sensor model is used to generate a new depth image of the possible collision area by aligning the virtual depth sensor with the physical depth sensor (namely, the
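One common way to obtain such a computer-generated depth image is sketched below: the virtual robot is rendered from a virtual camera placed at the pose (and with the intrinsics) of the physical Kinect, and the resulting hardware z-buffer is converted to metric depth so that it can be compared with the Kinect depth image. The perspective linearization formula and the helper names (linearizeDepth, depthBufferToMm) are assumptions for illustration; the paper's own OpenSceneGraph rendering pipeline is not reproduced here.

```cpp
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <vector>

// Converts one OpenGL-style depth-buffer value d in [0, 1] (perspective projection
// with near/far clip planes in metres) to eye-space depth in metres.
inline float linearizeDepth(float d, float zNear, float zFar) {
    float zNdc = 2.0f * d - 1.0f;                          // [0,1] -> [-1,1]
    return 2.0f * zNear * zFar / (zFar + zNear - zNdc * (zFar - zNear));
}

// Converts a rendered depth buffer to a millimetre depth image compatible with the
// Kinect image (uint16_t, 0 = background where no virtual geometry was drawn).
std::vector<uint16_t> depthBufferToMm(const std::vector<float>& zBuffer,
                                      float zNear, float zFar) {
    std::vector<uint16_t> out(zBuffer.size(), 0);
    for (std::size_t i = 0; i < zBuffer.size(); ++i) {
        if (zBuffer[i] >= 1.0f) continue;                  // far plane: no geometry
        float metres = linearizeDepth(zBuffer[i], zNear, zFar);
        out[i] = static_cast<uint16_t>(std::lround(metres * 1000.0f));
    }
    return out;
}
```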

Virtual-physical collision detection

The first step towards detecting collisions is to employ the quadtree model to determine the maximum and minimum depths among all pixels within the rectangular areas (or nodes) of the images, from the leaf nodes up to the root node. As shown in Fig. 7, the calculation loop is as follows (a simplified sketch is given after the list):

  • (1) For each leaf node, get the maximum and minimum depth values by traversing all pixels within the rectangular area that the leaf node represents.

  • (2) Using the coding rules above, calculate the identifier for the parent
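The following is a simplified sketch of this bottom-up min/max construction. It assumes a square grid of leaf tiles whose side length is a power of two and uses a plain level/index layout instead of the paper's node-coding rules; MinMax and buildQuadtree are illustrative names.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Minimum and maximum depth (mm) of the image block covered by one quadtree node.
struct MinMax { uint16_t minD, maxD; };

// leaves: one MinMax per leaf block (e.g. per 8x8 pixel tile), laid out row-major
// on a side x side grid. Returns all levels, from the leaves up to the root node.
std::vector<std::vector<MinMax>> buildQuadtree(std::vector<MinMax> leaves, int side) {
    std::vector<std::vector<MinMax>> levels{std::move(leaves)};
    while (side > 1) {
        int parentSide = side / 2;
        std::vector<MinMax> parents(parentSide * parentSide);
        for (int py = 0; py < parentSide; ++py) {
            for (int px = 0; px < parentSide; ++px) {
                MinMax m{uint16_t(65535), uint16_t(0)};
                for (int dy = 0; dy < 2; ++dy)             // merge the four children
                    for (int dx = 0; dx < 2; ++dx) {
                        const MinMax& c = levels.back()[(2 * py + dy) * side + (2 * px + dx)];
                        m.minD = std::min(m.minD, c.minD);
                        m.maxD = std::max(m.maxD, c.maxD);
                    }
                parents[py * parentSide + px] = m;
            }
        }
        levels.push_back(std::move(parents));
        side = parentSide;
    }
    return levels;                                          // levels.back() holds the root
}
```

Once such node-level bounds exist, whole image regions can be rejected early during the collision query: if the minimum physical depth of a node already exceeds the maximum virtual depth of the corresponding node, none of that node's pixels can be in collision and they need not be visited individually.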

Experiment

All experiments were run on a computer with the following hardware and software specifications. The computer is a notebook with an Intel Core i7-7700HQ CPU @ 2.80 GHz and a GTX1050i graphics card, and a Kinect V2 is used as the depth sensor. The relevant software includes VS2010, OpenCV3.0, OpenSceneGraph3.0, and the Kinect for Windows SDK. For our research, the ARToolkit 5.2 SDK is adopted as the AR development platform, integrated with OpenSceneGraph3.0. This resulted in a simple and efficient setup of an AR-based system, and made it

Conclusions

This study presents an AR-based interactive robot teaching programming system, which virtually projects the robot onto the physical industrial environment. The movement of the virtual robot is controlled by the HTD, the position and orientation of which are tracked using the tracking system. To obtain collision-free paths for the virtual robot, this study also proposes a method to detect virtual-physical collisions by comparing the depth values of corresponding pixels in the depth image acquired by

Acknowledgments

This work was co-supported by the China NSFC project (Grant Nos. 51475251 and 51705273) and the key research and development programs of Shandong Province (Grant No. 2017GGX203003).

References (33)

  • M. Zaeh et al., Interactive laser-projection for programming industrial robots
  • H.C. Fang et al., Orientation planning of robot end-effector using augmented reality, Int. J. Adv. Manuf. Technol. (2013)
  • H.C. Fang et al., A novel augmented reality-based interface for robot path planning, Int. J. Interact. Des. Manuf. (2014)
  • H.C. Fang et al., Adaptive pass planning and optimization for robotic welding of complex joints, Adv. Manuf. (2017)
  • J. Aleotti et al., Object interaction and task programming by demonstration in visuo-haptic augmented reality, Multimed. Syst. (2016)
  • D. Ni et al., Haptic and visual augmented reality interface for programming welding robots, Adv. Manuf. (2017)