
Development of a tour guide and co-experience robot system using the quasi-zenith satellite system and the 5th-generation mobile communication system at a Theme Park

Abstract

Autonomous robots are expected to replace dangerous, dirty, and demanding (3D) jobs. At a theme park, surveillance, cleaning, and guiding tasks can be regarded as 3D jobs. The present paper attempts to develop an autonomous tour guide robot system and a co-experience system at a large theme park. For realizing such autonomous service robots used in our daily environment, localization is one of the most important and fundamental functions. A number of localization techniques, including simultaneous localization and mapping, have been proposed. Although a global navigation satellite system (GNSS) is most commonly used in outdoor environments, its accuracy is approximately 10 m, which is inadequate for navigation of an autonomous service robot. Therefore, a GNSS is usually used together with other localization techniques, such as map matching or camera-based localization. In the present study, we adopt the quasi-zenith satellite system (QZSS), which became available in and around Japan in November 2018, for the localization of an autonomous service robot. The QZSS provides high-accuracy position information using quasi-zenith satellites (QZSs), with a localization error of less than 10 cm. In the present paper, we compare the positioning performance of the QZSS and the real-time kinematic GPS and verify the stability and the accuracy of the QZSS in an outdoor environment. In addition, we introduce a tour guide robot system using the QZSS and present the results of a guided tour experiment in a theme park. Furthermore, building on the tour guide system, we develop a co-experience robot system for the theme park that realizes the sharing of experiences using an immersive VR display and the 5th-generation mobile communication system (5G). The robot is equipped with a 360-degree video camera, and real-time 4K video is transmitted to a remote operator using the large communication capacity of the 5G network. The experimental results at the theme park show that the guided tour experiment was successful and that the co-experience system allowed a remote operator to share the experience with high immersion.

Introduction

The service robot is an effective solution to various social problems, such as a super-aged society, labor shortages caused by depopulation in rural areas, and dangerous, dirty, and demanding (3D) jobs, for example, surveillance, cleaning, and guiding tasks. Recently, new and advanced technologies that are important factors in realizing intelligent service robots have been actively developed. The quasi-zenith satellite system (QZSS) [1] and the 5th-generation mobile communication system (5G) are among these technologies. We use these technologies to develop two types of service robot system, namely, a tour guide robot system and a co-experience robot system. Figures 1 and 2 show conceptual diagrams of these systems.

Fig. 1

Conceptual diagrams of the tour guide system. The robot conducts a tour using QZSS, and some information obtained from the robot is monitored from the server

Fig. 2

Conceptual diagrams of the co-experience system. The user can share an experience (360-degree 4K video) of the robot remotely through the 5G network

Localization is one of the most important and fundamental functions for an autonomous service robot such as the tour guide robot. In an outdoor environment, a global navigation satellite system (GNSS), in particular, the Global Positioning System (GPS), is the most popular technique. However, the accuracy of a GNSS is approximately 10 m, which is inadequate for navigation of an autonomous service robot. Therefore, the real-time kinematic GPS (RTK-GPS) or a virtual reference station (VRS), which provides centimeter-class positioning, is used for accurate navigation of an autonomous service robot, such as a personal mobility vehicle or a delivery robot. A number of autonomous robot systems using the RTK-GPS have been proposed [2,3,4,5,6]. In [2], the authors proposed a robust and precise localization system that achieves centimeter-level accuracy in diverse city scenes. In this system, the measurements of the RTK-GNSS, a LiDAR, and an IMU are fused in a sensor fusion framework using an error-state Kalman filter. In [3], the authors proposed a high-precision localization method that treats global pose estimation as a pose graph optimization problem. Both the RTK-GPS and wheel odometry are used as constraints of the pose graph. In [4], the authors proposed a sensor fusion method for 3D mapping and localization using multiple heterogeneous and asynchronous sensors. In this system, the authors first create an accurate prior map by ORB-SLAM [7] and LOAM [8] using a vehicle that has an RTK-GPS sensor unit. After the prior map is created, it is used for localization in the GPS frame of reference without the use of GPS, so that localization in GPS-denied environments, such as tunnels or parking garages, is also possible. In [5], an integrated framework for 3D underground mapping with a mobile rover based on ground-penetrating radar (GPR) data is proposed. In this system, the RTK-GPS is used for accurate geo-referencing. In [6], an underwater localization system for an underwater mining vehicle (MV) and a surface launch and recovery vessel (LARV) was proposed, where the LARV supports the MV. In this system, the RTK-GNSS is used for the localization of the LARV using RTKLIB [9].

The QZSS began operating in November 2018 in and around Japan. The QZSS provides high-accuracy position information, with a localization error of less than 10 cm, by using electronic reference points and four quasi-zenith satellites (QZSs). These satellites transmit signals not only for localization but also for error correction based on the electronic reference points. Therefore, the base station required for the RTK-GPS is unnecessary, and the QZSS is easy to use for centimeter-class positioning compared with the RTK-GPS. Since the QZSS can be used without the communication between the two stations required for the RTK-GPS, it is suitable for mobile robots that move over a wide area, such as the tour guide robot system proposed herein.

Furthermore, due to the rapid development of communication technology, various systems for conducting service tasks using robots over networks have been constructed [10,11,12]. Regarding communication technology, 5G is being developed and introduced as a next-generation wireless communication technology that enables large volumes of data to be handled by wireless communication. Wireless communication is an indispensable technology for autonomous mobile service robots, and more advanced services become possible by using 5G. In the present study, we develop a co-experience system that allows a user at a remote location to experience the images and sounds around the robot by transferring the 360-degree 4K video captured by the robot over 5G. Using this system, for example, older people with lower-limb disabilities can enjoy experiences from a remote place, such as taking a stroll with a grandchild, through the robot, and the high-capacity communication of 5G provides high immersion.

In the remainder of the present paper, we compare the positioning performance of the RTK-GPS and the QZSS and verify the stability and accuracy of the QZSS in an outdoor environment. In addition, we introduce the configuration of a tour guide robot system using the QZSS and a co-experience system using the 5G network. Finally, we present an experiment for the tour guide system and co-experience system in a theme park.

Centimeter-class positioning by GNSS

For high-accuracy measurement using a GNSS, error correction is very important. Errors include the clock error of the satellite, the clock error of the receiver, the position error of the satellite, ionospheric delay, tropospheric delay, multipath effects, and receiver noise [13, 14]. In this section, we explain the measurement procedure and the error correction system for the RTK-GNSS and the QZSS.

Real-time kinematic GPS

The RTK-GPS uses two modules: a base station and a rover station. Using these modules, the RTK-GPS calculates the double difference of the carrier phase and achieves high-accuracy measurement. The double difference is calculated from the carrier phases observed at the base and rover stations from two satellites. Here, the carrier phases of satellite A observed at the base station and the rover station are denoted as \(\phi _{b}^{A}\) and \(\phi _{r}^{A}\), respectively, and likewise \(\phi _{b}^{B}\) and \(\phi _{r}^{B}\) for satellite B. The double difference of the carrier phase \(D\phi _{br}^{AB}\) is calculated as \(D\phi _{br}^{AB} = (\phi _{r}^{A} - \phi _{b}^{A}) - (\phi _{r}^{B} - \phi _{b}^{B})\). This calculation removes the clock errors of the satellites and the receivers. In addition, if the distance between the base and rover stations is less than a certain value, then the ionospheric and tropospheric delays can be removed. Furthermore, by using information on the pseudo-ranges between multiple satellites and the receivers, we can determine the integer ambiguities remaining as errors and thereby realize centimeter-class positioning. In the present paper, we use the MJ-2001-GL1 (Magellan Systems Japan Inc.; Fig. 3) as the RTK-GPS module in the experiments.
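As a concrete illustration of this computation, the following minimal Python sketch (our own illustration with hypothetical phase values, not part of the authors' receiver software) forms the double difference from the four carrier-phase observations defined above.

```python
# Minimal illustration of the double difference of carrier phases.
# The four phase values below are hypothetical (in cycles).

def double_difference(phi_b_A, phi_r_A, phi_b_B, phi_r_B):
    """Return D_phi_br^AB = (phi_r^A - phi_b^A) - (phi_r^B - phi_b^B).

    The between-receiver difference for each satellite cancels that
    satellite's clock error; differencing the two results then cancels
    the receiver clock errors.
    """
    single_diff_A = phi_r_A - phi_b_A
    single_diff_B = phi_r_B - phi_b_B
    return single_diff_A - single_diff_B

print(double_difference(1204321.25, 1204330.75, 2310987.50, 2310994.00))
```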

Fig. 3

Real-time kinematic GPS module (MJ-2001-GL1, Magellan Systems Japan Inc.)

Quasi-zenith satellite system

The QZSS uses four QZSs, referred to as the “Michibiki” constellation, and began operating in November 2018 in Japan. Whereas the RTK-GPS uses two sets of modules, the QZSS provides highly accurate positioning with only one module, consisting of an antenna and a receiver. As explained above, the RTK-GPS uses the correction signal measured by the base station. On the other hand, the QZSS generates an error correction signal using observation data at electronic reference points placed very densely throughout Japan, and the correction information is transmitted to the user terminal via the satellites. This correction method is referred to as centimeter-level augmentation [15, 16], and centimeter-class positioning has been realized in and around Japan. The quasi-zenith orbit is shown in Fig. 4. This orbit is an asymmetrical trajectory, and each QZS traverses it in the course of one day. By placing four satellites on this quasi-zenith orbit with their positions shifted in time, a high elevation angle to at least one QZS can always be obtained in and around Japan.

In the present paper, we used two QZSS modules: the AQLOC-V (Mitsubishi Electric Inc.; Fig. 5) and the MJ-3021-GM4-QZS-EVK (Magellan Systems Japan, Inc.; Fig. 6). The AQLOC-V was used for the experiments comparing the performance of the RTK-GPS and the QZSS and for the first version of the developed robot system. The MJ-3021-GM4-QZS-EVK was used for the second version of the developed robot system because a more compact module was necessary. The robot systems are described in detail in the section on the configuration of the developed robot system.

Fig. 4

Quasi-zenith orbit. The red-colored trajectory is the quasi-zenith orbit [17]. The trajectory covers Japan and Australia

Fig. 5

Quasi-zenith satellite system module AQLOC-V (Mitsubishi Electric Inc.). This module was used for the first version of the developed system

Fig. 6

Quasi-zenith satellite system module MJ-3021-GM4-QZS-EVK (Magellan Systems Japan, Inc.). This module was used for the second version of the developed system. This module is more compact than the AQLOC-V

Centimeter-level augmentation service

The centimeter-level augmentation service (CLAS) is a unique function of the QZSS. The “Michibiki” constellation adopts a state space representation (SSR) method [18] for the CLAS and realizes centimeter-class positioning using the L6 signal, which is an auxiliary signal of a QZS. The centimeter-level augmentation information generated at the control segment is based on a dynamic error model, called the state space model (SSM), built from observation data of the electronic reference point network. Each error component, such as the clock error, the satellite orbit error, ionospheric delay, tropospheric delay, and signal bias, is expressed as an SSR. The flow of the centimeter-level augmentation is shown in Fig. 7.

Fig. 7

Flow of centimeter-level augmentation

Based on the positioning information at the electronic reference points, whose latitude and longitude are known, the correction information for removing the errors is created at a facility called the monitoring station and is transmitted to the QZSs via the antenna of the tracking station. Then, by receiving the correction information simultaneously with the positioning signal on the user terminal side, centimeter-class positioning is realized.
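Conceptually, a CLAS-capable receiver subtracts each broadcast error component from its raw observations before solving for position. The following fragment is only our own sketch of that bookkeeping, with hypothetical values and simplified sign conventions; the actual correction algorithm and L6 message format are defined in the interface specification [16].

```python
from dataclasses import dataclass

@dataclass
class SSRCorrection:
    """Error components broadcast as a state space representation (metres).

    The values used below are hypothetical placeholders.
    """
    satellite_clock: float
    satellite_orbit: float      # orbit error projected onto the line of sight
    ionospheric_delay: float
    tropospheric_delay: float
    signal_bias: float

def corrected_range(raw_range_m: float, c: SSRCorrection) -> float:
    """Subtract the SSR components from a raw pseudorange (simplified signs)."""
    total_error = (c.satellite_clock + c.satellite_orbit
                   + c.ionospheric_delay + c.tropospheric_delay + c.signal_bias)
    return raw_range_m - total_error

print(corrected_range(22_345_678.9, SSRCorrection(1.2, -0.3, 2.5, 2.1, 0.05)))
```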

Accuracy measurement experiments

In order to verify the measurement accuracy of the QZSS, we compared the positioning performance of the RTK-GPS and the QZSS in the stand-still state and in motion, both in an open-sky environment and in a partially obscured environment in which buildings block portions of the sky.

Measurement accuracy in the stand-still state in an open-sky environment

In this experiment, the distributions of the positioning data about their average values obtained by the RTK-GPS and the QZSS were compared in the stand-still state. The results are shown in Fig. 8.

Fig. 8

Distribution of positioning data of the RTK-GPS (upper graph) and the QZSS (lower graph). The right-hand side panels and the lower panels of each graph show the histograms for each of the corresponding axes

Based on these results, the RTK-GPS can perform positioning more stably than the QZSS in the stand-still state. One reason for this is the difference in the mechanism of position information correction, i.e., the base station is placed close to the rover station in the RTK-GPS. However, the errors of the QZSS are less than approximately ±4 cm, which is sufficient for most applications of autonomous service robots. A more detailed discussion is presented in the section on the performance comparison of the RTK-GPS and the QZSS.

Measurement accuracy in motion in an open-sky environment

In-motion experiments were conducted with the RTK-GPS and the QZSS mounted on mobile robots. We compared the values measured by the RTK-GPS and the QZSS with the ground-truth values measured by a robotic total station (GPT-9005A, TOPCON, Inc.). The measurement accuracy and frequency of the robotic total station are approximately ±7 mm and 1.7 Hz, respectively. The latitude, longitude, and orientation of the robotic total station were measured using prism poles and the QZSS (Fig. 10).

Figure 9 shows the experimental environment, which is a square space of 18 m \(\times\) 18 m, and the orange, green, and blue circles in Fig. 9 indicate the initial position of the mobile robot, the position of the robotic total station, and the position of the prism pole, respectively. In this experiment, the maximum linear velocity of the robot was set to 0.1 m/s for stable measurement using the total station.

Fig. 9

Experimental conditions

Fig. 10

Prism pole (left window) and robotic total station (right window)

In order to track the position of the mobile robot in motion with the robotic total station, a prism is mounted between the GNSS antenna and the mobile robot, as shown in Fig. 11.

Fig. 11

Setup of the GNSS antenna and prism for the comparison experiment. The left-hand side describes the configuration for the QZSS, and the right-hand side describes the configuration for the RTK-GPS

The trajectories measured by the RTK-GPS, the QZSS, and the robotic total station are shown in Fig. 12. In these figures, the green lines indicate the trajectories obtained using the robotic total station, and the blue lines indicate the trajectories obtained using the RTK-GPS or the QZSS.

Fig. 12

Measured trajectories. Green lines indicate the trajectories obtained using the robotic total station, and blue lines indicate the trajectories obtained using the RTK-GPS (left figure) and QZSS (right figure)

The maximum value (MAX), root mean square (RMS), and standard deviation (SD) of the differences between the positions measured by the GNSS and the robotic total station are shown in Table 1.

Table 1 Differences between the positions measured by the global navigation satellite system (GNSS) and the robotic total station
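For reference, the statistics reported in Tables 1 and 2 can be reproduced from the logged trajectories in a few lines. The following NumPy sketch is our own illustration with hypothetical, already time-aligned position arrays.

```python
import numpy as np

# gnss_xy and ts_xy: time-aligned planar positions in metres measured by the
# GNSS receiver and by the robotic total station (hypothetical sample data).
gnss_xy = np.array([[0.00, 0.00], [0.52, 0.01], [1.03, 0.02]])
ts_xy   = np.array([[0.01, 0.01], [0.50, 0.00], [1.05, 0.04]])

diff = np.linalg.norm(gnss_xy - ts_xy, axis=1)   # per-sample horizontal error
print("MAX:", diff.max())
print("RMS:", np.sqrt(np.mean(diff ** 2)))
print("SD :", diff.std())
```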

We also applied an extended Kalman filter (EKF) to integrate the GNSS and the wheel odometry by a mobile robot using wheel encoders. The results are shown in Fig. 13 and Table 2.

Fig. 13

Measured trajectories using the EKF. Green lines indicate the trajectories obtained using the robotic total station, and purple lines indicate the trajectories obtained using the GNSS and wheel odometry. The left-hand figure shows the result when using the RTK-GPS, and the right-hand figure shows the result when using the QZSS

Table 2 Differences between the positions obtained using the GNSS with wheel odometry and the robotic total station

Based on these results, the localization after integrating the GNSS with the wheel odometry by the EKF is slightly more accurate than the GNSS itself. In addition, the accuracy of the RTK-GPS is slightly higher than that of the QZSS. A more detailed discussion is presented in the section describing the performance comparison.
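For readers unfamiliar with this type of fusion, the sketch below shows one minimal way to combine wheel odometry and GNSS fixes in a planar EKF. It is our own illustration with assumed noise parameters and a simple unicycle model; the developed robot system itself uses the robot_localization package described in the navigation section.

```python
import numpy as np

class PlanarEKF:
    """Minimal EKF fusing wheel odometry (v, omega) with GNSS (x, y) fixes.

    State: [x, y, yaw]. The noise parameters below are illustrative guesses,
    not the values used in the experiments.
    """
    def __init__(self):
        self.x = np.zeros(3)                  # [x, y, yaw]
        self.P = np.eye(3)                    # state covariance
        self.Q = np.diag([0.02, 0.02, 0.01])  # odometry (process) noise
        self.R = np.diag([0.04**2, 0.04**2])  # GNSS noise (~4 cm std)

    def predict(self, v, omega, dt):
        x, y, yaw = self.x
        self.x = np.array([x + v * np.cos(yaw) * dt,
                           y + v * np.sin(yaw) * dt,
                           yaw + omega * dt])
        F = np.array([[1.0, 0.0, -v * np.sin(yaw) * dt],
                      [0.0, 1.0,  v * np.cos(yaw) * dt],
                      [0.0, 0.0,  1.0]])
        self.P = F @ self.P @ F.T + self.Q

    def update_gnss(self, z_xy):
        H = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
        innovation = np.asarray(z_xy) - H @ self.x
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)   # Kalman gain
        self.x = self.x + K @ innovation
        self.P = (np.eye(3) - K @ H) @ self.P

ekf = PlanarEKF()
ekf.predict(v=0.1, omega=0.0, dt=0.5)  # wheel-odometry step at 0.1 m/s
ekf.update_gnss([0.06, 0.01])          # centimetre-class GNSS fix
print(ekf.x)
```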

Experiment in a partially obscured environment

In this experiment, we ran the robot along a route close to high-rise buildings and compared the positioning accuracy and stability of the RTK-GPS and the QZSS. The results are shown in Fig. 14. The fixed solution provides positioning results with high reliability and accuracy, whereas the independent solution yields positioning results with low reliability and accuracy.

Fig. 14

Measured trajectories of the RTK-GPS (upper side) and the QZSS (lower side) described by markers. The blue and green markers indicate the fixed and float solutions, respectively, and the orange markers indicate independent solutions

Based on these results, the QZSS maintains a fixed solution along most of the route and performs stable measurements, even when the robot passes near high-rise buildings. On the other hand, the RTK-GPS becomes unstable in some cases. One reason for this is that the QZSS uses QZSs placed in the quasi-zenith orbit, which are observed at a high elevation angle from the GNSS antenna. We repeated the experiment 10 times in the environment. Table 3 shows the fixed rate, float rate, and independent rate for the RTK-GPS and the QZSS.

Table 3 Fixed, float, and unstable rates for the RTK-GPS and the QZSS
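The solution-type rates in Table 3 can be computed directly from the receiver's NMEA output, where the GGA sentence carries a fix-quality flag (1 = single/independent, 2 = DGPS, 4 = RTK fixed, 5 = RTK float). The following sketch is our own illustration built around a hypothetical log file name.

```python
from collections import Counter

# Fix-quality field of the NMEA GGA sentence:
# 1 = single (independent), 2 = DGPS, 4 = RTK fixed, 5 = RTK float.
LABELS = {"1": "independent", "2": "dgps", "4": "fixed", "5": "float"}

counts = Counter()
with open("receiver.nmea") as f:              # hypothetical log file
    for line in f:
        fields = line.strip().split(",")
        if fields[0].endswith("GGA") and len(fields) > 6:
            counts[LABELS.get(fields[6], "other")] += 1

total = sum(counts.values())
for label, n in counts.items():
    print(f"{label}: {100.0 * n / total:.1f} %")
```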

Performance comparison of the RTK-GPS and the QZSS

The performances of the RTK-GPS and the QZSS are shown in Table 4. As a result of the experiments, we can see that the RTK-GPS is more accurate than the QZSS. The reason for this is thought to be the difference in the mechanism of position information correction. Correction information in the RTK-GPS is created in real time using the observation data at the base and rover stations, which communicate online. On the other hand, as mentioned above, the QZSS uses the PPP-RTK (precise point positioning RTK) method, a model-based technique based on the SSR. In the PPP-RTK method, the error is decomposed into the error in satellite clocks, a small variation in the orbit, tropospheric delay, ionospheric delay, etc., and these factors are estimated at the electronic reference points in the CLAS of the QZSS. However, the PPP-RTK method cannot take into account real-time changes in the error state and therefore cannot handle, for example, a sudden change in ionospheric conditions.

As demonstrated by the experimental results described above, the positioning accuracy does not differ greatly between the RTK-GPS and the QZSS, and both techniques satisfy most requirements for applications of autonomous service robots. Although the RTK-GPS is slightly more accurate than the QZSS, the RTK-GPS requires two modules, and accurate positioning requires knowledge of the correct position of the base station. If the latitude and longitude of the base station are not accurately known, it takes a long time to obtain them by the GNSS: the base station has to be placed for a certain period of time and data have to be collected repeatedly. In addition, since communication between the base station and the rover station is required, the RTK-GPS can only be used within the range in which such communication is possible. On the other hand, since the QZSS can perform centimeter-class positioning with a single device, we do not need to consider an initialization procedure or the available range. Moreover, the QZSS can perform more stable positioning, even near buildings, because the QZSS uses satellites placed in a quasi-zenith orbit that are observed at a high elevation angle from the GNSS antenna. At any time, at least one QZS can be observed in and around Japan. Consequently, we can conclude that the QZSS is more suitable as a centimeter-class positioning system for autonomous service robots.

Table 4 Statistics of the RTK-GPS and the QZSS

With respect to the overall accuracy, the RTK-GPS has a higher performance, but the difference is approximately several centimeters, which is not a large difference when considering the position identification of the robot. On the other hand, the QZSS is superior with respect to the stability of measurement, the number of required modules and preparations, the limits of the measurement range, and convenience. Overall, we conclude that the QZSS is better for robot position identification.

5G network

In this research, we use a 5G environment provided by NTT DOCOMO, Inc. in the Huis Ten Bosch theme park. However, there are difficulties in using this network system for the proposed robot system. The components of the network system and how these problems are solved in the proposed system are introduced hereinafter. The configuration of the network system is shown in Fig. 15. Preliminary experiments revealed that wireless communication at approximately 20–40 Mbps was possible with this configuration. Note that the upper limit of the communication speed was set at 40 Mbps because of the limited CPU power of the onboard computer. Theoretically, the 5G network is capable of communication speeds of 20 Gbps. On the other hand, the average speed of the LTE (4G) network, which is the current standard for outdoor wireless communication, was 16 Mbps (Aterm MR05LN, NEC Corp.) by actual measurement. Since the 360-degree camera in the system requires a maximum of about 56 Mbps to transmit 4K images, the 5G network system is suitable for the proposed application. The details of the application are described later.

Components of the network system

The network system has a control PC, a system server, the docomo Open Innovation Cloud (dOIC), and 5G routers. The control PC is used to control the robot; the ROS-based control system and the 360-degree 4K camera system run on this PC. (These systems are described in detail in the section on the configuration of the developed robot system.) The system server is used for monitoring the information of the robot. In addition, we can control the robot remotely from the system server. The dOIC is a cloud service provided by NTT DOCOMO, Inc. with multi-access edge computing (MEC) features, such as low latency and high security, which are required in the 5G era. This is achieved by building a cloud platform on facilities within the DOCOMO network. We can use virtual machine instances and virtual networks. In addition, we will be able to use assets developed by NTT DOCOMO, Inc., such as an image processing API, in the future. The 5G routers were also provided by NTT DOCOMO, Inc. Each router has a LAN port and a Wi-Fi system for connecting devices to the router. Moreover, the 5G routers can communicate wirelessly with other routers by 5G radio waves through the 5G base station.

Solutions for the problems in the network system

There are two problems related to using the network system for the robot system. The first problem is related to the 5G router, and the second problem is related to an Android device mounted on the robot. The details of these problems are as follows:

  1.

    The modules connected to 5G routers cannot communicate directly with other modules connected to other routers on the network, because of the functional limitation of the 5G router.

  2.

    The interface for controlling the base robot in the proposed system is an Android device mounted on the robot. Therefore, the Android device also needs to connect to the 5G network system. However, the Android device cannot connect to the 5G network directly and must be connected to it via a 5G router. In addition, the wired connection of the Android device is functionally disabled.

In order to solve the first problem, we adopt a virtual private network (VPN) communication system, which virtually connects separate private networks. Using this system, the modules connected to one 5G router can communicate with the modules connected to other routers. The VPN server is implemented on the dOIC, and a VPN client is implemented on each control PC and server PC using SoftEther VPN [19].

On the other hand, we used a reverse tethering (RT) system to solve the second problem. In basic tethering, a PC connects to the network to which a mobile device, such as an Android terminal, is connected. Reverse tethering is simply the reverse: the mobile device connects to the network to which the PC is connected. By using RT, we connected the robot's Android device to the 5G network to which the control PC was connected. The server and client systems for RT are implemented using SimpleRT [20]. The RT server runs on the control PC, and the RT client runs on the Android device.

Fig. 15

Configuration of the 5G network. The control PC in the robot and the server PC are connected to a 5G router. Each router is communicating through a VPN network. The reverse tethering system is adopted between the robot and the control PC

Configuration of the developed robot system

As mentioned above, we confirmed that the QZSS can perform centimeter-class positioning with a simple and easy-to-use system consisting of a single module. In addition, we verified that the 5G network system has a high communication speed. In this section, we present two types of service robot system using the QZSS and the 5G network. One is a tour guide robot system that mainly exploits the advantages of the QZSS, and the other is a co-experience system that exploits the advantages of the 5G network. The QZSS is suitable for outdoor mobile robots that cover a wide area because it does not require the multiple modules and the communication between base and rover stations that the RTK-GPS requires. In addition, the co-experience system shares experiences using 360-degree video, and the high communication speed of the 5G network is suitable for the transmission of high-resolution 4K images. Figures 16 and 17 show the developed robot systems.

Fig. 16

Qurin: the first version of the robot system. This robot uses a standard Wi-Fi network

Fig. 17

Quriana: the second version of the robot system. This robot uses a 5G network

Tour guide robot system

The tour guide robot system aims to perform guiding at theme parks. It moves automatically and guides the guests to the requested goal with voice announcements.

Hardware configuration of the tour guide robot system

As a mobile platform, we used Loomo (Segway, Inc.), which is an inverted two-wheeled robot controlled from an Android terminal. We equipped Loomo with a 2D LiDAR (LDS-01, ROBOTIS, Inc. for Qurin and TiM581, SICK, Inc. for Quriana) and a QZSS module as external sensors. The 2D LiDARs are used to detect obstacles. In addition, a battery, an external PC (Intel NUC), and a Wi-Fi router for communication between Loomo and the PC were mounted on Qurin. Quriana has a 5G router instead of a Wi-Fi router and is additionally equipped with a 360-degree camera, a speaker, and a microphone.

Navigation system

As software, a navigation system and the tour guide application were installed. The navigation system is based on the ROS Navigation Stack. Each component of the navigation system (localization, collision avoidance, and path planning) is explained below.

  • Localization: Position information obtained by the QZSS and the velocity information measured by the wheel encoder are integrated by the extended Kalman filter (EKF) in the robot_localization package [21]. The EKF estimates the pose (position and yaw angle) and the velocity (linear and angular) of the robot.

  • Collision avoidance: Using the data measured by the 2D LiDAR, the robot stops when a pedestrian is detected within a certain range (a minimal sketch of this behavior is given after this list).

  • Path planning: The shortest path (global path) to the destination is planned using the Dijkstra method, and an optimal route (local path) to avoid obstacles is generated by the dynamic window approach along the global path.
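As an example of the collision-avoidance component above, the following rospy node is a minimal sketch under our own assumptions: the topic names, the 1.0 m stop distance, and the idea of gating the planner's velocity command are illustrative, and the actual system realizes this behavior within the ROS Navigation Stack.

```python
#!/usr/bin/env python
import rospy
from sensor_msgs.msg import LaserScan
from geometry_msgs.msg import Twist

STOP_DISTANCE = 1.0  # metres; hypothetical threshold

class StopOnObstacle(object):
    def __init__(self):
        self.blocked = False
        self.cmd_pub = rospy.Publisher("cmd_vel", Twist, queue_size=1)
        rospy.Subscriber("scan", LaserScan, self.scan_cb)
        rospy.Subscriber("nav_cmd_vel", Twist, self.cmd_cb)  # planner output

    def scan_cb(self, scan):
        # Consider only valid ranges; block motion if anything is too close.
        valid = [r for r in scan.ranges if scan.range_min < r < scan.range_max]
        self.blocked = bool(valid) and min(valid) < STOP_DISTANCE

    def cmd_cb(self, cmd):
        # Forward the planner's command, or a zero Twist while blocked.
        self.cmd_pub.publish(Twist() if self.blocked else cmd)

if __name__ == "__main__":
    rospy.init_node("stop_on_obstacle")
    StopOnObstacle()
    rospy.spin()
```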

Tour guide application

As shown in Fig. 18, the tour guide application is implemented on the Android terminal. This application sends the goal information to the navigation system in response to a request from the user and receives the current status of the robot. The status includes information such as whether the robot has reached the goal or an obstacle has been detected, and guide information for an attraction is provided by voice according to the location of the robot.
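On the navigation side, a request from the application ultimately becomes a goal for the ROS Navigation Stack. The snippet below sketches that hand-off using the standard move_base action interface; the coordinates and node name are placeholders, and the bridge between the Android application and ROS is not shown.

```python
#!/usr/bin/env python
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

def send_tour_goal(x, y, frame="map"):
    """Send one tour-stop goal to move_base and wait for the result."""
    client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
    client.wait_for_server()

    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = frame
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = x
    goal.target_pose.pose.position.y = y
    goal.target_pose.pose.orientation.w = 1.0  # face along +x; placeholder

    client.send_goal(goal)
    client.wait_for_result()
    return client.get_state()  # e.g. GoalStatus.SUCCEEDED when reached

if __name__ == "__main__":
    rospy.init_node("tour_goal_sender")
    send_tour_goal(12.0, 5.0)  # hypothetical coordinates of an attraction
```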

Fig. 18

Tour guide application

Voice recognition system

The speech recognition system is implemented using the DOCOMO AI Agent API [22]. As shown in Fig. 19, it is possible to have a conversation according to a predetermined scenario. The tour guide robot system uses the speech recognition system to accept user requests and answer questions.

Fig. 19

Conceptual diagram of the voice recognition system

Co-experience system

The co-experience system is a system in which users share experiences with robots through a network. In a theme park environment in particular, using this system makes it possible to visit a theme park from a remote location via the proposed robot system.

Hardware configuration of the co-experience system

In the co-experience system, a 360-degree camera is used to capture the field of view of the robot. The 360-degree camera used is the Theta V (Ricoh Co., Ltd.) shown in Fig. 20, and it is possible to acquire 360-degree video with 4K quality. The view of the robot is presented to the user using a VR head-mounted display. By viewing the video captured by the 360-degree camera on the VR head-mounted display, the user can observe the surroundings of the robot as if the field of view was his/her own. The VR head-mounted display used in this system is the Oculus Rift (Oculus VR Inc.) shown in Fig. 21.

Fig. 20

Theta V

Fig. 21

Oculus Rift

Software configuration of the co-experience system

Figure 22 shows the software configuration of the co-experience system. This system uses WebRTC [23] for real-time communication between the robot and the user. The WebRTC system is implemented as an application that runs in a Web browser using the JavaScript API, so the application can be used over the network simply by accessing the Web server. In addition, A-Frame [24] is adopted to realize a VR application for the Oculus Rift; A-Frame allows VR applications to be implemented in a Web browser. This VR application is also incorporated into the application on the Web server.

Fig. 22

Software configuration of the co-experience system

Experiment for tour guide robot system

Collision-avoidance experiment

We conducted a collision-avoidance experiment to confirm whether the collision avoidance system and the voice announcement of the tour guide application work well. Figure 23 describes the results of the experiment. Figure 23\(\textcircled{\,1}\) shows a scene in which pedestrians approach from the direction of movement of the robot. Figure 23\(\textcircled{\,2}\) shows a scene in which the robot stops in front of a pedestrian and is providing an announcement by voice. Figure 23\(\textcircled{\,3}\) shows a scene in which the pedestrian moves out of the way according to the voice. Figure 23\(\textcircled{\,4}\) shows a scene in which the robot restarts because the path has been cleared.

We confirmed that the collision avoidance system works well based on the results of the experiment.

Fig. 23

Collision-avoidance experiment

Guided tour experiment

We conducted a guided tour experiment to confirm the performance of the developed system at the Huis Ten Bosch theme park in Japan. The environment and the procedure of the experiment are shown in Fig. 24.

Fig. 24

Environment for the guided tour experiment. The blue circles describe the target points of the guided tour, and the flowchart describes the flow of the experiment and each attraction

The robot moves from point \(\textcircled{\,1}\) to point \(\textcircled{\,5}\) and explains the attraction at each point by voice. The total distance traveled by the robot is approximately 130 m, and the robot returns to the initial point, i.e., point \(\textcircled{\,1}\), after arriving at point \(\textcircled{\,5}\) automatically, as shown in Fig. 25.

Fig. 25

Guided tour experiment

Figure 25\(\textcircled{\,1}\) shows the robot starting the tour at the start position. In Fig. 25\(\textcircled{\,2}\), the robot arrives at the cheese shop, which is the first target point, and gives a description of the shop. Figure 25\(\textcircled{\,3}\) shows the robot arriving at the windmill, which is the second target position. The robot provides a description of the windmill and the history of the Netherlands. In Fig. 25\(\textcircled{\,4}\), the robot arrives at the third target point, the flower garden, and provides a description of the types of flowers. In Fig. 25\(\textcircled{\,5}\), the robot arrives at the end point and announces the end of the tour. Figure 25\(\textcircled{\,6}\) shows the robot returning to the start position after the tour is over.

Guided tour experiments were conducted seven times in total, and six of the tours were successfully performed as planned. The reason for the failure was that the measurement of the QZSS became unstable in an area where buildings and trees were close to the robot. However, this does not occur often, and, thus, if the tour route is planned carefully, the developed system is quite practical as a tour guide system for an outdoor theme park.

Experiment using the voice recognition system

We conducted a tour guide demonstration with the voice recognition system. The system used in this experiment consists of the proposed tour guide robot system and the AI-based voice interaction system. The voice commands are transferred to the cloud-based AI system in real time. As shown in Fig. 26, the robot properly guided the guests to several sights by voice command.

Figure 26\(\textcircled{\,1}\) shows a greeting scene. Figure 26\(\textcircled{\,2}\) shows a scene in which the robot explains the first attraction. Figure 26\(\textcircled{\,3}\) shows a scene in which the robot explains the second attraction. Figure 26\(\textcircled{\,4}\) shows a scene in which the robot answers a question from a user.

Based on the experiment, we confirmed that the system can conduct tour guide tasks in response to voice requests.

Fig. 26

Experiment with the voice recognition system

Experiment using the 5G network

We conducted two types of experiments with the 5G network system: a tour guide experiment and a co-experience experiment. The environment of these experiments is shown in Fig. 27. In this environment, the 5G coverage area is provided by two base stations.

Fig. 27

Environment for experiments with the 5G network system. The blue area describes the 5G area, and the window at top left shows one of the base stations of 5G

Guided tour experiment

We conducted a guided tour experiment to confirm the performance of the developed system. This experiment was also conducted at the Huis Ten Bosch theme park. The environment and the procedure of the experiment are shown in Fig. 28.

Figure 28\(\textcircled{\,1}\) shows the robot starting the tour at the start position. In Fig. 28\(\textcircled{\,2}\), the robot arrives at the cheese shop, which is the first target point, and gives a description of the shop. Figure 28\(\textcircled{\,3}\) shows the robot introducing the theme park and navigating to the next point. In Fig. 28\(\textcircled{\,4}\), the robot arrives at the second target point, the flower garden, and provides a description of the types of flowers. In Fig. 28\(\textcircled{\,5}\), the robot arrives at the end point and announces the end of the tour. Figure 28\(\textcircled{\,6}\) shows the robot returning to the start position after the tour is over.

Fig. 28

Guided tour experiment using the 5G network

Co-experience experiment

Using the developed system, we confirmed the operation of the co-experience system in the 5G network area at Huis Ten Bosch. The user wore a VR head-mounted display, as shown in Fig. 29, experienced the visual field of the robot, and saw the image shown in Fig. 30. Through this experiment, we confirmed that the robot's experience could be shared from a remote location using the developed system.

Fig. 29

User of the co-experience system

Fig. 30

View presented by the co-experience system

Conclusion

In the present paper, we developed a tour guide robot system and a co-experience system using new technologies: the QZSS and 5G.

In order to confirm the effectiveness of using the QZSS for the tour guide robot, the performance of the QZSS was examined, and its accuracy and stability as a centimeter-class positioning system for autonomous service robots were verified. We compared the accuracy of the QZSS and the RTK-GPS, and we consider that the positioning accuracy of both systems is sufficient for the tour guide application if a stable GNSS signal is obtained. Experiments conducted at a theme park show that the tour guide robot successfully traveled 130 m repeatedly and acted as a guide to the attractions using the QZSS. The centimeter-class positioning service was started very recently (November 2018), and, to the best of our knowledge, this research is the first to use the CLAS of the QZSS for autonomous service robots. Since the QZSS can perform centimeter-class positioning with only one module, the developed robot system is more practical than common RTK-GPS-based mobile robot systems.

We also developed a co-experience robot system that allows us to share the experience of the robot. The proposed system uses a 5G network for transmitting a 4K video stream of the experience of the robot. As mentioned in the section “5G network”, the communication speed of the outdoor LTE wireless network is not sufficient for the proposed application, and we believe that 40 Mbps or more is necessary for a co-experience application using 4K 360-degree video.

In the future, we intend to improve the stability of the developed tour guide robot system by combining sensors, including not only on-board sensors, such as 2D LiDAR and cameras, but also ambient sensors embedded based on the concept of the informationally structured environment [12]. In addition, pedestrian detection and tracking are also important functions for a safe and efficient autonomous robot system, and we intend to implement these functions and develop a practical tour guide robot system. With respect to the co-experience system, usability is an important factor. We are planning to develop a hybrid system of automatic and manual control that will realize comfortable remote control by supporting manual commands with an autonomous system.

References

  1. Saito M, Yamagishi A, Takiguchi J, Asari K. Introduction to high-accuracy satellite-based positioning system utilizing qzss. Technical report

  2. Wan G, Yang X, Cai R, Li H, Zhou Y, Wang H, Song S (2018) Robust and precise vehicle localization based on multi-sensor fusion in diverse city scenes. In: 2018 IEEE international conference on robotics and automation (ICRA), pp 4670–4677

  3. Imperoli M, Potena C, Nardi D, Grisetti G, Pretto A (2018) An effective multi-cue positioning system for agricultural robotics. IEEE Robot Autom Lett 3(4):3685–3692


  4. Patrick G, Kevin E, Guoquan H (2018) Asynchronous multi-sensor fusion for 3d mapping and localization. In: 2018 IEEE international conference on robotics and automation (ICRA)

  5. Kouros G, Kotavelis I, Skartados E, Giakoumis D, Tzovaras D, Simi A, Manacorda G (2018) 3d underground mapping with a mobile robot and a gpr antenna. In: 2018 IEEE/RSJ international conference on intelligent robots and systems (IROS), pp 3218–3224

  6. Almeida J, Ferreira A, Matias B, Lomba C, Martins A, Silva E (2018) vamos! underwater mining machine navigation system. In: 2018 IEEE/RSJ international conference on intelligent robots and systems (IROS), pp 1520–1526

  7. Mur-Artal R, Tardós JD (2017) Orb-slam2: an open-source slam system for monocular, stereo and rgb-d cameras. IEEE Trans Robot 33(5):1255–1262


  8. Zhang J, Singh S (2014) LOAM: lidar odometry and mapping in real-time. In: Robotics: science and systems conference

  9. Takasu T, Yasuda A (2009) Development of the low-cost rtk-gps receiver with an open source program package rtklib

  10. Witwicki S, Castillo JC, Messias J, Capitan J, Melo FS, Lima PU, Veloso M (2017) Autonomous surveillance robots: a decision-making framework for networked muiltiagent systems. IEEE Robot Autom Mag 24(3):52–64. https://doi.org/10.1109/MRA.2017.2662222


  11. Kurazume R, Pyo Y, Nakashima K, Tsuji T, Kawamura A (2017) Feasibility study of iort platform “big sensor box”. In: IEEE international conference on robotics and automation (ICRA2017), pp 3664–3671

  12. Sakamoto J, Kiyoyama K, Matsumoto K, Pyo Y, Kawamura A, Kurazume R (2018) Development of ros-tms 5.0 for informationally structured environment. ROBOMECH J 5(24):1–11


  13. Bresson G, Alsayed Z, Yu L, Glaser S (2017) Simultaneous localization and mapping: a survey of current trends in autonomous driving. IEEE Trans Intell Vehicles 2(3):194–220


  14. Suhr JK, Jang J, Min D, Jung HG (2017) Sensor fusion-based low-cost vehicle localization system for complex urban environments. IEEE Trans Intell Transp Syst 18(5):1078–1086


  15. Fujuta S, Miya M, Takiguchi J (2015) Establishment of centimeter-class augmentation system utilizing quasi-zenith satellite system (Special issue: recent topics on GNSS and its applications). Syst Control Inform 59(4):126–131


  16. Cabinet Office (Japan): Quasi-zenith satellite system interface specification centimeter level augmentation service cabinet office. Technical report (2018)

  17. Quasi-Zenith Satellite Orbit (QZO) (2019) Technical Information | QZSS (Quasi-Zenith Satellite System)—Cabinet Office (Japan). http://qzss.go.jp/en/technical/technology/orbit.html

  18. Wübbena G, Schmitz M, Bagge A (2005) Ppp-rtk : precise point positioning using state-space representation in rtk networks

  19. SoftEther VPN Project (2020). https://www.softether.org/

  20. SimpleRT (2020) https://github.com/vvviperrr/SimpleRT

  21. Thomas M, Daniel S (2014) A generalized extended kalman filter implementation for the robot operating system. In: Proceedings of the 13th international conference on intelligent autonomous systems (IAS-13). Springer

  22. DOCOMO AI Agent API (2020). https://docs.sebastien.ai/

  23. WebRTC. https://webrtc.org/

  24. A-Frame: Hello WebVR. https://aframe.io/


Acknowledgements

The present paper is based on results obtained from the “Cross-ministerial Strategic Innovation Promotion Program” (SIP), which was commissioned by the New Energy and Industrial Technology Development Organization (NEDO). In addition, NTT DOCOMO, Inc. cooperated in using the 5G network and the 5G router.

Funding

This work was partially supported by the Cabinet Office (CAO), Cross-ministerial Strategic Innovation Promotion Program (SIP), “An intelligent knowledge processing infrastructure, integrating physical and virtual domains” (funding agency: NEDO), and the collaborative study with Kyushu University, Living Robot Inc., and NTT DOCOMO, Inc. The 5G network and routers were provided by NTT DOCOMO, Inc.

Author information


Contributions

KM, HY, MI, and TN developed the system and carried out the experiments. AK managed the study. RK and YK constructed the study concept. All members verified the content of their contributions. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Kohei Matsumoto.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Matsumoto, K., Yamada, H., Imai, M. et al. Development of a tour guide and co-experience robot system using the quasi-zenith satellite system and the 5th-generation mobile communication system at a Theme Park. Robomech J 8, 4 (2021). https://doi.org/10.1186/s40648-021-00192-7
