
Study of a robotic system to detect water leakage and fuel debris-System proposal and feasibility study of visual odometry providing intuitive bird’s eye view-

Abstract

To obtain the necessary information on fuel debris and water leakages during the decommissioning of the Fukushima Daiichi Nuclear Power Plant, an ultrasonic-based method was proposed for future internal investigation of the primary containment vessel (PCV). In this article, we describe the rotatable winch mechanism and the visual localization method used to aid the investigation. The rotatable winch mechanism adjusts the height and orientation of the ultrasonic sensor, and camera-based localization of the robot provides the sensor position needed to combine measurements taken at different points. We studied the feasibility of applying a conventional visual odometry method to this situation and evaluated its localization accuracy experimentally with a mobile robotic platform prototype. The results showed that the visual odometry method could generate intuitive bird’s-eye-view maps and achieved an average error rate of 35 mm/1500 mm, which met the required maximum error rate of 100 mm/1500 mm for movement on the grating. Experiments were also conducted to determine the parameter ranges that could provide the required accuracy.

Background

The decommissioning of the Fukushima Daiichi Nuclear Power Plant (NPP) is an urgent national problem in Japan. The emergency cooling process of the reactor core failed due to loss of power, seriously damaging the cores of reactors No. 1, 2, and 3. Approximately 200 tons of fallen nuclear fuel debris from the damaged cores, which contain melted nuclear fuels and structural materials, are estimated to remain in each nuclear reactor. Moreover, water leakages from the damaged containment vessels caused an outflow of polluted water, which hindered implementation of the submersion method. To remove the fuel debris safely and efficiently from the primary containment vessels (PCVs), it is essential to understand the distribution and characteristics of the fuel debris inside the PCV. Further, we need to locate and stop the water leakages inside the PCV to minimize the egress of radioactive water.

This article proposes a new robotic system that uses ultrasonic sensors to detect fuel debris and water leakage outside the pedestal in the PCV. The proposed system consists of a mobile robot, a winch mechanism, a camera, and ultrasonic sensors, as shown in Fig. 1. The mobile robot moves on the grating floor, which is located above the pedestal. The winch mechanism on the mobile robot deploys the ultrasonic sensor through the grating lattice. The height and orientation of the sensor are controlled by the winch mechanism. Ultrasonic sensors detect the shape and characteristics of the fuel debris as well as the water flow of the retained water under the grating. Because the maximum measurement range of the ultrasonic sensor is limited, the ultrasonic measurement should be performed repeatedly in various locations on the grating. Therefore, it is important to accurately localize the position and orientation of a mobile robot to combine the partial data into global data. To achieve accurate localization, we focus on the texture of the grating lattice. The grating lattice is uniform (30 mm × 100 mm), and its lattice texture is orthogonal. Thus, we can regard the grating texture as a coordinate system. If we can count the number of grating lattices while traveling, we can localize the robot position by using the resolution of the lattice size.

Fig. 1 Overview of the proposed robotic system

The contributions of this study are twofold: (1) a proposal for a new robotic system that uses ultrasonic sensors to detect fuel debris and water leakages and (2) a study of the feasibility and accuracy of the localization of the robot using a mono-camera on the grating. In particular, we investigate the possibility of generating a global bird’s-eye-view map that is very intuitive and easy to understand. The generated map will greatly help the robot operator move the robot during the mission and minimize the operation time needed to reach the specified target position, thus maximizing the efficiency of the investigation within a limited working duration.

The remainder of this article is organized as follows. “Related work” section introduces related works and describes the current problems to be solved. “Proposed robotic system” section proposes a new robotic system and clarifies the research objectives of this study. “Visual odometry method and bird’s-eye view” section addresses the issue of localization by a mono-camera using a visual odometry method to provide an intuitive bird’s-eye-view global map for the operators. “Prototype model” section describes the hardware experiment using a prototype model, and “Experiment” section quantitatively evaluates the localization accuracy. Finally, in “Conclusion” section, we conclude this article and discuss future work.

Related work

In this study, our target plant is Unit 1, where the grating floor remains above the retained water, as illustrated in Fig. 1. For the Unit 1 reactor, the fuel debris is likely to have spread outside the pedestal through the access port for workers at the bottom of the PCV. In April 2015, a configuration-changeable robot system called PMORPH1, developed by Hitachi-GE Nuclear Energy, successfully entered the PCV of the Unit 1 reactor through X-100B penetration, and the dose rates and internal temperatures at different points on the grating were measured [1]. In March 2017, the improved PMORPH2 moved on the grating, and it utilized a winch mechanism to sink a sensor unit, consisting of an underwater camera, a dosimeter, and lights, into the retained water through the grating lattice to the target position [2]. Two cameras were installed on the front side of the robot to form a stereo camera, and were utilized to localize the position and orientation of the robot. The landmarks were assigned in advance based on the PCV’s internal structure obtained from the design drawings [3]. The operator(s) remotely controlled the robot movements based on the two frontal camera images, which were close to the grating. The acquired survey results revealed that the dose level decreased upon submersion in the water, but then rose again when approaching the bottom [4]. The radiation level near the basement of the PCV was approximately 10 Gy/h, and there were fallen objects and sediments on the basement.

To investigate the space under the grating platform within the pedestal of the No. 2 reactor, Toshiba Energy Systems & Solutions Corporation developed a long telescopic pipe device that carried a pan-tilt camera. The device was designed to enter through X-6 penetration and extend the telescopic pipe to the platform. Then, it would utilize the cable within the pipe to sink the pan-tilt camera [5]. Thus, in the on-site investigation performed in January 2018, operators acquired images of the pedestal bottom with the pan-tilt camera, and confirmed the existence of sediments presumed to be fuel debris. To acquire the necessary data on the sediment characteristics, Toshiba-ES equipped the device with a finger mechanism and performed a further investigation in February 2019 [6]. In this investigation, the device successfully touched the sediments and moved the small-sized ones with an improved finger mechanism. However, the device failed to move the rocky sediments or leave contact traces on them [7].

In our effort to learn from the successes of these on-site investigations, we noted that the designs of PMORPH and the telescopic pipe device reflected different investigative approaches: PMORPH passed through the penetration and used the grating platform as its main moving area, exploiting its small size and flexibility as a mobile robot, whereas the telescopic pipe device consisted of long, thin pipes, so that its rear end did not need to enter the PCV. In general, both utilized cables to submerge the investigating units and relied on cameras to gather the necessary information during cable feeding and inspection.

Although visual images are essential in remotely operated investigations using robots, the shortcomings of cameras may hamper further investigations. Within the activity scope of PMORPH, the maximum dose rate reached 10 Gy/h on the pedestal bottom and grating platform. The maximum allowable accumulated irradiation dose of the camera equipped on PMORPH2 was 1000 Gy [2], limiting the maximum working duration at the 10 Gy/h dose rate to 100 h, which was acceptable. However, when facing extreme conditions during operations such as digging into the sediments, the radiation resistance of the currently available camera may not be high enough. Moreover, the visual images taken by the camera showed that the retained water was quite turbid, and the poor visibility restricted the localization of the fallen objects on the basement [4]. For localization of the robot, it was necessary to set landmarks based on the PCV structure in its normal, undamaged state [3]. However, setting the landmarks in advance is not guaranteed to localize the robot with sufficient accuracy because the PCV structure is heavily damaged. Further, it was also difficult to intuitively understand the location of the robot because the two frontal cameras viewed the grating floor at shallow angles, producing similar, repetitive images [4].

To solve these problems, we propose a new robotic system that uses ultrasonic sensors to detect fuel debris and water leakages, so that the robustness of camera-based underwater inspection is improved. We also propose the application of a visual odometry method using a mono-camera to generate an intuitive bird’s-eye-view global map.

Proposed robotic system

We propose a robotic system consisting of a mobile robot, a winch mechanism, a camera, and ultrasonic sensors, as shown in Fig. 1. Similar to PMORPH2, the mobile robot moves on the grating floor and carries the sensor unit, whose height is controlled by the winch mechanism. The robot stops at multiple measurement points and measures the surrounding environment in the retained water under the grating floor. The main differences between PMORPH2 and our proposed robotic system are the following features of our system:

  • Utilization of ultrasonic sensors

  • Winch mechanism with two degrees of freedom

  • Localization of the robot using the visual odometry method

The following subsections describe each of these features in detail.

Ultrasonic sensors

Ultrasonic sensors can remotely measure the shape of objects in both air and water, and can achieve high radiation resistance. After the Three Mile Island nuclear accident, ultrasonic measurement was practically applied to inspect the reactor vessel [8]. In fact, we performed a preliminary experiment by exposing an ultrasonic sensor to gamma rays from a 60Co source at a rate of 650 Gy/h, and confirmed that the degradation of the sensor signal was less than 3% at an accumulated radiation dose of approximately 10,000 Gy [9], suggesting that the radiation resistance was ten times higher than that of the camera installed on PMORPH2.

Further, an ultrasonic sensor can measure the shape of objects in turbid water. Moreover, if we apply a phased array ultrasonic sensor using the ultrasonic velocity profiler (UVP) method [10], we can acquire two-dimensional velocity profiles of the flow in the turbid water, which makes it possible to detect water leakage. The flow mapping of the water can be utilized to stop the water leakage, thereby reducing the amount of radioactive water and keeping the radioactive objects submerged. Figure 2 illustrates the flow velocity field measurement and debris measurement. Because the sensor height in the vertical direction is controlled by the winch mechanism, the phased array sensor is oriented to measure in the horizontal direction, so that the three-dimensional flow velocity field can be obtained efficiently.

Fig. 2 Conceptual image of the flow velocity field measurement and debris measurement
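
As general background on the UVP principle (the following relations are the standard UVP formulation, not details taken from this article), the measurement position along the beam and the velocity component along the beam are recovered from the echo delay and the Doppler shift of echoes from tracer particles:

$$x=\frac{c\,\tau }{2},\qquad v=\frac{c\,f_{D}}{2 f_{0}}$$

where c is the speed of sound in water, τ is the echo delay, f0 is the emission frequency, and fD is the measured Doppler shift. Repeating this along the electronically steered beams of the phased array yields the planar velocity profiles described above.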

We arranged 16-channel arrayed elements in both the horizontal and vertical directions to measure two planar velocity fields (Fig. 3). The developed sensor can pass through the grating lattice by adjusting its orientation. We confirmed that the developed sensor could use the UVP method to measure two planar velocity fields in a setup where water containing tracer particles flowed out of a water tank [10].

Fig. 3 Phased array ultrasonic sensor

When using ultrasonic sensors to detect the shape of objects, conventionally, a single ultrasonic transducer emits ultrasonic pulses and receives the reflected echo from the object’s surface. A common problem of this method is that the strength of the reflected echo may be quite weak when the direction of the emitted pulse forms an angle with the surface of the object that is to be inspected. Therefore, even if the transducer performs a surface inspection while moving parallel to the object, part of the object still cannot be inspected by this method [11]. Evidently, this problem is likely to occur when inspecting objects with rugged surfaces, such as fuel debris. Our research group applied the aperture synthesis method to receive the reflected echo with multiple arrayed elements. Thus, even when trying to inspect objects with complex shapes, we could reconstruct the rugged surface using the aperture synthesis method [11]. Furthermore, the aperture synthesis method can work together with the UVP method by using an arrayed ultrasonic sensor [9]. In the verification experiment, we used a slug having a rugged shape as the presumed fuel debris and confirmed that the aperture synthesis method helped reproduce the rugged surface shape displayed in a three-dimensional point group [12].
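
To make the aperture synthesis idea concrete, the following is a minimal delay-and-sum sketch in Python (with NumPy). It assumes a linear array whose elements each emit and receive, a constant sound speed, and illustrative variable names; it is not the authors' implementation.

```python
import numpy as np

def saft_reconstruct(ascans, elem_x, fs, c, grid_x, grid_z):
    """Delay-and-sum aperture synthesis over a 2D image grid.

    ascans : (n_elem, n_samples) echo signals, one A-scan per array element.
    elem_x : (n_elem,) element x-positions [m]; the array lies on z = 0.
    fs     : sampling frequency [Hz];  c : sound speed in water [m/s].
    grid_x, grid_z : 1D arrays of pixel coordinates [m].
    """
    n_elem, n_samples = ascans.shape
    image = np.zeros((grid_z.size, grid_x.size))
    for iz, z in enumerate(grid_z):
        for ix, x in enumerate(grid_x):
            # Round-trip distance from every element to this pixel and back
            dist = 2.0 * np.hypot(elem_x - x, z)
            idx = np.round(dist / c * fs).astype(int)
            ok = idx < n_samples
            # Coherent sum of the echo samples corresponding to this pixel
            image[iz, ix] = ascans[np.arange(n_elem)[ok], idx[ok]].sum()
    return np.abs(image)
```

Pixels lying on a real reflecting surface accumulate echoes coherently, so a rugged surface such as presumed fuel debris appears as a high-intensity region even when a single transducer would receive only a weak specular echo.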

Winch mechanism with two degrees of freedom

Because of the limited measurement range of the ultrasonic sensor, we propose to install a swivel degree of freedom (DoF) around the sensor cable axis for the winch mechanism. This rotation would also permit the rectangular ultrasonic sensor to pass through the grating lattice by adjusting its orientation around the sensor cable.

The sensor height in the vertical direction is adjusted by rotating the winch mechanism’s spool. Because the spool needs to rotate several turns to reach the bottom of the PCV, we installed a slip ring inside the reel, allowing infinite rotation of the reel while maintaining the electric connection. Figure 4 shows a prototype model of the proposed 2-DoF winch mechanism. This prototype model was developed to confirm the basic functions of our proposed system. Hence, we did not consider the size limitation of the access port. We experimentally confirmed that insertion of the slip ring between the ultrasonic sensor and the pulse receiver did not affect the quality of the measurement.

Fig. 4 Prototype model of the winch mechanism with two DoFs (left: 3D CAD model, right: hardware model)

Localization of the robot using the visual odometry method

To obtain the velocity field of the water flow and the distribution of the fuel debris in the PCV, it is necessary to take measurements at multiple measurement points, and integrate the partial data with the global data. Therefore, the localization of the robot plays an essential role in the measurement.

In the previous investigation by PMORPH2, the position and orientation of the robot were obtained from predetermined landmarks, and it was not intuitive for the operator(s) to understand the robot’s position and orientation from the images sent by the two frontal cameras.

To solve these problems, we propose to apply a visual odometry method that can generate a bird’s-eye-view global map. We focus on the texture of the grating lattices because the size of the lattice is uniform (30 mm × 100 mm), and its lattice texture is orthogonal and repetitive. Thus, we can regard the grating texture as a kind of coordinate system. We can easily localize the robot position and orientation based on the grating lattice coordinate system.

Research objectives

To prove the usefulness of the proposed robotic system, various aspects must be investigated, such as the measurement accuracy of the ultrasonic sensor, localization accuracy of the visual odometry method, hardware feasibility considering the access route, radiation resistance of each component, and so on.

In particular, the scope of this article focuses on the robot localization problem using a visual odometry method. We applied a conventional visual odometry method and validated the feasibility of localization on the grating floor. We utilized the open-source computer vision library OpenCV, thus reducing the software development time. After applying several modifications to increase the localization accuracy, we quantitatively evaluated the performance of the algorithm using a prototype model to determine whether its accuracy is sufficient for the actual mission. We also investigated the relationship between the camera setting parameters and the localization accuracy. These fundamental results will facilitate the development of a practical robotic system and its localization algorithm for further decommissioning tasks in the Fukushima Daiichi NPP.

Visual odometry method and bird’s-eye view

In this section, we briefly outline the conventional visual odometry method. We then describe the main modifications needed to obtain the required bird’s-eye view, and the corresponding strategies for improving its performance.

Outline

The general working procedure of conventional visual odometry is as follows:

  • Apply the initial camera calibration with a checkerboard to acquire the necessary camera parameters. We applied Zhang’s checkerboard calibration method [13] to acquire the intrinsic matrix of the camera. Then, we placed the checkerboard on the ground floor and took pictures to acquire the extrinsic matrix of the camera and the homography matrix. The homography matrix could reflect the spatial transformation relationship between the original camera view plane and the perspective bird’s-eye-view plane.

  • Denote two adjacent frames as a single segmentation. We found the feature points, and matched them within the segmentation. Here, we used oriented FAST and rotated BRIEF (ORB) features [14] because of their good balance of processing speed and accuracy.

  • Estimate the relative camera motion in the segmentation based on the positional relationship between the matched points.

  • Integrate the estimated camera motion to localize the camera under the global coordinate system.

  • Perform the coordinate transformation between the camera pose and the robot pose.

  • Generate the map according to the corresponding camera positions.

In the specific situation where the robot moves on the grating floor, we wanted to utilize the characteristics of the regular texture formed by the rectangular 30 mm × 100 mm gratings. To be specific, we intended to perform a camera orientation transformation to acquire the bird’s-eye view. Such a transformation was accomplished with a perspective transformation immediately after camera calibration.

Perspective transformation with inclined camera

Figure 5 shows the synchronized camera system used for the visual odometry method. Marker 1 indicates the wide-angle fish-eye camera, mounted horizontally and facing forward, which served as the monitoring camera for the front view. The marker 2 camera, a semi-wide-angle RGB camera, was used for visual odometry; it faced the grating floor ahead of the robot at an inclined angle to the horizontal plane. The images captured by the two cameras were transferred to the operating computer via USB 3.0. A synchronizer (marker 4) output a synchronization signal to drive the two cameras.

Fig. 5 Synchronized camera system

Because the semi-wide-angle camera was mounted at a given height and inclined angle relative to the horizontal plane, the captured grating floor also appeared inclined, as shown in the left part of Fig. 6. To acquire the bird’s-eye view in the vertical direction, we placed a checkerboard on the grating floor to specify the grating plane. Because the size of the checkerboard is known, the physical coordinates of its corner points on the grating plane are known as well, and we detected the corresponding image coordinates of the same corners with a feature detection method. From these correspondences, we estimated the extrinsic matrix and the homography matrix that realize the view change to the vertical direction, as shown in the right part of Fig. 6.

Fig. 6 View change after perspective transformation (left: original image, right: image after transformation)
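
As a minimal sketch of this calibration and perspective-transformation step (assuming OpenCV; the checkerboard pattern size, square size, and map scale below are illustrative values rather than those used in this work, and lens distortion from Zhang's calibration is assumed to have been removed from the frames beforehand):

```python
import cv2
import numpy as np

PATTERN = (9, 6)      # inner-corner count of the checkerboard (assumed)
SQUARE_MM = 25.0      # checkerboard square size in mm (assumed)
MM_PER_PX = 1.0       # desired scale of the bird's-eye view (assumed)

def birdseye_homography(floor_img):
    """Estimate the homography mapping the inclined view of the grating floor
    (with a checkerboard laid on it) to a metric top-down view."""
    gray = cv2.cvtColor(floor_img, cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if not found:
        raise RuntimeError("checkerboard not detected")
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    # Physical floor-plane coordinates of the same corners, in output pixels
    objp = np.array([[j * SQUARE_MM / MM_PER_PX, i * SQUARE_MM / MM_PER_PX]
                     for i in range(PATTERN[1]) for j in range(PATTERN[0])],
                    dtype=np.float32)
    H, _ = cv2.findHomography(corners.reshape(-1, 2), objp, cv2.RANSAC)
    return H

# Usage: warp every incoming frame to the bird's-eye view before tracking
# H = birdseye_homography(calibration_frame)
# bev = cv2.warpPerspective(frame, H, (map_width, map_height))
```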

Thus, after the perspective transformation, we conducted all the remaining steps under the bird’s-eye view. Therefore, map generation was simplified as a two-dimensional linear combination of the images.
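
The remaining steps of the pipeline, matching ORB features between the two frames of a segmentation, estimating a planar rigid motion, integrating the poses, and compositing the bird's-eye-view frames into a global map, can be sketched as follows. This again assumes OpenCV; the detector settings, thresholds, and the simple maximum-compositing of the map canvas are illustrative choices, not the authors' exact implementation.

```python
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=500)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def estimate_planar_motion(prev_bev, curr_bev):
    """Estimate the 2D rigid motion between two consecutive bird's-eye-view
    frames (one segmentation) from matched ORB features."""
    kp1, des1 = orb.detectAndCompute(prev_bev, None)
    kp2, des2 = orb.detectAndCompute(curr_bev, None)
    if des1 is None or des2 is None:
        return None                   # too little texture: left to compensation
    matches = matcher.match(des1, des2)
    if len(matches) < 10:
        return None
    p1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    p2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    # Rotation + translation (+ near-unity scale) mapping the current frame
    # into the previous frame's coordinates
    M, _ = cv2.estimateAffinePartial2D(p2, p1, method=cv2.RANSAC)
    if M is None:
        return None
    C = np.eye(3)
    C[:2, :] = M                      # homogeneous 3x3 motion matrix C_n
    return C

# Pose integration and map generation as a 2D composite of the warped frames:
# pose = np.eye(3)                    # global pose on the floor plane
# global_map = np.zeros((map_h, map_w), np.uint8)
# for prev_bev, curr_bev in segmentations:
#     C = estimate_planar_motion(prev_bev, curr_bev)
#     if C is not None:
#         pose = pose @ C
#     warped = cv2.warpPerspective(curr_bev, pose, (map_w, map_h))
#     global_map = np.maximum(global_map, warped)
```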

Improvement strategies

To begin with, we would like to list some problems that we encountered in the visual odometry method:

  • Through perspective transformation, we could acquire a bird’s-eye view from the inclined view; however, the available view range decreased, which might influence motion estimation, especially when the robot is moving at a high speed.

  • The motion estimation method relied greatly on the extracted feature points; thus, the method would not work in an environment where there are no visual textures. When the number of available feature points decreases, the working stability and accuracy will be degraded.

  • When performing planar motion estimation based on the extracted features, the output motion matrix sometimes included a component in the vertical direction due to vibration.

  • Because of the integration of the estimated instantaneous motion in one frame, the errors in each estimation would accumulate and influence the whole localization and map generation.

Considering the specific case of applying visual odometry with a bird’s-eye view and regular features, we propose the following improvement strategies:

  • By utilizing the general smooth camera model proposed by MonoSLAM [15], we replace the mistakenly estimated camera motion based on the adjacent motion data.

  • Remove the component in the z-axis of the estimated motion matrix.

  • Count the traversed grating lattices to assist planar localization.

We proposed a compensation method to handle frames without abundant feature points, from which the camera motion cannot be estimated reliably. To compensate for such misestimation, we replace the misestimated motion matrix with a correctly estimated one. As a theoretical basis, we used the general smooth camera model of MonoSLAM, which assumes that the camera motion observed in one time step is, on average, what we expect from smooth motion. In an early stage of development, we simply reused the most recently estimated motion matrix to replace the misestimated one. Such a simplified method led to larger errors than the Gaussian motion profile used by MonoSLAM. To improve the compensation, we instead assumed, following the smooth camera model, that the camera motion at one time point can be approximated by the average motion over a short movement interval comprising several continuous segmentations.

To replace a misestimated motion matrix at one time point, we therefore needed to divide the whole movement into short intervals. The main splitting criterion was the characteristics of the estimated velocity vector, namely the ratio of the transverse velocity component to the forward component and the acceleration across segmentations. Figure 7 shows the concept of motion intervals and segmentations with a part of one map.

Fig. 7 Concept of segmentation and motion interval used in the compensation method (the green points and lines in the segmentation indicate the feature points and matching relationship; the purple lines indicate the instant camera poses, forming a jagged shape when rotating)

Within one motion interval, we found the segmentation with the maximum velocity and smooth acceleration relative to its neighboring segmentations, and set its velocity as the velocity limit of the interval. Using this velocity limit together with the recorded feature points, we could detect segmentations with unreliable estimates. To replace a misestimated motion matrix as compensation, we averaged the motion matrices of the continuous segmentations adjacent to it. This method helped smooth the generated maps and improve the correctness of the localization. Figure 8 shows the maps generated with and without the compensation method: the uncompensated map (left) contains obvious areas of low continuity, whereas the compensated map (right) is much smoother.

Fig. 8 Generated maps before (left) and after compensation (right)
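
A minimal sketch of this compensation step is given below. Each segmentation is assumed to store its estimated motion matrix and feature-match count; the reliability test (translation magnitude against the interval's velocity limit plus a minimum match count) and the neighbour window are illustrative assumptions rather than the exact criteria used in this work.

```python
import numpy as np

def compensate(motions, n_matches, v_limit, min_matches=150, window=3):
    """Replace unreliable per-segmentation motion matrices with the average
    motion of their neighbours (smooth camera motion assumption)."""
    out = list(motions)
    for n, C in enumerate(motions):
        speed = np.hypot(C[0, 2], C[1, 2])   # translation magnitude of this segmentation
        if speed <= v_limit and n_matches[n] >= min_matches:
            continue                         # segmentation looks reliable
        lo, hi = max(0, n - window), min(len(motions), n + window + 1)
        neighbours = [motions[k] for k in range(lo, hi) if k != n]
        # Average translation and rotation angle of the adjacent segmentations
        tx = np.mean([M[0, 2] for M in neighbours])
        ty = np.mean([M[1, 2] for M in neighbours])
        th = np.mean([np.arctan2(M[1, 0], M[0, 0]) for M in neighbours])
        out[n] = np.array([[np.cos(th), -np.sin(th), tx],
                           [np.sin(th),  np.cos(th), ty],
                           [0.0,         0.0,        1.0]])
    return out
```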

When estimating the camera motion from the continuous images, we expressed the result as a motion matrix. Equation 1 shows the ideal motion matrix Cn. We denoted the horizontal forward direction as the y-direction, the horizontal transverse direction as the x-direction, and the vertically upward direction as the z-direction. The element θc,n indicates the instantaneous rotation angle of the camera in the n-th frame, and xc,n and yc,n indicate the translations in the x- and y-directions. The matrix can also be written as Eq. 2, in terms of the rotation matrix A and translation vector b. Because the robot moved primarily on the grating floor, we normalized the matrix to remove misestimated velocity components in the z-direction from the generated motion matrix. Thus, misestimation due to vibrations and noise could be reduced.

$$\boldsymbol{C}_{n}=\begin{bmatrix}\cos\theta_{c,n} & -\sin\theta_{c,n} & x_{c,n}\\ \sin\theta_{c,n} & \cos\theta_{c,n} & y_{c,n}\\ 0 & 0 & 1\end{bmatrix}\tag{1}$$

$$\boldsymbol{C}_{n}=\begin{bmatrix}\boldsymbol{A} & \boldsymbol{b}\\ \boldsymbol{0} & 1\end{bmatrix}\tag{2}$$

Owing to the accumulation of the estimated camera motions, the calculation error in each segmentation also accumulates. In the simultaneous localization and mapping (SLAM) field, an established solution to this problem is loop-closure detection, which requires the robot to judge the similarity between images so that it can adjust the trajectory according to positions it has reached before [16]. In our situation, however, we considered that the highly repetitive lattices of the grating floor might prevent loop-closure detection from working as expected. For the specific case of grating lattices, we considered that counting the number of traversed lattices would help solve this problem. Owing to the bird’s-eye view, the grating lattice plane is displayed perpendicular to the viewing direction. Thus, we used the known size and shape of the lattices to identify and count the traversed gratings and estimate the displacement of the robot. This method helped reduce the accumulated error when the generated map was sufficiently smooth and continuous.
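
The lattice-counting idea can be sketched as follows, assuming the bird's-eye-view strip has been rotated so that the travel direction is aligned with an image axis; the intensity-profile peak counting and the SciPy peak detector are illustrative choices, not the authors' implementation.

```python
import numpy as np
from scipy.signal import find_peaks

LATTICE_PITCH_MM = 100.0   # long side of the 30 mm x 100 mm grating lattice

def displacement_from_lattices(bev_strip, mm_per_px):
    """Estimate forward displacement by counting grating bars crossed along
    the travel axis of a grayscale bird's-eye-view strip."""
    profile = bev_strip.mean(axis=1)          # average across the strip width
    profile = profile - profile.mean()
    # Grating bars appear as periodic intensity peaks along the travel axis
    min_dist_px = 0.7 * LATTICE_PITCH_MM / mm_per_px
    peaks, _ = find_peaks(profile, distance=min_dist_px)
    n_lattices = max(len(peaks) - 1, 0)
    return n_lattices * LATTICE_PITCH_MM      # displacement in mm
```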

Prototype model

To evaluate the performance of the modified visual odometry method with respect to the localizing accuracy, we built a prototype model robot for the preliminary experiments.

Figure 9 shows an overview of the prototype robot, RhinoUS. Its structure can be generalized as a modular robot with four configurable wheels and a chassis. Various functional components are installed on the chassis, including a winch mechanism on a rotation unit at the center of the robot, a synchronized camera system at the front of the robot, and the ultrasonic sensor suspended from the winch below the chassis. The basic parameters of RhinoUS are listed in Table 1.

Fig. 9 Overview of the prototype model robot for experiments

Table 1 Specification of RhinoUS

RhinoUS is integrated with four independently driven wheels for locomotion. These wheels are designed to be waterproof and dustproof by incorporating a rubber seal, allowing the robot to move on the grating base smoothly. Four motors are utilized to independently control the velocity of the wheels.

For the control scheme, we used a joystick to remotely operate the wheel movement. According to the pressed position of the joystick, velocity commands were sent to the corresponding motors. Although the visual SLAM technology used in the localization process makes autonomous navigation possible in principle, we designed a remotely operated control scheme because the investigation environment might be too dangerous or complex for automatic movement. Because bringing an energy source into the PCV might pose unknown risks, we decided to use an external power supply instead of installing batteries on the robot.
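
As an illustration of such a mapping (the skid-steer interpretation, the gain, and the function name are assumptions for the sketch, not the actual control code), a joystick deflection could be converted into the four wheel speed commands as follows:

```python
def joystick_to_wheel_speeds(jx, jy, v_max=0.1):
    """Map joystick deflection (jx: turn, jy: forward, each in [-1, 1]) to
    (front-left, front-right, rear-left, rear-right) wheel speeds [m/s]."""
    left = v_max * max(-1.0, min(1.0, jy + jx))
    right = v_max * max(-1.0, min(1.0, jy - jx))
    # The same command is sent to the front and rear wheel on each side
    return left, right, left, right
```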

Because the prototype robot was not designed for the final on-site investigations, we will not discuss the size limitations or radiation resistance in this section. However, we will evaluate the possible influences of camera pose on the localizing accuracy in “Experiment” section, as a preliminary study on the feasibility of adapting to other robot platforms for on-site investigations.

Experiment

To quantitatively investigate the performance of the visual odometry method, we conducted comparison experiments between the robot displacement and orientation estimated by visual odometry and those measured with an external 3D laser scanner, the FARO Focus3D X 130. The 3D scanner can perform high-accuracy 3D reconstruction with a high-dynamic-range camera, achieving a reconstruction accuracy within 2 mm with the correct calibration method [17]. We considered that this 3D scanner could provide reliable and accurate measurements of the robot pose and serve as the reference in the comparison. For each motion, we used the modified visual odometry method to perform localization and recorded the estimated robot displacement and orientation. We installed four custom-made spheres on the corners of the robot for ease of recognition in the reconstructed results. We used the positions of the spheres to calculate the position and orientation of the robot, and compared these with the results estimated by visual odometry.

As the evaluation criterion, we used the localizing error rate applied by PMORPH2 in previous investigations: an error within 100 mm over a 1500 mm displacement [3]. Here, 100 mm corresponds to the width of a single grating lattice, and 1500 mm to the typical distance between investigation points.

Basic experiments in simplified environments

In the beginning phase, we used a simplified environment by splicing several grating blocks to form the grating floor. We allowed the robot to perform various types of motion, including translation, rotation, and a combination of them. Figure 10 shows one of the generated maps with the camera trajectory represented by purple lines. Table 2 shows the comparison between the evaluated displacement and the measured displacement.

Fig. 10 One of the maps generated by the visual odometry method (the purple lines indicate the instant camera poses, forming a jagged shape when rotating; the vectors form the overall camera trajectory)

Table 2 Comparison results of visual odometry and 3D scanner under a simplified short-range route

The comparison results showed that the y-direction error rate was 9.7 mm/1500 mm, and the x-direction error rate was 20 mm/1500 mm. We considered that the accuracy met the requirement in such a simplified condition. In addition, the grating lattices in the reconstructed map appear quite regular, which made it possible to count the number of lattices to estimate the robot position.

Simulated experiments at the Japan Atomic Energy Agency (JAEA) Naraha Center for Remote Control Technology Development

To further investigate the performance of the visual odometry method, we utilized the water tank test facility at the Naraha Center for Remote Control Technology Development, provided by JAEA. Figure 11 shows the structure of the three-story facility. At its center, a water tank with a diameter of 4690 mm was used as the simulated basement body. The top floor served as a supporting platform for the investigating robot. Across the water tank, we prepared a wooden bridge with a small grating section, whose lattice size was the same as that in the No. 1 reactor.

Fig. 11 Experimental site for simulated experiments

Specifically, we simulated the investigation route shown as the red line in the right part of Fig. 11. Because only a small grating section exists on the wooden bridge, we took images of the grating lattices and printed them to cover the floor and thereby simulate the grating floor. Compared with the previous experiment, the predetermined route therefore involved four types of texture: the printed grating paper, the ordinary floor with anti-skid texture, the real grating, and the wooden bridge.

We repeated the simulated route three times; the comparison results are shown in Table 3. Figure 12 shows one of the maps generated by the visual odometry method.

Table 3 Comparison results of visual odometry and 3D scanner under simulated routes
Fig. 12 One of the maps generated by the visual odometry method under the simulated route

We calculated the error rate from these datasets: the average error rate was 35 mm/1500 mm, and the maximum error rate was 54 mm/1500 mm, which satisfied the required error rate. In addition, we compared reconstructed objects, such as the cross-shaped metal supports under the small grating section on the wooden bridge, with the real ones, and confirmed the correctness of the reconstruction.

Camera parameter ranges according to the required accuracy

Through the simulated experiments, we confirmed that under the specific experimental conditions, the visual odometry method provided the required localizing accuracy. However, in on-site investigations, more restrictions could apply depending on the environmental conditions, such as size limitations for passing through the access penetration and camera performance limitations in extreme situations. To provide useful information for the further development of a practical model, the following sections focus on the camera pose and program-setting parameters and experimentally investigate the ranges within which the error rate remains within 100 mm/1500 mm. Table 4 shows the relevant factors, their default values in the previous experiments, and the adjustment ranges. Because we mounted the camera system on a metal frame, the adjustable ranges of the camera pose were limited.

Table 4 Default values of the concerned parameters

We investigated how the localization accuracy changed with each parameter individually within its adjustable range, keeping the other parameters constant. We utilized the 3D scanner as the reference device and focused on two simplified motion modes: translation and rotation. We controlled the robot to perform repetitive motions under different parameter settings and recorded the displacement or orientation estimated by visual odometry.

To quantify the localizing performance, we denoted the desired displacement as (xd, yd, θd), which was set according to the predetermined motion mode (translation or rotation). Correspondingly, we denoted the estimated displacement as (xet, yet, θet) for the translation motion and as (xer, yer, θer) for the rotation motion. In the predetermined translation mode, the robot moved 1000 mm along the forward direction on the grating floor. We denoted the error rate as ERt, defined as the proportion of the difference between the main displacement index and its estimate; in the translation mode, the main displacement index is the y-direction displacement. SEt represents the shift error that occurs when estimating the translation displacement, defined as the transverse (x-direction) displacement. Equations (3) and (4) give the concrete definitions of these indexes. In the predetermined translation motion, (xd, yd, θd) is set to (0, 1000, 0).

$$ER_{t}=\frac{\left|y_{et}-y_{d}\right|}{y_{d}}\tag{3}$$

$$SE_{t}=\left|x_{et}-x_{d}\right|\tag{4}$$

For the predetermined rotation mode, the robot rotated about its center by 45° clockwise. We defined the error rate ERr as the proportion of the orientation difference over the desired orientation, and the displacement of the robot center as the shift error SEr. Equations (5) and (6) show these relationships. In the predetermined rotation motion, (xd, yd, θd) is set to (0, 0, 45°).

$$ER_{r}=\frac{\left|\theta_{er}-\theta_{d}\right|}{\theta_{d}}\tag{5}$$

$$SE_{r}=\sqrt{\left(x_{er}-x_{d}\right)^{2}+\left(y_{er}-y_{d}\right)^{2}}\tag{6}$$
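
For completeness, the four indices can be computed directly from the desired and estimated poses; the following sketch simply restates Eqs. (3)–(6), with displacements in millimetres and angles in degrees.

```python
import math

def translation_errors(x_et, y_et, x_d=0.0, y_d=1000.0):
    er_t = abs(y_et - y_d) / y_d               # Eq. (3): error rate along the main axis
    se_t = abs(x_et - x_d)                     # Eq. (4): transverse shift error [mm]
    return er_t, se_t

def rotation_errors(x_er, y_er, th_er, x_d=0.0, y_d=0.0, th_d=45.0):
    er_r = abs(th_er - th_d) / th_d            # Eq. (5): orientation error rate
    se_r = math.hypot(x_er - x_d, y_er - y_d)  # Eq. (6): centre shift error [mm]
    return er_r, se_r
```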

As the requirement, we used an error rate within 100 mm/1500 mm for the translation motion and the equivalent proportion, 3°/45°, for the rotation motion. By keeping the error rate within this requirement, we adjusted each parameter to obtain its acceptable range. The resulting tendencies of the error rates and shift errors as the parameters change are presented in Figs. 13 and 14. In each figure, the line chart shows the error rate on the vertical axis against the changed parameter on the horizontal axis, and the bottom bar chart shows the shift error as an alternative accuracy index.

Fig. 13 Error rate and shift error with changing camera height and angle. a, c Translation motion; b, d rotation motion. a and b show the effect of the camera height, while c and d show the effect of the camera angle

Fig. 14 Error rate and shift error with changing frame rate and feature point number. a, c Translation motion; b, d rotation motion. a and b show the effect of the frame rate, while c and d show the effect of the number of feature points

As the left part of Fig. 13 shows, under changing camera heights, both the error rates and the shift errors in the translation motion stayed at approximately the same level. We concluded that the influence of the camera height on the localizing performance was relatively small in translation motion. We attributed this tendency to the transformation of the originally inclined camera view to the bird’s-eye view: the homography matrix is not sensitive to the height itself when there is no rotational component in the instantaneous motion matrix. However, when the robot performed primarily rotating motion, both the error rates and the shift errors became quite significant at camera heights of less than 300 mm. We supposed that a relatively low camera height correspondingly reduces the available view range of the grating, and the perspective transformation acquires the bird’s-eye view at the further expense of view range. The available view range is closely related to the number of available feature points, and a sufficient view range becomes especially relevant when tracking features of relatively large objects. Thus, to ensure a sufficient view range after the perspective transformation, the camera height should be set to more than 300 mm.

Under changing camera angles, the variation in the translation motion was similar to that of changing the camera height in the rotation motion. The error rates and shift errors became significant when the camera angle was less than 40°, while the rotation results limited the camera angle to between 40° and 60°. In the perspective transformation, the homography matrix contains mappings in both the vertical and horizontal directions. In rotating motion, the horizontal mapping component acts like a radius around the original focus of the original view, which significantly influences the localization accuracy. Thus, we considered that a suitable range of camera angles should be set to limit the mapping component in the horizontal direction. Because we aimed to transform the original view to the bird’s-eye view, the homography matrix may not be robust or accurate when the original view approaches the horizontal plane, for example, when the angle is set to over 70°. The results also suggest that the horizontal camera configuration of PMORPH may not be directly applicable to the visual odometry method.

As Fig. 14 shows, the tendencies were similar for both program-setting parameters: the error rates and shift errors remained at relatively low levels once the changed parameter exceeded a specific value. We considered the two program-setting parameters to behave similarly because the frame rate directly influences the number of frames available when passing objects, and the number of feature points affects the reliability of the instantaneous motion estimation. Thus, for the case of only the grating texture, we set the available range of the frame rate to over 24 fps and the minimum number of feature points to over 150. In an environment with more types of texture, however, the requirements on the frame rate and number of feature points may increase, because at the boundaries between different textures, feature points from different textures might be difficult to match in time without sufficient frames in the feature tracking process. Although increasing the frame rate and the number of feature points might also increase the computational burden of the program, we did not observe any such influence under the current experimental settings.

Reflecting on the overall tendency of both the error rates and the shift errors, we concluded that the shift error would also decrease when the error rate was low. The ranges of the parameters are listed in Table 5.

Table 5 Range of parameters meeting the error rate requirement

In summary, we preliminarily evaluated the available ranges of the camera height, camera angle, frame rate, and number of feature points under the simplified translation and rotation motion modes, using an error rate of 100 mm/1500 mm as the criterion.

Conclusion

In this article, we proposed a new robotic system that uses ultrasonic sensors to detect fuel debris and water leakage. In addition, we focused on the feasibility and localizing accuracy of a modified visual odometry method for the autonomous localization of a robot moving on the grating floor, for deployment in future investigations.

We performed localizing accuracy evaluation experiments with a prototype mobile robot in a simulated environment, and measured an average localizing error rate of 35 mm/1500 mm and a maximum error rate of 54 mm/1500 mm. These results satisfied the accumulated error rate requirement of 100 mm/1500 mm from PMORPH2. We will also evaluate our visual odometry technique by conducting interviews with engineers working on the decommissioning of the Fukushima Daiichi NPP.

In comparison with the localizing method of PMORPH, the visual odometry method could generate more intuitive bird’s-eye-view maps by using a perspective transformation. The correctness of the reconstructed maps could be confirmed by comparing the reconstructed features with the actual environment. We also studied the available range of vital parameters that met the required error rate.

To prove the usefulness of the robotic system, various aspects must be investigated, such as ultrasonic sensor performance, localizing accuracy, hardware feasibility, and so on. Some of these tasks have been performed, such as ultrasonic sensor accuracy evaluation, and in the scope of this article, we focused on the performance evaluation of the visual odometry method in the case of planar movement on the grating floor. However, various problems remain to be solved:

  • The possible coupling relationships between these parameters have not yet been investigated.

  • The experimental conditions were relatively ideal because the motion modes were relatively simple, and the gratings were quite simple and flat. The illumination conditions used in the experiments were ideal, but in a realistic environment, they could be unsatisfactory.

  • The size limitations of the mechanisms were not investigated.

Hence, we considered directions for future work based on these problems. We would like to study the coupling relationships between the parameters from both theoretical modeling and experimental standpoints, following the current experimental methods.

As for the grating floor, there could be more obstacles in a heavily damaged environment with a cluttered structure. Such conditions might degrade the localizing performance on account of visual obstruction and bumpy motion. For the visual obstruction problem, the compensation for errors due to feature track loss proposed in “Improvement strategies” section might help correct the motion estimation, so evaluating the compensation performance in a cluttered environment would be helpful. For bumpy motion, we believe the visual odometry method can be extended to three-dimensional movement, because the motion estimation process also yields the vertical component. Accordingly, the improvement strategy in “Improvement strategies” section that assumes ideally planar movement should be modified, and the generated maps should change to types suited to three-dimensional movement, such as textured point clouds. We have identified evaluation experiments of the improved algorithm in a cluttered environment with more complex motion modes as the next research focus.

Another item of future work is to study the possible influence of inadequate illumination and modify the algorithms accordingly. For this, we would like to use the video datasets of the mockup grating captured under inadequate illumination conditions provided by JAEA [18]. This would enable us to observe the influence of dark illumination on the performance of the current visual odometry algorithm and to consider appropriate solutions to improve its adaptability to such conditions. Finally, supplementary research is needed on how to shrink the mechanisms while maintaining the core functions, beyond the current wheeled robot.

Availability of data and materials

The datasets used and/or analyzed during the current study are available from the corresponding author upon reasonable request.

References

  1. Tokyo Electric Power Company Holdings (2015) The results of the on-site verification tests on the B1 investigation (in Japanese). https://www.tepco.co.jp/nu/fukushima-np/handouts/2015/images/handouts_150430_01-j.pdf. Accessed 4 May 2019

  2. Hitachi GE Nuclear Energy (2017) The robot for internal investigation of the containment vessel (PMORPH) (in Japanese). https://irid.or.jp/wp-content/uploads/2017/02/20170203_21.pdf. Accessed 1 Aug 2018

  3. Konishi T, Kobayashi R, Okada S, Ishizawa K (2017) Localization method for remotely operated robots in harsh environment (in Japanese). In: Proceedings of the 35th annual conference of the Robotics Society of Japan, Saitama, Japan, 10–14 Sep 2017

  4. Tokyo Electric Power Company Holdings (2017) Unit 1 primary containment vessel internal investigation. https://www.tepco.co.jp/en/nu/fukushima-np/handouts/2017/images/handouts_170327_01-e.pdf. Accessed 8 Aug 2020

  5. Toshiba Energy Systems & Solutions Corporation (2017) Development of primary containment vessel investigation device for Fukushima Daiichi Nuclear Power Plant Unit 2 (in Japanese). https://irid.or.jp/wp-content/uploads/2017/12/201712221.pdf. Accessed 20 Aug 2020

  6. Toshiba Energy Systems & Solutions Corporation (2019) Toshiba develops new device to investigate deposits in the interior of primary containment vessel at Fukushima Daiichi Unit 2. https://www.toshiba-energy.com/en/info/info2019_0128.htm. Accessed 20 Aug 2020

  7. Tokyo Electric Power Company Holdings (2019) The results of the internal investigation on the primary containment vessel of the No. 2 reactor (in Japanese). https://www.meti.go.jp/earthquake/nuclear/osensuitaisaku/committtee/genchicyousei/2019/0319_01_03.pdf. Accessed 21 Aug 2020

  8. Beller LS, Brown HL (1984) Design and operation of the core topography data acquisition system for TMI-2. https://doi.org/10.2172/6837047

  9. Kikura H, Kawachi T, Ihara T (2015) Study on ultrasonic measurement for determination of leakage from reactor vessel and debris inspection. Proceedings of the 11th national conference on nuclear science and technology, Da Nang, Vietnam, 6–7 Aug 2015

  10. Ihara T, Kikura H, Murakawa H (2011) The basic study of phased ultrasonic array velocimetry. IEICE Tech Rep 21:29–34

  11. Kawachi T, Kimura S, Ihara T, Kikura H, Kimoto K (2015) Development of simultaneous measurement system of object surface shape and two-dimensional flow mapping by using ultrasonic array sensor (in Japanese). IEICE Tech Rep 115:39–44

  12. Kono T, Kimoto K, Kikura H (2017) Development of an immersion ultrasonic imaging method for the shape reconstruction of nuclear fuel debris (in Japanese). In: Proceedings of 2017 fall meeting of Atomic Energy Society of Japan, Hokkaido, Japan, 13–15 Sep 2017

  13. Zhang ZY (2000) A flexible new technique for camera calibration. IEEE Trans Pattern Anal Mach Intell 22:1330–1334. https://doi.org/10.1109/34.888718

  14. Rublee E, Rabaud V, Konolige K, Bradski G (2011) ORB: an efficient alternative to SIFT or SURF. In: Proceedings of 2011 international conference on computer vision, Barcelona, Spain, 6–13 Nov 2011. https://doi.org/10.1109/iccv.2011.6126544

  15. Davison AJ, Reid ID, Molton ND, Stasse O (2007) MonoSLAM: real-time single camera SLAM. IEEE Trans Pattern Anal Mach Intell. https://doi.org/10.1109/tpami.2007.1049

  16. Angeli A, Filliat D, Doncieux S, Meyer JA (2008) A Fast and Incremental method for loop-closure detection using bags of visual words. IEEE Trans Robot. https://doi.org/10.1109/tro.2008.2004514

  17. Chow JCK, Lichti DD, Teskey WF (2012). Accuracy assessment of the FARO focus 3D and Leica HDS6100 panoramic-type terrestrial laser scanners through point-based and plane-based user self-calibration. In: Proceedings of the FIG working week: knowing to manage the territory, protect the environment, evaluate the cultural heritage, Rome, Italy, 6–10 May 2012

  18. Yamada T, Kawabata K (2020) Development of a dataset to evaluate SLAM for Fukushima Daiichi nuclear power plant decommissioning. In: Proceedings of 2020 IEEE/SICE international symposium on system integration (SII), Honolulu, HI, USA, 12–15 Jan 2020. https://doi.org/10.1109/SII46433.2020.9025857


Acknowledgments

Not applicable.

Funding

This work was supported by the MEXT Nuclear Energy S&T and Human Resource Development Project by concentrating wisdom, Grant Number JPMX 15D15658587. This work was also supported by the JAEA Nuclear Energy S&T and Human Resource Development Project by concentrating wisdom, Grant Number JPJA19P 19210348.

Author information

Contributions

ZW and TM conducted the development of visual odometry algorithms and experiments. GE, KS, and NH contributed to the completion of the manuscript. NT, HT, KK, TI, and HK proposed the idea of the robot system. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Zhenyu Wang.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Wang, Z., Endo, G., Takahashi, M. et al. Study of a robotic system to detect water leakage and fuel debris-System proposal and feasibility study of visual odometry providing intuitive bird’s eye view-. Robomech J 7, 34 (2020). https://doi.org/10.1186/s40648-020-00184-z
