
Spatial change detection using normal distributions transform

Abstract

Spatial change detection is a fundamental technique for finding the differences between two or more pieces of geometrical information. This technique is critical in robotic applications such as search and rescue, security, and surveillance, where the differences must be found quickly and robustly. The present paper proposes a fast and robust spatial change detection technique for a mobile robot using on-board range sensors and a highly precise 3D map created by a 3D laser scanner. The technique first converts point clouds in the map and the measured data to grid data (ND voxels) using the normal distributions transform. The voxels in the map and the measured data are then compared according to the features of the ND voxels. Three techniques are introduced to make the proposed system robust to noise: classification of point distributions, overlapping of voxels, and voting using consecutive sensing. The present paper reports indoor and outdoor experiments using an RGB-D camera and an omni-directional laser scanner mounted on a mobile robot to confirm the performance of the proposed technique.

Introduction

Spatial change detection is a fundamental technique for finding the differences between two or more pieces of geometrical information. This technique is indispensable in several applications, such as topographic change detection in airborne laser scanning [1, 2] or terrestrial laser scanning [3, 4], map maintenance in urban areas [5], preservation of cultural heritage [6], and analysis of plant growth [7]. In robotics, the detection of spatial changes around a robot is used in applications such as daily services, search and rescue, security, and surveillance. For example, service robots, such as cleaning or delivery robots used on a daily basis, require the ability to detect spatial changes in order to safely and efficiently co-exist with humans, because the environment changes dynamically according to human behavior. These service robots also require precise localization in order to perform a desired task, and spatial change detection is important for improving its accuracy. For example, when scan matching (or ICP) is used for localization in an environment where spatial changes exist, some points measured by an on-board range sensor differ from the points in the previously created map. In this case, the points caused by the spatial changes should be detected and removed before applying scan matching.

In our previous paper [8], we proposed a fast spatial change detection technique that compares 3D range data obtained by an on-board RGB-D camera (Kinect for Xbox One) with a high-precision 3D map created by a 3D laser scanner. This technique first converts point clouds in the map and the measured data to grid data [normal distributions (ND) voxels] by the normal distributions transform (NDT) [9] and classifies the voxels into three categories. The voxels in the map and the measured data are then compared according to the category and features of the ND voxels. Overlapping and voting techniques are also introduced in order to detect spatial changes more robustly. We conducted preliminary experiments using a mobile robot equipped with an RGB-D camera, mainly in an indoor environment, in order to confirm the performance of the proposed spatial change detection technique.

The present paper shows the experimental results of on-line localization and spatial change detection, mainly in an outdoor environment, using an omni-directional laser scanner (Velodyne HDL-32e). We also show additional results of indoor experiments using an on-board RGB-D camera (Kinect for Xbox One). In the following sections, we first introduce the fast spatial change detection technique proposed in [8] and describe the system settings. We then present experiments in indoor and outdoor environments and compare the performance quantitatively with cutting-edge techniques.

Related research

Spatial change detection is a critical problem in some robotic applications [10,11,12,13,14,15,16]. Andreasson et al. [10] proposed autonomous change detection for a security patrol robot. They used color and depth information obtained from a 3D laser range finder and a camera. A precise reference model was first created from multiple color and depth images and was registered using the 3D normal distributions transform (3D-NDT) [17] representation. Spatial changes are detected by calculating the probability that the current point differs from the reference model using the 3D-NDT representation and color information. Saarinen et al. [18] proposed Normal Distributions Transform Occupancy Maps (NDT-OM), which concurrently represent the occupancy probability and the shape distribution of points (NDT) in each voxel. The occupancy probability is calculated from a sensor model and the point distribution in the voxel, and the similarity measure of two NDT-OMs is defined by the \(L_2\) distance function. Núñez et al. [11] proposed a fast change detection technique using a Gaussian mixture model and a fast and robust matching algorithm. Point-based comparison of an environmental model and a large number of point cloud data measured by an on-board range sensor incurs a large calculation cost. In order to solve this problem, they represented the environmental model and the measured data with a Gaussian mixture model and computed the differences using a high-speed algorithm. Fehr et al. [14] presented a 3D reconstruction technique for dynamic scenes involving movable objects using the truncated signed distance function (TSDF). They represented the current scene with TSDF grids and compared them with previous TSDF grids to obtain segmented movable objects in the scene. Luft et al. [15] proposed a stochastic approach to evaluate whether a grid cell has changed over time according to full-map posteriors represented by real-valued variables. Their technique enables consideration of the full-path information of the laser measurement, as opposed to end-point based approaches. Moreover, it considers the confidence about the cell values, as opposed to occupancy maps or most-likely maps. Palazzolo et al. [16] proposed a fast spatial change detection technique using a 3D model and a small number of 2D images. They assume that a 3D model is given, and spatial changes are detected using re-projection of the 2D images without 3D reconstruction of the model.

In general, spatial change detection can be classified into three groups: point/mesh-based, height-based, and voxel-based comparisons. Point/mesh-based comparison [6, 19] compares the distances of the nearest points or meshes in two point clouds, which is similar to the ICP algorithm [20]. Lague et al. [4] proposed using the distance along the normal direction of a local surface to make the algorithm robust to errors in 3D terrain data measured by a terrestrial laser scanner. In [3], point clouds are converted to panoramic distance images, which are compared directly. The problem with this type of technique is how to determine a proper threshold [19].

Height-based comparison is often used in geographical analysis in the earth sciences. The digital elevation model (DEM) of difference (DoD) is a popular technique for comparing geographical data captured by airborne or terrestrial laser scanners [2, 5, 21]. This technique also has the problem of selecting a proper threshold.

In voxel-based comparison, a point cloud is first converted to a voxel representation using, for example, an octree structure. Taking the XOR of occupancy voxels is the simplest way [22] to find spatial differences. In [23], three metrics for calculating the difference of voxels are compared: the average distance, the plane orientation, and the Hausdorff distance. The Hausdorff distance is the maximum of the minimum distances between points in two voxels and showed the best performance. However, its computational cost is quite high, because the closest point pairs must be determined. For spatial change detection in 3D, not only point clouds but also sequences of camera images have been used [24, 25].

The proposed technique is a voxel-based comparison method. However, rather than directly comparing the distances of points or meshes or the existence of occupied voxels, we use the point distribution in each voxel calculated by 3D-NDT. We classify the distribution of points in a voxel into three categories and compare the voxels in different scans according to the category. Although Andreasson et al. [10] also used 3D-NDT for spatial change detection, their technique can be classified as a point-based comparison because they used 3D-NDT to calculate the probability of a point being different from the reference model.

Scona et al. [26] presented a technique for constructing a static map using an RGB-D camera. This method utilizes an energy function consisting of the errors in 2D images and the errors in the label assignment. First, the acquired color and depth images are segmented into several clusters, and the camera position and the label indicating whether each cluster belongs to a static object are determined simultaneously by minimizing the error function. A robust estimator using the Cauchy penalty function is also adopted. In this method, the errors in the color and depth images are simply defined as the differences in pixel values between the images synthesized from the map and the measured RGB-D data. In addition, the main purpose of that paper is to extract static objects to build a static map, not to detect newly appeared objects.

Fast 3D localization using NDT and a particle filter

We previously proposed an efficient 3D global localization and tracking technique for a mobile robot in a large-scale environment using a 3D geometrical map and an RGB-D camera [27]. Conventional 3D localization techniques using 3D environmental information utilize a registration method based on point-to-point correspondence, such as the Iterative Closest Point (ICP) algorithm [28, 29], or on voxel-to-voxel correspondence, such as occupancy voxel counting [30]. However, these techniques are computationally expensive or have low accuracy, due to the costly nearest-point calculation or the discrete voxel representation, and are hard to apply to global localization using a large-scale environmental map. To tackle this problem, the proposed technique utilizes the ND voxel representation (Fig. 1) [28]. First, a 3D geometrical map represented by point clouds is converted to a number of ND voxels, and local features are extracted and stored as an environmental map. Range data captured by an RGB-D camera are also converted to ND voxels, and local features are calculated. For global localization and tracking, the similarity of ND voxels between the environmental map and the sensory data is examined according to the local features, and the optimum position is determined in a particle filter framework.

Fig. 1 Concept of NDT and ND voxels [17]
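
For concreteness, the following is a minimal sketch of how a point cloud can be converted to ND voxels (one mean and covariance per voxel). It uses a plain, non-overlapped grid; the voxel size and the minimum point count are illustrative assumptions, not the implementation used in the paper.

```cpp
#include <Eigen/Dense>
#include <array>
#include <cmath>
#include <map>
#include <vector>

// One ND voxel: the mean and covariance of the points that fall inside it.
struct NDVoxel {
  Eigen::Vector3d mean = Eigen::Vector3d::Zero();
  Eigen::Matrix3d cov = Eigen::Matrix3d::Zero();
  std::vector<Eigen::Vector3d> points;
};

// Integer grid coordinates of a voxel.
using VoxelKey = std::array<int, 3>;

std::map<VoxelKey, NDVoxel> buildNDVoxels(
    const std::vector<Eigen::Vector3d>& cloud, double voxel_size) {
  std::map<VoxelKey, NDVoxel> grid;
  // Bin every point into its voxel.
  for (const auto& p : cloud) {
    VoxelKey k = {static_cast<int>(std::floor(p.x() / voxel_size)),
                  static_cast<int>(std::floor(p.y() / voxel_size)),
                  static_cast<int>(std::floor(p.z() / voxel_size))};
    grid[k].points.push_back(p);
  }
  // Compute the normal distribution (mean, covariance) of each voxel.
  for (auto& kv : grid) {
    NDVoxel& v = kv.second;
    const double n = static_cast<double>(v.points.size());
    if (n < 5) continue;  // too few points for a stable covariance (assumed cutoff)
    for (const auto& p : v.points) v.mean += p;
    v.mean /= n;
    for (const auto& p : v.points) {
      const Eigen::Vector3d d = p - v.mean;
      v.cov += d * d.transpose();
    }
    v.cov /= (n - 1.0);
  }
  return grid;
}
```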

Spatial change detection using ND voxels

In this section, we introduce the spatial change detection technique using ND voxels [8]. The proposed technique re-uses the ND voxels generated for the fast 3D localization using NDT and a particle filter introduced in the previous section [27], and thus the computational cost of the spatial change detection can be reduced.

The proposed spatial change detection technique is based on voxel comparison. The simplest technique for spatial change detection using voxels is to compare the existence of occupied voxels in the same space by an XOR operation [22] (a minimal sketch of this baseline is given after Fig. 2), in which a spatial change is considered to have occurred if an occupied voxel exists in the map data but not in the measured data, or vice versa. This technique is simple and similar to occupancy grid mapping in 2D localization. However, due to quantization errors or localization and measurement errors, this simple technique does not work well in many cases. For example, if the localization error is larger than the voxel size, most of the voxels are labeled as spatial changes, and we cannot detect a change if an object is replaced with another object at the same position. In order to tackle these problems and realize robust spatial change detection, the technique proposed in this section adopts the following three techniques. The procedure of the proposed technique is shown in Fig. 2.

  1. Classification of point distribution in an ND voxel

  2. Overlapping of voxels in map data

  3. Voting of spatial change detection through sequential measurements.

Fig. 2 Spatial change detection procedure
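
As a point of reference for the comparisons later in the paper, the simple XOR-style baseline [22] can be sketched with PCL's octree-based change detector. Note that getPointIndicesFromNewVoxels() is one-directional (it reports voxels occupied in the second cloud but not the first), so the detector must be run in both directions to obtain a full XOR.

```cpp
#include <pcl/octree/octree_pointcloud_changedetector.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <vector>

// Indices of points in `current` that fall into voxels unoccupied in `reference`.
std::vector<int> newVoxelIndices(
    const pcl::PointCloud<pcl::PointXYZ>::Ptr& reference,
    const pcl::PointCloud<pcl::PointXYZ>::Ptr& current,
    float resolution) {  // voxel size in meters
  pcl::octree::OctreePointCloudChangeDetector<pcl::PointXYZ> octree(resolution);
  octree.setInputCloud(reference);
  octree.addPointsFromInputCloud();
  octree.switchBuffers();  // keep the reference occupancy, start a new buffer
  octree.setInputCloud(current);
  octree.addPointsFromInputCloud();
  std::vector<int> indices;
  octree.getPointIndicesFromNewVoxels(indices);
  return indices;
}
// Full XOR: call newVoxelIndices(map, scan, r) and newVoxelIndices(scan, map, r).
```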

Classification of point distributions in ND voxels

If we use the simple technique for spatial change detection that takes the XOR between the map and the measured voxels mentioned above, it is impossible to detect spatial changes if the voxel includes not only the point clouds to be detected as spatial changes but also other stationary point clouds, such as a floor or a wall. In addition, if an object is replaced with another object at the same position, the change is not detectable, because both voxels are occupied and the voxel occupancy does not change.

To solve this problem, the proposed technique classifies the point distribution in ND voxels into three categories. In the calculation of NDT during localization [27], the three eigenvalues \(\lambda _1, \lambda _2, \lambda _3 \ (\lambda _1<\lambda _2<\lambda _3)\) and the eigenvectors of the covariance matrix of the point cloud in each voxel are obtained. The localization process uses these eigenvalues and eigenvectors to extract representative planes and compares the map data with the measured data using a particle filter. In spatial change detection, according to the following criteria for the eigenvalues, we classify the point distribution in ND voxels into three categories: “Sphere”, “Sheet”, and “Line” (Fig. 3). In addition, if there are no measured points in a voxel, we refer to the voxel as “Empty”. Thus, one of four labels, “Sphere”, “Sheet”, “Line”, or “Empty”, is assigned to each voxel.

$$\begin{aligned}&Sphere\quad \lambda _3\approx \lambda _2 \approx \lambda _1 \gg 0 \end{aligned}$$
(1)
$$\begin{aligned}&Sheet\quad \lambda _3\approx \lambda _2 \gg \lambda _1 \approx 0 \end{aligned}$$
(2)
$$\begin{aligned}&Line \quad \lambda _3 \gg \lambda _2\approx \lambda _1 \approx 0 \end{aligned}$$
(3)
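
As an illustration, the classification of Eqs. (1)-(3) can be written as ratio tests on the sorted eigenvalues; the ratio threshold r below is an assumed parameter, since the paper does not state its exact decision rule.

```cpp
#include <Eigen/Dense>
#include <cstddef>

enum class VoxelClass { Sphere, Sheet, Line, Empty };

VoxelClass classifyVoxel(const Eigen::Matrix3d& cov, std::size_t n_points) {
  if (n_points == 0) return VoxelClass::Empty;
  Eigen::SelfAdjointEigenSolver<Eigen::Matrix3d> es(cov);
  const Eigen::Vector3d ev = es.eigenvalues();  // ascending: l1 <= l2 <= l3
  const double l1 = ev(0), l2 = ev(1), l3 = ev(2);
  const double r = 0.1;                        // "much smaller than" ratio (assumed)
  if (l1 > r * l3) return VoxelClass::Sphere;  // l3 ~ l2 ~ l1 >> 0   (Eq. 1)
  if (l2 > r * l3) return VoxelClass::Sheet;   // l3 ~ l2 >> l1 ~ 0   (Eq. 2)
  return VoxelClass::Line;                     // l3 >> l2 ~ l1 ~ 0   (Eq. 3)
}
```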

Magnusson et al. [31] also proposed a loop detection technique using histograms of three shapes (spherical, planar, and linear) classified from point clouds according to the eigenvalues. In our case, we use this classification to evaluate the difference between the map and measured voxels.

Fig. 3 Classification of point distribution in an ND voxel

If the voxels in the map and measured data are labeled with different categories, then we say that there is a spatial change in the space of that voxel. However, this criterion alone cannot distinguish between similarly shaped objects. Thus, in addition, we compare the normal or direction vectors of the sheets and lines, which are the eigenvectors corresponding to the minimum and maximum eigenvalues, respectively. If these vectors are sufficiently matched, then we consider these voxels to have the same label and ignore their spatial change. On the other hand, if both voxels are labeled “Sheet” or “Line” but the normal or direction vector of the sheet or line is significantly different, we consider there to be a spatial change:

$$\begin{aligned} (\varvec{n}_{map} , \varvec{n}_{measured})< N_t \ \ \ \ (Sheets) \end{aligned}$$
(4)
$$\begin{aligned} (\varvec{v}_{map} , \varvec{v}_{measured})< V_t \ \ \ \ (Lines) \end{aligned}$$
(5)

where \(\varvec{n}\) and \(\varvec{v}\) are the normal and direction vectors of the sheets and lines, which are the eigenvectors corresponding to the smallest and largest eigenvalues, respectively, and \(N_t\) and \(V_t\) are proper thresholds. \(\varvec{n}_{map}\) and \(\varvec{v}_{map}\) are calculated beforehand from the environmental map (point cloud) measured by a 3D laser scanner, and \(\varvec{n}_{measured}\) and \(\varvec{v}_{measured}\) are obtained from the data (point cloud) measured by an on-board range sensor. In the experiments in the “Experiments in indoor and outdoor environments” section, we set both \(N_t\) and \(V_t\) to 0.5. A similar idea can be seen in [23], in which the “best fitting plane orientation” was used to evaluate spatial changes.
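
A minimal sketch of this orientation test, assuming that the inner products in Eqs. (4) and (5) are taken on normalized eigenvectors and that the sign ambiguity of eigenvectors is resolved with an absolute value:

```cpp
#include <Eigen/Dense>
#include <cmath>

// True if two same-label voxels ("Sheet" or "Line") differ in orientation.
// v_map / v_measured: the normal (Sheet) or direction (Line) eigenvector.
bool orientationChanged(const Eigen::Vector3d& v_map,
                        const Eigen::Vector3d& v_measured,
                        double threshold = 0.5) {  // N_t = V_t = 0.5 in the experiments
  // Eigenvectors have arbitrary sign, so compare the absolute inner product.
  return std::abs(v_map.normalized().dot(v_measured.normalized())) < threshold;
}
```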

Overlapping of voxels in map data

The proposed technique inherently involves quantization error, because we divide the entire space into voxel grids and perform NDT for each voxel. Thus, the classification mentioned above is also affected by the quantization error. For example, the boundary between a floor and a wall is classified as “Sheet” if the majority of points in the voxel belong to either the floor or the wall, whereas it is classified as “Sphere” if both planes are evenly included.

In order to suppress the influence of quantization error, the proposed technique uses overlapping ND voxels [9]; that is, adjacent voxels overlap each other so that the centers of the voxels are displaced by half the voxel size, as shown in Fig. 4. As a result, every point in 3D space is contained in eight adjacent voxels. In practice, 27 adjacent voxels are considered in order to increase robustness against measurement and localization errors. We compare the target voxel in the measured data with up to 27 adjacent voxels in the map data, and if at least one of the 27 map voxels is sufficiently similar to the target voxel, that voxel is marked as “no change”. By comparing against 27 adjacent voxels, we can evaluate the degree of coincidence between the map and the measured data robustly with respect to the quantization error. Note that the overlapped ND voxels in the map can be calculated beforehand in order to reduce the on-line calculation cost.

Fig. 4 Overlapping ND voxels
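
The comparison itself reduces to a search over the 3×3×3 neighborhood. The sketch below reuses the NDVoxel and VoxelKey types from the localization sketch above; isSimilar() is a hypothetical helper standing in for the label and orientation tests of the previous subsection.

```cpp
#include <Eigen/Dense>
#include <array>
#include <map>

using VoxelKey = std::array<int, 3>;
struct NDVoxel { Eigen::Vector3d mean; Eigen::Matrix3d cov; };  // as sketched earlier

// Hypothetical helper: label comparison plus the orientation test of Eqs. (4)-(5).
bool isSimilar(const NDVoxel& map_voxel, const NDVoxel& measured_voxel);

// A measured voxel is spatially changed only if none of the (up to) 27
// adjacent overlapped map voxels is similar to it.
bool isSpatiallyChanged(const VoxelKey& key, const NDVoxel& measured,
                        const std::map<VoxelKey, NDVoxel>& map_grid) {
  for (int dx = -1; dx <= 1; ++dx)
    for (int dy = -1; dy <= 1; ++dy)
      for (int dz = -1; dz <= 1; ++dz) {
        const VoxelKey k = {key[0] + dx, key[1] + dy, key[2] + dz};
        const auto it = map_grid.find(k);
        if (it != map_grid.end() && isSimilar(it->second, measured))
          return false;  // at least one similar map voxel: "no change"
      }
  return true;
}
```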

Voting of spatial change detection through sequential measurements

The measurement data taken from a range sensor are corrupted by noise, and the measurement noise tends to be detected as spatial change in some cases. However, sensor data can be acquired continuously, and the noise is added randomly. In order to suppress the influence of measurement noise, a voting technique over sequential measurements is adopted. We first extract the voxels that are regarded as spatially changed in each measured frame. Then, for each extracted voxel, we vote on the adjacent voxels in the global coordinate system with the following weights according to a 3D normal distribution. Finally, if the voted score of a voxel becomes larger than a pre-determined threshold, the voxel is marked as “spatially changed”.

$$\begin{aligned} w(\varvec{p}) = N(\varvec{c}, \sigma ^2) \end{aligned}$$
(6)

where \(\varvec{p}\) is the center of the adjacent voxel in the map data and \(\varvec{c}\) is the center of the original voxel in the measured data.

In the experiments in the “Experiments in indoor and outdoor environments” section, we set the voxel size to 400 mm and \(\sigma\) to 200 mm, and voted the spatial change information to the 27 adjacent voxels. We implemented the calculation of 3D NDT and the categorization of ND voxels in C++ using the PCL library (ndt_3d.cpp).
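
A minimal sketch of this voting step: each voxel detected as changed in a frame casts Gaussian-weighted votes (Eq. 6) on its 27 neighbors, and the accumulated score is later thresholded. The unnormalized Gaussian and the score threshold are illustrative assumptions.

```cpp
#include <array>
#include <cmath>
#include <map>

using VoxelKey = std::array<int, 3>;

// Vote on the 27 voxels around `key` with weights w(p) = N(c, sigma^2),
// where c is the center of the changed voxel (voxel_size = 0.4 m and
// sigma = 0.2 m in the experiments).
void voteChange(const VoxelKey& key, double voxel_size, double sigma,
                std::map<VoxelKey, double>& scores) {
  for (int dx = -1; dx <= 1; ++dx)
    for (int dy = -1; dy <= 1; ++dy)
      for (int dz = -1; dz <= 1; ++dz) {
        // Squared distance between the neighbor center p and the center c.
        const double d2 = (dx * dx + dy * dy + dz * dz) * voxel_size * voxel_size;
        const double w = std::exp(-d2 / (2.0 * sigma * sigma));  // unnormalized Gaussian
        const VoxelKey nk = {key[0] + dx, key[1] + dy, key[2] + dz};
        scores[nk] += w;
      }
}

// After all frames have voted, a voxel is marked "spatially changed" when its
// score exceeds a pre-determined threshold (the value is not given in the text).
bool isFinallyChanged(const std::map<VoxelKey, double>& scores,
                      const VoxelKey& key, double threshold) {
  const auto it = scores.find(key);
  return it != scores.end() && it->second > threshold;
}
```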

Experiments in indoor and outdoor environments

Indoor experiments

We conducted experiments in two indoor environments in order to confirm the performance of the proposed technique. The mobile robot system used in the experiments is shown in Fig. 5. This robot is equipped with an omni-directional laser scanner (Velodyne HDL-32e) and an RGB-D camera (Kinect for Xbox One, Microsoft).

Fig. 5 Mobile robot system equipped with an omni-directional laser scanner (Velodyne HDL-32e) and an RGB-D camera (Kinect, Microsoft)

The procedure for localization and spatial change detection is shown in Fig. 6. First, the map data is measured by a precise 3D laser scanner, and the (overlapped) ND voxels and their labels (“Sphere”, “Sheet”, “Line”, or “Empty”) are calculated beforehand. For detecting spatial changes, the reference data measured by the Velodyne HDL-32e is converted to ND voxels. The ND voxels in the map and the reference data are then compared, and the robot position is determined by the NDT-based localization technique described in the “Fast 3D localization using NDT and a particle filter” section [27]. The obtained coordinate transformation is applied to the reference data measured by the Kinect, and ND voxels are calculated. Finally, the overlapped ND voxels in the map and the ND voxels in the reference data are compared by the proposed technique, and the spatial changes are detected.

Fig. 6 Procedure of localization and spatial change detection

Spatial change detection in a corridor

First, we conducted an experiment on spatial change detection in a narrow corridor. The environmental map of the corridor was created beforehand by scanning from three positions using a high-precision laser scanner (Faro Focus 3D). Figures 7 and 8 show the experimental setup and the positions of five objects placed as spatial changes. The sizes of these objects are \(\textcircled {\small 1}\) \(40\text { cm} \times 40\text { cm} \times 40\text { cm}\), \(\textcircled {\small 2}\) \(40\text { cm} \times 40\text { cm} \times 120\text { cm}\), \(\textcircled {\small 3}\) \(40\text { cm} \times 40\text { cm} \times 80\text { cm}\), \(\textcircled {\small 4}\) \(40\text { cm} \times 80\text { cm} \times 40\text { cm}\), and \(\textcircled {\small 5}\) \(40\text { cm} \times 40\text { cm} \times 80\text { cm}\). The robot moved straight for 8 m, turned right at the corner, and moved straight for another 7 m. In this experiment, we used the RGB-D camera for localization instead of the omni-directional laser scanner and captured 75 range images with the RGB-D camera at stops along the route. These range images were used offline for localization and spatial change detection. The voxel size for localization and spatial change detection is 40 cm, and the number of particles is fixed at 2000. The processing time of this experiment is shown in Table 1; the localization was not processed in real time, because the RGB-D camera produces a huge number of points, which makes the localization time-consuming.

Fig. 7 Indoor environment (corridor)

Fig. 8 Spatial changes (5 boxes)

Table 1 Processing time

Figure 9a shows the regions (red cubes) detected as spatially changed using the XOR calculation of occupied voxels [22]. Figure 9b–d show the regions detected using the classification of point distributions (“Classification of point distributions in ND voxels” section); classification plus overlapping of voxels in the map data (“Overlapping of voxels in map data” section); and classification, overlapping, and voting of spatial changes through sequential measurements (“Voting of spatial change detection through sequential measurements” section), respectively. In Fig. 9b–d, detected voxels classified as “Sphere”, “Sheet”, and “Line” are illustrated by red, blue, and green cubes, respectively. Table 2 shows the number of voxels detected as spatial changes. As shown in Fig. 9d, the regions of spatial change are mostly detected by the proposed technique described in the “Spatial change detection using ND voxels” section. One region is misdetected as a spatial change: in this region, the points measured by the RGB-D camera are classified as “Line” due to the occlusion caused by a single viewpoint, whereas the region is classified as “Sheet” in the map created by the high-precision laser scanner, since no occlusion occurred in the measurement from multiple viewpoints.

In addition, we measured the precise position of the robot at each stop using the laser scanner in order to evaluate the accuracy of localization. The average positioning error over 16 positions was 0.171 m.

Table 2 Number of voxels detected as spatial changes
Fig. 9 Detected spatial changes (red) in a corridor

Spatial change detection in a hall

Next, we conducted an experiment on spatial change detection in a large hall with dimensions of \(40\text { m} \times 11\text { m}\). We placed eight objects having dimensions of \(\textcircled {\small a}\) \(10\text { cm} \times 10\text { cm} \times 10\text { cm}\) (\(\textcircled {\small 1}\) and \(\textcircled {\small 5}\)), \(\textcircled {\small b}\) \(20\text { cm} \times 20\text { cm} \times 20\text { cm}\) (\(\textcircled {\small 2}\) and \(\textcircled {\small 6}\)), \(\textcircled {\small c}\) \(30\text { cm} \times 30\text { cm} \times 30\text { cm}\) (\(\textcircled {\small 3}\) and \(\textcircled {\small 7}\)), and \(\textcircled {\small d}\) \(40\text { cm} \times 40\text { cm} \times 40\text { cm}\) (\(\textcircled {\small 4}\) and \(\textcircled {\small 8}\)) at the positions shown in Fig. 10.

The robot (Fig. 5) then moved automatically along the desired path shown in Fig. 10, fusing the position information from the particle filter [27] and odometry at 1 Hz. The omni-directional laser scanner was used for real-time localization. The range data from the RGB-D camera (measured data) were transformed using the estimated position information and compared with the environmental map using the proposed technique. In this experiment, the voxel size for localization and spatial change detection is 40 cm.

Figure 10 shows the spatial changes detected by the proposed method. Although one region is misdetected, as indicated by a white circle, the four kinds of objects placed later at eight positions are correctly detected in this experiment. We ran the robot from the same initial position to the target position while taking RGB-D data ten times and obtained the detection rate of the spatial changes shown in Table 3. Note that we considered an object to be detected if at least one voxel containing a vertex, edge, or plane of the object was detected as spatially changed.

Fig. 10 Detected changes. Four kinds of objects placed at eight positions are correctly detected using the proposed method

Table 3 The detection rate for four kinds of objects

We also compared the performance of the proposed method with the 3D-NDT based spatial change detection by Andreasson et al. [10], the \(L_2\) distance function [18], and the simple method using the XOR calculation [22]. The detection rates of the spatial changes for these techniques are shown in Table 3. Figure 11a shows the spatial changes detected by the 3D-NDT based method [10]. In this experiment, we used only the depth images captured by the RGB-D camera; the color images were not used. Although the changed regions are mostly detected, some regions are misdetected or undetected.

Fig. 11 Detected changes by a 3D-NDT based spatial change detection [10], b \(L_2\) distance function [18], c taking XOR of occupancy voxels [22], and d taking XOR of overlapped occupancy voxels. Misdetected regions and undetected objects are indicated by white circles and crosses

Figure 11b shows the spatial changes detected by the \(L_2\) distance function [18]. Figure 11c, d show the spatial changes detected by taking the XOR between the map and the measured voxels [22]. Figure 11d uses the overlapped voxels in the map, and we judged that a voxel is not spatially changed if at least one of the 27 map voxels adjacent to the measured voxel is occupied. In these experiments using the XOR calculation, a number of misdetected regions are found, mainly caused by the positioning error of the mobile robot. On the other hand, the proposed method (Fig. 10) is robust against position error thanks to the voxel classification and voting techniques.

Finally, we show the precision and recall of the detection of spatially changed voxels for each method in Table 4. We used the overlapped voxels in the map and considered that a voxel in the map should be detected as spatially changed if it contains a vertex, edge, or plane of an object. Table 4 shows that the proposed method, which uses the classification of point distributions, overlapping voxels, and voting, gives the highest precision (98.50%) and outperforms the other techniques. Figure 12 shows the PR and ROC curves for each method with various parameters. These figures show that the proposed technique outperforms the conventional 3D-NDT [10] and \(L_2\) distance based techniques. Note that the recall is considerably low for all methods. This is because all voxels including at least one vertex, edge, or plane of an object are regarded as correct voxels to be detected; therefore, for example, the voxels on a wall hidden by an object are counted as missing voxels that are not correctly detected.

Table 4 Precision and recall [%]
Fig. 12 PR and ROC curves in the indoor experiment

Table 5 shows the average processing time for one cycle of each step during the experiments (Intel Core i7, 3.40 GHz). The average processing time for spatial change detection by the proposed technique is 20.4 ms, including the conversion of the depth images captured by the RGB-D camera to the ND voxel representation. In contrast, the processing time of the 3D-NDT based spatial change detection [10] is 570.0 ms; the proposed technique is thus 27.9 times faster. The simple method using the XOR calculation [22] is slightly faster than the proposed method. Note that, since these processes can be executed independently, we ran the localization and spatial change detection processes at 1 Hz in the experiments.

Table 5 Processing time for each step

Outdoor experiments

We also conducted experiments in an outdoor environment, where it is very difficult to acquire range data using an RGB-D camera in direct sunlight. Therefore, the omni-directional laser scanner (Velodyne HDL-32e) was used not only for localization but also for spatial change detection. The horizontal resolution of the omni-directional laser scanner is about 1260 points per 360°, and the vertical resolution is 32 lines over 41.3°. Thus, if the space is divided into uniform voxels, the number of points measured by the laser scanner in each voxel decreases as the measured distance increases. This makes the normal distribution calculation in each voxel very unstable. In addition, objects in front of the robot should be detected more reliably than objects behind it. Two techniques are used to overcome these problems: one is to attach the laser scanner to the robot at a slight angle, and the other is to accumulate consecutive scans in each voxel while moving.

First, the omni-directional laser scanner is tilted by 15° to increase the scanning resolution in the area in front of the robot, as shown in Fig. 13. Figure 14 shows the difference in scan lines with and without tilting the omni-directional laser scanner. If the laser scanner is tilted, the laser beams are densely projected on the floor in front of the robot. In addition, as shown in Fig. 15, if the robot moves along a wall, the laser beams are projected diagonally onto the wall, and dense range data are obtained as the robot moves. On the other hand, if the laser scanner is not tilted and is placed parallel to the floor, the laser beams are projected horizontally onto the wall, and no new data can be obtained even if the robot moves.

Fig. 13 Mobile robot with the omni-directional laser scanner tilted by 15°

Fig. 14 Scan lines (black lines) with and without tilting the omni-directional laser scanner

Fig. 15 If the laser scanner is tilted, the laser beams densely sweep an object such as a wall along the moving direction

In addition, in order to increase the number of 3D points in the NDT calculation for each voxel, we accumulate consecutive scan data in each voxel, and the accumulated scan data are converted to ND voxels, as in the sketch below. In the experiments, two consecutive frames are used at a time for the calculation of the ND voxels.
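
A minimal sketch of this accumulation, assuming the consecutive scans have already been transformed into a common global frame using the estimated robot pose:

```cpp
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

// Merge two consecutive (already globally aligned) scans before computing
// the ND voxels, so that each voxel contains enough points for a stable NDT.
pcl::PointCloud<pcl::PointXYZ>::Ptr accumulateScans(
    const pcl::PointCloud<pcl::PointXYZ>::Ptr& prev_scan,
    const pcl::PointCloud<pcl::PointXYZ>::Ptr& curr_scan) {
  pcl::PointCloud<pcl::PointXYZ>::Ptr merged(new pcl::PointCloud<pcl::PointXYZ>);
  *merged = *prev_scan;
  *merged += *curr_scan;  // pcl::PointCloud supports concatenation via operator+=
  return merged;
}
```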

In the experiments, we first scanned the outdoor environment from three positions using a high-precision laser scanner (Faro Focus 3D) and obtained an environmental map. The procedure of the outdoor experiments is shown in Fig. 16. This procedure is almost the same as the procedure for the indoor environment shown in Fig. 6, but the on-board sensor for the spatial change detection is changed from the Kinect to the Velodyne HDL-32e.

Fig. 16 Procedure of the outdoor experiments

We then placed eight objects having dimensions of \(\textcircled {\small a}\) \(10\text { cm} \times 10\text { cm} \times 10\text { cm}\) at \(\textcircled {\small 1}\) and \(\textcircled {\small 5}\); \(\textcircled {\small b}\) \(20\text { cm} \times 20\text { cm} \times 20\text { cm}\) at \(\textcircled {\small 2}\) and \(\textcircled {\small 6}\); \(\textcircled {\small c}\) \(30\text { cm} \times 30\text { cm} \times 30\text { cm}\) at \(\textcircled {\small 3}\) and \(\textcircled {\small 7}\); and \(\textcircled {\small d}\) \(40\text { cm} \times 40\text { cm} \times 40\text { cm}\) at \(\textcircled {\small 4}\) and \(\textcircled {\small 8}\), as shown in Fig. 17.

The robot automatically moved 15 m along the path shown in Fig. 17 while voting spatial changes at 0.5 Hz. The total number of scans by the omni-directional laser scanner was 910 frames. The localization and spatial change detection were processed on-line and in real time.

Fig. 17 Spatial changes (8 boxes)

Figure 18 shows the regions (red cubes) detected by the proposed procedure while the robot was moving. In this experiment, \(\textcircled {\small 1}\), \(\textcircled {\small 2}\), and \(\textcircled {\small 5}\) could not be detected, because the density of the points measured by the omni-directional laser scanner is low compared with that of the RGB-D camera, even when the scanner is tilted.

Fig. 18 Detected spatial changes

Table 6 shows the average detection rate of the proposed method, 3D-NDT [10], and \(L_2\) [18] over ten trials. In this experiment, the smallest object \(\textcircled {\small a}\) could not be detected by any of the techniques.

Table 6 The detection rate for four kinds of objects

Table 7 shows the precision and recall of the voxel detection, and Fig. 19 shows the PR and ROC curves for each method with various parameters. These figures show that the proposed technique also outperforms the conventional 3D-NDT [10] and \(L_2\) distance based techniques in the outdoor environment.

Table 7 Precision and recall [%]
Fig. 19 PR and ROC curves in the outdoor experiment

Conclusions

The present paper presented the results of indoor and outdoor experiments on the fast spatial change detection technique for a mobile robot [8] using on-board range sensors and a high-precision 3D map created by a 3D laser scanner. This technique first converts point clouds in the map and the measured data to grid data (ND voxels) using NDT and classifies the voxels into three categories. The voxels in the map and the measured data are then compared according to the category and features of the ND voxels. The proposed technique consists of the following three techniques.

  1. Classification of the point distribution

  2. Overlapping of voxels in map data

  3. Voting of spatial change detection through sequential measurements

The contributions of this paper are threefold:

  • We proposed a real-time spatial change detection technique using 3D-NDT and voxel classification. The proposed technique can be integrated with the real-time localization technique [27] using 3D-NDT and a particle filter, reducing the calculation cost.

  • We implemented the proposed technique using the PCL library and conducted experiments in indoor and outdoor environments using an RGB-D camera (Microsoft Kinect) and an omni-directional laser scanner (Velodyne HDL-32e).

  • Through the experiments in indoor and outdoor environments, we confirmed that the proposed localization and spatial change detection techniques can be processed in real time using on-board sensors and that they outperform other cutting-edge techniques.

Future work includes performance evaluation in actual scenes, such as stations or market areas, and improvement of the performance by using other information, such as color or laser reflectance. In particular, laser reflectance, which is obtained as a by-product of range measurement by a laser scanner, is measured stably regardless of the lighting conditions, even at night. It is therefore very useful as additional information for evaluating spatial changes robustly.

Availability of data and materials

Not applicable.

References

  1. Hebel M, Arens M, Stilla U (2013) Change detection in urban areas by object-based analysis and on-the-fly comparison of multi-view ALS data. ISPRS J Photogramm Remote Sens 86:52–64

  2. Vu TT, Matsuoka M, Yamazaki F (2004) Lidar-based change detection of buildings in dense urban areas. In: 2004 IEEE international geoscience and remote sensing symposium, vol 5, pp 3413–3416

  3. Zeibak R, Filin S (2007) Change detection via terrestrial laser scanning. Int Arch Photogramm Remote Sens 36(3):430–435

  4. Lague D, Brodu N, Leroux J (2013) Accurate 3D comparison of complex topography with terrestrial laser scanner: application to the Rangitikei canyon (n-z). ISPRS J Photogramm Remote Sens 82:10–26

  5. Thomas B, Remco C, Zachary W, William R (2008) Visual analysis and semantic exploration of urban lidar change detection. Comput Graph Forum 27(3):903–910

  6. Palma G, Sabbadin M, Corsini M, Cignoni P (2018) Enhanced visualization of detected 3D geometric differences. Comput Graph Forum 37(1):159–171

  7. Li Y, Fan X, Mitra NJ, Chamovitz D, Cohen-Or D, Chen B (2013) Analyzing growing plants from 4D point cloud data. ACM Trans Graph 32(6):157–115710

  8. Katsura U, Matsumoto K, Kawamura A, Ishigami T, Okada T, Kurazume R (2019) Spatial change detection using voxel classification by normal distributions transform. In: Proceedings 2019 IEEE international conference on robotics and automation (ICRA 2019), pp 2953–2959

  9. Biber P, Strasser W (2003) The normal distributions transform: a new approach to laser scan matching. In: Proceedings 2003 IEEE/RSJ international conference on intelligent robots and systems (IROS 2003) (Cat. No.03CH37453), vol 3, pp 2743–2748

  10. Andreasson H, Magnusson M, Lilienthal A (2007) Has something changed here? autonomous difference detection for security patrol robots. In: Proceedings 2007 IEEE/RSJ international conference on intelligent robots and systems, pp 3429–3435

  11. Núñez P, Drews P, Bandera A, Rocha R, Campos M, Dias J (2010) Change detection in 3D environments based on Gaussian mixture model and robust structural matching for autonomous robotic applications. In: Proceedings 2010 IEEE/RSJ international conference on intelligent robots and systems, pp 2633–2638

  12. Underwood JP, Gillsjo D, Bailey T, Vlaskine V (2013) Explicit 3D change detection using ray-tracing in spherical coordinates. In: 2013 IEEE international conference on robotics and automation, pp 4735–4741

  13. Vieira AW, Drews PLJ, Campos MFM (2014) Spatial density patterns for efficient change detection in 3D environment for autonomous surveillance robots. IEEE Trans Autom Sci Eng 11(3):766–774

  14. Fehr M, Furrer F, Dryanovski I, Sturm J, Gilitschenski I, Siegwart R, Cadena C (2017) Tsdf-based change detection for consistent long-term dense reconstruction and dynamic object discovery. In: Proceedings 2017 IEEE international conference on robotics and automation, pp 5237–5244

  15. Luft L, Schaefer A, Schubert T, Burgard W (2018) Detecting changes in the environment based on full posterior distributions over real-valued grid maps. IEEE Robotics Autom Lett 3(2):1299–1305

  16. Palazzolo E, Stachniss C (2018) Fast image-based geometric change detection given a 3d model. In: 2018 IEEE international conference on robotics and automation (ICRA), IEEE, New York, pp 6308–6315

  17. Magnusson M, Duckett T (2005) A comparison of 3D registration algorithms for autonomous underground mining vehicles. In: The European conference on mobile robotics (ECMR 2005), pp 86–91

  18. Saarinen JP, Andreasson H, Stoyanov T, Lilienthal AJ (2013) 3D normal distributions transform occupancy maps: an efficient representation for mapping in dynamic environments. Int J Robotics Res 32(14):1627–1644

  19. Gianpaolo P, Paolo C, Tamy B, Roberto S (2016) Detection of geometric temporal changes in point clouds. Comput Graph Forum 35(6):33–45

  20. Besl PJ, McKay ND (1992) A method for registration of 3-D shapes. IEEE Trans Pattern Anal Mach Intell 14(2):239–256

  21. Murakami H, Nakagawa K, Hasegawa H, Shibata T, Iwanami E (1999) Change detection of buildings using an airborne laser scanner. ISPRS J Photogramm Remote Sens 54(2):148–152

  22. Spatial change detection on unorganized point cloud data. http://pointclouds.org/documentation/tutorials/octree_change.php. Accessed 12 Dec 2019

  23. Girardeau-Montaut D, Roux M, Marc R, Thibault G (2005) Change detection on points cloud data acquired with a ground laser scanner. Int Arch Photogramm Remote Sens Spatial Inf Sci 36(PART 3):30–35

  24. Pollard T, Mundy JL (2007) Change detection in a 3-d world. In: Proceedings 2007 IEEE conference on computer vision and pattern recognition, pp 1–6

  25. Ulusoy AO, Mundy JL (2014) Image-based 4-D reconstruction using 3-D change detection. In: Fleet D, Pajdla T, Schiele B, Tuytelaars T (eds) Proceedings European conference on computer vision 2014. Springer, Cham, pp 31–45

  26. Scona R, Jaimez M, Petillot YR, Fallon M, Cremers D (2018) StaticFusion: background reconstruction for dense RGB-D slam in dynamic environments. In: 2018 IEEE international conference on robotics and automation (ICRA), IEEE, New York, pp 1–9

  27. Oishi S, Jeong Y, Kurazume R, Iwashita Y, Hasegawa T (2013) ND voxel localization using large-scale 3D environmental map and RGB-D camera. In: Proceedings 2013 IEEE international conference on robotics and biomimetics (ROBIO), pp 538–545

  28. Nuchter A, Surmann H, Lingemann K, Hertzberg J, Thrun S (2004) 6D slam with an application in autonomous mine mapping. In: Proceedings IEEE international conference on robotics and automation 2004, vol 2, pp 1998–2003

  29. Nüchter A, Lingemann K, Hertzberg J, Surmann H (2007) 6D slam—3d mapping outdoor environments: research articles. J Field Robotics 24(8–9):699–722

  30. Olson CF, Matthies LH (1998) Maximum likelihood rover localization by matching range maps. In: Proceedings 1998 IEEE international conference on robotics and automation (Cat. No.98CH36146), vol 1, pp 272–277

  31. Magnusson M, Andreasson H, Nuchter A, Lilienthal AJ (2009) Appearance-based loop detection from 3D laser data using the normal distributions transform. In: Proceedings 2009 IEEE international conference on robotics and automation, pp 23–28

Acknowledgements

Not applicable.

Funding

Not applicable.

Author information

Contributions

UK, KM, TI, and TO developed the system and carried out the experiments. AK managed the study. RK constructed the study concept and drafted the manuscript. All members verified the content of their contributions. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Ryo Kurazume.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Katsura, U., Matsumoto, K., Kawamura, A. et al. Spatial change detection using normal distributions transform. Robomech J 6, 20 (2019). https://doi.org/10.1186/s40648-019-0148-8
