Open Access (CC BY 4.0 license). Published by De Gruyter, October 14, 2020

Effect of photogrammetric RPAS flight parameters on plani-altimetric accuracy of DTM

  • Zuriel Dathan Mora-Felix, Antonio Jesus Sanhouse-Garcia, Yaneth A. Bustos-Terrones, Juan G. Loaiza, Sergio Alberto Monjardin-Armenta and Jesus Gabriel Rangel-Peraza
From the journal Open Geosciences

Abstract

Remotely piloted aerial systems (RPASs) are rapidly gaining wide application around the world due to their relatively low cost for acquiring high-resolution imagery. However, standardized protocols for the construction of cartographic products are needed. The aim of this paper is to optimize the generation of digital terrain models (DTMs) by using different RPAS flight parameters. An orthogonal L18 design was used to measure the effect of photogrammetric flight parameters on the generated DTM. The image data were acquired using a DJI Phantom 4 Pro drone, and six flight parameters were evaluated: flight mode, altitude, flight speed, camera tilt, longitudinal overlap and transversal overlap. Fifty-one ground control points were established using a global positioning system. Multivision algorithms were used to obtain ultra-high-resolution point clouds, orthophotos and 3D models from the photos acquired. The root mean square error was used to measure the geometric accuracy of the DTMs generated, and the effect of the photogrammetric flight parameters was evaluated by analysis of variance. Altimetric and planimetric accuracies of 0.38 and 0.11 m were achieved, respectively. Based on these results, high-precision cartographic material was generated using low-cost technology.

Abbreviations

ALS: aerial laser scanner
ANOVA: analysis of variance
CP: checkpoint
DEM: digital elevation model
DSM: digital surface model
DTM: digital terrain model
GCP: ground control point
GNSS: global navigation satellite system
GPS: global positioning system
GSD: ground sample distance
IMU: inertial measurement unit
INS: inertial navigation system
LIDAR: light detection and ranging
LM: Levenberg–Marquardt
NSSDA: National Standard for Spatial Data Accuracy
RMSE: root mean square error
RPAS: remotely piloted aerial system
SfM: structure from motion
SIFT: scale invariant feature transform
SMG: semiglobal matching
TIN: triangular irregular network
TLS: terrestrial laser scanner
UTM: Universal Transverse Mercator
WGS: World Geodetic System

1 Introduction

A digital elevation model (DEM) is a 3D graphical representation of the terrain surface: a digital terrain model (DTM) describes the bare-ground morphology, while a digital surface model (DSM) also includes the anthropogenic elements and vegetation present. DEMs provide critical data for delineating, explaining and predicting terrain surface change, especially when high-resolution spatial data are needed [1]. The technologies used to build high-precision DEMs are robotic total stations [2], global navigation satellite systems (GNSS) [3], terrestrial laser scanners [4] and aerial laser scanners [5]. Limitations of these technologies include the complexity of data acquisition, high operating costs and long working periods for acquiring highly detailed information [6].

Conventional remotely piloted aerial system (RPAS) photogrammetry is considered an economical alternative capable of acquiring images to build highly detailed surface models that are very useful for scientific research [7]. Its implementation requires relatively few aerial images and, therefore, no complex methodological activities. According to Uysal et al. [8], RPAS photogrammetry is a robust technology for topographic mapping and modeling with several advantages, such as cost reduction, easy access to complicated areas, and reduced operation times and field activities, among others. This technology can compete with others capable of producing high-detail DEMs [9].

However, RPAS photogrammetry is sensitive to factors such as light intensity changes, photograph scale and rotations. Data acquisition is performed in a similar way as in conventional photogrammetric flights. Several studies describe the effect of RPAS flight parameter configuration on plani-altimetric precision and DEM accuracy [10,11,12,13]. Camera configuration parameters, such as angular orientation, resolution and lens diameter, are essential to obtain results with high resolution and level of detail [10,11,12]. However, noise can sometimes produce images with distortions. Camera calibration is therefore also recognized as necessary to obtain high-quality images for further processing [6], while other studies focus on the development of control algorithms to compensate for wind gusts during flight [14,15].

Eltner et al. [9] and Hirschmüller [16] suggest that flight altitude, speed and trajectory have a greater influence on DEM accuracy. Ground sample distance (GSD) and longitudinal and lateral overlaps have also been proposed as critical flight parameters in RPAS photogrammetry [17]. However, recent studies have demonstrated that high-quality cartographic products are obtained when implementing image processing technologies such as artificial vision algorithms and artificial stereoscopic simulation [18,19,20]. The DEM accuracy registered when using structure from motion (SfM) for image processing in RPAS photogrammetry has already been compared with airborne light detection and ranging technology [21]. Therefore, the construction of maps and 3D terrain models through the collection and integration of photographs from different heights and directions has been proposed as an alternative to counter the discrepancies found in the literature related to RPAS flight configuration [22,23].

The aforementioned studies analyze the effect of one or two photogrammetric parameters separately. Since these photogrammetric parameters have not previously been compared while being manipulated simultaneously, this study proposes a novel experimental strategy that analyzes the influence of six photogrammetric flight parameters using an L18 orthogonal array together with artificial vision algorithms for image processing. The main objectives of this study are to assess the effect of different flight parameter configurations on DTM accuracy and to identify the optimum flight conditions under which the lowest planimetric and altimetric errors are obtained.

2 Methods

2.1 Study area

The study area is located in the city of Culiacan, Sinaloa, Mexico, in the lower part of the Culiacan river basin (Figure 1), at coordinates 24°48′40.3″–24°48′69.2″ N and 107°24′53.1″–107°24′24.5″ W in the World Geodetic System 84 (WGS84) reference system. High-intensity rainfalls and tropical storms occur in the study area during the rainy season [24]. The zone is a sloping terrain characterized by constant flooding caused by the rise of the Tamazula River. These river floods have caused severe damage to the urban area; therefore, a high-detail DEM is needed for future hydrological modeling in this area.

Figure 1 Study area. Source: Own elaboration using QGIS 3.10 La coruña (Open Source Geospatial Foundation, Beaverton, USA, 2020).

2.2 DEM construction process

The methodology used in this study is presented in Figure 2. The DEM construction process comprised three main phases. The first phase consisted of the photogrammetric flight planning project, in which the RPAS flight parameters were established and the photogrammetric surveys were carried out. The second phase, data processing, consisted of defining a photogrammetric block and recovering information about the scene captured during image acquisition (i.e., camera orientation and image point correlation). At this stage, the 3D reconstruction was also carried out through point cloud extraction, and the DEM was generated using different mathematical algorithms within SfM technology (SIFT, Hamming distance, Poisson reconstruction, semiglobal matching (SMG) and Delaunay triangulation). Finally, the third phase involved the DEM accuracy assessment using the root mean square error (RMSE), followed by a statistical analysis (analysis of variance [ANOVA]) to evaluate the effect of the photogrammetric flight parameters on DEM accuracy.

Figure 2 DEM process generation diagram. Source: Own elaboration.

2.3 First phase: photogrammetric project definition

Step 1.1 Project definition. A DJI Phantom 4 Pro quadcopter was used for image acquisition. This RPAS has a global positioning system (GPS)/GLONASS receiver and a sensor mounted on a gimbal stabilizer. A Sony RX100 camera with a 35 mm focal length and a 20.2-effective-megapixel CMOS sensor was used. The strategies used in conventional photogrammetric flight planning were implemented in the RPAS flight. The photogrammetric project began by defining the area of interest: a polygon was drawn on a georeferenced base map, and the camera and onboard sensors were calibrated (tested).

Step 1.2 Flight planning. Open areas were identified as emergency landing points for the RPAS. Then, the photogrammetric parameters were defined. The RPAS parameters considered in this study were flight mode, flight altitude, flight speed, camera tilt and the longitudinal and transversal overlaps of the images. An L18 orthogonal array was used to determine the different flight parameter combinations for image acquisition (Table 1). This orthogonal array was used to measure the effect of these six photogrammetric flight parameters on the accuracy of the DEMs generated.

Table 1

Flight parameters used for image acquisition

Treatment | Flight mode | Flight altitude (m) | Camera tilt (°) | Longitudinal overlap (%) | Transversal overlap (%) | Speed (m/s)
1 | Normal grid | 60 | 65 | 40 | 60 | 5
2 | Normal grid | 60 | 75 | 50 | 70 | 7
3 | Normal grid | 60 | 85 | 60 | 80 | 9
4 | Normal grid | 70 | 65 | 40 | 70 | 7
5 | Normal grid | 70 | 75 | 50 | 80 | 9
6 | Normal grid | 70 | 85 | 60 | 60 | 5
7 | Normal grid | 80 | 65 | 50 | 60 | 9
8 | Normal grid | 80 | 75 | 60 | 70 | 5
9 | Normal grid | 80 | 85 | 40 | 80 | 7
10 | Double grid | 60 | 65 | 60 | 80 | 7
11 | Double grid | 60 | 75 | 40 | 60 | 9
12 | Double grid | 60 | 85 | 50 | 70 | 5
13 | Double grid | 70 | 65 | 50 | 80 | 5
14 | Double grid | 70 | 75 | 60 | 60 | 7
15 | Double grid | 70 | 85 | 40 | 70 | 9
16 | Double grid | 80 | 65 | 60 | 70 | 9
17 | Double grid | 80 | 75 | 40 | 80 | 5
18 | Double grid | 80 | 85 | 50 | 60 | 7

Source: Own elaboration.

According to Zhao et al. [20], a double grid flight is recommended to obtain a more accurate point cloud. In this sense, the flights were made in single (normal) and double grid modes. Flights were carried out at altitudes of 60, 70 and 80 m as indicated by Hussain and Bethel [25]. In this study, longitudinal overlaps between 40% and 60% were defined as suggested by Reshetyuk and Mårtensson [26], while lateral overlaps between 60% and 80% were established as suggested by Harwin and Lucieer [16] and Dall’Asta et al. [27]. Under these conditions, high-quality topographic maps have been reported. Regarding camera tilt, positions close to nadir (90°) were chosen as suggested by Wasklewicz et al. [28] and Bemis et al. [29], together with oblique angular positions between 65° and 85°. Concerning flight speed, values of 5, 7 and 9 m/s were chosen to analyze distortions generated by camera instability at high velocities.

According to Mora-Felix et al. [30], image resolution is related to the GSD. In this study, image resolution values between 2 and 3 cm/pix were considered. These values correspond to the resolutions of the most widely used lightweight cameras installed on RPASs [31]. Once the photogrammetric parameters were established, the mathematical models proposed by Vollgger and Cruden [32] and Mora-Felix et al. [30] were implemented to compute the RPAS flight trajectories and to program the camera shots, which are the parameters required to execute the real-time flight.
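The relation between flight altitude and GSD can be sketched numerically using the standard photogrammetric formula GSD = (sensor width × altitude)/(focal length × image width). The sensor values below are hypothetical examples for a 20 MP one-inch sensor, not the exact specifications of the camera used in the study:

```python
def gsd_cm_per_px(altitude_m, focal_mm, sensor_width_mm, image_width_px):
    """Ground sample distance (cm/pix) from the standard photogrammetric
    relation GSD = (sensor width * altitude) / (focal length * image width)."""
    return (sensor_width_mm * altitude_m * 100.0) / (focal_mm * image_width_px)

# Hypothetical values: 13.2 mm sensor width, 5472 px across, 8.8 mm focal length.
for h in (60, 70, 80):
    print(f"{h} m -> {gsd_cm_per_px(h, 8.8, 13.2, 5472):.2f} cm/pix")
```

As expected, the GSD grows linearly with flight altitude, which is why the 60–80 m altitudes in Table 1 bracket the 2–3 cm/pix range reported above.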

Step 1.3 Control point establishment. A Trimble R10 GNSS receiver was used to set control points on the ground. This high-precision equipment works with the GLONASS, SBAS, Galileo and GPS satellite systems. Each checkpoint was marked with physical signals on the ground, established before the surveys so that they could be identified in the RPAS imagery. Orange crosses were marked on the ground because they contrast with the background and are clearly visible in the images. Control points were distributed throughout the study area as 20 × 20 cm marks to improve the triangulation and the DEM accuracy [33]. The marks were located on physical objects such as drains and crosswalks; in zones without asphalt or concrete, wood fixed to the surface by stakes was used.

Step 1.4 Flight execution. From the photogrammetric parameters, a waypoint file was generated containing the flight instructions for the RPAS to execute. This file contained the GPS coordinates of a group of points representing the photogrammetric flight path to be followed sequentially by the RPAS, together with the flight speed, flight altitude and shooting frequency configuration. Once the waypoint file was loaded, the photogrammetric flight started. Through the navigation system, the RPAS autonomously took position at the starting point and then automatically captured the images according to the previously established sequence. The RPAS flight was monitored from a ground station, and an image dataset was generated.
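A waypoint file of this kind typically encodes a serpentine (boustrophedon) coverage pattern over the survey polygon. The following is a minimal sketch of how such a waypoint list could be generated; the rectangle dimensions, line spacing and altitude are hypothetical, and a real file would also carry speed and shutter-trigger settings:

```python
def serpentine_waypoints(x0, y0, width_m, height_m, line_spacing_m, altitude_m):
    """Generate a boustrophedon (serpentine) waypoint list covering a rectangle.
    Each waypoint is (x, y, altitude); line direction alternates per flight line."""
    waypoints = []
    n_lines = int(height_m // line_spacing_m) + 1
    for i in range(n_lines):
        y = y0 + i * line_spacing_m
        # Even lines fly left-to-right, odd lines right-to-left.
        xs = (x0, x0 + width_m) if i % 2 == 0 else (x0 + width_m, x0)
        waypoints.append((xs[0], y, altitude_m))
        waypoints.append((xs[1], y, altitude_m))
    return waypoints

wps = serpentine_waypoints(0, 0, 400, 300, 25, 70)
print(len(wps), "waypoints,", len(wps) // 2, "flight lines")
```

A double grid survey would simply concatenate this list with a second one generated over the same rectangle with the axes swapped.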

2.4 Second phase: dataset processing

Image dataset processing was performed by SfM [34] and was characterized by two main sub-phases. In the first, the photogrammetric block configuration was performed: the image dataset was calibrated; detection and feature correlation processes were carried out; atypical data were eliminated; image orientation was carried out through aerial triangulation; tie points relating images to each other were identified; and finally the block adjustment was done.

The second sub-phase consisted of the 3D reconstruction. At this stage, a sparse point cloud was generated and then densified. Subsequently, a georeferencing process was performed, and finally a triangular mesh (Delaunay triangulation) was used for the DEM generation. The dataset processing phase is fully described below.

2.5 Subphase one: photogrammetric block configuration

Step 2.1 Camera calibration. Before the flight execution, a camera calibration process was carried out. This calibration is a fundamental requirement for the metrical reconstruction of images in photogrammetry [35] and reduces alterations and noise in the cartographic product. In this study, automatic calibration was implemented as suggested by Fraser [36]. During the camera calibration process, extrinsic and intrinsic parameters were obtained, which are related to the external and internal orientation processes of the DEM. Extrinsic parameters refer to the coordinates and orientation in a reference system; intrinsic parameters refer to the focal length, the principal point of the image and the distortions of the camera lenses. Noise was reduced by adjusting the tangential (p1, p2) and radial (k1, k2, k3) distortion coefficients based on Brown’s distortion model [37].
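Brown's distortion model can be written compactly. The sketch below applies the radial (k1, k2, k3) and tangential (p1, p2) terms to normalized image coordinates; the coefficient values in the example call are illustrative only:

```python
def brown_distort(x, y, k1, k2, k3, p1, p2):
    """Apply Brown's distortion model to normalized image coordinates (x, y).
    Radial terms use k1..k3 on powers of r^2; tangential terms use p1, p2."""
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return x_d, y_d

# Illustrative call: k1 = 0.024, all other coefficients zeroed for clarity.
print(brown_distort(0.5, 0.2, 0.024, 0, 0, 0, 0))
```

Calibration inverts this mapping: the optimizer searches for the coefficient values that make the distorted projections of known points line up with their observed image positions.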

Step 2.2 Feature extraction and correlation. In a photogrammetric survey with RPAS, the dataset of a photogrammetric block corresponds to the images captured during the flight survey and the parameters that correlate them (overlap) with the recorded scene (position and orientation). The photogrammetric block configuration was carried out by implementing detection and feature correlation algorithms. The most widely used detector algorithm in SfM artificial vision systems is the scale invariant feature transform (SIFT) algorithm [38], which was used here to extract image features. The SIFT algorithm generated a set of key points between stereoscopic image pairs that are invariant (robust) to image scale, translation and rotation, and partially invariant to changes in illumination. The sets of key points generated from the images were then processed through a Hamming distance correlation algorithm [34].
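The Hamming distance correlation step can be illustrated with a toy example. Note that SIFT descriptors are usually floating-point vectors matched with Euclidean distance; the binary descriptors below are a simplified stand-in chosen to match the Hamming-distance formulation cited above:

```python
def hamming(a: int, b: int) -> int:
    """Number of differing bits between two binary descriptors."""
    return bin(a ^ b).count("1")

def match_descriptors(left, right, max_distance=10):
    """Nearest-neighbour matching under Hamming distance.
    Returns (index_left, index_right) pairs below the distance threshold."""
    matches = []
    for i, d1 in enumerate(left):
        j, dist = min(enumerate(hamming(d1, d2) for d2 in right),
                      key=lambda t: t[1])
        if dist <= max_distance:
            matches.append((i, j))
    return matches

# Toy 16-bit descriptors: the first pair differs by one bit; the rest are far apart.
left = [0b1010101010101010, 0b1111000011110000]
right = [0b1010101010101011, 0b0000111100001111]
print(match_descriptors(left, right, max_distance=2))  # [(0, 0)]
```

The distance threshold plays the role of the outlier filter mentioned above: candidate correspondences whose best match is still far away are discarded as atypical data.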

Step 2.3 Image orientation. An image orientation process was applied to determine the external orientation parameters (position coordinates and tilt angles) of each image, implemented through aerial triangulation. The exterior orientation elements include the camera position in the coordinates x, y, z and the three camera tilt angles (ω, φ, κ), all relative to the ground coordinate system. The x, y, z coordinates were obtained from the onboard GPS, and the camera angles were measured using the inertial navigation system (INS) and inertial measurement unit (IMU). For absolute orientation, tie points (homologous points) between images were used. In addition, an image alignment process was performed using four ground control points (GCPs) defined at the corners of the model to ensure a more accurate result and to eliminate the bowl (doming) effect in the RPAS data [39]. The extrinsic and intrinsic camera parameters were then adjusted in the block adjustment procedure. Finally, the Levenberg–Marquardt (LM) algorithm, a robust algorithm based on the non-linear least squares method, was used to minimize the re-projection error between the observed and predicted image points [40].

2.6 Subphase two: 3D reconstruction

The 3D reconstruction consisted of four clearly identified steps: sparse point cloud generation, point cloud densification, georeferencing and DEM generation. These steps are described as follows.

Step 2.4 3D point cloud generation. The coordinates of the extracted points were determined and a disparity map was computed by calculating the stereoscopic correspondence of each object. Based on this map, the distance between ground objects and the camera was obtained. The depth z of a point P(x, y, z) in a 3D reconstruction was obtained with equations (1)–(3), considering a triangular geometrical relationship.

(1) \[ I_l:\ \frac{x + b/2}{z} = \frac{x_i}{f} \;\Rightarrow\; x_i = \frac{f}{z}\left(x + \frac{b}{2}\right), \]
(2) \[ I_r:\ \frac{x - b/2}{z} = \frac{x_d}{f} \;\Rightarrow\; x_d = \frac{f}{z}\left(x - \frac{b}{2}\right), \]
(3) \[ d = x_i - x_d = \frac{fb}{z} \;\Rightarrow\; z = \frac{fb}{d}, \]

where b is the baseline, f is the focal length, x_i and x_d are the x-coordinates of the point in the left (I_l) and right (I_r) images, respectively, and d is the disparity between them. Finally, the resulting data form a sparse point cloud referenced to the Universal Transverse Mercator coordinate system, WGS84 zone 13. For this procedure, the Helmert transformation was implemented [41].
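Equations (1)–(3) can be verified with a short round-trip computation: project a ground point into the two images, then recover its depth from the disparity. The focal length, baseline and point coordinates below are hypothetical:

```python
def project_stereo(x, z, f, b):
    """Left/right image x-coordinates of a ground point, per equations (1)-(2)."""
    x_i = f * (x + b / 2) / z
    x_d = f * (x - b / 2) / z
    return x_i, x_d

def depth_from_disparity(x_i, x_d, f, b):
    """Recover depth z from equation (3): d = x_i - x_d = f*b/z  =>  z = f*b/d."""
    d = x_i - x_d
    if d <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return f * b / d

# Round trip with hypothetical values: focal 0.009 m, baseline 30 m, depth 70 m.
x_i, x_d = project_stereo(5.0, 70.0, 0.009, 30.0)
print(depth_from_disparity(x_i, x_d, 0.009, 30.0))  # recovers ~70.0
```

The inverse relation z = fb/d is why depth precision degrades for distant points: a fixed one-pixel disparity error corresponds to a larger depth error as d shrinks.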

Step 2.5 Point cloud densification. This step consisted of extracting a larger number of 3D points from the scene to complement the sparse point cloud generated in the previous stage. Semiglobal matching (SMG) was implemented due to its high precision and reduced computational cost [42]. Once the dense map was available, atypical points (error points) were eliminated. Then, the Delaunay triangulation algorithm was applied to generate the triangular irregular network, in which the elevation values were interpolated (triangulated) to produce a regular matrix in raster format (triangular mesh) [43]. The created mesh was textured by re-projecting the 3D points onto the original images to obtain the corresponding color.

Step 2.6 Georeferencing process. Before the DTM is generated, a georeferencing process is suggested. This process consisted of linking identifiable locations in the images, such as sidewalks, corners and culverts, with the spatially related locations of the GCPs. A polynomial transformation was then generated to adjust these locations to their correct spatial position [44].
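A first-order polynomial (affine) transformation is one common choice for the adjustment described above; the study does not specify the polynomial order, and the GCP pairs below are toy values. The sketch fits the six affine coefficients to image/ground point pairs by least squares:

```python
def fit_affine(src, dst):
    """Fit a first-order polynomial (affine) transform mapping src -> dst points:
    X = a*x + b*y + c,  Y = d*x + e*y + f, via least-squares normal equations."""
    def solve3(A, v):
        # Gauss-Jordan elimination with partial pivoting on a 3x3 system.
        M = [row[:] + [v[i]] for i, row in enumerate(A)]
        for col in range(3):
            piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
            M[col], M[piv] = M[piv], M[col]
            for r in range(3):
                if r != col:
                    factor = M[r][col] / M[col][col]
                    M[r] = [mr - factor * mc for mr, mc in zip(M[r], M[col])]
        return [M[i][3] / M[i][i] for i in range(3)]

    # Normal equations (G^T G) p = G^T t, with design rows (x, y, 1).
    G = [(x, y, 1.0) for x, y in src]
    GtG = [[sum(g[i] * g[j] for g in G) for j in range(3)] for i in range(3)]
    abc = solve3(GtG, [sum(g[i] * X for g, (X, _) in zip(G, dst)) for i in range(3)])
    def_ = solve3(GtG, [sum(g[i] * Y for g, (_, Y) in zip(G, dst)) for i in range(3)])
    return abc, def_

# Toy GCP pairs: a pure translation of (+100, -50).
src = [(0, 0), (10, 0), (0, 10), (10, 10)]
dst = [(100, -50), (110, -50), (100, -40), (110, -40)]
(a, b, c), (d, e, f) = fit_affine(src, dst)
print(round(a, 6), round(c, 6), round(f, 6))
```

With more than three GCP pairs the system is overdetermined, and the least-squares fit distributes the residual error across the points, which is what makes redundant, well-distributed GCPs improve the adjustment.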

Step 2.7 DEM generation. After constructing the triangular mesh, a smoothing model was built to fill the gaps in the surface and repair the errors generated. The Poisson algorithm was implemented for this surface reconstruction [45], which is represented in a DSM. Finally, a DTM was generated with the algorithm proposed by Unger et al. [46], which removes man-made objects (i.e., buildings) and vegetation. Afterwards, the DTM was regularized through the Huber norm.

2.7 Third phase: DEM validation

Step 3.1 RMSE validation. DEM accuracy refers to the proximity of an observed coordinate point to its true value, and the plani-altimetric error must be minimized to obtain a more accurate model [47]. According to the National Standard for Spatial Data Accuracy, DEM errors follow a normal distribution, which is a fair assumption for open terrains [48]. In this study, DEM validation consisted of a comparison between plani-altimetric features and the GNSS points acquired in the study area. Planimetric and altimetric errors were calculated for the 18 DTMs generated. The most common statistic used to describe planimetric and altimetric errors in DEMs is the RMSE [49], which measures the difference between the values predicted by the DEM and the observed values (GNSS points).

The altimetric accuracy (RMSEz) of the DTM was assessed using equation (4). This statistic compares the elevation points obtained with GNSS with those marked on the ground.

(4) \[ \mathrm{RMSE}_{z} = \sqrt{\frac{\sum_{i=1}^{n}\left(Z_{CP,i}-Z_{DTM,i}\right)^{2}}{n}}, \]

In this equation, n is the total number of points measured, Z_DTM is the elevation of the control point marked on the ground and visible in the image, and Z_CP is the elevation of the GCP measured with GNSS. Planimetric validation (RMSExy) was determined from the distance between the GCPs and the marked points (CPs) on the ground for both x and y coordinates, using equation (5):

(5) \[ \mathrm{RMSE}_{xy} = \sqrt{\frac{\sum_{i=1}^{n}\left[\left(X_{CP,i}-X_{DTM,i}\right)^{2}+\left(Y_{CP,i}-Y_{DTM,i}\right)^{2}\right]}{n}} \]

where n is the total number of points measured, X_DTM and Y_DTM are the coordinates of the points in the model, and X_CP and Y_CP are the coordinates of the GNSS validation points.
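Equations (4) and (5) translate directly into code. A minimal sketch with toy checkpoint values:

```python
from math import sqrt

def rmse_z(z_cp, z_dtm):
    """Equation (4): altimetric RMSE between GNSS elevations and DTM elevations."""
    n = len(z_cp)
    return sqrt(sum((zc - zd) ** 2 for zc, zd in zip(z_cp, z_dtm)) / n)

def rmse_xy(cp, dtm):
    """Equation (5): planimetric RMSE over (x, y) coordinate pairs."""
    n = len(cp)
    return sqrt(sum((xc - xd) ** 2 + (yc - yd) ** 2
                    for (xc, yc), (xd, yd) in zip(cp, dtm)) / n)

# Toy checkpoints with a constant 0.3 m vertical bias:
print(rmse_z([10.0, 12.0, 15.0], [10.3, 12.3, 15.3]))  # ≈ 0.3
```

Because the residuals are squared before averaging, a single badly placed checkpoint inflates the RMSE noticeably, which is why outlier CPs are worth inspecting before validation.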

Step 3.2 Statistical analysis. An ANOVA was carried out to measure the effect of RPAS flight parameters on DTM accuracy (α = 0.05). The planimetric (RMSExy) and the altimetric (RMSEz) errors were used as response variables for this statistical analysis. ANOVA identified the flight parameters that showed a significant influence on the response variables. An optimization strategy was then carried out to find out the optimum flight configuration and to obtain the DTM with the best plani-altimetric accuracy.
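For illustration, an ANOVA F statistic can be computed by hand. The sketch below groups the after-georeferencing RMSExy values of Table 5 by flight mode; note this is a simplified one-way version, not the multifactor analysis performed in the study, so its p-value would differ from the one reported in Table 6:

```python
def anova_f(groups):
    """One-way ANOVA F statistic: between-group vs within-group mean squares.
    (The p-value would come from the F distribution; only F is computed here.)"""
    all_vals = [v for g in groups for v in g]
    grand = sum(all_vals) / len(all_vals)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((v - sum(g) / len(g)) ** 2 for g in groups for v in g)
    df_b = len(groups) - 1
    df_w = len(all_vals) - len(groups)
    return (ss_between / df_b) / (ss_within / df_w)

# After-georeferencing RMSExy values (m) from Table 5, grouped by flight mode:
normal = [0.80, 0.71, 0.59, 1.05, 0.63, 0.58, 1.04, 0.79, 0.83]
double = [0.51, 0.96, 0.97, 0.11, 0.13, 0.90, 0.20, 0.41, 0.27]
print(round(anova_f([normal, double]), 2))
```

The group means (0.78 m for normal grid, 0.50 m for double grid) reproduce the main-effect values in Table 6, and the F statistic confirms the between-mode difference is large relative to the within-mode scatter.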

3 Results

3.1 Camera calibration

Servomotor servicing and camera calibration were carried out to achieve the best survey performance and to avoid mechanical failures. A self-calibration process was performed as suggested by Fraser [36]. The camera calibration parameters used in this study are presented in Table 2. The main objective of the calibration process is to solve the radial (k1, k2, k3) and tangential (p1, p2) distortion parameters to reduce uncertainties or errors in the principal point coordinates x and y. This calibration is needed to obtain the most accurate results during the key point detection and correlation processes in stereoscopic pairs. Based on the errors obtained in this study, the calibration process could be considered satisfactory for the 3D reconstruction process.

Table 2

Internal parameters for camera calibration

Parameter | Initial value | Optimized value | Standard error
Focal length (pix) | 3073.41 | 3270.60 | 4.154
Focal length (mm) | 4.861 | 5.173 | 0.007
Principal point x (pix) | 1917.79 | 1958.79 | 0.202
Principal point x (mm) | 3.033 | 3.098 | 0
Principal point y (pix) | 1485.8 | 1449.6 | 1.131
Principal point y (mm) | 2.35 | 2.293 | 0.002
k1 | 0.033 | 0.024 | 0
k2 | −0.036 | −0.085 | 0.002
k3 | 0.078 | −0.001 | 0
p1 | 0 | −0.001 | 0
p2 | −0.001 | 0 | 0

Source: Own elaboration using Python 3.6.5 (Python Software Foundation, Wilmington, Delaware, USA, 2011).

3.2 Ground control points and checkpoints

In this study, 51 points were established, from which 27 points (GCPs) were used for georeferencing and 24 points (checkpoints, CPs) were used for validation. Figure 3 shows the GCPs and CPs distributed in the study area.

Figure 3 Location of GCPs in the study area. Source: Own elaboration using QGIS 3.10 La coruña (Open Source Geospatial Foundation, Beaverton, USA, 2020).

3.3 Photogrammetric project definition (execution)

Table 3 shows the GSD, frames, areas covered and flight lines obtained in the photogrammetric surveys (treatments) conducted for image acquisition. Each treatment generated an average of 328 images, with a mean GSD of 2.53 cm/pix. Approximately 6,000 photographs were taken and 6.3 km were covered. It is noteworthy that a greater number of flight lines was registered in the last nine treatments, which were carried out under double grid flight mode; the number of flight lines thus depended on the image overlap and the flight mode. It is also noteworthy that the number of frames acquired under normal grid flight mode was significantly smaller than that acquired in the double grid flights.

Table 3

GSD, frames, areas covered and flight lines obtained in the photogrammetric surveys

Treatment | GSD (cm/pix) | Frames | Area covered (km²) | Flight lines
1 | 2.38 | 153 | 0.125 | 11
2 | 2.08 | 215 | 0.158 | 15
3 | 2.55 | 331 | 0.165 | 12
4 | 2.74 | 143 | 0.178 | 13
5 | 2.39 | 228 | 0.179 | 19
6 | 2.31 | 150 | 0.164 | 10
7 | 3.08 | 144 | 0.184 | 9
8 | 2.78 | 143 | 0.183 | 11
9 | 2.15 | 256 | 0.211 | 16
10 | 2.31 | 881 | 0.228 | 51
11 | 2.01 | 385 | 0.181 | 26
12 | 2.48 | 526 | 0.222 | 25
13 | 2.67 | 469 | 0.191 | 37
14 | 2.42 | 377 | 0.119 | 22
15 | 2.51 | 541 | 0.278 | 20
16 | 2.97 | 362 | 0.256 | 27
17 | 2.81 | 358 | 0.224 | 38
18 | 2.95 | 257 | 0.278 | 32

Source: Own elaboration.

3.4 Image processing

Figure 4 shows the result of the keypoint extraction process for treatment 13, obtained by overlapping a pair of images. The figure shows that most of the keypoints were correctly matched, even in areas of dense vegetation, where erroneous matches are possible due to the presence of similar local texture.

Figure 4 Keypoint detection, feature correlations and stereo pair assembling during image processing with SIFT algorithm. Source: Own elaboration using SIFT algorithm in Python 3.6.5 (Python Software Foundation, Wilmington, Delaware, USA, 2011).

During the 3D reconstruction, the algorithm identified a mean of 37,336 keypoints per image (Table 4). Although the number of keypoints per image was similar in all treatments, the number of match points found in double grid treatments was greater than that observed in normal grid treatments. This could explain why the number of points obtained in the 3D densification process was greater when the double grid flight mode was used.

Table 4

Number of key points identified and densification points obtained during 3D reconstruction

Treatment | Key points per image | Matches | Key point matches (2 images) | (3 images) | (5 images) | 3D densification points
1 | 42,631 | 3,292 | 149,079 | 14,587 | 5,831 | 4,612,634
2 | 40,427 | 4,956 | 337,297 | 58,225 | 8,974 | 13,803,790
3 | 44,123 | 5,213 | 34,568 | 78,456 | 1,545 | 15,236,211
4 | 27,127 | 2,620 | 139,375 | 17,660 | 1,636 | 7,350,091
5 | 30,016 | 4,304 | 282,627 | 58,711 | 11,183 | 15,668,844
6 | 32,222 | 5,111 | 232,080 | 46,436 | 8,969 | 11,727,625
7 | 30,254 | 3,254 | 130,117 | 13,232 | 7,675 | 4,508,682
8 | 45,220 | 6,889 | 322,167 | 57,582 | 8,083 | 10,323,861
9 | 42,858 | 9,245 | 599,267 | 144,033 | 28,962 | 19,171,283
10 | 38,538 | 2,450 | 640,172 | 130,146 | 20,372 | 47,736,416
11 | 38,747 | 5,395 | 641,806 | 88,598 | 10,246 | 17,061,978
12 | 37,485 | 5,789 | 678,796 | 87,456 | 10,547 | 16,756,778
13 | 39,226 | 6,129 | 735,884 | 135,396 | 21,601 | 25,307,000
14 | 34,819 | 5,944 | 589,488 | 104,079 | 14,635 | 19,089,412
15 | 38,795 | 6,456 | 478,978 | 124,568 | 15,478 | 17,845,681
16 | 37,256 | 6,919 | 816,464 | 143,055 | 18,955 | 25,585,938
17 | 36,675 | 7,056 | 755,720 | 134,660 | 22,464 | 23,427,473
18 | 35,645 | 7,014 | 831,254 | 156,231 | 45,783 | 22,134,654

Source: Own elaboration using VisualSFM [62].

3.5 Planimetric (RMSExy) and altimetric (RMSEz) accuracies

The real coordinates (x, y, z) were obtained with the GNSS system. These points were compared with the CPs marked on the ground and observed in the digital model.

Figure 5 shows a map where planimetric and altimetric differences are observed in the generated DTM. Figure 5a shows a cross-section with a distance of 117 m, where a slope is observed. In this figure, three validation points are present: CP17, CP16 and CP14. The elevation differences observed between the DTM and GNSS measurements were less than 0.15 m. Figure 5b shows the horizontal errors in CP16. In this figure, the horizontal difference between the DTM and GNSS measurements was less than 0.02 m. Due to the differences found, equations (4) and (5) were used to obtain the planimetric (RMSExy) and altimetric (RMSEz) accuracies for each photogrammetric survey.

Figure 5 Planimetric and altimetric differences in the generated DTM. (a) Elevation differences observed in a cross-section. (b) Planimetric difference identified in GCP 16. Source: Own elaboration using QGIS 3.10 La coruña (Open Source Geospatial Foundation, Beaverton, USA, 2020).

3.6 Statistical analysis

A georeferencing process was implemented on the DTMs obtained. Twenty-seven GCPs were used as georeferencing points, and the accuracy of the georeferenced DTMs was then evaluated using the 24 points established as CPs. Table 5 shows the plani-altimetric accuracy obtained in the eighteen surveys before and after the georeferencing process. The planimetric accuracy (RMSExy) before georeferencing ranged from 1.46 to 9.01 m. This high variation was reduced after implementing the georeferencing process: RMSExy after georeferencing ranged from 0.11 to 1.05 m. The same occurred for the altimetric accuracy (RMSEz); lower RMSEz values (higher altimetric accuracies) were registered after georeferencing. According to the statistical analysis, the georeferencing process produced a significant error reduction in both planimetric and altimetric accuracy (p < 0.05).

Table 5

Plani-altimetric accuracies obtained in all treatments

Treatment | RMSExy before (m) | RMSEz before (m) | RMSExy after (m) | RMSEz after (m)
1 | 1.46 | 14.64 | 0.80 | 0.75
2 | 1.92 | 8.88 | 0.71 | 0.49
3 | 9.01 | 15.15 | 0.59 | 0.97
4 | 2.71 | 32.51 | 1.05 | 1.33
5 | 2.03 | 27.45 | 0.63 | 1.57
6 | 2.44 | 27.28 | 0.58 | 0.67
7 | 6.75 | 29.00 | 1.04 | 1.06
8 | 1.66 | 26.2 | 0.79 | 0.89
9 | 1.74 | 24.12 | 0.83 | 0.63
10 | 4.34 | 17.45 | 0.51 | 0.41
11 | 3.71 | 3.07 | 0.96 | 0.61
12 | 6.72 | 7.39 | 0.97 | 0.92
13 | 3.00 | 3.59 | 0.11 | 0.38
14 | 3.88 | 11.07 | 0.13 | 0.35
15 | 5.51 | 5.31 | 0.90 | 1.02
16 | 3.90 | 2.87 | 0.20 | 0.40
17 | 6.19 | 7.09 | 0.41 | 0.62
18 | 4.13 | 10.52 | 0.27 | 0.44

Source: Own elaboration.

ANOVA was carried out to assess the effect of the photogrammetric flight parameters on the plani-altimetric accuracy of the DTMs (Table 6). Based on the statistical analysis performed, longitudinal overlap and flight mode were the flight parameters with a significant influence on planimetric accuracy. According to the main effect analysis, lower RMSExy values were registered when higher percentages of longitudinal overlap were used. Likewise, higher planimetric accuracies were observed when the double grid flight mode was used.

Table 6

ANOVA and main effects of flight parameters on plani-altimetric accuracy

ANOVA p-values

Flight parameter | p-value (RMSExy) | p-value (RMSEz)
Flight mode | 0.014* | 0.028*
Flight altitude (m) | 0.156 | 0.441
Camera tilt (°) | 0.522 | 0.766
Long. overlap (%) | 0.012* | 0.224
Trans. overlap (%) | 0.169 | 0.402
Flight speed | 0.325 | 0.138

*p-values < 0.05 are significant.

Main effects (mean RMSExy / mean RMSEz)

Flight mode: Normal 0.78 / 0.93; Double 0.50 / 0.58
Flight altitude: 60 m 0.76 / 0.69; 70 m 0.57 / 0.89; 80 m 0.59 / 0.68
Camera tilt: 65° 0.62 / 0.73; 75° 0.61 / 0.76; 85° 0.69 / 0.78
Long. overlap: 40% 0.83 / 0.83; 50% 0.62 / 0.81; 60% 0.47 / 0.62
Trans. overlap: 60% 0.63 / 0.65; 70% 0.77 / 0.85; 80% 0.52 / 0.77
Flight speed: 5 m/s 0.61 / 0.71; 7 m/s 0.58 / 0.61; 9 m/s 0.72 / 0.94

Source: Own elaboration using Statgraphics Centurion 18 (Statgraphics Inc., The Plains, USA, 2020).

Statistical analysis of altimetric accuracy showed that only flight mode was statistically significant (p < 0.05). Lower RMSEz values were observed in DTMs generated from double grid surveys.
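The ANOVA and main-effects analysis summarized in Table 6 can be sketched as follows. This is an illustrative re-implementation (the authors used Statgraphics Centurion), and the assignment of RMSEz values to flight-mode levels below is hypothetical, not taken from the paper's design matrix:

```python
import numpy as np
from scipy.stats import f_oneway

# Hypothetical RMSEz values (m) grouped by flight-mode level
rmse_by_mode = {
    "normal": [0.93, 1.06, 0.89, 1.02, 0.92, 0.75],
    "double": [0.49, 0.61, 0.38, 0.40, 0.44, 0.35],
}

# One-way ANOVA: does mean RMSEz differ between flight modes?
f_stat, p_value = f_oneway(*rmse_by_mode.values())
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# Main effect of a level = mean response at that level, as in Table 6
for level, values in rmse_by_mode.items():
    print(f"mean RMSEz ({level} grid): {np.mean(values):.2f}")
```

A full replication would run one such test per factor of the L18 array, with the treatments grouped by that factor's levels.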

3.7 Optimization (the most accurate DTM generated)

According to the statistical analysis, treatment 13 showed the highest plani-altimetric accuracy. Under these flight operational conditions, values of 0.11 and 0.38 m were achieved for RMSExy and RMSEz, respectively. In both cases, the digital models were adjusted by the georeferencing process using the GCPs. In contrast, treatment 4 presented the lowest accuracy (RMSExy = 1.05 m and RMSEz = 1.33 m). The resulting DTM shows neither artifacts nor unnatural terrain features, and it represents the real morphology of the terrain.

4 Discussion

4.1 Effect of the image overlapping

Figure 6 shows the number of overlapping images in treatments 13 and 4. Treatment 4, generated with the normal grid flight mode (Figure 6b), has more red and yellow areas than treatment 13, generated with the double grid (Figure 6a). The red and yellow zones are located mostly on the margins and are characterized by the overlap of only one to three images, whereas the green zones indicate that five or more images overlapped. Therefore, DEM accuracy could be highly related to the number of overlapping images, as suggested by Dandois et al. [50].

Figure 6 Images overlapped in DEMs generated in treatments 13 and 4. Source: Own elaboration using Pix4D (Pix4D SA, Lausanne, Switzerland, 2011). (a) Overlapped images in treatment 13, (b) overlapped images in treatment 4.

According to Miřijovský and Langhammer [33], the greater the number of intersected photographs, the greater the precision of the 3D reconstruction process. This is demonstrated in Figure 7: lower RMSEz values were obtained when more key points were identified (Figure 7a), and higher altimetric accuracy was obtained when more match points were correlated (Figure 7b). Altimetric accuracy was also correlated with the number of frames: treatments with more photographs (double grid) showed higher altimetric accuracies (Figure 7c). Therefore, the altimetric accuracy of a DEM depended on the densification points obtained during the 3D reconstruction process, as also demonstrated by Zhao et al. [20].
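The correlations plotted in Figure 7 are ordinary Pearson correlations between per-treatment reconstruction statistics and RMSEz. A minimal sketch with invented values (the actual key-point and match counts are not reproduced here):

```python
from scipy.stats import pearsonr

# Hypothetical per-treatment values: matched points from the SfM step
# versus the altimetric error of the resulting DEM (illustrative only)
matched_points = [1.2e6, 2.5e6, 3.1e6, 4.0e6, 5.2e6, 6.8e6]
rmse_z = [1.33, 1.02, 0.89, 0.63, 0.49, 0.38]

r, p = pearsonr(matched_points, rmse_z)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
# A negative r means denser matching goes with lower altimetric error
```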

Figure 7 Relationship between altimetric accuracy and 3D reconstruction process. (a) Correlation between altimetric accuracy and keypoints identified. (b) Correlation between altimetric accuracy and matched points. (c) Correlation between altimetric accuracy and frames acquired. Source: Own elaboration using Statgraphics Centurion 18 (Statgraphics Inc., The Plains, USA, 2020).

In this study, high accuracy was achieved because images were acquired from different perspectives. However, it is important to determine the optimum overlap that achieves the desired accuracy without incurring long processing times: altimetric precision must be balanced against the number of intersected photographs, and the stereoscopic multivision effect must be achieved while keeping computation times manageable. The more points that are correlated in the sparse point cloud, the denser the point cloud generated, with better detail and therefore greater accuracy.
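The overlap-versus-processing-time tradeoff can be made concrete with standard flight-planning arithmetic. The camera values below are nominal DJI Phantom 4 Pro figures and the survey block size is invented; treat all numbers as assumptions for illustration:

```python
import math

# Nominal Phantom 4 Pro camera (1-inch sensor); assumed values
altitude = 80.0                        # flight altitude above ground (m)
focal = 8.8e-3                         # focal length (m)
sensor_w, sensor_h = 13.2e-3, 8.8e-3   # sensor dimensions (m)
pixels_w = 5472                        # image width (px)

# Standard photogrammetric relations
gsd = altitude * (sensor_w / pixels_w) / focal  # ground sample distance (m/px)
footprint_w = altitude * sensor_w / focal       # footprint across track (m)
footprint_h = altitude * sensor_h / focal       # footprint along track (m)

long_overlap, trans_overlap = 0.60, 0.80
photo_base = footprint_h * (1 - long_overlap)     # spacing between exposures (m)
line_spacing = footprint_w * (1 - trans_overlap)  # spacing between lines (m)

# Images needed for a hypothetical 500 m x 300 m block (single grid)
area_w, area_l = 500.0, 300.0
n_lines = math.ceil(area_w / line_spacing) + 1
n_per_line = math.ceil(area_l / photo_base) + 1
print(f"GSD ~ {gsd * 100:.1f} cm/px, ~{n_lines * n_per_line} images")
```

Raising the longitudinal overlap from 60% to 80% halves the photo base and roughly doubles the image count, which is exactly the processing-time cost discussed above; a double grid doubles it again.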

4.2 Effect of the georeferencing process

The effect of the georeferencing process on the plani-altimetric accuracy of the DTM is shown in Figure 8; plani-altimetric accuracy is strongly dependent on this process. The Box and Whisker plots show the planimetric (Figure 8a) and altimetric (Figure 8b) accuracies before and after the georeferencing process. In general, a high error is observed in the DTMs generated before georeferencing. According to the results obtained in this study, georeferencing is a critical process for improving the accuracy of DTMs, so technological alternatives for it must be developed. The complexity of terrestrial areas and the time-consuming establishment of GCPs on the ground represent a major opportunity area in RPAS photogrammetry. Georeferencing is increasingly being handled onboard the RPAS through direct georeferencing systems [51]; in the future, this will replace the use of GCPs.
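Conceptually, georeferencing with GCPs amounts to fitting a transform that maps model coordinates onto the surveyed GCP coordinates (photogrammetric suites such as Pix4D do this inside a full bundle adjustment, which is considerably more involved). A minimal sketch of the idea, fitting a 2D similarity (Helmert) transform to hypothetical GCP pairs:

```python
import numpy as np

def fit_similarity_2d(src, dst):
    """Least-squares 2D similarity (Helmert) transform mapping src -> dst.

    src, dst: (N, 2) arrays of matching coordinates (model vs GCP).
    Returns scale s, rotation matrix R (2x2), translation t (2,).
    """
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(0), dst.mean(0)
    a, b = src - mu_s, dst - mu_d
    # Procrustes-style solution via SVD of the cross-covariance
    U, S, Vt = np.linalg.svd(a.T @ b)
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:           # keep a proper rotation
        Vt[-1] *= -1
        R = (U @ Vt).T
    s = S.sum() / (a**2).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

# Hypothetical check: recover a known transform from 3 GCP pairs
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
dst = 2.0 * src + np.array([10.0, 20.0])   # scale 2, no rotation
s, R, t = fit_similarity_2d(src, dst)
print(round(s, 6), np.round(t, 6))
```

With redundant GCPs (here 27), the least-squares fit also yields per-point residuals, which is how the improvement reported in Table 5 is quantified.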

Figure 8 Box and Whisker plots showing the importance of the georeferencing process on the DTM accuracy. (a) The effect of georeferencing on planimetric accuracy. (b) The effect of georeferencing on altimetric accuracy. Source: Own elaboration using Statgraphics Centurion 18 (Statgraphics Inc., The Plains, USA, 2020).

Luhmann et al. [52] suggest using at least 20 GCPs to achieve high precision in DEMs; in this study, 27 GCPs were used for georeferencing. Few studies have addressed the effect of the georeferencing process on DEM accuracy, and more in-depth studies are needed to determine the optimal number of GCPs for generating DEMs with high accuracy.

4.3 Effect of the photogrammetric flight parameters

Planimetric accuracy varied from 0.11 to 1.05 m, similar to the results of other photogrammetric studies: Seifert et al. [53] reported a planimetric accuracy range of 0.10 to 0.15 m, while Matthews [54] reported 0.08 to 0.40 m. The highest planimetric accuracies in this study were obtained using the double grid flight mode. This coincides with the result of Zhao et al. [20], who found that double and triple overlapping concentrates a large number of intersected pixels, which increases the planimetric accuracy of the DEM. The stereoscopic multivision and 360° perspective given by the multiple intersections of photographs increased DEM accuracy. SfM algorithms performed better when a high-redundancy block configuration was used, under which a higher correlation of overlapping features across multiple photographs was observed.

Rabah et al. [51] suggest implementing a high longitudinal overlap, but also a high transversal overlap, to obtain high DEM accuracy. In particular, Westoby et al. [55] and Eisenbeiss [56] report that high percentages of longitudinal and transversal overlap result in low RMSE. However, the results obtained in this work indicate that transversal overlap did not have a significant effect on planimetric accuracy (p > 0.05).

Mesas-Carrascosa et al. [39] set flight altitudes of 30, 40, 50, 60, 70 and 80 m for the production of orthomosaics for archaeological applications and concluded that flight altitude is the main flight parameter because of its influence on RMSExy. In this study, however, low flight altitudes (<60 m) were discarded because they would imply processing a large number of images and, consequently, long computation times. Flight altitudes of 60, 70 and 80 m were used for the photogrammetric surveys, and no significant effect on RMSExy was found (p > 0.05). Mesas-Carrascosa et al. [39] also suggested that a camera angle close to nadir (90°) is required to obtain high planimetric accuracy, but this study showed that camera tilt was not a significant flight parameter for planimetric accuracy (p > 0.05).

Some studies have suggested that certain configurations are determinant for altimetric accuracy. Henriques et al. [57] suggested that transversal overlap has a significant influence on the density of the point cloud created for DEM generation. However, the results obtained in this research indicate that transversal overlap did not have a statistically significant effect on altimetric accuracy (p > 0.05).

Rabah et al. [51] report that the nadir view (90°) generates many occlusions in the image data set. According to Dandois et al. [50], steep hills and natural or artificial structures, such as walls or buildings, generate hidden areas in the images. Both studies suggest that this effect could be reduced with camera tilt or a greater GSD. In this study, the ANOVA showed that camera tilt did not have a significant effect on DTM accuracy (p > 0.05); however, greater DTM accuracy was observed when surveys were carried out with high obliquity (65°).

Likewise, Torres-Sánchez et al. [58] indicate that high flight altitudes deteriorate DEM accuracy; in this study, altimetric accuracy was not influenced by flight altitude (p > 0.05). According to the statistical analysis, the double grid flight mode was the only determinant flight parameter for altimetric accuracy. The altimetric accuracy obtained in this study can be considered high according to Udin and Ahmad [59], who reported altimetric accuracies ranging from 0.045 to 0.52 m for DEM generation.

4.4 Field validation

The performance of the optimum flight parameters was evaluated under different topographical conditions. The Culiacan Institute of Technology was chosen since its terrain has steeper slopes (more than 50%). Figure 9 shows a regularly distributed network of 20 GCPs, of which 12 were designated for georeferencing (indicated with an asterisk in Table 7) and 8 were designated as CPs for DTM validation. These points were marked on the ground so that they could be easily distinguished from the background during the processing of the aerial images. The entire methodology proposed in this study was used for image acquisition, data processing and DTM validation.

Figure 9 GCPs and CPs established for DEM validation at Culiacan Institute of Technology. Source: Own elaboration using QGIS 3.10 A Coruña (Open Source Geospatial Foundation, Beaverton, USA, 2020).

Table 7

Comparison between DTM and CPs coordinates obtained for plani-altimetric accuracy during validation

Point   XDTM        YDTM        ZDTM     XCP         YCP         ZCP      XDIF     YDIF     ZDIF
0       257478.27   2743268.2   83.08    257478.27   2743268.2   83.01    0.00     −0.01    0.07
1*      257454.50   2743350.6   69.85    257454.51   2743350.6   69.85    −0.01    −0.02    0.00
2*      257560.20   2743427.1   62.75    257560.19   2743427.1   62.50    0.01     0.00     0.25
3       257577.91   2743491.8   59.99    257577.90   2743491.9   59.15    0.01     −0.03    0.84
4       257669.43   2743519.7   62.37    257669.42   2743519.7   62.24    0.01     0.00     0.13
5*      257463.66   2743522.4   58.86    257463.65   2743522.4   58.75    0.01     0.00     0.11
6       257361.43   2743582.3   59.84    257361.41   2743582.3   59.20    0.02     0.00     0.64
7       257466.15   2743684.6   57.51    257466.25   2743684.5   57.15    −0.10    0.09     0.36
8*      257456.88   2743742.7   55.50    257456.85   2743742.7   55.45    0.03     0.02     0.05
9*      257561.26   2743742.2   59.44    257561.25   2743742.2   59.15    0.01     0.01     0.29
10      257636.88   2743745.2   61.93    257636.86   2743745.3   61.65    0.02     −0.01    0.28
11*     257713.77   2743757.5   64.96    257713.77   2743757.6   64.55    0.00     −0.01    0.41
12      257772.19   2743739.9   68.57    257772.18   2743739.9   68.05    0.01     0.00     0.52
13*     257785.62   2743652.8   67.22    257785.58   2743652.8   66.55    0.04     −0.02    0.67
14*     257712.54   2743634.3   65.25    257712.55   2743634.3   65.05    −0.01    0.00     0.20
15      257682.38   2743690.5   64.64    257682.38   2743690.5   64.21    0.00     0.00     0.43
16*     257820.25   2743553.9   67.23    257820.23   2743553.9   67.10    0.02     0.00     0.23
17*     257832.23   2743475.5   64.77    257832.22   2743475.5   65.12    0.01     −0.01    −0.35
18*     257864.94   2743341.8   67.72    257864.93   2743341.8   67.56    0.01     0.00     0.16
19*     257755.61   2743386.5   65.12    257755.60   2743386.6   64.90    0.01     −0.02    0.22
* GCPs used for georeferencing. Source: Own elaboration. Coordinates extracted from QGIS (Open Source Geospatial Foundation, Beaverton, USA, 2020).

According to Table 7, the RMSEz and RMSExy obtained were 0.37 and 0.05 m, respectively. These results were very similar to those obtained for the DTM generated for the Tamazula river. Figure 10 shows the DTM generated for the Culiacan Institute of Technology. In analytical and modern photogrammetry performed with manned aircraft, the elevations in a study area should not vary by more than 5% with respect to the flight altitude [60]; under this criterion, the scale differences (GSD) in the images should not affect the accuracy of the final digital model. In the field validation, the DTM elevation ranged from 54 to 102 m. In this sense, the methodology proposed in our study was validated and can be used in zones with different topography, in particular areas with steeper slopes. However, the methodology must also be evaluated in areas with high vegetation density [61]; understanding the effect of vegetation density on the proposed methodology and the digital models generated is an opportunity area identified in our study.

Figure 10 Contour lines and DTM generated during field validation. Source: Own elaboration using QGIS 3.10 A Coruña (Open Source Geospatial Foundation, Beaverton, USA, 2020).

5 Conclusions

This research provides a low-cost, high-precision and easy-to-operate DEM acquisition and construction methodology using RPAS photogrammetry, useful for regions without sufficient resources to develop this type of cartographic product. The DTM accuracy obtained demonstrates that the proposed methodology is comparable with other technologies that are expensive and require considerable knowledge and experience to operate. The proposed methodology standardizes all phases of DEM generation, from image acquisition using RPAS to DEM construction, considering the best available algorithms for image processing.

Using a Taguchi orthogonal array L18, 18 surveys were carried out to evaluate six flight parameters and to determine the optimal conditions under which a photogrammetric flight should be programmed to generate a DTM with high plani-altimetric accuracy. This methodology produced a set of georeferenced and non-georeferenced models; the results indicate that the georeferencing process normalized all the models, yielding models with similarly high plani-altimetric accuracy. This study demonstrated a significant reduction in variation when this process is included as part of the DEM generation methodology. The georeferencing process is especially important because it largely overrides the effect of almost every flight parameter considered in the DEM construction methodology.

This study also demonstrated that flight mode had a statistically significant effect on both the planimetric and altimetric accuracy of the DTM. In particular, double grid surveys generated DTMs with higher plani-altimetric accuracy. In addition, planimetric accuracy was influenced by longitudinal overlap; the results suggest a high longitudinal overlap (at least 60%) to achieve high planimetric accuracy.

Acknowledgments

This research was funded by CONACYT, Mexico, Cátedras 2014 Ref. 2572. Authors acknowledge the support given by Culiacan Institute of Technology for the construction of the Remote Sensing Lab.

  1. Limitations: This study proposes an RPAS flight configuration for generating DTMs with high precision and accuracy. A quadcopter was used to perform low-altitude flights, where terrain elevation variations are minimal compared with those observed in rugged mountains. In addition, quadcopters have limited battery capacity; the use of an autonomous battery recharging and swapping system is therefore suggested to enable longer flight times and coverage of larger areas.

  2. Further research: Digital models generated in this study could be used for hydrological modeling. The recent literature demonstrates that hydrological modeling is carried out by using low-resolution satellite images, with errors ranging from 1 to 5 m. The use of high-resolution digital models could generate hydrological scenarios with millimeter precision. In this sense, the methodology proposed could be considered as an alternative technology useful for the development of risk mitigation strategies (i.e., in flood risk areas).

  3. Conflict of interest: The authors declare no conflict of interest.

References

[1] Zweig CL, Burgess MA, Percival HF, Kitchens WM. Use of unmanned aircraft systems to delineate fine-scale wetland vegetation communities. Wetlands. 2015;35:303–9. doi:10.1007/s13157-014-0612-4

[2] Lane SN, Chandler JH, Porfiri K. Monitoring river channel and flume surfaces with digital photogrammetry. J Hydraul Eng. 2001;127:871–7. doi:10.1061/(ASCE)0733-9429(2001)127:10(871)

[3] Javernick L, Brasington J, Caruso B. Modeling the topography of shallow braided rivers using structure-from-motion photogrammetry. Geomorphology. 2014;213:166–82. doi:10.1016/j.geomorph.2014.01.006

[4] Brasington J, Vericat D, Rychkov I. Modeling river bed morphology, roughness, and surface sedimentology using high resolution terrestrial laser scanning. Water Resour Res. 2012;48:1–18. doi:10.1029/2012WR012223

[5] Devereux B, Amable G. Airborne LiDAR: instrumentation, data acquisition and handling. In: Heritage GL, Large ARG, editors. Laser Scanning for the Environmental Sciences. 1st edn. Hoboken, USA: Blackwell Publishing Ltd; 2009. p. 49–66. doi:10.1002/9781444311952.ch4

[6] Hugenholtz CH, Whitehead K, Brown OW, Barchyn TE, Brian JM, LeClair A, et al. Geomorphological mapping with a small unmanned aircraft system (sUAS): feature detection and accuracy assessment of a photogrammetrically-derived digital terrain model. Geomorphology. 2013;194:16–24. doi:10.1016/j.geomorph.2013.03.023

[7] Shahbazi M, Sohn G, Théau J, Menard P. Development and evaluation of a UAV-photogrammetry system for precise 3D environmental modeling. Sensors. 2015;15:27493–524. doi:10.3390/s151127493

[8] Uysal M, Toprak AS, Polat N. DEM generation with UAV photogrammetry and accuracy analysis in Sahitler hill. Measurement. 2015;73:539–43. doi:10.1016/j.measurement.2015.06.010

[9] Eltner A, Kaiser A, Abellan A, Schindewolf M. Time lapse structure-from-motion photogrammetry for continuous geomorphic monitoring. Earth Surf Process Landf. 2017;42:2240–53. doi:10.1002/esp.4178

[10] Clapuyt F, Vanacker V, Van Oost K. Reproducibility of UAV-based earth topography reconstructions based on structure-from-motion algorithms. Geomorphology. 2016;260:4–15. doi:10.1016/j.geomorph.2015.05.011

[11] Leitão JP, Moy de Vitry M, Scheidegger A, Rieckermann J. Assessing the quality of digital elevation models obtained from mini unmanned aerial vehicles for overland flow modelling in urban areas. Hydrol Earth Syst Sci. 2016;20:1637–53. doi:10.5194/hess-20-1637-2016

[12] Nouwakpo SK, Weltz MA, McGwire K. Assessing the performance of structure-from-motion photogrammetry and terrestrial LiDAR for reconstructing soil surface microtopography of naturally vegetated plots. Earth Surf Process Landf. 2015;4:308–22. doi:10.1002/esp.3787

[13] Colomina I, Molina P. Unmanned aerial systems for photogrammetry and remote sensing: a review. ISPRS J Photogramm Remote Sens. 2014;92:79–97. doi:10.1016/j.isprsjprs.2014.02.013

[14] Gabrlik P, Vomocil J, Zalud L. The design and implementation of 4 DOF control of the quadrotor. In: 12th IFAC Conference on Programmable Devices and Embedded Systems, vol. 46; 2013. p. 68–73. doi:10.3182/20130925-3-CZ-3023.00047

[15] Lopez-Gutierrez R, Rodriguez-Mata A, Salazar S, Gonzalez-Hernandez I, Lozano R. Robust quadrotor control: attitude and altitude real-time results. J Intell Robot Syst. 2017;88:299–312. doi:10.1007/s10846-017-0520-y

[16] Hirschmüller H. Stereo processing by semiglobal matching and mutual information. IEEE Trans Pattern Anal Mach Intell. 2008;30:328–41. doi:10.1109/TPAMI.2007.1166

[17] Smith MW, Carrivick JL, Quincey DJ. Structure from motion photogrammetry in physical geography. Prog Phys Geogr. 2016;40:247–75. doi:10.1177/0309133315615805

[18] Barazzetti L, Brumana R, Oreni D, Previtali M, Roncoroni F. True-orthophoto generation from UAV images: implementation of a combined photogrammetric and computer vision approach. ISPRS Ann Photogramm Remote Sens Spat Inf Sci. 2014;II-5:57–63. doi:10.5194/isprsannals-II-5-57-2014

[19] Harwin S, Lucieer A. Assessing the accuracy of georeferenced point clouds produced via multi-view stereopsis from unmanned aerial vehicle (UAV) imagery. Remote Sens. 2012;4:1573–99. doi:10.3390/rs4061573

[20] Zhao H, Zhang B, Shang J, Lui J. Aerial photography flight quality assessment with GPS/INS and DEM data. ISPRS J Photogramm. 2018;135:60–73. doi:10.1016/j.isprsjprs.2017.10.015

[21] Fonstad MA, Dietrich JT, Courville BC, Jensen JL, Carbonneau P. Topographic structure from motion: a new development in photogrammetric measurement. Earth Surf Process Landf. 2013;38:421–30. doi:10.1002/esp.3366

[22] Furukawa Y, Curless B, Seitz S, Szeliski R. Towards Internet-scale multiview stereo. In: 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, CA, USA, 13–18 June 2010. IEEE; 2010. doi:10.1109/CVPR.2010.5539802

[23] Marteau B, Vericat D, Gibbins C, Batalla R, Green D. Application of structure-from-motion photogrammetry to river restoration. Earth Surf Process Landf. 2016;42:503–15. doi:10.1002/esp.4086

[24] Rentería-Guevara SA, Sanhouse-García A, Bustos-Terrones Y, Rodriguez-Mata AE, Rangel-Peraza JG. A proposal to integrate the legal definition and official delineation of watersheds in Mexico: eight model case studies. Rev Ambient Água. 2019;14:1–21. doi:10.4136/ambi-agua.2198

[25] Hussain M, Bethel J. Photogrammetric project and mission planning. In: McGlone JC, Lee GYG, editors. Manual of Photogrammetry. 6th edn. Bethesda, USA: ASPRS; 2013. p. 1187–220.

[26] Reshetyuk Y, Mårtensson S. Generation of highly accurate digital elevation models with unmanned aerial vehicles. Photogram Rec. 2016;31:143–65. doi:10.1111/phor.12143

[27] Dall’Asta E, Forlani G, Roncella R, Santise M, Diotri F, Morra di Cella U. Unmanned aerial systems and DSM matching for rock glacier monitoring. ISPRS J Photogramm. 2017;127:102–14. doi:10.1016/j.isprsjprs.2016.10.003

[28] Wasklewicz T, Staley DM, Reavis K, Oguchi T. Digital terrain modeling. In: Bishop MP, editor. Treatise on Geomorphology, vol. 3. London, UK: Academic Press; 2013. p. 130–61. doi:10.1016/B978-0-12-374739-6.00048-8

[29] Bemis S, Micklethwaite S, Turner D, James MR, Akciz S, Thiele S, et al. Ground-based and UAV-based photogrammetry: a multiscale, high-resolution mapping tool for structural geology and paleoseismology. J Struct Geol. 2014;69:163–78. doi:10.1016/j.jsg.2014.10.007

[30] Mora-Felix ZD, Rangel-Peraza JG, Sanhouse-Garcia AJ, Flores-Colunga GR, Rodríguez-Mata AE, Bustos-Terrones YA. The use of RPAS for the development of land surface models for natural resources management: a review. Interdiscip Environ Rev. 2018;19:243–65. doi:10.1504/IER.2018.10016741

[31] Pajares G. Overview and status of remote sensing applications based on unmanned aerial vehicles (UAVs). Photogramm Eng Rem S. 2015;81:281–330. doi:10.14358/PERS.81.4.281

[32] Vollgger SA, Cruden AR. Mapping folds and fractures in basement and cover rocks using UAV photogrammetry, Cape Liptrap and Cape Paterson, Victoria, Australia. J Struct Geol. 2016;85:168–87. doi:10.1016/j.jsg.2016.02.012

[33] Miřijovský J, Langhammer J. Multitemporal monitoring of the morphodynamics of a mid-mountain stream using UAS photogrammetry. Remote Sens. 2015;7:8586–609. doi:10.3390/rs70708586

[34] Pei H, Wan P, Li C, Feng H, Yang G, Xu B, et al. Accuracy analysis of UAV remote sensing imagery mosaicking based on structure-from-motion. In: 2017 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Fort Worth, USA, 23–28 July 2017. IEEE; 2017. p. 5904–7. doi:10.1109/IGARSS.2017.8128353

[35] Nex F, Remondino F. UAV for 3D mapping applications: a review. Appl Geomat. 2013;6:1–15. doi:10.1007/s12518-013-0120-x

[36] Fraser C. Automatic camera calibration in close range photogrammetry. Photogramm Eng Rem S. 2013;79:381–8. doi:10.14358/PERS.79.4.381

[37] Turner D, Lucieer A, Wallace L. Direct georeferencing of ultrahigh-resolution UAV imagery. IEEE Trans Geosci Remote Sens. 2014;52:2738–45. doi:10.1109/TGRS.2013.2265295

[38] Lowe DG. Object recognition from local scale-invariant features. In: Proceedings of the Seventh IEEE International Conference on Computer Vision, Kerkyra, Greece, 20–27 September 1999, vol. 2. IEEE; 1999. p. 1150–7. doi:10.1109/ICCV.1999.790410

[39] Mesas-Carrascosa F, Notario García M, Meroño de Larriva J, García-Ferrer A. An analysis of the influence of flight parameters in the generation of unmanned aerial vehicle (UAV) orthomosaicks to survey archaeological areas. Sensors. 2016;16:1838. doi:10.3390/s16111838

[40] Zhou L, Yang X. Training algorithm performance for image classification by neural networks. Photogramm Eng Remote Sens. 2010;76:945–51. doi:10.14358/PERS.76.8.945

[41] Turner D, Lucieer A, Watson C. An automated technique for generating georectified mosaics from ultra-high resolution unmanned aerial vehicle (UAV) imagery, based on structure from motion (SfM) point clouds. Remote Sens. 2012;4:1392–410. doi:10.3390/rs4051392

[42] Hirschmüller H. Accurate and efficient stereo processing by semi-global matching and mutual information. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Diego, USA, 20–25 June 2005, vol. 2. IEEE; 2005. p. 807–14. doi:10.1109/CVPR.2005.56

[43] Legrá-Lobaina AA, Atanes-Beaton DM, Guilarte-Fuentes C. Contribución al método de interpolación lineal con triangulación de Delaunay. Min Geol. 2014;30:58–72.

[44] Sanhouse-Garcia A, Bustos-Terrones Y, Rangel-Peraza J, Quevedo-Castro A, Pacheco C. Multi-temporal analysis for land use and land cover changes in an agricultural region using open source tools. Remote Sens Appl Soc Environ. 2017;8:278–90. doi:10.1016/j.rsase.2016.11.002

[45] Kazhdan M, Bolitho M, Hoppe H. Poisson surface reconstruction. In: Polthier K, Sheffer A, editors. Proceedings of the Fourth Eurographics Symposium on Geometry Processing, Cagliari, Italy, 26–28 June 2006. Aire-la-Ville, Switzerland: Eurographics Association; 2006. p. 61–70.

[46] Unger M, Pock T, Grabner M, Klaus A, Bischof H. A variational approach to semiautomatic generation of digital terrain models. In: 5th International Symposium on Visual Computing, Las Vegas, USA, 30 November–2 December 2009, vol. 5876. Berlin, Germany: Springer; 2009. p. 1119–30. doi:10.1007/978-3-642-10520-3_107

[47] Smith MW, Vericat D. From experimental plots to experimental landscapes: topography, erosion and deposition in sub-humid badlands from structure-from-motion photogrammetry. Earth Surf Process Landf. 2015;40:1656–71. doi:10.1002/esp.3747

[48] Aguilar FJ, Aguilar MA, Agüera F. Accuracy assessment of digital elevation models using a non-parametric approach. Int J Geogr Inf Sci. 2007;21:667–86. doi:10.1080/13658810601079783

[49] Wang B, Shi W, Liu E. Robust methods for assessing the accuracy of linear interpolated DEM. Int J Appl Earth Obs Geoinf. 2015;34:198–206. doi:10.1016/j.jag.2014.08.012

[50] Dandois JP, Olano M, Ellis EC. Optimal altitude, overlap, and weather conditions for computer vision UAV estimates of forest structure. Remote Sens. 2015;7:13895–920. doi:10.3390/rs71013895

[51] Rabah M, Basiouny M, Ghanem E, Elhadary A. Using RTK and VRS in direct geo-referencing of the UAV imagery. NRIAG J Astron Geophys. 2018;7:220–6. doi:10.1016/j.nrjag.2018.05.003

[52] Luhmann T, Fraser C, Maas H-G. Sensor modelling and camera calibration for close-range photogrammetry. ISPRS J Photogramm Remote Sens. 2016;115:37–46. doi:10.1016/j.isprsjprs.2015.10.006

[53] Seifert E, Seifert S, Vogt H, Drew D, van Aardt J, Kunneke A, et al. Influence of drone altitude, image overlap, and optical sensor resolution on multi-view reconstruction of forest images. Remote Sens. 2019;11:1252. doi:10.3390/rs11101252

[54] Matthews NA. Aerial and close-range photogrammetric technology: providing resource documentation, interpretation, and preservation. Technical Note 428. Denver, Colorado, USA: US Department of the Interior, Bureau of Land Management; 2008. p. 42.

[55] Westoby MJ, Brasington J, Glasser NF, Hambrey MJ, Reynolds JM. Structure-from-motion photogrammetry: a low-cost, effective tool for geoscience applications. Geomorphology. 2012;179:300–14. doi:10.1016/j.geomorph.2012.08.021

[56] Eisenbeiss H. UAV Photogrammetry [Doctor of Sciences thesis]. Dresden, Germany: University of Technology Dresden; 2009.

[57] Henriques M, Fonseca A, Roque D, Lima J, Marnoto J. Assessing the quality of an UAV-based orthomosaic and surface model of a breakwater. In: Proceedings of FIG Congress 2014, Kuala Lumpur, Malaysia, 16–21 June 2014. New Delhi, India: Coordinates; 2014. p. 1–16.

[58] Torres-Sánchez J, López-Granados F, Borra-Serrano I, Peña JM. Assessing UAV-collected image overlap influence on computation time and digital surface model accuracy in olive orchards. Precis Agric. 2018;19:115–33. doi:10.1007/s11119-017-9502-0

[59] Udin W, Ahmad A. Assessment of photogrammetric mapping accuracy based on variation flying altitude using unmanned aerial vehicle. IOP Conf Ser Earth Environ Sci. 2014;18:012027. doi:10.1088/1755-1315/18/1/012027

[60] Lerma J. Fotogrametría moderna: analítica y digital. Spain: Universitat Politècnica de València; 2002.

[61] Aktürk E, Altunel A. Accuracy assessment of a low-cost UAV derived digital elevation model (DEM) in a highly broken and vegetated terrain. Measurement. 2018;136:382–6. doi:10.1016/j.measurement.2018.12.101

[62] Wu C. VisualSFM: A Visual Structure from Motion System; 2011. http://www.cs.washington.edu/homes/ccwu/vsfm/

Received: 2019-10-31
Revised: 2020-07-16
Accepted: 2020-07-17
Published Online: 2020-10-14

© 2020 Zuriel Dathan Mora-Felix et al., published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
