SimTreeLS: Simulating aerial and terrestrial laser scans of trees
Introduction
LiDAR scanning is a useful tool for reality capture in various fields. In orchards, Rosell and Sanz (2012) showed that LiDAR is well suited to rapidly capturing the geometric properties of trees, and Wu et al. (2020) confirmed its suitability in both terrestrial and aerial contexts for analysing tree crop structures. Scanning allows extraction of tree parameters useful for growth analysis, including woody matter detection (Vicari et al., 2019, Su et al., 2019, Westling et al., 2020b), porosity (Pfeiffer et al., 2018), and leaf area density or distribution (Béland et al., 2011, Béland et al., 2014, Sanz et al., 2018). Beyond simple tree parameters, LiDAR scanning also enables detailed investigations of real trees, such as orchard mapping (Underwood et al., 2016, Reiser et al., 2018), analysis of the light environment of a tree (Westling et al., 2018), and historical yield analysis (Colaço et al., 2019). Gené-Mola et al. (2019) were even able to detect fruit using intensity returns from a terrestrial LiDAR scanner, with comparable results and several advantages over vision systems.

Similar applications are found in forestry, where geometric analysis of trees is also of interest. Tree parameters such as woody matter detection (Ma et al., 2016a, Ma et al., 2016b) and leaf area density (Van der Zande et al., 2011) are studied here as well, alongside further interest areas including robot navigation (Lalonde et al., 2006) and forest inventory (Bauwens et al., 2016). Terrestrial laser scanning is used extensively in commercial forests for plot-level inventory (Liang et al., 2016), which in turn provides validation and training data for inventory over large scales based on aerial LiDAR capture (Kato et al., 2009, Wang et al., 2020, Cao et al., 2019, Almeida et al., 2019). LaRue et al. (2020) demonstrated that aerial LiDAR captures slightly less detail than its terrestrial equivalent, but is well suited to macro-scales.
In both fields, however, LiDAR capture involves time-intensive scanning operations, and the captured data can be difficult to process.
There is value and interest in analysing trees in silico, which allows perfect digitization, generation of large datasets, and physically challenging or destructive operations. Yang et al. (2016) and Arikapudi et al. (2015) digitized trees thoroughly for light interception analysis and geometric modelling respectively. Others have generated virtual trees using algorithmic growth (for example the Functional-Structural Plant Modelling presented by White and Hanan, 2012, White and Hanan, 2016), which allows study of how a tree will develop under different pruning or growth decisions. Sometimes the approach is to generate computer models of particular trees: Da Silva et al. (2014) and Tang et al. (2015) performed light interception efficiency analyses using computer models of apple and peach trees respectively, while Tang et al. (2019) generated virtual loquat trees to design an optimal plant canopy shape. Beyond simple computer modelling, Tao et al. (2015) used Physically Based Ray Tracing to simulate terrestrial LiDAR scans, building on an approach previously used by Côté et al. (2009). Some of these methods generate perfect data while others approximate real scanning, and both approaches have value in different application areas.
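As a rough illustration of the ray-tracing approach to scan simulation (a minimal sketch, not the method of Tao et al. (2015) or Côté et al. (2009)), a simulated scanner can cast rays over a grid of beam angles and record intersections with scene geometry. Here a single sphere stands in for a tree crown, and all function and parameter names are hypothetical:

```python
import numpy as np

def simulate_scan(origin, sphere_center, sphere_radius,
                  n_azimuth=180, n_elevation=30):
    """Cast rays in a spherical beam pattern from `origin` and return the
    hit points on a sphere used as a crude stand-in for a tree crown."""
    az = np.linspace(0, 2 * np.pi, n_azimuth, endpoint=False)
    el = np.linspace(-np.pi / 6, np.pi / 6, n_elevation)
    az, el = np.meshgrid(az, el)
    # Unit ray direction for each (azimuth, elevation) beam.
    d = np.stack([np.cos(el) * np.cos(az),
                  np.cos(el) * np.sin(az),
                  np.sin(el)], axis=-1).reshape(-1, 3)
    oc = origin - sphere_center
    # Ray-sphere intersection: t^2 + 2(d.oc)t + |oc|^2 - r^2 = 0 for unit d.
    b = d @ oc
    disc = b ** 2 - (oc @ oc - sphere_radius ** 2)
    hit = disc >= 0
    t = -b[hit] - np.sqrt(disc[hit])  # nearest intersection along each ray
    t_pos = t > 0                      # keep hits in front of the sensor
    return origin + t[t_pos, None] * d[hit][t_pos]

points = simulate_scan(np.array([0.0, 0.0, 1.5]),   # sensor position
                       np.array([5.0, 0.0, 2.0]),   # crown centre
                       1.0)                          # crown radius
```

Unlike point-sampling a surface, this produces the occlusion and view-dependent density characteristic of a real scan: only the side of the sphere facing the sensor receives returns.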
Recently, there has been growing interest in deep learning on point clouds, as reviewed by Guo et al. (2019). For general point cloud applications, a variety of approaches have been developed, from the multi-view convolutional neural networks presented by Su et al. (2015) to the dense contextual networks of Liu et al. (2019). However, many methods primarily operate on small or perfectly sampled point clouds (Wu et al., 2015, Qi et al., 2017), and tree scans are typically neither. Kumar et al. (2019) identified trees as distinct from other object types in point clouds captured by mobile laser scanning, with a total accuracy of 95.2%. In forestry, Windrim and Bryson (2018) and Xi et al. (2018) perform tree classification using fully connected 3D CNNs. In orchards, Majeed et al. (2020) used deep learning to segment plant matter from its supporting trellis; however, deep learning here is typically applied to imagery rather than LiDAR (Bargoti et al., 2015, Apolo-Apolo et al., 2020). A significant obstacle to deep learning on large-scale point clouds is the difficulty of acquiring large quantities of data labelled by human experts. Modern deep learning architectures contain hundreds of thousands to millions of trainable parameters in order to make accurate and robust inference. These models demand tens to hundreds of thousands of training examples to realise the potential of this complexity, and providing all of these training labels manually on real data can quickly become infeasible. LiDAR scanning typically involves a trade-off between quality and capture speed, with faster captures being sparser or more occluded and thus harder to label (Westling et al., 2020b).
Simulation of realistic data is a viable candidate for solving the labelling issue. Many deep learning applications, in particular standard datasets and early point cloud works, use point-sampled meshes to generate point clouds on which to learn (for example, Wu et al. (2015) and Qi et al. (2017)), though such data does not realistically reproduce the effect of laser scanning an object. Developing this further, Wang et al. (2019) and Goodin et al. (2019) have shown that neural networks can be trained comparably well using simulated LiDAR data. More generally in machine learning, transfer learning allows learners to reduce the required dataset size by first training on a similar set, though there are complexities (Zhuang et al., 2020). Nezafat et al. (2019) were able to transfer features learned by a pre-trained model to a different data setting, using images generated by projection of LiDAR data. This suggests that realistic simulated data, which is perfectly labelled, could be used to pre-train models, significantly reducing the need for manual labelling. This can also be applied to changes in setting, for instance comparing aerial to terrestrial LiDAR.
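The point-sampled-mesh approach mentioned above can be sketched as follows (a minimal, hypothetical example, not the pipeline of any cited work): points are drawn uniformly from mesh triangles in proportion to their area, which yields an evenly sampled cloud with none of the range-dependent density or occlusion of a real scan:

```python
import numpy as np

def sample_mesh_surface(vertices, faces, n_points, rng=None):
    """Draw `n_points` uniformly from the surface of a triangle mesh.

    vertices: (V, 3) array of vertex positions
    faces:    (F, 3) array of vertex indices per triangle
    """
    rng = np.random.default_rng(rng)
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    # Triangle areas weight the per-face sampling probability.
    areas = 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0), axis=1)
    face_idx = rng.choice(len(faces), size=n_points, p=areas / areas.sum())
    # Random barycentric coordinates, folded back inside the triangle.
    u, v = rng.random((2, n_points))
    flip = u + v > 1.0
    u[flip], v[flip] = 1.0 - u[flip], 1.0 - v[flip]
    a, b, c = v0[face_idx], v1[face_idx], v2[face_idx]
    return a + u[:, None] * (b - a) + v[:, None] * (c - a)

# Sample 1000 points from a unit square built from two triangles.
verts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=float)
faces = np.array([[0, 1, 2], [0, 2, 3]])
cloud = sample_mesh_surface(verts, faces, 1000, rng=0)
print(cloud.shape)  # (1000, 3)
```

Every face is sampled regardless of visibility, which is exactly why such clouds fail to capture the sparsity and occlusion of real LiDAR data.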
We present a software tool, SimTreeLS, for generating simulated LiDAR scans with realistic sensor parameters, trajectories and results. Other tools with similar aims, such as SIMLIDAR by Mendez et al. (2012), have been presented, though not at the same scale or level of detail. SimTreeLS is specifically designed to create simulated scans of trees for the development of applications in tree crops and forestry; it supports a wide variety of tree shapes and sensor types, and can simulate ground-based, handheld and aerial mobile LiDAR. We describe how SimTreeLS works and present experimental results showing its viability as a simulated source of LiDAR data.
Section snippets
Method
SimTreeLS is designed to be extensible to a range of situations. In this section, we describe how the system is set up, from defining tree shape and organisation through to simulating the scanning process itself. We then explain the validation experiments with which we demonstrate the suitability and capabilities of SimTreeLS for use as a data generation tool. The open source libraries Comma and Snark (Australian Centre for Field Robotics, 2012) are used to perform basic operations and …
Results
In this section, we present the results of basic SimTreeLS operation.
Discussion
In this section we discuss the results presented in the previous section, and suggest areas for future work. Generally, the outputs of SimTreeLS present point clouds which are similar in structure and form to those generated by real LiDAR, and are easy to customise for a particular application.
The differences between real and virtual trees can generally be characterised as differences in tree shape and noise characteristics. One of the main causes of discrepancies is our current inability to …
Conclusion
We presented a system, SimTreeLS, for generating simulated LiDAR scans of procedurally generated trees in agricultural and forestry contexts. Validation experiments have shown that the generated data is similar in nature to real LiDAR scans, and several capabilities have been explored and visualised.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Acknowledgements
This work is supported by the Australian Centre for Field Robotics (ACFR) at The University of Sydney. For more information about robots and systems for agriculture at the ACFR, please visit http://sydney.edu.au/acfr/agriculture.
References (57)
- et al. Monitoring the structure of forest restoration plantations with a drone-lidar system. Int. J. Appl. Earth Obs. Geoinf. (2019)
- et al. Estimating leaf area distribution in savanna trees from terrestrial lidar measurements. Agric. For. Meteorol. (2011)
- et al. A model for deriving voxel-level tree leaf area density estimates from ground-based lidar. Environ. Model. Softw. (2014)
- et al. Light interception efficiency of apple trees: a multiscale computational study based on MAppleT. Ecol. Model. (2014)
- et al. Fruit detection in an apple orchard using a mobile terrestrial laser scanner. Biosyst. Eng. (2019)
- et al. Terrestrial laser scanning in forest inventories. ISPRS J. Photogramm. Remote Sens. (2016)
- et al. Simulation of tree point cloud based on the ray-tracing algorithm and three-dimensional tree model. Biosyst. Eng. (2020)
- et al. Deep learning based segmentation for automated training of apple trees on trellis wires. Comput. Electron. Agric. (2020)
- et al. SIMLIDAR – simulation of LiDAR performance in artificially simulated orchards. Biosyst. Eng. (2012)
- et al. Mechatronic terrestrial LiDAR for canopy porosity and crown surface estimation. Comput. Electron. Agric. (2018)