Original papers
SimTreeLS: Simulating aerial and terrestrial laser scans of trees

https://doi.org/10.1016/j.compag.2021.106277

Highlights

  • The presented software tool simulates LiDAR scans of trees in orchard and forestry settings.

  • Tree shape definitions can be easily customised or generated to suit the user's application.

  • Custom parameters and trajectories allow the simulation of a range of experiments.

  • Outputs are evaluated for their similarity to real data, with promising results.

  • Several application areas are explored to justify the use of simulated data.

Abstract

There are numerous emerging applications for digitizing trees using terrestrial and aerial laser scanning, particularly in the fields of tree crop agriculture and forestry. Interpretation of LiDAR point clouds is increasingly relying on data-driven methods (such as supervised machine learning) that rely on large quantities of hand-labelled data. As this data is potentially expensive to capture, and difficult to clearly visualise and label manually, a means of supplementing real LiDAR scans with simulated data is becoming a necessary step in realising the potential of these methods. We present an open source tool, SimTreeLS (Simulated Tree Laser Scans), for generating point clouds which simulate scanning with user-defined sensor, trajectory, tree shape and layout parameters. Upon simulation, material classification is kept in a pointwise fashion so leaf and woody matter are perfectly known, and unique identifiers separate individual trees, foregoing post-simulation labelling. This allows for an endless supply of procedurally generated data with similar characteristics to real LiDAR captures, which can then be used for development of data processing techniques or training of machine learning algorithms. To validate our method, we compare the characteristics of a simulated scan with a real scan using similar trees and the same sensor and trajectory parameters. Results suggest the simulated data is significantly more similar to real data than a sample-based control. We also demonstrate application of SimTreeLS on contexts beyond the real data available, simulating scans of new tree shapes, new trajectories and new layouts, with results presenting well. SimTreeLS is available as an open source resource built on publicly available libraries.
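The abstract above describes per-point labelling: every simulated return carries a material class (leaf or wood) and a unique tree identifier alongside its coordinates. As a minimal sketch of what such a record might look like (the field names and encodings here are hypothetical, not SimTreeLS's actual output schema):

import numpy as np

# One labelled return per record: position, capture time, material class and tree id.
# Field names and encodings are illustrative only.
labelled_point = np.dtype([
    ("x", np.float64), ("y", np.float64), ("z", np.float64),  # point position (m)
    ("t", np.float64),                                         # capture timestamp (s)
    ("material", np.uint8),                                    # e.g. 0 = wood, 1 = leaf
    ("tree_id", np.uint32),                                    # unique identifier per tree
])

cloud = np.zeros(4, dtype=labelled_point)  # placeholder cloud of four points
print(cloud["material"], cloud["tree_id"])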

Introduction

LiDAR scanning is a useful tool for reality capture in various fields. In orchards, Rosell and Sanz (2012) showed that LiDAR is well suited to rapidly capturing the geometric properties of trees, and Wu et al. (2020) confirmed its suitability, in both terrestrial and aerial contexts, for analysing tree crop structures. Scanning allows extraction of tree parameters which are useful for growth analysis, including woody matter detection (Vicari et al., 2019, Su et al., 2019, Westling et al., 2020b), porosity (Pfeiffer et al., 2018), and leaf area density or distribution (Béland et al., 2011, Béland et al., 2014, Sanz et al., 2018). Beyond simple tree parameters, LiDAR scanning also enables detailed investigations of real trees, such as orchard mapping (Underwood et al., 2016, Reiser et al., 2018), analysis of the light environment of a tree (Westling et al., 2018), and historical yield analysis (Colaço et al., 2019). Gené-Mola et al. (2019) were even able to detect fruit using intensity returns from a terrestrial LiDAR scanner, with comparable results and several advantages over vision systems. Similar applications are found in forestry, where geometric analysis of trees is also of interest. Detection of woody matter (Ma et al., 2016a, Ma et al., 2016b) and estimation of leaf area density (Van der Zande et al., 2011) are of interest here too, as are further areas including robot navigation (Lalonde et al., 2006) and forest inventory (Bauwens et al., 2016). Terrestrial laser scanning is used extensively in commercial forests for plot-level inventory (Liang et al., 2016), which is then used for validation and training of large-scale inventory based on aerial LiDAR capture (Kato et al., 2009, Wang et al., 2020, Cao et al., 2019, Almeida et al., 2019). LaRue et al. (2020) demonstrate that aerial LiDAR captures slightly less detail than the terrestrial equivalent, but is well suited to macro scales. In both fields, however, LiDAR capture involves time-intensive scanning operations, and the captured data can be difficult to process.

There is value in analysing trees in silico, which makes it possible to achieve perfect digitization, generate large datasets, and perform physically challenging or destructive operations. Yang et al. (2016) and Arikapudi et al. (2015) digitized trees thoroughly for light interception analysis and geometric modelling respectively. Others have generated virtual trees using algorithmic growth (for example, the Functional-Structural Plant Modelling presented by White and Hanan, 2012, White and Hanan, 2016), which allows study of how a tree will develop under different pruning or growth decisions. Sometimes the approach is to generate computer models of particular trees: for instance, Da Silva et al. (2014) and Tang et al. (2015) performed light interception efficiency analyses using computer models of apple and peach trees respectively, while Tang et al. (2019) generated virtual loquat trees to design an optimal plant canopy shape. Beyond simple computer modelling, Tao et al. (2015) used Physically Based Ray Tracing to simulate terrestrial LiDAR scans, based on an approach previously used by Côté et al. (2009). Some of these methods generate perfect data while others approximate real scanning, and both approaches have value in different application areas.

Recently, there has been growing interest in deep learning on point clouds, as reviewed by Guo et al. (2019). For general point cloud applications, a variety of approaches have been developed, from the multi-view convolutional neural networks presented by Su et al. (2015) to the dense contextual networks of Liu et al. (2019). However, many methods primarily operate on small or perfectly sampled point clouds (Wu et al., 2015, Qi et al., 2017), and tree scans are typically neither. Kumar et al. (2019) identified trees as distinct from other object types in point clouds captured by mobile laser scanning, with a total accuracy of 95.2%. In forestry, Windrim and Bryson (2018) and Xi et al. (2018) perform tree classification using fully connected 3D CNNs. In orchards, Majeed et al. (2020) used deep learning to segment plant matter from its supporting trellis; however, deep learning in this field is typically applied to imagery rather than LiDAR (Bargoti et al., 2015, Apolo-Apolo et al., 2020). A significant obstacle to deep learning on large-scale point clouds is the difficulty of acquiring large quantities of data labelled by human experts. Modern deep learning architectures contain hundreds of thousands to millions of trainable parameters in order to make accurate and robust inferences, and such models demand tens to hundreds of thousands of training examples to realise the potential of this complexity; providing all of these labels manually on real data quickly becomes infeasible. LiDAR scanning also typically involves a trade-off between quality and capture speed, with faster captures being sparser or more occluded and thus harder to label (Westling et al., 2020b).

Simulation of realistic data is a viable candidate for solving the labelling issue. Many deep learning applications, in particular standard datasets and early point cloud works, use point-sampled meshes to generate point clouds on which to learn (for example, Wu et al. (2015) and Qi et al. (2017)), though such data does not realistically reproduce the effects of laser scanning an object. Developing this further, Wang et al. (2019) and Goodin et al. (2019) have shown that neural networks can be trained comparably well using simulated LiDAR data. More generally in machine learning, transfer learning allows a model to be trained with a smaller dataset by first training on a similar one, though there are complexities (Zhuang et al., 2020). Nezafat et al. (2019) were able to transfer features learned by a pre-trained model to a different data setting, using images generated by projection of LiDAR data. This suggests that realistic simulated data which is perfectly labelled could be used to pre-train models, significantly reducing the need for manual labelling. The same approach can also bridge changes in setting, for instance from aerial to terrestrial LiDAR.
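To make the pre-training idea concrete, the following sketch (not the authors' pipeline) pre-trains a small point-wise classifier on plentiful simulated labels and then fine-tunes it on a much smaller hand-labelled set. The arrays here are random placeholders standing in for per-point features with leaf/wood labels, and the model is an arbitrary small network assumed only for illustration.

import numpy as np
import torch
import torch.nn as nn

def make_split(n, n_features=8, seed=0):
    # Placeholder data: per-point feature vectors with a toy leaf/wood label.
    rng = np.random.default_rng(seed)
    x = rng.normal(size=(n, n_features)).astype(np.float32)
    y = (x[:, 0] + 0.1 * rng.normal(size=n) > 0).astype(np.int64)
    return torch.from_numpy(x), torch.from_numpy(y)

def train(model, x, y, epochs, lr):
    # Simple full-batch training loop.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()

model = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))

# Pre-train on plentiful simulated data, then fine-tune on scarce real labels.
x_sim, y_sim = make_split(20000, seed=0)
train(model, x_sim, y_sim, epochs=50, lr=1e-3)

x_real, y_real = make_split(200, seed=1)
train(model, x_real, y_real, epochs=20, lr=1e-4)  # smaller step: adapt, don't overwrite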

We present a software tool, SimTreeLS, for generating simulated LiDAR scans with realistic sensor parameters, trajectories and results. Other tools with similar aims, such as SIMLIDAR by Mendez et al. (2012), have been presented, though not at the same scale or level of detail. SimTreeLS is specifically designed to create simulated scans of trees for the development of applications in tree crops and forestry; it supports a wide variety of tree shapes and sensor types, and can be used to simulate ground-based, handheld and aerial mobile LiDAR. We describe how SimTreeLS works and present experimental results showing its viability as a source of simulated LiDAR data.
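To give a sense of the inputs involved, the sketch below lists the kinds of parameters such a simulation takes: sensor characteristics, a platform trajectory, and orchard layout. The names and values are hypothetical, chosen only to illustrate the idea, and are not SimTreeLS's actual configuration keys.

from dataclasses import dataclass, field

@dataclass
class SensorParams:
    range_m: float = 100.0          # maximum return range
    angular_res_deg: float = 0.25   # beam spacing within a scan line
    scan_rate_hz: float = 20.0      # scan lines per second
    fov_deg: float = 270.0          # field of view of each scan line

@dataclass
class ScanConfig:
    sensor: SensorParams = field(default_factory=SensorParams)
    trajectory: list = field(default_factory=lambda: [(0.0, 0.0, 1.5), (0.0, 50.0, 1.5)])  # waypoints (m)
    speed_m_s: float = 1.0          # platform speed along the trajectory
    row_spacing_m: float = 5.0      # layout: distance between tree rows
    tree_spacing_m: float = 3.0     # layout: distance between trees within a row

terrestrial = ScanConfig()          # a pass along one row at walking pace
aerial = ScanConfig(trajectory=[(0.0, 0.0, 30.0), (0.0, 50.0, 30.0)], speed_m_s=5.0)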

Section snippets

Method

SimTreeLS is designed to be extensible to a range of situations. In this section, we describe how the system is set up, from defining tree shape and organisation through to simulating the scanning process itself. We then explain the validation experiments with which we demonstrate the suitability and capabilities of SimTreeLS for use as a data generation tool. The open source libraries Comma and Snark (Australian Centre for Field Robotics, 2012) are used to perform basic operations and
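As a rough illustration of the scanning step (a crude stand-in, not the authors' implementation, which builds on the Comma and Snark libraries), the sketch below casts beams from a sensor pose into a labelled point model and keeps, for each beam, the nearest model point within a small perpendicular tolerance, so per-point labels survive the simulated scan. It assumes only numpy and ignores beam divergence, noise and true surface intersection.

import numpy as np

def simulate_beam(origin, direction, model_xyz, beam_radius=0.05, max_range=50.0):
    """Return the index of the model point hit by one beam, or None for no return."""
    rel = model_xyz - origin                      # vectors from sensor to model points
    along = rel @ direction                       # distance of each point along the beam
    perp = np.linalg.norm(rel - np.outer(along, direction), axis=1)
    hits = np.where((along > 0) & (along < max_range) & (perp < beam_radius))[0]
    if hits.size == 0:
        return None
    return hits[np.argmin(along[hits])]           # nearest hit occludes everything behind it

# Toy labelled model: a vertical "trunk" (wood) and a scattered "canopy" (leaf).
rng = np.random.default_rng(0)
trunk = np.column_stack([np.zeros(200), np.zeros(200), np.linspace(0.0, 2.0, 200)])
canopy = rng.normal([0.0, 0.0, 3.0], 0.8, size=(2000, 3))
model_xyz = np.vstack([trunk, canopy])
material = np.array([0] * len(trunk) + [1] * len(canopy))   # 0 = wood, 1 = leaf

# One terrestrial sensor pose with a fan of beams in the x-z plane.
origin = np.array([-5.0, 0.0, 1.5])
returns = []
for angle in np.radians(np.arange(-30.0, 60.0, 0.5)):
    direction = np.array([np.cos(angle), 0.0, np.sin(angle)])
    idx = simulate_beam(origin, direction, model_xyz)
    if idx is not None:
        returns.append((model_xyz[idx], material[idx]))
print(f"{len(returns)} returns, of which {sum(m for _, m in returns)} are leaf")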

Results

In this section, we present the results of basic SimTreeLS operation.

Discussion

In this section we discuss the results presented in the previous section, and suggest areas for future work. Generally, the outputs of SimTreeLS present point clouds which are similar in structure and form to those generated by real LiDAR, and are easy to customise for a particular application.

The differences between real and virtual trees can generally be characterised as differences in tree shape and in noise characteristics. One of the main causes of discrepancies is our current inability to

Conclusion

We presented a system, SimTreeLS, for generating simulated LiDAR scans of procedurally generated trees in agricultural and forestry contexts. Validation experiments have shown that the generated data is similar in nature to real LiDAR scans, and several capabilities have been explored and visualised.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgements

This work is supported by the Australian Centre for Field Robotics (ACFR) at The University of Sydney. For more information about robots and systems for agriculture at the ACFR, please visit http://sydney.edu.au/acfr/agriculture.

References (57)

  • D. Reiser et al.

    Iterative individual plant clustering in maize with assembled 2d lidar data

    Comput. Ind.

    (2018)
  • J.R. Rosell et al.

    A review of methods and applications of the geometric characterization of tree crops in agricultural activities

    Comput. Electron. Agric.

    (2012)
  • R. Sanz et al.

    Lidar and non-lidar-based canopy parameters to estimate the leaf area in fruit trees and vineyard

    Agric. For. Meteorol.

    (2018)
  • L. Tang et al.

    Light interception efficiency analysis based on three-dimensional peach canopy models

    Ecol. Informat.

    (2015)
  • S. Tao et al.

    A geometric method for wood-leaf separation using terrestrial and simulated lidar data

    Photogramm. Eng. Remote Sens.

    (2015)
  • J.P. Underwood et al.

    Mapping almond orchard canopy volume, flowers, fruit and yield using lidar and vision sensors

    Comput. Electron. Agric.

    (2016)
  • F. Westling et al.

    Light interception modelling using unstructured lidar data in avocado orchards

    Comput. Electron. Agric.

    (2018)
  • F. Westling et al.

    Replacing traditional light measurement with lidar based methods in orchards

    Comput. Electron. Agric.

    (2020)
  • O.E. Apolo-Apolo et al.

    A cloud-based environment for generating yield estimation maps from apple orchards using uav imagery and a deep learning technique

    Front. Plant Sci.

    (2020)
  • Arikapudi, R., Vougioukas, S., Saracoglu, T., 2015. Orchard tree digitization for structural-geometrical modeling. In:...
  • Australian Centre for Field Robotics (ACFR), 2012. Comma and snark: generic c++ libraries and utilities for robotics....
  • S. Bargoti et al.

    A pipeline for trunk detection in trellis structured apple orchards

    J. Field Robot.

    (2015)
  • S. Bauwens et al.

    Forest inventory with terrestrial lidar: A comparison of static and hand-held mobile laser scanning

    Forests

    (2016)
  • M. Bosse et al.

    Zebedee: Design of a spring-mounted 3-d range sensor with application to mobile mapping

    IEEE Trans. Rob.

    (2012)
  • L. Cao et al.

    Comparison of uav lidar and digital aerial photogrammetry point clouds for estimating forest structural attributes in subtropical planted forests

    Forests

    (2019)
  • A.F. Colaço et al.

    Spatial variability in commercial orange groves. part 2: relating canopy geometry to soil attributes and historical yield

    Precision Agric.

    (2019)
  • Côté, J.F., Widlowski, J.L., Fournier, R.A., Verstraete, M.M., 2009. The structural and radiative consistency of...
  • Diestel, W., 2003. Arbaro—tree generation for...