How to make sense of 3D representations for plant phenotyping: a compendium of processing and analysis techniques

Abstract

Computer vision technology is moving more and more towards a three-dimensional approach, and plant phenotyping is following this trend. However, despite its potential, the complexity of the analysis of 3D representations has been the main bottleneck hindering the wider deployment of 3D plant phenotyping. In this review we provide an overview of typical steps for the processing and analysis of 3D representations of plants, to offer potential users of 3D phenotyping a first gateway into its application, and to stimulate its further development. We focus on plant phenotyping applications where the goal is to measure characteristics of single plants or crop canopies on a small scale in research settings, as opposed to large scale crop monitoring in the field.

Introduction

Plant phenotyping, the quantitative measurement and assessment of plant features, is at the forefront of plant research, plant breeding, and crop management. In recent years, the use of non-destructive, image-based plant phenotyping methods has emerged as an active area of research, driven by improvements in hardware as well as software. Indeed, the emergence on the consumer market of low-cost, powerful image acquisition devices has made (raw) phenotyping data readily available, and computational breakthroughs such as deep learning [1, 2] have in turn allowed researchers and plant breeders to readily obtain quantitative insights from these data. Taken together, these improvements in computational plant phenotyping have reduced the reliance on tedious, manual intervention in data acquisition and processing and have enabled the use of automation in the laboratory and in the field.

One noteworthy development is the adoption of three-dimensional (3D) plant phenotyping methods [3]. Advancements in 3D image acquisition and processing methods are increasingly being applied and explored in the agricultural industry: automation and robotics are entering agriculture, with examples including autonomous and targeted harvesting, weeding, and spraying [4]. In agricultural biotechnology, there is a continuing effort to efficiently modify or select for traits like increased yield, drought tolerance, pest resistance and herbicide resistance, by linking the genotype with the phenotype [4, 5]. In precision farming, crop management is being optimized and made more flexible through monitoring and mapping of crop health indicators and environmental conditions [6, 7]. All these advancements require powerful vision systems, and applications in the different domains of phenotyping, inspection, process control, or robot guidance benefit from a 3D approach over a 2D one.

Compared to two-dimensional methods, 3D reconstruction models are more data-intensive but give rise to more accurate results. They allow the geometry of the plant to be reconstructed [8], and hence find important applications in the morphological classification of plants. Moreover, 3D methods are also better able to track plant movement, growth, and yield over time [8,9,10], something that is hard to do with 2D approaches alone. These 3D reconstructed plant models can be used to, for example, describe leaf features, discriminate between weed and crop, estimate plant biomass, and classify fruits [11]. In some cases, 3D methods that incorporate data from multiple viewing angles may provide insights that are hard or impossible to obtain from a 2D model alone, such as resolving occlusions and crossings of plant structures by recovering distance, orientation, and illumination information [2, 12,13,14].

These 3D reconstruction models can be classified in several ways. One such classification makes the distinction between rigid and non-rigid reconstruction. In rigid 3D reconstruction, the objects in the scene are static, while in non-rigid 3D reconstruction, the objects are dynamic and the method allows for some level of movement. Another possible classification, which is typical for agriculture (and thus also applicable in our case), makes the distinction between 3D reconstruction models for (controlled) indoor environments and those for outdoor environments that make use of images from the field [15].

The set of problems that may arise during the processing and analysis of 3D representations, in general, is very large. For the analysis of 3D representations of plants in particular, a diverse set of tools is required because of the complexity and the non-solid characteristics of plant architecture, and its diversity both across and within species. It is our goal to point out typical processing and analysis steps, and to review methods which have been applied before, or could typically be used, in each of these steps. We will focus on applications for plant phenotyping where the ultimate goal is to measure phenotypic characteristics of single plants, or crop canopies on a small scale, as opposed to large scale yield and growth monitoring of crops in the field. We will not discuss the construction of virtual plant models where obtaining accurate or realistic 3D representations is a goal by itself. Nevertheless, many of the techniques used in that area can be applied for phenotyping as well. An outline of the topics covered in the present review is presented in Fig. 1.

Fig. 1

Schematic outline of typical processing and analysis steps for 3D plant phenotyping. Through an active (1.a) or passive (1.b) 3D acquisition method, either a depth map (2.a), a point cloud (2.b), or a voxel grid (2.c) is obtained. After a number of preprocessing steps consisting of background subtraction, outlier removal, denoising, and/or downsampling, the primary 3D representation may be transformed into a secondary representation, such as a polygon mesh (4.a), an octree (4.b), or an undirected graph (4.c), which facilitates further analysis. The main analysis steps, which may consist of skeletonization (5.a), segmentation (5.b), and/or surface fitting (5.c), precede measurements on the canopy (6.a), plant (6.b), or plant organ (6.c) level. 1.a [30], 1.b [169], 2.b [35], 2.c [101], 4.a [164], 4.b [104], 4.c, 6.a [108], and 6.b [85] reprinted under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0). 2.a, reprinted from [79], ©2015, with permission from Elsevier. 5.a [179] reprinted with permission from the American Society for Photogrammetry and Remote Sensing, Bethesda, Maryland (https://www.asprs.org/). 5.b, ©2017 IEEE, reprinted with permission from [227]. 5.c [348] and 6.c [248] reprinted with permission from the author

3D image acquisition

An overview of the topics covered in this section is presented in Fig. 2.

Fig. 2

Overview of 3D image acquisition techniques

3D imaging methods

3D imaging methods can be classified roughly into active and passive approaches [16,17,18,19,20,21,22,23]. The active group refers to techniques that use a controlled source of structured energy emission, such as a scanning laser source or a projected pattern of light, together with a detector like a camera. Passive techniques, on the other hand, rely on ambient light in order to form an image [24]. Compared to 2D imaging, both passive and active 3D imaging approaches can significantly improve the accuracy of plant growth measurements and even expand the set of architectural traits that can be measured. However, 3D imaging techniques still fall short in several crucial areas such as speed, availability, portability, spatial resolution, and cost [3].

Typically, active 3D imaging methods require specialized measuring devices such as LiDAR, MRI or PET scanners, which are costly to acquire and maintain but result in highly accurate data. Passive imaging methods, on the other hand, tend to be more cost-effective as they typically use commodity or off-the-shelf hardware, but may result in comparatively lower-quality data that often require significant computational processing to be useful. The specific trade-offs between active and passive 3D imaging methods, in terms of cost and fitness for a specific purpose, are discussed in this section. A comparison of active and passive methods, and of the imaging techniques covered in this paper, is presented in Tables 1 and 2, respectively. A full list of papers and plants using these techniques can be found in Table 3, under the header “3D Image Acquisition and Registration”. Four selected techniques from these two categories are illustrated in Fig. 3.

Fig. 3

3D imaging system setup: a Laser triangulation, b Structure from Motion (SfM), c Stereo vision, and d Time-of-Flight (ToF). Reprinted from [8] under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0)

Table 1 A comparison of 3D imaging methods
Table 2 A comparison of 3D imaging techniques
Table 3 Well-established methods and algorithms used for 3D plant phenotyping

Active 3D imaging approaches

Active approaches use active sensors [25] and rely on radiometric interaction with the object by, e.g., using structured light or laser [23] to directly capture a 3D point cloud that represents the coordinates of each part of the subject in the 3D space [25]. Triangulation, Time of Flight (ToF, discussed below), and phase-shift are all examples of active measurement techniques [18]. Structured light [26] and laser scanners [10, 27, 28] are active technologies that are based on triangulation to determine the point locations in a 3D space [17]. Because active 3D imaging approaches rely on emitted energy, they can overcome several problems related to passive approaches such as correspondence problems (i.e., the problem of ascertaining which parts of one image correspond to which parts of another image, where differences are due to movement of the camera, the progress of time, and/or movement of objects in the photos). Furthermore, active 3D acquisition techniques can provide higher accuracy, but they require specialized and often expensive equipment. Because of their reliance on a radiation source, the environment and the illumination conditions in which active techniques can be used are often limited.

Other possible drawbacks are that approaches using structured light require very accurate correspondence between images while laser scanners can be slow and can potentially heat or even damage plants at high frequencies.

Laser triangulation These techniques involve shining a laser beam onto the object of interest and using a sensor array to capture the reflected light [8]. Due to the low-cost setup, they are widely used in laboratory experiments [29, 30]. Paulus et al. [30] used this technique to produce a 3D point cloud of barley plants. Likewise, Virlet et al. [31] used it to produce point clouds of wheat canopies, and Kjaer and Ottosen [32] of rapeseed.

3D laser scanner (LiDAR) A 3D laser scanner is a high-precision point cloud acquisition instrument. However, the scanning process is complex and requires calibration objects or repeated scanning to accomplish the point cloud registration and stitching [33]. Chebrolu et al. [34] used a laser scanner to record time-series data of tomato and maize plants over a period of two weeks, while Paulus et al. [35] used a 3D laser scanner to create point clouds of grapevine and wheat.

Low-cost laser scanning devices, such as the Microsoft Kinect sensor and the HP 3D Scan system, are readily available on the consumer market and have been widely used for plant characterization in agriculture [13]. Although these provide lower resolutions, they may still be sufficient for less demanding applications [36], and they are designed for use in a wide range of ambient light conditions.

Terrestrial laser scanners (TLS) allow for large volumes of plants to be measured with relatively high accuracy, and are therefore mostly used for determining parameters of plant canopies and fields of plants. However, acquiring and processing TLS data is time consuming and costly due to the large data volumes involved [8, 33, 37].

Time of flight (ToF) ToF cameras use light emitted by a laser or LED source and measure the roundtrip time between the emission of a light pulse and the reflection from thousands of points to build up a 3D image [8]. Examples of this method can be found in the works of Chaivivatrakul et al. [38] on maize plants, Baharav et al. [39] on sorghum plants, and Kazmi et al. [40] on a number of different plants including cyclamen, hydrangea, orchidaceae, and pelargonium.
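As a simple illustration, the distance follows from d = c·Δt/2 (with c the speed of light), so a measured roundtrip time of roughly 6.7 ns corresponds to an object about 1 m from the sensor; many ToF cameras infer this delay from the phase shift of a modulated light signal rather than by timing individual pulses.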

Some ToF devices available on the consumer market, such as the Kinect [41] (through the KinectFusion algorithm [42]), provide a convenient and cost-effective way to perform 3D reconstruction in real time [43].

Examples of using the Kinect to acquire 3D point cloud data can be found in several studies, including Wang et al. [44] on lettuce, González et al. [45] on tomato seedlings, Zhang et al. [46] on pumpkin roots, and Zhang et al. [47] on maize plants.

All in all, using close-range photogrammetry for real-time follow-up produces highly detailed models, but it results in a higher processing time compared to the other methodologies. Increasing computational power would allow for rapid model processing that is able to analyze growth dynamics at higher resolutions in the case of photogrammetry [13].

Structured light Structured light cameras project a pattern, for example a grid or a specific pattern of horizontal bars, to capture 2D images and convert them into 3D information by measuring the deformation of the patterns [8]. Li et al. [48] used an acquisition system consisting of a standard structured light scanner to capture the geometry of the dishlia plant by looking at it from different angles. To obtain this result, they used a turntable to rotate the plant by 30 degrees at a time. A complete review of using structured light methods for high-speed 3D shape measurement can be found in [49].

Photometric stereo (PS) Pioneered by Woodham [50], PS is a low-cost active imaging technique that can achieve high-resolution images and fast capture speeds. PS estimates local surface orientation by using a sequence of images of the same surface taken from the same viewpoint but under illumination from different directions. This technique uses data from several images and is therefore able to circumvent some of the problems that plague Shape-from-shading [51] approaches (not applied in plant phenotyping as far as we know) [52,53,54,55]. Bernotas et al. [17] used this technique for tracking the growth of thale cress plants.

Tomographic methods These methods create a series of 2D slices to generate a 3D volume and provide non-destructive, high-resolution data of external and internal structures, or even of the movement of small molecules through a root system in the case of plants. X-ray computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET) fall into this category [56].

MRI and CT, which are usually applied in the medical imaging domain, can also be used to visualize plant root systems within their natural soil environment [56,57,58,59,60,61,62,63,64,65]. Applications of CT during the last 30 years show considerable effectiveness for the visualization of root structures. Fine root structures can be visualized using micro-computed tomography (µCT) devices, which offer high resolving powers, down to 50 µm [66].

These methods produce voxels which contain intensity information, either representing the capacity of the material to absorb and emit radio frequency energy in the presence of a magnetic field in case of MRI, or the capacity of the material to absorb the X-ray beam in case of CT.

Neutron tomography (NT) complements other techniques like CT or nuclear MRI, due to the specific attenuation characteristics of thermal or cold neutrons [67, 68]. As neutrons are attenuated by the presence of water, while passing through volumes of silicon-based material in a relatively unimpeded way, NT presents an attractive method for the phenotyping of plant roots embedded in soil, modeling the rhizosphere, and quantifying the spatial distribution of water in the soil-plant system with high precision and good spatial resolution.

For example, Krzyzaniak et al. [66] used NT to provide a 3D reconstruction of grapevine roots and sand in an aluminum sample holder, while Moradi et al. [69] used NT to study root developments in soil of different texture and showed that sandy soil was the best to obtain a good contrast of the root visualization. Compared to X-ray imaging, NT has advantages and disadvantages. Due to its ability to penetrate bulk volumes of soil and rubble, NT is able to visualize water dynamics [69,70,71,72,73,74]. However, NT is a more labor-intensive process that requires highly specialized equipment, and produces images of comparatively lower resolution.

Passive 3D imaging approaches

Passive methods use passive sensors such as cameras and rely on analyzing multiple images from different perspectives to generate a 3D point cloud [21, 22, 25]. They capture plant architectures without introducing new energy (e.g., light) into the environment. Multi-view stereo (MVS) [75, 76], of which the most common application is binocular stereo [77, 78], Structure from Motion (SfM) [79], light-field (plenoptic) cameras [80], and space carving [81] approaches are examples of methods and technologies using this approach [17]. Of these, SfM is widely in use, especially in the 3D reconstruction of plants [11, 13, 14, 18, 79, 82,83,84,85]. In this approach, multiple photographs are taken from different unknown angles after which the camera position and depth information are estimated simultaneously based on matched features in the images.

Compared to active techniques, these methods are cheaper and can be applied using standard imaging hardware, but they are prone to producing outliers and noise [86]. Another disadvantage is that they are computationally complex, and thus relatively slow. Because passive methods make use of ambient light reflections, they also capture color information in addition to 3D shape information, which is not readily available from active techniques unless they are combined with another imaging system.

Multi-view stereo techniques These methods use two or more cameras to generate parallax from different perspectives and obtain distance information about the object by comparing these perspectives [87]. Although a binocular camera setup is simple and its calculations are fast, the results are strongly affected by the environment, and the method in particular struggles with scenes lacking texture information [88]. Xiong et al. [89] used binocular stereo cameras and a semi-automatic image analysis system to quantify the 3D structure of rape plants. Chen et al. [90] assembled two binocular vision systems into a four-camera vision system to construct a multi-view stereo system to perform multi-view 3D perception of banana central stalks in complex orchard environments. Rose et al. [85] utilized a multi-view stereo method to reconstruct tomato plants.

Structure from motion (SfM) This technique can estimate 3D models from sequences of overlapping 2D images and can automatically recover camera parameters such as focal length, distortion, position, and orientation [91,92,93,94]. It is low-cost and offers high point cloud accuracy and good color reproduction; however, capturing the required image sequences is cumbersome and time-consuming [33]. Using equipment available in most biology labs, such as cameras and turntables, Lou et al. [9] built an accurate multi-view image-based 3D reconstruction system that yields promising results on plants of different forms and sizes and applied it to different plants, including thale cress, Brassica sp., maize, Physalis sp., and wheat.

SfM is not limited to analyzing the plant stem and leaves. Liu et al. [95] developed an automatic 3D root phenotyping system consisting of a 3D root scanner and root analysis software for excavated root crowns of maize. Their system generates a model of the root system from a 3D point cloud and calculates 18 root-specific phenotypical traits from this model.

Space carving There exist different shape estimation methods [96], including voxel coloring [97] and space carving [98, 99]. Unfortunately, voxel coloring is guaranteed to work only if all of the cameras lie on the same side of the viewing plane, which precludes the use of more general configurations of cameras. To remedy this, Kutulakos and Seitz [98] generalized voxel coloring to space carving, an approach whereby a 3D scene is iteratively reconstructed by selecting subsets of photographs taken from the same side and removing voxels that are not consistent with the selected photographs [100]. The process ends when there are no more voxels to remove.
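As a minimal sketch of the carving step, the snippet below implements the simpler silhouette-based variant (visual-hull carving rather than full photo-consistency-based space carving): voxels whose projections fall outside any silhouette are discarded. The calibrated 3x4 projection matrices and binary foreground silhouettes are assumed to be given.

```python
import numpy as np

def carve_voxels(voxel_centers, projections, silhouettes):
    """Keep only voxels whose projection falls inside every silhouette.

    voxel_centers : (N, 3) array of voxel centre coordinates (world frame).
    projections   : list of 3x4 camera projection matrices (assumed calibrated).
    silhouettes   : list of 2D boolean arrays (True = plant pixel).
    """
    homog = np.hstack([voxel_centers, np.ones((len(voxel_centers), 1))])
    keep = np.ones(len(voxel_centers), dtype=bool)
    for P, sil in zip(projections, silhouettes):
        uvw = homog @ P.T                          # project to the image plane
        u = (uvw[:, 0] / uvw[:, 2]).round().astype(int)
        v = (uvw[:, 1] / uvw[:, 2]).round().astype(int)
        h, w = sil.shape
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        consistent = np.zeros(len(voxel_centers), dtype=bool)
        consistent[inside] = sil[v[inside], u[inside]]
        keep &= consistent                         # carve away inconsistent voxels
    return voxel_centers[keep]
```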

Some recent contributions focus on the phenotyping of seedlings [101,102,103] as they are easier to reconstruct, while others focus on accelerating voxel carving through the use of octrees [104]. Scharr et al. [104] then apply this accelerated method to maize and banana seedlings. Gaillard et al. [105] developed a high throughput voxel carving strategy to reconstruct 3D representations of sorghum from a small number of images.

In comparison to SfM, space carving requires fewer images and less processing time. However, this method needs an exact calibration and segmentation of the object to be reconstructed, whereas SfM can estimate the calibration automatically. Space carving is therefore appropriate in a controlled environment, where an accurate calibration is attainable [105].

Light field measuring Compared to a standard camera, which consists of a main lens that focuses a scene directly onto an image plane, a light field camera generates an intermediate image which is focused onto the image plane by a micro-lens array. Light field cameras allow images to be modified after recording, and therefore offer more flexibility in how an image is perceived. Polder et al. [106] used a light field camera to capture the depth map of tomato plants in a greenhouse. Apelt and Kragler [80] used a light field camera which provides two high-resolution grey-scale images (a focus image and a depth image containing metric distance information) to build a system to monitor the spatio-temporal growth of thale cress.

Scene representations

By choice, or depending on the acquisition method, 3D scenes and objects can be represented as a depth map, as a point cloud, or as a voxel grid.

Depth map

A depth map is a 2D image where the value of each pixel represents the distance from the camera or scanner (sometimes referred to as “2.5D”). In such representations, objects occluded by the projected surface are not measured. The 3D image acquisition methods which may output depth maps are mostly active techniques, together with stereo vision, which measures depth from a single viewing position by comparing two images taken from slightly displaced positions.

Depth maps have been applied on canopies, where inferring a complete or detailed 3D structure is not necessary, such as employed by Ivanov et al. [107] and Müller-Linow et al. [108] who estimated the structural parameters of canopies based on top-view stereo imaging set-ups in maize and sugar beet, respectively, and as utilized by Baharav et al. [39] who measured the plant heights and stem widths in a sorghum canopy based on side-view depth maps.

Depth maps have also been applied on individual plants of which the leaves are planar and have an orientation more or less perpendicular to the viewing direction. Xia et al. [109] introduced the use of depth maps merely to provide a more robust segmentation of individual leaves of bell pepper plants where 2D RGB imaging would have had difficulty separating overlapping leaves. Chéné et al. [110] explored the use of depth imaging systems for leaf segmentation, as well as for the estimation of some 3D traits, such as leaf curvatures and leaf angles. Dornbusch et al. [10] used depth maps to monitor and analyze the diurnal patterns of leaf hyponasty, the upward movement of leaves in response to environmental changes, in thale cress. Depth map techniques can also be combined with other techniques: Li et al. [111] combined depth image data with 3D point cloud information to carry out in situ leaf segmentation for different kinds of plant species such as Hedera nepalensis, Epipremnum aureum, Monstera deliciosa and Calathea makoyana.

Depth maps are particularly suitable for segmentation, as illustrated in Fig. 4, but note that the segmentation and subsequent analysis of the segmented images will often suffer from occlusions, lacking the advantages of full 3D imaging. By covering the 3D scene from multiple angles and with overlapping images, 2.5D data can be augmented to a real 3D point cloud with xyz-coordinates. Here, the Iterative Closest Point (ICP) algorithm [112] and variants thereof can be used to align the point clouds sampled from the overlapping depth maps.
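As a brief illustration of how a depth map relates to a full point cloud, the following sketch back-projects each pixel of a depth image into 3D camera coordinates using a pinhole camera model; the intrinsic parameters (fx, fy, cx, cy) are hypothetical calibration values.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (metres per pixel) into an (N, 3) point cloud."""
    v, u = np.indices(depth.shape)           # pixel row/column grids
    z = depth
    x = (u - cx) * z / fx                    # pinhole camera model
    y = (v - cy) * z / fy
    points = np.dstack([x, y, z]).reshape(-1, 3)
    return points[points[:, 2] > 0]          # drop pixels with no depth reading

# Hypothetical usage with a 640x480 depth image and guessed intrinsics:
# cloud = depth_to_point_cloud(depth_img, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
```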

Fig. 4

Top view RGB image of a rosebush (A), and depth map of the same rosebush scaled in mm with the ground as reference, obtained by a Microsoft Kinect depth sensor (B), by [110]. The depth map makes it possible to differentiate the composite leaves, which would be much harder without depth information. Reprinted from [110], ©2015, with permission from Elsevier

Point cloud

A 3D point cloud is a set of points representing an object or surface. One of the advantages of the point cloud representation is that it includes depth information, thus working around the issue of occlusion among plant leaves [111]. Point clouds can be obtained in two ways: directly, from active 3D image acquisition techniques such as LiDAR, RGB-D cameras, or synthetic aperture radar systems, or through (passive) 3D reconstruction from a set of different views of the scene [86, 113, 114]. Among the active methods, LiDAR point clouds are commonly used for point cloud segmentation applications and for trees (forests) [115].

Active image acquisition methods typically give rise to point clouds of relatively uniformly sampled points on the surface of the represented objects. The density of point clouds acquired through passive photogrammetric techniques, however, will often depend on the presence of detectable features on the surface of objects, because such techniques usually rely on finding corresponding feature points across multiple overlapping 2D images. This can result in point clouds where featureless parts of objects are less well represented, or in false points due to mismatches between features, especially when the scene contains repeated structures.

Point clouds do not directly provide information about the surface topology [20, 116], implying that it will be more challenging to accurately estimate an underlying surface or curve representation and to estimate traits related to the surface area, especially in the presence of noise, outliers or other imperfections. This will be even more difficult when dealing with the complex architecture of plants. Thus, the quality of the point cloud in conjunction with the nature of the plant architecture, will largely determine the available processing and analysis techniques.

Almost all of the techniques (both active and passive) result in a point cloud [18]. Cao et al. [14] generated a 3D point cloud by developing a low-cost 3D imaging system to quantify the variation in the growth and biomass of soybean due to flooding at its early growth stages. Martinez et al. [13] examined the suitability of two low-cost systems for plant reconstruction, creating dense point clouds both with a low-cost SfM pipeline and with an RGB-D Kinect sensor, which were later used for solid model creation. The SfM model showed better results for the reconstruction of fine details and for the accuracy of the height estimation, whereas the RGB-D approach was faster during the creation of the 3D models.

Ma et al. [43] produced a 3D point cloud by developing a 3D imaging approach to quantitatively analyze soybean canopy under natural light conditions. Most current systems provide information on the whole-plant level and there are only a few cases where information on the level of specific plant parts, such as leaves, nodes and stems, is given [2]. One such example can be found in Thapa et al. [117], who generated a 3D point cloud acquired with a LiDAR scanner to measure plant morphological traits, including the individual and total leaf area, the leaf inclination angle, and the leaf angular distribution of maize and sorghum.

Voxels

A 3D object may also be represented by a 3D array of cells, in which each cell (voxel) contains one of two possible values, indicating whether the voxel is occupied by the object or not. The most commonly used methods which result in such a representation are shape estimation methods [96] like Shape-from-silhouette (SFS) [118], space carving [98, 99], voxel coloring [97], and generalised voxel coloring [119, 120]. These passive methods rely on determining the visual hull, which is the largest possible shape that is consistent with the intersection of the 2D silhouettes of an object projected into 3D space.

If the plant structure is relatively simple, then these standard volumetric methods are relatively easy to implement, are fast, and produce good approximations. For example, Golbach et al. [101] used SFS to reconstruct tomato seedlings, and Kumar et al. [121] did the same for young maize and barley plants. Phattaralerphong et al. [122] also applied SFS to obtain voxel representations of tree canopies. Their goal was to measure traits such as tree height, tree crown diameter and canopy volume which don’t require very accurate 3D representations. Likewise, Kumar et al. [123] estimated maize root volume based on a voxel representation obtained by SFS.

However, if the scene is relatively complex, such as when multiple plant parts are overlapping, or the plant parts are very intricate, one may have to rely on less standard volumetric methods. For example, Klodt et al. [103] developed an optimization method which finds a segmentation of the volume enclosed by the visual hull by minimizing the surface area of the object subject to the constraint that the volume of the segmented object should be at least 90% of the volume enclosed by the visual hull. They applied their method for the volumetric 3D reconstruction of barley plants, and achieved an accurate 3D reconstruction of fine-scaled structures of the plant.

3D image processing

This section describes common techniques for the visualization, processing, and analysis of phenotyping data (in 3D point set form, as a 3D image, or in any other form), through transformations, filtering, image segmentation, and morphological operations. A full list of papers and plants using these techniques can be found in Table 3, under the header “3D Image Processing”. Moreover, an overview of the topics covered in this section is presented in Fig. 5.

Fig. 5

Overview of 3D image processing techniques

3D point set filtering

Point sets contain noise stemming from different sources regardless of whether the point cloud was generated actively or passively (but passively generated point clouds are typically more noisy [86, 124]). Removing noise is an essential first step in the processing pipeline.

Actively generated point clouds typically suffer from limited sensor accuracy and measurement error due to environmental issues (illumination, material reflectance, and imperfect optics). For point clouds that are generated through computational reconstruction, imprecise depth triangulation and inaccurate camera parameters can give rise to significant geometry errors, which can be classified into two types: outlier errors or positioning errors [86, 113, 114].

Moreover, the point cloud will often contain parts of the surrounding scene as well as wrongly assigned points, which need to be selectively removed, and “double wall” artefacts may result from small errors in the alignment of multiple scans, or from small movements during image acquisition. Finally, the initial size of the point cloud is often too large for further processing within a manageable time frame, requiring downsampling.

In plant phenotyping it is common to divide point set filtering into three different steps: background removal, outlier removal, and denoising [33, 125,126,127].

Background removal

When a point cloud is obtained through an active 3D acquisition method and does not contain color information, efforts are usually made to capture as little of the surrounding scene as possible. If the point cloud still contains part of the surrounding scene, background removal can rely on the detection of geometric shapes such as planes, cylinders, or cones, which may correspond to a surface, the main stem, or a pot, respectively. Points can then be discarded depending on their position relative to these shapes. Detection of geometric shapes is often done using the RANSAC algorithm [128]. For example, Garrido et al. [129] imaged maize plants in a field using LiDARs mounted on an autonomous vehicle, and used RANSAC to segment their point clouds into ground and plants. Liu et al. [130] used a variant of the RANSAC algorithm named MSAC to separate the soil from the original point cloud of maize.
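A minimal RANSAC-style sketch of ground-plane removal is given below, assuming the background is dominated by a single planar surface and the cloud is stored as an (N, 3) NumPy array; library implementations (e.g., in PCL or Open3D) are typically more robust and efficient.

```python
import numpy as np

def remove_ground_plane(points, n_iter=500, threshold=0.01, seed=None):
    """Fit a plane with RANSAC and return the points that are not on it."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p2 - p1, p3 - p1)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                        # degenerate (collinear) sample
            continue
        normal /= norm
        dist = np.abs((points - p1) @ normal)  # point-to-plane distances
        inliers = dist < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return points[~best_inliers]               # keep everything off the plane
```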

When active 3D acquisition is combined with an RGB camera, or when a passive 3D acquisition method is applied, color information can be used for the removal of background points. The effort put into controlling the lighting conditions during 3D acquisition determines whether one can rely on simple color thresholding or needs more complex clustering or classification methods to discriminate between plant and background based on color. For example, Jay et al. [79] used clustering based on both height above ground and color to discriminate between plant and background points in point clouds of in-field crop rows of various vegetable species, obtained by SfM. Ma et al. [43] extracted soybean canopies from background objects: point clouds were rasterized to depth images, after which the pixels of the soybean canopies were differentiated from those of the background by using spatial information in the depth images. Although color information can be useful for removing background points, plants often present ranges of similar colors and shapes, making it difficult to perform segmentation. To remedy this, Sampaio et al. [15] developed a new technique using only (logarithmically transformed) depth information, and showed that accurate reconstruction results can be obtained for maize plants.

True background noise can be removed using a pass-through filter, which restricts each coordinate axis to a specified range and removes the points outside that range. This approach can easily be combined with other filtering algorithms such as the minimum oriented bounding box (MOBB) algorithm [125].

Outlier removal

Two methods for outlier removal are regularly applied to point clouds: radius outlier removal and statistical outlier removal. The radius outlier removal method counts the number of neighboring points within a certain specified radius and removes the points for which this number is lower than a specified minimum number of neighbors. In statistical outlier removal (SOR), the mean distance to the k nearest neighbors is calculated for each point. Points are removed if this mean distance surpasses a threshold based on the global mean of these distances and their standard deviation.
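Both filters can be sketched in a few lines with a k-d tree; the minimal NumPy/SciPy versions below follow the descriptions above, with illustrative parameter values.

```python
import numpy as np
from scipy.spatial import cKDTree

def radius_outlier_removal(points, radius=0.01, min_neighbors=5):
    """Drop points with fewer than min_neighbors other points within radius."""
    tree = cKDTree(points)
    counts = np.array([len(idx) - 1 for idx in tree.query_ball_point(points, radius)])
    return points[counts >= min_neighbors]

def statistical_outlier_removal(points, k=16, std_ratio=1.0):
    """Drop points whose mean distance to their k nearest neighbors is too large."""
    tree = cKDTree(points)
    dists, _ = tree.query(points, k=k + 1)       # first neighbor is the point itself
    mean_dist = dists[:, 1:].mean(axis=1)
    threshold = mean_dist.mean() + std_ratio * mean_dist.std()
    return points[mean_dist < threshold]
```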

Li et al. [111] developed a novel 3D joint filtering operator, integrating a radius-based outlier filter that can separate leaves by removing sparse points, for different kinds of plant species such as Hedera nepalensis, Epipremnum aureum, Monstera deliciosa and Calathea makoyana. Liu et al. [130] applied a MATLAB function (pcdenoise) to remove outliers from maize point clouds that are at least 0.3 standard deviations away from the mean distance, and then applied another MATLAB function (pcsegdist) to remove the larger outliers according to a Euclidean distance threshold of 5 mm. Sampaio et al. [15] and Chaivivatrakul et al. [38] used the same method to remove outliers from point clouds of maize plants.

Denoising (noise filtering)

Before applying further analysis steps it may be necessary to correct certain irregularities in the data, such as noise and “double walls” artefacts.

Moving Least Squares (MLS) This technique iteratively projects points on weighted least squares fits of their neighborhoods, thus causing the newly sampled points to lie closer to an underlying surface [131].

Density-based spatial clustering of applications with noise (DBSCAN) This density-based clustering algorithm, proposed by Ester et al. [132], is designed to discover clusters of arbitrary shape. Zermas et al. [82] used an algorithm based on DBSCAN to remove clusters that are smaller than a certain threshold and located further away than a fixed distance from other clusters, and applied this algorithm to maize plants.
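A minimal sketch of this idea using scikit-learn's DBSCAN is shown below; it only applies the cluster-size criterion (the distance-to-other-clusters criterion of Zermas et al. is omitted), and eps, min_samples, and the size threshold are illustrative values that would need tuning to the point density.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def remove_small_clusters(points, eps=0.02, min_samples=10, min_cluster_size=200):
    """Cluster the cloud with DBSCAN and drop noise points and tiny clusters."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit(points).labels_
    keep = np.zeros(len(points), dtype=bool)
    for label in set(labels):
        if label == -1:                        # -1 marks DBSCAN noise points
            continue
        members = labels == label
        if members.sum() >= min_cluster_size:
            keep |= members
    return points[keep]
```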

Spatial Region Filter This filter works by means of region specifications, which consist of one or more region expressions (geometric shapes) combined according to the rules of Boolean algebra. It is used for plants such as Epipremnum aureum, Monstera deliciosa, Calathea makoyana, Hedera nepalensis and maize in the works of Wu et al. [133] and Li et al. [111].

Color filtering Lou et al. [9] used a color filter to remove noisy points from a 3D point cloud. They acquired images from the plant against a dark background, and found that background noisy points were mostly colored dark, whereas points belonging to the plant were shades of green.

Downsampling

Reducing the number of points needs to happen in a way which minimizes the loss of information about the surface and topology of the sampled object. The most regularly used method for point cloud downsampling is the voxel-grid filter. Here, the point cloud is divided into a 3D voxel grid and the points within each voxel are replaced by the centroid of all points within that voxel [9, 111, 130, 134, 135].
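A compact NumPy sketch of the voxel-grid filter described above: points are binned into voxels of a chosen size and each occupied voxel is replaced by the centroid of its points (the voxel size is illustrative).

```python
import numpy as np

def voxel_grid_downsample(points, voxel_size=0.005):
    """Replace all points inside each occupied voxel by their centroid."""
    voxel_idx = np.floor((points - points.min(axis=0)) / voxel_size).astype(int)
    # Group points by voxel index and average each group
    _, inverse = np.unique(voxel_idx, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    sums = np.zeros((inverse.max() + 1, 3))
    counts = np.zeros(inverse.max() + 1)
    np.add.at(sums, inverse, points)
    np.add.at(counts, inverse, 1)
    return sums / counts[:, None]
```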

An alternative method, which makes use of random sampling and which is also designed to retain key structures in the point cloud, is the dart throwing filter [136], where points from the original point cloud are sequentially added to the downsampled point cloud if they don’t have a neighbor in the output point cloud within a specified radius.

3D point cloud standardization

Point cloud standardization [15] refers to the process of adjusting the resolution of the point cloud according to the object in the scene, where, for example, objects with larger proportions can be described using a lower density of points while smaller objects are described using higher point densities. The result is a point cloud from which extraneous detail has been removed, resulting in a lower amount of data while keeping essential object features.

Sampaio et al. [15] presented a point cloud standardization procedure in which an octree data structure was used to hierarchically group cloud points into voxels according to a predefined resolution, with each voxel described by a single point in the group (e.g., the centroid).

3D point set smoothing

The raw imaging data acquired from optical devices such as laser scanners always contains noise [137], which must be taken into account during subsequent post-processing.

One pervasive source of error for ToF cameras is the so-called wiggling error [138,139,140,141], which alters the measured distance by shifting the distance information significantly towards or away from the camera depending on the surface’s true distance [138]. The wiggling error can be addressed by using bilateral smoothing, a non-linear filtering technique introduced by Tomasi and Manduchi [142] for edge-preserving smoothing [137].

Sampaio et al. [15] used the bilateral smoothing technique to smooth the point clouds of maize plants in two steps: smoothing the normals and then repositioning the points based on the adjusted normals; the normal vector of each point is estimated using Principal Component Analysis (PCA). Ma et al. [125] used a bilateral filter to smooth the point cloud of rapeseed while preserving the edge features of the point cloud. He and Chen [141] implemented an error correction for ToF sensors based on a spatial error model and showed that this approach performs better in comparison to the calibration method in [143] or the distance overestimation error correction methods in [144].

3D point set registration

Many imaging methods give rise to more than one 3D point cloud, for instance when observing a plant from different viewing angles, and these point clouds need to be reconciled with one another into a single coordinate system, a process known as 3D point cloud registration [125, 145]. In the case of two 3D point clouds this process is known as pairwise registration, and is studied extensively in the computer vision literature [145,146,147,148,149]. For pairwise registration, one set of points is typically kept fixed and denoted as the “target”, while the other is designated as the “source”. The goal is then to iteratively move the points of the source towards the target, while keeping the total amount of motion or deformation limited.

Broadly speaking, there are two categories of registration algorithms: rigid and non-rigid. Rigid point registration methods estimate a rigid body transformation (translation and rotation) of the source onto the target, and are usually easier to handle since they involve fewer parameters [150]. Chief among the rigid registration algorithms is the Iterative Closest Point (ICP) algorithm [112, 151], which alternates between associating nearby points in the source and the target, and estimating an optimal rigid body transform [152]. Many variants and improvements of the ICP algorithm exist [42, 153, 154], incorporating additional sources of geometric information (e.g., depth), or optimizing for point cloud data from specific acquisition devices such as the Kinect. Rigid point registration methods have been applied extensively for plant phenotyping. Wang and Chen [155], for example, developed an improved ICP algorithm that is more suitable for registering 3D point clouds taken from different directions using a turntable. They used a rotation matrix and a translation vector to model the relationship between adjacent point clouds and then applied the ICP algorithm. They applied their method to pepper plants and showed that the improved ICP achieves better results than traditional ICP.
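The core of ICP can be sketched in a few lines: alternate between nearest-neighbor correspondence search and a closed-form, SVD-based estimate of the rigid transform that best aligns the matched pairs. The minimal version below assumes the clouds are roughly pre-aligned and omits the outlier rejection, weighting, and convergence checks found in practical implementations (e.g., in PCL or Open3D).

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Closed-form least-squares rotation R and translation t mapping src onto dst."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                   # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, dst_c - R @ src_c

def icp(source, target, n_iter=30):
    """Iteratively align the source cloud to the target cloud."""
    tree = cKDTree(target)
    aligned = source.copy()
    for _ in range(n_iter):
        _, idx = tree.query(aligned)           # nearest-neighbor correspondences
        R, t = best_rigid_transform(aligned, target[idx])
        aligned = aligned @ R.T + t
    return aligned
```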

Rigid point registration algorithms perform well for rigid structures that are already somewhat aligned, but tend to yield poor results for the registration of deformable structures, such as non-rigid, thin plant structures [156]. Non-rigid registration techniques allow each point of the point cloud to move independently while penalizing large deformations. Moreover, the presence of noise and outliers may complicate the search for an optimal registration, rigid or otherwise. To accommodate noise, Jian and Vemuri [157] represent the input point sets as Gaussian Mixture Models (GMM) and reformulate the problem of image registration as one in which the distance between two GMMs is minimized, achieving good performance in terms of both robustness and accuracy [158]. It is worth noting that this approach can be applied to both rigid and non-rigid registration methods. The GMM approach is developed further in the Coherent Point Drift (CPD) algorithm of Myronenko et al. [159], where additionally the centroids of the Gaussians of one point set are constrained to approximately move together, so that the topological structure of the point cloud is preserved.

In the context of plant phenotyping, Chaudhury et al. [156] developed a two-step method that achieved a better fit than CPD in the case of registering multiple scans. This method starts by aligning the scans, then registers a single scan to the average shape constructed from all other scans, and updates the set to include the newly registered result. They applied their method to thale cress and barley plants. Ma et al. [125] used the Fast Point Feature Histogram (FPFH), explained in the "Clustering-based methods" section, for rough registration of multiple neighboring point clouds into a single point cloud, and an ICP algorithm for fine alignment. Teng et al. [33] developed an improved ICP, applied it to rapeseed plants, and compared it with classic ICP. Apart from being computationally more effective, the new method also succeeds in registering point clouds with large differences in angles, for which registration fails with the classical ICP.

Lastly, one of the most challenging tasks is registering 3D point clouds of plants over time and space [34]. To perform analysis on time-series plant point cloud data, one needs techniques that associate the point cloud data over time and register them against each other. The plant's changing topology, as well as non-rigid motion between scans, makes plant registration over an extended period of time very challenging [160]. Chebrolu et al. [34] and Magistri et al. [161] tackled the complexity of registering plant data over time by exploiting the skeleton structure (see "Skeletonization" section) of the plant to obtain correspondences between the same plant parts for scans on different days (Fig. 6). To aid, among other things, the development of new algorithms for point cloud registration, Schunk et al. [160] compiled Pheno4D, a large scale spatio-temporal dataset of point clouds of maize and tomato plants.

Fig. 6

Time series of a tomato plant scanned on various days, together with the extracted skeleton. Reprinted from [160] under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0)

Secondary 3D object representations

Depending on the subsequent analysis methods, it may be advantageous to convert the 3D representation into one of the secondary representations described below.

Polygon mesh

A polygon mesh is a 3D representation composed of vertices, edges and faces which define the shape of an object. The construction of a polygon mesh as an intermediate step in the analysis of a 3D representation of a plant may, for example, facilitate the calculation of leaf surface areas, or the segmentation into individual organs.

Polygon meshes are commonly constructed from voxels using the Marching Cubes algorithm [162], or from point clouds using α-shape triangulation [163]. However, mesh generation requires precise point clouds or voxel representations, and the intricate and non-solid nature of plant architecture means that generating a polygon mesh of a whole plant is often not feasible. More often, surface fitting is performed on individual leaves after segmentation, or different surface fitting methods are applied to different plant organs.

Paproki et al. [164] constructed meshes of cotton plants from point clouds obtained by multi-view stereo, and performed their phenotypic analysis based on this representation. They could obtain measurements of individual leaves and track them through time. McCormick et al. [165] also based their measurements of shoot height, leaf widths, lengths, areas and angles in sorghum on the generation of a mesh from point clouds obtained through laser scanning. Chaudhury et al. [156] generated a mesh on complete thale cress point clouds by α-shape triangulation to determine total surface area and volume.

Octree

An octree [166] is a tree-like data structure, in which a 3D space is recursively subdivided into eight octants if the parent octant contains at least one point. In this way, increasing tree depths represent the point cloud in increasing resolutions. Such a representation can avoid memory limitations when points need to be searched within a large point cloud.

There are various algorithms for clustering and skeletonization which exploit the octree data structure, and which are suitable for plant phenotyping, such as CAMPINO [167] and SkelTre [168].

Duan et al. [169] used octrees to divide point clouds of wheat seedlings into primary groups of points, after which these primary groups were merged manually to make them correspond to individual plant organs. Scharr et al. [104] developed an efficient algorithm for voxel carving on banana seedlings and maize, which directly outputs an octree representation. Zhu et al. [170] used an adapted octree to reconstruct the surface of the 3D point cloud of soybean plants.

Undirected graph

An undirected graph is a structure composed of vertices connected by edges. Edges are assigned weights corresponding to the distance between the connected points. Useful algorithms such as Dijkstra’s algorithm to calculate shortest paths [171], Minimum Spanning Tree [172], and graph-based clustering methods such as spectral clustering [173] use undirected graphs as input.

An undirected graph can be constructed from a point cloud by connecting each point to its neighboring points. Neighbors can be selected within a certain radius r around the query point, or as the k closest points. If r or k is chosen too high, many redundant edges will be formed, whereas if it is chosen too low, crucial edges may be missed.
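As a small sketch of this construction, the snippet below builds a k-nearest-neighbor graph with SciPy and computes shortest-path (geodesic) distances from a chosen root point, the kind of quantity used by several of the skeletonization and clustering methods discussed later; k and the root index are illustrative choices.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra

def knn_graph_geodesics(points, k=8, root=0):
    """Build a k-NN graph weighted by Euclidean distance and return
    shortest-path distances from the root point to every other point."""
    tree = cKDTree(points)
    dists, idx = tree.query(points, k=k + 1)      # the first neighbor is the point itself
    rows = np.repeat(np.arange(len(points)), k)
    cols = idx[:, 1:].ravel()
    weights = dists[:, 1:].ravel()
    graph = csr_matrix((weights, (rows, cols)), shape=(len(points),) * 2)
    return dijkstra(graph, directed=False, indices=root)
```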

Hétroy-Wheeler et al. [174] converted the point clouds of various tree seedlings, obtained through laser scanning, into an undirected graph and used this as the basis for spectral clustering into plant organs. To avoid redundant edges and thus speed up the computation of subsequent steps, while at the same time not miss any relevant edges, they pruned the edges which have neighbors within a certain radius r, based on the angles between edges.

3D image analysis

The above processing steps are merely a transformation of the original 3D representation as preparation for subsequent analysis steps. During these analysis steps, specific additional information is extracted from the 3D representation. A full list of papers and plants using these techniques can be found in Table 3, under the header “3D Image Analysis”. An overview of the topics covered in this section is presented in Fig. 7.

Fig. 7

Overview of 3D image analysis techniques

Skeletonization

Skeletonization is the process of calculating a thin version of a shape to simplify and emphasize the geometrical and topological properties of that shape, such as length, direction or branching, which are useful for the estimation of phenotypic traits. A plethora of algorithms has been developed to generate curve skeletons. These techniques make use of different theoretical frameworks such as topological thinning or medial axes. For a review of methods in the context of plant images, see Bucksch and Alexander [175], and for a more general overview of methods, see Cornea et al. [176]. Skeletonization usually results in a set of voxels or points that in a final step are connected into an undirected graph, on which subsequent analyses can be performed.

A number of studies have proposed algorithms to model the 3D structure of trees by skeletonization, either for the purpose of phenotyping or for computer graphics. In Livny et al. [177] and Mei et al. [178] skeletonization of point clouds of trees obtained by terrestrial LiDAR scanning was performed, not to build an accurate 3D representation of the trees for phenotyping, but to generate models of trees with a credible visual appearance for computer graphics. Despite this different perspective, both provide skeletonization methods which should also be suitable for plant phenotyping, when excluding the processing steps which only serve to enhance the visual appearance of the 3D models.

Bucksch et al. [168] developed a fast skeletonization algorithm, and obtained good results when comparing the distributions of skeleton branch lengths with manually measured branch lengths [179]. While the method is fast, it performs less well for point clouds with varying point densities, and is likely to face difficulties with plants other than the leafless trees which they studied.

Coté et al. [180] constructed 3D models of pine trees by skeletonization to obtain realistic models in order to study reflected and transmitted light signatures of trees, by ingesting them into a 3D radiative transfer model. Here again the goal was not to obtain direct phenotypic measurements of individual trees, but to study indirect radiative properties which depend on the tree canopy structure. To this end, they generated plausible tree canopy structures from a skeleton structural frame defining the trunk and first-order branches only. To create this structural frame, they employed a skeletonization method proposed by Verroust and Lazarus [181], based on Dijkstra's algorithm applied to an undirected graph.

The aforementioned method assumed that cloud points are sampled uniformly or nearly uniformly. To handle point clouds with inconsistent density and outliers, Delagrange et al. [182] developed PypeTree, a software tool for the extraction of skeletons of trees that allows the user to manually adjust a reconstructed plant skeleton.

Ziamtsov and Navlakha [183] improved upon PypeTree [182] and the methods of Verroust and Lazarus [181] and Bucksch and Alexander [175] by using information about the curvature of the plant skeleton. They did so by adding two new features: detecting plant tips more accurately and independently of connected components or level size, and enhancing root selection. They applied their method to extract skeleton graphs of tomato and benth plants.

Lou et al. [9] adopted a method developed by Cao et al. [184] based on Laplacian contraction and applied it to thale cress (in the rosette and flowering stages), Physalis sp., maize, Brassica sp., and wheat. They first segmented the leaves and, after removing them from the point cloud, applied the method to the remaining points. This method proved to be robust to noise and produced a well-connected skeleton.

The extracted 3D reconstructions usually contain on the order of millions of points, which imposes significant computational demands on subsequent processing steps. Therefore, another application of skeletonization is to provide a more parsimonious representation of a plant structure so that further processing can be done more efficiently. For example, Zermas et al. [82] developed a skeletonization algorithm starting from 3D point cloud data, which is split into thin slices of equal height. A per-slice clustering is then performed to find cluster centroids that best represent the neighboring points, and these cluster centroids are retained in the thinned-out skeleton. They applied this method to maize plants.
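A rough sketch of this slicing idea (not the exact algorithm of Zermas et al.) is given below: the cloud is cut into horizontal slices of equal height, each slice is clustered, and the cluster centroids are kept as skeleton points. DBSCAN is used here as one reasonable choice of per-slice clustering; the slice height, eps, and min_samples are illustrative.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def slice_skeleton(points, slice_height=0.02, eps=0.01, min_samples=5):
    """Return candidate skeleton points as per-slice cluster centroids."""
    z = points[:, 2]
    skeleton = []
    for z0 in np.arange(z.min(), z.max(), slice_height):
        in_slice = points[(z >= z0) & (z < z0 + slice_height)]
        if len(in_slice) < min_samples:
            continue
        labels = DBSCAN(eps=eps, min_samples=min_samples).fit(in_slice[:, :2]).labels_
        for label in set(labels) - {-1}:       # skip DBSCAN noise points
            skeleton.append(in_slice[labels == label].mean(axis=0))
    return np.array(skeleton)
```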

Chaudhury and Godin [185] proposed an algorithm based on stochastic optimization to improve coarse initial skeletons that were obtained with different skeletonization algorithms. They applied the proposed algorithm to real-world and synthetic datasets containing different varieties of plants, including cherry, apple, and thale cress. In contrast to other techniques, their method is more faithful to the biological origin of the original point cloud data.

Wu et al. [133, 186], on the other hand, used an iterative shrinkage process to contract the point cloud of a maize plant by using the classical restricted Laplace operator.

The 3D analysis of the branching structure of root systems is another application which has been approached by skeletonization. For example, Clark et al. [187] present a software tool for the 3D imaging and analysis of roots, in which a thinning algorithm is applied to a voxel representation obtained by SFS.

Despite its usefulness for the estimation of certain traits, skeletonization has rarely been applied to the phenotyping of herbaceous plant shoots. This may be because of difficulties when applying skeletonization to objects with more diverse morphologies, such as in the presence of broad leaves, and when there are more occlusions. Chaivivatrakul et al. [38] performed a medial axis-based skeletonization of the relatively simple structure of young maize plants to obtain leaf angles, but found that this particular skeletonization method did not perform well compared to fitting planes through the leaves.

Segmentation

Image segmentation is the process of dividing an image into parts based on the needs of the problem at hand [16]. In plant phenotyping, segmentation of the 3D representation into individual plant organs is a difficult and critical step in the process of obtaining plant organ measurements. There is no standard approach that will work in the majority of situations; the suitability of any one approach largely depends on the plant morphology, as well as on the quality of the 3D representations.

Several techniques exist for image segmentation, and most of them can be traced back to two basic approaches: region-based and edge-based segmentation [188, 189]. The most popular techniques and their application in plant phenotyping are listed below [16, 188, 190]. A comparison of the different segmentation techniques is presented in Table 4.

Table 4 A comparison of segmentation techniques

Color-index based methods

A common method for segmenting the plant from the background is color index-based segmentation [8]. In this approach, a three-channel color value (e.g., RGB) is converted into a scalar (grayscale) value chosen such that there is a pronounced distinction between foreground and background values.

Ge et al. [191] used color index-based segmentation on maize plants in which the image was transformed to a single color-band image using a nonlinear transformation emphasizing the green channel and suppressing the effects of different illuminations. Choudhury et al. [192] used color-based segmentation in hue, saturation, and value (HSV) color space for a holistic and components-based phenotyping of maize plants.
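As an illustration, one widely used color index of this kind is the excess-green index ExG = 2g - r - b computed on chromaticity-normalized channels; the sketch below computes it and applies a simple threshold. This is a generic example rather than the specific transformations used in [191, 192], and the default threshold is a placeholder (Otsu's method is a common alternative).

```python
import numpy as np

def excess_green_mask(rgb, threshold=None):
    """Segment vegetation from background with the excess-green index
    ExG = 2g - r - b on chromaticity-normalized channels."""
    rgb = rgb.astype(np.float64)
    s = rgb.sum(axis=-1, keepdims=True) + 1e-9      # avoid division by zero
    r, g, b = np.moveaxis(rgb / s, -1, 0)           # normalized channels
    exg = 2 * g - r - b
    if threshold is None:
        threshold = exg.mean()                      # crude default; Otsu is a common choice
    return exg > threshold                          # boolean foreground mask
```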

Thresholding methods

Assuming strict conditions on the composition of the scene, many algorithms in plant phenotyping employ thresholding-based approaches in one or multiple channels [193,194,195]. Gray-level thresholding is the simplest segmentation process: a single threshold separates objects from the background [16].

Minervini et al. [196] used a binary segmentation of thale cress and tobacco plants as the first step. Xia et al. [109] applied an RGB thresholding method to field images of paprika plants to eliminate the background.

Edge-based methods

A large group of methods performs segmentation based on information about edges in the image. Edge detection algorithms usually work in two steps: first, points belonging to an edge are detected based on abrupt changes of the intensity around the point; then, edge segments are generated by grouping points inside the boundaries extracted by edge detection [16, 188, 197, 198]. This approach is simple and fast, but is more suitable for 2D images than for 3D point clouds and often delivers disconnected edges which cannot be used to identify closed segments [115, 189, 198].

Lomte and Janwale [199] provided a brief review of plant leaf segmentation techniques, including edge-based techniques on 2D images. Examples of edge-based segmentation on 2D images can be found in [200] for thale cress, [201] for orange fruits, and [202] for pigweed, purslane, soybean, and stinkweed.

Region-based methods

Segmentation results from edge-based methods and region-growing methods are not usually the same. However, region-growing techniques are generally better in noisy images, where it is difficult to detect borders between regions of the image with similar characteristics, such as intensity or color [16].

Liu et al. [130] developed a three-phase segmentation procedure to segment maize plant organs based on a skeleton and a region-growing algorithm. First, they processed the denoised point clouds of each plant using a Laplacian-based method [184] and generated plant skeleton points. They then applied a region-growing algorithm proposed by Rabbani et al. [197] to classify point cloud clusters.

Miao et al. [203] applied a median-based region-growing algorithm [204] to segment the stem points of the maize plant. Their algorithm is a region-growing method tailored specifically to maize and is able to segment stem and leaf instances in sequence, working upwards from the bottom of the plant.

Region-growing algorithms divide the point cloud into different clusters based on local smoothness and curvature characteristics or on the presence of features at a certain scale. Typically, these characteristics vary across a wide range of values for plant point clouds, and a threshold that works for one plant type or organ may not be appropriate for another. To address this, Huang et al. [205] developed a multi-level region-growing segmentation to find a suitable adaptive segmentation scale for different input data. They applied the proposed method to perform individual leaf segmentation of two leaf shape models with different levels of occlusion. They compared their proposed method with two widely used segmentation methods (Euclidean clustering and facet region-growing methods) and showed that the proposed method has the highest measurement accuracy.
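To make the general scheme concrete, the sketch below grows regions over a k-nearest-neighbor graph, constrained by the angle between point normals and seeded at points of low curvature, loosely following the smoothness-based scheme of Rabbani et al. [197]. The normals and curvature values are assumed to be precomputed, and all thresholds are placeholders that would need tuning per dataset.

```python
import numpy as np
from collections import deque
from scipy.spatial import cKDTree

def region_growing(points, normals, curvature, k=16,
                   angle_thresh_deg=10.0, curv_thresh=0.05):
    """Smoothness-constrained region growing over a kNN graph."""
    tree = cKDTree(points)
    _, knn = tree.query(points, k=k + 1)            # first neighbor is the point itself
    labels = np.full(len(points), -1, dtype=int)
    order = np.argsort(curvature)                   # seed the flattest points first
    cos_thresh = np.cos(np.radians(angle_thresh_deg))
    region = 0
    for seed in order:
        if labels[seed] != -1:
            continue
        labels[seed] = region
        queue = deque([seed])
        while queue:
            i = queue.popleft()
            for j in knn[i, 1:]:
                if labels[j] != -1:
                    continue
                # accept the neighbor only if its normal is close to the current one
                if abs(np.dot(normals[i], normals[j])) < cos_thresh:
                    continue
                labels[j] = region
                if curvature[j] < curv_thresh:      # only smooth points seed further growth
                    queue.append(j)
        region += 1
    return labels
```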

Golbach et al. [101] performed a segmentation of stem and leaves on a voxel representation of tomato seedlings. They used a breadth-first, flood-fill-like algorithm whereby the structure is iteratively traversed along neighboring voxels starting from the lowest point in the voxel representation. While the algorithm traverses the main stem, all newly added voxels are located close together, but at the point of the first side branches the newly added voxels are located further apart. If this distance exceeds a certain threshold, the current iteration is treated as the end of the stem. Leaf tips were detected as the last voxel additions after the flood-fill algorithm progressed past the end-point of the main stem. This approach is illustrated in Fig. 8.

Fig. 8
figure 8

Segmentation of a voxel grid representation of a tomato seedling into stem (green) and individual leaves (colored) (left), and schematic illustration of the stem-leaf segmentation algorithm (right), as used by [101]. The structure is filled from the bottom (red point). As long as neighboring points are close together in space, they are treated as stem. Once they spread out, the end of the stem (yellow points) is marked. The last point additions correspond to leaf tips (green and blue points). Reprinted under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0)
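A minimal sketch of this flood-fill idea on an occupancy grid is shown below. It starts from the lowest occupied voxel (the third array axis is assumed to be vertical) and stops when the breadth-first frontier spreads out beyond a threshold; the spread measure and threshold are simplifications of the original procedure in [101].

```python
import numpy as np
from collections import deque

def stem_end_by_floodfill(voxels, spread_thresh=5.0):
    """Breadth-first flood fill over a boolean occupancy grid; the stem is
    assumed to end where the frontier starts to spread out."""
    occ = set(map(tuple, np.argwhere(voxels)))
    start = min(occ, key=lambda v: v[2])            # lowest occupied voxel
    offsets = [(dx, dy, dz) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
               for dz in (-1, 0, 1) if (dx, dy, dz) != (0, 0, 0)]
    visited = {start}
    frontier = [start]
    stem_top = start
    while frontier:
        nxt = []
        for v in frontier:
            for o in offsets:
                n = (v[0] + o[0], v[1] + o[1], v[2] + o[2])
                if n in occ and n not in visited:
                    visited.add(n)
                    nxt.append(n)
        if not nxt:
            break
        pts = np.asarray(nxt, dtype=float)
        spread = np.linalg.norm(pts - pts.mean(axis=0), axis=1).max()
        if spread > spread_thresh:                  # frontier splits into branches/leaves
            break
        stem_top = max(nxt, key=lambda v: v[2])     # highest voxel still on the stem
        frontier = nxt
    return stem_top
```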

Klodt and Cremers [103] segmented their volumetric 3D models of barley into two regions based on the eigenvalues of the second moment tensors of the surface. These provide information on the gradient directions of the shape, and allow discrimination between elongated structures, flat structures, and structures with no dominant direction. This approach resulted in a discrimination between the distal parts of leaves and the rest of the plant. The obtained segmentation then allowed for automated leaf quantification, by counting the number of connected components corresponding to the distal parts of the leaves.

The last two examples of segmentation algorithms [101, 103] are highly customized towards particular plant morphologies. The former makes use of the opposite position of the cotyledons of young dicot seedlings, while the latter depends on plants with a rosette-like arrangement of narrow leaves. The advantage of such highly customized algorithms is that they can be better tailored towards efficiency for use in high-throughput applications.

Choudhury et al. [99] used a technique called voxel overlapping consistency check with point cloud clustering techniques to divide the 3D plant voxel-grid of maize and cotton plants into three components based on the structure of the plants: stem, leaves and top leaf cluster to compute component phenotypes.

On polygon meshes, there are two common approaches for segmentation: the fitting of shape primitives such as planes, spheres and cylinders [206]; and region-growing from seed points on the mesh surface, constrained by changes in curvature which correspond to sharp edges [207, 208].

Paproki et al. [164] applied a hybrid segmentation pipeline based on both approaches. First they obtained a coarse segmentation of meshes of cotton plants into different leaves and the main stem using constrained region-growing. After that, more refined segmentation of the main stem region into internodes, and petioles branching off from the main stem, was performed using cylinder fitting.

Nguyen et al. [209] were mainly interested in segmentation into individual leaves and the stem, and applied region-growing constrained by curvature from seed points which were determined to belong to large flat regions based on pre-computed curvature values. They did this on a plastic model of a dicotyl plant, and their method allowed them to measure length, width, perimeter, and surface area of all the leaves.

Clustering-based methods

Clustering-based techniques segment the image into clusters consisting of pixels with similar characteristics [210, 211]. The most used techniques in this category in the plant phenotyping domain are discussed below.

Topological and morphological feature-based: Miao et al. [212] presented an automatic stem-leaf segmentation method for maize plants, which was able to extract the skeleton of a point cloud directly, and uses topological and morphological features to identify the number and category of organs. They generated a coarse segmentation based on the plant skeleton and used this result to classify the points into stem-leaf clusters. They showed that their method achieved a high segmentation accuracy.

Mean shift: Mean shift clustering was originally introduced by Fukunaga and Hostetler [213] and revisited after 20 years by Cheng [214]. This algorithm has been widely applied in image segmentation and object tracking [215, 216] and consists of an iterative procedure that shifts each data point to the average of data points in its neighborhood by using kernel density estimation [109]. Xia et al. [109] applied the mean shift algorithm to segment plant leaves and background objects in a depth image. Since depth data represent the coordinates of objects in 3D space, plant leaves and background objects could be separated in terms of discontinuity in depth.
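As a small illustration, the sketch below clusters 3D points (for example, points back-projected from a depth image) with scikit-learn's mean shift implementation; the bandwidth-estimation quantile is a placeholder, and this is not the exact pipeline of [109].

```python
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth

def mean_shift_segments(points_3d, quantile=0.1):
    """Cluster 3D points with mean shift; the bandwidth controls the
    neighborhood size of the kernel density estimate."""
    bandwidth = estimate_bandwidth(points_3d, quantile=quantile)
    labels = MeanShift(bandwidth=bandwidth, bin_seeding=True).fit_predict(points_3d)
    return labels
```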

Spectral clustering (graph-based): Spectral clustering goes back to Donath and Hoffman [217] and is a set of clustering techniques that takes the connectivity between points in an undirected graph into account. Its main advantage is that it is straightforward to implement and can be solved efficiently by standard linear algebra methods [218]. Points are projected into a lower-dimensional embedding which maintains distances between connected points as much as possible. Next, a standard clustering technique is usually applied on this lower-dimensional embedding. When applying the spectral dimension reduction on a graph of a branching structure, such as a plant, this same branching should be recognisable in the lower-dimensional embedding, while other morphological features will be suppressed. An exhaustive introduction to spectral clustering can be found in the tutorial of von Luxburg [218].
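A minimal sketch of this idea is shown below, using scikit-learn's spectral clustering on a k-nearest-neighbor affinity graph built from the point coordinates; the number of segments and the number of neighbors are placeholders, and this is not the specific embedding-and-decomposition procedure of [174, 219].

```python
from sklearn.cluster import SpectralClustering

def spectral_segment(points, n_segments=8, n_neighbors=10):
    """Segment a plant point cloud with spectral clustering on a
    k-nearest-neighbor affinity graph (illustrative sketch only)."""
    model = SpectralClustering(n_clusters=n_segments,
                               affinity="nearest_neighbors",
                               n_neighbors=n_neighbors,
                               assign_labels="kmeans")
    return model.fit_predict(points)
```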

Hétroy-Wheeler et al. [174] and Boltcheva et al. [219] made use of this property to segment point clouds of poplar seedlings into individual leaves and their stems. They identified segments in the branching structure of the lower dimensional embedding, which correspond to the plant parts in the original point cloud of the tree seedling (Fig. 9).

Fig. 9
figure 9

Illustration of the spectral clustering approach used by [174]. The point cloud obtained by laser scanning is converted into a graph representation, after which spectral embedding finds intrinsic plant directions, which are decomposed in the principal plant axes. These correspond to elementary units such as leaf blades, petioles, and stems. Reprinted by permission of the publisher Taylor & Francis Ltd, (http://www.tandfonline.com) and the authors

Zermas et al. [82] applied an algorithm named Randomly Intercepted Nodes (RAIN) to segment maize plants. The underlying idea is that a rain drop falling on any part of the plant has to glide along the plant’s surface before it reaches the ground, and can only take two possible routes: fall over the edge of a leaf, or follow the stem closely until it reaches the plant base. By simulating and analysing the trajectories of hundreds of randomly placed rain drops, with each next point selected according to a few simple gravity-inspired rules, they were able to perform plant segmentation and extract other phenotypic characteristics. Because most drops sooner or later encountered an already visited point, at which time their route was terminated, the number of points that had to be considered as potential path candidates remained limited. Like other algorithms, this algorithm has its limitations: in dense canopies, for example, drops that visit a tall plant overshadowing a smaller plant may miss the smaller plant partially or completely.

Lou et al. [9, 220] proposed a spectral method for 3D mesh segmentation, originally targeted at CAD models. They showed that their method is applicable to diverse plants with varied structure, size and shape, and they applied it to plants including thale cress, Brassica sp., oat, maize, Physalis sp. and wheat. However, this method cannot always generate meaningful and accurate segmentation results for plants with curved leaves, with tiny side-branches at the top of the plant, or at junction points in the plant skeleton.

Saliency features (Surface-based clustering): The ordered eigenvalues resulting from eigendecomposition (\(\lambda _0 \le \lambda _1 \le \lambda _2\)) can be used directly as features for clustering or classification, because the relative size of the eigenvalues provides information about the shape of the local distribution of points: if points are scattered with no preferred direction, \(\lambda _0 \simeq \lambda _1 \simeq \lambda _2\); if points are distributed along one axis, as would be the case for stems, \(\lambda _2 \gg \lambda _0, \lambda _1\); and in the case of a planar surface, as for leaves, \(\lambda _1, \lambda _2 \gg \lambda _0\). Therefore linear combinations of the eigenvalues, called the saliency features, could be used as features: scatter-ness (\(\lambda _0\)), linear-ness (\(\lambda _2 - \lambda _1\)), and surface-ness (\(\lambda _1 - \lambda _0\)).

These features can also be expressed as curvature and directionality, defined as \(\lambda _0/(\lambda _0 + \lambda _1 + \lambda _2)\) and \(\lambda _2/(\lambda _0 + \lambda _1 + \lambda _2)\), respectively. Points belonging to flat regions such as leaves will have a low curvature in their neighborhood, while linear features have a high directionality.
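The sketch below computes these quantities per point from the eigenvalues of the covariance matrix of a k-nearest-neighbor neighborhood; the neighborhood size is a placeholder and would normally be chosen (or varied over several scales, as in [221]) according to the point density.

```python
import numpy as np
from scipy.spatial import cKDTree

def saliency_features(points, k=20):
    """Per-point eigenvalue features from the local covariance matrix:
    scatter-ness, linear-ness, surface-ness, curvature, directionality."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    feats = np.zeros((len(points), 5))
    for i, nbrs in enumerate(idx):
        cov = np.cov(points[nbrs].T)
        l0, l1, l2 = np.sort(np.linalg.eigvalsh(cov))   # l0 <= l1 <= l2
        total = l0 + l1 + l2 + 1e-12
        feats[i] = (l0,              # scatter-ness
                    l2 - l1,         # linear-ness
                    l1 - l0,         # surface-ness
                    l0 / total,      # curvature
                    l2 / total)      # directionality
    return feats
```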

Dey et al. [221] used saliency features and color to segment point clouds of grapevines obtained through SfM [222] into branches, leaves and fruit. They calculated saliency features at 3 spatial scales and concatenated color in RGB to obtain a 12-dimensional feature vector for classification. Moriondo et al. [223] also used SfM to obtain point clouds of the canopy of young olive trees. They used saliency at one spatial scale and color features to segment the point clouds into stems and leaves using a Random Forest classifier. Li et al. [48] used curvature to discriminate between flat leaves and linear stems. They achieved a spatially coherent unsupervised binary classification via Markov Random Fields.

Point feature histograms: Local features such as surface normals or eigenvalues use only a few values in the neighborhood of a point. Point Feature Histograms (PFH) [224], and the more efficient variant Fast Point Feature Histograms (FPFH) [225], can be used for a more complete description of the neighborhood of a point. They are based on the angular relationships between pairs of points and their normals within a radius r around each query point. These values, usually 4 angular features, are then binned into a histogram, and the histogram bins can be used as features in a clustering or classification algorithm. Figure 10 illustrates the difference in the PFHs between point clouds with different surface properties, such as a laser-scanned grapevine leaf and a grapevine stem.
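If the Open3D library is used, per-point FPFH descriptors can be computed along the lines of the sketch below; the normal-estimation and feature radii are placeholders that depend on the scale and density of the scan.

```python
import numpy as np
import open3d as o3d

def fpfh_features(points, normal_radius=0.01, feature_radius=0.025):
    """Compute Fast Point Feature Histograms with Open3D (radii are
    placeholders tied to the point cloud scale)."""
    pcd = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(points))
    pcd.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=normal_radius, max_nn=30))
    fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        pcd, o3d.geometry.KDTreeSearchParamHybrid(radius=feature_radius, max_nn=100))
    return np.asarray(fpfh.data).T      # one 33-bin histogram per point
```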

Fig. 10
figure 10

Point Feature Histograms for the laser scanned point cloud of a grapevine leaf (a) and of a grapevine stem point cloud (b), by [35]. Reprinted under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0)

Because of their higher information richness, PFH depend on relatively precise and accurate representations of the plant organ surfaces and shapes, which usually will be obtained by active 3D acquisition techniques such as laser scanning. They have been used as features of high-precision point cloud representations of grapevine, wheat, and barley obtained by laser scanning [30, 35, 226]. Sodhi et al. [227], however, used less precise point clouds of sorghum plants obtained by multi-view stereo imaging, and could still obtain robust segmentations of leaves and stems because the shapes of plant organs in sorghum are relatively easily differentiated.

Segmentation post-processing

A common post-processing step to improve the spatial consistency of class labels is to apply a fully connected pairwise Conditional Random Field (CRF) [228], which takes the spatial context into account and which can greatly improve segmentation results.

Dey et al. [221] and Sodhi et al. [227] applied such a CRF as post-processing of segmentations based on saliency features and PFH for grapevine and sorghum plants, respectively. The effect of such post-processing is illustrated in Fig. 11.

Fig. 11
figure 11

Segmentation obtained by SVM on FPFHs before (a) and after (b) post-processing with CRF, by [227]. CRF corrects leaf false negatives near stem/leaf intersections, by minimizing label differences across neighbors with similar surface normals. ©2017 IEEE, reprinted with permission from the authors

Surface reconstruction

In point clouds, surface reconstruction can be an aid for segmentation, or can serve as a preliminary step before the final measurement of individual plant organs.

Once the point cloud has been segmented, reconstruction of the points on the plant organ surface and of the edges can be tackled in different ways, via surface fitting and edge fitting, respectively. Surface fitting can be done by fitting geometric primitives such as cylinders and planes, or flexible surfaces such as non-uniform rational B-splines (NURBS). Although surface fitting can generate a smooth surface, it can also result in serrated lines for the edges. Constructing the edges requires the edge points to be detected and then fitted separately using, for example, 3D splines, which offer a degree of smoothness. As surface edges are typically noisy, detecting the constituent points of the edge directly can be difficult [229].

Local regression techniques

Least squares methods are a classic tool for surface fitting [230, 231]. However, applying least squares directly can generate an overly smooth surface that loses certain local details of the surface, like leaf structures. Hence, applying a method that uses local information may be more suitable for reconstructing the surface and capturing local details [229]. MLS (see also "Denoising (noise filtering)" section) is widely used for generating a surface for data points [232]; it constructs and evaluates a local polynomial continuously over the entire domain instead of constructing a global approximation, and can thus be viewed as a local regression method.

Zhu et al. [229] used another local regression method, Locally Estimated Scatterplot Smoothing (LOESS), which is similar to MLS and can reconstruct a continuous surface even in the presence of discontinuities in the leaf points. They used this method for maize plants and compared it with Poisson and B-spline methods, showing that it can generate smoother leaf surfaces with smaller normal variances.

Triangulated mesh generation techniques

Triangulation for plant structures is challenging due to the presence of thin branches. Delaunay triangulation is typically used for modeling a surface but does not generate good results for plant structures [156].

Sampaio et al. used the Advancing Front algorithm [233, 234], which is based on Delaunay triangulation but offers higher performance in terms of accuracy and quality, in the first phase of surface reconstruction for maize plants. Chaudhury et al. [156] used the \(\alpha\)-shape algorithm for triangulation on barley and thale cress plants and showed that it worked well when its parameters were properly tuned. Zhu et al. [229] applied the Delaunay triangulation algorithm [235] after surface fitting on maize and rice plants to generate a triangular mesh in the xy-plane and then computed the corresponding z values through comparison with the fitted surface. In this way, they were able to generate a 3D triangle mesh from the fitted surface.
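A minimal sketch of this project-triangulate-lift idea is given below; surface_fn stands in for whatever fitted surface z = f(x, y) was obtained in the previous step, and the code illustrates the general approach rather than the exact procedure of [229].

```python
import numpy as np
from scipy.spatial import Delaunay

def mesh_from_fitted_surface(xy, surface_fn):
    """Triangulate projected points in the xy-plane and lift the vertices
    back to 3D with a fitted surface z = f(x, y)."""
    tri = Delaunay(xy)                          # 2D triangulation of the projection
    z = surface_fn(xy[:, 0], xy[:, 1])          # evaluate the fitted surface
    vertices = np.column_stack([xy, z])
    return vertices, tri.simplices              # vertex coordinates and triangle indices
```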

Non-uniform rational B-splines

NURBS [236] are mathematical models for generating and representing smooth curves and surfaces in computer graphics. A NURBS surface is completely defined by a list of 3D coordinates of surface control points and associated weights. Fitting techniques for NURBS surfaces are described in Wang et al. [237]. NURBS surfaces can then be triangulated and their surface area approximated by summing the areas of the triangles.

NURBS have been applied for the estimation of the surface area of leaves in the following works: Santos et al. [83, 238] first segmented their 3D point clouds of soybean obtained by SfM using spectral clustering, and then fitted NURBS surfaces to the segments corresponding to leaves (Fig. 12); Gélard et al. [239, 240] performed NURBS fitting on segmented leaves of sunflower point clouds obtained by SfM after the stems had been detected and removed using cylinder fitting; and Chaivivatrakul et al. [38] fitted NURBS surfaces to point sets corresponding to maize leaves after these had been mapped onto an underlying surface by MLS.

Fig. 12
figure 12

Leaf segmentation and surface fitting using NURBS on a point cloud representation of soybean leaves, by [83]. Reprinted by permission from Springer Nature Customer Service Centre GmbH: Springer Nature, Computer Vision - ECCV 2014 Workshops by Agapito, Bronstein, and Rother, ©2015

Cylinder fitting

Often, plant stems can be locally represented as a cylinder. A cylinder fitting procedure for oak trees based on least-squares fitting is described in Pfeifer et al. [241]. Paulus et al. [30] applied a similar procedure to stems in 3D laser-scanned point clouds of barley, after the segmentation of leaves and stems using PFH. The fitted cylinders allowed them to accurately estimate stem length. Gélard et al. [240] found that cylinder fitting did not provide satisfactory results when stems are curved, so they developed an alternative procedure in which they propagated a ring with neighborhood and normal constraints vertically along the stem of a sunflower point cloud to model the stem as a curved tube.
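A rough sketch of least-squares cylinder fitting is given below; it is not the exact procedure of [241] or [30], but illustrates the typical parameterization (a point on the axis, an axis direction, and a radius) and the residual being minimized (point-to-axis distance minus radius).

```python
import numpy as np
from scipy.optimize import least_squares

def fit_cylinder(points):
    """Least-squares fit of an infinite cylinder to a stem segment.
    Returns (cx, cy, cz, theta, phi, radius)."""
    c0 = points.mean(axis=0)
    # initial axis direction from the principal axis of the points
    _, _, vt = np.linalg.svd(points - c0, full_matrices=False)
    d0 = vt[0]
    theta0 = np.arccos(np.clip(d0[2], -1.0, 1.0))
    phi0 = np.arctan2(d0[1], d0[0])
    r0 = np.linalg.norm(np.cross(points - c0, d0), axis=1).mean()

    def axis_dir(theta, phi):
        return np.array([np.sin(theta) * np.cos(phi),
                         np.sin(theta) * np.sin(phi),
                         np.cos(theta)])

    def residuals(params):
        cx, cy, cz, theta, phi, r = params
        d = axis_dir(theta, phi)
        v = points - np.array([cx, cy, cz])
        dist = np.linalg.norm(np.cross(v, d), axis=1)   # distance to the axis line
        return dist - r

    x0 = np.concatenate([c0, [theta0, phi0, r0]])
    return least_squares(residuals, x0).x
```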

Trait estimation

After the challenging steps of skeletonization, segmentation and/or surface reconstruction, the measurement of traits on either whole plants, or individual plant organs is often relatively straightforward and many different approaches may yield sufficiently good estimates. Measuring these features is important for a large number of tasks [242], including quantifying plant biomass and yield [243], understanding plant response to stressful conditions [196], mapping genotypes and building predictive structural and functional models of plant growth [244].

Whole plant measurements

Convex hull The convex hull of an object is the smallest convex shape that encloses all of its points. The volume of the convex hull of a whole plant can be an indicator of the size of the plant. In root systems, it may be used as an indicator of the extent of soil exploration [245]. Calculating the convex hull of a point cloud requires minimal preprocessing, but provides only a very rough indicator. The convex hull of tomato plant point clouds has been estimated by Rose et al. [85]. The convex hull was also estimated on root systems of two Oryza sativa (rice) genotypes (Azucena and IR64) by Clark et al. [187], of barley plants by Mairhofer et al. [61], and of rice (Bala \(\times\) Azucena) plants by Topp et al. [245].
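For a point cloud this computation is a one-liner with SciPy, as in the sketch below (the units of the returned volume and area follow those of the input coordinates).

```python
from scipy.spatial import ConvexHull

def convex_hull_metrics(points):
    """Convex-hull volume and surface area of a plant or root point cloud."""
    hull = ConvexHull(points)
    return hull.volume, hull.area
```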

Height Height in point clouds can be simply defined as the maximal distance between points belonging to a plant or root system projected on the vertical axis, such as in Paulus et al. [36] on sugar beet taproots and Nguyen et al. [26] for cabbage and cucumber seedlings. Height can also be easily derived from top-view depth images without much processing as the difference between the ground and the closest pixel in the image, as done by Chéné et al. [110] on rosebushes and Cao et al. [14] on soybean plants.

More robust measures of plant height can also be calculated; for example, Kjaer and Ottosen [32] arranged points in percentiles according to their distance from the top-view scanner, and treated the average of the 80th–90th percentile points as a more robust estimate of rapeseed plant height.

Area and volume In the case of point cloud representations, plant area and volume are usually estimated based on 3D meshes. The surface area of a mesh can easily be determined by adding up the area of triangular mesh faces determined by Heron’s formula. The volume of a mesh can be determined by the method described in [246]. Chaudhury et al. [156] calculated total plant surface and volume from an \(\alpha\)-shape triangulated surface of thale cress plants in this way.
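For a triangle mesh stored as vertex coordinates and face indices, both quantities can be computed directly, as in the sketch below. The area is the sum of triangle areas (here via the cross product, which gives the same result as Heron's formula), and the volume is the sum of signed tetrahedron volumes, which is one standard approach (not necessarily the method of [246]) and is valid for a closed, consistently oriented mesh.

```python
import numpy as np

def mesh_area_and_volume(vertices, faces):
    """Surface area and enclosed volume of a triangle mesh given as an
    (N, 3) vertex array and an (M, 3) integer face array."""
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    cross = np.cross(v1 - v0, v2 - v0)
    area = 0.5 * np.linalg.norm(cross, axis=1).sum()
    # signed volume of the tetrahedron (origin, v0, v1, v2), summed over faces
    volume = abs(np.einsum("ij,ij->i", v0, np.cross(v1, v2)).sum()) / 6.0
    return area, volume
```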

When the plant is represented as a voxel grid or octree, and this representation is precise enough, the volume can be estimated by summing the volumes of all the voxels covering the plant, as was done by Scharr et al. [104] on maize and banana seedlings. However, the authors found that voxel carving methods led to overestimates of volumes due to missed concavities and occlusions.

The surface area of a voxel grid or octree could be estimated by first deriving a meshed surface, which can be obtained with the Marching Cubes algorithm [162].

Number of leaves When a segmentation method is able to discriminate between leaves and stems in point clouds or voxel representations, the number of leaves can be derived by counting the number of connected components, after converting the leaf points into a graph in the case of point clouds.

In monocot crops leaves are very elongated and not always easily distinguishable from stems. However, an accurate segmentation between leaves and stems is not necessary when the aim is leaf counting. For example, Klodt and Cremers [103] discriminated between only the distal parts of leaves and the rest of barley plants by analyzing gradient directions of the 3D shape (Fig. 13), which was sufficient to count leaves. Another strategy for plants with elongated leaves might be to count leaf tips, which may be represented by the endpoints of a curve skeleton of the plant.

Fig. 13
figure 13

Illustration of the leaf counting method used by [103]. A 3D surface model of barley is segmented based on the eigenvalues of second-moments tensors of the surface, after which connected components corresponding to the distal parts of leaves are counted, to yield the number of leaves of the plant. Reprinted by permission from Springer Nature Customer Service Centre GmbH: Springer Nature, Computer Vision - ECCV 2014 Workshops by Agapito, Bronstein, and Rother, ©2015

Petiole length and angle Cao et al. [14] constructed 3D models of soybean plants based on SfM and measured the petiole length as the length of the longest petiole at the front view and the petiole angle as the angle between a petiole and the stem using the CloudCompare software.

Plant organ measurements

Stem or root dimensions Stem and internode lengths can be based on curve skeletons or cylinder fits. Paulus et al. [35] derived cumulated stem height from cylinder fits on the stems of barley plants. Golbach et al. [101] used the skeleton of the voxels representing the stem of tomato seedlings.

Using the graph of a skeleton, the lengths of internodes can be estimated by measuring the geodesic distance between branch points using Dijkstra’s algorithm. This was demonstrated by Balfer et al. [247] on a berryless grape cluster which was skeletonized by the method of Livny et al. [177].
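The sketch below illustrates this computation for a skeleton given as node coordinates and an edge list, using the Dijkstra implementation in SciPy; the indices of the two branch points are assumed to be known from the skeleton analysis.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra

def internode_length(nodes, edges, branch_a, branch_b):
    """Geodesic distance between two branch points of a skeleton graph,
    computed with Dijkstra's algorithm on Euclidean edge lengths."""
    lengths = np.linalg.norm(nodes[edges[:, 0]] - nodes[edges[:, 1]], axis=1)
    n = len(nodes)
    # symmetric sparse adjacency matrix with edge lengths as weights
    graph = csr_matrix((np.r_[lengths, lengths],
                        (np.r_[edges[:, 0], edges[:, 1]],
                         np.r_[edges[:, 1], edges[:, 0]])), shape=(n, n))
    dist = dijkstra(graph, indices=branch_a)
    return dist[branch_b]
```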

Stem or root widths are often estimated by cylinder fitting. For example, Sodhi et al. [227, 248] fitted primitive cylinder shapes to the segmented stem point cloud of sorghum plants to extract the stem diameter (Fig. 14).

Fig. 14
figure 14

Example of the 3D measurements of plant organs as used by [248]. Stem diameters were estimated by fitting cylinder shapes to stem point cloud segments (a), leaf widths by determining the oriented bounding box around leaf point cloud segments and measuring their shortest dimension (b), and leaf lengths by computing the shortest paths connecting the furthest points on the leaf surface meshes (c). Reprinted with permission from the author

Leaf dimensions Two of the most important architectural traits are the leaf angle and the leaf area index, which influence light interception and canopy photosynthesis [130, 249, 250].

The most natural representation for the estimation of leaf dimensions is a mesh surface. Leaf area is then easily estimated as the sum of the areas of the triangular mesh faces, as was done by Sodhi et al. for sorghum [227], by Gélard et al. for sunflowers [239, 240] and by Chaivivatrakul et al. for maize [38]. The leaf base point is defined as the leaf point closest to the stem point cloud [130]. Leaf length and width can be calculated by determining the longest geodesic shortest path on the mesh expressed as a graph, by applying Dijkstra’s algorithm [171]. Liu et al. [130] implemented a three-step procedure to find the leaf tip point of maize plants and then defined the leaf length as the distance of the shortest path between the leaf base and the leaf tip. Sodhi et al. [227, 248] estimated leaf width of sorghum plants by determining an oriented bounding box around a leaf point set, whose sides are directed towards the principal axes of the point set. The leaf width is then the second longest dimension of the bounding box (Fig. 14).
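The oriented-bounding-box idea can be sketched in a few lines with a PCA of the leaf points; this is an illustration in the spirit of [227, 248] rather than their exact implementation.

```python
import numpy as np

def leaf_width_obb(leaf_points):
    """Leaf width as the second-longest extent of a PCA-oriented bounding
    box around the segmented leaf points."""
    centered = leaf_points - leaf_points.mean(axis=0)
    _, _, axes = np.linalg.svd(centered, full_matrices=False)   # principal axes
    proj = centered @ axes.T                    # coordinates in the principal frame
    extents = proj.max(axis=0) - proj.min(axis=0)
    return np.sort(extents)[-2]                 # second-longest dimension = width
```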

Golbach et al. [101] instead derived the leaf dimensions of tomato plant seedlings directly from a voxel representation to minimize computing time. After segmentation, they determined leaf length as the distance between the two points on the surface of the leaf which are furthest away from each other. To correct for the curved shape of the leaves, they added an additional point on the leaf surface halfway between these points. For the leaf width, they searched for the maximum leaf width perpendicular to the three-point leaf midrib which was used for the leaf length. For leaf area they used an approximation based on the number of surface voxels. The authors chose rather crude measurements and may have sacrificed some precision in favour of speed.

Duan et al. [169] based their measurements of leaf lengths and widths of wheat seedlings on polynomial regression fits through segmented leaf point clouds. They identified leaf edges according to the 90th percentile on either side of the leaf midrib using quantile regression, to account for the presence of noise.

Ear or fruit volumes Plant yields may be approximated by the estimated volumes of plant ears or fruits. For example, after segmentation based on PFH, Paulus et al. [35] found that ear weight, kernel weight and number of kernels in wheat plants was correlated with their estimates of ear volume, which they obtained by estimating \(\alpha\)-shape volumes on the point sets corresponding to the ears.

Canopy level measurements

When 3D acquisition methods don’t provide sufficient detail to allow for measurement of individual plant organs, such as when applied on larger scales in the field, useful information can still be extracted on the level of crop or tree canopies. Examples of such traits are canopy surface height, vertical plant area density distribution, leaf area index, or leaf angle distribution.

Cao et al. [14] measured the canopy width of soybean plants as the maximum plant canopy width in the front-view projection of the 3D point clouds.

Canopy profiling LiDAR has a certain capacity to penetrate canopies, so that the frequency of laser interception by the canopy can be used as an index of foliage area at each height. This canopy profiling by airborne LiDAR has been deployed mostly in the context of ecological studies on forest stands [251, 252]. However, Hosoi and Omasa [253] used a high-resolution portable scanning LiDAR together with a mirror for vertical plant area density profiling of a rice canopy at different growth stages. Their method for the estimation of leaf area density is based on a voxel model, and is described in [254]. The leaf area index can then be derived from the vertical integration of leaf area density values.

Cabrera et al. [255] instead used 3D voxel grid representations of individual maize plants to study light interception of maize plant communities, by creating virtual canopies of maize. In the virtual canopy, the cumulative leaf area and the average leaf angles were determined based on the 3D representations of individual plants. These measures were combined with a model of incident light in the greenhouse, so that the local light interception by the canopy could be estimated.

Leaf angle distribution 3D image acquisition methods provide the opportunity to study temporal patterns in the orientation of leaves, which is a highly dynamic trait that changes in response to fluctuations in the environment. Biskup et al. [77] presented a method based on top-view stereo imaging. Their depth images were subjected to a graph-based segmentation algorithm [256] to obtain a rough segmentation of individual leaves of soybean plants, after which planes were fitted to each segment using RANSAC to determine leaf inclination angles. Müller-Linow et al. [108] presented a software tool to analyze leaf angles in crop canopies based on the same set of methods.
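As a simple illustration, the sketch below estimates the inclination angle of a segmented leaf from a least-squares plane fit, where the plane normal is the direction of least variance of the points; this is a simplified stand-in for the RANSAC plane fits used in [77, 108] and, unlike RANSAC, is sensitive to outliers.

```python
import numpy as np

def leaf_inclination(leaf_points):
    """Leaf inclination angle (degrees) as the angle between the fitted
    plane normal and the vertical axis."""
    centered = leaf_points - leaf_points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]                                   # direction of least variance
    cos_tilt = abs(normal @ np.array([0.0, 0.0, 1.0]))
    return np.degrees(np.arccos(np.clip(cos_tilt, -1.0, 1.0)))
```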

Machine learning techniques for plant phenotyping

Machine Learning (ML) is the scientific study of algorithms and statistical models used by a computer system to perform a specific task without explicit instructions, relying instead on patterns and inference. With sensors and acquisition systems for plant phenotyping widely available and used to generate large amounts of imaging data, the main challenge lies in translating the high-dimensional raw imaging data into the quantification of relevant plant traits. In the past, this was done through manually engineered image processing methods, as discussed in the previous sections, but to deal with the difficulties of complex plants and non-controlled or cluttered environments, ML is gaining in popularity. Classical approaches in computer vision generally consist of two major steps: feature extraction using manually engineered image processing methods, and decision making using ML methods. Modern Deep Learning (DL) approaches, in contrast, take an integrated, end-to-end approach in which features are learned at the same time as the inference is performed. Moreover, DL models are often more complex than classical ML models, resulting in much greater discriminative and predictive power [257], with spectacular results in different application areas [258, 259].

Machine learning for plant phenotyping, and deep learning in particular, is an actively developing field. To the best of our knowledge, most ML methods have been used for plant segmentation, though ML is starting to find applications outside of plant segmentation as well, for example in denoising or registering the plant point cloud [34, 186]. Indeed, we expect ML to impact all aspects of plant phenotyping, leading to significant improvements over the current state of the art in the coming years. For example, new DL architectures could be developed and adopted for 3D and multi-modal data processing such as skeleton extraction, branch-pattern classification and plant-development understanding [260]. Furthermore, ML algorithms can be used to analyse the data from high-throughput phenotyping experiments, and may alleviate the problem of missing data, leading to the identification of new correlations and plant traits that were previously difficult to detect.

A full list of papers and plants using these techniques can be found in Table 3, under the header “Machine Learning Techniques”. Moreover, an overview of the topics covered in this section is presented in Fig. 15.

Fig. 15
figure 15

Overview of ML techniques for 3D plant phenotyping

Classical ML methods

In this section, we review some classical machine learning algorithms that are used for plant segmentation. Compared to DL methods, these techniques can often be used efficiently on relatively small datasets, and have a less complex structure, but they are usually less accurate [261].

K-nearest neighbors

The KNN algorithm is an ML classifier which uses the concept of proximity to make predictions about the grouping of individual data points, working off the assumption that similar points can be found near one another. The KNN algorithm can also be used for clustering, with applications for denoising and downsampling in plant phenotyping.

Wu et al. [186] proposed a clustering algorithm based on an implementation of the KNN algorithm by Connor and Kumar [262] to denoise point cloud data for maize plants. Along similar lines, Chebrolu et al. [34] and Magistri et al. [161] used KNN clustering to refine the initial segmentation of tomato and maize plants by discarding small clusters and assigning each discarded point to one of the remaining clusters. Gibbs et al. [81] implemented an efficient KNN algorithm for the downsampling of plant shoot point clouds, and applied their method to different plants (bromeliad species, aloe vera, cordyline species, Brassica sp., chili, and pumpkin).
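As a small illustration of this family of techniques, the sketch below removes outliers by comparing each point's mean distance to its k nearest neighbors against cloud-wide statistics; this is a generic statistical outlier filter rather than the specific clustering procedures of [186, 34, 161, 81], and both k and the cutoff ratio are placeholders.

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_outlier_filter(points, k=10, std_ratio=2.0):
    """Drop points whose mean distance to their k nearest neighbors is far
    above the cloud-wide average (statistical outlier removal)."""
    tree = cKDTree(points)
    dists, _ = tree.query(points, k=k + 1)      # first neighbor is the point itself
    mean_d = dists[:, 1:].mean(axis=1)
    keep = mean_d < mean_d.mean() + std_ratio * mean_d.std()
    return points[keep]
```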

Random forest classifier

The random forest classifier (RFC), first proposed by Breiman [263], is an ensemble learning method in which a multitude of decision trees are constructed during training time, and predictions from the individual trees are pooled for inference.

Straub et al. [135] used two applications of the RFC algorithm to build a tree model for meadow orchard trees. First, the point cloud was separated into two classes, “ground” and “tree”; second, the “tree” class was further processed to filter out noise caused by the fine structure of the tree branches, which were photographed against the sky and differed strongly in their color values from the real branch points.

Dutagaci et al. [264] used a volumetric approach, where an RFC was trained on local features derived from the eigenvalues of the local covariance matrix (intuitively speaking, these local features serve to discriminate leaf and stem points by distinguishing flat structures from elongated, thin structures). They applied their method on rosebush plants, and showed that this voxel classification method through local features gave the best overall performance for leaf and stem classification among four baseline methods they had defined.

Support vector machines

Support vector machines (SVMs) are a commonly used choice for binary classification problems and can perform nonlinear classification through the use of kernels.

Sodhi et al. [227] used an SVM classifier to classify each point of a 3D point cloud of maize plants as either belonging to the stem or to a leaf. Chebrolu et al. [34] and Magistri et al. [161] used a standard SVM classifier with FPFH features to perform a segmentation step aiming at grouping together points belonging to the same plant organ, a single leaf instance, or the stem.

Zhou et al. [84] evaluated the performance of two SVMs (with different polynomial kernels) and two other machine learning methods (boosting and k-means clustering) for the segmentation of soybean plants at early growth stages using 3D point cloud data built from 2D images. They found that the SVM with a linear kernel (applied to histogram of oriented gradients (HOG) features) outperformed the SVM with a 2nd-order polynomial kernel in distinguishing between plant features and background. For the separation of overlapping plants, the SVM with a linear kernel had the smallest error rate, while for background removal and the separation of non-overlapping plants, k-means clustering performed best. They also showed that k-means clustering outperformed the two other methods (the SVM with linear kernel and boosting) in terms of processing efficiency and segmentation accuracy.

Self-organizing maps

Self-organizing maps (SOMs) are unsupervised neural networks developed by Kohonen [265] using the concept of competitive learning instead of back-propagation [34]. SOMs map multi-dimensional data onto lower-dimensional subspaces where geometric relationships between points indicate their similarity.

Chebrolu et al. [34] and Magistri et al. [161] assigned each point in the point cloud to a plant organ (stem or leaf) and then applied SOMs to learn the nodes of the skeleton structure for each plant organ, after which these nodes were used to build the plant skeleton structure of maize and tomato plants.

Hidden Markov models

Hidden Markov models (HMMs) are probabilistic models in which an unobservable (“hidden”) Markov process influences an observable process [266]. HMMs have been used in plant phenotyping to determine correspondences between time-series data of tomato and maize plants by Chebrolu et al. [34] (cf. "3D point set registration" section). Because of their probabilistic nature, HMMs are well suited for cases where the observed measurements suffer from noise and other imperfections.

Deep learning methods

Image segmentation can be categorized into semantic segmentation and instance segmentation. The goal of semantic image segmentation is to label each pixel of an image with a corresponding class of what is being represented. Instance segmentation is considered the next step after semantic segmentation and its main purpose is to represent objects of the same class split into different instances.

Many DL approaches have been developed for the segmentation of 2D images [267,268,269,270,271,272,273,274]. However, most DL methods for segmentation are a priori only applicable to images defined on a regular grid-like structure (so that, for example, convolutions can be applied for feature extraction [160]) and are not well-suited for unstructured data such as 3D point clouds or models [268, 275,276,277].

Moreover, the problem of performing semantic segmentation directly on 3D data is challenging due to the limited availability of 3D datasets with segmentation annotations. Semantic segmentation techniques for 3D point clouds are further divided into two groups: projection-based methods and point-based methods [277], which are discussed below.

Projection-based methods

Projection-based techniques first project the 3D point cloud onto an intermediary 2D representation that can be segmented using 2D networks, and then construct a segmentation for the full 3D point cloud out of these intermediary segmentation results. The advantage is that established 2D segmentation networks can be used, but due to the intermediate representation, some loss of spatial and geometrical information is inevitable [277,278,279,280].

According to the type of intermediary representation, several categories of projection-based methods can be distinguished; in this paper we discuss the multi-view, volumetric, and lattice representations. Another representation, the spherical representation (see, e.g., [281]), retains more geometrical and spatial information than, for example, the multi-view representation, but as it currently has no applications in plant phenotyping as far as we know, it is not discussed in this paper.

Multi-view representation These methods project the 3D shape or point cloud onto multiple 2D images or views, and then extract features from the 2D data using existing models. Two of the most popular networks in this category are MVCNN [282], which analyses the data from multiple perspectives using convolutional neural networks (CNN), and SnapNet [283], which uses snapshots of the point cloud to generate RGB and depth images to work around the problem of information loss.

Determining the number of projections to use, the viewing angle for each projection, and the way to re-project the segmented models from 2D to 3D space, are the main difficulties associated with this class of techniques [276, 284].

Shi et al. [2] applied a multi-view approach and used a slightly modified version of VGG-16 [285], a fully convolutional network (FCN [286]), for semantic segmentation, and a Mask Region-based Convolutional Neural Network (Mask R-CNN [287]) for instance segmentation on 2D images of tomato seedling plants, and then combined the 2D segmentation results in a 3D point cloud. They applied this segmentation method on 2D data as well and showed that the multi-view 3D approach outperforms the 2D approach both for semantic and instance segmentation.

Volumetric representation These methods transform the unstructured 3D point cloud into a regular spatial grid (voxelisation), and then train a neural network on this grid to perform the segmentation. Some popular architectures in this group, which are currently not yet used for plant phenotyping, are VoxNet [288], OctNet [289], and SEGCloud [290]. Volumetric techniques produce reasonable results on small point clouds, but are memory-intensive and hence may struggle on complex datasets.

Dutagaci et al. [264] compared segmentation results for rosebush plants obtained using the 3D U-Net [291] architecture with three other methods for segmentation, namely Local Features on Volumetric Data (LFVD) and a supervised and unsupervised version of Local Features on Point Clouds (LFPC). They found that the 3D U-Net gave the lowest performance whereas the combination of the LFVD feature extraction method with an RFC obtained the best performance for segmentation.

Lattice representation This representation converts a point cloud into sparse, discrete elements (lattices). The sparsity of the extracted features is adjustable and these methods typically have lower memory and computational requirements than simple voxelisation. SPLATNet [292], LatticeNet [293], and MinkowskiNet [294] fall in this category.

Schunck et al. [160] used three different DL architectures for the semantic segmentation of the raw point cloud into leaf, stem and ground: PointNet, PointNet++, and LatticeNet [293, 295]. LatticeNet applies convolutions on a permutohedral lattice, while the PointNet-based methods (see "Point-based methods" section) rely on pooling point features to obtain their internal representation. The authors trained these networks for tomato and maize separately, using 5 plants for training and 2 plants for testing. All three methods achieved high intersection over union (IoU) on the leaf and ground classes. The PointNet-based methods struggled with the stem class because it contained relatively few points, while LatticeNet achieved good results for all classes.

Point-based methods

Point-based methods work directly on point clouds without introducing any intermediate representation. Hence, they are able to use the full set of raw point cloud data, with all of its geometrical and spatial features. These methods are widely used and the subject of active development, and can be roughly divided into five categories: pointwise methods, convolution methods, recurrent neural network (RNN)-based methods, recursive neural network (RvNN)-based methods, and graph-based methods.

Graph-based methods make use of the graph structure of the point cloud, often applying a DGCNN network [296, 297] as the underlying architecture. Since graph-based methods have to the best of our knowledge no applications in plant phenotyping at the moment, they are not discussed in this paper.

Pointwise methods PointNet, introduced by Qi et al. [298], is a pioneering effort in this regard and provides a unified approach to a number of 3D recognition tasks including object classification and segmentation. However, this method has trouble capturing local structures, limiting its ability to recognize fine-grained patterns and to generalize to complex scenes.

Li et al. [299] built an automated organ-level point cloud segmentation system for maize plants, using Label3DMaize [203] to label data from a high-throughput data acquisition platform for individual plants, and PointNet to implement stem-leaf and organ instance segmentation.

Later, Qi et al. [300] introduced PointNet++ which is a hierarchical neural network that applies PointNet recursively on a nested partitioning of the input point set. While PointNet used a single max-pooling operation to aggregate the entire point set, their new architecture builds a hierarchical grouping of points into progressively larger and larger local regions along the hierarchy.

Heiwolt et al. [301] applied the PointNet++ architecture, adjusted for point-wise segmentation applications, on tomato plants and showed that this network was able to successfully predict per-point semantic annotations for soil, leaves, and stems directly from point cloud data.

To better incorporate local geometric structures, recent years have seen a number of improvements upon the PointNet architecture, including PointSIFT [302], SGPN [303], DGCNN [296], LDGCNN [304], SRN-PointNet++ [305], ASIS [306], PointGCR [307], and PointNGCNN [308]. To the best of our knowledge, these improved methods have yet to be applied to plant phenotyping.

Convolution methods As point clouds consist of irregularly spaced, unordered points, convolution operators designed for regular, grid-based data cannot be applied directly.

To address this issue, Li et al. [309] introduced PointCNN which generalizes the design of a CNN to be applicable to point clouds. Ao et al. [310] applied PointCNN on morphological characteristics of the maize plant to segment stem and leaves of the individual maize plants in field environments. They showed that their approach overcomes the major challenges in organ-level phenotypic trait extraction associated with the organ segmentation.

Wu et al. [275] proposed PointConv, extending traditional image convolution to 3D point cloud data with non-uniform sampling. They found that PointConv outperforms networks like PointNet and PointNet++ on several widely used datasets in terms of accuracy and IoU.

Gong et al. [311] developed Panicle-3D, which has higher segmentation accuracy and faster network convergence speed than PointConv, and applied the proposed network on point clouds from rice panicles. A drawback of the method is that it requires large volumes of labelled data to train the network.

Chen et al. [312] developed the DeeplabV3+ network for semantic segmentation, using the convolutional neural network (CNN) structure of the DeeplabV3 network [272] as a starting point and adding a decoder module for refining the segmentation results, especially along object boundaries. Chen et al. [90] used this network to segment banana central stalks.

As an alternative convolution method, we also mention the work of Jin et al. [313], who proposed a voxel-based CNN (VCNN) to do semantic segmentation and leaf instance segmentation on the collected LiDAR point clouds of 3000 maize plants.

Despite these ongoing efforts, three main challenges still exist: (a) the lack of well-labelled 3D plant datasets, (b) achieving highly accurate point-level organ semantic and instance segmentation, and (c) the generalization of the proposed method to other plant species (since most DL approaches are currently focused on a single species at a time).

To address the third challenge, Li et al. [276] proposed a dual-function point cloud segmentation network named PlantNet, the first architecture to be able to work on several plant species, and applied their method on tobacco, tomato, and sorghum plants. They also provided a well-labelled point cloud dataset for plant stem-leaf semantic segmentation and leaf instance segmentation containing 5460 LiDAR-scanned crops (including 1050 labelled tobacco plants, 3120 tomato plants, and 1290 sorghum plants).

RNN-based methods These techniques have recently been used for segmentation because they are able to capture inherent context features and enhance the connection between local features of the point cloud. They first transform a block of points into multi-scale blocks or grid blocks, after which features are extracted by using PointNet. These features are then fed into recurrent consolidation units to obtain the output-level context. One of the most popular networks in this category is 3DCNN-DQN-RNN [314].

Bernotas et al. [17] used two different neural network architectures, an RNN and an R-CNN. The R-CNN was pre-trained using transfer learning weights generated on the Common Objects in Context (COCO) data set and both networks were trained starting with random initial weights. Comparing both approaches on thale cress rosettes, the most accurate leaf segmentation results were achieved with models based on the R-CNN architecture using pre-trained weights.

RvNN-based methods These networks, developed by Socher et al. [315], can achieve predictions in a hierarchical structure. In this category, PartNet, presented by Yu et al. [316], is a DL model for top-down hierarchical, fine-grained segmentation of 3D shapes. This network takes a 3D point cloud as input and then performs a top-down decomposition and outputs a segmented point cloud at the level of part instances.

Wang et al. [44] applied PartNet for instance segmentation on their 3D plant dataset of lettuce consisting of a mixture of real and synthetic data. They showed that the constructed PartNet network had the potential to accurately segment the 3D point cloud leaf instances of lettuce.

Perspectives

As this paper has shown, there exists an abundance of automated solutions for 3D phenotyping. It remains a challenge, however, to find a low-cost, high-throughput 3D reconstruction method that can handle different types of plants and plant traits, especially considering difficulties such as occlusion. All 3D measuring methods have in common that, with increasing plant age, the complexity and thus the amount of occlusion increases. Even though this problem can be addressed in part by using more viewpoints, occlusion will always be present, independent of the type of sensor, the number of viewpoints or the sensor setup, as the inner center of the plant will at some point in time be occluded by the plant (leaves) itself. Some volumetric solutions exist, such as MRI or radar systems, but these require a more complex and expensive measuring setup [18]. Furthermore, many methods and solutions can be applied to individual plants but not to dense canopies. SfM, for example, obtains good results for the 3D reconstruction of plants (and is additionally one of the most cost-effective methods), but it is not suitable for very dense canopies [317].

Performing a reconstruction of real scenes in 3D phenotyping as a function of time is a challenging but important task, since it will allow dynamic traits to be considered, such as growth rates, which could provide information about the growth behavior of plants throughout their different growth stages. The detection of such variations in growth rates might permit the identification of genes controlling plant growth patterns, or the selection of plant genotypes with strong resistance, high production, or suitable harvesting strategies [43].

Registering plants over the course of time is challenging due to the anisotropic growth, changing topology, and non-rigid motion between the times of measurement. For the registration problem, correspondences between point clouds of plants taken at different points in time should be determined, after which the point clouds should be registered using a non-rigid registration approach. As discussed previously (see "3D point set registration" section), point cloud registration for non-rigid plants is itself a challenging problem, especially when some correspondences are missed, and remains an open area of research. Focusing on the detection of key correspondences is one way to address this problem.

One area in which much progress can be foreseen for 3D phenotyping, and especially for the task of segmenting 3D representations of plants, is the application of machine learning algorithms (see "Machine learning techniques for plant phenotyping" section). As discussed, most ML methods have so far been applied to plant segmentation; finding applications outside of segmentation, or adapting these methods to cover other areas of the plant domain, such as denoising or registering plant point clouds, is a promising direction for future research [34, 186].

Deep learning presents many opportunities for image-based plant phenotyping, but these techniques typically require large and diverse amounts of ground-truthed training data to learn generalizable models without an a priori engineered algorithm for performing the task. In most vision-based tasks where deep learning shows a significant advantage over engineered methods, such as image segmentation, classification, and the detection and localization of specific objects in a scene, dataset sizes are typically on the order of tens of thousands to tens of millions of images. This requirement is challenging for applications in the plant phenotyping field, where available datasets are often small and the costs associated with generating new data are high [1]. Furthermore, the manual segmentation of plant images is a cumbersome, time-consuming, and error-prone process. To alleviate this problem, Ubbens et al. [1] proposed a method for augmenting plant phenotyping datasets using rendered images of synthetic plants, while Chaudhury et al. [318] proposed a generalized approach to generate annotated 3D point cloud data of thale cress using artificial plant models.
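As a flavour of how annotated synthetic 3D data can be produced cheaply, the toy generator below samples a labelled point cloud of a rosette-like "plant" with a configurable number of leaves, each point carrying a leaf-instance label. It is only a schematic stand-in for the model-based generators of [1, 318] (all shapes, parameters, and names are invented for illustration), not their actual methods.

```python
# Toy generator of an annotated synthetic point cloud of a rosette-like plant.
# Every point carries a leaf-instance label, so segmentation models can be
# trained or validated on it. Schematic only; not the generators of [1, 318].
import numpy as np


def synthetic_rosette(n_leaves=6, pts_per_leaf=300, leaf_len=1.0, seed=0):
    rng = np.random.default_rng(seed)
    points, labels = [], []
    for leaf_id in range(n_leaves):
        azimuth = 2.0 * np.pi * leaf_id / n_leaves
        # Sample points on a flat, elliptical "leaf blade" in local coordinates.
        u = rng.uniform(0.0, leaf_len, pts_per_leaf)                  # along the midrib
        v = rng.uniform(-1.0, 1.0, pts_per_leaf) * 0.25 * np.sin(np.pi * u / leaf_len)
        z = 0.15 * u + rng.normal(0.0, 0.01, pts_per_leaf)            # upward tilt + noise
        # Rotate the leaf about the vertical axis to its azimuth around the centre.
        x = u * np.cos(azimuth) - v * np.sin(azimuth)
        y = u * np.sin(azimuth) + v * np.cos(azimuth)
        points.append(np.column_stack([x, y, z]))
        labels.append(np.full(pts_per_leaf, leaf_id))
    return np.vstack(points), np.concatenate(labels)


pts, leaf_labels = synthetic_rosette()
print(pts.shape, np.unique(leaf_labels))   # (1800, 3) [0 1 2 3 4 5]
```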

So far several comprehensive collections of benchmark datasets for plant phenotyping with annotations have been made publicly available: the dataset of Khanna et al. [319] containing biweekly color images, infra-red stereo image pairs, and hyperspectral camera images of sugar beet plants along with applied treatment and weather conditions of the surroundings, collected over two months; the ROSE-X dataset of Dutagaci et al. [264] including 11 fully annotated 3D models of real rosebush plants obtained through X-Ray imaging; the Pheno4D dataset of Schunck et al. [160] containing highly accurate and registered point clouds of 7 maize and 7 tomato plants collected on different days (approximately 260 million 3D points); the multi-modality dataset MSU-PID of Cruz et al. [320] containing segmented top-view RGB images of growing thale cress and bean plants; the CVPPP leaf segmentation dataset of Minervini et al. [196] containing segmented top-view images of growing thale cress and tobacco plants; the KOMATSUNA dataset of Uchiyama et al. [195] containing segmented top-view RGB images of spinach (Komatsuna) plants; and the Annotated Crop Image Database of Pound et al. [257] containing images and annotations of wheat spikes and spikelets. Among them, the three datasets of MSU-PID, CVPPP, and KOMATSUNA consist of raw and annotated 2D color images of rosette plants taken from above. The analysis of these images involves segmenting individual and overlapping leaves, for which neural networks have had the greatest success [321,322,323,324,325,326].

As more benchmark datasets for 2D and 3D plant phenotyping are being made available, the application of neural networks is expected to achieve a similar level of success as in other areas.

Fully automated 3D segmentation approaches for plant point clouds that can cope with a wide range of differently shaped plants remain a challenging problem, and are also a bottleneck for achieving big-data processing in 3D plant phenotyping [299]. Recently, Wei et al. [327] presented a novel point cloud segmentation network called BushNet for the semantic segmentation of bush point clouds in large-scale environments; so far, however, it has not been applied to plant phenotyping cases.

In this regard, future research can focus on the adaptation and customization of newly developed ML models for applications in plant phenotyping, and on generalizing the capabilities of current models so that they can be used on different kinds of plants. Segmentation is not the only part of the 3D plant phenotyping pipeline that can benefit from DL methods; however, DL is currently not often used for other phenotyping steps such as skeletonization and denoising. This, too, could form a fruitful area for future research, to assist e.g. with alleviating the impact of noise and missing data.

Last, we foresee that AI-assisted plant phenotyping may have the potential to optimize pest control and improve crop yield, through the large-scale analysis of plant traits and the identification of signs of biotic and abiotic stresses, such as pest damage, drought, and high temperatures. This is especially the case as ML methods have enabled practitioners to move beyond single-plant phenotyping to estimate plant traits at the canopy or field level, providing a more comprehensive understanding of how stressors impact overall crop health, thus improving agricultural productivity and sustainability.

Conclusion

This review provides a broad but non-exhaustive overview of processing and analysis methods applied or applicable in 3D plant phenotyping. As shown, the set of techniques applicable in this field is very diverse, which contributes to the complexity of the task of 3D plant phenotyping. As this is an expanding field, we foresee that additional methods not mentioned in this review will be explored in the future.

Availability of data and materials

Not applicable.

Abbreviations

μCT: Micro Computed Tomography
2D: Two-dimensional
3D: Three-dimensional
CNN: Convolutional Neural Network
COCO: Common Objects in Context
CPD: Coherent Point Drift
CRF: Conditional Random Field
CT: Computed tomography
DBSCAN: Density-based spatial clustering of applications with noise
DL: Deep learning
ERT: Electrical Resistance Tomography
FCN: Fully convolutional network
FPFH: Fast Point Feature Histogram
GMM: Gaussian Mixture Model
HMM: Hidden Markov Model
HOG: Histogram of oriented gradients
HSV: Hue, saturation and value
ICP: Iterative Closest Point
IoU: Intersection over union
KNN: K-nearest neighbors
LFPC: Local Features on Point Cloud
LFVD: Local feature on volumetric data
LiDAR: Light detection and ranging
LOESS: Locally Estimated Scatterplot Smoothing
ML: Machine learning
MLS: Moving Least Squares
MOBB: Minimum oriented bounding box
MRI: Magnetic Resonance Imaging
MSAC: M-estimator Sample Consensus
MVS: Multi-view stereo
NT: Neutron tomography
NURBS: Non-uniform rational basis splines
PCA: Principal Component Analysis
PFH: Point Feature Histogram
PMVS: Patch-based Multi-View Stereo
PS: Photometric Stereo
R-CNN: Region-based Convolutional Neural Network
RAIN: Randomly Intercepted Nodes
RANSAC: Random Sample Consensus
RBOF: Radius-based outlier filter
RFC: Random Forest Classifier
RNN: Recurrent neural network
RvNN: Recursive neural network
SD: Standard deviation
SfM: Structure from Motion
SFS: Shape-from-silhouette
SOM: Self-organizing maps
SOR: Statistical outlier removal
SVM: Support vector machine
TLS: Terrestrial laser scanner
ToF: Time of Flight

References

  1. Ubbens J, Cieslak M, Prusinkiewicz P, Stavness I. The use of plant models in deep learning: an application to leaf counting in rosette plants. Plant Methods. 2018;14(1):6.

  2. Shi W, van de Zedde R, Jiang H, Kootstra G. Plant-part segmentation using deep learning and multi-view vision. Biosyst Eng. 2019;187:81–95.

  3. Vázquez-Arellano M, Griepentrog HW, Reiser D, Paraforos DS. 3-D imaging systems for agricultural applications—a review. Sensors. 2016;16(5):618. https://doi.org/10.3390/s16050618.

  4. King A. The future of agriculture. Nature. 2017;544(7651):21–3. https://doi.org/10.1038/544S21a.

  5. Kumar A, Pathak RK, Gupta SM, Gaur VS, Pandey D. Systems biology for smart crops and agricultural innovation: filling the gaps between genotype and phenotype for complex traits linked with robust agricultural productivity and sustainability. OMICS J Integr Biol. 2015;19(10):581–601.

  6. Santos F, Borém A, Caldas C. Sugarcane: agricultural production, bioenergy and ethanol. Cambridge, Massachusetts: Academic Press; 2015.

  7. Corke H, Faubion J, Seetharaman K, Wrigley CW. Encyclopedia of food grains. Oxford: Elsevier; 2016.

  8. Kolhar S, Jagtap J. Plant trait estimation and classification studies in plant phenotyping using machine vision—a review. Inform Process Agric. 2021. https://doi.org/10.1016/j.inpa.2021.02.006.

  9. Lou L, Liu Y, Shen M, Han J, Corke F, Doonan JH. Estimation of branch angle from 3D point cloud of plants. In: 2015 International Conference on 3D Vision. 2015; 554–561. IEEE.

  10. Dornbusch T, Lorrain S, Kuznetsov D, Fortier A, Liechti R, Xenarios I, Fankhauser C. Measuring the diurnal pattern of leaf hyponasty and growth in Arabidopsis - a novel phenotyping approach using laser scanning. Funct Plant Biol. 2012;39(11):860–9. https://doi.org/10.1071/FP12018.

  11. Paturkar A, Gupta GS, Bailey D. 3D reconstruction of plants under outdoor conditions using image-based computer vision. In: International Conference on Recent Trends in Image Processing and Pattern Recognition. 2018; 284–297. Springer.

  12. Smith LN, Zhang W, Hansen MF, Hales IJ, Smith ML. Innovative 3D and 2D machine vision methods for analysis of plants and crops in the field. Comput Ind. 2018;97:122–31.

  13. Martinez-Guanter J, Ribeiro Á, Peteinatos GG, Pérez-Ruiz M, Gerhards R, Bengochea-Guevara JM, Machleb J, Andújar D. Low-cost three-dimensional modeling of crop plants. Sensors. 2019;19(13):2883.

  14. Cao W, Zhou J, Yuan Y, Ye H, Nguyen HT, Chen J, Zhou J. Quantifying variation in soybean due to flood using a low-cost 3D imaging system. Sensors. 2019;19(12):2682.

  15. Sampaio GS, Silva LAd, Marengoni M. 3D reconstruction of non-rigid plants and sensor data fusion for agriculture phenotyping. Sensors. 2021;21(12):4115.

  16. Sonka M, Hlavac V, Boyle R. Image processing, analysis, and machine vision. Boston: Cengage Learning; 2014.

  17. Bernotas G, Scorza LC, Hansen MF, Hales IJ, Halliday KJ, Smith LN, Smith ML, McCormick AJ. A photometric stereo-based 3D imaging system using computer vision and deep learning for tracking plant growth. GigaScience. 2019;8(5):056.

  18. Paulus S. Measuring crops in 3D: using geometry for plant phenotyping. Plant Methods. 2019;15(1):103.

  19. Mada SK, Smith ML, Smith LN, Midha PS. Overview of passive and active vision techniques for hand-held 3D data acquisition. In: Opto-Ireland 2002: Optical Metrology, Imaging, and Machine Vision, 2003; 4877: 16–27. International Society for Optics and Photonics.

  20. Li Z, Guo R, Li M, Chen Y, Li G. A review of computer vision technologies for plant phenotyping. Comput Electron Agric. 2020;176: 105672.

  21. Siudak M, Rokita P. A survey of passive 3D reconstruction methods on the basis of more than one image. Mach Gr Vis. 2014;23(3/4):57–117.

  22. Remondino F, El-Hakim S. Image-based 3D modelling: a review. Photogramm Rec. 2006;21(115):269–91. https://doi.org/10.1111/j.1477-9730.2006.00383.x.

  23. Schwartze R, Heinol H, Buxbaum B, Ringbeck T, Xu Z, Hartmann K. Principles of three-dimensional imaging techniques. Handb Comput Vis Appl. 1999;1:463–84.

  24. Beltran D, Basañez L. A Comparison between active and passive 3D vision sensors: BumblebeeXB3 and Microsoft Kinect. In: Robot2013: First Iberian Robotics Conference. 2014; 725–734. Springer.

  25. Ando R, Ozasa Y, Guo W. Robust surface reconstruction of plant leaves from 3D point clouds. Plant Phenomics. 2021. https://doi.org/10.34133/2021/3184185.

  26. Nguyen TT, Slaughter DC, Max N, Maloof JN, Sinha N. Structured light-based 3D reconstruction system for plants. Sensors. 2015;15(8):18587–612. https://doi.org/10.3390/s150818587.

  27. Paulus S, Schumann H, Kuhlmann H, Léon J. High-precision laser scanning system for capturing 3D plant architecture and analysing growth of cereal plants. Biosyst Eng. 2014;121:1–11.

  28. Dornbusch T, Michaud O, Xenarios I, Fankhauser C. Differentially phased leaf growth and movements in Arabidopsis depend on coordinated circadian and light regulation. Plant Cell. 2014;26(10):3911–21.

  29. Dupuis J, Kuhlmann H. High-precision surface inspection: uncertainty evaluation within an accuracy range of 15 μm with triangulation-based laser line scanners. J Appl Geod. 2014;8(2):109–18.

  30. Paulus S, Dupuis J, Riedel S, Kuhlmann H. Automated analysis of barley organs using 3D laser scanning: an approach for high throughput phenotyping. Sensors. 2014;14(7):12670–86. https://doi.org/10.3390/s140712670.

  31. Virlet N, Sabermanesh K, Sadeghi-Tehran P, Hawkesford MJ. Field Scanalyzer: an automated robotic field phenotyping platform for detailed crop monitoring. Funct Plant Biol. 2016;44(1):143–53.

  32. Kjaer KH, Ottosen C-O. 3D laser triangulation for plant phenotyping in challenging environments. Sensors. 2015;15(6):13533–47. https://doi.org/10.3390/s150613533.

  33. Teng X, Zhou G, Wu Y, Huang C, Dong W, Xu S. Three-dimensional reconstruction method of rapeseed plants in the whole growth period using RGB-D camera. Sensors. 2021;21(14):4628.

  34. Chebrolu N, Magistri F, Läbe T, Stachniss C. Registration of spatio-temporal point clouds of plants for phenotyping. PLOS ONE. 2021;16(2):0247243.

  35. Paulus S, Dupuis J, Mahlein A-K, Kuhlmann H. Surface feature based classification of plant organs from 3D laserscanned point clouds for plant phenotyping. BMC Bioinform. 2013;14(1):238. https://doi.org/10.1186/1471-2105-14-238.

  36. Paulus S, Behmann J, Mahlein A-K, Plümer L, Kuhlmann H. Low-cost 3D systems: suitable tools for plant phenotyping. Sensors. 2014;14(2):3001–18. https://doi.org/10.3390/s140203001.

  37. Oguchi T, Yuichi SH, Wasklewicz T. Data sources. Dev Earth Surf Process. 2011;15:189–224.

  38. Chaivivatrakul S, Tang L, Dailey MN, Nakarmi AD. Automatic morphological trait characterization for corn plants via 3D holographic reconstruction. Comput Electron Agric. 2014;109:109–23. https://doi.org/10.1016/j.compag.2014.09.005.

  39. Baharav T, Bariya M, Zakhor A. In situ height and width estimation of sorghum plants from 2.5d infrared images. Electron Imaging. 2017;2017(17):122–35. https://doi.org/10.2352/ISSN.24701173.2017.17.COIMG-435.

  40. Kazmi W, Foix S, Alenyà G, Andersen HJ. Indoor and outdoor depth imaging of leaves with time-of-flight and stereo vision sensors: analysis and comparison. ISPRS J Photogramm Remote Sens. 2014;88:128–46.

  41. Izadi S, Kim D, Hilliges O, Molyneaux D, Newcombe R, Kohli P, Shotton J, Hodges S, Freeman D, Davison A, et al. KinectFusion: Real-time 3D reconstruction and interaction using a moving depth camera. In: Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology. 2011; 559–568.

  42. Newcombe RA, Izadi S, Hilliges O, Molyneaux D, Kim D, Davison AJ, Kohi P, Shotton J, Hodges S, Fitzgibbon A. KinectFusion: real-time dense surface mapping and tracking. In: 2011 10th IEEE International Symposium on Mixed and Augmented Reality. 2011; 127–136. IEEE.

  43. Ma X, Zhu K, Guan H, Feng J, Yu S, Liu G. High-throughput phenotyping analysis of potted soybean plants using colorized depth images based on a proximal platform. Remote Sens. 2019;11(9):1085.

  44. Wang L, Zheng L, Wang M. 3D point cloud instance segmentation of lettuce based on PartNet. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022; 1647–1655.

  45. González-Barbosa J-J, Ramírez-Pedraza A, Ornelas-Rodríguez F-J, Cordova-Esparza D-M, González-Barbosa E-A. Dynamic measurement of portos tomato seedling growth using the kinect 2.0 Sensor. Agriculture. 2022;12(4):449. https://doi.org/10.3390/agriculture12040449.

  46. Zhang M, Xu S, Huang Y, Bie Z, Notaguchi M, Zhou J, Wan X, Wang Y, Dong W. Non-destructive measurement of the pumpkin rootstock root phenotype using AZURE KINECT. Plants. 2022;11(9):1144. https://doi.org/10.3390/plants11091144.

  47. Zhang K, Chen H, Wu H, Zhao X, Zhou C. Point cloud registration method for maize plants based on conical surface fitting-ICP. Sci Rep. 2022;12(1):1–15. https://doi.org/10.1038/s41598-022-10921-6.

  48. Li Y, Fan X, Mitra NJ, Chamovitz D, Cohen-Or D, Chen B. Analyzing growing plants from 4D point cloud data. ACM Trans Gr. 2013;32(6):157. https://doi.org/10.1145/2508363.2508368.

  49. Zhang S. High-speed 3D shape measurement with structured light methods: a review. Opt Lasers Eng. 2018;106:119–31.

  50. Woodham RJ. Photometric method for determining surface orientation from multiple images. Opt Eng. 1980;19(1): 191139.

  51. Horn BK. Shape from shading: a method for obtaining the shape of a smooth opaque object from one view; 1970.

  52. Geng J. Structured-light 3D surface imaging: a tutorial. Adv Opt Photonics. 2011;3(2):128–60.

  53. Basri R, Jacobs D, Kemelmacher I. Photometric stereo with general, unknown lighting. Int J Comput Vis. 2007;72(3):239–57.

  54. Treuille A, Hertzmann A, Seitz SM. Example-based stereo with general BRDFs. In: European Conference on Computer Vision. 2004; 457–469. Springer.

  55. Higo T, Matsushita Y, Joshi N, Ikeuchi K. A hand-held photometric stereo camera for 3-D modeling. In: 2009 IEEE 12th International Conference on Computer Vision. 2009; 1234–1241. IEEE.

  56. Dowd T, McInturf S, Li M, Topp CN. Rated-M for mesocosm: allowing the multimodal analysis of mature root systems in 3D. Emerg Top Life Sci. 2021;5(2):249.

  57. Jones DH, Atkinson BS, Ware A, Sturrock CJ, Bishopp A, Wells DM. Preparation, scanning and analysis of duckweed using x-ray computed microtomography. Front Plant Sci. 2021;11:2140.

  58. Phalempin M, Lippold E, Vetterlein D, Schlüter S. An improved method for the segmentation of roots from X-ray computed tomography 3D images: Rootine v. 2. Plant Methods. 2021;17(1):1–19.

  59. Gerth S, Claußen J, Eggert A, Wörlein N, Waininger M, Wittenberg T, Uhlmann N. Semiautomated 3D root segmentation and evaluation based on X-Ray CT imagery. Plant Phenomics. 2021. https://doi.org/10.34133/2021/8747930.

  60. Teramoto S, Tanabata T, Uga Y. RSAtrace3D: robust vectorization software for measuring monocot root system architecture. BMC Plant Biol. 2021;21(1):1–11.

  61. Mairhofer S, Zappala S, Tracy SR, Sturrock C, Bennett M, Mooney SJ, Pridmore T. RooTrak: automated recovery of three-dimensional plant root architecture in soil from X-Ray microcomputed tomography images using visual tracking. Plant Physiol. 2012;158(2):561–9. https://doi.org/10.1104/pp.111.186221.

  62. Metzner R, Eggert A, van Dusschoten D, Pflugfelder D, Gerth S, Schurr U, Uhlmann N, Jahnke S. Direct comparison of MRI and X-ray CT technologies for 3D imaging of root systems in soil: potential and challenges for root trait quantification. Plant Methods. 2015;11(1):17. https://doi.org/10.1186/s13007-015-0060-z.

  63. Schulz H, Postma JA, van Dusschoten D, Scharr H, Behnke S. Plant root system analysis from MRI images. In: Computer vision, imaging and computer graphics. Theory and application. Berlin Heidelberg, Berlin, Heidelberg: Springer. 2013; p. 411–25.

  64. Flavel RJ, Guppy CN, Rabbi SMR, Young IM. An image processing and analysis tool for identifying and analysing complex plant root systems in 3D soil using non-destructive analysis: root1. PLOS ONE. 2017;12(5):1–18. https://doi.org/10.1371/journal.pone.0176433.

  65. Herrero-Huerta M, Raumonen P, Gonzalez-Aguilera D. 4DRoot: root phenotyping software for temporal 3D scans by X-ray computed tomography; 2022. https://doi.org/10.3389/fpls.2022.986856.

  66. Krzyzaniak Y, Cointault F, Loupiac C, Bernaud E, Ott F, Salon C, Laybros A, Han S, Héloir M-C, Adrian M, et al. In situ phenotyping of grapevine root system architecture by 2D or 3D imaging: advantages and limits of three cultivation methods. Front Plant Sci. 2021;12: 638688.

  67. Vontobel P, Lehmann EH, Hassanein R, Frei G. Neutron tomography: method and applications. Physica B: Condens Matter. 2006;385:475–80. https://doi.org/10.1016/j.physb.2006.05.252.

  68. Clark T, Burca G, Boardman R, Blumensath T. Correlative X-ray and neutron tomography of root systems using cadmium fiducial markers. J Microsc. 2020;277(3):170–8.

  69. Moradi AB, Carminati A, Vetterlein D, Vontobel P, Lehmann E, Weller U, Hopmans JW, Vogel H-J, Oswald SE. Three-dimensional visualization and quantification of water content in the rhizosphere. New Phytol. 2011;192(3):653–63.

  70. Menon M, Robinson B, Oswald SE, Kaestner A, Abbaspour KC, Lehmann E, Schulin R. Visualization of root growth in heterogeneously contaminated soil using neutron radiography. Eur J Soil Sci. 2007;58(3):802–10.

  71. Matsushima U, Herppich W, Kardjilov N, Graf W, Hilger A, Manke I. Estimation of water flow velocity in small plants using cold neutron imaging with D2O tracer. Nucl Instrum Methods Phys Res Sect A Accel Spectrom Detect Assoc Equip. 2009;605(1–2):146–9.

  72. Warren JM, Bilheux H, Kang M, Voisin S, Cheng C-L, Horita J, Perfect E. Neutron imaging reveals internal plant water dynamics. Plant Soil. 2013;366(1):683–93.

  73. Zarebanadkouki M, Kim YX, Carminati A. Where do roots take up water? Neutron radiography of water flow into the roots of transpiring plants growing in soil. New Phytol. 2013;199(4):1034–44.

  74. Tötzke C, Kardjilov N, Manke I, Oswald SE. Capturing 3D water flow in rooted soil by ultra-fast neutron tomography. Sci Rep. 2017;7(1):1–9.

  75. Pound MP, French AP, Fozard JA, Murchie EH, Pridmore TP. A patch-based approach to 3D plant shoot phenotyping. Mach Vis Appl. 2016;27(5):767–79.

  76. Pound MP, French AP, Murchie EH, Pridmore TP. Automated recovery of three-dimensional models of plant shoots from multiple color images. Plant Physiol. 2014;166(4):1688–98.

  77. Biskup B, Scharr H, Schurr U, Rascher U. A stereo imaging system for measuring structural parameters of plant canopies. Plant Cell Environ. 2007;30(10):1299–308. https://doi.org/10.1111/j.1365-3040.2007.01702.x.

  78. Burgess AJ, Retkute R, Pound MP, Mayes S, Murchie EH. Image-based 3D canopy reconstruction to determine potential productivity in complex multi-species crop systems. Ann Bot. 2017;119(4):517–32.

  79. Jay S, Rabatel G, Hadoux X, Moura D, Gorretta N. In-field crop row phenotyping from 3D modeling performed using structure from motion. Comput Electron Agric. 2015;110:70–7. https://doi.org/10.1016/j.compag.2014.09.021.

  80. Apelt F, Breuer D, Nikoloski Z, Stitt M, Kragler F. Phytotyping4D: a light-field imaging system for non-invasive and accurate monitoring of spatio-temporal plant growth. Plant J. 2015;82(4):693–706.

  81. Gibbs JA, Pound M, French AP, Wells DM, Murchie E, Pridmore T. Plant phenotyping: an active vision cell for three-dimensional plant shoot reconstruction. Plant Physiol. 2018;178(2):524–34.

  82. Zermas D, Morellas V, Mulla D, Papanikolopoulos N. 3D model processing for high throughput phenotype extraction-the case of corn. Comput Electron Agric. 2020;172: 105047. https://doi.org/10.1016/j.compag.2019.105047.

  83. Santos TT, Koenigkan LV, Barbedo JGA, Rodrigues GC. 3D plant modeling: localization, mapping and segmentation for plant phenotyping using a single hand-held camera. In: Agapito L, Bronstein MM, Rother C, editors. Computer vision—ECCV 2014 workshops. Cham: Springer International Publishing; 2015. p. 247–63.

  84. Zhou J, Fu X, Zhou S, Zhou J, Ye H, Nguyen HT. Automated segmentation of soybean plants from 3D point cloud using machine learning. Comput Electron Agric. 2019;162:143–53.

  85. Rose JC, Paulus S, Kuhlmann H. Accuracy analysis of a multi-view stereo approach for phenotyping of tomato plants at the organ level. Sensors. 2015;15(5):9651–65. https://doi.org/10.3390/s150509651.

  86. Wolff K, Kim C, Zimmer H, Schroers C, Botsch M, Sorkine-Hornung O, Sorkine-Hornung A. Point Cloud Noise and Outlier Removal for Image-Based 3D Reconstruction. In: Fourth International Conference on 3D Vision (3DV). 2016; 118–127. https://doi.org/10.1109/3DV.2016.20. IEEE.

  87. Marr D, Poggio T. A computational theory of human stereo vision. Proc Royal Soc Lond Ser B Biol Sci. 1979;204(1156):301–28.

  88. Xiong J, He Z, Lin R, Liu Z, Bu R, Yang Z, Peng H, Zou X. Visual positioning technology of picking robots for dynamic litchi clusters with disturbance. Comput Electron Agric. 2018;151:226–37.

  89. Xiong X, Yu L, Yang W, Liu M, Jiang N, Wu D, Chen G, Xiong L, Liu K, Liu Q. A high-throughput stereo-imaging system for quantifying rape leaf traits during the seedling stage. Plant Methods. 2017;13(1):1–17.

  90. Chen M, Tang Y, Zou X, Huang K, Huang Z, Zhou H, Wang C, Lian G. Three-dimensional perception of orchard banana central stock enhanced by adaptive multi-vision technology. Comput Electron Agric. 2020;174: 105508.

  91. Iglhaut J, Cabo C, Puliti S, Piermattei L, O’Connor J, Rosette J. Structure from motion photogrammetry in forestry: a Review. Curr For Rep. 2019;5(3):155–68.

  92. Lou L, Liu Y, Sheng M, Han J, Doonan JH. A Cost-Effective Automatic 3D reconstruction pipeline for plants using multi-view images. In: Conference Towards Autonomous Robotic Systems. 2014; 221–230. https://doi.org/10.1007/978-3-319-10401-0_20. Springer.

  93. Tomasi C, Kanade T. Shape and motion from image streams under orthography: a factorization method. Int J Comput Vis. 1992;9(2):137–54. https://doi.org/10.1007/BF00129684.

  94. Quan L, Tan P, Zeng G, Yuan L, Wang J, Kang SB. Image-based plant modeling. In: ACM SIGGRAPH 2006 Papers, 2006:599–604. https://doi.org/10.1145/1179352.1141929.

  95. Liu S, Barrow CS, Hanlon M, Lynch JP, Bucksch A. DIRT/3D: 3D root phenotyping for field-grown maize (Zea mays). Plant Physiol. 2021;187(2):739–57.

  96. Baker S, Kanade T, et al. Shape-from-silhouette across time part I: theory and algorithms. Int J Comput Vis. 2005;62(3):221–47.

  97. Seitz SM, Dyer CR. Photorealistic scene reconstruction by voxel coloring. Int J Comput Vis. 1999;35(2):151–73. https://doi.org/10.1023/A:1008176507526.

  98. Kutulakos KN, Seitz SM. A theory of shape by space carving. Int J Comput Vis. 2000;38(3):199–218. https://doi.org/10.1023/A:1008191222954.

  99. Choudhury SD, Maturu S, Samal A, Stoerger V, Awada T. Leveraging image analysis to compute 3D plant phenotypes based on voxel-grid plant reconstruction. Front Plant Sci. 2020;11: 521431.

  100. Szeliski R. Computer vision: algorithms and applications. Seattle, Washington: Springer International Publishing; 2022. https://doi.org/10.1007/978-3-030-34372-9.

  101. Golbach F, Kootstra G, Damjanovic S, Otten G, van de Zedde R. Validation of plant part measurements using a 3D reconstruction method suitable for high-throughput seedling phenotyping. Mach Vis Appl. 2016;27(5):663–80. https://doi.org/10.1007/s00138-015-0727-5.

  102. Koenderink N, Wigham M, Golbach F, Otten G, Gerlich R, van de Zedde H. MARVIN: high speed 3D imaging for seedling classification. In: Precision Agriculture’09: Papers Presented at the 7th European Conference on Precision Agriculture, Wageningen, The Netherlands, July 6-8, 2009, 2009:279–286. https://doi.org/10.3920/978-90-8686-664-9.

  103. Klodt M, Cremers D. High-resolution plant shape measurements from multi-view stereo reconstruction. In: European Conference on Computer Vision, 2014;pp. 174–184. Springer.

  104. Scharr H, Briese C, Embgenbroich P, Fischbach A, Fiorani F, Müller-Linow M. Fast high resolution volume carving for 3D plant shoot reconstruction. Front Plant Sci. 2017;8:1680.

  105. Gaillard M, Miao C, Schnable JC, Benes B. Voxel carving-based 3D reconstruction of sorghum identifies genetic determinants of light interception efficiency. Plant Direct. 2020;4(10):00255. https://doi.org/10.1002/pld3.255.

  106. Polder G, Hofstee JW. Phenotyping large tomato plants in the greenhouse using a 3D light-field camera. In: 2014 Montreal, Quebec Canada July 13–July 16, 2014, p. 1 (2014). American Society of Agricultural and Biological Engineers.

  107. Ivanov N, Boissard P, Chapron M, Andrieu B. Computer stereo plotting for 3-D reconstruction of a maize canopy. Agric For Meteorol. 1995;75(1–3):85–102. https://doi.org/10.1016/0168-1923(94)02204-W.

  108. Müller-Linow M, Pinto-Espinosa F, Scharr H, Rascher U. The leaf angle distribution of natural plant populations: assessing the canopy with a novel software tool. Plant Methods. 2015;11(1):11. https://doi.org/10.1186/s13007-015-0052-z.

  109. Xia C, Wang L, Chung B-K, Lee J-M. In situ 3D segmentation of individual plant leaves using a RGB-D camera for agricultural automation. Sensors. 2015;15(8):20463–79. https://doi.org/10.3390/s150820463.

  110. Chéné Y, Rousseau D, Lucidarme P, Bertheloot J, Caffier V, Morel P, Belin É, Chapeau-Blondeau F. On the use of depth camera for 3D phenotyping of entire plants. Comput Electron Agric. 2012;82:122–7. https://doi.org/10.1016/j.compag.2011.12.007.

  111. Li D, Cao Y, Shi G, Cai X, Chen Y, Wang S, Yan S. An overlapping-free leaf segmentation method for plant point clouds. IEEE Access. 2019;7:129054–70.

  112. Besl PJ, McKay ND. Method for registration of 3-D shapes. In: Sensor Fusion IV: Control Paradigms and Data Structures, 1992;1611:586–607. https://doi.org/10.1117/12.57955. International Society for Optics and Photonics.

  113. Javaheri A, Brites C, Pereira F, Ascenso J. Subjective and objective quality evaluation of 3D point cloud denoising algorithms. In: 2017 IEEE International Conference on Multimedia & Expo Workshops (ICMEW), 2017:1–6. https://doi.org/10.1109/ICMEW.2017.8026263. IEEE.

  114. Rakotosaona M-J, La Barbera V, Guerrero P, Mitra NJ, Ovsjanikov M. POINTCLEANNET: learning to denoise and remove outliers from dense point clouds. Comput Gr Forum. 2020;39:185–203. https://doi.org/10.1111/cgf.13753.

  115. Xie Y, Tian J, Zhu XX. Linking points with labels in 3D: a review of point cloud semantic segmentation. IEEE Geosci Remote Sens Mag. 2020;8(4):38–59.

  116. Haque SM, Govindu VM. Robust feature-preserving denoising of 3D point clouds. In: 2016 Fourth International Conference on 3D Vision (3DV), 2016:83–91. https://doi.org/10.1109/3DV.2016.17. IEEE.

  117. Thapa S, Zhu F, Walia H, Yu H, Ge Y. A novel LiDAR-based instrument for high-throughput, 3D measurement of morphological traits in maize and sorghum. Sensors. 2018;18(4):1187.

  118. Baumgart BG. Geometric modeling for computer vision. PhD thesis; 1974.

  119. Culbertson WB, Malzbender T, Slabaugh G. Generalized voxel coloring. In: Triggs B, Zisserman A, Szeliski R, editors. Vision algorithms theory and practice. Berlin: Springer; 1999. p. 100–15.

  120. Dyer CR, Davis LS. Volumetric scene reconstruction from multiple views. Boston, MA: Springer; 2001. p. 469–89. https://doi.org/10.1007/978-1-4615-1529-6-16.

  121. Kumar P, Connor J, Mikiavcic S. High-throughput 3D reconstruction of plant shoots for phenotyping. In: 2014 13th International Conference on Control Automation Robotics & Vision (ICARCV), 2014:211–216. https://doi.org/10.1109/ICARCV.2014.7064306. IEEE.

  122. Phattaralerphong J, Sinoquet H. A method for 3D reconstruction of tree crown volume from photographs: assessment with 3D-digitized plants. Tree Physiol. 2005;25(10):1229–42. https://doi.org/10.1093/treephys/25.10.1229.

  123. Kumar P, Cai J, Miklavcic S. 3D reconstruction, modelling and analysis of in situ root system architecture. In: Proceedings of the 20th International Congress on Modelling and Simulation (MODSIM2013), 2013:517–523.

  124. Han X-F, Jin JS, Wang M-J, Jiang W, Gao L, Xiao L. A review of algorithms for filtering the 3D point cloud. Signal Process Image Commun. 2017;57:103–12. https://doi.org/10.1016/j.image.2017.05.009.

  125. Ma Z, Sun D, Xu H, Zhu Y, He Y, Cen H. Optimization of 3d point clouds of oilseed rape plants based on time-of-flight cameras. Sensors. 2021;21(2):664.

  126. Fang W, Feng H, Yang W, Duan L, Chen G, Xiong L, Liu Q. High-throughput volumetric reconstruction for 3D wheat plant architecture studies. J Innov Opt Health Sci. 2016;9(05):1650037.

  127. Butkiewicz T. Low-cost coastal mapping using Kinect v2 time-of-flight cameras. In: 2014 Oceans-St. John’s, 2014:1–9. IEEE.

  128. Fischler MA, Bolles RC. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Commun ACM. 1981;24(6):381–95. https://doi.org/10.1145/358669.358692.

  129. Garrido M, Paraforos DS, Reiser D, Vázquez Arellano M, Griepentrog HW, Valero C. 3D maize plant reconstruction based on georeferenced overlapping LiDAR point clouds. Remote Sens. 2015;7(12):17077–96. https://doi.org/10.3390/rs71215870.

  130. Liu F, Song Q, Zhao J, Mao L, Bu H, Hu Y, Zhu X-G. Canopy occupation volume as an indicator of canopy photosynthetic capacity. New Phytol. 2021;232:941.

  131. Lancaster P, Salkauskas K. Surfaces generated by moving least squares methods. Math Comput. 1981;37(155):141–58. https://doi.org/10.1090/S0025-5718-1981-0616367-1.

  132. Ester M, Kriegel H-P, Sander J, Xu X, et al. A density-based algorithm for discovering clusters in large spatial databases with noise. In: KDD, 1996;96:226–231.

  133. Wu S, Wen W, Wang Y, Fan J, Wang C, Gou W, Guo X. MVS-Pheno: a portable and low-cost phenotyping platform for maize shoots using multiview stereo 3D reconstruction. Plant Phenomics. 2020. https://doi.org/10.34133/2020/1848437.

  134. Hu F, Zhao Y, Wang W, Huang X. Discrete point cloud filtering and searching based on VGSO algorithm. In: Proceedings 27th European Conference on Modelling and Simulation. 2013; 850–856.

  135. Straub J, Reiser D, Griepentrog HW. Approach for modeling single branches of meadow orchard trees with 3D point clouds. Wageningen: Wageningen Academic Publishers; 2021.

  136. Cook RL. Stochastic sampling in computer graphics. ACM Trans Gr (TOG). 1986;5(1):51–72. https://doi.org/10.1145/7529.8927.

  137. Rosli NAIM, Ramli A. Mapping bootstrap error for bilateral smoothing on point set. In: AIP Conference Proceedings, 2014;1605:149–154. American Institute of Physics.

  138. Lindner M, Schiller I, Kolb A, Koch R. Time-of-flight sensor calibration for accurate range sensing. Comput Vis Image Underst. 2010;114(12):1318–28.

  139. Hussmann S, Knoll F, Edeler T. Modulation method including noise model for minimizing the wiggling error of tof cameras. IEEE Trans Instrum Meas. 2013;63(5):1127–36.

  140. Lefloch D, Nair R, Lenzen F, Schäfer H, Streeter L, Cree MJ, Koch R, Kolb A. Technical foundation and calibration methods for time-of-flight cameras. In: Grzegorzek M, Theobalt C, Koch R, Kolb A, editors. Time-of-flight and depth imaging. Sensors, algorithms, and applications. Berlin: Springer; 2013. p. 3–24.

  141. He Y, Chen S. Error correction of depth images for multiview time-of-flight vision sensors. Int J Adv Robot Syst. 2020;17(4):1729881420942379.

  142. Tomasi C, Manduchi R. Bilateral filtering for gray and color images. In: Sixth International Conference on Computer Vision (IEEE Cat. No. 98CH36271), 1998:839–846. IEEE.

  143. Hua K-L, Lo K-H, Wang Y-CFF. Extended guided filtering for depth map upsampling. IEEE MultiMedia. 2015;23(2):72–83.

  144. Wang L, Fei M, Wang H, Ji Z, Yang A. Distance overestimation error correction method (DOEC) of time of flight camera based on pinhole model. In: Intelligent Computing and Internet of Things, pp. 281–290. Springer; 2018.

  145. Rabbani T, Dijkman S, van den Heuvel F, Vosselman G. An integrated approach for modelling and global registration of point clouds. ISPRS J Photogramm Remote Sens. 2007;61(6):355–70.

  146. Chui H, Rangarajan A. A feature registration framework using mixture models. In: Proceedings IEEE Workshop on Mathematical Methods in Biomedical Image Analysis. MMBIA-2000 (Cat. No. PR00737), 2000:190–197. IEEE.

  147. Tsin Y, Kanade T. A correlation-based approach to robust point set registration. In: European Conference on Computer Vision, 2004:558–569. Springer.

  148. Zhang J, Huan Z, Xiong W. An adaptive gaussian mixture model for non-rigid image registration. J Math Imaging Vis. 2012;44(3):282–94.

  149. Somayajula S, Joshi AA, Leahy RM. Non-rigid Image Registration Using Gaussian Mixture Models. In: International Workshop on Biomedical Image Registration, 2012:286–295. Springer.

  150. Chaudhury A, Barron JL. 3D Phenotyping of Plants. In: 3D Imaging, Analysis and Applications. Berlin: Springer; 2020. p. 699–732.

  151. Rusinkiewicz S, Levoy M. Efficient variants of the ICP algorithm. In: Proceedings Third International Conference on 3-D Digital Imaging and Modeling. 2001; 145–152. IEEE.

  152. Arun KS, Huang TS, Blostein SD. Least-squares fitting of two 3-d point sets. IEEE Trans Pattern Anal Mach Intell. 1987;5:698–700.

  153. Henry P, Krainin M, Herbst E, Ren X, Fox D. RGB-D mapping: using Kinect-style depth cameras for dense 3D modeling of indoor environments. Int J Robot Res. 2012;31(5):647–63.

  154. Huang X, Hu M. 3D reconstruction based on model registration using RANSAC-ICP algorithm. In: Pan Z, Cheok AD, Mueller W, Zhang M, editors. Transactions on edutainment XI. Berlin: Springer; 2015. p. 46–51.

  155. Wang Y, Chen Y. Non-destructive measurement of three-dimensional plants based on point cloud. Plants. 2020;9(5):571.

  156. Chaudhury A, Ward C, Talasaz A, Ivanov AG, Brophy M, Grodzinski B, Hüner NP, Patel RV, Barron JL. Machine vision system for 3D plant phenotyping. IEEE/ACM Trans Comput Biol Bioinform. 2018;16(6):2009–22.

  157. Jian B, Vemuri BC. Robust point set registration using gaussian mixture models. IEEE Trans Pattern Anal Mach Intell. 2010;33(8):1633–45.

  158. Jian B, Vemuri BC. A robust algorithm for point set registration using mixture of Gaussians. In: Tenth IEEE International Conference on Computer Vision (ICCV’05) Volume 1, 2005; 2: 1246–1251. IEEE.

  159. Myronenko A, Song X. Point set registration: coherent point drift. IEEE Trans Pattern Anal Mach Intell. 2010;32(12):2262–75.

  160. Schunck D, Magistri F, Rosu RA, Cornelißen A, Chebrolu N, Paulus S, Léon J, Behnke S, Stachniss C, Kuhlmann H, et al. Pheno4d: a spatio-temporal dataset of maize and tomato plant point clouds for phenotyping and advanced plant analysis. PLOS ONE. 2021;16(8):0256340.

  161. Magistri F, Chebrolu N, Stachniss C. Segmentation-based 4D registration of plants point clouds for phenotyping. In: 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2020:2433–2439. IEEE.

  162. Lorensen WE, Cline HE. Marching cubes: a high resolution 3D surface construction algorithm. In: Proceedings of the 14th Annual Conference on Computer Graphics and Interactive Techniques. SIGGRAPH ’87, 1987;21:163–169. ACM, New York, NY, USA. https://doi.org/10.1145/37401.37422

  163. Edelsbrunner H, Mücke EP. Three-dimensional alpha shapes. ACM Trans Gr. 1994;13(1):43–72. https://doi.org/10.1145/174462.156635.

  164. Paproki A, Sirault X, Berry S, Furbank R, Fripp J. A novel mesh processing based technique for 3D plant analysis. BMC Plant Biol. 2012;12(1):63. https://doi.org/10.1186/1471-2229-12-63.

  165. McCormick RF, Truong SK, Mullet JE. 3D sorghum reconstructions from depth images identify QTL regulating shoot architecture. Plant Physiol. 2016. https://doi.org/10.1104/pp.16.00948.

  166. Meagher DJR. Octree encoding: a new technique for the representation, manipulation and display of arbitrary 3-D objects by computer. Technical Report IPL-TR-80-111, Image Processing Laboratory, Electrical and Systems Engineering Department, Rensselaer Polytechnic Institute; 1980.

  167. Bucksch A, Lindenbergh R. CAMPINO—A skeletonization method for point cloud processing. ISPRS J Photogramm Remote Sens. 2008;63(1):115–27. https://doi.org/10.1016/j.isprsjprs.2007.10.004.

  168. Bucksch A, Lindenbergh R, Menenti M. SkelTre - fast skeletonisation for imperfect point cloud data of botanic trees. Vis Comput. 2010;26(10):1283–300.

  169. Duan T, Chapman SC, Holland E, Rebetzke GJ, Guo Y, Zheng B. Dynamic quantification of canopy structure to characterize early plant vigour in wheat genotypes. J Exp Bot. 2016;67(15):4523–34. https://doi.org/10.1093/jxb/erw227.

  170. Zhu R, Sun K, Yan Z, Yan X, Yu J, Shi J, Hu Z, Jiang H, Xin D, Zhang Z, et al. Analysing the phenotype development of soybean plants using low-cost 3D reconstruction. Sci Rep. 2020;10(1):1–17.

  171. Dijkstra EW. A note on two problems in connexion with graphs. Numerische Mathematik. 1959;1(1):269–71. https://doi.org/10.1007/BF01386390.

  172. Prim RC. Shortest connection networks and some generalizations. Bell Syst Tech J. 1957;36(6):1389–401. https://doi.org/10.1002/j.1538-7305.1957.tb01515.x.

  173. Shi J, Malik J. Normalized cuts and image segmentation. IEEE Trans Pattern Anal Mach Intell. 2000;22(8):888–905. https://doi.org/10.1109/34.868688.

  174. Hétroy-Wheeler F, Casella E, Boltcheva D. Segmentation of tree seedling point clouds into elementary units. Int J Remote Sens. 2016;37(13):2881–907. https://doi.org/10.1080/01431161.2016.1190988.

  175. Bucksch A. A practical introduction to skeletons for the plant sciences. Appl Plant Sci. 2014;2(8):1400005. https://doi.org/10.3732/apps.1400005.

  176. Cornea ND, Silver D, Min P. Curve-skeleton properties, applications, and algorithms. IEEE Trans Vis Comput Gr. 2007;13(3):530–48. https://doi.org/10.1109/TVCG.2007.1002.

  177. Livny Y, Yan F, Olson M, Chen B, Zhang H, El-Sana J. Automatic reconstruction of tree skeletal structures from point clouds. ACM Trans Gr. 2010;29(6):151. https://doi.org/10.1145/1882261.1866177.

  178. Mei J, Zhang L, Wu S, Wang Z, Zhang L. 3D tree modeling from incomplete point clouds via optimization and L1-MST. Int J Geogr Inf Sci. 2017;31(5):999–1021. https://doi.org/10.1080/13658816.2016.1264075.

  179. Bucksch A, Fleck S. Automated detection of branch dimensions in woody skeletons of fruit tree canopies. Photogramm Eng Remote Sens. 2011;77(3):229–40. https://doi.org/10.14358/PERS.77.3.229.

  180. Côté J-F, Widlowski J-L, Fournier RA, Verstraete MM. The structural and radiative consistency of three-dimensional tree reconstructions from terrestrial lidar. Remote Sens Environ. 2009;113(5):1067–81. https://doi.org/10.1016/j.rse.2009.01.017.

  181. Verroust A, Lazarus F. Extracting skeletal curves from 3D scattered data. In: Proceedings Shape Modeling International’99. International Conference on Shape Modeling and Applications, 1999:194–201. https://doi.org/10.1109/SMA.1999.749340. IEEE.

  182. Delagrange S, Jauvin C, Rochon P. PypeTree: a tool for reconstructing tree perennial tissues from point clouds. Sensors. 2014;14(3):4271–89. https://doi.org/10.3390/s140304271.

  183. Ziamtsov I, Navlakha S. Machine learning approaches to improve three basic plant phenotyping tasks using three-dimensional point clouds. Plant Physiol. 2019;181(4):1425–40.

  184. Cao J, Tagliasacchi A, Olson M, Zhang H, Su Z. Point cloud skeletons via laplacian based contraction. In: 2010 Shape Modeling International Conference, 2010:187–197. https://doi.org/10.1109/SMI.2010.25. IEEE.

  185. Chaudhury A, Godin C. Skeletonization of plant point cloud data using stochastic optimization framework. Front Plant Sci. 2020;11:773.

  186. Wu S, Wen W, Xiao B, Guo X, Du J, Wang C, Wang Y. An accurate skeleton extraction approach from 3D point clouds of maize plants. Front Plant Sci. 2019;10:248.

  187. Clark RT, MacCurdy RB, Jung JK, Shaff JE, McCouch SR, Aneshansley DJ, Kochian LV. Three-dimensional root phenotyping with a novel imaging and software platform. Plant physiol. 2011;156(2):455–65.

  188. Kaur D, Kaur Y. Various image segmentation techniques: a review. Int J Comput Sci Mob Comput. 2014;3(5):809–14.

  189. Castillo E, Liang J, Zhao H. Point cloud segmentation and denoising via constrained nonlinear least squares normal estimates. In: Breub M, Bruckstein A, Maragos P, editors. Innovations for shape analysis. Berlin: Springer; 2013. p. 283–99. https://doi.org/10.1007/978-3-642-34141-0_13.

  190. Kulwa F, Li C, Zhao X, Cai B, Xu N, Qi S, Chen S, Teng Y. A state-of-the-art survey for microorganism image segmentation methods and future potential. IEEE Access. 2019;7:100243–69.

  191. Ge Y, Bai G, Stoerger V, Schnable JC. Temporal dynamics of maize plant growth, water use, and leaf water content using automated high throughput RGB and hyperspectral imaging. Comput Electron Agric. 2016;127:625–32. https://doi.org/10.1016/j.compag.2016.07.028.

  192. Das Choudhury S, Bashyam S, Qiu Y, Samal A, Awada T. Holistic and component plant phenotyping using temporal image sequence. Plant Methods. 2018;14(1):1–21.

  193. Minervini M, Abdelsamea MM, Tsaftaris SA. Image-based plant phenotyping with incremental learning and active contours. Ecol Inform. 2014;23:35–48.

  194. Bosilj P, Duckett T, Cielniak G. Connected attribute morphology for unified vegetation segmentation and classification in precision agriculture. Comput Ind. 2018;98:226–40.

  195. Uchiyama H, Sakurai S, Mishima M, Arita D, Okayasu T, Shimada A, Taniguchi R-I. Easy-to-setup 3D phenotyping platform for KOMATSUNA dataset. In: The IEEE International Conference on Computer Vision (ICCV) Workshops, 2017:2038–2045. https://doi.org/10.1109/ICCVW.2017.239. IEEE.

  196. Minervini M, Fischbach A, Scharr H, Tsaftaris SA. Finely-grained annotated datasets for image-based plant phenotyping. Pattern Recognit Lett. 2016;81:80–9. https://doi.org/10.1016/j.patrec.2015.10.013.

  197. Rabbani T, Van Den Heuvel F, Vosselmann G. Segmentation of point clouds using smoothness constraint. Int Arch Photogramm Remote Sens Spat Inform Sci. 2006;36(5):248–53.

  198. Grilli E, Menna F, Remondino F. A review of point clouds segmentation and classification algorithms. Int Arch Photogramm Remote Sens Spat Inform Sci. 2017;42:339. https://doi.org/10.5194/isprs-archives-XLII-2-W3-339-2017.

  199. Lomte S, Janwale A. Plant leaves image segmentation techniques: a review. Artic Int J Comput Sci Eng. 2017;5:147–50.

  200. Bell J, Dee HM. Leaf segmentation through the classification of edges. arXiv Preprint. 2019. https://doi.org/10.48550/arXiv.1904.03124.

  201. Thendral R, Suhasini A, Senthil N. A comparative analysis of edge and color based segmentation for orange fruit recognition. In: 2014 International Conference on Communication and Signal Processing, 2014:463–466. https://doi.org/10.1109/ICCSP.2014.6949884. IEEE.

  202. Noble SD, Brown RB. Selection and testing of spectral bands for edge-based leaf segmentation. In: 2006 ASAE Annual Meeting, 2006:1. American Society of Agricultural and Biological Engineers.

  203. Miao T, Wen W, Li Y, Wu S, Zhu C, Guo X. Label3DMaize: toolkit for 3D point cloud data annotation of maize shoots. GigaScience. 2021;10(5):031.

  204. Jin S, Su Y, Wu F, Pang S, Gao S, Hu T, Liu J, Guo Q. Stem-leaf segmentation and phenotypic trait extraction of individual maize using terrestrial LiDAR data. IEEE Trans Geosci Remote Sens. 2018;57(3):1336–46.

  205. Huang X, Zheng S, Gui L. Automatic measurement of morphological traits of typical leaf samples. Sensors. 2021;21(6):2247.

  206. Attene M, Falcidieno B, Spagnuolo M. Hierarchical mesh segmentation based on fitting primitives. Vis Comput. 2006;22(3):181–93. https://doi.org/10.1007/s00371-006-0375-x.

  207. Shamir A. A survey on mesh segmentation techniques. Comput Gra Forum. 2008;27(6):1539–56. https://doi.org/10.1111/j.1467-8659.2007.01103.x.

  208. Vieira M, Shimada K. Surface mesh segmentation and smooth surface extraction through region growing. Comput Aided Geom Des. 2005;22(8):771–92. https://doi.org/10.1016/j.cagd.2005.03.006.

  209. Nguyen CV, Fripp J, Lovell DR, Furbank R, Kuffner P, Daily H, Sirault X. 3D scanning system for automatic high-resolution plant phenotyping. In: 2016 International Conference on Digital Image Computing: Techniques and Applications (DICTA), 2016:1–8. https://doi.org/10.1109/DICTA.2016.7796984. IEEE.

  210. Bali A, Singh SN. A review on the strategies and techniques of image segmentation. In: 2015 Fifth International Conference on Advanced Computing & Communication Technologies, 2015:113–120. https://doi.org/10.1109/ACCT.2015.63. IEEE.

  211. Narkhede H. Review of image segmentation techniques. Int J Sci Mod Eng. 2013;1(8):54–61.

  212. Miao T, Zhu C, Xu T, Yang T, Li N, Zhou Y, Deng H. Automatic stem-leaf segmentation of maize shoots using three-dimensional point cloud. Comput Electron Agric. 2021;187: 106310. https://doi.org/10.1016/j.compag.2021.106310.

  213. Fukunaga K, Hostetler L. The estimation of the gradient of a density function, with applications in pattern recognition. IEEE Trans Inform Theory. 1975;21(1):32–40.

  214. Cheng Y. Mean shift, mode seeking, and clustering. IEEE Trans Pattern Anal Mach Intell. 1995;17(8):790–9.

  215. Comaniciu D, Meer P. Mean shift: a robust approach toward feature space analysis. IEEE Trans Pattern Anal Mach Intell. 2002;24(5):603–19.

  216. Zhou H, Yuan Y, Shi C. Object tracking using SIFT features and mean shift. Comput Vis Image Underst. 2009;113(3):345–52.

  217. Donath WE, Hoffman AJ. Lower bounds for the partitioning of graphs. In: Selected Papers of Alan J Hoffman: With Commentary. 2003; 437–442. World Scientific. https://doi.org/10.1142/9789812796936_0044.

  218. von Luxburg U. A tutorial on spectral clustering. Stat Comput. 2007;17(4):395–416. https://doi.org/10.1007/s11222-007-9033-z.

  219. Boltcheva D, Casella E, Cumont R, Hétroy F. A spectral clustering approach of vegetation components for describing plant topology and geometry from terrestrial waveform LiDAR data. In: Sievänen R, Nikinmaa E, Godin C, Lintunen A, Nygren P, editors. 7th International Conference on Functional-Structural Plant Models (FSPM2013). Finland: Saariselkä; 2013.

  220. Liu R, Zhang H. Segmentation of 3D meshes through spectral clustering. In: 12th Pacific Conference on Computer Graphics and Applications (PG 2004) Proceedings, 2004:298–305. https://doi.org/10.1109/PCCGA.2004.1348360. IEEE.

  221. Dey D, Mummert L, Sukthankar R. Classification of plant structures from uncalibrated image sequences. In: 2012 IEEE Workshop on Applications of Computer Vision (WACV), 2012:329–336. https://doi.org/10.1109/WACV.2012.6163017. IEEE.

  222. Snavely N, Seitz SM, Szeliski R. Photo tourism: exploring photo collections in 3D. ACM Trans Gr (TOG). 2006;25:835–46.

  223. Moriondo M, Leolini L, Staglianò N, Argenti G, Trombi G, Brilli L, Dibari C, Leolini C, Bindi M. Use of digital images to disclose canopy architecture in olive tree. Scientia Horticulturae. 2016;209:1–13. https://doi.org/10.1016/j.scienta.2016.05.021.

  224. Rusu RB, Blodow N, Marton ZC, Beetz M. Aligning point cloud views using persistent feature histograms. In: 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2008:3384–3391. https://doi.org/10.1109/IROS.2008.4650967. IEEE.

  225. Rusu RB, Blodow N, Beetz M. Fast point feature histograms (FPFH) for 3D registration. In: 2009 IEEE International Conference on Robotics and Automation. 2009; 3212–3217. https://doi.org/10.1109/ROBOT.2009.5152473. IEEE.

  226. Wahabzada M, Paulus S, Kersting K, Mahlein A-K. Automated interpretation of 3D laserscanned point clouds for plant organ segmentation. BMC Bioinform. 2015;16(1):248. https://doi.org/10.1186/s12859-015-0665-2.

  227. Sodhi P, Vijayarangan S, Wettergreen D. In-field segmentation and identification of plant structures using 3D imaging. In: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 2017; 5180–5187.

  228. Krähenbühl P, Koltun V. Efficient inference in fully connected CRFs with Gaussian edge potentials. In: Shawe-Taylor J, Zemel RS, Bartlett PL, Pereira F, Weinberger KQ, editors. Advances in neural information processing systems 24. Red Hook, NY, USA: Curran Associates Inc; 2011. p. 109–17.

  229. Zhu F, Thapa S, Gao T, Ge Y, Walia H, Yu H. 3D reconstruction of plant leaves for high-throughput phenotyping. In: 2018 IEEE International Conference on Big Data (Big Data). 2018; 4285–4293. https://doi.org/10.1109/BigData.2018.8622428. IEEE.

  230. Plass M, Stone M. Curve-fitting with piecewise parametric cubics. In: Proceedings of the 10th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '83). 1983; 229–239.

  231. Pratt V. Direct least-squares fitting of algebraic surfaces. ACM SIGGRAPH Comput Gr. 1987;21(4):145–52.

  232. Fleishman S, Cohen-Or D, Silva CT. Robust moving least-squares fitting with sharp features. ACM Trans Gr (TOG). 2005;24(3):544–52.

  233. Cohen-Steiner D, Da F. A greedy Delaunay-based surface reconstruction algorithm. Vis Comput. 2004;20(1):4–16.

  234. Da TKF, Cohen-Steiner D. Advancing front surface reconstruction. CGAL User and Reference Manual; CGAL, 2020;5. https://doc.cgal.org/latest/Advancing_front_surface_reconstruction/index.html.

  235. Field DA. Laplacian smoothing and Delaunay triangulations. Commun Appl Numer Methods. 1988;4(6):709–12.

  236. Tiller W. Rational B-splines for curve and surface representation. IEEE Comput Gr Appl. 1983;3(6):61–9. https://doi.org/10.1109/MCG.1983.263244.

  237. Wang W, Pottmann H, Liu Y. Fitting B-spline curves to point clouds by curvature-based squared distance minimization. ACM Trans Gr. 2006;25(2):214–38. https://doi.org/10.1145/1138450.1138453.

  238. Santos T, Ueda J. Automatic 3D plant reconstruction from photographies, segmentation and classification of leaves and internodes using clustering. In: Sievänen R, Nikinmaa E, Godin C, Lintunen A, Nygren P, editors. 7th International Conference on Functional-Structural Plant Models (FSPM2013). Saariselkä, Finland; 2013. p. 95–7.

  239. Gélard W, Burger P, Casadebaig P, Langlade N, Debaeke P, Devy M, Herbulot A. 3D plant phenotyping in sunflower using architecture-based organ segmentation from 3D point clouds. In: 5th International Workshop on Image Analysis Methods for the Plant Sciences, Angers, France; 2016.

  240. Gélard W, Devy M, Herbulot A, Burger P. Model-based segmentation of 3D point clouds for phenotyping sunflower plants. In: 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISAPP 2017), vol. 4. Porto, Portugal, 2017:459–467. https://doi.org/10.5220/0006126404590467.

  241. Pfeifer N, Gorte B, Winterhalder D, et al. Automatic reconstruction of single trees from terrestrial laser scanner data. In: Proceedings of 20th ISPRS Congress, vol. XXXV, 2004:114–119. Istanbul, Turkey.

  242. Ziamtsov I, Navlakha S. Plant 3D (P3D): a plant phenotyping toolkit for 3D point clouds. Bioinformatics. 2020;36(12):3949–50.

  243. Mathan J, Bhattacharya J, Ranjan A. Enhancing crop yield by optimizing plant developmental features. Development. 2016;143(18):3283–94.

  244. Sievänen R, Godin C, DeJong TM, Nikinmaa E. Functional-structural plant models: a growing paradigm for plant studies. Ann Bot. 2014;114(4):599–603.

  245. Topp CN, Iyer-Pascuzzi AS, Anderson JT, Lee C-R, Zurek PR, Symonova O, Zheng Y, Bucksch A, Mileyko Y, Galkovskyi T, Moore BT, Harer J, Edelsbrunner H, Mitchell-Olds T, Weitz JS, Benfey PN. 3D phenotyping and quantitative trait locus mapping identify core regions of the rice genome controlling root architecture. Proc Natl Acad Sci. 2013;110(18):1695–704. https://doi.org/10.1073/pnas.1304354110.

  246. Zhang C, Chen T. Efficient feature extraction for 2D/3D objects in mesh representation. In: Proceedings 2001 International Conference on Image Processing (Cat. No.01CH37205), 2001;3:935–938. https://doi.org/10.1109/ICIP.2001.958278. IEEE.

  247. Balfer J, Schöler F, Steinhage V. Semantic skeletonization for structural plant analysis. In: Sievänen R, Nikinmaa E, Godin C, Lintunen A, Nygren P, editors. 7th International Conference on Functional-Structural Plant Models (FSPM2013). Saariselkä, Finland; 2013. p. 42–4.

  248. Sodhi P. In-field plant phenotyping using model-free and model-based methods. PhD thesis, Carnegie Mellon University Pittsburgh, PA; 2017.

  249. Anderson MC, Denmead O. Short wave radiation on inclined surfaces in model plant communities 1. Agron J. 1969;61(6):867–72.

  250. Duncan W, Loomis R, Williams W, Hanau R, et al. A model for simulating photosynthesis in plant communities. Hilgardia. 1967;38(4):181–205.

  251. Lefsky MA, Cohen WB, Parker GG, Harding DJ. Lidar Remote Sensing for Ecosystem Studies: Lidar, an emerging remote sensing technology that directly measures the three-dimensional distribution of plant canopies, can accurately estimate vegetation structural attributes and should be of particular interest to forest, landscape, and global ecologists. Bioscience. 2002;52(1):19–30. https://doi.org/10.1641/0006-3568(2002)052[0019:LRSFES]2.0.CO;2.

  252. Omasa K, Hosoi F, Konishi A. 3D lidar imaging for detecting and understanding plant responses and canopy structure. J Exp Bot. 2007;58(4):881–98.

  253. Hosoi F, Omasa K. Estimation of vertical plant area density profiles in a rice canopy at different growth stages by high-resolution portable scanning lidar with a lightweight mirror. ISPRS J Photogramm Remote Sens. 2012;74:11–9. https://doi.org/10.1016/j.isprsjprs.2012.08.001.

  254. Hosoi F, Omasa K. Voxel-Based 3-D modeling of individual trees for estimating leaf area density using high-resolution portable scanning lidar. IEEE Trans Geosci Remote Sens. 2006;44(12):3610–8. https://doi.org/10.1109/TGRS.2006.881743.

  255. Cabrera-Bosquet L, Fournier C, Brichet N, Welcker C, Suard B, Tardieu F. High-throughput estimation of incident light, light interception and radiation-use efficiency of thousands of plants in a phenotyping platform. New Phytol. 2016;212(1):269–81. https://doi.org/10.1111/nph.14027.

  256. Felzenszwalb PF, Huttenlocher DP. Efficient graph-based image segmentation. Int J Comput Vis. 2004;59(2):167–81. https://doi.org/10.1023/B:VISI.0000022288.19776.77.

  257. Pound MP, Atkinson JA, Wells DM, Pridmore TP, French AP. Deep learning for multi-task plant phenotyping. In: The IEEE International Conference on Computer Vision (ICCV) Workshops, 2017:2055–2063. https://doi.org/10.1109/ICCVW.2017.241. IEEE.

  258. van Dijk ADJ, Kootstra G, Kruijer W, de Ridder D. Machine learning in plant science and plant breeding. iScience. 2020;24:101890.

  259. Singh A, Ganapathysubramanian B, Singh AK, Sarkar S. Machine learning for high-throughput stress phenotyping in plants. Trends Plant Sci. 2016;21(2):110–24.

  260. Jiang Y, Li C. Convolutional neural networks for image-based high-throughput plant phenotyping: a review. Plant Phenomics. 2020. https://doi.org/10.34133/2020/4152816.

  261. Seo H, Badiei Khuzani M, Vasudevan V, Huang C, Ren H, Xiao R, Jia X, Xing L. Machine learning techniques for biomedical image segmentation: An overview of technical aspects and introduction to state-of-art applications. Med Phys. 2020;47(5):148–67. https://doi.org/10.1002/mp.13649.

  262. Connor M, Kumar P. Fast construction of k-nearest neighbor graphs for point clouds. IEEE Trans Vis Comput Gr. 2010;16(4):599–608.

  263. Breiman L. Random forests. Mach Learn. 2001;45(1):5–32.

  264. Dutagaci H, Rasti P, Galopin G, Rousseau D. ROSE-X: an annotated data set for evaluation of 3D plant organ segmentation methods. Plant Methods. 2020;16(1):1–14. https://doi.org/10.1186/s13007-020-00573-w.

  265. Kohonen T. The self-organizing map. Proc IEEE. 1990;78(9):1464–80. https://doi.org/10.1109/5.58325.

  266. Rabiner LR. A tutorial on hidden Markov models and selected applications in speech recognition. Proc IEEE. 1989;77(2):257–86. https://doi.org/10.1109/5.18626.

  267. Garcia-Garcia A, Orts-Escolano S, Oprea S, Villena-Martinez V, Garcia-Rodriguez J. A review on deep learning techniques applied to semantic segmentation. arXiv Preprint. 2017. https://doi.org/10.48550/arXiv.1704.06857.

  268. Garcia-Garcia A, Orts-Escolano S, Oprea S, Villena-Martinez V, Martinez-Gonzalez P, Garcia-Rodriguez J. A survey on deep learning techniques for image and video semantic segmentation. Appl Soft Comput. 2018;70:41–65.

  269. Lateef F, Ruichek Y. Survey on semantic segmentation using deep learning techniques. Neurocomputing. 2019;338:321–48.

  270. Lawin FJ, Danelljan M, Tosteberg P, Bhat G, Khan FS, Felsberg M. Deep projective 3D semantic segmentation. In: International Conference on Computer Analysis of Images and Patterns, 2017:95–107. Springer.

  271. Chen L-C, Papandreou G, Kokkinos I, Murphy K, Yuille AL. Semantic image segmentation with deep convolutional nets and fully connected CRFs. arXiv Preprint. 2014. https://doi.org/10.48550/arXiv.1412.7062.

  272. Chen L-C, Papandreou G, Kokkinos I, Murphy K, Yuille AL. Deeplab: semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. IEEE Trans Pattern Anal Mach Intell. 2017;40(4):834–48. https://doi.org/10.1109/TPAMI.2017.2699184.

  273. Bhagat S, Kokare M, Haswani V, Hambarde P, Kamble R. Eff-UNet++: a novel architecture for plant leaf segmentation and counting. Ecol Inform. 2022. https://doi.org/10.1016/j.ecoinf.2022.101583.

  274. Carneiro GA, Magalhães R, Neto A, Sousa JJ, Cunha A. Grapevine segmentation in RGB images using deep learning. Proc Comput Sci. 2022;196:101–6. https://doi.org/10.1016/j.procs.2021.11.078.

  275. Wu W, Qi Z, Fuxin L. PointConv: deep convolutional networks on 3D point clouds. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019:9621–9630. https://doi.org/10.1109/CVPR.2019.00985

  276. Li D, Shi G, Li J, Chen Y, Zhang S, Xiang S, Jin S. PlantNet: a dual-function point cloud segmentation network for multiple plant species. ISPRS J Photogramm Remote Sens. 2022;184:243–63. https://doi.org/10.1016/j.isprsjprs.2022.01.007.

  277. Murtiyoso A, Pellis E, Grussenmeyer P, Landes T, Masiero A. Towards semantic photogrammetry: generating semantically rich point clouds from architectural close-range photogrammetry. Sensors. 2022;22(3):966. https://doi.org/10.3390/s22030966.

  278. Jhaldiyal A, Chaudhary N. Semantic segmentation of 3D LiDAR data using deep learning: a review of projection-based methods. Appl Intell. 2022. https://doi.org/10.1007/s10489-022-03930-5.

  279. Li S, Chen X, Liu Y, Dai D, Stachniss C, Gall J. Multi-scale interaction for real-time LiDAR data segmentation on an embedded platform. IEEE Robot Autom Lett. 2021;7(2):738–45. https://doi.org/10.1109/LRA.2021.3132059.

  280. Ahn P, Yang J, Yi E, Lee C, Kim J. Projection-based point convolution for efficient point cloud segmentation. IEEE Access. 2022;10:15348–58. https://doi.org/10.1109/ACCESS.2022.3144449.

  281. Iandola FN, Han S, Moskewicz MW, Ashraf K, Dally WJ, Keutzer K. SqueezeNet: AlexNet-level accuracy with 50X fewer parameters and <0.5MB model size. arXiv Preprint. 2016. https://doi.org/10.48550/arXiv.1602.07360.

  282. Su H, Maji S, Kalogerakis E, Learned-Miller E. Multi-view convolutional neural networks for 3D shape recognition. In: Proceedings of the IEEE International Conference on Computer Vision. 2015; 945–953.

  283. Boulch A, Guerry J, Le Saux B, Audebert N. SnapNet: 3D point cloud semantic labeling with 2D deep segmentation networks. Comput Gr. 2018;71:189–98.

  284. Jin S, Su Y, Gao S, Wu F, Hu T, Liu J, Li W, Wang D, Chen S, Jiang Y, et al. Deep learning: individual maize segmentation from terrestrial lidar data using faster r-cnn and regional growth algorithms. Front Plant Sci. 2018;9:866. https://doi.org/10.3389/fpls.2018.00866.

  285. Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. arXiv Preprint. 2014. https://doi.org/10.48550/arXiv.1409.1556.

  286. Long J, Shelhamer E, Darrell T. Fully convolutional networks for semantic segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015; 3431–3440.

  287. He K, Gkioxari G, Dollár P, Girshick R. Mask R-CNN. In: Proceedings of the IEEE International Conference on Computer Vision. 2017; 2961–2969.

  288. Maturana D, Scherer S. VoxNet: a 3D convolutional neural network for real-time object recognition. In: 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 2015; 922–928. https://doi.org/10.1109/IROS.2015.7353481. IEEE.

  289. Riegler G, Osman Ulusoy A, Geiger A. OctNet: learning deep 3D representations at high resolutions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017; 3577–3586. https://doi.org/10.48550/arXiv.1611.05009.

  290. Tchapmi L, Choy C, Armeni I, Gwak J, Savarese S. SEGCloud: semantic segmentation of 3D point clouds. In: 2017 International Conference on 3D Vision (3DV). 2017; 537–547. https://doi.org/10.1109/3DV.2017.00067. IEEE.

  291. Çiçek Ö, Abdulkadir A, Lienkamp SS, Brox T, Ronneberger O. 3D U-Net: learning dense volumetric segmentation from sparse annotation. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. 2016; 424–432. Springer.

  292. Su H, Jampani V, Sun D, Maji S, Kalogerakis E, Yang M-H, Kautz J. SPLATNet: sparse lattice networks for point cloud processing. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018; 2530–2539. https://doi.org/10.48550/arXiv.1802.08275.

  293. Rosu RA, Schütt P, Quenzel J, Behnke S. LatticeNet: fast point cloud segmentation using permutohedral lattices. arXiv Preprint. 2019. https://doi.org/10.48550/arXiv.1912.05905.

  294. Choy C, Gwak J, Savarese S. 4D Spatio-Temporal ConvNets: Minkowski convolutional neural networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019; 3075–3084. https://doi.org/10.1109/CVPR.2019.00319.

  295. Rosu RA, Schütt P, Quenzel J, Behnke S. LatticeNet: fast spatio-temporal point cloud segmentation using permutohedral lattices. Auton Robots. 2021;46:1–16.

  296. Wang Y, Sun Y, Liu Z, Sarma SE, Bronstein MM, Solomon JM. Dynamic graph CNN for learning on point clouds. ACM Trans Gr (tog). 2019;38(5):1–12. https://doi.org/10.1145/3326362.

  297. Turgut K, Dutagaci H, Galopin G, Rousseau D. Segmentation of structural parts of rosebush plants with 3D point-based deep learning methods. arXiv Preprint. 2020. https://doi.org/10.48550/arXiv.2012.11489.

  298. Qi CR, Su H, Mo K, Guibas LJ. PointNet: deep learning on point sets for 3D classification and segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017:652–660. https://doi.org/10.1109/CVPR.2017.16.

  299. Li Y, Wen W, Miao T, Wu S, Yu Z, Wang X, Guo X, Zhao C. Automatic organ-level point cloud segmentation of maize shoots by integrating high-throughput data acquisition and deep learning. Comput Electron Agric. 2022;193: 106702. https://doi.org/10.1016/j.compag.2022.106702.

  300. Qi CR, Yi L, Su H, Guibas LJ. PointNet++: deep hierarchical feature learning on point sets in a metric space. arXiv preprint. 2017. arXiv:1706.02413.

  301. Heiwolt K, Duckett T, Cielniak G. Deep semantic segmentation of 3D plant point clouds. In: Annual Conference Towards Autonomous Robotic Systems, 2021:36–45. Springer.

  302. Jiang M, Wu Y, Zhao T, Zhao Z, Lu C. PointSIFT: a SIFT-like network module for 3D point cloud semantic segmentation. arXiv Preprint. 2018. https://doi.org/10.48550/arXiv.1807.00652.

  303. Wang W, Yu R, Huang Q, Neumann U. SGPN: similarity group proposal network for 3D point cloud instance segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018:2569–2578.

  304. Zhang K, Hao M, Wang J, de Silva CW, Fu C. Linked dynamic graph CNN: learning on point cloud via linking hierarchical features. arXiv Preprint. 2019. https://doi.org/10.48550/arXiv.1904.10014.

  305. Duan Y, Zheng Y, Lu J, Zhou J, Tian Q. Structural relational reasoning of point clouds. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019:949–958. https://doi.org/10.1109/CVPR.2019.00104

  306. Wang X, Liu S, Shen X, Shen C, Jia J. Associatively segmenting instances and semantics in point clouds. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019; 4096–4105. https://doi.org/10.1109/CVPR.2019.00422.

  307. Ma Y, Guo Y, Liu H, Lei Y, Wen G. Global context reasoning for semantic segmentation of 3D point clouds. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2020:2931–2940. https://doi.org/10.1109/WACV45572.2020.9093411.

  308. Lu Q, Chen C, Xie W, Luo Y. PointNGCNN: deep convolutional networks on 3D point clouds with neighborhood graph filters. Comput Gr. 2020;86:42–51. https://doi.org/10.1016/j.cag.2019.11.005.

  309. Li Y, Bu R, Sun M, Wu W, Di X, Chen B. PointCNN: convolution on X-transformed points. Adv Neural Inf Proc Syst. 2018;31.

  310. Ao Z, Wu F, Hu S, Sun Y, Su Y, Guo Q, Xin Q. Automatic segmentation of stem and leaf components and individual maize plants in field terrestrial LiDAR data using convolutional neural networks. Crop J. 2021. https://doi.org/10.1016/j.cj.2021.10.010.

  311. Gong L, Du X, Zhu K, Lin K, Lou Q, Yuan Z, Huang G, Liu C. Panicle-3D: efficient phenotyping tool for precise semantic segmentation of rice panicle point cloud. Plant Phenomics. 2021. https://doi.org/10.34133/2021/9838929.

  312. Chen L-C, Zhu Y, Papandreou G, Schroff F, Adam H. Encoder-decoder with atrous separable convolution for semantic image segmentation. In: Proceedings of the European Conference on Computer Vision (ECCV), 2018:801–818. https://doi.org/10.48550/arXiv.1802.02611.

  313. Jin S, Su Y, Gao S, Wu F, Ma Q, Xu K, Hu T, Liu J, Pang S, Guan H, et al. Separating the structural components of maize for field phenotyping using terrestrial LiDAR data and deep convolutional neural networks. IEEE Trans Geosci Remote Sens. 2019;58(4):2644–58. https://doi.org/10.1109/TGRS.2019.2953092.

  314. Liu F, Li S, Zhang L, Zhou C, Ye R, Wang Y, Lu J. 3DCNN-DQN-RNN: a deep reinforcement learning framework for semantic parsing of large-scale 3D point clouds. In: Proceedings of the IEEE International Conference on Computer Vision, 2017:5678–5687. https://doi.org/10.1109/ICCV.2017.605.

  315. Socher R, Perelygin A, Wu J, Chuang J, Manning CD, Ng AY, Potts C. Recursive deep models for semantic compositionality over a sentiment treebank. In: Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. 2013; 1631–1642.

  316. Yu F, Liu K, Zhang Y, Zhu C, Xu K. PartNet: a recursive part decomposition network for fine-grained and hierarchical shape segmentation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019; 9491–9500.

  317. Santos TT, De Oliveira AA. Image-based 3D digitizing for plant architecture analysis and phenotyping. In: 25th Conference on Graphics, Patterns and Images (SIBGRAPI), Ouro Preto, Brazil, 2012. Embrapa Informática Agropecuária.

  318. Chaudhury A, Boudon F, Godin C. 3D plant phenotyping: all you need is labelled point cloud data. In: European Conference on Computer Vision. 2020; 244–260. Springer.

  319. Khanna R, Schmid L, Walter A, Nieto J, Siegwart R, Liebisch F. A spatio temporal spectral framework for plant stress phenotyping. Plant Methods. 2019;15(1):1–18. https://doi.org/10.1186/s13007-019-0398-8.

  320. Cruz JA, Yin X, Liu X, Imran SM, Morris DD, Kramer DM, Chen J. Multi-modality imagery database for plant phenotyping. Mach Vis Appl. 2016;27(5):735–49. https://doi.org/10.1007/s00138-015-0734-6.

  321. Aich S, Stavness I. Leaf counting with deep convolutional and deconvolutional networks. arXiv Preprint. 2017. arXiv:1708.07570.

  322. Romera-Paredes B, Torr PHS. Recurrent instance segmentation. In: Leibe B, Matas J, Sebe N, Welling M, editors. Computer vision - ECCV 2016. Cham: Springer International Publishing; 2016. p. 312–29.

  323. De Brabandere B, Neven D, Van Gool L. Semantic instance segmentation with a discriminative loss function. arXiv Preprint. 2017. https://doi.org/10.48550/arXiv.1708.02551.

  324. Ren M, Zemel RS. End-to-end instance segmentation and counting with recurrent attention. arXiv Preprint. 2017. arXiv:1605.09410.

  325. Ubbens JR, Stavness I. Deep plant phenomics: a deep learning platform for complex plant phenotyping tasks. Front Plant Sci. 2017;8:1190. https://doi.org/10.3389/fpls.2017.01190.

  326. Tapas A. Transfer learning for image classification and plant phenotyping. Int J Adv Res Comput Eng Technol (IJARCET). 2016;5(11):2664–9.

  327. Wei H, Xu E, Zhang J, Meng Y, Wei J, Dong Z, Li Z. BushNet: effective semantic segmentation of bush in large-scale point clouds. Comput Electron Agric. 2022;193: 106653. https://doi.org/10.1016/j.compag.2021.106653.

  328. Kar S, Garin V, Kholová J, Vadez V, Durbha SS, Tanaka R, Iwata H, Urban MO, Adinarayana J. SpaTemHTP: a data analysis pipeline for efficient processing and utilization of temporal high-throughput phenotyping data. Front Plant Sci. 2020. https://doi.org/10.3389/fpls.2020.552509.

  329. Mack J, Rist F, Herzog K, Töpfer R, Steinhage V. Constraint-based automated reconstruction of grape bunches from 3D range data for high-throughput phenotyping. Biosyst Eng. 2020;197:285–305. https://doi.org/10.1016/j.biosystemseng.2020.07.004.

  330. Wang Y, Wen W, Wu S, Wang C, Yu Z, Guo X, Zhao C. Maize plant phenotyping: comparing 3D laser scanning, multi-view stereo reconstruction, and 3D digitizing estimates. Remote Sens. 2019;11(1):63.

  331. Su Y, Wu F, Ao Z, Jin S, Qin F, Liu B, Pang S, Liu L, Guo Q. Evaluating maize phenotype dynamics under drought stress using terrestrial lidar. Plant Methods. 2019;15(1):1–16.

  332. Iqbal J, Xu R, Halloran H, Li C. Development of a multi-purpose autonomous differential drive mobile robot for plant phenotyping and soil sensing. Electronics. 2020;9(9):1550. https://doi.org/10.3390/electronics9091550.

  333. Briglia N, Williams K, Wu D, Li Y, Tao S, Corke F, Montanaro G, Petrozza A, Amato D, Cellini F, et al. Image-based assessment of drought response in grapevines. Front Plant Sci. 2020;11:595.

  334. Zhu B, Liu F, Xie Z, Guo Y, Li B, Ma Y. Quantification of light interception within image-based 3-d reconstruction of sole and intercropped canopies over the entire growth season. Ann Bot. 2020;126(4):701–12.

  335. Bellmann A, Hellwich O, Rodehorst V, Yilmaz U. A benchmark dataset for performance evaluation of shape-from-X algorithms. Int Arch Photogramm Remote Sens Spat Inform Sci. 2008;16:26.

  336. Liang X, Zhou F, Chen H, Liang B, Xu X, Yang W. Three-dimensional maize plants reconstruction and traits extraction based on structure from motion. Trans Chin Soc Agric Mach. 2020;51:209–19.

  337. Galli G, Sabadin F, Costa-Neto GMF, Fritsche-Neto R. A novel way to validate UAS-based high-throughput phenotyping protocols using in silico experiments for plant breeding purposes. Theor Appl Genet. 2021;134(2):715–30. https://doi.org/10.1007/s00122-020-03726-6.

  338. Rossi R, Leolini C, Costafreda-Aumedes S, Leolini L, Bindi M, Zaldei A, Moriondo M. Performances evaluation of a low-cost platform for high-resolution plant phenotyping. Sensors. 2020;20(11):3150. https://doi.org/10.3390/s20113150.

  339. Herrero-Huerta M, Bucksch A, Puttonen E, Rainey KM. Canopy roughness: a new phenotypic trait to estimate aboveground biomass from unmanned aerial system. Plant Phenomics. 2020. https://doi.org/10.34133/2020/6735967.

  340. Pinto MF, Melo AG, Honório LM, Marcato ALM, Conceição AGS, Timotheo AO. Deep learning applied to vegetation identification and removal using multidimensional aerial data. Sensors. 2020;20(21):6187.

  341. Gené-Mola J, Llorens J, Rosell-Polo JR, Gregorio E, Arnó J, Solanelles F, Martínez-Casasnovas JA, et al. Assessing the performance of RGB-D sensors for 3D fruit crop canopy characterization under different operating and lighting conditions. Sensors. 2020;20(24):7072.

  342. Hsu H-C, Chou W-C, Kuo Y-F. 3D revelation of phenotypic variation, evolutionary allometry, and ancestral states of corolla shape: a case study of clade Corytholoma (subtribe Ligeriinae, family Gesneriaceae). GigaScience. 2020;9(1):155. https://doi.org/10.1093/gigascience/giz155.

  343. Li M, Shao M-R, Zeng D, Ju T, Kellogg EA, Topp CN. Comprehensive 3D phenotyping reveals continuous morphological variation across genetically diverse sorghum inflorescences. New Phytol. 2020;226(6):1873–85. https://doi.org/10.1111/nph.16533.

  344. Théroux-Rancourt G, Jenkins MR, Brodersen CR, McElrone A, Forrestel EJ, Earles JM. Digitally deconstructing leaves in 3D using X-ray microcomputed tomography and machine learning. Appl Plant Sci. 2020;8(7):11380. https://doi.org/10.1002/aps3.11380.

  345. Boerckel JD, Mason DE, McDermott AM, Alsberg E. Microcomputed tomography: approaches and applications in bioengineering. Stem Cell Res Ther. 2014;5(6):1–12.

  346. Xia C, Shi Y, Yin W, et al. Obtaining and denoising method of three-dimensional point cloud data of plants based on TOF depth sensor. Trans Chin Soc Agric Eng. 2018;34(6):168–74.

  347. Choi S, Kim T, Yu W. Performance evaluation of RANSAC family. J Comput Vis. 1997;24(3):271–300.

  348. Loch BI. Surface fitting for the modelling of plant leaves. PhD thesis, University of Queensland Australia; 2004.

Acknowledgements

Not applicable.

Funding

The research of JV is supported by a grant from the Special Research Fund (BOF) of Ghent University.

Author information

Contributions

BV drafted and edited the first version of the manuscript. NH reviewed and updated the manuscript. AVM, JV, and SD commented on the manuscript, revised the text and structure, and gave additional input. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Arnout Van Messem.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Consent for publication is not applicable. Images are unidentifiable and there are no details on individuals reported within the manuscript.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Harandi, N., Vandenberghe, B., Vankerschaver, J. et al. How to make sense of 3D representations for plant phenotyping: a compendium of processing and analysis techniques. Plant Methods 19, 60 (2023). https://doi.org/10.1186/s13007-023-01031-z

Keywords