PhenoTrack3D: an automatic high-throughput phenotyping pipeline to track maize organs over time

Abstract

Background

High-throughput phenotyping platforms allow the study of the form and function of a large number of genotypes subjected to different growing conditions (GxE). A number of image acquisition and processing pipelines have been developed to automate this process, for micro-plots in the field and for individual plants in controlled conditions. Capturing shoot development requires extracting from images both the evolution of the 3D plant architecture as a whole, and a temporal tracking of the growth of its organs.

Results

We propose PhenoTrack3D, a new pipeline to extract a 3D + t reconstruction of maize. It allows the study of plant architecture and individual organ development over time during the entire growth cycle. The method tracks the development of each organ from a time-series of plants whose organs have already been segmented in 3D using existing methods, such as Phenomenal [Artzet et al. in BioRxiv 1:805739, 2019] which was chosen in this study. First, a novel stem detection method based on deep-learning is used to locate precisely the point of separation between ligulated and growing leaves. Second, a new and original multiple sequence alignment algorithm has been developed to perform the temporal tracking of ligulated leaves, which have a consistent geometry over time and an unambiguous topological position. Finally, growing leaves are back-tracked with a distance-based approach. This pipeline is validated on a challenging dataset of 60 maize hybrids imaged daily from emergence to maturity in the PhenoArch platform (ca. 250,000 images). The stem tip was precisely detected over time (RMSE < 2.1 cm). 97.7% and 85.3% of ligulated and growing leaves respectively were assigned to the correct rank after tracking, on 30 plants × 43 dates. The pipeline allowed the extraction of various development and architecture traits at organ level, with overall good correlation to manual observations, on random subsets of 10–355 plants.

Conclusions

We developed a novel phenotyping method based on sequence alignment and deep-learning. It characterises the development of maize architecture at organ level, automatically and at high throughput. It has been validated on hundreds of plants during the entire development cycle, showing its applicability to GxE analyses of large maize datasets.

Background

Plant architecture relates to the 3D organisation and form of organs on the plant (i.e. number, shape, size and position) as well as their temporal and topological changes during plant development [1, 2]. Plant architecture and development are of major agronomic importance in cereals such as maize (Zea mays L.), determining resource capture and use, and thus plant performance [3, 4, 5]. Architectural and developmental traits such as leaf number, leaf angle and leaf size are influenced by both genetic variations and environmental conditions such as light and water availability [6, 7, 8], and display a large genotype x environment (GxE) interaction. Therefore, understanding the genetic control and the response to environmental cues of these traits at individual plant level is a major challenge in the context of climate change [9, 10, 11], and the engineering of agro-ecological practices based on complex plant mixtures.

High-throughput phenotyping (HTP) platforms enable the collection of plant images of a large number of genotypes growing under different environmental conditions on a regular basis [12, 13]. Such image datasets can then be processed automatically [14] to capture the 3D morphology of plant shoots and extract phenotypic traits [15]. Among the existing methods, the Phenomenal pipeline [16] proved relevant to process large datasets of multiple species of agronomic interest, providing an end-to-end solution to extract a 3D plant reconstruction from a set of 2D images taken at regular viewpoints around the plant. In the case of maize, this pipeline allows the extraction of architectural traits such as stem height and leaf morphology (e.g. length, insertion height, azimuth). However, it is limited to the reconstruction of plants at each time point separately. When screening for genetic variability, the identification of leaves by a unique time-consistent number (the rank of emergence, or leaf rank) is essential to compare the shoot organs occupying similar developmental positions (e.g. juvenile versus adult leaves [17]) between different plants, or to quantify the development using leaf stage. Moreover, the assignment of the same rank to successive segmentations of the same leaf over time is necessary to measure the growth of individual leaves and their responses to environmental conditions. Since the lowest maize leaves disappear over time due to senescence, it may be difficult to deduce the rank of leaves when observing the plant at a single date [18].

To overcome this limitation, time-series analysis could be used to group several occurrences of a same leaf in a temporal series of images, in order to get a 3D + t representation of the plant. While this task relates to multiple object tracking [19], this framework cannot be directly applied to leaf tracking, since plants undergo major topological and morphological changes over time, with new leaves appearing and growing due to organogenesis, and others collapsing then disappearing during senescence [20]. Instead, specific leaf tracking methods have been proposed for 2D images of rosette plants from a top view [21, 22, 23, 24]. While a few studies have developed leaf tracking methods on 3D reconstructions (e.g. cotton [25], cucumber [26], tomato [27] and maize [28, 27]), most of them have only been validated on limited datasets that may not reflect actual HTP conditions (thousands of plants, large genetic diversity and growing scenarios). Moreover, these datasets were often limited to young plants, which are less challenging to analyse than plants at later growth stages (emergence of the reproductive organs, more occlusion due to leaf crossings, more frequent disappearance of leaves due to senescence). While these methods offer various solutions for associating leaves with similar geometry (e.g. length, azimuth), they rarely use topology (i.e. the spatial organisation of leaves on the plant [29]). Plant topology offers valuable information for leaf tracking since (i) it defines the identity of leaves, as leaves appear from bottom to top along the stem axis on plants such as maize, and (ii) it is redundant over time, thus helping to maintain leaf identity. Plant topology was used in [28], but without considering leaf morphology. We therefore seek a new leaf tracking method exploiting both leaf morphology and topology, with the objective of handling complex HTP datasets, covering all stages of maize development.

Using first the topological order of leaf ranks along the maize stem, a reconstructed maize plant can be represented at any date by a spatial sequence of leaves, ordered from the bottom to the top of the stem. This sequence is obtained through a segmentation step and may therefore contain artefacts resulting from segmentation errors. For instance, the maize ear can be misidentified as a leaf, or some leaves can be unidentified due to occlusions, leading to extra or missing leaves in the sequence. Some leaves may also emerge or fall from the plant between two successive sequences. From this point of view, comparing two successive leaf sequences is analogous to comparing two homologous genetic sequences, which are similar except for a few extra or missing elements. Sequence alignment algorithms are commonly used to insert gaps in such genetic sequences in order to match their common elements [32]. This framework was redesigned here to match leaves with a similar morphology in successive topological sequences of maize leaves.

In this paper, we propose a novel robust method to address the current issues of maize HTP. First, organs are segmented on 3D reconstructed volumes at all time steps using a previously described method [16], publicly available on GitHub (Phenomenal: https://github.com/openalea/phenomenal) and Zenodo (https://doi.org/10.5281/zenodo.1436633). This method is complemented by a new stem detection algorithm based on deep-learning to locate more precisely the stem tip, which separates the mature part of the maize plant from the developing one. After that, leaves of the basal mature parts are aligned using a sequence alignment algorithm, and then tracked backwards to the upper developing parts of the plant. We apply this method on a challenging image dataset acquired at the PhenoArch platform consisting of a diverse panel of maize genotypes developing from plant emergence to late flowering stage, under various levels of water stress. The accuracy of tracking is globally assessed by comparing leaf rank predictions and key dynamical traits to manual annotations. Such organ-level phenotypic traits make it possible to fully describe the global plant development and architecture, as well as individual leaf growth dynamics.

Materials and methods

Plant material and dataset composition

The pipeline was tested on a dataset from an experiment conducted in 2017 involving a set of 60 commercial maize hybrids representative of breeding history in Europe during the last 60 years. This material covers a wide range of plant architecture, growth and development, leading to an appreciable variability of performances in the field [33]. The experiment was conducted in the PhenoArch phenotyping platform (Fig. 1A) hosted at the M3P (Montpellier Plant Phenotyping Platforms) [6].

Fig. 1
figure 1

Overview of the 3D maize reconstruction and segmentation pipeline used prior to tracking. A Maize plants grown in the PhenoArch high-throughput phenotyping platform. B Daily acquisition of 12 side-view RGB images. C Reconstruction of a 3D volume, using space-carving [16]. D 3D skeletonization of the reconstruction [16], and extension of the leaf tips. E Detection of the stem tip position (red box) using a deep-learning model. This position is used to segment the skeleton into stem (black), ligulated leaves (blue) and growing leaves (orange) organs. Ligulated leaves can be ordered topologically (numbers) by increasing insertion height, unlike growing leaves which all emerge from the same point

Briefly, plants were sown in 9 L pots filled with a 30:70 (v/v) mixture of clay and organic compost. Two levels of soil water content were imposed: (i) retention capacity (WW, soil water potential of −0.05 MPa) and (ii) water deficit (WD, soil water potential of −0.3 MPa). Each combination of genotype and water treatment was replicated 7 times, 4 with early harvesting (until 12 visible leaves stage, ~ 40 days after plant emergence) and 3 with late harvesting (until ~ 55 days after plant emergence, i.e. ~ 15 days after panicle emergence), resulting in a total of 840 plants. Greenhouse temperature was maintained at 25 ± 3 °C during the day and 20 °C during the night. Details of experimental growing conditions can be found at [34].

RGB images (2048 × 2448 pixels) were taken daily for each plant with twelve side views at 30° rotational intervals (Fig. 1B), using the imaging units of the PhenoArch platform. Each unit is composed of a cabin housing an RGB camera (Grasshopper3, Point Grey Research, Richmond, BC, Canada) equipped with a 12.5–75 mm TV zoom lens (Pentax, Ricoh Imaging, France) and LED illumination (5050–6500 K colour temperature). Images were captured while the plant was rotating at constant rate (20 rpm) using a brushless motor (Rexroth, Germany).

3D reconstruction and organ segmentation

Phenotrack3D requires as input a time-series of 3D reconstructed plants with stem and leaves individually segmented. Several existing methods can segment maize or sorghum plants [35, 31, 36]. In our study, we apply Phenomenal [16] on single time-point data to reconstruct 3D volumes of the plants (Fig. 1C), extract 3D skeletons (Fig. 1D) and 3D segmentations of plant volumes into individual plant organs (stem, leaves) (Fig. 1E). Briefly, Phenomenal estimates a 3D plant volume by applying a space carving algorithm [37] on a regular grid of voxels, so that the projection of the plant volume matches all plant silhouettes of the multi-view image stack. It then searches iteratively among voxels the longest shortest paths connecting the plant base to the most distant voxel, removing at each iteration all voxels intercepted by a sweeping perpendicular plane along the path. The result is a set of paths, the 3D skeleton, with each path being associated to a set of voxels representing individual leaves. The only exception is the first path found, which contains both the stem and a leaf. This path is therefore further segmented by locating the stem tip, computed as the highest minimum of the voxel interception curve (i.e. the number of voxels intercepted by the sweeping plane as a function of curvilinear abscissa).
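The stem/leaf split on the first skeleton path can be illustrated with a minimal sketch. We read "highest minimum" as the local minimum of the interception curve located furthest along the path; this reading, and the function below, are our assumptions rather than Phenomenal's exact code:

```python
import numpy as np

def stem_tip_abscissa(interception):
    """Return the index of the last local minimum of the voxel
    interception curve. `interception[i]` is the number of voxels
    intercepted by the sweeping plane at curvilinear abscissa i.
    Assumption: the stem tip is the local minimum located furthest
    along the first skeleton path."""
    c = np.asarray(interception, dtype=float)
    # strict local minima: interior points lower than both neighbours
    minima = [i for i in range(1, len(c) - 1)
              if c[i] < c[i - 1] and c[i] < c[i + 1]]
    if not minima:
        return len(c) - 1  # no interior minimum: whole path is stem
    return minima[-1]
```

The curve is low along the thin stem and peaks where the plane crosses leaves, so the last interior minimum marks the transition out of the stem.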

Two post-processing steps have been designed to improve the quality of the segmentation. First, we extrapolate midribs up to the leaf tips to reconstruct the part of the leaves that has been shortened during the 3D reconstruction. Second, because stem tip location was severely imprecise at late developmental stages, Phenomenal was complemented with a new stem tip detection method based on deep-learning.

We post-processed the 3D skeleton output by projecting it on the corresponding 2D binary images, to extend the leaf tips which are often shortened during the reconstruction. To that end, 2D binary images extracted with Phenomenal were also skeletonized, and the segments of both skeletons were matched to find an extension path for each segment of the original 3D skeleton. This step is illustrated in Fig. 2, and was repeated for each of the 12 camera angles. For a given 2D binary image, each 3D skeleton branch is projected then associated with the closest branch of the 2D skeleton of the binary image to identify the best extension path to the leaf tip (Fig. 2D; green lines). The corrected length of the 3D skeleton branch is finally computed as the median of all extension paths found for this branch over the 12 camera angles.
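The branch-matching step can be sketched as follows. This is a simplified, hypothetical interface (the pipeline matches whole skeleton branches, not only endpoints), keeping the distance threshold dsk = 30 px from the text:

```python
import numpy as np

D_SK = 30.0  # px, matching threshold between 2D and projected 3D branches

def extension_length(branch_3d_2d, branches_2d):
    """Sketch: find the 2D-skeleton branch closest to the projected
    endpoint of a 3D branch, and return the extra path length from the
    closest 2D point to that branch's tip (the 'extension path').

    branch_3d_2d : (n, 2) projected 3D branch polyline (tip last)
    branches_2d  : list of (m, 2) 2D skeleton branches (tip last)
    """
    tip = np.asarray(branch_3d_2d[-1], dtype=float)
    best = None
    for b in branches_2d:
        b = np.asarray(b, dtype=float)
        d = np.linalg.norm(b - tip, axis=1)
        j = int(np.argmin(d))
        if d[j] <= D_SK and (best is None or d[j] < best[0]):
            best = (d[j], b, j)
    if best is None:
        return 0.0  # no 2D branch close enough: no extension
    _, b, j = best
    seg = np.diff(b[j:], axis=0)  # remaining path to the 2D tip
    return float(np.linalg.norm(seg, axis=1).sum())
```

In the pipeline, this would be evaluated for each of the 12 views and the median extension kept.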

Fig. 2
figure 2

Extension of leaf tips on the 3D skeleton. A Binarization of the RGB image (inset) with Phenomenal [16], B Skeletonization of the binary image, and extraction of skeleton branches having an endpoint (blue lines), C 3D plant skeleton (inset) and reprojection of its branches in the 2D space (red lines). D 3D and 2D branches matching based on a distance threshold dsk = 30px, and determination of extension paths (green lines)

To precisely locate the stem tip, which is of particular importance for Phenotrack3D, we trained an object detection model to detect collars on the 2D images, as in [38]. To that end, we used a YOLOv4 deep-learning model [39] (see Additional file 1 for training details). The detected collars were used to define the stem height as the highest collar height among the 12 side-view images. The leaves with an insertion point below the stem tip were defined as ligulated leaves (Fig. 3).

Fig. 3
figure 3

Deep-learning-based collar detection for organ segmentation. A Identification of the stem path (orange line) on the 3D skeleton, B reprojection of the stem path (orange line) on one of the corresponding RGB images and extraction of 416 × 416 sub-images along the stem path. Sub-image centres are evenly spaced along the stem path, with a maximum spacing of 400 pixels. C Collar detection on a sub-image (square: predicted bounding box, point: centre of the box, value: prediction score) D Projection of the highest detected collar point among all sub-images from all 12 side viewpoints on the 3D skeleton (grey cross). This point is then used to segment skeleton branches in stem, ligulated leaves, and growing leaves

From this point, the plant is represented by a time-series of 3D observations, from which stem height can be extracted at each date and smoothed over time. Finally, time points with abnormal stem shapes were removed. These points were detected by first constructing a median stem, obtained as the polyline joining the (x, y) median position of all stem polyline points in the time-series, grouped by discrete z coordinates. Then, all time points with a stem polyline whose directed Hausdorff distance from the median was greater than 10 cm were removed from the time-series (Additional file 3).
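The outlier-removal step can be sketched with SciPy's directed Hausdorff distance, assuming for simplicity that stem polylines are (n, 3) arrays in mm whose z values already share a discretised grid:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def filter_abnormal_stems(stems, threshold_mm=100.0):
    """Sketch of the abnormal-stem filter: build the median stem from
    the per-z median (x, y) over the whole time-series, then flag time
    points whose directed Hausdorff distance to it exceeds `threshold_mm`
    (10 cm in the text). Returns the median stem and a keep-mask."""
    pts = np.vstack(stems)
    zs = np.unique(pts[:, 2])
    median_stem = np.array(
        [[*np.median(pts[pts[:, 2] == z, :2], axis=0), z] for z in zs]
    )
    keep = [bool(directed_hausdorff(s, median_stem)[0] <= threshold_mm)
            for s in stems]
    return median_stem, keep
```

The directed (one-sided) distance is used so that a short but well-placed stem is not penalised for missing upper points.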

Time-lapse tracking of 3D organs

We track the ligulated leaves using a multiple sequence alignment algorithm. First, at each time point \(t\) we build a leaf sequence \({S}_{t}\) composed of feature vectors representing key characteristics of the segmented ligulated leaves. From this representation, we define a cost function for comparing leaves pairwise, and derive a sequence alignment algorithm to establish the correspondences between two successive leaf sequences. We then propagate this procedure to determine ligulated leaf tracks along the whole time-course. Each organ track is then identified by a leaf rank, which corresponds both to its order of emergence and its location along the stem. Finally, we track these leaves backwards from the moment of their ligulation to the moment of their emergence at the top of the stem and get the whole 3D + t reconstruction.

Pairwise sequence alignment

The detected ligulated leaves of a date \(t\) are ordered from the bottom to the top of the plant, i.e. by ascending leaf rank, in a leaf sequence \({S}_{t}\). The alignment of two sequences can be defined as a set of gap placements at the beginning, end, or between elements of these sequences, resulting in two new sequences of the same length with no gaps facing each other [32]. Pairwise sequence alignment algorithms are designed to identify the alignment that optimises an alignment score between two sequences, often defined as the sum of the scores associated with each pair of matched elements, plus gap penalties [40].

Each ligulated leaf is described using a feature vector computed from geometrical features that are assumed to be constant over time: insertion height \(h\) (mm), length \(l\) (mm) and azimuth \(\alpha \in [-\pi, \pi]\). These features are extracted from the segmented leaves [16] and concatenated into a feature vector \(v\in {R}^{4}\) summarizing the leaf morphology:

$$v\, = \,\left[ {v^{i} } \right]_{1 \le i \le 4} \, = \,\left[ {\cos \,(\alpha ),\,\sin \,(\alpha ),\,w_{h} \cdot h,\,w_{l} \cdot l} \right]$$
(1)

Weights \({w}_{h}=0.03\) and \({w}_{l}=0.004\) were fine-tuned to scale features and adjust their relative importance. Each pair of matched leaves is associated to a cost \({c}_{vv}\) equal to the euclidean distance between the feature vectors \({v}_{1}\) and \({v}_{2}\) of those two leaves:

$$c_{vv} \left( {v_{1} ,v_{2} } \right) = \sqrt {\mathop \sum \limits_{1 \le i \le 4} \left( {v_{1}^{i} - v_{2}^{i} } \right)^{2} }$$
(2)
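Equations (1) and (2) translate directly into code; the function names and this minimal sketch are ours:

```python
import numpy as np

W_H, W_L = 0.03, 0.004  # feature weights w_h and w_l from the text

def leaf_vector(h, l, alpha):
    """Feature vector of Eq. (1) for one ligulated leaf:
    insertion height h (mm), length l (mm), azimuth alpha (rad)."""
    return np.array([np.cos(alpha), np.sin(alpha), W_H * h, W_L * l])

def c_vv(v1, v2):
    """Matching cost of Eq. (2): euclidean distance between vectors."""
    return float(np.linalg.norm(v1 - v2))
```

Encoding azimuth as (cos α, sin α) makes the cost insensitive to the ±π wrap-around of the raw angle.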

A gap penalty parameter \(g\) is used to penalise the addition of each of the \(n\) gaps placed in the alignment of sequences \({S}_{t1}\) and \({S}_{t2}\). We define \(g\) from \(\overline{{c }_{adj}}=4.23\), the average value of \({c}_{vv}\)(\({v}_{i}\), \({v}_{j}\)) for all couples of topologically adjacent leaves \(i\) and \(j\) in the sequences \({S}_{t}\) of our dataset, and \({w}_{gap}=3\) a weight:

$$g = w_{gap} .{ }\overline{{c_{adj} }}$$
(3)

Using a different parameter value for the terminal gaps has proven effective in aligning sequences of different lengths [41]. A weight \({w}_{tml}=0.2\) is therefore used to lower the penalty of each of the \({n}_{tml}\) terminal gaps, since leaves are expected to appear and disappear successively over time. The optimal alignment between \({S}_{t1}\) and \({S}_{t2}\) is then defined as the one that minimises the global alignment cost \(C\). We define \(I\) as the set of indexes of sequences \({S}_{t1}\) and \({S}_{t2}\) with a match (i.e. without gap). With \({v}_{i}\) and \({v^{\prime}}_{i}\) the feature vectors associated respectively with the \(i\)-th matched elements of the sequences \({S}_{t1}\) and \({S}_{t2}\):

$$C\left( {S_{t1} ,S_{t2} } \right) = (n - n_{tml}) .g + n_{tml} .w_{tml} .g + \mathop \sum \limits_{i \in I} c_{vv} \left( {v_{i} ,v^{\prime}_{i} } \right)$$
(4)

To find the optimal alignment between \({S}_{t1}\) and \({S}_{t2}\), the Needleman-Wunsch (NW) algorithm [42] has been adapted to consider the terminal gap weight. This algorithm is based on dynamic programming and guarantees an optimal solution for pairwise alignment.
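A minimal sketch of this adapted Needleman-Wunsch algorithm is given below. It minimises the cost of Eq. (4), charging terminal gaps (first row/column and trailing edge of the dynamic-programming matrix) \(w_{tml} \cdot g\) instead of \(g\). This is our reconstruction from the text, not the authors' code:

```python
import numpy as np

def nw_align(s1, s2, gap, w_tml=0.2):
    """Align two leaf sequences (lists of feature vectors, Eq. 1),
    minimising the cost of Eq. (4). Returns aligned index pairs,
    with None marking a gap."""
    n, m = len(s1), len(s2)

    def c(a, b):  # matching cost c_vv of Eq. (2)
        return float(np.linalg.norm(np.asarray(a, float) - np.asarray(b, float)))

    def gap_s2(i, j):  # s1[i-1] aligned to a gap (vertical move)
        return w_tml * gap if (j == 0 or j == m) else gap

    def gap_s1(i, j):  # s2[j-1] aligned to a gap (horizontal move)
        return w_tml * gap if (i == 0 or i == n) else gap

    F = np.zeros((n + 1, m + 1))
    for i in range(1, n + 1):
        F[i, 0] = F[i - 1, 0] + gap_s2(i, 0)
    for j in range(1, m + 1):
        F[0, j] = F[0, j - 1] + gap_s1(0, j)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            F[i, j] = min(
                F[i - 1, j - 1] + c(s1[i - 1], s2[j - 1]),  # match
                F[i - 1, j] + gap_s2(i, j),                 # gap in s2
                F[i, j - 1] + gap_s1(i, j),                 # gap in s1
            )
    # traceback from the bottom-right corner
    pairs, i, j = [], n, m
    while i > 0 or j > 0:
        if (i > 0 and j > 0
                and np.isclose(F[i, j], F[i - 1, j - 1] + c(s1[i - 1], s2[j - 1]))):
            pairs.append((i - 1, j - 1)); i -= 1; j -= 1
        elif i > 0 and np.isclose(F[i, j], F[i - 1, j] + gap_s2(i, j)):
            pairs.append((i - 1, None)); i -= 1
        else:
            pairs.append((None, j - 1)); j -= 1
    return pairs[::-1]
```

With cheap terminal gaps, a sequence that has lost its bottom leaf aligns one rank up instead of being forced into poor one-to-one matches.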

Multiple sequence alignment

The initial ordering of each sequence gives a relative leaf rank to each leaf that may differ from their absolute rank, due to the fall of senescent leaves during plant development or due to segmentation errors (Fig. 4B). The alignment of all the leaf sequences in the time-series is achieved using a progressive method [32] to perform the multiple alignment task. It consists of a succession of pairwise alignments of the sequences \({S}_{t}\) using the NW algorithm, in ascending temporal order. A profile is defined as an alignment of several sequences treated as a unique sequence of columns [43]. Let \({\Omega }_{1 -> k}\) be the profile constituted of the alignment of the first \(k\) sequences in the time-series \({({S}_{t})}_{t=1...T}\), each sequence containing \(n\) elements after the addition of possible gaps. At each time point \(t>1\), the sequence \({S}_{t}\) is aligned with the profile \({\Omega }_{1 -> t-1}\), resulting in a new profile \({\Omega }_{1 -> t}\). Aligning \({S}_{t}\) with \({\Omega }_{1 -> t-1}\) requires a sequence-profile cost function \(c\) that we adapt from \({c}_{vv}\) [44]. Let \(\omega\) be a column of \({\Omega }_{1 -> t-1}\) of length \(t-1\) containing \(t-1-k\) gaps, and \(k\) leaves associated to the feature vectors \(\{{v}_{1}, \dots ,{v}_{k}\}\). Let \({v}^{\prime}\) be the feature vector associated with a leaf observation present in the sequence to align \({S}_{t}\). Then the cost \(c\) of the match between \(\omega\) and \({v}^{\prime}\) is defined as:

$$c\left( {{\upomega },v^{\prime}} \right) = \frac{1}{k}.\mathop \sum \limits_{1 \le i \le k} c_{vv} \left( {v_{i} ,v^{\prime}} \right)$$
(5)
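Equation (5) amounts to averaging the pairwise costs over the non-gap entries of the profile column; a sketch (gaps encoded as None, an encoding choice of ours):

```python
import numpy as np

def profile_cost(column, v_new):
    """Sequence-profile cost of Eq. (5): mean of the pairwise costs
    c_vv between the new leaf's feature vector and the non-gap feature
    vectors of one profile column (gaps stored as None)."""
    vectors = [np.asarray(v, float) for v in column if v is not None]
    v_new = np.asarray(v_new, float)
    return float(np.mean([np.linalg.norm(v - v_new) for v in vectors]))
```

Averaging over only the observed leaves keeps columns with many gaps from being artificially cheap or expensive to match.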
Fig. 4
figure 4

Leaf tracking in a time-series of 3D maize plant segmentations. A Input data for the tracking algorithm, consisting of a time-series of 3D maize plant segmentations. Each segmentation includes ligulated leaves, which can be topologically ordered by increasing insertion height, and growing leaves. Ligulated and growing leaves are classified using stem height (red dotted line), which was smoothed over time. F Output data of the tracking algorithm, consisting of 3D + t reconstruction of the plant with time-coherent leaf ranks (numbers and colours), representing the order of appearance of the leaves. B, C, D, E illustrate the successive steps of the algorithm, which consists in assigning time-consistent ranks to segmented leaves. Each segmented leaf is represented by a rectangle, coloured according to its ground-truth rank (black = segmentation anomaly), and positioned according to its observation time and predicted rank. Only one third of the time steps are represented for visibility. B initialization of rank assignment by ordering the ligulated leaves topologically, C rank assignment after sequence alignment, D rank assignment after removing abnormal columns, E final rank assignment after adding growing leaves

Sequences are aligned progressively until obtaining the final profile \({\Omega }_{1 -> T}\) that aligns all the sequences in \({({S}_{t})}_{t=1...T}\), yielding a first estimation of the leaf ranks (Fig. 4C).

Finally, each tracked leaf, except the first and last, is deleted if \({n}_{k}<({n}_{k-1}+{n}_{k+1})/4\), \({n}_{k}\) being the number of times that the \({k}^{th}\) leaf appears in the time-series, to cope with possible segmentation errors (Fig. 4D).
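This pruning rule can be sketched as:

```python
def prune_rare_ranks(counts):
    """Column-pruning rule: the k-th tracked leaf (neither first nor
    last) is dropped if it appears fewer than (n_{k-1} + n_{k+1}) / 4
    times in the time-series. `counts[k]` is the number of observations
    of leaf k; returns the indices of the leaves kept."""
    keep = []
    for k, n_k in enumerate(counts):
        if 0 < k < len(counts) - 1 and n_k < (counts[k - 1] + counts[k + 1]) / 4:
            continue  # rarely observed relative to its neighbours: artefact
        keep.append(k)
    return keep
```

A leaf observed far less often than its two topological neighbours is most likely a segmentation artefact (e.g. an ear) rather than a real rank.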

Backwards tracking of growing leaves from ligulated ones

The final tracking step consists of predicting the rank of the detected growing leaves (Fig. 4E). Unlike ligulated leaves, growing leaves cannot be ordered a priori by their topology since they all emerge from the same point, and their geometry is not constant. However, it can be assumed that the shape, orientation and position of a leaf evolve smoothly over time during its growth phase, with only minor changes for an observation frequency of 24 h. Leaves can therefore be tracked backwards by associating leaf observations sharing a similar shape, from the ligulated stage to the leaf emergence. To that end, a metric \(D\) is used to quantify the dissimilarity of two growing leaves, based on the distance between their central lines, given as 3D polylines. Each 3D polyline is converted to a set of \(n=20\) points \([{pl}_{1},\dots ,{pl}_{n}]\) regularly spaced along the polyline. With \(d({p}_{1},{p}_{2})\) the euclidean distance between two points, we define the distance between two polylines as:

$$D\left( {pl, pl^{\prime}} \right) = \mathop \sum \limits_{1 \le i \le n} d\left( {pl_{i} , pl^{\prime}_{i} } \right)$$
(6)
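Equation (6), with the arc-length resampling to \(n = 20\) points, can be sketched as:

```python
import numpy as np

def resample(polyline, n=20):
    """Resample a 3D polyline to n points evenly spaced in arc length."""
    p = np.asarray(polyline, dtype=float)
    seg = np.linalg.norm(np.diff(p, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])   # cumulative arc length
    t = np.linspace(0.0, s[-1], n)
    return np.column_stack([np.interp(t, s, p[:, k]) for k in range(p.shape[1])])

def leaf_distance(pl1, pl2, n=20):
    """Dissimilarity D of Eq. (6): sum of pointwise euclidean distances
    between the two resampled midrib polylines."""
    a, b = resample(pl1, n), resample(pl2, n)
    return float(np.linalg.norm(a - b, axis=1).sum())
```

Resampling to a fixed number of points makes polylines of different lengths comparable point by point.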

The algorithm tracks backwards the growth trajectory of each leaf, starting from rank 1, and finishing with the last rank. For each processed leaf, the algorithm works iteratively, from the starting point (i.e. when the leaf is ligulated) to the ending point (i.e. at leaf emergence). At each tracking step the algorithm computes the metric \(D\) between the last associated polyline and every remaining non-ligulated leaf, and selects the minimum.
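The backward pass then reduces to a greedy nearest-neighbour walk through time; the sketch below assumes a generic `distance` callable (e.g. the polyline metric \(D\)) and a simplified data layout, not the authors' exact implementation:

```python
def backtrack_leaf(start_polyline, candidates_by_date, distance):
    """Greedy backward tracking sketch: starting from the leaf's
    polyline at ligulation, walk back in time and at each date pick
    the candidate growing-leaf polyline closest (by `distance`) to
    the last polyline kept.

    candidates_by_date : list (most recent date first) of lists of
    remaining non-ligulated leaf polylines at that date.
    """
    track, last = [], start_polyline
    for candidates in candidates_by_date:
        if not candidates:
            break  # leaf not yet emerged at this date
        last = min(candidates, key=lambda pl: distance(last, pl))
        track.append(last)
    return track
```

Processing ranks bottom-up and removing matched observations from the candidate pools would prevent two ranks from claiming the same observation.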

Computation of phenotypic traits

Various phenotypic traits are extracted from the 3D + t plant reconstruction, to quantify (i) rank-based phenotypes, (ii) plant development, (iii) individual leaf development.

Rank-based phenotype \({x}_{pf}\) describes the variation of a morphological variable \(x\) of leaves (e.g. length, insertion height, azimuth, etc.) as a function of their position on the stem (rank). For each rank \(r\), \({x}_{pf}(r)\) represents the value of \(x\) for the leaf \(r\) once it has reached ligulation. \({x}_{pf}(r)\) is calculated as the median of the values of \(x\) associated with the ligulated leaves along the time course. Leaf observations exceeding 20 days20°C (thermal time expressed in equivalent days at 20 °C) after ligulation are not considered. Here we consider the case of leaf length \(({l}_{pf}\)) and leaf insertion height \(({h}_{pf}\)).

Plant development is quantified at any date through the following traits:

-Stem height \({h}_{s}\) corresponds to the height of the highest collar and is directly extracted from the plant reconstruction.

-Visible leaf stage \({n}_{vis}\) corresponds to the rank of the latest emerging leaf. Let \({r}_{vis}(t)\) be the maximum rank among observed leaves at date \(t\), and \({t}_{med}(r)\) the median of time points \(t\) where \({r}_{vis}(t) = r\). We define the emergence timing \({t}_{vis}\) of the \(r\)-th leaf such as:

$$t_{vis} \left( r \right) = \frac{{t_{med} \left( {r - 1} \right) + t_{med} \left( r \right)}}{2}\quad \left( r \in {\mathbb{N}}^{+} \right)$$
(7)

We deduce \({n}_{vis}\) such that \({n}_{vis}\left(t\right)={t}_{vis}^{ -1}(t)\) for each emergence timing \(t\). Finally, \({n}_{vis}\) is extended to any value of \(t\) by linear interpolation.
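The construction of \(n_{vis}\) from Eq. (7) can be sketched as follows (variable names are ours):

```python
import numpy as np

def visible_leaf_stage(times, max_ranks):
    """Sketch of the n_vis computation: median observation time per
    rank, emergence timing t_vis(r) of Eq. (7), then linear
    interpolation of rank over time.

    times     : observation dates
    max_ranks : highest leaf rank observed at each date
    Returns a callable n_vis(t)."""
    ranks = sorted(set(max_ranks))
    t_med = {r: float(np.median([t for t, rr in zip(times, max_ranks) if rr == r]))
             for r in ranks}
    # Eq. 7: t_vis(r) = (t_med(r-1) + t_med(r)) / 2, for ranks with a predecessor
    t_vis = [(t_med[r0] + t_med[r1]) / 2 for r0, r1 in zip(ranks, ranks[1:])]
    r_vis = ranks[1:]
    return lambda t: float(np.interp(t, t_vis, r_vis))
```

Using the median time per rank, rather than the first date the rank is seen, damps occasional over- or under-segmentation at single dates.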

-Ligulated leaf stage \({n}_{lig}\) corresponds to the rank of the last ligulated leaf. Using the same method as for \({n}_{vis}\), it is deduced from ligulation timing \({t}_{lig}\) such as:

$$t_{lig} \left( r \right) = h_{s}^{ - 1} \left( {h_{pf} \left( {r - 1} \right)} \right)\quad \left( r \in {\mathbb{N}}^{+} \right)$$
(8)

Here, a piecewise constant interpolation is used to restrict \({n}_{lig}\) to integer values.

Leaf growth is given by the successive length values of the observed \(r\)-th leaf until ligulation.

Validation with manual measurements

Ground-truth data was manually collected on a randomly selected subset of plants with late harvesting. This validation data was then used to evaluate tracking performance, and the accuracy of the phenotypic traits obtained with the pipeline.

Leaf ranks were annotated on 30 plants at each time point using the images (10,980 annotations). Segmented leaves corresponding to artefacts (e.g. the ear of maize) were not annotated.

A subset of 10 plants was randomly selected for manual measurements on images. Leaf lengths and leaf insertion heights were measured at the ligulation stage for all ranks (113 and 173 annotations respectively). Additional leaf length measurements were also performed for leaf ranks 6 and 9 during their whole growth phase (234 annotations). Stem height was annotated for all time points (369 annotations).

Ligulated and visible leaf stages were measured in the greenhouse on the 355 plants with late harvesting, with an average of 7 time points per plant (2289 and 1891 annotations respectively). Ligulated leaf stage is given by an integer, while visible leaf stage is given by a real number: for example, a value of 7.4 means that the last visible leaf is of rank 7, and has reached 40% of its growth.

The phenotypic traits were compared with ground-truth observations using the following metrics: bias, root-mean-square error (RMSE), mean absolute percentage error (MAPE) and coefficient of determination (R2).
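For reference, a sketch of these metrics under their standard definitions (the text does not give formulas, so the exact conventions, e.g. MAPE expressed in percent, are our assumptions):

```python
import numpy as np

def validation_metrics(pred, obs):
    """Bias, RMSE, MAPE (%) and R^2 between predicted and observed
    trait values, under the usual definitions."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    err = pred - obs
    bias = float(err.mean())
    rmse = float(np.sqrt((err ** 2).mean()))
    mape = float(np.mean(np.abs(err / obs)) * 100.0)
    ss_res = float((err ** 2).sum())
    ss_tot = float(((obs - obs.mean()) ** 2).sum())
    return {"bias": bias, "RMSE": rmse, "MAPE": mape, "R2": 1.0 - ss_res / ss_tot}
```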

The pipeline and the data analysis were implemented in Python.

Results

A robust and consistent alignment of segmented plants over time

The full pipeline was run on the whole late-harvesting dataset (355 plants). Each plant was observed on an average of 43 time points with an average frequency of 24 h, from plant emergence until 3 days after panicle deployment, making a total of 237,600 images analysed. Leaf tracking output is illustrated in a video in Additional file 2. 1.7% of time points were automatically discarded due to abnormally shaped stems (Additional file 3). The global quality of the pipeline was assessed by leaf rank assignment accuracy, which is defined as the percentage of exact matches between the predicted rank of segmented leaves and manually annotated ground-truth ranks (observation). This metric was evaluated separately depending on whether leaves were identified as ligulated or growing, since their tracking relies on different algorithms (Table 1).

Table 1 Evaluation of the performance of the maize leaf tracking algorithm

Leaf rank assignment accuracy is evaluated on 30 different plants on a total of 10,980 leaves. Leaf rank assignment accuracy is the percentage of non-artefact segmented leaves whose predicted rank matches ground-truth rank. MAE is the mean absolute error, and this metric was only computed among wrong predictions. n is the number of leaves considered. These metrics are presented separately for ligulated and growing leaves.

For ligulated leaves, rank assignment accuracy showed a median value of 98.8% per plant, with a minimum of 90.8%, resulting in a high overall accuracy (97.7%, Table 1). Most of the errors occurred for the lower and upper ranks (Fig. 5, blue line), but the resulting rank error did not exceed 1 most of the time (MAE = 1.07 among wrong predictions, Table 1). Errors in ranks 1–3 could be partly removed and errors in ranks 4–5 almost completely removed by putting aside the leaves older than 20 days20°C (thermal time) after ligulation (Fig. 5; blue dotted line). Such old leaves are harder to track, as their morphology tends to change excessively when senescing. The remaining errors in ranks 1–3 were probably due to segmentation issues. For instance, leaf 1 is sometimes not segmented in the 3D reconstruction because of its small size (data not shown). About half of the errors in the upper ranks (10 and more) could be avoided by manually removing the dates where the maize ear was misidentified as a leaf during the segmentation process (Fig. 5; blue dotted line), demonstrating the importance of a correct identification of this organ. The remaining errors in the upper ranks might be due to the increasing complexity of maize architecture during its development (longer leaves, more occlusions due to leaf crossings), and because these late emerging leaves are observed fewer times, making their identification more difficult.

Fig. 5
figure 5

Accuracy of leaf rank assignment as a function of leaf rank. Leaf rank assignment accuracy is the percentage of non-artefact segmented leaves (n = 10,940) whose predicted rank matches the ground-truth rank (observation). This metric is computed separately for each ground-truth rank value, for ligulated leaves (wide blue line) and growing leaves (wide orange line). Point values and error bars correspond respectively to the mean and 95% bootstrap confidence interval among 30 plants. Blue dotted line: ligulated leaf results without considering (i) leaf observations exceeding 20 d20°C after ligulation, and (ii) dates where the maize ear was segmented as a leaf. Orange dotted line: growing leaf results after initialising their tracking with ground-truth ligulated leaf ranks

Rank assignment accuracy was lower for growing leaves (85.3%, Table 1), but still high for the bottom ranks (92.8% for ranks 1–10). This might be because a different algorithm is used than for ligulated leaves, and because of the intrinsic difficulties of detecting a developing organ: growing leaves cannot be topologically ordered a priori, and they may undergo rapid changes in shape and geometry over time. Also, growing leaf tracking uses the ligulated leaf rank assignment as a starting point, which caused error propagation. Indeed, fewer errors were observed for lower leaves when manually initialising the growing leaf tracking with ground-truth ligulated leaf ranks (Fig. 5; orange dotted line). Overall, the errors caused by growing leaf tracking are more frequent in the upper ranks (Fig. 5; orange line), which might again be due to the increasing complexity of maize leaf structure over time. In particular, more leaves emerge at the same time in the maize whorl at late growth stages [45], and these leaves have a more similar morphology, making them difficult to distinguish (see Fig. 4A).
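The distance-based back-tracking of growing leaves can be illustrated by a greedy nearest-neighbour sketch between two consecutive dates (the tip coordinates, the distance threshold and the matching strategy here are hypothetical simplifications, not the pipeline's actual implementation):

```python
import numpy as np

def backtrack_growing(leaves_t, leaves_prev, max_dist=5.0):
    """Match each growing leaf at date t to its closest unmatched
    counterpart at the previous date, based on 3D tip positions (cm).
    Returns, per leaf, the index of the match in leaves_prev, or -1
    if no previous leaf lies within max_dist (newly emerged leaf)."""
    matches, taken = [], set()
    for tip in leaves_t:
        d = np.linalg.norm(np.asarray(leaves_prev, float) - tip, axis=1)
        match = -1
        for j in np.argsort(d):  # try closest candidates first
            if d[j] <= max_dist and int(j) not in taken:
                match = int(j)
                taken.add(match)
                break
        matches.append(match)
    return matches

# hypothetical example: first leaf grew 2 cm, second leaf is new
matches = backtrack_growing(leaves_t=[[0, 0, 12], [0, 0, 40]],
                            leaves_prev=[[0, 0, 10], [0, 0, 20]])
# matches = [0, -1]
```

Because each match depends on the previous one, an early error propagates backwards through the time-series, which is consistent with the error propagation from ligulated leaf ranks discussed above.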

An automated quantification of plant development during the whole vegetative phase

This pipeline provides two complementary ways to quantify plant development automatically: (i) vertically, through stem height, or (ii) through the number of emerged and ligulated leaves (leaf stage).

(i) Stem height was predicted with high accuracy (RMSE = 2.02 cm, R2 = 0.999) for all growth stages. This provides a way to quantify plant vertical development regardless of how the leaves are deployed. It also provides a spatial delimitation of the mature and growing parts of the plant, which is more accurate than the morphological criteria used in [16] (R2 = 0.68) and [46] (R2 = 0.92 for early growth stages), and remains robust at advanced stages once the maize ear emerges (Fig. 6A; no outliers for high observed values, i.e. advanced stages).

Fig. 6
figure 6

Evaluation of the phenotypic traits obtained with the pipeline. Pipeline predictions are compared with ground-truth observations for the following traits: A stem height, B visible leaf stage, C ligulated leaf stage, D, E length of leaves 6 and 9 during the growing phase, F ligulated leaf length, G ligulated leaf insertion height, I leaf length variation as a function of leaf rank. For each trait, a linear regression is applied (x = observation, y = prediction). In A, B, C, F, the pipeline results are compared with Phenomenal [16] outputs (points, regression equation and metrics are displayed in grey). Leaf growth (D, E) and leaf length variation as a function of leaf rank (I) are illustrated for one representative plant. In I, larger points correspond to median predictions. n = number of points, RMSE = root-mean-square error, MAPE = mean absolute percentage error

(ii) The leaf stage predicted by the pipeline was correlated with the ground-truth observation (R2 = 0.87). Predictions were twice as accurate as simply counting the leaves present on the reconstructed plant (Fig. 6B: RMSE = 1.29 vs RMSE = 2.62 for Phenomenal). Our method therefore avoids the bias that may occur in other leaf counting methods [30, 38, 46] that do not take into account the disappearance of bottom leaves due to senescence. However, this trait was still consistently underestimated (bias = −1.09, Fig. 6B). The remaining error might be because the last leaves that have just emerged were often missing in the 3D reconstruction. This bias increases over time, since more and more leaves grow simultaneously in the whorl [45]. A linear regression can be applied to remove the bias from the prediction (red dashed line in Fig. 6B), which reduces the RMSE from 1.29 to 0.44.
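The regression-based bias removal can be sketched as follows (a minimal illustration; the calibration values below are hypothetical, not the fitted coefficients of Fig. 6B):

```python
import numpy as np

def debias(predicted, observed):
    """Fit prediction = a * observation + b on a calibration set,
    then invert the fit to correct new predictions (as done with
    the red dashed line in Fig. 6B)."""
    a, b = np.polyfit(observed, predicted, deg=1)
    return lambda p: (np.asarray(p, float) - b) / a

# hypothetical calibration: leaf stage underestimated by ~1 leaf
obs = np.array([4.0, 8.0, 12.0, 16.0])
pred = obs - 1.0
correct = debias(pred, obs)
# correct(pred) recovers obs (up to numerical precision)
```

The correction is only as good as the linearity of the bias; a bias that grows with leaf stage, as suggested in the text, is still captured here through the slope term.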

Leaf stage was also measured considering only the ligulated leaves, which resulted in a higher correlation (R2 = 0.93) and a lower bias (bias = 0.32, Fig. 6C). To our knowledge, this is the first time that collar appearance rate, which is of crucial importance in maize development models [47, 48], can be measured automatically at this degree of precision.

An automated tracking of individual leaves development

The pipeline was used to automatically extract leaf growth dynamics for all leaf ranks. For evaluation, we focused on the length dynamics of leaves 6 and 9 before ligulation. For leaf 6, predictions were strongly correlated with ground-truth (R2 = 0.96, Fig. 6D). For leaf 9, the predicted length was close to ground-truth most of the time, but there were more outliers than for leaf 6, resulting in a lower correlation (R2 = 0.67, Fig. 6E). Leaf dynamics seem to be captured less accurately for higher leaf ranks, which may be due to more frequent leaf rank assignment errors (Fig. 5, orange line), but also, more generally, because leaf lengths are less accurate at advanced stages due to frequent errors in the 3D organ segmentation (i.e. before tracking). Although the dynamics of leaf growth were only evaluated in terms of length here, the pipeline captures the full evolution of leaf shape over time (Fig. 4B), which can be described by other variables such as azimuth (Additional file 4A). Such leaf dynamics can also be extracted during the senescence phase, when leaves collapse (Additional file 5).

An automated reconstruction of plant architecture development

This pipeline was used to automatically estimate various morphological features of mature (i.e. ligulated but not senesced) leaves as a function of leaf rank. We considered leaf length and leaf insertion height for evaluation, and both were highly correlated with ground-truth observations (length: R2 = 0.96, Fig. 6F; insertion height: R2 = 0.98, Fig. 6G). As for leaf dynamics, the method can be extended to any other variable describing leaves, such as leaf width, internode width or leaf azimuth (Additional file 4B).

Thanks to tracking, median values can be derived from the successive measurements of morphological features for the same leaf over time. This significantly reduces the errors that would occur using independent time points (e.g. ranks 2–4 in Fig. 6I). Outliers remaining after taking the median were often observed for the last ranks (10 and above). This might be due to (i) more leaves overlapping in the upper part of the plant, leading to less accurate 3D segmentations, (ii) the fact that the last leaves reach ligulation later and are therefore observed fewer times, making the median less robust, and (iii) a higher number of tracking errors for the last ranks.
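The median-based aggregation enabled by tracking can be sketched as follows (a minimal illustration; the leaf lengths are hypothetical, and the paper's pipeline may aggregate additional variables the same way):

```python
import numpy as np
from collections import defaultdict

def median_profile(observations):
    """observations: (rank, value) pairs pooled from all dates where
    a tracked leaf was seen. Taking the median per rank is robust to
    occasional segmentation errors at single dates."""
    by_rank = defaultdict(list)
    for rank, value in observations:
        by_rank[rank].append(value)
    return {r: float(np.median(v)) for r, v in sorted(by_rank.items())}

# hypothetical leaf lengths (cm): rank 2 has one outlier date (80 cm)
obs = [(2, 40.0), (2, 41.0), (2, 80.0), (3, 55.0), (3, 56.0)]
profile = median_profile(obs)
# profile = {2: 41.0, 3: 55.5} -- the outlier at rank 2 is discarded
```

This also makes explicit why late ranks remain fragile: with only one or two observations per leaf, the median cannot reject an outlier.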

Discussion

An adaptation of the sequence alignment framework to robust leaf tracking

Sequence alignment is proposed as an original solution to the leaf tracking problem, allowing us to consider both (i) the topological information at a fixed date, by ordering the ligulated leaves in a sequence, and (ii) the redundancy of the geometric information over time, by describing each ligulated leaf as a vector of temporally invariant features. While sequence alignment has already been applied outside the bioinformatics field [49, 50, 51], this is, to our knowledge, the first time it is used to track plant and organ growth. This framework allows us to handle the main difficulties of leaf tracking (segmentation artefacts, leaf appearance/disappearance) via the analogy of insertions and deletions of elements in a sequence. In plant phenotyping, tracking is often done step by step, by successive pairwise comparisons of reconstructed plant models, which can propagate errors from the first computed alignments to the end of the time-series. Sequence alignment methods allow the formulation of a global resolution algorithm for this optimisation problem. In this study, we used an optimisation method called progressive alignment [32], which progressively integrates the models of successive time steps by comparing them to a “profile” representing all the previously matched models. This method proved highly effective on our dataset.

Sequence alignment algorithms are known to be highly dependent on the choice of their parameters, especially the gap penalty [52]; further parameter fine-tuning should therefore be considered to optimise the method on more challenging datasets in the future. While this method was tested on maize plants, it could be adapted to any other species for which (i) the order of leaf emergence can be derived topologically along a single stem axis, and (ii) a subset of leaves with a stable geometry can be identified at each time point (e.g. sunflower or cotton). However, extending this method to branched plants, such as wheat or sorghum with tillers, remains an open problem.

Temporal tracking enhances the robustness of 3D reconstruction

This work focused on the temporal processing of 3D segmented plant reconstructions. Each plant segmentation is performed at a given date independently of other dates, and can contain inaccurate leaf reconstructions and segmentation artefacts. With our tracking method, such reconstruction errors can be compensated for over time by grouping several observations of the same leaf in the time-series. The sequence alignment framework also helps to overcome segmentation artefacts (e.g. missing leaves, the ear misidentified as a leaf) by setting them apart. However, such segmentation errors were still often responsible for subsequent tracking errors in our dataset. This is particularly visible in the advanced stages of growth, where leaves emerge more frequently, are longer, and therefore intersect more, making the segmentation task more error-prone. It might therefore be better to address these segmentation issues beforehand, rather than optimising the tracking parameters to compensate for them later. For example, the maize ear could be detected beforehand in the segmented plant objects, using approaches similar to panicle [34] and collar [38] detection. The continuous improvement of 3D plant reconstruction and segmentation methods [31, 35, 36, 53], which are all compatible with the solution reported here, is also very encouraging for a rapid improvement of the performance of the pipeline as a whole.

A robust pipeline that can be used in high-throughput conditions

While other 3D + t phenotyping pipelines have already been proposed, they were mostly tested on a small number of plants and time points, at early growth stages (ca. 5–10 time points and 5–10 plants). In contrast, the pipeline presented here was tested on a dataset of 355 plants of various genotypes grown under different environmental conditions. Each plant was observed through a time-series of ~ 43 time points covering all growth stages, making a total of 237,600 images analysed. Most traits were validated on a subset of only 30 plants due to time-consuming annotations, but leaf stage outputs were evaluated on the full dataset of 355 plants (see Fig. 6B, C). Other trait outputs are shown in Fig. 7 for the entire dataset of late-harvesting plants, and overall exhibit coherent patterns for all ranks and growth stages. All these results suggest that this pipeline can be used to phenotype large panels of maize plants.

Fig. 7
figure 7

Automatic extraction of architecture and development traits at organ level in high-throughput conditions. Extraction of A stem growth and B leaf length variation as a function of leaf rank using the pipeline, for 355 maize plants of 60 different genotypes, grown from plant emergence to flowering stage (237,600 images analysed), under well-watered (WW; 178 plants, blue) and water deficit (WD; 177 plants, red) conditions. A: one line per plant, each line is smoothed (see Step 5 of the pipeline). B: point = mean, error bar = standard deviation

In this study, PhenoTrack3D was tested on dense (average frequency of 24 h), high-quality 3D acquisitions (isolated plants to avoid occlusions), which are typical of image-based phenotyping in controlled conditions. This type of data is the primary target for PhenoTrack3D, and only minor adaptations in data preparation should be needed to reuse our method in such conditions. Some methods using terrestrial LiDAR point clouds have also been successful at segmenting organs in field conditions (maize grown at low densities) [54, 55], hence producing data that could fit the requirements of PhenoTrack3D, despite the lower quality of the segmentation of individual plants. The main issue would be the precise localisation of the stem tip, since collars may not be visible in point clouds, unless an alternative method is available to separate the mature and growing parts of the plant. For acquisitions in agronomic conditions where plant density is higher, other adaptations will probably be required, since only portions of the mature part of the plant can be measured at a time (only 40 cm depth is visible with a reasonable occlusion level), leading to incomplete, possibly non-overlapping sequences that could make the sequence alignment fail.

Quantification of maize architecture and development for plant modelling

Plant development is usually quantified indirectly and incompletely in phenotyping platforms, using for example the height of the highest plant pixel in images [56] as a proxy for vertical development, or the total number of plant pixels [6]. In contrast, our method directly measures detailed botanical features for all organs, together with their dynamics. For example, the stem height estimated with our pipeline fits the botanical definition of vertical plant growth. Other traits such as individual leaf elongation have already been measured semi-automatically in HTP platforms (e.g. with transducers [57]), but such methods are limited to monitoring a small number of leaves per plant, which can be a serious limitation [58]. Instead, our method can quantify the growth dynamics of all leaves simultaneously, therefore capturing the full growth dynamics of the plant. Finally, rank-based traits such as the variation of leaf length as a function of leaf rank, which is widely used to model plant development [48], are difficult to measure non-destructively, and to our knowledge this had never been done on hundreds of maize plants.

Such automatic phenotypic measurements are valuable input data to parameterize models such as Functional-Structural Plant Models (FSPM) [47, 59], which fully capture the 3D plant architecture and development down to the organ level [60]. While these models are often calibrated via indirect proxies due to a lack of available data, our 3D + t plant reconstruction method could give more direct access to accurate plant model calibration. Moreover, the phenotypic traits obtained with our pipeline show differences between genotypes and watering treatments (Fig. 7), illustrating that the pipeline is sufficiently accurate to compare GxE interactions. Using our pipeline on large plant panels in HTP conditions could therefore provide the input data necessary to parameterize both genetic and environmental effects on plant architecture and development in FSPM models.

Conclusion

We propose PhenoTrack3D, a pipeline that reconstructs, from time-inconsistent segmentations, the 3D architectural development of a maize plant at organ level, from emergence to flowering. We solve the challenging problem of leaf tracking by adapting a sequence alignment algorithm, which proved to be a robust and efficient strategy for this task. Tracking leaves over the entire growth cycle allows retrieving the true botanical rank of organs at all time steps, giving access to widely used rank-based phenotypes (e.g. leaf length variation as a function of leaf rank). Moreover, the different observations of the same organ at different time steps are grouped together, providing more accurate measurements by compensating for occasional errors over time. Tracking also makes it possible to retrieve the dynamics of plant development, both at organ level (leaf growth) and plant level (plant height dynamics, leaf stage). This pipeline is fully automatic and works smoothly with available automatic reconstruction-segmentation pipelines, such as Phenomenal [16]. It could therefore be used to measure phenotypic traits on thousands of plants in HTP platforms, with sufficient accuracy to compare the development and architecture of plants across GxE combinations along the entire growth cycle.

Availability of data and materials

The source code and examples are available on GitHub (https://github.com/openalea/phenotrack3d) under an open-source licence (CeCILL-C).

References

  1. Barthélémy D, Caraglio Y. Plant architecture: a dynamic, multilevel and comprehensive approach to plant form, structure and ontogeny. Ann Bot. 2007;99(3):375–407.

  2. Bucksch A, Atta-Boateng A, Azihou AF, Battogtokh D, Baumgartner A, Binder BM, Braybrook SA, Chang C, Coneva V, DeWitt TJ, Fletcher AG. Morphological plant modeling: unleashing geometric and topological potential within the plant sciences. Front Plant Sci. 2017;9(8):900.

  3. Long SP, Zhu XG, Naidu SL, Ort DR. Can improvement in photosynthesis increase crop yields? Plant Cell Environ. 2006;29(3):315–30.

  4. Stewart DW, Costa C, Dwyer LM, Smith DL, Hamilton RI, Ma BL. Canopy structure, light interception, and photosynthesis in maize. Agron J. 2003;95(6):1465–74.

  5. Song Q, Zhang G, Zhu XG. Optimal crop canopy architecture to maximise canopy photosynthetic CO2 uptake under elevated CO2–a theoretical study using a mechanistic model of canopy photosynthesis. Funct Plant Biol. 2013;40(2):108–24.

  6. Cabrera-Bosquet L, Fournier C, Brichet N, Welcker C, Suard B, Tardieu F. High-throughput estimation of incident light, light interception and radiation-use efficiency of thousands of plants in a phenotyping platform. New Phytol. 2016;212(1):269–81.

  7. Lacube S, Fournier C, Palaffre C, Millet EJ, Tardieu F, Parent B. Distinct controls of leaf widening and elongation by light and evaporative demand in maize. Plant Cell Environ. 2017;40(9):2017–28.

  8. Perez RP, Fournier C, Cabrera-Bosquet L, Artzet S, Pradal C, Brichet N, Chen TW, Chapuis R, Welcker C, Tardieu F. Changes in the vertical distribution of leaf area enhanced light interception efficiency in maize over generations of selection. Plant Cell Environ. 2019;42(7):2105–19.

  9. Murchie EH, Pinto M, Horton P. Agriculture and the new challenges for photosynthesis research. New Phytol. 2009;181(3):532–52.

  10. Reynolds M, Foulkes J, Furbank R, Griffiths S, King J, Murchie E, Parry M, Slafer G. Achieving yield gains in wheat. Plant Cell Environ. 2012;35(10):1799–823.

  11. Zhu XG, Long SP, Ort DR. Improving photosynthetic efficiency for greater yield. Annu Rev Plant Biol. 2010;2(61):235–61.

  12. Roitsch T, Cabrera-Bosquet L, Fournier A, Ghamkhar K, Jiménez-Berni J, Pinto F, Ober ES. New sensors and data-driven approaches—a path to next generation phenomics. Plant Sci. 2019;1(282):2–10.

  13. Tardieu F, Cabrera-Bosquet L, Pridmore T, Bennett M. Plant phenomics, from sensors to knowledge. Curr Biol. 2017;27(15):R770–83.

  14. Minervini M, Scharr H, Tsaftaris SA. Image analysis: the new bottleneck in plant phenotyping [applications corner]. IEEE Signal Process Mag. 2015;32(4):126–31.

  15. Gibbs JA, Pound M, French AP, Wells DM, Murchie E, Pridmore T. Approaches to three-dimensional reconstruction of plant shoot topology and geometry. Funct Plant Biol. 2016;44(1):62–75.

  16. Artzet S, Chen TW, Chopard J, Brichet N, Mielewczik M, Cohen-Boulakia S, Cabrera-Bosquet L, Tardieu F, Fournier C, Pradal C. Phenomenal: an automatic open source library for 3D shoot architecture reconstruction and analysis for image-based plant phenotyping. BioRxiv. 2019;1:805739.

  17. Poethig RS. Vegetative phase change and shoot maturation in plants. In: Rougvie AE, O’Connor MB, editors. Current topics in developmental biology. Cambridge: Academic Press; 2013. p. 125–52.

  18. Ledent J, Mouraux D. Determination of foliar stage and number of leaves in maize when lower leaves are missing. Agronomie. 1990;10(2):147–56.

  19. Luo W, Xing J, Milan A, Zhang X, Liu W, Kim TK. Multiple object tracking: a literature review. Artif Intell. 2021;1(293):103448.

  20. Li Y, Fan X, Mitra NJ, Chamovitz D, Cohen-Or D, Chen B. Analyzing growing plants from 4D point cloud data. ACM Trans Gr (TOG). 2013;32(6):1.

  21. Aksoy EE, Abramov A, Wörgötter F, Scharr H, Fischbach A, Dellen B. Modeling leaf growth of rosette plants using infrared stereo image sequences. Comput Electron Agric. 2015;1(110):78–90.

  22. Dellen B, Scharr H, Torras C. Growth signatures of rosette plants from time-lapse video. IEEE/ACM Trans Comput Biol Bioinf. 2015;12(6):14708.

  23. Viaud G, Loudet O, Cournède PH. Leaf segmentation and tracking in Arabidopsis thaliana combined to an organ-scale plant model for genotypic differentiation. Front Plant Sci. 2017;11(7):2057.

  24. Yin X, Liu X, Chen J, Kramer DM. Joint multi-leaf segmentation, alignment, and tracking for fluorescence plant videos. IEEE Trans Pattern Anal Mach Intell. 2017;40(6):1411–23.

  25. Paproki A, Sirault X, Berry S, Furbank R, Fripp J. A novel mesh processing based technique for 3D plant analysis. BMC Plant Biol. 2012;12(1):1–3.

  26. Harmening C, Paffenholz JA. A fully automated three-stage procedure for spatio-temporal leaf segmentation with regard to the B-spline-based phenotyping of cucumber plants. Remote Sens. 2020;13(1):74.

  27. Chebrolu N, Magistri F, Läbe T, Stachniss C. Registration of spatio-temporal point clouds of plants for phenotyping. PLoS ONE. 2021;16(2):e0247243.

  28. Bashyam S, Choudhury SD, Samal A, Awada T. Visual growth tracking for automated leaf stage monitoring based on image sequence analysis. Remote Sens. 2021;13(5):961.

  29. Balduzzi M, Binder BM, Bucksch A, Chang C, Hong L, Iyer-Pascuzzi AS, Pradal C, Sparks EE. Reshaping plant biology: qualitative and quantitative descriptors for plant morphology. Front Plant Sci. 2017;3(8):117.

  30. Miao C, Guo A, Thompson AM, Yang J, Ge Y, Schnable JC. Automation of leaf counting in maize and sorghum using deep learning. Plant Phenome J. 2021;4(1):e20022.

  31. Miao T, Zhu C, Xu T, Yang T, Li N, Zhou Y, et al. Automatic stem-leaf segmentation of maize shoots using three-dimensional point cloud. Computers Electron Agric. 2021;187:106310.

  32. Batzoglou S. The many faces of sequence alignment. Brief Bioinform. 2005;6(1):6–22.

  33. Welcker C, Spencer NA, Turc O, Granato I, Chapuis R, Madur D, Beauchene K, Gouesnard B, Draye X, Palaffre C, Lorgeou J. Physiological adaptive traits are a potential allele reservoir for maize genetic progress under challenging conditions. Nat Commun. 2022;13(1):1–3.

  34. Brichet N, Fournier C, Turc O, Strauss O, Artzet S, Pradal C, Welcker C, Tardieu F, Cabrera-Bosquet L. A robot-assisted imaging pipeline for tracking the growths of maize ear and silks in a high-throughput phenotyping platform. Plant Methods. 2017;13(1):1–2.

  35. Gaillard M, Miao C, Schnable J, Benes B. Sorghum segmentation by skeleton extraction. In: Bartoli A, Fusiello A, editors. Computer vision—ECCV 2020 workshops. Cham: Springer International Publishing; 2020. p. 296–311.

  36. Wu S, Wen W, Wang Y, Fan J, Wang C, Gou W, et al. MVS-Pheno: a portable and low-cost phenotyping platform for maize shoots using multiview stereo 3D reconstruction. Plant Phenomics. 2020;12(2020):1–17.

  37. Kutulakos KN, Seitz SM. A theory of shape by space carving. Int J Comput Vision. 2000;38(3):199–218.

  38. Zhou S, Chai X, Yang Z, Wang H, Yang C, Sun T. Maize-IAS: a maize image analysis software using deep learning for high-throughput plant phenotyping. Plant Methods. 2021;17(1):1–7.

  39. Bochkovskiy A, Wang CY, Liao HY. Yolov4: Optimal speed and accuracy of object detection. arXiv preprint arXiv:2004.10934. 2020.

  40. Edgar RC, Batzoglou S. Multiple sequence alignment. Curr Opin Struct Biol. 2006;16(3):368–73.

  41. Edgar RC. MUSCLE: a multiple sequence alignment method with reduced time and space complexity. BMC Bioinformatics. 2004;5(1):1–9.

  42. Needleman SB, Wunsch CD. A general method applicable to the search for similarities in the amino acid sequence of two proteins. J Mol Biol. 1970;48(3):443–53.

  43. Thompson JD, Higgins DG, Gibson TJ. CLUSTAL W: improving the sensitivity of progressive multiple sequence alignment through sequence weighting, position-specific gap penalties and weight matrix choice. Nucleic Acids Res. 1994;22(22):4673–80.

  44. Edgar RC, Sjölander K. A comparison of scoring functions for protein sequence profile alignment. Bioinformatics. 2004;20(8):1301–8.

  45. Ruget F, Bonhomme R, Chartier M. Estimation simple de la surface foliaire de plantes de maïs en croissance. Agronomie. 1996;16(9):553–62.

  46. Souza A, Yang Y. High-throughput corn image segmentation and trait extraction using chlorophyll fluorescence images. Plant Phenomics. 2021;2021:1–15.

  47. Fournier C, Andrieu B. A 3D architectural and process-based model of maize development. Ann Botany. 1998;81(2):233–50.

  48. Lacube S, Manceau L, Welcker C, Millet EJ, Gouesnard B, Palaffre C, et al. Simulating the effect of flowering time on maize individual leaf area in contrasting environmental scenarios. J Exp Bot. 2020;71(18):5577–88.

  49. Abbott A, Tsay A. Sequence analysis and optimal matching methods in sociology: review and prospect. Sociol Methods Res. 2000;29(1):3–3.

  50. Dieny R, Thevenon J, del Rincón JM, Nebel JC. Bioinformatics Inspired Algorithm for Stereo correspondence. In VISAPP. Setúbal: Science and technology publications ida; 2011. p. 465–73.

  51. Prinzie A, Van den Poel D. Incorporating sequential information into traditional classification models by using an element/position-sensitive SAM. Decis Support Syst. 2006;42(2):508–26.

  52. Notredame C. Recent progress in multiple sequence alignment: a survey. Pharmacogenomics. 2002;3(1):131–44.

  53. Li Y, Wen W, Miao T, Wu S, Yu Z, Wang X, et al. Automatic organ-level point cloud segmentation of maize shoots by integrating high-throughput data acquisition and deep learning. Comput Electron Agric. 2022;1(193):106702.

  54. Ao Z, Wu F, Hu S, Sun Y, Su Y, Guo Q, et al. Automatic segmentation of stem and leaf components and individual maize plants in field terrestrial LiDAR data using convolutional neural networks. The Crop Journal. 2021. https://www.sciencedirect.com/science/article/pii/S2214514121002191. Accessed 25 Sep 2022

  55. Lin C, Hu F, Peng J, Wang J, Zhai R. Segmentation and stratification methods of field maize terrestrial LiDAR point cloud. Agriculture. 2022;12(9):1450.

  56. Golzarian MR, Frick RA, Rajendran K, Berger B, Roy S, Tester M, Lun DS. Accurate inference of shoot biomass from high-throughput images of cereal plants. Plant Methods. 2011;7(1):1–1.

  57. Sadok W, Naudin P, Boussuge B, Muller B, Welcker C, Tardieu F. Leaf growth rate per unit thermal time follows QTL-dependent daily patterns in hundreds of maize lines under naturally fluctuating conditions. Plant Cell Environ. 2007;30(2):135–46.

  58. Granier C, Tardieu F. Multi-scale phenotyping of leaf expansion in response to environmental changes: the whole is more than the sum of parts. Plant Cell Environ. 2009;32(9):1175–84.

  59. Cieslak M, Khan N, Ferraro P, Soolanayakanahally R, Robinson SJ, Parkin I, et al. L-system models for image-based phenomics: case studies of maize and canola. In silico Plants. 2022;4(1):diab039.

  60. Wen W, Wang Y, Wu S, Liu K, Gu S, Guo X. 3D phytomer-based geometric modelling method for plants—the case of maize. AoB PLANTS. 2021;13(5):plab055.


Acknowledgements

We are grateful to all members at the M3P platforms for providing technical support, conducting the experiments and collecting data.

Funding

This work was supported by the EU project H2020 731013 (EPPN2020) and STARGATE H2020 952339 (https://stargate-hub.eu/). RF and CP have been supported by the MaCS4Plants CIRAD network, initiated from the AGAP Institute and AMAP joint research units.

Author information

Authors and Affiliations

Authors

Contributions

LCB supervised the experiment and acquired the data. BD designed the pipeline and analysed the data. CF, CP, LCB and RF provided advice on the conception of the pipeline. BD and RF wrote the manuscript and CF, CP and LCB reviewed and edited it. All the authors have approved the manuscript and have made all required statements and declarations. All authors read and approved the final manuscript.

Corresponding authors

Correspondence to Christophe Pradal or Christian Fournier.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

Details on the training of the deep-learning model for maize collar detection.

Additional file 2. Video (.mp4) displaying leaf tracking for 3 maize plants.

Additional file 3.

Example of the anomaly detection step for one maize plant. A) Two reconstructed plants removed during anomaly detection due to their abnormal stem shapes. B) Stem height smoothing over time, allowing an outlier to be corrected.

Additional file 4.

Example of azimuth traits extracted with the pipeline for one maize plant. A) Azimuth dynamics of individual leaves, up to 40 d20°C (thermal time, in equivalent days at 20 °C) after their first detection. B) Leaf azimuth profile.

Additional file 5.

Visualisation of rank assignment following sequence alignment on a set of 3D ligulated leaf polylines. A) Visualisation of all ligulated leaf polylines in a time-series of 3D reconstructions of one plant. B) Assignment of leaf ranks to this set of polylines, using sequence alignment.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Daviet, B., Fernandez, R., Cabrera-Bosquet, L. et al. PhenoTrack3D: an automatic high-throughput phenotyping pipeline to track maize organs over time. Plant Methods 18, 130 (2022). https://doi.org/10.1186/s13007-022-00961-4

DOI: https://doi.org/10.1186/s13007-022-00961-4

Keywords

  • High-throughput phenotyping
  • Computer vision
  • Maize
  • Tracking
  • Sequence alignment
  • Plant physiology