An image analysis pipeline for automated classification of imaging light conditions and for quantification of wheat canopy cover time series in field phenotyping
© The Author(s) 2017
Received: 4 November 2016
Accepted: 16 March 2017
Published: 21 March 2017
Robust segmentation of canopy cover (CC) from large numbers of images taken under different illumination/light conditions in the field is essential for high throughput field phenotyping (HTFP). We addressed this challenge by evaluating different vegetation indices and segmentation methods for analyzing images taken under varying illumination throughout the early growth phase of wheat in the field. For this purpose, 40,000 images of 350 wheat genotypes, taken in two consecutive years, were assessed.
We proposed an image analysis pipeline that allows for image segmentation using automated thresholding and machine learning based classification methods, and for global quality control of the resulting CC time series. This pipeline enabled accurate classification of imaging light conditions into two illumination scenarios, i.e. high light-contrast (HLC) and low light-contrast (LLC), in a series of continuously collected images by employing a support vector machine (SVM) model. Accordingly, scenario-specific pixel-based classification models employing decision tree and SVM algorithms outperformed the automated thresholding methods and improved the segmentation accuracy compared to general models that did not discriminate illumination differences.
The three-band vegetation difference index (NDI3) was enhanced for segmentation by incorporating the HSV V and CIE Lab a color components, i.e. the product images NDI3*V and NDI3*a. Field illumination scenarios can be successfully identified by the proposed image analysis pipeline, and illumination-specific image segmentation can improve the quantification of CC development. The integrated image analysis pipeline proposed in this study shows great potential for automatically delivering robust data in HTFP.
Keywords: High throughput field phenotyping; Image analysis; Machine learning; Canopy cover; Image segmentation; Color vegetation index; Light contrast
Modifying and redesigning modern crop varieties to meet the global food and bioenergy demand is a great challenge of contemporary global agriculture . The selection of crops adapted to future climates requires a full understanding of genotype-by-environment interactions (G × E). This urgently requires advanced phenotyping approaches to bridge phenotype-to-genotype gaps, particularly in the field . Although advanced imaging approaches, image processing and computer vision techniques are widely used in plant phenotyping under controlled conditions, they cannot be as easily used in the field [3–5]. Adapting them to be applied in field conditions is a challenging but urgently needed task . Various field phenotyping platforms have been established with the aim of a holistic analysis of crop growth [7–9], but the next challenges consist of image processing, meaningful data extraction, as well as storing and sharing of data [10, 11]. Automation of image processing pipelines will finally facilitate bridging phenomics and genetics towards more powerful genetic analyses and is required to realize the full potential of genome-wide association studies (GWAS) and other modern plant breeding approaches.
In the last decades, most imaging setups for plant phenotyping were established in indoor environments with well-controlled light conditions (for a review see ). Recently, various phenotyping sensors have been mounted on outdoor vehicles and field phenotyping platforms such as mobile phenotyping buggies  and stationary platforms [7, 8]. These outdoor platforms, ground vehicles and unmanned aerial vehicles (UAVs) provide new opportunities to promote field phenotyping by routinely deploying sensors and measurements at high spatial and temporal resolution. The goal is therefore to operate sensors under varying natural light conditions, for continuous imaging and quantification of plant growth throughout crop development [12–15]. Other important factors, such as the geometry and location of images as well as camera movement during image acquisition, are also among the challenges of field phenotyping.
Appropriate illumination is an important prerequisite for imaging setups under controlled conditions to extract reliable data on phenotypic traits. Under field conditions, however, the ever-changing light and weather conditions lead to variable light contrast between upper and lower canopy and between plant and soil. These uncontrollable, weather-related factors create enormous difficulties for appropriate image analysis and image segmentation, which in turn constrains the power and throughput of field phenotyping. Computational algorithms have been developed for retrieving quantitative information from images, such as for measuring leaf area, shape and canopy cover [15, 16]. Image segmentation for canopy cover is often based on thresholding methods, either by setting an appropriate threshold value to distinguish between plants and background [17, 18] or by using automatic thresholding methods such as the Otsu algorithm [16, 19]. Yet for “poor” images, even the use of multiple threshold values is often not sufficient to separate plants properly from the soil background. In this context, more sophisticated computational algorithms and machine learning methods have been introduced into plant phenotyping to improve the accuracy of image analysis [20, 21]. One method based on decision tree classification for canopy cover extraction has already been evaluated for field phenotyping of individual plants at a defined developmental stage. Other studies have achieved canopy cover segmentation of field images, but still required considerable manual adjustment. Extracting canopy cover data from a large number of images taken at various growth stages and weather conditions has not yet been achieved in an automated manner. Such automated approaches are a necessary prerequisite for high throughput field phenotyping (HTFP) approaches that aim at harvesting “big data”.
Therefore, the objectives of this study were to evaluate different methods for retrieving canopy cover data and to tackle the difficulties to achieve high throughput. We attempted to assemble several methods in the framework of a pipeline that allows for a) classifying light conditions, b) quantifying canopy cover dynamics and c) evaluating data quality of the canopy cover and related traits.
A wheat field experiment was conducted at the ETH plant research station Eschikon-Lindau (47.449°N, 8.682°E, 520 m a.s.l.; soil types ranging from calcareous and slightly acidic to gleyic cambisols), Switzerland, to study genotype-by-environment interactions. In the present study, ca. 350 wheat varieties were grown in two growing seasons, harvested in 2014 and 2015 (550 and 700 plots, respectively), to evaluate the conceptual image analysis pipeline for extracting meaningful canopy cover data for HTFP. The sowing dates were 19 Oct 2013 and 20 Oct 2014 for the harvest years 2014 and 2015, respectively, and the harvest dates were 5 Aug 2014 and 3 Aug 2015.
To capture the canopy development, a customized camera holding frame (Additional file 1: Fig. S1, see also ) was built to carry a 21 megapixel digital single-lens reflex (DSLR) camera (EOS 5D Mark II, Canon Inc., Tokyo, Japan). The camera was commercially customized for monitoring vegetation stress with three channels: the visible blue (380–480 nm, B) and green (480–560 nm, G), and a red (R) channel converted to near-infrared (680–800 nm) (LDP LLC, Carlstadt, NJ, USA, www.maxmax.com). The camera, equipped with a Tamron SP 24–70 mm f/2.8 Di VC USD (IF) lens (Tamron Co., Ltd., Saitama, Japan), was mounted onto the frame with a nadir view of the plots at a constant distance of ~2 m to the ground. A fixed focal length of 62 mm was used for imaging. Imaging was performed plot-wise once per day on 33 (7 Nov 2013–16 Apr 2014) and 34 (7 Nov 2014–4 May 2015) measurement dates for the harvest years 2014 and 2015, respectively, yielding annual totals of 18,216 and 24,276 images.
Image analysis pipeline and methods
In order to evaluate the capability of different color components and VIs for thresholding, images were converted to ExR (ExR = 1.4R − G) and its blue channel variant ExB (ExB = 1.4B − G) , two-band NDI (NDI2 = (R − B)/(R + B)) and three-band NDI (NDI3 = (R + G − 2B)/(R + G + 2B)) images , as well as the products of color components and VIs such as NDI2*a (a: a-channel in the Lab color space), NDI3*a and NDI3*V (V: V-channel in the HSV color space). Subsequently, the Otsu and μRow thresholding methods were implemented to segment the different VI images, and the VI providing the best separability was then used as an additional predictor in the ML-based classification models.
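The index definitions above can be computed per pixel from an RGB array. The following is a minimal sketch in Python/NumPy; the function name and the small epsilon guarding against division by zero are our own illustrative choices, not part of the original pipeline:

```python
import numpy as np

def vegetation_indices(img):
    """Compute the color vegetation indices ExR, ExB, NDI2 and NDI3
    from a float RGB image (H x W x 3, channels R, G, B in [0, 1])."""
    R, G, B = img[..., 0], img[..., 1], img[..., 2]
    eps = 1e-12  # guard against division by zero on dark pixels (assumption)
    return {
        "ExR":  1.4 * R - G,
        "ExB":  1.4 * B - G,
        "NDI2": (R - B) / (R + B + eps),
        "NDI3": (R + G - 2 * B) / (R + G + 2 * B + eps),
    }

# Example: a vegetation-like pixel (high R after the NIR conversion)
# next to a neutral, soil-like pixel
img = np.array([[[0.8, 0.6, 0.2],
                 [0.3, 0.3, 0.3]]])
vi = vegetation_indices(img)
```

Product images such as NDI3*V would then be obtained by element-wise multiplication with the corresponding HSV or Lab color component.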
ML methods fall into two broad categories, unsupervised and supervised learning, both of which were applied in this study for image segmentation. An unsupervised machine learning approach based on K-means clustering was implemented by first determining 3 clusters of the a and b color channels of the Lab color space (CIE 1976 L*a*b*, see ) and then selecting the cluster with the highest NDI3 values as the cluster related to plants (similar to the construction of NDVI, see details in ). Supervised machine learning approaches based on a decision tree (DT) segmentation model (DTSM)  and support vector machines (SVM) were implemented. Nine color components, including R (red), G (green) and B (blue) in the RGB color space; H (hue), S (saturation) and V (value) in the HSV color space; L (lightness), a and b (color-opponent dimensions) in the Lab color space ; and NDI3*V (product of NDI3 and V), were used to classify each pixel into two classes, background and foreground (plants). 150 images were selected for training, in which a total of 2,909,000 pixels were marked as training data. The training data were collected using the software EasyPCC , which allows the user to interactively mark lines on plants and background and then saves pixel-based records as a txt file.
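The pixel-based supervised segmentation can be sketched as follows: each pixel is a feature vector of color components labeled as plant or background, and a classifier trained on the marked pixels predicts a label per pixel of a new image. This toy example uses only three RGB-like features and synthetic Gaussian clusters (the class means and all names here are illustrative assumptions; the study used nine color components plus NDI3*V and millions of marked pixels):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
# Synthetic training pixels: "plant" and "soil" clusters in RGB feature space
plant = rng.normal([0.70, 0.60, 0.20], 0.05, size=(200, 3))
soil  = rng.normal([0.40, 0.35, 0.30], 0.05, size=(200, 3))
X = np.vstack([plant, soil])
y = np.array([1] * 200 + [0] * 200)  # 1 = plant (foreground), 0 = background

# Decision tree classifier, analogous in spirit to the DTSM approach
clf = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X, y)

# "Segment" two new pixels: predict one label per pixel
pixels = np.array([[0.72, 0.61, 0.19],   # vegetation-like
                   [0.41, 0.34, 0.31]])  # soil-like
labels = clf.predict(pixels)
```

For a full image, the H × W × n feature stack would be reshaped to (H*W, n), predicted, and reshaped back to H × W to obtain the binary segmentation mask.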
To cope with highly heterogeneous illumination variations, an imaging illumination classification method based on support vector machines (SVM) was proposed to distinguish high light-contrast (HLC) from low light-contrast (LLC) images. Based on illumination differences, we define an image as an HLC image when extremely bright and dark regions/pixels are observed in it, and as an LLC image when all details of the scene are clearly captured. Extremely bright and dark regions were identified by visual inspection of the images and their histograms. Importantly, this definition of HLC and LLC images differs from the high/low contrast photography technique. Three image exposure intensity features, consisting of the histograms of the R, G and B channels, were used to classify images into the two classes. The numerical distribution of the histogram of each channel was calculated in 256 bins, and thus a concatenated vector of 256*3 numbers was constructed for each image in the SVM-based illumination classification model. Accordingly, in the following step, ML-based segmentation models employing the DT and SVM algorithms were trained for the two illumination classes individually to compare their performance under different illuminations, i.e., models for HLC (MHLC) and LLC (MLLC) images, respectively, as well as general models for all light conditions (MALC).
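The histogram-based illumination classification can be illustrated with a short sketch: each image is reduced to a 768-dimensional vector (three concatenated 256-bin channel histograms) and fed to an SVM. The synthetic images below, with pixel mass piled at both histogram ends for HLC and mid-tones for LLC, are an assumption for illustration only; the study trained on real field images:

```python
import numpy as np
from sklearn.svm import SVC

def exposure_features(img8):
    """Concatenate 256-bin histograms of the R, G and B channels of an
    8-bit image into one 768-element vector, normalized by pixel count."""
    feats = [np.bincount(img8[..., c].ravel(), minlength=256) for c in range(3)]
    return np.concatenate(feats) / img8[..., 0].size

rng = np.random.default_rng(1)

def fake_image(hlc):
    """Toy image generator (assumption): HLC images mix very dark and
    near-saturated pixels, LLC images contain only mid-tone pixels."""
    if hlc:
        vals = np.where(rng.random((64, 64, 3)) < 0.5,
                        rng.integers(0, 20, (64, 64, 3)),
                        rng.integers(235, 256, (64, 64, 3)))
    else:
        vals = rng.integers(90, 166, (64, 64, 3))
    return vals.astype(np.uint8)

X = np.array([exposure_features(fake_image(h)) for h in [True, False] * 20])
y = np.array([1, 0] * 20)  # 1 = HLC, 0 = LLC
svm = SVC(kernel="linear").fit(X, y)
```

A new image is then classified by `svm.predict([exposure_features(image)])`, and the resulting HLC/LLC flag selects the scenario-specific segmentation model (MHLC or MLLC).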
Prior to the final calculation of canopy cover, “salt & pepper” noise removal was performed on all segmented images by applying a median filter (5 × 5 size)  and removing objects smaller than 400 pixels (also see functions indicated in Fig. 1). Visually controlling the segmentation quality of thousands of images requires tremendous effort. In time series measurements, performing exploratory data analysis (EDA) on the extracted canopy cover data can help identify critical time points with segmentation bias. The plot-based canopy cover vector of one date is correlated with the vector of every other date. Low correlation coefficients indicate low consistency, which might be caused by segmentation errors or rapid changes in the ranking of genotypes. The ranking change is often attributable to physiological or environmental changes, for instance snowfall and snowmelt during the winter. In this study, EDA, including the correlation analysis, was implemented in the R software . Image processing, ML-based models and the one-way ANOVA test of model performance were implemented in Matlab (The MathWorks, Inc., Natick, MA, USA).
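The mask post-processing step described above (median filtering followed by small-object removal) can be sketched with SciPy; the function name and the toy mask are illustrative, but the 5 × 5 filter size and 400-pixel threshold follow the text:

```python
import numpy as np
from scipy import ndimage

def clean_mask(mask, min_size=400, filter_size=5):
    """Post-process a binary segmentation mask: a median filter removes
    salt & pepper noise, then connected components smaller than
    min_size pixels are dropped."""
    filtered = ndimage.median_filter(mask.astype(np.uint8), size=filter_size)
    labeled, n = ndimage.label(filtered)
    sizes = ndimage.sum(filtered, labeled, index=np.arange(1, n + 1))
    big_labels = np.flatnonzero(sizes >= min_size) + 1  # labels start at 1
    return np.isin(labeled, big_labels)

# Example: one 40 x 40 plant blob (1600 px, kept) plus an isolated
# noise pixel (removed by the median filter)
mask = np.zeros((100, 100), bool)
mask[10:50, 10:50] = True
mask[80, 80] = True
cleaned = clean_mask(mask)
```

The EDA step would then correlate the plot-wise canopy cover vector of each date against all other dates, e.g. with a plain Pearson correlation matrix over the date-by-plot table.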
Results and discussion
Comparing different VI images for threshold segmentation
Comparing μRow and Otsu for threshold segmentation using NDI3*a and NDI3*V
According to the comparison of the general performance of the different VIs, the two best performing indices (NDI3*a and NDI3*V) were further evaluated for automated thresholding. NDI3*a and NDI3*V images were calculated for three original images (Fig. 4a1, b1 and c1), and the μRow and Otsu methods were used to determine the threshold and segment the images. Results showed that the μRow method allowed for the determination of proper threshold values and segmentation on NDI3*a (Fig. 4a2, b2, c2) and NDI3*V images (Fig. 4a4, b4, c4). In contrast, Otsu did not perform consistently on NDI3*a and NDI3*V images; it allowed proper segmentation only on the NDI3*V images (Fig. 4a5, b5, c5). By incorporating brightness differences into the NDI3 images, the V component of the HSV color space enabled proper threshold determination for NDI3*V, which applies particularly to field-based phenotyping [18, 28]. However, field illumination cannot be easily controlled, and strong light contrast often causes saturated pixels and regions where VI-based image transformations cannot significantly enhance the differences between plants and background. In this case, simple thresholding methods are more likely to induce a systematic decrease or increase (Fig. 4b, c) of the segmented area, and QC becomes particularly difficult and laborious if one relies on visual inspection and judgment of errors.
Influence of different imaging time (illumination) on segmentation
The μRow and Otsu methods were able to determine image-specific threshold values provided that the optimal VI was employed, allowing for automated threshold determination for canopy cover segmentation. However, the capacity of these methods is limited by the choice of VIs and/or color components for thresholding. In contrast, ML-based methods that use more features of different color components and VIs might be more applicable to varying imaging conditions in the field. Thus, we evaluated whether ML-based segmentation is independent of imaging time in HTFP in comparison with manual segmentation.
Pre-classification of imaging conditions
Comparison of image segmentation models of different illuminations
Similar to the results of a previous study in which Guo et al.  applied DTSM to improve the segmentation of images of individual plants, HLC images increased the difficulty of segmentation compared to LLC images. Furthermore, our study demonstrated at the plot level that integrated ML-based models were able to significantly improve the segmentation accuracy compared to simple thresholding methods. This improvement depends on high image resolution; when the resolution is too low, the probability increases that individual pixels contain mixtures of plant and soil, making the approach less powerful.
The appropriate segmentation of wheat plants, with their narrow leaves, is more difficult than for species with wider and bigger leaves. Narrow leaves are prone to being divided in the segmentation when oversaturated pixels occur in the middle of a leaf (Fig. 7a). Leaves soiled by particles dislodged during heavy rains are also difficult to segment (Fig. 6). In addition, randomly stacked leaf layers limit the potential of shape-based features and methods in HTFP .
Ideally, a general model able to precisely segment all images would simplify the image analysis pipeline in HTFP [20, 22]. Although the optimal approach might be to use a general model for all light conditions, its training often demands more computational power than training several scenario-specific models (see Additional file 1: Fig. S4). Training models for distinct illumination conditions could significantly reduce the time and computational power needed for heavy model training (Additional file 1: Fig. S4), thus accelerating data post-processing in HTFP.
Exploratory data analysis (EDA) for identifying potential bias and outliers
Challenges and opportunities for an automated image analysis pipeline for HTFP
The three steps proposed in the image analysis pipeline (Fig. 1) are in line with the main challenges of analyzing images in HTFP. For instance, canopy cover segmentation depends on proper VIs/color components and other potential features to achieve reliable results, which is critical for both thresholding and machine learning methods. However, among other influencing factors, illumination variability within an image and/or between images strongly affects segmentation, particularly when imaging a large number of breeding lines in the field. Interestingly, our results showed that a pre-classification of the imaging illumination condition could improve the final segmentation. In addition to these two challenges, in a normal breeding program the volume of data generated by HTFP over multiple days and years makes “eyeball”-based QC difficult; automated control of segmentation quality is thus vital. Our results highlight potential VIs/features and alternative ML models that account for field illumination variability, as well as a simple EDA strategy, for the development of an automated image analysis pipeline for HTFP.
Nevertheless, the use of a general model is often necessary even if its performance is not optimal, particularly for HTFP, which needs to extract data from a large number of images in a timely manner. A general model can be used to generate first results, which is critical for appropriately determining scenario-specific models, such as a model trained for images of flooded fields or wet plants. A quality indicator can also be incorporated into these first results; for instance, image-specific flags for illumination conditions could serve as a filter when evaluating the results (Additional file 1: Fig. S6). This study focuses on only two illumination scenarios; the approach can be further refined and integrated into imaging platforms for automatically identifying illumination conditions and selecting appropriate analysis methods. When canopy cover segmentation quality is low but the data are crucial for certain growth phases, specific models might need to be trained using data from those dates (e.g. dates with low r values in Fig. 10). By considering the two scenarios and applying the proposed image analysis pipeline, we were able to improve the correlation between canopy cover and growing degree days (Additional file 1: Fig. S7), demonstrating the capability of imaging-based high throughput phenotyping for characterizing plant growth dynamics in response to seasonal temperature accumulation.
High dynamic range (HDR) photography could be applied to field phenotyping, as commercial products supporting HDR imaging are on the market. “Digital darkroom” techniques also make HDR imaging feasible, provided that threefold or more storage space and exposure time are available. Automation of QC remains a challenge following image processing in HTFP, and integrated methods and platforms for imaging, image processing, machine learning and data science are needed to extract reliable data in HTFP [6, 8, 9].
Timely extraction of meaningful data from very large numbers of high resolution images is a bottleneck in high throughput field phenotyping (HTFP); therefore, the development of advanced image analysis pipelines is imperative. This study established an image analysis pipeline for HTFP of wheat canopy cover development, tackling the difficulties encountered from image analysis to the delivery of reliable phenotypic data on a plot basis. A data set of more than 40,000 images collected throughout two growing seasons was used to evaluate the pipeline. We found that the NDI3*V and NDI3*a indices, in combination with automatic thresholding using the μRow and Otsu methods, allowed for appropriate separation of wheat plants and background compared to the other VIs evaluated in this study. Significant further improvement was achieved by applying illumination-specific models based on machine learning, which improved accuracy and lowered computing time. EDA assisted the quality control of image segmentation by examining temporal correlation changes in the extracted canopy cover time series. The proposed image analysis pipeline enabled extraction of canopy cover time series from canopy images at high throughput, and it can be adjusted for imaging-based phenotyping of other traits and species in HTFP.
KY, CG and AH designed the experiment. KY, NK and CG performed the experiment, image acquisition and analysis. KY and NK performed the overall data analysis and, jointly with AW and AH, interpreted the results. KY, AW and AH drafted the manuscript. All authors read, revised and approved the final manuscript.
We are very grateful to Dr. Wei Guo at the University of Tokyo for sharing their DTSM program. We thank Mr. Hansueli Zellweger and Dr. Johannes Pfeifer for their help with image acquisition. We also thank Dr. Frank Liebisch for reading the manuscript and his constructive comments. We would like to thank IPK Gatersleben and Delley Seeds and Plants Ltd for the supply of the varieties. We thank the two anonymous reviewers for their constructive comments.
Availability of data and materials
All of the segmentation reference images and the corresponding original images used in this study are publicly available in the ‘figshare’ repository, https://dx.doi.org/10.6084/m9.figshare.4176573 (see ), which can be used for evaluation of image segmentation methods. The image analysis methods are available on https://github.com/kang-yu/IACC or by request to the authors.
The authors declare that they have no competing interests.
This study was funded by the Swiss Federal Office for Agriculture FOAG, Switzerland.
Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
- Furbank RT, Tester M. Phenomics—technologies to relieve the phenotyping bottleneck. Trends Plant Sci. 2011;16:635–44. doi:10.1016/j.tplants.2011.09.005.
- White JW, Andrade-Sanchez P, Gore MA, Bronson KF, Coffelt TA, Conley MM, et al. Field-based phenomics for plant genetics research. Field Crops Res. 2012;133:101–12. doi:10.1016/j.fcr.2012.04.003.
- Scharr H, Dee H, French AP, Tsaftaris SA. Special issue on computer vision and image analysis in plant phenotyping. Mach Vis Appl. 2016;27:607–9. doi:10.1007/s00138-016-0787-1.
- Kelly D, Vatsa A, Mayham W, Ngô L, Thompson A, Kazic T. An opinion on imaging challenges in phenotyping field crops. Mach Vis Appl. 2016;27:681–94. doi:10.1007/s00138-015-0728-4.
- Duan T, Zheng B, Guo W, Ninomiya S, Guo Y, Chapman SC. Comparison of ground cover estimates from experiment plots in cotton, sorghum and sugarcane based on images and ortho-mosaics captured by UAV. Funct Plant Biol. 2017;44:169–83. doi:10.1071/FP16123.
- Dee H, French A. From image processing to computer vision: plant imaging grows up. Funct Plant Biol. 2015;42:3–5.
- Virlet N, Sabermanesh K, Sadeghi-Tehran P, Hawkesford M. Field Scanalyser: an automated robotic field phenotyping platform for detailed crop monitoring. Funct Plant Biol. 2016.
- Kirchgessner N, Liebisch F, Yu K, Pfeifer J, Friedli M, Hund A, et al. The ETH field phenotyping platform FIP: a cable-suspended multi-sensor system. Funct Plant Biol. 2017;44:154–68. doi:10.1071/FP16165.
- Deery D, Jimenez-Berni J, Jones H, Sirault X, Furbank R. Proximal remote sensing buggies and potential applications for field-based phenotyping. Agronomy. 2014;4:349–79. doi:10.3390/agronomy4030349.
- Minervini M, Scharr H, Tsaftaris SA. Image analysis: the new bottleneck in plant phenotyping. IEEE Signal Process Mag. 2015;32:126–31. doi:10.1109/MSP.2015.2405111.
- Rahaman MM, Chen D, Gillani Z, Klukas C, Chen M. Advanced phenotyping and phenotype data analysis for the study of plant growth and development. Front Plant Sci. 2015;6:619. doi:10.3389/fpls.2015.00619.
- Walter A, Liebisch F, Hund A. Plant phenotyping: from bean weighing to image analysis. Plant Methods. 2015;11:14. doi:10.1186/s13007-015-0056-8.
- Li L, Zhang Q, Huang D. A review of imaging techniques for plant phenotyping. Sensors. 2014;14:20078–111. doi:10.3390/s141120078.
- Sankaran S, Khot LR, Carter AH. Field-based crop phenotyping: multispectral aerial imaging for evaluation of winter wheat emergence and spring stand. Comput Electron Agric. 2015;118:372–9. doi:10.1016/j.compag.2015.09.001.
- Lootens P, Ruttink T, Rohde A, Combes D, Barre P, Roldán-Ruiz I. High-throughput phenotyping of lateral expansion and regrowth of spaced Lolium perenne plants using on-field image analysis. Plant Methods. 2016;12:32. doi:10.1186/s13007-016-0132-8.
- Meyer GE, Neto JC. Verification of color vegetation indices for automated crop imaging applications. Comput Electron Agric. 2008;63:282–93. doi:10.1016/j.compag.2008.03.009.
- Grieder C, Hund A, Walter A. Image based phenotyping during winter: a powerful tool to assess wheat genetic variation in growth response to temperature. Funct Plant Biol. 2015;42:387–96.
- Liebisch F, Kirchgessner N, Schneider D, Walter A, Hund A. Remote, aerial phenotyping of maize traits with a mobile multi-sensor approach. Plant Methods. 2015;11:9. doi:10.1186/s13007-015-0048-8.
- Wang Y, Cao Z, Bai X, Yu Z, Li Y. An automatic detection method to the field wheat based on image processing. Proc SPIE. 2013. 89180F. doi:10.1117/12.2031139.
- Singh A, Ganapathysubramanian B, Singh AK, Sarkar S. Machine learning for high-throughput stress phenotyping in plants. Trends Plant Sci. 2016;21:110–24. doi:10.1016/j.tplants.2015.10.015.
- Navarro PJ, Pérez F, Weiss J, Egea-Cortines M. Machine learning and computer vision system for phenotype data acquisition and analysis in plants. Sensors. 2016;16:641. doi:10.3390/s16050641.
- Guo W, Rage UK, Ninomiya S. Illumination invariant segmentation of vegetation for time series wheat images based on decision tree model. Comput Electron Agric. 2013;96:58–66. doi:10.1016/j.compag.2013.04.010.
- Otsu N. A threshold selection method from gray-level histograms. IEEE Trans Syst Man Cybern. 1979;9:62–6. doi:10.1109/TSMC.1979.4310076.
- LDP LLC. Remote sensing NDVI. http://www.maxmax.com/maincamerapage/remote-sensing. Accessed 8 Jun 2016.
- McLaren K. The development of the CIE 1976 (L*a*b*) uniform colour space and colour-difference formula. J Soc Dye Colour. 1976;92:338–41. doi:10.1111/j.1478-4408.1976.tb03301.x.
- Lim JS. Two-dimensional signal and image processing. 1st ed. Englewood Cliffs: Prentice Hall PTR; 1989.
- R Core Team. R: a language and environment for statistical computing. Vienna: R Foundation for Statistical Computing; 2015.
- Walter A, Scharr H, Gilmer F, Zierer R, Nagel KA, Ernst M, et al. Dynamics of seedling growth acclimation towards altered light conditions can be quantified via GROWSCREEN: a setup and procedure designed for rapid optical phenotyping of different plant species. New Phytol. 2007;174:447–55. doi:10.1111/j.1469-8137.2007.02002.x.
- Scharr H, Minervini M, French AP, Klukas C, Kramer DM, Liu X, et al. Leaf segmentation in plant phenotyping: a collation study. Mach Vis Appl. 2015;27:585–606. doi:10.1007/s00138-015-0737-3.
- Yu K, Kirchgessner N, Grieder C, Walter A, Hund A. Plant image segmentation: reference images wheat. figshare. 2016. doi:10.6084/m9.figshare.4176573.