
PI-Plat: a high-resolution image-based 3D reconstruction method to estimate growth dynamics of rice inflorescence traits

Abstract

Background

Recent advances in image-based plant phenotyping have improved our capability to study vegetative-stage growth dynamics. However, more complex agronomic traits such as inflorescence architecture (IA), which predominantly contributes to grain crop yield, are more challenging to quantify and hence remain relatively unexplored. Previous efforts to estimate inflorescence-related traits using image-based phenotyping have been limited to destructive end-point measurements. Development of non-destructive inflorescence phenotyping platforms could accelerate both the discovery of phenotypic variation in inflorescence dynamics and the mapping of the underlying genes regulating critical yield components.

Results

The major objective of this study was to evaluate post-fertilization development and growth dynamics of the rice inflorescence at high spatial and temporal resolution. For this, we developed the Panicle Imaging Platform (PI-Plat) to comprehend multi-dimensional features of IA in a non-destructive manner. We used 11 rice genotypes to capture multi-view images of the primary panicle on a weekly basis after fertilization. These images were used to reconstruct a 3D point cloud of the panicle, which enabled us to extract digital traits such as voxel count and color intensity. We found that the voxel count of developing panicles is positively correlated with seed number and weight at maturity. The voxel count of developing panicles captured overall panicle volume, which increased during the grain-filling phase, while quantification of color intensity estimated the rate of panicle maturation. Our 3D-based phenotyping solution showed superior performance compared to conventional 2D-based approaches.

Conclusions

For harnessing the potential of existing genetic resources, we need a comprehensive understanding of the genotype-to-phenotype relationship. Relatively low-cost sequencing platforms have facilitated high-throughput genotyping, while phenotyping, especially for complex traits, has posed major challenges for crop improvement. PI-Plat offers a low-cost, high-resolution platform to phenotype inflorescence-related traits using a 3D reconstruction-based approach. Further, the non-destructive nature of the platform facilitates analyses of the same panicle at multiple developmental time points, which can be utilized to explore the genetic variation for dynamic inflorescence traits in cereals.

Background

With an increasing world population, climatic variability, and declining arable land resources, the need to increase global food production is paramount [1,2,3]. Two components essential for achieving global food security are precise agronomic management and genetic improvement of major crops such as rice, wheat, and maize. Integral to both components is the development of data-driven tools that increase precision in implementation and enhance predictive capabilities. Moreover, strategic selection and adaptation of yield-related traits to maximize agricultural production hold the key to achieving sustainable food security [4,5,6]. Inflorescence architecture (IA) is an important phenotypic feature that ultimately contributes to most grain crop yield components, such as grain number, size, and weight [7,8,9]. However, the complexity of IA, especially in cereals, is a limiting factor for accurate determination of yield traits. Estimating yield-related traits by conventional methods is subjective, laborious, and error-prone [10]. Also, the scope of detectable yield-related traits is limited by manual measurements, which increase the chances of damaging the inflorescence.

Advances in the automation of plant phenotyping technologies, mainly image-based phenotyping, have increased the depth and scale of measuring vegetative traits [11,12,13,14,15,16,17,18,19]. However, only a few studies have used phenotyping platforms to screen IA [16, 20,21,22]. Some platforms have utilized machine-vision-based approaches to estimate inflorescence-related parameters [23,24,25,26]. In addition, two-dimensional (2D) imaging platforms have been employed; for example, the Tassel Image-based Phenotyping System (TIPS) quantifies morphological traits from freshly harvested maize tassels, while PAnicle STructure Analyzer for Rice (PASTAR/PASTA), Panicle TRAit Phenotyping (P-TRAP), and PANorama analyze rice panicle length and branching [20, 21, 27, 28]. Both P-TRAP and PANorama have been used for genome-wide association studies (GWAS) of rice panicle traits [27, 29,30,31]. Recently, Zhou et al. [22] developed the Toolkit for Inflorescence Measurement (TIM) to estimate sorghum panicle volume derived from biplanar imaging data. The derived panicle-related traits of sorghum were used for GWAS to facilitate gene discovery.

Most of these 2D image-based IA approaches have addressed only mature or end-point traits and do not capture the growth dynamics of the developing inflorescence. Furthermore, biplanar images can only provide 2D projections of a 3D structure, thus incurring a substantial loss of spatial information [32]. 3D imaging has started to gain momentum to circumvent the limitations of 2D imaging [33]. Different 3D imaging methods, for example time of flight (ToF), laser scanning, and stereovision, have been applied in remote sensing or field-based phenotyping platforms. In addition, depth cameras are widely used for capturing an entire plant or large plant parts [34]. Stereovision, which uses object images from different angles to reconstruct 3D surfaces, offers an inexpensive, accurate, and efficient method for on-site 3D plant imaging [32, 35, 36]. The recently introduced, freely available Multi-View Environment (MVE) software offers an end-to-end 3D reconstruction solution [37]. MVE combines multi-view stereo (MVS) and structure-from-motion (SfM) algorithms to generate dense point clouds for 3D object reconstruction [37]. The MVS-SfM approach has been used to reconstruct 3D meshes of leaves, canopies, or whole plants [38,39,40,41]. However, this approach has not been used to characterize IA. Here, we present the results of characterizing rice panicles using a 3D reconstruction-based approach. The main objectives of our study were to (a) capture multi-dimensional, high-resolution images of the ‘panicle on plant’ after fertilization to reconstruct a 3D point cloud of the inflorescence, (b) use the 3D point clouds to derive inflorescence-related traits, and (c) use the derived traits to monitor the growth dynamics of the developing inflorescence and distinguish inherent genetic and morphological diversity in crop species.

However, it is challenging to perform 3D reconstruction of rice panicles to achieve these objectives. First, a rice panicle is often occluded by other plant components such as leaves and other panicles. Therefore, the existing solutions based on moving cameras [42] are not entirely suitable for generating un-occluded images of a panicle. Second, a panicle is non-rigid and typically not located at the center of a plant, making it difficult to apply the existing solutions based on plant rotation [42]. Third, rather than destructive methods [22], non-destructive methods are needed to keep a panicle alive, as the growth dynamics of the panicle are of interest in this study. Fourth, the size of a panicle is relatively small, and depth-camera-based solutions [34] may not provide sufficient resolution to capture its 3D details.

To address these challenges, we developed an in-house Panicle Imaging Platform (PI-Plat) to capture the dynamics of developing panicles from a range of genetically diverse rice genotypes. A panicle is isolated to generate un-occluded images in a non-destructive manner. In addition, the panicle stays still at the center of the PI-Plat while the cameras rotate around it, thus minimizing vibration and allowing generation of a more stable 3D point cloud. The camera resolution is sufficient to capture the details of a panicle in 2D images, leading to high-resolution 3D reconstruction results. A total of 11 genotypes from the indica and japonica sub-populations were selected. Post fertilization, primary panicles were imaged on a weekly basis (week 1, 2, and 3) using the PI-Plat. The captured images were used for 3D reconstruction to extract digital phenotypic attributes: voxel count and color intensity. We report increased sensitivity in predicting end-point yield components from 3D reconstruction of developing panicles compared to conventional 2D-based analysis. Although the PI-Plat is designed for rice panicles, it can be extended to other small plant components of cereals, such as new branches or leaves.

Material and methods

Plant material

Surface-sterilized seeds of 11 rice accessions were germinated on half-strength Murashige and Skoog medium for 3 days in the dark, followed by a day in the light (the genotypes used in the study are listed in Additional file 1). Initially, two uniformly germinated seedlings were transplanted to a 4-inch square pot filled with pasteurized field soil. Throughout the growing season, the pots were maintained in standing water. Ten days after transplanting, seedlings were thinned to retain one plant per pot per genotype.

Temperature treatment

Plants were grown under control conditions (16 h light and 8 h dark at 28 ± 1 ℃ and 23 ± 1 ℃, respectively) until anthesis. One day after 50% of the primary panicle had completed fertilization, half of the plants from each genotype were transferred to a greenhouse with high night-time temperature (HNT; 16 h light and 8 h dark at 28 ± 1 ℃ and 28 ± 1 ℃, respectively). The HNT treatment was maintained until maturity. Two or three replicates per treatment per genotype were used to establish the image-based phenotyping workflow (Fig. 1).

Fig. 1

Multi-view image analysis of developing panicles using the PI-Plat. a Flowchart and b graphical representation of the multi-view image analysis using the 3D reconstruction and 2D approaches

PI-Plat: Panicle Imaging Platform

We constructed a low-cost Panicle Imaging Platform (PI-Plat) to capture the growth parameters of rice panicles after flowering (Additional file 2). The PI-Plat comprises three main parts: (i) a customized wooden chamber with a black interior, (ii) a rotating imaging system, and (iii) color checkerboards.

Customized wooden chamber and rotating imaging system

To host the PI-Plat, a wooden chamber (height: 75 in., width: 52.5 in., length: 55 in.) was customized (Additional file 2). The interior of the chamber was painted black to reduce light interference and improve the quality of image segmentation during image processing. Inside the chamber, a circular wooden board (diameter: 37 in.) with an aperture at its center was fixed at a height of 52.5 in. The top surface of the circular wooden board was also painted black. For imaging, plants were placed under the circular wooden board, and the panicle of interest (primary panicle) was gently passed through the aperture. To adjust for variable plant height, we used an electric scissor lift table (Additional file 2). A metal hook attached to the ceiling of the wooden chamber was fastened to the top of the panicle to stabilize it (Additional file 2).

A rotary double-ring apparatus with an inner and an outer ring is fixed on top of the circular wooden board (Additional file 2). A 24-in. aluminum outer ring on ball bearings holds two Sony α6500 cameras for imaging and LED lights (ESDDI PLV-380, 15 W, 5000 lm, 5600 K) as light sources, which together undergo a 360° rotation around the panicle. The rotation is controlled by an electric motor system. The rotary double-ring apparatus has three major parts: (a) a toothed wheel connected to the electric motor, (b) a small smooth pulley and a cylindrical sleeve used to adjust the tension in the belt, and (c) a rotatable ring apparatus, whose outer ring is covered with a toothed belt, that rotates the cameras. The cameras were selected for their high sensitivity and image stabilization, which reduce image distortion during camera motion. The cameras also support customized applications for remote-controlled imaging. We utilized the cameras’ time-lapse feature to capture images at a rate of one image per second. Sixty images were captured by each camera per minute, so 120 images in total were taken for each panicle at each time point and treatment. For labeling, we used quick response (QR) codes as plant identifiers (IDs), which were tagged to the primary panicle. Plant IDs were retrieved from the images during the later image-processing stage. The PI-Plat was constructed mostly from commercial off-the-shelf components at a comparatively low cost.

Color checkerboards

Since image features [37] play a critical role in the 3D reconstruction process and the panicle itself cannot provide enough features owing to its nearly uniform color and complex patterns, color checkerboards were used to provide additional features. These checkerboards, printed on white letter-size paper, were pasted on all four sides of the wooden chamber and on the top surface of the circular wooden board (Additional file 2). Each checkerboard comprised 20 × 20 squares (1 cm² each) with colors randomly generated in the RGB color space. Compared to the image features from the panicle, the features from the color checkerboards were easily detected by SIFT [43] and SURF [44] on the edges and corners of each square owing to their regular shapes and random colors. These features were then used to recover the camera parameters, which include the intrinsic calibration (i.e., radial distortion of the lens and the focal length) and the extrinsic calibration (i.e., the position and orientation of the camera), in the 3D reconstruction process [37]. Unlike traditional calibration tools (such as the calibration app in MATLAB), calibration in our pipeline was achieved by matching features across different images [37]. Therefore, we did not impose any requirement on the number of squares in the checkerboards.
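As an illustration, the MATLAB sketch below generates one such random-color board. Only the 20 × 20 layout and the random RGB colors come from the paper; the pixels-per-square value and the output file name are our own illustrative choices.

```matlab
% Sketch: generate one random-color checkerboard for the chamber walls.
% The 20 x 20 layout and random RGB colors follow the paper; the
% pixels-per-square value and file name are illustrative assumptions.
rng(1);                                   % reproducible colors
nSq   = 20;                               % squares per side
pxSq  = 40;                               % pixels per square (printed at ~1 cm/square)
board = zeros(nSq*pxSq, nSq*pxSq, 3);
for r = 1:nSq
    for c = 1:nSq
        col  = rand(1, 1, 3);             % random color in the RGB color space
        rows = (r-1)*pxSq + (1:pxSq);
        cols = (c-1)*pxSq + (1:pxSq);
        board(rows, cols, :) = repmat(col, pxSq, pxSq);
    end
end
imwrite(board, 'color_checkerboard.png'); % print on letter-size paper and mount
```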

Image acquisition

The supplementary video shows the image acquisition process using the PI-Plat (Additional file 3). To capture the growth dynamics of panicles, we performed non-destructive imaging of the primary panicle of control and HNT-treated plants at one (W1), two (W2), and three (W3) weeks post-fertilization.

Image processing

Pre-processing and 3D point cloud reconstruction

First, we converted all the RGB (red, green, and blue) images into the HSV (hue, saturation, value) color space. Then, the background in all images (i.e., the part corresponding to the walls and the circular wooden board) was segmented [45] and removed using the same threshold. Removing the background reduced both the number of spurious features in the 3D reconstruction process and the computation time. Since all images were taken in the PI-Plat chamber under constant lighting, the same threshold worked well for all panicles. Multiple tests using the ‘colorthresholder’ application in MATLAB showed that the background could be effectively removed if hue, saturation, and value were restricted to the ranges 0–1, 0–1, and 0.15–1, respectively. After background removal using these color thresholds, the images were denoised to remove residual background (mostly isolated outliers from the black wooden board and the chamber interior), which was considered noise. On average, the denoising step removed less than 0.3% of the points in an image, and we estimate that at most 0.1% of the removed points could have belonged to the panicle. Therefore, the denoising step should have a limited effect on the panicle point clouds. These pre-processed images were used to reconstruct the 3D point cloud for each panicle at a given time point. For 3D reconstruction, we preferred the MVE pipeline [37] over traditional methods such as space carving [46] because of its lower computational cost and superior reconstruction quality for non-convex objects. During reconstruction, corresponding features across images were detected and matched to form a sparse point cloud in an incremental SfM process. Depth maps were then reconstructed for each view and merged into a dense point cloud.
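A minimal MATLAB sketch of this pre-processing step is given below, using the stated HSV ranges (hue 0–1, saturation 0–1, value 0.15–1). The file names and the small-object size passed to the denoising call are illustrative assumptions, not values from the paper.

```matlab
% Sketch: background removal and denoising for one multi-view image,
% assuming the stated HSV thresholds; file names and the minimum object
% size are illustrative. Requires the Image Processing Toolbox.
rgb  = imread('panicle_view_001.jpg');
hsv  = rgb2hsv(rgb);
mask = hsv(:,:,3) >= 0.15;                 % H and S span [0,1], so only V is restrictive
mask = bwareaopen(mask, 200);              % drop isolated background residues (noise)
seg  = rgb;
seg(repmat(~mask, [1 1 3])) = 0;           % zero out background pixels
imwrite(seg, 'panicle_view_001_seg.png');  % pre-processed input for 3D reconstruction
```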

Trait extraction using 3D point cloud

Once a point cloud was generated for each time point, we were able to extract traits of interest for the reconstructed 3D structure of the panicle from these time-varying point clouds. First, each point cloud was segmented into different components (such as the panicle, the color checkerboards, and the rotary double-ring apparatus) by leveraging their distinct positions or colors. For example, the color checkerboards were approximately located on the boundaries of a point cloud (i.e., the locations of the walls and the top surface of the circular wooden board), and the metal hook was located at the top of the point cloud and had a gray color. Second, the point clouds needed to be scaled and aligned, as different point clouds may have different scales and orientations after reconstruction. In this work, the geometries of the color checkerboards and the rotary double-ring apparatus were constant during image acquisition. Thus, we scaled and aligned the color checkerboards and the apparatus across the point clouds; in this way, the rest of each point cloud was scaled and aligned as well, such that panicles in different point clouds could be compared at the same scale [47]. Third, each point cloud was voxelized for volume quantification [48]. The same bounding cube was employed to enclose each point cloud and was aligned across the point clouds with respect to the color checkerboards and the apparatus. Then, an equivalent discrete voxel-based grid was generated; the grid size was obtained by dividing each edge of the bounding cube by 1000. Thus, a volume with a resolution of 1000 × 1000 × 1000 was generated to sample the 3D space. Finally, the points not belonging to the panicle were removed. As a result, some voxels were filled with a group of panicle points while the other voxels were empty. For each filled voxel, we computed the average color (i.e., RGB) intensity of the points contained in the voxel. Subsequently, the following features were extracted from a volume: (a) voxel count, the number of filled voxels, and (b) color intensity, the sum of the color intensities of all filled voxels.
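The sketch below illustrates this voxelization on a segmented, aligned point cloud stored as an N × 3 coordinate matrix and an N × 3 color matrix (values in [0, 1]). The 1000-cell grid follows the paper; the .mat file and variable names are our own assumptions.

```matlab
% Sketch: voxel count and color intensity from an aligned panicle point
% cloud. The 1000^3 grid follows the paper; the file and variable names
% (pts, cols) are illustrative assumptions.
load('panicle_cloud.mat', 'pts', 'cols');              % N x 3 coordinates, N x 3 RGB in [0,1]
lo  = min(pts, [], 1);
edgeLen = max(max(pts, [], 1) - lo);                   % edge length of the bounding cube
res = 1000;                                            % grid resolution per edge
idx = floor((pts - lo) ./ edgeLen * (res - 1)) + 1;    % per-point voxel indices (1..1000)
key = idx(:,1) + res*(idx(:,2) - 1) + res^2*(idx(:,3) - 1);   % linear voxel id
[vox, ~, grp] = unique(key);
voxelCount = numel(vox);                               % trait (a): number of filled voxels
meanRGB = [accumarray(grp, cols(:,1), [], @mean), ...  % average color of each filled voxel
           accumarray(grp, cols(:,2), [], @mean), ...
           accumarray(grp, cols(:,3), [], @mean)];
colorIntensity = sum(meanRGB(:));                      % trait (b): summed color intensity
```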

2D pixel count extraction from multi-view images of developing panicles

For comparison, conventional 2D image analysis of the panicles was also performed. Specifically, the total pixel count of a panicle was calculated from its 120 images captured from multiple views. The same pre-processed images used for 3D reconstruction, with the black background and wooden board removed, were utilized for the 2D analysis. First, each pre-processed image was segmented using the ‘colorthresholder’ application in MATLAB. Notably, the checkerboards used in our experiment contain green squares, which are close in color to the panicle. Thus, color-based segmentation can retain regions from both the panicle and the checkerboards’ green squares. In future work, we will avoid checkerboard colors similar to the plant during imaging. To remove these green squares, the regions corresponding to the squares were detected based on solidity and eccentricity. Here, the solidity of a region is defined as the ratio of the region’s area to the area of its convex hull, and the eccentricity of a region is the eccentricity of the ellipse that has the same second moments as the region. The solidity and eccentricity of each region were calculated using the ‘regionprops’ function in MATLAB. We excluded regions with solidity values larger than 0.7 and eccentricity values less than 0.95. In addition, given the relatively small size of a panicle, regions with an area below a threshold (1000 pixels in our study) were filtered out. Therefore, only the pixels corresponding to the panicle were retained, and the pixel count of the panicle in each image was calculated. We summed the pixel counts obtained from each of the 120 multi-view images of the panicle to obtain the total pixel count.
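A sketch of this filtering for one pre-processed view is shown below. The exclusion cut-offs (solidity > 0.7 with eccentricity < 0.95, and area < 1000 pixels) follow the text, whereas the simple foreground threshold stands in for the interactive ‘colorthresholder’ step and the file name is hypothetical.

```matlab
% Sketch: 2D pixel count for one pre-processed view. The solidity,
% eccentricity, and area cut-offs follow the paper; the foreground
% threshold and file name are illustrative stand-ins.
seg = imread('panicle_view_001_seg.png');   % background already zeroed out
hsv = rgb2hsv(seg);
bw  = hsv(:,:,3) > 0;                       % foreground = panicle + residual green squares
st  = regionprops(bwconncomp(bw), 'Solidity', 'Eccentricity', 'Area');
isSquare = [st.Solidity] > 0.7 & [st.Eccentricity] < 0.95;   % compact checkerboard squares
keep = ~isSquare & [st.Area] >= 1000;       % retain only panicle regions
pixelCount = sum([st(keep).Area]);          % panicle pixels in this view
% The total 2D pixel count is the sum of pixelCount over all 120 views.
```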

Scanning of mature panicles using flatbed scanner

Next, we analyzed the mature primary panicle to obtain ground truth and derive features for comparison with the developing panicle. For this, the primary panicles were harvested and scanned using an Epson Expression 12000XL scanner (600 dpi resolution). Branches on the primary panicles were carefully spread out to avoid overlaps in the scanned images. These scanned images were used to extract the following traits: projected surface area of the primary panicle, projected seed count of the primary panicle, and the average major (seed length) and minor (seed width) axes and area of the individual seeds on the primary panicle. In this set of images, the panicles were placed on a black background. We segmented the panicles from the background using color thresholding and obtained binary images. As a panicle was mostly yellowish in color and the background was black, each image was transformed into the HSV color space to segment the panicle (ranges: hue 0–0.3, saturation 0.2–1, and value 0.5–1). In principle, a harvested mature panicle has all of its seeds attached to the rachis. Therefore, we first processed the images with morphological opening [49]. As the branches were relatively thin and the seeds relatively thick, morphological opening removed the branch pixels and left most seed regions disconnected from each other. As seeds have an oval shape, regions that were too thin were removed; the remaining regions corresponded to seeds. The length, width, and area of each seed were calculated from its region using the ‘regionprops’ function in MATLAB.
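The following sketch outlines this seed-level analysis using the stated HSV ranges. The structuring-element radius for the morphological opening, the minimum-width filter for “too thin” regions, and the file name are illustrative assumptions rather than values reported in the paper.

```matlab
% Sketch: seed morphometrics from a flatbed scan. The HSV ranges follow
% the paper; the disk radius, minimum-width filter, and file name are
% illustrative assumptions. Requires the Image Processing Toolbox.
img = imread('mature_panicle_scan.tif');
hsv = rgb2hsv(img);
bw  = hsv(:,:,1) <= 0.3 & hsv(:,:,2) >= 0.2 & hsv(:,:,3) >= 0.5;   % panicle vs. black background
projArea = nnz(bw);                                  % projected surface area (pixels)
bwSeeds  = imopen(bw, strel('disk', 7));             % opening removes the thin branch pixels
st = regionprops(bwSeeds, 'Area', 'MajorAxisLength', 'MinorAxisLength');
st = st([st.MinorAxisLength] > 10);                  % drop regions too thin to be seeds
seedCount  = numel(st);                              % projected seed count
seedLength = mean([st.MajorAxisLength]);             % average seed length (pixels)
seedWidth  = mean([st.MinorAxisLength]);             % average seed width (pixels)
seedArea   = mean([st.Area]);                        % average seed area (pixels)
```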

Manual phenotyping of the mature panicle

Next, we manually measured yield traits on the mature primary panicle after harvest. For this, we collected data for (a) total seed weight, (b) total seed number, (c) weight per seed, and (d) the number of fertile and sterile seeds, which was used to calculate percentage fertility.

Correlation analysis

For the pairwise correlation analysis, the 3D reconstruction-based features (voxel count and color intensity) and the total pixel count (2D) derived from the multi-view images of the developing panicle were compared with end-point measurements at maturity. For the end-point measurements, the traits derived from the flatbed scanned images as well as the manual measurements of the primary panicle at maturity were considered. These traits were collected from 11 rice genotypes with two to three replicates per genotype per treatment (control and HNT). A total of 55 observations were used for Pearson correlation analysis. The correlation analysis was performed using R v. 3.4.3 [50] and RStudio v. 1.1.419 [51]. Correlation matrices containing Pearson correlation coefficients and p values were obtained using the ‘rcorr’ function in the “Hmisc” package [52]. The matrix displaying correlations between selected traits was plotted using ‘chart.Correlation’ in the “PerformanceAnalytics” package [53]. Both the raw data and the complete correlation matrix are provided (Additional files 4 and 5).
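For reference, an equivalent computation is sketched below in MATLAB (the analysis in the paper was done in R with ‘rcorr’ and ‘chart.Correlation’); the trait table and its column names are hypothetical stand-ins for the 55-observation dataset in Additional file 4.

```matlab
% Sketch: pairwise Pearson correlations with p-values, equivalent to the
% R workflow described above. The CSV file and column names are
% hypothetical stand-ins for the 55-observation trait table.
T = readtable('pi_plat_traits.csv');
X = [T.VoxelCountW3, T.PixelCountW3, T.ProjectedArea, T.SeedNumber, T.SeedWeight];
[R, P] = corrcoef(X, 'Rows', 'pairwise');    % correlation coefficients and p-values
names = {'VoxelW3', 'PixelW3', 'ProjArea', 'SeedNum', 'SeedWt'};
disp(array2table(R, 'VariableNames', names, 'RowNames', names));   % correlation matrix
```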

Data accessibility

The text-based raw data generated from the 3D reconstruction-based approach, flatbed scanning, and manual measurements are provided as additional files with this submission (Additional file 4). The raw image data are large, and hence only a subset is shared for user testing on a UNL Box repository (https://unl.box.com/s/g0bof1mpfp33hn66b2qabrk9kiwmhbzv).

Results

Workflow of PI-Plat

Evaluation of inflorescence-related parameters is limited by traditional phenotyping methods. Advances in plant phenotyping methodology have enhanced our understanding of vegetative organs and overall plant structures. However, we still need to capitalize on technological advancements in optics, computer vision, and software design to capture complex plant structures. In this study, we developed the Panicle Imaging Platform (PI-Plat) to study yield-related parameters by reconstructing the panicle in 3D space and deriving digital traits (Additional file 2).

For method validation, we used 11 rice genotypes from the indica and japonica rice sub-populations (Additional file 1). Once 50% of the primary panicle had flowered, a subset of plants was maintained under control conditions and the rest were moved to a greenhouse with high night temperature (HNT) conditions. The motivation for the HNT treatment was to explore phenotypic variation in rice germplasm, as rice grain development is known to be sensitive to HNT [54,55,56]. The primary panicle from each plant and treatment was imaged three times on a weekly basis (week 1, 2, and 3) using the PI-Plat. For imaging, two visible cameras, held at two different positions, were employed on a rotating imaging system. Sixty images per camera, corresponding to one image every six degrees, captured multiple views covering 360° of the panicle (Additional file 3). In total, 19,800 images were captured for the 11 genotypes (55 panicles × 3 time points × 120 images per panicle). Each panicle image was segmented and used to reconstruct 3D point clouds, from which phenotypic traits, namely (i) voxel count and (ii) color intensity, were extracted (Fig. 1 and Table 1). The average computation time required to reconstruct the 3D point cloud for one panicle from 120 images (resolution 6000 × 4000) was about 90 min on a computing platform with an Intel Core i7-8700K CPU @ 3.70 GHz and 16 GB RAM.

Table 1 Overview of the phenotyping methodologies and the traits derived from the corresponding methods in this study. R, red; G, green; B, blue

Correlation between traits derived from multi-view images of the developing panicle and yield-related components at maturity

First, we aimed to determine whether the traits derived from 3D reconstruction of the developing panicle correlate with yield-related components at maturity. For this, the 3D reconstruction-based point cloud features derived from the multi-view images (voxel count and color intensity) were compared with end-point measurements of the mature panicle (Additional file 5). The end-point measurements correspond to (i) flatbed scanned images (projected surface area at the panicle level, projected seed count, and morphometric measurements at the individual-seed level: seed area, length, and width) and (ii) manual measurements (total seed weight, seed number, weight per seed, and fertility) of the mature panicle. Among all the traits derived from 3D reconstruction, only the voxel count of the developing panicle exhibited a significant positive correlation with projected surface area (rW1 = 0.64, rW2 = 0.55, rW3 = 0.82), total seed weight (rW1 = 0.48, rW2 = 0.50, rW3 = 0.74), and seed number (rW1 = 0.67, rW2 = 0.61, rW3 = 0.70) at maturity (Fig. 2, Additional file 5). The correlations of voxel count with projected surface area (rW1 = 0.64) and total seed weight (rW1 = 0.48) were relatively low at week 1 and increased in later weeks (rW1 < rW2 < rW3; Fig. 2). On the other hand, the correlation between the voxel count of the developing panicle and the seed number at maturity remained stable (Fig. 2). Notably, the color intensity derived from 3D reconstruction did not exhibit a meaningful correlation with any of the end-point measurements (Additional file 5).

Fig. 2

Correlation of traits derived from 3D reconstruction, 2D scanning, and manual measurements of inflorescence-related traits. Using the PI-Plat, developing panicles were imaged on a weekly basis (week 1, 2, and 3). For each panicle, the multi-view images were used for 3D reconstruction to extract the voxel count; the 2D pixel count was also estimated for the developing panicle. Phenotypic traits of the mature panicle were analyzed using a flatbed scanner (projected surface area and seed count) and manual measurements (seed number and weight). Pearson correlation analysis for traits of primary interest is represented; a similar analysis for the other extracted traits is listed in Additional file 5. Histograms and red lines represent the distribution of each trait. p values for significant correlations are shown in red (***p < 0.001, **p < 0.01, *p < 0.1), n = 55

Next, the multi-view images were also used for conventional 2D image analysis to extract the total pixel count of the developing panicle at weeks 1, 2, and 3 (Fig. 1). The derived trait at each week was then compared with the end-point measurements (Additional file 5). The total pixel count showed a positive correlation with all the traits derived from the flatbed scanned images and manual measurements at maturity. However, the correlations between the total pixel count and both the projected surface area and the total seed weight were unstable; surprisingly, these correlations at week 3 were lower than those at week 1 (Fig. 2).

Voxel count—an estimate for grain-filling rate

Grain-filling rate is the major determinant of mature crop yield. However, evaluating seed weight dynamics usually requires destructive phenotyping methods. In our study, we estimated the voxel count from the 3D reconstruction of developing panicles, which represents the overall volume of a panicle and thus serves as a proxy for grain filling. In general, we observed a temporal trend of progressive increase in voxel count over the three weeks of the post-fertilization period (Fig. 3a). Under control conditions, voxel counts at W2 and W3 were significantly higher than at W1, while no significant difference was observed between W2 and W3 (Fig. 3a). These results indicate that a substantial gain in overall seed volume occurs before W2. Interestingly, plants treated with HNT had a significantly higher voxel count at W1 compared to controls. These differences dissipated at W2 and W3, as no significant differences between control and HNT-treated plants were observed (Fig. 3a).

Fig. 3

Estimation of voxel count. The voxel count derived from the 3D point cloud represents the overall volume of the developing panicle. a Average voxel counts from all genotypes for each treatment (control and HNT) and time point (week 1, 2, and 3). Box plots represent the range, median, and mean (red triangle). Means sharing the same letter are not significantly different from each other (Student’s t-test; p < 0.1). b Hierarchical clustering analysis of genotypes based on their voxel count under control conditions. c Voxel count for individual genotypes corresponding to clusters I–IV. The y-axis represents voxel count; the x-axis indicates the time point (week 1, 2, and 3). C: control (blue line), HNT: high night temperature (red line). Box plots represent the range, median, and mean (red triangle). Means sharing the same letter are not significantly different from each other (Student’s t-test; p < 0.1)

Next, we evaluated the weekly voxel count for individual genotypes grown under control and HNT stress conditions (Fig. 3b, c). We performed hierarchical clustering based on the voxel count of control-condition panicles (Fig. 3b). The analysis grouped the 11 genotypes into four distinct clusters (Fig. 3b, c). Cluster I comprised 301341, 301052, and 301220; cluster II comprised 301183, 301105, 301278, 301279, and 301221; cluster III comprised 301260 and 301262; and cluster IV constituted only one genotype, 301261 (Fig. 3c). Interestingly, four of the five genotypes in cluster II (301183, 301105, 301221, and 301279) showed a significant gain in voxel count between W1 and W2 (Fig. 3c). For genotypes in clusters I, III, and IV, the voxel count did not differ significantly between W1, W2, and W3 (Fig. 3c). This could be because these genotypes may have already attained their potential seed size by W1, with only incremental changes occurring afterwards.
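The clustering can be reproduced along the lines of the sketch below. The paper does not state the distance metric or linkage method, so Euclidean distance with Ward linkage, as well as the input file layout, are assumptions on our part; only the four-cluster outcome comes from Fig. 3b.

```matlab
% Sketch: hierarchical clustering of genotypes by weekly voxel count
% under control conditions. Distance metric, linkage method, and input
% file are assumptions; four clusters are reported in Fig. 3b.
% Requires the Statistics and Machine Learning Toolbox.
V = readmatrix('voxel_count_control.csv');   % rows: 11 genotypes, columns: W1-W3 means
Z = linkage(V, 'ward', 'euclidean');         % agglomerative clustering
clusterID = cluster(Z, 'maxclust', 4);       % assign each genotype to one of four clusters
dendrogram(Z);                               % visual counterpart of Fig. 3b
```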

Color intensity—an estimate for rate of maturation

The rate of panicle maturation is a well-studied trait that directly impacts final yield [57, 58]. Heat stress impacts rice seed development and hence alters the panicle maturation rate [59, 60]. Therefore, evaluating the dynamics of panicle maturation could be useful in determining the dynamics of the stress response in rice. However, the respective traits are usually evaluated by conventional phenotyping methods, which are inherently laborious and subjective. To estimate panicle maturation dynamics, we extracted the intensities of the RGB channels from the 3D point cloud. We then used the ratio of the R to G channel intensities to estimate the yellowness of the developing panicle, which increases as the panicle matures. We observed a temporal trend of increasing R to G ratio from W1 to W3 (Fig. 4a). This observation is consistent with the progression of panicle maturation, as panicle color changes from green to yellow. Interestingly, the R to G ratio was significantly higher for plants treated with HNT compared to controls, suggesting that HNT accelerates the rate of panicle maturation. We next explored genotypic differences in maturation rate (Fig. 4b). We observed a consistent increase in the R to G ratio from W1 to W3 under both control and HNT conditions (Fig. 4b). For the majority of genotypes, the R to G ratio was significantly higher in HNT-treated plants than in controls (Fig. 4b and Additional file 6).
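As a worked illustration, this maturation proxy reduces to a ratio of summed channel intensities over the filled voxels. The loaded variable below corresponds to the per-voxel average colors from the voxelization sketch above and is an assumed intermediate, not a file produced by the published pipeline.

```matlab
% Sketch: R-to-G intensity ratio as a maturation proxy. meanRGB is the
% per-voxel average color matrix from the voxelization sketch; storing it
% in a .mat file is an illustrative assumption.
load('panicle_voxels.mat', 'meanRGB');            % M x 3 average RGB per filled voxel
RtoG = sum(meanRGB(:,1)) / sum(meanRGB(:,2));     % higher values = yellower, more mature panicle
```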

Fig. 4

Estimation of color intensity. Color intensity represents the sum of the color intensities of signals from the red (R), green (G), and blue (B) channels. a Average ratio of R to G intensities from all genotypes for each treatment (control or HNT) and time point (week 1, 2, and 3). Box plots represent the range, median, and mean (red triangle) of the R to G ratio. Means sharing the same letter are not significantly different from each other (Student’s t-test; p < 0.1). b Heat map of the R to G ratio for different genotypes under control and HNT

Discussion

With recent advances in automated plant image acquisition, accurate quantification of phenotypic traits has become the focal point for realizing the potential of plant phenomics. The primary focus of automated phenotyping platforms has been on vegetative growth and development and, to some extent, on root architectural traits [53,54,55 and references therein]. Only limited effort has been directed towards more complex yield-related traits such as inflorescence architecture (IA) [16, 20,21,22, 28, 61]. After flowering, the inflorescence undergoes dynamic changes, such as grain filling and maturation, which contribute significantly to the final yield in cereals. Previous attempts to capture inflorescence-related traits have been limited to end-point measurements. Further, the automated LemnaTec phenotyping system, which is mainly used for whole-plant imaging, is not suited to extracting high-resolution data from the inflorescence. Hence, the major goal of this study was to capture the growth and developmental dynamics of IA at high resolution in rice. To this end, we developed a low-cost, effective system, ‘PI-Plat’, to comprehend multi-dimensional features of IA (Fig. 1). One of the main novelties of the PI-Plat is that it is designed to reconstruct 3D models of smaller plant parts, in this study the panicle, at very high resolution. Also, in contrast to the widely used turntable imaging systems in which the object rotates [62], in the PI-Plat the panicle is fixed at the center and the cameras rotate. Therefore, vibration is avoided, and the 3D point clouds have less noise. This imaging system can be used to image panicles in a non-destructive manner, which provides an opportunity to perform temporal phenotyping of the same panicle at consecutive developmental stages. Accordingly, developing rice panicles were imaged on a weekly basis after fertilization to capture growth dynamics. The multi-view images of the developing rice panicle were used for 3D reconstruction, which enabled us to capture digital traits such as voxel count and color intensity.

We found that the 3D reconstruction-based feature, voxel count, has a positive correlation with seed number and total seed weight at maturity. Panicle development after fertilization involves changes in seed weight and volume, while seed number remains constant. Consistent with this, we observed a temporal trend in the correlation of voxel count with final seed weight but not with seed number (Fig. 2). Our correlation analysis indicates that image-based phenotyping of developing panicles can be used to estimate the final yield outcome. This information can be valuable for elucidating the physiological and genetic basis of yield components in rice. Various yield components are determined by numerous genes and pathways, which likely influence the yield traits at different phases of panicle development. By using the 3D reconstruction-based voxel count during panicle development, researchers can identify phenotypic variation over time for divergent genotypes and hence increase the mapping resolution for linking genotype to phenotype. Furthermore, the relatively stable correlation between voxel count and seed number at maturity suggests that image-based phenotyping after fertilization can be used to estimate final seed number. In contrast, the 2D-based total pixel count of the developing panicle showed relatively lower and unstable correlations with seed number and total seed weight at maturity (Fig. 2). Interestingly, at W3, the 2D-based pixel count had lower correlations with the end-point measurements than the voxel count. For instance, the correlation of voxel count with projected surface area and total seed weight was 0.82 and 0.74, respectively, while the correlation of the 2D pixel count with projected surface area and total seed weight was 0.58 and 0.47, respectively. This could be due to the limitation of conventional 2D-based phenotyping in completely capturing the growth and color dynamics of developing rice seeds. Since voxel count positively correlates with final weight, it can be used to capture weight or volume dynamics. We observed an increase in voxel count from W1 to W3, which is directly related to the increase in size and volume of developing seeds; in the context of panicle development, this reflects the rate of grain filling. A significant gain in voxel count was achieved by W2, suggesting that substantial seed volume is attained by week 2 (Fig. 3). This observation holds true for four of the 11 genotypes, while the other seven genotypes did not show any significant difference between W1, W2, and W3. One possible explanation is that these genotypes underwent an accelerated increase in panicle volume and/or seed weight by W1, and thus exhibited only incremental changes during the subsequent two weeks. We observed a higher voxel count for HNT-treated plants compared to control plants at W1 (Fig. 3a). Surprisingly, these differences dissipated at W2 and W3, and no significant difference was observed at maturity. These results highlight the importance of temporal phenotyping relative to single-time-point measurements; an end-point measurement approach is not practical for identifying, and hence mapping, traits that do not persist at maturity. Since rice and most other grain crops, such as wheat and maize, are generally sensitive to environmental stresses such as heat and drought, capturing dynamic reproductive traits in a non-destructive manner will be valuable for research aimed at improving yield resilience to environmental stresses. Early detection of transitory phenotypes/traits is also valuable for molecular studies. Measurement of color intensities from the 3D point cloud aided us in understanding the dynamics of panicle maturation for diverse genotypes. Notably, panicles from HNT-treated plants showed a significantly higher R to G ratio, indicating that HNT plants undergo faster maturation. These traits derived from 3D reconstruction of multi-view images provided a close approximation of the structural features of the developing rice panicle.

To harness the full potential of existing genetic resources, we need to bridge the gap between genotype and phenotype. In this context, high-throughput genotyping has been facilitated by the development of low-cost sequencing platforms. However, accurate and efficient phenotyping of large-scale populations remains a major bottleneck for crop improvement [63,64,65]. The emergence of phenotyping platforms specifically targeting inflorescence-related traits promises a close approximation of yield-related parameters. PI-Plat provides an important first step towards achieving higher spatial and temporal resolution in IA phenotyping without destructive sampling. The next step towards high-throughput phenotyping of IA traits is automation, enabling researchers to develop genotype-to-phenotype linkages. Although the 3D-derived voxel count and color intensity developed as part of PI-Plat can be used to screen large populations and elucidate phenotypic variability in inflorescence-related traits, doing so remains laborious given the current lack of automation. In summary, the PI-Plat-derived 3D traits fill a significant gap in the plant phenotyping toolbox by providing greater spatial and temporal sensitivity for capturing dynamic inflorescence traits, especially for studying abiotic stress responses during reproductive development.

Availability of data and materials

Due to the relatively large size of the raw data, only a subset is shared on a UNL Box repository (https://unl.box.com/s/g0bof1mpfp33hn66b2qabrk9kiwmhbzv). The raw images used for 3D reconstruction and the manual phenotyping dataset used in this study are available from the corresponding author on request. The workflow scripts for running the PI-Plat-based Panicle-3D-Reconstruction can be found at https://wrchr.org/phenolib/panicle-3d-reconstruction/.

Abbreviations

IA:

inflorescence architecture

PI-Plat:

panicle imaging platform

3D:

three-dimensional

2D:

two-dimensional

HNT:

high night temperature

W1/2/3:

one/two/three weeks after flowering

RGB:

red, green, and blue

References

  1. Tester M, Langridge P. Breeding technologies to increase crop production in a changing world. Science. 2010;327:818–22. https://doi.org/10.1126/science.1183700.


  2. Alexandratos N, Bruinsma J. World Agriculture towards 2030/2050: the 2012 revision. 2012. www.fao.org/economic/esa. Accessed 15 Mar 2019.

  3. Röth S, Paul P, Fragkostefanakis S. Plant heat stress response and thermotolerance. 2016. In: Jaiwal P, Singh R, Dhankher O. (eds) Genetic manipulation in plants for mitigation of climate change. New Delhi: Springer.


  4. Ray DK, Mueller ND, West PC, Foley JA. Yield trends are insufficient to double global crop production by 2050. PLoS ONE. 2013;8:e66428. https://doi.org/10.1371/journal.pone.0066428.


  5. Godfray HCJ, Beddington JR, Crute IR, Haddad L, Lawrence D, Muir JF, et al. Food security: the challenge of feeding 9 billion people. Science. 2010;327:812–8. https://doi.org/10.1126/science.1185383.


  6. Foley JA, Ramankutty N, Brauman KA, Cassidy ES, Gerber JS, Johnston M, et al. Solutions for a cultivated planet. Nature. 2011;478:337–42. https://doi.org/10.1038/nature10452.


  7. Richards RA. Selectable traits to increase crop photosynthesis and yield of grain crops. J Exp Bot. 2000;51(Suppl_1):447–58. https://doi.org/10.1093/jexbot/51.suppl_1.447.


  8. Evans LT, Fischer RA. Yield potential: its definition, measurement, and significance. Crop Sci. 1999;39:1544. https://doi.org/10.2135/cropsci1999.3961544x.


  9. Doust A. Architectural evolution and its implications for domestication in grasses. Ann Bot. 2007;100:941–50. https://academic.oup.com/aob/article-abstract/100/5/941/135949. Accessed 14 Mar 2019.


  10. Duan L, Yang W, Huang C, Liu Q. A novel machine-vision-based facility for the automatic evaluation of yield-related traits in rice. Plant Methods. 2011;7:44. https://doi.org/10.1186/1746-4811-7-44.


  11. Reuzeau C, Pen J, Frankard V, Wolf J, Peerbolte R, Broekaert W, et al. TraitMill: a discovery engine for identifying yield-enhancement genes in cereals. Plant Gene Trait. 2010;1. https://biopublisher.ca/index.php/pgt/article/html/53. Accessed 12 Mar 2019.

  12. Granier C, Aguirrezabal L, Chenu K, Cookson SJ, Dauzat M, Hamard P, et al. PHENOPSIS, an automated platform for reproducible phenotyping of plant responses to soil water deficit in Arabidopsis thaliana permitted the identification of an accession with low sensitivity to soil water deficit. New Phytol. 2006;169:623–35. https://doi.org/10.1111/j.1469-8137.2005.01609.x.


  13. Golzarian MR, Frick RA, Rajendran K, Berger B, Roy S, Tester M, et al. Accurate inference of shoot biomass from high-throughput images of cereal plants. Plant Methods. 2011;7:2. https://doi.org/10.1186/1746-4811-7-2.


  14. Yang W, Xu X, Duan L, Luo Q, Chen S, Zeng S, et al. High-throughput measurement of rice tillers using a conveyor equipped with X-ray computed tomography. Rev Sci Instrum. 2011;82:025102. https://doi.org/10.1063/1.3531980.


  15. Bylesjö M, Segura V, Soolanayakanahally RY, Rae AM, Trygg J, Gustafsson P, et al. LAMINA: a tool for rapid quantification of leaf size and shape parameters. BMC Plant Biol. 2008;8:82. https://doi.org/10.1186/1471-2229-8-82.


  16. Wilson Z, Greenberg AJ, McCouch SR, Crowell S, Falcao AX, Shah A. High-resolution inflorescence phenotyping using a novel image-analysis pipeline PANorama. Plant Physiol. 2014;165:479–95.


  17. Yazdanbakhsh N, Fisahn J. High throughput phenotyping of root growth dynamics, lateral root formation, root architecture and root hair development enabled by PlaRoM. Funct Plant Biol. 2009;36:938. https://doi.org/10.1071/FP09167.


  18. Wang L, Uilecan I, Assadi A, et al. HYPOTrace: image analysis software for measuring hypocotyl growth and shape demonstrated on Arabidopsis seedlings undergoing photomorphogenesis. Plant Physiol. 2009;149:1632–7. https://www.plantphysiol.org/content/149/4/1632.short. Accessed 12 Mar 2019.


  19. Fiorani F, Schurr U. Future scenarios for plant phenotyping. Annu Rev Plant Biol. 2013;64:267–91. https://doi.org/10.1146/annurev-arplant-050312-120137.


  20. Ikeda M, Hirose Y, Takashi T, Shibata Y, Yamamura T, Komura T, et al. Analysis of rice panicle traits and detection of QTLs using an image analyzing method. Breed Sci. 2010;60:55–64. https://www.jstage.jst.go.jp/article/jsbbs/60/1/60_1_55/_article/-char/ja/. Accessed 12 Mar 2019.


  21. AL-Tam F, Adam H, Anjos A, Lorieux M, Larmande P, Ghesquière A, et al. P-TRAP: a Panicle Trait Phenotyping tool. BMC Plant Biol. 2013;13:122.


  22. Zhou Y, Srinivasan S, Mirnezami SV, Kusmec A, Fu Q, Attigala L, et al. Semiautomated feature extraction from RGB images for sorghum panicle architecture GWAS. Plant Physiol. 2018;179:24–37.


  23. Aquino A, Millan B, Gaston D, Diago M-P, Tardaguila J, Aquino A, et al. vitisFlower®: development and testing of a novel android-smartphone application for assessing the number of grapevine flowers per inflorescence using artificial vision techniques. Sensors. 2015;15:21204–18. https://doi.org/10.3390/s150921204.


  24. Millan B, Aquino A, Diago MP, Tardaguila J. Image analysis-based modelling for flower number estimation in grapevine. J Sci Food Agric. 2017;97:784–92. https://doi.org/10.1002/jsfa.7797.


  25. Wang Z, Underwood J, Walsh KB. Machine vision assessment of mango orchard flowering. Comput Electron Agric. 2018;151:501–11. https://doi.org/10.1016/J.COMPAG.2018.06.040.


  26. Ji W, Zhao D, Cheng F, Xu B, Zhang Y. Automatic recognition vision system guided for apple harvesting robot. Comput Electr Eng. 2012;38:1186–95. https://www.sciencedirect.com/science/article/pii/S0045790611001819. Accessed 14 Mar 2019.


  27. Crowell S, Falcão A, Shah A, Wilson Z, Greenberg AJ, McCouch S. High-resolution inflorescence phenotyping using a novel image-analysis pipeline, PANorama. Plant Physiol. 2014;165:479–95. https://www.plantphysiol.org/content/165/2/479.short. Accessed 14 Mar 2019.


  28. Gage JL, Miller ND, Spalding EP, Kaeppler SM, De Leon N. TIPS: a system for automated image-based phenotyping of maize tassels. Plant Methods. 2017. https://doi.org/10.1186/s13007-017-0172-8.


  29. Ta KN, Khong NG, Ha TL, Nguyen DT, Mai DC, Hoang TG, et al. A genome-wide association study using a Vietnamese landrace panel of rice (Oryza sativa) reveals new QTLs controlling panicle morphological traits. BMC Plant Biol. 2018. https://doi.org/10.1186/s12870-018-1504-1.


  30. Adriani DE, Dingkuhn M, Dardou A, Adam H, Luquet D, Lafarge T. Rice panicle plasticity in Near Isogenic Lines carrying a QTL for larger panicle is genotype and environment dependent. Rice. 2016;9:28. https://doi.org/10.1186/s12284-016-0101-x.


  31. Rebolledo MC, Peña AL, Duitama J, Cruz DF, Dingkuhn M, Grenier C, et al. Combining image analysis, genome wide association studies and different field trials to reveal stable genetic regions related to panicle architecture and the number of spikelets per panicle in rice. Front Plant Sci. 2016;7:1384. https://doi.org/10.3389/fpls.2016.01384.


  32. Li D, Xu L, Tang XS, Sun S, Cai X, Zhang P. 3D imaging of greenhouse plants with an inexpensive binocular stereo vision system. Remote Sens. 2017;9:508.


  33. Omasa K, Hosoi F, et al. 3D lidar imaging for detecting and understanding plant responses and canopy structure. J Exp Bot. 2007;58:881–98. https://academic.oup.com/jxb/article-abstract/58/4/881/425236. Accessed 15 Mar 2019.


  34. McCormick RF, Truong SK, Mullet JE. 3D Sorghum reconstructions from depth images identify QTL regulating shoot architecture. Plant Physiol. 2016;172:823–34. https://doi.org/10.1104/pp.16.00948.


  35. Brooks MJ, de Agapito L, Huynh DQ, Baumela L. Towards robust metric reconstruction via a dynamic uncalibrated stereo head. Image Vis Comput. 1998;16:989–1002. https://doi.org/10.1016/S0262-8856(98)00064-X.


  36. Negahdaripour S, Hayashi BY, Aloimonos Y. Direct motion stereo for passive navigation. IEEE Trans Robot Autom. 1995;11:829–43. https://doi.org/10.1109/70.478430.


  37. Fuhrmann S, Langguth F, Goesele M. MVE—a multi-view reconstruction environment. EUROGRAPHICS Work Graph Cult Herit. 2014.

  38. Sodhi P, Vijayarangan S, Wettergreen D. In-field segmentation and identification of plant structures using 3D imaging. In: 2017 IEEE/RSJ international conference on intelligent robots and systems (IROS). New York: IEEE; 2017. p. 5180–7. https://doi.org/10.1109/IROS.2017.8206407.

  39. Vijayarangan S, Sodhi P, Kini P, Bourne J, Du S, Sun H, et al. High-throughput robotic phenotyping of energy sorghum crops. In: Hutter M, Siegwart R, editors. Field and service robotics. Springer proceedings in advanced robotics, vol 5. Cham: Springer; 2018. p. 99–113. https://doi.org/10.1007/978-3-319-67361-5_7.


  40. Duan T, Chapman SC, Holland E, Rebetzke GJ, Guo Y, Zheng B. Dynamic quantification of canopy structure to characterize early plant vigour in wheat genotypes. J Exp Bot. 2016;67:4523–34. https://doi.org/10.1093/jxb/erw227.


  41. Hartmann A, Czauderna T, Hoffmann R, Stein N, Schreiber F. HTPheno: an image analysis pipeline for high-throughput plant phenotyping. BMC Bioinform. 2011;12:148. https://doi.org/10.1186/1471-2105-12-148.


  42. Chaudhury A, Barron JL. Machine vision system for 3D plant phenotyping. IEEE/ACM Trans Comput Biol Bioinform. 2018. https://doi.org/10.1109/TCBB.2018.2824814.


  43. Huang FC, Huang SY, Ker JW, Chen YC. High-performance SIFT hardware accelerator for real-time image feature extraction. IEEE Trans Circuits Syst Video Technol. 2012;22:340–51.


  44. Khan NY, McCane B, Wyvill G. SIFT and SURF performance evaluation against various image deformations on benchmark dataset. In: Proceedings—2011 international conference on digital image computing: techniques and applications, DICTA 2011. 2011. p. 501–6.

  45. Vantaram S, Saber E. Survey of contemporary trends in color image segmentation. J Electron Imaging. 2012;21. https://www.spiedigitallibrary.org/journals/Journal-of-Electronic-Imaging/volume-21/issue-4/040901/Survey-of-contemporary-trends-in-color-image-segmentation/10.1117/1.JEI.21.4.040901.short. Accessed 31 Jul 2019.

  46. Kutulakos KN, Seitz SM. A theory of shape by space carving. Int J Comput Vis. 2000;38:199–218.


  47. Besl P, McKay N. Method for registration of 3-D shapes. Sens Fusion IV Control Paradig. 1992. https://www.spiedigitallibrary.org/conference-proceedings-of-spie/1611/0000/Method-for-registration-of-3-D-shapes/10.1117/12.57955.short. Accessed 8 Apr 2019.

  48. Cohen-Or D, Kaufman A. Fundamentals of surface voxelization. Graph Models Image Process. 1995. https://www.sciencedirect.com/science/article/pii/S1077316985710398. Accessed 8 Apr 2019.


  49. Gonzalez R, Woods R. Digital Image Processing, Global Edition. 2018.

  50. R Core Team. R: A language and environment for statistical computing. Vienna: R Core Team; 2017.


  51. RStudio Team. RStudio: Integrated development environment for R. Boston: RStudio Team; 2016.


  52. Harrell FE Jr, with contributions from Charles Dupont and many others. Hmisc: Harrell Miscellaneous. R package version 4.1-1. 2018.

  53. Peterson BG, Carl P. PerformanceAnalytics: econometric tools for performance and risk analysis. R package version 1.5.2. 2018.

  54. Peng S, Huang J, Sheehy JE, Laza RC, Visperas RM, Zhong X, et al. Rice yields decline with higher night temperature from global warming. Proc Natl Acad Sci. 2004;101:9971–5. https://doi.org/10.1073/pnas.0403720101. Accessed 17 Apr 2019.


  55. Cheng W, Sakai H, Yagi K, Hasegawa T. Interactions of elevated [CO2] and night temperature on rice growth and yield. Agric For Meteorol. 2009;149:51–8. https://doi.org/10.1016/J.AGRFORMET.2008.07.006.


  56. Coast O, Ellis RH, Murdoch AJ, Quiñones C, Jagadish KSV. High night temperature induces contrasting responses for spikelet fertility, spikelet tissue temperature, flowering characteristics and grain quality in rice. Funct Plant Biol. 2015;42:149. https://doi.org/10.1071/FP14104.


  57. Jongkaewwattana S, Geng S, Hill JE, Miller BC. Within-panicle variability of grain filling in rice cultivars with different maturities. J Agron Crop Sci. 1993;171:236–42. https://doi.org/10.1111/j.1439-037X.1993.tb00135.x.


  58. Ellis RH. Rice seed quality development and temperature during late development and maturation. Seed Sci Res. 2011;21:95–101. https://doi.org/10.1017/S0960258510000425.


  59. Begcy K, Sandhu J, Walia H. Transient heat stress during early seed development primes germination and seedling establishment in rice. Front Plant Sci. 2018;9:1768.


  60. Folsom JJ, Begcy K, Hao X, Wang D, Walia H. Rice fertilization-Independent Endosperm1 regulates seed size under heat stress by controlling early endosperm development. Plant Physiol. 2014;165:238–48. https://doi.org/10.1104/pp.113.232413.


  61. Xiong X, Duan L, Liu L, Tu H, Yang P, Wu D, et al. Panicle-SEG: a robust image segmentation method for rice panicles in the field based on deep learning and superpixel optimization. Plant Methods. 2017;13:1–15. https://doi.org/10.1186/s13007-017-0254-7.


  62. He JQ, Harrison RJ, Li B. A novel 3D imaging system for strawberry phenotyping. Plant Methods. 2017;13:1–8.


  63. Humplík JF, Lazár D, Husičková A, Spíchal L. Automated phenotyping of plant shoots using imaging methods for analysis of plant stress responses—a review. Plant Methods. 2015;11:29. https://doi.org/10.1186/s13007-015-0072-8.


  64. Li L, Zhang Q, Huang D. A review of imaging techniques for plant phenotyping. Sensors. 2014;14:20078–111. https://doi.org/10.3390/s141120078.


  65. Fahlgren N, Gehan MA, Baxter I. Lights, camera, action: high-throughput plant phenotyping is ready for a close-up. Curr Opin Plant Biol. 2015;24:93–9. https://doi.org/10.1016/J.PBI.2015.02.006.



Acknowledgements

We thank Martha Rowe for the help with scanning of mature panicles and seeds.

Funding

This work was supported by National Science Foundation Award # 1736192 to HW and HY.

Author information

Authors and Affiliations

Authors

Contributions

HW, HY, YG, PS, and PP conceived and designed the experiment. PP, JS, BKD, FZ, and TG performed the experiments. FZ and TG performed imaging data analysis. PP and JS analyzed the results and wrote the manuscript. All authors read and approved the manuscript.

Corresponding author

Correspondence to Harkamal Walia.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Additional file 1.

Genetic and geographical information of the rice genotypes used in the study.

Additional file 2.

PI-Plat and its components.

Additional file 3.

Video showing PI-Plat in motion.

Additional file 4.

Raw data collected from developing (imaging derived) and mature (manual measurements) panicles.

Additional file 5.

Pearson correlation analysis for all the traits derived from 3D reconstruction and multi-view 2D-pixel count analysis of developing panicle (yellow color coded), and mature panicle derived traits from 2D scanning and manual measurement (green color coded). Significant correlation values (p value < 0.05) are highlighted in red font.

Additional file 6.

Average intensities of (A) red and (B) green channels from all genotypes for a respective treatment (control and HNT) and time-point (week 1, 2, and 3).

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Sandhu, J., Zhu, F., Paul, P. et al. PI-Plat: a high-resolution image-based 3D reconstruction method to estimate growth dynamics of rice inflorescence traits. Plant Methods 15, 162 (2019). https://doi.org/10.1186/s13007-019-0545-2
