
Fast reconstruction method of three-dimension model based on dual RGB-D cameras for peanut plant

Abstract

Background

Plant shape and structure are important factors in peanut breeding research. Constructing a three-dimensional (3D) model provides an effective digital tool for comprehensive and quantitative analysis of peanut plant structure. Speed and accuracy are the constant goals of plant 3D model reconstruction research.

Results

We propose a 3D reconstruction method based on dual RGB-D cameras that builds a peanut plant 3D model quickly and accurately. Two Kinect v2 cameras were placed with mirror symmetry on both sides of the peanut plant, and the acquired point cloud data were filtered twice to remove noise. After rotation and translation based on the corresponding geometric relationship, the point clouds acquired by the two Kinect v2 cameras were converted to the same coordinate system and spliced into the 3D structure of the peanut plant. The experiment was conducted at various growth stages on twenty potted peanuts. Plant height, width, length, and volume were calculated from the reconstructed 3D models, and manual measurements were taken during the experiment. The accuracy of the 3D model was evaluated through a synthetic coefficient generated by averaging the accuracy of the four traits. The test results showed that the average accuracy of the peanut plant 3D models reconstructed by this method is 93.42%. A comparative experiment with the iterative closest point (ICP) algorithm, a widely used 3D modeling algorithm, was additionally implemented to test the rapidity of this method. The results show that the proposed method is 2.54 times faster than the ICP method with comparable accuracy.

Conclusions

The reconstruction method described in this paper can rapidly and accurately establish a 3D model of the peanut plant and can also meet the modeling requirements of breeding programs for other species. This study offers a potential tool for further exploiting 3D models to improve plant traits and agronomic qualities.

Background

Peanuts are a widely cultivated oil and cash crop, providing a significant source of oil and protein [1]. Global peanut production was 50.606 million tons in 2021, and China was the largest producer with 18.20 million tons [2]. Improving the yield and quality of peanuts is important for China’s and the world’s oil supply [3, 4]. An effective way to increase peanut production is to develop new varieties with excellent traits using advanced gene technology [5,6,7]. The results of the interaction between genotypes and environmental factors are expressed through the phenotypic parameters of plant structure [8, 9]. Plant architectural traits are therefore important phenotypic traits for selecting new adaptive cultivars in crop breeding studies [10].

Plant structure reflects the size and organization of the above-ground organs of crops [11] and can indicate their growth status, cultivation conditions, and water and fertilizer management [12]. In addition, phenotypic traits such as height and width provide references for breeders to cultivate superior varieties [13,14,15,16]. Establishing a three-dimensional (3D) model of a plant gives a comprehensive picture of its morphological features, avoids the lack of depth information inherent in two-dimensional (2D) imaging, and facilitates the subsequent accurate extraction of multiple trait parameters [10, 17,18,19]. Therefore, 3D reconstruction models of plants have gradually become an essential part of phenotypic research.

In the last decade, much research has been done on the 3D modeling of plant structures using different kinds of technologies, including stereo vision (SV), Structure from Motion (SfM), LiDAR, and RGB-D cameras. Both SV and SfM are based on 2D imaging devices and reconstruct the 3D architecture of the target from 2D images taken from different perspectives. SV uses two or more cameras to image the target at the same time, while SfM captures overlapping images by moving a camera around the object [20,21,22]. Bao et al. [23] used a stereovision-equipped robot to reconstruct a 3D model of sorghum and successfully acquired phenotypic data from high-throughput crops in the field. Malambo et al. [24] proposed an SfM-based 3D modeling method using an unmanned aerial vehicle system and estimated maize and sorghum height from the point clouds generated by SfM. SV and SfM can be used both indoors and in the field, and in SfM the camera can be mounted on an unmanned aerial vehicle (UAV) platform to quickly cover a large field area [25]. However, SV and SfM are sensitive to light intensity, and changes in the lighting environment increase image deviation. Although SfM relaxes the image quality requirement, the many overlapping images contain considerable data redundancy and the reconstruction speed is slow [26]; improving the speed of 3D modeling therefore deserves attention. In addition, some researchers have tried to restore depth information from RGB images through deep learning algorithms [27,28,29]. However, this technology requires high-quality RGB images and powerful computers and has not been widely used in plant 3D modeling.

Reconstructing a 3D canopy from point clouds generated by scanning equipment is another widely used technique; such equipment generally employs the time-of-flight (ToF) or phase-shift scanning principle to generate the point cloud [30]. Shi et al. [31] used LiDAR to create 3D models of corn plants and enable real-time monitoring of the crop's 3D information. Moreno et al. [32] reconstructed vineyard crops from 3D point clouds generated by a LiDAR mounted on a mobile platform. Leaf color is a key phenotypic trait of crops, and a 3D model with color information can provide more phenotype information for simulating dynamic crop growth and development in space and time [33]. As an active 3D imaging instrument, LiDAR is more costly than a 2D camera [34]. Moreover, the reconstruction is affected by the edge effect: when the emitted beam hits the edge of a branch or leaf, diffuse reflection occurs and the LiDAR may miss the reflected wave, impairing edge recognition [35, 36]. The RGB-D camera can acquire both color and depth information about the target simultaneously. Its advantages include ease of development, high real-time performance, and strong anti-interference properties [37, 38]. Thus, since Microsoft released the Kinect in 2010, an increasing number of researchers have applied it to plant 3D modeling [39,40,41,42].

3D modeling methods have been applied in the field environment, where the phenotypic traits of crop populations are obtained through 3D reconstruction models of the crops [43]. Although field experiments reflect crop performance in the actual growth environment, they are easily affected by many uncontrollable factors, such as weather, light intensity, and wind [26, 44]. Moreover, breeding programs require the evaluation of architectural traits at a finer scale, such as the organ scale [10, 45, 46]. Measuring those traits under field conditions is not feasible, so researchers tend to conduct initial screening of breeding material in a controlled environment, where changes during crop growth can be observed more directly [17, 47, 48]. As an RGB-D camera, the Kinect v2 shows great potential for indoor 3D modeling thanks to its low cost and strong robustness [21, 49], and it has been applied to fine 3D modeling of plants [30, 50,51,52].

Fusing point clouds obtained from multiple angles is a common way to establish accurate 3D models, and researchers tend to reconstruct plant models from point clouds captured from three or more angles [42, 49]. The premise of point cloud fusion is the registration of multiple point clouds, which accurately aligns the point cloud data from different views into a complete 3D model of the plant [17]. A registration algorithm finds the relationship between different views by searching for correspondences of key points between them. The accuracy of 3D modeling is determined by the registration algorithm; as a classical 3D point cloud registration algorithm, the iterative closest point (ICP) algorithm has been widely used in plant modeling [38, 53]. It is difficult to establish an accurate plant 3D model from the information collected from a single view, so the target must be scanned from multiple views to obtain point clouds in different directions and integrate them effectively [54]. Point clouds from more perspectives improve modeling accuracy, but the more point clouds there are to register, the longer it takes to establish the model, and modeling efficiency decreases as the number of points increases [55]. Balancing speed and accuracy is therefore a problem that must be considered.

At present, many achievements have been made in 3D model reconstruction for crops such as maize [31], sorghum [23], and soybean [56], but the 3D plant model of the peanut has not been thoroughly researched. This research aims to quickly establish an accurate 3D model of an individual peanut plant from point clouds obtained from only two views. This paper describes a 3D modeling method that can quickly reconstruct the 3D model from non-overlapping point clouds captured in two symmetrical directions while ensuring the accuracy of the 3D model. The main contributions of this paper are as follows: (1) proposing a method for automatic 3D plant reconstruction and phenotypic data acquisition for peanuts based on dual Kinect v2; (2) optimizing the parameters of the filtering algorithm and evaluating the accuracy of the reconstructed 3D point cloud model to determine the method's feasibility; (3) designing a comparison experiment with the ICP algorithm to test the rapidity of the proposed method.

This paper is organized as follows: "Related works" section explains the related works, which include the point cloud acquisition system for the peanut plant, parameter calibration of the Kinect v2, and generation of the color 3D point cloud. "Methods" section describes the method for 3D plant reconstruction and phenotypic data acquisition for the peanut plant. "Experiment and results" section reports the experiment setup and results that determine the effectiveness of the proposed method. "Discussion" section discusses the factors affecting the accuracy of 3D model reconstruction, the importance of parameter selection in statistical filtering, and the advantages of the proposed method in modeling speed. Finally, the conclusions and future work are given in "Conclusion" section.

Related works

In this section, the acquisition method for the peanut plant point cloud is introduced, including the point cloud acquisition system, sensor calibration, and the 3D point cloud generation process.

Peanut plant point cloud acquisition system

To collect the peanut plant point cloud, a plant information acquisition system based on the Kinect v2 was developed (Fig. 1). The flowerpot, which measures 23 cm in diameter and 20 cm in height, was placed on an 80 cm-high operating table during data collection. Two Kinect v2 cameras, designated No.1 and No.2, were placed on opposite sides of the flowerpot in left-right mirror symmetry to reduce the effect of leaf overlap. Normally, the height and width of peanut plants do not exceed 40 cm [13, 57]. The measurement range of the Kinect v2 is from 50 to 400 cm, and its depth camera has a vertical field of view of 60 degrees [58]. The two Kinect v2 cameras were 70 cm away from the center of the flowerpot with the lens focal point 100 cm above the ground, which meets the measurement requirements and fills the viewing field with the peanut plant to the greatest extent.

Fig. 1 Schematic diagram of point cloud data acquisition system

The resolution of the color and depth images obtained by the Kinect v2 is 1920 × 1080 and 512 × 424, respectively. The image information obtained by the Kinect v2 is transferred to a computer with an Intel(R) Core(TM) i5-7300HQ CPU @ 2.50 GHz, an NVIDIA GeForce GTX 1050 Ti graphics card, and the Microsoft Windows 10 operating system. The 3D reconstruction program was developed in C++ with OpenCV 3.4.1 and PCL (Point Cloud Library) 1.8.1.

Kinect v2 parameter calibration

To obtain accurate color and depth information for a targeted object, the RGB-D camera needs to be carefully calibrated to achieve pixel-to-pixel matching between its depth image and RGB image. The calibration is governed by the RGB-D camera’s intrinsic parameters, e.g. focal length and lens distortion, and by the relative position and orientation between the RGB and depth sensors. These parameters differ from camera to camera and should be provided by the manufacturer. However, some commercial RGB-D cameras do not come with detailed technical information [59], and the point clouds derived from the depth images are often very noisy, which makes it challenging to use RGB-D cameras correctly and accurately. To align the RGB and depth images more accurately and create color 3D point clouds, the intrinsic parameters of the Kinect v2 were calibrated.

The Kinect v2 is equipped with an RGB camera and a depth camera whose fields of view do not coincide. To obtain the intrinsic parameter matrices \({IM}_{rgb}\) and \({IM}_{ir}\) of the RGB and depth cameras, respectively, as well as the rotation matrix \({EM}_{r}\) and translation matrix \({EM}_{t}\) between the two cameras, the Kinect v2 parameters must be calibrated so that the color and depth images can be matched. RGB and depth images can then be matched using these matrices. Figure 2 illustrates the steps involved in calibrating the Kinect v2’s parameters [60].

Fig. 2 Calibration of Kinect v2 camera parameters

Step 1: The Kinect v2 was used to take color and infrared images of a checkerboard calibration plate at 20 different positions, angles, and attitudes.

Step 2: The captured images were input into the Stereo Camera Calibration software package of MATLAB.

Step 3: The Stereo Camera Calibration package was used to determine the calibration error of the input images. The images with the largest calibration error were then deleted in descending order of error until the average calibration error was less than 0.15.

Step 4: The intrinsic parameter matrices \({IM}_{rgb}\) and \({IM}_{ir}\), the rotation matrix \({EM}_{r}\), and the translation matrix \({EM}_{t}\) of the RGB camera and depth camera were obtained.
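The calibrated matrices are consumed later by the point cloud generation code. As a minimal sketch (assuming the MATLAB results are exported to a hypothetical YAML file named kinect_calib.yml with keys IM_rgb, IM_ir, EM_r, and EM_t, which the paper does not specify), they could be loaded into the C++ program with OpenCV's cv::FileStorage:

```cpp
#include <opencv2/core.hpp>
#include <iostream>

int main() {
    // Hypothetical file and key names; the paper does not describe how the
    // MATLAB calibration results are exported to the C++ program.
    cv::FileStorage fs("kinect_calib.yml", cv::FileStorage::READ);
    if (!fs.isOpened()) {
        std::cerr << "Cannot open calibration file" << std::endl;
        return 1;
    }
    cv::Mat IM_rgb, IM_ir, EM_r, EM_t;  // intrinsics of RGB/IR cameras, extrinsic R and t
    fs["IM_rgb"] >> IM_rgb;
    fs["IM_ir"]  >> IM_ir;
    fs["EM_r"]   >> EM_r;
    fs["EM_t"]   >> EM_t;
    fs.release();
    std::cout << "IM_ir =\n" << IM_ir << std::endl;
    return 0;
}
```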

Generation process of color 3D point cloud

The depth data obtained by the Kinect v2 represent the distance between the target point and the plane in which the Kinect v2 is located. The 3D reconstruction of the target is based on the coordinate information of the target points, so the depth data must be converted into a 3D point cloud containing coordinate information, as shown in Eq. (1).

$$\left\{\begin{array}{l}{p}_{ir}=d\times \left[{x}^{*}\;\;{y}^{*}\;\;1\right]\\ {P}_{ir}={p}_{ir}\times {IM}_{ir}\end{array}\right.$$
(1)

where \({p}_{ir}\) represents the depth information of a pixel in the depth image obtained by the Kinect v2, \(d\) represents the depth value, and \({x}^{*}\) and \({y}^{*}\) respectively represent the row and column positions of the pixel in the depth image. \({P}_{ir}\) represents the transformed point cloud information.
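For illustration, the back-projection that Eq. (1) describes can be written as the standard pinhole model, assuming the depth camera intrinsics are expressed as focal lengths (fx, fy) and principal point (cx, cy) taken from \({IM}_{ir}\). This is a sketch under that assumption, not the authors' code:

```cpp
#include <Eigen/Dense>

// Back-project one depth pixel (column u = x*, row v = y*, depth d in millimetres)
// into a 3D point in the depth camera coordinate system.
// fx, fy, cx, cy are assumed to come from the calibrated intrinsic matrix IM_ir.
Eigen::Vector3f depthPixelToPoint(int u, int v, float d,
                                  float fx, float fy, float cx, float cy) {
    Eigen::Vector3f P_ir;
    P_ir.x() = (u - cx) * d / fx;  // X in the depth camera frame
    P_ir.y() = (v - cy) * d / fy;  // Y in the depth camera frame
    P_ir.z() = d;                  // Z is the measured depth itself
    return P_ir;
}
```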

To get the color point cloud, the depth point cloud derived from the depth image needs to be converted to the RGB camera coordinate system. This transformation is obtained from the calibrated rotation matrix \({EM}_{r}\) and translation matrix \({EM}_{t}\) using Eq. (2).

$${P}_{rgb}={P}_{ir}\times {EM}_{r}+{EM}_{t} $$
(2)

where \({P}_{rgb}\) represents any point of the depth point cloud converted to the color camera coordinate system. Since the imaging range and resolution of the depth image and color image are different, it is necessary to find the point in the color image corresponding to \({P}_{rgb}\) to get its color information. The pixel position \(CP\) in the color image matching \({P}_{rgb}\) can be obtained by Eq. (3).

$$CP={P}_{rgb}\times {IM}_{rgb}$$
(3)

\(CP\) consists of the vector \(\left(\widehat{x},\widehat{y},c\right)\), where \(\widehat{x}\) and \(\widehat{y}\) are the computed row and column positions in the color image. The color value \(c\) of the pixel closest to these computed positions in the actual color image is taken as the color of the corresponding depth point. Based on this method, a color three-dimensional point cloud can be obtained.
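A minimal sketch of Eqs. (2) and (3), written here in the column-vector convention and assuming \({EM}_{r}\), \({EM}_{t}\), and the RGB intrinsics (fx_rgb, fy_rgb, cx_rgb, cy_rgb) come from the calibration above; the nearest-pixel lookup follows the description in the text:

```cpp
#include <Eigen/Dense>
#include <opencv2/core.hpp>
#include <cmath>

// Transform a depth-frame point into the RGB camera frame (Eq. 2),
// project it into the color image (Eq. 3), and sample the nearest pixel.
// Returns false if the projected position falls outside the color image.
bool colorizePoint(const Eigen::Vector3f& P_ir,
                   const Eigen::Matrix3f& EM_r, const Eigen::Vector3f& EM_t,
                   float fx_rgb, float fy_rgb, float cx_rgb, float cy_rgb,
                   const cv::Mat& colorImage,   // 8-bit BGR, 1920 x 1080
                   cv::Vec3b& colorOut) {
    // Eq. (2): rotate and translate into the RGB camera coordinate system.
    Eigen::Vector3f P_rgb = EM_r * P_ir + EM_t;
    if (P_rgb.z() <= 0.0f) return false;

    // Eq. (3): perspective projection onto the color image plane.
    float x_hat = fx_rgb * P_rgb.x() / P_rgb.z() + cx_rgb;  // column position
    float y_hat = fy_rgb * P_rgb.y() / P_rgb.z() + cy_rgb;  // row position

    // Nearest-pixel lookup, as described for the vector (x_hat, y_hat, c).
    int col = static_cast<int>(std::lround(x_hat));
    int row = static_cast<int>(std::lround(y_hat));
    if (col < 0 || col >= colorImage.cols || row < 0 || row >= colorImage.rows)
        return false;
    colorOut = colorImage.at<cv::Vec3b>(row, col);
    return true;
}
```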

Methods

In the following section, the fast 3D model reconstruction method is presented. Firstly, the point cloud data are filtered twice, and then the two point clouds from the dual Kinect v2 setup are fused and modeled based on their corresponding spatial relationship. The method used to evaluate the accuracy of the reconstructed 3D model is also described in this section.

Passthrough filtering and parameter determination

When the RGB-D camera acquires depth point cloud information, background noise is introduced, which can be removed using PassThrough filtering [61]. PassThrough filtering eliminates the points that do not satisfy the constraint conditions illustrated in Eq. (4), yielding the region of interest (ROI).

$$\left\{\begin{array}{c}{X}_{min}<x<{X}_{max}\\ {Y}_{min}<y<{Y}_{max}\\ {Z}_{min}<z<{Z}_{max}\end{array}\right.$$
(4)

where \(x\), \(y\), and \(z\) denote the position of a point in the point cloud coordinate system, and \(\left({X}_{min},{X}_{max}\right)\), \(\left({Y}_{min},{Y}_{max}\right)\), and \(\left({Z}_{min},{Z}_{max}\right)\) denote the filtering range along the three coordinate directions (Fig. 3). Their values can be calculated based on the size of the peanut plant and are shown in Table 1. All points that do not satisfy this constraint were filtered out as background noise, and the ROI was determined after PassThrough filtering.

Fig. 3 The coordinate system of Kinect v2

Table 1 Calibration results of Kinect v2 parameters
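The constraint of Eq. (4) maps directly onto PCL's pcl::PassThrough filter, applied once per axis. A minimal sketch is shown below; the limit values are placeholders to be set from the plant size as described above, not values taken from the paper:

```cpp
#include <pcl/point_types.h>
#include <pcl/filters/passthrough.h>

using CloudT = pcl::PointCloud<pcl::PointXYZRGB>;

// Keep only the points inside the box [xmin,xmax] x [ymin,ymax] x [zmin,zmax].
CloudT::Ptr passThroughROI(const CloudT::Ptr& input,
                           float xmin, float xmax,
                           float ymin, float ymax,
                           float zmin, float zmax) {
    CloudT::Ptr tmp(new CloudT), out(new CloudT);
    pcl::PassThrough<pcl::PointXYZRGB> pass;

    pass.setInputCloud(input);          // filter along X
    pass.setFilterFieldName("x");
    pass.setFilterLimits(xmin, xmax);
    pass.filter(*tmp);

    pass.setInputCloud(tmp);            // filter along Y
    pass.setFilterFieldName("y");
    pass.setFilterLimits(ymin, ymax);
    pass.filter(*out);

    pass.setInputCloud(out);            // filter along Z
    pass.setFilterFieldName("z");
    pass.setFilterLimits(zmin, zmax);
    pass.filter(*tmp);

    return tmp;
}
```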

Statistical filtering and parameter optimization

After PassThrough filtering, only the approximate spatial range of the effective point cloud is obtained. Some noise caused by the environment and the camera remains in these point clouds and must be removed by a second filtering step. The second filter is a statistical filter [61], which is based on the assumption that the average distance between each point in the point cloud and its m neighboring points follows a Gaussian distribution. During statistical filtering, Eq. (5) is used to calculate the average distance between each point in the point cloud and its \(m\) neighboring points. The mean value \(\mu \) and standard deviation \(\sigma \) of these average distances are then determined. Finally, the effective range is set to \(\left(\mu -k \cdot \sigma ,\mu +k \cdot \sigma \right)\), where \(k\) is a coefficient. If the average distance between a point and its m neighboring points is not within this range, the point is considered noise. The values of \(m\) and \(k\) in statistical filtering influence the filtering effect.

$$\overline{d }=\frac{1}{m}\sum_{j=1}^{m}\sqrt{{\left({x}_{i}-{x}_{j}\right)}^{2}+{\left({y}_{i}-{y}_{j}\right)}^{2}+{\left({z}_{i}-{z}_{j}\right)}^{2}}$$
(5)

where \({x}_{i}\), \({y}_{i}\), and \({z}_{i}\) represent the coordinates of the target point on the three coordinate axes, and \({x}_{j}\), \({y}_{j}\), and \({z}_{j}\) represent the coordinates of the j-th nearest neighbor of the target point on the three coordinate axes.
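This statistical filter corresponds to PCL's pcl::StatisticalOutlierRemoval, where setMeanK plays the role of \(m\) and setStddevMulThresh the role of \(k\). A minimal sketch, using the values optimized later in the paper (m = 35, k = 1.0) as defaults:

```cpp
#include <pcl/point_types.h>
#include <pcl/filters/statistical_outlier_removal.h>

using CloudT = pcl::PointCloud<pcl::PointXYZRGB>;

// Remove points whose mean distance to their m nearest neighbours lies
// outside (mu - k*sigma, mu + k*sigma).
CloudT::Ptr statisticalFilter(const CloudT::Ptr& input, int m = 35, double k = 1.0) {
    CloudT::Ptr out(new CloudT);
    pcl::StatisticalOutlierRemoval<pcl::PointXYZRGB> sor;
    sor.setInputCloud(input);
    sor.setMeanK(m);             // number of neighbours used for the mean distance
    sor.setStddevMulThresh(k);   // coefficient k of the standard deviation
    sor.filter(*out);
    return out;
}
```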

The value of \(m\) is related to the number of points of the target object, that is, \(m\) is affected by the point cloud density. The value of \(k\) is related to \(m\): the smaller the value of \(m\), the fewer points are output by statistical filtering for a constant \(k\). Modeling speed improves when the number of points is reduced, provided the three-dimensional structure of the peanut plant is not distorted. An algorithm was designed to optimize the \(k\) and \(m\) values and improve the filtering effect: different \(k\) and \(m\) values are selected to change the number of output 3D points and the spatial 3D structure, and the optimized parameter values are determined by comparing the output. The steps of the algorithm are shown in the following (a code sketch of the sweep follows the steps).

Step 1: The value of \(k\) is set to 1, and the \(m\) value gradually decreases from 100 until the filtered point cloud spatial 3D structure starts to deteriorate significantly. The \(m\) value before this phenomenon is considered the appropriate \(m\) value.

Step 2: The value of \(m\) is set to the value determined in step 1, and the \(k\) value gradually increases from 0 until the filtered point cloud spatial 3D structure starts to deteriorate significantly. The \(k\) value before this phenomenon is considered the appropriate \(k\) value.

Step 3: The values of \(m\) and \(k\) obtained in the above steps are applied to the statistical filtering process as optimized parameters.

With the optimized parameters, the number of points remaining after statistical filtering balances accuracy and speed in the subsequent modeling process.
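The optimization procedure above is essentially a one-dimensional sweep over \(m\) (with \(k\) fixed) followed by a sweep over \(k\) (with \(m\) fixed); in this study the "deterioration" check was done by inspecting the filtered cloud. The sketch below simply reports the surviving point count for each candidate so the operator can inspect the resulting clouds; the candidate lists and step sizes are illustrative, not from the paper:

```cpp
#include <pcl/point_types.h>
#include <pcl/filters/statistical_outlier_removal.h>
#include <iostream>

using CloudT = pcl::PointCloud<pcl::PointXYZRGB>;

// Sweep candidate parameter values and report how many points survive the
// statistical filter; the visual check for structural deterioration described
// in Steps 1-2 is left to the operator.
void sweepStatisticalFilterParams(const CloudT::Ptr& input) {
    pcl::StatisticalOutlierRemoval<pcl::PointXYZRGB> sor;
    sor.setInputCloud(input);

    // Step 1: fix k = 1.0 and decrease m from 100 (illustrative step of 5).
    for (int m = 100; m >= 5; m -= 5) {
        CloudT out;
        sor.setMeanK(m);
        sor.setStddevMulThresh(1.0);
        sor.filter(out);
        std::cout << "k=1.0, m=" << m << " -> " << out.size() << " points\n";
    }

    // Step 2: fix m at the value chosen in Step 1 (35 in this paper)
    // and increase k from a small value (illustrative step of 0.2).
    for (int i = 1; i <= 10; ++i) {
        double k = 0.2 * i;
        CloudT out;
        sor.setMeanK(35);
        sor.setStddevMulThresh(k);
        sor.filter(out);
        std::cout << "m=35, k=" << k << " -> " << out.size() << " points\n";
    }
}
```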

Fusion and modeling of point clouds of peanut plant

The filtered point clouds can be directly fused to generate a 3D model if the exact coordinates of each point in the point clouds obtained by two Kinects can be determined in the same spatial coordinate system. The position information of the point cloud acquired by Kinect v2 is determined by the coordinate system in which it is located. In the real world, the same point has a different coordinate position in each of the Kinect v2 coordinate systems. The coordinate system of the Kinect v2 in various positions must be converted to the same coordinate system in order to restore the point cloud relative positions in the real world [62].

In this paper, a 3D model reconstruction method based on point cloud spatial coordinates (PSC) is designed. To determine the spatial coordinate position of the point cloud, the coordinate system of Kinect v2-No.1 is used as the reference coordinate system, and the coordinate system of Kinect v2-No.2 is converted into it. The conversion is shown in Fig. 4, and the fusion and modeling steps of the PSC method are as follows.

Fig. 4 Transformation diagram of the coordinate system

Step 1: Kinect v2-No.2 keeps the Y-axis unchanged and rotates 180 degrees to the right, then the position of \(Q(x,y,z)\) in the original coordinate system is changed to \(Q^{\prime}\,(x^{\prime},\,y^{\prime},\,z^{\prime})\) in the rotated coordinate system. The relation between Q′ and Q is shown in Eq. (6).

$$\left\{\begin{array}{l}{x}^{\prime}=-x\\ {y}^{\prime}=y\\ {z}^{\prime}=-z\end{array}\right.$$
(6)

Step 2: The rotated coordinate system of Kinect v2-No.2 is moved 1400 mm to the left along the Z-axis, so the position of point \(Q^{\prime}\,(x^{\prime},\,y^{\prime},\,z^{\prime})\) changes to \(Q^{\prime\prime}(x^{\prime\prime}, y^{\prime\prime}, z^{\prime\prime})\). The relation between Q″ and Q′ is shown in Eq. (7).

$$\left\{\begin{array}{l}{x}^{\prime\prime}={x}^{\prime}\\ {y}^{\prime\prime}={y}^{\prime}\\ {z}^{\prime\prime}={z}^{\prime}-1400\end{array}\right.$$
(7)

Step 3: The color 3D point cloud coordinates from Kinect v2-No.2 can thus be converted to the coordinate system of Kinect v2-No.1 using Eq. (8):

$$\left[{x}^{\prime\prime},{y}^{\prime\prime},z^{\prime\prime}\right]=\left[x,y,z\right]\left[\begin{array}{ccc}-1& 0& 0\\ 0& 1& 0\\ 0& 0& -1\end{array}\right]-\left[\begin{array}{ccc}0& 0& 1400\end{array}\right]$$
(8)

Step 4: The color point clouds originating from Kinect v2-No.1 and Kinect v2-No.2 are spliced directly according to the transformed coordinate position, and then the 3D model of the peanut plant is generated.
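The rotation and translation of Eq. (8) can be packed into a single 4 × 4 homogeneous transform and applied with pcl::transformPointCloud, after which the two clouds are spliced directly as in Step 4. A minimal sketch, assuming point coordinates are expressed in millimetres, consistent with the 1400 mm baseline:

```cpp
#include <pcl/point_types.h>
#include <pcl/common/transforms.h>

using CloudT = pcl::PointCloud<pcl::PointXYZRGB>;

// Convert the cloud of Kinect v2-No.2 into the coordinate system of
// Kinect v2-No.1 (Eq. 8) and splice the two clouds into one model.
CloudT::Ptr fusePSC(const CloudT::Ptr& cloudNo1, const CloudT::Ptr& cloudNo2) {
    // Homogeneous transform equivalent to Eq. (8): x'' = -x, y'' = y, z'' = -z - 1400.
    Eigen::Matrix4f T = Eigen::Matrix4f::Identity();
    T(0, 0) = -1.0f;     // mirror the X axis
    T(2, 2) = -1.0f;     // mirror the Z axis
    T(2, 3) = -1400.0f;  // shift 1400 mm along Z (units assumed to be mm)

    CloudT::Ptr transformedNo2(new CloudT);
    pcl::transformPointCloud(*cloudNo2, *transformedNo2, T);

    // Step 4: direct splicing of the two clouds in the common coordinate system.
    CloudT::Ptr model(new CloudT);
    *model = *cloudNo1;
    *model += *transformedNo2;
    return model;
}
```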

Accuracy evaluation of 3D reconstruction model of peanut plant

A common method of evaluating the accuracy of the 3D reconstruction model is to compare the phenotypic parameters calculated from the 3D model with those measured manually. These parameters are generally height, width, and volume [20, 50]. Based on the reconstructed 3D model, the height, width, length, and volume of the peanut plant were calculated through Eq. (9).

$$\left\{\begin{array}{l}{H}_{c}={Y}_{h\_max}-{Y}_{h\_min}\\ {W}_{c}={X}_{w\_max}-{X}_{w\_min}\\ {L}_{c}={Z}_{l\_max}-{Z}_{l\_min}\\ {V}_{c}={H}_{c}\times {W}_{c}\times {L}_{c}\end{array}\right.$$
(9)

where \({H}_{c}\), \({W}_{c}\), \({L}_{c}\), and \({V}_{c}\) respectively represent the height, width, length, and volume of the 3D model of the peanut plant. \({Y}_{h\_max}\), \({X}_{w\_max}\), and \({Z}_{l\_max}\) represent the maximum values of the 3D model on the three coordinate axes, and \({Y}_{h\_min}\), \({X}_{w\_min}\), and \({Z}_{l\_min}\) represent the corresponding minimum values.
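Eq. (9) amounts to taking the axis-aligned bounding box of the fused cloud. A minimal sketch using pcl::getMinMax3D, with the axis assignment (Y = height, X = width, Z = length) following the paper's coordinate convention:

```cpp
#include <pcl/point_types.h>
#include <pcl/common/common.h>   // pcl::getMinMax3D

using CloudT = pcl::PointCloud<pcl::PointXYZRGB>;

struct PlantTraits {
    float height, width, length, volume;
};

// Compute the traits of Eq. (9) from the axis-aligned bounding box of the model.
PlantTraits computeTraits(const CloudT& model) {
    pcl::PointXYZRGB minPt, maxPt;
    pcl::getMinMax3D(model, minPt, maxPt);

    PlantTraits t;
    t.height = maxPt.y - minPt.y;                 // H_c
    t.width  = maxPt.x - minPt.x;                 // W_c
    t.length = maxPt.z - minPt.z;                 // L_c
    t.volume = t.height * t.width * t.length;     // V_c
    return t;
}
```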

The ground-truth data were obtained by manual measurement. The parameters of the peanut plant were measured with a ruler; each parameter was measured three times and the average value was taken. The synthetic accuracy of 3D model reconstruction was evaluated through Eq. (10).

$$Acc=\left[1-\left(\left|\frac{{H}_{c}-{H}_{m}}{{H}_{m}}\right|\times \frac{1}{4}+\left|\frac{{W}_{c}-{W}_{m}}{{W}_{m}}\right|\times \frac{1}{4}+\left|\frac{{L}_{c}-{L}_{m}}{{L}_{m}}\right|\times \frac{1}{4}+\left|\frac{{V}_{c}-{V}_{m}}{{V}_{m}}\right|\times \frac{1}{4}\right)\right]\times 100\%$$
(10)

where \(Acc\) represents the model accuracy, and \({H}_{m}\), \({W}_{m}\), and \({L}_{m}\) respectively represent the manually measured results. \({V}_{m}\) is the volume of the peanut plant calculated from the measured values.
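Eq. (10) averages the four relative errors with equal weights of 1/4 and subtracts the result from 1. A small helper for clarity:

```cpp
#include <cmath>

// Synthetic accuracy of Eq. (10), returned as a percentage.
double syntheticAccuracy(double Hc, double Wc, double Lc, double Vc,
                         double Hm, double Wm, double Lm, double Vm) {
    double err = 0.25 * (std::fabs((Hc - Hm) / Hm) +
                         std::fabs((Wc - Wm) / Wm) +
                         std::fabs((Lc - Lm) / Lm) +
                         std::fabs((Vc - Vm) / Vm));
    return (1.0 - err) * 100.0;
}
```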

Experiment and results

In this section, we present the experimental design and results of the 3D model reconstruction of the peanut plant.

Environment and scheme design of experiment

The experiment was carried out in the greenhouse of Hebei Agricultural University (115°28′35″E, 38°50′57″N) from May 2021 to July 2021 with the peanut variety Jihua No. 5. The peanuts were grown in pots and planted on May 19, 2021. Twenty peanut plants with good growth were randomly chosen for the experiment. Data were collected at three distinct stages of peanut growth: the sprout, seedling, and flowering stages, on June 8, June 18, and July 1, respectively. In each experiment, the two Kinect v2 cameras each captured one frame of each peanut plant, generating one group of point clouds. One group of point clouds was acquired per plant at each growth stage, so three groups were collected from each peanut plant over the whole experiment, giving a total of 60 groups of point clouds for the twenty plants.

Data filtering results

Figure 5 illustrates the data filtering results on the original 3D point cloud. Figure 5a shows the original 3D color point cloud before filtering, obtained by one side-view Kinect v2. The effect of PassThrough filtering is depicted in Fig. 5b: after PassThrough filtering, only the point cloud containing the peanut plant is retained. The effect of statistical filtering is depicted in Fig. 5c: certain interference and outlier noise are eliminated. Compared with the PassThrough filtering result, statistical filtering connects most of the point cloud, removes discrete points, and clarifies the edges of the peanut plant.

Fig. 5 Data filter results. a 3D color point cloud. b PassThrough filtering result. c Statistical filtering result

Results of 3D model construction

Figure 6 illustrates the process and results of reconstructing the peanut plant model using the filtered point clouds. As illustrated in the figure, the two Kinect v2 point clouds are spliced together after coordinate conversion to form a complete three-dimensional peanut plant structure. Figure 6 also shows that the point cloud density at the edges of the peanut plant 3D model is low due to the combined effects of filtering and diffuse reflection, while the density in the center of the model, where the point clouds obtained by the two Kinect v2 cameras overlap, is high.

Fig. 6 3D reconstruction process and results

Accuracy evaluation result of 3D model

The experimental data were collected at three distinct stages of peanut growth, and in each experiment twenty targets were reconstructed in three dimensions. The statistical data of the ground-truth and the values calculated from the 3D model for the geometric traits of peanut plants at the sprout stage are shown in Table 2. As shown in Table 2, the accuracy of the calculated height relative to the ground-truth ranges from 91.06% to 99.37%, the accuracy of the calculated width from 82.33% to 100.00%, the accuracy of the calculated length from 69.63% to 99.05%, and the accuracy of the calculated volume from 73.12% to 99.74%.

Table 2 Statistical data of geometric traits obtained by manual measurement and 3D model calculation for peanut plants at the sprout stage

The statistical data of the ground-truth and the values calculated from the 3D model for the geometric traits of peanut plants at the seedling stage are shown in Table 3. As shown in Table 3, the accuracy of the calculated height relative to the ground-truth ranges from 95.78% to 99.64%, the accuracy of the calculated width from 85.15% to 99.10%, the accuracy of the calculated length from 80.33% to 99.66%, and the accuracy of the calculated volume from 75.70% to 96.70%.

Table 3 Statistical data of geometric traits obtained by manual measurement and 3D model calculation for peanut plants at the seedling stage

The statistical data of the ground-truth and the values calculated from the 3D model for the geometric traits of peanut plants at the flowering stage are shown in Table 4. As shown in Table 4, the accuracy of the calculated height relative to the ground-truth ranges from 95.59% to 100.00%, the accuracy of the calculated width from 88.45% to 98.84%, the accuracy of the calculated length from 71.89% to 99.69%, and the accuracy of the calculated volume from 73.97% to 99.62%.

Table 4 Statistical data of geometric traits obtained by manual measurement and 3D model calculation for peanut plants at the flowering stage

Across all three growth stages, the average accuracy of the peanut plants' height, width, length, and volume calculated from the 3D models relative to the ground-truth is 97.37%, 95.33%, 90.69%, and 90.28%, respectively. Figure 7 shows the correlation between the ground-truth measurements and the 3D model calculations for each peanut plant during the whole experiment. As can be seen from Fig. 7, there is an obvious positive correlation between the manually measured values and the model-calculated values, and the goodness of fit R2 for plant height, width, length, and volume is 0.9956, 0.9654, 0.8670, and 0.9815, respectively.

Fig. 7 Correlation of peanut plants’ geometric traits between manual ground-truth and model calculations. a Fitting results of peanut plants’ height, b fitting results of peanut plants’ width, c fitting results of peanut plants’ length, d fitting results of peanut plants’ volume

Table 5 shows the average accuracy of each evaluation parameter calculated from the 3D reconstruction model at the various growth stages of the peanut plants. As shown in Table 5, the average accuracy of all evaluation parameters gradually increases as the peanut plants grow, and the overall average across the three stages exceeds 90%. The synthetic accuracy \(Acc\) exceeds 92% at all growth stages.

Table 5 Accuracy evaluation results of 3D model of peanut at different growth stages

Discussion

In this section, some interference factors in 3D modeling are analyzed, the influence of parameter settings on the statistical filtering results is discussed, and the PSC 3D modeling method proposed in this paper is compared with the ICP-based modeling method in terms of modeling speed.

Analysis of factors affecting the accuracy of 3D model reconstruction

The depth camera has great application potential for 3D plant reconstruction and the acquisition of phenotypic data. Its advantages include simultaneous acquisition of color and depth information, high accuracy, and low operating cost [21, 40]. Calibrating the depth camera imaging system and obtaining its precise parameters helps improve the accuracy of the 3D reconstruction model. The acquisition of the rotation and translation matrices is key to generating a complete 3D model of the peanut plant, because they reflect the correspondence between the point cloud and the actual spatial location of the target. The depth camera data can be converted to the color camera coordinate system using the rotation and translation matrices. Therefore, in a 3D model reconstruction system with a fixed structure, calibrating each sensor and the relative positions between the sensors is a prerequisite for building an accurate 3D model.

The accuracy of the data in the depth image obtained by the Kinect v2 is not uniform. The error is smallest at the center of the depth image, increases with distance from the center, and is largest at the edges [63]. This characteristic reduces modeling accuracy at the borders when reconstructing the 3D model from the point cloud obtained by the Kinect v2, as shown in Fig. 8, where the 3D model is reconstructed from the point cloud data obtained by the Kinect v2 cameras placed on its left and right sides. Viewed from the front, the point cloud is absent in the middle part of the flower pot 3D model. There are two possible explanations. First, the middle section corresponds to the edges of the point clouds acquired by the left and right Kinect v2 cameras, and the inherent characteristics of the Kinect v2 reduce the measurement accuracy and point cloud quality at the edges. Second, a portion of the discrete edge points is eliminated as noise during the two filtering processes. Although the peanut plant is irregular and its point cloud loss is smaller than that of the flower pot, the reconstruction accuracy of the 3D model in the length direction is still lower than that of the other evaluation parameters, as shown in Table 5.

Fig. 8 Side view of the 3D reconstruction model with the flower pot

Additionally, in some studies, phenotypic parameters calculated from 3D models are compared with manually measured values to ensure that the reconstructed models are evaluated accurately [26, 53]. However, there are certain irregularities and uncertainties in plant growth, and errors may increase regardless of whether traits are measured manually or calculated from a model. Moreover, a plant is a non-rigid structure susceptible to external interference such as wind, which causes sway and affects the evaluation of modeling accuracy. The result is more objective if 3D model accuracy is evaluated with the height, width, length, and volume of the plant together. Thus, evaluating the accuracy of 3D model reconstruction with multiple phenotypic parameters avoids the uncertainty introduced by a single evaluation index.

Influence of parameters setting on the statistical filtering effect

The choice of the number of adjacent points m and the effective point cloud range coefficient k directly affects the result of statistical filtering. Table 6 illustrates the effect of various \(m\) values on the number of filtered points and the 3D modeling accuracy when \(k\) equals 1.0. The data in Table 6 are the averages of the 60 three-dimensional models constructed from all 20 peanut plants at the three growth stages. As illustrated in Table 6, the number of filtered points increases as the m value increases. On the assumption that the three-dimensional structure of the peanut plant is not harmed, the more points filtered out, the more effective the filtering and the faster the post-processing. The highest 3D model accuracy, 92.39%, occurs when the \(m\) value is 35; at this point the number of filtered points is at a medium level, so an \(m\) value of 35 is appropriate.

Table 6 Comparison of filtering effect when \(k\) value is 1.0 and m takes different values

Table 7 shows the results of point cloud filtering when the \(m\) value is 35 and the \(k\) value is varied. The data in Table 7 are the averages of the 60 three-dimensional models constructed from all 20 peanut plants at the three growth stages. It can be seen from Table 7 that with a larger value of \(k\), the filtering effect on outliers decreases; with a smaller value of \(k\), the filtering is stronger and the number of points remaining after filtering decreases. The highest 3D model accuracy, 92.39%, occurs when the \(k\) value is 1.0; at this point the number of filtered points is at a medium level, so a \(k\) value of 1.0 is appropriate. After these tests and analysis, the statistical filtering parameters for peanut plants were set to \(k\) = 1.0 and \(m\) = 35.

Table 7 Comparison of filtering effect when \(m\) value is 35 and \(k\) has different values

Analysis of 3D modeling speed

The accuracy of the PSC 3D model reconstruction method proposed in this paper was evaluated in the "Accuracy evaluation result of 3D model" section. In addition to accuracy, modeling speed is also an important indicator for assessing a modeling method. An experiment comparing the PSC method with the iterative closest point (ICP) algorithm was carried out to verify its modeling speed. Currently, the ICP algorithm is the most widely used method for registering 3D point clouds. The ICP algorithm locates the same target points in two different clouds, calculates their positional relationship, and then splices the point clouds through this relationship to reconstruct the 3D model. Its primary objective is to determine the geometric relationship between corresponding points in two point clouds, so it cannot be applied if the two point clouds have no corresponding points. A comparative test was used to compare the proposed method's modeling speed with that of the ICP algorithm. One peanut plant was randomly selected and placed on a rotating table. The Kinect v2 collected RGB and depth data once every ten degrees of rotation, and the data were numbered Pre1 to Pre36 for a total of 36 captures. For ICP modeling, three point clouds obtained from positions 120° apart formed a group; for example, Group1-ICP includes the point clouds obtained at shooting angles of 0° (Pre1), 120° (Pre13), and 240° (Pre25). For PSC modeling, two point clouds obtained from positions 180° apart formed a group; for example, Group1-PSC includes the point clouds obtained at 0° (Pre1) and 180° (Pre19). In this way the 36 point clouds were combined into 12 groups, each different from the others. The statistical results of the comparative tests are summarized in Table 8.

Table 8 Statistical results of the comparative test of 3D reconstruction of the peanut plant with the ICP and PSC algorithms

As shown in Table 8, the reconstruction time for the 3D model using the ICP algorithm ranges between 4.618 s and 5.953 s, with an average of 5.429 s. Similar results were obtained in the study of Yuan et al. [64], in which four RGB-D cameras were used to collect data and reconstructing a foot 3D model with the ICP algorithm took approximately 5 s. This shows that there is no significant difference in the time cost of scanning the target from three or four angles and reconstructing the 3D model with the ICP algorithm. The ICP algorithm requires that point clouds from different views have overlapping parts, similar to SV and SfM. The greater the overlap between the point clouds, the higher the accuracy of the reconstructed model, but the more time modeling takes. Hu et al. [49] scanned leafy vegetables from 18 views and modeled them with the ICP algorithm; processing the data of one vegetable took at least 3.73 min.

The ICP algorithm is powerless when the point clouds do not overlap. The PSC algorithm proposed in this paper effectively solves this problem and reconstructs the 3D model of a peanut plant in 2.032 s to 2.355 s, with an average of 2.139 s. The accuracy of the models obtained by the ICP algorithm ranges from 89.87 to 98.65%, with an average of 94.82%; the accuracy of the models obtained with the PSC method ranges from 91.89 to 95.60%, with an average of 93.30%. Compared with other plant 3D modeling methods, the modeling accuracy of the proposed method has obvious advantages [17, 21]. The accuracy of the PSC algorithm for 3D model reconstruction is 1.52 percentage points lower than that of the ICP algorithm, but the time consumed for 3D reconstruction is only 39.4% of that of the ICP algorithm. The PSC algorithm can thus reconstruct a 3D model with accuracy close to that of the ICP algorithm at 2.54 times the speed, demonstrating its favorable performance trade-off.
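For reference, the ICP baseline used in the comparison corresponds to PCL's pcl::IterativeClosestPoint. A minimal registration sketch for aligning one overlapping view onto another is shown below; the parameter values are illustrative, not those used in the paper's comparison:

```cpp
#include <pcl/point_types.h>
#include <pcl/registration/icp.h>

using CloudT = pcl::PointCloud<pcl::PointXYZRGB>;

// Register a source view onto a target view with PCL's ICP implementation
// and return the aligned source cloud.
CloudT::Ptr alignWithICP(const CloudT::Ptr& source, const CloudT::Ptr& target) {
    pcl::IterativeClosestPoint<pcl::PointXYZRGB, pcl::PointXYZRGB> icp;
    icp.setInputSource(source);
    icp.setInputTarget(target);
    icp.setMaximumIterations(50);            // iteration cap (illustrative)
    icp.setMaxCorrespondenceDistance(50.0);  // mm, assuming overlapping views

    CloudT::Ptr aligned(new CloudT);
    icp.align(*aligned);                     // requires overlap between the views
    return aligned;
}
```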

Conclusions

In this paper, a 3D model reconstruction method for the peanut plant based on the Kinect v2 was designed. Two Kinect v2 cameras were used to generate a 3D model of the peanut plant through data filtering and coordinate transformation. The experiment was conducted at various stages of peanut growth, and the 3D models were evaluated using the synthetic accuracy based on the height, width, length, and volume of the peanut plant. The experimental results indicate that the accuracy of the peanut plant 3D reconstruction model is 92.37%, 93.30%, and 94.58% at the sprout, seedling, and flowering stages, respectively, and 93.42% averaged over all growth stages. Compared with the ICP method, the proposed method is 2.54 times faster with comparable accuracy. The reconstruction method described in this paper can rapidly and effectively establish a 3D model of the peanut plant and can also meet the modeling requirements of breeding programs for other species. In subsequent research, we will attempt to reconstruct 3D models of multiple peanut plants simultaneously.

Availability of data and materials

All data generated or analyzed during this study are available from the corresponding author on reasonable request.

References

  1. Yuan H, Wang N, Bennett R, Burditt D, Cannon A, Chamberlin K. Development of a ground-based peanut canopy phenotyping system. IFAC-Papers OnLine. 2018;51(17):162–5.


  2. USDA. Peanut Explorer. 2022.

  3. Zhao S, Lü J, Xu X, Lin X, Luiz MR, Qiu S, Ciampitti I, He P. Peanut yield, nutrient uptake and nutrient requirements in different regions of China. J Integr Agric. 2021;20(9):2502–11.


  4. Wang Y, Lyu J, Chen D. Performance assessment of peanut production in China. Acta Agric Scand Sect B Soil Plant Sci. 2022;72(1):176–88.


  5. Eriksson D, Brinch-Pedersen H, Chawade A, Holme IB, Hvoslef-Eide TAK, Ritala A, Teeri TH, Thorstensen T. Scandinavian perspectives on plant gene technology: applications, policies and progress. Physiol Plant. 2018;162(2):219–38.


  6. Halewood M, Chiurugwi T, Sackville Hamilton R, Kurtz B, Marden E, Welch E, Michiels F, Mozafari J, Sabran M, Patron N, Kersey P, Bastow R, Dorius S, Dias S, McCouch S, Powell W. Plant genetic resources for food and agriculture: opportunities and challenges emerging from the science and information technology revolution. New Phytol. 2018;217(4):1407–19.


  7. Mir RR, Reynolds M, Pinto F, Khan MA, Bhat MA. High-throughput phenotyping for crop improvement in the genomics era. Plant Sci. 2019;282:60–72.


  8. Dhondt S, Wuyts N, Inzé D. Cell to whole-plant phenotyping: the best is yet to come. Trends Plant Sci. 2013;18(8):428–39.


  9. Hawkesford M, Lorence A. Plant phenotyping: increasing throughput and precision at multiple scales. Funct Plant Biol. 2017;44(1):v–vii.


  10. Liu F, Hu P, Zheng B, Duan T, Zhu B, Guo Y. A field-based high-throughput method for acquiring canopy architecture using unmanned aerial vehicle images. Agric For Meteorol. 2021;296: 108231.


  11. Li H, Zhang J, Xu K, Jiang X, Zhu Y, Cao W, Ni J. Spectral monitoring of wheat leaf nitrogen content based on canopy structure information compensation. Comput Electron Agric. 2021;190: 106434.


  12. Ma X, Wei B, Guan H, Yu S. A method of calculating phenotypic traits for soybean canopies based on three-dimensional point cloud. Ecol Inform. 2022;68:101524.


  13. Yuan H, Bennett RS, Wang N, Chamberlin KD. Development of a peanut canopy measurement system using a ground-based lidar sensor. Front Plant Sci. 2019;10:203.


  14. Garrido M, Paraforos DS, Reiser D, Arellano MV, Griepentrog HW, Valero C. 3D maize plant reconstruction based on georeferenced overlapping lidar point clouds. Remote Sens. 2015;7(12):17077–96.


  15. Jiang Y, Li C, Paterson AH. High throughput phenotyping of cotton plant height using depth images under field conditions. Comput Electron Agric. 2016;130:57–68.


  16. Guo F, Hou L, Ma C, Li G, Lin R, Zhao Y, Wang X. Comparative transcriptome analysis of the peanut semi-dwarf mutant 1 reveals regulatory mechanism involved in plant height. Gene. 2021;791:145722.


  17. Li J, Tang L. Developing a low-cost 3D plant morphological traits characterization system. Comput Electron Agric. 2017;143:1–13.


  18. Rossi R, Costafreda-Aumedes S, Leolini L, Leolini C, Bindi M, Moriondo M. Implementation of an algorithm for automated phenotyping through plant 3D-modeling: a practical application on the early detection of water stress. Comput Electron Agric. 2022;197:106937.


  19. Wu D, Yu L, Ye J, Zhai R, Duan L, Liu L, Wu N, Geng Z, Fu J, Huang C, Chen S, Liu Q, Yang W. Panicle-3D: a low-cost 3D-modeling method for rice panicles based on deep learning, shape from silhouette, and supervoxel clustering. Crop J. 2022;10(5):1386–98.


  20. Nguyen TT, Slaughter DC, Townsley B, Carriedo L, Maloof JN, Sinha N. Comparison of structure-from-motion and stereo vision techniques for full in-field 3D reconstruction and phenotyping of plants: an investigation in sunflower. 2016 ASABE Annual International Meeting. 2016; 162444593.

  21. Paulus S. Measuring crops in 3D: Using geometry for plant phenotyping. Plant Methods. 2019;15:103.


22. Zermas D, Morellas V, Mulla D, Papanikolopoulos N. 3D model processing for high throughput phenotype extraction—the case of corn. Comput Electron Agric. 2020;172:105047.


  23. Bao Y, Tang L, Breitzman MW, Salas FMG, Schnable PS. Field-based robotic phenotyping of sorghum plant architecture using stereo vision. J F Robot. 2019;36:397–415.


  24. Malambo L, Popescu SC, Murray SC, Putman E, Pugh NA, Horne DW, Richardson G, Sheridan R, Rooney WL, Avant R, Vidrine M, McCutchen B, Baltensperger D, Bishop M. Multitemporal field-based plant height estimation using 3D point clouds generated from small unmanned aerial systems high-resolution imagery. Int J Appl Earth Obs Geoinf. 2018;64:31–42.


  25. Sarkar S, Cazenave AB, Oakes J, McCall D, Thomason W, Abbot L, Balota M. High-throughput measurement of peanut canopy height using digital surface models. Plant Phenome J. 2020;3(1):20003.


  26. Yang Z, Han Y. A low-cost 3D phenotype measurement method of leafy vegetables using video recordings from smartphones. Sensors. 2020;20(21):6068.


  27. Yang J, Wang C, Wang H, Li Q. A RGB-D based real-time multiple object detection and ranging system for autonomous driving. IEEE Sens J. 2020;20(20):11959–66.


  28. Yang J, Zhao Y, Zhu Y, Xu H, Lu W, Meng Q. Blind assessment for stereo images considering binocular characteristics and deep perception map based on deep belief network. Inf Sci. 2019;474:1–17.


  29. Yang J, Xiao S, Li A, Lan G, Wang H. Detecting fake images by identifying potential texture difference. Future Gener Comput Syst. 2021;125:127–35.


  30. Ma Z, Sun D, Xu H, Zhu Y, He Y, Cen H. Optimization of 3d point clouds of oilseed rape plants based on time-of-flight cameras. Sensors. 2021;21(2):664.


  31. Shi Y, Wang N, Taylor RK, Raun WR. Improvement of a ground-LiDAR-based corn plant population and spacing measurement system. Comput Electron Agric. 2015;112:92–101.


  32. Moreno H, Valero C, Bengochea-Guevara JM, Ribeiro Á, Garrido-Izard M, Andújar D. On-ground vineyard reconstruction using a LiDAR-based automated system. Sensors. 2020;20(4):1102.


  33. Zhang Y, Yang Y, Chen C, Zhang K, Jiang H, Cao W, Zhu Y. Modeling leaf color dynamics of winter wheat in relation to growth stages and nitrogen rates. J Integr Agric. 2022;21(1):60–9.


  34. Jiang Y, Li C, Takeda F, Kramer EA, Ashrafi H, Hunter J. 3D point cloud data to quantitatively characterize size and shape of shrub crops. Hort Res. 2019;6:43.


  35. Pueschel P, Newnham G, Hill J. Retrieval of gap fraction and effective plant area index from phase-shift terrestrial laser scans. Remote Sens. 2014;6(3):2601–27.


  36. Grotti M, Calders K, Origo N, Puletti N, Alivernini A, Ferrara C, Chianucci F. An intensity, image-based method to estimate gap fraction, canopy openness and effective leaf area index from phase-shift terrestrial laser scanning. Agric For Meteorol. 2020;280:107766.


  37. Chéné Y, Rousseau D, Lucidarme P, Bertheloot J, Caffier V, Morel P, Belin É, Chapeau-Blondeau F. On the use of depth camera for 3D phenotyping of entire plants. Comput Electron Agric. 2012;82:122–7.


  38. Zhou S, Kang F, Li W, Kan J, Zheng Y. Point cloud registration for agriculture and forestry crops based on calibration balls using Kinect V2. Int J Agric Biol Eng. 2020;13(1):198–205.


  39. Chen Y, Zhang B, Zhou J, Wang K. Real-time 3D unstructured environment reconstruction utilizing VR and Kinect-based immersive teleoperation for agricultural field robots. Comput Electron Agric. 2020;175:105579.


  40. Condotta ICFS, Brown-Brandl TM, Pitla SK, Stinn JP, Silva-Miranda KO. Evaluation of low-cost depth cameras for agricultural applications. Comput Electron Agric. 2020;173:105394.


  41. Gené-Mola J, Llorens J, Rosell-Polo JR, Gregorio E, Arnó J, Solanelles F, Martínez-Casasnovas JA, Escolà A. Assessing the performance of rgb-d sensors for 3d fruit crop canopy characterization under different operating and lighting conditions. Sensors. 2020;20(24):7072.


  42. Moreno H, Rueda-Ayala V, Ribeiro A, Bengochea-Guevara J, Lopez J, Peteinatos G, Valero C, Andújar D. Evaluation of vineyard cropping systems using on-board rgb-depth perception. Sensors. 2020;20(23):6912.


  43. Hui F, Zhu J, Hu P, Meng L, Zhu B, Guo Y, Li B, Ma Y. Image-based dynamic quantification and high-accuracy 3D evaluation of canopy structure of plant populations. Ann Bot. 2018;121(5):1079–88.


  44. Yang W, Feng H, Zhang X, Zhang J, Doonan JH, Batchelor WD, Xiong L, Yan J. Crop phenomics and high-throughput phenotyping: past decades, current challenges, and future perspectives. Mol Plant. 2020;13(2):187–214.


  45. Glenn KC, Alsop B, Bell E, Goley M, Jenkinson J, Liu B, Martin C, Parrottm W, Souder C, Sparks O, Urquhart W, Ward JM, Vicini JL. Bringing new plant varieties to market: plant breeding and selection practices advance beneficial characteristics while minimizing unintended changes. Crop Sci. 2017;57(6):2906–21.


  46. Araus JL, Kefauver SC, Zaman-Allah M, Olsen MS, Cairns JE. Translating high-throughput phenotyping into genetic gain. Trends Plant Sci. 2018;23(5):451–66.


  47. Confalonieri R, Paleari L, Foi M, Movedi E, Vesely FM, Thoelke W, Agape C, Borlini G, Ferri I, Massara F, Motta R, Ravasi RA, Tartarini S, Zoppolato C, Baia LM, Brumana A, Colombo D, Curatolo A, Fauda V, Gaia D, Gerosa A, Ghilardi A, Grassi E, Magarini A, Novelli F, Garcia FBP, Graziosi AR, Salvan M, Tadiello T, Rossini L. PockerPlant 3D: analysing canopy structure using a smartphone. Biosyst Eng. 2017;164:1–12.


  48. Thapa S, Zhu F, Walia H, Yu H, Ge Y. A novel LiDAR-based instrument for high-throughput, 3D measurement of morphological traits in maize and sorghum. Sensors. 2018;18(4):1187.


  49. Hu Y, Wang L, Xiang L, Wu Q, Jiang H. Automatic non-destructive growth measurement of leafy vegetables based on Kinect. Sensors. 2018;18(3):806.


  50. Andújar D, Ribeiro A, Fernández-Quintanilla C, Dorado J. Using depth cameras to extract structural parameters to assess the growth state and yield of cauliflower crops. Comput Electron Agric. 2016;122:67–73.


  51. Sun G, Wang X. Three-dimensional point cloud reconstruction and morphology measurement method for greenhouse plants based on the kinect sensor self-calibration. Agronomy. 2019;9(10):596.


  52. Wang Y, Chen Y, Zhang X, Gong W. Research on measurement method of leaf length and width based on point cloud. Agric. 2021;11(1):63.


  53. Wang Y, Chen Y. Non-destructive measurement of three-dimensional plants based on point cloud. Plants. 2020;9(5):571.


  54. Yao Z, Zhao Q, Li X, Bi Q. Point cloud registration algorithm based on curvature feature similarity. Measurement. 2021;177(11):109274.


  55. Yun D, Kim S, Heo H, Ko KH. Automated registration of multi-view point clouds using sphere targets. Adv Eng Inform. 2015;29(4):930–9.


  56. Zhu T, Ma X, Guan H, Wu X, Wang F, Yang C, Jiang Q. A calculation method of phenotypic traits based on three-dimensional reconstruction of tomato canopy. Comput Electron Agric. 2023;204:107515.


  57. Cheng M, Cai Z, Ning W, Yuan H. System design for peanut canopy height information acquisition based on LiDAR. Transactions Chinese Soc Agric Eng. 2019;35(1):180–7.


  58. Yang L, Zhang L, Dong H, Alelaiwi A, Saddik AE. Evaluating and improving the depth accuracy of kinect for windows v2. IEEE Sens J. 2015;15(8):4275–85.


  59. Staranowicz AN, Brown GR, Morbidi F, Mariottini GL. Practical and accurate calibration of RGB-D cameras using spheres. Comput Vis Image Underst. 2015;137:102–14.


  60. Pagliari D, Pinto L. Calibration of kinect for Xbox one and comparison between the two generations of Microsoft sensors. Sensors. 2015;15(11):27569–89.


  61. Rusu RB, Cousins S. 3D is here: Point cloud library (PCL). IEEE Int Conf Robot Autom. 2011;2011:1–4.


  62. Zhou L, Zhang X, Guan B. A flexible method for multi-view point clouds alignment of small-size object. Measurement. 2014;58:115–29.


  63. Xiao Z, Zhou M, Yuan H, Liu Y, Fan C, Cheng M. Influence analysis of light intensity on Kinect v2 depth measurement accuracy. Transact Chin Socr Agric Machin. 2021;52(S0):108–17.


  64. Yuan M, Li X, Xu J, Jia C, Li X. 3D foot scanning using multiple real sense cameras. Multimedia Tools Appl. 2020;80(15):22773–93.



Acknowledgements

The authors would like to thank the anonymous reviewers and academic editors for their valuable suggestions, which significantly improved the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (Grant No. 32001412).

Author information

Authors and Affiliations

Authors

Contributions

YL, HY, and MC conceived the idea and proposed the method. YL and CF contributed to the preparation of equipment and acquisition of data. YL and CF wrote the code and tested the method. XZ and MC validated the results. YL and HY wrote the paper. HY and MC revised the paper. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Man Cheng.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Liu, Y., Yuan, H., Zhao, X. et al. Fast reconstruction method of three-dimension model based on dual RGB-D cameras for peanut plant. Plant Methods 19, 17 (2023). https://doi.org/10.1186/s13007-023-00998-z

