
Method for accurate multi-growth-stage estimation of fractional vegetation cover using unmanned aerial vehicle remote sensing

Abstract

Background

Fractional vegetation cover (FVC) is an important parameter for evaluating crop-growth status. Optical remote-sensing techniques combined with the pixel dichotomy model (PDM) are widely used to estimate cropland FVC with medium to high spatial resolution on the ground. However, PDM-based FVC estimation is limited by effects stemming from the variation of crop canopy chlorophyll content (CCC). To overcome this difficulty, we propose herein a “fan-shaped method” (FSM) that uses a CCC spectral index (SI) and a vegetation SI to create a two-dimensional scatter map in which the three vertices represent high-CCC vegetation, low-CCC vegetation, and bare soil. The FVC at each pixel is determined based on the spatial location of the pixel in the two-dimensional scatter map, which mitigates the effects of CCC on the PDM. To evaluate the accuracy of FSM estimates of the FVC, we analyze the spectra obtained from (a) the PROSAIL model and (b) a spectrometer mounted on an unmanned aerial vehicle platform. Specifically, we use both the proposed FSM and traditional remote-sensing FVC-estimation methods (both linear and nonlinear regression and PDM) to estimate soybean FVC.

Results

Field soybean CCC measurements indicate that (a) the soybean CCC increases continuously from the flowering growth stage to the later podding growth stage and then decreases through the remaining growth stages, and (b) the coefficient of variation of soybean CCC is very large in the later growth stages (31.58–35.77%) and over all growth stages (26.14%). FVC samples with low CCC are underestimated by the PDM. Linear and nonlinear regression underestimates (overestimates) FVC samples with low (high) CCC. The proposed FSM depends less on CCC and is thus a robust method that can be used for multi-stage FVC estimation of crops with strongly varying CCC.

Conclusions

Estimates and maps of FVC based on the later growth stages and on multiple growth stages should consider the variation of crop CCC. The FSM mitigates the effect of CCC by applying a PDM at each CCC level. The FSM is a robust method that can be used to estimate FVC over multiple growth stages when crop CCC varies greatly.

Background

Fractional vegetation cover (FVC, sometimes referred to as “crop canopy coverage”) is the fraction of green vegetation seen from the nadir of a study area and describes the fraction of vegetation versus soil in an ecosystem [1]. FVC is an important parameter for evaluating crop-growth status and is essential for crop-growth models [2,3,4]. Moreover, long-term FVC estimates are also essential for regional and global environmental monitoring because FVC is an essential indicator of dynamic changes in vegetation [5,6,7,8,9]. Thus, real-time estimates of FVC are of significant importance for both the agricultural and environmental research communities.

Traditionally, photographic techniques have been widely used for measuring farmland FVC. Photographic techniques involve the use of classification techniques (e.g., the threshold method or classification tools) or artificial counting to analyze the FVC based on images of the field canopy [10,11,12,13]. However, such techniques are time and labor intensive and are difficult to exploit for FVC mapping.

Optical remote-sensing techniques collect surface radiation to provide crop-canopy spectral reflectance from visible to short-wave infrared wavelengths [14, 15]. In practice, leaf-pigment content and the leaf-area index (LAI) are the two main variables that determine the crop-canopy spectral reflectance [16,17,18,19,20]. Canopy chlorophyll content (CCC) and LAI govern the spectral reflectance in the visible bands, whereas LAI alone governs the spectral reflectance in the near-infrared (NIR) and short-wave infrared bands [16,17,18,19]. Leaf-chlorophyll absorption causes crop spectral reflectance in the blue and red bands to be less than that in the NIR band [21].

Many remote-sensing spectral indices (SIs) have been developed to quantify vegetation states [22]. A remote-sensing SI combines the vegetation canopy spectral reflectance in two or more bands, and one of the most widely used vegetation SIs is the normalized difference vegetation index (NDVI) [23]. Remote-sensing SIs can mitigate the effects of Sun angle, viewing angle, terrain, and atmospheric perturbations, and are therefore widely used to estimate crop parameters via remote sensing [24,25,26,27,28].

The last decades have seen the development of methods to estimate crop FVC based on remote-sensing images from unmanned aerial vehicle (UAV), aerial, or satellite platforms [5, 29,30,31,32,33]. These methods can be divided into five categories: (i) physical model methods, (ii) semi-empirical methods, (iii) empirical methods, (iv) crop growth methods, and (v) hybrid methods. Physical model methods are founded on physical principles; for example, the PROSAIL method, which is based on the optical properties of leaves and canopy bidirectional reflectance [15, 20, 34]; the four-scale bidirectional reflectance model, which is based on geometrical optics [35]; and the discrete anisotropic radiative transfer model, which is based on ray tracing [36,37,38]. However, many of the parameters required by these models may not be readily available, which limits the application of the models. Semi-empirical methods are often simplified versions of physical models and include the soil line method [39], the pixel dichotomy model (PDM) [40, 41], and the Baret model [29, 32]. The PDM hypothesizes that pixels contain mixed information from soils and crops [SItotal = (1 − FVC) × SIsoil + FVC × SIvegetation], which allows FVC to be calculated [FVC = (SItotal − SIsoil)/(SIvegetation − SIsoil)] [42]. Empirical methods use remote-sensing SIs and regression techniques [e.g., linear and nonlinear (LAN) regression [43], partial least squares regression [44], random forest [45]] to establish an empirical model of FVC. Empirical methods usually provide good accuracy on a regional scale. Crop models were founded on crop-growth theory and provide FVC from sowing to harvest; these include the AquaCrop model [2] and the WOFOST model [3]. In addition to optical remote-sensing techniques, other remote-sensing techniques [e.g., synthetic aperture radar [30, 46]] have also been developed and applied to estimate FVC based on remote sensing. 
Hybrid methods involve the combined use of several of the methods mentioned above; for example, the model of Wang et al. [31, 47] uses crop modeling and remote-sensing-data assimilation. In recent years, the use of convolutional neural networks (CNNs) and images with high ground spatial resolution (GSD) for estimating vegetation cover fractions has developed rapidly [48, 49]. CNN-based studies focus more on visual perception and image segmentation than on analyzing the canopy spectral response to vegetation parameters (e.g., leaf inclination angle, leaf structure, pigments) [50, 51]. Training CNN models requires a large number of samples. Furthermore, CNNs are more suitable for high- and ultra-high-GSD images (e.g., digital images obtained from low-altitude UAVs [48, 49] and satellite-based high-GSD images [52]).

Two reasons explain why the PDM is widely used to estimate cropland FVC from remote-sensing images with medium to high spatial resolution on the ground: (i) the PDM has a clear physical meaning and simple parameter inputs, and (ii) optical remote-sensing images with medium to high spatial resolution on the ground are freely available. The signal captured by each pixel in a remote-sensing image comes from a combination of soil background and vegetation of varying growth status (e.g., CCC, leaf water content, and LAI). In practice, the crop CCC is one of the key variables that determine the vegetation canopy spectral reflectance in the visible bands. For example, a high-CCC vegetation canopy corresponds to a large NDVI, whereas a low-CCC vegetation canopy corresponds to a small NDVI. Thus, using the PDM on crop samples with low CCC may cause the FVC to be underestimated.

This study (i) analyzes how crop CCC affects SI-based estimates of FVC and (ii) estimates FVC for crops with various CCCs. To do this, we propose a fan-shaped method (FSM) that uses the visible and near-infrared angle index (VNAI) as the CCC SI [53] and the NDVI as the vegetation SI to create a two-dimensional (2-D) scatter map in which the three vertices represent high-CCC vegetation, low-CCC vegetation, and soil. The FVC of each mixed pixel is determined based on its spatial location in the 2-D scatter map, which weakens the dependence of the PDM on the CCC.

We use the proposed FSM and two traditional remote-sensing methods for estimating soybean FVC [i.e., (i) LAN regression and (ii) the PDM], applying each method to spectra obtained from (a) the PROSAIL model and (b) a spectrometer mounted on a UAV platform. The results show that the proposed FSM can provide accurate estimates of FVC and may be applied to croplands with highly varying CCC.

Methods

Study site

The study site is situated in Jiaxiang County, Jining City, Shandong province, China (see Fig. 1a, b). Jiaxiang County [Fig. 1b, E: 116°22′10″–116°22′20″, N: 35°25′50″–35°26′10″] has a warm temperate semi-humid continental monsoon climate; the average temperature is 13.9 °C, and the annual rainfall is 701.8 mm. Field experiments were conducted in a soybean field (see Fig. 1c). Soybeans were grown in a loam soil with a row spacing of 15 cm and a planting density of 190,000 plants ha−1. A total of 532 breeding lines were planted. Weed control was implemented manually at early growth stages.

Fig. 1
figure1

Study area and experimental field: a Jining City in Shandong province, China. b Location of Jiaxiang County in Jining City, Shandong province. c Mapping area and ROIs in the experimental field (UAV-RGB image acquired September 17, 2015). Note: ROI stands for “region of interest,” and UAV stands for “unmanned aerial vehicle.”

Measurement of field data

Measurements of field canopy chlorophyll content

The main purpose of field CCC measurements was to analyze the soybean CCC as a function of soybean growth. Soybean leaf chlorophyll in the first and second uppermost leaves was measured in the field by using a Dualex scientific portable sensor (Dualex 4; Force-A; Orsay, France) [54]. Five measurements of each soybean leaf were collected from the center of each soybean plot, and the average was retained as the soybean CCC (see Table 1). Forty-two soybean plots were selected for the field CCC measurements.

Table 1 Results of field measurements of soybean CCC (Dualex units)

A total of 192 sets of soybean CCCs were collected from the soybean field from July 29 to September 28, 2015 (S1 to S5 in Table 1). Table 1 shows the results of the analysis of the CCC datasets. Overall, the average soybean CCC increases continuously from the flowering growth stage to the later-podding growth stage, and then decreases until harvest. The coefficients of variation calculated for the early stages S1–S3 are relatively small, 6.20%–9.01%. In contrast, the coefficients of variation calculated for the later stages S4–S5 are much larger, 31.58–35.77%.

Collection of UAV-based canopy RGB and hyperspectral images

The main purpose of UAV-based canopy digital images and spectral reflectance measurements is to analyze how soybean CCC affects FVC estimates based on remote-sensing images. The UAV flights were conducted during stages S3 and S4 (see Table 1). The hyperspectral and RGB images collected during stages S3 and S4 were used to analyze how CCC affects the soybean canopy spectral reflectance and SIs. The hyperspectral and RGB images collected during stage S4 were used to analyze how CCC affects FVC estimates.

In this work, UAV-based canopy RGB and hyperspectral images were collected from 11:00 a.m. to 2:00 p.m. from the soybean field before the field CCC dataset was collected. A DJI S1000 UAV was used as sensor platform (SZ DJI Technology Co., Ltd., Guangdong, China), on which was mounted a Sony DSC–QX100 digital camera (Sony, Tokyo, Japan) and a Cubert UHD-185 spectrometer (UHD 185, Cubert GmbH, Baden-Württemberg, Germany) to collect field crop-canopy RGB and hyperspectral images. We used a 40 cm × 40 cm whiteboard to calibrate the UHD-185 spectrometer before the UAV took off. The details of the UAV, UHD 185 snapshot hyperspectral sensor, and RGB camera are available in the literature [53, 55, 56].

The locations of ground control points (GCPs) in the experimental field were determined by using a handheld Trimble GeoXT6000 global positioning system receiver. In this work, we used Agisoft PhotoScan (Agisoft LLC, St. Petersburg, Russia) with the soybean canopy digital and hyperspectral images to generate the soybean canopy hyperspectral and RGB digital orthophoto maps (DOMs). After the hyperspectral and RGB images were stitched together, an RGB and a hyperspectral DOM of the experimental field were produced. The methods used to mosaic the hyperspectral and RGB images are available in the literature [56].

Extraction of canopy spectra and fractional vegetation cover

The UAV-based RGB and hyperspectral DOMs were pre-processed by using ENVI software (Exelis Visual Information Solutions, Boulder, CO, USA). A total of 120 regions of interest [ROIs, see Fig. 1c] were manually selected from the canopy image of the S4 stage. The following processing steps were involved:

(1) The UAV-based RGB DOMs were rectified by using the field-measured GCPs in the ENVI software.

(2) Next, the UAV-based hyperspectral DOMs were rectified by using the UAV-based RGB DOM.

(3) The corresponding reflectance data were extracted from the hyperspectral DOMs by using the ENVI ROI tools.

From a UAV flying at an altitude of 50 m, the RGB camera collects high-ground-resolution soybean canopy images (approximately 1.17 cm spatial resolution on the ground). Thus, almost every pixel contains pure leaf or pure background information. The following processing steps were done:

(1) Images of the selected 120 ROIs were classified by using the neural network classification tools in the ENVI software. Three labels were selected: soybean green leaf (soybean1), soybean yellow leaf (soybean2), and soil background;

(2) The numbers of pixels for soybean1 (nsoybean1) and soybean2 (nsoybean2) were counted for each ROI, and the FVC of each ROI was calculated by dividing the sum nsoybean1 + nsoybean2 by the total number of pixels ntotal in the ROI [FVC = (nsoybean1 + nsoybean2)/ntotal].
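Steps (1) and (2) amount to a pixel-count ratio per ROI; a minimal sketch in Python (the class codes and the example array are illustrative, not the study's data):

```python
import numpy as np

def roi_fvc(labels: np.ndarray) -> float:
    """FVC of one classified ROI via the pixel-count ratio
    FVC = (n_soybean1 + n_soybean2) / n_total.

    labels: 2-D array of class codes, here 1 = green leaf (soybean1),
    2 = yellow leaf (soybean2), 0 = soil background.
    """
    n_veg = np.count_nonzero((labels == 1) | (labels == 2))
    return n_veg / labels.size

# Illustrative 4 x 4 ROI: 10 green, 2 yellow, and 4 soil pixels
roi = np.array([[1, 1, 1, 2],
                [1, 1, 1, 2],
                [1, 1, 1, 0],
                [1, 0, 0, 0]])
print(roi_fvc(roi))  # 12/16 = 0.75
```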

This process produced a total of 120 sets of UAV-based canopy hyperspectral reflectance datasets and the corresponding FVC. Table 2 presents the statistical analysis of the FVC data from the 120 selected ROIs.

Table 2 Statistical analysis of FVC from 120 selected ROIs (n = 120, see Fig. 1)

PROSAIL radiation-transfer model

The PROSAIL radiation-transfer model is widely used for analyzing how canopy reflectance is affected by leaf, canopy, and soil properties [14, 34, 57]. This work uses the PROSAIL model to analyze how CCC (Cab: 5:5:50 μg/cm2; see Table 1, minimum: 6.52, maximum: 47.83) and LAI (0.01, 0.5, 1, 1.5, 2, 3, 4, 6, 10) affect the canopy hyperspectral reflectance. The Cab and LAI parameters were varied as indicated, whereas the other parameters were fixed (Table 3).

Table 3 Parameters of PROSPECT and SAIL

Table 3 lists the leaf and canopy parameters used as input for the PROSAIL model. In this work, the PROSAIL-based reference FVC (FVCref) was calculated from the LAI by using the following relation between FVC and LAI [58, 59]:

$${\rm{FV}}{{\rm{C}}_{{\rm{ref}}}} = 1 - {e^{ - G \times \Omega \times \frac{{{\rm{LAI}}}}{{\cos \left( \theta \right)}}}},\;\;G = 0.5,\;\;\Omega = 1,\;\;\theta \; = 0,$$
(1)

where G is the leaf-projection factor for a spherical orientation of the foliage, Ω is the clumping index, LAI is the leaf area index, and θ is the viewing zenith angle. A simulation of the reflectance spectra of the vegetation canopy produced a total of 90 sets of spectra and FVCs (n = nCab × nLAI = 10 × 9 = 90).
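Equation (1) is straightforward to evaluate; a short sketch reproducing it for the LAI levels listed above (parameter defaults follow Eq. (1)):

```python
import numpy as np

def fvc_ref(lai, G=0.5, omega=1.0, theta_deg=0.0):
    """Reference FVC from LAI via Eq. (1): 1 - exp(-G * Omega * LAI / cos(theta))."""
    theta = np.deg2rad(theta_deg)
    return 1.0 - np.exp(-G * omega * np.asarray(lai, dtype=float) / np.cos(theta))

# The nine LAI levels used in the PROSAIL simulations
lai_levels = [0.01, 0.5, 1, 1.5, 2, 3, 4, 6, 10]
print(np.round(fvc_ref(lai_levels), 3))
```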

Traditional remote-sensing method to estimate fractional vegetation cover

Linear and nonlinear regression

Previous studies have developed numerous vegetation SIs to estimate crop FVC. The NDVI is a normalized transformation of the NIR and red band reflectances and is defined as

$${\text{NDVI}} = \frac{{({R_{{\text{NIR}}}} - {R_{\text{R}}})}}{{({R_{{\text{NIR}}}} + {R_{\text{R}}})}},$$
(2)

where \({R_{{\text{NIR}}}}\) and \({R_{\text{R}}}\) are the vegetation canopy reflectances in the NIR and red bands, respectively. NDVI2 and the renormalized difference vegetation index (RDVI) [27] are two optimizations of NDVI. NDVI2 and RDVI are defined as

$${\text{NDV}}{{\text{I}}^2} = {\text{NDVI}} \times {\text{NDVI}},\;$$
(3)
$${\text{RDVI}} = \frac{{\left( {{R_{{\text{NIR}}}} - {R_{\text{R}}}} \right)}}{{{{\left( {{R_{{\text{NIR}}}} + {R_{\text{R}}}} \right)}^{0.5}}}}.$$
(4)

The soil-adjusted vegetation index (SAVI) [60] reduces the soil background effects:

$${\text{SAVI}} = \left( {1 + L} \right)\frac{{\left( {{R_{{\text{NIR}}}} - {R_{\text{R}}}} \right)}}{{\left( {{R_{{\text{NIR}}}} + {R_{\text{R}}} + L} \right)}},\;\;L = 0.5.\;$$
(5)
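Equations (2)–(5) can be computed directly from the band reflectances; a minimal sketch (the reflectance values are illustrative):

```python
import numpy as np

def ndvi(r_nir, r_red):
    """Normalized difference vegetation index, Eq. (2)."""
    return (r_nir - r_red) / (r_nir + r_red)

def rdvi(r_nir, r_red):
    """Renormalized difference vegetation index, Eq. (4)."""
    return (r_nir - r_red) / np.sqrt(r_nir + r_red)

def savi(r_nir, r_red, L=0.5):
    """Soil-adjusted vegetation index, Eq. (5)."""
    return (1 + L) * (r_nir - r_red) / (r_nir + r_red + L)

# Illustrative canopy reflectances (assumed, not measured values)
r_nir, r_red = 0.45, 0.05
print(ndvi(r_nir, r_red))       # approx. 0.8
print(ndvi(r_nir, r_red) ** 2)  # NDVI^2, Eq. (3): approx. 0.64
print(savi(r_nir, r_red))       # approx. 0.6
```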

Many studies use LAN regression [43] to estimate vegetation FVC. These equations are

$${\text{FVC}} = a \times {\text{SI}} + b,$$
(6)
$${\text{FVC}} = a \times {\text{S}}{{\text{I}}^b},$$
(7)

where SI is a vegetation SI, and a and b are two empirical parameters to be obtained from the model calibration. We evaluate herein the results when using both the linear Eq. (6) and the exponential Eq. (7) to estimate vegetation FVC, but only the best FVC estimates (with the highest coefficient of determination, R2) are considered as LAN-based results.
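A sketch of the LAN procedure under the stated rule (fit both Eq. (6) and Eq. (7) and keep the fit with the higher R2); the power law is fitted in log–log space, and the synthetic calibration data are illustrative:

```python
import numpy as np

def fit_lan(si, fvc):
    """Fit the linear model FVC = a*SI + b (Eq. 6) and the power model
    FVC = a*SI**b (Eq. 7); return whichever has the higher R^2."""
    si, fvc = np.asarray(si, float), np.asarray(fvc, float)

    def r2(pred):
        return 1 - np.sum((fvc - pred) ** 2) / np.sum((fvc - fvc.mean()) ** 2)

    a1, b1 = np.polyfit(si, fvc, 1)            # linear fit, Eq. (6)
    lin_r2 = r2(a1 * si + b1)

    # Power fit, Eq. (7), in log-log space (requires positive SI and FVC)
    b2, log_a2 = np.polyfit(np.log(si), np.log(fvc), 1)
    a2 = np.exp(log_a2)
    pow_r2 = r2(a2 * si ** b2)

    if lin_r2 >= pow_r2:
        return "linear", a1, b1, lin_r2
    return "power", a2, b2, pow_r2

# Synthetic calibration data generated from a power law of NDVI
ndvi = np.linspace(0.2, 0.9, 20)
fvc = 1.1 * ndvi ** 1.8
print(fit_lan(ndvi, fvc)[0])  # prints "power"
```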

Pixel dichotomy model

In the theory of linear spectral mixture analysis, the spectral element recorded in a mixed pixel combines the endmember spectra and their proportion. If a mixed pixel combines vegetation canopy and soil, the reflectance of band i can be expressed as

$${R_i} = {R_{i,{\text{veg}}}} \times {\text{FVC}} + {R_{i,{\text{\;soil}}}} \times \left( {1 - {\text{FVC}}} \right),$$
(8)

where i is the band number, Ri is the reflectance in band i, and Ri,veg and Ri,soil are the reflectances in band i from pure vegetation and pure soil, respectively. Similarly, the NDVI of a mixed pixel can be expressed as [40, 41]

$${\text{NDV}}{{\text{I}}_0} = {\text{NDV}}{{\text{I}}_{{\text{veg}}}} \times {\text{FVC}} + {\text{NDV}}{{\text{I}}_{{\text{soil}}}} \times \left( {1 - {\text{FVC}}} \right),$$
(9)

where \({\text{NDV}}{{\text{I}}_0}\) is the NDVI for the mixed reflectance spectrum, and NDVIveg and NDVIsoil are the NDVI of pure vegetation and pure soil, respectively. FVC is then calculated as

$${\text{FVC}} = \frac{{{\text{NDV}}{{\text{I}}_0} - {\text{NDV}}{{\text{I}}_{{\text{soil}}}}}}{{{\text{NDV}}{{\text{I}}_{{\text{veg}}}} - {\text{NDV}}{{\text{I}}_{{\text{soil}}}}}},$$
(10)

where NDVIveg, NDVIsoil, and NDVI0 are as defined in Eq. (9).
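Equation (10) inverts in one line; a minimal sketch with clipping to the physical range (the endmember NDVI values are assumptions, not the study's calibration):

```python
import numpy as np

def pdm_fvc(ndvi_mixed, ndvi_soil, ndvi_veg):
    """Pixel dichotomy model, Eq. (10), clipped to the physical range [0, 1]."""
    fvc = (np.asarray(ndvi_mixed, float) - ndvi_soil) / (ndvi_veg - ndvi_soil)
    return np.clip(fvc, 0.0, 1.0)

# Illustrative endmembers (assumed values, not the study's calibration)
print(pdm_fvc(0.55, ndvi_soil=0.15, ndvi_veg=0.85))  # (0.55-0.15)/0.7, approx. 0.571
```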

Proposed fan-shaped method

Visible and near-infrared angle index

We use a CCC SI to improve the FVC estimates based on the NDVI and PDM. The VNAI is a broadband optical CCC SI that uses the red, green, blue, and NIR bands (Fig. 2). As shown in Fig. 2(b), α is the angle enclosed by the rays G-B and G-R, β is the angle enclosed by the rays G-B and G-NIR, and the VNAI is defined as the sum of the two angles (VNAI = α + β) [53]. Yue [53] shows that the VNAI can accurately estimate the CCC by using broadband remote-sensing reflectance as input.

Fig. 2
figure2

Calculation of angles α and β

Figure 2b and c show the method used to calculate the angles α and β. The result is

$${\text{angle}} = 180 - \arctan \left( {\frac{{{y_1}}}{{{x_1}}}} \right) + \arctan \left( {\frac{{{y_2}}}{{{x_2}}}} \right),$$
(11)

Mathematically, the angles can be calculated by using

$$\begin{gathered} \;\alpha = 180 - {\text{arctan}}\left( {\frac{{{R_{\text{G}}} - {R_{\text{B}}}}}{{{\text{wavelength}}\left( {{\text{G}} - {\text{B}}} \right)}}} \right) + {\text{arctan}}\left( {\frac{{{R_{\text{R}}} - {R_{\text{G}}}}}{{{\text{wavelength}}\left( {{\text{R}} - {\text{G}}} \right)}}} \right), \hfill \\ \;\beta = 180 - {\text{arctan}}\left( {\frac{{{R_{\text{G}}} - {R_{\text{B}}}}}{{{\text{wavelength}}\left( {{\text{G}} - {\text{B}}} \right)}}} \right) + {\text{arctan}}\left( {\frac{{{R_{{\text{NIR}}}} - {R_{\text{G}}}}}{{{\text{wavelength}}\left( {{\text{NIR}} - {\text{G}}} \right)}}} \right), \hfill \\ {\text{VNAI}} = \alpha + \beta , \hfill \\ \end{gathered}$$
(12)

where RB, RG, RR, and RNIR are the spectral reflectance of the blue (492.4 nm), green (559.8 nm), red (664.6 nm), and NIR (832.8 nm) bands, respectively. The quantities (G–B) = (559.8–492.4)/2500 = 0.027, (R–G) = (664.6–559.8)/2500 = 0.0419, and (NIR–G) = (832.8–559.8)/2500 = 0.1092 represent the normalized distance (in wavelengths) covered by bands (i) G and B, (ii) R and G, and (iii) bands G and NIR, respectively. Note the ranges of \({\text{arctan}}\left( {\frac{{{R_{\text{G}}} - {R_{\text{B}}}}}{{{\text{wavelength}}\left( {{\text{G}} - {\text{B}}} \right)}}} \right)\), \({\text{arctan}}\left( {\frac{{{R_{\text{R}}} - {R_{\text{G}}}}}{{{\text{wavelength}}\left( {{\text{R}} - {\text{G}}} \right)}}} \right)\), \({\text{arctan}}\left( {\frac{{{R_{\text{G}}} - {R_{\text{B}}}}}{{{\text{wavelength}}\left( {{\text{G}} - {\text{B}}} \right)}}} \right)\), and \({\text{arctan}}\left( {\frac{{{R_{{\text{NIR}}}} - {R_{\text{G}}}}}{{{\text{wavelength}}\left( {{\text{NIR}} - {\text{G}}} \right)}}} \right)\) belong to (− 90°, 90°).
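Equation (12) can be sketched as follows (the reflectance values in the example are assumed; the normalized band distances are those given above):

```python
import numpy as np

# Normalized band distances from the text: (wavelength difference) / 2500 nm
D_GB, D_RG, D_NIRG = 0.027, 0.0419, 0.1092

def vnai(r_b, r_g, r_r, r_nir):
    """Visible and near-infrared angle index, Eq. (12), with arctan in degrees."""
    atan = lambda slope: np.degrees(np.arctan(slope))
    alpha = 180 - atan((r_g - r_b) / D_GB) + atan((r_r - r_g) / D_RG)
    beta = 180 - atan((r_g - r_b) / D_GB) + atan((r_nir - r_g) / D_NIRG)
    return alpha + beta

# A flat spectrum gives alpha = beta = 180, i.e., VNAI = 360
print(vnai(0.2, 0.2, 0.2, 0.2))  # 360.0
# Illustrative green-canopy reflectances (assumed values)
print(round(vnai(0.03, 0.08, 0.04, 0.45), 1))
```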

Visible and near-infrared angle index, spectral index, fan-shaped method

We use the PROSAIL-based NDVI and VNAI to create a 2-D scatter map. As shown in Fig. 3(a, b), the optical SIs for vegetation decrease with decreasing CCC. Figure 3(a) shows the 2-D scatter map for samples with medium CCC (Cab = 20–35 μg/cm2) and different FVC (i.e., different LAI). Figure 3(b) shows the 2-D scatter map for datasets (Cab = 5–50 μg/cm2) containing low-, medium-, and high-CCC samples and different FVC (i.e., different LAI). As shown in Fig. 3(b)–(c), the proposed FSM uses the VNAI and NDVI to create a 2-D scatter map in which the three vertices represent high-CCC vegetation, low-CCC vegetation, and soil.

Fig. 3
figure3

Theory for quantifying the fraction of high-CCC vegetation, low-CCC vegetation, and soil based on VNAI and NDVI: a PROSAIL-based NDVI as a function of VNAI (Cab = 20–35 μg/cm2). b PROSAIL-based NDVI as a function of VNAI (Cab = 5–50 μg/cm2). c Quantifying FVC using plot of NDVI vs. VNAI. Note: NDVI1, NDVI2, and NDVI3 are the NDVI values for low-CCC vegetation, bare soil, and high-CCC vegetation, respectively; VNAI1, VNAI2, and VNAI3 are the VNAI values for low-CCC vegetation, bare soil, and high-CCC vegetation, respectively; point (VNAI0, NDVI0) represents a mixed pixel on the VNAI–NDVI 2-D scatter map, NDVI is the normalized difference vegetation index, and VNAI is the visible and near-infrared angle index

The FVC of each mixed pixel can be calculated as follows based on its location in the VNAI–NDVI fan-shaped 2-D scatter map (Fig. 3c):

$${\text{FVC}} = \frac{{L_0}}{r}$$
(13)

where r is the radius of the fan-shaped geometric figure and L0 is the distance from the point (VNAI0, NDVI0) to the bare-soil point (VNAI2, NDVI2). Because the VNAI–NDVI 2-D scatter map is fan-shaped, the distance from the bare-soil point to the low-CCC-vegetation point equals the distance from the bare-soil point to the high-CCC-vegetation point; both equal the radius of the fan. Thus,

$$r = \sqrt {{{\left( {k \times {\text{VNA}}{{\text{I}}_3} - k \times {\text{VNA}}{{\text{I}}_2}} \right)}^2} + {{\left( {{\text{NDV}}{{\text{I}}_3} - {\text{NDV}}{{\text{I}}_2}} \right)}^2}} = \sqrt {{{(k \times {\text{VNA}}{{\text{I}}_2} - k \times {\text{VNA}}{{\text{I}}_1})}^2} + {{\left( {{\text{NDV}}{{\text{I}}_2} - {\text{NDV}}{{\text{I}}_1}} \right)}^2}} ,$$
(14)

where NDVI1, NDVI2, and NDVI3 are the NDVI values for low-CCC vegetation, bare soil, and high-CCC vegetation, respectively; VNAI1, VNAI2, and VNAI3 are the VNAI values for low-CCC vegetation, bare soil, and high-CCC vegetation, respectively; and the parameter k > 0 is the normalized distance from the VNAI to the NDVI. Thus, k2 is given by

$${k^2} = \frac{{{{\left( {{\text{NDV}}{{\text{I}}_2} - {\text{NDV}}{{\text{I}}_1}} \right)}^2} - {{\left( {{\text{NDV}}{{\text{I}}_3} - {\text{NDV}}{{\text{I}}_2}} \right)}^2}}}{{{{\left( {{\text{VNA}}{{\text{I}}_3} - {\text{VNA}}{{\text{I}}_2}} \right)}^2} - {{\left( {{\text{VNA}}{{\text{I}}_2} - {\text{VNA}}{{\text{I}}_1}} \right)}^2}}}$$
(15)

The FVC is then given by

$${\text{FVC}} = \frac{{L_0}}{r} = \frac{{\sqrt {{{\left( {k \times {\text{VNA}}{{\text{I}}_0} - k \times {\text{VNA}}{{\text{I}}_2}} \right)}^2} + {{\left( {{\text{NDV}}{{\text{I}}_0} - {\text{NDV}}{{\text{I}}_2}} \right)}^2}} }}{{\sqrt {{{(k \times {\text{VNA}}{{\text{I}}_3} - k \times {\text{VNA}}{{\text{I}}_2})}^2} + {{\left( {{\text{NDV}}{{\text{I}}_3} - {\text{NDV}}{{\text{I}}_2}} \right)}^2}} }},{\text{\;}}$$
(16)
$${\text{FVC}} = \frac{{\sqrt {{k^2}{{\left( {{\text{VNA}}{{\text{I}}_0} - {\text{VNA}}{{\text{I}}_2}} \right)}^2} + {{\left( {{\text{NDV}}{{\text{I}}_0} - {\text{NDV}}{{\text{I}}_2}} \right)}^2}} }}{{\sqrt {{k^2}{{({\text{VNA}}{{\text{I}}_3} - {\text{VNA}}{{\text{I}}_2})}^2} + {{\left( {{\text{NDV}}{{\text{I}}_3} - {\text{NDV}}{{\text{I}}_2}} \right)}^2}} }},$$
(17)

where NDVI0 and VNAI0 are the NDVI and VNAI value of a mixed pixel, respectively.
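Equations (15) and (17) can be combined into a single function; a minimal sketch (the three vertex coordinates are illustrative assumptions chosen so that k2 > 0, not the study's calibration):

```python
import numpy as np

def fsm_fvc(vnai0, ndvi0, low, soil, high):
    """Fan-shaped method: k^2 from Eq. (15), FVC from Eq. (17).

    low, soil, high: (VNAI, NDVI) coordinates of the low-CCC-vegetation,
    bare-soil, and high-CCC-vegetation vertices.
    """
    v1, n1 = low
    v2, n2 = soil
    v3, n3 = high
    # Eq. (15): scale factor that makes both fan radii equal
    k2 = ((n2 - n1) ** 2 - (n3 - n2) ** 2) / ((v3 - v2) ** 2 - (v2 - v1) ** 2)
    # Eq. (17): distance from the soil vertex, normalized by the radius r
    l0 = np.sqrt(k2 * (vnai0 - v2) ** 2 + (ndvi0 - n2) ** 2)
    r = np.sqrt(k2 * (v3 - v2) ** 2 + (n3 - n2) ** 2)
    return min(l0 / r, 1.0)

# Illustrative vertices (assumed values satisfying the equal-radius condition)
low, soil, high = (320.0, 0.57), (200.0, 0.17), (280.0, 0.92)

print(fsm_fvc(280.0, 0.92, low, soil, high))  # high-CCC vertex -> 1.0
print(fsm_fvc(200.0, 0.17, low, soil, high))  # soil vertex -> 0.0
```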

Results and discussion

Response of vegetation canopy reflectance spectra and spectral indices to canopy chlorophyll content and leaf-area index

Response of canopy hyperspectral reflectance spectra and NDVI to canopy chlorophyll content and fractional vegetation cover

Figure 4 shows how vegetation canopy reflectance spectra and SIs depend on CCC (using Cab) and FVC (using LAI). As shown in Fig. 4(a–c), CCC affects the vegetation canopy reflectance spectra mainly in the visible and NIR bands (Fig. 4a, b). The canopy hyperspectral reflectance of high-CCC vegetation is less than that of low-CCC vegetation, and the NDVI of high-CCC vegetation exceeds that of low-CCC vegetation. The results shown in Fig. 4(d–f) also show that the NDVI of high-CCC vegetation exceeds that of low-CCC vegetation. Thus, the accuracy of multi-stage, SI-based FVC estimates is limited by variations in crop CCC (see coefficient of variation of CCC in Table 1).

Figure 5 shows how the UAV-based NDVI depends on CCC. FVC ≈ 1 for the six selected plots in both growth stages; the NDVI values of the six plots in stage S3 are similar (from 0.86 to 0.89, see Fig. 5). However, the NDVI values of the same six plots in stage S4 differ significantly (from 0.56 to 0.83, see Fig. 5). Thus, the accuracy of multi-stage FVC estimation is limited by the variation of crop CCC (see Fig. 5).

Fig. 4
figure4

a, b, d, e Reflectance spectra of vegetation canopy and associated NDVI as a function of c LAI and f Cab. Note: Cab is the chlorophyll a and b content, LAI is the leaf-area index.

Current methods for broadband remote-sensing FVC estimation are thus limited by vegetation CCC, principally because the optical SIs for pure crop canopies differ among growth stages. Many studies have concluded that optical SIs are lower for low-CCC vegetation canopies than for high-CCC vegetation canopies, owing to differences in visible-band spectral reflectance [16,17,18].

However, methods to reduce the effect of CCC on FVC estimation remain under-developed. In practice, the coefficient of variation of soybean CCC is very large in the later growth stages (31.58–35.77%, Table 1), which in turn leads to lower optical SIs for low-CCC vegetation canopies than for high-CCC vegetation canopies. For example, the NDVI is high for high-CCC soybean (about 0.86–0.89, see Figs. 4 and 5), whereas the NDVI for low-CCC soybean is low (about 0.56, see Figs. 4 and 5). Thus, the LAN- and PDM-based methods may produce inaccurate estimates of FVC in the later growth stages, and FVC estimates based on data gathered over the long term depend strongly on the vegetation CCC.

Fig. 5
figure5

Dependence of hyperspectral images and RGB images (S3 and S4) on CCC. Note: UAV-based hyperspectral images are false-color images: R, G, B = 834, 662, and 558 nm, respectively. DOM stands for “digital orthophoto map.”

How canopy chlorophyll content and fractional vegetation cover affect spectral indices as a function of VNAI

Figure 6 shows how CCC (using VNAI) and FVC (using NDVI) affect PROSAIL-based SIs as functions of VNAI. The VNAI–NDVI, VNAI–NDVI2, VNAI–RDVI, and VNAI–SAVI 2-D scatter maps are all similar: each forms a fan-shaped 2-D scatter map in which the three vertices represent high-CCC vegetation, low-CCC vegetation, and soil (see Fig. 3). The PROSAIL-based VNAI–SI 2-D scatter maps support our approach for quantifying the fractions of high-CCC vegetation, low-CCC vegetation, and soil based on a CCC SI and a vegetation SI. Figure 7 shows how CCC and LAI affect the UAV-based SI vs. VNAI scatter maps. The UAV-based VNAI–NDVI, VNAI–NDVI2, VNAI–RDVI, and VNAI–SAVI 2-D scatter maps are all similar to the corresponding PROSAIL-based simulations.

Fig. 6
figure6

PROSAIL-based VNAI–SIs 2-D scatter maps. a VNAI–NDVI, b VNAI–NDVI2, c VNAI–RDVI, d VNAI–SAVI. SI spectral index, RDVI renormalized difference vegetation index, SAVI  soil-adjusted vegetation index

Fig. 7
figure7

UAV-based VNAI–SI 2-D scatter maps for stage S4. a VNAI–NDVI, b VNAI–NDVI2, c VNAI–RDVI, d VNAI–SAVI. Note: Red dots represent the 120 ROIs from Fig. 1c

Estimating and mapping fractional vegetation cover

Using LAN, PDM, and FSM to estimate fractional vegetation cover

Figure 8 shows the reference FVC (FVCref) and the FVC estimated by using the LAN, PDM, and FSM methods and the SIs NDVI, NDVI2, RDVI, and SAVI. The accuracy of the FVC estimated by the various methods and SIs is listed in Table 4. The results suggest that the accuracy of FVC estimates made by the LAN and PDM methods may be limited by variations in crop CCC. For example, given low CCC, FVC is underestimated by the LAN and PDM methods. In some extreme cases, the NDVI-based LAN method classifies vegetation with 100% cover as having 50% cover (see Fig. 8). The most accurate FVC estimates are obtained by using the SAVI and the proposed FSM (see Fig. 8 and Table 4, R2 = 0.99, root-mean-square error (RMSE) = 0.03).

Fig. 8
figure8

Reference FVC (FVCref) and FVC estimated by using different SIs and methods (PROSAIL-based dataset). LAN linear and nonlinear regression, PDM pixel dichotomy model, FSM fan-shaped method. FVCref is the reference FVC calculated from Eq. (1)); FVCNDVI, FVCNDVI2, FVCRDVI2, and FVCSAVI are FVC estimates based on a NDVI, NDVI2, RDVI, SAVI, and b LAN, PDM, and FSM

Table 4 Results of FVC estimates produced by various methods and based on various SIs (PROSAIL-based dataset)

The proposed FSM reduces the effect of the CCC by applying a PDM at each level of CCC (Fig. 3). For example (see Fig. 3b), NDVI = 0.92 for high-CCC (Cab = 50) vegetation and NDVI = 0.57 for low-CCC (Cab = 5) vegetation, whereas NDVI = 0.17 for soil [FVC(Cab = 5) = (NDVItotal − NDVIsoil)/(NDVIvegetation(Cab = 5) − NDVIsoil); FVC(Cab = 50) = (NDVItotal − NDVIsoil)/(NDVIvegetation(Cab = 50) − NDVIsoil)]. The results of our PROSAIL-based estimates of the FVC (Fig. 8; Table 4) indicate that the proposed FSM is robust and can be used for estimating FVC over multiple stages.
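This endmember effect can be checked numerically with the NDVI values quoted above (a pure low-CCC canopy run through a PDM calibrated on the high-CCC endmember):

```python
# NDVI values quoted from the PROSAIL example above
ndvi_soil, ndvi_high, ndvi_low = 0.17, 0.92, 0.57

# A fully covered low-CCC canopy observed as NDVI0 = 0.57:
fvc_wrong = (ndvi_low - ndvi_soil) / (ndvi_high - ndvi_soil)  # global high-CCC endmember
fvc_right = (ndvi_low - ndvi_soil) / (ndvi_low - ndvi_soil)   # CCC-matched endmember

print(round(fvc_wrong, 2))  # 0.53 -> 100% cover read as ~53%
print(fvc_right)            # 1.0
```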

Figure 9 shows the FVC measured and estimated by using different methods (LAN, PDM, and FSM) and SIs (NDVI, NDVI2, RDVI, and SAVI). The accuracy of the FVC estimates produced by the different methods and SIs is listed in Table 5. The accuracy of FVC estimates based on the UAV-based dataset is similar to that of estimates calculated from the PROSAIL-based dataset (Fig. 8). FVC is underestimated by LAN and PDM when CCC is low. In some extreme cases, the PDM classifies vegetation with 100% cover as having 40% cover (see Fig. 9). The most accurate FVC estimates are obtained by using the FSM (see Fig. 9; Table 5, total: R2 = 0.75–0.86, RMSE = 0.10–0.14). Thus, the results of the UAV-based FVC estimates (see Fig. 9; Table 5) indicate that the proposed FSM is a robust method that can be used to estimate FVC over multiple stages.

Fig. 9

FVC measured and estimated by using different SIs and methods (UAV-based dataset). Note: FVCref is the reference FVC from UAV-image classification; FVCNDVI, FVCNDVI2, FVCRDVI, and FVCSAVI are the FVC estimates based on a NDVI, NDVI2, RDVI, and SAVI and b LAN, PDM, and FSM

Table 5 Results of FVC estimates produced by various methods and based on various SIs (UAV-based dataset)

We evaluated the accuracy of FVC estimates based on the PROSAIL-based dataset, the image-based dataset, and the proposed FSM. Compared with LAN and PDM, the results indicate that the FSM is a robust method that reduces the influence of crop CCC and thereby provides the most accurate estimates of FVC. As shown in Figs. 8 and 9, the FVC of samples with low CCC is underestimated by the PDM, and the LAN method underestimates the FVC of low-CCC samples while overestimating that of high-CCC samples. By using a CCC SI, the proposed FSM mitigates the effect of CCC and thereby improves FVC estimates. Considering that the variation of CCC is one of the most influential factors in the PDM (Figs. 4, 5), the proposed FSM offers the advantage of accurately estimating FVC over multiple growth stages.
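The accuracy statistics used throughout this comparison (R2 and RMSE) can be reproduced with a short helper. This is a generic sketch, not the authors' evaluation code, and the sample arrays below are invented for illustration:

```python
import numpy as np

def r2_rmse(fvc_ref, fvc_est):
    """Coefficient of determination (R^2) and root-mean-square error."""
    ref = np.asarray(fvc_ref, dtype=float)
    est = np.asarray(fvc_est, dtype=float)
    resid = ref - est
    ss_res = float(np.sum(resid ** 2))                # residual sum of squares
    ss_tot = float(np.sum((ref - ref.mean()) ** 2))   # total sum of squares
    return 1.0 - ss_res / ss_tot, float(np.sqrt(np.mean(resid ** 2)))

# Invented example mimicking a systematic underestimate of high-FVC samples.
ref = np.array([0.2, 0.4, 0.6, 0.8, 1.0])
est = np.array([0.2, 0.4, 0.55, 0.7, 0.85])
r2, rmse = r2_rmse(ref, est)
```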

Mapping fractional vegetation cover using image-based dataset

The FVC maps were calculated by using (i) the NDVI based on UAV hyperspectral images and (ii) the LAN, PDM, and FSM methods. Figure 10a–d shows the UAV RGB DOM, the LAN-FVC map, the PDM-FVC map, and the FSM-FVC map. As shown in Fig. 10, three plots were selected for performance evaluation. Similar to the PROSAIL-based results shown in Fig. 4, the LAN, PDM, and FSM estimates of FVC for plot 1 (high-CCC plot: green leaves) are similar; however, the FVC of plots 2 and 3 (low-CCC plots: yellow leaves) is underestimated by LAN and PDM. Thus, the results shown in Fig. 10 suggest that the FVC map calculated by the proposed FSM is the most accurate.
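Per-pixel FVC mapping of this kind can be sketched as follows. The band arithmetic is standard; the endmember NDVI values are placeholders taken from the PROSAIL example earlier in this section, not calibrated values, and the tiny rasters are invented for illustration:

```python
import numpy as np

def ndvi_map(nir, red, eps=1e-10):
    """NDVI from NIR and red reflectance rasters of the same shape."""
    return (nir - red) / (nir + red + eps)

def pdm_fvc_map(ndvi, ndvi_soil=0.17, ndvi_veg=0.92):
    """PDM-based FVC map, clipped to the physically valid range [0, 1]."""
    return np.clip((ndvi - ndvi_soil) / (ndvi_veg - ndvi_soil), 0.0, 1.0)

# Invented 2 x 2 reflectance rasters standing in for image bands.
red = np.array([[0.05, 0.20], [0.10, 0.04]])
nir = np.array([[0.45, 0.25], [0.30, 0.50]])
fvc = pdm_fvc_map(ndvi_map(nir, red))
```

In practice the same functions would be applied band-wise to the full UAV orthomosaic, and the FSM variant would replace the fixed `ndvi_veg` with a CCC-matched endmember.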

Fig. 10

FVC mapping. Note: Plot 1: high-CCC and high FVC; Plots 2 and 3: low-CCC and high FVC

In this work, soybean hyperspectral images acquired at a later growth stage were used for validation. These images cover low-, medium-, and high-CCC soybean plots (Fig. 10). The CCC and LAI are the dominant factors affecting the visible bands of vegetation canopy spectra. Consequently, when using the PROSAIL-based dataset, the estimation of FVC at early growth stages (CCC of about 20–35 μg/cm2; see Fig. 3a, red fan shape) is similar to that for medium-CCC samples at later growth stages (see Fig. 3b, red fan shape). However, no field FVC measurements or real canopy spectra from early stages were tested in this study, so further quantitative validation at field sites is necessary.

The most significant advantage of the FSM is that it can be used to estimate and map crop FVC under various CCC conditions. As shown in Table 1, the coefficient of variation of soybean CCC is small during the early growth stages but increases significantly in the later growth stages. Thus, to estimate and map FVC in the later growth stages and over multiple growth stages, one should consider the variation of crop CCC (Figs. 5, 6, 7). The proposed FSM calculates FVC at different CCC levels, which may help to provide FVC estimates that are more robust than those provided by the PDM and LAN. However, as with any method, the FSM has shortcomings, the most obvious of which is that it requires an additional parameter [k2; see Eqs. (13)–(16)] to normalize the distance from the VNAI to the NDVI, which may limit its application. As shown in Eq. (15), the parameter k2 depends on the NDVI and VNAI of soil, high-CCC vegetation, and low-CCC vegetation. In practice, however, the NDVI and VNAI values of different soybean cultivars differ owing to leaf and canopy parameters (leaf structure, carotenoid content, etc.; see Table 3). Thus, k2 must be calibrated for practical applications of the FSM [see Eq. (15)].
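The fan-shaped construction can be sketched conceptually as follows. This is not a reproduction of Eqs. (13)–(16): the interpolation weight `w` and the way `k2` is applied here are simplified stand-ins, and the (NDVI, VNAI) vertex coordinates are hypothetical values chosen only so the example runs:

```python
def fsm_fvc(ndvi, vnai, soil, veg_low, veg_high, k2=1.0):
    """Conceptual fan-shaped method. soil, veg_low, and veg_high are
    (NDVI, VNAI) vertices of the fan-shaped 2-D scatter."""
    # Infer the pixel's CCC level from its VNAI position; k2 normalizes
    # the VNAI distance against the NDVI axis (a simplified stand-in
    # for the calibration in Eq. (15)).
    w = k2 * (vnai - veg_low[1]) / (veg_high[1] - veg_low[1])
    w = min(max(w, 0.0), 1.0)
    # Vegetation endmember NDVI matched to the inferred CCC level.
    ndvi_veg = (1.0 - w) * veg_low[0] + w * veg_high[0]
    # PDM applied at that CCC level.
    fvc = (ndvi - soil[0]) / (ndvi_veg - soil[0])
    return min(max(fvc, 0.0), 1.0)

# Hypothetical vertices: (NDVI, VNAI) for soil, low-CCC, and high-CCC vegetation.
SOIL, VEG_LOW, VEG_HIGH = (0.17, 0.05), (0.57, 0.20), (0.92, 1.00)
```

With these vertices, a fully covered canopy maps to FVC = 1 at both the low-CCC and high-CCC corners of the fan, which is precisely the behavior a single fixed-endmember PDM cannot deliver.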

The strategy of incorporating a CCC SI into the FSM could also be applied to satellite multispectral remote-sensing imagery, but the feasibility of FVC estimation based on long-term satellite multispectral image series remains to be validated. In this work, the proposed FSM was validated by using only PROSAIL-based simulations and a UAV-based soybean canopy spectral image from a single site. Thus, further validation with additional crops and study sites is required.

Conclusions

To estimate FVC, we propose herein a FSM that mitigates the effect of CCC by applying a PDM at each CCC level. The FSM is a robust method that can be used to estimate FVC over multiple growth stages during which crop CCC varies strongly. The results lead to the following two conclusions:

  1. Estimates and maps of FVC based on the later growth stages and on multiple growth stages should consider the variation of crop CCC. Field soybean CCC measurements (Table 1) indicate that (a) the soybean CCC increases continuously from the flowering growth stage to the later podding growth stage and then decreases over the remaining growth stages, and (b) the coefficient of variation of soybean CCC is very large in the later growth stages (31.58–35.77%) and over all growth stages (26.14%). As a result, the PDM underestimates the FVC of samples with low CCC, while the LAN method underestimates the FVC of low-CCC samples and overestimates the FVC of high-CCC samples.

  2. The proposed FSM provides accurate FVC estimates based on data from multiple growth stages and can be applied to croplands with significant variation in crop CCC. The FSM mitigates the influence of CCC by applying a PDM at each CCC level, making it a robust method for multi-stage FVC estimation in situations involving strong variations in crop CCC. The improved FVC estimates are validated by both the PROSAIL-based and image-based datasets.

Availability of data and materials

The datasets used and/or analyzed during the current study are available from the corresponding author on request.

Abbreviations

2-D: Two-dimensional

CCC: Canopy chlorophyll content

CNN: Convolutional neural network

DOM: Digital orthophoto map

FSM: Fan-shaped method

FVC: Fractional vegetation cover

FVCref: Reference FVC

GCP: Ground control point

GSD: Ground spatial resolution

LAI: Leaf-area index

LAN: Linear and nonlinear regression

NDVI: Normalized difference vegetation index

NIR: Near-infrared

PDM: Pixel dichotomy model

RDVI: Renormalized difference vegetation index

RMSE: Root-mean-square error

SAVI: Soil-adjusted vegetation index

SI: Spectral index

UAV: Unmanned aerial vehicle

VNAI: Visible and near-infrared angle index


Acknowledgements

We thank Xiaoyan Zhang, Jiqiu Cao, Bo Xu, Guozheng Lu, and Haiyang Yu for field data collection and farmland management.

Funding

This study was supported by the National Natural Science Foundation of China (Grants No. 41801225, No. 41601346, and No. 41771370), the National Key Research and Development Program of China (Grants No. 2016YFD0300602 and No. 2017YFD0600903), and the High-resolution Earth Observation Project of China (Grants No. 03-Y20A04-9001-17/18 and No. 30-Y20A07-9003-17/18). The funding sponsors had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Author information


Contributions

JY: Conceptualization, Methodology, Software, Data curation, Writing (original draft preparation). WG: Software, Data curation, Visualization, Investigation. GY: Software, Investigation. CZ: Investigation, Reviewing and Editing. HF: Investigation, Visualization. HQ: Reviewing and Editing.

Corresponding authors

Correspondence to Wei Guo or Hongbo Qiao.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.



Cite this article

Yue, J., Guo, W., Yang, G. et al. Method for accurate multi-growth-stage estimation of fractional vegetation cover using unmanned aerial vehicle remote sensing. Plant Methods 17, 51 (2021). https://doi.org/10.1186/s13007-021-00752-3


Keywords

  • Unmanned aerial vehicle
  • Fractional vegetation cover
  • Chlorophyll
  • Pixel dichotomy model
  • Soybean