Table 4 Classification networks: training time and model size

From: Deep learning based high-throughput phenotyping of chalkiness in rice exposed to high night temperature

| Model | Training time (s) | Number of parameters | Size (MB) | Acc. (%) |
| --- | ---: | ---: | ---: | ---: |
| DenseNet-121 | 1522.88 | 6,955,906 | 28.4 | **95.61** |
| DenseNet-161 | 2157.04 | 26,476,418 | 107.1 | 95.12 |
| DenseNet-169 | 1306.20 | 12,487,810 | 50.9 | 94.63 |
| ResNet-18 | 546.77 | 11,177,536 | 44.8 | 94.63 |
| ResNet-34 | 719.41 | 21,285,696 | 85.3 | 94.15 |
| ResNet-50 | 1011.85 | 23,512,128 | 94.4 | 94.88 |
| ResNet-101 | 1668.41 | 42,504,256 | 170.6 | **95.12** |
| ResNet-152 | 2172.97 | 58,147,904 | 233.4 | 94.88 |
| SqueezeNet-1.0 | 533.15 | 736,450 | 3.0 | **95.12** |
| SqueezeNet-1.1 | 481.53 | 723,522 | 2.9 | 94.39 |
| VGG-11 | 2382.44 | 128,774,530 | 515.1 | 94.88 |
| VGG-13 | 2641.00 | 128,959,042 | 515.9 | 94.39 |
| VGG-16 | 2745.00 | 134,268,738 | 537.1 | **95.12** |
| VGG-19 | 3079.89 | 139,578,434 | 558.4 | 94.15 |
| EfficientNetB0 | 1198.53 | 4,052,126 | 33.0 | 95.13 |
| EfficientNetB1 | 2243.48 | 6,577,794 | 53.4 | 95.13 |
| EfficientNetB2 | 1882.26 | 7,771,380 | 62.9 | 93.67 |
| EfficientNetB3 | 2696.21 | 10,786,602 | 87.1 | 95.13 |
| EfficientNetB4 | 3476.74 | 17,677,402 | 142.3 | **95.38** |
| EfficientNetB5 | 3584.68 | 28,517,618 | 229.1 | 93.67 |
| EfficientNetB6 | 4946.95 | 40,964,746 | 328.3 | 94.16 |
| Mask R-CNN | 14863.00 | 42,504,256 | 255.9 | N/A |

  1. The number following a network's name denotes either the number of layers in the network (as in DenseNet-121 or ResNet-101) or the version of the network (as in SqueezeNet-1.0 or EfficientNetB0). All models were trained on AWS p3.2xlarge instances. The time to train each model for 200 epochs is reported in seconds (s). Model complexity is reported as the number of trainable parameters and as the size of the model in MB. The accuracy (Acc.) of each model is also shown, and the best accuracy obtained for each type of model is highlighted in bold font.
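
As a minimal sketch (not taken from the paper), the parameter-count and size columns can be approximated for the torchvision implementations of these networks. The two-output classification head below is an assumption based on the classification task, not a detail reported in the table; on-disk checkpoint sizes include serialization overhead and so may differ slightly from the in-memory figure computed here.

```python
# Sketch: reproduce "Number of parameters" and estimate "Size (MB)"
# for a torchvision model. Assumes torchvision >= 0.13 (weights=None API)
# and a two-class output head (an assumption, not from the table).
import torch.nn as nn
from torchvision import models

def complexity(model: nn.Module) -> tuple[int, float]:
    """Return (trainable parameter count, in-memory size in MB)."""
    n_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
    # Size covers parameters plus buffers (e.g. BatchNorm running stats);
    # a saved checkpoint adds some overhead on top of this.
    n_bytes = sum(p.numel() * p.element_size() for p in model.parameters())
    n_bytes += sum(b.numel() * b.element_size() for b in model.buffers())
    return n_params, n_bytes / 1024 ** 2

model = models.densenet121(weights=None)
# Replace the ImageNet head with a two-class classifier (assumed task setup).
model.classifier = nn.Linear(model.classifier.in_features, 2)
n_params, size_mb = complexity(model)
print(n_params, round(size_mb, 1))  # 6955906 parameters, cf. the DenseNet-121 row
```

With this head replacement, the trainable-parameter count matches the DenseNet-121 row exactly; the same pattern (swap the final layer, then count) applies to the other classification networks in the table.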