
Table 4 Effect of a range of data augmentation techniques on the % accuracy of our three best model architectures, ResNet18, ResNet34 and ResNet101, trained on the whole dataset

From: HairNet: a deep learning model to score leaf hairiness, a key phenotype for cotton fibre yield, value and insect resistance

Data augmentation   | ResNet18              | ResNet34              | ResNet101
                    | IA     FIA    LA      | IA     FIA    LA      | IA     FIA    LA
Baseline            | 83.19  78.41  89.56   | 84.85  81.65  91.36   | 82.49  83.09  88.12
RV Flip             | 87.98  84.17  92.08   | 88.65  87.56  92.44   | 85.96  85.25  90.28
RH Flip             | 87.87  84.53  92.80   | 88.98  87.12  91.72   | 84.63  81.65  90.64
RV + RH Flip        | 88.43* 85.97  93.88*  | 89.20  86.69  92.80   | 87.80  85.97  93.52*
RC                  | 82.32  80.93  88.48   | 83.67  81.65  87.05   | 82.38  83.09  89.20
RC + RV Flip        | 83.78  83.09  87.76   | 86.25  84.17  93.16   | 83.60  82.01  88.12
RC + RH Flip        | 83.12  81.64  87.76   | 86.10  84.53  91.72   | 83.71  82.37  86.69
RC + RV + RH Flip   | 84.59  85.25  88.84   | 83.38  81.65  86.33   | 82.57  82.73  85.97
RR                  | 87.80  87.76* 91.72   | 89.27* 88.13* 94.96*  | 88.72* 89.92* 92.80
  1. Image Accuracy (IA), First Image Accuracy (FIA) and Leaf Accuracy (LA) are compared to baseline accuracies obtained without data augmentation. The highest accuracy in each column is marked with an asterisk (*). All models here employ the Adam optimizer with learning rate (lr) \(=10^{-4}\)
  2. RV: Random Vertical flip, RH: Random Horizontal flip, RC: Random Crop, RR: Random Rotation
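
For reference, the augmentation variants compared above map onto standard torchvision transforms. The sketch below is illustrative only: the paper does not report the flip probabilities, crop size, rotation range, or input resolution used, so the values chosen here (p = 0.5, 224-pixel crops, rotations within ±90°, 256-pixel resize) are assumptions; only the Adam optimizer and the learning rate of \(10^{-4}\) come from footnote 1.

```python
import torch
from torchvision import models, transforms

# Augmentation variants from Table 4. Flip probabilities, crop size and
# rotation range are not reported in the paper; the values below are assumed.
# Combined rows (e.g. "RC + RV Flip") concatenate the corresponding lists.
AUGMENTATIONS = {
    "Baseline":     [],  # no augmentation
    "RV Flip":      [transforms.RandomVerticalFlip(p=0.5)],
    "RH Flip":      [transforms.RandomHorizontalFlip(p=0.5)],
    "RV + RH Flip": [transforms.RandomVerticalFlip(p=0.5),
                     transforms.RandomHorizontalFlip(p=0.5)],
    "RC":           [transforms.RandomCrop(224)],
    "RR":           [transforms.RandomRotation(degrees=90)],  # angles in [-90, 90]
}

def make_train_transform(variant: str) -> transforms.Compose:
    """Build the training-time transform for one row of Table 4."""
    return transforms.Compose([
        transforms.Resize((256, 256)),   # assumed working resolution
        *AUGMENTATIONS[variant],
        transforms.ToTensor(),
    ])

# Each row pairs one augmentation pipeline with a ResNet backbone trained
# with Adam at lr = 1e-4, as stated in footnote 1.
model = models.resnet34()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
train_transform = make_train_transform("RV + RH Flip")
```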