Table 3 The performance of different techniques/models (Table 4) trained on mixed data (incl. different species, soil types and imaging devices), predicting a different, unseen Chicory/RootPainter dataset

From: Semantic segmentation of plant roots from RGB (mini-) rhizotron images—generalisation potential and false positives of established methods and advanced deep-learning models

| Technique/Model (+ aug) | SSIM | DSC | IoU | FPR |
| --- | --- | --- | --- | --- |
| Dummy classifier ᵇ | 0.9732 | 0.4103 | 0.4103 | – |
| Frangi Vesselness | 0.3361 | 0.0054 | 0.0042 | 0.9984 |
| Adaptive thresholding | 0.9408 | 0.1506 | 0.1199 | 0.8814 |
| SVM | 0.8260 | 0.0845 | 0.0775 | 0.8862 |
| SegRoot ᵇ | 0.9732 | 0.4103 | 0.4103 | – |
| SegRoot + aug ᵃ,ᵇ | 0.9732 | 0.4103 | 0.4103 | – |
| UNetGNRes | 0.9611 | 0.2489 | 0.2276 | 0.5769 |
| UNetGNRes + aug | 0.9623 | 0.3707 | 0.3022 | 0.6858 |
| U-Net SE-ResNeXt-101 (32 × 4d) | 0.9764 | 0.4750 | 0.4156 | 0.3125 |
| U-Net SE-ResNeXt-101 (32 × 4d) + aug | 0.9780 | 0.5676 | 0.5043 | 0.1442 |
| U-Net EfficientNet-b6 | 0.9771 | 0.5350 | 0.4739 | 0.1827 |
| U-Net EfficientNet-b6 + aug | **0.9784** | **0.6103** | **0.5411** | **0.1233** |

  1. ᵃ No differences in model performance were found when using + aug with the SegRoot model.
  2. ᵇ Excluded from FPR determination, as these methods do not predict roots. The best scores for the evaluation metrics average structural similarity index (SSIM), average Sørensen–Dice similarity coefficient (DSC), and average Jaccard index/intersection over union (IoU), and the lowest false positive rate (FPR) are shown in bold. Number of images: training: 557 (augmented training: 2228), validation: 62, testing: 1537. Models trained with augmented data are indicated by + aug.
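For reference, the overlap metrics reported above (DSC, IoU, FPR) can be computed from pixel-level confusion counts. The sketch below is illustrative only and is not the paper's implementation; the function name is hypothetical, and the paper's exact procedure (e.g. per-image averaging before reporting) may differ.

```python
def segmentation_metrics(pred, target):
    """Dice (DSC), IoU and false positive rate for flat binary masks.

    pred, target: equal-length sequences of 0/1 pixel labels (1 = root).
    Illustrative sketch -- not the evaluation code used in the paper.
    """
    tp = sum(1 for p, t in zip(pred, target) if p and t)        # root predicted as root
    fp = sum(1 for p, t in zip(pred, target) if p and not t)    # background predicted as root
    fn = sum(1 for p, t in zip(pred, target) if t and not p)    # root missed
    tn = sum(1 for p, t in zip(pred, target) if not p and not t)
    dsc = 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0
    iou = tp / (tp + fp + fn) if (tp + fp + fn) else 1.0
    fpr = fp / (fp + tn) if (fp + tn) else 0.0
    return dsc, iou, fpr

# Toy 2x2 mask flattened to 4 pixels: two root pixels correctly
# predicted, one background pixel wrongly predicted as root.
dsc, iou, fpr = segmentation_metrics([1, 1, 1, 0], [1, 1, 0, 0])
# dsc = 0.8, iou = 2/3, fpr = 0.5
```

A high SSIM with a low DSC (as for the dummy classifier row) illustrates why structural similarity alone is a weak indicator on sparse root masks: predicting only background already yields a high SSIM.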