Table 2 Performance of the different techniques/models (see Table 4) on a test subset of the mixed data set that was not used during training on the mixed data set

From: Semantic segmentation of plant roots from RGB (mini-) rhizotron images—generalisation potential and false positives of established methods and advanced deep-learning models

| Technique/model (+ aug) | SSIM | DSC | IoU | FPR |
| --- | --- | --- | --- | --- |
| Dummy classifier^b | 0.9173 | 0.3250 | 0.3250 | – |
| Frangi Vesselness | 0.3665 | 0.1009 | 0.0626 | 1.0 |
| Adaptive thresholding | 0.8348 | 0.2367 | 0.1804 | 0.8636 |
| SVM | 0.7617 | 0.1744 | 0.1341 | 0.9090 |
| SegRoot^a,b | 0.9173 | 0.3250 | 0.3250 | – |
| SegRoot + aug^a,b | 0.9173 | 0.3250 | 0.3250 | – |
| UNetGNRes | 0.9246 | 0.4399 | 0.3585 | 0.6363 |
| UNetGNRes + aug | 0.9313 | 0.5326 | 0.4452 | 0.4545 |
| U-Net SE-ResNeXt-101 (32 × 4d) | 0.9352 | 0.5708 | 0.4800 | 0.3636 |
| U-Net SE-ResNeXt-101 (32 × 4d) + aug | 0.9360 | 0.6217 | 0.5299 | 0.2272 |
| U-Net EfficientNet-b6 | 0.9375 | 0.6418 | 0.5498 | 0.1363 |
| U-Net EfficientNet-b6 + aug | **0.9381** | **0.6848** | **0.5920** | **0.0454** |

  1. The best scores for the evaluation metrics, i.e. average structural similarity index (SSIM), average Sørensen-Dice similarity coefficient (DSC) and average Jaccard index/intersection over union (IoU), as well as the lowest false positive rate (FPR), are shown in bold (a minimal computation sketch follows these notes). Number of images used: training: 557 (augmented training: 2228), validation: 62, testing: 69. Models trained with augmented data are marked "+ aug"
  2. ^a No difference in model performance was found when using + aug with the SegRoot model
  3. ^b Excluded from the FPR determination because these methods do not predict any roots (all predicted labels are zero)
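For readers reproducing the evaluation, the sketch below shows how per-image DSC, IoU and SSIM are commonly computed for a pair of binary segmentation masks. It is an illustrative assumption, not the authors' evaluation code; in particular, the pixel-wise FPR computed here is a generic definition and may differ from the rate reported in the table, which the paper determines on root-free images.

```python
import numpy as np
from skimage.metrics import structural_similarity  # SSIM between the two masks


def dice_iou_fpr(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8):
    """DSC, IoU and a pixel-wise FPR for binary masks (1 = root, 0 = background)."""
    pred = pred.astype(bool)
    target = target.astype(bool)

    tp = np.logical_and(pred, target).sum()
    fp = np.logical_and(pred, ~target).sum()
    fn = np.logical_and(~pred, target).sum()
    tn = np.logical_and(~pred, ~target).sum()

    dsc = 2 * tp / (2 * tp + fp + fn + eps)   # Sørensen-Dice similarity coefficient
    iou = tp / (tp + fp + fn + eps)           # Jaccard index / intersection over union
    fpr = fp / (fp + tn + eps)                # pixel-wise FPR (assumption, see note above)
    return dsc, iou, fpr


# Usage with placeholder masks (random data, not from the paper's test set)
rng = np.random.default_rng(0)
pred = rng.random((256, 256)) > 0.5
target = rng.random((256, 256)) > 0.5

dsc, iou, fpr = dice_iou_fpr(pred, target)
ssim = structural_similarity(pred.astype(float), target.astype(float), data_range=1.0)
print(f"DSC={dsc:.4f}  IoU={iou:.4f}  FPR={fpr:.4f}  SSIM={ssim:.4f}")
```

The table values would then correspond to averaging these per-image scores over the 69 test images, with the best value per metric highlighted in bold.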