Fig. 4 | European Radiology Experimental

From: Enhancing diagnostic deep learning via self-supervised pretraining on large-scale, unlabeled non-medical images

Receiver operating characteristic (ROC) curves of individual labels comparing diagnostic models pretrained with self-supervised learning (SSL) on non-medical images against models pretrained with fully supervised learning (SL) on non-medical images. SSL pretraining used DINOv2 (solid lines), while SL pretraining used ImageNet (dotted lines). These models were subsequently fine-tuned in a supervised manner on chest radiographs from six datasets: VinDr-CXR, ChestX-ray14, CheXpert, MIMIC-CXR, UKA-CXR, and PadChest. The numbers of training images for supervised fine-tuning were n = 15,000, n = 86,524, n = 128,356, n = 170,153, n = 153,537, and n = 88,480, and the numbers of test images were n = 3,000, n = 25,596, n = 39,824, n = 43,768, n = 39,824, and n = 22,045, respectively. Corresponding area under the ROC curve values for each label, presented as mean ± standard deviation (95% CI), are provided in the bottom right, contrasting the DINOv2 and ImageNet pretraining strategies
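For readers who want to generate this type of per-label comparison themselves, a minimal sketch is given below. It assumes multi-label ground truth and per-model probability scores are already available; the label names, the arrays `probs_dino` and `probs_imagenet`, and the random placeholder data are illustrative assumptions, not the authors' code or data.

```python
# Illustrative sketch: per-label ROC curves and AUC for two pretraining strategies,
# in the style of Fig. 4 (solid = SSL/DINOv2, dotted = SL/ImageNet).
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score
import matplotlib.pyplot as plt

labels = ["Cardiomegaly", "Pleural effusion", "Pneumonia"]     # example label names
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=(500, len(labels)))           # placeholder ground truth
probs_dino = rng.random((500, len(labels)))                    # placeholder DINOv2-pretrained model scores
probs_imagenet = rng.random((500, len(labels)))                # placeholder ImageNet-pretrained model scores

for i, name in enumerate(labels):
    for probs, style, tag in [(probs_dino, "-", "DINOv2 (SSL)"),
                              (probs_imagenet, ":", "ImageNet (SL)")]:
        fpr, tpr, _ = roc_curve(y_true[:, i], probs[:, i])
        auc = roc_auc_score(y_true[:, i], probs[:, i])
        plt.plot(fpr, tpr, style, label=f"{name}, {tag}: AUC = {auc:.3f}")

plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.legend(fontsize=7, loc="lower right")
plt.show()
```

In the published figure, the reported AUC values additionally carry standard deviations and 95% confidence intervals, which would require a resampling scheme (e.g., bootstrapping over the test set) on top of this sketch.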
