
Table 1 Overview of retrospective studies on ML-based improvement of the segmentation-based MRAC method by AI

From: Radiomics and artificial intelligence in prostate cancer: new tools for molecular hybrid imaging and theragnostics

Columns: Author and publication year | Algorithm | Cohort (patients) | Ground truth | Performance

Bradshaw et al. (2018 [35])

Algorithm: DL-based attenuation-correction method (deepMRAC) using 3D CNNs, namely DeepMedic (https://biomedia.doc.ic.ac.uk/software/deepmedic/). The network was trained to produce a discretized (air, water, fat, and bone) substitute computed tomography (CT) image (CTsub). Discretized (CTref discrete) and continuously valued (CTref) reference CT images were created to serve as ground truth for network training and attenuation correction, respectively.

Cohort: Eighteen female patients with cervical cancer, randomly split into 12 training and 6 testing subjects, each with an [18F]FDG PET/MRI scan (with T2 MRI, T1 LAVA Flex, and 2-point-Dixon-based MRAC images) followed by a PET/CT scan. No validation cohort.

Ground truth: Reference CT (CTref) images were generated by combining different techniques for different tissue types: bone from the CT image plus the T2 MRI image followed by bone segmentation; fat and water from the fat-fraction image generated from the 2-point Dixon acquisition; air (including bowel gas) from an intensity threshold of the T2 image based on an ROI in the muscle, with manual corrections.

Performance: The Dice coefficient of the AI-produced CTsub compared with CTref discrete was 0.79 for cortical bone, 0.98 for soft tissue, and 0.49 for bowel gas. The root-mean-square error (RMSE) of the whole PET image was 4.9% using deepMRAC and 11.6% using the system MRAC.
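The two figures of merit in this entry (Dice overlap of segmentation labels, and whole-image RMSE expressed as a percentage) can be sketched with NumPy. This is a minimal illustration, not the authors' pipeline: the exact normalization of the percentage RMSE (here, the mean reference uptake) is an assumption, and the toy masks are invented for the example.

```python
import numpy as np

def dice_coefficient(pred, ref):
    """Dice overlap of two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, ref).sum() / denom

def rmse_percent(pet_test, pet_ref):
    """Whole-image RMSE as a percentage of the mean reference uptake
    (assumed normalization; the paper's exact definition may differ)."""
    rmse = np.sqrt(np.mean((pet_test - pet_ref) ** 2))
    return 100.0 * rmse / pet_ref.mean()

# Toy 2D "bone" masks: the prediction is shifted by one voxel
ref_mask = np.zeros((8, 8), dtype=bool)
ref_mask[2:6, 2:6] = True
pred_mask = np.zeros((8, 8), dtype=bool)
pred_mask[3:7, 2:6] = True

print(dice_coefficient(pred_mask, ref_mask))  # 0.75
```

In 3D the same functions apply unchanged, since the reductions operate over all voxels regardless of array rank.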

Leynes et al. (2018 [36])

Algorithm: DL-based attenuation-correction method using a U-Net CNN composed of 13 layers in total. The model performs a direct, fully automated conversion of MRI images to synthetic CT images, the so-called zero-echo-time and Dixon deep pseudo-CT ("ZeDD-CT"), for PET image reconstruction, providing patient-specific continuous-valued attenuation coefficients in soft tissue and bone; the impact on radiotracer uptake estimation was evaluated.

Cohort: Twenty-six patients with pelvic lesions (split into 10 training and 16 evaluation subjects) who underwent [18F]FDG or [68Ga]Ga-PSMA-11 PET/MRI. No validation cohort.

Ground truth: Helical CT images of the patients, co-registered to the MRI images.

Performance: Thirty bone and 60 soft-tissue lesions were evaluated, and SUVmax was measured. Compared with the ground-truth CTAC, the RMSE in PET quantification was reduced by a factor of 4 for bone lesions and by a factor of 1.5 for soft-tissue lesions.

Mostafapour et al. (2021 [37])

Algorithm: DL-based attenuation-correction method using a residual deep learning model that takes non-attenuation-corrected PET images (PET-NAC) as input and CT-based attenuation-corrected PET images (PET-CTAC) as target (reference).

Cohort: Three hundred ninety-nine whole-body [68Ga]Ga-PSMA-11 images were used as the training dataset, and 46 whole-body [68Ga]Ga-PSMA-11 images as an independent validation dataset.

Ground truth: The CT from the corresponding PET-CTAC.

Performance: On the independent external validation dataset, the AI method achieved a mean absolute error (MAE), relative error (RE%), structural similarity index (SSIM), and peak signal-to-noise ratio (PSNR) of 0.91 ± 0.29 (SUV), -2.46% ± 10.10%, 0.973 ± 0.034, and 48.171 ± 2.964, respectively.
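The intensity-based metrics reported here (MAE, signed relative error, PSNR) have short NumPy definitions; a hedged sketch follows, with invented test images standing in for PET-CTAC and the DL-corrected output. The peak used for PSNR (the reference maximum) and the epsilon guard are assumptions, and SSIM is omitted because it is usually taken from a library implementation such as scikit-image's structural_similarity rather than hand-rolled.

```python
import numpy as np

def mae(pred, ref):
    """Mean absolute error in the image's native units (here, SUV)."""
    return np.mean(np.abs(pred - ref))

def relative_error_percent(pred, ref, eps=1e-6):
    """Mean signed relative error (%), a bias measure;
    eps guards against division by zero in empty voxels."""
    return 100.0 * np.mean((pred - ref) / (ref + eps))

def psnr(pred, ref):
    """Peak signal-to-noise ratio in dB, using the reference
    maximum as the peak value (an assumed convention)."""
    mse = np.mean((pred - ref) ** 2)
    return 10.0 * np.log10(ref.max() ** 2 / mse)

# Hypothetical data: a reference SUV image and a slightly noisy estimate
rng = np.random.default_rng(0)
ref = rng.uniform(0.5, 10.0, size=(32, 32))
pred = ref + rng.normal(0.0, 0.1, size=ref.shape)

print(mae(pred, ref), psnr(pred, ref))
```

With a noise standard deviation of 0.1 SUV against a peak near 10, the PSNR lands around 40 dB, which gives a feel for the scale of the 48 dB reported in the study.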

Jang et al. (2018 [38])

Algorithm: DL-based attenuation-correction method using a convolutional neural network pre-trained with T1-weighted MRI images. Ultrashort echo time (UTE) images were used as input, and the network was trained using labels derived from co-registered CT images.

Cohort: Head PET/MRI of 8 human subjects. No validation cohort.

Ground truth: A registered CT image.

Performance: Dice coefficients for air (within the head), soft tissue, and bone labels were 0.76 ± 0.03, 0.96 ± 0.006, and 0.88 ± 0.01, respectively. In PET quantitation, the proposed MRAC method produced relative PET errors of less than 1% within most brain regions.

Torrado-Carvajal et al. (2020 [39])

Algorithm: DL-based attenuation-correction method, Dixon-VIBE Deep Learning (DIVIDE): a deep learning network that synthesizes pelvic pseudo-CT maps from only the standard Dixon volumetric interpolated breath-hold examination (Dixon-VIBE) images.

Cohort: Twenty-eight datasets from 19 patients who underwent PET/CT and PET/MRI examinations. No validation cohort.

Ground truth: The CT from the PET/CT examination.

Performance: Absolute mean relative change values relative to CT-based attenuation correction were lower than 2% on average for the DIVIDE method in every ROI except bone, where they were lower than 4%.

Pozaruk et al. (2021 [41])

Algorithm: DL-based attenuation-correction method using an augmented generative adversarial network (GAN), trained in a supervised manner with the aim of improving the accuracy of attenuation maps estimated from MRI Dixon contrast images.

Cohort: Twenty-eight prostate cancer patients; 18 patients (2,160 slices, later augmented to 270,000 slices) were used for training the GAN and the remaining 10 patients for validation.

Ground truth: CT images.

Performance: The DL-based MRI method generated pseudo-CT AC μ-maps 4.5% more accurately than standard MRI-based techniques; augmenting the training dataset improved the accuracy of the estimated μ-map and, consequently, the PET quantification compared with the state of the art.
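The 125-fold expansion of the training set (2,160 to 270,000 slices) is the kind of growth that simple per-slice geometric augmentation produces. The sketch below is a generic illustration of that idea, not the transforms actually used by Pozaruk et al. (which are not specified in this table): each stand-in Dixon slice is randomly flipped and shifted to yield 125 variants.

```python
import numpy as np

def augment_slice(sl, rng):
    """One random geometric augmentation of a 2D slice:
    an optional left-right flip plus a small integer pixel shift.
    A generic scheme chosen for illustration only."""
    out = sl
    if rng.random() < 0.5:
        out = np.fliplr(out)
    shift = rng.integers(-3, 4, size=2)  # shift by up to 3 pixels per axis
    return np.roll(out, tuple(shift), axis=(0, 1))

rng = np.random.default_rng(42)
slices = [rng.uniform(size=(64, 64)) for _ in range(4)]  # stand-in Dixon slices

# 125 augmented variants per slice mirrors the paper's 2,160 -> 270,000 ratio
augmented = [augment_slice(s, rng) for s in slices for _ in range(125)]
print(len(augmented))  # 4 slices -> 500 augmented samples
```

Flips and sub-centimetre shifts keep the anatomy plausible while decorrelating the samples, which is why augmentation of this kind can improve a GAN's μ-map estimates without any new acquisitions.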