  • Narrative review
  • Open access

Artificial intelligence for prostate MRI: open datasets, available applications, and grand challenges

Abstract

Artificial intelligence (AI) for prostate magnetic resonance imaging (MRI) is starting to play a clinical role for prostate cancer (PCa) patients. AI-assisted reading is feasible, allowing workflow reduction. A total of 3,369 multi-vendor prostate MRI cases are available in open datasets, acquired from 2003 to 2021 in Europe or the USA at 3 T (n = 3,018; 89.6%) or 1.5 T (n = 296; 8.8%); 346 cases (10.3%) were scanned with an endorectal coil and 3,023 (89.7%) with phased-array surface coils; 412 cases were collected for anatomical segmentation tasks and 3,096 for PCa detection/classification; lesion delineations are available for 2,240 cases and matching histopathologic images for 56 cases; the PSA level is provided for 2,620 cases; the total size of all open datasets amounts to approximately 253 GB. Of note, the quality of the annotations provided differs greatly across datasets, and attention must be paid when using them (e.g., to data overlap). Seven grand challenges and commercial applications from eleven vendors are considered here. Only a few small studies have provided prospective validation. More work is needed, in particular validation on large-scale, multi-institutional, well-curated public datasets to test general applicability. Moreover, AI needs to be explored for clinical stages other than detection/characterization (e.g., follow-up, prognosis, interventions, and focal treatment).

Key points

  • Artificial intelligence shows promise for being applied to prostate cancer magnetic resonance imaging (MRI).

  • Open datasets for prostate MRI are limited.

  • Commercial solutions are available but lack adequate validation.

  • Grand challenges could provide the means for bias-free validation.

Background

Prostate cancer (PCa) is the second most prevalent cancer among men worldwide [1]. Nevertheless, the mortality rate is relatively low, and most patients die with and not of PCa [2]. Timely and accurate diagnosis is therefore of utmost importance to avoid overtreatment of men with indolent, clinically insignificant PCa, and to offer radical curative treatment to men with life-threatening, clinically significant PCa (csPCa) [3]. Present-day guidelines advise the use of multiparametric magnetic resonance imaging (mpMRI) prior to biopsy [3], as it can noninvasively discriminate patients with indolent PCa from those with csPCa while retaining a high sensitivity for csPCa [4,5,6]. Using version 2 of the Prostate Imaging Reporting and Data System (PI-RADS) [7], radiologists make a semiquantitative assessment of each suspicious lesion observed on mpMRI and assign a corresponding csPCa likelihood score from 1 to 5. Together with clinical variables, such as patient age, prostate-specific antigen (PSA) level, and family history, PI-RADS scores help clinicians determine whether further investigation (via systematic or targeted biopsies) is needed to reach a final diagnosis.

At present, the processing and interpretation of prostate mpMRI data in clinical routine is performed entirely by human experts (radiologists) who, while competent, are time-limited, cost-intensive, and cannot easily be scaled to meet increasing imaging demand [8]. Furthermore, human performance depends on experience and training, leading to significant variability between observers [9,10,11]. In contrast to purely qualitative interpretation, artificial intelligence (AI) exploits the quantitative nature of mpMRI data. AI can automate and support (parts of) the radiological workflow (Fig. 1), improve diagnostic accuracy, reduce costs, and alleviate the workload of healthcare personnel.

Fig. 1 Use of artificial intelligence in the radiological workflow of prostate magnetic resonance imaging to automate, improve, and support critical tasks, considering radiomics and deep learning approaches

In recent years, continuous technical developments and increased dataset quantity and quality have pushed AI performance close to that of experienced radiologists [12,13,14], leading to the emergence of both publicly and commercially available solutions. However, adequate validation via large-scale retrospective multicenter studies or prospective clinical studies is often still lacking. To realize this, the prostate MRI community should invest in curating large-scale, multicenter datasets, develop a unified methodology for standardized performance estimation, reach consensus on the reference outcome standard (beginning from the presence/absence of csPCa), and establish the minimum requirements for potential testing cohorts.

The purpose of this narrative review is to provide an overview of open datasets, commercially/publicly available AI systems, and grand challenges for prostate mpMRI. We focus on methods for segmenting prostate anatomy and for the diagnosis and localization of csPCa. While prostate segmentation can facilitate the calculation of PSA density (and also guide treatment planning and future interventions), diagnosis and localization can inform risk stratification and biopsy strategies. As we approach a new phase in AI applications to prostate mpMRI, in which the goal is to move towards transparent validation and clinical translation, we specifically report studies that investigated commercially or publicly available AI systems. Furthermore, we summarize publicly available MRI data that can be used to accelerate the development of AI systems and discuss the increasingly important role of grand challenges, which allow for bias-free benchmarking of AI algorithms applied to prostate mpMRI.
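To make the PSA-density point concrete, the sketch below (not part of the original article) derives the gland volume from a binary whole-gland segmentation mask and divides the serum PSA level by it; the file name, function name, and numeric values are illustrative assumptions.

```python
# Minimal sketch: PSA density from an AI-derived prostate segmentation.
import SimpleITK as sitk

def psa_density(mask_path: str, psa_ng_ml: float) -> float:
    """Return PSA density (ng/mL per cc) from a binary prostate mask and serum PSA."""
    mask = sitk.ReadImage(mask_path)
    voxel_mm3 = 1.0
    for s in mask.GetSpacing():                  # voxel spacing (x, y, z) in mm
        voxel_mm3 *= s
    n_voxels = (sitk.GetArrayFromImage(mask) > 0).sum()
    volume_cc = n_voxels * voxel_mm3 / 1000.0    # mm^3 -> mL (cc)
    return psa_ng_ml / volume_cc

# Example (hypothetical): PSA 8.0 ng/mL and a ~53-cc gland give a PSA density of ~0.15 ng/mL/cc.
print(round(psa_density("case0001_prostate_mask.nii.gz", 8.0), 2))
```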

Open datasets

AI, especially deep learning, requires large, well-curated datasets to facilitate training and meaningful validation [15]. Furthermore, models require diverse, multicenter, multivendor data to achieve robust performance and generalization. However, most algorithms reported in the literature thus far use relatively small, single-center datasets [16]. The limited number and quality of publicly available datasets for prostate MRI further aggravate this issue.

Table 1 provides an overview of 17 public datasets for prostate MRI, which were identified by the authors through their collaborative role in this research field and supplemented with additional internet searches (i.e., The Cancer Imaging Archive, Zenodo, XNAT, GitHub, and grand-challenge.org). A total of 3,369 prostate MRI cases (including some overlapping cases) are available, of which 2,238 primarily include mpMRI images acquired between 2003 and 2021 in Europe and the United States. All cases were provided as full 3D volumes, except for the QUBIQ21 dataset [17], which provides a single slice per case. A total of 412 cases were collected for anatomical segmentation tasks, whereas the remainder were collected for PCa detection and/or classification. The majority of cases were scanned at 3 T, whereas only 296 cases were scanned at 1.5 T. Scanner vendors include Siemens (Siemens Healthineers, Erlangen, Germany), Philips (Philips Healthcare, Best, The Netherlands), and GE (General Electric Healthcare Systems, Milwaukee, WI, USA) for 2,571, 446, and 110 cases, respectively. A total of 346 cases were scanned with endorectal coils, whereas the remaining cases were scanned with phased-array surface coils. Lesion delineations are available for 2,240 cases, and 56 cases have matching histopathologic section images obtained from radical prostatectomy specimens. Table 1 shows that clinical variables are available for some cases, e.g., 2,620 cases with an associated PSA level. The scans are available in Digital Imaging and COmmunications in Medicine (DICOM), ITK MetaImage, or NIfTI format for 1,547, 1,580, and 242 cases, respectively. The total size of all open datasets (images, annotations, and metadata) amounts to approximately 253 GB. In 2021, delineations of PCa lesions and prostatic zones for (parts of) the PROSTATEx dataset [18] were curated by an independent third party and publicly released [19].

Table 1 Summary of prostate MRI public datasets
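Because the datasets in Table 1 are distributed in different on-disk formats (DICOM series, ITK MetaImage, NIfTI), a practical first step when combining them is to load everything into one in-memory representation. The snippet below is an illustrative sketch using SimpleITK; the folder layout and variable names are assumptions, not part of any dataset's documentation.

```python
# Minimal sketch: loading a volume regardless of release format.
import os
import SimpleITK as sitk

def load_volume(path: str) -> sitk.Image:
    """Load one MRI volume from a DICOM folder, a .mha/.mhd file, or a .nii/.nii.gz file."""
    if os.path.isdir(path):                                   # DICOM series stored as a folder
        reader = sitk.ImageSeriesReader()
        reader.SetFileNames(reader.GetGDCMSeriesFileNames(path))
        return reader.Execute()
    return sitk.ReadImage(path)                               # MetaImage or NIfTI single file

t2w = load_volume("ProstateX-0001/t2w")                       # hypothetical folder layout
print(t2w.GetSize(), t2w.GetSpacing())
```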

Although a considerable amount of prostate MRI data appears to be available, the quality of the outcome to be predicted, i.e., the reference standard for the annotations (if any), is disputable. For datasets carrying annotations of the prostate anatomy, the reference standard is often one or a few human readers (whose annotations strongly depend on their experience level). Similarly, for the prediction of csPCa, the quality of the annotations provided differs greatly across datasets. One dataset reports pathology outcomes from MRI-fusion biopsies, another uses in-bore MRI biopsies, another uses radical prostatectomy, while for others the reference standard remains unclear. Accuracy across these various tissue-sampling strategies can vary strongly. Inconsistencies and missing information across imaging data, cohort distribution, and reference standard can also make it difficult to consolidate multiple public datasets into one. At the same time, most public datasets are too small to be used on their own. We conclude that annotations and data characteristics for public datasets are often ill-defined, and we advise potential users to contact the data providers for additional information (e.g., patient distribution, follow-up status) prior to usage.

Data overlap is an issue with public datasets. In some cases, all or part of a dataset contains cases from other public datasets. For example, the NCI-ISBI 2013 dataset [30] combined 40 cases from the Prostate-3T dataset [22] with 40 cases from the PROSTATE-DIAGNOSIS dataset [25], and the entire PROSTATEx [18] and Prostate-3T [22] datasets are included in the PI-CAI dataset [38]. Combining these datasets may therefore inadvertently lead to overestimating the effective data size in scientific AI experiments or product development.

All the datasets were confirmed to have been collected with institutional/ethical review board approval, except for I2CVB [23], Prostate158 [39], and QUBIQ21 [17], for which this information was not found. The datasets are all anonymized. Anonymization, in the sense of data being strictly untraceable to patient information, is becoming increasingly difficult in today's online world. Radiological images are almost always acquired, exchanged, and stored in DICOM format, and the DICOM header is rich in information that could allow tracing back to the patient. The DICOM standard defines security concepts for anonymization, and public tools that implement them are available [40]. A simpler solution is to provide images in a non-DICOM format, whose headers contain minimal information. The drawback of non-DICOM images or very strongly anonymized DICOM images is that relevant scientific information may be lost. Public prostate MRI data should preferably be carefully anonymized DICOM images with as many tags preserved as possible. The Cancer Imaging Archive provides a very strong DICOM anonymization procedure that keeps the most comprehensive set of DICOM tags available for scientific research [41].
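As a rough illustration of header-level de-identification, the pydicom sketch below blanks a handful of obviously identifying tags while leaving acquisition tags intact. The tag selection is a simplified example, not an exhaustive or compliant profile; public releases should rely on validated pipelines such as the one referenced above [40, 41], and the file paths are hypothetical.

```python
# Illustrative (non-compliant) sketch of DICOM header de-identification with pydicom.
import pydicom

def basic_deidentify(src_path: str, dst_path: str) -> None:
    """Blank obvious identifying tags while keeping acquisition tags (spacing, TR/TE, etc.)."""
    ds = pydicom.dcmread(src_path)
    for keyword in ("PatientName", "PatientID", "PatientBirthDate",
                    "ReferringPhysicianName", "InstitutionName", "AccessionNumber"):
        if keyword in ds:
            ds.data_element(keyword).value = ""
    ds.remove_private_tags()            # drop vendor-specific private groups
    ds.save_as(dst_path)

basic_deidentify("raw/IM0001.dcm", "release/IM0001.dcm")   # hypothetical paths
```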

Patient inclusion criteria were ambiguous for most datasets, which may raise questions about the degree of bias in the selected data. Images were mainly acquired for PCa detection in patients with suspected csPCa and/or for intervention or staging purposes. The PROSTATE-MRI [20] and Prostate Fused-MRI-Pathology [21] datasets included patients in whom biopsy confirmed cancer and who underwent radical prostatectomy. For the QIN-PROSTATE-Repeatability dataset [28], the criteria were the patient’s ability to undergo prostate MRI with an endorectal coil and complete the repeat examination. For the Medical Segmentation Decathlon (MSD) dataset [35], the criterion was the suitability for the development of a semantic segmentation algorithm.

The datasets have been used extensively by researchers for the development of clinical applications, including segmentation of prostate tissue and detection/diagnosis of csPCa [16, 42]. For segmentation-related applications, the PROMISE12 [32] and MSD [35] datasets were most commonly used for segmentation of the whole prostate gland and the prostate zones, respectively (e.g., [43, 44]). PROSTATEx [18] is currently the dataset most commonly used for the development of AI for detection of csPCa (e.g., [45,46,47]).

As of May 6, 2022, the PI-CAI challenge [37] has publicly released 1,500 (of 12,500) cases [38] with a much stronger reference standard than that of the PROSTATEx challenge [29]. Additionally, PI-CAI reserves a hidden testing cohort of 1,000 cases, with histopathology-confirmed positives (Gleason grade group > 1) and histopathology-confirmed (Gleason grade group 1) or follow-up-confirmed negatives, that will span the complete distribution of patients encountered in clinical routine. Data will be multivendor (3-T scanners from Philips and Siemens) and multicenter (Radboudumc, Ziekenhuis Groep Twente, University Medical Center Groningen, Norwegian University of Science and Technology). Patient age, PSA density, PSA level, and prostate volume will be provided for all cases. Expert-derived lesion delineations are provided for approximately 80% of all cases, and AI-derived lesion delineations (pseudo-labels) are provided for all cases, generated using a state-of-the-art csPCa detection model developed at Radboudumc [48].

An additional source of public images will be the ProCAncer-I platform [49]. It was launched in 2020 to address national and international medical data-sharing regulations and the lack of tooling, which cause many institutions not to make their data available [50, 51]. To enable institutions willing to share their data to improve, validate, and test state-of-the-art AI tools for prostate MRI diagnosis, this EU-funded project provides a scalable high-performance computing platform that will host the world's largest collection of anonymized prostate MRI datasets (> 17,000 cases), based on data donations in compliance with EU legislation (GDPR). To ensure rapid clinical implementation of the developed models, the platform partners will closely monitor performance, accuracy, and reproducibility. OPTIMA [52] is another EU-funded initiative that aims to overcome the limitations of data sharing while enabling research and clinical partners to leverage a variety of federated and centralized European data sources for the dynamic development and clinical implementation of AI tools to combat prostate, breast, and lung cancer.

Federated learning is an alternative approach to making data available. It allows robust prostate MRI AI to be trained on representative data from multiple countries and institutions, but, in contrast to conventional approaches, the AI model is trained by iteratively sharing the model weights obtained from training on local data. Consequently, the local data need not be shared and never leave the hospitals [53, 54]. Promising frameworks for federated learning include Flower [55], FedML [56], and PySyft [57], which support several operating systems, the use of graphics processing units, and differential privacy. A successful example of federated training of a prostate segmentation algorithm is reported by Sarma et al. [58].
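To make the weight-sharing idea concrete, the sketch below implements one round of federated averaging (FedAvg [53]) in plain NumPy, independently of any specific framework such as Flower or FedML; the Site class and its placeholder local_train routine are illustrative stand-ins for real hospital nodes and training code.

```python
# Conceptual FedAvg sketch: only model weights travel to the server, never the images.
import numpy as np

class Site:
    """Stand-in for a hospital node; real training code would replace local_train."""
    def __init__(self, num_cases: int, noise: float = 0.1):
        self.num_cases = num_cases
        self.rng = np.random.default_rng(num_cases)
        self.noise = noise

    def local_train(self, weights):
        # Placeholder "training": perturb the global weights with site-specific noise.
        return [w + self.noise * self.rng.standard_normal(w.shape) for w in weights]

def fedavg_round(global_weights, sites):
    """One communication round: local training at each site, then a size-weighted average."""
    updates = [site.local_train(global_weights) for site in sites]
    total = float(sum(site.num_cases for site in sites))
    return [
        sum(u[i] * (site.num_cases / total) for u, site in zip(updates, sites))
        for i in range(len(global_weights))
    ]

weights = [np.zeros((3, 3)), np.zeros(3)]                 # toy "model"
for _ in range(5):                                        # five communication rounds
    weights = fedavg_round(weights, [Site(200), Site(350), Site(120)])
```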

Available AI tools for prostate MRI

An inventory of commercial/public prostate MRI AI products and tools provides an overview of available technology and supported clinical applications. This is relevant both to the clinical end user and to scientists exploring knowledge gaps. In reviews, such inventories quickly become outdated; of note, a website [59] provides a continuously updated overview of AI-based prostate MRI software for clinical radiology.

A comprehensive overview of current commercial products and public tools is given in Table 2. Eleven vendors offer products that help acquire and report prostate MRI for diagnostics and intervention. The AI claims range from modest automatic segmentation of the prostate to measure prostate volume, to calculation of a tumor heatmap, up to automated detection of csPCa [60]. Consequently, products vary in their level of clinical support and in their ability to improve workflow or reduce reader variability. Only a few vendors are currently able to automatically generate reports that can help streamline the diagnostic workflow. For only one vendor (Siemens Healthineers) [13] has a prototype been shown to increase diagnostic accuracy and reduce variability between readers [60]. Various trials are underway, and it is expected that other vendors will soon upgrade their products with similar claims. AI for prostate MRI is not dissimilar to many other radiology applications, in that peer-reviewed evidence of effectiveness is mostly lacking [61]. As shown in Table 2, levels of certification vary, which also implies that the level of validation varies. The 'soft' claim of an AI engine being able to produce a tumor heatmap, without explicit detection performance, can do with a class I certification supported by few validation studies. The ability to predict the presence of csPCa and the associated claims of workflow improvement require much stronger evidence and validation (certification class II and above).

Table 2 Overview of commercially available prostate MRI tools that implement AI. The table attempts a comprehensive comparison in terms of highest claim and level of trust based on certification level

Non-commercial, public AI tools for prostate MRI may reflect the current state of the art. Many tools have been made available in the form of publications, code, software plugins, or grand challenge algorithms for research or non-clinical purposes. A number of these tools have already been presented in other review articles [16, 42, 62, 63]. However, access to these trained models is not always possible, and when it is, implementation is usually not easy for end users or researchers. Furthermore, developers who want to benchmark against these models usually must obtain the source code, install libraries, and make changes to fit the model, which can lead to unfair and indirect benchmarking. One way to overcome this problem is to use platforms that allow pre-trained models to be used and benchmarked directly.

NVIDIA Clara Imaging [64] is a platform that provides a framework for the development and direct deployment of AI applications for medical imaging. It includes a set of public, pre-trained deep learning models. Currently, the available models appear to focus primarily on segmentation tasks, including nnU-Net [43], a self-configuring method for deep learning-based segmentation that has shown excellent performance on the MSD [34] and PROMISE12 [32] challenges. Another platform is Grand Challenge–Algorithms [65], to which pre-trained models can be uploaded so that developers can directly test the method and compare their models against its performance. The platform currently includes a prostate MRI segmentation model and two csPCa detection models. Furthermore, the Federated Tumor Segmentation (FeTS) Platform [66] provides access to multiple pre-trained models that can be deployed in a federated fashion.

Grand challenges

Grand challenges provide the means to benchmark and validate multiple AI models on a common set of training and testing datasets in a bias-free manner. For prostate MRI, there are a handful of public challenges, each of which focuses on one of two clinical outcome categories: prostate anatomy segmentation (NCI-ISBI 2013 [31], PROMISE12 [33], MSD [34], QUBIQ21 [17], and Prostate158 [39]) and csPCa detection/diagnosis (PROSTATEx [29], Prostate158 [39], and PI-CAI [37]).

The NCI-ISBI 2013, MSD, and Prostate158 challenges evaluated the performance of AI models for segmentation of the prostatic peripheral zone (PZ) and transitional zone (TZ), whereas the PROMISE12 challenge evaluated segmentation of the whole gland rather than its constituent zones. Segmentation of the whole gland is considered an easier task than segmentation of the prostatic zones (especially the PZ). This is reflected by top Dice similarity coefficients in the literature of about 0.90 (TZ) and 0.75 (PZ) [67]. In the MSD, nnU-Net [43], which performed best, achieved similar results, with Dice similarity coefficients of 0.90 (TZ) and 0.77 (PZ).
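For reference, the Dice similarity coefficient quoted throughout these segmentation benchmarks is computed as below. This is a generic implementation, not the official evaluation code of any particular challenge, and the empty-mask convention is an assumption.

```python
# Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|), computed on binary masks.
import numpy as np

def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0                       # both masks empty: count as perfect agreement
    return 2.0 * np.logical_and(pred, ref).sum() / denom

# A predicted TZ mask covering ~90% of the reference voxels (with few extra voxels)
# yields a Dice of roughly 0.90, the order of the values quoted above.
```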

PROMISE12 ranked AI-derived segmentations using a score that averages four similarity and distance metrics computed relative to an expert's manual annotations. The top score is 100, but in their article [33] the challenge organizers explained that the scores are normalized so that a second (inexperienced) reader scores 85, and they already indicated that very high scores (> 90) are likely within the range of inter-reader variability. During the challenge, the best score achieved was 87. However, on the present-day post-challenge leaderboard, ten submissions have scores ranging from 89.5 to 91.9, with the highest score achieved using MSD-Net [68]. At these upper limits of performance, differences between AI algorithms with respect to the PROMISE12 reference standard (a human expert with six years of experience) may not be indicative of better or worse performance. In particular, with deep learning algorithms performing so well, the issue now becomes defining a better reference standard that is more representative of the biological ground truth, which remains an open research question.

The QUBIQ21 challenge aims to quantify uncertainties in biomedical image segmentation. Recent advances in probabilistic deep learning allow uncertainty to be estimated across predictions [69], which can pave the way to explainable, trustworthy AI and inform clinicians about the diagnostic uncertainty of AI [70]. QUBIQ21 addresses multiple organs and imaging modalities, including prostate MRI, for which there are two segmentation tasks on 55 T2-weighted cases (one mid-gland slice with six expert annotations per case).
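One simple way to relate such multi-reader annotations to uncertainty estimation is to turn the six expert masks into a per-pixel agreement map and an entropy-based disagreement map, as sketched below. This is an illustrative construction rather than the QUBIQ21 evaluation protocol, and the random masks at the end are placeholder data.

```python
# Sketch: per-pixel agreement and disagreement across multiple expert annotations.
import numpy as np

def soft_reference(masks: np.ndarray) -> np.ndarray:
    """masks: (n_readers, H, W) binary stack -> per-pixel agreement fraction in [0, 1]."""
    return masks.mean(axis=0)

def disagreement_map(masks: np.ndarray) -> np.ndarray:
    """Binary entropy of the agreement: 0 where readers agree, maximal at a 50/50 split."""
    p = np.clip(soft_reference(masks), 1e-6, 1 - 1e-6)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

# e.g., six expert masks per mid-gland slice, as in QUBIQ21 (placeholder random data here)
expert_masks = np.random.default_rng(0).integers(0, 2, size=(6, 64, 64)).astype(float)
uncertainty = disagreement_map(expert_masks)
```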

The recently released Prostate158 challenge aims to segment the PZ and TZ of the prostate in addition to PCa lesions. It provides 139 cases for training and validation of AI models and uses a hidden test dataset of 19 cases for performance evaluation.

The PROSTATEx and PI-CAI challenges aim to evaluate the performance of AI models for csPCa detection and classification. Launched in 2014, the PROSTATEx challenge has been the only public benchmark for this task to date. More than 1,765 entries have been submitted, with the maximum area under the receiver operating characteristic curve currently at 0.95. Meanwhile, the PI-CAI challenge provides the largest training (n ≈ 9,000, of which 1,500 cases will be made public), validation (n ≈ 100), and testing (n ≈ 1,000) datasets to date, with a study design and reference standard established in conjunction with multidisciplinary radiology, urology, and AI experts in the domain. PI-CAI also includes an international reader study with, to date, 63 radiologists (42 centers, 18 countries; 1–23 years of experience reading prostate MRI, median 9 years) to assess the clinical viability of stand-alone AI relative to radiologists.
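For clarity on the headline metric, the toy snippet below computes the case-level area under the receiver operating characteristic curve with scikit-learn; the labels and scores are invented for illustration and unrelated to any actual challenge submission.

```python
# Toy AUC calculation for csPCa classification scores.
from sklearn.metrics import roc_auc_score

# 1 = clinically significant PCa; scores are hypothetical AI-predicted likelihoods.
y_true  = [0, 0, 1, 1, 0, 1, 0, 1]
y_score = [0.10, 0.70, 0.80, 0.65, 0.20, 0.90, 0.35, 0.60]
print(roc_auc_score(y_true, y_score))   # 0.875 for this toy data; ~0.95 tops the PROSTATEx board
```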

Discussion

AI is starting to take on an assistive role in the PCa clinical pathway. The advent of deep learning for medical imaging makes it possible to realize stand-alone AI that achieves good to expert-level performance in segmentation-based volume prediction and csPCa detection [14, 15, 45]. Deep learning AI models are being incorporated into products whose human-interface software aims to improve workflow and reduce variability in diagnostic performance [45, 47, 60, 71]. Moreover, these AI diagnostic models can be used before, during, and after radiation therapy: segmentation models can be used for organ delineation in the planning phase and for prostate-targeted MRI-guided radiotherapy [72], while detection models can, for example, be used to monitor the response of a lesion during and after treatment [73]. Similar developments have already been seen in other medical imaging domains such as breast [74] and lung [75] imaging. The fact that prostate MRI data have only recently become available explains the rather late development of prostate MRI AI. Development is further complicated because prostate MRI is intrinsically multiparametric, with markedly different appearances across the image parameters.

Other complications include the presence of image artifacts [76] and the lack of standardized MR image acquisition, although minimum requirements for PI-RADS reading exist [7]. An important role for AI may therefore also lie in image preprocessing and quality control [77,78,79,80]. Finally, prostate MRI lacks a well-defined reference standard, with definitions of cancer significance varying widely. The large-scale collection and curation of data driven by AI will help the field develop further. In turn, AI can help prostate MRI offer a better perspective to men with PCa by reducing unnecessary biopsies and overtreatment, enabling early detection with less patient burden, and increasing survival.

We have attempted to provide an overview of the current state-of-the-art of AI applications for prostate MRI. Unlike other review papers [16, 42, 62] that focus on AI tools that have been developed, this review focuses on open datasets, commercially/publicly available AI, and grand challenges. However, since this is a rapidly growing field, a limitation of this review is that it will become outdated in a relatively short period of time, just like the review papers before it. Therefore, up-to-date reviews of this field are constantly needed.

In conclusion, available prostate MRI AI products are relatively few, with only one validated for assisting in the difficult detection task and the others addressing the simpler gland volume estimation task. AI prediction of other clinical outcomes in the prostate cancer pathway is still maturing or has yet to begin. Substantial research is still required before AI can successfully support the whole prostate pathway. Public, well-curated datasets are available but are relatively small and vary in the quality of their reference standard. More computational AI challenges are needed to provide independent validation of products and research and to build trust in AI for prostate MRI.

Availability of data and materials

Not applicable.

Abbreviations

AI: Artificial intelligence

csPCa: Clinically significant prostate cancer

DICOM: Digital Imaging and COmmunications in Medicine

mpMRI: Multiparametric magnetic resonance imaging

MSD: Medical Segmentation Decathlon

PCa: Prostate cancer

PI-RADS: Prostate Imaging Reporting and Data System

PSA: Prostate-specific antigen

PZ: Peripheral zone

TZ: Transitional zone

References

  1. Sung H, Ferlay J, Siegel RL et al (2021) Global Cancer Statistics 2020: GLOBOCAN Estimates of Incidence and Mortality Worldwide for 36 Cancers in 185 Countries. CA Cancer J Clin 71:209–249. https://doi.org/10.3322/caac.21660

  2. Welch HG, Albertsen PC (2020) Reconsidering prostate cancer mortality - the future of PSA screening. N Engl J Med 382:1557–1563. https://doi.org/10.1056/NEJMms1914228

  3. Mottet N, Bellmunt J, Bolla M et al (2017) EAU-ESTRO-SIOG Guidelines on Prostate Cancer. Part 1: Screening, Diagnosis, and Local Treatment with Curative Intent. Eur Urol 71:618–629. https://doi.org/10.1016/j.eururo.2016.08.003

  4. Rouviere O, Puech P, Renard-Penna R et al (2019) Use of prostate systematic and targeted biopsy on the basis of multiparametric MRI in biopsy-naive patients (MRI-FIRST): a prospective, multicentre, paired diagnostic study. Lancet Oncol 20:100–109. https://doi.org/10.1016/S1470-2045(18)30569-2

  5. Kasivisvanathan V, Rannikko AS, Borghi M et al (2018) MRI-targeted or standard biopsy for prostate-cancer diagnosis. N Engl J Med 378:1767–1777. https://doi.org/10.1056/NEJMoa1801993

  6. Eklund M, Jaderling F, Discacciati A et al (2021) MRI-targeted or standard biopsy in prostate cancer screening. N Engl J Med 385:908–920. https://doi.org/10.1056/NEJMoa2100852

  7. Weinreb JC, Barentsz JO, Choyke PL et al (2016) PI-RADS prostate imaging - reporting and data system: 2015, Version 2. Eur Urol 69:16–40. https://doi.org/10.1016/j.eururo.2015.08.052

  8. Litjens G, Debats O, Barentsz J, Karssemeijer N, Huisman H (2014) Computer-aided detection of prostate cancer in MRI. IEEE Trans Med Imaging 33:1083–1092. https://doi.org/10.1109/TMI.2014.2303821

  9. Rosenkrantz AB, Ayoola A, Hoffman D et al (2017) The learning curve in prostate MRI interpretation: self-directed learning versus continual reader feedback. AJR Am J Roentgenol 208:W92–W100. https://doi.org/10.2214/AJR.16.16876

  10. Gatti M, Faletti R, Calleris G et al (2019) Prostate cancer detection with biparametric magnetic resonance imaging (bpMRI) by readers with different experience: performance and comparison with multiparametric (mpMRI). Abdom Radiol 44:1883–1893. https://doi.org/10.1007/s00261-019-01934-3

  11. Greer MD, Shih JH, Lay N et al (2019) Interreader variability of prostate imaging reporting and data system version 2 in detecting and assessing prostate cancer lesions at prostate MRI. AJR Am J Roentgenol 212:1197–1204. https://doi.org/10.2214/AJR.18.20536

  12. Cao RM, Bajgiran AM, Mirak SA et al (2019) Joint prostate cancer detection and gleason score prediction in mp-MRI via FocalNet. IEEE Trans Med Imaging 38:2496–2506. https://doi.org/10.1109/TMI.2019.2901928

  13. Winkel DJ, Wetterauer C, Matthias MO et al (2020) Autonomous detection and classification of PI-RADS lesions in an MRI screening population incorporating multicenter-labeled deep learning and biparametric imaging: proof of concept. Diagnostics (Basel) 10:951. https://doi.org/10.3390/diagnostics10110951

  14. Saha A, Hosseinzadeh M, Huisman H (2021) End-to-end prostate cancer detection in bpMRI via 3D CNNs: effects of attention mechanisms, clinical priori and decoupled false positive reduction. Med Image Anal 73:102155. https://doi.org/10.1016/j.media.2021.102155

  15. Hosseinzadeh M, Saha A, Brand P, Slootweg I, de Rooij M, Huisman H (2022) Deep learning-assisted prostate cancer detection on bi-parametric MRI: minimum training data size requirements and effect of prior knowledge. Eur Radiol 32:2224–2234. https://doi.org/10.1007/s00330-021-08320-y

  16. Li HY, Lee CH, Chia D, Lin ZP, Huang WM, Tan CH (2022) Machine learning in prostate MRI for prostate cancer: current status and future opportunities. Diagnostics (Basel) 12:289. https://doi.org/10.3390/diagnostics12020289

  17. Quantification of Uncertainties in Biomedical Image Quantification Challenge (2021). Grand Challenge. Available via https://qubiq21.grand-challenge.org. Accessed 30 May 2022

  18. Litjens G, Debats O, Barentsz J, Karssemeijer N, Huisman H (2017) Prostatex challenge data. The Cancer Imaging Archive. https://wiki.cancerimagingarchive.net/pages/viewpage.action?pageId=23691656. Accessed 30 May 2022. https://doi.org/10.7937/K9TCIA.2017.MURS5CL

  19. Cuocolo R, Stanzione A, Castaldo A, De Lucia DR, Imbriaco M (2021) PROSTATEx masks. GitHub. https://github.com/rcuocolo/PROSTATEx_masks. Accessed 30 May 2022

20. Choyke P, Turkbey B, Pinto P, Merino M, Wood B (2016) Data From PROSTATE-MRI. The Cancer Imaging Archive. https://wiki.cancerimagingarchive.net/display/Public/PROSTATE-MRI. Accessed 30 May 2022. https://doi.org/10.7937/K9/TCIA.2016.6046GUDv

  21. Madabhushi A, Feldman M (2016) Fused radiology-pathology prostate dataset. The Cancer Imaging Archive. https://wiki.cancerimagingarchive.net/display/Public/Prostate+Fused-MRI-Pathology. Accessed 30 May 2022. https://doi.org/10.7937/K9/TCIA.2016.TLPMR1AM

  22. Litjens G, Futterer J, Huisman H (2015) Data From Prostate-3T. The Cancer Imaging Archive. https://wiki.cancerimagingarchive.net/display/Public/Prostate-3T. Accessed 30 May 2022. https://doi.org/10.7937/K9/TCIA.2015.QJTV5IL5

  23. Lemaitre G, Marti R, Meriaudeau F (2016) Original multi-parametric MRI images of prostate. Zenodo. https://zenodo.org/record/162231#.WAkvVrXPGPR. Accessed 30 May 2022. https://doi.org/10.5281/zenodo.162231

  24. Zuley ML, Jarosz R, Drake BF et al (2016) Radiology data from the Cancer Genome Atlas Prostate Adenocarcinoma [TCGA-PRAD] collection. The Cancer Imaging Archive. https://wiki.cancerimagingarchive.net/display/Public/TCGA-LGG. Accessed 30 May 2022. https://doi.org/10.7937/K9/TCIA.2016.L4LTD3TK

  25. Bloch BN, Jain A, Jaffe CC (2015) Data From PROSTATE-DIAGNOSIS. The Cancer Imaging Archive. https://wiki.cancerimagingarchive.net/display/Public/PROSTATE-DIAGNOSIS. Accessed 30 May 2022. https://doi.org/10.7937/K9/TCIA.2015.FOQEUJVT

  26. Natarajan S, Priester A, Margolis D, Huang J, Marks L (2020) Prostate MRI and ultrasound with pathology and coordinates of tracked biopsy (Prostate-MRI-US-Biopsy). The Cancer Imaging Archive. https://wiki.cancerimagingarchive.net/pages/viewpage.action?pageId=68550661. Accessed 30 May 2022. https://doi.org/10.7937/TCIA.2020.A61IOC1A

  27. Fedorov A, Tempany C, Mulkern R, Fennessy F (2016) Data From QIN PROSTATE. The Cancer Imaging Archive. https://wiki.cancerimagingarchive.net/display/Public/QIN+PROSTATE#18022465195f72c6038d41268310e290a3e6d5e0. Accessed 30 May 2022. https://doi.org/10.7937/K9/TCIA.2016.fADs26kG

  28. Fedorov A, Schwier M, Clunie D et al (2018) Data From QIN-PROSTATE-Repeatability. The Cancer Imaging Archive. https://wiki.cancerimagingarchive.net/display/Public/QIN-PROSTATE-Repeatability. Accessed 30 May 2022. https://doi.org/10.7937/K9/TCIA.2018.MR1CKGND

  29. Armato SG 3rd, Huisman H, Drukker K et al (2018) PROSTATEx Challenges for computerized classification of prostate lesions from multiparametric magnetic resonance images. J Med Imaging (Bellingham) 5:044501. https://doi.org/10.1117/1.JMI.5.4.044501

  30. Bloch N, Madabhushi A, Huisman H et al (2015) NCI-ISBI 2013 challenge: automated segmentation of prostate structures. The Cancer Imaging Archive. https://wiki.cancerimagingarchive.net/pages/viewpage.action?pageId=21267207#21267207036220c66a5a436f90e4a0b54367bfae. Accessed 30 May 2022. https://doi.org/10.7937/K9/TCIA.2015.zF0vlOPv

  31. NCI-ISBI 2013 Challenge - Automated Segmentation of Prostate Structures (2015). The Cancer Imaging Archive. Available via https://wiki.cancerimagingarchive.net/display/Public/NCI-ISBI+2013+Challenge+-+Automated+Segmentation+of+Prostate+Structures. Accessed 30 May 2022

  32. PROMISE12 Grand Challenge (2012). Grand Challenge. Available via https://promise12.grand-challenge.org/Download. Accessed 30 May 2022

  33. Litjens G, Toth R, van de Ven W et al (2014) Evaluation of prostate segmentation algorithms for MRI: the PROMISE12 challenge. Med Image Anal 18:359–373. https://doi.org/10.1016/j.media.2013.12.002

34. Antonelli M, Reinke A, Bakas S et al (2021) The Medical Segmentation Decathlon. arXiv arXiv:2106.05735. https://doi.org/10.48550/ARXIV.2106.05735

  35. Simpson AL, Antonelli M, Bakas S et al (2019) A large annotated medical image dataset for the development and evaluation of segmentation algorithms. The Medical Segmentation Decathlon http://medicaldecathlon.com. Accessed 30 May 2022

  36. Jolesz F (2009) NCIGT_PROSTATE. XNAT https://central.xnat.org/data/projects/NCIGT_PROSTATE. Accessed 30 May 2022

  37. The PI-CAI Challenge (2020). Grand Challenge. Available via https://pi-cai.grand-challenge.org. Accessed 30 May 2022

  38. Saha A, Twilt JJ, Bosma JS et al (2022) Artificial Intelligence and Radiologists at Prostate Cancer Detection in MRI: The PI-CAI Challenge. Zenodo. https://zenodo.org/record/6517398#.YnaULOhByF5. Accessed 30 May 2022. https://doi.org/10.5281/zenodo.6522364

  39. Prostate158 Challenge (2022) Grand Challenge. Available via https://prostate158.grand-challenge.org/data. Accessed 2 June 2022. https://doi.org/10.5281/zenodo.6481141

  40. Aryanto KY, Oudkerk M, van Ooijen PM (2015) Free DICOM de-identification tools in clinical research: functioning and safety of patient privacy. Eur Radiol 25:3685–3695. https://doi.org/10.1007/s00330-015-3794-0

  41. Kirby J, Smith K (2022) Submission and De-identification Overview. The Cancer Imaging Archive Available via https://wiki.cancerimagingarchive.net/display/Public/Submission%2Band%2BDe-identification%2BOverview. Accessed 30 May 2022

  42. Twilt JJ, van Leeuwen KG, Huisman HJ, Futterer JJ, de Rooij M (2021) Artificial Intelligence Based Algorithms for Prostate Cancer Classification and Detection on Magnetic Resonance Imaging: A Narrative Review. Diagnostics (Basel) 11:959. https://doi.org/10.3390/diagnostics11060959

  43. Isensee F, Jaeger PF, Kohl SAA, Petersen J, Maier-Hein KH (2021) nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. Nat Methods 18:203–211. https://doi.org/10.1038/s41592-020-01008-z

  44. Calisto MB, Lai-Yuen SK (2021) EMONAS-Net: Efficient multiobjective neural architecture search using surrogate-assisted evolutionary algorithm for 3D medical image segmentation. Artif Intell Med 119:102154. https://doi.org/10.1016/j.artmed.2021.102154

  45. Mehralivand S, Yang D, Harmon SA et al (2021) A cascaded deep learning-based artificial intelligence algorithm for automated lesion detection and classification on biparametric prostate magnetic resonance imaging. Acad Radiol 28:S1076-6332(21)00377-9. https://doi.org/10.1016/j.acra.2021.08.019

  46. Pellicer-Valero OJ, Marenco Jimenez JL, Gonzalez-Perez V et al (2022) Deep learning for fully automatic detection, segmentation, and Gleason grade estimation of prostate cancer in multiparametric magnetic resonance images. Sci Rep 12:2975. https://doi.org/10.1038/s41598-022-06730-6

  47. Vente C, Vos P, Hosseinzadeh M, Pluim J, Veta M (2021) Deep learning regression for prostate cancer detection and grading in bi-parametric MRI. IEEE Trans Biomed Eng 68:374–383. https://doi.org/10.1109/TBME.2020.2993528

  48. Bosma JS, Saha A, Hosseinzadeh M, Slootweg I, de Rooij M, Huisman H (2021) Annotation-efficient cancer detection with report-guided lesion annotation for deep learning-based prostate cancer detection in bpMRI. arXiv arXiv:2112.05151. https://doi.org/10.48550/arXiv.2112.05151

  49. ProCAncer-I: An AI Platform integrating imaging data and models, supporting precision care through prostate cancer’s continuum. Available via https://www.procancer-i.eu. Accessed 30 May 2022.

  50. Haas S, Wohlgemuth S, Echizen I, Sonehara N, Muller G (2011) Aspects of privacy for electronic health records. Int J Med Inform 80:e26–e31. https://doi.org/10.1016/j.ijmedinf.2010.10.001

  51. Phillips M (2018) International data-sharing norms: from the OECD to the General Data Protection Regulation (GDPR). Hum Genet 137:575–582. https://doi.org/10.1007/s00439-018-1919-7

52. OPTIMA: Tackling Cancer through Real World Data and Artificial Intelligence. Available via https://www.optima-oncology.eu. Accessed 30 May 2022

  53. Brendan McMahan H, Moore E, Ramage D, Hampson S, Agüera y Arcas B (2016) Communication-efficient learning of deep networks from decentralized data. arXiv arXiv:1602.05629. https://doi.org/10.48550/arXiv.1602.05629

  54. Kairouz P, McMahan HB, Avent B et al (2021) Advances and open problems in federated learning. Found Trends Machine Learn 14:1–210. https://doi.org/10.1561/2200000083

  55. Flower: A friendly federated learning framework. Available via https://flower.dev. Accessed 30 May 2022

  56. FedML: The Federated Learning/Analytics and Edge AI Platform. Available via https://fedml.ai. Accessed 30 May 2022

  57. Syft+Grid: Code for computing on data you do not own and cannot see. GitHub. Available via https://github.com/OpenMined/PySyft. Accessed 30 May 2022

  58. Sarma KV, Harmon S, Sanford T et al (2021) Federated learning improves site performance in multicenter deep learning without data sharing. J Am Med Inform Assoc 28:1259–1264. https://doi.org/10.1093/jamia/ocaa341

  59. AI for Radiology: an implementation guide. Grand Challenge. Available via https://grand-challenge.org/aiforradiology/?subspeciality=All&modality=All&ce_under=All&ce_%20class=All&fda_class=All&sort_by=ce+certification&search=prostate. Accessed 30 May 2022

  60. Winkel DJ, Tong A, Lou B et al (2021) A novel deep learning based computer-aided diagnosis system improves the accuracy and efficiency of radiologists in reading biparametric magnetic resonance images of the prostate: results of a multireader, multicase study. Invest Radiol 56:605–613. https://doi.org/10.1097/RLI.0000000000000780

  61. van Leeuwen KG, Schalekamp S, Rutten MJCM, van Ginneken B, de Rooij M (2021) Artificial intelligence in radiology: 100 commercially available products and their scientific evidence. Eur Radiol 31:3797–3804. https://doi.org/10.1007/s00330-021-07892-z

  62. Castillo TJMC, Arif M, Niessen WJ, Schoots IG, Veenland JF (2020) Automated classification of significant prostate cancer on MRI: a systematic review on the performance of machine learning applications. Cancers (Basel) 12:1606. https://doi.org/10.3390/cancers12061606

  63. Tataru OS, Vartolomei MD, Rassweiler JJ et al (2021) Artificial intelligence and machine learning in prostate cancer patient management-current trends and future perspectives. Diagnostics (Basel) 11:354. https://doi.org/10.3390/diagnostics11020354

  64. NVIDIA Clara Imaging. NVIDIA. Available via https://developer.nvidia.com/clara-medical-imaging. Accessed 30 May 2022

  65. Grand Challenge Algorithms. Grand Challenge. Available via https://grand-challenge.org/algorithms. Accessed 30 May 2022

  66. FeTS: Federated Tumor Segmentation. GitHub. Available via https://fets-ai.github.io/Front-End. Accessed 30 May 2022

  67. Montagne S, Hamzaoui D, Allera A et al (2021) Challenge of prostate MRI segmentation on T2-weighted images: inter-observer variability and impact of prostate morphology. Insights Imaging 12:71. https://doi.org/10.1186/s13244-021-01010-9

  68. Zheng B, Liu Y, Zhu Y et al (2020) MSD-Net: multi-scale discriminative network for COVID-19 lung infection segmentation on CT. IEEE Access 8:185786–185795. https://doi.org/10.1109/ACCESS.2020.3027738

  69. McCrindle B, Zukotynski K, Doyle TE, Noseworthy MD (2021) A Radiology-focused Review of Predictive Uncertainty for AI Interpretability in Computer-assisted Segmentation. Radiol Artif Intell 3:e210031. https://doi.org/10.1148/ryai.2021210031

  70. Hu S, Worrall D, Knegt S, Veeling B, Huisman H, Welling M (2019) Supervised uncertainty quantification for segmentation with multiple annotations. arXiv arXiv:1907.01949. https://doi.org/10.48550/arXiv.1907.01949

  71. Schelb P, Wang X, Radtke JP et al (2021) Simulated clinical deployment of fully automatic deep learning for clinical prostate MRI assessment. Eur Radiol 31:302–313. https://doi.org/10.1007/s00330-020-07086-z

  72. Salembier C, Villeirs G, De Bari B et al (2018) ESTRO ACROP consensus guideline on CT- and MRI-based target volume delineation for primary radiation therapy of localized prostate cancer. Radiother Oncol 127:49–61. https://doi.org/10.1016/j.radonc.2018.01.014

  73. Wang YF, Tadimalla S, Hayden AJ, Holloway L, Haworth A (2021) Artificial intelligence and imaging biomarkers for prostate radiation therapy during and after treatment. J Med Imaging Radiat Oncol 65:612–626. https://doi.org/10.1111/1754-9485.13242

  74. Salim M, Wahlin E, Dembrower K et al (2020) External evaluation of 3 commercial artificial intelligence algorithms for independent assessment of screening mammograms. JAMA Oncol 6:1581–1588. https://doi.org/10.1001/jamaoncol.2020.3321

  75. Venkadesh KV, Setio AAA, Schreuder A et al (2021) Deep learning for malignancy risk estimation of pulmonary nodules detected at low-dose screening CT. Radiology 300:438–447. https://doi.org/10.1148/radiol.2021204433

  76. Hoeks CMA, Barentsz JO, Hambrock T et al (2011) Prostate cancer: multiparametric MR imaging for detection, localization, and staging. Radiology 261:46–66. https://doi.org/10.1148/radiol.11091822

77. Giganti F, Allen C, Emberton M, Moore CM, Kasivisvanathan V; PRECISION study group (2020) Prostate imaging quality (PI-QUAL): a new quality control scoring system for multiparametric magnetic resonance imaging of the prostate from the PRECISION trial. Eur Urol Oncol 3:615–619. https://doi.org/10.1016/j.euo.2020.06.007

  78. Sorland KI, Sunoqrot MRS, Sandsmark E et al (2022) Pseudo-T2 mapping for normalization of T2-weighted prostate MRI. Magn Reson Mater Phy. https://doi.org/10.1007/s10334-022-01003-9

  79. Sunoqrot MRS, Nketiah GA, Selnaes KM, Bathen TF, Elschot M (2021) Automated reference tissue normalization of T2-weighted MR images of the prostate using object recognition. Magn Reson Mater Phy 34:309–321. https://doi.org/10.1007/s10334-020-00871-3

  80. Sunoqrot MRS, Selnaes KM, Sandsmark E et al (2020) A quality control system for automated prostate segmentation on T2-weighted MRI. Diagnostics (Basel) 10:714. https://doi.org/10.3390/diagnostics10090714

Funding

This work was financed by the Research Council of Norway (Grant Number 295013), the Norwegian Cancer Society and Prostatakreftforeningen (Grant Number 215951), the Liaison Committee between the Central Norway Regional Health Authority and the Norwegian University of Science and Technology (Grant Numbers 90265300 and 90793700), EU H2020 ProCAncer-I (Grant Number 952159), EU H2020 PANCAIM (Grant Number 101016851), and EU IMI2 PIONEER (Grant Number 777492). Open access funding provided by Norwegian University of Science and Technology.

Author information

Contributions

All authors drafted and reviewed the manuscript. The authors read and approved the final manuscript.

Corresponding author

Correspondence to Mohammed R. S. Sunoqrot.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

Henkjan Huisman declares receiving research support from Siemens Healthineers; the other authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This article belongs to the thematic series entitled “AI tools for prostate MRI and PET/CT: from data science to clinical value” guest editors: Rossano Girometti (Udine, Italy), Renato Cuocolo (Baronissi, Italy), and Andrey Fedorov (Boston, MA, USA).

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Sunoqrot, M.R.S., Saha, A., Hosseinzadeh, M. et al. Artificial intelligence for prostate MRI: open datasets, available applications, and grand challenges. Eur Radiol Exp 6, 35 (2022). https://doi.org/10.1186/s41747-022-00288-8
