  • Narrative review
  • Open access

Empowering PET: harnessing deep learning for improved clinical insight

Abstract

This review examines the transformative impact of artificial intelligence (AI) on positron emission tomography (PET) imaging. To this end, we present a broad overview of AI applications in the field of nuclear medicine and a thorough exploration of deep learning (DL) implementations in cancer diagnosis and therapy through PET imaging. We first describe the behind-the-scenes use of AI for image generation, including acquisition (event positioning, noise reduction through time-of-flight estimation, and scatter correction), reconstruction (data-driven and model-driven approaches), restoration (supervised and unsupervised methods), and motion correction. We then outline the integration of AI into clinical practice through applications to segmentation, detection and classification, quantification, treatment planning, dosimetry, and radiomics/radiogenomics combined with tumour biological characteristics. The review thus seeks to showcase the overarching transformation of the field, ultimately leading to tangible improvements in patient treatment and response assessment. Finally, we briefly discuss the limitations and ethical considerations of applying AI to PET imaging and future directions for multimodal data mining in the discipline, including pressing challenges to the adoption of AI in molecular imaging such as access to and interoperability of huge amounts of data, as well as the “black-box” problem, contributing to the ongoing dialogue on the transformative potential of AI in nuclear medicine.

Relevance statement

AI is rapidly revolutionising the world of medicine, including the fields of radiology and nuclear medicine. In the near future, AI will be used to support healthcare professionals. These advances will lead to improvements in diagnosis, in the assessment of response to treatment, in clinical decision making and in patient management.

Key points

• Applying AI has the potential to enhance the entire PET imaging pipeline.

• AI may support several clinical tasks in both PET diagnosis and prognosis.

• Interpreting the relationships between imaging and multiomics data will heavily rely on AI.

Graphical Abstract

Background

Over the past decade, there has been significant progress in nuclear medicine imaging techniques, boosted by remarkable technological advances. Nuclear medicine physicians now have access to high-dimensional and multimodal images, along with a huge amount of quantitative data on the biological and genetic characteristics of tumours. In the era of precision medicine, the ability to improve image quality, harness these data, and extract meaningful information is a contemporary challenge in the oncological field. In this context, the implementation of artificial intelligence (AI) techniques is leading to significant breakthroughs, fundamentally transforming how clinicians approach medical diagnosis and patient care [1].

The integration of AI and positron emission tomography (PET) has clearly pushed the boundaries of these functional techniques. The growing synergy between AI and PET imaging promises to revolutionise diagnosis, improve accuracy, and expand clinical utility. More than other AI techniques, deep learning (DL) has emerged as a key player, finding applications in a wide range of tasks, including image acquisition and quality improvement as well as clinical activities (Fig. 1) [2]. Nevertheless, it is crucial to acknowledge that AI encompasses a broader spectrum of techniques beyond DL, such as classical machine learning, reinforcement learning, and hybrid AI models, although these currently remain niche applications within the field [3, 4].

Fig. 1

Key applications of artificial intelligence in nuclear medicine. Reprinted from JNM [5]

This review will present key AI applications within the PET imaging domain, including image generation, reconstruction, and restoration. A series of real-world clinical applications will be showcased, highlighting the tangible advantages that AI offers across various aspects of nuclear medicine and its capability to reshape the field. Finally, a critical discussion of the crucial issues related to data interoperability and the AI “black-box” problem will be provided, offering an overarching overview of both the potential and the limitations of AI within the field.

Image generation development

Image acquisition

The impact of AI algorithms begins in the PET scanner room, in the critical task of PET image acquisition, and includes event positioning, noise reduction through time-of-flight estimation and scatter correction. Optimising these three different aspects of PET image generation can significantly improve the overall image quality, with the tangible consequence of increased lesion detectability.

Event positioning

Historically, simple signal processing methods have been used to compute the timing pick-off for each detector waveform, using analogue pulse processing electronics and pixelated crystals [6]. With the advent of fast waveform digitisers, DL methods have become fundamental for improving gamma detection in scintillators, predicting the time-of-flight (TOF) of photons, and improving the coincidence time resolution. AI-driven techniques have also been critical for the clinical introduction of monolithic scintillator crystals, which are attracting interest in the nuclear imaging community due to their reduced production costs, good spatial resolution, and depth-of-interaction estimation. The adoption of real-time algorithms that bring the spatial resolution of monolithic-based PET detectors up to that of pixelated crystals remains a challenge. To solve complex non-linear tasks such as determining the photon interaction position, DL neural network algorithms have proven fundamental for event positioning and time-stamping in monolithic crystals, drawing attention to the real-time hardware implementation essential for their use in a complete PET scanner (Fig. 2) [7]. Thanks to these algorithms, monolithic-based devices now have the potential to attain performance beyond the state of the art [7].

Fig. 2

Deep learning-based event-positioning in positron emission tomography (PET) scanners with monolithic crystals. The three-dimensional coordinates of the scintillation position can be predicted by a multilayer perceptron neural network using the number of fired silicon photomultiplier (SiPMs) pixels and the total deposited energy. Reprinted with permission from Physica Medica [8]
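To make the scheme in Fig. 2 concrete, the following is a minimal sketch, assuming PyTorch, of a multilayer perceptron that regresses the three-dimensional scintillation position from the SiPM light distribution. The 8 × 8 SiPM array, the layer sizes, and the dummy data are illustrative assumptions, not the implementation of [7] or [8].

```python
# Minimal sketch: MLP regressing the 3D scintillation position in a monolithic
# crystal from the SiPM light distribution. Architecture, input size
# (8x8 SiPM array + total deposited energy), and data are assumptions.
import torch
import torch.nn as nn

class PositioningMLP(nn.Module):
    def __init__(self, n_sipm_pixels=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_sipm_pixels + 1, 128),  # SiPM counts + total energy
            nn.ReLU(),
            nn.Linear(128, 64),
            nn.ReLU(),
            nn.Linear(64, 3),                   # (x, y, z) interaction point
        )

    def forward(self, x):
        return self.net(x)

model = PositioningMLP()
sipm_and_energy = torch.rand(32, 65)                    # one batch of dummy events
xyz = model(sipm_and_energy)                            # predicted positions (32, 3)
loss = nn.functional.mse_loss(xyz, torch.rand(32, 3))   # vs. known positions
```

In practice, such a network would be trained on events with known interaction positions (from collimated-beam measurements or Monte Carlo simulation) before deployment.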

Noise reduction through time of flight estimation

The ability to add information about the position of the positron annihilation along the line of response (LOR) is an important asset in the reconstruction process, leading to an improvement in signal-to-noise ratio and lesion detectability. Deep convolutional neural networks (CNNs) have been implemented to estimate the TOF directly from the pair of digitised detector waveforms for a coincident event [6, 9]. CNNs can learn complex representations of the input data, making them suitable for TOF estimation from waveforms that are confounded by multiple complex random processes [10]. Since all the timing information is contained in the first few nanoseconds of the detector waveforms, the CNN algorithms operate on the rising edge of the signals, without the need to store the entire waveform for TOF estimation. So far, simulation studies have shown that, for digitisers with monolithic PET detectors, a superior time resolution can be achieved with a 3D CNN, reaching an improvement of 26% compared with the traditional method of leading-edge discrimination followed by averaging of the first few time-stamps [9].
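A minimal sketch of this idea follows, again assuming PyTorch: a small one-dimensional CNN regresses a scalar TOF estimate from the rising-edge samples of the two coincident waveforms, stacked as input channels. The 32-sample window and the architecture are assumptions, not the networks of [6, 9].

```python
# Minimal sketch: 1-D CNN regressing the TOF from the rising-edge samples of
# the two coincident detector waveforms (two input channels). The 32-sample
# window and the layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class TOFNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(32, 1)           # scalar TOF estimate (e.g., in ps)

    def forward(self, waveform_pair):          # shape (batch, 2, n_samples)
        z = self.features(waveform_pair).squeeze(-1)
        return self.head(z)

net = TOFNet()
pair = torch.rand(8, 2, 32)                    # dummy rising-edge waveform pairs
tof = net(pair)                                # TOF estimates, shape (8, 1)
```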

Scatter correction

Another source of uncertainty affecting image quality is randomly scattered photons, which add a low-frequency background and introduce serious artefacts. In this task, AI methods do not completely replace traditional approaches, such as physical models of photon scattering; rather, they represent an auxiliary means of finding function-mapping relationships, and they depend largely on the model structure, data range, and training process [11]. Neural network (NN) approaches have been developed to solve the issue of triple coincidences produced by photon scattering in LOR assessment [12]. The method computes the LOR within the coincidences by pre-processing the energy and position measurements, followed by NN discrimination. NN approaches showed a very good LOR recovery rate (75%), yielding an overall sensitivity increase of 55% (under real scanner conditions) by incorporating triple coincidences within a traditional 360–660 keV energy window and a single energy threshold of 125 keV. When compared to photopeak-only images, the method demonstrated acceptable, limited resolution degradation with little to no contrast loss [12].
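The discrimination step can be pictured with the following sketch: a small NN receives pre-processed energy and position features of the three photons of a triple coincidence and selects the most likely true LOR between the two candidates. The feature set and the network are hypothetical, not the model of [12].

```python
# Minimal sketch: NN discriminator choosing which of the two candidate LORs of
# a triple coincidence is the true one, from pre-processed energy and position
# features of the three detected photons. The feature set is an assumption.
import torch
import torch.nn as nn

discriminator = nn.Sequential(
    nn.Linear(9, 32),    # e.g., 3 photons x (energy, axial, transaxial position)
    nn.ReLU(),
    nn.Linear(32, 2),    # logits: candidate LOR A vs candidate LOR B
)

features = torch.rand(16, 9)                    # dummy triple-coincidence events
choice = discriminator(features).argmax(dim=1)  # recovered LOR index per event
```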

Image reconstruction

PET scanners do not generate data directly in image space; reconstruction algorithms are required to obtain a tomographic representation. This is an inverse problem that lacks an exact solution: only approximate solutions can be found with iterative algorithms, which are computationally expensive and may still include modelling errors in the forward operator. DL-based approaches have been used to overcome these limitations by replacing the uncertain user-defined variables of traditional methods with parameters learned directly from data.

Currently, there are two main AI approaches in PET image reconstruction [13]: data-driven and model-driven.

Data-driven approach

The NN learns to reconstruct the image directly from the projection data, encoding them into a latent feature space that is then decoded into the desired image. Convolutional encoder–decoder networks are typically used because of their capability to perform image-to-image translation tasks. The overall mapping is usually trained by supervised learning, so that the network accounts for noise and learns to infer the ground-truth objects used in the training phase. In this case, the input raw PET data are three-dimensional sets of measured sinograms that are mapped to the output three-dimensional images. At present, data-driven approaches are still impractical, as they require huge computational memory and training set sizes even for small reconstructions. Moreover, they generalise poorly to unseen data.
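A minimal sketch of the data-driven idea is given below, assuming PyTorch and, for brevity, two-dimensional inputs and outputs of matching size (real sinogram and image grids differ); the architecture is illustrative, not a published reconstruction network.

```python
# Minimal sketch: convolutional encoder-decoder mapping a (2-D, for brevity)
# sinogram to an image estimate, trained end to end against ground-truth
# phantoms. Shapes and architecture are illustrative assumptions.
import torch
import torch.nn as nn

class Sino2Image(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(            # sinogram -> latent features
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(            # latent features -> image
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, sinogram):
        return self.decoder(self.encoder(sinogram))

net = Sino2Image()
sino = torch.rand(4, 1, 128, 128)   # dummy (angle x radial bin) sinograms
img = net(sino)                      # image estimates, here of the same size
```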

Model-driven approach

This is the most promising approach, as it is physically informed. It integrates the existing state of the art of statistical iterative image reconstruction methods into a deep network. DL is involved in a cascade of successive reconstructions to provide rich, data-informed prior information to the iterative process, making repeated use of the raw data. The model-driven approach has a highly reduced need for training data, since the physics and statistics of PET data acquisition do not need to be learned from scratch.
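The unrolling idea can be conveyed with the toy sketch below: a few maximum-likelihood expectation-maximisation (MLEM) updates with a small learned network applied between iterations as a data-informed prior. The random system matrix, the image size, and the residual-denoiser design are assumptions, not the method of [13].

```python
# Minimal sketch of an unrolled, physics-informed reconstruction: MLEM updates
# interleaved with a small learned CNN acting as a data-informed prior. The
# toy system matrix A and all sizes are assumptions.
import torch
import torch.nn as nn

n_pix, n_bins, n_iters = 32 * 32, 48 * 48, 5
A = torch.rand(n_bins, n_pix) * 0.01            # toy forward projector

denoiser = nn.Sequential(                        # learned prior, shared across iterations
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 1, 3, padding=1),
)

def unrolled_recon(y):
    x = torch.ones(n_pix)                        # uniform initial image
    sens = A.sum(dim=0) + 1e-6                   # sensitivity image
    for _ in range(n_iters):
        ratio = y / (A @ x + 1e-6)               # measured / estimated counts
        x = x / sens * (A.t() @ ratio)           # classic MLEM update
        img = x.view(1, 1, 32, 32)
        x = (img + denoiser(img)).clamp(min=0).view(-1)  # learned residual prior
    return x.view(32, 32)

recon = unrolled_recon(torch.rand(n_bins))       # dummy sinogram -> image
```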

Image restoration

The noise generated by the randomness of physical processes (annihilation) and scattering is one of the primary factors in PET image degradation, limiting the detectability of lesions and leading to inaccurate diagnosis. Noise reduction is also tied to the effort to reduce the injected dose and limit radiation exposure, which, however, further degrades image quality. Noise suppression using iterative algorithms is not the best choice, as it might induce artefacts or poor-quality results. Instead, DL offers a highly advantageous approach, and several solutions use either supervised or unsupervised methods in image post-processing for denoising purposes [10].

Supervised method

It can be performed by training an NN to map low-quality images to high-quality ones, treating the prediction as a regression problem. Both simulations and experimental data can be used as training targets, while the corresponding low-quality measured data can be obtained by introducing artificial noise prior to reconstruction. The relationships between the low-quality and high-quality images are then learned by the AI model, sometimes using anatomical priors from computed tomography (CT) or magnetic resonance imaging (MRI) as an additional network input channel to improve PET image quality. CNNs have been used to predict full-dose PET images from PET/MRI or PET/CT images acquired at a quarter or less of a full dose (Fig. 3) [14, 15]. In a more recent study, Kaplan et al. [16] proposed a CNN method that further reduces the dose to one-tenth of the full dose, improving the preservation of edges and structural details by including them in the loss function during training.
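A minimal sketch of such a supervised scheme follows, in the spirit of, but not reproducing, [14, 16]: a small CNN maps a low-dose PET slice, with an MRI prior as a second input channel, to a full-dose target, and an edge-preserving gradient term is added to the loss. All shapes, weights, and data are assumptions.

```python
# Minimal sketch: supervised low-dose -> full-dose mapping with an anatomical
# MRI prior as a second input channel and an edge-preserving term in the loss.
# Architecture, loss weight, and data are illustrative assumptions.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),   # ch 0: low-dose PET, ch 1: MRI
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)

def edge_loss(pred, target):
    # penalise differences of spatial gradients to preserve edges
    dx = lambda t: t[..., :, 1:] - t[..., :, :-1]
    dy = lambda t: t[..., 1:, :] - t[..., :-1, :]
    return ((dx(pred) - dx(target)).abs().mean()
            + (dy(pred) - dy(target)).abs().mean())

low_dose_and_mri = torch.rand(4, 2, 128, 128)    # dummy paired training batch
full_dose = torch.rand(4, 1, 128, 128)
pred = net(low_dose_and_mri)
loss = nn.functional.l1_loss(pred, full_dose) + 0.5 * edge_loss(pred, full_dose)
```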

Fig. 3

Example of image enhancement using a deep learning model of low-dose [18F]Florbetaben PET images of a patient with Alzheimer disease. The images demonstrate the superiority of including magnetic resonance imaging (MRI) data in the model over PET data alone. Reprinted with permission from Radiology [14]. PET Positron emission tomography

Unsupervised method

This approach is more commonly used to perform general tasks such as denoising, super-resolution, and inpainting. A randomly initialised CNN can itself serve as a prior for image restoration by treating the low-quality image as the training label, and it has been shown that training can be stopped at a point where the network has learned the signal but not yet the noise [17]. Where possible, the random input can be replaced with a prior image containing additional information, such as the CT or MR image for hybrid PET/CT or PET/MRI denoising. Besides serving as a post-processing tool for image restoration, DL can also be incorporated into the iterative image reconstruction procedure as a replacement for traditional regularisation schemes [18]. With this approach, the network is trained to generate the image estimate at each updating step from a prior image and to perform a denoising step between updates, ensuring higher data consistency in the final denoised image.
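A minimal sketch of the deep-image-prior idea [17] follows: a randomly initialised CNN is fitted to the noisy image itself, and training is stopped early, before the noise is reproduced. The network, the fixed iteration budget, and the dummy data are assumptions.

```python
# Minimal sketch of the deep image prior: fit a randomly initialised CNN to
# the noisy image itself and stop early, before the noise is learned. The
# network size, input, and iteration budget are assumptions.
import torch
import torch.nn as nn

noisy = torch.rand(1, 1, 128, 128)     # noisy PET slice used as training label
z = torch.randn(1, 1, 128, 128)        # fixed random input (or a CT/MR prior)

net = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(500):                # early stopping: few iterations only
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(z), noisy)
    loss.backward()
    opt.step()

denoised = net(z).detach()             # output approximates signal, not noise
```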

Motion correction

Prolonged PET image acquisition times lead to unavoidable respiratory motion in organs and lesions, causing blurring that degrades image quality and reduces spatial resolution. Accurate assessment of areas affected by both respiratory and cardiac motion is also challenging due to the inherent limitations of PET spatial resolution. This is critical in clinical contexts such as the detection of small lung nodules, the quantification of myocardial blood flow, and the assessment of subtle changes in myocardial signal intensity (i.e., suspected endocarditis or aortic root complications after vascular graft surgery) [19,20,21,22]. Data-driven gated approaches offer a viable solution by modelling and compensating for these typical cardiac and respiratory motions, thereby enabling improved image reconstruction [23, 24].
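As a toy illustration of a data-driven gating signal, the sketch below tracks the axial centre of mass of counts across short time frames and bins the frames into amplitude gates; the frame data, the number of gates, and the binning rule are assumptions, not the methods of [23, 24].

```python
# Minimal sketch: data-driven respiratory signal from the axial centre of mass
# of counts over short time frames, followed by amplitude-based gating.
# Frame data and the number of gates are assumptions.
import numpy as np

frames = np.random.rand(200, 32, 32, 32)           # dummy (time, z, y, x) counts
z = np.arange(frames.shape[1])[None, :, None, None]
trace = (frames * z).sum(axis=(1, 2, 3)) / frames.sum(axis=(1, 2, 3))

n_gates = 8                                        # amplitude-based gating
edges = np.quantile(trace, np.linspace(0, 1, n_gates + 1))
gate_of_frame = np.clip(np.digitize(trace, edges) - 1, 0, n_gates - 1)
# frames sharing a gate index are reconstructed together, freezing the motion
```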

Shi et al. [25] developed a DL-based automated motion correction for dynamic cardiac [82Rb]PET. They achieved superior performance in terms of motion estimation and myocardial blood flow quantification accuracy compared to conventional registration-based methods.

Clinical applications

Segmentation

Image pre-processing techniques based on region-of-interest (ROI) segmentation are a critical step for several clinical tasks, including quantitative analysis, treatment planning, response assessment, lesion classification, and advanced image analysis (i.e., radiomics). Several approaches have been developed to perform semi-automated segmentation, which still requires manual adjustment of the segmented regions, and fully automated ROI segmentation. AI will progressively replace the practice of manual segmentation, which is typically time-consuming and suffers from intra- and inter-reader variability and low reproducibility. The most common segmentation algorithms include (1) threshold-based algorithms, which apply a fixed fraction or percentage of tracer uptake to define the target; (2) gradient-based algorithms, which distinguish areas of high uptake from those of low uptake, enabling accurate delineation of inhomogeneous targets; (3) region-growing-based algorithms, which identify a seed region within the target and progressively include neighbouring voxels that meet certain similarity criteria; (4) algorithms based on statistical analysis; and (5) AI-based algorithms using DL [26].
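As a concrete example of approach (1), the sketch below applies a fixed-fraction threshold (41% of SUVmax is a common choice, assumed here) and keeps the connected component containing the hottest voxel; the dummy volume and the SciPy-based implementation are assumptions.

```python
# Minimal sketch of threshold-based segmentation: keep voxels above a fixed
# fraction of SUVmax that are connected to the hottest voxel. The dummy SUV
# volume and the 41% cut-off are assumptions.
import numpy as np
from scipy import ndimage

suv = np.random.rand(64, 64, 64) * 10              # dummy SUV volume
mask = suv >= 0.41 * suv.max()                     # fixed-fraction threshold

labels, _ = ndimage.label(mask)                    # connected components
seed_label = labels[np.unravel_index(suv.argmax(), suv.shape)]
lesion = labels == seed_label                      # component containing SUVmax
print("segmented voxels:", int(lesion.sum()))
```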

DL-based segmentation algorithms have been successfully applied to PET images with different radiopharmaceuticals and in different clinical settings, from oncology to cardiac and brain imaging [26,27,28]. Moreover, the full potential of hybrid imaging can be exploited by combining segmentation algorithms performed separately on CT images and PET images to improve the performance of segmentation models based on CT or PET images alone [29,30,31].

Detection and classification

The most complex task delegated to AI in medical image analysis is the detection and classification of neoplastic lesions. A class of computer systems has been developed to assist physicians, known as computer-assisted detection and computer-assisted diagnosis systems. While the former merely identify and locate suspicious alterations, the latter also define their characteristics and classify them as benign/malignant findings. While these systems save time in image interpretation, they are not designed to replace the physician, whose expert eye is always required to confirm the result generated by the algorithm [32,33,34]. AI algorithms can provide pre-screened images and pre-identified key features, allowing for greater effectiveness and efficiency by reducing the number of human errors, inter-observer variability, and average reporting times. Computer-assisted systems can use either machine learning (ML) or DL algorithms, with supervised, semi-supervised, and unsupervised approaches. The workflow includes a preprocessing phase to suppress unwanted noise, segmentation of an ROI, feature extraction and selection of meaningful characteristics, and lesion classification [32,33,34,35].
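This classical workflow can be illustrated with a short, hypothetical scikit-learn pipeline: hand-crafted per-candidate features are scaled, a subset is selected, and a supervised classifier scores each candidate. The features, labels, and model choice are illustrative assumptions.

```python
# Minimal sketch of a classical CAD pipeline: feature scaling, feature
# selection, and a supervised classifier scoring each candidate lesion.
# Features, labels, and model choice are illustrative assumptions.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import RandomForestClassifier

X = np.random.rand(200, 30)          # per-candidate features (uptake, shape, ...)
y = np.random.randint(0, 2, 200)     # 0 = benign, 1 = malignant (ground truth)

cad = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectKBest(f_classif, k=10)),    # keep meaningful features
    ("clf", RandomForestClassifier(n_estimators=100, random_state=0)),
])
cad.fit(X, y)
prob_malignant = cad.predict_proba(X)[:, 1]      # scores presented to the reader
```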

Lung nodule characterisation is one of the most promising applications of computer-assisted systems. Several algorithms have been implemented with good performance on [18F]fluorodeoxyglucose ([18F]FDG) PET/CT for the detection, classification, and accurate staging of lung lesions [30, 31, 36,37,38]. A more complex task is the evaluation of whole-body PET/CT images, for instance in lymphoma and melanoma patients. CNNs have been used to correctly stage the disease by identifying different uptake patterns between suspicious and non-suspicious findings (i.e., areas of increased physiological uptake) (Fig. 4) [38, 39]. Sibille et al. [38] developed a CNN to detect areas of increased uptake, identify their anatomical location, and classify them as benign/malignant in patients with lung cancer and lymphoma. The CNN achieved high performance in terms of both anatomical localisation accuracy and discrimination of pathological lesions. Although the main field of application is oncology, AI-based classification systems have also been applied to various non-oncological molecular imaging techniques [8, 40]. Choi et al. [41] applied DL to brain [18F]FDG PET for the diagnosis of neurodegenerative disorders. The DL model showed strong performance in discriminating patients with Alzheimer disease from normal controls, and it was also able to predict conversion to Alzheimer disease in patients with mild cognitive impairment and to identify Parkinson disease patients with dementia.

Fig. 4

Illustrative demonstration of the results generated by a deep learning classification system on [18F]FDG PET/CT. This exhibit showcases the placement of the liver’s volume of interest (left), the segmentations following the application of a threshold (centre), and the categorisation into either physiological uptake or pathological lesions (right). Reprinted with permission from Eur J Nucl Med Mol Imaging [39]. CT Computed tomography, FDG Fluorodeoxyglucose, PET Positron emission tomography

Quantification

Quantification provides essential biomarkers for both diagnostic and therapeutic purposes. In oncology, the most commonly used semiquantitative parameters derived from [18F]FDG PET/CT reflect metabolism on a single-voxel basis (“standardised uptake value”) or on a volumetric basis (“metabolic tumour volume” [MTV] and total lesion glycolysis). Semiquantitative parameters provide relevant information for lesion characterisation, prognostic stratification, assessment of disease severity, and response to therapy, guiding clinicians in patient management. However, in certain conditions, many lesions must be segmented to obtain these data, and the task can be challenging and time-consuming. DL algorithms have been applied to whole-body images to segment multiple lesions and extract relevant semiquantitative parameters quickly and automatically [42, 43]. In lymphoma, total MTV (TMTV, i.e., the sum of the MTVs of all lesions) is a recognised parameter to stratify the risk of refractoriness/recurrence after first-line chemotherapy. Lymphoma is characterised by high variability in lesion number, size, distribution, shape, and metabolism. Several CNN models have been developed to segment pathological lesions, discard areas of physiological uptake, and calculate TMTV fully automatically. These architectures may provide clinicians with a high-throughput platform to perform semiquantitative analyses efficiently and accurately [44,45,46].
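For concreteness, the sketch below computes TMTV, and analogously total lesion glycolysis, from a binary lesion mask, i.e., the downstream step that would follow a CNN segmentation; the voxel size, the dummy mask, and the SUV volume are assumptions.

```python
# Minimal sketch: TMTV and total lesion glycolysis (TLG) from a labelled
# lesion mask. Voxel size and dummy volumes are assumptions; in practice the
# mask would come from the segmentation network.
import numpy as np
from scipy import ndimage

suv = np.random.rand(64, 64, 64) * 15            # dummy SUV volume
lesion_mask = suv > 12                            # dummy binary lesion mask
voxel_ml = 4.0 * 4.0 * 4.0 / 1000.0              # 4-mm isotropic voxels -> mL

labels, n_lesions = ndimage.label(lesion_mask)    # separate the lesions
tmtv, tlg = 0.0, 0.0
for i in range(1, n_lesions + 1):
    region = labels == i
    mtv = region.sum() * voxel_ml                # MTV of one lesion (mL)
    tmtv += mtv
    tlg += mtv * suv[region].mean()              # MTV x SUVmean
print(f"TMTV = {tmtv:.1f} mL, TLG = {tlg:.1f}")
```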

Juarez-Orozco et al. applied DL to quantitative myocardial perfusion polar maps from [13N]NH3 PET to identify patients at risk of major adverse cardiac events. This approach significantly outperformed traditional clinical and functional variables, providing improved clinical prognostic estimates at the individual level [47].

Treatment planning

Great technological advances have prompted the development of advanced radiotherapy techniques that require a high degree of accuracy in defining the target tumour volume to minimise the radiation dose to surrounding healthy tissue and organs at risk, fitting perfectly into the modern vision of precision medicine. The delineation of the target volume (“gross tumour volume” [GTV]) is currently performed using a multimodal approach. In this perspective, metabolic data provide complementary information to morphological imaging, identifying the most aggressive tumour areas prone to radio-resistance mechanisms. The dose-painting approach entails delivering a higher dose of radiation to metabolically active tumour areas, defined as the “biological target volume”, for improved disease control and survival. In clinical practice, the delineation of the GTV on PET/CT images is performed with semi-automatic threshold-based systems detecting tumour areas with high metabolism, which are subsequently visually adjusted by the physician [48].

Deep CNN systems have also been applied to the automated delineation of different tumour histotypes to provide a simpler and faster procedure with less inter-observer variability. Several studies conducted in patients with head and neck cancer confirm a high degree of overlap between the biological target volume delineation of the primary tumour and pathological loco-regional lymph nodes proposed by CNN systems and that performed by expert physicians (Fig. 5) [49,50,51].
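Agreement in these studies is typically quantified with the Dice similarity coefficient; the short sketch below computes it for two hypothetical delineations, with dummy masks standing in for the predicted and manual GTVs.

```python
# Minimal sketch: Dice similarity coefficient between a predicted and a manual
# GTV delineation. The two cubic masks are dummy assumptions.
import numpy as np

pred = np.zeros((64, 64, 64), dtype=bool)
pred[20:40, 20:40, 20:40] = True                 # "CNN" delineation
manual = np.zeros_like(pred)
manual[22:42, 20:40, 20:40] = True               # "expert" delineation

def dice(a, b):
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

print(f"Dice = {dice(pred, manual):.3f}")        # 1.0 = perfect agreement
```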

Fig. 5

Application of CNNs for fully automated delineation of the GTV on [18F]FDG PET/CT in patients with head and neck cancer. The image shows the predicted (in red) and manual (in blue) segmentations in three different patients on CT alone, PET alone and PET/CT images (an experienced oncologist assigned qualitative scores of 10, 8, and 2 to the PET/CT-based predictions from top to bottom). Reprinted with permission from Eur J Nucl Med Mol Imaging [51]. CNN Convolutional neural networks, CT Computed tomography, FDG Fluorodeoxyglucose, GTV Gross tumour volume, PET Positron emission tomography

Dosimetry

Predicting individual radiopharmaceutical dosimetry in compliance with the optimisation principle is crucial in therapeutic settings [52]. Currently, certain radioligand therapies are still administered with a fixed dose, and pre-treatment imaging is used only to select candidates expressing the therapeutic target [53, 54]. However, a radical paradigm shift is taking place with the increasing development of radiopharmaceuticals for therapy. Whole-body absorbed dose quantification can be performed on planar or three-dimensional images employing different methods. The Medical Internal Radiation Dose (MIRD) Committee formalism is a simplified method for performing organ-level dosimetry. This approach assumes a uniform distribution of radiopharmaceutical activity and ignores the anatomical characteristics of the individual patient. Monte Carlo simulation overcomes this inherent limitation but suffers from a high computational burden. Moreover, pre-therapy dose estimation involves several technical problems, requiring multiple dynamic whole-body scans for the extrapolation of pharmacokinetic data, which is currently not suitable for routine clinical practice [55, 56]. DL algorithms have been used to generate individual voxel-based dosimetry maps from PET and CT with excellent results. The algorithms predicted absorbed doses with high precision, outperforming the MIRD and Monte Carlo methods in terms of both accuracy and computational efficiency [57, 58].
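For orientation, the organ-level MIRD formalism reduces to absorbed dose = time-integrated activity × S-value, summed over source organs; the sketch below works through this with made-up numbers (the time–activity curve and S-values are not real data).

```python
# Minimal sketch of organ-level MIRD dosimetry: integrate a time-activity
# curve (trapezoid rule) and multiply by S-values. All numbers are made up.
import numpy as np

t = np.array([1.0, 4.0, 24.0, 48.0, 96.0])             # hours after injection
activity = np.array([200.0, 150.0, 60.0, 25.0, 5.0])   # MBq in the source organ

# time-integrated activity (MBq*s), trapezoid rule over the sampled curve
a_tilde = ((activity[1:] + activity[:-1]) / 2 * np.diff(t)).sum() * 3600.0

s_values = {"kidneys": 1.2e-5, "liver": 3.0e-6}         # mGy/(MBq*s), hypothetical
for organ, s in s_values.items():
    print(f"{organ}: D = {a_tilde * s:.1f} mGy")        # absorbed dose estimate
```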

Radiomics and radiogenomics

Over the past decade, there has been much debate about advanced image analysis using radiomics techniques and its applicability to clinical routine in the near future. Radiomics attempts to establish relationships between visual image descriptors and biological/clinical endpoints. It involves the high-throughput extraction of high-dimensional quantitative features from medical images reflecting the biological characteristics of organs, tissues, and tumours, to increase the power of decision support models for outcome prediction and individualised patient management. The radiomics workflow includes several steps: image acquisition, ROI segmentation, feature extraction and analysis, database implementation with clinical and image-derived data, and model building. Although the main application of radiomics is in oncology, the discipline is also expanding into non-oncological conditions. Saleem et al. [59] assessed the feasibility and utility of using textural features to diagnose aortoiliac graft infection on [18F]FDG PET/CT, with promising results.
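As a minimal illustration of the feature-extraction step, the sketch below computes a few first-order features from the SUV values inside a segmented ROI; a full radiomics pipeline would add shape and texture feature families. The volume, mask, and bin count are assumptions.

```python
# Minimal sketch: first-order radiomic features from the SUV values inside a
# segmented ROI. The SUV volume, ROI mask, and 64-bin histogram are assumptions.
import numpy as np

suv = np.random.rand(64, 64, 64) * 12
roi = suv > 6                                    # dummy segmented ROI
vals = suv[roi]

hist, _ = np.histogram(vals, bins=64)
p = hist / hist.sum()
entropy = -(p[p > 0] * np.log2(p[p > 0])).sum()  # intensity entropy

features = {
    "SUVmax": vals.max(),
    "SUVmean": vals.mean(),
    "skewness": ((vals - vals.mean()) ** 3).mean() / vals.std() ** 3,
    "entropy": entropy,
}
print(features)
```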

Personalised medicine is an ambitious goal that relies on fusing multidisciplinary data from clinical, imaging, genomic, and other omic sources [60]. Advances in genomics have led to the sequencing of entire genomes and have provided valuable insights into disease susceptibility and cancer development [61]. Radiogenomics is a relatively new scientific field that aims to discover new non-invasive biomarkers and to bridge the gaps between genomics and radiomics [62]. Recent trends include the use of AI and radiogenomics to support diagnosis, treatment decisions, and prognosis in oncology, revolutionising healthcare [63]. The scientific literature provides several examples of AI-based radiogenomics methods for clinical applications, ranging from imaging features with genetic associations to tissue characterisation (Fig. 6) [64, 65]. Kirienko et al. [66] applied this approach to lung cancer patients, achieving excellent results in predicting tumour histology and patient outcome by combining radiomic analysis of [18F]FDG PET/CT with gene expression abnormalities via an ML approach. Kim et al. [67] explored both ML and DL to develop a prediction model for chemotherapy response and metastasis in paediatric patients with osteosarcoma using gene expression and image texture features from [18F]FDG PET/CT, achieving high accuracy.

Fig. 6

Radiomics and radiogenomics pipelines in outcome prediction. Reprinted from Br J Cancer [65]. The article is distributed under the terms of the Creative Commons CC BY license

Limitations and ethical considerations

While the integration of AI into nuclear medicine holds great promise, it is essential to recognise and address certain limitations and challenges. One of the foremost limitations to developing powerful and robust AI algorithms revolves around the quality and quantity of data available for training. The transition of AI-driven tools into clinical workflows can be hindered by limited data availability, accessibility, and interoperability. This is particularly true for rare cancers or specific patient populations, but it also includes the variability in data quality between medical centres in terms of imaging protocols, equipment, and radiopharmaceuticals. Several strategies can help mitigate the limitations of insufficient data, including data augmentation, transfer learning, synthetic data generation, and one-shot learning [68,69,70,71]. Another approach is data sharing across different institutions. However, inadequate data interoperability can have a significant impact on the accuracy of AI predictions, and a common reference model should be embraced to establish robust data. Approaches to minimise measurement uncertainty, enable data identification, and standardise pre-existing data for research purposes are of paramount importance for data integration and enhanced AI applications in nuclear medicine [72]. Several open-source and proprietary software tools have been developed by exploiting the availability of large amounts of data for learning (open libraries, etc.). However, the integration of image interpretation software into clinical practice is limited by long development times and strict requirements for regulatory approval. Therefore, most algorithms are currently distributed as research tools. There is still a long way to go to provide evidence-based data and define the real impact of integrating AI into clinical practice [63, 73].

Another important limitation of AI in clinical practice is what is commonly referred to as the "black box" problem [74]. This refers to the inherent opacity of complex AI models, whose decision-making processes are difficult to interpret and understand. In nuclear medicine, this challenge becomes particularly pertinent: medical professionals are asked to rely on AI-assisted diagnoses and treatment plans without being able to scrutinise the reasoning behind them. This lack of transparency can potentially impede the widespread adoption of AI in clinical settings. Various approaches have been proposed to tackle this issue, such as incorporating retro-propagation of information from the model results back to the input data [75, 76]. In general, the development of techniques to improve the transparency and interpretability of AI models has been an ongoing effort. However, this remains an open challenge within the discipline, demanding continued research and innovation to bridge the gap between the complexity of AI algorithms and the need for transparent and comprehensible decision-making processes in clinical practice.

Future directions

The future of AI in PET imaging is bright, with several notable advances on the horizon. AI will play a key role in the development of accurate pseudo-CT methods for attenuation correction in PET/MRI. This will result in synthetic CT images from MR and PET data, improving PET quantification and anatomical precision while reducing radiation exposure [77]. A major trend, facilitated by AI-driven data fusion, is the integration of PET with other imaging modalities such as MRI and CT, as well as dynamic multiparametric PET analysis, providing a comprehensive view of a patient's condition, improved diagnostic capabilities and valuable clinical insights [78, 79]. Furthermore, AI will improve the feasibility of real-time imaging and intervention. Surgeons can use AI to navigate procedures and make critical decisions based on intraoperative PET imaging, resulting in greater surgical precision and minimised damage to healthy tissue [80].

Conclusions

AI is rapidly revolutionising nuclear medicine, improving diagnostic accuracy, patient care, and the overall utility of PET imaging in both clinical practice and research. However, it is currently used mainly for research purposes and few applications have entered clinical practice. Responsible development and regulatory oversight will be essential to ensure that AI technologies are integrated safely and effectively into healthcare workflows.

Availability of data and materials

Not applicable.

Abbreviations

AI: Artificial intelligence
CNN: Convolutional neural network
CT: Computed tomography
DL: Deep learning
FDG: Fluorodeoxyglucose
GTV: Gross tumour volume
LOR: Line of response
MRI: Magnetic resonance imaging
MTV: Metabolic tumour volume
NN: Neural network
PET: Positron emission tomography
ROI: Region of interest
TOF: Time-of-flight

References

  1. Gillies RJ, Kinahan PE, Hricak H (2016) Radiomics: images are more than pictures, they are data. Radiology 278:563–577. https://doi.org/10.1148/RADIOL.2015151169

  2. Kirienko M, Biroli M, Gelardi F, Seregni E, Chiti A, Sollini M (2021) Deep learning in Nuclear Medicine—focus on CNN-based approaches for PET/CT and PET/MR: where do we stand? Clin Transl Imaging 9:37–55. https://doi.org/10.1007/S40336-021-00411-6/METRICS

  3. Smith RL, Ackerley IM, Wells K, Bartley L, Paisey S, Marshall C (2019) Reinforcement learning for object detection in PET imaging. 2019 IEEE Nuclear Science Symposium and Medical Imaging Conference, NSS/MIC 2019. https://doi.org/10.1109/NSS/MIC42101.2019.9060031

  4. Wang T, Lei Y, Fu Y, et al (2020) Machine learning in quantitative PET: a review of attenuation correction and low-count image reconstruction methods. Phys Med 76:294. https://doi.org/10.1016/J.EJMP.2020.07.028

  5. Bradshaw TJ, Boellaard R, Dutta J, et al (2022) Nuclear Medicine and Artificial Intelligence: Best Practices for Algorithm Development. J Nucl Med 63:500–510. https://doi.org/10.2967/JNUMED.121.262567

  6. Berg E, Cherry SR (2018) Using convolutional neural networks to estimate time-of-flight from PET detector waveforms. Phys Med Biol 63:02LT01. https://doi.org/10.1088/1361-6560/AA9DC5

  7. Carra P, Giuseppina Bisogni M, Ciarrocchi E, et al (2022) A neural network-based algorithm for simultaneous event positioning and timestamping in monolithic scintillators. Phys Med Biol 67:135001. https://doi.org/10.1088/1361-6560/AC72F2

  8. Arabi H, AkhavanAllaf A, Sanaat A, Shiri I, Zaidi H (2021) The promise of artificial intelligence and deep learning in PET and SPECT imaging. Phys Med 83:122–137. https://doi.org/10.1016/J.EJMP.2021.03.008

  9. Maebe J, Vandenberghe S (2022) Simulation study on 3D convolutional neural networks for time-of-flight prediction in monolithic PET detectors using digitized waveforms. Phys Med Biol 67:125016. https://doi.org/10.1088/1361-6560/AC73D3

  10. Gong K, Berg E, Cherry SR, Qi J (2020) Machine learning in PET: from photon detection to quantitative image reconstruction. Proc IEEE Inst Electr Electron Eng 108:51–68. https://doi.org/10.1109/JPROC.2019.2936809

  11. Cheng Z, Wen J, Huang G, Yan J (2021) Applications of artificial intelligence in nuclear medicine image generation. Quant Imaging Med Surg 11:2792–2822. https://doi.org/10.21037/QIMS-20-1078

  12. Michaud JB, Tétrault MA, Beaudoin JF, et al (2015) Sensitivity increase through a neural network method for LOR recovery of ICS triple coincidences in high-resolution pixelated-detectors PET scanners. IEEE Trans Nucl Sci 62:82–94. https://doi.org/10.1109/TNS.2014.2372788

  13. Reader AJ, Schramm G (2021) Artificial intelligence for PET image reconstruction. J Nucl Med 62:1330–1333. https://doi.org/10.2967/JNUMED.121.262303

  14. Chen KT, Gong E, de Carvalho Macruz FB, et al (2019) Ultra–low-dose 18F-florbetaben amyloid PET imaging using deep learning with multi-contrast MRI inputs. Radiology 290:649–656. https://doi.org/10.1148/radiol.2018180940

  15. Xiang L, Qiao Y, Nie D, et al (2017) Deep auto-context convolutional neural networks for standard-dose PET image estimation from low-dose PET/MRI. Neurocomputing 267:406–416. https://doi.org/10.1016/J.NEUCOM.2017.06.048

  16. Kaplan S, Zhu YM (2019) Full-dose PET image estimation from low-dose PET image using deep learning: a pilot study. J Digit Imaging 32:773–778. https://doi.org/10.1007/s10278-018-0150-3

  17. Ulyanov D, Vedaldi A, Lempitsky V (2020) Deep image prior. Int J Comput Vis 128:1867–1888. https://doi.org/10.1007/s11263-020-01303-4

  18. Hashimoto F, Ohba H, Ote K, Teramoto A, Tsukada H (2019) Dynamic PET image denoising using deep convolutional neural networks without prior training datasets. IEEE Access 7:96594–96603. https://doi.org/10.1109/ACCESS.2019.2929230

  19. ten Hove D, Slart RHJA, Sinha B, Glaudemans AWJM, Budde RPJ (2021) 18F-FDG PET/CT in infective endocarditis: indications and approaches for standardization. Curr Cardiol Rep 23. https://doi.org/10.1007/S11886-021-01542-Y

  20. Werner MK, Parker JA, Kolodny GM, English JR, Palmer MR (2009) Respiratory gating enhances imaging of pulmonary nodules and measurement of tracer uptake in FDG PET/CT. AJR Am J Roentgenol 193:1640–1645. https://doi.org/10.2214/AJR.09.2516

  21. Aquino SL, Kuester LB, Muse VV, Halpern EF, Fischman AJ (2006) Accuracy of transmission CT and FDG-PET in the detection of small pulmonary nodules with integrated PET/CT. Eur J Nucl Med Mol Imaging 33:692–696. https://doi.org/10.1007/s00259-005-0018-x

  22. Schwenck J, Kneilling M, Riksen NP, et al (2022) A role for artificial intelligence in molecular imaging of infection and inflammation. Eur J Hybrid Imaging 6. https://doi.org/10.1186/S41824-022-00138-1

  23. Dias AH, Schleyer P, Vendelbo MH, Hjorthaug K, Gormsen LC, Munk OL (2022) Clinical feasibility and impact of data-driven respiratory motion compensation studied in 200 whole-body 18F-FDG PET/CT scans. EJNMMI Res 12:16. https://doi.org/10.1186/S13550-022-00887-X

  24. Feng T, Wang J, Dong Y, Zhao J, Li H (2019) A Novel Data-Driven Cardiac Gating Signal Extraction Method for PET. IEEE Trans Med Imaging 38:629–637. https://doi.org/10.1109/TMI.2018.2868615

  25. Shi L, Lu Y, Dvornek N, et al (2021) Automatic Inter-Frame Patient Motion Correction for Dynamic Cardiac PET Using Deep Learning. IEEE Trans Med Imaging 40:3293–3304. https://doi.org/10.1109/TMI.2021.3082578

  26. Hatt M, Lee JA, Schmidtlein CR, et al (2017) Classification and evaluation strategies of auto-segmentation approaches for PET: Report of AAPM task group No. 211. Med Phys 44:e1–e42. https://doi.org/10.1002/MP.12124

  27. Decuyper M, Maebe J, van Holen R, Vandenberghe S (2021) Artificial intelligence with deep learning in nuclear medicine and radiology. EJNMMI Phys 18:1–46. https://doi.org/10.1186/S40658-021-00426-Y

  28. Slart RHJA, Williams MC, Juarez-Orozco LE, et al (2021) Position paper of the EACVI and EANM on artificial intelligence applications in multimodality cardiovascular imaging using SPECT/CT, PET/CT, and cardiac CT. Eur J Nucl Med Mol Imaging 48:1399–1413. https://doi.org/10.1007/s00259-021-05341-z

  29. Zhong Z, Kim Y, Zhou L, et al (2018) 3D fully convolutional networks for co-segmentation of tumors on PET-CT images. Proc Int Symp Biomed Imaging 2018-April:228–231. https://doi.org/10.1109/ISBI.2018.8363561

  30. Kumar A, Fulham M, Feng D, Kim J (2020) Co-Learning Feature Fusion Maps from PET-CT Images of Lung Cancer. IEEE Trans Med Imaging 39:204–217. https://doi.org/10.1109/TMI.2019.2923601

  31. Teramoto A, Fujita H, Takahashi K, et al (2014) Hybrid method for the detection of pulmonary nodules using positron emission tomography/computed tomography: a preliminary study. Int J Comput Assist Radiol Surg 9:59–69. https://doi.org/10.1007/S11548-013-0910-Y

  32. Mansoor A, Bagci U, Foster B, et al (2015) Segmentation and Image Analysis of Abnormal Lungs at CT: Current Approaches, Challenges, and Future Trends. Radiographics 35:1056–1076. https://doi.org/10.1148/rg.2015140232

  33. Gao J, Jiang Q, Zhou B, Chen D (2019) Convolutional neural networks for computer-aided detection or diagnosis in medical image analysis: an overview. Math Biosci Eng 16:6536–6561. https://doi.org/10.3934/MBE.2019326

  34. Shiraishi J, Li Q, Appelbaum D, Doi K (2011) Computer-aided diagnosis and artificial intelligence in clinical imaging. Semin Nucl Med 41:449–462. https://doi.org/10.1053/J.SEMNUCLMED.2011.06.004

  35. Visvikis D, Cheze Le Rest C, Jaouen V, Hatt M (2019) Artificial intelligence, machine (deep) learning and radio(geno)mics: definitions and nuclear medicine imaging applications. Eur J Nucl Med Mol Imaging 46:2630–2637. https://doi.org/10.1007/s00259-019-04373-w

  36. Zhao X, Li L, Lu W, Tan S (2019) Tumor co-segmentation in PET/CT using multi-modality fully convolutional neural network. Phys Med Biol 64. https://doi.org/10.1088/1361-6560/AAF44B

  37. Li L, Zhao X, Lu W, Tan S (2020) Deep learning for variational multimodality tumor segmentation in PET/CT. Neurocomputing 392:277–295. https://doi.org/10.1016/j.neucom.2018.10.099

  38. Sibille L, Seifert R, Avramovic N, et al (2020) 18F-FDG PET/CT uptake classification in lymphoma and lung cancer by using deep convolutional neural networks. Radiology 294:445–452. https://doi.org/10.1148/RADIOL.2019191114

  39. Dirks I, Keyaerts M, Neyns B, Vandemeulebroucke J (2022) Computer-aided detection and segmentation of malignant melanoma lesions on whole-body 18F-FDG PET/CT using an interpretable deep learning approach. Comput Methods Programs Biomed 221:106902. https://doi.org/10.1016/J.CMPB.2022.106902

  40. Palumbo B, Bianconi F, Nuvoli S, Spanu A, Fravolini ML (2021) Artificial intelligence techniques support nuclear medicine modalities to improve the diagnosis of Parkinson’s disease and Parkinsonian syndromes. Clin Transl Imaging 9:19–35. https://doi.org/10.1007/s40336-020-00404-x

  41. Choi H, Kim YK, Yoon EJ, Lee JY, Lee DS (2020) Cognitive signature of brain FDG PET based on deep learning: domain transfer from Alzheimer’s disease to Parkinson’s disease. Eur J Nucl Med Mol Imaging 47:403–412. https://doi.org/10.1007/S00259-019-04538-7

  42. Hasani N, Paravastu SS, Farhadi F, et al (2022) Artificial Intelligence in Lymphoma PET Imaging: A Scoping Review (Current Trends and Future Directions). PET Clin 17:145–174. https://doi.org/10.1016/J.CPET.2021.09.006

  43. Sollini M, Bandera F, Kirienko M (2019) Quantitative imaging biomarkers in nuclear medicine: from SUV to image mining studies. Highlights from annals of nuclear medicine 2018. Eur J Nucl Med Mol Imaging 46:2737–2745. https://doi.org/10.1007/S00259-019-04531-0

  44. Capobianco N, Meignan M, Cottereau AS, et al (2021) Deep-learning 18F-FDG uptake classification enables total metabolic tumor volume estimation in diffuse large b-cell lymphoma. J Nucl Med 62:30–36. https://doi.org/10.2967/JNUMED.120.242412

  45. Li H, Jiang H, Li S, et al (2020) DenseX-Net: An End-to-End Model for Lymphoma Segmentation in Whole-Body PET/CT Images. IEEE Access 8:8004–8018. https://doi.org/10.1109/ACCESS.2019.2963254

  46. Blanc-Durand P, Jégou S, Kanoun S, et al (2021) Fully automatic segmentation of diffuse large B cell lymphoma lesions on 3D FDG-PET/CT for total metabolic tumour volume prediction using a convolutional neural network. Eur J Nucl Med Mol Imaging 48:1362–1370. https://doi.org/10.1007/S00259-020-05080-7

  47. Juarez-Orozco LE, Martinez-Manzanera O, van der Zant FM, Knol RJJ, Knuuti J (2020) Deep Learning in Quantitative PET Myocardial Perfusion Imaging: A Study on Cardiovascular Event Prediction. JACC Cardiovasc Imaging 13:180–182. https://doi.org/10.1016/J.JCMG.2019.08.009

  48. Beaton L, Bandula S, Gaze MN, Sharma RA (2019) How rapid advances in imaging are defining the future of precision radiation oncology. Br J Cancer 120:779–790. https://doi.org/10.1038/s41416-019-0412-y

  49. Guo Z, Guo N, Gong K, Zhong S, Li Q (2019) Gross tumor volume segmentation for head and neck cancer radiotherapy using deep dense multi-modality network. Phys Med Biol 64. https://doi.org/10.1088/1361-6560/AB440D

  50. Lin L, Dou Q, Jin YM, et al (2019) Deep Learning for Automated Contouring of Primary Tumor Volumes by MRI for Nasopharyngeal Carcinoma. Radiology 291:677–686. https://doi.org/10.1148/RADIOL.2019182012

  51. Moe YM, Groendahl AR, Tomic O, Dale E, Malinen E, Futsaether CM (2021) Deep learning-based auto-delineation of gross tumour volumes and involved nodes in PET/CT images of head and neck cancer patients. Eur J Nucl Med Mol Imaging 48:2782–2792. https://doi.org/10.1007/S00259-020-05125-X

  52. Chiesa C, Strigari L, Pacilio M, et al (2021) Dosimetric optimization of nuclear medicine therapy based on the Council Directive 2013/59/EURATOM and the Italian law N. 101/2020. Position paper and recommendations by the Italian National Associations of Medical Physics (AIFM) and Nuclear Medicine (AIMN). Phys Med 89:317–326. https://doi.org/10.1016/J.EJMP.2021.07.001

  53. Kratochwil C, Fendler WP, Eiber M, et al (2019) EANM procedure guidelines for radionuclide therapy with 177Lu-labelled PSMA-ligands (177Lu-PSMA-RLT). Eur J Nucl Med Mol Imaging 46:2536–2544. https://doi.org/10.1007/S00259-019-04485-3

  54. Zaknun JJ, Bodei L, Mueller-Brand J, et al (2013) The joint IAEA, EANM, and SNMMI practical guidance on peptide receptor radionuclide therapy (PRRNT) in neuroendocrine tumours. Eur J Nucl Med Mol Imaging 40:800–816. https://doi.org/10.1007/S00259-012-2330-6

  55. Visvikis D, Lambin P, Beuschau Mauridsen K, et al (2022) Application of artificial intelligence in nuclear medicine and molecular imaging: a review of current status and future perspectives for clinical translation. Eur J Nucl Med Mol Imaging 49:4452–4463. https://doi.org/10.1007/S00259-022-05891-W

  56. Ljungberg M, Sjogreen Gleisner K (2018) 3-D image-based dosimetry in radionuclide therapy. IEEE Trans Radiat Plasma Med Sci 2:527–540. https://doi.org/10.1109/TRPMS.2018.2860563

  57. Lee MS, Hwang D, Kim JH, Lee JS (2019) Deep-dose: a voxel dose estimation method using deep convolutional neural network for personalized internal dosimetry. Sci Rep 9:10308–10308. https://doi.org/10.1038/S41598-019-46620-Y

  58. Akhavanallaf A, Shiri I, Arabi H, Zaidi H (2020) Whole-body voxel-based internal dosimetry using deep learning. Eur J Nucl Med Mol Imaging 48:670–682. https://doi.org/10.1007/S00259-020-05013-4

  59. Saleem BR, Beukinga RJ, Boellaard R, et al (2017) Textural features of 18F-fluorodeoxyglucose positron emission tomography scanning in diagnosing aortic prosthetic graft infection. Eur J Nucl Med Mol Imaging 44:886–894. https://doi.org/10.1007/S00259-016-3599-7

  60. Johnson KB, Wei WQ, Weeraratne D, et al (2021) Precision Medicine, AI, and the Future of Personalized Health Care. Clin Transl Sci 14:86. https://doi.org/10.1111/CTS.12884

  61. del Giacco L, Cattaneo C (2012) Introduction to genomics. Methods Mol Biol 823:79–88. https://doi.org/10.1007/978-1-60327-216-2_6

  62. Bodalal Z, Trebeschi S, Nguyen-Kim TDL, Schats W, Beets-Tan R (2019) Radiogenomics: bridging imaging and genomics. Abdom Radiol (NY) 44:1960–1984. https://doi.org/10.1007/S00261-019-02028-W

  63. van Leeuwen KG, Schalekamp S, Rutten MJCM, van Ginneken B, de Rooij M (2021) Artificial intelligence in radiology: 100 commercially available products and their scientific evidence. Eur Radiol 31:3797–3804. https://doi.org/10.1007/s00330-021-07892-z

  64. Trivizakis E, Papadakis GZ, Souglakos I, et al (2020) Artificial intelligence radiogenomics for advancing precision and effectiveness in oncologic care (Review). Int J Oncol 57:43–53. https://doi.org/10.3892/IJO.2020.5063

  65. Singh G, Manjila S, Sakla N, et al (2021) Radiomics and radiogenomics in gliomas: a contemporary update. Br J Cancer 125(5):641–657. https://doi.org/10.1038/s41416-021-01387-w

  66. Kirienko M, Sollini M, Corbetta M, et al (2021) Radiomics and gene expression profile to characterise the disease and predict outcome in patients with lung cancer. Eur J Nucl Med Mol Imaging 48:3643–3655. https://doi.org/10.1007/s00259-021-05371-7

  67. Kim BC, Kim J, Kim K, et al (2021) Preliminary radiogenomic evidence for the prediction of metastasis and chemotherapy response in pediatric patients with osteosarcoma using 18F-FDG PET/CT, ezrin, and Ki67. Cancers (Basel) 13:2671. https://doi.org/10.3390/CANCERS13112671

  68. Raghu M, Zhang C, Brain G, Kleinberg J, Bengio S (2019) Transfusion: Understanding Transfer Learning for Medical Imaging. NIPS’19: Proceedings of the 33rd International Conference on Neural Information Processing Systems 3347–3357. https://doi.org/10.5555/3454287.3454588

  69. Guibas JT, Virdi TS, Li PS, Henry S, Gunn M (2017) Synthetic Medical Images from Dual Generative Adversarial Networks. https://arxiv.org/abs/1709.01872v3

  70. Hussain Z, Gimenez F, Yi D, Rubin D (2017) Differential Data Augmentation Techniques for Medical Imaging Classification Tasks. AMIA Ann Symp Proc 2017:979

  71. Aksu F, Gelardi F, Chiti A, Soda P (2023) Early Experiences on using Triplet Networks for Histological Subtype Classification in Non-Small Cell Lung Cancer. Proc IEEE Symp Comput Based Med Syst 2023-June:832–837. https://doi.org/10.1109/CBMS58004.2023.00328

  72. Kwon O, Yoo SK (2021) Interoperability Reference Models for Applications of Artificial Intelligence in Medical Imaging. Appl Sci 11:2704. https://doi.org/10.3390/APP11062704

  73. Gelardi F, Kirienko M, Sollini M (2021) Climbing the steps of the evidence-based medicine pyramid: highlights from Annals of Nuclear Medicine 2019. Eur J Nucl Med Mol Imaging 48:1293–1301. https://doi.org/10.1007/S00259-020-05073-6

  74. Visvikis D, Lambin P, Beuschau Mauridsen K, Hustinx R, et al (2022) Application of artificial intelligence in nuclear medicine and molecular imaging: a review of current status and future perspectives for clinical translation. Eur J Nucl Med Mol Imaging 49:4452–4463. https://doi.org/10.1007/S00259-022-05891-W

  75. Quellec G, Charrière K, Boudi Y, Cochener B, Lamard M (2017) Deep image mining for diabetic retinopathy screening. Med Image Anal 39:178–193. https://doi.org/10.1016/J.MEDIA.2017.04.012

  76. Brocki L, Chung NC (2019) Concept saliency maps to visualize relevant features in deep generative models. Proceedings - 18th IEEE International Conference on Machine Learning and Applications, ICMLA 2019 1771–1778. https://doi.org/10.1109/ICMLA.2019.00287

  77. Liu F, Jang H, Kijowski R, Bradshaw T, McMillan AB (2017) Deep Learning MR Imaging–based Attenuation Correction for PET/MR Imaging. Radiology 286:676–684. https://doi.org/10.1148/RADIOL.2017170700

  78. Topol EJ (2023) As artificial intelligence goes multimodal, medical applications multiply. Science 381:adk6139. https://doi.org/10.1126/SCIENCE.ADK6139

  79. Dimitrakopoulou-Strauss A, Pan L, Sachpekidis C (2021) Kinetic modeling and parametric imaging with dynamic PET for oncological applications: general considerations, current clinical applications, and future perspectives. Eur J Nucl Med Mol Imaging 48:21–39. https://doi.org/10.1007/s00259-020-04843-6

  80. Wendler T, van Leeuwen FWB, Navab N, et al (2021) How molecular imaging will enable robotic precision surgery. Eur J Nucl Med Mol Imaging 48:4201–4224. https://doi.org/10.1007/S00259-021-05445-6

Funding

This research received no external funding.

FG is supported by the Investigator Grant 2020–23596 funded by AIRC (Italian Association for Cancer Research) won by AC.

Author information

Authors and Affiliations

Authors

Contributions

AA, AC, and FG conceptualised the paper. AA, AB, and FG performed data selection and drafted the paper. All the authors revised and approved the manuscript. Large language models were not used in the manuscript.

Corresponding author

Correspondence to Fabrizia Gelardi.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Artesani, A., Bruno, A., Gelardi, F. et al. Empowering PET: harnessing deep learning for improved clinical insight. Eur Radiol Exp 8, 17 (2024). https://doi.org/10.1186/s41747-023-00413-1
