
Complexities of deep learning-based undersampled MR image reconstruction

Abstract

Artificial intelligence has opened a new path of innovation in magnetic resonance (MR) image reconstruction of undersampled k-space acquisitions. This review offers readers an analysis of the current deep learning-based MR image reconstruction methods. The literature in this field shows exponential growth, both in volume and complexity, as the capabilities of machine learning in solving inverse problems such as image reconstruction are explored. We review the latest developments, aiming to assist researchers and radiologists who are developing new methods or seeking to provide valuable feedback. We shed light on key concepts by exploring the technical intricacies of MR image reconstruction, highlighting the importance of raw datasets and the difficulty of evaluating diagnostic value using standard metrics.

Relevance statement: Increasingly complex algorithms output reconstructed images that are difficult to assess for robustness and diagnostic quality, necessitating high-quality datasets and collaboration with radiologists.

Key points

• Deep learning-based image reconstruction algorithms are increasing both in complexity and performance.

• The evaluation of reconstructed images may mistake perceived image quality for diagnostic value.

• Collaboration with radiologists is crucial for advancing deep learning technology.

Background

Magnetic resonance (MR) is a popular modality in medical imaging for its versatility, nonionizing radiation, and good soft-tissue contrast. However, its relatively long acquisition times, incurring high costs and patient discomfort, have led to a flurry of research to improve imaging speed without compromising image quality.

MR data is acquired in the spatial frequency domain, referred to as k-space. Applying an inverse Fourier transform gives a reconstructed spatial image. Sampling less of the k-space decreases scan time but may introduce aliasing. The Nyquist criterion specifies a minimum sampling density required to avoid detrimental wrap-around artifacts during reconstruction, and sampling under this minimum is considered undersampling. The acceleration factor denotes the extent of undersampling; an acceleration factor of 4 corresponds to sampling 25% of the lines in k-space [1,2,3].
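To make the relationship between undersampling and aliasing concrete, the sketch below (an illustrative example, not taken from the reviewed works) retrospectively undersamples a synthetic two-dimensional k-space by keeping every fourth phase-encode line, corresponding to an acceleration factor of 4, and reconstructs it with a zero-filled inverse Fourier transform.

```python
# Illustrative sketch: retrospective Cartesian undersampling of a fully sampled
# 2D k-space and zero-filled reconstruction. The synthetic image and the simple
# equispaced mask are assumptions for demonstration only.
import numpy as np

def undersample(kspace: np.ndarray, acceleration: int = 4) -> np.ndarray:
    """Keep every `acceleration`-th phase-encode line; zero the rest."""
    mask = np.zeros(kspace.shape[0], dtype=bool)
    mask[::acceleration] = True            # R = 4 keeps 25% of the lines
    return kspace * mask[:, None]

def zero_filled_recon(kspace: np.ndarray) -> np.ndarray:
    """Inverse 2D Fourier transform of (possibly undersampled) k-space."""
    return np.fft.ifft2(np.fft.ifftshift(kspace))

image = np.random.rand(256, 256)                      # stand-in for a fully sampled slice
kspace_full = np.fft.fftshift(np.fft.fft2(image))     # simulate the acquired k-space
kspace_us = undersample(kspace_full, acceleration=4)  # sample below the Nyquist criterion
aliased = np.abs(zero_filled_recon(kspace_us))        # exhibits wrap-around artifacts
```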

Undersampled acquisitions are traditionally reconstructed using parallel imaging methods with multiple receive coils, such as sensitivity encoding (SENSE) or the generalized autocalibrating partial parallel acquisition (GRAPPA) techniques [1, 2]. However, parallel imaging reconstructions suffer a signal-to-noise ratio (SNR) loss at least proportional to the square root of the reduction in scan time [2]. Compressed sensing is an alternative: an iterative optimization process that reconstructs images using a priori information, namely sparsity [3]. Compressed sensing-based reconstructions suffer from blurring and ringing artifacts, which are considered less detrimental to diagnostic quality [4].
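The SNR penalty mentioned above is commonly written as follows in the parallel imaging literature [2]: the square root of the acceleration factor R is the unavoidable loss from the shortened scan, and the coil-geometry-dependent g-factor (g ≥ 1) adds further, spatially varying loss, which is why the loss is "at least" proportional to the square root of the scan-time reduction.

```latex
% Standard parallel-imaging SNR relation (see [2]): R is the acceleration
% factor and g >= 1 is the coil-geometry-dependent g-factor.
\mathrm{SNR}_{\text{parallel}} = \frac{\mathrm{SNR}_{\text{full}}}{g\,\sqrt{R}}, \qquad g \ge 1
```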

Advances in artificial intelligence and developments in computational infrastructure have made machine learning techniques viable candidates to aid in medical image reconstruction. Deep learning (DL)-based image reconstruction methods use reference data as a priori information for learning features to exploit the similarities in patients’ anatomy, and proposed methods have shown superior performance compared to non-DL-based solutions [5,6,7,8,9]. Moreover, some DL methods integrate traditional iterative algorithms, such as compressed sensing, to further improve their performance [10,11,12]. Public datasets, such as the fastMRI dataset, support the effort by providing a consistent benchmark for new machine-learning approaches in MR image reconstruction [13].

Radiologists need to remain up to speed with the latest advances in image reconstruction. A basic understanding of the research and development is crucial to providing researchers with valuable feedback for further improvement. Furthermore, radiologists need to become acquainted with novel artifacts exclusive to machine-assisted reconstructions, should these new methods make it into clinical use. This narrative review aims to introduce DL-based image reconstruction and provide insight into researchers’ current challenges. Following an introductory section on DL-based MRI reconstruction networks, we explore some technical topics specific to MR image reconstruction networks that have received substantial attention in the literature, followed by topics on the training and evaluation of these networks.

DL-based MR image reconstruction methods

Deep learning potentially provides faster imaging compared to current techniques by enabling higher acceleration factors (Fig. 1). Traditional methods show limited improvements in speed compared to the latest DL techniques, as shown in the overview of image reconstruction methods: they achieve acceleration factors of 4 to 5 before image quality deteriorates too much [14]. Deep learning techniques demonstrate a significant improvement, allowing acceleration factors of up to 12 or more, depending on the intended use of the output, but their clinical efficacy has yet to be established [15].

Fig. 1 Comparison of image reconstruction processes. Common undersampling factors for each method group and a few example algorithms are noted. If no reconstruction method is used, a simple inverse Fourier transform of an undersampled acquisition yields an aliased image. Parallel imaging reconstruction methods, such as the generalized autocalibrating partial parallel acquisition (GRAPPA) or sensitivity encoding (SENSE) algorithms, can produce acceptable results up to undersampling rates of 4. Compressed sensing can achieve similar results with increased levels of undersampling. Deep learning methods, such as KIKI-net, automated transform by manifold approximation (AUTOMAP), and GrappaNet, have shown the potential to achieve good results with significantly higher levels of undersampling

Reconstruction with DL involves transforming input data into an output image, which can be approached as an image-to-image task. Figure 2 illustrates a generic U-Net for a two-dimensional MR image reconstruction task, where a loss function is used to compare the output to the ground truth. A U-Net in its basic form is limited to image-to-image reconstruction, without any domain-specific knowledge, such as k-space, included. However, dual-domain networks allow both image and k-space information to be utilized, and scan-specific methods can restore missing k-space [9, 16, 17].

Fig. 2 Diagram of a U-Net architecture. a The number of parameters in a network mainly depends on the input data, which can be expanded with parallel imaging and complex values. In this example, the input image size is 512 pixels square; the input may be expanded to 64 channels with sensitivity maps, or doubled to 128 channels when the complex data is separated into real and imaginary components. b Illustration of a U-Net architecture, where data is propagated through all paths in the network: the downsampling path analyses the input, and the upsampling path synthesizes an output. c The output is compared with the ground truth to calculate a score using a provided loss function
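As a concrete, deliberately minimal illustration of the training loop sketched in Fig. 2, the snippet below defines a single-level U-Net-style network in PyTorch with one analysis (downsampling) path, one synthesis (upsampling) path with a skip connection, and a loss comparing the output to the ground truth. The channel sizes, depth, and choice of MSE loss are illustrative assumptions rather than any of the reviewed architectures.

```python
# Deliberately minimal, single-level U-Net-style network for 2D image-to-image
# MR reconstruction, echoing Fig. 2. Channel sizes, depth, and the MSE loss are
# illustrative assumptions only.
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    def __init__(self, in_ch: int = 1, out_ch: int = 1, feat: int = 32):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool2d(2)
        self.bottleneck = nn.Sequential(nn.Conv2d(feat, 2 * feat, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(2 * feat, feat, kernel_size=2, stride=2)
        self.dec = nn.Sequential(nn.Conv2d(2 * feat, feat, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(feat, out_ch, 3, padding=1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        skip = self.enc(x)                                   # analysis path
        latent = self.bottleneck(self.down(skip))            # coarsest level
        up = self.up(latent)                                 # synthesis path
        return self.dec(torch.cat([up, skip], dim=1))        # skip connection

model = TinyUNet()
loss_fn = nn.MSELoss()
zero_filled = torch.randn(1, 1, 256, 256)                    # aliased input image
ground_truth = torch.randn(1, 1, 256, 256)                   # fully sampled reference
loss = loss_fn(model(zero_filled), ground_truth)             # score against ground truth
loss.backward()                                              # gradients for an optimizer step
```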

MR image reconstruction today exists in a tension among three measures: image quality, robustness, and the acceleration factor. Image quality can be improved by applying a cascade of networks, each of them evaluating the reconstruction independently [6]. However, such networks have been shown to be sensitive to the sampling pattern, acceleration factor, noise level, and anatomical deviations in unseen data [18, 19]. These factors may raise concerns about their robustness, and including controls for data consistency may reduce these sensitivities. Finally, various strategies exist for simulating the undersampling of fully sampled data, which may optimize the learning curve of a network in undersampled image reconstruction tasks. Many of these concepts have received considerable attention, and we examine each of them in detail in the following subsections.

Loss functions

The loss function is the function to be optimized during network training and is critical in any DL method. Commonly used loss functions are the mean-squared error (MSE) and the structural similarity index measure (SSIM), which aim to maximize the similarity between a reconstructed image and its reference image [20]. Most DL methods are supervised: the loss function takes reference data (such as a reconstructed image from fully sampled k-space) as input to compute the loss. Yaman et al. [21] propose taking a fraction of the input k-space data as the input for the loss function as a form of self-supervision. They show that using sub-sampled k-space data as a reference does not significantly reduce the performance of the networks compared to using fully sampled images as a reference. This is useful when fully sampled reference data is unavailable and supervised methods cannot be trained. Zhou et al. [22] argue that fully sampled data is more expensive and more prone to motion and other accumulating errors; instead, short undersampled acquisitions can be obtained under more ideal imaging conditions.

A loss function can also be a combination of loss functions. In addition to the MSE loss function, Yang et al. [23] also employ perceptual and adversarial loss functions. The perceptual loss is output by a pre-trained network that estimates human perceptual similarity (as opposed to a statistical similarity, such as SSIM). The adversarial loss is obtained by implementing a generative adversarial network (GAN), where two networks, a discriminator and a generator, contest each other: the discriminator aims to distinguish between an original reference image and a reconstruction output by the generator, and both networks are optimized for their discriminative and generative ability, respectively.
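The sketch below illustrates how such a composite objective can be assembled: a pixel-wise MSE term plus an adversarial term from a small discriminator, in the spirit of the GAN-based training described above. The tiny discriminator, the fixed 64 × 64 input size, and the weighting factors are illustrative assumptions, not the configuration of Yang et al. [23].

```python
# Illustrative composite loss: pixel-wise fidelity plus an adversarial term.
# The discriminator below is a toy stand-in and assumes 1-channel 64x64 inputs.
import torch
import torch.nn as nn

discriminator = nn.Sequential(
    nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),  # 64x64 -> 32x32
    nn.Flatten(), nn.Linear(16 * 32 * 32, 1))                     # real/fake logit
bce = nn.BCEWithLogitsLoss()
mse = nn.MSELoss()

def generator_loss(recon: torch.Tensor, reference: torch.Tensor,
                   w_pixel: float = 1.0, w_adv: float = 0.01) -> torch.Tensor:
    """Pixel-wise error plus a term rewarding reconstructions the discriminator
    judges to be real; the weights balance fidelity against realism."""
    pixel_term = mse(recon, reference)
    adv_term = bce(discriminator(recon), torch.ones(recon.shape[0], 1))
    return w_pixel * pixel_term + w_adv * adv_term

loss = generator_loss(torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64))
```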

Undersampling strategies

An undersampled image reconstruction network can be trained by utilizing datasets with fully sampled images and partially eliminating k-space data to achieve an intended level of undersampling. The resulting output is then compared to the original data using a loss function, with the goal of achieving similar quality. An ideal sampling strategy is one where the information of a ground truth image is best preserved given an amount of undersampling. Incoherence, a condition prescribed by compressed sensing, means that the information required for image reconstruction is distributed throughout k-space. Deep learning networks, however, are not well suited for aggregating such spread-out information, so the incoherent sampling favored by compressed sensing may not be preferable for DL-based reconstruction.

One common sampling strategy is variable density sampling. Similar to GRAPPA, the center of k-space is sampled more densely. This strategy is derived from compressed sensing, where it helps achieve the required incoherence [3]. Defazio [24] proposes an equidistant sampling strategy, starting from the second k-space line. This way, redundant sampling is avoided, while equidistant sampling allows information to be localized within a small region, which is ideal for convolutional networks.
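The two families of patterns can be sketched as simple one-dimensional phase-encode masks, as below. The quadratic density function and the offset of one line are illustrative choices; they are not the exact schemes of the cited works.

```python
# Illustrative 1D phase-encode masks: variable-density (centre-weighted random)
# and equidistant sampling starting from the second line. The quadratic density
# is an arbitrary choice for demonstration.
import numpy as np

def variable_density_mask(n_lines: int, acceleration: int, seed: int = 0) -> np.ndarray:
    """Randomly keep ~n_lines/acceleration lines, favouring the k-space centre."""
    rng = np.random.default_rng(seed)
    dist = np.abs(np.arange(n_lines) - n_lines / 2) / (n_lines / 2)
    prob = (1.0 - dist) ** 2                            # denser near the centre
    prob /= prob.sum()
    keep = rng.choice(n_lines, size=n_lines // acceleration, replace=False, p=prob)
    mask = np.zeros(n_lines, dtype=bool)
    mask[keep] = True
    return mask

def equidistant_mask(n_lines: int, acceleration: int, offset: int = 1) -> np.ndarray:
    """Keep every `acceleration`-th line, starting from the second line (offset=1)."""
    mask = np.zeros(n_lines, dtype=bool)
    mask[offset::acceleration] = True
    return mask

vd_mask = variable_density_mask(256, acceleration=4)
eq_mask = equidistant_mask(256, acceleration=4)
```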

Adaptive sampling strategies and their underlying reconstruction algorithm are optimized in a data-driven manner. Bahadir et al. [25] propose a method that estimates a sampling density in k-space, describing which positions in k-space are most favorable given the underlying image reconstruction network. Aggarwal and Jacob [26] note that such an approach does not account for potential dependencies between sampling locations and, in their method, learn both the sampling pattern and the reconstruction problem jointly. Contrary to these arguments, Bakker et al. [27] show that their adaptive models learn to be explicitly nonadaptive. They hypothesize that adaptivity may compromise the model’s ability to capture relevant patterns in the data.

Data consistency

Data consistency enforces that a reconstruction does not deviate from the underlying k-space data that was originally sampled. Schlemper et al. [6] introduce data consistency layers inserted intermittently throughout the architecture. Such a layer applies a penalty based on the difference between the sampled and output k-space values. This penalty is based on the assumption that the noise in k-space follows a normal distribution. In practice, k-space noise is more complex, as noise accumulates from various sources around the scanner. Cheng et al. [28] propose to learn a more accurate representation of the noise distribution using their learned data consistency reconstruction method, giving more accurate penalties to data inconsistencies. In general, Hammernik et al. [29] show that networks with some form of data consistency substantially outperform networks without, implying that including data consistency is almost mandatory for MR image reconstruction. However, they note that the influence of data consistency wanes when acceleration factors are further increased, as there is less k-space to keep consistent.
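A minimal sketch of a data-consistency step is given below: wherever a k-space location was actually sampled, the network estimate is replaced by, or blended with, the measured value. Modeling the noise weighting as a single scalar lam is a simplifying assumption that corresponds to the normal-noise case discussed above, not a learned noise model.

```python
# Illustrative data-consistency step. lam = inf performs hard replacement by the
# measurement; a finite lam blends measurement and network estimate, mirroring
# a scalar noise-weighted penalty (a simplifying assumption).
import numpy as np

def data_consistency(k_net: np.ndarray, k_meas: np.ndarray,
                     mask: np.ndarray, lam: float = np.inf) -> np.ndarray:
    """Enforce consistency with measured k-space at sampled locations."""
    if np.isinf(lam):
        return np.where(mask, k_meas, k_net)            # hard replacement
    blended = (k_net + lam * k_meas) / (1.0 + lam)      # noise-weighted average
    return np.where(mask, blended, k_net)

mask = np.zeros((256, 256), dtype=bool)
mask[::4, :] = True                                     # sampled phase-encode lines
k_meas = np.random.randn(256, 256) * mask               # measured (undersampled) k-space
k_net = np.random.randn(256, 256)                       # network's k-space estimate
k_dc = data_consistency(k_net, k_meas, mask)            # measured lines restored exactly
```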

Parallel imaging

Parallel imaging gives additional information on the spatial location of the signal by using the sensitivity of each receive coil. The fastMRI project included a challenge in which participants submitted their image reconstruction algorithms to compete in various competitions, one of which combined all coil information into a single k-space volume [30]. In the following year, the next iteration of this challenge omitted this competition, as the results were considered clinically irrelevant [31], suggesting that separate coil signal acquisitions cannot be combined without losing valuable information.

Deep learning-based methods that include multiple coil data as input are becoming more common. Sriram et al. [32] propose a method in which the traditional parallel imaging technique GRAPPA plays an integral role. GRAPPA is a multi-coil parallel imaging technique where missing k-space lines due to undersampling are estimated for each coil using information obtained by densely sampling the center of k-space, known as calibration lines [1]. In their model, Sriram et al. use these calibration lines to estimate sensitivity maps, which are then applied to the multi-coil k-space input data. Wang et al. [33] perform parallel image reconstruction without integrating parallel imaging techniques but outperform these traditional methods using fewer calibration lines. Furthermore, they show relatively low sensitivity to the number of calibration lines. Leynes et al. [34] propose a method capable of calibrationless parallel image reconstruction by jointly solving the undersampled missing data of each coil and the reconstruction problem.
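The following sketch shows the sensitivity-map "expand" and "combine" operators that such multi-coil methods build on: an image is projected onto individual coil images with the maps and recombined by conjugate-weighted summation [2, 32]. Estimating the maps themselves, for example from calibration lines, is outside the snippet; the random, normalised maps used here are placeholders.

```python
# Illustrative sensitivity-map operators for multi-coil data. The random,
# normalised maps are placeholders; real maps would be estimated, e.g., from
# calibration lines.
import numpy as np

def expand_to_coils(image: np.ndarray, sens_maps: np.ndarray) -> np.ndarray:
    """Project one coil-combined image onto individual coil images."""
    return sens_maps * image[None, ...]                 # shape: (coils, H, W)

def combine_coils(coil_images: np.ndarray, sens_maps: np.ndarray) -> np.ndarray:
    """Combine coil images into one image by conjugate-weighted summation."""
    return np.sum(np.conj(sens_maps) * coil_images, axis=0)

coils, H, W = 8, 128, 128
sens = np.random.randn(coils, H, W) + 1j * np.random.randn(coils, H, W)
sens /= np.sqrt(np.sum(np.abs(sens) ** 2, axis=0, keepdims=True))   # normalise maps
img = np.random.randn(H, W) + 1j * np.random.randn(H, W)
combined = combine_coils(expand_to_coils(img, sens), sens)          # ≈ img for normalised maps
```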

Exploiting interslice correlations

Image reconstruction is most commonly performed on two-dimensional image slices, but consecutive slices in a multislice MR acquisition generally have strong interslice correlation. Various ways have been suggested to leverage multislice correlations effectively. Pang and Zhang [35] propose a method that exploits this correlation using interpolated compressed sensing techniques. Du et al. [36] take adjacent k-space slices as additional input and effectively interpolate this input to output the reconstruction of a single interpolated k-space slice. Xiao et al. [37] propose a method that uses deformable convolutions, jointly exploiting correlations among and within slices. They argue that deformable convolutions allow for efficient information extraction across neighboring slices, addressing complicated data redundancies more effectively than traditional fixed two-dimensional grids.

Interslice correlation is exploited most effectively using three-dimensional volumes; however, training models on data of this size is very demanding in terms of model complexity and memory requirements. Du et al. [36] propose a model that they claim alleviates these concerns: it iterates across each dimension of a three-dimensional sample, and each iteration takes the information from the previous iteration as prior knowledge, thus minimizing the complexity of the model.

Domains

Working solely in image space constrains the use of the previously discussed exploitable information, such as parallel imaging or interslice correlations, which may operate in k-space. Image reconstruction for undersampled acquisitions can be performed from image space to image space, from k-space to k-space, or from k-space to image space in a hybrid fashion, depending on the respective inputs and outputs. Most proposed methods expect image space as input, presumably because most available data is in image space, but this disregards the potential of utilizing feature representations from different domains.

Under the premise of a “best of both worlds”, KIKI-net is a method operating on k-space first (K), image space second (I), and repeating (KI) in a hybrid fashion [16]. The first K-net is trained, and then the next I-net is trained using the output of the previous K-net; this process continues until the entire KIKI-net is fully trained. According to the authors, the I-nets were especially strong in restoring detailed structures but failed to remove aliasing artifacts, and instead further accentuated them. Meanwhile, the K-nets removed artifacts successfully but had weaker structure-restoring capabilities. Ran et al. [38] propose to process both image and k-space domains in parallel. They argue that processing sequentially, as in KIKI-net and its variants, ignores possible interplay between domains.
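The data flow of such alternating cross-domain methods can be sketched as below: a k-space pass, a transform to image space, an image-space pass, and a transform back, repeated a few times. The knet and inet callables are placeholders for trained sub-networks; identity functions are used here only to show the sequence of domain switches, not the KIKI-net architecture itself.

```python
# Illustrative data flow of an alternating cross-domain (KIKI-style) pipeline.
# `knet` and `inet` are placeholders for trained sub-networks.
import numpy as np

def cross_domain_pass(kspace: np.ndarray, knet, inet, iterations: int = 2) -> np.ndarray:
    for _ in range(iterations):
        kspace = knet(kspace)              # K: interpolate / de-alias in k-space
        image = np.fft.ifft2(kspace)       # switch to image space
        image = inet(image)                # I: restore structure in image space
        kspace = np.fft.fft2(image)        # switch back to k-space
    return np.abs(np.fft.ifft2(kspace))

identity = lambda x: x                     # stand-ins for trained K-nets and I-nets
recon = cross_domain_pass(np.random.randn(128, 128).astype(complex), identity, identity)
```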

Further motivated by the idea that employing more representations of the same data results in better performance, Wang et al. [39] propose IKWI-net, which includes the wavelet (W) transform domain. The authors argue that the wavelet domain is particularly effective in suppressing artifacts in smooth areas but may also enhance artifacts mistaken for real structures. Tong et al. [40] instead propose HIWDNet, which excludes the k-space domain and uses entirely different network architectures for each domain.

Zhu et al. [8] propose a more genuine hybrid solution: automated transform by manifold approximation (AUTOMAP), which learns to reconstruct a spatial image directly from k-space. AUTOMAP performs the reconstruction without prior knowledge of mathematical transformations; in the MR image reconstruction context, it implicitly learns an equivalent of the Fourier transform. AUTOMAP has been criticized for its network complexity, and later variants reduce complexity by simply supplying the Fourier transform explicitly [41, 42].

Complex numbers

Methods such as AUTOMAP use convolutional operations with only the magnitude of the originally complex-valued k-space data. The complex MR signal comprises a real and an imaginary component. The imaginary component is phase-shifted by 90° and has independent and uncorrelated noise but is otherwise identical to the real component. The derived values, magnitude and phase, carry nonredundant information that underlines the relationship between the components. Most reconstruction methods that use k-space as input assume that the complex-valued signal components are independent, but this results in information loss due to their inherent interdependency [33]. This loss is made apparent by experiments performed by Cole et al. [43], which show reconstructions using separate real- and imaginary-valued channels to be consistently superior to magnitude-based reconstructions.

Dedmari et al. [44] were the first to explore learning with complex-valued data using complex-valued convolutions. This was expanded by Wang et al. [33], who include parallel imaging. They claim their convolutional network using complex values achieves at least comparable performance to magnitude-valued convolutional networks while requiring half the network size. Feng et al. [45] propose dual-octave convolutions, which divide the real and imaginary components into low- and high-frequency subcomponents. In k-space, low frequencies primarily contain image contrast information, and high frequencies hold the finer details. Processing these high and low frequencies separately is hypothesized to reduce spatial redundancy, making effective reconstructions easier. Finally, Terpstra et al. [46] show that separating the loss function into two loss functions, for magnitude and phase, is noninferior in the absolute domain and superior in the complex domain.
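A complex-valued convolution of the kind used by these methods can be emulated with two real-valued convolutions following the expansion (a + ib)(x + iy) = (ax − by) + i(ay + bx), as sketched below. Channel counts and kernel size are illustrative assumptions.

```python
# Illustrative complex-valued 2D convolution built from two real convolutions,
# following (a + ib)(x + iy) = (ax - by) + i(ay + bx).
import torch
import torch.nn as nn

class ComplexConv2d(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3):
        super().__init__()
        pad = kernel_size // 2
        self.conv_real = nn.Conv2d(in_ch, out_ch, kernel_size, padding=pad)  # weights "a"
        self.conv_imag = nn.Conv2d(in_ch, out_ch, kernel_size, padding=pad)  # weights "b"

    def forward(self, real: torch.Tensor, imag: torch.Tensor):
        out_real = self.conv_real(real) - self.conv_imag(imag)   # ax - by
        out_imag = self.conv_real(imag) + self.conv_imag(real)   # ay + bx
        return out_real, out_imag

layer = ComplexConv2d(1, 8)
re_out, im_out = layer(torch.randn(1, 1, 64, 64), torch.randn(1, 1, 64, 64))
```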

Deep learning MR image reconstruction training and evaluation

Understanding the previous topics may lead to wise architectural choices, but researchers are restricted to using only the data they have. The output of DL methods is based on the data they were trained on, so dataset quality is crucial for their performance. Data is invariably scarce, and methods need to use available data efficiently. Dataset quality is determined by the number of images, the quality of the annotation of the target anatomy or pathology, the standard of reference used, and other readily available metadata. Datasets are limited by time, technology, and regulations, while the methods that use them are primarily limited by available compute and ingenuity. These methods are typically modeled after blueprints of existing network architectures, heavily modified to make them suitable for MR image reconstruction.

Evaluation of MR image reconstruction is nuanced, as reconstructions are expected to maintain general image quality and be robust in preserving any clinical pathologies. Results from the 2020 fastMRI challenge state that the top three methods, according to qualitative radiologist evaluation, still created hallucinatory features [31]. Meanwhile, statistical evaluation, such as the SSIM, shows results with up to 95% similarity to the ground truth. It is not implausible that relevant pathologies may still be hiding, or be obscured by hallucinations, in the final dissimilar 5%. This implies a discordance between the image-derived statistical metrics for evaluating reconstructed images and the radiologist-defined diagnostic quality of an image. In other words, the diagnostic value of an image is unlikely to be defined by a single metric.
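The gap between a global statistical score and local diagnostic relevance can be illustrated with a toy experiment: corrupting a lesion-sized patch barely moves the whole-image SSIM, while the SSIM computed over the affected region collapses. The synthetic image and patch size below are assumptions chosen purely for illustration.

```python
# Illustrative toy experiment: global versus local SSIM after corrupting a
# small, lesion-sized patch. Image content and patch size are synthetic.
import numpy as np
from skimage.metrics import structural_similarity as ssim

rng = np.random.default_rng(0)
reference = rng.random((320, 320))
reconstruction = reference.copy()
reconstruction[150:160, 150:160] = 0.0              # wipe out a 10x10 "lesion"

global_score = ssim(reference, reconstruction, data_range=1.0)
local_score = ssim(reference[145:165, 145:165],
                   reconstruction[145:165, 145:165], data_range=1.0)
print(f"global SSIM: {global_score:.3f}, local SSIM: {local_score:.3f}")
```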

Diagnostic quality

In the literature, it is common to demonstrate the capabilities of novel methods by comparing their output against established traditional and DL-based image reconstruction techniques. However, it is challenging to find metrics that directly correlate with the diagnostic quality of an image. Some image reconstruction methods position themselves as artifact removal methods, implying that the undersampled acquisitions are images beset with artifacts [47]. Under such a definition, evaluation metrics such as MSE or SSIM cannot realistically quantify the quality of artifact removal. Blind evaluation by experienced radiologists is employed instead, where equaling or exceeding the diagnostic performance of radiologists while reducing acquisition time is the end goal [47].

Quantitative metrics may fail to reflect deficiencies in reconstructing fine details correctly. Zhao et al. [48] show insignificant differences between the SSIM values of images with and without lesions using various reconstruction methods, implying a weak correlation between SSIM and lesion detection capability. Mason et al. [49] compared a number of evaluation metrics and agreed with this observation, further noting similarly weak performance for the root MSE. Instead, less commonly used metrics, such as visual information fidelity [50], the feature similarity index [51], and the noise quality metric [52], demonstrate a higher correlation with radiological assessment of image quality. However, these metrics have comparatively high computational costs.

These concerns about weak correlation do not align well with dataset projects such as the fastMRI project, which promotes statistical metrics for easier comparisons. Various editorials have proposed guidelines to improve the statistical reporting of studies that apply artificial intelligence to radiology [53,54,55]. Solutions that better quantify diagnostic image quality are critical for improving the feasibility of clinical integration of new methods.

Robustness

Robustness is the ability of a method not to deteriorate in performance due to perturbations or other structural changes. For example, deviations between test and training data of brain MR images, which are likely given the high level of anatomical detail, will reduce network generalizability [19]. The choice of datasets in new publications is rarely motivated, and most datasets are collected from a single vendor. Methods developed using the fastMRI dataset, sourced from Siemens scanners, and applied to scans sourced from GE or Philips scanners show reduced performance [31]. This is highlighted by a study that applies stability tests to reconstruction methods to evaluate their performance after small data perturbations, such as simulated motion artifacts [56]; it shows that a change of vendor should also be treated as a perturbation the learning algorithm is unprepared for. Knoll et al. [18] show that a mismatch in SNR has the most substantial influence on performance.

Antun et al. [56] propose a set of instability tests for robustness; an example is slightly shifting the undersampling ratio, which may lead to severe errors during reconstruction. They conclude that a robust network would need to be retrained for different combinations of acquisition size, undersampling ratio, and other such parameters to decrease its sensitivity to various perturbations. Solutions to improve robustness with respect to specific artifacts have also been proposed. Defazio et al. [57] introduce an adversarial loss to combat banding artifacts, which are particularly noticeable in low-SNR regions. Their method is similar to a GAN, but the discriminator is focused on identifying banding, such that the generator is penalized for producing reconstructions with banding.
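A perturbation test in this spirit can be sketched as follows: reconstruct the same data at the nominal acceleration, then again with a slightly shifted acceleration factor and added k-space noise, and measure how far the output drifts. The zero-filled reconstruct function is only a placeholder for a trained model, and the perturbation magnitudes are arbitrary assumptions.

```python
# Illustrative instability test: compare reconstructions of the same slice with
# and without a small perturbation (shifted acceleration + added k-space noise).
import numpy as np

def reconstruct(kspace_undersampled: np.ndarray) -> np.ndarray:
    return np.abs(np.fft.ifft2(kspace_undersampled))    # placeholder for a trained model

def undersample(kspace: np.ndarray, acceleration: int) -> np.ndarray:
    mask = np.zeros(kspace.shape[0], dtype=bool)
    mask[::acceleration] = True
    return kspace * mask[:, None]

rng = np.random.default_rng(1)
image = rng.random((256, 256))                           # stand-in for a test slice
kspace = np.fft.fft2(image)

nominal = reconstruct(undersample(kspace, acceleration=4))
perturbed_kspace = kspace + 0.05 * rng.standard_normal(kspace.shape)
perturbed = reconstruct(undersample(perturbed_kspace, acceleration=5))
drift = np.linalg.norm(perturbed - nominal) / np.linalg.norm(nominal)
print(f"relative output change under perturbation: {drift:.2%}")
```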

Due to the limited availability of raw k-space datasets, researchers often synthesize k-space data from image data using forward Fourier transforms. However, this approach has been labeled a “data crime”, a term coined by Shimron et al. [58]. They caution against using synthesized k-space data, which may lead to over-optimistic results. Deep learning networks benefit from data processed using vendor-specific hidden pipelines and will often produce optimistic results when compared to vendor-processed ground truth data. Undersampled image reconstruction networks are especially susceptible to these benefits, as the unprocessed undersampled acquisitions differ significantly from the processed and simulated k-space they are trained on.
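The problematic synthesis step itself is short, as sketched below: Fourier-transforming an already-processed magnitude image yields a conjugate-symmetric, phase-free k-space that never exposes the network to true phase variation, coil effects, or raw-data noise. The random image stands in for a processed, DICOM-like input.

```python
# Illustrative "data crime": synthesizing k-space from a processed magnitude image.
import numpy as np

processed_magnitude = np.random.rand(256, 256)          # stand-in for a DICOM-style image
synthetic_kspace = np.fft.fft2(processed_magnitude)     # real input -> conjugate-symmetric k-space
# Networks trained on `synthetic_kspace` never see true phase variation, coil
# effects, or raw-data noise, which can inflate apparent reconstruction performance.
```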

Data scarcity

Large datasets serve as the foundation for many DL models. Image reconstruction networks may benefit from more technically intricate datasets that offer complete k-space sampling; however, obtaining such datasets poses a formidable challenge. Public dataset releases are an important kickstart for new DL research. There are no public datasets including image-guided interventional MR data; unsurprisingly, we note a research gap in the application of undersampled image reconstruction networks developed for interventional radiology. Despite this, DL-based image reconstruction may be a particularly viable option for this field, as the enhanced reconstruction speeds can be more effectively leveraged, and the potential decline in image quality may be less detrimental.

Transfer learning provides a potential alleviation of the data scarcity problem: a network trained on a task with large available datasets is transferred to another task, usually one with scarce datasets, through network finetuning. Han et al. [47] demonstrate its potential by using networks pre-trained on radial computed tomography data for radial MR, then finetuning with radial MR data. Transfer learning has also shown promise by reconstructing images using a network initially trained to reconstruct large collections of natural images and brain MR images [59]. Huang et al. [19] observe that network finetuning could even be skipped if a network is pretrained using a large collection of generic MR images rather than natural everyday images. If transfer learning proves effective in MR image reconstruction, even uncommon procedures that produce little data could be accelerated.
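A typical finetuning step in such a transfer-learning workflow looks roughly like the sketch below: load pretrained weights, freeze the earliest layers, and continue training only the remaining parameters on the scarce target data. The stand-in network, the commented-out checkpoint file name, and the freezing policy are illustrative assumptions.

```python
# Illustrative finetuning step for transfer learning. The network, the
# checkpoint name, and the choice to freeze only the first layer are assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(                                 # stand-in reconstruction network
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1))

# Hypothetical checkpoint from the data-rich source task (e.g., natural images):
# model.load_state_dict(torch.load("pretrained_source_domain.pt"))

for param in model[0].parameters():                    # keep generic early features fixed
    param.requires_grad = False

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)
# ... continue training on the small target-domain dataset as usual ...
```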

Conclusions

Innovations in DL have given MR image reconstruction from undersampled acquisitions a boost. We explored the challenges posed by the complex nature of radiological input data and the importance of utilizing k-space information in undersampled image reconstruction. Key factors to consider in method design are those that leverage the potential of the available input data, such as inventive loss functions and versatile sampling strategies. Performance can be further enhanced if the data is enriched with additional information, such as the original raw k-space data, coil sensitivity maps, and adjacent slices from a multislice acquisition.

We discussed the difficulty of assessing the reconstructed image output for its diagnostic quality and robustness. We stress that while DL reconstruction output may provide high-quality images upon initial inspection, the reconstructions may also include hallucinations or omit small structural elements, limiting their diagnostic value. Moreover, it is important to ensure the robustness of MR image reconstruction models, given the sensitive nature of radiological data. One way to enhance robustness is by addressing the issue of data scarcity.

While notable steps have been taken in the right direction, it is crucial that new DL methods intended for clinical use be developed in collaboration with radiologists. These efforts should be supported with high-quality datasets, ideally open access and in the form of multi-coil k-space data.

Availability of data and materials

Not applicable.

Abbreviations

AUTOMAP:

Automated transform by manifold approximation

DL:

Deep learning

GAN:

Generative adversarial network

GRAPPA:

Generalized autocalibrating partial parallel acquisition

MR:

Magnetic resonance

MSE:

Mean squared error

SENSE:

Sensitivity encoding

SNR:

Signal-to-noise ratio

SSIM:

Structural similarity index measure

References

  1. Griswold MA, Jakob PM, Heidemann RM et al (2002) Generalized autocalibrating partially parallel acquisitions (GRAPPA). Magn Reson Med 47:1202–1210. https://doi.org/10.1002/mrm.10171

  2. Pruessmann KP, Weiger M, Scheidegger MB, Boesiger P (1999) SENSE: sensitivity encoding for fast MRI. Magn Reson Med 42:952–962. https://doi.org/10.1002/(SICI)1522-2594(199911)42:5%3C952::AID-MRM16%3E3.0.CO;2-S

  3. Lustig M, Donoho D, Pauly JM (2007) Sparse MRI: the application of compressed sensing for rapid MR imaging. Magn Reson Med 58:1182–1195. https://doi.org/10.1002/mrm.21391

  4. Jaspan ON, Fleysher R, Lipton ML (2015) Compressed sensing MRI: a review of the clinical literature. Br J Radiol 88:20150487. https://doi.org/10.1259/bjr.20150487

  5. Hammernik K, Klatzer T, Kobler E et al (2018) Learning a variational network for reconstruction of accelerated MRI data. Magn Reson Med 79:3055–3071. https://doi.org/10.1002/mrm.26977

  6. Schlemper J, Caballero J, Hajnal JV, Price AN, Rueckert D (2018) A deep cascade of convolutional neural networks for dynamic MR image reconstruction. IEEE Trans Med Imaging 37:491–503. https://doi.org/10.1109/TMI.2017.2760978

  7. Aggarwal HK, Mani MP, Jacob M (2019) MoDL: model-based deep learning architecture for inverse problems. IEEE Trans Med Imaging 38:394–405. https://doi.org/10.1109/TMI.2018.2865356

  8. Zhu B, Liu JZ, Cauley SF, Rosen BR, Rosen MS (2018) Image reconstruction by domain-transform manifold learning. Nature 555:487–492. https://doi.org/10.1038/nature25988

  9. Akçakaya M, Moeller S, Weingärtner S, Uğurbil K (2019) Scan-specific robust artificial-neural-networks for k-space interpolation (RAKI) reconstruction: database-free deep learning for fast imaging. Magn Reson Med 81:439–453. https://doi.org/10.1002/mrm.27420

  10. Block KT, Uecker M, Frahm J (2007) Undersampled radial MRI with multiple coils. Iterative image reconstruction using a total variation constraint. Magn Reson Med 57:1086–1098. https://doi.org/10.1002/mrm.21236

  11. Jacob M, Mani MP, Ye JC (2020) Structured low-rank algorithms: theory, magnetic resonance applications, and links to machine learning. IEEE Signal Process Mag 37:54–68. https://doi.org/10.1109/msp.2019.2950432

  12. Ravishankar S, Bresler Y (2011) MR image reconstruction from highly undersampled k-space data by dictionary learning. IEEE Trans Med Imaging 30:1028–1041. https://doi.org/10.1109/TMI.2010.2090538

  13. Zbontar J, Knoll F, Sriram A, et al (2018) fastMRI: an open dataset and benchmarks for accelerated MRI. https://doi.org/10.48550/arXiv.1811.08839

  14. Yoon JH, Nickel MD, Peeters JM, Lee JM (2019) Rapid imaging: recent advances in abdominal MRI for reducing acquisition time and its clinical applications. Korean J Radiol 20:1597–1615. https://doi.org/10.3348/kjr.2018.0931

  15. Radmanesh A, Muckley MJ, Murrell T et al (2022) Exploring the acceleration limits of deep learning variational network-based two-dimensional brain MRI. Radiol Artif Intell 4:e210313. https://doi.org/10.1148/ryai.210313

  16. Eo T, Jun Y, Kim T, Jang J, Lee HJ, Hwang D (2018) KIKI-net: cross-domain convolutional neural networks for reconstructing undersampled magnetic resonance images. Magn Reson Med 80:2188–2201. https://doi.org/10.1002/mrm.27201

  17. Liu X, Pang Y, Jin R, Liu Y, Wang Z (2022) Dual-domain reconstruction network with V-Net and K-Net for fast MRI. Magn Reson Med 88:2694–2708. https://doi.org/10.1002/mrm.29400

  18. Knoll F, Hammernik K, Kobler E, Pock T, Recht MP, Sodickson DK (2019) Assessment of the generalization of learned image reconstruction and the potential for transfer learning. Magn Reson Med 81:116–128. https://doi.org/10.1002/mrm.27355

  19. Huang J, Wang S, Zhou G, Hu W, Yu G (2022) Evaluation on the generalization of a learned convolutional neural network for MRI reconstruction. Magn Reson Imaging 87:38–46. https://doi.org/10.1016/j.mri.2021.12.003

  20. Chen Y, Schönlieb CB, Lio P et al (2022) AI-based reconstruction for fast MRI – a systematic review and meta-analysis. Proc IEEE 110:224–254. https://doi.org/10.1109/JPROC.2022.3141367

  21. Yaman B, Hosseini SAH, Moeller S, Ellermann J, Ugurbil K, Akcakaya M (2020) Self-supervised physics-based deep learning MRI reconstruction without fully-sampled data. In: 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI). IEEE, pp 921–925. https://doi.org/10.1109/ISBI45749.2020.9098514

  22. Zhou B, Dey N, Schlemper J, et al (2023) DSFormer: a dual-domain self-supervised transformer for accelerated multi-contrast MRI reconstruction. In: Proceedings of the IEEE/CVF winter conference on applications of computer vision. pp 4966–4975. https://doi.org/10.48550/arXiv.2201.10776

  23. Yang G, Yu S, Dong H et al (2018) DAGAN: deep de-aliasing generative adversarial networks for fast compressed sensing MRI reconstruction. IEEE Trans Med Imaging 37:1310–1321. https://doi.org/10.1109/TMI.2017.2785879

  24. Defazio A (2019) Offset sampling improves deep learning based accelerated MRI reconstructions by exploiting symmetry. https://doi.org/10.48550/arXiv.1912.01101

  25. Bahadir CD, Wang AQ, Dalca AV, Sabuncu MR (2020) Deep-learning-based optimization of the under-sampling pattern in MRI. IEEE Trans Comput Imaging 6:1139–1152. https://doi.org/10.1109/TCI.2020.3006727

  26. Aggarwal HK, Jacob M (2020) J-MoDL: joint model-based deep learning for optimized sampling and reconstruction. IEEE J Sel Top Signal Process 14:1151–1162. https://doi.org/10.1109/jstsp.2020.3004094

  27. Bakker T, Muckley M, Romero-Soriano A, Drozdzal M, Pineda L (2022) On learning adaptive acquisition policies for undersampled multi-coil MRI reconstruction. In: Proceedings of the 5th international conference on medical imaging with deep learning, vol 172. pp 63–85.

  28. Cheng J, Cui ZX, Huang W et al (2021) Learning data consistency and its application to dynamic MR imaging. IEEE Trans Med Imaging 40:3140–3153. https://doi.org/10.1109/TMI.2021.3096232

  29. Hammernik K, Schlemper J, Qin C, Duan J, Summers RM, Rueckert D (2021) Systematic evaluation of iterative deep neural networks for fast parallel MRI reconstruction with sensitivity-weighted coil combination. Magn Reson Med 86:1859–1872. https://doi.org/10.1002/mrm.28827

  30. Knoll F, Murrell T, Sriram A et al (2020) Advancing machine learning for MR image reconstruction with an open competition: overview of the 2019 fastMRI challenge. Magn Reson Med 84:3054–3070. https://doi.org/10.1002/mrm.28338

  31. Muckley MJ, Riemenschneider B, Radmanesh A et al (2021) Results of the 2020 fastMRI challenge for machine learning MR image reconstruction. IEEE Trans Med Imaging 40:2306–2317. https://doi.org/10.1109/TMI.2021.3075856

  32. Sriram A, Zbontar J, Murrell T, et al (2020) End-to-end variational networks for accelerated MRI reconstruction. In: Medical Image Computing and Computer Assisted Intervention – MICCAI 2020. Springer International Publishing, pp 64–73. https://doi.org/10.1007/978-3-030-59713-9_7

  33. Wang S, Cheng H, Ying L et al (2020) DeepcomplexMRI: exploiting deep residual network for fast parallel MR imaging with complex convolution. Magn Reson Imaging 68:136–147. https://doi.org/10.1016/j.mri.2020.02.002

  34. Leynes AP, Deveshwar N, Nagarajan SS, Larson PEZ (2022) Scan-specific self-supervised bayesian deep non-linear inversion for undersampled MRI reconstruction. https://doi.org/10.48550/arXiv.2203.00361

  35. Pang Y, Zhang X (2013) Interpolated compressed sensing for 2D multiple slice fast MR imaging. PLoS One 8:e56098. https://doi.org/10.1371/journal.pone.0056098

  36. Du T, Zhang Y, Shi X, Chen S (2020) Multiple slice k-space deep learning for magnetic resonance imaging reconstruction. In: 2020 42nd annual international conference of the IEEE Engineering in Medicine & Biology Society (EMBC), pp 1564–1567. https://doi.org/10.1109/EMBC44109.2020.9175642

  37. Xiao Z, Du N, Liu J, Zhang W (2021) SR-Net: a sequence offset fusion net and refine net for undersampled multislice MR image reconstruction. Comput Methods Programs Biomed 202:105997. https://doi.org/10.1016/j.cmpb.2021.105997

  38. Ran M, Xia W, Huang Y et al (2021) MD-Recon-Net: a parallel dual-domain convolutional neural network for compressed sensing MRI. IEEE Trans Radiat Plasma Med Sci 5:120–135. https://doi.org/10.1109/TRPMS.2020.2991877

  39. Wang Z, Jiang H, Du H, Xu J, Qiu B (2020) IKWI-net: a cross-domain convolutional neural network for undersampled magnetic resonance image reconstruction. Magn Reson Imaging 73:1–10. https://doi.org/10.1016/j.mri.2020.06.015

  40. Tong C, Pang Y, Wang Y (2022) HIWDNet: a hybrid image-wavelet domain network for fast magnetic resonance image reconstruction. Comput Biol Med 151:105947. https://doi.org/10.1016/j.compbiomed.2022.105947

  41. Souza R, Frayne R (2019) A hybrid frequency-domain/image-domain deep network for magnetic resonance image reconstruction. In: 2019 32nd SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI), pp 257–264. https://doi.org/10.1109/SIBGRAPI.2019.00042

  42. Oh C, Kim D, Chung JY, Han Y, Park H (2018) ETER-net: end to end MR image reconstruction using recurrent neural network. In: Machine learning for medical image reconstruction. Springer International Publishing, pp 12–20. https://doi.org/10.1007/978-3-030-00129-2_2

  43. Cole E, Cheng J, Pauly J, Vasanawala S (2021) Analysis of deep complex-valued convolutional neural networks for MRI reconstruction and phase-focused applications. Magn Reson Med 86:1093–1109. https://doi.org/10.1002/mrm.28733

  44. Dedmari MA, Conjeti S, Estrada S, Ehses P, Stöcker T, Reuter M (2018) Complex fully convolutional neural networks for MR image reconstruction. In: Machine learning for medical image reconstruction. Springer International Publishing, pp 30–38. https://doi.org/10.1007/978-3-030-00129-2_4

  45. Feng CM, Yang Z, Fu H, Xu Y, Yang J, Shao L (2021) DONet: dual-octave network for fast MR image reconstruction. IEEE Trans Neural Netw Learn Syst PP. https://doi.org/10.1109/TNNLS.2021.3090303

  46. Terpstra ML, Maspero M, Sbrizzi A, van den Berg CAT (2022) ⊥-loss: a symmetric loss function for magnetic resonance imaging reconstruction and image registration with deep learning. Med Image Anal 80:102509. https://doi.org/10.1016/j.media.2022.102509

  47. Han Y, Yoo J, Kim HH, Shin HJ, Sung K, Ye JC (2018) Deep learning with domain adaptation for accelerated projection-reconstruction MR. Magn Reson Med 80:1189–1205. https://doi.org/10.1002/mrm.27106

  48. Zhao R, Zhang Y, Yaman B, Lungren MP, Hansen MS (2021) End-to-end AI-based MRI reconstruction and lesion detection pipeline for evaluation of deep learning image reconstruction. https://doi.org/10.48550/arXiv.2109.11524

  49. Mason A, Rioux J, Clarke SE et al (2020) Comparison of objective image quality metrics to expert radiologists’ scoring of diagnostic quality of MR images. IEEE Trans Med Imaging 39:1064–1072. https://doi.org/10.1109/TMI.2019.2930338

  50. Sheikh HR, Bovik AC (2006) Image information and visual quality. IEEE Trans Image Process 15:430–444. https://doi.org/10.1109/tip.2005.859378

  51. Zhang L, Zhang L, Mou X, Zhang D (2011) FSIM: a feature similarity index for image quality assessment. IEEE Trans Image Process 20:2378–2386. https://doi.org/10.1109/TIP.2011.2109730

  52. Damera-Venkata N, Kite TD, Geisler WS, Evans BL, Bovik AC (2000) Image quality assessment based on a degradation model. IEEE Trans Image Process 9:636–650. https://doi.org/10.1109/83.841940

  53. El Naqa I, Boone JM, Benedict SH et al (2021) AI in medical physics: guidelines for publication. Med Phys 48:4711–4714. https://doi.org/10.1002/mp.15170

  54. Mongan J, Moy L, Kahn CE Jr (2020) Checklist for artificial intelligence in medical imaging (CLAIM): a guide for authors and reviewers. Radiol Artif Intell 2:e200029. https://doi.org/10.1148/ryai.2020200029

  55. Bluemke DA, Moy L, Bredella MA et al (2020) Assessing radiology research on artificial intelligence: a brief guide for authors, reviewers, and readers-from the radiology editorial board. Radiology 294:487–489. https://doi.org/10.1148/radiol.2019192515

  56. Antun V, Renna F, Poon C, Adcock B, Hansen AC (2020) On instabilities of deep learning in image reconstruction and the potential costs of AI. Proc Natl Acad Sci U S A 117:30088–30095. https://doi.org/10.1073/pnas.1907377117

  57. Defazio A, Murrell T, Recht MP (2020) MRI banding removal via adversarial training. Adv Neural Inf Process Syst 33:7660–7670

  58. Shimron E, Tamir JI, Wang K, Lustig M (2022) Implicit data crimes: machine learning bias arising from misuse of public data. Proc Natl Acad Sci 119:e2117203119. https://doi.org/10.1073/pnas.2117203119

  59. Dar SUH, Özbey M, Çatlı AB, Çukur T (2020) A transfer-learning approach for accelerated MRI using deep neural networks. Magn Reson Med 84:663–685. https://doi.org/10.1002/mrm.28148

Funding

The collaboration project is co-funded by the PPP Allowance made available by Health Holland, Top Sector Life Sciences & Health, to stimulate public-private partnerships and by Siemens Healthineers.

Author information

Contributions

C.R. Noordman drafted the manuscript. All other authors reviewed and approved the final manuscript.

Corresponding author

Correspondence to Constant Richard Noordman.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

Henkjan Huisman and Derya Yakar declare that they receive research support from Siemens Healthineers; the other authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Noordman, C.R., Yakar, D., Bosma, J. et al. Complexities of deep learning-based undersampled MR image reconstruction. Eur Radiol Exp 7, 58 (2023). https://doi.org/10.1186/s41747-023-00372-7
