Article

AcquisitionFocus: Joint Optimization of Acquisition Orientation and Cardiac Volume Reconstruction Using Deep Learning

1 Institute of Medical Informatics, University of Lübeck, 23562 Lübeck, Germany
2 IADI U1254, Inserm, Université de Lorraine, 54511 Nancy, France
3 EchoScout GmbH, 23562 Lübeck, Germany
4 CHRU-Nancy, Inserm, Université de Lorraine, CIC 1433, Innovation Technologique, 54000 Nancy, France
* Author to whom correspondence should be addressed.
Submission received: 7 February 2024 / Revised: 27 March 2024 / Accepted: 30 March 2024 / Published: 4 April 2024

Abstract

In cardiac cine imaging, acquiring high-quality data is challenging and time-consuming due to the artifacts generated by the heart’s continuous movement. Volumetric, fully isotropic data acquisition with high temporal resolution is, to date, intractable due to MR physics constraints. To assess whole-heart movement under minimal acquisition time, we propose a deep learning model that reconstructs the volumetric shape of multiple cardiac chambers from a limited number of input slices while simultaneously optimizing the slice acquisition orientation for this task. We mimic the current clinical protocols for cardiac imaging and compare the shape reconstruction quality of standard clinical views and optimized views. In our experiments, we show that the jointly trained model achieves accurate high-resolution multi-chamber shape reconstruction with errors of <13 mm HD95 and Dice scores of >80%, indicating its effectiveness in both simulated cardiac cine MRI and clinical cardiac MRI with a wide range of pathological shape variations.

1. Introduction

Cardiac magnetic resonance (CMR) imaging typically follows a specific routine. Firstly, a low-resolution scout scan is acquired to localize the heart coarsely. Secondly, the scout scan is examined for manual imaging view-plane placement following dedicated protocol guidelines [1]. The scanner is then adjusted to capture the imaging planes of interest. Lastly, the acquired images are examined by clinical experts or automated post-processing software.

1.1. MR Physics Constraints and Timing

Examining images relies on sufficient image contrast, i.e., the signal-to-noise ratio (SNR). The SNR of an acquired image slice is constrained by the physical principles of MR as derived by Macovski [2]:

$$\mathrm{SNR} \propto f_{\mathrm{Obj}} \cdot \omega_0 \cdot V_h \cdot \sqrt{T} \tag{1}$$

where f_Obj is the influence of the examined object, ω_0 is the resonant frequency, V_h is the voxel volume, and T is the acquisition time. Consequently, the SNR is affected by the imaging time and the spatiotemporal resolution of a scan. In CMR, the SNR is negatively impacted by cardiac and respiratory motion artifacts that increase with longer acquisition times [1]. Therefore, the acquisition time T acts as a lower and upper bound for the quality of the acquired cardiac images. Various sequences have thus been developed to improve the SNR and reduce the acquisition time. The SNR can be increased by combining images of the same cardiac phase when the acquisition is synchronized over multiple heart cycles [1]. This approach often requires breath-holding strategies that burden the patients [3]. In parallel imaging, the acquisition time is shortened by using multiple receiver coils that are read out in parallel [3,4,5]. From another point of view, T is proportional to the number of acquired slices N_z and the number of acquired K-space lines N_y, which can be captured at the rate of the repetition time TR [6]:

$$T \propto N_z \cdot N_y \cdot \mathrm{TR} \tag{2}$$
Equation (2) states that acquiring more slices at a higher resolution (more K-space lines) takes longer. This has been addressed with compressed sensing, where only a fraction of the K-space lines are captured, accelerating the acquisition by a constant factor at the cost of introducing artifacts [7]. Nevertheless, these techniques may still be insufficient for high-temporal-resolution cine imaging, which remains a challenge [8].
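To make the proportionality concrete, the following sketch compares the relative acquisition time of a dense slice stack against the two-slice setting studied in this work; all parameter values below are illustrative assumptions, not protocol settings.

```python
# Illustration of the timing relation T ∝ N_z · N_y · TR: halving the number of
# slices or K-space lines shortens the scan proportionally. Values are assumptions.

def acquisition_time(n_z, n_y, tr_s):
    """Relative acquisition time for n_z slices and n_y K-space lines."""
    return n_z * n_y * tr_s

full_stack = acquisition_time(n_z=32, n_y=128, tr_s=0.003)  # dense stack, ~12.3 s
two_slices = acquisition_time(n_z=2, n_y=128, tr_s=0.003)   # sparse setting, ~0.8 s

print(full_stack / two_slices)  # -> 16.0
```

Under these assumed numbers, restricting the acquisition to two slices shortens the scan by a factor of 16 before any parallel imaging or compressed sensing is applied.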
In this study, we will investigate a reduced number of imaging slices N_z for faster acquisition without necessarily affecting the in-plane resolution or SNR, which could additionally be combined with parallel imaging and/or compressed sensing. This reduction is only viable provided that the sparsely acquired slices remain sufficiently descriptive for clinical examination. In the cardiac domain, such a sparse stack of slices is frequently acquired along the heart’s short axis to examine the left-ventricular properties, which have been proven to contain valuable information for clinical experts [9]. Descriptive imaging planes are also crucial for automated deep learning techniques, which often achieve impressive results but ultimately rely on the quality of the input data.
We hypothesize that computer-assisted techniques can benefit from tailoring the slice selection to the automated post-processing task (see Figure 1). For demonstration, we build upon a recent work that explored the challenging task of reconstructing the full cardiac shape from a set of 2D echo views [10]. For MRI, we constrain the acquisition’s field of view to two sparse slices and learn the optimal slice view orientation for accurate shape reconstruction based on coarse localizer information. The definition and selection of optimal imaging planes [1,9,11] for this task may be different from human intuition, especially when deep learning methods are involved. Despite our study being linked to MRI acquisition and (shape) reconstruction, our method is unrelated to image reconstruction from K-space signals. It operates in the image domain after applying the inverse Fourier transform.

1.2. Shape Reconstruction and Imaging Plane Optimization

Volumetric shape reconstruction has been previously explored for various medical imaging modalities. In ultrasound imaging, there is an interest in reconstructing 3D volumes from 2D slice acquisitions of free-hand sweeps. In [12], this was solved by an LSTM model that combined sequential 2D imaging features with accelerometer parameters. Jokeit et al. [13] demonstrated that 3D bone shapes could be reconstructed from standard planar X-ray radiographs using a CycleGAN network. In a similar work, bone structures were reconstructed from sparse view segmentations using neural shape representations [14]. In the cardiac domain, left ventricle shapes were successfully reconstructed from sparse short-axis and long-axis image stacks using deformable mesh priors [15]. Stojanovski et al. [10] performed reconstruction of the full cardiac shape from multiple slices. To overcome the lack of paired slice and 3D target data, the authors simulated US intensity images for slices that were extracted from a 3D ground-truth mesh. Their approach uses an efficient variant of the Pix2Vox model presented in [16] and will be considered for performance comparison in Section 2.6.
Optimal imaging planes have been considered in [17], where an orthopedic scanning guide for diseases in 3D ultrasound applications was developed. The method relies on a two-stream classification pipeline to predict the probe movement direction and the presence of the desired target view. In the context of MRI, a target view classification network was proposed to determine the optimal MR image slice for detecting lumbar spinal stenosis [18]. The authors selected the optimal image slice from multiple given slices and evaluated the classification outcome for several network architectures and hyperparameters. Cardiac segmentation of the left ventricle and atrium with joint prediction of standard clinical view planes has been previously explored by Chen et al. [19], who aimed to translate findings from automated segmentations into clinical routine protocols. For optimal valvular heart disease assessment, 14 slice orientations were defined using a cardiac MRI reference scan [20]. Odille et al. [21] reconstructed the left ventricular shape by fitting a b-spline model to slice segmentations obtained from motion-corrected high-resolution intensity data. They compared pre-defined configurations of 3–6 sparse slices to evaluate the impact of view plane choices on the shape reconstruction quality. To the best of our knowledge, none of the previously proposed methods studied the joint optimization of view planes and volumetric reconstruction.

1.3. Contribution

While previous studies focused on detecting clinical standard imaging planes [15,18,20], we hypothesize that the slice view orientation should be optimized in a task-driven manner and propose the following contributions:
  • In a challenging target scenario, we reconstruct the full cardiac shape of five structures from only two slices.
  • We study the joint optimization of shape reconstruction and view-plane orientation to derive optimal sparse slice configurations.
  • The optimized slice configurations lead to superior reconstruction quality compared to standard clinical imaging planes, which we demonstrate for synthetic and clinically acquired cardiac MRI data.

2. Materials and Methods

Our pipeline mimics the MRI acquisition process (see Figure 1): From a low-resolution scout scan, a coarse anatomical shape is generated by image segmentation. We analyze this coarse segmentation to identify standard clinical view planes and optimize the image plane slicing for cardiac shape reconstruction.

2.1. Extraction of Clinical Views

Experts follow a semi-automated routine to determine cardiac view planes [22]: Firstly, the left ventricle is localized in the scout scan, then pseudo-two-chamber (2CH) and four-chamber (4CH) views are extracted. Based on these views, a stack of short-axis (SA) images is retrieved, which is a prerequisite to acquiring accurate 2CH and 4CH views. We extract the mentioned views from the coarse image segmentation by analyzing the inertial moments J of the cardiac chamber shapes to construct orthonormal bases for an affine reorientation matrix P ,
$$J = \begin{pmatrix} J_{11} & J_{12} & J_{13} \\ J_{12} & J_{22} & J_{23} \\ J_{13} & J_{23} & J_{33} \end{pmatrix}, \qquad J_{ii} = \int_m \left( x_j^2 + x_k^2 \right) dm, \qquad J_{ij} = -\int_m x_i x_j \, dm, \qquad i, j \in \{1, 2, 3\} \tag{3}$$

where m is the shape’s (voxel) mass, i, j, k are the spatial indices, and x is the distance vector from the point mass to a reference point [23]. The resulting imaging planes are visualized in Figure 2.
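The moment analysis above can be sketched as follows: the inertia tensor of a binary voxel shape is assembled per Equation (3), and its eigenvectors form an orthonormal basis that can serve as the rotational part of the reorientation matrix P. This is a simplified illustration under the stated definitions, not the authors' implementation.

```python
import numpy as np

def inertia_axes(mask):
    """Principal axes of a binary voxel shape via its inertia tensor.

    Returns the tensor J and an orthonormal eigenbasis P (columns = principal
    axes), usable as the rotational part of an affine reorientation matrix.
    """
    coords = np.argwhere(mask > 0).astype(float)  # voxel "point masses"
    x = coords - coords.mean(axis=0)              # distance vectors to the centroid
    J = np.zeros((3, 3))
    for i in range(3):
        j, k = [a for a in range(3) if a != i]
        J[i, i] = np.sum(x[:, j] ** 2 + x[:, k] ** 2)   # J_ii = sum(x_j^2 + x_k^2)
        for jj in range(3):
            if i != jj:
                J[i, jj] = -np.sum(x[:, i] * x[:, jj])  # J_ij = -sum(x_i * x_j)
    _, P = np.linalg.eigh(J)                            # orthonormal eigenbasis
    return J, P

# A synthetic elongated ellipsoid: its long axis should appear as a principal axis.
zz, yy, xx = np.mgrid[0:32, 0:32, 0:32]
mask = ((zz - 16) ** 2 / 144 + (yy - 16) ** 2 / 16 + (xx - 16) ** 2 / 16) <= 1.0
J, P = inertia_axes(mask)
print(np.allclose(P @ P.T, np.eye(3)))  # -> True: P is an orthonormal basis
```

Because the long axis carries the smallest moment of inertia, the first eigenvector of the elongated test shape aligns with its elongation direction, mirroring how the short-axis direction of the left ventricle can be recovered from a coarse segmentation.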

2.2. Slicing View Optimization

As described in Figure 3, we optimize for affine matrices M that maximize the reconstruction accuracy. We first generate N affine matrices M to define the slicing orientation. This work explores the extreme scenario of studying only N = 2 slice locations. Subsequently, we apply a reconstruction model to process the extracted slices. The deep learning architecture is laid out more specifically in Figure 4. To obtain optimizable slice orientations, we feed the segmentation of a (low-resolution) scout image scan V i n into an acquisition model A i . The model comprises two operators: O i aligns the input optimally to yield the oriented volume V o r . From this volume, the operator C extracts a 2D slice S per matrix M :
$$O_i : \{ V_{in} : \Omega_{3D} \to \mathbb{R} \} \to \{ V_{or} : \Omega_{3D} \to \mathbb{R} \}, \qquad i = 1, \ldots, N \tag{4}$$
$$C : \{ V_{or} : \Omega_{3D} \to \mathbb{R} \} \to \{ S : \Omega_{2D} \to \mathbb{R} \} \tag{5}$$
The formulation of O_i is inspired by Jaderberg et al. [24] and uses a spatial transformer network to sample an oriented 2D plane from a 3D volume. The network consists of a CNN localization network with learnable parameters θ_{O_i} that maps the input volume V_in to six rotational parameters ap_i = (ap_{i1}, …, ap_{i6})^T and translational parameters tp_i with 3 × N_tp entries, where N_tp is chosen relative to the target offset space (see Section 2.7). From ap_i, the rotational components of a 3D affine matrix M_i are generated using the continuous representation from [25]. The translational vector t_i = (t_{i1}, t_{i2}, t_{i3})^T is formulated as:

$$t_{ij} = \frac{2.0}{N_{tp}} \cdot \mathrm{softmax}(\mathbf{tp}_{ij}) \cdot (0, 1, \ldots, N_{tp} - 1)^T - 1.0, \qquad \mathbf{tp}_{ij} \in \mathbb{R}^{N_{tp}}, \; j \in \{1, 2, 3\} \tag{6}$$
The 3D affine matrix M i is then used to create a grid for the differentiable spatial transformer sampling layer. A slicing operator, C, extracts the center slice of the aligned volume. We want to stress that for every 3D input shape volume, a separate set of ap i is predicted. This enables us to take any segmented input volume and find the correct slicing orientation for the subsequent scans using the same pre-trained model.
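A minimal PyTorch sketch of the slicing operators described above, assuming a single-channel volume: the six rotational parameters are converted with the continuous 6D representation attributed to [25], a differentiable grid is built from the resulting affine matrix, and the center slice of the reoriented volume is extracted. Function names are illustrative, not the authors' API.

```python
import torch
import torch.nn.functional as F

def rotation_from_6d(ap):
    """Continuous 6D parameters -> 3x3 rotation matrix via Gram-Schmidt."""
    a1, a2 = ap[..., :3], ap[..., 3:]
    b1 = F.normalize(a1, dim=-1)
    b2 = F.normalize(a2 - (b1 * a2).sum(-1, keepdim=True) * b1, dim=-1)
    b3 = torch.cross(b1, b2, dim=-1)
    return torch.stack((b1, b2, b3), dim=-2)

def extract_center_slice(volume, rotation, translation):
    """O_i followed by C: reorient a (1, 1, D, H, W) volume with a differentiable
    spatial transformer, then take the center slice of the oriented volume."""
    theta = torch.cat((rotation, translation.view(1, 3, 1)), dim=-1)  # (1, 3, 4)
    grid = F.affine_grid(theta, volume.shape, align_corners=False)
    oriented = F.grid_sample(volume, grid, align_corners=False)       # V_or
    return oriented[:, :, volume.shape[2] // 2]                       # slice S

vol = torch.rand(1, 1, 32, 32, 32)
R = rotation_from_6d(torch.tensor([[1., 0., 0., 0., 1., 0.]]))  # identity rotation
S = extract_center_slice(vol, R, torch.zeros(3))
print(S.shape)  # torch.Size([1, 1, 32, 32])
```

With the identity rotation and zero translation, the extracted slice equals the volume's axial center slice; in the pipeline, ap_i and tp_i come from the localization CNN instead, so gradients from the reconstruction loss flow back into the slice orientation.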

2.3. Reconstruction Model

For a given set of N optimized 2D image slices S from the acquisition model, we aim to reconstruct the full volumetric cardiac shape V r e :
$$R : \{ S : \Omega_{2D} \to \mathbb{R} \}^N \to \{ V_{re} : \Omega_{3D} \to \mathbb{R} \} \tag{7}$$
Aiming for a mapping Ω 2 D Ω 3 D , we configure the model to contain a 2D encoder and a 3D branch, where the inverse of M i is used at the skip connections and the bottleneck to re-embed the 2D slices in 3D space (see Figure 4 and Section 2.7).
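The re-embedding of 2D features into 3D space via the inverse slicing matrix can be sketched as below; placing the slice at the center depth of an empty volume before resampling with M_i^{-1} is an assumption about the mechanics for illustration, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def reembed_slice(slice_2d, affine_inv):
    """Re-embed a 2D feature slice into 3D with the inverse slicing matrix.

    The slice is written into the center depth of an empty volume, which is then
    resampled with the inverse affine so the features land at the 3D location
    the slice was originally extracted from.
    """
    b, c, h, w = slice_2d.shape
    vol = torch.zeros(b, c, h, h, w)
    vol[:, :, h // 2] = slice_2d
    grid = F.affine_grid(affine_inv, vol.shape, align_corners=False)
    return F.grid_sample(vol, grid, align_corners=False)

s = torch.rand(1, 4, 16, 16)                 # a 2D feature map with 4 channels
identity = torch.eye(3, 4).unsqueeze(0)      # inverse of an identity slicing matrix
v = reembed_slice(s, identity)
print(v.shape)  # torch.Size([1, 4, 16, 16, 16])
```

The resulting 3D tensor can then be concatenated with decoder features at the skip connections, as in Figure 4.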

2.4. Joint Optimization

Given the above models, we obtain N optimized slices by jointly training the parameters of N acquisition models θ_{O_1}, …, θ_{O_N} and one reconstruction model ψ_R:
$$V_{or,1}, \ldots, V_{or,N} = O_1(V_{in}; \theta_{O_1}), \ldots, O_N(V_{in}; \theta_{O_N}) \tag{8}$$
$$S_1, \ldots, S_N = C(V_{or,1}), \ldots, C(V_{or,N}) \tag{9}$$
$$V_{re} = R(S_1, \ldots, S_N; \psi_R) \tag{10}$$
In a simplified setup, where V_re and V_in have the same spatial resolution, we would require V_re ≈ V_in for an optimal reconstruction. This mapping could be fulfilled by learning an identity function but is restricted since we feed the data through two bottlenecks that reduce information by extracting a sparse slice and compressing the shape representation:

$$\mathcal{L}(\theta_{O_{1,\ldots,N}}, \psi_R) = \big\lVert R_{\psi}\big( C(O_{\theta,1}(V_{in})), \ldots, C(O_{\theta,N}(V_{in})) \big) - V_{in} \big\rVert \tag{11}$$
In our pipeline, the slice bottleneck is particularly interesting, as the reoriented slices S_1, …, S_N reveal information about the importance of individual structures for the reconstruction. In an application-oriented setting, the scout scan V_in has a lower spatial resolution than the output V_re. When passing the predicted affine matrix M_i to the MRI control panel, the optimized view can be captured in higher resolution to provide more detailed information for the reconstruction (see Figure 3).
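The joint optimization described above can be wired together in a few lines: N acquisition models predict slicing parameters, a shared reconstruction model maps the extracted slices back to a volume, and a single loss drives both. The tiny linear modules and the MSE loss below are illustrative stand-ins for the paper's CNNs and segmentation losses.

```python
import torch
import torch.nn.functional as F

# Minimal sketch of the joint optimization: gradients of one reconstruction loss
# update both the slicing orientations (via the differentiable sampler) and the
# reconstruction weights. Module sizes are illustrative assumptions.

N, D = 2, 16
vol_in = torch.rand(1, 1, D, D, D)

acquisition = [torch.nn.Linear(D ** 3, 12) for _ in range(N)]  # O_i -> 3x4 affine
reconstruction = torch.nn.Linear(N * D * D, D ** 3)            # stand-in for R
params = [p for m in acquisition + [reconstruction] for p in m.parameters()]
opt = torch.optim.AdamW(params, lr=1e-3)

def slice_op(volume, theta):
    """C(O_i(V_in)): reorient differentiably, then take the center slice."""
    grid = F.affine_grid(theta.view(1, 3, 4), volume.shape, align_corners=False)
    return F.grid_sample(volume, grid, align_corners=False)[:, :, D // 2]

for _ in range(3):  # a few joint gradient steps
    slices = [slice_op(vol_in, m(vol_in.flatten(1))) for m in acquisition]
    vol_re = reconstruction(torch.cat([s.flatten(1) for s in slices], dim=1))
    loss = F.mse_loss(vol_re, vol_in.flatten(1))  # V_re should approximate V_in
    opt.zero_grad()
    loss.backward()
    opt.step()

print(loss.item())  # scalar reconstruction loss after three steps
```

The essential point of the sketch is that the loss backpropagates through the grid sampler into the affine parameters, so the view orientation itself is learned alongside the reconstruction.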

2.5. Datasets

We performed initial experiments with synthetic cardiac MRI scans generated with XCAT [26] and MRXCAT 2.0 [27]. In this dataset with a free-breathing protocol, each scan consists of 100 image frames with 1 mm spatial and 50 ms temporal resolution. The XCAT software provided ground-truth anatomical label maps, whereas texturized MRI simulations were derived from these maps using MRXCAT 2.0. The data were split into 24 training (male phantom) and 16 testing samples (female phantom). To show the effectiveness of our method, 25–75% of cardiac phase frames were excluded from the training set to reserve frames of the systolic phase for testing. In subsequent experiments, we used the MMWHS dataset [28] containing 20 labeled, static, nearly isotropic MRI volumes with the following structures: myocardium (MYO), left ventricle (LV), right ventricle (RV), left atrium (LA), and right atrium (RA). The dataset contains significant shape variations, including patients with cardiovascular diseases such as “cardiac function insufficiency, cardiac edema, hypertension […] arrhythmia, atrial flutter, atrial fibrillation, artery plaque, coronary atherosclerosis, aortic aneurysm, right ventricle hypertrophy [, and] dilated cardiomyopathy” [28]. The data were split into training and test data using 3-fold cross-validation.

2.6. Experimental Setup and Evaluation

Firstly, in Experiment I, we performed full cardiac shape reconstruction and compared the performance of our model to Pix2Vox (P2V, [16]) and a leaner variant Efficient Pix2Vox (EP2V, [10]), specifically designed for cardiac-slice-to-volume reconstruction (see Section 1.2). In this experiment, we simplified the multi-chamber reconstruction task to a binary shape reconstruction task to match the experimental setup of [10].
Secondly, in Experiment II, we extended the reconstruction task to multiple chambers and investigated the impact of simultaneous view-plane optimization on the reconstruction performance. We conducted an extensive ablation study transitioning from elementary to more elaborate scenarios. This transition involved replacing ground-truth annotations with automated segmentations as well as replacing high-resolution scout scans (1.5 × 1.5 × 1.5 mm³/vox) with lower-resolution scout scans (6.0 × 6.0 × 6.0 mm³/vox), a very coarse setting compared to the settings used in [29]. Note that these high-resolution scout scans are not available in clinical settings. Shape reconstruction was performed with just two high-resolution 2D views with 1.5 × 1.5 mm²/vox in all scenarios, which can be acquired quickly and enables analysis with high temporal resolution.
Standard clinical views, such as the 2CH and 4CH views (see Figure 2), were extracted from the scout input using the method described in Section 2.1. For the MMWHS dataset, we employed 3-fold cross-validation to address the significant shape variations in the dataset. We assessed the reconstruction performance with the 95th percentile of the Hausdorff distance (HD95) and the Dice score.
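For reference, both evaluation metrics can be computed from binary masks as follows. The HD95 here is taken over full voxel sets via distance transforms, a common and compact formulation that is assumed to match the paper's evaluation only in spirit (surface-based variants are also in use).

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def dice(a, b):
    """Dice overlap of two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hd95(a, b, spacing=1.0):
    """95th-percentile symmetric Hausdorff distance between binary masks."""
    da = distance_transform_edt(~a, sampling=spacing)  # distance to nearest a-voxel
    db = distance_transform_edt(~b, sampling=spacing)  # distance to nearest b-voxel
    return np.percentile(np.concatenate([db[a], da[b]]), 95)

# Two cubes shifted by two voxels along one axis.
a = np.zeros((32, 32, 32), dtype=bool); a[8:24, 8:24, 8:24] = True
b = np.zeros((32, 32, 32), dtype=bool); b[10:26, 8:24, 8:24] = True
print(round(dice(a, b), 3), hd95(a, b))  # -> 0.875 2.0
```

Taking the 95th percentile instead of the maximum makes the distance robust to single outlier voxels, which is why HD95 is preferred over the plain Hausdorff distance for segmentation and reconstruction evaluation.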

2.7. Implementation Details

Our acquisition model is a convolutional neural network (CNN) consisting of layers with instance normalization, average pooling, and a final fully connected layer. The last layer maps the input features to six ap_i and 3 × N_tp values. The affine matrices M_i are then constructed using the continuous representation of [25] for the rotational components and Equation (6) for the translational components, restricting translational shifts to ±20%. The parameter count N_tp = 51 was chosen to be 40% of the spatial input volume length. In preliminary experiments, we attempted to predict the three translational components for every slice with three parameters but experienced instabilities. Mapping the parameters as described in Equation (6) resulted in stable training and improved scores.
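Equation (6) can be read as a soft-argmax over N_tp translation bins: a bin expectation is computed from the logits and mapped into roughly [-1, 1]. A minimal re-implementation under that reading (names follow the text; this is illustrative, not the authors' code):

```python
import torch

def soft_translation(tp, n_tp=51):
    """Soft-argmax mapping of N_tp logits per axis to a normalized translation.

    Predicting an expectation over discrete bins instead of a raw offset keeps
    gradients well-behaved, which matches the stability gain reported in the text.
    """
    bins = torch.arange(n_tp, dtype=torch.float32)  # (0, 1, ..., N_tp - 1)
    weights = torch.softmax(tp, dim=-1)             # tp_ij in R^{N_tp}
    return 2.0 / n_tp * (weights * bins).sum(-1) - 1.0

tp = torch.zeros(3, 51)  # uniform logits for the three axes
t = soft_translation(tp)
print(t)                 # each entry is -1/51 ≈ -0.0196, i.e., near the volume center
```

Uniform logits yield a translation near the volume center, while logits peaked at the first or last bin approach the extremes of the normalized coordinate range.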
The one-hot encoded slice shape output is concatenated channel-wise (see Figure 4, center) and then fed to the reconstruction network. The reconstruction model is a U-Net based on [30], which we configure to consist of a 2D encoder and a 3D decoder by replacing the convolution and normalization layers while keeping the exact kernel sizes. To prevent the U-Net model from sharing information across slices in the encoder, we used grouped convolutions with independent groups per input slice.
The 2D features were re-embedded into 3D space using a grid-sampling operator with the inverse affine matrices M_i⁻¹ for every slice to enable the concatenation of 2D and 3D features at the skip connections. Every block of the reconstruction model (see Figure 4) comprises two (transpose) convolutional operations, followed by instance normalization and LeakyReLU nonlinearities. During joint training, we used the AdamW optimizer [31] (η = 0.001, β₁ = 0.9, β₂ = 0.999, decay = 0.01) for the reconstruction model and a batch size of B = 4. The acquisition models were optimized using AdamW (η = 0.002, decay = 0.1) and cosine annealing scheduling with warm restarts [32]. As a loss function, we employed a combination of Dice loss and cross-entropy [30]. We found that simultaneously optimizing both slices resulted in unstable training and, therefore, followed a two-stage approach. First, the slice output of the acquisition model S₁ = C(O₁(V_in)) was duplicated and stacked across the channel dimension while optimizing the parameters of the CNN. Then, the parameters of model O₁(·) were fixed, and only the parameters of O₂(·) were optimized. In both stages, the models were trained for 80 epochs. We always performed a final reconstruction network training from scratch, where the models O₁, O₂, and thus the input slices S₁, S₂ were fixed. Rotation and scaling augmentation were applied to the input and output shapes to reduce overfitting of the reconstruction model. For image segmentation, we utilized the U-Net model pipeline of [30], trained on 2D image slices with downsampling augmentation to ensure accurate segmentations for low-resolution and high-resolution inputs.
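The two-stage schedule can be sketched with stand-in modules; the duplication of the first slice and the parameter freezing are the essential mechanics, while the linear layers below are illustrative placeholders for the acquisition CNNs.

```python
import torch

# Sketch of the two-stage schedule: stage one trains O_1 with its slice duplicated
# channel-wise so the reconstruction input keeps a fixed shape; stage two freezes
# O_1 and optimizes O_2. Modules are stand-ins, not the paper's networks.

o1 = torch.nn.Linear(8, 4)  # stand-in for O_1
o2 = torch.nn.Linear(8, 4)  # stand-in for O_2

def set_trainable(module, flag):
    for p in module.parameters():
        p.requires_grad_(flag)

x = torch.rand(1, 8)

# Stage 1: S_1 is stacked twice across the channel dimension.
set_trainable(o1, True)
set_trainable(o2, False)
s1 = o1(x)
stage1_input = torch.cat([s1, s1], dim=1)

# Stage 2: O_1 is frozen; only O_2's parameters receive gradients.
set_trainable(o1, False)
set_trainable(o2, True)
stage2_input = torch.cat([o1(x).detach(), o2(x)], dim=1)

print(stage1_input.shape, stage2_input.shape)  # both have shape (1, 8)
```

Keeping the channel count constant across stages means the reconstruction network never has to change its input layer between the two phases.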

3. Results

3.1. Experiment I

The evaluation of the reconstruction model performance on the full cardiac shape is shown in Table 1 for the synthetic cine data and in Table 2 for the clinically acquired data. We observed lower Dice scores and higher HD95 errors for the MMWHS dataset, which contains largely varying, pathologically deformed shapes. Applied to the MRXCAT dataset, our model achieved the lowest HD95 errors in all scenarios and the best Dice score for the p2CH and p4CH slice view inputs. It thus outperformed P2V and EP2V in four of six scores. The P2V model [16] reached the best Dice score when reconstructing MRXCAT data from 2CH and SA views, whereas its efficient variant, EP2V [10], reached the best Dice value on 2CH and 4CH views (see Table 1). When applied to the MMWHS data, our model reached the highest performance in five of six scores and was only outperformed by EP2V, which presented a lower HD95 error in the case of 2CH and SA view inputs (see Table 2).

3.2. Experiment II

We report the results of an extensive ablation study for multi-chamber shape reconstruction with our model on the synthetic MRXCAT dataset in Table 3 and the clinical MMWHS dataset in Table 4, respectively. We compared three ablation scenarios for every dataset, indicated by whitespace in the tables. The top group of values represents the first and most elementary scenario, in which high-resolution scouts and ground-truth annotations were considered. The highest HD95 errors were observed for reconstructions based on the p2CH and the p4CH views typically extracted at the start of cardiac routine acquisitions (8.5 and 22.5 mm).
The error was reduced to 6.9 and 14.1 mm for true 2CH and 4CH views (Figure 2). Reconstruction from 2CH + SA yielded errors of 7.6 and 16.0 mm. Randomly chosen views resulted in errors of 8.0 and 17.1 mm (RND, mean out of six runs). Optimizing the views reduced the HD95 errors to a lowest of 6.2 and 11.9 mm (a reduction of 0.8 and 2.2 mm compared to true 2CH and 4CH views). An improvement could likewise be observed for the Dice scores, which improved to 86.9 and 82.7% after optimization.
Figure 5 demonstrates that the highest scores were reached after the second stage of optimization (Section 2.7). In the second ablation scenario, reconstruction from realistic low-resolution scouts and ground-truth annotations was examined (see the center groups of Table 3 and Table 4). We only considered the best-performing clinical 2CH + 4CH views from the first scenario for further comparison. For MRXCAT, the 7.3 mm HD95 error of the 2CH + 4CH views was reduced to 7.0 mm (a 0.3 mm reduction) with optimization. While the MMWHS dataset demonstrated a comparable error reduction (0.7 mm), inferior Dice scores were observed. The last scenario added automated segmentation to the pipeline, resulting in the most application-oriented setting. For the MRXCAT data, HD95 errors increased compared to the ground-truth setting of scenario two, resulting in 13.5 mm for 2CH + 4CH clinical views and 9.7 mm for optimized views. This was not reflected by the Dice scores, for which the 2CH + 4CH clinical views outperformed the optimized views with 81.0% compared to 79.9%, respectively. For the MMWHS data, the reconstruction error increased significantly to 51.2 mm for 2CH + 4CH and 42.6 mm for optimized views. We additionally report volumetric segmentation results for the coarse scout scans. Note that acquiring the scout scans requires 32 captured slices instead of one slice at a lower in-plane resolution (1/4 per x- and y-axis), increasing acquisition time and making it unsuitable for a direct comparison; hence, the values are enclosed in brackets.
The slicing reorientation obtained for the runs of Table 3 and Table 4 (OPT + OPT) is depicted in Figure 6. Notably, the first view was reoriented from the coronal view to an equivalent of the clinical 4CH view in the first 20 epochs, indicating that the 4CH view contains the most information for reconstruction.
Training and inference were performed on a single NVIDIA TITAN RTX 24 GB graphics card. Each stage of optimization took ∼29 min. Inference took 677 ms for the entire pipeline to reconstruct volumes of 128 × 128 × 128 vox from two 128 × 128 pix slices. Each acquisition model contained 2.8 M parameters, the segmentation model contained 20.7 M parameters, and the reconstruction model contained 15.5 M parameters.

4. Discussion

We presented a novel approach to enhance the volumetric reconstruction of cardiac structures from sparse slice acquisitions using joint view-plane location and orientation optimization to overcome scan-time limitations for high-resolution 3D shape reconstructions. We tested our approach on a synthetic, dynamic cine dataset (MRXCAT) and a static dataset (MMWHS) that included significant shape variation caused by pathological deformations.
In the binary cardiac shape reconstruction experiment, our reconstruction model outperformed two related methods, with a lower HD95 error in five of six scenarios and a higher Dice performance in four of six scenarios. Improving on the related methods, we then performed multi-chamber reconstruction and joint optimization of the input views. In an extensive ablation study, we showed that the joint optimization of slicing views consistently reduced HD95 reconstruction errors across all six of the ablation scenarios we performed (MRXCAT: 0.7, 0.3, and 3.8 mm; MMWHS: 2.2, 0.7, and 8.6 mm), whereas two scenarios demonstrated a drop in Dice scores.
For the MRXCAT dataset, a promising low error rate of 9.7 mm HD95 was achieved for multi-chamber reconstruction after view optimization, despite the fact that only a subset of cardiac phases was seen during optimization. This indicates that the reconstruction model learns a generalized shape representation. Visualizing the views of an entire test batch using the heatmap overlay (Figure 6), it is noticeable that views are reoriented consistently to yield optimal reconstruction properties (also refer to Figure 5). For the MMWHS dataset, slice optimization reduced HD95 errors in all scenarios. A significant performance drop was witnessed when slice segmentation was integrated into the pipeline. Here, the slice view segmentation model limits the capability of reconstructing the 3D shape successfully. Pre-training the segmentation model is challenging, as the MMWHS data have large shape variability and varying contrasts. Moreover, the segmentation model must generalize to arbitrarily oriented 2D slice views that are not constrained to axial, coronal, and sagittal view planes. Training the segmentation model on a larger dataset using the identified optimized slice orientations and spatiotemporal data will certainly further enhance the model’s robustness.

5. Conclusions

We showed that five cardiac structures could be reconstructed with <13 mm HD95 and >80% Dice from only two optimized views when ground-truth label maps are given as input. In future work, we plan to investigate the quantification of possible reconstruction errors to assess the applicability of our method in clinical settings. Moreover, the reconstruction from more than two image planes and the determination of the optimal tradeoff between the reconstruction accuracy and the time needed to acquire the slices remain to be explored. The proposed image plane optimization could furthermore be applied to other target tasks, such as pathology classification. Summarizing our approach, we would like to motivate the medical deep learning community to investigate the integration of (slicing) acquisition parameters into their pipelines to further improve computer-assisted analysis.

Author Contributions

Conceptualization, C.W. and N.V.; methodology, C.W., Z.A.-H.H., L.H. and M.P.H.; software and validation, C.W.; data curation, N.V. and C.W.; writing—original draft preparation, C.W.; writing—review and editing, C.W., N.V., Z.A.-H.H., A.B., J.O. and M.P.H.; visualization, C.W.; supervision, J.O. and M.P.H.; funding acquisition, J.O. and M.P.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the German Federal Ministry of Education and Research (BMBF), grant title “MEDICARE”, grant number 01IS21094, and grant title “Medic V-Tach”, grant number 01KL2008, the latter within the European Research Area Network on Cardiovascular Diseases (ERA-CVD).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Restrictions apply to the availability of the XCAT data presented in the referenced work of Segars et al. [26]. The XCAT phantoms were generated with the permission of Duke University (4D Extended Cardiac-Torso (XCAT) Phantom Version 2.0), Duke University, Durham, NC 27708.

Conflicts of Interest

Author Lasse Hansen was employed by the company EchoScout GmbH. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
2CH: two-chamber
4CH: four-chamber
AX: axial
CMR: cardiac magnetic resonance imaging
CNN: convolutional neural network
COR: coronal
CT: computed tomography
GT: ground truth
HD95: 95th percentile of the Hausdorff distance
LSTM: long short-term memory
LA: left atrium
LV: left ventricle
MRI: magnetic resonance imaging
MYO: left myocardium
N/A: not applicable
OPT: optimized
p2CH: pseudo two-chamber view
p4CH: pseudo four-chamber view
RV: right ventricle
RA: right atrium
RND: random
SA: short axis
SAG: sagittal
SNR: signal-to-noise ratio
TR: repetition time

References

1. Ismail, T.F.; Strugnell, W.; Coletti, C.; Božić-Iven, M.; Weingärtner, S.; Hammernik, K.; Correia, T.; Küstner, T. Cardiac MR: From theory to practice. Front. Cardiovasc. Med. 2022, 9, 137.
2. Macovski, A. Noise in MRI. Magn. Reson. Med. 1996, 36, 494–497.
3. Ridgway, J.P. Cardiovascular magnetic resonance physics for clinicians: Part I. J. Cardiovasc. Magn. Reson. 2010, 12, 1–28.
4. Pruessmann, K.P.; Weiger, M.; Scheidegger, M.B.; Boesiger, P. SENSE: Sensitivity encoding for fast MRI. Magn. Reson. Med. 1999, 42, 952–962.
5. Griswold, M.A.; Jakob, P.M.; Heidemann, R.M.; Nittka, M.; Jellus, V.; Wang, J.; Kiefer, B.; Haase, A. Generalized autocalibrating partially parallel acquisitions (GRAPPA). Magn. Reson. Med. 2002, 47, 1202–1210.
6. Balaban, R.S.; Peters, D.C. Basic principles of cardiovascular magnetic resonance. In Cardiovascular Magnetic Resonance; Elsevier: Amsterdam, The Netherlands, 2019; pp. 1–14.
7. Lustig, M.; Donoho, D.; Pauly, J.M. Sparse MRI: The application of compressed sensing for rapid MR imaging. Magn. Reson. Med. 2007, 58, 1182–1195.
8. Raman, S.V.; Markl, M.; Patel, A.R.; Bryant, J.; Allen, B.D.; Plein, S.; Seiberlich, N. 30-minute CMR for common clinical indications: A Society for Cardiovascular Magnetic Resonance white paper. J. Cardiovasc. Magn. Reson. 2022, 24, 13.
9. American Heart Association Writing Group on Myocardial Segmentation and Registration for Cardiac Imaging; Cerqueira, M.D.; Weissman, N.J.; Dilsizian, V.; Jacobs, A.K.; Kaul, S.; Laskey, W.K.; Pennell, D.J.; Rumberger, J.A.; Ryan, T.; et al. Standardized myocardial segmentation and nomenclature for tomographic imaging of the heart: A statement for healthcare professionals from the Cardiac Imaging Committee of the Council on Clinical Cardiology of the American Heart Association. Circulation 2002, 105, 539–542.
10. Stojanovski, D.; Hermida, U.; Muffoletto, M.; Lamata, P.; Beqiri, A.; Gomez, A. Efficient Pix2Vox++ for 3D Cardiac Reconstruction from 2D echo views. In Proceedings of the Simplifying Medical Ultrasound: Third International Workshop, ASMUS 2022, Held in Conjunction with MICCAI 2022, Singapore, 18 September 2022; Springer: Cham, Switzerland, 2022; pp. 86–95.
11. Watkins, M.P.; Williams, T.A.; Caruthers, S.D.; Wickline, S.A. Cardiovascular MR function and coronaries: CMR 15 min express. J. Cardiovasc. Magn. Reson. 2013, 15, T11.
12. Luo, M.; Yang, X.; Wang, H.; Du, L.; Ni, D. Deep Motion Network for Freehand 3D Ultrasound Reconstruction. In Proceedings of the Medical Image Computing and Computer Assisted Intervention–MICCAI 2022: 25th International Conference, Singapore, 18–22 September 2022; Proceedings, Part IV; Springer: Cham, Switzerland, 2022; pp. 290–299.
13. Jokeit, M.; Kim, J.H.; Snedeker, J.G.; Farshad, M.; Widmer, J. Mesh-based 3D Reconstruction from Bi-planar Radiographs. In Proceedings of the Medical Imaging with Deep Learning, Zurich, Switzerland, 6–8 July 2022.
14. Amiranashvili, T.; Lüdke, D.; Li, H.; Menze, B.; Zachow, S. Learning Shape Reconstruction from Sparse Measurements with Neural Implicit Functions. In Proceedings of the Medical Imaging with Deep Learning, Zurich, Switzerland, 6–8 July 2022.
15. Beetz, M.; Banerjee, A.; Grau, V. Reconstructing 3D Cardiac Anatomies from Misaligned Multi-View Magnetic Resonance Images with Mesh Deformation U-Nets. In Proceedings of the Geometric Deep Learning in Medical Image Analysis, Amsterdam, The Netherlands, 18 November 2022; pp. 3–14.
16. Xie, H.; Yao, H.; Sun, X.; Zhou, S.; Zhang, S. Pix2Vox: Context-aware 3D reconstruction from single and multi-view images. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 2690–2698.
17. Lee, K.; Yang, J.; Lee, M.H.; Chang, J.H.; Kim, J.Y.; Hwang, J.Y. USG-Net: Deep Learning-based Ultrasound Scanning-Guide for an Orthopedic Sonographer. In Proceedings of the Medical Image Computing and Computer Assisted Intervention–MICCAI 2022: 25th International Conference, Singapore, 18–22 September 2022; Proceedings, Part VII; Springer: Cham, Switzerland, 2022; pp. 23–32.
18. Natalia, F.; Young, J.C.; Afriliana, N.; Meidia, H.; Yunus, R.E.; Sudirman, S. Automated selection of mid-height intervertebral disc slice in traverse lumbar spine MRI using a combination of deep learning feature and machine learning classifier. PLoS ONE 2022, 17, e0261659.
19. Chen, Z.; Rigolli, M.; Vigneault, D.M.; Kligerman, S.; Hahn, L.; Narezkina, A.; Craine, A.; Lowe, K.; Contijoch, F. Automated cardiac volume assessment and cardiac long- and short-axis imaging plane prediction from electrocardiogram-gated computed tomography volumes enabled by deep learning. Eur. Heart J. Digit. Health 2021, 2, 311–322.
20. Nitta, S.; Shiodera, T.; Sakata, Y.; Takeguchi, T.; Kuhara, S.; Yokoyama, K.; Ishimura, R.; Kariyasu, T.; Imai, M.; Nitatori, T. Automatic 14-plane slice-alignment method for ventricular and valvular analysis in cardiac magnetic resonance imaging. J. Cardiovasc. Magn. Reson. 2014, 16, P1.
21. Odille, F.; Bustin, A.; Liu, S.; Chen, B.; Vuissoz, P.A.; Felblinger, J.; Bonnemains, L. Isotropic 3D cardiac cine MRI allows efficient sparse segmentation strategies based on 3D surface reconstruction. Magn. Reson. Med. 2018, 79, 2665–2675.
22. Herzog, B.; Greenwood, J.; Plein, S.; Garg, P.; Haaf, P.; Onciul, S. Cardiovascular Magnetic Resonance Pocket Guide, 2017. Available online: https://www.escardio.org/static-file/Escardio/Subspecialty/EACVI/Publications%20and%20recommendations/Books%20and%20booklets/CMR%20pocket%20guides/CMR_guide_2nd_edition_148x105mm_03May2017_last%20version.pdf (accessed on 7 February 2024).
23. Czichos, H.; Hennecke, M. HÜTTE—Das Ingenieurwissen; Springer: Berlin/Heidelberg, Germany, 2012; Volume 34, p. E35.
24. Jaderberg, M.; Simonyan, K.; Zisserman, A. Spatial transformer networks. Adv. Neural Inf. Process. Syst. 2015, 28, 2017–2025.
25. Zhou, Y.; Barnes, C.; Lu, J.; Yang, J.; Li, H. On the continuity of rotation representations in neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2019; pp. 5745–5753.
26. Segars, W.P.; Sturgeon, G.; Mendonca, S.; Grimes, J.; Tsui, B.M. 4D XCAT phantom for multimodality imaging research. Med. Phys. 2010, 37, 4902–4915.
27. Buoso, S.; Joyce, T.; Schulthess, N.; Kozerke, S. MRXCAT2.0: Synthesis of realistic numerical phantoms by combining left-ventricular shape learning, biophysical simulations and tissue texture generation. J. Cardiovasc. Magn. Reson. 2023, 25, 25.
28. Zhuang, X.; Shen, J. Multi-scale patch and multi-modality atlases for whole heart segmentation of MRI. Med. Image Anal. 2016, 31, 77–87.
29. Kellman, P.; Lu, X.; Jolly, M.P.; Bi, X.; Kroeker, R.; Schmidt, M.; Speier, P.; Hayes, C.; Guehring, J.; Mueller, E. Automatic LV localization and view planning for cardiac MRI acquisition. J. Cardiovasc. Magn. Reson. 2011, 13, P39.
30. Isensee, F.; Jaeger, P.F.; Kohl, S.A.; Petersen, J.; Maier-Hein, K.H. nnU-Net: A self-configuring method for deep learning-based biomedical image segmentation. Nat. Methods 2021, 18, 203–211.
31. Loshchilov, I.; Hutter, F. Decoupled weight decay regularization. arXiv 2017, arXiv:1711.05101.
32. Loshchilov, I.; Hutter, F. SGDR: Stochastic gradient descent with warm restarts. arXiv 2016, arXiv:1608.03983.
Figure 1. Current practice and research question (top): The performance of deep learning-based post-processing methods is restricted by the input data quality, and standardized clinical protocols may be sub-optimal for automated downstream tasks. Our approach and problem setup (bottom): Examining cardiac function in high spatial and temporal resolution is desirable, but MR physics constrains the quality of volumetric MR cine acquisitions. We aim to determine optimal descriptive imaging planes for volumetric shape reconstruction from only two view planes.
Figure 2. Clinical cardiac views are automatically extracted from the segmentation maps of a coarse scout scan. Axial (AX), coronal (COR), and sagittal (SAG) views are obtained directly from the volume. According to [22], pseudo-two-chamber (p2CH) and four-chamber (p4CH) are then used to plan short-axis (SA) views from which, in turn, accurate 2CH and 4CH views can be retrieved. We mimic this process by analyzing the inertial moments of segmented cardiac chambers.
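The moment analysis mentioned in the Figure 2 caption can be sketched as follows: the principal axes of a chamber's second-moment (inertia) tensor provide candidate long-axis directions for view planning. This is an illustrative numpy sketch under the assumption that `seg` is an integer label map; it is not the authors' exact plane construction, which combines multiple chambers per [22].

```python
import numpy as np

def chamber_axes(seg, label):
    """Center and principal axes of one segmented cardiac chamber,
    derived from the second-moment (inertia) tensor of its voxels."""
    pts = np.argwhere(seg == label).astype(float)   # (N, 3) voxel coords
    center = pts.mean(axis=0)
    cov = np.cov((pts - center).T)                  # 3x3 second moments
    evals, evecs = np.linalg.eigh(cov)              # ascending eigenvalues
    # the eigenvector of the largest eigenvalue approximates the
    # chamber's long axis; a long-axis view plane contains it
    return center, evecs[:, ::-1]
```

Clinical planes such as the p2CH and p4CH views could then be defined from combinations of such axes and chamber centroids.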
Figure 3. Method overview: From a coarsely segmented scout scan (1), we analyze the cardiac shape, construct affine matrices P representing the standard clinical views, and optimize a neural network to predict a rigid transformation matrix M. This matrix is returned to the scanner to yield optimal slicing parameters for the volumetric shape reconstruction.
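For the rigid matrix M predicted by the network to stay a valid rotation during gradient-based optimization, a continuous rotation parameterization is needed; the 6D representation of Zhou et al. [25], cited by the paper, maps two free 3-vectors to an orthonormal frame via Gram-Schmidt. A minimal numpy sketch (the model itself would presumably do this inside the training framework):

```python
import numpy as np

def rotation_from_6d(x):
    """Map a continuous 6D rotation representation (Zhou et al. [25])
    to a proper 3x3 rotation matrix via Gram-Schmidt."""
    a1, a2 = np.asarray(x[:3], float), np.asarray(x[3:6], float)
    b1 = a1 / np.linalg.norm(a1)
    b2 = a2 - (b1 @ a2) * b1                # remove component along b1
    b2 = b2 / np.linalg.norm(b2)
    b3 = np.cross(b1, b2)                   # completes right-handed frame
    return np.stack([b1, b2, b3], axis=1)   # columns are the basis
```

Unlike Euler angles or quaternions, this mapping has no discontinuities, which makes the view-orientation parameters easier to optimize by gradient descent.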
Figure 4. Architecture of the proposed pipeline: The acquisition models (left) optimize the two slicing views (center). The final shape is reconstructed from the stacked slices with a non-symmetric 2D-3D encoder-decoder (right) that contains grouped convolutions in the 2D layers. The 2D-3D skip connections and bottleneck in the reconstruction model are realized using a grid-sample operation that embeds the 2D features in the 3D feature space using the inverse of the two affine matrices M1,2 (best viewed digitally).
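The 2D-3D skip connections described in the Figure 4 caption rely on placing each slice's features at their known oriented position inside the volume. Below is a deliberately simplified nearest-neighbor numpy sketch of that embedding; the paper instead uses a differentiable grid-sample operation, and the 3×4 affine `M` (mapping slice pixel coordinates to volume voxel coordinates) is an assumed convention for illustration.

```python
import numpy as np

def embed_slice(slice_feat, M, vol_shape):
    """Scatter a 2D slice into a 3D volume: [z, y, x] = M @ [r, c, 1]
    for every slice pixel, rounded to the nearest voxel."""
    H, W = slice_feat.shape
    vol = np.zeros(vol_shape, dtype=slice_feat.dtype)
    rr, cc = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    hom = np.stack([rr.ravel(), cc.ravel(), np.ones(H * W)])   # (3, H*W)
    zyx = np.round(M @ hom).astype(int)                        # voxel coords
    ok = np.all((zyx >= 0) & (zyx < np.array(vol_shape)[:, None]), axis=0)
    vol[zyx[0, ok], zyx[1, ok], zyx[2, ok]] = slice_feat.ravel()[ok]
    return vol
```

In the actual pipeline, replacing the rounding with trilinear grid sampling keeps the operation differentiable with respect to the view matrices, which is what allows the acquisition orientation to be optimized jointly with the reconstruction.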
Figure 5. MMWHS Dice scores throughout the two-stage training, with the 2CH + 4CH views as reference. After optimizing the first view, the reconstruction quality is on par with the reference; after optimizing the second view, it outperforms the reference.
Figure 6. View reorientation during joint training. A heatmap overlay visualizes the orientation across the training batch (left, first column per epoch). Two individual batch samples are displayed in the second and third columns. The first view (top) is optimized during the first optimization stage and then fixed in the second optimization stage, in which the second view (bottom) is optimized. Notably, the first view was reoriented from the coronal view to an equivalent of the clinical 4CH view in the first 20 epochs. Views are also depicted in 3D, where view planes of epoch 0 were reoriented to view planes of epoch 80, as indicated by the arrows.
Table 1. Binary shape reconstruction performance of P2V, EP2V, and our method (see Section 2.3) on the synthetic cardiac data of the MRXCAT dataset.
| 1st View | 2nd View | Model | HD95 in mm ↓ (μ ± σ) | Dice in % ↑ (μ ± σ) |
|----------|----------|-----------|----------------------|---------------------|
| p2CH | p4CH | P2V [16] | 6.7 ± 2.9 | 95.4 ± 3.2 |
| p2CH | p4CH | EP2V [10] | 7.2 ± 4.6 | 94.3 ± 4.5 |
| p2CH | p4CH | Ours | 4.7 ± 1.7 | 96.6 ± 1.4 |
| 2CH | 4CH | P2V [16] | 7.7 ± 5.5 | 93.6 ± 6.8 |
| 2CH | 4CH | EP2V [10] | 5.6 ± 2.4 | 96.2 ± 2.1 |
| 2CH | 4CH | Ours | 5.2 ± 2.8 | 95.9 ± 2.2 |
| 2CH | SA | P2V [16] | 4.6 ± 1.1 | 97.1 ± 0.8 |
| 2CH | SA | EP2V [10] | 6.2 ± 4.5 | 95.1 ± 4.8 |
| 2CH | SA | Ours | 4.3 ± 2.4 | 96.4 ± 2.4 |
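The two metrics reported throughout (HD95, the 95th percentile of the symmetric surface distance, and the Dice overlap) can be computed from binary label maps roughly as below. This is a generic SciPy sketch, not the authors' evaluation code; surface extraction and percentile conventions vary slightly between toolkits.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice(a, b):
    """Dice overlap of two boolean masks, in [0, 1]."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hd95(a, b, spacing=(1.0, 1.0, 1.0)):
    """95th percentile of symmetric surface distances (in mm when
    `spacing` is the voxel size in mm)."""
    sa = a & ~binary_erosion(a)          # surface voxels of a
    sb = b & ~binary_erosion(b)
    # distance of every surface voxel of one mask to the other surface
    d_ab = distance_transform_edt(~sb, sampling=spacing)[sa]
    d_ba = distance_transform_edt(~sa, sampling=spacing)[sb]
    return np.percentile(np.hstack([d_ab, d_ba]), 95)
```

Taking the 95th percentile rather than the maximum makes the Hausdorff measure robust to single outlier voxels, which is why it is the standard boundary metric in segmentation benchmarks.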
Table 2. Binary shape reconstruction performance of P2V, EP2V, and our method (see Section 2.3) on the clinically acquired cardiac data of the MMWHS dataset.
| 1st View | 2nd View | Model | HD95 in mm ↓ (μ ± σ) | Dice in % ↑ (μ ± σ) |
|----------|----------|-----------|----------------------|---------------------|
| p2CH | p4CH | P2V [16] | 20.1 ± 6.2 | 83.0 ± 5.0 |
| p2CH | p4CH | EP2V [10] | 22.1 ± 7.2 | 80.0 ± 7.8 |
| p2CH | p4CH | Ours | 20.0 ± 6.4 | 86.4 ± 4.1 |
| 2CH | 4CH | P2V [16] | 21.8 ± 5.9 | 82.5 ± 4.3 |
| 2CH | 4CH | EP2V [10] | 22.1 ± 8.4 | 81.5 ± 7.2 |
| 2CH | 4CH | Ours | 18.1 ± 6.5 | 87.6 ± 3.5 |
| 2CH | SA | P2V [16] | 22.6 ± 7.7 | 82.6 ± 5.4 |
| 2CH | SA | EP2V [10] | 20.8 ± 8.1 | 83.3 ± 5.2 |
| 2CH | SA | Ours | 23.7 ± 6.7 | 85.4 ± 4.5 |
Table 3. Multi-chamber shape reconstruction performance for the synthetic cardiac data of the MRXCAT dataset. Scenario difficulty increases from top to bottom. Bold values indicate the best result within a scenario group of comparable scout resolution and label-map setting (ground truth (GT) or automated segmentation (SG)). Views are indicated by name; RND denotes random view selection (mean over six runs) and OPT the proposed optimization.
Columns 4–9: HD95 in mm ↓; columns 10–15: Dice in % ↑.

| Scout — Slices | 1st View | 2nd View | MYO | LV | RV | LA | RA | μ ± σ | MYO | LV | RV | LA | RA | μ ± σ |
|----------------|----------|----------|-----|-----|-----|-----|-----|--------|------|------|------|------|------|------------|
| 1.5 mm³ GT — 1.5 mm² GT | p2CH | p4CH | 6.2 | 5.3 | 11.9 | 5.3 | 13.9 | 8.5 ± 14.7 | 82.4 | 90.0 | 84.2 | 90.6 | 83.4 | 86.1 ± 8.5 |
| 1.5 mm³ GT — 1.5 mm² GT | 2CH | 4CH | 6.5 | 7.1 | 8.0 | 5.1 | 7.7 | 6.9 ± 2.0 | 79.9 | 86.8 | 83.5 | 90.7 | 85.2 | 85.2 ± 5.9 |
| 1.5 mm³ GT — 1.5 mm² GT | 2CH | SA | 6.5 | 7.2 | 8.6 | 6.9 | 8.7 | 7.6 ± 2.6 | 79.3 | 86.5 | 83.9 | 88.6 | 82.9 | 84.2 ± 6.2 |
| 1.5 mm³ GT — 1.5 mm² GT | RND | RND | 7.2 | 8.4 | 9.6 | 8.0 | 6.9 | 8.0 ± 5.4 | 78.9 | 86.3 | 84.9 | 87.1 | 88.6 | 85.2 ± 7.0 |
| 1.5 mm³ GT — 1.5 mm² GT | OPT | OPT | 6.3 | 6.6 | 7.1 | 4.6 | 6.3 | 6.2 ± 2.0 | 80.7 | 87.8 | 86.3 | 91.0 | 88.9 | 86.9 ± 5.4 |
| 6.0 mm³ GT — 1.5 mm² GT | 2CH | 4CH | 6.3 | 7.3 | 10.3 | 5.1 | 7.6 | 7.3 ± 3.0 | 79.1 | 86.9 | 80.7 | 91.3 | 86.4 | 84.9 ± 6.7 |
| 6.0 mm³ GT — 1.5 mm² GT | OPT | OPT | 6.8 | 7.2 | 6.8 | 6.6 | 7.4 | 7.0 ± 1.8 | 78.7 | 85.7 | 87.3 | 88.7 | 87.2 | 85.5 ± 6.0 |
| 6.0 mm³ SG — N/A | N/A | N/A | (5.3) | (5.3) | (5.5) | (5.6) | (5.8) | (5.5 ± 0.3) | (79.6) | (91.5) | (90.1) | (85.5) | (86.5) | (86.6 ± 4.2) |
| 6.0 mm³ SG — 1.5 mm² SG | 2CH | 4CH | 10.3 | 10.2 | 31.7 | 7.3 | 7.7 | 13.5 ± 17.4 | 68.6 | 82.1 | 82.4 | 86.0 | 85.9 | 81.0 ± 8.0 |
| 6.0 mm³ SG — 1.5 mm² SG | OPT | OPT | 9.4 | 9.8 | 10.0 | 11.7 | 7.7 | 9.7 ± 3.0 | 69.9 | 81.8 | 84.0 | 76.4 | 87.4 | 79.9 ± 8.7 |
Table 4. Multi-chamber shape reconstruction performance for the MRI-acquired cardiac data of the MMWHS dataset. Scenario difficulty increases from top to bottom. Bold values indicate the best result within a scenario group of comparable scout resolution and label-map setting (ground truth (GT) or automated segmentation (SG)). Views are indicated by name; RND denotes random view selection (mean over six runs) and OPT the proposed optimization.
Columns 4–9: HD95 in mm ↓; columns 10–15: Dice in % ↑.

| Scout — Slices | 1st View | 2nd View | MYO | LV | RV | LA | RA | μ ± σ | MYO | LV | RV | LA | RA | μ ± σ |
|----------------|----------|----------|-----|-----|-----|-----|-----|--------|------|------|------|------|------|------------|
| 1.5 mm³ GT — 1.5 mm² GT | p2CH | p4CH | 7.7 | 8.2 | 30.3 | 27.6 | 38.7 | 22.5 ± 25.4 | 78.7 | 88.3 | 69.4 | 75.7 | 65.4 | 75.5 ± 16.2 |
| 1.5 mm³ GT — 1.5 mm² GT | 2CH | 4CH | 6.8 | 8.2 | 19.5 | 8.9 | 27.1 | 14.1 ± 10.2 | 81.8 | 88.7 | 77.2 | 86.5 | 74.9 | 81.8 ± 9.5 |
| 1.5 mm³ GT — 1.5 mm² GT | 2CH | SA | 7.8 | 10.2 | 16.5 | 13.8 | 31.6 | 16.0 ± 10.0 | 79.9 | 87.7 | 77.0 | 79.7 | 61.3 | 77.1 ± 12.1 |
| 1.5 mm³ GT — 1.5 mm² GT | RND | RND | 12.0 | 13.9 | 18.0 | 18.1 | 23.2 | 17.1 ± 10.0 | 69.3 | 82.1 | 80.4 | 78.0 | 75.5 | 77.1 ± 9.2 |
| 1.5 mm³ GT — 1.5 mm² GT | OPT | OPT | 8.6 | 9.7 | 15.1 | 13.8 | 12.1 | 11.9 ± 3.9 | 79.7 | 87.8 | 79.8 | 81.1 | 85.0 | 82.7 ± 6.5 |
| 6.0 mm³ GT — 1.5 mm² GT | 2CH | 4CH | 7.5 | 8.1 | 18.9 | 11.0 | 22.7 | 13.6 ± 9.2 | 81.0 | 89.4 | 78.9 | 85.2 | 76.4 | 82.2 ± 8.6 |
| 6.0 mm³ GT — 1.5 mm² GT | OPT | OPT | 8.9 | 10.2 | 14.8 | 16.2 | 14.4 | 12.9 ± 7.2 | 77.1 | 86.1 | 81.0 | 81.3 | 81.1 | 81.3 ± 9.3 |
| 6.0 mm³ SG — N/A | N/A | N/A | (10.8) | (12.8) | (16.3) | (12.8) | (13.0) | (13.2 ± 11.5) | (72.3) | (87.6) | (81.7) | (80.0) | (81.0) | (80.5 ± 9.3) |
| 6.0 mm³ SG — 1.5 mm² SG | 2CH | 4CH | 17.1 | 19.1 | 51.4 | 64.8 | 103.8 | 51.2 ± 50.7 | 56.2 | 71.6 | 56.3 | 35.2 | 38.8 | 51.6 ± 25.2 |
| 6.0 mm³ SG — 1.5 mm² SG | OPT | OPT | 35.0 | 32.7 | 39.9 | 53.9 | 51.6 | 42.6 ± 23.4 | 43.8 | 69.0 | 56.5 | 39.6 | 61.3 | 54.0 ± 19.6 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

Weihsbach, C.; Vogt, N.; Al-Haj Hemidi, Z.; Bigalke, A.; Hansen, L.; Oster, J.; Heinrich, M.P. AcquisitionFocus: Joint Optimization of Acquisition Orientation and Cardiac Volume Reconstruction Using Deep Learning. Sensors 2024, 24, 2296. https://doi.org/10.3390/s24072296

