0395
Fat-Saturated MR Image Synthesis with Acquisition Parameter-Conditioned Image-to-Image Generative Adversarial Network
Jonas Denck1,2,3, Jens Guehring3, Andreas Maier1, and Eva Rothgang2
1Pattern Recognition Lab, Department of Computer Science, Friedrich-Alexander University Erlangen-Nürnberg, Erlangen, Germany, 2Department of Industrial Engineering and Health, Technical University of Applied Sciences Amberg-Weiden, Weiden, Germany, 3Siemens Healthcare, Erlangen, Germany
We trained an image-to-image generative adversarial network, conditioned on the key acquisition parameters echo time (TE) and repetition time (TR), to synthesize fat-saturated knee MR images from non-fat-saturated images, enabling the synthesis of MR images with varying image contrast.
Figure 1: Training procedure of the GAN. The generator is a U-Net of residual blocks with adaptive instance normalization layers that inject the input acquisition parameters (yg) in the encoder part and the output target labels (yt) in the decoder part of the generator.
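The conditioning mechanism named in the caption can be illustrated with a minimal sketch. Adaptive instance normalization (AdaIN) normalizes each feature channel per instance and then rescales it with a scale (gamma) and shift (beta) predicted from the conditioning vector, here the normalized acquisition parameters (e.g. TR, TE). The affine weights `W_gamma`, `b_gamma`, `W_beta`, `b_beta` are illustrative placeholders, not the authors' actual parameterization:

```python
import numpy as np

def adain(x, params, W_gamma, b_gamma, W_beta, b_beta, eps=1e-5):
    """Adaptive instance normalization conditioned on acquisition parameters.

    x:      (C, H, W) feature map of one image
    params: (P,) conditioning vector, e.g. normalized (TR, TE)
    The scale and shift per channel are linear functions of `params`
    (a simplifying assumption for illustration).
    """
    gamma = W_gamma @ params + b_gamma                 # (C,) per-channel scale
    beta = W_beta @ params + b_beta                    # (C,) per-channel shift
    mu = x.mean(axis=(1, 2), keepdims=True)            # per-channel mean
    sigma = x.std(axis=(1, 2), keepdims=True)          # per-channel std
    x_norm = (x - mu) / (sigma + eps)                  # instance normalization
    return gamma[:, None, None] * x_norm + beta[:, None, None]

# Example: condition an 8-channel feature map on two acquisition parameters.
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16, 16))
params = np.array([0.5, 0.2])                          # e.g. scaled (TR, TE)
W_gamma, b_gamma = rng.normal(size=(8, 2)), np.ones(8)
W_beta, b_beta = rng.normal(size=(8, 2)), np.zeros(8)
out = adain(x, params, W_gamma, b_gamma, W_beta, b_beta)
```

Because instance normalization removes the per-channel mean, the output channel means equal the predicted shifts beta, which is what lets different (TR, TE) pairs steer the synthesized contrast.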
Figure 2: Example pair of ground truth input image g with its labels yg and the corresponding fat-saturated target image t with its labels yt. The generator is trained to predict the target contrast from the input image and the corresponding input and target acquisition parameters: G(g, yg, yt). The last image shows the absolute error map between target and prediction. The images are annotated with the real acquisition parameters TR and TE and with the acquisition parameters predicted by the AC for G(g, yg, yt).
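The error map in Figure 2 is the voxelwise absolute difference between target and prediction; a pixelwise L1 term is also a common reconstruction loss in image-to-image GANs, although the abstract does not state the exact loss used. A minimal sketch, with toy arrays standing in for images:

```python
import numpy as np

def abs_error_map(pred, target):
    """Voxelwise absolute error |t - G(g, yg, yt)|, as visualized in Figure 2."""
    return np.abs(pred - target)

def l1_loss(pred, target):
    """Mean absolute error; a common reconstruction term in image-to-image
    GANs (an assumption here, not confirmed by the abstract)."""
    return float(np.mean(np.abs(pred - target)))

# Toy 2x2 "images" in place of the real prediction and fat-saturated target.
pred = np.array([[0.0, 2.0], [1.0, 1.0]])
target = np.array([[1.0, 1.0], [1.0, 0.0]])
err = abs_error_map(pred, target)
loss = l1_loss(pred, target)
```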