Robust Multi-shot EPI with Untrained Artificial Neural Networks: Unsupervised Scan-specific Deep Learning for Blip Up-Down Acquisition (BUDA)
Tae Hyung Kim1,2,3, Zijing Zhang1,2,4, Jaejin Cho1,2, Borjan Gagoski2,5, Justin Haldar3, and Berkin Bilgic1,2
1Athinoula A. Martinos Center for Biomedical Imaging, Boston, MA, United States, 2Radiology, Harvard Medical School, Boston, MA, United States, 3Electrical Engineering, University of Southern California, Los Angeles, CA, United States, 4State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, China, 5Boston Children's Hospital, Boston, MA, United States
A novel unsupervised, untrained, scan-specific artificial neural network, based on the LORAKI framework and adapted to blip-up/down acquisition (BUDA), enables robust reconstruction of multi-shot EPI.
Figure 1. The structure of the proposed network. “A” represents the BUDA forward model, “B” the SENSE encoding (coil sensitivities and Fourier transform), and “VC” the augmentation with virtual conjugate coils for the phase constraint. Parameter choices: number of unrolled iterations K = 7, convolution kernel size = 7, number of output channels of the first convolutional layer = 64, λ1 = 1, λ2 = 0.5.
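The unrolled structure described in Figure 1 can be sketched as follows. This is a minimal PyTorch-style illustration under stated assumptions, not the authors' implementation: the exact data-consistency update, the role of λ1 and λ2, and the handling of the BUDA forward model and virtual conjugate coils are simplified guesses made for concreteness, and the class names are hypothetical.

```python
import torch
import torch.nn as nn

class KSpaceCNN(nn.Module):
    """Scan-specific convolutional interpolator in k-space (LORAKI-style).
    Complex k-space is stored as 2 real/imag channels (an assumption)."""
    def __init__(self, in_ch=2, hidden_ch=64, kernel=7):
        super().__init__()
        pad = kernel // 2
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, hidden_ch, kernel, padding=pad),
            nn.ReLU(),
            nn.Conv2d(hidden_ch, in_ch, kernel, padding=pad),
        )

    def forward(self, k):
        return self.net(k)

class UnrolledRecon(nn.Module):
    """K = 7 unrolled iterations alternating the CNN with a soft
    data-consistency step; lam1/lam2 weight the measured data against the
    CNN estimate (a guess at the role of the abstract's lambda_1/lambda_2)."""
    def __init__(self, K=7, lam1=1.0, lam2=0.5):
        super().__init__()
        self.blocks = nn.ModuleList(KSpaceCNN() for _ in range(K))
        self.lam1, self.lam2 = lam1, lam2

    def forward(self, k0, mask):
        # k0:   zero-filled measured k-space, shape (batch, 2, ny, nx)
        # mask: sampling mask, 1 where a line was acquired
        k = k0
        for block in self.blocks:
            k_cnn = block(k)  # fill missing k-space with the learned kernel
            # unacquired entries come from the CNN; acquired entries blend
            # the measurements with the CNN output
            dc = (self.lam1 * k0 + self.lam2 * k_cnn) / (self.lam1 + self.lam2)
            k = (1 - mask) * k_cnn + mask * dc
        return k

# Toy usage: 4x-undersampled 128x128 k-space
net = UnrolledRecon()
k0 = torch.zeros(1, 2, 128, 128)
mask = torch.zeros(1, 1, 128, 128)
mask[..., ::4, :] = 1
out = net(k0, mask)
```

Because the network is trained only on the scan being reconstructed (no external training data), the same sketch serves as both the model and the scan-specific fitting target.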
Figure 2. Reconstruction results for the diffusion data with b = 1000 s/mm². The top two rows display reconstructed images; the bottom two rows show error images (amplified 10x). Each shot was acquired with 4x in-plane acceleration, and two shots (one blip-up and one blip-down) were used for reconstruction. The reference images were generated by combining four shots (two blip-up and two blip-down) through BUDA. Naive SENSE does not model the field map, resulting in distortion near the frontal lobes.
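For intuition on why the naive SENSE result in Figure 2 is distorted, the following NumPy sketch shows where the field map enters a simplified EPI forward model. It is an illustrative, assumption-level model (single slice, linear ky ordering, no fftshift bookkeeping; the name encode_shot is hypothetical), not the authors' implementation; setting b0_hz to zero recovers the naive SENSE encoding that ignores off-resonance.

```python
import numpy as np

def encode_shot(image, coils, b0_hz, esp_s, polarity):
    """Field-map-aware EPI encoding of one shot (simplified sketch).
    image:    (ny, nx) complex image
    coils:    (ncoil, ny, nx) coil sensitivities
    b0_hz:    (ny, nx) off-resonance field map in Hz
    esp_s:    echo spacing in seconds
    polarity: +1 for blip-up, -1 for blip-down
    """
    ny, nx = image.shape
    kspace = np.zeros((coils.shape[0], ny, nx), dtype=complex)
    # acquisition time of each ky line relative to the k-space center
    t = polarity * esp_s * (np.arange(ny) - ny // 2)
    for iy in range(ny):
        # off-resonance phase accrued by time t[iy]; naive SENSE omits this
        phased = image * np.exp(1j * 2 * np.pi * b0_hz * t[iy])
        full_k = np.fft.fft2(coils * phased[None], axes=(-2, -1))
        kspace[:, iy, :] = full_k[:, iy, :]  # keep the line acquired at t[iy]
    return kspace
```

Because the phase term changes sign with polarity, blip-up and blip-down shots are distorted in opposite directions along the phase-encoding axis; BUDA exploits this by including the field map in the joint forward model for both shots.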