2240
Learning 3D structures from 2D slices with scan-specific data for fast and high-resolution neonatal brain MRI
Yao Sui1,2, Onur Afacan1,2, Ali Gholipour1,2, and Simon K Warfield1,2
1Harvard Medical School, Boston, MA, United States, 2Boston Children's Hospital, Boston, MA, United States
We developed a methodology that learns 3D gradient structures from 2D slices for an individual subject, without the need for large auxiliary high-resolution datasets, and achieved high-quality neonatal brain MRI at an isotropic resolution of 0.39 mm in six minutes of imaging time.
Design of our learning algorithm. The LR volumes are decomposed into HR 2D slice stacks along the slice-selection direction, and their gradients serve as the output of the convolutional neural network (CNN) during training. Meanwhile, all LR volumes are interpolated and combined into a blurred HR volume, which is then resampled onto the LR image spaces to obtain the LR 2D slice stacks used as the network input.
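The training-pair construction described in this caption can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `build_training_pairs` is hypothetical, `scipy.ndimage.zoom` with simple averaging stands in for whatever interpolation and fusion the method actually uses, and finite-difference gradients via `np.gradient` stand in for the paper's gradient computation.

```python
import numpy as np
from scipy.ndimage import zoom

def build_training_pairs(lr_volumes, slice_axes, hr_shape):
    """Build scan-specific (input slice, target gradient) pairs.

    lr_volumes: 3D arrays, each HR in-plane but LR along its own
                slice-selection axis (given in slice_axes).
    hr_shape:   the target isotropic HR grid shape.
    """
    # Interpolate every LR volume onto the HR grid and combine them
    # (here: a plain average) into a single blurred HR volume.
    ups = [zoom(v, [t / s for t, s in zip(hr_shape, v.shape)], order=1)
           for v in lr_volumes]
    blurred_hr = np.mean(ups, axis=0)

    pairs = []
    for vol, ax in zip(lr_volumes, slice_axes):
        # Resample the blurred HR volume back onto this scan's LR grid;
        # its 2D slices become the network input.
        res = zoom(blurred_hr, [s / t for s, t in zip(vol.shape, hr_shape)],
                   order=1)
        # Decompose both volumes into 2D slices along the slice-selection
        # axis; the acquired volume's slices are HR in-plane, and their
        # in-plane gradients are the network output (training target).
        for inp_sl, tgt_sl in zip(np.moveaxis(res, ax, 0),
                                  np.moveaxis(vol, ax, 0)):
            gy, gx = np.gradient(tgt_sl)
            pairs.append((inp_sl, np.stack([gy, gx])))
    return pairs
```

For example, three orthogonal acquisitions of shapes (8, 32, 32), (32, 8, 32), and (32, 32, 8) with slice axes 0, 1, and 2 would yield 24 slice pairs on a (32, 32, 32) HR grid.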
Reconstructed slices from a representative subject by our approach and the baselines on the clinical dataset. (a) SRCNN. (b) GGR. (c) Our approach (deepGG). Our approach achieved the highest quality in terms of image contrast and sharpness. Red arrows highlight the fine structures delineated by our approach, particularly in the hippocampus in the coronal plane.