0222
Subtle Inverse Crimes: Naively using Publicly Available Images Could Make Reconstruction Results Seem Misleadingly Better!
Efrat Shimron1, Jonathan Tamir2, Ke Wang1, and Michael Lustig1
1Electrical Engineering and Computer Sciences, UC Berkeley, Berkeley, CA, United States, 2Electrical and Computer Engineering, The University of Texas at Austin, Austin, TX, United States
This work reveals that naïvely training and evaluating reconstruction algorithms on publicly available data can lead to artificially improved results, because such data often underwent hidden preprocessing. We demonstrate that Compressed Sensing, Dictionary Learning, and Deep Learning algorithms may all produce misleadingly good reconstructions.
Subtle Crime I results. (a) CS reconstructions of data retrospectively subsampled with R=6. The subtle-crime effect is demonstrated: reconstruction quality improves artificially as the sampling becomes denser around the k-space center (top to bottom) and the zero-padding increases (left to right). (b) The same effect is shown for CS (mean and STD over 30 images), Dictionary Learning (10 images), and DNN algorithms (87 images): NRMSE and SSIM improve artificially with increased zero-padding and denser center sampling.
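The mechanism behind Subtle Crime I can be illustrated with a small numerical sketch: when k-space is zero-padded (as in interpolated DICOM images), the informative frequencies shrink into the densely sampled k-space center, so even a trivial zero-filled reconstruction looks artificially better. The synthetic phantom, the Gaussian variable-density mask, and the 2x padding factor below are our illustrative assumptions, not the paper's exact experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def kspace(img):
    """Centered 2D FFT."""
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(img)))

def ikspace(k):
    """Centered 2D inverse FFT."""
    return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(k)))

def vd_mask(n, R):
    """Variable-density mask: sampling probability decays with normalized
    distance from the k-space center (a common heuristic; the Gaussian
    density here is an assumption for illustration)."""
    y, x = np.mgrid[-1:1:1j * n, -1:1:1j * n]
    p = np.exp(-(x**2 + y**2) / (2 * 0.3**2))
    p *= (n * n / R) / p.sum()            # keep ~1/R of the samples overall
    return rng.random((n, n)) < np.clip(p, 0, 1)

def zf_nrmse(img, R):
    """NRMSE of a zero-filled reconstruction at nominal acceleration R."""
    rec = ikspace(kspace(img) * vd_mask(img.shape[0], R))
    return np.linalg.norm(rec - img) / np.linalg.norm(img)

# synthetic "anatomy" with sharp edges, so k-space has high frequencies
n = 128
y, x = np.mgrid[-1:1:1j * n, -1:1:1j * n]
img = ((x**2 + y**2) < 0.4**2).astype(float) \
      + 0.5 * ((np.abs(x - 0.1) < 0.1) & (np.abs(y) < 0.2))

# the "crime": zero-pad k-space 2x, mimicking interpolated DICOM images
k = kspace(img)
k_pad = np.zeros((2 * n, 2 * n), complex)
k_pad[n // 2:n // 2 + n, n // 2:n // 2 + n] = k
img_pad = ikspace(k_pad).real * 4         # *4 corrects numpy's ifft scaling

R = 6
e_raw = zf_nrmse(img, R)
e_pad = zf_nrmse(img_pad, R)
print(f"NRMSE, raw data:         {e_raw:.3f}")
print(f"NRMSE, zero-padded data: {e_pad:.3f}")   # artificially lower
```

At the same nominal R, the padded case samples the informative frequency band far more densely (and the masked-out outer zeros cost nothing), so the computed NRMSE drops even though no real information was gained.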

Subtle Crime II concept. MR images are often stored in the DICOM format, which sometimes involves lossy JPEG compression. JPEG-compressed images contain less high-frequency content than the original raw data; algorithms trained on retrospectively subsampled compressed data may therefore benefit from the compression and exhibit misleadingly good performance.
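The loss of high-frequency content can also be sketched numerically with a toy JPEG-like scheme: blockwise 8x8 DCT followed by frequency-dependent quantization. This is not the full JPEG standard, and the quantization table below is a made-up stand-in for the real JPEG tables; the point is only to show that such compression measurably depletes the outer part of k-space.

```python
import numpy as np

rng = np.random.default_rng(1)

def dct_mat(n=8):
    """Orthonormal DCT-II matrix (the per-block transform JPEG uses)."""
    k, i = np.mgrid[0:n, 0:n]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    c[0] /= np.sqrt(2)
    return c

def jpeg_like(img, strength=6.0):
    """Toy JPEG-like compression: blockwise DCT + quantization with
    coarser steps at higher spatial frequencies (illustrative table)."""
    C = dct_mat()
    u, v = np.mgrid[0:8, 0:8]
    q = 1.0 + strength * (u + v)          # larger steps for high frequencies
    out = np.empty_like(img)
    for i in range(0, img.shape[0], 8):
        for j in range(0, img.shape[1], 8):
            d = C @ img[i:i+8, j:j+8] @ C.T
            d = np.round(d / q) * q       # small HF coefficients become 0
            out[i:i+8, j:j+8] = C.T @ d @ C
    return out

def hf_energy_fraction(img):
    """Fraction of k-space energy outside the central half of k-space."""
    k = np.fft.fftshift(np.fft.fft2(img))
    n = img.shape[0]
    lo = slice(n // 4, 3 * n // 4)
    e_total = np.sum(np.abs(k) ** 2)
    return (e_total - np.sum(np.abs(k[lo, lo]) ** 2)) / e_total

# synthetic image: a disk plus fine texture (high-frequency content)
n = 128
y, x = np.mgrid[-1:1:1j * n, -1:1:1j * n]
img = ((x**2 + y**2) < 0.4**2) + 0.05 * rng.standard_normal((n, n))

hf_before = hf_energy_fraction(img)
hf_after = hf_energy_fraction(jpeg_like(img))
print(f"high-frequency energy fraction, original:   {hf_before:.4f}")
print(f"high-frequency energy fraction, compressed: {hf_after:.4f}")  # lower
```

Because the compressed image has an artificially weak high-frequency tail, retrospectively subsampling it throws away less real information than subsampling raw k-space would, which is exactly why algorithms evaluated this way can look misleadingly good.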