MIT researchers combine deep learning and physics to fix motion-corrupted MRI scans

The image on the left shows an MRI scan of a human brain corrupted by motion artifacts; the image on the right shows the same scan with motion correction applied by a deep learning model developed by researchers at MIT. Credits: Image courtesy of the researchers

The challenge involves more than a blurry JPEG; fixing motion artifacts in medical imaging requires a far more sophisticated approach.

Compared to other imaging modalities such as X-rays or CT scans, MRI provides high-quality soft-tissue contrast. Unfortunately, MRI is highly sensitive to motion, with even the smallest of movements resulting in image artifacts. These artifacts put patients at risk of misdiagnosis or inappropriate treatment when critical details are obscured from the physician. Researchers at MIT have now developed a deep learning model capable of motion correction in brain MRI.

“Motion is a common problem in MRI,” explains Nalini Singh, an Abdul Latif Jameel Clinic for Machine Learning in Health (Jameel Clinic)-affiliated PhD student in the Harvard-MIT Program in Health Sciences and Technology (HST) and lead author of the paper. “It’s a pretty slow imaging modality.”

MRI sessions can take anywhere from a few minutes to an hour, depending on the type of images required. Even during the shortest scans, small movements can have dramatic effects on the resulting image. Unlike camera imaging, where motion typically manifests as a localized blur, MRI acquires its data in the frequency domain, with each measurement encoding information about the entire field of view, so motion during a scan often produces artifacts that corrupt the whole image. Patients may be anesthetized or asked to limit deep breathing in order to minimize motion. However, these measures often cannot be taken in populations particularly susceptible to motion, including children and patients with psychiatric disorders.
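To see why a single movement can contaminate an entire image, consider a minimal simulation. The sketch below is a toy example, not the researchers' method: it assumes a simple Cartesian acquisition in which k-space (the frequency-domain data an MRI scanner actually measures) is filled line by line, so a translation midway through the scan adds a phase error to every line acquired afterward.

```python
import numpy as np

N = 128
# Toy "brain": a bright ellipse on a dark background.
y, x = np.mgrid[-1:1:N * 1j, -1:1:N * 1j]
image = ((x / 0.6) ** 2 + (y / 0.8) ** 2 < 1).astype(float)

# MRI measures the image's 2D Fourier transform (k-space), one line at a time.
kspace = np.fft.fftshift(np.fft.fft2(image))

# Suppose the patient shifts 3 pixels in x halfway through the acquisition.
# By the Fourier shift theorem, every line acquired after the movement
# picks up a linear phase ramp.
shift = 3.0
kx = np.fft.fftshift(np.fft.fftfreq(N))  # spatial frequencies, cycles/pixel
corrupted = kspace.copy()
corrupted[N // 2:, :] *= np.exp(-2j * np.pi * kx * shift)[None, :]

artifact_image = np.abs(np.fft.ifft2(np.fft.ifftshift(corrupted)))

# The damage is global: ghosting appears even far from the object itself.
print("max error outside the ellipse:",
      np.abs(artifact_image - image)[image == 0].max())
```

Because only half the measurements agree with the other half, the reconstruction exhibits ghosting and ripples spread across the whole field of view rather than a blur confined to the moving object.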

The paper, titled “Data Consistent Deep Rigid MRI Motion Correction,” was recently awarded best oral presentation at the Medical Imaging with Deep Learning conference (MIDL) in Nashville, Tennessee. The method computationally constructs a motion-free image from motion-corrupted data without changing anything about the scanning procedure. “Our aim was to combine physics-based modeling and deep learning to get the best of both worlds,” Singh says.

The importance of this combined approach lies in ensuring consistency between the output image and the actual measurements of what is being depicted. Without that constraint, the model can create "hallucinations": images that appear realistic but are physically and spatially inaccurate, potentially worsening diagnostic outcomes.
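As a rough illustration of what data consistency means here, the hypothetical sketch below reuses the toy line-by-line acquisition from the earlier example. The names forward_model and data_consistency are illustrative, not functions from the paper: given a candidate motion-free image and candidate per-line motion parameters, a physics-based forward model re-simulates the corrupted k-space, and any reconstruction whose simulated measurements disagree with what the scanner actually recorded is penalized.

```python
import numpy as np

def forward_model(image, shifts_per_line):
    """Simplified physics-based forward model: simulate k-space acquired
    line by line while the head translates (hypothetical model with one
    x-translation per acquired line)."""
    n = image.shape[0]
    kspace = np.fft.fftshift(np.fft.fft2(image))
    kx = np.fft.fftshift(np.fft.fftfreq(n))
    for row, shift in enumerate(shifts_per_line):
        # Fourier shift theorem: a translation is a linear phase in k-space.
        kspace[row, :] *= np.exp(-2j * np.pi * kx * shift)
    return kspace

def data_consistency(candidate_image, candidate_shifts, measured_kspace):
    """Residual between simulated and actually measured k-space. A deep
    network can propose the image (and motion estimates), but only
    reconstructions with a small residual are trusted."""
    simulated = forward_model(candidate_image, candidate_shifts)
    return np.linalg.norm(simulated - measured_kspace)
```

In this conceptual split, the network supplies a strong image prior while the forward model ties every output back to the raw measurements, which is what rules out hallucinated anatomy.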

Procuring an MRI free of motion artifacts, particularly from patients with neurological disorders that cause involuntary movement, such as Alzheimer's or Parkinson's disease, would benefit more than just patient outcomes. A study from the University of Washington Department of Radiology estimated that motion affects 15 percent of brain MRIs. Across all types of MRI, motion severe enough to require repeated scans or imaging sessions to obtain diagnostic-quality images results in approximately $115,000 in hospital expenditures per scanner every year.

According to Singh, future work could explore more sophisticated types of head motion as well as motion in other body parts. For instance, fetal MRI suffers from rapid, unpredictable motion that cannot be modeled only by simple translations and rotations.
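"Rigid" motion here means the head moves as a solid object, so in 3D any pose at a given instant is captured by just six numbers: three rotation angles and three translations. The hypothetical helper below sketches that parameterization; deformable motion such as fetal movement would need a far richer model.

```python
import numpy as np

def rigid_transform_3d(points, angles, translation):
    """Apply a rigid-body motion to an (N, 3) array of points: rotate
    about the z, y, and x axes, then translate. Six parameters fully
    describe any rigid head pose."""
    az, ay, ax = angles
    cz, sz = np.cos(az), np.sin(az)
    cy, sy = np.cos(ay), np.sin(ay)
    cx, sx = np.cos(ax), np.sin(ax)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    return points @ (Rz @ Ry @ Rx).T + np.asarray(translation)
```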

“This line of work from Singh and company is the next step in MRI motion correction. Not only is it excellent research work, but I believe these methods will be used in all kinds of clinical cases: children and older folks who can’t sit still in the scanner, pathologies which induce motion, studies of moving tissue, even healthy patients will move in the magnet,” says Daniel Moyer, an assistant professor at Vanderbilt University. “In the future, I think that it likely will be standard practice to process images with something directly descended from this research.”

Co-authors of this paper include Nalini Singh, Neel Dey, Malte Hoffmann, Bruce Fischl, Elfar Adalsteinsson, Robert Frost, Adrian Dalca and Polina Golland. This research was supported in part by GE Healthcare and by computational hardware provided by the Massachusetts Life Sciences Center. The research team thanks Steve Cauley for helpful discussions. Additional support was provided by NIH NIBIB, NIA, NIMH, NINDS, the Blueprint for Neuroscience Research, part of the multi-institutional Human Connectome Project, the BRAIN Initiative Cell Census Network, and a Google PhD Fellowship.
