Deep learning models can detect patterns in a brain scan, a relatively new concept in medicine.
FREMONT, CA: For conditions such as Alzheimer’s disease or rare brain disorders in children, it is difficult to collect sufficient data. Neurological experts struggle to distinctly outline every anatomical structure across varied scans. To address this, MIT researchers have developed a method to extract more information from a single scan and use it to train machine-learning models. The approach is also being applied to complex brain scans.
Training deep learning models to detect patterns in brain scans is a relatively new concept in medicine. In a paper presented at the recent Conference on Computer Vision and Pattern Recognition, the researchers explain that the proposed system combines a single labeled scan with unlabeled scans to automatically synthesize a massive dataset of distinct training examples. These synthesized scans can then train machine-learning (ML) models to spot anatomical structures in new scans.
The objective is to automatically generate data for the "image segmentation" process, which partitions an image into regions of pixels that are simpler to analyze. The system uses a convolutional neural network (CNN), an ML model that has become a workhorse for image-processing tasks. The network studies numerous unlabeled scans from different patients and different equipment to learn anatomical, brightness, and contrast variations. It then applies a random combination of those variations to a single labeled scan to synthesize new scans. Finally, the synthesized scans are fed to a separate CNN that learns to segment new images. The development will make image segmentation more accessible in realistic situations that lack comprehensive training data.
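The synthesis step described above can be sketched roughly as follows. This is a minimal toy illustration only: the helper names are hypothetical, and where the researchers use learned CNN-based transform models, this sketch substitutes random smooth spatial deformations and random brightness/contrast shifts applied to one labeled example.

```python
# Toy sketch of synthesizing new labeled scans from a single labeled one.
# Hypothetical helpers; random transforms stand in for the learned CNN models.
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

rng = np.random.default_rng(0)

def random_spatial_transform(shape, strength=3.0, smoothness=4.0):
    """Stand-in for a learned deformation: a smooth random displacement field."""
    return [gaussian_filter(rng.standard_normal(shape), smoothness) * strength
            for _ in range(len(shape))]

def warp(volume, displacement, order):
    """Resample a volume along a displacement field; order=0 preserves labels."""
    coords = np.meshgrid(*[np.arange(s) for s in volume.shape], indexing="ij")
    warped_coords = [c + d for c, d in zip(coords, displacement)]
    return map_coordinates(volume, warped_coords, order=order, mode="nearest")

def synthesize(image, labels):
    """Create one new (image, labels) training pair from the labeled scan."""
    disp = random_spatial_transform(image.shape)
    new_image = warp(image, disp, order=1)    # linear interpolation for intensities
    new_labels = warp(labels, disp, order=0)  # nearest-neighbor for the label map
    # Appearance transform: brightness/contrast shift applied to the image only.
    new_image = new_image * rng.uniform(0.9, 1.1) + rng.uniform(-0.05, 0.05)
    return new_image, new_labels

# Toy 2-D "scan" and label map standing in for a 3-D labeled MRI.
atlas = gaussian_filter(rng.random((32, 32)), 2)
atlas_labels = (atlas > atlas.mean()).astype(np.int32)
img, lab = synthesize(atlas, atlas_labels)
```

Each call to `synthesize` yields a new, plausibly varied image together with a matching label map, which is what lets one labeled scan seed a large training set for the segmentation CNN.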
Magnetic resonance images (MRIs) are composed of three-dimensional pixels called voxels. When an MRI is segmented, voxel regions are separated and labeled according to the anatomical structure containing them. The challenge of using ML to automate the process stems from variation across individual brains and across the equipment used. The researchers’ system learns to synthesize realistic scans; in particular, they trained it on 100 unlabeled scans from different patients to learn spatial transformations.
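A voxel-wise label map of the kind described above can be pictured as a 3-D integer array, one code per anatomical structure. The structure names and sizes below are invented for illustration; real atlases contain many more regions.

```python
# Toy voxel-wise segmentation label map (structure names are hypothetical).
import numpy as np

STRUCTURES = {0: "background", 1: "hippocampus", 2: "ventricle"}

# A tiny 4x4x4 "MRI" label map: each voxel holds its structure's code.
labels = np.zeros((4, 4, 4), dtype=np.int32)
labels[1:3, 1:3, 1:3] = 1   # an 8-voxel "hippocampus" region
labels[0, 0, :2] = 2        # a 2-voxel "ventricle" region

# Volume (in voxels) of each structure, read straight off the label map.
counts = {STRUCTURES[k]: int((labels == k).sum()) for k in STRUCTURES}
# counts["hippocampus"] → 8
```

Segmenting a scan amounts to producing such a label map for it, which is exactly what the second CNN learns to do from the synthesized training pairs.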