Functional magnetic resonance imaging (fMRI) is a noninvasive diagnostic technique for brain disorders such as Alzheimer’s disease (AD). It measures minute changes in blood oxygen levels within the brain over time, giving insight into the local activity of neurons. Even so, fMRI has not been widely used in clinical diagnosis: fMRI data are highly susceptible to noise, and their structure is far more complicated than that of a traditional X-ray or MRI scan.

Scientists from Texas Tech University now report that they have developed a type of deep-learning algorithm known as a convolutional neural network (CNN) that can differentiate among the fMRI signals of healthy people, people with mild cognitive impairment, and people with AD.

Their findings, “Spatiotemporal feature extraction and classification of Alzheimer’s disease using deep learning 3D-CNN for fMRI data,” are published in the Journal of Medical Imaging; the study was led by Harshit Parmar, a doctoral student at Texas Tech University.

“In this study, we designed and trained a volumetric 3D CNN to do multiclass classification. The CNN is able to extract spatiotemporal features from a given section of BOLD fMRI data, and then use those features to classify the data as Alzheimer’s disease (AD); mild cognitive impairment (MCI), covering both the early (EMCI) and late (LMCI) stages; or healthy controls (CN),” noted the researchers.

The architecture of a CNN is designed to take advantage of the 2D structure of an input image or other 2D input such as a speech spectrogram. CNNs are also easier to train and have far fewer parameters than fully connected networks with the same number of hidden units.
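The parameter savings come from weight sharing: a convolutional filter reuses the same small set of weights at every position, whereas a fully connected layer learns a separate weight for every input-output pair. A quick back-of-the-envelope comparison, using a hypothetical 64x64 grayscale input and 128 hidden units (illustrative numbers, not from the paper):

```python
# Toy parameter-count comparison for a single layer on a hypothetical
# 64x64 single-channel input with 128 hidden units / filters.
H, W = 64, 64
hidden_units = 128

# Fully connected: every input pixel gets its own weight per hidden unit.
fc_params = (H * W) * hidden_units + hidden_units      # weights + biases

# Convolutional: 128 shared 3x3 filters, independent of image size.
k = 3
conv_params = (k * k * 1) * hidden_units + hidden_units

print(fc_params)    # 524416
print(conv_params)  # 1280
```

The convolutional layer needs roughly 400 times fewer parameters here, and the gap widens as the input grows, since `conv_params` does not depend on `H` or `W` at all.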

Network activation map from the output of the second temporal convolution layer mapped onto MNI brain atlas. [H. Parmar et al. doi 10.1117/1.JMI.7.5.056001.]
fMRI data are incompatible with most existing CNN designs because of their four-dimensional structure. To overcome this challenge, the researchers developed a CNN architecture that can handle fMRI data with minimal preprocessing. The first two layers of the network extract features based solely on temporal changes, without regard to 3D spatial structure. The following layers extract spatial features at different scales from those time features. This yields a set of spatiotemporal characteristics that the final layers use to classify the input fMRI data as coming from a healthy subject, a subject with early or late mild cognitive impairment, or a subject with AD.

The simple yet effective architecture of the proposed CNN was carefully designed to handle 4D fMRI data through eight layers; the first two extract temporal features, the next three focus on spatial features, and the last three are geared toward classification. [H. Parmar et al. doi 10.1117/1.JMI.7.5.056001.]
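The temporal-then-spatial pipeline described above can be sketched in miniature with plain numpy. Everything below is illustrative and not the authors' implementation: the array sizes, filters, and the toy linear classifier are assumptions chosen only to show how a 4D BOLD volume can be reduced to temporal features, then spatial features, then class scores for the four groups (CN, EMCI, LMCI, AD).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy 4D BOLD fMRI volume: (time, x, y, z).
T, X, Y, Z = 10, 8, 8, 8
bold = rng.random((T, X, Y, Z))

# Stage 1: temporal feature extraction. A 1D filter slides along the
# time axis at every voxel independently; no spatial mixing yet.
t_kernel = np.array([0.25, 0.5, 0.25])          # simple smoothing filter
temporal = np.apply_along_axis(
    lambda v: np.convolve(v, t_kernel, mode="valid"), 0, bold)
# "valid" convolution shrinks the time axis: T -> T - 3 + 1 = 8.

# Stage 2: spatial feature extraction. A 3D filter is applied to each
# temporal feature map, mixing neighboring voxels at one scale.
def conv3d_valid(vol, k):
    """Naive valid-mode 3D convolution (correlation) of vol with kernel k."""
    kx, ky, kz = k.shape
    out = np.zeros((vol.shape[0] - kx + 1,
                    vol.shape[1] - ky + 1,
                    vol.shape[2] - kz + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for l in range(out.shape[2]):
                out[i, j, l] = np.sum(vol[i:i+kx, j:j+ky, l:l+kz] * k)
    return out

s_kernel = np.ones((3, 3, 3)) / 27.0            # 3x3x3 averaging filter
spatial = np.stack([conv3d_valid(temporal[t], s_kernel)
                    for t in range(temporal.shape[0])])

# Stage 3: classification. Pool each spatiotemporal map to one feature
# and feed the feature vector to a toy linear classifier with 4 outputs.
features = spatial.mean(axis=(1, 2, 3))         # shape (8,)
classes = ["CN", "EMCI", "LMCI", "AD"]
W_cls = rng.standard_normal((len(classes), features.size))
scores = W_cls @ features                       # one score per class
predicted = classes[int(np.argmax(scores))]
print(temporal.shape, spatial.shape, scores.shape, predicted)
```

The real network replaces each stage with learned filter banks (two temporal layers, three spatial layers, three classification layers, per the figure above), but the shape bookkeeping is the same: time shrinks first, space shrinks next, and only the final layers see the pooled spatiotemporal features.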
“Deep learning CNNs could be used to extract functional biomarkers related to AD, which could be helpful in the early detection of AD-related dementia,” Parmar explained.

The researchers tested their CNN with fMRI data from a public database, and the classification accuracy of their algorithm was as high as or higher than that of other methods. Their results demonstrate that deep learning-based approaches can help improve diagnosis not only for AD but also for other neurological disorders.

“Alzheimer’s has no cure yet. Although brain damage cannot be reversed, the progression of the disease can be reduced and controlled with medication,” according to the authors. “Our classifier can accurately identify the mild cognitive impairment stages, which provides an early warning before progression into AD.”

Looking forward, the researchers hope to investigate the possibility of developing fMRI-based biomarkers for AD and MCI.
