Researchers Use AI to Detect Early Alzheimer's Signs Through a Patient's Voice

Boston University researchers achieved 78.5% accuracy in predicting the onset of Alzheimer's within six years using AI voice analysis

Ben Wodecki, Junior Editor - AI Business

June 27, 2024

2 Min Read

Boston University researchers have developed an AI system capable of detecting early signs of Alzheimer's Disease in a patient’s voice.

There is currently no cure for the disease; however, catching it early improves a patient's treatment options.

Using a combination of natural language processing and machine learning, the researchers developed a tool that can automatically predict the progression of Alzheimer's in a patient.

Patients' voice recordings are fed into the AI, which then predicts whether they will develop Alzheimer's within six years.

In the study, the AI system predicted the onset of Alzheimer's with an accuracy of 78.5% across some 166 patients.

Publishing results of the study in the Alzheimer's Association Journal, the researchers said the AI “offers a fully automated procedure, providing an opportunity to develop an inexpensive, broadly accessible and easy-to-administer screening tool for mild cognitive impairments (MCIs) to Alzheimer's progression prediction.”

They added that the tool could even be used to perform remote assessments, letting patients provide doctors with voice recordings to obtain a diagnosis.

Almost 7 million Americans are currently living with Alzheimer's, with that number projected to rise to nearly 13 million by 2050.


While there is currently no cure, catching signs of the disease early gives patients more treatment options, including taking part in clinical trials to find a potential cure.

The researchers sought to examine voice recordings from neuropsychological exams, combined with basic demographic information, to find out if AI could be used to provide earlier diagnoses.

The recordings were analyzed using speech recognition that turned the audio into text, which was then processed using language models.

The multimodal approach created what the researchers described as a “fully automated assessment” capable of identifying patients most at risk.
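The article does not include code, but the pipeline it describes — transcribe the exam recordings, turn the transcripts into text features, combine them with basic demographics, and train a classifier to flag patients likely to progress — can be sketched roughly as below. The data layout, feature choices and model (TF-IDF text features plus age and education fed into a logistic regression via scikit-learn) are illustrative assumptions, not the researchers' actual implementation.

```python
# Illustrative sketch only: the actual Boston University pipeline, features and
# model are not described in this article. Assumes pandas and scikit-learn.

import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline


def transcribe(audio_path: str) -> str:
    # Stub standing in for an automatic speech recognition system;
    # in practice this would return the exam's spoken responses as text.
    return "placeholder transcript for " + audio_path


# Hypothetical data layout: one row per patient, with a recording from a
# neuropsychological exam, basic demographics, and a label for whether the
# patient progressed from mild cognitive impairment to Alzheimer's within six years.
df = pd.DataFrame({
    "audio_path": ["exam_001.wav", "exam_002.wav"],
    "age": [78, 83],
    "education_years": [16, 12],
    "progressed_within_6y": [0, 1],
})
df["transcript"] = df["audio_path"].apply(transcribe)

# Combine text features from the transcript with demographic columns.
features = ColumnTransformer([
    ("text", TfidfVectorizer(max_features=5000), "transcript"),
    ("demographics", "passthrough", ["age", "education_years"]),
])

# A simple classifier over the combined features; any model could be swapped in.
model = Pipeline([
    ("features", features),
    ("clf", LogisticRegression(max_iter=1000)),
])

model.fit(df[["transcript", "age", "education_years"]], df["progressed_within_6y"])
# model.predict_proba(...) would then give a per-patient risk of progression.
```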

The model showed that older women with lower education levels were more likely to develop Alzheimer's, as were women carrying the APOE-ε4 gene variant.

The AI also found that the likelihood of developing Alzheimer's increases "significantly" with age. Some 19% of patients aged 75 to 84 were found likely to develop the disease, rising to 35% for those older than 85.

“Our study demonstrates the potential of using automatic speech recognition and natural language processing techniques to develop a prediction tool for identifying individuals with MCIs who are at risk of developing Alzheimer’s,” the researchers wrote.


This article first appeared in IoT World Today's sister publication AI Business.

About the Author(s)

Ben Wodecki

Junior Editor - AI Business

Ben Wodecki is the junior editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to junior editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others.

