Researchers use speech to detect brain diseases early

Researchers at the Mayo Clinic in Rochester, Minnesota, are harnessing the intricate brain processes involved in speech to detect neurodegenerative diseases at their earliest stages. Speech, which requires complex coordination of thought, language, and physical movements, can reveal early signs of conditions such as Parkinson’s disease, amyotrophic lateral sclerosis (ALS), and certain types of frontotemporal dementia.

“There are some diseases where the very first manifestation is in someone’s voice or their speech,” said Dr. Hugo Botha, a behavioral neurologist and associate director of Mayo Clinic’s Neurology Artificial Intelligence Program. This insight has prompted a research initiative to collect and analyze voice samples from patients.

The process involves patients completing remote voice and speech exams through an application on their phones or computers. Because these exams are repeated periodically, they provide a “longitudinal view” of disease progression rather than a single snapshot, according to Dr. Botha.

A key element of this research is the development of a secure speech bank, which stores voice recordings for analysis. These samples are being used to train artificial intelligence (AI) algorithms that can detect subtle speech signals indicative of disease—signals that may go unnoticed by human listeners.

“There are some signals in someone’s voice and speech that a computer or an algorithm might pick up on, that a human listener wouldn’t,” Dr. Botha explained. The goal is to use AI to differentiate between various diseases based on speech patterns, potentially leading to earlier and more accurate diagnoses.
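The article does not describe the specific measures Mayo Clinic's algorithms compute. As a purely illustrative sketch of the kind of low-level acoustic features a speech-analysis pipeline might extract before feeding them to a classifier, the snippet below computes two classic ones on a synthetic waveform: zero-crossing rate (related to pitch and voicing) and root-mean-square energy (related to loudness and tremor). The tremor-like amplitude modulation in the test signal is a made-up example, not patient data.

```python
import math

def zero_crossing_rate(samples):
    """Fraction of adjacent sample pairs whose signs differ."""
    crossings = sum(
        1 for a, b in zip(samples, samples[1:])
        if (a >= 0) != (b >= 0)
    )
    return crossings / (len(samples) - 1)

def rms_energy(samples):
    """Root-mean-square amplitude of the frame."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

# Synthetic one-second frame at 8 kHz: a 100 Hz tone with a small
# tremor-like 5 Hz amplitude modulation (hypothetical test signal).
RATE = 8000
frame = [
    (1.0 + 0.3 * math.sin(2 * math.pi * 5 * t / RATE))
    * math.sin(2 * math.pi * 100 * t / RATE)
    for t in range(RATE)
]

print(round(zero_crossing_rate(frame), 4))
print(round(rms_energy(frame), 4))
```

In a real system, features like these would be computed over many short frames of a recording and tracked across repeated exams, so that an algorithm can flag subtle drifts a human listener would miss.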

This pioneering work could transform how clinicians monitor and manage neurodegenerative diseases, offering new hope for patients and their families.