Vocal expression of emotions discriminates dementia with Lewy bodies from Alzheimer’s disease
https://doi.org/10.1002/dad2.12594
Abstract
Dementia with Lewy bodies (DLB) and Alzheimer's disease (AD), the two most common neurodegenerative dementias, both involve altered emotional processing. However, how vocal emotional expression is altered in, and differs between, DLB and AD remains uninvestigated. We collected voice data during story reading from 152 older adults comprising DLB, AD, and cognitively unimpaired (CU) groups and compared their emotional prosody along the valence and arousal dimensions. Compared with matched AD and CU participants, DLB patients showed reduced overall emotional expressiveness, as well as lower valence (more negative) and lower arousal (calmer), the extent of which was associated with cognitive impairment and insular atrophy. Classification models using vocal features discriminated DLB from AD and from CU with AUCs of 0.83 and 0.78, respectively. Our findings may aid in discriminating DLB patients from AD and CU individuals, serving as a surrogate marker for clinical and neuropathological changes in DLB.
Highlights
DLB showed a distinctive reduction in vocal expression of emotions.
Cognitive impairment was associated with reduced vocal emotional expression in DLB.
Insular atrophy was associated with reduced vocal emotional expression in DLB.
Emotional expression measures successfully differentiated DLB from AD and from controls.
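The abstract reports discrimination of DLB from AD and CU with AUCs of 0.83 and 0.78 from vocal emotion features, but does not describe the model. The sketch below is a minimal, hypothetical illustration of how such an AUC could be computed from per-speaker valence, arousal, and expressiveness summaries; the synthetic data and the choice of a logistic-regression classifier are assumptions, not the authors' method.

```python
# Minimal sketch (not the authors' pipeline): discriminating DLB from AD
# using hypothetical per-speaker vocal emotion features (mean valence,
# mean arousal, overall expressiveness) and reporting a cross-validated AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-in data: 40 DLB speakers (label 1) and 40 AD speakers
# (label 0), three vocal features each. Real features would come from
# prosody analysis of the recorded story readings.
X_dlb = rng.normal(loc=[-0.3, -0.3, -0.5], scale=1.0, size=(40, 3))
X_ad = rng.normal(loc=[0.0, 0.0, 0.0], scale=1.0, size=(40, 3))
X = np.vstack([X_dlb, X_ad])
y = np.array([1] * 40 + [0] * 40)

# Cross-validated probability estimates, then a single AUC over all folds.
clf = LogisticRegression(max_iter=1000)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
proba = cross_val_predict(clf, X, y, cv=cv, method="predict_proba")[:, 1]
print(f"DLB vs AD AUC: {roc_auc_score(y, proba):.2f}")
```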
Towards accurate differential diagnosis with large language models
https://doi.org/10.1038/s41586-025-08869-4
Abstract
A comprehensive differential diagnosis is a cornerstone of medical care that is often reached through an iterative process of interpretation that combines clinical history, physical examination, investigations and procedures. Interactive interfaces powered by large language models present new opportunities to assist and automate aspects of this process [1]. Here we introduce the Articulate Medical Intelligence Explorer (AMIE), a large language model that is optimized for diagnostic reasoning, and evaluate its ability to generate a differential diagnosis alone or as an aid to clinicians. Twenty clinicians evaluated 302 challenging, real-world medical cases sourced from published case reports. Each case report was read by two clinicians, who were randomized to one of two assistive conditions: assistance from search engines and standard medical resources; or assistance from AMIE in addition to these tools. All clinicians provided a baseline, unassisted differential diagnosis prior to using the respective assistive tools. AMIE exhibited standalone performance that exceeded that of unassisted clinicians (top-10 accuracy 59.1% versus 33.6%, P = 0.04). Comparing the two assisted study arms, the differential diagnosis quality score was higher for clinicians assisted by AMIE (top-10 accuracy 51.7%) compared with clinicians without its assistance (36.1%; McNemar’s test: 45.7, P < 0.01) and clinicians with search (44.4%; McNemar’s test: 4.75, P = 0.03). Further, clinicians assisted by AMIE arrived at more comprehensive differential lists than those without assistance from AMIE. Our study suggests that AMIE has potential to improve clinicians’ diagnostic reasoning and accuracy in challenging cases, meriting further real-world evaluation for its ability to empower physicians and widen patients’ access to specialist-level expertise.
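The evaluation reported here rests on top-10 accuracy over ranked differential lists and McNemar's test for paired comparisons between study arms. The snippet below is a self-contained sketch of both quantities on toy data; the case lists, diagnoses, and helper names are hypothetical, and the study's actual scoring pipeline is not described in the abstract.

```python
# Illustrative sketch (not AMIE's evaluation code): top-10 accuracy for
# ranked differential-diagnosis lists, and a McNemar statistic comparing
# two paired assistive conditions on the same cases.

def top_k_accuracy(ranked_lists, truths, k=10):
    """Fraction of cases whose true diagnosis appears in the top-k list."""
    hits = sum(truth in ranked[:k] for ranked, truth in zip(ranked_lists, truths))
    return hits / len(truths)

def mcnemar_statistic(correct_a, correct_b):
    """Chi-squared McNemar statistic (no continuity correction) from paired
    per-case correctness indicators for conditions A and B."""
    b = sum(1 for a, c in zip(correct_a, correct_b) if a and not c)  # A right, B wrong
    c = sum(1 for a, c in zip(correct_a, correct_b) if not a and c)  # A wrong, B right
    return (b - c) ** 2 / (b + c) if (b + c) else 0.0

# Toy example with three cases; the real study used 302 case reports.
truths = ["sarcoidosis", "lyme disease", "amyloidosis"]
amie_assisted = [["sarcoidosis", "tuberculosis"], ["lupus"], ["amyloidosis"]]
unassisted = [["tuberculosis"], ["lyme disease"], ["amyloidosis"]]

correct_amie = [t in d[:10] for d, t in zip(amie_assisted, truths)]
correct_unassisted = [t in d[:10] for d, t in zip(unassisted, truths)]
print("top-10 accuracy (AMIE-assisted):", top_k_accuracy(amie_assisted, truths))
print("McNemar statistic:", mcnemar_statistic(correct_amie, correct_unassisted))
```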