Like a drowsy crystal ball, AI can now use sleep to predict your medical future
In A Nutshell
- Stanford researchers trained an AI model on sleep recordings from 65,000+ people and found it could predict risk for 130 diseases years before diagnosis
- The system achieved a concordance index of 0.84 for mortality risk, correctly ranking patient risk 84% of the time, with similarly strong results for dementia, heart attack, heart failure, stroke, and other conditions
- Sleep recordings capture hidden patterns across brain activity, heart rhythms, breathing, and muscle movements that signal future health problems
- The findings suggest polysomnography may eventually become a powerful early detection tool, though current sleep studies require specialized clinical equipment
Scientists have developed an artificial intelligence system that can predict a person’s risk of developing conditions ranging from dementia to heart failure by analyzing a single night of sleep data. The findings suggest that sleep patterns contain far more information about future health than previously recognized.
Researchers at Stanford University and collaborators trained an AI model called SleepFM on polysomnography recordings from more than 65,000 people, representing over 585,000 hours of sleep data. Polysomnography is the gold standard sleep study that records brain activity, heart rhythms, breathing patterns, and muscle movements throughout the night.
After analyzing these overnight recordings, the model identified elevated future risk for 130 medical conditions, often years before clinical diagnosis. For all-cause mortality, the system achieved a concordance index of 0.84, meaning it correctly ranked patient risk 84% of the time. Similar accuracy emerged for dementia (0.85), heart attack (0.81), heart failure (0.80), chronic kidney disease (0.79), stroke (0.78), and atrial fibrillation (0.78).
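To make the concordance index concrete, here is a minimal sketch of how the metric is computed, using the lifelines Python library and synthetic numbers; it illustrates the statistic itself, not the authors' evaluation code.

```python
# Minimal sketch of how a concordance index (C-index) is computed, using
# synthetic numbers -- an illustration of the metric, not the authors'
# evaluation pipeline.
from lifelines.utils import concordance_index

# Hypothetical follow-up times in years (time to event or censoring)
follow_up_years = [2.0, 5.5, 1.2, 8.0, 3.3]
# 1 = the event (e.g., death) was observed, 0 = censored
event_observed = [1, 0, 1, 0, 1]
# Model risk scores, where higher means higher predicted risk
risk_scores = [0.9, 0.2, 0.95, 0.1, 0.6]

# concordance_index expects scores where *higher* implies *longer* survival,
# so the risk scores are negated before being passed in.
c_index = concordance_index(follow_up_years,
                            [-r for r in risk_scores],
                            event_observed)
print(f"C-index: {c_index:.2f}")  # 0.5 is chance; 1.0 is perfect ranking
```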
“Sleep is a fundamental biological process with broad implications for physical and mental health, yet its complex relationship with disease remains poorly understood,” the researchers wrote in their paper published in Nature Medicine.
AI Analyzes Multiple Sleep Signals Simultaneously
The study examined sleep recordings from four major research cohorts spanning ages 1 to 100 years. Traditional sleep studies focus on specific disorders like sleep apnea or measure isolated metrics. SleepFM takes a different approach by processing all physiological signals simultaneously—brain wave patterns, eye movements, heart activity, muscle tone, and breathing measurements.
The system breaks down sleep recordings into five-second segments, analyzing patterns across different signal types to identify which combinations predict future disease. For disease prediction, researchers paired Stanford sleep recordings with electronic health records containing diagnostic codes and timestamps. They only counted cases where diagnosis occurred at least seven days after the sleep study to avoid detecting existing conditions.
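A minimal sketch of those two preprocessing steps might look like this; the sampling rate, variable names, and data layout here are illustrative assumptions, not details from the paper.

```python
# Illustrative sketch of the two preprocessing steps described above.
# The sampling rate, variable names, and data layout are assumptions,
# not details taken from the paper.
import numpy as np
from datetime import datetime, timedelta

SAMPLE_RATE_HZ = 128                  # assumed sampling rate
SEGMENT_SAMPLES = SAMPLE_RATE_HZ * 5  # the paper's five-second segments

def segment_signal(signal: np.ndarray) -> np.ndarray:
    """Split one overnight channel into non-overlapping five-second segments."""
    n_segments = len(signal) // SEGMENT_SAMPLES
    return signal[: n_segments * SEGMENT_SAMPLES].reshape(n_segments, SEGMENT_SAMPLES)

def keep_future_diagnoses(study_date: datetime,
                          diagnoses: list[tuple[str, datetime]]):
    """Keep only diagnoses made at least seven days after the sleep study,
    so the model predicts future disease rather than detecting conditions
    already present at recording time."""
    cutoff = study_date + timedelta(days=7)
    return [(code, date) for code, date in diagnoses if date >= cutoff]

# Example: eight hours of one synthetic EEG channel
eeg = np.random.randn(8 * 3600 * SAMPLE_RATE_HZ)
print(segment_signal(eeg).shape)  # (5760, 640): 5,760 five-second windows
```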

Strong Predictions Across Major Disease Categories
SleepFM demonstrated particularly strong predictive power for neurological and mental health conditions, including mild cognitive impairment and Parkinson’s disease. Among cardiovascular conditions, the system effectively predicted hypertensive heart disease and intracranial hemorrhage. Cancer-related risk prediction showed promising associations for prostate cancer, breast cancer, and skin melanomas.
The model maintained accuracy when tested on sleep recordings from 2020 onwards, a period entirely excluded from training. This validation included strong performance for death (0.83), heart failure (0.80), and dementia (0.83).
To assess whether sleep recordings provided information beyond basic demographics, researchers compared SleepFM against baseline approaches using only age, sex, body mass index, and race or ethnicity, as well as models trained directly on raw sleep data without pretraining. SleepFM consistently outperformed both baselines across most disease categories, with improvements ranging from 5% to 17%.
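For a sense of what such a demographics-only baseline looks like, here is a hedged sketch using a Cox proportional hazards model from lifelines; the columns and data are invented, and the paper's actual baseline models may differ.

```python
# Hedged sketch of a demographics-only survival baseline, of the kind the
# authors compared SleepFM against. Column names and data are invented;
# the paper's actual baseline models may differ.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "age":            [54, 67, 45, 72, 60, 38, 58, 49],
    "sex_male":       [1, 0, 1, 1, 0, 0, 1, 0],
    "bmi":            [27.1, 31.4, 22.8, 29.0, 25.5, 24.0, 33.2, 26.7],
    "years_to_event": [3.0, 8.0, 9.5, 1.4, 4.8, 10.0, 5.1, 7.7],
    "event":          [1, 0, 0, 1, 1, 0, 1, 0],  # 1 = diagnosis observed
})

cph = CoxPHFitter()
cph.fit(df, duration_col="years_to_event", event_col="event")
# lifelines reports the fitted model's concordance index directly, so a
# demographics-only baseline can be compared with a sleep-based model
# on the same 0.5-to-1.0 scale.
print(cph.concordance_index_)
```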
Certain sleep stages and signal types proved more informative for specific disease categories. Brain activity signals better captured mental and neurological conditions, respiratory signals more effectively predicted respiratory and metabolic disorders, and heart signals proved most informative for circulatory diseases. Combining all signal types produced the best overall performance.
Why Sleep Recordings Reveal Future Health Risks
Sleep recordings capture intricate interactions across physiological systems that change over time, likely reflecting underlying processes that contribute to or signal future disease development. For mortality risk, research has linked factors including high arousal burden, low REM sleep, sleep-disordered breathing, low oxygen levels, and poor sleep efficiency to increased death rates.
The model’s success with dementia prediction is noteworthy given that sleep abnormalities are strongly associated with preclinical Alzheimer’s disease, including reduced slow-wave activity, REM sleep disturbances, and decreased spindle activity. Parkinson’s disease is frequently preceded by REM sleep behavior disorder, characterized by abnormal muscle activity during REM sleep and distinctive patterns in brain and heart recordings.
The model also analyzed sleep recordings from the Sleep Heart Health Study, a dataset completely excluded from training. Using only a subset of this external data for fine-tuning, SleepFM demonstrated strong results across key outcomes including stroke (0.82), congestive heart failure (0.85), and cardiovascular disease mortality (0.88).

What This Means for Sleep Medicine
The research has several limitations. The dataset consists primarily of patients referred for sleep studies due to suspected sleep disorders, meaning the study population differs from the general public. Model performance showed some decline in recordings from later time periods, and interpreting exactly which sleep features drive specific predictions remains difficult.
The study focused on overnight laboratory polysomnography, which requires specialized equipment and clinical settings. As wearable sleep technology continues advancing, similar approaches might eventually enable noninvasive health monitoring outside medical facilities, though current wearables capture fewer and less detailed physiological signals than full polysomnography.
These findings reveal that a single night’s sleep contains a wealth of information about future health across numerous conditions. Sleep patterns may serve as an early warning signal for diseases that won’t manifest for years, offering potential opportunities for earlier intervention and prevention.
Paper Notes
Limitations
- The dataset consists primarily of patients referred for clinical sleep studies due to suspected sleep disorders or other medical conditions, creating selection bias: the cohort is not representative of the general population, and people without sleep complaints or with limited access to specialized sleep clinics are underrepresented.
- Model performance degraded somewhat on temporal test sets from later years, indicating challenges in maintaining predictive accuracy as clinical practices and patient populations evolve over time.
- Interpreting the specific sleep patterns and features that drive predictions is inherently difficult given the complexity of deep learning models, though the researchers conducted stratification analyses across sleep stages and signal modalities to gain insight.
- The transfer learning evaluation on the Sleep Heart Health Study dataset was limited to a subset of conditions because of differences in the diagnostic information available across datasets.
- Sleep apnea analysis was restricted to classification based on apnea-hypopnea index thresholds (see the sketch after this list), without exploring more granular approaches such as continuous severity prediction or individual event detection.
- While achieving competitive performance on most tasks, SleepFM lagged behind some specialized sleep staging models on certain external validation datasets.
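For reference, threshold-based apnea-hypopnea index classification conventionally uses the standard clinical severity bins; the sketch below uses those widely accepted cutoffs, which are an assumption here rather than values quoted from the paper.

```python
# Standard clinical apnea-hypopnea index (AHI) severity bins -- the kind of
# threshold-based classification the paper restricted itself to. These
# cutoffs are the widely used clinical conventions, not values taken
# from the paper itself.
def ahi_severity(ahi_events_per_hour: float) -> str:
    if ahi_events_per_hour < 5:
        return "normal"
    if ahi_events_per_hour < 15:
        return "mild"
    if ahi_events_per_hour < 30:
        return "moderate"
    return "severe"

print(ahi_severity(22.4))  # moderate
```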
Funding and Disclosures
This research was supported by multiple funding sources. Rahul Thapa received support from the Knight-Hennessy Scholars program. Emmanuel Mignot and M. Brandon Westover were supported by a grant from the National Heart, Lung, and Blood Institute (NHLBI) of the National Institutes of Health (R01HL161253). James Zou received support from the Chan Zuckerberg Biohub. The Multi-Ethnic Study of Atherosclerosis (MESA) Sleep Ancillary study was funded by NIH-NHLBI (R01 HL098433); MESA itself is supported by contracts from NHLBI and cooperative agreements from NCATS. The MrOS Sleep Study was funded by NHLBI grants R01 HL071194, R01 HL070848, R01 HL070847, R01 HL070842, R01 HL070841, R01 HL070837, R01 HL070838, and R01 HL070839. The Sleep Heart Health Study was supported by NHLBI cooperative agreements. The National Sleep Research Resource received support from NHLBI (R24 HL114473, 75N92019R002). M. Brandon Westover disclosed being a cofounder, scientific advisor, and consultant to Beacon Biosignals, with a personal equity interest in the company. The remaining authors declared no competing interests.
Publication Details
The study “A multimodal sleep foundation model for disease prediction” was authored by Rahul Thapa, Magnus Ruud Kjaer, Bryan He, Ian Covert, Hyatt Moore IV, Umaer Hanif, Gauri Ganjoo, M. Brandon Westover, Poul Jennum, Andreas Brink-Kjaer, Emmanuel Mignot, and James Zou. The authors are affiliated with the Department of Biomedical Data Science at Stanford University, Department of Computer Science at Stanford University, Department of Psychiatry and Behavioral Sciences at Stanford University, Department of Health Technology at Technical University of Denmark, Danish Center for Sleep Medicine at Rigshospitalet, Department of Systems Engineering at Naval Postgraduate School, BioSerenity Paris, Department of Neurology at Beth Israel Deaconess Medical Center and Harvard Medical School, and Department of Clinical Medicine at University of Copenhagen. The paper was published in Nature Medicine in 2026 with DOI: 10.1038/s41591-025-04133-4. The manuscript was received February 3, 2025, accepted November 18, 2025, and published online January 6, 2026.







