Interpreting psychiatric digital phenotyping data with large language models: a preliminary analysis
Matthew Flathers, Winna Xia, Christine Hau, Benjamin W Nelson, Jiaee Cheong, James Burns, John Torous
BMJ Mental Health, published 23 September 2025. DOI: 10.1136/bmjment-2025-301817
Abstract
BACKGROUND
Digital phenotyping provides passive monitoring of behavioural health but faces implementation challenges in translating complex multimodal data into actionable clinical insights. Digital navigators, healthcare staff who interpret patient data and relay findings to clinicians, provide a solution, but workforce limitations restrict scalability.
OBJECTIVE
This study provides one of the first systematic evaluations of large language model performance in interpreting simulated psychiatric digital phenotyping data, establishing baseline accuracy metrics for this emerging application.
METHODS
We evaluated GPT-4o and GPT-3.5-turbo on over 153 test cases covering various clinical scenarios, timeframes and data quality levels, using simulated test datasets currently employed in training human digital navigators. Performance was assessed as each model's capacity to identify clinical patterns relative to human digital navigator experts.
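To make the evaluation protocol concrete, the sketch below shows one way such a zero-shot harness could be structured. It is a minimal illustration, not the authors' method: the prompt wording, case format, and exact-match scoring rule are all assumptions; the study scored model outputs against human digital navigator judgements.

```python
# Hypothetical zero-shot evaluation harness for simulated digital
# phenotyping cases. Prompts, case format and scoring are illustrative
# assumptions, not the study's actual protocol.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Each simulated case pairs a digital phenotyping summary with the
# pattern a human digital navigator would identify (hypothetical format).
cases = [
    {
        "data": "Week 1: home time 9 h/day, PHQ-9 12. Week 4: home time 15 h/day, PHQ-9 19.",
        "expected": "worsening depression",
    },
]

def interpret(case: dict, model: str = "gpt-4o") -> str:
    """Ask the model, zero-shot, which clinical pattern the data show."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": "You interpret psychiatric digital phenotyping data."},
            {"role": "user", "content": f"Data: {case['data']}\nName the clinical pattern."},
        ],
    )
    return response.choices[0].message.content

# Naive substring scoring for illustration; the study used expert
# human judgement as the reference standard.
correct = sum(case["expected"] in interpret(case).lower() for case in cases)
print(f"accuracy: {correct / len(cases):.0%}")
```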
FINDINGS
GPT-4o demonstrated 52% accuracy (95% CI 46.5% to 57.6%) in identifying clinical patterns in standard test cases, significantly outperforming GPT-3.5-turbo (12%, 95% CI 8.4% to 15.6%). Across scenarios, GPT-4o's strongest results were observed for worsening depression (100%) and worsening anxiety (83%) patterns, while its weakest performance was seen for increased home time with improving symptoms (6%). Accuracy declined with decreasing data quality (69% for high-quality data vs 39% for low-quality data) and shorter timeframes (60% for 3-month data vs 43% for 3-week data).
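The reported intervals are consistent with a standard normal-approximation (Wald) confidence interval for a binomial proportion. The sketch below reproduces the headline figure under that assumption; the observation count n = 306 is an illustrative guess chosen to match the interval width, not a number from the paper.

```python
# Normal-approximation (Wald) 95% CI for a reported accuracy.
# n = 306 is an assumed observation count for illustration only.
import math

def wald_ci(p: float, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% CI for a binomial proportion via the normal approximation."""
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p - half_width, p + half_width

low, high = wald_ci(0.52, 306)
print(f"{low:.1%} to {high:.1%}")  # ~46.4% to 57.6%, close to the reported 46.5% to 57.6%
```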
CONCLUSIONS
GPT-4o's 52% accuracy in zero-shot interpretation of psychiatric digital phenotyping data establishes a meaningful baseline, though performance gaps and occasional hallucinations confirm that human oversight of digital navigation tasks remains essential. The significant performance variations across models, data quality levels and clinical scenarios highlight the need for careful implementation.
CLINICAL IMPLICATIONS
Large language models could serve as assistive tools that augment human digital navigators, potentially addressing workforce limitations while maintaining necessary clinical oversight in psychiatric digital phenotyping applications.