Interpreting psychiatric digital phenotyping data with large language models: a preliminary analysis.

BMJ Mental Health (Psychiatry, impact factor 4.9)
Matthew Flathers,Winna Xia,Christine Hau,Benjamin W Nelson,Jiaee Cheong,James Burns,John Torous
{"title":"Interpreting psychiatric digital phenotyping data with large language models: a preliminary analysis.","authors":"Matthew Flathers,Winna Xia,Christine Hau,Benjamin W Nelson,Jiaee Cheong,James Burns,John Torous","doi":"10.1136/bmjment-2025-301817","DOIUrl":null,"url":null,"abstract":"BACKGROUND\r\nDigital phenotyping provides passive monitoring of behavioural health but faces implementation challenges in translating complex multimodal data into actionable clinical insights. Digital navigators, healthcare staff who interpret patient data and relay findings to clinicians, provide a solution, but workforce limitations restrict scalability.\r\n\r\nOBJECTIVE\r\nThis study provides one of the first systematic evaluation of large language model performance in interpreting simulated psychiatric digital phenotyping data, establishing baseline accuracy metrics for this emerging application.\r\n\r\nMETHODS\r\nWe evaluated GPT-4o and GPT-3.5-turbo across over 153 test cases covering various clinical scenarios, timeframes and data quality levels using simulated test datasets currently employed in training human digital navigators. Performance was assessed on the model's capacity to identify clinical patterns relative to human digital navigation experts.\r\n\r\nFINDINGS\r\nGPT-4o demonstrated 52% accuracy (95% CI 46.5% to 57.6%) in identifying clinical patterns based on standard test cases, significantly outperforming GPT-3.5-turbo (12%, 95% CI 8.4% to 15.6%). When analysing GPT-4o's performance across different scenarios, strongest results were observed for worsening depression (100%) and worsening anxiety (83%) patterns while weakest performance was seen for increased home time with improving symptoms (6%). Accuracy declined with decreasing data quality (69% for high-quality data vs 39% for low-quality data) and shorter timeframes (60% for 3-month data vs 43% for 3-week data).\r\n\r\nCONCLUSIONS\r\nGPT-4o's 52% accuracy in zero-shot interpretation of psychiatric digital phenotyping data establishes a meaningful baseline, though performance gaps and occasional hallucinations confirm human oversight in digital navigation tasks remains essential. The significant performance variations across models, data quality levels and clinical scenarios highlight the need for careful implementation.\r\n\r\nCLINICAL IMPLICATIONS\r\nLarge language models could serve as assistive tools that augment human digital navigators, potentially addressing workforce limitations while maintaining necessary clinical oversight in psychiatric digital phenotyping applications.","PeriodicalId":72434,"journal":{"name":"BMJ mental health","volume":"17 1","pages":""},"PeriodicalIF":4.9000,"publicationDate":"2025-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"BMJ mental health","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1136/bmjment-2025-301817","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"0","JCRName":"PSYCHIATRY","Score":null,"Total":0}
引用次数: 0

Abstract

BACKGROUND
Digital phenotyping provides passive monitoring of behavioural health but faces implementation challenges in translating complex multimodal data into actionable clinical insights. Digital navigators, healthcare staff who interpret patient data and relay findings to clinicians, offer a solution, but workforce limitations restrict scalability.

OBJECTIVE
This study provides one of the first systematic evaluations of large language model performance in interpreting simulated psychiatric digital phenotyping data, establishing baseline accuracy metrics for this emerging application.

METHODS
We evaluated GPT-4o and GPT-3.5-turbo across over 153 test cases covering various clinical scenarios, timeframes and data quality levels, using simulated test datasets currently employed in training human digital navigators. Performance was assessed on each model's capacity to identify clinical patterns relative to human digital navigation experts.

FINDINGS
GPT-4o demonstrated 52% accuracy (95% CI 46.5% to 57.6%) in identifying clinical patterns in the standard test cases, significantly outperforming GPT-3.5-turbo (12%, 95% CI 8.4% to 15.6%). Across scenarios, GPT-4o performed strongest on worsening depression (100%) and worsening anxiety (83%) patterns, and weakest on increased home time with improving symptoms (6%). Accuracy declined with decreasing data quality (69% for high-quality data vs 39% for low-quality data) and shorter timeframes (60% for 3-month data vs 43% for 3-week data).

CONCLUSIONS
GPT-4o's 52% accuracy in zero-shot interpretation of psychiatric digital phenotyping data establishes a meaningful baseline, though performance gaps and occasional hallucinations confirm that human oversight of digital navigation tasks remains essential. The significant performance variations across models, data quality levels and clinical scenarios highlight the need for careful implementation.

CLINICAL IMPLICATIONS
Large language models could serve as assistive tools that augment human digital navigators, potentially addressing workforce limitations while maintaining necessary clinical oversight in psychiatric digital phenotyping applications.
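The abstract does not include the study's actual prompts, test cases, or scoring rules, so the sketch below is only a hypothetical illustration of how a zero-shot evaluation of this kind could be assembled: a simulated passive-sensing summary is paired with an expert-assigned pattern label, the model is asked to name the pattern, and accuracy is summarised with a normal-approximation 95% confidence interval. All field names, pattern labels, prompt wording, and the scoring rule are assumptions; the OpenAI Python client is used only as one plausible interface.

```python
"""Minimal sketch of a zero-shot evaluation over simulated digital phenotyping
cases. Hypothetical data fields and labels; not the study's actual protocol.
Requires the `openai` package and an OPENAI_API_KEY in the environment."""
import json
import math
from openai import OpenAI

client = OpenAI()

# One hypothetical simulated test case: weekly passive-sensing and survey
# summaries plus an expert-assigned ground-truth pattern, in the spirit of
# the navigator training datasets described in the abstract.
simulated_cases = [
    {
        "data": {
            "timeframe_weeks": 3,
            "weekly_home_time_hours": [14.2, 16.8, 19.5],  # rising home time
            "weekly_step_count": [48000, 41000, 33000],    # falling activity
            "weekly_sleep_hours": [6.9, 6.1, 5.4],         # shortening sleep
            "weekly_phq9_score": [8, 12, 15],              # worsening survey scores
            "missing_data_fraction": 0.10,                 # data-quality indicator
        },
        "expert_label": "worsening depression",
    },
]

PATTERNS = [
    "worsening depression",
    "worsening anxiety",
    "increased home time with improving symptoms",
    "no clear change",
]

def classify(case: dict) -> str:
    """Ask the model, zero-shot, to name a single clinical pattern."""
    prompt = (
        "You are assisting a digital navigator. Given the simulated digital "
        "phenotyping summary below, answer with exactly one of these patterns: "
        + "; ".join(PATTERNS) + ".\n\n"
        + json.dumps(case["data"], indent=2)
    )
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content.strip().lower()

def wald_ci(correct: int, total: int, z: float = 1.96) -> tuple[float, float]:
    """Normal-approximation 95% CI for an accuracy proportion."""
    p = correct / total
    half_width = z * math.sqrt(p * (1 - p) / total)
    return p - half_width, p + half_width

if __name__ == "__main__":
    # Score a response as correct if it contains the expert label (one simple rule).
    correct = sum(case["expert_label"] in classify(case) for case in simulated_cases)
    total = len(simulated_cases)
    print(f"accuracy: {correct / total:.2f}, 95% CI: {wald_ci(correct, total)}")
```

With a single case this interval collapses to a point; over a full test set of the size reported in the abstract, the same normal-approximation formula yields confidence intervals of roughly the width quoted for GPT-4o and GPT-3.5-turbo.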