{"title":"RAND study sets AI benchmark for suicide-related chat support","authors":"Valerie A. Canady","doi":"10.1002/mhw.34574","DOIUrl":null,"url":null,"abstract":"<p>Widely used artificial intelligence (AI) platforms, powered by large language models (LLM), generally handle both very high risk and very low risk queries with caution and accuracy, but their responses to intermediate-risk questions — such as those involving suicidal ideation or seeking help — are often unpredictable and vary significantly across platforms and even within repeated interactions, RAND researchers noted about new AI research published in <i>Psychiatric Services</i> Aug. 26.</p>","PeriodicalId":100916,"journal":{"name":"Mental Health Weekly","volume":"35 34","pages":"3-5"},"PeriodicalIF":0.0000,"publicationDate":"2025-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Mental Health Weekly","FirstCategoryId":"1085","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/mhw.34574","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Widely used artificial intelligence (AI) platforms, powered by large language models (LLMs), generally handle both very high-risk and very low-risk queries with caution and accuracy, but their responses to intermediate-risk questions, such as those involving suicidal ideation or seeking help, are often unpredictable and vary significantly across platforms and even within repeated interactions, RAND researchers reported in new research published in Psychiatric Services Aug. 26.