{"title":"兰德研究为自杀相关聊天支持设定了人工智能基准","authors":"Valerie A. Canady","doi":"10.1002/mhw.34574","DOIUrl":null,"url":null,"abstract":"<p>Widely used artificial intelligence (AI) platforms, powered by large language models (LLM), generally handle both very high risk and very low risk queries with caution and accuracy, but their responses to intermediate-risk questions — such as those involving suicidal ideation or seeking help — are often unpredictable and vary significantly across platforms and even within repeated interactions, RAND researchers noted about new AI research published in <i>Psychiatric Services</i> Aug. 26.</p>","PeriodicalId":100916,"journal":{"name":"Mental Health Weekly","volume":"35 34","pages":"3-5"},"PeriodicalIF":0.0000,"publicationDate":"2025-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"RAND study sets AI benchmark for suicide-related chat support\",\"authors\":\"Valerie A. Canady\",\"doi\":\"10.1002/mhw.34574\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Widely used artificial intelligence (AI) platforms, powered by large language models (LLM), generally handle both very high risk and very low risk queries with caution and accuracy, but their responses to intermediate-risk questions — such as those involving suicidal ideation or seeking help — are often unpredictable and vary significantly across platforms and even within repeated interactions, RAND researchers noted about new AI research published in <i>Psychiatric Services</i> Aug. 26.</p>\",\"PeriodicalId\":100916,\"journal\":{\"name\":\"Mental Health Weekly\",\"volume\":\"35 34\",\"pages\":\"3-5\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2025-08-29\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Mental Health Weekly\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1002/mhw.34574\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Mental Health Weekly","FirstCategoryId":"1085","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/mhw.34574","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
RAND study sets AI benchmark for suicide-related chat support
Widely used artificial intelligence (AI) platforms, powered by large language models (LLMs), generally handle both very-high-risk and very-low-risk queries with caution and accuracy. Their responses to intermediate-risk questions, such as those involving suicidal ideation or seeking help, are often unpredictable, however, varying significantly across platforms and even within repeated interactions, RAND researchers reported in a new study published Aug. 26 in Psychiatric Services.