Kasra Ghaharian, Marta Soligo, Richard Young, Lukasz Golab, Shane W Kraus, Samantha Wells
{"title":"大型语言模型能解决赌博问题吗?专家对赌博的见解?赌博治疗专业人士的专家见解。","authors":"Kasra Ghaharian, Marta Soligo, Richard Young, Lukasz Golab, Shane W Kraus, Samantha Wells","doi":"10.1007/s10899-025-10430-x","DOIUrl":null,"url":null,"abstract":"<p><p>Large Language Models (LLMs) have transformed information retrieval for humans. People are increasingly turning to general-purpose LLM-based chatbots to find answers to questions across numerous domains, including advice on sensitive topics such as mental health and addiction. In this study, we present the first inquiry into how LLMs respond to prompts related to problem gambling, specifically exploring how experienced gambling treatment professionals interpret and reflect on these responses. We used the Problem Gambling Severity Index to develop nine prompts related to different aspects of gambling behavior. These prompts were submitted to two LLMs, GPT-4o (via ChatGPT) and Llama 3.1 405b (via Meta AI), and their responses were evaluated via an online survey distributed to human experts (experienced gambling treatment professionals). Twenty-three experts participated, representing over 17,000 hours of problem gambling treatment experience. They provided their own responses to the prompts and selected their preferred (blinded) LLM response, along with contextual feedback, which was used for qualitative analysis. Llama was slightly preferred over GPT, receiving more votes for 7 out of the 9 prompts. Thematic analysis revealed that experts identified strengths and weaknesses in LLM responses, highlighting issues such as encouragement of continued gambling, overly verbose messaging, and language that could be easily misconstrued. 
These findings offer a novel perspective by capturing how experienced gambling treatment professionals perceive LLM responses in the context of problem gambling, providing insights to inform future efforts to align these tools with appropriate guardrails and safety standards for use in gambling harm interventions.</p>","PeriodicalId":48155,"journal":{"name":"Journal of Gambling Studies","volume":" ","pages":""},"PeriodicalIF":2.2000,"publicationDate":"2025-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Can Large Language Models Address Problem Gambling? Expert Insights from Gambling? Expert Insights from Gambling Treatment Professionals.\",\"authors\":\"Kasra Ghaharian, Marta Soligo, Richard Young, Lukasz Golab, Shane W Kraus, Samantha Wells\",\"doi\":\"10.1007/s10899-025-10430-x\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Large Language Models (LLMs) have transformed information retrieval for humans. People are increasingly turning to general-purpose LLM-based chatbots to find answers to questions across numerous domains, including advice on sensitive topics such as mental health and addiction. In this study, we present the first inquiry into how LLMs respond to prompts related to problem gambling, specifically exploring how experienced gambling treatment professionals interpret and reflect on these responses. We used the Problem Gambling Severity Index to develop nine prompts related to different aspects of gambling behavior. These prompts were submitted to two LLMs, GPT-4o (via ChatGPT) and Llama 3.1 405b (via Meta AI), and their responses were evaluated via an online survey distributed to human experts (experienced gambling treatment professionals). Twenty-three experts participated, representing over 17,000 hours of problem gambling treatment experience. 
They provided their own responses to the prompts and selected their preferred (blinded) LLM response, along with contextual feedback, which was used for qualitative analysis. Llama was slightly preferred over GPT, receiving more votes for 7 out of the 9 prompts. Thematic analysis revealed that experts identified strengths and weaknesses in LLM responses, highlighting issues such as encouragement of continued gambling, overly verbose messaging, and language that could be easily misconstrued. These findings offer a novel perspective by capturing how experienced gambling treatment professionals perceive LLM responses in the context of problem gambling, providing insights to inform future efforts to align these tools with appropriate guardrails and safety standards for use in gambling harm interventions.</p>\",\"PeriodicalId\":48155,\"journal\":{\"name\":\"Journal of Gambling Studies\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":2.2000,\"publicationDate\":\"2025-10-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Gambling Studies\",\"FirstCategoryId\":\"102\",\"ListUrlMain\":\"https://doi.org/10.1007/s10899-025-10430-x\",\"RegionNum\":3,\"RegionCategory\":\"心理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"PSYCHOLOGY, MULTIDISCIPLINARY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Gambling Studies","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.1007/s10899-025-10430-x","RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"PSYCHOLOGY, MULTIDISCIPLINARY","Score":null,"Total":0}
Can Large Language Models Address Problem Gambling? Expert Insights from Gambling Treatment Professionals.
Large Language Models (LLMs) have transformed information retrieval for humans. People are increasingly turning to general-purpose LLM-based chatbots to find answers to questions across numerous domains, including advice on sensitive topics such as mental health and addiction. In this study, we present the first inquiry into how LLMs respond to prompts related to problem gambling, specifically exploring how experienced gambling treatment professionals interpret and reflect on these responses. We used the Problem Gambling Severity Index to develop nine prompts related to different aspects of gambling behavior. These prompts were submitted to two LLMs, GPT-4o (via ChatGPT) and Llama 3.1 405B (via Meta AI), and their responses were evaluated via an online survey distributed to human experts (experienced gambling treatment professionals). Twenty-three experts participated, representing over 17,000 hours of problem gambling treatment experience. They provided their own responses to the prompts and selected their preferred (blinded) LLM response, along with contextual feedback, which was used for qualitative analysis. Llama was slightly preferred over GPT, receiving more votes for 7 of the 9 prompts. Thematic analysis revealed that experts identified strengths and weaknesses in LLM responses, highlighting issues such as encouragement of continued gambling, overly verbose messaging, and language that could be easily misconstrued. These findings offer a novel perspective by capturing how experienced gambling treatment professionals perceive LLM responses in the context of problem gambling, providing insights to inform future efforts to align these tools with appropriate guardrails and safety standards for use in gambling harm interventions.
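The blinded preference procedure described above (experts pick a preferred response per prompt without seeing model labels, and per-prompt vote totals decide the winner) can be sketched as follows. This is a minimal illustrative tally, not the study's analysis code; the vote counts below are invented purely to mirror the reported 7-of-9 outcome with 23 voters.

```python
from collections import Counter

def tally_blinded_votes(votes):
    """votes: dict mapping prompt_id -> list of chosen model names
    (recorded after un-blinding). Returns the per-prompt winner,
    or "tie" when the top two models receive equal votes."""
    winners = {}
    for prompt_id, choices in votes.items():
        top = Counter(choices).most_common()
        if len(top) > 1 and top[0][1] == top[1][1]:
            winners[prompt_id] = "tie"
        else:
            winners[prompt_id] = top[0][0]
    return winners

# Illustrative data only (not the study's actual votes): 9 prompts,
# 23 expert votes each, with "llama" preferred on 7 prompts.
example = {i: ["llama"] * 13 + ["gpt"] * 10 for i in range(1, 8)}
example.update({8: ["gpt"] * 12 + ["llama"] * 11,
                9: ["gpt"] * 14 + ["llama"] * 9})

winners = tally_blinded_votes(example)
llama_wins = sum(1 for w in winners.values() if w == "llama")
print(llama_wins)  # 7
```

A design note: tallying per prompt rather than pooling all votes preserves the prompt-level picture the study reports (which kinds of gambling-related queries each model handled better), which a single aggregate preference score would obscure.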
Journal overview:
Journal of Gambling Studies is an interdisciplinary forum for the dissemination of research on the many aspects of gambling behavior, both controlled and pathological, as well as the variety of problems attendant to, or resulting from, gambling behavior, including alcoholism, suicide, crime, and a number of other mental health problems. Articles published in this journal represent a cross-section of disciplines, including psychiatry, psychology, sociology, political science, criminology, and social work.