Title: Technology in Suicide Prevention: Fears and Functionality for Crisis Supporters
Authors: Danielle Hopkins, Kelly Mazzer, Debra Rickwood
DOI: 10.1155/2024/6625037
Journal: Human Behavior and Emerging Technologies (JCR Q1, Psychology, Multidisciplinary; Impact Factor 4.3)
Publication date: 2024-08-14
Publication type: Journal Article
URL: https://onlinelibrary.wiley.com/doi/10.1155/2024/6625037
PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1155/2024/6625037
Citations: 0
Abstract
Background: Crisis supporters at Lifeline Australia consistently engage with distressed and often suicidal help-seekers. The development of technological methods, such as machine learning (ML), in suicide prevention may complement their support work. Investigating attitudes towards the use of ML in crisis support is an important first step.
Aims: The current study aimed to investigate crisis supporters’ attitudes towards ML in crisis support/suicide prevention, beliefs about the effect of technology on the service and help-seeking, and concerns/opinions about any future technology implementation.
Methods: Two hundred fifty-five crisis supporters aged 20–84 years were recruited through Lifeline Australia. Participants voluntarily completed an anonymous questionnaire, including measures of attitudes towards technology as well as open-text options, which provided the data for a thematic analysis.
Results: Crisis supporters were neutral to negative on an adapted measure of ML use in crisis support. Less than one-third believed that technology would enhance Lifeline services, and over half of the participants felt help-seekers would be less likely to contact Lifeline if technology were implemented. Thematic analysis of the open-text questions revealed loss of human connection and mistrust of algorithms to be the most prominent barriers to future technological adoption by Lifeline crisis supporters.
Limitations: Clearly defining the terms ML and technology was difficult in this hypothetical context, potentially influencing the attitudes expressed.
Conclusions: Any new technology to support crisis supporters needs to be carefully codesigned with the workforce to ensure effective implementation and avoid any potential or perceived negative impacts on help-seekers.
Journal Introduction:
Human Behavior and Emerging Technologies is an interdisciplinary journal dedicated to publishing high-impact research that enhances understanding of the complex interactions between diverse human behavior and emerging digital technologies.