Title: The Acceptability of AI-Driven Resource Signposting to Young People Using a Mental Health Peer Support App
Authors: Bethany Cliffe, Lucy Biddle, Jessica Gore-Rodney, Myles-Jay Linton
Journal: Digital Society: Ethics, Socio-Legal and Governance of Digital Technology, 4(2), 45
Published: 2025-01-01 (Epub 2025-06-04)
DOI: 10.1007/s44206-025-00202-w
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12137474/pdf/
Citations: 0
Abstract
Incorporating artificial intelligence (AI) into mental health applications (apps) can help to personalise support, for example by signposting topic-specific resources based on content that app users interact with. However, there is limited research exploring the acceptability of AI within digital mental health for young people. The current study explored this in the context of an online peer support platform for young people. Twelve young people aged 16-23 (M = 18.64, SD = 2.23) were interviewed online using a think-aloud approach. Participants identified as White (n = 7), Chinese (n = 1), Mixed Race (n = 1), Indian (n = 1), Black African (n = 1) and Bangladeshi (n = 1). Ten participants identified as women, one as non-binary and one preferred not to say. Participants were users of Tellmi, a pre-moderated mental health peer support app aimed at young people. Participants were given a link, opened in their web browser, to a prototype of the Tellmi app that showed how AI could generate suggestions of pre-defined resources based on the content of fictional posts. They were encouraged to interact with the prototype whilst thinking aloud. Three themes were developed using reflexive thematic analysis: (1) Fear of the unknown: getting to grips with artificial intelligence; (2) AI can help save time and effort by streamlining processes; and (3) The value of human connection, which included the sub-theme: AI isn't human and shouldn't pretend to be.
Supplementary information: The online version contains supplementary material available at 10.1007/s44206-025-00202-w.