{"title":"Understanding young adults’ attitudes towards using AI chatbots for psychotherapy: The role of self-stigma","authors":"Benjamin David Hoffman, Michelle Leanne Oppert, Mikaela Owen","doi":"10.1016/j.chbah.2024.100086","DOIUrl":"10.1016/j.chbah.2024.100086","url":null,"abstract":"<div><p>Mental disorders impact a large proportion of individuals worldwide, with young adults being particularly susceptible to poor mental health. Past research shows that help-seeking self-stigma plays a vital role in deterring help-seeking among young adults; however, this relationship has primarily been examined in the context of human-delivered psychotherapy. The present study aimed to understand how young adults’ perceptions of help-seeking self-stigma associated with different modes of psychotherapy, specifically human-delivered and artificial intelligence (AI)-delivered, influence attitudes towards using AI chatbots for psychotherapy. This study employed a cross-sectional survey design to measure perceived help-seeking self-stigma and attitudes towards both human- and AI-delivered psychotherapy. The results demonstrated that high help-seeking self-stigma associated with human-delivered psychotherapy was linked to more negative attitudes towards human-delivered psychotherapy but more positive attitudes towards AI-delivered psychotherapy. Moreover, high help-seeking self-stigma associated with AI-delivered psychotherapy was linked to more negative attitudes towards AI-delivered psychotherapy but more positive attitudes towards human-delivered psychotherapy. These findings have important real-world implications for future clinical practice and mental health service delivery. The results indicate that young adults who are reluctant to engage with human-delivered psychotherapy due to help-seeking self-stigma may be more inclined to seek help through alternative modes of psychotherapy, such as AI chatbots. Limitations and future directions are discussed.</p></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 2","pages":"Article 100086"},"PeriodicalIF":0.0,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S294988212400046X/pdfft?md5=7105a13b93ecb735c5d2187838096a15&pid=1-s2.0-S294988212400046X-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141848422","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Perils, power and promises: Latent profile analysis on the attitudes towards artificial intelligence (AI) among middle-aged and older adults in Hong Kong","authors":"Ngai-Yin Eric Shum, Hi-Po Bobo Lau","doi":"10.1016/j.chbah.2024.100091","DOIUrl":"10.1016/j.chbah.2024.100091","url":null,"abstract":"<div><p>With the increasing influence of artificial intelligence (AI) on various aspects of society, understanding public attitudes towards AI becomes crucial. This study investigated attitudes towards AI among Hong Kong middle-aged and older adults. In June 2023, an online survey was conducted among a sample of 740 smartphone users aged 45 years or older (Max = 78) in Hong Kong. Using exploratory factor analysis, we found three factors from the General Attitude to Artificial Intelligence Scale (GAAIS) - Perils, Power, and Promises. Subsequently, with latent profile analysis we revealed three latent profiles: (i) Enthusiasts (18.4%; high on Promises and Power but low on Perils); (ii) Skeptics (12.3%; high on Perils but low on Promises and Power), and (iii) Indecisive (69.3%; moderate on all three factors). The Enthusiasts were more likely to be male, with higher socio-economic status, better self-rated health, and greater mobile device proficiency, optimism, innovativeness, but also less insecurity with technology, compared to the Indecisive, and then to the Skeptics. Our findings suggest that most middle-aged and older adults in Hong Kong hold an ambivalent view towards AI, appreciating its power and potentials while also cognizant of the perils it may entail. Our findings are timely considering the recent debates on ethical use of AI evoked by smart phone applications such as ChatGPT and will be valuable for practitioners and scholars for developing inclusive AI-facilitated services and applications.</p></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 2","pages":"Article 100091"},"PeriodicalIF":0.0,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949882124000513/pdfft?md5=4615a367816801203b2516b1fae73372&pid=1-s2.0-S2949882124000513-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142136503","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The ethical acceptability of human enhancement technologies: A cross-country Q-study of the perception of insideables","authors":"Stéphanie Gauttier , Mario Arias-Oliva , Kiyoshi Murata , Jorge Pelegrín-Borondo","doi":"10.1016/j.chbah.2024.100092","DOIUrl":"10.1016/j.chbah.2024.100092","url":null,"abstract":"<div><div>This paper aims to identify the ethical considerations driving the acceptance of and resistance to the use of insideable technology for human enhancement purposes, which are crucial to understand for the development of the cyborg technology market and businesses. While the literature privileges quantitative approaches, investigations focused on a strand of ethical theory or a specific value, this study adopts a qualitative and holistic approach. Based on prior interview data and a literature review, 33 items representing various ethical considerations of interest are identified. A qualitative Q-study was conducted, in which 55 individuals in three different countries expressed their points of view on insideables regarding these items. Hence, four different views are presented, highlighting drivers of acceptance of human enhancement technologies, conditional acceptance, and mere rejection. These views reveal the trade-offs between values made by respondents, shedding light on the ethical bricolage at play. The role of ethical concerns and theories in models to study the acceptance of human enhancement technologies and their potential business implications are discussed.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 2","pages":"Article 100092"},"PeriodicalIF":0.0,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142318424","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"User engagement, attitudes, and the effectiveness of chatbots as a mental health intervention: A systematic review","authors":"Sucharat Limpanopparat, Erin Gibson, Dr Andrew Harris","doi":"10.1016/j.chbah.2024.100081","DOIUrl":"https://doi.org/10.1016/j.chbah.2024.100081","url":null,"abstract":"<div><h3>Background</h3><p>In recent years, chatbots developed for mental health intervention purposes have been widely implemented to solve the challenges of workforce shortage and accessibility issues faced by traditional health services. Nevertheless, research assessing the technologies’ potential and risks remains sporadic.</p></div><div><h3>Purpose</h3><p>This review aims to synthesise the existing research on engagement, user attitude, and effectiveness of psychological chatbot interventions.</p></div><div><h3>Method</h3><p>A systematic review was conducted using relevant peer-reviewed literature since 2010. These studies were derived from six databases: PubMed<em>, PsycINFO</em>, <em>Web of</em> <em>Science</em>, <em>Science Direct, Scopus</em> and <em>IEEE Xplore</em>.</p></div><div><h3>Results</h3><p>Engagement level with chatbots that complied with digital intervention standards, lead to positive mental health outcomes. Although users had some uncertainties about the usability of these tools, positive attitudes towards chatbots regarding user experience and acceptability were frequently identified due to the chatbots' psychological capabilities and unique functions. High levels of outcome efficacy were found for those with depression. The differences in demographics, psychological approaches, and featured technologies could also influence the extent of mental health chatbot performances.</p></div><div><h3>Conclusion</h3><p><em>P</em>ositive attitudes and engagement with chatbots, as well as positive mental health outcomes, shows chatbot technology is a promising modality for mental health intervention. However, implementing them amongst some demographics or with novel features should be carefully considered. Further research using mainstream mental health chatbots and evaluating them simultaneously with standardised measures of engagement, user attitude, and effectiveness is necessary for intervention development.</p></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 2","pages":"Article 100081"},"PeriodicalIF":0.0,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949882124000410/pdfft?md5=28fa4639b941c7cab725c225999b1bd0&pid=1-s2.0-S2949882124000410-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141543758","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Social robots are good for me, but better for other people:The presumed allo-enhancement effect of social robot perceptions","authors":"Xun Sunny Liu , Jeff Hancock","doi":"10.1016/j.chbah.2024.100079","DOIUrl":"https://doi.org/10.1016/j.chbah.2024.100079","url":null,"abstract":"<div><p>This research proposes and investigates <em>the presumed allo-enhancement effect of social robot perceptions</em>, a tendency for individuals to view social robots as more beneficial for others than for themselves. We discuss this as a systematic bias in the perception of the utility of social robots. Through two survey studies, we test and replicate self-other perceptual differences, obtain effect sizes of these perceptual differences, and trace the impact of this presumed allo-enhancement effect on individuals' attitudes and behaviors. Analyses revealed strong perceptual differences, where individuals consistently believed social robots to be more enhancing for others than for themselves (<em>d</em> = −0.69, <em>d</em> = −0.62). These perceptual differences predicted individuals’ attitudes and endorsed behaviors towards social robots. By identifying this bias, we offer a new theoretical lens for understanding how people perceive and respond to emergent technologies.</p></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 2","pages":"Article 100079"},"PeriodicalIF":0.0,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949882124000392/pdfft?md5=192859a1c7d543cc91e3db4bc01c149c&pid=1-s2.0-S2949882124000392-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141582999","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Do realistic avatars make virtual reality better? Examining human-like avatars for VR social interactions","authors":"Alan D. Fraser, Isabella Branson, Ross C. Hollett, Craig P. Speelman, Shane L. Rogers","doi":"10.1016/j.chbah.2024.100082","DOIUrl":"https://doi.org/10.1016/j.chbah.2024.100082","url":null,"abstract":"","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 2","pages":"Article 100082"},"PeriodicalIF":0.0,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949882124000422/pdfft?md5=1eeb2a30b6d620464af52d1066c159d7&pid=1-s2.0-S2949882124000422-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141541427","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"“Naughty Japanese Babe:” An analysis of racialized sex tech designs","authors":"Kenneth R. Hanson , Chloé Locatelli PhD","doi":"10.1016/j.chbah.2024.100080","DOIUrl":"https://doi.org/10.1016/j.chbah.2024.100080","url":null,"abstract":"<div><p>Recent technological developments and growing acceptance of sex tech has brought increased scholarly attention to sex tech entrepreneurs, personified sex tech devices and applications, and the adult industry. Drawing on qualitative case studies of a sex doll brothel named “Cybrothel” and the virtual entertainer, or “V-Tuber,” known as Projekt Melody, as well as quantitative sex doll advertisement data, this study examines the racialization of personified sex technologies. Bringing attention to the racialization of personified sex tech is long overdue, as much scholarship to date has focused on how sex tech reproduces specific gendered meanings, despite decades of intersectional feminist scholarship demonstrating that gendered and racialized meanings are mutually constituted. General trends in the industry are shown, but particular emphasis is placed on the overrepresentation of Asianized femininity among personified sex tech industries.</p></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 2","pages":"Article 100080"},"PeriodicalIF":0.0,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949882124000409/pdfft?md5=ed1675bc2b43859a5c660ea84708964a&pid=1-s2.0-S2949882124000409-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141541428","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Feasibility assessment of using ChatGPT for training case conceptualization skills in psychological counseling","authors":"Lih-Horng Hsieh , Wei-Chou Liao , En-Yu Liu","doi":"10.1016/j.chbah.2024.100083","DOIUrl":"https://doi.org/10.1016/j.chbah.2024.100083","url":null,"abstract":"<div><p>This study investigates the feasibility and effectiveness of using ChatGPT for training case conceptualization skills in psychological counseling. The novelty of this research lies in the application of an AI-based model, ChatGPT, to enhance the professional development of prospective counselors, particularly in the realm of case conceptualization—a core competence in psychotherapy. Traditional training methods are often limited by time and resources, while ChatGPT offers a scalable and interactive alternative. Through a single-blind assessment, this study explores the accuracy, completeness, feasibility, and consistency of OpenAI's ChatGPT for case conceptualization in psychological counseling. Results show that using ChatGPT for generating case conceptualization is acceptable in terms of accuracy, completeness, feasibility, and consistency, as evaluated by experts. Therefore, counseling educators can encourage trainees to use ChatGPT as auxiliary methods for developing case conceptualization skills during supervision processes. The social implications of this research are significant, as the integration of AI in psychological counseling could address the growing need for mental health services and support. By improving the accuracy and efficiency of case conceptualization, ChatGPT can contribute to better counseling outcomes, potentially reducing the societal burden of mental health issues. Moreover, the use of AI in this context prompts important discussions on ethical considerations and the evolving role of technology in human services. Overall, this study highlights the potential of ChatGPT to serve as a valuable tool in counselor training, ultimately aiming to enhance the quality and accessibility of psychological support services.</p></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 2","pages":"Article 100083"},"PeriodicalIF":0.0,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949882124000434/pdfft?md5=10d95ea221c1a752e8cf6ff0aab8ba5e&pid=1-s2.0-S2949882124000434-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141541429","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"AI in relationship counselling: Evaluating ChatGPT's therapeutic capabilities in providing relationship advice","authors":"Laura M. Vowels , Rachel R.R. Francois-Walcott , Joëlle Darwiche","doi":"10.1016/j.chbah.2024.100078","DOIUrl":"https://doi.org/10.1016/j.chbah.2024.100078","url":null,"abstract":"<div><p>Recent advancements in AI have led to chatbots, such as ChatGPT, capable of providing therapeutic responses. Early research evaluating chatbots' ability to provide relationship advice and single-session relationship interventions has showed that both laypeople and relationship therapists rate them high on attributed such as empathy and helpfulness. In the present study, 20 participants engaged in single-session relationship intervention with ChatGPT and were interviewed about their experiences. We evaluated the performance of ChatGPT comprising of technical outcomes such as error rate and linguistic accuracy and therapeutic quality such as empathy and therapeutic questioning. The interviews were analysed using reflexive thematic analysis which generated four themes: light at the end of the tunnel; clearing the fog; clinical skills; and therapeutic setting. The analyses of technical and feasibility outcomes, as coded by researchers and perceived by users, show ChatGPT provides realistic single-session intervention with it consistently rated highly on attributes such as therapeutic skills, human-likeness, exploration, and useability, and providing clarity and next steps for users’ relationship problem. Limitations include a poor assessment of risk and reaching collaborative solutions with the participant. This study extends on AI acceptance theories and highlights the potential capabilities of ChatGPT in providing relationship advice and support.</p></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 2","pages":"Article 100078"},"PeriodicalIF":0.0,"publicationDate":"2024-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949882124000380/pdfft?md5=d4b9aa843c4d16b685ded5378e52197c&pid=1-s2.0-S2949882124000380-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141444216","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The gendered nature of AI: Men and masculinities through the lens of ChatGPT and GPT4","authors":"Andreas Walther , Flora Logoz , Lukas Eggenberger","doi":"10.1016/j.chbah.2024.100076","DOIUrl":"https://doi.org/10.1016/j.chbah.2024.100076","url":null,"abstract":"<div><p>Because artificial intelligence powered language models such as the GPT series have most certainly come to stay and will permanently change the way individuals all over the world access information and form opinions, there is a need to highlight potential risks for the understanding and perception of men and masculinities. It is important to understand whether ChatGPT or its following versions such as GPT4 are biased – and if so, in which direction and to which degree. In the specific research field on men and masculinities, it seems paramount to understand the grounds upon which these language models respond to seemingly simple questions such as “What is a man?” or “What is masculine?”. In the following, we provide interactions with ChatGPT and GPT4 where we asked such questions, in an effort to better understand the quality and potential biases of the answers from ChatGPT and GPT4. We then critically reflect on the output by ChatGPT, compare it to the output by GPT4 and draw conclusions for future actions.</p></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 2","pages":"Article 100076"},"PeriodicalIF":0.0,"publicationDate":"2024-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949882124000367/pdfft?md5=00f26a01ff331a51e5085db5eba8195a&pid=1-s2.0-S2949882124000367-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141486735","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}