Latest articles from Computers in Human Behavior: Artificial Humans

The ethical acceptability of human enhancement technologies: A cross-country Q-study of the perception of insideables
Computers in Human Behavior: Artificial Humans Pub Date: 2024-08-01 DOI: 10.1016/j.chbah.2024.100092
Stéphanie Gauttier, Mario Arias-Oliva, Kiyoshi Murata, Jorge Pelegrín-Borondo
Abstract: This paper aims to identify the ethical considerations driving the acceptance of and resistance to the use of insideable technology for human enhancement purposes, which are crucial to understand for the development of the cyborg technology market and businesses. While the literature privileges quantitative approaches and investigations focused on a single strand of ethical theory or a specific value, this study adopts a qualitative and holistic approach. Based on prior interview data and a literature review, 33 items representing various ethical considerations of interest were identified. A qualitative Q-study was then conducted in which 55 individuals in three different countries expressed their points of view on insideables regarding these items. Four different views are presented, highlighting drivers of acceptance of human enhancement technologies, conditional acceptance, and outright rejection. These views reveal the trade-offs between values made by respondents, shedding light on the ethical bricolage at play. The role of ethical concerns and theories in models to study the acceptance of human enhancement technologies, and their potential business implications, are discussed.
Citations: 0
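Q-methodology groups respondents by correlating their complete Q-sorts before extracting shared viewpoints. As a minimal illustration of that first step, with invented sorts rather than the study's data or its 33 items, the Pearson correlation between two respondents' agreement grids can be computed directly:

```python
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length Q-sorts."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    varx = sum((x - mx) ** 2 for x in xs)
    vary = sum((y - my) ** 2 for y in ys)
    return cov / (varx * vary) ** 0.5

# Hypothetical Q-sorts: two respondents rank the same items on a
# -4..+4 agreement grid (illustrative values only, not the paper's data).
sort_a = [-4, -3, -2, -1, 0, 1, 2, 3, 4]
sort_b = [-3, -4, -1, -2, 1, 0, 3, 2, 4]
print(round(pearson_r(sort_a, sort_b), 2))
```

Highly correlated sorts like these would load on the same factor, i.e. express the same "view" of insideables; the actual study extracted four such views across 55 sorts.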
User engagement, attitudes, and the effectiveness of chatbots as a mental health intervention: A systematic review
Computers in Human Behavior: Artificial Humans Pub Date: 2024-07-02 DOI: 10.1016/j.chbah.2024.100081
Sucharat Limpanopparat, Erin Gibson, Andrew Harris
Background: In recent years, chatbots developed for mental health intervention have been widely implemented to address the workforce shortages and accessibility issues faced by traditional health services. Nevertheless, research assessing these technologies' potential and risks remains sporadic.
Purpose: This review aims to synthesise the existing research on engagement, user attitudes, and the effectiveness of psychological chatbot interventions.
Method: A systematic review was conducted using relevant peer-reviewed literature published since 2010, drawn from six databases: PubMed, PsycINFO, Web of Science, Science Direct, Scopus, and IEEE Xplore.
Results: Engagement with chatbots that complied with digital intervention standards led to positive mental health outcomes. Although users had some uncertainties about the usability of these tools, positive attitudes towards chatbots regarding user experience and acceptability were frequently identified, owing to the chatbots' psychological capabilities and unique functions. High outcome efficacy was found for those with depression. Differences in demographics, psychological approaches, and featured technologies could also influence the performance of mental health chatbots.
Conclusion: Positive attitudes towards and engagement with chatbots, together with positive mental health outcomes, show that chatbot technology is a promising modality for mental health intervention. However, implementing chatbots among some demographics or with novel features should be carefully considered. Further research using mainstream mental health chatbots, evaluated simultaneously with standardised measures of engagement, user attitude, and effectiveness, is necessary for intervention development.
Citations: 0
Social robots are good for me, but better for other people: The presumed allo-enhancement effect of social robot perceptions
Computers in Human Behavior: Artificial Humans Pub Date: 2024-07-02 DOI: 10.1016/j.chbah.2024.100079
Xun Sunny Liu, Jeff Hancock
Abstract: This research proposes and investigates the presumed allo-enhancement effect of social robot perceptions: a tendency for individuals to view social robots as more beneficial for others than for themselves. We discuss this as a systematic bias in the perception of the utility of social robots. Through two survey studies, we test and replicate self-other perceptual differences, obtain effect sizes for these differences, and trace the impact of this presumed allo-enhancement effect on individuals' attitudes and behaviors. Analyses revealed strong perceptual differences, with individuals consistently believing social robots to be more enhancing for others than for themselves (d = −0.69, d = −0.62). These perceptual differences predicted individuals' attitudes and endorsed behaviors towards social robots. By identifying this bias, we offer a new theoretical lens for understanding how people perceive and respond to emergent technologies.
Citations: 0
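The self-other effect sizes reported above (d = −0.69, d = −0.62) are standardized mean differences. A minimal sketch of one common paired variant (dividing the mean self-other difference by the standard deviation of the differences), using invented 1-7 benefit ratings rather than the study's data:

```python
from statistics import mean, stdev

def cohens_d(self_ratings, other_ratings):
    """Paired Cohen's d: mean of the self-other differences divided by
    the sample standard deviation of those differences."""
    diffs = [s - o for s, o in zip(self_ratings, other_ratings)]
    return mean(diffs) / stdev(diffs)

# Hypothetical ratings of how much social robots would benefit oneself
# vs. other people (illustrative values only, not the study's data).
self_r  = [4, 5, 3, 2, 4]
other_r = [4, 5, 5, 4, 5]
d = cohens_d(self_r, other_r)
print(round(d, 2))
```

A negative d, as in the study, indicates that "benefit for self" is rated lower than "benefit for others".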
Do realistic avatars make virtual reality better? Examining human-like avatars for VR social interactions
Computers in Human Behavior: Artificial Humans Pub Date: 2024-07-02 DOI: 10.1016/j.chbah.2024.100082
Alan D. Fraser, Isabella Branson, Ross C. Hollett, Craig P. Speelman, Shane L. Rogers
Citations: 0
“Naughty Japanese Babe:” An analysis of racialized sex tech designs
Computers in Human Behavior: Artificial Humans Pub Date: 2024-07-02 DOI: 10.1016/j.chbah.2024.100080
Kenneth R. Hanson, Chloé Locatelli
Abstract: Recent technological developments and the growing acceptance of sex tech have brought increased scholarly attention to sex tech entrepreneurs, personified sex tech devices and applications, and the adult industry. Drawing on qualitative case studies of a sex doll brothel named “Cybrothel” and the virtual entertainer, or “V-Tuber,” known as Projekt Melody, as well as quantitative sex doll advertisement data, this study examines the racialization of personified sex technologies. Attention to the racialization of personified sex tech is long overdue: much scholarship to date has focused on how sex tech reproduces specific gendered meanings, despite decades of intersectional feminist scholarship demonstrating that gendered and racialized meanings are mutually constituted. General trends in the industry are shown, with particular emphasis on the overrepresentation of Asianized femininity in personified sex tech industries.
Citations: 0
Feasibility assessment of using ChatGPT for training case conceptualization skills in psychological counseling
Computers in Human Behavior: Artificial Humans Pub Date: 2024-07-02 DOI: 10.1016/j.chbah.2024.100083
Lih-Horng Hsieh, Wei-Chou Liao, En-Yu Liu
Abstract: This study investigates the feasibility and effectiveness of using ChatGPT to train case conceptualization skills in psychological counseling. The novelty of this research lies in applying an AI-based model, ChatGPT, to enhance the professional development of prospective counselors, particularly in case conceptualization, a core competence in psychotherapy. Traditional training methods are often limited by time and resources, while ChatGPT offers a scalable and interactive alternative. Through a single-blind assessment, this study explores the accuracy, completeness, feasibility, and consistency of OpenAI's ChatGPT for case conceptualization in psychological counseling. Results show that using ChatGPT to generate case conceptualizations is acceptable in terms of accuracy, completeness, feasibility, and consistency, as evaluated by experts. Counseling educators can therefore encourage trainees to use ChatGPT as an auxiliary method for developing case conceptualization skills during supervision. The social implications of this research are significant, as integrating AI into psychological counseling could address the growing need for mental health services and support. By improving the accuracy and efficiency of case conceptualization, ChatGPT can contribute to better counseling outcomes, potentially reducing the societal burden of mental health issues. Moreover, the use of AI in this context prompts important discussions on ethical considerations and the evolving role of technology in human services. Overall, this study highlights the potential of ChatGPT to serve as a valuable tool in counselor training, ultimately aiming to enhance the quality and accessibility of psychological support services.
Citations: 0
AI in relationship counselling: Evaluating ChatGPT's therapeutic capabilities in providing relationship advice
Computers in Human Behavior: Artificial Humans Pub Date: 2024-06-21 DOI: 10.1016/j.chbah.2024.100078
Laura M. Vowels, Rachel R.R. Francois-Walcott, Joëlle Darwiche
Abstract: Recent advancements in AI have led to chatbots, such as ChatGPT, capable of providing therapeutic responses. Early research evaluating chatbots' ability to provide relationship advice and single-session relationship interventions has shown that both laypeople and relationship therapists rate them highly on attributes such as empathy and helpfulness. In the present study, 20 participants engaged in a single-session relationship intervention with ChatGPT and were interviewed about their experiences. We evaluated ChatGPT's performance on technical outcomes, such as error rate and linguistic accuracy, and on therapeutic qualities, such as empathy and therapeutic questioning. The interviews were analysed using reflexive thematic analysis, which generated four themes: light at the end of the tunnel; clearing the fog; clinical skills; and therapeutic setting. The analyses of technical and feasibility outcomes, as coded by researchers and perceived by users, show that ChatGPT provides a realistic single-session intervention: it was consistently rated highly on attributes such as therapeutic skills, human-likeness, exploration, and usability, and provided clarity and next steps for users' relationship problems. Limitations include poor assessment of risk and difficulty reaching collaborative solutions with participants. This study extends AI acceptance theories and highlights the potential capabilities of ChatGPT in providing relationship advice and support.
Citations: 0
The gendered nature of AI: Men and masculinities through the lens of ChatGPT and GPT4
Computers in Human Behavior: Artificial Humans Pub Date: 2024-06-21 DOI: 10.1016/j.chbah.2024.100076
Andreas Walther, Flora Logoz, Lukas Eggenberger
Abstract: Because AI-powered language models such as the GPT series have almost certainly come to stay, and will permanently change the way individuals all over the world access information and form opinions, there is a need to highlight potential risks for the understanding and perception of men and masculinities. It is important to understand whether ChatGPT or its successors, such as GPT4, are biased, and if so, in which direction and to what degree. In the specific research field of men and masculinities, it seems paramount to understand the grounds upon which these language models respond to seemingly simple questions such as "What is a man?" or "What is masculine?". We present interactions with ChatGPT and GPT4 in which we asked such questions, in an effort to better understand the quality and potential biases of their answers. We then critically reflect on the output of ChatGPT, compare it to the output of GPT4, and draw conclusions for future actions.
Citations: 0
Exploring people's perceptions of LLM-generated advice
Computers in Human Behavior: Artificial Humans Pub Date: 2024-06-07 DOI: 10.1016/j.chbah.2024.100072
Joel Wester, Sander de Jong, Henning Pohl, Niels van Berkel
Abstract: When searching and browsing the web, more and more of the information we encounter is generated or mediated by large language models (LLMs), whether we are looking for a recipe, getting help on an essay, or seeking relationship advice. Yet there is limited understanding of how individuals perceive advice provided by these LLMs. In this paper, we explore people's perceptions of LLM-generated advice and the role that user characteristics (i.e., personality and technology readiness) play in shaping those perceptions. Further, as LLM-generated advice can be difficult to distinguish from human advice, we assess the perceived creepiness of such advice. In an exploratory study (N = 91), participants rated advice written in different styles (generated by GPT-3.5 Turbo). Notably, our findings suggest that individuals who identify as more agreeable tend to like the advice more and find it more useful. Individuals with higher technological insecurity are more likely to follow the advice, find it more useful, and deem it more likely that a friend could have given it. Lastly, advice given in a 'skeptical' style was rated most unpredictable, and advice given in a 'whimsical' style was rated least malicious, indicating that LLM advice styles influence user perceptions. Our results also provide an overview of people's considerations of the likelihood, receptiveness, and type of advice they are likely to seek from these digital assistants. Based on our results, we provide design takeaways for LLM-generated advice and outline future research directions to further inform the design of LLM-generated advice for support applications targeting people with diverse expectations and needs.
Citations: 0
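The study above had GPT-3.5 Turbo produce advice in distinct styles (e.g. 'skeptical', 'whimsical'). The paper's actual prompts are not reproduced here; the following is a hypothetical sketch of how style-conditioned chat prompts might be assembled, with the instruction wording invented for illustration:

```python
# Hypothetical style instructions; only the style labels 'skeptical'
# and 'whimsical' appear in the abstract, the wording here is invented.
STYLES = {
    "skeptical": "Give cautious advice, questioning the asker's assumptions.",
    "whimsical": "Give playful, light-hearted advice.",
    "neutral": "Give plain, matter-of-fact advice.",
}

def build_advice_prompt(question, style):
    """Return a chat-style message list that conditions a model on a style."""
    if style not in STYLES:
        raise ValueError(f"unknown style: {style}")
    return [
        {"role": "system", "content": STYLES[style]},
        {"role": "user", "content": question},
    ]

msgs = build_advice_prompt(
    "How do I handle a disagreement with my partner?", "skeptical")
print(msgs[0]["content"])
```

Holding the user question fixed while varying only the system instruction is one way such a study could isolate the effect of advice style on perception.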
Are chatbots the new relationship experts? Insights from three studies
Computers in Human Behavior: Artificial Humans Pub Date: 2024-06-07 DOI: 10.1016/j.chbah.2024.100077
Laura M. Vowels
Abstract: Relationship distress is among the most important predictors of individual distress. Over one in three couples report relationship distress, yet couples rarely seek help from couple therapists, preferring instead to find information and advice online. Recent breakthroughs in the development of humanlike, artificial intelligence-powered chatbots such as ChatGPT have made it possible to develop chatbots that respond therapeutically. Early research suggests that they outperform physicians in helpfulness and empathy when answering health-related questions. However, we do not yet know how well chatbots respond to questions about relationships. Across three studies, we evaluated the performance of chatbots in responding to relationship-related questions and in engaging in single-session relationship therapy. In Studies 1 and 2, we demonstrated that chatbots are perceived as more helpful and empathic than relationship experts, and in Study 3, we showed that relationship therapists rate single sessions with a chatbot highly on attributes such as empathy, active listening, and exploration. Limitations include repetitive responding and inadequate assessment of risk. The findings show the potential of using chatbots in relationship support and highlight the limitations that need to be addressed before they can be safely adopted for interventions.
Citations: 0