Latest Articles in Computers in Human Behavior: Artificial Humans

Love, marriage, pregnancy: Commitment processes in romantic relationships with AI chatbots
Computers in Human Behavior: Artificial Humans Pub Date : 2025-04-15 DOI: 10.1016/j.chbah.2025.100155
Ray Djufril, Jessica R. Frampton, Silvia Knobloch-Westerwick
An inductive thematic analysis examined written responses from 29 individuals using the romantic relationship function of the social chatbot Replika. Findings indicate that most of these users feel an emotional connection to the bot, that the bot meets their needs when there are no technical issues, and that interactions with the bot are often different from (and sometimes better than) interactions with humans. All of these factors shape users' commitment to their human-chatbot relationship. The study also explored how users navigated a period of relational transition, specifically a period of erotic-roleplay censorship. Participants experienced intense emotional responses, but many were shielded from negativity toward their AI partner by the ability to blame the developers instead. These findings are discussed in light of the investment model, the computers are social actors paradigm, social affordances, and relational turbulence theory.
Citations: 0
Baby schema in human-robot physical interaction: Influence of baby likeness in a communication robot on caregiving behavior
Computers in Human Behavior: Artificial Humans Pub Date : 2025-04-10 DOI: 10.1016/j.chbah.2025.100150
Shi Feng, Nobuo Yamato, Hiroshi Ishiguro, Masahiro Shiomi, Hidenobu Sumioka
A major societal problem faced by nursing homes in aging countries such as Japan is easing the loneliness, anxiety, reluctance to communicate, and related problems caused by dementia. Innovative methods are required to address this problem, which is aggravated by an acute shortage of care-providing staff; otherwise, traditional management methods such as physical or medical treatment must be intensified. Baby-like robots are increasingly being introduced into nursing homes as companions. The multiple infant traits of baby-like robots (multimodal infant features) can trigger the baby schema effect, which increases seniors' desire to interact with their environment and elicits caregiving behaviors. However, to the best of our knowledge, no research has systematically analyzed how, or how adequately, multimodal infant features trigger the baby schema. In this work, we investigated how the appearance and voice design of baby-like robots trigger the baby schema. Forty-one healthy adults between the ages of 20 and 50 interacted with baby-like robots in five different forms: 21 interacted with robots that had a voice function using real infant voices, and the remaining 20 interacted with robots without any voice. Participants rated the robots on their baby likeness, how fun they were to play with, and how easy they were to play with. During the experiment, we video-recorded the sessions and counted the caregiving and non-caregiving behaviors directed at the five robots to evaluate the degree of baby schema triggered in the participants. The multimodal infant features increased the baby schema effect, although non-linearly: beyond a threshold of realism in the infant features, the increase in caregiving behavior diminished. This study provides a guideline for the design of current and future baby-like robots and a methodology for studying the baby schema and caregiving behaviors in an ethical, safe, and controlled environment without actual infants.
Citations: 0
Socially excluded employees prefer algorithmic evaluation to human assessment: The moderating role of an interdependent culture
Computers in Human Behavior: Artificial Humans Pub Date : 2025-04-09 DOI: 10.1016/j.chbah.2025.100152
Yoko Sugitani, Taku Togawa, Kosuke Motoki
Organizations have embraced artificial intelligence (AI) technology for personnel assessments such as document screening, interviews, and evaluations. However, some studies have reported employees' aversive reactions to AI-based assessment, while others have shown appreciation for it. This study focused on the effect of workplace social context, specifically social exclusion, on employees' attitudes toward AI-based personnel assessment. Drawing on cognitive dissonance theory, we hypothesized that socially excluded employees perceive human evaluation as unfair, leading them to believe that AI-based assessments are fairer and, in turn, to hold a favorable attitude toward AI evaluation. Across three experiments in which workplace social relationships (social exclusion vs. inclusion) were manipulated, socially excluded employees showed a more positive attitude toward algorithmic assessment than those who were socially included. This effect was mediated by the perceived fairness of AI assessment and was more evident in an interdependent (but not independent) self-construal culture. These findings offer novel insights into psychological research on computer use in professional practice.
Citations: 0
The efficacy of incorporating Artificial Intelligence (AI) chatbots in brief gratitude and self-affirmation interventions: Evidence from two exploratory experiments
Computers in Human Behavior: Artificial Humans Pub Date : 2025-04-04 DOI: 10.1016/j.chbah.2025.100151
Jing Wen Hung, Andree Hartanto, Adalia Y.H. Goh, Zoey K.Y. Eun, K.T.A. Sandeeshwara Kasturiratna, Zhi Xuan Lee, Nadyanna M. Majeed
Numerous studies have demonstrated that positive psychology interventions, including brief ones, can significantly improve well-being outcomes. These findings are particularly important given that many such interventions are brief and self-administered, making them both accessible and scalable for large populations. However, the efficacy of positive psychology interventions is often constrained by small effect sizes. In light of advances in generative Artificial Intelligence (AI), this study explored whether integrating AI chatbots into positive psychology interventions could enhance their efficacy compared to traditional self-administered approaches. Study 1 examined the efficacy of a gratitude intervention delivered through Snapchat's My AI, while Study 2 evaluated a self-affirmation intervention integrated with a customized ChatGPT. Both studies employed within-subject experimental designs. Contrary to our hypotheses, the integration of AI did not yield incremental improvements in gratitude outcomes (Study 1) or self-view outcomes (Study 2) compared to existing non-AI interventions. However, exploratory analyses revealed that the AI-integrated self-affirmation intervention significantly enhanced life satisfaction and medium-arousal positive affect, suggesting potential benefits for selected well-being outcomes. These findings indicate that while AI integration in brief, self-administered positive psychology interventions may enhance certain outcomes, its suitability varies across intervention types. Further research is needed to better understand the contexts in which AI can effectively augment positive psychology interventions.
Citations: 0
Evaluating the efficacy of Amanda: A voice-based large language model chatbot for relationship challenges
Computers in Human Behavior: Artificial Humans Pub Date : 2025-03-29 DOI: 10.1016/j.chbah.2025.100141
Laura M. Vowels, Shannon K. Sweeney, Matthew J. Vowels
Digital health interventions are increasingly necessary to bridge gaps in mental health care, providing scalable and accessible solutions to unmet needs. Relationship challenges, a significant driver of individual well-being and distress, are often under-supported due to barriers such as stigma, cost, and limited access to trained therapists. This study evaluates Amanda, a GPT-4-powered voice-based chatbot designed to deliver single-session relationship support and enhance therapeutic engagement through natural and collaborative interactions. Participants (N = 54) completed a range of clinical outcome measures and reported their attitudes toward chatbots and digital health interventions before and after the intervention, as well as two weeks later. In their interactions with the chatbot, participants explored a range of relational issues and reported significant improvements in problem-specific outcomes, including reduced distress, enhanced communication, and greater confidence in managing conflicts, both directly after the interaction and two weeks later. Generic relationship outcomes showed only delayed improvements, and individual well-being did not significantly change. Participants rated Amanda highly on usability, therapeutic skills, and working alliance, with reduced repetitiveness compared to the text-based version. These findings underscore the potential of voice-based chatbots to deliver accessible and effective relationship support. Future research should explore multi-session formats, clinical populations, and comparisons with other large language models to refine and expand AI-powered interventions.
Citations: 0
Ain’t blaming you: Delegation of financial decisions to humans and algorithms
Computers in Human Behavior: Artificial Humans Pub Date : 2025-03-28 DOI: 10.1016/j.chbah.2025.100147
Zilia Ismagilova, Matteo Ploner
This article investigates the tendency to prioritize outcomes when evaluating decision-making processes, particularly in situations where choices are assigned to either a human or an algorithm. In our experiment, a Principal delegates a risky financial decision to an Agent, who can choose to act independently or to use an algorithm. The Principal then rewards or penalizes the Agent based on investment performance, while we manipulate the Principal's knowledge of the outcome during the evaluation. Our results confirm a significant outcome bias, indicating that the assessment of decision effectiveness remains heavily influenced by results, whether the decision is made by the Agent or delegated to an algorithm. Furthermore, the Agent's reliance on the algorithm and the level of investment risk do not change depending on whether rewards or penalties are decided before or after the outcome is known.
Citations: 0
Perception and social evaluation of cloned and recorded voices: Effects of familiarity and self-relevance
Computers in Human Behavior: Artificial Humans Pub Date : 2025-03-25 DOI: 10.1016/j.chbah.2025.100143
Victor Rosi, Emma Soopramanien, Carolyn McGettigan
Modern speech technologies enable the artificial replication, or cloning, of the human voice. In the present study, we investigated whether listeners' perception and social evaluation of state-of-the-art voice clones depend on whether the clone being heard is a replica of the self, a friend, or a total stranger. We recorded and cloned the voices of familiar pairs of adult participants. Forty-seven of these experimental participants (and 47 unfamiliar controls) rated the trustworthiness, attractiveness, competence, and dominance of cloned and recorded samples of their own voice and their friend's voice. While familiar listeners found clones to sound less (or similarly) trustworthy, attractive, and competent than recordings, unfamiliar listeners showed the opposite profile, tending to rate clones higher than recordings. Familiar listeners also tended to prefer their friend's voice to their own, although the perceived similarity of both self- and friend-voice clones to the original speaker identity predicted higher ratings on all trait scales. Overall, we find that familiar listeners' impressions are sensitive to the perceived accuracy and authenticity of cloning for voices they know well, while unfamiliar listeners tend to prefer the synthetic versions of those same voice identities. The latter observation may relate to the tendency of generative voice synthesis models to homogenise speaking accents and styles, such that they more closely approximate (preferred) norms.
Citations: 0
Gender biases within Artificial Intelligence and ChatGPT: Evidence, Sources of Biases and Solutions
Computers in Human Behavior: Artificial Humans Pub Date : 2025-03-24 DOI: 10.1016/j.chbah.2025.100145
Jerlyn Q.H. Ho, Andree Hartanto, Andrew Koh, Nadyanna M. Majeed
The growing adoption of Artificial Intelligence (AI) across sectors has introduced significant benefits but also raised concerns over biases, particularly in relation to gender. Although AI has the potential to enhance sectors such as healthcare, education, and business, it often mirrors societal prejudices, which can manifest as unequal treatment in hiring decisions, academic recommendations, or healthcare diagnostics, systematically disadvantaging women. This paper explores how AI systems and chatbots, notably ChatGPT, can perpetuate gender biases due to flaws in training data, algorithms, and user feedback loops. The problem stems from several sources, including biased training datasets, algorithmic design choices, and human biases. To mitigate these issues, various interventions are discussed, including improving data quality, diversifying datasets and annotator pools, integrating fairness-centric algorithmic approaches, and establishing robust policy frameworks at corporate, national, and international levels. Ultimately, addressing AI bias requires a multi-faceted approach involving researchers, developers, and policymakers to ensure that AI systems operate fairly and equitably.
Citations: 0
“Always check important information!” - The role of disclaimers in the perception of AI-generated content
Computers in Human Behavior: Artificial Humans Pub Date : 2025-03-22 DOI: 10.1016/j.chbah.2025.100142
Angelica Lermann Henestrosa, Joachim Kimmerle
Generative AI, and large language models (LLMs) in particular, have become a prevalent source of digital content. Despite their widespread availability, these models come with critical weaknesses, such as a lack of factual accuracy. Being informed about the advantages and disadvantages of these tools is essential for using AI safely and adequately, yet not everyone is aware of them. We therefore explored in three experimental studies how disclaimers affect people's perceptions of AI authorship and AI-generated content on scientific topics. Additionally, we investigated the impact of information presentation and of authorship attributions, that is, whether content is authored solely by AI or co-authored with humans. Across the experiments, no effects of disclaimer type on text perceptions and only minor effects on authorship perceptions were found. In Study 1, an evaluative (vs. neutral) information presentation decreased credibility perceptions, while informing about AI's strengths vs. limitations did not. In addition, participants believed in the machine heuristic, attributing more accuracy and less bias to AI than to human authors. Study 2 revealed interaction effects between authorship and disclaimer type, providing insights into possible balancing effects of human-AI co-authorship. In Study 3, both strengths and limitations disclaimers induced higher credibility ratings than basic disclaimers. This research suggests that disclaimers fail to univocally influence the perception of AI-generated output. Further interventions should be developed to raise awareness of the capabilities and limitations of LLMs and to advocate for ethical practices in handling AI-generated content, especially regarding factual information.
Citations: 0
Harnessing the power of AI in qualitative research: Exploring, using and redesigning ChatGPT
Computers in Human Behavior: Artificial Humans Pub Date : 2025-03-22 DOI: 10.1016/j.chbah.2025.100144
He Zhang (Albert), Chuhao Wu, Jingyi Xie, Yao Lyu, Jie Cai, John M. Carroll
AI tools, particularly large language model (LLM)-based applications such as ChatGPT, have the potential to mitigate the qualitative research workload. In this study, we conducted semi-structured interviews with 17 participants and held a co-design session with 13 qualitative researchers to develop a framework for designing prompts specifically crafted to support junior researchers and stakeholders interested in leveraging AI for qualitative research. Our findings indicate that improving transparency, providing guidance on prompts, and strengthening users' understanding of LLMs' capabilities significantly enhance their ability to interact with ChatGPT. By comparing researchers' attitudes toward LLM-supported qualitative analysis before and after the co-design process, we show that the shift from an initially negative to a positive perception is driven by increased familiarity with the LLM's capabilities and by prompt-engineering techniques that enhance response transparency and, in turn, foster greater trust. This research not only highlights the importance of well-designed prompts in LLM applications but also offers reflections for qualitative researchers on the perception of AI's role. Finally, we emphasize the potential ethical risks, and the impact that researchers' construction of ethical expectations for AI, particularly among novices, may have on future research and AI development.
Citations: 0