Exploring people's perceptions of LLM-generated advice
Joel Wester, Sander de Jong, Henning Pohl, Niels van Berkel
Computers in Human Behavior: Artificial Humans, 2(2), Article 100072 (published online 2024-06-07). DOI: 10.1016/j.chbah.2024.100072.

Abstract: When searching and browsing the web, more and more of the information we encounter is generated or mediated through large language models (LLMs), whether we are looking for a recipe, getting help on an essay, or seeking relationship advice. Yet there is limited understanding of how individuals perceive advice provided by these LLMs. In this paper, we explore people's perception of LLM-generated advice and the role that diverse user characteristics (i.e., personality and technology readiness) play in shaping that perception. Further, as LLM-generated advice can be difficult to distinguish from human advice, we assess the perceived creepiness of such advice. To investigate this, we ran an exploratory study (N = 91) in which participants rated advice presented in different styles (generated by GPT-3.5 Turbo). Notably, our findings suggest that individuals who identify as more agreeable tend to like the advice more and find it more useful. Individuals with higher technological insecurity are more likely to follow the advice, find it more useful, and deem it more likely that a friend could have given it. Lastly, advice given in a 'skeptical' style was rated most unpredictable, and advice given in a 'whimsical' style was rated least malicious, indicating that LLM advice styles influence user perceptions. Our results also provide an overview of people's considerations on likelihood, receptiveness, and what advice they are likely to seek from these digital assistants. Based on these results, we provide design takeaways for LLM-generated advice and outline future research directions to further inform the design of LLM-generated advice for support applications targeting people with diverse expectations and needs.

Are chatbots the new relationship experts? Insights from three studies
Laura M. Vowels
Computers in Human Behavior: Artificial Humans, 2(2), Article 100077 (published online 2024-06-07). DOI: 10.1016/j.chbah.2024.100077.

Abstract: Relationship distress is among the most important predictors of individual distress. Over one in three couples report relationship distress, yet couples only rarely seek help from couple therapists, preferring instead to seek information and advice online. Recent breakthroughs in humanlike, artificial intelligence-powered chatbots such as ChatGPT have made it possible to develop chatbots that respond therapeutically. Early research suggests that they outperform physicians in helpfulness and empathy when answering health-related questions. However, we do not yet know how well chatbots respond to questions about relationships. Across three studies, we evaluated the performance of chatbots in responding to relationship-related questions and in conducting a single-session relationship therapy. In Studies 1 and 2, we demonstrated that chatbots are perceived as more helpful and empathic than relationship experts, and in Study 3, we showed that relationship therapists rate single sessions with a chatbot highly on attributes such as empathy, active listening, and exploration. Limitations include repetitive responding and inadequate assessment of risk. The findings show the potential of using chatbots in relationship support and highlight the limitations that need to be addressed before they can be safely adopted for interventions.

Am I still human? Wearing an exoskeleton impacts self-perceptions of warmth, competence, attractiveness, and machine-likeness
Sandra Maria Siedl, Martina Mara
Computers in Human Behavior: Artificial Humans, 2(2), Article 100073 (published online 2024-05-31). DOI: 10.1016/j.chbah.2024.100073.

Abstract: Occupational exoskeletons are body-worn technologies capable of enhancing a wearer's naturally given strength at work. Despite increasing interest in their physical effects, their implications for user self-perception have been largely overlooked. Addressing common concerns about body-enhancing technologies, our study explored how real-world use of a robotic exoskeleton affects a wearer's mechanistic dehumanization and perceived attractiveness of the self. In a within-subjects laboratory experiment, n = 119 participants performed various practical work tasks (carrying, screwing, riveting) with and without the Ironhand active hand exoskeleton. After each condition, they completed a questionnaire. We expected that in the exoskeleton condition self-perceptions of warmth and attractiveness would be less pronounced and self-perceptions of being competent and machine-like would be more pronounced. Study data supported these hypotheses and showed perceived competence, machine-likeness, and attractiveness to be relevant to technology acceptance. Our findings provide the first evidence that body-enhancement technologies may be associated with tendencies towards self-dehumanization, and underline the multifaceted role of exoskeleton-induced competence gain. By examining user self-perceptions that relate to mechanistic dehumanization and aesthetic appeal, our research highlights the need to better understand the psychological impacts of exoskeletons on human wearers.

On trust in humans and trust in artificial intelligence: A study with samples from Singapore and Germany extending recent research
Christian Montag, Benjamin Becker, Benjamin J. Li
Computers in Human Behavior: Artificial Humans, 2(2), Article 100070 (published online 2024-05-10). DOI: 10.1016/j.chbah.2024.100070.

Abstract: The AI revolution is shaping societies around the world. People interact daily with a growing number of products and services that feature AI integration. Rapid developments in AI will undoubtedly bring positive outcomes, but also challenges. It is therefore important to understand whether people trust this omni-use technology, because trust is an essential prerequisite for the willingness to use AI products, which in turn likely affects how much AI will be embraced by national economies, with consequences for local workforces. To shed more light on trusting AI, the present work examines how much the variables trust in AI and trust in humans overlap. This matters because much is already known about trust in humans, and if the concepts overlap, much of that understanding might transfer to trusting AI. In samples from Singapore (n = 535) and Germany (n = 954), we observed varying degrees of positive relations between the trust in AI and trust in humans variables: a small positive association in Germany and a moderate positive association in Singapore. Further, this paper revisits associations between individual differences in the Big Five personality traits and general attitudes towards AI, including trust. The present work shows that trust in humans and trust in AI share only small amounts of variance, but that this depends on culture (varying here from about 4 to 11 percent of shared variance). Future research should further investigate such associations while also considering assessments of trust in specific AI-empowered products and services, where things might be different.

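As a point of reference (our own interpretation, since the abstract does not report correlation coefficients directly), shared variance between two variables is the squared Pearson correlation, so the reported 4 to 11 percent figures would correspond roughly to:

\[
\text{shared variance} = r^2 \;\Rightarrow\; r_{\text{Germany}} \approx \sqrt{0.04} = 0.20, \qquad r_{\text{Singapore}} \approx \sqrt{0.11} \approx 0.33,
\]

which is consistent with the "small" and "moderate" associations described in the abstract.
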
AI literacy for users – A comprehensive review and future research directions of learning methods, components, and effects
Marc Pinski, Alexander Benlian
Computers in Human Behavior: Artificial Humans, 2(1), Article 100062 (2024). DOI: 10.1016/j.chbah.2024.100062.

Abstract: The rapid advancement of artificial intelligence (AI) has brought transformative changes to various aspects of human life, leading to an exponential increase in the number of AI users. Broad access to and usage of AI enables immense benefits but also gives rise to significant challenges. One way for AI users to address these challenges is to develop AI literacy, referring to human proficiency in different subject areas of AI that enables purposeful, efficient, and ethical usage of AI technologies. This study aims to comprehensively understand and structure the research on AI literacy for AI users through a systematic, scoping literature review. We synthesize the literature, provide a conceptual framework, and develop a research agenda. Our review holistically assesses the fragmented AI literacy research landscape (68 papers) while critically examining its specificity to different user groups and its distinction from other technology literacies, showing that research efforts are not yet well integrated. We organize our findings in an overarching conceptual framework structured along the learning methods leading to, the components constituting, and the effects stemming from AI literacy. Our research agenda, oriented along the developed conceptual framework, sheds light on the most promising research opportunities to prepare AI users for an AI-powered future of work and society.

Modeling morality and spirituality in artificial chaplains
Mark Graves
Computers in Human Behavior: Artificial Humans, 2(1), Article 100051 (2024). DOI: 10.1016/j.chbah.2024.100051.

Virtual vs. Human influencers: The battle for consumer hearts and minds
Abhishek Dondapati, Ranjit Kumar Dehury
Computers in Human Behavior: Artificial Humans, 2(1), Article 100059 (2024). DOI: 10.1016/j.chbah.2024.100059.

Abstract: Virtual influencers, or fictional CGI-generated social media personas, are gaining popularity. However, research lacks information on how they compare to human influencers in shaping consumer attitudes and purchase intent. This study examines whether perceived homophily and para-social relationships mediate the effect of influencer type on purchase intent, and whether perceived authenticity moderates this effect. A 2 × 2 between-subjects experiment manipulated influencer type (virtual vs. human) and product type (hedonic vs. utilitarian). Young adult participants viewed an Instagram profile of a lifestyle influencer. Authenticity, perceived homophily, para-social relationship, and purchase intent were measured using established scales. Perceived homophily and para-social relationships mediated the effect of influencer type on purchase intent. A significant interaction showed that perceived authenticity moderated the mediated pathway, such that the indirect effect via para-social relationship and perceived homophily was stronger for human influencers. Maintaining an authentic persona is critical for virtual influencers to sway consumer behaviours, especially for audiences less familiar with social media.

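For readers unfamiliar with the analysis behind such claims, the sketch below illustrates a bootstrap test of an indirect (mediated) effect in the spirit of the mediation analysis described above. It is a minimal illustration on simulated data, not the authors' analysis; the variable names, effect sizes, and single-mediator simplification are assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data (assumed, for illustration): influencer type X (0 = virtual,
# 1 = human), mediator M (e.g., para-social relationship), outcome Y (purchase intent).
n = 200
X = rng.integers(0, 2, size=n).astype(float)
M = 0.5 * X + rng.normal(size=n)            # path a: X -> M
Y = 0.6 * M + 0.1 * X + rng.normal(size=n)  # path b: M -> Y, plus a small direct effect

def indirect_effect(X, M, Y):
    """Estimate the indirect effect a*b via two simple OLS regressions."""
    a = np.polyfit(X, M, 1)[0]                       # slope of M on X
    design = np.column_stack([np.ones_like(X), M, X])
    coefs, *_ = np.linalg.lstsq(design, Y, rcond=None)
    b = coefs[1]                                     # slope of Y on M, controlling for X
    return a * b

# Percentile bootstrap confidence interval for the indirect effect.
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, size=n)
    boot.append(indirect_effect(X[idx], M[idx], Y[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])

print(f"indirect effect = {indirect_effect(X, M, Y):.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")
```

If the bootstrap confidence interval excludes zero, the indirect (mediated) path is taken as significant; moderated mediation extends this by estimating the indirect effect separately at levels of the moderator (here, perceived authenticity).
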
Trust in artificial intelligence: Literature review and main path analysis
Bruno Miranda Henrique, Eugene Santos Jr.
Computers in Human Behavior: Artificial Humans, 2(1), Article 100043 (2024). DOI: 10.1016/j.chbah.2024.100043.

Abstract: Artificial Intelligence (AI) is present in various modern systems, but it is still subject to acceptance in many fields. Medical diagnosis, autonomous driving, recommender systems, and robotics are examples of areas in which some humans distrust AI technology, which ultimately leads to low acceptance rates. Conversely, those same applications can have humans who over-rely on AI, acting as recommended by the systems without critically considering the risks of a wrong decision. There is therefore an optimal balance with respect to trust in AI, achieved by calibration of expectations and capabilities. In this context, the literature on factors influencing trust in AI and its calibration is scattered among research fields, with no objective summaries of the overall evolution of the theme. To close this gap, this paper contributes a literature review of the most influential papers on trust in AI, selected by quantitative methods. It also proposes a Main Path Analysis of the literature, highlighting how the theme has evolved over the years. As a result, researchers will find an overview of trust in AI based on the most important papers, objectively selected, as well as tendencies and opportunities for future research.

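Main Path Analysis is a citation-network technique: each citation link is weighted by how many source-to-sink paths traverse it (the search path count), and the main path follows the heaviest links through the network, tracing how knowledge flowed through the field. The sketch below is a minimal illustration on a toy citation graph using networkx; the toy graph, function names, and the greedy forward search are our own assumptions, not the authors' implementation.

```python
import networkx as nx

def spc_weights(G: nx.DiGraph) -> dict:
    """Search Path Count (SPC) for each edge of a citation DAG.

    SPC(u, v) = (# paths from any source to u) * (# paths from v to any sink).
    """
    order = list(nx.topological_sort(G))
    n_from_source = {v: 1 if G.in_degree(v) == 0 else 0 for v in G}
    for v in order:
        for u in G.predecessors(v):
            n_from_source[v] += n_from_source[u]
    n_to_sink = {v: 1 if G.out_degree(v) == 0 else 0 for v in G}
    for v in reversed(order):
        for w in G.successors(v):
            n_to_sink[v] += n_to_sink[w]
    return {(u, v): n_from_source[u] * n_to_sink[v] for u, v in G.edges}

def main_path(G: nx.DiGraph) -> list:
    """Greedy (local forward) main path: start at the heaviest source edge,
    then repeatedly follow the outgoing edge with the largest SPC."""
    spc = spc_weights(G)
    start = max((e for e in spc if G.in_degree(e[0]) == 0), key=spc.get)
    path = [start[0], start[1]]
    node = start[1]
    while G.out_degree(node) > 0:
        node = max(G.successors(node), key=lambda w: spc[(node, w)])
        path.append(node)
    return path

# Toy citation DAG: an edge A -> B means knowledge flows from paper A to paper B.
G = nx.DiGraph([("P1", "P2"), ("P1", "P3"), ("P2", "P4"),
                ("P3", "P4"), ("P4", "P5"), ("P3", "P5")])
print(main_path(G))  # e.g. ['P1', 'P3', 'P4', 'P5']
```

Variants of the extraction step exist (global search, key-route search); the greedy forward search shown here is the simplest and is used only to make the idea concrete.
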
A review of assessment for learning with artificial intelligence
Bahar Memarian, Tenzin Doleck
Computers in Human Behavior: Artificial Humans, 2(1), Article 100040 (2024). DOI: 10.1016/j.chbah.2023.100040.

Abstract: Reformed Assessment For Learning (AFL) practices center on designing activities, evaluation, and feedback processes that improve student learning. While Artificial Intelligence (AI) has blossomed as a field in education, less work has been done to examine the studies and challenges reported at the intersection of AFL and AI. We conducted a review of the literature to examine the state of work on AFL and AI in education. A search of Web of Science, SCOPUS, and Google Scholar yielded 35 studies for review. We share the trends in research design, AFL conceptions, and AI challenges in the reviewed studies, and offer the implications of AFL and AI together with considerations for future research.

Co-creating art with generative artificial intelligence: Implications for artworks and artists
Uwe Messer
Computers in Human Behavior: Artificial Humans, 2(1), Article 100056 (2024). DOI: 10.1016/j.chbah.2024.100056.

Abstract: Synthetic visual art is becoming a commodity due to generative artificial intelligence (AI). The trend of using AI for co-creation will not spare artists' creative processes, and it is important to understand how the use of generative AI at different stages of the creative process affects both the evaluation of the artist and the result of the human-machine collaboration (i.e., the visual artifact). In three experiments (N = 560), this research explores how the evaluation of artworks is transformed by the revelation that the artist collaborated with AI at different stages of the creative process. The results show that co-created art is less liked and recognized, especially when AI was used in the implementation stage. While co-created art is perceived as more novel, it lacks creative authenticity, which exerts a dominant influence. The results also show that artists' perceptions suffer from the co-creation process, and that artists who co-create are less admired because they are perceived as less authentic. Two boundary conditions are identified. The negative effect can be mitigated by disclosing the level of artist involvement in co-creation with AI (e.g., by training the algorithm on a curated set of images vs. simply prompting an off-the-shelf AI image generator). In the context of art that is perceived as commercially motivated (e.g., stock images), the effect is also diminished. This research has important implications for the literature on human-AI collaboration, research on authenticity, and the ongoing policy debate regarding the transparency of algorithmic presence.