Latest publications in Computers in Human Behavior: Artificial Humans

An unbiased artificial referee in beauty contests based on pattern recognition and AI
Computers in Human Behavior: Artificial Humans Pub Date: 2023-08-01 DOI: 10.1016/j.chbah.2023.100025
Kiana Nezami, Ching Y. Suen
{"title":"An unbiased artificial referee in beauty contests based on pattern recognition and AI","authors":"Kiana Nezami,&nbsp;Ching Y. Suen","doi":"10.1016/j.chbah.2023.100025","DOIUrl":"https://doi.org/10.1016/j.chbah.2023.100025","url":null,"abstract":"<div><p>Beauty contests have long been popular, but concerns about fairness and bias in judgment have emerged. To address this, integrating artificial intelligence (AI) and pattern recognition (PR) as an unbiased referee shows promise. This paper aims to assess the significance of different facial features, including eyes, nose, lips, chin, eyebrows, and jaws, as well as the role of angles and geometric facial measurements, such as distances between facial landmarks and ratios, in the context of beauty assessment. This study also employs two techniques, namely Principal Component Analysis (PCA) and stacked regression, to predict the attractiveness of faces. The experimental data set used for evaluation is the SCUT-FBP benchmark database. The obtained results, indicated by Mean Absolute Errors (MAE) and Pearson's Correlation Coefficient (PCC), demonstrate the high accuracy of our attractiveness prediction model. This research contributes to the advancement of automatic facial beauty analysis and its practical implications. Furthermore, our results surpass those published recently, further validating the effectiveness of our approach.</p></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"1 2","pages":"Article 100025"},"PeriodicalIF":0.0,"publicationDate":"2023-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949882123000257/pdfft?md5=56807d39447093a4a23491a97bd4284b&pid=1-s2.0-S2949882123000257-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"92025806","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
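As a rough illustration of the pipeline the abstract above describes (geometric facial features reduced with PCA, fed to a stacked regressor, and scored with MAE and PCC), here is a minimal Python sketch. The random feature matrix stands in for landmark distances and ratios extracted from SCUT-FBP images, and scikit-learn's StackingRegressor with these particular base learners is an assumption for illustration, not the authors' exact configuration.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVR

# Placeholder data: rows are faces, columns are geometric measurements
# (landmark distances, ratios, angles); y holds human attractiveness ratings.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))
y = rng.uniform(1, 5, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = make_pipeline(
    PCA(n_components=15),  # compress correlated geometric features
    StackingRegressor(
        estimators=[("svr", SVR()), ("rf", RandomForestRegressor(random_state=0))],
        final_estimator=Ridge(),  # meta-learner combines the base predictions
    ),
)
model.fit(X_train, y_train)
pred = model.predict(X_test)

# The abstract's two evaluation metrics.
print("MAE:", mean_absolute_error(y_test, pred))
print("PCC:", pearsonr(y_test, pred)[0])
```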
Algorithmic loafing and mitigation strategies in Human-AI teams
Computers in Human Behavior: Artificial Humans Pub Date: 2023-08-01 DOI: 10.1016/j.chbah.2023.100024
Isa Inuwa-Dutse, Alice Toniolo, Adrian Weller, Umang Bhatt
{"title":"Algorithmic loafing and mitigation strategies in Human-AI teams","authors":"Isa Inuwa-Dutse ,&nbsp;Alice Toniolo ,&nbsp;Adrian Weller ,&nbsp;Umang Bhatt","doi":"10.1016/j.chbah.2023.100024","DOIUrl":"https://doi.org/10.1016/j.chbah.2023.100024","url":null,"abstract":"<div><p>Exercising <em>social loafing</em> – exerting minimal effort by an individual in a group setting – in human-machine teams could critically degrade performance, especially in high-stakes domains where human judgement is essential. Akin to social loafing in human interaction, algorithmic loafing may occur when humans mindlessly adhere to machine recommendations due to reluctance to engage analytically with AI recommendations and explanations. We consider how algorithmic loafing could emerge and how to mitigate it. Specifically, we posit that algorithmic loafing can be induced through repeated encounters with correct decisions from the AI and transparency may combat it. As a form of transparency, explanation is offered for reasons that include justification, control, and discovery. However, algorithmic loafing is further reinforced by the perceived competence that an explanation provides. In this work, we explored these ideas via human subject experiments (<em>n</em> = 239). We also study how improving decision transparency through validation by an external human approver affects performance. Using eight experimental conditions in a high-stakes criminal justice context, we find that decision accuracy is typically unaffected by multiple forms of transparency but there is a significant difference in performance when the machine errs. Participants who saw explanations alone are better at overriding incorrect decisions; however, those under induced algorithmic loafing exhibit poor performance with variation in decision time. We conclude with recommendations on curtailing algorithmic loafing and achieving social facilitation, where task visibility motivates individuals to perform better.</p></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"1 2","pages":"Article 100024"},"PeriodicalIF":0.0,"publicationDate":"2023-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949882123000245/pdfft?md5=7f84d624b30c61413a077cb67b3927c5&pid=1-s2.0-S2949882123000245-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138413083","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Be careful what you explain: Benefits and costs of explainable AI in a simulated medical task
Computers in Human Behavior: Artificial Humans Pub Date: 2023-08-01 DOI: 10.1016/j.chbah.2023.100021
Tobias Rieger, Dietrich Manzey, Benigna Meussling, Linda Onnasch, Eileen Roesler
{"title":"Be careful what you explain: Benefits and costs of explainable AI in a simulated medical task","authors":"Tobias Rieger ,&nbsp;Dietrich Manzey ,&nbsp;Benigna Meussling ,&nbsp;Linda Onnasch ,&nbsp;Eileen Roesler","doi":"10.1016/j.chbah.2023.100021","DOIUrl":"10.1016/j.chbah.2023.100021","url":null,"abstract":"<div><p>We investigated the impact of explainability instructions with respect to system limitations on trust behavior and trust attitude when using an artificial intelligence (AI) support agent to perform a simulated medical task. In an online experiment (<em>N</em> = 128), participants performed a visual estimation task in a simulated medical setting (i.e., estimate the percentage of bacteria in a visual stimulus). All participants were supported by an AI that gave perfect recommendations for all but one color of bacteria (i.e., error-prone color with 50% reliability). We manipulated between-subjects whether participants knew about the error-prone color (XAI condition) or not (nonXAI condition). The analyses revealed that participants showed higher trust behavior (i.e., lower deviation from the AI recommendation) for the non-error-prone trials in the XAI condition. Moreover, participants showed lower trust behavior for the error-prone color in the XAI condition than in the nonXAI condition. However, this behavioral adaptation only applied to the subset of error-prone trials in which the AI gave correct recommendations, and not to the actual erroneous trials. Thus, designing explainable AI systems can also come with inadequate behavioral adaptations, as explainability was associated with benefits (i.e., more adequate behavior in non-error-prone trials), but also costs (stronger changes to the AI recommendations in correct error-prone trials).</p></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"1 2","pages":"Article 100021"},"PeriodicalIF":0.0,"publicationDate":"2023-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S294988212300021X/pdfft?md5=221d729df96546eae8913e787fa04ac8&pid=1-s2.0-S294988212300021X-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135325442","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
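For concreteness, the behavioral trust measure described above (deviation of a participant's estimate from the AI recommendation, split by condition and trial type) could be computed along these lines; the data-frame layout, column names, and values are hypothetical.

```python
import pandas as pd

# Hypothetical per-trial records from the visual estimation task.
df = pd.DataFrame({
    "condition": ["XAI", "XAI", "nonXAI", "nonXAI"],
    "error_prone_color": [False, True, False, True],
    "participant_estimate": [44.0, 58.0, 43.0, 51.0],
    "ai_recommendation": [45.0, 50.0, 45.0, 50.0],
})

# Lower mean deviation from the recommendation = higher behavioral trust.
df["deviation"] = (df["participant_estimate"] - df["ai_recommendation"]).abs()
print(df.groupby(["condition", "error_prone_color"])["deviation"].mean())
```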
Choosing between human and algorithmic advisors: The role of responsibility sharing
Computers in Human Behavior: Artificial Humans Pub Date: 2023-08-01 DOI: 10.1016/j.chbah.2023.100009
Lior Gazit, Ofer Arazy, Uri Hertz
{"title":"Choosing between human and algorithmic advisors: The role of responsibility sharing","authors":"Lior Gazit ,&nbsp;Ofer Arazy ,&nbsp;Uri Hertz","doi":"10.1016/j.chbah.2023.100009","DOIUrl":"https://doi.org/10.1016/j.chbah.2023.100009","url":null,"abstract":"<div><p>Algorithms are increasingly employed to provide highly accurate advice and recommendations across domains, yet in many cases people tend to prefer human advisors. Studies to date have focused mainly on the advisor’s perceived competence and the outcome of the advice as determinants of advice takers’ willingness to accept advice from human and algorithmic advisors and to arbitrate between them. Here we examine the role of another factor that is not directly related to the outcome: the advice taker’s ability to psychologically offload responsibility for the decision’s potential consequences. Building on studies showing differences in responsibility attribution between human and algorithmic advisors, we hypothesize that, controlling for the effects of the advisor’s competence, the advisor's perceived responsibility is an important factor affecting advice takers’ choice between human and algorithmic advisors. In an experiment in two domains, Medical and Financial (N = 806), participants were asked to rate advisors’ perceived responsibility and choose between a human and algorithmic advisor. Our results show that human advisors were perceived as more responsible than algorithmic advisors and most importantly, that the perception of the advisor’s responsibility affected the preference for a human advisor over an algorithmic counterpart. Furthermore, we found that an experimental manipulation that impeded advice takers’ ability to offload responsibility affected the extent to which human, but not algorithmic, advisors were perceived as responsible. Together, our findings highlight the role of responsibility sharing in influencing algorithm aversion.</p></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"1 2","pages":"Article 100009"},"PeriodicalIF":0.0,"publicationDate":"2023-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49729479","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Obedience to robot. Humanoid robot as an experimenter in Milgram paradigm
Computers in Human Behavior: Artificial Humans Pub Date: 2023-08-01 DOI: 10.1016/j.chbah.2023.100010
Tomasz Grzyb, Konrad Maj, Dariusz Dolinski
{"title":"Obedience to robot. Humanoid robot as an experimenter in Milgram paradigm","authors":"Tomasz Grzyb,&nbsp;Konrad Maj,&nbsp;Dariusz Dolinski","doi":"10.1016/j.chbah.2023.100010","DOIUrl":"https://doi.org/10.1016/j.chbah.2023.100010","url":null,"abstract":"<div><p>Humans will increasingly be influenced by social robots. It still seems unclear whether we will accept them as authorities and whether we will give in to them without reflection, as in the case of human authorities in the classic Stanley Milgram experiments (1963, 1965, and 1974). The demonstration by Stanley Milgram of the prevailing tendency in people to display extreme obedience to authority figures was one of the most important discoveries in the field of social psychology. The authors of this article decided to use a modified Milgram's research paradigm (obedience lite procedure) to compare one's obedience to a person giving instructions to electrocute someone sitting in an adjacent room with obedience to a robot issuing similar instructions. Twenty individuals were tested in both cases. As it turned out, the level of obedience was very high in both situations, and the nature of the authority figure issuing instructions (a professor vs. a robot) did not have the impact on the reactions of the subjects.</p></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"1 2","pages":"Article 100010"},"PeriodicalIF":0.0,"publicationDate":"2023-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49729485","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Reading between the lines: Automatic inference of self-assessed personality traits from dyadic social chats
Computers in Human Behavior: Artificial Humans Pub Date: 2023-08-01 DOI: 10.1016/j.chbah.2023.100026
Abeer Buker, Alessandro Vinciarelli
{"title":"Reading between the lines: Automatic inference of self-assessed personality traits from dyadic social chats","authors":"Abeer Buker ,&nbsp;Alessandro Vinciarelli","doi":"10.1016/j.chbah.2023.100026","DOIUrl":"https://doi.org/10.1016/j.chbah.2023.100026","url":null,"abstract":"<div><p>Interaction through text-based platforms (e.g., WhatsApp) is a common everyday activity, typically referred to as “chatting”. However, the computing community paid relatively little attention to the automatic analysis of social and psycho-logical phenomena taking place during chats. This article proposes experiments aimed at the automatic inference of self-assessed personality traits from data collected during online dyadic chats. The proposed approach is multimodal and takes into account the two main components of chat-based interactions, namely <em>what</em> people type (the <em>text</em>) and <em>how</em> they type it (the <em>keystroke dynamics</em>). To the best of our knowledge, this is one of the very first works that includes keystroke dynamics in an approach for the inference of personality traits. The experiments involved 60 people and the results suggest that it is possible to recognize whether someone is below median or not along the Big-Five traits. Such a result suggests that personality leaves traces in both what people type it and how they type it, the two types of information the approach takes into account.</p></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"1 2","pages":"Article 100026"},"PeriodicalIF":0.0,"publicationDate":"2023-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949882123000269/pdfft?md5=5c727ee751d05005017c524d25960f35&pid=1-s2.0-S2949882123000269-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"92025805","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
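A minimal sketch of the two modalities the abstract combines (what people type and how they type it) might look as follows. The toy data, the choice of TF-IDF and timing features, and the early fusion by concatenation are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Toy chat transcripts (the "what") ...
chats = [
    "hey how are you",
    "fine thanks and you",
    "good good see you soon",
    "ok talk to you later",
]
# ... and per-chat keystroke statistics (the "how"):
# mean inter-key interval in seconds, backspace rate.
keystrokes = np.array([[0.18, 0.05], [0.25, 0.12], [0.15, 0.02], [0.30, 0.20]])
# Binary target: below (1) vs. at/above (0) the median on one Big-Five trait.
y = np.array([1, 0, 1, 0])

# Early fusion: concatenate text features with keystroke features.
text_features = TfidfVectorizer().fit_transform(chats).toarray()
X = np.hstack([text_features, keystrokes])

clf = LogisticRegression(max_iter=1000)
print(cross_val_score(clf, X, y, cv=2).mean())  # toy-sized, illustration only
```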
ChatGPT in education: Methods, potentials, and limitations
Computers in Human Behavior: Artificial Humans Pub Date: 2023-08-01 DOI: 10.1016/j.chbah.2023.100022
Bahar Memarian, Tenzin Doleck
{"title":"ChatGPT in education: Methods, potentials, and limitations","authors":"Bahar Memarian,&nbsp;Tenzin Doleck","doi":"10.1016/j.chbah.2023.100022","DOIUrl":"https://doi.org/10.1016/j.chbah.2023.100022","url":null,"abstract":"<div><p>ChatGPT has been under the scrutiny of public opinion including in education. Yet, less work has been done to analyze studies conducted on ChatGPT in educational contexts. This review paper examines where ChatGPT is employed in educational literature and areas of potential, challenges, and future work. A total of 63 publications were included in this review using the general framework of open and axial coding. We coded and summarized the methods, and reported potentials, limitations, and future work of each study. Thematic analysis of reviewed studies revealed that most extant studies in the education literature explore ChatGPT through a commentary and non-empirical lens. The potentials of ChatGPT include but are not limited to the development of personalized and complex learning, specific teaching and learning activities, assessments, asynchronous communication, feedback, accuracy in research, personas, and task delegation and cognitive offload. Several areas of challenge that ChatGPT is or will be facing in education are also shared. Examples include but are not limited to plagiarism deception, misuse or lack of learning, accountability, and privacy. There are both concerns and optimism about the use of ChatGPT in education, yet the most pressing need is to ensure student learning and academic integrity are not sacrificed. Our review provides a summary of studies conducted on ChatGPT in education literature. We further provide a comprehensive and unique discussion on future considerations for ChatGPT in education.</p></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"1 2","pages":"Article 100022"},"PeriodicalIF":0.0,"publicationDate":"2023-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949882123000221/pdfft?md5=f9aa184eb8668e5dbec672d9482aabfb&pid=1-s2.0-S2949882123000221-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"92025807","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Exploring the superiority of human expertise over algorithmic expertise in the cognitive and metacognitive processes of decision-making among decision-makers.
Computers in Human Behavior: Artificial Humans Pub Date: 2023-08-01 DOI: 10.1016/j.chbah.2023.100023
Nicolas Spatola
{"title":"Exploring the superiority of human expertise over algorithmic expertise in the cognitive and metacognitive processes of decision-making among decision-makers.","authors":"Nicolas Spatola","doi":"10.1016/j.chbah.2023.100023","DOIUrl":"https://doi.org/10.1016/j.chbah.2023.100023","url":null,"abstract":"<div><p>Investigating the role of human vs algorithmic expertise on decision-making processes is crucial, especially in the public sector where it can impact millions of people. To better comprehend the underlying cognitive and metacognitive processes, we conducted an experiment to manipulate the influence of human and algorithmic agents on decision-makers' confidence levels. We also studied the resulting impact on decision outcomes and metacognitive awareness. By exploring a theoretical model of serial and interaction effects, we were able to manipulate the complexity and uncertainty of initial data and analyze the role of confidence in decision-making facing human or algorithmic expertise. Results showed that individuals tend to be more confident in their decision-making and less likely to revise their decisions when presented with consistent information. External expertise, whether from an expert or algorithmic analysis, can significantly impact decision outcomes, depending on whether it confirms or contradicts the initial decision. Also, human expertise proved to have a higher impact on decision outcomes than algorithmic expertise, which may demonstrate confirmation bias and other social processes that we further discuss. In conclusion, the study highlights the importance of adopting a holistic perspective in complex decision-making situations. Decision-makers must recognize their biases and the influence of external factors on their confidence and accountability.</p></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"1 2","pages":"Article 100023"},"PeriodicalIF":0.0,"publicationDate":"2023-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949882123000233/pdfft?md5=0659c799ba0059b5e4f8b8519fce9e98&pid=1-s2.0-S2949882123000233-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"92025804","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Conversational agents for Children's mental health and mental disorders: A scoping review
Computers in Human Behavior: Artificial Humans Pub Date: 2023-08-01 DOI: 10.1016/j.chbah.2023.100028
Rachael Martin, Sally Richmond
{"title":"Conversational agents for Children's mental health and mental disorders: A scoping review","authors":"Rachael Martin,&nbsp;Sally Richmond","doi":"10.1016/j.chbah.2023.100028","DOIUrl":"https://doi.org/10.1016/j.chbah.2023.100028","url":null,"abstract":"","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"1 2","pages":"Article 100028"},"PeriodicalIF":0.0,"publicationDate":"2023-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949882123000282/pdfft?md5=0917711b8920dde8ac8f0301419db9dc&pid=1-s2.0-S2949882123000282-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"92025808","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
“To comply or to react, that is the question:” the roles of humanness versus eeriness of AI-powered virtual influencers, loneliness, and threats to human identities in AI-driven digital transformation
Computers in Human Behavior: Artificial Humans Pub Date: 2023-08-01 DOI: 10.1016/j.chbah.2023.100011
S. Venus Jin
{"title":"“To comply or to react, that is the question:” the roles of humanness versus eeriness of AI-powered virtual influencers, loneliness, and threats to human identities in AI-driven digital transformation","authors":"S. Venus Jin","doi":"10.1016/j.chbah.2023.100011","DOIUrl":"https://doi.org/10.1016/j.chbah.2023.100011","url":null,"abstract":"<div><p>AI-powered virtual influencers play a variety of roles in emerging media environments. To test the diffusion of AI-powered virtual influencers among social media users and to examine antecedents, mediators, and moderators relevant to compliance with and reactance to virtual influencers, data were collected using two cross-sectional surveys (∑ <em>N</em> = 1623). Drawing on the Diffusion of Innovations theory, survey data from Study 1 (<em>N</em><sub><em>1</em></sub> = 987) provide preliminary descriptive statistics about US social media users' levels of awareness of, knowledge of, exposure to, and engagement with virtual influencers. Drawing from the theoretical frameworks of the Uncanny Valley Hypothesis and the CASA (Computers Are Social Actors) paradigm, Study 2 examines social media users' compliance with versus reactance to AI-powered virtual influencers. Survey data from Study 2 (<em>N</em><sub><em>2</em></sub> = 636) provide inferential statistics supporting the moderated serial mediation model that proposes (1) empathy and engagement with AI-powered virtual influencers mediate the effects of perceived humanness versus eeriness of virtual influencers on social media users' behavioral intention to purchase the products recommended by the virtual influencers (serial and total mediation effects) and (2) loneliness moderates the effects of humanness versus eeriness on empathy. Drawing from the theory of Psychological Reactance, Study 2 further reports the moderation effect of social media users' trait reactance and perceived threats to one's own human identity on the relationship between perceived eeriness and compliance with versus situational reactance to virtual influencers. Theoretical contributions to CASA research and the Uncanny Valley literature as well as managerial implications for AI-driven digital transformation in media industries and virtual influencer marketing are discussed.</p></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"1 2","pages":"Article 100011"},"PeriodicalIF":0.0,"publicationDate":"2023-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49713831","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
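To make the statistical model concrete, below is a minimal regression-based sketch of a moderated serial mediation of the kind Study 2 reports (humanness to empathy to engagement to purchase intention, with loneliness moderating the first path). The simulated data and the product-of-coefficients shortcut are illustrative assumptions; a published analysis would typically report bootstrapped confidence intervals (e.g., a PROCESS-style macro).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data with the hypothesized effect structure baked in.
rng = np.random.default_rng(1)
n = 636  # Study 2 sample size
d = pd.DataFrame({"humanness": rng.normal(size=n), "loneliness": rng.normal(size=n)})
d["empathy"] = 0.5 * d.humanness + 0.2 * d.humanness * d.loneliness + rng.normal(size=n)
d["engagement"] = 0.6 * d.empathy + rng.normal(size=n)
d["purchase"] = 0.4 * d.engagement + rng.normal(size=n)

# One regression per stage of the serial mediation chain.
m1 = smf.ols("empathy ~ humanness * loneliness", data=d).fit()
m2 = smf.ols("engagement ~ empathy + humanness", data=d).fit()
m3 = smf.ols("purchase ~ engagement + empathy + humanness", data=d).fit()

# Serial indirect effect at mean loneliness: a1 * d21 * b2.
indirect = m1.params["humanness"] * m2.params["empathy"] * m3.params["engagement"]
print("serial indirect effect:", round(indirect, 3))

# Index of moderated mediation: shift in the indirect effect per unit loneliness.
index = m1.params["humanness:loneliness"] * m2.params["empathy"] * m3.params["engagement"]
print("moderated mediation index:", round(index, 3))
```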