Latest publications in Computers in Human Behavior: Artificial Humans

Be careful what you explain: Benefits and costs of explainable AI in a simulated medical task
Computers in Human Behavior: Artificial Humans Pub Date: 2023-08-01 DOI: 10.1016/j.chbah.2023.100021
Tobias Rieger, Dietrich Manzey, Benigna Meussling, Linda Onnasch, Eileen Roesler
Abstract: We investigated the impact of explainability instructions with respect to system limitations on trust behavior and trust attitude when using an artificial intelligence (AI) support agent to perform a simulated medical task. In an online experiment (N = 128), participants performed a visual estimation task in a simulated medical setting (i.e., estimate the percentage of bacteria in a visual stimulus). All participants were supported by an AI that gave perfect recommendations for all but one color of bacteria (i.e., error-prone color with 50% reliability). We manipulated between subjects whether participants knew about the error-prone color (XAI condition) or not (nonXAI condition). The analyses revealed that participants showed higher trust behavior (i.e., lower deviation from the AI recommendation) for the non-error-prone trials in the XAI condition. Moreover, participants showed lower trust behavior for the error-prone color in the XAI condition than in the nonXAI condition. However, this behavioral adaptation only applied to the subset of error-prone trials in which the AI gave correct recommendations, and not to the actual erroneous trials. Thus, designing explainable AI systems can also come with inadequate behavioral adaptations, as explainability was associated with benefits (i.e., more adequate behavior in non-error-prone trials), but also costs (stronger changes to the AI recommendations in correct error-prone trials).
Citations: 0
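The design described in the abstract above lends itself to a compact simulation. Below is a minimal Python sketch of that advisor setup: perfect recommendations for every bacteria color except one error-prone color at 50% reliability, with behavioral trust scored as deviation from the recommendation. The color names, error magnitude, and toy participant model are illustrative assumptions, not the authors' materials.

```python
# Sketch of an advisor that is perfect except for one error-prone color
# (50% reliability), as in the abstract above. All specifics are assumed.
import random

COLORS = ["red", "green", "blue", "yellow"]   # hypothetical stimulus colors
ERROR_PRONE = "yellow"                        # only 50% reliable for this color

def ai_recommendation(true_pct: float, color: str) -> float:
    """Return the AI's estimate of the bacteria percentage."""
    if color == ERROR_PRONE and random.random() < 0.5:
        # unreliable trial: recommendation is off by a large margin
        return max(0.0, min(100.0, true_pct + random.choice([-30, 30])))
    return true_pct  # perfect recommendation otherwise

def trust_behavior(estimate: float, recommendation: float) -> float:
    """Lower deviation from the recommendation = higher behavioral trust."""
    return abs(estimate - recommendation)

random.seed(1)
trials = [(random.uniform(10, 90), random.choice(COLORS)) for _ in range(1000)]
deviations = {"error-prone": [], "other": []}
for true_pct, color in trials:
    rec = ai_recommendation(true_pct, color)
    # toy participant: anchors on the recommendation, adjusts toward the truth
    estimate = 0.8 * rec + 0.2 * true_pct
    key = "error-prone" if color == ERROR_PRONE else "other"
    deviations[key].append(trust_behavior(estimate, rec))

for key, vals in deviations.items():
    print(f"{key:12s} mean deviation: {sum(vals) / len(vals):.2f}")
```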
Choosing between human and algorithmic advisors: The role of responsibility sharing
Computers in Human Behavior: Artificial Humans Pub Date: 2023-08-01 DOI: 10.1016/j.chbah.2023.100009
Lior Gazit, Ofer Arazy, Uri Hertz
Abstract: Algorithms are increasingly employed to provide highly accurate advice and recommendations across domains, yet in many cases people tend to prefer human advisors. Studies to date have focused mainly on the advisor's perceived competence and the outcome of the advice as determinants of advice takers' willingness to accept advice from human and algorithmic advisors and to arbitrate between them. Here we examine the role of another factor that is not directly related to the outcome: the advice taker's ability to psychologically offload responsibility for the decision's potential consequences. Building on studies showing differences in responsibility attribution between human and algorithmic advisors, we hypothesize that, controlling for the effects of the advisor's competence, the advisor's perceived responsibility is an important factor affecting advice takers' choice between human and algorithmic advisors. In an experiment in two domains, Medical and Financial (N = 806), participants were asked to rate advisors' perceived responsibility and choose between a human and algorithmic advisor. Our results show that human advisors were perceived as more responsible than algorithmic advisors and, most importantly, that the perception of the advisor's responsibility affected the preference for a human advisor over an algorithmic counterpart. Furthermore, we found that an experimental manipulation that impeded advice takers' ability to offload responsibility affected the extent to which human, but not algorithmic, advisors were perceived as responsible. Together, our findings highlight the role of responsibility sharing in influencing algorithm aversion.
Citations: 0
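One plausible way to analyze the choice data described above is a logistic model predicting advisor choice from the perceived-responsibility gap while controlling for the perceived-competence gap. The sketch below simulates such data and fits the model; the variable names, coefficients, and data-generating process are assumptions for illustration, not the study's materials or results.

```python
# Hedged sketch: logistic model of human-vs-algorithm advisor choice,
# with perceived responsibility as predictor and competence as control.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 806  # matches the abstract's sample size; the data are simulated

resp_gap = rng.normal(0.5, 1.0, n)  # responsibility(human) - responsibility(algorithm)
comp_gap = rng.normal(0.0, 1.0, n)  # competence(human) - competence(algorithm)
# assumed data-generating process: the responsibility gap drives the choice
logit = 0.9 * resp_gap + 0.4 * comp_gap
chose_human = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([resp_gap, comp_gap])
model = LogisticRegression().fit(X, chose_human)
print("coef (responsibility gap, competence gap):", model.coef_[0].round(2))
```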
Obedience to robot. Humanoid robot as an experimenter in Milgram paradigm
Computers in Human Behavior: Artificial Humans Pub Date: 2023-08-01 DOI: 10.1016/j.chbah.2023.100010
Tomasz Grzyb, Konrad Maj, Dariusz Dolinski
Abstract: Humans will increasingly be influenced by social robots. It still seems unclear whether we will accept them as authorities and whether we will give in to them without reflection, as in the case of human authorities in the classic Stanley Milgram experiments (1963, 1965, and 1974). Stanley Milgram's demonstration of the prevailing human tendency to display extreme obedience to authority figures was one of the most important discoveries in the field of social psychology. The authors of this article used a modified Milgram research paradigm (the "obedience lite" procedure) to compare obedience to a person giving instructions to electrocute someone sitting in an adjacent room with obedience to a robot issuing similar instructions. Twenty individuals were tested in each condition. As it turned out, the level of obedience was very high in both situations, and the nature of the authority figure issuing instructions (a professor vs. a robot) had no impact on the reactions of the subjects.
Citations: 0
Reading between the lines: Automatic inference of self-assessed personality traits from dyadic social chats
Computers in Human Behavior: Artificial Humans Pub Date: 2023-08-01 DOI: 10.1016/j.chbah.2023.100026
Abeer Buker, Alessandro Vinciarelli
Abstract: Interaction through text-based platforms (e.g., WhatsApp) is a common everyday activity, typically referred to as "chatting". However, the computing community has paid relatively little attention to the automatic analysis of social and psychological phenomena taking place during chats. This article proposes experiments aimed at the automatic inference of self-assessed personality traits from data collected during online dyadic chats. The proposed approach is multimodal and takes into account the two main components of chat-based interactions, namely what people type (the text) and how they type it (the keystroke dynamics). To the best of our knowledge, this is one of the very first works to include keystroke dynamics in an approach for the inference of personality traits. The experiments involved 60 people, and the results suggest that it is possible to recognize whether someone is below the median or not along the Big-Five traits. This suggests that personality leaves traces in both what people type and how they type it, the two types of information the approach takes into account.
Citations: 0
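To make the two modalities concrete, here is a minimal sketch of the feature families the abstract names: simple text statistics for what is typed and keystroke dynamics (hold and flight times) for how it is typed. The event format and feature set are illustrative assumptions; the paper's actual pipeline is richer.

```python
# Sketch of text + keystroke-dynamics features from a hypothetical typing log.
from statistics import mean

# hypothetical log: (key, press_time_ms, release_time_ms)
events = [("h", 0, 95), ("i", 180, 260), (" ", 420, 470), ("t", 610, 700)]

hold_times = [rel - prs for _, prs, rel in events]    # how long each key is held
flight_times = [events[i + 1][1] - events[i][2]       # gap between releasing one
                for i in range(len(events) - 1)]      # key and pressing the next
text = "".join(k for k, _, _ in events)

features = {
    "mean_hold_ms": mean(hold_times),
    "mean_flight_ms": mean(flight_times),
    "chars_typed": len(text),
    "word_count": len(text.split()),
}
print(features)
# Downstream, one binary classifier per Big-Five trait would predict
# below- vs above-median scores from vectors like this one.
```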
ChatGPT in education: Methods, potentials, and limitations
Computers in Human Behavior: Artificial Humans Pub Date: 2023-08-01 DOI: 10.1016/j.chbah.2023.100022
Bahar Memarian, Tenzin Doleck
Abstract: ChatGPT has come under public scrutiny, including in education. Yet less work has been done to analyze studies conducted on ChatGPT in educational contexts. This review paper examines where ChatGPT is employed in the educational literature and identifies areas of potential, challenges, and future work. A total of 63 publications were included in this review using the general framework of open and axial coding. We coded and summarized the methods of each study and reported its potentials, limitations, and future work. Thematic analysis of the reviewed studies revealed that most extant studies in the education literature explore ChatGPT through a commentary, non-empirical lens. The potentials of ChatGPT include, but are not limited to, the development of personalized and complex learning, specific teaching and learning activities, assessments, asynchronous communication, feedback, accuracy in research, personas, and task delegation and cognitive offloading. Several areas of challenge that ChatGPT is or will be facing in education are also shared, including but not limited to plagiarism and deception, misuse or lack of learning, accountability, and privacy. There are both concerns and optimism about the use of ChatGPT in education, yet the most pressing need is to ensure that student learning and academic integrity are not sacrificed. Our review provides a summary of studies conducted on ChatGPT in the education literature, together with a comprehensive discussion of future considerations for ChatGPT in education.
Citations: 1
Exploring the superiority of human expertise over algorithmic expertise in the cognitive and metacognitive processes of decision-making among decision-makers.
Computers in Human Behavior: Artificial Humans Pub Date: 2023-08-01 DOI: 10.1016/j.chbah.2023.100023
Nicolas Spatola
Abstract: Investigating the role of human vs algorithmic expertise in decision-making processes is crucial, especially in the public sector, where decisions can impact millions of people. To better comprehend the underlying cognitive and metacognitive processes, we conducted an experiment to manipulate the influence of human and algorithmic agents on decision-makers' confidence levels, and studied the resulting impact on decision outcomes and metacognitive awareness. By exploring a theoretical model of serial and interaction effects, we manipulated the complexity and uncertainty of the initial data and analyzed the role of confidence in decision-making when facing human or algorithmic expertise. Results showed that individuals tend to be more confident in their decision-making and less likely to revise their decisions when presented with consistent information. External expertise, whether from a human expert or an algorithmic analysis, can significantly impact decision outcomes, depending on whether it confirms or contradicts the initial decision. Human expertise also proved to have a stronger impact on decision outcomes than algorithmic expertise, which may reflect confirmation bias and other social processes that we discuss further. In conclusion, the study highlights the importance of adopting a holistic perspective in complex decision-making situations: decision-makers must recognize their biases and the influence of external factors on their confidence and accountability.
Citations: 0
Conversational agents for Children's mental health and mental disorders: A scoping review
Computers in Human Behavior: Artificial Humans Pub Date: 2023-08-01 DOI: 10.1016/j.chbah.2023.100028
Rachael Martin, Sally Richmond
{"title":"Conversational agents for Children's mental health and mental disorders: A scoping review","authors":"Rachael Martin,&nbsp;Sally Richmond","doi":"10.1016/j.chbah.2023.100028","DOIUrl":"https://doi.org/10.1016/j.chbah.2023.100028","url":null,"abstract":"","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949882123000282/pdfft?md5=0917711b8920dde8ac8f0301419db9dc&pid=1-s2.0-S2949882123000282-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"92025808","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
“To comply or to react, that is the question:” the roles of humanness versus eeriness of AI-powered virtual influencers, loneliness, and threats to human identities in AI-driven digital transformation
Computers in Human Behavior: Artificial Humans Pub Date: 2023-08-01 DOI: 10.1016/j.chbah.2023.100011
S. Venus Jin
Abstract: AI-powered virtual influencers play a variety of roles in emerging media environments. To test the diffusion of AI-powered virtual influencers among social media users and to examine antecedents, mediators, and moderators relevant to compliance with and reactance to virtual influencers, data were collected using two cross-sectional surveys (total N = 1623). Drawing on the Diffusion of Innovations theory, survey data from Study 1 (N1 = 987) provide preliminary descriptive statistics about US social media users' levels of awareness of, knowledge of, exposure to, and engagement with virtual influencers. Drawing from the theoretical frameworks of the Uncanny Valley Hypothesis and the CASA (Computers Are Social Actors) paradigm, Study 2 examines social media users' compliance with versus reactance to AI-powered virtual influencers. Survey data from Study 2 (N2 = 636) provide inferential statistics supporting the moderated serial mediation model that proposes (1) empathy and engagement with AI-powered virtual influencers mediate the effects of perceived humanness versus eeriness of virtual influencers on social media users' behavioral intention to purchase the products recommended by the virtual influencers (serial and total mediation effects) and (2) loneliness moderates the effects of humanness versus eeriness on empathy. Drawing from the theory of Psychological Reactance, Study 2 further reports the moderation effect of social media users' trait reactance and perceived threats to one's own human identity on the relationship between perceived eeriness and compliance with versus situational reactance to virtual influencers. Theoretical contributions to CASA research and the Uncanny Valley literature, as well as managerial implications for AI-driven digital transformation in media industries and virtual influencer marketing, are discussed.
Citations: 0
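The moderated serial mediation model described above can be sketched as a chain of regressions: humanness to empathy (moderated by loneliness), empathy to engagement, and engagement to purchase intention. The code below estimates those paths on simulated data via ordinary least squares; all path values and the data-generating process are assumptions for illustration, not the paper's estimates.

```python
# Sketch of a moderated serial mediation estimated as chained OLS regressions.
import numpy as np

def ols(y, *predictors):
    """Return OLS coefficients (intercept first)."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    return np.linalg.lstsq(X, y, rcond=None)[0]

rng = np.random.default_rng(7)
n = 636  # Study 2 sample size; the data below are simulated
humanness = rng.normal(0, 1, n)
loneliness = rng.normal(0, 1, n)
# assumed data-generating process with a humanness x loneliness interaction
empathy = 0.5 * humanness + 0.3 * humanness * loneliness + rng.normal(0, 1, n)
engagement = 0.6 * empathy + rng.normal(0, 1, n)
intention = 0.4 * engagement + rng.normal(0, 1, n)

a = ols(empathy, humanness, loneliness, humanness * loneliness)
b = ols(engagement, empathy, humanness)
c = ols(intention, engagement, empathy, humanness)
# serial indirect effect at average loneliness: a1 * b1 * c1
print("serial indirect effect:", round(a[1] * b[1] * c[1], 3))
```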
Are social robots the solution for shortages in rehabilitation care? Assessing the acceptance of nurses and patients of a social robot
Computers in Human Behavior: Artificial Humans Pub Date: 2023-08-01 DOI: 10.1016/j.chbah.2023.100017
Marian Z.M. Hurmuz, Stephanie M. Jansen-Kosterink, Ina Flierman, Susanna del Signore, Gianluca Zia, Stefania del Signore, Behrouz Fard
Abstract: Social robots are upcoming innovations in the healthcare sector. Currently, such robots are mostly used for entertaining and accompanying people, or for facilitating telepresence, but they have the potential to perform tasks of greater added value within healthcare. The aim of our paper was therefore to study the acceptance of a social robot in a rehabilitation centre. This paper reports on three studies conducted with the Pepper robot. We first conducted an acceptance study in which patients (N = 6) and nurses (N = 10) performed different tasks with the robot and rated their acceptance of it at different time points; these participants were also interviewed afterwards to gather more qualitative data. The second study was a flash mob study in which patients (N = 23) could interact with the robot via a chatbot and complete a questionnaire. Afterwards, 15 patients completed a short evaluation questionnaire about the ease of use of the robot, their intention to use it, and possible new functionalities for a social robot. Finally, a Social Return on Investment analysis was conducted to assess the added value of the Pepper robot. Considering the findings from these three studies, we conclude that the use of the Pepper robot for health-related tasks in the context of a rehabilitation centre is not yet feasible, as major steps are needed before the Pepper robot can take over these health-related tasks.
Citations: 0
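As a minimal illustration of the Social Return on Investment (SROI) method mentioned in the abstract: discount each year's monetized social benefits to present value and divide by the investment. All figures below are hypothetical, not the study's numbers.

```python
# Sketch of an SROI ratio: present value of benefits per unit invested.
def sroi_ratio(yearly_benefits, investment, discount_rate=0.03):
    """Discounted monetized social benefits divided by the investment."""
    pv = sum(b / (1 + discount_rate) ** t
             for t, b in enumerate(yearly_benefits, start=1))
    return pv / investment

# hypothetical: 40k EUR/year of saved nursing time over 3 years,
# against a 100k EUR robot investment -> ratio above 1 means net benefit
print(round(sroi_ratio([40_000, 40_000, 40_000], 100_000), 2))
```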
Optimizing human-AI collaboration: Effects of motivation and accuracy information in AI-supported decision-making
Computers in Human Behavior: Artificial Humans Pub Date: 2023-08-01 DOI: 10.1016/j.chbah.2023.100015
Simon Eisbach, Markus Langer, Guido Hertel
Abstract: Artificial intelligence (AI) systems increasingly support human decision-making in fields like medicine, management, and finance. However, such human-AI (HAI) collaboration is often less effective than AI systems alone. Moreover, efforts to make AI recommendations more transparent have failed to improve the decision quality of HAI collaborations. Based on dual-process theories of cognition, we hypothesized that suboptimal HAI collaboration is partly due to humans' heuristic information processing, creating a trust imbalance towards the AI system. In an online experiment with 337 participants, we investigated motivation and accuracy information as potential factors to induce more deliberate elaboration of AI recommendations and thus improve HAI collaboration. Participants worked on a simulated personnel selection task and received recommendations from a simulated AI system. Participants' motivation was varied through gamification, and accuracy information through additional information from the AI system. Results indicate that both motivation and accuracy information positively influenced HAI performance, but in different ways: while high motivation primarily increased the use of high-quality recommendations only, accuracy information improved the use of both low- and high-quality recommendations. However, a combination of high motivation and accuracy information did not yield additional improvement in HAI performance.
Citations: 0
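A common way to operationalize how much a decision-maker uses a recommendation in such judge-advisor setups is the weight-of-advice measure: how far the final judgment moves from the initial judgment toward the advice. The sketch below applies it separately to high- and low-quality recommendations, mirroring the comparison in the abstract; it is a generic measure with illustrative numbers, not the authors' analysis code.

```python
# Sketch of the weight-of-advice (WOA) measure, split by advice quality.
def weight_of_advice(initial: float, advice: float, final: float) -> float:
    """1.0 = fully adopted the advice, 0.0 = ignored it."""
    if advice == initial:
        raise ValueError("advice equals initial judgment; WOA is undefined")
    return (final - initial) / (advice - initial)

# hypothetical trials: (initial rating, AI advice, final rating, advice quality)
trials = [
    (50, 80, 74, "high"),
    (60, 30, 51, "high"),
    (40, 70, 49, "low"),
    (55, 20, 48, "low"),
]
for quality in ("high", "low"):
    woas = [weight_of_advice(i, a, f) for i, a, f, q in trials if q == quality]
    print(quality, "quality advice, mean WOA:", round(sum(woas) / len(woas), 2))
```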