Latest articles from Computers in Human Behavior: Artificial Humans

Performance rather than reputation affects humans’ trust towards an artificial agent
Computers in Human Behavior: Artificial Humans. Pub Date: 2025-01-22. DOI: 10.1016/j.chbah.2025.100122
Fritz Becker, Celine Ina Spannagl, Jürgen Buder, Markus Huff
Abstract: To succeed in teamwork with artificial agents, humans have to calibrate their trust towards agents based on information they receive about an agent before interaction (reputation information) as well as on experiences they have during interaction (agent performance). This study (N = 253) focused on the influence of a virtual agent's reputation (high/low) and actual observed performance (high/low) on a human user's behavioral trust (delegation behavior) and self-reported trust (questionnaires) in a cooperative Tetris game. The main findings suggested that agent reputation influences self-reported trust prior to interaction. However, the effect of reputation was immediately overridden by the agent's performance during the interaction. The agent's performance during the interactive task influenced delegation behavior, as well as self-reported trust measured post-interaction. The pre- to post-interaction change in self-reported trust was significantly larger when reputation and performance were incongruent. We concluded that reputation might have had a smaller than expected influence on behavior in the presence of a novel tool that afforded exploration. Our research contributes to understanding trust and delegation dynamics, which is crucial for the design and adequate use of artificial agent team partners in a world of digital transformation.
Citations: 0
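
The factorial design described in this abstract lends itself to a standard two-way ANOVA on the behavioral trust measure. Below is a minimal sketch of that analysis; the file name and the columns delegation_rate, reputation, and performance are hypothetical stand-ins, not the authors' materials.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical participant-level data: one row per participant, with the
# two factor levels and the share of rounds delegated to the agent.
df = pd.read_csv("trust_tetris.csv")

# Two-way between-subjects model: reputation (high/low) x performance
# (high/low), including their interaction.
model = smf.ols(
    "delegation_rate ~ C(reputation) * C(performance)", data=df
).fit()

# Type II ANOVA table for the two main effects and the interaction.
print(sm.stats.anova_lm(model, typ=2))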
Who wants to be hired by AI? How message frames and AI transparency impact individuals’ attitudes and behaviors toward companies using AI in hiring
Computers in Human Behavior: Artificial Humans. Pub Date: 2025-01-17. DOI: 10.1016/j.chbah.2025.100120
Ying Xiong, Joon Kyoung Kim
Abstract: In recent years, many companies have begun to adopt artificial intelligence (AI) in their recruitment and personnel selection. Despite the increasing use of AI in hiring, little is known about how companies can better communicate their AI use to job applicants in order to increase positive attitudes and behaviors toward companies. Three experimental studies were conducted to investigate the impact of exposure to gain- and loss-framed messages and AI transparency information (third-party audit vs. sharing AI information with job candidates) in job advertisements on individuals' attitudes, organizational trust, and positive word-of-mouth (WOM) intentions. The results showed that the presence of AI transparency information in job advertisements increases individuals' favorable attitudes, trust, and positive WOM intentions toward companies using AI in hiring. Loss-framed messages increased the outcome variables more than gain-framed messages when the messages concerned recruitment process time, but not when they concerned unconscious hiring bias.
Citations: 0
Generative artificial intelligence in higher education: Evidence from an analysis of institutional policies and guidelines
Computers in Human Behavior: Artificial Humans. Pub Date: 2025-01-17. DOI: 10.1016/j.chbah.2025.100121
Nora McDonald, Aditya Johri, Areej Ali, Aayushi Hingle Collier
Abstract: The release of ChatGPT in November 2022 prompted a massive uptake of generative artificial intelligence (GenAI) across higher education institutions (HEIs). In response, HEIs first focused on regulating its use, particularly among students, before shifting towards advocating for its productive integration within teaching and learning. Since then, many HEIs have increasingly provided policies and guidelines to direct GenAI use. This paper presents an analysis of documents produced by 116 US universities classified as high research activity (R1) institutions, providing a comprehensive examination of the advice and guidance offered by institutional stakeholders about GenAI. Through an extensive analysis, we found that a majority of universities (N = 73, 63%) encourage the use of GenAI, with many offering detailed guidance for its use in the classroom (N = 48, 41%). Over half the institutions provided sample syllabi (N = 65, 56%) and half (N = 58, 50%) provided sample GenAI curricula and activities that would help instructors integrate and leverage GenAI in their teaching. Notably, the majority of guidance focused on writing activities, whereas references to code and STEM-related activities were infrequent and often vague even when mentioned (N = 58, 50%). Finally, more than half of institutions addressed the ethics of GenAI across a broad range of topics, including Diversity, Equity and Inclusion (DEI) (N = 60, 52%). Based on our findings, we caution that guidance for faculty can become burdensome as policies suggest or imply substantial revisions to existing pedagogical practices.
Citations: 0
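
Because this paper's evidence consists of coded document counts, a simple way to gauge the precision of any reported share is a binomial confidence interval. The sketch below applies a Wilson interval to the 73-of-116 figure from the abstract; the counts come from the abstract, but the choice of interval is ours, not the authors'.

```python
from statsmodels.stats.proportion import proportion_confint

# Institutions encouraging GenAI use, per the abstract.
count, nobs = 73, 116

# Wilson score interval for the underlying proportion.
low, high = proportion_confint(count, nobs, alpha=0.05, method="wilson")
print(f"{count / nobs:.0%} of institutions (95% CI: {low:.0%} to {high:.0%})")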
Numeric vs. verbal information: The influence of information quantifiability in Human–AI vs. Human–Human decision support
Computers in Human Behavior: Artificial Humans. Pub Date: 2024-12-16. DOI: 10.1016/j.chbah.2024.100116
Eileen Roesler, Tobias Rieger, Markus Langer
Abstract: A number of factors, including different task characteristics, influence trust in human vs. AI decision support. In particular, information quantifiability could influence trust and dependence, especially considering that human and AI support may have varying strengths in assessing criteria that differ in their quantifiability. To investigate the effect of information quantifiability, we conducted an online experiment (N = 204) with a 2 (support agent: AI vs. human) × 2 (quantifiability: low vs. high) between-subjects design, using a simulated recruitment task. The support agent was manipulated via framing, while quantifiability was manipulated via the evaluation criteria in the recruitment paradigm. The analysis revealed higher trust in human than in AI support. Moreover, trust was higher in the low than in the high quantifiability condition. Counterintuitively, participants rated the applicants as less qualified than their support agent did, an effect especially noticeable in the low quantifiability condition. Besides reinforcing earlier findings of higher trust towards human experts than towards AI and showcasing the importance of information quantifiability, the present study also raises questions concerning the perceived leniency of support agents and its impact on trust and behavior.
Citations: 0
Physical anthropomorphism (but not gender presentation) influences trust in household robots
Computers in Human Behavior: Artificial Humans. Pub Date: 2024-12-10. DOI: 10.1016/j.chbah.2024.100114
Colin Holbrook, Umesh Krishnamurthy, Paul P. Maglio, Alan R. Wagner
Abstract: This research explores anthropomorphism and gender presentation as prospective determinants of trust in household service robots with respect to care of objects (e.g., clothing, valuables), information (e.g., online passwords, credit card numbers), and living agents (e.g., pets, children). In Experiments 1 and 2, we compared trust in a humanoid robot presenting as male, female, or gender-neutral, finding no effects of gender presentation on any trust outcome. In Experiment 3, a fourth condition depicting a physically nonhumanoid robot was added. Relative to the humanoid conditions, participants reported less willingness to trust the nonhumanoid robot to care for their objects, personal information, or vulnerable agents; the reduced trust in care for objects or information was mediated by appraisals of the nonhumanoid as less intelligent and less likable, whereas the reduced trust in care of agents was mediated by appraisals of the nonhumanoid as less likable and less alive. In a parallel pattern, across all studies, participants' appraisals of robots as intelligent tracked trust in them to take care of objects or information (but not agents), whereas appraisals of robots as likable and alive tracked trust in care of agents. The results are discussed as they inform past work examining effects of gender presentation and anthropomorphism on perceptions of, and trust in, robots.
Citations: 0
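
The mediation results reported here follow a common single-mediator template (condition, then appraisal, then trust). A minimal sketch using the pingouin library is shown below; the data file and the variable names humanoid, likability, and trust_agents are hypothetical, and the study's actual models test several appraisals in parallel.

```python
import pandas as pd
import pingouin as pg

# Hypothetical participant-level data: humanoid is a 0/1 condition code,
# likability and trust_agents are questionnaire scores.
df = pd.read_csv("robot_trust.csv")

# Bootstrap test of the indirect effect of humanoid form on trust in the
# care of living agents, via likability appraisals.
result = pg.mediation_analysis(
    data=df, x="humanoid", m="likability", y="trust_agents",
    n_boot=5000, seed=42,
)
print(result)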
Trust and acceptance of AI caregiving robots: The role of ethics and self-efficacy
Computers in Human Behavior: Artificial Humans. Pub Date: 2024-12-07. DOI: 10.1016/j.chbah.2024.100115
Cathy S. Lin, Ying-Feng Kuo, Ting-Yu Wang
Abstract: As AI technology rapidly advances, ethical concerns have emerged as a global focus. This study introduces a second-order scale for analyzing AI ethics and proposes a model to examine the intention to use AI caregiving robots. The model incorporates elements from the Unified Theory of Acceptance and Use of Technology (UTAUT), including social influence and performance expectancy, alongside AI ethics, self-efficacy, and trust in AI. The findings reveal that AI ethics and social influence enhance self-efficacy, which in turn increases trust in AI, performance expectancy, and the intention to use AI caregiving robots. Moreover, trust in AI and performance expectancy directly and positively influence the intention to adopt these robots. By incorporating AI ethics, the model provides a more comprehensive perspective, addressing dimensions often overlooked in conventional models. The proposed model is validated across diverse samples, demonstrating both its theoretical and practical significance in predicting AI usage intentions.
Citations: 0
Exploring predictors of AI chatbot usage intensity among students: Within- and between-person relationships based on the technology acceptance model
Computers in Human Behavior: Artificial Humans. Pub Date: 2024-12-03. DOI: 10.1016/j.chbah.2024.100113
Anne-Kathrin Kleine, Insa Schaffernak, Eva Lermer
Abstract: The current research investigated the factors associated with the intensity of AI chatbot usage among university students, applying the Technology Acceptance Model (TAM) and its extended version, TAM3. A daily diary study over five days was conducted among university students, distinguishing between inter-individual (between-person) and intra-individual (within-person) variations. Multilevel structural equation modeling (SEM) was used to analyze the data. In Study 1 (N = 72), results indicated that AI chatbot anxiety was associated with perceived ease of use (PEOU) and perceived usefulness (PU), which serially mediated the link with AI chatbot usage intensity. Study 2 (N = 153) supported these findings and further explored the roles of facilitating conditions and subjective norm as additional predictors of PEOU and PU. Results from both studies demonstrated that, at the between-person level, students with higher average levels of PEOU and PU reported more intensive AI chatbot usage. In Study 1, the relationship between PEOU and usage intensity was mediated through PU at the within-person level, while the mediation model was not supported in Study 2. Post-hoc comparisons highlighted much higher variability in PEOU and PU in Study 1 compared to Study 2. The results have practical implications for enhancing AI chatbot adoption in educational settings. Emphasizing user-friendly interfaces, reducing AI-related anxiety, providing robust technical support, and leveraging peer influence may enhance the usage intensity of AI chatbots. This study underscores the necessity of considering both stable individual differences and dynamic daily influences to better understand AI chatbot usage patterns among students.
Citations: 0
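
The within-/between-person distinction in this diary design can be made concrete by person-mean centering the predictors before fitting a multilevel model. The sketch below shows that decomposition with a simple random-intercept model; the file and column names are hypothetical, and the authors' multilevel SEM is only approximated by this simpler mixed model.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical diary data: one row per student per day.
df = pd.read_csv("chatbot_diary.csv")

# Between-person part: each student's mean perceived ease of use (PEOU).
df["peou_between"] = df.groupby("person_id")["peou"].transform("mean")
# Within-person part: the daily deviation from that personal mean.
df["peou_within"] = df["peou"] - df["peou_between"]

# Random intercept per student; both components enter as fixed effects.
model = smf.mixedlm(
    "usage_intensity ~ peou_between + peou_within",
    data=df, groups=df["person_id"],
).fit()
print(model.summary())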
Preventing promotion-focused goals: The impact of regulatory focus on responsible AI
Computers in Human Behavior: Artificial Humans. Pub Date: 2024-12-03. DOI: 10.1016/j.chbah.2024.100112
Samuel N. Kirshner, Jessica Lawson
Abstract: Implementing black-box artificial intelligence (AI) often requires evaluating trade-offs related to responsible AI (RAI) (e.g., the trade-off between performance and features regarding AI's fairness or explainability). Synthesizing theories on regulatory focus and cognitive dissonance, we develop and test a model describing how organizational goals impact the dynamics of AI-based unethical pro-organizational behavior (UPB). First, we show that promotion-focused goals increase AI-based UPB and that RAI values act as a novel mediator. Promotion-focused goals significantly lower fairness in Study 1A and explainability in Study 1B, with RAI values mediating the relationship between regulatory focus and AI-based UPB. Study 2A further supports RAI values as the driving mechanism of AI-based UPB using a moderation-by-process design experiment. Study 2B provides evidence that AI-based UPB decisions can, in turn, lead to more unethical RAI values for promotion-focused firms, creating a negative RAI feedback loop within organizations. Our research provides theoretical implications and actionable insights for researchers, organizations, and policymakers seeking to improve the responsible use of AI.
Citations: 0
An economical measure of attitudes towards artificial intelligence in work, healthcare, and education (ATTARI-WHE)
Computers in Human Behavior: Artificial Humans. Pub Date: 2024-11-28. DOI: 10.1016/j.chbah.2024.100106
Timo Gnambs, Jan-Philipp Stein, Markus Appel, Florian Griese, Sabine Zinn
Abstract: Artificial intelligence (AI) has profoundly transformed numerous facets of both private and professional life. Understanding how people evaluate AI is crucial for predicting its future adoption and addressing potential barriers. However, existing instruments measuring attitudes towards AI often focus on specific technologies or cross-domain evaluations, while domain-specific measurement instruments are scarce. Therefore, this study introduces the nine-item Attitudes towards Artificial Intelligence in Work, Healthcare, and Education (ATTARI-WHE) scale. Using a diverse sample of N = 1083 respondents from Germany, the psychometric properties of the instrument were evaluated. The results demonstrated low rates of missing responses, minimal response biases, and a robust measurement model that was invariant across sex, age, education, and employment status. These findings support the use of the ATTARI-WHE to assess AI attitudes in the work, healthcare, and education domains, with three items each. Its brevity makes it particularly well-suited for use in social surveys, web-based studies, or longitudinal research where assessment time is limited.
Citations: 0
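
A first step in the kind of psychometric evaluation reported here is an internal-consistency check. The sketch below computes Cronbach's alpha for nine hypothetical item columns; the paper's full evaluation additionally fits a measurement model and tests invariance across sex, age, education, and employment status, which this sketch omits.

```python
import pandas as pd
import pingouin as pg

# Hypothetical responses: nine ATTARI-WHE item columns, one row per person.
df = pd.read_csv("attari_whe.csv")
items = [f"item{i}" for i in range(1, 10)]

# Internal consistency of the nine-item scale, with a confidence interval.
alpha, ci = pg.cronbach_alpha(data=df[items])
print(f"Cronbach's alpha = {alpha:.2f} (95% CI: {ci[0]:.2f} to {ci[1]:.2f})")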
How do people react to political bias in generative artificial intelligence (AI)?
Computers in Human Behavior: Artificial Humans. Pub Date: 2024-11-28. DOI: 10.1016/j.chbah.2024.100108
Uwe Messer
Abstract: Generative artificial intelligence (GAI), such as large language models (LLMs), has a concerning tendency to generate politically biased content. This is a challenge, as GAI is emerging in politically polarized societies. Therefore, this research investigates how people react to biased GAI content based on their pre-existing political beliefs and how this influences the acceptance of GAI. In three experiments (N = 513), it was found that perceived alignment between a user's political orientation and bias in generated content (in text and images) increases acceptance of and reliance on GAI. Participants who perceived alignment were more likely to grant GAI access to sensitive smartphone functions and to endorse its use in critical domains (e.g., loan approval; social media moderation). Because users see GAI as a social actor, they consider perceived alignment a sign of greater objectivity, thus granting aligned GAI access to more sensitive areas.
Citations: 0