Latest articles from Computers in Human Behavior: Artificial Humans

Am I still human? Wearing an exoskeleton impacts self-perceptions of warmth, competence, attractiveness, and machine-likeness
Computers in Human Behavior: Artificial Humans, 2(2), Article 100073. Pub Date: 2024-05-31. DOI: 10.1016/j.chbah.2024.100073
Sandra Maria Siedl, Martina Mara

Occupational exoskeletons are body-worn technologies capable of enhancing a wearer's naturally given strength at work. Despite increasing interest in their physical effects, their implications for user self-perception have been largely overlooked. Addressing common concerns about body-enhancing technologies, our study explored how real-world use of a robotic exoskeleton affects a wearer's mechanistic dehumanization and perceived attractiveness of the self. In a within-subjects laboratory experiment, n = 119 participants performed various practical work tasks (carrying, screwing, riveting) with and without the Ironhand active hand exoskeleton. After each condition, they completed a questionnaire. We expected that in the exoskeleton condition self-perceptions of warmth and attractiveness would be less pronounced and self-perceptions of being competent and machine-like would be more pronounced. Study data supported these hypotheses and showed perceived competence, machine-likeness, and attractiveness to be relevant to technology acceptance. Our findings provide the first evidence that body-enhancement technologies may be associated with tendencies towards self-dehumanization, and underline the multifaceted role of exoskeleton-induced competence gain. By examining user self-perceptions that relate to mechanistic dehumanization and aesthetic appeal, our research highlights the need to better understand psychological impacts of exoskeletons on human wearers.

Citations: 0
On trust in humans and trust in artificial intelligence: A study with samples from Singapore and Germany extending recent research
Computers in Human Behavior: Artificial Humans, 2(2), Article 100070. Pub Date: 2024-05-10. DOI: 10.1016/j.chbah.2024.100070
Christian Montag, Benjamin Becker, Benjamin J. Li

The AI revolution is shaping societies around the world. People interact daily with a growing number of products and services that feature AI integration. Without doubt rapid developments in AI will bring positive outcomes, but also challenges. In this realm it is important to understand if people trust this omni-use technology, because trust represents an essential prerequisite (to be willing) to use AI products and this in turn likely has an impact on how much AI will be embraced by national economies with consequences for the local work forces. To shed more light on trusting AI, the present work aims to understand how much the variables trust in AI and trust in humans overlap. This is important to understand, because much is already known about trust in humans, and if the concepts overlap, much of our understanding of trust in humans might be transferable to trusting AI. In samples from Singapore (n = 535) and Germany (n = 954) we could observe varying degrees of positive relations between the trust in AI/humans variables. Whereas trust in AI/humans showed a small positive association in Germany, there was a moderate positive association in Singapore. Further, this paper revisits associations between individual differences in the Big Five of Personality and general attitudes towards AI including trust.

The present work shows that trust in humans and trust in AI share only small amounts of variance, but this depends on culture (varying here from about 4 to 11 percent of shared variance). Future research should further investigate such associations but by also considering assessments of trust in specific AI-empowered-products and AI-empowered-services, where things might be different.

Citations: 0
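Illustrative note (not part of the article): the "4 to 11 percent of shared variance" reported above is the squared Pearson correlation (r^2) between the two trust measures. Below is a minimal Python sketch of that arithmetic on simulated scores; the variable names and numbers are hypothetical, not the study's data.

```python
import numpy as np

# Shared variance between two measures = r**2, where r is their Pearson correlation.
# The scores below are simulated for illustration only; they are not the study's data.
rng = np.random.default_rng(0)
trust_humans = rng.normal(3.5, 0.8, size=500)                    # e.g., scores on a 1-5 scale
trust_ai = 0.2 * trust_humans + rng.normal(2.8, 0.8, size=500)   # weakly related scores

r = np.corrcoef(trust_humans, trust_ai)[0, 1]
print(f"r = {r:.2f}, shared variance = {r**2:.1%}")

# Reference points: r = .20 corresponds to 4% shared variance and r = .33 to roughly 11%,
# about the range the abstract reports across the German and Singaporean samples.
```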
AI literacy for users – A comprehensive review and future research directions of learning methods, components, and effects
Computers in Human Behavior: Artificial Humans, 2(1), Article 100062. Pub Date: 2024-01-01. DOI: 10.1016/j.chbah.2024.100062
Marc Pinski, Alexander Benlian

The rapid advancement of artificial intelligence (AI) has brought transformative changes to various aspects of human life, leading to an exponential increase in the number of AI users. The broad access and usage of AI enable immense benefits but also give rise to significant challenges. One way for AI users to address these challenges is to develop AI literacy, referring to human proficiency in different subject areas of AI that enable purposeful, efficient, and ethical usage of AI technologies. This study aims to comprehensively understand and structure the research on AI literacy for AI users through a systematic, scoping literature review. Therefore, we synthesize the literature, provide a conceptual framework, and develop a research agenda. Our review paper holistically assesses the fragmented AI literacy research landscape (68 papers) while critically examining its specificity to different user groups and its distinction from other technology literacies, exposing that research efforts are partly not well integrated. We organize our findings in an overarching conceptual framework structured along the learning methods leading to, the components constituting, and the effects stemming from AI literacy. Our research agenda – oriented along the developed conceptual framework – sheds light on the most promising research opportunities to prepare AI users for an AI-powered future of work and society.

Citations: 0
Modeling morality and spirituality in artificial chaplains
Computers in Human Behavior: Artificial Humans, 2(1), Article 100051. Pub Date: 2024-01-01. DOI: 10.1016/j.chbah.2024.100051
Mark Graves

Citations: 0
Virtual vs. Human influencers: The battle for consumer hearts and minds
Computers in Human Behavior: Artificial Humans, 2(1), Article 100059. Pub Date: 2024-01-01. DOI: 10.1016/j.chbah.2024.100059
Abhishek Dondapati, Ranjit Kumar Dehury

Virtual influencers, or fictional CGI-generated social media personas, are gaining popularity. However, research lacks information on how they compare to human influencers in shaping consumer attitudes and purchase intent. This study examines whether perceived homophily and para-social relationships mediate the effect of influencer type on purchase intent and the moderating effect of perceived authenticity. A 2 × 2 between-subjects experiment manipulated influencer type (virtual vs. human) and product type (hedonic vs. utilitarian). Young adult participants viewed an Instagram profile of a lifestyle influencer. Authenticity, perceived homophily, para-social relationship, and purchase intent were measured using established scales. Perceived homophily and para-social relationships mediate the effect of influencer type on purchase intent. A significant interaction showed that perceived authenticity moderated the mediated pathway, such that the indirect effect via para-social relationship and perceived homophily was stronger for human influencers. Maintaining an authentic persona is critical for virtual influencers to sway consumer behaviours, especially for audiences less familiar with social media.

Citations: 0
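Illustrative note (not part of the article): mediation claims like the one above (influencer type affecting purchase intent via perceived homophily and para-social relationship) are commonly tested by estimating the indirect effect a*b and bootstrapping its confidence interval. The sketch below shows that general logic on simulated data with hypothetical variable names; it is not the authors' analysis code.

```python
import numpy as np

# Minimal sketch of a simple mediation test: indirect effect a*b with a percentile bootstrap.
# All data are simulated and all variable names are hypothetical.
rng = np.random.default_rng(1)
n = 300
is_human = rng.integers(0, 2, n).astype(float)                        # X: 0 = virtual, 1 = human influencer
homophily = 0.5 * is_human + rng.normal(0.0, 1.0, n)                  # M: perceived homophily
intent = 0.6 * homophily + 0.1 * is_human + rng.normal(0.0, 1.0, n)   # Y: purchase intent

def indirect_effect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                          # path a: X -> M (simple regression slope)
    design = np.column_stack([np.ones_like(x), x, m])   # path b: M -> Y, controlling for X
    b = np.linalg.lstsq(design, y, rcond=None)[0][2]
    return a * b

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)                         # resample cases with replacement
    boot.append(indirect_effect(is_human[idx], homophily[idx], intent[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect, 95% bootstrap CI: [{lo:.2f}, {hi:.2f}]")   # CI excluding 0 suggests mediation
```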
Trust in artificial intelligence: Literature review and main path analysis
Computers in Human Behavior: Artificial Humans, 2(1), Article 100043. Pub Date: 2024-01-01. DOI: 10.1016/j.chbah.2024.100043
Bruno Miranda Henrique, Eugene Santos Jr.

Artificial Intelligence (AI) is present in various modern systems, but it is still subject to acceptance in many fields. Medical diagnosis, autonomous driving cars, recommender systems and robotics are examples of areas in which some humans distrust AI technology, which ultimately leads to low acceptance rates. Conversely, those same applications can have humans who over rely on AI, acting as recommended by the systems with no criticism regarding the risks of a wrong decision. Therefore, there is an optimal balance with respect to trust in AI, achieved by calibration of expectations and capabilities. In this context, the literature about factors influencing trust in AI and its calibration is scattered among research fields, with no objective summaries of the overall evolution of the theme. In order to close this gap, this paper contributes a literature review of the most influential papers on the subject of trust in AI, selected by quantitative methods. It also proposes a Main Path Analysis of the literature, highlighting how the theme has evolved over the years. As results, researchers will find an overview on trust in AI based on the most important papers objectively selected and also tendencies and opportunities for future research.

Citations: 0
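Illustrative note (not part of the article): main path analysis typically treats the citation network as a directed acyclic graph, weights each edge by its search path count (SPC, the number of source-to-sink paths running through it), and then follows the heaviest edges to extract the main path. The networkx sketch below illustrates that general technique on a toy graph; it is a sketch under those assumptions, not the authors' implementation.

```python
import networkx as nx

def spc_weights(G: nx.DiGraph) -> dict:
    """Search path count per edge: (# source-to-u paths) * (# v-to-sink paths)."""
    order = list(nx.topological_sort(G))                # assumes an acyclic citation graph
    paths_from_source = {}
    for node in order:                                  # paths reaching each node from any source
        preds = list(G.predecessors(node))
        paths_from_source[node] = sum(paths_from_source[p] for p in preds) if preds else 1
    paths_to_sink = {}
    for node in reversed(order):                        # paths from each node to any sink
        succs = list(G.successors(node))
        paths_to_sink[node] = sum(paths_to_sink[s] for s in succs) if succs else 1
    return {(u, v): paths_from_source[u] * paths_to_sink[v] for u, v in G.edges}

def greedy_main_path(G: nx.DiGraph, w: dict) -> list:
    """Start at the heaviest source edge, then keep following the heaviest outgoing edge."""
    u, v = max((e for e in w if G.in_degree(e[0]) == 0), key=w.get)
    path = [(u, v)]
    while G.out_degree(v) > 0:
        u, v = max(G.out_edges(v), key=w.get)
        path.append((u, v))
    return path

# Toy citation graph: an edge A -> B means knowledge flows from paper A to the later paper B.
G = nx.DiGraph([("A", "B"), ("A", "C"), ("B", "D"), ("C", "D"), ("D", "E")])
print(greedy_main_path(G, spc_weights(G)))              # [('A', 'B'), ('B', 'D'), ('D', 'E')]
```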
A review of assessment for learning with artificial intelligence
Computers in Human Behavior: Artificial Humans, 2(1), Article 100040. Pub Date: 2024-01-01. DOI: 10.1016/j.chbah.2023.100040
Bahar Memarian, Tenzin Doleck

The reformed Assessment For Learning (AFL) practices the design of activities and evaluation and feedback processes that improve student learning. While Artificial Intelligence (AI) has blossomed as a field in education, less work has been done to examine the studies and challenges reported between AFL and AI. We conduct a review of the literature to examine the state of work on AFL and AI in the education literature. A review of articles in Web of Science, SCOPUS, and Google Scholar yielded 35 studies for review. We share the trends in research design, AFL conceptions, and AI challenges in the reviewed studies. We offer the implications of AFL and AI and considerations for future research.

Citations: 0
Co-creating art with generative artificial intelligence: Implications for artworks and artists
Computers in Human Behavior: Artificial Humans, 2(1), Article 100056. Pub Date: 2024-01-01. DOI: 10.1016/j.chbah.2024.100056
Uwe Messer

Synthetic visual art is becoming a commodity due to generative artificial intelligence (AI). The trend of using AI for co-creation will not spare artists' creative processes, and it is important to understand how the use of generative AI at different stages of the creative process affects both the evaluation of the artist and the result of the human-machine collaboration (i.e., the visual artifact). In three experiments (N = 560), this research explores how the evaluation of artworks is transformed by the revelation that the artist collaborated with AI at different stages of the creative process. The results show that co-created art is less liked and recognized, especially when AI was used in the implementation stage. While co-created art is perceived as more novel, it lacks creative authenticity, which exerts a dominant influence. The results also show that artists' perceptions suffer from the co-creation process, and that artists who co-create are less admired because they are perceived as less authentic. Two boundary conditions are identified. The negative effect can be mitigated by disclosing the level of artist involvement in co-creation with AI (e.g., by training the algorithm on a curated set of images vs. simply prompting an off-the-shelf AI image generator). In the context of art that is perceived as commercially motivated (e.g., stock images), the effect is also diminished. This research has important implications for the literature on human-AI-collaboration, research on authenticity, and the ongoing policy debate regarding the transparency of algorithmic presence.

Citations: 0
The effect of source disclosure on evaluation of AI-generated messages
Computers in Human Behavior: Artificial Humans, 2(1), Article 100058. Pub Date: 2024-01-01. DOI: 10.1016/j.chbah.2024.100058
Sue Lim, Ralf Schmälzle

Advancements in artificial intelligence (AI) over the last decade demonstrate that machines can exhibit communicative behavior and influence how humans think, feel, and behave. In fact, the recent development of ChatGPT has shown that large language models (LLMs) can be leveraged to generate high-quality communication content at scale and across domains, suggesting that they will be increasingly used in practice. However, many questions remain about how knowing the source of the messages influences recipients' evaluation of and preference for AI-generated messages compared to human-generated messages. This paper investigated this topic in the context of vaping prevention messaging. In Study 1, which was pre-registered, we examined the influence of source disclosure on young adults' evaluation of AI-generated health prevention messages compared to human-generated messages. We found that source disclosure (i.e., labeling the source of a message as AI vs. human) significantly impacted the evaluation of the messages but did not significantly alter message rankings. In a follow-up study (Study 2), we examined how the influence of source disclosure may vary by the adults' negative attitudes towards AI. We found a significant moderating effect of negative attitudes towards AI on message evaluation, but not for message selection. However, source disclosure decreased the preference for AI-generated messages for those with moderate levels (statistically significant) and high levels (directional) of negative attitudes towards AI. Overall, the results of this series of studies showed a slight bias against AI-generated messages once the source was disclosed, adding to the emerging area of study that lies at the intersection of AI and communication.

Citations: 0
Virtual voices for real change: The efficacy of virtual humans in pro-environmental social marketing for mitigating misinformation about climate change
Computers in Human Behavior: Artificial Humans, 2(1), Article 100047. Pub Date: 2024-01-01. DOI: 10.1016/j.chbah.2024.100047
Won-Ki Moon, Y. Greg Song, Lucy Atkinson

Academics have focused their research on the rise of non-human entities, particularly virtual humans. To assess the effectiveness of virtual humans in influencing individual behavior through campaigns, we conducted two separate online experiments involving different participant groups: university students (N = 167) and U.S. adults (N = 320). We compared individuals' responses to video-type pro-environmental campaigns featuring a virtual or actual human scientist as the central figure who provides testimonials about their individual efforts to prevent misinformation about climate change. The results indicate that an actual human protagonist evoked a stronger sense of identification compared to a virtual human counterpart. Nevertheless, we also observed that virtual humans can evoke empathy for the characters, leading individuals to perceive them as living entities who can have emotions. The insights gleaned from this study have the potential to shape the creation of virtual human content in various domains, including pro-social campaigns and marketing communications.

Citations: 0