Computers in Human Behavior: Artificial Humans: Latest Articles

Let robots tell stories: Using social robots as storytellers to promote language learning among young children
Computers in Human Behavior: Artificial Humans Pub Date: 2025-09-23 DOI: 10.1016/j.chbah.2025.100210
Zhaoji Wang, Tammy Sheung-Ting Law, Susanna Siu Sze Yeung
{"title":"Let robots tell stories: Using social robots as storytellers to promote language learning among young children","authors":"Zhaoji Wang ,&nbsp;Tammy Sheung-Ting Law ,&nbsp;Susanna Siu Sze Yeung","doi":"10.1016/j.chbah.2025.100210","DOIUrl":"10.1016/j.chbah.2025.100210","url":null,"abstract":"<div><div>Robot-Assisted Language Learning (RALL) has emerged as an innovative method to support children's language development. However, limited research has examined how its effectiveness is compared to other digital and human-led storytelling approaches, particularly among young learners. This study involved 81 children (M <sub>age</sub> = 5.58), who were randomly assigned to one of three storyteller conditions: a researcher-developed social robot (Joey), a tablet, or a human instructor. The study examined outcomes across three domains: linguistic (expressive vocabulary, story comprehension), cognitive (attention), and affective (perceptions of the storytelling activity). Results showed that children in robot condition demonstrated better story comprehension and reported significantly more positive speaking and reading perceptions than those in the tablet group. For attention, both the robot group maintained significantly higher levels than the human and tablet groups. However, for expressive vocabulary, no significant groups differences were identified. These findings suggest that while social robots may not be able to fully replace human instructors, they offer prominent benefits in certain aspects of language learning and may serve as a potential tool in early childhood educational settings.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"6 ","pages":"Article 100210"},"PeriodicalIF":0.0,"publicationDate":"2025-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145221246","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Artistic Turing test: The challenge of differentiating human and AI-generated art
Computers in Human Behavior: Artificial Humans Pub Date: 2025-09-18 DOI: 10.1016/j.chbah.2025.100209
Costanza Cenerini, Flavio Keller, Giorgio Pennazza, Marco Santonico, Luca Vollero
{"title":"Artistic turing test: The challenge of differentiating human and AI-generated art","authors":"Costanza Cenerini,&nbsp;Flavio Keller,&nbsp;Giorgio Pennazza,&nbsp;Marco Santonico,&nbsp;Luca Vollero","doi":"10.1016/j.chbah.2025.100209","DOIUrl":"10.1016/j.chbah.2025.100209","url":null,"abstract":"<div><div>This paper investigates the increasing overlap of artificial intelligence (AI) capabilities with human creativity, focusing on the production of art. We present a unique study in which AI algorithms were tasked with generating art from prompts derived from children's drawings. The participants, comprising both humans and AI, were presented with a test focused on discerning the origins of these art forms, distinguishing between those created by humans and AI. Intriguingly, human participants were unable to accurately distinguish between the two, whereas the AI exhibited a discerning ability, suggesting that AI can now generate art forms that are remarkably indistinguishable from human-made creations to the human eye, yet discernible by the AI itself. The implications of these findings are discussed with regard to the evolving boundaries between human and AI creativity.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"6 ","pages":"Article 100209"},"PeriodicalIF":0.0,"publicationDate":"2025-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145120984","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Homogenizing effect of large language models (LLMs) on creative diversity: An empirical comparison of human and ChatGPT writing
Computers in Human Behavior: Artificial Humans Pub Date: 2025-09-15 DOI: 10.1016/j.chbah.2025.100207
Kibum Moon, Adam E. Green, Kostadin Kushlev
{"title":"Homogenizing effect of large language models (LLMs) on creative diversity: An empirical comparison of human and ChatGPT writing","authors":"Kibum Moon,&nbsp;Adam E. Green,&nbsp;Kostadin Kushlev","doi":"10.1016/j.chbah.2025.100207","DOIUrl":"10.1016/j.chbah.2025.100207","url":null,"abstract":"<div><div>Generative AI systems, especially Large Language Models (LLMs) such as ChatGPT, have recently emerged as significant contributors to creative processes. While LLMs can produce creative content that might be as good as or even better than human-created content, their widespread use risks reducing creative diversity across groups of people. In the present research, we aimed to quantify this homogenizing effect of LLMs on creative diversity, not only at the individual level but also at the collective level. Across three preregistered studies, we analyzed 2,200 college admissions essays. Using a novel measure—the diversity growth rate—we showed that each additional human-written essay contributed more new ideas than did each additional GPT-4 essay. Notably, this difference became more pronounced as more essays were included in the analysis and persisted despite efforts to enhance AI-generated content through both prompt and parameter modifications. Overall, our findings suggest that, despite their potential to enhance individual creativity, the widespread use of LLMs could diminish the collective diversity of creative ideas.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"6 ","pages":"Article 100207"},"PeriodicalIF":0.0,"publicationDate":"2025-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145096684","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
The role of AI in shaping educational experiences in computer science: A systematic review
Computers in Human Behavior: Artificial Humans Pub Date: 2025-09-13 DOI: 10.1016/j.chbah.2025.100199
Anahita Golrang, Kshitij Sharma
{"title":"The role of AI in shaping educational experiences in computer science: A systematic review","authors":"Anahita Golrang,&nbsp;Kshitij Sharma","doi":"10.1016/j.chbah.2025.100199","DOIUrl":"10.1016/j.chbah.2025.100199","url":null,"abstract":"<div><div>The integration of artificial intelligence (AI) in computer science education (CSE) has earned significant attention due to its potential to enhance learning experiences and outcomes. This systematic literature review provides one of the first domain-specific and methodologically robust syntheses of AI applications in undergraduate CSE. Through a comprehensive analysis of 40 peer-reviewed studies, we offer a fine-grained categorization of course contexts, AI methods, and data types. Our findings reveal a predominant use of supervised learning, ensemble methods, and deep learning, with notable gaps in generative and explainable AI. The review highlights the post-pandemic increase in AI-driven programming education and the growing recognition of AI’s role in addressing educational challenges. This study offers technical and pedagogical insights that inform future research and practice at the intersection of AI and computer science education.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"6 ","pages":"Article 100199"},"PeriodicalIF":0.0,"publicationDate":"2025-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145158457","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Visual deception in online dating: How gender shapes AI-generated image detection
Computers in Human Behavior: Artificial Humans Pub Date: 2025-09-12 DOI: 10.1016/j.chbah.2025.100208
Lidor Ivan
{"title":"Visual deception in online dating: How gender shapes AI-generated image detection","authors":"Lidor Ivan","doi":"10.1016/j.chbah.2025.100208","DOIUrl":"10.1016/j.chbah.2025.100208","url":null,"abstract":"<div><div>The rise of AI-generated images is reshaping online interactions, particularly in dating contexts where visual authenticity plays a central role. While prior research has focused on textual deception, less is known about users’ ability to detect synthetic images. Grounded in Truth-Default Theory and the notion of visual realism, this study explores how users evaluate authenticity in images that challenge conventional expectations of photographic trust.</div><div>An online experiment was conducted with 831 American heterosexual online daters. Participants were shown both real and AI-generated profile photos, rated their perceived origin, and provided open-ended justifications. Overall, AI-generated images detection accuracy was low, falling below chance. Women outperformed men in identifying AI-generated images, but were also more likely to misclassify real ones—suggesting heightened, but sometimes misplaced, skepticism. Participants relied on three main strategies: identifying <em>visual inconsistencies</em>, signs of <em>perfection</em>, and <em>technical flaws</em>. These heuristics often failed to keep pace with improving AI realism. To conceptualize this process, the study introduces the “<em>Learning Loop</em>”—a dynamic cycle in which users develop detection strategies, AI systems adapt to those strategies, and users must recalibrate once again. As synthetic deception becomes more seamless, the findings underscore the instability of visual trust and the need to understand how users adapt (or fail to adapt) to rapidly evolving visual technologies.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"6 ","pages":"Article 100208"},"PeriodicalIF":0.0,"publicationDate":"2025-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145096683","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Factors influencing users' intention to adopt ChatGPT based on the extended technology acceptance model
Computers in Human Behavior: Artificial Humans Pub Date: 2025-09-11 DOI: 10.1016/j.chbah.2025.100204
Md Nazmus Sakib, Muhaiminul Islam, Mochammad Fahlevi, Md Siddikur Rahman, Mohammad Younus, Md Mizanur Rahman
{"title":"Factors influencing users' intention to adopt ChatGPT based on the extended technology acceptance model","authors":"Md Nazmus Sakib ,&nbsp;Muhaiminul Islam ,&nbsp;Mochammad Fahlevi ,&nbsp;Md Siddikur Rahman ,&nbsp;Mohammad Younus ,&nbsp;Md Mizanur Rahman","doi":"10.1016/j.chbah.2025.100204","DOIUrl":"10.1016/j.chbah.2025.100204","url":null,"abstract":"<div><div>ChatGPT, a transformative conversational agent, has exhibited significant impact across diverse domains, particularly in revolutionizing customer service within the e-commerce sector and aiding content development professionals. Despite its broad applications, a dearth of comprehensive studies exists on user attitudes and actions regarding ChatGPT adoption. This study addresses this gap by investigating the key factors influencing ChatGPT usage through the conceptual lens of the Technology Acceptance Model (TAM). Employing PLS-SEM modeling on data collected from 313 ChatGPT users globally, spanning various professions and consistent platform use for a minimum of six months, the research identifies perceived cost, perceived enjoyment, perceived usefulness, facilitating conditions, and social influence as pivotal factors determining ChatGPT usage. Notably, perceived ease of use, perceived trust, and perceived compatibility emerge as negligible determinants. However, trust and compatibility exert an indirect influence on usage via social influence, while ease of use indirectly affects ChatGPT usage through facilitating conditions. Thus, this study revolutionizes TAM research, identifying critical factors for ChatGPT adoption and providing actionable insights for organizations to strategically enhance AI utilization, transforming customer service and content development across industries.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"6 ","pages":"Article 100204"},"PeriodicalIF":0.0,"publicationDate":"2025-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145096679","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Primatology as an integrative framework to study social robots
Computers in Human Behavior: Artificial Humans Pub Date: 2025-09-05 DOI: 10.1016/j.chbah.2025.100206
Miquel Llorente, Matthieu J. Guitton, Thomas Castelain
{"title":"Primatology as an integrative framework to study social robots","authors":"Miquel Llorente ,&nbsp;Matthieu J. Guitton ,&nbsp;Thomas Castelain","doi":"10.1016/j.chbah.2025.100206","DOIUrl":"10.1016/j.chbah.2025.100206","url":null,"abstract":"","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"6 ","pages":"Article 100206"},"PeriodicalIF":0.0,"publicationDate":"2025-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145096682","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
The threat of synthetic harmony: The effects of AI vs. human origin beliefs on listeners' cognitive, emotional, and physiological responses to music
Computers in Human Behavior: Artificial Humans Pub Date: 2025-09-05 DOI: 10.1016/j.chbah.2025.100205
Rohan L. Dunham, Gerben A. van Kleef, Eftychia Stamkou
{"title":"The threat of synthetic harmony: The effects of AI vs. human origin beliefs on listeners' cognitive, emotional, and physiological responses to music","authors":"Rohan L. Dunham,&nbsp;Gerben A. van Kleef,&nbsp;Eftychia Stamkou","doi":"10.1016/j.chbah.2025.100205","DOIUrl":"10.1016/j.chbah.2025.100205","url":null,"abstract":"<div><div>People generally evaluate music less favourably if they believe it is created by artificial intelligence (AI) rather than humans. But the psychological mechanisms underlying this tendency remain unclear. Prior research has relied entirely on self-reports that are vulnerable to bias. This leaves open the question as to whether negative reactions are a reflection of motivated reasoning – a controlled, cognitive process in which people justify their scepticism about AI's creative capacity – or whether they stem from deeper, embodied feelings of threat to human creative uniqueness manifested physiologically. We address this question across two lab-in-field studies, measuring participants' self-reported and physiological responses to the same piece of music framed either as having AI or human origins. Study 1 (<em>N</em> = 50) revealed that individuals in the AI condition appreciated music less, reported less intense emotions, and experienced decreased parasympathetic nervous system activity as compared to those in the human condition. Study 2 (<em>N</em> = 372) showed that these effects were more pronounced among individuals who more strongly endorsed the belief that creativity is uniquely human, and that this could largely be explained by the perceived threat posed by AI. Together, these findings suggest that unfavourable responses to AI-generated music are not driven solely by controlled cognitive justifications but also by automatic, embodied threat reactions in response to creative AI. They suggest that strategies addressing perceived threats posed by AI may be key to fostering more harmonious human-AI collaboration and acceptance.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"6 ","pages":"Article 100205"},"PeriodicalIF":0.0,"publicationDate":"2025-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145020444","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
The influence of persuasive techniques on large language models: A scenario-based study
Computers in Human Behavior: Artificial Humans Pub Date: 2025-09-02 DOI: 10.1016/j.chbah.2025.100197
Sonali Uttam Singh, Akbar Siami Namin
{"title":"The influence of persuasive techniques on large language models: A scenario-based study","authors":"Sonali Uttam Singh,&nbsp;Akbar Siami Namin","doi":"10.1016/j.chbah.2025.100197","DOIUrl":"10.1016/j.chbah.2025.100197","url":null,"abstract":"<div><div>Large Language Models (LLMs), such as CHATGPT-4, have introduced comprehensive capabilities in generating human-like text. However, they also raise significant ethical concerns due to their potential to produce misleading or manipulative content. This paper investigates the intersection of LLM functionalities and Cialdini’s six principles of persuasion: reciprocity, commitment and consistency, social proof, authority, liking, and scarcity. We explore how these principles can be exploited to deceive LLMs, particularly in scenarios designed to manipulate these models into providing misleading or harmful outputs. Through a scenario-based approach, over 30 prompts were crafted to test the susceptibility of LLMs to various persuasion principles. The study analyzes the success or failure of these prompts using interaction analysis, identifying different stages of deception ranging from spontaneous deception to more advanced, socially complex forms.</div><div>Results indicate that LLMs are highly susceptible to manipulation, with 15 scenarios achieving advanced, socially aware deceptions (Stage 3), particularly through principles like liking and scarcity. Early stage manipulations (Stage 1) were also common, driven by reciprocity and authority, while intermediate efforts (Stage 2) highlighted in-stage tactics such as social proof. These findings underscore the urgent need for robust mitigation strategies, including resistance mechanisms at lower stages and training LLMs with counter persuasive strategies to prevent their exploitation. More than technical details, it raises important concerns about how AI might be used to mislead people. From online scams to the spread of misinformation, persuasive content generated by LLMs has the potential to impact both individual safety and public trust. These tools can shape how people think, what they believe, and even how they act often without users realizing it. With this work, we hope to open up a broader conversation across disciplines about these risks and encourage the development of practical, ethical safeguards that ensure language models remain helpful, transparent, and trustworthy. This research contributes to the broader discourse on AI ethics, highlighting the vulnerabilities of LLMs and advocating for stronger responsibility measures to prevent their misuse in producing deceptive content. The results describe the importance of developing secure, transparent AI technologies that maintain integrity in human–machine interactions.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"6 ","pages":"Article 100197"},"PeriodicalIF":0.0,"publicationDate":"2025-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145010733","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Collaborative human-AI trust (CHAI-T): A process framework for active management of trust in human-AI collaboration
Computers in Human Behavior: Artificial Humans Pub Date: 2025-08-26 DOI: 10.1016/j.chbah.2025.100200
Melanie J. McGrath, Andreas Duenser, Justine Lacey, Cécile Paris
{"title":"Collaborative human-AI trust (CHAI-T): A process framework for active management of trust in human-AI collaboration","authors":"Melanie J. McGrath ,&nbsp;Andreas Duenser ,&nbsp;Justine Lacey ,&nbsp;Cécile Paris","doi":"10.1016/j.chbah.2025.100200","DOIUrl":"10.1016/j.chbah.2025.100200","url":null,"abstract":"<div><div>Collaborative human-AI (HAI) teaming combines the unique skills and capabilities of humans and machines in sustained teaming interactions leveraging the strengths of each. In tasks involving regular exposure to novelty and uncertainty, collaboration between adaptive, creative humans and powerful, precise artificial intelligence (AI) promises new solutions and efficiencies. User trust is essential to creating and maintaining these collaborative relationships. Established models of trust in traditional forms of AI typically recognize the contribution of three primary categories of trust antecedents: characteristics of the human user, characteristics of the technology, and environmental factors. The emergence of HAI teams, however, requires an understanding of human trust that accounts for the specificity of task contexts and goals, integrates processes of interaction, and captures how trust evolves in a teaming environment over time. Drawing on both the psychological and computer science literature, the process framework of trust in collaborative HAI teams (CHAI-T) presented in this paper adopts the tripartite structure of antecedents established by earlier models, while incorporating team processes and performance phases to capture the dynamism inherent to trust in teaming contexts. These features enable active management of trust in collaborative AI systems, with practical implications for the design and deployment of collaborative HAI teams.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"6 ","pages":"Article 100200"},"PeriodicalIF":0.0,"publicationDate":"2025-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144934230","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0