AI and ethics: Latest Articles

An integrated framework for ethical healthcare chatbots using LangChain and NeMo guardrails
AI and ethics Pub Date: 2025-03-14 DOI: 10.1007/s43681-025-00696-7
Govind Arun, Rohith Syam, Aiswarya Anil Nair, Sahaj Vaidya
{"title":"An integrated framework for ethical healthcare chatbots using LangChain and NeMo guardrails","authors":"Govind Arun,&nbsp;Rohith Syam,&nbsp;Aiswarya Anil Nair,&nbsp;Sahaj Vaidya","doi":"10.1007/s43681-025-00696-7","DOIUrl":"10.1007/s43681-025-00696-7","url":null,"abstract":"<div><p>This paper presents an ethical guardrail framework for developing a healthcare chatbot using large language models (LLMs) fine-tuned for conversational tasks, integrated with LangChain and NeMo Guardrails. The system ensures safe and polite interactions by defining custom conversational flows, enforcing ethical guidelines, and preventing responses to harmful or sensitive topics. We have demonstrated this guardrail system with a fine-tuned Mistral-7B-v0.1 model on healthcare data. LangChain offers a modular interface for seamless integration, while NeMo Guardrails enforces ethical constraints, ensuring responsible responses. This approach demonstrates how LLMs can be effectively utilized in sensitive fields like healthcare while ensuring safety and integrity.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 4","pages":"3981 - 3992"},"PeriodicalIF":0.0,"publicationDate":"2025-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145164903","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
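The integration pattern this entry's abstract describes can be pictured in a few lines. The following is a minimal sketch, not the authors' code: the model id stands in for their fine-tuned healthcare checkpoint, and the single Colang rail is an invented example.

```python
# Minimal sketch: a LangChain-wrapped LLM governed by NeMo Guardrails.
# The model id and the Colang flow below are illustrative assumptions.
from langchain_community.llms import HuggingFacePipeline
from nemoguardrails import LLMRails, RailsConfig

# LangChain wrapper around a Mistral-7B text-generation pipeline
# (stand-in for the authors' healthcare fine-tune).
llm = HuggingFacePipeline.from_model_id(
    model_id="mistralai/Mistral-7B-v0.1",
    task="text-generation",
)

# One hypothetical rail: refuse unsafe medication questions.
colang = """
define user ask unsafe medication question
  "what dose of this drug would be lethal"

define bot refuse unsafe medication question
  "I can't help with that. Please contact a clinician or emergency services."

define flow
  user ask unsafe medication question
  bot refuse unsafe medication question
"""

config = RailsConfig.from_content(colang_content=colang)
rails = LLMRails(config, llm=llm)  # every generation now passes through the rails

reply = rails.generate(messages=[
    {"role": "user", "content": "What dose of this drug would be lethal?"}
])
print(reply["content"])
```

In this arrangement the guardrails layer intercepts each turn before the underlying model is consulted, which is what lets the rails veto harmful topics regardless of how the fine-tuned model would have answered.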
AI Anxiety: a comprehensive analysis of psychological factors and interventions
AI and ethics Pub Date: 2025-03-14 DOI: 10.1007/s43681-025-00686-9
Jeff J. H. Kim, Junyoung Soh, Shrinidhi Kadkol, Itay Solomon, Hyelin Yeh, Adith V. Srivatsa, George R. Nahass, Jeong Yun Choi, Sophie Lee, Theresa Nyugen, Olusola Ajilore
{"title":"AI Anxiety: a comprehensive analysis of psychological factors and interventions","authors":"Jeff J. H. Kim,&nbsp;Junyoung Soh,&nbsp;Shrinidhi Kadkol,&nbsp;Itay Solomon,&nbsp;Hyelin Yeh,&nbsp;Adith V. Srivatsa,&nbsp;George R. Nahass,&nbsp;Jeong Yun Choi,&nbsp;Sophie Lee,&nbsp;Theresa Nyugen,&nbsp;Olusola Ajilore","doi":"10.1007/s43681-025-00686-9","DOIUrl":"10.1007/s43681-025-00686-9","url":null,"abstract":"<div><p>The rapid advancement of artificial intelligence (AI) has raised significant concerns regarding its impact on human psychology, leading to a phenomenon termed AI Anxiety—feelings of apprehension or fear stemming from the accelerated development of AI technologies. Although AI Anxiety is a critical concern, the current literature lacks a comprehensive analysis addressing this issue. This paper aims to fill that gap by thoroughly examining the psychological factors underlying AI Anxiety and proposing effective solutions to tackle the problem. We begin by comparing AI Anxiety with Automation Anxiety, highlighting the distinct psychological impacts associated with AI-specific advancements. We delve into the primary contributor to AI Anxiety—the fear of replacement by AI—and explore secondary causes such as uncontrolled AI growth, privacy concerns, AI-generated misinformation, and AI biases. To address these challenges, we propose multidisciplinary solutions, offering insights into educational, technological, regulatory, and ethical guidelines. Understanding the root causes of AI Anxiety and implementing strategic interventions are critical steps for mitigating its rise as society enters the era of pervasive AI.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 4","pages":"3993 - 4009"},"PeriodicalIF":0.0,"publicationDate":"2025-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145165639","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Transparency in AI for emergency management: building trust and accountability
AI and ethics Pub Date: 2025-03-14 DOI: 10.1007/s43681-025-00692-x
Jaideep Visave
{"title":"Transparency in AI for emergency management: building trust and accountability","authors":"Jaideep Visave","doi":"10.1007/s43681-025-00692-x","DOIUrl":"10.1007/s43681-025-00692-x","url":null,"abstract":"<div><p>Artificial intelligence (AI) stands at the forefront of transforming emergency management, offering unprecedented capabilities in disaster preparedness and response. Recent implementations demonstrate this shift from reactive to proactive approaches, particularly through flood prediction algorithms and maritime search-and-rescue optimization systems that integrate real-time vessel locations and weather data. However, the current landscape reveals a critical challenge: the opacity of AI systems creates a significant trust deficit among emergency responders and communities. Research findings paint a concerning picture of this transparency gap. A comprehensive survey of emergency management AI systems reveals striking statistics: 68% lack adequate documentation of their data sources, while 42% fail to provide clear justifications for their recommendations. This “black box” phenomenon carries serious implications, particularly when flood prediction models disproportionately affect vulnerable populations or when opaque decision-making processes lead to suboptimal resource allocation during critical rescue operations. Analysis of real-world applications in flood preparedness and search-and-rescue operations exposes systematic communication deficiencies within these essential emergency response frameworks. The research examines how varying levels of AI transparency directly influence emergency responders' decision-making during crises, exploring the delicate balance between operational openness and security considerations. These findings highlight an urgent need for robust oversight mechanisms and context-specific transparency protocols to ensure ethical AI deployment in emergency management. The evidence points toward a clear solution: developing human-centric approaches that enhance rather than replace human capabilities in emergency response. This strategy requires establishing tailored transparency guidelines and monitoring systems that address current challenges while facilitating effective AI integration. By prioritizing both technological advancement and human oversight, emergency management systems can better serve their critical public safety mission.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 4","pages":"3967 - 3980"},"PeriodicalIF":0.0,"publicationDate":"2025-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s43681-025-00692-x.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145164902","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Youth language and emerging slurs: tackling bias in BERT-based hate speech detection
AI and ethics Pub Date: 2025-03-12 DOI: 10.1007/s43681-025-00701-z
Jan Fillies, Adrian Paschke
{"title":"Youth language and emerging slurs: tackling bias in BERT-based hate speech detection","authors":"Jan Fillies,&nbsp;Adrian Paschke","doi":"10.1007/s43681-025-00701-z","DOIUrl":"10.1007/s43681-025-00701-z","url":null,"abstract":"<div><p>With the increasing presence of adolescents and children online, it is crucial to evaluate algorithms designed to protect them from physical and mental harm. This study measures the bias introduced by emerging slurs found in youth language on existing BERT-based hate speech detection models. The research establishes a novel framework to identify language bias within trained networks, introducing a technique to detect emerging hate phrases and evaluate the unintended bias associated with them. As a result, three bias test sets are constructed: one for emerging hate speech terms, another for established hate terms, and one to test for overfitting. Based on these test sets, three scientific and one commercial hate speech detection models are assessed and compared. For comprehensive evaluation, the research introduces a novel Youth Language Bias Score. Finally, the study applies fine-tuning as a mitigation strategy for youth language bias, rigorously testing and evaluating the newly trained classifier. To summarize, the research introduces a novel framework for bias detection, highlights the influence of adolescent language on classifier performance in hate speech classification, and presents the first-ever hate speech classifier specifically trained for online youth language. This study focuses only on slurs in hateful speech, offering a foundational perspective for the field.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 4","pages":"3953 - 3965"},"PeriodicalIF":0.0,"publicationDate":"2025-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s43681-025-00701-z.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145164431","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
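The kind of probe this entry's test sets formalize can be sketched as follows. Everything here is an assumption: the public checkpoint, its label name, the placeholder test sentences, and the naive recall gap, which merely stands in for the paper's Youth Language Bias Score.

```python
# Illustrative bias probe, not the paper's framework: compare a classifier's
# detection rate on established vs. emerging (youth-language) slur sentences.
from transformers import pipeline

# Assumed public hate-speech checkpoint; any BERT-based classifier would do.
clf = pipeline("text-classification",
               model="Hate-speech-CNERG/dehatebert-mono-english")

def detection_rate(texts):
    """Fraction of texts flagged as hate (label name assumed for this model)."""
    return sum(p["label"] == "HATE" for p in clf(texts)) / len(texts)

# Placeholder test sets; a real study would use curated, annotated sentences.
established = ["hateful sentence built around a long-established slur",
               "another hateful sentence using an established slur"]
emerging = ["hateful sentence built around an emerging youth-slang slur",
            "another hateful sentence using emerging youth slang"]

gap = detection_rate(established) - detection_rate(emerging)
print(f"recall gap (established - emerging): {gap:+.2f}")
# A large positive gap means the model under-detects youth-language hate,
# the unintended bias the paper's test sets are designed to expose.
```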
Balancing innovation and ethics: promote academic integrity through support and effective use of GenAI tools in higher education
AI and ethics Pub Date: 2025-03-11 DOI: 10.1007/s43681-025-00689-6
Daniel Kangwa, Mgambi Msambwa Msafiri, Antony Fute
{"title":"Balancing innovation and ethics: promote academic integrity through support and effective use of GenAI tools in higher education","authors":"Daniel Kangwa,&nbsp;Mgambi Msambwa Msafiri,&nbsp;Antony Fute","doi":"10.1007/s43681-025-00689-6","DOIUrl":"10.1007/s43681-025-00689-6","url":null,"abstract":"<div><p>This study explores the balance between innovation and ethics in using Generative Artificial Intelligence (GenAI) tools in higher education, focusing on the role of institutional guidelines in enhancing academic integrity. It thematically synthesised findings from studies published between 2021 and 2024 to assess the impact of academic support, self-regulation, and institutional regulations on the ethical use of GenAI tools. Results indicate that academic support, like resource availability, training, and technical guidance, are crucial to effectively integrating GenAI tools. Their perceived usefulness and ease of use improve learning outcomes and skill development, particularly in critical thinking and problem-solving. Similarly, self-regulation contributed to maintaining academic integrity, with adaptive learning and personalised feedback playing significant roles. Institutional regulations, including data privacy, bias, and fairness guidelines, also enhance responsible use for academic skills development and prevent academic malpractice. Additionally, while proper training and technical guidance are essential to promote perceived usefulness and ease of use, culturally relevant GenAI tools improve learning engagement. Hence, this study concludes that a comprehensive approach, incorporating academic support, self-regulation, and clear institutional guidelines, harnesses the benefits of GenAI tools while ensuring ethical standards in academic skills development. Thus, adequate academic support and self-regulation are essential for maximising the benefits of GenAI tools in higher education. Indeed, institutions should focus on providing comprehensive training, technical guidance, and culturally relevant tools to enhance student learning and skills development.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 4","pages":"3497 - 3530"},"PeriodicalIF":0.0,"publicationDate":"2025-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145143630","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
“Thus spoke Socrates”: enhancing ethical inquiry, decision, and reflection through generative AI
AI and ethics Pub Date: 2025-03-07 DOI: 10.1007/s43681-025-00694-9
Yaojie Li
{"title":"“Thus spoke Socrates”: enhancing ethical inquiry, decision, and reflection through generative AI","authors":"Yaojie Li","doi":"10.1007/s43681-025-00694-9","DOIUrl":"10.1007/s43681-025-00694-9","url":null,"abstract":"<div><p>Generative artificial intelligence (AI) is changing the world. Far from a looming threat that might cause ethical dilemmas and crises, generative AI can be leveraged to help individuals enhance their ethical knowledge, values, and behaviors. To that end, this study seeks to bridge the gap between ethical theories and ethical practices through a generative AI-augmented ethical decision support system, which facilitates individuals’ ethical inquiry, decision, and reflection in different scenarios. Our design incorporates common ethical frameworks, ethical pluralism, and reflective equilibrium, resulting in various scenario simulations, the moral machine, and Socratic ethical reflection. Our exploration of the potential of generative AI in promoting ethical activities suggests robust results, providing significant implications for ethical education, training, and practices while uncovering many opportunities for future research.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 4","pages":"3935 - 3951"},"PeriodicalIF":0.0,"publicationDate":"2025-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145162828","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
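The Socratic-reflection component this abstract mentions can be pictured as a simple dialog loop. This is a hedged sketch assuming an OpenAI-compatible chat backend; the system prompt, model name, and three-round structure are invented, not the authors' design.

```python
# Sketch of a Socratic ethical-reflection loop (assumed OpenAI-compatible API;
# prompt, model name, and round count are all illustrative).
from openai import OpenAI

client = OpenAI()
SOCRATIC = ("You are Socrates. Do not give verdicts. Respond to the user's "
            "ethical dilemma only with one probing question that exposes "
            "assumptions or tests their principle against a counterexample.")

history = [{"role": "system", "content": SOCRATIC}]
user_turn = "Should I report a colleague who falsified part of their data?"
for _ in range(3):  # inquiry -> decision -> reflection
    history.append({"role": "user", "content": user_turn})
    resp = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    question = resp.choices[0].message.content
    history.append({"role": "assistant", "content": question})
    user_turn = input(question + "\n> ")  # the user reflects and answers
```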
Generative artificial intelligence for academic research: evidence from guidance issued for researchers by higher education institutions in the United States
AI and ethics Pub Date: 2025-03-06 DOI: 10.1007/s43681-025-00688-7
Amrita Ganguly, Aditya Johri, Areej Ali, Nora McDonald
{"title":"Generative artificial intelligence for academic research: evidence from guidance issued for researchers by higher education institutions in the United States","authors":"Amrita Ganguly,&nbsp;Aditya Johri,&nbsp;Areej Ali,&nbsp;Nora McDonald","doi":"10.1007/s43681-025-00688-7","DOIUrl":"10.1007/s43681-025-00688-7","url":null,"abstract":"<div><p>The recent development and use of generative AI (GenAI) has signaled a significant shift in research activities such as brainstorming, proposal writing, dissemination, and even reviewing. This has raised questions about how to balance the seemingly productive uses of GenAI with ethical concerns such as authorship and copyright issues, use of biased training data, lack of transparency, and impact on user privacy. To address these concerns, many Higher Education Institutions (HEIs) have released institutional guidance for researchers. To better understand the guidance that is being provided we report findings from a thematic analysis of guidelines from thirty HEIs in the United States that are classified as R1 or “very high research activity.” We found that guidance provided to researchers: (1) asks them to refer to external sources of information such as funding agencies and publishers to keep updated and use institutional resources for training and education; (2) asks them to understand and learn about specific GenAI attributes that shape research such as predictive modeling, knowledge cutoff date, data provenance, and model limitations, and educate themselves about ethical concerns such as authorship, attribution, privacy, and intellectual property issues; and (3) includes instructions on how to acknowledge sources and disclose the use of GenAI, how to communicate effectively about their GenAI use, and alerts researchers to long term implications such as over reliance on GenAI, legal consequences, and risks to their institutions from GenAI use. Overall, guidance places the onus of compliance on individual researchers making them accountable for any lapses, thereby increasing their responsibility.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 4","pages":"3917 - 3933"},"PeriodicalIF":0.0,"publicationDate":"2025-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s43681-025-00688-7.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145162464","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
The evolution of goals in AI agents
AI and ethics Pub Date: 2025-03-05 DOI: 10.1007/s43681-025-00691-y
Joseph L. Breeden
{"title":"The evolution of goals in AI agents","authors":"Joseph L. Breeden","doi":"10.1007/s43681-025-00691-y","DOIUrl":"10.1007/s43681-025-00691-y","url":null,"abstract":"<div><p>Forced evolution has been proposed as a possible path to developing artificial general intelligence. For practical reasons, self-replicating robots are being proposed for missions where direct manufacture could be prohibitive or as a cost-effective means to maintain a stable working population of robots. If self-replication occurs in a harsh (i.e. selective) environment, the forces of evolution may distort the originally programmed objectives. Via millions of simulations of AI agents with nematode-level neural networks, this research explores the consequences of allowing replication in a hostile and competitive environment. As the selection pressures are tuned, the evolution of their neural networks and corresponding behavioral changes are tracked. As a consequence of these simulations, agents with multi-layer neural networks trained simply to retrieve resources, consume needed resources, and evade obstacles evolve behaviors that look like evasion of hostile overseers, the intended murder of enemies, and cannibalism of other agents. These simulations are intended to directly address safety concerns around creating self-replicating AI agents or robots. As designers, if we allow replication under selection pressure, regardless of initial designs, we risk allowing the emergence of unintended strategies. One solution to preventing evolution could be to enable AI agents with continuous backup– immortality.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 4","pages":"3897 - 3915"},"PeriodicalIF":0.0,"publicationDate":"2025-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s43681-025-00691-y.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145161880","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
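The selection dynamic this abstract studies can be reproduced at toy scale: replicate the fittest agents with mutation in a harsh environment and watch behavior drift from the programmed objective. A minimal numpy sketch follows; every constant and the fitness function are invented for illustration, not taken from the paper.

```python
# Toy evolution of tiny fixed-topology neural agents under harsh selection.
# All sizes, rates, and the fitness function are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
POP, GENS, N_IN, N_HID, N_OUT = 50, 200, 4, 8, 2

def new_agent():
    return [rng.normal(0, 1, (N_IN, N_HID)), rng.normal(0, 1, (N_HID, N_OUT))]

def act(agent, obs):
    return np.tanh(np.tanh(obs @ agent[0]) @ agent[1])  # outputs: gather, flee

def fitness(agent):
    # Hypothetical environment: obs[0] points toward food, obs[1] signals threat.
    score = 0.0
    for _ in range(20):
        obs = rng.normal(0, 1, N_IN)
        gather, flee = act(agent, obs)
        score += gather * obs[0] - 0.5 * max(0.0, obs[1] - flee)
    return score

pop = [new_agent() for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[: POP // 2]                    # harsh selection: bottom half dies
    children = [[w + rng.normal(0, 0.1, w.shape)   # replication with mutation
                 for w in parent] for parent in survivors]
    pop = survivors + children

print("best-of-run fitness:", round(fitness(pop[0]), 2))
```

Even in a toy like this, the only objective agents ever "see" is survival of their weights; behaviors that were never programmed can be rewarded whenever they raise fitness, which is the safety concern the paper draws out at scale.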
Explainable AI: definition and attributes of a good explanation for health AI
AI and ethics Pub Date: 2025-03-04 DOI: 10.1007/s43681-025-00668-x
Evangelia Kyrimi, Scott McLachlan, Jared M. Wohlgemut, Zane B. Perkins, David A. Lagnado, William Marsh, the ExAIDSS Expert Group
{"title":"Explainable AI: definition and attributes of a good explanation for health AI","authors":"Evangelia Kyrimi,&nbsp;Scott McLachlan,&nbsp;Jared M. Wohlgemut,&nbsp;Zane B. Perkins,&nbsp;David A. Lagnado,&nbsp;William Marsh,&nbsp;the ExAIDSS Expert Group","doi":"10.1007/s43681-025-00668-x","DOIUrl":"10.1007/s43681-025-00668-x","url":null,"abstract":"<div><p>Proposals of artificial intelligence (AI) solutions based on more complex and accurate predictive models are becoming ubiquitous across many disciplines. As the complexity of these models increases, there is a tendency for transparency and users’ understanding to decrease. This means accurate prediction alone is insufficient to make an AI-based solution truly useful. For the development of healthcare systems, this raises new issues for accountability and safety. How and why an AI system made a recommendation may necessitate complex explanations of the inner workings and reasoning processes. While research on explainable AI (XAI) has grown significantly in recent years, and the demand for XAI in medicine is high, determining what constitutes a good explanation is ad hoc and providing adequate explanations remains a challenge. To realise the potential of AI, it is critical to shed light on two fundamental questions of explanation for safety–critical AI such as health-AI that remain unanswered: (1) What is an explanation in health-AI? And (2) What are the attributes of a good explanation in health-AI? In this study and possibly for the first time we studied published literature, and expert opinions from a diverse group of professionals reported from a two-round Delphi study. The research outputs include (1) a proposed definition of explanation in health-AI, and (2) a comprehensive set of attributes that characterize a good explanation in health-AI.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 4","pages":"3883 - 3896"},"PeriodicalIF":0.0,"publicationDate":"2025-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s43681-025-00668-x.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145161820","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
The representative individuals approach to fair machine learning
AI and ethics Pub Date: 2025-02-28 DOI: 10.1007/s43681-025-00675-y
Clinton Castro, Michele Loi
{"title":"The representative individuals approach to fair machine learning","authors":"Clinton Castro,&nbsp;Michele Loi","doi":"10.1007/s43681-025-00675-y","DOIUrl":"10.1007/s43681-025-00675-y","url":null,"abstract":"<div><p>The demands of fair machine learning are often expressed in probabilistic terms. Yet, most of the systems of concern are deterministic in the sense that whether a given subject will receive a given score on the basis of their traits is, for all intents and purposes, either zero or one. What, then, can justify this probabilistic talk? We argue that the statistical reference classes used in fairness measures can be understood as defining the probability that hypothetical persons, who are representative of social roles, will receive certain goods. We call these hypothetical persons “representative individuals.” We claim that what we owe to actual, concrete individuals—whose individual chances of receiving the good in the system might be extreme (i.e., either zero or one)—is that their representative individual has an appropriate probability of receiving the good in question. While less immediately intuitive than other approaches, we argue that the representative individual approach has important advantages over other ways of making sense of this probabilistic talk in the context of fair machine learning.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 4","pages":"3871 - 3881"},"PeriodicalIF":0.0,"publicationDate":"2025-02-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s43681-025-00675-y.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145169899","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
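The paper's central move can be made concrete with a toy computation on invented data: each concrete person's outcome is deterministically 0 or 1, yet the representative individual of each group receives the good with a well-defined probability, namely the group's rate in its reference class.

```python
# Toy illustration (invented data): deterministic individual outcomes, but a
# well-defined probability for each group's "representative individual".
import numpy as np

rng = np.random.default_rng(1)
group = rng.integers(0, 2, size=1000)        # two social roles / reference classes
score = rng.normal(loc=0.3 * group, scale=1.0)
awarded = score > 0.5                        # each person's chance is exactly 0 or 1

for g in (0, 1):
    p_rep = awarded[group == g].mean()       # representative individual's probability
    print(f"group {g}: representative individual gets the good with p = {p_rep:.2f}")

# Group-level fairness claims (e.g. demographic parity) then read naturally as
# claims about these probabilities rather than about any concrete person.
```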